# API details.
#
# - Technical notes https://www.flir.com/support/products/blackfly-s-usb3#Overview
# - http://softwareservices.ptgrey.com/Spinnaker/latest/index.html
# - https://www.flir.com/support-center/iis/machine-vision/knowledge-base/relationship-between-color-processing-and-number-of-bits-per-pixel/

#hide
from nbdev.showdoc import *

#export
from FLIRCam.core import * # hide from itself

#export
import time
import cv2
import PySpin
from threading import Thread
import sys

# import the Queue class from Python 3
if sys.version_info >= (3, 0):
    from queue import Queue
# otherwise, import the Queue class for Python 2.7
else:
    from Queue import Queue

# +
# def print_device_info(nodemap):
#     """
#     This function prints the device information of the camera from the transport
#     layer; please see NodeMapInfo example for more in-depth comments on printing
#     device information from the nodemap.
#
#     - nodemap: Transport layer device nodemap.
#     - nodemap: INodeMap
#     - returns: True if successful, False otherwise.
# """ # # print('*** DEVICE INFORMATION ***\n') # # try: # result = True # node_device_information = PySpin.CCategoryPtr(nodemap.GetNode('DeviceInformation')) # # if PySpin.IsAvailable(node_device_information) and PySpin.IsReadable(node_device_information): # features = node_device_information.GetFeatures() # for feature in features: # node_feature = PySpin.CValuePtr(feature) # print('%s: %s' % (node_feature.GetName(), # node_feature.ToString() if PySpin.IsReadable(node_feature) else 'Node not readable')) # # else: # print('Device control information not available.') # # except PySpin.SpinnakerException as ex: # print('Error: %s' % ex) # return False # # return result # + # # Retrieve singleton reference to system object # system = PySpin.System.GetInstance() # # # Get current library version # version = system.GetLibraryVersion() # print('Library version: %d.%d.%d.%d' % (version.major, version.minor, version.type, version.build)) # # # Retrieve list of cameras from the system # cam_list = system.GetCameras() # # num_cameras = cam_list.GetSize() # nodemap_tldevice = cam_list[0].GetTLDeviceNodeMap() # # print_device_info(nodemap_tldevice) # # Release system instance # system.ReleaseInstance() # - #export class USBVideoStream: """ Create threaded USB3 video stream for 1 or more FLIR USB3 cameras, with max queue size Pass optional transform code to alter the image before queueing""" def __init__(self, transform=None, queue_size=10): # initialize the file video stream along with the boolean # used to indicate if the thread should be stopped or not self.stopped = False self.transform = transform # initialize the queue used to store frames read from the video file self.Q = Queue(maxsize=queue_size) # intialize thread self.thread = Thread(target=self.update, args=()) self.thread.daemon = True self.system = PySpin.System.GetInstance() # Retrieve singleton reference to system object version = self.system.GetLibraryVersion() # Get current library version print('Library version: %d.%d.%d.%d' % (version.major, version.minor, version.type, version.build)) self.camlist = self.system.GetCameras() # Retrieve list of cameras from the system # num_cameras = # for cam in cam_list: cam.DeInit() # system.ReleaseInstance() print(f'Number of cameras detected: {self.camlist.GetSize()}') def __del__(self): self.camlist.Clear() self.system.ReleaseInstance() def set_cam_mode_continuous(self, i, cam): """For all cameras set the acquisition mode set to continuous""" node_acquisition_mode = PySpin.CEnumerationPtr(cam.GetNodeMap().GetNode('AcquisitionMode')) if not PySpin.IsAvailable(node_acquisition_mode) or not PySpin.IsWritable(node_acquisition_mode): print('Unable to set acquisition mode to continuous . Aborting... \n') return False node_acquisition_mode_continuous = node_acquisition_mode.GetEntryByName('Continuous') if not PySpin.IsAvailable(node_acquisition_mode_continuous) or not PySpin.IsReadable( node_acquisition_mode_continuous): print('Unable to set acquisition mode to continuous , Aborting... \n') return False acquisition_mode_continuous = node_acquisition_mode_continuous.GetValue() node_acquisition_mode.SetIntValue(acquisition_mode_continuous) print('Camera acquisition mode set to continuous...') return True def start(self): """ For all cameras, Initialise and set camera mode to continuous. 
and begin acquisisition """ for i, cam in enumerate(self.camlist): cam.Init() # intialize cameras for i, cam in enumerate(self.camlist): # Set acquisition mode to continuous self.set_cam_mode_continuous(i, cam) # Begin acquiring images cam.BeginAcquisition() print('Camera %d started acquiring images...' % i) print() # start a thread to read frames from the file video stream self.thread.start() return self def update(self): # keep looping infinitely while True: # if the thread indicator variable is set, stop the # thread if self.stopped: break # otherwise, ensure the queue has room in it if not self.Q.full(): for i, cam in enumerate(self.camlist): try: frame = cam.GetNextImage() image_converted = frame.Convert(PySpin.PixelFormat_BGR8) # image_converted = frame.Convert(PySpin.PixelFormat_Mono8, PySpin.HQ_LINEAR) image_converted = image_converted.GetNDArray() # print(f'Image retrieved from cam {i}, shape = {image_converted.shape}') frame.Release() # Release image producer/consumer queues since this one was generally # idle grabbing frames. if self.transform: image_converted = self.transform(image_converted) # add the frame to the queue self.Q.put({'cam':i, 'image':image_converted}) except PySpin.SpinnakerException as ex: print('Error: %s' % ex) self.stopped = True else: time.sleep(0.1) # Rest for 100ms, we have a full queue # self.stream.release() for cam in self.camlist: # End acquisition cam.EndAcquisition() cam.DeInit() def read(self): """Return next frame in the queue""" return self.Q.get() def read_wait(self): """Wait for and return next frame in the queue""" try: return self.Q.get(block=True,timeout=1) except: return None # Insufficient to have consumer use while(more()) which does # not take into account if the producer has reached end of # file stream. def running(self): """ Test if thread id running""" return self.more() or not self.stopped def more(self): """Return True if there are still frames in the queue. If stream is not stopped, try to wait a moment""" tries = 0 while self.Q.qsize() == 0 and not self.stopped and tries < 5: time.sleep(0.1) tries += 1 return self.Q.qsize() > 1 def queue_size(self): """Return the queue size, ie number of frames""" return self.Q.qsize() def stop(self): """indicate that the thread should be stopped""" self.stopped = True # wait until stream resources are released (producer thread might be still grabbing frame) self.thread.join() show_doc(USBVideoStream.set_cam_mode_continuous) show_doc(USBVideoStream.start) show_doc(USBVideoStream.more) show_doc(USBVideoStream.read) show_doc(USBVideoStream.read_wait) show_doc(USBVideoStream.queue_size) show_doc(USBVideoStream.stop) # ## Show FLIR camera # + from imutils.video import FPS import imutils def show_FLIRcam(width=1000, height=750): fvs = USBVideoStream().start() fps = FPS().start() for i in range(50): img = fvs.read_wait() if img is not None: img = imutils.resize(img['image'], width=width, height=height) cv2.imshow('FLIR cam', img) if cv2.waitKey(10) == 27: break # esc to quit fps.update() fps.stop() fvs.stop() del fvs print("[INFO] approx. 
FPS: {:.2f}".format(fps.fps())) cv2.destroyAllWindows() show_FLIRcam() # - # ## Test Example Usage, recoding a movie # + from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter from imutils.video import FPS import imutils import time import cv2 def record_FLIRcam(mirror=False, width=1000, height=750): fvs = USBVideoStream().start() fps = FPS().start() with FFMPEG_VideoWriter('out.mp4', (width,height), 5.0) as video: for i in range(50): frame = fvs.read_wait() if frame is not None: img = imutils.resize(frame['image'], width=width, height=height) cv2.putText(img, f"Queue Size: {fvs.Q.qsize()}, Camera: {frame['cam']}, Shape: {img.shape}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2) print(':', end='', flush=True) video.write_frame(img) cv2.imshow('FLIR cam', img) if cv2.waitKey(10) == 27: break # esc to quit fps.update() fps.stop() fvs.stop() print() print("[INFO] elasped time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) # Not FPS will be slower due to video writing del fvs cv2.destroyAllWindows() record_FLIRcam(mirror=True) # - # ## Show result video import moviepy.editor as mvp mvp.ipython_display('out.mp4', loop=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Import Modules # # This notebook is an extension to `snowpack.ipynb` which describes how the raw data was downloaded, formatted, and cleaned from the Department of Agriculture's API. Please see the main notebook for EDA, modeling, feature selection, etc. import os, urllib, re import pandas as pd from tqdm import tqdm # There are also some custom functions contained in `./functions/snowpack.py` that we will want to import import functions.snowpack as sp # # Download Raw Data # First we need to get the meta data of site names so we can query the Department of Agriculture's API. We'll also define the list of variables needed for the `eCodes` parameter. The abbreviations used by the API are: # # * T = temperature (average, maximum, minimum, observered), $^\circ$F # * PREC = yearly accumulated precipitation at start of day, inches # * SNWD = snow depth at start of day, inches # * WTEQ = Depth of water that would theoretically result if the entire snowpack were melted instantaneously, inches # * STO:x = soil temperature observed at depth x inches, $^\circ$F # * SMS:x = volumetric soil moisture at depth x inches, percent # + os.chdir('../data') ntwk = pd.read_csv('snowpack-meta.csv') eCodes = ['TAVG', 'TMAX', 'TMIN', 'TOBS', 'PREC', 'SNWD', 'WTEQ','STO','SMS'] eCodes = [s + '::value' for s in eCodes] eCodes = [s.replace('STO', 'STO:-2:value,STO:-8:value,STO:-20:value') for s in eCodes] eCodes = [s.replace('SMS', 'SMS:-2:value,SMS:-8:value,SMS:-20:value') for s in eCodes] eCodes = ','.join(eCodes) # - # Now we've gathered all the information needed to read in data from website API for each row in snow-meta.csv. Due to the large amount of data, this will take a **long** time (~30 minutes) to run, so only do so if you actually want to get the raw data onto your computer. 
for row in tqdm(ntwk.itertuples()): temp = sp.ReadSnowData(row.state, row.site_id, eCodes) if temp is not None: if row.Index != 1: master = master.append(temp) else: master = temp else: continue # # Data Cleaning and Export # # * Remove `NA`s # * Rename columns to shorter abbreviations # * Add meta data (latitude, longitude, elevation, etc.) # * Parse dates into additional formats clean = master.dropna() clean.rename(columns = {'temp avg (degf)':'tAvg', 'temp max (degf)':'tMax', 'temp min (degf)':'tMin', 'temp (degf)':'t', 'precip accum (in)':'precipAccum', 'snow (in)':'snow', 'snow water equiv (in)':'waterEquiv', 'soil temp 2in (degf)':'tSoil2', 'soil temp 8in (degf)':'tSoil8'}, inplace=True) meta = ntwk[['state', 'site_name', 'latitude', 'longitude', 'elev_ft', 'county', 'huc', 'site_id']] meta.rename(columns = {'site_name':'name', 'latitude':'lat', 'longitude':'long', 'elev_ft':'elev', 'site_id':'id'}, inplace=True) clean = pd.merge(clean, meta) clean = clean[clean['snow'].notnull()] clean.insert(loc=0, column='md', value=clean.date.str.extract(r'((?<=-)\d{2}-\d{2})', expand=False)) clean.insert(loc=0, column='day', value=clean.date.str.extract(r'((?<=-)\d{2}$)', expand=False)) clean.insert(loc=0, column='month', value=clean.date.str.extract(r'((?<=-)\d{2}(?=-))', expand=False)) clean.insert(loc=0, column='year', value=clean.date.str.extract(r'(\d{4}(?=-))', expand=False)) # Export a copy of both the original and cleaned data master.to('snow-raw.csv', index=False) clean.to_csv('snow-clean.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Defending the model # # In this section we will look at defences for the ML model(s) # + from art.attacks.evasion import DecisionTreeAttack, HopSkipJump from art.estimators.classification import SklearnClassifier from art.estimators.classification.scikitlearn import ScikitlearnDecisionTreeClassifier from models import Model from utils import compare_data, parse_df_for_pcap_validity, get_training_data, get_testing_data import numpy as np import pandas as pd # - # ## Defence: Training with adversarial samples # generate white box samples attack_data_pcap = "datasets/AdversaryPingFlood.pcap" model = Model(None, save_model_name="time_model_dt") target_attack_x, target_attack_y, preds = model.test(attack_data_pcap, malicious=1, return_x_y_preds=True, verbose=False) target_attack_x, target_attack_y = target_attack_x[np.where(preds == 1)], target_attack_y[np.where(preds == 1)] # Create a dataframe for ease retraining model target_attack_df = pd.DataFrame(target_attack_x, columns=model.features) target_attack_df["malicious"] = 1 # White-box Attack art_classifier = ScikitlearnDecisionTreeClassifier(model=model.get_classifier()) dt_attack = DecisionTreeAttack(classifier=art_classifier) # adversarial samples white_box_adversarial = dt_attack.generate(x=target_attack_x) valid_white_box_adversarial = parse_df_for_pcap_validity(white_box_adversarial, original_data=target_attack_x, columns=model.features) # + # generate black-box samples art_classifier = SklearnClassifier(model=model.get_classifier()) hsj_attack = HopSkipJump(classifier=art_classifier) # adversarial samples # target_attack_x = target_attack_x # 2k samples, takes longer target_attack_x = target_attack_x[:100] black_box_adversarial = hsj_attack.generate(x=target_attack_x, y=np.zeros(len(target_attack_x))) 
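# HopSkipJump is a decision-based attack: unlike the white-box DecisionTreeAttack above, it only
# queries the model's predicted labels and needs no access to the tree internals.
# Quick sanity check (a sketch; it assumes model.get_classifier() returns the fitted sklearn
# estimator used above, with 0 = benign and 1 = malicious labels):
bb_preds = model.get_classifier().predict(black_box_adversarial)
print('black-box samples now classified as benign: {:.1%}'.format(np.mean(bb_preds == 0)))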
valid_black_box_adversarial = parse_df_for_pcap_validity(black_box_adversarial, original_data=target_attack_x, columns=model.features) # + model.test(white_box_adversarial) # retrain model with white box samples model.train(white_box_adversarial, continue_training=True) # check classification on adversarial samples model.test(white_box_adversarial) # - # check classification accuracy on test_test test_set = get_testing_data() model.test(test_set) # ### Part 3 exercises # + # Try retraining models with the white box / black box / valid / invalid packets and see how it impacts # classification accuracy # # You may want to combine the adversarial samples with the original training set to avoid catastrophic forgetting # To combine datasets use pd.concat() # # e.g. # training_set = pd.get_training_set() # pd.concat([target_attack_df, training_set]) # - # ### ---------- End of Part 3 ---------- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + active="" # merge dataframe by columns using index # - import pandas as pd df = pd.DataFrame([[1, 3], [2, 4]], columns=['A', 'B']) df df2 = pd.DataFrame([[1, 5], [1, 6]], columns=['A', 'D']) df2 pd.concat([df, df2], axis=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/python # -*- coding: utf-8 -*- """This notebook creates the directory tree of TAG in COUNTRY in YEAR""" import inspect, os, sys try : import pywikibot as pb from pywikibot import pagegenerators, textlib from pywikibot.specialbots import UploadRobot except : current_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0])) folder_parts = current_folder.split(os.sep) pywikibot_folder = os.sep.join(folder_parts[:-1]) if current_folder not in sys.path: sys.path.insert(0, current_folder) if pywikibot_folder not in sys.path: sys.path.insert(0, pywikibot_folder) import pywikibot as pb from pywikibot import pagegenerators, textlib from pywikibot.specialbots import UploadRobot import mwparserfromhell as mwh # - from anytree import Node, LevelOrderIter, RenderTree, AsciiStyle from mako.template import Template # + # Project parameters YEAR = 2018 TAG = 'WLE' TAG_EXT = "Wiki Loves Earth" COUNTRY = "Spain" ES_COUNTRY = "España" BASE_NAME = "Commons:Wiki Loves in {2}/{1}/{0}".format(YEAR, TAG_EXT, COUNTRY) commons_site = pb.Site('commons', 'commons') # + class TabularDataPage(pb.Page): def __init__(self, source, title=''): if not isinstance(source, pb.site.BaseSite): site = source.site else: site = source super(TabularDataPage, self).__init__(source, title) if self.namespace() != 486: raise ValueError('Page %s must belong to %d namespace' % (self.title(), 486)) def text(self, value): self._text = value def _page_to_json(self): return json.dumps(_text, ensure_ascii=False) def save(self, *args, **kwargs): # See Page.save(). """Save page content after recomposing the page.""" # Save using contentformat='application/json'. 
kwargs['contentformat'] = 'application/json' _dict = { "license": "CC0-1.0", "schema": { "fields": [ { "name": "date", "type": "string" } ] }, "data": [ [ "1876-01" ] ] } #kwargs['contentmodel'] = 'Tabular.JsonConfig' super(TabularDataPage, self).save(*args, text=json.dumps(_dict, ensure_ascii=False), **kwargs) tabdata_page = TabularDataPage(commons_site, 'Data:Wiki Loves in Spain/Wiki Loves Earth/2017/Log_db.tab') tabdata_page.text = '' #tabdata_page.save("{0} in Spain report".format(TAG)) # + root_template = """{{see also|Commons:${tag_extended} ${year}}} {{es|1=Esta es la categoría principal del concurso fotográfico '''[[Commons:${tag_extended} ${year} in ${country}|${tag_extended} ${year} en ${es_country}]]'''.}} [[Category:${tag_extended} ${year}|${country}]] [[Category:${tag_extended} in ${country}| ${year}]] [[Category:${year} events in ${country}]] [[Category:Activities by Wikimedia ${es_country} in ${year}]]""" vars = { "tag_extended": TAG_EXT, "year": YEAR, "country": COUNTRY, "es_country": ES_COUNTRY, "WM_CHAPTER": None } t = Template(root_template) root_text = t.render(**vars) root_text # - page = pb.Page(commons_site, "{0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), ns=14) if page.text != root_text: page.text = root_text page.save("{1} {0} in {2}: Folder structure creation".format(YEAR, TAG, COUNTRY)) # + images_root_template = """{{Countries of Europe|prefix=:Category:Images from ${tag_extended} ${year} in}} {{see also|Commons${tag_extended} ${year} in ${country}}} {{Image template notice|${tag_extended} ${year}|es}} {{GeoGroupTemplate}} {{CategoryTOC}} {{Hiddencat}}
'''El concurso fotográfico [[Commons:${tag_extended} ${year} in ${country}|${tag_extended} ${year} (${country})]] se celebra entre el 1 y el 31 de mayo de ${year}. Por favor, no incluyáis aquí imágenes que se subieron antes o después del plazo del concurso.'''
[[Category:${tag_extended} ${year} in ${country}]] [[Category:Photographs of ${country}|${tag_extended} ${year}]] [[Category:Images from ${tag_extended} ${year}|${country}]] [[Category:Images from ${tag_extended} in ${country} by year| ${year}]]""" vars = { "tag_extended": TAG_EXT, "year": YEAR, "country": COUNTRY, "WM_CHAPTER": None } t = Template(images_root_template) images_root_text = t.render(**vars) images_root_text # - regular_template = """{{{{hiddencat}}}} {0} """ header_meta = "{{{{metacat|{0}}}}}\n" header_categorize = "{{categorize}}\n" header_featured = "{{Collection of featured pictures|country=es}}\n" header_quality = """{{Countries of Europe|prefix=:Category:Quality images from Wiki Loves Earth 2015 in}} {{Collection of quality images|country=es}} {{CategoryTOC}} """ header_valued = """{{VI-cat}} {{CategoryTOC}} """ # + root = Node("Images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), text=images_root_text) maintenance = Node("Images from {0} {1} in {2} (maintenance)".format(TAG_EXT, YEAR, COUNTRY), parent=root, order=0, header=header_categorize) evaluation = Node("Images from {0} {1} in {2} (evaluation)".format(TAG_EXT, YEAR, COUNTRY), parent=root, order=2, header=header_categorize) classification = Node("Classified images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=root, order=1, header=header_categorize) remarkable = Node("High quality images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=root, order=3, header=header_categorize) issues = Node("Images with issues from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=maintenance, order=0, header=header_categorize) unqualified = Node("Unqualified images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=maintenance, order=1) wrong_code = Node("Images from {0} {1} in {2} with a wrong code".format(TAG_EXT, YEAR, COUNTRY), parent=issues) no_code = Node("Images from {0} {1} in {2} without code".format(TAG_EXT, YEAR, COUNTRY), parent=issues) no_template = Node("Images from {0} {1} in {2} without valid template".format(TAG_EXT, YEAR, COUNTRY), parent=issues) no_category = Node("Uncategorized images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=issues) no_site = Node("Unqualified images from {0} {1} in {2} (not from a site of community importance)".format(TAG, YEAR, COUNTRY), parent=unqualified) no_country = Node("Unqualified images from {0} {1} in {2} (not from {2})".format(TAG, YEAR, COUNTRY), parent=unqualified) no_date = Node("Unqualified images from {0} {1} in {2} (wrong submission time)".format(TAG, YEAR, COUNTRY), parent=unqualified) no_size = Node("Unqualified images from {0} {1} in {2} (too small)".format(TAG, YEAR, COUNTRY), parent=unqualified) no_location = Node("Unqualified images from {0} {1} in {2} (unidentified locations)".format(TAG, YEAR, COUNTRY), parent=unqualified) watermarked = Node("Unqualified images from {0} {1} in {2} (watermarked)".format(TAG, YEAR, COUNTRY), parent=unqualified) to_evaluate = Node("Images from {0} {1} in {2} to be evaluated".format(TAG_EXT, YEAR, COUNTRY), parent=evaluation, order=0) first_round = Node("Evaluation of images from {0} {1} in {2} - Round 1".format(TAG_EXT, YEAR, COUNTRY), parent=evaluation, order=1) finalist = Node("Evaluation of images from {0} {1} in {2} - Final".format(TAG_EXT, YEAR, COUNTRY), parent=evaluation, cats=["Evaluation of images from {0} in {2} - Final| {1}".format(TAG_EXT, YEAR, COUNTRY)], order=3) winners = Node("Winners of {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=evaluation, 
cats=["Winners of {0} in {2}| {1}".format(TAG_EXT, YEAR, COUNTRY), "Winners of {0} {1} by country|{2}".format(TAG_EXT, YEAR, COUNTRY) ], order=4) by_author = Node("Images from {0} {1} in {2} by author".format(TAG_EXT, YEAR, COUNTRY), parent=classification, cats=["Images from {0} in {2} by author| {1}".format(TAG_EXT, YEAR, COUNTRY)], order='a', header=header_meta.format("author")) by_autcom = Node("Images from {0} {1} in {2} by autonomous community".format(TAG_EXT, YEAR, COUNTRY), parent=classification, cats=["Images from {0} in {2} by autonomous community| {1}".format(TAG_EXT, YEAR, COUNTRY)], order='c', header=header_meta.format("autonomous community")) by_site = Node("Images from {0} {1} in {2} by site".format(TAG_EXT, YEAR, COUNTRY), parent=classification, cats=["Images from {0} in {2} by site and year| {1}".format(TAG_EXT, YEAR, COUNTRY)], order='b', header=header_meta.format("site")) featured = Node("Featured images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=remarkable, cats=["Featured pictures from {0} in {2}| {1}".format(TAG_EXT, YEAR, COUNTRY), "Featured pictures from {0} {1}|{2}".format(TAG_EXT, YEAR, COUNTRY), "Featured pictures supported by Wikimedia España", ], order=0, header=header_featured) quality = Node("Quality images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=remarkable, cats=["Quality images from {0} in {2}| {1}".format(TAG_EXT, YEAR, COUNTRY), "Quality images from {0} {1}|{2}".format(TAG_EXT, YEAR, COUNTRY), "Quality images supported by Wikimedia España" ], order=1, header=header_quality) valued = Node("Valued images from {0} {1} in {2}".format(TAG_EXT, YEAR, COUNTRY), parent=remarkable, cats=["Valued images from {0} in {2}| {1}".format(TAG_EXT, YEAR, COUNTRY), "Valued images from {0} {1}|{2}".format(TAG_EXT, YEAR, COUNTRY), "Valued images of {2}|{0} {1}".format(TAG_EXT, YEAR, COUNTRY) ], order=2, header=header_valued) # - for pre, fill, node in RenderTree(root): print("%s%s" % (pre, node.name)) for node in LevelOrderIter(root) : cat_wikitext = None try: try : # Parent category when there's a category key cats = ["{0}| {1}".format(node.parent.name, node.order)] except : # Parent category when there isn't a category key cats = [node.parent.name] try: # Include additional cats cats.extend(node.cats) except : pass # Create category string and include it into template throuhg format cat_text = '\n'.join(["[[Category:{}]]".format(cat) for cat in cats]) cat_wikitext = regular_template.format(cat_text) except : # Root category cat_wikitext = node.text try : cat_wikitext = node.header + cat_wikitext except : pass page = pb.Page(commons_site, node.name, ns=14) if page.text.strip() != cat_wikitext.strip(): page.text = cat_wikitext page.save("{1} {0} in {2}: Folder structure creation".format(YEAR, TAG, COUNTRY)) #print (cat_wikitext) #print ("---------------------") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + jupyter={"outputs_hidden": false} # %run quiz_2.py # - # --- # #### Run the following cell to load question 1. q1 # + [markdown] jupyter={"outputs_hidden": false} # ##### Do your work in the following cell. Then answer the question. # - # --- # #### Run the following cell to load question 2. q2 # + [markdown] jupyter={"outputs_hidden": false} # ##### Do your work in the following cell. Then answer the question. 
# + jupyter={"outputs_hidden": false} # - # --- # ### Problem # Write a check to make sure that there are no age groups in which the number of females plus the number of males does not add up to the total number of people that age in 2019. # --- # ### Help with Question 2 # Get the number of years lived by the people alive in 2019 and then divide that by the number of people alive in 2019. # # #### To get the number of people alive in 2019: # 1. First get a `DataFrame` that only contains the rows that contain data for both sexes (i.e., where `SEX == 'T'`). # 1. Get the sum of the `POPESTIMATE2019` column values. # # #### To get the total years lived by the people alive in 2019: # 1. Use element-wise multiplication to create a `Series` by multiplying `POPESTIMATE2019` by `AGE`. # 1. Get the sum of that series. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd # # Reading Datasets # Reading dummy books dataset books = pd.read_csv('/home/student/Desktop/archive/BX-Books.csv',sep=';', encoding="latin-1", error_bad_lines=False) # Reading dummy users dataset users = pd.read_csv("/home/student/Desktop/archive/BX-Users.csv", sep=';', encoding="latin-1", error_bad_lines=False) # Reading ratings dataset # # Since our main theme is not to recommend books based on ratings, but based on search. Since we dont have search dataset, we are simulating the idea of book recommendation based on ratings(because of available datasets) ratings = pd.read_csv("/home/student/Desktop/archive/BX-Book-Ratings.csv", sep=';', encoding="latin-1", error_bad_lines=False) books.count() # # Preprocessing the data books = books[['ISBN', 'Book-Title', 'Book-Author', 'Year-Of-Publication', 'Publisher']] books.head() books.rename(columns = {'Book-Title':'title', 'Book-Author':'author', 'Year-Of-Publication':'year', 'Publisher':'publisher'}, inplace=True) users.rename(columns = {'User-ID':'user_id', 'Location':'location', 'Age':'age'}, inplace=True) ratings.rename(columns = {'User-ID':'user_id', 'Book-Rating':'rating'}, inplace=True) users.head() books.head() ratings.head() ratings['user_id'].value_counts() users_rated_200_books = ratings['user_id'].value_counts()>200 x = ratings['user_id'].value_counts() > 200 y = x[x].index #user_ids print(y.shape) ratings = ratings[ratings['user_id'].isin(y)] rating_with_books = ratings.merge(books, on='ISBN') rating_with_books.head() number_rating = rating_with_books.groupby('title')['rating'].count().reset_index() number_rating.rename(columns= {'rating':'number_of_ratings'}, inplace=True) final_rating = rating_with_books.merge(number_rating, on='title') final_rating.shape final_rating = final_rating[final_rating['number_of_ratings'] >= 50] final_rating.drop_duplicates(['user_id','title'], inplace=True) book_pivot = final_rating.pivot_table(columns='user_id', index='title', values="rating") book_pivot.fillna(0, inplace=True) from scipy.sparse import csr_matrix book_sparse = csr_matrix(book_pivot) from sklearn.neighbors import NearestNeighbors model = NearestNeighbors(algorithm='brute') model.fit(book_sparse) distances, suggestions = model.kneighbors(book_pivot.iloc[500, :].values.reshape(1, -1)) # # Lets see what are the recommendations based on his ratings count for i in range(len(suggestions)): print(book_pivot.index[suggestions[i]]) distances # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="fOFacjQBFx9C" colab_type="code" colab={} class Solution(object): def numJewelsInStones(self, J, S): """ :type J: str :type S: str :rtype: int """ ans=0 J=set(J) for i in S: if i in J: ans+=1 return ans # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Pyspark for Linear Regression # ### 1. Set up spark context and SparkSession # + from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .appName("Python Spark SQL basic example") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() # - # ### 2. Load dataset df = spark.read.format('com.databricks.spark.csv').\ options(header='true', \ inferschema='true').load("./data/Advertising.csv",header=True); df.take(2) df.printSchema() df.show(6) # ### 3. Convert the data to features (dense vector) and label from pyspark.sql import Row from pyspark.ml.linalg import Vectors # convert the data to dense vector #def transData(row): # return Row(label=row["Sales"], # features=Vectors.dense([row["TV"], # row["Radio"], # row["Newspaper"]])) def transData(data): return data.rdd.map(lambda r: [Vectors.dense(r[:-1]),r[-1]]).toDF(['features','label']) # convert the data to dense vector #def transData(row): # return Row(label=row["Sales"], # features=Vectors.dense([row["TV"], # row["Radio"], # row["Newspaper"]])) def transData(data): return data.rdd.map(lambda r: [Vectors.dense(r[:-1]),r[-1]]).toDF(['features','label']) # ### 4. Transform the dataset to DataFrame #transformed = df.rdd.map(transData).toDF() data= transData(df) data.show(6) # ### 5. Convert features data format and set up training and test data sets from pyspark.ml import Pipeline from pyspark.ml.regression import LinearRegression from pyspark.ml.feature import VectorIndexer from pyspark.ml.evaluation import RegressionEvaluator featureIndexer = VectorIndexer(inputCol="features", \ outputCol="indexedFeatures",\ maxCategories=4).fit(data) # Split the data into training and test sets (40% held out for testing) (trainingData, testData) = data.randomSplit([0.6, 0.4]) # ### 5. Fit model (Ridge Regression and the LASSO) lr = LinearRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8) # Chain indexer and tree in a Pipeline pipeline = Pipeline(stages=[featureIndexer, lr]) # Train model. This also runs the indexer. model = pipeline.fit(trainingData) lrmodel= model.stages[1] lrmodel.coefficients # or [stage.coefficients for stage in model.stages if hasattr(stage, "coefficients")] lrmodel.summary.meanAbsoluteError # ### 6. Make predictions predictions = model.transform(testData) # Select example rows to display. predictions.select("prediction", "label", "features").show(5) # ### 8. 
Evaluation # Select (prediction, true label) and compute test error evaluator = RegressionEvaluator( labelCol="label", predictionCol="prediction", metricName="rmse") rmse = evaluator.evaluate(predictions) print("Root Mean Squared Error (RMSE) on test data = %g" % rmse) # - summary temp_path = 'temp/Users/wenqiangfeng/Dropbox/Spark/Code/model' modelPath = temp_path + "/lr_model" model.save(modelPath) # - save and extract model lr2 = model.load(modelPath) lr2.coefficients # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from sigvisa.treegp.gp import GPCov, GP, prior_sample, mcov import scipy.stats # + # step 1: sample from a GP cov1 = GPCov(wfn_params=np.array((1.0,)), dfn_params=((2.0,)), wfn_str="se", dfn_str="euclidean") x = np.linspace(-5, 5, 50).reshape((-1, 1)) K = mcov(x, cov1, 0.0) fx = scipy.stats.multivariate_normal(mean=np.zeros((x.shape[0],)), cov=K).rvs(2) A = np.array(((0.8, 0.2), (0.2, 0.8))) Afx = np.dot(A, fx) plot(x, Afx[0, :]) plot(x, Afx[1, :]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas_datareader.data as web symbols = ['SP500', 'NASDAQCOM', 'DJCA', 'NIKKEI225'] data = pd.DataFrame() for sym in symbols: data[sym] = web.DataReader(sym, data_source='fred')[sym] data = data.dropna() (data / data.iloc[0] * 100).plot() plt.show() # 주식의 수익률은 보통 정규분포를 따른다!! log_returns = np.log(data / data.shift(1)) log_returns.hist(bins=50) plt.show() for i, sym in enumerate(symbols): ax = plt.subplot(2, 2, i+1) sp.stats.probplot(log_returns[sym].dropna(), plot=ax) plt.tight_layout() plt.show() # 아래와 같은 모양이 fat tail = long tail : 정규분포의 양 끝이 뚱뚱하고 두꺼운 모양새 # 자유도 (아래코드에선 df(degree of freedom)를 의미) # 자유도가 커지면 분산이 점차 줄어들면서 정규분포에 가까워진다. xx = np.linspace(-4, 4, 100) for df in [1, 2, 5, 10, 20]: rv = sp.stats.t(df=df) plt.plot(xx, rv.pdf(xx), label=("student-t (dof=%d)" % df)) plt.plot(xx, sp.stats.norm().pdf(xx), label="Normal", lw=5, alpha=0.5) plt.legend() plt.show() # #### student - t 분포 # - 샘플의 수가 유한한 경우에 샘플표준편차를 이용하여 샘플평균을 정규화 한 것이다. # + # 이 또한 N의 값이 커지면 정규분포에 가까워진다. 
np.random.seed(0) rv = sp.stats.norm() M = 1000 plt.subplot(1, 2, 1) N = 4 x1 = rv.rvs((N, M)) xbar1 = x1.mean(axis=0) xstd1 = x1.std(axis=0, ddof=1) x = xbar1 / (xstd1 / np.sqrt(N)) sns.distplot(x, kde=True) xx = np.linspace(-6, 6, 1000) plt.plot(xx, rv.pdf(xx), 'r:', label="normal") plt.xlim(-6, 6) plt.ylim(0, 0.5) plt.title("N = 4") plt.legend() plt.subplot(1, 2, 2) N = 40 x2 = rv.rvs((N, M)) xbar2 = x2.mean(axis=0) xstd2 = x2.std(axis=0, ddof=1) x = xbar2 / (xstd2 / np.sqrt(N)) sns.distplot(x, kde=True) xx = np.linspace(-6, 6, 1000) plt.plot(xx, rv.pdf(xx), 'r:', label="normal") plt.xlim(-6, 6) plt.ylim(0, 0.5) plt.title("N = 40") plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #imports import time start_time = time.time() import numpy as np from matplotlib import pyplot as plt from keras import Sequential from keras.layers import Dense, Dropout, Conv1D, Flatten from keras.metrics import binary_accuracy #from keras.utils import np_utils print("--- %s seconds ---" % (time.time() - start_time)) # + # import datasets with time taken! #smoll start_time = time.time() smoll = np.loadtxt("/home/willett/NeutrinoData/small_CNN_input_processed.txt", comments='#') print("--- %s seconds ---" % (time.time() - start_time)) print(smoll.shape) """ # commented out to save computation #and the full start_time = time.time() fll = np.loadtxt("/home/willett/NeutrinoData/full_CNN_input_processed.txt", comments='#') print("--- %s seconds ---" % (time.time() - start_time)) print(fll.shape) """ """ # commented out to save computation #and the full start_time = time.time() fll = np.loadtxt("/home/willett/NeutrinoData/test_CNN_input_processed.txt", comments='#') print("--- %s seconds ---" % (time.time() - start_time)) print(fll.shape) """ # extract title pls = open("/home/willett/NeutrinoData/small_CNN_input_processed.txt", "r") title = pls.readline() title = title[2:-1] print(title) # + # creating a dataset switch, change what UsedData is to change CNN UD = smoll # Used Data = UDLength = UD.shape[0] print("shape: ",UD.shape,"\nsize: ", UD.size," \nlength: ", UDLength) # dataset is expected in this format: # FirstLayer LastLayer NHits AverageZP Thrust PID_Angle PID_Front PID_LLR_M FirstLayer LastLayer NHits_Low AverageZP Thrust_Lo PID_Angle PID_Front PID_LLR_M Energy_As Angle_Bet Distance_Bet Sig Bg # with Sig and Bg expected as one hot vectors. # + # splitting X = dataset , Y = one hot vectors X = UD[:,0:-2] Y = UD[:,-2:1000] print("X shape: ",X.shape,"\nY shape: ", Y.shape) # they will be split into testing and training at compile # + # this is a convolutional network so data must be spacially relevant: i.e. columns must be swapped. # Convolution kernel size = (2,) #swapping PID angle and PID front for high energy so two charge related variables in one convolution PIDAH = X[:,5] PIDFH = X[:,6] X[:,5:7] = np.column_stack((PIDFH,PIDAH)) #swapping PID angle and PID front for low energy so two charge related variables in one convolution PIDAL = X[:,13] PIDFL = X[:,14] X[:,13:15] = np.column_stack((PIDFL,PIDAL)) #swapping Energy Asymetry and Distance so two geometric related variables in one convolution # While simultaneously padding with zeros! 
EAS = X[:,16] DB = X[:,-1] print(EAS[0],DB[0]) X2 = np.zeros((UDLength,20)) X2[:,0:-1] = X X2[:,16] = DB X2[:,18] = EAS #To debug print X before and X2 after, see if they swap # - X2 = np.expand_dims(X2, axis=2) # i.e. reshape (569, 30) to (569, 30, 1) for convolution # + # inevitable bias removal... # + # set variables: InDim = (X2.shape[1],X2.shape[2]) #input dimension Fltr = 4 #dimensionality of output space KD = 2 # kernel size Width = 8 # width of dense layers ~ 0.75 input DR = 0.5 # rate of dropout # linear model with a convolutional and 3 dense layers. Model = Sequential() Model.add(Conv1D(Fltr, KD , input_shape=InDim , activation="sigmoid", use_bias=True )) #conv Model.add(Flatten()) Model.add(Dense(Width, activation="sigmoid", use_bias=True)) #1 Model.add(Dropout(DR) ) Model.add(Dense(Width, activation="sigmoid", use_bias=True)) #2 Model.add(Dropout(DR) ) Model.add(Dense(Width, activation="sigmoid", use_bias=True)) #3 Model.add(Dropout(DR) ) Model.add(Dense(2, activation="softmax", use_bias=True)) # output layer # + # compile model: # For a binary classification problem Model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy', 'binary_accuracy' ]) # + # Trainining # Train the model, iterating on the data in batches of 32 samples history = Model.fit(X2, # the dataset Y, #true or false values for the dataset epochs=100, #number of iteration over data batch_size=32, #number of trainings between tests verbose=1, #prints one line per epoch of progress bar validation_split=0.2 ) #ratio of test to train # - #summarise history for accuracy plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + active="" # from sklearn.metrics import confusion_matrix # import itertools # # # def plot_confusion_matrix(cm, classes, # normalize=False, # title='Confusion matrix', # cmap=plt.cm.Blues): # """ # This function prints and plots the confusion matrix. # Normalization can be applied by setting `normalize=True`. # """ # if normalize: # cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] # print("Normalized confusion matrix") # else: # print('Confusion matrix, without normalization') # # print(cm) # # plt.imshow(cm, interpolation='nearest', cmap=cmap) # plt.title(title) # plt.colorbar() # tick_marks = np.arange(len(classes)) # plt.xticks(tick_marks, classes, rotation=45) # plt.yticks(tick_marks, classes) # # fmt = '.2f' if normalize else 'd' # thresh = cm.max() / 2. 
# for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): # plt.text(j, i, format(cm[i, j], fmt), # horizontalalignment="center", # color="white" if cm[i, j] > thresh else "black") # # plt.tight_layout() # plt.ylabel('True label') # plt.xlabel('Predicted label') # # # y_prob=model.predict(X_test) # y_pred = y_prob.argmax(axis=-1) # y_test_labels = y_test.argmax(axis=-1) # cnf_matrix=confusion_matrix(y_test_labels, y_pred) # class_names = ['No hurricane','hurricane'] # plot_confusion_matrix(cnf_matrix, classes=class_names, # title='Confusion matrix, without normalization') # # - Y.sum(0)[0] / Y.sum(0)[1] = SNRatio # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #
# # Ghani, Rayid, , , , , , , , , , , and . # # Data Visualization in Python # --- # ## Table of Contents # - [Introduction](#Introduction) # - [Learning Objectives](#Learning-Objectives) # - [Python Setup](#Python-Setup) # - [Load the Data](#Load-the-Data) # - [Our First Chart in `matplotlib`](#Our-First-Chart-in-matplotlib) # - [A Note on Data Sourcing](#A-Note-on-Data-Sourcing) # - [Layering in `matplotlib`](#Layering-in-matplotlib) # - [Introducing seaborn](#Introducing-seaborn) # - [Combining `seaborn` and `matplotlib`](#Combining-seaborn-and-matplotlib) # - [Exploring cohort employment](#Exploring-cohort-employment) # - [Saving Charts as a Variable](#Saving-Charts-as-a-Variable) # - [Exporting Completed Graphs](#Exporting-Completed-Graphs) # - [Choosing a Data Visualization Package](#Choosing-a-Data-Visualization-Package) # - [An Important Note on Graph Titles](#An-Important-Note-on-Graph-Titles) # - [Additional Resources](#Additional-Resources) # ## Introduction # - Back to [Table of Contents](#Table-of-Contents) # # In this module, you will learn to quickly and flexibly make a wide series of visualizations for exploratory data analysis and communicating to your audience. This module contains a practical introduction to data visualization in Python and covers important rules that any data visualizer should follow. # # ### Learning Objectives # # * Become familiar with a core base of data visualization tools in Python - specifically matplotlib and seaborn # # * Begin exploring what visualizations are going to best reveal various types of patterns in your data # # * Learn more about our primary datasets data with exploratory analyses and visualizations # ## Python Setup # - Back to [Table of Contents](#Table-of-Contents) # + # data manipulation in Python import pandas as pd # visualization packages import matplotlib.pyplot as plt import seaborn as sns # database connection from sqlalchemy import create_engine # see how long queries/etc take import time # so images get plotted in the notebook # %matplotlib inline # - # ## Load the Data # - Back to [Table of Contents](#Table-of-Contents) # + # set up sqlalchemy engine host = 'stuffed.adrf.info' DB = 'appliedda' connection_string = "postgresql://{}/{}".format(host, DB) conn = create_engine(connection_string) # - # We will continue exploring a similar selection of data as we ended with in the [Dataset Exploration](02_2_Dataset_Exploration.ipynb) notebook. 
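# Before running the larger queries below, a quick check that the `conn` engine created above can
# reach the database saves time; the trivial query here is just an illustration:
pd.read_sql('SELECT 1 AS ok;', conn)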
# # **SQL code to generate the tables we'll explore below** # # Table 1: `tanf_cohort_2006q4`: study cohort in this notebook, individuals who finished a TANF spell in Q4 of 2006 # # CREATE TABLE ada_tanf.tanf_cohort_2006q4 AS # SELECT DISTINCT ON (i.recptno) i.recptno, i.start_date, i.end_date, # m.birth_date, m.ssn_hash, sex, rac, rootrace # FROM il_dhs.ind_spells i # LEFT JOIN il_dhs.member m # ON i.recptno = m.recptno # WHERE end_date >= '2006-10-01'::date AND # end_date < '2007-01-01'::date # AND benefit_type = 'tanf46'; # # -- age at end/beginning of spell # ALTER TABLE ada_tanf.tanf_cohort_2006q4 ADD COLUMN age_end numeric, ADD COLUMN age_start numeric; # UPDATE ada_tanf.tanf_cohort_2006q4 SET (age_start, age_end) = # (extract(epoch from age(start_date, birth_date))/(3600.*24*365), # extract(epoch from age(end_date, birth_date))/(3600.*24*365)); # # -- add duration of spell # ALTER TABLE ada_tanf.tanf_cohort_2006q4 ADD COLUMN spell_dur int; # UPDATE ada_tanf.tanf_cohort_2006q4 SET spell_dur = end_date - start_date; # # -- add indexes # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4 (recptno); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4 (ssn_hash); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4 (start_date); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4 (end_date); # # -- change owner to schema's admin group # ALTER TABLE ada_tanf.tanf_cohort_2006q4 OWNER TO ada_tanf_admin; # # -- good practice to VACUUM (although DB does it periodically) # VACUUM FULL ada_tanf.tanf_cohort_2006q4; # Table 2: `tanf_cohort_2006q4_jobs` # # -- create job view for the 2006q4 cohort # CREATE TABLE ada_tanf.tanf_cohort_2006q4_jobs AS # SELECT # -- job identifiers # ssn, ein, seinunit, empr_no, year, quarter, # -- individual's earnings at this job # wage AS earnings # FROM il_des_kcmo.il_wage # WHERE year IN (2005, 2006, 2007) # AND ssn IN # (SELECT ssn_hash FROM ada_tanf.tanf_cohort_2006q4); # # -- add indexes # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (ssn); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (ein); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (seinunit); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (empr_no); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (year); # CREATE INDEX ON ada_tanf.tanf_cohort_2006q4_jobs (quarter); # # -- change owner to schema's admin group # ALTER TABLE ada_tanf.tanf_cohort_2006q4_jobs OWNER TO ada_tanf_admin; # # -- good practice to VACUUM (although DB does it periodically) # VACUUM FULL ada_tanf.tanf_cohort_2006q4_jobs; # + # get dataframe of study cohort start_time = time.time() query = """ SELECT * FROM ada_tanf.tanf_cohort_2006q4; """ # read the data, and parse the dates so we can use datetime functions df = pd.read_sql(query, conn, parse_dates=['start_date', 'end_date', 'birth_date']) # report how long reading this data frame took print('data read in {:.2f} seconds'.format(time.time()-start_time)) # - df.info() # + # get DataFrame of cohort jobs start_time = time.time() query = """ SELECT * FROM ada_tanf.tanf_cohort_2006q4_jobs; """ df_jobs = pd.read_sql(query, conn) print('data read in {:.2f} seconds'.format(time.time()-start_time)) # - df_jobs.info() # how many of our cohort have a job in the `il_wage` dataset df_jobs['ssn'].nunique() #.nunique() returns the unique number of values # ## Visual data exploration with `matplotlib` # - Back to [Table of Contents](#Table-of-Contents) # # Under the hood, `Pandas` uses `matplotlib` to produce visualizations. 
`matplotlib` is the most commonly used base visualization package and provides low-level access to different chart characteristics (e.g., tick mark labels) # view a simple histogram of the age distribution of our cohort df.hist(column='age_end') # one default we may want to change for histograms is the number of bins df.hist(column='age_end', bins=50) df.hist('spell_dur') # spell duration is in days, so what is the duration in years? (df['spell_dur']/365.).hist() # + # aside: note the "." after 365 is a holdover # from Python 2 to give us floating point rather than integer values # in Python 3 the / operator returns floating point values even for integers print('type with "." is: {}'.format((df['spell_dur']/365.).dtypes)) print('type without "." is: {}'.format((df['spell_dur']/365).dtypes)) # to get integers in Python 3 you can do floor division: print('floor division type is: {}'.format((df['spell_dur']//365).dtypes)) # - # how many spells last over 5 years? ((df['spell_dur']/365.)>5).value_counts() # the .value_counts() function can also report percentages by using the 'normalize' parameter: ((df['spell_dur']/365.)>5).value_counts(normalize=True) # what is the overall distribution of earnings across all the jobs? df_jobs.hist(column='earnings', bins=50) # the simple histogram produced above shows a lot of small earnings values # what is the distribution of the higher values? df_jobs['earnings'].describe(percentiles = [.01, .1, .25, .5, .75, .9, .95, .99, .999]) # + ## We can see a long tail in the earnings per job ## let's subset to below the 99th percentile and make a histogram subset_values = df_jobs['earnings'] < df_jobs['earnings'].quantile(.99) df_jobs[subset_values].hist(column='earnings') # - # Note in the above cell we split subsetting the data into two steps: # 1. We created `subset_values` which is simply a list of True or False # 2. Then we selected all rows in the `df_jobs` dataframe where `subset_values` was True # + ## We can change options within the hist function (e.g. number of bins, color, transparency): df_jobs[subset_values].hist(column='earnings', bins=20, facecolor="purple", alpha=0.5, figsize=(10,6)) ## And we can change the plot options with `plt` (which is our alias for matplotlib.pyplot) plt.xlabel('Job earnings ($)') plt.ylabel('Number of jobs') plt.title('Distribution of jobs by earnings for 2006q4 cohort') ## And add Data sourcing: ### xy are measured in percent of axes length, from bottom left of graph: plt.annotate('Source: IL Depts of Employment Security and Human Services', xy=(0.5,-0.15), xycoords="axes fraction") ## We use plt.show() to display the graph once we are done setting options: plt.show() # - # ### A Note on Data Sourcing # - Back to [Table of Contents](#Table-of-Contents) # # Data sourcing is a critical aspect of any data visualization. Although here we are simply referencing the agencies that created the data, it is ideal to provide as direct a path as possible for the viewer to find the data the graph is based on. When this is not possible (e.g. the data is sequestered), directing the viewer to documentation or methodology for the data is a good alternative. Regardless, providing clear sourcing for the underlying data is an **absolute requirement** of any respectable visualization, and it further builds trust and enables reproducibility. # ### Layering in `matplotlib` # - Back to [Table of Contents](#Table-of-Contents) # # As briefly demonstrated by changing the labels and adding the source above, we can make consecutive changes to the same plot; that means we can also layer multiple plots on the same `figure`.
By default, the first graph you create will be on the bottom with following graphs on top. # + # to demonstrate simple layering # we will create a histogram of 2005 and 2007 earnings # similar to that already demonstrated above plt.hist(df_jobs[subset_values & (df_jobs['year']==2005)]['earnings'], facecolor="blue", alpha=0.6) plt.hist(df_jobs[subset_values & (df_jobs['year']==2007)]['earnings'], facecolor="orange", alpha=0.6) plt.annotate('Source: IL Depts of Employment Security and Human Services', xy=(0.5,-0.15), xycoords="axes fraction") plt.show() # - # ## Introducing `seaborn` # - Back to [Table of Contents](#Table-of-Contents) # # `Seaborn` is a popular visualization package built on top of `matplotlib` which makes some of the more cumbersome graphs easier to make; however, it does not give direct access to the lower-level objects in a `figure` (more on that later). ## Barplot function in seaborn sns.barplot(x='year', y='earnings', data=df_jobs) plt.show() # What values does the above plot actually show us? Let's use the `help()` function to check the details of the `seaborn.barplot()` function we called above: help(sns.barplot) # In the documentation, we can see that there is an `estimator` parameter that defaults to `mean` ## Barplot using sum of earnings rather than the default mean sns.barplot(x='year', y='earnings', data=df_jobs, estimator=sum) plt.show() # + ## Seaborn has a great series of charts for showing different cuts of data sns.factorplot(x='quarter', y='earnings', hue='year', data=df_jobs, kind='bar') plt.show() ## Other options for the 'kind' argument can be found in the documentation # - # ### Combining `seaborn` and `matplotlib` # - Back to [Table of Contents](#Table-of-Contents) # # There are many excellent data visualization modules available in Python, but for the tutorial we will stick to the tried and true combination of `matplotlib` and `seaborn`. # # Below, we use `seaborn` for setting an overall aesthetic style and then faceting (creating small multiples). We then use `matplotlib` to make very specific adjustments - things like adding the title, adjusting the locations of the plots, and sizing the graph space. This is a prototypical use of the power of these two libraries together. # # More on [`seaborn`'s set_style function](https://seaborn.pydata.org/generated/seaborn.set_style.html). # More on [`matplotlib`'s figure (fig) API](https://matplotlib.org/api/figure_api.html). # + ## Seaborn offers a powerful tool called FacetGrid for making small multiples of matplotlib graphs: ### Create an empty set of grids: facet_histograms = sns.FacetGrid(df_jobs[subset_values], row='year', col='quarter') ## "map" a histogram to each grid: facet_histograms = facet_histograms.map(plt.hist, 'earnings') ## Data Sourcing: plt.annotate('Source: IL Depts of Employment Security and Human Services', xy=(0.5,-0.35), xycoords="axes fraction") plt.show() # - # + # Seaborn's set_style function allows us to set many aesthetic parameters.
sns.set_style("whitegrid") facet_histograms = sns.FacetGrid(df_jobs[subset_values], row='year', col='quarter') facet_histograms = facet_histograms.map(plt.hist, 'earnings') ## We can still change options with matplotlib, using facet_histograms.fig facet_histograms.fig.subplots_adjust(top=0.9) facet_histograms.fig.suptitle("Earnings for 99% of the jobs held by 2006q4 cohort", fontsize=14) facet_histograms.fig.set_size_inches(12,8) ## Data Sourcing: facet_histograms.fig.text(x=0.5, y=-0.05, s='Source: IL Depts of Employment Security and Human Services', fontsize=12) plt.show() # - # ## Exploring cohort employment # # Question: what are employment patterns of our cohort? # reminder of what columns we have in our two DataFrames print(df.columns.tolist()) print('') # just to add a space print(df_jobs.columns.tolist()) # also check the total rows in the two datasets, and the number of unique individuals in our jobs data print(df.shape[0]) print(df_jobs.shape[0], df_jobs['ssn'].nunique()) # how many in our cohort had any job during each quarter df_jobs.groupby(['year', 'quarter'])['ssn'].nunique().plot(kind='bar') # did individuals have more than one job in a given quarter? df_jobs.groupby(['year', 'quarter', 'ssn'])['ein'].count().sort_values(ascending=False).head() # How many people were employed in the same pattern of quarters over our 3 year period? # count the number of jobs each individual had in each quarter # where a "job" is simply that they had a record in the IDES data df_tmp = df_jobs.groupby(['year', 'quarter', 'ssn'])['ein'].count().unstack(['year', 'quarter']) df_tmp.head(1) # flatten all columns to a single name with an '_' separator: df_tmp.columns = ['_'.join([str(c) for c in col]) for col in df_tmp.columns.values] df_tmp.head() # + # replace NaN with 0 df_tmp.fillna(0, inplace=True) # and set values >0 to 1 df_tmp[df_tmp>0] = 1 # - # make "ssn" a column instead of an index - then we can count it when we group by the 'year_q' columns df_tmp.reset_index(inplace=True) df_tmp.head() # + # make a list of just the columns that start with '2005' or 2006 cols = [c for c in df_tmp.columns.values if c.startswith('2005') | c.startswith('2006')] print(cols) # + # aside on the above "list comprehension", here are the same steps one by one: # 1- get an array of our columns column_list = df_tmp.columns.values # 2 - loop through each value in the array for c in column_list: # 3 - check if the string starts with either '2005' or '2006' if c.startswith('2005') | c.startswith('2006'): # 4 - add the column to our new list (here we just print to demonstrate) print(c) # - # group by all columns to count number of people with the same pattern df_tmp = df_tmp.groupby(cols)['ssn'].count().reset_index() print('There are {} different patterns of employment in 2005 and 2006'.format(df_tmp.shape[0])) # + # total possible patterns of employment poss_patterns = 2**len(cols) pct_of_patterns = 100 * df_tmp.shape[0] / poss_patterns print('With this definition of employment, our cohort shows {:.1f}% of the possible patterns'.format(pct_of_patterns)) # - # Look at just the top 10: df_tmp.sort_values('ssn', ascending=False).head(10) # and how many people follow other patterns df_tmp.sort_values('ssn', ascending=False).tail(df_tmp.shape[0]-10)['ssn'].sum() # grab the top 10 for a visualization df_tmp_top = df_tmp.sort_values('ssn', ascending=False).head(10).reset_index() # drop old index df_tmp_top.drop(columns='index', inplace=True) print('percent of employed in top 10 patterns is 
{:.1f}%'.format(100*df_tmp_top['ssn'].sum()/df_tmp['ssn'].sum())) # calculate percentage of cohort in each group: df_tmp_top['pct_cohort'] = df_tmp_top['ssn'].astype(float) / df['ssn_hash'].nunique() df_tmp_top.head() # ### A heatmap using Seaborn # visualize with a simple heatmap sns.heatmap(df_tmp_top[cols]) # The default visualization leaves a lot to be desired. Now let's customize the same heatmap. # + # Create the matplotlib object so we can tweak graph properties later fig, ax = plt.subplots(figsize = (14,8)) # create the list of labels we want on our y-axis ylabs = ['{:.2f}%'.format(x*100) for x in df_tmp_top['pct_cohort']] # make the heatmap sns.heatmap(df_tmp_top[cols], linewidths=0.01, linecolor='grey', yticklabels=ylabs, cbar=False, cmap="Blues") # make y-labels horizontal and change tickmark font size plt.yticks(rotation=360, fontsize=12) plt.xticks(fontsize=12) # add axis labels ax.set_ylabel('Percent of cohort', fontsize=16) ax.set_xlabel('Quarter', fontsize=16) ## Data Sourcing: ax.annotate('Source: IL Depts of Employment Security and Human Services', xy=(0.5,-0.15), xycoords="axes fraction", fontsize=12) ## add a title fig.suptitle('Top 10 most common employment patterns of cohort', fontsize=18) ax.set_title('Blue is "employed" and white is "not employed"', fontsize=12) plt.show() # - # ### Decision trees # # Decision trees are a useful visualization when exploring how important your "features" (aka "right-hand variables", "explanatory variables", etc) are in predicting your "label" (aka "outcome") - we will revisit these concepts much more in the Machine Learning portion of the program. For now, we're going to use the data we have been exploring above to demonstrate creating and visualizing a Decision Tree. # + # our "label" will just be if our cohort was present in the wage data after 2006 # get the list of SSN's present after 2006: employed = df_jobs[df_jobs['year']>2006]['ssn'].unique() df['label'] = df['ssn_hash'].isin(employed) # how many of our cohort are "employed" after exiting TANF: df['label'].value_counts(normalize=True) # - # set which columns to use as our "features" sel_features = ['sex', 'rac', 'rootrace', 'age_end', 'age_start', 'spell_dur'] # + # additional imports to create and visualize tree # we will revisit sklearn during the Machine Learning portions of the program from sklearn.tree import DecisionTreeClassifier # packages to display a tree in Jupyter notebooks from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import graphviz as gv import pydotplus # + # create Tree to visualize, # here we'll set maximum tree depth to 3 but you should try different values dtree = DecisionTreeClassifier(max_depth=3) # fit our data dtree.fit(df[sel_features],df['label']) # + # visualize the tree # object to hold the graphviz data dot_data = StringIO() # create the visualization export_graphviz(dtree, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names=df[sel_features].columns.values) # convert to a graph from the data graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) # display the graph in our notebook Image(graph.create_png()) # - # > what does this tree tell us about our data? # ## Exporting Completed Graphs # - Back to [Table of Contents](#Table-of-Contents) # # When you are satisfied with your visualization, you may want to save a a copy outside of your notebook. You can do this with `matplotlib`'s savefig function. 
You simply need to run: # # plt.savefig("fileName.fileExtension") # # The file extension you choose is surprisingly important. Image formats like png and jpeg are **not ideal**. These file formats store your graph as a giant grid of pixels, which is space-efficient but can't be edited later. Saving your visualizations as a PDF instead is strongly advised. PDFs are a type of vector image, which means all the components of the graph will be maintained. # # With PDFs, you can later open the image in a program like Adobe Illustrator and make changes like the size or typeface of your text, move your legends, or adjust the colors of your visual encodings. All of this would be impossible with a png or jpeg. # + ## Let's save the employment patterns heatmap we created earlier ## below just copied and pasted from above: # Create the matplotlib object so we can tweak graph properties later fig, ax = plt.subplots(figsize = (14,8)) # create the list of labels we want on our y-axis ylabs = ['{:.2f}%'.format(x*100) for x in df_tmp_top['pct_cohort']] # make the heatmap sns.heatmap(df_tmp_top[cols], linewidths=0.01, linecolor='grey', yticklabels=ylabs, cbar=False, cmap="Blues") # make y-labels horizontal and change tickmark font size plt.yticks(rotation=360, fontsize=12) plt.xticks(fontsize=12) # add axis labels ax.set_ylabel('Percent of cohort', fontsize=16) ax.set_xlabel('Quarter', fontsize=16) ## Data Sourcing: ax.annotate('Source: IL Depts of Employment Security and Human Services', xy=(0.5,-0.15), xycoords="axes fraction", fontsize=12) ## add a title fig.suptitle('Top 10 most common employment patterns of cohort', fontsize=18) ax.set_title('Blue is "employed" and white is "not employed"', fontsize=12) fig.savefig('./output/cohort2006q4_empl_patterns.pdf') # - # + # and the decision tree we made - note since we created the "graph" object we # do not need to completely reproduce the tree itself graph.write_pdf('./output/cohort2006q4_dtree.pdf') # - # ## Choosing a Data Visualization Package # # - Back to [Table of Contents](#Table-of-Contents) # # You can read more about different options for data visualization in Python in the [Additional Resources](#Additional-Resources) section at the bottom of this notebook. # # `matplotlib` is very expressive, meaning it has functionality that can easily account for fine-tuned graph creation and adjustment. However, this also means that `matplotlib` is somewhat more complex to code. # # `seaborn` is a higher-level visualization module, which means it is much less expressive and flexible than matplotlib, but far more concise and easier to code. # # It may seem like we need to choose between these two approaches, but this is not the case! Since `seaborn` is itself built on `matplotlib` (you will sometimes see `seaborn` called a `matplotlib` 'wrapper'), we can use `seaborn` for making graphs quickly and then `matplotlib` for specific adjustments. When you see `plt` referenced in the code, we are using `matplotlib`'s pyplot submodule. # # # `seaborn` also improves on `matplotlib` in important ways, such as the ability to more easily visualize regression model results, create small multiples, use better color palettes, and improve default aesthetics. From [`seaborn`'s documentation](https://seaborn.pydata.org/introduction.html): # # > If matplotlib 'tries to make easy things easy and hard things possible', seaborn tries to make a well-defined set of hard things easy too.
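# As a compact illustration of that division of labor, the short sketch below draws a quick `seaborn` plot and then uses `matplotlib` for the finishing touches. It simply reuses the `df_jobs` DataFrame and the `subset_values` filter from earlier in this notebook, so treat it as a pattern rather than a new analysis.
# +
# seaborn produces the statistical summary plot in one line...
ax = sns.barplot(x='year', y='earnings', data=df_jobs[subset_values], estimator=sum)
# ...and matplotlib handles the fine-grained adjustments
ax.set_title('Total earnings per year for the 2006q4 cohort')
ax.set_ylabel('Total earnings ($)')
ax.annotate('Source: IL Depts of Employment Security and Human Services',
            xy=(0.5, -0.15), xycoords='axes fraction')
plt.show()
# -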
# ### An Important Note on Graph Titles # - Back to [Table of Contents](#Table-of-Contents) # # The title of a visualization occupies the most valuable real estate on the page. If nothing else, you can be reasonably sure a viewer will at least read the title and glance at your visualization. This is why you want to put thought into making a clear and effective title that acts as a **narrative** for your chart. Many novice visualizers default to an **explanatory** title, something like: "Average Wages Over Time (2006-2016)". This title is correct - it just isn't very useful. This is particularly true since any good graph will have explained what the visualization is through the axes and legends. Instead, use the title to reinforce and explain the core point of the visualization. It should answer the question "Why is this graph important?" and focus the viewer on the most critical take-away. # --- # ## Additional Resources # # * [Data-Viz-Extras](../notebooks_additional/Data-Viz-extras.ipynb) notebook in the "notebooks_additional" folder # # * [A Thorough Comparison of Python's DataViz Modules](https://dsaber.com/2016/10/02/a-dramatic-tour-through-pythons-data-visualization-landscape-including-ggplot-and-altair) # # * [Seaborn Documentation](http://seaborn.pydata.org) # # * [Matplotlib Documentation](https://matplotlib.org) # # * [Advanced Functionality in Seaborn](blog.insightdatalabs.com/advanced-functionality-in-seaborn) # # * Other Python Visualization Libraries: # * [`Bokeh`](http://bokeh.pydata.org) # * [`Altair`](https://altair-viz.github.io) # * [`ggplot`](http://ggplot.yhathq.com) # * [`Plotly`](https://plot.ly) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to TensorFlow, now leveraging tensors! # In this notebook, we modify our [intro to TensorFlow notebook](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/point_by_point_intro_to_tensorflow.ipynb) to use tensors in place of our *for* loop. This is a derivation of the [Naked Tensor](https://github.com/jostmey/NakedTensor/) code. # #### The initial steps are identical to the earlier notebook import numpy as np np.random.seed(42) import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import tensorflow as tf tf.set_random_seed(42) xs = [0., 1., 2., 3., 4., 5., 6., 7.] ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.] fig, ax = plt.subplots() _ = ax.scatter(xs, ys) m = tf.Variable(-0.5) b = tf.Variable(1.0) # #### Define the cost as a tensor -- more elegant than a *for* loop and enables distributed computing in TensorFlow ys_model = m*xs+b total_error = tf.reduce_sum((ys-ys_model)**2) # use an op to calculate SSE across all values instead of one by one # #### The remaining steps are also identical to the earlier notebook!
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error) initializer_operation = tf.global_variables_initializer() with tf.Session() as session: session.run(initializer_operation) n_epochs = 1000 # 10, then 1000 for iteration in range(n_epochs): session.run(optimizer_operation) slope, intercept = session.run([m, b]) slope intercept y_hat = intercept + slope*np.array(xs) pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat']) # + fig, ax = plt.subplots() ax.scatter(xs, ys) x_min, x_max = ax.get_xlim() y_min, y_max = intercept, intercept + slope*(x_max-x_min) ax.plot([x_min, x_max], [y_min, y_max]) _ = ax.set_xlim([x_min, x_max]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (system-wide) # language: python # name: python3 # --- # + id="_vlECuRq_4hm" import numpy as np import pandas as pd from sklearn.model_selection import train_test_split # + colab={"base_uri": "https://localhost:8080/", "height": 226} id="x4hC1JeX_4hr" outputId="3f523e73-aa11-4e4b-ee69-ae05732942ec" dataset = pd.read_csv('/content/sample_data/exoplanets_2018.csv') #dataset = dataset.rename(columns=header_list) dataset.head() # + id="RG46y94I_4hv" pd.set_option("display.max_columns", None) # + id="zptX-WgS_4hx" dataset=dataset[['koi_disposition','koi_fpflag_nt','koi_fpflag_ss','koi_fpflag_co','koi_fpflag_ec','koi_depth', 'koi_model_snr']] # + colab={"base_uri": "https://localhost:8080/"} id="PPLMQuHx_4hy" outputId="17a1412e-184d-4e9b-c288-62f1fb6d5e57" for col in dataset.columns: print(col) # + id="8A_53nBl_4hz" colab={"base_uri": "https://localhost:8080/"} outputId="c8cae344-b871-4752-fbeb-8b997075f64d" dataset["koi_disposition"].replace({"CONFIRMED": "1", "CANDIDATE": "0", "FALSE POSITIVE": "0"}, inplace=True) dataset.isnull().sum().sum() dataset=dataset.dropna() dataset["koi_disposition"].value_counts() # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="99qyLE1w_4h0" outputId="4d849742-fae0-463b-fcba-39c469939050" dataset.head() # + colab={"base_uri": "https://localhost:8080/"} id="ivSf2KaX_4h2" outputId="9814edb0-d36d-4259-dde9-8fe380fb5a80" dataset = dataset.to_numpy() # Sepparating the label (Y) from the input features (X) Y = dataset[:, 0] Y = np.array(Y, dtype=int) X = dataset[:, 1:] # Sanity check print(X.shape, Y.shape) # + id="FhxV-lls_4h3" cellView="form" #@title Hidden # Separate data with label 0 and label 1 x_0 = X[Y == 0, :] x_1 = X[Y == 1, :] y_0 = Y[Y==0] y_1 = Y[Y==1] # Sanity check print(x_0.shape, y_0.shape) print(x_1.shape, y_1.shape) # + id="CCLRDtvg_4h5" colab={"base_uri": "https://localhost:8080/"} outputId="b7898e49-f8e0-4b40-b433-4e04dccf7c27" # Split 70% of the data for training set and 30% for testing set X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=2001, shuffle=True, stratify=Y) # x_train_1, x_test_1, y_train_1, y_test_1 = train_test_split(x_1, y_1, test_size=0.3, random_state=2001,shuffle=True, stratify=Y) print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) # + id="NAoYVInq_4h6" cellView="form" #@title Hidden num_sample = 1500 # sample per class, total = 5000 # Take the first 5000 samples (1500 from each class) from training set for X_train and Y_train X_train = np.concatenate((x_train_0[:num_sample, :], x_train_1[:num_sample, :]), axis=0) Y_train = np.concatenate((y_train_0[:num_sample], y_train_1[:num_sample]), axis=0) # Take the 
first 5000 samples (1500 from each class) from testing set for X_test and Y_test X_test = np.concatenate((x_test_0[:num_sample, :], x_test_1[:num_sample, :]), axis=0) Y_test = np.concatenate((y_test_0[:num_sample], y_test_1[:num_sample]), axis=0) # Manual Check print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) # + id="9deOUOq8_4h6" #Save extracted datasets on the disk # #!mkdir 'KeplerDataset' np.savetxt('/content/sample_data/X_train_6694.txt', X_train) np.savetxt('/content/sample_data/X_test_2870.txt', X_test) np.savetxt('/content/sample_data/Y_train_6694.txt', Y_train) np.savetxt('/content/sample_data/Y_test_2870.txt', Y_test) # + id="iYkuN1MK_4h7" colab={"base_uri": "https://localhost:8080/"} cellView="code" outputId="e5f2a6e7-60b5-442e-ddde-c7d033754d54" #@title Hidden # Code to load the saved subset X_train = np.loadtxt('/content/sample_data/X_train_6694.txt') Y_train = np.loadtxt('/content/sample_data/Y_train_6694.txt') X_test = np.loadtxt('/content/sample_data/X_test_2870.txt') Y_test = np.loadtxt('/content/sample_data/Y_test_2870.txt') # Check print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) # + id="9ICKFYk4_4h7" #Training the Quantum Circuit in Simulator #pip install pennylane import pennylane as qml from pennylane import numpy as np from pennylane.optimize import AdamOptimizer import tensorflow as tf from tensorflow.keras.utils import to_categorical # + id="3N5YVrYeKeHP" # Set a random seed np.random.seed(42) # + id="zgGsQMKo_4h8" # Define output labels as quantum state vectors label_0 = [[1], [0]] label_1 = [[0], [1]] def density_matrix(state): """Calculates the density matrix representation of a state. Args: state (array[complex]): array representing a quantum state vector Returns: dm: (array[complex]): array representing the density matrix """ return np.outer(state, np.conj(state)) state_labels = [label_0, label_1] dm_labels = [density_matrix(state_labels[i]) for i in range(2)] # + id="aaqPRUy3_4h8" from keras import backend as K # Alpha Custom Layer class class_weights(tf.keras.layers.Layer): def __init__(self): super(class_weights, self).__init__() w_init = tf.random_normal_initializer() self.w = tf.Variable( initial_value=w_init(shape=(1, 2), dtype="float32"), trainable=True, ) def call(self, inputs): return (inputs * self.w) # + id="-carE_ToZcs8" from keras import backend as K # Alpha Custom Layer class class_weights(tf.keras.layers.Layer): def __init__(self): super(class_weights, self).__init__() w_init = tf.random_normal_initializer() self.w = tf.Variable( initial_value=w_init(shape=(1, 2), dtype="float32"), trainable=True, ) def call(self, inputs): return (inputs * self.w) # + colab={"base_uri": "https://localhost:8080/"} id="mUBE653l0Yjm" outputId="d56af406-d9b8-404b-bf17-b81aa2c6acbf" num_fc_layer = 5 # number of layer params_fix = np.random.uniform(size=(2, num_fc_layer, 6)) print(params_fix) # + id="X3Ln2XAj0Yyq" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) layer_id = 0 # the layer index to be trained, it starts from zero because of Numpy convention @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params_fix[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): if l == layer_id: # train only the specified layer qml.Rot(*(params[0][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][3*g:3*(g+1)]), wires=q) else: qml.Rot(*(params_fix[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params_fix[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="1dpf-rjN0hyf" n_component = 6 # number of features used X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="1RB5piW-0h8R" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="uPGrIIv60iIB" filepath = "/content/sample_data/Best6_layer5(layer_id=0)_set1000_8epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + colab={"base_uri": "https://localhost:8080/"} id="vU8-Kr6V050y" outputId="7cbb1c2e-9c99-4327-846d-7413d5569f89" H = model.fit(X_train, to_categorical(Y_train), epochs=1, batch_size=256, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + colab={"base_uri": "https://localhost:8080/"} id="vM8ixM-aJmxz" outputId="b1ea6693-af61-4a6f-ff1e-92cfa3f8e2e5" # !pwd # !ls /content/sample_data/Best # + id="yBjtr3Gt06BF" model.load_weights('/content/sample_data/Best6_layer5(layer_id=0)_set1000_8epoch_saved-model-03.hdf5') # keep that parameters and replace the initial one params_layer_0 = model.get_weights()[0] params_fix[:, 0, :] = params_layer_0 # + [markdown] id="S8D2OrBLJJvI" # **2nd Layer** # + id="PRMuEmwjI78q" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) layer_id = 1 # the layer index to be trained, it starts from zero because of Numpy convention @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params_fix[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): if l == layer_id: # train only the specified layer qml.Rot(*(params[0][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][3*g:3*(g+1)]), wires=q) else: qml.Rot(*(params_fix[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params_fix[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="Jcl4-trtJdrq" n_component = 6 # number of features used X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="NIEhpBJaJd3U" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="1rj2LIT-Jd6o" # Note: please put the correct directory to save the weights accordingly if you want to run the code filepath = "/content/sample_data/Best6_layer5(layer_id=1)_3epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + id="4sDPQDQqJd-a" colab={"base_uri": "https://localhost:8080/"} outputId="b903f9a1-1ae4-4497-e4f1-e81cf6944177" H = model.fit(X_train, to_categorical(Y_train), epochs=1, batch_size=256, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + id="BnpMQPejJeB9" # load the parameters that is considered giving the best balance between train and test accuracy model.load_weights('/content/sample_data/Best6_layer5(layer_id=1)_3epoch_saved-model-02.hdf5') # keep that parameters and replace the initial one params_layer_1 = model.get_weights()[0] params_fix[:, 1, :] = params_layer_1 # + [markdown] id="3IfcAMhiJr8n" # **3rd Layer** # + id="zkiS9SqUJeJR" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) layer_id = 2 # the layer index to be trained, it starts from zero because of Numpy convention @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params_fix[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): if l == layer_id: # train only the specified layer qml.Rot(*(params[0][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][3*g:3*(g+1)]), wires=q) else: qml.Rot(*(params_fix[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params_fix[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="1C8tPRqtJeMD" n_component = 6 # number of features used X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="T32BcvXgJePB" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="xjeKolFYJeSJ" # Note: please put the correct directory to save the weights accordingly if you want to run the code filepath = "/content/sample_data/Best6_layer5(layer_id=2)_3epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + id="SAm0dk1gJeVL" colab={"base_uri": "https://localhost:8080/"} outputId="f39e8f33-6ecf-46f5-cc7a-6f20651b1a31" H = model.fit(X_train, to_categorical(Y_train), epochs=3, batch_size=256, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + id="w2zUR-P6JeYj" # load the parameters that is considered giving the best balance between train and test accuracy model.load_weights('./Model/Best6_layer5(layer_id=2)_set5000_10epoch_saved-model-08.hdf5') # keep that parameters and replace the initial one params_layer_2 = model.get_weights()[0] params_fix[:, 2, :] = params_layer_2 # + [markdown] id="X_RxcJ8rJ-8g" # **4th Layer** # + id="QjH0FSD-Jeb0" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) layer_id = 3 # the layer index to be trained, it starts from zero because of Numpy convention @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params_fix[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): if l == layer_id: # train only the specified layer qml.Rot(*(params[0][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][3*g:3*(g+1)]), wires=q) else: qml.Rot(*(params_fix[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params_fix[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="Y0mlC1teKGdK" n_component = 6 # number of features used X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="InoTB0x2KGpD" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="GZ3QYe55KGv2" # Note: please put the correct directory to save the weights accordingly if you want to run the code filepath = "./Model/Best6_layer5(layer_id=3)_set5000_10epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + id="xRUvb7oYKGxs" H = model.fit(X_train, to_categorical(Y_train), epochs=10, batch_size=128, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + id="wyzoJqVMKG0M" # load the parameters that is considered giving the best balance between train and test accuracy model.load_weights('./Model/Best6_layer5(layer_id=3)_set5000_10epoch_saved-model-08.hdf5') # keep that parameters and replace the initial one params_layer_3 = model.get_weights()[0] params_fix[:, 3, :] = params_layer_3 # + [markdown] id="dM1OX1lTKXKU" # **5th Layer** # + id="4C2thMTSKG2a" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) layer_id = 4 # the layer index to be trained, it starts from zero because of Numpy convention @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params_fix[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): if l == layer_id: # train only the specified layer qml.Rot(*(params[0][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][3*g:3*(g+1)]), wires=q) else: qml.Rot(*(params_fix[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params_fix[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="TBx6jcxAKG4t" n_component = 6 # number of features used X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="LFrr5qt5KG74" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="JlGzjF7EKG_W" # Note: please put the correct directory to save the weights accordingly if you want to run the code filepath = "./Model/Best6_layer5(layer_id=4)_set5000_10epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + id="iuyfflQDKHN_" H = model.fit(X_train, to_categorical(Y_train), epochs=10, batch_size=128, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + id="vadhmtfUKHRP" # load the parameters that is considered giving the best balance between train and test accuracy model.load_weights('./Model/Best6_layer5(layer_id=4)_set5000_10epoch_saved-model-08.hdf5') # keep that parameters and replace the initial one params_layer_4 = model.get_weights()[0] params_fix[:, 4, :] = params_layer_4 # + [markdown] id="cPynKdGdKuhd" # **Final Training** # + id="QZkLkl31KHUH" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs (array[float]): 1-d input vector (data sample) Returns: array[float]: 1-d output vector in the form of [alpha0*, alpha1*] """ # layer iteration for l in range(len(params[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)] # + id="eOYdQKhaKzNl" n_component = 6 X = tf.keras.Input(shape=(n_component,), name='Input_Layer') # Quantum Layer num_fc_layer = 5 q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, n_component)}, output_dim=2)(X) # Alpha Layer alpha_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=alpha_layer) # + id="hEe9DbG4KzQp" # view each of the layer's name and its specifications model.summary() # + id="7t5UOSPcKzTx" model(X_train[0:3]) # set the initial weights to the params_fix that has been optimized layer by layer # Note: please change the name of the keras layer accordingly based on the model.summary() model.get_layer('keras_layer').set_weights([params_fix]) # + id="VrFbdrrZKzWy" opt = tf.keras.optimizers.Adam(learning_rate=0.05) model.compile(opt, loss='mse', metrics=["accuracy"]) # + id="Vx1gDFVFKzZ7" # Note: please put the correct directory to save the weights accordingly if you want to run the code filepath = "./Model/Best6_layer5(layer_id=all)_set5000_10epoch_saved-model-{epoch:02d}.hdf5" checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=True, save_best_only=False, mode='auto') # + id="hja6mSU6Kzbv" H = model.fit(X_train, to_categorical(Y_train), epochs=10, batch_size=128, initial_epoch=0, validation_data=(X_test, to_categorical(Y_test)), verbose=1, callbacks=[checkpoint]) # + id="7Q8_pMeyKzfJ" # load the parameters that is considered giving the best balance between train and test accuracy model.load_weights('./Model/Best6_layer5(layer_id=all)_set5000_10epoch_saved-model-08.hdf5') # keep the parameters as the best params, also keep the weight vector alpha best_params = model.get_weights()[0] alpha = model.get_weights()[1] # + [markdown] id="vS0vPxOxLMfb" # **Testing the Performance** # + id="GQMr8Vt4KzhM" best_params # + id="_bNDu_fsKzje" alpha # + id="vI1eaf9jKzlv" # view each of the layer's name and its specifications model.summary() # + id="mApm_szWKHW0" # load the parameters to the model # Note: please change the name of the keras layer accordingly based on the model.summary() model.get_layer('keras_layer').set_weights([best_params]) model.get_layer('class_weights').set_weights([np.array([alpha])]) # + id="rEySODFqKHaP" from sklearn.metrics import roc_auc_score # + id="LLEx0j2sKHdF" #AUC for training set Y_pred_train = model.predict(X_train) roc_auc_score(Y_train, Y_pred_train[:, 1]) # + id="5VDEQpsnKHez" #AUC for testing set Y_pred_test = model.predict(X_test) roc_auc_score(Y_test, Y_pred_test[:, 1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def my_func(x, y): pass def my_sq(x): return x**2 my_sq(2) my_sq(4) #Testing the function assert my_sq(4) == 16 def avg_2(x,y): return (x+y)/2 avg_2(10,1) import pandas as pd df = pd.DataFrame({ 'a': [10,20,30], 'b': [20,30,40] }) df df['a'] ** 2 df['a'].apply(my_sq) def 
my_exp(x, e): return x ** e my_exp(2,10) df['a'].apply(my_exp, e=4) def print_me(x): print(x) df.apply(print_me) def avg(x, y, z): return (x + y + z)/3 df.apply(avg) import numpy as np def avg_apply(col): return np.mean(col) df.apply(avg_apply) def avg(col): x = col[0] y = col[1] z = col[2] return (x + y + z)/3 df.apply(avg) df.apply(avg,axis=0) def avg_2_mod(x,y): if (x==20): return np.NAN else: return (x+y)/2 avg_2_mod(df['a'],df['b']) avg_2_mod_vec=np.vectorize(avg_2_mod) avg_2_mod_vec(df['a'],df['b']) @np.vectorize def avg_2_mod(x,y): if (x==20): return np.NAN else: return (x+y)/2 avg_2_mod(df['a'],df['b']) import numba @numba.vectorize def avg_2_mod_numba(x,y): if (x==20): return np.NAN else: return (x+y)/2 avg_2_mod_numba(df['a'],df['b']) avg_2_mod_numba(df['a'].values,df['b'].values) # %%timeit avg_2(df['a'],df['b']) # %%timeit avg_2_mod(df['a'],df['b']) # %%timeit avg_2_mod_numba(df['a'].values,df['b'].values) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.10 64-bit (''base'': conda)' # name: python3 # --- # + id="2pmrZHSMvYUC" import tensorflow as tf ## declare the data x_data = [[0.,0.], [0.,1.], [1.,0.],[1.,1.]] y_data = [[0.], [0.], [0.], [1.]] test_data=[[0.3, 0.3]] # + colab={"base_uri": "https://localhost:8080/"} id="y8doDlvfvZNo" executionInfo={"status": "ok", "timestamp": 1628761343229, "user_tz": -540, "elapsed": 5, "user": {"displayName": "\uae40\ud604\uc6b0", "photoUrl": "", "userId": "06560543018646300359"}} outputId="a178216c-6600-4acd-febf-d0c61e0b807f" ## implement a perceptron model using tf.keras model = tf.keras.Sequential() ## declare a Sequential model; layers are then stacked onto it to build the model model.add(tf.keras.layers.Dense(1, input_dim=2)) # stack layers onto the declared model with add; here there are 2 input variables and 1 perceptron model.summary() ## print a summary of the model we designed # + colab={"base_uri": "https://localhost:8080/"} id="RdMH3P_YvaAo" executionInfo={"status": "ok", "timestamp": 1628761422872, "user_tz": -540, "elapsed": 79646, "user": {"displayName": "\uae40\ud604\uc6b0", "photoUrl": "", "userId": "06560543018646300359"}} outputId="66022d34-e062-4875-f69c-170c88ce5945" # choose the model's loss and training method optimizer=tf.keras.optimizers.SGD(lr=0.001) ### declare the optimization method: gradient descent, which searches for the global minimum loss=tf.keras.losses.binary_crossentropy #losses.mean_squared_error ## define the error between predictions and labels; mse is mean squared error, (prediction - label)^2 metrics=tf.keras.metrics.binary_accuracy ### declare the metrics to evaluate during training # compile the model model.compile(loss=loss, optimizer=optimizer, metrics=[metrics]) # train the model model.fit(x_data, y_data, epochs=500, batch_size=4, workers=8, use_multiprocessing=True) # + id="ndaBss6yva4w" # print the results print(model.weights) print(" test data [0.3, 0.3] predicted value : ", model.predict(test_data)) if model.predict(test_data)>0.5: print(" pass " ) else: print(" fail ") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Project: Train a Quadcopter How to Fly # # Design an agent to fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice! # # Try to apply the techniques you have learnt, but also feel free to come up with innovative ideas and test them. # ## Instructions # # Take a look at the files in the directory to better understand the structure of the project. # # - `task.py`: Define your task (environment) in this file.
# - `agents/`: Folder containing reinforcement learning agents. # - `policy_search.py`: A sample agent has been provided here. # - `agent.py`: Develop your agent here. # - `physics_sim.py`: This file contains the simulator for the quadcopter. **DO NOT MODIFY THIS FILE**. # # For this project, you will define your own task in `task.py`. Although we have provided a example task to get you started, you are encouraged to change it. Later in this notebook, you will learn more about how to amend this file. # # You will also design a reinforcement learning agent in `agent.py` to complete your chosen task. # # You are welcome to create any additional files to help you to organize your code. For instance, you may find it useful to define a `model.py` file defining any needed neural network architectures. # # ## Controlling the Quadcopter # # We provide a sample agent in the code cell below to show you how to use the sim to control the quadcopter. This agent is even simpler than the sample agent that you'll examine (in `agents/policy_search.py`) later in this notebook! # # The agent controls the quadcopter by setting the revolutions per second on each of its four rotors. The provided agent in the `Basic_Agent` class below always selects a random action for each of the four rotors. These four speeds are returned by the `act` method as a list of four floating-point numbers. # # For this project, the agent that you will implement in `agents/agent.py` will have a far more intelligent method for selecting actions! # + import random class Basic_Agent(): def __init__(self, task): self.task = task def act(self): new_thrust = random.gauss(450., 25.) return [new_thrust + random.gauss(0., 1.) for x in range(4)] # - # Run the code cell below to have the agent select actions to control the quadcopter. # # Feel free to change the provided values of `runtime`, `init_pose`, `init_velocities`, and `init_angle_velocities` below to change the starting conditions of the quadcopter. # # The `labels` list below annotates statistics that are saved while running the simulation. All of this information is saved in a text file `data.txt` and stored in the dictionary `results`. # + # %load_ext autoreload # %autoreload 2 import csv import numpy as np from tasks import ExampleTask # Modify the values below to give the quadcopter a different starting position. runtime = 5. # time limit of the episode init_pose = np.array([0., 0., 10., 0., 0., 0.]) # initial pose init_velocities = np.array([0., 0., 0.]) # initial velocities init_angle_velocities = np.array([0., 0., 0.]) # initial angle velocities file_output = 'data.txt' # file name for saved results # Setup task = ExampleTask(init_pose, init_velocities, init_angle_velocities, runtime) agent = Basic_Agent(task) done = False labels = ['time', 'x', 'y', 'z', 'phi', 'theta', 'psi', 'x_velocity', 'y_velocity', 'z_velocity', 'phi_velocity', 'theta_velocity', 'psi_velocity', 'rotor_speed1', 'rotor_speed2', 'rotor_speed3', 'rotor_speed4'] results = {x : [] for x in labels} # Run the simulation, and save the results. 
with open(file_output, 'w') as csvfile: writer = csv.writer(csvfile) writer.writerow(labels) while True: rotor_speeds = agent.act() _, _, done = task.step(rotor_speeds) to_write = [task.sim.time] + list(task.sim.pose) + list(task.sim.v) + list(task.sim.angular_v) + list(rotor_speeds) for ii in range(len(labels)): results[labels[ii]].append(to_write[ii]) writer.writerow(to_write) if done: break # - # Run the code cell below to visualize how the position of the quadcopter evolved during the simulation. # + import matplotlib.pyplot as plt # %matplotlib inline plt.plot(results['time'], results['x'], label='x') plt.plot(results['time'], results['y'], label='y') plt.plot(results['time'], results['z'], label='z') plt.legend() _ = plt.ylim() # - # The next code cell visualizes the velocity of the quadcopter. plt.plot(results['time'], results['x_velocity'], label='x_hat') plt.plot(results['time'], results['y_velocity'], label='y_hat') plt.plot(results['time'], results['z_velocity'], label='z_hat') plt.legend() _ = plt.ylim() # Next, you can plot the Euler angles (the rotation of the quadcopter over the $x$-, $y$-, and $z$-axes), plt.plot(results['time'], results['phi'], label='phi') plt.plot(results['time'], results['theta'], label='theta') plt.plot(results['time'], results['psi'], label='psi') plt.legend() _ = plt.ylim() # before plotting the velocities (in radians per second) corresponding to each of the Euler angles. plt.plot(results['time'], results['phi_velocity'], label='phi_velocity') plt.plot(results['time'], results['theta_velocity'], label='theta_velocity') plt.plot(results['time'], results['psi_velocity'], label='psi_velocity') plt.legend() _ = plt.ylim() # Finally, you can use the code cell below to print the agent's choice of actions. plt.plot(results['time'], results['rotor_speed1'], label='Rotor 1 revolutions / second') plt.plot(results['time'], results['rotor_speed2'], label='Rotor 2 revolutions / second') plt.plot(results['time'], results['rotor_speed3'], label='Rotor 3 revolutions / second') plt.plot(results['time'], results['rotor_speed4'], label='Rotor 4 revolutions / second') plt.legend() _ = plt.ylim() # When specifying a task, you will derive the environment state from the simulator. Run the code cell below to print the values of the following variables at the end of the simulation: # - `task.sim.pose` (the position of the quadcopter in ($x,y,z$) dimensions and the Euler angles), # - `task.sim.v` (the velocity of the quadcopter in ($x,y,z$) dimensions), and # - `task.sim.angular_v` (radians/second for each of the three Euler angles). # the pose, velocity, and angular velocity of the quadcopter at the end of the episode print(task.sim.pose) print(task.sim.v) print(task.sim.angular_v) # In the sample task in `task.py`, we use the 6-dimensional pose of the quadcopter to construct the state of the environment at each timestep. However, when amending the task for your purposes, you are welcome to expand the size of the state vector by including the velocity information. You can use any combination of the pose, velocity, and angular velocity - feel free to tinker here, and construct the state to suit your task. # # ## The Task # # A sample task has been provided for you in `task.py`. Open this file in a new window now. # # The `__init__()` method is used to initialize several variables that are needed to specify the task. # - The simulator is initialized as an instance of the `PhysicsSim` class (from `physics_sim.py`). 
# - Inspired by the methodology in the original DDPG paper, we make use of action repeats. For each timestep of the agent, we step the simulation `action_repeats` timesteps. If you are not familiar with action repeats, please read the **Results** section in [the DDPG paper](https://arxiv.org/abs/1509.02971). # - We set the number of elements in the state vector. For the sample task, we only work with the 6-dimensional pose information. To set the size of the state (`state_size`), we must take action repeats into account. # - The environment will always have a 4-dimensional action space, with one entry for each rotor (`action_size=4`). You can set the minimum (`action_low`) and maximum (`action_high`) values of each entry here. # - The sample task in this provided file is for the agent to reach a target position. We specify that target position as a variable. # # The `reset()` method resets the simulator. The agent should call this method every time the episode ends. You can see an example of this in the code cell below. # # The `step()` method is perhaps the most important. It accepts the agent's choice of action `rotor_speeds`, which is used to prepare the next state to pass on to the agent. Then, the reward is computed from `get_reward()`. The episode is considered done if the time limit has been exceeded, or the quadcopter has travelled outside of the bounds of the simulation. # # In the next section, you will learn how to test the performance of an agent on this task. # ## The Agent # # The sample agent given in `agents/policy_search.py` uses a very simplistic linear policy to directly compute the action vector as a dot product of the state vector and a matrix of weights. Then, it randomly perturbs the parameters by adding some Gaussian noise, to produce a different policy. Based on the average reward obtained in each episode (`score`), it keeps track of the best set of parameters found so far, how the score is changing, and accordingly tweaks a scaling factor to widen or tighten the noise. # # Run the code cell below to see how the agent performs on the sample task. # + import sys import pandas as pd from agents import PolicySearchAgent from tasks import ExampleTask num_episodes = 1000 target_pos = np.array([0., 0., 10.]) task = ExampleTask(target_pos=target_pos) agent = PolicySearchAgent(task) for i_episode in range(1, num_episodes+1): state = agent.reset_episode() # start a new episode while True: action = agent.act(state) next_state, reward, done = task.step(action) agent.step(reward, done) state = next_state if done: print("\rEpisode = {:4d}, score = {:7.3f} (best = {:7.3f}), noise_scale = {}".format( i_episode, agent.score, agent.best_score, agent.noise_scale), end="") # [debug] break sys.stdout.flush() # - # This agent should perform very poorly on this task. And that's where you come in! # ## Define the Task, Design the Agent, and Train Your Agent! # # Amend `task.py` to specify a task of your choosing. If you're unsure what kind of task to specify, you may like to teach your quadcopter to takeoff, hover in place, land softly, or reach a target pose. # # After specifying your task, use the sample agent in `agents/policy_search.py` as a template to define your own agent in `agents/agent.py`. You can borrow whatever you need from the sample agent, including ideas on how you might modularize your code (using helper methods like `act()`, `learn()`, `reset_episode()`, etc.). # # Note that it is **highly unlikely** that the first agent and task that you specify will learn well. 
You will likely have to tweak various hyperparameters and the reward function for your task until you arrive at reasonably good behavior. # # As you develop your agent, it's important to keep an eye on how it's performing. Use the code above as inspiration to build in a mechanism to log/save the total rewards obtained in each episode to file. If the episode rewards are gradually increasing, this is an indication that your agent is learning. # # --- # # I want to implement TD3. Pseudocode is: # # ![TD3 Algorithm](td3-pseudocode.png) # # Reference Implementation is on [GitHub](https://github.com/sfujim/TD3). # # --- # + ## TODO: Train your agent here. # needed imports -> I decided to use PyTorch to learn something not seen in the nanodegree until now import torch.nn.init as init from agents import TD3Agent from tasks import Task from utils import ReplayBuffer from collections import deque # set global configuration DICT config = { 'INIT_FN_ACTOR': init.uniform_, # function to use for weight initialization, see torch.nn.init 'INIT_W_MAX_ACTOR': 0.003, # maximum value to use if uniform initialization of actor is used 'INIT_W_MIN_ACTOR': -0.003, # minimum value to use if uniform initialization of actor is used 'INIT_W_ACTOR': 0., # fixed value to use if init.constant_ initialization is used 'INIT_FN_CRITIC': init.uniform_, # function to use for weight initialization, see torch.nn.init 'INIT_W_MAX_CRITIC': 0.003, # maximum value to use if uniform initialization of critic is used 'INIT_W_MIN_CRITIC': -0.003, # minimum value to use if uniform initialization of critic is used 'INIT_W_CRITIC': 0., # fixed value to use if init.constant_ initialization is used 'LR_ACTOR': 0.0001, # learning rate of actor optimizer 'LR_CRITIC': 0.001, # learning rate of critic optimizer 'WEIGHT_DECAY': 0.0005, # weight decay of both optimizers 'BATCH_SIZE': 100, # size of mini-batch to fetch from memory store during training 'DISCOUNT': 0.99, # discount factor (gamma) to use 'TAU': 0.001, # factor to use for soft update of target parameters 'POLICY_NOISE': 0.2, # noise added to policy smoothing (sections 5.3, 6.1) 'NOISE_CLIP': 0.5, # noise clip for policy smoothing (sections 5.3, 6.1) 'REWARD_SCALE': 1.0, # reward scaling factor used 'POLICY_FREQ': 2, # update frequence for target networks (sections 5.2, 6.1) 'ACTION_REPEAT': 3, # how often the Task should get next_timestamp per step 'BUFFER_SIZE': 1_000_000, # replay memory buffer size 'EXPLORATION_NOISE': 0.5, # noise from normal distribution to add during exploration (paper table 3) 'NUM_EPISODES': 2000, # total number of episodes 'TASK_RUNTIME': 5.0, # the time horizon of a single task } # create global list to hold episodic rewards -> for logging all_episode_reward = list() # create global deque to hold just the last 100 rewards -> for logging last_rewards = deque(maxlen=100) # initialize the task to solve # the quadcopter should forward in every direction task = Task(init_pose=np.array([10., 10., 0., 0., 0., 0.]), init_velocities=np.array([0.1, 0.1, 0.]), init_angle_velocities=np.array([0., 0., 0.]), target_pos=np.array([20., 20., 110., 0., 0., 0.]), runtime=config.get('TASK_RUNTIME', 5.0), action_repeat=config.get('ACTION_REPEAT', 3)) # initialize the agent agent = TD3Agent(task=task, parameters=config) # PSEUDO CODE: initialize replay buffer memory = ReplayBuffer(config.get('BUFFER_SIZE', 1_000_000)) # - ## start training for i_episode in range(1, config.get('NUM_EPISODES', 1000)+1): steps = 0 episode_reward = 0. reward = 0. 
state = agent.reset() while True: # PSEUDO CODE: select action with exploration noise # Paper section 6.1 states that a Gaussian noise instead # of Ornstein-Uhlenbeck was used because the latter offered # no performance benefits noise = np.random.normal(0, config.get('EXPLORATION_NOISE', 0.1), size=task.action_size).clip(task.action_low, task.action_high) action = agent.act(state) action += noise # PSEUDO CODE: execute action in environment next_state, reward, done = task.step(action) # PSEUDO CODE: store transition tuple (state, action, reward, next_state) in memory buffer # ignore 'done' signal if maximum runtime is reached done_bool = 0. if task.sim.time + task.sim.dt > task.sim.runtime else float(done) memory.add((state, action, reward, next_state, done_bool)) # sum up reward episode_reward += reward # progress state state = next_state # increment step# steps += 1 # PSEUDO CODE: After each time step, the networks are trained with a # mini-batch of a 100 transitions, sampled uniformly from a replay buffer # containing the entire history of the agent. # --- like the implementation on GitHub training is delayed until end # of a episode if done: # add cumulative reward to global lists all_episode_reward.append(episode_reward) last_rewards.append(episode_reward) # train using a mini batch as often as steps done in this episode agent.update(memory=memory, episode_steps=steps) # print progress after each episode print('\rEpisode {:4d} used {:4d} steps, reward = {:7.3f}, average (100) = {:7.3f}, [{:3d}][{:3d}][{:3d}]'.format( i_episode, steps, episode_reward, np.mean(last_rewards), int(task.sim.pose[0]), int(task.sim.pose[1]), int(task.sim.pose[2])), end="") break sys.stdout.flush() # ## Plot the Rewards # # Once you are satisfied with your performance, plot the episode rewards, either from a single run, or averaged over multiple runs. ## TODO: Plot the rewards. import seaborn as sns sns.set() plt.figure(figsize=(15,5)) plt.grid(True) plt.plot(all_episode_reward, '.', alpha=0.5, color='red') plt.plot(np.convolve(all_episode_reward, np.ones(21)/21)[(21-1)//2:-21], color='red', label='Reward per Episode') plt.ylabel('Reward') plt.xlabel("Episode #") plt.legend(loc=2) plt.xlim(0, len(all_episode_reward)) plt.show() # ## Reflections # # **Question 1**: Describe the task that you specified in `task.py`. How did you design the reward function? # # **Answer**: # # I decided to let the Quadcopter start at an initial position and take some units in every direction, the largest in the Z-direction. # # In the first versions rewards were given in a _positive_ way. The idea was to only punish the agent if the quadcopter hits the wall. That means if the position (either x,y, or z) is at the lower or upper bounds of the environment. The reward was designed as # # - start with a zero reward # - calculate the distance between the points (current position and target position) and add this distance between the points as reward if the new distance is smaller than the initial distance. For example: if the distance between target and initial position is 10 and the distance between target and current position is 2 then the reward is 8. This reward is only given if the distance between the quadcopter and the target position decreased. In addition this reward is multiplied by 100 to get a high reward if the distance decreases. 
# - the only punishment is to subtract 10 points if the wall is touched # - for every x, y, or z coordinate that matches the target position a reward of 1 is added # - for positive velocity in any direction a reward of 1 is added # - if the x, y, and z positions of the target are all reached a reward of 1000 is given # # Additionally I tried to punish an increasing distance by subtracting the calculated distance from the reward whenever the distance grew. The corresponding code was # # if current_distance < initial_distance: # reward += initial_distance - current_distance # else: # reward -= current_distance # # But adding the `else` branch did not improve the performance. # # --- # # Finally I decided to give no reward for just running and to only add a small reward (1) whenever one of the target coordinates (x, y, or z) is reached. If all three coordinates are reached at the same time, I add 10 to the reward. # **Question 2**: Discuss your agent briefly, using the following questions as a guide: # # - What learning algorithm(s) did you try? What worked best for you? # - What was your final choice of hyperparameters (such as $\alpha$, $\gamma$, $\epsilon$, etc.)? # - What neural network architecture did you use (if any)? Specify layers, sizes, activation functions, etc. # # **Answer**: # # As written above I decided to implement `Twin Delayed Deep Deterministic Policy Gradients (TD3)`. This decision came from reading widely about reinforcement learning. I first read # # - [Reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) # - [Policy Gradient Algorithms](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html) # # After that I read about the algorithms mentioned there. I did not read every paper in detail, but skimmed all of them before settling on the submitted solution. # # - [Continuous control with deep reinforcement learning](https://arxiv.org/abs/1509.02971v5) # - [Trust Region Policy Optimization](https://arxiv.org/abs/1502.05477) # - [Asynchronous Methods for Deep Reinforcement Learning](https://arxiv.org/abs/1602.01783) # - [Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation](https://arxiv.org/abs/1708.05144) # - [Continuous Deep Q-Learning with Model-based Acceleration](https://arxiv.org/abs/1603.00748) # - [OpenAI Baselines: ACKTR & A2C](https://openai.com/blog/baselines-acktr-a2c/) # - [Off-policy evaluation for MDPs with unknown structure](https://arxiv.org/abs/1502.03255) # - [Reinforcement Learning (DQN) Tutorial](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) # - [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952) # - [Hindsight Experience Replay](https://arxiv.org/abs/1707.01495) # - [Learning Multi-Level Hierarchies with Hindsight](https://arxiv.org/abs/1712.00948) # - [Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347) # - [Distributed Distributional Deterministic Policy Gradients](https://arxiv.org/abs/1804.08617) # - [Addressing Function Approximation Error in Actor-Critic Methods](https://arxiv.org/abs/1802.09477) # - [Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions](https://link.springer.com/article/10.1007/s00422-009-0295-8) # # As the **Instructions and Hints** in the classroom suggested _DDPG_, I decided to implement TD3 because it tries to address some of the *issues* found in DDPG (a short, hedged sketch of its key ingredients follows below).
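# To make that concrete, here is a minimal, hedged sketch (not the actual code in
# `agents/agent.py`) of the TD3 target computation that tackles DDPG's overestimation:
# target policy smoothing, clipped double-Q targets and, together with `POLICY_FREQ`,
# delayed actor/target updates. All network and argument names below are illustrative.
# +
import torch

def td3_target(actor_target, critic1_target, critic2_target,
               next_states, rewards, dones,
               gamma, policy_noise, noise_clip, action_low, action_high):
    """Clipped double-Q learning target y, used to regress both critics."""
    with torch.no_grad():
        # target policy smoothing: perturb the target action with clipped Gaussian noise
        next_actions = actor_target(next_states)
        noise = (torch.randn_like(next_actions) * policy_noise).clamp(-noise_clip, noise_clip)
        next_actions = (next_actions + noise).clamp(action_low, action_high)
        # clipped double-Q: take the element-wise minimum of the two target critics
        target_q = torch.min(critic1_target(next_states, next_actions),
                             critic2_target(next_states, next_actions))
        # bootstrapped TD target; 'dones' masks out terminal transitions
        return rewards + gamma * (1.0 - dones) * target_q
# -
# Both critics are trained towards this single target, while the actor and the target
# networks are only updated every `POLICY_FREQ`-th critic update (here every 2nd).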
And I have to admit that I didn't want to just copy and paste the code given by Udacity. # # Unfortunately I made several mistakes during the implementation (read below) and I am short of time now. Hence I did not try other algorithms. I wanted to compare TD3 with [SAC](https://arxiv.org/abs/1812.05905) and/or [PPO](https://arxiv.org/abs/1707.06347) but only implemented TD3. # # The agent uses PyTorch, as I read that PyTorch and Caffe2 had been merged. And since we only used Keras and TensorFlow so far, I wanted to learn a bit of PyTorch as well. # # The pseudocode of TD3 is given above, and I followed it for the implementation. The reference implementation uses only one critic network that is logically split into two: the first three layers serve as Q1 and the other three as Q2. I implemented the two critics as separate models. # # The replay buffer used first was a simple deque-based one. Initially the neural networks used the same layout as in the TD3 research paper. # # Actor networks: # # (state dim, 400) # ReLU # (400, 300) # ReLU # (300, action dim) # tanh # # Critic networks: # # (state dim + action dim, 400) # ReLU # (400, 300) # ReLU # (300, 1) # # In the run above I switched the layout, inspired by [CEM-RL](https://arxiv.org/abs/1810.01222), to # # Actor networks # # (state dim, 400) # tanh # (400, 300) # tanh # (300, action dim) # tanh # # Critic networks # # (state dim + action dim, 400) # Leaky ReLU # (400, 300) # Leaky ReLU # (300, 1) # # Additionally I implemented a ReplayBuffer that applies a softmax over the rewards stored in the buffer, so that the transitions with the highest rewards are sampled first. # # The hyperparameters used first are the same as in the research paper. # # | Parameter | value | # |-----------|---------| # | $\gamma$ | 0.99 | # | $\tau$ | 0.05 | # | $\epsilon$ | $\mathcal{N}$ (0, 0.2) clipped to (-0.5, 0.5) | # | $\alpha$ of actor optimizer | 0.01 | # | $\alpha$ of critic optimizer | 0.01 | # | Optimizer of all networks | Adam | # | Reward scaling | 1.0 | # | Exploration noise | $\mathcal{N}$ (0, 0.1) | # | Mini-Batch size | 100 | # # As written in the research paper I used an update frequency of **2**, so the target networks are only updated every second learning cycle. # # Additionally I decided to manually initialize the network weights from a uniform distribution over (-0.005, 0.005) and to use no weight decay for the optimizer. # # During training the agent had some problems (read below); the hyperparameters finally used can be seen in the config dict above. # **Question 3**: Using the episode rewards plot, discuss how the agent learned over time. # # - Was it an easy task to learn or hard? # - Was there a gradual learning curve, or an aha moment? # - How good was the final performance of the agent? (e.g. mean rewards over the last 10 episodes) # # **Answer**: # # As seen in the reward plot and the final position of the quadcopter, the task was not learned by this agent. I think I chose one of the hardest tasks: starting at one point and blindly finding another one. I assume that lifting off or landing the quadcopter is easier to train. # # I assume that I did not find a reward function good enough to help the agent.
The other things I tried: # # - running for more episodes (2000 instead of 1000) # - changed the network architecture # - changed the learning rates of actor and critics # - changed the discount factor # - changed tau # - changed the exploration noise # - changed the replay buffer implementation # - changed the task runtime # **Question 4**: Briefly summarize your experience working on this project. You can use the following prompts for ideas. # # - What was the hardest part of the project? (e.g. getting started, plotting, specifying the task, etc.) # - Did you find anything interesting in how the quadcopter or your agent behaved? # # **Answer**: # # This project was very hard for me. I realized that I did not fully understand the concepts during the classroom sessions and the mini projects. I only really understood them after reading a lot of papers (not only the list above) and blog posts (mainly on medium.com). # # I started by implementing some of the replay buffer concepts, as this class is used as a utility class and I wanted to have it done first. I tried the simple replay buffer (easy to understand), the rank-based prioritized replay buffer (using a heapq), the TD-error-based prioritized replay buffer (using a binary segment tree) and implemented two of my own that use the obtained rewards as the prioritization. The first one works as follows: # # split the length of the deque into sample_size / 2 segments # for every segment # get the index of the sample that contains the highest reward # add this sample to the return list # if this index is at the first position of the segment # add the next sample to the return list # else # add the previous sample to the return list # return sample list # # The second one simply stores the observations in two deque objects: one holding all observations and the other holding only the rewards gained. During sampling I use a `softmax` to squeeze the rewards into the probability range 0-1 and feed that into `numpy.random.choice`, so that the observations with the highest rewards are sampled (which means training on these samples is biased, because the same observations will be drawn over and over again); a minimal sketch of this sampling scheme follows at the end of this answer. # # After that I needed to learn how to use PyTorch. Thanks to the excellent documentation this was easy enough. I implemented the TD3 agent and the Actor and Critic models as described above. But something went wrong. I made two errors: # # 1. the usage of the replay buffer was not implemented correctly # 2. I made some errors implementing and training the Critic networks # # Fixing that took too much time. Hence I went back to the softmax replay buffer (which uses the same structure as the simple replay buffer) and finally got the Critic correct (at least I think so). # # One very bad problem remains: whatever I do, my agent gets stuck. As written above, I changed the reward function several times. I changed the hyperparameters several times. I changed the network architecture. But the agent often pushes the quadcopter to the boundary of the area very early and then learns to stay there. For example, in several runs the Z-position was trained to 300, or the Y-position remained at 0 all the time. I changed the Z-position of the initial and target positions and set the initial velocity in the Z-direction to 0. That just freezes the Z-position at around 127 (instead of 110). The X and Y positions are not learned at all. # # I assume that I'm not able to find a reward function that helps the agent to understand the task at hand.
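# As referenced above, a minimal sketch of the reward-weighted ("softmax") sampling idea
# behind the second buffer. This is illustrative only and not the exact code in `utils.py`;
# the class and method names are made up for the example.
# +
from collections import deque
import numpy as np

class SoftmaxReplayBuffer:
    """Deque-backed buffer that prefers high-reward transitions when sampling."""

    def __init__(self, max_size=1_000_000):
        self.experiences = deque(maxlen=max_size)  # (state, action, reward, next_state, done)
        self.rewards = deque(maxlen=max_size)      # rewards only, used as sampling weights

    def add(self, experience):
        self.experiences.append(experience)
        self.rewards.append(experience[2])

    def sample(self, batch_size=100):
        # softmax over the stored rewards -> sampling probabilities in [0, 1],
        # so the highest-reward transitions dominate the mini-batch (a biased sample)
        r = np.asarray(self.rewards, dtype=np.float64)
        p = np.exp(r - r.max())
        p /= p.sum()
        idx = np.random.choice(len(self.experiences), size=batch_size, p=p)
        return [self.experiences[i] for i in idx]
# -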
But I do not want to train a startup-task or a hover task. It should be possible to train the agent to find the target position. # # In addition I did not understand `physics_sim.py`. Perhaps I need to change state or action size. But I have no idea about how to do that. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Okfr_uhwhS1X" colab_type="text" # # Lambda School Data Science - A First Look at Data # # # + [markdown] id="9dtJETFRhnOG" colab_type="text" # ## Lecture - let's explore Python DS libraries and examples! # # The Python Data Science ecosystem is huge. You've seen some of the big pieces - pandas, scikit-learn, matplotlib. What parts do you want to see more of? # + id="WiBkgmPJhmhE" colab_type="code" outputId="8e564c02-c8d4-482c-f837-eca9403bf41e" colab={"base_uri": "https://localhost:8080/", "height": 36} # TODO - we'll be doing this live, taking requests # and reproducing what it is to look up and learn things 2 + 2 # + id="oRULtcvouo8Y" colab_type="code" colab={} import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + id="-rmZBFpTuscF" colab_type="code" colab={} drinks = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/alcohol-consumption/drinks.csv') # + id="i-Ybt6vFvnXH" colab_type="code" outputId="ef6b7759-5676-4337-fe44-5d6f55cbd4c2" colab={"base_uri": "https://localhost:8080/", "height": 363} drinks.sort_values(by='beer_servings', ascending=False).head(10) # + id="cUAAj3-PyvTx" colab_type="code" colab={} drinks['drinks_alcohol'] = np.where(drinks['total_litres_of_pure_alcohol'] > 0, 'Yes', 'No') # + id="u1ms_cofzzYE" colab_type="code" outputId="c37e04d0-e899-4728-873a-8923c8abcbb2" colab={"base_uri": "https://localhost:8080/", "height": 206} drinks.head() # + id="Ormn2rTh0CFl" colab_type="code" outputId="715d48b9-6ad2-40d9-bf54-c78908fbcad5" colab={"base_uri": "https://localhost:8080/", "height": 441} drinks.total_litres_of_pure_alcohol.hist() drinks['total_litres_of_pure_alcohol'].describe() # + id="FSfBeSbs1Xj0" colab_type="code" colab={} drinks['drinks_alcohol'] = np.where(drinks['total_litres_of_pure_alcohol'] > 7.2, 'high', np.where(drinks['total_litres_of_pure_alcohol'] > 4.2, 'medium', np.where(drinks['total_litres_of_pure_alcohol'] == 0, 'none', 'low'))) # + id="EKnQISGC2BVW" colab_type="code" outputId="6b749d2c-d898-4b02-be72-eeb07d6f1592" colab={"base_uri": "https://localhost:8080/", "height": 113} drinks['drinks_alcohol'].value_counts() # + id="GqJUEDrU33HW" colab_type="code" outputId="35ae2234-9cc4-45c8-c93e-534281b77567" colab={"base_uri": "https://localhost:8080/", "height": 312} countries = pd.read_csv('https://raw.githubusercontent.com/lukes/ISO-3166-Countries-with-Regional-Codes/master/all/all.csv') print(countries.shape) countries.head() # + id="gDB17JqK5xgr" colab_type="code" outputId="193ad956-07a6-4eb2-93a4-e1e59ff8859d" colab={"base_uri": "https://localhost:8080/", "height": 1000} countries[['name', 'region', 'sub-region']] # + id="rJ7dT5Ki86qG" colab_type="code" colab={} drinks.at[184, 'country'] = "United States of America" # + id="Z6u-fdj04jo5" colab_type="code" colab={} df = pd.merge(drinks, countries[['name', 'region', 'sub-region']], how='left', left_on='country', right_on='name') # + id="Y6VuvHEJ6_br" colab_type="code" 
outputId="3369f6df-a5f3-4495-98ed-5a3509040441" colab={"base_uri": "https://localhost:8080/", "height": 208} df.isnull().sum() # + id="pFc3s0fd8Kp3" colab_type="code" outputId="29911669-6195-45e1-f466-3e05434cd81d" colab={"base_uri": "https://localhost:8080/", "height": 834} df[df['name'].isna()].head(30) # + id="UBpOQ263_sKu" colab_type="code" outputId="63f08107-aa77-48a0-9369-d319dbd49a71" colab={"base_uri": "https://localhost:8080/", "height": 206} df.head() # + id="5Dp6GBHM-YBz" colab_type="code" outputId="3fd3b821-9119-49b7-cd15-c791cb87abdd" colab={"base_uri": "https://localhost:8080/", "height": 361} df['sub-region'].value_counts() # + id="HCqGDBI0_aZF" colab_type="code" outputId="e9ce722a-0b6c-43f2-f3c5-f4816d1ea2df" colab={"base_uri": "https://localhost:8080/", "height": 441} df.groupby('region')['beer_servings'].mean().plot(kind="bar", figsize=(10,6)) plt.title('Average Beer Servings by Region') plt.ylabel('Beer Servings per Year') plt.show() # + id="Tq3CNFmlAa4s" colab_type="code" outputId="5a048ec0-fb25-4099-cab8-2c279126b0d6" colab={"base_uri": "https://localhost:8080/", "height": 420} df.boxplot(column="beer_servings", by="region", figsize=(10,6)); # + id="uoew5Z_RCS4E" colab_type="code" colab={} df = df.dropna() # + id="0iRyFPPxCfte" colab_type="code" outputId="9246e676-2f90-4267-9007-0a001ea19fc2" colab={"base_uri": "https://localhost:8080/", "height": 36} df.shape # + id="NR55Rjz6B8bz" colab_type="code" outputId="fa50756c-f581-4083-e419-d805a4311139" colab={"base_uri": "https://localhost:8080/", "height": 647} from bokeh.io import output_file, show, output_notebook from bokeh.models import ColumnDataSource, HoverTool, CategoricalColorMapper from bokeh.palettes import d3 from bokeh.plotting import figure from bokeh.transform import transform output_notebook() x = df.beer_servings.values y = df.wine_servings.values country = df.country region = df.region source = ColumnDataSource(data=dict(x=x, y=y, country=country, region=region)) hover = HoverTool(tooltips=[ ("index", "$index"), ("(x,y)", "(@x, @y)"), ('country', '@country'), ]) palette = d3['Category10'][len(df['region'].unique())] mapper = CategoricalColorMapper(factors = df['region'].unique(), palette = palette) # create figure and plot p = figure(plot_width=600, plot_height=600, tools=[hover, "wheel_zoom", "box_zoom", "reset"], title="Beer and Wine Servings per Year by Country") # create plot p.scatter(x='x', y='y', size=10, alpha=0.8, color={'field': 'region', 'transform': mapper}, legend='region', source=source) # add axis labels p.xaxis.axis_label = "Beer Servings" p.yaxis.axis_label = "Wine Servings" output_file('interactive_bokeh_plot.html') show(p) # + [markdown] id="lOqaPds9huME" colab_type="text" # ## Assignment - now it's your turn # # Pick at least one Python DS library, and using documentation/examples reproduce in this notebook something cool. It's OK if you don't fully understand it or get it 100% working, but do put in effort and look things up. 
# + id="2lFAgZVv1Biu" colab_type="code" colab={} import pandas as pd import matplotlib.pyplot as plt # + id="hWWLx0Ev1VIf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 56} outputId="196092e6-8c51-448b-942a-a20d442e3233" ratings = pd.read_csv("title.ratings.tsv", sep='\t') basics = pd.read_csv("title.basics.tsv", sep='\t') # + id="UUqG2j-w2Ght" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="8199ee84-0c76-43c2-ff2a-83f9006ee835" print(ratings.shape) print(ratings.isnull().sum()) ratings.head() # + id="vacW12EN2NTd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="d170b643-0f96-488c-e6d2-ff69a6ec009e" print(basics.shape) print(basics.isnull().sum()) basics.head() # + id="UOL12t-028-r" colab_type="code" colab={} basics = basics.dropna() # + id="fDVr5HLo2Z_d" colab_type="code" colab={} df = pd.merge(ratings, basics, how='left', left_on='tconst', right_on='tconst') # + id="l1jlY1Sv2jVQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="b83f950f-40c0-4f14-fe05-6727776aea8f" print(df.shape) df.head() # + id="RQ1zIPyp2zH8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 249} outputId="29d9995c-4d13-4e75-a84a-36b8c9aa7314" df.isnull().sum() # + id="Dhr3mf1O3uqe" colab_type="code" colab={} df = df.drop(columns=['tconst', 'titleType']) # + id="g8_f3fCMffa-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="aca87bab-53d8-4500-d4be-9b36bfe0263d" df.shape # + id="8eP0RjkHnFxx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="e952624f-e3e1-40f0-a507-bd5f084c346c" df.describe() # + id="ieaUjvH6fr1M" colab_type="code" colab={} df = df[df['runtimeMinutes'] != '\\N'] df = df[df['genres'] != '\\N'] df = df[df['startYear'] != '\\N'] # + id="8qCLOXlEgxuj" colab_type="code" colab={} df['runtimeMinutes'] = df['runtimeMinutes'].astype(str).astype(int) df['startYear'] = df['startYear'].astype(str).astype(int) # + id="59ZwpsKlfxO8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="d6bd1c4d-2408-4123-8596-547c57c50dc4" df.head() # + id="PJnZ86d4f2Zr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="b5aec55e-3a19-4a0c-da2c-54d77b8e0ef6" df.dtypes # + id="2fnnkPFvpQDa" colab_type="code" colab={} df = df.loc[(df['numVotes'] > 1000) & (df['runtimeMinutes'] > 10) & (df['endYear'] == '\\N') & (df['isAdult'] == 0)] # + id="MwOG-pZa06Oh" colab_type="code" colab={} df = df.drop(columns=['isAdult']) # + id="vFLp-4Luw95-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="c78f7ac0-08f6-4e27-eda2-16381a7ec1ff" df.shape # + id="bbFVuDez3FQU" colab_type="code" colab={} df_year_count = df.groupby('startYear').size().to_frame() # + id="mUvXdKkb4GgQ" colab_type="code" colab={} df_year_count.columns = ['numMovies'] # + id="cOIWfnMM53U7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="05549315-4a80-4f13-84a7-70e869b129bc" df_year_count.shape # + id="KYUal5IO3Zim" colab_type="code" colab={} df_by_year = df.groupby('startYear').mean() # + id="aA64F7K76PIF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="eb838c68-ae2d-41c0-903a-53f746a447eb" df_by_year.shape # + id="fsPJ800Z4SeK" colab_type="code" colab={} df_by_year = df_by_year.join(df_year_count, how='left') # + id="yuJ7kj6A3lvs" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="73feaa61-3a97-4062-e48b-178b9ae5f441" df_by_year.head() # + id="kjLhzjsQzZSg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="0807d9b9-b0dd-443f-c1a0-68d6b05555c7" df_by_year['averageRating'].plot(figsize=(10,6), title="Are Movies Getting Worse?") plt.xlabel('Year') plt.ylabel('Average Rating') plt.show() # + id="qU9YtV-J1Lla" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="6c83199a-3018-410a-b19f-6f5c8040ee56" df_by_year['numVotes'].plot(figsize=(10,6), title="Are Movies Getting More Popular?") plt.xlabel('Year') plt.ylabel('Average Number of Votes') plt.show() # + id="XnTBKTna1fIS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="227d7e36-a585-490d-962d-a2786a7c3fe5" df_by_year['runtimeMinutes'].plot(figsize=(10,6), title="Are Movies Getting Longer?") plt.xlabel('Year') plt.ylabel('Average Runtime in Minutes') plt.show() # + id="EKnnRoaD6T80" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="48259487-30c1-414b-9cd2-ff0307b5ee28" df_by_year['numMovies'].plot(figsize=(10,6), title="Are There Too Many Movies?") plt.xlabel('Year') plt.ylabel('Number of Movies') plt.show() # + [markdown] id="CPe6T0jb8Fib" colab_type="text" # ### Assignment questions # # After you've worked on some code, answer the following questions in this text block: # # 1. Describe in a paragraph of text what you did and why, as if you were writing an email to somebody interested but nontechnical. # # 2. What was the most challenging part of what you did? # # 3. What was the most interesting thing you learned? # # 4. What area would you like to explore with more time? # # # # # --- # # 1. I've always been a fan of bad 80's movies, so I wanted to see if movies in the 80's were acutally worse then other decades. In order to do this, I first downloaded data from IMdb. Once I had the data I looked at it and saw that I was going to have to "clean" it up a little bit. Bascially this just means getting rid of any movies or tv shows that don't have all the information I'm interested in. I also wanted to eliminate anything that doesn't have a certain number of ratings, let's say above 1,000 votes. Now that I've got the data how I want, I group all the movies by year, and then get the mean number of votes per movie per year, the mean rating per movie per year, and the mean length per movie per year. I also decided I wanted to see which years had the most movies, so I counted them up and added that to the data set as well. Finally I plotted everything up and confirmed that yes, the 80's did infact have the worst movies. But I love them anyway! # 2. The most challenging part was deciding what data to use, what data to get rid of and what data to change. # 3. I learned that movies have been getting less reviewed, higher rated, shorter, and much more frequent in the last ~15 years. # 4. I'd like to explore more about specific genres, like have horror movies kept up with this trend. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="j4t3fBZKF8lg" import pandas as pd import numpy as np import numpy.random as ran import math NN_in = 2 def hiddenL(input_nodes): develop_Layer = [input_nodes] nodes = 2 # chosing number of nodes to be two for i in range(int(nodes)): n = 256 # number of inputs to be 256, as each value consists 256 labels L.append(int(n)) out = 1 # number of outputs to be two L.append(int(out)) return develop_Layer L_Neural = hiddenL(NN_in) for i in range(1,len(L_Neural)-1): size = len(L_Neural)-1 def gen_weight(Lin): Wt = [] for i in range(len(Lin)-1): Wt_L = ran.rand(Lin[i],Lin[i+1]) Wt_L = Wt_L / 256 Wt.append(Wt_L) return Wt Wt_Neural = gen_weight(L_NN) # + colab={} colab_type="code" id="qvmtey2pF8mT" def actPrime(output): return (1 - output * output) def fwd_Propagation(Win,Xin,Levels): VC1 = [] S_L1 = np.dot(Xin,Win[0]) theta_SL1 = np.tanh(S_L1) VC1.append(theta_SL1) theta_SLN = theta_SL1 for i in range(2,len(Levels)): S_LN = np.dot(theta_SLN,Win[i-1]) theta_SLN = np.tanh(S_LN) VC1.append(theta_SLN) return theta_SLN,S_LN,VC1 def back_Propagation(inX,expY,predY,vecTheta,layer,Next): L = [] Next = np.reshape(Next, (len(Next), 1)) for i in range(1,len(layer)): L.append(layer[i]) vec = [] vec.append(Next) for i in range(1,len(L)): theta = vecTheta[len(L)- i-1] theta = np.reshape(theta,(len(theta),1)) Cur = (Next.dot(np.transpose(Wt_Neural[len(L)-i]))) * actPrime(vecTheta[len(L)- i-1]) vec.append(Cur) inX = np.reshape(inX, (1,len(inX))) delta = np.dot(np.transpose(inX), Next) vec.append(delta) return vec # + colab={} colab_type="code" id="CCFuAYSLNK7s" df = pd.read_csv('ZipDigits_train.csv',header=None) test = pd.read_csv('ZipDigits.test',header=None, delimiter=' ') # + colab={} colab_type="code" id="abt6Rtg2F8mx" train_set = np.array(df) TestData = np.array(test) # - # ## normalise x and y terms to fit for our Neural Network # + colab={} colab_type="code" id="i44EAKHwF8m0" X1= train_set[:,1:257] X2 = TestData[:,1:257] X1 = X1 - X1.min() # normalize the values to bring them into the range 0-1 X2 = X2-X2.min() # normalize the values to bring them into the range 0-1 X1 = X1/X1.max() X2 = X2/X2.max() Y1 = train_set[:,0] Y2 = TestData[:,0] Y1 = (np.reshape(Y1, (len(Y1), 1))) Y2 = (np.reshape(Y2, (len(Y2), 1))) print(Y1.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iROACqItF8m3" outputId="c1799633-0b74-447d-d2de-f515e414c3e9" X1.shape # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Kyc9xrTEF8m8" outputId="5c143ce9-6841-40e8-b95e-62c1a44b0319" X2.shape # + [markdown] colab_type="text" id="POTtNe2tF8nT" # ## Fine tuning such that target returns valid for 1 and invalid for other inputs # - for i in range(len(Y1)): if(Y1[i]|Y2[i] != 1): Y1[i],Y2[i] = -1 # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="qR7JLeU4F8nd" outputId="701160b2-12e1-41ad-aa80-a89fc1e716e1" input_len = X1.shape[1] L_NN = hiddenL(input_len) for i in range(1,len(L_NN)-1): size = len(L_NN)-1 Wt_Neural = gen_weight(L_NN) for i in range(len(Wt_Neural)): print("No of weights into Layer-",i+1,": ",Wt_Neural[i].shape) print("Size of weight vector: ", len(Wt_Neural)) # + colab={} colab_type="code" id="Nn-klqBDF8ng" Err = [] for _ in range(2*(10**6)): Ein = 0 gradient = 0 G_Vector = 
[] i = ran.randint(len(X1)); for i in range(len(L_NN) - 1): G_Vector.append(0.0) Delout,sLNN,VNN = fwd_Propagation(Wt_Neural,X1[i],L_NN) predictedOutput = np.sign(DelOut) deltaLNN = 2 * (DelOut-Y1[i]) * actPrime(DelOut) Ein = Ein + (1/len(X1))* mean_squared_error(DelOut,Y1[i]) G_Vector[0] = G_Vector[0] + ((1/len(X1)) * (np.dot(np.transpose(X1[i]),del_vector[len(back_Propagation(X1[i],Y1[i],DelOut,VNN,L_NN,deltaLNN))-1]))) temp = [] temp.append(X1) for i in range(len(VNN)-1): temp.append(VNN[i]) for i in range(2,len(L_NN)-1): temp[i-1] = np.reshape(temp[i-1],(len(temp[i-1]),1)) GLN = np.dot(del_vector[len(del_vector)-i], temp[i-1]) G_Vector[i] = G_Vector[i] + ((1/len(X1)) * GLN[0][0]) for i in range(len(Wt_Neural)): Wt_Neural[i] = Wt_Neural[i] - (0.01) * G_Vector[i] Err.append(Ein) # + colab={"base_uri": "https://localhost:8080/", "height": 500} colab_type="code" id="qutpnLazF8nn" outputId="71fe5456-9027-4e19-c469-99eb61d8981a" import matplotlib.pyplot as plt matplotlib inline fig,ax = plt.subplots(figsize=(12,8)) ax.set_ylabel('Ein') ax.set_xlabel('Iterations') H = ax.plot(range(2*(10**6)),Err,'b.') # + [markdown] colab_type="text" id="TwtXuVf-F8nq" # ## formal test # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="qV4gnWbiF8nr" outputId="9eb6f309-bfc4-461a-84ba-ffbf8ba5cca3" from sklearn.metrics import mean_squared_error DelOutTt,sLTest,theta_VectorTest = fwd_Propagation(Wt_Neural,X2,L_NN) T_Out = np.sign(DelOutTt) errorTest = np.sqrt(mean_squared_error(Y2, T_Out)) print("predicted output: ",T_Out) # + [markdown] colab_type="text" id="wmZsm4a3h9YT" # ## final question 3 # ### 1 Variable Rate Gradient Descent # - we could see there's decline in Ein w.r.t learning curve improvement at every epoch # ### 2 and 3 steepest descent and conjugate gradient descent # - may be increasing number of epochs and increasing learning rate could improve these terms # + colab={} colab_type="code" id="5gryk-u6F8nu" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="VvMG8jtJfo8a" outputId="63924f21-6665-4381-903b-5deb2d389489" list1 = ['Symbol', 'AAPL', 'F', 'AMD', 'DIDI', 'VICI'] list1[1:] # + colab={"base_uri": "https://localhost:8080/"} id="iEKcJyFdfua0" outputId="b476e8f7-b1dc-4eac-e016-ac224d19fc95" # #!pip install yfinance # + colab={"base_uri": "https://localhost:8080/", "height": 426} id="rnB0b_XefsvU" outputId="b0cb9e05-baf5-4a4a-809f-5c921f081205" import yfinance as yf import csv import pandas as pd import numpy as np urls=list1[1:] shortName = [] symbol = [] volume = [] currentPrice = [] averageVolume = [] previousClose = [] for url in urls: tickerTag = yf.Ticker(url) shortName.append(tickerTag.info['shortName']) symbol.append(tickerTag.info['symbol']) volume.append(tickerTag.info['volume']) currentPrice.append(tickerTag.info['currentPrice']) averageVolume.append(tickerTag.info['averageVolume']) previousClose.append(tickerTag.info['previousClose']) print(shortName) print(symbol) print(volume) print(currentPrice) print(averageVolume) print(previousClose) data = [ shortName, symbol, volume, currentPrice, averageVolume, previousClose, ] df = pd.DataFrame(columns=[list1[1:]], data=data) df # + id="0uYkevDPf_mj" list = ['Symbol', 'AAPL', 'F', 'AMD', 'DIDI', 'VICI', 'NIO', 'INTC', 'T', 'VALE', 'BAC', 'NVDA', 'FB', 'BABA', 'NOK', 'ITUB', 'BBD', 'NLY', 'TWTR', 'SWN', 
'MSFT', 'SNAP', 'CMCSA', 'TDOC', 'VZ', 'PBR', 'HOOD', 'SOFI', 'SIRI', 'ABEV', 'PLTR', 'TSLA', 'BMY', 'PDD', 'PCG', 'XOM', 'KMI', 'BEKE', 'UBER', 'IQ', 'GGB', 'CLF', 'X', 'DKNG', 'C', 'AAL', 'PYPL', 'ABBV', 'CCL', 'AMC', 'KO', 'HBAN', 'OXY', 'PFE', 'WFC', 'TLRY', 'LYG', 'RBLX', 'TELL', 'GOLD', 'JD', 'WBD', 'LCID', 'AMCR', 'CS', 'DNA', 'ROKU', 'FCX', 'MU', 'DIS', 'GM', 'RIG', 'AUY', 'ET', 'PLUG', 'ZNGA', 'MRK', 'AGNC', 'KGC', 'LUMN', 'PBR-A', 'BP', 'NFLX', 'BSX', 'PINS', 'CL', 'JPM', 'CVE', 'CVX', 'TME', 'AVTR', 'GRAB', 'FTI', 'CSCO', 'QCOM', 'DAL', 'BKR', 'IBN', 'MRO', 'WU', 'APA'] # + colab={"base_uri": "https://localhost:8080/"} id="_0N5-kjXgApK" outputId="784eab8e-ac2e-491a-d8e0-90e619779c44" list[1:] print(list[1:10]) # + colab={"base_uri": "https://localhost:8080/", "height": 461} id="dzQL_YnHgFQq" outputId="c725f407-5820-4c5d-ec9b-59a621f9f460" import yfinance as yf import csv import pandas as pd import numpy as np urls=list[1:10] shortName = [] symbol = [] volume = [] currentPrice = [] averageVolume = [] previousClose = [] for url in urls: tickerTag = yf.Ticker(url) shortName.append(tickerTag.info['shortName']) symbol.append(tickerTag.info['symbol']) volume.append(tickerTag.info['volume']) currentPrice.append(tickerTag.info['currentPrice']) averageVolume.append(tickerTag.info['averageVolume']) previousClose.append(tickerTag.info['previousClose']) print(shortName) print(symbol) print(volume) print(currentPrice) print(averageVolume) print(previousClose) data = [ shortName, symbol, volume, currentPrice, averageVolume, previousClose, ] df = pd.DataFrame(columns=[list[1:10]], data=data) df # + id="7ucCLCpxiQ9_" df.to_csv('data.csv') # + colab={"base_uri": "https://localhost:8080/"} id="P8LBsJ89liv_" outputId="a4183cde-9e12-4147-a2e2-9736b5888864" # reading a second row in csv import csv noise_amp=[] #an empty list to store the second column with open('data.csv', 'r') as rf: reader = csv.reader(rf, delimiter=',') for row in reader: noise_amp.append(row[1]) # which row we need to read , 1 is frist row , 2 is second row print(noise_amp) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Beam statistics? 
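# The cell below walks over every checkpoint and evaluation dataset, reads the
# corresponding `<dataset>.pred` JSON-lines file, and aggregates how many beams were
# returned per document (`ori_pred_sents`) and how long each predicted sequence is,
# printing the summed beam length per checkpoint.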
pred_dir = "/Users/memray/Project/keyphrase/OpenNMT-kpg/output/meng17-one2many-beam10-maxlen40/pred" import os import json ckpt_list = ["kp20k-meng17-length-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_90000", "kp20k-meng17-random-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_90000", "kp20k-meng17-alphabetical-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_70000", "kp20k-meng17-no_sort-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_90000", "kp20k-meng17-verbatim_prepend-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_65000", "kp20k-meng17-verbatim_append-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_50000"] datasets = ["kp20k_valid500", "duc", "inspec", "krapivin", "nus", "semeval"] # + for ckpt in ckpt_list: print(ckpt) beam_num = [] beam_len = [] for dataset in datasets: # print(dataset) pred_json_path = os.path.join(pred_dir, ckpt, dataset + '.pred') for jsonl in open(pred_json_path, 'r'): pred = json.loads(jsonl) beams = pred["ori_pred_sents"] beam_num.append(len(beams)) beam_len.extend([len(b) for b in beams]) # print("beam number: total=%d, avg=%f" % (sum(beam_num), sum(beam_num)/len(beam_num))) # print("beam length: total=%d, avg=%f" % (sum(beam_len), sum(beam_len)/len(beam_len))) # print("%d\t%f\t%d\t%f" % (sum(beam_num), sum(beam_num)/len(beam_num), sum(beam_len), sum(beam_len)/len(beam_len))) print(sum(beam_len)) # - os.path.abspath(pred_dir) os.path.exists(pred_dir) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import sqlite3 as sql conn = sql.connect('nba.db') action = conn.cursor() nba = pd.read_csv('nba_draft.csv') type(nba) nba.info() nba = nba[['player', 'draft_yr', 'pk', 'team', 'college', 'yrs', 'games', 'minutes_played','pts', 'fg_percentage', 'ft_percentage']] action.execute("DROP TABLE IF EXISTS players ") action.execute("""CREATE TABLE players( player text, draft_yr integer, pk integer, team text, college text, yrs integer, games integer, minutes_played integer, pts integer, fg_percentage real, ft_percentage real)""") conn.commit() nba.to_sql('players', conn, if_exists='append', index=False) action.execute("SELECT * FROM players") action.fetchmany(3) action.execute("SELECT Count(*) FROM players") action.fetchall() action.execute("SELECT avg(games) FROM players") action.fetchall() action.execute("SELECT avg(games) FROM players WHERE pk <= 10") action.fetchall() action.execute("SELECT DISTINCT college FROM players ORDER BY college") action.fetchall() action.execute("SELECT count(DISTINCT college) FROM players ORDER BY college") action.fetchall() action.execute("SELECT player FROM players WHERE college = 'Yale University' ORDER BY college") action.fetchall() action.execute("SELECT avg(fg_percentage) FROM players WHERE pk <= 10") action.fetchone() action.execute("SELECT avg(fg_percentage) FROM players") action.fetchone() action.execute("SELECT max(fg_percentage) FROM players") action.fetchone() action.execute("SELECT avg(fg_percentage) FROM players WHERE games > 187") action.fetchone() action.execute("SELECT player FROM players WHERE yrs > 20 ORDER BY player") 
action.fetchall() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] deletable=true editable=true # # Step 5 # # ## Predict behavioral performance from the match of the gradients # # We combine steps 2 and 4 to investigate a potential behavioral relevance of the match of time series to gradients # + deletable=true editable=true # %matplotlib inline import numpy as np import h5py as h5 import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from scipy.stats import spearmanr from fgrad.predict import features_targets, predict_performance # + [markdown] deletable=true editable=true # ## Prepare the features # # Same as in Step 3, we simply calculate the average values for each 100-subject group # + deletable=true editable=true f = h5.File('/Users/marcel/projects/HCP/volumes_embedded_full.hdf5') d_LR = f['Working_memory/Run1'] d_RL = f['Working_memory/Run2'] # + deletable=true editable=true labels = dict() labels['WM_fix'] = 0 labels['WM_0back'] = 1 labels['WM_2back'] = 2 # Block onsets expressed as TRs # We add 6 volumes (4.32 s) to each onset to take into account hemodynamic lag # and additional 4 volumes (2.88 s) to account for instruction nback_LR_2b = np.round(np.array([7.977, 79.369, 150.553, 178.689])/0.72).astype(int)+10 nback_LR_0b = np.round(np.array([36.159, 107.464, 221.965, 250.18])/0.72).astype(int)+10 nback_RL_2b = np.round(np.array([7.977, 79.369, 178.769, 250.22])/0.72).astype(int)+10 nback_RL_0b = np.round(np.array([36.159, 107.464, 150.567, 222.031])/0.72).astype(int)+10 nback_fix = np.array([88, 187, 286])+6 # Each block lasts for 27.5 seconds vols_2b_LR = np.concatenate([range(x,x+38) for x in nback_LR_2b]) vols_0b_LR = np.concatenate([range(x,x+38) for x in nback_LR_0b]) vols_2b_RL = np.concatenate([range(x,x+38) for x in nback_RL_2b]) vols_0b_RL = np.concatenate([range(x,x+38) for x in nback_RL_0b]) vols_fix = np.concatenate([range(x,x+22) for x in nback_fix]) vols_fix = np.concatenate([vols_fix, range(395, 405)]) # Targets nback_targets_LR = np.zeros(405) nback_targets_LR[vols_2b_LR] = 1 nback_targets_LR[vols_fix] = -1 nback_targets_RL = np.zeros(405) nback_targets_RL[vols_2b_RL] = 1 nback_targets_RL[vols_fix] = -1 # + deletable=true editable=true # Get random group assignments subjects = f['Working_memory/Subjects'][...] 
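# Draw three disjoint groups of 100 subjects each (G1 and G2 are used for fitting,
# G3 is held out for testing); chosen indices are removed between draws so that no
# subject ends up in more than one group.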
np.random.seed(123) sind = np.arange(len(subjects)) G1 = sorted(np.random.choice(sind, 100, replace = False )) sind = np.delete(sind,G1) G2 = sorted(np.random.choice(sind, 100, replace = False )) sind = np.delete(sind,G2) G3 = sorted(np.random.choice(sind, 100, replace = False )) # + deletable=true editable=true conds = ['WM_fix', 'WM_0back', 'WM_2back'] grads = [0,1,2] # Group 1 f_WM_train1, t_WM_train1 = features_targets(data = d_LR, subjects = G1, inds = nback_targets_LR, condnames = conds, gradients = grads, labels = labels) f_WM_train2, t_WM_train2 = features_targets(data = d_RL, subjects = G1, inds = nback_targets_RL, condnames = conds, gradients = grads, labels = labels) # Group 2 f_WM_train3, t_WM_train3 = features_targets(data = d_LR, subjects = G2, inds = nback_targets_LR, condnames = conds, gradients = grads, labels = labels) f_WM_train4, t_WM_train4 = features_targets(data = d_RL, subjects = G2, inds = nback_targets_RL, condnames = conds, gradients = grads, labels = labels) # Group 3 f_WM_test1, t_WM_test1 = features_targets(data = d_LR, subjects = G3, inds = nback_targets_LR, condnames = conds, gradients = grads, labels = labels) f_WM_test2, t_WM_test2 = features_targets(data = d_RL, subjects = G3, inds = nback_targets_RL, condnames = conds, gradients = grads, labels = labels) # + [markdown] deletable=true editable=true # ## Prepare targets # # Get the estimated d-prime and bias and try to predict them from the gradient match # + deletable=true editable=true data_performance = pd.read_csv('../data/WM_SDT.csv', index_col=0) data_performance.subject_id = data_performance.subject_id.astype('str') data_performance = data_performance.reset_index(drop = True) # + deletable=true editable=true G1_ID = subjects[G1] G2_ID = subjects[G2] G3_ID = subjects[G3] # + deletable=true editable=true # Indices to include from the behavioral data WM_ind_G1 = [] WM_ind_G2 = [] WM_ind_G3 = [] # These are the indices of features to remove because of missing data remove_G1 = [] remove_G2 = [] remove_G3 = [] for i, n in enumerate(G1_ID): try: WM_ind_G1.append(np.where(data_performance.iloc[:,0] == n)[0][0]) except: remove_G1.append(i) for i, n in enumerate(G2_ID): try: WM_ind_G2.append(np.where(data_performance.iloc[:,0] == n)[0][0]) except: remove_G2.append(i) for i, n in enumerate(G3_ID): try: WM_ind_G3.append(np.where(data_performance.iloc[:,0] == n)[0][0]) except: remove_G3.append(i) # + [markdown] deletable=true editable=true # #### Prepare the vectors # + deletable=true editable=true t_WM_2b_train1 = data_performance['dprime_2b_LR'][WM_ind_G1] t_WM_2b_train2 = data_performance['dprime_2b_RL'][WM_ind_G1] t_WM_2b_train3 = data_performance['dprime_2b_LR'][WM_ind_G2] t_WM_2b_train4 = data_performance['dprime_2b_RL'][WM_ind_G2] t_WM_2b_test1 = data_performance['dprime_2b_LR'][WM_ind_G3] t_WM_2b_test2 = data_performance['dprime_2b_RL'][WM_ind_G3] t_WM_0b_train1 = data_performance['dprime_0b_LR'][WM_ind_G1] t_WM_0b_train2 = data_performance['dprime_0b_RL'][WM_ind_G1] t_WM_0b_train3 = data_performance['dprime_0b_LR'][WM_ind_G2] t_WM_0b_train4 = data_performance['dprime_0b_RL'][WM_ind_G2] t_WM_0b_test1 = data_performance['dprime_0b_LR'][WM_ind_G3] t_WM_0b_test2 = data_performance['dprime_0b_RL'][WM_ind_G3] f_WM_2b_t1 = np.delete(f_WM_train1[2::3], remove_G1, axis = 0) f_WM_2b_t2 = np.delete(f_WM_train2[2::3], remove_G1, axis = 0) f_WM_2b_t3 = np.delete(f_WM_train3[2::3], remove_G2, axis = 0) f_WM_2b_t4 = np.delete(f_WM_train4[2::3], remove_G2, axis = 0) f_WM_2b_test1 = np.delete(f_WM_test1[2::3], 
remove_G3, axis = 0) f_WM_2b_test2 = np.delete(f_WM_test2[2::3], remove_G3, axis = 0) f_WM_0b_t1 = np.delete(f_WM_train1[1::3], remove_G1, axis = 0) f_WM_0b_t2 = np.delete(f_WM_train2[1::3], remove_G1, axis = 0) f_WM_0b_t3 = np.delete(f_WM_train3[1::3], remove_G2, axis = 0) f_WM_0b_t4 = np.delete(f_WM_train4[1::3], remove_G2, axis = 0) f_WM_0b_test1 = np.delete(f_WM_test1[1::3], remove_G3, axis = 0) f_WM_0b_test2 = np.delete(f_WM_test2[1::3], remove_G3, axis = 0) # + [markdown] deletable=true editable=true # ## Predict performance # + deletable=true editable=true from sklearn.preprocessing import StandardScaler from sklearn.metrics import explained_variance_score import statsmodels.api as sm # + [markdown] deletable=true editable=true # ### 2-back # + deletable=true editable=true features_A_2b = np.vstack([f_WM_2b_t1, f_WM_2b_t2]) features_B_2b = np.vstack([f_WM_2b_t3, f_WM_2b_t4]) features_C_2b = np.vstack([f_WM_2b_test1, f_WM_2b_test2]) targets_A_2b = np.concatenate([t_WM_2b_train1, t_WM_2b_train2]) targets_B_2b = np.concatenate([t_WM_2b_train3, t_WM_2b_train4]) targets_C_2b = np.concatenate([t_WM_2b_test1, t_WM_2b_test2]) # + deletable=true editable=true predict_performance(features_A_2b, targets_A_2b, features_B_2b, targets_B_2b, features_C_2b, targets_C_2b) # + deletable=true editable=true sns.set_style('whitegrid') size = 6 fig = plt.figure(figsize = (15,12)) ax = fig.add_subplot(3,3,1) ax.scatter(features_A_2b[:,0], targets_A_2b, s = size) ax.set_ylabel('Data split 1', fontsize = 18) ax.set_title('Gradient 1', fontsize = 18) ax = fig.add_subplot(3,3,2) ax.scatter(features_A_2b[:,1], targets_A_2b, s = size) ax.set_title('Gradient 2', fontsize = 18) ax = fig.add_subplot(3,3,3) ax.scatter(features_A_2b[:,2], targets_A_2b, s = size) ax.set_title('Gradient 3', fontsize = 18) ax = fig.add_subplot(3,3,4) ax.scatter(features_B_2b[:,0], targets_B_2b, s = size) ax.set_ylabel('Data split 2', fontsize = 18) ax = fig.add_subplot(3,3,5) ax.scatter(features_B_2b[:,1], targets_B_2b, s = size) ax = fig.add_subplot(3,3,6) ax.scatter(features_B_2b[:,2], targets_B_2b, s = size) ax = fig.add_subplot(3,3,7) ax.scatter(features_C_2b[:,0], targets_C_2b, s = size) ax.set_ylabel('Data split 3', fontsize = 18) ax = fig.add_subplot(3,3,8) ax.scatter(features_C_2b[:,1], targets_C_2b, s = size) ax = fig.add_subplot(3,3,9) ax.scatter(features_C_2b[:,2], targets_C_2b, s = size) plt.tight_layout() # + deletable=true editable=true scaler = StandardScaler() scaler.fit(features_A_2b[:,[1,2]]) ols_2b = sm.OLS(targets_A_2b, sm.add_constant(scaler.transform(features_A_2b[:,[1,2]]))) ols_2b = ols_2b.fit(cov_type="HC1") print ols_2b.summary() pred_2b_B_A = ols_2b.predict(sm.add_constant(scaler.transform(features_A_2b[:,[1,2]]))) pred_2b_B_C = ols_2b.predict(sm.add_constant(scaler.transform(features_C_2b[:,[1,2]]))) print "Variance explained (A): %.2f" % explained_variance_score(targets_A_2b, pred_2b_B_A) print "Variance explained (C): %.2f" % explained_variance_score(targets_C_2b, pred_2b_B_C) # + [markdown] deletable=true editable=true # ### 0-back # + deletable=true editable=true scaler = StandardScaler() scaler.fit(f_WM_0b_t1) features_A_0b = np.vstack([f_WM_0b_t1, f_WM_0b_t2]) features_B_0b = np.vstack([f_WM_0b_t3, f_WM_0b_t4]) features_C_0b = np.vstack([f_WM_0b_test1, f_WM_0b_test2]) targets_A_0b = np.concatenate([t_WM_0b_train1, t_WM_0b_train2]) targets_B_0b = np.concatenate([t_WM_0b_train3, t_WM_0b_train4]) targets_C_0b = np.concatenate([t_WM_0b_test1, t_WM_0b_test2]) # + deletable=true editable=true 
predict_performance(features_A_0b, targets_A_0b, features_B_0b, targets_B_0b, features_C_0b, targets_C_0b) # + deletable=true editable=true scaler = StandardScaler() scaler.fit(features_A_0b[:,2]) ols_0b = sm.OLS(targets_A_0b, sm.add_constant(scaler.transform(features_A_0b[:,2]))) ols_0b = ols_0b.fit(cov_type="HC1") print ols_0b.summary() pred_0b_B_A = ols_0b.predict(sm.add_constant(scaler.transform(features_A_0b[:,2]))) pred_0b_B_C = ols_0b.predict(sm.add_constant(scaler.transform(features_C_0b[:,2]))) print "Variance explained (A): %.2f" % explained_variance_score(targets_A_0b, pred_0b_B_A) print "Variance explained (C): %.2f" % explained_variance_score(targets_C_0b, pred_0b_B_C) # + deletable=true editable=true sns.set_style('whitegrid') size = 6 fig = plt.figure(figsize = (15,12)) ax = fig.add_subplot(3,3,1) ax.scatter(features_A_0b[:,0], targets_A_0b, s = size) ax.set_ylabel('Data split 1', fontsize = 18) ax.set_title('Gradient 1', fontsize = 18) ax = fig.add_subplot(3,3,2) ax.scatter(features_A_0b[:,1], targets_A_0b, s = size) ax.set_title('Gradient 2', fontsize = 18) ax = fig.add_subplot(3,3,3) ax.scatter(features_A_0b[:,2], targets_A_0b, s = size) ax.set_title('Gradient 3', fontsize = 18) ax = fig.add_subplot(3,3,4) ax.scatter(features_B_0b[:,0], targets_B_0b, s = size) ax.set_ylabel('Data split 2', fontsize = 18) ax = fig.add_subplot(3,3,5) ax.scatter(features_B_0b[:,1], targets_B_0b, s = size) ax = fig.add_subplot(3,3,6) ax.scatter(features_B_0b[:,2], targets_B_0b, s = size) ax = fig.add_subplot(3,3,7) ax.scatter(features_C_0b[:,0], targets_C_0b, s = size) ax.set_ylabel('Data split 3', fontsize = 18) ax = fig.add_subplot(3,3,8) ax.scatter(features_C_0b[:,1], targets_C_0b, s = size) ax = fig.add_subplot(3,3,9) ax.scatter(features_C_0b[:,2], targets_C_0b, s = size) plt.tight_layout() # + [markdown] deletable=true editable=true # ## Plot relationships for winning models # + deletable=true editable=true sns.set_style('white') dotsize = 10 fig = plt.figure(figsize = (10,10)) ax = plt.subplot2grid((2,2),(1, 0)) ax.scatter(features_B_2b[:,1], targets_B_2b, s = dotsize) ax.set_xlabel('Gradient 2', fontsize = 16) ax.set_ylabel('2-back D-Prime', fontsize = 16) #ax.set_title('2-back', fontsize = 16) ax = plt.subplot2grid((2,2),(1, 1)) ax.scatter(features_B_2b[:,2], targets_B_2b, s = dotsize) ax.set_xlabel('Gradient 3', fontsize = 16) ax.set_ylabel('2-back D-Prime', fontsize = 16) #ax.set_title('2-back', fontsize = 16) ax = plt.subplot2grid((2,2),(0, 0), colspan = 2) ax.scatter(features_B_0b[:,2], targets_B_0b, s = dotsize) ax.set_xlabel('Gradient 3', fontsize = 16) ax.set_ylabel('0-back D-Prime', fontsize = 16) #ax.set_title('2-back', fontsize = 16) fig.savefig('../figures/grad-performance_B.pdf') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.9 64-bit (''computer_vision'': venv)' # name: python3 # --- # + import cv2 import numpy as np from matplotlib import pyplot as plt # l = cv2.imread('left/1.jpg') # r = cv2.imread('right/1.jpg') r = cv2.imread('/home/aadiv/Documents/WPI/Fall 21/RBE 549 - Computer Vision/Project/L.jpeg') l = cv2.imread('/home/aadiv/Documents/WPI/Fall 21/RBE 549 - Computer Vision/Project/R.jpeg') # print(imgL) # print(imgR) imgL = cv2.cvtColor(l, cv2.COLOR_BGR2GRAY) imgR = cv2.cvtColor(r, cv2.COLOR_BGR2GRAY) stereo = cv2.StereoBM_create(numDisparities=160, blockSize=15) disparity = stereo.compute(imgL,imgR) plt.imshow(disparity,'gray') 
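# Note: StereoBM returns a 16-bit fixed-point disparity map (values are the true
# disparity multiplied by 16), so divide by 16.0 if metric disparities are needed.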
plt.show() print("Success") # + def create_output(vertices, colors, filename): colors = colors.reshape(-1,3) vertices = np.hstack([vertices.reshape(-1,3),colors]) ply_header = '''ply format ascii 1.0 element vertex %(vert_num)d property float x property float y property float z property uchar red property uchar green property uchar blue end_header ''' with open(filename, 'w') as f: f.write(ply_header %dict(vert_num=len(vertices))) np.savetxt(f,vertices,'%f %f %f %d %d %d') #Function that Downsamples image x number (reduce_factor) of times. def downsample_image(image, reduce_factor): for i in range(0,reduce_factor): #Check if image is color or grayscale if len(image.shape) > 2: row,col = image.shape[:2] else: row,col = image.shape image = cv2.pyrDown(image, dstsize= (col//2, row // 2)) return image def write_ply(fn, verts, colors): ply_header = '''ply format ascii 1.0 element vertex %(vert_num)d property float x property float y property float z property uchar red property uchar green property uchar blue end_header ''' out_colors = colors.copy() verts = verts.reshape(-1, 3) verts = np.hstack([verts, out_colors]) with open(fn, 'wb') as f: f.write((ply_header % dict(vert_num=len(verts))).encode('utf-8')) np.savetxt(f, verts, fmt='%f %f %f %d %d %d ') # + f = 3997.684 q2 = np.float32([[1,0,0,0], [0,-1,0,0], [0,0,f*0.05,0], [0,0,0,1]]) points3d = cv2.reprojectImageTo3D(disparity,q2) # plt.show(points3d) # print(points3d) # - colors = cv2.cvtColor(l, cv2.COLOR_BGR2RGB) #Get rid of points with value 0 (i.e no depth) mask_map = disparity > disparity.min() #Mask colors and points. output_points = points3d[mask_map] output_colors = colors[mask_map] #Define name for output file output_file = 'reconstructed.ply' #Generate point cloud print ("\n Creating the output file... 
\n") create_output(output_points, output_colors, output_file) print("Complete") # + gray = cv2.cvtColor(r,cv2.COLOR_BGR2GRAY) sift = cv2.SIFT_create() kp = sift.detect(gray,None) img=cv2.drawKeypoints(gray,kp,r) plt.imshow(img,'gray') plt.show() # - cam1 = np.array([[3997.684, 0, 1176.728], [0, 3997.684, 1011.728], [0, 0, 1]]) cam2 = np.array([[3997.684, 0, 1307.839], [0, 3997.684, 1011.728], [0, 0, 1]]) doffs = 131.111 baseline = 193.001 width = 2964 height = 1988 ndisp = 280 isint = 0 vmin = 31 vmax = 257 dyavg = 0.918 dymax = 1.516 # + # Calculate depth-to-disparity # print(cam1) Tmat = np.array([0.54, 0., 0.]) rev_proj_matrix = np.zeros((4,4)) cv2.stereoRectify(cameraMatrix1 = cam1,cameraMatrix2 = cam2, \ distCoeffs1 = 0, distCoeffs2 = 0, \ imageSize = l.shape[:2], \ R = np.identity(3), T = Tmat, \ R1 = None, R2 = None, \ P1 = None, P2 = None, Q = rev_proj_matrix); print(type(rev_proj_matrix)) # + points = cv2.reprojectImageTo3D(imgL, rev_proj_matrix) print(imgL.shape) #reflect on x axis reflect_matrix = np.identity(3) reflect_matrix[0] *= -1 points = np.matmul(points,reflect_matrix) print("points",points.shape) #extract colors from image colors = cv2.cvtColor(l, cv2.COLOR_BGR2RGB) #filter by min disparity mask = imgL > imgL.min() out_points = points[mask] out_colors = colors[mask] print(out_points) #filter by dimension idx = np.fabs(out_points[:,0]) < 4.5 out_points = out_points[idx] out_colors = out_colors.reshape(-1, 3) out_colors = out_colors[idx] create_output(out_points, out_colors, 'output.ply') print('%s saved' % 'out.ply') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Training an RL agent with Keras-RL2 Using a GEM Environment # # This notebook serves as an educational introduction to the usage of Keras-RL2 using a GEM environment. Goal of this notebook is to give an understanding of what Keras-RL2 is and how to use it to train and evaluate a Reinforcement Learning agent which is able to solve a current control problem within the GEM toolbox. # ## 1. Installation # Before you can start you need to make sure that you have installed both, gym-electric-motor and Keras-RL2. You can install both easily using pip: # # - ```pip install gym-electric-motor``` # - ```pip install keras-rl2``` # # Alternatively, you can install them and their latest developer version directly from GitHub (recommended) : # # - https://github.com/upb-lea/gym-electric-motor # - https://github.com/wau/keras-rl2 # # For this notebook, the following cell will do the job: # !pip install -q git+https://github.com/upb-lea/gym-electric-motor.git git+https://github.com/wau/keras-rl2.git # ## 2. Setting up a GEM Environment # # The basic idea behind reinforcement learning is to create a so-called agent, that should learn by itself to solve a specified task in a given environment. # This environment gives the agent feedback on its actions and reinforces the targeted behavior. # In this notebook, the task is to train a controller for the current control of a *permanent magnet synchronous motor* (*PMSM*). # # In the following, the used GEM-environment is briefly presented, but this notebook does not focus directly on the detailed usage of GEM. 
If you are new to the used environment and interested in finding out what it does and how to use it, you should take a look at the [GEM cookbook](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/example_notebooks/GEM_cookbook.ipynb). # # To save some space in this notebook, there is a function defined in an external python file called **getting_environment.py**. If you want to know how the environment's parameters are defined you can take a look at that file. By simply calling the **get_env()** function from the external file, you can set up an environment for a *PMSM* with discrete inputs. # # The basic idea of the control setup from the GEM-environment is displayed in the following figure. # # ![](../../docs/plots/SCML_Overview.png) # # The agent controls the converter who converts the supply currents to the currents flowing into the motor - for the *PMSM*: $i_{sq}$ and $i_{sd}$ # # In the continuous case, the agent's action equals a duty cycle which will be modulated into a corresponding voltage. # # In the discrete case, the agent's actions denote switching states of the converter at the given instant. Here, only a discrete amount of options are available. In this notebook, for the PMSM the *discrete B6 bridge converter* with six switches is utilized per default. This converter provides a total of eight possible actions. # # ![](../../docs/plots/B6.svg) # # The motor schematic is the following: # # # ![](../../docs/plots/ESBdq.svg) # # And the electrical ODEs for that motor are: # #

# $\frac{\mathrm{d}i_{sd}}{\mathrm{d}t} = \frac{u_{sd} + p\omega_{me}L_q i_{sq} - R_s i_{sd}}{L_d}$
#
# $\frac{\mathrm{d}i_{sq}}{\mathrm{d}t} = \frac{u_{sq} - p\omega_{me}(L_d i_{sd} + \mathit{\Psi}_p) - R_s i_{sq}}{L_q}$
#
# $\frac{\mathrm{d}\epsilon_{el}}{\mathrm{d}t} = p\omega_{me}$
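#
# As a minimal sketch (illustrative only, not GEM code), these right-hand sides can be written directly in Python. The parameter names and default values follow the `motor_parameter` dictionary defined further below in this notebook.

# +
def pmsm_dq_derivatives(i_sd, i_sq, u_sd, u_sq, omega_me,
                        p=3, r_s=17.932e-3, l_d=0.37e-3, l_q=1.2e-3, psi_p=65.65e-3):
    """Illustrative right-hand sides of the PMSM current ODEs in the dq frame.

    Returns (d i_sd/dt, d i_sq/dt, d epsilon_el/dt).
    """
    di_sd = (u_sd + p * omega_me * l_q * i_sq - r_s * i_sd) / l_d
    di_sq = (u_sq - p * omega_me * (l_d * i_sd + psi_p) - r_s * i_sq) / l_q
    deps_el = p * omega_me
    return di_sd, di_sq, deps_el
# -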

# # The target for the agent is now to learn to control the currents. For this, a reference generator produces a trajectory that the agent has to follow. # Therefore, it has to learn a function (policy) from given states, references and rewards to appropriate actions. # # For a deeper understanding of the used models behind the environment see the [documentation](https://upb-lea.github.io/gym-electric-motor/). # Comprehensive learning material to RL is also [freely available](https://github.com/upb-lea/reinforcement_learning_course_materials). # import numpy as np from pathlib import Path import gym_electric_motor as gem from gym_electric_motor.reference_generators import \ MultipleReferenceGenerator,\ WienerProcessReferenceGenerator from gym_electric_motor.visualization import MotorDashboard from gym.spaces import Discrete, Box from gym.wrappers import FlattenObservation, TimeLimit from gym import ObservationWrapper class FeatureWrapper(ObservationWrapper): """ Wrapper class which wraps the environment to change its observation. Serves the purpose to improve the agent's learning speed. It changes epsilon to cos(epsilon) and sin(epsilon). This serves the purpose to have the angles -pi and pi close to each other numerically without losing any information on the angle. Additionally, this wrapper adds a new observation i_sd**2 + i_sq**2. This should help the agent to easier detect incoming limit violations. """ def __init__(self, env, epsilon_idx, i_sd_idx, i_sq_idx): """ Changes the observation space to fit the new features Args: env(GEM env): GEM environment to wrap epsilon_idx(integer): Epsilon's index in the observation array i_sd_idx(integer): I_sd's index in the observation array i_sq_idx(integer): I_sq's index in the observation array """ super(FeatureWrapper, self).__init__(env) self.EPSILON_IDX = epsilon_idx self.I_SQ_IDX = i_sq_idx self.I_SD_IDX = i_sd_idx new_low = np.concatenate((self.env.observation_space.low[ :self.EPSILON_IDX], np.array([-1.]), self.env.observation_space.low[ self.EPSILON_IDX:], np.array([0.]))) new_high = np.concatenate((self.env.observation_space.high[ :self.EPSILON_IDX], np.array([1.]), self.env.observation_space.high[ self.EPSILON_IDX:],np.array([1.]))) self.observation_space = Box(new_low, new_high) def observation(self, observation): """ Gets called at each return of an observation. Adds the new features to the observation and removes original epsilon. 
""" cos_eps = np.cos(observation[self.EPSILON_IDX] * np.pi) sin_eps = np.sin(observation[self.EPSILON_IDX] * np.pi) currents_squared = observation[self.I_SQ_IDX]**2 + observation[self.I_SD_IDX]**2 observation = np.concatenate((observation[:self.EPSILON_IDX], np.array([cos_eps, sin_eps]), observation[self.EPSILON_IDX + 1:], np.array([currents_squared]))) return observation # + # define motor arguments motor_parameter = dict(p=3, # [p] = 1, nb of pole pairs r_s=17.932e-3, # [r_s] = Ohm, stator resistance l_d=0.37e-3, # [l_d] = H, d-axis inductance l_q=1.2e-3, # [l_q] = H, q-axis inductance psi_p=65.65e-3, # [psi_p] = Vs, magnetic flux of the permanent magnet ) # supply voltage u_sup = 350 # nominal and absolute state limitations nominal_values=dict(omega=4000*2*np.pi/60, i=230, u=u_sup ) limit_values=dict(omega=4000*2*np.pi/60, i=1.5*230, u=u_sup ) # defining reference-generators q_generator = WienerProcessReferenceGenerator(reference_state='i_sq') d_generator = WienerProcessReferenceGenerator(reference_state='i_sd') rg = MultipleReferenceGenerator([q_generator, d_generator]) # defining sampling interval tau = 1e-5 # defining maximal episode steps max_eps_steps = 10000 motor_initializer={'random_init': 'uniform', 'interval': [[-230, 230], [-230, 230], [-np.pi, np.pi]]} reward_function=gem.reward_functions.WeightedSumOfErrors( reward_weights={'i_sq': 10, 'i_sd': 10}, gamma=0.99, # discount rate reward_power=1) # creating gem environment env = gem.make( # define a PMSM with discrete action space "PMSMDisc-v1", # visualize the results visualization=MotorDashboard(state_plots=['i_sq', 'i_sd'], reward_plot=True), # parameterize the PMSM and update limitations motor_parameter=motor_parameter, limit_values=limit_values, nominal_values=nominal_values, # define the random initialisation for load and motor load='ConstSpeedLoad', load_initializer={'random_init': 'uniform', }, motor_initializer=motor_initializer, reward_function=reward_function, # define the duration of one sampling step tau=tau, u_sup=u_sup, # turn off terminations via limit violation, parameterize the rew-fct reference_generator=rg, ode_solver='euler', ) # remove one action from the action space to help the agent speed up its training # this can be done as both switchting states (1,1,1) and (-1,-1,-1) - which are encoded # by action 0 and 7 - both lead to the same zero voltage vector in alpha/beta-coordinates env.action_space = Discrete(7) # applying wrappers eps_idx = env.physical_system.state_names.index('epsilon') i_sd_idx = env.physical_system.state_names.index('i_sd') i_sq_idx = env.physical_system.state_names.index('i_sq') env = TimeLimit(FeatureWrapper(FlattenObservation(env), eps_idx, i_sd_idx, i_sq_idx), max_eps_steps) # - # ## 3. Train an Agent with Keras-RL2 # [Keras-RL2](https://github.com/wau/keras-rl2) is a widely used Python library for reinforcement learning. It compiles a collection of Reinforcement Learning algorithm implementations. # Most of the algorithms and their extensions are readily available and are quite stable. The ease of adopting to various environments adds to its popularity. # For currently available RL algorithms see their [documentation](https://keras-rl.readthedocs.io/en/latest/agents/overview/) # ### 3.1 Imports # The environment in this control problem poses a discrete action space. Therefore, the [Deep Q-Network (DQN)](https://arxiv.org/abs/1312.5602) is a suitable agent. 
from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten from tensorflow.keras.optimizers import Adam from rl.agents.dqn import DQNAgent from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy from rl.memory import SequentialMemory # ### 3.2 Parameters and Instantiation # For the DQN algorithm, a set of parameters has to be pre-defined. # In particular, a multilayer perceptron (MLP) is used as function approximator within the DQN algorithm. # Note: Although there are 8 possible actions, we ignore the last action as the voltage applied on the motor there is the same as for the first action. This effectively reduces the feature space that is to be explored and benefits training. # + # Training parameters. time_limit = True buffer_size = 200000 # observation history size batch_size = 25 # mini batch size sampled from history at each update step nb_actions = env.action_space.n window_length = 1 # construct a MLP model = Sequential() model.add(Flatten(input_shape=(window_length,) + env.observation_space.shape)) model.add(Dense(64, activation='relu')) # hidden layer 1 model.add(Dense(64, activation='relu')) # hidden layer 2 model.add(Dense(nb_actions, activation='linear')) # output layer # keras-rl2 objects memory = SequentialMemory(limit=200000, window_length=window_length) policy = LinearAnnealedPolicy(EpsGreedyQPolicy(eps=0.2), 'eps', 1, 0.05, 0, 50000) # - # ### 3.3 Training # Once you've setup the environment and defined your parameters starting the training is nothing more than a one-liner. For each algorithm all you have to do is call its ```fit()``` function. # # The advantage of keras-RL2 is that you can see all the important parameters and their values during training phase to get an intution of learning. # Hence, there is no need of additional callbacks. # + dqn = DQNAgent( model=model, policy=policy, nb_actions=nb_actions, memory=memory, gamma=0.99, batch_size=25, train_interval=1, memory_interval=1, target_model_update=1000, nb_steps_warmup=10000, enable_double_dqn=True ) dqn.compile(Adam(lr=1e-4), metrics=['mse'] ) # - # %matplotlib notebook history = dqn.fit(env, nb_steps=100000, action_repetition=1, verbose=2, visualize=True, nb_max_episode_steps=10000, log_interval=1000 ) # ### 3.4 Saving the Model # When the training is finished, you can save the model that your DQN-agent has learned for later reuse, e.g. for evaluation or continued training. For this purpose, ```save_weights()``` is used. weight_path = Path('saved_agents') weight_path.mkdir(parents=True, exist_ok=True) dqn.save_weights(str(weight_path / 'dqn_keras-RL2.hdf5'), overwrite=True) # ## 5. Evaluating an Agent # Keras-RL2 agents have their own ```test()``` method which can be used to evaluate their performance. # Here, the metric used is the total reward the agent accumulates at the end of each episode. # ### 5.1 Loading a Model (Optional) try: dqn.load_weights(str(weight_path / 'dqn_keras-RL2.hdf5')) except OSError: print('Could not find model file. Continue') # ### 5.2 Testing env.reset() test_history = dqn.test(env, nb_episodes=5, nb_max_episode_steps=100000, visualize=True ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # hide # %load_ext autoreload # %autoreload 2 # %load_ext nb_black # %load_ext lab_black # + # default_exp model_pipeline # - # # ModelPipeline # > Putting it all together. 
# ## Overview # # The functionality below uses the `NumerFrame`, `PreProcessor`, `Model` and `PostProcessor` objects to easily propagate # data, generate predictions and postprocess them in one go. # # Specifically, this section introduces two objects: # 1. `ModelPipeline`: Run all preprocessing, models and postprocessing that you define and return a `NumerFrame`. # 2. `ModelPipelineCollection`: Manage and run multiple `ModelPipeline` objects. # hide from nbdev.showdoc import * # + #export import uuid import pandas as pd from tqdm.auto import tqdm from typeguard import typechecked from typing import List, Union, Dict from rich import print as rich_print from numerblox.numerframe import NumerFrame, create_numerframe from numerblox.preprocessing import BaseProcessor, CopyPreProcessor, GroupStatsPreProcessor, FeatureSelectionPreProcessor from numerblox.model import BaseModel, ConstantModel, RandomModel from numerblox.postprocessing import Standardizer, MeanEnsembler, FeatureNeutralizer # - # ## 1. ModelPipeline # `ModelPipeline` handles all preprocessing, model prediction and postprocessing. It returns a `NumerFrame` with the preprocessed data, metadata and postprocessed prediction columns. #export @typechecked class ModelPipeline: """ Execute all preprocessing, prediction and postprocessing for a given setup. :param models: Initiliazed numerai-blocks Models (Objects inheriting from BaseModel) \n :param preprocessors: List of initialized Preprocessors. \n :param postprocessors: List of initialized Postprocessors. \n :param copy_first: Whether to copy the NumerFrame as a first preprocessing step. \n Highly recommended in order to avoid surprise behaviour by manipulating the original dataset. \n :param pipeline_name: Unique name for pipeline. Only used for display purposes. """ def __init__(self, models: List[BaseModel], preprocessors: List[BaseProcessor] = [], postprocessors: List[BaseProcessor] = [], copy_first = True, standardize = True, pipeline_name: str = None): self.pipeline_name = pipeline_name if pipeline_name else uuid.uuid4().hex self.models = models self.copy_first = copy_first self.standardize = standardize self.preprocessors = preprocessors self.postprocessors = postprocessors def preprocess(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame: """ Run all preprocessing steps. Copies input by default. """ if self.copy_first: dataf = CopyPreProcessor()(dataf) for preprocessor in tqdm(self.preprocessors, desc=f"{self.pipeline_name} Preprocessing:", position=0): rich_print(f":construction: Applying preprocessing: '[bold]{preprocessor.__class__.__name__}[/bold]' :construction:") dataf = preprocessor(dataf) return NumerFrame(dataf) def postprocess(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame: """ Run all postprocessing steps. Standardizes model prediction by default. """ if self.standardize: dataf = Standardizer()(dataf) for postprocessor in tqdm(self.postprocessors, desc=f"{self.pipeline_name} Postprocessing: ", position=0): rich_print(f":construction: Applying postprocessing: '[bold]{postprocessor.__class__.__name__}[/bold]' :construction:") dataf = postprocessor(dataf) return NumerFrame(dataf) def process_models(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame: """ Run all models. """ for model in tqdm(self.models, desc=f"{self.pipeline_name} Model prediction: ", position=0): rich_print(f":robot: Generating model predictions with '[bold]{model.__class__.__name__}[/bold]'. 
:robot:") dataf = model(dataf) return NumerFrame(dataf) def pipeline(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame: """ Process full pipeline and return resulting NumerFrame. """ preprocessed_dataf = self.preprocess(dataf) prediction_dataf = self.process_models(preprocessed_dataf) processed_prediction_dataf = self.postprocess(prediction_dataf) rich_print(f":checkered_flag: [green]Finished pipeline:[green] [bold blue]'{self.pipeline_name}'[bold blue]! :checkered_flag:") return processed_prediction_dataf def __call__(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame: return self.pipeline(dataf) # Example using several preprocessor, dummy models and postprocessors # + model_names = ["test_0.5", "test_0.8"] dataf = create_numerframe("test_assets/mini_numerai_version_1_data.csv", metadata={'version': 1}) preprocessors = [GroupStatsPreProcessor(), FeatureSelectionPreProcessor(feature_cols=['feature_intelligence_mean', 'feature_intelligence_std'])] models = [ConstantModel(constant=0.5, model_name=model_names[0]), ConstantModel(constant=0.8, model_name=model_names[1])] postprocessors = [MeanEnsembler(cols=[f"prediction_{name}" for name in model_names], final_col_name='prediction_ensembled'), FeatureNeutralizer(feature_names=['feature_intelligence_mean', 'feature_intelligence_std'], pred_name='prediction_ensembled', proportion=0.8)] # - test_pipeline = ModelPipeline(preprocessors=preprocessors, models=models, postprocessors=postprocessors, pipeline_name="test_pipeline", standardize=False) processed_dataf = test_pipeline(dataf) assert processed_dataf.meta == dataf.meta assert isinstance(processed_dataf, NumerFrame) processed_dataf.head(2) # ## 2. ModelPipelineCollection # `ModelPipelineCollection` can be used to manage and run multiple `ModelPipeline` objects. # # `ModelPipelineCollection` simply takes a list of `ModelPipeline` objects as input. #export @typechecked class ModelPipelineCollection: """ Execute multiple initialized ModelPipelines in a sequence. :param pipelines: List of initialized ModelPipelines. """ def __init__(self, pipelines: List[ModelPipeline]): self.pipelines = {pipe.pipeline_name: pipe for pipe in pipelines} self.pipeline_names = list(self.pipelines.keys()) def process_all_pipelines(self, dataf: Union[pd.DataFrame, NumerFrame]) -> Dict[str, NumerFrame]: """ Process all pipelines and return Dictionary mapping pipeline names to resulting NumerFrames. """ result_datafs = dict() for name, pipeline in tqdm(self.pipelines.items(), desc="Processing Pipeline Collection"): result_datafs[name] = self.process_single_pipeline(dataf, name) return result_datafs def process_single_pipeline(self, dataf: Union[pd.DataFrame, NumerFrame], pipeline_name: str) -> NumerFrame: """ Run full model pipeline for given name in collection. """ rich_print(f":construction_worker: [bold green]Processing model pipeline:[/bold green] '{pipeline_name}' :construction_worker:") pipeline = self.get_pipeline(pipeline_name) dataf = pipeline(dataf) return NumerFrame(dataf) def get_pipeline(self, pipeline_name: str) -> ModelPipeline: """ Retrieve model pipeline for given name. """ available_pipelines = self.pipeline_names assert pipeline_name in available_pipelines, f"Requested pipeline '{pipeline_name}', but only the following models are in the collection: '{available_pipelines}'." 
return self.pipelines[pipeline_name] def __call__(self, dataf: Union[pd.DataFrame, NumerFrame]) -> Dict[str, NumerFrame]: return self.process_all_pipelines(dataf=dataf) # We introduce a different pipeline with no preprocessing or postprocessing. Only a `RandomModel`. test_pipeline2 = ModelPipeline(models=[RandomModel()], pipeline_name="test_pipeline2") # + [markdown] pycharm={"name": "#%% md\n"} # We process two `ModelPipeline`s with different characteristics on the same data. # - collection = ModelPipelineCollection([test_pipeline, test_pipeline2]) assert collection.get_pipeline("test_pipeline2").pipeline_name == 'test_pipeline2' result_datasets = collection(dataf=dataf) # The `ModelPipelineCollection` returns a dictionary mapping pipeline names to `NumerFrame` objects, retaining all metadata and added prediction columns for each. Note that in this example, the 1st `NumerFrame` had a feature selection step, so it did not retain all columns. However, the second dataset retained all feature columns, because no preprocessing was done. # + pycharm={"name": "#%%\n"} result_datasets.keys() # - result_datasets['test_pipeline'].head(2) # + pycharm={"name": "#%%\n"} result_datasets['test_pipeline2'].head(2) # - # Since metadata is not manipulated in these pipelines, metadata should be the same as the original `NumerFrame` for all resulting `NumerFrame` objects. # + pycharm={"name": "#%%\n"} for _, result in result_datasets.items(): assert dataf.meta == result.meta # + pycharm={"name": "#%%\n"} result_datasets['test_pipeline'].meta # - # ----------------------------------------------------------------------------- # + # hide # Run this cell to sync all changes with library from nbdev.export import notebook2script notebook2script() # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="yHG6wUbvifuX" # # Forecasting with machine learning # + [markdown] id="vidayERjaO5q" # ## Setup # + id="gqWabzlJ63nL" import numpy as np import matplotlib.pyplot as plt import tensorflow as tf keras = tf.keras # + id="cg1hfKCPldZG" def plot_series(time, series, format="-", start=0, end=None, label=None): plt.plot(time[start:end], series[start:end], format, label=label) plt.xlabel("Time") plt.ylabel("Value") if label: plt.legend(fontsize=14) plt.grid(True) def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern at each period""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def white_noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level # + id="iL2DDjV3lel6" time = np.arange(4 * 365 + 1) slope = 0.05 baseline = 10 amplitude = 40 series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) noise_level = 5 noise = white_noise(time, noise_level, seed=42) series += noise plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show() # + [markdown] id="ViWVB9qd8OIR" # ## Forecasting with Machine Learning # # First, we will train a model to forecast the next step given the previous 30 steps, therefore, we need 
to create a dataset of 30-step windows for training. # + id="1tl-0BOKkEtk" def window_dataset(series, window_size, batch_size=32, shuffle_buffer=1000): dataset = tf.data.Dataset.from_tensor_slices(series) dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_size + 1)) dataset = dataset.shuffle(shuffle_buffer) dataset = dataset.map(lambda window: (window[:-1], window[-1])) dataset = dataset.batch(batch_size).prefetch(1) return dataset # + id="Zmp1JXKxk9Vb" split_time = 1000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] # + [markdown] id="T1IvwAFn8OIc" # ### Linear Model # + id="ieOKdcEQ0A6k" keras.backend.clear_session() tf.random.set_seed(42) # do we need to set a random no for new session? np.random.seed(42) window_size = 30 train_set = window_dataset(x_train, window_size) # here we just use the function blindly :) valid_set = window_dataset(x_valid, window_size) model = keras.models.Sequential([ keras.layers.Dense(1, input_shape=[window_size]) ]) optimizer = keras.optimizers.SGD(lr=1e-5, momentum=0.9) model.compile(loss=keras.losses.Huber(), # what is huber? quadratic for small errors but linear for large errors (MAE) absolute error are large optimizer=optimizer, # try other like Adam rmsprop metrics=["mae"]) model.fit(train_set, epochs=100, validation_data=valid_set) # + id="N3N8AGRM8OIc" keras.backend.clear_session() # do we need to set a random no for new session? tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = window_dataset(x_train, window_size) model = keras.models.Sequential([ keras.layers.Dense(1, input_shape=[window_size]) ]) lr_schedule = keras.callbacks.LearningRateScheduler( lambda epoch: 1e-6 * 10**(epoch / 30)) # what is the logic here? optimizer = keras.optimizers.SGD(lr=1e-6, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) # call back here but why validation is not here? # + id="PF9e7IDm8OId" plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-6, 1e-3, 0, 20]) # + id="uMNwyIFE8OIf" keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = window_dataset(x_train, window_size) valid_set = window_dataset(x_valid, window_size) model = keras.models.Sequential([ keras.layers.Dense(1, input_shape=[window_size]) ]) optimizer = keras.optimizers.SGD(lr=1e-5, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) early_stopping = keras.callbacks.EarlyStopping(patience=10) model.fit(train_set, epochs=500, validation_data=valid_set, callbacks=[early_stopping]) # + id="_eaAX9g_jS5W" def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) # No need to shuffle here because we are not training here. ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast # + id="FnIWROQ08OIj" lin_forecast = model_forecast(model, series[split_time - window_size:-1], window_size)[:, 0] # what is going on? # + id="xd7Tj_fA8OIk" lin_forecast.shape # what is the vector here? 
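# model_forecast slides a length-`window_size` window over series[split_time - window_size:-1],
# which yields exactly one window per validation time step. model.predict therefore returns an
# array of shape (len(x_valid), 1), and the trailing [:, 0] flattens it into a 1-D forecast
# aligned with time_valid.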
# - type(lin_forecast) x_valid.shape # + id="F-nftslfgQJs" plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, lin_forecast) # + id="W4E_jXktf7iv" keras.metrics.mean_absolute_error(x_valid, lin_forecast).numpy() # + [markdown] id="9nEM33dZ8OIp" # ### Dense Model Forecasting # Lets see if we can we do better with additional layers? # + id="RhGTv4G_8OIp" keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = window_dataset(x_train, window_size) model = keras.models.Sequential([ keras.layers.Dense(10, activation="relu", input_shape=[window_size]), keras.layers.Dense(10, activation="relu"), keras.layers.Dense(1) ]) lr_schedule = keras.callbacks.LearningRateScheduler( lambda epoch: 1e-7 * 10**(epoch / 20)) optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) # here we don't use validation data b/c we are trying to get the learning rate # learning rate deals with training data only. # + id="5g-nC_em8OIq" plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-7, 5e-3, 0, 30]) #How do we get this learning rate number? Is it by intuition? # - # Optional code import pandas as pd history.history.keys() losses = pd.DataFrame(history.history) losses['mae'].plot() model.history.history.keys() losses = pd.DataFrame(model.history.history) losses['mae'].plot() # Optional code # + id="B7t0VrCH8OIr" keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = window_dataset(x_train, window_size) valid_set = window_dataset(x_valid, window_size) model = keras.models.Sequential([ keras.layers.Dense(10, activation="relu", input_shape=[window_size]), keras.layers.Dense(10, activation="relu"), keras.layers.Dense(1) ]) optimizer = keras.optimizers.SGD(lr=1e-5, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) early_stopping = keras.callbacks.EarlyStopping(patience=10) # if it doesn't make any change in last 10 epochs then stop model.fit(train_set, epochs=500, validation_data=valid_set, # here we use the validation data callbacks=[early_stopping]) # + id="RqQbX6DZ8OIu" dense_forecast = model_forecast( model, series[split_time - window_size:-1], # little bit before the validation series window_size)[:, 0] # 30 time steps before the start of validation period. It causes overlap but ignore it here ???? # training 30 steps before? # + id="98zwAuIo8OIv" plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, dense_forecast) # + id="EgkELN-58OIw" keras.metrics.mean_absolute_error(x_valid, dense_forecast).numpy() # little better with the MAE but graph shows a diff story # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pymedphys-master # language: python # name: pymedphys-master # --- # + import keyring import getpass import functools import itertools import tempfile import io import numpy as np import matplotlib.pyplot as plt import imageio import skimage.transform import pydicom import segments # - # Makes it so any changes in pymedphys is automatically # propagated into the notebook without needing a kernel reset. 
from IPython.lib.deepreload import reload # %load_ext autoreload # %autoreload 2 from pymedphys._experimental.autosegmentation import pipeline, mask EXPANSION = 4 def get_instance_id(name): # So that 0 isn't a category return category_id_map[name] + 1 # + segments_api_key = keyring.get_password('segments-ai', 'api-key') if not segments_api_key: segments_api_key = getpass.getpass() keyring.set_password('', 'api-key', segments_api_key) # - client = segments.SegmentsClient(segments_api_key) dataset_name = 'SimonBiggs/AnimalContours' # Name of a dataset you've created on Segments.ai dataset = client.get_dataset(dataset_name) dataset # + contouring_task = [item for item in dataset['tasks'] if item['name'] == 'contouring'][0] categories = contouring_task['attributes']['categories'] category_id_map = { item['name']: item['id'] for item in categories } category_id_map # + samples = client.get_samples(dataset_name) ct_uid_to_upload_uuid = { item['name'].replace(".png", ""): item['uuid'] for item in samples } # - samples[0] client.get_label('b2b5e02a-e5a3-4cf0-a361-a2349207c930', 'contouring') client.put ( data_path_root, structure_set_paths, ct_image_paths, ct_uid_to_structure_uid, structure_uid_to_ct_uids, names_map, structure_names_by_ct_uid, structure_names_by_structure_set_uid, uid_to_url, hash_path, ) = pipeline.get_dataset_metadata() # + def is_mask_a_subset(subset, superset): return np.all(np.logical_and(subset, superset) == subset) def cmp(x, y, masks): mask_x = masks[x] mask_y = masks[y] if is_mask_a_subset(mask_x, mask_y): return -1 if is_mask_a_subset(mask_y, mask_x): return 1 disjoint = np.logical_xor(mask_x, mask_y) == np.logical_or(mask_x, mask_y) if np.any(np.invert(disjoint)): raise ValueError(f"Masks ({x}, {y}) are disjoint") return 0 def create_sorting_key(masks): return functools.cmp_to_key( functools.partial(cmp, masks=masks) ) # + @functools.lru_cache() def get_dcm_ct_from_uid(ct_uid): ct_path = ct_image_paths[ct_uid] dcm_ct = pydicom.read_file(ct_path, force=True) dcm_ct.file_meta.TransferSyntaxUID = pydicom.uid.ImplicitVRLittleEndian return dcm_ct @functools.lru_cache() def get_dcm_structure_from_uid(structure_set_uid): structure_set_path = structure_set_paths[structure_set_uid] dcm_structure = pydicom.read_file( structure_set_path, force=True, specific_tags=["ROIContourSequence", "StructureSetROISequence"], ) return dcm_structure @functools.lru_cache() def get_contours_by_ct_uid_from_structure_uid(structure_set_uid): dcm_structure = get_dcm_structure_from_uid(structure_set_uid) number_to_name_map = { roi_sequence_item.ROINumber: names_map[roi_sequence_item.ROIName] for roi_sequence_item in dcm_structure.StructureSetROISequence if names_map[roi_sequence_item.ROIName] is not None } contours_by_ct_uid = pipeline.get_contours_by_ct_uid(dcm_structure, number_to_name_map) return contours_by_ct_uid # - ct_uid, sample_uuid = list(ct_uid_to_upload_uuid.items())[0] client.get_label(sample_uuid, 'contouring') sample_uuid # + # client.put? # - for ct_uid, sample_uuid in ct_uid_to_upload_uuid.items(): current_label_data = client.get_label(sample_uuid, 'contouring') print(ct_uid, sample_uuid) try: if current_label_data['label_status'] in ('PRELABELED', 'REVIEWED', 'LABELED'): print('Already labelled. 
Skipping...') continue else: print(current_label_data['label_status']) except KeyError: pass structure_uid = ct_uid_to_structure_uid[ct_uid] ct_path = pipeline.download_uid(data_path_root, ct_uid, uid_to_url, hash_path) structure_path = pipeline.download_uid(data_path_root, structure_uid, uid_to_url, hash_path) dcm_ct = get_dcm_ct_from_uid(ct_uid) dcm_structure = get_dcm_structure_from_uid(structure_uid) grid_x, grid_y, ct_img = pipeline.create_input_ct_image(dcm_ct) contours_by_ct_uid = get_contours_by_ct_uid_from_structure_uid( structure_uid ) _, _, ct_size = mask.get_grid(dcm_ct) ct_size = tuple(np.array(ct_size) * EXPANSION) try: contours_on_this_slice = contours_by_ct_uid[ct_uid].keys() except KeyError as e: print(e) print("Key Error in contours on slice. Skipping...") continue masks = dict() for structure in contours_on_this_slice: if structure in contours_on_this_slice: masks[structure] = mask.calculate_expanded_mask( contours_by_ct_uid[ct_uid][structure], dcm_ct, EXPANSION ) else: masks[structure] = np.zeros(ct_size).astype(bool) try: mask_assignment_order = sorted( list(contours_on_this_slice), key=create_sorting_key(masks), reverse=True) except ValueError as e: print(e) print("Disjoint contours. Skipping...") continue objects_map = [ { "id": get_instance_id(name), "category_id": category_id_map[name] } for name in contours_on_this_slice ] catagorised_mask = np.zeros(ct_size).astype(np.uint8) for structure_name in mask_assignment_order: instance_id = get_instance_id(structure_name) catagorised_mask[masks[structure_name]] = instance_id png_file = io.BytesIO() imageio.imsave(png_file, catagorised_mask, format='PNG-PIL', prefer_uint8=True) sample_name = f"{ct_uid}_mask.png" asset = client.upload_asset(png_file, filename=sample_name) image_url = asset["url"] sample_uuid = ct_uid_to_upload_uuid[ct_uid] task_name = "contouring" attributes = { "segmentation_bitmap": { "url": image_url }, "annotations": objects_map } client.add_label(sample_uuid, task_name, attributes, label_status='REVIEWED') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tsat # language: python # name: tsat # --- import torch, torchvision from torchvision.datasets.video_utils import VideoClips import os, json import numpy as np from torch.utils.data import Dataset, DataLoader import ffmpeg from collections import Counter as C class prober_class(Dataset): def __init__(self, paths): super(prober_class, self).__init__() self.paths = paths def __len__(self): return len(self.paths) def __getitem__(self, idx): k,p = self.paths[idx] assert os.path.exists(p) try: res = ffmpeg.probe(p) except: res = "Error" return [res, k] # + basepath = '/data/datasets/kinetics700_2020/' def read_dataset_details(label_path, batch_size, vid_path): print("hello") dataset = json.load(open(basepath + label_path)) print(list(dataset.items())[0], len(dataset)) collate = lambda x: x loader = DataLoader(prober_class([(k, basepath + vid_path + k + '.mp4') for k, v in dataset.items()]), batch_size=batch_size, num_workers=batch_size, collate_fn=collate) print("Example paths:", loader.dataset.paths[:3]) probes = [] for i,batch in enumerate(loader): if i % 50 == 0: print(i*batch_size,'/',len(dataset)) for b in batch: probes.append(b) print(C([type(p[0]) for p in probes])) return probes # - def filter_dataset(label_path, probes): dataset = json.load(open(basepath + label_path)) print(list(dataset.items())[0], len(dataset)) new_dataset = 
{} error_remove_ids = [] stream_err_ids = [] for p, k in probes: if p == "Error": error_remove_ids.append(k) elif len(p['streams']) != 2: stream_err_ids.append(k) else: dataset[k]['nb_frames'] = p['streams'][0]['nb_frames'] dataset[k]['hw'] = (p['streams'][0]['height'], p['streams'][0]['width']) dataset[k]['true_duration'] = p['streams'][0]['duration'] new_dataset[k] = dataset[k] return new_dataset, error_remove_ids, stream_err_ids probess = read_dataset_details('full/train.json', 50, 'train_cache_short_fixed/') len(probess) probess[1][0]['streams'] new_dataset, error_remove_ids, stream_err_ids = filter_dataset('full/train.json', probess) len(stream_err_ids) # + #json.dump(new_dataset, open(basepath + 'labels/val.json', 'w+'), indent=2) # - probess = read_dataset_details('labels/train_full.json', 60, 'train/') json.dump(probess, open("tmp.json", 'w+')) new_dataset, error_remove_ids, stream_err_ids = filter_dataset('labels/train_full.json', probess) len(new_dataset) len(error_remove_ids) json.dump(error_remove_ids, open("corrupt_missing_train.json", 'w+')) json.dump(stream_err_ids, open("stream_err_train.json", 'w+')) json.dump(['xsGtbUnp9tw', '141fJ89Ed2k', 'q-S0pBZDhZU', 'wN8qYmPv5yk', 'uz6rjbw0ZA0', 'kUVnT7Ld80M', 'XY5FDVay5_A', 'bOU2oGVBM_o', 'fg2BS7H_dAU', 'KTCQpjUrCe8', 'erh2ngRZxs0', '7hIAtSLdAUo', 'h_5SZwWFg1c', 'z8qEtdr1ZuU', 'VYZPozZ5Eig', '6iuD3pSgBcw', 'NPNP-7B9P3M', 'yhA_TTKetyM', 'YjUcA9zOp5g', 'hKqCUWTQQxU' ], open("download_err_corrupt.json", 'w+') ) len(new_dataset) json.dump(new_dataset, open(basepath + 'labels/train.json', 'w+'), indent=2) # Create mini kinetics train import random label_class = list(set(m['annotations']['label'] for m in new_dataset.values())) assert len(label_class) == 700 classwise_keys = {l:[] for l in label_class} for k, d in new_dataset.items(): classwise_keys[d['annotations']['label']].append(k) classwise_ixs['sword swallowing'] mini_train = dict(random.sample(list(new_dataset.items()), 200000)) dict(sorted(C(m['annotations']['label'] for m in mini_train.values()).items(),key=lambda x: x[1])) label_to_ix = {k:i for i, k in enumerate(label_class)} json.dump(label_to_ix, open(basepath + 'mini/class_to_index.json', 'w+'), indent=2) for k in mini_train.keys(): l = mini_train[k]['annotations']['label'] mini_train[k]['annotations']['label'] = (l, label_to_ix[l]) list(mini_train.items())[1], len(mini_train) json.dump(mini_train, open(basepath + 'mini/train.json', 'w+'), indent=2) # + mini_val = json.load(open(basepath + 'labels/val.json')) for k in mini_val.keys(): l = mini_val[k]['annotations']['label'] mini_val[k]['annotations']['label'] = (l, label_to_ix[l]) # - list(mini_val.items())[1], len(mini_val) json.dump(mini_val, open(basepath + 'mini/val.json', 'w+'), indent=2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="l-CQB0NFgWxA" # Lambda School Data Science # # *Unit 2, Sprint 1, Module 3* # # --- # + [markdown] id="7IXUfiQ2UKj6" # # Ridge Regression # # ## Assignment # # We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices. # # But not just for condos in Tribeca... 
# # - [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million. # - [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test. # - [ ] Do one-hot encoding of categorical features. # - [ ] Do feature selection with `SelectKBest`. # - [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set) # - [ ] Get mean absolute error for the test set. # - [ ] As always, commit your notebook to your fork of the GitHub repo. # # The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal. # # # ## Stretch Goals # # Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from. # # - [ ] Add your own stretch goal(s) ! # - [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥 # - [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html). # - [ ] Learn more about feature selection: # - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) # - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) # - [mlxtend](http://rasbt.github.io/mlxtend/) library # - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) # - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson. # - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients. # - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way. # - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). # + id="o9eSnDYhUGD7" # %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # + id="8Y-aT700kLcQ" import numpy as np # + id="QJBD4ruICm1m" import pandas as pd import pandas_profiling # Read New York City property sales data df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv', parse_dates= ['SALE DATE'], index_col= 'SALE DATE') # Change column names: replace spaces with underscores df.columns = [col.replace(' ', '_') for col in df] # SALE_PRICE was read as strings. # Remove symbols, convert to integer df['SALE_PRICE'] = ( df['SALE_PRICE'] .str.replace('$','') .str.replace('-','') .str.replace(',','') .astype(int) ) # + id="1ksiUuHZgWxQ" # BOROUGH is a numeric column, but arguably should be a categorical feature, # so convert it from a number to a string df['BOROUGH'] = df['BOROUGH'].astype(str) # + id="l4JwfVWlgWxV" # Reduce cardinality for NEIGHBORHOOD feature # Get a list of the top 10 neighborhoods top10 = df['NEIGHBORHOOD'].value_counts()[:10].index # At locations where the neighborhood is NOT in the top 10, # replace the neighborhood with 'OTHER' df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' # + id="LZBxoyNqhLiA" # Subsetting the dataframe condition1 = df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS' condition2 = df['SALE_PRICE'] > 100000 condition3 = df['SALE_PRICE'] < 2000000 df = df[condition1 & condition2 & condition3] # + id="KmdF_Ym71E2z" outputId="ab732e30-c828-4fa1-cb57-c5e6573ad755" colab={"base_uri": "https://localhost:8080/", "height": 391} print(df.shape) df.isnull().sum() # + id="E3OAh2l01Kdn" outputId="d8c6529b-7bc7-4edb-d7f3-2523ce05e8b7" colab={"base_uri": "https://localhost:8080/", "height": 357} # Removing useless columns filled with NaNs df.dropna(axis= 1, inplace= True) print(df.shape) df.isnull().sum() # + id="Qu-dLeyDl_3K" outputId="ebcdfef8-fbc7-49ab-abcc-4cbfc0b10215" colab={"base_uri": "https://localhost:8080/", "height": 170} # Find categorical variables df.select_dtypes('object').nunique() # + id="2k8c6KxQmmtE" # Drop high cardinality columns hc_cols = [col for col in df.select_dtypes('object').columns if df[col].nunique() > 12] df.drop(hc_cols, inplace= True, axis = 1) # + id="0SFU5taMhiUl" # Train/test split target = 'SALE_PRICE' y = df[target] X = df.drop(columns= target) cutoff = '2019-04-01' mask = X.index < cutoff X_train, y_train = X.loc[mask], y.loc[mask] X_test, y_test = X.loc[~mask], y.loc[~mask] # + id="MRmKyhGYldYe" outputId="836f7186-700f-41b8-dd15-17f99c076bda" colab={"base_uri": "https://localhost:8080/", "height": 105} # One hot encoding categorical variables from category_encoders import OneHotEncoder ohe = OneHotEncoder(use_cat_names= True) XT_train = ohe.fit_transform(X_train) XT_test = ohe.transform(X_test) # + id="7NJl-Oo90hlM" outputId="1ae9ca17-f643-46df-a9dc-ae1cbc530644" colab={"base_uri": "https://localhost:8080/", "height": 85} print(XT_train.shape) print(XT_test.shape) print(y_train.shape) print(y_test.shape) # + id="YhaICBTNjOP-" outputId="ae3c570d-0deb-4ae2-80cd-185c5bc8b3c2" colab={"base_uri": "https://localhost:8080/", "height": 374} XT_train.columns # + id="wN8GGXXskBkb" outputId="920d3cf0-54b7-4f57-9f80-28a3c28958ec" colab={"base_uri": "https://localhost:8080/", "height": 119} # Feature selection with selectkbest from sklearn.feature_selection import SelectKBest, f_regression selector = SelectKBest(k= 15) XT_train_selected = selector.fit_transform(XT_train, y_train) XT_test_selected = selector.transform(XT_test) # + id="Ejo7LhK2sLER" 
outputId="246e6d11-d9d5-41b3-ad54-a1560144955c" colab={"base_uri": "https://localhost:8080/", "height": 51} print(XT_train_selected.shape) print(XT_test_selected.shape) # + id="I3oHbHAX2BFe" outputId="f8ab8f31-eecf-4d15-db24-b003cf609632" colab={"base_uri": "https://localhost:8080/", "height": 272} selected_mask = selector.get_support() selected_names = XT_train.columns[selected_mask] [print(name) for name in selected_names]; # + id="KqbHaTVM2Nik" outputId="4d3e7fe1-ec1a-4080-d692-a1cb0b7bba5b" colab={"base_uri": "https://localhost:8080/", "height": 51} # Modeling using linear and ridge regression from sklearn.linear_model import Ridge, LinearRegression from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score LR_model = LinearRegression() LR_model.fit(XT_train_selected, y_train) ridge_model = Ridge(alpha = 5, normalize= False) ridge_model.fit(XT_train_selected, y_train) # + id="iXDWuIdH2Q0i" outputId="29b6489e-593d-419f-d4b3-257db416fec5" colab={"base_uri": "https://localhost:8080/", "height": 119} print('Linear Reg training MAE: ', mean_absolute_error(y_train, LR_model.predict(XT_train_selected))) print('Linear Reg testing MAE: ', mean_absolute_error(y_test, LR_model.predict(XT_test_selected))) print('Ridge training MAE: ', mean_absolute_error(y_train, ridge_model.predict(XT_train_selected))) print('Ridge testing MAE: ', mean_absolute_error(y_test, ridge_model.predict(XT_test_selected))) print('Ridge R^2: ', r2_score(y_train, ridge_model.predict(XT_train_selected))) print('Ridge R^2: ', r2_score(y_test, ridge_model.predict(XT_test_selected))) # + id="DESyrC1zAEob" outputId="8daa1e26-cd60-44e5-8e71-3c82d5d91c6b" colab={"base_uri": "https://localhost:8080/", "height": 657} X_train # + id="nzqMMphM94Co" outputId="7f8cf168-6f25-4f18-b557-14354a6b8df7" colab={"base_uri": "https://localhost:8080/", "height": 1000} maes = [] Rsquareds = [] for k in range(1, len(XT_train.columns)+1): selector = SelectKBest(score_func=f_regression, k=k) X_train_selected = selector.fit_transform(XT_train, y_train) X_test_selected = selector.transform(XT_test) #model = LinearRegression() model = Ridge(alpha = 5, normalize= False) model.fit(X_train_selected, y_train) y_pred = model.predict(X_test_selected) mae = mean_absolute_error(y_test, y_pred) Rsquare = r2_score(y_test, y_pred) maes.append(mae) Rsquareds.append(Rsquare) # + id="OVq1fg5Z5rQf" outputId="a6a4a88a-1413-4102-b805-cf7a11169d99" colab={"base_uri": "https://localhost:8080/", "height": 296} import matplotlib.pyplot as plt # %matplotlib inline x = range(1, len(maes)+1) y = maes print(min(maes)) plt.plot(x, y) plt.xlabel('Number of Features') plt.ylabel('Mean Absolute Error') plt.show(); # + id="O6KMo7ZTbE3R" outputId="2ffbd0ed-dd60-4d72-b9ff-5e340e4da448" colab={"base_uri": "https://localhost:8080/", "height": 296} print(max(Rsquareds)) y = Rsquareds plt.plot(x, y) plt.xlabel('Number of Features') plt.ylabel('R^2') plt.show(); # + id="oFNobrQ_dusJ" # + id="sdwbp-Ohc8M2" 157492.24827086728 for Linear Regression # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tensorflow_env # language: python # name: tensorflow_env # --- # ## Keras Assignment - Diabetes Dataset """ A. Build a sequential model using Keras on top of this Diabetes dataset to find out if the patient has diabetes or not, using ‘Pregnancies’, ‘Glucose’ & ‘BloodPressure’ as independent columns. a. This model should have 1 hidden layer with 8 nodes b. 
Use Stochastic Gradient as the optimization algorithm c. Fit the model, with number of epochs to be 100 and batch size to be 10 B. Build another sequential model where ‘Outcome’ is the dependent variable and all other columns are predictors. a. This model should have 3 hidden layers with 16 nodes in each layer b. Use ‘adam’ as the optimization algorithm c. Fit the model, with number of epochs to be 150 and batch size to be 10 """ import pandas as pd from sklearn.model_selection import train_test_split from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense diabetes = pd.read_csv("C:\\Users\\black\\Desktop\\ai_py\\datasets\\diabetes.csv") diabetes # ### Selecting independent and dependent variables for model_1 X = diabetes.iloc[:,:3] X.shape y = diabetes.iloc[:,8:9] y.shape # ### Splitting data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.7, random_state=40) # ### Building, optimizing and fitting model_1 model_1 = Sequential() model_1.add(Dense(8, activation="relu", input_dim=3)) model_1.add(Dense(1,activation="softmax")) model_1.compile(optimizer="sgd", loss="binary_crossentropy") model_1.fit(X_train, y_train, epochs=100, batch_size=10) # ### Selecting independent and dependent variables for model_2 X = diabetes.iloc[:,:8] X.shape y = diabetes.iloc[:,8:9] y.shape # ### Splitting data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.7, random_state=40) # ### Building, optimizing and fitting model_2 model_2 = Sequential() model_2.add(Dense(16, activation="relu", input_dim=8)) model_2.add(Dense(16, activation="relu")) model_2.add(Dense(16, activation="relu")) model_2.add(Dense(1,activation="softmax")) model_2.compile(optimizer="adam", loss="binary_crossentropy") model_2.fit(X_train, y_train, epochs=150, batch_size=10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Stekhouse Overall Rating # # Here is the counting rate from various platform website, this time I'll be using Google Maps and Zomato. The sections will be divided based on genre : Western, Western Specialized, Fushioned, and Local. As we can see, Local Steakhouse are the most available in this Jakarta area, whereas Western Specialized and Fushioned are the least. import pandas as pd from pandasql import sqldf steak = pd.read_csv('../input/steakhouse-jakarta/Steakhouse.csv', sep = ";") # # Western Section # # Based on total rating, the highest is **Cut & Grill (Tebet, South Jakarta)** which scored 4.6 from 3086 votes, also the best among South Jakarta's Western Steakhouse. # # If being divided for city representative, ** (Thamrin)** is the best western steak from Central Jakarta, scored 4.55 from 1047 votes. **B'Steak Grill & Pancake Green Ville** has the highest rating among West Jakarta's Western style steakhouse, scored 4.55 from 2618 votes. Rest of them is **Street Steak (Kelapa Gading)** from North Jakarta scored 4.5 from 1919 votes. 
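# The total-rating formula used throughout these sections can also be reproduced in plain pandas as a cross-check. The cell below is a sketch only (the notebook's actual tables are built with pandasql in the following cells) and assumes the rating and count columns from Steakhouse.csv can be coerced to numbers.

# +
western = steak[steak['specialized'] == 'Western'].copy()
for col in ['google_rating', 'platform_rating', 'google_count', 'platform_count']:
    western[col] = pd.to_numeric(western[col], errors='coerce')
western['total_rating'] = (western['google_rating'] + western['platform_rating']) / 2
western['total_count'] = western['google_count'] + western['platform_count']
western['Recommendation'] = western['total_rating'].ge(3.5).map(
    {True: 'Recommended', False: 'Reconsidered'})
western.sort_values('total_rating', ascending=False).head()
# -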
steak = sqldf("""SELECT restaurant_name, city, district, ltrim(google_rating) as google_rating, ltrim(platform_rating) as platform_rating, ltrim((google_rating + platform_rating)/2) as total_rating, ltrim(google_count + platform_count) as total_count, CASE WHEN (google_rating + platform_rating)/2 >= 3.5 THEN "Recommended" ELSE "Reconsidered" END AS "Recommendation" FROM steak WHERE specialized = "Western" ORDER BY total_rating DESC;""") steak.style.hide_index() # # Western Specialized Section # # As for Western Specialized Section, **Tucano's **, whom serving Brazilian-style steak buffet is the best among list (scored 4.7 from 1107 votes), also best among Central Jakarta on this section. **Carbon**, which serving Mexican food got 4.5 from 165 votes within South Jakarta area. # # For other genre, best Italian restaurant is **Caffe Milano** (Thamrin, scored 4.45 from 1869 votes), meanwhile for French restaurant **Cafe de Paris (Kebayoran Lama)** is the most worth to try (scored 3.6 from 37 votes). steak = sqldf("""SELECT restaurant_name, specialized, city, district, ltrim(google_rating) as google_rating, ltrim(platform_rating) as platform_rating, ltrim((google_rating + platform_rating)/2) as total_rating, ltrim(google_count + platform_count) as total_count, CASE WHEN (google_rating + platform_rating)/2 >= 3.5 THEN "Recommended" ELSE "Reconsidered" END AS "Recommendation" FROM steak WHERE specialized = 'Brazilian' OR specialized = 'Mexican' OR specialized = 'Argentinian' OR specialized = 'Italian' OR specialized = 'French' ORDER BY total_rating DESC;""") steak.style.hide_index() # # Fushioned # # In this section, **Sumibi (Gandaria)** won the best restaurant among fushioned restaurant, scored 4.65 from 1655 votes, also the best for Japanese restaurant and South Jakarta area. **Benihana Japanese Steakhouse (Thamrin)** scored 4.45 from 410 votes, made them the highest rangking thorough Central Jakarta. As for North Jakarta, they have **Pepperloin Steakhouse** which scored 4.35 from 1015 votes. # # For other genre, **VIN+ Wine & Beyond (Kemang)** won the most for Indonesian fushion, scored 4.4 from 1724 votes. **Ninety Nine (Thamrin, Grand Indonesia)** won for fushioned Indonesian and Singaporean, scored 4.3 from 655 votes. **Greyhond Cafe (Thamrin, Grand Indonesia)** is the only one Thailand fushioned restaurant among this list (scored 4.3 from 1576 votes). steak = sqldf("""SELECT restaurant_name, fushioned, city, district, ltrim(google_rating) as google_rating, ltrim(platform_rating) as platform_rating, ltrim((google_rating + platform_rating)/2) as total_rating, ltrim(google_count + platform_count) as total_count, CASE WHEN (google_rating + platform_rating)/2 >= 3.5 THEN "Recommended" ELSE "Reconsidered" END AS "Recommendation" FROM steak WHERE fushioned NOT LIKE '%no' ORDER BY total_rating DESC;""") steak.style.hide_index() # # Local Section # # This is the most interesting section, which is the most restaurant available thorough Jakarta area. In this section, **Holycow! K** won the most which scored 4.55 from 4601 votes, also the best among Local Western and West Jakarta area. For local section, **Joni Steak (Gajah Mada)** won the section with score 4.3 from 5638 votes, also the best local steak in Central Jakarta and thorough Jakarta area. # # **Holycow! Kalimalang** (scored 4.4 from 2562 votes) and **Waroeng Steak & Shake Kalimalang** (scored 3.95 from 3532 votes) is the best for both local westerned and local restaurant among East Jakarta area. 
**The Real Holysteak** (scored 4.35 from 471 votes) and **Andakar** (scored 4.3 from 2661 votes) also the best among South Jakarta area for both local westerned and local restaurant. # # Last but not least, **The Garden** (scored 4.45 from 3268 votes) and **Meaters Steak & Ribs** (scored 4.2 from 744 votes) are the best for both local westerned and local restaurant in North Jakarta area. steak = sqldf("""SELECT restaurant_name, specialized, city, district, ltrim(google_rating) as google_rating, ltrim(platform_rating) as platform_rating, ltrim((google_rating + platform_rating)/2) as total_rating, ltrim(google_count + platform_count) as total_count, CASE WHEN (google_rating + platform_rating)/2 >= 3.5 THEN "Recommended" ELSE "Reconsidered" END AS "Recommendation" FROM steak WHERE specialized = 'Local' OR specialized = 'Local Westerned' ORDER BY total_rating DESC;""") steak.style.hide_index() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Abrir aquivos completo # + # Função 'open' abre um arquivo # r - read # w - write arqTitanic = open('titanic.csv', 'r') # - print(arqTitanic) type(arqTitanic) # + #Função read lê o arquivo e atribui o conteúdo a uma variável dadosTitanic = arqTitanic.read() # - type(dadosTitanic) print(dadosTitanic) # + # fecha o arquivo aberto anteriormente arqTitanic.close() # - len(dadosTitanic) # # Abrir arquivo linha a linha arqTitanic2 = open('titanic.csv', 'r') type(arqTitanic2) # Função readlines abre o arquivo e coloca o conteúdo linha a linha dadosTitanic2 = arqTitanic2.readlines() dadosTitanic2 dadosTitanic2[0] dadosTitanic2[1] len(dadosTitanic2) type(dadosTitanic2) arqTitanic2.close() # # Criar arquivos with open('meusdados.csv', 'w') as arquivo: arquivo.write('Dados a serem gravados no meu arquivo\n') arquivo.write('Dados a serem gravados no meu arquivo 02\n') arquivo.write('Dados a serem gravados no meu arquivo 03\n') # # Posições Específicas # + # Le o arquivo e para na última linha with open('titanic.csv') as arq: print(arq.read(10)) print(arq.read(5)) print(arq.read(1)) print(arq.read(2)) # - arq.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 5 Shot # ### 80 classes # Detection mAP = [24.9, 25.0, 24.9, 25.0, 24.9] mAP50 = [40.5, 40.5, 40.4, 40.6, 40.3] mAP75 = [26.6, 26.8, 26.6, 26.7, 26.7] mAP_small = [13.1, 13.8, 12.7, 13.7, 13.4] mAP_medium = [25.2, 24.9, 24.9, 24.9, 25.0] mAP_large = [35.4, 36.1, 36.1, 35.8, 36.1] AR1 = [21.9, 21.8, 22.0, 21.7, 21.8] AR10 = [40.1, 40.3, 40.1, 40.0, 40.0] AR100 = [41.7, 42.0, 41.7, 41.7, 41.7] AR_small = [23.3, 24.3, 23.9, 24.1, 24.0] AR_medium = [44.4, 44.4, 44.2, 44.2, 44.1] AR_large = [58.8, 59.4, 59.2, 59.0, 59.2] # Segmentation mAP = [22.0, 22.0, 22.1, 22.1, 21.9] mAP50 = [37.9, 37.6, 38.0, 38.0, 37.7] mAP75 = [22.8, 22.8, 23.0, 23.0, 22.7] mAP_small = [11.0, 11.9, 10.7, 11.8, 11.5] mAP_medium = [22.3, 22.1, 22.3, 22.1, 22.3] mAP_large = [32.1, 32.4, 32.6, 32.4, 32.3] AR1 = [19.7, 19.6, 19.9, 19.6, 19.7] AR10 = [35.7, 35.8, 35.8, 35.8, 35.7] AR100 = [37.2, 37.2, 37.2, 37.2, 37.2] AR_small = [20.1, 21.3, 21.1, 21.0, 21.3] AR_medium = [39.7, 39.5, 39.5, 39.5, 39.5] AR_large = [52.4, 52.3, 52.7, 52.8, 52.6] # ### S1 # #### background # Detection mAP = [25.7, 
25.7, 25.6, 25.6, 25.7] mAP50 = [42.5, 42.4, 42.3, 42.2, 42.4] mAP75 = [27.2, 27.1, 27.1, 26.9, 27.4] mAP_small = [12.6, 12.5, 13.1, 12.5, 12.5] mAP_medium = [25.9, 25.6, 25.5, 25.3, 25.7] mAP_large = [36.0, 36.1, 36.1, 36.3, 36.4] AR1 = [22.1, 22.1, 22.1, 22.1, 22.0] AR10 = [40.6, 40.6, 40.7, 40.5, 40.4] AR100 = [42.5, 42.4, 42.5, 42.3, 42.2] AR_small = [24.3, 24.2, 24.7, 24.2, 24.3] AR_medium = [45.3, 45.3, 45.2, 44.8, 45.0] AR_large = [59.4, 59.2, 59.3, 59.4, 59.4] # Segmentation mAP = [22.8, 22.7, 22.6, 22.8, 22.8] mAP50 = [39.8, 39.6, 39.6, 39.7, 39.8] mAP75 = [23.2, 23.1, 22.8, 23.1, 23.3] mAP_small = [10.4, 10.1, 10.8, 10.2, 10.2] mAP_medium = [23.2, 23.0, 22.9, 22.8, 23.1] mAP_large = [32.9, 33.0, 32.9, 33.2, 33.2] AR1 = [20.3, 20.2, 20.2, 20.3, 20.1] AR10 = [36.3, 36.1, 36.1, 36.1, 36.0] AR100 = [37.8, 37.6, 37.6, 37.7, 37.5] AR_small = [20.9, 20.5, 21.0, 20.8, 20.6] AR_medium = [40.8, 40.9, 40.7, 40.6, 40.6] AR_large = [53.4, 53.0, 52.9, 53.3, 53.4] # #### one-shot # Detection mAP = [9.4, 9.4, 9.5, 9.4, 9.3] mAP50 = [16.9, 16.7, 16.8, 16.8, 16.6] mAP75 = [9.7, 9.6, 9.7, 9.7, 9.6] mAP_small = [5.4, 5.4, 5.9, 6.2, 5.3] mAP_medium = [9.5, 9.2, 9.1, 9.5, 9.0] mAP_large = [14.4, 14.8, 14.9, 14.2, 14.9] AR1 = [11.2, 11.0, 10.8, 11.0, 10.9] AR10 = [28.0, 28.2, 28.2, 28.0, 28.2] AR100 = [29.4, 29.4, 29.4, 29.2, 29.6] AR_small = [17.9, 17.5, 14.5, 14.6, 14.4] AR_medium = [32.4, 31.7, 32.0, 31.4, 32.0] AR_large = [45.8, 45.7, 45.8, 45.1, 46.6] # Segmentation mAP = [7.3, 7.4, 7.5, 7.4, 7.4] mAP50 = [14.7, 14.7, 14.9, 15.0, 14.8] mAP75 = [6.5, 6.7, 7.0, 6.5, 6.8] mAP_small = [4.1, 4.2, 4.4, 4.7, 4.2] mAP_medium = [7.1, 7.1, 7.2, 7.6, 7.0] mAP_large = [11.9, 12.2, 12.4, 11.9, 12.4] AR1 = [9.1, 9.2, 9.0, 9.3, 9.2] AR10 = [23.3, 23.8, 23.8, 23.7, 23.7] AR100 = [24.5, 24.7, 24.8, 24.7, 24.7] AR_small = [15.1, 15.3, 11.5, 11.8, 11.7] AR_medium = [26.8, 26.6, 27.2, 26.5, 26.7] AR_large = [39.3, 38.8, 39.4, 39.2, 39.8] # ### S2 # #### background # Detection mAP = [24.2, 24.4, 24.3, 24.4, 24.3] mAP50 = [40.3, 40.7, 40.5, 40.6, 40.5] mAP75 = [26.0, 26.2, 26.2, 26.0, 26.1] mAP_small = [12.5, 12.5, 13.1, 13.2, 12.7] mAP_medium = [25.0, 25.4, 25.2, 25.1, 25.0] mAP_large = [35.0, 35.3, 35.1, 35.6, 35.7] AR1 = [21.3, 21.4, 21.4, 21.4, 21.4] AR10 = [39.7, 39.6, 39.8, 39.7, 39.7] AR100 = [41.3, 41.3, 41.5, 41.4, 41.2] AR_small = [24.2, 24.0, 24.3, 24.3, 23.9] AR_medium = [44.2, 44.4, 44.4, 44.1, 44.1] AR_large = [59.7, 59.8, 59.9, 60.1, 59.8] # Segmentation mAP = [21.0, 21.1, 21.0, 21.0, 21.1] mAP50 = [37.1, 37.4, 37.3, 37.4, 37.2] mAP75 = [21.6, 21.9, 21.6, 21.4, 21.8] mAP_small = [10.2, 10.2, 10.8, 10.8, 10.3] mAP_medium = [21.6, 21.9, 21.7, 21.7, 21.7] mAP_large = [30.9, 31.2, 31.0, 31.2, 31.7] AR1 = [19.1, 19.2, 19.2, 19.1, 19.3] AR10 = [34.9, 34.9, 35.0, 34.9, 34.9] AR100 = [36.2, 36.3, 36.4, 36.2, 36.2] AR_small = [20.8, 20.6, 20.7, 20.9, 20.5] AR_medium = [38.6, 38.9, 38.8, 38.7, 38.7] AR_large = [52.8, 53.0, 52.6, 53.0, 53.1] # #### one-shot # Detection mAP = [11.6, 11.8, 11.6, 11.7, 11.7] mAP50 = [20.0, 20.0, 19.9, 19.9, 20.0] mAP75 = [12.1, 12.3, 11.9, 12.3, 11.8] mAP_small = [6.3, 6.1, 6.2, 6.4, 6.6] mAP_medium = [9.5, 9.9, 9.3, 9.9, 9.9] mAP_large = [19.3, 19.9, 18.9, 19.5, 18.9] AR1 = [13.6, 13.3, 13.3, 13.1, 13.3] AR10 = [29.3, 29.1, 29.0, 29.1, 29.2] AR100 = [30.4, 30.1, 30.2, 30.3, 30.4] AR_small = [15.6, 14.6, 14.9, 15.2, 15.4] AR_medium = [30.6, 30.6, 30.7, 30.7, 30.9] AR_large = [48.0, 48.1, 48.2, 48.1, 49.0] # Segmentation mAP = [10.0, 10.3, 10.2, 10.3, 10.2] mAP50 = [17.7, 18.1, 18.0, 
18.2, 18.1] mAP75 = [10.3, 10.5, 10.5, 10.6, 10.5] mAP_small = [5.2, 4.9, 5.1, 5.2, 5.3] mAP_medium = [8.5, 8.6, 8.3, 8.7, 8.7] mAP_large = [16.9, 17.9, 16.8, 17.4, 16.9] AR1 = [12.0, 11.9, 12.0, 11.9, 12.0] AR10 = [26.0, 25.9, 26.0, 25.8, 26.1] AR100 = [27.1, 26.8, 27.0, 27.0, 27.2] AR_small = [12.7, 12.0, 12.5, 12.7, 12.7] AR_medium = [27.8, 27.5, 27.6, 27.7, 27.9] AR_large = [43.9, 44.0, 44.0, 43.9, 44.6] # ### S3 # #### background # Detection mAP = [25.8, 25.8, 25.8, 25.8, 25.9] mAP50 = [41.4, 41.5, 41.6, 41.5, 41.7] mAP75 = [28.0, 27.9, 27.9, 28.0, 28.0] mAP_small = [12.8, 12.7, 12.7, 12.7, 12.6] mAP_medium = [24.9, 25.2, 25.2, 25.3, 25.2] mAP_large = [38.0, 38.3, 38.2, 38.2, 38.4] AR1 = [22.5, 22.4, 22.3, 22.4, 22.6] AR10 = [41.2, 41.0, 41.1, 40.9, 41.0] AR100 = [42.8, 42.8, 42.8, 42.6, 42.7] AR_small = [23.4, 23.8, 23.4, 23.6, 23.3] AR_medium = [45.1, 45.3, 45.1, 45.0, 45.2] AR_large = [61.4, 61.6, 61.6, 61.5, 61.2] # Segmentation mAP = [22.3, 22.3, 22.4, 22.4, 22.4] mAP50 = [38.5, 38.7, 38.8, 38.6, 38.9] mAP75 = [22.7, 22.8, 22.6, 23.0, 22.8] mAP_small = [10.4, 10.2, 10.3, 10.2, 10.2] mAP_medium = [21.6, 21.9, 22.0, 22.0, 21.9] mAP_large = [34.0, 34.4, 34.0, 34.3, 34.4] AR1 = [20.2, 20.4, 20.1, 20.2, 20.4] AR10 = [36.5, 36.5, 36.5, 36.4, 36.4] AR100 = [37.8, 37.9, 37.9, 37.8, 37.8] AR_small = [19.8, 19.8, 19.7, 19.9, 19.7] AR_medium = [40.1, 40.2, 40.4, 40.1, 40.2] AR_large = [56.1, 56.3, 56.3, 56.5, 56.2] # #### one-shot # Detection mAP = [10.0, 9.7, 9.8, 9.8, 9.5] mAP50 = [18.5, 18.2, 18.3, 18.4, 17.5] mAP75 = [10.0, 9.2, 9.4, 9.7, 9.1] mAP_small = [6.5, 6.6, 6.9, 6.9, 6.7] mAP_medium = [9.5, 9.6, 8.8, 9.4, 8.7] mAP_large = [18.1, 17.5, 17.8, 17.2, 17.1] AR1 = [9.7, 9.8, 9.6, 9.5, 9.3] AR10 = [25.0, 25.1, 25.0, 25.0, 24.8] AR100 = [26.0, 26.0, 26.0, 26.1, 25.8] AR_small = [16.0, 16.3, 16.5, 16.4, 16.1] AR_medium = [26.3, 26.6, 26.2, 26.8, 26.1] AR_large = [42.6, 43.3, 42.3, 42.1, 41.5] # Segmentation mAP = [9.1, 9.0, 9.0, 9.0, 8.7] mAP50 = [17.5, 17.3, 17.5, 17.6, 17.1] mAP75 = [8.7, 8.6, 8.5, 8.5, 8.3] mAP_small = [5.4, 5.4, 5.7, 5.7, 5.6] mAP_medium = [8.5, 8.9, 7.9, 8.3, 7.6] mAP_large = [17.1, 16.4, 16.7, 16.4, 16.6] AR1 = [9.5, 9.6, 9.4, 9.3, 9.0] AR10 = [23.1, 23.2, 23.0, 23.0, 22.8] AR100 = [24.0, 24.0, 23.9, 24.0, 23.6] AR_small = [14.6, 14.4, 14.8, 14.4, 14.4] AR_medium = [24.5, 24.5, 24.4, 24.6, 23.5] AR_large = [38.4, 38.7, 38.4, 38.4, 37.8] # ### S4 # #### background # Detection mAP = [25.1, 25.0, 25.1, 25.1, 25.0] mAP50 = [40.8, 40.8, 41.1, 41.0, 40.8] mAP75 = [26.8, 26.5, 26.8, 26.9, 26.8] mAP_small = [13.2, 12.4, 12.6, 13.5, 12.7] mAP_medium = [23.8, 23.9, 24.0, 23.8, 23.7] mAP_large = [36.7, 36.2, 36.0, 36.2, 36.5] AR1 = [21.3, 21.6, 21.6, 21.4, 21.5] AR10 = [40.2, 40.3, 40.3, 40.3, 40.2] AR100 = [42.3, 42.2, 42.4, 42.3, 42.3] AR_small = [24.8, 23.9, 25.1, 24.5, 25.1] AR_medium = [44.4, 44.5, 44.7, 44.3, 44.4] AR_large = [59.6, 58.8, 58.9, 59.0, 59.1] # Segmentation mAP = [22.1, 22.0, 22.1, 22.2, 22.0] mAP50 = [37.9, 37.9, 38.0, 38.2, 37.7] mAP75 = [23.3, 23.1, 23.2, 23.1, 23.1] mAP_small = [10.9, 10.1, 10.3, 11.2, 10.3] mAP_medium = [20.8, 20.9, 20.9, 20.8, 20.8] mAP_large = [33.3, 33.0, 32.7, 32.8, 32.9] AR1 = [19.3, 19.6, 19.7, 19.4, 19.6] AR10 = [35.8, 35.9, 35.9, 35.9, 35.9] AR100 = [37.6, 37.5, 37.7, 37.6, 37.7] AR_small = [21.7, 20.8, 21.6, 21.3, 21.7] AR_medium = [39.3, 39.1, 39.5, 39.3, 39.4] AR_large = [53.8, 53.8, 53.3, 53.4, 53.7] # #### one-shot # Detection mAP = [10.5, 10.7, 10.7, 10.5, 10.4] mAP50 = [19.0, 19.2, 19.0, 19.0, 18.8] mAP75 = [10.7, 
10.7, 10.8, 10.8, 10.1] mAP_small = [5.6, 5.4, 6.1, 5.9, 5.9] mAP_medium = [10.4, 10.8, 10.6, 10.4, 10.0] mAP_large = [17.0, 16.9, 16.2, 16.4, 16.5] AR1 = [11.6, 12.0, 12.0, 11.7, 11.7] AR10 = [27.7, 27.7, 27.9, 27.8, 27.7] AR100 = [29.5, 29.8, 29.6, 29.5, 29.5] AR_small = [15.0, 14.7, 15.0, 14.7, 14.8] AR_medium = [32.6, 33.5, 33.2, 33.6, 32.8] AR_large = [47.1, 48.2, 47.8, 47.2, 47.3] # Segmentation mAP = [8.6, 8.6, 8.6, 8.4, 8.3] mAP50 = [17.0, 16.8, 17.0, 16.7, 16.8] mAP75 = [8.0, 7.9, 7.8, 7.8, 7.5] mAP_small = [4.0, 3.8, 4.2, 4.3, 4.1] mAP_medium = [8.8, 9.0, 9.0, 8.8, 8.5] mAP_large = [14.8, 14.6, 14.1, 14.4, 14.2] AR1 = [10.3, 10.4, 10.6, 10.2, 10.1] AR10 = [24.2, 24.3, 24.3, 24.3, 24.3] AR100 = [25.9, 26.1, 25.9, 25.8, 25.9] AR_small = [12.4, 12.0, 12.4, 12.4, 12.3] AR_medium = [30.2, 31.0, 30.2, 30.6, 30.1] AR_large = [42.4, 42.4, 42.1, 41.5, 41.7] # # Analyses main table import numpy as np # + # background detection mAP50_s1 = [42.5, 42.4, 42.3, 42.2, 42.4] mAP50_s2 = [40.3, 40.7, 40.5, 40.6, 40.5] mAP50_s3 = [41.4, 41.5, 41.6, 41.5, 41.7] mAP50_s4 = [40.8, 40.8, 41.1, 41.0, 40.8] mAP50 = [mAP50_s1, mAP50_s2, mAP50_s3, mAP50_s4] mAP50_runwise = [np.mean([mAP50_s1[i], mAP50_s2[i], mAP50_s3[i], mAP50_s4[i]]) for i in range(5)] mean_mAP50 = [np.mean(mAP50[i]) for i in range(4)] std_mAP50 = [np.std(mAP50[i]) for i in range(4)] conf95_mAP50 = [std_mAP50[i]*1.96/np.sqrt(5) for i in range(4)] print('${:.1f}\pm{:.2f}$'.format(np.mean(mAP50_runwise), np.std(mAP50_runwise)*1.96/np.sqrt(5))) # + # background segmentation mAP50_s1 = [39.8, 39.6, 39.6, 39.7, 39.8] mAP50_s2 = [37.1, 37.4, 37.3, 37.4, 37.2] mAP50_s3 = [38.5, 38.7, 38.8, 38.6, 38.9] mAP50_s4 = [37.9, 37.9, 38.0, 38.2, 37.7] mAP50 = [mAP50_s1, mAP50_s2, mAP50_s3, mAP50_s4] mAP50_runwise = [np.mean([mAP50_s1[i], mAP50_s2[i], mAP50_s3[i], mAP50_s4[i]]) for i in range(5)] mean_mAP50 = [np.mean(mAP50[i]) for i in range(4)] std_mAP50 = [np.std(mAP50[i]) for i in range(4)] conf95_mAP50 = [std_mAP50[i]*1.96/np.sqrt(5) for i in range(4)] print('${:.1f}\pm{:.2f}$'.format(np.mean(mAP50_runwise), np.std(mAP50_runwise)*1.96/np.sqrt(5))) # + # one-shot detection mAP50_s1 = [16.9, 16.7, 16.8, 16.8, 16.6] mAP50_s2 = [20.0, 20.0, 19.9, 19.9, 20.0] mAP50_s3 = [18.5, 18.2, 18.3, 18.4, 17.5] mAP50_s4 = [19.0, 19.2, 19.0, 19.0, 18.8] mAP50 = [mAP50_s1, mAP50_s2, mAP50_s3, mAP50_s4] mAP50_runwise = [np.mean([mAP50_s1[i], mAP50_s2[i], mAP50_s3[i], mAP50_s4[i]]) for i in range(5)] mean_mAP50 = [np.mean(mAP50[i]) for i in range(4)] std_mAP50 = [np.std(mAP50[i]) for i in range(4)] conf95_mAP50 = [std_mAP50[i]*1.96/np.sqrt(5) for i in range(4)] print('${:.1f}\pm{:.2f}$'.format(np.mean(mAP50_runwise), np.std(mAP50_runwise)*1.96/np.sqrt(5))) # + # one-shot segmentation mAP50_s1 = [14.7, 14.7, 14.9, 15.0, 14.8] mAP50_s2 = [17.7, 18.1, 18.0, 18.2, 18.1] mAP50_s3 = [17.5, 17.3, 17.5, 17.6, 17.1] mAP50_s4 = [17.0, 16.8, 17.0, 16.7, 16.8] mAP50 = [mAP50_s1, mAP50_s2, mAP50_s3, mAP50_s4] mAP50_runwise = [np.mean([mAP50_s1[i], mAP50_s2[i], mAP50_s3[i], mAP50_s4[i]]) for i in range(5)] mean_mAP50 = [np.mean(mAP50[i]) for i in range(4)] std_mAP50 = [np.std(mAP50[i]) for i in range(4)] conf95_mAP50 = [std_mAP50[i]*1.96/np.sqrt(5) for i in range(4)] print('${:.1f}\pm{:.2f}$'.format(np.mean(mAP50_runwise), np.std(mAP50_runwise)*1.96/np.sqrt(5))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="HA_PFuoRgisM" # 
!pip install sorted-months-weekdays --quiet # !pip install sort-dataframeby-monthorweek --quiet # + id="5ECDwY3lfQXs" import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import calendar from sorted_months_weekdays import * from sort_dataframeby_monthorweek import * # + colab={"base_uri": "https://localhost:8080/", "height": 444} id="ScY_EJwZfacU" outputId="1f7113ab-4f97-4176-84d3-cf95a74d16f2" data = pd.read_csv("/content/covid_impact_on_airport_traffic.csv") data.head() # + colab={"base_uri": "https://localhost:8080/"} id="IEBgnqXKfafO" outputId="be379daa-fc78-4b53-9f7d-935dd28d62fd" data.shape # + colab={"base_uri": "https://localhost:8080/"} id="byrqBYbeltcj" outputId="e0644d70-0483-4a11-ca23-0af68c1d24ff" data.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="IvyQsJYVgPMi" outputId="2d5ab05a-3b47-4e0c-88e4-4a2123207a1b" data.isnull().sum() # + id="uUHesE8iNz9E" data['long'] = data['Centroid'].apply(lambda x: x[6:-1].split(' ')[0]) data['lat'] = data['Centroid'].apply(lambda x: x[6:-1].split(' ')[1]) # + colab={"base_uri": "https://localhost:8080/", "height": 245} id="BzDntZjvQDuB" outputId="10d524be-5dcf-4db0-fdf1-7a4e201b57f4" data.head(2) # + [markdown] id="aup6W9aYhGGy" # ## **Exploratory Data Analysis** # # # + colab={"base_uri": "https://localhost:8080/"} id="gBFoDd9HhErc" outputId="b40dc8d4-07d6-4e52-ffab-a8b5ef41a49c" data[['Country']].value_counts().sort_values() # + colab={"base_uri": "https://localhost:8080/", "height": 575} id="o-hH9QP8reji" outputId="37fd3ae5-b60f-4a9a-802f-198302f89a85" data[['Country']].value_counts().plot(kind='bar', figsize=(8,6), title="Number Of Countries") # + id="1bkSK3ftwTyk" colab={"base_uri": "https://localhost:8080/", "height": 514} outputId="bbab1919-3662-4f0e-aa44-cd5d668ca5b0" data[['State']].value_counts().plot(kind='bar', figsize=(10,6), title='Count of States') # + id="xFghBTwMwuLF" colab={"base_uri": "https://localhost:8080/", "height": 525} outputId="e4058a45-8323-4a8f-cc2e-ab16ce06b53c" data[['City']].value_counts().plot(kind='bar', figsize=(10,6), title='Count of Cities') # + id="tDLuqPUohElQ" colab={"base_uri": "https://localhost:8080/"} outputId="e8e791c4-4590-44b9-f141-581218dc71fc" data[['AirportName']].value_counts() # + id="ALrmjzTJwTvi" colab={"base_uri": "https://localhost:8080/", "height": 624} outputId="cb3c2a09-0df6-49cf-da5a-8153934cd578" data[['AirportName']].value_counts().plot(kind='bar', figsize=(10,6), title='Count of Airports') # + id="lQzWGyy03ZMD" data = data.sort_values('Date') data['New_Date'] = pd.to_datetime(data['Date']).dt.strftime('%d-%m-%Y') data['Year'] = pd.DatetimeIndex(data['Date']).year data['Month'] = pd.to_datetime(data['Date']).dt.month_name() data['Day'] = pd.to_datetime(data['Date']).dt.day_name() # + id="PNKTEvgR6StX" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="b38a2a45-e219-4096-fb72-f1e7ae441518" data.head(2) # + id="Uf6BODp4hEN7" colab={"base_uri": "https://localhost:8080/", "height": 408} outputId="f6ca2493-5af0-40fb-e666-3f4a364c720b" data[['Month']].value_counts().plot(kind='barh', figsize=(10,6), title='Month Counts') # + id="Plo1TjpUhELL" colab={"base_uri": "https://localhost:8080/", "height": 445} outputId="54fd9f07-d958-4ffb-fdb3-85a13f527f79" df_weekday = pd.DataFrame(data["Day"].value_counts()) p = df_weekday.plot.pie(y='Day', figsize=(7, 7)) p.set_title("Day of The Week Records") # + id="qa279sZrhEJH" colab={"base_uri": "https://localhost:8080/", "height": 354} 
outputId="d3779e50-b5c0-4953-efb4-b13cb066ebf8" sns.distplot(data['PercentOfBaseline']) # + id="rqM3TRIP4Cuq" colab={"base_uri": "https://localhost:8080/", "height": 387} outputId="bd0d8125-51a7-4525-bafa-b4f38e022a38" sns.displot(data, x="PercentOfBaseline", hue='Country', palette='husl', kind="kde") # + [markdown] id="lzluw-dFGTwA" # ### Average Percent of Baseline for Each Country (Month & Weekdays) # + id="MrjXvuI9abBh" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="cc664078-cd57-437b-a844-8ddcca979947" for i in data['Country'].unique(): df_mnth = data[['PercentOfBaseline', 'Month']][data['Country']==i].sort_values('Month') df_mnth_avg = df_mnth.groupby('Month', as_index=False)['PercentOfBaseline'].mean() df_sort_month = Sort_Dataframeby_Month(df=df_mnth_avg,monthcolumnname='Month') df_sort_month.plot.bar(x='Month', y='PercentOfBaseline',figsize=(12,7)) plt.ylabel("Average percent of Baseline",size=15) plt.xlabel("Month",size=15) plt.title(i, size=20) plt.show() # + id="2oF9BPhKiZCz" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3eeff47c-bf0a-41d5-f978-230c47e19268" for i in data['Country'].unique(): df = data[['PercentOfBaseline', 'Day']][data['Country']==i] df_day = df.groupby('Day', as_index=False)['PercentOfBaseline'].mean() sort_days = Sort_Dataframeby_Weekday(df=df_day,Weekdaycolumnname='Day') sort_days.plot.bar(x='Day', y='PercentOfBaseline',figsize=(10,6)) plt.ylabel("Average percent of Baseline",size=12) plt.xlabel("Day",size=12) plt.title(i, size=20) plt.show() # + [markdown] id="NQTSKnVdvHl4" # ### Analysis For Chile # + id="2-eZCC3OhEHH" chile = data[data['Country']== 'Chile'] # + id="3yPyTDGShEEz" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="675211e6-f12b-48af-e0a3-c4b54fc3ccd6" chile = chile.set_index('New_Date') chile.head(1) # + id="_A-aUDEIhEC1" colab={"base_uri": "https://localhost:8080/", "height": 458} outputId="b3ecb28f-aeb4-474b-fa6f-b444787fcefd" for i in chile['AirportName'].unique(): df = chile[chile['AirportName']== i] fig = plt.figure(figsize=(12,7)) sns.lineplot(data=df, x ="Date", y="PercentOfBaseline") plt.ylabel("PercentOfBaseline") plt.xlabel("Date") plt.title(i) plt.grid() plt.show() # + [markdown] id="H_i2zuu0Y-PO" # Stay tuned for more analysis and notes on this.. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Regression Week 3: Assessing Fit (polynomial regression) # In this notebook, we will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular, you will: # # * Write a function to take a Series and a degree and return a DataFrame where each column is the Series to a polynomial value up to the total degree e.g. 
degree = 3 then column 1 is the Series column 2 is the Series squared and column 3 is the Series cubed # * Use matplotlib to visualize polynomial regressions # * Use matplotlib to visualize the same polynomial degree on different subsets of the data # * Use a validation set to select a polynomial degree # * Assess the final fit using test data # ## Importing Libraries import os import zipfile import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns sns.set_style('darkgrid') # %matplotlib inline # ## Unzipping files with house sales data # Dataset is from house sales in King County, the region where the city of Seattle, WA is located. # Put files in current direction into a list files_list = [f for f in os.listdir('.') if os.path.isfile(f)] # Filenames of unzipped files unzip_files = ['kc_house_data.csv','wk3_kc_house_set_1_data.csv', 'wk3_kc_house_set_2_data.csv', 'wk3_kc_house_set_3_data.csv', 'wk3_kc_house_set_4_data.csv', 'wk3_kc_house_test_data.csv', 'wk3_kc_house_train_data.csv', 'wk3_kc_house_valid_data.csv'] # If upzipped file not in files_list, unzip the file for filename in unzip_files: if filename not in files_list: zip_file = filename + '.zip' unzipping = zipfile.ZipFile(zip_file) unzipping.extractall() unzipping.close # ## Basics of apply function for Pandas DataFrames # Next we're going to write a polynomial function that takes an Series and a maximal degree and returns an DataFrame with columns containing the Series to all the powers up to the maximal degree. # # The easiest way to apply a power to a Series is to use the .apply() and lambda x: functions. # For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab) tmp = pd.Series([1.0, 2.0, 3.0]) tmp_cubed = tmp.apply(lambda x: x**3) print tmp print tmp_cubed # We can create an empty DataFrame using pd.DataFrame() and then add any columns to it with ex_dframe['column_name'] = value. For example we create an empty DataFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself). ex_dframe = pd.DataFrame() ex_dframe['power_1'] = tmp print ex_dframe print type(ex_dframe) # ## Polynomial_dataframe function # Using the hints above complete the following function to create an DataFrame consisting of the powers of a Series up to a specific degree: def polynomial_dataframe(feature, degree): # feature is pandas.Series type # assume that degree >= 1 # initialize the dataframe: poly_dataframe = pd.DataFrame() # and set poly_dataframe['power_1'] equal to the passed feature poly_dataframe['power_1'] = feature # first check if degree > 1 if degree > 1: # then loop over the remaining degrees: for power in range(2, degree+1): # first we'll give the column a name: name = 'power_' + str(power) # assign poly_dataframe[name] to be feature^power; use apply(*) poly_dataframe[name] = poly_dataframe['power_1'].apply(lambda x: x**power) return poly_dataframe # To test your function consider the smaller tmp variable and what you would expect the outcome of the following call: tmp = pd.Series([1.0, 2.0, 3.0]) print polynomial_dataframe(tmp, 3) # ## Visualizing polynomial regression # Let's use matplotlib to visualize what a polynomial regression looks like on some real data. 
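# Before moving on to the real data, a quick sanity check of the polynomial_dataframe function
# from the previous section. This check is only an added illustration; the expected values follow
# directly from the function's definition (the squares and cubes of 1, 2 and 3).
tmp_check = pd.Series([1.0, 2.0, 3.0])
expected_frame = pd.DataFrame()
expected_frame['power_1'] = pd.Series([1.0, 2.0, 3.0])
expected_frame['power_2'] = pd.Series([1.0, 4.0, 9.0])
expected_frame['power_3'] = pd.Series([1.0, 8.0, 27.0])
assert polynomial_dataframe(tmp_check, 3).equals(expected_frame)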
First, let's load house sales data # Dictionary with the correct dtypes for the DataFrame columns dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int} sales = pd.read_csv('kc_house_data.csv', dtype = dtype_dict) # As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices. sales = sales.sort_values(['sqft_living', 'price']) sales[['sqft_living', 'price']].head() # Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like. poly1_data = polynomial_dataframe(sales['sqft_living'], 1) poly1_data['price'] = sales['price'] # add price to the data since it's the target poly1_data.head() # Creating feature matrix and output vector to perform linear reggression with Sklearn # Note: Must pass list of features to feature matrix X_feat_model_1 for sklearn to work X_feat_model_1 = poly1_data[ ['power_1'] ] y_output_model_1 = poly1_data['price'] model_1 = LinearRegression() model_1.fit(X_feat_model_1, y_output_model_1) # Let's look at the intercept and weight before we plot. print model_1.intercept_ print model_1.coef_ # Now, plotting the data and the line learned by linear regression plt.figure(figsize=(8,6)) plt.plot(poly1_data['power_1'],poly1_data['price'],'.', label= 'House Price Data') plt.hold(True) plt.plot(poly1_data['power_1'], model_1.predict(X_feat_model_1), '-' , label= 'Linear Regression Model') plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('Simple Linear Regression', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial? poly2_data = polynomial_dataframe(sales['sqft_living'], 2) my_features = list(poly2_data) # Get col_names of DataFrame and put in list poly2_data['price'] = sales['price'] # add price to the data since it's the target # Creating feature matrix and output vector to perform regression w/ sklearn. X_feat_model_2 = poly2_data[my_features] y_output_model_2 = poly2_data['price'] # Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector model_2 = LinearRegression() model_2.fit(X_feat_model_2, y_output_model_2) # Let's look at the intercept and weights before we plot. print model_2.intercept_ print model_2.coef_ plt.figure(figsize=(8,6)) plt.plot(poly2_data['power_1'],poly2_data['price'],'.', label= 'House Price Data') plt.hold(True) plt.plot(poly2_data['power_1'], model_2.predict(X_feat_model_2), '-' , label= 'Regression Model') plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('2nd Degree Polynomial Regression', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # The resulting model looks like half a parabola. 
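# Before trying the cubic, an optional cross-check of the degree-1 fit above (an added
# illustration, not part of the original assignment): numpy's polyfit solves the same least
# squares problem, so it should recover roughly the slope of 280 and intercept of -43579
# reported for model_1.
coeffs_check = np.polyfit(sales['sqft_living'], sales['price'], 1)
print coeffs_check  # [slope, intercept], highest power first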
Try on your own to see what the cubic looks like: poly3_data = polynomial_dataframe(sales['sqft_living'], 3) my_features = list(poly3_data) # Get col_names of DataFrame and put in list poly3_data['price'] = sales['price'] # add price to the data since it's the target # Creating feature matrix and output vector to perform regression w/ sklearn. X_feat_model_3 = poly3_data[my_features] y_output_model_3 = poly3_data['price'] # Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector model_3 = LinearRegression() model_3.fit(X_feat_model_3, y_output_model_3) # Looking at intercept and weights before plotting print model_3.intercept_ print model_3.coef_ plt.figure(figsize=(8,6)) plt.plot(poly3_data['power_1'],poly3_data['price'],'.', label= 'House Price Data') plt.hold(True) plt.plot(poly3_data['power_1'], model_3.predict(X_feat_model_3), '-' , label= 'Regression Model') plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('3rd Degree Polynomial Regression', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # Now try a 15th degree polynomial: poly15_data = polynomial_dataframe(sales['sqft_living'], 15) my_features = list(poly15_data) # Get col_names of DataFrame and put in list poly15_data['price'] = sales['price'] # add price to the data since it's the target # Creating feature matrix and output vector to perform regression w/ sklearn. X_feat_model_15 = poly15_data[my_features] y_output_model_15 = poly15_data['price'] # Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector model_15 = LinearRegression() model_15.fit(X_feat_model_15, y_output_model_15) # Looking at intercept and weights before plotting print model_15.intercept_ print model_15.coef_ plt.figure(figsize=(8,6)) plt.plot(poly15_data['power_1'],poly15_data['price'],'.', label= 'House Price Data') plt.hold(True) plt.plot(poly15_data['power_1'], model_15.predict(X_feat_model_15), '-' , label= 'Regression Model') plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('15th Degree Polynomial Regression', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look. # # Changing the data and re-learning # We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results. 
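# The four subsets used in this part are provided as ready-made CSV files and are loaded below.
# Purely as an illustration (an assumed recipe, not necessarily how the course files were
# generated), a random split of the sales data into four roughly equal subsets could be
# produced like this:
shuffled_sales = sales.sample(frac=1, random_state=0)   # shuffle the rows
example_subsets = np.array_split(shuffled_sales, 4)     # list of 4 DataFrames of roughly equal size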
# Loading the 4 datasets (set_1, set_2) = (pd.read_csv('wk3_kc_house_set_1_data.csv', dtype = dtype_dict), pd.read_csv('wk3_kc_house_set_2_data.csv', dtype = dtype_dict)) (set_3, set_4) = (pd.read_csv('wk3_kc_house_set_3_data.csv', dtype = dtype_dict), pd.read_csv('wk3_kc_house_set_4_data.csv', dtype = dtype_dict)) # Making 4 dataframes with 15 features and a price column # + (poly15_set_1, poly15_set_2) = ( polynomial_dataframe(set_1['sqft_living'], 15) , polynomial_dataframe(set_2['sqft_living'], 15) ) (poly15_set_3, poly15_set_4) = ( polynomial_dataframe(set_3['sqft_living'], 15) , polynomial_dataframe(set_4['sqft_living'], 15) ) ( poly15_set_1['price'], poly15_set_2['price'] ) = ( set_1['price'] , set_2['price'] ) ( poly15_set_3['price'], poly15_set_4['price'] ) = ( set_3['price'] , set_4['price'] ) my_features = list(poly15_set_1) # Put DataFrame col_names in a list. All dataframes have same col_names # + ( X_feat_set_1, X_feat_set_2 ) = ( poly15_set_1[my_features], poly15_set_2[my_features] ) ( X_feat_set_3, X_feat_set_4 ) = ( poly15_set_3[my_features], poly15_set_4[my_features] ) ( y_output_set_1, y_output_set_2 ) = ( poly15_set_1['price'], poly15_set_2['price'] ) ( y_output_set_3, y_output_set_4 ) = ( poly15_set_3['price'], poly15_set_4['price'] ) # + # Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector model_deg_15_set_1 = LinearRegression() model_deg_15_set_2 = LinearRegression() model_deg_15_set_3 = LinearRegression() model_deg_15_set_4 = LinearRegression() model_deg_15_set_1.fit(X_feat_set_1, y_output_set_1) model_deg_15_set_2.fit(X_feat_set_2, y_output_set_2) model_deg_15_set_3.fit(X_feat_set_3, y_output_set_3) model_deg_15_set_4.fit(X_feat_set_4, y_output_set_4) # - plt.figure(figsize=(8,6)) plt.plot(poly15_data['power_1'],poly15_data['price'],'.', label= 'House Price Data') plt.hold(True) plt.plot(poly15_set_1['power_1'], model_deg_15_set_1.predict(X_feat_set_1), '-' , label= 'Model 1') plt.plot(poly15_set_2['power_1'], model_deg_15_set_2.predict(X_feat_set_2), '-' , label= 'Model 2') plt.plot(poly15_set_3['power_1'], model_deg_15_set_3.predict(X_feat_set_3), '-' , label= 'Model 3') plt.plot(poly15_set_4['power_1'], model_deg_15_set_4.predict(X_feat_set_4), '-' , label= 'Model 4') plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('4 Different 15th Deg. Polynomial Regr. Models', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # **Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?** power_15_coeff = [ model_deg_15_set_1.coef_[-1], model_deg_15_set_2.coef_[-1], model_deg_15_set_3.coef_[-1], model_deg_15_set_4.coef_[-1] ] print power_15_coeff print if all(i > 0 for i in power_15_coeff): print 'Sign the SAME (Positive) for all 4 models' elif all(i < 0 for i in power_15_coeff): print 'Sign the SAME (Negative) for all 4 models' else: print 'Sign NOT the same for all 4 models' # **Quiz Question: (True/False) the plotted fitted lines look the same in all four plots** # Fits for 4 different models look very different # # Selecting a Polynomial Degree # Whenever we have a "magic" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4). 
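# For reference, the quantity compared across degrees in what follows is the residual sum of
# squares of a model's predictions,
#
# $$\mathrm{RSS} = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2,$$
#
# computed on the validation set to pick the degree and, at the very end, on the test set.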
# # We now load sales dataset split 3-way into training set, test set, and validation set: train_data = pd.read_csv('wk3_kc_house_train_data.csv', dtype = dtype_dict) valid_data = pd.read_csv('wk3_kc_house_valid_data.csv', dtype = dtype_dict) test_data = pd.read_csv('wk3_kc_house_test_data.csv', dtype = dtype_dict) # Sorting the Training Data for Plotting train_data = train_data.sort_values(['sqft_living', 'price']) train_data[['sqft_living', 'price']].head() # Next you should write a loop that does the following: # * For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1)) # * Build an DAtaFrame of polynomial data of train_data['sqft_living'] at the current degree # * Add train_data['price'] to the polynomial DataFrame # * Learn a polynomial regression model to sqft vs price with that degree on TRAIN data # * Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynomial DataFrame using validation data. # * Report which degree had the lowest RSS on validation data # + # poly_deg_dict is a dict which holds poly_features dataframes. key_list is a list of keys for the dicts. # The keys in key_list are of the form 'poly_deg_i', where i refers to the ith polynomial poly_deg_dict = {} key_list = [] # X_feat_dict is a dict with all the feature matrices and y_output_dict is a dict with all the output vectors X_feat_dict = {} y_output_dict = {} # model_poly_deg is a dict which holds all the regression models for the ith polynomial fit model_poly_deg = {} # Looping over polynomial features from 1-15 for i in range(1, 15+1, 1): # Defining key-name and appending key_name to the key_list key_poly_deg = 'poly_deg_' + str(i) key_list.append(key_poly_deg) # Entering each dataframe returned from polynomial_dataframe function into a dict # Then, saving col_names into a list to do regression w/ these features. Then, adding price column to dataframe poly_deg_dict[key_poly_deg] = polynomial_dataframe(train_data['sqft_living'], i) feat_poly_fit = list(poly_deg_dict[key_poly_deg]) poly_deg_dict[key_poly_deg]['price'] = train_data['price'] # Adding feature matrix and output_vector into dicts X_feat_dict[key_poly_deg] = poly_deg_dict[key_poly_deg][feat_poly_fit] y_output_dict[key_poly_deg] = poly_deg_dict[key_poly_deg]['price'] # Adding regression models to dicts model_poly_deg[key_poly_deg] = LinearRegression() model_poly_deg[key_poly_deg].fit( X_feat_dict[key_poly_deg], y_output_dict[key_poly_deg] ) # + plt.figure(figsize=(8,6)) plt.plot(train_data['sqft_living'], train_data['price'],'.', label= 'House Price Data') plt.hold(True) for i in range(0,5): leg_label = 'Deg. ' + str(i+1) plt.plot( poly_deg_dict[key_list[i]]['power_1'], model_poly_deg[key_list[i]].predict(X_feat_dict[key_list[i]]), '-', label = leg_label ) plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('Degree 1-5 Polynomial Regression Models', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # + plt.figure(figsize=(8,6)) plt.plot(train_data['sqft_living'], train_data['price'],'.', label= 'House Price Data') plt.hold(True) for i in range(5,10): leg_label = 'Deg. 
' + str(i+1) plt.plot( poly_deg_dict[key_list[i]]['power_1'], model_poly_deg[key_list[i]].predict(X_feat_dict[key_list[i]]), '-', label = leg_label ) plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('Degree 6-10 Polynomial Regression Models', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # + plt.figure(figsize=(8,6)) plt.plot(train_data['sqft_living'], train_data['price'],'.', label= 'House Price Data') plt.hold(True) for i in range(10,15): leg_label = 'Deg. ' + str(i+1) plt.plot( poly_deg_dict[key_list[i]]['power_1'], model_poly_deg[key_list[i]].predict(X_feat_dict[key_list[i]]), '-', label = leg_label ) plt.hold(False) plt.legend(loc='upper left', fontsize=16) plt.xlabel('Living Area (ft^2)', fontsize=16) plt.ylabel('House Price ($)', fontsize=16) plt.title('Degree 11-15 Polynomial Regression Models', fontsize=18) plt.axis([0.0, 14000.0, 0.0, 8000000.0]) plt.show() # - # **Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?** # Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz. # First, sorting validation data in case of plotting valid_data = valid_data.sort_values(['sqft_living', 'price']) # Now, building a function to compute the RSS def get_residual_sum_of_squares(model, data, outcome): # - data holds the data points with the features (columns) we are interested in performing a linear regression fit # - model holds the linear regression model obtained from fitting to the data # - outcome is the y, the observed house price for each data point # By using the model and applying predict on the data, we return a numpy array which holds # the predicted outcome (house price) from the linear regression model model_predictions = model.predict(data) # Computing the residuals between the predicted house price and the actual house price for each data point residuals = outcome - model_predictions # To get RSS, square the residuals and add them up RSS = sum(residuals*residuals) return RSS # Now, creating a list of tuples with the values (RSS_deg_i , i). Finding min of list will give min RSS_val and and degree of polynomial # + # First, clearing empty list which will hold tuples RSS_tup_list = [] # Looping over polynomial features from 1-15 for i in range(1, 15+1, 1): # Creating dataframe w/ additional features on the validation data. Then, putting these features into a list valid_data_poly = polynomial_dataframe(valid_data['sqft_living'], i) feat_val_poly = list(valid_data_poly) # Using get_residual_sum_of_squares to compute RSS. Using the key_list[i-1] since index starts at 0. # Each entry of key_list[i-1] contains the key we want for the dict of regression models RSS_val = get_residual_sum_of_squares(model_poly_deg[key_list[i-1]], valid_data_poly[feat_val_poly], valid_data['price']) # Appending tuppler with RSS_val and i into RSS_tup_list RSS_tup_list.append( (RSS_val, i) ) # - RSS_min = min(RSS_tup_list) print 'Polynomial Degree with lowest RSS from validation set: ', RSS_min[1] # **Quiz Question: what is the RSS on TEST data for the model with the degree selected from Validation data?** # First, sorting test data in case of plotting test_data = test_data.sort_values(['sqft_living', 'price']) # Now, finding RSS of polynomial degree 6 on TEST data # + # Creating dataframe w/ additional features on the test data. 
Then, putting these features into a list test_data_poly_6 = polynomial_dataframe(test_data['sqft_living'], 6) feat_val_poly_6 = list(test_data_poly_6) RSS_test_poly6 = get_residual_sum_of_squares(model_poly_deg[key_list[6-1]], test_data_poly_6[feat_val_poly_6], test_data['price']) # - print 'RSS on Test data for Degree 6 Polynomial: ', RSS_test_poly6 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import string import numpy as np import pandas as pd # + df = pd.read_json("Sarcasm_Headlines_Dataset.json", lines=True) display(df.head()) print(df.shape) df2 = pd.read_json("Sarcasm_Headlines_Dataset_v2.json", lines=True) display(df2.head()) print(df2.shape) # - sum([i in df2.headline for i in df.headline]) df_all = pd.concat([df, df2]).reset_index(drop=True) df_all.shape df_clean = df_all.drop(['article_link'], axis=1) df_clean.head() df_clean['headline'] = df_clean['headline'].apply(lambda x: x.lower()) df_clean['len'] = df_clean['headline'].apply(lambda x: len(x.split(" "))) df_clean.head() for i in string.punctuation: df_clean['headline'] = df_clean['headline'].apply(lambda x: x.replace(i, "")) df_clean df_clean.groupby('is_sarcastic').describe() df_clean[df_clean.len==151].iloc[0].headline df_clean[df_clean.len==2] sen = pd.read_csv('vader_lexicon.txt', sep='\t', usecols=[0, 1], header=None, names=['token', 'polarity'], index_col='token' ) sen.head() tidy_format = ( df_clean['headline'] .str.split(expand=True) .stack() .reset_index(level=1) .rename(columns={'level_1': 'num', 0: 'word'}) ) tidy_format.head() df_clean['polarity'] = ( tidy_format .merge(sen, how='left', left_on='word', right_index=True) .reset_index() .loc[:, ['index', 'polarity']] .groupby('index') .sum() .fillna(0) ) df_clean.head() df_clean.groupby('is_sarcastic').describe() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to Applied Machine Learning # ## Operations example based on real-world application # In this notebook you will implement a machine learning algorithm to model a complex system that has been anonymized from a real problem within the Operations space. You will understand how to look beyond single factor analysis and utilize machine learning to model the system and also discover key factors that contribute to failure modes. # # The data has been anonymized, randomized, and any identifying information removed. However, it is still heavily derived from types of operations data you may have seen before and is a great way to begin thinking about how you can use data mining methods and machine learning to solve problems in your space. # # ## Contents # 1. Import dataset # 2. Data exploration # 3. Feature engineering # 4. Modeling # 5. Model Evaluation # # ## How to use this notebook # - To execute any single block of text or markdown, use ctrl+enter, shift+enter or press the run arrow on the left of the box (only in Colaboratory) # - To reset the notebook select "Factory reset runtime" from the Runtime tab at the top of Colaboratory # ## 1. Import dataset # + # First let's import our data. 
The data is split into 2 files # data_url contains manufacturing-related data that describes the conditions in which the product was manufactured # label_url contains information related to the outcome of each product subset - and we will discuss later what constitutes bad vs. good product import pandas as pd data_url = 'https://raw.githubusercontent.com/jzhangab/DS101/master/1_Data/data.csv' label_url = 'https://raw.githubusercontent.com/jzhangab/DS101/master/1_Data/outcome.csv' df = pd.read_csv(data_url, sep = ',') df_label = pd.read_csv(label_url, sep = ',') # - # Let's look at the first 5 rows to begin understanding what factors are available df.head() # Now let's look at the outcome dataset df_label.head() # You can begin thinking/answering some questions you may have about this data. # # 1. How is each data point separated? What denotes a unique data point? # 2. How many data points do we have? # 3. Do we have any missing data? # 4. Do we only have numerical data? # 5. How many features do we potentially have? # 6. What is the question we are trying to answer with this data, can this data answer this question? # 7. etc... # ## 2. Data Exploration # Let's now begin exploring the data we have to help ourselves understand the problem and see what approach we might want to take. # # Before we can begin plotting our data, we have to deal with 2 problems with data cleanliness # # 1. There is a value "Low Value" that is a replacement for undetectable signal. We have to replace this with a number. # 2. There are null values in our parameter columns that we need to fill. # For cleanliness, missing data is very important, let's check how much missing data there is for each factor for col in list(df): num_na = len(df[col]) - df[col].count() print ("Percent null in column " + col + " is:", 100*num_na/len(df[col])) # And let's check the number of "Low Value" in each column for col in list(df): num_lv = len(df.loc[df[col] == 'Low Value'][col]) print ("Percent Low Value in column " + col + " is:", 100*num_lv/len(df[col])) # ### Important observations # 1. Thanks domain knowledge, we know that "Low Value" is a placeholder for any measurement that is less than 0.5. Therefore we will replace this by the average of values 0-0.5 (0.25) # 2. A-priori we don't care about the DATE of manufacture because we know that our process should be in control regardless of the date so we will not consider it for this analysis. This does not mean that in your own analysis on a separate problem that DATE will not be important. # 3. We also do not care for the DESCRIPTOR_1 and SOURCE columns because we know they are unrelated to manufacturing process. (Again, for just this particular problem. For your own work unstructured data may apply.) # + # Let's do the following to clean our data # 1. Replace all null values with 0 df.fillna(0, inplace=True) # 2. Replace Low Value with 0.025 df = df.replace("Low Value", 0.025) # 3. Convert parameter columns to float64 from string objects, we have to do this because "Low Value" defaults each column to string objects, but the machine learning model expects numerical data for c in list(df): if "PARAM" in c: df[c] = df[c].astype(float) # - # We can use dtypes to check the type of each column to make sure our parameters are now numerical df.dtypes # + # Now let's remove the non-parametric columns from our dataframe. These are columns we will not use for modeling. # This notation is a generator, a one-line for loop in python. 
It is used to make the code cleaner and use fewer new variable declarations. df = df[[c for c in list(df) if c not in ['SOURCE', 'DATE', 'DESCRIPTOR_1']]] # Alternatively it can take the form of a multi-line for loop as shown in the commented block below. ''' column_list = [] for c in list(df): if c not in ['SOURCE', 'DATE', 'DESCRIPTOR_1']: column_list.append(c) df = df[column_list] ''' # - # ### Important observation # Another potential issue for us is determining what to do with duplicate data. We know based on domain knowledge that each data point should be unique by the NAME column, but how do we handle this if each NAME has multiple SUB_NAMES? This is very common when we deal with products that have multiple measurements for the same attribute. We need to decide if we should treat each measurement separately or aggregate them somehow using mean, max, median, or another measure of center or extremes. # # For this particular problem we will use the aggregation by mean. We need to do the following. # # 1. Figure out if there are duplicate entires by the NAME column # 2. If there are, we need to average all data by SUB_NAME for each NAME # Are there duplicate names? We can check for this by doing a value count for the NAME column # The function value_counts allows us to count the number of each unique value in the column NAME df['NAME'].value_counts() # + # It looks like there are some NAMEs with up to 32 SUB-NAMEs, we need to average the PARAM values for each NAME # The groupby function aggregates columns of the dataframe using a list of index columns. Here we want to aggregate by NAME only so we get mean of each parameter by NAME df = df.groupby(['NAME']).mean().reset_index() # Remove SUB_NAME from columns because we no longer care about the SUB_NAMEs after averaging by NAME df = df[[c for c in list(df) if c not in ['SUB_NAME']]] # - # Check the dataframe df.head() # + # Let's take a look at the histograms of the dataframe to understand each factor. import matplotlib.pyplot as plt # %matplotlib inline fig = plt.figure(figsize = (15,15)) ax = fig.gca() df.hist(ax = ax) # - # That's a lot of histograms! We can make some quick observations. # # 1. Some of the data is normally distributed (PARAM_1, PARAM_10, PARAM_2, PARAM_8) # 2. Some of the data is skewed (PARAM_3, PARAM_4) # 3. Some of the data is multi-modal (PARAM_5, PARAM_6, PARAM_7, PARAM_9) # 4. Some of the parameters are between 0 and 1 while others are between 0 and 10, is this going to be a problem? # The non-normally distributed data is very interesting, perhaps they are related to each other? # %matplotlib inline df.plot.scatter(x = 'PARAM_5', y = 'PARAM_6') # ## 3. Feature Engineering # We've already performed a lot of data cleaning in the exploration phase. Now we need to finish the job by combining the feature and label dataframes as well as creating our classification label # # 1. Merge feature data and label data # 2. Decide which features to use # 3. 
Create a label (for binary classification) # The labels are located in the dataframe df_label # df_label has the same issue as the parameter dataframe in that there are NAMEs with duplicates by SUB_NAME df_label.head() # Check to see if there are duplicate NAME using value_counts df_label['NAME'].value_counts() # + # Now let's aggregate on NAME using mean df_label = df_label.groupby(['NAME']).mean().reset_index() # Remove SUB_NAME from columns because we no longer care about the SUB_NAMEs after averaging by NAME df_label = df_label[[c for c in list(df_label) if c != 'SUB_NAME']] # - df_label.head() # ### Important observation # How do we determine if a particular data point represents bad product? Well thanks to a-priori knowledge we know that there is an existing process-monitoring and controls action limit at VALUE = 2.0. Let's explore the label using this action limit # First let's plot a histogram to visualize the label measure distribution # %matplotlib inline fig = plt.figure(figsize = (5, 5)) ax = fig.gca() df_label.hist(ax = ax) # + # Hmm, by the way how many standard deviations from the mean is 2.0? # First calculate the standard deviation of a column sd = df_label['MEASURE'].std() # Then calculate the mean mean = df_label['MEASURE'].mean() # Finally print # SD between mean and action limit (2.0) print("Action limit # SD from mean:", (2-mean)/sd) # - # ### Important observation # # 1. The label measure itself appears to be normally distributed. # 2. The action limit is 2.85 SD from mean which means our system does not quite meet the 3.0 SD threshold normally found in most preventive capability analysis systems. # 3. We have an imbalanced dataset, there are far more "good" datapoints than "bad" datapoints, as bad datapoints are only those above the action limit. # + # For a classification label, we need to convert the continuous column MEASURE into binary column (0 or 1) LABEL # First create a new column and set all values to 0 (not a failure) df_label['LABEL'] = 0 # Set data point values in the new column LABEL to 1 that is greater than or equal to the action limit (2.0) in the continuous column MEASURE df_label.loc[df_label['MEASURE'] >= 2, 'LABEL'] = 1 # - # ### Multicollinearity # The idea of collinearity is that if certain input factors are closely correlated, they will bias the output of the model by amplifying their particular effects. We need to understand if some of our factors are highly collinear and then reduce bias by removing all but 1 of the collinear factors from the dataframe. # # 1. Let's check if we have a collinearity problem in our parameter dataframe # We can check the correlation (R-square) between variables using a correlation matrix df.corr() # + # To quantify multicollinearity, we will use variance inflation factor (VIF) # Rule of thumb, VIF above 10 indicates a particular variable ought to be removed from statsmodels.stats.outliers_influence import variance_inflation_factor from statsmodels.tools.tools import add_constant # Also - VIF for a constant term should be high because the intercept is a proxy for the constant. 
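# (For reference: for a feature j, VIF_j = 1 / (1 - R_j^2), where R_j^2 is the R-squared obtained
#  by regressing feature j on all of the other features, which is why strongly collinear features
#  inflate the VIF.)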
# A constant term needs to be added to accurately measure VIF for the other terms df_c = add_constant(df[[c for c in list(df) if 'PARAM' in c]]) # inline Generator on a pandas series pd.Series([variance_inflation_factor(df_c.values, i) for i in range(df_c.shape[1])], index=df_c.columns) # - # ### Lastly we have to merge the feature and label dataframes for the modeling phase of the analysis # Left merge label onto features (look up SQL left join if unsure what this does) df = pd.merge(df, df_label, on = 'NAME', how = 'left') # ## 4. Modeling # In the modeling step we will train a supervised machine learning model to understand relationships in the operations data set. We will then evaluate the model to see how well it predicts. # # The particular model that we will use is the random forest algorithm, a classical machine learning algorithm very useful for classification tasks. # # 1. Split dataset into training and validation datasets # 2. Train model # 3. Predict outcomes of validation dataset # 4. Calculate accuracy of validation dataset # + # We will split the data 80%/20% using 80% of the data to train the model and 20% to validate the accuracy of the model # We can use pre-built functions from the machine learning package sci-kit learn to do this task from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, confusion_matrix # The NAME, MEASURE, and LABEL columns are not part of input features so we will use a generator to create a new list and call it "features" features = [col for col in list(df) if col not in ['NAME', 'MEASURE', 'LABEL']] # The X input is df[features] which is all columns in the dataframe of the list features we created using the generator # The Y input is df['LABEL'] which is the binary label column we created in data exploration X = df[features] y = df['LABEL'] # Generate the 4 datasets we need # X_train and y_train to train the model # X_test to generate predictions # y_test to evaluate the accuracy of the predictions X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # - # Declare and fit model # We have to use the hyperparameter class_weight because of the very imbalanced classes (very few 1's compared to 0's) # Balanced class weight uses the inverse of frequency to weight each class model = RandomForestClassifier(random_state=0, class_weight="balanced") model.fit(X_train, y_train) # Predict using test set. These are binary predictions of 1's and 0's y_pred = model.predict(X_test) y_pred # ## 5. Model Evaluation # We will use several techniques to evaluate the strength of the Model # # 1. Accuracy # 2. Confusion Matrix (false positive, true positive, false negative, true negative) # 3. Receiver operating characteristic # 4. Feature importance # Compare y_test (true values) to y_pred (predicted values) accuracy_score(y_test, y_pred) # Let's take a look at the confusion matrix, which shows us false positives and false negatives # According to the confusion matrix we have only a single false negative and selected correctly the other 2 positive classes confusion_matrix(y_test, y_pred) # + # Another method of evaluating a classifier is using the Receiver Operating Characteristic (ROC) # ROC is a plot of true positive vs. false positive. 
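# (For reference: the true positive rate is TP / (TP + FN) and the false positive rate is
#  FP / (FP + TN), each computed while sweeping the decision threshold over the predicted
#  probabilities returned by predict_proba.)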
We calculate the area under the curve (AUC) # AUC = 1 indicates a perfect classifier, AUC = 0.5 means the classifier is no better than a coin flip from sklearn.metrics import roc_curve, roc_auc_score # %matplotlib inline y_pred_proba = model.predict_proba(X_test)[::,1] falseposrate, trueposrate, _ = roc_curve(y_test, y_pred_proba) auc = roc_auc_score(y_test, y_pred_proba) plt.plot(falseposrate,trueposrate,label="ROC curve, auc="+str(auc)) plt.legend(loc=4) plt.show() # + import numpy as np # With tree models we can use feature importance to get some insight into root cause # Mean feature importance across all trees mean_i = model.feature_importances_ # Standard deviation of feature importances across all trees st_i = np.std([tree.feature_importances_ for tree in model.estimators_], axis=0) # Features features_i = np.argsort(mean_i)[::-1] # Print the feature ranking print("Feature ranking:") for f in range(X_train.shape[1]): print("%d. PARAM %d (%f)" % (f + 1, features_i[f]+1, mean_i[features_i[f]])) # - # ### Preliminary Conclusions # It appears we can make some preliminary analysis about the root cause of the products that fall above the action limit. Next steps for such an analysis would be to investigate the physical relationship between PARAM_8 and the manufacturing process. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="-4DF18EyM3rb" colab_type="text" # In this Notebook, you can test the example's assets and potentially run ML tasks on a public distant VM. # On Google Colab, you can see and modify the assets with the "Files" button on the left. # + id="0X1CGFsjTxFp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="532a7e45-07d3-436e-8cf0-326525044473" try: # # %tensorflow_version only exists in Colab. # %tensorflow_version 1.x except Exception: pass # + [markdown] id="1SJi6ACtKP0B" colab_type="text" # # Mnist with Differential Privacy # # *This example is a Substra implementation of on the [Classification_Privacy tutorial](https://github.com/tensorflow/privacy/blob/master/tutorials/Classification_Privacy.ipynb) from [Tensorflow_Privacy](https://github.com/tensorflow/privacy). The structure of this example is inspired from [Substra's Titanic Example](https://github.com/SubstraFoundation/substra/blob/master/examples/titanic/)* # # > [Differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) (DP) is a framework for measuring the privacy guarantees provided by an algorithm. Through the lens of differential privacy, we can design machine learning algorithms that responsibly train models on private data. Learning with differential privacy provides provable guarantees of privacy, mitigating the risk of exposing sensitive training data in machine learning. Intuitively, a model trained with differential privacy should not be affected by any single training example, or small set of training examples, in its data set. This mitigates the risk of exposing sensitive training data in ML. # > # > The basic idea of this approach, called differentially private stochastic gradient descent (DP-SGD), is to modify the gradients used in stochastic gradient descent (SGD), which lies at the core of almost all deep learning algorithms. 
Models trained with DP-SGD provide provable differential privacy guarantees for their input data. There are two modifications made to the vanilla SGD algorithm: # > # > 1. First, the sensitivity of each gradient needs to be bounded. In other words, we need to limit how much each individual training point sampled in a minibatch can influence gradient computations and the resulting updates applied to model parameters. This can be done by *clipping* each gradient computed on each training point. # > 2. *Random noise* is sampled and added to the clipped gradients to make it statistically impossible to know whether or not a particular data point was included in the training dataset by comparing the updates SGD applies when it operates with or without this particular data point in the training dataset. # > # > This tutorial uses [tf.keras](https://www.tensorflow.org/guide/keras) to train a convolutional neural network (CNN) to recognize handwritten digits with the DP-SGD optimizer provided by the TensorFlow Privacy library. TensorFlow Privacy provides code that wraps an existing TensorFlow optimizer to create a variant that implements DP-SGD. # > — [The TensorFlow Authors][1] # # [1]: https://github.com/tensorflow/privacy/blob/master/tutorials/Classification_Privacy.ipynb # # + [markdown] id="jFjGg7WgOb1q" colab_type="text" # # ## Prerequisites # # In order to run this example, you'll need to: # # * use Python 3 # * have [Docker](https://www.docker.com/) installed # * [install the `substra` cli](https://github.com/SubstraFoundation/substra#install) (supported version: 0.6.0) # + id="0pnSX37XNth3" colab_type="code" colab={} # ! pip3 install substra==0.6.0 # + [markdown] id="78Dg-ioqNqie" colab_type="text" # # * [install the `substratools` library](https://github.com/substrafoundation/substra-tools) (supported version: 0.6.0) # * [pull the `substra-tools` docker images](https://github.com/substrafoundation/substra-tools#pull-from-private-docker-registry) # * have access to a Substra installation ([configure your host to a public node ip](https://doc.substra.ai/getting_started/installation/local_install_skaffold.html#network) or [install Substra on your machine](https://doc.substra.ai/getting_started/installation/local_install_skaffold.html)) # + id="OXNWTXfLdNEr" colab_type="code" colab={} #replace this ip by the ip of a distant VM running a substra node public_node_ip = "127.0.0.1" # + id="3Sj64EGcd2bG" colab_type="code" colab={} # ! echo "{public_node_ip} substra-backend.node-1.com substra-frontend.node-1.com substra-backend.node-2.com substra-frontend.node-2.com" | sudo tee -a /etc/hosts # Check if it's ok # ! curl substra-backend.node-1.com/readiness # + [markdown] id="WiNgyy4odKzR" colab_type="text" # # * create a substra profile to define the substra network to target, for instance: # # + id="njy5vYAvWoEE" colab_type="code" colab={} # ! substra config --profile node-1 http://substra-backend.node-1.com # ! substra login --profile node-1 --username node-1 --password '' # + [markdown] id="laY7B4rs6Ik-" colab_type="text" # * checkout this repository # + id="NCaLsa5LOLTu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="5cb0369a-7e6e-470a-f607-f8c4cbc2d95b" # %cd /content # ! git clone https://github.com/SubstraFoundation/substra-examples/ # %cd substra-examples/content/substra-examples/mnist-dp # + [markdown] id="KUMkvMmyOHe5" colab_type="text" # # All commands in this example are run from the `mnist-dp` folder. 
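# + [markdown]
# The two DP-SGD modifications quoted above (per-example gradient clipping, then Gaussian noise) can be sketched in plain NumPy. This is only an illustrative sketch with made-up helper name and default values, not the TensorFlow Privacy optimizer used in this example.

# +
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                l2_norm_clip=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD update on a flat parameter vector (sketch only)."""
    rng = rng or np.random.default_rng(0)
    # 1. Clip each per-example gradient to an L2 norm of at most l2_norm_clip
    clipped = [g * min(1.0, l2_norm_clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # 2. Add Gaussian noise scaled by the clipping bound, then average and step
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * l2_norm_clip, size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)
# -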
# # + [markdown] id="bibVWArLPIoM" colab_type="text" # ## Data preparation # # The first step will be to generate train and test data from keras.datasets.mnist # # To generate the data, run: # # # + id="UBprYg-5PUvx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="888d61d9-e26f-466d-b56f-484526de9f58" # %cd /content/substra-examples/mnist-dp # #! pip install --upgrade pip # #! pip install -r scripts/requirements.txt # requirements are already satisfied in Colab, except for substratools # ! pip install substratools==0.6.0 # + id="3M648e_-QH4N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="f429e716-b585-4512-b95e-c47af0f2faf7" # %cd /content/substra-examples/mnist-dp # ! python scripts/generate_data.py # + [markdown] id="ccjTJmyMPTiX" colab_type="text" # # # This will create two sub-folders in the `assets` folder: # # * `train_data` contains train data features and labels as numpy array files # * `test_data` contains test data features and labels as numpy array files # # ## Writing the objective and data manager # # Both objective and data manager will need a proper markdown description, you can check them out in their respective # folders. Notice that the data manager's description includes a formal description of the data structure. # # Notice also that the `metrics.py` and `opener.py` module both rely on classes imported from the `substratools` module. # These classes provide a simple yet rigid structure that will make algorithms pretty easy to write. # # ## Writing a simple algorithm # # You'll find under `assets/algo_cnn_dp` an implementation of the cnn model in the [Classification_Privacy tutorial](https://github.com/tensorflow/privacy/blob/master/tutorials/Classification_Privacy.ipynb). Like the metrics and opener scripts, it relies on a # class imported from `substratools` that greatly simplifies the writing process. You'll notice that it handles not only # the train and predict tasks but also a lot of data preprocessing. # # This algorithm measure the differential privacy guarantee after training the model: # You will see in the console the value Epsilon (ϵ) - This is the privacy budget. It measures the strength of the privacy guarantee by bounding how much the probability of a particular model output can vary by including (or excluding) a single training point. A smaller value for ϵ implies a better privacy guarantee. However, the ϵ value is only an upper bound and a large value could still mean good privacy in practice. # # This value depends on: # # 1. The total number of points in the training data, `n`. # 2. The `batch_size`. # 3. The `noise_multiplier`. # 4. The number of `epochs` of training. # # You can find a description of the hyperparameters [here](https://github.com/tensorflow/privacy/tree/master/tutorials#parameters). # # # + [markdown] id="tImaUQdlRDPv" colab_type="text" # ## Testing our assets # # ### Using asset command line interfaces # # You can first test each assets with the `substratools` CLI, by running specific ML tasks in your local Python environment. # # # + [markdown] id="2sKcqmkFUNvi" colab_type="text" # #### Training task # # # # # + id="yby0gIzZRQx7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a6c4d866-b0de-4477-8198-219c676000b0" # train your model with the train_data # ! 
python assets/algo_cnn_dp/algo.py train \ # --debug \ # --opener-path assets/dataset/opener.py \ # --data-samples-path assets/train_data \ # --output-model-path assets/model/model \ # --log-path assets/logs/train.log # + id="5Pp7yN4YRYTy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 938} outputId="dc1e0d1f-27f2-4e13-e7c9-9167802da87a" #predict the labels of train_data with your previously trained model # ! python assets/algo_cnn_dp/algo.py predict \ # --debug \ # --opener-path assets/dataset/opener.py \ # --data-samples-path assets/train_data \ # --output-predictions-path assets/pred-train.npy \ # --models-path assets/model/ \ # --log-path assets/logs/train_predict.log \ # model # + id="Nm6SnGQJRboG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="357993af-b10e-419b-ef38-d58d0cc708fb" #calculate the score of your model on train_data predictions # ! python assets/objective/metrics.py \ # --debug \ # --opener-path assets/dataset/opener.py \ # --data-samples-path assets/train_data \ # --input-predictions-path assets/pred-train.npy \ # --output-perf-path assets/perf-train.json \ # --log-path assets/logs/train_metrics.log # + [markdown] id="711Mg8smVlEf" colab_type="text" # #### Testing task # + id="vhoL2XIkVpjW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 938} outputId="40a29a34-276b-4238-8350-4f868d4eb39e" #predict the labels of test_data with your previously trained model # ! python assets/algo_cnn_dp/algo.py predict \ # --debug \ # --opener-path assets/dataset/opener.py \ # --data-samples-path assets/test_data \ # --output-predictions-path assets/pred-test.npy \ # --models-path assets/model/ \ # --log-path assets/logs/test_predict.log \ # model # + id="pFXL-iIFVrkR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="676009ba-9e10-49f3-e003-86b2364cc0a5" #calculate the score of your model on test_data predictions # ! python assets/objective/metrics.py \ # --debug \ # --opener-path assets/dataset/opener.py \ # --data-samples-path assets/test_data \ # --input-predictions-path assets/pred-test.npy \ # --output-perf-path assets/perf-test.json \ # --log-path assets/logs/test_metrics.log # + [markdown] id="e56pBMUsRfHw" colab_type="text" # ### Using substra cli # # Before pushing our assets to the platform, we need to make sure they work well. To do so, we can run them locally in a # Docker container. This way, if the training fails, we can access the logs and debug our code. # # To test the assets, we'll use `substra run-local`, passing it paths to our algorithm of course, but also the opener, # the metrics and to the data samples we want to use. It will launch a training task on the train data, a prediction task on the test data and return the accuracy score. # # + id="ZsVjUgrvYziC" colab_type="code" colab={} #you will need Docker to run this (not available in Colab) # ! substra run-local assets/algo_cnn_dp \ # --train-opener=assets/dataset/opener.py \ # --test-opener=assets/dataset/opener.py \ # --metrics=assets/objective/ \ # --train-data-samples=assets/train_data \ # --test-data-samples=assets/test_data # + [markdown] id="1nYflmdBWB_y" colab_type="text" # At the end of this step, you'll find in the newly created `sandbox/model` folder a `model` file that contains your # trained model. There is also a `sandbox/pred_train` folder that contains both the predictions made by the model on # train data and the associated performance. 
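# + [markdown]
# For the privacy budget ϵ discussed earlier in this example, TensorFlow Privacy ships an accounting helper. Assuming the `compute_dp_sgd_privacy` function available with the version of the library used by the referenced tutorial, the budget for a given configuration could be estimated roughly as below; the numbers are placeholders, not this example's actual settings.

# +
# Rough sketch: estimate ϵ from the four quantities listed above (placeholder values).
from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy import compute_dp_sgd_privacy

eps, opt_order = compute_dp_sgd_privacy(n=60000,            # number of training points
                                        batch_size=250,
                                        noise_multiplier=1.3,
                                        epochs=15,
                                        delta=1e-5)
print("epsilon:", eps)
# -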
# # + [markdown] id="Hljsxt1PWEAr" colab_type="text" # #### Debugging # # It's more than probable that your code won't run perfectly the first time. Since runs happen in dockers, you can't # debug using prints. Instead, you should use the `logging` module from python. All logs can then be consulted at the end # of the run in `sandbox/model/log_model.log`. # # ## Adding the assets to substra # # ### Adding the objective, dataset and data samples to substra # # A script has been written that adds objective, data manager and data samples to substra. It uses the `substra` python # sdk to perform actions. It's main goal is to create assets, get their keys and use these keys in the creation of other # assets. # # To run it: # # + id="jNSaDP9iWLyy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="0a81aea5-cdb2-4293-8672-ecc445d1ea64" # ! python scripts/add_dataset_objective.py # + [markdown] id="BP7PUXTyWVt0" colab_type="text" # This script just generated an `assets_keys.json` file in the `mnist` folder. This file contains the keys of all assets # we've just created and organizes the keys of the train data samples in folds. This file will be used as input when # adding an algorithm so that we can automatically launch all training and testing tasks. # # + [markdown] id="iF3ykfHVWYum" colab_type="text" # ### Adding the algorithm and training it # # The script `add_train_algo_cnn.py` pushes our simple algo to substra and then uses the `assets_keys.json` file # we just generated to train it against the dataset and objective we previously set up. It will then update the # `assets_keys.json` file with the newly created assets keys (algo, traintuple and testtuple) # # To run it: # + id="cpnh9e_yWVC_" colab_type="code" colab={} # ! python scripts/add_train_algo_cnn_dp.py # + [markdown] id="v9kc1CyBWaxJ" colab_type="text" # It will end by providing a couple of commands you can use to track the progress of the train and test tuples as well # as the associated scores. Alternatively, you can browse the frontend to look up progress and scores. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="1HtAfOeRUw0c" # Classes with Multiple Objects # + colab={"base_uri": "https://localhost:8080/"} id="sdLfYw2xU7wv" outputId="93c414a4-a23d-4e9f-c602-fcab4f77c849" class Birds: def __init__(self, bird_name): self.bird_name = bird_name def flying_birds(self): print(f"{self.bird_name} flies above clouds") def non_flying_birds(self): print(f"{self.bird_name} is the national bird of the Philippines") vulture = Birds("Griffon Vulture") crane = Birds("Common Crane") emu = Birds("Emu") vulture.flying_birds() crane.flying_birds() emu.non_flying_birds() # + [markdown] id="dxBvjOgTXKrB" # Encapsulation # + colab={"base_uri": "https://localhost:8080/"} id="PvalStpcXM-y" outputId="5d6bc264-ddf7-4d13-c165-1edc089d0b68" class foo: def __init__(self, a , b): self.__a = a self.__b = b def add(self): return self.__a + self.__b def sub(self): return self.__a - self.__b foo_object = foo(3,4) print(foo_object.add()) #adding a and b print(foo_object.sub()) #subtracting a and b foo_object.__b = 5 foo_object.__a = 7 print(foo_object.add()) print(foo_object.sub()) # + [markdown] id="4qUfPbAGdcRS" # Inheritance # + colab={"base_uri": "https://localhost:8080/"} id="Q606wV01df1c" outputId="1d425254-d1ab-4095-ec56-b265d97f4e9a" class Person: def __init__(self,firstname, surname): self.firstname = firstname self.surname = surname def printname(self): print(self.firstname, self.surname) person = Person("Ashley", "Goce") person.printname() class Student(Person): pass person = Student("Denise", "Goce") person.printname() # + [markdown] id="B5jcD9gcgax5" # Polymorphism # + colab={"base_uri": "https://localhost:8080/"} id="Yd1p7ps6gcgj" outputId="5f34030f-76d6-4920-86cc-228fd651931d" class RegularPolygon: def __init__(self,side): self.side = side class Square(RegularPolygon): def area(self): return self.side*self.side class EquillateralTriangle(RegularPolygon): def area(self): return self.side*self.side*0.433 x = Square(4) y = EquillateralTriangle(3) print(x.area()) print(y.area()) # + [markdown] id="_kiHfkWXjVGu" # Application 1 # + id="dgHFWNXjjWk1" outputId="02388092-bc2b-4c18-d205-e7e1178cc36c" colab={"base_uri": "https://localhost:8080/"} class Person: def __init__(self,std1,pre,mid,fin): self.std1 = std1 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std1) print(self.pre) print(self.mid) print(self.fin) s1 =Person("Student 1",'Prelim: 90','Midterm: 87','Finals: 94\n') s1.Grade() avg =(90+87+94)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std2,pre,mid,fin): self.std2 = std2 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std2) print(self.pre) print(self.mid) print(self.fin) s2 =Person("Student 2",'Prelim: 77','Midterm: 83','Finals: 89\n') s2.Grade() avg =(77+83+89)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std3,pre,mid,fin): self.std3 = std3 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std3) print(self.pre) print(self.mid) print(self.fin) s3 =Person("Student 3",'Prelim: 93','Midterm: 97','Finals: 84\n') s3.Grade() avg =(93+97+84)/3 print('Average grade is: ' + str(avg)+'\n') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # 
kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Advent of Code 2019 - Day 1 # # The Tyranny of the Rocket Equation # Santa has become stranded at the edge of the Solar System while delivering presents to other planets! To accurately calculate his position in space, safely align his warp drive, and return to Earth in time to save Christmas, he needs you to bring him measurements from fifty stars. # # Collect stars by solving puzzles. Two puzzles will be made available on each day in the Advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck! # # The Elves quickly load you into a spacecraft and prepare to launch. # # At the first Go / No Go poll, every Elf is Go until the Fuel Counter-Upper. They haven't determined the amount of fuel required yet. # # Fuel required to launch a given module is based on its mass. Specifically, to find the fuel required for a module, take its mass, divide by three, round down, and subtract 2. # # For example: # # For a mass of 12, divide by 3 and round down to get 4, then subtract 2 to get 2. # For a mass of 14, dividing by 3 and rounding down still yields 4, so the fuel required is also 2. # For a mass of 1969, the fuel required is 654. # For a mass of 100756, the fuel required is 33583. # The Fuel Counter-Upper needs to know the total fuel requirement. To find it, individually calculate the fuel needed for the mass of each module (your puzzle input), then add together all the fuel values. # # What is the sum of the fuel requirements for all of the modules on your spacecraft? # # ## --- Part Two --- # During the second Go / No Go poll, the Elf in charge of the Rocket Equation Double-Checker stops the launch sequence. Apparently, you forgot to include additional fuel for the fuel you just added. # # Fuel itself requires fuel just like a module - take its mass, divide by three, round down, and subtract 2. However, that fuel also requires fuel, and that fuel requires fuel, and so on. Any mass that would require negative fuel should instead be treated as if it requires zero fuel; the remaining mass, if any, is instead handled by wishing really hard, which has no mass and is outside the scope of this calculation. # # So, for each module mass, calculate its fuel and add it to the total. Then, treat the fuel amount you just calculated as the input mass and repeat the process, continuing until a fuel requirement is zero or negative. For example: # # A module of mass 14 requires 2 fuel. This fuel requires no further fuel (2 divided by 3 and rounded down is 0, which would call for a negative fuel), so the total fuel required is still just 2. # At first, a module of mass 1969 requires 654 fuel. Then, this fuel requires 216 more fuel (654 / 3 - 2). 216 then requires 70 more fuel, which requires 21 fuel, which requires 5 fuel, which requires no further fuel. So, the total fuel required for a module of mass 1969 is 654 + 216 + 70 + 21 + 5 = 966. # The fuel required by a module of mass 100756 and its fuel is: 33583 + 11192 + 3728 + 1240 + 411 + 135 + 43 + 12 + 2 = 50346. # What is the sum of the fuel requirements for all of the modules on your spacecraft when also taking into account the mass of the added fuel? (Calculate the fuel requirements for each module separately, then add them all up at the end.) 
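# As a quick sanity check of the two rules above (separate from the solution that follows), the worked examples can be verified with a small sketch:

# +
# Sanity-check sketch for the fuel rules, using only the worked examples given above.
def fuel(mass):
    """Part One: divide by three, round down, subtract 2."""
    return mass // 3 - 2

def total_fuel(mass):
    """Part Two: keep feeding the fuel mass back in until the result is zero or negative."""
    total = 0
    while (mass := fuel(mass)) > 0:
        total += mass
    return total

assert [fuel(m) for m in (12, 14, 1969, 100756)] == [2, 2, 654, 33583]
assert total_fuel(14) == 2 and total_fuel(1969) == 966 and total_fuel(100756) == 50346
# -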
def calculate_fuel(mass): fuel = (mass // 3) - 2 if fuel < 0: return 0 # if we add fuel, we need to consider its additional mass else: return fuel + calculate_fuel(fuel) # + import os # where the input file is located base_path = "./input/" filename = "input-day1.txt" path_to_file = os.path.join(base_path, filename) # total amount of fuel needed fuel_required = 0 # iterate over mass inputs with open(path_to_file) as f: for cnt,line in enumerate(f): mass = int(line.rstrip()) # sum it all up fuel_required += calculate_fuel(mass) print("Total fuel required: {}".format(fuel_required)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Predicting the Sale Price of Bulldozers Using Machine Learning # # For this dataset, the problem we're trying to solve, or better, the question we're trying to answer is, # # **How well can we predict the future sale price of a bulldozer, given its characteristics previous examples of how much similar bulldozers have been sold for?** # # ## Data # # The data is downloaded from the Kaggle Bluebook for Bulldozers competition: https://www.kaggle.com/c/bluebook-for-bulldozers/data # # There are 3 main datasets: # # * Train.csv is the training set, which contains data through the end of 2011. # * Valid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012 You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public leaderboard. # * Test.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition. # # ## Evaluation # # The evaluation metric for this competition is the RMSLE (root mean squared log error) between the actual and predicted auction prices. # # For more on the evaluation of this project check: https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation # # Note: The goal for most regression evaluation metrics is to minimize the error. For example, our goal for this project will be to build a machine learning model which minimises RMSLE. # # ## Features # # For this dataset, Kaggle provide a data dictionary which contains information about what each attribute of the dataset means. You can download this file directly from the Kaggle competition page (account required) or view it on Google Sheets. # # ## EDA import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn df = pd.read_csv("data/TrainAndValid.csv", low_memory=False) df.info() df.isna().sum() df.columns fig, ax = plt.subplots() ax.scatter(df['saledate'][:1000], df['SalePrice'][:1000]); df['saledate'][:1000] df.SalePrice.plot(kind='hist'); # ### Parsing Dates # # Here, we have to convert the 'saledate' column from object type to datetime type in order to make it actual time so that we can work with it. 
df = pd.read_csv("data/TrainAndValid.csv", low_memory=False, parse_dates=['saledate']) df.saledate.dtype df.saledate[:1000] fig, ax = plt.subplots() ax.scatter(df['saledate'][:1000], df['SalePrice'][:1000]); df.head() df.head().T df.saledate.head(20) # ### Sorting DataFrame by Dates in Ascending Order df.sort_values(by=['saledate'], inplace=True) df.saledate.head() # ### Make a copy of original DataFrame df_tmp = df.copy() # ### Feature Engineering # # Basically, we are splitting the 'saledate; column into 5 new features called day, month, year, sale day of week (The day of the week with Monday=0, Sunday=6) & sale day of year (The ordinal day of the year) # + df_tmp['saleYear'] = df_tmp.saledate.dt.year df_tmp['saleMonth'] = df_tmp.saledate.dt.month df_tmp['saleDay'] = df_tmp.saledate.dt.day df_tmp['saleDayOfWeek'] = df_tmp.saledate.dt.dayofweek df_tmp['saleDayOfYear'] = df_tmp.saledate.dt.dayofyear df_tmp.drop('saledate', axis=1, inplace=True) # - df_tmp.head().T df_tmp.state.value_counts() df_tmp.head() len(df_tmp) # ## Modelling # This will not work as we have missing values along with various object data types like strings which needs # to be converted into categories for assigning numbers. # + # from sklearn.ensemble import RandomForestClassifier # model = RandomForestClassifier(n_jobs=-1, random_state=42) # model.fit(df_tmp.drop('SalePrice', axis=1), df_tmp['SalePrice']) # - df_tmp.info(); df_tmp["UsageBand"].dtype # ## Converting String to Categories # # We can check the different datatypes compatible with pandas here: https://pandas.pydata.org/pandas-docs/stable/reference/general_utility_functions.html#data-types-related-functionality pd.api.types.is_string_dtype(df_tmp["UsageBand"]) # + #Columns with strings #Label is column name and content is column contents for label, content in df_tmp.items(): if pd.api.types.is_string_dtype(content): print(label) # + # If you're wondering what df_tmp.items() does, here is an example: random_dict = {"key1" : "Hello", "key2" : "World"} for key, value in random_dict.items(): print(f"The key is: {key}", f"The value is: {value}") # - for label, content in df_tmp.items(): if pd.api.types.is_string_dtype(content): df_tmp[label] = content.astype("category").cat.as_ordered() df_tmp.info(); df_tmp.state.cat.categories df_tmp.state.cat.codes # With the help of Pandas Categories, we now have a way to access all of our data in the form of numbers. # # But in order to train/fit model without errors we still have to handle missing data. 
# ### Save Preprocessed Data as CSV #Export current temp dataframe df_tmp.to_csv("data/train_tmp.csv", index=False) # + #Import preprocessed data df_tmp = pd.read_csv("data/train_tmp.csv", low_memory=False) df_tmp.head().T # - # ## Fill in Missing Values # + #Check missing data ratio = df_tmp.isna().sum()/len(df_tmp) df_tmp.isna().sum() # - # ### Fill Numerical Missing Values First for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): print(label) # Check which numeric columns have null values for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): print(label) # Fill the missing numeric rows with the median and then create a new column which says if the content was filled with # missing values or it has actual value for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): # Add a binary column which tells if the data was missing or not df_tmp[label + "_is_missing"] = pd.isnull(content) # Fill missing values with media since it is more robust to outliers than mean df_tmp[label] = content.fillna(content.median()) # Why add a binary column indicating whether the data was missing or not? # # We can easily fill all of the missing numeric values in our dataset with the median. However, a numeric value may be missing for a reason. In other words, absence of evidence may be evidence of absence. Adding a binary column which indicates whether the value was missing or not helps to retain this information. # Check if there are any null values for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): print(label) # Check howmany values were missing df_tmp.auctioneerID_is_missing.value_counts() # ### Filling Categorical Values After Converting Them to Numbers # Check columns which are NOT numeric for label, content in df_tmp.items(): if not pd.api.types.is_numeric_dtype(content): print(label) # Turn categorical variables into numbers for label, content in df_tmp.items(): # Check columns which are NOT numeric if not pd.api.types.is_numeric_dtype(content): # Add binary column to indicate whether sample had missing values df_tmp[label + "_is_missing"] = pd.isnull(content) # Convert the categories to codes using the .codes property of Categorical class and add +1 # We add +1 as pandas encodes missing values of categories as -1. But we don't want -ve values so we make all missing # values to 0 df_tmp[label] = pd.Categorical(content).codes + 1 df_tmp.info() df_tmp.isna().sum() df_tmp.head().T # ## Time for Modelling # # Now all of our data is numeric and there are no missing values, we should be able to build a machine learning model! # # Let's reinstantiate our trusty [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html). # # This will take a few minutes which is too long for interacting with it. So what we'll do is create a subset of rows to work with. # + # %%time from sklearn.ensemble import RandomForestRegressor # Instantiating Model model = RandomForestRegressor(n_jobs=-1, random_state=42) #Training the Model # model.fit(df_tmp.drop('SalePrice', axis=1), df_tmp['SalePrice']) # - # %%time # Score the model model.score(df_tmp.drop("SalePrice", axis=1), df_tmp['SalePrice']) # Why is the metric not reliable? # # Because we have scored it with the training data. In order to score a model we should always use validation set and test set. 
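# The median-fill plus `_is_missing` pattern described above can be illustrated on a tiny made-up column:

# +
# Toy illustration of the fill strategy used above (hypothetical values).
toy_num = pd.DataFrame({"hours": [10.0, np.nan, 30.0, np.nan]})
toy_num["hours_is_missing"] = toy_num["hours"].isnull()                # remember which rows were missing
toy_num["hours"] = toy_num["hours"].fillna(toy_num["hours"].median())  # median of [10, 30] is 20
toy_num
# -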
# ### Splitting Data into Train & Valid Sets

df_tmp.head()

# According to the [Kaggle data page](https://www.kaggle.com/c/bluebook-for-bulldozers/data), the validation set and test set are split according to dates.
#
# This makes sense since we're working on a time series problem, e.g. using past events to try and predict future events.
#
# Knowing this, randomly splitting our data into train and test sets using something like `train_test_split()` wouldn't work.
#
# Instead, we split our data into training, validation and test sets using the date each sample occurred.
#
# In our case:
# * Training = all samples up until 2011
# * Valid = all samples from January 1, 2012 - April 30, 2012
# * Test = all samples from May 1, 2012 - November 2012
#
# For more on making good training, validation and test sets, check out the post [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/).

df_tmp.saleYear.value_counts()

# +
# Split data into train and valid sets
df_val = df_tmp[df_tmp.saleYear == 2012]
df_train = df_tmp[df_tmp.saleYear != 2012]

len(df_val), len(df_train)

# +
# Split data into X and y
# df_train["SalePrice"] is the target (y); the remaining columns are the features (X)
X_train, y_train = df_train.drop("SalePrice", axis=1), df_train["SalePrice"]
X_valid, y_valid = df_val.drop("SalePrice", axis=1), df_val.SalePrice

X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# -

# ### Building An Evaluation Function

# According to Kaggle, [the evaluation function](https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation) for the Bluebook for Bulldozers competition is root mean squared log error (RMSLE).
#
# **RMSLE**: generally you don't care if you're off by $10, but you would care if you were off by 10%. In other words, you care more about ratios/percentages than about absolute differences, whereas **MAE** (mean absolute error) is about exact differences.
#
# It's important to understand the evaluation metric you're going for.
#
# Since Scikit-Learn doesn't have a function built-in for RMSLE, we'll create our own.
#
# We can do this by taking the square root of Scikit-Learn's [mean_squared_log_error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html#sklearn.metrics.mean_squared_log_error) (MSLE). MSLE is the mean squared error computed on log-transformed values (it compares log(1 + prediction) with log(1 + actual)), so taking its square root gives RMSLE.
#
# We'll also calculate the MAE and R^2 for fun.

# +
# Create an evaluation function (this competition uses Root Mean Squared Log Error)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score

def rmsle(y_test, y_preds):
    """
    Calculates root mean squared log error between predictions and true labels.
    """
    return np.sqrt(mean_squared_log_error(y_test, y_preds))

def show_scores(model):
    train_preds = model.predict(X_train)
    valid_preds = model.predict(X_valid)
    scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
              "Valid MAE": mean_absolute_error(y_valid, valid_preds),
              "Training RMSLE": rmsle(y_train, train_preds),
              "Valid RMSLE": rmsle(y_valid, valid_preds),
              "Training R^2": model.score(X_train, y_train),   # can also use r2_score(y_train, train_preds)
              "Valid R^2": model.score(X_valid, y_valid)}      # can also use r2_score(y_valid, valid_preds)
    return scores
# -

# ### Testing our model on a subset (to tune the hyperparameters)
#
# Retraining an entire model would take far too long to keep experimenting as fast as we want to.
#
# So what we'll do is take a sample of the training set and tune the hyperparameters on that before training a larger model.
# # If you're experiments are taking longer than 10-seconds (give or take how long you have to wait), you should be trying to speed things up. You can speed things up by sampling less data or using a faster computer. # + # # %%time # # This takes a long time............ # # Retrain our model on training data since the model was previously trained on both train & valid data. # model.fit(X_train, y_train) # show_scores(model) # - len(X_train) # Depending on our computer, making calculations on ~400,000 rows may take a while... # # Let's alter the number of samples each `n_estimator` in the [`RandomForestRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) see's using the `max_samples` parameter. # Change max samples in RandomForestRegressor model = RandomForestRegressor(n_jobs=-1, max_samples=10000) # Setting `max_samples` to 10000 means every `n_estimator` (default 100) in our `RandomForestRegressor` will only see 10000 random samples from our DataFrame instead of the entire 400,000. # # In other words, we'll be looking at 40x less samples which means we'll get faster computation speeds but we should expect our results to worsen (simple the model has less samples to learn patterns from). # %%time # Cutting down the max no. of samples each tree can see reduces training time model.fit(X_train, y_train) show_scores(model) # ### Hyperparameter tuning RandomizedSearchCV # You can increase `n_iter` to try more combinations of hyperparameters but in our case, we'll try 20 and see where it gets us. # # Remember, we're trying to reduce the amount of time it takes between experiments. # + # %%time from sklearn.model_selection import RandomizedSearchCV # Different RandomForestClassifer hyperparameters rf_grid = { "n_estimators": np.arange(10, 100, 10), "max_depth": [None, 3, 5, 10], "min_samples_split": np.arange(2, 20, 2), "min_samples_leaf": np.arange(1, 20,2), "max_features": [0.5, 1, "sqrt", "auto"], "max_samples": [10000] } rs_model = RandomizedSearchCV(RandomForestRegressor(), param_distributions=rf_grid, n_iter=20, cv=5, n_jobs=-1, verbose=True, random_state=42) rs_model.fit(X_train, y_train) # - # Find the best parameters from the RandomizedSearch rs_model.best_params_ # Evaluate the RandomizedSearch Model show_scores(rs_model) # ### Train a model with the best parameters # # In a model I prepared earlier, I tried 100 different combinations of hyperparameters (setting `n_iter` to 100 in `RandomizedSearchCV`) and found the best results came from the ones you see below. # # **Note:** This kind of search on my computer (`n_iter` = 100) took ~2-hours. So it's kind of a set and come back later experiment. # # We'll instantiate a new model with these discovered hyperparameters and reset the `max_samples` back to its original value. # + # %%time # Most ideal hyperparameters ideal_model = RandomForestRegressor(n_estimators=90, min_samples_leaf=1, min_samples_split=14, max_features=0.5, n_jobs=-1, max_samples=None, random_state=42) # random state to reproduce same results ideal_model.fit(X_train, y_train) # - # Show scores for ideal_model (trained on all data) show_scores(ideal_model) # Show scores for rs_model (only trained on ~10,000 examples) show_scores(rs_model) # With these new hyperparameters as well as using all the samples, we can see an improvement to our models performance. # # You can make a faster model by altering some of the hyperparameters. 
Particularly by lowering `n_estimators` since each increase in `n_estimators` is basically building another small model. # # However, lowering of `n_estimators` or altering of other hyperparameters may lead to poorer results. # + # %%time # Faster model than ideal model fast_model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, random_state=42) # random state to reproduce same results fast_model.fit(X_train, y_train) # - show_scores(fast_model) # ### Make predictions on test data # # Now we've got a trained model, it's time to make predictions on the test data. # # Remember what we've done. # # Our model is trained on data prior to 2011. However, the test data is from May 1 2012 to November 2012. # # So what we're doing is trying to use the patterns our model has learned in the training data to predict the sale price of a Bulldozer with characteristics it's never seen before but are assumed to be similar to that of those in the training data. # + df_test = pd.read_csv("data/Test.csv", parse_dates=["saledate"]) df_test.head() # + # Now if we try to predict, we'll get error since the model was trained on training dataset which has different no. of # columns/features. Moreover, the test dataset has missing values and also many features need to be convert from object to # categorical codes #test_preds = ideal_model.predict(df_test) # - # ### Preprocessing the data # # Our model has been trained on data formatted in the same way as the training data. # # This means in order to make predictions on the test data, we need to take the same steps we used to preprocess the training data to preprocess the test data. # # Remember: Whatever you do to the training data, you have to do to the test data. # # Let's create a function for doing so (by copying the preprocessing steps we used above). def preprocess_data(df): # Add datetime parameters for saledate df["saleYear"] = df.saledate.dt.year df["saleMonth"] = df.saledate.dt.month df["saleDay"] = df.saledate.dt.day df["saleDayOfWeek"] = df.saledate.dt.dayofweek df["saleDayOfYear"] = df.saledate.dt.dayofyear # Drop orginal saledate df.drop("saledate", axis=1, inplace=True) # Fill numeric rows with the median for label, content in df.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): # Add a binary column which tells us if the data was missing or not df[label+"_is_missing"] = pd.isnull(content) # Fill missing numeric values with median df[label] = content.fillna(content.median()) # Turn categorical variables into numbers and fill missing if not pd.api.types.is_numeric_dtype(content): # Add binary column to indicate whether sample had missing value df[label+"_is_missing"] = pd.isnull(content) # Turn categories into numbers and add +1 df[label] = pd.Categorical(content).codes+1 return df # + df_test = preprocess_data(df_test) df_test.head() # - X_train.head() # + # # We still cannot get y_preds from test.csv as our test dataset (after preprocessing) has 101 columns whereas, # # our training dataset (X_train) has 102 columns (after preprocessing). # y_preds = ideal_model.predict(df_test) # - # Let's find the difference between two dataframes set(X_train.columns) - set(df_test.columns) # In this case, it's because the test dataset wasn't missing any `auctioneerID` fields. # # To fix it, we'll add a column to the test dataset called `auctioneerID_is_missing` and fill it with `False`, since none of the `auctioneerID` fields are missing in the test dataset. 
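# A more general way to handle this mismatch (an optional sketch, not the notebook's original step, which follows below) is to add every training-only column to the test frame and then match the column order:

# +
# Generic column-alignment sketch; assumes any absent columns are boolean *_is_missing flags.
for col in set(X_train.columns) - set(df_test.columns):
    df_test[col] = False
df_test = df_test[X_train.columns]   # same columns, same order as the training data
# -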
# Match test dataset columns to training dataset df_test["auctioneerID_is_missing"] = False df_test.head() # Make predictions on the test dataset using the best model test_preds = ideal_model.predict(df_test) # When looking at the [Kaggle submission requirements](https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation), we see that if we wanted to make a submission, the data is required to be in a certain format. Namely, a DataFrame containing the `SalesID` and the predicted `SalePrice` of the bulldozer. # Create Dataframe compatible with Kaggle data submission requirements df_preds = pd.DataFrame() df_preds["SalesID"] = df_test["SalesID"] df_preds["SalePrice"] = test_preds df_preds # Export prediction data df_preds.to_csv("data/test_predictions.csv", index=False) # ## Feature Importance # # Since we've built a model which is able to make predictions. The people you share these predictions with (or yourself) might be curious of what parts of the data led to these predictions. # # This is where **feature importance** comes in. Feature importance seeks to figure out which different attributes of the data were most important when it comes to predicting the **target variable**. # # In our case, after our model learned the patterns in the data, which bulldozer sale attributes were most important for predicting its overall sale price? # # Beware: the default feature importances for random forests can lead to non-ideal results. # # To find which features were most important of a machine learning model, a good idea is to search something like "\[MODEL NAME\] feature importance". # # Doing this for our `RandomForestRegressor` leads us to find the `feature_importances_` attribute. # # Let's check it out. # Find feature importance of our best model ideal_model.feature_importances_ # Helper functions for plotting feature importance def plot_features(columns, importances, n=20): df = (pd.DataFrame({"features": columns, "feature_importances": importances}) .sort_values("feature_importances", ascending=False) .reset_index(drop=True)) #Plot the dataframe fig, ax = plt.subplots() ax.barh(df["features"][:n], df["feature_importances"][:20]) ax.set_xlabel("Feature Importance") ax.set_ylabel("Features") ax.invert_yaxis() plot_features(X_train.columns, ideal_model.feature_importances_) sum(ideal_model.feature_importances_) df.ProductSize.isna().sum() df.ProductSize.value_counts() df.Turbocharged.value_counts() df.Thumb.value_counts() df["Enclosure"].value_counts() # **Question to finish:** Why might knowing the feature importances of a trained machine learning model be helpful? # # **Final challenge/extension:** What other machine learning models could you try on our dataset? # https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html Check out the regression section of this map, or try to look at something like CatBoost.ai or XGBooost.ai. 
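# Following up on the caveat above that the default (impurity-based) random forest feature importances can be misleading, scikit-learn's `permutation_importance` is one possible cross-check. Below is a sketch using the names from this notebook; it is not part of the original analysis.

# +
from sklearn.inspection import permutation_importance

# Permutation importance on the validation set as a cross-check (sketch only).
perm = permutation_importance(ideal_model, X_valid, y_valid,
                              n_repeats=5, random_state=42, n_jobs=-1)
(pd.DataFrame({"features": X_valid.columns,
               "importance": perm.importances_mean})
   .sort_values("importance", ascending=False)
   .head(10))
# -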
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import csv size = 1000 np.random.seed(123) x1 = np.random.uniform(size=size) #f1 = np.random.laplace(size=size) #f2 = np.random.laplace(size=size) f1 = np.random.uniform(size=size) f2 = np.random.uniform(size=size) x2 = 0.4*x1 + 0.8*f1 + np.random.uniform(size=size) x3 = 0.3*x2 + 0.7*f1 + 0.7*f2 + np.random.uniform(size=size) x4 = 0.2*x3 + 0.8*f2 + np.random.uniform(size=size) x5 = 0.5*x3 + 0.5*x4 + np.random.uniform(size=size) x6 = 0.6*x5 + np.random.uniform(size=size) xs = np.array([x1,x2,x3,x4,x5,x6,f1,f2]).T X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,x6,f1,f2]).T,columns=['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'f1', 'f2']) X.head() X.to_csv('LiNGAM_latest_1000.csv', index=False) # + import pandas as pd import numpy as np import csv size = 1000 np.random.seed(123) x3 = np.random.uniform(size=size) x1 = 1.4*x3 + np.random.uniform(size=size) x2 = -0.8*x1 + 0.5*x3+ np.random.uniform(size=size) #xs = np.array([x1,x2,x3]).T X = pd.DataFrame(np.array([x1, x2, x3]).T,columns=['x1', 'x2', 'x3']) X.head() X.to_csv('LiNGAM_test_1000.csv', index=False) # + import numpy as np import csv import pandas as pd size = 1000 np.random.seed(123) x1 = np.random.uniform(size=size) #f1 = np.random.laplace(size=size) #f2 = np.random.laplace(size=size) f1 = np.random.uniform(size=size) f2 = np.random.uniform(size=size) x2 = 0.4*x1 + 0.8*f1 + np.random.uniform(size=size) x3 = 0.3*x2 + 0.7*f1 + 0.7*f2 + np.random.uniform(size=size) x4 = 0.2*x3 + 0.8*f2 + np.random.uniform(size=size) x5 = 0.5*x3 + 0.5*x4 + np.random.uniform(size=size) x6 = 0.6*x5 + np.random.uniform(size=size) np.random.seed(456) x7 = 0.5*x6 + np.random.uniform(size=size) #f3 = np.random.laplace(size=size) #f4 = np.random.laplace(size=size) f3 = np.random.uniform(size=size) f4 = np.random.uniform(size=size) x8 = 0.4*x7 + 0.8*f3 + np.random.uniform(size=size) x9 = 0.3*x8 + 0.7*f3 + 0.7*f4 + np.random.uniform(size=size) x10 = 0.2*x9 + 0.8*f4 + np.random.uniform(size=size) x11 = 0.5*x9 + 0.5*x10 + np.random.uniform(size=size) x12 = 0.6*x11 + np.random.uniform(size=size) xs = np.array([x1,x2,x3,x4,x5,x6,f1,f2]).T X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,x6,f1,f2,x7, x8, x9,x10,x11,x12,f3,f4]). 
T,columns=['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'f1', 'f2','x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'f3', 'f4']) X.head() X.to_csv('LiNGAM_latest2_1000.csv', index=False) #X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,x6,f1,f2]).T,columns=['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'f1', 'f2']) #X.head() #X.to_csv('LiNGAM_latest2.csv', index=False) # + import numpy as np import csv import pandas as pd size = 1000 np.random.seed(123) x1 = np.random.uniform(size=size) #f1 = np.random.laplace(size=size) #f2 = np.random.laplace(size=size) f1 = np.random.uniform(size=size) f2 = np.random.uniform(size=size) x2 = -0.7*x1 + 1.2*f1 + np.random.uniform(size=size) x3 = 0.5*x2 + f2 + np.random.uniform(size=size) x4 = -0.5*x3 + 0.8*f1 + 1.2*f2 + np.random.uniform(size=size) x5 = 1.2*x1 - 0.8*x4 + np.random.uniform(size=size) xs = np.array([x1,x2,x3,x4,x5,f1,f2]).T X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,f1,f2]).T,columns=["x1", "x2", "x3", "x4", "x5", "f1", "f2"]) X.head() X.to_csv('LiNGAM_latest3_1000.csv', index=False) # + import numpy as np import csv import pandas as pd size = 1000 np.random.seed(123) x1 = np.random.uniform(size=size) #f1 = np.random.laplace(size=size) #f2 = np.random.laplace(size=size) f1 = np.random.uniform(size=size) f2 = np.random.uniform(size=size) x2 = 0.4*x1 + 0.8*f1 + np.random.uniform(size=size) x3 = 0.3*x2 + 0.7*f1 + 0.7*f2 + np.random.uniform(size=size) x4 = 0.2*x3 + 0.8*f2 + np.random.uniform(size=size) x5 = 0.5*x3 + 0.5*x4 + np.random.uniform(size=size) x6 = 0.6*x5 + np.random.uniform(size=size) np.random.seed(456) x7 = 0.5*x6 + np.random.uniform(size=size) #f3 = np.random.laplace(size=size) #f4 = np.random.laplace(size=size) f3 = np.random.uniform(size=size) f4 = np.random.uniform(size=size) x8 = 0.4*x7 + 0.8*f3 + np.random.uniform(size=size) x9 = 0.3*x8 + 0.7*f3 + 0.7*f4 + np.random.uniform(size=size) x10 = 0.2*x9 + 0.8*f4 + np.random.uniform(size=size) x11 = 0.5*x9 + 0.5*x10 + np.random.uniform(size=size) x12 = 0.6*x11 + np.random.uniform(size=size) np.random.seed(789) x13 = 0.5*x12 + np.random.uniform(size=size) #f5 = np.random.laplace(size=size) #f6 = np.random.laplace(size=size) f5 = np.random.uniform(size=size) f6 = np.random.uniform(size=size) x14 = 0.4*x13 + 0.8*f5 + np.random.uniform(size=size) x15 = 0.3*x14 + 0.7*f5 + 0.7*f6 + np.random.uniform(size=size) x16 = 0.2*x15 + 0.8*f6 + np.random.uniform(size=size) x17 = 0.5*x15 + 0.5*x16 + np.random.uniform(size=size) x18 = 0.6*x17 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,x6,f1,f2,x7, x8, x9,x10,x11,x12,f3,f4, x13,x14,x15,x16,x17,x18,f5,f6]). 
T,columns=['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'f1', 'f2','x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'f3', 'f4', 'x13','x14','x15','x16','x17','x18','f5','f6']) X.head() X.to_csv('LiNGAM_latest4_1000.csv', index=False) # + import numpy as np import csv import pandas as pd size = 10000 np.random.seed(123) f1 = np.random.uniform(size=size) f10 = np.random.uniform(size=size) np.random.seed(456) f2 = 0.3*f1+np.random.uniform(size=size) f3 = 0.3*f2+np.random.uniform(size=size) f4 = 0.3*f3+np.random.uniform(size=size) f5 = 0.3*f4+np.random.uniform(size=size) f9 = -0.3*f10+np.random.uniform(size=size) f8 = -0.3*f9+np.random.uniform(size=size) f7 = -0.3*f8+np.random.uniform(size=size) f6 = -0.3*f7+np.random.uniform(size=size) np.random.seed(789) x1 = 0.4*f1 + 0.6*f6 + 0.1*f5 + np.random.uniform(size=size) x2 = 0.2*x1 + 0.5*f1 + 0.5*f2 + 0.6*f7+ 0.1*f8 + np.random.uniform(size=size) x3 = 0.2*x2 + 0.5*f2 + 0.5*f4 + 0.6*f8+ 0.1*f9 + np.random.uniform(size=size) x4 = 0.2*x3 + 0.5*f3 + 0.5*f5 + 0.6*f9+ 0.1*f10 + np.random.uniform(size=size) x5 = 0.2*x4 + 0.5*f4 + 0.5*f5 + 0.1*f10 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,f1,f2,f3,f4,f5,f6,f7,f8,f9,f10]).T, columns=['x1', 'x2', 'x3', 'x4', 'x5', 'f1', 'f2','f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10']) X.head() X.to_csv('LiNGAM_latest5.csv', index=False) # + import numpy as np import csv import pandas as pd size = 10000 np.random.seed(123) f1 = np.random.uniform(size=size) f10 = np.random.uniform(size=size) np.random.seed(456) f2 = 0.3*f1+np.random.uniform(size=size) f3 = 0.3*f2+np.random.uniform(size=size) f4 = 0.3*f3+np.random.uniform(size=size) f5 = 0.3*f4+np.random.uniform(size=size) f9 = -0.3*f10+np.random.uniform(size=size) f8 = -0.3*f9+np.random.uniform(size=size) f7 = -0.3*f8+np.random.uniform(size=size) f6 = -0.3*f7+np.random.uniform(size=size) np.random.seed(789) x1 = 0.4*f1 + 0.6*f6 + 0.1*f5 + np.random.uniform(size=size) x2 = 0.5*f1 + 0.5*f2 + 0.6*f7+ 0.1*f8 + np.random.uniform(size=size) x3 = 0.5*f2 + 0.5*f4 + 0.6*f8+ 0.1*f9 + np.random.uniform(size=size) x4 = 0.2*x3 + 0.5*f3 + 0.5*f5 + 0.6*f9+ 0.1*f10 + np.random.uniform(size=size) x5 = 0.5*f4 + 0.5*f5 + 0.1*f10 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,x5,f1,f2,f3,f4,f5,f6,f7,f8,f9,f10]).T, columns=['x1', 'x2', 'x3', 'x4', 'x5', 'f1', 'f2','f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10']) X.head() X.to_csv('LiNGAM_latest6.csv', index=False) # + import numpy as np import csv import pandas as pd size = 10000 np.random.seed(123) f1 = np.random.uniform(size=size) f10 = np.random.uniform(size=size) np.random.seed(456) f2 = 0.3*f1+np.random.uniform(size=size) f3 = 0.3*f2+np.random.uniform(size=size) f4 = 0.3*f3+np.random.uniform(size=size) f5 = 0.3*f4+np.random.uniform(size=size) f9 = -0.3*f10+np.random.uniform(size=size) f8 = -0.3*f9+np.random.uniform(size=size) f7 = -0.3*f8+np.random.uniform(size=size) f6 = -0.3*f7+np.random.uniform(size=size) np.random.seed(789) x1 = np.random.uniform(size=size) x2 = 0.2*x1+0.5*f1 + 0.4*f2 + 0.5*f3 + 0.1*f6+ 0.6*f7+ 0.1*f8 + 0.1*f9 + np.random.uniform(size=size) x3 = 0.2*x2+0.2*x4+0.4*f3 + 0.5*f5 + 0.6*f8+ 0.1*f10 + np.random.uniform(size=size) x4 = 0.5*f3 + 0.4*f4 + 0.6*f9 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,f1,f2,f3,f4,f5,f6,f7,f8,f9,f10]).T, columns=['x1', 'x2', 'x3', 'x4', 'f1', 'f2','f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10']) X.head() X.to_csv('LiNGAM_latest7.csv', index=False) # + import numpy as np import csv import pandas as pd size = 10000 
np.random.seed(123) f2 = np.random.uniform(size=size) np.random.seed(345) f3 = 3.0*f2+np.random.uniform(size=size) np.random.seed(567) f1 = 5.0*f3+np.random.uniform(size=size) a = 1.2 b = 1.2 np.random.seed(111) x1 = 2.3*f1 + 0.1*f2 + np.random.uniform(size=size) np.random.seed(222) x2 = a*x1 + 3.0*f2 + 1.6*f3 + np.random.uniform(size=size) np.random.seed(333) x3 = b*x2 + 1.4*f1 + np.random.uniform(size=size) np.random.seed(444) x4 = 1.2*x3 + 1.1*f3 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,f1,f2,f3]).T, columns=['x1', 'x2', 'x3', 'x4', 'f1', 'f2','f3']) X.head() X.to_csv('LiNGAM_latest8.csv', index=False) # + import numpy as np import csv import pandas as pd size = 10000 np.random.seed(123) f2 = np.random.uniform(size=size) np.random.seed(345) f3 = 3.0*f2+np.random.uniform(size=size) np.random.seed(567) f1 = 5.0*f3+np.random.uniform(size=size) a = 0.9 b = 0.8 np.random.seed(111) x1 = 2.3*f1 + 0.1*f2 + np.random.uniform(size=size) np.random.seed(222) x2 = a*x1 + 3.0*f2 + 1.6*f3 + np.random.uniform(size=size) np.random.seed(333) x3 = b*x2 + 1.4*f1 + np.random.uniform(size=size) np.random.seed(444) x4 = 1.2*x3 + 1.1*f3 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4,f1,f2,f3]).T, columns=['x1', 'x2', 'x3', 'x4', 'f1', 'f2','f3']) X.head() X.to_csv('LiNGAM_latest9.csv', index=False) # + import numpy as np import csv import pandas as pd size = 1380 np.random.seed(123) f1 = np.random.uniform(size=size) np.random.seed(456) f2 = np.random.uniform(size=size) np.random.seed(789) f3 = np.random.uniform(size=size) np.random.seed(1111) x3 = f1 + f2 + np.random.uniform(size=size) np.random.seed(2222) x1 = f2 + f3 + np.random.uniform(size=size) np.random.seed(3333) x6 = f1 + f3 + np.random.uniform(size=size) np.random.seed(4444) x5 = 0.16*x3 + 0.3*x1 + 0.08*x6+ np.random.uniform(size=size) np.random.seed(5555) x4 = -0.05*x1 - 0.02*x6 + 0.61*x5 + np.random.uniform(size=size) np.random.seed(6666) x2 = 0.18*x5 + 0.3*x4 + np.random.uniform(size=size) X = pd.DataFrame(np.array([x1, x2, x3,x4, x5, x6, f1,f2,f3]).T, columns=['Fathers Occupation', 'Sons Income', 'Fathers Education', 'Sons Occupation', 'Sons Education', 'Number of Siblings', 'f1', 'f2','f3']) X.head() X.to_csv('GSS_sim.csv', index=False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # Import SQLAlchemy `automap` and other dependencies import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, inspect # Create the connection engine engine = create_engine("sqlite:///../Resources/dow.sqlite") # Create the inspector and connect it to the engine inspector = inspect(engine) # Collect the names of tables within the database inspector.get_table_names() # Using the inspector to print the column names within the 'dow' table and its types columns = inspector.get_columns('dow') for column in columns: print(column["name"], column["type"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Panda 数据分析常用函数(上) # ### 1. 导入模块 import pandas as pd import numpy as np # ### 2. 
Create and read the dataset

# **Create the dataset**

data = pd.DataFrame({
    "id": np.arange(101, 111),  # np.arange generates values in the given range; here it produces ids 101-110.
    "date": pd.date_range(start="20200310", periods=10),  # date values with 10 periods; the number of periods should equal the number of rows.
    "money": [5, 4, 65, -10, 15, 20, 35, 16, 6, 20],  # plant a -10 "pit" here, to be filled in below (digging a hole for myself, luckily I don't plan to jump in)
    "product": ['苏打水','可乐','牛肉干','老干妈','菠萝','冰激凌','洗面奶','洋葱','牙膏','薯片'],
    "department": ['饮料','饮料','零食','调味品','水果',np.nan,'日用品','蔬菜','日用品','零食'],  # plant another trap: a missing value
    "origin": ['China',' China','America','China','Thailand','China','america','China','China','Japan']  # and one more trap: a lowercase 'america'
})
data

# **Writing and reading the data**

data.to_csv("shopping.csv", index=False)  # index=False means don't write the index; otherwise an extra index column is added
data = pd.read_csv("shopping.csv")

# ### 3. Viewing the data

# **Basic information about the dataset**

# Shape of the data
data.shape

# +
# Data types of all columns
data.dtypes

# Data type of a single column
# data['id'].dtype
# -

# Number of dimensions
data.ndim

# Row index
data.index

# Column index
data.columns

# The underlying values
data.values

# **Overview of the dataset**

data.head()  # show the first few rows (5 by default)

data.tail()  # show the last few rows (5 by default)

data.info()  # overview of the dataset: index, column dtypes, non-null counts, memory usage

data.describe()  # quick summary statistics

# ### 4. Data cleaning

# **Inspecting unusual values**

# Of course, this dataset is small enough that the unusual values can be spotted by eye
for i in data:
    print(i + ": " + str(data[i].unique()))

# **Detecting missing values**

data.isnull()
# data['department'].isnull()  # check missing values in a single column

# Summarise the missing-value check, which is easier to read; ascending defaults to True (ascending order).
data.isnull().sum().sort_values(ascending=False)

# +
# Detect missing values.
# data.isna()

# Whether each element in the DataFrame is contained in values.
# data.isin()
df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]}, index=['falcon', 'dog'])
df.isin([2])
# -

# **Handling missing values**

# +
# pandas.DataFrame.fillna(value=None, method=None, inplace=False)
# value: the value to fill with; can be a scalar, dict or array, but not a list;
# method: the fill method, e.g. ffill (fill with the previous value) or bfill (fill with the next value);
# inplace defaults to False; if True, the object is modified in place (and so are all other views on it).
data['department'].fillna(method="bfill")  # fill with the next value
# data['department'].fillna(value="冷冻食品", inplace=True)  # or fill with a specific value, modifying the original object
# -

# **Stripping whitespace**

for i in data:
    if pd.api.types.is_object_dtype(data[i]):
        data[i] = data[i].str.strip()
data['origin'].unique()

# **Case conversion**

# +
# data['origin'].str.title()       # capitalise the first letter of each word
# data['origin'].str.capitalize()  # capitalise the first letter
# data['origin'].str.upper()       # all upper case
# data['origin'].str.lower()       # all lower case
# -

# **Replacing values**

# Replace 'america' with 'America'
data['origin'].replace("america", "America", inplace=True)
data['origin']

data['money'].replace(-10, np.nan, inplace=True)  # replace the negative value with NaN
data['money'].replace(np.nan, data['money'].mean(), inplace=True)  # replace the NaN with the column mean
data['money']

# **Dropping rows (method 1)**

# Drop the rows where origin is 'America'
data1 = data[data.origin != 'America']
data1

# Drop every row that contains 'Japan': rows with no cell equal to 'Japan' evaluate to True and are kept
data2 = data[(data != 'Japan').all(1)]
data2

# **Dropping duplicates (method 2)**

# +
# By default, later duplicates are dropped, i.e. the first occurrence is kept
data['origin'].drop_duplicates()

# Drop earlier duplicates, i.e. keep the last occurrence
# data['origin'].drop_duplicates(keep='last')
# -

data

# **Converting data types**

data3 = data['id'].astype('str')  # convert the id column to string type

# **Renaming columns**

data4 = data.rename(columns={'id': 'ID', 'origin': '产地'})  # rename the id column to ID and the origin column to 产地
data4

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python (sbi)
#     language: python
#     name: sbi
# ---

# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
import os
import pickle

import matplotlib.pyplot as plt
import seaborn as sns
import itertools

import torch
torch.manual_seed(0)
import tensorflow as tf

from neural_circuits.LRRNN import tf_num_params, torch_num_params

import warnings
warnings.filterwarnings('ignore')

"""RNN stable amplification."""
# -

# ## Supp Fig 2

# +
colors = sns.color_palette()

Ns = [2, 5, 10, 25, 50, 100, 250]
epi_num_params = []
snpe_num_params = [] for N in Ns: epi_num_params.append(tf_num_params(N)) snpe_num_params.append(torch_num_params(N)) yticks = [0, 1e5, 2e5, 3e5, 4e5, 5e5] yticklabels = ["%dk" % (y/1000) for y in yticks] fig, ax = plt.subplots(1,1) plt.plot(Ns, epi_num_params, '-o', c=colors[0], label="EPI", alpha=0.7) plt.plot(Ns, snpe_num_params, '-o', c=colors[1], label="SNPE", alpha=0.7) plt.legend() ax.set_yticks(yticks) ax.set_yticklabels(yticklabels) plt.show() # - # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### "Drink because you are happy, but never because you are miserable" # # import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn as sk from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import mean_squared_error from IPython.core.display import HTML display(HTML("")) # #### The Dataset: # The dataset contains information on countries that rank their level of happiness through various data such as HDI, gross domestic product (GDP) and types of alcohol. # [link to the data source](https://www.kaggle.com/marcospessotto/happiness-and-alcohol-consumption) # # We read the data from a github repository: alcohol_df = pd.read_csv('https://raw.githubusercontent.com/shahafmalka13/DS_PROJECT/main/HappinessAlcoholConsumption.csv.csv') alcohol_df # ## 1. Wrangling the data: # # - Treat data types (if needed) # - Treat missing values (if needed) # - Treat column names (if needed) # # - Treat any other weird thing your data might have # # # we check the data types- alcohol_df.dtypes # Check if there are missing values- alcohol_df.isnull().sum().sort_values() # ## 2. Understanding the data # alcohol_df.plot(subplots = True , layout =(3,3) , kind ='box', figsize = (14,16), patch_artist = False) plt.subplots_adjust(wspace=0.5) # GDP- It can be clearly seen that many data are not in the relevant field and therefore we have chosen to present in another way. plt.title('GDP_PerCapita:') alcohol_df['GDP_PerCapita'].hist(); # - It can be noted that there are several anomalies in the data but these anomalies are not problematic because it is indeed possible that the gross national product in one country is significantly higher than in the other countries, as well as the consumption of wine or beer in one country is significantly different from another. # According to state production capacity / demand / supply alcohol_df.pivot_table(['Beer_PerCapita','Spirit_PerCapita','Wine_PerCapita' ],'Region') # #### findings: # - It seems that in the provinces characterized by developing or Muslim countries the level of alcohol consumption is relatively low compared to the developed countries plt.figure(figsize=(15,6)) plt.title('Presentation of the sample of countries:') plot1 = sns.countplot(x="Region", data=alcohol_df); plot1.set_xticklabels(plot1.get_xticklabels(), rotation=40, ha="right"); # - It can be seen that the sample presented in the data reliably represents the distribution and division of countries into regions in the world. 
alcohol_df["Total_alcohol"] = alcohol_df['Beer_PerCapita'] + alcohol_df['Spirit_PerCapita'] + alcohol_df['Wine_PerCapita'] alcohol_df.head() alcohol_groupd = alcohol_df.groupby('Country')[['HappinessScore', 'Total_alcohol']].max() groupH = alcohol_groupd.sort_values('HappinessScore',ascending = False).reset_index() groupH alcohol_groupd = alcohol_df.groupby('Country')[['HappinessScore', 'Total_alcohol']].max() groupT = alcohol_groupd.sort_values('Total_alcohol',ascending = False).reset_index() groupT countries = groupT['Country'] diff=[] for country in countries: c1 = int(groupT[groupT['Country'] == country].index[0]) c2 = int(groupH[groupH['Country'] == country].index[0]) diff+=[abs(c1-c2)] mean = np.mean(diff) mean # ##### findings: # - It can be seen that the happiest country is not necessarily the country with the highest alcohol consumption. # - It can be seen that there is indeed a connection between the level of alcohol and happiness. # - With the help of the function we built it can be seen that the sorting is not completely random. corr_prep = alcohol_df[['HappinessScore','Total_alcohol']] corr_prep.corr(method = 'spearman') # We chose to use Spearman's correlation because the other methods give a lower correlation. (unit 5) sns.heatmap(corr_prep.corr(), vmin=0.0 , vmax = 1,cmap='Reds' , annot=True); plt.title('Amount of Alcohol versus happiness level:') sns.regplot(x='Total_alcohol', y='HappinessScore', data=corr_prep); # ##### findings: # # - The correlation between happiness level and overall alcohol consumption is moderate. This medium relationship can be explained by a number of explanations: # - It is natural to think that in countries where people drink more the population is happier (celebrating more) but it is possible that in countries where people drink more it is the population that is obligated to alcohol addictions, ie a population with bigger problems. # checking the correlation between types of alcohol with happines- spc_corr = alcohol_df[['HappinessScore','Beer_PerCapita','Spirit_PerCapita','Wine_PerCapita']].corr(method = 'spearman') spc_corr spc_corr.style.background_gradient(cmap='Blues') plt.title('The connection between drinking Wine and drinking Beer:') sns.regplot(x='Wine_PerCapita', y='Beer_PerCapita', data=alcohol_df); # ##### findings: # # - we can see that the relationship between beer and happiness is the strongest between all of them. # - From the heat map it can be seen that there is a strong connection between drinking beer and drinking wine. hdi_groupd = alcohol_df.groupby('Region')[['Total_alcohol','HDI']].sum() hdi_groupd.sort_values('HDI',ascending = False) # - There is a direct relationship between alcohol consumption and the level of development of the country # ## 3. Building a model from the data # We will try to predict the level of happiness using the HDI, GDP, total alcohol index. In order to predict we will use tree in the regression method because happiness is measured in a continuous range. Using the size of the error we will try to understand if there is a relationship between these features and the level of happiness. 
features = ['Total_alcohol','HDI','GDP_PerCapita'] X = alcohol_df[features] y = alcohol_df['HappinessScore'] X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(X, y, test_size=0.3, random_state=42) X.head() Alcohol_model = sk.tree.DecisionTreeRegressor(random_state=42) Alcohol_model.fit(X_train, y_train) #def mse(a,b): #return np.sqrt(np.square(a-b).mean()) def eval(x_test,y_test,Alcohol_model): pred = Alcohol_model.predict(x_test) print("MSE: {:.3f}".format(mean_squared_error(pred,y_test,squared=False))) eval(X_test,y_test,Alcohol_model) # + print("Making predictions:") print(y_test) print("The predictions are:") test_pred = Alcohol_model.predict(X_test) print(test_pred) print("MSE: {:.3f}".format(mse(y_test.values,test_pred))) # - # We will check the depth of the tree and try to change it to see if there is a positive change in the size of the error. print("depth:",Alcohol_model.get_depth()) import sklearn.tree as tree def plot_tree(tree_model,feat,size=(15,10)): fig = plt.figure(figsize=size) tree.plot_tree(tree_model, feature_names = feat, filled=True, fontsize=15) plt.show() # + model = DecisionTreeRegressor(max_depth=3,random_state=42) model.fit(X_train,y_train) eval(X_test,y_test,model) plot_tree(model,X_test.columns,size=(30,20)) # - # Because the mse is very low it can be concluded that the features we have chosen reliably predict the level of happiness measured in each country and that there is indeed a direct relationship between these features and the level of happiness. display(HTML("")) # ##### And a little for general knowledge, let's see where we're located groupH[groupH['Country']=='Israel'] groupT[groupT['Country']=='Israel'] # And what makes Israelis rank so high in the global happiness rankings? # The reasons can be seen in the next article # [link to the article](https://www.ynet.co.il/articles/0,7340,L-4956142,00.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import lattice # #%matplotlib widget import xrayutilities as xu import matplotlib.pyplot as plt from matplotlib import cm, rcParams from matplotlib.colors import ListedColormap, LinearSegmentedColormap import numpy as np # Generate new colormap for RSM display rainbow = cm.get_cmap('rainbow', 256) newcolors = rainbow(np.linspace(0, 1, 256)) white = np.array([1, 1, 1, 1]) newcolors[:20, :] = white newcmp = ListedColormap(newcolors) refHKL = (2, 2, 0) iHKL = (0,0,1) oHKL = (1, 1, 0) subMat = 'LSAT' rsmFile = lattice.lattice(refHKL, iHKL, oHKL, subMat, geometry = 'real') rsmFile.load_sub() # - # %matplotlib widget ax = (rsmFile.plot2d( cmap='rainbow' )) #ax2 = (rsmFile.plotQ(60,60)) # + # %matplotlib widget b = rsmFile.plotQ(100, 100, 4, 2, nlev=100, cmap='rainbow') b.set_title(rsmFile.filename) b.set_aspect(1) #b.set_xlim(-0.11,0.11) #b.set_ylim(4.18, 5.35) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="phDtr1X15jaE" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 89} outputId="e8764435-8a36-4b63-d7ba-ea3ec12a7ae2" executionInfo={"status": "ok", "timestamp": 1526393365191, "user_tz": -540, "elapsed": 5971, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a/default-user=s128", "userId": "107995332831641667384"}} # !pip install dynet # !git clone https://github.com/neubig/nn4nlp-code.git # + id="NPJBoWVw5sbV" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} from collections import defaultdict import math import time import random import dynet as dy import numpy as np # + id="vrO9UJgb6CCG" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} N=2 #length of window on each side (so N=2 gives a total window size of 5, as in t-2 t-1 t t+1 t+2) EMB_SIZE = 128 # The size of the embedding embeddings_location = "embeddings.txt" #the file to write the word embeddings to labels_location = "labels.txt" #the file to write the labels to # We reuse the data reading from the language modeling class w2i = defaultdict(lambda: len(w2i)) S = w2i[""] UNK = w2i[""] def read_dataset(filename): with open(filename, "r") as f: for line in f: yield [w2i[x] for x in line.strip().split(" ")] # + id="pc-V-N_F6ESJ" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Read in the data train = list(read_dataset("nn4nlp-code/data/ptb/train.txt")) w2i = defaultdict(lambda: UNK, w2i) dev = list(read_dataset("nn4nlp-code/data/ptb/valid.txt")) i2w = {v: k for k, v in w2i.items()} nwords = len(w2i) # + id="47r1FUun6GOg" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} with open(labels_location, 'w') as labels_file: for i in range(nwords): labels_file.write(i2w[i] + '\n') # Start DyNet and define trainer model = dy.Model() trainer = dy.SimpleSGDTrainer(model, learning_rate=0.1) # + id="4L-43ZT06I-y" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Start DyNet and define trainer model = dy.Model() trainer = dy.SimpleSGDTrainer(model, learning_rate=0.1) # Define the model W_c_p = model.add_lookup_parameters((nwords, EMB_SIZE)) # Word weights at each position W_w_p = model.add_parameters((nwords, EMB_SIZE)) # Weights of the softmax # + id="8938NP8l66Ei" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Calculate the loss value for the entire sentence def calc_sent_loss(sent): # Create a computation graph dy.renew_cg() #add padding to the sentence equal to the size of the window #as we need to predict the eos as well, the future window at that point is N past it padded_sent = [S] * N + sent + [S] * N padded_emb = [W_c_p[x] for x in padded_sent] W_w = dy.parameter(W_w_p) # Step through the sentence all_losses = [] for i in range(N,len(sent)+N): c = dy.esum(padded_emb[i-N:i] + padded_emb[i+1:i+N+1]) s = W_w * c all_losses.append(dy.pickneglogsoftmax(s, padded_sent[i])) return dy.esum(all_losses) # + id="LaQSAD9Y7_9D" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 35} outputId="40299f2a-c495-44ad-c525-565222db80e6" executionInfo={"status": "ok", "timestamp": 1526393370357, "user_tz": -540, "elapsed": 514, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "107995332831641667384"}} calc_sent_loss(sent) # + id="M-fSvQOp67-0" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 485} outputId="a269d09e-6167-494d-d994-5c9b7af983f4" MAX_LEN = 100 for ITER in range(100): print("started iter %r" % ITER) # Perform training random.shuffle(train) train_words, train_loss = 0, 0.0 start = time.time() 
for sent_id, sent in enumerate(train): my_loss = calc_sent_loss(sent) train_loss += my_loss.value() train_words += len(sent) my_loss.backward() trainer.update() if (sent_id+1) % 5000 == 0: print("--finished %r sentences" % (sent_id+1)) print("iter %r: train loss/word=%.4f, ppl=%.4f, time=%.2fs" % (ITER, train_loss/train_words, math.exp(train_loss/train_words), time.time()-start)) # Evaluate on dev set dev_words, dev_loss = 0, 0.0 start = time.time() for sent_id, sent in enumerate(dev): my_loss = calc_sent_loss(sent) dev_loss += my_loss.value() dev_words += len(sent) trainer.update() print("iter %r: dev loss/word=%.4f, ppl=%.4f, time=%.2fs" % (ITER, dev_loss/dev_words, math.exp(dev_loss/dev_words), time.time()-start)) print("saving embedding files") with open(embeddings_location, 'w') as embeddings_file: W_w_np = W_w_p.as_array() for i in range(nwords): ith_embedding = '\t'.join(map(str, W_w_np[i])) embeddings_file.write(ith_embedding + '\n') # + id="6j_hjsIQ69xb" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # FaIR Model v1.6 # ## - Demo notebook: Sept. 2020 # # # The Finite Amplitude Impulse Response (FaIR) model is a simple emissions-based, globally-averaged climate model. It allows the user to input emissions of greenhouse gases and short lived climate forcers in order to estimate global mean **atmospheric GHG concentrations**, **radiative forcing** and **temperature anomalies**.
# # Read the docs: https://readthedocs.org/projects/fair/downloads/pdf/latest/ # # # ## Basic imports # # - **Numpy** is a library built on top of C which allows you to perform rapid numerical calculations in Python and generate/manipulate arrays and matrices in a similar way to in MATLAB.
# # # - **Matplotlib** is the canonical plotting package in Python; as you might have guessed by the name, it's basically an open-source version of MATLAB's plotting functions.
# - The '%matplotlib inline' comment just allows normal plotting within the Jupyter notebooks # %matplotlib inline import numpy as np import matplotlib.pyplot as plt # + import fair # Check we're using v1.6.0c0 print(f"We're using FaIR version {fair.__version__}") # The "engine" of fair is fair_scm, stored in the fair.forward class from fair.forward import fair_scm # - # ## Workflow # # The general workflow for using the FaIR model is actually quite straightforward once you get set up: # # - Specify the emissions timeseries going into `fair_scm`.
# - Choose the settings (`useMultigas=True/False` etc).
# - If running with multiple emissions sources, then the emissions input is multidimensional, and we set `useMultigas=True`.
# - Run the model.
# - Plot!
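# In code, this whole workflow is only a few lines. The sketch below is purely illustrative (it assumes the same `fair_scm` call signature, with the `temperature_function='Geoffroy'` option, that is used in the worked example that follows):
#
# ```python
# emissions = np.zeros(100)      # a hypothetical CO2-only emissions timeseries, GtC/yr
# emissions[10] = 100.           # inject a one-year pulse a decade in
# C, F, T, *_ = fair_scm(emissions=emissions,
#                        temperature_function='Geoffroy',
#                        useMultigas=False)
# plt.plot(T)                    # temperature response to the pulse
# ```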
# # The output from FaIR is a 3-tuple of (C, F, T) arrays. In CO2-only mode, both C (representing CO2 concentrations in ppm) and F (total effective radiative forcing in W m-2) are 1D arrays. T (temperature change since the pre-industrial) is always output as a 1D array. # # Basic Example run - CO2 only # # Let's try a basic run where we only vary CO2 emissions! Aside from this, there will be no external forcing (no volcanic etc) and also no other emission sources (no other GHGs/aerosols etc). # # ### Step 1: Set up idealised emissions time-series # # To start with, let's see how our simple climate model responds to an instantaneous "pulse" of carbon / **CO$_{2}$**!
# # To do this, let's create an emissions timeseries 100 years long, and inject a 100 GtC/yr pulse of carbon a couple of years in. co2_emissions = np.zeros(100) co2_emissions[10] = 100 # GtC/yr # ### Step 2: Now, run the model! # # The outputs are: # # `C`: Concentrations
# `F`: Effective Radiative forcing
# `T`: Temp anomaly
# # Additionally, the model outputs some other parameters related to feedbacks and the ocean response (`lambda_eff`, `ohc`, `heatflux`).
# However, we don't need these, so just ignore them for now :) # # + # Now, run the model! C, F, T, _,_,_ = fair_scm( emissions=co2_emissions, temperature_function='Geoffroy', useMultigas=False ) # - # ### Step 3: Plot! # + # --- Plot the output! --- fig, axs = plt.subplots(ncols=2, nrows=2, dpi=100, figsize=(10,5)) # Create time axis time = np.arange(0, co2_emissions.size) # Emissions axs[0,0].plot(time, co2_emissions, color='black') axs[0,0].set_ylabel(r'CO$_{2}$ emissions (GtC yr$^{-1}$)') # Concentrations axs[0,1].plot(time, C, color='blue') axs[0,1].set_ylabel(r'CO$_2$ concentrations (ppm)') # Radiative Forcing axs[1,0].plot(time, F, color='orange') axs[1,0].set_ylabel(r'Radiative forcing (W m$^{-2}$)') # Temperature Anomaly axs[1,1].plot(time, T, color='red') axs[1,1].set_ylabel('Temperature anomaly (K)') fig.tight_layout() # Cleans up the labels so they don't overlap # - # ## Exercise #1 # # **1) Extend the length of the emissions timeseries and re-run the model. What fraction of the emitted CO2 remains in the atmosphere after 10, 100 and 250 years (approximately)?** # # **2) How do the shapes of the CO2 concentration and warming curves change as the duration of the pulse is increased?** # ## Exercise #2 (Optional/Homework) # # Now, let's test how sensitive the previous results are to changes in these two "effective" ocean heat capacities! # # To do this, we can add the `ocean_heat_capacity` argument to our `fair_scm` call. This has to be a 2-element array where the elements refer to the upper and deep ocean heat capacities, respectively. The default values are shown below. # # # ```python # C_new, F_new, T_new, _,_,_ = fair_scm( # emissions=co2_emissions, # temperature_function='Geoffroy', # useMultigas=False, # ocean_heat_capacity=np.array([8.2, 109.0]), # Default values in [W yr m^-2 K^-1] # ) # ``` # # **1) Copy the previous example (pulse of $\mathrm{CO}_{2}$) below and re-run with different values for the upper and deep ocean heat capacities. Can you explain your results?** # # *2) Have a think: What physical phenomena affect these timescales (fast vs slow) for the real Earth?* # #### Answers below... # # Climate Feedbacks and Uncertainty # # Many aspects of the climate system change as we increase CO$_{2}$ and raise temperature, these are called "feedbacks".
# # The basic idea is that the net radiative imbalance ($\Delta \mathrm{R}$) due to different components of the climate changes *with* global warming.
# # > Recall from Johannes' and Philip's lecture: $\Delta \mathrm{R}(t) = \mathcal{E} - \lambda \Delta \mathrm{T}_{\mathrm{s}}(t)$
# # > Therefore, for a given radiative forcing the temperature change at equilibrium (i.e. $\Delta \mathrm{R}=0$) is given by: $T_{eq} = \frac{\mathcal{E}}{\lambda}$ # # > So, a "large" $\lambda$ means the temperature response of the climate to forcing is low, whereas a "small" $\lambda$ means the climate is *very sensitive* to forcing. # # **Examples of feedbacks:**
# # - 1) **The "Planck" feedback:** As the temperature increases, the emission of infrared radiation back into space increases proportional to $\sim \, T_{earth}^{4}$. This increases the amount of outgoing radiation as the Earth warms. **--> "Positive" feedback** # # - 2) **Snow and sea-ice cover**: Intuitively, snow and ice cover decreases with warming, and this reduces the reflectivity of the Earth (the *albedo*). This then reduces the amount of radiation emitted to space. **--> "Negative" feedback** # # - 3) **Low cloud changes**: How low-level clouds respond to climate changes is *very* uncertain, but currently we expect that tropical low clouds will become slightly less reflective as the planet warms, amplifying the warming. **--> "Positive" feedback**
# # ### Results from CMIP5 and CMIP6 models # Mathematically, for an individual component of the climate system, we can write the "feedback parameter", $\lambda$, as its derivative of forcing with respect to temperature, measured in $[\text{W} \, \text{m}^{-2} \, \text{K}^{-1}]$:
# # $$\boxed{\lambda_{component} = \frac{d \Delta R_{component}}{d T_{s}}}$$
# # The latest results from the CMIP5 and CMIP6 models (Zelinka et al. 2020, below - *note the reversed sign*) suggest a total feedback parameter of $\boxed{\lambda_{global} = \sum_{i} \lambda_{i} = 1.13 \pm 0.28 \, [\text{W} \, \text{m}^{-2} \, \text{K}^{-1}]}$. # #
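# As a quick sanity check on these numbers (a rough sketch, assuming the commonly quoted value of roughly 3.7 W m$^{-2}$ for the radiative forcing from a doubling of CO$_{2}$):
#
# ```python
# F_2xCO2 = 3.7           # W m^-2, approximate forcing for doubled CO2 (assumed value)
# lambda_global = 1.13    # W m^-2 K^-1, multi-model mean quoted above
# T_eq = F_2xCO2 / lambda_global
# print(f"Equilibrium warming for 2xCO2: {T_eq:.1f} K")   # ~3.3 K, using T_eq = E / lambda
# ```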
# # *(Figure: feedback parameter estimates from Zelinka et al. 2020; image not reproduced here.)*
# # **However the uncertainty is mostly dominated by model disagreement over the shortwave cloud feedback.** # ## Question: # # #### 1) What other feedbacks can you think of? # #### 2) For a simple pulse injection of $\mathrm{CO}_{2}$, what range of temperatures can you get by changing the `lambda_global` argument? # # For example: # # ```python # C, F, T_lambda, _,_,_ = fair_scm( # emissions=co2_emissions, # temperature_function='Geoffroy', # useMultigas=False, # lambda_global = 1.13, # Default CMIP5 range: 1.13 ± 0.28 # ) # ``` # # Can you explain the results you get? # + """ Set up emissions ... """ co2_emissions = np.zeros(100) co2_emissions[10] = 100 # GtC/yr """ Run the model with different lambda_global values ... """ C3, F3, T3, _,_,_ = fair_scm( emissions=co2_emissions, temperature_function='Geoffroy', useMultigas=False, lambda_global = ... , # FILL THIS IN ! ) C4, F4, T4, _,_,_ = fair_scm( emissions=co2_emissions, temperature_function='Geoffroy', useMultigas=False, lambda_global = ... , # FILL THIS IN ! ) C5, F5, T5, _,_,_ = fair_scm( emissions=co2_emissions, temperature_function='Geoffroy', useMultigas=False, lambda_global = ... , # FILL THIS IN ! ) # + """ Plot the outputs for each of the different lambda values... """ fig, axs = plt.subplots(ncols=2, nrows=2, dpi=100, figsize=(10,5)) # Create time axis time = np.arange(0, co2_emissions.size) # Plot Emissions # Plot Concentrations # Plot Radiative Forcing # Plot Temperature Anomaly fig.tight_layout() # Cleans up the labels so they don't overlap # - # # What's next? - Aerosols and other multiple forcing scenarios # # So, now we've got a sense of how the climate system responds to a simple pulse of **CO$_{2}$**, but in reality the atmosphere is composed of multiple different gases and forcing agents. (E.g. Methane, Sulphate/Black Carbon aerosols, CFCs etc)
# # Luckily, we can include these in `FaIR` using the `useMultigas=True` option! # #### Workflow changes # # This time the emissions input is a `(nt, 40)` array, with the 40 different columns corresponding to different forcing agents. See [this page](https://fair.readthedocs.io/en/latest/examples.html#emissions) for a list of what each column corresponds to.

# # **Example: Setting up a custom emissions array** # # ```python # emissions = np.zeros((150, 40)) # # # Column 0 is the years! # emissions[:,0] = np.arange(1850, 2000) # 1850,1851,... # # # Column 1 is the CO2, in GtC/yr # emissions[:,1] = 10 # # # Add some methane to column 3, in Mt/yr # emissions[:,3] = 300. # # # aerosols, Mt/yr # emissions[:,5] = 0.1*np.arange(150) # SOx # emissions[:,9] = 6. # BC # # # And so on... # ``` # ### Example: A pulse injection of sulphur oxides ($\mathrm{SO}_{\mathrm{x}}$) # # Sulphur oxides (eg. $\mathrm{SO}_{\mathrm{2}}$) are the main precursors of **sulphate aerosol**, so what we're really doing here is injecting a short pulse of aerosol into our idealized "climate".
# # These aerosols scatter incoming solar radiation, causing a *global dimming* effect which can cool the planet and offset some of the warming caused by increased GHGs.
# # **N.B.** The `aerosol_forcing="stevens"` argument means that `FaIR` is using the *very* simple aerosol radiative forcing parameterization we introduced before.
# + """Set up emissions time series""" emissions_pulse = np.zeros((300, 40)) # Column 0 is year. emissions_pulse[:,0] = np.arange(1850,2150) # Column 5 is SOX emissions, Mt/yr # Inject 100 Mt pulses in years 100-110 emissions_pulse[:,5][100:110] = 100. """ Scale all other forcings to zero except aerosol """ scale = np.zeros(13) # all other forcing (methane etc) = 0 scale[8] = 1. # aerosol forcing = 1 """ Run model """ C_pulse,F_pulse,T_pulse,_,_,_ = fair_scm(emissions_pulse, temperature_function='Geoffroy', natural=np.array([209.2492,11.1555]), # natural emissions of CH4 and N2O aerosol_forcing="stevens", # use simple parameterization from Stevens (2015) scale=scale, # ignore all non-aerosol forcings F_volcanic=0, # turn volcanic forcing off F_solar=0, # turn solar forcing off ) # + """ --- Plot the output! --- """ fig, ax = plt.subplots(ncols=3, dpi=200, figsize=(20, 4)) # Plot ax[0].plot(emissions[:,0], emissions_pulse[:,5], color='black') ax[0].set_ylabel(r"$\mathrm{SO}_{\mathrm{x}}$ emissions [ Mt / yr ]") ax[1].plot(emissions[:,0], F_pulse[:,8], color='orange') ax[1].set_ylabel(r"Aerosol radiative forcing [W m$^{-2}$ K$^{-1}$]") ax[2].plot(emissions[:,0], T_pulse, color='red') ax[2].set_ylabel(r"Temperature change [K]") # - # What do you notice about the response of the radiative forcing and temperature changes? How is this different from the GHG response?
# 1) Timescales? # - How long does this last? # 2) Temperature change? # # - How does the size of the temperature perturbation compare with a similar magnitude GHG injection? # # Did someone say "geo-engineering"?! # ### Question: How much $\mathrm{SO}_{\mathrm{x}}$ is required to offset a pulse injection of $\mathrm{CO}_{2}$? How long does this last for?
# - **Set up another emissions timeseries like in the above example, but this time also include a 100 $\mathrm{GtC}$ $\mathrm{ yr}^{-1}$ pulse of carbon a couple of decades in.**
# Remember this line from before... # ```python # # Column 1 is CO2, GtC/yr. # emissions[:,1][20] = 100 # ``` #
# # - **In 1950, add a pulse of sulphate aerosol precursor ($\mathrm{SO}_{\mathrm{x}}$) and run the model.** # # - P.S. *Don't forget to include the scaling factor for both aerosols **and** $\mathrm{CO}_{2}$*!
# ```python # scale = np.zeros(13) # all other forcing (methane etc) = 0 # scale[0] = 1. # co2 forcing = 1 # scale[8] = 1. # aerosol forcing = 1 # ``` # + """Set up emissions time series""" emissions = np.zeros((300, 40)) # Column 0 is year. emissions[:,0] = np.arange(1850,2150) # Column 1 is CO2, GtC/yr. # ---Fill in below... # Column 5 is SOX emissions, Mt/yr # Inject a 100 Mt pulses in years 100-110 # ---Fill in below... """ Scale all other forcings to zero except aerosol """ # ---Fill in below... """ Run model """ C,F,T,_,_,_ = fair_scm( emissions, temperature_function='Geoffroy', natural=np.array([209.2492,11.1555]), # natural emissions of CH4 and N2O aerosol_forcing="stevens", scale=scale, F_volcanic=0, # turn volcanic forcing off F_solar=0, # turn solar forcing off ) # + """ --- Plot the output! --- """ fig, ax = plt.subplots(ncols=4, dpi=300, figsize=(16, 4)) # Plot CO2 concs # ... # Plot SOx emissions # ... # Plot RF # ... # Plot dT # ... fig.tight_layout() # - # ### (2) Now, instead of adding a pulse injection of $\mathrm{SO}_{\mathrm{x}}$, change it to a steady emission rate. # # ### Can you stabilize global temperatures?? # # #### Extension: How much harder is it if CO2 emissions increase with time? # For example # ```python # emissions = np.zeros((300, 40)) # # # Column 0 is year. # emissions[:,0] = np.arange(1850,2150) # # # Column 1 is CO2, GtC/yr. # emissions[:,1]=0.02*np.arange(300) # ``` # + """Set up emissions time series""" emissions = np.zeros((300, 40)) # Column 0 is year. emissions[:,0] = np.arange(1850,2150) # Column 1 is CO2, GtC/yr. # ... # Column 5 is SOX emissions, Mt/yr # ... """ Scale all other forcings to zero except aerosol """ # ... """ Run model """ C,F,T,_,_,_ = fair_scm( emissions, temperature_function='Geoffroy', natural=np.array([209.2492,11.1555]), # natural emissions of CH4 and N2O aerosol_forcing="stevens", scale=scale, F_volcanic=0, # turn volcanic forcing off F_solar=0, # turn solar forcing off ) # + """ --- Plot the output! --- """ fig, ax = plt.subplots(ncols=4, dpi=300, figsize=(16, 4)) # Plot CO2 concs # ... # Plot SOx emissions # ... # Plot RF # ... # Plot dT # ... fig.tight_layout() # - # # Understanding and simulating the historical temperature record # ## Observations # # We have a good handle on what global temperature was during the historical period (read: the 20$^{\mathrm{th}}$ century), you can see a plot of the global, annual-mean temperature below. The warming is clear to see, and should make it all the stranger that climate deniers still manage to exist. #Read in observations, and remove 1861-80 climatology Y_obs, T_obs = np.genfromtxt("./Data/HadCRUT.4.5.0.0.annual_ns_avg.txt")[:,0], np.genfromtxt("./Data/HadCRUT.4.5.0.0.annual_ns_avg.txt")[:,1] i_obs_clm=(1861<=Y_obs) & (Y_obs<=1880) T_obs=T_obs-np.mean(T_obs[i_obs_clm]) fig, ax = plt.subplots(dpi=100) ax.scatter(Y_obs, T_obs, s=10) ax.set_ylabel("Temperature anomaly [K]") # #### Now, let's try and simulate this historical temperature record using `FaIR` and see how well it does! # # #### To do that, we'll need to make use of the "Representative Concentration Pathways" # ### 3.1 - The Representative Concentration Pathways (RCPs) # # For more realistic multiGas scenarios, setting up a 40-column emissions timeseries every time you want to run `FaIR` is a lot of work!
# # Lucky for us though, `FaIR` already has some "IPCC-approved" multi-species scenarios coded up! These are called the *Representative Concentration Pathways* (RCPs).
# # **These RCPs contain all the same data on emissions/forcings over the 20$^{\mathrm{th}}$ century, and then extrapolate into the future under various different socio-economic scenarios.**
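# For instance, the historical-plus-scenario runs can be compared in a few lines. This is only a sketch; it assumes the other scenario modules (`rcp26`, `rcp45`, `rcp60`) ship with FaIR 1.6 under the same naming convention as the `rcp85` module imported below:
#
# ```python
# from fair.RCPs import rcp26, rcp45, rcp60, rcp85
# for rcp in (rcp26, rcp45, rcp60, rcp85):
#     C, F, T, *_ = fair_scm(emissions=rcp.Emissions.emissions,
#                            temperature_function='Geoffroy',
#                            aerosol_forcing="stevens")
#     plt.plot(rcp.Emissions.year, T, label=rcp.__name__.split('.')[-1])
# plt.legend()
# ```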
# Get RCP modules from fair.RCPs import rcp85 # + """ Run model """ C,F,T_fair,_,_,_ = fair_scm(rcp85.Emissions.emissions, temperature_function='Geoffroy', aerosol_forcing="stevens", ) # Remove 1861-80 climatology T_fair_anom = T_fair - np.mean(T_fair[1861-1765:1880-1765]) # Just select years from 1850-2017 T_fair_anom = T_fair_anom[1850-1765:2018-1765] # + """ Compare observations and model-predictions """ fig, ax = plt.subplots(dpi=100) ax.scatter(Y_obs, T_obs, s=10) # Obs ax.plot(Y_obs, T_fair_anom, 'orange') # FaIR ax.set_ylabel("Temperature anomaly [K]") # Calculate standard deviation between model and obs error = round(np.std(T_obs - T_fair_anom), 2) props = dict(boxstyle='round', facecolor='wheat', alpha=0.5) ax.text(0.1, 0.7, f"Error = {error} K", transform=ax.transAxes, bbox=props) # - # ## "What's going on?": A simple decomposition # # At the moment, this is a little bit of a black box! But thankfully, we can **decompose the changes** in forcing and temperature into contributions from: # - Greenhouse gases
# - Aerosols # - Natural variability (volcanos etc) # Decomposing the forcing and temperature changes like this means we can get a clearer picture of what's happening! # + """ Set model parameters """ lambda_global = 1.13 # Default = 1.13 [W m^-2 K^-1] ocean_heat_capacity=np.array([8.2, 109.0]) # Defaults = 8.2, 109.0 [W yr m^-2 K^-1] """Set scalings for the anthropogenic GHG and aerosol and natural forcings (defaults=1.0)""" scale_ghg = 1.0 scale_aer = 1.0 scale_nat = 1.0 """ Run model """ _,F_output,_,_,_,_ = fair_scm(rcp85.Emissions.emissions, temperature_function='Geoffroy', aerosol_forcing="stevens", lambda_global = lambda_global, ) # Subset to 1850-2017 F_output = F_output[1850-1765:2018-1765, :] """ Decompose forcings into GHGs, aerosols and natural forcing """ F_ghg = F_output[:,0] +F_output[:,1]+F_output[:,2]+F_output[:,3]+F_output[:,7] F_aer = F_output[:,6]+F_output[:,8]+F_output[:,9] # F_output[:,4]+F_output[:,5] F_nat = F_output[:,10]+F_output[:,11]+F_output[:,12] F_tot = F_ghg+F_aer+F_nat """ Re-run model to get temperature response due to decomposed forcings """ _,_,T_ghg,_,_,_ = fair_scm(emissions=False, other_rf=F_ghg, temperature_function='Geoffroy', aerosol_forcing="stevens", useMultigas=False, lambda_global = lambda_global, scale=scale_ghg ) _,_,T_aer,_,_,_ = fair_scm(emissions=False, other_rf=F_aer, temperature_function='Geoffroy', aerosol_forcing="stevens", useMultigas=False, lambda_global = lambda_global, scale=scale_aer ) _,_,T_nat,_,_,_ = fair_scm(emissions=False, other_rf=F_nat, temperature_function='Geoffroy', aerosol_forcing="stevens", useMultigas=False, lambda_global = lambda_global, scale=scale_nat ) T_tot = T_ghg + T_aer + T_nat # + """ Plot the output! """ fig, ax = plt.subplots(nrows=2, dpi=100, figsize = (8,8)) ax[0].plot(Y_obs, F_tot, color='red', label='Total') ax[0].plot(Y_obs, F_ghg, color='magenta', label='GHGs') ax[0].plot(Y_obs, F_aer, color='blue', label='Aerosols') ax[0].plot(Y_obs, F_ghg+F_aer, color='orange', label='Total anthro.') ax[0].plot(Y_obs, F_nat, color='green', label='Natural') ax[0].set_ylim(-1.5, 3.5) ax[0].set_ylabel(r"Forcing [W m$^{-2}$ K$^{-1}$]") ax[0].legend() ax[1].scatter(Y_obs, T_obs, s=10, color='black', label='Obs') ax[1].plot(Y_obs, T_tot, color='red', label='Total') ax[1].plot(Y_obs, T_ghg, color='magenta', label='GHGs') ax[1].plot(Y_obs, T_aer, color='blue', label='Aerosols') ax[1].plot(Y_obs, T_ghg+T_aer, color='orange', label='Total anthro.') ax[1].plot(Y_obs, T_nat, color='green', label='Natural') # Calculate standard deviation between model and obs error = round(np.std(T_obs - T_tot), 2) props = dict(boxstyle='round', facecolor='wheat', alpha=0.5) ax[1].text(0.3, 0.85, f"Standard deviation = {error} K", transform=ax[1].transAxes, bbox=props) ax[1].set_ylabel(r"Temperature anomaly [K]") ax[1].legend() # - # ## Question: So, what happened? # **1) Explain the main features of the GHG, aerosol and natural radiative forcing curves.** # **2) Vary the scalings of the GHG, aerosol and natural radiative forcings – how does each affect the simulated temperatures?** # **3) Set the scalings back to 1 and instead vary the `lambda_global` parameter and re-run the model. What's the highest/lowest value you can get while still matching the historical record?** # **4) Extension: It's often suggested that climate models with a high climate sensitivity (small $\lambda$) must *also* have a strong aerosol forcing in order to match the historical record, do you agree?**
# # To explore this question, vary the `lambda_global` and the `scale_aer` parameters simultaneously to simulate different (sensitivity, F$_{\mathrm{aer}}$) combinations. How does the model error change as a function of these two parameters? (P.S. Try a contour plot!) # # You're also given that $\lambda=1.13\pm0.28 \, [\mathrm{W} \, \mathrm{m}^{-2} \, \mathrm{K}^{-1}]$ and `scale_aer`$\approx 1\pm0.3$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Classify seismic receiver functions using Decision Trees # In this lab exercise, you are going to classify the seismic receiver functions using Decision Trees, and you will implement Decision Trees using Scikit-Learn. Note that you used the same data set for your lab exercise on logistic regression.
# # More specifically, your task is to classify the P-wave receiver functions, which were computed based on the recorded seismic data, into two categories: good and bad. The entire data set consists of 12,597 receiver functions (i.e., seismic traces), each of which was visually examined and manually labeled as either good or bad by one of Prof. 's PhD students, , in the Department of Earth and Atmospheric Sciences at University of Houston. The good seismic traces are labeled (or encoded) as 1, and bad seismic traces are encoded as 0.
# # After finishing this exercise, you can expect to
# 1. be able to implement Decision Trees using Scikit-Learn;
# 2. have a better understanding of how Decision Trees perform classification;
# 3. better understand the regularization role played by the hyperparameter, **max_depth**;
# 4. be able to diagnose overfitting vs. underfitting by constructing the error curves.
# #
# Author: @ University of Houston, 02/21/2019 # ## 1. Import data # Let us first import our labled data from Traces_qc.mat. import numpy as np import h5py with h5py.File("../Traces_qc.mat") as f: ampdata = [f[element[0]][:] for element in f["Data"]["amps"]] flag = [f[element[0]][:] for element in f["Data"]["Flags"]] ntr = [f[element[0]][:] for element in f["Data"]["ntr"]] time = [f[element[0]][:] for element in f["Data"]["time"]] staname = [f[element[0]][:] for element in f["Data"]["staname"]] ampall = np.zeros((1,651)) flagall = np.zeros(1) for i in np.arange(201): ampall = np.vstack((ampall, ampdata[i])) flagall = np.vstack((flagall, flag[i])) amp_data = np.delete(ampall, 0, 0) flag_data = np.delete(flagall, 0, 0) # The **amp_data** stores the seismic amplitudes from all seismic stations. The **flag_data** contains the labels for each seismic traces. These labels are encoded as 1s and 0s, with 0 representing bad seismic traces, and 1 corresponding good seismic traces. amp_data.shape # total number of bad traces np.where(flag_data == 0)[0].shape # total number of good traces np.nonzero(flag_data)[0].shape # There are 12,597 seismic traces, and each trace contains 651 values. Using machine learning terminology, we have 12,597 data examples (or instances) and each of our data example has 651 features. # ## 2. Visualization # Visualiation is a good way to get to know your data better. Let us first take a look at a few seismic traces that were considered as good by our human expert. # + import matplotlib.pyplot as plt goodtraceindex = np.nonzero(flag_data)[0].reshape(-1,1) fig, axs = plt.subplots(1,5, figsize=(15, 6), facecolor='w', edgecolor='k') fig.subplots_adjust(hspace = .5, wspace=.001) fig.suptitle('a few good traces', fontsize=20) axs = axs.ravel() ic = 0 for icount in goodtraceindex[5:10,0]: axs[ic].plot(amp_data[icount,:], time[0]) axs[ic].invert_yaxis() axs[ic].set_xlabel('amplitude') axs[ic].set_ylabel('time') ic = ic + 1 # tight_layout() will also adjust spacing between subplots to minimize the overlaps plt.tight_layout() plt.show() # - # Let us also take a look at those 'bad' seismic traces. In practice, these bad seismic traces will be discarded, and excluded from any further analysis. # + badtraceindex = np.where(flag_data == 0)[0].reshape(-1,1) fig, axs = plt.subplots(1,5, figsize=(15, 6), facecolor='w', edgecolor='k') fig.subplots_adjust(hspace = .5, wspace=.001) fig.suptitle('a few bad traces', fontsize=20) axs = axs.ravel() ic = 0 for icount in badtraceindex[10:15,0]: axs[ic].plot(amp_data[icount,:], time[0]) axs[ic].invert_yaxis() axs[ic].set_xlabel('amplitude') axs[ic].set_ylabel('time') ic = ic + 1 # tight_layout() will also adjust spacing between subplots to minimize the overlaps plt.tight_layout() plt.show() # - # The 'good' seismic traces have several characteristic features. First, We expect to see a clearly defined peak at time 0, which corresponds to when the incident P wave converted to S wave. We also expect to see a second peak around 4 or 5 seconds, which corresponds to the Ps conversions from Moho. The amplitude for the peak at time 0 should be clearly higher than that for the second peak. # ## 3. Preprocessing data # For most ML application, We will perform the standard preprocessing by removing the mean and scaling to unit variance, just as what you did for your lab exercises on Logistic Regression and Support Vector Machines. The reason for it is that different features in our data might be vastly different scales. 
This will cause a lot of problems for minimization. The standardization (or scaling) helps alleviate this problem. In fact, it helps speed up the convergence of gradient descent and shorten the training time. It also helps improve SVM results. See more at [this Wikipedia page](https://en.wikipedia.org/wiki/Feature_scaling), [this Stack Exchange webpage](https://stats.stackexchange.com/questions/41704/how-and-why-do-normalization-and-feature-scaling-work) and the [official Scikit-Learn webpage on standardization](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
# However, for Decision Trees (and also the Random Forests that build upon Decisiont Trees), at each node, we simply ask a question about one single feature, and then split the data sets at the threshold value. It turns out that, features at different scales do not affect the splitting and thefore, the growing of a decision tree. Therefore, the standardization procedure is usually skipped for Decision Trees. # But we still need to randomly permute our data, just as what you did in your lab exercise on logistic regression. The reason for doing this is to avoid the situation where your training data are ordered in some specific way. For example, suppose we have 10,000 seismic traces, and somebody put all the 8,000 good seismic traces together, followed by all the 2,000 bad traces. Also suppose that we decide to split our data into training and test sets with the proportion of 4:1. If we do not randomly permute our training data set, then our training data will be all the good seismic traces, and our validation or test data set will be all the bad ones. This is very dangerous because your machine lerning algorithm will not have any chance of learning from the bad seismic traces at the training stage, and you can expect that no matter how you train a machine learning model, it will not predict well on the validation/test data. It is like a child is exposed to only cat images, but you ask him or her to tell if an image contains a car. Randomly permuting the data will ensure that the training set contains data from every category (good and bad), and validation/test set also contains data from all categories. np.random.seed(42) whole_data = np.append(amp_data,flag_data,1) # put all the seismic traces and their lables into one matrix which contain the whole data set for subsequent machine learing. # **Task 1:** Randomly permute the data stored in the variable **whole_data** using *np.random.permutation*, and store the permuted data in a new variable **whole_data_permute**. **(5 points)**
#
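# One possible way to do this (a minimal sketch using the `whole_data` array created above; see also the hint below):
#
# ```python
# whole_data_permute = np.random.permutation(whole_data)   # shuffles the rows
# ```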
# **HINT**: If you forget how to do it, please refer back to your lab exercise on logistic regression. Please note that in your lab exercise on logistic regression, you used **training_data_permute** as the name of the variable storing the permuted data, but here you are supposed to use **whole_data_permute** as the variable name. # Answer to Task 1 # ## 4. Split data into training and cross-validation sets # We are going to use the first 10,000 seismic traces as our training data set, and the remaining 2,597 traces as the validation data set. # **Task 2:** Create the training data set by assigning the first **10,000** data examples and their corresponding labels in **whole_data_permute** to new variables, **X_train** and **y_train**. **(5 points)**
#
# **HINT**: If you forget how to do it, please refer back to your lab exercise on logistic regression. # Answer to Task 2 # **Task 3:** Create the validation data set by assigning the remaining data examples and their corresponding labels in **whole_data_permute** to new variables, **X_validation** and **y_validation**. **(5 points)**
#
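# A minimal sketch covering both Task 2 and Task 3 (it assumes `whole_data_permute` from Task 1, with the label stored in the last column, as set up by `np.append(amp_data, flag_data, 1)` above):
#
# ```python
# X_train, y_train = whole_data_permute[:10000, :-1], whole_data_permute[:10000, -1]
# X_validation, y_validation = whole_data_permute[10000:, :-1], whole_data_permute[10000:, -1]
# ```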
# **HINT**: If you forget how to do it, please refer back to your lab exercise on logistic regression. # Answer to Task 3 # ## 5. Set up Decision Tree classifier # **Task 4:** Import *DecisionTreeClassifier* from Scikit-Learn. **(5 points)**
#
# **HINT**: If you forget how to do it, please refer back to our lecture slides, or the accompanying example notebook *DecisionTrees_example.ipynb*. # Answer to Task 4 # **Task 5:** Set up your DecisionTreeClassifier by setting *max_depth = 2*, and *random_state = 42*, and assign this classifier to a new variable **tree_clf_2** (DO NOT forget the last digit 2 in the variable name) **(5 points)**
#
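# For Tasks 4 and 5, a minimal sketch might look like this (see also the hint below):
#
# ```python
# from sklearn.tree import DecisionTreeClassifier
# tree_clf_2 = DecisionTreeClassifier(max_depth=2, random_state=42)
# ```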
# **HINT**: If you forget how to do it, please refer back to our lecture slides, or the accompanying example notebook *DecisionTrees_example.ipynb*. # Answer to Task 5 # ## 6. Train a decision tree # **Task 6:** Train a decision tree using the **training** data set, **X_train** and **y_train**, and the classifier, **tree_clf_2**, you set up above. **(5 points)**
#
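# A one-line sketch of the fitting step (assuming `tree_clf_2`, `X_train` and `y_train` from the previous tasks):
#
# ```python
# tree_clf_2.fit(X_train, y_train)
# ```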
# **HINT**: If you forget how to do it, please refer back to our lecture slides, or the accompanying example notebook *DecisionTrees_example.ipynb*. # Answer to Task 6 # ## 7. Evaluation # After we have successfully trained a decision tree, we need to evaluate it and determine if it is a good or bad one. This is also what happens for any practical ML project. That is, after you get your first-pass results (i.e., obtain your first ML model through learning), you ought to evaluate it in order to determine what to do next. Usually, the first-pass results are far from being satisfactory due to either under-fitting or over-fitting. Evaluation is important because it informs you what problem you have run into and provides clues for improvements. # **Task 7:** Make predictions on the validation data set, and assign the predictions to a new variable, **y_pred**. **(10 points)**
#
# **HINT**: Suppose the name of your classifier is *my_tree_classifier*, and the name of your cross-validation data (excluding the labels) is *my_validation*, then you can make prediction by the following line of code:
#
#        *my_tree_classifier.predict(my_validation)*
#
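# With the names used in this notebook, that line becomes (a sketch, assuming `tree_clf_2` and `X_validation` from the earlier tasks):
#
# ```python
# y_pred = tree_clf_2.predict(X_validation)
# ```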
# Do not forget to assign the predictions to a new variable **y_pred**. # Answer to Task 7 # Now, let us calculate the precision and recall values by comparing our predictions, *y_pred*, with the true labels, *y_validation*. We are going to use the *classification_report* from Scikit-Learn for this purpose.
#
# To learn more about classification_report and see examples, please refer to this [Scikit-Learn webpage](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html). from sklearn.metrics import classification_report print(classification_report(y_validation, y_pred, target_names=['bad','good'])) # Your classification results should look very similar (ideally, identical) to the following:
# # **Task 8:** Please explain what the precision value, 0.92, and recall value, 0.79, mean for the category of 'bad' seismic traces. Similarly, explain what the precision value, 0.52, and recall value, 0.76, mean for the category of 'good' seismic traces. **(10 points)**
#
# **HINT**: Please refer to your last lab exercise on SVM for a better understanding of what precision and recall mean.
#
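# As a quick reminder, these are the standard definitions (not specific to this lab): for a given class treated as "positive",
#
# $$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN},$$
#
# where $TP$, $FP$ and $FN$ count the true positives, false positives and false negatives for that class.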
# (answer to Task 8:) # # Let us calculate the **error** of the predictions on the training data. 1- tree_clf_2.score(X_train, y_train) # Recall that you used .score in your lab exercise on logistic regression. # The expeted output is:      0.21760000000000002 # Let us also calculate the **error** of the predictions on the validation data set. 1 - tree_clf_2.score(X_validation,y_validation) # The expeted output is:      0.21948402002310363 # Now, let us calculate the importance of all the features, and take a look at the top 10 most important features. These are the features that Decision Trees used when deciding how to split the data set and growing the tree. # + # The following code is based on a modification of the codes in this webpage # http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html importances = tree_clf_2.feature_importances_ indices = np.argsort(importances)[::-1] # Print the feature ranking print("Feature ranking:") for f in range(10): print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]])) # Plot the feature importances of the forest plt.figure() plt.title("Feature importances") plt.bar(range(10), importances[indices][:10]) plt.xticks(range(10), indices) plt.xlim([-1, 10]) plt.show() # - # You can also export the actual decision tree that was trained. # + import graphviz from sklearn.tree import export_graphviz dot_data = export_graphviz(tree_clf_2, out_file=None, feature_names=None, class_names=['bad','good'], filled=True, rounded=True, special_characters=True) graph = graphviz.Source(dot_data) graph # - # Your decision tree should look like the following tree:
# # To learn more about how to explore a decision tree in Scikit-Learn, please visit [this webpage](https://stackoverflow.com/questions/32506951/how-to-explore-a-decision-tree-built-using-scikit-learn) # **Task 9:** Explain what each number means in the root node (i.e., the node at the very top). **(10 points)**
#
# **HINT**: Please refer to our lecture slides for what each number represents.
(answer to Task 9:) # ## 8. Construct error curves # Looking at the above training and cross-validation errors, it is not straightforward to tell whether we had an overfitting or underfitting problem (or if we have already had the best possible classifier). In order to diagnose our problem, it is helpful to construct the error curves such as the one shown below. Please refer to our lecture slides **Week3_Concept.pdf**, if you need to refresh your memory of the error curves.
# # # # # The decision tree that you trained above is based on *max_depth* = 2. Remember that this hyperparameter, **max_depth**, plays the role of regularization. The larger this value is, the deeper the tree will be, and the more capable the decision tree will be of fitting our training data. If we set *max_depth* to unlimited, then the tree will grow unlimitedly in order to perfectly fit our training data (i.e., overfitting).
# # Now, let us construct the error curves by increasing *max_depth* from 3 to 12 while keeping everything else the same as the previous tree. # + train_errors = np.zeros(11) validation_errors = np.zeros(11) train_errors[0] = 0.218 validation_errors[0] = 0.219 # - # **Task 10:** Train **10** decision trees by assuming **max_depth** = 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. Name each one of your decision trees as *tree_clf_number* where *number* is the value of the *max_depth* for that tree. For example, if *max_depth* = 3, then you should name your decision tree as *tree_clf_3*.
#
# Calculate the prediction errors on both the training and validation data sets for each decision tree, and store them in the variables *train_errors* and *validation_errors* that I have already created for you in the above cell. **(30 points)**
#
# **HINT**: You did something very similar in your lab exercise on logistic regression (by increasing the size of the training data).
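# One possible shape for this loop is sketched below. It assumes `DecisionTreeClassifier` was imported in Task 4, and it uses a throwaway variable name instead of the individual `tree_clf_3` ... `tree_clf_12` names the task asks for, so adapt it accordingly:
#
# ```python
# for i, depth in enumerate(range(3, 13)):
#     clf = DecisionTreeClassifier(max_depth=depth, random_state=42)
#     clf.fit(X_train, y_train)
#     train_errors[i + 1] = 1 - clf.score(X_train, y_train)
#     validation_errors[i + 1] = 1 - clf.score(X_validation, y_validation)
# ```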
# Answer to Task 10 # Now, let us plot up the error curves. import matplotlib.pyplot as plt max_depth = np.arange(2,13) plt.plot(max_depth,train_errors,'-ro',label="training errors") plt.plot(max_depth,validation_errors,'-bo',label="validation errors") plt.title('error curves',fontsize=20) plt.legend(loc="lower left", fontsize=16) plt.xlabel("Max_depth", fontsize=20) plt.ylabel("Prediction errors", fontsize=20, rotation=90) plt.show() # Your error curves should look very similar (ideally, identical) to those in the following picture.
# # # Great! We see roughly the same trend in the above error curves, which were constructed using real seismic data, as what we saw in the theoretical error curves. min(validation_errors) # **BONUS:** Explain the behavior of the error curves. Also, please write down what else you can learn from the error curves. **(10 points)**
#
# **HINT**: For example, when *max_depth* = 2, without the above error curves, if we only look at the training and validation errors, 0.218 and 0.219, it is not straightforward to tell if we overfit or underfit the data. But with the help of the error curves, we now can confidently say that we underfit our data (that is, some of the important features in our data were not captured by the decision tree) when *max_depth* = 2.
(answer to Bonus:) # ## 9. Applications of Decision Trees to geoscience # **Task 11:** Do a literature search and look for at least one example where Decision Tree is used to solve some geoscience-related problems. Then, report the source of the information (e.g., URL, DOI, etc.), and summarize the example using a few sentences. **(10 points)** # (answer to Task 11:) # # ## 10. Acknowledgments # I would like to thank for manually labeling all the seismic traces, and Prof. for making this data set available to the students in this class. Ms. Zhang also kindly explained the fundamentals of seismic P-wave receiver functions to me.
# # # ## 11. Useful links # In addition to the links I provided above, you might find the materials in the following webpages useful. # 1. [Feature importances](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) # 2. [Plot bar chart](https://pythonspot.com/matplotlib-bar-chart/) # 3. [DecisionTreeClassifier on Scikit-Learn](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Solving Sudoku # + from sudoku import solve_sudoku, draw_sudoku def test_sudoku(constraints): status, solution = solve_sudoku(constraints) return "PASS" if status == 'Optimal' else "FAIL" # - # ## Constraints # Tuples of (x, y, number) constraints = [(1,1,9), (1,2,3), (8,3,7), (2,2,9)] print(draw_sudoku(constraints)) # ## The Delta Debugging Algorithm # + from ddebug import get_partitions from covenant import pre, post @pre(lambda data, test, granularity: test(data)=='FAIL') @post(lambda result, data, test, granularity: test(result)=='FAIL') def delta_debug(data, test, granularity=2): print('\n"{2}", granularity={1}'.format(len(data), granularity, data)) for subset in get_partitions(data, granularity): result = test(subset) print('"{}" -> {}'.format(subset, result)) if result == 'FAIL': if len(subset) > 1: return delta_debug(subset, test, granularity) else: return subset # minimal failing subset if granularity < len(data): return delta_debug(data, test, granularity + 1) return data # - # ## Simple string example # + def no_rats(s): return "FAIL" if 'rat' in s else "PASS" text = "ten wooden crates" delta_debug(text, no_rats) # - # ## Solving a Sudoku constraints = [(1,1,9), (1,2,3), (8,3,7), (2,2,9)] minimal = delta_debug(constraints, test_sudoku) print(draw_sudoku(minimal)) # ## More complicated Sudoku example constraints = [(1,1,1), (2,1,2), (3,1,3), (4,2,1), (5,2,2), (6,2,3), (7,3,7), (8,3,8), (9,3,9)] print(draw_sudoku(constraints)) minimal = delta_debug(constraints, test_sudoku) print(draw_sudoku(minimal)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction To TimeSeriesDataFrames # # The core data structure of the Azure ML Package for Forecasting is the `TimeSeriesDataFrame`. In this notebook, you will learn about this class and it's children the `ForecastDataFrame` and the `MultiForecastDataFrame`. However, we expect that if you are using this notebook you are already familiar with the `TimeSeriesDataFrame`'s parent class, the `pandas.DataFrame`. If you are not already familiar with `pandas.DataFrame`s and in particular `pandas.DataFrame`s with a `MultiIndex`, please check out our notebook titled '*Data wrangling with "Azure ML Package for Forecasting" data structures*'. # # Let's start by importing our data structures. And then we can go over each class and see how they build on eachother. 
import warnings
# Suppress warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
from ftk import TimeSeriesDataFrame, ForecastDataFrame, MultiForecastDataFrame
from ftk.data import load_dominicks_oj_dataset
from ftk.models import Naive, SeasonalNaive, ForecasterUnion

# ## `TimeSeriesDataFrame`
#
# The `TimeSeriesDataFrame` is a `pandas.DataFrame` with additional metadata, properties, and functions to provide time-series-specific functionality. It uses [hierarchical indexing through MultiIndexes](https://pandas.pydata.org/pandas-docs/stable/advanced.html), where more than one column can be used as an index, to provide some of this functionality. For this notebook we will be working with a dataset of orange juice sales from a number of stores in the greater Chicago area. We will begin by reading it in.

oj_data, oj_test = load_dominicks_oj_dataset()
oj_data.head()

# ### Metadata
#
# Let's start by looking at our orange juice dataset and see what metadata fields it has.

print("{")
for m in oj_data._metadata:
    print(" {0}: {1}".format(m, getattr(oj_data, m)))
print("}")

# We see our `TimeSeriesDataFrame` has 5 metadata fields:
# * time_colname
# * ts_value_colname
# * grain_colnames
# * group_colnames
# * origin_time_colname
#
# ### `time_colname`
#
# Since the definition of a time series is "a series of values of a quantity obtained at successive times", it seems reasonable that the first metadata field we need to look at is the `time_colname`. This is the only required metadata field; you can't have a time series without time, after all.
#
# * `time_colname`: The name of the column storing the time point for the row of data. The values stored in this column should be of a datetime type.
#
# Let's try an example. We will make a set of 4 months starting in January of 2017 and create a value of our time series starting at 0 and increasing by one each month. We will compare the results of turning the data into a `pandas.DataFrame` and a `TimeSeriesDataFrame`.

# Make example data
my_data = {'date': pd.date_range('2017-01-01', periods=4, freq = 'MS'),
           'value': list(range(0,4))}

# Create pandas.DataFrame
my_pddf = pd.DataFrame(my_data)
my_pddf

# Create TimeSeriesDataFrame
my_tsdf = TimeSeriesDataFrame(my_data, time_colname='date')
my_tsdf

# We see that the `TimeSeriesDataFrame` moved the `time_colname` to the index. This provides a lot of [time related functionality](https://pandas.pydata.org/pandas-docs/stable/timeseries.html) available in `pandas`, such as getting windows of your data.
#
# `TimeSeriesDataFrame`s have additional checks to ensure that the time series data are properly formatted, such as making sure that each time point only occurs once in the data.
#
# **Exercise**: Compare the results of creating a data frame with a duplicated time index value in `pandas` vs in `ftk`. Why would this check be important when making time series?

dup_data = {'date': [pd.to_datetime('2017-01-01')]*4,
            'value': list(range(0,4))}
pd.DataFrame({'value':dup_data['value']}, index = dup_data['date'])

### Uncomment below to run ##
# TimeSeriesDataFrame(dup_data, time_colname='date')
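# A plain-`pandas` illustration for the exercise above (the `dup_df` name is just for this sketch, and it reuses the `pd` import from the top of the notebook): `pandas` happily accepts duplicated index labels, so a label-based lookup can return several rows, and a single time stamp no longer identifies a single observation. Given the check described above, the commented-out `TimeSeriesDataFrame` call is expected to complain about such data instead of silently accepting it.

dup_df = pd.DataFrame({'value': list(range(0, 4))},
                      index=[pd.to_datetime('2017-01-01')] * 4)
# All four rows share the same time stamp, so this lookup returns 4 rows, not 1
dup_df.loc['2017-01-01']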
# ### `ts_value_colname`
#
# The next field we will look at is the `ts_value_colname`.
#
# * `ts_value_colname`: Name of the column containing the data we want to forecast, kind of like our target in machine learning.
#
# This is important because in data like the orange juice dataset we may have many additional columns that help provide features for the forecast, and we need to be able to distinguish our target column from these features. Once we have designated a `ts_value_colname` we can start fitting models. Let's make `value` our `ts_value_colname` and fit a simple `SeasonalNaive` model.

my_tsdf_value = TimeSeriesDataFrame(my_data, time_colname='date', ts_value_colname='value')
snaive_model = SeasonalNaive(freq = 'MS', seasonality=7)
snaive_model.fit(my_tsdf_value)

# **Exercise**: Try running the cell above with `my_tsdf` instead of `my_tsdf_value`. Try the same thing with `oj_data`. Which column of `oj_data` did it build the `Naive` model to forecast?
#
# ### `grain_colnames`
#
# Our next metadata field is the `grain_colnames`. This is the first metadata field we will look at where you can specify more than one column.
#
# * `grain_colnames`: Name of the column or columns that specify unique time series.
#
# A `TimeSeriesDataFrame` can hold more than one time series at a time; for example, `oj_data` contains 249 different time series: 83 different stores, and each store has 3 different brands of orange juice.
#
# Let's try building our own `TimeSeriesDataFrame` with grains.

grain_data = {'date': list(pd.date_range('2017-01-01', periods=4, freq = 'MS'))*2,
              'grain': ['A']*4 + ['B']*4,
              'value': list(range(0,8))}
my_tsdf_grain = TimeSeriesDataFrame(grain_data, time_colname='date',
                                    ts_value_colname='value',
                                    grain_colnames = 'grain')
my_tsdf_grain

# Like the `time_colname`, the `grain_colname` gets moved to the index so it can help identify rows of our time series.
#
# **Exercise**: What happens in the cell above if we don't specify the `grain_colnames` parameter? Why? For a hint, try looking at the exercise for `time_colname`.
#
# ### `group_colnames`
#
# `group_colnames` are similar to the idea of grain, but are related to modeling rather than identifying a single time series.
#
# * `group_colnames`: designate columns whose unique values specify rows of the data that would benefit from being modeled together. `RegressionForecaster` will create one model for each unique `group`.
#
# By default `group_colnames` is the same as `grain_colnames`.
#
# **Exercise**: Make it so all the grains of our `grain_data` will be modeled together by setting `group_colnames` to `None`.

print(my_tsdf_grain.group_colnames)
my_tsdf_no_group = TimeSeriesDataFrame(grain_data, time_colname='date',
                                       ts_value_colname='value',
                                       # group_colnames = ???,
                                       grain_colnames = 'grain')
print(my_tsdf_no_group.group_colnames)

# **Exercise**: `oj_data` currently has the default setting where `grain` and `group` are the same. Overwrite `group` to just be `brand`.

oj_group_brand = oj_data.copy()
print('oj_group_brand.grain_colnames = {0}'.format(oj_group_brand.grain_colnames))
print('oj_group_brand.group_colnames = {0}'.format(oj_group_brand.group_colnames))
# oj_group_brand.group_colnames = ???
print('oj_group_brand.grain_colnames = {0}'.format(oj_group_brand.grain_colnames))
print('oj_group_brand.group_colnames = {0}'.format(oj_group_brand.group_colnames))

# ### origin_time_colname
#
# You can successfully use `ftk` from start to finish without ever directly interacting with the conceptually dense field of `origin_time`. In the majority of cases, the users will not specify this column; instead, it will be created for them by one of the featurizers.
# A full discussion of `origin_time` will be left for the notebook '*Constructing Lags and Explaining Origin Times*'. In brief, `origin_time` accounts for the fact that we have different knowledge and expectations of a forecast based on when that forecast is made, giving us different features and different forecasts. For example, when checking the weather forecast you probably have a higher expectation for the accuracy of tomorrow's forecast compared to a forecast you got 10 days earlier. That is because meteorologists know more about the climatological conditions the closer we get to the time we are forecasting. If you have features that you are inputting into the `TimeSeriesDataFrame` constructor that you learn more about the closer you get to the time you are forecasting, then you will need to specify an `origin_time_colname`. Otherwise, sit back and let the `ftk` featurizers worry about it for you.
#
# * `origin_time_colname`: The name of the column that tells you when you *collected* the features on that row of data. In other words, if this row is for a 10 day forecast, the `origin_time` will be 10 days before the `time_index`.
#
# There is one special aspect of `origin_time` we should mention: the `ts_value` has to be the same across all `origin_time`s. In other words, if we are making a 10 day forecast vs a 1 day forecast for a particular date, the features we use to make that forecast may change, but the actual value of the series (our target) will be the same. The true temperature on June 23rd will not change just because we are trying to make the forecast on June 13th instead of June 22nd.

origin_data = {'date': list(pd.date_range('2017-03-01', periods=4, freq = 'MS'))*2,
               'origin': [pd.to_datetime('2017-01-01')]*4 + [pd.to_datetime('2017-02-01')]*4,
               'value': list(range(0,4))*2}
my_tsdf_origin = TimeSeriesDataFrame(origin_data, time_colname='date',
                                     ts_value_colname='value',
                                     origin_time_colname = 'origin')
my_tsdf_origin

# Like `time_index` and `grain_colnames`, `origin_time` is part of the index for the dataset.
#
# **Exercise**: What happens if we make the `value` column of `origin_data` `list(range(0,8))` instead? Why does this happen?
#
# `TimeSeriesDataFrame`s have an additional property, `horizon`, which tells you the time difference between the `time_index` and the `origin_time` for each row of data. We can access the `horizon` as follows:

my_tsdf_origin.horizon

# ## `ForecastDataFrame`
#
# While `TimeSeriesDataFrame`s store the inputs to forecasting models, `ForecastDataFrame`s hold the outputs. To get an example of a `ForecastDataFrame`, let's make a `SeasonalNaive` model and forecast our `oj_data`.

snaive_model = SeasonalNaive(freq = oj_data.infer_freq(), seasonality=7)
snaive_model.fit(oj_data)
oj_fcast = snaive_model.predict(oj_test)
oj_fcast.head()

# As we did with our `TimeSeriesDataFrame`, let's take a look at our `ForecastDataFrame` and see what new metadata it has.

print("{")
for m in oj_fcast._metadata:
    print(" {0}: {1}".format(m, getattr(oj_fcast, m)))
print("}")

# The first thing we note is that even though `oj_data` didn't have an `origin_time_colname` specified, the `ForecastDataFrame` does. This is because forecasts are always relative to when you are making them. If the time at which you made a forecast did not affect its value, you could think of it as just a standard regression. Next we notice we have three new fields:
#
# * `actual`
# * `pred_point`
# * `pred_dist`
#
# Let's go through each of them.
#
# ### `actual`
#
# `actual` is the `ForecastDataFrame`'s version of `ts_value_colname`.
#
# * `actual`: the true value of the time series.
#
# When making forecasts into the future, the value of `actual` will be `None`, but when we are testing our models we can provide this column so we can compute errors. For example, if we want to compute the Mean Absolute Percent Error (MAPE) for our models we can do the following:

oj_fcast.calc_error('MAPE')

# **Exercise**: In the box above, what would you change to calculate the Mean Absolute Scaled Error (MASE) instead?
#
# ### `pred_point`
#
# This is the defining property of the `ForecastDataFrame`.
#
# * `pred_point`: The point forecast value for the series.
#
# In a package like `scikit-learn`, calling `predict` on a model would return just the values in this column.
#
# ### `pred_dist`
#
# One difference between many machine learning applications and forecasting models is that it is often possible to get prediction distributions as well as point predictions.
#
# * `pred_dist`: The distribution forecast for the series.
#
# These prediction distributions can be used to generate prediction intervals; for example, to get the 95% prediction interval for our first forecast data point we can use:

oj_fcast[oj_fcast.pred_dist][0].interval(0.95)

# **Exercise**: What would you change in the box above to get the 90% prediction interval instead of the 95%?
#
# We can also evaluate how our actual values compare to our predicted distributions using the `cdf` function of the `pred_dist`. For example:

p_values = [d.cdf(a) for d, a in zip(oj_fcast[oj_fcast.pred_dist], oj_fcast[oj_fcast.actual])]
plt.hist(p_values, normed=True, histtype='stepfilled', bins= 50)
plt.ylabel('Probability')
plt.xlabel('Prediction Distribution Quantile')

# From this experiment it looks like the prediction intervals may be too narrow on this particular dataset. That can happen sometimes for data with a lot of zeros in it.

# ## `MultiForecastDataFrame`
#
# The `MultiForecastDataFrame` is a subclass of the `ForecastDataFrame` that can hold the outputs of more than one model. One easy way to get a `MultiForecastDataFrame` is to use the `ForecasterUnion` estimator to evaluate many different forecasting models at once. Here we will compare the results of a `Naive` model and our `SeasonalNaive` model from before.

naive_model = Naive(freq = oj_data.infer_freq())
forecaster_union = ForecasterUnion(
    forecaster_list=[('naive', naive_model), ('seasonal_naive', snaive_model)])
forecaster_union.fit(oj_data)
oj_multifcast = forecaster_union.predict(oj_test)
oj_multifcast.head()

# Once again, let's take a look at our new metadata.

print("{")
for m in oj_multifcast._metadata:
    print(" {0}: {1}".format(m, getattr(oj_multifcast, m)))
print("}")

# There is just one new field this time.
#
# ### `model_colnames`
#
# `model_colnames` is our first metadata field to contain a `dict` rather than a string or list.
#
# * `model_colnames`: A standardized dictionary with at least one of two keys, `model_class` and `param_values`.
#     * `model_class`: A string specifying the column name which contains the name of the forecasting model used to create the forecasts.
#     * `param_values`: A string specifying the column name which contains the parameters of the model used to create the forecast.
#
# By specifying these two columns we can compare results from different models, from the same model with different parameters, or both. Below we show an example of how we would compare the MAPE of our two models.
oj_multifcast.calc_error('MAPE', by=oj_multifcast.model_colnames['model_class'])

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# Open In Colab

# + id="ooskRt7gxTRs"
import tensorflow as tf

# + colab={"base_uri": "https://localhost:8080/"} id="gyVquDxUxq4C" outputId="4c46288f-b2bf-408d-aebc-393251644604"
x_data = [[0,0], [1,0], [0,1], [1,1]]
y_data = [[0], [1], [1], [1]]
type(x_data),type(y_data)

# + [markdown] id="bmCtQ5Y4ySiw"
# ## Convert to NumPy arrays

# + id="4fi-7hBSx5zB"
import numpy as np

# + colab={"base_uri": "https://localhost:8080/"} id="i8K_hswJx77S" outputId="f30a456d-d584-4888-bdf7-c8acd3e3cd4b"
x_train = np.array(x_data)
y_train = np.array(y_data)
x_train.shape , y_train.shape

# + [markdown] id="FfnBdew1yLwr"
# ## Feeding the data into deep learning (building the model)
#
# **create model**
#
# Using a multilayer perceptron (more than one layer)
#
# solve XOR model (solving XOR)

# + id="61d7dntdyKGh"
model = tf.keras.models.Sequential()

# + colab={"base_uri": "https://localhost:8080/"} id="YJDiAlCi07nz" outputId="ca8c7ff6-411c-4616-b970-c8a34f90b80d"
model.add(tf.keras.Input(shape=(2,)))  # input layer (1-dimensional input)

# + id="KiY61-RT250O"
model.add(tf.keras.layers.Dense(2,activation='sigmoid'))
# model.add(tf.keras.layers.Dense(128,activation='sigmoid'))  # hidden (feature) layer
# model.add(tf.keras.layers.Dense(128,activation='sigmoid'))

# + [markdown] id="RatVfV0b3Cgp"
# Using an activation function
#
# A Dense layer by itself is just a regression, so we need to add an activation

# + id="6-RF13hh12B3"
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))  # output layer

# + [markdown] id="SYStxNZVC0J7"
# metrics=['acc'] is needed to measure accuracy

# + id="P5Y-mWr_8OOV"
model.compile(optimizer='sgd', loss='mse', metrics=['acc'])

# + id="2Ytyc3sw8xxu"
# tf.keras.utils.plot_model(model, show_shapes=True)

# + [markdown] id="DgyBFYEE_JYB"
# Increasing the number of units in the hidden (feature) layer

# + colab={"base_uri": "https://localhost:8080/"} id="rNKay-a98U1D" outputId="1a95cc18-a261-4eb8-a033-492624086799"
model.fit(x_train, y_train, epochs=100)

# + [markdown] id="uxJkWcDQ_n5X"
# ## Checking the loss: the lower the number, the better
#
# With a hidden Dense layer of 2 units:
#
# one run of epochs = 100, loss: 0.2202
#
# Increasing the hidden Dense layer to 32 units:
#
# one run of epochs = 100, loss: 0.1904
#
# Increasing the hidden Dense layer to 132 units:
#
# one run of epochs = 100, loss: 0.1848
#
# With two hidden Dense layers of 128 units (128*2):
#
# one run of epochs = 100, loss: 0.1865, array = 0.7413647
#
# **Conclusion**: to improve performance,
#
# increase the Dense layer size,
#
# stack the hidden layer more than once (e.g. twice),
#
# or increase the number of epochs
#

# + colab={"base_uri": "https://localhost:8080/"} id="5fWvq9yP9blh" outputId="9b39b414-3002-423e-bbbc-e9c4bd970f04"
model.predict([[0,1]])

# + [markdown] id="DNyJp7WTB1NL"
# ## evaluate() accuracy: the higher the number, the better
#
# This works just like its counterpart in sklearn

# + colab={"base_uri": "https://localhost:8080/"} id="KaLlvvkn-3p0" outputId="177ee3ed-0a6c-41cf-98e5-e57af78c0e36"
model.evaluate(x_train, y_train)

# + id="TBpnuQIzCXP9" colab={"base_uri": "https://localhost:8080/"} outputId="064a0df6-244d-4a82-e75d-8fa8895abbb5"
model.weights

# + [markdown] id="FdR05IS53azZ"
# [Check the network structure](https://playground.tensorflow.org/)

# + colab={"base_uri": "https://localhost:8080/"} id="RdjfHUKr1uD8" outputId="2356ce13-1910-4cd9-950d-332efc54fbdc"
model.summary()

# + [markdown] id="L2G_XNYPjZGd"
# [A site that supports word-vector arithmetic](https://word2vec.kr/search/)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---
# + [markdown] id="MMBM34S1NZKE"
# # Data Transforms and Data Augmentation for medical imaging
# In this notebook, I'll implement data transforms and data augmentation for 3D medical imaging. The dataset is stored in a dictionary, so I will be using dictionary transforms. I will first load an image and then add a channel to the 3D image, because it comes as (height, width, slices); after adding a channel it becomes (channels, height, width, slices). After that we scale the pixel intensity values, and then the image is resized to (128, 128, 128).
#
# Finally, I will apply some data augmentation to the 3D medical images to make the dataset more diverse.
#

# + id="g_SHWeS1NHvy"
# First, install MONAI
# !pip install -qU "monai[ignite, nibabel, torchvision, tqdm]==0.6.0"

# + id="1ANnd59JPUIF"
# Let's import some necessary packages
import torch
from monai.transforms import (AddChanneld, Compose, LoadImaged, RandRotate90d, Resized,
                              ScaleIntensityd, RandAffined, RandRotated, Zoomd,
                              EnsureTyped, ToTensord)

# + colab={"base_uri": "https://localhost:8080/"} id="z9a8tMSGSpU1" outputId="b5835bbd-36b1-4f59-f386-a1da5d6b1efd"
from google.colab import drive
drive.mount('/gdrive')

# + id="Gm29W1qUQBk2"
def transfoms_augmentation():
    '''Transform and augment images.
    Returns: a dict of train/val transform pipelines.'''
    data_transforms = {
        'train': Compose([
            LoadImaged(keys = ["image"]),
            AddChanneld(keys = ["image"]),
            ScaleIntensityd(keys = ["image"]),
            Resized(keys = ["image"], spatial_size = (128, 128, 128)),
            RandAffined(keys = ["image"], prob=0.5, translate_range = 10),
            RandRotate90d(keys=["image"], prob=0.8, spatial_axes=[0, 2]),
            EnsureTyped(keys=["image"]),
            # ToTensord(keys = ["image"])
        ]),
        'val': Compose([
            LoadImaged(keys=["image"]),
            AddChanneld(keys=["image"]),
            ScaleIntensityd(keys=["image"]),
            Resized(keys=["image"], spatial_size=(128, 128, 128)),
            EnsureTyped(keys=["image"]),
            # ToTensord(keys = ["image"])
        ])
    }
    return data_transforms

# + id="5_FzQkGzTINx"

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# Open In Colab

# + [markdown] id="EbTdWmBR6E4d"
# ## KNeighbors classifier

# + colab={"base_uri": "https://localhost:8080/"} id="Llz_MiVeK-hY" outputId="c3409aa5-e167-4218-df0c-764c74959359"
# !pip install wandb

# + colab={"base_uri": "https://localhost:8080/"} id="XQf-1d5fLXzA" outputId="a8b00b01-6fcc-446d-cff6-de39464265b5"
import wandb
wandb.login()

# + id="7KMs3KKn-EBi"
from wandb.keras import WandbCallback

# + colab={"base_uri": "https://localhost:8080/", "height": 580} id="Ur-fF0pQLlZX" outputId="ef6eb805-ecfa-4eae-a446-9ac6892cdef1"
config = wandb.config
wandb.init(project="Climate change Tweet Classification",
           notes="tweak baseline",
           tags=["baseline", "Remote Learning"],
           config=config)

# + id="hVa2lZgPw3PC"
import pandas as pd
import numpy as np
from google.colab import drive
from sklearn.metrics import confusion_matrix,accuracy_score, classification_report
# drive.mount('/content/drive')

# + colab={"base_uri": "https://localhost:8080/"} id="fPlkS-bCnwkC" outputId="87a8a74b-b671-4873-f085-85c0ae22c7a2"
train_df = pd.read_csv('/content/Clean classified data_kwisha.csv')
test_df = pd.read_csv('/content/Test (2).csv')
train_df['class'].value_counts()

# + id="X7yWNhUNzGFs"
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
train_df['class'] = le.fit_transform(train_df['class'])

# +
colab={"base_uri": "https://localhost:8080/"} id="sdwdYU0NxXfb" outputId="7a7c7665-68ac-4580-8e44-c9ac9ecb95e2" #SPLITTING THE TRAINING DATASET INTO TRAINING AND VALIDATION # Input: "cleaned tweets" # Target: "class" from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_df["clean_text"], train_df["class"], test_size=0.05, shuffle=True, stratify =train_df["class"] ) #import necessary libraries import nltk from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('wordnet') X_train_tok= [nltk.word_tokenize(i) for i in X_train] #for word2vec X_val_tok= [nltk.word_tokenize(i) for i in X_val] #for word2vec #TF-IDF # Convert x_train to vector since model can only run on numbers and not words- Fit and transform #import TfidfVectorizer from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(use_idf=True, ) X_train_vectors_tfidf = tfidf_vectorizer.fit_transform(X_train) #tfidf runs on non-tokenized sentences unlike word2vec # Only transform x_test (not fit and transform) X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #Don't fit() your TfidfVectorizer to your test data: it will #change the word-indexes & weights to match test data. Rather, fit on the training data, then use the same train-data- #fit model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatible # + colab={"base_uri": "https://localhost:8080/", "height": 302} id="PJ-FlYSkLtjY" outputId="6a5bba62-4abc-4e72-8bbe-4dc5eee7c966" test_df.head() # + id="gIUBcQwsnQXd" # # Replace classes that are related # train_df["class"].replace({"famine": "drought", "water": "floods"}, inplace=True) # + id="R8znjJJrxyky" from keras.preprocessing.text import Tokenizer tokenizer = Tokenizer() tokenizer.fit_on_texts(train_df['clean_text']) tokenizer.fit_on_texts(test_df['clean_text']) vocab_size = len(tokenizer.word_index) + 1 # Adding 1 because of reserved 0 index # + colab={"base_uri": "https://localhost:8080/"} id="thMVCQ4axQcp" outputId="5c2f8386-cc67-4bec-e2dc-14e690ec3ca6" # split the data into labels and features X = X_train_vectors_tfidf y = train_df['class'] #Defining the hyper parameters for the Knearest Neighbors Classifier leaf_size = list(range(7,20)) n_neighbors = list(range(3,10)) p = [1,2] metric = ['manhattan', 'euclidean', 'minkowski'] #Creating a dictionary with the hyperparameters hyperparameters = dict(leaf_size = leaf_size, n_neighbors = n_neighbors, p=p,metric = metric) #I used GridSearch to look for the best parameters from sklearn.model_selection import RandomizedSearchCV from sklearn.neighbors import KNeighborsClassifier classifier = KNeighborsClassifier() clf = RandomizedSearchCV(classifier, hyperparameters,cv=10) clf = clf.fit(X_train_vectors_tfidf, y_train) print('\n') print(clf.best_params_) print('\n') #Creating a better model based on the parameters given to us by the gridsearch modelone = KNeighborsClassifier(leaf_size = clf.best_params_['leaf_size'], n_neighbors = clf.best_params_['n_neighbors'], p = clf.best_params_['p'], metric = clf.best_params_['metric']).fit(X_train_vectors_tfidf, y_train) modelone # predicting using the model built y_pred = modelone.predict(X_val_vectors_tfidf) # comparing the actual and predicted comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # 
Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('The F1 score is: ',metrics.f1_score(y_val, y_pred,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_pred,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') print(classification_report(y_val, y_pred)) # + id="O7uPvymXz0KZ" #define X variable for fitting of the new model test_df X_val = test_df['clean_text'] # + id="5YGCKDya0LB9" X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #fit the tfidf model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatible # predict using the test dataset y_pred = modelone.predict(X_val_vectors_tfidf) y_pred = le.inverse_transform(y_pred) #Adding the predicted values to our dataset test_df['class'] = y_pred # + id="tk0XZDi800IB" #Lets save our new dataset with class names # test_df.to_csv('/content/drive/MyDrive/Module 2 groupwork Datasets/Test df with classes KNNclassifier.csv') # + [markdown] id="9KpZVmzMVt0M" # # Naive Bayes # + colab={"base_uri": "https://localhost:8080/"} id="U5QY734eVzRr" outputId="57553b67-ae07-4b3c-83ce-9b35bac66ac9" from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import MultinomialNB #SPLITTING THE TRAINING DATASET INTO TRAINING AND VALIDATION # Input: "cleaned tweets" # Target: "class" from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_df["clean_text"], train_df["class"], test_size=0.05, shuffle=True, stratify =train_df["class"] ) #import necessary libraries import nltk from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('wordnet') X_train_tok= [nltk.word_tokenize(i) for i in X_train] #for word2vec X_val_tok= [nltk.word_tokenize(i) for i in X_val] #for word2vec #TF-IDF # Convert x_train to vector since model can only run on numbers and not words- Fit and transform #import TfidfVectorizer from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(use_idf=True, ) X_train_vectors_tfidf = tfidf_vectorizer.fit_transform(X_train) #tfidf runs on non-tokenized sentences unlike word2vec # Only transform x_test (not fit and transform) X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #Don't fit() your TfidfVectorizer to your test data: it will #change the word-indexes & weights to match test data. 
Rather, fit on the training data, then use the same train-data- #fit model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatible # + [markdown] id="t1DjPvdTWB0m" # Gaussian NB # + colab={"base_uri": "https://localhost:8080/"} id="tYmx0Pu0V68w" outputId="4a9cb290-c5cd-444e-e15b-faace899814d" from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import MultinomialNB # Training and fitting the multinomial model gausnb = GaussianNB() gausnb.fit(X_train_vectors_tfidf.toarray(),y_train) y_pred = gausnb.predict(X_val_vectors_tfidf.toarray()) #Checking performance our model with performance metrics. comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('The F1 score is: ',metrics.f1_score(y_val, y_pred,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_pred,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') print(classification_report(y_val, y_pred)) # + [markdown] id="XBjncBXoWNly" # Bernoulli NB # + colab={"base_uri": "https://localhost:8080/"} id="4WwuCug1V9Rj" outputId="d0f18d14-c352-4941-a0b9-4c55294c52e4" # Training and fitting the bernoulli model bernb = BernoulliNB() bernb.fit(X_train_vectors_tfidf,y_train) y_pred = bernb.predict(X_val_vectors_tfidf) #Checking performance our model with performance metrics. 
# comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('The F1 score is: ',metrics.f1_score(y_val, y_pred,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_pred,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') print(classification_report(y_val, y_pred)) # + [markdown] id="XQwsqPgLWZpF" # Multinomial NB # + id="_IJiejZ-WSYo" from sklearn.pipeline import Pipeline from sklearn.naive_bayes import MultinomialNB from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.model_selection import train_test_split, GridSearchCV text_clf = Pipeline([('clf', MultinomialNB())]) tuned_parameters = { 'vect__ngram_range': [(1, 1), (1, 2), (2, 2)], 'tfidf__use_idf': (True, False), 'tfidf__norm': ('l1', 'l2'), 'clf__alpha': [1, 1e-1, 1e-2]} # + colab={"base_uri": "https://localhost:8080/"} id="RED2qKmKWWJM" outputId="100d2aef-53f7-4885-849b-aa0cba4ad1c3" # Training and fitting the multinomial model mnb = MultinomialNB() mnb.fit( X_train_vectors_tfidf , y_train) y_pred = mnb.predict(X_val_vectors_tfidf) #Checking performance our model with performance metrics. comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('The F1 score is: ',metrics.f1_score(y_val, y_pred,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_pred,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') print(classification_report(y_val, y_pred)) # + [markdown] id="DMeFlzNLCLIX" # Since the accuracy is 60.7%, we will try to improve the model by hyperparameter tuning. 
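# + [markdown]
# A quick sketch related to the Multinomial NB section above: the `text_clf` pipeline and the `tuned_parameters` grid were defined but never fitted, and the `vect__` and `tfidf__` keys in that grid only match a pipeline that actually has steps named `vect` and `tfidf`. Assuming the raw-text split from earlier (`X_train`, `y_train`), a pipeline those parameter names would work with could look like the following (`nb_pipeline` and `nb_grid` are just illustrative names):

# +
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV

nb_pipeline = Pipeline([
    ('vect', CountVectorizer()),    # step 'vect'  -> 'vect__ngram_range'
    ('tfidf', TfidfTransformer()),  # step 'tfidf' -> 'tfidf__use_idf', 'tfidf__norm'
    ('clf', MultinomialNB()),       # step 'clf'   -> 'clf__alpha'
])

nb_grid = GridSearchCV(nb_pipeline, tuned_parameters, cv=5)
# The pipeline does its own vectorization, so raw text goes in:
# nb_grid.fit(X_train, y_train)
# print(nb_grid.best_params_)
# -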
# + [markdown] id="4yMqf6sQWmrw" # Hyperparameter Tuning # + colab={"base_uri": "https://localhost:8080/"} id="3v2AZ6GnF_ou" outputId="0c4b69bf-9520-48c9-b52b-d723ce82d2b1" #Tuning hyperparameters and transforming features to a normal distribution from sklearn.preprocessing import PowerTransformer from sklearn.model_selection import GridSearchCV from sklearn.model_selection import RepeatedStratifiedKFold alphas = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] p_grid_NB = {'alpha': alphas} NB_cls= MultinomialNB() grid = GridSearchCV(estimator = NB_cls, param_grid = p_grid_NB, cv = 5) grid.fit(X_train_vectors_tfidf, y_train) grid.best_params_ # + colab={"base_uri": "https://localhost:8080/"} id="gqo35_H4xkaq" outputId="70ac842a-af8a-433e-b50c-5caf5db78adf" # Training and fitting the multinomial model with hyperparameter tuned mnb = MultinomialNB(alpha=0.1) mnb.fit( X_train_vectors_tfidf , y_train) y_pred = mnb.predict(X_val_vectors_tfidf) #Checking performance our model with performance metrics. comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('The F1 score is: ',metrics.f1_score(y_val, y_pred,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_pred,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') print(classification_report(y_val, y_pred)) # + [markdown] id="vPfGTOxF6hAr" # #Random_Forests_classifier_Climate_change_tweet_classification # + id="XUv_zk9A2MTk" from sklearn import preprocessing le = preprocessing.LabelEncoder() train_df['class'] = le.fit_transform(train_df['class']) # + colab={"base_uri": "https://localhost:8080/"} id="Nn6ODfb-18z-" outputId="95482a30-5e71-4c62-8ba1-42574902c432" import pandas as pd import numpy as np from google.colab import drive from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import r2_score #drive.mount('/content/drive') from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import MultinomialNB #SPLITTING THE TRAINING DATASET INTO TRAINING AND VALIDATION # Input: "cleaned tweets" # Target: "class" from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_df["clean_text"], train_df["class"], test_size=0.05, shuffle=True, stratify =train_df["class"] ) #import necessary libraries import nltk from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('wordnet') X_train_tok= [nltk.word_tokenize(i) for i in X_train] #for word2vec X_val_tok= [nltk.word_tokenize(i) for i in X_val] #for word2vec #TF-IDF # Convert x_train to vector since model can only run on numbers and not words- Fit and transform #import TfidfVectorizer from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(use_idf=True, ) X_train_vectors_tfidf = tfidf_vectorizer.fit_transform(X_train) #tfidf runs on non-tokenized sentences unlike 
word2vec # Only transform x_test (not fit and transform) X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #Don't fit() your TfidfVectorizer to your test data: it will #change the word-indexes & weights to match test data. Rather, fit on the training data, then use the same train-data- #fit model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatible # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="M8upO2y-2pRH" outputId="dc48507c-78fd-412c-d611-db256faa2ec5" # split the data into labels and features # Creating a dictionary of parameters to tune # parameters = {'n_estimators': np.arange(10,13), 'max_depth': np.arange(7,17)} # Setting the number of folds to 5 and instantiating the model # grid_search = RandomizedSearchCV(RandomForestClassifier(), parameters, cv = 5, return_train_score = True) grid_search.fit(X_train_vectors_tfidf, y_train) print(grid_search.best_params_) print('\n') #Lets see how the different max depth values compare to each other print('how do the different depths compare to each other''\n') for i in range(len(parameters['max_depth'])): print('parameters', grid_search.cv_results_['params'][i]) print('mean Test scores:', grid_search.cv_results_['mean_test_score'][i]) print('Rank:', grid_search.cv_results_['rank_test_score'][i]) print('\n') dtree_model = RandomForestClassifier(n_estimators = grid_search.best_params_['n_estimators'], max_depth = grid_search.best_params_['max_depth']).fit(X_train_vectors_tfidf, y_train) dtree_model print('Training score:', dtree_model.score(X_train_vectors_tfidf, y_train)) print('Test score:', r2_score(y_val, y_pred)) y_pred = dtree_model.predict(X_train_vectors_tfidf) #Random Forests model comparison_frame = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred}) print('\n') print(comparison_frame) print('\n') print(comparison_frame.describe()) # Evaluating the Algorithm # --- from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') # + id="Dg6Mh5BT_txE" from sklearn import metrics # print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_pred)) # print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_pred)) # print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_pred))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_pred)) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_pred)) print('\n', 'Classification report') # + id="4dZ13qqh37Ti" #define X variable for fitting of the new model test_df X_val = test_df['clean_text'] # + id="jHQhA5AC3-8y" X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #fit the tfidf model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatible # predict using the test dataset y_pred = dtree_model.predict(X_val_vectors_tfidf) y_pred = le.inverse_transform(y_pred) #Adding the predicted values to our dataset test_df['class'] = y_pred # + id="W_NKVm-w4Bgs" #Lets save our new dataset with class names test_df.to_csv('/content/drive/MyDrive/Module 2 groupwork Datasets/Test 
df with classes RandomForestsclassifier.csv') # + [markdown] id="ZGJsEVYYVxg-" # #Logistic Regression # + id="Ygc_yGVqwEtd" import pandas as pd import numpy as np import seaborn as sns from sklearn.metrics import r2_score import sklearn import matplotlib.pyplot as plt # %matplotlib inline from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score # + id="KrXKd8EcxXX-" train_df = pd.read_csv('/content/Clean classified data_kwisha.csv') test_df = pd.read_csv('Test (2).csv') # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="7Y1bTKeSyeDl" outputId="45c33338-3a09-4140-f8db-eb2d0b4d40d5" sns.pairplot(train_df, hue='class', size=2.5) # + id="3usf__B-0nXM" from sklearn import preprocessing le = preprocessing.LabelEncoder() train_df['class'] = le.fit_transform(train_df['class']) # + id="LqlAFR96555m" from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(train_df["clean_text"], train_df["class"], test_size=0.05, shuffle=True, stratify =train_df["class"] ) # + colab={"base_uri": "https://localhost:8080/"} id="1fYpdO_15_3H" outputId="489cff0c-ba40-4840-d5e2-7aaeb1361925" import nltk from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('wordnet') X_train_tok= [nltk.word_tokenize(i) for i in X_train] X_val_tok= [nltk.word_tokenize(i) for i in X_val] # + id="7TWXnPjm6H8E" # Convert x_train to vector since model can only run on numbers and not words- Fit and transform #import TfidfVectorizer from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(use_idf=True, ) X_train_vectors_tfidf = tfidf_vectorizer.fit_transform(X_train) #tfidf runs on non-tokenized sentences unlike word2vec # Only transform x_test (not fit and transform) X_val_vectors_tfidf = tfidf_vectorizer.transform(X_val) #Don't fit() your TfidfVectorizer to your test data: it will #change the word-indexes & weights to match test data. 
Rather, fit on the training data, then use the same train-data- #fit model on the test data, to reflect the fact you're analyzing the test data only based on what was learned without #it, and the have compatibl # + colab={"base_uri": "https://localhost:8080/"} id="AzlsQo7CT5Gm" outputId="e779657d-4ed5-45a0-a5f0-77a4f8a960e3" from sklearn.metrics import classification_report, f1_score, accuracy_score, confusion_matrix from sklearn.linear_model import LogisticRegression lr_tfidf=LogisticRegression(solver = 'liblinear', C=10, penalty = 'l2') lr_tfidf.fit(X_train_vectors_tfidf, y_train) #model #Predict y value for test dataset y_predict = lr_tfidf.predict(X_val_vectors_tfidf) y_prob = lr_tfidf.predict_proba(X_val_vectors_tfidf)[:,1] baselog_accuracy = accuracy_score(y_val, y_predict) from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_predict)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_predict)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_predict))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_predict)) print('The F1 score is: ',metrics.f1_score(y_val, y_predict,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_predict,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_predict)) print('\n', 'Classification report') print(classification_report(y_val, y_predict)) # + id="0PE5Syc7Csh2" from six import StringIO # + colab={"base_uri": "https://localhost:8080/"} id="rRCiIfFHUhyG" outputId="e24eeeb8-dc11-4e83-816d-21a67eae43f6" # Import the libraries from sklearn.datasets import make_classification from sklearn.datasets import make_classification from imblearn.over_sampling import SVMSMOTE lr_tfidf = SVMSMOTE(random_state = 101) # Choosing a sample X_oversample_svm, y_oversample_svm = make_classification(n_samples=10000, n_features=2, n_redundant=0, n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=101) # Perform Logistic Regression X_oversample_svm, y_oversample_svm = lr_tfidf.fit_resample(X_train_vectors_tfidf, y_train) classifier_svm = LogisticRegression(solver = 'liblinear', C=10, penalty = 'l2') classifier_svm.fit(X_oversample_svm, y_oversample_svm) #Predict y value for test dataset y_predict = classifier_svm.predict(X_val_vectors_tfidf) y_prob = classifier_svm.predict_proba(X_val_vectors_tfidf)[:,1] from sklearn import metrics print('Mean Absolute Error:', metrics.mean_absolute_error(y_val, y_predict)) print('Mean Squared Error:', metrics.mean_squared_error(y_val, y_predict)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_val, y_predict))) print('The accuracy of the model is ',metrics.accuracy_score(y_val, y_predict)) print('The F1 score is: ',metrics.f1_score(y_val, y_predict,average='weighted')) print('The recall score is ',metrics.recall_score(y_val, y_predict,average='weighted')) print('\n', 'Confusion matrix') print(confusion_matrix(y_val, y_predict)) print('\n', 'Classification report') print(classification_report(y_val, y_predict)) #print(classification_report(y_test, classifier_svm.predict(X_test))) accuracy_score(y_val, y_predict) # + [markdown] id="CIX5-KBJfA_q" # ## Transformers # + [markdown] id="4b0uYrN1WSxd" # ## Transformers # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="m3FDSNzqhG3o" outputId="d45e349f-7a0c-490d-8942-fb80aa64cd1a" # ! 
pip install ktrain # + id="s6UDhSmqhQ8Z" # %reload_ext autoreload # %autoreload 2 # %matplotlib inline import os os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'; os.environ['CUDA_VISIBLE_DEVICES']='0'; # + id="HD-8mBL_V-Tm" import pandas as pd import ktrain from ktrain import text df=pd.read_csv('/content/Clean classified data_kwisha.csv') # + id="FM6K6k4FV-Tn" from sklearn import preprocessing le = preprocessing.LabelEncoder() df['class'] = le.fit_transform(df['class']) # + id="s_7k5V8dV-To" data_texts = df["clean_text"].to_list() # Features (not-tokenized yet) data_labels = df["class"].to_list() # Lables from sklearn.model_selection import train_test_split # # Split Train and Validation data # train_texts, _texts, train_labels, val_labels = train_test_split(data_texts, data_labels, test_size=0.2, random_state=0) X_train,X_test,y_train,y_test=train_test_split(data_texts, data_labels, test_size=0.2, random_state=0) # # Keep some data for inference (testing) # train_texts, test_texts, train_labels, test_labels = train_test_split(train_texts, train_labels, test_size=0.01, random_state=0) # + colab={"base_uri": "https://localhost:8080/"} id="j6AP5IMofXa-" outputId="5bdc4be7-6193-4173-c75c-38ab981135b7" len(X_test),len(X_train),len(y_test),len(y_train) # + [markdown] id="d_t0FTongK9A" # Build model with Transformer # + colab={"base_uri": "https://localhost:8080/", "height": 48, "referenced_widgets": ["53c85db8698b4a988155aaf0075f200f", "3e1c435758194733b3f82b574723d212", "ebcdbe00229244c48ab2d5b1be31e030", "a97ded6d7e674d0bac081fceef1f3a84", "7e3b9e7586d141bf8143db2fa901ea0b", "ed2d2b0c74a2435786ed5624740d9594", "d88b524dc5a047fca69ac47b59a8e8b6", "1f2d887e405844be8a4b8da37741b45a", "ce0e8b81725946d7a3e381deb20a354f", "ea7ce50ab6484cb5b155692acf035364", "c893cf781bee487c9dbcfccc7b6d303d"]} id="jpwB6uWwgQ0F" outputId="8553c41d-1a97-47d8-dba0-a34bee2a835c" # Build the model with pretrained model model_name='distilbert-base-uncased' categories=['renewable','drought','floods','air_polutants','temperature','greentalk'] #transformer trans=text.Transformer(model_name,maxlen=512,class_names=categories) # + colab={"base_uri": "https://localhost:8080/", "height": 330, "referenced_widgets": ["44990b5bb5c24aa9a11fc07e55e28ba9", "6c6257fd4a0140e7ac7133fabe3fa092", "f3a5ede38ccb4b3b8fb83d30e4acbdd3", "1e293e11f86d40c791fdab7545722bb5", "8d1022bc697749e982b6189d658707a6", "bb6f4945e1844972b22bd05c9334fd64", "3f0bd4cef66f4f2193c23dc6ed312cd9", "19b6e5964a9543e7a2c22ed53f0a826e", "", "", "", "5afcc8964e0045cf8f87ecc53446ac2d", "d001a01282ca4dc9a7cb108f37a7f31b", "38cd23dcdff84c1c9c6ee1c3873a12b0", "", "6ce8cd49be7b4e35b12c964c1460795f", "254c4d3fe5fc44eb8699a3c35a1f544e", "", "c280131056224e25bb9b3eb0dc7ad447", "0518d45f83af4ab3b49448998465595a", "3867ee7d58364b7e8f2ee0799a5fab90", "0089132b65104968a852d80c0689ae0f"]} id="m4m3AB3oiUWD" outputId="e54a6230-0606-4f4b-b17e-78fe78f029a1" # Get the train and test data train_data=trans.preprocess_train(X_train,y_train) test_data=trans.preprocess_train(X_test,y_test) # + colab={"base_uri": "https://localhost:8080/", "height": 48, "referenced_widgets": ["5916e97f5b124d0d846fdde042adb433", "4e68b32962f847e79cf9680436da7978", "1392cb14660941b4be727903f8f42a36", "023c56ea7da94c8a824edc5272e95e89", "d9e924a78fec46a79704650615081a33", "", "0a86c2cd1bdf461890b0eca8a5294541", "e864ce49e8984b44a7472834f7eb4162", "db6aecc462a14218873a857ac30596ec", "98a1ce45ecc04240aefabf7dce41f84d", "1c2209b1fdeb42e28a5930645a7504b2"]} id="YKuZaixPjGYu" 
outputId="bab2a43f-ed00-40e2-adf7-1a1c807aa122" # We retrive the transformer distilbert classifier model=trans.get_classifier() # + id="U3v7u2ohjRTk" # Get the learner learner=ktrain.get_learner(model,train_data=train_data,val_data=test_data,batch_size=16) # + id="hDaRBEIJkFkv" # Train the model learner.lr_find(show_plot=True,max_epochs=3) # we us ethe code to find the best learning rate # + colab={"base_uri": "https://localhost:8080/"} id="VVAbXA2xkEJT" outputId="17f293d1-e77a-4945-9ba3-e42b5c564925" learner.fit_onecycle(0.003,1) # + colab={"base_uri": "https://localhost:8080/"} id="3jvCSIq7lnfB" outputId="789cdbf2-d2c2-403e-936e-f339f6beafc9" # Test the metrics learner.validate(class_names=categories) # + id="ZB7F__lAmnTD" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2d949859-d440-4f33-8e31-69f85b805116" X_test[553] # + [markdown] id="HBx4IzOKWCvO" # Making predictions # + id="ztwcAMyPmIzi" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="be531be5-af80-4f1b-c27e-f85f18531794" predictor=ktrain.get_predictor(learner.model,prepoc=trans) # + id="UVSVr4L9mzZS" x='Six bodies of the flash flood victims have been recovered, leaving one tourist missing. The search and rescue operation continues as we reach out to next of kin to share details of (this) sad incident and plan together (our) next course of action.”' # + id="jeTutAThm1_G" # We predict the random sentence predictor.predict(x) # + id="rCVgw7AyV-T6" from wandb.keras import WandbCallback # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from keras.utils import np_utils from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D, AveragePooling2D from keras.optimizers import Adam import glob from PIL import Image import keras from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.utils import np_utils from keras.layers.core import Flatten, Dense, Dropout, Lambda # - def plots(ims, figsize=(12,6), rows=1, interp=False, titles=None): if type(ims[0]) is np.ndarray: ims = np.array(ims).astype(np.uint8) if (ims.shape[-1] != 3): ims = ims.transpose((0,2,3,1)) f = plt.figure(figsize=figsize) for i in range(len(ims)): sp = f.add_subplot(rows, len(ims)//rows, i+1) sp.axis('Off') if titles is not None: sp.set_title(titles[i], fontsize=16) plt.imshow(ims[i], interpolation=None if interp else 'none') # + from keras.preprocessing import image BATCH_SIZE = 64 PATH="data/" def get_fit_sample(): gen = image.ImageDataGenerator() sample_batches = gen.flow_from_directory(PATH+'valid', target_size=(224,224), class_mode='categorical', shuffle=False, batch_size=300) imgs, labels = next(sample_batches) return imgs gen = image.ImageDataGenerator(featurewise_std_normalization=True) gen.fit(get_fit_sample()) val_batches = gen.flow_from_directory(PATH+'valid', target_size=(224,224), class_mode='categorical', shuffle=True, batch_size=BATCH_SIZE) gen = image.ImageDataGenerator(featurewise_std_normalization=True) gen.fit(get_fit_sample()) batches = 
gen.flow_from_directory(PATH+'train', target_size=(224,224), class_mode='categorical', shuffle=True, batch_size=BATCH_SIZE) #imgs,labels = next(batches) #plots(imgs[:2]) # - # + CLASSES = 2 INPUT_SHAPE = (224,224,3) model = Sequential() # Block 1 model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', input_shape=INPUT_SHAPE)) model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')) model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')) # Block 2 model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')) model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')) model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')) # Block 3 model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')) model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')) model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')) model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')) # Block 4 model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')) model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')) model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')) model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')) # Block 5 model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')) model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')) model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')) model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')) # Classification block model.add(Flatten(name='flatten')) model.add(Dense(4096, activation='relu', name='fc1')) model.add(Dropout(0.5)) model.add(Dense(4096, activation='relu', name='fc2')) model.add(Dropout(0.5)) model.add(Dense(CLASSES, activation='softmax', name='predictions')) from keras.optimizers import SGD sgd = SGD(lr=0.01, decay=0.0005, momentum=0.9, nesterov=False) model.compile(optimizer=sgd, loss='mean_squared_error', metrics=['accuracy']) # + # %%time hist = model.fit_generator(batches, steps_per_epoch=100, epochs=10, validation_data=val_batches, validation_steps=10) model.save('ConvNet-D-vgg16.h5') # http://qiita.com/TypeNULL/items/4e4d7de11ab4361d6085 loss = hist.history['loss'] val_loss = hist.history['val_loss'] nb_epoch = len(loss) plt.plot(range(nb_epoch), loss, marker='.', label='loss') plt.plot(range(nb_epoch), val_loss, marker='.', label='val_loss') plt.legend(loc='best', fontsize=10) plt.grid() plt.xlabel('epoch') plt.ylabel('loss') plt.show() # - # https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt import os import cv2 from tqdm import tqdm DATADIR = "kaggle-one-shot-pokemon/" CATEGORIES = ["pokemon-a", "pokemon-b"] for category in CATEGORIES: # do dogs and cats path = os.path.join(DATADIR,category) # create path to dogs and cats for img in os.listdir(path): # iterate over each image per dogs and cats img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE) # convert to array plt.imshow(img_array) # graph it plt.show() # 
display! break # we just want one for now so break break #...and one more! # - print(img_array.shape) # + IMG_SIZE = 50 new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE)) plt.imshow(new_array) plt.show() # - new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE)) plt.imshow(new_array) plt.show() # + training_data = [] def create_training_data(): for category in CATEGORIES: # do dogs and cats path = os.path.join(DATADIR,category) # create path to dogs and cats class_num = CATEGORIES.index(category) # get the classification (0 or a 1). 0=dog 1=cat for img in tqdm(os.listdir(path)): # iterate over each image per dogs and cats try: img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE) # convert to array new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE)) # resize to normalize data size training_data.append([new_array, class_num]) # add this to our training_data except Exception as e: # in the interest in keeping the output clean... pass #except OSError as e: # print("OSErrroBad img most likely", e, os.path.join(path,img)) #except Exception as e: # print("general exception", e, os.path.join(path,img)) create_training_data() print(len(training_data)) # + import random random.shuffle(training_data) # - for sample in training_data[:10]: print(sample[1]) # + X = [] y = [] for features,label in training_data: X.append(features) y.append(label) print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1)) X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1) # - X.shape # + import pickle pickle_out = open("X.pickle","wb") pickle.dump(X, pickle_out) pickle_out.close() pickle_out = open("y.pickle","wb") pickle.dump(y, pickle_out) pickle_out.close() # + pickle_in = open("X.pickle","rb") X = pickle.load(pickle_in) pickle_in = open("y.pickle","rb") y = pickle.load(pickle_in) # + import tensorflow as tf from tensorflow.keras.datasets import cifar10 from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten from tensorflow.keras.layers import Conv2D, MaxPooling2D import pickle pickle_in = open("X.pickle","rb") X = pickle.load(pickle_in) pickle_in = open("y.pickle","rb") y = pickle.load(pickle_in) X = X/255.0 model = Sequential() model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(256, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors model.add(Dense(64)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, y, batch_size=32, epochs=3, validation_split=0.3) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd data15feb = pd.read_csv('log-15feb18.txt') data15feb.head() data16feb = pd.read_csv('log-16feb18.txt') data16feb.head() data17feb = pd.read_csv('log-17feb18.txt') data17feb.head() data = pd.concat( (data15feb,data16feb,data17feb), ignore_index=True) data.head(40) from glob import glob files = glob('log*') files.sort() files data = pd.concat( (pd.read_csv(file) for file in files), ignore_index=True) data.head(60) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: 
light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests#Connect to endpoints import time #time script import sys import csv import pandas as pd import spotipy#authentication import spotipy.util as util#authentication from spotipy.oauth2 import SpotifyClientCredentials#authentication #Credentials cid = '049ade7215e54c63a2b628f3784dc407' secret = '' redirect_uri = 'http://google.com/' username = 'xxx' # + #Authentication scope = 'user-library-read' token = util.prompt_for_user_token(username, scope, client_id=cid, client_secret=secret, redirect_uri=redirect_uri) if token: sp = spotipy.Spotify(auth=token) else: print("Can't get token for", username) # - def get_albums(nexturl,artist_uri): all_albums = [] try: resp = requests.get(url=nexturl, headers={'Authorization': 'Bearer ' + token}) resp.raise_for_status() except requests.exceptions.HTTPError as err: print(err) response = resp.json() album_limit = (response['limit'])-1 print(album_limit) for x in range(album_limit): try: album_uri = response['items'][x]['uri'].split(':') all_albums.append(album_uri[2]) except IndexError as error: continue #Natural part of the process. The length of response is sometimes less than limit try: if (nexturl is not None): get_albums(response['next']) else: return except: return all_albums # + i = 0 album_list=[] start = time.time() with open('Data/artist_uri.csv') as csvfile: file = csv.reader(csvfile, delimiter='|') next(file) head = [next(file) for x in range(1)] #for limiting amount of rows read for row in head: artist_uri = row[0] artist_name = row[1] print(i,artist_uri,artist_name) #output album_uri albums = get_albums('https://api.spotify.com/v1/artists/{id}/albums?limit=20'.format(id=artist_uri),artist_uri)#list of album uris #There are a number of albums per artist so need to create a record of each for album in albums: album_list.append({'Artist URI':artist_uri,'Artist Name':artist_name,'Album URI':album}) i+=1 stop = time.time() duration = (stop - start)/60 # - #Duration of script str(duration)+' minutes' df = pd.DataFrame(album_list) df #no_duplicates = df.drop_duplicates(['Album URI']) #no_duplicates.to_csv('Data/artist_album_uri.csv',sep='|',index=False) #no_duplicates # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # A Basic bokeh line graph from bokeh.plotting import figure from bokeh.io import output_file, show x = [1, 2, 3, 4, 5] y = [6, 7, 8, 9, 10] output_file('Line.html') fig = figure() fig.line(x, y) show(fig) # + # Bokeh with Pandas from bokeh.plotting import figure from bokeh.io import output_file, show import pandas data_frame = pandas.read_csv('data.csv') x = data_frame['x'] y = data_frame['y'] output_file('Line.html') fig = figure() fig.line(x, y) show(fig) # + # Bokeh with Pandas - Bachelors_csv from bokeh.plotting import figure from bokeh.io import output_file, show import pandas data_frame = pandas.read_csv('http://pythonhow.com/data/bachelors.csv') x = data_frame['Year'] y = data_frame['Engineering'] output_file('Line.html') fig = figure() fig.line(x, y) show(fig) # + import pandas from bokeh.plotting import figure, output_file, show p=figure(plot_width=500,plot_height=400, tools='pan') p.title.text="Cool Data" p.title.text_color="Gray" p.title.text_font="times" p.title.text_font_style="bold" 
p.xaxis.minor_tick_line_color=None p.yaxis.minor_tick_line_color=None p.xaxis.axis_label="Date" p.yaxis.axis_label="Intensity" p.line([1,2,3],[4,5,6]) output_file("graph.html") show(p) # + import pandas from bokeh.plotting import figure, output_file, show df=pandas.read_excel("http://pythonhow.com/data/verlegenhuken.xlsx",sheet_name=0) df["Temperature"]=df["Temperature"]/10 df["Pressure"]=df["Pressure"]/10 p=figure(plot_width=500,plot_height=400,tools='pan') p.title.text="Temperature and Air Pressure" p.title.text_color="Gray" p.title.text_font="arial" p.title.text_font_style="bold" p.xaxis.minor_tick_line_color=None p.yaxis.minor_tick_line_color=None p.xaxis.axis_label="Temperature (°C)" p.yaxis.axis_label="Pressure (hPa)" p.circle(df["Temperature"],df["Pressure"],size=0.5) output_file("Weather.html") show(p) # - from bokeh.plotting import figure, output_file, show p = figure(plot_width=500, plot_height=400, tools = 'pan, reset') p.title.text = "Earthquakes" p.title.text_color = "Orange" p.title.text_font = "times" p.title.text_font_style = "italic" p.yaxis.minor_tick_line_color = "Yellow" p.xaxis.axis_label = "Times" p.yaxis.axis_label = "Value" p.circle([1,2,3,4,5], [5,6,5,5,3], size = [i*2 for i in [8,12,14,15,20]], color="red", alpha=0.5) output_file("Scatter_plotting.html") show(p) # + from bokeh.plotting import figure, output_file, show import pandas data_frame = pandas.read_csv( 'adbe.csv', parse_dates=['Date'] ) fig = figure(width=800, height=250, x_axis_type='datetime') fig.line(data_frame['Date'], data_frame['Close'], color='Orange', alpha=0.5) output_file('Timeseries.html') show(fig) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.preprocessing import LabelEncoder # + train = pd.read_csv('train.csv',parse_dates=['issue_date','listing_date']) train = train.sample(frac=1).reset_index(drop=True) #filling the missing the value of condition train.fillna(3,inplace=True) # + train['area_occupied'] = train['height(cm)'] * train['length(m)'] * 100 train['no_of_days'] =abs((train['listing_date'] - train['issue_date']).dt.days) train['listing_day'] = train['listing_date'].dt.day train['listing_month'] = train['listing_date'].dt.month train['listing_dayofweek'] = train['listing_date'].dt.dayofweek train['listing_weekday'] = train['listing_date'].dt.weekday train['listing_hour'] = train['listing_date'].dt.hour train['listing_time'] = train['listing_date'].dt.time train['issue_day'] = train['issue_date'].dt.day train['issue_month'] = train['listing_date'].dt.month train['issue_dayofweek'] = train['listing_date'].dt.dayofweek train['issue_weekday'] = train['listing_date'].dt.weekday train['issue_hour'] = train['listing_date'].dt.hour train['issue_time'] = train['listing_date'].dt.time train['pet_id'] = train['pet_id'].apply(lambda x: int(x.split('_')[1])) #get some with pet_id def id_bins(s): if s['pet_id'] <= 63355: return 'OLD' if s['pet_id'] <= 70150: return 'MID' else: return 'NEW' train['id_bins'] = train.apply(id_bins,axis=1) def height_bins(s): if s['height(cm)'] <=27.36: return 'SHORT' else: return 'TALL' train['height_bin'] = train.apply(height_bins,axis=1) cat = list(train.select_dtypes('object')) print('categorical values',cat) for i in cat: l = LabelEncoder() 
l.fit(train[i]) train[i]= l.transform(train[i]) #train = pd.get_dummies(train, # columns=['color_type','id_bins', 'height_bin'], # drop_first=True) #train.drop(['pet_id']) print(train.shape) # - train.head() X = train.drop(['breed_category','pet_category', 'issue_date','listing_date'],axis=1) y = train[['breed_category']] X.shape , y.shape bestfeatures = SelectKBest(score_func=chi2,k=20) fit=bestfeatures.fit(X,y) # + df1 = pd.DataFrame(fit.scores_) df2 = pd.DataFrame(X.columns) df = pd.concat([df1,df2],axis=1) df.columns = ['specs','score'] df.sort_values(by=['specs'],ascending=False).head(10) # - print(df.nlargest(10,'specs')) # + from sklearn.ensemble import ExtraTreesClassifier import matplotlib.pyplot as plt model = ExtraTreesClassifier(random_state=10) model.fit(X,y) d= {'columns':X.columns,'importance':model.feature_importances_} df = pd.DataFrame(data=d) df.sort_values(by='importance',ascending=False).head(10) # - fe = pd.Series(model.feature_importances_,index=X.columns) fe.nlargest(10).plot(kind='barh') plt.show() import seaborn as sns corrmat = train.corr() top_corr_features = corrmat.index plt.figure(figsize=(30,30)) g = sns.heatmap(train[top_corr_features].corr(),annot=True,cmap='RdYlGn') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + ## Administrative things from IPython.display import Markdown, display def printmd(string): display(Markdown(string)) import numpy as np from pprint import pprint # - # Q. How would you represent a Scalar / Vector / Matrix / Tensor quantity in Python? printmd('In numpy, everything (viz scalar, vector, matrix, tensor) is represented as an **array**.
\ So you can see examples as below:') print('Scalar: ', end=' ') x = np.array([1]) pprint(x) print('\nVector: ', end=' ') x = np.array([1, 1]) pprint(x) print('\nMatrix: ', end=' ') x = np.array([[1, 1], [1, 1]]) pprint(x) print('\nTensor: ', end=' ') x = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) pprint(x) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Finding the error bar when interpolating the measurements # The RF losses in the T-resonator, due to the metal resistivities of the coaxial lines, are relatively unknown, as the resistivity one can find in textbooks is in practice never achieved in real life. # # In this notebook, the measured return loss of the properly matched T-resonator is compared to the RF model. The short-circuit lengths, the resistivity, and ultimately the total RF losses in the model are adjusted to fit the measurement. This gives an estimate of the RF losses achieved in real life. # + import numpy as np import matplotlib.pyplot as plt from scipy.optimize import minimize from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets import tresonator as T # %matplotlib notebook # - # Below we import the return loss measurements. # + # Load a matched configuration - S-parameter measurements filename = 'data/RES2.ASC' exp_freq,reS11,imS11 = np.loadtxt(filename, skiprows=14, delimiter=';', unpack=True) exp_mag = np.sqrt(reS11**2 + imS11**2) exp_mag_dB = 20*np.log10(exp_mag) # - # The match frequency found in practice is: # find the match frequency exp_freq_match = exp_freq[np.argmin(exp_mag)] print('Match frequency : {} MHz'.format(exp_freq_match/1e6)) # Then we define an optimization function. The goal of this function is to minimize the error between the measured S11 and the simulated one over the whole frequency band. def optim_fun_impedance(short_properties, use_add_loss): L_DUT, Z_DUT, L_CEA, Z_CEA, add_loss = short_properties # calculates the resonator S11 vs freq S11dB = [] for f in exp_freq: if not use_add_loss: # force no additional loss if requested add_loss = 1 _cfg = T.Configuration(f, P_in=1, L_DUT=L_DUT, L_CEA=L_CEA, Z_short_DUT=Z_DUT, Z_short_CEA=Z_CEA, additional_losses=add_loss) S11dB.append(_cfg.S11dB()) crit = np.sum( (np.array(S11dB) - exp_mag_dB)**2) # least squares print(short_properties, crit) return crit # Below we launch the minimization process, starting from a first guess. # # Two minimizations are launched: one taking a multiplicative loss coefficient into account, and one without.
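# As a minimal illustration of the call pattern used below (a toy objective, not the resonator model): `scipy.optimize.minimize` passes the entries of `args` as extra positional arguments to the objective, so a single boolean flag should be supplied as a 1-tuple.
# +
def _toy_objective(p, use_offset):
    # simple bounded quadratic with an optional constant offset, mirroring the
    # (parameters, flag) signature of optim_fun_impedance above
    return (p[0] - 1.0)**2 + (0.5 if use_offset else 0.0)

_toy_res = minimize(_toy_objective, x0=[0.0], bounds=[(-2.0, 2.0)], args=(True,))
print(_toy_res.x)  # expected to be close to [1.0]
# -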
# first guess d_DUT_0 = 0.035 # m d_CEA_0 = 0.035 # m Z_DUT_0 = 0.01 # Ohm Z_CEA_0 = 0.01 # Ohm add_loss = 1.0 # multiplicative coefficient on total RF loss # find an optimum taking into account additional losses bounds_pties = ((20e-3, 63e-3), (1e-3, 1), # d,Z DUT (5e-3, 200e-3), (1e-3, 1), # d,Z CEA (0.1, 2)) # add losses opt_res = minimize(optim_fun_impedance, (d_DUT_0, Z_DUT_0, d_CEA_0, Z_CEA_0, add_loss), bounds=bounds_pties, args=(True,)) # find an optimum without additional losses (add_loss is forced to 1 inside the objective) bounds_pties = ((20e-3, 63e-3), (1e-3, 1), # d,Z DUT (5e-3, 200e-3), (1e-3, 1), # d,Z CEA (0.1, 2)) # add losses opt_res_noloss = minimize(optim_fun_impedance, (d_DUT_0, Z_DUT_0, d_CEA_0, Z_CEA_0, add_loss), bounds=bounds_pties, args=(False,)) # Now the results: # + print(f'With losses: \t\t d_DUT_0={opt_res.x[0]:.3}, Z_DUT_0={opt_res.x[1]:.3}', f' d_CEA_0={opt_res.x[2]:.3}, Z_CEA_0={opt_res.x[3]:.3}, add_loss={opt_res.x[4]:.3}') print(f'Without losses: \t d_DUT_0={opt_res_noloss.x[0]:.3}, Z_DUT_0={opt_res_noloss.x[1]:.3},', f'd_CEA_0={opt_res_noloss.x[2]:.3}, Z_CEA_0={opt_res_noloss.x[3]:.3}') # - # Graphically: # + P_in = 20e3 # W S11dB = [] S11dB_noloss = [] L_DUT_opt, Z_DUT_opt, L_CEA_opt, Z_CEA_opt, add_loss_opt = opt_res.x L_DUT_opt_nl, Z_DUT_opt_nl, L_CEA_opt_nl, Z_CEA_opt_nl, add_loss_opt_nl = opt_res_noloss.x for f in exp_freq: _cfg = T.Configuration(f, P_in, L_DUT_opt, L_CEA_opt, Z_short_DUT = Z_DUT_opt, Z_short_CEA = Z_CEA_opt, additional_losses=add_loss_opt) S11dB.append(_cfg.S11dB()) _cfg = T.Configuration(f, P_in, L_DUT_opt_nl, L_CEA_opt_nl, Z_short_DUT = Z_DUT_opt_nl, Z_short_CEA = Z_CEA_opt_nl, additional_losses=add_loss_opt_nl) S11dB_noloss.append(_cfg.S11dB()) fig,ax=plt.subplots() ax.plot(exp_freq/1e6, exp_mag_dB, lw=2) ax.plot(exp_freq/1e6, S11dB, lw=2) ax.plot(exp_freq/1e6, S11dB_noloss, lw=2, ls='--') ax.legend(('Measurement', 'TL model - with add. losses', 'TL model - no add. losses')) plt.grid(True) ax.set_xlabel('f [MHz]') ax.set_ylabel('S11 [dB]') ax.set_xlim(62.5, 62.8) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Getting Started # Custom GPU kernels for elementwise additions are educationally valuable but won't get you very far in practice. Let us consider instead the case of a simple (numerically stabilized) softmax operation: # + import torch # Compute the row-wise softmax of x \in R^{M \times N} def naive_softmax(x): # read MN elements ; write M elements x_max = torch.max(x, axis=1)[0] # read 2MN elements ; write MN elements z = x - x_max[:, None] # read MN elements ; write MN elements numerator = torch.exp(z) # read MN elements ; write M elements denominator = torch.sum(numerator, axis=1) # read 2MN elements ; write MN elements ret = numerator / denominator[:, None] # in total: read 7MN elements ; wrote 3MN + 2M elements return ret # - # When implemented naively in pytorch, computing $y$ requires reading $7MN$ elements from DRAM and writing back $3MN + 2M$ elements. # # Instead, we want to write a custom "fused" pytorch operator that only reads X once and does all the necessary computations on-chip. This would require reading and writing back only $MN$ elements, so we could expect a theoretical speed-up of about 5x. In practice, though, we expect less because our kernel will spend some time computing exponentials and moving data around in shared memory.
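# As a rough sanity check of the element counts quoted above (this arithmetic is our own, not part of the original tutorial): tallying reads and writes for an example shape recovers the roughly 5x bound on the speed-up a fused kernel can achieve by touching X and Y only once.
# +
M_ex, N_ex = 4096, 1024                                            # example shape, chosen arbitrarily
naive_traffic = (7 * M_ex * N_ex) + (3 * M_ex * N_ex + 2 * M_ex)   # reads + writes, in elements
fused_traffic = (M_ex * N_ex) + (M_ex * N_ex)                      # read X once, write Y once
print("naive/fused traffic ratio: {:.2f}x".format(naive_traffic / fused_traffic))
# -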
# # Writing the Compute Kernel # Our softmax kernel works as follows: each program loads a row of X and writes back a normalized row of Y. Note that one important limitation of Triton is that each block must have a power-of-two number of elements, which means that we need to guard the memory operations properly if we want to handle any possible input shapes: # # ```c # __global__ void softmax(float* Y, float* X, int stride_xm, int stride_ym, int M, int N){ # // row index # int m = get_program_id(0); # // column indices # int n [BLOCK] = 0 ... BLOCK; # // the memory address of all the elements # // that we want to load can be computed as follows # float* px [BLOCK] = X + m*stride_xm + n; # // because BLOCK has to be a power of two # // (per Triton-C specs), it is important # // to guard each memory operation with predicates # // or we will read out of bounds # bool check[BLOCK] = n < N; # float x [BLOCK] = check ? *px : -F32_INFINITY; # // syntax for reduction in Triton is: # // x[..., OPERATOR, ...] # // ^ # // index # // The operators currently supported are {min, max, +} # float z [BLOCK] = x - x[max]; # // The exponential in Triton is fast but approximate # // (i.e., like __expf in CUDA) # float num [BLOCK] = exp(z); # float denom = num[+]; # // The result of the reduction is now stored in y # float y [BLOCK] = num / denom; # // We write it back # float* py [BLOCK] = Y + m*stride_ym + n; # *?(check)py = y; # } # ``` # # Writing the Torch bindings # + import torch import triton # source-code for Triton compute kernel _src = """ __global__ void softmax(float* Y, float* X, int stride_ym, int stride_xm, int M, int N){ int m = get_program_id(0); int n [BLOCK] = 0 ... BLOCK; float* px [BLOCK] = X + m*stride_xm + n; bool check[BLOCK] = n < N; float x [BLOCK] = check ? *px : -F32_INFINITY; float z [BLOCK] = x - x[max]; float num [BLOCK] = exp(z); float denom = num[+]; float y [BLOCK] = num / denom; float* py [BLOCK] = Y + m*stride_ym + n; *?(check)py = y; } """ # We need to make sure that BLOCK is the smallest power of two # greater than the number of rows N of the input matrix. # Different values of BLOCK will result in different kernels def next_power_of_2(n): n -= 1 n |= n >> 1 n |= n >> 2 n |= n >> 4 n |= n >> 8 n |= n >> 16 n += 1 return n _kernels = dict() def make_kernel(N, device): BLOCK = next_power_of_2(N) key = (BLOCK, device) if key not in _kernels: defines = {'BLOCK': BLOCK} _kernels[key] = triton.kernel(_src, device=device, defines=defines) return _kernels[key] class _softmax(torch.autograd.Function): @staticmethod def forward(ctx, x): # constraints of the op assert x.dtype == torch.float32 y = torch.empty_like(x) # *create launch grid*: # here we just launch a grid of M programs M, N = y.shape grid = lambda opt: (M, ) # *launch kernel*: kernel = make_kernel(N, y.device) kernel(y.data_ptr(), x.data_ptr(), y.stride(0), x.stride(0), M, N, grid = grid) return y softmax = _softmax.apply # - # # Writing a Unit Test x = torch.randn(1823, 781, device='cuda') y_tri = softmax(x) y_ref = torch.softmax(x, axis=1) print(y_tri) print(y_ref) print(torch.allclose(y_tri, y_ref)) # Seems to work! 
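# As an extra, optional check (not in the original write-up), it can be worth comparing against `torch.softmax` for a few irregular shapes as well, since the kernel pads each row up to the next power of two and masks the out-of-bounds columns.
# +
for shape in [(1, 4), (17, 253), (255, 1000), (1000, 511)]:
    x = torch.randn(*shape, device='cuda')
    assert torch.allclose(softmax(x), torch.softmax(x, axis=1)), shape
print("all shapes match")
# -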
# # Writing a Benchmark # + import matplotlib.pyplot as plt M = 4096 Ns = [128*i for i in range(2, 50)] tri_ms = [] ref_ms = [] def_ms = [] for N in Ns: x = torch.randn(M, N, device='cuda', dtype=torch.float32) gbps = lambda ms: x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3) tri_ms += [gbps(triton.testing.do_bench(lambda: softmax(x)))] ref_ms += [gbps(triton.testing.do_bench(lambda: torch.softmax(x, axis=1)))] def_ms += [gbps(triton.testing.do_bench(lambda: naive_softmax(x)))] plt.xlabel('N') plt.ylabel('Bandwidth (GB/s)') plt.plot(Ns, tri_ms, label = 'Triton') plt.plot(Ns, ref_ms, label = 'Torch') plt.plot(Ns, def_ms, label = 'Naive') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Abstract # The following investigation explores a contemporary music dataset, aiming to answer the overarching question of "What factors make music popular?" This ultimate question can be broken up in to smaller inquiries such as "What generes of music are most popular?" and "What are the most popular song keys and modes?" among others. The answers to these questions can reveal what elements of music listeners are attracted to in the modern age. # # Dataset # The dataset I selected was a subset of a million song dataset on contemporary music. The subset is 10,000 observations long and has .... variable columns. The data was on a sample of contemporary music, with variables describing musical parameters of the song as well as some subjective ratings. https://labrosa.ee.columbia.edu/millionsong/pages/field-list contains the column descriptions. https://think.cs.vt.edu/corgis/csv/music/music.html is the source of the data subset. # # Some of the feature data was collected through an audio analysis program called 'Echo Nest' and this program attempted to quantify subjective elements of music such as dancability and energy. The documentation for the analyzer can be found here: http://docs.echonest.com.s3-website-us-east-1.amazonaws.com/_static/AnalyzeDocumentation.pdf. # # # Other Notebooks # The data is imported and breifly investigated in 02-Import.ipynb, then cleaned in 03-Tidy.ipynb. The analysis occurs in 04-EDA.ipynb and 05-Modeling.ipynb. The answers to the research question as well as an overall review of the project are in 06-Conclusions.ipynb. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Importing packages # %matplotlib inline import pandas as pd import numpy as np from sqlalchemy import create_engine # ## Creating the database connection # _If you run into `encoding` problems, you can pass the charset as a connection parameter._ # ``` # "firebird+fdb://sysdba:masterkey@firebird:3050//firebird/data/database.fdb?charset=ISO8859_1") # ``` engine = create_engine("firebird+fdb://sysdba:masterkey@firebird:3050//firebird/data/examples.fdb") def run_query(query): return pd.read_sql_query(query,con=engine) # ## Running a query query = """ select * from CUSTOMER; """ df = run_query(query) df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" # Open In Colab # + id="NTGJDkaxsrnw" colab={"base_uri": "https://localhost:8080/"} outputId="11a20a6e-0e12-4206-e99d-9ae35cb72029" #code is used from these 3 repositories, have a look at them on GitHub or access the files from colab # !git clone https://github.com/PeterWang512/FALdetector # !git clone https://github.com/NVIDIA/flownet2-pytorch.git # !git clone https://github.com/Kwanss/PCLNet # + id="3W8h8ElvvMC7" # -*- coding: utf-8 -*- """ Created on Wed Nov 25 10:32:09 2020 @author: """ #import necessary modules and append paths import os import csv import pickle import cv2 from PIL import Image import matplotlib.pyplot as plt import sys sys.path.append("/content/FALdetector/networks/") sys.path.append("/content/FALdetector/") from drn_seg import DRNSeg import torch from torch.autograd import Variable from torch.nn import Linear, ReLU, CrossEntropyLoss,BCELoss, Sequential, Conv2d, MaxPool2d, Module, Softmax, BatchNorm2d, Dropout from torch.optim import Adam, SGD sys.path.append("/content/flownet2-pytorch/") import losses from losses import MultiScale,EPE sys.path.append("/content/PCLNet/Losses/") sys.path.append("/content/PCLNet/models/") from utils.tools import * from utils.visualize import * import pandas as pd import numpy as np import torch.nn as nn # for reading and displaying images from skimage.io import imread # for creating validation set from sklearn.model_selection import train_test_split # for evaluating the model from sklearn.metrics import accuracy_score from tqdm import tqdm from skimage.transform import rescale, resize, downscale_local_mean from skimage import data, color from torchsummary import summary # + id="11kNl6zCuPPO" colab={"base_uri": "https://localhost:8080/"} outputId="87377767-612e-4fd6-b98a-5b509f6f38c2" from google.colab import drive drive.mount('/content/gdrive') # !ln -s /content/gdrive/My\ Drive/ /mydrive # !ls /mydrive # !unzip /mydrive/nofakes/flow_pred_data/modified.zip -d modif # !unzip /mydrive/nofakes/flow_pred_data/reference.zip -d ref # !unzip /mydrive/nofakes/flow_pred_data/local_weight.zip # + colab={"base_uri": "https://localhost:8080/"} id="RCc_Sot-srnw" outputId="44a23064-dfe2-4fd5-88dc-7bd52ca30de4" pathM =r"/content/modif" path =r"/content/ref" #if you guys can come up with a more efficient way to do this go ahead. height =400 width=400 #variable to control how many training examples to import train_size =5 filenames = [] #Redo this part....
it doesnt import training images in order, I have printed out the filenames imported for convenience def createTrain(path,n_images): arr=[] counter =0 for r,d,f in os.walk(path): for _file in f: #make sure to change to correct image format .jpg or .png etc... if '.png' in _file: #print(_file) img = cv2.imread(r""+path+"/"+str(_file)) filenames.append(path+"/"+str(_file)) #keep image dimensions at 500 for now img = cv2.resize(img, (width,height), interpolation = cv2.INTER_AREA) #img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) arr.append(img) counter+=1 #terminate once we imported the desired number of images if(counter==n_images): return np.array(arr) X_ref=createTrain(path,train_size) X_mod = createTrain(pathM,train_size) print("the filenames are",filenames) shape = X_ref.shape # + colab={"base_uri": "https://localhost:8080/"} id="xn4RUUb4srnx" outputId="3a7bfb8d-aba9-4043-c78b-1404870c0945" flow_arr =[] #calcOptical gives an error if I input all training data at once (doesnt seem very efficient atm) for i in range(shape[0]): flow = cv2.calcOpticalFlowFarneback(cv2.cvtColor(X_ref[i],cv2.COLOR_BGR2GRAY), cv2.cvtColor(X_mod[i],cv2.COLOR_BGR2GRAY),None, 0.5, 3, 4, 2, 3, 1.2, 0) flow_arr.append(flow) print(X_mod.shape) flow_arr = np.array(flow_arr) mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]) print(mag.shape) path = "tester_before_train.png" tester = save_heatmap_cv(X_mod[0], mag, path) #print(flow) print("flow arr has shape",flow_arr.shape) #the code below is pretty similar to how the binary classifier is trained with a few changes #since the output shape is (2,height,width) for the vector field # + [markdown] id="1WaL8ji9pVCE" # **Discretize the flow fields** # + colab={"base_uri": "https://localhost:8080/"} id="hKrXUh7npF0J" outputId="40bf8db8-d13c-4edd-b691-da9e5e386e92" print("maximum flow value is: ",np.max(flow_arr)) print("minimum flow value is: ",np.min(flow_arr)) categorical_flow =[] dic ={} inv_dic={} counter =0 #create a placeholder for the (u,v)->class pairs, hashmap is the easiest as its O(1) lookup time for the values #Note when obtaining flow values from classes it takes O(n) time to look up each one, not very efficient so #maybe re-do this part so that the hashmap key-value pairs are {class_pairs:(u,v)} #max and minimum allowed flow values max_f =5 min_f=-5 #fill up hashmap with values for i in range(min_f,max_f+1): for j in range(min_f,max_f+1): dic[i,j]=counter inv_dic[counter]=[i,j] counter+=1 print(len(dic)) print("this is what the hashmap looks like",dic) print("this is what the inverse looks like",inv_dic) dimens= flow_arr.shape #categorise the flow into distinct values for flows in flow_arr: temp_flow =np.zeros((dimens[1],dimens[2])) print(temp_flow.shape) for i in range(dimens[1]): for j in range(dimens[2]): value = flows[i][j] #makes sure we are not going over the max and min values if (value[1]>max_f or value[1]max_f or value[0] 0: write_tf_record( filepath="{}_{}.tfrecord".format(filepath, num_shards - 1), data={ key: value[shard_size * (num_shards - 1):shard_size * num_shards + remainder] for key, value in data.items() } ) # + # Set number of shards to split dataset into. num_shards = 10 # Loop through each progressive resolution image size. 
for i in range(4): img_sz = 4 * 2 ** i print("\nWriting {0}_{0} TF Records!".format(img_sz)) # Cars create_tf_record_shards( num_shards=num_shards, filepath="data/cifar10_car/train_{0}x{0}".format(img_sz), data={ "image_raw": tf.image.resize( images=car_train, size=(img_sz, img_sz), method="nearest" ).numpy(), "label": np.ones(shape=[car_train.shape[0]], dtype=np.int32) } ) create_tf_record_shards( num_shards=num_shards, filepath="data/cifar10_car/test_{0}x{0}".format(img_sz), data={ "image_raw": tf.image.resize( images=car_test, size=(img_sz, img_sz), method="nearest" ).numpy(), "label": np.ones(shape=[car_test.shape[0]], dtype=np.int32) } ) # Non-cars create_tf_record_shards( num_shards=num_shards, filepath="data/cifar10_non_car/train_{0}x{0}".format(img_sz), data={ "image_raw": tf.image.resize( images=anom_train, size=(img_sz, img_sz), method="nearest" ).numpy(), "label": np.ones(shape=[car_train.shape[0]], dtype=np.int32) } ) create_tf_record_shards( num_shards=num_shards, filepath="data/cifar10_non_car/test_{0}x{0}".format(img_sz), data={ "image_raw": tf.image.resize( images=anom_test, size=(img_sz, img_sz), method="nearest" ).numpy(), "label": np.ones(shape=[car_test.shape[0]], dtype=np.int32) } ) # - # ## Read def print_obj(function_name, object_name, object_value): """Prints enclosing function, object name, and object value. Args: function_name: str, name of function. object_name: str, name of object. object_value: object, value of passed object. """ # pass print("{}: {} = {}".format(function_name, object_name, object_value)) # + def decode_example(protos, params): """Decodes TFRecord file into tensors. Given protobufs, decode into image and label tensors. Args: protos: protobufs from TFRecord file. params: dict, user passed parameters. Returns: Image and label tensors. """ # Create feature schema map for protos. features = { "image_raw": tf.io.FixedLenFeature(shape=[], dtype=tf.string), "label": tf.io.FixedLenFeature(shape=[], dtype=tf.int64) } # Parse features from tf.Example. parsed_features = tf.io.parse_single_example( serialized=protos, features=features ) print_obj("\ndecode_example", "features", features) # Convert from a scalar string tensor (whose single string has # length height * width * depth) to a uint8 tensor with shape # [height * width * depth]. image = tf.io.decode_raw( input_bytes=parsed_features["image_raw"], out_type=tf.uint8 ) print_obj("decode_example", "image", image) # Reshape flattened image back into normal dimensions. image = tf.reshape( tensor=image, shape=[params["height"], params["width"], params["depth"]] ) print_obj("decode_example", "image", image) # Convert from [0, 255] -> [-1.0, 1.0] floats. image_scaled = tf.cast(x=image, dtype=tf.float32) * (2. / 255) - 1.0 print_obj("decode_example", "image", image) # Convert label from a scalar uint8 tensor to an int32 scalar. label = tf.cast(x=parsed_features["label"], dtype=tf.int32) print_obj("decode_example", "label", label) return {"image": image, "image_scaled": image_scaled}, label def read_dataset(filename, mode, batch_size, params): """Reads CSV time series data using tf.data, doing necessary preprocessing. Given filename, mode, batch size, and other parameters, read CSV dataset using Dataset API, apply necessary preprocessing, and return an input function to the Estimator API. Args: filename: str, file pattern that to read into our tf.data dataset. mode: The estimator ModeKeys. Can be TRAIN or EVAL. batch_size: int, number of examples per batch. params: dict, dictionary of user passed parameters. 
Returns: An input function. """ def _input_fn(): """Wrapper input function used by Estimator API to get data tensors. Returns: Batched dataset object of dictionary of feature tensors and label tensor. """ # Create list of files that match pattern. file_list = tf.io.gfile.glob(pattern=filename) # Create dataset from file list. dataset = tf.data.TFRecordDataset( filenames=file_list, num_parallel_reads=40 ) # Shuffle and repeat if training with fused op. if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.apply( tf.contrib.data.shuffle_and_repeat( buffer_size=50 * batch_size, count=None # indefinitely ) ) # Decode example into a features dictionary of tensors, then batch. dataset = dataset.map( map_func=lambda x: decode_example( protos=x, params=params ), num_parallel_calls=4 ) dataset = dataset.batch(batch_size=batch_size) # Prefetch data to improve latency. dataset = dataset.prefetch(buffer_size=2) return dataset return _input_fn # - def try_out_input_function(arguments, print_out=False): """Trys out input function for testing purposes. Args: arguments: dict, user passed parameters. """ fn = read_dataset( filename=arguments["filename"], mode=tf.estimator.ModeKeys.EVAL, batch_size=8, params=arguments ) dataset = fn() batches = [batch for batch in dataset] features, labels = batches[0] print("features[image].shape = {}".format(features["image"].shape)) print("labels.shape = {}".format(labels.shape)) # print("features = \n{}".format(features)) print("labels = \n{}".format(labels)) return features, labels def plot_all_resolutions(filepath, starting_size, num_doublings, depth): """Plots all resolutions of images. Args: filepath: str, filepath location. starting_size: int, smallest/initial size of image. num_doublings: int, number of times to double height/width of original image size. depth: int, number of image channels. """ for i in range(num_doublings): img_sz = starting_size * 2 ** i print("\nImage size = {0}x{0}".format(img_sz)) features, labels = try_out_input_function( arguments={ "filename": "{0}_{1}x{1}_0.tfrecord".format( filepath, img_sz ), "height": img_sz, "width": img_sz, "depth": depth }, print_out=True ) plot_images(features["image"]) plot_all_resolutions( filepath="data/cifar10_car/train", starting_size=4, num_doublings=4, depth=3 ) plot_all_resolutions( filepath="data/cifar10_car/test", starting_size=4, num_doublings=4, depth=3 ) plot_all_resolutions( filepath="data/cifar10_non_car/train", starting_size=4, num_doublings=4, depth=3 ) plot_all_resolutions( filepath="data/cifar10_non_car/test", starting_size=4, num_doublings=4, depth=3 ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import scipy.constants as co import matplotlib.pyplot as plt from bolosKhai import parser, grid, solver2 np.seterr(divide='ignore', invalid='ignore') # Create an energy grid for Boltzmann Solver # This energy grid has unit in eV gr = grid.QuadraticGrid(0, 20, 200) bsolver = solver2.BoltzmannSolver(gr) # Import data file, which contains the cross section data. 
with open('Cross Section.dat') as fp: processes = parser.parse(fp) processes = bsolver.load_collisions(processes) bsolver.target['CH4'].density = 0.5 bsolver.target['Ar'].density = 0.5 ################################################## # INPUT # We have electric field: # E = E0 * exp (i * Omega * t) bsolver.OmegaN = 0.10000E-11 # Omega / N bsolver.kT = 400 * co.k / co.eV # Gas - Temperature 400 K # GUESS by Maxwell distribution function. # Here we are starting with # with an electron temperature of 6 eV f0 = bsolver.maxwell(6.0) mean_max = bsolver.mean_energy(f0) def EEDF_AC(EN, f0): bsolver.grid = gr bsolver.EN = EN * solver2.TOWNSEND # After change any parameter we must initial the solver bsolver.init() f1 = bsolver.converge(f0, maxn=200, rtol=1e-4) mean1 = bsolver.mean_energy(f1) print('E/N = %.0f Td' % EN) print('Mean Energy 1 = %.4f eV' % (mean1)) # Get new grid newgrid = grid.QuadraticGrid(0, 10 * mean1, 200) bsolver.grid = newgrid bsolver.init() # Interpolate the previous EEDF over new grid f2 = bsolver.grid.interpolate(f1, gr) mean2 = bsolver.mean_energy(f2) # Find final EEDF f3 = bsolver.converge(f2, maxn=200, rtol=1e-5) mean3 = bsolver.mean_energy(f3) print('Mean Energy Inter-EEDF = %.4f eV' % mean2) print('Mean Energy Final-EEDF = %.4f eV \n' % mean3) grid_EEDF = bsolver.cenergy return f3, grid_EEDF # Range of Electric field / Number of electron - E0/N # E = E0 * exp (i * Omega * t) EN = np.linspace(1000,2000,11) plt.figure() for i in EN: EEDF, gr_EEDF = EEDF_AC(i, f0) plt.plot(gr_EEDF, EEDF, label='%.f Td' % i) plt.xlabel('Mean Energy (eV)') plt.ylabel('EEDF (eV$^\mathdefault{3/2}$)') plt.xlim([0,3]) plt.ylim([0,0.85]) plt.legend() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # IMDB | IST 652 FINAL PROJECT | & # # ======================================================= # # PART 1: COMPARISON to R # # ======================================================= # + #Ali and Kendra Final Project #importing pandas, csv, import csv import pandas as pd import matplotlib.pyplot as plt import numpy as np import statistics #To create testing and training dfs and labels from sklearn.model_selection import train_test_split #To get a count or tally of a category in our df from collections import Counter # To model the Gaussian Navie Bayes classifier from sklearn.naive_bayes import GaussianNB # To calculate the accuracy score of the model from sklearn.metrics import accuracy_score #confusion matrix from sklearn.metrics import confusion_matrix, classification_report #for pre-processing to fit all numeric data on the standard scale from sklearn.preprocessing import StandardScaler #for applying PCA function on training and testing sets from sklearn.decomposition import PCA #logistic regression from sklearn.linear_model import LogisticRegression #SVMs from sklearn.svm import SVC #For association rule mining from apyori import apriori #This will allow us to silence the warnings import warnings warnings.simplefilter("ignore") #For the confusion matrix import seaborn as sns # + #Functions that we are going to use in our file: #Creating a function that will discretize our columns based on quartiles def quartile_discretize(df, column, categories): df[column] = pd.qcut(df[column], 4, labels = categories) return(df[column]) #Creating a function that will merge our dfs with a left join def 
left_merge_2_conditions(df1, df2, column1, column2): df = pd.merge(df1, df2, how = "left", on=[column1, column2]) return(df) #Creating a function that groups by, counts, creates a new column from the index, drops the index and changes the column names def groupby_count(df, groupby_column, count_column): new_df = pd.DataFrame(df.groupby(groupby_column)[count_column].count()) new_df.columns = ["count"] new_df[groupby_column] = new_df.index.get_level_values(0) new_df.reset_index(drop = True, inplace = True) return(new_df) #Creating a function that groups by, counts, creates a new column from the index, drops the index and changes the column names def groupby_2_count(df, groupby_column1, groupby_column2, count_column): new_df = pd.DataFrame(df.groupby([groupby_column1, groupby_column2 ])[count_column].count()) new_df.columns = ["count"] new_df[groupby_column1] = new_df.index.get_level_values(0) new_df[groupby_column2] = new_df.index.get_level_values(1) new_df.reset_index(drop = True, inplace = True) return(new_df) #This will calculate the exponential moving average of the columns we want #exponential moving averages give more weight to the most recent data and less weight to older data def exp_moving_avg(d): d["exp_moving_avg"] = d["score"].ewm(span=40,adjust=False).mean() exp_moving_avg = list(d["exp_moving_avg"]) #Adding a 0 to the first entry to exp_moving_avg exp_moving_avg = [0] + exp_moving_avg #Removing the last entry in the list exp_moving_avg.pop() #Creating a column named exp_moving_avg with the results d["exp_moving_avg"] = exp_moving_avg return(exp_moving_avg) #This will calculate the cumulative moving average def cumulative_moving_avg(d): d["moving_avg"] = d.expanding(min_periods = 1).mean() moving_avg = list(d["moving_avg"]) #Adding a 0 to the first entry to moving avg cumulative_moving_avg = [0] + moving_avg #Removing the last entry in the list cumulative_moving_avg.pop() return(cumulative_moving_avg) #This will get the list of all of the entries in the column that we are interested in for calculating the averages def getting_list_of_entries(df, column_interested_in): avg_people = pd.DataFrame(df.groupby([column_interested_in, "released"])["score"].mean()) avg_column_scores = pd.DataFrame() column_interested = list(df[column_interested_in].unique()) return([avg_people, column_interested]) # Going to use matplotlib for plotting... # To create a plot we followed the following formula: # df.plot(x-axis, y-axis, kind = type of plot, color = [(we specified colors to use here)], legend = False (we did not # want a legend displayed), title = "Title") then we added a ylabel with plt.ylabel("Type label here") and an x label # with plt.xlabel("type label here"). 
Finally, we wanted to change the direction of the xtick names from a 90 degree angle # to no angle with plt.xticks(rotation = rotation angle desired) def bar_graph_count(df, x_column, y_column, title): g = df.plot(x_column, y_column, kind = "bar", legend = False, title = title) g = plt.ylabel(y_column) g = plt.xlabel(x_column) return(g) #This will make a df for our moving averages that we are calculating def making_df(people_df, column_interested_in, released, person, cumulative_avg, exp_avg): df_2 = pd.DataFrame({column_interested_in: person, "released": released, "cumulative_mean": cumulative_avg, "exp_mean": exp_avg}) return(df_2) #This includes the functions above, and will calculate the exponential and cumulative moving averages for which ever #column we specify and return a df will the column interested in, released, cumulative_mean, exp_mean def calculating_moving_avg(df, column_interested_in): people_df = pd.DataFrame() people = getting_list_of_entries(df, column_interested_in) cumulative_avg = [] avg_people = people[0] avg_people for person in people[1]: d = avg_people.groupby(column_interested_in).get_group(person) cumulative_avg = cumulative_moving_avg(d) exp_avg = exp_moving_avg(d) d.reset_index(inplace = True) released = d["released"] df = pd.DataFrame({column_interested_in: person, "released": released, "cumulative_mean_"+column_interested_in : cumulative_avg, "exp_mean_"+column_interested_in: exp_avg}) people_df = people_df.append(df) return(people_df) #Confusion Matrix Graph Function def confusion_matrix_graph (cm, accuracy_label, type_of_df): g = plt.figure(figsize=(2,2)) g = sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r', cbar = False); g = plt.ylabel('Actual'); g = plt.xlabel('Predicted'); g = all_sample_title = type_of_df +' Accuracy Score: {0}'.format(round(accuracy_label, 4)) g = plt.title(all_sample_title, size = 12); return(g) # - #reading in the movies.csv file from Kaggle movies = pd.read_csv("movies.csv", encoding = "ISO-8859-1") len(movies) #Looking at the head of the dataframe movies.head() #Getting the shape of the df movies.shape #We currently have 6,820 rows and 15 columns #Checking to see if we have any missing values... It shows that we do not. movies.isnull().sum() # + # If we had missing values we would do the following #We are dropping those rows with the following code #movies.dropna(inplace = True) # - #We are removing any movie that has a budget of 0, because for our machine learning, we will want to predict profit. # movies in our df that do not contain gross or budget will not be useful for this. movies = movies[movies["budget"] != 0] #We are removing any movie with a gross of 0 movies = movies[movies["gross"] != 0] # movies = movies[movies["production_companies"] != "[]"] # movies = movies[movies["genres"] != "[]"] len(movies) #Checking data types of the columns movies.dtypes #Once we are done cleaning the data we are going to change the data types of: company, director, genre, rating, released #star, writer, and potentially year. If we change them now, when we clean the df and removed rows, the old categories #remain, and still show as possible categories. 
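# Small illustration on toy data (not the movies df) of the point above: filtering a categorical column keeps the removed labels around as possible categories unless we re-cast the column or drop the unused categories afterwards.
# +
toy = pd.DataFrame({"genre": pd.Categorical(["Action", "Drama", "Western"])})
toy = toy[toy["genre"] != "Western"]
print(toy["genre"].cat.categories)                                  # 'Western' is still listed
print(toy["genre"].cat.remove_unused_categories().cat.categories)   # 'Western' is dropped
# -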
#Need to change the following to date #released,year movies["released"] = pd.to_datetime(movies["released"]) #Separating the month, day and year into their own columns in case we would like to analyze based on month, day or year movies["month"], movies["day"] = movies["released"].dt.month, movies["released"].dt.day #Checking the data types of the columns and making sure the new columns were added movies.dtypes cat = list(range(1,13)) #Changing the month data type from int to ordered category movies["month"] = pd.Categorical(movies["month"], ordered = True, categories = cat) #Making sure it shows as an ordered factor movies.month.dtype movies.dtypes #Getting a list of a the different ratings in our df movies["rating"].unique() #UNRATED, not specified AND NOT RATED mean the same thing, therefore we are going to change all not rated, not specified entries to unrated movies["rating"] = movies["rating"].replace(["NOT RATED", "Not specified"], "UNRATED") #Checking to make sure that worked: movies["rating"].unique() #Changing rating to an ordered factor #Creating the order that we would like for the ordered factor cat = ["UNRATED", "G", "PG", "PG-13", "R", "NC-17"] #Changing to ordered factor movies["rating"] = pd.Categorical(movies["rating"], ordered = True, categories = cat) #Checking to see if it worked movies.rating.dtype # + #We want to be able to look at the profit for each movie... Therefore we are creating a #profit column which is gross - budget movies["profit"] = movies["gross"] - movies["budget"] # - #Creating a percent profit column to have a normalized way to compare profits. #percent_profit = profit/budget*100 movies["percent_profit"] = movies["profit"]/movies["budget"]*100 movies.head() #Directors #Aggregating a moving average column and calculating the mean average imdb score for each actor; by calculating the #mean imdb scores for all actors but for only the movies prior to the movie we are calculting the mean for. 
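# Toy illustration (separate from the pipeline) of what the moving-average helpers compute: each movie gets the mean score of that person's earlier movies only, with a 0 for their first appearance, i.e. an expanding mean shifted down by one row.
# +
toy_scores = pd.Series([6.0, 8.0, 7.0, 9.0])
prior_mean = toy_scores.expanding(min_periods=1).mean().shift(1).fillna(0)
print(prior_mean.tolist())   # [0.0, 6.0, 7.0, 7.0]
# -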
directors_avg = calculating_moving_avg(movies, "director") #Writers: writers_avg = calculating_moving_avg(movies, "writer") #actors: stars_avg = calculating_moving_avg(movies, "star") #company: companies_avg = calculating_moving_avg(movies, "company") #We are going to use our left_merge_2_conditions function: #Inputs: df1, df2, column to merge on 1 and column to merge on 2 movies = left_merge_2_conditions(movies, directors_avg, "director", "released") movies = left_merge_2_conditions(movies, writers_avg, "writer", "released") movies = left_merge_2_conditions(movies, stars_avg, "star", "released") movies = left_merge_2_conditions(movies, companies_avg, "company", "released") movies.head() #Looking to see what happens if we remove all the movies with a 0 for exp_mean_director and exp_mean_star movies = movies[movies["exp_mean_director"] != 0] movies = movies[movies["exp_mean_star"] != 0] movies = movies[movies["exp_mean_writer"] != 0] movies = movies[movies["exp_mean_company"] != 0] len(movies) #We still have 883 movies in our df movies.head() #Creating an aggregated column for the avg writer, director, company, actor cumulative mean movies["cumulative_mean_avg"] = (movies["cumulative_mean_writer"] + movies["cumulative_mean_director"] + movies["cumulative_mean_company"] + movies["cumulative_mean_star"])/4 # Creating an aggregated column for the avg writer, director, company, # and actor exponential mean movies["exp_mean_avg"] = (movies["exp_mean_writer"] + movies["exp_mean_director"] + movies["exp_mean_company"] + movies["exp_mean_star"])/4 movies.head() # + #What is the breakdown of genre in our df? #Getting the count of movies for each genre in our df and saving it as a pandas df. #We are grouping by genre and then getting the count of the genre column in each group by #we could have used any column to get the count of... #We are using the groupby_count function that takes the following arguments (df, groupby_column, count_column) movies_genre = groupby_count(movies, "genre", "genre") # - movies_genre #Sorting the df, so the bar graph will be in descending order movies_genre.sort_values(['count'], ascending=[False], inplace = True) #Creating a graph of the movies_genre df using our bar_graph_count function. It takes the following inputs: # df, x-column, y_column, and title bar_graph_count(movies_genre, "genre", "count", "Visualization of Number of Movies Per Genre") #Creating a data frame of the movies star count movies_star = groupby_count(movies, "star", "genre") movies_star.head() # #Creating a subset of the movies_star df that contains only stars that have 2+ movies in our df # movies_star = movies_star[movies_star["count"] > 1] # #Creating a list of the stars that are in our subsetted df # movies_star_list = list(movies_star["star"]) # movies_star_list # #Subsetting our movies df to include only stars who are listed in our movies_star_list # movies = movies[movies.star.isin(movies_star_list)] movies_star.describe() #The majority of our 356 stars. Only 25% of our stars in 3 or more movies. # We have 649 stars in our newly reduced df # + # How many movies have each star in our df starred in? 
#Looking to see how many stars have starred in 1, 2, 3, 4+ movies movies_star = groupby_count(movies, "star", "genre") # - #Changing the column names for our movies_star movies_star.columns = ["number_of_movies", "star"] movies_star = groupby_count(movies_star, "number_of_movies", "star") movies_star #Changing the column names movies_star.columns = ["number_of_stars", "number_of_movies"] #Sorting the df, so the bar graph will be in descending order movies_star.sort_values(['number_of_movies'], ascending=[True], inplace = True) #Creating a graph of the movies_star df using our bar_graph_count function. It takes the following inputs: # df, x-column, y_column, and title bar_graph_count(movies_star, "number_of_movies", "number_of_stars", "Visualization of the The Number of Movies each Star has in our DF") #Creating a data frame of the movies director count movies_director = groupby_count(movies, "director", "genre") movies_director.columns = ["director_count", "director"] # #Creating a subset of the movies_director df that contains only directors that have 2 movies in our df # movies_director = movies_director[movies_director["director_count"] > 1] # #Creating a list of the stars that are in our subsetted df # movies_director_list = list(movies_director["director"]) # movies_director_list # #Subsetting our movies df to include only stars who are listed in our movies_star_list # movies = movies[movies.director.isin(movies_director_list)] # + # How many movies have each director in our df produced? #Creating a new groupby to see how many directors we have that have produced 1, 2, 3, 4... movies movies_director = groupby_count(movies_director, "director_count", "director") # - #Getting a tally of the number of directors that have the same number of movies movies_director.columns = ["number_of_movies", "director_count"] movies_director #Creating a graph of the movies_director df bar_graph_count(movies_director, "director_count", "number_of_movies", "Visualization of the The Number of Movies each Director has in our DF") # How many movies have each company in our df produced? #Creating a new groupby to see how many company we have that have produced 1, 2, 3, 4... movies movies_company = groupby_count(movies, "company", "star") movies_company.columns = ["number_of_movies", "company"] movies_company = groupby_count(movies_company, "number_of_movies", "company") movies_company.columns = ["number_of_companies", "number_of_movies"] movies_company #Creating a graph of the movies_company df bar_graph_count(movies_company, "number_of_movies", "number_of_companies", "Visualization of the The Number of Movies each Company has in our DF") # + #How are the movies in our df distributed by year? #Looking to see how the movies are distributed by year movies_year = groupby_count(movies, "year", "star") movies_year # - #Creating a graph of the movies_year df bar_graph_count(movies_year, "year", "count", "Visualization of Number of Movies Per Year") # + # What is the breakdown of the number of movies per genre per year? #Looking at the number of movies per genre per year #Looking to see how the movies are distributed by year #We are using the groupby_2 function that takes the following arguements: #(df, groupby_column1, groupby_column2, agg_column) movies_year_genre = groupby_2_count(movies, "year", "genre", "star") # - movies_year_genre.head() # + #To visualize this, we are going to use a grouped bar graph. Our bar_graph_count function does not apply to this #situation... 
We are going to use the pivot function # Attempting to graph this data using a grouped bar chart: # formula: df.pivot(columns, group, values).plot(kind = "type of graph", color = ["color to use, can be a list of colors"], # title = "you can set the title of your graph here") movies_year_genre.pivot("year", "genre", "count").plot(kind="bar", title = "Visualization of Genre Breakdown by Year") #Creating a y axis label plt.ylabel("Number of Movies") #Changing the x axis label plt.xlabel("Year") #Changing the label to display male and female instead of 0 and 1 plt.legend(loc = "lower center", bbox_to_anchor = (.5, -.8), ncol = 4, title = "Genre") #This was not very helpful, we are going to subset the data by 5 year chunks and regraph # - '''Come back here if time prevails to do what is said above''' # + #What are the most prolific months for movies to be released in our dataset? #Looking at month movies_month = groupby_count(movies, "month", "star") movies_month # - #Creating a new column called month with the names of the month of the year, since the df is ordered, we know that #the months of the year can be added this way movies_month["month"] = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sept", "Oct", "Nov", "Dec"] #Changing the data type of the month column to ordered categorical cat = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sept", "Oct", "Nov", "Dec"] movies_month["month"] = pd.Categorical(movies_month["month"], ordered = True, categories = cat) movies_month.month.dtype movies_month bar_graph_count(movies_month, "month", "count", "Visualization of Number of Movies by Month") #Descritizing the df based on all of the numeric columns: #Creating a new df name movies_descritized movies_discretized = movies.copy() movies_discretized.columns # We are going to descritize our data based on the quartiles. 
The categories are: # extremely_low, low, high, extremely_high # We are using our quartile_discretize function that takes the following arguments: #(df, column, category) categories = ["extremely_low", "low", "high", "extremely_high"] movies_discretized["budget"] = quartile_discretize(movies_discretized, "budget", categories) #Checking to make sure it worked movies_discretized.budget.dtype #Gross: We are using the same categories and discretizing based on quantiles 25%, 50%, 75% 100% movies_discretized["gross"] = quartile_discretize(movies_discretized, "gross", categories) #Checking to see that it worked movies_discretized.gross.dtype #Score: We are using the same categories and discretizing based on quantiles 25%, 50%, 75% 100% movies_discretized["score"] = quartile_discretize(movies_discretized, "score", categories) #Checking to see that it worked movies_discretized.score.dtype #Votes: We are using the same categories and discretizing based on quantiles 25%, 50%, 75% 100% movies_discretized["votes"] = quartile_discretize(movies_discretized, "votes", categories) movies_discretized.votes.dtype #writer_mean: We are using the same categories and discretizing based on quantiles 25%, 50%, 75% 100% movies_discretized["cumulative_mean_writer"] = quartile_discretize(movies_discretized,"cumulative_mean_writer", categories) movies_discretized["exp_mean_writer"] = quartile_discretize(movies_discretized,"exp_mean_writer", categories) movies_discretized["cumulative_mean_director"] = quartile_discretize(movies_discretized, "cumulative_mean_director", categories) movies_discretized["exp_mean_director"] = quartile_discretize(movies_discretized,"exp_mean_director", categories) movies_discretized["cumulative_mean_star"] = quartile_discretize(movies_discretized, "cumulative_mean_star", categories) movies_discretized["exp_mean_star"] = quartile_discretize(movies_discretized, "exp_mean_star", categories) movies_discretized["cumulative_mean_company"] = quartile_discretize(movies_discretized, "cumulative_mean_company", categories) movies_discretized["exp_mean_company"] = quartile_discretize(movies_discretized, "exp_mean_company", categories) #We are creating new label categories categories = ["extremely_short", "short", "long", "extremely_long"] movies_discretized["runtime"] = quartile_discretize(movies_discretized, "runtime", categories) #Checking to see if that worked movies_discretized.runtime.dtype movies_discretized.percent_profit.describe() #We are creating new label categories ; Discretized Percent Profit #We cannot use our function on this, because we are not discretizing by quartiles categories = ["negative", "low", "high", "extremely_high"] movies_discretized["percent_profit"] = pd.cut(movies_discretized["percent_profit"], [-100, 0, 50, 150, 999999], labels = categories) #Checking to see if it worked movies_discretized.percent_profit.dtype movies_discretized.profit.describe() #Profit #We cannot use our function on this, because we are not discretizing by quartiles movies_discretized["profit"] = pd.cut(movies_discretized["profit"], [-9999999999, 0, 1000000, 50000000, 999999999], labels = categories) movies_discretized.head() # + #We are setting new categories for the day column by creating a new column for week # week_1 is the first 7 days of the month, week_2 is days 8 - 14, week_3 is days 15 - 21, and week_4 are the # rest of the days categories = ["week_1", "week_2", "week_3", "week_4"] movies_discretized["week"] = pd.cut(movies_discretized["day"], [0, 8, 15, 22, 32], labels = categories) # - 
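# Quick check, for illustration only, of how the bin edges above behave: pd.cut uses right-closed intervals by default, so (0, 8] puts days 1 through 8 into week_1 and day 9 is the first day of week_2.
# +
example_days = pd.Series([1, 7, 8, 9, 15, 16, 22, 23, 31])
print(pd.cut(example_days, [0, 8, 15, 22, 32],
             labels=["week_1", "week_2", "week_3", "week_4"]).tolist())
# -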
movies_discretized.week.dtype #Looking at the relationship between genre and percent profit movies_discretized_genre_pp = groupby_2_count(movies_discretized, "genre", "percent_profit", "score") movies_discretized_genre_pp.head() #Now we are getting the sum of each genre category... We do not have a function for sum... we could go back and rework #our function. movies_discretized_genre_pp.groupby("genre")["count"].sum() movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre"] '''We ultimately want a column that contains the total counts for each genre group. We are probably doing this in a roundabout way, but as I am extremely new to python this is the best way I can think of doing it. We are going to create a new column that replicates the genre column called genre_count and then we will use the replace function to replace the genre names with their total count ''' #First, replicating the income level column in a column named budget_category_count movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre"] #Now replacing the income level with the total count for each income level movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Action"], 301 ) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Adventure"], 54) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Animation"], 33) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Biography"], 38) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Comedy"], 233) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Crime"], 69) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Drama"], 116) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Fantasy"], 4) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Horror"], 27) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Mystery"], 7) movies_discretized_genre_pp["genre_count"] = movies_discretized_genre_pp["genre_count"].replace(["Romance"], 1) movies_discretized_genre_pp.head() movies_discretized_genre_pp["genre_count"] = pd.to_numeric(movies_discretized_genre_pp["genre_count"]) #Okay, we are one step closer... 
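# As an aside, the hard-coded genre totals above can also be derived directly from the data with a
# groupby transform; this is a sketch of an equivalent, data-driven way to build genre_count using the
# same dataframe and column names as above.
# +
# groupby("genre")["count"].transform("sum") broadcasts each genre's total count back onto every row,
# so no totals need to be typed in by hand.
movies_discretized_genre_pp["genre_count"] = (
    movies_discretized_genre_pp.groupby("genre")["count"].transform("sum")
)
# -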
Now, we need to create a column that takes the counts/genre_counts * 100 movies_discretized_genre_pp["percent"] = movies_discretized_genre_pp["count"]/movies_discretized_genre_pp["genre_count"] *100 movies_discretized_genre_pp.head() # Attempting to graph this data using a grouped bar chart: # formula: df.pivot(columns, group, values).plot(kind = "type of graph", color = ["color to use, can be a list of colors"], # title = "you can set the title of your graph here") graph = movies_discretized_genre_pp.pivot("genre", "percent_profit", "percent").plot(kind="bar", color = ["crimson", "salmon", "palegreen", "darkgreen"], title = "Percent of Percent Profit to Genre Category") #Changing the y label of our graph to Percent plt.ylabel("Percent") #Changing the x axis label of our graph to Budget Category plt.xlabel("Genre") #How to change the tick labels (we ended up not needing this, but want to keep for future reference) #plt.Axes.set_xticklabels(graph, labels = ['extremely low', 'low', 'high', 'extremely high']) #moving the legend position to underneath the graph, also setting it to have 4 columns so the legend is in a #straight single line and adding a legend title plt.legend( loc = "lower center", bbox_to_anchor = (.5, -.6), ncol = 4, title = "Percent Makeup of Genre Category") ############################################################################################################## # We are going to implement machine learning to see if we can predict either tmdb score or percent profit ############################################################################################################## # ############################################################################################################## # We are going to create an all numeric df, that does not contain the profit, percent profit, votes, and gross. # This information would give us an unfair look when trying to predict either score or percent profit, as these # metrics would not be available prior to the release of the movie. 
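# As a concrete sketch of the column selection just described (column names assumed from this notebook),
# the post-release metrics can be dropped and only numeric features kept:
# +
# Leakage-prone columns that would not be known before release.
leakage_cols = ["gross", "votes", "profit", "percent_profit"]
movies_numeric = (
    movies.drop(columns=leakage_cols, errors="ignore")
          .select_dtypes(include="number")   # keep only numeric columns
)
# Note: month and day are stored as strings at this point, so they would need pd.to_numeric first
# (as is done later in the notebook) to survive this selection.
movies_numeric.columns
# -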
################################################################################################################ # Potential Machine Learning algorithms we can use on our all numeric df: # logistic regression # Naive Bayes # k-means # Decision Tree # knn # svm ################################################################################################################ # # ################################################################################################################ # We are also going to create an all discretized df ################################################################################################################ ################################################################################################################ # Potential Machine Learning algorithms we can use on our discretized df: # Decision Tree # Association Rule Mining # Random Forest ################################################################################################################ # + ################################################################# #Naive Bayes #**All Numerica Data *** ################################################################# # - #Creating testing and training datasets comprised of all numeric data for score test_train_movies = movies.copy() test_train_movies_score = test_train_movies.copy() test_train_movies_score.score.describe() #We believe that people will be most interested in if they will get a high imdb score and not if they will have an #average or low score... Due to this, we have decided to discretize score in 2 categories: high and not_high. #We have decided that a high score is 7+. test_train_movies_score = test_train_movies_score.copy() categories = ["not_high", "high"] # not_high = 0 - 7 # high = 7 - 10 test_train_movies_score["score"] = pd.cut(test_train_movies_score["score"], [0, 7, 10], labels = categories) #Getting a count of the number of movies that are classified as high and not high in our df. test_train_movies_score_count = test_train_movies_score.groupby("score")["score"].count() test_train_movies_score_count # We are going to create a testing and training df that contains 242 not_high entries and 242 high entries #First we are going to subset the not_high scores and the high scores not_high = test_train_movies_score[test_train_movies_score["score"] == "not_high"] test_train_movies_score = test_train_movies_score[test_train_movies_score["score"] == "high"] #Getting the length to make sure that we have 641 movies len(not_high) #Checking to make sure that the test_train_movies_score is equal to the 242 high movies len(test_train_movies_score) #Now getting a random sample of 242 entries in the not_high df and setting the seed to 123 to reproduce the results not_high = not_high.sample(n = 242, random_state = 123) #Getting the length to make sure that it worked len(not_high) #Adding the not_high movies back to the test_train_movies_score df test_train_movies_score = pd.concat([test_train_movies_score, not_high]) len(test_train_movies_score) #Changing the data type of month day and year to numeric columns = ["month", "day"] test_train_movies_score[columns] = test_train_movies_score[columns].apply(pd.to_numeric) # + # We need to remove gross, votes, profit, and percent profit (these columns give an unfair look at the potential # imdb rating) and all non numeric data #We are creating a test_train_movies_score_exp df that includes the exponential moving averages. 
We are interested to #see if the cumulative or exponential moving averages will help our algorithm the most. #Using the exponential moving average to try to predict score first columns = list(test_train_movies_score.columns) columns = ['budget', 'runtime', 'score', 'month', 'day', 'exp_mean_writer', 'exp_mean_director', 'exp_mean_star', 'exp_mean_company', 'exp_mean_avg'] test_train_movies_score_exp = test_train_movies_score[columns].copy() # + #Creating a test_train_movies_score_cumulative df #We need to remove gross, votes, profit, and percent profit (these columns give an unfair look at the potential imdb rating) and all non numeric data #Getting a list of column names, so we can copy and paste it to select the columns we want. columns = list(test_train_movies_score.columns) columns = ['budget', 'runtime', 'score', "month", 'day', 'cumulative_mean_writer', 'cumulative_mean_director', 'cumulative_mean_star', 'cumulative_mean_company', 'cumulative_mean_avg'] test_train_movies_score_cumulative = test_train_movies_score[columns].copy() # - #Creating a df that contains both the cumulative and exponential moving averages columns = ['budget', 'runtime', 'score', "month", 'day', 'cumulative_mean_writer', 'cumulative_mean_director', 'cumulative_mean_star', 'cumulative_mean_company', 'cumulative_mean_avg', 'exp_mean_writer', 'exp_mean_director', 'exp_mean_star', 'exp_mean_company', 'exp_mean_avg'] test_train_movies_score_cumulative_exp = test_train_movies_score[columns].copy() # #removing the label from the test_train_movies_score_exp df and saving it in a label df test_train_movies_score_exp_label = test_train_movies_score_exp["score"] test_train_movies_score_exp.drop("score", axis = 1, inplace = True) #repeating the process for cumulative test_train_movies_score_cumulative_label = test_train_movies_score_cumulative["score"] test_train_movies_score_cumulative.drop("score", axis = 1, inplace = True) # #repeating the process for the cumulative and exp combined dfs test_train_movies_score_cumulative_exp_label = test_train_movies_score_cumulative_exp["score"] test_train_movies_score_cumulative_exp.drop("score", axis = 1, inplace = True) #Creating 4 dfs: 1: the training df with label removed, 2: the testing df with label removed, 3: the training label, 4: testing label #For each test_train df from sklearn.model_selection import train_test_split score_exp_train, score_exp_test, score_exp_train_label, score_exp_test_label = train_test_split(test_train_movies_score_exp, test_train_movies_score_exp_label, test_size = .3, random_state = 123) score_cumulative_train, score_cumulative_test, score_cumulative_train_label, score_cumulative_test_label = train_test_split(test_train_movies_score_cumulative, test_train_movies_score_cumulative_label, test_size = .3, random_state = 123) score_cumulative_exp_train, score_cumulative_exp_test, score_cumulative_exp_train_label, score_cumulative_exp_test_label = train_test_split(test_train_movies_score_cumulative_exp, test_train_movies_score_cumulative_exp_label, test_size = .3, random_state = 123) # + #Getting a count of high scores in our test label and not_high scores in out test label #We would prefer to have an equal or almost equal number of movies classified as high and not high in our test set Counter(score_exp_test_label) # - #Using the standard scale to help preprocess and normalize the data # performing preprocessing part sc = StandardScaler() score_exp_train = sc.fit_transform(score_exp_train) score_exp_test = sc.transform(score_exp_test) 
score_cumulative_train = sc.fit_transform(score_cumulative_train)
score_cumulative_test = sc.transform(score_cumulative_test)
score_cumulative_exp_train = sc.fit_transform(score_cumulative_exp_train)
score_cumulative_exp_test = sc.transform(score_cumulative_exp_test)
# +
#Attempt 1: all variables
clf = GaussianNB()
clf.fit(score_exp_train, score_exp_train_label)
test_predicted_exp_nb = clf.predict(score_exp_test)
clf.fit(score_cumulative_train, score_cumulative_train_label)
test_predicted_cumulative_nb = clf.predict(score_cumulative_test)
clf.fit(score_cumulative_exp_train, score_cumulative_exp_train_label)
test_predicted_cumulative_exp_nb = clf.predict(score_cumulative_exp_test)
# -
#Accuracy for exp
exp_accuracy_nb = accuracy_score(score_exp_test_label, test_predicted_exp_nb, normalize = True)
cm = confusion_matrix(score_exp_test_label, test_predicted_exp_nb)
confusion_matrix_graph(cm, exp_accuracy_nb, "Exp")
print(classification_report(score_exp_test_label, test_predicted_exp_nb))
#Accuracy for cumulative
cum_accuracy_nb = accuracy_score(score_cumulative_test_label, test_predicted_cumulative_nb, normalize = True)
cm = confusion_matrix(score_cumulative_test_label, test_predicted_cumulative_nb)
confusion_matrix_graph(cm, cum_accuracy_nb, "Cumulative")
print(classification_report(score_cumulative_test_label, test_predicted_cumulative_nb))
#Accuracy for cumulative and exp
cum_exp_accuracy_nb = accuracy_score(score_cumulative_exp_test_label, test_predicted_cumulative_exp_nb, normalize = True)
cm = confusion_matrix(score_cumulative_exp_test_label, test_predicted_cumulative_exp_nb)
confusion_matrix_graph(cm, cum_exp_accuracy_nb, "Cumulative & Exp")
print(classification_report(score_cumulative_exp_test_label, test_predicted_cumulative_exp_nb))
# +
#The Naive Bayes model built on the cumulative means proved better than the other options. While it correctly
#classified fewer high scores than the other two models, it was much more accurate at classifying not_high scores
#as well as high scores: it correctly classified 57 out of 74 high scores and 52 out of 72 not_high scores.
#The other models suffered when classifying not_high scores.
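#As an aside, the scale-then-fit steps above can be expressed as a single scikit-learn Pipeline, which keeps
#the scaler fit on the training split only and avoids repeating the transform calls. This is an illustrative
#sketch (not a replacement for the results above); it assumes access to the unscaled train/test splits.
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

nb_pipeline = make_pipeline(StandardScaler(), GaussianNB())
# Hypothetical usage on the (unscaled) exponential-average split:
# nb_pipeline.fit(score_exp_train, score_exp_train_label)
# test_predicted_exp_nb = nb_pipeline.predict(score_exp_test)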
# + ########################################################################################### # PCA and Logistic Regression ########################################################################################### #Running PCA (Principal Component Analysis) to see which variables are the most beneficial for our prediction # pca = PCA(n_components = 2) # score_exp_train_pca = pca.fit_transform(score_exp_train) # score_exp_test_pca = pca.transform(score_exp_test) # + # explained_variance = pca.explained_variance_ratio_ # explained_variance # + #Fitting Logistic Regression classifier = LogisticRegression(random_state = 723) classifier.fit(score_exp_train, score_exp_train_label) #Predicting the test labels test_predicted_exp_lg = classifier.predict(score_exp_test) #getting the accuracy exp_accuracy_lg = accuracy_score(score_exp_test_label, test_predicted_exp_lg, normalize = True) cm = confusion_matrix(score_exp_test_label, test_predicted_exp_lg) confusion_matrix_graph(cm, exp_accuracy_lg, "LG Exp") # - print(classification_report(score_exp_test_label, test_predicted_exp_lg)) # + #For the cumulative avg classifier.fit(score_cumulative_train, score_cumulative_train_label) #Predicting the test labels test_predicted_cumulative_lg = classifier.predict(score_cumulative_test) #getting the accuracy cum_accuracy_lg =accuracy_score(score_cumulative_test_label, test_predicted_cumulative_lg, normalize = True) #Looking at the confusion matrix cm = confusion_matrix(score_cumulative_test_label, test_predicted_cumulative_lg) confusion_matrix_graph(cm, cum_accuracy_lg, "LG Cumulative") # - print(classification_report(score_cumulative_test_label, test_predicted_cumulative_lg)) # + #For the cumulative avg classifier.fit(score_cumulative_exp_train, score_cumulative_exp_train_label) #Predicting the test labels test_predicted_cumulative_exp_lg = classifier.predict(score_cumulative_exp_test) #getting the accuracy cum_exp_accuracy_lg = accuracy_score(score_cumulative_exp_test_label, test_predicted_cumulative_exp_lg, normalize = True) cm = confusion_matrix(score_cumulative_exp_test_label, test_predicted_cumulative_exp_lg) confusion_matrix_graph(cm, cum_exp_accuracy_lg, "LG Cumulative & Exp") # - print(classification_report(score_cumulative_exp_test_label,test_predicted_cumulative_exp_lg)) # + #Out of the 3 different testing and training dfs the df with the exponential moving averages proved best for our # logistic regression models. We had an overall accuracy of 72.6% and accurately classified high scores 74.4% and # not_high scores 64.1% of the time... 
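#The PCA block at the top of this section is commented out; if we revisit it, the explained-variance ratio
#is the usual way to decide how many components to keep before refitting the logistic regression. This is an
#illustrative sketch only (variable names hypothetical), assuming the scaled score_exp_train array from above.
from sklearn.decomposition import PCA
import numpy as np

pca_check = PCA().fit(score_exp_train)                      # fit on the scaled training features
cumulative_variance = np.cumsum(pca_check.explained_variance_ratio_)
print(cumulative_variance)                                  # e.g. keep the smallest k reaching ~0.95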
# - ########################################################################################### # SVMS ########################################################################################### #Cumulative SVM - Fitting the classifier svclassifier = SVC(kernel='sigmoid') svclassifier.fit(score_cumulative_train, score_cumulative_train_label) #Making the predictions test_predicted_cum_svm = svclassifier.predict(score_cumulative_test) #Creating my confusion matrix cum_accuracy_svm = accuracy_score(score_cumulative_test_label, test_predicted_cum_svm, normalize = True) cm = confusion_matrix(score_cumulative_test_label, test_predicted_cum_svm) confusion_matrix_graph(cm, cum_accuracy_svm, "SVM Cumulative") print(classification_report(score_cumulative_test_label, test_predicted_cum_svm)) #Exp SVM - Fitting the classifier svclassifier = SVC(kernel='linear') svclassifier.fit(score_exp_train, score_exp_train_label) #Making the predictions test_predicted_exp_svm = svclassifier.predict(score_exp_test) #Creating my confusion matrix exp_accuracy_svm = accuracy_score(score_exp_test_label, test_predicted_exp_svm, normalize = True) cm = confusion_matrix(score_exp_test_label, test_predicted_exp_svm) confusion_matrix_graph(cm, exp_accuracy_svm, "SVM Exp") print(classification_report(score_exp_test_label, test_predicted_exp_svm)) #Exp & Cum SVM - Fitting the classifier svclassifier = SVC(kernel='sigmoid') svclassifier.fit(score_cumulative_exp_train, score_cumulative_exp_train_label) #Making the predictions test_predicted_cum_exp_svm = svclassifier.predict(score_cumulative_exp_test) #Creating my confusion matrix cum_exp_accuracy_svm = accuracy_score(score_cumulative_exp_test_label, test_predicted_cum_exp_svm, normalize = True) cm = confusion_matrix(score_cumulative_exp_test_label, test_predicted_cum_exp_svm) confusion_matrix_graph(cm, cum_exp_accuracy_svm, "SVM Exp & Cumulative") print(classification_report(score_cumulative_exp_test_label, test_predicted_cum_exp_svm)) # + ################################################################################################ # # Now looking to see if we can predict percent profit # ################################################################################################ #We will be using the same columns, but instead of having score, we will have percent_profit. # - #We are interested in if a movie will make a profit or not. Therefore, we are only discretizing into 2 categories: #negative and postive. 
test_train_movies_pp = test_train_movies_score.copy() categories = ["negative", "positive"] # Negative anything less than 0 #positive anything greater than 0 test_train_movies_pp["percent_profit"] = pd.cut(test_train_movies_pp["percent_profit"], [-101, 0, 999999], labels = categories) #Getting the count of each category in our df test_train_movies_pp_count = test_train_movies_pp.groupby("percent_profit")["percent_profit"].count() test_train_movies_pp_count # We are going to create a testing and training df that contains 198 negative, 198 positive percent_profits #First we are going to subset the positive percent profits and the negative per+cent_profits positive = test_train_movies_pp[test_train_movies_pp["percent_profit"] == "positive"] test_train_movies_pp = test_train_movies_pp[test_train_movies_pp["percent_profit"] == "negative"] #Getting the length to make sure that we have 198 negative, 286 postive in our df print(len(positive)) print(len(test_train_movies_pp)) #Now getting a random sample of 198 entries in the positive df and setting the seed to 123 #to reproduce the results positive = positive.sample(n = 198, random_state = 123) #Getting the length to make sure that it worked print(len(positive)) #Adding the positive movies back to the test_train_movies_pp df test_train_movies_pp = pd.concat([test_train_movies_pp, positive]) #Getting the length to make sure that the 2 df were combined correctly and if it did we would have 396 movies in our df len(test_train_movies_pp) #Changing the data type of month day and year to numeric columns = ["month", "day"] test_train_movies_pp[columns] = test_train_movies_pp[columns].apply(pd.to_numeric) '''We need to remove gross, votes, profit, and score (these columns give an unfair look at the potential imdb rating) and all non numeric data ''' #Using the exponential moving average to try to predict score first columns = list(test_train_movies_pp.columns) columns = ['budget', 'runtime', 'percent_profit', 'month', 'day', 'exp_mean_writer', 'exp_mean_director', 'exp_mean_star', 'exp_mean_company', 'exp_mean_avg'] test_train_movies_pp_exp = test_train_movies_pp[columns].copy() #We need to remove gross, votes, profit, and percent profit (these columns give an unfair look at the potential imdb rating) and all non numeric data #Getting a list of column names, so we can copy and paste it to select the columns we want. 
columns = list(test_train_movies_pp.columns) columns = ['budget', 'runtime', 'percent_profit', "month", 'day', 'cumulative_mean_writer', 'cumulative_mean_director', 'cumulative_mean_star', 'cumulative_mean_company', 'cumulative_mean_avg'] test_train_movies_pp_cumulative = test_train_movies_pp[columns].copy() columns = ['percent_profit', "month", 'day', 'cumulative_mean_writer', 'cumulative_mean_director', 'cumulative_mean_star', 'cumulative_mean_company', 'cumulative_mean_avg', 'exp_mean_writer', 'exp_mean_director', 'exp_mean_star', 'exp_mean_company', 'exp_mean_avg'] test_train_movies_pp_cumulative_exp = test_train_movies_pp[columns].copy() # #removing the label from the test_train_movies_spp df and saving it in a label df test_train_movies_pp_exp_label = test_train_movies_pp_exp["percent_profit"] test_train_movies_pp_exp.drop("percent_profit", axis = 1, inplace = True) #repeating the process for cumulative test_train_movies_pp_cumulative_label = test_train_movies_pp_cumulative["percent_profit"] test_train_movies_pp_cumulative.drop("percent_profit", axis = 1, inplace = True) # #repeating the process for the cumulative and exp combined dfs test_train_movies_pp_cumulative_exp_label = test_train_movies_pp_cumulative_exp["percent_profit"] test_train_movies_pp_cumulative_exp.drop("percent_profit", axis = 1, inplace = True) #Creating 4 df: 1: the training df with label removed, 2: the testing df with label removed, 3: the training label, 4: testing label from sklearn.model_selection import train_test_split pp_exp_train, pp_exp_test, pp_exp_train_label, pp_exp_test_label = train_test_split(test_train_movies_pp_exp, test_train_movies_pp_exp_label, test_size = .3, random_state = 123) pp_cumulative_train, pp_cumulative_test, pp_cumulative_train_label, pp_cumulative_test_label = train_test_split(test_train_movies_pp_cumulative, test_train_movies_pp_cumulative_label, test_size = .3, random_state = 123) pp_cumulative_exp_train, pp_cumulative_exp_test, pp_cumulative_exp_train_label, pp_cumulative_exp_test_label = train_test_split(test_train_movies_pp_cumulative_exp, test_train_movies_pp_cumulative_exp_label, test_size = .3, random_state = 123) #Getting a count of percent_profit in our test label scores in out test label #We want to have roughly the same number of positive and negative movies in our test df Counter(pp_exp_test_label) #Using the standard scale to help preprocess and normalize the data # performing preprocessing part sc = StandardScaler() pp_exp_train = sc.fit_transform(pp_exp_train) pp_exp_test = sc.transform(pp_exp_test) pp_cumulative_train = sc.fit_transform(pp_cumulative_train) pp_cumulative_test = sc.transform(pp_cumulative_test) pp_cumulative_exp_train = sc.fit_transform(pp_cumulative_exp_train) pp_cumulative_exp_test = sc.transform(pp_cumulative_exp_test) # + #Attempt 1: all variables clf = GaussianNB() clf.fit(pp_exp_train, pp_exp_train_label) test_predicted_exp_svm = clf.predict(pp_exp_test) clf.fit(pp_cumulative_train, pp_cumulative_train_label) test_predicted_cumulative_svm = clf.predict(pp_cumulative_test) clf.fit(pp_cumulative_exp_train, pp_cumulative_exp_train_label) test_predicted_cumulative_exp_svm = clf.predict(pp_cumulative_exp_test) # - #Accuracy for exp exp_accuracy_svm = accuracy_score(pp_exp_test_label, test_predicted_exp_svm, normalize = True) cm = confusion_matrix(pp_exp_test_label, test_predicted_exp_svm) confusion_matrix_graph(cm, exp_accuracy_svm, "SVM Exp") print(classification_report(pp_exp_test_label, test_predicted_exp_svm)) #Accuracy for cumulative 
cum_accuracy_svm = accuracy_score(pp_cumulative_test_label, test_predicted_cumulative_svm, normalize = True)
cm = confusion_matrix(pp_cumulative_test_label, test_predicted_cumulative_svm)
confusion_matrix_graph(cm, cum_accuracy_svm, "SVM Cumulative")
#Accuracy for cumulative and exp
exp_cum_accuracy_svm = accuracy_score(pp_cumulative_exp_test_label, test_predicted_cumulative_exp_svm, normalize = True)
cm = confusion_matrix(pp_cumulative_exp_test_label, test_predicted_cumulative_exp_svm)
confusion_matrix_graph(cm, exp_cum_accuracy_svm, "SVM Exp & Cumulative")
# +
###########################################################################################
# PCA and Logistic Regression
###########################################################################################
# +
#Fitting Logistic Regression
classifier = LogisticRegression(random_state = 723)
classifier.fit(pp_exp_train, pp_exp_train_label)
#Predicting the test labels
test_predicted_exp_lg = classifier.predict(pp_exp_test)
#getting the accuracy
exp_accuracy_lg = accuracy_score(pp_exp_test_label, test_predicted_exp_lg, normalize = True)
cm = confusion_matrix(pp_exp_test_label, test_predicted_exp_lg)
confusion_matrix_graph(cm, exp_accuracy_lg, "LG Exp")
# -
print(classification_report(pp_exp_test_label, test_predicted_exp_lg))
# +
#Fitting Logistic Regression
classifier = LogisticRegression(random_state = 723)
classifier.fit(pp_cumulative_train, pp_cumulative_train_label)
#Predicting the test labels on the cumulative test set
test_predicted_cum_lg = classifier.predict(pp_cumulative_test)
#getting the accuracy
cum_accuracy_lg = accuracy_score(pp_cumulative_test_label, test_predicted_cum_lg, normalize = True)
cm = confusion_matrix(pp_cumulative_test_label, test_predicted_cum_lg)
confusion_matrix_graph(cm, cum_accuracy_lg, "LG Cumulative")
# -
print(classification_report(pp_cumulative_test_label, test_predicted_cum_lg))
# +
#Fitting Logistic Regression
classifier = LogisticRegression(random_state = 723)
classifier.fit(pp_cumulative_exp_train, pp_cumulative_exp_train_label)
#Predicting the test labels
test_predicted_cum_exp_lg = classifier.predict(pp_cumulative_exp_test)
#getting the accuracy
cum_exp_accuracy_lg = accuracy_score(pp_cumulative_exp_test_label, test_predicted_cum_exp_lg, normalize = True)
cm = confusion_matrix(pp_cumulative_exp_test_label, test_predicted_cum_exp_lg)
confusion_matrix_graph(cm, cum_exp_accuracy_lg, "LG Exp & Cumulative")
# -
print(classification_report(pp_cumulative_exp_test_label, test_predicted_cum_exp_lg))
###########################################################################################
# SVMS
###########################################################################################
#Cumulative SVM - Fitting the classifier
svclassifier = SVC(kernel='rbf')
svclassifier.fit(pp_cumulative_train, pp_cumulative_train_label)
#Making the predictions
test_predicted_cum_svm = svclassifier.predict(pp_cumulative_test)
#Creating my confusion matrix
cum_accuracy_svm = accuracy_score(pp_cumulative_test_label, test_predicted_cum_svm, normalize = True)
cm = confusion_matrix(pp_cumulative_test_label, test_predicted_cum_svm)
confusion_matrix_graph(cm, cum_accuracy_svm, "SVM Cumulative")
print(classification_report(pp_cumulative_test_label, test_predicted_cum_svm))
#Exp SVM - Fitting the classifier
svclassifier = SVC(kernel='linear')
svclassifier.fit(pp_exp_train, pp_exp_train_label)
#Making the predictions
test_predicted_exp_svm = svclassifier.predict(pp_exp_test)
#Creating my confusion matrix
exp_accuracy_svm =
accuracy_score(pp_exp_test_label, test_predicted_exp_svm, normalize = True) cm = confusion_matrix(pp_exp_test_label, test_predicted_exp_svm) confusion_matrix_graph(cm, exp_accuracy_svm, "SVM Exp") print(classification_report(pp_exp_test_label, test_predicted_exp_svm)) #Exp & Cum SVM - Fitting the classifier svclassifier = SVC(kernel='rbf') svclassifier.fit(pp_cumulative_exp_train, pp_cumulative_exp_train_label) #Making the predictions test_predicted_cum_exp_svm = svclassifier.predict(pp_cumulative_exp_test) #Creating my confusion matrix cum_exp_accuracy_svm = accuracy_score(pp_cumulative_exp_test_label, test_predicted_cum_exp_svm, normalize = True) cm = confusion_matrix(pp_cumulative_exp_test_label, test_predicted_cum_exp_svm) confusion_matrix_graph(cm, cum_exp_accuracy_svm, "SVM Exp & Cumulative") print(classification_report(pp_cumulative_exp_test_label, test_predicted_cum_exp_svm)) # + # We had much more success when predicting score versus percent profit... However, the SVM with Linear Kernel on #the exponential df did have a 57% accuracy. #We believe that in order to have a better accuracy when predicting percent profit, we need to have more data. Our #next step is to find more data and then use the information gained from this analysis on our new data. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # nuclio: ignore import nuclio # %nuclio config kind = "job" # %nuclio config spec.image = "python:3.6-jessie" # %%nuclio cmd -c pip install requests import warnings warnings.simplefilter(action='ignore', category=FutureWarning) import os import json import requests from mlrun.execution import MLClientCtx from typing import List def slack_notify( context: MLClientCtx, webhook_url: str = "URL", slack_blocks: List[str] = [], notification_text: str = "Notification" ) -> None: """Summarize a table :param context: the function context :param webhook_url: Slack incoming webhook URL. Please read: https://api.slack.com/messaging/webhooks :param notification_text: Notification text :param slack_blocks: Message blocks list. NOT IMPLEMENTED YET """ data = { 'text': notification_text } print("====",webhook_url) response = requests.post(webhook_url, data=json.dumps( data), headers={'Content-Type': 'application/json'}) print('Response: ' + str(response.text)) print('Response code: ' + str(response.status_code)) # + # nuclio: end-code # - # ### mlconfig # + from mlrun import mlconf import os mlconf.dbpath = 'http://mlrun-api:8080' mlconf.artifact_path = mlconf.artifact_path or f'{os.environ["HOME"]}/artifacts' # - # ### save # + from mlrun import code_to_function # create job function object from notebook code fn = code_to_function("slack_notify") # add metadata (for templates and reuse) fn.spec.default_handler = "slack_notify" fn.spec.description = "Send Slack notification" fn.metadata.categories = ["ops"] fn.metadata.labels = {"author": "mdl"} fn.export("function.yaml") # - # ## tests from mlrun import import_function func = import_function("hub://slack_notify") # + from mlrun import NewTask, run_local #Slack incoming webhook URL. 
Please read: https://api.slack.com/messaging/webhooks task_params = { "webhook_url" : "", "notification_text" : "Test Notification" } # - task = NewTask( name="tasks slack notify", params = task_params, handler=slack_notify) # ### run local where artifact path is fixed run = run_local(task, artifact_path=mlconf.artifact_path) # ### run remote where artifact path includes the run id func.deploy() func.run(task, params=task_params, workdir=mlconf.artifact_path) func.doc() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Report 13

#

# # Introduction
# The glint detection is done, as well as a first attempt at data filtering and live plot updates for the user interface.
# # Setup
# Set up the path to include our files, import them, and use `autoreload` to pick up changes as they are made.
# +
import os
import sys
import cv2
from matplotlib import pyplot as plt
import statistics
import numpy as np
# load our code
sys.path.insert(0, os.path.abspath('../'))
from plotting import auto_draw
# specific to jupyter notebook
from jupyter_help import cvplt, cvplt_sub
#Import image processing function from optimization
# load any changes as we make them
# %load_ext autoreload
# %autoreload 2
# -
# # Results showcase
# Original pupil plot with filtered pupil plot.
# Original glint plot with filtered glint plot.
#
# The filtered plots lag behind the original plots because, for the z-score to work, I set the boundary to 2000 before the data is stored all together into the file.
#Read in the original image
image1 = cv2.imread("../plotting/origin_x_pupil.png")
image2 = cv2.imread("../plotting/filtered_x_pupil.png")
#Showing x_pupil
cvplt_sub([image1, image2],1 ,2)
#Showing y_pupil
image3 = cv2.imread("../plotting/origin_y_pupil.png")
image4 = cv2.imread("../plotting/filtered_y_pupil.png")
cvplt_sub([image3, image4],1 ,2)
#Showing r pupil
image5 = cv2.imread("../plotting/origin_r_pupil.png")
image6 = cv2.imread("../plotting/filtered_r_pupil.png")
cvplt_sub([image5, image6],1 ,2)
#Showing blink
image7 = cv2.imread("../plotting/blink_pupil.png")
cvplt(image7)
#Showing x_glint
image8 = cv2.imread("../plotting/origin_x_glint.png")
image9 = cv2.imread("../plotting/filtered_x_glint.png")
cvplt_sub([image8, image9],1 ,2)
#Showing y_glint
image10 = cv2.imread("../plotting/origin_y_glint.png")
image11 = cv2.imread("../plotting/filtered_y_glint.png")
cvplt_sub([image10, image11],1 ,2)
#Showing r_glint
image12 = cv2.imread("../plotting/origin_r_glint.png")
image13 = cv2.imread("../plotting/filtered_r_glint.png")
cvplt_sub([image12, image13],1 ,2)
# # Analysis
# Ideally, there will be three filters to produce the optimal plots for the user to see. However, with the current data set, the computer would be able to distinguish the staring directions.
# # Conclusion
# The hardest part of this research is almost done. However, a new issue has come up: when the subject falls asleep and their head drops, the glint can in some cases be buried in the noise and all of the parameters change. Solution: impose another filter.
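# For reference, the z-score filtering idea described above can be sketched as follows. This is an
# illustrative example only (not the project's actual filter in the plotting/optimization modules); the
# window size of 2000 mirrors the boundary mentioned above, and the function and variable names are hypothetical.
# +
import numpy as np
import pandas as pd

def zscore_filter(values, window=2000, threshold=3.0):
    """Mask points whose rolling z-score exceeds the threshold (sketch)."""
    s = pd.Series(values, dtype="float64")
    rolling_mean = s.rolling(window, min_periods=1).mean()
    rolling_std = s.rolling(window, min_periods=1).std().replace(0, np.nan)
    z = (s - rolling_mean) / rolling_std
    return s.where(z.abs() <= threshold)   # outliers become NaN and can be interpolated later

# Hypothetical usage on a recorded pupil x-coordinate series:
# filtered_x = zscore_filter(x_pupil_values)
# -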
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:nba-win-prob] # language: python # name: conda-env-nba-win-prob-py # --- # + # %matplotlib notebook import datetime import pandas as pd from IPython.core.display import display, HTML import matplotlib.pyplot as plt import numpy as np display(HTML("")) pd.set_option('display.max_columns', 50) # - # # Load datasets start_year = 1984 end_year = 2018 df_season_summaries = pd.read_csv("../Data/nba_season_summaries_{}_{}.csv".format(start_year, end_year), index_col=[0, 1]) # For MultiIndex slicing support df_season_summaries.sort_index(inplace=True) df_boxscores = pd.read_csv("../Data/nba_boxscores_{}_{}.csv".format(start_year, end_year), index_col=0, parse_dates=[1], infer_datetime_format=True) df_season_summaries.head() df_boxscores.head() # # Functions for querying/masking # + boxscore_season_range_mask = lambda df, start_year, end_year: (df["season"] >= start_year) & (df["season"] <= end_year) boxscore_date_range_mask = lambda df, start_date, end_date: (df["date"] >= start_date) & (df["date"] <= end_date) boxscore_team_mask = lambda df, team_initials: (df["team1"] == team_initials) | (df["team2"] == team_initials) boxscore_regular_season_mask = lambda df: pd.isnull(df["playoff"]) summary_season_range = lambda df, start_year, end_year: df.loc[start_year:end_year] summary_season_query = lambda df, years, teams, col_names: df.loc[(years, teams), col_names] summary_season_remove_league_average = lambda: df # - # # Cleaning data for plotting # Only use completed seasons df_season_summaries = df_season_summaries.loc[1984:2017] X = df_season_summaries.columns.values.tolist() remove_cols = ["Rk", "W", "L", "PW", "PL", "MOV", "SOS", "Arena", "Attend.", "Attend./G"] X = [x for x in X if x not in remove_cols] y = "W" df_season_summaries = df_season_summaries.dropna(subset=X) # # Visualization: Regular season wins vs. season metrics # Pearson r correlations of wins vs. various season metrics correlations = [(x, df_season_summaries[[y, x]].corr().loc[x][y]) for x in X] plt.figure(1, figsize=(16,16), dpi=85) plt.suptitle("Wins vs season metrics (regular seasons 1984-2017)", fontsize=20) sqrt_X = np.sqrt(len(X)) rows = int(sqrt_X) cols = int(len(X) / int(sqrt_X)) + 1 shared_y = plt.subplot(111) shared_y.set_ylabel("W") for indx, (x, pearsonR) in enumerate(sorted(correlations, key=lambda t: t[1], reverse=True)): ax = plt.subplot(rows, cols, indx+1, sharey=shared_y) ax.set_xlabel(x) ax.text(0.025, 0.9, "r={}".format(round(pearsonR, 4)), transform=ax.transAxes) ax.scatter(df_season_summaries[x], df_season_summaries[y], c='b', alpha=0.05) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Added custom loss function base on @kyakvolev 's work. Credit to the author. 
# The forum post is here: https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/139515 # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" import pandas as pd import numpy as np import matplotlib.pyplot as plt from tqdm.auto import tqdm as tqdm from ipywidgets import widgets, interactive, interact import ipywidgets as widgets from IPython.display import display import os for dirname, _, filenames in os.walk('data/raw'): for filename in filenames: print(os.path.join(dirname, filename)) # - # ## Reading data # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" train_sales = pd.read_csv('data/raw/sales_train_validation.csv') calendar_df = pd.read_csv('data/raw/calendar.csv') submission_file = pd.read_csv('data/raw/sales_train_validation.csv') sell_prices = pd.read_csv('data/raw/sample_submission.csv') # - # ## Variables to help with aggregation total = ['Total'] train_sales['Total'] = 'Total' train_sales['state_cat'] = train_sales.state_id + "_" + train_sales.cat_id train_sales['state_dept'] = train_sales.state_id + "_" + train_sales.dept_id train_sales['store_cat'] = train_sales.store_id + "_" + train_sales.cat_id train_sales['store_dept'] = train_sales.store_id + "_" + train_sales.dept_id train_sales['state_item'] = train_sales.state_id + "_" + train_sales.item_id train_sales['item_store'] = train_sales.item_id + "_" + train_sales.store_id # + val_eval = ['validation', 'evaluation'] # creating lists for different aggregation levels total = ['Total'] states = ['CA', 'TX', 'WI'] num_stores = [('CA',4), ('TX',3), ('WI',3)] stores = [x[0] + "_" + str(y + 1) for x in num_stores for y in range(x[1])] cats = ['FOODS', 'HOBBIES', 'HOUSEHOLD'] num_depts = [('FOODS',3), ('HOBBIES',2), ('HOUSEHOLD',2)] depts = [x[0] + "_" + str(y + 1) for x in num_depts for y in range(x[1])] state_cats = [state + "_" + cat for state in states for cat in cats] state_depts = [state + "_" + dept for state in states for dept in depts] store_cats = [store + "_" + cat for store in stores for cat in cats] store_depts = [store + "_" + dept for store in stores for dept in depts] prods = list(train_sales.item_id.unique()) prod_state = [prod + "_" + state for prod in prods for state in states] prod_store = [prod + "_" + store for prod in prods for store in stores] # - print("Departments: ", depts) print("Categories by state: ", state_cats) quants = ['0.005', '0.025', '0.165', '0.250', '0.500', '0.750', '0.835', '0.975', '0.995'] days = range(1, 1913 + 1) time_series_columns = [f'd_{i}' for i in days] # ## Getting aggregated sales def CreateSales(name_list, group): ''' This function returns a dataframe (sales) on the aggregation level given by name list and group ''' rows_ve = [(name + "_X_" + str(q) + "_" + ve, str(q)) for name in name_list for q in quants for ve in val_eval] sales = train_sales.groupby(group)[time_series_columns].sum() #would not be necessary for lowest level return sales total = ['Total'] train_sales['Total'] = 'Total' train_sales['state_cat'] = train_sales.state_id + "_" + train_sales.cat_id train_sales['state_dept'] = train_sales.state_id + "_" + train_sales.dept_id train_sales['store_cat'] = train_sales.store_id + "_" + train_sales.cat_id train_sales['store_dept'] = train_sales.store_id + "_" + train_sales.dept_id train_sales['state_item'] = train_sales.state_id + "_" + train_sales.item_id train_sales['item_store'] = train_sales.item_id + "_" + train_sales.store_id #example usage of CreateSales 
sales_by_state_cats = CreateSales(state_cats, 'state_cat') sales_by_state_cats # ## Getting quantiles adjusted by day-of-week def CreateQuantileDict(name_list = stores, group = 'store_id' ,X = False): ''' This function writes creates sales data on given aggregation level, and then writes predictions to the global dictionary my_dict ''' sales = CreateSales(name_list, group) sales = sales.iloc[:, 2:] #starting from d_3 because it is a monday, needed to make daily_factors work sales_quants = pd.DataFrame(index = sales.index) for q in quants: sales_quants[q] = np.quantile(sales, float(q), axis = 1) full_mean = pd.DataFrame(np.mean(sales, axis = 1)) daily_means = pd.DataFrame(index = sales.index) for i in range(7): daily_means[str(i)] = np.mean(sales.iloc[:, i::7], axis = 1) daily_factors = daily_means / np.array(full_mean) daily_factors = pd.concat([daily_factors, daily_factors, daily_factors, daily_factors], axis = 1) daily_factors_np = np.array(daily_factors) factor_df = pd.DataFrame(daily_factors_np, columns = submission_file.columns[1:]) factor_df.index = daily_factors.index for i,x in enumerate(tqdm(sales_quants.index)): for q in quants: v = sales_quants.loc[x, q] * np.array(factor_df.loc[x, :]) if X: my_dict[x + "_X_" + q + "_validation"] = v my_dict[x + "_X_" + q + "_evaluation"] = v else: my_dict[x + "_" + q + "_validation"] = v my_dict[x + "_" + q + "_evaluation"] = v my_dict = {} #adding prediction to my_dict on all 12 aggregation levels CreateQuantileDict(total, 'Total', X=True) #1 CreateQuantileDict(states, 'state_id', X=True) #2 CreateQuantileDict(stores, 'store_id', X=True) #3 CreateQuantileDict(cats, 'cat_id', X=True) #4 CreateQuantileDict(depts, 'dept_id', X=True) #5 CreateQuantileDict(state_cats, 'state_cat') #6 CreateQuantileDict(state_depts, 'state_dept') #7 CreateQuantileDict(store_cats, 'store_cat') #8 CreateQuantileDict(store_depts, 'store_dept') #9 CreateQuantileDict(prods, 'item_id', X=True) #10 CreateQuantileDict(prod_state, 'state_item') #11 CreateQuantileDict(prod_store, 'item_store') #12 total # ## Creating valid prediction df from my_dict pred_df = pd.DataFrame(my_dict) pred_df = pred_df.transpose() pred_df_reset = pred_df.reset_index() final_pred = pd.merge(pd.DataFrame(submission_file.id), pred_df_reset, left_on = 'id', right_on = 'index') del final_pred['index'] final_pred = final_pred.rename(columns={0: 'F1', 1: 'F2', 2: 'F3', 3: 'F4', 4: 'F5', 5: 'F6', 6: 'F7', 7: 'F8', 8: 'F9', 9: 'F10', 10: 'F11', 11: 'F12', 12: 'F13', 13: 'F14', 14: 'F15', 15: 'F16', 16: 'F17', 17: 'F18', 18: 'F19', 19: 'F20', 20: 'F21', 21: 'F22', 22: 'F23', 23: 'F24', 24: 'F25', 25: 'F26', 26: 'F27', 27: 'F28'}) final_pred = final_pred.fillna(0) for i in range(1,29): final_pred['F'+str(i)] *= 1.170 final_pred.to_csv('return_of_the_blend.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ## Basic I/O : Implement basic I/O function that can read the data from the dataset and write the results to a file. 
# # Loading required libraries import pandas as pd import numpy as np from itertools import chain, combinations import time # ### As part of Basic input, I have created two functions: # Function 1(read_as_dataframe) - Reads the data from csv file and returns a dataframe # Function 2(read_as_list) - Reads the data from csv or txt file and returns a list # read the data from the dataset using read_csv from pandas, read stabilized with engine def read_as_dataframe(filename): cols = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]# at most 11 columns in the dataset, can be adopted to all datasets df = pd.read_csv(filename, names = cols, engine = 'python') return df # Reading Data - flexible to use txt or csv file which reads the line and strips the unnecessary data, split and form a list. # The FM creates a list of Transaction Database def read_as_list(file): list_out = list() with open(file) as f: row_lines = f.readlines() for i in row_lines: single_line = i.strip().strip(",") list_out.append(single_line.split(',')) return list_out # #### Reading/Input demonstrated in using the function calls below: # read the file into df by passing filename(parameter) df = read_as_dataframe("GroceryStore.csv") df.head() # read the file to create a list input_list = read_as_list('GroceryStore.csv') input_list # Using the textbook example to demonstrate the functionalities df_example = read_as_dataframe("ex.csv") df_example.head() input_list_example = read_as_list('ex.csv') input_list_example # ### Basic Output: Writing a data frame to a file # # As part of output, 1. the function is created to write a dataframe into a text file # 2. Writing FP tree into a file # writing data to a file def writetoafile(filename,df): with open(filename, 'w') as f: df.to_csv(filename) # writing FP tree to a file def write_tree(filename,fptree): file1 = open(filename,"w") file1.write(str(fptree)) file1.close() def write_list(filename,list_items): with open(filename, 'w') as filehandle: for list_item in list_items: filehandle.write('%s\n' % list_item) # #### Writing as output demonstrated below: # + # Writing outputs to file # - # writing dataframe values into a file writetoafile('output_file.txt',df) # writing frequent itemlists into a file write_list("freq_itemsets_fp",all_frequent_itemsets_fp) # write FP tree into a file write_tree("fptree.txt",fptree) # # ## Frequent Itemset : Find all possible 2, 3, 4 and 5-itemsets given the parameter of minimum-support. # The Frequent Itemsets are created using three different algorithms which is implemented in this notebook: # 1. Brute-force approach - Creates all possible combinations and prunes using minimum support # 2. Apriori approach - Implements Apriori algorithm to generate frequent itemsets # 3. 
FP Growth approach - Implements FP Growth algorithm to generate frequent itemsets # ### This task provides the implementation of Brute-Force algorithm to generate frequent itemsets - 2,3,4 and 5 itemsets by taking minimum support as input parameter # + # I have used a modular approach where the functions can be re-used which increases the scalability and achieves literate coding # enhancing the readility of the implementation # + # function to calculate the frequency of occurence of the item_sets def freq(df,item_sets): count_list = [0] * len(item_sets) item_list = df.values.tolist() count = 0 support_set = {} for i,k in zip(item_sets,range(len(item_sets))): for j in range(len(df)): if(set(i).issubset(set(item_list[j]))): count += 1 count_list[k] = count count = 0 return count_list # + # function to generate frequent items given the item-set, support counts and minimum support def generate_frequent_itemset(count,comb_list,min_sup): freq_list = list() infreq_list = list() for item,i in zip(comb_list,range(len(comb_list))): if count[i]>=min_sup: freq_list.append(item) else: infreq_list.append(item) return freq_list,infreq_list # + # The brute force algorithm implemented below takes a dataframe, minimum support and number of itemsets as input and returns a # frequent item-sets as output def brute_force_frequent_itemset(number_itemset,df,min_sup): list_comb = list(((df['0'].append(df['1']).append(df['2']).append(df['3']).append(df['4']).append(df['5']).append(df['6']).append(df['7']).append(df['8']).append(df['9']).append(df['10'])).unique())) list_comb.remove(np.nan) all_freq_itemlist = list() # Generating n-itemset by obtaining unique values for i in range(1,number_itemset+1): # Generating combinations comb = combinations(list_comb,i) comb_list = list(comb) count = freq(df,comb_list) freq_list,infreq_list = generate_frequent_itemset(count,comb_list,min_sup) if freq_list: all_freq_itemlist.append(freq_list) return all_freq_itemlist # - # ### Illustration of Brute-force approach to generate frequent item-sets: # ### The two datasets are used to demonstrate the generation of outputs for all the algorithms. # 1. Grocery store provided as part of coursework # 2. Textbook example to demonstrate the working of different dataset # Using a textbook example to generate frequent itemsets # parameter 1 - number of itemsets. example - given 3, the frequent itemsets 1,2,3 are generated freq_itemsets_example = brute_force_frequent_itemset(3,df_example,2) freq_itemsets_example freq_itemsets0 = brute_force_frequent_itemset(1,df,1250) freq_itemsets0 # Using the dataset provided as part of coursework # Upto 3-itemsets generation, min_sup is 2300 freq_itemsets1 = brute_force_frequent_itemset(3,df,2300) freq_itemsets1 freq_itemsets2 = brute_force_frequent_itemset(5,df,280) freq_itemsets2 freq_itemsets3 = brute_force_frequent_itemset(5,df,280) freq_itemsets3 freq_itemsets_big = brute_force_frequent_itemset(12,df,50) freq_itemsets_big # # ## Associated Rule : Find all interesting association rules from the frequent item-sets given the parameter of minimum-confidence. # The association rules are created using only the frequent item-sets generated using various algorithms as mentioned above. 
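# As a quick reminder of the measure used below, the confidence of a rule A -> B is
# support(A and B) / support(A). A tiny worked example with made-up counts (not taken from the dataset):
# +
# Hypothetical counts purely to illustrate the confidence formula.
support_A_and_B = 30      # transactions containing both A and B
support_A = 60            # transactions containing A
confidence_A_to_B = support_A_and_B / support_A
print(confidence_A_to_B)  # 0.5, i.e. 50%; the rule is kept only if this meets the minimum-confidence parameter
# -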
# The associatioin function takes dataframe(database of transactions), frequent_itemsets and the minimum-confidence to generate # interesting association rules # + # Frequency - Count value of itemsets is determined by reusing the function created for brute-force approach, the count is used # to calculate association rules, the scan of frequent itemsets is started from the maximum frequent sets and working upwards # till 1-frequent itemsets to generate all possible association rules given the minimum confidence percentage value. def associations(df,itemsets,min_confidence_percent,filename='test_association_values.txt'): file_association = open(filename,'w') confidence = float(min_confidence_percent)/100 calc_confidence = 0.0 # Determining count of values in frequent itemsets count = list() for k in itemsets: c = freq(df,k) count.append(c) # Computing association rules n = len(itemsets)-1 n1 = n n2 = n for i in range(n): for k in range(len(itemsets[n2])): item_count = count[n2][k] item = itemsets[n2][k] n1 = n2 for l in range(n2): for j in range(len(itemsets[n1-1])): lower_item_count = count[n1-1][j] lower_item = itemsets[n1-1][j] if set(lower_item).issubset(set(item)): calc_confidence = item_count/lower_item_count if calc_confidence >= confidence: print("rule:"+str(lower_item)+"->"+str((set(item)-set(lower_item)))+"->"+str(calc_confidence*100)) file_association.write("\nrule:"+str(lower_item)+"->"+str((set(item)-set(lower_item)))+"->"+str(calc_confidence*100)) n1 = n1-1 n2 = n2-1 # - # ### Illustration of Association generation from frequent item-sets: # Using textbook example to produce the Association rules # parameter - frequent itemsets and minimum confidence associations(df_example,freq_itemsets_example,50) # + # associations(df,freq_itemsets2,50) # - # Computing Associations for minimum confidence of 50 percent for 1,2,3,4 and 5 frequent item-sets associations(df,freq_itemsets2,50) freq_itemsets4 = brute_force_frequent_itemset(5,df,5000) freq_itemsets4 associations(df,freq_itemsets4,50) freq_itemsets5 = brute_force_frequent_itemset(5,df,1250) freq_itemsets5 associations(df,freq_itemsets5,50) freq_itemsets6 = brute_force_frequent_itemset(5,df,2456) freq_itemsets6 associations(df,freq_itemsets6,45) associations(df,freq_itemsets_big,55) # ## Task 4 # ## Apriori Algorithm : Use Apriori algorithm for finding frequent itemsets. 
# The modular approach is again adopted here to split different functions as a module and each module can be combined to # effectively implement the Apriori algorithm # + # overriding frozen to unfreeze class MyFrozenSet(frozenset): def repr(self): return '([{}])'.format(', '.join(map(repr, self))) # + # unfreeze and display frozensets(not needed) def unfreeze(item_sets): temp = list() for i in item_sets: temp.append([MyFrozenSet(j) for j in i]) return temp # + # join the sets Lk * Lk def join(itemset,n): set1 = set() set2 = set() for i in itemset: for j in itemset: # joining sets to generate length of size n using all the possible subsets if( len(i.union(j)) == n ): set1 = [i.union(j)] set2 = set2.union(set1) return set2 # + # Generate Candidate set by removing infrequent subsets def candidate(infreq_itemset,joined_set): flag = 0 candidate_set = set() for i in joined_set: flag = 0 for j in infreq_itemset: if(frozenset.issubset(j,i)): # if the joined set contains infrequent subset it is flagged to be removed flag = 1 if flag == 0: # only the unflagged frequent sets are added into candidate set candidate_set.add(i) return candidate_set # + # Prune the Candidate set to obtain frequent itemsets def prune(count,itemset,sup_count=2): freq_itemset = set() infreq_itemset = set() for item,i in zip(itemset,range(len(itemset))): # if the count is greater than support count, it is added into frequent itemsets if count[i]>=sup_count: freq_itemset.add(item) else: infreq_itemset.add(item) return freq_itemset,infreq_itemset # + def apriori(number_itemset,df,min_sup): # Generating 1-itemset by obtaining unique values and pruning using min_support # Obtaining unique values comb1_list = list((df['0'].append(df['1']).append(df['2']).append(df['3']).append(df['4'])).unique()) comb1_list.remove(np.nan) comb1_set = set() for item in comb1_list: if item: comb1_set.add(frozenset([item])) # pruning using min_support count1 = freq(df,comb1_set) freq_itemset1,infreq_itemset1 = prune(count1,comb1_set,min_sup) if number_itemset == 1: return freq_itemset1 # Generating n-itemset all_freq_itemset = list() all_freq_itemset.append(list(freq_itemset1)) freq_itemset = freq_itemset1 infreq_itemset = infreq_itemset1 comb_set = comb1_set for i in range(2,number_itemset+1): # joining frequent itemsets joined_set = join(freq_itemset,i) # Candidate set is created by removing infrequent itemsets candidate_set = candidate(infreq_itemset,joined_set) # support counts of candidate set is obtained count = freq(df,candidate_set) # support count is used to prune by comparing with minimum support freq_itemset,infreq_itemset = prune(count,candidate_set,min_sup) if freq_itemset: all_freq_itemset.append(list(freq_itemset)) return all_freq_itemset # - # ### Illustration of Apriori algorithm to generate frequent item-sets: f_itemset_apriori_example = apriori(3,df_example,2) f_itemset_apriori_example f_itemset_apriori = apriori(5,df,280) f_itemset_apriori associations(df,f_itemset_apriori,50) # # ## FP-Growth Algorithm: Use FP-Growth algorithm for finding frequent itemsets. 
#class of FP TREE node class TreeNode: def __init__(self, node_name,count,parentnode): self.name = node_name self.count = count self.node_link = None self.parent = parentnode self.children = {} def __str__(self, level=0): ret = "\t"*level+repr(self.name)+"\n" for child in self.children: ret += (self.children[child]).__str__(level+1) return ret def __repr__(self): return '' def increment_counter(self, count): self.count += count # Reading Data - flexible to use txt or csv file which reads the line and strips the unnecessary data, split and form a list. # The FM creates a list of Transaction Database def read_as_list(file): list_out = list() with open(file) as f: row_lines = f.readlines() 1for i in row_lines: single_line = i.strip().strip(",") list_out.append(single_line.split(',')) return list_out # To convert initial transaction into frozenset # Creating a frozen dictionary of Database(transactions) and counting the occurences of transaction - to be used to generate frequent itemsets def create_frozen_set(database_list): dict_frozen_set = {} for Tx in database_list: if frozenset(Tx) in dict_frozen_set.keys(): dict_frozen_set[frozenset(Tx)] += 1 else: dict_frozen_set[frozenset(Tx)] = 1 return dict_frozen_set #The FP Tree is created using ordered sets def add_tree_nodes(item_set, fptree, header_table, count): if item_set[0] in fptree.children: fptree.children[item_set[0]].increment_counter(count) else: fptree.children[item_set[0]] = TreeNode(item_set[0], count, fptree) if header_table[item_set[0]][1] == None: header_table[item_set[0]][1] = fptree.children[item_set[0]] else: add_node_link(header_table[item_set[0]][1], fptree.children[item_set[0]]) if len(item_set) > 1: add_tree_nodes(item_set[1::], fptree.children[item_set[0]], header_table, count) #The node link is added def add_node_link(previous_node, next_node): while (previous_node.node_link != None): previous_node = previous_node.node_link previous_node.node_link = next_node # + # Generate Frequent Pattern tree def generate_FP_tree(dict_frozen_set, min_sup): # Creating header table - get previous counter using 'get' and then add that value to the row in consideration to obatin count of each unique item in DB header_table = {} for frozen_set in dict_frozen_set: for key_item in frozen_set: header_table[key_item] = header_table.get(key_item,0) + dict_frozen_set[frozen_set] # pruning using min_sup to retain only frequent 1-itemsets for i in list(header_table): if header_table[i] < min_sup: del(header_table[i]) # Obtaining only keys which are frequent itemsets frequent_itemset = set(header_table.keys()) if len(frequent_itemset) == 0: return None, None for j in header_table: header_table[j] = [header_table[j], None] Tree = TreeNode('Null',1,None) for item_set,count in dict_frozen_set.items(): frequent_tx = {} for item in item_set: if item in frequent_itemset: frequent_tx[item] = header_table[item][0] if len(frequent_tx) > 0: #the transaction itemsets are ordered with respect to support ordered_itemset = [v[0] for v in sorted(frequent_tx.items(), key=lambda p: p[1], reverse=True)] #the nodes are updated into tree add_tree_nodes(ordered_itemset, Tree, header_table, count) return Tree, header_table # - # #### Mining Frequent item-sets by using generated FP Tree #FP Tree is traversed upwards def traverse_fptree(leaf_Node, prefix_path): if leaf_Node.parent != None: prefix_path.append(leaf_Node.name) traverse_fptree(leaf_Node.parent, prefix_path) #returns conditional pattern base(prefix paths) def find_prefix_path(base_path, tree_node): 
Conditional_patterns_base = {} while tree_node != None: prefix_path = [] traverse_fptree(tree_node, prefix_path) if len(prefix_path) > 1: Conditional_patterns_base[frozenset(prefix_path[1:])] = tree_node.count tree_node = tree_node.node_link return Conditional_patterns_base #Condtional FP Tree and Condtional Pattern Base is recursively mined def mining(fptree, header_table, min_sup, prefix, frequent_itemset): FPGen = [v[0] for v in sorted(header_table.items(),key=lambda p: p[1][0])] for base_path in FPGen: all_frequentset = prefix.copy() all_frequentset.add(base_path) #appending frequent itemset frequent_itemset.append(all_frequentset) #obtain conditional pattern bases for itemsets Conditional_pattern_bases = find_prefix_path(base_path, header_table[base_path][1]) #Conditional FP Tree generation Conditional_FPTree, Conditional_header = generate_FP_tree(Conditional_pattern_bases,min_sup) if Conditional_header != None: mining(Conditional_FPTree, Conditional_header, min_sup, all_frequentset, frequent_itemset) # ### Illustration of FP Growth algorithm to generate frequent item-sets: # Creating FP Tree and header table for the example dataset # parameters - input list(frozenset), minimum support count value fptree_example, header_table_example = generate_FP_tree(create_frozen_set(input_list_example), 2) # Display of Tree showing it as object - tree node representation fptree_example # printing string values of a tree str(fptree_example) # printing FP Tree print(fptree_example) # Displaying header table header_table_example # function call to write FP tree into a file write_tree("fptreeexample.txt",fptree_example) # Mining to obtain frequent itemsets all_frequent_itemsets_example = [] #call function to mine all ferquent itemsets mining(fptree_example, header_table_example, 2, set([]), all_frequent_itemsets_example) # Display frequent itemsets all_frequent_itemsets_example associations(df,[all_frequent_itemsets_example],50) # ##### Testing on Dataset fptree, header_table = generate_FP_tree(create_frozen_set(input_list), 200) fptree str(fptree) print(fptree) header_table # function call to write FP tree into a file write_tree("fptree.txt",fptree) # Mining to obtain frequent itemsets all_frequent_itemsets_fp = [] #call function to mine all ferquent itemsets mining(fptree, header_table, 200, set([]), all_frequent_itemsets_fp) all_frequent_itemsets_fp # # ## Experiment on the Dataset : Apply your associated rule mining algorithms to the dataset and show some interesting rules. associations(df_example,freq_itemsets_example,50) associations(df_example,freq_itemsets_example,70) associations(df_example,freq_itemsets_example,60) # ##### Testing on dataset associations(df,freq_itemsets2,50) associations(df,freq_itemsets4,50) associations(df,freq_itemsets5,45) associations(df,freq_itemsets6,45) # # ##### The modular approach of coding is done so the function can be reused for different algorithms. Sufficient comments are added to make the code readable # # ## Run-Time Performance # + # Measuring run-time performance of generating frequent itemsets #1. Brute force approach # - start1 = time.time() freq_itemsets1 = brute_force_frequent_itemset(5,df,200) end1 = time.time() print("The time taken by Brute force algorithm implemented: ") print(end1-start1) # + #2. Apriori approach # - start2 = time.time() f_itemset_apriori = apriori(5,df,200) end2 = time.time() print("The time taken by Apriori algorithm implemented: ") print(end2-start2) # + #3. 
FP Growth approach # - start3 = time.time() fptree, header_table = generate_FP_tree(create_frozen_set(input_list), 200) all_frequent_itemsets_fp = [] mining(fptree, header_table, 200, set([]), all_frequent_itemsets_fp) end3 = time.time() print("The time taken by FP Growth algorithm implemented: ") print(end3-start3) all_frequent_itemsets_fp # From the above analysis, the FP Growth runs faster compared to other two algorithms by recursively computing. # Brute Force works slower by making use of all combinations, Apriori works faster compared to Brute Force by # making use of only prior information(Frequent sets) instead of all the combinations. It is generally slower # because of database scanning at each step of pruning # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python 3 - Tic Tac Toe # # Author : # # Email ID : # # Display Board def display_board(my_board_entry): print(' | | \n {0} | {1} | {2} \n | | \n-----------------\n | | \n {3} | {4} | {5} \n | | \n-----------------\n | | \n {6} | {7} | {8} \n | | \n'.format(my_board_entry[0], my_board_entry[1], my_board_entry[2], my_board_entry[3], my_board_entry[4], my_board_entry[5], my_board_entry[6], my_board_entry[7], my_board_entry[8])) # # Get User Input def get_user_input(player1, player2, int_turn_counter, list_board_entry): #Variable Initialisation list_player_symbol = [] list_player_symbol.append(player1) list_player_symbol.append(player2) list_input = [] flag_get_input = 'Y' while(flag_get_input.upper() == 'Y'): int_board_position = int(input('Player {} --> {}, Enter the valid position number...\t'.format((int_turn_counter%2)+1, list_player_symbol[int_turn_counter%2]))) if((int_board_position > 0) and (int_board_position < 10)): #validation check if(list_board_entry[int_board_position - 1] == ' '): list_input.append(int_board_position) list_input.append(list_player_symbol[int_turn_counter%2]) return(list_input) else: print('Entered position is already filled, so kindly provide some other free position...\n') flag_get_input = input('Do want to continue ...[Y/N]\t ') if(flag_get_input.upper() == 'N'): list_input.append('End') return(list_input) elif((int_board_position < 0) or (int_board_position > 10)): print('Invalid position number | kindly enter the position between 1 to 9...\n') flag_get_input = input('Do want to continue ...[Y/N]\t ') if(flag_get_input.upper() == 'N'): list_input.append('End') return(list_input) # # Clear Cell def clear_cell(): from IPython.display import clear_output for i in range(2): clear_output() ##import time ##time.sleep(2) # ## Core Algorithm def core_algorithm(list_board_entry): #variable initialise temp_list_board_entry = list_board_entry.copy() if(temp_list_board_entry.count('X') >= 3): if(temp_list_board_entry[0:3].count('X') == 3 or temp_list_board_entry[3:6].count('X') == 3 or temp_list_board_entry[6:9].count('X') == 3 or (temp_list_board_entry[0] == 'X' and temp_list_board_entry[4] == 'X' and temp_list_board_entry[8] == 'X') or (temp_list_board_entry[2] == 'X' and temp_list_board_entry[4] == 'X' and temp_list_board_entry[6] == 'X') ): print('Hurrah!!! 
Player 1 Won...') return 'End' elif(temp_list_board_entry[0:3].count('O') == 3 or temp_list_board_entry[3:6].count('O') == 3 or temp_list_board_entry[6:9].count('O') == 3 or (temp_list_board_entry[0] == 'O' and temp_list_board_entry[4] == 'O' and temp_list_board_entry[8] == 'O') or (temp_list_board_entry[2] == 'O' and temp_list_board_entry[4] == 'O' and temp_list_board_entry[6] == 'O') ): print('Hurrah!!! Player 2 Won...') return 'End' # # Driver Code def tic_tac_toe_main(): #Initialise Flag continue_flag = 'Y' while(continue_flag.upper() == 'Y'): #Reset the Flag continue_flag = '' #Variable Initialisation int_turn_counter = 0 list_board_entry = [" ", " ", " ", " ", " ", " ", " ", " ", " " ] print("Welcome to Arun's Tic Tac Toe Game using Python 3\n") #Select X or O player_choice = input('Player 1, do you want to be X or O...\t') if(player_choice.upper() == 'X'): #iterating through game print('Player 1 --> X \nPlayer 2 --> O') while(int_turn_counter < 9): list_input = get_user_input('X', 'O', int_turn_counter, list_board_entry) if(list_input[0] == 'End'): break #Displaying the updated board... list_board_entry[int(list_input[0]) - 1] = list_input[1] clear_cell() display_board(list_board_entry) flag = core_algorithm(list_board_entry) if(flag == 'End'): break int_turn_counter += 1 elif(player_choice.upper() == 'O'): #iterating through game print('Player 1 --> O \nPlayer 2 --> X') while(int_turn_counter < 9): list_input = get_user_input('O', 'X', int_turn_counter, list_board_entry) if(list_input[0] == 'End'): break #Displaying the updated board... list_board_entry[int(list_input[0]) - 1] = list_input[1] clear_cell() display_board(list_board_entry) flag = core_algorithm(list_board_entry) if(flag == 'End'): break int_turn_counter += 1 else: print('Invalid Symbol selection...Exiting') if(list_input[0] == 'End' or flag == 'End'): continue_flag = 'N' else: continue_flag = input('Do you want to continue ...[Y/N]\t ') tic_tac_toe_main() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Integration with E-mail # # ### 2 ways to integrate: # # - Gmail # - Outlook # # We will cover the Gmail method first and then Outlook. They are quite different, so you do not need to know one in order to use the other. # ### Gmail Method # # - You need to enable your e-mail account for this kind of activity (or create a new e-mail account)
# https://myaccount.google.com/lesssecureapps -> Go to 'Manage your Google Account' -> Security -> Less secure app access (turn this option on) # - As with everything in Python, there are several libraries that can help with this; we will use yagmail because it makes our life much simpler # - We need to install yagmail; in the Anaconda terminal type: pip install yagmail # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Welcome, Bishop Ireton Hackers! # # This tutorial leads you through the steps of a basic data mining pipeline: # 1. Define your problem # 2. Identify your data # 3. ,-------> Explore your data ---------- # 4. '-- Normalize and Clean your data <--' # 5. Extract information # # If you're a pro, feel free to modify as you go or carve your own path. # Make sure the following libraries work # To troubleshoot: open the command line and check that it's installed, e.g. "which numpy" # if it is not installed, simply install e.g. "pip install pandas" # if it is installed, you might need to check your paths from IPython.display import Image import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.pyplot import figure plt.style.use('ggplot') import ast from sklearn import preprocessing # ## 1. Define your problem: What makes a good book-to-movie? # Seriously though, why do so many good books become terrible movies? Take this conundrum, for example: # # Mystic River. Written by , starring and , directed by . Award-winning book, and award-winning movie. # # Live By Night. Written by , starring and , directed by . Award-winning book, TERRIBLE movie. Even after it was adapted to film by the author himself. # # I don't expect to solve all the mysteries here, but I'd like to get some general trends. cwd = os.getcwd() movie_posters = cwd+"/img/MoviePosters.png" Image(movie_posters, width=400, height=400) # ## 2. Identify your data. # # Let's use a subset of a dataset that I created, using goodreads and imdb. # # For book data, I used the goodreads API https://www.goodreads.com/api, which I found pretty robust and easy-to-use, as far as APIs go. # # For movie data, IMDb has a huge volume of public data. You can download it in large chunks, https://www.imdb.com/interfaces/, or go through the API like I did (http://www.omdbapi.com/). # # Holy crap there are a lot of books and movies! To link the two datasets, I went to wikipedia. # https://en.wikipedia.org/w/index.php?title=Category:American_novels_adapted_into_films&pageuntil=Burnt+Offerings+%28Marasco+novel%29#mw-pages # # # Some other great places to find free datasets: # https://www.kaggle.com/datasets # https://github.com/datasets # https://data.fivethirtyeight.com/ # https://datasetsearch.research.google.com/ # https://scikit-learn.org/stable/datasets/toy_dataset.html # # ## 3. Explore your data. # # Start by importing the data and examining its structure. # + # Data! Data! Data!
film_adaptations = [] print(cwd) with open(cwd+'/Books-to-Movies.txt', encoding='utf8') as f: for line in f.read().splitlines(): d = (ast.literal_eval(line)) film_adaptations.append(d) # let's take a look at the information each entry has (the "keys") to get a sense of structure for key in film_adaptations[0].keys(): print(key,type(key)) # We'll keep this list handy, since we'll be referencing these keys to select our data # + # Transform to dataframe # Let's shift to pandas for more flexible handling and better features (like sample!). # to learn more about the pandas data science toolsuite: https://pandas.pydata.org/ df = pd.DataFrame(film_adaptations) # print a row, and print the top (head) of a column print('A row:',df.sample()) print('/nA column:',df['mActors'].head()) # - # Formatting the output # And lets examine some random samples and practice accessing values based on key. for i in range(5): row=df.sample() print('{} was written by {} in {}, and had an average rating of {}\n'.format( row['title'].values[0], row['author_name'].values[0], row['publication_date'].values[0][-4:], row['average_rating'].values[0])) # For practice, look up the goodreads id of your favorite film-adapted book # https://www.goodreads.com/book/ # The goodreads id preceeds the book title in the web address, highlighted below in blue Image(cwd+"/img/how_to_find_gr_id.png", width=500, height=300) # + # Access the row of the desired film adaptation grID = '41865' # <------ edit this variable with the goodreads ID you just looked up row = df.loc[df['goodreads_id'] == grID] # you can print the whole row, or just extract the key value you want print('title of row',row['title']) # <---- edit this line to access a different value, like 'average_rating' # + # What do other book ratings look like? df['average_rating'] = df['average_rating'].astype(float) print('good reads ratings:\n',df['average_rating'].head()) # Plot goodreads ratings plt.rcParams["figure.figsize"] = (18,3) df['average_rating'].plot(kind='bar') # - # ## 3. Normalize and clean your data. # # Normalizing, cleanding, transforming... These terms are thrown used loosely but have very concrete meanings in practice. For more: https://www.statisticshowto.com/normalized/ # # If you notice errors or something missing, please please please mention it in the comments so the quality of this dataset can be improved! # + #Plot goodreads ratings (average_rating) against imdb ratings (m_imdb_Rating) #recall all our key values are in string format, so we need to convert to numeric df['m_imdb_Rating'] = pd.to_numeric(df['m_imdb_Rating'],errors='coerce') df.plot(x='average_rating',y='m_imdb_Rating',style='o',figsize=(10,10)) # - # Let's explore the outliers print('outliers:',df[df['average_rating']< 2.25]) # + # Although these classics made ok movies, there are really too few book reviews for goodreads # scores to have any credibility. Let's establish a review count threshold for quality. 
df = df[df['ratings_count'].astype(float)>19] # for goodreads # repeat for imdb, then replot df['m_imdb_Votes'] = pd.to_numeric(df['m_imdb_Votes'],errors='coerce') df = df[df['m_imdb_Votes']>19] df.plot(x='average_rating',y='m_imdb_Rating',style='o',figsize=(10,10)) # plot a Marker for your favorite film adaptaion on top of the other data # use the grID variable you set up earlier row = df.loc[df['goodreads_id'] == grID] plt.plot(row['average_rating'],row['m_imdb_Rating'], marker='x', markersize=15, color="blue") # + # Before we move on to determining what is statistically significant any given column, we need to # transform each numeric column to be reflective of what is statistically significant # normalize the data from sklearn.preprocessing import scale df['scaled_gr'] = scale(df['average_rating'].astype(float)) # Plot goodreads ratings AFTER normalizing plt.rcParams["figure.figsize"] = (18,3) df['scaled_gr'].plot(kind='bar') # https://bruchez.blogspot.com/2017/12/having-fun-with-imdb-dataset-files.html # lots of cool data science tutorials on sci kit learn: https://scikit-learn.org/stable/ # + # Let's compare this to what the data would have looked like BEFORE normalizing: # Plot goodreads ratings WITHOUT normalization plt.rcParams["figure.figsize"] = (18,3) df['average_rating'].plot(kind='bar') # - # ## 5. Extract Information # # Let's start by analyzing what makes a good film adaptation, vice a bad one. # + # Using the ratings information, let's derive whether it was a good adaptation or bad adaptation # Create two more columns, labeling each row as either a good/bad book, and a good/bad movie df['scaled_imdb'] = scale(df['m_imdb_Rating'].astype(float)) df.loc[((df['scaled_imdb']>=0) & (df['scaled_gr']<0)),'Adaptation_Category'] = 1 # good movie, bad book df.loc[((df['scaled_imdb']>=0) & (df['scaled_gr']>=0)),'Adaptation_Category'] = 2 # good movie, good book df.loc[((df['scaled_imdb']<0) & (df['scaled_gr']>=0)),'Adaptation_Category'] = 3 # bad movie, good book df.loc[((df['scaled_imdb']<0) & (df['scaled_gr']<0)),'Adaptation_Category'] = 4 # bad movie, bad book print(df['Adaptation_Category']) # - # Chart out some numeric columns df[['Adaptation_Category','ratings_count','m_imdb_Votes']].groupby("Adaptation_Category").mean() # + from collections import Counter # put the non-numeric data into a numeric format bad_books_good_movies = df[df['Adaptation_Category']==1] good_books_good_movies = df[df['Adaptation_Category']==2] good_books_bad_movies = df[df['Adaptation_Category']==3] bad_books_bad_movies = df[df['Adaptation_Category']==4] # Let's grab all the actors from bad books, but good movies (bbgm) bbgm_Actors = [] for actors_in_a_given_movie in bad_books_good_movies['mActors'].values.tolist(): bbgm_Actors += actors_in_a_given_movie.split(', ') print("Top actors for good film adaptations:") print(Counter(bbgm_Actors).most_common(50)) # + # Let's compare this to all the actors from good movies, but bad books gbbm_Actors = [] for actors_in_a_given_movie in good_books_bad_movies['mActors'].values.tolist(): gbbm_Actors += actors_in_a_given_movie.split(', ') print("Top actors for bad film adaptations (Who ruined the book???):") print(Counter(gbbm_Actors).most_common(50)) # + # Any common elements? intersection = set(bbgm_Actors).intersection(gbbm_Actors) print(Counter(intersection).most_common(10)) # - # Thanks so much for hacking along, ! Please see my github project page https://github.com/ravedawg/HackBI to cite this work, leave a comment, follow, or collaborate! 
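# One last note on the `scale` step used above (a minimal sketch with made-up ratings, not this dataset): it subtracts the column mean and divides by the standard deviation, so a scaled value of 0 means "exactly average", and the good/bad split at 0 used for `Adaptation_Category` is simply a split at the mean.
# +
import numpy as np
from sklearn.preprocessing import scale

toy_ratings = np.array([3.2, 3.8, 4.0, 4.1, 4.9])   # made-up example values
scaled_toy = scale(toy_ratings)                      # (x - mean) / std
print(scaled_toy.round(2))                           # roughly [-1.46 -0.37  0.    0.18  1.64]
print(scaled_toy.mean().round(2), scaled_toy.std().round(2))   # roughly 0.0 and 1.0
# -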
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Basic Feature Extraction
# # The purpose of this notebook is to engineer a basic set of features before cleaning the whole corpus. We want to extract basic features such as the count of upper-case words before performing processing steps such as lower-casing. After the basic features have been engineered, we will analyse them for their usefulness in detecting the target class label. # # We expect that a comment intended to abuse will have a higher proportion of capitalized words, a less developed vocabulary, a higher number of exclamation points, etc. # # https://towardsdatascience.com/how-i-improved-my-text-classification-model-with-feature-engineering-98fbe6c13ef3 # # https://medium.com/@outside2SDs/an-overview-of-correlation-measures-between-categorical-and-continuous-variables-4c7f85610365#:~:text=A%20simple%20approach%20could%20be,variance%20of%20the%20continuous%20variable.&text=If%20the%20variables%20have%20no,similar%20to%20the%20original%20variance. # # # # Note: # - calculate number of sentences # - average sentence length # on preprocessed data. # # - add Upper_case_vs_words to train_pre_clean_features.csv and basic_fetaures_extraction.py # import spacy from spacy.lang.en.stop_words import STOP_WORDS # + import os.path import re import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.feature_selection import mutual_info_classif # from sklearn.linear_model import LogisticRegression # - import warnings warnings.filterwarnings("ignore") pd.set_option("display.max_colwidth", 1000) # + # file_name = str(os.path.join(CLEANED_DATA_DIR, 'combined.csv')) data = pd.read_csv("../cleaned_data/train_pre_clean_features.csv", encoding="utf-8") data = data.sample(n=100000, random_state=42) # + # data = data.drop(columns=['message']) # - # # Analysis
data.describe() # + grp_0 = ['word_count', 'char_count', 'label'] grp_1 = ['avg_word_len', 'word_density', 'stop_word_count', 'num_unique_words', 'unique_vs_words', 'be_verb_count', 'label'] grp_2 = ['upper_case_words', 'Upper_case_vs_words', 'numeric_count', 'punct_count', 'exclamation_count', 'label'] # + fig, ax = plt.subplots(1,3, figsize=(12,6)) # plt.subplot(ax[0]) data[grp_0].groupby('label').median().T.plot(kind='bar', ax=ax[0]) # plt.subplot(ax[1]) data[grp_1].groupby('label').median().T.plot(kind='bar', ax=ax[1]) # plt.subplot(ax[2]) data[grp_2].groupby('label').median().T.plot(kind='bar', ax=ax[2]) # plt.yscale('log') plt.suptitle("Category wise Median Statistics") plt.tight_layout() plt.show() # - # #### Observation # - From the descriptive statistics we can observe that most of the variables are right-skewed and thin-tailed, indicating the presence of extreme outliers. # - We also observe that `unique_vs_words` for most of the observations (q1) has $\approx 80\%$ or above unique-words-to-words ratio. \ # Observations that fall below the accepted lower threshold will need further exploration. # - The median statistics do not indicate any major differences among the target categories. # ## Mutual Information num_importance = mutual_info_classif(X = data.drop(columns=['label', 'message']), y = data['label'], discrete_features=False, random_state=42) temp_data = pd.DataFrame(data = np.column_stack((data.drop(columns=['label', 'message']).columns, num_importance)), columns = ['feature', 'mutual_info'] ) temp_data.mutual_info = temp_data.mutual_info.astype('float').round(5) temp_data.sort_values('mutual_info', ascending=False) # #### Observation on Mutual Information # # The Mutual Information statistics are not very encouraging, so only the ranking of the variables is considered. It should also be noted that Mutual Information is susceptible to underestimation.
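# To make the ranking easier to interpret, here is a small self-contained illustration of `mutual_info_classif` on synthetic data (not the corpus above): a feature related to the label receives a clearly higher score than pure noise, even though both absolute values stay small, which is why the ranking matters more than the magnitudes.
# +
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)
toy_y = rng.integers(0, 2, size=5000)                  # binary target
informative = toy_y + rng.normal(0, 0.5, size=5000)    # related to the label
noise = rng.normal(0, 1, size=5000)                    # unrelated to the label
toy_X = np.column_stack([informative, noise])

toy_scores = mutual_info_classif(toy_X, toy_y, discrete_features=False, random_state=42)
print(dict(zip(['informative', 'noise'], toy_scores.round(3))))
# -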
# ## Distribution of the Variables def max_thresh(series): q1, q3 = np.percentile(series, [25, 75]) iqr = q3 - q1 max_th = q3 + 1.5*iqr return list(map(lambda x: max_th if x > iqr else x, series)) # + data_thresholded = data.drop(columns=['message']) for column in data_thresholded.columns: if column!='label': data_thresholded[column] = max_thresh(data_thresholded[column]) # + # columns = ['stop_word_count', 'unique_vs_words' , 'upper_case_words', 'numeric_count', 'Upper_case_vs_words'] # print() # print('-'*50) # for column in columns: # sns.distplot(data[column][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') # sns.distplot(data[column][data.label == 'abuse'], color='r', hist=True, label='Abuse') # plt.xlabel('') # plt.title(column) fig, ax = plt.subplots(1,5, figsize=(25,5)) plt.subplot(ax[0]) sns.distplot(data['stop_word_count'][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data['stop_word_count'][data.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('stop_word_count') plt.subplot(ax[1]) sns.distplot(data['unique_vs_words'][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data['unique_vs_words'][data.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('unique_vs_words') plt.subplot(ax[2]) sns.distplot(data['upper_case_words'][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data['upper_case_words'][data.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('upper_case_words') plt.subplot(ax[3]) sns.distplot(data['numeric_count'][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data['numeric_count'][data.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('numeric_count') plt.subplot(ax[4]) sns.distplot(data['Upper_case_vs_words'][data.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data['Upper_case_vs_words'][data.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('Upper_case_vs_words') first_five = r"$\bf{Five \ important}$" plt.suptitle(f"First {first_five} variables from Mutual Information.\n There does not seem to be significant discriminatory power among the variables for the target labels") plt.tight_layout() plt.show() ############################ SECOND FIGURE ################################# fig, ax = plt.subplots(1,5, figsize=(25,5)) plt.subplot(ax[0]) sns.distplot(data_thresholded['stop_word_count'][data_thresholded.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data_thresholded['stop_word_count'][data_thresholded.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('stop_word_count') plt.subplot(ax[1]) sns.distplot(data_thresholded['unique_vs_words'][data_thresholded.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data_thresholded['unique_vs_words'][data_thresholded.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('unique_vs_words') plt.subplot(ax[2]) sns.distplot(data_thresholded['upper_case_words'][data_thresholded.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data_thresholded['upper_case_words'][data_thresholded.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('upper_case_words') 
plt.subplot(ax[3]) sns.distplot(data_thresholded['numeric_count'][data_thresholded.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data_thresholded['numeric_count'][data_thresholded.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('numeric_count') plt.subplot(ax[4]) sns.distplot(data_thresholded['Upper_case_vs_words'][data_thresholded.label == 'no_abuse'], color='b', hist=True, label='No Abuse') sns.distplot(data_thresholded['Upper_case_vs_words'][data_thresholded.label == 'abuse'], color='r', hist=True, label='Abuse') plt.xlabel('') plt.legend() plt.title('Upper_case_vs_words') plt.suptitle(r"First $\bf{Five \ important}$ variables from Mutual Information, with max thresholded at q3+1.5IQR.") plt.tight_layout() plt.show() ############################ THIRD FIGURE ################################# fig, ax = plt.subplots(1,5, figsize=(25,5)) plt.subplot(ax[0]) sns.boxplot(y=data_thresholded.label, x=data_thresholded.stop_word_count) plt.xlabel('') plt.title('stop_word_count') plt.subplot(ax[1]) sns.boxplot(y=data_thresholded.label, x=data_thresholded.unique_vs_words) plt.xlabel('') plt.title('unique_vs_words') plt.subplot(ax[2]) sns.boxplot(y=data_thresholded.label, x=data_thresholded.upper_case_words) plt.xlabel('') plt.title('upper_case_words') plt.subplot(ax[3]) sns.boxplot(y=data_thresholded.label, x=data_thresholded.numeric_count) plt.xlabel('') plt.title('numeric_count') plt.subplot(ax[4]) sns.boxplot(y=data_thresholded.label, x=data_thresholded.Upper_case_vs_words) plt.xlabel('') plt.title('Upper_case_vs_words') # plt.suptitle(r"First $\bf{Five \ important}$ variables from Mutual Information.") plt.tight_layout() # - # #### Observation # Highly Overlapping distribution of the variables in respect to the target categories. # ## Correlation # calculate the correlation statisitcs corr = data.corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=bool)) plt.figure(figsize=(12,6)) sns.heatmap(corr, annot=True, mask=mask) plt.title('Feature Multi - collinearity Map') plt.show() # #### Observation for Correlation # # **None** of the first five important variables from Mutual Information exhibit **Multi - collinearity**. 
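# As a small extension of the heatmap check above, a helper that turns the same upper-triangle idea into an explicit list of strongly correlated pairs (the 0.8 threshold and the demo columns below are illustrative, not derived from this dataset):
# +
import numpy as np
import pandas as pd

def collinear_pairs(frame, threshold=0.8):
    """Return (feature_a, feature_b, |r|) for pairs whose absolute correlation exceeds threshold."""
    corr_abs = frame.corr().abs()
    # keep only the upper triangle (k=1 drops the diagonal) so each pair is reported once
    upper = corr_abs.where(np.triu(np.ones(corr_abs.shape, dtype=bool), k=1))
    return [(a, b, round(float(upper.loc[a, b]), 3))
            for a in upper.index
            for b in upper.columns
            if pd.notna(upper.loc[a, b]) and upper.loc[a, b] > threshold]

# Demo on made-up columns: word_count and char_count move together, exclamation_count does not
demo = pd.DataFrame({'word_count': [10, 25, 40, 55, 70],
                     'char_count': [52, 130, 210, 280, 350],
                     'exclamation_count': [0, 3, 1, 0, 2]})
print(collinear_pairs(demo))
# -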
# ## Unique word to words Ratio sns.scatterplot(x=data.label, y=data.unique_vs_words, hue=data.label, alpha=0.6) plt.title('Spread by target category') plt.grid() plt.show() def min_thresh(series): q1, q3 = np.percentile(series, [25, 75]) iqr = q3 - q1 min_th = q1 - 1.5*iqr return min_th lower = min_thresh(data.unique_vs_words) data['unique_ratio_lower'] = data.unique_vs_words.apply(lambda x: 'yes' if x0: #print ('togobegin',togo) togonext=[] for go in togo:#cell to explore #print ('currentcell',go) neighborlist = neighbors(go)#check neighbors of this cell for n in neighborlist: #for all neighbors if n not in seen: #if we didnt already reach them distances[n]=steps seen.append(go) togonext.append(n) togo = togonext.copy() steps+=1 # + jupyter={"outputs_hidden": true} maze # + jupyter={"outputs_hidden": true} np.max(distances) # + jupyter={"outputs_hidden": true} np.count_nonzero(np.array(distances>=1000)) # + jupyter={"outputs_hidden": true} maze # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # makeTagDirectory genome hg19 -update -checkGC import os os.chdir('../') print(os.getcwd()) import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.lines as mlines from matplotlib.colors import ListedColormap from matplotlib import cm def plotNeucleotideFreq(TagDir): TagFile = "tagFreq.txt" File = "homer/Tag.dir/" + TagDir + "/" + TagFile data = pd.read_csv(File, sep="\t") x = data.columns[0] y = data.columns[1:5] data.plot(x,y) plt.xlabel("Distance from 5' end of reads") plt.ylabel("Nucleotide Freq") plt.title("Genomic Nucleotide Frequency relative to read positions") plt.show() # + names = set() design = pd.read_table('design.tsv') samples = design.bamReads.values for sample in samples: #print(sample.strip(".bam")) names.add(sample.strip(".bam")) for (dirpath, dirnames, filenames) in os.walk(os.getcwd()): for dirname in dirnames: if dirname in names: plotNeucleotideFreq(dirname) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Dealing with imbalanced data by changing classification cut-off levels # # A problem with machine learning models is that they may end up biased towards the majority class, and under-predict the minority class(es). # # Some models (including sklearn's logistic regression) allow for thresholds of classification to be changed. This can help rebalance classification in models, especially where there is a binary classification (e.g. survived or not). # # Here we create a more imbalanced data set from the Titanic set, by dropping half the survivors. # # We vary the probability cut-off for a 'survived' classification, and examine the effect of classification probability cut-off on a range of accuracy measures. # # Note: You will need to look at the help files and documentation of other model types to find whether they have options to change classification cut-off levels. # Hide warnings (to keep notebook tidy; do not usually do this) import warnings warnings.filterwarnings("ignore") # ## Load modules # # A standard Anaconda install of Python (https://www.anaconda.com/distribution/) contains all the necessary modules. 
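# Before the full worked example, a minimal sketch of the cut-off idea described above, on synthetic data rather than the Titanic set (`make_classification` just generates a random imbalanced problem): `predict_proba` gives class probabilities, and we choose where to draw the line ourselves instead of accepting the default 0.5.
# +
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_toy, y_toy = make_classification(n_samples=2000, n_features=8,
                                   weights=[0.9, 0.1], random_state=42)
toy_model = LogisticRegression(solver='lbfgs').fit(X_toy, y_toy)
toy_probs = toy_model.predict_proba(X_toy)[:, 1]     # probability of the minority class

for cutoff in (0.5, 0.3, 0.1):
    print(f'cut-off {cutoff}: predicted positive rate = {np.mean(toy_probs >= cutoff):.3f}')
# Lowering the cut-off pushes the predicted positive rate up towards the true
# minority-class rate, trading precision for recall.
# -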
import numpy as np import pandas as pd # Import machine learning methods from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.model_selection import StratifiedKFold # ## Load data # # The section below downloads pre-processed data, and saves it to a subfolder (from where this code is run). # If data has already been downloaded that cell may be skipped. # + download_required = True if download_required: # Download processed data: address = 'https://raw.githubusercontent.com/MichaelAllen1966/' + \ '1804_python_healthcare/master/titanic/data/processed_data.csv' data = pd.read_csv(address) # Create a data subfolder if one does not already exist import os data_directory ='./data/' if not os.path.exists(data_directory): os.makedirs(data_directory) # Save data data.to_csv(data_directory + 'processed_data.csv', index=False) # - data = pd.read_csv('data/processed_data.csv') # Make all data 'float' type data = data.astype(float) # The first column is a passenger index number. We will remove this, as this is not part of the original Titanic passenger data. # + # Drop Passengerid (axis=1 indicates we are removing a column rather than a row) # We drop passenger ID as it is not original data data.drop('PassengerId', inplace=True, axis=1) # - # ## Artificially reduce the number of survivors (to make data set more imbalanced) # + # Shuffle original data data = data.sample(frac=1.0) # Sampling with a fraction of 1.0 shuffles data # Create masks for filters mask_died = data['Survived'] == 0 mask_survived = data['Survived'] == 1 # Filter data died = data[mask_died] survived = data[mask_survived] # Reduce survived by half survived = survived.sample(frac=0.5) # Recombine data and shuffle data = pd.concat([died, survived]) data = data.sample(frac=1.0) # Show average of survived survival_rate = data['Survived'].mean() print ('Proportion survived:', np.round(survival_rate,3)) # - # ## Define function to standardise data def standardise_data(X_train, X_test): # Initialise a new scaling object for normalising input data sc = StandardScaler() # Set up the scaler just on the training set sc.fit(X_train) # Apply the scaler to the training and test sets train_std=sc.transform(X_train) test_std=sc.transform(X_test) return train_std, test_std # ## Define function to measure accuracy # # The following is a function for multiple accuracy measures. def calculate_accuracy(observed, predicted): """ Calculates a range of accuracy scores from observed and predicted classes. Takes two list or NumPy arrays (observed class values, and predicted class values), and returns a dictionary of results. 
1) observed positive rate: proportion of observed cases that are +ve 2) Predicted positive rate: proportion of predicted cases that are +ve 3) observed negative rate: proportion of observed cases that are -ve 4) Predicted negative rate: proportion of predicted cases that are -ve 5) accuracy: proportion of predicted results that are correct 6) precision: proportion of predicted +ve that are correct 7) recall: proportion of true +ve correctly identified 8) f1: harmonic mean of precision and recall 9) sensitivity: Same as recall 10) specificity: Proportion of true -ve identified: 11) positive likelihood: increased probability of true +ve if test +ve 12) negative likelihood: reduced probability of true +ve if test -ve 13) false positive rate: proportion of false +ves in true -ve patients 14) false negative rate: proportion of false -ves in true +ve patients 15) true positive rate: Same as recall 16) true negative rate 17) positive predictive value: chance of true +ve if test +ve 18) negative predictive value: chance of true -ve if test -ve """ # Converts list to NumPy arrays if type(observed) == list: observed = np.array(observed) if type(predicted) == list: predicted = np.array(predicted) # Calculate accuracy scores observed_positives = observed == 1 observed_negatives = observed == 0 predicted_positives = predicted == 1 predicted_negatives = predicted == 0 true_positives = (predicted_positives == 1) & (observed_positives == 1) false_positives = (predicted_positives == 1) & (observed_positives == 0) true_negatives = (predicted_negatives == 1) & (observed_negatives == 1) accuracy = np.mean(predicted == observed) precision = (np.sum(true_positives) / (np.sum(true_positives) + np.sum(false_positives))) recall = np.sum(true_positives) / np.sum(observed_positives) sensitivity = recall f1 = 2 * ((precision * recall) / (precision + recall)) specificity = np.sum(true_negatives) / np.sum(observed_negatives) positive_likelihood = sensitivity / (1 - specificity) negative_likelihood = (1 - sensitivity) / specificity false_positive_rate = 1 - specificity false_negative_rate = 1 - sensitivity true_positive_rate = sensitivity true_negative_rate = specificity positive_predictive_value = (np.sum(true_positives) / np.sum(observed_positives)) negative_predictive_value = (np.sum(true_negatives) / np.sum(observed_negatives)) # Create dictionary for results, and add results results = dict() results['observed_positive_rate'] = np.mean(observed_positives) results['observed_negative_rate'] = np.mean(observed_negatives) results['predicted_positive_rate'] = np.mean(predicted_positives) results['predicted_negative_rate'] = np.mean(predicted_negatives) results['accuracy'] = accuracy results['precision'] = precision results['recall'] = recall results['f1'] = f1 results['sensitivity'] = sensitivity results['specificity'] = specificity results['positive_likelihood'] = positive_likelihood results['negative_likelihood'] = negative_likelihood results['false_positive_rate'] = false_positive_rate results['false_negative_rate'] = false_negative_rate results['true_positive_rate'] = true_positive_rate results['true_negative_rate'] = true_negative_rate results['positive_predictive_value'] = positive_predictive_value results['negative_predictive_value'] = negative_predictive_value return results # ## Divide into X (features) and y (labels) # # We will separate out our features (the data we use to make a prediction) from our label (what we are truing to predict). 
# By convention our features are called `X` (usually upper case to denote multiple features), and the label (survive or not) `y`. X = data.drop('Survived',axis=1) # X = all 'data' except the 'survived' column y = data['Survived'] # y = 'survived' column from 'data' # ## Assess accuracy, precision, recall and f1 at different model classification thresholds # ## Run our model with probability cut-off levels # # We will use stratified k-fold verification to assess the model performance. If you are not familiar with this please see: # # https://github.com/MichaelAllen1966/1804_python_healthcare/blob/master/titanic/03_k_fold.ipynb # + # Create NumPy arrays of X and y (required for k-fold) X_np = X.values y_np = y.values # Set up k-fold training/test splits number_of_splits = 10 skf = StratifiedKFold(n_splits = number_of_splits) skf.get_n_splits(X_np, y_np) # Set up thresholds thresholds = np.arange(0, 1.01, 0.2) # Create arrays for overall results (rows=threshold, columns=k fold replicate) results_accuracy = np.zeros((len(thresholds),number_of_splits)) results_precision = np.zeros((len(thresholds),number_of_splits)) results_recall = np.zeros((len(thresholds),number_of_splits)) results_f1 = np.zeros((len(thresholds),number_of_splits)) results_predicted_positive_rate = np.zeros((len(thresholds),number_of_splits)) # Loop through the k-fold splits loop_index = 0 for train_index, test_index in skf.split(X_np, y_np): # Create lists for k-fold results threshold_accuracy = [] threshold_precision = [] threshold_recall = [] threshold_f1 = [] threshold_predicted_positive_rate = [] # Get X and Y train/test X_train, X_test = X_np[train_index], X_np[test_index] y_train, y_test = y_np[train_index], y_np[test_index] # Get X and Y train/test X_train_std, X_test_std = standardise_data(X_train, X_test) # Set up and fit model model = LogisticRegression(solver='lbfgs') model.fit(X_train_std,y_train) # Get probability of non-survive and survive probabilities = model.predict_proba(X_test_std) # Take just the survival probabilities (column 1) probability_survival = probabilities[:,1] # Loop through increments in probability of survival for cutoff in thresholds: # loop 0 --> 1 on steps of 0.1 # Get whether passengers survive using cutoff predicted_survived = probability_survival >= cutoff # Call accuracy measures function accuracy = calculate_accuracy(y_test, predicted_survived) # Add accuracy scores to lists threshold_accuracy.append(accuracy['accuracy']) threshold_precision.append(accuracy['precision']) threshold_recall.append(accuracy['recall']) threshold_f1.append(accuracy['f1']) threshold_predicted_positive_rate.append(accuracy['predicted_positive_rate']) # Add results to results arrays results_accuracy[:,loop_index] = threshold_accuracy results_precision[:, loop_index] = threshold_precision results_recall[:, loop_index] = threshold_recall results_f1[:, loop_index] = threshold_f1 results_predicted_positive_rate[:, loop_index] = threshold_predicted_positive_rate # Increment loop index loop_index += 1 # Transfer results to dataframe results = pd.DataFrame(thresholds, columns=['thresholds']) results['accuracy'] = results_accuracy.mean(axis=1) results['precision'] = results_precision.mean(axis=1) results['recall'] = results_recall.mean(axis=1) results['f1'] = results_f1.mean(axis=1) results['predicted_positive_rate'] = results_predicted_positive_rate.mean(axis=1) # - # ### Plot results # + import matplotlib.pyplot as plt # %matplotlib inline chart_x = results['thresholds'] plt.plot(chart_x, results['accuracy'], 
linestyle = '-', label = 'Accuracy') plt.plot(chart_x, results['precision'], linestyle = '--', label = 'Precision') plt.plot(chart_x, results['recall'], linestyle = '-.', label = 'Recall') plt.plot(chart_x, results['f1'], linestyle = ':', label = 'F1') plt.plot(chart_x, results['predicted_positive_rate'], linestyle = '-', label = 'Predicted positive rate') actual_positive_rate = np.repeat(y.mean(), len(chart_x)) plt.plot(chart_x, actual_positive_rate, linestyle = '--', color='k', label = 'Actual positive rate') plt.xlabel('Probability threshold') plt.ylabel('Score') plt.xlim(-0.02, 1.02) plt.ylim(-0.02, 1.02) plt.legend(loc='upper right') plt.grid(True) plt.show() # - # ## Observations # # * Accuracy is maximised with a probability threshold of 0.5 # * When the threshold is set at 0.5 (the default threshold for classification) minority class ('survived') is under-predicted. # * A threshold of 0.4 balances precision and recall and correctly estimates the proportion of passengers who survive. # * There is a marginal reduction in overall accuracy in order to balance accuracy of the classes. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A2 Bias In Data # --- # # __TL;DR__ # The purpose of this exploration is to identify potential sources of bias in a corpus of human-annotated data, and describe some implications of those biases. Specifically, this analysis attempts to answer the question, _"How consistent are the labeling behaviors among workers with different demographic profiles?"_ in regards to two separate Wikimedia datasets: # 1. Toxicity # 2. Aggression # # # __RELATED LINKS:__ # _Data Source: https://figshare.com/projects/Wikipedia_Talk/16731_ # _Overview of the project: https://meta.wikimedia.org/wiki/Research:Detox_ # _Overview of the data: https://meta.wikimedia.org/wiki/Research:Detox/Data_Release_ # --- # ## DATA AQUISITION # The code below will download all the data needed in this analysis, using the `%%caption` magic keyword to suppress the outputs. # + # %%capture # download toxicity data # !wget https://ndownloader.figshare.com/files/7394539 -O Raw_Data/toxicity_annotations.tsv # !wget https://ndownloader.figshare.com/files/7394542 -O Raw_Data/toxicity_annotated_comments.tsv # !wget https://ndownloader.figshare.com/files/7640581 -O Raw_Data/toxicity_worker_demographics.tsv # download aggression data # !wget https://ndownloader.figshare.com/files/7394506 -O Raw_Data/aggression_annotations.tsv # !wget https://ndownloader.figshare.com/files/7038038 -O Raw_Data/aggression_annotated_comments.tsv # !wget https://ndownloader.figshare.com/files/7640644 -O Raw_Data/aggression_worker_demographics.tsv # downlaod personal attacks # !wget https://ndownloader.figshare.com/files/7554637 -O Raw_Data/attack_annotations.tsv # !wget https://ndownloader.figshare.com/files/7554634 -O Raw_Data/attack_annotated_comments.tsv # !wget https://ndownloader.figshare.com/files/7640752 -O Raw_Data/attack_worker_demographics.tsv # - # The code below will transform the downloaded data into a pandas dataframe. 
# + import pandas as pd # toxicity data toxicity_annotations = pd.read_csv("Raw_Data/toxicity_annotations.tsv", delimiter="\t") toxicity_annotated_comments = pd.read_csv("Raw_Data/toxicity_annotated_comments.tsv", delimiter="\t", index_col=0) toxicity_worker_demographics = pd.read_csv("Raw_Data/toxicity_worker_demographics.tsv", delimiter="\t") # aggression data aggression_annotations = pd.read_csv("Raw_Data/aggression_annotations.tsv", delimiter="\t") aggression_annotated_comments = pd.read_csv("Raw_Data/aggression_annotated_comments.tsv", delimiter="\t", index_col=0) aggression_worker_demographics = pd.read_csv("Raw_Data/aggression_worker_demographics.tsv", delimiter="\t") # personal attack data attack_annotations = pd.read_csv("Raw_Data/attack_annotations.tsv", delimiter="\t") attack_annotated_comments = pd.read_csv("Raw_Data/attack_annotated_comments.tsv", delimiter="\t", index_col=0) attack_worker_demographics = pd.read_csv("Raw_Data/attack_worker_demographics.tsv", delimiter="\t") # - # ## BIAS IN THE TOXICITY DATA SET # ### INITIAL EDA # From the documentation we know there are 160k labeled comments in the Toxicity dataset. First, let's look at the demographic distribution for the labelers in the toxicity dataset by building a few simple bar plots. # + # import libraries for visualizations import matplotlib.pyplot as plt import seaborn as sns # magic code for viewing plots using jupyter notebooks: # %matplotlib inline # + # plot education distribution sns.catplot(y="education", kind="count", data=toxicity_worker_demographics, order=["none", "some", "hs", "bachelors", "professional", "masters", "doctorate"], palette=("Blues")) # plot gender dist sns.catplot(x="gender", kind="count", data=toxicity_worker_demographics, palette=("Blues")) # plot age group dist sns.catplot(x="age_group", kind="count", data=toxicity_worker_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("Blues")) # plot language dist sns.catplot(x="english_first_language", kind="count", data=toxicity_worker_demographics, palette=("Blues")) plt.tight_layout() # - # Immediately, it's clear there is quite a lot of skewed data in the demographics. The following results become immediately apparent: # * Even though this is from english wikipedia, the vast majority of labelers are non-native English speakers. # * The majority of labelers are in the 18-30 age range. # * The majority of labelers are male. # * The majority of labelers hold a bachelors degree. # # We can combine some of this information to look for additional potential bias: # + # plot education v. gender sns.catplot(y="education", kind="count", hue = 'gender', data=toxicity_worker_demographics, order=["none", "some", "hs", "bachelors", "professional", "masters", "doctorate"], palette=("Blues")) # plot gender v. age sns.catplot(x="age_group", kind="count", hue = 'gender', data=toxicity_worker_demographics, palette=("Blues")) # plot age v. education sns.catplot(x="age_group", kind="count", hue = 'education', data=toxicity_worker_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("Blues")) # plot language v. gender sns.catplot(x="gender", kind="count", hue = 'english_first_language', data=toxicity_worker_demographics, palette=("Blues")) # plot language v. 
education sns.catplot(y="education", kind="count", hue = 'english_first_language', data=toxicity_worker_demographics, palette=("Blues")) # - # There is clearly a lot of bias towards non-native english speaking males with a bachelor's degree in the 18-30 age group. One interesting result here is that although there are significantly more men in the overall group, the _Over 60_ age group is biased in favor of women. Next, we will examine the types of columns that typically get labeled as toxic. We'll consider a comment to be toxic if at least 50% of reviewers labeled it as such. # + # labels a comment as toxic if the majority of annotators did so toxicity_labels = toxicity_annotations.groupby('rev_id')['toxicity'].mean() > 0.5 # join the labels and comments toxicity_annotated_comments['labeled_toxic'] = toxicity_labels # remove newline and tab tokens toxicity_annotated_comments['comment'] = toxicity_annotated_comments['comment'].apply(lambda x: x.replace("NEWLINE_TOKEN", " ")) toxicity_annotated_comments['comment'] = toxicity_annotated_comments['comment'].apply(lambda x: x.replace("TAB_TOKEN", " ")) # - # If we preview the results we can verify that these comments indeed appear to be toxic. # preview the results toxicity_annotated_comments.loc[toxicity_annotated_comments['labeled_toxic'] == True].head(10) # We can quickly look for common words contained in these toxic comments. # + # create a list of the 200 most common words in the comments from collections import Counter wordlist = Counter(" ".join(toxicity_annotated_comments.loc[toxicity_annotated_comments['labeled_toxic'] == True]['comment'].str.lower()).split()).most_common(75) # remove common stop words from the list import nltk nltk.download('stopwords') from nltk.corpus import stopwords stopword_list = set(stopwords.words('english')) # add a few extra words to the stop word list stopword_list |= set(['==', '-', '`', "I'm", 'wikipedia', 'wiki', '====', '.', 'me', 'im', 'hi']) # print the final result for item in wordlist: if item[0] not in stopword_list: print(item) # - # The results are not particulary surprising. This is the sort of langauge I would expect to find in toxic comments. With few exceptions, most of these words do not have an alternate definitions. In search of bias, we can examine how these comments were treated by the labelers based on on demographic breakdowns. # ### ANALYSIS # The analysis below attempts to answer the question, "How consistent are the toxicity labeling behaviors among workers with different demographic profiles? # Since the data is divided into three separate files, it must be joined in various ways in order to analyze. # + # join demographics to annotations joined_toxicity_demographics = toxicity_annotations.join(toxicity_worker_demographics, on="worker_id", rsuffix="_r") # join comments to annotations joined_toxicity_comments = toxicity_annotations.join(toxicity_annotated_comments, on="rev_id", rsuffix="_r") # - # Quickly previewing the results of each table: display(joined_toxicity_demographics.head(3)) display(joined_toxicity_comments.head(3)) # We can calculate a mean toxicity score for each individual and add that to the demographics table. 
# + # calculate average toxicity per user avg_worker_toxicity = joined_toxicity_demographics.groupby("worker_id")["toxicity_score"].mean() # join the average toxicity to the demographics table toxicity_worker_demographics = toxicity_worker_demographics.join( avg_worker_toxicity ) # preview the results toxicity_worker_demographics.head() # - # There is quite a lot of bias in the toxicity score by age group. This becomes more apparent in graph form. There is appears to be a direct linear relationship between age and the average toxicity score. # + # create barplot, specify the order sns_plot = sns.barplot(x="age_group", y="toxicity_score", data=toxicity_worker_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("Blues")) # save results for README file fig = sns_plot.get_figure() fig.savefig('output.png') # - # However, the scoring is much more uniform across the other demographic groups (education, language, and gender). # create barplot, specify the order sns.barplot(y="education", x="toxicity_score", data=toxicity_worker_demographics, order=["none", "some", "hs", "bachelors", "professional", "masters", "doctorate"], palette=("Blues")) # create barplot, specify the order sns.barplot(x="english_first_language", y="toxicity_score", data=toxicity_worker_demographics, palette=("Blues")) # create barplot, specify the order sns.barplot(x="gender", y="toxicity_score", data=toxicity_worker_demographics, palette=("Blues_d")) # The most dramatic difference in scoring appears to be from individuals in the over 60 group. Males over 60 appear to have a wide range of opinions on terms of scoring. sns.barplot(x="toxicity_score", y="gender", data=toxicity_worker_demographics.loc[toxicity_worker_demographics['age_group'] == "Over 60"], palette=("Blues")) # Now, we can calculate the average toxicity score per comment. We can see which demographics differ from the mean score in the most dramatic ways. # + # calculate average toxicity per rev_id avg_rev_toxicity = joined_toxicity_demographics.groupby("rev_id")["toxicity_score"].mean() # joined the average toxicity per rev to the dataframe joined_toxicity_demographics = joined_toxicity_demographics.join(avg_rev_toxicity, on="rev_id", rsuffix="_r") # calculate the user's difference from the mean joined_toxicity_demographics["dif_from_mean"] = joined_toxicity_demographics['toxicity_score'] - joined_toxicity_demographics['toxicity_score_r'] # preview results joined_toxicity_demographics # - sns.barplot(x="dif_from_mean", y="gender", data=joined_toxicity_demographics, palette=("Blues")) # In general, it appears that men have a tendency to rate comments as slightly more toxic than women. Looking at the age groups produces an even more startling result. Users over 60 deviate dramatically from the average toxicity score. Clearly, in this group there appears to be much less sensitivity towards toxic comments. sns.barplot(x="dif_from_mean", y="age_group", data=joined_toxicity_demographics, palette=("Blues")) # ## BIAS IN THE AGGRESSION DATASET # ### INITIAL EDA # There are about 100k labeled comments in the `aggression` data set. We can repeat a similar analysis here. 
# + # plot education distribution sns.catplot(y="education", kind="count", data=aggression_worker_demographics, order=["none", "some", "hs", "bachelors", "professional", "masters", "doctorate"], palette=("crest")) # plot gender dist sns.catplot(x="gender", kind="count", data=aggression_worker_demographics, palette=("crest")) # plot age group dist sns.catplot(x="age_group", kind="count", data=aggression_worker_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("crest")) # plot language dist sns.catplot(x="english_first_language", kind="count", data=aggression_worker_demographics, palette=("crest")) # - # Here again, we see similiar issues to the toxicity dataset. It appears as though this is a very similiar group of individuals was responsible for categorizing these comments as well. We will again examine the types of columns that typically get labeled as aggressive. We'll consider a comment to be toxic if at least 50% of reviewers labeled it as such. # + # labels a comment as toxic if the majority of annotators did so aggression_labels = aggression_annotations.groupby('rev_id')['aggression'].mean() > 0.5 # join the labels and comments aggression_annotated_comments['labeled_aggression'] = aggression_labels # remove newline and tab tokens aggression_annotated_comments['comment'] = aggression_annotated_comments['comment'].apply(lambda x: x.replace("NEWLINE_TOKEN", " ")) aggression_annotated_comments['comment'] = aggression_annotated_comments['comment'].apply(lambda x: x.replace("TAB_TOKEN", " ")) # - # preview the results aggression_annotated_comments.loc[aggression_annotated_comments['labeled_aggression'] == True].head(15) # The tone of these comments appears to be very similiar to the toxic comments. It doesn't seem as though there will be a lot of value to be gained from running a keyword analysis against this dataset for EDA as language is quite simliar. Instead, we will move on to the analysis. # ### AGGRESSION ANALYSIS # The analysis below attempts to answer the question, "How consistent are the aggression labeling behaviors among workers with different demographic profiles? # Again, we will start by joining the data files. # + # join demographics to annotations joined_aggression_demographics = aggression_annotations.join(aggression_worker_demographics, on="worker_id", rsuffix="_r") # join comments to annotations joined_aggression_comments = aggression_annotations.join(aggression_annotated_comments, on="rev_id", rsuffix="_r") # preview the results of each table display(joined_aggression_demographics.head(3)) display(joined_aggression_comments.head(3)) # + # calculate average attack score per user avg_worker_aggression = joined_aggression_demographics.groupby("worker_id")["aggression_score"].mean() # join the average toxicity to the demographics table aggression_worker_demographics = aggression_worker_demographics.join( avg_worker_aggression ) # preview the results aggression_worker_demographics.head() # - # We can now look at the average aggression score, broken down by demographic. For this dataset, the signs have switched, meaning positive numbers are friendly and negative numbers indicate an aggressive comment. 
# create barplot, specify the order sns.barplot(x="age_group", y="aggression_score", data=aggression_worker_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("crest")) sns.barplot(y="education", x="aggression_score", data=aggression_worker_demographics, order=["none", "some", "hs", "bachelors", "professional", "masters", "doctorate"], palette=("crest")) # create barplot, specify the order sns.barplot(x="english_first_language", y="aggression_score", data=aggression_worker_demographics, palette=("crest")) # create barplot, specify the order sns.barplot(x="gender", y="aggression_score", data=aggression_worker_demographics.loc[aggression_worker_demographics['gender'].isin(['female', 'male'])], palette=("crest")) # These results look much more uniform than the toxicity dataset. Most groups appear to have a similar scoring averages. Again, let's calculate the average aggression score per comment in an attempt to see which demographic groups differ the most from the mean score. # + # calculate average toxicity per rev_id avg_rev_aggression = joined_aggression_demographics.groupby("rev_id")["aggression_score"].mean() # joined the average toxicity per rev to the dataframe joined_aggression_demographics = joined_aggression_demographics.join(avg_rev_aggression, on="rev_id", rsuffix="_r") # calculate the user's difference from the mean joined_aggression_demographics["dif_from_mean"] = joined_aggression_demographics['aggression_score'] - joined_aggression_demographics['aggression_score_r'] # preview results joined_aggression_demographics # - # If we break this down by education we can see a large difference in the scoring of those with no education. sns.barplot(x="dif_from_mean", y="education", data=joined_aggression_demographics, palette=("crest")) # When we look at the difference in scoring across age groups, the result is quite alarming. There is very little consensus among teenagers and senior citizens when it comes to labeling an aggressive comment. On average, labelers under 18 tend to view comments as more aggressive, while those over 60 tend to view comments as less aggressive. sns.barplot(x="dif_from_mean", y="age_group", data=joined_aggression_demographics, order=["Under 18", "18-30", "30-45", "45-60", "Over 60"], palette=("crest")) # ## SUMMARY OF RESULTS # For both datasets examined in this notebook, the largest demographic factor in relation to labeling behavior was age. Users in the _Under 18_ age group consistently labeled comments in a different way than the _Over 60_ group. If a single comment was examined by 10 users in the _Over 60_ age group, they would be significantly more likely to label it friendly and non-toxic. The opposite would be true if the 10 labelers were replaced with teenagers. Since flagging comments is a classification problem requiring a binary designation, this lack of consesus should be handled carefully when attempting to create machine learning models. # ## IMPLICATIONS # _Which, if any, of these demo applications would you expect the Perspective API—or any model trained on the Wikipedia Talk corpus—to perform well in? Why?_ # # I believe the comment filter may be an effective application based on my findings. Since different demographic groups clearly have a differing tolerance levels for toxic and aggressive comments, this tool would allow users to set their own personal threshold. 
The potentially toxic or aggressive comments would not be removed from the site and would be available for anyone to read, but they wouldn't be immediately visible to underage users (the group that tends to find content most aggressive and toxic). # _What are some potential unintended, negative consequences of using the Perspective API for any of these purposes? In your opinion, are these consequences likely or serious enough that you would recommend that the Perspective API not be used in these applications? Why or why not?_ # # There are some clear differences in the labeling behavior among age groups. If a model is simply trained based on the aggregated labeling results, one would expect that users under the age of 18 are more likely to see content that they find toxic or aggressive on the site. Additionally, users over the age of 60 would be more likely to see content taken off the site that they feel is inoffensive. Since there is no clear consensus on which comments are offensive, building a model on this dataset may simply exacerbate existing problems. # _Which, if any, of these demo applications would you expect the Perspective API to perform poorly in? Why?_ # # I would expect the toxicity timeline to be less informative than some of the other demo projects. This tool may simply end up tracking behavior among demographic groups and share very little insight into the comments themselves. Since different groups have differing opinions on these comments, the tool may simply reflect the different times of day that particular groups of people are active online. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from hyper_parameters import * BN_EPSILON = 0.001 # + def activation_summary(x): ''' :param x: A Tensor :return: Add histogram summary and scalar summary of the sparsity of the tensor ''' tensor_name = x.op.name tf.summary.histogram(tensor_name + '/activations', x) tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x)) def create_variables(name, shape, initializer=tf.contrib.layers.xavier_initializer(), is_fc_layer=False): ''' :param name: A string. The name of the new variable :param shape: A list of dimensions :param initializer: User Xavier as default. :param is_fc_layer: Want to create fc layer variable? May use different weight_decay for fc layers. :return: The created variable ''' ## TODO: to allow different weight decay to fully connected layer and conv layer if is_fc_layer is True: regularizer = tf.contrib.layers.l2_regularizer(scale=FLAGS.weight_decay) else: regularizer = tf.contrib.layers.l2_regularizer(scale=FLAGS.weight_decay) new_variables = tf.get_variable(name, shape=shape, initializer=initializer, regularizer=regularizer) return new_variables def output_layer(input_layer, num_labels): ''' :param input_layer: 2D tensor :param num_labels: int. How many output labels in total? 
(10 for cifar10 and 100 for cifar100) :return: output layer Y = WX + B ''' input_dim = input_layer.get_shape().as_list()[-1] fc_w = create_variables(name='fc_weights', shape=[input_dim, num_labels], is_fc_layer=True, initializer=tf.uniform_unit_scaling_initializer(factor=1.0)) fc_b = create_variables(name='fc_bias', shape=[num_labels], initializer=tf.zeros_initializer()) fc_h = tf.matmul(input_layer, fc_w) + fc_b return fc_h def batch_normalization_layer(input_layer, dimension): ''' Helper function to do batch normalziation :param input_layer: 4D tensor :param dimension: input_layer.get_shape().as_list()[-1]. The depth of the 4D tensor :return: the 4D tensor after being normalized ''' mean, variance = tf.nn.moments(input_layer, axes=[0, 1, 2]) beta = tf.get_variable('beta', dimension, tf.float32, initializer=tf.constant_initializer(0.0, tf.float32)) gamma = tf.get_variable('gamma', dimension, tf.float32, initializer=tf.constant_initializer(1.0, tf.float32)) bn_layer = tf.nn.batch_normalization(input_layer, mean, variance, beta, gamma, BN_EPSILON) return bn_layer def conv_bn_relu_layer(input_layer, filter_shape, stride): ''' A helper function to conv, batch normalize and relu the input tensor sequentially :param input_layer: 4D tensor :param filter_shape: list. [filter_height, filter_width, filter_depth, filter_number] :param stride: stride size for conv :return: 4D tensor. Y = Relu(batch_normalize(conv(X))) ''' out_channel = filter_shape[-1] filter = create_variables(name='conv', shape=filter_shape) conv_layer = tf.nn.conv2d(input_layer, filter, strides=[1, stride, stride, 1], padding='SAME') bn_layer = batch_normalization_layer(conv_layer, out_channel) output = tf.nn.relu(bn_layer) return output def bn_relu_conv_layer(input_layer, filter_shape, stride): ''' A helper function to batch normalize, relu and conv the input layer sequentially :param input_layer: 4D tensor :param filter_shape: list. [filter_height, filter_width, filter_depth, filter_number] :param stride: stride size for conv :return: 4D tensor. Y = conv(Relu(batch_normalize(X))) ''' in_channel = input_layer.get_shape().as_list()[-1] bn_layer = batch_normalization_layer(input_layer, in_channel) relu_layer = tf.nn.relu(bn_layer) filter = create_variables(name='conv', shape=filter_shape) conv_layer = tf.nn.conv2d(relu_layer, filter, strides=[1, stride, stride, 1], padding='SAME') return conv_layer def residual_block(input_layer, output_channel, first_block=False): ''' Defines a residual block in ResNet :param input_layer: 4D tensor :param output_channel: int. return_tensor.get_shape().as_list()[-1] = output_channel :param first_block: if this is the first residual block of the whole network :return: 4D tensor. ''' input_channel = input_layer.get_shape().as_list()[-1] # When it's time to "shrink" the image size, we use stride = 2 if input_channel * 2 == output_channel: increase_dim = True stride = 2 elif input_channel == output_channel: increase_dim = False stride = 1 else: raise ValueError('Output and input channel does not match in residual blocks!!!') # The first conv layer of the first residual block does not need to be normalized and relu-ed. 
with tf.variable_scope('conv1_in_block'): if first_block: filter = create_variables(name='conv', shape=[3, 3, input_channel, output_channel]) conv1 = tf.nn.conv2d(input_layer, filter=filter, strides=[1, 1, 1, 1], padding='SAME') else: conv1 = bn_relu_conv_layer(input_layer, [3, 3, input_channel, output_channel], stride) with tf.variable_scope('conv2_in_block'): conv2 = bn_relu_conv_layer(conv1, [3, 3, output_channel, output_channel], 1) # When the channels of input layer and conv2 does not match, we add zero pads to increase the # depth of input layers if increase_dim is True: pooled_input = tf.nn.avg_pool(input_layer, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') padded_input = tf.pad(pooled_input, [[0, 0], [0, 0], [0, 0], [input_channel // 2, input_channel // 2]]) else: padded_input = input_layer output = conv2 + padded_input return output # - def inference(input_tensor_batch, n, reuse): ''' The main function that defines the ResNet. total layers = 1 + 2n + 2n + 2n +1 = 6n + 2 :param input_tensor_batch: 4D tensor :param n: num_residual_blocks :param reuse: To build train graph, reuse=False. To build validation graph and share weights with train graph, resue=True :return: last layer in the network. Not softmax-ed ''' layers = [] with tf.variable_scope('conv0', reuse=reuse): conv0 = conv_bn_relu_layer(input_tensor_batch, [3, 3, 3, 16], 1) activation_summary(conv0) layers.append(conv0) for i in range(n): with tf.variable_scope('conv1_%d' %i, reuse=reuse): if i == 0: conv1 = residual_block(layers[-1], 16, first_block=True) else: conv1 = residual_block(layers[-1], 16) activation_summary(conv1) layers.append(conv1) for i in range(n): with tf.variable_scope('conv2_%d' %i, reuse=reuse): conv2 = residual_block(layers[-1], 32) activation_summary(conv2) layers.append(conv2) for i in range(n): with tf.variable_scope('conv3_%d' %i, reuse=reuse): conv3 = residual_block(layers[-1], 64) layers.append(conv3) assert conv3.get_shape().as_list()[1:] == [8, 8, 64] with tf.variable_scope('fc', reuse=reuse): in_channel = layers[-1].get_shape().as_list()[-1] bn_layer = batch_normalization_layer(layers[-1], in_channel) relu_layer = tf.nn.relu(bn_layer) global_pool = tf.reduce_mean(relu_layer, [1, 2]) assert global_pool.get_shape().as_list()[-1:] == [64] output = output_layer(global_pool, 10) layers.append(output) return layers[-1] def test_graph(train_dir='logs'): ''' Run this function to look at the graph structure on tensorboard. A fast way! 
:param train_dir: ''' input_tensor = tf.constant(np.ones([128, 32, 32, 3]), dtype=tf.float32) result = inference(input_tensor, 2, reuse=False) init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) summary_writer = tf.train.SummaryWriter(train_dir, sess.graph) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd from sklearn.metrics import balanced_accuracy_score as bas import gc gc.enable() from sklearn.linear_model import LogisticRegression, SGDClassifier from os import listdir from os.path import isfile, join # - data = pd.read_csv('../data-reduced-800-v3-shuffled.csv', index_col = 0) testfile = pd.read_csv('../test.csv') catcode = pd.read_csv('../data-simplified-1-catcode.csv', header = None, names = ['category'])['category'] sp = int(len(data) * 0.8) # Split Point sp2 = len(data) y_train = data.category.values[:sp] y_test = data.category.values[sp:sp2] rel = data.label_quality.values[sp:sp2] # ## Val DIR = '../ensemb3/' from os import listdir from os.path import isfile, join val = {f[4:-4] : f for f in listdir(DIR) if ((isfile(join(DIR, f))) and f[0] == 'v')} val # %%time for k, v in val.items(): print(k) val[k] = pd.read_csv(DIR+v, dtype = np.float32, header = None, index_col = False) classes = pd.Series(range(1588)) out = classes.loc[~classes.isin(pd.Series(y_test[rel == 0]))] outsiders = pd.Series(y_test).isin(out) num_rels = (rel == 0).sum() num_rels, len(rel) dfs = sorted(val.keys() - {}) X = np.zeros((len(y_test) * 1588, len(dfs)), dtype = np.float32) y = np.zeros((len(X)), dtype = np.int32) val_len = len(y_test) for k, v in val.items(): print(k, v.shape) # + # # %%time mask = rel == 0 for i in range(1588): for d, df in enumerate(dfs): X[i * val_len : (i + 1) * val_len, d] = val[df].values[:, i] y[i * val_len : (i + 1) * val_len] = 1 * (y_test == i) # X[i * val_len : (i + 1) * val_len, d] = val[df].values[mask, i] # y[i * val_len : (i + 1) * val_len] = 1 * (y_test[mask] == i) # - ix = np.random.randint(0, len(X), len(X) // 100) # better than shuffle arange ??? how? # %%time est = SGDClassifier(loss = 'modified_huber', max_iter=10, tol = 1e-5, penalty='l2', alpha = 1e-4) est.fit(X[ix], y[ix]) # consider sample weight? 
# %%time # 0.916317 (10) new = sum(val[df].values[mask] * coef for df, coef in zip(dfs, est.coef_.flatten())) / est.coef_.sum() val_pred = new.argmax(axis = 1) print(bas(y_test[rel == 0], val_pred)) # 0.9170534864774611 for df, coef in zip(dfs, est.coef_.flatten()): print(f"'{df}' : {coef},") # These values were obtained in the cell above coefval = { 'lstm_char_unproc' : 0.15214980219793464, 'lstm_word' : 0.5057028723619695, 'mnb' : 0.45090371936039003, 'sgd_char-v7' : 0.32103481728540106, 'sgd_char-v7_45' : 0.35221132620470513, 'sgd_word-v7' : 0.39905860580347985, } # %%time ensemble_val = sum(val[df] * coef for df, coef in coefval.items()) / sum(coefval.values()) # %%time ensemble_val.to_csv('../ensens/val_ensemble11.csv', index = False, header = False) # ## Test DIR = '../ensemb3/' test = {f[5:-4] : f for f in listdir(DIR) if ((isfile(join(DIR, f))) and f[0] == 't')} test, len(test) # %%time for k, v in test.items(): print(k) test[k] = pd.read_csv(DIR+v, header = None, index_col = False) # These values were obtained in the section above coeftest = { 'lstm_char_unproc' : 0.15214980219793464, 'lstm_word' : 0.5057028723619695, 'mnb' : 0.45090371936039003, 'sgd_char-v7' : 0.32103481728540106, 'sgd_char-v7_45' : 0.35221132620470513, 'sgd_word-v7' : 0.39905860580347985, } # %%time ensemble = sum(test[df] * coef for df, coef in coeftest.items()) / sum(coeftest.values()) sample = pd.read_csv('../sample_submission.csv') # %%time sample['category'] = np.vectorize(catcode.get)(ensemble.values.argmax(axis=1)) sample.head() sample.to_csv('../submission_ensemble11.csv', index = False) sample.shape pd.read_csv('../submission_ensemble11.csv').head() # %%time ensemble.to_csv('../ensens/test_ensemble11.csv', index = False, header = False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Import Libraries import os import glob import re import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) # ## Read Config File import configparser config = configparser.ConfigParser() config.read('config.ini') ip = config['DEFAULT']['IP'] port = config['DEFAULT']['MongoDB-Port'] db_name = config['DEFAULT']['DB-Name'] contain_string = config['DEFAULT']['Contain-String'] output_path = config['DEFAULT']['Output-Path'] # ## Connect MongoDB from pymongo import MongoClient client = MongoClient(ip, int(port)) # ## Get Collection Name # connect to database db_twitter = client[db_name] collections_twitter = db_twitter.collection_names() # + dic_collection = {} for i in collections_twitter: if contain_string in i: dic_collection[i] = "{:}".format(db_twitter[i].find({}).count()) for key in sorted(dic_collection): print("%s: %s" % (key, dic_collection[key])) # - # ## Pipeline # pipeline for aggregation pipeline = [ {"$match": { "entities.hashtags": {"$exists":True,"$ne":[]}}}, {"$match": { "lang" : "en"}}, { "$group": { "_id": { "hashtags": "$entities.hashtags", "date": {"$substr": [ "$created_at", 4, 6 ]} }, "count": { "$sum": 1 }, } } ] # ## Supporting Functions # check if stringis English def isEnglish(s): try: s.encode('ascii') except UnicodeEncodeError: return False else: return True # create foler if not exist def create_folder(output_path): if not os.path.exists(output_path): os.makedirs(output_path) # delete existed collection from the list dic_collection def delete_collection(output_path,dic_collection): for input_file in 
glob.glob(os.path.join(output_path,'*.csv')): collection_name = re.search(output_path+'(.+?).csv', input_file).group(1) if collection_name in dic_collection: print("Existed collection: " + collection_name) del dic_collection[collection_name] return dic_collection def check_exist(data_format,h,date_year,exist,dic,count): for d in data_format: if (h == d["hashtag"]) and (date_year == d["date"]): d["count"] += count exist = 1 if exist == 0: data_format.append(dic) return data_format # count hashtag daily for each collection def count_hashtag(length,hashtags,date_year,count,data_format): for i in range(0,length-1): exist = 0 # get hashtag h = hashtags[i]["text"].lower() # check if it is in English if isEnglish(h): dic = {"hashtag": h, "date":date_year, "count" : count} if len(data_format) > 0 : #check if the record exists data_format = check_exist(data_format,h,date_year,exist,dic,count) else: data_format.append(dic) else: num_delete += 1 return data_format, num_delete # create data list def create_list(data_list,data_format,year): num_delete = 0 for data in data_list: hashtags = data["_id"]["hashtags"] date_year = data["_id"]["date"] + " " + year count = data["count"] length = len(hashtags) data_format,num_delete = count_hashtag(length,hashtags,date_year,count,data_format) return data_format,num_delete # write csv file def write_csv(collection,data_format,output_path): file_name = output_path + collection + ".csv" with open(file_name, 'w') as f: # header f.write('hashtag,date,count\n') for data in data_format: line = data['hashtag'] + ',' + data['date'] + ',' + str(data["count"]) + '\n' f.write(line) # ## Count the number of hashtag daily # + #create folder if not exist create_folder(output_path) # delete existed collection from the list dic_collection dic_collection = delete_collection(output_path,dic_collection) for collection in sorted(dic_collection): print("-----------------------\n") print("Processing on collection: " + collection) data_format = [] data_list = list(db_twitter[collection].aggregate(pipeline,allowDiskUse=True)) year = collection[-4:] if len(data_list) > 0: data_format, num_delete = create_list(data_list,data_format,year) print("hashtag and date list is finished") print(str(num_delete) + " non-English hashtags have been deleted.") write_csv(collection,data_format,output_path) print ("hashtag csv file for collection " + collection + ' is finished.') print("-----------------------\n") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ___ # # # ___ # # Plotly and Cufflinks # Plotly is a library that allows you to create interactive plots that you can use in dashboards or websites (you can save them as html files or static images). # # ## Installation # # In order for this all to work, you'll need to install plotly and cufflinks to call plots directly off of a pandas dataframe. These libraries are not currently available through **conda** but are available through **pip**. 
Install the libraries at your command line/terminal using: # # pip install plotly # pip install cufflinks # # **NOTE: Make sure you only have one installation of Python on your computer when you do this, otherwise the installation may not work.** # # ## Imports and Set-up import pandas as pd import numpy as np # %matplotlib inline from IPython.display import HTML HTML(''' To toggle on/off output_stderr, click here.''') # To hide warnings, which won't change the desired outcome. # %%HTML # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# | Feature Name | Description | Metrics |
# |---|---|---|
# | RecordID | A unique integer for each ICU stay | Integer |
# | Age | Age | years |
# | Height | Height | cm |
# | ICUtype | ICU Type | 1: Coronary Care Unit, 2: Cardiac Surgery Recovery Unit, 3: Medical ICU, or 4: Surgical ICU |
# | Gender | Gender | 0: female, or 1: male |
#

# These 37 variables may be observed once, more than once, or not at all in some cases: #

# | Feature Name | Description | Metrics |
# |---|---|---|
# | Albumin | Albumin | g/dL |
# | ALP | Alkaline phosphatase | IU/L |
# | ALT | Alanine transaminase | IU/L |
# | AST | Aspartate transaminase | IU/L |
# | Bilirubin | Bilirubin | mg/dL |
# | BUN | Blood urea nitrogen | mg/dL |
# | Cholesterol | Cholesterol | mg/dL |
# | Creatinine | Serum creatinine | mg/dL |
# | DiasABP | Invasive diastolic arterial blood pressure | mmHg |
# | FiO2 | Fractional inspired O2 | 0-1 |
# | GCS | Glasgow Coma Score | 3-15 |
# | Glucose | Serum glucose | mg/dL |
# | HCO3 | Serum bicarbonate | mmol/L |
# | HCT | Hematocrit | % |
# | HR | Heart rate | bpm |
# | K | Serum potassium | mEq/L |
# | Lactate | Lactate | mmol/L |
# | Mg | Serum magnesium | mmol/L |
# | MAP | Invasive mean arterial blood pressure | mmHg |
# | MechVent | Mechanical ventilation respiration | 0: false, or 1: true |
# | Na | Serum sodium | mEq/L |
# | NIDiasABP | Non-invasive diastolic arterial blood pressure | mmHg |
# | NIMAP | Non-invasive mean arterial blood pressure | mmHg |
# | NISysABP | Non-invasive systolic arterial blood pressure | mmHg |
# | PaCO2 | Partial pressure of arterial CO2 | mmHg |
# | PaO2 | Partial pressure of arterial O2 | mmHg |
# | pH | Arterial pH | 0-14 |
# | Platelets | Platelets | cells/nL |
# | RespRate | Respiration rate | bpm |
# | SaO2 | O2 saturation in hemoglobin | % |
# | SysABP | Invasive systolic arterial blood pressure | mmHg |
# | Temp | Temperature | °C |
# | TropI | Troponin-I | μg/L |
# | TropT | Troponin-T | μg/L |
# | Urine | Urine output | mL |
# | WBC | White blood cell count | cells/nL |
# | Weight | Weight | kg |
#
# Outcomes-Related Descriptors:
#
# | Outcomes | Description | Metrics |
# |---|---|---|
# | SAPS-I | score (Le Gall et al., 1984) | between 0 to 163 |
# | SOFA | score (Ferreira et al., 2001) | between 0 to 4 |
# | Length of stay | Length of stay | days |
# | Survival | Survival | days |
# | In-hospital death | Target variable | 0: survivor, or 1: died in-hospital |
# # Import Packages # #
# + # Import packages import pandas as pd import numpy as np import seaborn as sns from sklearn.preprocessing import RobustScaler, OneHotEncoder from sklearn.pipeline import FeatureUnion from sklearn.base import BaseEstimator, TransformerMixin from statsmodels.stats.outliers_influence import variance_inflation_factor pd.set_option('display.max_columns', 200) pd.set_option('display.max_rows',200) sns.set_style('whitegrid') sns.set(rc={"figure.figsize": (15, 8)}) # %config InlineBackend.figure_format = 'retina' # %matplotlib inline # + # Simple cleaning on csv file mortality = pd.read_csv('mortality_filled_median.csv') mortality.drop(['Unnamed: 0'],axis=1,inplace=True) mortality.set_index('recordid',inplace=True) # - X = mortality.drop('in-hospital_death',axis=1) # identify predictors #
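# The excerpt above only defines the predictor matrix X. For the supervised steps further on (e.g. the
# decision-tree feature importance in Step 3 below), a target vector is needed as well; a minimal sketch,
# assuming the same `in-hospital_death` column that is dropped above:

y = mortality['in-hospital_death']  # identify binary target (0: survivor, or 1: died in-hospital)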
# #

# # Feature Selection #

# Step 1: Preprocess
#   - Get dummy variables for categorical features
#   - Scale numerical features with RobustScaler (scaled with the median and the interquartile range so that outliers are taken into account)
#
# Step 2: Variance Inflation Factor
#   - Drop columns that are multicollinear with each other
#
# Step 3: Decision Trees on Feature Importance
#   - Rank features based on importance
#
# Step 4: Reduce dimensionality using PCA

class Categorical_Extractor(BaseEstimator, TransformerMixin): def __init__(self): pass def get_dummy_(self,df): icu_dummy = pd.get_dummies(df['icutype'],drop_first=True) df = pd.concat((df['gender'],icu_dummy),axis=1) return df.values.reshape(-1, 4) def transform(self, X, *args): X = self.get_dummy_(X) return X def fit(self, X, *args): return self class Numerical_Extractor(BaseEstimator, TransformerMixin): def __init__(self): pass def robust_scale_(self,df,threshold=10): num_col = [] for col, num in df.nunique().iteritems(): if num > threshold: num_col.append(col) df = df[num_col] rscaler = RobustScaler() array = rscaler.fit_transform(df) return array.reshape(-1,array.shape[1]) def transform(self, X, *args): X = self.robust_scale_(X) return X def fit(self, X, *args): return self feature_union = FeatureUnion([ ('numerical_scaler',Numerical_Extractor()), ('categorical_transform',Categorical_Extractor()) ]) transformed_X = feature_union.transform(X) new_columns = list(X.columns) new_columns.extend(['icutype3','icutype4']) transformed_X = pd.DataFrame(transformed_X,columns=new_columns,index=X.index) transformed_X.rename(columns={'icutype':'icutype2'},inplace=True) transformed_X.head() transformed_X.shape #
# # We have a total of 110 features now. # #
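# As a reminder (not part of the original notebook), the variance inflation factor for feature $i$ is computed
# from the $R_i^2$ of regressing that feature on all of the other features:
#
# $$\mathrm{VIF}_i = \frac{1}{1 - R_i^2}$$
#
# A VIF of 1 indicates no collinearity with the remaining features, while values above 5 (the cutoff applied
# below) are commonly taken as a sign of problematic multicollinearity.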
# + # Variance Inflation Factor vif = [variance_inflation_factor(transformed_X.iloc[:,:-4].values,i) for i in range(transformed_X.iloc[:,:-4].shape[1])] # + sns.set_style('whitegrid') sns.set(rc={"figure.figsize": (15, 15)}) vif_df = pd.DataFrame(transformed_X.iloc[:,:-4].columns,columns=['Features']) vif_df['VIF'] = vif sns.barplot(x='VIF',y='Features',data=vif_df) # - remove_col = list(vif_df[vif_df['VIF']>5]['Features']) selected_X = transformed_X.drop(remove_col,axis=1) selected_X.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Predicting Aberrations in The Hippo Signaling Pathway # ## To do this, we will use Hetnets to query "Signaling by Hippo" # # The query is built in the [Cypher Language](https://neo4j.com/developer/cypher-query-language/) and draws data from [Hetionet](https://neo4j.het.io/browser/) # # ### How Cognoma could help with Hippo signaling # # The Hippo pathway is a highly conserved signaling cascade that controls organ size, cell growth, and cell death ([Zhao et al. 2010](http://doi.org/10.1101/gad.1909210)). It is one of the mechanisms that influences size diversity across eukaryotes; including different sizes across dog breeds ([Dong et al. 2007](http://doi.org/10.1016/j.cell.2007.07.019), [Crickmore and Mann 2008](http://doi.org/10.1002/bies.20806)). Recently, Hippo signaling has also been shown to be important for tumorigenesis, but there are shockingly few recurrent mutations of single genes within the pathway across tissues ([Harvey et al 2013](http://doi.org/10.1038/nrc3458)). Therefore, leveraging cancers from multiple tissues and combining genes associated with the same pathway could aid in the detection of a Hippo signaling specific gene expression signature. Cognoma is situated well to quickly query the list of all pathway associated genes, build a machine learning classifier to detect aberrant pathway activity, and output tissue and gene specific performance. 
# + import os import urllib import random import warnings import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import preprocessing, grid_search from sklearn.linear_model import SGDClassifier from sklearn.cross_validation import train_test_split from sklearn.metrics import roc_auc_score, roc_curve from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.feature_selection import SelectKBest from statsmodels.robust.scale import mad # - from neo4j.v1 import GraphDatabase # %matplotlib inline plt.style.use('seaborn-notebook') # ## Specify model configuration - Generate genelist # + names = ('label', 'rel_type', 'node_id') query_params = [ ('Pathway', 'PARTICIPATES_GpPW', 'PC7_7459'), # "Signaling by Hippo" - Reactome ('BiologicalProcess', 'PARTICIPATES_GpBP', 'GO:0035329'), # "hippo signaling" - Gene Ontology ('BiologicalProcess', 'PARTICIPATES_GpBP', 'GO:0035330') # "regulation of hippo signaling" - Gene Ontology ] param_list = [dict(zip(names, qp)) for qp in query_params] # - query = ''' MATCH (node)-[rel]-(gene) WHERE node.identifier = {node_id} AND {label} in labels(node) AND {rel_type} = type(rel) RETURN gene.identifier as entrez_gene_id, gene.name as gene_symbol ORDER BY gene_symbol ''' # + driver = GraphDatabase.driver("bolt://neo4j.het.io") full_results_df = pd.DataFrame() with driver.session() as session: for parameters in param_list: result = session.run(query, parameters) result_df = pd.DataFrame((x.values() for x in result), columns=result.keys()) full_results_df = full_results_df.append(result_df, ignore_index=True) classifier_genes_df = full_results_df.drop_duplicates().sort_values('gene_symbol').reset_index(drop=True) classifier_genes_df['entrez_gene_id'] = classifier_genes_df['entrez_gene_id'].astype('str') # - # Here are the genes that participate in the Hippo signaling pathway classifier_genes_df # Parameter Sweep for Hyperparameters n_feature_kept = 8000 param_fixed = { 'loss': 'log', 'penalty': 'elasticnet', } param_grid = { 'alpha': [10 ** x for x in range(-6, 1)], 'l1_ratio': [0, 0.05, 0.1, 0.2, 0.5, 0.8, 0.9, 0.95, 1], } # *Here is some [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html) regarding the classifier and hyperparameters* # ## Load Data # %%time path = os.path.join('data', 'expression-matrix.tsv.bz2') X = pd.read_table(path, index_col=0) # %%time path = os.path.join('data', 'mutation-matrix.tsv.bz2') Y = pd.read_table(path, index_col=0) # %%time path = os.path.join('data', 'samples.tsv') clinical = pd.read_table(path, index_col=0) clinical.tail(5) # Subset the Y matrix to only the genes to be classified y_full = Y[classifier_genes_df['entrez_gene_id']] # + y_full.columns = classifier_genes_df['gene_symbol'] y_full = y_full.assign(disease = clinical['disease']) # This matrix now stores the final y matrix for the classifier (y['indicator']) y = y_full.assign(indicator = y_full.max(axis=1)) # - unique_pos = y.groupby('disease').apply(lambda x: x['indicator'].sum()) heatmap_df = y_full.groupby('disease').sum().assign(TOTAL = unique_pos) heatmap_df = heatmap_df.divide(y_full.disease.value_counts(sort=False).sort_index(), axis=0) # What is the percentage of different mutations across different cancer types? sns.heatmap(heatmap_df); # Visualizing the input data here is key. The heterogeneity of the mutations across tissues is apparent for this particular pathway. 
In comparison with `TP53` mutations, it appears that Hippo signaling impacts different tissues with higher diversity. # # Looking closer at the plots above, it is evident that several tissues do not demonstrate aberrations (at least at the mutation level) in Hippo signaling. Specifically, it appears that cancers with gender specificity like testicular cancer and and prostate cancer are _not_ impacted. Therefore, because of this artificial imbalance, if Cognoma were to include these cancers in the classifier, it **will** key in on gender specific signal (i.e. genes that are only on the Y chromosome, or X inactivation genes). # How many samples in each tissue that have Hippo signaling aberrations ind = ['Negatives', 'Positives', 'Positive Prop'] percent = heatmap_df['TOTAL'] neg = y.disease.value_counts() - unique_pos tissue_summary_df = pd.DataFrame([neg, unique_pos, percent], index=ind, dtype='object').T.sort_values('Positive Prop', ascending=False) tissue_summary_df # ## Filter Data by Tissue # # This is a crucial step that is different from previous classifiers # + # Technically, these are hyper parameters, but for simplicity, set here filter_prop = 0.10 filter_count = 15 tissue_prop_decision = tissue_summary_df['Positive Prop'] >= filter_prop tissue_count_decision = tissue_summary_df['Positives'] >= filter_count tissue_decision = tissue_prop_decision & tissue_count_decision # - # This criteria filters out the following tissues pd.Series(tissue_summary_df.index[~tissue_decision].sort_values()) # What are the tissues remaining? tissue_summary_df = tissue_summary_df[tissue_decision] tissue_summary_df # Distribution of mutation counts after filtering sns.heatmap(heatmap_df.loc[tissue_decision]); # Subset data clinical_sub = clinical[clinical['disease'].isin(tissue_summary_df.index)] X_sub = X.ix[clinical_sub.index] y_sub = y['indicator'].ix[clinical_sub.index] # Total distribution of positives/negatives y_sub.value_counts(True) y_sub.head(7) # ## Set aside 10% of the data for testing strat = clinical_sub['disease'].str.cat(y_sub.astype(str)) strat.head(6) # Make sure the splits have equal tissue and label partitions X_train, X_test, y_train, y_test = train_test_split(X_sub, y_sub, test_size=0.1, random_state=0, stratify=strat) 'Size: {:,} features, {:,} training samples, {:,} testing samples'.format(len(X_sub.columns), len(X_train), len(X_test)) # ## Median absolute deviation feature selection # + def fs_mad(x, y): """ Get the median absolute deviation (MAD) for each column of x """ scores = mad(x) return scores, np.array([np.NaN]*len(scores)) # select the top features with the highest MAD feature_select = SelectKBest(fs_mad, k=n_feature_kept) # - # ## Define pipeline and Cross validation model fitting # + # Include loss='log' in param_grid doesn't work with pipeline somehow clf = SGDClassifier(random_state=0, class_weight='balanced', loss=param_fixed['loss'], penalty=param_fixed['penalty']) # joblib is used to cross-validate in parallel by setting `n_jobs=-1` in GridSearchCV # Supress joblib warning. 
See https://github.com/scikit-learn/scikit-learn/issues/6370 warnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array') clf_grid = grid_search.GridSearchCV(estimator=clf, param_grid=param_grid, n_jobs=-1, scoring='roc_auc') pipeline = make_pipeline( feature_select, # Feature selection StandardScaler(), # Feature scaling clf_grid) # - # %%time # Fit the model (the computationally intensive part) pipeline.fit(X=X_train, y=y_train) best_clf = clf_grid.best_estimator_ feature_mask = feature_select.get_support() # Get a boolean array indicating the selected features clf_grid.best_params_ best_clf # ## Visualize hyperparameters performance def grid_scores_to_df(grid_scores): """ Convert a sklearn.grid_search.GridSearchCV.grid_scores_ attribute to a tidy pandas DataFrame where each row is a hyperparameter-fold combinatination. """ rows = list() for grid_score in grid_scores: for fold, score in enumerate(grid_score.cv_validation_scores): row = grid_score.parameters.copy() row['fold'] = fold row['score'] = score rows.append(row) df = pd.DataFrame(rows) return df # ## Process Mutation Matrix cv_score_df = grid_scores_to_df(clf_grid.grid_scores_) cv_score_df.head(2) # Cross-validated performance distribution facet_grid = sns.factorplot(x='l1_ratio', y='score', col='alpha', data=cv_score_df, kind='violin', size=4, aspect=1) facet_grid.set_ylabels('AUROC'); # Cross-validated performance heatmap cv_score_mat = pd.pivot_table(cv_score_df, values='score', index='l1_ratio', columns='alpha') ax = sns.heatmap(cv_score_mat, annot=True, fmt='.2%') ax.set_xlabel('Regularization strength multiplier (alpha)') ax.set_ylabel('Elastic net mixing parameter (l1_ratio)'); # ## Use Optimal Hyperparameters to Output ROC Curve # + y_pred_train = pipeline.decision_function(X_train) y_pred_test = pipeline.decision_function(X_test) def get_threshold_metrics(y_true, y_pred, tissue='all'): roc_columns = ['fpr', 'tpr', 'threshold'] roc_items = zip(roc_columns, roc_curve(y_true, y_pred)) roc_df = pd.DataFrame.from_items(roc_items) auroc = roc_auc_score(y_true, y_pred) return {'auroc': auroc, 'roc_df': roc_df, 'tissue': tissue} metrics_train = get_threshold_metrics(y_train, y_pred_train) metrics_test = get_threshold_metrics(y_test, y_pred_test) # - # Plot ROC plt.figure() for label, metrics in ('Training', metrics_train), ('Testing', metrics_test): roc_df = metrics['roc_df'] plt.plot(roc_df.fpr, roc_df.tpr, label='{} (AUROC = {:.1%})'.format(label, metrics['auroc'])) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Predicting Hippo Signaling Pathway Aberations') plt.legend(loc='lower right'); # ## Tissue specific performance tissue_metrics = {} for tissue in clinical_sub.disease.unique(): sample_sub = clinical_sub[clinical_sub['disease'] == tissue].index.values y_tissue_train = y_train[y_train.index.isin(sample_sub)] y_tissue_pred_train = y_pred_train[y_train.index.isin(sample_sub)] y_tissue_test = y_test[y_test.index.isin(sample_sub)] y_tissue_pred_test = y_pred_test[y_test.index.isin(sample_sub)] metrics_train = get_threshold_metrics(y_tissue_train, y_tissue_pred_train, tissue=tissue) metrics_test = get_threshold_metrics(y_tissue_test, y_tissue_pred_test, tissue=tissue) tissue_metrics[tissue] = [metrics_train, metrics_test] tissue_auroc = {} plt.figure() for tissue, metrics_val in tissue_metrics.items(): metrics_train, metrics_test = metrics_val plt.subplot() auroc = [] for label, metrics in ('Training', metrics_train), 
('Testing', metrics_test): roc_df = metrics['roc_df'] auroc.append(metrics['auroc']) plt.plot(roc_df.fpr, roc_df.tpr, label='{} (AUROC = {:.1%})'.format(label, metrics['auroc'])) tissue_auroc[tissue] = auroc plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Predicting Hippo Signaling in {}'.format(tissue)) plt.legend(loc='lower right'); plt.show() tissue_results = pd.DataFrame(tissue_auroc, index=['Train', 'Test']).T tissue_results = tissue_results.sort_values('Test', ascending=False) ax = tissue_results.plot(kind='bar', title='Tissue Specific Prediction of Hippo Signaling') ax.set_ylabel('AUROC'); # Hippo signaling prediction has highly variable predictions across different tissues. The classifier performs spectacularly in several tissues, but also appears to oppositely predict Hippo signaling in others. In three tissues the `test` set performance is actually _greater_ than the `train` set performance. This is likely a result of variance in samples across tissues and a happenstance in stratified `train_test_split`. # ## What are the classifier coefficients? coef_df = pd.DataFrame(best_clf.coef_.transpose(), index=X_sub.columns[feature_mask], columns=['weight']) coef_df['abs'] = coef_df['weight'].abs() coef_df = coef_df.sort_values('abs', ascending=False) '{:.1%} zero coefficients; {:,} negative and {:,} positive coefficients'.format( (coef_df.weight == 0).mean(), (coef_df.weight < 0).sum(), (coef_df.weight > 0).sum() ) coef_df.head(10) # The results are very interesting. First, only 200 genes are used to build a fairly successful classifier. Biologists like sparsity! Second, the genes that fall out at the top are informative: # # | Entrez | Symbol | Comments | # | ------ | ---- | -------- | # | 399671 | [HEATR4](http://www.ncbi.nlm.nih.gov/gene/399671) | Relatively unstudied gene | # | 29126 | [CD274](http://www.ncbi.nlm.nih.gov/gene/29126) | Immune cell receptor - inhibits Tcell activation and cytokine production | # | 2852 | [GPER1](http://www.ncbi.nlm.nih.gov/gene/2852) | Estrogen receptor - implicated in lymphoma | # | 140730 | [RIMS4](http://www.ncbi.nlm.nih.gov/gene/140730) | Synaptic regulatory protein | # | 84688 | [C9orf24](http://www.ncbi.nlm.nih.gov/gene/84688) | relatively unknown gene - important for differentiation of bronchial cells | # | 387628 | [FGF7P6](http://www.ncbi.nlm.nih.gov/gene/387628) | Fibroblast growth factor - implicated in ovarian cancer | # | 4438 | [MSH4](http://www.ncbi.nlm.nih.gov/gene/4438) | Involved in DNA mismatch repair | # | 80350 | [LPAL2](http://www.ncbi.nlm.nih.gov/gene/157777) | Pseudogene involved with elevated risk for atherosclerosis | # | 56892 | [C8orf4](http://www.ncbi.nlm.nih.gov/gene/56892) | Relatively uknown gene product - evidence it is important in WNT signaling and proliferation across cancer types | # | 22943 | [DKK1](http://www.ncbi.nlm.nih.gov/gene/22943) | Inhibits WNT signaling pathway - implicated in myeloma | # # # ## Investigate the predictions predict_df = pd.DataFrame.from_items([ ('sample_id', X_sub.index), ('testing', X_sub.index.isin(X_test.index).astype(int)), ('status', y_sub), ('decision_function', pipeline.decision_function(X_sub)), ('probability', pipeline.predict_proba(X_sub)[:, 1]), ]) predict_df['probability_str'] = predict_df['probability'].apply('{:.1%}'.format) # Top predictions amongst negatives (potential hidden responders) predict_df.sort_values('decision_function', ascending=False).query("status == 0").head(10) # + # Ignore numpy 
warning caused by seaborn warnings.filterwarnings('ignore', 'using a non-integer number instead of an integer') ax = sns.distplot(predict_df.query("status == 0").decision_function, hist=False, label='Negatives') ax = sns.distplot(predict_df.query("status == 1").decision_function, hist=False, label='Positives') # - ax = sns.distplot(predict_df.query("status == 0").probability, hist=False, label='Negatives') ax = sns.distplot(predict_df.query("status == 1").probability, hist=False, label='Positives') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # `sympy` guide import sympy print(sympy.__version__) # + # cf. https://docs.sympy.org/latest/index.html # Symbolic computation is with symbols, not numerical values print(sympy.sqrt(8)) print(sympy.sqrt(3)) from sympy import symbols x, y = symbols('x y') expr = x + 2*y print(expr) print(expr + 1) print(expr - x) # Simplified when possible; in this case to 2*y print(x * expr) from sympy import expand, factor expanded_expr = expand(x*expr) print(expanded_expr) print(factor(expanded_expr)) x, t, z, nu = symbols('x t z nu') # - # This will make all further examples pretty print with unicode. sympy.init_printing(use_unicode=True) from sympy import (diff, sin, exp) diff(sin(x) * exp(x), x) # Compute $\int (e^x \sin{x} + e^x \cos{x}) dx$ from sympy import (cos, integrate) integrate(exp(x) * sin(x) + exp(x)*cos(x), x) # Compute $\int^{-\infty}_{\infty} \sin{(x^2)} dx$ from sympy import oo integrate(sin(x**2), (x, -oo, oo)) # Find $\lim_{x \to 0} \frac{\sin{x}}{x}$ from sympy import limit limit(sin(x) / x, x, 0) # ### `solve`, Solve Algebraic equations # # Solve $x^2 - 2 = 0$ from sympy import solve solve(x**2 - 2, x) # ### `dsolve`, Solve differential equations # Solve $y'' - y = e^t$ from sympy import (dsolve, Eq, Function) y = Function('y') dsolve(Eq(y(t).diff(t, t) - y(t), exp(t)), y(t)) # ### `eigenvals()`, Find eigenvalues from sympy import Matrix Matrix([[1, 2], [2, 2]]).eigenvals() # ### `besselj`, Example of Bessel functions, special functions # # Rewrite the Bessel function $J_{\nu}(z)$ in terms of the spherical Bessel function $j_{\nu}(z)$ from sympy import besselj, jn besselj(nu, z).rewrite(jn) from sympy import latex, Integral, pi latex(Integral(cos(x)**2, (x, 0, pi))) # ## [Sympy Gotchas](https://docs.sympy.org/latest/tutorial/gotchas.html) # # Name of a Smybol and name of the variable it's assigned need not have anything to do with one another. # Best practice usually is to assing Symbols to Python variables of the same name. Symbol names can contain characters that aren't allowed in Python variable names, or assign Symbols long names to single letter Python variables. crazy = symbols('unrelated') print(crazy + 1) # + x = symbols('x') expr = x + 1 x = 2 print(expr) x = 'abc' expr = x + 'def' print(expr) x = 'ABC' print(expr) x = symbols('x') expr = x + 1 print(expr.subs(x, 2)) # - # Changing `x` to `2` had no effect on `expr`. This is because `x=2` changes the Python variable `x` to `2` but has no effect on SymPy Symbol `x`, which was what we used in creating `expr`. This is how python works. print(x + 1 == 4) Eq(x + 1, 4) # `Eq` is used to create **symbolic equalities**. # # It's best to check if a == b by taking a - b and simplify it. 
It can be [theoretical proven](https://en.wikipedia.org/wiki/Richardson%27s_theorem) that's impossible to determine if 2 symbolic expressions are identically equal in general. from sympy import simplify a = (x + 1)**2 b = x**2 + 2*x + 1 print(simplify(a - b)) c = x**2 - 2*x + 1 print(simplify(a - c)) # There's also a method called `equals` that tests if 2 expressions are equal by **evaluting them numerically at random points.** # + a = cos(x)**2 - sin(x)**2 b = cos(2*x) print(a.equals(b)) # EY: 20200909 print(simplify(a - b)) # - from sympy import Integer print(Integer(1)/Integer(3)) print(1/2) # PYthon 3 floating point division from sympy import Rational print(Rational(1, 2)) print(x + Rational(1, 2)) # Further reading for [Gotchas and Pitfalls](https://docs.sympy.org/latest/gotchas.html#gotchas) # ## [Basic Operations](https://docs.sympy.org/latest/tutorial/basic_operations.html) # ### Substitution # + x, y, z = symbols("x y z") expr = cos(x) + 1 print(expr.subs(x, y)) print(expr.subs(x, 0)) expr = x**y print(expr) expr = expr.subs(y, x**y) print(expr) expr = expr.subs(y, x**x) print(expr) from sympy import expand_trig expr = sin(2*x) + cos(2*x) print(expand_trig(expr)) # An easy way is just replace sin(2x) with 2sin(x)cos(x) expr.subs(sin(2*x), 2*sin(x)*cos(x)) print(expr) # - # 2 important things to note about `subs`: # 1. returns new expression, SymPy objects are immutable. That means `subs` doesn't modify it in-place # 2. # + expr = cos(x) print(expr.subs(x, 0)) print(expr) print(x) expr = x**3 + 4*x*y - z # To perform multiple substitutions at once, pass a list of (old, new) pairs to subs print(expr.subs([(x, 2), (y, 4), (z, 0)])) expr = x**4 - 4*x**3 + 4*x**2 - 2*x + 3 # Often useful to combine this list comprehension to do a large set of # similar replacements all at once. # For example, say we wanted to replace all instances of x that have # an even power with y replacements = [(x**i, y**i) for i in range(5) if i % 2 == 0] # - # #### Converting Strings to SymPy Expressions, `sympify` from sympy import sympify str_expr = "x**2 + 3*x - 1/2" expr = sympify(str_expr) print(expr) print(expr.subs(x, 2)) # ## `evalf`, evaluate numerical expression from sympy import sqrt expr = sqrt(8) # Default 15 digits of precision print(expr.evalf()) print(pi.evalf(100)) # Compute first 100 digits of pi # It's more efficient and numerically stable to pass the substitution to `evalf` using `subs` flag to numerically evaluate an expression. # + expr = cos(2*x) print(expr.evalf(subs={x: 2.4})) # Set desired precision one = cos(1)**2 + sin(1)**2 print((one - 1).evalf()) print((one - 1).evalf(chop=True)) # - # ## `lambdify`, convert Sympy expression to expression for numerics # # Convert Sympy expression to an expression that can be numerically evaluated using lambdify. # # Warning: lambdify uses eval. Don't use it on unsanitized input. # + import numpy a = numpy.arange(10) expr = sin(x) f = lambdify(x, expr, "numpy") print(f(a)) # Use other libraries than numpy, e.g. use standard library math f = lambdify(x, expr, "math") print(f(0.1)) def mysin(x): """ My sine. 
Note that this is only accurate for small x """ return x f = lambdify(x, expr, {"sin":mysin}) print(f(0.1)) # - # Go also to [advanced expression manipulation](https://docs.sympy.org/latest/tutorial/manipulation.html#tutorial-manipulation) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Churn Prediction # ## SVM # + #install.packages("e1071") # - library(e1071) df <- read.csv('evasao.csv') # + n <- nrow(df) set.seed(42) # Se não fizer isso, verá valores diferentes dos meus limite <- sample(1:n, size = round(0.75*n), replace = FALSE) train_df <- df[limite,] test_df <- df[-limite,] # - abandonou = train_df$abandonou periodo = train_df$periodo bolsa = train_df$bolsa repetiu = train_df$repetiu ematraso = train_df$ematraso disciplinas = train_df$disciplinas faltas = train_df$faltas desempenho = train_df$desempenho modelo_svm <- svm(abandonou ~ periodo + bolsa + repetiu + ematraso + disciplinas + faltas + desempenho , train_df, kernel = 'polynomial', gamma = 10, cost = 2, type='C-classification') summary(modelo_svm) # + abandonou = test_df$abandonou periodo = test_df$periodo bolsa = test_df$bolsa repetiu = test_df$repetiu ematraso = test_df$ematraso disciplinas = test_df$disciplinas faltas = test_df$faltas desempenho = test_df$desempenho predicoes <- predict(modelo_svm, test_df) # - head(predicoes) df_predicao <- data.frame(desempenho = test_df$desempenho, repetiu = test_df$repetiu, abandonou_obs = test_df$abandonou, abandonou_pred = predicoes) head(df_predicao) erros <- test_df$abandonou != predicoes qtdErros <- sum(erros) percErros <- qtdErros / length(test_df$abandonou) acuracia <- (1 - percErros) * 100 acuracia # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (aind) # language: python # name: other-env # --- from z3 import * s=Solver() s.check() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Project - Programming for Data Analysis # #
# - Course: HDIP Data Analytics
# - Module: 52465 Project
# - Student ID: G00364778
# - Submission: 14 December 2018
# # This Jupyter notebook discuss the creation of a dataset showing the fuel consumption of various vehicle manufacturers and engine sizes and also factor in the age and gender distribution of drivers to generate results in line with current manufacturer and age distributions reflected in published data sources referenced in the document. # ## Researching and selecting real world application of interest # # This notebook will show the process used to create a data set by simulating my real-world phenomenon. The phenomenon of choice is the fuel consumption of vehicles on the roads in Ireland and how the consumption might be influenced by drive style, gender and age distribution. We will also explore the distribution of engine size and the range of automotive manufactures and their influences on the overall dataset. # # The distribution of engine size and automotive manufacturers will be used as a guideline for simulating the distributions of manufacturer and engine sizes and the distribution of driver ages and genders from the drivers licence authority will be used to simulate the age and gender distribution of drivers. # # From age and gender distributions we will then infer drive styles that will have an impact on the overall fuel consumption of the vehicle associated with the driver profile. # # # ## Choosing a dataset # # Going through various articles on datasets and data collections across the spectrum of topics, a personal topic of interest is the factors having an effect and influence automotive fuel efficiency. # # The approach to the dataset of choice was made by looking at the vareity of datasources available and the list of lists consulted in the research is as follow: # # # - https://www.kaggle.com/datasets # - https://archive.ics.uci.edu/ml/datasets.html # - https://github.com/awesomedata/awesome-public-datasets # - https://data.gov.ie/ # - https://github.com/fivethirtyeight/data # - https://github.com/BuzzFeedNews # - https://opendata.socrata.com/ # - https://aws.amazon.com/datasets/ # - https://cloud.google.com/bigquery/public-data/ # - https://en.wikipedia.org/wiki/Wikipedia:Database_download # - https://toolbox.google.com/datasetsearch # # From this list the fist attractive proposition came up in data.gov.ie in the list of automotive sales for 2018 and from this was born the idea to review the fuel efficiency of vehicles on the roads today. # # I've had a personal interest in fuel efficiency from a very early age in my life trying to find ways to improve the fuel efficiency of all my modes of motorised transport. # # Having reviewed several articles on the topic, there seems to be many technical reasons for less than optimal fuel efficiency and the technical issues really comes down to maintenance or rather a lack thereof. A lot of the other factors mentioned, comes down to driving style and really reflects temperament and age related qualities. Other factors coming into play is the engine size of the vehicle, the distance of the commute and weather conditions, so all in all a reasonable set of conditions that can make an interesting dataset. # # There is an overwhelming amount of variables that can influence the overall fuel efficiency of a vehicle and very granular maintenance specifics like oil and fuel quality, tyres, air conditioner use and travel distance. The most basic gross factors influencing the outcome is the engine size, the weight or rather power to mass ratio, the speed, drive style, aerodynamics and mechanical resistance. 
# # So as a first pass this is the rough idea of data that should allow one to make reasonable estimates on fuel consumption based on this criteria. # # Make |Model |Sub-Class|Type|CC |Cylinders|Gender|Age|Drivestyle|Serviced|Commute # ------|-------|---------|----|---|---------|------|---|------- --|--------|------- # Toyota|Corolla|Verso |MPV |1.6|4 |Male |55 |Rational |Annually|32 # # Maybe subclass and type of vehicle is over complicating the matter and engine size alone would be sufficient. # # # ### Defining the data values and types for the dataset # # # # Variable |Description |Data Type |Distributions # :---------|:---------------------------|:---------:|:------------ # Make |Manufacturer |Text |Geometric # Model |Model |Text | - # CC |Engine size in CC |0.8-4.5 |Geometric # Cylinder |Cylinders inferred from CC |2-12 | - # Gender |Gender of driver |male/female|Bernoulli # Age |Age of driver to infer style|16-99 |Normal # Drivestyle|Driver Type |text | - # Services |Services annually |yes/no |Bernoulli # Commute |Distance of commute |1-100 |Gaussian # Type |Urban, Rural, Highway |text |Bernoulli # # # ### Generating a list of manufacturers # So to get an idea of where values should go, the TEA18 dataset, referenced below, was used as a guideline for distribution of types vehicle types. # # The TEA18 dataset lists all private cars licensed for the first time in 2018 listed by engine capacity in cc, Car Make, Emission Band, Licensing Authority, Year and Statistic. The values of interest in the dataset is explored and reviewed in the following sections and the main idea is just to see what manufacturers are the most popular. import pandas as pd # Import pandas for dataframe features pd.options.mode.chained_assignment = None # disable warnings en errors on dataframe usage list_url='https://github.com/G00364778/52465_project/raw/master/data/CarsIrelandbyCC.csv' # url link to csv file in github cars=pd.read_csv(list_url) # import the TEA18 dataset to establish benchmarks makes=cars[cars['Make'].str.contains('All ')==False] # filter out collection and keep manufacturers only #makes=df[df['Make'].str.contains('All')==True] # test result #df.sort_values('All', ascending=False) # sort for esy comparison makes['pct']=((cars['All']/168327)*100) # calculate a percentage for reference makes[['Make','All','pct']].sort_values('All', ascending=False).head(10) # display the list in descending order # So looking at the list of new motor vehicles purchased in 2018, sorted by total sales, the list seems to approximate a geometric distribution. The distribution of engine sizes seems to be similarly distributed, however regardless of the distribution type, ideally what I would like to reproduce is a random generated list from 41 manufacturers that will always yield around __12.5%__ Volkswagen's, __11%__ Ford's followed by __9.6%__ Toyota's etc.. # # Several hours were spent to try and reproduce the result and one sample of such an attempt is below trying to use distributions to yield this result. import numpy.random as rnd d=rnd.noncentral_chisquare(10,1,30)**2*rnd.randint(3,size=30) d # After countless hours working on the problem, and testing all kinds of distributions, the results achieved was a bit too granular for my liking and the final and most elegant solution has already been created and "secretly" exists in the numpy.random.choice library option at the fourth parameter __p__, _probability_. 
This important point was completely missed when first working with this library; only stumbling on the phrase _"probability"_ and the parameter __p__ in the parameter description documentation yielded the desired results.
#
# ```python
# choice(a, size=None, replace=True, p=None)
#
# Parameters
# -----------
# p : 1-D array-like, optional
#     The probabilities associated with each entry in a.
#     If not given the sample assumes a uniform distribution over all
#     entries in a.
# ```
#
# This approach is favoured throughout the remainder of the document and yields a very reliable and repeatable result reflecting the datasets consulted as guidelines.

# ### Determining the winners
#
# The code below extracts the list of automotive manufacturers in order of popularity for later use in generating the list.

makes=cars[cars['Make'].str.contains('All makes')==False]  # create a filtered list of manufacturers only
m=makes.sort_values('All', ascending=False)                # sort the list by All in descending order
m=list(m['Make'])                                          # keep the make column only
print(m)                                                   # print out the list

# Automotive manufacturers arranged in order of popularity:
# ```python
# makes = ['Volkswagen', 'Ford', 'Toyota', 'Nissan', 'Hyundai', 'Audi', 'Skoda', 'Renault', 'BMW', 'Opel', 'Kia', 'Peugeot', 'Vauxhall', '', 'Citroen', 'Dacia', 'Mazda', 'Seat', 'Honda', 'Volvo', 'Suzuki', 'Mitsubishi', 'Mini', 'Land Rover', 'Fiat', 'Lexus', 'Jaguar', 'Saab', 'Subaru', 'All other makes', 'Porsche', 'Alfa Romeo', 'Ssangyong', 'Jeep', 'Smart', 'Chevrolet', 'Rover', 'Chrysler', 'Daihatsu', 'Daewoo', 'Austin', 'Dodge']
# ```
#
# In order to create a realistic simulation resembling these findings, a reasonably narrow and specific distribution must be followed.

# ### The working code to generate the distribution
#
# Testing the generator and the distribution of the resulting list of vehicles by manufacturer.

# +
from collections import Counter as count         # import the counter for validation of the generator

nsamples=10000                                   # Setting the value for the global list length
makes['pct']=((makes['All']/sum(makes['All'])))  # create a proportion column that adds up to one - required for the probabilities
makes[['Make','All','pct']].sort_values('All', ascending=False)  # sort the list and create lists for validation purposes
manufacturers=list(makes['Make'])                # generate a manufacturers list from the makes dataframe
probability=list(makes['pct'])                   # create the probability distribution list for the list
#sum(probability)                                # test the list for compatibility with the function requirements, i.e. it must add up to one
t=rnd.choice(manufacturers,nsamples,p=probability)  # create nsamples samples
count(t).most_common()                           # count and show them
# -

# This approach yields exactly what I was hoping to achieve and reproduces the same distribution percentages as the sample data used for the model. Engine size can be derived in a similar fashion, bearing in mind that not all manufacturers offer all engine sizes; however, for the purposes of this exercise this is deemed sufficient to match the overall criteria.
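# As an additional check (an illustrative sketch, not one of the original cells), the sampled counts can be converted back into shares and compared with the target probabilities; for a large `nsamples` the top makes should land close to the quoted 12.5% and 11%.

# +
# Sketch: compare sampled shares against the target probabilities used above.
# Assumes `manufacturers`, `probability`, `t`, `nsamples` and `count` from the previous cells.
sampled_share = {make: n/nsamples for make, n in count(t).items()}
for make, target in sorted(zip(manufacturers, probability), key=lambda x: -x[1])[:5]:
    print('{:15s} target {:6.2%}  sampled {:6.2%}'.format(make, target, sampled_share.get(make, 0)))
# -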
# ### Distilling engine sizes
#
# In order to add engine sizes to the dataset, a review of the engine statistics and distributions is done in the following section, and we briefly explore the effect of ignoring manufacturer-specific lack of representation in some engine size categories.

makes=cars[cars['Make'].str.contains('All makes')==True]  # keep only the summary line from the dataframe, showing numbers by cc
makes

# To better make sense of this result, it is transposed, cleaned up and stored in a csv file for easier reference and access in the following sections. This is then stored in and referenced from my git repository.

cc_dist_url='https://github.com/G00364778/52465_project/raw/master/data/cc_dist.csv'  # set the url for the raw csv file
ccd=pd.read_csv(cc_dist_url).sort_values('pct', ascending=False)  # read the file from the url into a dataframe ccd
ccd  # display ccd

# Reviewing the engine size distribution like this yielded some surprising results, the biggest surprise being that the most common engine size, contrary to popular belief, is a 2000 cc engine, closely followed by the 1600 cc category. The gap in the categories, i.e. the jump from 1600 to 2000, probably adds to the big percentage in this grouping; even so, the two groupings account for more than 50% of the vehicle population on the road. The remaining roughly 45% is largely shared by the 1000-1500 cc categories, with 4.4% in the 2400 cc category and the balance of around 1.5% in the remaining categories. So there are very few above 2400 cc and even fewer below 900 cc, almost negligible.
#
# So a list in order of popularity would be:
#
# ```python
# cc_order=[2000,1600,1300,1500,1400,1000,2400,'>2400','<900']
# ```

# ### Distribution of engine sizes and variance by manufacturers
#
# The order of manufacturers is actually not that surprising and aligns with popular conversation on the subject among petrol heads; sorting the complete list by the All column or by the various engine size columns creates only minor shifts. The shifting really comes down to the engine sizes more prevalent at specific manufacturers; for example, Toyota does not make 900 cc vehicles, and the <900 cc category is dominated by Nissan, with relatively low volumes.
#
# It is quite surprising, and then again very satisfying, that this information can be derived from official datasets that are already publicly available. It is an area that deserves to be explored more deeply and more frequently.
#
# For the sake of completeness, before proceeding to the other variables required for the list, let us investigate the variance of manufacturers by engine size for this set, just to ensure that the assessments made hold true by validating the data.

makes=cars[cars['Make'].str.contains('All makes')==False]  # create a filtered list of unique manufacturers only
makes.sort_values('<900', ascending=False).head(3)   # filter and sort by less than 900cc
makes.sort_values('1000', ascending=False).head(3)   # filter and sort by 1000cc
makes.sort_values('1300', ascending=False).head(3)   # filter and sort by 1300cc
makes.sort_values('1400', ascending=False).head(3)   # filter and sort by 1400cc
makes.sort_values('1500', ascending=False).head(3)   # filter and sort by 1500cc
makes.sort_values('1600', ascending=False).head(3)   # filter and sort by 1600cc
makes.sort_values('2000', ascending=False).head(3)   # filter and sort by 2000cc
makes.sort_values('2400', ascending=False).head(3)   # filter and sort by 2400cc
makes.sort_values('>2400', ascending=False).head(3)  # filter and sort above 2400cc
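# The repeated per-column checks above can also be written as a single loop over the engine-size columns (an illustrative sketch, not one of the original cells), printing the top three makes per category at a glance.

# +
# Sketch: top three makes for each engine-size column in one pass.
# Assumes the filtered `makes` dataframe and the column labels used above.
cc_columns = ['<900', '1000', '1300', '1400', '1500', '1600', '2000', '2400', '>2400']
for col in cc_columns:
    top3 = makes.sort_values(col, ascending=False).head(3)
    print(col, '->', list(top3['Make']))
# -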
# ### The conclusion on engine sizes
#
# Manufacturer order varies somewhat between engine sizes, since not everyone manufactures the same ranges of engines; for example, BMW and Mercedes dominate above the 2400 cc category.
#
# The second aspect considered is the additional complexity introduced by accounting for engine sizes across 41 unique manufacturers when modelling the generated dataset. The net effect overall is almost negligible because of the combined manufacturer popularity distribution: for example, while Mercedes dominates the 2400 cc category, the overall representation of the manufacturer is only 2.39%, so in only roughly half that number, around 1.2% of cases, will a bigger engine be assigned to the wrong manufacturer. All in all the set will balance, as the cc distribution probability will still be accurately applied to the overall set. We will therefore not granularly distinguish engine size by manufacturer when generating the set, but rather use the overall set distribution as a guideline for all.

# ## Modelling the engine sizes

count(rnd.choice(ccd['cc'],nsamples,p=ccd['pct'])).most_common()  # count the number of vehicles per cc and sort by most common
#ccd

df=pd.DataFrame(t, columns=['Make'])                  # create the dataframe for the dataset, adding _t_ as _Make_
df['cc']=rnd.choice(ccd['cc'],nsamples,p=ccd['pct'])  # append the engine size tested above to the set under cc

# +
#count(df['Make']).most_common()  # various tests for validation during creation
#count(df['cc']).most_common()    # Test the result
#df                               # display the result, commented for final submission
# -

# ## More Questions to answer
#
# Variable |Description |Data Type |Distributions
# ----------|----------------------------|-----------|-------------
# Make |Manufacturer |Text |Geometric
# Model |Model |Text | -
# CC |Engine size in CC |0.8-4.5 |Geometric
# Cylinder |Cylinders inferred from CC |2-12 | -
# Gender |Gender of driver |male/female|Bernoulli
# Age |Age of driver to infer style|16-99 |Normal
# Drivestyle|Driver Type |text | -
# Services |Serviced annually |yes/no |Bernoulli
# Commute |Distance of commute |1-100 |Gaussian
# Type |Urban, Rural, Highway |text |Bernoulli
#
# The next couple of questions that need to be addressed to generate a complete dataset are whether we can determine the distribution of drivers by gender and age, the commute distance range for our dataset's commuters, and the service intervals observed by the average individual.
#
# Then we need to see if we can infer drive styles from this data somehow.
#
# ### Drivers by age
#
# Firstly, in this section we review drivers by gender and age, using the national licensing authority dataset for 2013 to determine the gender and age distribution of drivers by licence category.

driver_type_url='https://github.com/G00364778/52465_project/raw/master/data/P-TRANOM2013_2.5.csv'  # git url for driver list
d=pd.read_csv(driver_type_url)                  # read the git csv file into a dataframe d
d['pct']=d['Total']/sum(d['Total'])*100         # add a percentage column
d['ratio']=d['Female']/(d['Male']+d['Female'])  # and calculate the female share of drivers per age group
#list(d['Age'])
d  # display the list and calculations

# So the __pct__ column gives the overall percentage that each age group represents, and the __ratio__ column the proportion of female drivers in that age group; female representation varies from 38% to 48% across the age categories.
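# As an aside (an illustrative sketch, not one of the original cells), the __pct__ column can be reused directly as the probability vector for `numpy.random.choice`, which samples age groups in the same proportions as the licensing data; this assumes the `d` dataframe above with its `Age` and `pct` columns.

# +
# Sketch: sample age-group labels in proportion to the licensing data.
# Assumes `d` from the previous cell plus `rnd`, `nsamples` and `count` from earlier.
age_groups = list(d['Age'])
age_p = list(d['pct']/d['pct'].sum())   # normalise so the probabilities sum to one
sampled_groups = rnd.choice(age_groups, nsamples, p=age_p)
count(sampled_groups).most_common(5)    # the shares should mirror the pct column
# -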
# +
import warnings                  # add the code to suppress seaborn future warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import seaborn as sns            # import the seaborn library for plotting
import matplotlib.pyplot as pl   # import matplotlib for controlling plots
# %matplotlib inline

barplot=sns.barplot('Age','pct',data=d, orient=90)  # generate a bar plot of the age distribution
for item in barplot.get_xticklabels():              # orientate the labels for clear readability
    item.set_rotation(45)
pl.plot()                                           # show the plots
# -

# So it looks like this distribution might be achieved using a negative binomial and then creating a random generator to recreate the distribution for us.

n=100-(rnd.negative_binomial(2.015, 0.1, 150)+17)
n.sort()
n

# Unfortunately, having once again spent too much time on the topic trying to make my simulation match the real world perfectly, this approach fails to yield accurately what I want to achieve. I will have to revert to an approximately normal, manually specified distribution with tight controls over the outcome, and this will result in slightly higher numbers in the lower age categories than in the real-world scenario investigated above. Trying other kinds of distributions also fails to repeatably yield the kind of numbers that would show the age distribution I would like to see.
#
# So in the end, to get a workable dataset, it is a good idea to stick to what works; for the remainder of the set I will keep to the approach that works well, with the aim of producing a set that fits the needs and represents the probabilities modelled on the other findings.

# +
import numpy as np           # import numpy to generate random numbers
import numpy.random as rnd

age=[17, 21, 25, 30, 40, 50, 60, 70, 80]  # create an array of ages to use
dist=np.array([1.3, 4.5, 8.9, 23.6, 21.8, 17.7, 13.3, 6.7, 2.2])  # set the distribution probabilities for the ages above
dist=dist/100                             # divide the probabilities for compatibility with random choice
a=rnd.choice(age,nsamples,p=dist)         # generate the age set from the tables above
b=rnd.randint(-5,5,nsamples)              # add further randomisation to create ages in between the sampled values
df['Age']=a+b                             # add the result of the random choice and offset to the dataframe
df.head(5)
# -

# So testing my list for age and gender representation yields the ratios from the sample data.

# ### Drivers by gender
#
# In the next section we set up the drivers by gender. We look at the average of the distribution and apply the same logic as in the previous section, i.e. take the distribution ratio and use it to create a new set.

np.average(d['ratio'])  # calculate the overall female ratio from the complete list

# So the ratio is about 44% female for the complete list. We will use this ratio in our random generator. We also need to calculate the proportion of drivers with no gender assigned.

(sum(d['Unknown'])/(sum(d['Male'])+sum(d['Female'])+sum(d['Unknown'])))*1  # calculate the fraction of drivers of unknown gender

# About 4 drivers in a million are of unknown gender.
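# As an aside (an illustrative sketch, not one of the original cells), the probability vector used in the next cell can also be assembled from the figures just computed instead of being typed in by hand; the derived values are normalised so they sum to one.

# +
# Sketch: build the gender probability vector from the figures computed above.
# Assumes `d` and `np` from the previous cells.
female_share  = np.average(d['ratio'])                                                 # roughly 0.44
unknown_share = sum(d['Unknown'])/(sum(d['Male'])+sum(d['Female'])+sum(d['Unknown']))  # roughly 4 per million
gdist_derived = np.array([1-female_share-unknown_share, female_share, unknown_share])
gdist_derived = gdist_derived/gdist_derived.sum()  # guard against rounding so the probabilities sum to one
gdist_derived
# -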
import numpy as np          # import numpy for array and random number generation
import numpy.random as rnd

gender=['male','female','unknown']          # A list of gender choices
gdist=np.array([0.55999,0.44,0.00001])      # The distribution of the gender choices
#count(rnd.choice(gender,1000000,p=gdist))  # test the results, commented for final submission
df['Gender']=rnd.choice(gender,nsamples,p=gdist)  # And adding the gender distribution to the dataframe
#df.head(5)                                 # test the results, display commented for final submission

# ### Setting the drive styles
#
# The drive style will affect the vehicle-specific fuel consumption, and we distribute drive styles proportionally according to the distribution factors listed.
#
# In the last section we use the style to tweak the final fuel consumption maths using the criteria generated here.

styles=['speedy','slow','nervous','distracted','rational']  # defining the drive styles
sdist=[0.22,0.04,0.06,0.07,0.61]                            # setting the distribution of drive styles
#sum(sdist)                                                 # Testing the distribution maths
df['DriveStyle']=rnd.choice(styles,nsamples,p=sdist)        # add the result to the dataframe
#df

# ### Setting the Service check field
#
# Similarly, we assume that 33% of drivers in Dublin do not service their vehicles regularly, and this value will later factor into the fuel efficiency calculations.

service=['yes','no']                                   # defining the service options
svcdist=[0.67,0.33]                                    # setting the distribution of service habits
#sum(svcdist)                                          # Testing the distribution maths
df['Serviced']=rnd.choice(service,nsamples,p=svcdist)  # add the result to the dataframe
#df

# ### Commute details - distance and type
#
# Then lastly we generate a daily commute of between 20 and 50 miles, and we add a further variance based on urban, rural or highway conditions that will be used in the final fuel efficiency maths.

#service=['yes','no']   # leftover from the service cell above, kept commented
#svcdist=[0.67,0.33]
#sum(svcdist)
df['Daily_commute']=rnd.randint(20,50,nsamples)  # add the commute distance to the dataframe
#df

commute_type=['urban','rural','highway']         # defining the commute types
svcdist=[0.44,0.07,0.49]                         # setting the distribution of commute types
#sum(svcdist)                                    # Testing the distribution maths
df['Commute_type']=rnd.choice(commute_type,nsamples,p=svcdist)  # add the result to the dataframe
#df

# ### Add a calculated fuel consumption
#
# Some assumptions are made for the sake of a further mileage calculation distributed over the dataset that are not necessarily based on statistical evidence, for example that male drivers are on the whole more aggressive than female drivers.
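# The next cell applies the adjustment factors row by row with `iterrows`; as an illustrative aside (a sketch, not one of the original cells), the same multipliers can be applied with vectorised pandas operations, using the factor values from that cell.

# +
# Sketch: vectorised version of the row-wise mileage adjustment in the next cell.
# Assumes the `df` built above (with a numeric cc column); the factor values mirror the loop that follows.
gender_f  = df['Gender'].map({'male': 1.0}).fillna(1.05)
style_f   = df['DriveStyle'].map({'speedy': 0.9, 'nervous': 1.05}).fillna(1.001)
service_f = df['Serviced'].map({'no': 0.89999}).fillna(1.0)
ctype_f   = df['Commute_type'].map({'urban': 0.77777, 'rural': 0.899999, 'highway': 1.01111})
make_f    = df['Make'].map({'Saab': 1.05}).fillna(1.0)
mpg_vec   = 80.00*(1000/df['cc'])*gender_f*style_f*service_f*ctype_f*make_f  # baseline 80 mpg for 1000cc
l_day_vec = df['Daily_commute']/(mpg_vec/4.5*1.6)*2                          # litres per day, as in the loop
# -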
# +
mpg=80.00  # baseline mpg for 1000cc - 18km/l @90km/h
for i, row in df.iterrows():
    if row['Gender']=='male': gender=1.0  # add a gender skew assuming males are more aggressive drivers
    else: gender=1.05
    if row['DriveStyle']=='speedy': style=0.9      # assuming speedy drivers get 10% less
    elif row['DriveStyle']=='nervous': style=1.05  # and assuming nervous drivers get 5% more
    else: style=1.001                              # and the rest get the calculated default mileage
    if row['Serviced']=='no': serviced=0.89999     # assuming unserviced vehicles will be less efficient
    else: serviced=1.0
    if row['Commute_type']=='urban': ctype=0.77777    # urban mileage 23% less
    if row['Commute_type']=='rural': ctype=0.899999   # rural 10% less
    if row['Commute_type']=='highway': ctype=1.01111  # motorway is about standard
    if row['Make']=='Saab': mk=1.05  # and Saabs are assumed more fuel efficient than the rest
    else: mk=1
    mpg_calc=mpg*(1000/row['cc'])*gender*style*serviced*ctype*mk  # calculate the mileage
    df.at[i,'mpg'] = mpg_calc                                     # insert the value into the table
    df.at[i,'l_day']=row['Daily_commute']/(mpg_calc/4.5*1.6)*2    # calculate the litres per day

#df['mpg'].describe()
df  # display the final dataset
# -

# This concludes the generation of the dataset, and the last step below commits it all to a comma-separated file for convenience.

df.to_csv('data/generated_dataset.csv',index=False)  # Save the dataset to a file

# ## Testing the dataset for distribution
#
# As a quick test to validate the dataset, we will compare the distribution of manufacturers and the gender distribution against the research data.

import pandas as pd  # import pandas and read in the previously generated dataset stored on the git repository
ds=pd.read_csv('https://github.com/G00364778/52465_project/raw/master/data/generated_dataset.csv')
# ds=pd.read_csv('data/generated_dataset.csv')  # or uncomment to use the current local set
ds.head(3)  # and display the head just to make sure it was read properly

from collections import Counter as count  # import the counter feature from the collections library
count(ds.Make).most_common(5)  # and count and sort the list by most common

# The result compares well with the research on the topic and is in range with the distribution criteria set.
#
# ```python
# [('Volkswagen', 1285),
#  ('Ford', 1042),
#  ('Toyota', 929),
#  ('Nissan', 825),
#  ('Hyundai', 751),
# ```
# In the following section we test the gender distribution for the expected 44% female share.

print(count(ds.Gender).most_common(5))

# The results match the expected distribution.

# ## Conclusions
#
# There is a lot of free data available to explore and shed light on the broad questions being asked, and it is surprisingly easy to distil it and arrive at clear, conclusive answers, for example the distribution of male to female drivers in Ireland.
#
# The random.choice option with its probability parameter provides a very accurate and repeatable selection process for reproducing sets with non-standard distribution types.
#
# It is very easy to make the wrong assumptions from limited datasets or demographically biased sets, for example the European taxation on bigger engine sizes compared to a US market where this is not applicable.
#
# It is much more difficult than anticipated to generate meaningful, representative datasets, and a lot of time went into the background research and into testing code to achieve the end result.
#
# Our perceptions are often wrong; for example, I would have imagined a bigger dominance of male drivers, when in fact it is very close to even.
The other incorrect perception was that small cars are more abundant on the roads, when in fact the bigger engine sizes are more prevalent.

# ## References
#
# 1. __[5 Ways to Find Interesting Data Sets](https://www.dataquest.io/blog/5-ways-to-find-interesting-data-sets/)__
# 1. __[18 places to find data sets for data science projects](https://www.dataquest.io/blog/free-datasets-for-projects/)__
# 1. __[100+ Interesting Data Sets for Statistics](http://rs.io/100-interesting-data-sets-for-statistics/)__
# 1. __[19 Free Public Data Sets for Your First Data Science Project](https://www.springboard.com/blog/free-public-data-sets-data-science-project/)__
# 1. __[Cool Data Sets I’ve found](https://towardsdatascience.com/cool-data-sets-ive-found-adc17c5e55e1)__
# 1. __[Summary of Links to data sources](http://hdip-data-analytics.com/resources/data_sources)__
# 1. __[13 factors that increase fuel consumption](https://www.monitor.co.ug/Business/Auto/13-factors-that-increase-fuel-consumption/688614-2738644-b69hkkz/index.html)__
# 1. __[Many Factors Affect Fuel Economy](https://www.fueleconomy.gov/feg/factors.shtml)__
# 1. __[Want Your MPG? 10 Factors That Affect Fuel Economy](https://www.newgateschool.org/blog/entry/want-your-mpg-10-factors-that-affect-fuel-economy)__
# 1. __[How to Reduce Fuel Consumption](https://www.carsdirect.com/car-buying/10-ways-to-lower-engine-fuel-consumption)__
# 1. __[8 Main Causes of Bad Gas Mileage](https://www.carsdirect.com/car-buying/8-main-causes-of-bad-gas-mileage)__
# 1. __[Table 2.5 Number of current driving licences by age and gender at 31/12/2013](https://www.cso.ie/en/releasesandpublications/ep/p-tranom/transportomnibus2013/vehicles/driverandvehicletesting/)__
# 1. __[Cars Dataset](http://www.rpubs.com/dksmith01/cars)__
# 1. __[The 5 types of drivers on the road](https://rsadirect.ae/blog/5-types-drivers-road)__
# 1. __[TEA18 - Private Cars Licensed for the First Time](https://data.gov.ie/dataset/tea18-ime-by-engine-capacity-cc-car-make-emission-band-licensing-authority-year-and-statistic-b6cc)__
# 1. __[Github Markdown reference](https://guides.github.com/features/mastering-markdown/)__
# 1. __[Jupyter Markdown reference](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html)__
# 1. 
__[Latex Reference](http://www.malinc.se/math/latex/basiccodeen.php)__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # necessary libraries import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import pickle # models from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from xgboost import XGBClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC # model selection from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score # preprocessing from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler # metrics from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix # hyperparameters from sklearn.model_selection import GridSearchCV # pipeline from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline # data s2019_2020 = pd.read_csv('data/2019-2020.csv') # - np.random.seed(43) columns = ['HomeTeam', 'AwayTeam', 'FTHG', 'FTAG', 'FTR', 'HTHG', 'HTAG', 'HS', 'AS', 'HST', 'AST', 'HC', 'AC'] s2019_2020 = s2019_2020[columns] s2019_2020.head() def form(data, k=0.33): clubs = data.HomeTeam.unique() form_dict = {} for club in clubs: form_dict[club] = [1.0] for idx, row in data.iterrows(): ht_current_form = form_dict[row['HomeTeam']][-1] at_current_form = form_dict[row['AwayTeam']][-1] if row['FTR'] == 'H': form_dict[row['HomeTeam']].append(ht_current_form + (k * at_current_form)) form_dict[row['AwayTeam']].append(at_current_form - (k * at_current_form)) if row['FTR'] == 'A': form_dict[row['AwayTeam']].append(at_current_form + (k * ht_current_form)) form_dict[row['HomeTeam']].append(ht_current_form - (k * ht_current_form)) if row['FTR'] == 'D': form_dict[row['HomeTeam']].append(ht_current_form - (k * (ht_current_form - at_current_form))) form_dict[row['AwayTeam']].append(at_current_form - (k * (at_current_form - ht_current_form))) return form_dict # + def transform_form(data): data['HF'] = 0.0 data['AF'] = 0.0 form_data = form(data) for club in data.HomeTeam.unique(): mask = (data['HomeTeam'] == club) | (data['AwayTeam'] == club) k = 0 for idx, row in data[mask].iterrows(): if row['HomeTeam'] == club: data.loc[idx, 'HF'] = form_data[club][k] if row['AwayTeam'] == club: data.loc[idx, 'AF'] = form_data[club][k] k += 1 return data s2019_2020 = transform_form(s2019_2020) s2019_2020.tail(12) # - def transform_ftr(row, column_name): if row[column_name] == 'H': return 1 if row[column_name] == 'A': return -1 else: return 0 s2019_2020.FTR = s2019_2020.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) # ## Team Statistics # + def get_team_statistics(X, avg_HS, avg_AS, avg_AC, avg_HC): team_statistics = pd.DataFrame(columns=['Club Name', 'HTG', 'ATG', 'HTC', 'ATC', 'HAS', 'AAS']) home_team_group = X.groupby('HomeTeam') away_team_group = X.groupby('AwayTeam') num_games = X.shape[0] / 20 team_statistics['Club Name'] = home_team_group.groups.keys() team_statistics['HTG'] = home_team_group.FTHG.sum().values team_statistics['ATG'] = away_team_group.FTAG.sum().values team_statistics['HTC'] = home_team_group.FTAG.sum().values team_statistics['ATC'] = away_team_group.FTHG.sum().values team_statistics['HAS'] = (team_statistics['HTG'] / 
num_games) / avg_HS team_statistics['AAS'] = (team_statistics['ATG'] / num_games) / avg_AS team_statistics['HDS'] = (team_statistics['ATC'] / num_games) / avg_AC team_statistics['ADS'] = (team_statistics['HTC'] / num_games) / avg_HC return team_statistics def transform_stat(data): data['HAS'] = 0.0 data['AAS'] = 0.0 data['HDS'] = 0.0 data['ADS'] = 0.0 data['HXG'] = 0.0 data['AXG'] = 0.0 HAS = [] AAS = [] HDS = [] ADS = [] HXG = [] AXG = [] avg_HS = data.FTHG.sum() / data.shape[0] avg_AS = data.FTAG.sum() / data.shape[0] avg_HC = avg_AS avg_AC = avg_HS team_stat = get_team_statistics(data, avg_HS, avg_AS, avg_AC, avg_HC) for index, row in data.iterrows(): HAS.append(team_stat[team_stat['Club Name'] == row['HomeTeam']]['HAS'].values[0]) AAS.append(team_stat[team_stat['Club Name'] == row['AwayTeam']]['AAS'].values[0]) HDS.append(team_stat[team_stat['Club Name'] == row['HomeTeam']]['HDS'].values[0]) ADS.append(team_stat[team_stat['Club Name'] == row['AwayTeam']]['ADS'].values[0]) data['HAS'] = HAS data['AAS'] = AAS data['HDS'] = HDS data['ADS'] = ADS for index, row in data.iterrows(): HXG.append(row['HAS'] * row['ADS'] * avg_HS) AXG.append(row['AAS'] * row['HDS'] * avg_AS) data['HXG'] = HXG data['AXG'] = AXG return data # - s2019_2020 = transform_stat(s2019_2020) # ## Recent K Performance def k_perf(data, k=3): data['PastFTHG'] = 0.0 for idx in range(data.shape[0]-1, -1, -1): row = data.loc[idx] ht = row.HomeTeam at = row.AwayTeam ht_stats = data[idx:][(data.HomeTeam == ht)|(data.AwayTeam == ht)].head(k) at_stats = data[idx:][(data.HomeTeam == at)|(data.AwayTeam == at)].head(k) data.loc[idx, 'PastFTHG'] = ht_stats[ht_stats['HomeTeam'] == ht].FTHG.sum() + ht_stats[ht_stats['AwayTeam'] == ht].FTAG.sum() data.loc[idx, 'PastFTAG'] = at_stats[at_stats['HomeTeam'] == at].FTHG.sum() + at_stats[at_stats['AwayTeam'] == at].FTAG.sum() data.loc[idx, 'PastHST'] = ht_stats[ht_stats['HomeTeam'] == ht].HST.sum() + ht_stats[ht_stats['AwayTeam'] == ht].AST.sum() data.loc[idx, 'PastAST'] = at_stats[at_stats['HomeTeam'] == at].HST.sum() + ht_stats[ht_stats['AwayTeam'] == ht].AST.sum() data.loc[idx, 'PastHS'] = ht_stats[ht_stats['HomeTeam'] == ht].HS.sum() + ht_stats[ht_stats['AwayTeam'] == ht].AS.sum() data.loc[idx, 'PastAS'] = at_stats[at_stats['HomeTeam'] == at].HS.sum() + ht_stats[ht_stats['AwayTeam'] == ht].AS.sum() return data s2019_2020 = k_perf(s2019_2020) # ## Normal Features # + X = s2019_2020[['HAS', 'AAS', 'HDS', 'ADS', 'HXG', 'AXG', 'PastFTHG', 'PastFTAG', 'PastHST', 'PastAST', 'PastHS', 'PastAS']] y = s2019_2020['FTR'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=34, stratify=s2019_2020.FTR) print('X_train.shape: {}'.format(X_train.shape)) print('X_test.shape: {}'.format(X_test.shape)) print('y_train.shape: {}'.format(y_train.shape)) print('y_test.shape: {}'.format(y_test.shape)) # - # initialize models knn = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_train) rf = RandomForestClassifier(n_estimators=1000).fit(X_train, y_train) xgb = XGBClassifier(n_estimators=1000).fit(X_train, y_train) tr = DecisionTreeClassifier().fit(X_train, y_train) gnb = GaussianNB().fit(X_train, y_train) svc = SVC(C=10).fit(X_train, y_train) print('KNN:\n train score: {:.3f}, test score: {:.3f}'.format(knn.score(X_train, y_train), knn.score(X_test, y_test))) print('RF:\n train score: {:.3f}, test score: {:.3f}'.format(rf.score(X_train, y_train), rf.score(X_test, y_test))) print('XGB:\n train score: {:.3f}, test score: {:.3f}'.format(xgb.score(X_train, y_train), 
xgb.score(X_test, y_test))) print('TR:\n train score: {:.3f}, test score: {:.3f}'.format(tr.score(X_train, y_train), tr.score(X_test, y_test))) print('GNB:\n train score: {:.3f}, test score: {:.3f}'.format(gnb.score(X_train, y_train), gnb.score(X_test, y_test))) print('SVC:\n train score: {:.3f}, test score: {:.3f}'.format(svc.score(X_train, y_train), svc.score(X_test, y_test))) # ## Differential Features def add_diff_features(data): scaler = StandardScaler() scaled = scaler.fit_transform(data.drop(['HomeTeam', 'AwayTeam', 'FTR'], axis=1)) columns = set(data.columns) - {'HomeTeam', 'AwayTeam', 'FTR'} data[list(columns)] = scaled data['AttackDiff'] = data['HAS'] - data['AAS'] data['DefenceDiff'] = data['HDS'] - data['ADS'] data['ExpGoalDiff'] = data['HXG'] - data['AXG'] data['PastGoalDiff'] = data['PastFTHG'] - data['PastFTAG'] data['PastShotsOnTargetDiff'] = data['PastHST'] - data['PastAST'] data['PastShotsDiff'] = data['PastHS'] - data['PastAS'] data['AttackDiff'] = scaler.fit_transform(data['AttackDiff'].values.reshape(-1, 1)) data['DefenceDiff'] = scaler.fit_transform(data['DefenceDiff'].values.reshape(-1, 1)) data['ExpGoalDiff'] = scaler.fit_transform(data['ExpGoalDiff'].values.reshape(-1, 1)) data['PastGoalDiff'] = scaler.fit_transform(data['PastGoalDiff'].values.reshape(-1, 1)) data['PastShotsOnTargetDiff'] = scaler.fit_transform(data['PastShotsOnTargetDiff'].values.reshape(-1, 1)) data['PastShotsDiff'] = scaler.fit_transform(data['PastShotsDiff'].values.reshape(-1, 1)) return data s2019_2020 = add_diff_features(s2019_2020) s2019_2020.head() # + X2 = s2019_2020[['AttackDiff', 'DefenceDiff', 'ExpGoalDiff', 'PastGoalDiff', 'PastShotsOnTargetDiff', 'PastShotsDiff']] y2 = s2019_2020['FTR'] X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.15, random_state=34, stratify=s2019_2020.FTR) print('X_train.shape: {}'.format(X2_train.shape)) print('X_test.shape: {}'.format(X2_test.shape)) print('y_train.shape: {}'.format(y2_train.shape)) print('y_test.shape: {}'.format(y2_test.shape)) # - # initialize models knn = KNeighborsClassifier(n_neighbors=10).fit(X2_train, y2_train) rf = RandomForestClassifier(n_estimators=1000).fit(X2_train, y2_train) xgb = XGBClassifier(n_estimators=1000).fit(X2_train, y2_train) tr = DecisionTreeClassifier().fit(X2_train, y2_train) gnb = GaussianNB().fit(X2_train, y2_train) svc = SVC(C=10).fit(X2_train, y2_train) print('KNN:\n train score: {:.3f}, test score: {:.3f}'.format(knn.score(X2_train, y2_train), knn.score(X2_test, y2_test))) print('RF:\n train score: {:.3f}, test score: {:.3f}'.format(rf.score(X2_train, y2_train), rf.score(X2_test, y2_test))) print('XGB:\n train score: {:.3f}, test score: {:.3f}'.format(xgb.score(X2_train, y2_train), xgb.score(X2_test, y2_test))) print('TR:\n train score: {:.3f}, test score: {:.3f}'.format(tr.score(X2_train, y2_train), tr.score(X2_test, y2_test))) print('GNB:\n train score: {:.3f}, test score: {:.3f}'.format(gnb.score(X2_train, y2_train), gnb.score(X2_test, y2_test))) print('SVC:\n train score: {:.3f}, test score: {:.3f}'.format(svc.score(X2_train, y2_train), svc.score(X2_test, y2_test))) # ## Cross Validation forest_scores = cross_val_score(rf, X2, y2, cv=10) tree_scores = cross_val_score(tr, X2, y2, cv=10) knn_scores = cross_val_score(knn, X2, y2, cv=10) xgb_scores = cross_val_score(xgb, X2, y2, cv=10) gnb_scores = cross_val_score(gnb, X2, y2, cv=10) svc_scores = cross_val_score(svc, X2, y2, cv=10) print('Random Forest Classifier Accuracy: {:.2f}%'.format(forest_scores.mean() * 100)) print('Tree 
Classifier Accuracy: {:.2f}%'.format(tree_scores.mean() * 100)) print('K-Nearest Neighbor Accuracy: {:.2f}%'.format(knn_scores.mean() * 100)) print('XGB Classifier Accuracy: {:.2f}%'.format(xgb_scores.mean() * 100)) print('Gaussian Naive Bayes Classifier Accuracy: {:.2f}%'.format(gnb_scores.mean() * 100)) print('Support Vector Classifier Accuracy: {:.2f}%'.format(svc_scores.mean() * 100)) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=.15) knn.fit(X_train, y_train) predictions = knn.predict(X_test) print(classification_report(y_test, predictions)) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=.15) gnb.fit(X_train, y_train) predictions = gnb.predict(X_test) print(classification_report(y_test, predictions)) # ## Append Data s2018_2019 = pd.read_csv('data/2018-2019.csv') s2017_2018 = pd.read_csv('data/2017-2018.csv') s2016_2017 = pd.read_csv('data/2016-2017.csv') s2015_2016 = pd.read_csv('data/2015-2016.csv') s2014_2015 = pd.read_csv('data/2014-2015.csv') columns = ['HomeTeam', 'AwayTeam', 'FTHG', 'FTAG', 'FTR', 'HTHG', 'HTAG', 'HS', 'AS', 'HST', 'AST', 'HC', 'AC'] s2018_2019 = s2018_2019[columns] s2017_2018 = s2017_2018[columns] s2016_2017 = s2016_2017[columns] s2015_2016 = s2015_2016[columns] s2014_2015 = s2014_2015[columns] s2018_2019.FTR = s2018_2019.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) s2017_2018.FTR = s2017_2018.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) s2016_2017.FTR = s2016_2017.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) s2015_2016.FTR = s2015_2016.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) s2014_2015.FTR = s2014_2015.apply(lambda row: transform_ftr(row, 'FTR'), axis=1) s2018_2019 = transform_stat(s2018_2019) s2017_2018 = transform_stat(s2017_2018) s2016_2017 = transform_stat(s2016_2017) s2015_2016 = transform_stat(s2015_2016) s2014_2015 = transform_stat(s2014_2015) s2014_2015 = k_perf(s2014_2015) s2015_2016 = k_perf(s2015_2016) s2016_2017 = k_perf(s2016_2017) s2017_2018 = k_perf(s2017_2018) s2018_2019 = k_perf(s2018_2019) s2014_2015 = add_diff_features(s2014_2015) s2015_2016 = add_diff_features(s2015_2016) s2016_2017 = add_diff_features(s2016_2017) s2017_2018 = add_diff_features(s2017_2018) s2018_2019 = add_diff_features(s2018_2019) s2019_2020 = s2019_2020.append(s2018_2019, sort=False, ignore_index=True) s2019_2020 = s2019_2020.append(s2017_2018, sort=False, ignore_index=True) s2019_2020 = s2019_2020.append(s2016_2017, sort=False, ignore_index=True) s2019_2020 = s2019_2020.append(s2015_2016, sort=False, ignore_index=True) s2019_2020 = s2019_2020.append(s2014_2015, sort=False, ignore_index=True) s2019_2020.info() # + X2 = s2019_2020[['AttackDiff', 'DefenceDiff', 'ExpGoalDiff', 'PastGoalDiff', 'PastShotsOnTargetDiff', 'PastShotsDiff']] y2 = s2019_2020['FTR'] X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.15, random_state=34, stratify=s2019_2020.FTR) print('X_train.shape: {}'.format(X2_train.shape)) print('X_test.shape: {}'.format(X2_test.shape)) print('y_train.shape: {}'.format(y2_train.shape)) print('y_test.shape: {}'.format(y2_test.shape)) # - # initialize models knn = KNeighborsClassifier(n_neighbors=10).fit(X2_train, y2_train) rf = RandomForestClassifier(n_estimators=1000).fit(X2_train, y2_train) xgb = XGBClassifier(n_estimators=1000).fit(X2_train, y2_train) tr = DecisionTreeClassifier().fit(X2_train, y2_train) gnb = GaussianNB().fit(X2_train, y2_train) svc = SVC(C=10).fit(X2_train, y2_train) print('KNN:\n train score: {:.3f}, test 
score: {:.3f}'.format(knn.score(X2_train, y2_train), knn.score(X2_test, y2_test))) print('RF:\n train score: {:.3f}, test score: {:.3f}'.format(rf.score(X2_train, y2_train), rf.score(X2_test, y2_test))) print('XGB:\n train score: {:.3f}, test score: {:.3f}'.format(xgb.score(X2_train, y2_train), xgb.score(X2_test, y2_test))) print('TR:\n train score: {:.3f}, test score: {:.3f}'.format(tr.score(X2_train, y2_train), tr.score(X2_test, y2_test))) print('GNB:\n train score: {:.3f}, test score: {:.3f}'.format(gnb.score(X2_train, y2_train), gnb.score(X2_test, y2_test))) print('SVC:\n train score: {:.3f}, test score: {:.3f}'.format(svc.score(X2_train, y2_train), svc.score(X2_test, y2_test))) forest_scores = cross_val_score(rf, X2, y2, cv=10) tree_scores = cross_val_score(tr, X2, y2, cv=10) knn_scores = cross_val_score(knn, X2, y2, cv=10) xgb_scores = cross_val_score(xgb, X2, y2, cv=10) gnb_scores = cross_val_score(gnb, X2, y2, cv=10) svc_scores = cross_val_score(svc, X2, y2, cv=10) print('Random Forest Classifier Accuracy: {:.2f}%'.format(forest_scores.mean() * 100)) print('Tree Classifier Accuracy: {:.2f}%'.format(tree_scores.mean() * 100)) print('K-Nearest Neighbor Accuracy: {:.2f}%'.format(knn_scores.mean() * 100)) print('XGB Classifier Accuracy: {:.2f}%'.format(xgb_scores.mean() * 100)) print('Gaussian Naive Bayes Classifier Accuracy: {:.2f}%'.format(gnb_scores.mean() * 100)) print('Support Vector Classifier Accuracy: {:.2f}%'.format(svc_scores.mean() * 100)) # ### Support Vector Machines Hyperparameter tuning param_grid = { 'C': [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 20, 50, 100], 'gamma': [1, 0.1, 0.01, 0.001, 0.0001, 0.00001], 'kernel': ['rbf'] } clf = GridSearchCV(svc, param_grid, cv=10, n_jobs=-1) clf.fit(X2, y2) clf.best_params_ svc = SVC(**clf.best_params_) svc.fit(X2_train, y2_train) print('X_test score: {}'.format(svc.score(X2_test, y2_test))) predictions = svc.predict(X2_test) print(classification_report(y2_test, predictions)) # ### Knn Hyperpameter tuning param_grid = { 'n_neighbors': list(range(1, 200)), 'leaf_size': list(range(1, 50)), 'p': [1, 2] } clf = GridSearchCV(knn, param_grid, cv=10, n_jobs=-1) clf.fit(X2, y2) clf.best_params_ knn = KNeighborsClassifier(**clf.best_params_) knn.fit(X2_train, y2_train) print('X_test score: {}'.format(knn.score(X2_test, y2_test))) predictions = knn.predict(X2_test) print(classification_report(y2_test, predictions)) gnb = GaussianNB() gnb.fit(X2_train, y2_train) print('X_test score: {}'.format(gnb.score(X2_test, y2_test))) predictions = gnb.predict(X2_test) print(classification_report(y2_test, predictions)) class MakeAttributes(BaseEstimator): """ Engineer new attributes """ def __init__(self): pass def fit(self, X, y=None): return self def transform(self, X, y=None): X = transform_stat(X) X = k_perf(X) X = add_diff_features(X) return X s2014_2015.head() s2019_2020.head() model = SVC(C=10, gamma=0.01, kernel='rbf') model.fit(X2_train, y2_train) print('Train score: {}'.format(model.score(X2_train, y2_train))) print('Test score: {}'.format(model.score(X2_test, y2_test))) pickle.dump(model, open('model2.pkl', 'wb')) def transform_back_ftr(row, column_name): if row[column_name] == 1: return 'H' if row[column_name] == -1: return 'A' else: return 'D' # ## Pipeline # + model_pipeline = Pipeline(steps=[('Make Attributes', MakeAttributes())]) model = pickle.load(open('model2.pkl', 'rb')) data = pd.read_csv('data/2020-2021.csv') data = data[columns] predict_columns = [] data_tr = model_pipeline.fit_transform(data)[['AttackDiff', 'DefenceDiff', 
'ExpGoalDiff', 'PastGoalDiff', 'PastShotsOnTargetDiff', 'PastShotsDiff']]
predictions = model.predict(data_tr)
data = data[['HomeTeam', 'AwayTeam', 'FTR']]
data['Predictions'] = predictions
data.Predictions = data.apply(lambda row: transform_back_ftr(row, 'Predictions'), axis=1)
data

# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''udemyPro_py388'': venv)' # name: python3 # ---

# +
# Built-ins are baked into the language (no import needed)!
# Every variable is an object of a class
x = -9
# Absolute value
print(abs(x))
#or
print(x.__abs__())

# +
# Falsy values: None, 0, [], {}, False
# all returns True if all elements are truthy
# any returns True if at least one element is truthy
my_list = [1 , 3 , None, "Hallo"]
print(all(my_list))
print(any(my_list))
# -

# Binary representation -> output is a string
bin(3)
print(bin(5))
print(hex(12))

# Bool
val = 0
print(bool(val))

# dir: which functions belong to a function/variable

# +
a = 10
b = 3
print(a/b)
# divmod stores the results of / and % as a tuple
print(divmod(a, b))

# +
# enumerate: iterable, start=0

# +
# globals and locals: return the globals and locals as a dictionary
#help(object)

# +
# isinstance (does an object belong to a given class)
# issubclass -> is a class derived from another

# +
# max and min of an object
# Careful with dicts: it would take the minimum of the keys

# +
# open: open a file: use "with"

# +
# pow: raise a number to a power: analogous to **

# +
# range(stop)
# range(start, stop, [step])

# +
# reversed to iterate the other way round (back to front)

# +
# sum(iterable, [start])
# -

# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # ---

# + [markdown] id="view-in-github" colab_type="text"
# Open In Colab

# + [markdown] id="x5hb00f3LJAs"
# # Intro to Python

# + id="iNBNGa1uLM2n"
# Python indentation
if 5>2:
    print("five is greater than two")

# + [markdown] id="jxr5vym0MgfC"
# Five is greater than two

# + [markdown] id="McfJL3s2Mk0j"
# # Python Variables

# + id="N9f6grHPMoJC"
x=1
a,b=0,1
a,b,c="zero","one","two"
print(x)
print(a)
print(b)
print(c)

# + [markdown] id="SrQBUTRkM44D"
# 1
#
# zero
#
# one
#
# two

# + id="OglSn9VyM8NC"
d="Sally"  # This is a string
D="Ana"
print(d)
e="John"
print(e)
print(D)

# + [markdown] id="ZiHGfismNR5L"
# Sally
#
# John
#
# Ana

# + id="9aomNDXmNWNT"
print(type(d))  # This is the type function
print(type(x))

# + [markdown] id="UFZwNbsVNlns"
# `<class 'str'>`
#
# `<class 'int'>`

# + [markdown] id="nBv8tHVcNrRU"
# # Casting

# + id="T4TarnelN6Sr"
f = float(4)
print(f)
g = int(5)
print(g)

# + [markdown] id="0rueu4EHOCy0"
# 4.0
#
# 5

# + [markdown] id="L3BSvytIOE1L"
# # Multiple Variables with One Value

# + id="-LzD_oUAOM77"
x=y=z="four"
print(x)
print(y)
print(z)

# + [markdown] id="vSMUuZNlOT2F"
# four
#
# four
#
# four

# + id="3N3sQhJNOZo8"
x="enjoying"
print("Python Programming is " + x)

# + [markdown] id="kovMlyFROlu8"
# Python Programming is enjoying

# + [markdown] id="6WhDRv_vOp-E"
# # Operations in Python

# + id="tFCtQ3wpOx68"
x=5
y=7
print(x+y)
print(x*y)
print(x)

# + [markdown] id="R-8dYohqO7Ms"
# 12
#
# 35
#
# 5

# --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # orphan: true # --- # + tags=["remove-input", "remove-output", "active-ipynb"] # try: # from openmdao.utils.notebook_utils import notebook_mode # except ImportError: # !python -m pip install openmdao[notebooks] # - # # Checking Partial Derivatives on a Subset of a Model # ## Includes and Excludes # # When you have a model with a large number of components, you may want to reduce the number of components you check so that the output is small and readable. The `check_partials` method has two arguments: “includes” and “excludes” that help you specify a reduced set. Both of these arguments are lists of strings that default to None. If you specify “includes”, and give it a list containing strings, then only the components whose full pathnames match one of the patterns in those strings are included in the check. Wildcards are acceptable in the string patterns. Likewise, if you specify excludes, then components whose pathname matches the given patterns will be excluded from the check. # # You can use both arguments together to hone in on the precise set of components you wish to check. # + tags=["remove-input", "remove-output"] from openmdao.utils.notebook_utils import get_code from myst_nb import glue glue("code_src62", get_code("openmdao.test_suite.components.paraboloid.Paraboloid"), display=False) # - # :::{Admonition} `Paraboloid` class definition # :class: dropdown # # {glue:}`code_src62` # ::: # + import openmdao.api as om from openmdao.test_suite.components.paraboloid import Paraboloid prob = om.Problem() model = prob.model sub = model.add_subsystem('c1c', om.Group()) sub.add_subsystem('d1', Paraboloid()) sub.add_subsystem('e1', Paraboloid()) sub2 = model.add_subsystem('sss', om.Group()) sub3 = sub2.add_subsystem('sss2', om.Group()) sub2.add_subsystem('d1', Paraboloid()) sub3.add_subsystem('e1', Paraboloid()) model.add_subsystem('abc1cab', Paraboloid()) prob.setup() prob.run_model() prob.check_partials(compact_print=True, includes='*c*c*') # - prob.check_partials(compact_print=True, includes=['*d1', '*e1']) prob.check_partials(compact_print=True, includes=['abc1cab']) # + pycharm={"name": "#%%\n"} prob.check_partials(compact_print=True, includes='*c*c*', excludes=['*e*']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Investigando a evolução do IBOPE das emissoras brasileiras # Neste trabalho buscamos determinar como evoluiu a audiência das emissoras de televisão aberta nos últimos anos. Para isto utilizaremos os dados disponíveis na página [Kantar Ibope Media](https://www.kantaribopemedia.com/). 
# # pseudo-código: # + active="" # imports () # for page in 22: # get_urls(page) # | urllib.request.urlopen(page) # | links_list = beautifulSoup(get_links) # * return links_list # # for link in link_list: # page = urllib.request.urlopen(link) # scrapper(page) # | for each chanel: # | get data # | return # + import numpy as np import pandas as pd from datetime import datetime from bs4 import BeautifulSoup import urllib import time import matplotlib.pyplot as plt plt.style.use('seaborn-darkgrid') # + # Get_URLs: domain = 'https://www.kantaribopemedia.com' link_list = [] complete_data = [] def get_link (link): page = urllib.request.urlopen(link) soup = BeautifulSoup(page) article = soup.find_all('article') links = [] for i in range(len(article)): path = article[i].h2.a.get('href') links.append(path) return links # Scrapper: def scrapper(link, domain): complete_link = domain + link page = urllib.request.urlopen(complete_link) soup = BeautifulSoup(page) day_raw = soup.h1.contents[0][-10:] day_raw = test_fmt_data(day_raw, soup) day = datetime.strptime(day_raw, '%d/%m/%Y') print(day) tables = soup.find_all('table') channels = ['Band', 'Globo', 'Record', 'SBT'] data, cols = channel_finder(channels, tables, day) return data, cols # Get the information from each channel def channel_finder(ch_list, tables, day): c = 0 data = [] for chn in ch_list: if chn == 'Band': c = 0 elif chn == 'Globo': c = 1 elif chn == 'Record': c = 2 elif chn == 'RedeTV': c = 3 elif chn == 'SBT': c = 4 else: raise Exception('ERRO! {} não é um canal válido'.format(chn.upper())) try: find_chn = tables[c].find_all('tr') except IndexError: pass channel_name = str(find_chn[0].find('th').contents[0]).upper() cols = [str(i.contents[0]) for i in find_chn[2].find_all('td')[0:3]] cols.insert(0, 'Programa') cols.insert(0, 'Data') cols.insert(0, 'Canal') values = [] programs = [] for j in find_chn[4:]: try: name = [str(i).upper() for i in j.find_all('td')[0]] num = [float(str(i.contents[0]).replace('.', '').replace(',','.')) for i in j.find_all('td')[1:4]] values.append(num) programs.append(name) except IndexError: name = ['ERROR'] num = [np.nan, np.nan, np.nan] programs.append(name) values.append(num) pass except ValueError: pass for i in range(len(values)): values[i].insert(0, programs[i][0]) values[i].insert(0, day) values[i].insert(0, channel_name) data.extend(values) return data, cols def test_fmt_data(day_raw, soup): if 'a' in day_raw: raw = day_raw[-5:] raw_month = int(raw[3:5]) post = str(soup.time.contents[0]) post_month = int(post[3:5]) post_year = int(post[-4:]) if post_month == raw_month: raw = raw + '/' + str(post_year) return raw elif post_month < raw_month: year = post_year - 1 raw = raw + '/' + str(year) return raw elif post_month > raw_month: raw = raw + '/' + str(post_year) return raw else: return day_raw ### Here we are getting the 10 first pages ### for PAGE in range(1,11): link = 'https://www.kantaribopemedia.com/conteudo/dados-rankings/audiencia-tv-15-mercados/page/{}/'.format(PAGE) list_element = get_link(link) link_list.append(list_element) time.sleep(1) count = 0 for line_link in link_list: for link in line_link: part_data, columns = scrapper(link, domain) complete_data.extend(part_data) count += 1 print('Line{}'.format(count)) time.sleep(5) print('DONE!') #print(complete_data) # - df = pd.DataFrame(complete_data, columns=columns) means = df.pivot_table(index = ['Canal','Data'], values=['Audiência Domiciliar','Audiência Individual', 'COV % Individual']) means['Audiência Individual']['SBT'] # + # %matplotlib 
notebook means['Audiência Individual']['GLOBO'].plot(label = 'GLOBO INDV.'); means['Audiência Domiciliar']['GLOBO'].plot(label = 'GLOBO DOM.'); means['COV % Individual']['GLOBO'].plot(label = 'GLOBO COV'); #means['Audiência Domiciliar']['GLOBO'][:170].plot(label = 'GLOBO'); #ma_4w.plot(label = 'Globo média móvel') #means['Audiência Individual']['RECORD'][:170].plot( label = 'RECORD'); #means['Audiência Individual']['SBT'][:170].plot( label = 'SBT'); plt.legend(prop = {'size': 10}); #means['Audiência Domiciliar']['GLOBO'].nlargest(5) # - errado_page = urllib.request.urlopen('https://www.kantaribopemedia.com/dados-de-audiencia-nas-15-pracas-regulares-com-base-no-ranking-consolidado-1604-a-2204/') certo_page = urllib.request.urlopen('https://www.kantaribopemedia.com/dados-de-audiencia-nas-15-pracas-regulares-com-base-no-ranking-consolidado-1510-a-2110/') certo_soup = BeautifulSoup(certo_page) errado_soup = BeautifulSoup(errado_page) tables = errado_soup.find_all('table') for c in range(0,4): find_chn = tables[c].find_all('tr') values = [] programs = [] print(c) name = 0 for j in find_chn[4:]: name = [str(i).upper() for i in j.find_all('td')[0]] print(name) num = [float(str(i.contents[0]).replace('.', '').replace(',','.')) for i in j.find_all('td')[1:4]] print(num) values.append(num) programs.append(name) certo_soup.find_all('table')[4].find_all('tr')[4:][9].find_all('td')[0] # + len(errado_soup.find_all('table'))#[4].find_all('tr')[4:][9].find_all('td')[0]) ### 2018-10-14 00:00:00 # + active="" # # Scrapper: # def scrapper(link, domain): # complete_link = domain + link # page = urllib.request.urlopen(complete_link) # soup = BeautifulSoup(page) # # day_raw = soup.h1.contents[0][-10:] # day = datetime.strptime(day_raw, '%d/%m/%Y') # # tables = soup.find_all('table') # # channels = ['Band', 'Globo', 'Record', 'RedeTV', 'SBT'] # # data = channel_finder(channels, tables, day) # # return(data) # # # def channel_finder(ch_list, tables, day): # c = 0 # data = [] # for chn in ch_list: # if chn == 'Band': # c = 0 # elif chn == 'Globo': # c = 1 # elif chn == 'Record': # c = 2 # elif chn == 'RedeTV': # c = 3 # elif chn == 'SBT': # c = 4 # else: # raise Exception('ERRO! 
{} não é um canal válido'.format(chn.upper())) # # # find_chn = tables[c].find_all('tr') # # channel_name = str(find_chn[0].find('th').contents[0]) # cols = [str(i.contents[0]) for i in find_chn[2].find_all('td')[0:3]] # cols.insert(0, 'Programa') # cols.insert(0, 'Data') # cols.insert(0, 'Canal') # # values = [] # programs = [] # for j in find_chn[4:]: # name = [str(i) for i in j.find_all('td')[0]] # num = [float(str(i.contents[0]).replace(',','.')) for i in j.find_all('td')[1:4]] # values.append(num) # programs.append(name) # # # for i in range(len(values)): # values[i].insert(0, programs[i][0]) # values[i].insert(0, day) # values[i].insert(0, channel_name) # # data.append(values) # # return(data) # + active="" # day = soup.find('h1').contents[0][-10:] # day_fmt = datetime.strptime(day, '%d/%m/%Y') # # find_chn = tables[1].find_all('tr') # # # channel = str(find_chn[0].find('th').contents[0]) # cols = [str(i.contents[0]) for i in find_chn[2].find_all('td')[0:3]] # cols.insert(0, 'Programa') # cols.insert(0, 'Data') # cols.insert(0, 'Canal') # # values = [] # programs = [] # for j in find_chn[4:]: # name = [str(i) for i in j.find_all('td')[0]] # num = [float(str(i.contents[0]).replace(',','.')) for i in j.find_all('td')[1:4]] # values.append(num) # programs.append(name) # # for i in range(len(values)): # values[i].insert(0, programs[i][0]) # values[i].insert(0, day_fmt) # values[i].insert(0, channel) # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import math import numpy as np from matplotlib import pyplot as plt # Generamos los datos para el gráfico x = np.array(range(500))*0.1 y = np.zeros(len(x)) for i in range(len(x)): y[i] = math.sin(x[i]) # Creamos el gráfico plt.plot(x,y) plt.show # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.2 64-bit (''myFirstEnv'': conda)' # name: python372jvsc74a57bd0fb244c6b6b9237e6e3fa3faf64cdbdad650653f36bee7e07f29506d67a0f182d # --- import pandas as pd import xgboost as xgb from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error import itertools # + data1, data2, data3 = pd.read_json('./data1.json'), pd.read_json('./data2.json'), pd.read_json('./data3.json') frames = [data1, data2, data3] data = pd.concat(frames, sort=False) trainSet, testSet = train_test_split(data, test_size=0.1) estimators = [x for x in range(300, 10001, 100)] lambdas = [10**-1, 10**-2, 10**-3, 10**-4] depths = [x for x in range(15, 51, 5)] mix = [estimators,lambdas,depths] textFile = 'xgtest.txt' #itertools.combinations(mix, 3) for i, comb in enumerate(itertools.product(*mix)): with open(textFile, 'a') as f: f.write(f'Set #:{i} with parameters: {comb} \n') regressor = xgb.XGBRegressor( n_estimators = comb[0], reg_lambda=comb[1], gamma = 0, max_depth=comb[2] ) regressor.fit(trainSet.iloc[:,:16], trainSet.iloc[:,16:]) output_pred = regressor.predict(testSet.iloc[:,:16]) offset = 0 total = testSet.shape[0] for i in range(len(output_pred)): offset += (output_pred[i] * 100) / testSet.iloc[i,16:].output #print(f'Prediction: {output_pred[i]}. 
Value: {testSet.iloc[i,16:].output}') with open(textFile, 'a') as f: f.write(f'The test were off by an average of: {offset/total:.2f}% \n') # - mean_squared_error(testSet.iloc[:,16:], output_pred) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import absolute_import from __future__ import print_function import os import datetime from shutil import copyfile from tensorflow.keras.utils import plot_model # - #delete later import traci from simulation import Simulation, TrainSimulation, VanillaTrainSimulation, RNNTrainSimulation from generator import TrafficGenerator from memory import Memory, NormalMemory, SequenceMemory # from model import TrainModel from model import * from visualization import Visualization from utils import import_train_configuration, set_sumo, set_train_path # + if __name__ == "__main__": config = import_train_configuration(config_file='training_settings.ini') sumo_cmd = set_sumo(config['gui'], config['sumocfg_file_name'], config['max_steps']) path = set_train_path(config['models_path_name']) # # SET PARAMETERS (ADD TO CONFIG LATER) # #TO DO: add to config files: sequence_length = 15 #SET STATE DIMENSION PARAMETERS number_of_cells_per_lane = 10 conv_state_shape = (number_of_cells_per_lane, 8, 2) green_phase_state_shape = 4 elapsed_time_state_shape = 1 state_shape = [conv_state_shape, green_phase_state_shape, elapsed_time_state_shape] TrafficGen = TrafficGenerator( config['max_steps'], config['penetration_rate'] ) Visualization = Visualization( path, dpi=96 ) #VANILLA MODEL if config['uses_reccurent_network'] == False: # online model used for training Model = VanillaTrainModel( config['batch_size'], config['learning_rate'], output_dim=config['num_actions'], state_shape=state_shape ) Model._model.summary() plot_model(Model._model, 'my_first_model_with_shape_info.png', show_shapes=True) #target model, only used for predictions. regularly the values of Model are copied into TargetModel TargetModel = VanillaTrainModel( config['batch_size'], config['learning_rate'], output_dim=config['num_actions'], state_shape=state_shape ) Memory = NormalMemory( config['memory_size_max'], config['memory_size_min'] ) Simulation = VanillaTrainSimulation( Model, TargetModel, Memory, TrafficGen, sumo_cmd, config['gamma'], config['max_steps'], config['green_duration'], config['yellow_duration'], config['num_actions'], config['training_epochs'], config['copy_step'] ) #RECURRENT MODEL else: # online model used for training Model = RNNTrainModel( config['batch_size'], config['learning_rate'], output_dim=config['num_actions'], state_shape=state_shape, sequence_length=sequence_length, statefulness = False ) Model._model.summary() plot_model(Model._model, 'my_first_model_with_shape_info.png', show_shapes=True) #target model, only used for predictions. 
regularly the values of Model are copied into TargetModel TargetModel = RNNTrainModel( config['batch_size'], config['learning_rate'], output_dim=config['num_actions'], state_shape=state_shape, sequence_length=sequence_length, statefulness = False ) PredictModel = RNNTrainModel( config['batch_size'], config['learning_rate'], output_dim=config['num_actions'], state_shape=state_shape, sequence_length=sequence_length, statefulness = True ) Memory = SequenceMemory( config['memory_size_max'], config['memory_size_min'], sequence_length ) Simulation = RNNTrainSimulation( Model, TargetModel, Memory, TrafficGen, sumo_cmd, config['gamma'], config['max_steps'], config['green_duration'], config['yellow_duration'], config['num_actions'], config['training_epochs'], config['copy_step'], PredictModel ) print(' ') print(' ') print('Starting...' ) print(' ') episode = 0 timestamp_start = datetime.datetime.now() while episode < config['total_episodes']: print('\n----- Episode', str(episode+1), 'of', str(config['total_episodes'])) #set epsilon epsilon = 1.0 - (episode / config['total_episodes']) # set the epsilon for this episode according to epsilon-greedy policy #run simulation + train for one episode at a time simulation_time, training_time = Simulation.run(episode, epsilon) # run the simulation print('Simulation time:', simulation_time, 's - Training time:', training_time, 's - Total:', round(simulation_time+training_time, 1), 's') episode += 1 print("\n----- Start time:", timestamp_start) print("----- End time:", datetime.datetime.now()) print("----- Session info saved at:", path) Model.save_model(path) copyfile(src='training_settings.ini', dst=os.path.join(path, 'training_settings.ini')) Visualization.training_save_data_and_plot(data=Simulation.reward_store, filename='reward', xlabel='Episode', ylabel='Cumulative negative reward') # + plot_model(Simulation._Model._model, show_shapes=True, show_layer_names=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to Python # # ## Regular Expressions Tutorial # # Inspired in these sources: [1](https://github.com/KennyMiyasato/regex_blog_post/blob/master/regex_blog_post.ipynb), [2](https://luca-pessina.medium.com/python-regular-expressions-in-5-minutes-ecc8b6624308), [3](https://developers.google.com/edu/python/regular-expressions) # Tables from: [regexr.com](https://regexr.com/) # #### Regular expressions are a powerful language for matching text patterns. This page gives a basic introduction to regular expressions themselves sufficient for our Python exercises and shows how regular expressions work in Python. The Python "re" module provides regular expression support. 
# # #### In Python a regular expression search is typically written as: # # > match = re.search(pattern, str) # # ### Importing modules import re import pandas as pd # ### Defining sample strings # + lowercase_alphabet = "abcdefghijklmnopqrstuvwxyz" uppercase_alphabet = lowercase_alphabet.upper() numbers = "1234567890" sentence = "The Quick Brown Fox Jumps Over The Lazy Dog" website = "https://webmail2016.univie.ac.at/" phone_numbers = """123-456-7890 987.654.321 234-567-8901 654.321.987 345-678-9012 321.654.978 456-789-0123 """ special_characters = "[\^$.|?*+()" text_with_email = 'xyz purple monkey' long_text = ''' Nicht alles, was Gold ist, funkelt Nicht alles, was Gold ist, funkelt, Nicht jeder, der wandert, verlorn, Das Alte wird nicht verdunkelt, Noch Wurzeln der Tiefe erfroren. Aus Asche wird Feuer geschlagen, Aus Schatten geht Licht hervor; Heil wird geborstenes Schwert, Und König, der die Krone verlor. ''' # - # ### Match explicit character(s) # In order to match characters explicitly, all you need to do is type what you'd like to find. re.findall("abc", lowercase_alphabet) re.findall("abde", lowercase_alphabet) re.findall("cht", long_text) re.findall("ABC", uppercase_alphabet) re.findall("abc", uppercase_alphabet) re.findall("abc", uppercase_alphabet, flags=re.IGNORECASE) # ### Match with special character(s) # In order to match any *special characters `[\^$.|?*+()`* you have first introduce a backslash `\` followed by the character you'd like to select. re.findall("webmail2016\.univie\.ac\.at", website) re.findall("we.ma.l2016.uni.ie..c.at", website) re.findall("\$", special_characters) re.findall("$", special_characters) re.findall("\|", special_characters) re.findall("|", special_characters) # ### Match by pattern # There are a lot of ways we can match a pattern. Regex has its own syntax so we could pick and choose how we want our patterns to look like. # ### Character Classes # | Class | Explanation | # |---|---| # | . | any character except newline | # | \w \d \s | word, digit, whitespace | # | \W \D \S | not word, not digit, not whitespace | # | [abc] | any of a, b, or c | # | [^abc] | not a, b, or c | # | [a-g] | characters between a & g | # # [0-9] is not always equivalent to \d. # In python3, [0-9] matches only 0123456789 characters, while \d matches [0-9] and other digit characters, for example Eastern Arabic numerals ٠١٢٣٤٥٦٧٨٩. # ### Quantifiers & Alternation # | Class | Explanation | # |---|---| # | a* a+ a? | 0 or more, 1 or more, 0 or 1 | # | a{5} a{2,} | exactly five, two or more | # | a{1,3} | between one & three | # | a+? a{2,}? | match as few as possible | # | ab\|cd | match ab or cd | # re.findall("\w{1,}", sentence) re.findall("\W+", sentence) re.findall("[The]+", sentence) re.findall("[^TLB]{1,}", sentence) re.findall("[a-rA-R]+", sentence) re.findall("l+", long_text) re.findall("l+?", long_text) # ### Anchors # | Class | Explanation | # |---|---| # | ^abc$ | start / end of the string | # | \b \B | character between a & g | re.findall("^[TheQBFJOLD]", sentence) re.findall("^[TheQBFJOLD]+", sentence) re.findall("[a-z,\.]{1,}$", long_text) # ### Escaped Characters # | Class | Explanation | # |---|---| # | \\. 
\\* \\\ | escaped special characters | # | \\t \\n \\r | tab, linefeed, carriage return | re.findall("\d{3}\-\d{3}\-\d{4}", phone_numbers) re.findall("\d{3}[\.\-]{1}\d{3}[\.\-]{1}\d{3,4}", phone_numbers) print(long_text) re.findall("\n.{5}", long_text) # ### Groups & Lookaround # | Class | Explanation | # |---|---| # | (abc) | capture group | # | \1 | backreference to group #1 | # | (?:abc) | non-capturing group | # | (?=abc) | positive lookahead | # | (?!abc) | negative lookahead | match = re.search('([\w.-]+)@([\w.-]+)', text_with_email) if match: print(match.group()) ## '' (the whole match) print(match.group(1)) ## 'alice-b' (the username, group 1) print(match.group(2)) ## 'google.com' (the host, group 2) my_string = 'purple , blah monkey blah dishwasher' tuples = re.findall(r'([\w\.-]+)@([\w\.-]+)', my_string) print(tuples) ## [('alice', 'google.com'), ('bob', 'abc.com')] for tuple in tuples: print(tuple[0]) ## username print(tuple[1]) ## host # ### Substituting characters string = 'This%&$is@an#example' string = re.sub('[%&$@#]+', ' ', string) print(string) # ### Regular Expressions with Pandas # + fragebogen = pd.read_csv('../Data/CSV/fragebogen.csv', names=["id", "nummer", "titel", "schlagwoerter", "erscheinungsjahr", "autoren", "originaldaten", "anmerkung", "freigabe", "checked", "wordleiste", "druck", "online", "publiziert", "fragebogen_typ_id",]) fragebogen.drop(["id","schlagwoerter","erscheinungsjahr","autoren","originaldaten","anmerkung","freigabe", "checked","wordleiste","druck","online","publiziert","fragebogen_typ_id",], inplace=True, axis=1) fragebogen.set_index("nummer", drop=True, inplace=True) fragebogen = fragebogen[fragebogen.titel.str.startswith('Fragebogen')] # - fragebogen.info() fragebogen.head(20) # #### Spliting the string in groups # + regex1 = r'([Fragebon]+)\s{1}([0-9]+)[:]{1}([,A-ZÄÖÜa-zäöüß0-9.\s]+)[,\s]*([=\-\(\)\sA-ZÄÖÜa-zäöüß0-9]*)' fragebogen.titel.str.extract(regex1).head() # - fragebogen.loc[:,'fragebogen_num'] = fragebogen.titel.str.extract(regex1)[1].str.strip() fragebogen.loc[:,'fragebogen_headwords'] = fragebogen.titel.str.extract(regex1)[2].str.strip() fragebogen.loc[:,'fragebogen_series'] = fragebogen.titel.str.extract(regex1)[3].str.strip() fragebogen.head(20) # ## Exercises # ### Twitter: # # #### In twitter a username is validated if: # # + Is a text with a ‘@’ at the beginning so in regex ‘^@’ # + Contains only alphanumerical characters (letters A-Z, numbers 0–9). # + In this case we can use either ‘[A-Za-z0–9]’ or ‘[\w\d]’. # + We use the brackets because we want to apply another quantifiers to the set. # + It must be between 4 and 15 characters, so {4,15}. # # You can use the re.match() or re.search() methods to find a match for your string, in the following code is reported the selection throught the regex statement. 
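# #### As a quick illustration of the rule above, here is a minimal validation sketch (the handles are made-up examples; `\w` is used as suggested above, so underscores also count as valid characters):

# +
username_pattern = re.compile(r'^@\w{4,15}$')

for handle in ['@data_sci', '@ab', '@this_handle_is_way_too_long', 'no_at_sign']:
    # re.match returns None whenever the handle breaks one of the rules above
    print(handle, bool(username_pattern.match(handle)))
# -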
# #### Let's find some twitter databases: # # [First GOP debate](https://www.kaggle.com/crowdflower/first-gop-debate-twitter-sentiment) # [US Elections 2020](https://www.kaggle.com/manchunhui/us-election-2020-tweets) import sqlite3 conn = sqlite3.connect('../Data/tweet_database.sqlite') cur = conn.cursor() # #### Checking the tables in database q1 = "SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'" cur.execute(q1) cur.fetchall() # #### Checking columns in table "sentiments" and querying the first row from the "sentiments" table q2 = "SELECT * from Sentiment limit 1" cur.execute(q2) #print([f[0] for f in cur.description]) #cur.fetchall() for idx, row in enumerate(cur.fetchall()[0]): print(f"{cur.description[idx][0]}:\t{row}") print() # #### Retrieving first 50 rows from field sentiment.text: q3 = "SELECT text from Sentiment limit 50" cur.execute(q3) cur.fetchmany(3) tweets = cur.execute(q3).fetchall() tweets = [t[0] for t in tweets] tweets # ### Or you may choose usingmentions Pandas for querying q4 = "SELECT * from Sentiment limit 50" df = pd.read_sql_query(q4, conn) df.head() # ### Now it is you time! # # #### Extract from tweets: # + Mentions # + URLs # + Hashtags for tweet in tweets: match = re.findall('@\w+', tweet) if match: print(match) for tweet in tweets: match = re.findall('http+s?:/{2}\w{1,}\.\w{1,}/.\w{1,}', tweet) if match: print(match) for tweet in tweets: match = re.findall('\#\w+', tweet) if match: print(match) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reject uncertain predictions # # An example application of well-calibrated prediction uncertainty is the rejection of uncertain predictions. # We define an uncertainty threshold $ \mathcal{H}_{\mathrm{max}} $ and reject all predictions from the test set where $ \tilde{\mathcal{H}}(\mathbf{p}) > \mathcal{H}_{\mathrm{max}} $. # An increase in the accuracy of the remaining test set predictions is expected. # Let's create a figure that shows the top-1 error as a function of decreasing $ \mathcal{H}_{\mathrm{max}} $. 
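# A minimal, self-contained sketch of the rejection rule itself, using dummy softmax outputs and labels rather than the CIFAR models evaluated below: predictions whose normalized entropy exceeds the threshold are discarded, and the top-1 error is computed on the remaining ones.

# +
import torch

def normalized_entropy(probs, eps=1e-12):
    # entropy in base C (the number of classes), so values lie in [0, 1]
    num_classes = probs.size(1)
    ent = -(probs * (probs + eps).log()).sum(dim=1)
    return ent / torch.log(torch.tensor(float(num_classes)))

dummy_probs = torch.softmax(3 * torch.randn(1000, 10), dim=1)   # dummy predictive distributions
dummy_labels = torch.randint(0, 10, (1000,))                    # dummy ground-truth labels
h_max = 0.5                                                     # uncertainty threshold

keep = normalized_entropy(dummy_probs) <= h_max                 # reject everything above the threshold
top1_error = dummy_probs[keep].argmax(dim=1).ne(dummy_labels[keep]).float().mean()
print(f'kept {keep.sum().item()} of {len(keep)} predictions, top-1 error: {top1_error.item():.3f}')
# -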
# + pycharm={} # %matplotlib notebook import numpy as np np.random.seed(0) import torch torch.manual_seed(0) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms, models from torch.utils.data.sampler import SubsetRandomSampler import matplotlib from tqdm import tqdm from uce import eceloss, uceloss from utils import accuracy, nentr from models import BayesianNet from matplotlib import pyplot as plt import seaborn as sns sns.set() matplotlib.rcParams['text.usetex'] = True matplotlib.rcParams['font.size'] = 8 # + pycharm={} batch_size = 128 valid_size = 5000 mean = [0.4914, 0.48216, 0.44653] std = [0.2470, 0.2435, 0.26159] valid_set_18 = datasets.CIFAR10('../data', train=True, download=True, transform=transforms.Compose([ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=mean, std=std)])) test_set_18 = datasets.CIFAR10('../data', train=False, download=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std)])) valid_indices_18 = torch.load('./valid_indices_cifar10.pth') mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] valid_set = datasets.CIFAR100('../data', train=True, download=True, transform=transforms.Compose([ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=mean, std=std)])) test_set = datasets.CIFAR100('../data', train=False, download=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std)])) valid_indices = torch.load('./valid_indices_cifar100.pth') valid_loader_18 = torch.utils.data.DataLoader(valid_set_18, batch_size=batch_size, pin_memory=True, sampler=SubsetRandomSampler(valid_indices_18)) test_loader_18 = torch.utils.data.DataLoader(test_set_18, batch_size=batch_size, pin_memory=True, num_workers=4) valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=batch_size, pin_memory=True, sampler=SubsetRandomSampler(valid_indices)) test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, pin_memory=True, num_workers=4) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # + pycharm={} print(device) resnet18 = BayesianNet(num_classes=10, model='resnet18').to(device) resnet101 = BayesianNet(num_classes=100, model='resnet101').to(device) densenet169 = BayesianNet(num_classes=100, model='densenet169').to(device) checkpoint = torch.load(f'../snapshots/resnet18_499.pth.tar', map_location=device) print("Loading previous weights at epoch " + str(checkpoint['epoch'])) resnet18.load_state_dict(checkpoint['state_dict']) checkpoint = torch.load(f'../snapshots/resnet101_499.pth.tar', map_location=device) print("Loading previous weights at epoch " + str(checkpoint['epoch'])) resnet101.load_state_dict(checkpoint['state_dict']) checkpoint = torch.load(f'../snapshots/densenet169_499.pth.tar', map_location=device) print("Loading previous weights at epoch " + str(checkpoint['epoch'])) densenet169.load_state_dict(checkpoint['state_dict']) # + pycharm={} def train(net, bayesian, valid_loader): optimizer = optim.Adam([net.T], lr=1e-2, weight_decay=0) lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5, factor=0.1) train_losses = [] train_accuracies = [] epochs = 30 for e in range(epochs): net.eval() epoch_train_loss = [] epoch_train_acc = [] 
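# Only the temperature parameter net.T was handed to the Adam optimizer above, so this loop tunes the temperature scaling on the validation set while the network weights themselves are never updated.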
is_best = False for batch_idx, (data, target) in enumerate(valid_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = net(data, temp_scale=True, bayesian=bayesian) if bayesian: output = torch.log_softmax(output, dim=2) output = output.mean(dim=0) loss = F.nll_loss(output, target) else: loss = F.cross_entropy(output, target) loss.backward() epoch_train_loss.append(loss.item()) epoch_train_acc.append(accuracy(output, target)) optimizer.step() epoch_train_loss = np.mean(epoch_train_loss) epoch_train_acc = np.mean(epoch_train_acc) lr_scheduler.step(epoch_train_loss) # save epoch losses train_losses.append(epoch_train_loss) train_accuracies.append(epoch_train_acc) print("Epoch {:2d}, lr: {:.4f}, loss: {:.4f}, acc: {:.4f}, T: {:.4f}" .format(e, optimizer.param_groups[0]['lr'], epoch_train_loss, epoch_train_acc, net.T.item() )) # + #train(resnet18, bayesian=True, valid_loader=valid_loader_18) resnet18.T = torch.nn.Parameter(torch.tensor([2.3643]).to(device)) #train(resnet101, bayesian=True, valid_loader=valid_loader) resnet101.T = torch.nn.Parameter(torch.tensor([2.4711]).to(device)) #train(densenet169, bayesian=True, valid_loader=valid_loader) densenet169.T = torch.nn.Parameter(torch.tensor([2.7556]).to(device)) # + pycharm={} def test(net, temp_scale, bayesian, test_loader): logits = [] labels = [] with torch.no_grad(): for batch_idx, (data, target) in enumerate(tqdm(test_loader)): data, target = data.to(device), target.to(device) output = net(data, temp_scale=temp_scale, bayesian=bayesian) if bayesian: output = torch.softmax(output, dim=2).mean(dim=0) else: output = torch.softmax(output, dim=1) logits.append(output.detach()) labels.append(target.detach()) return torch.cat(logits, dim=0), torch.cat(labels, dim=0) # - softmaxes_uncalib1, labels1 = test(resnet18, temp_scale=False, bayesian=True, test_loader=test_loader_18) softmaxes_uncalib2, labels2 = test(resnet101, temp_scale=False, bayesian=True, test_loader=test_loader) softmaxes_uncalib3, labels3 = test(densenet169, temp_scale=False, bayesian=True, test_loader=test_loader) # + _, predictions_uncalib1 = torch.max(softmaxes_uncalib1, 1) uncert_thresholds1 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors_uncalib1 = [] for uncert_thresh in uncert_thresholds1: uncertainties = nentr(softmaxes_uncalib1, base=softmaxes_uncalib1.size(1)) labels_filtered = labels1[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions_uncalib1[torch.where(uncertainties <= uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors_uncalib1.append(errors.item()) ### _, predictions_uncalib2 = torch.max(softmaxes_uncalib2, 1) uncert_thresholds2 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors_uncalib2 = [] for uncert_thresh in uncert_thresholds2: uncertainties = nentr(softmaxes_uncalib2, base=softmaxes_uncalib2.size(1)) labels_filtered = labels2[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions_uncalib2[torch.where(uncertainties <= uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors_uncalib2.append(errors.item()) ### _, predictions_uncalib3 = torch.max(softmaxes_uncalib3, 1) uncert_thresholds3 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors_uncalib3 = [] for uncert_thresh in uncert_thresholds3: uncertainties = nentr(softmaxes_uncalib3, base=softmaxes_uncalib3.size(1)) labels_filtered = labels3[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions_uncalib3[torch.where(uncertainties <= 
uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors_uncalib3.append(errors.item()) # - softmaxes1, labels1 = test(resnet18, temp_scale=True, bayesian=True, test_loader=test_loader_18) softmaxes2, labels2 = test(resnet101, temp_scale=True, bayesian=True, test_loader=test_loader) softmaxes3, labels3 = test(densenet169, temp_scale=True, bayesian=True, test_loader=test_loader) # + _, predictions1 = torch.max(softmaxes1, 1) uncert_thresholds1 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors1 = [] for uncert_thresh in uncert_thresholds1: uncertainties = nentr(softmaxes1, base=softmaxes1.size(1)) labels_filtered = labels1[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions1[torch.where(uncertainties <= uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors1.append(errors.item()) ### _, predictions2 = torch.max(softmaxes2, 1) uncert_thresholds2 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors2 = [] for uncert_thresh in uncert_thresholds2: uncertainties = nentr(softmaxes2, base=softmaxes2.size(1)) labels_filtered = labels2[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions2[torch.where(uncertainties <= uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors2.append(errors.item()) ### _, predictions3 = torch.max(softmaxes3, 1) uncert_thresholds3 = np.linspace(0.05, 1.0, 20)[::-1] top1_errors3 = [] for uncert_thresh in uncert_thresholds3: uncertainties = nentr(softmaxes3, base=softmaxes3.size(1)) labels_filtered = labels3[torch.where(uncertainties <= uncert_thresh)] predictions_filtered = predictions3[torch.where(uncertainties <= uncert_thresh)] errors = predictions_filtered.ne(labels_filtered).float().mean() top1_errors3.append(errors.item()) # - fig1, ax1 = plt.subplots(1, 1, figsize=(2.5, 2.5)) ax1.plot(uncert_thresholds1, top1_errors_uncalib1, marker='.') ax1.plot(uncert_thresholds1, top1_errors1, marker='.') ax1.set_xlabel(r'uncertainty threshold') ax1.set_ylabel(r'top1-error') ax1.set_xlim(1.05, -0.05) ax1.set_ylim(-0.01, 0.21) ax1.set_xticks((np.arange(0.0, 1.1, step=0.2))) ax1.set_yticks((np.arange(0.0, 0.21, step=0.04))) ax1.set_title(r'ResNet-18/CIFAR-10') ax1.set_aspect(5) fig1.tight_layout() fig1.show() fig1.savefig('uncert_thresh_resnet18.pdf', bbox_inches='tight', pad_inches=0) fig2, ax2 = plt.subplots(1, 1, figsize=(2.5, 2.5)) ax2.plot(uncert_thresholds2, top1_errors_uncalib2, marker='.') ax2.plot(uncert_thresholds2, top1_errors2, marker='.') ax2.set_xlabel(r'uncertainty threshold') ax2.set_ylabel(r'top1-error') ax2.set_xlim(1.04, -0.04) ax2.set_ylim(-0.02, 0.52) ax2.set_xticks((np.arange(0.0, 1.1, step=0.2))) ax2.set_yticks((np.arange(0.0, 0.51, step=0.1))) ax2.set_title(r'ResNet-101/CIFAR-100') ax2.set_aspect(2) fig2.tight_layout() fig2.show() fig2.savefig('uncert_thresh_resnet101.pdf', bbox_inches='tight', pad_inches=0) fig3, ax3 = plt.subplots(1, 1, figsize=(2.5, 2.5)) ax3.plot(uncert_thresholds3, top1_errors_uncalib3, marker='.') ax3.plot(uncert_thresholds3, top1_errors3, marker='.') ax3.set_xlabel(r'uncertainty threshold') ax3.set_ylabel(r'top-1 error') ax3.set_xlim(1.04, -0.04) ax3.set_ylim(-0.02, 0.52) ax3.set_yticks((np.arange(0.0, 0.5, step=0.1))) ax3.set_xticks((np.arange(0.0, 1.1, step=0.2))) ax3.set_yticks((np.arange(0.0, 0.51, step=0.1))) ax3.set_title(r'DenseNet-169/CIFAR-100') ax3.set_aspect(2) fig3.tight_layout() fig3.show() fig3.savefig('uncert_thresh_densenet169.pdf', bbox_inches='tight', 
pad_inches=0) # + fig, ax = plt.subplots(1, 3, figsize=(3*2.25, 2.25)) ax[0].plot(uncert_thresholds1, top1_errors_uncalib1, '.-', color='C1') ax[0].plot(uncert_thresholds1, top1_errors1, '.-') ax[0].set_xlabel(r'uncertainty threshold') ax[0].set_ylabel(r'top-1 error') ax[0].set_xlim(1.05, -0.05) ax[0].set_ylim(-0.01, 0.21) ax[0].set_xticks((np.arange(0.0, 1.1, step=0.2))) ax[0].set_yticks((np.arange(0.0, 0.21, step=0.04))) ax[0].set_title(r'ResNet-18/CIFAR-10') ax[0].set_aspect(5) ax[1].plot(uncert_thresholds2, top1_errors_uncalib2, '.-', color='C1') ax[1].plot(uncert_thresholds2, top1_errors2, '.-') ax[1].set_xlabel(r'uncertainty threshold') ax[1].set_ylabel(r'top-1 error') ax[1].set_xlim(1.04, -0.04) ax[1].set_ylim(-0.02, 0.52) ax[1].set_xticks((np.arange(0.0, 1.1, step=0.2))) ax[1].set_yticks((np.arange(0.0, 0.51, step=0.1))) ax[1].set_title(r'ResNet-101/CIFAR-100') ax[1].set_aspect(2) ax[2].plot(uncert_thresholds3, top1_errors_uncalib3, '.-', color='C1') ax[2].plot(uncert_thresholds3, top1_errors3, '.-') ax[2].set_xlabel(r'uncertainty threshold') ax[2].set_ylabel(r'top-1 error') ax[2].set_xlim(1.04, -0.04) ax[2].set_ylim(-0.02, 0.52) ax[2].set_yticks((np.arange(0.0, 0.5, step=0.1))) ax[2].set_xticks((np.arange(0.0, 1.1, step=0.2))) ax[2].set_yticks((np.arange(0.0, 0.51, step=0.1))) ax[2].set_title(r'DenseNet-169/CIFAR-100') ax[2].set_aspect(2) fig.tight_layout() fig.savefig('uncert_thresh.pdf', bbox_inches='tight', pad_inches=0) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Below is an explanation of the whole source code. You may directly skip to the last code cell if you wish to execute all of the code at once. from bs4 import BeautifulSoup import requests # ### You need to restart jupyter kernel whenever you want to take user input again. Thats just how jupyter notebooks work input = input("Enter your value: ") response = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q=Audrie & Daisy &s=tt') soup = BeautifulSoup(response.content, 'lxml') table = soup.find('table',class_='findList') movieid = table.tr.a['href'] movieid response = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q=' +input+ ' &s=tt') # input helps in finding data of a movie dynamically entered by user # ### You can use the below way if you dont want to take user input i.e if you ony enter a movie and search. In that case replace 'Stranger things' with whatever series/movie you want #response = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q= Stranger Things ! &s=tt') # We use lxml parser try: soup = BeautifulSoup(response.content, 'lxml') table = soup.find('table',class_='findList') movieid = table.tr.a['href'] movieid except Exception as e: print("No such title found. Dont execute any further code") # Fetching the movie id so that the browser can be directed to that webpage. Note out of many searches we are taking the 1st search result movieid # #### 1st soup object is used to form the extract movie id from imdb page. 
2nd soup object is used to extract data from from the newly formed imdb page based upon movieid movielink = "http://www.imdb.com" + movieid # Form the movie url which has to be used to extract data moviepage = requests.get(movielink) # get method fetches the source code from entire page soup2 = BeautifulSoup(moviepage.content, 'lxml') # we create another soup object to perform crawling on the actual moviepage # Title uncleaned = soup2.find('div', class_ = 'title_wrapper').h1.text #it contains some weird text and extra   at end. So replace it with empty string uncleaned =uncleaned.replace('\xa0','') title = uncleaned.strip() title # ## For some of the variables like imdbrating, votes,metascore etc there are chances that these are not present on actual imdb page. Thus I have used try except to handle such cases and print appropriate statements at the end. A counter is initialized to 1 and in except cases it goes to zero indicating absence of any occurence on imdb page #imdb rating imdbcounter=1 try: imdb_rating = float(soup2.select('.ratingValue span')[0].text) except: imdbcounter=0 if(imdbcounter !=0): print(imdb_rating) else: print("imdb rating is not given on imdb page") #metascore # Tv series dont have metascore. Thus we store the values in metascore variable only when they are present on imdb page. try: metascore = float(soup2.select('.metacriticScore span')[0].text) if float(soup2.select('.metacriticScore span')[0].text) else None except Exception as e: metascore=None metascore # #### If you see cast details on imdb page, an inspect tells us that they are stored in two classes-odd and even. Thus we perform extracting of casts having class="odd" and then those having class="even". The results are stored in separate lists and then concatenated together to form final list of casts # + # Extracting cast values namelist=[] oddnames=[] evennames =[] ttag = soup2.find_all('tr', class_= 'odd') for i in ttag: namelist.append( i.find('td', attrs={'class': None})) for i in namelist: if (i.a) is None: pass else: oddnames.append(i.a.string.strip()) ttag = soup2.find_all('tr', class_= 'even') namelist=[] for i in ttag: namelist.append( i.find('td', attrs={'class': None})) for i in namelist: if (i.a) is None: pass else: evennames.append(i.a.string.strip()) cast = oddnames+evennames cast # - # #### Director details in imdb page are given as Director/Directors/Creator/Creators. Some shows have used the term creators whereas some have used director. Thus if we encounter any such word-> We extract the values # # ### Director values can also be null-> handled in 1st if statement # + # Directors directors=[] director_container = soup2.find('div',class_ = 'credit_summary_item') if(director_container is None): pass elif((director_container.find('h4').string =='Director:') or (director_container.find('h4').string =='Creator:') or (director_container.find('h4').string =='Creators:') or (director_container.find('h4').string =='Directors:')) : directors_container = director_container.find_all('a') for i in directors_container: directors.append(i.string) else: pass directors # - # ### Now we find the country of production for the movie. For some movies, there may be more than one country. 
Thus we append all countries in a list #country country=[] titledetails = soup2.find('div',id='titleDetails') titleheading = titledetails.find_all('a') for i in range(0,len(titleheading)): if('country_of_origin' in titleheading[i]['href']): country.append(titleheading[i].string) country # ### Language tells the languages, the movie is available in. There may be more than one language for a movie( due to dubbing), thus we store the results in a list #language languages=[] titledetails2 = soup2.find('div',id='titleDetails') titleheading2 = titledetails2.find_all('a') for i in range(0,len(titleheading2)): if('primary_language' in titleheading2[i]['href']): languages.append(titleheading2[i].string) languages # ### Below is a way to find according to an attribute in tag-> consider that as dictionary # ### The number of votes in an imdb page is written in the form '250,000)-> This cant be treated as integer value until we replace , with "". Also some movie may not have votes mentioned on imdb page. Hence the try except block. #Total number of votes votecounter=1 try: votes = soup2.find('span', {'itemprop':'ratingCount'}).string votes = votes.replace(',',"") votes = int(votes) except Exception as e: votecounter=0 if(votecounter!=0): print(votes) else: print("Number of votes are not given on imdb page") # #### next_sibling used to get the element in the same level of the tree. You cant find parent or children of a node through this tag but you can find its siblings. next_element also works in a similar way except that it prints the tag which was parsed just after this tag. It may or may not return next_sibling value depending on the next parsed element # + # release date dateheading = soup2.select('.txt-block h4') for i in range(0,len(dateheading)): if(dateheading[i].string=='Release Date:'): print(dateheading[i].next_sibling.strip()) # - # #### A movie or a show can belong to multiple genres-> Hence the list # + # genres genres=[] wrapper = soup2.select_one('.title_wrapper .subtext') links = wrapper.find_all('a') for i in links: if 'genres' in i['href']: genres.append(i.string) genres # - # ### Two concepts of duration are used-> The code just below tells run time of a movie or avg run time of an episode in a tv show. Whereas the code below this cell, tells the run time of a movie or number of seasons in a tv show. So both of these cells have their separate usage #duration-> This considers avg duration shown on imdb page for series try: wrapper = soup2.select_one('.title_wrapper .subtext') duration = wrapper.find('time').string.strip() duration except Exception as e: pass # #### On IMDB page, a tv show always has TV as its starting word in the duration block . Thus we can use an if block wich checks that the duration block starts with TV or not. If it starts, we extract the number of seasons else its a movie and we extract its runtime. 
Thus this code has a separate function based off the type- Movie/TV Show # #### Also note that some may have value TV Special which are of type movie and not tv show # + # Type, Duration/Seasons durncounter=1 wrapper = soup2.select_one('.title_wrapper .subtext') links = wrapper.find_all('a') links for i in range(0,len(links)): if 'releaseinfo' in links[i]['href']: value = links[i].string.strip() try: if (value.startswith('TV ') and (not value.startswith('TV Special'))): type = 'TV Series' seasons = soup2.select('.seasons-and-year-nav a')[0].string duration = seasons+" seasons" else: type='Movie' try: duration = wrapper.find('time').string.strip() except Exception as e: durncounter=0 if(type == 'Movie') and durncounter!= 0: print("The duration is", duration) elif(type == 'TV Series'): print("The number of seasons are ", duration) else: print(" Duration is not given on imdb page") except Exception as e: print('Duration is not given on imdb page') # - # #### On IMDB page, a storyline or short summary of the plot is given for movie/tv shows. However it may also happen that no such plot is given for a movie. Thus if our extracted storyline is not an empty string, we print it. # # #### Also some of the storyline have some links placed inside them( "anchor tag"). So normal (.strings ) wont work, we use get_text # # Storyline # Here we have links inside story for some movies. So normal .strings wont work story = soup2.select_one('#titleStoryLine div span').get_text().strip() story if(story!=''): print("\nHere is a description of the storyline - \n", story) else: print("\nStory description is not given on imdb page") # ### certificate tells the certificate or censor rating given to a movie/tv show. # Certificate wrapper = soup2.select_one('.title_wrapper .subtext') certificate = wrapper.next_element.strip() if(certificate != ''): print("\nThe {} has been given {} certificate".format(type,certificate)) else: print("Certificate details are not given on the imdb page") # ### COMBINING ALL THE CODE -> You need to restart the kernel to enter the movie/tv series everytime you execute this block # + from bs4 import BeautifulSoup import requests input = input("Enter your value: ") response = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q=' +input+ ' &s=tt') try: soup = BeautifulSoup(response.content, 'lxml') table = soup.find('table',class_='findList') movieid = table.tr.a['href'] movieid except Exception as e: print("No such title found. 
Dont execute any further code") movielink = "http://www.imdb.com" + movieid moviepage = requests.get(movielink) soup2 = BeautifulSoup(moviepage.content, 'lxml') #title of movie/show uncleaned = soup2.find('div', class_ = 'title_wrapper').h1.text uncleaned =uncleaned.replace('\xa0','') title = uncleaned.strip() print("Title is - {}".format(title)) durncounter=1 wrapper = soup2.select_one('.title_wrapper .subtext') links = wrapper.find_all('a') links for i in range(0,len(links)): if 'releaseinfo' in links[i]['href']: value = links[i].string.strip() try: if (value.startswith('TV ') and (not value.startswith('TV Special'))): type = 'TV Series' seasons = soup2.select('.seasons-and-year-nav a')[0].string duration = seasons+" seasons" else: type='Movie' try: duration = wrapper.find('time').string.strip() except Exception as e: durncounter=0 if(type == 'Movie') and durncounter!= 0: print("The duration is", duration) elif(type == 'TV Series'): print("The number of seasons are ", duration) else: print(" Duration is not given on imdb page") except Exception as e: print('Duration is not given on imdb page') #imdb rating imdbcounter=1 try: imdb_rating = float(soup2.select('.ratingValue span')[0].text) except Exception as e: imdbcounter=0 if(imdbcounter !=0): print("\nThe imdb rating of the {} is = {}".format(type,imdb_rating)) else: print("\nimdb rating is not given on imdb page") #Total number of votes votecounter=1 try: votes = soup2.find('span', {'itemprop':'ratingCount'}).string votes = votes.replace(',',"") votes = int(votes) except Exception as e: votecounter=0 if(votecounter!=0): print("\nThe imdb rating is calculating on the basis of {} mumber of votes".format( votes)) else: print("\nNumber of votes are not given on imdb page") #metascore try: metascore = float(soup2.select('.metacriticScore span')[0].text) if float(soup2.select('.metacriticScore span')[0].text) else None print("\nMetascore - ",metascore) except Exception as e: metascore=None print("\nMetascore is not available in imdb site") # cast values namelist=[] oddnames=[] evennames =[] ttag = soup2.find_all('tr', class_= 'odd') for i in ttag: namelist.append( i.find('td', attrs={'class': None})) for i in namelist: if (i.a) is None: pass else: oddnames.append(i.a.string.strip()) ttag = soup2.find_all('tr', class_= 'even') namelist=[] for i in ttag: namelist.append( i.find('td', attrs={'class': None})) for i in namelist: if (i.a) is None: pass else: evennames.append(i.a.string.strip()) cast = oddnames+evennames print("\nCast is as follows",cast) # Extracting directors directors=[] director_container = soup2.find('div',class_ = 'credit_summary_item') if(director_container is None): pass elif((director_container.find('h4').string =='Director:') or (director_container.find('h4').string =='Creator:') or (director_container.find('h4').string =='Creators:') or (director_container.find('h4').string =='Directors:')) : directors_container = director_container.find_all('a') for i in directors_container: directors.append(i.string) else: pass # Country of production country=[] titledetails = soup2.find('div',id='titleDetails') titleheading = titledetails.find_all('a') for i in range(0,len(titleheading)): if('country_of_origin' in titleheading[i]['href']): country.append(titleheading[i].string) print("\nProduction Countries-", country) # Language languages=[] titledetails2 = soup2.find('div',id='titleDetails') titleheading2 = titledetails2.find_all('a') for i in range(0,len(titleheading2)): if('primary_language' in titleheading2[i]['href']): 
languages.append(titleheading2[i].string) print("\n{} is available in Languages- {}".format(type, languages)) # release date dateheading = soup2.select('.txt-block h4') for i in range(0,len(dateheading)): if(dateheading[i].string=='Release Date:'): releasedate = dateheading[i].next_sibling.strip() print("\n{} was released in- {}".format(type, releasedate)) # genres genres=[] wrapper = soup2.select_one('.title_wrapper .subtext') links = wrapper.find_all('a') for i in links: if 'genres' in i['href']: genres.append(i.string) print("\nFollowing are the genres of the {} - {}".format(type,genres)) # Certificate wrapper = soup2.select_one('.title_wrapper .subtext') certificate = wrapper.next_element.strip() if(certificate != ''): print("\nThe {} has been given {} certificate".format(type,certificate)) else: print("\nCertificate details are not given on the imdb page") # Storyline story = soup2.select_one('#titleStoryLine div span').get_text().strip() if(story!=''): print("\nHere is a description of the storyline - \n", story) else: print("\nStory description is not given on imdb page") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using autoreload to speed up IPython and Jupyter work # I try to do all of my interactive Python development with either Jupyter notebooks or an IPython session. One of the main reasons I like these environments is the [```%autoreload```](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html) magic. What's so special about ```%autoreload``` and why does it often make development faster and simpler? # # ## Why IPython and Jupyter? # Before going further, if you haven't yet used both IPython and Jupyter, check out the [ipython interactive tutorial](https://ipython.readthedocs.io/en/stable/interactive/tutorial.html) first. It does a good job of explaining why using IPython is superior to the default Python interpreter. It has a host of useful features, but in this article I will only be talking about one feature (magics) and specifically one of those magics (```%autoreload```). [Jupyter notebooks](https://jupyter-notebook.readthedocs.io/en/stable/), like IPython, support most of the same magics, so much of what is discussed in the tutorial will work in either an interactive IPython session or a Jupyter notebook session. One thing to note is that I'm talking about Python here, not other languages running in a Jupyter notebook. # # ## Magics # First, what is a magic? # # Magics are just special functions that you can call in your IPython or Jupyter session. They come in two forms: line and cell. A line magic is prefixed with one ```%```, a cell magic is prefixed with two, ```%%```. A line magic consumes one line, whereas a cell magic consumes the lines below the magic, allowing for more input. For this article, we'll look at just one of the line magics, the ```%autoreload``` magic. # # # ## Why autoreload? # The ```%autoreload``` magic changes the Python session so that modules are automatically reloaded in that session before entering the execution of code typed at the IPython prompt (or the Jupyter notebook cell). What this means is that modules that are loaded into your session can be modified (outside your session), and the changes will be detected and reloaded without you having to restart your session. # # This can be tremendously useful. 
Let me describe a typical scenario where I use this. Let's say you have a Jupyter notebook that you've created and are enhancing, and you require data from several sources. You get the data by executing functions in modules you import at the beginning of your session, and those modules are Python code that you control. This will be a very typical use case for many users. Futhermore, let's say in your notebook you load all the data into memory and this takes a full 5 minutes. You then start to work with the data and soon realize that you need slightly different data from one of the functions in one of the modules you control, so you need to add another parameter to query data differently. How do you # # 1. Make this change # 1. Test this change # 1. Continue your work # # In most cases you will open the underlying code in your editor or IDE, modify it, test it in another session (or with unit tests), then optionally install changes locally. But what about the notebook that already has some of the data already loaded? One way to continue your work is to restart your Jupyter kernel to pick up the changes you just made, reload all data into memory (taking 5 minutes at least), and then continue your work. # # But there's a better way, using ```autoreload```. In your Jupyter session, you first load the ```autoreload``` extension, using the ```%load_ext``` magic. # %load_ext autoreload # Now, the ```%autoreload``` magic is available in your session. It can take a single argument that specifies how ```autoreload```ing of modules will behave. The extension also provides another magic, ```%aimport```, which allows for fine-grained control of which modules are affected by the autoreload. If no arguments are given to ```%autoreload```, then it will reload all modules immediately (except those excluded by ```%aimport``` as seen below). You can run it once and then use your updated code. # # The optional argument for ```autoreload``` has three valid values: # * 0 - disable automatic reloading # * 1 - reload all the modules imported by ```%aimport``` every time before executing Python code that has been typed # * 2 - reload all modules (except those excluded by ```%aimport```) every time before executing Python code that has been typed # # To regulate the modules affected by ```autoreload```, use the ```%aimport``` magic. It works as follows: # * no arguments - lists the modules that will be imported or not imported # * with one argument - the module provided will be imported with ```%autoreload 1``` # * with comma separated arguments - all modules in list will be imported with ```%autoreload 1``` # * with a ```-``` before argument - that module will *not* be autoreloaded # # For me, the most common way I use ```%autoreload``` is to just include everything during my initial development work when I'm likely to be changing Python modules and notebook code (i.e. to run ```%autoreload 2```), and to not use it at all otherwise. But having the control can be useful, especially if you are loading a lot of modules. # # ## Example # For a concrete example that you can use to follow along, make two Python files, ```auto.py``` and ```auto2.py```, and save them alongside a Jupyter notebook with the imports below. 
Each of the Python files should have a simple function in them, as follows: # + # in auto.py def my_api(model, year): # dummy result return { 'model': model, 'year': year, } # in auto2.py def my_api2(model, year): # dummy result return { 'model': model, 'year': year, } # - # Now, let's import both modules and inspect the API methods using the IPython/Jupyter help by appending a ```?``` to the function. You should see that imported module matches your code in the Python file. import auto import auto2 # + # auto.my_api? # - # Now, in a separate editor, add a third argument (```color```) to the ```auto.my_api``` function. Do we see it? Refresh the help cell to see. # # No, not yet. Let's turn on autoreload. # %autoreload 2 # Now, when I inspect ```auto.my_api```, I see the new argument. It worked! # # Now I can modify settings so that only the ```auto2``` module is reloaded, not ```auto```. But first, let's see what modules are being reloaded. # %aimport # Let's turn off ```auto```. # %aimport -auto # %aimport # Now, if I modify the code in ```auto```, I shouldn't see the changes in this session. Using ```%aimport``` you can restrict which code is being reloaded. # # # ## Caveats # It's important to note that module reloading is not perfect. You should not leave this on for production code, it will slow things down. Also, if you are live editing your code and leave it in a broken state, the most recent successfully loaded code will be the code running in your session, so it can make things confusing for you. This is probably not the way you want to modify large amounts of code, but when making incremental changes, it can work well. # # To observe what broken code will look like, open the module that is being autoreloaded (```auto2.py```) and add a syntax error (for example, maybe put in mismatched parens somewhere), then execute the function in a notebook cell. You should see ```autoreload``` report a traceback of the syntax error in the cell. You'll only see this error once, if you re-execute the cell it won't show you the same error, but will use the version of the code last loaded. # # Also, note that there are a few things that don't work all the time, like removing functions from a module, changing a @property in a class to an ordinary method, or reloading C extensions. In those cases, you'll need to restart your session. You can see more details in the [docs](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html). # # ## Summary # If you've never used ```%autoreload``` before, give it a try next time you have an IPython or Jupyter session with a lot of data in it and want to make a small change to a local module. Hopefully it will save you some time. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SageMath 8.1 # language: '' # name: sagemath # --- # + # %matplotlib inline import numpy as np from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt def draw_plane_with_3points(points): fig = plt.figure() ax = fig.add_subplot(111, projection='3d') p_array = np.array(points) p_array_T = p_array.T x = p_array_T[0] y = p_array_T[1] z = p_array_T[2] X, Y = np.meshgrid(x, y) _, Z = np.meshgrid(z, z) # Z = z.dot(np.ones([3,3])) # use z.T ax.plot_surface(X, Y, Z) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') ax.plot(x, y, z, '*') plt.show() points = [ [1, 1, 1], [2, 1, 1], [2, 2, 2], ] draw_plane_with_3points(points) # - # ![fano.svg](attachment:fano.svg) # %display latex G = Graph([(1,2)]);G.show() G.adjacency_matrix() import networkx as nx import matplotlib.pyplot as plt G=nx.DiGraph(directed=True) G.add_node(1,pos=(1,1)) G.add_node(2,pos=(2,2)) G.add_edge(1, 2, weight=0.5) G.add_edge(2, 1, weight=1) pos=nx.get_node_attributes(G,'pos') nx.draw_networkx(G,pos,arrows=True) labels = nx.get_edge_attributes(G,'weight') nx.draw_networkx_edge_labels(G,pos,edge_labels=labels) import networkx as nx A = nx.adjacency_matrix(G) print(A.todense()) G = Graph([(1,2)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6),(4,7)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6),(4,7),(5,6)]);G.show() G.adjacency_matrix() G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6),(4,7),(5,6),(6,7)]);G.show() G.adjacency_matrix() from sage.graphs.graph_coloring import vertex_coloring G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6),(4,7),(5,6),(6,7)]) S=vertex_coloring(G) G.show(partition=S,figsize=3,graph_border=True) G.adjacency_matrix() g=Graph(7) edges = [(1,2), (1,3), (1,4), (2,3), (2,4), (2,5), (2, 6), (3,4), (3,5), (3,6), (3,7), (4,6), (4,7), (5,6), (6,7)] g.add_edge(1,2),g.add_edge(1,3),g.add_edge(1,4),g.add_edge(2,3), g.add_edge(2,4),g.add_edge(2,5),g.add_edge(2,6),g.add_edge(3,4), g.add_edge(3,5),g.add_edge(3,6),g.add_edge(3,7),g.add_edge(4,6), g.add_edge(4,7),g.add_edge(5,6),g.add_edge(6,7) g.relabel({1:"kl",2:"i",3:"l", 4:"j",5:"jl",6:"k",7:"il"}) g.show(figsize=4,graph_border=True) g.adjacency_matrix() from 
sage.graphs.graph_coloring import vertex_coloring G = Graph([(1,2),(1,3),(1,4),(2,3),(2,4), (2,5),(2,6),(3,4),(3,5),(3,6), (3,7),(4,6),(4,7),(5,6),(6,7)]) S=vertex_coloring(G) G.relabel({1:"kl",2:"i",3:"l", 4:"j",5:"jl",6:"k",7:"il"}) G.show(figsize=4,graph_border=True) G.adjacency_matrix() 32*64-2016 64*128-8128 # ![image.png](attachment:image.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from catboost import CatBoostRegressor from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import TimeSeriesSplit,train_test_split from sklearn.metrics import mean_squared_error from math import sqrt from sklearn.ensemble import RandomForestRegressor from xgboost import XGBRegressor from lightgbm import LGBMRegressor from sklearn.ensemble import BaggingRegressor train = pd.read_csv("train.csv") test = pd.read_csv("test.csv") train.head() train_df = train.copy() test_df = test.copy() train_df.info() train_df.date_time.dtype train_df['date_time'] = pd.to_datetime(train_df['date_time']) test_df.info() test_df['date_time'] = pd.to_datetime(test_df['date_time']) # Conclusions time split of data validation set to be made accordingly train_df.head() def LabelEncode(dataf,datap): le = LabelEncoder() le.fit(dataf) v = le.transform(datap) return v def findweekday(a): return a.weekday() sns.kdeplot(train_df.traffic_volume) # # Train preprocessing train_df['weather_type'] = LabelEncode(train_df['weather_type'].append([test_df['weather_type']]),train_df['weather_type']) train_df['is_holiday'] = LabelEncode(train_df['is_holiday'].append([test_df['is_holiday']]),train_df['is_holiday']) train_df['weather_description'] = LabelEncode(train_df['weather_description'].append([test_df['weather_description']]),train_df['weather_description']) train_df['month'] = train_df['date_time'].dt.month train_df['day'] = train_df['date_time'].apply(findweekday) train_df['hour'] = train_df['date_time'].dt.hour train_df['year'] = train_df['date_time'].dt.year train_df.head() test_df['weather_type'] = LabelEncode(test_df['weather_type'].append([train['weather_type']]),test_df['weather_type']) test_df['is_holiday'] = LabelEncode(test_df['is_holiday'].append([train['is_holiday']]),test_df['is_holiday']) test_df['weather_description'] = LabelEncode(train['weather_description'].append([test_df['weather_description']]),test_df['weather_description']) test_df['month'] = test_df['date_time'].dt.month test_df['day'] = test_df['date_time'].apply(findweekday) test_df['hour'] = test_df['date_time'].dt.hour test_df['year'] = test_df['date_time'].dt.year test_df.head() new_train = train_df.drop(['date_time'],axis=1) new_test = test_df.drop(['date_time'],axis=1) # + #Time based split # - tscv = TimeSeriesSplit(n_splits=2) y = new_train.traffic_volume x = new_train.drop(['traffic_volume'],axis=1) x.year.value_counts() new_test.year.value_counts() x_train_new = new_train.loc[new_train.year!=2017] x_test_new = new_train.loc[new_train.year==2017] x_train_new.year.value_counts() x_train,x_test,y_train,y_test = train_test_split(x,y,shuffle=False) # + #applying staking to the problem model1 = RandomForestRegressor() model2 = XGBRegressor() model3 = LGBMRegressor() model4 = CatBoostRegressor() model5 = BaggingRegressor() model1.fit(x_train,y_train) 
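# Level-0 models of the stack: each base regressor is fit on the same training split; their predictions on the held-out split are stacked column-wise below and used as features for the RandomForest meta-model.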
model2.fit(x_train,y_train) model3.fit(x_train,y_train) model4.fit(x_train,y_train) model5.fit(x_train,y_train) pred1 = model1.predict(x_test) pred2 = model2.predict(x_test) pred3 = model3.predict(x_test) pred4 = model4.predict(x_test) pred5 = model5.predict(x_test) predt1 = model1.predict(new_test) predt2 = model2.predict(new_test) predt3 = model3.predict(new_test) predt4 = model4.predict(new_test) predt5 = model5.predict(new_test) stacked_pred = np.column_stack((pred1,pred2,pred3,pred4,pred5)) stacked_test_pred = np.column_stack((predt1,predt2,predt3,predt4,predt5)) meta_model = RandomForestRegressor() meta_model.fit(stacked_pred,y_test) final_pred = meta_model.predict(stacked_test_pred) # - stacked_test_pred y_test.values final_pred bagged_pred = np.zeros(new_test.shape[0]) seed = 310 bags = 4 for n in range(0,4): rf = RandomForestRegressor(n_estimators=280,n_jobs=-1,random_state=seed+n,max_features=9,min_samples_leaf=5,min_samples_split=20) rf.fit(x,y) preds = rf.predict(new_test) bagged_pred+=preds bagged_pred/=bags score(y_test,bagged_pred) rf.get_params() rf.feature_importances_ rf.fit(x,y) y_pred2 = rf.predict(new_test) kk = pd.DataFrame({'date_time':test['date_time'],'traffic_volume':final_pred}) kk.to_csv("sub7_stack.csv",index=False) cb = CatBoostRegressor() x_train.head() cat_features = [0,11,12,13,14,15,16] cb.fit(x_train,y_train,cat_features=cat_features,eval_set=(x_test,y_test),plot=True) def score(y_actual,y_predicted): rms = sqrt(mean_squared_error(y_actual, y_predicted)) rms = rms/10000 return max(0,100-rms) y_pred = cb.predict(x_test) score(y_test,y_pred) rf = RandomForestRegressor() rf.fit(x_train,y_train) y_pred = rf.predict(x_test) score(y_test,y_pred) rf.fit(x,y) y_pred2 = rf.predict(new_test) kk = pd.DataFrame({'date_time':test['date_time'],'traffic_volume':y_pred2}) kk.to_csv("sub1.csv",index=False) xg = XGBRegressor() xg.fit(x_train,y_train) v = xg.predict(x_test) score(y_test,v) xg.fit(x,y) y_pred2 = xg.predict(new_test) kk = pd.DataFrame({'date_time':test['date_time'],'traffic_volume':y_pred2}) kk.to_csv("sub2.csv",index=False) cb = CatBoostRegressor(iterations=5000,learning_rate=0.1,depth=7) cb.fit(x_train,y_train,eval_set=(x_test,y_test),plot=True,cat_features=cat_features,early_stopping_rounds=300,use_best_model=True,verbose_eval=100) v = cb.predict(x_test) score(y_test,v) lb = LGBMRegressor() lb.fit(x_train,y_train) v = lb.predict(x_test) score(y_test,v) lb.fit(x,y) y_pred2 = lb.predict(new_test) kk = pd.DataFrame({'date_time':test['date_time'],'traffic_volume':y_pred2}) kk.to_csv("sub3.csv",index=False) cb = CatBoostRegressor(iterations=3000) cb.fit(x,y) y_pred2 = cb.predict(new_test) kk = pd.DataFrame({'date_time':test['date_time'],'traffic_volume':y_pred2}) kk.to_csv("sub4.csv",index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Aula 01 - Introdução à Ciência de Dados # + [markdown] slideshow={"slide_type": "slide"} # ## Indústria 4.0 / Sociedade 5.0 # # #
# # [Source](https://www.sphinx-it.eu/from-the-agenda-of-the-world-economic-forum-2019-society-5-0/) # + [markdown] slideshow={"slide_type": "slide"} # ## Data Science in the 21st Century # # - Data as raw material: people, companies, governments, science # - *Big Data*, *business intelligence*, *data analytics*,... # # #
# # [Fonte](https://knowledgecom.my/ir4.html) # + [markdown] slideshow={"slide_type": "slide"} # - *Ciência de dados*: nome razoável para denotar o aspecto científico dos dados # - Contexto acadêmico: intersecta áreas, processos e pensamento científico # - Modelo holístico não unificado # - Ciência de dados tem alta capilaridade e multidisciplinaridade # + [markdown] slideshow={"slide_type": "slide"} # ## Modelo em camadas # # #
# + [markdown] slideshow={"slide_type": "slide"} # ## Camada interna: interseção de áreas tradicionais # # - **Matemática/Estatística**: # - modelos matemáticos # - análise e inferência de dados # - aprendizagem de máquina # # - **Ciência da Computação/Engenharia de Software** # - hardware/software # - projeto, armazenamento e segurança de dados # # - **Conhecimento do Domínio/Expertise** # - ramo de aplicação do conhecimento # - *data reporting* e inteligência de negócios # - *marketing* e comunicação de dados # + [markdown] slideshow={"slide_type": "slide"} # ## Camada intermediária: processos da cadeia de dados # # - Governança, curadoria # - Armazenamento, reuso # - Preservação, manutenção # - Destruição, compartilhamento # + [markdown] slideshow={"slide_type": "slide"} # ## Camada externa: método científico # *Soluções dirigidas por dados* (*data-driven solutions*) # # 1. **Definição do problema**: a "grande pergunta" # # 2. **Aquisição de dados**: coleta de toda informação disponível sobre o problema # # 3. **Processamento de dados**, tratamento dos dados (limpeza, formatação e organização) # + [markdown] slideshow={"slide_type": "slide"} # 4. **Análise de dados**, mineração, agrupamento, clusterização, testes de hipótese e inferência # # 5. **Descoberta de dados**, correlações, comportamentos distintivos, tendências claras, geração de conhecimento # # 6. **Solução**, conversibilidade em produtos e ativos de valor agregado # + [markdown] slideshow={"slide_type": "slide"} # ### Exemplo: o caso da COVID-19 # # - Uma pergunta: # > _A taxa de contágio do vírus em pessoas vivendo próximas de um centro comercial localizado em uma zona rural é menor do do que em pessoas vivendo próximas de um centro comercial localizado em uma zona urbana?_ # - Como delimitar a zona urbana? # + [markdown] slideshow={"slide_type": "slide"} # - Centro comercial: # - Conjunto de lojas de pequeno porte? # - Feiras? # - Circulação de 100 pessoas/h? # # - Bancos de dados: IBGE? DATASUS? # + [markdown] slideshow={"slide_type": "slide"} # ### Projetos de CD para a COVID-19 # # Mundo: # - *Coronavirus Resource Center*, John Hopkins University [[CRC-JHU]](https://coronavirus.jhu.edu/map.html) # # Brasil: # - Observatório Covid-19 BR [[COVID19BR]](https://covid19br.github.io/index.html) # - Observatório Covid-19 Fiocruz [[FIOCRUZ]](https://portal.fiocruz.br/observatorio-covid-19) # - CoronaVIS-UFRGS [[CoronaVIS-UFRGS]](https://covid19.ufrgs.dev/dashboard/#/dashboard) # - CovidBR-UFCG [[CovidBR-UFCG]](http://covid.lsi.ufcg.edu.br) # - [[LEAPIG-UFPB]](http://www.de.ufpb.br/~leapig/projetos/covid_19.html#PB) # + [markdown] slideshow={"slide_type": "slide"} # ## Cientista de dados x analista de dados x engenheiro de dados # + [markdown] slideshow={"slide_type": "slide"} # ### Cientista de dados # # > _"**Cientista de dados** é um profissional que tem conhecimentos suficientes sobre necessidades de negócio, domínio do conhecimento, além de possuir habilidades analíticas, de software e de engenharia de sistemas para gerir, de ponta a ponta, os processos envolvidos no ciclo de vida dos dados."_ # # [[NIST 1500-1 (2015)]](https://bigdatawg.nist.gov/_uploadfiles/NIST.SP.1500-1.pdf) # + [markdown] slideshow={"slide_type": "slide"} # ### Ciência de dados # # > _"**Ciência de dados** é a extração do conhecimento útil diretamente a partir de dados através de um processo de descoberta ou de formulação e teste de hipóteses."_ # # #### Informação não é sinônimo de conhecimento! 
# # - Exemplo: de toda a informação circulante no seu Whatsapp por dia, que fração seria considerada útil e aproveitável? A resposta talvez seja um incrível "nada"... # # Portanto, ter bastante informação à disposição não significa, necessariamente, possuir conhecimento. # + [markdown] slideshow={"slide_type": "slide"} # ### Analista de dados # # - _Analytics_ pode ser traduzido literalmente como "análise" # - Segundo o documento NIST 1500-1, é definido como o "processo de sintetizar conhecimento a partir da informação". # # > _"**Analista de dados** é o profissional capaz de sintetizar conhecimento a partir da informação e convertê-lo em ativos exploráveis."_ # + [markdown] slideshow={"slide_type": "slide"} # ### Engenheiro de dados # # > _"**Engenheiro(a) de dados** é o(a) profissional que explora recursos independentes para construir sistemas escaláveis capazes de armazenar, manipular e analisar dados com eficiência e e desenvolver novas arquiteturas sempre que a natureza do banco de dados exigi-las."_ # + [markdown] slideshow={"slide_type": "slide"} # Embora essas três especializações possuam características distintivas, elas são tratadas como partes de um corpo maior, que é a Ciência de Dados # # - Ver [PROJETO EDISON](https://edison-project.eu), Universidade de Amsterdã, Holanda # - EDISON Data Science Framework [[EDSF]](https://edison-project.eu/sites/edison-project.eu/files/attached_files/node-5/edison2017poster02-dsp-profiles-v03.pdf) # + [markdown] slideshow={"slide_type": "slide"} # #### Quem faz o quê? # # Resumimos a seguir as principais tarefas atribuídas a cientistas, analistas e engenheiros(as) de dados com base em artigos de canais especializados: # - [[DataQuest]](https://www.dataquest.io/blog/data-analyst-data-scientist-data-engineer/) # - [[NCube]](https://ncube.com/blog/data-engineer-data-scientist-data-analyst-what-is-the-difference) # - [[Medium]](https://medium.com/@gdesantis7/decoding-the-data-scientist-51b353a01443) # - [[Data Science Academy]](http://datascienceacademy.com.br/blog/qual-a-diferenca-entre-cientista-de-dados-e-engenheiro-de-machine-learning/) # - [[Data Flair]](https://data-flair.training/blogs/data-scientist-vs-data-engineer-vs-data-analyst/). # + [markdown] slideshow={"slide_type": "slide"} # ##### Cientista de dados # - Realiza o pré-processamento, a transformação e a limpeza dos dados; # - Usa ferramentas de aprendizagem de máquina para descobrir padrões nos dados; # - Aperfeiçoa e otimiza algoritmos de aprendizagem de máquina; # - Formula questões de pesquisa com base em requisitos do domínio do conhecimento; # # + [markdown] slideshow={"slide_type": "slide"} # ##### Analista de dados # - Analisa dados por meio de estatística descritiva; # - Usa linguagens de consulta a banco de dados para recuperar e manipular a informação; # - Confecciona relatórios usando visualização de dados; # - Participa do processo de entendimento de negócios; # + [markdown] slideshow={"slide_type": "slide"} # ##### Engenheiro(a) de dados # - Desenvolve, constroi e mantém arquiteturas de dados; # - Realiza testes de larga escala em plataformas de dados; # - Manipula dados brutos e não estruturados; # - Desenvolve _pipelines_ para modelagem, mineração e produção de dados # - Cuida do suporte a cientistas e analistas de dados; # + [markdown] slideshow={"slide_type": "slide"} # #### Que ferramentas são usadas? # # As ferramentas usadas por cada um desses profissionais são variadas e evoluem constantemente. Na lista a seguir, citamos algumas. 
# + [markdown] slideshow={"slide_type": "slide"} # ##### Cientista de dados # - R, Python, Hadoop, Ferramentas SQL (Oracle, PostgreSQL, MySQL etc.) # - Álgebra, Estatística, Aprendizagem de Máquina # - Ferramentas de visualização de dados # # + [markdown] slideshow={"slide_type": "slide"} # ##### Analista de dados # - R, Python, # - Excel, Pandas # - Ferramentas de visualização de dados (Tableau, Infogram, PowerBi etc.) # - Ferramentas para relatoria e comunicação # + [markdown] slideshow={"slide_type": "slide"} # ##### Engenheiro(a) de dados # - Ferramentas SQL e noSQL (Oracle NoSQL, MongoDB, Cassandra etc.) # - Soluções ETL - Extract/Transform/Load (AWS Glue, xPlenty, Stitch etc.) # - Python, Scala, Java etc. # - Spark, Hadoop etc. # + [markdown] slideshow={"slide_type": "slide"} # ### A Matemática por trás dos dados # + [markdown] slideshow={"slide_type": "slide"} # #### *Bits* # # Linguagem _binária_ (sequencias dos dígitos 0 e 1). Em termos de *bits*, a frase "Ciência de dados é legal!", por exemplo, é escrita como # # `1000011110100111101010110111011000111101001110000110000011001001100101 # 1000001100100110000111001001101111111001110000011101001100000110110011001 # 01110011111000011101100100001`. # + [markdown] slideshow={"slide_type": "slide"} # #### Dados 1D, 2D e 3D # # - Texto, som, imagem, áudio, vídeo... # # - *Arrays*, vetores de dados e listas # - *Dataframes* e planilhas # - Matrizes (pixels, canais de cor) # - Matrizes 3D (filmes, animações, FPS) # + [markdown] slideshow={"slide_type": "slide"} # ## Ferramentas computacionais do curso # # - Python 3.x (onde x é um número de versão) como linguagem de programação. # - Linguagem interpretada # - Alto nível # + [markdown] slideshow={"slide_type": "slide"} # ### _iPython_ e _Jupyter Notebook_ # # - [[iPython]](http://ipython.org): iniciado em 2001; interpretador Python para melhorar a interatividade com a linguagem. # - Integrado como um _kernel_ (núcleo) no projeto [[Jupyter]](http://jupyter.org), desenvolvido em 2014, permitindo textos, códigos e elementos gráficos sejam integrados em cadernos interativos. # - _Jupyter notebooks_ são interfaces onde podemos executar códigos em diferentes linguagens # - _Jupyter_ é uma aglutinação de _Julia_, _Python_ e _R_, linguagens usuais para ciência de dados # + [markdown] slideshow={"slide_type": "slide"} # ### *Anaconda* # # - O [[Anaconda]](https://www.anaconda.com) foi iniciado em 2012 para fornecer uma ferramenta completa para o trabalho com Python. # - Em 2020, a [[Individual Edition]](https://www.anaconda.com/products/individual) é a mais popular no mundo com mais de 20 milhões de usuários. # - Leia o tutorial de instalação! # + [markdown] slideshow={"slide_type": "slide"} # ### *Jupyter Lab* # # - Ferramenta que melhorou a interatividade do Jupyter # - Este [[artigo]](https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906) discute as características do Jupyter Lab # + [markdown] slideshow={"slide_type": "slide"} # ### *Binder* # # - O projeto [[Binder]](https://mybinder.org) funciona como um servidor online baseada na tecnologia *Jupyter Hub* para servir cadernos interativos online. 
# - Execução de códigos "na nuvem" sem a necessidade de instalações # - Sessões temporárias # + [markdown] slideshow={"slide_type": "slide"} # ### *Google Colab* # # - O [[Google Colab]](http://colab.research.google.com) é um "misto" do _Jupyter notebook_ e _Binder_, # - Permite que o usuário use a infra-estrutura de computação de alto desempenho (GPUs e TPUS) da Google # - Sincronização de arquivos com o Google Drive. # + [markdown] slideshow={"slide_type": "slide"} # ### Ecossistema de módulos # # - *numpy* (*NUMeric PYthon*): o *numpy* serve para o trabalho de computação numérica, operando fundamentalmente com vetores, matrizes e ágebra linear. # # - *pandas* (*Python for Data Analysis*): é a biblioteca para análise de dados de Python, que opera *dataframes* com eficiência. # + [markdown] slideshow={"slide_type": "slide"} # - *sympy* (*SYMbolic PYthon*): é um módulo para trabalhar com matemática simbólica e cumpre o papel de um verdadeiro sistema algébrico computacional. # # - *matplotlib*: voltado para plotagem e visualização de dados, foi um dos primeiros módulos Python para este fim. # + [markdown] slideshow={"slide_type": "slide"} # - *scipy* (*SCIentific PYthon*): o *scipy* pode ser visto, na verdade, como um módulo mais amplo que integra os módulos anteriores. Em particular, ele é utilizado para cálculos de integração numérica, interpolação, otimização e estatística. # # - *seaborn*: é um módulo para visualização de dados baseado no *matplotlib*, porém com capacidades visuais melhores. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="B_VA5ZH2Caqb" # ##Midterm Exam # + colab={"base_uri": "https://localhost:8080/"} id="L7jxkDJgCgTK" outputId="dcc8363f-48ed-4b8f-c8b3-8ccf58d425f6" Name = "" S_Number = "202105205" Age = "19" Birthday = "May 08, 2002" Address = "Blk94 Lot24 Ph4 Mabuhay City Paliparan 3 Dasmarinas City Cavite" crs = "Bachelor of Science in Computer Engineering" gwa = "95" print("Full name: "+ Name) print("Student Number: "+ S_Number) print("Age: "+ Age) print("Birthday: "+ Birthday) print("Address: "+ Address) print("Course: "+ crs) print("Last Sem GWA: "+ gwa) # + colab={"base_uri": "https://localhost:8080/"} id="txg2z12YDgmS" outputId="a45198ae-b0e6-412a-b8ff-4b459a763596" n = 4 answ = "Y" print(bool(2 None: """Definition des coordonées + de la direction""" self.x=x self.y=y self.direction=direction def avancer(self,direction:str): """Methode petmettant d'avancer dans une direction donnée""" if direction.lower()=='nord' : self.y+=1 if direction.lower()=='sud' : self.y-=1 if direction.lower()=='est' : self.x+=1 if direction.lower()=='ouest': self.x-=1 # - robot1=Robot(3,3) robot1.avancer('Ouest') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Named Entity Recognition using RNN # ## Import Required Libraries # + id="0YlBAoRG7P_D" import warnings warnings.filterwarnings("ignore") import numpy as np from matplotlib import pyplot as plt from nltk.corpus import brown from nltk.corpus import treebank import nltk import seaborn as sns from gensim.models import KeyedVectors from keras.preprocessing.sequence import pad_sequences from keras.utils.np_utils import 
to_categorical from keras.models import Sequential from keras.layers import Embedding from keras.layers import Dense, Input from keras.layers import TimeDistributed from keras.layers import LSTM, GRU, Bidirectional, SimpleRNN, RNN from keras.models import Model from keras.preprocessing.text import Tokenizer from sklearn.model_selection import train_test_split from sklearn.utils import shuffle # - # ## Dataset # + id="Szoops5X7hf3" file1 = open("eng.train", "r") X = [] # contains the words Y = [] # contains corresponding tags x = [] y = [] for i in file1: s = i.split(" ") if s[0]!='.' and len(s)==4: x.append(s[0]) y.append(s[3][:-1]) else: X.append(x) Y.append(y) x = [] y = [] #print(X) #print(Y) # + id="_9AmKZmN-M9n" num_words = len(set([word.lower() for sentence in X for word in sentence])) num_tags = len(set([word.lower() for sentence in Y for word in sentence])) # + colab={"base_uri": "https://localhost:8080/"} id="vYce51Oa-Ree" outputId="99a60321-cf6b-4a0f-d588-75ebac337d02" print("Total number of tagged sentences: {}".format(len(X))) print("Vocabulary size: {}".format(num_words)) print("Total number of tags: {}".format(num_tags)) # - # ## Encoding # + id="7NI7ngWF-gj5" # encode X word_tokenizer = Tokenizer() # instantiate tokeniser word_tokenizer.fit_on_texts(X) # fit tokeniser on data X_encoded = word_tokenizer.texts_to_sequences(X) # use the tokeniser to encode input sequence # + id="uw81s_xm-jeF" # encode Y tag_tokenizer = Tokenizer() tag_tokenizer.fit_on_texts(Y) Y_encoded = tag_tokenizer.texts_to_sequences(Y) # + colab={"base_uri": "https://localhost:8080/"} id="uYgF-C1S--41" outputId="6173dbd1-73cf-450d-ee20-99cae49a4152" # make sure that each sequence of input and output is same length different_length = [1 if len(input) != len(output) else 0 for input, output in zip(X_encoded, Y_encoded)] print("{} sentences have disparate input-output lengths.".format(sum(different_length))) # + colab={"base_uri": "https://localhost:8080/"} id="vlinzR4D_D9O" outputId="4273b4c2-af6b-40ed-923a-5717515636ef" # check length of longest sentence lengths = [len(seq) for seq in X_encoded] print("Length of longest sentence: {}".format(max(lengths))) # - # ## Padding # + id="HvvJjjAC_MR9" MAX_SEQ_LENGTH = 100 # sequences greater than 100 in length will be truncated X_padded = pad_sequences(X_encoded, maxlen=MAX_SEQ_LENGTH, padding="pre", truncating="post") Y_padded = pad_sequences(Y_encoded, maxlen=MAX_SEQ_LENGTH, padding="pre", truncating="post") # + id="bg3WVdKT_ThG" # assign padded sequences to X and Y X, Y = X_padded, Y_padded # - # ## Word2Vec Model # + id="2JZxBC8sCKlP" # word2vec path = './GoogleNews-vectors-negative300.bin' # load word2vec using the following function present in the gensim library word2vec = KeyedVectors.load_word2vec_format(path, binary=True) # + id="s-ohn5fPDWxS" #assign word vectors from word2vec model EMBEDDING_SIZE = 300 # each word in word2vec model is represented using a 300 dimensional vector VOCABULARY_SIZE = len(word_tokenizer.word_index) + 1 # create an empty embedding matix embedding_weights = np.zeros((VOCABULARY_SIZE, EMBEDDING_SIZE)) # create a word to index dictionary mapping word2id = word_tokenizer.word_index # # copy vectors from word2vec model to the words present in corpus for word, index in word2id.items(): try: embedding_weights[index, :] = word2vec[word] except KeyError: pass # + colab={"base_uri": "https://localhost:8080/"} id="2RY5cYD9DbEb" outputId="af507f97-4e1a-4c11-d740-50eb98a4eeab" # check embedding dimension print("Embeddings shape: 
{}".format(embedding_weights.shape)) # + colab={"base_uri": "https://localhost:8080/"} id="-BdwOh4_DhvR" outputId="366f49a8-bc05-4f5e-fc14-feee324a7947" # let's look at an embedding of a word embedding_weights[word_tokenizer.word_index['economy']] # + id="Bul6DtKIDqlT" # use Keras' to_categorical function to one-hot encode Y Y = to_categorical(Y) # + colab={"base_uri": "https://localhost:8080/"} id="rAe-D14iDzDr" outputId="a9f17ab4-602e-4291-f630-5344fc6d744a" # print Y of the first output sequqnce print(Y.shape) # - # ## Splitting Dataset # + id="7PeXpHpgD246" # split entire data into training and testing sets TEST_SIZE = 0.15 X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=TEST_SIZE, random_state=4) # + id="RMRHsYnIEIwm" # split training data into training and validation sets VALID_SIZE = 0.15 X_train, X_validation, Y_train, Y_validation = train_test_split(X_train, Y_train, test_size=VALID_SIZE, random_state=4) # + id="mjlyGel6EW-i" # total number of tags NUM_CLASSES = Y.shape[2] # - # ## RNN Model # + id="xprqa4WWE5kp" # create architecture rnn_model = Sequential() # create embedding layer - usually the first layer in text problems rnn_model.add(Embedding(input_dim = VOCABULARY_SIZE, # vocabulary size - number of unique words in data output_dim = EMBEDDING_SIZE, # length of vector with which each word is represented input_length = MAX_SEQ_LENGTH, # length of input sequence trainable = True # True - update the embeddings while training )) # add an RNN layer which contains 64 RNN cells rnn_model.add(SimpleRNN(64, return_sequences=True # True - return whole sequence; False - return single output of the end of the sequence )) # add time distributed (output at each sequence) layer rnn_model.add(TimeDistributed(Dense(NUM_CLASSES, activation='softmax'))) # - # ## Compiling the Model # + id="yvQUQpB5E_aN" rnn_model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['acc']) # - # ## Summary of the Model # + colab={"base_uri": "https://localhost:8080/"} id="dbCuUnkhFK8s" outputId="6203c25e-081a-4a83-e5a0-20c799945d3b" # check summary of the model rnn_model.summary() # - # ## Training the Model # + colab={"base_uri": "https://localhost:8080/"} id="kw3F89ajFRx7" outputId="625a0b20-df09-47c4-f392-9d2219262dc9" rnn_training = rnn_model.fit(X_train, Y_train, batch_size=128, epochs=10, validation_data=(X_validation, Y_validation)) # - # ## Plot # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="f33uSZkqFYYT" outputId="17ff2529-c0f0-42af-db35-28a99a247c66" # visualise training history plt.plot(rnn_training.history['acc']) plt.plot(rnn_training.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc="lower right") plt.show() # - # ## Saving the Model rnn_model.save('ner-rnn.h5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests from bs4 import BeautifulSoup import pandas as pd import re header={'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36'} # ## Function to scrape product name and asin no (as the file path is similar) # + def Search(search_query): url="https://www.amazon.in/s?k="+search_query #search link page=requests.get(url,headers=header) if page.status_code==200: return page #returns the page if there is 
no error else: return "Error" # - # ## Function to scrape link of the All customer reviews to acess all the reviews def url_link(query): url="https://www.amazon.in/dp/"+query #search link page=requests.get(url,headers=header) if page.status_code==200: return page #return the page if there is no error else: return "Error" # ## Function to scrape product reviews def content(query): url="https://www.amazon.in/"+query page=requests.get(url,headers=header) if page.status_code==200: return page else: return "Error" # ## This saves all the product name and the asin no in a list product_Name=[] Asin_no=[] for i in range(1,16): response=Search("smartphones&page="+str(i)) soup=BeautifulSoup(response.content) for i in soup.findAll('span',attrs={'class':'a-size-medium a-color-base a-text-normal'}): product_Name.append(i.text) for i in soup.find_all('div',attrs={'data-asin':True}): Asin_no.append(i['data-asin']) product_Name Asin_no # ## This takes all the links and ignores if the asin no is blank Link=[] for i in range(len(Asin_no)): if Asin_no[i]!='': link_response=url_link(Asin_no[i]) #iterates through Asin no to access the product soup=BeautifulSoup(link_response.content) for i in soup.findAll('a',attrs={'data-hook':'see-all-reviews-link-foot'}): Link.append(i['href']) #saves the footer link of all reviews to a list len(Link) # ## For each product we are scraping 15 pages of reviews and ratings reviews=[] ratings=[] for k in range(len(Link)): #helps to iterate for different products for i in range(1,15): #each product , it scrapes 15 pages of reviews cont_response=content(Link[k]+'&pageNumber='+str(i)) #iterates through multiple pages of the reviews soup=BeautifulSoup(cont_response.content) for j in soup.findAll("span",attrs={'data-hook':'review-body'}): reviews.append(j.text) #saves review content for star in soup.findAll("i",attrs={'data-hook':"review-star-rating"}): ratings.append(star.text) #saves ratings of the respective reviews # ## saving the file in dataframe new_reviews = [] for review in reviews: new_reviews.append(review.strip()) df=pd.DataFrame(columns=['Smartphone_Review','Rating']) df.to_csv('Smartphone_Review.csv',index=False) #saving it into a dataframe df=df.drop_duplicates() #removing duplicates df['Smartphone_Review']=new_reviews df['Rating']=ratings df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import cv2.aruco as aruco import numpy as np cv2.__version__ aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_1000) mb_dict = aruco.Dictionary_create(4096, 5) print(len(aruco_dict)) marker_id = 2 pixel_size = 700 img = aruco.drawMarker(aruco_dict, marker_id, pixel_size) pfn = "marker/marker_6x6_1000_{0:04d} .png".format(marker_id) print(pfn) cv2.imwrite(pfn, img) "various precisions: {0:6.2f} or {0:6.3f}".format(1.4148) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="FRTzh_KEUAlV" # # **Excel - Create a Simple Worksheet** # # *using openpyxl* # # + [markdown] id="EQI9oif2UGEu" # # # --- # # # + [markdown] id="QJiGW_hbUKEF" # Mount Google Drive. # # Workbooks and related files will be stored in the **excel** folder of **My Drive**. 
# + colab={"base_uri": "https://localhost:8080/"} id="Ns8F3wKYTwOn" outputId="592b199c-1bc2-4fb4-816b-ea3f51c11a5f" from google.colab import drive drive.mount('/content/gdrive') # + [markdown] id="aE-hniRjXUz0" # Not necessary, but check the default directory on Colab and list the files in content/gdrive/MyDrive/excel. # + colab={"base_uri": "https://localhost:8080/"} id="CSBuZwwFXeGA" outputId="9cf719b0-92d5-4c56-8a2d-bd382e63846a" print("The current working directory is: ") # ! pwd print("\nThe contents of /content/gdrive/excel/ -") # ! ls /content/gdrive/MyDrive/excel/ -a # + [markdown] id="iaUsAkRIbX4F" # Install **openpyxl** module for Python, if it is needed. # # Try the cell with the import of **openpyxl** before pip installing the module. # + id="QwXtXrXnbSTW" # You only need to try running this cell if the next cell throws an Exception # ! pip install openpyxl # + [markdown] id="HbpSHpHhUZFT" # Import **openpyxl** module. # + id="pCw4vqHnVxJ9" try: import openpyxl except: raise Exception("Unable to import pyxl module. Check pip import openpyxl.") # + [markdown] id="I-mUebMsciR2" # At this point you should have access to the **excel** directory in the **content** folder of your **Colab** environment and you should have the **openpyxl** module imported. # # You are ready to work on **Excel** sheets and workbooks in **Python**! # # + [markdown] id="iejLR513c6lB" # # # --- # # # + [markdown] id="v-LIAJovc70n" # Create an **openpyxl** Workbook object named **wb** # # Create an **openpyxl** worksheet object named **ws** as the active worksheet in the workbook **wb**. # # + colab={"base_uri": "https://localhost:8080/"} id="KJt-9Lb-d1JD" outputId="5ebcb790-3d67-4aa0-e161-1db9c616f9a8" wb = openpyxl.Workbook() ws = wb.active print("The type of wb is: ", type(wb)) print("The type of ws is: ", type(ws)) print("The active worksheet is: ws in the workbook: wb.") # + [markdown] id="zs9OUBPLeaPy" # We have an active worksheet in a workbook, in memory now -- it has not been saved to a filename on disk yet! # + [markdown] id="wmzPp_S-eiIp" # Create a matrix as a list of lists with some data which we will add to the ws worksheet. # # For this example, use the names of CUNY campuses and their street adddress, and borough in New York City. # # Store that data as a matrix of lists (rows) within a list (range). # # Each row list will have three elements, representing three columns in a sheet. # # The matrix_range list will begin with a list of the three column headings. 
# # + id="Hc--M0rVe0Xk" matrix_range = [ ["Campus","Address","Borough"], ["Baruch College","55 Lexington Avenue","Manhattan"], ["Borough of Mahattan Community College","199 Chambers Street","Manhattan"], ["Bronx Community College","2155 University Avenue","Bronx"], ["Brooklyn College","2900 Bedford Avenue","Brooklyn"], ["College of Staten Island","2800 Victory Boulevard","Staten Island"], ["Craig Newmark Graduate School of Journalism","219 W 40th Street","Manhattan"], ["CUNY Graduate Center","365 Fifth Avenue","Manhattan"], ["CUNY Graduate School of Public Health and Health Policy","55 W 125th Street","Manhattan"], ["CUNY School of Labor and Urban Studies","25 W 23rd Street","Manhattan"], ["CUNY School of Law","2 Court Square","Queens"], ["CUNY School of Professional Studies","119 W 31st Street","Manhattan"], ["Guttman Community College","50 W 40th Street","Manhattan"], ["Hostos Community College","500 Grand Concourse","Bronx"], ["Hunter College","695 Park Avenue","Manhattan"], ["John Jay College of Criminal Justice","524 W 59th Street","Manhattan"], ["Kingsborough Community College","2001 Oriental Boulevard","Brooklyn"], ["LaGuardia Community College","31-10 Thomson Avenue","Queens"], ["Lehman College","250 Bedford Park Blvd West","Bronx"], ["Macaulay Honors College","35 W 67th Street","Manhattan"], ["Medgar Evars College","1650 Bedford Avenue","Brooklyn"], ["New York City College of Technology","300 Jay Street","Brooklyn"], ["Queens College","65-30 Kissena Boulevard","Queens"], ["Queensborough Community College","220-05 56th Avenue","Queens"], ["The City College of New York","160 Convent Avenue","Manhattan"], ["York College","94-20 Guy oulevard","Queens"] ] # + [markdown] id="Mi2E5TFVkuM2" # Check the matrix. # + colab={"base_uri": "https://localhost:8080/"} id="o3kzxYl2kxiG" outputId="6b28bebc-436f-4495-b7ab-9454183978ed" print("The length of matrix_range is: ", len(matrix_range)) print("\nThe rows of matrix_range are: ") i = 0 for row in matrix_range: print("Row #", i, "\t",row) i +=1 # + [markdown] id="jbSynRazlmjw" # At this point we have an empty worksheet **ws** as the active worksheet in a workbook **wb** and a **matrix** of data in memory. # + [markdown] id="5O_sy7FVlzqC" # Use the **openpyxl** **append** method on the object **ws** to append the rows of the object **matrix**. # + id="BaS1FVyHmjWq" for row in matrix_range: ws.append(row) # + [markdown] id="0tHb4HJlnHWR" # Give the active worksheet **ws** a title (the name on the worksheet tab) of "CampusLocations". # # *Note: this step could have been done before appending the data from **matrix**. The worksheet name could be assigned at anytime when the worksheet is open.* # # + id="PCRkj-VknWjq" ws.title = "CampusLocations" # + [markdown] id="VBQ-p18Tn7-P" # Now save the workbook **wb** to the **excel** folder in **/content/gdrive/MyDrive/** with the filename "CUNY_campuses.xlsx". # # *Note: the workbook **wb** contains only one worksheet (**ws**).* # + id="Isyl-wNnoTnN" wb.save("/content/gdrive/MyDrive/excel/CUNY_campuses.xlsx") # + [markdown] id="ET1y_EPNpGKH" # Go to **Google Drive** and the **excel** folder to open and inspect the new workbook in **Google Sheets** or download it to view it in **Excel**. # + [markdown] id="X393PwatBI4-" # # # --- # # # + [markdown] id="ER-aExJODGUx" # ### **Formatting Excel Sheets** # # + [markdown] id="twCCKx4gDQRO" # We can adjust column widths and format our column headings. # # We can also create a named range for our table of data. 
# # + [markdown] id="0DN_8a1SDYSw" # # # --- # # # + [markdown] id="RsWivUrrDZMP" # ### **Creating Excel Tables** # + [markdown] id="u5eduoH9DgvC" # We can add a sheet with a dynamic table based on our worksheet with data. # + [markdown] id="yx8CXhb6Dp1i" # Let's start by importing the objects we need from **openpyxl**. # # * **Table** - for Excel tables, with filtering, formatting, etc. # * **TableStyleInfo** - to created a named style to be applied to a table # * **Image** - # # # + id="EobqShLGD9oY" from openpyxl.worksheet.table import Table, TableStyleInfo # + [markdown] id="TMGXe6-aEPl5" # If we want to access an Excel workbook which already exists, stored on an accessible drive, we should import the **load_workbook** method from **openpyxl**. # + id="HscMicbcFOLA" from openpyxl import load_workbook # + [markdown] id="iSkHH_1uFgKt" # Open an existing Excel workbook and create an active worksheet **ws**. # + id="NbCWlAjFFn2U" wb = load_workbook("/content/gdrive/MyDrive/excel/CUNY_campuses.xlsx") ws = wb.active # + [markdown] id="bbOHxjIJF5-0" # Create a table object named **tabl_1** with the display name '**Table1**'. # # * Table names must be unique within a workbook. # # * By default tables are created with a header from the first row. # # * Table filters for all the columns must always contain strings. # # * Table headers and table column headings must always contain strings. # # + id="B4L6zgSXGIHA" tabl_1 = Table(displayName="Table1", ref="A1:C26") # + [markdown] id="SdhABwxIHAK5" # Create a style in an object **style_basic** and give the style the name **TableStyleBasic** then apply that style the table **tabl_1**. # # Styles are managed using the the **TableStyleInfo object**. This allows you to stripe rows or columns and apply the different color schemes. # # + id="CkruNT2KHZX3" style_clean = TableStyleInfo(name="TableStyleMedium9", showFirstColumn=False, showLastColumn= False, showRowStripes=True, showColumnStripes=False) tabl_1.tableStyleInfo = style_clean # + [markdown] id="7qRNAKuWJUn_" # Now the table can be added to the worksheet using **ws.add_table()**. # # * Table must be added using ws.add_table() method to avoid duplicate names. # * Using this method ensures table name is unque through out defined names and all other table name. # + id="hscptOvLNIZh" ws.add_table(tabl_1) # + [markdown] id="BuIEsZGUNR1r" # Then the workbook can be saved as an **Excel** document. # # + id="McYeu9G2NWBr" wb.save("/content/gdrive/MyDrive/excel/CUNY_campuses_withTable.xlsx") # + id="rK3i4hT5R7cw" # to see the table tabl_1 created in the worksheet ws print(ws.tabl_1) # + [markdown] id="tmttJeLgKgZ4" # # # --- # # # + [markdown] id="YLJg4u5cQu-4" # **Functions and Attributes of Tables** # + id="-sYf1wcoQ1Oz" # CHECK THIS IN THE CURRENT VERSION OF OPENPYXL # from openpyxl import tables # + id="ZriLQcw4OXcb" # to query the contents of the ws object dir(ws) # + [markdown] id="CeqGOhEWJw5e" # **ws.tables** ???was??? is a dictionary-like object of all the tables in a particular worksheet. # # **ws._tables** is a list object of tables in a worksheet. # # # + [markdown] id="fbAu9wt0L7oS" # You can query all of the tables in a worksheet. # + id="97_X6kNJKFRk" # you can query all of the tables in a worksheet with the list ._tables print(ws._tables) # + [markdown] id="jx_f5nSEL37b" # ??? 
You can get a table by its name or range # + id="lQe-A2PpKiWW" # CHECK THIS IN THE CURRENT VERSION OF OPENPYXL # references a table by its name or a range ws.tables["Table1"] # or ws.tables["A1:C26"] # + [markdown] id="N0j8VRd8LUUX" # You can get a count of the number of tables in a worksheet. # + id="iL6WvKmqLXff" # returns integer count of the number of tables in the worksheet print(len(ws._tables)) # + [markdown] id="2K2aS3r0LD8e" # You can get table names and their ranges of all tables in a worksheet. # + id="opzzr58fK0eF" # returns a list of table names and their ranges. # print(ws.tables.items()) print(ws._tables) # + [markdown] id="mS0VGvukMFQj" # You can delete a table from a worksheet. # + id="svJ_lvOkMIMh" # CHECK THIS IN THE CURRENT VERSION OF OPENPYXL # del ws.tables["Table1"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbpresent={"id": "50a40f10-f4b6-4dd3-b7aa-c63ed3ca3244"} slideshow={"slide_type": "slide"} # # Introduction to Data Science – Lecture 6 – Exercises # # *COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/* # - # ## Exercise 1: Reading and Writing Data # # The file [grades.csv](grades.csv) is a file with student names and letter grades: # # ``` # Alice; A # Bob; B # Robert; A # Richard; C # ``` # # Read the file into an array. Add a GPA to the student's row (A=4,B=3,C=2,D=1). # # Hint: the function [strip()](https://docs.python.org/3/library/stdtypes.html#str.strip) removes trailing whitespace from a string. # # Write that file into a new file `grades_gpa.csv` # + gpas = {"A":4, "B":3, "C":2, "D":1} # - # ## Exercise 2: Data Frames # # * Calculate the mean certified sales for all albums. import pandas as pd hit_albums = pd.read_csv("hit_albums.csv") # * Create a new dataframe that only contains albums with more than 20 million certified sales. # # # * Create a new dataframe based on the hit_albums dataset that only contains the artists that have at least two albums in the list. # * Create a new dataframe that contains the aggregates sum of all certified sales for each year. 
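# *Editor's sketch, not part of the original exercise sheet:* one possible approach to the
# exercises above. For Exercise 2 the column names `"Artist"`, `"Year"` and
# `"Certified sales (millions)"` are assumptions about `hit_albums.csv` and may need to be
# adapted to the actual header.

# +
# Exercise 1: read grades.csv, append a GPA column, write grades_gpa.csv
rows = []
with open("grades.csv") as f:
    for line in f:
        if not line.strip():
            continue
        name, grade = [field.strip() for field in line.split(";")]
        rows.append([name, grade, gpas[grade]])

with open("grades_gpa.csv", "w") as f:
    for name, grade, gpa in rows:
        f.write(f"{name}; {grade}; {gpa}\n")

# Exercise 2 (assumed column names)
sales_col = "Certified sales (millions)"
mean_sales = hit_albums[sales_col].mean()                      # mean certified sales
over_20m = hit_albums[hit_albums[sales_col] > 20]              # albums with > 20 million sales
artist_counts = hit_albums["Artist"].value_counts()
multi_album = hit_albums[hit_albums["Artist"].isin(artist_counts[artist_counts >= 2].index)]
sales_per_year = hit_albums.groupby("Year")[sales_col].sum()   # aggregate sales per year
# -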
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Examples of datasets cleaning using the autoc package # ## Import Packages import pandas as pd import seaborn as sns from autoc.explorer import cserie,DataExploration import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') # %matplotlib inline plt.rcParams['figure.figsize'] = (12.0, 8) # ## Springleaf dataset analysis # you can find the kaggle competition [here](https://www.kaggle.com/c/springleaf-marketing-response) # + path_to_data = '/Users/ericfourrier/Documents/Data/SpringLeaf/train.csv' # - df = pd.read_csv(path_to_data) exploration = DataExploration(df) # ### Using different methods of the class # + # Missing value count # columns exploration.nacolcount() exploration.narowcount() # - # detecting full na columns na_cols = cserie(exploration._nacolcount.Napercentage == 1) df.loc[:,na_cols] # detecting full na rows na_rows = cserie(exploration._narowcount.Napercentage == 1) na_rows # no full missing rows # ### Using Structure # Looking at full structure of the dataset df_infos = exploration.structure() na_cols2 = cserie(df_infos.na_columns) # looking at complete na columns na_cols2 constant_cols = cserie(df_infos.constant_columns) # looking at constant columns constant_cols # ### Small study on missing values # missing values per rows distribution print(float(sum(exploration._narowcount.Napercentage > 0.0003)))/df.shape[0] exploration._narowcount.Napercentage.plot(kind = 'hist',bins=200,xlim = (0,0.0003)) # missing values per cols distribution df_infos.perc_missing[df_infos.perc_missing !=0].hist(bins = 100) # many missing columns exploration.manymissing(a=0.7) # more than 70 % missing variables # missing values per cols distribution for higher missing percentage df_infos.perc_missing[df_infos.perc_missing > 0.01].hist(bins = 100) df_infos # ### Explore character variables # ### Correlation matrix reduced_data=exploration.data.loc[:,exploration.data.columns[range(0,100)]] plt.figure(figsize=(30,15)) corr_matrix = reduced_data.corr() from autoc.utils.coorplot import plot_corrmatrix plot_corrmatrix(corr_matrix,size=0.4) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from pytorch3dunet.datasets.pdb import StandardPDBDataset, apbsInput from pathlib import Path import os import prody as pr from openbabel import openbabel import subprocess from potsim2 import PotGrid import numpy as np # + src_data_folder = '/home/lorenzo/deep_apbs/srcData/pdbbind_v2019_refined' name = '4us3' tmp_data_folder = 'runs/test_3/tmp' pdb2pqrPath = '/home/lorenzo/pymol/bin/pdb2pqr' def remove(fname): try: os.remove(fname) except OSError: pass def _processPdb(): src_pdb_file = f'{src_data_folder}/{name}/{name}_protein.pdb' tmp_ligand_pdb_file = Path(tmp_data_folder) / name / f'{name}_ligand.pdb' os.makedirs(tmp_ligand_pdb_file.parent, exist_ok=True) tmp_ligand_pdb_file = str(tmp_ligand_pdb_file) remove(tmp_ligand_pdb_file) src_mol_file = f"{src_data_folder}/{name}/{name}_ligand.mol2" obConversion = openbabel.OBConversion() obConversion.SetInAndOutFormats("mol2", "pdb") # remove water molecules (found some pdb files with water molecules) structure = pr.parsePDB(src_pdb_file) protein = 
structure.select('protein').toAtomGroup() # convert ligand to pdb ligand = openbabel.OBMol() obConversion.ReadFile(ligand, src_mol_file) obConversion.WriteFile(ligand, tmp_ligand_pdb_file) # select only chains that are close to the ligand (I love ProDy v2) ligand = pr.parsePDB(tmp_ligand_pdb_file) lresname = ligand.getResnames()[0] complx = ligand + protein # select ONLY atoms that belong to the protein complx = complx.select(f'same chain as exwithin 7 of resname {lresname}') complx = complx.select(f'protein and not resname {lresname}') return complx, ligand def _runApbs(out_dir): owd = os.getcwd() os.chdir(out_dir) apbs_in_fname = "apbs-in" input = apbsInput(name=name) with open(apbs_in_fname, "w") as f: f.write(input) # generates dx.gz grid file proc = subprocess.Popen( ["apbs", apbs_in_fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) cmd_out = proc.communicate() if proc.returncode != 0: raise Exception(cmd_out[1].decode()) os.chdir(owd) def _genGrids(structure, ligand): out_dir = f'{tmp_data_folder}/{name}' dst_pdb_file = f'{out_dir}/{name}_protein_trans.pdb' remove(dst_pdb_file) pqr_output = f"{out_dir}/{name}.pqr" remove(pqr_output) grid_fname = f"{out_dir}/{name}_grid.dx.gz" remove(grid_fname) pr.writePDB(dst_pdb_file, structure) # pdb2pqr fails to read pdbs with the one line header generated by ProDy... with open(dst_pdb_file, 'r') as fin: data = fin.read().splitlines(True) with open(dst_pdb_file, 'w') as fout: fout.writelines(data[1:]) proc = subprocess.Popen( [ pdb2pqrPath, "--with-ph=7.4", "--ff=PARSE", "--chain", dst_pdb_file, pqr_output ], stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) cmd_out = proc.communicate() if proc.returncode != 0: raise Exception(cmd_out[1].decode()) print(f'Running apbs on {name}') _runApbs(out_dir) # read grid grid = PotGrid(dst_pdb_file, grid_fname) # ligand mask is a boolean NumPy array, can be converted to int: ligand_mask.astype(int) ligand_mask = grid.get_ligand_mask(ligand) return grid, ligand_mask # - structure, ligand = _processPdb() grid, ligand_mask = _genGrids(structure, ligand) out_dir = f'{tmp_data_folder}/{name}' dst_pdb_file = f'{out_dir}/{name}_protein_trans.pdb' structure2 = pr.parsePDB(dst_pdb_file) structure2 structure2 # + def labelGrid(structure, grid): retgrid = np.zeros(shape=grid.grid.shape) for i,coord in enumerate(structure.getCoords()): x,y,z = coord binx = int((x - min(grid.edges[0])) / grid.delta[0]) biny = int((y - min(grid.edges[1])) / grid.delta[1]) binz = int((z - min(grid.edges[2])) / grid.delta[2]) retgrid[binx,biny,binz] = 1 return retgrid retgrid = labelGrid(structure2,grid) # - retgrid.sum() m = np.array([[[i+j for i in range (3)] for j in range(4) ] for k in range(2)] ).transpose() m.shape x = np.array([1,2,3]) x2 = x[:,np.newaxis,np.newaxis] x2.shape y = m-x2 y.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # import sql library import pandas as pd import sqlite3 # %load_ext sql # %sql sqlite:// # # SQl table 1 # + language="sql" # create table Employee(Emp_id varchar(10) PRIMARY KEY, First name varchar(50) not null, Last name varchar(50), Address varchar(50) not null, title char(25) not null, Salary number(10) not null, Hire_date DATE) # + language="sql" # SELECT * FROM employee; # # + language="sql" # insert into Employee values ('emp03', 'rani', 'singh', '43 law gate', 'tester',77000,'2020-06-18'); # insert 
into Employee values ('emp04', 'kittu', 'singh', '43 law gate', 'desiner',57000,'2020-03-04'); # insert into Employee values ('emp05', 'Aman', 'singh', '43 law gate', 'developer',67000,'2020-08-07'); # insert into Employee values ('emp06', 'riya', 'singh', '43 law gate', 'tester',74000,'2020-03-04'); # insert into Employee values ('emp07', 'Ayush', 'singh', '43 law gate', 'tester',47000,'2020-09-13'); # insert into Employee values ('emp08', 'Nupur', 'singh', '43 law gate', 'developer',37000,'2020-0-19'); # + language="sql" # SELECT * FROM employee; # # - # # sql table 2 # + language="sql" # create table Employee2(Emp_id varchar(10) PRIMARY KEY, city_cover varchar(50) not null, Contact_number number(10) not null, Age int, # CHECK (Age>=18) ); # + language="sql" # # insert into Employee2 values ('emp03', 'kanpur', 8051557283,24); # insert into Employee2 values ('emp04', 'jalandher', 8051557283,22); # insert into Employee2 values ('emp05', 'chandighar', 8051557283,29); # insert into Employee2 values ('emp06', 'patna', 8051557283,26); # insert into Employee2 values ('emp07', 'Delhi', 8051557283,30); # insert into Employee2 values ('emp08', 'kanpur', 8051557283,31); # # + language="sql" # SELECT * FROM employee2; # - # # inner join # + language="sql" # select Employee.first,Employee.last,Employee2.city_cover from Employee INNER JOIN Employee2 ON Employee.Emp_id = Employee2.Emp_id; # - # # view # + language="sql" # CREATE VIEW DetailsView AS # SELECT first, last, title # FROM Employee; # WHERE Emp_id = emp05; # + language="sql" # SELECT * FROM DetailsView; # - # # like clause # + language="sql" # select * from Employee where title like '__v%'; # - # # Order by clause # + language="sql" # select * from Employee order by salary; # + language="sql" # select * from Employee order by salary desc; # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt #random generating seed value np.random.seed(42) """ This function performs mini-batch gradient descent on a given dataset. Parameters X : array of predictor features y : array of actual outcome values batch_size : how many data points will be sampled for each iteration learn_rate : learning rate num_iter : number of batches used Returns regression_coef : array of slopes and intercepts generated by gradient descent procedure """ def miniBatchGD(X, y, batch_size = 20, learn_rate = 0.005, num_iter = 25): n_points = X.shape[0] W = np.zeros(X.shape[1]) # coefficients b = 0 # intercept # run iterations regression_coef = [np.hstack((W,b))] for _ in range(num_iter): batch = np.random.choice(range(n_points), batch_size) X_batch = X[batch,:] y_batch = y[batch] W, b = SquaredErrorStep(X_batch, y_batch, W, b, learn_rate) regression_coef.append(np.hstack((W,b))) return regression_coef #quared trick at every point in our data all at the same time, and repeating this process many times. 
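# (editor's note, not in the original) With squared error E = (1/2) * sum_i (y_i - (w0 + X_i @ w1))**2,
# the batch update implemented in SquaredErrorStep below is
#     w1 <- w1 + learn_rate * (error @ X)     with error = y - y_pred
#     w0 <- w0 + learn_rate * error.sum()
# i.e. one gradient-descent step on E, since dE/dw1 = -(error @ X) and dE/dw0 = -error.sum().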
# gradient decent step for squared error # Y = w0 + w1x # w0 : y intercept # w1 : slop # y : array of actual outcome values # X : array of predictor features def SquaredErrorStep(X, y, w1, w0, learn_rate = 0.001): # compute errors y_pred = w0 + np.matmul(X, w1) error = y - y_pred # https://github.com/shohan4556/machine-learning-course-notes/blob/master/Jupyter-Notebook/Regression/Linear%20Regression.ipynb # compute steps w1_new = w1 + learn_rate * np.matmul(error, X) w0_new = w0 + learn_rate * error.sum() return w1_new, w0_new def main(): #load dataset data = np.loadtxt('data.csv', delimiter=',') X = data[:,:-1] y = data[:,-1] #print(X.shape) #print(y.shape) regression_coef = miniBatchGD(X, y) plt.figure() X_min = X.min() X_max = X.max() counter = len(regression_coef) #unpack slop (w0) and intercept (w0) for W, b in regression_coef: counter -= 1 color = [1 - 0.92 ** counter for _ in range(3)] plt.plot([X_min, X_max],[X_min * W + b, X_max * W + b], color = color) plt.scatter(X, y, zorder = 3) plt.show() main() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Setup and install pytorch from source in debug mode. # # More infos found at: # - [Contributing](https://github.com/pytorch/pytorch/blob/v1.4.0/CONTRIBUTING.md) # - [Install from source](https://github.com/pytorch/pytorch/tree/v1.4.0#from-source) # # ## Prepare env # # usually simply follow steps on install from source. # eg: pytorch 1.4.0 with anaconda: # # ```shell # git clone https://github.com/pytorch/pytorch.git # # cd pytorch # git checkout v1.4.0 # git submodule update --init --recursive # ``` # # # ## Build with debug flags # # Follows infos on contributing # # ```shell # export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"} # export DEBUG=1 # export USE_CUDA=0 # python setup.py develop # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # © 2020 Nokia # # Licensed under the BSD 3 Clause license # # SPDX-License-Identifier: BSD-3-Clause # # Prepare Conala snippet collection and evaluation data # + from pathlib import Path import json from collections import defaultdict from codesearch.data import load_jsonl, save_jsonl corpus_url = "http://www.phontron.com/download/conala-corpus-v1.1.zip" conala_dir = Path("conala-corpus") conala_train_fn = conala_dir/"conala-test.json" conala_test_fn = conala_dir/"conala-train.json" conala_mined_fn = conala_dir/"conala-mined.jsonl" conala_snippets_fn = "conala-curated-snippets.jsonl" conala_retrieval_test_fn = "conala-test-curated-0.5.jsonl" if not conala_train_fn.exists(): # !wget $corpus_url # !unzip conala-corpus-v1.1.zip # - conala_mined = load_jsonl(conala_mined_fn) # The mined dataset seems to noisy to incorporate in the snippet collection: # + tags=[] # !sed -n '10000,10009p;10010q' $conala_mined_fn # + with open(conala_train_fn) as f: conala_train = json.load(f) with open(conala_test_fn) as f: conala_test = json.load(f) conala_all = conala_train + conala_test conala_all[:2], len(conala_all), len(conala_train), len(conala_test) # + tags=[] for s in conala_all: if s["rewritten_intent"] == "Convert the first row of numpy matrix `a` to a list": print(s) # + question_ids = {r["question_id"] for r in conala_all} intents = 
set(r["intent"] for r in conala_all) len(question_ids), len(conala_all), len(intents) # - id2snippet = defaultdict(list) for r in conala_all: id2snippet[r["question_id"]].append(r) for r in conala_all: if not r["intent"]: print(r) if r["intent"].lower() == (r["rewritten_intent"] or "").lower(): print(r) # + import random random.seed(42) snippets = [] eval_records = [] for question_id in id2snippet: snippets_ = [r for r in id2snippet[question_id] if r["rewritten_intent"]] if not snippets_: continue for i, record in enumerate(snippets_): snippet_record = { "id": f'{record["question_id"]}-{i}', "code": record["snippet"], "description": record["rewritten_intent"], "language": "python", "attribution": f"https://stackoverflow.com/questions/{record['question_id']}" } snippets.append(snippet_record) # occasionally snippets from the same question have a slightly different intent # to avoid similar queries, we create only one query per question query = random.choice(snippets_)["intent"] if any(query.lower() == r["description"].lower() for r in snippets[-len(snippets_):] ): print(f"filtering query {query}") continue relevant_ids = [r["id"] for r in snippets[-len(snippets_):] ] eval_records.append({"query": query, "relevant_ids": relevant_ids}) snippets[:2], len(snippets), eval_records[:2], len(eval_records) # - id2snippet_ = {r["id"]: r for r in snippets} for i, eval_record in enumerate(eval_records): print(f"Query: {eval_record['query']}") print(f"Relevant descriptions: {[id2snippet_[id]['description'] for id in eval_record['relevant_ids']]}") if i == 10: break # + tags=[] from codesearch.text_preprocessing import compute_overlap compute_overlap("this is a test", "test test") # - overlaps = [] filtered_eval_records = [] for r in eval_records: query = r["query"] descriptions = [id2snippet_[id]['description'] for id in r['relevant_ids']] overlap = max(compute_overlap(query, d)[1] for d in descriptions) overlaps.append(overlap) if overlap < 0.5 : filtered_eval_records.append(r) filtered_eval_records[:2], len(filtered_eval_records) save_jsonl(conala_snippets_fn, snippets) save_jsonl(conala_retrieval_test_fn, filtered_eval_records) // --- // jupyter: // jupytext: // text_representation: // extension: .scala // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Scala // language: scala // name: scala // --- // ## Evidence Preparation Error Summary Report import $ivy.`org.plotly-scala::plotly-almond:0.7.2` import $file.^.sparkinit, sparkinit._ import $file.^.pathinit, pathinit._ import $file.^.cpinit, cpinit._ import ss.implicits._ import org.apache.spark.sql.DataFrame import org.apache.spark.sql.functions._ import java.nio.file.Paths import plotly._ import plotly.element._ import plotly.layout._ import plotly.Almond._ implicit class DFOPs(df: DataFrame) { def fn[T](fn: DataFrame => T): T = fn(df)} init(offline=false) // ### Load Summary Datasets val regex = ".*evidence_(.*)_validation_.*".r val steps = RESULTS_DIR.resolve("errors").toFile.listFiles.map(_.toString).map({ case regex(k) => k case _ => "" }).toSet def loadSummary(step: String) = ss.read .parquet(RESULTS_DIR.resolve(s"errors/evidence_${step}_validation_summary.parquet").toString) .withColumn("step", lit(step)) val df = steps.map(loadSummary).reduce(_.union(_)) df.withColumn("reason", coalesce($"reason", lit("none"))).show(1000, false) // ### Record Invalidation Cause Frequency by Source // // Evidence records are eliminated in the pipeline for a variety of reasons, and this section shows the 
the frequency with which those conditions are encountered per data source. Reason = "none" below indicates that the record passed all validation filters (i.e. only these records are kept, all others are lost). // Visualize the number of records with some reason they were invalid alongside those that were not (reason = "none") val traces = df.select("step").dropDuplicates().collect().map(_.getAs[String]("step")).toList.map(s => df .withColumn("reason", coalesce($"reason", lit("none"))) .withColumn("sourceID", coalesce($"sourceID", lit("none"))) .filter($"step" === s) .fn(ds => { s -> ds.select("reason").dropDuplicates().collect.map(_.getAs[String]("reason")).toSeq.map(r => { ds .filter($"reason" === r) .sort($"count".desc) .fn( dsr => { Bar( x=dsr.select("sourceID").collect.map(_.getAs[String]("sourceID")).toList, y=dsr.select("count").collect.map(_.getAs[Long]("count")).toList, name=r, showlegend=true ) }) }) }) ).toMap traces.foreach {case (k, data) => { data.plot( title=s"Validation phase: $k", yaxis=Axis(`type`=AxisType.Log), margin=Margin(t=40), barmode=BarMode.Group ) }} // Show only non-valid record counts for reference: df.filter($"reason".isNotNull).show(100, false) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from keras import backend as K from keras.models import load_model from keras.preprocessing import image from keras.optimizers import Adam from imageio import imread import numpy as np from matplotlib import pyplot as plt from models.keras_ssd300 import ssd_300 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast from data_generator.object_detection_2d_data_generator import DataGenerator from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels from data_generator.object_detection_2d_geometric_ops import Resize from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms import tensorflow as tf config = tf.ConfigProto() config.gpu_options.allow_growth = True # %matplotlib inline # - # Set the image size. img_height = 300 img_width = 300 # + # TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'path/to/trained/model.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, n_neg_min=0, alpha=1.0) K.clear_session() # Clear previous models from memory. 
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'DecodeDetections': DecodeDetections, 'compute_loss': ssd_loss.compute_loss}) # + import numpy as np import cv2 frames=np.zeros((1,300,300,3)) cap = cv2.VideoCapture(0) confidence_threshold = 0.3 colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist() classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'umbrella'] font = cv2.FONT_HERSHEY_SIMPLEX fontScale = 1 fontColor = (255,255,255) lineType = 2 while(True): # Capture frame-by-frame ret, frame = cap.read() x = 20 y = 40 w = 100 h = 75 frames[0] = image.load_img(img_path, target_size=(img_height, img_width)) y_pred = model.predict(frames) np.set_printoptions(precision=2, suppress=True, linewidth=90) y_pred_thresh = [y_pred[k][y_pred[k,:,1] > confidence_threshold] for k in range(y_pred.shape[0])] for box in y_pred_thresh[0]: xmin = box[2] ymin = box[3] xmax = box[4] ymax = box[5] color = colors[int(box[0])] cv2.putText(frame,classes[int(box[0])], (xmin,ymin), font, fontScale, fontColor, lineType) cv2.rectangle(frame, (xmin, xmin), (xmax, ymax), color, 2) # Our operations on the frame come here if cv2.waitKey(1) & 0xFF == ord('q'): break else: cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) # Display the resulting frame cv2.imshow('frame',frame) # When everything done, release the capture cap.release() cv2.destroyAllWindows() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch torch.manual_seed(123) import random random.seed(123) import torch.nn as nn import os import shutil import itertools import core.config as config from chofer_tda_datasets import Reininghaus2014ShrecSynthetic from chofer_tda_datasets.transforms import Hdf5GroupToDict from core.utils import * from torchph.nn.slayer import SLayerExponential, \ SLayerRational, \ LinearRationalStretchedBirthLifeTimeCoordinateTransform, \ prepare_batch, SLayerRationalHat from sklearn.model_selection import ShuffleSplit from collections import Counter, defaultdict from torch.utils.data import DataLoader, SubsetRandomSampler from collections import OrderedDict from torch.autograd import Variable from torch.utils.data import Subset from sklearn.model_selection import StratifiedShuffleSplit # %matplotlib notebook os.environ['CUDA_VISIBLE_DEVICES'] = str(1) class train_env: nu = 0.01 n_epochs = 300 lr_initial = 0.5 momentum = 0.9 lr_epoch_step = 20 batch_size = 20 train_size = 0.9 minimal_distance_from_diagonal = 0.00001 coordinate_transform = LinearRationalStretchedBirthLifeTimeCoordinateTransform(nu=train_env.nu) def filter_out_diagonal_points(x): def fn(dgm): i = ((dgm[:, 1] - dgm[:, 0]) > train_env.minimal_distance_from_diagonal).nonzero().squeeze() if len(i) == 0: return torch.Tensor([]) else: return dgm[i] return collection_cascade(x, lambda xx: isinstance(xx, torch.Tensor), fn) dataset = Reininghaus2014ShrecSynthetic(data_root_folder_path=config.paths.data_root_dir) dataset.data_transforms = [ Hdf5GroupToDict(), numpy_to_torch_cascade, filter_out_diagonal_points, lambda x: collection_cascade(x, lambda x: isinstance(x, torch.Tensor), lambda x: coordinate_transform(x)), ] dataset.target_transforms = [lambda x: int(x)] # + def 
concat_shrec_sample_target_iter(sample_target_iter): x, y = defaultdict(lambda: defaultdict(list)), [] for x_i, y_i in sample_target_iter: y.append(y_i) for k, v in x_i.items(): for kk, dgm in v.items(): x[k][kk].append(dgm) return x, y class ShrecCollate: def __init__(self, cuda=True): self.cuda = cuda def __call__(self, sample_target_iter): x, y = concat_shrec_sample_target_iter(sample_target_iter) y = torch.LongTensor(y) x = collection_cascade(x, lambda xx: isinstance(xx, list), lambda xx: prepare_batch(xx, 2)) if self.cuda: # Shifting the necessary parts of the prepared batch to the cuda x = collection_cascade(x, lambda xx: isinstance(xx, tuple), lambda xx: (xx[0].cuda(), xx[1].cuda(), xx[2], xx[3])) y = y.cuda() return x, y collate_fn = ShrecCollate(cuda=True) # + def Slayer(n_elements): return SLayerRationalHat(n_elements, radius_init=50, exponent=1) def LinearCell(n_in, n_out): m = nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.ReLU(), ) m.out_features = m[0].out_features return m class ShrecSyntheticModel(nn.Module): def __init__(self): super().__init__() self.n_elements = 40 self.slayers = ModuleDict() for k in (str(i) for i in range(1, 21)): self.slayers[k] = ModuleDict() for kk in (str(i) for i in range(2)): s = Slayer(self.n_elements) self.slayers[k][kk] = nn.Sequential(s, nn.BatchNorm1d(self.n_elements)) cls_in_dim = self.n_elements * 20 * 2 self.cls = nn.Sequential( LinearCell(cls_in_dim, cls_in_dim), LinearCell(cls_in_dim, int(cls_in_dim/4)), nn.Linear(int(cls_in_dim/4), 15)) def forward(self, input): x = [] for k, v in input.items(): for kk, dgm in v.items(): x.append(self.slayers[k][kk](dgm)) x = torch.cat(x, dim=1) x = self.cls(x) return x def center_init(self, sample_target_iter): x, _ = concat_shrec_sample_target_iter(sample_target_iter) x = collection_cascade(x, stop_predicate=lambda e: isinstance(e, list), function_to_apply=lambda e: torch.cat(e, dim=0).numpy()) for k, v in x.items(): for kk, dgm in v.items(): kmeans = sklearn.cluster.KMeans(n_clusters=self.n_elements, init='k-means++', random_state=123) kmeans.fit(dgm) centers = kmeans.cluster_centers_ centers = torch.from_numpy(centers) self.slayers[k][kk][0].centers.data = centers # + def experiment(): stats_of_runs = [] splitter = StratifiedShuffleSplit(n_splits=10, train_size=train_env.train_size, test_size=1-train_env.train_size, random_state=123) train_test_splits = list(splitter.split(X=dataset.targets, y=dataset.targets)) train_test_splits = [(train_i.tolist(), test_i.tolist()) for train_i, test_i in train_test_splits] for run_i, (train_i, test_i) in enumerate(train_test_splits): print('') print('Run', run_i) model = ShrecSyntheticModel() model.center_init([dataset[i] for i in train_i]) model.cuda() stats = defaultdict(list) stats_of_runs.append(stats) opt = torch.optim.SGD(model.parameters(), lr=train_env.lr_initial, momentum=train_env.momentum) for i_epoch in range(1, train_env.n_epochs+1): model.train() dl_train = DataLoader(Subset(dataset, train_i), batch_size=train_env.batch_size, collate_fn=collate_fn, shuffle=True) dl_test = DataLoader(Subset(dataset, test_i), batch_size=train_env.batch_size, collate_fn=collate_fn) epoch_loss = 0 if i_epoch % train_env.lr_epoch_step == 0: adapt_lr(opt, lambda lr: lr*0.5) for i_batch, (x, y) in enumerate(dl_train, 1): y = torch.autograd.Variable(y) def closure(): opt.zero_grad() y_hat = model(x) loss = nn.functional.cross_entropy(y_hat, y) loss.backward() return loss loss = opt.step(closure) epoch_loss += float(loss) 
stats['loss_by_batch'].append(float(loss)) stats['centers'].append(model.slayers['1']['0'][0].centers.data.cpu().numpy()) print("Epoch {}/{}, Batch {}/{}".format(i_epoch, train_env.n_epochs, i_batch, len(dl_train)), end=" \r") stats['train_loss_by_epoch'].append(epoch_loss/len(dl_train)) model.eval() true_samples = 0 seen_samples = 0 epoch_test_loss = 0 for i_batch, (x, y) in enumerate(dl_test): y_hat = model(x) epoch_test_loss += float(nn.functional.cross_entropy(y_hat, torch.autograd.Variable(y.cuda())).data) y_hat = y_hat.max(dim=1)[1].data.long() true_samples += (y_hat == y).sum().item() seen_samples += y.size(0) stats['test_accuracy'].append(true_samples/float(seen_samples)) stats['test_loss_by_epoch'].append(epoch_test_loss/float(len(dl_test))) print('') print(true_samples/float(seen_samples)) print('') print('acc.', np.mean(stats['test_accuracy'][-10:])) return stats_of_runs stats_of_runs = experiment() # - last_10_runs = [np.mean(s['test_accuracy'][-10:]) for s in stats_of_runs] print(np.mean(last_10_runs)) print(np.std(last_10_runs)) last_10_runs # + stats = stats_of_runs[0] plt.figure() if 'centers' in stats: c_start = stats['centers'][0] c_end = stats['centers'][-1] plt.plot(c_start[:,0], c_start[:, 1], 'bo', label='center initialization') plt.plot(c_end[:,0], c_end[:, 1], 'ro', label='center learned') all_centers = numpy.stack(stats['centers'], axis=0) for i in range(all_centers.shape[1]): points = all_centers[:,i, :] plt.plot(points[:, 0], points[:, 1], '-k', alpha=0.25) plt.legend() plt.figure() plt.plot(stats['train_loss_by_epoch'], label='train_loss') plt.plot(stats['test_loss_by_epoch'], label='test_loss') plt.plot(stats['test_accuracy'], label='test_accuracy') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="1iI2dqcWwKrt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 127} outputId="4c84c9a2-35a4-4e91-e606-7730604fd190" # !pip install mass_ts # + id="fRi1zMwbwESv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 73} outputId="d1810353-6c71-4adf-ec15-45817f4e06be" import datetime import numpy as np import pandas as pd import altair as alt import matplotlib.pyplot as plt import mass_ts as mts # https://www.cs.unm.edu/~mueen/FastestSimilaritySearch.html # + id="5ka4nU0MwPOh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 215} outputId="da6b4d4a-e80f-451b-9e9d-fd2a606f310f" # Clone orc repo with rowing data # !git clone -l -s git://github.com/sebastianpsm/orc.git cloned-repo # %cd cloned-repo # !ls # %cd .. # + id="X67Lhqf6a6Jk" colab_type="code" colab={} data = pd.read_csv("cloned-repo/sample_data/20200517.CSV") fs = 50 # sample freq. 
[Hz] rec_data = "17.5.20 7:15" # + id="5QblzeFrbfXv" colab_type="code" colab={} # Add time column data["time_delta"] = data["time_delta[ms]"].astype('timedelta64[ms]') data["t"] = data["time_delta"].cumsum() + datetime.datetime.strptime(rec_data, "%d.%m.%y %H:%M") # + id="Sf3I-WcHhsor" colab_type="code" colab={} # Filter rolling_window = 25 data["accel_x_filter"] = data["accel_x[G]"].rolling(rolling_window).mean() data["accel_y_filter"] = data["accel_y[G]"].rolling(rolling_window).mean() data["accel_z_filter"] = data["accel_z[G]"].rolling(rolling_window).mean() data["gyro_x_filter"] = data["gyro_x[deg/s]"].rolling(rolling_window).mean() data["gyro_y_filter"] = data["gyro_x[deg/s]"].rolling(rolling_window).mean() data["gyro_z_filter"] = data["gyro_x[deg/s]"].rolling(rolling_window).mean() data = data.dropna() # + id="YIL1x8KNiRqi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 405} outputId="f5a2d668-4f68-491b-cdb5-e5c6240538d5" start=10000 stop=start+5000 plot_data = data[start:stop] plot_data = plot_data[["t", "accel_x_filter", "accel_y_filter", "accel_z_filter"]] accel_x_plot = alt.Chart(plot_data).mark_line(color="red").encode(alt.X("t:T", title="seconds"), alt.Y("accel_x_filter", title="acceleration [G]")) accel_y_plot = alt.Chart(plot_data).mark_line(color="green").encode(alt.X("t:T", title="seconds"), alt.Y("accel_y_filter", title="acceleration [G]")) accel_z_plot = alt.Chart(plot_data).mark_line(color="blue").encode(alt.X("t:T", title="seconds"), alt.Y("accel_z_filter", title="acceleration [G]")) (accel_x_plot + accel_y_plot + accel_z_plot).properties(width=800,title="Acceleration [G]").interactive(bind_y=False) # + id="eebLXAndjK-H" colab_type="code" colab={} start = 1095 stop = 1245 len = stop-start q = np.array(plot_data["accel_z_filter"][start:stop]) ts = np.array(data["accel_z_filter"]) distances = mts.mass2(ts,q) # + id="7xvcSx7TIzyb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="de5b33cb-df78-4eb3-a4ab-af9b3a7fa924" plt.plot(np.abs(distances)[10000:15000]) # + id="soGxNPxHj8Gb" colab_type="code" colab={} k = 200 exclusion_zone = 25 top_motifs = mts.top_k_motifs(distances, k, exclusion_zone) # + id="9TZhi83FkH_U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="da11aa41-1a82-4d62-ce13-1c537365c81b" for idx, top_motif in enumerate(top_motifs): motiv = data["accel_z_filter"][top_motif:top_motif+len] plt.plot(range(0,len), motiv) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Exploring and Processing Data - I # import packages import pandas as pd import numpy as np import os # ## Import Data raw_data_path = os.path.join(os.path.abspath('..'),'data','raw') train_file_path = os.path.join(raw_data_path,'train.csv') test_file_path = os.path.join(raw_data_path,'test.csv') # import csv in pandas dataframe, index_col indicates primary key of dataset train_df = pd.read_csv(train_file_path,index_col="PassengerId") test_df = pd.read_csv(test_file_path, index_col='PassengerId') # + # type(train_df) # train_df.info() # test_df.info() # Add new column 'Survived' to test df with default value of -888 test_df['Survived'] = -888 # - # dataframes need to be passed as a tuple to concat function # axis = 0 indicates merge horizontally , axis=1 indicates vertical df = pd.concat((train_df, test_df), axis=0, 
sort='False') # ## Data Munging df.info() # Missing values for Age, Cabin, Embarked, Fare # ### Fix Mssing values for Embarked Column df[df.Embarked.isnull()] # How may people embarked at different points df.Embarked.value_counts() # Which embarked point has highest survival rate pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].Embarked) df.groupby(['Pclass', 'Embarked']).Fare.median() # Replace missing embarked values with 'C' df.Embarked.fillna('C', inplace=True) df.info() # ### Fix Missing values for Fare column df[df.Fare.isnull()] median_fare = df[(df.Embarked == 'S') & (df.Pclass == 3)].Fare.median() print(median_fare) df.Fare.fillna(median_fare, inplace=True) df[df.Fare.isnull()] # ### Fix Missing values for Age column # %matplotlib inline df.Age.plot(kind="hist", bins=20); # #### Replace with median of title # function to get title from name def GetTitle(name): title_group = {'mr' : 'Mr', 'mrs' : 'Mrs', 'miss' : 'Miss', 'master' : 'Master', 'don' : 'Sir', 'rev' : 'Sir', 'dr' : 'Officer', 'mme' : 'Mrs', 'ms' : 'Mrs', 'major' : 'Officer', 'lady' : 'Lady', 'sir' : 'Sir', 'mlle' : 'Miss', 'col' : 'Officer', 'capt' : 'Officer', 'the countess' : 'Lady', 'jonkheer' : 'Sir', 'dona' : 'Lady' } first_name_with_title = name.split(',')[1] title = first_name_with_title.split('.')[0] title = title.strip().lower() return title_group[title] df['Title'] = df.Name.map(lambda x : GetTitle(x)) df.head() title_age_median = df.groupby('Title').Age.transform('median') df.Age.fillna(title_age_median , inplace=True) df.info() # ### Working with Outliers # Fixing with Binning df['Fare_Bin'] = pd.qcut(df.Fare, 4, labels=['very_low','low','high','very-high']) df.Fare_Bin.value_counts().plot(kind='bar', rot=0) # ## Feature Engineering # ### Feature : Age State (Adult or Child) df['AgeState'] = np.where(df['Age'] >= 18, 'Adult', 'Child') df['AgeState'].value_counts() # ### Feature : FamilySize df['FamilySize'] = df.Parch + df.SibSp + 1 # 1 for self pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].FamilySize) # ### Feature : IsMother # a lady aged more thana 18 who has Parch >0 and is married (not Miss) df['IsMother'] = np.where(((df.Sex == 'female') & (df.Parch > 0) & (df.Age > 18) & (df.Title != 'Miss')), 1, 0) # Crosstab with IsMother pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].IsMother) # ### Feature : Deck # set the value to NaN df.loc[df.Cabin == 'T', 'Cabin'] = np.NaN # extract first character of Cabin string to the deck def get_deck(cabin): return np.where(pd.notnull(cabin),str(cabin)[0].upper(),'Z') df['Deck'] = df['Cabin'].map(lambda x : get_deck(x)) # use crosstab to look into survived feature cabin wise pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].Deck) # ### Categorical Feature Encoding # sex df['IsMale'] = np.where(df.Sex == 'male', 1, 0) # One-hot Encoding df = pd.get_dummies(df,columns=['Deck', 'Pclass','Title', 'Fare_Bin', 'Embarked','AgeState']) df.info() # drop columns df.drop(['Cabin','Name','Ticket','Parch','SibSp','Sex'], axis=1, inplace=True) # reorder columns columns = [column for column in df.columns if column != 'Survived'] columns = ['Survived'] + columns df = df[columns] df.info() # ### Save Processed Data Set processed_data_path = os.path.join(os.path.pardir,'data','processed') write_train_path = os.path.join(processed_data_path, 'train.csv') write_test_path = os.path.join(processed_data_path, 'test.csv') # train data df.loc[df.Survived != -888].to_csv(write_train_path) # test data, Survived 
# column not in test data
columns = [column for column in df.columns if column != 'Survived']
df.loc[df.Survived == -888, columns].to_csv(write_test_path)

# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---

# # Digit Recognizer: Let's Get Started Without Deep Learning
#
# - Although this dataset is best suited to deep-learning beginners,
# - the K-Nearest Neighbours algorithm nevertheless gives an impressive prediction with about 96% accuracy.
# - Let's analyze this dataset from the point of view of beginners who haven't worked with deep learning yet.

# # Importing the usual libraries and filtering warnings
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib.pyplot import xticks
import warnings
warnings.filterwarnings('ignore')

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# +
#train = pd.read_csv('train.csv')
#test = pd.read_csv('test.csv')
train = pd.read_csv('/kaggle/input/digit-recognizer/train.csv')
test = pd.read_csv('/kaggle/input/digit-recognizer/test.csv')
print(train.shape, test.shape)
# In the beginning it's important to check the size of your train and test data,
# which later helps in deciding the sample size when testing your model on the train data.
# -

train.head(5)

test.head(5)

# Let's see whether there are any null values in the whole dataset.
# Usually we would check isnull().sum(), but here the number of columns is huge
print(np.unique([train.isnull().sum()]))
print(np.unique([test.isnull().sum()]))

y = train['label']
df_train = train.drop(columns=["label"], axis=1)
print(y.shape, df_train.shape)

# Looks like the values are fairly evenly distributed across the dataset
y.value_counts()

# # Visualization
# - It's quite evident that this is a multiclass classification problem and that the target classes (digits 0-9) are almost uniformly distributed in the dataset.
# - It's good that the target variable is not skewed or non-uniformly distributed, e.g. having just 100 samples of digit "3" in a dataset of 42000 rows.

sns.countplot(y)

# *Let's try to visualize how these digits are written*

df_train = df_train.values.reshape(-1, 28, 28, 1)
test = test.values.reshape(-1, 28, 28, 1)

# Let me explain what the reshape above means:
# Each record is a 28x28-pixel image whose 784 pixel values are stacked into a single row.
# *In order to view these 784 column values as an image, we convert each row into a 28x28x1 matrix; here 1 is the number of color channels (for colored pictures we would use 3).*
# Finally, the value -1 in a reshape call means you don't have to specify that dimension; the function calculates it for you.
# In our case -1 represents the number of rows of the dataset, i.e. the number of images. If you replace -1 with 42000 (the number of rows of the train dataset) it will also work fine.

# Let's display the first 50 images
plt.figure(figsize=(15, 8))
for i in range(50):
    plt.subplot(5, 10, i + 1)
    plt.imshow(df_train[i].reshape((28, 28)), cmap='binary')
    plt.axis("off")
plt.tight_layout()
plt.show()

# # Normalize the Dataset
# - Pixel values range from 0 to 255
# - In order to normalize our data and bring it all to the range of [0-1] we divide the values in dataset by 255 # y = train['label'] df_train = train.drop(columns=["label"],axis=1) print(y.shape,df_train.shape) # Normalize the dataset df_train = df_train / 255 test = test / 255 # Loading from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_auc_score from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_val_predict from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier # + seed = 2 test_size = 0.2 X_train, X_test, y_train, y_test = train_test_split(df_train,y, test_size = test_size , random_state = seed) print(X_train.shape,X_test.shape,y_train.shape,y_test.shape) # - #KNN # we use n_neighbours-10 since we know our target variables are in the range of [0-9] knn = KNeighborsClassifier(n_neighbors=10) knn.fit(X_train, y_train) y_pred = knn.predict(X_test) accuracy = accuracy_score(y_test,y_pred) print('Accuracy: %f' % accuracy) test = pd.read_csv('/kaggle/input/digit-recognizer/test.csv') test = test / 255 y_pred_test = knn.predict(test) # + submission = pd.DataFrame({"ImageId": list(range(1, len(y_pred_test)+1)),"Label": y_pred_test}) submission.to_csv("submission_digit1.csv", index=False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Open In Colab # # Scientific Python # # We've now had a few good examples of using NumPy for engineering computing and PyPlot for visualization. However, we haven't had much exposure to classic *numerical methods*. That's because this isn't a numerical methods class, it is a Python programming tutorial. However, there are some important aspect of programming which come up in using numerical methods. # # First and foremost is **don't reinvent the wheel**. When your focus is solving an engineering problem, you should not code your own numerical methods. Instead you should use methods which have been carefully implemented and tested already - letting you focus on your own work. Luckily the *Scientific Python* or [SciPy library](https://www.scipy.org/scipylib/index.html) has hundred of numerical methods for common mathematical and scientific problems such as: # # | Category | Sub module | Description | # |-------------------|-------------------|--------------------------------------------------------| # | Interpolation | scipy.interpolate | Numerical interpolation of 1D and multivariate data | # | Optimization | scipy.optimize | Function optimization, curve fitting, and root finding | # | Integration | scipy.integrate | Numerical integration quadratures and ODE integrators | # | Signal processing | scipy.signal | Signal processing methods | # | Special functions | scipy.special | Defines transcendental functions such as $J_n$ and $\Gamma$| # # # In this notebook, we will illustrate the use of SciPy with a few engineering applications to demonstrate a few more important programming issues. 
We won't attempt to go through all of the important numerical methods in SciPy - for that you can read the [SciPy book](http://scipy-lectures.org/intro/scipy.html). # --- # # ## Ordinary Differential Equations # # Ordinary Differential Equations (ODEs) are ubiquitous in engineering and dynamics, and numerical methods are excellent at producing high-quality approximate solutions to ODEs that can't be solved analytically. # # As a warm up, the function $y=e^{t}$ is an exact solution of the initial value problem (IVP) # # $$ \frac{dy}{dt} = y \quad\text{with}\quad y(0) = 1 $$ # # SciPy has a few functions to solve IVPs, but I like `solve_ivp` the best. Let's check it out. # + import numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp # ?solve_ivp # - # So the first argument is the ODE function itself `func=dy/dt`, then the span over which we want to integrate, and then the initial condition. Let's try it. fun = lambda t,y: y # lambda function syntax y0 = [1] t_span = [0,2] sol = solve_ivp(fun, t_span, y0) sol # So the function outputs a bunch of useful information about what happened. Also note the times is stored in a 1D array `sol.t` and the solution is stored in a 2D array (more on that later). Let's plot this up. t = np.linspace(0,2,21) plt.plot(t,np.exp(t),label='exact') # sol = solve_ivp(fun, t_span = [0,2] , y0 = y0, t_eval = t) # distributed points for plot plt.plot(sol.t,sol.y[0],'ko',label='solve_ivp') plt.xlabel('t') plt.ylabel('y',rotation=0) plt.legend(); # First off, the numerical method matches the exact solution extremely well. But this plot seems a little weird. The solver used a small time step at first (`t[1]-t[0]=0.1`) and then took bigger steps (`t[3]-t[2]=0.99`). This is because the solver uses an [adaptive 4th order Runge-Kutta method](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method) to integrate by default, which adjusts the time step to get the highest accuracy for the least number of function evaluations. # # That's great, but we want the results at a more regular interval for plotting, and the argument `t_eval` - do that by uncommenting the second line above. The result is evenly distributed and the accuracy is still excellent - it just took a few more evaluations. # # --- # # That's nice, but most engineering systems are more complex than first order ODEs. For example, even a forced spring-mass-damper systems is second order: # # $$ m \frac{d^2 x}{dt^2} + c \frac{dx}{dt} + k x = f(t) $$ # # But it is actually very simple to deal with this additional derivative, we just define the position and velocity as two separate variables, the *states* of the oscillator: # # $$ y = \left[x,\ \frac{dx}{dt}\right] $$ # # And therefore # # $$ \frac{dy}{dt} = \left[ \frac{dx}{dt},\ \frac{d^2x}{dt^2}\right] = \left[y[1],\ \frac{f(t)-c y[1] - k y[0]}{m} \right] $$ # # This trick can reduce any ODE of order `m` down to system of `m` states all governed by first order ODEs. `solve_ivp` assumes `y` is a 2D array of these states since it is the standard way to deal with dynamical systems. # # Let's try it on this example. # + # define forcing, mass-damping-stiffness, and ODE f = lambda t: np.sin(2*np.pi*t) m,c,k = 1,0.5,(2*np.pi)**2 linear = lambda t,y: [y[1],(f(t)-c*y[1]-k*y[0])/m] t = np.linspace(40,42) y = solve_ivp(linear,[0,t[-1]],[0,0], t_eval=t).y plt.plot(t,y[0],label='$x$') plt.plot(t,y[1],label='$\dot x$') plt.xlabel('t') plt.legend(); # - # This gives a sinusoid, as expected but is it correct? 
Instead of using the exact solution (available in this case but not generally), let's *sanity check* the results based on physical understanding. **You should always do this when using numerical methods!** # # - If we could ignore dynamics, the expected deflection would simply be $x=f/k$. Since the magnitude of $f=1$ and $k=(2\pi)^2$ this would mean we would have an amplitude of $x\sim (2\pi)^{-2} \approx 0.025$. Instead we see an amplitude $x=0.4$! Is this reasonable?? # - The natural frequency given the parameters above is $\omega_n = \sqrt(k/m) = 2\pi$. The force is *also* being applied at a frequency of $2\pi$. This could explain the high amplitude - our spring-mass system is in resonance! # # Since we have an idea to explain our results - it is your turn to test it out: # 1. Lower the forcing frequency x10. This should reduce the influence of dynamics and we should see amplitudes similar to our prediction. # 2. Reset the frequency and increase the mass x10. Predict what this should do physically before running the simulation. Do the results match your predictions? # # # Finally, one of the main advantages of the numerical approach to ODEs is that they extend trivially to nonlinear equations. For example, using a nonlinear damping $c\dot x \rightarrow d \dot x|\dot x|$ makes the dynamics difficult to solve analytically, but requires no change to our approach, only an updated ODE: # + # define nonlinear damped ODE d = 100 nonlinear = lambda t,y: [y[1],(f(t)-d*y[1]*abs(y[1])-k*y[0])/m] t = np.linspace(40,42) y = solve_ivp(nonlinear,[0,t[-1]],[0,0], t_eval=t).y plt.plot(t,y[0],label='$x$') plt.plot(t,y[1],label='$\dot x$') plt.xlabel('t') plt.legend(); # - # ## Root finding and implicit equations # # Another ubiquitous problem in engineering is *root finding*; determining the arguments which make a function zero. As before, there are a few SciPy routines for this, but `fsolve` is a good general purpose choice. Let's check it out. # + from scipy.optimize import fsolve # ?fsolve # - # So `fsolve` also takes a function as the first argument, and the second argument is the starting point `x0` of the search for the root. # # As before, let's start with a simple example, say $\text{func}=x\sin x$ which is zero at $x=n\pi$ for $n=0,1,2,\ldots$. # + func = lambda x: x*np.sin(x) for x0 in range(1,8,2): print('x0={}, root={:.2f}'.format(x0,fsolve(func,x0)[0])) # - # This example shows that a root finding method needs to be used with care when there is more than one root. Here we get different answers depending on `x0` and it's sometimes surprising; `x0=5` found the root at $5\pi$ instead of $2\pi$. Something to keep in mind. # # Root finding methods are especially useful for dealing with implicit equations. For example, the velocity of fluid through a pipe depends on the fluid friction, but this friction is itself a function of the flow velocity. The [semi-emperical equation](https://en.wikipedia.org/wiki/Darcy_friction_factor_formulae#Colebrook%E2%80%93White_equation) for the Darcy friction factor $f$ is # # $$ \frac 1 {\sqrt f} = -2\log_{10}\left(\frac \epsilon{3.7 D}+ \frac{2.51}{Re \sqrt f} \right)$$ # # where $\epsilon/D$ is the pipe wall roughness to diameter ratio, $Re=UD/\nu$ is the diameter-based Reynolds number, and the coefficients are determined from experimental tests. # # Directly solving this equation for $f$ is difficult, and engineers use charts like the [Moody Diagram](https://en.wikipedia.org/wiki/Moody_chart#/media/File:Moody_EN.svg) instead. 
But this is simple to solve with a root finding method; we just need to express this as function which is zero at the solution and this is always possible by simply subtracting the right-hand-side from the left! # # $$ \text{func} = \frac 1 {\sqrt f} + 2\log_{10}\left(\frac \epsilon{3.7 D}+ \frac{2.51}{Re \sqrt f} \right)$$ # # which is zero when $f$ satisfies our original equation. # + # @np.vectorize def darcy(Re,eps_D,f0=0.03): func = lambda f: 1/np.sqrt(f)+2*np.log10(eps_D/3.7+2.51/(Re*np.sqrt(f))) return fsolve(func,f0)[0] darcy(1e6,1e-3) # - # Notice we have defined one function *inside* another. This lets us define $Re$ and $\epsilon/D$ as *arguments* of `darcy`, while being *constants* in `func`. There are other ways to parameterize rooting finding, but I like this approach because the result is a function (like `darcy`) which behaves exactly like an explicit function (in this case, for $f$). # # This matches the results in the Moody Diagram, and in fact, we should be able to make our own version of the diagram to test it out fully: Re = np.logspace(3.5,8) for i,eps_D in enumerate(np.logspace(-3,-1.5,7)): f = darcy(Re,eps_D) plt.loglog(Re,f, label='{:.1g}'.format(eps_D), color=plt.cm.cool(i/7)) plt.xlabel('Re') plt.ylabel('f',rotation=0) plt.legend(title='$\epsilon/D$',loc='upper right'); # Uh oh - this didn't work. Remember how functions such as `np.sin` *broadcast* the function across an array of arguments by default. Well, `fsolve` doesn't broadcast by default, so we need to do it ourselves. # # Luckily, this is trivial using [decorators](https://docs.python.org/3/library/functools.html). Decorators are a neat python feature which lets you add capabilities to a function without coding them yourself. There are tons of useful examples (like adding a `@cache` to avoid repeating expensive calculations) but the one we need is `@np.vectorize`. Uncomment that line above the function definition and run that block again - you should see that the output is now an array. Now try running the second code cell and you should see our version of the Moody Diagram. # # Notice I've used `np.logspace` to get logarithmically spaced points, `plt.loglog` to make a plot with log axis in both x and y, and `plt.cm.cool` to use a [sequential color palette](https://medium.com/nightingale/how-to-choose-the-colors-for-your-data-visualizations-50b2557fa335) instead of the PyPlot default. Use the help features to look up these functions for details. # # Your turn: # 1. Write a function to solve the equation $r^{4}-2r^{2}\cos 2\theta = b^{4}-1$ for $r$. Test that your function gives $r=\sqrt{2}$ when $b=1$ and $\theta=0$. # 2. Reproduce a plot of the [Cassini ovals](https://en.wikipedia.org/wiki/Cassini_oval) using this function for $1\le b \le 2$. Why doesn't your function work for $b<1$? # # *Hint:* Define `theta=np.linspace(0,2*np.pi)`, use `@np.vectorize`, and use `plt.polar` or convert $r,\theta \rightarrow x,y$ using the method in [notebook 3](https://github.com/weymouth/NumericalPython/blob/main/03NumpyAndPlotting.ipynb). # ## Blasius boundary layer # # As a last example, I want to show how you can **combine** these two techniques to solves a truly hard engineering equation with just a couple lines of code. Dividing complex problems down to pieces that you can solve with simple methods and combining them back together to obtain the solution is the secret sauce of programming and well worth learning. 
# # The governing equations for viscous fluids are very difficult to deal with, both [mathematically](https://www.claymath.org/millennium-problems/navier%E2%80%93stokes-equation) and [numerically](https://en.wikipedia.org/wiki/Turbulence_modeling). But these equations can be simplified in the case of a laminar flow along a flat plate. In this case we expect the velocity $u=0$ on the plate because of friction, but then to rapidly increase up to an asymptotic value $u\rightarrow U$. # # ![Blasius1.png](attachment:Blasius1.png) # # This thin region of slowed down flow is called the boundary layer and we want to predict the shape of the *velocity profile* in this region. The [Blasius equation](https://en.wikipedia.org/wiki/Blasius_boundary_layer) governs this shape: # # $$ A'''+\frac{1}{2} A A'' = 0 $$ # # where $A'(z) = u/U$ is the scaled velocity function and $z$ is the scaled distance from the wall. The function $A$ has the boundary conditions # # $$ A(0) = A'(0) = 0 \quad\text{and}\quad A'(\infty) = 1 $$ # # This equation is still too complex to solve analytically, and it might look too hard numerically as well. But we just need to take it one step at a time. # # ### Step 1: # # We can reduce the Blasius equation to a first order ODE as before by defining # # $$ y = \left[A,\ A',\ A'' \right],\quad y' = \left[y[1],\ y[2],\ -\frac{1}{2} y[0]y[2] \right] $$ # # Notice `y[1]`=$u/U$ is our goal, the velocity profile. # # But to use `solve_ivp` we also need our initial conditions. We don't know $A''(0)=$`C0`, but *if we did* the initial condition would be `y0 = [0,0,C0]` and we could solve for the profile: # + def blasius(t,C0): return solve_ivp(lambda t,y: [y[1],y[2],-0.5*y[0]*y[2]], [0,t[-1]], [0,0,C0], t_eval = t).y[1] C0 = 1 # guess # C0 = fsolve(lambda C0: blasius([12],C0)[-1]-1,x0=1)[0] # solve! z = np.linspace(0,6,31) plt.plot(blasius(z,C0),z) plt.xlabel('u/U') plt.ylabel('z',rotation=0); # - # ### Step 2 # # We can determine `C0` using the addition boundary condition, $A'(\infty)=1$. It is hard to deal with infinity numerically, but we see in the plot above that the profile is pretty much constant for z>4 anyway, so we'll just apply this condition to the last point, ie `blasius(C0)[-1]=1`. This is an implicit equation for `C0`, and we can solve it using `fsolve` as we did above: we simply substract the right-hand-side and define `func = blasius(C0)[-1]-1` which is zero when `C0` satisfies the boundary condition. Uncomment the line in the code block above to check that it works. # # The value of `C0` is actually physically important as well - it's related to the friction coefficient, and we have that value # as well: print("Blasius C_F sqrt(Re) = {:.3f}".format(4*C0)) # So $C_F = 1.328/\sqrt{Re}$ for a laminar boundary layer. # # And just like that, we're done. We've numerically solved the Blasius equation in around two lines of code; determining one of the very few exact solutions for nonlinear flows in engineering and come up with a practical friction coefficient that we can use to determine the drag on immersed bodies. Not too shabby. 
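# For reference, here is the same shooting recipe collected into one self-contained cell; this is a minimal sketch, assuming the imports used earlier in this notebook. `solve_ivp` integrates the Blasius system for a guessed curvature $A''(0)=C_0$, and `fsolve` adjusts $C_0$ until the far-field condition $A'(\infty)=1$ (applied at $z=12$, as above) is met.

# +
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def blasius_profile(z, C0):
    """Velocity profile A'(z) = u/U for a given initial curvature C0 = A''(0)."""
    rhs = lambda t, y: [y[1], y[2], -0.5 * y[0] * y[2]]
    return solve_ivp(rhs, [0, z[-1]], [0, 0, float(C0)], t_eval=z).y[1]

# Shooting step: pick C0 so the profile reaches u/U = 1 far from the wall
C0 = fsolve(lambda C0: blasius_profile(np.array([12.0]), C0)[-1] - 1, x0=1)[0]
print("A''(0) = {:.4f},  C_F*sqrt(Re) = {:.3f}".format(C0, 4 * C0))
# -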
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Nba Class # + class Nba: def __init__( self ): self.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'} self.apiUrl = 'https://stats.nba.com/stats/' self.apiEvents = 'https://stats.nba.com/events/' self.apiStats = 'https://stats.nba.com/stats/' def urlDebug( self, endPointUrlConstruct, endPointParams ): paramsStr = "&".join( [ str(p[0]) + '=' + str(p[1]) for p in endPointParams ] ) return endPointUrlConstruct + '?' + paramsStr # - # # Usage in other Notebook # + # # %run ../shared/nb-nba.class.ipynb # + # Nba = Nba() # Nba.apiUrl # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Explore Real-Time Data on Iguazio via Integrated Notebooks # ## Analyze Real-Time Data Using Spark Streaming, SQL, and ML # Data apears to Spark as native Spark Data Frames from pyspark.sql import SparkSession spark = SparkSession.builder.appName("Iguazio Integration demo").getOrCreate() spark.read.format("io.iguaz.v3io.spark.sql.kv").load("v3io://bigdata/stocks").show() # ## Run Interactive SQL Queries on Real-Time Data # Support full ANSI SQL through native integration of Apache Presto over iguazio DB # %sql select * from bigdata.stocks where exchange='NASDAQ' # ## Read NoSQL Data as Real-Time Data Frame Stream kvdf = pd.concat(client.read(backend='kv', table='stocks', filter='exchange=="NASDAQ"'), sort=False) kvdf.head(8) # ## Read the Time-Series data (in iguazio TSDB) as Pandas Data Frame # Use DB side aggregations, joins and filtering on the real-time metrics data import pandas as pd import v3io_frames as v3f client = v3f.Client('http://v3io-framesd:8080') # + # Read Time-Series aggregates from the DB (returned as a data stream, use concat to assemble the frames) dfs = client.read(backend='tsdb', table='stock_metrics', step='60m', aggragators='avg,max,min',start="now-2d", end='now') merged = pd.concat(dfs, sort=False) # turn the results to a Multi-indexed and pivoted table pvt = merged.pivot_table('values',['Date','symbol','exchange'],['metric_name','Aggregate']) pvt.head() # - # ## Run interesting Analysis On Real-Time Data # e.g. compare stock price volatility between 2nd tier cloud providers import matplotlib.pyplot as plt # %matplotlib inline # + stock_symbols = ['GOOG','MSFT','ORCL','IBM'] fig, axarr = plt.subplots(1,4) pricesdf = merged.loc[merged['metric_name']=='price'] for sym, ax in zip(stock_symbols, axarr): pricesdf.loc[pricesdf['symbol'] == sym].pivot(columns='Aggregate', values='values').plot(ax=ax, title=sym, figsize=[20,5]) # - # ## Read the Twitter + Sentiments feed as Pandas Data Frame # Streaming data generated in by nuclio functions can be read in real-time or historically.
It can be distributed to multiple workers for scalability via sharding/partitioning. streamdfs = client.read(backend='stream', table='stocks_stream',seek='seq',shard='0', sequence=140) streamdf = pd.concat(streamdfs, sort=False) streamdf.head() # ## Save Any Data "To Go" as a CSV file (or other formats) streamdf.to_csv('mystream.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pysgrs [c for c in dir(pysgrs) if c.endswith("Alphabet") and not c.startswith("Generic")] # ## Simple Alphabet A = pysgrs.SimpleAlphabet() print(A) A.to_dataframe().T g = A.permutations(5) print([next(g) for i in range(10)]) g = A.product(5) print([next(g) for i in range(10)]) A.is_monotonic A.is_natural A.index_types A.encode("HELLO") A.decode([22, 14, 17, 11, 3]) # ## ASCII Alphabet AS = pysgrs.AsciiAlphabet() AS.to_dataframe().T AS.encode("Hello world!") AS.decode([70, 111, 111, 32, 66, 97, 114]) AS.is_monotonic AS.is_natural # ## Morse Alphabet M = pysgrs.MorseAlphabet() M.to_dataframe().T M.index_symbols M.index_min_size M.index_max_size M.is_index_size_constant M.index_types M.encode("SOS") M.decode(['-*-*', '*-**', '*', '*-', '*-*']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- IN_COLAB = 'google.colab' in str(get_ipython()) if IN_COLAB: # !pip install git+https://github.com/pete88b/nbdev_colab_helper.git from nbdev_colab_helper.core import * project_name = 'nextai' init_notebook(project_name) # + # default_exp vision_core # - # # vision_core # # > Utility methods used in vision training and inference. # # * These methods are used to manipulate tensors defining bounding boxes, categories, and anchor boxes. # * Input tensors are are of dimension (bs, k, 4) for bounding boxes or (bs, k, 21) for categories; where bs = batch size and k = number of rows representing a given image. # * Tensors can run on GPU or CPU, depending on the processing environment # # # + #hide #from nbdev import * #from nbdev.showdoc import * # - #hide # !pip install fastai --upgrade --quiet # %nbdev_export from fastai.imports import * from torch import tensor, Tensor import torch # %nbdev_export # Automatically sets for GPU or CPU environments device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # > Convert bounding box coordinates from CTRHW to x1, y1, x2, y2 formats.
# > IMPORTANT: The method expects the input box tensor to be in CxCyHW. # %nbdev_export # Helper Functions for Predictor Methods def ctrhw2tlbr(boxes:Tensor, set_if_input_is_CxCyWH=False): ''' Convert bounding box coordinates from CTRHW to x1, y1, x2, y2 formats IMPORTANT: The method expects the input box tensor to be in CxCyHW. Inputs: Boxes - torch.tensor of activation bounding boxes Dim - (batch size x Items in batch x 4). It will fail otherwise. Input Format - Center coord, height, width Output: torch.tensor of activation bounding boxes Dim = (batch size, Items in batch, 4) Format: x1, y1, x2, y2 ''' if set_if_input_is_CxCyWH: boxes = boxes[:,:,[0,1,3,2]] # Adjust the format to CxCyHW (height, width). This is the FASTAI format x1 = (boxes[:,:,0] - torch.true_divide(boxes[:,:,3],2.)).view(-1,1) x2 = (boxes[:,:,0] + torch.true_divide(boxes[:,:,3],2.)).view(-1,1) y1 = (boxes[:,:,1] - torch.true_divide(boxes[:,:,2],2.)).view(-1,1) y2 = (boxes[:,:,1] + torch.true_divide(boxes[:,:,2],2.)).view(-1,1) return torch.cat([x1,y1,x2,y2],dim=1) #hide #center coords to top-left, bottom-right transformation res = ctrhw2tlbr(torch.tensor([[[0,0,2,2],[0,0,2,2],[0,0,2,2]]])); res # %nbdev_export def tlbr2cthw(boxes:Tensor, ctrhw=True): '''Convert top/left bottom/right format `boxes` to center/size corners. Input: boxes - torch.Tensor of activations bounding boxes Unbounded Dim = (batch size, Items in batch, 4) Format: top left xy, bottom right xy ctrhw = True - Output is in the format CxCyHW False - Output is in the format CxCyWH Output: torch.tensor of activation bounding boxes Dim = (batch size, Items in the batch, 4) Format: center coord xy, height, width''' center = torch.true_divide(boxes[:,:, :2] + boxes[:,:, 2:], 2) # Calculate box center coord sizes = torch.abs(boxes[:,:, 2:] - boxes[:,:, :2]) # Calculate box width & height # results = torch.cat( (center, sizes), 2) if ctrhw: results = results[:,:,[0,1,3,2]] # The correct FASTAI Size format is CxCyHW (height, width) return results #hide #transform top-left, bottom-right coordinates to center, h, w coordinates res = tlbr2cthw(torch.tensor([[[-1,-1,1,1],[-1,-1,1,1],[-1,-1,1,1]]])); res #hide #transform top-left, bottom-right coordinates to center, h, w coordinates res = tlbr2cthw(torch.tensor([[[-1,-1,1,1]],[[-1,-1,1,1]],[[-1,-1,1,1]]])); res #hide # "Return tosender" res = ctrhw2tlbr(tlbr2cthw(torch.tensor([[[-1,-1,1,1]]]))); res # %nbdev_export # We apply Decoding With Variance to both activation boxes and anchor boxes to calculate the final bounding boxes. def activ_decode(p_boxes:Tensor, anchors:Tensor): ''' Decodes box activations into final bounding boxes by calculating predicted anchor offsets, which are then added to anchor boxes Input: p_boxes - torcht.tensor of activation bounding boxes dim: (batch, items in batch, 4) Format: top left xy, bottom right xy anchors - torch.tensor of anchors Dim: (k * no of classes) x 4 Format: CxCyWH format Output: torcht.tensor with anchor boxes offset by box activations dim: batch x tems in batch x 4) Format: tlbr - top left xy, bottom right xy''' sigma_xy, sigma_hw = torch.sqrt(torch.tensor([0.1])), torch.sqrt(torch.tensor([0.2])) # Variances for center and hw coordinates pb = torch.tanh( p_boxes) # Set activations into [-1,1] basis (as used in Fastai) ctrwh = tlbr2cthw(pb, ctrhw=False) # Transform box activations from xyxy format to CxCyWH format. # Calculate offset centers. 
The sequence is Xp, followed by Yp offset_centers = ctrwh[:,:,[0,1]].to(device) * sigma_xy.to(device) * anchors[:,[2,3]].to(device) + anchors[:,[0,1]].to(device) # Calculate offset sizes. The sequence is Wp, followed by Hp offset_sizes = torch.exp(ctrwh[:,:,[2,3]].to(device) *sigma_hw.to(device)) *anchors[:,[2,3]].to(device) # Return format to CxCyHW and then return, switching back to X1Y1X2Y2 format. return torch.clamp(ctrhw2tlbr(torch.cat([offset_centers, offset_sizes], 2), set_if_input_is_CxCyWH=True).view(*p_boxes.shape), min=-1, max=+1) #hide anchors = torch.tensor([[0.0,0.0,2,2],[0.0,0.0,1,1]]) p_boxes = torch.tensor([[[0.0,0.0,2,2],[0.0,0.0,1,1]]]) res = activ_decode(p_boxes, anchors); res # %nbdev_export # Transform activations into final bounding boxes by calculating the predicted offsets to the anchor boxes def activ_encode(p_boxes:Tensor, anchors:Tensor): ''' Transforms activations into final bounding boxes by calculating predicted anchor offsets, which are then added to the anchor boxes Input: p_boxes - torcht.tensor of activation bounding boxes dim: (batch, items in batch, 4) Format: top left xy, bottom right xy Output: torch.tensor ''' sigma_ctr, sigma_hw = torch.sqrt(torch.tensor([0.1])), torch.sqrt(torch.tensor([0.2])) # Variances pb = torch.tanh( p_boxes) # Set activations into the basis [-1,1] (as used in Fastai) pb = torch.tanh(p_boxes[...,:] ) to_ctrwh = tlbr2cthw(pb, tlbr=False) # Transform activaions from xyxy format to ctrwh format. This will facilitate offset calculations below # Calculate anchors with offsets to serve as predicted bounded boxes offset_center = sigma_ctr * (to_ctrwh[:,:,[0,1]].to(device) - anchors[:,[0,1]].to(device)) / anchors[:,[2,3]].to(device) offset_size = torch.log(to_ctrwh[:,:,[2,3]].to(device)/anchors[:,[2,3]].to(device)) / sigma_hw centers = anchors[:,[0,1]].to(device) + offset_center.to(device) sizes = anchors[:,[2,3]].to(device) + offset_size.to(device) return cthw2corners(torch.cat([centers, sizes], 2)) # %nbdev_export # Strip zero-valued rows from a bounding box tensor def strip_zero_rows(bboxes:Tensor): ''' Strip zero-valued rows from a bounding box tensor Input: bboxes Bounding boox tensor Output: b_out Tensor with data rows z_out Tensor with zero-filled rows ''' b_out = []; z_out = [] for rw in torch.arange(bboxes.shape[0]): cc = bboxes[rw,0:][~(bboxes[rw,0:] == 0.).all(1)] # Retain the non all-zero rows of the bounding box if cc.nelement() != 0 : b_out.append(cc) zz = bboxes[rw,0:][(bboxes[rw,0:] == 0.).all(1)] # Retain the all-zero rows of the bounding box if cc.nelement() == 0 : z_out.append(zz) #return (b_out, z_out) return (torch.stack(b_out), torch.stack(z_out)) #hide boxxes = torch.tensor([[[1,2,3,4],[1,2,3,4]],[[1,2,3,4],[1,2,3,4]],[[0,0,0,0],[0,0,0,0]],[[0,0,0,0],[0,0,0,0]]]);boxxes print(boxxes) print(boxxes.shape) strip = strip_zero_rows(boxxes) print(F'non-zero rows: {strip[0]}') print(F'zero rows: {strip[1]}') # %nbdev_export # Graft the all-zero rows back to the bounding box array def graft_zerorows_to_tensor(bboxes:Tensor, zboxes:Tensor): ''' Graft the all-zero rows back to the bounding box row(s) Input bboxes Bounding boox tensor zboxes Tensor containing the zero-valued rows stripped by strip_zero_rows function Output Tensor with data and zero-filled rows of shape[0] = shape bboxes[0} shape.zeroboxes[0]''' return torch.cat([bboxes, zboxes], dim=1) #hide res = graft_zerorows_to_tensor(torch.tensor([[[1,2,3,4],[1,2,3,4]],[[1,2,3,4],[1,2,3,4]]]), torch.tensor([[[0,0,0,0],[0,0,0,0]],[[0,0,0,0],[0,0,0,0]]])); res res = 
graft_zerorows_to_tensor(strip[0], strip[1]);res # %nbdev_export # Flip a bounding box along the y axis def flip_on_y_axis(bboxes:Tensor): ''' Flip a bounding box along the y axis Input: bboxes Bounding box tensor ''' return bboxes[...,[2,1,0,3]]*torch.tensor([-1.,1.,-1.,1.]) #hide res = flip_on_y_axis(torch.tensor([0,0,1,1]));res # %nbdev_export # Flip a bounding box along the x axis def flip_on_x_axis(bboxes:Tensor): ''' Flip a bounding box along the x axis Input: bboxes Bounding boox tensor ''' return bboxes[...,[0,3,2,1]]*torch.tensor([1.,-1.,1.,-1.]) #hide res = flip_on_x_axis(torch.tensor([0,0,1,1]));res # %nbdev_export def rotate_90_plus(bb:Tensor): ''' Rotate bounding box(s) by 90 degrees clockwise Input: bboxes Bounding boox tensor ''' return bb[...,[3,0,1,2]]*torch.tensor([-1.,1.,-1.,1.]) #hide #Insensitive to tensor dimensions rot = rotate_90_plus(torch.tensor([[[0,0,1,1]]])); rot # %nbdev_export def rotate_90_minus(bb:Tensor): ''' Rotate bounding box(s) by 90 degrees counterclockwise Input: bboxes Bounding boox tensor in xyxy format''' return bb[...,[1,2,3,0]]*torch.tensor([1.,-1.,1.,-1.]) # Function is insensitive to tensor dimensions #hide #Insensitive to tensor dimensions rot = rotate_90_minus(rotate_90_plus(torch.tensor([[[0,0,1,1]]])));rot #hide #Insensitive to tensor dimensions rot = rotate_90_minus(torch.tensor([0,0,1,1])); rot #hide #Insensitive to tensor dimensions rot = rotate_90_minus(torch.tensor([[[0,0,1,1]]])); rot #hide from nbdev.export import notebook2script notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # библиотеки для работы с данными import numpy as np import pandas as pd from scipy.sparse import lil_matrix # библиотеки для визуализации import matplotlib.pyplot as pyplot from matplotlib import cm # %matplotlib inline # библиотеки для работы со временем и датами import datetime import math from time import strftime # - # ### Загружаем и чистим данные events = pd.read_csv("02_Data_test.csv", sep=";", low_memory=False) known_personas_data = pd.read_excel("01_Факты.xlsx", header=None) # + # замещаем пропуски IMEI нулями events.loc[events["imei"] == "null", "imei"] = 0 events.loc[events["imei"] == "NaN", "imei"] = 0 events["imei"].fillna(value=0, inplace=True) events["imei"] = events["imei"].astype(np.int64) # создаём новые признаки для более удобного анализа времени событий events["time"] = events["tstamp"].apply( lambda tstamp: datetime.datetime.fromtimestamp(tstamp/1000).strftime("%Y-%m-%d %H:%M:%S") ) events["time_minutes"] = events["tstamp"].apply( lambda tstamp: int(tstamp/(1000*60)) ) # для базовой станции создаём признаки расположения и уникальные id events["station_location"] = events.ix[:,"lat"].map(str) + ", " + events.ix[:,"long"].map(str) events["station_id"] = events.ix[:,"lac"].map(str) + events.ix[:,"cid"].map(str) + " " + events.ix[:,"station_location"].map(str) # - # ### Определяем степень сходства сим-карт между собой # #### Гипотеза 1 # Сим-карты схожи между собой, если события с сим-картами возникают в одних и тех же местах в одном интервале времени. # # В качестве быстрого решения сходство мест возникновения событий определяется по сходству координат базовых станций, а интервал времени условно принимается за 8 часов. Мерой сходства будет количество событий, по которым выполнены оба условия. 
# + # выделяем вспомогательные данные для анализа unique_msisdns = events["msisdn"].unique() # основная разреженная матрица, куда будет сохраняться мера схожести пары сим-карт msisdns_pairs_matrix = lil_matrix((len(unique_msisdns), len(unique_msisdns)), dtype="float") # группируем все события в исходных данных по отдельным сим-картам для сокращения времени запроса в основную выборку msisdn_events = [] for msisdn in unique_msisdns: msisdn_events.append(events[events["msisdn"] == msisdn]) # + time_similarity_threshold = 1*60 print("Старт обработки выборки:", strftime("%Y-%m-%d %H:%M:%S")) # последовательно обходим матрицу попарной схожести сим-карт, работая только с верхней диагональю для исключения избыточности расчётов for msisdn_row in range(len(unique_msisdns)): print(strftime("%Y-%m-%d %H:%M:%S"),"Обрабатываю строку", (msisdn_row+1), "из", len(unique_msisdns)+1) msisdn_row_entries = msisdn_events[msisdn_row] for msisdn_column in range(msisdn_row, len(unique_msisdns)): similarity_value = 0 # диагональ, на которой лежит соответствие сим-карте самой себе, игнорируем if msisdn_column == msisdn_row: continue # ищем схожесть пары симок # сначала определяем, есть ли среди событий у пары сим-карт совпадающие расположения вышек common_locations = np.intersect1d(msisdn_events[msisdn_row]["station_location"].unique(), msisdn_events[msisdn_column]["station_location"].unique(), assume_unique=True) # если совпадения по расположению вышек найдены, вычисляем попарную разницу во времени событий на этих базовых станциях # если события призошли в одном интервале, считаем их схожими if common_locations.size > 0: msisdn_column_entries = msisdn_events[msisdn_column] for common_location in common_locations: for msisdn1_time in msisdn_row_entries.ix[msisdn_row_entries["station_location"] == common_location, "time_minutes"].values: for msisdn2_time in msisdn_column_entries.ix[msisdn_column_entries["station_location"] == common_location, "time_minutes"].values: if abs(msisdn1_time - msisdn2_time) < time_similarity_threshold: similarity_value += 1 # экономим память, записывая в разреженную матрицу только ненулевые количества схожих событий if similarity_value > 0: msisdns_pairs_matrix[msisdn_row, msisdn_column] = similarity_value print("Конец обработки выборки:", strftime("%Y-%m-%d %H:%M:%S")) # - # #### Гипотеза 1 # Сим-карты схожи между собой, если события с сим-картами возникают в одних и тех же местах в одном интервале времени. # # Сходство мест возникновения события определяется по пересечению зон покрытия станций. Зоны покрытия для ускорения расчётов аппроксимируются набором треугольников с центром в координатах базовой станции и углом раскрытия <= 45 градусам. # # Интервал времени принимается равным 24 часам. 
# extract auxiliary data for the analysis
stations = events.ix[:,["station_id","station_location","long","lat","max_dist","start_angle","end_angle"]].drop_duplicates()
unique_stations = stations["station_id"].unique()
unique_msisdns = events["msisdn"].unique()

# the main sparse matrix that will hold the similarity measure for each pair of SIM cards
msisdns_pairs_matrix = lil_matrix((len(unique_msisdns), len(unique_msisdns)), dtype="float")

# auxiliary sparse matrices for computing pairwise intersection of base station coverage areas
# visited_intersects_matrix records whether the intersection has already been computed for a given pair of stations, to avoid duplicate work
# stations_intersect_matrix stores the coverage-intersection flag itself
visited_intersects_matrix = lil_matrix((len(unique_stations), len(unique_stations)), dtype="bool")
stations_intersect_matrix = lil_matrix((len(unique_stations), len(unique_stations)), dtype="bool")

# +
time_similarity_threshold = 24*60

# define helper functions for the calculations
# detect_coverage_intersection determines whether the coverage areas of a given pair of stations intersect
# we use the third-party Shapely library to simplify working with geometric shapes on a plane
from shapely.geometry import Polygon, Point

# approximate length of one degree of latitude/longitude in metres
# https://en.wikipedia.org/wiki/Earth
metres_to_degrees = 40007.86 * 1000.0 / 360.0

# precompute the column positions used for addressing the sample, so we don't recompute them on every loop iteration
station_location_index = np.where(stations.keys() == "station_location")[0][0]
long_index = np.where(stations.keys() == "long")[0][0]
lat_index = np.where(stations.keys() == "lat")[0][0]
max_dist_index = np.where(stations.keys() == "max_dist")[0][0]
start_angle_index = np.where(stations.keys() == "start_angle")[0][0]
end_angle_index = np.where(stations.keys() == "end_angle")[0][0]


def detect_coverage_intersection(station1_id, station2_id):
    # if the stations are the same, we are lucky
    if station1_id == station2_id:
        return True
    station1_number = np.where(unique_stations == station1_id)[0][0]
    station2_number = np.where(unique_stations == station2_id)[0][0]
    # work only with the upper triangle of the station coverage-intersection matrices
    if station1_number > station2_number:
        station1_number, station2_number = station2_number, station1_number
    # if the intersection flag for these two stations has already been computed, return it instead of recomputing
    if visited_intersects_matrix[station1_number, station2_number]:
        return stations_intersect_matrix[station1_number, station2_number]
    # fetch the attributes of both stations up front to save time on repeated lookups into the sample
    station1_index = np.where(stations["station_id"] == station1_id)[0][0]
    st1_location = stations.iat[station1_index, station_location_index]
    st1_start_angle = stations.iat[station1_index, start_angle_index]
    st1_end_angle = stations.iat[station1_index, end_angle_index]
    st1_max_dist = stations.iat[station1_index, max_dist_index]
    st1_long = stations.iat[station1_index, long_index]
    st1_lat = stations.iat[station1_index, lat_index]
    station2_index = np.where(stations["station_id"] == station2_id)[0][0]
    st2_location = stations.iat[station2_index, station_location_index]
    st2_start_angle = stations.iat[station2_index, start_angle_index]
    st2_end_angle = stations.iat[station2_index, end_angle_index]
    st2_max_dist = stations.iat[station2_index, max_dist_index]
    st2_long = stations.iat[station2_index, long_index]
    st2_lat = stations.iat[station2_index, lat_index]
    # as a fast approximation, represent each station's coverage area as a circle centred at the station
    # with radius equal to the reception distance;
    # if the circles do not intersect, any further checks are pointless
    station1_circle_area = Point(st1_long, st1_lat).buffer(st1_max_dist / metres_to_degrees)
    station2_circle_area = Point(st2_long, st2_lat).buffer(st2_max_dist / metres_to_degrees)
    if station1_circle_area.intersects(station2_circle_area):
        # the circles intersect, so check whether the coverage areas themselves intersect:
        # approximate each coverage area by adjacent triangles and test all pairs of triangles;
        # a single intersecting pair means the coverage areas of the base stations intersect
        station1_triangles = approximate_coverage_area(st1_start_angle, st1_end_angle, st1_max_dist, st1_long, st1_lat)
        station2_triangles = approximate_coverage_area(st2_start_angle, st2_end_angle, st2_max_dist, st2_long, st2_lat)
        for station1_triangle in station1_triangles:
            for station2_triangle in station2_triangles:
                if station1_triangle.intersects(station2_triangle):
                    # remember the intersection flag so it is not computed twice
                    mark_as_visited(station1_number, station2_number, True)
                    return True
        # if no pair of triangles intersects, the coverage areas do not intersect either
        mark_as_visited(station1_number, station2_number, False)
        return False
    else:
        mark_as_visited(station1_number, station2_number, False)
        return False


# a wrapper around the pairwise coverage-intersection matrices:
# it records which station pairs have already been processed and the result for them, saving space in the sparse matrices
def mark_as_visited(station1_index, station2_index, result):
    # work only with the upper triangle of the matrices
    if station1_index <= station2_index:
        visited_intersects_matrix[station1_index, station2_index] = True
        if result:
            stations_intersect_matrix[station1_index, station2_index] = result
    else:
        visited_intersects_matrix[station2_index, station1_index] = True
        if result:
            stations_intersect_matrix[station2_index, station1_index] = result


# computes the vertices of the triangles approximating a station's coverage area
def approximate_coverage_area(start_angle, end_angle, max_dist, long, lat):
    event_triangles = []
    # compute the start and end bearings of the triangles
    angle_boundaries = calculate_angle_boundaries(start_angle, end_angle)
    # split the station's coverage area into individual triangles
    for index, angle_boundary in enumerate(angle_boundaries):
        # using Shapely, represent each triangle as a polygon built from the base station location and the two coverage boundary points
        event_triangles.append(Polygon([
            (long, lat),
            calculate_destination_point(long, lat, max_dist, angle_boundary[0]),
            calculate_destination_point(long, lat, max_dist, angle_boundary[1])
        ]))
    return event_triangles


# by convention, cap the opening angle of a triangle at 45 degrees
dividing_angle = 45


def calculate_angle_boundaries(start_angle, end_angle):
    angle_boundaries = []
    # work out how many triangles these angular boundaries and this step size produce
    if end_angle > start_angle:
        triangles_count = math.ceil((end_angle - start_angle) / dividing_angle)
    else:
        # if the coverage area crosses 360 degrees, count differently
        triangles_count = math.ceil((end_angle - (start_angle-360)) / dividing_angle)
    # compute the bearings for each triangle
    for i in range(triangles_count):
        start_bearing = start_angle + i*dividing_angle
        end_bearing = start_angle + (i+1)*dividing_angle
        # if a bearing went past 360 degrees, wrap it around
        if start_bearing > 360:
            start_bearing = start_bearing - 360
        if end_bearing > 360:
            end_bearing = end_bearing - 360
        angle_boundaries.append([start_bearing, min(end_bearing, end_angle)])
    return angle_boundaries


# compute the coordinates of a point on the boundary of a base station's coverage area,
# given the station coordinates, the coverage distance and the bearing of the coverage boundary
# source of the algorithm: http://www.movable-type.co.uk/scripts/latlong.html
def calculate_destination_point(start_longitude, start_latitude, distance, bearing):
    # convert the coverage distance into an angular distance
    ang_distance = distance / (6371.0 * 1000.0)
    # convert the angles to radians
    long_rad = math.radians(start_longitude)
    lat_rad = math.radians(start_latitude)
    bearing_rad = math.radians(bearing)
    # spherical-trigonometry "black magic", then convert the coordinates back to degrees
    destination_lat_rad = math.asin(math.sin(lat_rad) * math.cos(ang_distance) + math.cos(lat_rad) * math.sin(ang_distance) * math.cos(bearing_rad))
    destination_long_rad = long_rad + math.atan2(math.sin(bearing_rad) * math.sin(ang_distance) * math.cos(lat_rad), math.cos(ang_distance) - math.sin(lat_rad) * math.sin(destination_lat_rad))
    destination_lat_degrees = math.degrees(destination_lat_rad)
    destination_long_degrees = math.degrees(destination_long_rad)
    return (destination_long_degrees, destination_lat_degrees)

# +
# set the column positions up front for addressing inside the loop
station_id_index = 0
time_index = 1
msisdn_index = 2

print("Start:", strftime("%Y-%m-%d %H:%M:%S"))

# the main part of the SIM-card similarity check:
# iterate over pairs of events from the sample and check the station coverage intersection and the time difference;
# if both criteria hold, count such a pair of events in the overall counter for that pair of SIM cards
# (iterating over pairs of events in an odd-looking but fast way)
for event1 in zip(events["station_id"], events["time_minutes"], events["msisdn"]):
    event1_index = np.where(unique_msisdns == event1[msisdn_index])[0][0]
    for event2 in zip(events["station_id"], events["time_minutes"], events["msisdn"]):
        # ignore events belonging to the same SIM card
        if event1[msisdn_index] == event2[msisdn_index]:
            continue
        if abs(event1[time_index] - event2[time_index]) < time_similarity_threshold:
            if detect_coverage_intersection(event1[station_id_index], event2[station_id_index]):
                # to save time, compute the index of the second event only after similarity has been established
                event2_index = np.where(unique_msisdns == event2[msisdn_index])[0][0]
                # work only with the upper triangle of the matrix
                if event1_index < event2_index:
                    msisdns_pairs_matrix[event1_index, event2_index] += 1
                else:
                    msisdns_pairs_matrix[event2_index, event1_index] += 1

print("Finish:", strftime("%Y-%m-%d %H:%M:%S"))
# -

# #### Hypothesis 2

pairs_matrix_rows, pairs_matrix_cols = msisdns_pairs_matrix.nonzero()
for row, col in zip(pairs_matrix_rows, pairs_matrix_cols):
    msisdns_pairs_matrix[row,col] = msisdns_pairs_matrix[row,col] / (msisdn_events[row].shape[0] * msisdn_events[col].shape[0])

# Additionally, we identify SIM cards that have been seen in more than one device during the observation period, and assign such cards the highest confidence value.

# determine the unique devices and how many SIM cards have been seen in each one
unique_imeis = events.loc[events["imei"] != 0,"imei"].unique()
for index, current_imei in enumerate(unique_imeis):
    imei_sample = events.ix[(events["imei"] == current_imei), "msisdn"].value_counts()
    # if more than one SIM card appears in the events with this IMEI, give those SIM cards the highest similarity value
    msisdn_count = imei_sample.shape[0]
    if msisdn_count > 1:
        #print("IMEI", current_imei, "has this many SIM cards in the sample:", msisdn_count)
        msisdn1_index = np.where(unique_msisdns == imei_sample.keys()[0])
        msisdn2_index = np.where(unique_msisdns == imei_sample.keys()[1])
        # work only with the upper triangle of the matrix
        if msisdn1_index < msisdn2_index:
            msisdns_pairs_matrix[msisdn1_index, msisdn2_index] = 1.0
        else:
            msisdns_pairs_matrix[msisdn2_index, msisdn1_index] = 1.0

# +
cutoff_percentile = 99.95
cutoff_value = np.percentile(msisdns_pairs_matrix[msisdns_pairs_matrix.nonzero()].toarray(), cutoff_percentile)
pairs_matrix_rows, pairs_matrix_cols = msisdns_pairs_matrix.nonzero()
for row, col in zip(pairs_matrix_rows, pairs_matrix_cols):
    if msisdns_pairs_matrix[row,col] < cutoff_value:
        msisdns_pairs_matrix[row,col] = 0.0
# -

# ##### Visualizing the results

figure, axes = pyplot.subplots(figsize=(100,100))
cax = axes.imshow(msisdns_pairs_matrix.toarray(), interpolation="none", cmap=cm.Blues)
colorbar = figure.colorbar(cax)
pyplot.savefig('pairs_matrix.png', bbox_inches='tight')

pyplot.hist(msisdns_pairs_matrix[msisdns_pairs_matrix.nonzero()].toarray()[0],bins=100)
pyplot.show()

# ### Checking classification quality against the known personas

# +
# on the test sample, compare the obtained results with the known pairs
# we assume the magnitude of the similarity measure does not matter; only the fact that similarity was found for a pair of SIM cards does
# we assume the similarity for known pairs equals one
# fill the lower triangle of the similarity matrix so that lookups for the known pairs work correctly
msisdns_pairs_matrix = msisdns_pairs_matrix.copy() + msisdns_pairs_matrix.copy().transpose()

# +
# for every known pair of SIM cards, check whether a computed similarity flag is present in the pairwise matrix
false_negatives = 0
for msisdn_pair in known_personas_data.itertuples():
    predicted_value = msisdns_pairs_matrix[np.where(unique_msisdns == msisdn_pair[1]), \
                                           np.where(unique_msisdns == msisdn_pair[2]) \
                                          ].nnz
    #print(msisdn_pair[0], ":", msisdn_pair[1], "-", msisdn_pair[2], "similarity was", predicted_value)
    false_negatives += (1 - predicted_value)

print("Total number of errors on the test sample:", false_negatives, "out of", known_personas_data.shape[0], "known pairs")
# -

# ### Extracting personas from the pairs of similar SIM cards

# To extract personas from the available pairs of SIM cards, we represent the found pairs as an undirected graph
# in which SIM cards are vertices and edges are the similarity between a pair of SIM cards.
# On such a graph we can find subgraphs whose vertices are connected to each other by non-zero edges.
# A persona is a subgraph with more than one vertex.
# To find such subgraphs, we use a depth-first search algorithm

from scipy.sparse.csgraph import connected_components

n_components, component_list = connected_components(msisdns_pairs_matrix, directed=False)
print("Subgraphs found:", n_components)

personas_found = {}
persona_counter = 0
#print("Personas found:")
for i in range(n_components):
    # keep the subgraphs with more than one vertex
    if np.sum(component_list == i) > 1:
        persona_counter += 1
        personas_found.update({persona_counter: unique_msisdns[component_list == i]})
        #print("№"+str(persona_counter)+":", unique_msisdns[component_list == i])

# ### Exporting the list of personas with their numbers

# +
with open("personas.txt", mode="w") as output_file:
    for key, value in personas_found.items():
        output_file.write(str(key) + ": " + str(value) + "\n")

print("The list of personas has been written to personas.txt")
# -

# ### Operations with intermediate results

from scipy.io import savemat, loadmat

savemat("msisdns_pairs_matrix.mat", {"msisdns_pairs_matrix":msisdns_pairs_matrix}, appendmat=False, do_compression=True)

msisdns_pairs_matrix = loadmat("msisdns_pairs_matrix_24.mat",appendmat=False)["msisdns_pairs_matrix"].tolil().asfptype()

# ### Exploring the available data

for msisdn_pair in known_personas_data.itertuples():
    print("Persona:", msisdn_pair[0], \
          " number of events:", events[events["msisdn"] == msisdn_pair[1]].shape[0], \
          "+", events[events["msisdn"] == msisdn_pair[2]].shape[0])

known_persona_row = 4
events[["time","msisdn","imei","station_id","event_type","cell_type","max_dist","start_angle","end_angle"]][
    (events["msisdn"] == known_personas_data.ix[known_persona_row, 0]) |
    (events["msisdn"] == known_personas_data.ix[known_persona_row, 1])
]
print(known_personas_data.ix[known_persona_row, 0], known_personas_data.ix[known_persona_row, 1])

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.7.5 64-bit
#     name: python37564bit3ec481eb2fd14d788a8a4fb6d3db89d5
# ---

import numpy as np
import pandas as pd

# Importing data
data=pd.read_csv('Social_Network_Ads.csv')
data.head()

# Checking for null values
data.isnull().sum()

# Data Splitting
from sklearn.model_selection import train_test_split
x=data.iloc[:,:-1].values
y=data.iloc[:,-1].values.reshape(-1,1)
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
x_train=sc.fit_transform(x_train)
x_test=sc.transform(x_test)

# SVC model
from sklearn.svm import SVC
svc=SVC(kernel='rbf',random_state=0)
svc.fit(x_train,y_train)
y_pred=svc.predict(x_test)

# + tags=[]
from sklearn.metrics import accuracy_score,confusion_matrix
print(confusion_matrix(y_test,y_pred))
accuracy_score(y_test,y_pred)
# -

# K-Fold Cross Validation

# + tags=[]
from sklearn.model_selection import cross_val_score
k_fold=cross_val_score(estimator=svc,X=x_train,y=y_train,cv=10,n_jobs=-1)
print('Mean Accuracy:',k_fold.mean())

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# # Last executed on Nike at 18.03.21.
# # - import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Activation from keras.utils import np_utils from tensorflow.python.keras.layers import Input, Dense from tensorflow.python.keras.models import Model import tensorflow as tf from sklearn.metrics import roc_auc_score #np.random.seed(35) (X_train, y_train), (X_test, y_test) = mnist.load_data() print("X_train original shape", X_train.shape) print("y_train original shape", y_train.shape) print("X_test original shape", X_test.shape) print("y_test original shape", y_test.shape) print(np.min(X_train)) print(np.max(X_train)) print(np.unique(y_train)) plt.imshow(X_train[0], cmap='gray') plt.title(y_train[0]) # + X_train = X_train.reshape(60000,784) X_test = X_test.reshape(10000,784) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train/=255 X_test/=255 # + number_of_classes = 10 Y_train = np_utils.to_categorical(y_train, number_of_classes) Y_test = np_utils.to_categorical(y_test, number_of_classes) y_train[0], Y_train[0] # - print(X_train.shape) print(Y_test.shape) print(X_test.shape) X_train = X_train.reshape((60000, 28, 28, 1)) X_test = X_test.reshape((10000, 28, 28, 1)) print(X_train.shape) print(Y_test.shape) input_shape = (28, 28, 1) def createModel(): input_img = Input(shape=input_shape) x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu')(input_img) x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x) x = tf.keras.layers.Dropout(0.5)(x) x = tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu')(x) x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x) x = tf.keras.layers.Dropout(0.5)(x) x = tf.keras.layers.Flatten()(x) x = tf.keras.layers.Dense(256, activation='relu')(x) output = tf.keras.layers.Dense(10, activation='softmax', name='visualized_layer')(x) def auc(y_true, y_pred): return tf.py_function(roc_auc_score, (y_true, y_pred), tf.double) model = Model(inputs=input_img, outputs=output) return model model = createModel() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) size = int(len(X_train) * 0.8) train_x, val_x = X_train[:size], X_train[size:] train_y, val_y = Y_train[:size], Y_train[size:] print(train_x.shape) history = model.fit(train_x, train_y, batch_size=128, nb_epoch=10, validation_data=(val_x, val_y)) from matplotlib import pyplot # learning curves of model accuracy pyplot.plot(history.history['loss'], label='Training') pyplot.plot(history.history['val_loss'], label='Validation') pyplot.legend() pyplot.show() score = model.evaluate(X_test, Y_test) print() print('Test accuracy: ', score[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python38-azureml # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # D2E2F: ForSea meeting # (2021-04-29) # # ### 1) Utveckla simuleringsmodell med Machine Learning # ### 2) Använda simuleringsmodell för att optimera thrustrar # # ### 3) Analysarbete # #### a) Långtidsanalys: dagar/veckor/månader/år # #### b) Förbättring med "Smart Eco-shipping" # #### c) Korttidsanalys: farygsdynamik under resa # # ### 4) PhD Chalmers paper # ### 5) "Förstå" datan # ![](figures/I_want_to_understand.png) # + gather={"logged": 1618500219438} slideshow={"slide_type": "skip"} # 
#%load imports.py # %matplotlib inline # %load_ext autoreload # %autoreload 2 import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (20,3) plt.style.use('presentation') #import seaborn as sns import os from collections import OrderedDict from IPython.display import display pd.options.display.max_rows = 999 pd.options.display.max_columns = 999 pd.set_option("display.max_columns", None) import folium import plotly.express as px import plotly.graph_objects as go import sys import os sys.path.append('../') from src.visualization import visualize, animate sys.path.append('../src/models/pipelines/longterm/scripts/prepdata/trip') import prepare_dataset, trips, trip_id sys.path.append('../src/models/pipelines/longterm/scripts/prepdata/clean_statistics') import clean_statistics import scipy.integrate import seaborn as sns # + gather={"logged": 1618500116933} experiments_path = '../notebooks/pipelines/longterm' experiment = 'steps' experiment_path = os.path.join(experiments_path, experiment) file_path = os.path.join(experiment_path, 'id.parquet') df = trip_id.load_output_pandas(path=file_path) # - df.head() # ## Plot trips # + gather={"logged": 1618500118189} visualize.plot_trips(df=df, width=1400, height=800, zoom_start=14, color_key='trip_direction') # - # ## Varför åker man inte raka vägen? # ## Trip statistics file_path = os.path.join(experiment_path,'id_statistics_clean.parquet') df_stat = clean_statistics.load_output_as_pandas_dataframe(path=file_path) df_stat.head() fig = px.scatter(df_stat, x='start_time',y='P', color='trip_direction', width=1500, height=600, color_discrete_sequence=['red','green']) fig.show() # ## Analyzing trips fig = px.line(df, x='trip_time', y='sog', color='trip_no', width=1500, height=800) fig.show() # ![](animations/trip_101.gif "101") # + [markdown] slideshow={"slide_type": "skip"} # ## Look at one trip # + slideshow={"slide_type": "skip"} trips = df.groupby(by='trip_no') trip_no = 129 trip = trips.get_group(trip_no) # + slideshow={"slide_type": "slide"} visualize.plot_map(df=trip, width=1400, height=800, zoom_start=14, color_key='trip_direction') # - trip[[f'sin_pm{i}' for i in range(1,5)]].isnull().any() trip[['cog','heading','sog']].isnull().any() animate.widget(trip=trip) # ## Is there a more optimal thruster operation? # ![](https://memegenerator.net/img/images/15921176.jpg) # ## Thruster positions? 
# Annas definition: # # Thruster 1 – NV # Thruster 2 – SV # Thruster 3 – NE # Thruster 4 - SE # # + df_no_reverse = df.groupby('trip_no').filter(lambda x : x.iloc[0]['reversing']==0) groups = df_no_reverse.groupby('trip_no') trip_no_reverse =groups.get_group(list(groups.groups.keys())[0]) trip_ = animate.normalize_power(trip=trip_no_reverse) row = trip_.iloc[500].copy() fig,ax=plt.subplots() fig.set_size_inches(6,6) animate.plot_thrusters(ax=ax, row=row, positions=['NV', 'SV', 'NE', 'SE']) ax.set_title(f'reversing:{row["reversing"]}, trip_direction: {row["trip_direction"]}'); # - animate.widget(trip=trip, positions=['NV', 'SV', 'NE', 'SE']) # ## Transverse force for thrusters during one trip sin_keys = ['sin_pm%i' % n for n in range(1,5)] cos_keys = ['cos_pm%i' % n for n in range(1,5)] power_keys = ['power_em_thruster_%i' % n for n in range(1,5)] columns = sin_keys + cos_keys + power_keys g = sns.PairGrid(trip[sin_keys]) g.map_upper(sns.scatterplot) #g.map_lower(sns.kdeplot) g.map_diag(sns.kdeplot, lw=3, legend=False); # ## Compare mean power for many trips df_mean = df.groupby('trip_no').mean() g = sns.PairGrid(df_mean[power_keys + ['reversing']]) g.map_upper(sns.scatterplot) #g.map_lower(sns.kdeplot) g.map_diag(sns.kdeplot, lw=3, legend=False); # + df_no_reverse = df.groupby('trip_no').filter(lambda x : x.iloc[0]['reversing']==1) groups = df_no_reverse.groupby('trip_no') trip_no_reverse =groups.get_group(list(groups.groups.keys())[0]) trip_ = animate.normalize_power(trip=trip_no_reverse) row = trip_.iloc[500].copy() fig,ax=plt.subplots() fig.set_size_inches(6,6) animate.plot_thrusters(ax=ax, row=row) ax.set_title(f'reversing:{row["reversing"]}, trip_direction: {row["trip_direction"]}'); # + df_no_reverse = df.groupby('trip_no').filter(lambda x : x.iloc[0]['reversing']==0) groups = df_no_reverse.groupby('trip_no') trip_no_reverse =groups.get_group(list(groups.groups.keys())[0]) trip_ = animate.normalize_power(trip=trip_no_reverse) row = trip_.iloc[500].copy() fig,ax=plt.subplots() fig.set_size_inches(6,6) animate.plot_thrusters(ax=ax, row=row) ax.set_title(f'reversing:{row["reversing"]}, trip_direction: {row["trip_direction"]}'); # - # ## Fartygsdata? # ![](https://upload.wikimedia.org/wikipedia/commons/thumb/f/f1/20190531_Fersea_Tycho_Brahe_Helsingor_0120_%2847981263576%29.jpg/335px-20190531_Fersea_Tycho_Brahe_Helsingor_0120_%2847981263576%29.jpg) # * GA? # * Max effekt på thrustrar? # ## Nästa möte... 
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.6.1 # language: julia # name: julia-1.6 # --- using Rocket using ReactiveMP using GraphPPL using Distributions @model function coin_model() a = datavar(Float64) b = datavar(Float64) y = datavar(Float64) θ ~ Beta(a, b) y ~ Bernoulli(θ) return y, a, b, θ end # + N = 10000 # number of coin tosses p = 0.5 # p parameter of the Bernoulli distribution dataset = float.(rand(Bernoulli(p), N)); # - function inference(data) model, (y, a, b, θ) = coin_model() fe = ScoreActor(Float64) θs = keep(Marginal) fe_sub = subscribe!(score(Float64, BetheFreeEnergy(), model), fe) θ_sub = subscribe!(getmarginal(θ), θs) prior_sub = subscribe!(getmarginal(θ), (posterior) -> begin posterior_a, posterior_b = params(posterior) update!(a, posterior_a) update!(b, posterior_b) end) update!(a, 1.0) update!(b, 1.0) for d in data update!(y, d) end unsubscribe!(θ_sub) unsubscribe!(prior_sub) unsubscribe!(fe_sub) return getvalues(θs), getvalues(fe) end est, fe = inference(dataset); using Plots # + pest = est[1:10:end] pfe = fe[1:10:end] p1 = plot(mean.(pest), ribbon = var.(pest), label = "estimate") p1 = plot!([ p ], seriestype = :hline, label = "real") p2 = plot(pfe, label = "Free-Energy") p2 = plot!([ -log(p) ], seriestype = :hline, label = "-log(evidence)") plot(p1, p2, layout = @layout([ a b ]), size = (1000, 400)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Your first image! # # Ok now that you understand what Docker is and why use it (at least I hope, if not you will soon!), it is time to get more technical. # # ## Goal # In this module we will learn: # * What is the **concrete** difference between an **image** and a **container** # * How to create an image? # * What is a Dockerfile and how to use it? # * How to create a container out of an image? # # And we will run our very first container based on our very first image! # # ![do-it](./assets/do-it.gif) # ## Installation # First things first, we need to install Docker. # # You just have to follow [Docker's well made documentation](https://docs.docker.com/engine/install/ubuntu/). # # If you're running Windows or Mac -> https://docs.docker.com/engine/install/ # # By default, Docker will need to use `sudo` each time. Let's change that: # ```bash # sudo gpasswd -a $USER docker # ``` # # You will need to log-out before it applies the changes. # # ## How do we create an image # In order to create images (or import them), we could: # # ### Go to [Docker Hub](https://hub.docker.com/) # # which is like GitHub for Docker images. # You need an image with a SQL DB in it? [There is one](https://hub.docker.com/r/mysalt/mysql). # You need an image with python 3.6? [There is one](https://hub.docker.com/r/silverlogic/python3.6). # And a lot of other images! # # ### Create your own docker file! # # In many cases, we will want to create our own images, with our own files and our own script. # In order to do that we will create what we call a `Dockerfile`. # # This is just a file that is named `Dockerfile` and that contains a script that Docker can understand. # Based on that, it will create an image. # # ## Let's create our image # It's time! 
So we will create a `Dockerfile` and we will use a Python image as a base. It means that we start from an existing image to build our own. # So we don't have to start from scratch each time. # # In this file we will add a line to tell Docker that we want to start from the official Python 3.7 image. # # The `FROM` keyword is used to tell Docker which base image we will use. # # ```Dockerfile # FROM python:3.7 # ``` # # Now let's add another line to copy a file. In the folder you're in, there is a file named `hello_world.py`. This file contains a single line: # # ```python # print("Hello world!") # ``` # # We will create a folder called `app` and put our file in it. As the python image is built on top of Ubuntu, we can use all the commands that work in Ubuntu. # # Let's see some useful keywords that can be used in a Dockerfile: # * The `RUN` keyword can be used to run a command on the system. # * The `COPY` keyword can be used to copy a file. # * The `WORKDIR` keyword can be used to define the path where all the commands will be run starting after it. # * The `CMD` keyword can be used to define the command that the container will run when it is launched. # # # ```Dockerfile # RUN mkdir /app # RUN mkdir /app/code # COPY code/hello_world.py /app/code/hello_world.py # WORKDIR /app # CMD ["python", "code/hello_world.py"] # ``` # # ## Let's build our image! # Now we're ready to create our first image! Exciting, right? # # We say that we `build` our image. That's the term. # # The command is: `docker build . -t hello` # # * `docker` to specify that we use Docker # * `build` to specify that we want to create an image # * `.` to specify that the Dockerfile is in the current directory # * `-t hello` to add a name to our image. If we don't do that, we will need to use the ID that Docker defines for us, and it's not easy to remember. # # I already created the Dockerfile for you. Have a look at it! # !docker build . -t hello # As you can see, our image has been successfully built! # # If you look at the last line of the output, you see: # # ``` # Successfully tagged hello:latest # ``` # # Our image has been tagged with `hello:latest`. As we didn't define any tag at the end of our image name, `latest` will be added by Docker. # If we make changes in our image and re-build it, a new image will be created with the tag `latest` and our old image will no longer have it. # It's useful when you want to use the most recent version of your image. # # We can also add our own tags as follows: # !docker build . -t hello:v1.0 # If we try to list all of our images, we will see that we have 3 images. # # * One `hello` with the tag `latest` # * One `hello` with the tag `v1.0` # * One `python` with the tag `3.7` *(that's the one we used as the base image)* # # We can see it with: # # ```bash # docker image ls # ``` # !docker image ls # ## Manage images # As you can see, images take up a lot of space on your hard drive! And the more complex your images are and the more dependencies they have, the bigger they will be. # # It rapidly becomes a pain... # # Thankfully, we can remove the ones that we don't use anymore with the command: # # ```bash # docker image rm # ``` # As we see in the `docker image ls` output, each image has an ID. We will use that to remove them. Let's say we want to remove our `hello:v1.0` image. # ``` # hello latest 8f7ca704c0a8 7 minutes ago 876MB # ``` # Here the ID is `8f7ca704c0a8`. But here multiple images have the same ID. That's because multiple tags point to the same image. # # Confused? 
It's because Docker is really smart! We tried to create multiple images based on the same Dockerfile, and there were no changes between the creation of the first image and the last one. Neither in the files, nor in the Dockerfile. So Docker knows that it doesn't have to create multiple images! It creates one image and 'links' it to the other tags. # # So if I try to delete with the ID, it will give me a warning and not do it, because multiple tags are linked to the same image. # # So instead of using IDs, we will use tags. # !docker image rm hello:v1.0 # The tag has been removed! # !docker image ls # ## Run it # Perfect! You understand what an image is now! Let's run it. # # When we run the image, Docker will create a `container`, an instance of the image, and it will execute the command we added after the `CMD` keyword. # # We will do it with the command: # ```bash # docker run -t hello:latest # ``` # # * `run` is to tell Docker to create a container. # * `hello:latest` is to specify which image it should use to create and run the `container` (the `-t` flag just attaches a pseudo-terminal). # # If you don't put a `:` tag, Docker will add `:latest` by default. # !docker run -t hello # Our container successfully ran. We can see that it printed 'Hello world' as asked. # # Ok. So we created a container and we ran it. Can we see it stored somewhere? # # Let's try with `docker container ls` maybe? # !docker container ls # Damn it, the command seems to be right but there is nothing here! # Well, `docker container ls` only shows running containers. And this one is not running anymore because it completed the task we asked it to do! # # So if we want to see all the containers, including the stopped ones, we can do: # ```bash # docker container ls -a # ``` # # !docker container ls -a # Ok good! We can of course remove it with: # # ```bash # docker container rm # ``` # # So in this case: `cd585b842f8b` # !docker container rm cd585b842f8b # It worked! # # ## Conclusion # Great! You now have a complete understanding of images and containers. # # In the next module, we will dive deeper into containers and see what we can do with them. 
# # ![We will have so much fun!](./assets/have-fun.gif) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pathlib import Path import copy import matplotlib.pyplot as plt import numpy as np import torch import torchvision from torch.utils.data import DataLoader, Dataset, random_split from torchvision import transforms from torchvision.datasets import ImageFolder # GPUのセットアップ device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(device) # + # データセットの読み込み data_transforms = { 'train': transforms.Compose([ transforms.Resize(size=(224,224)), transforms.RandomRotation(degrees=15), transforms.ColorJitter(), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(size=(224,224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } dataset = ImageFolder("raw_dataset", data_transforms['train']) dataset.class_to_idx # - # 訓練用、評価用にデータを分ける all_size = len(dataset) train_size = int(0.8 * all_size) val_size = all_size - train_size dataset_size = {"train":train_size, "val":val_size} train_data, val_data = random_split(dataset, [train_size, val_size]) # + # DataLoaderを定義 train_loader = DataLoader(train_data, batch_size=10, shuffle=True) val_loader = DataLoader(val_data, batch_size=10, shuffle=False) dataloaders = {'train':train_loader, 'val':val_loader} def imshow(inp, title=None): """Imshow for Tensor.""" inp = inp.numpy().transpose((1, 2, 0)) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) inp = std * inp + mean inp = np.clip(inp, 0, 1) plt.imshow(inp) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated dataiter = iter(dataloaders['train']) images, labels = dataiter.next() class_names = list(dataset.class_to_idx.keys()) # Make a grid from batch out = torchvision.utils.make_grid(images) imshow(out, title=[class_names[x] for x in labels]) # + # モデルの定義 import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 53* 53, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 2) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(x.size(0), 16*53*53) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net().to(device) # + import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) # + EPOCH = 100 best_model_wts = copy.deepcopy(net.state_dict()) best_acc = 0.0 #途中経過保存用に、リストを持った辞書を作ります。 loss_dict ={"train" : [], "val" : []} acc_dict = {"train" : [], "val" : []} for epoch in range(EPOCH): # loop over the dataset multiple times if (epoch+1)%5 == 0:#5回に1回エポックを表示します。 print('Epoch {}/{}'.format(epoch, EPOCH - 1)) print('-' * 10) for phase in ['train', 'val']: if phase == 'train': net.train() else: net.eval() running_loss = 0.0 running_corrects = 0 for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) with torch.set_grad_enabled(phase == 'train'): outputs = net(inputs) _, preds = torch.max(outputs, 1) loss = 
criterion(outputs, labels) optimizer.zero_grad() # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_size[phase] epoch_acc = running_corrects.double() / dataset_size[phase] loss_dict[phase].append(epoch_loss) acc_dict[phase].append(epoch_acc) print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc)) if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(net.state_dict()) net.load_state_dict(best_model_wts) print('Best val acc: {:.4f}'.format(best_acc)) print('Finished Training') # + loss_train = loss_dict["train"] loss_val = loss_dict["val"] acc_train = acc_dict["train"] acc_val = acc_dict["val"] fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5)) #0個目のグラフ axes[0].plot(range(EPOCH), loss_train, label = "train") axes[0].plot(range(EPOCH), loss_val, label = "val") axes[0].set_title("Loss") axes[0].legend()#各グラフのlabelを表示 #1個目のグラフ axes[1].plot(range(EPOCH), acc_train, label = "train") axes[1].plot(range(EPOCH), acc_val, label = "val") axes[1].set_title("Acc") axes[1].legend() #0個目と1個目のグラフが重ならないように調整 fig.tight_layout() # + # モデルの保存(pth) torch.save(net.state_dict(),"detector/face_mask_detector.pth") # モデルの保存(ONNX) trained_model = Net() trained_model.load_state_dict(torch.load('detector/face_mask_detector.pth', map_location=torch.device('cpu'))) dummy_input = torch.randn(1, 3, 224, 224) torch.onnx.export(trained_model, dummy_input, "detector/face_mask_detector.onnx") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from torch.nn import functional as F import torch from torch import nn import six from six import __init__ from model.utils.bbox_tools import bbox2loc, bbox_iou, loc2bbox def generate_anchor_base(base_size=16, ratios=[0.5, 1, 2], anchor_scales=[8, 16, 32]): """Generate anchor base windows by enumerating aspect ratio and scales. Generate anchors that are scaled and modified to the given aspect ratios. Area of a scaled anchor is preserved when modifying to the given aspect ratio. :obj:`R = len(ratios) * len(anchor_scales)` anchors are generated by this function. The :obj:`i * len(anchor_scales) + j` th anchor corresponds to an anchor generated by :obj:`ratios[i]` and :obj:`anchor_scales[j]`. For example, if the scale is :math:`8` and the ratio is :math:`0.25`, the width and the height of the base window will be stretched by :math:`8`. For modifying the anchor to the given aspect ratio, the height is halved and the width is doubled. Args: base_size (number): The width and the height of the reference window. ratios (list of floats): This is ratios of width to height of the anchors. anchor_scales (list of numbers): This is areas of anchors. Those areas will be the product of the square of an element in :obj:`anchor_scales` and the original area of the reference window. Returns: ~numpy.ndarray: An array of shape :math:`(R, 4)`. Each element is a set of coordinates of a bounding box. The second axis corresponds to :math:`(y_{min}, x_{min}, y_{max}, x_{max})` of a bounding box. """ py = base_size / 2. px = base_size / 2. 
anchor_base = np.zeros((len(ratios) * len(anchor_scales), 4), dtype=np.float32) for i in six.moves.range(len(ratios)): for j in six.moves.range(len(anchor_scales)): h = base_size * anchor_scales[j] * np.sqrt(ratios[i]) w = base_size * anchor_scales[j] * np.sqrt(1. / ratios[i]) index = i * len(anchor_scales) + j anchor_base[index, 0] = py - h / 2. anchor_base[index, 1] = px - w / 2. anchor_base[index, 2] = py + h / 2. anchor_base[index, 3] = px + w / 2. return anchor_base anchor_base = generate_anchor_base();anchor_base def _enumerate_shifted_anchor(anchor_base, feat_stride, height, width): # Enumerate all shifted anchors: # # add A anchors (1, A, 4) to # cell K shifts (K, 1, 4) to get # shift anchors (K, A, 4) # reshape to (K*A, 4) shifted anchors # return (K*A, 4) # # !TODO: add support for torch.CudaTensor # xp = cuda.get_array_module(anchor_base) # it seems that it can't be boosed using GPU import numpy as xp shift_y = xp.arange(0, height * feat_stride, feat_stride) shift_x = xp.arange(0, width * feat_stride, feat_stride) shift_x, shift_y = xp.meshgrid(shift_x, shift_y) shift = xp.stack((shift_y.ravel(), shift_x.ravel(), shift_y.ravel(), shift_x.ravel()), axis=1) A = anchor_base.shape[0] K = shift.shape[0] anchor = anchor_base.reshape((1, A, 4)) + \ shift.reshape((1, K, 4)).transpose((1, 0, 2)) anchor = anchor.reshape((K * A, 4)).astype(np.float32) return anchor feature = torch.Tensor(1,512,50,50) n, _, hh, ww = feature.shape anchor = _enumerate_shifted_anchor( anchor_base, 16, hh, ww) anchor anchor.shape # 50*50*9 n_anchor = anchor.shape[0] // (hh * ww);n_anchor conv1 = nn.Conv2d(512, 512, 3, 1, 1) score = nn.Conv2d(512, 9 * 2, 1, 1, 0) loc = nn.Conv2d(512, 9 * 4, 1, 1, 0) h = F.relu(conv1(feature));h.shape rpn_locs = loc(h);rpn_locs.shape rpn_locs = rpn_locs.permute(0, 2, 3, 1).contiguous().view(n, -1, 4);rpn_locs.shape rpn_scores = score(h);rpn_scores.shape rpn_scores = rpn_scores.permute(0, 2, 3, 1).contiguous();rpn_scores.shape rpn_scores.view(n, hh, ww, n_anchor, 2).shape rpn_softmax_scores = F.softmax(rpn_scores.view(n, hh, ww, n_anchor, 2), dim=4);rpn_softmax_scores.shape rpn_fg_scores = rpn_softmax_scores[:, :, :, :, 1].contiguous();rpn_fg_scores.shape rpn_fg_scores = rpn_fg_scores.view(n, -1);rpn_fg_scores.shape rpn_scores = rpn_scores.view(n, -1, 2);rpn_scores.shape rpn_locs[0].cpu().data.numpy();rpn_locs[0].shape rpn_fg_scores[0].cpu().data.numpy();rpn_fg_scores.shape def loc2bbox(src_bbox, loc): import numpy as xp """Decode bounding boxes from bounding box offsets and scales. Given bounding box offsets and scales computed by :meth:`bbox2loc`, this function decodes the representation to coordinates in 2D image coordinates. Given scales and offsets :math:`t_y, t_x, t_h, t_w` and a bounding box whose center is :math:`(y, x) = p_y, p_x` and size :math:`p_h, p_w`, the decoded bounding box's center :math:`\\hat{g}_y`, :math:`\\hat{g}_x` and size :math:`\\hat{g}_h`, :math:`\\hat{g}_w` are calculated by the following formulas. * :math:`\\hat{g}_y = p_h t_y + p_y` * :math:`\\hat{g}_x = p_w t_x + p_x` * :math:`\\hat{g}_h = p_h \\exp(t_h)` * :math:`\\hat{g}_w = p_w \\exp(t_w)` The decoding formulas are used in works such as R-CNN [#]_. The output is same type as the type of the inputs. .. [#] , , , . \ Rich feature hierarchies for accurate object detection and semantic \ segmentation. CVPR 2014. Args: src_bbox (array): A coordinates of bounding boxes. Its shape is :math:`(R, 4)`. These coordinates are :math:`p_{ymin}, p_{xmin}, p_{ymax}, p_{xmax}`. 
loc (array): An array with offsets and scales. The shapes of :obj:`src_bbox` and :obj:`loc` should be same. This contains values :math:`t_y, t_x, t_h, t_w`. Returns: array: Decoded bounding box coordinates. Its shape is :math:`(R, 4)`. \ The second axis contains four values \ :math:`\\hat{g}_{ymin}, \\hat{g}_{xmin}, \\hat{g}_{ymax}, \\hat{g}_{xmax}`. """ if src_bbox.shape[0] == 0: return xp.zeros((0, 4), dtype=loc.dtype) src_bbox = src_bbox.astype(src_bbox.dtype, copy=False) # (22500, 4) src_height = src_bbox[:, 2] - src_bbox[:, 0] # (22500,) src_width = src_bbox[:, 3] - src_bbox[:, 1] # (22500,) src_ctr_y = src_bbox[:, 0] + 0.5 * src_height # (22500,) src_ctr_x = src_bbox[:, 1] + 0.5 * src_width # (22500,) dy = loc[:, 0::4] # (22500, 1) dx = loc[:, 1::4] # (22500, 1) dh = loc[:, 2::4] # (22500, 1) dw = loc[:, 3::4] # (22500, 1) ctr_y = dy * src_height[:, xp.newaxis] + src_ctr_y[:, xp.newaxis] # (22500, 1) ctr_x = dx * src_width[:, xp.newaxis] + src_ctr_x[:, xp.newaxis] # (22500, 1) h = xp.exp(dh) * src_height[:, xp.newaxis] # (22500, 1) w = xp.exp(dw) * src_width[:, xp.newaxis] # (22500, 1) dst_bbox = xp.zeros(loc.shape, dtype=loc.dtype) dst_bbox[:, 0::4] = ctr_y - 0.5 * h dst_bbox[:, 1::4] = ctr_x - 0.5 * w dst_bbox[:, 2::4] = ctr_y + 0.5 * h dst_bbox[:, 3::4] = ctr_x + 0.5 * w return dst_bbox # (22500, 4) anchor.shape type(anchor) src_bbox = anchor src_height = src_bbox[:, 2] - src_bbox[:, 0] src_width = src_bbox[:, 3] - src_bbox[:, 1] src_ctr_y = src_bbox[:, 0] + 0.5 * src_height src_ctr_x = src_bbox[:, 1] + 0.5 * src_width print(src_height.shape, src_width.shape, src_ctr_y.shape, src_ctr_x.shape) rpn_locst = rpn_locst[0];rpn_locst.shape loc = rpn_locst dy = loc[:, 0::4] dx = loc[:, 1::4] dh = loc[:, 2::4] dw = loc[:, 3::4] print(dy.shape, dx.shape,dh.shape,dw.shape) ctr_y = dy * src_height[:, np.newaxis] + src_ctr_y[:, np.newaxis] ctr_x = dx * src_width[:, np.newaxis] + src_ctr_x[:, np.newaxis] print(ctr_y.shape, ctr_x.shape) h = np.exp(dh) * src_height[:, np.newaxis] w = np.exp(dw) * src_width[:, np.newaxis] print(h.shape, w.shape) roi = loc2bbox(anchor, rpn_locst);roi.shape class ProposalCreator: # unNOTE: I'll make it undifferential # unTODO: make sure it's ok # It's ok """Proposal regions are generated by calling this object. The :meth:`__call__` of this object outputs object detection proposals by applying estimated bounding box offsets to a set of anchors. This class takes parameters to control number of bounding boxes to pass to NMS and keep after NMS. If the paramters are negative, it uses all the bounding boxes supplied or keep all the bounding boxes returned by NMS. This class is used for Region Proposal Networks introduced in Faster R-CNN [#]_. .. [#] , , , . \ Faster R-CNN: Towards Real-Time Object Detection with \ Region Proposal Networks. NIPS 2015. Args: nms_thresh (float): Threshold value used when calling NMS. n_train_pre_nms (int): Number of top scored bounding boxes to keep before passing to NMS in train mode. n_train_post_nms (int): Number of top scored bounding boxes to keep after passing to NMS in train mode. n_test_pre_nms (int): Number of top scored bounding boxes to keep before passing to NMS in test mode. n_test_post_nms (int): Number of top scored bounding boxes to keep after passing to NMS in test mode. force_cpu_nms (bool): If this is :obj:`True`, always use NMS in CPU mode. If :obj:`False`, the NMS mode is selected based on the type of inputs. min_size (int): A paramter to determine the threshold on discarding bounding boxes based on their sizes. 
""" def __init__(self, parent_model, nms_thresh=0.7, n_train_pre_nms=12000, n_train_post_nms=2000, n_test_pre_nms=6000, n_test_post_nms=300, min_size=16 ): self.parent_model = parent_model self.nms_thresh = nms_thresh self.n_train_pre_nms = n_train_pre_nms self.n_train_post_nms = n_train_post_nms self.n_test_pre_nms = n_test_pre_nms self.n_test_post_nms = n_test_post_nms self.min_size = min_size def __call__(self, loc, score, anchor, img_size, scale=1.): """input should be ndarray Propose RoIs. Inputs :obj:`loc, score, anchor` refer to the same anchor when indexed by the same index. On notations, :math:`R` is the total number of anchors. This is equal to product of the height and the width of an image and the number of anchor bases per pixel. Type of the output is same as the inputs. Args: loc (array): Predicted offsets and scaling to anchors. Its shape is :math:`(R, 4)`. score (array): Predicted foreground probability for anchors. Its shape is :math:`(R,)`. anchor (array): Coordinates of anchors. Its shape is :math:`(R, 4)`. img_size (tuple of ints): A tuple :obj:`height, width`, which contains image size after scaling. scale (float): The scaling factor used to scale an image after reading it from a file. Returns: array: An array of coordinates of proposal boxes. Its shape is :math:`(S, 4)`. :math:`S` is less than :obj:`self.n_test_post_nms` in test time and less than :obj:`self.n_train_post_nms` in train time. :math:`S` depends on the size of the predicted bounding boxes and the number of bounding boxes discarded by NMS. """ # NOTE: when test, remember # faster_rcnn.eval() # to set self.traing = False if self.parent_model.training: n_pre_nms = self.n_train_pre_nms n_post_nms = self.n_train_post_nms else: n_pre_nms = self.n_test_pre_nms n_post_nms = self.n_test_post_nms # Convert anchors into proposal via bbox transformations. # roi = loc2bbox(anchor, loc) roi = loc2bbox(anchor, loc) # Clip predicted boxes to image. roi[:, slice(0, 4, 2)] = np.clip( roi[:, slice(0, 4, 2)], 0, img_size[0]) roi[:, slice(1, 4, 2)] = np.clip( roi[:, slice(1, 4, 2)], 0, img_size[1]) # Remove predicted boxes with either height or width < threshold. min_size = self.min_size * scale hs = roi[:, 2] - roi[:, 0] ws = roi[:, 3] - roi[:, 1] keep = np.where((hs >= min_size) & (ws >= min_size))[0] roi = roi[keep, :] score = score[keep] # Sort all (proposal, score) pairs by score from highest to lowest. # Take top pre_nms_topN (e.g. 6000). order = score.ravel().argsort()[::-1] if n_pre_nms > 0: order = order[:n_pre_nms] roi = roi[order, :] # Apply nms (e.g. threshold = 0.7). # Take after_nms_topN (e.g. 300). # unNOTE: somthing is wrong here! # TODO: remove cuda.to_gpu keep = non_maximum_suppression( cp.ascontiguousarray(cp.asarray(roi)), thresh=self.nms_thresh) if n_post_nms > 0: keep = keep[:n_post_nms] roi = roi[keep] return roi # + class RegionProposalNetwork(nn.Module): """Region Proposal Network introduced in Faster R-CNN. This is Region Proposal Network introduced in Faster R-CNN [#]_. This takes features extracted from images and propose class agnostic bounding boxes around "objects". .. [#] , , , . \ Faster R-CNN: Towards Real-Time Object Detection with \ Region Proposal Networks. NIPS 2015. Args: in_channels (int): The channel size of input. mid_channels (int): The channel size of the intermediate tensor. ratios (list of floats): This is ratios of width to height of the anchors. anchor_scales (list of numbers): This is areas of anchors. 
Those areas will be the product of the square of an element in :obj:`anchor_scales` and the original area of the reference window. feat_stride (int): Stride size after extracting features from an image. initialW (callable): Initial weight value. If :obj:`None` then this function uses Gaussian distribution scaled by 0.1 to initialize weight. May also be a callable that takes an array and edits its values. proposal_creator_params (dict): Key valued paramters for :class:`model.utils.creator_tools.ProposalCreator`. .. seealso:: :class:`~model.utils.creator_tools.ProposalCreator` """ def __init__( self, in_channels=512, mid_channels=512, ratios=[0.5, 1, 2], anchor_scales=[8, 16, 32], feat_stride=16, proposal_creator_params=dict(), ): super(RegionProposalNetwork, self).__init__() self.anchor_base = generate_anchor_base( anchor_scales=anchor_scales, ratios=ratios) self.feat_stride = feat_stride self.proposal_layer = ProposalCreator(self, **proposal_creator_params) n_anchor = self.anchor_base.shape[0] self.conv1 = nn.Conv2d(in_channels, mid_channels, 3, 1, 1) self.score = nn.Conv2d(mid_channels, n_anchor * 2, 1, 1, 0) self.loc = nn.Conv2d(mid_channels, n_anchor * 4, 1, 1, 0) normal_init(self.conv1, 0, 0.01) normal_init(self.score, 0, 0.01) normal_init(self.loc, 0, 0.01) def forward(self, x, img_size, scale=1.): """Forward Region Proposal Network. Here are notations. * :math:`N` is batch size. * :math:`C` channel size of the input. * :math:`H` and :math:`W` are height and witdh of the input feature. * :math:`A` is number of anchors assigned to each pixel. Args: x (~torch.autograd.Variable): The Features extracted from images. Its shape is :math:`(N, C, H, W)`. img_size (tuple of ints): A tuple :obj:`height, width`, which contains image size after scaling. scale (float): The amount of scaling done to the input images after reading them from files. Returns: (~torch.autograd.Variable, ~torch.autograd.Variable, array, array, array): This is a tuple of five following values. * **rpn_locs**: Predicted bounding box offsets and scales for \ anchors. Its shape is :math:`(N, H W A, 4)`. * **rpn_scores**: Predicted foreground scores for \ anchors. Its shape is :math:`(N, H W A, 2)`. * **rois**: A bounding box array containing coordinates of \ proposal boxes. This is a concatenation of bounding box \ arrays from multiple images in the batch. \ Its shape is :math:`(R', 4)`. Given :math:`R_i` predicted \ bounding boxes from the :math:`i` th image, \ :math:`R' = \\sum _{i=1} ^ N R_i`. * **roi_indices**: An array containing indices of images to \ which RoIs correspond to. Its shape is :math:`(R',)`. * **anchor**: Coordinates of enumerated shifted anchors. \ Its shape is :math:`(H W A, 4)`. 
""" n, _, hh, ww = x.shape # bactch,channel, h, w anchor = _enumerate_shifted_anchor( np.array(self.anchor_base), self.feat_stride, hh, ww) # (22500, 4) n_anchor = anchor.shape[0] // (hh * ww) # 9 h = F.relu(self.conv1(x)) # torch.Size([1, 512, 50, 50]) rpn_locs = self.loc(h) # torch.Size([1, 36, 50, 50]) # UNNOTE: check whether need contiguous # A: Yes rpn_locs = rpn_locs.permute(0, 2, 3, 1).contiguous().view(n, -1, 4) # torch.Size([1, 22500, 4]) rpn_scores = self.score(h) # torch.Size([1, 18, 50, 50]) rpn_scores = rpn_scores.permute(0, 2, 3, 1).contiguous() # torch.Size([1, 50, 50, 18]) rpn_softmax_scores = F.softmax(rpn_scores.view(n, hh, ww, n_anchor, 2), dim=4) # torch.Size([1, 50, 50, 9, 2]) rpn_fg_scores = rpn_softmax_scores[:, :, :, :, 1].contiguous() # torch.Size([1, 50, 50, 9]) rpn_fg_scores = rpn_fg_scores.view(n, -1) # torch.Size([1, 22500]) rpn_scores = rpn_scores.view(n, -1, 2) # torch.Size([1, 22500, 2]) rois = list() roi_indices = list() for i in range(n): # n batches roi = self.proposal_layer( rpn_locs[i].cpu().data.numpy(), # torch.Size([1,22500, 4]) rpn_fg_scores[i].cpu().data.numpy(), # torch.Size([1, 22500]) anchor, img_size, scale=scale) # (22500, 4) batch_index = i * np.ones((len(roi),), dtype=np.int32) rois.append(roi) roi_indices.append(batch_index) rois = np.concatenate(rois, axis=0) # (22500*n, 4) roi_indices = np.concatenate(roi_indices, axis=0) # (22500*n, 4) return rpn_locs, rpn_scores, rois, roi_indices, anchor def _enumerate_shifted_anchor(anchor_base, feat_stride, height, width): # Enumerate all shifted anchors: # # add A anchors (1, A, 4) to # cell K shifts (K, 1, 4) to get # shift anchors (K, A, 4) # reshape to (K*A, 4) shifted anchors # return (K*A, 4) # # !TODO: add support for torch.CudaTensor # xp = cuda.get_array_module(anchor_base) # it seems that it can't be boosed using GPU import numpy as xp shift_y = xp.arange(0, height * feat_stride, feat_stride) shift_x = xp.arange(0, width * feat_stride, feat_stride) shift_x, shift_y = xp.meshgrid(shift_x, shift_y) shift = xp.stack((shift_y.ravel(), shift_x.ravel(), shift_y.ravel(), shift_x.ravel()), axis=1) A = anchor_base.shape[0] K = shift.shape[0] anchor = anchor_base.reshape((1, A, 4)) + \ shift.reshape((1, K, 4)).transpose((1, 0, 2)) anchor = anchor.reshape((K * A, 4)).astype(np.float32) return anchor def normal_init(m, mean, stddev, truncated=False): """ weight initalizer: truncated normal and random normal. """ # x is a parameter if truncated: m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean) # not a perfect approximation else: m.weight.data.normal_(mean, stddev) m.bias.data.zero_() # - rpn = RegionProposalNetwork( 512, 512, ratios=[0.5, 1, 2], anchor_scales=[8, 16, 32], feat_stride=16, );rpn # + # 18 = 2*9 # 36 = 4*9 # + class VGG16RoIHead(nn.Module): """Faster R-CNN Head for VGG-16 based implementation. This class is used as a head for Faster R-CNN. This outputs class-wise localizations and classification based on feature maps in the given RoIs. Args: n_class (int): The number of classes possibly including the background. roi_size (int): Height and width of the feature maps after RoI-pooling. spatial_scale (float): Scale of the roi is resized. 
classifier (nn.Module): Two layer Linear ported from vgg16 """ def __init__(self, n_class, roi_size, spatial_scale, classifier): # n_class includes the background super(VGG16RoIHead, self).__init__() self.classifier = classifier self.cls_loc = nn.Linear(4096, n_class * 4) self.score = nn.Linear(4096, n_class) normal_init(self.cls_loc, 0, 0.001) normal_init(self.score, 0, 0.01) self.n_class = n_class self.roi_size = roi_size self.spatial_scale = spatial_scale self.roi = RoIPooling2D(self.roi_size, self.roi_size, self.spatial_scale) def forward(self, x, rois, roi_indices): """Forward the chain. We assume that there are :math:`N` batches. Args: x (Variable): 4D image variable. rois (Tensor): A bounding box array containing coordinates of proposal boxes. This is a concatenation of bounding box arrays from multiple images in the batch. Its shape is :math:`(R', 4)`. Given :math:`R_i` proposed RoIs from the :math:`i` th image, :math:`R' = \\sum _{i=1} ^ N R_i`. roi_indices (Tensor): An array containing indices of images to which bounding boxes correspond to. Its shape is :math:`(R',)`. """ # in case roi_indices is ndarray roi_indices = at.totensor(roi_indices).float() rois = at.totensor(rois).float() indices_and_rois = t.cat([roi_indices[:, None], rois], dim=1) # NOTE: important: yx->xy xy_indices_and_rois = indices_and_rois[:, [0, 2, 1, 4, 3]] indices_and_rois = xy_indices_and_rois.contiguous() pool = self.roi(x, indices_and_rois) pool = pool.view(pool.size(0), -1) fc7 = self.classifier(pool) roi_cls_locs = self.cls_loc(fc7) roi_scores = self.score(fc7) return roi_cls_locs, roi_scores def normal_init(m, mean, stddev, truncated=False): """ weight initalizer: truncated normal and random normal. """ # x is a parameter if truncated: m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean) # not a perfect approximation else: m.weight.data.normal_(mean, stddev) m.bias.data.zero_() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # UNet input size constrains # # MONAI provides an enhanced version of UNet (``monai.networks.nets.UNet ``), which not only supports residual units, but also can use more hyperparameters (like ``strides``, ``kernel_size`` and ``up_kernel_size``) than ``monai.networks.nets.BasicUNet``. However, ``UNet`` has some constrains for both network hyperparameters and sizes of input. # # The constrains of hyperparameters can be found in the docstring of the network, and this tutorial is focused on how to determine a reasonable input size. # # The last section: **Constrains of UNet** shows the conclusions. # !python -c "import monai" || pip install -q monai-weekly # + # Copyright 2020 MONAI Consortium # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from monai.networks.nets import UNet import monai import math import torch import torch.nn as nn # - # ## Check UNet structure # The following comes from: [Left-Ventricle Quantification Using Residual U-Net](https://link.springer.com/chapter/10.1007/978-3-030-12029-0_40). # # ![image](../figures/UNet_structure.png) # # First of all, let's build an UNet instance to check its structure. `num_res_units` is set to `0` since it has no impact on the input size. # + network_0 = UNet( spatial_dims=3, in_channels=3, out_channels=3, channels=(8, 16, 32), strides=(2, 3), kernel_size=3, up_kernel_size=3, num_res_units=0, ) print(len(network_0.model)) network_0 # - # As we can see from the printed structure, the network is consisted with three parts: # # 1. The first down layer. # 2. The intermediate skip connection based block. # 3. The final up layer. # # If we want to build a deeper UNet, only the intermediate block will be expanded. # # During the network, there are only two different modules: # 1. `monai.networks.blocks.convolutions.Convolution` # 2. `monai.networks.layers.simplelayers.SkipConnection` # # All these modules are consisted with the following four layers: # 1. Activation layers (`PReLU`). # 2. Dropout layers (`Dropout`). # 3. Normalization layers (`InstanceNorm3d`). # 4. Convolution layers (`Conv` and `ConvTranspose`). # # As for the layers, convolution layers may change the size of the input, and normalization layers may have extra constrains of the input size. # As for the modules, the `SkipConnection` module also has some constrains. # # Consequently, This tutorial shows the constrains of convolution layers, normalization layers and the `SkipConnection` module respectively. # ## Constrains of convolution layers # ### Conv layer # The formula in Pytorch's official docs explains how to calculate the output size for [Conv3d](https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d), and [ConvTranspose3d](https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html#torch.nn.ConvTranspose3d) (the formulas for `1d` and `2d` are similar). # As the docs shown, the output size depends on the input size and: # - `stride` # - `kernel_size` # - `dilation` # - `padding` # # In `monai.networks.nets.UNet`, users can only input `strides` and `kernel_size`, and the other two parameters are decided by [monai.networks.blocks.convolutions.Convolution](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/blocks/convolutions.py) (please click the link for details). # # Therefore, here `dilation = 1` and `padding = (kernel_size - 1) / 2` (`kernel_size` is required to be odd, thus here `padding` is an integer). # # The output size of `Conv` can be calculated via the following simplified formula: # `math.floor((input_size + stride - 1) / stride)`. The corresponding python function is as follow, and we only need to ensure **`math.floor((input_size + stride - 1) / stride) >= 1`**, which means **`input_size >= 1`**. 
def get_conv_output_size(input_tensor, stride): output_size = [] input_size = list(input_tensor.shape)[2:] for size in input_size: out = math.floor((size + stride - 1) / stride) output_size.append(out) print(output_size) # Let's check if the function is correct: stride_value = 3 example = torch.rand([1, 3, 1, 15, 29]) get_conv_output_size(example, stride_value) output = nn.Conv3d(in_channels=3, out_channels=1, stride=stride_value, kernel_size=3, padding=1)(example) print(output.shape[2:]) # ### ConvTranspose layer # Similarly, due to the default settings in [monai.networks.blocks.convolutions.Convolution](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/blocks/convolutions.py), `output_padding = stride - 1`. The output size of `ConvTranspose` can be simplified as: # `input_size * stride`. # Therefore, before entering the `ConvTranspose` layer, we only need to ensure **`input_size >= 1`**. # Let's check if the formula is correct: stride_value = 3 print([i * stride_value for i in example.shape[2:]]) output = nn.ConvTranspose3d( in_channels=3, out_channels=1, stride=stride_value, kernel_size=3, padding=1, output_padding=stride_value - 1, )(example) print(output.shape[2:]) # ## Constrains of normalization layers monai.networks.layers.factories.Norm.factories.keys() # In MONAI's norm factories, There are six normalization layers can be used. The official docs can be found in [here](https://pytorch.org/docs/stable/nn.html#normalization-layers), and their constrains is shown in [torch.nn.functional](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html). # # However, the following normalization layers will not be discussed: # 1. SyncBatchNorm, since it only supports `DistributedDataParallel`, please check the official docs for more details. # 2. LayerNorm, since its parameter `normalized_shape` should equal to `[num_channels, *spatial_dims]`, and we cannot define a fixed value for it for all normalization layers in the network. # 3. GroupNorm, since its parameter `num_channels` should equal to the number of channels of the input, and we cannot define a fixed value for it for all normalization layers in the network. # # Therefore, let's check the other three normalization layers: batch normalization, instance normalization and local response normalization. # ### batch normalization # # The input size should meet: `torch.nn.functional._verify_batch_size`, and it requires the product of all dimensions except the channel dimension is larger than 1. For example: # + batch = nn.BatchNorm3d(num_features=3) for size in [[1, 3, 2, 1, 1], [2, 3, 1, 1, 1]]: output = batch(torch.randn(size)) # uncomment the following line you can see a ValueError # batch(torch.randn([1, 3, 1, 1, 1])) # - # In reality, when batch size is 1, it's not practical to use batch normalizaton. Therefore, the constrain can be converted to **the batch size should be larger than 1**. # ### instance normalization # # The input size should meet: `torch.nn.functional._verify_spatial_size`, and it requires the product of all spatial dimensions is larger than 1. Therefore, **at least one spatial dimension should have a size larger than 1**. For example: # + instance = nn.InstanceNorm3d(num_features=3) for size in [[1, 3, 2, 1, 1], [1, 3, 1, 2, 1]]: output = instance(torch.randn(size)) # uncomment the following line you can see a ValueError # instance(torch.randn([2, 3, 1, 1, 1])) # - # ### local response normalization # # **No constrain**. 
For example: nn.LocalResponseNorm(size=1)(torch.randn(1, 1, 1, 1, 1)) # ## Constraints of SkipConnection # In this section, we will check if the module [SkipConnection](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/layers/simplelayers.py) itself imposes more constraints on the input size. # # In `UNet`, the `SkipConnection` is called via: # # `nn.Sequential(down, SkipConnection(subblock), up)` # # and the following line will be called (in the forward function): # # `torch.cat([x, self.submodule(x)], dim=1)`. # # It requires that, for an input tensor, the output of `self.submodule` does not change the spatial sizes. # ### When `len(channels) = 2` # If `len(channels) = 2`, there will be only one `SkipConnection` module in the network, and the module is built from a single down layer with `stride = 1`. From the formulas derived in the previous section, we know that this layer will not change the size, thus we only need to meet the constraints from the normalization layer inside: # # 1. When using batch normalization, the batch size should be larger than 1. # # 2. When using instance normalization, the size of at least one spatial dimension should be larger than 1. # ### When `len(channels) > 2` # If `len(channels) > 2`, more `SkipConnection` modules will be built, and each of them consists of one down layer and one up layer. Consequently, **the output of the up layer should have the same spatial sizes as the input had before entering the down layer**. The corresponding stride values for these modules come from `strides[1:]`, hence for each stride value `s` from `strides[1:]` and each spatial size value `v` of the input, the constraint of the corresponding `SkipConnection` module is: # # ``` # math.floor((v + s - 1) / s) = v / s # # ``` # # Since the left-hand side of the equation is a positive integer, `v / s` must also be an integer, i.e. `v` must be divisible by `s`. If we assume `v = k * s` where `k >= 1`, we can get: # ``` # math.floor(k + (s - 1) / s) = k # k + math.floor((s - 1) / s) = k # math.floor((s - 1) / s) = 0 # ``` # The above equations always hold, thus for a single `SkipConnection` module, all spatial sizes of the input must be divisible by `s`. # For the whole nested `SkipConnection` block, assume `[H, W, D]` is the input spatial size; then for each `v in [H, W, D]`: # # **`np.remainder(v, np.prod(strides[1:])) == 0`** # # In addition, there may be more constraints from the normalization layers: # # 1. When using batch normalization, the batch size of the input should be larger than 1. # # 2. When using instance normalization, the size of at least one spatial dimension should be larger than 1. Therefore, **assuming `d = max(H, W, D)`, `d` should meet: `np.remainder(d, 2 * np.prod(strides[1:])) == 0`**. # ## Constraints of UNet # As discussed in the first section, UNet consists of 1) a down layer, 2) one or more skip connection module(s) and 3) an up layer. Based on the analyses of each single layer/module, the constraints of the network can be summarized as follows. # ### When `len(channels) = 2` # If `len(channels) == 2`, `strides` must be a single value, thus assume `s = strides`, and the input size is `[B, C, H, W, D]`. The constraints are: # # 1. If using batch normalization: **`B > 1`.** # 2. If using local response normalization: no constraint. # 3. If using instance normalization, assume `d = max(H, W, D)`, then `math.floor((d + s - 1) / s) >= 2`, which means **`d >= s + 1`.** # # The following are the corresponding examples: # + # example 1: len(channels) = 2, batch norm, batch size > 1.
network = UNet( dimensions=3, in_channels=1, out_channels=3, channels=(8, 16), strides=(3,), kernel_size=3, up_kernel_size=3, num_res_units=0, norm="batch", ) example = torch.rand([2, 1, 1, 1, 1]) print(network(example).shape) # # uncomment the following two lines will see the error # example = torch.rand([1, 1, 1, 1, 1]) # print(network(example).shape) # - # example 2: len(channels) = 2, localresponse. network = UNet( dimensions=3, in_channels=1, out_channels=3, channels=(8, 16), strides=(3,), kernel_size=1, up_kernel_size=1, num_res_units=1, norm=("localresponse", {"size": 1}), ) example = torch.rand([1, 1, 1, 1, 1]) print(network(example).shape) # + # example 3: len(channels) = 2, instance norm. network = UNet( dimensions=3, in_channels=1, out_channels=3, channels=(8, 16), strides=(3,), kernel_size=3, up_kernel_size=5, num_res_units=2, norm="instance", ) example = torch.rand([1, 1, 4, 1, 1]) print(network(example).shape) # # uncomment the following two lines will see the error # example = torch.rand([1, 1, 1, 1, 3]) # print(network(example).shape) # - # ### When `len(channels) > 2` # Assume the input size is `[B, C, H, W, D]`, and `s = strides`. The common constrains are: # # ``` # For v in [H, W, D]: # size = math.floor((v + s[0] - 1) / s[0]) # size should meet: np.remainder(size, np.prod(s[1:])) == 0 # ``` # In addition, # 1. If using batch normalization: **`B > 1`.** # 2. If using local response normalization: no more constrain. # 3. If using instance normalization, then: # ``` # d = max(H, W, D) # max_size = math.floor((d + s[0] - 1) / s[0]) # max_size should meet: np.remainder(max_size, 2 * np.prod(s[1:])) == 0 # ``` # # The following are the corresponding examples: # + # example 1: strides=(3, 5), batch norm, batch size > 1. # thus math.floor((v + 2) / 3) should be 5 * k. If k = 1, v should be in [13, 14, 15]. network = UNet( dimensions=3, in_channels=1, out_channels=3, channels=(8, 16, 32), strides=(3, 5), kernel_size=3, up_kernel_size=3, num_res_units=0, norm="batch", ) example = torch.rand([2, 1, 13, 14, 15]) print(network(example).shape) # # uncomment the following two lines will see the error # example = torch.rand([1, 1, 12, 14, 15]) # print(network(example).shape) # + # example 2: strides=(3, 2, 4), localresponse. # thus math.floor((v + 2) / 3) should be 8 * k. If k = 1, v should be in [22, 23, 24]. network = UNet( spatial_dims=3, in_channels=1, out_channels=3, channels=(8, 16, 32, 16), strides=(3, 2, 4), kernel_size=1, up_kernel_size=3, num_res_units=10, norm=("localresponse", {"size": 1}), ) example = torch.rand([1, 1, 22, 23, 24]) print(network(example).shape) # # uncomment the following two lines will see the error # example = torch.rand([1, 1, 25, 23, 24]) # print(network(example).shape) # + # example 3: strides=(1, 2, 2, 3), instance norm. # thus v should be 12 * k. If k = 1, v should be 12. In addition, the maximum size should be 24 * k. 
network = UNet( spatial_dims=3, in_channels=1, out_channels=3, channels=(8, 16, 32, 32, 16), strides=(1, 2, 2, 3), kernel_size=5, up_kernel_size=3, num_res_units=5, norm="instance", ) example = torch.rand([1, 1, 24, 12, 12]) print(network(example).shape) # # uncomment the following two lines will see the error # example = torch.rand([1, 1, 12, 12, 12]) # print(network(example).shape) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 import os os.chdir("..") # + from deepsvg.svglib.svg import SVG from deepsvg import utils from deepsvg.difflib.tensor import SVGTensor from deepsvg.svglib.utils import to_gif from deepsvg.svglib.geom import Bbox from deepsvg.svgtensor_dataset import SVGTensorDataset, load_dataset, SVGFinetuneDataset from deepsvg.utils.utils import batchify, linear import torch import numpy as np from torch.utils.data import DataLoader import torch.nn as nn # - # # DeepSVG animation between user-drawn images device = torch.device("cuda:0"if torch.cuda.is_available() else "cpu") # Load the pretrained model and dataset # + pretrained_path = "./pretrained/hierarchical_ordered.pth.tar" from configs.deepsvg.hierarchical_ordered import Config cfg = Config() cfg.model_cfg.dropout = 0. # for faster convergence model = cfg.make_model().to(device) utils.load_model(pretrained_path, model) model.eval(); # - dataset = load_dataset(cfg) def load_svg(filename): svg = SVG.load_svg(filename) svg = dataset.simplify(svg) svg = dataset.preprocess(svg, mean=True) return svg # + def easein_easeout(t): return t*t / (2. * (t*t - t) + 1.); def interpolate(z1, z2, n=25, filename=None, ease=True, do_display=True): alphas = torch.linspace(0., 1., n) if ease: alphas = easein_easeout(alphas) z_list = [(1-a) * z1 + a * z2 for a in alphas] img_list = [decode(z, do_display=False, return_png=True) for z in z_list] to_gif(img_list + img_list[::-1], file_path=filename, frame_duration=1/12) # + def encode(data): model_args = batchify((data[key] for key in cfg.model_args), device) with torch.no_grad(): z = model(*model_args, encode_mode=True) return z def encode_icon(idx): data = dataset.get(id=idx, random_aug=False) return encode(data) def encode_svg(svg): data = dataset.get(svg=svg) return encode(data) def decode(z, do_display=True, return_svg=False, return_png=False): commands_y, args_y = model.greedy_sample(z=z) tensor_pred = SVGTensor.from_cmd_args(commands_y[0].cpu(), args_y[0].cpu()) svg_path_sample = SVG.from_tensor(tensor_pred.data, viewbox=Bbox(256), allow_empty=True).normalize().split_paths().set_color("random") if return_svg: return svg_path_sample return svg_path_sample.draw(do_display=do_display, return_png=return_png) # - def interpolate_icons(idx1=None, idx2=None, n=25, *args, **kwargs): z1, z2 = encode_icon(idx1), encode_icon(idx2) interpolate(z1, z2, n=n, *args, **kwargs) # ## Loading user-drawn frames lego1 = load_svg("docs/frames/lego_1.svg") lego2 = load_svg("docs/frames/lego_2.svg") # `draw_colored` lets you see the individual paths in an SVG icon. lego1.draw_colored(); lego2.draw_colored() bird1 = load_svg("docs/frames/bird_1.svg") bird2 = load_svg("docs/frames/bird_2.svg"); bird2.permute([1, 0, 2]); # When path orders don't match between the two frames, just manually change the order using the `permute` method. 
For best results, keep in mind that the the model was trained using paths ordered with the lexicographical order (top to bottom, left to right). # # Colors are in this order: # - deepskyblue # - lime # - deeppink # - gold # - coral # - darkviolet # - royalblue # - darkmagenta bird1.draw_colored(); bird2.draw_colored() face1 = load_svg("docs/frames/face_1.svg"); face1.permute([1, 0, 2, 3, 4, 5]); face2 = load_svg("docs/frames/face_2.svg"); face2.permute([5, 0, 1, 2, 3, 4]); face2[0].reverse(); # Sometimes, the orientation (clockwise/counter-clockwise) of paths don't match. Fix this usng the `reverse` method. face1.draw_colored(); face2.draw_colored() football1 = load_svg("docs/frames/football_1.svg"); football1.permute([0, 1, 4, 2, 3, 5, 6, 7]); football1[3].reverse(); football1[4].reverse(); football2 = load_svg("docs/frames/football_2.svg"); football2.permute([0, 2, 3, 5, 4, 7, 6, 1]); football1.draw_colored(); football2.draw_colored() pencil1 = load_svg("docs/frames/pencil_1.svg") pencil2 = load_svg("docs/frames/pencil_2.svg"); pencil2.permute([1, 0, 2, 3, 4, 5]); pencil1.draw_colored(); pencil2.draw_colored() ship1 = load_svg("docs/frames/ship_1.svg"); ship1.permute([0, 1, 3, 2]); ship2 = load_svg("docs/frames/ship_2.svg") ship1.draw_colored(); ship2.draw_colored() # Finetune the model on those additional SVG icons for a few steps (~10-30 seconds). finetune_dataset = SVGFinetuneDataset(dataset, [lego1, lego2, football1, football2, bird1, bird2, ship1, ship2, pencil1, pencil2, face1, face2], frac=1.0, nb_augmentations=750) # + dataloader = DataLoader(finetune_dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True, num_workers=cfg.loader_num_workers, collate_fn=cfg.collate_fn) # Optimizer, lr & warmup schedulers optimizers = cfg.make_optimizers(model) scheduler_lrs = cfg.make_schedulers(optimizers, epoch_size=len(dataloader)) scheduler_warmups = cfg.make_warmup_schedulers(optimizers, scheduler_lrs) loss_fns = [l.to(device) for l in cfg.make_losses()] # - epoch = 0 for step, data in enumerate(dataloader): model.train() model_args = [data[arg].to(device) for arg in cfg.model_args] labels = data["label"].to(device) if "label" in data else None params_dict, weights_dict = cfg.get_params(step, epoch), cfg.get_weights(step, epoch) for i, (loss_fn, optimizer, scheduler_lr, scheduler_warmup, optimizer_start) in enumerate( zip(loss_fns, optimizers, scheduler_lrs, scheduler_warmups, cfg.optimizer_starts), 1): optimizer.zero_grad() output = model(*model_args, params=params_dict) loss_dict = loss_fn(output, labels, weights=weights_dict) loss_dict["loss"].backward() if cfg.grad_clip is not None: nn.utils.clip_grad_norm_(model.parameters(), cfg.grad_clip) optimizer.step() if scheduler_lr is not None: scheduler_lr.step() if scheduler_warmup is not None: scheduler_warmup.step() if step % 20 == 0: print(f"Step {step}: loss: {loss_dict['loss']}") model.eval(); # Display the interpolations! 
🚀 z_lego1, z_lego2 = encode_svg(lego1), encode_svg(lego2) interpolate(z_lego1, z_lego2) z_face1, z_face2 = encode_svg(face1), encode_svg(face2) interpolate(z_face1, z_face2) z_bird1, z_bird2 = encode_svg(bird1), encode_svg(bird2) interpolate(z_bird1, z_bird2) z_football1, z_football2 = encode_svg(football1), encode_svg(football2) interpolate(z_football1, z_football2) z_pencil1, z_pencil2 = encode_svg(pencil1), encode_svg(pencil2) interpolate(z_pencil1, z_pencil2) z_ship1, z_ship2 = encode_svg(ship1), encode_svg(ship2) interpolate(z_ship1, z_ship2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6.1 64-bit # language: python # name: python36164biteddb5945131e4a35a21d1f6554c5511d # --- from selenium import webdriver import pandas as pd driver = webdriver.Chrome('E:\Softwares\chromedriver_win32\chromedriver.exe') driver.get('http://www.gutenberg.org/ebooks/search/%3Fsort_order%3Drelease_date') books = driver.find_elements_by_class_name('booklink') len(books) print(books[0].text) print(books[-1].text) name = books[0].find_elements_by_class_name('title')[0].text author = books[0].find_elements_by_class_name('subtitle')[0].text date = books[0].find_elements_by_class_name('extra')[0].text print(name) print(author) print(date) name = books[-1].find_elements_by_class_name('title')[0].text # author = books[-1].find_elements_by_class_name('subtitle')[0].text date = books[-1].find_elements_by_class_name('extra')[0].text print(name) print(author) print(date) for book in books[:5]: try: name = book.find_elements_by_class_name('title')[0].text try: author = book.find_elements_by_class_name('subtitle')[0].text except: author = 'Not availbale' try: date = book.find_elements_by_class_name('extra')[0].text except: date = 'Not availbale' print('name:', name) print('author :', author) print('date :', date) print('_'*100) except: pass # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="YenH_9hJbFk1" # # Clasificación de tipos de ropa con Redes Neuronales Convolucionales (CNN) # + [markdown] id="FbVhjPpzn6BM" # Esta Guia entrena un modelo de red neuronal para clasificar imagenes de ropa como, tennis y camisetas. No hay problema sino entiende todos los detalles; es un repaso rapido de un programa completo de Tensorflow con los detalles explicados a medida que avanza. # # Esta Guia usa [tf.keras](https://www.tensorflow.org/guide/keras), un API de alto nivel para construir y entrenar modelos en Tensorflow. 
# + id="dzLKpmZICaWN" # Antes de nada, importamos las librerías que nos puedan hacer falta import tensorflow as tf # Para crear modelos de Aprendizaje Profundo import matplotlib.pyplot as plt # Para graficar imágenes y gráficas de evaluación import numpy as np # Nos permite trabajar con estructuras vectoriales eficientes # que son además las que emplea tensorflow import math # Operaciones matemáticas # métodos para calcular métricas y matriz de confusión from sklearn.metrics import confusion_matrix, classification_report import itertools # funciones eficientes sobre elementos iterables # + [markdown] id="yR0EdgrLCaWR" # ## El dataset de moda de MNIST # + [markdown] id="DLdCchMdCaWQ" # Fashion MNIST contiene mas de 70,000 imagenes divididas en 10 categorias de prendas. Al igual que ocurria con el dataset clásico MNIST, cada imagen tiene una resolución de 28 por 28 pixeles. # # En tanto que las imágenes son más complejas, su clasificación por una red es más compleja. Por ello, vamos a usar una Red Neuronal con Convoluciones para resolver el problema. # + id="7MqDQO0KCaWS" outputId="24b9bdb9-eec3-43dd-e4ba-b3d517aa4ef3" colab={"base_uri": "https://localhost:8080/", "height": 156} fashion_mnist = tf.keras.datasets.fashion_mnist (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() # + id="IjnLH5S2CaWx" class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + id="m--xqplztTAv" outputId="8eece20d-5405-4c53-fd01-6d439127c70d" colab={"base_uri": "https://localhost:8080/", "height": 55} n_samples_train = len(x_train) n_samples_test = len(x_test) n_samples = n_samples_train + n_samples_test print("La relación de muestras entre los dos subconjuntos es del {:.2%} en " "subconjunto de entrenamiento y del {:.2%}, de prueba".format( n_samples_train/n_samples, n_samples_test/n_samples)) # Las imágenes están en escala de grises, con una resolución de 256 valores. Es # común en problemas de clasificación realizar una normalización y escalar ese # rango de valores al intervalo [0,1], si bien no es necesario. Puede probarse el # realizar el entrenamiento sin hacer esta conversión. x_train, x_test = x_train / 255.0, x_test / 255.0 # + id="1AlhVcULtaxn" # Una pequeña función para graficar muestras aleatorias def random_sample_plot(n_samples, mnist_set): rnd_index = np.random.choice(mnist_set.shape[0], size=(n_samples,), replace=False) subset = mnist_set[rnd_index] grid_width = 5 grid_length = math.ceil(n_samples/grid_width) plt.figure(figsize=(15,10)) # specifying the overall grid size for i in range(n_samples): plt.subplot(grid_length,grid_width,i+1) # the number of images in the grid is 5*5 (25) plt.imshow(subset[i], cmap='gray') # + id="SNo3JBVBtbRB" outputId="ee8ec132-ac6b-45a7-ab8e-e21ed0d2f071" colab={"base_uri": "https://localhost:8080/", "height": 593} # Una muestra del MNIST random_sample_plot(20, x_train) # + [markdown] id="59veuiEZCaW4" # ## Un modelo de CNN sencillo # # Construir la red neuronal requiere configurar las capas del modelo y luego compilar el modelo. # + id="iq8dm_IHwS3W" x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)) x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)) # + id="9ODch-OFCaW4" model = tf.keras.models.Sequential() # El modelo secuencial vacio inicialmente model.add(tf.keras.Input(shape=(28, 28, 1))) # Podemos describir la forma de la entrada # como si de una capa se tratase. 
# Capas de convolución y pooling model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu")) model.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid')) #model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=3, padding="same", activation="relu")) #model.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid')) model.add(tf.keras.layers.Flatten()) # Esta capa "aplana" la imagen, es decir, pasa # la matriz de valores a un vector unidimensional. model.add(tf.keras.layers.Dense(128, activation='relu')) # Una capa Densa o Fully-Connected. model.add(tf.keras.layers.Dropout(0.2)) # Empleamos Dropout, una técnica que inhabilita # un % de nodos de la capa anterior de forma aleatoria # durante el entrenamiento, que ayuda a la generalización # del modelo evitando el sobre ajuste de los nodos a los # ejemplos particulares usados. model.add(tf.keras.layers.Dense(10, activation='softmax')) # Una última capa Densa, ya como salida. # Se la ha dotado de 10 nodos, tantos como # dígitos diferentes consideramos para su # clasificación (0 al 9) # + id="SW569GhgxiNa" outputId="62051578-f31f-4497-a43b-24562fadd33a" colab={"base_uri": "https://localhost:8080/", "height": 364} model.summary() # Con este método podemos comprobar que nuestra red tiene las # capas añadidas y ver alguna información adicional # + [markdown] id="BPZ68wASog_I" # ### Entrenamiento del modelo # # Elegimos un optimizador y una funcion de perdida para el entrenamiento del modelo. # + id="Lhan11blCaW7" # El modelo se inicializa para su entrenamiento con el método compile # El loss SparseCategoricalCrossEntropy permite introducir las etiquetas sin # necesidad de codificarlas en One-Hot model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # + id="xvwvpA64CaW_" outputId="54e1142f-067d-4b02-8551-44dd336f458a" colab={"base_uri": "https://localhost:8080/", "height": 381} # Con este método del modelo podemos entrenarlo. El parámetro "validation_data" # es opcional y simplemente evalúa el modelo con los datos proporcionados en cada # época, pero no serán empleados para el entrenamiento (en nuestro caso, le hemos # proporcionado el subconjunto de prueba) model.fit(x_train, y_train, epochs=10) # + [markdown] id="utvvaUgU7jXk" # ### Evaluando el modelo # # El modo más básico de evaluar un modelo de clasificación es a partir de la métrica de precisión o accuracy. # + id="HT7uV0PGjL8i" outputId="b485d742-17ec-47ce-8903-b213434d79a2" colab={"base_uri": "https://localhost:8080/", "height": 52} # Cálculo de la precisión con el método evaluate del modelo, empleando el subconjunto de prueba model.evaluate(x_test, y_test, verbose=2) # + [markdown] id="T4JfEh7kvx6m" # Se pueden emplear otro tipo de métricas. 
Podemos obtener algunas con la ayuda de la librería para Inteligencia Artificial Scikit Learn # + id="7TAsi-O5vYUC" def otras_metricas(logits, ground_truth, classes): logits = np.argmax(logits, axis=1) print(classification_report(ground_truth, logits, target_names=classes)) # + id="zkNSpAx1wVNh" outputId="c018d173-4fe5-47d5-92f3-c291822f5db8" colab={"base_uri": "https://localhost:8080/", "height": 312} # Llamamos al modelo para que calcule las clasificaciones con el subconjunto de # prueba predicts = model.predict(x_test) # Hallamos las métricas otras_metricas(predicts, y_test, classes=class_names) # + [markdown] id="_BZAg9TGvYJ6" # No obstante, si es interesante saber que la mayor parte de estas métricas se calculan a partir de datos que quedan reflejados en lo que es denominado una **matriz de confusión**, la cual ilustra la cantidad de aciertos y fallos cometidos por el modelo, de forma desglosada. # # Vamos a mostrar esta matriz para el modelo entrenado. # + id="zZE7jKH_AA87" # Función que convierte las salidas del modelo en una clase, para que pueda ser # asimilado por la función de cálculo de la matriz de confusión de SciKit Learn def confusion_matrix_v2(logits, ground_truth): logits = np.argmax(logits, axis=1) cm = confusion_matrix(ground_truth, logits) return cm # Una función para hacer más atractiva e intuitiva la matriz de confusión def plot_pretty_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') plt.rcParams["figure.figsize"] = (len(classes),len(classes)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('Etiquetas del dataset') plt.xlabel('Predicciones/clasificación del modelo') # + id="yZDr8fZKBJkM" outputId="e8cd8cb5-4af0-4025-e737-8649cba29ab5" colab={"base_uri": "https://localhost:8080/", "height": 747} # Calculamos la matriz de confusión de nuestro modelo, comparando los resultados # con las etiquetas del dataset correspondiente a cada muestra cm = confusion_matrix_v2(predicts, y_test) # Mostramos la matriz de confusión de forma intuitiva plot_pretty_confusion_matrix(cm, classes=class_names, normalize=True, title='Matriz de confusión', cmap=plt.cm.Reds) # + [markdown] id="WbyZs4bse2Fh" # ## ¿Te animas a continuar? # # Con lo anterior ya tenemos un modelo a nuestra disposición. Sin embargo, quedan una serie de pasos por realizar para trabajar y exprimir las utilidades de CUBE IDE. En concreto, falta por realizar: # # 1. **Esencial**: descargar el modelo de Keras para embeberlo en CUBE-AI. ***Nota***: recuerda que puede haber problemas de versiones, por lo que es recomendable hacer un downgrade de la versión de Tensorflow. # # 2. **Interesante e ilustrativo, extraer el dataset**: La extracción del dataset para poder hacer una evaluación meticulosa del modelo embebido. 
Remember that it must be downloaded as a csv, and in a form the tool can ingest. # # 3. **To get the most out of the tool, quantize**: Try quantizing the model. If we want to take full advantage of the tool and improve the efficiency of our embedded system, quantization is practically mandatory, especially on low-power boards. This is a good goal to aim for. # # # + id="ih_2KXlShXGm" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Atmospheric Correction # In remote sensing we often need to calculate surface reflectance (ρ) from the radiance (L) measured by a given sensor # ### ρ = π(L - Lp) / (τ(Edir + Edif)) # where # # * ρ = surface reflectance # * L = at-sensor radiance # * Lp = path radiance # * τ = transmissivity (from surface to satellite) # * Edir = direct solar irradiance # * Edif = diffuse solar irradiance # * π = 3.1415 # # # Let's say a satellite sensor measures a radiance of 120 L = 120 # There are 4 unknowns remaining and they depend on i) atmospheric conditions and ii) Earth-Sun-Satellite geometry # **Atmospheric Conditions**
# let's say we have measured values of: H2O = 1 # water vapour (g cm-2) O3 = 0.4 # ozone (atm-cm) AOT = 0.3 # aerosol optical thickness # **Earth-Sun-Satellite Geometry**
# and some additional measurements: alt = 0 # target altitude (km) solar_z = 20 # solar zenith angle (degrees) view_z = 0 # view zenith angle (degrees) doy = 4 # day of year (used to correct for Earth-Sun distance) # **Potential data sources** # # * Water vapour: [NCEP/NCAR](http://journals.ametsoc.org/doi/abs/10.1175/1520-0477%281996%29077%3C0437%3ATNYRP%3E2.0.CO%3B2) # * Ozone: [TOMS/OMI](http://ozoneaq.gsfc.nasa.gov/missions). # * Aerosol optical thickness: [MODIS Aerosol Product](http://modis-atmos.gsfc.nasa.gov/MOD04_L2/index.html) or in-scene techniques # * Geometry and day-of-year: satellite image metadata # # **6S emulator** # loading dependencies import os import sys sys.path.append(os.path.join(os.path.dirname(os.getcwd()),'bin')) from interpolated_LUTs import Interpolated_LUTs # The 6S emulator is **100x** faster than the radiative transfer code. This speed increase is achieved by using interpolated look-up tables (iLUTs). This trades set-up time for execution time. # instantiate the interpolated look-up table class iLUTs = Interpolated_LUTs('COPERNICUS/S2') # i.e. Sentinel 2 # download look-up tables iLUTs.download_LUTs() # interpolate look-up tables iLUTs.interpolate_LUTs() # If you are running this notebook in a docker container then you can save these interpolated look-up tables (and your Earth Engine authentication) for later using a [docker commit](https://github.com/samsammurphy/6S_emulator/wiki/docker-commits), so that you only have to do the set-up once. iLUTs_all_wavebands = iLUTs.get() # for example let's look at band 1 iLUT_B1 = iLUTs_all_wavebands['B1'] # We can get atmospheric correction coefficients (a, b) for *perihelion*: a, b = iLUT_B1(solar_z,H2O,O3,AOT,alt) # and correct them for Earth's [elliptical orbit](https://github.com/samsammurphy/6S_LUT/wiki/Elliptical-Orbit-Correction): # + import math elliptical_orbit_correction = 0.03275104*math.cos(doy/59.66638337) + 0.96804905 a *= elliptical_orbit_correction b *= elliptical_orbit_correction # - # Now that we have the correction coefficients, we can calculate surface reflectance: they relate radiance to reflectance through L = a + b·ρ, so ρ = (L - a) / b.
# + ρ = (L-a)/b print('Surface Reflectance = {:.3f}'.format(ρ)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.6 64-bit # metadata: # interpreter: # hash: 767d51c1340bd893661ea55ea3124f6de3c7a262a8b4abca0554b478b1e2ff90 # name: Python 3.8.6 64-bit # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt # + # id = db.Column("room_id", db.Integer, primary_key=True) # room = Column(db.Integer, nullable=False) # number_person = Column(db.Integer, nullable=False) # type_room = Column(String(250), nullable=False) # price_room = Column(db.Integer, nullable=False) # position_room = Column(String(250), nullable=False) # status_room = Column(String(250), nullable=False) # - room = [] for i in range(1,6): room+= [k for k in range(i*100+1, 100*i+30)] + [j for j in range(i*100+60,100*i+90)] df = pd.DataFrame({'room':room}) df['price_room'] = 670 df['numberDomitory'] = 1 df['type_room'] = 'NORM' df['number_person'] = df['room'].apply(lambda x: 2 if x%2 else 3) df['gender'] = df['room'].apply(lambda x: 'MALE' if x%2 else 'FEMALE') df['empty_position'] = df['number_person'] df['status_room'] = 'OK' df.head() room = [] for i in range(1,6): room+= [k for k in range(1000+ i*100+1, 1000+100*i+30)] df_block = pd.DataFrame({'room':room}) df_block['price_room'] = 1200 df_block['numberDomitory'] = 1 df_block['type_room'] = 'BLOCK' df_block['number_person'] = 2 df_block['gender'] = df_block['room'].apply(lambda x: 'MALE' if x%2 else 'FEMALE') df_block['empty_position'] = df_block['number_person'] df_block['status_room'] = 'OK' df_block.head() df = pd.concat([df, df_block]) # + tags=[] df.sample(20) # - df2 = df.copy() df2['numberDomitory'] = 2 df3 = df.copy() df3['numberDomitory'] = 3 df = pd.concat([df, df2,df3]) df['room_id'] = range(len(df)) df.sample(20) # + from sqlalchemy import create_engine disk_engine = create_engine('sqlite:///rooms.sqlite') df.to_sql(name='rooms', con=disk_engine,if_exists='append', index=False) # - # !pip install sqlalchemy # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.12 64-bit (''detectron2'': conda)' # name: python3 # --- # + [markdown] id="t995W39KWYIM" # # Detectron2 Train on a custom dataset with data augmentation # # + [markdown] id="eUS0fSJOzPwE" #
# Run in Google Colab | View source on GitHub
# + [markdown] id="RX4vF6JnWf5r" # ## Install detectron2 # # > **Important**: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). This notebook includes only what's necessary to run in Colab. # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="c03oQPeCWgxC" outputId="a45cc754-7330-41cc-fe43-3958a9d8bb16" # !pip install pyyaml==5.1 # This is the current pytorch version on Colab. Uncomment this if Colab changes its pytorch version # # !pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 -f https://download.pytorch.org/whl/torch_stable.html # Install detectron2 that matches the above pytorch version # See https://detectron2.readthedocs.io/tutorials/install.html for instructions # !pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html # exit(0) # After installation, you need to "restart runtime" in Colab. This line can also restart runtime # + colab={"base_uri": "https://localhost:8080/"} id="hCGPJNJ0Wz4S" outputId="f5be33ab-b412-47f8-fe6f-4f7b3dd91d14" # check pytorch installation: import torch, torchvision print(torch.__version__, torch.cuda.is_available()) assert torch.__version__.startswith("1.9") # please manually install torch 1.9 if Colab changes its default version # + [markdown] id="oyvHKjDWW2rb" # ## Train on a custom dataset # + id="Y19zWbMmnWlz" import detectron2 from detectron2.utils.logger import setup_logger setup_logger() # import some common libraries import numpy as np import cv2 import matplotlib.pyplot as plt # import some common detectron2 utilities from detectron2 import model_zoo from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.utils.visualizer import Visualizer from detectron2.data import MetadataCatalog, DatasetCatalog # + [markdown] id="EisBfAnOnZSg" # Before we can start training our model we need to download some dataset. In this case we will use a dataset with balloon images. # + colab={"base_uri": "https://localhost:8080/"} id="pd4fS8BsW7YQ" outputId="ae900a0d-2620-415e-d510-399dbf89a721" # !wget https://github.com/matterport/Mask_RCNN/releases/download/v2.1/balloon_dataset.zip # !unzip balloon_dataset.zip > /dev/null # + [markdown] id="kr2w-ShonzL5" # In order to use a dataset with Detectron2 we need to register it. For more information check out the [official documentation](https://detectron2.readthedocs.io/tutorials/datasets.html#register-a-dataset). # 커스템 데이터셋 추가에 대한 자세한 내용은 공식문서를 참고하세요. 
# + id="3kPMjc7oXKBf" import os import numpy as np import json from detectron2.structures import BoxMode def get_balloon_dicts(img_dir): json_file = os.path.join(img_dir, "via_region_data.json") with open(json_file) as f: imgs_anns = json.load(f) dataset_dicts = [] for idx, v in enumerate(imgs_anns.values()): record = {} filename = os.path.join(img_dir, v["filename"]) height, width = cv2.imread(filename).shape[:2] record["file_name"] = filename record["image_id"] = idx record["height"] = height record["width"] = width annos = v["regions"] objs = [] for _, anno in annos.items(): assert not anno["region_attributes"] anno = anno["shape_attributes"] px = anno["all_points_x"] py = anno["all_points_y"] poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)] poly = [p for x in poly for p in x] obj = { "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)], "bbox_mode": BoxMode.XYXY_ABS, "segmentation": [poly], "category_id": 0, } objs.append(obj) record["annotations"] = objs dataset_dicts.append(record) return dataset_dicts # + id="oUiMwelyOXk5" from detectron2.data import detection_utils as utils import detectron2.data.transforms as T import copy import torch def custom_mapper(dataset_dict): dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below image = utils.read_image(dataset_dict["file_name"], format="BGR") transform_list = [ T.Resize((800,600)), T.RandomBrightness(0.8, 1.8), T.RandomContrast(0.6, 1.3), T.RandomSaturation(0.8, 1.4), T.RandomRotation(angle=[90, 90]), T.RandomLighting(0.7), T.RandomFlip(prob=0.4, horizontal=False, vertical=True), ] image, transforms = T.apply_transform_gens(transform_list, image) dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32")) annos = [ utils.transform_instance_annotations(obj, transforms, image.shape[:2]) for obj in dataset_dict.pop("annotations") if obj.get("iscrowd", 0) == 0 ] instances = utils.annotations_to_instances(annos, image.shape[:2]) dataset_dict["instances"] = utils.filter_empty_instances(instances) return dataset_dict # + id="aLq2pGZwOYSB" from detectron2.engine import DefaultTrainer from detectron2.data import build_detection_test_loader, build_detection_train_loader class CustomTrainer(DefaultTrainer): @classmethod def build_train_loader(cls, cfg): return build_detection_train_loader(cfg, mapper=custom_mapper) # + id="yMdJ-IBWOfwe" for d in ["train", "val"]: DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("balloon/" + d)) MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"]) balloon_metadata = MetadataCatalog.get("balloon_train") # + [markdown] id="Nc4-fxI-oLk-" # Now, let's fine-tune a pretrained FasterRCNN object detection model to detect our balloons. 
# + colab={"base_uri": "https://localhost:8080/"} id="hYjCcRL4XPOu" outputId="a6c70fa4-aed2-405b-82f1-ae838ef27261" from detectron2.config import get_cfg cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")) cfg.DATASETS.TRAIN = ("balloon_train",) cfg.DATASETS.TEST = () # no metrics implemented for this dataset cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml") # initialize from model zoo cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.00025 cfg.SOLVER.MAX_ITER = 500 cfg.SOLVER.STEPS = [] # do not decay learning rate cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (ballon) os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = CustomTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() # - # Look at training curves in tensorboard: # %load_ext tensorboard # %tensorboard --logdir output # + [markdown] id="pSYxOlrSoYQO" # ## Inference & evaluation using the trained model # Now, let's run inference with the trained model on the balloon validation dataset. First, let's create a predictor using the model we just trained: # + id="nHY6xERfXY4z" cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set the testing threshold for this model cfg.DATASETS.TEST = ("balloon_val", ) predictor = DefaultPredictor(cfg) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Ey9Qg7fsXfx5" outputId="13c65922-8d6e-49a4-a803-0c878006aa09" from detectron2.utils.visualizer import ColorMode import random dataset_dicts = get_balloon_dicts("balloon/val") for d in random.sample(dataset_dicts, 3): im = cv2.imread(d["file_name"]) outputs = predictor(im) v = Visualizer(im[:, :, ::-1], metadata=balloon_metadata, scale=0.8) v = v.draw_instance_predictions(outputs["instances"].to("cpu")) plt.figure(figsize = (14, 10)) plt.imshow(cv2.cvtColor(v.get_image()[:, :, ::-1], cv2.COLOR_BGR2RGB)) plt.show() # + [markdown] id="U47-LPkQZFP7" # We can also evaluate its performance using AP metric implemented in COCO API. # + colab={"base_uri": "https://localhost:8080/"} id="Vly7NsxaZGWn" outputId="c94059c7-51c8-4f15-aa05-d75798f8e15d" from detectron2.evaluation import COCOEvaluator, inference_on_dataset from detectron2.data import build_detection_test_loader evaluator = COCOEvaluator("balloon_val", ("bbox",), False, output_dir="./output/") val_loader = build_detection_test_loader(cfg, "balloon_val") print(inference_on_dataset(trainer.model, val_loader, evaluator)) # another equivalent way to evaluate the model is to use `trainer.test` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="WuMJWX_7SgIB" # # Neural Sequence Distance Embeddings # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gcorso/neural_seed/blob/master/tutorial/Neural_SEED.ipynb) # # The improvement of data-dependent heuristics and representation for biological sequences is a critical requirement to fully exploit the recent technological and scientific advancements for human microbiome analysis. 
This notebook presents Neural Sequence Distance Embeddings (Neural SEED), a novel framework to embed biological sequences in geometric vector spaces that unifies recently proposed approaches. We demonstrate its capacity by presenting different ways it can be applied to the tasks of edit distance approximation, closest string retrieval, hierarchical clustering and multiple sequence alignment. In particular, the hyperbolic space is shown to be a key component to embed biological sequences and obtain competitive heuristics. Benchmarked with common bioinformatics and machine learning baselines, the proposed approaches display significant accuracy and/or runtime improvements on real-world datasets formed by sequences from samples of the human microbiome. # + [markdown] id="oNWhBfRtV0wm" # ![Cover](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/cover.png) # # Figure 1: On the left, a diagram of the Neural SEED underlying idea: embed sequences in vector spaces preserving the edit distance between them. On the right, an example of the hierarchical clustering produced on the Poincarè disk from the P53 tumour protein from 30 different organisms. # # + [markdown] id="BIht2F8eUsec" # ## Introduction and Motivation # # ### Motivation # # Dysfunctions of the human microbiome (Morgan & Huttenhower, 2012) have been linked to many serious diseases ranging from diabetes and antibiotic resistance to inflammatory bowel disease. Its usage as a biomarker for the diagnosis and as a target for interventions is a very active area of research. Thanks to the advances in sequencing technologies, modern analysis relies on sequence reads that can be generated relatively quickly. However, to fully exploit the potential of these advances for personalised medicine, the computational methods used in the analysis have to significantly improve in terms of speed and accuracy. # # ![Classical microbiome analysis](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/microbiome_analysis.png) # # Figure 2: Traditional approach to the analysis of the 16S rRNA sequences from the microbiome. # + [markdown] id="GtYgkAgYUx3p" # ### Problem # # While the number of available biological sequences has been growing exponentially over the past decades, most of the problems related to string matching have not been addressed by the recent advances in machine learning. Classical algorithms are data-independent and, therefore, cannot exploit the low-dimensional manifold assumption that characterises real-world data. Exploiting the available data to produce data-dependent heuristics and representations would greatly accelerate large-scale analyses that are critical to microbiome analysis and other biological research. # # Unlike most tasks in computer vision and NLP, string matching problems are typically formulated as combinatorial optimisation problems. These discrete formulations do not fit well with the current deep learning approaches causing these problems to be left mostly unexplored by the community. Current supervised learning methods also suffer from the lack of labels that characterises many downstream applications with biological sequences. On the other hand, common self-supervised learning approaches, very successful in NLP, are less effective in the biological context where relations tend to be per-sequence rather than per-token (McDermott et al. 2021). 
# + [markdown] id="xztLd6M_ZpKn" # # ### Neural Sequence Distance Embedding # # In this notebook, we present Neural Sequence Distance Embeddings (Neural SEED), a general framework to produce representations for biological sequences where the distance in the embedding space is correlated with the evolutionary distance between sequences. This control over the geometric interpretation of the representation space enables the use of geometrical data processing tools for the analysis of the spectrum of sequences. # # ![Classical microbiome analysis](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/edit_diagram.PNG) # # Figure 3: The key idea of Neural SEED is to learn an encoder function that preserves distances between the sequence and vector space. # # # Examining the task of embedding sequences to preserve the edit distance reveals the importance of data-dependent approaches and of using a geometry that matches well the underlying distribution in the data analysed. For biological datasets, that have an implicit hierarchical structure given by evolution, the hyperbolic space provides significant improvement. # # We show the potential of the framework by analysing three fundamental tasks in bioinformatics: closest string retrieval, hierarchical clustering and multiple sequence alignment. For all tasks, relatively simple unsupervised approaches using Neural SEED encoders significantly outperform data-independent heuristics in terms of accuracy and/or runtime. In the paper (preprint will be available soon) and the [complete repository](https://github.com/gcorso/neural_seed) we also present more complex geometrical approaches to hierarchical clustering and multiple sequence alignment. # # # ## 2. Analysis # # To improve readability and limit the size of the notebook we make use of some subroutines in the [official repository](https://github.com/gcorso/neural_seed) for the research project. The code in the notebook is our best effort to convey the promising application of hyperbolic geometry to this novel research direction and how `geomstats` helps to achieve it. # + [markdown] id="iyPzAuvycefH" # Install and import the required packages. 
# + id="FwCBrVVic4Xp" # !pip3 install geomstats # !apt install clustalw # !pip install biopython # !pip install python-Levenshtein # !pip install Cython # !pip install networkx # !pip install tqdm # !pip install gdown # !pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html # !git clone https://github.com/gcorso/neural_seed.git import os os.chdir("neural_seed") # !cd hierarchical_clustering/relaxed/mst; python setup.py build_ext --inplace; cd ../unionfind; python setup.py build_ext --inplace; cd ..; cd ..; cd ..; os.environ['GEOMSTATS_BACKEND'] = 'pytorch' # + colab={"base_uri": "https://localhost:8080/"} id="Ym4wkXPfc4O3" outputId="6285cb49-5a92-4fb1-a8c7-f5acf731abf1" import torch import os import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import time from geomstats.geometry.poincare_ball import PoincareBall from edit_distance.train import load_edit_distance_dataset from util.data_handling.data_loader import get_dataloaders from util.ml_and_math.loss_functions import AverageMeter # + [markdown] id="1OSApf0PemhP" # ### Dataset description # # As microbiome analysis is one of the most critical applications where the methods presented could be applied, we chose to use a dataset containing a portion of the 16S rRNA gene widely used in the biological literature to analyse microbiome diversity. Qiita (Clemente et al. 2015) contains more than 6M sequences of up to 152 bp that cover the V4 hyper-variable region collected from skin, saliva and faeces samples of uncontacted Amerindians. The full dataset can be found on the [European Nucleotide Archive](https://www.ebi.ac.uk/ena/browser/text-search?query=ERP008799), but, in this notebook, we will only use a subset of a few tens of thousands that have been preprocessed and labelled with pairwise distances. We also provide results on the RT988 dataset (Zheng et al. 2019), another dataset of 16S rRNA that contains slightly longer sequences (up to 465 bp). # + colab={"base_uri": "https://localhost:8080/"} id="OvVBQf6idzcA" outputId="4a17696a-322f-4398-8ad5-c202d02d4458" # !gdown --id 1yZTOYrnYdW9qRrwHSO5eRc8rYIPEVtY2 # for edit distance approximation # !gdown --id 1hQSHR-oeuS9bDVE6ABHS0SoI4xk3zPnB # for closest string retrieval # !gdown --id 1ukvUI6gUTbcBZEzTVDpskrX8e6EHqVQg # for hierarchical clustering # + [markdown] id="-mH5MrL1hYIL" # ### Edit distance approximation # # **Edit distance** The task of finding the distance or similarity between two strings and the related task of global alignment lies at the foundation of bioinformatics. Due to the resemblance with the biological mutation process, the edit distance and its variants are typically used to measure similarity between sequences. Given two string $s_1$ and $s_2$, their edit distance $ED(s_1, s_2)$ is defined as the minimum number of insertions, deletions or substitutions needed to transform $s_1$ in $s_2$. We always deal with the classical edit distance where the same weight is given to every operation, however, all the approaches developed can be applied to any distance function of choice. # # **Task and loss function** As represented in Figure 3, the task is to learn an encoding function $f$ such that given any pair of sequences from the domain of interest $s_1$ and $s_2$: # \begin{equation}ED(s_1, s_2) \approx n \; d(f(s_1), f(s_2)) \end{equation} # # where $n$ is the maximum sequence length and $d$ is a distance function over the vector space. 
In practice this is enforced in the model by minimising the mean squared error between the actual and the predicted edit distance. To make the results more interpretable and comparable across different datasets, we report results using \% RMSE defined as: # \begin{equation} # \text{% RMSE}(f, S) = \frac{100}{n} \, \sqrt{L(f, S)} = \frac{100}{n} \, \sqrt{\sum_{s_1, s_2 \in S} (ED(s_1, s_2) - n \; d(f(s_1), f(s_2)))^2} # \end{equation} # # which can be interpreted as an approximate average error in the distance prediction as a percentage of the size of the sequences. # # + [markdown] id="n0EAvdHOjOoU" # In this notebook, we only show the code to run a simple linear layer on the sequence which, in the hyperbolic space, already gives particularly good results. Later we will also report results for more complex models whose implementation can be found in the [Neural SEED repository](https://github.com/gcorso/neural_seed). # + id="EXBg45KBeACe" class LinearEncoder(nn.Module): """ Linear model which simply flattens the sequence and applies a linear transformation. """ def __init__(self, len_sequence, embedding_size, alphabet_size=4): super(LinearEncoder, self).__init__() self.encoder = nn.Linear(in_features=alphabet_size * len_sequence, out_features=embedding_size) def forward(self, sequence): # flatten sequence and apply layer B = sequence.shape[0] sequence = sequence.reshape(B, -1) emb = self.encoder(sequence) return emb class PairEmbeddingDistance(nn.Module): """ Wrapper model for a general encoder, computes pairwise distances and applies projections """ def __init__(self, embedding_model, embedding_size, scaling=False): super(PairEmbeddingDistance, self).__init__() self.hyperbolic_metric = PoincareBall(embedding_size).metric.dist self.embedding_model = embedding_model self.radius = nn.Parameter(torch.Tensor([1e-2]), requires_grad=True) self.scaling = nn.Parameter(torch.Tensor([1.]), requires_grad=True) def normalize_embeddings(self, embeddings): """ Project embeddings to an hypersphere of a certain radius """ min_scale = 1e-7 max_scale = 1 - 1e-3 return F.normalize(embeddings, p=2, dim=1) * self.radius.clamp_min(min_scale).clamp_max(max_scale) def encode(self, sequence): """ Use embedding model and normalization to encode some sequences. 
""" enc_sequence = self.embedding_model(sequence) enc_sequence = self.normalize_embeddings(enc_sequence) return enc_sequence def forward(self, sequence): # flatten couples (B, _, N, _) = sequence.shape sequence = sequence.reshape(2 * B, N, -1) # encode sequences enc_sequence = self.encode(sequence) # compute distances enc_sequence = enc_sequence.reshape(B, 2, -1) distance = self.hyperbolic_metric(enc_sequence[:, 0], enc_sequence[:, 1]) distance = distance * self.scaling return distance # + [markdown] id="ZDa6aholkv4z" # General training and evaluation routines used to train the models: # + id="0_TS2a5VmQCQ" def train(model, loader, optimizer, loss, device): avg_loss = AverageMeter() model.train() for sequences, labels in loader: # move examples to right device sequences, labels = sequences.to(device), labels.to(device) # forward propagation optimizer.zero_grad() output = model(sequences) # loss and backpropagation loss_train = loss(output, labels) loss_train.backward() optimizer.step() # keep track of average loss avg_loss.update(loss_train.data.item(), sequences.shape[0]) return avg_loss.avg def test(model, loader, loss, device): avg_loss = AverageMeter() model.eval() for sequences, labels in loader: # move examples to right device sequences, labels = sequences.to(device), labels.to(device) # forward propagation and loss computation output = model(sequences) loss_val = loss(output, labels).data.item() avg_loss.update(loss_val, sequences.shape[0]) return avg_loss.avg # + [markdown] id="49GU1jZAlBTE" # The linear model is trained on 7000 sequences (+700 of validation) and tested on 1500 different sequences: # + colab={"base_uri": "https://localhost:8080/"} id="UC6Qio4WSnSh" outputId="2ed86477-69a6-4547-8740-1072224973d7" EMBEDDING_SIZE = 128 device = 'cuda' if torch.cuda.is_available() else 'cpu' torch.manual_seed(2021) if device == 'cuda': torch.cuda.manual_seed(2021) # load data datasets = load_edit_distance_dataset('./edit_qiita_large.pkl') loaders = get_dataloaders(datasets, batch_size=128, workers=1) # model, optimizer and loss encoder = LinearEncoder(152, EMBEDDING_SIZE) model = PairEmbeddingDistance(embedding_model=encoder, embedding_size=EMBEDDING_SIZE) model.to(device) optimizer = optim.Adam(model.parameters(), lr=0.001) loss = nn.MSELoss() # training for epoch in range(0, 21): t = time.time() loss_train = train(model, loaders['train'], optimizer, loss, device) loss_val = test(model, loaders['val'], loss, device) # print progress if epoch % 5 == 0: print('Epoch: {:02d}'.format(epoch), 'loss_train: {:.6f}'.format(loss_train), 'loss_val: {:.6f}'.format(loss_val), 'time: {:.4f}s'.format(time.time() - t)) # testing for dset in loaders.keys(): avg_loss = test(model, loaders[dset], loss, device) print('Final results {}: loss = {:.6f}'.format(dset, avg_loss)) # + [markdown] id="gyYzk2BllSP1" # Therefore, our linear model after only 50 epochs has a $\% RMSE \approx 2.6$ that, as we will see, is significantly better than any data-independent baseline. # + [markdown] id="ExaMbPXCvksC" # ### Closest string retrieval # # This task consists of finding the sequence that is closest to a given query among a large number of reference sequences and is very commonly used to classify sequences. Given a set of reference strings $R$ and a set of queries $Q$, the task is to identify the string $r_q \in R$ that minimises $ED(r_q, q)$ for each $q \in Q$. This task is performed in an unsupervised setting using models trained for edit distance approximation. 
Therefore, given a pretrained encoder $f$, its prediction is taken to be the string $r_q \in R$ that minimises $d(f(r_q), f(q))$ for each $q \in Q$. This allows for sublinear retrieval (via locality-sensitive hashing or other data structures) which is critical in real-world applications where databases can have billions of reference sequences. As performance measures, we report the top-1, top-5 and top-10 scores, where top-$k$ indicates the percentage of times the model ranks the closest string within its top-$k$ predictions. # + colab={"base_uri": "https://localhost:8080/"} id="1CvOT-6Mxwg-" outputId="2005c477-c5f0-40c7-be64-23a0bc8fe4c0" from closest_string.test import closest_string_testing closest_string_testing(encoder_model=model, data_path='./closest_qiita_large.pkl', batch_size=128, device=device, distance='hyperbolic') # + [markdown] id="aGH6KNf4ERUz" # Evaluated on a dataset composed of 1000 reference and 1000 query sequences (disjoint from the edit distance training set) the simple model we trained is capable of detecting the closest sequence correctly 44\% of the time and in approximately 3/4 of the cases it places the real closest sequence in its top-10 choices. # # + [markdown] id="v2JUY0Ho0VAN" # ### Hierarchical clustering # # Hierarchical clustering (HC) consists of constructing a hierarchy over clusters of data by defining a tree with internal points corresponding to clusters and leaves to datapoints. The goodness of the tree can be measured using Dasgupta's cost (Dasgupta 2016). # # One simple approach to use Neural SEED to speed up hierarchical clustering is similar to the one adopted in the previous section: estimate the pairwise distance matrix with a model pretrained for *edit distance approximation* and then use the matrix as the basis for classical agglomerative clustering algorithms (e.g. Single, Average and Complete Linkage). The computational cost to generate the matrix goes from $O(N^2M^2)$ to $O(N(M+N))$ and by using optimisations like locality-sensitive hashing the clustering itself can be accelerated. # + [markdown] id="ZYlXPjCY4Rqq" # The following code computes the pairwise distance matrix and then runs a series of agglomerative clustering heuristics (Single, Average, Complete and Ward Linkage) on it. # + colab={"base_uri": "https://localhost:8080/"} id="tPB_Lfi20FwP" outputId="53daf3b2-f8af-4ba5-905f-3c46616aae31" from hierarchical_clustering.unsupervised.unsupervised import hierarchical_clustering_testing hierarchical_clustering_testing(encoder_model=model, data_path='./hc_qiita_large_extr.pkl', batch_size=128, device=device, distance='hyperbolic') # + [markdown] id="8lw-XynW7pRy" # An alternative approach to performing hierarchical clustering we propose uses the continuous relaxation of Dasgupta's cost (Chami et al. 2020) to embed sequences in the hyperbolic space. In comparison to Chami et al. (2020), we show that it is possible to significantly decrease the number of pairwise distances required by directly mapping the sequences. # This allows to considerably speed up the construction especially when dealing with a large number of sequences without requiring any pretrained model. Figure 1 shows an example of this approach when applied to a small dataset of proteins and the code for it is in the Neural SEED repository. # + [markdown] id="xYOHX_0h89ue" # ### Multiple Sequence Alignment # # Multiple Sequence Alignment is another very common task in bioinformatics and there are several ways of using Neural SEED to accelerate heuristics. 
# The most commonly used programs, such as the Clustal series and MUSCLE, first estimate a phylogenetic guide tree from the pairwise distances and then use it to drive a progressive alignment phase.
#
# In the Clustal algorithm for MSA on a 1200-sequence subset of RT988, the construction of the distance matrix and of the guide tree takes 99\% of the total running time (the remaining steps take only 24s of the roughly 35-minute total). Therefore, one obvious improvement that Neural SEED can bring is to speed up this phase using the hierarchical clustering techniques seen in the previous section.

# + [markdown] id="HltOdLRHACfk"
# The following code uses the model pretrained for edit distance to approximate the neighbour joining tree construction and then runs clustalw using that guide tree:

# + colab={"base_uri": "https://localhost:8080/"} id="fnHwYugF38c3" outputId="4e5e0d89-a230-4dfe-cd15-50baa8893183"
from multiple_alignment.guide_tree.guide_tree import approximate_guide_trees

# performs the neighbour joining algorithm on the estimated pairwise distance matrix
approximate_guide_trees(encoder_model=model, dataset=datasets['test'], batch_size=128, device=device, distance='hyperbolic')

# Command line clustalw using the tree generated with the previous command.
# The substitution matrix and gap penalties are set to simulate the classical edit distance used to train the model
# !clustalw -infile="sequences.fasta" -dnamatrix=multiple_alignment/guide_tree/matrix.txt -transweight=0 -type='DNA' -gapopen=1 -gapext=1 -gapdist=10000 -usetree='njtree.dnd' | grep 'Alignment Score'

# + [markdown] id="DxkNQl2MFcvp"
#
# An alternative method we propose for MSA uses an autoencoder to convert the Steiner string approximation problem into a continuous optimisation task. More details on this are in our paper and repository.
#
# -

# ### Central role of Geomstats
#
# As we will show in the next section, the choice of the geometry of the embedding space is critical with Neural SEED. `geomstats` provides the geometric functions required for the methods presented (e.g. `metric.dist`, `metric.dist_pairwise` and `metric.closest_neighbor_index`) for the Poincaré ball used in this notebook. Moreover, the large variety of other geometric spaces available through the same interface makes it easy to experiment and find the most appropriate space for each application domain of this approach.
#
# Finally, Neural SEED provides representations that are better suited for human interaction than classical bioinformatics algorithms. While the inner workings of the encoder might be hard to interpret, the embeddings produced can be plotted (directly or after dimensionality reduction) and intuitively read as a continuous space where distance reflects evolutionary separation. `geomstats` could also be of critical importance on this front.
#
# `giotto-tda` was not used for this project.

# + [markdown] id="vr31uNHVIy37"
# ## 3. Benchmark
#
# In this section, we compare the Neural SEED approach to classical alignment-free baselines such as k-mer profiles, and contrast the performance of neural models with different architectures and on different geometric spaces.

# + [markdown] id="nT7GXudioK98"
# ### Edit distance approximation
#
# ![Table of results](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/edit_real.PNG)
#
# Figure 4: \% RMSE test set results on the Qiita and RT988 datasets. The first five models are the k-mer baselines and, in parentheses, we indicate the dimension of the embedding space. The remaining models are encoders trained with the Neural SEED framework, all with an embedding space dimension of 128. A dash (-) indicates that the model did not converge.

# + [markdown] id="Ouwj0zLkreKj"
# Figure 4 highlights the advantage provided by data-dependent methods over the data-independent baseline approaches. Moreover, the results show that it is critical for the geometry of the embedding space to reflect the structure of the low-dimensional manifold on which the data lies. In these biological datasets, there is an implicit hierarchical structure given by the evolutionary process, which is well reflected by the *hyperbolic* space. Thanks to this close correspondence, even relatively simple models like linear regression and the MLP perform very well with this distance function.

# + [markdown] id="ZMmQG-IDskZN"
# ![Embedding dimension results](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/edit_dimension.png)
#
# Figure 5: \% RMSE on the Qiita dataset for a Transformer with different distance functions.

# + [markdown] id="NsVtPHoMspvb"
# The clear benefit of using the hyperbolic space becomes evident when analysing the dimension required for the embedding space (Figure 5). In these experiments, we run the Transformer model tuned on the Qiita dataset with an embedding size of 128 over a range of dimensions. The hyperbolic space provides significantly more efficient embeddings, with the model reaching its 'elbow' at dimension 32 and matching, with only 4 to 16 dimensions, the performance the other spaces reach at dimension 128. Given that the memory needed to store the embeddings and the time needed to compute distances between them scale linearly with the dimension of the space, this provides a significant improvement in downstream tasks over other Neural SEED approaches.

# + [markdown] id="9HE_VPRVsqgX"
# **Running time** A critical step behind most of the algorithms analysed in the rest of the paper is the computation of the pairwise distance matrix of a set of sequences. Taking as an example the RT988 dataset (6700 sequences of length up to 465 bases), optimised C code computes approximately 2700 pairwise distances per second on a CPU and takes 2.5 hours for the whole matrix. In comparison, using a trained Neural SEED model, the same matrix can be approximated in 0.3-3s (a similar value holds for the k-mer baseline) on the same CPU. The computational complexity for $N$ sequences of length $M$ is reduced from $O(N^2\; M^2)$ to $O(N(M + N))$ (assuming the model is linear w.r.t. the length and the embedding size is constant). The training process typically takes 0.5-3 hours on a GPU. However, in applications such as microbiome analysis, biologists typically analyse data coming from the same distribution (e.g. the 16S rRNA gene) for multiple individuals, so this initial cost would be significantly amortised.

# + [markdown] id="MN2WD07Dye09"
# ### Closest string retrieval
#
# Figure 6 shows that, in this task too, the data-dependent models outperform the baselines even when the latter operate on larger spaces. In terms of distance function, the *cosine* distance achieves performance on par with the *hyperbolic* one. This can be explained by the fact that, for a set of points on the same hypersphere (recall that the embeddings are normalised to a common radius), the points with the smallest *cosine* distance and the smallest *hyperbolic* distance are the same. The *cosine* distance is therefore capable of providing good orderings of sequence similarity, but inferior approximations of the distances themselves.
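# + [markdown]
# As a quick sanity check of this ordering argument, here is a minimal `numpy` sketch (using the closed-form Poincaré ball distance rather than `geomstats`) which compares the two rankings for points constrained to a common hypersphere, exactly the situation created by the radius normalisation in `PairEmbeddingDistance`. The dimension, number of points and radius below are arbitrary choices.

# +
import numpy as np

rng = np.random.default_rng(0)


def poincare_dist(u, v):
    # closed-form distance on the Poincaré ball:
    # d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    sq_diff = np.sum((u - v) ** 2, axis=-1)
    denom = (1 - np.sum(u ** 2, axis=-1)) * (1 - np.sum(v ** 2, axis=-1))
    return np.arccosh(1 + 2 * sq_diff / denom)


# place the query and 1000 reference points on the same hypersphere inside the unit ball
radius = 0.5
refs = rng.normal(size=(1000, 16))
refs = radius * refs / np.linalg.norm(refs, axis=1, keepdims=True)
query = rng.normal(size=16)
query = radius * query / np.linalg.norm(query)

# rank the references by hyperbolic distance and by cosine distance to the query
hyperbolic_ranking = np.argsort(poincare_dist(query, refs))
cosine_ranking = np.argsort(-refs @ query)  # largest cosine similarity = smallest cosine distance

print(np.array_equal(hyperbolic_ranking, cosine_ranking))  # expected to print True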
# + [markdown] id="mmS2c13Cz-6A" # ![Closest string retrieval table](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/closest_real.png) # # Figure 6: Accuracy of different models in the *closest string retrieval* task on the Qiita dataset. # + [markdown] id="FVQF2oyi4xly" # ### Hierarchical clustering # # The results (Figure 7) show that the difference in performance between the most expressive models and the round truth distances is not statistically significant. The *hyperbolic* space achieves the best performance and, although the relative difference between the methods is not large in terms of percentage Dasgupta's cost (but still statistically significant), it results in a large performance gap when these trees are used for tasks such as MSA. The total CPU time taken to construct the tree is reduced from more than 30 minutes to less than one in this dataset and the difference is significantly larger when scaling to datasets of more and longer sequences. # + [markdown] id="rLMaq24n5esD" # ![Unsupervised HC table](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/hc_average.png) # # Figure 7: Average Linkage \% increase in Dasgupta's cost of Neural SEED models compared to the performance of clustering on the ground truth distances. Average Linkage was the best performing clustering heuristic across all models. # + [markdown] id="-Zp_DeE0-zGx" # ### Multiple Sequence Alignment # # # The results reported in Figure 8 show that the alignment scores obtained when using the Neural SEED heuristics with models such as GAT are not statistically different from those obtained with the ground truth distances. Most of the models show a relatively large variance in performance across different runs. This has positive and negative consequences: the alignment obtained using a single run may not be very accurate, but, by training an ensemble of models and applying each of them, we are likely to obtain a significantly better alignment than the one from the ground truth matrix while still only taking a fraction of the time. # # ![Unsupervised MSA table](https://raw.githubusercontent.com/gcorso/neural_seed/master/tutorial/msa_guide_table.png) # # Figure 8: Percentage change in the alignment cost (- alignment score) returned by Clustal when using the heuristics to generate the tree as opposed to using NJ on real distances. The alignment was done on 1.2k unseen sequences from the RT988 dataset. # # + [markdown] id="jtLU3zIU-zNh" # ## 4. Limitations and perspectives # # ### Limitations of the method presented # # As mentioned in the introduction, we believe that the Neural SEED framework has the potential to be applied to numerous problems and, therefore, this project constitutes only an initial analysis of its geometrical properties and applications. Below we list some of the limitations of the current analysis and potential directions of research to cover them. # # **Type of sequences** Both the datasets analysed consist of sequence reads of the same part of the genome. This is a very common set-up for sequence analysis (for example for microbiome analysis) and it is enabled by biotechnologies that can amplify and sequence certain parts of the genome selectively, but it is not ubiquitous. Shotgun metagenomics consists of sequencing random parts of the genome. This would, we believe, generate sequences lying on a low-dimensional manifold where the hierarchical relationship of evolution is combined with the relationship based on the specific position in the whole genome. 
# + [markdown]
# **Architectures** Throughout the project we used models that have been shown to work well for other types of sequences and tasks. However, the correct inductive biases that models should have to perform SEED are likely to be different from the ones useful for other tasks, and may even depend on the type of distance they try to preserve. Moreover, the capacity of the hyperbolic space could be further exploited using models that directly operate in the hyperbolic space (Peng et al. 2021).
#
# **Self-supervised embeddings** One potential application of Neural SEED that was not explored in this project is the direct use of the embeddings produced by Neural SEED for downstream tasks. This would enable the use of a wide range of geometric data processing tools for the analysis of biological sequences.
#

# + [markdown] id="bJuvzSh3-zoO"
# ### Limitations of Geomstats
#
# We did not find a way to use both the `pytorch` and `numpy` backends *at the same time*, which in some cases would, in our experience, have been useful.
#

# + [markdown] id="dj_XsOeN-zq-"
# ### Proposed features of Geomstats
#
# The documentation for the various manifolds could be improved with more detailed descriptions and an index with short summaries. Moreover, there are some classes, like `PoincareBall`, that we found in tutorials but could not find in the documentation. Finally, it would be great if more complex geometries such as product spaces were included.
#
# -

# ## References
#
# The preprint detailing all the approaches and related work will be available soon.
#
# (Morgan & Huttenhower, 2012) Human microbiome analysis. PLoS Comput Biol, 2012.
#
# (McDermott et al. 2021) Rethinking relational encoding in language model: Pre-training for general sequences. arXiv preprint, 2021.
#
# (Clemente et al. 2015) The microbiome of uncontacted Amerindians. Science Advances, 2015.
#
# (Zheng et al. 2019) Sense: Siamese neural network for sequence embedding and alignment-free comparison. Bioinformatics, 2019.
#
# (Dasgupta 2016) A cost function for similarity-based hierarchical clustering. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, 2016.
#
# (Chami et al. 2020) From trees to continuous embeddings and back: Hyperbolic hierarchical clustering. Advances in Neural Information Processing Systems 33, 2020.
# # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # %reload_ext autoreload # %autoreload 2 import numpy as np import pandas as pd import sys sys.path.append("data_resolution") from resolution_helpers import invalid_fips, remove_cols # - pm_data_path = "~/ASR-PM2.5/datasets/input_datasets/pm2.5/2015pm2.5dataset.csv" pm_data = pd.read_csv(pm_data_path, dtype = {"fips": str}).dropna() pm_data_updated = pm_data.rename(columns = {"fips": "FIPS"}) pm_data_updated pm_fips = pm_data_updated["FIPS"] pm_fips invalid_pm_fips = invalid_fips(pm_fips) assert len(invalid_pm_fips) == 0 keep_cols = ["FIPS", "pred_wght"] remove_cols(pm_data_updated, keep_cols) pm_data_updated.to_csv("~/ASR-PM2.5/datasets/intermediate_datasets/resultforpm2.5dataset.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "skip"} # ![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) # + [markdown] slideshow={"slide_type": "skip"} # ## To use this slideshow: # - Run All, using the menu item: Kernel/Restart & Run All # - Return to this top cell # - click on "Slideshow" menu item above, that looks like this: # ![](images/SlideIcon.png) # + [markdown] slideshow={"slide_type": "slide"} # ## Mathematical Modeling # # ### August 5, 2020 with # + [markdown] slideshow={"slide_type": "slide"} # ## Session III # # In this session, we’ll explore our implementation of the “Susceptible, Exposed, Infected and Recovered” (SEIR) model used in epidemiology, the study of how disease occurs in populations. # # + [markdown] slideshow={"slide_type": "slide"} # ## Recap: What is a Mathematical Model # # A mathematical model is a description of a system using mathematical concepts and mathematical language. # # You can think of a math model as a tool to help us describe what we believe about the workings of phenomena in the world. # # We use the language of mathematics to express our beliefs. # # We use mathematics (theoretical and numerical analysis) to evaluate the model, and get insights about the original phenomenon. # + [markdown] slideshow={"slide_type": "slide"} # ### Building Models: Our Road Map for The Course # # |Topic | Session | # |-|-| # |Choose what phenomenon you want to model|1| # |What assumptions are you making about the phenomenon|1| # |Use a flow diagram to help you determine the structure of your model|1| # |Choose equations|2| # |Implement equations using Python|2| # |Solve equations|2| # |Study the behaviour of the model|3| # |Test the model|3| # |Use the model|3| # # + [markdown] slideshow={"slide_type": "slide"} # ## Recap: Our assumptions # # 1. Mode of transmission of the disease from person to person is through contact ("contact transmission") between a person who interacts with an infectious person. # # 2. Once a person comes into contact with the pathogen, there is a period of time (called the latency period) in which they are infected, but cannot infect others (yet!). # # 3. Population is not-constant (that is, people are born and die as time goes by). # + [markdown] slideshow={"slide_type": "subslide"} # ## Recap: Our assumptions # # # 4. 
A person in the population is either one of: # - Susceptible, i.e. not infected but not yet exposed, # - Exposed to the infection, i.e. exposed to the virus, but not yet infectious, # - Infectious, and # - Recovered from the infection. # # 5. People can die by "natural causes" during any of the stages. We assume an additional cause of death associated with the infectious stage. # + [markdown] slideshow={"slide_type": "slide"} # ## Recap: Flow diagram # # How does a person move from one stage into another? In other words, how does a person go from susceptible to exposed, to infected, to recovered? # # $\Delta$: Per-capita birth rate. # # $\mu$: Per-capita natural death rate. # # $\alpha$: Virus-induced average fatality rate. # # $\beta$: Probability of disease transmission per contact (dimensionless) times the number of contacts per unit time. # # $\epsilon$: Rate of progression from exposed to infectious (the reciprocal is the incubation period). # # $\gamma$: Recovery rate of infectious individuals (the reciprocal is the infectious period). # # + [markdown] slideshow={"slide_type": "subslide"} # ## Recap: Flow diagram # # $$\stackrel{\Delta N} {\longrightarrow} \text{S} \stackrel{\beta\frac{S}{N} I}{\longrightarrow} \text{E} \stackrel{\epsilon}{\longrightarrow} \text{I} \stackrel{\gamma}{\longrightarrow} \text{R}$$ # $$\hspace{1.1cm} \downarrow \mu \hspace{0.6cm} \downarrow \mu \hspace{0.5cm} \downarrow \mu, \alpha \hspace{0.1cm} \downarrow \mu $$ # + [markdown] slideshow={"slide_type": "slide"} # ## Our system of equations # # $N$ is updated at each time step, and infected peopel die at a higher rate. # # $$ N = S + E + I + R$$ # # We can then express our model using differential equations # # $$\frac{dS}{dt} = \Delta N - \beta \frac{S}{N}I - \mu S$$ # # $$\frac{dE}{dt} = \beta \frac{S}{N}I - (\mu + \epsilon )E$$ # # $$\frac{dI}{dt} = \epsilon E - (\gamma+ \mu + \alpha )I$$ # # $$\frac{dR}{dt} = \gamma I - \mu R$$ # # # + [markdown] slideshow={"slide_type": "subslide"} # ## Our system of equations # # # Also, we can keep track of the dead people, dead due to the infection. # # $$\frac{dD}{dt} = \alpha I $$ # + [markdown] slideshow={"slide_type": "slide"} # ## Initial conditions # # If $N(t)$ denotes the total population, then at a given time $t$, # # $$N(t) = S(t) + E(t) + I(t) + R(t).$$ # # In particular, if for $t = 0$ (also known as "day 0") we set # # $$S(0) = S_0, E(0) = E_0, I(0) = I_0, R(0) = R_0, $$ # # then the population at day 0 is: # # $$N(0) = S_0 + E_0 + I_0 + R_0.$$ # # $S_0, E_0, I_0, R_0$ are known as "initial conditions" - we will need them to solve our system. # + slideshow={"slide_type": "skip"} import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt from ipywidgets import interact, interact_manual, widgets, Layout, VBox, HBox, Button from IPython.display import display, Javascript, Markdown, HTML, clear_output import pandas as pd import plotly.express as px import plotly.graph_objects as go # A grid of time points (in days) t = np.linspace(0, 750, 750) # The SEIR model differential equations. def deriv(y, t, Delta, beta, mu, epsilon,gamma,alpha): S, E, I, R, D = y N = S + E + I + R dS = Delta*N - beta*S*I/N - mu*S dE = beta*S*I/N - (mu + epsilon)*E dI = epsilon*E - (gamma + mu + alpha)*I dR = gamma*I - mu*R dD = alpha*I return [dS,dE, dI, dR, dD] def plot_infections(Delta, beta, mu, epsilon,gamma,alpha): # Initial number of infected and recovered individuals, I0 and R0. S0, E0,I0, R0 ,D0 = 37000000,0,100,0,0 # Total population, N. 
N = S0 + E0 + I0 + R0 # Initial conditions vector y0 = S0,E0, I0, R0, D0 # Integrate the SIR equations over the time grid, t. ret = odeint(deriv, y0, t, args=(Delta, beta, mu, epsilon,gamma,alpha)) S, E,I, R, D = ret.T S,E,I,R,D = np.ceil(S),np.ceil(E),np.ceil(I),np.ceil(R),np.ceil(D) seir_simulation = pd.DataFrame({"Susceptible":S,"Exposed":E,"Infected":I,"Recovered":R,"Deaths":D, "Time (days)":t}) layout = dict( xaxis=dict(title='Time (days)', linecolor='#d9d9d9', mirror=True), yaxis=dict(title='Number of people', linecolor='#d9d9d9', mirror=True)) fig = go.Figure(layout=layout) # fig.add_trace(go.Scatter(x=seir_simulation["Time (days)"], y=seir_simulation["Susceptible"], # mode='lines', # name='Susceptible')) fig.add_trace(go.Scatter(x=seir_simulation["Time (days)"], y=seir_simulation["Exposed"], mode='lines', name='Exposed')) fig.add_trace(go.Scatter(x=seir_simulation["Time (days)"], y=seir_simulation["Infected"], mode='lines', name='Infected')) fig.add_trace(go.Scatter(x=seir_simulation["Time (days)"], y=seir_simulation["Recovered"], mode='lines', name='Recovered')) fig.add_trace(go.Scatter(x=seir_simulation["Time (days)"], y=seir_simulation["Deaths"], mode='lines', name='Deaths')) fig.update_layout(title_text="Projected Susceptible, Exposed, Infectious, Recovered, Deaths") fig.show(); # + slideshow={"slide_type": "slide"} # Our code # A grid of time points (in days) t = np.linspace(0, 750, 750) # The SEIR model differential equations. def deriv(y, t, Delta, beta, mu, epsilon,gamma,alpha): S, E, I, R, D = y N = S + E + I + R dS = Delta*N - beta*S*I/N - mu*S dE = beta*S*I/N - (mu + epsilon)*E dI = epsilon*E - (gamma + mu + alpha)*I dR = gamma*I - mu*R dD = alpha*I return [dS,dE, dI, dR, dD] # + [markdown] slideshow={"slide_type": "slide"} # ## When is there equilibrium? # # Another way to think about rate of change is in terms of slope. One value that is of interest to mathematicians is the value in which a rate changes from positive to negative. # # At equlibrium the slope is horizontal. We can find this value using mathematics by setting a derivative equal to zero. # # We can find the equilibrium for our system by setting # # $$\frac{dS}{dt} =\frac{dE}{dt} =\frac{dI}{dt} =\frac{dR}{dt} = 0 $$ # # Playing some more with the equations indicates that R can be manipulated to be in terms of E or I. So that if the number of infectious (or exposed) is zero, then the number of exposed and recovered is zero too. This makes sense - if no one is infected in our population, then no one can catch the virus. # + [markdown] slideshow={"slide_type": "slide"} # ## When is there equilibrium? # # Let's suppose there is at least one infectious person in the population. # # We can do a bit of algebra to compute a very important number called $R_0$. This number is called "general (or basic) reproduction number". This is the number that epidemiologists use to determine the number of new cases a single individual will produce. # # We can do a bit of math to get this number. I will show you the simulation first, then the math behind it. # # For now, believe me. # # $$R_0 = \frac{\Delta \beta \epsilon}{\delta (\delta + \gamma) (\delta + \epsilon)}$$ # # If $R_0 < 1$ - this is disease free. # # If $R_0 > 1$ - this is called "endemic" and indicates there is an outbreak. # + slideshow={"slide_type": "slide"} # Initial number of infected and recovered individuals, I0 and R0. S0, E0,I0, R0 ,D0 = 37000000,0,1,0,0 # Total population, N. 
N = S0 + E0 + I0 + R0 # Initial conditions vector y0 = S0,E0, I0, R0, D0 # Integrate the SEIR equations over the time grid, t. ret = odeint(deriv, y0, t, args=(Delta, beta, mu, epsilon,gamma,alpha)) S, E,I, R,D = ret.T numerator = beta*epsilon denominator = (alpha + gamma + mu)*(epsilon + mu) print("R_0 is equal to", numerator/denominator) seir_simulation = pd.DataFrame({"Susceptible":S,"Exposed":E,"Infectious":I,"Recovered":R,"Time (days)":t}) px.line(seir_simulation,"Time (days)",'Infectious',title="Number of infectious people") # + [markdown] slideshow={"slide_type": "slide"} # ## Playing with the parameters # + slideshow={"slide_type": "fragment"} def f(beta,eps,gamma,alpha): numerator = beta*eps denominator = (alpha + gamma + mu)*(eps + mu) print("R_0 is equal to", numerator/denominator) plot_infections(0, beta, 0, eps, gamma, alpha) # + slideshow={"slide_type": "slide"} interact_manual(f, beta=widgets.FloatSlider(min=0, max=1, step=0.01, value=0.5), eps =widgets.FloatSlider(min=.1, max=1.0, step=.1, value=.1), gamma=widgets.FloatSlider(min=.1, max=1.0, step=.1, value=.1), alpha =widgets.FloatSlider(min=.005, max=1.0, step=.005, value=.005) ); # + [markdown] slideshow={"slide_type": "slide"} # ## Let's get some real data # # Using COVID-19 Open Data [1], we are going to compare our model to the number of daily cases reported in Canada. # # [1] COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, https://github.com/CSSEGISandData/COVID-19 # + slideshow={"slide_type": "fragment"} import requests as r import pandas as pd # + slideshow={"slide_type": "fragment"} API_response_confirmed = r.get("https://covid19api.herokuapp.com/confirmed") data = API_response_confirmed.json() # Check the JSON Response Content documentation below confirmed_df = pd.json_normalize(data,record_path=["locations"]) # + slideshow={"slide_type": "fragment"} # Flattening the data flat_confirmed = pd.json_normalize(data=data['locations']) flat_confirmed.set_index('country', inplace=True) # + slideshow={"slide_type": "skip"} # Define a function to drop the history.prefix # Create function drop_prefix def drop_prefix(self, prefix): self.columns = self.columns.str.lstrip(prefix) return self # Call function pd.core.frame.DataFrame.drop_prefix = drop_prefix # Define function which removes history. 
prefix, and orders the column dates in ascending order def order_dates(flat_df): # Drop prefix flat_df.drop_prefix('history.') flat_df.drop_prefix("coordinates.") # Isolate dates columns flat_df.iloc[:,6:].columns = pd.to_datetime(flat_df.iloc[:,6:].columns) # Transform to datetim format sub = flat_df.iloc[:,6:] sub.columns = pd.to_datetime(sub.columns) # Sort sub2 = sub.reindex(sorted(sub.columns), axis=1) sub3 = flat_df.reindex(sorted(flat_df.columns),axis=1).iloc[:,-5:] # Concatenate final = pd.concat([sub2,sub3], axis=1, sort=False) return final # + [markdown] slideshow={"slide_type": "slide"} # ## Formatting the data a bit more # + slideshow={"slide_type": "fragment"} # Apply function final_confirmed = order_dates(flat_confirmed) country = "Canada" by_prov = final_confirmed[final_confirmed.index==country].set_index("province").T.iloc[:-4,] by_prov["TotalDailyCase"] = by_prov.sum(axis=1) # + slideshow={"slide_type": "fragment"} display(by_prov.tail(1)) # + slideshow={"slide_type": "fragment"} # This variable contains data on COVID 19 daily cases non_cumulative_cases = by_prov.diff(axis=0) display(non_cumulative_cases.tail(1)) # + [markdown] slideshow={"slide_type": "slide"} # ## Fitting our model into the real data # # We will begin by attempting a first guess - you can tinker with the values to get something close to the real data. # + slideshow={"slide_type": "slide"} # Initial number of infected and recovered individuals, I0 and R0. S0, E0,I0, R0,D0 = 37000000,0,1,0,0 # Total population, N. N = S0 + E0 + I0 + R0 # Initial conditions vector y0 = [S0,E0, I0, R0,D0] t = np.linspace(0, len(non_cumulative_cases["TotalDailyCase"]), len(non_cumulative_cases["TotalDailyCase"])) # Integrate the SEIR equations over the time grid, t. Delta = 0.01# natural birth rate. Set to zero for simplicity mu = 0.06 # natural death rate. Set to zero for simplicity alpha = 0.009 # death rate due to disease beta = 0.7# an interaction parameter. Rate for susceptible to exposed. epsilon = 0.23# rate from exposed to infectious gamma = 0.15 # rate from infectious to recovered (We expect this to be bigger than mu) numerator = beta*epsilon denominator = (alpha + gamma + mu)*(epsilon + mu) print("R_0 is equal to", numerator/denominator) ret = odeint(deriv, y0, t, args=(Delta, beta, mu, epsilon,gamma,alpha)) S, E,I, R ,D= ret.T S,E,I,R,D = np.ceil(S),np.ceil(E),np.ceil(I),np.ceil(R),np.ceil(D) # Let's add a date seir_simulation = pd.DataFrame({"Susceptible":S,"Exposed":E,"Infectious":I,"Recovered":R,"Time (days)":t}) seir_simulation['date'] = pd.date_range(start='01/24/2020', periods=len(seir_simulation), freq='D') # + slideshow={"slide_type": "slide"} non_cumulative_cases = by_prov.diff(axis=0) trace3 = go.Scatter(x = non_cumulative_cases.index,y=non_cumulative_cases["TotalDailyCase"]) trace2 = go.Scatter(x = seir_simulation["date"],y=seir_simulation["Infectious"],yaxis='y2') layout = go.Layout( title= ('First guess to fit model: infectious against number of reported cases in ' + str(country)), yaxis=dict(title='Daily Number of Reported Infections',\ titlefont=dict(color='blue'), tickfont=dict(color='blue')), yaxis2=dict(title='Number of infectious members (our model)', titlefont=dict(color='red'), \ tickfont=dict(color='red'), overlaying='y', side='right'), showlegend=False) fig = go.Figure(data=[trace3,trace2],layout=layout) fig.show() # + slideshow={"slide_type": "notes"} # The SEIR model differential equations. 
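# (for reference, these are the equations implemented by deriv below)
#   dS/dt = Delta*N - beta*S*I/N - mu*S
#   dE/dt = beta*S*I/N - (mu + epsilon)*E
#   dI/dt = epsilon*E - (gamma + mu + alpha)*I
#   dR/dt = gamma*I - mu*R
#   dD/dt = alpha*I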
def deriv(y, t, Delta, beta, mu, epsilon,gamma,alpha): S, E, I, R, D = y N = S + E + I + R dS = Delta*N - beta*S*I/N - mu*S dE = beta*S*I/N - (mu + epsilon)*E dI = epsilon*E - (gamma + mu + alpha)*I dR = gamma*I - mu*R dD = alpha*I return [dS,dE, dI, dR, dD] # + slideshow={"slide_type": "skip"} # !conda install -y -c conda-forge symfit # + slideshow={"slide_type": "skip"} from symfit import variables, Parameter, ODEModel, Fit, D import numpy as np tdata = [i for i in range(1,len(seir_simulation["date"]))] adata = [seir_simulation["Infectious"].to_list()[i] for i in range(1,len(seir_simulation["Infectious"].to_list()))] # tdata = [i for i in range(1,len(non_cumulative_cases.index))] # adata = [non_cumulative_cases["TotalDailyCase"].to_list()[i] for i in range(1,len(non_cumulative_cases["TotalDailyCase"].to_list()))] s,e,i,r, d,t = variables('S,E,I,R,D,t') Delta = Parameter('Delta', 0.01/N) mu = Parameter("mu",0.06) beta = Parameter("beta",0.7) epsilon = Parameter("epsilon",0.23) gamma = Parameter("gamma",0.15) alpha = Parameter("alpha",0.009) model_dict = { D(s, t): Delta*N - beta*s*i/N - mu*s, D(e, t): beta*s*i/N - (mu + epsilon)*e, D(i,t): epsilon*e - (gamma + mu + alpha)*i, D(r,t): gamma*i - mu*r, D(d,t): alpha*i } ode_model = ODEModel(model_dict, initial={t: 0.0, s:S0, e:E0, i:I0, r:R0,d:D0}) fit = Fit(ode_model, t=tdata, S=None,E=None,I=adata,R=None,D=None) fit_result = fit.execute() print(ode_model) print(fit_result) tvec = np.linspace(0, len(seir_simulation["date"]),len(seir_simulation["date"])) plt.figure(figsize=(10,10)) SEIRD = ode_model(t=tvec, **fit_result.params) plt.plot(tvec, I, label='[Infectious]') #plt.plot(tvec, E, label='[E]') plt.scatter(tdata, adata) plt.legend() plt.grid(True) plt.show() # + slideshow={"slide_type": "skip"} from symfit import variables, Parameter, ODEModel, Fit, D import numpy as np tdata = [i for i in range(1,len(seir_simulation["date"]))] adata = [seir_simulation["Infectious"].to_list()[i] +0.0001 for i in range(1,len(seir_simulation["Infectious"].to_list()))] # tdata = [i for i in range(1,len(non_cumulative_cases.index))] # adata = [non_cumulative_cases["TotalDailyCase"].to_list()[i] for i in range(1,len(non_cumulative_cases["TotalDailyCase"].to_list()))] s,e,i,r, d,t = variables('S,E,I,R,D,t') Delta = Parameter('Delta', 0.01) mu = Parameter("mu",0.06) beta = Parameter("beta",0.7) epsilon = Parameter("epsilon",0.23) gamma = Parameter("gamma",0.15) alpha = Parameter("alpha",0.009) model_dict = { D(s, t): Delta*N - beta*s*i/N - mu*s, D(e, t): beta*s*i/N - (mu + epsilon)*e, D(i,t): epsilon*e - (gamma + mu + alpha)*i, D(r,t): gamma*i - mu*r, D(d,t): alpha*i } ode_model = ODEModel(model_dict, initial={t: 0.0, s:S0, e:E0, i:I0, r:R0,d:D0}) fit = Fit(ode_model, t=tdata, S=None,E=None,I=adata,R=None,D=None) fit_result = fit.execute() print(ode_model) print(fit_result) tvec = np.linspace(0, len(seir_simulation["date"]),len(seir_simulation["date"])) plt.figure(figsize=(10,10)) SEIRD = ode_model(t=tvec, **fit_result.params) plt.plot(tvec, I, label='[Infectious]') #plt.plot(tvec, E, label='[E]') plt.scatter(tdata, adata) plt.legend() plt.grid(True) plt.show() # + slideshow={"slide_type": "skip"} tdata = [i for i in range(1,len(non_cumulative_cases.index))] adata = [non_cumulative_cases["TotalDailyCase"].to_list()[i] for i in range(1,len(non_cumulative_cases["TotalDailyCase"].to_list()))] N= 30000 s,e,i,r, d,t = variables('S,E,I,R,D,t') Delta = Parameter('Delta', value=0.01, min=0.01, max=1.0) mu = Parameter("mu",value=0.06,min=0.01, max=1.0) beta = 
Parameter("beta",value=0.7,min=0.01, max=1.0) epsilon = Parameter("epsilon",value=0.23,min=0.01, max=1.0) gamma = Parameter("gamma",value=0.15,min=0.01, max=1.0) alpha = Parameter("alpha",value=0.009,min=0.01, max=1.0) model_dict = { D(s, t): Delta - beta*s*i - mu*s, D(e, t): beta*s*i - (mu + epsilon)*e, D(i,t): epsilon*e - (gamma + mu + alpha)*i, D(r,t): gamma*i - mu*r, D(d,t): alpha*i } ode_model = ODEModel(model_dict, initial={t: 0.0, s:30000, e:E0, i:I0, r:R0,d:D0}) fit = Fit(ode_model, t=tdata, S=None,E=None,I=adata,R=None,D=None) fit_result = fit.execute() print(ode_model) print(fit_result) tvec = np.linspace(0, len(seir_simulation["date"]),len(seir_simulation["date"])) plt.figure(figsize=(10,10)) SEIRD = ode_model(t=tvec, **fit_result.params) plt.plot(tvec, I, label='[Infectious]') #plt.plot(tvec, E, label='[E]') plt.scatter(tdata, adata) plt.legend() plt.grid(True) plt.show() # + [markdown] slideshow={"slide_type": "slide"} # ## Model's Limitations: # # 1. Our model does not capture the "second" wave we observe in real data. # # 2. Our model assumes immunity post recovery - which is yet to be proven or disproven. # # 3. Out model does not take into account inner circles having higher probability of exposure and infection when a member is infectious. # # 4. Our model does not take into account a shift in activities and their impact on spread of disease. # # 5. Our model does not take into account other factors, such as age, immunodeficiencies, and groups who might be more impacted than others. # # 6. Model is extremely sensitive to perturbations - small changes in parameters lead to significant changes in number of people in Exposed and Infected categories. # + [markdown] slideshow={"slide_type": "slide"} # ## Data's Limitations: # # 1. Infected individuals are those who got tested and obtained a positive result - it does not take into account actual cases that were never reported. # # 2. Infected individuals present symptoms - difficult to measure asymptomatic transmission. # # 3. Data does not represent accurately whether report is from the same individual at different times. # # 4. Data is based on test accuracy - a false negative means there might be infected people who tested negative, similar to a false positive, i.e. people who are not infected who test as if they were. # # + [markdown] slideshow={"slide_type": "slide"} # ## Session III Take Away # # In this session we learned: # # 1. Exploring different values of $\Delta, \beta, \alpha, \gamma, \epsilon$ results in significant differences in the number of infectious people. # # 2. Our model is limited in that it only accounts for a single wave of cases, assumes immunity after recovery, and does not take into account age, health and activity differences in the population. # # 3. Data also brings limitations - as not all true cases are reported or captured in testing. # # 4. It is unclear how significant is the role of asymptomatic transmission from either the data or our model, would be interesting to add a parameter for asymptomatic transmission as well as indirect transmission (touching infected surfaces and then touching one's face for example). # + [markdown] slideshow={"slide_type": "skip"} # ## Further reading # # https://people.maths.bris.ac.uk/~madjl/course_text.pdf # # Infectious Disease Modelling https://towardsdatascience.com/infectious-disease-modelling-beyond-the-basic-sir-model-216369c584c4 # # Model adapted from ., ., , , A Simulation of a COVID-19 Epidemic Based on a Deterministic SEIR Model. 
Frontiers in Public Health Vol 8, 2020 https://www.frontiersin.org/article/10.3389/fpubh.2020.00230 DOI=10.3389/fpubh.2020.00230 # + [markdown] slideshow={"slide_type": "skip"} # [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import rnnSMAP import matplotlib import matplotlib.pyplot as plt import numpy as np import imp imp.reload(rnnSMAP) rnnSMAP.reload() trainName = 'CONUSv2f1' out = trainName+'_y15_Forcing_dr60' rootDB = rnnSMAP.kPath['DB_L3_NA'] rootOut = rnnSMAP.kPath['OutSigma_L3_NA'] saveFolder = os.path.join(rnnSMAP.kPath['dirResult'], 'paperSigma') doOpt = [] doOpt.append('loadData') doOpt.append('plotConf') # doOpt.append('plotProb') matplotlib.rcParams.update({'font.size': 14}) matplotlib.rcParams.update({'lines.linewidth': 2}) matplotlib.rcParams.update({'lines.markersize': 10}) plt.tight_layout() ################################################# # load data dsLst = list() statErrLst = list() statSigmaLst = list() statConfLst = list() statProbLst = list() for k in range(0, 3): if k == 0: # validation testName = 'CONUSv2f1' yr = [2016] if k == 1: # test testName = 'CONUSv2f1' yr = [2017] predField = 'LSTM' targetField = 'SMAP' ds = rnnSMAP.classDB.DatasetPost( rootDB=rootDB, subsetName=testName, yrLst=yr) ds.readData(var='SMAP_AM', field='SMAP') ds.readPred(rootOut=rootOut, out=out, drMC=100, field='LSTM') statErr = ds.statCalError(predField='LSTM', targetField='SMAP') statSigma = ds.statCalSigma(field='LSTM') statConf = ds.statCalConf(predField='LSTM', targetField='SMAP') statProb = ds.statCalProb(predField='LSTM', targetField='SMAP') dsLst.append(ds) statErrLst.append(statErr) statSigmaLst.append(statSigma) statConfLst.append(statConf) statProbLst.append(statProb) # - figTitleLst = ['Validation', 'Test'] fig, axes = plt.subplots( ncols=len(figTitleLst), figsize=(12, 6), sharey=True) sigmaStrLst = ['sigmaX', 'sigmaMC', 'sigma'] for iFig in range(0, 2): statConf = statConfLst[iFig] figTitle = figTitleLst[iFig] plotLst = list() for k in range(0, len(sigmaStrLst)): plotLst.append(getattr(statConf, 'conf_'+sigmaStrLst[k])) legendLst = [r'$p_{x}$', r'$p_{mc}$', r'$p_{comb}$'] _, _, out = rnnSMAP.funPost.plotCDF( plotLst, ax=axes[iFig], legendLst=legendLst, cLst='grbm', xlabel='Predicting Probablity', ylabel=None, showDiff=False) axes[iFig].set_title(figTitle) print(out['rmseLst']) if iFig == 0: axes[iFig].set_ylabel('True Probablity') fig.show() saveFile = os.path.join(saveFolder, 'CONUS_conf') fig.savefig(saveFile, dpi=300) fig.savefig(saveFile+'.eps') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import pandas_profiling import pymysql tripData1 = pd.read_csv('/dltraining/datasets/Trip Data/trip_data_1.csv', nrows = 10000) tripData1.head() # + tripFare1 = pd.read_csv('/dltraining/datasets/Trip Fare/trip_fare_1.csv', nrows = 10000) #remove the leading spaces tripFare1.columns = ['medallion', 'hack_license', 'vendor_id', 'pickup_datetime', 'payment_type', 'fare_amount', 'surcharge', 
'mta_tax', 'tip_amount', 'tolls_amount', 'total_amount'] # - tripFare1.head() # # Database Connection # + host="nyctaxi.cq3rwooo9ghy.ap-southeast-2.rds.amazonaws.com" port=3306 dbname="NYCTaxiDB" user="Administrator" password="" conn = pymysql.connect(host, user=user,port=port, passwd=password, db=dbname) # create object to talk to the db cursorObject = conn.cursor() # - # check to see if there are any tables sqlQuery = "show tables" cursorObject.execute(sqlQuery) for x in cursorObject: print(x) sqlQuery = "CREATE TABLE tripData(id INT AUTO_INCREMENT PRIMARY KEY, medallion char(32), hack_license char(32), vendor_id varchar(32), rate_code int, store_and_fwd_flag char(5), pickup_datetime datetime, dropoff_datetime datetime , passenger_count int, trip_time_in_secs int, trip_distance float(10,5), pickup_longitude float(12, 7), pickup_latitude float(12, 7), dropoff_longitude float(12, 7), dropoff_latitude float(12, 7))" cursorObject.execute(sqlQuery) sqlQuery = "CREATE TABLE fareData(id INT AUTO_INCREMENT PRIMARY KEY, medallion char(32), hack_license char(32), vendor_id varchar(32), pickup_datetime datetime, payment_type varchar(32), fare_amount float(10,3), surcharge float(10,3), mta_tax float(10,3), tip_amount float(10,3), tolls_amount float(10,3), total_amount float(10,3))" cursorObject.execute(sqlQuery) sqlQuery = "show tables" cursorObject.execute(sqlQuery) for x in cursorObject: print(x) tripFare1.to_sql(con=conn, name='fareData_1', if_exists='append', index=False) sqlQuery = "select count(*) from fareData" cursorObject.execute(sqlQuery) for x in cursorObject: print(x) sqlQuery="ALTER TABLE NYCTaxiDB.fareData AUTO_INCREMENT =1" cursorObject.execute(sqlQuery) for x in cursorObject: print(x) sqlQuery = "SELECT name FROM sqlite_master WHERE type='table' AND name='fareData_1'" cursorObject.execute(sqlQuery) for x in cursorObject: print(x) pandas_profiling.ProfileReport(tripFare1) # close the connection conn.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] deletable=true editable=true # # # # Interactive image # # The following interactive widget is intended to allow the developer to explore # images drawn with different parameter settings. # # + deletable=true editable=true # preliminaries from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets from jp_doodle import dual_canvas from IPython.display import display # + deletable=true editable=true # Display a canvas with an image which can be adjusted interactively # Below we configure the canvas using the Python interface. # This method is terser than using Javascript, but the redraw operations create a jerky effect # because the canvas displays intermediate states due to roundtrip messages # between the Python kernal and the Javascript interpreter. image_canvas = dual_canvas.SnapshotCanvas("interactive_image.png", width=320, height=220) image_canvas.display_all() def change_image(x=0, y=0, w=250, h=50, dx=-50, dy=-25, degrees=0, sx=30, sy=15, sWidth=140, sHeight=20, whole=False ): #sx:30, sy:15, sWidth:140, sHeight:20 if whole: sx = sy = sWidth = sHeight = None canvas = image_canvas with canvas.delay_redraw(): # This local image reference works in "classic" notebook, but not in Jupyter Lab. 
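# Reset the canvas, register the image once under the name "mandrill", then draw it
# with the chosen placement (x, y, w, h, dx, dy, degrees) and source crop (sx, sy, sWidth, sHeight),
# and finally add axes and a reference circle before refitting the view.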
canvas.reset_canvas() mandrill_url = "../mandrill.png" image_canvas.name_image_url("mandrill", mandrill_url) canvas.named_image("mandrill", x, y, w, h, degrees, sx, sy, sWidth, sHeight, dx=dx, dy=dy, name=True) canvas.fit() canvas.lower_left_axes( max_tick_count=4 ) canvas.circle(x=x, y=y, r=10, color="#999") canvas.fit(None, 30) #canvas.element.invisible_canvas.show() change_image() w = interactive( change_image, x=(-100, 100), y=(-100,100), dx=(-300, 300), dy=(-300,300), w=(-300,300), h=(-300,300), degrees=(-360,360), sx=(0,600), sy=(0,600), sWidth=(0,600), sHeight=(0,600), ) display(w) # + deletable=true editable=true # Display a canvas with an image which can be adjusted interactively # Using the Javascript interface: # This approach requires more typing because Python values must # be explicitly mapped to Javascript variables. # However the canvas configuration is smooth because no intermediate # results are shown. image_canvas2 = dual_canvas.SnapshotCanvas("interactive_image.png", width=320, height=220) image_canvas2.display_all() def change_rect_js(x=0, y=0, w=250, h=50, dx=-50, dy=-25, degrees=0, sx=30, sy=15, sWidth=140, sHeight=20, whole=False ): #sx:30, sy:15, sWidth:140, sHeight:20 if whole: sx = sy = sWidth = sHeight = None canvas = image_canvas2 canvas.js_init(""" element.reset_canvas(); var mandrill_url = "../mandrill.png"; element.name_image_url("mandrill", mandrill_url); element.named_image({image_name: "mandrill", x:x, y:y, dx:dx, dy:dy, w:w, h:h, degrees:degrees, sx:sx, sy:sy, sWidth:sWidth, sHeight:sHeight}); element.fit(); element.lower_left_axes({max_tick_count: 4}); element.circle({x:x, y:y, r:5, color:"#999"}); element.fit(null, 30); """, x=x, y=y, dx=dx, dy=dy, w=w, h=h, degrees=degrees, sx=sx, sy=sy, sWidth=sWidth, sHeight=sHeight) w = interactive( change_rect_js, x=(-100, 100), y=(-100,100), dx=(-300, 300), dy=(-300, 300), w=(-300,300), h=(-300,300), fill=True, lineWidth=(0,20), degrees=(-360,360), red=(0,255), green=(0,255), blue=(0,255), alpha=(0.0,1.0,0.1) ) display(w) # + deletable=true editable=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # In this notebook, we are going to look through the HIV-1 protease and reverse transcriptase sequence data. 
# The goal is to determine a strategy for downsampling sequences for phylogenetic tree construction # + from Bio import SeqIO import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # - proteases = [s for s in SeqIO.parse('sequences/HIV1-protease.fasta', 'fasta')] len(proteases) rts = [s for s in SeqIO.parse('sequences/HIV1-RT.fasta', 'fasta')] len(rts) # + def extract_metadata(sequences): """ The metadata structure is as such: [Subtype].[Country].[Year].[Name].[Accession] """ prot_metadata = [] for s in sequences: metadata = s.id.split('.') data = dict() data['subtype'] = metadata[0] data['country'] = metadata[1] data['year'] = metadata[2] data['name'] = metadata[3] data['accession'] = metadata[4] prot_metadata.append(data) return pd.DataFrame(prot_metadata).replace('-', np.nan) rt_metadf = extract_metadata(rts) protease_metadf = extract_metadata(proteases) # - rt_metadf.to_csv('csv/RT-all_metadata.csv') rt_metadf protease_metadf.to_csv('csv/Protease-all_metadata.csv') protease_metadf rt_metadf['year'].value_counts().plot(kind='bar') rt_metadf['year'].value_counts().to_csv('csv/RT-num_per_year.csv') fig = plt.figure(figsize=(15,3)) rt_metadf['country'].value_counts().plot(kind='bar') rt_metadf['country'].value_counts().to_csv('csv/RT-num_per_country.csv') protease_metadf['year'].value_counts().plot(kind='bar') protease_metadf['year'].value_counts().to_csv('csv/Protease-num_per_year.csv') fig = plt.figure(figsize=(15,3)) protease_metadf['country'].value_counts().plot(kind='bar') protease_metadf['country'].value_counts().to_csv('csv/Protease-num_per_country.csv') # # Downsampling Notes # # After discussion with Nichola about the downsampling strategy, we have settled on the following: # # - Use sequences from the years 2003-2007 inclusive. # - Use sequences from the following countries: US, BR, JP, ZA, ES # + # Code for downsampling. # Recall that the metadata structure is as such: # # [Subtype].[Country].[Year].[Name].[Accession] # # We will use a dictionary to store the downsampled sequences. import numpy as np from collections import defaultdict from itertools import product years = np.arange(2003, 2008, 1) countries = ['US', 'BR', 'JP', 'ZA', 'ES'] proteases_grouped = defaultdict() for year, country in product(years, countries): proteases_grouped[(year, country)] = [] # Group the sequences first. for s in proteases: country = s.id.split('.')[1] try: year = int(s.id.split('.')[2]) except ValueError: year = 0 if country in countries and year in years: proteases_grouped[(year, country)].append(s) # + import random random.seed(1) # for reproducibility # Perform the downsampling proteases_downsampled = defaultdict(list) for k, v in proteases_grouped.items(): proteases_downsampled[k] = random.sample(v, 10) # Write the downsampled sequences to disk. protease_sequences = [] for k, v in proteases_downsampled.items(): protease_sequences.extend(v) SeqIO.write(protease_sequences, 'sequences/proteases_downsampled.fasta', 'fasta') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Quantum chemistry # # In this laboratory you will be using the models we developed last week to gain a **quantum intuition** of bonding # # We will need to add three more quantum concepts to the list. 
# - quantisation - only discrete energy levels are allowed # - confinement - as the position becomes more defined the energy/momentum becomes less well defined # - **tunneling - quantum states can extend outside classical boundaries** # - **interference - quantum states can constructively and destructively interfere concentrating charge between bound states** # - **superposition - a single quantum particle can interfere with itself while not being disturbed/measured** # # We need a double well to describe two quantum system interacting with each other through tunneling and interference. # # We will be running numerical simulations. These are similar to the chemical equilbrium laboratory that we undertook in week 9. # # There are limitations to numerical approaches so if you see results that look incorrect the correct solution may not have been attained. Always think critically about numerical simulations and determine whether they are following the correct behaviour. One way to do this is to use a system that has an exact solution so we can compare our numerical solution to determine whether it is correct. We will be using the harmonic oscillator and the particle in a box solutions to check our numerical solutions. # # At the end we will use some quantum chemistry software to compute some properties of diatomics and carbon dioxide. #Importing the important libraries # %matplotlib notebook #The Schrodinger solver by and will be used from utils3 import * #This is another file with functions in it to clean up the plotting. import numpy as np from IPython.display import clear_output import ipywidgets as widgets # atomic units hbar=1.0 m=1.0 #set precision of numerical approximation steps=2000 # + ######## # PARTICLE IN A DOUBLE FINITE WELL OF WIDTHS(W1 and W2), DIFFERENT DEPTHS # (D1 and D2) AND DISTANCE (B) APART ######## Case=4 ######## # INPUT ######## #set depths and widths of wells and the well separation #W1=1.0 # this value must be between 0.5 and 10 W1 = widgets.FloatSlider(value=0.5,min=0,max=10,step=0.5, description="W1") D1 = widgets.FloatSlider(value=60.0,min=0,max=500,step=10, description="D1") W2 = widgets.FloatSlider(value=0.5,min=0,max=10,step=0.5, description="W2") D2 = widgets.FloatSlider(value=70.0,min=0,max=500,step=10, description="D2") B = widgets.FloatSlider(value=0.2,min=0.1,max=2,step=0.1, description="B") squared_2 = widgets.Checkbox(value=False,description="Probability") #W2=0.5 # this value must be between 0.5 and 10 #D1=60.0 # this value must be between 30 and 500 #D2=70.0 # this value must be between 30 and 500 #B=0.2 # this value must be between 0.1 and 10 ######## # CODE ######## # print output #output(Case,['Well 1 Width','Well 2 Width','Well 1 Depth','Well 2 Depth','Well Separation'],[W1,W2,D1,D2,B*2],E,n) # create plot @debounce(0.1) #https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html#Debouncing def handle_slider_change_2(change): clear_output() #set length variable for xvec A=2.0*((W1.value+W2.value)+B.value) #divide by two so a separation from -B to B is of input size") B_=B.value/2.0 #create x-vector from -A to A") xvec=np.linspace(-A,A,steps,dtype=np.float_) #get step size") h=xvec[1]-xvec[0] #create the potential from step function") U=-D1.value*(step_func(xvec+W1.value+B_)-step_func(xvec+B_))-D2.value*\ (step_func(xvec-B_)-step_func(xvec-W2.value-B_)) #create Laplacian via 3-point finite-difference method") Laplacian=(-2.0*np.diag(np.ones(steps))+np.diag(np.ones(steps-1),1)\ +np.diag(np.ones(steps-1),-1))/(float)(h**2) #create the 
Hamiltonian") Hamiltonian=np.zeros((steps,steps)) [i,j]=np.indices(Hamiltonian.shape) Hamiltonian[i==j]=U Hamiltonian+=(-0.5)*((hbar**2)/m)*Laplacian #diagonalize the Hamiltonian yielding the wavefunctions and energies") E,V=diagonalize_hamiltonian(Hamiltonian) #determine number of energy levels to plot (n)") n=0 while E[n]<0: n+=1 display(W1,D1,W2,D2,B,squared_2,button_2) finite_well_plot(E,V,xvec,steps,n,Case,U,param=[W1,D1,W2,D2,B],ask_to_save=False,ask_squared=squared_2.value) #plt.xlim((-5,5)) button_2 = widgets.Button(description="Run") button_2.on_click(handle_slider_change_2) display(W1,D1,W2,D2,B,squared_2,button_2) # - # # Lab report # # Briefly introduce the Schrodinger equation ($-\frac{\hbar}{2m}\frac{d^2}{dx^2}\Psi+V(x)\Psi=E\Psi$) and how it can be used in chemistry to describe bonds vibrate and electrons interact (2 paragraphs). # # ## Section 1 - Quantum confinement # # Tasks 1 through to 6 from week 12 # # ## Section 2 - Quantum interference # # For the second section on quantum interference complete the following tasks using screenshots of the simulated plot to aid your answers. # # **Task 1** - Plot the single well solution by setting D1=0.0, W2=0.5 and D2=200. Compare this solution to the harmonic oscillator from last weeks lab. # # **Task 2** - Plot the probability for the solution in Task 1 (this can take some time). The probability of finding a particle is outside of the potential energy surface a classically forbidden region. This is called tunneling where the wavefunction decays exponentially in the barrier. How does tunneling compare between the lowest energy bound state and the highest energy bound state? # # **Task 3** - To further explore tunneling we are going to look at two mismatched quantum systems W1=0.5, D1=50.0, W2=0.5, D2=100. Plot the wells far from each other B=2.0. Compare this system with the wells close together B=0.4. Which states can tunnel into the other well? How much does the energy decrease with this tunneling? # # **Task 4** - Let's now compare a matched double well to look at interference W1=0.5, D1=100, W2=0.5, D2=100. Plot the results far away B=2.0. You will see that the energy levels are degenerate (the same). However, one is out of phase with the other. These are non-interacting systems. Plot the results for B=0.5. The top two states will be constructively and destructively interfering (one state gets high, less stable, and the other lower in energy, more stable). # # **Task 5** - Plot the probability of the Task 4 system. Which state has a higher probability of being in between the wells and is constructively interfering? Explain how this constructive interference holds the nuclei together in a bond. # + """ 23. How to convert year-month string to dates corresponding to the 4th day of the month? """ """ Difficiulty Level: L2 """ """ Change ser to dates that start with 4th of the respective months. """ """ Input """ """ ser = pd.Series(['Jan 2010', 'Feb 2011', 'Mar 2012']) """ """ Desired Output """ """ 0 2010-01-04 1 2011-02-04 2 2012-03-04 dtype: datetime64[ns] """ import pandas as pd # Input ser = pd.Series(['Jan 2010', 'Feb 2011', 'Mar 2012']) # Solution 1 from dateutil.parser import parse # Parse the date ser_ts = ser.map(lambda x: parse(x)) # Construct date string with date as 4 ser_datestr = ser_ts.dt.year.astype('str') + '-' + ser_ts.dt.month.astype('str') + '-' + '04' # Format it. 
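# (Added aside, not one of the exercise's reference solutions.) A fully
# vectorised variant with the same result: parse the 'Mon YYYY' strings
# directly, which lands on the 1st of each month, then shift by a fixed
# 3-day offset to reach the 4th. `ser_alt` is an illustrative name only.
ser_alt = pd.to_datetime(ser, format='%b %Y') + pd.Timedelta(days=3)
# Solution 1 continues: format the constructed date strings.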
[parse(i).strftime('%Y-%m-%d') for i in ser_datestr] # Solution 2 ser.map(lambda x: parse('04 ' + x)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Quantile Tests # # Illustrates the various problems with quantiles and shows that the `Aggregate` and `Portfolio` classes handle them correctly. # + import sys sys.path.append("c:\\s\\telos\\python\\aggregate_project") sys.path.append("c:\\s\\telos\\python\\aggregate_extensions_project") import aggregate as agg import aggregate_extensions as agg_ext import aggregate_extensions.allocation as aea import numpy as np import matplotlib.pyplot as plt # from matplotlib.ticker import MultipleLocator, FormatStrFormatter, StrMethodFormatter, FuncFormatter, \ # AutoMinorLocator, MaxNLocator, NullFormatter, FixedLocator, FixedFormatter import pandas as pd # from jinja2 import Template # import seaborn as sns from IPython.core.display import HTML import itertools # sns.set('paper', 'ticks', 'plasma', 'serif') # sns.set_palette('muted', 8) # %config InlineBackend.figure_format = 'svg' pd.set_option('display.max_rows', 500) # pd.set_option('display.max_cols', 500) # https://github.com/matplotlib/jupyter-matplotlib # %matplotlib widget # # %matplotlib from importlib import reload import logging aggdevlog = logging.getLogger('aggdev.log') aggdevlog.setLevel(logging.INFO) print(agg.__file__) # print(agg.__file__, agg.__version__) # import os # print(os.environ['CONDA_DEFAULT_ENV']) # print(os.getcwd()) # # x = !dir *.ipynb # - uw = agg.Underwriter(update=False, create_all=False) port = uw(''' port tester agg Athin 1 claim sev dhistogram xps [0 9 10] [.98 .01 .01] fixed agg Dthick 1 claim sev dhistogram xps [0 1 90] [.98 .01 .01] fixed ''') port.update(bs=1, log2=8, remove_fuzz=True) port.q(0.98, kind='lower'), port.q(0.98, kind='upper') port.q_temp # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.9 64-bit (''.venv.mclatte'': venv)' # language: python # name: python3 # --- import joblib import numpy as np import os import random import torch import wandb from mclatte.test_data.diabetes import generate_data from test_utils import ( test_skimmed_mclatte, test_semi_skimmed_mclatte, test_mclatte, test_rnn, test_losses, ) random.seed(509) np.random.seed(509) torch.manual_seed(509) # ## Data Preparation N, M, H, R, D, K, C, X, M_, Y_pre, Y_post, A, T = joblib.load( os.path.join(os.getcwd(), f"data/diabetes/hp_search.joblib") ) constants = dict(m=M, h=H, r=R, d=D, k=K, c=C) # ## Modelling wandb.init(project="mclatte-test", entity="jasonyz") # ### McLatte # #### Vanilla # + # print(pd.read_csv(os.path.join(os.getcwd(), 'results/mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0]) # - mclatte_config = { "encoder_class": "lstm", "decoder_class": "lstm", "hidden_dim": 8, "batch_size": 64, "epochs": 100, "lr": 0.021089, "gamma": 0.541449, "lambda_r": 0.814086, "lambda_d": 0.185784, "lambda_p": 0.081336, } # #### Semi-Skimmed # + # print(pd.read_csv(os.path.join(os.getcwd(), 'results/semi_skimmed_mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0]) # - semi_skimmed_mclatte_config = { "encoder_class": "lstm", "decoder_class": "lstm", "hidden_dim": 4, "batch_size": 64, "epochs": 100, "lr": 0.006606, "gamma": 0.860694, "lambda_r": 79.016676, "lambda_d": 1.2907, 
"lambda_p": 11.112241, } # #### Skimmed # + # print(pd.read_csv(os.path.join(os.getcwd(), 'results/skimmed_mclatte_hp.csv')).sort_values(by='valid_loss').iloc[0]) # - skimmed_mclatte_config = { "encoder_class": "lstm", "decoder_class": "lstm", "hidden_dim": 16, "batch_size": 64, "epochs": 100, "lr": 0.000928, "gamma": 0.728492, "lambda_r": 1.100493, "lambda_p": 2.108935, } # ### Baseline RNN # + # print(pd.read_csv(os.path.join(os.getcwd(), 'results/baseline_rnn_hp.csv')).sort_values(by='valid_loss').iloc[0]) # - rnn_config = { "rnn_class": "gru", "hidden_dim": 64, "seq_len": 2, "batch_size": 64, "epochs": 100, "lr": 0.006321, "gamma": 0.543008, } # ### SyncTwin # print(pd.read_csv(os.path.join(os.getcwd(), 'results/synctwin_hp.csv')).sort_values(by='valid_loss').iloc[0]) synctwin_config = { "hidden_dim": 128, "reg_B": 0.522652, "lam_express": 0.163847, "lam_recon": 0.39882, "lam_prognostic": 0.837303, "tau": 0.813696, "batch_size": 32, "epochs": 100, "lr": 0.001476, "gamma": 0.912894, } # ## Test Models N_TEST = 5 def run_tests(): mclatte_losses = [] semi_skimmed_mclatte_losses = [] skimmed_mclatte_losses = [] rnn_losses = [] for i in range(1, N_TEST + 1): ( _, train_data, test_data, ) = generate_data(return_raw=False) skimmed_mclatte_losses.append( test_skimmed_mclatte( skimmed_mclatte_config, constants, train_data, test_data, run_idx=i, ) ) semi_skimmed_mclatte_losses.append( test_semi_skimmed_mclatte( semi_skimmed_mclatte_config, constants, train_data, test_data, run_idx=i, ) ) mclatte_losses.append( test_mclatte( mclatte_config, constants, train_data, test_data, run_idx=i, ) ) rnn_losses.append( test_rnn( rnn_config, train_data, test_data, run_idx=i, ) ) joblib.dump( ( mclatte_losses, semi_skimmed_mclatte_losses, skimmed_mclatte_losses, rnn_losses, ), f"results/test/diabetes.joblib", ) run_tests() # #### Check finished runs results def print_losses(): all_losses = joblib.load(f"results/test/diabetes.joblib") for losses in all_losses: print(f"{np.mean(losses):.3f} ({np.std(losses):.3f})") print_losses() # ### Statistical Testing LOSS_NAMES = ["McLatte", "Semi-Skimmed McLatte", "Skimmed McLatte", "RNN", "SyncTwin"] losses = joblib.load(f"results/test/diabetes.joblib") test_losses(losses, LOSS_NAMES) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt from sklearn.cluster import KMeans from sklearn.datasets.samples_generator import make_blobs import warnings warnings.filterwarnings("ignore") # + x, y = make_blobs(n_samples = 200, centers = 5) plt.scatter(x[:,0], x[:,1]) # + kmeans = KMeans(n_clusters = 5) kmeans.fit(x) # - previsoes = kmeans.predict(x) plt.scatter(x[:,0], x[:,1], c = previsoes) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np from scipy.stats import chi2_contingency import plotly plotly.offline.init_notebook_mode(connected=True) import plotly.graph_objs as go import plotly.io as pio from sklearn import preprocessing # - data = pd.read_csv('../sbdc_data_merged.csv') # + #data.crosstab('Region','Business Status') # - sankey = data.groupby(['Region','center_region'])['center_region'].count().to_frame().rename({'center_region':'count'},axis=1).reset_index() 
sankey.head() # + region_le = preprocessing.LabelEncoder() region_le.fit(sankey['Region']) region_transform = region_le.transform(sankey['Region']) region_inverse_transform = region_le.inverse_transform(region_transform) sankey['region_position'] = region_transform center_le = preprocessing.LabelEncoder() center_le.fit(sankey['center_region']) center_transform = center_le.transform(sankey['center_region']) center_inverse_transform = center_le.inverse_transform(center_transform) sankey['center_position'] = center_transform # - region_label = list(sankey['Region'].unique()) center_label = list(sankey['center_region'].unique()) label = [] label.extend(region_label) label.extend(center_label) sankey['center_position'] = sankey['region_position'].max() + sankey['center_position'] + 1 # + data = dict( type='sankey', orientation = "h", #valueformat = ".4f", node = dict( pad = 100, thickness = 30, line = dict( color = "black", width = 0.5 ), label = label, color = "black" ), link = dict( source = sankey['region_position'], target = sankey['center_position'], value = sankey['count'], #label = inverse_transform #color = link_color )) title = str("Business Region to SBDC Center") layout = dict( title = title, font = dict( size = 20 ), width=1200, height=1200, ) fig = dict(data=[data], layout=layout) #plotly.offline.iplot(fig, validate=False) pio.write_image(fig, "{}.png".format(title)) plotly.offline.plot(fig, filename = "{}.html".format(title), auto_open=False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Cognitive Hackathon: Week 1 - Teacher's Guide # ## Day 1 # # ## Overview # # The first day is about exploring what is possible in the world of cognitive computing. The years ahead of us will see astounding growth and change in this area, and the related changes to our culture and commerce are expected to supercede even the Internet. It will be an exciting time ahead and the examples and concepts in this course provide a hint of what is to come. # # ## Objectives # # Today, demonstrate examples of cognitive computing while introducing the foundational concepts, including machine learning, models, and artificial intelligence. Explore the Azure Cognitive API in all of its facets, with special focus on Computer Vision and Text Analytics. # # Here are the demos to begin with: # # ## Teacher Demos: Cognitive Computing # # Demonstrate these first examples use cognitive Computer Vision capabilities that allow the machine to look at an image and make certain determinations. # # ## Teacher Demo: How Old (10 mins) # # In this first one, it's to guess peoples' age. Now this can be a fun party trick, like the person at the fair who guesses your age, but this technology could also be used to better estimate demographics of people visiting a particular landmark without needing to collect any other personal information, or analyze the covers of magazines to see what age group is more frequently on the cover of which magazine. # # https://www.how-old.net/ # # ### Teacher Leads Discussion (10 mins) # Discuss what's going on in the How Old cognitive app, for example: # - Facial Recognition: The app must first find the person, so it needs to know what a face looks like, then it draws a rectangle around the face. 
# - Multiple People: Sometimes there's more than one face in the frame (show example) so the app must identify two or more faces. # - Facial Features: The app analyzes each face, looking for traits that help it guess the age. Which facial features might the app be using to determine age? (It's amazing enough that a computer can tell that it's looking at a person, but even moreso that it knows if they're young or old!) # # ## Teacher Demo: What Dog (10 mins) # # How about creatures other than humans? Like dogs, for instance. It might be useful for an animal shelter to be able to determine the breed of a dog that was recently brought in to more accurately represent it to folks who are looking to adopt an animal. What-dog.net does just that: # # https://www.what-dog.net/ # # ### Teacher Leads Discussion: Visual Cognitive Computing (10 mins) # Explore aspects of visual cognitive computing by how-old.net and what-dog.net apps, their similarities and differences: # * How do you think the what-dog.net knows the species of the dogs in those pictures with such accuracy? # * How might how-old.net be able to guess at ages? # * Facial Recognition: The app must know something about the anatomy of what its looking at, likely finding the dog's face. # * Facial Features: What are the different facial features on a dog that determine breed that you wouldn't use on a human? Things like the shape of the eyes, and the length of the ears and snout. # * Color: Using pattern recognition to navigate anatomy, the app is most certainly looking at the color of the dog's fur, and may even identify a tail # # ## Teacher Presents: Cognitive Computing Concepts (10 mins) # # Cognitive systems use something called machine learning, which allows apps to learn almost like children do, by observing lots and lots of examples. Thousands of dogs are shown to this app, with the species of each dog named to the app, until it compiles an understanding of the traits of each species of dog. This understanding is stored in a construct called a model. Cognitive computing is the creation of a model using machine learning then the utilization of that model by the cognitive algorithms to make determinations and recommendations. # # Here are the main concepts: # # * Cognitive Computing - use of models created by machine learning to identify and qualify images, speech, text, etc. # * Machine Learning - the creation of models through the analysis of large amounts of data (often called "Big Data") # * Model - data representation of the computer's understanding of a data set using machine learning, used by cognitive computing APIs and apps to conduct analysis and make complex determinations # # In this course we'll be using a cognitive API which contains ready-made machine learning models that your applications can consume using REST APIs. Think of it as a shortcut. There's no need to gather the training data, create a learning algorithm, or train the model. Configuring machine learning systems to analyze large swathes of data to build models could take an entire semester. We will focus on APIs with prepared functionality to give you more time to explore the possibilities of cognitive computing. # # So what are these cognitive APIs? # # ## Teacher Presents: Cognitive Computing APIs (10 mins) # # There are quite a few features in the cognitive API we're working with: # # https://azure.microsoft.com/en-us/services/cognitive-services/directory/ # # You can do an online demo these APIs using the "Demo" link next to the APIs. 
# # See the tabs at the top of the window for our Azure Cognitive Services: # # * Vision # * Speech # * Language # * Anomaly Detection # * Search # # Each of these tabs contains a group of cognitive APIs: # # ### Vision # Extract information from images to categorize and process visual data. Perform machine-assisted moderation of images to help curate your services. # * Computer Vision # * Video Indexer # * Face # * Custom Vision # * Content Moderator # # ### Speech # Translate between speech, text, and other languages as well as identify speakers. # * Speech to Text # * Text to Speech # * Speaker Recognition # * Speech Translation # # ### Language # Build apps that comprehend text and its grammar, meaning, and emotion. # * Text Analytics # * Bing Spell Check # * Content Moderator (again) # * Translator Text # * QnA Maker # * Language Understanding # # ### Anomaly Detection # Identify problems in real time. # * Anomaly Detector # # ### Search # Add search capability for finding and identifying web pages, images, videos, and news. # * Bing Web Search # * Custom Search # * Video Search # * Image Search # * Local Business Search # * Visual Search # * Entity Search # * News Search # * Autosuggest # # ## Summary # This first workshop covered these topics: # * Cognitive computing demonstrations # * Concepts: cognitive computing, model, machine learning # * Breadth of many types of cognitive computing APIs # # [<< Week 1 Summary](https://sp19azureteachersguide-sguthals.notebooks.azure.com/j/notebooks/Week%201/README.ipynb) [Day 2 >>](https://sp19azureteachersguide-sguthals.notebooks.azure.com/j/notebooks/Week%201/Day2.ipynb] # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit # language: python # name: python3 # --- import matplotlib.pyplot as plt import pandas as pd import spectrum_utils.plot as sup import spectrum_utils.spectrum as sus import urllib.parse import pyteomics.mgf as MGF import numpy as np from pyteomics import auxiliary #https://pyteomics.readthedocs.io/en/latest/data.html#mgf # Read the predictions with MGF.read_header('../data/test2_peprec/searchResults_HCD_predictions.mgf') as reader: auxiliary.print_tree(next(reader)) # + # Get the spectrum peaks using its USI. usi = 'mzspec:PXD004732:01650b_BC2-TUM_first_pool_53_01_01-3xHCD-1h-R2:scan:41840' peaks = pd.read_csv( f'https://metabolomics-usi.ucsd.edu/csv/?usi={urllib.parse.quote(usi)}') # Create the MS/MS spectrum. precursor_mz = 718.3600 precursor_charge = 2 spectrum = sus.MsmsSpectrum(usi, precursor_mz, precursor_charge, peaks['mz'].values, peaks['intensity'].values, peptide='WNQLQAFWGTGK') # Process the MS/MS spectrum. fragment_tol_mass = 10 fragment_tol_mode = 'ppm' spectrum = (spectrum.set_mz_range(min_mz=100, max_mz=1400) .remove_precursor_peak(fragment_tol_mass, fragment_tol_mode) .filter_intensity(min_intensity=0.05, max_num_peaks=50) .scale_intensity('root') .annotate_peptide_fragments(fragment_tol_mass, fragment_tol_mode, ion_types='aby')) # Plot the MS/MS spectrum. 
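# (Added note.) By this point the spectrum has been restricted to 100-1400 m/z,
# had its precursor peak removed within a 10 ppm tolerance, been filtered to at
# most 50 peaks above 5% relative intensity, intensity-scaled with the 'root'
# method, and annotated with a/b/y fragment ions for the peptide WNQLQAFWGTGK;
# sup.spectrum below draws the annotated peaks on a matplotlib axis.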
fig, ax = plt.subplots(figsize=(12, 6)) sup.spectrum(spectrum, ax=ax) plt.show() plt.close() # + import matplotlib.pyplot as plt import pandas as pd import spectrum_utils.plot as sup import spectrum_utils.spectrum as sus import urllib.parse usi = 'mzspec:PXD004732:01650b_BC2-TUM_first_pool_53_01_01-3xHCD-1h-R2:scan:41840' peaks = pd.read_csv( f'https://metabolomics-usi.ucsd.edu/csv/?usi={urllib.parse.quote(usi)}') peaks # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import mne import os import scipy.io import listen_italian_functions import numpy as np from matplotlib import pyplot as plt import pandas as pd import pickle import warnings warnings.filterwarnings('ignore') from IPython.display import clear_output from itertools import combinations,permutations from IPython.display import clear_output data_path = os.path.dirname(os.path.dirname(os.getcwd())) subject_name = ['Alice','Andrea','Daniel','Elena','Elenora','Elisa','Federica','Francesca','Gianluca1','Giada','Giorgia', 'Jonluca','Laura','Leonardo','Linda','Lucrezia','Manu','Marco','Martina','Pagani','Pasquale','Sara', 'Silvia','Silvia2','Tommaso'] remove_first = 0.5 #seconds # - # # Read the epoches # + Tmin = 0 Tmax = 3.51 trial_len = 2 GA_epoches = [] for s in subject_name: save_path = data_path + '/python/data/coherence_epochs/'+s+'-coh-epo-'+str(Tmin)+'-' \ +str(Tmax)+'-trialLen-'+str(trial_len)+'.fif' epochs = mne.read_epochs(save_path) GA_epoches.append(epochs) print('----------------------------------------------------------------------------------------------------------------'+s) # - # # partial coherence # coherence # # # partial coherence # # + code_folding=[0, 22, 30] def coherence_preprocess_delay(epochs,remove_first,d,trial_len,extra_channels,eeg_channles,condition): if condition != 'All': E = epochs[condition].copy() else: E = epochs.copy() eeg = E.copy().pick_channels(eeg_channles) speech = E.copy().pick_channels(extra_channels) E = eeg.copy().crop(d+remove_first,d+remove_first+trial_len) S = speech.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) #E = eeg.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) #S = speech.copy().crop(d+remove_first,d+remove_first+trial_len) c = np.concatenate((E.get_data(),S.get_data()),axis=1) return c def get_coherence(epochs,sfreq,fmin,fmax,indices): con, freqs, times, n_epochs, n_tapers = mne.connectivity.spectral_connectivity(epochs, method='coh',mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax,indices=indices,faverage=True, tmin=0, mt_adaptive=False, block_size=1000,verbose='ERROR') return con def get_partialCoherence(conXY,conXR,conRY,fr): partial_coh_XY_R=[] for i in range(59): a = (abs(conXY[i,fr]-conXR[i,fr]*conRY[fr])**2) / ((1-abs(conXR[i,fr])**2)*(1-abs(conRY[fr])**2)) partial_coh_XY_R.append(a) partial_coh_XY_R = np.asarray(partial_coh_XY_R) return partial_coh_XY_R # + code_folding=[28, 43, 53] features = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] eeg_chan = GA_epoches[0].ch_names[0:59] sfreq = GA_epoches[0].info['sfreq'] delay = np.arange(-5,6) / 10 condition = ['hyper','normal','hypo','All'] condition = ['All'] features = ['envelop','lipaparature'] feat_comb = (['envelop','lipaparature'],['lipaparature','envelop']) ############################# iter_freqs = [ ('fr', 0.25, 1), ('fr', 0.5, 2), ('fr', 1, 3), ('fr', 1, 4), ('fr', 2, 6), ('fr', 4, 8), 
('fr', 8, 12), ('fr', 12, 18), ('fr', 18, 24), ('fr', 24, 40) ] fmin = [] fmax = [] for fr in range(0,len(iter_freqs)): fmin.append(iter_freqs[fr][1]) fmax.append(iter_freqs[fr][2]) ####################################### indices = [] b = (np.repeat(59,59),np.arange(0,59)) indices.append(b) b = (np.repeat(60,59),np.arange(0,59)) indices.append(b) b = (np.repeat(59,1),np.repeat(60,1)) indices.append(b) INDEX = [] b=0 for idx in range(0,len(indices)): a = np.arange(b,b+len(indices[idx][0])) INDEX.append(a) b = b+len(a) indices = np.concatenate((indices),axis=1) indices = (indices[0],indices[1]) ####################################### frame = [] for s in range(0,len(subject_name)): for d in delay: for con in condition: c = coherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,features,eeg_chan,con) coh = get_coherence(c,sfreq,fmin,fmax,indices) for iii in range(2): if iii ==0: conXY = coh[INDEX[0],:] conXR = coh[INDEX[1],:] else: conXY = coh[INDEX[1],:] conXR = coh[INDEX[0],:] conRY = coh[INDEX[2],:][0] for fr in range(0,len(iter_freqs)): a = str(iter_freqs[fr][0])+ ' '+str(iter_freqs[fr][1])+' - '+str(iter_freqs[fr][2])+'Hz' cc = get_partialCoherence(conXY,conXR,conRY,fr) df = pd.DataFrame({'Feature':feat_comb[iii][0],'FeatureDelay':d,'RemoveFeature':feat_comb[iii][1], 'RemoveFeatureDelay':d,'Freq':a,'Condition':con, 'Subject': subject_name[s], 'Data':[cc.flatten()]}) frame.append(df) print(str(d)+'-'+subject_name[s]) data=pd.concat((frame),axis=0) save_path = data_path + '/python/data/partialCoh/PartialCoh-removedFirst-'+str(remove_first)+'.pkl' data.to_pickle(save_path) # - # # surrogate distribution partial coherence # + code_folding=[0, 101] def PartialCoherence_preprocess_delay_surrogate(epochs,remove_first,d,trial_len,eeg_channles,keep_feat,condition,iter_freqs): ############## if condition != 'All': E = epochs[condition].copy() else: E = epochs.copy() eeg = E.copy().pick_channels(eeg_channles) speech = E.copy().pick_channels(keep_feat) E = eeg.copy().crop(d+remove_first,d+remove_first+trial_len) S = speech.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) #E = eeg.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) #S = speech.copy().crop(d+remove_first,d+remove_first+trial_len) sfreq = E.info['sfreq'] E = E.get_data() S = S.get_data() label = np.concatenate((eeg.ch_names,speech.ch_names)) ##################### all possible combination trial_length=S.shape[0] a = list(permutations(np.arange(0,trial_length), 2)) a = np.asarray(a) X = np.arange(0,trial_length) no_surrogates = 500 #dummy value B=[] for j in range(no_surrogates): X = np.roll(X,1) while True: A,a = get_combinations(X,a) if A.shape[0] == trial_length: B.append(A) break elif len(a)==0: break else: X = np.roll(X,1) print('.',end=' ') B = np.asarray(B) no_surrogates = len(B) #######################################à fmin = [] fmax = [] for fr in range(0,len(iter_freqs)): fmin.append(iter_freqs[fr][1]) fmax.append(iter_freqs[fr][2]) ####################################### indices = [] b = (np.repeat(59,59),np.arange(0,59)) indices.append(b) b = (np.repeat(60,59),np.arange(0,59)) indices.append(b) b = (np.repeat(59,1),np.repeat(60,1)) indices.append(b) INDEX = [] b=0 for idx in range(0,len(indices)): a = np.arange(b,b+len(indices[idx][0])) INDEX.append(a) b = b+len(a) indices = np.concatenate((indices),axis=1) indices = (indices[0],indices[1]) ####################################### frames = np.zeros((2,59,len(iter_freqs),no_surrogates)) for i in range(no_surrogates): 
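# (Added comments.) Each surrogate iteration pairs EEG trials with mismatched
# speech-feature trials (the permuted index pairs stored in B), which destroys
# any genuine time-locked EEG-speech coupling while preserving the spectral
# content of both signals. Recomputing partial coherence on these shuffled
# pairings therefore builds a null distribution per channel and frequency band.
# The quantity stored is the same as in get_partialCoherence above:
# |C_xy - C_xr * C_ry|^2 / ((1 - |C_xr|^2) * (1 - |C_ry|^2)),
# i.e. EEG-to-feature coherence after partialling out the other speech feature.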
print('--------------------'+str(i)) EE = E.copy() SS = S.copy() c = np.concatenate((EE[B[i][:,0]],SS[B[i][:,1]]),axis=1) coh = get_coherence(c,sfreq,fmin,fmax,indices) for iii in range(0,1): if iii ==0: conXY = coh[INDEX[0],:] conXR = coh[INDEX[1],:] else: conXY = coh[INDEX[1],:] conXR = coh[INDEX[0],:] conRY = coh[INDEX[2],:][0] for f in range(0,len(iter_freqs)): frames[iii,:,f,i] = get_partialCoherence(conXY,conXR,conRY,f) clear_output() return frames def get_combinations(X,a): aa = a A=[] EEG = [] Speech = [] for i in range(0,len(X)): b = np.where(a[:,0]==X[i]) if not len(b[0]) == 0: for k in range(len(b[0])): if not a[b[0][k],1] in Speech: A.append(a[b[0][k],:]) EEG.append(a[b[0][k],0]) Speech.append(a[b[0][k],1]) a = np.delete(a, b[0][k], 0) break if len(A) == len(X): return np.asarray(A),a else: return np.asarray(A),aa # + code_folding=[] features = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] eeg_chan = GA_epoches[0].ch_names[0:59] sfreq = GA_epoches[0].info['sfreq'] delay = np.arange(-5,6) / 10 condition = ['hyper','normal','hypo','All'] condition = ['All'] features = ['envelop','lipaparature'] ############################# iter_freqs = [ ('fr', 0.25, 1), ('fr', 0.5, 2), ('fr', 1, 3), ('fr', 1, 4), ('fr', 2, 6), ('fr', 4, 8), ('fr', 8, 12), ('fr', 12, 18), ('fr', 18, 24), ('fr', 24, 40) ] ####################################### for s in range(0,len(subject_name)): frame = [] for d in delay: for con in condition: surrogate_coh = PartialCoherence_preprocess_delay_surrogate(GA_epoches[s],remove_first, d + 0.5,trial_len,eeg_chan,features, con,iter_freqs) # mean or median of the surrogate distribution coh=surrogate_coh for fr in range(0,len(iter_freqs)): a = str(iter_freqs[fr][0])+ ' '+str(iter_freqs[fr][1])+' - '+str(iter_freqs[fr][2])+'Hz' cc = coh[0,:,fr,:] df = pd.DataFrame({'Feature':features[0],'FeatureDelay':d,'RemoveFeature':features[1], 'RemoveFeatureDelay':d,'Freq':a,'Condition':con, 'Subject': subject_name[s], 'Data':[cc]}) frame.append(df) cc = coh[1,:,fr,:] df = pd.DataFrame({'Feature':features[1],'FeatureDelay':d,'RemoveFeature':features[0], 'RemoveFeatureDelay':d,'Freq':a,'Condition':con, 'Subject': subject_name[s], 'Data':[cc]}) frame.append(df) data=pd.concat((frame),axis=0) a = ('-').join(features) save_path = data_path + '/python/data/partialCoh/Surrogate_PartialCoh-removedFirst-' \ +str(remove_first)+'-'+a+'-'+subject_name[s]+'.pkl' data.to_pickle(save_path) # putit into one file A=[] a = ('-').join(features) for s in subject_name: save_path = data_path + '/python/data/partialCoh/SurrogateCoherence-removedFirst-' \ +str(remove_first)+'-'+a+'-'+s+'.pkl' A.append(pd.read_pickle(save_path)) data = pd.concat((A),axis=0) save_path = data_path + '/python/data/partialCoh/SurrogateCoherence-removedFirst-' \ +str(remove_first)+'-'+a+'.pkl' data.to_pickle(save_path) # - # # remove contribution of lipaperature from the envelop delayed version (partial coherence implemented here) def partialCoherence_preprocess_delay(epochs,remove_first,d,trial_len,feat,condition,typee): if condition != 'All': E = epochs[condition].copy() else: E = epochs.copy() S = E.copy().pick_channels(feat) if typee != 'eeg': E = S.copy().crop(d+remove_first,d+remove_first+trial_len) else: E = S.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) c = E.get_data() return c # + code_folding=[] features = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] eeg_chan = GA_epoches[0].ch_names[0:59] sfreq = GA_epoches[0].info['sfreq'] delay = 
np.arange(-5,6) / 10 condition = ['hyper','normal','hypo','All'] condition = ['All'] features = ['envelop','lipaparature'] feat_comb = (['envelop','lipaparature'],['lipaparature','envelop']) FD = np.arange(-5,6) / 10 # keep feature delay RD = np.arange(-5,6) / 10 # remove feature delay ############################# iter_freqs = [ ('fr', 1, 3), ('fr', 4, 6) ] fmin = [] fmax = [] for fr in range(0,len(iter_freqs)): fmin.append(iter_freqs[fr][1]) fmax.append(iter_freqs[fr][2]) ####################################### indices = [] b = (np.repeat(59,59),np.arange(0,59)) indices.append(b) b = (np.repeat(60,59),np.arange(0,59)) indices.append(b) b = (np.repeat(59,1),np.repeat(60,1)) indices.append(b) INDEX = [] b=0 for idx in range(0,len(indices)): a = np.arange(b,b+len(indices[idx][0])) INDEX.append(a) b = b+len(a) indices = np.concatenate((indices),axis=1) indices = (indices[0],indices[1]) ####################################### frame = [] for s in range(0,len(subject_name)): clear_output() for fd in FD: for rd in RD: for con in condition: eeg = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,fd + 0.5,trial_len,eeg_chan,con,'eeg') speech = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,fd + 0.5,trial_len,[features[0]],con,'speech') lipaparature = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,rd + 0.5,trial_len,[features[1]],con,'speech') c = np.concatenate((eeg,speech,lipaparature),axis=1) coh = get_coherence(c,sfreq,fmin,fmax,indices) for iii in range(2): if iii ==0: conXY = coh[INDEX[0],:] conXR = coh[INDEX[1],:] else: conXY = coh[INDEX[1],:] conXR = coh[INDEX[0],:] conRY = coh[INDEX[2],:][0] for fr in range(0,len(iter_freqs)): a = str(iter_freqs[fr][0])+ ' '+str(iter_freqs[fr][1])+' - '+str(iter_freqs[fr][2])+'Hz' cc = get_partialCoherence(conXY,conXR,conRY,fr) df = pd.DataFrame({'Feature':feat_comb[iii][0],'FeatureDelay':fd,'RemoveFeature':feat_comb[iii][1], 'RemoveFeatureDelay':rd,'Freq':a,'Condition':con, 'Subject': subject_name[s], 'Data':[cc.flatten()]}) frame.append(df) print(str(fd)+'-'+str(rd)+'-'+subject_name[s]) data=pd.concat((frame),axis=0) save_path = data_path + '/python/data/partialCoh/Delayed_partialCohopp-removedFirst-'+str(remove_first)+'.pkl' data.to_pickle(save_path) # - # + s=0 d=0.2 iter_freqs = [ ('fr', 1, 3), ('fr', 4, 6) ] condition = 'All' keep_feat = ['envelop'] remove_feat = ['lipaparature'] features = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] remove_first = 0.5 #seconds channel_names = GA_epoches[0].ch_names eeg_chan = GA_epoches[0].ch_names[0:59] ####################################### indices = [] b = (np.repeat(59,59),np.arange(0,59)) indices.append(b) b = (np.repeat(60,59),np.arange(0,59)) indices.append(b) b = (np.repeat(59,1),np.repeat(60,1)) #indices.append(b) INDEX = [] b=0 for idx in range(0,len(indices)): a = np.arange(b,b+len(indices[idx][0])) INDEX.append(a) b = b+len(a) indices = np.concatenate((indices),axis=1) indices = (indices[0],indices[1]) ####################################### eeg = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,eeg_chan,condition) envelop = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,keep_feat,condition) lipaparature = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,remove_feat,condition) c = np.concatenate((eeg,envelop,lipaparature),axis=1) print(c.shape) print(fmin) print(fmax) coh = get_coherence(c,400,fmin,fmax,indices) # - conXY = coh[INDEX[0],:] a 
= plt.plot(conXY[:,0]) # plotting by columns # + condition = ['All'] indices = (np.repeat([np.arange(59,len(features)+59)],59),np.tile(np.arange(0,59),len(features))) extra_channels = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] eeg_channles = GA_epoches[0].ch_names[0:59] event_id = {'Hyper': 1,'Normal': 2,'Hypo': 3} ch_types = np.repeat('eeg', len(features)+59) ch_names = np.hstack((eeg_channles,features)) info = mne.create_info(ch_names = ch_names.tolist(),ch_types = ch_types,sfreq = GA_epoches[0].info['sfreq']) ch_names = np.setdiff1d(extra_channels,features) c = coherence_preprocess_delay(GA_epoches[s],remove_first,d+0.5,trial_len,extra_channels,eeg_channles,condition) coh2 = get_coherence(c,400,fmin,fmax,indices) # - cc = np.split(coh2[:,0], len(features)) a = plt.plot(cc[0]) # plotting by columns c = np.concatenate((eeg,envelop,lipaparature),axis=1) coh = get_coherence(c,400,fmin,fmax,indices) conXY = coh[INDEX[0],:] conXR = coh[INDEX[1],:] conRY = coh[INDEX[2],:][0] a = plt.plot(conXY[:,0]) # plotting by columns a = plt.plot(conXR[:,0]) # plotting by columns print(conRY[0]) # + s=0 epochs = GA_epoches[s].copy() eeg = epochs.copy().pick_channels(eeg_channles) speech = epochs.copy().pick_channels(extra_channels) E = eeg.copy().crop(d+remove_first,d+remove_first+trial_len) S = speech.copy().crop(0.5+remove_first,0.5+remove_first+trial_len) events = E.events sfreq = E.info['sfreq'] C = np.concatenate((E.get_data(),S.get_data()),axis=1) epochs = mne.EpochsArray(C, info, E.events, 0,event_id) c = epochs.get_data() ch_names = np.hstack((E.ch_names,S.ch_names)) # + #A = C.mean(axis=0) #a = plt.plot(A[2,:]) # plotting by columns A = c.mean(axis=0) a = plt.plot(A[59,:]) # plotting by columns #a = epochs.copy().pick_channels(['AF7']).get_data() #A = a.mean(axis=0) #a = plt.plot(A[0,:]) # plotting by columns # - print(GA_epoches[s].ch_names) c = np.concatenate((eeg,envelop,lipaparature),axis=1) coh = get_coherence(c,400,fmin,fmax,indices) conXY = coh[INDEX[1],:] conXR = coh[INDEX[0],:] conRY = coh[INDEX[2],:][0] a = plt.plot(conXY[:,0]) # plotting by columns a = plt.plot(conXR[:,0]) # plotting by columns print(conRY[0]) a = plt.plot(conXY[0:59,0]) # plotting by columns a = plt.plot(conXY[59:118,0]) # plotting by columns print(conXY[118,0]) # + keep_feat1= np.append(eeg_chan,keep_feat) envelop = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,channel_names, keep_feat1,condition) remove_feat1= np.append(eeg_chan,remove_feat) lipaparature = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,channel_names, remove_feat1,condition) conXY = get_coherence(envelop,400,fmin,fmax,indicesXY) conXR = get_coherence(lipaparature,400,fmin,fmax,indicesXR) c = np.dstack((envelop[:,59,:],lipaparature[:,59,:])) c = np.swapaxes(c,1,2) conRY = get_coherence(c,400,fmin,fmax,indicesRY)[0] # - a = plt.plot(conXY[:,0]) # plotting by columns a = plt.plot(conXR[:,0]) # plotting by columns print(conRY[0]) # + eeg = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,channel_names, eeg_chan,condition) keep_feat = ['envelop'] envelop = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,channel_names, keep_feat,condition) remove_feat = ['lipaparature'] lipaparature = partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d + 0.5,trial_len,channel_names, remove_feat,condition) c = np.concatenate((eeg,envelop),axis=1) conXY = get_coherence(c,400,fmin,fmax,indicesXY) c = 
np.concatenate((eeg,lipaparature),axis=1) conXR = get_coherence(c,400,fmin,fmax,indicesXR) c = np.concatenate((envelop,lipaparature),axis=1) conRY = get_coherence(c,400,fmin,fmax,indicesRY)[0] # - a = plt.plot(conXY[:,0]) # plotting by columns a = plt.plot(conXR[:,0]) # plotting by columns print(conRY[0]) # # bandpass filter epoches and save as fieldtrip format for partial coherence # + iter_freqs = [ ('fr', 1, 3), #('fr', 2, 4), #('fr', 3, 5), ('fr', 4, 6) #('fr', 5, 7), #('fr', 6, 8), #('fr', 7, 9), #('fr', 8, 10), #('fr', 9, 11), #('fr', 10, 12) ] save_path = data_path +'/python/data/partialCoh/partialCoh-trailLen-' +str(trial_len)+'-removedFirst-'+ str(remove_first) features = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] condition = ['Hyper','Normal','Hypo','All'] delay = np.arange(0,1.1,0.1) extra_channels = ['envelop','jawaopening','lipaparature','lipProtrusion','TTCD','TMCD','TBCD'] eeg_channles = np.setdiff1d(GA_epoches[0].ch_names, extra_channels) event_id = {'Hyper': 1,'Normal': 2,'Hypo': 3} ch_types = np.repeat('eeg', len(features)+59) ch_names = np.hstack((eeg_channles,features)) info = mne.create_info(ch_names = ch_names.tolist(),ch_types = ch_types,sfreq = GA_epoches[0].info['sfreq']) ch_names = np.setdiff1d(extra_channels,features) for s in range(0,len(subject_name)): for d in delay: for c in condition: for fr in range(0,len(iter_freqs)): b = str(iter_freqs[fr][0])+ '-'+str(iter_freqs[fr][1])+'-'+str(iter_freqs[fr][2])+'Hz' fmin = iter_freqs[fr][1] fmax = iter_freqs[fr][2] a = listen_italian_functions.partialCoherence_preprocess_delay(GA_epoches[s],remove_first,d, trial_len,extra_channels,eeg_channles, info,ch_names,event_id,fmin,fmax,c) d = d.round(decimals=1) scipy.io.savemat(save_path+'s-'+'condition-'+c+'-delay-'+str(d)+'-'+b+'-' +subject_name[s],a) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![RDD key pair](media/07.spark_statestore_files.png) # --- # # 06 - Lectura y escritura de ficheros # -------------- # # ### Sistemas de ficheros soportados # - Igual que Hadoop, Spark soporta diferentes filesystems: local, HDFS, Amazon S3 # # - En general, soporta cualquier fuente de datos que se pueda leer con Hadoop # # - También, acceso a bases de datos relacionales o noSQL # # - MySQL, Postgres, etc. mediante JDBC # - Apache Hive, HBase, Cassandra o Elasticsearch # # ### Formatos de fichero soportados # # - Spark puede acceder a diferentes tipos de ficheros: # # - Texto plano, CSV, ficheros sequence, JSON, *protocol buffers* y *object files* # - Soporta ficheros comprimidos # - Veremos el acceso a algunos tipos en esta clase, y dejaremos otros para más adelante # !pip install pyspark # Create apache spark context from pyspark import SparkContext sc = SparkContext(master="local", appName="Mi app") # Stop apache spark context sc.stop() # ### Ejemplos con ficheros de texto # # En el directorio `data/libros` hay un conjunto de ficheros de texto comprimidos. # Ficheros de entrada # !ls data/libros # ### Funciones de lectura y escritura con ficheros de texto # # # - `sc.textFile(nombrefichero/directorio)` Crea un RDD a partir las líneas de uno o varios ficheros de texto # - Si se especifica un directorio, se leen todos los ficheros del mismo, creando una partición por fichero # - Los ficheros pueden estar comprimidos, en diferentes formatos (gz, bz2,...) 
# - Pueden especificarse comodines en los nombres de los ficheros # - `sc.wholeTextFiles(nombrefichero/directorio)` Lee ficheros y devuelve un RDD clave/valor # - clave: path completo al fichero # - valor: el texto completo del fichero # - `rdd.saveAsTextFile(directorio_salida)` Almacena el RDD en formato texto en el directorio indicado # - Crea un fichero por partición del rdd # + # Lee todos los ficheros del directorio # y crea un RDD con las líneas lineas = sc.textFile("data/libros") # Se crea una partición por fichero de entrada print("Número de particiones del RDD lineas = {0}".format(lineas.getNumPartitions())) # - # Obtén las palabras usando el método split (split usa un espacio como delimitador por defecto) palabras = lineas.flatMap(lambda x: x.split()) print("Número de particiones del RDD palabras = {0}".format(palabras.getNumPartitions())) # Reparticiono el RDD en 4 particiones palabras2 = palabras.coalesce(4) print("Número de particiones del RDD palabras2 = {0}".format(palabras2.getNumPartitions())) # Toma una muestra aleatoria de palabras print(palabras2.takeSample(False, 10)) # Lee los ficheros y devuelve un RDD clave/valor # clave->nombre fichero, valor->fichero completo prdd = sc.wholeTextFiles("data/libros/p*.gz") print("Número de particiones del RDD prdd = {0}\n".format(prdd.getNumPartitions())) # + # Obtiene un lista clave/valor # clave->nombre fichero, valor->numero de palabras lista = prdd.mapValues(lambda x: len(x.split())).collect() for libro in lista: print("El fichero {0:14s} tiene {1:6d} palabras".format(libro[0].split("/")[-1], libro[1])) # - # ## Ficheros Sequence # Ficheros clave/valor usados en Hadoop # # - Sus elementos implementan la interfaz [`Writable`](https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/io/Writable.html) # + rdd = sc.parallelize([("a",2), ("b",5), ("a",8)], 2) # Salvamos el RDD clave valor como fichero de secuencias rdd.saveAsSequenceFile("file:///tmp/sequenceoutdir2") # + # Lo leemos en otro RDD rdd2 = sc.sequenceFile("file:///tmp/sequenceoutdir2", "org.apache.hadoop.io.Text", "org.apache.hadoop.io.IntWritable") print("Contenido del RDD {0}".format(rdd2.collect())) # - # ## Formatos de entrada/salida de Hadoop # Spark puede interactuar con cualquier formato de fichero soportado por Hadoop # - Soporta las APIs “vieja” y “nueva” # - Permite acceder a otros tipos de almacenamiento (no fichero), p.e. 
HBase o MongoDB, a través de `saveAsHadoopDataSet` y/o `saveAsNewAPIHadoopDataSet` # # Salvamos el RDD clave/valor como fichero de texto Hadoop (TextOutputFormat) rdd.saveAsNewAPIHadoopFile("file:///tmp/hadoopfileoutdir", "org.apache.hadoop.mapreduce.lib.output.TextOutputFormat", "org.apache.hadoop.io.Text", "org.apache.hadoop.io.IntWritable") # !echo 'Directorio de salida' # !ls -l /tmp/hadoopfileoutdir # !cat /tmp/hadoopfileoutdir/part-r-00001 # + # Lo leemos como fichero clave-valor Hadoop (KeyValueTextInputFormat) rdd3 = sc.newAPIHadoopFile("file:///tmp/hadoopfileoutdir", "org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat", "org.apache.hadoop.io.Text", "org.apache.hadoop.io.IntWritable") print("Contenido del RDD {0}".format(rdd3.collect())) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from safegraph_py_functions import safegraph_py_functions as sgpy import os from dotenv import load_dotenv, find_dotenv from loguru import logger # + # find .env automagically by walking up directories until it's found dotenv_path = find_dotenv() # load up the entries as environment variables load_dotenv(dotenv_path) os.chdir(os.environ.get("ROOT_DIR")) from src import DATA_DIR raw_data_dir = DATA_DIR / 'raw' # - # if import is 0 it reads the data from the existing file # otherwise reads in the raw data an makes a unified dataset IMPORT = 0 # Read in all patterns files in the monthly-patterns folder def get_files(): patterns_path = raw_data_dir / "monthly-patterns" files = [] for f in patterns_path.glob("**/*.csv.gz"): files.append(f) return files def filter_to_philly(df): # zip codes are read as integers rather than strings so we add leading zeros. # this is not strictly necessary since Philadelphia zipcodes don't have leading zeros. # Philadelphia selection # HK: adding leading zeros because some zipcodes in MA are 0191X. 
df['postal_code'] = df['postal_code'].apply(lambda x: ('00000'+str(x))[-5:]) in_philly = df['postal_code'].astype(str).str.startswith("191") df = df.loc[in_philly] df = df[['safegraph_place_id','date_range_start','postal_code', 'raw_visit_counts', 'raw_visitor_counts']] return df if IMPORT == 1: philly_patterns = [] files = get_files() for i, f in enumerate(files): print(f) philly_patterns.append(filter_to_philly(pd.read_csv(f))) philly_patterns = pd.concat(philly_patterns) philly_patterns.to_csv( DATA_DIR / "processed" / "kmeans_patterns.csv.tar.gz", index=False ) else: philly_patterns = pd.read_csv(DATA_DIR / "processed" / "philly_patterns.csv.tar.gz", low_memory = False) philly_patterns.columns philly_patterns= philly_patterns[['safegraph_place_id','date_range_start', 'date_range_end', 'raw_visit_counts', 'raw_visitor_counts', 'poi_cbg','top_category']] philly_patterns['poi_cbg'] = philly_patterns['poi_cbg'].astype(int) philly_patterns['date_range_start'] = pd.to_datetime(philly_patterns['date_range_start']) philly_patterns['date_range_end'] = pd.to_datetime(philly_patterns['date_range_end'].apply(lambda x: x[:10])) census_path = DATA_DIR / 'raw' / 'open-census-data' files = [file for file in census_path.glob('**/cbg_b19.csv')] census = pd.read_csv(files[0]) census = census[['census_block_group','B19013e1']] philly_patterns = philly_patterns.merge(census, left_on = 'poi_cbg', right_on = 'census_block_group', how = 'left') philly_patterns.drop(columns = 'census_block_group', inplace = True) philly_patterns['income_missing'] = philly_patterns['B19013e1'].isna().astype(int) philly_patterns['B19013e1'].fillna(value=philly_patterns['B19013e1'].mean(), inplace=True) # + # for files with information disaggregated at the state level, keep only the country-wide info def keep_total_level(norm_stats): if 'region' in norm_stats.columns: if len(norm_stats[norm_stats['region'] == 'ALL_STATES']) == 0: raise ValueError('no region named "ALL_STATES"') norm_stats = norm_stats[norm_stats['region'] == 'ALL_STATES'] norm_stats = norm_stats.drop(columns = ['region']) return norm_stats patterns_path = raw_data_dir / "monthly-patterns" norm_files = [f for f in patterns_path.glob("**/normalization_stats.csv")] #read in normalization data norm_stats = pd.concat([keep_total_level(pd.read_csv(file)) for file in norm_files]) norm_stats['year'] = norm_stats['year'].astype(int) norm_stats['month'] = norm_stats['month'].astype(int) norm_stats['day'] = norm_stats['day'].astype(int) # HK: I only downloaded patterns data from 2019 onwards due to memory constraints norm_stats = norm_stats[norm_stats['year'] >= 2019] norm_stats.reset_index(inplace = True, drop = True) # - norm_stats.head() norm_stats['date'] = pd.to_datetime((norm_stats.year*10000+norm_stats.month*100+norm_stats.day).apply(str),format='%Y%m%d') date_ranges = philly_patterns[['date_range_start', 'date_range_end']].drop_duplicates().sort_values('date_range_start').reset_index(drop = True) date_ranges def get_date_range(x): mask = (date_ranges['date_range_start'] <= x) & (date_ranges['date_range_end'] > x) return date_ranges.loc[mask,'date_range_start'].values[0] norm_stats['date_range_start'] = norm_stats.date.apply(get_date_range) norm_stats = norm_stats[['date_range_start', 'total_visits', 'total_devices_seen']].groupby('date_range_start').agg('sum').reset_index() philly_patterns = philly_patterns.merge(norm_stats, on = 'date_range_start') philly_patterns['visit_count_norm'] = 1000000*philly_patterns['raw_visit_counts']/philly_patterns['total_visits'] 
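# (Added note.) The *_norm columns rescale the raw counts to "per million
# visits / devices observed in the SafeGraph panel for that month", so months
# with different panel sizes become comparable; visitor_count_norm on the next
# line applies the same scaling to raw visitor counts using total devices seen.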
philly_patterns['visitor_count_norm'] = 1000000*philly_patterns['raw_visitor_counts']/philly_patterns['total_devices_seen'] check = philly_patterns.groupby(['safegraph_place_id']).nunique() check.loc[check['B19013e1']>1] philly_patterns[philly_patterns['safegraph_place_id'] == 'sg:002e331cebec461d932e13cee0e01ca3'] mask = philly_patterns['date_range_start'] < '2020-03-01' prepandemic = philly_patterns.loc[mask,].copy() pandemic = philly_patterns.loc[~mask,].copy() prepandemic.date_range_start.unique() from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler # + def cut_outliers(df, cols): for col in cols: ub = 2*df[col].quantile(0.99) mask = df[col] > ub df.loc[mask,col] = ub return df def make_dummies(df, cols): return pd.get_dummies(df) def Stdrshp(df, pivot_cols, constant_cols): constant = df.loc[ df.groupby('safegraph_place_id').cumcount() == 0, ['safegraph_place_id']+constant_cols ] constant.set_index('safegraph_place_id', inplace = True) #constant = make_dummies(constant, pivot_cols) pivoted = df.pivot(index='safegraph_place_id',columns='date_range_start')[pivot_cols] new_cols = [('{1} {0}'.format(*tup)) for tup in pivoted.columns] pivoted.columns = new_cols df = pivoted.join(constant) df = df.fillna(0) scaled = StandardScaler().fit_transform(df) scaled_features_df = pd.DataFrame(scaled, index=df.index, columns=df.columns) return scaled_features_df def make_clusters(df, k = 5): kmeans = KMeans(n_clusters=k).fit(df) df['cluster'] = kmeans.labels_ return (df, kmeans.inertia_) def run_Kmeans(df, pivot_cols, constant_cols, k = 5): normalized = Stdrshp(cut_outliers(df, pivot_cols), pivot_cols, constant_cols) return make_clusters(normalized, k) # - pivot_cols = ['visit_count_norm', 'visitor_count_norm'] constant_cols = ['B19013e1','income_missing'] scaled = Stdrshp(cut_outliers(prepandemic, pivot_cols), pivot_cols, constant_cols) n_clusters = [k for k in range(3,13)] inertias = [] for k in n_clusters: df, inertia = make_clusters(scaled, k = k) inertias.append(inertia) scaled.head() import matplotlib.pyplot as plt plt.plot(n_clusters,inertias) # + scaled = Stdrshp(prepandemic, pivot_cols, constant_cols) n_clusters = [k for k in range(3,13)] inertias = [] for k in n_clusters: df, inertia = make_clusters(scaled, k = k) inertias.append(inertia) plt.plot(n_clusters,inertias) # - prepandemic, prepandemic_inertia = run_Kmeans(prepandemic, pivot_cols, constant_cols, k = 5) pandemic, pandemic_inertia = run_Kmeans(pandemic, pivot_cols, constant_cols, k = 5) pandemic_inertia # We want to compare pandemic_inertia to the inertia we would have in 2020 if we used the # 2019 clustering combined = pandemic.join(prepandemic[['cluster']], how='left',rsuffix='pre') combined[['clusterpre']] = combined[['clusterpre']].fillna(value=-1) combined['clusterpre'] = combined['clusterpre'].astype(int) cols = combined.columns cols = cols[:-2] n = len(combined.columns) # + def get_inertia(df, cols, cluster_col): mask = df[cluster_col] > -1 n = len(df.index) filtered = df[mask] m = len(filtered.index) se = ((df[cols] - df.groupby(cluster_col)[cols].transform('mean'))**2).to_numpy().sum() return se*(n/m) get_inertia(combined, cols, 'clusterpre') # - #worst case scenario combined['newcol'] = 1 get_inertia(combined, cols, 'newcol') combined.groupby('cluster').size() combined.groupby('clusterpre').size() big_cats = philly_patterns.groupby('top_category').size().sort_values(ascending = False)[:5] big_cats = big_cats.index def category_groups(x): if x in big_cats: return x else: return 'other' 
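# (Added note.) category_groups collapses top_category to the five most common
# categories plus an 'other' bucket; the resulting top_category_simple column
# is used below to cross-tabulate pre-pandemic cluster assignments against POI
# category. An equivalent vectorised form would be
# philly_patterns['top_category'].where(philly_patterns['top_category'].isin(big_cats), 'other').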
philly_patterns['top_category_simple'] = philly_patterns.top_category.apply(category_groups) philly_patterns['top_category_simple'].unique() mask = philly_patterns['date_range_start'] < '2020-03-01' cats = philly_patterns.loc[~mask,].copy() cats = cats.loc[ cats.groupby('safegraph_place_id').cumcount() == 0, ['safegraph_place_id', 'top_category_simple'] ] combined.drop(columns = 'top_category_simple', inplace = True) combined = combined.join(cats.set_index('safegraph_place_id'), how = 'left') pd.crosstab(combined.clusterpre, combined.top_category_simple, dropna=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import management as mt mt.create_data_table() # + # Code, Name, Credits, Teacher, Schedule, Days, Chosen(0[False], 1[True]) mt.add_data('510140-8', 'Física I[8]', 4, 'null', 'X-V','2-3, 4', 0) mt.add_data('527140-6', 'Cálculo I[6]', 5, 'null', 'M-J', '0-1, 0-1', 0) mt.add_data('525140-6', 'Álgebra I[6]', 5, 'null', 'M-J', '2-3, 2-3', 0) # - mt.show_data() # Code, Name mt.del_data('510140-8') # Code # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="2sIU-EMdl1kH" # # 0 Import Libraries and Data # + id="vxHKQlUJkIL8" # import libraries import pandas as pd # Import Pandas for data manipulation using dataframes import numpy as np # Import Numpy for data statistical analysis import matplotlib.pyplot as plt # Import matplotlib for data visualisation import random import seaborn as sns import torch import torch.nn as nn import plotly.offline as py from plotly.offline import init_notebook_mode, iplot import plotly.express as px import plotly.graph_objs as go from plotly.subplots import make_subplots # This relates to plotting datetime values with matplotlib: from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() # + colab={"base_uri": "https://localhost:8080/"} id="7M6bvma6kZCN" outputId="4fb5743d-38c1-4a42-b8c6-544a107100e1" from google.colab import drive drive.mount('/content/gdrive') realestate_df = pd.read_csv("/content/gdrive/MyDrive/ColabNotebooks/RealEstate/data_with_locations_and_ids.csv") print(f"Length before droping duplicate ids = {len(realestate_df)}") realestate_df =realestate_df.drop_duplicates(subset=['id'], keep='first') print(f"Length after droping duplicate ids = {len(realestate_df)}") # + id="xSoenyGInHRV" colab={"base_uri": "https://localhost:8080/", "height": 398} outputId="3c5653eb-0edd-448f-9451-7e817a4be557" sample_df=realestate_df.sample(frac=0.95, replace=False, random_state=99) sample_df["Price"] = pd.to_numeric(sample_df["Price"], downcast="float") sample_df['Time_Posted'] = sample_df['Time_Posted'].values.astype('datetime64[ns]') sample_df.head(3) # + id="vv72mqTBzpGx" colab={"base_uri": "https://localhost:8080/"} outputId="b66b1973-dca0-4ce1-e3dd-09826891ccd1" print(f"Length before removing outliers = {len(sample_df)}") listings=sample_df['Listing_Type'].unique() dataframe_list=[] for value in listings: df = sample_df[sample_df.Listing_Type==value] q_low = df["Price"].quantile(0.05) q_hi = df["Price"].quantile(0.95) df = df[(df["Price"] < q_hi) & (df["Price"] > q_low)] dataframe_list.append(df) sample_df = 
pd.concat(dataframe_list, axis=0, ignore_index=True) print(f"Length after removing outliers = {len(sample_df)}") # + id="8edzFR3lyEgW" colab={"base_uri": "https://localhost:8080/"} outputId="67e9897a-f8b5-4645-cf21-9bf0e9d86278" listing_type_dict={} listing_types=sample_df['Listing_Type'].unique() for type in listing_types: listing_type_dict[type]=sample_df[sample_df['Listing_Type']==type] print(listing_type_dict.keys()) # + [markdown] id="ZdbQs3lCl7hV" # # 1 EDA # + id="km7vVg55s4So" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="39141460-3aca-4a35-a3ef-98d73d6461da" fig = make_subplots(rows=len(listing_types),subplot_titles=[listing for listing in listing_types]) for index, listing in enumerate(listing_types): df=listing_type_dict[listing][['Time_Posted', 'Price']].groupby(pd.Grouper(key="Time_Posted", freq="1W")).agg({'Price': ['median', 'mean']}).dropna() df.columns=df.columns.map('_'.join) df=df.reset_index() data1=go.Scatter(x=df['Time_Posted'], y=df['Price_median'], mode='lines', name='median', line=dict(color='#abd7eb')) data2=go.Scatter(x=df['Time_Posted'], y=df['Price_mean'], mode='lines', name='mean', line=dict(color='#F47174')) fig.add_traces([data1,data2],rows=(index+1),cols=1) fig.update_layout(title_text=f"Listing Price and Time", height=2000) fig.show() # + id="VJwJRDw01_Ct" colab={"base_uri": "https://localhost:8080/"} outputId="61c53bfc-790f-43bc-e6e8-25e60ac6c494" print('Listing Types') print(sample_df['Listing_Type'].unique()) print() print('Sub Regions') print(sample_df['Sub_Region'].value_counts().head(20).index) # + id="PHVIWVJ93bst" _specify= (sample_df['Listing_Type']=='Apartments-Condos') & (sample_df['Sub_Region']=='Toronto') # + id="NzAFM6kzj68t" colab={"base_uri": "https://localhost:8080/", "height": 237} outputId="855b70db-efb0-4f35-9870-7d7ccc8128c2" df=sample_df[_specify][['Time_Posted', 'Price']].groupby(pd.Grouper(key="Time_Posted", freq="1W")).agg({'Price': ['median', 'mean']}).dropna() df.columns=df.columns.map('_'.join) mean_df=df[['Price_mean']] median_df=df[['Price_median']] df.head() # + id="ZbUPpMx6kqaN" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="9fe9e59b-a08d-44b0-c4f7-13069c612b8e" df.plot(figsize=(24,4)) # + [markdown] id="9BYpaQXam4hQ" # # 2 Setting Up for Modeling # + id="UjXLnl7alM0S" y=df['Price_median'].values test_size=4 train_set=y[:-test_size] test_set=y[-test_size:] # + id="taIvpApRnfUc" colab={"base_uri": "https://localhost:8080/"} outputId="1c4aa0c7-ffb4-45af-c77c-c883c979113c" from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(-1,1)) scaler.fit(train_set.reshape(-1,1)) train_norm=scaler.transform(train_set.reshape(-1,1)) train_norm # + id="JBERon4QniWr" colab={"base_uri": "https://localhost:8080/"} outputId="1bcb3fe4-29e2-42ab-f20b-951baa5462ab" train_norm = torch.FloatTensor(train_norm).view(-1) train_norm # + id="6Uo9MvtHpLXl" colab={"base_uri": "https://localhost:8080/"} outputId="b312439b-8f34-4fc0-9b89-8b1edacd6793" window_size=4 # Define function to create seq/label tuples def input_data(_sequence,_windowsize): out = [] L = len(_sequence) for i in range(L-_windowsize): window = _sequence[i:i+_windowsize] label = _sequence[i+_windowsize:i+_windowsize+1] out.append((window,label)) return out # Apply the input_data function to train_norm train_data = input_data(train_norm,window_size) len(train_data) # this should equal len(original data)-len(test size)-len(window size) # + [markdown] id="MD-mJdglrJLx" # # 3 Define Model # + id="6AFgEup0qN5-" 
class LSTMnetwork(nn.Module): def __init__(self,input_size=1,hidden_size=100,output_size=1): super().__init__() self.hidden_size = hidden_size # Add an LSTM layer: self.lstm = nn.LSTM(input_size,hidden_size) # Add a fully-connected layer: self.linear = nn.Linear(hidden_size,output_size) # Initialize h0 and c0: self.hidden = (torch.zeros(1,1,self.hidden_size), torch.zeros(1,1,self.hidden_size)) def forward(self,seq): lstm_out, self.hidden = self.lstm( seq.view(len(seq),1,-1), self.hidden) pred = self.linear(lstm_out.view(len(seq),-1)) return pred[-1] # we only want the last value # + id="suiTNqvXq4x9" colab={"base_uri": "https://localhost:8080/"} outputId="0ef74ad3-7366-48bd-e8e9-ca944fb6b810" torch.manual_seed(101) model = LSTMnetwork() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) model # + id="ifiZXHOjroSC" colab={"base_uri": "https://localhost:8080/"} outputId="46c06c47-7998-4687-e03a-970da69ffae0" epochs = 20000 import time start_time = time.time() for epoch in range(epochs): # extract the sequence & label from the training data for seq, y_train in train_data: # reset the parameters and hidden states optimizer.zero_grad() model.hidden = (torch.zeros(1,1,model.hidden_size), torch.zeros(1,1,model.hidden_size)) y_pred = model(seq) loss = criterion(y_pred, y_train) loss.backward() optimizer.step() if epoch%1000 == 1: # print training result print(f'Epoch: {epoch+1:2} Loss: {loss.item():10.8f}') print(f'\nDuration: {time.time() - start_time:.0f} seconds') # + [markdown] id="_XtbfypXskGU" # # 4 Compare with Test Set # + id="IUIKLki3sKBh" colab={"base_uri": "https://localhost:8080/"} outputId="11284ba1-926e-49f8-9725-0b3859ae7909" future = 12 # Add the last window of training values to the list of predictions preds = train_norm[-window_size:].tolist() # Set the model to evaluation mode model.eval() for i in range(future): seq = torch.FloatTensor(preds[-window_size:]) with torch.no_grad(): model.hidden = (torch.zeros(1,1,model.hidden_size), torch.zeros(1,1,model.hidden_size)) preds.append(model(seq).item()) preds[window_size:] # + id="cj3Z9mgEs9O9" colab={"base_uri": "https://localhost:8080/"} outputId="a4d1f5c4-ff11-4a2c-d66e-45f4a982e07e" true_predictions = scaler.inverse_transform(np.array(preds[window_size:]).reshape(-1, 1)) true_predictions # + id="TjeeKSjAtJpk" colab={"base_uri": "https://localhost:8080/"} outputId="c2d3123b-818a-4ea8-d098-f72e2cf45614" df['Price_median'][-12:] # + id="-RmczAbOvvnT" colab={"base_uri": "https://localhost:8080/"} outputId="e3bf82ec-33db-42a8-bb3d-fe1fd19d4cef" time_change = df.index[-1]-df.index[-2] time_change # + id="DI83F-4EtmC6" colab={"base_uri": "https://localhost:8080/"} outputId="0030b1b2-6bca-40b6-e32a-47ff53019231" min_range=df['Price_mean'][-12:].index.min() max_range=df['Price_mean'][-12:].index.max() x = np.arange(min_range, max_range+time_change, dtype='datetime64[W]').astype('datetime64[D]') x # + id="vnsNIE2Etsg6" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="5622f75b-6fbb-46c1-f90c-b28fa1af14ac" plt.figure(figsize=(24,4)) plt.title('Listing Price') plt.ylabel('Price') plt.grid(True) plt.autoscale(axis='x',tight=True) plt.plot(df['Price_median']) plt.plot(x,true_predictions) plt.show() # + id="yL7FnAZYuwHg" colab={"base_uri": "https://localhost:8080/"} outputId="dabc58c0-3cfe-421a-9eb6-e622abf5d21a" x.shape # + [markdown] id="6ppazs0IQ99H" # # 5 Forecast into unknown future # + id="1gIMklKwvRo0" colab={"base_uri": "https://localhost:8080/"} 
outputId="a3ecdf47-9ccd-474d-a06b-a9d11d582c22" epochs = 20000 # set model back to training mode model.train() # feature scale the entire dataset y_norm = scaler.fit_transform(y.reshape(-1, 1)) y_norm = torch.FloatTensor(y_norm).view(-1) all_data = input_data(y_norm,window_size) import time start_time = time.time() for epoch in range(epochs): # train on the full set of sequences for seq, y_train in all_data: # reset the parameters and hidden states optimizer.zero_grad() model.hidden = (torch.zeros(1,1,model.hidden_size), torch.zeros(1,1,model.hidden_size)) y_pred = model(seq) loss = criterion(y_pred, y_train) loss.backward() optimizer.step() if epoch%500 == 1: # print training result print(f'Epoch: {epoch+1:2} Loss: {loss.item():10.8f}') print(f'\nDuration: {time.time() - start_time:.0f} seconds') # + id="HMXWKa16vTvz" window_size = 12 future = 12 L = len(y) preds = y_norm[-window_size:].tolist() model.eval() for i in range(future): seq = torch.FloatTensor(preds[-window_size:]) with torch.no_grad(): # Reset the hidden parameters here! model.hidden = (torch.zeros(1,1,model.hidden_size), torch.zeros(1,1,model.hidden_size)) preds.append(model(seq).item()) # Inverse-normalize the prediction set true_predictions = scaler.inverse_transform(np.array(preds).reshape(-1, 1)) # + id="Y2ahuHPnSJlG" colab={"base_uri": "https://localhost:8080/"} outputId="93d7a0de-5af7-4c90-ed5f-54ec2a911d59" true_predictions.shape # + id="QJ2EDDWjPxOw" colab={"base_uri": "https://localhost:8080/"} outputId="2e5161e1-de85-4fe5-ff9d-21e55e6ad5d9" min_range=df['Price_mean'][-window_size:].index.min() max_range=df['Price_mean'][-window_size:].index.max() time_change = df.index[-1]-df.index[-2] time_change # + id="0MZqqJe9R0vW" future_min_range=max_range+time_change future_max_range=max_range+(time_change*(window_size+1)) # + id="t4cV_xSoSYSN" colab={"base_uri": "https://localhost:8080/"} outputId="04ffde57-64f7-4a19-c831-7fbb3984210a" x = np.arange(future_min_range, future_max_range, dtype='datetime64[W]').astype('datetime64[D]') x.shape # + id="0BzZ6aMbOpZg" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="cb008cdd-231a-45a1-b6c9-6a26d30dd60f" # PLOT THE RESULT # Set a data range for the predicted data. # Remember that the stop date has to be later than the last predicted value. 
plt.figure(figsize=(24,4)) plt.title('Listing Price') plt.ylabel('Price') plt.grid(True) plt.autoscale(axis='x',tight=True) plt.plot(df['Price_median']) plt.plot(x,true_predictions[window_size:]) plt.show() # + id="q0uM6XjWPlpy" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: jupyter # language: python # name: jupyter # --- # + import pandas import matplotlib.pyplot as plt import seaborn as sns import matplotlib.dates as mdates import numpy as np sns.set_theme() custom_params = {"axes.spines.right": False, "axes.spines.top": False} sns.set_theme(style="darkgrid", rc=custom_params) df_orig = pandas.read_csv('/tmp/google_trend_data.csv', skiprows=2) df_orig.replace('<1', 0, inplace = True) df_orig.rename(columns={df_orig.columns[1] : "Google Trends"}, inplace=True) df = df_orig.astype({'Google Trends': 'int32'}) df.set_index('Month', inplace = True) df.index = pandas.to_datetime(df.index).date display(df) ax = df.plot(color=['#0000FF', '#FF0000'], lw=2) df_2 = df.rolling(10).mean() df_2.rename(columns={"Google Trends" : "Hype"}, inplace=True) df_all = pandas.concat([df, df_2], axis=1) g = df_all.plot(color=['#0000FF', '#FF0000'], lw=2) g.figure.savefig('/tmp/graph.svg', format='svg', dpi=1200) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] heading_collapsed=true # # Deps # + hidden=true # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # + hidden=true from fastai import * from fastai.text import * from fastai.vision import * from fastai.callbacks import * # + hidden=true txt_proc = [TokenizeProcessor(tokenizer=Tokenizer(lang='el') ),NumericalizeProcessor()] # + hidden=true def SampleTexts(df,max_tokens = 100000000): lens = np.array(df.apply(lambda x : len(re.findall(r'\b',x.text))/2,axis=1)) n_tokens = np.sum(lens) if n_tokens <= max_tokens: return range(0,df.shape[0]), n_tokens #if np.sum(lens) > max_tokens: iDec = (-lens).argsort() cumSum = np.cumsum(lens[iDec]) iCut = np.argmax(cumSum>max_tokens) return iDec[:iCut], cumSum[iCut-1] # - # # Greek Lanuage Model # Extract the data from wikipedia using this guide by Moody. path_root = Path('/home/jupyter/tutorials/') source_txt = 'el_wiki_df_all.csv' bs = 112 # + [markdown] heading_collapsed=true # ## Preparing the data # + [markdown] hidden=true # The wiki text is contained in lots of seperate json files, so first we combine into a single large csv which we load with the data loader. # We don't want to train with more than 100,000,000 tokens. For the Greek corpus this is not a problem becuase there are much less than this number of tokens available. # To sample we can reduce this number of tokens even futher is desired. 
# + hidden=true df = [] for file in path_lang_model.glob("el_dump/*/*"): df_file = pd.read_json(file,lines=True,encoding="utf-8") df.append(df_file) df = pd.concat(df, axis=0) # + hidden=true df.to_csv(path_lang_model/wiki_103_csv,index=False) # + hidden=true df = pd.read_csv(path_lang_model/wiki_103_csv) # + hidden=true max_tokens = 100000000 i_keep,n_tokens = SampleTexts(df,max_tokens) # + hidden=true if len(i_keep) == df.shape[0]: print(f'{n_tokens} tokens < {max_tokens}, no need to sample') else: df_sample = df.iloc[i_keep] df_sample.to_csv(path_lang_model/f'../el_wiki_df_{n_tokens}.csv',index=False) # + [markdown] heading_collapsed=true # ## Try small sample # + [markdown] hidden=true # ### Sample # + hidden=true df = pd.read_csv(path_root/'data'/wiki_103_csv) max_tokens = 1000000 # + hidden=true i_keep,n_tokens = SampleTexts(df,max_tokens) df_sample = df.iloc[i_keep] df_sample.to_csv(path_root/f'data/el_wiki_df_{n_tokens}.csv',index=False) # + [markdown] hidden=true # ### Tokenize for language model # + [markdown] hidden=true # Initially limit the vocab to 30k to compare with other ULMFit methods # + hidden=true source_txt = f'el_wiki_df_{n_tokens}.csv' max_vocab = 30000 # + hidden=true data_lm = (TextList.from_csv(path_root,f'data/{source_txt}',cols=1, processor = [TokenizeProcessor(tokenizer=Tokenizer(lang='el')) , NumericalizeProcessor(vocab=None, max_vocab=max_vocab)]) .random_split_by_pct(0.1) .label_for_lm() .databunch(bs=bs)) print(f'Vocab size: {len(data_lm.vocab.itos)}') # + hidden=true data_lm.save('tmp_data_'+ source_txt) # + [markdown] hidden=true # ### Fit # + hidden=true data_lm = TextLMDataBunch.load(path_root, 'tmp_data_'+ source_txt ,bs=bs) # + hidden=true learn = language_model_learner(data_lm, drop_mult=0.3) # + hidden=true learn.lr_find() # + hidden=true learn.recorder.plot(skip_end=10) # + hidden=true learn.fit_one_cycle(10, 1e-2, moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='accuracy',mode='max')]) # - # ## Process full text # + [markdown] heading_collapsed=true # ### Tokenize for language model # + [markdown] hidden=true # Initially limit the vocab to 30k to compare with other ULMFit methods # + hidden=true max_vocab = 30000 # + hidden=true data_lm = (TextList.from_csv(path_root,f'data/{source_txt}',cols=1, processor = [TokenizeProcessor(), NumericalizeProcessor(vocab=None, max_vocab=max_vocab)]) .random_split_by_pct(0.1) .label_for_lm() .databunch(bs=bs)) print(f'Vocab size: {len(data_lm.vocab.itos)}') data_lm.save('tmp_data_lm'+ source_txt) # + hidden=true #save the dictionary pickle_out = open(str(path_root/'data/dict_')+ source_txt + '.pkl',"wb") pickle.dump(data_lm.vocab.itos, pickle_out) pickle_out.close() # - # ### Fit data_lm = TextLMDataBunch.load(path_root, 'tmp_data_lm'+ source_txt ,bs=bs) learn = language_model_learner(data_lm, drop_mult=0.1) # we were pre-empted by goole, lets keep going for a bit learn.load('bestmodel_lm4_' + source_txt); learn.lr_find() learn.recorder.plot(skip_end=20) learn.fit_one_cycle(30, 1e-4, moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='accuracy',mode='max',name='bestmodel_lm5_' + source_txt)]) print(f'Perplexity for best validation accuracy: {math.exp(2.678463)}') learn.fit_one_cycle(30, 5e-3, moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='accuracy',mode='max',name='bestmodel_lm4_' + source_txt)]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 
1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import os import numpy as np from random import shuffle from tqdm import tqdm # Paths PROJ_ROOT = os.path.join(os.pardir) path_training = os.path.join(PROJ_ROOT, "data", "raw", "train/") path_testing = os.path.join(PROJ_ROOT, "data", "raw", "test/") # Labeling the dataset def label_img(img): word_label = img.split(".")[-3] if word_label == 'cat': return [1,0] elif word_label == 'dog': return [0,1] # Create the training data def create_training_data(): training_data = [] training_labels = [] # tqdm is only used for interactive loading # loading the training data for img in tqdm(os.listdir(path_training)): # label of the image label = label_img(img) # path to the image path = os.path.join(path_training, img) # load the image from the path and convert it to grayscale for simplicity img = cv2.imread(path, cv2.IMREAD_GRAYSCALE) # resize the image img = cv2.resize(img, (50, 50)) # final step-forming the training data list wiht numpy array of images training_data.append(img) training_labels.append(label) # shuffling of the training data to preserve the random state of our data shuffle(training_data) # randomly choose 1/5 of training set and call it validation set validation_set = training_data[:len(training_data)//5] validation_labels = training_labels[:len(training_labels)//5] training_labels = training_labels[len(training_labels)//5:] training_set = training_data[len(training_data)//5:] # save the trained data for further uses if needed # np.save(os.path.join(PROJ_ROOT,'data', 'interim','training_data'), training_set) # np.save(os.path.join(PROJ_ROOT,'data', 'interim','validation_data'), validation_set) # np.save(os.path.join(PROJ_ROOT,'data', 'interim','validation_labels'), validation_labels) # np.save(os.path.join(PROJ_ROOT,'data', 'interim','training_labels'), training_labels) return training_set, training_labels, validation_labels, validation_set training_d = create_training_data() # Convert the test data as well def create_test_data(): testing_data = [] for img in tqdm(os.listdir(path_testing)): # path to the image path = os.path.join(path_testing, img) img_num = img.split(".")[0] img = cv2.imread(path, cv2.IMREAD_GRAYSCALE) img = cv2.resize(img, (50, 50)) testing_data.append([np.array(img),img_num]) shuffle(testing_data) np.save(os.path.join(PROJ_ROOT, "data", "interim", "test_data"), testing_data) return testing_data test_data = create_test_data() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # #
# prepared by (QLatvia)
#
# This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros.
# $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ # $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ #

# # Python: Quick Reference
#
# ## Variables
# + number = 5 # integer real = -3.4 # float name = 'Asja' # string surname = "Sarkana" # string boolean1 = True # Boolean boolean2 = False # Boolean # - #
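# A small supplement (not in the original reference): the built-in function "type()" shows which type Python inferred for each variable defined above.
print(type(number), type(real), type(name), type(boolean1))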
# ## Arithmetic operators
#
# ### Basic operators

# + a = 13 b = 5 print("a =",a) print("b =",b) print() # basics operators print("a + b =",a+b) print("a - b =",a-b) print("a * b =",a*b) print("a / b =",a/b) # - #

# ### Integer division and modulus operators

# + a = 13 b = 5 print("a =",a) print("b =",b) print() print("integer division:") print("a//b =",a//b) print("modulus operator:") print("a mod b =",a % b) # - #
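# Supplementary example (not in the original reference): the built-in function "divmod()" returns the integer quotient and the remainder at the same time.
# +
a = 13
b = 5
print("divmod(a,b) =", divmod(a,b))   # -> (2, 3)
# -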

# ### Exponent operator

# # number\*\*exponent # + b = 5 print("b =",b) print() print("b*b =",b**2) print("b*b*b =",b**3) print("sqrt(b)=",b**0.5) # - #
# ## Objects
#
# ### Lists

# + # list mylist = [10,8,6,4,2] print(mylist) # - #
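# A short supplement (not in the original reference): list elements are accessed by zero-based index, negative indices count from the end, and slicing returns a sub-list.
# +
mylist = [10,8,6,4,2]
print(mylist[0])    # first element -> 10
print(mylist[-1])   # last element -> 2
print(mylist[1:3])  # elements with indices 1 and 2 -> [8, 6]
# -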

# ### Tuple

# + # tuple mytuple=(1,4,5,'Asja') print(mytuple) # - #

# ### Dictionary

# + # dictionary mydictionary = { 'name' : "Asja", 'surname':'Sarkane', 'age': 23 } print(mydictionary) print(mydictionary['surname']) # - #
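# Supplementary example (not in the original reference): new key-value pairs are added by assignment, and "get()" avoids a KeyError when a key might be missing. The 'city' entry below is a hypothetical extra value used only for illustration.
# +
mydictionary = { 'name' : "Asja", 'surname':'Sarkane', 'age': 23 }
mydictionary['city'] = 'Riga'                      # add a new key-value pair (hypothetical)
print(mydictionary.get('age'))                     # -> 23
print(mydictionary.get('height','not recorded'))   # default value for a missing key
# -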

# ### List of the other objects or variables

# + # list of the other objects or variables list_of_other_objects =[ mylist, mytuple, 3, "Asja", mydictionary ] print(list_of_other_objects) # - #
# ## Size of an object

# # We use the method "len()" that takes an object as the input. # + # length of a string print(len("")) # size of a list print(len([1,2,3,4])) # size of a dictionary mydictionary = { 'name' : "Asja", 'surname':'Sarkane', 'age': 23} print(len(mydictionary)) # - #
# ## Loops
#
# ### While-loop

i = 10 while i>0: # while condition(s): print(i) i = i - 1 #

# ### For-loop

for i in range(10): # i is in [0,1,...,9] print(i) for i in range(-5,6): # i is in [-5,-4,...,0,...,4,5] print(i) for i in range(0,23,4): # i is in [0,4,8,12,16,20] print(i) for i in [3,8,-5,11]: print(i) for i in "Sarkane": print(i) # + # dictionary mydictionary = { 'name' : "Asja", 'surname':'Sarkane', 'age': 23, } for key in mydictionary: print("key is",key,"and its value is",mydictionary[key]) # - #
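# Supplementary example (not in the original reference): "enumerate()" gives the index of each element together with the element itself.
for index, element in enumerate([3,8,-5,11]):
    print("element", index, "is", element)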
# ## Conditionals

for a in range(4,7): # if condition(s) if a<5: print(a,"is less than 5") # elif condition(s) elif a==5: print(a,"is equal to 5") # else else: print(a,"is greater than 5") #
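# Supplementary note (not in the original reference): Python also has a one-line conditional expression, which is handy inside assignments.
a = 7
parity = "even" if a % 2 == 0 else "odd"
print(a, "is", parity)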
# ## Logical and Boolean operators
#
# ### Logical operator "and"

# Logical operator "and" i = -3 j = 4 if i<0 and j > 0: print(i,"is negative AND",j,"is positive") #

Logical operator "or"

# Logical operator "or" i = -2 j = 2 if i==2 or j == 2: print("i OR j is 2: (",i,",",j,")") #

Logical operator "not"

# Logical operator "not" i = 3 if not (i==2): print(i,"is NOT equal to 2") #

Operator "equal to"

# Operator "equal to" i = -1 if i == -1: print(i,"is EQUAL TO -1") #

Operator "not equal to"

# Operator "not equal to" i = 4 if i != 3: print(i,"is NOT EQUAL TO 3") #

Operator "less than or equal to"

# Operator "not equal to" i = 2 if i <= 5: print(i,"is LESS THAN OR EQUAL TO 5") #

Operator "greater than or equal to"

# Operator "not equal to" i = 5 if i >= 1: print(i,"is GREATER THAN OR EQUAL TO 3") #
# ## Double list

# + A =[ [1,2,3], [-2,-4,-6], [3,6,9] ] # print all print(A) print() # print each list in a new line for list in A: print(list) # - #
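# Supplementary example (not in the original reference): a single entry of a double (nested) list is selected with two indices, the row first and then the column.
# +
A =[
    [1,2,3],
    [-2,-4,-6],
    [3,6,9]
]
print(A[1][2])   # third element of the second inner list -> -6
# -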
# ## List operations
#
# ### Concatenation of two lists

# + list1 = [1,2,3] list2 = [4,5,6] #concatenation of two lists list3 = list1 + list2 print(list3) list4 = list2 + list1 print(list4) # - #

# ### Appending a new element

# + list = [0,1,2] list.append(3) print(list) list = list + [4] print(list) # - #
# ## Functions

# + def summation_of_integers(n): summation = 0 for integer in range(n+1): summation = summation + integer return summation print(summation_of_integers(10)) print(summation_of_integers(20)) # - #
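# A short supplement (not in the original reference): a parameter can have a default value, so the caller may omit it. The function name below is introduced only for this illustration.
# +
def summation_of_powers(n, exponent=1):
    summation = 0
    for integer in range(n+1):
        summation = summation + integer**exponent
    return summation

print(summation_of_powers(10))     # 0+1+...+10 -> 55
print(summation_of_powers(10,2))   # 0^2+1^2+...+10^2 -> 385
# -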
# ## Random number

# # We can use method "randrange()". # + from random import randrange print(randrange(10),"is picked randomly between 0 and 9") print(randrange(-9,10),"is picked randomly between -9 and 9") print(randrange(0,20,3),"is picked randomly from the list [0,3,6,9,12,15,18]") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Gender Distinguished Analysis of ADHD v.s. Bipolar - by # # Build models for female patients and male patients separately. # + import pandas as pd import numpy as np df_adhd = pd.read_csv('ADHD_Gender_rCBF.csv') df_bipolar = pd.read_csv('Bipolar_Gender_rCBF.csv') n1, n2 = df_adhd.shape[0], df_bipolar.shape[0] print 'Number of ADHD patients (without Bipolar) is', n1 print 'Number of Bipolar patients (without ADHD) is', n2 print 'Chance before gender separation is', float(n1) / (n1 + n2) # + # Separate the genders adhd1_id, adhd2_id = list(), list() bipolar1_id, bipolar2_id = list(), list() for i, g in df_adhd[['Patient_ID', 'Gender_id']].values: if g == 1: adhd1_id.append(i) elif g == 2: adhd2_id.append(i) for i, g in df_bipolar[['Patient_ID', 'Gender_id']].values: if g == 1: bipolar1_id.append(i) elif g == 2: bipolar2_id.append(i) print 'Number of Gender 1 ADHD patients (without Bipolar) is', len(adhd1_id) print 'Number of Gender 2 ADHD patients (without Bipolar) is', len(adhd2_id) print 'Number of Gender 1 Bipolar patients (without ADHD) is', len(bipolar1_id) print 'Number of Gender 2 Bipolar patients (without ADHD) is', len(bipolar2_id) # + # Separate ADHD data gender-wise df_adhd1 = df_adhd.loc[df_adhd['Patient_ID'].isin(adhd1_id)].drop(['Patient_ID', 'Gender_id'], axis=1) df_adhd2 = df_adhd.loc[df_adhd['Patient_ID'].isin(adhd2_id)].drop(['Patient_ID', 'Gender_id'], axis=1) # Separate Bipolar data gender-wise df_bipolar1 = df_bipolar.loc[df_bipolar['Patient_ID'].isin(bipolar1_id)].drop(['Patient_ID', 'Gender_id'], axis=1) df_bipolar2 = df_bipolar.loc[df_bipolar['Patient_ID'].isin(bipolar2_id)].drop(['Patient_ID', 'Gender_id'], axis=1) # Create disorder labels for classification # ADHD: 0, Bipolar: 1 n1_adhd, n1_bipolar = len(adhd1_id), len(bipolar1_id) n2_adhd, n2_bipolar = len(adhd2_id), len(bipolar2_id) # Labels for gender 1 y1 = [0] * n1_adhd + [1] * n1_bipolar # Labels for gender 2 y2 = [0] * n2_adhd + [1] * n2_bipolar print 'Shape check:' print 'ADHD:', df_adhd1.shape, df_adhd2.shape print 'Bipolar:', df_bipolar1.shape, df_bipolar2.shape # Gender1 data df1_all = pd.concat([df_adhd1, df_bipolar1], axis=0) # Gender2 data df2_all = pd.concat([df_adhd2, df_bipolar2], axis=0) print '\nDouble shape check:' print 'Gender 1:', df1_all.shape, len(y1) print 'Gender 2:', df2_all.shape, len(y2) # Compute chances chance1 = float(n1_adhd)/(n1_adhd + n1_bipolar) chance2 = float(n2_adhd)/(n2_adhd + n2_bipolar) print 'Chance for gender 1 is', chance1 print 'Chance for gender 2 is', chance2 # - # ## Machine Learning Utilities # ### K-Means Clustering # + # %matplotlib inline import matplotlib.pyplot as plt from sklearn.cluster import KMeans def kmeans(df, title, k=4): data = df.values.T kmeans = KMeans(n_clusters=k) kmeans.fit(data) labels = kmeans.labels_ centroids = kmeans.cluster_centers_ fig = plt.figure() fig.suptitle('K-Means on '+title+' Features' , fontsize=14, fontweight='bold') # Plot clusters for i in range(k): # Extract observations within each cluster ds = data[np.where(labels==i)] # Plot the 
observations with symbol o plt.plot(ds[:,0], ds[:,1], 'o') # Plot the centroids with simbol x lines = plt.plot(centroids[i,0], centroids[i,1], 'x') plt.setp(lines, ms=8.0) plt.setp(lines, mew=2.0) # - # ### Principal Component Analysis # + from sklearn import preprocessing from sklearn.decomposition import PCA # Plot explained variance ratio def plot_evr(ex_var_ratio): plt.title('Explained Variance Ratios by PCA') plt.plot(ex_var_ratio) plt.ylabel('Explained Variance Ratio') plt.xlabel('Principal Component') def pca(df, n=20): ''' Default number of principal components: 20 ''' # Scale X = df.values X_scaled = preprocessing.scale(X) # PCA pca = PCA(n_components=n) pc = pca.fit_transform(X_scaled) print '\nExplained Variance Ratios:' print pca.explained_variance_ratio_ print '\nSum of Explained Variance Ratios of the first', n, 'components is', print np.sum(pca.explained_variance_ratio_) plot_evr(pca.explained_variance_ratio_) return pc # - # ### Locally Linear Embedding # + from sklearn.manifold import LocallyLinearEmbedding # Compute explained variance ratio of transformed data def compute_explained_variance_ratio(transformed_data): explained_variance = np.var(transformed_data, axis=0) explained_variance_ratio = explained_variance / np.sum(explained_variance) explained_variance_ratio = np.sort(explained_variance_ratio)[::-1] return explained_variance_ratio def lle(X, n=10): # Scale X_scaled = preprocessing.scale(X) # LLE lle = LocallyLinearEmbedding(n_neighbors=25, n_components=n, method='ltsa') pc = lle.fit_transform(X_scaled) ex_var_ratio = compute_explained_variance_ratio(pc) print '\nExplained Variance Ratios:' print ex_var_ratio # print '\nSum of Explained Variance Ratios of ', n, 'components is', # print np.sum(ex_var_ratio) return pc # - # ### Classification # + from sklearn import cross_validation from sklearn.cross_validation import KFold def train_test_clf(clf, clf_name, X, y, k=10): ''' Train and test a classifier using # K-fold cross validation Args: clf: sklearn classifier clf_name: classifier name (for printing) X: training data (2D numpy matrix) y: labels (1D vector) k: number of folds (default=10) ''' kf = KFold(len(X), n_folds=k) scores = cross_validation.cross_val_score(clf, X, y, cv=kf) acc, acc_std = scores.mean(), scores.std() print clf_name + ' accuracy is %0.4f (+/- %0.3f)' % (acc, acc_std) return acc, acc_std # + from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import LinearSVC, SVC from sklearn.lda import LDA from sklearn.qda import QDA from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.ensemble import AdaBoostClassifier def classify(X, y, gender, feature_type): lg = LogisticRegression(penalty='l2') knn = KNeighborsClassifier(n_neighbors=7) svc = LinearSVC() lda = LDA() qda = QDA() rf = RandomForestClassifier(n_estimators=30) gb = GradientBoostingClassifier(n_estimators=20, max_depth=3) et = ExtraTreesClassifier(n_estimators=40, max_depth=5) ada = AdaBoostClassifier() classifiers = [lg, knn, svc, lda, qda, rf, gb, et, ada] clf_names = ['Logistic Regression', 'KNN', 'Linear SVM', 'LDA', 'QDA', \ 'Random Forest', 'Gradient Boosting', 'Extra Trees', 'AdaBoost'] accuracies = list() for clf, name in zip(classifiers, clf_names): acc, acc_std = train_test_clf(clf, name, X, y) accuracies.append(acc) # Visualize classifier performance x = range(len(accuracies)) width = 0.6/1.5 
plt.bar(x, accuracies, width) # Compute chance n0, n1 = y.count(0), y.count(1) chance = max(n0, n1) / float(n0 + n1) fig_title = gender + ' Classifier Performance on ' + feature_type + ' features' plt.title(fig_title) plt.xticks(x, clf_names, rotation=50) plt.xlabel('Classifier') plt.gca().xaxis.set_label_coords(1.1, -0.025) plt.ylabel('Accuracy') plt.axhline(chance, color='red', linestyle='--', label='chance') # plot chance plt.legend(loc='center left', bbox_to_anchor=(1, 0.85)) # - # ## Gender 1 ADHD v.s. Bipolar Analysis # + # %matplotlib inline import matplotlib.pyplot as plt plot = df1_all.plot(kind='hist', alpha=0.5, title='Gender 1 Data Distribution', legend=None) # - # Cluster Gender 1 rCBF features kmeans(df1_all, 'Gender 1') # PCA X1_pca = pca(df1_all, 20) # LLE X1_lle = lle(df1_all, 20) # Classification using PCA features print 'Using PCA features:' classify(X1_pca, y1, 'Gender 1', 'PCA') # Classification using LLE features print 'Using LLE features:' classify(X1_lle, y1, 'Gender 1', 'LLE') # ## Gender 2 ADHD v.s. Bipolar Analysis plot = df2_all.plot(kind='hist', alpha=0.5, title='Gender 2 Data Distribution', legend=None) # Cluster Gender 2 rCBF features kmeans(df2_all, 'Gender 2') # PCA X2_pca = pca(df2_all, 20) # LLE X2_lle = lle(df2_all, 20) # Classification using PCA features print 'Using PCA features:' classify(X2_pca, y2, 'Gender 2', 'PCA') # Classification using LLE features print 'Using LLE features:' classify(X2_lle, y2, 'Gender 2', 'LLE') # ### Conclusion # # QDA with LLE features finally does much better than chance! \\(^o^)/ # # It seems that non-linear classifiers on features extracted using manifold learning would work for this problem. # # # #### Next Step: # # - Need to look into the warning for gender 1 classification. # - The accuracy for gender 1 is suspiciously high, need to check each step to make sure there is no mistakes. # - Tune parameters for existing classifiers. # - Try new methods for both feature construction and classification. # - I have some doubts about the dimensionality reduction and train-test procedure. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + # nbi:hide_in import warnings # Ignore numpy dtype warnings. These warnings are caused by an interaction # between numpy and Cython and can be safely ignored. 
# Reference: https://stackoverflow.com/a/40846742 warnings.filterwarnings("ignore", message="numpy.dtype size changed") warnings.filterwarnings("ignore", message="numpy.ufunc size changed") import numpy as np import matplotlib.pyplot as plt import pandas as pd # %matplotlib inline import ipywidgets as widgets from ipywidgets import interact, interactive, fixed, interact_manual import nbinteract as nbi np.set_printoptions(threshold=20, precision=2, suppress=True) pd.options.display.max_rows = 7 pd.options.display.max_columns = 8 pd.set_option('precision', 2) # This option stops scientific notation for pandas # pd.set_option('display.float_format', '{:.2f}'.format) # - # nbi:hide_in def df_interact(df, nrows=7, ncols=7): ''' Outputs sliders that show rows and columns of df ''' def peek(row=0, col=0): return df.iloc[row:row + nrows, col:col + ncols] if len(df.columns) <= ncols: interact(peek, row=(0, len(df) - nrows, nrows), col=fixed(0)) else: interact(peek, row=(0, len(df) - nrows, nrows), col=(0, len(df.columns) - ncols)) print('({} rows, {} columns) total'.format(df.shape[0], df.shape[1])) # nbi:hide_in videos = pd.read_csv('https://github.com/SamLau95/nbinteract/raw/master/notebooks/youtube_trending.csv', parse_dates=['publish_time'], index_col='publish_time') # # Page Layout / Dashboarding # # `nbinteract` gives basic page layout functionality using special comments in your code. Include one or more of these markers in a Python comment and `nbinteract` will add their corresponding CSS classes to the generated cells. # # | Marker | Description | CSS class added | # | --------- | --------- | --------- | # | `nbi:left` | Floats cell to the left | `nbinteract-left` | # | `nbi:right` | Floats cell to the right | `nbinteract-right` | # | `nbi:hide_in` | Hides cell input | `nbinteract-hide_in` | # | `nbi:hide_out` | Hides cell output | `nbinteract-hide_out` | # # By default, only the `full` template will automatically provide styling for these classes. For other templates, `nbinteract` assumes that the embedding page will use the CSS classes to style the cells. # # You can use the layout markers to create simple dashboards. In this page, we create a dashboard using a dataset of trending videos on YouTube. We first create a dashboard showing the code used to generate the plots. Further down the page, we replicate the dashboard without showing the code. 
df_interact(videos) # + # nbi:left options = { 'title': 'Views for Trending Videos', 'xlabel': 'Date Trending', 'ylabel': 'Views', 'animation_duration': 500, 'aspect_ratio': 1.0, } def xs(channel): return videos.loc[videos['channel_title'] == channel].index def ys(xs): return videos.loc[xs, 'views'] nbi.scatter(xs, ys, channel=videos['channel_title'].unique()[9:15], options=options) # + # nbi:right options={ 'ylabel': 'Proportion per Unit', 'bins': 100, 'aspect_ratio': 1.0, } def values(col): vals = videos[col] return vals[vals < vals.quantile(0.8)] nbi.hist(values, col=widgets.ToggleButtons(options=['views', 'likes', 'dislikes', 'comment_count']), options=options) # - # # ## Dashboard (without showing code) # nbi:hide_in df_interact(videos) # + # nbi:hide_in # nbi:left options = { 'title': 'Views for Trending Videos', 'xlabel': 'Date Trending', 'ylabel': 'Views', 'animation_duration': 500, 'aspect_ratio': 1.0, } def xs(channel): return videos.loc[videos['channel_title'] == channel].index def ys(xs): return videos.loc[xs, 'views'] nbi.scatter(xs, ys, channel=videos['channel_title'].unique()[9:15], options=options) # + # nbi:hide_in # nbi:right options={ 'ylabel': 'Proportion per Unit', 'bins': 100, 'aspect_ratio': 1.0, } def values(col): vals = videos[col] return vals[vals < vals.quantile(0.8)] nbi.hist(values, col=widgets.ToggleButtons(options=['views', 'likes', 'dislikes', 'comment_count']), options=options) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Overview # # In this project you will be applying the skills you have learned in the class so far. More specifically, you will be asked to apply an clustering algorithm to a dataset of your own choice. # # As you are choosing your data set, make sure that you think about the following important items: # # 1. What is the business problem you are trying to solve? In one or two sentences explain the problem you are trying to solve. Make sure that this problem is specific enough! For example, if you are working with baseball MLB data: # # - __Bad Problem Statement:__ Understanding baseball players. # # - __Better Problem Statement:__ From demographic info and statistics of MLB batters, can we detect hidden groups that would allow an MLB team to diversify their roster and cancel the contracts of players already redundant? # # 2. What is the data available to solve this business problem? Why do you think that this dataset can answer the problem at hand? Have you made enough research on the problem and the dataset? # # 3. Double check the source of the dataset. Make sure that you have done enough data exploration so there are not wrong entries, lots of duplicates, mysterious columns, nonsense statistics, etc. Don't get unpleasantly surprised after you have already invested 3 weeks in modeling. Invest in exploration of your data in the beginning of the project. Make sure that you didn't set yourself up for the [garbage in garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out) problem # # 4. Prepare your data and make sure that you have created interesting and insightful new features that will improve the machine learning algorithm's performance. (Check [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)) # # 5. Decide which clustering algorithm you would like to use. 
Make sure to choose this algorithm because it is the best fit for this project but not because it sounds like the fanciest algorithm (e.g [KMeans](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) if you want to use [DBSCAN](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html#sklearn.cluster.DBSCAN) then, for example,you should know what the `epsilon` parameter is and why DBSCAN is preferable to KMeans in your problem.) What is the metric you will be focusing on to measure this algorithm's performance?(e.g. [silhouette_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)) How can you communicate this technical metric with your business partners? # # 6. Suppose you settled on a particular clustering with a certain number of clusters. Now go back and explain what these clusters correspond with in terms of business metrics. You can create plots and visualizations to support your findings. # # 7. Finally, summarize your findings and the limitations of your findings in a conclusion. Also mention how you would proceed if you had more time and resources on this project. # # # Deliverables # # - A Jupyter Notebook (I will refer it as `technical notebook` or `report`). This notebook should be in a Github Repo with a ReadMe. As previous homeworks, you will be submitting the link of this notebook. # # - This time I will not be grading the ReadMe. I assume that you have learned how to submit a good project repo with a good ReadMe in your previous projects. Please practice these learned skills. # # # # Deadlines # # Submit the completed deliverable by November 15th at 11:59 PM. Use the link below for the submissions. # # [Google Form Submission Link](https://forms.gle/trvrXd2ZrynJW7EX6) # # # # Resources # # [A Customer Segmentation Example](https://www.mktr.ai/behavioral-customer-segmentation-in-r/) # # [Machine Learning Process Check-list](https://docs.google.com/spreadsheets/d/1y4EdxeAliOQw9CDHx0_brjmk-LUb3gfX52zLGSqLg_g/edit#gid=0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import pickle # !pip install html2text import html2text pd.set_option('display.max_colwidth', 5000) df = pd.read_pickle('/Users/hazem/Documents/manzur/data/pickles/enabbaladi/posts/posts_1_0.pkl') df.columns # + def read_tags(tag_file_num=9): tags = dict() for i in range(tag_file_num): file = f'/Users/hazem/Documents/manzur/data/pickles/enabbaladi/tags/tag_{i}.pkl' new_tags = read_tags_file(file) tags = {**tags, **new_tags} return tags tags = read_tags(8) len(tags) # + def read_tags_file(file): cats = dict() df = pd.read_pickle(file) df['id'] = df['id'].apply(int) df['id'] = df['id'].apply(str) df = df[['id','name', 'count']] df = df.set_index('id') names = df.to_dict()['name'] counts = df.to_dict()['count'] for k in names: if counts.get(k,0) >=1: cats[k.strip()] = {'name': names[k], 'count': int(counts.get(k,0))} return cats file = f'/Users/hazem/Documents/manzur/data/pickles/enabbaladi/categories.pkl' cats = read_tags_file(file) len(cats) # - def clean_post_file(file): h = html2text.HTML2Text() h.ignore_links = True df = pd.read_pickle(file) df['title'] = df['title'].apply(lambda x: dict(x)['rendered'].strip()) df['title'] = df['title'].apply(h.handle) df['content'] = df['content'].apply(lambda x: dict(x)['rendered']) df['html_content'] 
= df['content'] df['content'] = df['content'].apply(h.handle) df['content'] = df['content'].str.replace('\n', ' ') df['excerpt'] = df['excerpt'].apply(lambda x: dict(x)['rendered']) df['excerpt'] = df['excerpt'].apply(h.handle) df['tags_str'] = df['tags'].apply(lambda x: '|'.join([tags.get(str(e), dict()).get('name', str(e)) for e in list(x)])) df['categories_str'] = df['categories'].apply(lambda x: '|'.join([cats.get(str(e), dict()).get('name', str(e)) for e in list(x)])) df['author'] = df['author'].apply(int).apply(str) df['id'] = df['id'].apply(int).apply(str) df['realted_posts'] = df['jetpack-related-posts'].apply(lambda x: '|'.join([str(e['id']) for e in list(x)])) return df[['id','title','content', 'html_content', 'excerpt','categories','author','date_gmt', 'tags','tags_str', 'categories_str', 'realted_posts']] file = '/Users/hazem/Documents/manzur/data/pickles/enabbaladi/posts/posts_1_0.pkl' df = clean_post_file(file) # ! pip install requests import json import requests url = 'https://farasa.qcri.org/webapi/segmentation/' text = 'يُشار إلى أن اللغة العربية' api_key = '' payload = {'text': text, 'api_key': api_key} headers = {'content-type': 'application/json'} data = requests.post(url, data=json.dumps(payload), headers=headers) result = data.text print(result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt import numpy as np # %matplotlib inline fig = plt.figure() ax = fig.add_subplot(111, projection='3d') X, Y, Z = axes3d.get_test_data(0.01) ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10) plt.show() # + from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt import numpy as np # %matplotlib inline dx = 0.1 _, _, Z = axes3d.get_test_data(dx) Z = np.zeros_like(Z) Z[int(len(Z)/2):,int(len(Z)/2):] = 10. # x = np.arange(0,len(Z)*dx,dx) # y = np.arange(0,len(Z)*dx,dx) # xv, yv = np.meshgrid(x, y) # Sx, Sy = np.gradient(Z) # plt.imshow(Z, cmap = 'terrain') # plt.colorbar() # plt.show() # plt.imshow(Sx) # plt.colorbar() # plt.show() # plt.imshow(Sy) # plt.colorbar() # plt.show() dt = (1/(2*D))*(dx**4/(2*dx**2)) print dt D = 10. maxt = 100 # plt.imshow(Z, clim = [-75,85]) plt.imshow(Z) plt.colorbar() plt.show() for t in range(maxt): old_Z = Z.copy() for i in range(1,len(Z)-1): for j in range(1,len(Z)-1): Zxx = (old_Z[i+1,j] - 2 * old_Z[i,j] + old_Z[i-1,j]) / dx**2 Zyy = (old_Z[i,j+1] - 2 * old_Z[i,j] + old_Z[i,j-1]) / dx**2 Z[i,j] = old_Z[i,j] + dt * D * (Zxx + Zyy) Z[0,:] = Z[1,:] Z[-1,:] = Z[-2,:] Z[:,0] = Z[:,1] Z[:,-1] = Z[:,-2] # plt.imshow(Z, clim = [-75,85]) plt.imshow(Z) plt.colorbar() plt.show() # - _, _, Z = axes3d.get_test_data(dx) Z.max() Z.min() Z.max() Z.min() len(X)*dx dx # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] school_cell_uuid="508bc6930afe496a8ef375ed081a0289" # # 신경망 성능 개선 # + [markdown] school_cell_uuid="207ad6e3065e412d989084cb76508024" # 신경망의 예측 성능 및 수렴 성능을 개선하기 위해서는 다음과 같은 추가적인 고려를 해야 한다. 
# # * 오차(목적) 함수 개선: cross-entropy cost function # * 정규화: regularization # * 가중치 초기값: weight initialization # * Softmax 출력 # * Activation 함수 선택: hyper-tangent and ReLu # # + [markdown] school_cell_uuid="00bcc34a99574908bbeb268967dd7e4d" # ## 기울기와 수렴 속도 문제 # + [markdown] school_cell_uuid="47e6626384d74d3aa59a4b8b7cdf188d" # 일반적으로 사용하는 잔차 제곱합(sum of square) 형태의 오차 함수는 대부분의 경우에 기울기 값이 0 이므로 (near-zero gradient) 수렴이 느려지는 단점이 있다. # # * http://neuralnetworksanddeeplearning.com/chap3.html # + [markdown] school_cell_uuid="f4ddfb41b4a24d2399f55f6fb823b3f3" # $$ # \begin{eqnarray} # z = \sigma (wx+b) # \end{eqnarray} # $$ # # $$ # \begin{eqnarray} # C = \frac{(y-z)^2}{2}, # \end{eqnarray} # $$ # # # $$ # \begin{eqnarray} # \frac{\partial C}{\partial w} & = & (z-y)\sigma'(a) x \\ # \frac{\partial C}{\partial b} & = & (z-y)\sigma'(a) # \end{eqnarray} # $$ # # # * if $x=1$, $y=0$, # $$ # \begin{eqnarray} # \frac{\partial C}{\partial w} & = & a \sigma'(a) \\ # \frac{\partial C}{\partial b} & = & a \sigma'(z) # \end{eqnarray} # $$ # # * $\sigma'$는 대부분의 경우에 zero. # + school_cell_uuid="ba9e61aaf39648a994ff740af061a102" sigmoid = lambda x: 1/(1+np.exp(-x)) sigmoid_prime = lambda x: sigmoid(x)*(1-sigmoid(x)) xx = np.linspace(-10, 10, 1000) plt.plot(xx, sigmoid(xx)); plt.plot(xx, sigmoid_prime(xx)); # + [markdown] school_cell_uuid="8dbe4f79b84e489f97ebfcbfba0fca0c" # ## 교차 엔트로피 오차 함수 (Cross-Entropy Cost Function) # + [markdown] school_cell_uuid="f6699b0ddd044a95a5348c541f21a7bb" # 이러한 수렴 속도 문제를 해결하는 방법의 하나는 오차 제곱합 형태가 아닌 교차 엔트로피(Cross-Entropy) 형태의 오차함수를 사용하는 것이다. # # # $$ # \begin{eqnarray} # C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right], # \end{eqnarray} # $$ # # # 미분값은 다음과 같다. # # $$ # \begin{eqnarray} # \frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left( # \frac{y }{z} -\frac{(1-y)}{1-z} \right) # \frac{\partial z}{\partial w_j} \\ # & = & -\frac{1}{n} \sum_x \left( # \frac{y}{\sigma(a)} # -\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \\ # & = & # \frac{1}{n} # \sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))} # (\sigma(a)-y) \\ # & = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \\ # & = & \frac{1}{n} \sum_x (z-y) x_j\\ \\ # \frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y) # \end{eqnarray} # $$ # + [markdown] school_cell_uuid="f103793db11c408c8678fb2fa5ad7ffd" # 이 식에서 보다시피 기울기(gradient)가 예측 오차(prediction error) $z-y$에 비례하기 때문에 # # * 오차가 크면 수렴 속도가 빠르고 # * 오차가 적으면 속도가 감소하여 발산을 방지한다. # + [markdown] school_cell_uuid="3beacd4718d54ac48fc38a086f054af0" # ## 교차 엔트로피 구현 예 # + [markdown] school_cell_uuid="5fe16343ae5b481bb6a20051ee2733e1" # * https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py # # # ```python # #### Define the quadratic and cross-entropy cost functions # # class QuadraticCost(object): # # @staticmethod # def fn(a, y): # """Return the cost associated with an output ``a`` and desired output # ``y``. # """ # return 0.5*np.linalg.norm(a-y)**2 # # @staticmethod # def delta(z, a, y): # """Return the error delta from the output layer.""" # return (a-y) * sigmoid_prime(z) # # # class CrossEntropyCost(object): # # @staticmethod # def fn(a, y): # """Return the cost associated with an output ``a`` and desired output # ``y``. Note that np.nan_to_num is used to ensure numerical # stability. In particular, if both ``a`` and ``y`` have a 1.0 # in the same slot, then the expression (1-y)*np.log(1-a) # returns nan. The np.nan_to_num ensures that that is converted # to the correct value (0.0). 
# """ # return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a))) # # @staticmethod # def delta(z, a, y): # """Return the error delta from the output layer. Note that the # parameter ``z`` is not used by the method. It is included in # the method's parameters in order to make the interface # consistent with the delta method for other cost classes. # """ # return (a-y) # ``` # + school_cell_uuid="dffd8d26b5de42d89f0fa0640062aa9e" # %cd /home/dockeruser/neural-networks-and-deep-learning/src # %ls # + school_cell_uuid="ff982b3a6f1f4bf1bac0d5219140a3f2" import mnist_loader import network2 training_data, validation_data, test_data = mnist_loader.load_data_wrapper() # + school_cell_uuid="549376bb44714708b75e8f09f0b739a9" net = network2.Network([784, 30, 10], cost=network2.QuadraticCost) net.large_weight_initializer() # %time result1 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True) # + school_cell_uuid="cb0c11bfed1743d78fe86956e3935a54" net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost) net.large_weight_initializer() # %time result2 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True) # + school_cell_uuid="e294b0a0f47d483f8fcba9bb310810be" plt.plot(result1[1], 'bo-', label="quadratic cost") plt.plot(result2[1], 'rs-', label="cross-entropy cost") plt.legend(loc=0) plt.show() # + [markdown] school_cell_uuid="6d4abbbadd554251bd1e02a1f3a9d22c" # ## 과최적화 문제 # + [markdown] school_cell_uuid="66cc31183d0449aa9534cd48e2c02ddc" # 신경망 모형은 파라미터의 수가 다른 모형에 비해 많다. # * (28x28)x(30)x(10) => 24,000 # * (28x28)x(100)x(10) => 80,000 # # 이렇게 파라미터의 수가 많으면 과최적화 발생 가능성이 증가한다. 즉, 정확도가 나아지지 않거나 나빠져도 오차 함수는 계속 감소하는 현상이 발생한다. # # # # + [markdown] school_cell_uuid="aa75d77e97e1469db9ff6ddcc60c768c" # # * 예: # # ```python # net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost) # net.large_weight_initializer() # net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data, # monitor_evaluation_accuracy=True, monitor_training_cost=True) # ``` # # # # # # # # + [markdown] school_cell_uuid="00cdb88f9891493a89cab8369f92d620" # ## L2 정규화 # + [markdown] school_cell_uuid="122191bae3c04958996594d3a77c60f8" # 이러한 과최적화를 방지하기 위해서는 오차 함수에 다음과 같이 정규화 항목을 추가하여야 한다. # # $$ # \begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln # (1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i w_i^2 # \end{eqnarray} # $$ # # 또는 # # $$ # \begin{eqnarray} C = C_0 + \frac{\lambda}{2n} # \sum_i w_i^2, # \end{eqnarray} # $$ # # $$ # \begin{eqnarray} # \frac{\partial C}{\partial w} & = & \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} w \\ # \frac{\partial C}{\partial b} & = & \frac{\partial C_0}{\partial b} # \end{eqnarray} # $$ # # $$ # \begin{eqnarray} # w & \rightarrow & w-\eta \frac{\partial C_0}{\partial w}-\frac{\eta \lambda}{n} w \\ # & = & \left(1-\frac{\eta \lambda}{n}\right) w -\eta \frac{\partial C_0}{\partial w} # \end{eqnarray} # $$ # + [markdown] school_cell_uuid="0ae02eafdb84423fa9f99c0ff10a11b6" # ## L2 정규화 구현 예 # + [markdown] school_cell_uuid="d0219ebf08cf4e95af622b9f9a7280ba" # # ```python # def total_cost(self, data, lmbda, convert=False): # """Return the total cost for the data set ``data``. The flag # ``convert`` should be set to False if the data set is the # training data (the usual case), and to True if the data set is # the validation or test data. See comments on the similar (but # reversed) convention for the ``accuracy`` method, above. 
# """ # cost = 0.0 # for x, y in data: # a = self.feedforward(x) # if convert: y = vectorized_result(y) # cost += self.cost.fn(a, y)/len(data) # cost += 0.5*(lmbda/len(data))*sum(np.linalg.norm(w)**2 for w in self.weights) # return cost # # def update_mini_batch(self, mini_batch, eta, lmbda, n): # """Update the network's weights and biases by applying gradient # descent using backpropagation to a single mini batch. The # ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the # learning rate, ``lmbda`` is the regularization parameter, and # ``n`` is the total size of the training data set. # """ # nabla_b = [np.zeros(b.shape) for b in self.biases] # nabla_w = [np.zeros(w.shape) for w in self.weights] # for x, y in mini_batch: # delta_nabla_b, delta_nabla_w = self.backprop(x, y) # nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] # nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] # self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)] # self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)] # ``` # + [markdown] school_cell_uuid="f750a7ea24434930bb5d7ea2dadb5809" # # ```python # net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data, lmbda = 0.1, # monitor_evaluation_cost=True, monitor_evaluation_accuracy=True, # monitor_training_cost=True, monitor_training_accuracy=True) # ``` # # # # + [markdown] school_cell_uuid="a71a00863bcd428babb5e9fc7ade32f7" # ## L1 정규화 # # L2 정규화 대신 다음과 같은 L1 정규화를 사용할 수도 있다. # # $$ # \begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln # (1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i \| w_i \| # \end{eqnarray} # $$ # # $$ # \begin{eqnarray} # \frac{\partial C}{\partial w} = \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} \, {\rm sgn}(w) # \end{eqnarray} # $$ # # $$ # \begin{eqnarray} # w \rightarrow w' = w-\frac{\eta \lambda}{n} \mbox{sgn}(w) - \eta \frac{\partial C_0}{\partial w} # \end{eqnarray} # $$ # + [markdown] school_cell_uuid="ca112e760f924894826e6a6eb6bb9cf5" # ## Dropout 정규화 # + [markdown] school_cell_uuid="6803b7f146944245bca4b67f14440d39" # Dropout 정규화 방법은 epoch 마다 임의의 hidden layer neurons $100p$%(보통 절반)를 dropout 하여 최적화 과정에 포함하지 않는 방법이다. 이 방법을 사용하면 가중치 값들 값들이 동시에 움직이는 것(co-adaptations) 방지하며 모형 averaging 효과를 가져다 준다. # # + [markdown] school_cell_uuid="e2c0a5c7152f4a3d88a753bc8359773a" # 가중치 갱신이 끝나고 테스트 시점에는 가중치에 $p$를 곱하여 스케일링한다. # # # + [markdown] school_cell_uuid="fde4b146814e4f6a98179dcea4769ac4" # ## 가중치 초기화 (Weight initialization) # + [markdown] school_cell_uuid="63c943810ffc4dfc892ba033f9c41276" # 뉴런에 대한 입력의 수 $n_{in}$가 증가하면 가중 총합 $a$값의 표준편차도 증가한다. # $$ \text{std}(a) \propto \sqrt{n_{in}} $$ # # # # # 예를 들어 입력이 1000개, 그 중 절반이 1이면 표준편차는 약 22.4 이 된다. # $$ \sqrt{501} \approx 22.4 $$ # # # # # 이렇게 표준 편가가 크면 수렴이 느려지기 때문에 입력 수에 따라 초기화 가중치의 표준편차를 감소하는 초기화 값 조정이 필요하다. # # $$\dfrac{1}{\sqrt{n_{in}} }$$ # + [markdown] school_cell_uuid="e87b5588958a4159a397eeaa32265f2b" # ## 가중치 초기화 구현 예 # # # ```python # def default_weight_initializer(self): # """Initialize each weight using a Gaussian distribution with mean 0 # and standard deviation 1 over the square root of the number of # weights connecting to the same neuron. Initialize the biases # using a Gaussian distribution with mean 0 and standard # deviation 1. # Note that the first layer is assumed to be an input layer, and # by convention we won't set any biases for those neurons, since # biases are only ever used in computing the outputs from later # layers. 
# """ # self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]] # self.weights = [np.random.randn(y, x)/np.sqrt(x) for x, y in zip(self.sizes[:-1], self.sizes[1:])] # ``` # + [markdown] school_cell_uuid="f1c8d7a1bfb04c249103bdd23a577eee" # # + [markdown] school_cell_uuid="06beb9b04ee84e0b8fafccbbae65466c" # ## 소프트맥스 출력 # + [markdown] school_cell_uuid="12ac0dda7f1343c2b44f58f8b2921599" # 소프트맥스(softmax) 함수는 입력과 출력이 다변수(multiple variable) 인 함수이다. 최고 출력의 위치를 변화하지 않으면서 츨력의 합이 1이 되도록 조정하기 때문에 출력에 확률론적 의미를 부여할 수 있다. 보통 신경망의 최종 출력단에 적용한다. # # # # $$ # \begin{eqnarray} # y^L_j = \frac{e^{a^L_j}}{\sum_k e^{a^L_k}}, # \end{eqnarray} # $$ # # # # $$ # \begin{eqnarray} # \sum_j y^L_j & = & \frac{\sum_j e^{a^L_j}}{\sum_k e^{a^L_k}} = 1 # \end{eqnarray} # $$ # # # # # # + school_cell_uuid="0bba0cfd653247c5bcfb96924566f7e3" from ipywidgets import interactive from IPython.display import Audio, display def softmax_plot(z1=0, z2=0, z3=0, z4=0): exps = np.array([np.exp(z1), np.exp(z2), np.exp(z3), np.exp(z4)]) exp_sum = exps.sum() plt.bar(range(len(exps)), exps/exp_sum) plt.xlim(-0.3, 4.1) plt.ylim(0, 1) plt.xticks([]) v = interactive(softmax_plot, z1=(-3, 5, 0.01), z2=(-3, 5, 0.01), z3=(-3, 5, 0.01), z4=(-3, 5, 0.01)) display(v) # + [markdown] school_cell_uuid="44573772100b472e8f42636ca84e6ca9" # ## Hyper-Tangent Activation and Rectified Linear Unit (ReLu) Activation # + [markdown] school_cell_uuid="059c0562489f4664bafd7e11d0760166" # 시그모이드 함수 이외에도 하이퍼 탄젠트 및 ReLu 함수를 사용할 수도 있다. # + [markdown] school_cell_uuid="4e79fd2c1daf402384c5a128082bfe6e" # 하이퍼 탄젠트 activation 함수는 음수 값을 가질 수 있으며 시그모이드 activation 함수보다 일반적으로 수렴 속도가 빠르다. # # $$ # \begin{eqnarray} # \tanh(w \cdot x+b), # \end{eqnarray} # $$ # # # $$ # \begin{eqnarray} # \tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}. # \end{eqnarray} # $$ # # # # # $$ # \begin{eqnarray} # \sigma(a) = \frac{1+\tanh(a/2)}{2}, # \end{eqnarray} # $$ # # # + school_cell_uuid="a5975aacd7324aae8e5823b898666527" z = np.linspace(-5, 5, 100) a = np.tanh(z) plt.plot(z, a) plt.show() # + [markdown] school_cell_uuid="8c6ae2073b354cdbab8837ffc4dfec5d" # Rectified Linear Unit (ReLu) Activation 함수는 무한대 크기의 activation 값이 가능하며 가중치총합 $a$가 큰 경우에도 기울기(gradient)가 0 이되며 사라지지 않는다는 장점이 있다. # # $$ # \begin{eqnarray} # \max(0, w \cdot x+b). # \end{eqnarray} # $$ # + school_cell_uuid="c2b1b32880994d14b6e6078e564a35c9" z = np.linspace(-5, 5, 100) a = np.maximum(z, 0) plt.plot(z, a) plt.show() # + [markdown] school_cell_uuid="ff77548a4c42482ca9a9b7b630b32fea" # ## 그레디언트 소멸 문제 (Vanishing Gradient Problem) # + [markdown] school_cell_uuid="698e9df8f8d2413685396eabe2d11865" # 은닉 계층의 수가 너무 증가하면 수렴 속도 및 성능이 급격히 저하된다. # # * MNIST digits 예제 # * 1 hidden layer [784, 30, 10]: accuracy 96.48 percent # * 2 hidden layer [784, 30, 30, 10]: accuracy 96.90 percent # * 3 hidden layer [784, 30, 30, 30, 10]: accuracy 96.57 percent # * 4 hidden layer [784, 30, 30, 30, 30, 10]: accuracy 96.53 percent # # # 실제로 은닉 계층에서의 가중치의 값을 보면 입력 계층 쪽으로 갈수록 감소하는 것을 볼 수 있다. # # # # # + [markdown] school_cell_uuid="7a2a4861a4624720b71cff624decf06d" # 가중치가 감소하는 원인은 backpropagation시에 오차가 뉴런을 거치면서 activation 함수의 기울기가 곱해지는데 이 값이 1보다 작아서 계속 크기가 감소하기 때문이다. 
# # * sigmoid activation의 경우 # # $$ \sigma'(0) = 1/4 $$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rank-one nonnegative matrix factorization # # The DGP atom library has several functions of positive matrices, including the trace, (matrix) product, sum, Perron-Frobenius eigenvalue, and $(I - X)^{-1}$ (eye-minus-inverse). In this notebook, we use some of these atoms to approximate a partially known elementwise positive matrix # as the outer product of two positive vectors. # # We would like to approximate $A$ as the outer product of two positive vectors $x$ and $y$, with $x$ normalized so that the product of its entries equals $1$. Our criterion is the average relative deviation between the entries of $A$ and # $xy^T$, that is, # # $$ # \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} R(A_{ij}, x_iy_j), # $$ # # where $R$ is the relative deviation of two positive numbers, defined as # # $$ # R(a, b) = \max\{a/b, b/a\} - 1. # $$ # # The corresponding optimization problem is # # $$ # \begin{equation} # \begin{array}{ll} # \mbox{minimize} & \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} R(X_{ij}, x_iy_j) # \\ # \mbox{subject to} & x_1x_2 \cdots x_m = 1 \\ # & X_{ij} = A_{ij}, \quad \text{for } (i, j) \in \Omega, # \end{array} # \end{equation} # $$ # # with variables $X \in \mathbf{R}^{m \times n}_{++}$, $x \in \mathbf{R}^{m}_{++}$, and $y \in \mathbf{R}^{n}_{++}$. We can cast this problem as an equivalent generalized geometric program by discarding the $-1$ from the relative deviations. # # The below code constructs and solves this optimization problem, with specific problem data # # $$ # A = \begin{bmatrix} # 1.0 & ? & 1.9 \\ # ? & 0.8 & ? \\ # 3.2 & 5.9& ? 
# \end{bmatrix}, # $$ # + import cvxpy as cp m = 3 n = 3 X = cp.Variable((m, n), pos=True) x = cp.Variable((m,), pos=True) y = cp.Variable((n,), pos=True) outer_product = cp.vstack([x[i] * y for i in range(m)]) relative_deviations = cp.maximum( cp.multiply(X, outer_product ** -1), cp.multiply(X ** -1, outer_product)) objective = cp.sum(relative_deviations) constraints = [ X[0, 0] == 1.0, X[0, 2] == 1.9, X[1, 1] == 0.8, X[2, 0] == 3.2, X[2, 1] == 5.9, x[0] * x[1] * x[2] == 1.0, ] problem = cp.Problem(cp.Minimize(objective), constraints) problem.solve(gp=True) print("Optimal value:\n", 1.0/(m * n) * (problem.value - m * n), "\n") print("Outer product approximation\n", outer_product.value, "\n") print("x: ", x.value) print("y: ", y.value) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="cXM3RFspNBRb" # adapted from https://colab.research.google.com/notebooks/magenta/hello_magenta/hello_magenta.ipynb # + id="lsFoCLATNBRh" outputId="195cde38-a10b-4a89-9e36-f1cf392e9be2" colab={"base_uri": "https://localhost:8080/", "height": 700} # !apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev # !pip install -qU pyfluidsynth pretty_midi # !pip install -qU magenta # + id="N0NmZPB-NBRj" # only needed for Colab environment: import ctypes.util orig_ctypes_util_find_library = ctypes.util.find_library def proxy_find_library(lib): if lib == 'fluidsynth': return 'libfluidsynth.so.1' else: return orig_ctypes_util_find_library(lib) ctypes.util.find_library = proxy_find_library # + id="NXeSOiqENBRm" from note_seq.protobuf import music_pb2 import note_seq twinkle_twinkle = music_pb2.NoteSequence() twinkle_twinkle.notes.add(pitch=60, start_time=0.0, end_time=0.5, velocity=80) twinkle_twinkle.notes.add(pitch=60, start_time=0.5, end_time=1.0, velocity=80) twinkle_twinkle.notes.add(pitch=67, start_time=1.0, end_time=1.5, velocity=80) twinkle_twinkle.notes.add(pitch=67, start_time=1.5, end_time=2.0, velocity=80) twinkle_twinkle.notes.add(pitch=69, start_time=2.0, end_time=2.5, velocity=80) twinkle_twinkle.notes.add(pitch=69, start_time=2.5, end_time=3.0, velocity=80) twinkle_twinkle.notes.add(pitch=67, start_time=3.0, end_time=4.0, velocity=80) twinkle_twinkle.notes.add(pitch=65, start_time=4.0, end_time=4.5, velocity=80) twinkle_twinkle.notes.add(pitch=65, start_time=4.5, end_time=5.0, velocity=80) twinkle_twinkle.notes.add(pitch=64, start_time=5.0, end_time=5.5, velocity=80) twinkle_twinkle.notes.add(pitch=64, start_time=5.5, end_time=6.0, velocity=80) twinkle_twinkle.notes.add(pitch=62, start_time=6.0, end_time=6.5, velocity=80) twinkle_twinkle.notes.add(pitch=62, start_time=6.5, end_time=7.0, velocity=80) twinkle_twinkle.notes.add(pitch=60, start_time=7.0, end_time=8.0, velocity=80) twinkle_twinkle.total_time = 8 twinkle_twinkle.tempos.add(qpm=60); # + id="D_7VU4IXNBRp" outputId="4d4f25c9-b215-4e97-b78d-067f322b4aca" colab={"base_uri": "https://localhost:8080/", "height": 275} note_seq.plot_sequence(twinkle_twinkle) note_seq.play_sequence(twinkle_twinkle,synth=note_seq.fluidsynth) # + id="aIrH4uy9NBRr" outputId="356896f5-22a8-42f8-e606-5428506249c1" colab={"base_uri": "https://localhost:8080/", "height": 68} import os from magenta.models.melody_rnn import melody_rnn_sequence_generator from magenta.models.shared import sequence_generator_bundle from note_seq.protobuf import 
generator_pb2 from note_seq.protobuf import music_pb2 basedir = '/content' # set this to your gdrive location if you like note_seq.notebook_utils.download_bundle('attention_rnn.mag', basedir) bundle = sequence_generator_bundle.read_bundle_file( os.path.join(basedir, 'attention_rnn.mag') ) generator_map = melody_rnn_sequence_generator.get_generator_map() melody_rnn = generator_map['attention_rnn'](checkpoint=None, bundle=bundle) melody_rnn.initialize() # + id="NkOJYYItNBRu" outputId="ec390335-7408-45f7-8e0c-8796c461668e" colab={"base_uri": "https://localhost:8080/", "height": 34} def get_options(input_sequence, num_steps=128, temperature=1.0): last_end_time = (max(n.end_time for n in input_sequence.notes) if input_sequence.notes else 0) qpm = input_sequence.tempos[0].qpm seconds_per_step = 60.0 / qpm / melody_rnn.steps_per_quarter total_seconds = num_steps * seconds_per_step generator_options = generator_pb2.GeneratorOptions() generator_options.args['temperature'].float_value = temperature generate_section = generator_options.generate_sections.add( start_time=last_end_time + seconds_per_step, end_time=total_seconds) return generator_options sequence = melody_rnn.generate(twinkle_twinkle, get_options(twinkle_twinkle)) # + id="m6qhxa7pNBRx" outputId="c790625c-a151-4fd8-99ee-ed545b1a8b67" colab={"base_uri": "https://localhost:8080/", "height": 275} note_seq.plot_sequence(sequence) note_seq.play_sequence(sequence, synth=note_seq.fluidsynth) # + id="_2AQlwVRNBR2" note_seq.sequence_proto_to_midi_file(sequence, 'twinkle_continued.midi') # + id="3gGD4l90NBR4" outputId="edcf4314-81e9-420c-fd2c-e955b1116ced" colab={"base_uri": "https://localhost:8080/", "height": 17} from google.colab import files files.download('twinkle_continued.midi') # + id="rSJ1fDmpOuHX" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Adversarial attacks on GoogleNet # The goal of this notebook is to download a pretrained GoogleNet model for classifying CIFAR-10 images, test it on our dataset, then generate adversarial examples and see if they fool the GoogleNet model. Then we'll try transfer-training the GoogleNet model with these adversarial images to see if that makes the network robust against them, and what the accuracy cost is. # The pretrained model and the GoogLeNet class and its dependent code are provided by [](https://github.com/huyvnphan/PyTorch_CIFAR10). # %pylab inline import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from tqdm import tqdm from PIL import Image import imageio # Model path: PATH = '../Models/googlenet_cifar10.pth' # + # This cell is entirely the work of Huy Phan, see above. 
class GoogLeNet(nn.Module): ## CIFAR10: aux_logits True->False def __init__(self, num_classes=10, aux_logits=False, transform_input=False): super(GoogLeNet, self).__init__() self.aux_logits = aux_logits self.transform_input = transform_input ## CIFAR10: out_channels 64->192, kernel_size 7->3, stride 2->1, padding 3->1 self.conv1 = BasicConv2d(3, 192, kernel_size=3, stride=1, padding=1) # self.maxpool1 = nn.MaxPool2d(3, stride=2, ceil_mode=True) # self.conv2 = BasicConv2d(64, 64, kernel_size=1) # self.conv3 = BasicConv2d(64, 192, kernel_size=3, padding=1) # self.maxpool2 = nn.MaxPool2d(3, stride=2, ceil_mode=True) ## END self.inception3a = Inception(192, 64, 96, 128, 16, 32, 32) self.inception3b = Inception(256, 128, 128, 192, 32, 96, 64) ## CIFAR10: padding 0->1, ciel_model True->False self.maxpool3 = nn.MaxPool2d(3, stride=2, padding=1, ceil_mode=False) ## END self.inception4a = Inception(480, 192, 96, 208, 16, 48, 64) self.inception4b = Inception(512, 160, 112, 224, 24, 64, 64) self.inception4c = Inception(512, 128, 128, 256, 24, 64, 64) self.inception4d = Inception(512, 112, 144, 288, 32, 64, 64) self.inception4e = Inception(528, 256, 160, 320, 32, 128, 128) ## CIFAR10: kernel_size 2->3, padding 0->1, ciel_model True->False self.maxpool4 = nn.MaxPool2d(3, stride=2, padding=1, ceil_mode=False) ## END self.inception5a = Inception(832, 256, 160, 320, 32, 128, 128) self.inception5b = Inception(832, 384, 192, 384, 48, 128, 128) if aux_logits: self.aux1 = InceptionAux(512, num_classes) self.aux2 = InceptionAux(528, num_classes) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.dropout = nn.Dropout(0.2) self.fc = nn.Linear(1024, num_classes) # if init_weights: # self._initialize_weights() # def _initialize_weights(self): # for m in self.modules(): # if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear): # import scipy.stats as stats # X = stats.truncnorm(-2, 2, scale=0.01) # values = torch.as_tensor(X.rvs(m.weight.numel()), dtype=m.weight.dtype) # values = values.view(m.weight.size()) # with torch.no_grad(): # m.weight.copy_(values) # elif isinstance(m, nn.BatchNorm2d): # nn.init.constant_(m.weight, 1) # nn.init.constant_(m.bias, 0) def forward(self, x): if self.transform_input: x_ch0 = torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5 x_ch1 = torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5 x_ch2 = torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5 x = torch.cat((x_ch0, x_ch1, x_ch2), 1) # N x 3 x 224 x 224 x = self.conv1(x) ## CIFAR10 # N x 64 x 112 x 112 # x = self.maxpool1(x) # N x 64 x 56 x 56 # x = self.conv2(x) # N x 64 x 56 x 56 # x = self.conv3(x) # N x 192 x 56 x 56 # x = self.maxpool2(x) ## END # N x 192 x 28 x 28 x = self.inception3a(x) # N x 256 x 28 x 28 x = self.inception3b(x) # N x 480 x 28 x 28 x = self.maxpool3(x) # N x 480 x 14 x 14 x = self.inception4a(x) # N x 512 x 14 x 14 if self.training and self.aux_logits: aux1 = self.aux1(x) x = self.inception4b(x) # N x 512 x 14 x 14 x = self.inception4c(x) # N x 512 x 14 x 14 x = self.inception4d(x) # N x 528 x 14 x 14 if self.training and self.aux_logits: aux2 = self.aux2(x) x = self.inception4e(x) # N x 832 x 14 x 14 x = self.maxpool4(x) # N x 832 x 7 x 7 x = self.inception5a(x) # N x 832 x 7 x 7 x = self.inception5b(x) # N x 1024 x 7 x 7 x = self.avgpool(x) # N x 1024 x 1 x 1 x = x.view(x.size(0), -1) # N x 1024 x = self.dropout(x) x = self.fc(x) # N x 1000 (num_classes) if self.training and self.aux_logits: return _GoogLeNetOuputs(x, aux2, aux1) return x class 
Inception(nn.Module): def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool_proj): super(Inception, self).__init__() self.branch1 = BasicConv2d(in_channels, ch1x1, kernel_size=1) self.branch2 = nn.Sequential( BasicConv2d(in_channels, ch3x3red, kernel_size=1), BasicConv2d(ch3x3red, ch3x3, kernel_size=3, padding=1) ) self.branch3 = nn.Sequential( BasicConv2d(in_channels, ch5x5red, kernel_size=1), BasicConv2d(ch5x5red, ch5x5, kernel_size=3, padding=1) ) self.branch4 = nn.Sequential( nn.MaxPool2d(kernel_size=3, stride=1, padding=1, ceil_mode=True), BasicConv2d(in_channels, pool_proj, kernel_size=1) ) def forward(self, x): branch1 = self.branch1(x) branch2 = self.branch2(x) branch3 = self.branch3(x) branch4 = self.branch4(x) outputs = [branch1, branch2, branch3, branch4] return torch.cat(outputs, 1) class InceptionAux(nn.Module): def __init__(self, in_channels, num_classes): super(InceptionAux, self).__init__() self.conv = BasicConv2d(in_channels, 128, kernel_size=1) self.fc1 = nn.Linear(2048, 1024) self.fc2 = nn.Linear(1024, num_classes) def forward(self, x): # aux1: N x 512 x 14 x 14, aux2: N x 528 x 14 x 14 x = F.adaptive_avg_pool2d(x, (4, 4)) # aux1: N x 512 x 4 x 4, aux2: N x 528 x 4 x 4 x = self.conv(x) # N x 128 x 4 x 4 x = x.view(x.size(0), -1) # N x 2048 x = F.relu(self.fc1(x), inplace=True) # N x 2048 x = F.dropout(x, 0.7, training=self.training) # N x 2048 x = self.fc2(x) # N x 1024 return x class BasicConv2d(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(BasicConv2d, self).__init__() self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs) self.bn = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.conv(x) x = self.bn(x) return F.relu(x, inplace=True) # + tmean = [0.4914, 0.4822, 0.4465] tstd = [0.2023, 0.1994, 0.2010] transform_test = transforms.Compose([transforms.ToTensor(), transforms.Normalize(tmean, tstd)]) inv_normalize = transforms.Normalize( mean=[-1*tmean[i]/tstd[i] for i in range(3)], std=[1/tstd[i] for i in range(3)] ) batchsize = 1 trainset = torchvision.datasets.CIFAR10(root='../Data', train=True, download=True, transform=transform_test) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchsize, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='../Data', train=False, download=True, transform=transform_test) testloader = torch.utils.data.DataLoader(testset, batch_size=batchsize, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # - def imshowt(img): img = inv_normalize(img) # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() gnet = GoogLeNet() state_dict = torch.load('../Models/googlenet.pt') gnet.load_state_dict(state_dict) def predict_image(network, input_tensor): """ Input: Image tensor Outputs: Predicted image class, probability assigned by network to top class """ outputs = network(input_tensor)#.squeeze() class_probas = nn.Softmax(dim=1)(outputs).detach().cpu().numpy()[0] idx = np.argmax(class_probas) img_class = classes[idx] proba = class_probas[idx] return img_class, proba for item in iter(trainloader): image, label = item break imshowt(image[0]); predict_image(gnet, image) # + correct = 0 total = 0 with torch.no_grad(): for data in tqdm(testloader): images, labels = data outputs = gnet(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the 
network on the 10000 test images: %d %%' % ( 100 * correct / total)) # - # Wait, why on earth is the accuracy so low?! The claimed accuracy for this model was greater than 90%, which is definitely not what we are observing. def get_adversarial_image(network, img_tuple, epsilon=0.01): img, label = img_tuple img.requires_grad = True outputs = network(img) # Format label. class_name = labels_class[label] class_idx = classes.index(class_name) label = torch.tensor(class_idx).unsqueeze(0).to(device) # Get loss gradient with regard to image pixels. loss_fn = nn.CrossEntropyLoss() loss = loss_fn(outputs, label) loss.backward() img_gradient = img.grad gradient_signs = torch.sign(img_gradient).cpu().numpy().squeeze() # Match shape of image (channels last in this case) gradient_signs = np.transpose(gradient_signs, axes=[1, 2, 0]) pixel_changes = (gradient_signs * 255 * epsilon).astype(np.int16) changed_img = (img).astype(np.int16) + pixel_changes adv_img = np.clip(changed_img, 0, 255).astype(np.uint8) return adv_img # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Map MeSH conditions to the Disease Ontology and MeSH interventions to DrugBank import pandas # ## Map MeSH to the Disease Ontology url = 'https://github.com/dhimmel/disease-ontology/blob/75050ea2d4f60e745d3f3578ae03560a2cc0e444/data/xrefs.tsv?raw=true' disease_map_df = ( pandas.read_table(url) .query("resource == 'MSH'") .drop('resource', axis='columns') .rename(columns={'resource_id': 'condition'}) ) disease_map_df.head(2) # ## Map MeSH to DrugBank # Map from DrugBank to MeSH using DrugCentral url = 'https://github.com/olegursu/drugtarget/blob/9a6d84bed8650c6c507a2d3d786814c774568610/identifiers.tsv?raw=true' drug_map_df = pandas.read_table(url) drug_map_df = drug_map_df[drug_map_df.ID_TYPE.str.contains('MESH')][['DRUG_ID', 'IDENTIFIER']].rename(columns={'IDENTIFIER': 'intervention'}).merge( drug_map_df[drug_map_df.ID_TYPE == 'DRUGBANK_ID'][['DRUG_ID', 'IDENTIFIER']].rename(columns={'IDENTIFIER': 'drugbank_id'}) ).drop('DRUG_ID', axis='columns') drug_map_df.head(2) # ## Read DrugBank # + url = 'https://github.com/dhimmel/drugbank/blob/55587651ee9417e4621707dac559d84c984cf5fa/data/drugbank.tsv?raw=true' drugbank_df = pandas.read_table(url) drugbank_id_to_name = dict(zip(drugbank_df.drugbank_id, drugbank_df.name)) url = 'https://github.com/dhimmel/drugbank/blob/55587651ee9417e4621707dac559d84c984cf5fa/data/drugbank-slim.tsv?raw=true' drugbank_slim_ids = set(pandas.read_table(url).drugbank_id) # - # ## Map ClinicalTrials.gov intervention-condition pairs mesh_df = pandas.read_table('data/mesh-intervention-to-condition.tsv') mesh_df.head(2) mapped_df = mesh_df.merge(drug_map_df).merge(disease_map_df) mapped_df = mapped_df.drop(['condition', 'intervention'], axis='columns').drop_duplicates() mapped_df.insert(2, 'drugbank_name', mapped_df.drugbank_id.map(drugbank_id_to_name)) mapped_df = mapped_df.sort_values(['doid_code', 'drugbank_id', 'nct_id']) mapped_df.head(2) len(mapped_df), mapped_df.nct_id.nunique(), mapped_df.drugbank_id.nunique(), mapped_df.doid_code.nunique() len(mapped_df[['drugbank_id', 'doid_code']].drop_duplicates()) # + #mapped_df.query("doid_name == 'multiple sclerosis'").drug_name.value_counts() # - mapped_df.to_csv('data/DrugBank-DO.tsv', sep='\t', index=False) # ## Create a slim subset # Read Disease Ontology transitive closures for slim terms url = 
'https://github.com/dhimmel/disease-ontology/blob/75050ea2d4f60e745d3f3578ae03560a2cc0e444/data/slim-terms-prop.tsv?raw=true' do_slim_map_df = ( pandas.read_table(url) .rename(columns={'slim_id': 'disease_id', 'slim_name': 'disease_name', 'subsumed_id': 'doid_code'}) .drop(['subsumed_name', 'min_distance'], axis='columns') ) do_slim_map_df.head(2) slim_df = (mapped_df .query("drugbank_id in @drugbank_slim_ids") .merge(do_slim_map_df) .drop(['doid_code', 'doid_name'], axis='columns') .rename(columns={'drugbank_id': 'compound_id', 'drugbank_name': 'compound_name'}) .drop_duplicates() .sort_values(['disease_name', 'compound_name', 'nct_id']) ) slim_df.head(3) slim_df.to_csv('data/DrugBank-DO-slim.tsv', sep='\t', index=False) len(slim_df), slim_df.nct_id.nunique(), slim_df.compound_id.nunique(), slim_df.disease_id.nunique() len(slim_df[['compound_id', 'disease_id']].drop_duplicates()) # Count number of trials per compound-disease pair slim_count_df = ( slim_df.groupby(['compound_id', 'disease_id', 'compound_name', 'disease_name']) .apply(lambda df: pandas.Series({'n_trials': len(df)})) .reset_index() ) slim_count_df.to_csv('data/DrugBank-DO-slim-counts.tsv', sep='\t', index=False) slim_count_df.head(2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Backoff Algorithm # Cuando k terminales tratan de enviar paquetes de datos en una misma red, si estos son enviados al mismo tiempo ocurre una colisión. Cuando las terminales no reciben una respuesta vuelven a enviar un mensaje pero con un retraso(delay) para evitar una posible colisión. El delay tomará valores entre: # # Siendo 'n' el número de intentos. # Si vuelve a colisionar, n incrementa en 1. from random import randint class Node: Counter = 0 def __init__(self): Node.Counter += 1 self.name = "PC" + str(Node.Counter) self.delay = randint(0, 1) self.attempt = 0 self.occurrence = self.delay def backoff(self): self.attempt+=1 aux = (2 ** self.attempt) -1 self.delay = randint(0, aux) self.occurrence += 1 + self.delay def reset(self): self.delay = randint(0, 1) self.attempt = 0 self.occurrence += 1 + self.delay def printNode(self): print("%s(%d)(%d)"%(self.name,self.delay,self.occurrence)) def displayNow(self,time): print('%s (%d)' % (self.name,self.delay),end="") if self.occurrence == time: print('\033[94m' +"("+str(self.occurrence)+")"+'\033[0m') else: print("(%d)"%(self.occurrence)) # #### Funciones para cambiar el color # + def green(s): print('\033[92m' + s + '\033[0m', end="") def blue(s): print('\033[94m' + s + '\033[0m',end="") def red(s): print('\033[91m' + s + '\033[0m',end="") # - # #### Imprime la lista de los retrasos y ocurrencias en el siguiente formato: # Nombre de Equipo(Retraso)(Ocurrencia) # + def printList(listN,time): for i in listN: i.displayNow(time) # - # #### En este programa se tomará en cuenta el tiempo, si los nodos envían mensajes al mismo tiempo ocurrirá una colisión. # + Node.Counter = 0 #tiempo time = 20 #contador de colisiones counter = 0 #n° de nodos nodes = 4 #lista de nodos listN = [] #lista de colisiones listC = [] #llena la lista con el n° de nodos for x in range(0,nodes): listN.append(Node()) for time in range(0,time+1): print("__________________________________") print("") print("time:", end="") blue(str(time)+"s") print("") #Imprime la lista de los actuales retrasos y ocurrencias. 
printList(listN,time) for i in listN: for j in listN: #Si ocurre una colision if i.occurrence == j.occurrence and i.name != j.name and i.occurrence == time and i.name not in listC: #Se agrega a la lista de colisiones listC.append(i.name) counter += 1 #Si ha ocurrido una colisión en la iteración la lista no estará vacía print("--------------") if listC: red("Collision") print("") print(listC) else: green("No Collision") print("") print("--------------") #Verifica la ocurrencia con el tiempo for i in listN: #Si el elemento colisiona se procede a aplica el método backoff if i.name in listC: i.backoff() red(">backoff: ") i.printNode() #Si el elemento no colisiona se resetea el retraso(delay) y vuelve a enviar otro paquete if i.name not in listC and i.occurrence == time: i.reset() green(">reset: ") i.printNode() #Se reinicia la lista de colisiones para la siguiente iteración listC = [] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Soal 1 (Bobot: 30) # hint: containers, loop, decision making, function # Buatlah sebuah fungsi CountAnimal(A, B) dengan A merupakan sebuah list yang memuat nama-nama binatang dan B merupakan nama seekor binatang. Fungsi CountAnimal(A, B) akan mengembalikan jumlah binatang B yang terdapat pada list A. animal = ['cat', 'dog', 'cat', 'monkey', 'monkey', 'dog', 'cat', 'horse', 'cat'] def CountAnimal(A, B): count = 0 # lakukan implementasi di sini # Jawab : # Melakukan perulangan pada setiap element A, jika element tersebut sama dengan B maka count bertambah satu for i in A: if i == B: count+=1 return count print(CountAnimal(animal, 'cat')) # ## Soal 2 (Bobot: 20) # hint: NumPy import numpy as np a = np.array([1, 2, 3]) b = np.array([[1, 2, 3]]) # Apakah a dan b sama? Jelaskan jawaban anda! # # ## Jawab : # Ketikkan penjelasan jawaban anda di bawah print("Apakah array a dan b sama ? : ",np.array_equal(a,b)) print("Shape a : ",a.shape) print("Shape b : ",b.shape) # #### Sehingga array a dan b tidak sama karena memiliki shape dimensi yang berbeda # ## Soal 3 (Bobot: 40) # hint: string, loop, list, function # Buatlah sebuah fungsi Reverse(A) dimana A merupakan sebuah variabel string. Fungsi ini akan melakukan reverse terhadap string yang menjadi input atau membuatnya menjadi dieja terbalik. # # Contoh: # # input > "Hello" # # output > "olleH" def Reverse(a): reversed_string = "" # lakukan implementasi di sini # Jawab : # Melakukan perulangan untuk setiap element pada a disimpan pada variable reversed_string, # sehingga variable reversed string menyimpan element a yang dibalik for i in a: reversed_string = i+reversed_string return reversed_string print(Reverse("Hello")) # ## Soal 4 (Bobot: 20) # + import pandas as pd data = pd.read_csv('iris.csv') data # - # Ambil data dengan class 'Iris-setosa' # ## Jawab : # # mengambil data dimana pada index 'class' berisi 'Iris-setosa' data[data['class']=='Iris-setosa'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="1IYlrzOCgoYR" # # **K Medoid Algorithm** # + [markdown] id="j50bcWjtgvU9" # K Medoid is a Clustering Algorithm in Machine Learning that uses Medoids (i.e. Actual Objects in a Cluster) to represent the Cluster. 
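#
# Because a medoid must be one of the cluster's own points, it differs from a centroid (the mean), which is
# generally not an actual data point. A tiny illustrative example (the variable names here are assumptions,
# not part of any library API):
#
# ```python
# import numpy as np
#
# points = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
# centroid = points.mean(axis=0)                        # [3.67, 0.] -- not one of the data points
# pairwise = np.abs(points[:, None, :] - points[None, :, :]).sum(axis=2)
# medoid = points[pairwise.sum(axis=1).argmin()]        # [1., 0.] -- the point with the smallest total distance
# ```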
# + [markdown] id="vtDPkFeTgxc_"
# In Python, this algorithm is available through the Scikit-Learn Extra library.
# + [markdown] id="9EAAb8Y1nQTf"
# This function operates on a given set of medoids and updates it in place.
#
# ## Parameters:
# **C – The cost matrix**, where the cost in the K-Medoids algorithm is the dissimilarity of the medoid (Ci) and the object (Pi), calculated as E = |Pi - Ci|.
#
# **Medoids** – The vector of medoid indexes. The contents of medoids serve as the initial guess and will be overridden by the results.
# + [markdown] id="1XOPH5SJoAeM"
# ## K Medoid Clustering Process
#
# 1. Initialize: select k random points out of the n data points as the medoids.
# 2. Associate each data point to the closest medoid using any common distance metric.
# 3. While the cost decreases, for each medoid m and for each data point o which is not a medoid:
#    1. Swap m and o, associate each data point to the closest medoid, and recompute the cost.
#    2. If the total cost is more than that in the previous step, undo the swap.
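#
# To make the procedure above concrete, here is a minimal, illustrative sketch of the swap loop. It is not
# the scikit-learn-extra implementation used later in this notebook; the function name `k_medoids` and the
# use of Euclidean distances are assumptions for the example.
#
# ```python
# import numpy as np
#
# def k_medoids(X, k, max_iter=100, seed=0):
#     """Naive K-Medoids following the steps listed above (X is an (n, d) array)."""
#     rng = np.random.default_rng(seed)
#     n = len(X)
#     medoids = list(rng.choice(n, size=k, replace=False))   # 1. random initial medoids
#
#     def total_cost(meds):
#         # Sum of distances from every point to its closest medoid.
#         d = np.linalg.norm(X[:, None, :] - X[meds][None, :, :], axis=2)
#         return d.min(axis=1).sum()
#
#     cost = total_cost(medoids)
#     for _ in range(max_iter):                  # 3. repeat while the cost keeps decreasing
#         improved = False
#         for i in range(k):                     # for each medoid m ...
#             for o in range(n):                 # ... and each non-medoid point o
#                 if o in medoids:
#                     continue
#                 candidate = medoids.copy()
#                 candidate[i] = o               # swap m and o
#                 new_cost = total_cost(candidate)
#                 if new_cost < cost:            # keep the swap only if the total cost improves
#                     medoids, cost = candidate, new_cost
#                     improved = True
#         if not improved:
#             break
#     d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
#     return np.array(medoids), d.argmin(axis=1)  # 2. final assignment to the closest medoid
# ```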
# # + [markdown] id="OmGfNSfUgzDE" # In this file , we will showcase how a basic K Medoid Algorithm works in Python , on a randomly created Dataset. # + colab={"base_uri": "https://localhost:8080/"} id="h77AWPvfiB76" outputId="47df4fa5-1816-46f6-f623-17608c6674f5" # !pip install scikit-learn-extra #installing the scikit extra library because it isnt inbuilt # + [markdown] id="fXnctd_zolGl" # ## Importing Libraries # + id="RA1ExtSfgkab" import matplotlib.pyplot as plt #Used for plotting graphs from sklearn.datasets import make_blobs #Used for creating random dataset from sklearn_extra.cluster import KMedoids #KMedoid is provided under Scikit-Learn Extra from sklearn.metrics import confusion_matrix from sklearn.metrics import silhouette_score import numpy as np import pandas as pd # + [markdown] id="-l9UR35Tonr9" # ## Generating Data # + id="f4W827BshMjg" data, clusters = make_blobs( n_samples=1000, centers=5, cluster_std=0.4, random_state=0 ) # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="nLj6FN0Mkm9r" outputId="b4571701-37b0-4c26-853d-ab47ccda5392" # Originally created plot with data plt.scatter(data[:,0], data[:,1]) plt.show() # + [markdown] id="FQ4QDcCpot6M" # ## Model Creation # + id="Ihyuk4rAiziW" # Creating KMedoids Model Km_model = KMedoids(n_clusters=5) # n_clusters means the number of clusters to form as well as the number of medoids to generate. # + colab={"base_uri": "https://localhost:8080/"} id="_BxT5mork81r" outputId="7ff951df-bbb7-4d37-ba13-c79451767c8e" Km_model.fit(data) #Fitting the data # + id="pt2QMyYmlHwU" pred = Km_model.predict(data) #predicting on our randomly created dataset # + id="xFqAOVrMlSYG" labels_KMed = Km_model.labels_ #storing labels to check accuracy of model # + [markdown] id="acn1qDp-oxH6" # ## Plotting our observations # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="OcS8T6lQlb6j" outputId="5f6d6cd8-5859-4eb7-b0b6-4877eeaa568d" # Viewing our Prediction using Plots plt.scatter(data[:, 0], data[:, 1], c = pred , cmap='rainbow') plt.plot( Km_model.cluster_centers_[:, 0], Km_model.cluster_centers_[:, 1], "o", markerfacecolor="gold", markeredgecolor="k", markersize=6, ) plt.title("KMedoids clustering. Medoids are represented in Golden Colour") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="9XP3p4Rpmewi" outputId="c512661f-d9eb-4499-fb2a-b21584d2444c" Km_model.cluster_centers_ #points which were selected for Medoids in our clustering data # + [markdown] id="b8R9V0K-l0a5" # ## Accuracy of K Medoid Clustering # + colab={"base_uri": "https://localhost:8080/"} id="yvtPL44vikfa" outputId="46865561-94db-4ecb-da2e-5cf13bb25ea9" KMed_score = silhouette_score(data, labels_KMed) KMed_score # + [markdown] id="swaOXrlKoUjZ" # On this randomly created dataset we got an accuracy of 81.1 % # + [markdown] id="IKE4ns8An6GG" # ## Thanks a lot! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # Open In Colab # # 06. PyTorch Transfer Learning V2 # # > **Note:** This notebook is a cloned version of [`06_pytorch_transfer_learning.ipynb`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/06_pytorch_transfer_learning.ipynb) but adapted to work with `torchvision`'s upcoming [multi-weight support API (coming in `torchvision` v0.13)](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/). 
# > # > As of June 2022, it requires the [nightly versions of PyTorch and `torchvision` be installed](https://pytorch.org/get-started/locally/). # # We've built a few models by hand so far. # # But their performance has been poor. # # You might be thinking, **is there a well-performing model that already exists for our problem?** # # And in the world of deep learning, the answer is often *yes*. # # We'll see how by using a powerful technique called [**transfer learning**](https://developers.google.com/machine-learning/glossary#transfer-learning). # ## What is transfer learning? # # **Transfer learning** allows us to take the patterns (also called weights) another model has learned from another problem and use them for our own problem. # # For example, we can take the patterns a computer vision model has learned from datasets such as [ImageNet](https://www.image-net.org/) (millions of images of different objects) and use them to power our FoodVision Mini model. # # Or we could take the patterns from a [langauge model](https://developers.google.com/machine-learning/glossary#masked-language-model) (a model that's been through large amounts of text to learn a representation of language) and use them as the basis of a model to classify different text samples. # # The premise remains: find a well-performing existing model and apply it to your own problem. # # transfer learning overview on different problems # # *Example of transfer learning being applied to computer vision and natural language processing (NLP). In the case of computer vision, a computer vision model might learn patterns on millions of images in ImageNet and then use those patterns to infer on another problem. And for NLP, a language model may learn the structure of language by reading all of Wikipedia (and perhaps more) and then apply that knowledge to a different problem.* # ## Why use transfer learning? # # There are two main benefits to using transfer learning: # # 1. Can leverage an existing model (usually a neural network architecture) proven to work on problems similar to our own. # 2. Can leverage a working model which has **already learned** patterns on similar data to our own. This often results in achieving **great results with less custom data**. # # transfer learning applied to FoodVision Mini # # *We'll be putting these to the test for our FoodVision Mini problem, we'll take a computer vision model pretrained on ImageNet and try to leverage its underlying learned representations for classifying images of pizza, steak and sushi.* # # Both research and practice support the use of transfer learning too. # # A finding from a recent machine learning research paper recommended practioner's use transfer learning wherever possible. # # how to train your vision transformer paper section 6, advising to use transfer learning if you can # # *A study into the effects of whether training from scratch or using transfer learning was better from a practioner's point of view, found transfer learning to be far more beneficial in terms of cost and time. **Source:** [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270) paper section 6 (conclusion).* # # And (founder of [fastai](https://www.fast.ai/)) is a big proponent of transfer learning. # # > The things that really make a difference (transfer learning), if we can do better at transfer learning, it’s this world changing thing. Suddenly lots more people can do world-class work with less resources and less data. 
— [ on the Lex Fridman Podcast](https://youtu.be/Bi7f1JSSlh8?t=72) # # # # ## Where to find pretrained models # # The world of deep learning is an amazing place. # # So amazing that many people around the world share their work. # # Often, code and pretrained models for the latest state-of-the-art research is released within a few days of publishing. # # And there are several places you can find pretrained models to use for your own problems. # # | **Location** | **What's there?** | **Link(s)** | # | ----- | ----- | ----- | # | **PyTorch domain libraries** | Each of the PyTorch domain libraries (`torchvision`, `torchtext`) come with pretrained models of some form. The models there work right within PyTorch. | [`torchvision.models`](https://pytorch.org/vision/stable/models.html), [`torchtext.models`](https://pytorch.org/text/main/models.html), [`torchaudio.models`](https://pytorch.org/audio/stable/models.html), [`torchrec.models`](https://pytorch.org/torchrec/torchrec.models.html) | # | **HuggingFace Hub** | A series of pretrained models on many different domains (vision, text, audio and more) from organizations around the world. There's plenty of different datasets too. | https://huggingface.co/models, https://huggingface.co/datasets | # | **`timm` (PyTorch Image Models) library** | Almost all of the latest and greatest computer vision models in PyTorch code as well as plenty of other helpful computer vision features. | https://github.com/rwightman/pytorch-image-models| # | **Paperswithcode** | A collection of the latest state-of-the-art machine learning papers with code implementations attached. You can also find benchmarks here of model performance on different tasks. | https://paperswithcode.com/ | # # different locations to find pretrained neural network models # # *With access to such high-quality resources as above, it should be common practice at the start of every deep learning problem you take on to ask, "Does a pretrained model exist for my problem?"* # # > **Exercise:** Spend 5-minutes going through [`torchvision.models`](https://pytorch.org/vision/stable/models.html) as well as the [HuggingFace Hub Models page](https://huggingface.co/models), what do you find? (there's no right answers here, it's just to practice exploring) # ## What we're going to cover # # We're going to take a pretrained model from `torchvision.models` and customise it to work on (and hopefully improve) our FoodVision Mini problem. # # | **Topic** | **Contents** | # | ----- | ----- | # | **0. Getting setup** | We've written a fair bit of useful code over the past few sections, let's download it and make sure we can use it again. | # | **1. Get data** | Let's get the pizza, steak and sushi image classification dataset we've been using to try and improve our model's results. | # | **2. Create Datasets and DataLoaders** | We'll use the `data_setup.py` script we wrote in chapter 05. PyTorch Going Modular to setup our DataLoaders. | # | **3. Get and customise a pretrained model** | Here we'll download a pretrained model from `torchvision.models` and customise it to our own problem. | # | **4. Train model** | Let's see how the new pretrained model goes on our pizza, steak, sushi dataset. We'll use the training functions we created in the previous chapter. | # | **5. Evaluate the model by plotting loss curves** | How did our first transfer learning model go? Did it overfit or underfit? | # | **6. 
Make predictions on images from the test set** | It's one thing to check out a model's evaluation metrics but it's another thing to view its predictions on test samples, let's *visualize, visualize, visualize*! | # ## Where can you get help? # # All of the materials for this course [are available on GitHub](https://github.com/mrdbourke/pytorch-deep-learning). # # If you run into trouble, you can ask a question on the course [GitHub Discussions page](https://github.com/mrdbourke/pytorch-deep-learning/discussions). # # And of course, there's the [PyTorch documentation](https://pytorch.org/docs/stable/index.html) and [PyTorch developer forums](https://discuss.pytorch.org/), a very helpful place for all things PyTorch. # ## 0. Getting setup # # Let's get started by importing/downloading the required modules for this section. # # To save us writing extra code, we're going to be leveraging some of the Python scripts (such as `data_setup.py` and `engine.py`) we created in the previous section, [05. PyTorch Going Modular](https://www.learnpytorch.io/05_pytorch_going_modular/). # # Specifically, we're going to download the [`going_modular`](https://github.com/mrdbourke/pytorch-deep-learning/tree/main/going_modular) directory from the `pytorch-deep-learning` repository (if we don't already have it). # # We'll also get the [`torchinfo`](https://github.com/TylerYep/torchinfo) package if it's not available. # # `torchinfo` will help later on to give us a visual representation of our model. # # > **Note:** As of June 2022, this notebook uses the nightly versions of `torch` and `torchvision` as `torchvision` v0.13+ is required for using the updated multi-weights API. You can install these using the command below. # For this notebook to run with updated APIs, we need torch 1.12+ and torchvision 0.13+ try: import torch import torchvision assert int(torch.__version__.split(".")[1]) >= 12, "torch version should be 1.12+" assert int(torchvision.__version__.split(".")[1]) >= 13, "torchvision version should be 0.13+" print(f"torch version: {torch.__version__}") print(f"torchvision version: {torchvision.__version__}") except: print(f"[INFO] torch/torchvision versions not as required, installing nightly versions.") # !pip3 install -U --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu113 import torch import torchvision print(f"torch version: {torch.__version__}") print(f"torchvision version: {torchvision.__version__}") # + # Continue with regular imports import matplotlib.pyplot as plt import torch import torchvision from torch import nn from torchvision import transforms # Try to get torchinfo, install it if it doesn't work try: from torchinfo import summary except: print("[INFO] Couldn't find torchinfo... installing it.") # !pip install -q torchinfo from torchinfo import summary # Try to import the going_modular directory, download it from GitHub if it doesn't work try: from going_modular.going_modular import data_setup, engine except: # Get the going_modular scripts print("[INFO] Couldn't find going_modular scripts... downloading them from GitHub.") # !git clone https://github.com/mrdbourke/pytorch-deep-learning # !mv pytorch-deep-learning/going_modular . # !rm -rf pytorch-deep-learning from going_modular.going_modular import data_setup, engine # - # Now let's setup device agnostic code. 
# # > **Note:** If you're using Google Colab, and you don't have a GPU turned on yet, it's now time to turn one on via `Runtime -> Change runtime type -> Hardware accelerator -> GPU`. # Setup device agnostic code device = "cuda" if torch.cuda.is_available() else "cpu" device # ## 1. Get data # # Before we can start to use **transfer learning**, we'll need a dataset. # # To see how transfer learning compares to our previous attempts at model building, we'll download the same dataset we've been using for FoodVision Mini. # # Let's write some code to download the [`pizza_steak_sushi.zip`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi.zip) dataset from the course GitHub and then unzip it. # # We can also make sure if we've already got the data, it doesn't redownload. # + import os import zipfile from pathlib import Path import requests # Setup path to data folder data_path = Path("data/") image_path = data_path / "pizza_steak_sushi" # If the image folder doesn't exist, download it and prepare it... if image_path.is_dir(): print(f"{image_path} directory exists.") else: print(f"Did not find {image_path} directory, creating one...") image_path.mkdir(parents=True, exist_ok=True) # Download pizza, steak, sushi data with open(data_path / "pizza_steak_sushi.zip", "wb") as f: request = requests.get("https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip") print("Downloading pizza, steak, sushi data...") f.write(request.content) # Unzip pizza, steak, sushi data with zipfile.ZipFile(data_path / "pizza_steak_sushi.zip", "r") as zip_ref: print("Unzipping pizza, steak, sushi data...") zip_ref.extractall(image_path) # Remove .zip file os.remove(data_path / "pizza_steak_sushi.zip") # - # Excellent! # # Now we've got the same dataset we've been using previously, a series of images of pizza, steak and sushi in standard image classification format. # # Let's now create paths to our training and test directories. # Setup Dirs train_dir = image_path / "train" test_dir = image_path / "test" # ## 2. Create Datasets and DataLoaders # # Since we've downloaded the `going_modular` directory, we can use the [`data_setup.py`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/going_modular/going_modular/data_setup.py) script we created in section [05. PyTorch Going Modular](https://www.learnpytorch.io/05_pytorch_going_modular/#2-create-datasets-and-dataloaders-data_setuppy) to prepare and setup our DataLoaders. # # But since we'll be using a pretrained model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html), there's a specific transform we need to prepare our images first. # + [markdown] tags=[] # ### 2.1 Creating a transform for `torchvision.models` (manual creation) # # > **Note:** As of `torchvision` v0.13+, there's an update to how data transforms can be created using `torchvision.models`. I've called the previous method "manual creation" and the new method "auto creation". This notebook showcases both. # # When using a pretrained model, it's important that **your custom data going into the model is prepared in the same way as the original training data that went into the model**. # # Prior to `torchvision` v0.13+, to create a transform for a pretrained model in `torchvision.models`, the documentation stated: # # > All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. 
# > # > The images have to be loaded in to a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]` and `std = [0.229, 0.224, 0.225]`. # > # > You can use the following transform to normalize: # > # > ``` # > normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], # > std=[0.229, 0.224, 0.225]) # > ``` # # The good news is, we can achieve the above transformations with a combination of: # # | **Transform number** | **Transform required** | **Code to perform transform** | # | ----- | ----- | ----- | # | 1 | Mini-batches of size `[batch_size, 3, height, width]` where height and width are at least 224x224^. | `torchvision.transforms.Resize()` to resize images into `[3, 224, 224]`^ and `torch.utils.data.DataLoader()` to create batches of images. | # | 2 | Values between 0 & 1. | `torchvision.transforms.ToTensor()` | # | 3 | A mean of `[0.485, 0.456, 0.406]` (values across each colour channel). | `torchvision.transforms.Normalize(mean=...)` to adjust the mean of our images. | # | 4 | A standard deviation of `[0.229, 0.224, 0.225]` (values across each colour channel). | `torchvision.transforms.Normalize(std=...)` to adjust the standard deviation of our images. | # # > **Note:** ^some pretrained models from `torchvision.models` in different sizes to `[3, 224, 224]`, for example, some might take them in `[3, 240, 240]`. For specific input image sizes, see the documentation. # # > **Question:** *Where did the mean and standard deviation values come from? Why do we need to do this?* # > # > These were calculated from the data. Specifically, the ImageNet dataset by taking the means and standard deviations across a subset of images. # > # > We also don't *need* to do this. Neural networks are usually quite capable of figuring out appropriate data distributions (they'll calculate where the mean and standard deviations need to be on their own) but setting them at the start can help our networks achieve better performance quicker. # # Let's compose a series of `torchvision.transforms` to perform the above steps. # - # Create a transforms pipeline manually (required for torchvision < 0.13) manual_transforms = transforms.Compose([ transforms.Resize((224, 224)), # 1. Reshape all images to 224x224 (though some models may require different sizes) transforms.ToTensor(), # 2. Turn image values to between 0 & 1 transforms.Normalize(mean=[0.485, 0.456, 0.406], # 3. A mean of [0.485, 0.456, 0.406] (across each colour channel) std=[0.229, 0.224, 0.225]) # 4. A standard deviation of [0.229, 0.224, 0.225] (across each colour channel), ]) # Wonderful! # # Now we've got a **manually created series of transforms** ready to prepare our images, let's create training and testing DataLoaders. # # We can create these using the `create_dataloaders` function from the [`data_setup.py`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/going_modular/going_modular/data_setup.py) script we created in [05. PyTorch Going Modular Part 2](https://www.learnpytorch.io/05_pytorch_going_modular/#2-create-datasets-and-dataloaders-data_setuppy). # # We'll set `batch_size=32` so our model see's mini-batches of 32 samples at a time. # # And we can transform our images using the transform pipeline we created above by setting `transform=simple_transform`. # # > **Note:** I've included this manual creation of transforms in this notebook because you may come across resources that use this style. It's also important to note that because these transforms are manually created, they're also infinitely customizable. 
So if you wanted to included data augmentation techniques in your transforms pipeline, you could. # + # Create training and testing DataLoaders as well as get a list of class names train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(train_dir=train_dir, test_dir=test_dir, transform=manual_transforms, # resize, convert images to between 0 & 1 and normalize them batch_size=32) # set mini-batch size to 32 train_dataloader, test_dataloader, class_names # - # ### 2.2 Creating a transform for `torchvision.models` (auto creation) # # As previously stated, when using a pretrained model, it's important that **your custom data going into the model is prepared in the same way as the original training data that went into the model**. # # Above we saw how to manually create a transform for a pretrained model. # # But as of `torchvision` v0.13+, an automatic transform creation feature has been added. # # When you setup a model from `torchvision.models` and select the pretrained model weights you'd like to use, for example, say we'd like to use: # # ```python # weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT # ``` # # Where, # * `EfficientNet_B0_Weights` is the model architecture weights we'd like to use (there are many differnt model architecture options in `torchvision.models`). # * `DEFAULT` means the *best available* weights (the best performance in ImageNet). # * **Note:** Depending on the model architecture you choose, you may also see other options such as `IMAGENET_V1` and `IMAGENET_V2` where generally the higher version number the better. Though if you want the best available, `DEFAULT` is the easiest option. See the [`torchvision.models` documentation](https://pytorch.org/vision/main/models.html) for more. # # Let's try it out. # Get a set of pretrained model weights weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT # .DEFAULT = best available weights from pretraining on ImageNet weights # And now to access the transforms assosciated with our `weights`, we can use the `transforms()` method. # # This is essentially saying "get the data transforms that were used to train the `EfficientNet_B0_Weights` on ImageNet". # Get the transforms used to create our pretrained weights auto_transforms = weights.transforms() auto_transforms # Notice how `auto_transforms` is very similar to `manual_transforms`, the only difference is that `auto_transforms` came with the model architecture we chose, where as we had to create `manual_transforms` by hand. # # The benefit of automatically creating a transform through `weights.transforms()` is that you ensure you're using the same data transformation as the pretrained model used when it was trained. # # However, the tradeoff of using automatically created transforms is a lack of customization. # # We can use `auto_transforms` to create DataLoaders with `create_dataloaders()` just as before. # + # Create training and testing DataLoaders as well as get a list of class names train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(train_dir=train_dir, test_dir=test_dir, transform=auto_transforms, # perform same data transforms on our own data as the pretrained model batch_size=32) # set mini-batch size to 32 train_dataloader, test_dataloader, class_names # - # ## 3. Getting a pretrained model # # Alright, here comes the fun part! # # Over the past few notebooks we've been building PyTorch neural networks from scratch. 
# # And while that's a good skill to have, our models haven't been performing as well as we'd like. # # That's where **transfer learning** comes in. # # The whole idea of transfer learning is to **take an already well-performing model on a problem-space similar to yours and then customising it to your use case**. # # Since we're working on a computer vision problem (image classification with FoodVision Mini), we can find pretrained classification models in [`torchvision.models`](https://pytorch.org/vision/stable/models.html#classification). # # Exploring the documentation, you'll find plenty of common computer vision architecture backbones such as: # # | **Architecuture backbone** | **Code** | # | ----- | ----- | # | [ResNet](https://arxiv.org/abs/1512.03385)'s | `torchvision.models.resnet18()`, `torchvision.models.resnet50()`... | # | [VGG](https://arxiv.org/abs/1409.1556) (similar to what we used for TinyVGG) | `torchvision.models.vgg16()` | # | [EfficientNet](https://arxiv.org/abs/1905.11946)'s | `torchvision.models.efficientnet_b0()`, `torchvision.models.efficientnet_b1()`... | # | [VisionTransformer](https://arxiv.org/abs/2010.11929) (ViT's)| `torchvision.models.vit_b_16()`, `torchvision.models.vit_b_32()`... | # | [ConvNeXt](https://arxiv.org/abs/2201.03545) | `torchvision.models.convnext_tiny()`, `torchvision.models.convnext_small()`... | # | More available in `torchvision.models` | `torchvision.models...` | # ### 3.1 Which pretrained model should you use? # # It depends on your problem/the device you're working with. # # Generally, the higher number in the model name (e.g. `efficientnet_b0()` -> `efficientnet_b1()` -> `efficientnet_b7()`) means *better performance* but a *larger* model. # # You might think better performance is *always better*, right? # # That's true but **some better performing models are too big for some devices**. # # For example, say you'd like to run your model on a mobile-device, you'll have to take into account the limited compute resources on the device, thus you'd be looking for a smaller model. # # But if you've got unlimited compute power, as [*The Bitter Lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) states, you'd likely take the biggest, most compute hungry model you can. # # Understanding this **performance vs. speed vs. size tradeoff** will come with time and practice. # # For me, I've found a nice balance in the `efficientnet_bX` models. # # As of May 2022, [Nutrify](https://nutrify.app) (the machine learning powered app I'm working on) is powered by an `efficientnet_b0`. # # [Comma.ai](https://comma.ai/) (a company that makes open source self-driving car software) [uses an `efficientnet_b2`](https://geohot.github.io/blog/jekyll/update/2021/10/29/an-architecture-for-life.html) to learn a representation of the road. # # > **Note:** Even though we're using `efficientnet_bX`, it's important not to get too attached to any one architecture, as they are always changing as new research gets released. Best to experiment, experiment, experiment and see what works for your problem. # ### 3.2 Setting up a pretrained model # # The pretrained model we're going to be using is [`torchvision.models.efficientnet_b0()`](https://pytorch.org/vision/stable/generated/torchvision.models.efficientnet_b0.html#torchvision.models.efficientnet_b0). # # The architecture is from the paper *[EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)*. 
# # efficienet_b0 from PyTorch torchvision feature extraction model # # *Example of what we're going to create, a pretrained [`EfficientNet_B0` model](https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html) from `torchvision.models` with the output layer adjusted for our use case of classifying pizza, steak and sushi images.* # # We can setup the `EfficientNet_B0` pretrained ImageNet weights using the same code as we used to create the transforms. # # # ```python # weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT # .DEFAULT = best available weights for ImageNet # ``` # # This means the model has already been trained on millions of images and has a good base representation of image data. # # The PyTorch version of this pretrained model is capable of achieving ~77.7% accuracy across ImageNet's 1000 classes. # # We'll also send it to the target device. # + # OLD: Setup the model with pretrained weights and send it to the target device (this was prior to torchvision v0.13) # model = torchvision.models.efficientnet_b0(pretrained=True).to(device) # OLD method (with pretrained=True) # NEW: Setup the model with pretrained weights and send it to the target device (torchvision v0.13+) weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT # .DEFAULT = best available weights model = torchvision.models.efficientnet_b0(weights=weights).to(device) #model # uncomment to output (it's very long) # - # > **Note:** In previous versions of `torchvision`, you'd create a prertained model with code like: # > # > `model = torchvision.models.efficientnet_b0(pretrained=True).to(device)` # > # > However, running this using `torchvision` v0.13+ will result in errors such as the following: # > # > `UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.` # > # > And... # > # > `UserWarning: Arguments other than a weight enum or None for weights are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=EfficientNet_B0_Weights.IMAGENET1K_V1. You can also use weights=EfficientNet_B0_Weights.DEFAULT to get the most up-to-date weights.` # If we print the model, we get something similar to the following: # # output of printing the efficientnet_b0 model from torchvision.models # # Lots and lots and lots of layers. # # This is one of the benefits of transfer learning, taking an existing model, that's been crafted by some of the best engineers in the world and applying to your own problem. # # Our `efficientnet_b0` comes in three main parts: # 1. `features` - A collection of convolutional layers and other various activation layers to learn a base representation of vision data (this base representation/collection of layers is often referred to as **features** or **feature extractor**, "the base layers of the model learn the different **features** of images"). # 2. `avgpool` - Takes the average of the output of the `features` layer(s) and turns it into a **feature vector**. # 3. `classifier` - Turns the **feature vector** into a vector with the same dimensionality as the number of required output classes (since `efficientnet_b0` is pretrained on ImageNet and because ImageNet has 1000 classes, `out_features=1000` is the default). # ### 3.3 Getting a summary of our model with `torchinfo.summary()` # # To learn more about our model, let's use `torchinfo`'s [`summary()` method](https://github.com/TylerYep/torchinfo#documentation). 
# # To do so, we'll pass in: # * `model` - the model we'd like to get a summary of. # * `input_size` - the shape of the data we'd like to pass to our model, for the case of `efficientnet_b0`, the input size is `(batch_size, 3, 224, 224)`, though [other variants of `efficientnet_bX` have different input sizes](https://github.com/pytorch/vision/blob/d2bfd639e46e1c5dc3c177f889dc7750c8d137c7/references/classification/train.py#L92-L93). # * **Note:** Many modern models can handle input images of varying sizes thanks to [`torch.nn.AdaptiveAvgPool2d()`](https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html), this layer adaptively adjusts the `output_size` of a given input as required. You can try this out by passing different size input images to `summary()` or your models. # * `col_names` - the various information columns we'd like to see about our model. # * `col_width` - how wide the columns should be for the summary. # * `row_settings` - what features to show in a row. # Print a summary using torchinfo (uncomment for actual output) summary(model=model, input_size=(32, 3, 224, 224), # make sure this is "input_size", not "input_shape" # col_names=["input_size"], # uncomment for smaller output col_names=["input_size", "output_size", "num_params", "trainable"], col_width=20, row_settings=["var_names"] ) # output of torchinfo.summary() when passed our model with all layers as trainable # # Woah! # # Now that's a big model! # # From the output of the summary, we can see all of the various input and output shape changes as our image data goes through the model. # # And there are a whole bunch more total parameters (pretrained weights) to recognize different patterns in our data. # # For reference, our model from previous sections, **TinyVGG had 8,083 parameters vs. 5,288,548 parameters for `efficientnet_b0`, an increase of ~654x**! # # What do you think, will this mean better performance? # ### 3.4 Freezing the base model and changing the output layer to suit our needs # # The process of transfer learning usually goes: freeze some base layers of a pretrained model (typically the `features` section) and then adjust the output layers (also called head/classifier layers) to suit your needs. # # changing the efficientnet classifier head to a custom number of outputs # # *You can customise the outputs of a pretrained model by changing the output layer(s) to suit your problem. The original `torchvision.models.efficientnet_b0()` comes with `out_features=1000` because there are 1000 classes in ImageNet, the dataset it was trained on. However, for our problem, classifying images of pizza, steak and sushi we only need `out_features=3`.* # # Let's freeze all of the layers/parameters in the `features` section of our `efficientnet_b0` model. # # > **Note:** To *freeze* layers means to keep them how they are during training. For example, if your model has pretrained layers, to *freeze* them would be to say, "don't change any of the patterns in these layers during training, keep them how they are." In essence, we'd like to keep the pretrained weights/patterns our model has learned from ImageNet as a backbone and then only change the output layers. # # We can freeze all of the layers/parameters in the `features` section by setting the attribute `requires_grad=False`. # # For parameters with `requires_grad=False`, PyTorch doesn't track gradient updates and in turn, these parameters won't be changed by our optimizer during training. 
# # In essence, a parameter with `requires_grad=False` is "untrainable" or "frozen" in place. # Freeze all base layers in the "features" section of the model (the feature extractor) by setting requires_grad=False for param in model.features.parameters(): param.requires_grad = False # Feature extractor layers frozen! # # Let's now adjust the output layer or the `classifier` portion of our pretrained model to our needs. # # Right now our pretrained model has `out_features=1000` because there are 1000 classes in ImageNet. # # However, we don't have 1000 classes, we only have three, pizza, steak and sushi. # # We can change the `classifier` portion of our model by creating a new series of layers. # # The current `classifier` consists of: # # ``` # (classifier): Sequential( # (0): Dropout(p=0.2, inplace=True) # (1): Linear(in_features=1280, out_features=1000, bias=True) # ``` # # We'll keep the `Dropout` layer the same using [`torch.nn.Dropout(p=0.2, inplace=True)`](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html). # # > **Note:** [Dropout layers](https://developers.google.com/machine-learning/glossary#dropout_regularization) randomly remove connections between two neural network layers with a probability of `p`. For example, if `p=0.2`, 20% of connections between neural network layers will be removed at random each pass. This practice is meant to help regularize (prevent overfitting) a model by making sure the connections that remain learn features to compensate for the removal of the other connections (hopefully these remaining features are *more general*). # # And we'll keep `in_features=1280` for our `Linear` output layer but we'll change the `out_features` value to the length of our `class_names` (`len(['pizza', 'steak', 'sushi']) = 3`). # # Our new `classifier` layer should be on the same device as our `model`. # + # Set the manual seeds torch.manual_seed(42) torch.cuda.manual_seed(42) # Get the length of class_names (one output unit for each class) output_shape = len(class_names) # Recreate the classifier layer and seed it to the target device model.classifier = torch.nn.Sequential( torch.nn.Dropout(p=0.2, inplace=True), torch.nn.Linear(in_features=1280, out_features=output_shape, # same number of output units as our number of classes bias=True)).to(device) # - # Nice! # # Output layer updated, let's get another summary of our model and see what's changed. # # Do a summary *after* freezing the features and changing the output classifier layer (uncomment for actual output) summary(model, input_size=(32, 3, 224, 224), # make sure this is "input_size", not "input_shape" (batch_size, color_channels, height, width) verbose=0, col_names=["input_size", "output_size", "num_params", "trainable"], col_width=20, row_settings=["var_names"] ) # output of torchinfo.summary() after freezing multiple layers in our model and changing the classifier head # # Ho, ho! There's a fair few changes here! # # Let's go through them: # * **Trainable column** - You'll see that many of the base layers (the ones in the `features` portion) have their Trainable value as `False`. This is because we set their attribute `requires_grad=False`. Unless we change this, these layers won't be updated during furture training. # * **Output shape of `classifier`** - The `classifier` portion of the model now has an Output Shape value of `[32, 3]` instead of `[32, 1000]`. It's Trainable value is also `True`. This means its parameters will be updated during training. 
In essence, we're using the `features` portion to feed our `classifier` portion a base representation of an image and then our `classifier` layer is going to learn how that base representation aligns with our problem.
# * **Fewer trainable parameters** - Previously there were 5,288,548 trainable parameters. But since we froze many of the layers of the model and only left the `classifier` as trainable, there are now only 3,843 trainable parameters (even fewer than our TinyVGG model). Though there are also 4,007,548 non-trainable parameters, these will create a base representation of our input images to feed into our `classifier` layer.
#
# > **Note:** The more trainable parameters a model has, the more compute power it needs and the longer it takes to train. Freezing the base layers of our model and leaving it with fewer trainable parameters means our model should train quite quickly. This is one huge benefit of transfer learning, taking the already learned parameters of a model trained on a problem similar to yours and only tweaking the outputs slightly to suit your problem.

# ## 4. Train model
#
# Now we've got a pretrained model that's semi-frozen and has a customised `classifier`, how about we see transfer learning in action?
#
# To begin training, let's create a loss function and an optimizer.
#
# Because we're still working with multi-class classification, we'll use `nn.CrossEntropyLoss()` for the loss function.
#
# And we'll stick with `torch.optim.Adam()` as our optimizer with `lr=0.001`.

# Define loss and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Wonderful!
#
# To train our model, we can use the `train()` function we defined in the [05. PyTorch Going Modular section 04](https://www.learnpytorch.io/05_pytorch_going_modular/#4-creating-train_step-and-test_step-functions-and-train-to-combine-them).
#
# The `train()` function is in the [`engine.py`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/going_modular/going_modular/engine.py) script inside the [`going_modular` directory](https://github.com/mrdbourke/pytorch-deep-learning/tree/main/going_modular/going_modular).
#
# Let's see how long it takes to train our model for 5 epochs.
#
# > **Note:** We're only going to be training the `classifier` parameters here, as all of the other parameters in our model have been frozen.

# +
# Set the random seeds
torch.manual_seed(42)
torch.cuda.manual_seed(42)

# Start the timer
from timeit import default_timer as timer
start_time = timer()

# Setup training and save the results
results = engine.train(model=model,
                       train_dataloader=train_dataloader,
                       test_dataloader=test_dataloader,
                       optimizer=optimizer,
                       loss_fn=loss_fn,
                       epochs=5,
                       device=device)

# End the timer and print out how long it took
end_time = timer()
print(f"[INFO] Total training time: {end_time-start_time:.3f} seconds")
# -

# Wow!
#
# Our model trained quite fast (~5 seconds on my local machine with an [NVIDIA TITAN RTX GPU](https://www.nvidia.com/en-au/deep-learning-ai/products/titan-rtx/)/about 15 seconds on Google Colab with an [NVIDIA P100 GPU](https://www.nvidia.com/en-au/data-center/tesla-p100/)).
#
# And it looks like it smashed our previous model results out of the park!
#
# With an `efficientnet_b0` backbone, our model achieves almost 90% accuracy on the test dataset, almost *double* what we were able to achieve with TinyVGG.
#
# Not bad for a model we downloaded with a few lines of code.

# ## 5. Evaluate model by plotting loss curves
#
# Our model looks like it's performing pretty well.
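# Before plotting anything, we can peek at the raw metrics directly. The sketch below assumes `results` is a dictionary of per-epoch metric lists (key names such as "train_loss" and "test_acc" are assumptions based on how `results` is later passed to `plot_loss_curves()`); adjust it if your version of `engine.train()` returns something different.

# Inspect the raw metrics stored in the results dictionary
for metric_name, metric_values in results.items():
    print(f"{metric_name}: {[round(float(value), 4) for value in metric_values]}")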
# # Let's plot it's loss curves to see what the training looks like over time. # # We can plot the loss curves using the function `plot_loss_curves()` we created in [04. PyTorch Custom Datasets section 7.8](https://www.learnpytorch.io/04_pytorch_custom_datasets/#78-plot-the-loss-curves-of-model-0). # # The function is stored in the [`helper_functions.py`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/helper_functions.py) script so we'll try to import it and download the script if we don't have it. # + # Get the plot_loss_curves() function from helper_functions.py, download the file if we don't have it try: from helper_functions import plot_loss_curves except: print("[INFO] Couldn't find helper_functions.py, downloading...") with open("helper_functions.py", "wb") as f: import requests request = requests.get("https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/helper_functions.py") f.write(request.content) from helper_functions import plot_loss_curves # Plot the loss curves of our model plot_loss_curves(results) # - # Those are some excellent looking loss curves! # # It looks like the loss for both datasets (train and test) is heading in the right direction. # # The same with the accuracy values, trending upwards. # # That goes to show the power of **transfer learning**. Using a pretrained model often leads to pretty good results with a small amount of data in less time. # # I wonder what would happen if you tried to train the model for longer? Or if we added more data? # # > **Question:** Looking at the loss curves, does our model look like it's overfitting or underfitting? Or perhaps neither? Hint: Check out notebook [04. PyTorch Custom Datasets part 8. What should an ideal loss curve look like?](https://www.learnpytorch.io/04_pytorch_custom_datasets/#8-what-should-an-ideal-loss-curve-look-like) for ideas. # ## 6. Make predictions on images from the test set # # It looks like our model performs well quantitatively but how about qualitatively? # # Let's find out by making some predictions with our model on images from the test set (these aren't seen during training) and plotting them. # # *Visualize, visualize, visualize!* # # One thing we'll have to remember is that for our model to make predictions on an image, the image has to be in *same* format as the images our model was trained on. # # This means we'll need to make sure our images have: # * **Same shape** - If our images are different shapes to what our model was trained on, we'll get shape errors. # * **Same datatype** - If our images are a different datatype (e.g. `torch.int8` vs. `torch.float32`) we'll get datatype errors. # * **Same device** - If our images are on a different device to our model, we'll get device errors. # * **Same transformations** - If our model is trained on images that have been transformed in certain way (e.g. normalized with a specific mean and standard deviation) and we try and make preidctions on images transformed in a different way, these predictions may be off. # # > **Note:** These requirements go for all kinds of data if you're trying to make predictions with a trained model. Data you'd like to predict on should be in the same format as your model was trained on. # # To do all of this, we'll create a function `pred_and_plot_image()` to: # # 1. Take in a trained model, a list of class names, a filepath to a target image, an image size, a transform and a target device. # 2. 
Open an image with [`PIL.Image.open()`](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.open). # 3. Create a transform for the image (this will default to the `manual_transforms` we created above or it could use a transform generated from `weights.transforms()`). # 4. Make sure the model is on the target device. # 5. Turn on model eval mode with `model.eval()` (this turns off layers like `nn.Dropout()`, so they aren't used for inference) and the inference mode context manager. # 6. Transform the target image with the transform made in step 3 and add an extra batch dimension with `torch.unsqueeze(dim=0)` so our input image has shape `[batch_size, color_channels, height, width]`. # 7. Make a prediction on the image by passing it to the model ensuring it's on the target device. # 8. Convert the model's output logits to prediction probabilities with `torch.softmax()`. # 9. Convert model's prediction probabilities to prediction labels with `torch.argmax()`. # 10. Plot the image with `matplotlib` and set the title to the prediction label from step 9 and prediction probability from step 8. # # > **Note:** This is a similar function to [04. PyTorch Custom Datasets section 11.3's](https://www.learnpytorch.io/04_pytorch_custom_datasets/#113-putting-custom-image-prediction-together-building-a-function) `pred_and_plot_image()` with a few tweaked steps. # + from typing import List, Tuple from PIL import Image # 1. Take in a trained model, class names, image path, image size, a transform and target device def pred_and_plot_image(model: torch.nn.Module, image_path: str, class_names: List[str], image_size: Tuple[int, int] = (224, 224), transform: torchvision.transforms = None, device: torch.device=device): # 2. Open image img = Image.open(image_path) # 3. Create transformation for image (if one doesn't exist) if transform is not None: image_transform = transform else: image_transform = transforms.Compose([ transforms.Resize(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) ### Predict on image ### # 4. Make sure the model is on the target device model.to(device) # 5. Turn on model evaluation mode and inference mode model.eval() with torch.inference_mode(): # 6. Transform and add an extra dimension to image (model requires samples in [batch_size, color_channels, height, width]) transformed_image = image_transform(img).unsqueeze(dim=0) # 7. Make a prediction on image with an extra dimension and send it to the target device target_image_pred = model(transformed_image.to(device)) # 8. Convert logits -> prediction probabilities (using torch.softmax() for multi-class classification) target_image_pred_probs = torch.softmax(target_image_pred, dim=1) # 9. Convert prediction probabilities -> prediction labels target_image_pred_label = torch.argmax(target_image_pred_probs, dim=1) # 10. Plot image with predicted label and probability plt.figure() plt.imshow(img) plt.title(f"Pred: {class_names[target_image_pred_label]} | Prob: {target_image_pred_probs.max():.3f}") plt.axis(False); # - # What a good looking function! # # Let's test it out by making predictions on a few random images from the test set. # # We can get a list of all the test image paths using `list(Path(test_dir).glob("*/*.jpg"))`, the stars in the `glob()` method say "any file matching this pattern", in other words, any file ending in `.jpg` (all of our images). 
# # And then we can randomly sample a number of these using Python's [`random.sample(population, k)`](https://docs.python.org/3/library/random.html#random.sample) where `population` is the sequence to sample and `k` is the number of samples to retrieve.

# +
# Get a random list of image paths from the test set
import random
num_images_to_plot = 3
test_image_path_list = list(Path(test_dir).glob("*/*.jpg")) # get a list of all image paths from the test data
test_image_path_sample = random.sample(population=test_image_path_list, # go through all of the test image paths
                                       k=num_images_to_plot) # randomly select 'k' image paths to pred and plot

# Make predictions on and plot the images
for image_path in test_image_path_sample:
    pred_and_plot_image(model=model,
                        image_path=image_path,
                        class_names=class_names,
                        # transform=weights.transforms(), # optionally pass in a specified transform from our pretrained model weights
                        image_size=(224, 224))
# -

# Woohoo!
#
# Those predictions look far better than the ones our TinyVGG model was previously making.

# ### 6.1 Making predictions on a custom image
#
# It looks like our model does well qualitatively on data from the test set.
#
# But how about on our own custom image?
#
# That's where the real fun of machine learning is!
#
# Predicting on your own custom data, outside of any training or test set.
#
# To test our model on a custom image, let's import the old faithful `pizza-dad.jpeg` image (an image of my dad eating pizza).
#
# We'll then pass it to the `pred_and_plot_image()` function we created above and see what happens.

# +
# Download custom image
import requests

# Setup custom image path
custom_image_path = data_path / "04-pizza-dad.jpeg"

# Download the image if it doesn't already exist
if not custom_image_path.is_file():
    with open(custom_image_path, "wb") as f:
        # When downloading from GitHub, need to use the "raw" file link
        request = requests.get("https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/04-pizza-dad.jpeg")
        print(f"Downloading {custom_image_path}...")
        f.write(request.content)
else:
    print(f"{custom_image_path} already exists, skipping download.")

# Predict on custom image
pred_and_plot_image(model=model,
                    image_path=custom_image_path,
                    class_names=class_names)
# -

# Two thumbs up!
#
# Looks like our model got it right again!
#
# But this time the prediction probability (`0.517`) is higher than the one from TinyVGG (`0.373`) in [04. PyTorch Custom Datasets section 11.3](https://www.learnpytorch.io/04_pytorch_custom_datasets/#113-putting-custom-image-prediction-together-building-a-function).
#
# This indicates our `efficientnet_b0` model is *more* confident in its prediction, whereas our TinyVGG model was on par with just guessing.

# ## Main takeaways
# * **Transfer learning** often allows you to get good results with a relatively small amount of custom data.
# * Knowing the power of transfer learning, it's a good idea to ask at the start of every problem, "does an existing well-performing model exist for my problem?"
# * When using a pretrained model, it's important that your custom data be formatted/preprocessed in the same way as the data the original model was trained on, otherwise you may get degraded performance.
# * The same goes for predicting on custom data, ensure your custom data is in the same format as the data your model was trained on.
# * There are [several different places to find pretrained models](https://www.learnpytorch.io/06_pytorch_transfer_learning/#where-to-find-pretrained-models) from the PyTorch domain libraries, HuggingFace Hub and libraries such as `timm` (PyTorch Image Models). # ## Exercises # # All of the exercises are focused on practicing the code above. # # You should be able to complete them by referencing each section or by following the resource(s) linked. # # All exercises should be completed using [device-agnostic code](https://pytorch.org/docs/stable/notes/cuda.html#device-agnostic-code). # # **Resources:** # * [Exercise template notebook for 06](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/extras/exercises/06_pytorch_transfer_learning_exercises.ipynb) # * [Example solutions notebook for 06](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/extras/solutions/06_pytorch_transfer_learning_exercise_solutions.ipynb) (try the exercises *before* looking at this) # * See a live [video walkthrough of the solutions on YouTube](https://youtu.be/ueLolShyFqs) (errors and all) # # 1. Make predictions on the entire test dataset and plot a confusion matrix for the results of our model compared to the truth labels. Check out [03. PyTorch Computer Vision section 10](https://www.learnpytorch.io/03_pytorch_computer_vision/#10-making-a-confusion-matrix-for-further-prediction-evaluation) for ideas. # 2. Get the "most wrong" of the predictions on the test dataset and plot the 5 "most wrong" images. You can do this by: # * Predicting across all of the test dataset, storing the labels and predicted probabilities. # * Sort the predictions by *wrong prediction* and then *descending predicted probabilities*, this will give you the wrong predictions with the *highest* prediction probabilities, in other words, the "most wrong". # * Plot the top 5 "most wrong" images, why do you think the model got these wrong? # 3. Predict on your own image of pizza/steak/sushi - how does the model go? What happens if you predict on an image that isn't pizza/steak/sushi? # 4. Train the model from section 4 above for longer (10 epochs should do), what happens to the performance? # 5. Train the model from section 4 above with more data, say 20% of the images from Food101 of Pizza, Steak and Sushi images. # * You can find the [20% Pizza, Steak, Sushi dataset](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi_20_percent.zip) on the course GitHub. It was created with the notebook [`extras/04_custom_data_creation.ipynb`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/extras/04_custom_data_creation.ipynb). # 6. Try a different model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) on the Pizza, Steak, Sushi data, how does this model perform? # * You'll have to change the size of the classifier layer to suit our problem. # * You may want to try an EfficientNet with a higher number than our B0, perhaps `torchvision.models.efficientnet_b2()`? # # ## Extra-curriculum # * Look up what "model fine-tuning" is and spend 30-minutes researching different methods to perform it with PyTorch. How would we change our code to fine-tine? Tip: fine-tuning usually works best if you have *lots* of custom data, where as, feature extraction is typically better if you have less custom data. 
# * Check out the new/upcoming [PyTorch multi-weights API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) (still in beta at time of writing, May 2022), it's a new way to perform transfer learning in PyTorch. What changes to our code would need to made to use the new API? # * Try to create your own classifier on two classes of images, for example, you could collect 10 photos of your dog and your friends dog and train a model to classify the two dogs. This would be a good way to practice creating a dataset as well as building a model on that dataset. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="qOD71vDmq6dm" # ##PCA # # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="ytf7HUmkxkK9" outputId="c5a72b31-1da0-41b6-fd7e-d3fcbdd8957e" import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'] dataset = pd.read_csv('https://raw.githubusercontent.com/niranjangirhe/dataset/main/housing.csv', header=None, delimiter=r"\s+", names=column_names) dataset.head() # + colab={"base_uri": "https://localhost:8080/"} id="O3-RyjfAxZpv" outputId="d73ccec0-a389-421b-933e-9507f8dc8251" print(dataset) # + id="7SE2WOHH0JVu" class PCA: # I will denote components as features here # Though its not mathematically accurate but machine learning it works def __init__(self,number_of_important_features=2): # number of specified features # Default being passed as 2 self.number_of_important_features=number_of_important_features # Best possible features self.features=None self._mean=None def fit(self,X): # placing mean to as origin of axis # axis =0 is mean of rows along the column direction self._mean=np.mean(X,axis=0) X=X-self._mean # Co-variance of N,D -->DxD # Also called Autocorrelation as both are X's covariance=np.dot(X.T,X)/(X.shape[0]-1) print(covariance.shape) # Eigenvalues,eigenvectors detail discussion below # Eigenvector is the vector which doesnot chnage it span(simply, direction) after matrix transformation # So, why eigen importance. Best intuitive way to say # for 3D object, the eigenvector represents its axis of rotation(For earth eigenvector is the axis of rotation) # Formula A(matrix).v(eigenvector)=lambda(eigenvalue).v(eigenvector) # So, Intuitively above formula means, matrix transformation of eigenvector is the eigenvector scaled by eigenvalue # Here we are finding the eigenvector and eigenvalue of the covariance matrix # how to solve is (A-lambda.I(identity matrix))-v=0, As v is non-zero --> det(A-lambda.I)=0(area under transformation=0) # Here lambda is the knob by tweaking it, we change the det = 0 # We can do all this by only one line of code, isnt it awesome!!! 
# There is very powerful application of eigen's i.e eigenbasis-->diagonalisation() # A gift for the patience # you can say this to your gf or bf --> "My love for you is like eigenvector" eigenvalues,eigenvector=np.linalg.eig(covariance) print("eigenvalues-->",eigenvalues.shape) print("eigenvalues \n",eigenvalues) print("eigenvector-->",eigenvector.shape) print("eigenvector \n",eigenvector) #sort the eigenvalues from highest to lowest # If we didnt transpose, then applying indexs will require more steps and computation eigenvector=eigenvector.T print("eigenvector.T-->",eigenvector.shape) print("eigenvector after Transpose\n",eigenvector) indexs=np.argsort(eigenvalues)[::-1] #taking those indices and storing in eigenvalues and eigenvectors accordingly eigenvector=eigenvector[indexs] print("eigenvector-indexs-->",eigenvector.shape) print("eigenvector after indexes \n",eigenvector) eigenvalues=eigenvalues[indexs] print("eigenvalues-indexs-->",eigenvalues.shape) print("eigenvalues \n",eigenvalues) ## This below code snippet is for seeing how to determine which feature to be calculated total = sum(eigenvalues) variance_of_each_feature = [(i / total)*100 for i in eigenvalues] print("variance of each feature-->",variance_of_each_feature) # Now taking only number of specified componenets self.features=eigenvector[:self.number_of_important_features] print("self.features",self.features.shape) # So, now the we have chosen most significant features componenet def apply(self,X): # Here we project the data onto Principal component line X=X-self._mean # Check the dimensionality with (.shape) to confirm for yourselves # Here X-->(N,4);self.features-->2,4 # (X,self.features.T)-->(N,4)x(4,2)==(N,2) i.e N samples with 2 feature vector return np.dot(X,self.features.T) # + id="gXvkSM710uJE" X = dataset.iloc[:,:] # + colab={"base_uri": "https://localhost:8080/"} id="hEwzPvuw0xij" outputId="31bea069-5f87-4410-a331-82cb58fc8088" print(X) # + colab={"base_uri": "https://localhost:8080/"} id="oL5bx5vH0e5n" outputId="dbd27b41-af5c-447e-e359-1a2f45d002fb" from sklearn.preprocessing import StandardScaler X = StandardScaler().fit_transform(X) print(X[0:5]) # + id="5xvThC3z2JQF" y = dataset['MEDV'] # + id="TNUZZ0NRknk_" colab={"base_uri": "https://localhost:8080/"} outputId="a604d52d-9cba-43c1-9762-608edc02415e" from sklearn.feature_selection import SequentialFeatureSelector as sfs #I am going to use RandomForestRegressor algoritham as an estimator. Your can select other regression alogritham as well. from sklearn.ensemble import RandomForestRegressor #k_features=10 (It will get top 10 features best suited for prediction) #forward=True (Forward feature selection model) #verbose=2 (It will show details output as shown below.) #cv=5 (Kfold cross valiation: it will split the training set in 5 set and 4 will be using for training the model and 1 will using as validation) #n_jobs=-1 (Number of cores it will use for execution.-1 means it will use all the cores of CPU for execution.) 
#scoring='r2'(R-squared is a statistical measure of how close the data are to the fitted regression line) model_forward=sfs(RandomForestRegressor(),n_features_to_select=7, direction = 'forward') model_forward.fit(X,y) model_backward=sfs(RandomForestRegressor(),n_features_to_select=7, direction = 'backward') model_backward.fit(X,y) # + colab={"base_uri": "https://localhost:8080/"} id="LhyDu2nLncJy" outputId="5c486efd-2da7-48d1-bf1e-6ff01ab813c1" Wrapper_support_forward = model_forward.get_support() Wrapper_support_backward = model_backward.get_support() print(Wrapper_support_forward) print(Wrapper_support_backward) print(column_names) # + colab={"base_uri": "https://localhost:8080/"} id="Yu-zzLhBn1RE" outputId="fcf6316f-75ae-495b-dde0-9b2205258d7a" print("This are the selected features using Wrapper with Random FOrest and forward approach") for i in range(14): if(Wrapper_support_forward[i] == True): print(column_names[i]) print("This are the selected features using Wrapper with Random FOrest and backward approach") for i in range(14): if(Wrapper_support_backward[i] == True): print(column_names[i]) # + [markdown] id="GNLKb1XSkiAx" # Embeded with Decision Tree # + id="YtgHK3jCgzQT" from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.preprocessing import MinMaxScaler X_norm = MinMaxScaler().fit_transform(X) # + colab={"base_uri": "https://localhost:8080/"} id="ZoobNc_ecOHi" outputId="c45353af-49b8-4baa-aa8c-f61b250551a2" from sklearn.feature_selection import SelectFromModel from sklearn.tree import DecisionTreeRegressor embeded_dt_selector = SelectFromModel(DecisionTreeRegressor(), threshold='1.25*median') embeded_dt_selector.fit(X_norm, y) # + colab={"base_uri": "https://localhost:8080/"} id="iAKs-hyecYAp" outputId="7fe5c9fe-021a-4ceb-db27-6be7a31e369f" embeded_dt_support = embeded_dt_selector.get_support() print(embeded_dt_support) print(column_names) #embeded_rf_feature = X.iloc[:,embeded_rf_support].columns.tolist() #print(str(len(embeded_rf_feature)), 'selected features') # + colab={"base_uri": "https://localhost:8080/"} id="vs2WD6HYj8V2" outputId="7ff3b412-9fee-4090-8160-4c26a608cd76" print("This are the selected features using Embeded with Decisiomn tree") for i in range(14): if(embeded_dt_support[i] == True): print(column_names[i]) # + [markdown] id="-fGMbNAWkf83" # PCA # + id="DmIFrewC06GI" pca=PCA(2) # + colab={"base_uri": "https://localhost:8080/"} id="Hn_fjAa_0_34" outputId="9c04aa9e-3eff-49d8-c6c0-cf3951e8278d" pca.fit(X) # + id="qMXAYc2B179s" projected=pca.apply(X) # + id="kBY0pTNE2Azu" x0=projected[:,0] x1=projected[:,1] # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="QbgOZjCn2VjL" outputId="fd19649f-11d7-485b-ccf3-1476c74bda81" plt.scatter(x0,x1,c=y) # + id="vF4W38KbDPbi" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="pmwtuuOZ-40s" # # System of Linear Equations # $_{\text{© | 2021 | Computational Methods for Computer Engineers}}$ # # Another fundamental engineering computational technique is solving a system of linear equations. Linear systems are very much prevalent in engineering and computational modeling, and solving them is very crucial in analyzing and operating these systems. 
Solving systems of linear equations are useful in several practices such as optimization. Some non-engineering applications include finance, computer programming, geometry, etc. The coverage of the module is as follows: # * Review of System of Linear Equations # * Gaussian Elimintation # * Gauss-Jordan Elimination # * Cramer's Rule # * Cholesky's LU Decomposition # * Jacobi Method # * Gauss-Siedel Method # * Successive over-relaxation (SOR) Method # * Python Functions for Roots # * Applications of Root-finding # + [markdown] id="k_C7E7qJAy7K" # ## 3.1 Looking back at Systems of Linear Equations # # A linear equation is described as an equation that has a scalar gradient. Or simply, it is an equation that produces a line when visualized. Formally,a linear equation in $n$ (unknown) variables $x_1, ... , x_n$ has the form [[1]](https://mandal.ku.edu/math290/FTwelveM290/chpterOne.pdf): # $$a_1x_1 + a_2x_2 + ... + a_nx_n = b # \\ _{\text{(Eq. 3.1.1)}}$$ # Whereas, $a_1, a_2,\dots,a_n,b$ are real where specifically $b$ is the constant term and $a$ is the coefficient of $x_i$. # # And such, a system of these equations tell us that they are somewhat related or ar relevant for a specific scenario, event, or environment. Thus formally, $y$ a System of Linear Equations in n variables $x_1, x_2, ... , x_n$ we mean a collection of linear equations in these variables. A system of $m$ linear equations in these $n$ variables can be written as: # # $$\left\{ # \begin{array}\\ # a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1\\ # a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2\\ # a_{31}x_1 + a_{32}x_2 + ... + a_{3n}x_n = b_3\\ # \vdots \\ # a_{m1}x_1 + a_{m2}x_2 + ... + a_{mn}x_n = b_m\\ # \end{array} # \right. # \\ _{\text{(Eq. 3.1.2)}}$$ # Whereas $a_{ij}$ and $b_i$ are all real numbers. # # If you can recall form Linear Algebra, Eq. 3.1.2 can be represented in augmented matrix form as: # $$\left[\begin{array}{cccc|c} # a_{11}&a_{12}&\dots&a_{1n}&b_1\\ # a_{21}&a_{22}&\dots&a_{2n}&b_2\\ # a_{31}&a_{32}&\dots&a_{3n}&b_3\\ # \vdots&\vdots&\ddots&\vdots&\vdots\\ # a_{m1}&a_{m2}&\dots&a_{mn}&b_m # \end{array}\right] \\ _{\text{(Eq. 3.1.3)}}$$ # Or in linear combination matrix form: # $$\begin{bmatrix} # a_{11}&a_{12}&\dots&a_{1n}\\ # a_{21}&a_{22}&\dots&a_{2n}\\ # a_{31}&a_{32}&\dots&a_{3n}\\ # \vdots&\vdots&\ddots&\vdots\\ # a_{m1}&a_{m2}&\dots&a_{mn} # \end{bmatrix} \cdot # \begin{bmatrix}x_1\\x_2\\x_3\\\vdots\\x_n\end{bmatrix}= # \begin{bmatrix}b_1\\b_2\\b_3\\\vdots\\b_m\end{bmatrix} # \\ _{\text{(Eq. 3.1.4)}}$$ # + [markdown] id="Eb5N8gm0Jk4E" # ### **Classifications of Linear Systems** # # Given a linear system in $n$ variables, precisely on the the following three is true [1]: # * The system has **exactly one** solution (consistent system) — Geometrically, solution given by precisely the point where the graphs # (two lines) of these two equations meet. # * The system has **infinitely many** solutions (consistent system). # * The system has **NO** solution (inconsistent system) — Geometrically, these two equations in the system represent two parallel lines (they never meet). # # Another classification exists for linear systems wherein: # $$b_1=b_2=\dots=b_m=0 \\ _{\text{(Eq. 3.1.5)}}$$ # Which signifies the linear system is a **homogeneous linear system**. 
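# To make these classifications concrete, here is a small illustration (a minimal sketch; `classify_system` is a helper introduced here and the two toy systems are made up purely for demonstration, they are not part of the module's test systems below). By the Rouché–Capelli theorem, comparing the rank of the coefficient matrix $A$ with the rank of the augmented matrix $[A|b]$ tells us which of the three cases we are in.

# +
import numpy as np

def classify_system(A, b):
    # Compare rank(A) with rank([A|b]) to classify the linear system
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack((A, b)))
    if rank_A < rank_Ab:
        return "no solution (inconsistent)"
    elif rank_A == A.shape[1]:
        return "exactly one solution (consistent)"
    else:
        return "infinitely many solutions (consistent)"

# Consistent system with a unique solution: x + y = 2, x - y = 0
print(classify_system(np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([2.0, 0.0])))

# Inconsistent system (two parallel lines): x + y = 2, x + y = 3
print(classify_system(np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([2.0, 3.0])))
# -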
# # + [markdown] id="7Y3z7asyND8s" # ### **Computational Methods** # There are two types of computational methods in solving linear systems: # # * **Elimination Methods** - which eliminates parts of the coefficients matrix by means of elementary row and column operations in order to reach a form that yields the solution by simple calculations. Also known as direct method. # * **Iterative Methods** - where equations are rearranged in a way that enables recursive calculations of values until convergence. # + [markdown] id="VyH6BOT2K_aM" # Now that you have a review on linear systems, we can now get started with solving them using computational methods. # + id="jAnTIMT-JzS3" import numpy as np test_systems = { 'test1': { 'X': np.array([ [2, 7, -1, 3, 1], [2, 3, 4, 1, 7], [6, 2, -3, 2, -1], [2, 1, 2, -1, 2], [3, 4, 1, -2, 1]],float), 'b': np.array([5, 7, 2, 3, 4], float), 'ans': np.array([0.444, 0.556, 0.667, 0.222, 0.222], float) }, 'test2': { 'X': np.array([ [0, 7, -1, 3, 1], [2, 3, 4, 1, 7], [6, 2, 0, 2, -1], [2, 1, 2, 0, 2], [3, 4, 1, -2, 1]],float), 'b': np.array([5, 7, 2, 3, 4], float), 'ans': np.array([0.021705, 0.792248, 1.051163, 0.158140, 0.031008], float) }, 'test3': { 'X': np.array([ [2, -1, 5, 1], [3, 2, 2, -6], [1, 3, 3, -1], [5, -2, -3, 3]],float), 'b': np.array([5, 7, 2, 3, 4], float), 'ans': np.array([2.0, -12.0, -4.0, 1.0], float) }, 'test4': { 'X': np.array([ [8.00, 3.22, 0.80, 0.00, 4.10], [3.22, 7.76, 2.33, 1.91, -1.03], [0.80, 2.33, 5.25, 1.00, 3.02], [0.00, 1.91, 1.00, 7.50, 1.03], [4.10, -1.03, 3.02, 1.03, 6.44]],float), 'b': np.array([9.45, -12.2, 7.78, -8.10, 10.0], float), 'ans': np.array([2.0, -12.0, -4.0, 1.0], float) }} # + [markdown] id="Q564wD4cJ26q" # ## 3.2 Gaussian Elimination # The first method we will learn is the Gaussian Elimination. The goal of the Gaussian Elimination method is to reduce the linear system to its row-echelon form using basic or elementary row operations. Gaussian Elimination consists two stages: # * Elimination of the elements under the main diagonal of the coefficients matrix; and # * Back-substitution of the solved unknowns until all system is solved. # # + id="iBtdBau1bIIR" X = test_systems['test2']['X'] b = test_systems['test2']['b'] n = b.size v = np.empty(n) # + [markdown] id="t2HlC1L_Z0YG" # ### Elimination # 1. Loop of $k$ from $1$ to $n-1$ to index the fixed rows and eliminated columns # 2. Loop of $i$ from $k+1$ to $n$ to index the subtracted rows # 3. Loop of $j$ from $k$ to $n$ to index the columns for element subtraction the elimination statement using Eq. 3.2.1 and 3.2.2. # $$x_{ij} = x_{kj} - \frac{x_{kk}}{x_{ik}}x_{ij} \\ _{\text{(Eq. 3.2.1)}}$$ # $$b_i = b_k - \frac{x_{kk}}{x_{ik}}b_i \\ _{\text{(Eq. 3.2.2)}}$$ # + id="9rJsipy8L9It" for k in range(n-1): if X[k, k] == 0: for j in range (n): X[k,j], X[k+1, j] = X[k+1, j], X[k,j] b[k], b[k+1] = b[k+1], b[k] for i in range(k+1, n): if X[i, k] == 0: continue fctr = X[k, k] / X[i, k] b[i] = b[k] - fctr*b[i] for j in range(k, n): X[i, j] = X[k, j] - fctr*X[i, j] # + [markdown] id="FUA2X7lraMWJ" # ### Back-substitution # 1. Starting with last row, let $v_n = \frac{b_n}{a_{nn}}$. # 2. For rows from $i = n-1$ to $1$: substitute values of obtained for $v_i+1$ and compute the $v_i$ values of the current row. This can be expressed in the formula: # $$v_i = \frac{b_i - \sum^n_{j=i+1}{a_{ij}v_j}}{a_{ii}} \\ _{\text{(Eq. 
3.2.3)}}$$ # + id="J7P1kov-ZzFp" colab={"base_uri": "https://localhost:8080/"} outputId="769c30ae-1d97-4eaf-efb2-f9a8278d1766" v[n-1] = b[n-1] / X[n-1, n-1] for i in range(n-2, -1, -1): terms = 0 for j in range(i+1, n): terms += X[i, j]*v[j] v[i] = (b[i] - terms)/X[i, i] print('The solution of the system:') print(v) # + [markdown] id="JKCS8rbDSE8f" # ### Activity 3.2 # Create a pseudocode and flowchart for the Gaussian Elimination method. You may use Eq. 3.2.1 to 3.2.3 for computational reference. # + [markdown] id="sF-ZVORJL9k8" # ## 3.3 Gauss-Jordan Elimination # The Gauss-Jordan Elimination is quite similar to the Gauss elimination method. However, it differs in several aspects: # * all elements above and below the main diagonal # are eliminated to transform the coefficient matrix to the identity matrix $I_n$ and, accordingly, transform the constant terms vector to the solution vector. # * for each pivot row, elimination operations are performed for all other rows above and below the pivot row. # * no back-substitution is applied since at the end of the elimination, # the constant term vector will be already transformed to the solution vector. # # The following are the steps in performing Gauss-Jordan Elimination: # 1. Performing partial pivoting by rearranging the rows of the system at each transformation to guarantee non-zero elements in the main diagonal of the coefficients matrix. # 2. Division of every row of the system (including constant term) by the pivot element of that row ($x_{kk}$). This step will transform the matrix into an identity matrix. # 3. Elimination of all elements above and below the main diagonal with applying the same operations on the constant terms vector. This algorithm requires the following three nested loops: # >* The main loop of $k$ from row 1 to n to index the pivot rows. In this loop each element in the pivot row is divided by the pivot: # $$x^*_{kj} = \frac{x_{kj}}{x_{kk}}, b^*_k = \frac{b_k}{x_{kk}}, k = 1,2,...,n, j=k,n$$ # >* The loop of $i$ from row $1$ to $n$ to index the subtraction rows. In order to avoid the subtraction of the pivot row from itself when $i$ is equal to $k$ or if the coefficient is already is equal to zero, the subtraction for current $i$ should be skipped. # >* The loop of j from k to n to index the columns for element subtraction starting from the pivot element to the last column of the coefficients matrix. Thus, the elimination statement is written as: # $$x^*_{ij}=x_{ij}-x_{ik}x^*_{kj}$$ # and the new constant in the $i$ loop is: # $$b^*_i=b_i-x_{ik}b^*_k$$ # Where, $k = 1,2,...,n i=1,2,...,n, i \not=k$. 
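#
# As a tiny worked illustration before the code (numbers chosen purely for demonstration, not taken from the test systems below), one full Gauss-Jordan pass on the $2\times2$ system $2x_1 + x_2 = 5$, $x_1 + 3x_2 = 5$:
# $$\left[\begin{array}{cc|c} 2&1&5\\ 1&3&5 \end{array}\right]
# \xrightarrow{R_1/2}
# \left[\begin{array}{cc|c} 1&0.5&2.5\\ 1&3&5 \end{array}\right]
# \xrightarrow{R_2-R_1}
# \left[\begin{array}{cc|c} 1&0.5&2.5\\ 0&2.5&2.5 \end{array}\right]
# \xrightarrow{R_2/2.5}
# \left[\begin{array}{cc|c} 1&0.5&2.5\\ 0&1&1 \end{array}\right]
# \xrightarrow{R_1-0.5R_2}
# \left[\begin{array}{cc|c} 1&0&2\\ 0&1&1 \end{array}\right]$$
# so the constant column is transformed directly into the solution, $x_1 = 2$, $x_2 = 1$, with no back-substitution needed.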
# # # # + id="8JEywa9IoWBg" X = test_systems['test1']['X'] b = test_systems['test1']['b'] n = b.size v = np.empty(n) # + id="KZVh0JsdMApM" def gssjrdn(X,b): a = np.array(X, float) b = np.array(b, float) n = len(b) for k in range(n): #Partial Pivoting if np.fabs(a[k,k]) < 1.0e-12: for i in range(k+1,n): if np.fabs(a[i,k]) > np.fabs(a[k,k]): a[[k,i]] = a[[i,k]] b[[k,i]] = b[[i,k]] break #Division of the pivot row pivot = a[k,k] a[k] /= pivot b[k] /= pivot #Elimination loop for i in range(n): if i == k or a[i,k] == 0: continue factor = a[i,k] a[i] -= factor * a[k] b[i] -= factor * b[k] return b,a # + colab={"base_uri": "https://localhost:8080/"} id="0eX5WTDlG70l" outputId="bb34889f-c3a7-48e2-ebaf-1531dc41197c" x,a = gssjrdn(X,b) print("The solution of the system:") print(x) print("The coefficient matrix after transformation:") print(a) # + [markdown] id="o82lQAKqMAvM" # ## 3.4 Cramer's Rule # Is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the vector of right hand sides of the equations. # # 1. Set the coefficient matrix $X$ and the constants vector $b$. # 2. Initialize an empty array of unknowns $v$ with a length of $m$. # 3. For each element of $v$ their values are solved by: # >* substituting the column of $X$ with $b$ corresponding to the index of the unknown element $v_i$, let this be matrix $C_i$. # >* solve for $v_i$ by: # $$v_i = \frac{|C_i|}{|X|}$$ # + id="81Etp-hQMJpV" X = test_systems['test1']['X'] b = test_systems['test1']['b'] # + id="Zv9Ur4BOh4Ce" def cramers(X,b): n = b.size v = np.empty(n) for i in range(b.size): C = X.copy() C[:,i] = b v[i] = np.linalg.det(C)/np.linalg.det(X) return v # + colab={"base_uri": "https://localhost:8080/"} id="kL-lCZTlI422" outputId="c8e4a2a7-3244-4f02-c317-64a31d219a64" cramers(X,b) # + [markdown] id="DKGMJ3fWMJuU" # ## 3.5 Cholesky Factorization # # LU decomposition of a matrix is the factorization of a given square matrix into two triangular matrices, one upper triangular matrix and one lower triangular matrix, such that the product of these two matrices gives the original matrix. It was introduced by Al in 1948, who also created the turing machine [[2]](https://www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/). # # Given a linear equation: # $$AX=B\\_{\text{(Eq. 3.5.1)}}$$ it can be re-written as $$A = LU \\_{\text{(Eq. 3.5.2)}}$$ whereas $L$ is a lower triangular matrix and $U$ is an upper triangular matrix. $L$ an $U$ are then called the decompositions of the matrix $A$. The decomposition can be illustrated as: # $$\begin{bmatrix} # a_{11}&a_{12}&a_{13}\\ # a_{21}&a_{22}&a_{23}\\ # a_{31}&a_{32}&a_{33}\\ # \end{bmatrix} = # \begin{bmatrix} # 1&0&0\\ # l_{21}&1&0\\ # l_{31}&l_{32}&1\\ # \end{bmatrix} \cdot # \begin{bmatrix} # u_{11}&u_{12}&u_{13}\\ # 0&u_{22}&u_{23}\\ # 0&0&u_{33}\\ # \end{bmatrix} # \\ _{\text{(Eq. 3.5.3)}}$$ # # # # # # + [markdown] id="UUSAWj4XTbAq" # ### 3.5.1 Decomposition # The **Cholesky decomposition** or **Cholesky factorization** is a decomposition of a symmetric and positive-definite matrix. The Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations. 
# # The Cholesky decomposition is a decomposition of the form $$A = LL^T$$ where $L$ is a lower triangular matrix with real and positive diagonal entries, and $L^T$ denotes the transpose of $L$. # # **Condition 1: Symmetry** # To proceed with Cholesky's decomposition, the matrix should be symmetric. A matrix is said to be symmatric if it is a real square matrix $A$where: # $$a_{ij}=a_{ji}, i=1,...,n, j=1,...,n\\ _{\text{(Eq. 3.5.4)}}$$ or simply, a matrix is symmetric if it is equal to its transpose. # $$A = A^T\\ _{\text{(Eq. 3.5.5)}}$$ # # **Condition 2: Positive-Definite** # A real symmetric matrix $A$ is positive definite if the eigenvalues of $A$ are positive. # # Once these conditions are met, we can then proceed with the Cholesky's Algorithm. The decomposed matrix can be solved by the following form: # $$\begin{bmatrix} # a_{11}& & & sym.\\ # a_{21}&a_{22}& & \\ # a_{31}&a_{32}&a_{33}& \\ # a_{41}&a_{42}&a_{43}&a_{44}\\ # \end{bmatrix} = # \begin{bmatrix} # l^2_{11}& & & sym.\\ # l_{11}l_{21}& l_{21}^2+l_{22}^2 & & \\ # l_{11}l_{31}& l_{31}l_{21} + l_{32}l_{22} &l_{31}^2+l_{32}^2+l_{33}^2& \\ # l_{11}l_{41}& l_{41}l_{21} + l_{42}l_{22} & l_{41}l_{31} + l_{42}l_{32} + l_{43}l_{33} &l_{41}^2+l_{42}^2+l_{43}^2+l_{44}^2 \\ # \end{bmatrix} # \\ _{\text{(Eq. 3.5.6)}}$$ # # The algorithm can be shortened as: # $$\left\{ # \begin{array}\\ # l_{ij} = \sqrt{a_{ij} - \sum_{k=1}^{j-1}l_{ik}^2} & i = j\\ # l_{ij} = \frac{1}{l_{ij}} (a_{ij} - \sum_{k=1}^{j-1}l_{ik}l_{jk}) & i \not= j # \end{array} # \right\}i=j,...,n; j=1,...,n # \\ _{\text{(Eq. 3.5.7)}}$$ # # + id="OMJcJD65VmFM" X = test_systems['test4']['X'] b = test_systems['test4']['b'] # + id="jGm4e_c-MUYk" def cholesky(X): a = np.array(X, float) L = np.zeros_like(a) n = np.shape(a)[0] ## check for symmetry and positive-definite if (X == X.T).all() and (np.linalg.eigvals(X) > 0).all(): ## Cholesky decomposition for j in range(n): for i in range(j,n): if i==j: L[i,j] = np.sqrt(a[i,j]-np.sum(L[i,:]**2)) else: L[i,j]= (a[i,j]-np.sum(L[i,:j]*L[j,:j]))/L[j,j] else: print("Cannot perform Cholesky Factorization.") return L # + colab={"base_uri": "https://localhost:8080/"} id="JketQe4JVvAm" outputId="b51480b4-a204-48f5-afe8-b32a6d1c2337" cholesky(X) # + [markdown] id="kZPcJxFnSxbL" # ### 3.5.2 Forward Substitution # # $$\begin{bmatrix} # l_{11}&0&0&... & 0\\ # l_{21}&l_{22}&0&...&0 \\ # l_{31}&l_{32}&l_{33}&...&0 \\ # \vdots&\vdots&\vdots&\ddots&\vdots\\ # l_{n1}&l_{n2}&l_{n3}&...&l_{nn}\\ # \end{bmatrix} # \left\{ # \begin{array}\\ # y_1\\y_2\\y_3\\\vdots\\y_n # \end{array} # \right\}= # \left\{ # \begin{array}\\ # b_1\\b_2\\b_3\\\vdots\\b_n # \end{array} # \right\} # \\ _{\text{(Eq. 3.5.8)}}$$ # # The algorithm is represented as: # $$y_i = \frac{1}{l_{ii}}(b_i - \sum^{i-1}_{j=1}l_{ij}y_j), i = 1,2,...,n\\ _{\text{(Eq. 3.5.9)}}$$ # + [markdown] id="vaS260i2VMzr" # ### 3.5.3 Backward Substitution # $$\begin{bmatrix} # u_{1,1}&u_{1,2}&...&u_{1n-1}& u_{1,n}\\ # 0&u_{2,2}&...&u_{2,n-1}& u_{2,n} \\ # \vdots&\vdots&\vdots&\ddots&\vdots\\ # 0&0&...&u_{n-1,n-1}&u_{n-1,n}\\ # 0&0&...&0&u_{n,n}\\ # \end{bmatrix} # \left\{ # \begin{array}\\ # x_1\\x_2\\\vdots\\x_{n-1}\\x_n # \end{array} # \right\}= # \left\{ # \begin{array}\\ # y_1\\y_2\\\vdots\\y_{n-1}\\y_n # \end{array} # \right\} # \\ _{\text{(Eq. 3.5.10)}}$$ # # The algorithm is represented as: # $$y_i = \frac{1}{u_{ii}}(y_i - \sum^{n}_{j=i+1}u_{ij}x_j), i = n,n-1,...,1\\ _{\text{(Eq. 
3.5.11)}}$$

# + id="I12DgFCzSoWM"
def solveLU(L,U,b):
    # Solve L*U*x = b in two stages:
    # forward substitution for L*y = b (Eq. 3.5.9),
    # then backward substitution for U*x = y (Eq. 3.5.11)
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - np.sum(L[i,:i]*y[:i]))/L[i,i]
    x = np.zeros(n)
    for i in range(n-1, -1, -1):
        x[i] = (y[i] - np.sum(U[i,i+1:]*x[i+1:]))/U[i,i]
    return x

# + [markdown] id="G3cfZYs_MUSU"
# ## 3.6 Jacobi's Method

# + id="44J7r2y1MZy8"


# + [markdown] id="KmV2rF-zMZs8"
# ## 3.7 Gauss-Seidel Method

# + id="7q7x-KpFMeIU"


# + [markdown] id="TX_paBKtMdgd"
# ## 3.8 Successive over-relaxation (SOR) Method

# + id="L6y6Dlw4Mr3e"


# + [markdown] id="bUd1A-a_DVgi"
# ## References
# [1] . (2020) *Introduction to Systems of Linear Equations*. Elementary Linear Algebra. University of Kansas, KS, USA. https://mandal.ku.edu/math290/FTwelveM290/chpterOne.pdf
#

# + id="OfDLA99Nhixe"


# + id="Rdtbj_C_epg4"


# + id="BI_Tz6WGeq80"


# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import os
import re
import geopandas as gpd
from functions import getTradingPartners
from functions import getData

basePath = 'C:/Users/Krist/University College London/Digital Visualisation/Final Project/'

countries = gpd.read_file(basePath+'Data Sources/countries (1)/ne_50m_admin_0_countries.shp').fillna(value='None')
airports = pd.read_csv(basePath+'Final Data/airports.csv',index_col=0)
ports = pd.read_csv(basePath+'Final Data/ports.csv',index_col=0)
globalData = pd.read_csv(basePath+'Final Data/globalData_reCat.csv',index_col=0)
ukports = pd.read_csv(basePath+'Final Data/UKports_kln.csv',index_col=0)
ukairports = pd.read_csv(basePath+'Final Data/UKFreightAirports.csv',index_col=0)

ukairports

def cleanName(whatToClean,remove=None):
    if type(whatToClean) == list:
        cleaned = []
        for element in whatToClean:
            if remove != None:
                cleaned.append(" ".join([ele.capitalize() for ele in re.split(' ',element) if ele != remove]))
            else:
                cleaned.append(" ".join([ele.capitalize() for ele in re.split(' ',element)]))
        print(cleaned)
    else:
        if remove != None:
            cleaned = " ".join([ele.capitalize() for ele in re.split(' ',whatToClean) if ele != remove])
        else:
            cleaned = " ".join([ele.capitalize() for ele in re.split(' ',whatToClean)])
    return cleaned

# ### Getting a sense of what the datasets contain

airports.columns
airports.usage.unique()
airports['size'].unique()
ports.columns
ports.harborsize.unique()
ports.fillna('None').railway.unique()
ports.railway.unique()
countries.columns
globalData.columns
globalData.loc[0]
print('Maximum:',globalData.all_commodities_export.max(),'\nMinimum:',globalData.all_commodities_export.min())
plt.hist(globalData.all_commodities_export)
plt.show()
plt.hist(np.log(globalData.all_commodities_export))
plt.show()
print('Maximum:',globalData.normalised_export_2017.max(),'\nMinimum:',globalData.normalised_export_2017.min())
print('Maximum:',globalData.all_commodities_import.max(),'\nMinimum:',globalData.all_commodities_import.min())
plt.hist(globalData.all_commodities_import)
plt.show()
print('Maximum:',globalData.normalised_import_2017.max(),'\nMinimum:',globalData.normalised_import_2017.min())
plt.hist(np.log(1+globalData.all_commodities_import))
plt.show()
print('Maximum:',globalData.passengers_2017.max(),'\nMinimum:',globalData.passengers_2017.min())
plt.hist(globalData.passengers_2017)
plt.show()
plt.hist(np.log(1+globalData.passengers_2017))
plt.show()
print('Maximum:',globalData.normalised_passengers_2017.max(),'\nMinimum:',globalData.normalised_passengers_2017.min())
print('Maximum:',globalData.freight_2017.max(),'\nMinimum:',globalData.freight_2017.min())
plt.hist(globalData.freight_2017) plt.show() plt.hist(np.log(1+globalData.freight_2017)) plt.show() print('Maximum:',globalData.normalised_freight_2017.max(),'\nMinimum:',globalData.normalised_freight_2017.min()) # ### Let's write some csv's! # # #### First the top importers/exporters globalData top_five_importers = globalData.sort_values(by=['all_commodities_import'],ascending=False)[['all_commodities_import','name','iso3']] top_five_importers = top_five_importers.reset_index(drop=True).loc[0:4] top_five_importers = top_five_importers[['name','all_commodities_import','iso3']] top_five_importers['all_commodities_import'] = top_five_importers['all_commodities_import'].astype(int) top_five_importers.columns = ['name','value','code'] top_five_importers['name'] = cleanName(list(top_five_importers.name)) top_five_importers top_five_exporters = globalData.sort_values(by=['all_commodities_export'],ascending=False)[['all_commodities_export','name','iso3']] top_five_exporters = top_five_exporters.reset_index(drop=True).loc[0:4] top_five_exporters = top_five_exporters[['name','all_commodities_export','iso3']] top_five_exporters['all_commodities_export'] = (top_five_exporters['all_commodities_export']).astype(int) top_five_exporters.columns = ['name','value','code'] top_five_exporters['name'] = cleanName(list(top_five_exporters.name)) top_five_exporters top_five_importers.to_csv(basePath+'Layers/StylingDataDriven/top_five_importers.csv',header=True,index=False) top_five_exporters.to_csv(basePath+'Layers/StylingDataDriven/top_five_exporters.csv',header=True,index=False) # #### Now the top 5 busiest airports airports.columns airports.head() top_five_airports = airports[airports.busiest_airport_ranking!=0].sort_values(by=['busiest_airport_ranking'],ascending=True)\ [['airport_name','iata_code','amount_passed_through','busiest_airport_ranking']] top_five_airports = top_five_airports.reset_index(drop=True).loc[0:4] top_five_airports = top_five_airports[['airport_name','amount_passed_through','iata_code']] top_five_airports.columns = ['name','value','code'] top_five_airports['value'] = top_five_airports['value'].round(2) top_five_airports['name'] = cleanName(list(top_five_airports.name),"Int'l") top_five_airports top_five_airports.to_csv(basePath+'Layers/StylingDataDriven/top_five_airports.csv',header=True,index=False) # #### On to the busiest ports ports.columns top_five_ports = ports[ports.busiest_ports_ranking!=0].sort_values(by=['busiest_ports_ranking'],ascending=True)\ [['port_name','amount_shipped_through','busiest_ports_ranking']] top_five_ports = top_five_ports.reset_index(drop=True).loc[0:4] top_five_ports = top_five_ports[['port_name','amount_shipped_through','port_name']] top_five_ports.columns = ['name','value','code'] top_five_ports['name'] = cleanName(list(top_five_ports.name)) top_five_ports top_five_ports.to_csv(basePath+'Layers/StylingDataDriven/top_five_ports.csv',header=True,index=False) # #### UK airports ukairports.head() ukairports.columns top_five_ukairports = ukairports.sort_values(by=['total_freight'],ascending=False)[['airport_name','total_freight','iata_code']] top_five_ukairports = top_five_ukairports.reset_index(drop=True).loc[0:4] top_five_ukairports.columns = ['name','value','code'] top_five_ukairports['value'] = top_five_ukairports['value'].round(2) top_five_ukairports['name'] = cleanName(list(top_five_ukairports.name),"Int'l") top_five_ukairports top_five_ukairports.to_csv(basePath+'Layers/StylingDataDriven/top_five_ukairports.csv',header=True,index=False) # #### Finally, 
the UK ports ukports.head() ukports.columns top_five_ukports = ukports.sort_values(by=['Total'],ascending=False)[['Total','port_name','port_name']] top_five_ukports.index=np.arange(top_five_ukports.shape[0]) top_five_ukports = top_five_ukports.loc[0:4] top_five_ukports.columns = ['Total','port_name_1','port_name_2'] top_five_ukports = top_five_ukports[['port_name_1','Total','port_name_2']] top_five_ukports.columns = ['name','value','code'] top_five_ukports['value'] = top_five_ukports['value'].round(2) top_five_ukports['name'] = cleanName(list(top_five_ukports.name)) top_five_ukports top_five_ukports.to_csv(basePath+'Layers/StylingDataDriven/top_five_ukports.csv',header=True,index=False) # Getthe the top-five trading partners of UK exportPartners, theirShare = getTradingPartners('United Kingdom','2015','export',basePath=basePath) exportPartners theirShare topfiveTradingPartnersExport = pd.DataFrame(index=np.arange(5),columns=['name','value','code']) topfiveTradingPartnersExport['value'] = [partner[1] for partner in exportPartners] topfiveTradingPartnersExport['value'] = topfiveTradingPartnersExport['value'].round(2) topfiveTradingPartnersExport['name'] = [partner[0] for partner in exportPartners] topfiveTradingPartnersExport['code'] = [[iso3 for iso3,name in zip(countries.GU_A3,countries.NAME)\ if partner[0] in name.lower()][0] for partner in exportPartners] topfiveTradingPartnersExport['name'] = cleanName(list(topfiveTradingPartnersExport.name)) topfiveTradingPartnersExport topfiveTradingPartnersExport.to_csv('../Layers/StylingDataDriven/top_five_exporters_uk.csv',header=True,index=True) # Getthe the top-five trading partners of UK importPartners, theirShare_I = getTradingPartners('United Kingdom','2015','import',basePath=basePath) importPartners theirShare_I topfiveTradingPartnersImport = pd.DataFrame(index=np.arange(5),columns=['name','value','code']) topfiveTradingPartnersImport['value'] = [partner[1] for partner in importPartners] topfiveTradingPartnersImport['value'] = topfiveTradingPartnersImport['value'].round(2) topfiveTradingPartnersImport['name'] = [partner[0] for partner in importPartners] topfiveTradingPartnersImport['code'] = [[iso3 for iso3,name in zip(countries.GU_A3,countries.NAME)\ if partner[0] in name.lower()][0] for partner in importPartners] topfiveTradingPartnersImport['name'] = cleanName(list(topfiveTradingPartnersImport.name)) topfiveTradingPartnersImport topfiveTradingPartnersImport.to_csv('../Layers/StylingDataDriven/top_five_importers_uk.csv',header=True,index=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example 9.1: Isentropic Compression (Air-Standard) # # *, Ph.D., P.E.
# University of Kentucky - Paducah Campus
# ME 321: Engineering Thermodynamics II
* # ## Problem Statement # Air is compressed isentropically form $T_1=250\ \text{K}$, $p_1=1\ \text{bar}$ to $T_2=400\ \text{K}$ # * (a) What is the compression ratio # * (b) What is the final pressure # ## Solution # # __[Video Explanation](https://uky.yuja.com/V/Video?v=3074239&node=10465156&a=1753145453&autoplay=1)__ # ### Python Initialization # We'll start by importing the libraries we will use for our analysis and initializing dictionaries to hold the properties we will be usings. # + jupyter={"outputs_hidden": false} from kilojoule.templates.kSI_K import * air = idealgas.Properties('Air', unit_system='SI_K') # - # ### Given Parameters # We now define variables to hold our known values. # + jupyter={"outputs_hidden": false} # %%showcalc T[1] = Quantity(250,'K') p[1] = Quantity(1,'bar') T[2] = Quantity(400,'K') # - # #### Assumptions # * Ideal gas # * Adiabatic # * Variable specific heat # * Negligible changes in kinetic and potential energy # + jupyter={"outputs_hidden": false} # %%showcalc # Ideal gaw R = air.R # - # #### (a) Compression Ratio # + jupyter={"outputs_hidden": false} # %%showcalc # Properties at inlet v[1] = R*T[1]/p[1] s[1] = air.s(T=T[1],p=p[1]) # Isentropic s[2] = s[1] # Specific volume at exit v[2] = air.v(T=T[2],s=s[2]) # Compression ratio r = v[1]/v[2] # - # #### (b) Final Pressure # + jupyter={"outputs_hidden": false} # %%showcalc # Pressure at exit p[2] = R*T[2]/v[2] # Pressure ratio r_p = p[2]/p[1] # + pv = air.pv_diagram() pv.plot_state(states[1]) pv.plot_state(states[2]) pv.plot_process(states[1],states[2],path='isentropic',label='isentropic'); pv.plot_isotherm(T[1],xcoor=.3) pv.plot_isotherm(T[2],xcoor=.3) pv.plot_isobar(p[1],pos=0.5) pv.plot_isobar(p[2],pos=0.5) h[2] = air.h(T[2],p[2]) pv.plot_isenthalp(h[2],label=False); # + jupyter={"outputs_hidden": false} Summary(); # + jupyter={"outputs_hidden": false} Summary(['r','r_p']); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="Z4ASX8AZ9uCy" colab_type="code" outputId="ef71ba4a-1bb8-4dbc-b6c1-f97c27cba03c" colab={"base_uri": "https://localhost:8080/", "height": 121} from google.colab import drive drive.mount('/content/drive') # + id="fzU3UIB_9qge" colab_type="code" outputId="0a9526d1-569d-4a1e-ad29-52e646c02bdb" colab={"base_uri": "https://localhost:8080/", "height": 50} # %load_ext autoreload # %autoreload 2 import torch from torch.utils.data import DataLoader from torch.optim import Adam from matplotlib import pyplot as plt from utils import get_mnist_data from models import ConvNN from training_and_evaluation import train_model, predict_model from attacks import fast_gradient_attack from torch.nn.functional import cross_entropy import os if not os.path.isdir("models"): os.mkdir("models") # + [markdown] id="vHg3twW39qgh" colab_type="text" # # Project 2, part 1: Creating adversarial examples (50 pt) # In this notebook we train a basic convolutional neural network on MNIST and craft adversarial examples via gradient descent. # # ## Your task # Complete the missing code in the respective files, i.e. `training_and_evaluation.py`, `attacks.py`, and this notebook. Make sure that all the functions follow the provided specification, i.e. the output of the function exactly matches the description in the docstring. 
# # Specifically, for this part you will have to implement the following functions / classes: # **`training_and_evaluation.py`**: # * `train_model` (15pt) # * `predict_model` (10pt) # # **`attacks.py`**: # * `fast_gradient_attack` (15pt) # # **This notebook** # * Cells in the Qualitative Evaluation section. (10pt) # # ## General remarks # Do not add or modify any code outside of the following comment blocks, or where otherwise explicitly stated. # # ``` python # ########################################################## # # YOUR CODE HERE # ... # ########################################################## # ``` # After you fill in all the missing code, restart the kernel and re-run all the cells in the notebook. # # The following things are **NOT** allowed: # - Using additional `import` statements # - Copying / reusing code from other sources (e.g. code by other students) # # If you plagiarise even for a single project task, you won't be eligible for the bonus this semester. # + id="3QBG_8BY9qgh" colab_type="code" colab={} mnist_trainset = get_mnist_data(train=True) mnist_testset = get_mnist_data(train=False) use_cuda = torch.cuda.is_available() #and False model = ConvNN() if use_cuda: model = model.cuda() epochs = 1 batch_size = 128 test_batch_size = 1000 # feel free to change this lr = 1e-3 opt = Adam(model.parameters(), lr=lr) # + id="Z9x8iL719qgm" colab_type="code" colab={} def loss_function(x, y, model): logits = model(x).cpu() loss = cross_entropy(logits, y) return loss, logits # + [markdown] id="Jedehy5u9qgo" colab_type="text" # Implement the `train_model` function in the file `training_and_evaluation.py`. # + id="EyCqUtZp9qgp" colab_type="code" outputId="0ce4c823-274c-4f23-e233-f14f4f870269" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["65fba63a40e0450b877e6f45f9faec58", "ef6d7c6be40040059706e45506fd51c2", "daf47e1ac07543248f1ca750002ca7a1", "042430a5a53944aa893f11ac0eaf585e", "cd9c4dbe73fd44189a42c9327b51e1f8", "06c2049d99534deca780488d234ee162", "48a4bb8f63c04c68b02e1c61a1dc02ab", "397400e9880e4a0b94c20d4504a8ceb9"]} losses, accuracies = train_model(model, mnist_trainset, batch_size=batch_size, loss_function=loss_function, optimizer=opt) # + id="eN7JyxjI9qgs" colab_type="code" colab={} torch.save(model.state_dict(), "models/standard_training.checkpoint") # + id="oQnm43JU9qgu" colab_type="code" outputId="cf119e63-4466-40b2-90c7-4ce85b57c800" colab={"base_uri": "https://localhost:8080/", "height": 34} model.load_state_dict(torch.load("models/standard_training.checkpoint", map_location="cpu")) # + id="FRyIj6Y49qgw" colab_type="code" outputId="3bbb2684-72d7-4dd8-8017-fc733676911f" colab={"base_uri": "https://localhost:8080/", "height": 225} fig = plt.figure(figsize=(10,3)) plt.subplot(121) plt.plot(losses) plt.xlabel("Iteration") plt.ylabel("Training Loss") plt.subplot(122) plt.plot(accuracies) plt.xlabel("Iteration") plt.ylabel("Training Accuracy") plt.show() # + [markdown] id="neT5Ai_w9qgz" colab_type="text" # Implement the `predict_model` function in the file `training_and_evaluation.py`. 
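# + [markdown]
# For reference only (this is not the graded implementation, which must follow the docstring in `training_and_evaluation.py`): a minimal sketch of the general shape of such an evaluation loop is shown below. The helper name `accuracy_on_dataset` is hypothetical, and it reuses the imports already loaded at the top of this notebook.

# +
def accuracy_on_dataset(model, dataset, batch_size=1000):
    # Hypothetical sketch of a plain accuracy loop (no attack, no gradients).
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    correct, total = 0, 0
    for x, y in loader:
        if torch.cuda.is_available():
            x = x.cuda()
        with torch.no_grad():
            preds = model(x).argmax(dim=1).cpu()
        correct += (preds == y).sum().item()
        total += y.shape[0]
    return correct / total
# -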
# + id="QQRgMdCL9qg1" colab_type="code" outputId="be1366dd-0876-4ea4-a265-87e2496ec1cb" colab={"base_uri": "https://localhost:8080/", "height": 82, "referenced_widgets": ["cbd305b47c934374ad1eee0bbb826eba", "e68dbf61eae843aea1207f363aca6cc1", "03acebed2090403dadd354f48d68823d", "c65e32aa0980438d9917cb4a37f4aad9", "f87adb0b11314662b699a91d8485afd2", "7432175e74c341af87b8c200320579e6", "c6105b926251498ba30893b9e128818a", ""]} clean_accuracy = predict_model(model, mnist_testset, batch_size=test_batch_size, attack_function=None) # + [markdown] id="VXjvkkpC9qg4" colab_type="text" # ### Creating adversarial examples # #### $L_2$-bounded attacks # Fist, craft adversarial perturbations that have a $L_2$ norm of $ \| \tilde{\mathbf{x}} - \mathbf{x} \|_2 = \epsilon$ with $\epsilon=5$. # # #### $L_\infty$-bounded attacks # Afterwards, craft adversarial perturbations with $L_\infty$ norm of $ \| \tilde{\mathbf{x}} - \mathbf{x} \|_\infty = \epsilon$ with $\epsilon=0.3$. # # For this you need to implement `predict_model` in the file `training_and_evaluation.py` and `fast_gradient_attack` in `attacks.py`. See the docstring comments there. # + id="Ix8KWG3A9qg5" colab_type="code" colab={} attack_args_l2 = {"epsilon": 5, "norm": "2"} attack_args_linf = {"epsilon": 0.3, "norm": "inf"} # + [markdown] id="v6wlHdA39qg7" colab_type="text" # ### Qualitative evaluation # # First, craft adversarial examples for 10 randomly selected test samples and inspect them by plotting them. # + [markdown] id="gd9p_pS-9qg8" colab_type="text" # $L_2$ attack: # + id="Tolo_zEf9qg8" colab_type="code" colab={} test_loader = DataLoader(mnist_testset, batch_size = 10, shuffle=True) x,y = next(iter(test_loader)) ########################################################## # YOUR CODE HERE ... ########################################################## # + [markdown] id="TK6kdjtv9qg-" colab_type="text" # $L_\infty$ attack: # + id="Noq_fhdY9qg_" colab_type="code" colab={} ########################################################## # YOUR CODE HERE ... ########################################################## # + [markdown] id="H3bK109D9qhB" colab_type="text" # Visualize the adversarial examples and the model's prediction on them: # + id="zhzM5blN9qhB" colab_type="code" outputId="9f4555d9-2d07-474d-903b-83844e60af64" colab={} for ix in range(len(x)): plt.subplot(131) plt.imshow(x[ix,0].detach().cpu(), cmap="gray") plt.title(f"Label: {y[ix]}") plt.subplot(132) plt.imshow(x_pert_l2[ix,0].detach().cpu(), cmap="gray") plt.title(f"Predicted: {y_pert_l2[ix]}") plt.subplot(133) plt.imshow(x_pert_linf[ix,0].detach().cpu(), cmap="gray") plt.title(f"Predicted: {y_pert_linf[ix]}") plt.show() # + [markdown] id="u6435w719qhE" colab_type="text" # ### Quantitative evaluation # Perturb each test sample and compare the clean and perturbed accuracies. 
# + [markdown] id="dY4SJcI-9qhE" colab_type="text" # $L_2$ perturbations: # + id="tBOvScuJ9qhF" colab_type="code" outputId="e640bb71-a633-4e24-e67d-51dcccb1693a" colab={"referenced_widgets": ["3b77f76347a142aeba5b353232d67865"]} perturbed_accuracy_l2 = predict_model(model, mnist_testset, batch_size=test_batch_size, attack_function=fast_gradient_attack, attack_args=attack_args_l2) # + [markdown] id="ugzjV0xQ9qhK" colab_type="text" # $L_\infty$ perturbations: # + id="3ulxn0eq9qhK" colab_type="code" outputId="3f3c5ae1-e19e-4982-e24e-53912e0d7671" colab={"referenced_widgets": ["3e66590ea87741229bed318d06af52da"]} perturbed_accuracy_linf = predict_model(model, mnist_testset, batch_size=test_batch_size, attack_function=fast_gradient_attack, attack_args=attack_args_linf) # + [markdown] id="lGDTnke79qhM" colab_type="text" # Your values for `clean_accuracy` and `perturbed_accuracy` should roughly match the ones below, even though they will of course not be identical. # + id="RrVWtOg99qhM" colab_type="code" outputId="344e57c0-c75c-46ad-ab7a-0786e53ae658" colab={} clean_accuracy # + id="PuOXWGBm9qhP" colab_type="code" outputId="66feec1e-c0ed-4f0b-abdd-7b95c7590736" colab={} perturbed_accuracy_l2 # + id="UOPT8V6q9qhS" colab_type="code" outputId="1153cef7-168f-4d1a-8132-1fb33b929dca" colab={} perturbed_accuracy_linf # + [markdown] id="hP-zt9FU9qhU" colab_type="text" # #### In the remaining parts of this project we will be focusing on **$L_2$-based attacks only**. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: new_castle_env # language: python # name: new_castle_env # --- import os from datetime import datetime import pyspark.sql.functions as F import pyspark.sql.types as T from pyspark.sql import SparkSession spark = SparkSession.builder.appName("sparkify_etl").config("spark.sql.session.timeZone", "UTC")\ .master("local").getOrCreate() # ## First, Let us check the schema for log data files # + file_path = os.path.join("data","log-data") data_log = spark.read.json(file_path) data_log = data_log.where(F.col("page")=="NextSong") data_log.printSchema() data_log.limit(2).toPandas() # Observation # 1. itemInSession can be integer # 2. 
timestamp column can be datetime # - # ## We will check schema and explore song data files next # + file_path = os.path.join("data","song_data","*","*","*") data_song = spark.read.json(file_path) data_song.printSchema() # Since there's one record per song file, we don't need to use limit data_song.limit(5).toPandas() # Observation #lat,long can be double # - # ## Let's first create the user table # + #user_id, first_name, last_name, gender, level df_user = data_log.select("userId","firstName","lastName","gender","level") # User sql expression to cast specific columns df_user = df_user.withColumn("userId",F.expr("cast(userId as long) userId")) df_user.printSchema() df_user.limit(5).toPandas() # - # ## Next we will create songs table # + #song_id, title, artist_id, year, duration df_song = data_song.select("song_id","title","artist_id","year","duration") df_song = df_song.withColumn("year",F.col("year").cast(T.IntegerType())) df_song.printSchema() df_song.toPandas() # - # ## Artist Table will be created from song data as well # + # artist_id, name, location, lattitude, longitude df_artist = data_song.select("artist_id","artist_name","artist_location","artist_latitude","artist_longitude") df_artist = df_artist.withColumn("artist_latitude",F.col("artist_latitude").cast(T.DecimalType())) df_artist = df_artist.withColumn("artist_longitude",F.col("artist_longitude").cast(T.DecimalType())) df_artist.printSchema() df_artist.toPandas() # - # ## Our next dimension Table would be of Time where we'd split "ts" timestamp col further to granular level # + # start_time, hour, day, week, month, year, weekday df_time = data_log.select("ts") time_format = "yyyy-MM-dd' 'HH:mm:ss.SSS" #func = F.udf("start_time") df_time = df_time.withColumn("start_time", \ F.to_utc_timestamp(F.from_unixtime(F.col("ts")/1000,format=time_format),tz="UTC")) df_time = df_time.withColumn("hour",F.hour(F.col("start_time"))) df_time = df_time.withColumn("day",F.dayofmonth(F.col("start_time"))) df_time = df_time.withColumn("week",F.weekofyear(F.col("start_time"))) df_time = df_time.withColumn("month",F.month(F.col("start_time"))) df_time = df_time.withColumn("year",F.year(F.col("start_time"))) df_time = df_time.withColumn("weekday",F.dayofweek(F.col("start_time"))) df_time.printSchema() df_time.limit(2).toPandas() # - # ## Now that we've created all DIMENSIONAL tables let us proceed for the FACTS table creation # # ### In order to create facts table, we have to perform joins # # #### SQL syntax is better for longer join queries, but the same can be replicated using spark dataframe operations # + # songplay_id, start_time, user_id, level, song_id, artist_id, session_id, location, user_agent #TODO : Partition by specific keys before uploading to S3 as parquet df_song_play = data_song.join(data_log,data_song.title==data_log.song, how="inner").\ select("userId","level","song_id","artist_id","sessionId","location","userAgent") df_song_play.printSchema() df_song_play.limit(2).toPandas() # + # First let us define some views data_log.createOrReplaceTempView("t_log") data_song.createOrReplaceTempView("t_song") df_time.createOrReplaceTempView("t_time") df_song_play = spark.sql("select t_time.start_time, t_log.userId, t_log.level, \ t_song.song_id, t_song.artist_id, t_log.sessionId, \ t_log.location, t_log.userAgent \ from t_log \ inner join t_song \ on t_log.song=t_song.title \ inner join t_time \ on t_time.ts = t_log.ts \ where t_log.artist = t_song.artist_name \ and song_id is not null \ ") df_song_play.limit(2).toPandas() # - 
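# A sketch of the partitioning mentioned in the TODO above (an assumption, not a step from the original notebook): the time dimension already carries `year` and `month` columns, so it can be written partitioned by them. This also produces the `data/time_parquet.parquet` path that is read back at the end of the notebook.

# +
df_time.write.mode("overwrite") \
    .partitionBy("year", "month") \
    .parquet("data/time_parquet.parquet")
# -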
df_song_play.write.mode("overwrite").parquet("data/output.parquet") df = spark.read.parquet('data/time_parquet.parquet') df.toPandas() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + papermill={"duration": 9.890226, "end_time": "2022-03-12T13:49:37.918069", "exception": false, "start_time": "2022-03-12T13:49:28.027843", "status": "completed"} tags=[] from tqdm.auto import tqdm import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.metrics import roc_auc_score from sklearn.model_selection import train_test_split , StratifiedKFold import tensorflow as tf import tensorflow.keras.backend as K from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Model, load_model, save_model from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau from tensorflow.keras.layers import Input,Dense, LSTM, RNN, Bidirectional, GlobalAveragePooling2D , Dropout, Conv1D, Flatten from tensorflow.keras.utils import to_categorical from transformers import TFAutoModel , AutoTokenizer from sklearn.metrics import classification_report # + papermill={"duration": 0.115724, "end_time": "2022-03-12T13:49:38.051478", "exception": false, "start_time": "2022-03-12T13:49:37.935754", "status": "completed"} tags=[] col_names = ['labels','text'] df_tamil_mine = pd.read_csv('../input/cross-verifying-results/BpHigh_tamil.tsv',sep ='\t') df_tamil_en_mine = pd.read_csv('../input/cross-verifying-results/BpHigh_tamil-english.tsv',sep ='\t') df_tamil_given = pd.read_csv('../input/cross-verifying-results/ta-misogyny-test.csv',names=col_names,sep ='\t') df_tamil_en_given = pd.read_csv('../input/cross-verifying-results/ta-en-misogyny-test.csv',names=col_names,sep ='\t') # + papermill={"duration": 0.026537, "end_time": "2022-03-12T13:49:38.095039", "exception": false, "start_time": "2022-03-12T13:49:38.068502", "status": "completed"} tags=[] def transform_df_labels(df): df = df.replace({'Counter-speech':0, 'Homophobia':1, 'Hope-Speech':2, 'Misandry':3, 'Misogyny':4, 'None-of-the-above':5, 'Transphobic':6, 'Xenophobia':7}) return df # + papermill={"duration": 0.042214, "end_time": "2022-03-12T13:49:38.155516", "exception": false, "start_time": "2022-03-12T13:49:38.113302", "status": "completed"} tags=[] df_tamil_mine # + papermill={"duration": 0.049215, "end_time": "2022-03-12T13:49:38.223418", "exception": false, "start_time": "2022-03-12T13:49:38.174203", "status": "completed"} tags=[] df_tamil_given.info() # + papermill={"duration": 0.027155, "end_time": "2022-03-12T13:49:38.269036", "exception": false, "start_time": "2022-03-12T13:49:38.241881", "status": "completed"} tags=[] null_list = df_tamil_given[df_tamil_given['text'].isnull()].index.tolist() # + papermill={"duration": 0.028667, "end_time": "2022-03-12T13:49:38.316317", "exception": false, "start_time": "2022-03-12T13:49:38.287650", "status": "completed"} tags=[] df_tamil_given.dropna(how='any',axis=0,inplace=True) # + papermill={"duration": 0.034142, "end_time": "2022-03-12T13:49:38.369477", "exception": false, "start_time": "2022-03-12T13:49:38.335335", "status": "completed"} tags=[] df_tamil_given # + papermill={"duration": 0.030579, "end_time": "2022-03-12T13:49:38.420013", "exception": false, "start_time": "2022-03-12T13:49:38.389434", "status": "completed"} tags=[] 
df_tamil_en_given.dropna(how='any',axis=0,inplace=True) # + papermill={"duration": 0.041736, "end_time": "2022-03-12T13:49:38.482435", "exception": false, "start_time": "2022-03-12T13:49:38.440699", "status": "completed"} tags=[] df_tamil_mine_transformed = transform_df_labels(df_tamil_mine['label']) df_tamil_given_transformed = transform_df_labels(df_tamil_given['labels']) df_tamil_en_mine_transformed = transform_df_labels(df_tamil_en_mine['label']) df_tamil_en_given_transformed = transform_df_labels(df_tamil_en_given['labels']) # + papermill={"duration": 0.029646, "end_time": "2022-03-12T13:49:38.526321", "exception": false, "start_time": "2022-03-12T13:49:38.496675", "status": "completed"} tags=[] target_names = ['Counter-speech', 'Homophobia', 'Hope-Speech', 'Misandry', 'Misogyny', 'None-of-the-above', 'Transphobic', 'Xenophobia'] print(classification_report(df_tamil_given_transformed ,df_tamil_mine_transformed,target_names=target_names)) # + papermill={"duration": 0.034421, "end_time": "2022-03-12T13:49:38.581109", "exception": false, "start_time": "2022-03-12T13:49:38.546688", "status": "completed"} tags=[] print(classification_report(df_tamil_en_given_transformed ,df_tamil_en_mine_transformed,target_names=target_names)) # + papermill={"duration": 0.022885, "end_time": "2022-03-12T13:49:38.625746", "exception": false, "start_time": "2022-03-12T13:49:38.602861", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import random random.seed(0) import numpy as np import pandas as pd import warnings warnings.simplefilter('ignore') from matplotlib import pyplot as plt from sklearn.metrics import roc_curve, auc, accuracy_score, classification_report, roc_auc_score from sklearn.neighbors import KNeighborsClassifier from sklearn.preprocessing import StandardScaler from matplotlib.legend_handler import HandlerLine2D from sklearn.model_selection import train_test_split, KFold, cross_val_score, learning_curve, GridSearchCV, validation_curve # Load data data_white = pd.read_csv('data/winequality-white.csv', delimiter=';') data_red = pd.read_csv('data/winequality-red.csv', delimiter=';') data_white["type"] = 0 data_red["type"] = 1 data = data_white.append(data_red, ignore_index=True) data = data.dropna() # + X, y = data.drop('quality', axis=1), data.quality y = (y<6).astype(int) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20) sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) # + max_neighbors = 20 seed = 0 n_cv = 5 knn = KNeighborsClassifier() neighbor_range = np.arange(1,max_neighbors+1) train_scores, test_scores = validation_curve(knn, X_train, y_train, param_name="n_neighbors", param_range=neighbor_range, cv=n_cv, n_jobs=-1) plt.figure() plt.plot(neighbor_range, np.mean(train_scores, axis=1), label='Training score') plt.plot(neighbor_range, np.mean(test_scores, axis=1), label='Cross-validation score') plt.title('Validation curve for KNN') plt.xlabel('Neighbours count') plt.ylabel("Classification score") plt.legend(loc="best") plt.grid() plt.show() # + neighbor_range = np.arange(1,max_neighbors) tuned_params = {"n_neighbors" : neighbor_range} clf = GridSearchCV(knn, param_grid=tuned_params, cv=n_cv, n_jobs=-1) clf.fit(X_train, y_train) print("Best parameters set for KNN:") print(clf.best_params_) y_pred = 
clf.predict(X_test) print('Accuracy of KNN is %.2f%%' % (accuracy_score(y_test, y_pred) * 100)) # + train_sizes=np.linspace(.3, 1.0, 5) train_sizes, train_scores, test_scores = learning_curve(clf, X_train, y_train, cv=n_cv, train_sizes=train_sizes) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) # + plt.figure() plt.title("KNN Learning Curve") plt.xlabel("Training examples") plt.ylabel("Score") plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.legend(loc="best") plt.show() # + preds_test = clf.predict_proba(X_test)[:,1] fpr_test, tpr_test, threshold_test = roc_curve(y_test, preds_test) roc_auc_test = auc(fpr_test, tpr_test) plt.figure(figsize=(10, 10)) plt.title('ROC Curve') plt.plot(fpr_test, tpr_test, label = 'Test AUC = %0.2f' % roc_auc_test) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'r--') plt.xlim([0, 1]) plt.ylim([0, 1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') # - print(roc_auc_score(y_test,preds_test)) print(classification_report(y_test, clf.predict(X_test))) # + # Training time import time clf = KNeighborsClassifier(n_neighbors=1) start_time = time.time() clf.fit(X_train, y_train) print("Time taken for training %s seconds" % (time.time() - start_time)) start_time = time.time() pred = clf.predict(X_train) print("Time taken for inference %s seconds" % (time.time() - start_time)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:DAND] # language: python # name: conda-env-DAND-py # --- # # Testing a Perception Phenomenon # ## The Stroop Task # ### By: # > In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition. # # Questions for investigation # ## 1. What is our independent variable? What is our dependent variable? # * Independent: Color-to-Word congruency/incongruency # * Dependent: Time it takes to measure ink colors # ## 2. What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices. 
# Hypotheses: # * $H_0: \mu_c = \mu_i $ : Color-to-Word incongruency has no effect on average time to measure ink colors # > The null hypothesis should be when there is no change in the means between the samples # * $H_A: \mu_c < \mu_i $ : Color-to-Word incongruency has increases average time to measure ink colors # > The alternative hypothesis should be when the average between the samples is not the same. In this case I choose to only test for whether the Color-to-Word incongruency increases the time to measure ink colors. I'm not testing to see if it could also decrease the time it takes to measure colors since it seems unlikely that adding complexity to the task would increase performance. # # Expected Test type: # * One-tailed dependent $t-test$ for paired samples # > The expected test is dependent since it appears to be coming from a single sample of individuals. We're computing the t-value because we don't have a population to compare to. As desribed under the alternative hypothesis section I'd expect a one-tail test since improved performance from increasing task complexity seems unlikely. # ## 3. Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability. # ### Measure of central tendency (sample means) # $\bar x_c = 14.051125$ # # $\bar x_i = 22.01591667$ # # ### Measure of variability (sample standard deviations) # $S_c = 3.56$ # # $S_c = 4.80$ # # # > *see describe below for more statistics* # ## 4. Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots. # + import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns # %matplotlib inline # %pylab inline # - stroop_df = pd.read_csv('stroopdata.csv') stroop_df stroop_df.describe() # + stroop_plot = stroop_df.plot() stroop_plot.set_ylabel("Time to identify color") # - # #### From looking at the graph above it appears that each test subject consistently performed worse when the colors were incongruent from the names of the words. # ## 5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations? # ### Confidence Level: # $\alpha = .05$ # # $\bar x_c = 14.051125$ # # $\bar x_i = 22.01591667$ # # $S = 4.76239803$ # # # $n = 24$ # # $point$ $estimate = -7.964791667$ # # $Standard$ $error = $ 4.76239803 # # $t-statistic = -8.020706944$ # # $t-critical = -1.714$ # # ### Reject or Fail to reject: # > Because our calculated t-statistic of ~-8.02 is in the critical region we will reject the null hypothesis. # # ### Conclusion: # > In conclusion, it seems that having incongruent colors with names of names of colors has a statistically measurable negative impact on time to identify color compared to having congruent colors. The results lived up to my expections, as I expected increasing the complexity of the task would have a negative impact on performance. 
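# As a cross-check of the spreadsheet calculation (a sketch; it assumes the two columns in `stroopdata.csv` are named `Congruent` and `Incongruent`), the same paired t-test can be computed directly with scipy:

# +
from scipy import stats

t_stat, p_two_tailed = stats.ttest_rel(stroop_df['Congruent'], stroop_df['Incongruent'])
# One-tailed p-value: halve the two-tailed value when the effect is in the hypothesized direction.
print(t_stat, p_two_tailed / 2)
# -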
# # ### Resources used: # Spreadsheet where values were calculated # https://docs.google.com/spreadsheets/d/1QuG6C0MlVVryZt0lBJ0tY91vfvK0QC5Lzqs3HeODe4g/edit?usp=sharing # # MathJax Reference # https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # %load /Users/facai/Study/book_notes/preconfig.py # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) #sns.set(font='SimHei', font_scale=2.5) #plt.rcParams['axes.grid'] = False import numpy as np import pandas as pd #pd.options.display.max_rows = 20 import logging logging.basicConfig() logger = logging.getLogger() from IPython.display import Image import enum # - # Chapter 4 Dynamic Programming # ================ # # limition: # + assumption: a perfect model # + great computational expense # ### 4.1 Plicy Evaluation (Prediction) Image('./res/iterative_policy_evaluation.png') Image('./res/ex4_1.png') # + class Action(enum.Enum): EAST = enum.auto() WEST = enum.auto() SOUTH = enum.auto() NORTH = enum.auto() @staticmethod def move(x, y, action): if action == Action.EAST: return x, y - 1 elif action == Action.WEST: return x, y + 1 elif action == Action.SOUTH: return x + 1, y elif action == Action.NORTH: return x - 1, y class GridWorld(object): def move(self, s, action): if s == 0 or s == 15: return s, 0 elif 0 < s < 15: x = s // 4 y = s % 4 x1, y1 = Action.move(x, y, action) if 0 <= x1 < 4 and 0 <= y1 < 4: s1 = x1 * 4 + y1 return s1, -1 else: return s, -1 else: raise ValueError('s {} must be in [0, 15]'.format(s)) class RandomPolicy(object): def __init__(self, grid_world): self._grid_world = grid_world self._v = np.zeros((4, 4)) self._v_flatten = self._v.ravel() self._delta = 0 def iterate(self): v = self._v.copy() for s in range(0, 16): self.update_value(s) self._delta = max(self._delta, np.sum(np.abs(v - self._v))) return self._v.copy() def get_pi(self, s): return [(0.25, (s, a)) for a in [Action.EAST, Action.WEST, Action.SOUTH, Action.NORTH]] def update_value(self, s): # V(s) = \sum_a \pi(a | s) \sum 1 * (r + 1 * V(s1)) vs = [] for (prob, (s, a)) in self.get_pi(s): s1, r = self._grid_world.move(s, a) vs.append(prob * (r + self._v_flatten[s1])) logger.debug('vs: {}'.format(vs)) self._v_flatten[s] = np.sum(vs) # - # logger.setLevel(logging.DEBUG) r = RandomPolicy(GridWorld()) for _ in range(100): r.iterate() pd.DataFrame(np.round(r.iterate())) # ### 4.2 Policy Improvement # # \begin{align*} # q_\pi(s, a) = \sum_{s', r} p(s', r \mid s, a) \left [ r + \gamma v_\pi(s') \right ] # \end{align*} # # policy improvement theorem: For all $s \in \mathcal{S}$, $q_\pi(s, \pi'(s)) \geq v_\pi(s)$, then $v_{\pi'}(s) \geq v_\pi(s)$. # # => new greedy policy: $\pi'(s) = \operatorname{arg \, max}_a q_\pi(s, a)$, policy imporvement. # # If there are ties in policy improvement step, each maximizing action can be given a portion of the probability of being selected in the new greedy policy. 
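# A minimal sketch of this improvement step on the grid world above (the helper `greedy_policy` is not from the book; it reuses the `GridWorld`/`Action` classes and the value function produced by `RandomPolicy`, with $\gamma = 1$ and deterministic transitions):

# +
def greedy_policy(grid_world, v_flatten):
    """Return argmax_a q(s, a) for each non-terminal state, with q(s, a) = r + v(s')."""
    policy = {}
    for s in range(1, 15):                    # states 0 and 15 are terminal
        q = {}
        for a in Action:
            s1, reward = grid_world.move(s, a)
            q[a] = reward + v_flatten[s1]     # gamma = 1
        policy[s] = max(q, key=q.get)         # ties broken arbitrarily here
    return policy

greedy_policy(GridWorld(), r.iterate().ravel())
# -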
Image('./res/fig4_1.png') # ### 4.3 Policy Iteration # # policy iteration: # # \begin{align*} # \pi_0 \overset{E}{\longrightarrow} v_{\pi_0} \overset{I}{\longrightarrow} \pi_1 \overset{E}{\longrightarrow} v_{\pi_1} \overset{I}{\longrightarrow} \pi_2 \overset{E}{\longrightarrow} \cdots \overset{I}{\longrightarrow} \pi_\ast \overset{E}{\longrightarrow} v_{\pi_\ast} # \end{align*} Image('./res/policy_iteration.png') Image('./res/fig4_2.png') # ### 4.4 Value Iteration # # truncate policy evaluation # # => value iteration: policy evaluation is stopped after just one sweep (one update of each state). Image('./res/value_iteration.png') # ### 4.5 Asynchronouse Dynamic Programming # # avoiding sweeps # # focus the DP algorithm's update onto parts of the state set that are most relevant to the agent. # ### 4.6 Generalized Policy Iteration # # Almost all reinforcement learning methods are well described as GPI: Image('./res/gpi.png') # ### 4.7 Efficiency of Dynamic Programming # # DP: polynomial in the number of states and actions. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # بسم الله الرحمن الرحيم class Human: pass man = Human() type(man) class Human: def __init__(self): print('I am __init__') man = Human() class Human: def __init__(self, name): self.name = name st_1 = Human('Ahmed') st_2 = Human('Ali') st_1.name st_2.name st_1.name = 'Mohammed' st_1.name st_2.name class Human: score = 0 # class variable def __init__(self, name): self.name = name # instance variable st_1 = Human('Ahmed') st_2 = Human('Ali') st_1.name st_2.name st_1.score st_2.score Human.score = 3 st_1.score st_1.score = -20 st_1.score st_2.score Human.score st_3 = Human('Mahmoud') st_3.score class Human: #self.score = 0 def __init__(self, name, mobile): self.name = name self.mobile = mobile def __len__(self): pass class Human: faults = 0 # class variable # instance method def __init__(self, name): self.name = name def speak(self): print(f'My name is: {self.name}') # Decorator - code that modifies behaviour of the following block @classmethod def make_faults(omnia): omnia.faults += 1 print(omnia.faults) Human.make_faults() st_1 = Human('Ali') st_1.faults Human.faults st_1.make_faults() Human.faults st_1.faults # ## OOP Concepts class Human: no_faults = 0 def __init__(self, name): self.name = name def speak(self): print(f'My name is: {self.name}') @staticmethod def make_faults(cls): cls.faults += 1 def __len__(self): print('I am inside __len__function') print(f'Human Length is: 11') a = Human('Ahmed') class obj_1: def __init__(self): pass def __len__(self): print('I am obj1') return 0 class obj_2: def __init__(self): pass def __len__(self): print('I am obj2') return 0 o1 = obj_1() o2 = obj_2() a_list = [o1, o2] # + # for i in a_list: # print(type(i)) # - def print_obj_length(a): for i in a: print(len(i)) print_obj_length(a_list) class Human: no_faults = 0 def __init__(self, name): self.name = name def speak(self): print(f'My name is: {self.name}') @staticmethod def make_faults(cls): cls.faults += 1 def __len__(self): return len(self.name) def __add__(self, a): return self.name + a.name def __sub__(self, a): return 12 def __mul__(self, a): return 13 def __div__(self, a): return 14 def __lt__(self, a): if self.name < a.name: return True return False def __contains__(self, a): return False a = Human('Amer') b = Human('Omnia') 'x' in a a - b len(a) len(b) # ## 
Inheritance class Human: no_faults = 0 def __init__(self, name): self.name = name def speak(self): print(f'My name is: {self.name}') @staticmethod def make_faults(cls): cls.faults += 1 class Employee(Human): def __init__(self): super(Employee, self).__init__('Ahmed') self.salary = 100 e = Employee() e.name e.salary # + # e.make_faults() # - class A: def p(self): print('inside class a') class B: def p(self): print('inside class b') class C(B,A): pass obj = C() obj.p() class Human: def __init__(self, name): print('inside Human constructor') self.name = name class Mammal: def __init__(self, age): print('inside Mammal constructor') self.age = age class Employee(Human, Mammal): def __init__(self, name, age): Human.__init__(self, name) Mammal.__init__(self, age) e = Employee('Amera', 20) e.age e.name # + def add(x,y): return x + y def add(x,y,z): return x + y + z add(1,2) # - def add(x,y,z): return x + y + z add(1,2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # SEE EXAMPLES: https://bitbucket.org/hrojas/learn-pandas # + # Import all libraries needed for the tutorial # General syntax to import specific functions in a library: ##from (library) import (specific library function) from pandas import DataFrame, read_csv # General syntax to import a library but no functions: ##import (library) as (give the library a nickname/alias) import matplotlib.pyplot as plt import pandas as pd #this is how I usually import pandas import sys #only needed to determine Python version number import matplotlib #only needed to determine Matplotlib version number # Enable inline plotting # %matplotlib inline # - print('Python version ' + sys.version) print('Pandas version ' + pd.__version__) print('Matplotlib version ' + matplotlib.__version__) # The inital set of baby names and bith rates names = ['Bob','Jessica','Mary','John','Mel'] births = [968, 155, 77, 578, 973] # + # zip? # - BabyDataSet = list(zip(names,births)) BabyDataSet df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births']) df # + # df.to_csv? # - df.to_csv('births1880.csv',index=False,header=False) # + # read_csv? # - Location = 'births1880.csv' df = pd.read_csv(Location) df df = pd.read_csv(Location, header=None) df df = pd.read_csv(Location, names=['Names','Births']) df # Check data type of the columns df.dtypes # Check data type of Births column df.Births.dtype # Method 1: Sorted = df.sort_values(['Births'], ascending=False) Sorted.head(1) # Method 2: df['Births'].max() # + # Create graph df['Births'].plot() # Maximum value in the data set MaxValue = df['Births'].max() # Name associated with the maximum value MaxName = df['Names'][df['Births'] == df['Births'].max()].values # Text to display on graph Text = str(MaxValue) + " - " + MaxName # Add text to graph plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0), xycoords=('axes fraction', 'data'), textcoords='offset points') print("The most popular name") df[df['Births'] == df['Births'].max()] #Sorted.head(1) can also be used # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Linear Regression # ## กิจกรรมที่ 1 # จุดประสงค์: ให้ผู้เรียนทดลองคำนวณ Linear Regression ด้วยมือ # ให้ผู้เรียนเก็บข้อมูลที่เป็น Numerical data มา 2 อย่าง เช่น # 1. 
จำนวน Pokemon ต่อความหนาแน่นของเมือง (https://youtu.be/CtKeHnfK5uA) # 2. จำนวน Muggle ในประเทศไทย ต่อพื้นที่/ความหนาแน่นของเมือง # # ให้ลองคำนวณด้วยมือ หรือใช้ Excel ช่วยตามตัวอย่างข้างล่าง # https://github.com/reigngt09/MachineLearningFNE # ## ระบบตัวอย่าง # ใช้ Python แสดงผลกราฟด้วย Matplotlib และ Plotly (วิธีการติดตั้ง https://plot.ly/python/getting-started/) ข้อมูลตัวอย่างใช้ Diabetes dataset (https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) ซึ่งในการใช้งานจริง จะใช้ข้อมูลตัวอย่างที่นักเรียนน่าจะสนใจมากกว่านี้ import ipywidgets as widgets from ipywidgets import interact, interact_manual # + import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score # Load the diabetes dataset diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True) # Use only one feature (BMI) diabetes_X2D = diabetes_X[:, np.newaxis, 2] # Use only two feature (BMI and Average blood pressure) diabetes_X3D = diabetes_X[:,[2, 3]] # - # ## กิจกรรมที่ 2 # จุดประสงค์: ให้ผู้เรียนสามารถอธิบายกระบวนการ Tune Parameters ของเทคนิค Linear Regression # # ให้นักเรียนลองปรับค่า parameter ด้วยมือ เพื่อให้เห็นผลกระทบของ Parameter แต่ละตัว โดยเราจะใช้ data อื่น ที่น่าสนใจสำหรับเด็กกว่านี้ หรือให้นักเรียนสามารถอัพโหลดข้อมูลใส่ในกราฟของเราได้ # # ### ส่วนที่ 1 # Linear Regression ที่มี 1 Independent Variable จึงต้องปรับ parameters 2 ตัว # + # Split the data into training/testing sets diabetes_X_train = diabetes_X2D[:-20] diabetes_X_test = diabetes_X2D[-20:] # Split the targets into training/testing sets diabetes_y_train = diabetes_y[:-20] diabetes_y_test = diabetes_y[-20:] # - # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) @interact def show_articles_more_than(m=1000, c=100): # Make predictions using the testing set regr.coef_[0] = m regr.intercept_ = c diabetes_y_pred = regr.predict(diabetes_X_test) plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, diabetes_y_pred, color='red', linewidth=3) plt.xlim=(-0.1, 0.15) plt.ylim=(0, 350) plt.title('BMI vs disease progression') plt.xlabel('Body mass index') plt.ylabel('Disease Progression') plt.show() return m, c # ### ส่วนที่ 2 # Linear Regression ที่มี 2 Independent Variables จึงต้องปรับ parameters 3 ตัว # + # Split the data into training/testing sets diabetes_X_train = diabetes_X3D[:-20] diabetes_X_test = diabetes_X3D[-20:] # Split the targets into training/testing sets diabetes_y_train = diabetes_y[:-20] diabetes_y_test = diabetes_y[-20:] # - # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) x1max, x2max = np.around(diabetes_X_train.max(axis=0), 2) x1min, x2min = np.around(diabetes_X_train.min(axis=0), 2) # + [a1, a2] = regr.coef_ d = regr.intercept_ x1max, x2max = np.around(diabetes_X_train.max(axis=0), 2) x1min, x2min = np.around(diabetes_X_train.min(axis=0), 2) # create x,y xx, yy = np.meshgrid(np.linspace(x1min,x1max,10), np.linspace(x2min,x2max,10)) # calculate corresponding z zz = a1*xx + a2*yy + d # + import plotly.graph_objs as go fig = go.FigureWidget(data= [go.Scatter3d(x=diabetes_X_train[:,0], y=diabetes_X_train[:,1] , z=diabetes_y_train, mode='markers', opacity=0.9), go.Surface(x=xx, y=yy, z=zz, opacity=0.8)]) fig.update_traces(marker=dict(size=6, line=dict(width=1, color='DarkSlateGrey')), 
selector=dict(mode='markers')) fig.update_layout(title='BMI and average blood pressure vs disease progression', scene = dict( xaxis = dict(title='BMI'), yaxis = dict(title='Average Blood Pressure'), zaxis = dict(title='Disease Progression'),), autosize=False, width=800, height=800, margin=dict(l=65, r=50, b=65, t=90)) fig # - @interact def show_articles_more_than(a1=139, a2=912, d=152): # Make predictions using the testing set zz = a1*xx + a2*yy + d bar = fig.data[1] bar.z = zz return # ## กิจกรรมที่ 3 # จุดประสงค์: ให้ผู้เรียนสามารถอธิบายผลกระทบของลักษณะ Data ที่มีต่อการสร้าง Linear Regression Model # # #### **การสุ่มเลือกข้อมูล (Sampling Data)** # # การเลือกข้อมูล หรือ sampling มีผลต่อ model ที่ได้ (เลือกใช้ข้อมูลที่ผู้เรียนน่าจะสนใจมากกว่านี้) # # 1. ให้เก็บตัวอย่างข้อมูลจากทุกกลุ่มข้อมูล (population) เพื่อไม่ให้ Model มี Bias หรือรู้จำรูปแบบของข้อมูลแค่บางประเภท # + # Split the data into training/testing sets diabetes_X_train = diabetes_X2D[:-20] diabetes_X_test = diabetes_X2D[-20:] # Split the targets into training/testing sets diabetes_y_train = diabetes_y[:-20] diabetes_y_test = diabetes_y[-20:] # Create linear regression object regr = linear_model.LinearRegression() @interact def show_articles_more_than(groups=['A', 'B', 'C']): # Make predictions using the testing set # Train the model using the training sets if groups == 'A': idx = diabetes_X_train > -np.inf regr.fit(diabetes_X_train, diabetes_y_train) elif groups == 'B': idx = diabetes_X_train > 0.1 else: idx = diabetes_X_train < -0.05 regr.fit(diabetes_X_train[idx,np.newaxis], diabetes_y_train[idx.flatten()]) # Make predictions using the testing set diabetes_y_pred = regr.predict(diabetes_X_test) # The coefficients print('Coefficients: ', regr.coef_[0]) # The mean squared error print('Mean squared error: %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred)) # Plot outputs sc1 = plt.scatter(diabetes_X_train[idx,np.newaxis], diabetes_y_train[idx.flatten()], color='black') sc2 = plt.scatter(diabetes_X_train[~idx,np.newaxis], diabetes_y_train[~idx.flatten()], color='lightgray') sc3 = plt.scatter(diabetes_X_test, diabetes_y_test, color='blue') plt.plot(diabetes_X_test, diabetes_y_pred, color='red', linewidth=3) sc1.set_label('Training data') sc2.set_label('Unknown data') sc3.set_label('Test data') plt.title('BMI vs disease progression in sampling bias settings') plt.xlabel('Body mass index') plt.ylabel('Disease Progression') plt.legend() plt.show() # - # 2. 
ระวังชุดข้อมูลที่มีปริมาณข้อมูลบางประเภทมากกว่าประเภทอื่น ๆ เป็นจำนวนมาก (Imbalanced Data) # + idx1 = np.where(np.bitwise_and(diabetes_X_train < 0.1, diabetes_y_train.reshape((-1,1)) > 200))[0] idx2 = np.where(diabetes_X_train >= 0.1)[0] idx = np.concatenate((idx1[:20], idx2)) regr.fit(diabetes_X_train[idx], diabetes_y_train[idx.flatten()]) # Make predictions using the testing set diabetes_y_pred = regr.predict(diabetes_X_test) # The coefficients print('Coefficients: ', regr.coef_[0]) # The mean squared error print('Mean squared error: %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred)) # Plot outputs sc2 = plt.scatter(diabetes_X_train, diabetes_y_train, color='lightgray') sc1 = plt.scatter(diabetes_X_train[idx], diabetes_y_train[idx.flatten()], color='black') sc3 = plt.scatter(diabetes_X_test, diabetes_y_test, color='blue') plt.plot(diabetes_X_test, diabetes_y_pred, color='red', linewidth=3) sc1.set_label('Training data') sc2.set_label('Unknown data') sc3.set_label('Test data') plt.title('BMI vs disease progression in imbalanced data setting') plt.xlabel('Body mass index') plt.ylabel('Disease Progression') plt.legend() plt.show() # - # ## กิจกรรมที่ 4 # จุดประสงค์: ให้ผู้เรียนทดลองใช้เครื่องมือ (Tools and Libraries) ในการสร้าง Linear Regression Model # # ให้ทดลองใช้ Scikit-Learn (https://scikit-learn.org/stable/) ในการ Tune Parameters ของ Linear Regression โดย**อัตโนมัติ** # + # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) # Make predictions using the testing set diabetes_y_pred = regr.predict(diabetes_X_test) # The coefficients print('Coefficients: ', regr.coef_[0]) # The mean squared error print('Mean squared error: %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, diabetes_y_pred, color='red', linewidth=3) plt.xticks(()) plt.yticks(()) plt.title('BMI vs disease progression') plt.xlabel('Body mass index') plt.ylabel('Disease Progression') plt.show() # - # ## กิจกรรมที่ 5 # จุดประสงค์: ให้ผู้เรียนสามารถอธิบายข้อจำกัดของ Linear Regression Model ได้ # # 1. เหมาะกับข้อมูลที่สัมพันธ์กันในรูปแบบ Linear เท่านั้น (Linear Regression Is Limited to Linear Relationships) # # หาชุดข้อมูลที่แสดงให้เห็น limitation ถึงจุดนี้ แล้วให้นักเรียนลองหาค่า parameter ด้วยมือหรือ app ก็จะพบว่าได้คำตอบที่ไม่ค่อยแม่นยำ # # 2. Linear Regression มีการเปลี่ยนแปลงไวต่อค่าผิดปกติ หรือ Outliers (Sensitive to Outliers) # # หาชุดข้อมูลที่แสดงให้เห็น limitation ถึงจุดนี้ แล้วให้นักเรียนลองหาค่า parameter ด้วยมือหรือ app ก็จะพบว่าได้คำตอบที่ไม่ได้สนใจ Outliers หรือไม่เหมาะกับข้อมูลส่วนใหญ่ # # ## กิจกรรมที่ 6 # จุดประสงค์: ให้ผู้เรียนทดลองใช้เทคนิค Linear Regression Model ในการแก้ปัญหาในชีวิตจริง # # ให้นักเรียนตั้งโจทย์ เก็บข้อมูล แล้วทดลองใช้ Linear Regression ในการแก้ปัญหานั้น # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Pairplot: Visualizing High Dimensional Data # # This example provides how to visualize high dimensional data using the pairplot. 
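# As background (a sketch independent of `graspy`): a pair plot is simply the grid of pairwise scatter plots over all dimensions, which seaborn can draw for any tabular data; `graspy.plot.pairplot` builds on the same idea for embeddings with block labels.

# +
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(size=(100, 3)), columns=['dim 1', 'dim 2', 'dim 3'])
sns.pairplot(demo)
# -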
# + import graspy import numpy as np # %matplotlib inline # - # ## Simulate a binary graph using stochastic block model # The 3-block model is defined as below: # # \begin{align*} # n &= [50, 50, 50]\\ # P &= # \begin{bmatrix}0.5 & 0.1 & 0.05 \\ # 0.1 & 0.4 & 0.15 \\ # 0.05 & 0.15 & 0.3 # \end{bmatrix} # \end{align*} # # Thus, the first 50 vertices belong to block 1, the second 50 vertices belong to block 2, and the last 50 vertices belong to block 3. # + from graspy.simulations import sbm n_communities = [50, 50, 50] p = [[0.5, 0.1, 0.05], [0.1, 0.4, 0.15], [0.05, 0.15, 0.3],] np.random.seed(2) A = sbm(n_communities, p) # - # ## Embed using adjacency spectral embedding to obtain lower dimensional representation of the graph # # The embedding dimension is automatically chosen. It should embed to 3 dimensions. # + from graspy.embed import AdjacencySpectralEmbed ase = AdjacencySpectralEmbed() X = ase.fit_transform(A) print(X.shape) # - # ## Use pairplot to plot the embedded data # # First we generate labels that correspond to blocks. We pass the labels along with the data for pair plot. # + from graspy.plot import pairplot labels = ['Block 1'] * 50 + ['Block 2'] * 50 + ['Block 3'] * 50 plot = pairplot(X, labels) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import re import json import numpy as np from matplotlib.pyplot import figure, show import jellyfish # # Reading the data # # First we are going to read the data. # authors = pd.read_csv('~/data/authors_1019.csv', encoding = "ISO-8859-1") investigators = pd.read_csv('~/data/investigators_837.csv', encoding = "ISO-8859-1") # # First Look # In this sections I will just take a first look at the data, check if anything is weird or not normal. authors.head() investigators.head() # The cities and country seems to be in JSON format. Just incase we will use them later I would like to do some processing of them. Also I take notice of list format of Topic and co-authors but this can stay as is. authors['cities'] = authors['cities'].apply(lambda x : json.loads(x)) authors['countries'] = authors['countries'].apply(lambda x : json.loads(x)) authors = pd.concat([authors, authors['cities'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('cities', axis = 1).rename(columns={"identifier": "city_id", "name": "city_name"}) authors = pd.concat([authors, authors['countries'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('countries', axis = 1).rename(columns={"identifier": "country_id", "name": "country_name"}) investigators['cities'] = investigators['cities'].apply(lambda x : json.loads(x)) investigators['countries'] = investigators['countries'].apply(lambda x : json.loads(x)) investigators = pd.concat([investigators, investigators['cities'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('cities', axis = 1).rename(columns={"identifier": "city_id", "name": "city_name"}) investigators = pd.concat([investigators, investigators['countries'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('countries', axis = 1).rename(columns={"identifier": "country_id", "name": "country_name"}) # Lets take a look at the data after we have processed it just to keep it in mind. authors investigators # # Divide and conquer # I have philosophy when it comes to problems in data science. KISS (Keep It Simple Stupid). 
With a cartisean product of the subset of the data that we have ground truth for, it would be possible to engineer some features from the strings and build a predictive model. However I think there are better ways to get the ball rolling. We will try a divide and conquer techique. We will try to develop unique IDs using information in the data. # # 1. Join data that has ORCIDs # 2. Remove data with ORCIDs from the data # 3. Find unique last names and join those from the two datasets # 4. Concatenate orcid join and unique last name join # 5. Filter out already joined data from the data # 6. Find unique combination of the first letter from first name and last name # 7. Join on this combination and concatenate with previous data. # 8. Filter out already joined data from the data # 9. Combine fist, middle, and last name into 1 column for remaining data # 10. Do fuzzy join # 11. Combine with previously joined data investigators_with_orcid = investigators[investigators['orcid'].notnull()].rename(columns={'id':'id_authors'})[['id_authors', 'orcid']] authors_with_orcid = authors[authors['orcid'].notnull()].rename(columns={'id':'id_investigators'})[['id_investigators', 'orcid']] joined_on_orcid = pd.merge(investigators_with_orcid, authors_with_orcid, on='orcid')[['id_authors', 'id_investigators']] # + # finding authors that do not have orcid authors_without_orcid = authors[authors['orcid'].isnull()] investigators_without_orcid = investigators[investigators['orcid'].isnull()] # finding authors with unique last names authors_unique_last_name = authors_without_orcid.drop_duplicates(subset=['lastname']) investigators_unique_last_name = investigators_without_orcid.drop_duplicates(subset=['lastname']) # - #unique last names count for authors (authors_unique_last_name.lastname.value_counts().apply(pd.Series)[0] == 1).value_counts() #unique last names count for investigators (investigators_unique_last_name.lastname.value_counts().apply(pd.Series)[0] == 1).value_counts() joined_on_unique_names = pd.merge(authors_unique_last_name, investigators_unique_last_name, on='lastname', suffixes=('_authors', '_investigators'))[['id_authors', 'id_investigators']] #198 more joined! 
It means however there is not perfect overlap with the data joined_on_unique_names.nunique() concat_orcid_unique_names = pd.concat([joined_on_unique_names, joined_on_orcid]) # Removing authors that have already been sucessfully joined authors_left = authors[~authors['id'].isin(concat_orcid_unique_names['id_authors'])] investigators_left = investigators[~investigators['id'].isin(concat_orcid_unique_names['id_investigators'])] # Creating first letter last name combo authors_left['first_letter_last_name'] = (authors_left.first_name.str[0] + '_' + authors_left.lastname).astype(str) investigators_left['first_letter_last_name'] = (investigators_left.first_name.str[0] + '_' + investigators_left.lastname).astype(str) authors_left_unique_combo = authors_left.drop_duplicates(subset=['first_letter_last_name']) investigators_left_unique_combo = investigators_left.drop_duplicates(subset=['first_letter_last_name']) joined_on_unique_names_combo = pd.merge(authors_left_unique_combo, investigators_left_unique_combo, on='first_letter_last_name', suffixes=('_authors', '_investigators'))[['id_authors', 'id_investigators']] joined_on_unique_names_combo.shape concat_all_before_fuzzy = pd.concat([concat_orcid_unique_names,joined_on_unique_names_combo]) # Removing authors that have already been sucessfully joined authors_for_fuzzy_join = authors[~authors['id'].isin(concat_all_before_fuzzy['id_authors'])] investigators_for_fuzzy_join = investigators[~investigators['id'].isin(concat_all_before_fuzzy['id_investigators'])] authors_for_fuzzy_join.shape # Replace middle name with neatural character authors_for_fuzzy_join['middle_name'] = authors_for_fuzzy_join.middle_name.fillna('_') investigators_for_fuzzy_join['middle_name'] = investigators_for_fuzzy_join.middle_name.fillna('_') # Creating fuzzy join ID authors_for_fuzzy_join['fuzzy_id'] = (authors_for_fuzzy_join.first_name + '_' + authors_for_fuzzy_join.middle_name + "_" + authors_for_fuzzy_join.lastname).astype(str) investigators_for_fuzzy_join['fuzzy_id'] = (investigators_for_fuzzy_join.first_name + '_' + investigators_for_fuzzy_join.middle_name + "_" + investigators_for_fuzzy_join.lastname).astype(str) # Renaming and selecting columns authors_for_fuzzy_join = authors_for_fuzzy_join[['fuzzy_id', 'id']].rename(columns={'id':'id_authors'}) investigators_for_fuzzy_join = investigators_for_fuzzy_join[['fuzzy_id', 'id']].rename(columns={'id':'id_investigators'}) # get closest match algorithm based on jaro_winkler def get_closest_match(x, list_strings): best_match = None highest_jw = 0 for current_string in list_strings: current_score = jellyfish.jaro_winkler(x, current_string) if(current_score > highest_jw): highest_jw = current_score best_match = current_string return best_match # # Evaluation investigators_with_orcid = investigators[investigators['orcid'].notnull()].rename(columns={'id':'id_authors'}) authors_with_orcid = authors[authors['orcid'].notnull()].rename(columns={'id':'id_investigators'}) authors_with_orcid['fuzzy_id'] = (authors_with_orcid.first_name + '_' + authors_with_orcid.middle_name + "_" + authors_with_orcid.lastname).astype(str) investigators_with_orcid['fuzzy_id'] = (investigators_with_orcid.first_name + '_' + investigators_with_orcid.middle_name + "_" + investigators_with_orcid.lastname).astype(str) authors_with_orcid.fuzzy_id = authors_with_orcid.fuzzy_id.map(lambda x: get_closest_match(x, investigators_with_orcid.fuzzy_id)) evaluate_fuzz = pd.merge(authors_with_orcid, investigators_with_orcid, how='left', on='fuzzy_id', suffixes=('_authors', 
'_investigators')) (evaluate_fuzz['orcid_authors'] == evaluate_fuzz['orcid_investigators']).sum() authors_with_orcid.shape #correct matches using fuzzy join 133/206 investigators_for_fuzzy_join.fuzzy_id = investigators_for_fuzzy_join.fuzzy_id.map(lambda x: get_closest_match(x, authors_for_fuzzy_join.fuzzy_id)) fuzzy = pd.merge(authors_for_fuzzy_join, investigators_for_fuzzy_join, on='fuzzy_id') final_fuzz = fuzzy[['id_authors', 'id_investigators']] concat_all = pd.concat([final_fuzz, concat_all_before_fuzzy]) concat_all.shape # # Final Remarks # Given the time constains I could not try everything that I would have liked. I would still investigate further the approached outside of ML further.I would test different distance calculations, as well as different variable combinations to make a fuzzy id out of. # # From the machine learning perspective I would have done a cartisean product of ground truth data, feature engineer label, plus feature engineer multiple features related to distance of strings. I would test out different string distance metrics. I would start with a base_line logistics regression, then move to more complex models from there. If I engineer to many distance features I may try to do a PCA. # # From a code perspecive, way to much duplicated code, everything needs to be put into functions. As well as proper unit testing for little pieces of the code. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Primer: The Basics # # This is a notebook file. It is an interactive document that you can edit on the fly, and you can use it to write and execute programs. You can learn more about the interface by reading the [online documentation](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Notebook%20Basics.html), although it should be pretty simple to understand what is going on without any documentation. # # You can also click "Help : User Interface tour" from the menu, which will explain the basics of the notebook. # ## Our first Python program # # Traditionally, every time that we start learning a new language, we start with a program that prints "Hello World" in the output. print("Hello World") # To run the "Hello World" program above, just select the code cell, and then click the "Run" button from the toolbar. Alternatively, you can also press Ctrl + Enter, if you prefer to use keyboard shortcuts. You can discover more keyboard shortcuts by selecting "Help : Keyboard Shortcuts" from the menu. # ### Exercise # # Print your own message: # + # your code here # - # ## Comments # # You will have noticed in the cell above the line # `# your code here` # # This is a _comment in the code_ # # Comments are notes in your source code that aren't exectued when your code is run. These are useful for reminding yourself what your code does, and for notifying others to your intentions. # + # A comment # # We use comments to write down things that we want to rememember # or instructions for other users that will read/use our code # Anything after the # is ignored by python. 
print("I could have written code like this.") # the comment after is ignored # + # You can also use a comment to "disable" or comment out a piece of code: # print("This won't run.") print("This will run.") # - # ### Exercise # # Fix the code below, so that it runs: # + # print("Hi Panos!'' # - # ## Multiline comments # # Python has single line and multiline comments. These multiline comments are also called _docstrings_ because they are often used to write documentation for the code. # + # This is a single line comment # This a second single line comment print("trying out some comments") """This code is used to print a message of your choice. Notice that this message that is included in the triple double quotes will not execute.""" print("python ftw") # - # ## Notebooks: Markdown for text # # In notebooks, you can simply double click on a piece of text and then edit it. To restore it back, from edit mode, press "Run". Markdown is a very simple language for formatting text, and you can read further instructions by going to "Help : Markdown" from the menu, or checking the [online examples](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html) # # Below, we will see a few examples. # # Big Header # ## Smaller Header # ### A little smaller header # #### Getting smaller and smaller # ##### Very very small header # ###### I do not even know if this is a header anymore # You can write plain text, **bold text**, and _italics_. # You can also ~~strike~~ things that were incorrect. # # If you want to write code, you can use the backtick characters: `print("Hello World")` # # Or you can use triple backticks, and can get color coding by specifying the language that you use. # ```python # print("Hello World") # ``` # # ```html # # Hello World! # # ``` # You can also create bulleted lists: # * Learn Python # * ... # * Millions! # Or perhaps you want to write ordered lists: # 1. Learn Python # 2. Learn SQL # 3. .... # 4. Millions! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests URL = 'https://econpy.pythonanywhere.com/ex/001.html' response = requests.get(URL) if response.status.code == 200: content = response.text # str with open ('econpy.html', w+) as file: file.write(content) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp core # - # # core # # > functions to download data. 
#hide from nbdev.showdoc import * #export import numpy as np import pandas as pd from fastcore.all import * from babino2020masks.lasso import * #export def get_bsas_data(): df = pd.read_csv('https://cdn.buenosaires.gob.ar/datosabiertos/datasets/salud/reporte-covid/dataset_reporte_covid_sitio_gobierno.csv') df['Date'] = pd.to_datetime(df.FECHA.map(lambda x:x[:2]+x[2:].capitalize()), format='%d%b%Y:00:00:00') df = df.sort_values('Date') df = df.loc[df.SUBTIPO_DATO.isin(['personas_hisopadas_reportados_del_dia_totales', '%_positividad_personas_hisopadas_reportadas_del_dia_totales']), ['Date', 'SUBTIPO_DATO', 'VALOR']] df = df.set_index(['Date','SUBTIPO_DATO'])['VALOR'].unstack() df = df.rename(columns={'personas_hisopadas_reportados_del_dia_totales': 'Tests', '%_positividad_personas_hisopadas_reportadas_del_dia_totales':'p'}) df.columns.name = '' df.p = df.p*0.01 df['Positives'] = df.p*df.Tests df['Negatives'] = df.Tests-df.Positives df['Odds'] = df.Positives / df.Negatives df = df.reset_index() df['date'] = df.Date df = df.set_index('date') return df #export def get_bsas_data2(): url = 'https://cdn.buenosaires.gob.ar/datosabiertos/datasets/salud/reporte-covid/dataset_reporte_covid_sitio_gobierno.xlsx' df = pd.read_excel(url) df = df.rename(columns={'FECHA': 'Date'}).sort_values('Date') positives = df.loc[df.SUBTIPO_DATO.eq('casos_confirmados_reportados_del_dia'), ['Date', 'VALOR']] positives = positives.groupby('Date')['VALOR'].sum().reset_index().rename(columns={'VALOR': 'Positives'}) tests = df.loc[df.SUBTIPO_DATO.eq('testeos_reportados_del_dia_totales'), ['Date', 'VALOR']] tests = tests.rename(columns={'VALOR': 'Tests'}) df = tests.merge(positives, on='Date') df['Negatives'] = df.Tests - df.Positives df['Odds'] = df.Positives / df.Negatives df['p'] = df.Positives / df.Tests df['date'] = df.Date df = df.set_index('date') return df #export @patch def fit(self:LassoICSelector, X, y, mask=None): Xnew = self.transform_to_ols(X) if mask is None: mask = np.array([True]*Xnew.shape[1]) Xnew = Xnew[:, mask] self.ols = self.OLS(y, Xnew) self.ols_results = self.ols.fit() mask[mask] = (self.ols_results.pvalues < self.alpha / len(self.ols_results.pvalues)) mask[0] = True mask[-1] = True if any(self.ols_results.pvalues[1:-1] >= self.alpha / len(self.ols_results.pvalues)): self.fit(X, y, mask=mask) self.support = self.selector.get_support() self.support[self.support] = mask[:-1] from nbdev.export import notebook2script; notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Summer Olympics Data Analysis Project # # Guided by EliteTechnoGroup # ### 1. In how many cities Summer Olympics is held so far? import pandas as pd df = pd.read_csv('summer.csv') # the cities having summer olympics held # is 22 df['City'].unique() len(df['City'].unique()) # + # the countries where summer olympics are held # is 148 # - df['Country'].unique() len(df['Country'].unique()) data = df.values df.head() len(df['Country'].unique()) # ### 2. Which sport is having most number of Gold Medals so far? (Top 5) len(df['Sport']) data = [] for Sport in df['Sport'].unique(): data.append([ Sport , len(df[df['Sport'] == Sport])]) gold = pd.DataFrame(data , columns = ['Sport','Medal']).sort_values(by = 'Medal' , ascending = False) gold gold.head().plot( x = 'Sport' , y = 'Medal' ,kind = 'bar') # ### 3. Which sport is having most number of medals so far? 
(Top 5) data = [] for Sport in df['Sport'].unique(): data.append([ Sport , len(df[df['Sport'] == Sport])]) sport = pd.DataFrame(data , columns = ['Sport','Medal']).sort_values(by = 'Medal' , ascending = False).head() sport sport.head().plot( x = 'Sport' , y = 'Medal' ,kind = 'bar',figsize = (5,5)) df.head() # ### 4. Which player has won most number of medals? (Top 5) df['Athlete'].unique() player = df.groupby('Athlete')['Medal'].count().sort_values(ascending=False) player.head().plot( x = 'Athlete' , y = 'Medal' ,kind = 'bar') # ### 5. Which player has won most number Gold Medals of medals? (Top 5) gold = df[df['Medal']=='Gold'].groupby('Athlete').count().sort_values(['Medal'],ascending=False) gold['Medal'].head().plot( x = 'Athlete' , y = 'Medal' ,kind = 'bar' ) # ### 6. In which year India won first Gold Medal in Summer Olympics? india = df[df['Country']=='IND'] INDgold = india[india['Medal']=='Gold'].sort_values(['Year'] , ascending=True) print(INDgold['Year'].head(1)) # ### 7. Which event is most popular in terms on number of players? (Top 5) data = [] for Event in df['Event'].unique(): data.append([ Event , len(df[df['Event'] == Event])]) event = pd.DataFrame(data , columns = ['Event','Freq']).sort_values(by = 'Freq' , ascending = False) event event.head().plot( x = 'Event' , y = 'Freq' ,kind = 'bar' ) # ### 8. Which sport is having most female Gold Medalists? (Top 5) # + gold = df[df['Medal']=='Gold'] femalegold = gold[gold['Gender']=='Women'].groupby('Athlete').count().sort_values(['Medal'],ascending=False) # - femalegold['Medal'].head().plot( x = 'Sport' , y = 'Medal' ,kind = 'bar' ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Boston house price prediction using Multiple Linear Regression # # In this notebook, I'll be using the Boston house-prices dataset. # The dataset consists of 13 features of 506 houses and the median home value in $1000's. # I'll fit a model on the 13 features to predict the value of the houses. 
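# Before reproducing the fit below, here is a minimal hold-out evaluation sketch. It is an illustrative addition, not part of the original notebook, and it assumes an older scikit-learn release in which `load_boston` is still available (it was removed in scikit-learn 1.2).
# +
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
    boston['data'], boston['target'], test_size=0.2, random_state=0)

lr = LinearRegression().fit(X_train, y_train)
print('Held-out R^2: %.2f' % r2_score(y_test, lr.predict(X_test)))
# -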
# Import libraries and data set from sklearn.linear_model import LinearRegression from sklearn.datasets import load_boston # Loading data from bosyon house price dataset boston_data = load_boston() x = boston_data['data'] y = boston_data['target'] # Creating a regression model using scikit-learn's LinearRegression model = LinearRegression() model.fit(x,y) # + # Make a prediction using the model sample_house = [[2.29690000e-01, 0.00000000e+00, 1.05900000e+01, 0.00000000e+00, 4.89000000e-01, 6.32600000e+00, 5.25000000e+01, 4.35490000e+00, 4.00000000e+00, 2.77000000e+02, 1.86000000e+01, 3.94870000e+02, 1.09700000e+01]] # Predicting housing price for the sample_house prediction = model.predict(sample_house) print(prediction) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import bio2bel_reactome import itertools as itt import pandas as pd manager = bio2bel_reactome.Manager() top_level_pathways = manager.get_all_top_hierarchy_pathways() top_level = [ pathway for pathway in top_level_pathways if 'HSA' in pathway.resource_id ] def traverse(parent_pathway): if not parent_pathway.children: return [] result = [ {'Source Resource': 'reactome', 'Source ID': pathway.resource_id, 'Source Name': pathway.name, 'Mapping Type': 'isPartOf', 'Target Resource': 'reactome', 'Target ID': parent_pathway.resource_id, 'Target Name': parent_pathway.name, } for pathway in parent_pathway.children ] for pathway in parent_pathway.children: result.extend( traverse(pathway) ) return result reactome_hierarchy = list(itt.chain.from_iterable([ traverse(pathway) for pathway in top_level ])) len(reactome_hierarchy) df = pd.DataFrame(reactome_hierarchy) df.to_csv('reactome_hierarchy.tsv', index=False, sep='\t') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # + from collections import Counter import pickle as pkl import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader, SequentialSampler, RandomSampler, Dataset, SubsetRandomSampler from sklearn.model_selection import train_test_split from tqdm.notebook import tqdm from torchtext.vocab import Vocab from chord_rec.models.seq2seq.Seq2Seq import BaseSeq2Seq from chord_rec.models.seq2seq.Encoder import BaseEncoder from chord_rec.models.seq2seq.Decoder import BaseDecoder # - device = torch.device('cuda') if torch.cuda.is_available else torch.device("cpu") # device = torch.device("cpu") class Vec21Dataset(Dataset): def __init__(self, note_vec_seq, chord_seq, vocab): 'Initialization' self.note_vec_seq = note_vec_seq self.chord_seq = chord_seq self.vocab = vocab def __len__(self): 'Get the total length of the dataset' return len(self.note_vec_seq) def __getitem__(self, index): 'Generates one sample of data' # Select sample return self.note_vec_seq[index], self.vec_encode(self.chord_seq[index]) def encode(self, x): return self.vocab.stoi[x] def vec_encode(self, x): return np.vectorize(self.encode)(x) def decode(self, x): return self.vocab.itos[x] def vec_decode(self, x): return np.vectorize(self.decode)(x) # + data = pkl.load(open('bach_reduced_seq2seq_4mm_new.pkl',"rb")) note_seq, chord_seq = [],[] max_seq_len = 0 data_num = 0 for file in data: 
data_num += len(file) for window in file: note_seq.append(window[0]) chord_seq.append(window[1]) max_seq_len = max(max_seq_len, len(window[1])) # + note_padding_vec = np.full(len(note_seq[0][0]), -1).reshape(1,-1) # should be 45; not sure if -1 is good note_ending_vec = np.ones(len(note_seq[0][0])).reshape(1,-1) # should be 45 note_starting_vec = np.zeros(len(note_seq[0][0])).reshape(1,-1) # should be 45 chord_start = "" chord_padding = "" chord_end = "" padded_note_seq = [] padded_chord_seq = [] for i in range(len(note_seq)): len_diff = max_seq_len - len(note_seq[i]) temp_note_vec = np.vstack((note_starting_vec, np.array(note_seq[i]), note_ending_vec, np.repeat(note_padding_vec, len_diff , axis = 0))) padded_note_seq.append(temp_note_vec) temp_chord_vec = np.hstack((chord_start, np.array(chord_seq[i]), chord_end, np.repeat(chord_padding, len_diff , axis = 0))) padded_chord_seq.append(temp_chord_vec) # - stacked_note_seq = np.stack(padded_note_seq, axis = 0) stacked_chord_seq = np.vstack(padded_chord_seq) SEED = 0 VAL_SIZE = 0.2 TEST_SIZE = 0.2 note_vec = np.asarray(stacked_note_seq, dtype = np.float32) chord_vocab = Vocab(Counter(list(stacked_chord_seq.flatten()))) vec_size = note_vec.shape[-1] vocab_size = len(chord_vocab.stoi) # + note_train, note_test, chord_train, chord_test \ = train_test_split(note_vec, stacked_chord_seq, test_size=TEST_SIZE, random_state=SEED) note_train, note_val, chord_train, chord_val \ = train_test_split(note_vec, stacked_chord_seq, test_size= VAL_SIZE/ (1-TEST_SIZE), random_state=SEED) # - train_dataset = Vec21Dataset(note_train, chord_train, chord_vocab) val_dataset = Vec21Dataset(note_val, chord_val, chord_vocab) test_dataset = Vec21Dataset(note_test, chord_test, chord_vocab) batch_size = 32 train_loader = DataLoader(train_dataset, batch_size = batch_size, drop_last = True) val_loader = DataLoader(val_dataset, batch_size = batch_size, drop_last = True) test_loader = DataLoader(test_dataset, batch_size = batch_size, drop_last = True) # + input_size = vec_size emb_size = vec_size encoder_hidden_size = 128 decoder_hidden_size = 128 encoder_dropout = 0.5 decoder_dropout = 0.5 n_layers = 2 output_size = vocab_size model_type = "LSTM" model = nn.Transformer() # - vocab_size criterion = nn.CrossEntropyLoss(ignore_index = chord_vocab.stoi[""]) optimizer = optim.AdamW(model.parameters(), lr = 1e-3) # + # model.state_dict = torch.load("sears_reduce_20warm80epoch_s2s.sdict") # - train_pgbar = tqdm(train_loader) for idx, (note, chord) in enumerate(train_pgbar): chord = chord.long().to(device) target = chord[:,:-1].permute(1,0) gt = chord[:, 1:].permute(1,0) note = note.permute(1,0,2) # print(note.shape) # print(target.shape) output = model(note, target) break # + epochs = 100 train_losses = [] val_losses = [] clip = 1 for epoch in range(1,epochs+1): total_loss = 0 model.train() train_pgbar = tqdm(train_loader) for idx, (note, chord) in enumerate(train_pgbar): chord = chord.long().to(device) target = chord[:,:-1] gt = chord[:, 1:] pred = model() pred = pred.permute(0,2,1) loss = criterion(pred, chord.long().to(device)) optimizer.zero_grad() if loss.item() != 0: loss.backward() # torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() # s = "Epoch: {} Train Loss: {:.5f}".format(epoch, loss.item()) # tqdm.write(s) train_pgbar.set_postfix({'Epoch': epoch, 'Train Loss': "{:.5f}".format(loss.item())}) total_loss += loss.item() print("Total Training Loss: %.2f" % total_loss) train_losses.append(total_loss/len(train_loader)) total_loss = 0 model.eval() 
val_pgbar = tqdm(val_loader) for idx, (note, chord) in enumerate(val_pgbar): pred = model(note.to(device), chord.long().to(device), teacher_forcing = False, start_idx = chord_vocab.stoi[""]) loss = criterion(pred.permute(0,2,1), chord.long().to(device)) # s = "Epoch: {} Val Loss: {:.5f}".format(epoch, loss.item()) # tqdm.write(s) val_pgbar.set_postfix({'Epoch': epoch, 'Val Loss': "{:.5f}".format(loss.item())}) total_loss += loss.item() print("Total Validation Loss: %.2f" % total_loss) val_losses.append(total_loss/len(val_loader)) # - import matplotlib.pyplot as plt # + fig = plt.figure() ax = fig.add_subplot(111) plt.plot(train_losses, color = "b", label = "train") plt.plot(val_losses, color = "r", label = "val") plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) ax.set_xlabel('epochs') ax.set_ylabel('cross entropy loss') xint = range(len(train_losses)) plt.xticks(xint) plt.show() # - model.eval() all_pred2 = [] all_label2 = [] for idx, (note, chord) in enumerate(tqdm(val_loader)): pred = model(note.to(device), chord.long().to(device), teacher_forcing = False, start_idx = chord_vocab.stoi[""]) pred = pred.detach().cpu().numpy().argmax(axis = -1) label = chord.detach().cpu().numpy() pred[:,0] = np.full(len(pred), chord_vocab.stoi[""]) all_pred2.append(val_dataset.vec_decode(pred)) all_label2.append(val_dataset.vec_decode(label)) all_pred2 = np.vstack(all_pred2) all_label2 = np.vstack(all_label2) all_pred2[0] all_label2[0] decoded_preds = all_pred2 decoded_chords = all_label2 decoded_preds # + mask = (decoded_preds != "") & (decoded_preds != "") & (decoded_preds != "") masked_preds = decoded_preds[mask] masked_chords = decoded_chords[mask] print(np.sum(masked_preds == masked_chords) / len(masked_chords)) # - print(masked_chords) masked_preds == masked_chords # + # SEPERATE EVALUATION OF ROOT AND QUALITY AFTER DECODING # seperate all pred root_preds = decoded_preds.copy() quality_preds = decoded_preds.copy() for r_id in range(decoded_preds.shape[0]): for c_id in range(decoded_preds.shape[1]): sp = decoded_preds[r_id, c_id].split(' ') root_preds[r_id, c_id] = sp[0] quality_preds[r_id, c_id] = ' '.join(sp[1:]) root_labels = decoded_chords.copy() quality_labels = decoded_chords.copy() for r_id in range(decoded_chords.shape[0]): for c_id in range(decoded_chords.shape[1]): sp = decoded_chords[r_id, c_id].split(' ') root_labels[r_id, c_id] = sp[0] quality_labels[r_id, c_id] = ' '.join(sp[1:]) # # seperate all lable # root_labels = [] # quality_labels = [] # for c in decoded_chords: # sp = c.split(' ') # root_labels.append(sp[0]) # quality_labels.append(' '.join(sp[1:])) # root_labels = np.asarray(root_labels) # quality_labels = np.asarray(quality_labels) # - mask = (root_preds != "") & (root_preds != "") & (root_preds != "") root_preds = root_preds[mask] quality_preds = quality_preds[mask] root_label = root_labels[mask] quality_labels = quality_labels[mask] np.sum(root_preds == root_label) / len(root_preds) np.sum(quality_preds == quality_labels) / len(quality_preds) torch.save(model.state_dict(), "bach_reduced_10warm80epoch50post_s2s.sdict") # EVALUATE root AFTER DECODING sum(1 for x,y in zip(root_preds, root_labels) if x == y) / len(root_labels) # EVALUATE quality AFTER DECODING sum(1 for x,y in zip(quality_preds, quality_labels) if x == y) / len(quality_labels) co = Counter(chords) most_common_chord = list(list(zip(*co.most_common(20)))[0]) for i in range(len(decoded_preds)): if decoded_preds[i] not in most_common_chord: decoded_preds[i] = "others" if decoded_chords[i] not 
in most_common_chord: decoded_chords[i] = "others" from sklearn.metrics import confusion_matrix, plot_confusion_matrix import seaborn as sn cm = confusion_matrix(decoded_chords, decoded_preds, normalize = "true", labels = most_common_chord + ["others"]) # + fig, ax = plt.subplots(figsize=(13,10)) sn.heatmap(cm, annot=False) ax.set_xticklabels(most_common_chord + ["others"]) ax.set_yticklabels(most_common_chord + ["others"]) plt.yticks(rotation=0) plt.xticks(rotation="vertical") plt.show() fig.savefig("confusion_bachhaydn_baseline.pdf", format = "pdf") # - torch.save(model.state_dict(), "baseline_bach_and_haydn.pt") symbol, num = list(zip(*co.most_common(50))) symbol = list(symbol) num = list(num) symbol += ['others'] num += [np.sum(list(co.values())) - np.sum(num)] num/=np.sum(num) # + plt.subplots(figsize=(13,10)) x_pos = [i for i, _ in enumerate(symbol)] plt.bar(x_pos, num) plt.xlabel("Chord Symbol") plt.ylabel("Occurance") plt.title("bach chorales Chord Distribution") plt.xticks(x_pos, symbol, rotation = "vertical") plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="J6rnhr2Xs5Vs" #

# # Quora Question Pairs

# + [markdown] id="o9fciGc7s5Vu" #

# ## 1. Business Problem

# + [markdown] id="LRzmxjKxs5Vw" #

# ### 1.1 Description

# + [markdown] id="1nlaIYe9s5Vx" #

# Quora is a place to gain and share knowledge—about anything. It’s a platform to ask questions and connect with people who contribute unique insights and quality answers. This empowers people to learn from each other and to better understand the world.

#

# Over 100 million people visit Quora every month, so it's no surprise that many people ask similarly worded questions. Multiple questions with the same intent can cause seekers to spend more time finding the best answer to their question, and make writers feel they need to answer multiple versions of the same question. Quora values canonical questions because they provide a better experience to active seekers and writers, and offer more value to both of these groups in the long term. #

#
# > Credits: Kaggle # # + [markdown] id="wdWP5SdFs5Vy" # __ Problem Statement __ # - Identify which questions asked on Quora are duplicates of questions that have already been asked. # - This could be useful to instantly provide answers to questions that have already been answered. # - We are tasked with predicting whether a pair of questions are duplicates or not. # + [markdown] id="34hYn911s5V0" #

# ### 1.2 Sources/Useful Links

# + [markdown] id="Hv6fd7txs5V7" # 1. The cost of a mis-classification can be very high. # 2. You would want a probability of a pair of questions to be duplicates so that you can choose any threshold of choice. # 3. No strict latency concerns. # 4. Interpretability is partially important. # + [markdown] id="cW_MVIlps5WQ" #

# ## 3. Exploratory Data Analysis

# + id="sNzZdmBJs5WS" colab={"base_uri": "https://localhost:8080/", "height": 17} executionInfo={"status": "ok", "timestamp": 1640228227455, "user_tz": -330, "elapsed": 4014, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="ca5220dd-b086-44db-c004-b7f8ee00bfa1" import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from subprocess import check_output # %matplotlib inline import plotly.offline as py py.init_notebook_mode(connected=True) import plotly.graph_objs as go import plotly.tools as tls import os import gc import re from nltk.corpus import stopwords #import distance from nltk.stem import PorterStemmer from bs4 import BeautifulSoup # + [markdown] id="Ffhe1d2m2ckT" # DataSet Download # + [markdown] id="TlSZs3h8JSvC" # https://www.kaggle.com/c/quora-question-pairs/data?select=train.csv.zip # + id="F1br-oewc90o" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228240809, "user_tz": -330, "elapsed": 2678, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="bd5ce309-e118-4163-9e3e-ab3ed80edbd0" # !wget --header="Host: storage.googleapis.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --header="Referer: https://www.kaggle.com/" "https://storage.googleapis.com/kagglesdsdata/competitions/6277/323734/train.csv.zip?GoogleAccessId=&Expires=1640402232&Signature=HKWlWk36tV2Rr7kcqNp9q9ajO4X8F6%2FDdwydqWmpqnrUoWIBkolkoUjXAAZEw2l3ngTweM9h%2BCDyMR5Ec%2FWEq%2BVG1EjTbHEhMOisIgph9%2F0NlCpv9%2B8f6qIaYERtII4JAY7Lpw0zFjyLIsPPuSDBRUSN4v413Z11EPQELNkCQEn6AiPwX2enPx44i%2FjW0iWrTVnixpk9V4PC4nfXU0%2BbVrm3D0%2BS%2Bf1%2B%2FpNu7eDDXRCUEI7UVlmTnv9%2FTHfZdUJumKvCoAEe%2Bud7MjPyZlbvnZv7ZJuEIOYGf5WFFNyOO0YwedahnJgAXRjY%2FA1wJaQItvJNXov0SoGv0V6TkemcIw%3D%3D&response-content-disposition=attachment%3B+filename%3Dtrain.csv.zip" -c -O 'train.csv.zip' # !unzip train.csv # + [markdown] id="__T8jddGs5Wc" #

# ### 3.1 Reading data and basic stats

# + id="ifM_s9rvs5Wd" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228243186, "user_tz": -330, "elapsed": 2391, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="793c149d-9165-42f8-bfa1-3d88c0ecaf10" df = pd.read_csv("/content/train.csv") print("Number of data points:",df.shape[0]) # + id="34zXGW8gs5Wj" colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"status": "ok", "timestamp": 1640228243192, "user_tz": -330, "elapsed": 26, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="b838ecbc-f657-4eeb-b5d5-9aa1c38f6a3f" df.head() # + colab={"base_uri": "https://localhost:8080/"} id="5imGFPLZnAvh" executionInfo={"status": "ok", "timestamp": 1640228480870, "user_tz": -330, "elapsed": 92150, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="a4fe2240-5d97-48e3-b1e9-45f014a89f5b" from google.colab import drive drive.mount('/content/drive') # + id="mx4DFwMns5Wp" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228480874, "user_tz": -330, "elapsed": 53, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="7c5ee626-120e-4c5a-cc18-ed29fc3a41d5" df.info() # + [markdown] id="HHHTGTzws5Ww" # We are given a minimal number of data fields here, consisting of: # # - id: Looks like a simple rowID # - qid{1, 2}: The unique ID of each question in the pair # - question{1, 2}: The actual textual contents of the questions. # - is_duplicate: The label that we are trying to predict - whether the two questions are duplicates of each other. # + [markdown] id="ZulqVzTDs5Wx" #

# ### 3.2.1 Distribution of data points among output classes

# - Number of duplicate(smilar) and non-duplicate(non similar) questions # + id="YHp64yNjs5Wx" colab={"base_uri": "https://localhost:8080/", "height": 294} executionInfo={"status": "ok", "timestamp": 1640228480876, "user_tz": -330, "elapsed": 45, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="4495e9bd-555a-4067-d3a1-9990f97cd383" df.groupby("is_duplicate")['id'].count().plot.bar() # + id="-usI2K2bs5W4" outputId="756b4734-4498-4f43-d7c0-6e049a7d630a" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228486684, "user_tz": -330, "elapsed": 359, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} print('~> Total number of question pairs for training:\n {}'.format(len(df))) # + id="YiPia6Pjs5W_" outputId="5c2d7dac-5f0e-4c45-daaa-10c6cb4dbb50" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228488664, "user_tz": -330, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} print('~> Question pairs are not Similar (is_duplicate = 0):\n {}%'.format(100 - round(df['is_duplicate'].mean()*100, 2))) print('\n~> Question pairs are Similar (is_duplicate = 1):\n {}%'.format(round(df['is_duplicate'].mean()*100, 2))) # + [markdown] id="wGX03QVRs5XF" #

# ### 3.2.2 Number of unique questions

# + id="VOKa6aU2s5XG" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228491640, "user_tz": -330, "elapsed": 1774, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="cd140dfe-89b1-4b4b-9bf4-9e169c243d84" qids = pd.Series(df['qid1'].tolist() + df['qid2'].tolist()) unique_qs = len(np.unique(qids)) qs_morethan_onetime = np.sum(qids.value_counts() > 1) print ('Total number of Unique Questions are: {}\n'.format(unique_qs)) print(len(np.unique(qids))) print ('Number of unique questions that appear more than one time: {} ({}%)\n'.format(qs_morethan_onetime,qs_morethan_onetime/unique_qs*100)) print ('Max number of times a single question is repeated: {}\n'.format(max(qids.value_counts()))) q_vals=qids.value_counts() q_vals=q_vals.values # + id="plcvbd4Cs5XM" colab={"base_uri": "https://localhost:8080/", "height": 480} executionInfo={"status": "ok", "timestamp": 1640228505104, "user_tz": -330, "elapsed": 903, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="ce2504f4-cfea-46f7-9955-28d847284071" x = ["unique_questions" , "Repeated Questions"] y = [unique_qs , qs_morethan_onetime] plt.figure(figsize=(10, 6)) plt.title ("Plot representing unique and repeated questions ") sns.barplot(x,y) plt.show() # + [markdown] id="G-CwGaMms5XS" #

# ### 3.2.3 Checking for Duplicates

# + id="YCiDBHm5s5XT" outputId="20097698-af98-4efb-a5b2-e0302e42ccfe" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228508950, "user_tz": -330, "elapsed": 361, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} #checking whether there are any repeated pair of questions pair_duplicates = df[['qid1','qid2','is_duplicate']].groupby(['qid1','qid2']).count().reset_index() print ("Number of duplicate questions",(pair_duplicates).shape[0] - df.shape[0]) # + [markdown] id="iaHTnnt8s5XX" #

# ### 3.2.4 Number of occurrences of each question

# + id="dPZwk-C8s5Xa" outputId="c6522fee-609d-4294-9031-effcfc130928" colab={"base_uri": "https://localhost:8080/", "height": 656} executionInfo={"status": "ok", "timestamp": 1640228515327, "user_tz": -330, "elapsed": 3576, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} plt.figure(figsize=(20, 10)) plt.hist(qids.value_counts(), bins=160) plt.yscale('log', nonposy='clip') plt.title('Log-Histogram of question appearance counts') plt.xlabel('Number of occurences of question') plt.ylabel('Number of questions') print ('Maximum number of times a single question is repeated: {}\n'.format(max(qids.value_counts()))) # + [markdown] id="h_WdYxlYs5Xj" #

# ### 3.2.5 Checking for NULL values

# + id="r0x1gR2fs5Xk" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228524588, "user_tz": -330, "elapsed": 626, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="513fb927-ebe5-4688-ec0e-836703201747" #Checking whether there are any rows with null values nan_rows = df[df.isnull().any(1)] print (nan_rows) # + [markdown] id="CCYmufv6s5Xo" # - There are two rows with null values in question2 # + id="yLBRyACgs5Xp" outputId="c4add345-336a-4959-fc3d-5341d129256b" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228529217, "user_tz": -330, "elapsed": 376, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} # Filling the null values with ' ' df = df.fillna('') nan_rows = df[df.isnull().any(1)] print (nan_rows) # + [markdown] id="l9Qcl5xfs5Xs" #

# ### 3.3 Basic Feature Extraction (before cleaning)

# + [markdown] id="RRzvPYzGs5Xu" # Constructing a few features like: # - ____freq_qid1____ = Frequency of qid1's # - ____freq_qid2____ = Frequency of qid2's # - ____q1len____ = Length of q1 # - ____q2len____ = Length of q2 # - ____q1_n_words____ = Number of words in Question 1 # - ____q2_n_words____ = Number of words in Question 2 # - ____word_Common____ = (Number of common unique words in Question 1 and Question 2) # - ____word_Total____ =(Total num of words in Question 1 + Total num of words in Question 2) # - ____word_share____ = (word_common)/(word_Total) # - ____freq_q1+freq_q2____ = sum total of frequency of qid1 and qid2 # - ____freq_q1-freq_q2____ = absolute difference of frequency of qid1 and qid2 # + colab={"base_uri": "https://localhost:8080/"} id="xAMHtwQpsHHE" executionInfo={"status": "ok", "timestamp": 1640228537572, "user_tz": -330, "elapsed": 379, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} outputId="1cc3ffc1-2b85-45d3-a6c5-4cc4568adb4f" df['question1'] # + id="Iq4DZ-rYs5Xv" outputId="ee74d7a2-da6f-4546-fb1f-16ac542a4fbf" colab={"base_uri": "https://localhost:8080/", "height": 704} executionInfo={"status": "ok", "timestamp": 1640228843889, "user_tz": -330, "elapsed": 37387, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} df['freq_qid1'] = df.groupby('qid1')['qid1'].transform('count') df['freq_qid2'] = df.groupby('qid2')['qid2'].transform('count') df['q1len'] = df['question1'].str.len() df['q2len'] = df['question2'].str.len() df['q1_n_words'] = df['question1'].apply(lambda row: len(str(row).split(" "))) df['q2_n_words'] = df['question2'].apply(lambda row: len(str(row).split(" "))) def normalized_word_Common(row): w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" "))) w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" "))) return 1.0 * len(w1 & w2) df['word_Common'] = df.apply(normalized_word_Common, axis=1) def normalized_word_Total(row): w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" "))) w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" "))) return 1.0 * (len(w1) + len(w2)) df['word_Total'] = df.apply(normalized_word_Total, axis=1) def normalized_word_share(row): w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" "))) w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" "))) return 1.0 * len(w1 & w2)/(len(w1) + len(w2)) df['word_share'] = df.apply(normalized_word_share, axis=1) df['freq_q1+q2'] = df['freq_qid1']+df['freq_qid2'] df['freq_q1-q2'] = abs(df['freq_qid1']-df['freq_qid2']) df.to_csv("/content/drive/MyDrive/case studies/Docs_QUORA/df_fe_without_preprocessing_train.csv", index=False) df.head() # + [markdown] id="-zLujovVs5X3" #

# #### 3.3.1 Analysis of some of the extracted features

# + [markdown] id="zRIFQTkCs5X3" # - Here are some questions have only one single words. # + id="jSS0X82Ds5X5" outputId="1916d296-f2a5-4333-aac8-0ff9ee10f196" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640228586037, "user_tz": -330, "elapsed": 391, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} print ("Minimum length of the questions in question1 : " , min(df['q1_n_words'])) print ("Minimum length of the questions in question2 : " , min(df['q2_n_words'])) print ("Number of Questions with minimum length [question1] :", df[df['q1_n_words']== 1].shape[0]) print ("Number of Questions with minimum length [question2] :", df[df['q2_n_words']== 1].shape[0]) # + [markdown] id="kFzTIHW3s5YB" #

# ##### 3.3.1.1 Feature: word_share

# + id="s4rwGLFDs5YD" outputId="40bffb42-8a04-48f3-86ec-4d6968852066" colab={"base_uri": "https://localhost:8080/", "height": 657} executionInfo={"status": "ok", "timestamp": 1640228597760, "user_tz": -330, "elapsed": 5370, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} plt.figure(figsize=(12, 8)) plt.subplot(1,2,1) sns.violinplot(x = 'is_duplicate', y = 'word_share', data = df[0:]) plt.subplot(1,2,2) sns.distplot(df[df['is_duplicate'] == 1.0]['word_share'][0:] , label = "1", color = 'red') sns.distplot(df[df['is_duplicate'] == 0.0]['word_share'][0:] , label = "0" , color = 'blue' ) plt.show() # + [markdown] id="RcwMI4xps5YJ" # - The distributions for normalized word_share have some overlap on the far right-hand side, i.e., there are quite a lot of questions with high word similarity # - The average word share and Common no. of words of qid1 and qid2 is more when they are duplicate(Similar) # + [markdown] id="K0AbOS65s5YL" #

# ##### 3.3.1.2 Feature: word_Common

# + id="_mCFvztcs5YM" outputId="efec6061-e542-4392-c853-f8ee72af7873" colab={"base_uri": "https://localhost:8080/", "height": 657} executionInfo={"status": "ok", "timestamp": 1640228607466, "user_tz": -330, "elapsed": 5342, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "15716323643841165579"}} plt.figure(figsize=(12, 8)) plt.subplot(1,2,1) sns.violinplot(x = 'is_duplicate', y = 'word_Common', data = df[0:]) plt.subplot(1,2,2) sns.distplot(df[df['is_duplicate'] == 1.0]['word_Common'][0:] , label = "1", color = 'red') sns.distplot(df[df['is_duplicate'] == 0.0]['word_Common'][0:] , label = "0" , color = 'blue' ) plt.show() # + [markdown] id="9Ej1ouEVs5YR" #

# The distributions of the word_Common feature in similar and non-similar questions are highly overlapping.
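# To make the word_share feature defined in section 3.3 concrete, here is a small standalone sketch; the helper and the two example questions are made up for illustration and are not part of the original feature pipeline.
# +
def word_share(q1, q2):
    # same idea as normalized_word_share above: shared unique words over total unique words
    w1 = set(w.lower().strip() for w in q1.split(" "))
    w2 = set(w.lower().strip() for w in q2.split(" "))
    return len(w1 & w2) / (len(w1) + len(w2))

print(word_share("How do I learn Python?", "How can I learn Python quickly?"))
# -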

# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:miniconda3-python-tutorial] # language: python # name: conda-env-miniconda3-python-tutorial-python3_myenv # --- # ### Final step in isotherm calculation # # This notebook is used to assemble the full isotherm files. import xarray as xr import cftime import matplotlib.pyplot as plt import numpy as np from config import directory_figs, directory_data ds = xr.open_mfdataset(f'{directory_data}temp_FWPaSalP04Sv/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_FWPaSalP04Sv.nc') ds = xr.open_mfdataset(f'{directory_data}temp_FWAtSalG02Sv/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_FWAtSalG02Sv.nc') ds = xr.open_mfdataset(f'{directory_data}temp_FWAtSalG04Sv/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_FWAtSalG04Sv.nc') ds = xr.open_mfdataset(f'{directory_data}temp_FWAtSalP02Sv/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_FWAtSalP02Sv.nc') ds = xr.open_mfdataset(f'{directory_data}temp_FWAtSalP04Sv/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_FWAtSalP04Sv.nc') # + tags=[] ds = xr.open_mfdataset(f'{directory_data}temp_005/iso20c_file_*.nc', combine='by_coords') ds.to_netcdf(f'{directory_data}iso20c_005.nc') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # + error1 = np.load("error_save_La_2D_w50_poly.npy") error1 = error1.reshape(500,100) error1 = np.average(error1,1) plt.plot(np.log10(error1),label="x+x^2") error2 = np.load("error_save_La_2D_w50_relu3.npy") error2 = error2.reshape(500,100) error2 = np.average(error2,1) plt.plot(np.log10(error2),label="relu3") error3 = np.load("error_save_La_2D_w50_poly_sin.npy") error3 = error3.reshape(500,100) error3 = np.average(error3,1) plt.plot(np.log10(error3),label="x+x^2+sin(x)") error3 = np.load("error_save_La_2D_w50_poly_sin_gaussian.npy") error3 = error3.reshape(500,100) error3 = np.average(error3,1) plt.plot(np.log10(error3),label="x+x^2+sin(x)+gaussian") error4 = np.load("error_save_La_2D_w50_poly_relu.npy") error4 = error4.reshape(500,100) error4 = np.average(error4,1) plt.plot(np.log10(error4),label="x+x^2+relu(x)") error5 = np.load("error_save_La_2D_w50_poly_relu3.npy") error5 = error5.reshape(500,100) error5 = np.average(error5,1) plt.plot(np.log10(error5),label="x+x^2+relu(x)**3") plt.legend() plt.title("regular") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.backends.backend_pdf import PdfPages from scipy.stats import laplace, norm, uniform param_files = ["exp_params.txt", "pair_params.txt", "peptide_params.txt"] params = {} for pf in param_files: pfp = os.path.join("/gd/bayesian_RT/Alignments/SQC_20180815_2", pf) #pfp = os.path.join("/Users/albert/git/RTLib/Alignments/NCE_20180520_4", pf) if os.path.exists(pfp): with open(pfp, "rb") as f: try: params[pf.split("_")[0]] = pd.read_csv(pfp, sep="\t") except: 
print("some error") params dfa = pd.read_csv('/gd/bayesian_RT/Alignments/SQC_20180815_2/ev_updated.txt', sep='\t', low_memory=False) dfa # + dff = dfa[-(dfa["remove"])].reset_index(drop=True) # factorize experiments into exp_id #eidx = dff["raw_file"].unique() #print(len(eidx)) #dff["exp_id"] = dff["raw_file"].map({ind: val for val, ind in enumerate(dff["raw_file"].unique())}) # factorize sequence into peptide_id #dff["stan_peptide_id"] = dff["sequence"].map({ind: val for val, ind in enumerate(dff["sequence"].unique())}) num_experiments = dff["exp_id"].max() + 1 num_observations = dff.shape[0] num_peptides = dff["peptide_id"].max() + 1 retention_times = dff["Retention time"] # cap PEP at 1 dff['PEP'].loc[dff['PEP'] > 1] = 1 dff['pep_new'].loc[dff['pep_new'] > 1] = 1 dff['pep_updated'].loc[dff['pep_updated'] > 1] = 1 pep = dff["PEP"] stan_peptide_id = dff["peptide_id"] exp_id = dff["exp_id"] exp_names = dff['Raw file'].unique() mean_log_rt = np.mean(np.log(retention_times)) sd_log_rt = np.std(np.log(retention_times)) max_rt = retention_times.max() pep_exp_all = dff["peptide_id"].map(str) + " - " + dff["exp_id"].map(str) pep_exp_pairs = pep_exp_all.unique() num_pep_exp_pairs = len(pep_exp_pairs) print(len(pep_exp_all)) print(len(pep_exp_all.unique())) muij_map = pep_exp_all.map({ind: val for val, ind in enumerate(pep_exp_pairs)}) #muij_map #dff # + pep_col_code = pd.cut(dff["PEP"], 10) for exp in range(1,2): print("Generating Summary for Experiment", exp, "|", exp_names[exp]) exp_params = params["exp"].iloc[exp] exp_inds = (dff['exp_id'] == exp) & (~pd.isnull(dff['pep_new'])) predicted = dff['muij'][exp_inds].values predicted_sd = dff['sigmaij'][exp_inds].values mus = dff['mu'][exp_inds].values # observed values observed = dff['Retention time'][exp_inds].values obs_peps = dff['PEP'][exp_inds].values obs_code = pep_col_code[exp_inds].values residual = observed - predicted plt.subplot(121) plt.scatter(mus, observed, s=1, color="black") #plt.scatter(predicted, observed) plt.plot([0, exp_params["split_point"]], [exp_params["beta_0"], (exp_params["split_point"] * exp_params["beta_1"]) + exp_params["beta_0"]], color="red") plt.plot([exp_params["split_point"], 300], [(exp_params["split_point"] * exp_params["beta_1"]) + exp_params["beta_0"], (exp_params["split_point"] * exp_params["beta_1"]) + ((300-exp_params["split_point"]) * exp_params["beta_2"]) + exp_params["beta_0"]], color="green") plt.plot(np.repeat(exp_params["split_point"], 2), [-100, 300], color="blue", linestyle="dashed") plt.axis([0, mus.max() + 10, exp_params["beta_0"]-10, observed.max() + 10]) plt.title(exp_names[exp]) plt.xlabel("Reference RT (min)") plt.ylabel("Observed RT (min)") plt.subplot(122) plt.scatter(predicted, residual, s=4, c=pep_col_code.cat.codes.values[exp_inds], alpha=0.5) plt.plot([0, exp_params["split_point"]], [0, 0], color="red") plt.plot([exp_params["split_point"], 300], [0, 0], color="green") plt.plot(np.repeat(exp_params["split_point"], 2), [-100, 300], color="blue", linestyle="dashed") # confidence intervals, 2.5% and 97.5% conf_x = predicted[np.argsort(predicted)] conf_2p5 = laplace.ppf(0.025, loc=0, scale=predicted_sd)[np.argsort(predicted)] conf_97p5 = laplace.ppf(0.975, loc=0, scale=predicted_sd)[np.argsort(predicted)] plt.plot(conf_x, conf_2p5, color="red") plt.plot(conf_x, conf_97p5, color="red") plt.axis([predicted.min()-5, predicted.max()+5, residual.min()-5, residual.max()+5]) plt.ylim(np.min(conf_2p5) - 0.1, np.max(conf_97p5) + 0.1) plt.xlim(conf_x[0], conf_x[-1]) cbar = plt.colorbar() 
cbar.set_label('Spectral PEP (Error Probability)') cbar.ax.set_yticklabels(pep_col_code.cat.categories.values) plt.xlabel("Inferred RT (min)") plt.ylabel("Residual RT (min)") plt.subplots_adjust(hspace=0.3, wspace=0.4, bottom=0.2) # plt.grid() fig = plt.gcf() fig.set_size_inches(7, 3.5) #fname = "./tmp_figs/alignment_" + str(exp) + "_" + exp_names[exp] + ".png" #print("Saving figure to", fname, "...") #fig.savefig(fname, dpi=160) #plt.close() #fig.clf() plt.show() # + num_experiments = len(dff['exp_id'].unique()) dff['residual'] = dff['Retention time'] - dff['muij'] plots_per_row = 30 if num_experiments < plots_per_row: plots_per_row = num_experiments num_rows = int(np.ceil(num_experiments / plots_per_row)) resi = [] for i in range(0, num_rows): ax = plt.subplot2grid((num_rows, 1), (i, 0)) if (i + 1) * plots_per_row > num_experiments: resi = [dff['residual'][(dff['exp_id'] == i) & (~pd.isnull(dff['residual']))] for i in range((i * plots_per_row), num_experiments)] ax.boxplot(resi, showfliers=False) ax.set_xticklabels(np.arange((i * plots_per_row), num_experiments, 1)) else: resi = [dff['residual'][(dff['exp_id'] == i) & (~pd.isnull(dff['residual']))] for i in range((i * plots_per_row), ((i + 1) * plots_per_row))] ax.boxplot(resi, showfliers=False) ax.set_xticklabels(np.arange((i * plots_per_row), ((i + 1) * plots_per_row), 1)) #ax.violinplot(resi, showmedians=True, showextrema=True) #ax.boxplot(resi, showfliers=False) ax.set_xticks(np.arange(1, plots_per_row + 1, 1)) ax.set_xlabel('Experiment Number') ax.set_ylabel('Residual RT (min)') #plt.subplots_adjust(hspace=0.6, wspace=0.3) # plt.tight_layout() # finalize and save figure f = plt.gcf() #f.text(0.5, 0, 'Experiment Number', fontsize=16, ha='center', va='center') #f.text(0.06, 0.5, 'Residual RT (min)', fontsize=16, ha='center', va='center', rotation='vertical') f.set_size_inches(12, num_rows * 2) #fname = os.path.join(figures_path, 'residuals_violin.png') #logger.info('Saving figure to {} ...'.format(fname)) #f.savefig(fname, dpi=160) #fig_names.append(fname) #plt.close() #f.clf() # + # PEP vs. 
PEP.new scatterplot inds = (~pd.isnull(dff['pep_new'])) & (dff['pep_new'] > 1e-5) & (dff['PEP'] > 1e-5) f, ax = plt.subplots() hst = ax.hist2d(np.log10(dff['PEP'][inds]), np.log10(dff['pep_new'][inds]), bins=(50, 50), cmap=plt.cm.Reds) ax.plot([-5, 0], [-5, 0], '-r') ax.grid() interval = (-5, 0) ax.set_xlim(interval) ax.set_ylim(interval) ax.set_xlabel('Spectral PEP', fontsize=16) ax.set_ylabel('DART-ID PEP', fontsize=16) interval = np.arange(-5, 1, 1) ax.set_xticks(interval) ax.set_yticks(interval) ax.set_xticklabels(['$10^{{{}}}$'.format(i) for i in interval], fontsize=12) ax.set_yticklabels(['$10^{{{}}}$'.format(i) for i in interval], fontsize=12) f.set_size_inches(7, 5) # plt.tight_layout() cbar = plt.colorbar(hst[3], ax=ax) cbar.set_label('Frequency', fontsize=16, labelpad=20, ha='center', va='top') cbar.ax.xaxis.set_label_position('top') # + # Fold-change increase num_points=100 x = np.logspace(-5, 0, num=num_points) y = np.zeros(num_points) y2 = np.zeros(num_points) y3 = np.zeros(num_points) inds = ~pd.isnull(dff['pep_new']) for i, j in enumerate(x): y[i] = np.sum(dff['pep_updated'] < j) / np.sum(dff['PEP'] < j) y2[i] = np.sum(dff['PEP'] < j) / dff.shape[0] y3[i] = np.sum(dff['pep_updated'] < j) / dff.shape[0] f, (ax1, ax2) = plt.subplots(2, 1) ax1.semilogx(x, (y*100)-100, '-b') # ax1.plot([np.min(x), np.max(x)], [0, 0], '-r', linestyle='dashed', linewidth=2) ax1.plot([1e-2, 1e-2], [-1000, 1000], '-k', linestyle='dashed') ax1.grid() ax1.set_xlim([3e-4, 3e-1]) ax1.set_ylim([-25, np.max(y)*100-50]) ax1.set_xlabel('PEP Threshold', fontsize=16) ax1.set_ylabel('Increase (%)', fontsize=16) ax1.set_title('Increase in confident PSMs', fontsize=16) ax2.semilogx(x, y2, '-b', linewidth=1, label='Spectra PEP') ax2.semilogx(x, y3, '-g', linewidth=1, label='DART-ID PEP') #ax2.fill_between(x, 0, y2) ax2.plot([1e-2, 1e-2], [-1000, 1000], '-k', linestyle='dashed') ax2.grid() ax2.set_xlim([3e-4, 3e-1]) ax2.set_ylim([0, 1.05]) ax2.set_xlabel('PEP Threshold', fontsize=16) ax2.set_ylabel('Fraction', fontsize=16) ax2.set_title('Fraction of confident PSMs', fontsize=16) ax2.legend(fontsize=16) plt.subplots_adjust(hspace=0.6, wspace=0.3) f.set_size_inches(5, 7) # plt.tight_layout() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from backtesting import Backtest, Strategy from backtesting.lib import crossover from backtesting.test import SMA # + symbol=input("Enter your company name: ") df=pd.read_csv(r'C:/Users/Hp/stock analysis/nse_company_datail/'+symbol+".csv") df.rename(columns = {'Open Price': 'Open', 'High Price': 'High', 'Low Price':'Low','Close Price':'Close', 'Total Traded Quantity':'Volume'}, inplace =True) df.index = pd.DatetimeIndex(df['Date']) class SmaCross(Strategy): n1 = 9 n2 = 26 def init(self): close = self.data.Close self.sma1 = self.I(SMA, close, self.n1) self.sma2 = self.I(SMA, close, self.n2) def next(self): if crossover(self.sma1, self.sma2): self.buy() elif crossover(self.sma2, self.sma1): self.sell() bt = Backtest(df, SmaCross, cash=10000, commission=.002, exclusive_orders=True) output = bt.run() bt.plot() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook compares the performance of SignHunter and 
Harmonica on the two examples provided by Harmonica's authors. # # To run this notebook, please copy the contents of this notebook's directory (`samplings.py`, `main.py`, and `SignHunter - Harmonica Comparison.ipynb`) into the directory of the Harmonica's [source code](https://github.com/callowbird/Harmonica) # # One should note that the two algorithms make different assumptions on the objective function to be optimized. # SignHunter assumes the function is separable in its variables, while Harmonica assumes that the function _can be approximated by a sparse and low degree polynomial in the Fourier basis. This # means intuitively that it can be approximated well by a decision tree_. See [this](https://arxiv.org/pdf/1706.00764.pdf) # From below, we can see that SignHunter minimizes the first objective function using 20 queries achieving a value of -50.0 whereas Harmonica does the same using 4223 queries! # # For the second objective function, Signhunter achieves a better objective value of -916.2654 with 500 queries. On the other hand, Harmonica employs 4223 queries to obtain an objective value of -916.247764. # # In terms of computational complexity per query, SignHunter is far more efficient. Please see the time per query from each cell below. E.g., SignHunter takes $36\mu s$ per query and Harmonica takes $67ms$! # # Please note that we are using Harmonica's examples and its default parameters. There are no hyperparameter for SignHunter. import samplings import numpy as np import time def sign_hunter(f, dim, max_num_queries): """ A simple implementation of SignHunter """ best_guess = np.ones(dim) best_f = f(best_guess) guess = best_guess.copy() num_queries = 1 while num_queries < max_num_queries: for h in range(np.ceil(np.log2(dim)).astype(int) + 1): chunk_len = np.ceil(dim / 2**h).astype(int) for i in range(2**h): istart = i * chunk_len iend = min(istart + chunk_len, dim) guess[istart:iend] *= -1 val = f(guess) num_queries += 1 if val <= best_f: #print("guess is better", istart, iend, best_guess, guess) best_guess = guess.copy() best_f = val else: #print("guess is worse", istart, iend, best_guess, guess) guess = best_guess.copy() #print(istart, iend) if istart == dim - 1: break if num_queries >= max_num_queries: break if num_queries >= max_num_queries: break if num_queries >= max_num_queries: break # signhunter assumes the objective function is separable so it returns a solution in O(n) # if the assumption is not satisfied we can start with a new guess after O(n) queries. 
guess = np.sign(np.random.randn(dim)) num_queries += 1 val = f(guess) if val <= best_f: best_guess = guess.copy() best_f = val return best_f, guess, num_queries # ### Objective Function 1 from Harmonica source code # # https://github.com/callowbird/Harmonica # ###### SignHunter samplings.num_queries = 0 query = samplings.f1 start_time = time.time() best_f, sol, num_queries= sign_hunter(query, 60, 20) end_time = time.time() time_per_query = (end_time - start_time) / num_queries * 1e6 print("\n SignHunter: Number of queries {}| Solution cost {}| Time per query {:.2f}us".format( samplings.num_queries, best_f,time_per_query)) # ###### Harmonica # !python main.py -query f1 # ### Objective Function 2 from Harmonica source code # # https://github.com/callowbird/Harmonica # ###### SignHunter samplings.num_queries = 0 query = samplings.f2 start_time = time.time() best_f, sol, num_queries= sign_hunter(query, 60, 500) end_time = time.time() time_per_query = (end_time - start_time) / num_queries * 1e6 print("\n SignHunter: Number of queries {}| Solution cost {}| Time per query {:.2f}us".format( samplings.num_queries, best_f,time_per_query)) # ###### Harmonica # !python main.py -query f2 # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R 3.6 # language: R # name: ir36 # --- # # Clustering # + getData <- function(N=30) { x1 <- c( rnorm(N, mean=2.5, sd=2.5), rnorm(N, mean=10.5, sd=2.5) ) x2 <- c( rnorm(N, mean=2.5, sd=2.5), rnorm(N, mean=10.5, sd=2.5) ) y <- c(rep(0, N), rep(1, N)) X <- data.frame(x1=x1, x2=x2) return(list(X=X, y=y)) } D = getData() # - # ## K-means m <- kmeans(D$X, 2) print(m) print(m$totss) # + options(repr.plot.width=5, repr.plot.height=5) plot(D$X, col=m$cluster) points(m$centers, col=1:2, pch=8, cex=2) # + library('cluster') options(repr.plot.width=5, repr.plot.height=5) clusplot(D$X, m$cluster, color=TRUE, shade=TRUE, labels=2, lines=0) # + library('fpc') options(repr.plot.width=5, repr.plot.height=5) plotcluster(D$X, m$cluster) # - # ## Ward hierarchical clustering d <- dist(D$X, method='euclidean') m <- hclust(d, method='ward.D2') print(m) # + options(repr.plot.width=10, repr.plot.height=8) plot(m) # - # ## Model based # + library('mclust') m <- Mclust(D$X) # - print(summary(m)) plot(m) # ## Comparing clustering solutions # + m1 <- kmeans(D$X, 2) m2 <- Mclust(D$X) d <- dist(D$X, method='euclidean') s <- cluster.stats(d, m1$cluster, m2$cluster) print(s) # - # ## Silhouette score s <- silhouette(m1$cluster, d) print(summary(s)) plot(s, col=c('red', 'blue')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #
# # Random Variables
#

# https://www.countbayesie.com/blog/2015/2/20/random-variables-and-expectation #

#

# A random variable is a number that we assign to the outcome of an event. An example of an event is a dice roll. A spin of a color wheel gives yellow, red, or blue.

# When spun, the color wheel defines an event space. From looking at the wheel we can infer the probability of each event.

# $P(yellow)=1/2$

#

# $P(red)=1/4$

#

# $P(blue)=1/4$

#

# Given a sequence of events, Y, R, R, B, B, we can convert each event to a number.

#

# $X(w)=2$ if yellow

#

# $X(w)=1$ if red

#

# $X(w)=0$ if blue

# #

# The act of assigning numbers to a categorical variable creates a random variable. We capitalize random variables, and the lower-case $w$ refers to an event in the event space.
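# As a quick illustration (an added sketch, not part of the original notes; the names `X`, `P`, and `events` are only for this example), the mapping below encodes the event sequence Y, R, R, B, B as numbers and uses the wheel probabilities above to compute the expected value of $X$.
# +
# Define the random variable X(w): map each categorical outcome to a number
X = {'yellow': 2, 'red': 1, 'blue': 0}

# Probabilities read off the color wheel
P = {'yellow': 1/2, 'red': 1/4, 'blue': 1/4}

# Convert a sequence of observed events into values of X
events = ['yellow', 'red', 'red', 'blue', 'blue']
values = [X[w] for w in events]
print(values)       # [2, 1, 1, 0, 0]

# Expected value of X under the wheel probabilities: E[X] = sum over w of X(w) * P(w)
expected_X = sum(X[w] * P[w] for w in P)
print(expected_X)   # 2*0.5 + 1*0.25 + 0*0.25 = 1.25
# -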

# #

# In most cases we are not able to infer the probabilities for the event space directly. Instead we use the data to define the event space and use the number of occurrences of each event to estimate the probabilities. For multidimensional data this requires large amounts of data, and it also requires us to keep updating the probabilities so they conform to the total probability rule as new events are observed, which happens often when data are sparse.
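# A minimal sketch of the empirical approach described above (the data and names here are illustrative only): estimate the probability of each event from observed counts; the estimates always sum to 1, which is the total probability rule mentioned above, and they must be recomputed as new observations arrive.
# +
from collections import Counter

observed = ['yellow', 'red', 'red', 'blue', 'blue', 'yellow', 'yellow', 'red']
counts = Counter(observed)

# Empirical probability of each event = count / total number of observations
empirical_P = {event: n / len(observed) for event, n in counts.items()}
print(empirical_P)

# The empirical probabilities conform to the total probability rule: they sum to 1
print(sum(empirical_P.values()))   # 1.0
# -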

# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/ Wow, this is so clear... # + # Reading: https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading3.pdf
# Conditional Probability
# Conditional probability answers the question "how does the probability of an event change when we have extra or new information?"

# Example 1: toss a coin 3 times; what is the probability of 3 heads? The full sample space has 8 outcomes: {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}, all equally likely with probability 1/8.

# Suppose we are told the first toss was H. This is new information, in contrast to the scenario above where we know nothing beyond the coin being fair. The sample space is reduced to {HHH, HHT, HTH, HTT} because the first toss is H, so the probability of 3 heads is now 1/4. This is called conditional probability because we condition on having observed an H first.

# The conditional probability is written as

# $P(A|B)$. This is read as "the conditional probability of A given B" or "the probability of A conditioned on B".

# Graphically, the conditional probability is the area of the overlap of A and B divided by the area of B.

# $P(A|B) = \frac{P(A \cap B)}{P(B)}$

# Redoing the coin example with the formula above: A = "3 heads", B = "the first of the 3 tosses is H". Since A is contained in B, $P(A \cap B) = P(A) = \frac{1}{8}$, and $P(B) = \frac{4}{8} = \frac{1}{2}$, so $P(A|B) = \frac{1/8}{1/2} = \frac{1}{4}$, which matches the reduced-sample-space argument above.
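# A quick numerical check of the result above (an added sketch, not part of the original notes):
# enumerate all 8 equally likely outcomes and compute P(A|B) = P(A and B) / P(B) by counting.
from itertools import product

outcomes = list(product('HT', repeat=3))            # the 8 outcomes HHH, HHT, ...
A = [o for o in outcomes if o == ('H', 'H', 'H')]   # A = "3 heads"
B = [o for o in outcomes if o[0] == 'H']            # B = "first toss is H"
A_and_B = [o for o in outcomes if o in A and o in B]

P_B = len(B) / len(outcomes)                # 4/8 = 0.5
P_A_and_B = len(A_and_B) / len(outcomes)    # 1/8 = 0.125
print(P_A_and_B / P_B)                      # 0.25, i.e. 1/4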

# -
# Probability mass function (PMF) for the discrete case
# Probability density function (PDF) for the continuous case
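# To make the distinction concrete, here is a small added sketch (not from the original notes): a PMF for a fair die assigns probability mass to discrete values, while a normal PDF evaluated on a grid assigns density to a continuum of values.
# +
import numpy as np

# Discrete case: PMF of a fair six-sided die; the masses sum to 1
die_pmf = {face: 1/6 for face in range(1, 7)}
print(sum(die_pmf.values()))            # 1.0

# Continuous case: standard normal PDF on a grid; the density integrates to ~1
x = np.linspace(-5, 5, 1001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.trapz(pdf, x))                 # ~1.0
# -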
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Small primer for Python a = 'Corso_di programmazione in Python!' b = a[-4:] print(b) # ## Operazioni con tipi numerici a, b = 2, 8 print(a*b, a**b, a / b, a // b, a % b) # ## Stringhe print('have') print('haven\'t') print("haven't") print(r"haven\'t") print(""" Stringa su più righe con formattazione""") print("""\ Stringa su più righe con formattazione""") print( ('Stringa che viene automaticamente ' 'concatenata') ) # ### Indici s = 'In addition to indexing, slicing is also supported to work with strings' print(s[4:], s[:4]) print(s[-9:]) print(s[-1] + s[-2] + s[-3] + s[-4] + s[-5] + s[-6]) s[:10] + 'x' + s[11:] a = 32 b = float(a) b # ### Metodi utili su stringhe print(s.lower()) print(s.upper()) print(s.title()) print(s.title().swapcase()) print(s.split('to')) print(s.index('to')) print(s.count('to')) print("Usa stringa in cui inserire i numeri {} e {}".format(4, 2 / 3)) # ## Esercizi # Data una stringa di lunghezza arbitria # - ottenere la stringa composta dalla concatenazione dai primi e degli ultimi 4 elementi # - stampare la stringa in ordine inverso # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import random import numpy as np import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. # %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython # %load_ext autoreload # %autoreload 2 # + from sklearn import preprocessing, metrics import utils import scipy.io import numpy as np from linear_classifier import LinearSVM_twoclass # load the SPAM email training dataset X,y = utils.load_mat('data/spamTrain.mat') yy = np.ones(y.shape) yy[y==0] = -1 # load the SPAM email test dataset test_data = scipy.io.loadmat('data/spamTest.mat') X_test = test_data['Xtest'] y_test = test_data['ytest'].flatten() yytest = np.ones(y_test.shape) yytest[y_test==0] = -1 ################################################################################## # YOUR CODE HERE for training the best performing SVM for the data above. # # what should C be? What should num_iters be? Should X be scaled? # # should X be kernelized? What should the learning rate be? What should the # # number of iterations be? 
# ################################################################################## from sklearn.metrics.pairwise import linear_kernel num_train = 3200; X_train = X[:3200]; Xval = X[3200:]; yy_train = yy[:3200]; yyval = yy[3200:]; Cvals = [1,3,10,30,100,300] max_acc = 0; best_C = 0; best_sigma = 0; K = linear_kernel(X_train,X_train) # scale the kernelized data matrix scaler = preprocessing.StandardScaler().fit(K) scaleK = scaler.transform(K) # add the intercept term KK = np.vstack([np.ones((scaleK.shape[0],)),scaleK.T]).T Kval = linear_kernel(Xval,X_train) # scale the kernelized data matrix scale_Kval = scaler.transform(Kval) # add the intercept term KK_val = np.vstack([np.ones((scale_Kval.shape[0],)),scale_Kval.T]).T for C in Cvals: svm = LinearSVM_twoclass() svm.theta = np.zeros((KK.shape[1],)) svm.train(KK,yy_train,learning_rate=1e-4,reg=C,num_iters=20000,verbose=False,batch_size=KK.shape[0]) pred_val = svm.predict(KK_val) accuracy = np.sum((pred_val == yyval)*1)/len(yyval) print("C value: " + str(C),"current accuracy: " + str(accuracy),"max accuracy: " + str(max_acc)) if (accuracy >= max_acc): max_acc = accuracy; best_C = C; print(max_acc,best_C) # - iterations = [100,1000,4000,10000,20000,30000] best_iter = 0; for iters in iterations: svm = LinearSVM_twoclass() svm.theta = np.zeros((KK.shape[1],)) svm.train(KK,yy_train,learning_rate=1e-4,reg=best_C,num_iters=iters,verbose=False,batch_size=KK.shape[0]) pred_val = svm.predict(KK_val) accuracy = np.sum((pred_val == yyval)*1)/len(yyval) print("iteration number: " + str(iters),"current accuracy: " + str(accuracy),"max accuracy: " + str(max_acc)) if (accuracy >= max_acc): max_acc = accuracy; best_iter = iters; if(best_iter == 0): best_iter = 20000; print(max_acc,best_iter) # + lrs = [1e-1,1e-2,1e-3,1e-4,1e-5,1e-6] best_lr = 0; for lr in lrs: svm = LinearSVM_twoclass() svm.theta = np.zeros((KK.shape[1],)) svm.train(KK,yy_train,learning_rate=lr,reg=best_C,num_iters=best_iter,verbose=False,batch_size=KK.shape[0]) pred_val = svm.predict(KK_val) accuracy = np.sum((pred_val == yyval)*1)/len(yyval) print("current accuracy: " + str(accuracy),"max accuracy: " + str(max_acc)) if (accuracy >= max_acc): max_acc = accuracy; best_lr = lr; if(best_lr == 0): best_lr = 1e-4; print(max_acc,best_lr) # + Cvals = [10,30,100] iterations = [4000,10000,20000] lrs = [1e-3,1e-4,1e-5] best_lr = 0; best_C = 0; best_iter = 0; for C in Cvals: for iteration in iterations: for lr in lrs: svm = LinearSVM_twoclass() svm.theta = np.zeros((KK.shape[1],)) svm.train(KK,yy_train,learning_rate=lr,reg=C,num_iters=iteration,verbose=False,batch_size=KK.shape[0]) pred_val = svm.predict(KK_val) accuracy = np.sum((pred_val == yyval)*1)/len(yyval) print("current accuracy: " + str(accuracy),"max accuracy: " + str(max_acc)) if (accuracy > max_acc): max_acc = accuracy; best_lr = lr; best_C = C; best_iter = iteration; if(best_lr == 0 and best_C == 0 and best_iter == 0): best_lr = 1e-4; best_iter = 20000; best_C = 10; print(max_acc,best_C,best_iter,best_lr) # + ################################################################################## # YOUR CODE HERE for testing your best model's performance # # what is the accuracy of your best model on the test set? On the training set? 
# ################################################################################## print(best_C,best_iter,best_lr) K = linear_kernel(X_train,X_train) # scale the kernelized data matrix scaler = preprocessing.StandardScaler().fit(K) scaleK = scaler.transform(K) # add the intercept term KK = np.vstack([np.ones((scaleK.shape[0],)),scaleK.T]).T Kval = linear_kernel(Xval,X_train) # scale the kernelized data matrix scale_Kval = scaler.transform(Kval) # add the intercept term KK_val = np.vstack([np.ones((scale_Kval.shape[0],)),scale_Kval.T]).T svm = LinearSVM_twoclass() svm.theta = np.zeros((KK.shape[1],)) svm.train(KK,yy_train,learning_rate=best_lr,reg=best_C,num_iters=best_iter,verbose=True,batch_size=KK.shape[0]) pred_train = svm.predict(KK) accuracy = np.sum((pred_train == yy_train)*1)/len(yy_train) print("max accuracy: " + str(accuracy)) # + test_data = scipy.io.loadmat('data/spamTest.mat') X_test = test_data['Xtest'] y_test = test_data['ytest'].flatten() yytest = np.ones(y_test.shape) yytest[y_test==0] = -1 print(yytest.shape) Ktest = linear_kernel(X_test,X_train) # scale the kernelized data matrix scale_Ktest = scaler.transform(Ktest) # add the intercept term KK_test = np.vstack([np.ones((scale_Ktest.shape[0],)),scale_Ktest.T]).T pred_test = svm.predict(KK_test) accuracy = np.sum((pred_test == yytest)*1)/len(yytest) print("max accuracy: " + str(accuracy)) ################################################################################## # ANALYSIS OF MODEL: Print the top 15 words that are predictive of spam and for # # ham. Hint: use the coefficient values of the learned model # ################################################################################## words, inv_words = utils.get_vocab_dict() print ("######## top 15 spam words ########") w = np.dot(svm.theta[1:], X_train).argsort()[::-1] for i in w[:15]: print (words[i]) ################################################################################## # END OF YOUR CODE # ################################################################################## # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %reset import pandas as pd import seaborn as sns import math import random # + min_price = 10 max_price = 15 current_price = 15 simulation_time = 500 #days volatility = 0.1 #determines width of min max range value_scaler = 1 #determines cheapness of stock global_market = True global_market_strength = 100 global_sentiment = random.choice(["bullish", "bearish"]) average_change_global_sentiment = 1 / (100 * 24) #1 / (days * 24 * number_of_stocks) changes every * days last_change = random.choice(["positive", "negative"]) #random.seed(1234) # - def define_function(current_price, max_price, min_price): use_current_price = current_price function_type = random.choice(["sin", "cos"]) number_of_periods = int(round((1 + random.random()))) period_length = 24 / number_of_periods periodicity = (2 * math.pi) / period_length amplitude = ((max_price - min_price) / 100) * random.random() if function_type == "sin": vertical_translation = use_current_price elif function_type == "cos": vertical_translation = use_current_price function = { "periodicity": periodicity, "amplitude": amplitude, "vertical_translation": vertical_translation, "function_type": function_type } return function def get_price_target(function, time_step, max_price, min_price): global global_market global 
global_sentiment global global_market_strength global average_change_global_sentiment if function.get('function_type') == "sin": price_target = function.get('amplitude') * math.sin(function.get('periodicity') * time_step) + function.get('vertical_translation') elif function.get('function_type') == "cos": price_target = function.get('amplitude') * math.cos(function.get('periodicity') * time_step) + function.get('vertical_translation') rand_pos_change = random.random() rand_pos_change_pos_neg = random.random() if rand_pos_change < 1: if rand_pos_change_pos_neg < 0.5: price_target = price_target - ((max_price - min_price) * (random.uniform(0.1, 1.0) / 10) ) elif rand_pos_change_pos_neg > 0.5: price_target = price_target + ((max_price - min_price) * (random.uniform(0.1, 1.0) / 10) ) if price_target < min_price: price_target = min_price elif price_target > max_price: price_target = max_price if global_market == True: rand_global_sentiment_value = random.random() if rand_global_sentiment_value < average_change_global_sentiment: global_sentiment = random.choice(["bullish", "bearish"]) if global_sentiment == "bullish": rand_global_market_value = random.random() / (global_market_strength / price_target) price_target = price_target + rand_global_market_value elif global_sentiment == "bearish": rand_global_market_value = -1 * random.random() / (global_market_strength / price_target) if price_target + rand_global_market_value > 0: price_target = price_target + rand_global_market_value return price_target def update_current_price(current_price, max_price, min_price, price_target): global last_change new_price = current_price + (price_target - current_price) * random.random() / 100 rand_pos_change = random.random() rand_pos_value = ((max_price - min_price) * (random.uniform(0.1, 1.0) / 10) ) / 100 if rand_pos_change < 0.1: if last_change == "negative": last_change = "positive" else: last_change = "negative" if last_change == "negative": if new_price - rand_pos_value < 0.001: rand_pos_change_2 = random.random() / 100 while new_price - ( rand_pos_change - rand_pos_change_2 ) < 0.001: rand_pos_change_2 = random.random() new_price = new_price - ( rand_pos_change - rand_pos_change_2 ) else: new_price = new_price - rand_pos_value elif last_change == "positive": new_price = new_price + rand_pos_value return new_price def update_min_max(max_price, min_price, volatility): prefered_diff = -3.300066 + 3.44987 * math.pow(math.e, 3.660217 * volatility) diff = max_price - min_price rand_change_max_value = random.random() rand_change_min_value = random.random() if rand_change_max_value < 0.1: if diff > prefered_diff: new_max_price = max_price - random.random() else: new_max_price = max_price + random.random() else: new_min_price = min_price new_max_price = max_price if rand_change_min_value < 0.1: if diff > prefered_diff: new_min_price = min_price + random.random() else: new_min_price = min_price - random.random() else: new_min_price = min_price new_max_price = max_price if new_min_price > new_max_price: new_min_price = min_price new_max_price = max_price if new_min_price <= 0: new_min_price = min_price if new_max_price <= 0: new_min_price = max_price return [new_max_price, new_min_price] def scale_value(value, value_scaler): new_value = value / value_scaler return new_value # + lista = [] listb = [] listc = [] listd = [] time = 0 for q in range(simulation_time): function = define_function(current_price, max_price, min_price) for i in range(24): price_target = get_price_target(function, i, max_price, min_price,) 
lista.append(scale_value(price_target, value_scaler)) listb.append(time) for j in range(60): current_price = (update_current_price(current_price, max_price, min_price, price_target)) listc.append(scale_value(current_price, value_scaler)) listd.append(time) time = time + 1 new_min_max_prize = update_min_max(max_price, min_price, volatility) max_price = new_min_max_prize[0] min_price = new_min_max_prize[1] print(min_price) print(max_price) #sns.scatterplot(x=listb, y=lista) sns.scatterplot(x=listd, y=listc) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Ru_YX1ZeYcvv" # [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/tutorials/ES/03_Primeros_pasos_con_Transformers.ipynb) # # 💡 **¡Hola!** # # Hemos reunido un conjunto de herramientas que tanto profesores universitarios como organizadores de eventos pueden usar para preparar fácilmente laboratorios, tareas o clases. El contenido está diseñado de manera autónoma tal que pueda ser facilmente incorporado a clases existentes. Este contenido es gratuito y usa tecnologías ampliamente conocidas de código abierto (`transformers`, `gradio`, etc). # # Alternativamente, puede solicitar que un miembro del equipo de Hugging Face ejecute los tutoriales para su clase a través de la iniciativa [ML demo.cratization tour](https://huggingface2.notion.site/ML-Demo-cratization-tour-with-66847a294abd4e9785e85663f5239652). # # Puede encontrar todos los tutoriales y recursos que hemos construido [aquí](https://huggingface2.notion.site/Education-Toolkit-7b4a9a9d65ee4a6eb16178ec2a4f3599). # + [markdown] id="tEFE3RqPeCvi" # # Tutorial: Primeros pasos con Transformers # + [markdown] id="CxkBL0ExerWB" # **Objetivos de aprendizaje:** El objetivo de este tutorial es aprender cómo: # # 1. Las redes neuronales Transformer pueden ser usadas para hacer frente a un amplio rango de tareas en el procesamiento del lenguaje natural y más. # 2. El aprendizaje por transferencia permite adaptar Transformers a tareas específicas. # 3. La función `pipeline()` de la biblioteca `Transformers` puede ser usada para correr inferencias con modelos desde el [Hub de Hugging Face](https://huggingface.co/models). # # Este tutorial está basado en nuestro primer libro publicado por O'Really [_Natural Language Processing with Transformers_](https://transformersbook.com/) - ¡Dale un vistazo si quieres ahondar en este tema! # **Duración**: 30-45 minutos. # # **Pre-requisitos:** Conocimiento de Python y estar familiarizado con Machine Learning. # # # **Autor**: [](https://twitter.com/_lewtun) (Siéntete libre de contactarme si tienes alguna pregunta acerca de este tutorial) # # ¡Todos estos pasos se pueden completar de manera gratuita! Todo lo que necesitas es un navegador y un lugar para escribir en Python 👩‍💻 # + [markdown] id="2cZf9wAu3Sgs" # ## 0. ¿Por qué Transformers? # + [markdown] id="eJNpOWlW3zwf" # El aprendizaje profundo (Machine Learning) actualmente está bajo un periodo de rápido progreso en una amplia gama de aplicaciones incluyendo: # # * 📖 Procesamiento del lenguaje natural # * 👀 Visión por computadora # * 🔊 Audio # * 🧬 Biología # * y mucho más! # # El principal autor de estos avances es el **Transformer** -- una novedosa **red neuronal** desarrollada por investigadores de Google en 2017. 
En conclusión, si estás dentro del Machine Learning, necesitas Transformers. # # Acá presentamos ejemplos de lo que los Transformers pueden hacer: # # * 💻 Pueden **generar código** en productos como [GitHub Copilot](https://copilot.github.com/), el cual está basado en la familia de [GPT models](https://huggingface.co/gpt2?text=My+name+is+Clara+and+I+am) de OpenAI. # * ❓ Pueden ser usados para el **mejoramiento de motores de búsqueda** como hizo [Google](https://www.blog.google/products/search/search-language-understanding-bert/) con un Transformer llamado [BERT](https://huggingface.co/bert-base-uncased). # * 🗣️ Pueden **procesar el habla en múltiples lenguajes** para realizar reconocimiento de voz, traducciones e identificación del lenguaje. Por ejemplo, el módulo de Facebook [XLS-R](https://huggingface.co/spaces/facebook/XLS-R-2B-22-16) puede transcribir audio de manera automática de un lenguaje a otro. # # Entrenar estos modelos **desde el inicio** involucra **muchos recursos**. Se requiere de una gran cantidad de computación, datos y días para entrenarlos. 😱 # # Afortunadamente, no se requiere realizar el entrenamiento en la mayoría de casos. Gracias a una técnica conocida como **transferencia de aprendizaje** (transfer learning en inglés) es posible adaptar un modelo que ha sido entrenado desde cero (usualmente llamado **modelo pre-entrenado**) a una variedad de tareas subsecuentes. Este proceso es llamado **fine-tuning** y generalmente puede ser llevado a cabo con una sola GPU y un dataset de un tamaño tal que puede ser encontrado en su universidad o compañía. # # Los modelos que veremos en este tutorial son todos ejemplos de modelos fine-tuned. Aprende más sobre el proceso de aprendizaje por transferencia en el siguiente video: # + colab={"base_uri": "https://localhost:8080/", "height": 321} id="etpT9sxJ5mMZ" outputId="a20977ef-26e2-4976-9445-a6d0c4c1deea" from IPython.display import YouTubeVideo YouTubeVideo('BqqfQnyjmgg') # + [markdown] id="WiBVG3_m5wKh" # Los Transformers son los chicos buena onda del barrio pero ¿cómo podemos usarlos? Si tan solo hubiese una biblioteca que pudiera ayudarnos ... ¡Oh espera, ahí está! La [biblioteca Transformers de Hugging Face](https://github.com/huggingface/transformers) provee una API unificada con docenas de arquitecturas Transformer, así como los medios para entrenar modelos y ejecutar inferencias con ellos. Así que para empezar vamos a instalar la biblioteca con el siguiente comando: # + id="rQIIoolp6xvM" # %%capture # %pip install transformers[sentencepiece] # + [markdown] id="Jw7ggS5O8SXk" # Ahora que hemos instalado la biblioteca vamos a ver algunas aplicaciones. # # + [markdown] id="ZBZqc3cp9wr6" # ## 1. Pipelines para Transformers # + [markdown] id="0BtPHb0I9yQh" # El camino más rápido para aprender lo que la biblioteca Transformers puede hacer es mediante la función `pipeline()`. Esta función carga un modelo desde el Hub de Hugging Face y se encarga de todos los pasos de pre-procesamiento y post-procesamiento necesarios para convertir entradas en predicciones: # + [markdown] id="aqGtvrdw-51s" # Alt text that describes the graphic # + [markdown] id="zJJv6Xqj--re" # En las siguientes secciones veremos cómo estos pasos pueden ser combinados para diferentes aplicaciones. 
Si quieres aprender más sobre qué es lo que sucede detrás de cámara mira el siguiente video: # + colab={"base_uri": "https://localhost:8080/", "height": 321} id="Jycz3wosGyiR" outputId="96dd4b61-c0b6-4162-9a2b-97691c086666" YouTubeVideo('1pedAIvTWXk') # + [markdown] id="rq6UZH31G86_" # ## 2. Clasificación de Texto # + [markdown] id="ZawTukA6HKpk" # Comenzamos con una de las tareas más comunes en el NLP: clasificación de texto. Necesitamos un fragmento de texto para analizar en nuestros modelos; vamos a usar la siguiente retroalimentación (¡ficticia!) de un cliente sobre cierto pedido en línea: # + id="FvZ-boY-IYaF" text = """Dear Amazon, last week I ordered an Optimus Prime action figure \ from your online store in Germany. Unfortunately, when I opened the package, \ I discovered to my horror that I had been sent an action figure of Megatron \ instead! As a lifelong enemy of the Decepticons, I hope you can understand my \ dilemma. To resolve the issue, I demand an exchange of Megatron for the \ Optimus Prime figure I ordered. Enclosed are copies of my records concerning \ this purchase. I expect to hear from you soon. Sincerely, Bumblebee.""" # + [markdown] id="on7DnMqqJ4FC" # Creamos un wrapper simple para que nuestro texto quede más presentable al imprimirlo: # + colab={"base_uri": "https://localhost:8080/"} id="zBbwmHlMKvaR" outputId="9332dc81-8a31-4f10-c554-61c7f069bf71" import textwrap wrapper = textwrap.TextWrapper(width=80, break_long_words=False, break_on_hyphens=False) print(wrapper.fill(text)) # + [markdown] id="7C62JmHBKziG" # Supongamos que nos gustaría predecir el sentimiento de este texto; si la retroalimentación es positiva o negativa. Este es un tipo especial de clasificación de texto que es muy usado en la industria para agregar opiniones de clientes a través de productos o servicios. El siguiente ejemplo muestra como un Transformer como BERT convierte las entradas (inputs en inglés) en partes más pequeñas llamadas tokens. Los tokens son introducidos en la red para producir una sola predicción. # + [markdown] id="vMzdRLugMb_y" # Alt text that describes the graphic # + [markdown] id="DEN-AdM0MjAl" # Cargar un modelo Transformer para esta tarea es bastante simple. 
Solo necesitamos especificar la tarea en la función `pipeline()` como se muestra a continuación: # + colab={"base_uri": "https://localhost:8080/", "height": 182, "referenced_widgets": ["506a5c17b26b492ea8c31f3d8486b957", "", "", "", "", "8a905eac8be3465184a720e2da11ed16", "", "b1aeec2863b14973922ecc9e84852029", "14c7fc41e54a471b8207c65a4786cbe2", "9b3eb4864d70451baab386ebe8cb0b72", "50c521a61b334522a32815d3f9501e4d", "72df0a22c6d54ce7884e462731e7c6b1", "f3f0feb5f1414379a970c5111d73bed9", "399c64204c42461aad5e2951fe983323", "", "", "2d551775c7dc4d43a2421af89729e0a5", "e313f56450ba42658126c70535e469ff", "e0d6667904864c7cad479358538ff2a4", "", "0f10dace938e4575becbca494b88250d", "", "", "0ffc66ba1afd4349af896b2e3eec4214", "5a63db4f760f4475be6ad3f09d63b7dc", "", "1d3c3275091e449f84fe2ca2cc76976f", "0f6eb039b9dd426592313a140de85ded", "15d418fcf9334a55b058eca8a2657c10", "", "", "", "aac615e039da4063b7392ec1b8f6ad13", "26c954ec8ecf4dc986a29dddb74a796e", "d80c5f30fe3740c09c1bf7fb4eebaf9b", "", "ff053a58463442a58666740e7e2d7d60", "6db4a896ce7d43b0a8bd904136b0d961", "169f9e5522ff4652923bf39162724fee", "7351f39ed4c44ceb834e0ffaf445817b", "347d2df9ec5c4d1194b8ee46b07f2613", "", "ca36401295d54de0958f63a7c7e55ab4", "583fdeeb923c4e96afb2613f3f22edf3"]} id="jmeJRXp7M5Jt" outputId="b4c9f278-10d5-472e-e6dd-b55ac38d48aa" from transformers import pipeline sentiment_pipeline = pipeline('text-classification') # + [markdown] id="LgIndrV0NXEl" # Cuando el código corra verás un mensaje sobre qué modelo del Hub está siendo usado por defecto. En este caso, la función `pipeline()` carga el modelo `distilbert-base-uncased-finetuned-sst-2-english`, el cual es una variante más pequeña de BERT entrenada con [SST-2](https://paperswithcode.com/sota/sentiment-analysis-on-sst-2-binary) el cual es un dataset de análisis de sentimientos. # + [markdown] id="BsbpBDKqREzv" # 💡 La primera vez que ejecutes el código el modelo será descargado automáticamente desde el Hub y se guardará en caché para un uso posterior. # + [markdown] id="tQAreN2IR1vO" # Ahora estamos listos para correr nuestro ejemplo a través de `pipeline()` y ver algunas predicciones: # + colab={"base_uri": "https://localhost:8080/"} id="COL-U3T8SDpN" outputId="c9c602d9-0ef1-4984-ccc6-9db4b77ebad6" sentiment_pipeline(text) # + [markdown] id="bsNTgpH5SJMd" # El modelo predice un sentimiento negativo con una alta confianza. Tiene sentido ya que tenemos un cliente descontento. Puedes ver también que el pipeline retorna una lista de diccionarios de Python con predicciones. También podemos pasar varios textos al mismo tiempo en cuyo caso obtendríamos una lista de diccionarios para cada texto. # + [markdown] id="8ytXKhpHTQ_Q" # ⚡ **¡Tu turno!** Alimenta una lista de textos con diferentes tipos de sentimiento al objeto `sentiment_pipeline`. ¿Las predicciones siempre tienen sentido? # + [markdown] id="L-QEMV7kUFtk" # ## 3. Name Entity Recognitions (reconocimiento de entidades nombradas) # + [markdown] id="X77Uhi-3UGh3" # Vamos a hacer algo más sofisticado. En lugar de solo encontrar el sentimiento general, vamos a ver si podemos extraer **entidades** tales como organizaciones, lugares o individuos desde el texto. Esta tarea es llamada Name Entity Recognition o NER por su nombre corto. 
En vez de predecir solo una clase para todo el texto **una clase se predice por cada token**, tal como se muestra en el siguiente ejemplo: # + [markdown] id="FZSczjI1VTns" # Alt text that describes the graphic # + [markdown] id="kRjS1wZEVX4C" # Nuevamente, vamos a cargar un pipeline para NER sin especificar un modelo. Esto cargará un modelo BERT que ha sido entrenado con el dataset [CoNLL-2003](https://huggingface.co/datasets/conll2003): # + colab={"base_uri": "https://localhost:8080/", "height": 182, "referenced_widgets": ["222574363e1c47fb999a266f82f45397", "3d92ddad50e6418c83418f6f4cc2b64b", "2abb4abdcf024756ba86b52b067a9157", "ef5b4c49e7f745a3a6f7f78ac3367115", "beff9560954c42dda7017ca59d9d06f0", "981e951a7fe64b0ba4336c5c70db8c8b", "c03fa2ce216b4544b41392d145414c0a", "03fb460e13644a2d905959feb2af0950", "f8e7208a8ed24422906d135ff54bec23", "b647a61694b544019587a9fb1598a8e9", "2f07fc00806a430ab8bf40fa0795a5b9", "", "", "71f3bb4e03964408b989a29e8a9e8d6c", "dde3eaa0d6fd44eea57cbb3659ceedbe", "", "", "", "27b44e46ef344cba8758e75ac619e8d2", "", "3d97716fb9ae43f5a6b8a806b7db9b4e", "0adcc01ae5d04cc2a10534c5323ecc74", "f642ed94de5644cb88be5705861e8bb6", "", "", "108f0a8a22e542d28bea8e5f0b6d0bef", "d6d37489916a4d59a49fa3f62f1bd935", "2f64082418154a70886a10202cebae6e", "2f5ecd49d796451b9664e65e283f89c6", "1ef7fe9d53e74ddd8900173e4289711f", "957342d7b9e74e168e3d313fcf47ab04", "", "", "", "", "969421cd763b48c6aa62be27b7eb625a", "0aed65bed6be49b3b13b51e3d182afe3", "caf706ba00dc4e779d33e24383e79807", "", "", "7f7f3354636a478dbd0b8d02b1b637a7", "", "297cafd9618f46f994cea839552e9acc", ""]} id="5Sq8MdctVtO3" outputId="43cf5f7b-5b29-4610-d9dd-52b4c14e1cc0" ner_pipeline = pipeline('ner') # + [markdown] id="gUsdwdfIWEmu" # Cuando pasamos nuestro texto a través del modelo obtenemos una lista larga de diccionarios de Python. Cada diccionario corresponde a una entidad detectada. Como múltiples tokens pueden corresponder a una sola entidad, podemos aplicar una estrategia de agregación que fusiona entidades si la misma clase aparece en tokens consecutivos: # + colab={"base_uri": "https://localhost:8080/"} id="gOj_w1noXOGf" outputId="37e16484-2ca1-4841-96ec-600883633ca5" entities = ner_pipeline(text, aggregation_strategy="simple") print(entities) # + [markdown] id="6Q-945G2XSws" # Esto no es muy fácil de leer así que vamos a limpiar un poco la salida: # + colab={"base_uri": "https://localhost:8080/"} id="0TLYLYKUXd7v" outputId="604d3715-f1c7-4e4c-9471-685415b34dfa" for entity in entities: print(f"{entity['word']}: {entity['entity_group']} ({entity['score']:.2f})") # + [markdown] id="9uo5YVCgXiTH" # ¡Esto es mucho mejor! Tal parece que el modelo encontró la mayoría de entidades nombradas, pero estaba confundido con "Megatron" y "Decepticons", los cuales son personajes de la franquicia de Transformers. Esto no es una sorpresa ya que el dataset original probablemente no contenía muchos personajes de Transformers. Por esta razón tiene sentido ajustar aún más el modelo en su dataset. # # Ahora que hemos visto ejemplos de clasificación de texto y tokens, vamos a ver una aplicación interesante llamada **Question Answering**. # + [markdown] id="aul7JfZzY9m1" # ## 4. Question Answering # + [markdown] id="AjuUSV7sY-sl" # En esta tarea, al modelo se le da una **pregunta** y un **contexto** para así encontrar la respuesta a la pregunta dentro del contexto dado. Este problema se puede reformular como un problema de clasificación: para cada token, el modelo necesita predecir si es el comienzo o el final de la respuesta. 
Al final, podemos extraer la respuesta observando el lapso entre el token con la probabilidad de ser el inicio más alta y la probabilidad de ser el final más alta: # + [markdown] id="R3tarEvPc5Gj" # Alt text that describes the graphic # + [markdown] id="lxatj1euc8F9" # Puedes imaginar que esto requiere un poco de pre- y post-procesamiento. Lo bueno es que el pipeline se encarga de todo eso. Como siempre, cargamos el modelo especificando la tarea en la función `pipeline()`: # + colab={"base_uri": "https://localhost:8080/", "height": 194, "referenced_widgets": ["7bbbbd66c6c245edb79d56c7fb164bd3", "7075de6c42664f96b47624f0f24212fe", "2a26f50837fc4c548ef519cd7135ab0b", "17de462f7f65400d804e55f31ed3c23d", "7535878a983e44b08e3076d730ab8148", "c216a1e831d944beb40e3fade3e0abfc", "", "29884ce8d2874c9ea527864f9a4a0721", "", "", "416d83a087384c8eb5e19f83ce573b1d", "", "527d2e43d9b340028c91ee0cab9ae368", "28e187bc668147548dc8483e814dfa5b", "a0dead6f3c8e4a8a8ba62990269fdcbd", "1c0860d67cb04eaca4286845623017b4", "73009fce3ce54baa8aeb0b2ae372d0e5", "d7f9edaf82784be686c2f67a05d5f30b", "81ca3fae09a148e2a304557390906b33", "459b4202a20e4a5e9c6e0907a1e2094b", "", "", "", "", "", "", "c51d398b97994ed3b5c40afab8694ffe", "f1b7a0bc9e474b20af9821774a9c06e4", "", "", "", "", "b06dc6cc7a3c430ca8a543fa16f7648f", "", "", "", "f063101583ca4901a37c3b59707edc6c", "", "6ec0a6bc5a0b464eb773d4a1c65d2216", "", "", "cf2ef64e38eb486ea2cff9a53947b3c7", "", "ca3719703a8f4b28adf9e1798b985119", "e9f69e4421b44498a22dbe5e027c9fc8", "09dac7ad0610410e9c6c2bea24b60e65", "c7e1cc2ead9b44bdab5a9653da4fdaa3", "", "", "", "f4736a79560d4b68900fedd98afe23d3", "", "", "481055f3065540a6ad745b61e3351096", "2c95b9ab232744c89d3d4f95b315807c"]} id="HpPyKENKdY0p" outputId="313007dd-219c-4e0a-9b71-3a354263de3f" qa_pipeline = pipeline("question-answering") # + [markdown] id="pPuvHI9iechM" # Este modelo por defecto está entrenado con el famoso [SQuAD dataset](https://huggingface.co/datasets/squad). Vamos a ver si podemos preguntarle lo que el cliente quiere: # + colab={"base_uri": "https://localhost:8080/"} id="NxzpTanPeukX" outputId="fd89dbd3-9e2a-4680-e52e-aa4a8796950b" question = "What does the customer want?" outputs = qa_pipeline(question=question, context=text) outputs # + [markdown] id="UWbyiRT8e45U" # Impresionante, ¡eso se ve bien! # + [markdown] id="bX4XxsSZf9o-" # ## 5. Resumen de texto # + [markdown] id="8qa0d-BRf-mQ" # Veamos si podemos ir más allá de estas tareas de comprensión del lenguaje natural (NLU) donde BERT sobresale, Vamos a profundizar en el dominio generativo. Tenga en cuenta que la generación es mucho más exigente desde el punto de vista computacional, ya que generalmente generamos un token a la vez y necesitamos ejecutarlo varias veces. A continuación se muestra un ejemplo de cómo funciona este proceso: # + [markdown] id="RpYTeFhghBeW" # Alt text that describes the graphic # + [markdown] id="ITCMtWVJhGHj" # Una tarea popular que involucra la generación es el resumen. 
Veamos si podemos usar un transformer para generar un resumen para nosotros: # + colab={"base_uri": "https://localhost:8080/", "height": 194, "referenced_widgets": ["5f253b06b8c74dcd9c1e0dedc62d5284", "b02907c28def4e5f9a3e7dda672ab69a", "028bd6bff75145a6ac29568ad2000d0f", "197e820224194123a26b6dd6928b1439", "", "", "aac572e0a423432e97ef4685fb5e9a7d", "fa3ab9ab7bbf463a894046365c716a82", "de09c811cbb14100a70c6c29427dd966", "e14acf952c3341e887759625175a8556", "b4ccb797874e403ba996b2289b08bd04", "00d3c462633040c2a065e70dce5cace2", "c54a0a2f8e1a41b48e9367535a0eab81", "2432d60beda041d98e464d2b832cc21d", "", "ee7305032323481998dc95ede804703f", "4d74dba4244347618bbaf8ec6c344c77", "9676009505f04120b0e537c0eeea303f", "", "", "", "", "", "", "052ea839f6c04149bcee82d38de959bb", "", "2ac752fbe5dc4821917c77c68c334eda", "", "7c36a8168b9148668851d18a337edf4d", "", "", "4f7a753e25c04967a4d5d6077aa66685", "", "fa9ea934ef6340199fba276ddbbe51c1", "84f47a4367224211a71577623f3611a7", "", "e2f6ee26c2aa46f187d1873a7c575270", "283bb9a480ca40848e71c52e25eac295", "", "375e36709be545a99c6598589aa91230", "", "7a53ca25fdcc46479a09ab97a00956a1", "69ad2599e8ea489e9772275e01028e84", "", "7d762ea433a346b096cfe415f82af8ac", "0841a984b4594db788b2e7adc8ba0b3a", "", "", "5447ad3af9fe4878bd715eca78c8fef1", "e55e262cfc09497ca1d1655f490a3d4a", "03af28256b3b4c839f22ee753d8dea8b", "", "", "29a8df0e89a547f2962917209fcbdffa", "1b8689e0343145c08315124c38b68dd1"]} id="_Q3eAvH1hu19" outputId="181099cf-f161-457d-a800-89aedde8706f" summarization_pipeline = pipeline("summarization") # + [markdown] id="6CdIXrXKiHhK" # Este modelo fue entrenado con: [CNN/Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail) para resumir artículos de noticias. # + colab={"base_uri": "https://localhost:8080/"} id="LGCg0v2xiYua" outputId="a34897ee-20f5-4253-c534-df4effe75490" outputs = summarization_pipeline(text, max_length=45, clean_up_tokenization_spaces=True) print(wrapper.fill(outputs[0]['summary_text'])) # + [markdown] id="QT6UqldwinmU" # ¡No está para nada mal! Podemos ver que el modelo fue capaz de obtener la esencia principal de la retroalimentación del cliente e inclusive identificó el autor como "Bumblebee". # + [markdown] id="7qF1t-nRjFWS" # ## 6. Traducción # + [markdown] id="yJydks3VjReg" # ¿Qué sucede si no hay modelos en el mismo idioma de mis datos? Puedes intentar traduciendo el texto. El [Helsinki NLP team](https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=Helsinkie-NLP) ha creado más de 1,000 modelos para traducciones de diferentes pares de idiomas 🤯. 
Acá cargamos un modelo que traduce de inglés a alemán: # + colab={"base_uri": "https://localhost:8080/", "height": 209, "referenced_widgets": ["603a8a0d9d2947c48fa7adbcede10c9a", "5a3adc3d99294f209c5f537aab17e088", "d6eeaa233d03455bb3498b378bbcc9af", "0fcb81bd55614252bb82c30533832199", "8709c8e41d444b2594f65de972094a21", "cecb179beb714b999254b21cc9ac4bc1", "6a28b0b1402c407b9b6734824f0162b4", "4845777c66734ec382d03ffad7111b18", "9cb1b693162d4617b3825bbd93ef2b60", "b7f8d4305db945668902a193230835a9", "14b175ac272b4d0ca94a0157c68c41a4", "df6d1b33b8f34b60b9371ed0e143dfb9", "fa05c5754c7e4c829a93716fd5f47b2e", "cd9630219ea243e1a0c05c6e2ffd9c8d", "", "5cae6f091a5e4fc88583eb987a739a8f", "6e3fe40dc90340c4b9e40176a32bcdb3", "", "", "", "8bf346a656e847a295659ded471dec9f", "8bea69530ba44804b141f3138ba9e7f1", "", "23b547ef10604a64bfbbeff8152fa38e", "e1158a984f1146588e910ea5d6978073", "", "", "", "", "", "567443128df34ce29c9e9842e30a72b7", "", "", "", "", "", "", "fdf1c3e6164a4effa2aba0ed82447dcf", "af3707fa436643c98ae26172ba7b4cfa", "e378c03a0edf41248dd0cebe0725231d", "", "", "ee1ec5e7aa684a5493d9321e33628227", "", "42fd1fb1ad874a40a76206420e31d251", "", "", "", "5f9a3166476541dc8ebb419a4b5292b6", "166b45043d394db09908ab1b1d8fe1d4", "0446d3d7e05e481d9539d695ad21718f", "", "bfdb7eb8ed014748ba0546521d03d11f", "6cc0e34095394d3e8e072a72c10b9bf0", "", "", "5c20186171eb42a2b2a8fd8a0e83d09c", "", "", "", "2f4478b670074bf8a32d0e2121f0466f", "ce1358cd0af04f9faa80ed2dc2fb105f", "58ddf00481994c0391ccaf2b8e999944", "", "341226734a514ea082d60e19b5a43074", "a2827ee49cfd494899c06b5c7ffb3f2e"]} id="8ahc5nuXkhGI" outputId="d0909b24-7519-4bdc-da4e-7ec8f2d072fa" translator = pipeline("translation_en_to_es", model="Helsinki-NLP/opus-mt-en-es") # + [markdown] id="rndZCXu5r5H4" # Vamos a traducir nuestro texto a alemán: # + colab={"base_uri": "https://localhost:8080/"} id="EDjn5ffmspGv" outputId="7d369151-04bc-429f-9389-3403143a7530" outputs = translator(text, clean_up_tokenization_spaces=True, min_length=100) print(wrapper.fill(outputs[0]['translation_text'])) # + [markdown] id="pJ5JmN5CtkqP" # Podemos observar que el texto no está perfectamente traducido pero el significado principal se mantiene. Otra aplicación interesante de modelos de traducción es el aumento de datos por medio de la retro-traducción. # + [markdown] id="cBn2xe6CuZ4K" # ## 7. Zero-shot classification # + [markdown] id="3t1ER7_quphT" # Como un último ejemplo, echemos un vistazo a una aplicación muy interesante mostrando la versatilidad de los Transformers: zero-shot classification. En zero-shot classification los modelos reciben un texto y una lista de etiquetas candidatas. Los modelos determinan qué etiquetas son compatibles con el texto. ¡En lugar de tener clases fijas se permite una clasificación flexible sin ningún dato etiquetado! Por lo general, esta es una buena primera línea de base. 
# + colab={"base_uri": "https://localhost:8080/", "height": 177, "referenced_widgets": ["090076743c5f4bd09d649f52fc72614e", "fa465b0e019641cf9507e4fd0a383223", "5c729571ad8a403d8838d7a6f8fdb1eb", "da8af19e10654dad84c69e085de18909", "896c9df9b32545c883b96a07fddd9b98", "6d442a873ebe471ba3781005245612fe", "4a647914ed0741b7b455acd858fcace2", "43ff464181d44260ae338fb11e754146", "3303c7c82e8243d7896bf51fcf442c44", "e693d058ef494841a875d7a4ac76fba7", "2158029111e6410281b63a1351f56fb4", "3d32ce46fab0495a8bd665f7f2b60c20", "", "", "99286c3cb43745c394ab3ab2a6bce52c", "", "", "629db20833ed4816b7c1eb3045243c0d", "", "", "", "", "", "", "", "", "", "ca7f86f38e0b406693f4ce3e2d157c65", "", "", "", "68b06137bd5849db8e906188de2ee063", "", "27adb05f78a44dcc96270e38fa60a28f", "", "0c7c0664c1744a859000a641898f6379", "b4969a2c03764e099ef8b3f3805c5704", "", "50475fb7816b4a60a1fa756c9250e66d", "", "", "", "1b0a952820cb49af8d0f865903f6f83f", "", "", "a1863f8ebef941e09b09e517c5c322e5", "", "4f18ff8b4a784189849721c336beb462", "17ac9e97fb1b4afda6e8dbe05ea200ef", "54d0482af44e406f99f90f080febce40", "", "2c64e33896a04aeb87ee1bdec47ad50f", "644665d36f6f4fbb87e98ae6ea882a3f", "", "54ac831ac68749e79e3b2ca550e6a1eb"]} id="rVzTatpbxc5h" outputId="1b29e72e-a21c-49f9-ebc4-fa48ba3dccfa" zero_shot_classifier = pipeline("zero-shot-classification", model="vicgalle/xlm-roberta-large-xnli-anli") # + [markdown] id="gR0Wytdpxjon" # Vamos a ver un ejemplo: # + id="XVGP_X3_x-3z" text = 'Dieser Tutorial ist großartig! Ich hoffe, dass jemand von Hugging Face meine Universität besuchen wird :)' classes = ['Treffen', 'Arbeit', 'Digital', 'Reisen'] # + colab={"base_uri": "https://localhost:8080/"} id="Q6-3E9nZ0o_d" outputId="65f465fc-a939-4c3b-8e38-ee1513a56d3e" zero_shot_classifier(text, classes, multi_label=True) # + [markdown] id="FPHvUkqU1fdP" # Parece que todo salió bien en este corto ejemplo. Naturalmente, para ejemplos más largos y concretos, esta aproximación podría no ser tan buena. # + [markdown] id="6IplKKeW2OoF" # ## 8. Yendo más allá del texto # + [markdown] id="yjT_fDNu2RQo" # Como se mencionó al principio de este tutorial, los Transformers pueden ser usados para otros dominios fuera del NLP. Para estos dominios hay muchas más pipelines con los que puedes experimentar. Mira la siguiente lista para obtener una visión general de las tareas que existen: # + colab={"base_uri": "https://localhost:8080/"} id="87YewgS-2tQf" outputId="2cb54f94-8f96-41c3-ca36-82815a1dae00" from transformers import pipelines for task in pipelines.SUPPORTED_TASKS: print(task) # + [markdown] id="5Q6zhwR52xmW" # Vamos a ver una aplicación que involucre imágenes: # + [markdown] id="J4zR27sR23Wb" # ### Computer vision # + [markdown] id="QM1pwuu-24bQ" # Recientemente, los modelos Transformer también han entrado al mundo del computer vision. Revisa el modelo DETR en el [Hub](https://huggingface.co/facebook/detr-resnet-101-dc5): # + [markdown] id="AUbxbozo3WY1" # Alt text that describes the graphic # + [markdown] id="r9MANmAA3bby" # ### Audio # + [markdown] id="xBQRSiax3cQC" # Otra área muy prometedora es el procesamiento de audio. Especialmente Speech2Text ha tenido avances muy prometedores recientemente. Mira por ejemplo el [wav2vec2 model](https://huggingface.co/facebook/wav2vec2-base-960h): # + [markdown] id="45Kkuy9c37To" # Alt text that describes the graphic # + [markdown] id="f7gbNqHS3-e1" # ### Tabla QA # + [markdown] id="8RigtlGa3_Xd" # Finalmente, una gran cantidad de datos del mundo real aún siguen en forma de tablas. 
Poder consultar tablas es muy útil y con [TAPAS](https://huggingface.co/google/tapas-large-finetuned-wtq) puedes realizar preguntas y respuestas de manera tabular: # + [markdown] id="1U-zq5Eo4rRV" # Alt text that describes the graphic # + [markdown] id="iE0xDjHn4uft" # ## 9. ¿Y ahora qué? # + [markdown] id="wfURecKT4vfJ" # Esperamos que este tutorial te haya dado una idea de lo que los Transformers pueden hacer y que te emocione aprender más. Acá te presentamos algunos recursos que puedes usar para profundizar en el tema así como en el ecosistema de Hugging Face. # # 🤗 **Un tour a través del Hub de Hugging Face** # # En este tutorial llegas a: # - Explorar los más de 30,000 modelos compartidos en el Hub. # - Aprender a encontrar el modelo y datasets adecuados de manera eficiente para tus propias tareas. # - Aprender como contribuir y trabajar de manera colaborativa en tus fluhos de trabajo ML. # # ***Duración: 20-40 minutos*** # # 👉 [Click acá para acceder al tutorial](https://github.com/huggingface/education-toolkit/blob/main/tutorials/ES/01_tour_hub_de_huggingface.md) # # ✨ **Construye y hospeda demos de Machine Learning con gradio y Hugging Face** # # En este tutorial llegas a: # - Explorar ML demos creados por la comunidad. # - Construir un demo rápido para tu modelo de Machine Learning en Python usando la librería `gradio`. # - Hospedar los demos de manera gratuita con Hugging Face Spaces. # - Agregar tu demo a Hugging Face Org para tus clases o conferencias. # # ***Duración: 20-40 minutos*** # # 👉 [Click aquí para acceder al tutorial](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/tutorials/ES/02_ml-demos-con-gradio.ipynb) # # 🎓 **Curso Hugging Face** # # Este curso te enseña cómo aplicar Transformers a varias tareas en el procesamiento de lenguaje natural y demás. A lo largo del camino, aprenderás como usar el ecosistema Hugging Face — 🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, y 🤗 Accelerate — así como el Hub de. ¡Es completamente gratis! # # 👉 [Da click aquí para acceder al 🤗 Curso](https://huggingface.co/course/chapter1/1). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Explain the use of seeds in generating pseudorandom numbers. # # # Seeds # ### numpy.random.RandomState # # The data generation functions in numpy.random use a global random seed. To avoid # global state, you can use numpy.random.RandomState to create a random number # generator isolated from others # # numpy.random.RandomState is a container for the Mersenne Twister pseudo-random number generator. # # A pseudorandom number generator (PRNG), also known as a deterministic random bit generator (DRBG),[1] is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. import numpy as np rng = np.random.RandomState(1234) rng.randn(10) # ### numpy.random.seed # # This method is called when RandomState is initialized. 
# If you set the np.random.seed(a_fixed_number) every time you call the numpy's other random function, the result will be the same: # np.random.seed(0) makes the random numbers predictable np.random.seed(0) np.random.rand(4) # + # np.random.seed(0) makes the random numbers predictable np.random.seed(0) np.random.rand(4) # - # ### Use random seed and shuffle method together # + numbers = [10, 20, 30, 40, 50, 60] print ("Original list: ", numbers ) np.random.seed(4) np.random.shuffle(numbers) print("Reshuffled list ", numbers) numbers = [10, 20, 30, 40, 50, 60] np.random.seed(4) np.random.shuffle(numbers) print("reshuffled list ", numbers) # - # Reference: # # https://en.wikipedia.org/wiki/Mersenne_Twister # # https://en.wikipedia.org/wiki/Pseudorandom_number_generator # # https://pynative.com/python-random-seed/ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tracking the Smoke Caused by the fires # # In this example we show how to use [HRRR Smoke Experimental dataset](https://data.planetos.com/datasets/noaa_hrrr_wrf_smoke) to analyse smoke in the US and we will also download historical fire data from [Cal Fire](https://www.fire.ca.gov/incidents) web page to visualize burned area since 2013. # # [The High-Resolution Rapid Refresh Smoke (HRRRSmoke)](https://data.planetos.com/datasets/noaa_hrrr_wrf_smoke) is a three-dimensional model that allows simulation of mesoscale flows and smoke dispersion over complex terrain, in the boundary layer and aloft at high spatial resolution over the CONUS domain. The smoke model comprises a suite of fire and environmental products for forecasters during the fire weather season. Products derived from the HRRRSmoke model include the Fire Radiative Power (FRP), Near-Surface Smoke (PM2.5), and Vertically Integrated Smoke, to complement the 10-meter winds, 1-hour precipitation, 2-meter temperature and surface visibility experimental forecast products. Keep in mind, that this dataset is EXPERIMENTAL. Therefore, they should not be used to make decisions regarding safety of life or property. # # HRRR Smoke has many different weather parameters, two of them are directly smoke related - Column-integrated mass density and Mass density (concentration) @ Specified height level above ground. First is vertically-integrated smoke and second smoke on the lowest model level (8 m). # # As we would like to have data about the Continental United States we will download data by using Package API. Then we will create a widget where you can choose timestamp by using a slider. After that, we will also save the same data as a GIF to make sharing the results with friends and colleagues more fun. And finally, we will compare smoke data with CAMS particulate matter data to find out if there's a colleration between them as HRRR Smoke is still experimental. 
# %matplotlib notebook # %matplotlib inline import numpy as np import dh_py_access.lib.datahub as datahub import xarray as xr import matplotlib.pyplot as plt import ipywidgets as widgets from mpl_toolkits.basemap import Basemap,shiftgrid import dh_py_access.package_api as package_api import matplotlib.colors as colors import warnings import datetime import shutil import imageio import seaborn as sns import pandas as pd import os import matplotlib as mpl import wget warnings.filterwarnings("ignore") # Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key! server = 'api.planetos.com' API_key = open('APIKEY').readlines()[0].strip() #'' version = 'v1' # At first, we need to define the dataset name and a variable we want to use. dh = datahub.datahub(server,version,API_key) dataset = 'noaa_hrrr_wrf_smoke' variable_name1 = 'Mass_density_concentration_height_above_ground' # Then we define spatial range. We decided to analyze US, where unfortunately catastrofic wildfires are taking place at the moment and influeces air quality. # + # reftime = datetime.datetime.strftime(datetime.datetime.today(), '%Y-%m-%d') + 'T00:00:00' area_name = 'usa' today_hr = datetime.datetime.strftime(datetime.datetime.today(),'%Y%m%dT%H') latitude_north = 49; longitude_west = -127 latitude_south = 26; longitude_east = -70.5 # - # ### Download the data with package API # # 1. Create package objects # 2. Send commands for the package creation # 3. Download the package files package_hrrr = package_api.package_api(dh,dataset,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name+today_hr) package_hrrr.make_package() package_hrrr.download_package() # ### Work with the downloaded files # # We start with opening the files with xarray. After that, we will create a map plot with a time slider, then make a GIF using the images, then we will do the same thing for closer area - California; and finally, we will download csv file about fires in California to visualize yearly incidents data as a bar chart. dd1 = xr.open_dataset(package_hrrr.local_file_name) dd1['longitude'] = ((dd1.lon+180) % 360) - 180 dd1[variable_name1].data[dd1[variable_name1].data < 0] = 0 dd1[variable_name1].data[dd1[variable_name1].data == np.nan] = 0 # Here we are making a Basemap of the US that we will use for showing the data. m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4, resolution = 'h', area_thresh = 0.05, llcrnrlon=longitude_west, llcrnrlat=latitude_south, urcrnrlon=longitude_east, urcrnrlat=latitude_north) lons,lats = np.meshgrid(dd1.longitude.data,dd1.lat.data) lonmap,latmap = m(lons,lats) # Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider. # # As the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better. # # On the map we can see that the areas near fires have more smoke, but it travels pretty far. Depending on when the notebook is run, we can see very different results. # # But first we define minimum, maximum and also colormap. 
# + vmax = np.nanmax(dd1[variable_name1].data) vmin = 2 cmap = mpl.cm.twilight.colors[:-100] tmap = mpl.colors.LinearSegmentedColormap.from_list('twilight_edited', cmap) # - def loadimg(k): fig=plt.figure(figsize=(10,7)) ax = fig.add_subplot(111) pcm = m.pcolormesh(lonmap,latmap,dd1[variable_name1].data[k][0], norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = tmap) ilat,ilon = np.unravel_index(np.nanargmax(dd1[variable_name1].data[k][0]),dd1[variable_name1].data[k][0].shape) cbar = plt.colorbar(pcm,fraction=0.024, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3]) cbar.ax.set_yticklabels([0,10,100,1000]) ttl = plt.title('Near Surface Smoke ' + str(dd1[variable_name1].time[k].data)[:-10],fontsize=20,fontweight = 'bold') ttl.set_position([.5, 1.05]) cbar.set_label(dd1[variable_name1].units) m.drawcountries() m.drawstates() m.drawcoastlines() print("Maximum: ","%.2f" % np.nanmax(dd1[variable_name1].data[k][0])) plt.show() widgets.interact(loadimg, k=widgets.IntSlider(min=0,max=len(dd1[variable_name1].data)-1,step=1,value=0, layout=widgets.Layout(width='100%'))) # Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images. loadimg(9) # With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others. def make_ani(m,lonmap,latmap,aniname,smaller_area=False): if smaller_area==True: fraction = 0.035 fontsize = 13 else: fraction = 0.024 fontsize = 20 folder = './anim/' for k in range(len(dd1[variable_name1])): filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png' if not os.path.exists(filename): fig=plt.figure(figsize=(10,7)) ax = fig.add_subplot(111) pcm = m.pcolormesh(lonmap,latmap,dd1[variable_name1].data[k][0], norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = tmap) m.drawcoastlines() m.drawcountries() m.drawstates() cbar = plt.colorbar(pcm,fraction=fraction, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3]) cbar.ax.set_yticklabels([0,10,100,1000]) ttl = plt.title('Near Surface Smoke ' + str(dd1[variable_name1].time[k].data)[:-10],fontsize=fontsize,fontweight = 'bold') ttl.set_position([.5, 1.05]) cbar.set_label(dd1[variable_name1].units) ax.set_xlim() if not os.path.exists(folder): os.mkdir(folder) plt.savefig(filename,bbox_inches = 'tight',dpi=150) plt.close() files = sorted(os.listdir(folder)) images = [] for file in files: if not file.startswith('.'): filename = folder + file images.append(imageio.imread(filename)) kargs = { 'duration': 0.3,'quantizer':2,'fps':5.0} imageio.mimsave(aniname, images, **kargs) print ('GIF is saved as {0} under current working directory'.format(aniname)) shutil.rmtree(folder) make_ani(m,lonmap,latmap,'hrrr_smoke.gif') # As we are interested in California fires right now, it would make sense to make animation of only California area as well. So people can be prepared when smoke hits their area. The model has pretty good spatial resolution as well - 3 km, which makes tracking the smoke easier. # + latitude_north_cal = 43; longitude_west_cal = -126. latitude_south_cal = 30.5; longitude_east_cal = -113 m2 = Basemap(projection='merc', lat_0 = 55, lon_0 = -4, resolution = 'h', area_thresh = 0.05, llcrnrlon=longitude_west_cal, llcrnrlat=latitude_south_cal, urcrnrlon=longitude_east_cal, urcrnrlat=latitude_north_cal) lons2,lats2 = np.meshgrid(dd1.longitude.data,dd1.lat.data) lonmap_cal,latmap_cal = m2(lons2,lats2) # - make_ani(m2,lonmap_cal,latmap_cal,'hrrr_smoke_california.gif',smaller_area=True) # Finally, we will remove the package we downloaded. 
os.remove(package_hrrr.local_file_name) # # Data about Burned Area from Cal Fire # Now we will download csv file from Cal Fire web page and illustrate how many acres each year was burnt since 2013. if not os.path.exists('acres_burned.csv'): wget.download('https://www.fire.ca.gov/imapdata/mapdataall.csv',out='acres_burned.csv') datain = pd.read_csv('acres_burned.csv') # Here we convert `incident_dateonly_created` column to datetime, so it's easier to group data by year. datain['incident_dateonly_created'] = pd.to_datetime(datain['incident_dateonly_created']) # Below you can see the data from `acres_burned.csv` file. It has information about each incident. This time we only compute total acres burned each year. datain # Computing yearly sums. In some reason there's many years without much data, so we will filter it out. Also, reseting index, as we don't want dates to be as an index and making year column. burned_acres_yearly = datain.resample('1AS', on='incident_dateonly_created')['incident_acres_burned'].sum() burned_acres_yearly = burned_acres_yearly[burned_acres_yearly.index > datetime.datetime(2012,1,1)] burned_acres_yearly = burned_acres_yearly.reset_index() burned_acres_yearly['year'] = pd.DatetimeIndex(burned_acres_yearly.incident_dateonly_created).year # We can see the computed data below. burned_acres_yearly # Finally we will make a bar chart of the data. We are using [seaborn](https://seaborn.pydata.org/) this time for plotting the data and to visualize it better, we added colormap to bar chart as well. # Image will be saved into the working directory. # + fig,ax = plt.subplots(figsize=(10,6)) pal = sns.color_palette("YlOrRd_r", len(burned_acres_yearly)) rank = burned_acres_yearly['incident_acres_burned'].argsort().argsort() sns.barplot(x='year',y='incident_acres_burned',data=burned_acres_yearly,ci=95,ax=ax,palette=np.array(pal[::-1])[rank]) ax.set_xlabel('Year',fontsize=15) ax.set_ylabel('Burned Area [acres]',fontsize=15) ax.grid(color='#C3C8CE',alpha=1) ax.set_axisbelow(True) ax.spines['bottom'].set_color('#C3C8CE') ax.spines['top'].set_color('#C3C8CE') ax.spines['left'].set_color('#C3C8CE') ax.spines['right'].set_color('#C3C8CE') ttl = ax.set_title('Burned Area in California',fontsize=20,fontweight = 'bold') ttl.set_position([.5, 1.05]) ax.tick_params(labelsize=15,length=0) plt.savefig('acres_burned_cali.png',dpi=300) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="H0vxTsY2PoRY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="0bfafda8-5368-4544-ebd1-25d77a6b2bca" import pandas as pd import numpy as np from sklearn import manifold import matplotlib.pyplot as plt X = [[1,1],[2,1],[2,2],[3,2]] X=np.array(X) plt.scatter(X[:,0],X[:,1]) plt.show() # + id="xXrF7ik5PvEZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="a57b489e-8a8b-4c8e-b2f3-78d7f6082d12" from sklearn.metrics.pairwise import euclidean_distances euc_dis_original=euclidean_distances(X, X) print(euc_dis_original) # + id="C_3lGzePPzyb" colab_type="code" colab={} mds = manifold.MDS(n_components=2, dissimilarity="euclidean", n_init=100, max_iter=1000, random_state=10) results = mds.fit(X) # + id="KHBRllgjP-0j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 685} 
outputId="234a6954-72a1-4db9-f657-ddcb283161bc" coords = results.embedding_ euc_dis_mod=euclidean_distances(coords, coords) print(euc_dis_mod) fig = plt.figure(figsize=(12,10)) plt.subplots_adjust(bottom = 0.1) plt.scatter(coords[:, 0], coords[:, 1]) plt.show() # + id="VqYSp4zYQBBb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="9a2e8e59-81e5-47e0-986e-a8e35e370aa9" plt.scatter(euc_dis_original, euc_dis_mod) plt.show() # + id="9sLu8amtQJo5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="c900491f-c7db-4c79-d84e-1651e4ed3b14" plt.plot(euc_dis_original, euc_dis_mod) plt.show() # + id="XaH1K6ppQQZw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="51cc25e5-6fe1-46a1-d019-8142d6562206" print(euc_dis_original-euc_dis_mod) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="QQg3OSN80oFU" # # **Linear regression** # ## 1. **Abstract** # # # The insurance risk are important point of estimation for the insurance company. The aim of the notebook is to find the factors affecting the this and creating a regression model to predict this. # # ## Linear Models # # Linear regression predicts the response variable $y$ assuming it has a linear relationship with predictor variable(s) $x$ or $x_1, x_2, ,,, x_n$. # # $$y = \beta_0 + \beta_1 x + \varepsilon .$$ # # *Simple* regression use only one predictor variable $x$. *Mulitple* regression uses a set of predictor variables $x_1, x_2, ,,, x_n$. # # The *response variable* $y$ is also called the regressand, forecast, dependent or explained variable. The *predictor variable* $x$ is also called the regressor, independent or explanatory variable. # # The parameters $\beta_0$ and $\beta_1$ determine the intercept and the slope of the line respectively. The intercept $\beta_0$ represents the predicted value of $y$ when $x=0$. The slope $\beta_1$ represents the predicted increase in $Y$ resulting from a one unit increase in $x$. # # Note that the regression equation is just our famliar equation for a line with an error term. # # The equation for a line: # $$ Y = bX + a $$ # # $$y = \beta_0 + \beta_1 x $$ # # The equation for a line with an error term: # # $$ Y = bX + a + \varepsilon $$ # # $$y = \beta_0 + \beta_1 x + \varepsilon .$$ # # - $b$ = $\beta_1$ = slope # - $a$ = $\beta_0$ = $Y$ intercept # - $\varepsilon$ = error term # # # We can think of each observation $y_i$ consisting of the systematic or explained part of the model, $\beta_0+\beta_1x_i$, and the random *error*, $\varepsilon_i$. # # # # + [markdown] id="JWwRbuTo06M8" # ## **2. Importing necessary libraries** # + id="J4h5qR_10XK0" import numpy as np from matplotlib import pyplot import pandas as pd import seaborn as sns from sklearn.ensemble import RandomForestRegressor import matplotlib.pylab as plt # + [markdown] id="ATNhqw5wGN_f" # ## **3. 
Data Loading and Preprocessing** # # Data is stored in the our github # + id="QiPf7PTW1BAF" import pandas as pd url = 'https://raw.githubusercontent.com/abhi-gm/Machine-Learning-Workshop/main/Datasets/insurance.csv' data = pd.read_csv(url, error_bad_lines=False) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="x2As02pj4_s1" outputId="8eff3e67-84b9-4185-d39d-a9f245a6de76" data.head() # + [markdown] id="KWIj1rKDGeQQ" # ## **3.1. Encoding the sex and smoker** # # ### **Sex** # # Male = 0 # # Female = 1 # # ### **Smoker** # # Yes = 1 # # No = 0 # + colab={"base_uri": "https://localhost:8080/", "height": 676} id="ZF_kZsKBGeby" outputId="ff96e022-a5a2-4bc5-aa24-4983e1b38699" data['sex'] = data.sex.map({'male':0, 'female':1}) data['smoker'] = data.smoker.map({'no':0, 'yes':1}) data.head(20) # + [markdown] id="DNiqQes-LRwZ" # Identfying the Unique values in the Region column # + colab={"base_uri": "https://localhost:8080/"} id="LbiN02IuHKMh" outputId="0936c6b8-3fea-4a3d-a651-d63098273e1c" data.region.unique() # + id="76bLnvIfHNuH" import numpy as np from sklearn.preprocessing import OneHotEncoder # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="Zk0lKxG8HUl2" outputId="2dce35d4-8781-4643-8719-5685c04f014c" # creating instance of one-hot-encoder enc = OneHotEncoder() # passing bridge-types-cat column (label encoded values of bridge_types) enc_df = pd.DataFrame(enc.fit_transform(data[['region']]).toarray()) enc_df enc_df.columns = ['northeast','northwest','southeast','southwest'] enc_df.apply(np.int64) data =data.join(enc_df) data=data.drop(['region'],axis=1) data.head() # + [markdown] id="lP2CdM5vHnd9" # ## **3.3. Normalizing the data** # + id="Txma206iHipV" from sklearn import preprocessing # Create x, where x the 'scores' column's values as floats x = data[['charges','bmi','age']].values.astype(float) # Create a minimum and maximum processor object min_max_scaler = preprocessing.MinMaxScaler() # Create an object to transform the data to fit minmax processor x_scaled = min_max_scaler.fit_transform(x) # Run the normalizer on the dataframe data[['charges','bmi','age']] = pd.DataFrame(x_scaled) # + colab={"base_uri": "https://localhost:8080/", "height": 676} id="gb-E4DcYHudK" outputId="14c028d7-a3e0-483d-cd6c-f161e93a3dd0" #looking at data head after adding dummy variables and nromalizing data.head(20) # + colab={"base_uri": "https://localhost:8080/", "height": 362} id="lE-DLZjFH1vu" outputId="0e4e3799-e185-481f-9ff7-ef7401123184" #finding the correlation between all the features in the data data.corr() # + colab={"base_uri": "https://localhost:8080/", "height": 449} id="bzK3BurtH5mV" outputId="09740940-d867-42a5-c5f9-afd88bb83970" #plotting the heat map of the correlation plt.figure(figsize=(20,7)) sns.heatmap(data.corr(), annot=True) # + [markdown] id="O5HSlrAWH_Vw" # ## **4. 
Ordinary Least Squares** # Ordinary Least Squares (OLS) is the most common estimation method for linear models # + colab={"base_uri": "https://localhost:8080/", "height": 655} id="3vw1-9S1H8nF" outputId="98a575fa-35fa-4dfa-c29b-6df8e8acc735" #Using OLS for finding the p value and t statistics import statsmodels.api as sm model = sm.OLS(data['charges'], data[['age', 'sex', 'bmi', 'children', 'smoker','northeast', 'northwest', 'southeast', 'southwest']]).fit() # Print out the statistics model.summary() # + [markdown] id="7JV4kcKGxMDC" # ### OLS after removing insignificant feature "sex" # + colab={"base_uri": "https://localhost:8080/", "height": 579} id="SzoZiR0CDPoQ" outputId="20253f7a-2dac-4ec6-dfc2-5973f56873ae" model = sm.OLS(data['charges'], data[['age','bmi', 'children', 'smoker','northeast', 'northwest', 'southeast', 'southwest']]).fit() # Print out the statistics model.summary() # + [markdown] id="mXAhK24KIKeW" # ## **5. Pair Plot** # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="P8Liot0SIG_Z" outputId="27205f41-3172-40ea-e1a0-df8153a46604" #pair plot plt.figure(figsize=(12,10)) sns.pairplot(data) # + [markdown] id="21STMwKzIWNb" # ## **6. Train ,Validation and Test split** # # Data is split into 3 parts # # Taining data set = 80.75% # # Validation data set = 14.25% # # Test data set = 5% # + id="9AlAFk-ZIZRr" from sklearn.model_selection import train_test_split X = data[ ['age', 'bmi', 'children', 'smoker','northeast','northwest', 'southeast', 'southwest']] y = data['charges'] X_t, X_test, y_t, y_test = train_test_split(X, y, test_size=0.05, random_state=1) X_train, X_val, y_train, y_val = train_test_split(X_t, y_t, test_size=0.15, random_state=1) # + [markdown] id="V73zPAlwIeii" # ## **7. Linear Regression** # + id="OqLbVlc_Ib62" from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score from sklearn.linear_model import LinearRegression from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score # + id="Bfis7mzhIlDu" # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets model = regr.fit(X_train,y_train) # + colab={"base_uri": "https://localhost:8080/"} id="4EAWRWveJF2T" outputId="4726705f-8759-469d-b2bf-e1fc78d4ed64" #Validation set used for tunning the hyperparmeter # Make predictions using the Validation sets y_pred = regr.predict(X_val) # The mean squared error print('Mean squared error: %.2f'% mean_squared_error(y_val, y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f'% r2_score(y_val, y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="sh71aKv6IrRm" outputId="c3ade4ac-28af-46ff-e4e5-dff47a81eb66" # Make predictions using the testing set y_pred = regr.predict(X_test) # The mean squared error print('Mean squared error on the Test dataset : %.2f'% mean_squared_error(y_test, y_pred)) # The coefficient of determination: 1 is perfect prediction print('R2 score for Test dataset : %.2f'% r2_score(y_test, y_pred)) # + [markdown] id="YIyWs3quI63t" # ## **7. Conclusion** # The linear regression model is trained and interpreted based on p-value and those features with a p-value above 0.05 dropped as they do not affect. # + [markdown] id="ccJkMOp-I-F6" # ## **8. 
Refrence** # https://www.statsmodels.org/stable/regression.html # # Copyright 2020 # # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Generative Adversarial Networks # # :label:`chapter_gans` # ```toc # :maxdepth: 2 # # gan # dcgan # ``` # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #

#

# ## *Data Science Unit 4 Sprint 2*
#
# # Sprint Challenge - Neural Network Foundations
#
# Table of Problems
#
# 1. [Defining Neural Networks](#Q1)
# 2. [Chocolate Gummy Bears](#Q2)
#     - Perceptron
#     - Multilayer Perceptron
# 3. [Keras MMP](#Q3)
#
# ## 1. Define the following terms:
#
# - **Neuron:** a unit in a neural network that receives inputs from other nodes or an external source and computes an output
# - **Input Layer:** the data collected for each of the independent variables (X1, X2, X3 and so on)
# - **Hidden Layer:** the layers between the input and output that, over successive iterations, define the weights and biases applied to the inputs to achieve the desired output
# - **Output Layer:** the information returned to us after defining the input and adjusting how the hidden layers work
# - **Activation:** a non-linear function that takes the inputs of a node and defines its output
# - **Backpropagation:** the procedure that adjusts the weights effectively by propagating the error backwards, so the network learns from its mistakes and improves its score
#
# ## 2. Chocolate Gummy Bears
#
# Right now, you're probably thinking, "yuck, who the hell would eat that?". Great question. Your candy company wants to know too. And you thought I was kidding about the [Chocolate Gummy Bears](https://nuts.com/chocolatessweets/gummies/gummy-bears/milk-gummy-bears.html?utm_source=google&utm_medium=cpc&adpos=1o1&gclid=Cj0KCQjwrfvsBRD7ARIsAKuDvMOZrysDku3jGuWaDqf9TrV3x5JLXt1eqnVhN0KM6fMcbA1nod3h8AwaAvWwEALw_wcB).
#
# Let's assume that a candy company has gone out and collected information on the types of Halloween candy kids ate. Our candy company wants to predict the eating behavior of witches, warlocks, and ghosts -- aka costumed kids. They shared a sample dataset with us. Each row represents a piece of candy that a costumed child was presented with during "trick" or "treat". We know if the candy was `chocolate` (or not chocolate) or `gummy` (or not gummy). Your goal is to predict if the costumed kid `ate` the piece of candy.
#
# If both chocolate and gummy equal one, you've got a chocolate gummy bear on your hands!?!?!
# ![Chocolate Gummy Bear](https://ed910ae2d60f0d25bcb8-80550f96b5feb12604f4f720bfefb46d.ssl.cf1.rackcdn.com/3fb630c04435b7b5-2leZuM7_-zoom.jpg)

# + inputHidden=false jupyter={"outputs_hidden": false} outputHidden=false
import pandas as pd

candy = pd.read_csv('chocolate_gummy_bears.csv')

# + inputHidden=false jupyter={"outputs_hidden": false} outputHidden=false
candy.head()
# -

# ### Perceptron
#
# To make predictions on the `candy` dataframe, build and train a Perceptron using numpy. Your target column is `ate` and your features are `chocolate` and `gummy`. Do not do any feature engineering. :P
#
# Once you've trained your model, report your accuracy. You will not be able to achieve more than ~50% with the simple perceptron. Explain why you could not achieve a higher accuracy with the *simple perceptron* architecture, even though it's possible to achieve ~95% accuracy on this dataset. Provide your answer in markdown (and *optional* data analysis code) after your perceptron implementation.
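#
# Before the implementation below, here is a minimal, self-contained sketch on synthetic XOR-style data (not the candy CSV; the exclusive-or labelling is an assumption used only for illustration) showing why a single linear unit stalls on this kind of problem:

# +
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.integers(0, 2, size=(1000, 2))       # two binary features, like chocolate / gummy
y_demo = X_demo[:, 0] ^ X_demo[:, 1]              # assumed XOR-style label, for illustration only

# Classic perceptron rule: one linear unit with a step activation
w = np.zeros(2)
b = 0.0
for _ in range(20):
    for xi, yi in zip(X_demo, y_demo):
        pred = int(xi @ w + b > 0)
        update = yi - pred
        w = w + update * xi
        b = b + update

acc = np.mean(((X_demo @ w + b) > 0).astype(int) == y_demo)
print(f"Single-layer perceptron accuracy on XOR-style data: {acc:.2f}")
# No single linear boundary classifies all four feature combinations correctly,
# so accuracy stays well below what a network with a hidden layer can reach.
# -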
# + inputHidden=false jupyter={"outputs_hidden": false} outputHidden=false # Start your candy perceptron here X = candy[['chocolate', 'gummy']].values y = candy['ate'].values # + import numpy as np def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivate(x): sx = sigmoid(x) return( sx * (1-sx)) class Perceptron(object): def __init__(self, niter = 1): self.niter = niter def fit(self, X, y): inputs = X correct_outputs = y.reshape(-1,1) weights = 2 * np.random.random((X.shape[1],1)) - 1 # bias = 2 * np.random.random((len(X),1)) - 1 for i in range(self.niter): weighted_sum = np.dot(inputs, weights)# + bias activated_output = sigmoid(weighted_sum) error = correct_outputs - activated_output adjustments = error * sigmoid_derivate(activated_output) #bias += error weights += np.dot(inputs.T, adjustments) self.activated_output = activated_output self.weights = weights #self.bias = bias def predict(self, X): weighted_sum = np.dot(X, self.weights) #+ self.bias return sigmoid(weighted_sum) # - nn = Perceptron() nn.fit(X,y) # + pred = nn.predict(X) n_pred = [] for i in pred: n_pred.append(int(i)) from sklearn.metrics import accuracy_score as acs acs(y, n_pred) # - # a simple perceptron does not have the bias nor does it have weights updated effictively. # ### Multilayer Perceptron # # Using the sample candy dataset, implement a Neural Network Multilayer Perceptron class that uses backpropagation to update the network's weights. Your Multilayer Perceptron should be implemented in Numpy. # Your network must have one hidden layer. # # Once you've trained your model, report your accuracy. Explain why your MLP's performance is considerably better than your simple perceptron's on the candy dataset. # + inputHidden=false jupyter={"outputs_hidden": false} outputHidden=false class NeuralNetwork: def __init__(self): self.inputs = 2 self.hiddenNodes = 3 self.outputNodes = 1 self.weights1 = np.random.rand(self.inputs, self.hiddenNodes) self.weights2 = np.random.rand(self.hiddenNodes, self.outputNodes) def sigmoid(self, s): return 1 / (1+np.exp(-s)) def sigmoidPrime(self, s): return s * (1 - s) def feed_forward(self, X): self.hidden_sum = np.dot(X, self.weights1) self.activated_hidden = self.sigmoid(self.hidden_sum) self.output_sum = np.dot(self.activated_hidden, self.weights2) self.activated_output = self.sigmoid(self.output_sum) return self.activated_output def backward(self, X,y,o): self.o_error = y - o self.o_delta = self.o_error * self.sigmoidPrime(o) self.z2_error = self.o_delta.dot(self.weights2.T) self.z2_delta = self.z2_error * self.sigmoidPrime(self.activated_hidden) self.weights1 += X.T.dot(self.z2_delta) self.weights2 += self.activated_hidden.T.dot(self.o_delta) def train(self, X, y): o = self.feed_forward(X) self.backward(X,y,o) nn = NeuralNetwork() y.reshape(-1,1) # - for i in range(10000): if (i in [a for a in range(7)]) or ((i + 1) % 5000 == 0): print('+' + '---' * 3 + f'EPOCH {i+1}' + '---'*3 + '+') print('Input: \n', X) print('Actual Output: \n', y.reshape(-1,1)) print('Predicted Output: \n', str(nn.feed_forward(X))) print("Loss: \n", str(np.mean(np.square(y.reshape(-1,1) - nn.feed_forward(X))))) nn.train(X,y.reshape(-1,1)) # + def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivate(x): sx = sigmoid(x) return( sx * (1-sx)) class Perceptron(object): def __init__(self, niter = 500): self.niter = niter def fit(self, X, y): inputs = X correct_outputs = y.reshape(-1,1) weights = 2 * np.random.random((X.shape[1],1)) - 1 bias = 2 * np.random.random((len(X),1)) - 1 for i in 
range(self.niter): weighted_sum = np.dot(inputs, weights) + bias activated_output = sigmoid(weighted_sum) error = correct_outputs - activated_output adjustments = error * sigmoid_derivate(activated_output) bias += error weights += np.dot(inputs.T, adjustments) self.activated_output = activated_output self.weights = weights self.bias = bias def predict(self, X): weighted_sum = np.dot(X, self.weights) + self.bias return sigmoid(weighted_sum) nn = Perceptron() nn.fit(X,y) pred = nn.predict(X) n_pred = [] for i in pred: n_pred.append(int(i)) acs(y,n_pred) # - # Backpropogation helps weights adjust effectively after learning the mistake enhance to a better score (+95%) # P.S. Don't try candy gummy bears. They're disgusting. # ## 3. Keras MMP # # Implement a Multilayer Perceptron architecture of your choosing using the Keras library. Train your model and report its baseline accuracy. Then hyperparameter tune at least two parameters and report your model's accuracy. # Use the Heart Disease Dataset (binary classification) # Use an appropriate loss function for a binary classification task # Use an appropriate activation function on the final layer of your network. # Train your model using verbose output for ease of grading. # Use GridSearchCV or RandomSearchCV to hyperparameter tune your model. (for at least two hyperparameters) # When hyperparameter tuning, show you work by adding code cells for each new experiment. # Report the accuracy for each combination of hyperparameters as you test them so that we can easily see which resulted in the highest accuracy. # You must hyperparameter tune at least 3 parameters in order to get a 3 on this section. # + inputHidden=false jupyter={"outputs_hidden": false} outputHidden=false import pandas as pd from sklearn.preprocessing import StandardScaler df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/heart.csv') df = df.sample(frac=1) print(df.shape) df.head() # - # !pip install keras # !pip install tensorflow # + import sklearn from sklearn.model_selection import GridSearchCV, train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras.wrappers.scikit_learn import KerasClassifier X = df.iloc[:,0:14] y = df.values[:,-1].astype('bool') print(X.shape,y.shape) # - names = X.columns# Create the Scaler object scaler = StandardScaler()# Fit your data on the scaler object scaled_X = scaler.fit_transform(X) X = pd.DataFrame(scaled_X, columns=names) df.target.value_counts(normalize = True) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) # + #Frame the model import numpy as np # Plant a random seed for reproducibility seed = 1 np.random.seed(seed) # Function to create model, required for KerasClassifier def create_model(): # create model - Sigmoid activation function for binary/boolean type model = Sequential() model.add(Dense(24, input_dim=14, activation='sigmoid')) model.add(Dropout(rate = .3)) model.add(Dense(12, activation='sigmoid')) model.add(Dense(1, activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model # create model; verbose? 
Yes model = KerasClassifier(build_fn=create_model, verbose=1) # define the grid search parameters; Bests: batch_size = , epochs = param_grid = {'batch_size': [1,10,30,60,120], 'epochs': [10] } # Create Grid Search grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1) grid_result = grid.fit(X_train, y_train) # Report Results - best accuracy = print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}") means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(f"Means: {mean}, Stdev: {stdev} with: {param}") # + #trying a higher batch size for better accuracy # define the grid search parameters; Bests: batch_size = , epochs = param_grid = {'batch_size': [120], 'epochs': [10, 20, 40, 80, 160] } # Create Grid Search grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1) grid_result = grid.fit(X_train, y_train) # Report Results - best accuracy = print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}") means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(f"Means: {mean}, Stdev: {stdev} with: {param}") # + #refining epoch values # define the grid search parameters; Bests: batch_size = , epochs = param_grid = {'batch_size': [120], 'epochs': [30, 40, 50, 60, 70] } # Create Grid Search grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1) grid_result = grid.fit(X_train, y_train) # Report Results - best accuracy = print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}") means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(f"Means: {mean}, Stdev: {stdev} with: {param}") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="YoyUjwugZb0k" import pandas as pd import numpy as np import requests import matplotlib.pyplot as plt import json # + id="snarO45jkACD" response = pd.read_json('https://api.atlasacademy.io/export/JP/nice_servant_lore_lang_en.json') # + colab={"base_uri": "https://localhost:8080/"} id="Rtma-0B1kJ1S" outputId="a54f5b55-7faf-443b-9504-59c29ffc8705" print(response) # + colab={"base_uri": "https://localhost:8080/"} id="IpgfwW1YkXdG" outputId="87a0aeb8-d4ad-4d89-c5f9-36d6296afffe" response.columns # + colab={"base_uri": "https://localhost:8080/", "height": 819} id="1g2_DzGokdEU" outputId="ca4e0ef3-805f-43d7-dffb-4215bf34527f" response.head() # + id="841FyIIKkmMf" colab={"base_uri": "https://localhost:8080/", "height": 392} outputId="c737a99a-9bf7-48ed-ecac-8fa9c72c4a28" response.rarity.value_counts().plot(kind='barh', figsize=(8, 6) ) # + colab={"base_uri": "https://localhost:8080/"} id="jvZk2siYoKrt" outputId="02dd27bc-dfcd-4387-9dac-b06cded5d988" response.rarity.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="v8j8fr7Mo5Yd" outputId="bc238a88-a223-4ccb-cedd-4bccc81cb2fe" response.rarity.count() # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ybziU0CMp7eB" outputId="93ffc12f-e71f-4a86-c374-9907ee6ad31c" response.pivot_table(index=['rarity', 'className'], 
aggfunc=lambda x: len(x.unique())) # + id="0gqi4OC4qLw_" df1 = response[['rarity', 'className', 'collectionNo']] df_pivot = df1.pivot_table(index=['rarity', 'className'], aggfunc=lambda x: len(x.unique())) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FYh6mNQztROt" outputId="07db6ba7-e9a4-49ed-bad6-2101d606f9ef" fig = df_pivot.plot(kind='barh', figsize=(15, 18), grid = 'True') # plt.rcParams['figure.facecolor'] = "white" fig.figure.savefig('distribution.png', transparent=False) # + colab={"base_uri": "https://localhost:8080/"} id="nHHURotGuNsR" outputId="aae0364f-67cb-424e-c52b-1c76fe627438" df_pivot.columns # + colab={"base_uri": "https://localhost:8080/"} id="MeYp8_qZwOT-" outputId="dd84326f-209b-42ff-8b97-4e73d7ca0f0f" df1.rarity.describe() # + colab={"base_uri": "https://localhost:8080/"} id="Mf1T45J9wwd2" outputId="858c189a-6ac3-40ed-b522-b0b004b44b2c" df1.className.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="ppfzWrKP6bju" outputId="d69cfc05-696f-42d4-e900-f34c938adcc5" df1.rarity.value_counts() # + id="lq0ke54l7J_J" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3a5d772b-a075-4031-c9d2-2c15683913d7" df_pivot # + id="zUZ5ee6NFd35" df_pivot.to_csv('distribution.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="uDMNGj81eiaR" # !conda install spacy # - # !python -m spacy download fr_core_news_md # + [markdown] colab_type="text" id="22g96wDxrb1j" # # Extractions de différentes informations # # Voici les tag de *part of speech* : # # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 24739, "status": "ok", "timestamp": 1529313482727, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="iSHCfS_lf6PN" outputId="02c4dab1-b291-44fd-dfe3-5721a74bcd1b" import spacy nlp = spacy.load("fr_core_news_md") doc = nlp(u"Bonjour le monde de Linux Mag") print([(word.text, word.pos_) for word in doc]) # + [markdown] colab_type="text" id="VS_F7CpqrwY7" # Et maintenant, regardons les *Lemma* # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 528, "status": "ok", "timestamp": 1529313495263, "user": {"displayName": "Ugo Cupcic", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="wsbIjQuOhP1_" outputId="ee0c932d-cad1-494d-9ee9-1cc54dfe1e06" doc = nlp(u"Nous adorons ce magazine qui est très intéressant.") print(" ".join([word.lemma_ + " " for word in doc])) # + [markdown] colab_type="text" id="VglUDO2yr5OE" # Nous pouvons aussi identifier des groupes de mots qui représente un seul concept: les *noun chunks*. 
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 555, "status": "ok", "timestamp": 1528986564231, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="wqtn7tBXiqEL" outputId="30f0c089-7600-4b47-f406-f560244200d6" doc = nlp(u"Nous allons parler d'intelligence artificielle.") for noun_chunk in doc.noun_chunks: if " " in noun_chunk.text: print noun_chunk.text # + [markdown] colab_type="text" id="psuBBbAGsJ44" # # Travail sur la similarité # # ## Distance entre des mots # # Etudions la similarité de ces quelques mots: # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 541, "status": "ok", "timestamp": 1528986568055, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="s7FPy384i3Oe" outputId="4d45c6ba-9f5d-43d2-d12e-a3b9a4ef7030" fraise = nlp(u"fraise") framboise = nlp(u"framboise") journal = nlp(u"journal") print "similarité fraise-framboise: " + str(fraise.similarity(framboise)) print "similarité fraise-journal: " + str(fraise.similarity(journal)) print "similarité framboise-journal: " + str(framboise.similarity(journal)) # + [markdown] colab_type="text" id="URGoMLCljZjV" # Utilisation un peu plus poussée des vecteurs word2vec à la base de notre similarité: # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 102} colab_type="code" executionInfo={"elapsed": 540, "status": "ok", "timestamp": 1529313196333, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="wrywQtHRlrzu" outputId="7deac1fb-7e8b-4e57-a6ad-8807b89c6d46" print norm(nlp(u'Paris').vector - nlp(u'France').vector) print norm(nlp(u'Rome').vector - nlp(u'Italie').vector) print norm(nlp(u'Madrid').vector - nlp(u'Espagne').vector) print norm(nlp(u'Moscou').vector - nlp(u'Russie').vector) print cosine(nlp(u'Paris').vector - nlp(u'France').vector, nlp(u'capitale').vector) # + [markdown] colab_type="text" id="055gCkQqsXnb" # ## Distance entre des phrases # # La mesure de similarité s'applique également à des phrases # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 575, "status": "ok", "timestamp": 1528986570752, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="sQvx6ja-jOPh" outputId="925558b2-0865-43e9-da79-a40912d31975" phrase_1 = nlp(u"Ce journal est très intéressant.") phrase_2 = nlp(u"Ce magazine est vraiment passionnant.") phrase_3 = nlp(u"Je bois un thé en écrivant mon article.") sim_1_2 = phrase_1.similarity(phrase_2) sim_1_3 = phrase_1.similarity(phrase_3) print "similarité entre les deux premières phrases: " + str(sim_1_2) print "similarité entre la première et la dernière phrases: " + str(sim_1_3) # + [markdown] colab_type="text" id="Zjo3fYoosidz" # Mais elle présente des 
limitations: # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 569, "status": "ok", "timestamp": 1528986573303, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="2Fqdj6OeqN79" outputId="35465f60-7832-4368-f723-22dba40578c8" phrase_1 = nlp(u"Ce journal est très intéressant.") phrase_2 = nlp(u"Je trouve cette revue passionante.") phrase_3 = nlp(u"Ce thé est vraiment délicieux.") sim_1_2 = phrase_1.similarity(phrase_2) sim_1_3 = phrase_1.similarity(phrase_3) print "similarité entre les deux premières phrases: " + str(sim_1_2) print "similarité entre la première et la dernière phrases: " + str(sim_1_3) # + [markdown] colab_type="text" id="ijuaOh2XsuMO" # Simplifions nos phrases et recalculons des similarités: # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 153} colab_type="code" executionInfo={"elapsed": 533, "status": "ok", "timestamp": 1528986576241, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="8BSJPRrowZ0z" outputId="c01dbf21-2d3e-4acd-fdeb-18ac272f2e95" def simplifier(sentence): sentence_as_NLP = nlp(sentence) simplified_sentence = "" for word in sentence_as_NLP: if word.pos_ in ["PROPN", "NOUN", "VERB", "ADJ"]: simplified_sentence += word.lemma_ + " " print ("Phrase complete: \"" + sentence + "\" => simplication: \"" + simplified_sentence + "\"") return simplified_sentence def similarity(sentence1, sentence2): return nlp(sentence1).similarity(nlp(sentence2)) sentence1 = u"Ce thé est vraiment délicieux" sentence2 = u"Je trouve cette revue passionante" sentence3 = u"Ce thé est un délice." 
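# Compare the simplified versions: sentence1 vs sentence2 (different topics) and sentence1 vs sentence3 (both about tea).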
print "----" print "Similarité:" + str(similarity(simplifier(sentence1), simplifier(sentence2))) print "----" print "Similarité:" + str(similarity(simplifier(sentence1), simplifier(sentence3))) # + [markdown] colab_type="text" id="N8k3kQlrs870" # Nous avons toujours des limitations: # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 577, "status": "ok", "timestamp": 1528986608087, "user": {"displayName": "", "photoUrl": "//lh4.googleusercontent.com/-WekCrd6tYzo/AAAAAAAAAAI/AAAAAAAAAAg/I6TLjrt6Pq8/s50-c-k-no/photo.jpg", "userId": "108344344344240562730"}, "user_tz": -120} id="ucvhZXI-zCBU" outputId="c4ab1d5a-6cee-4432-a1b8-3bb3b774f237" print "Similarité:" + str(similarity(simplifier(u"J'aime le thé."), simplifier(u"Je n'aime pas le thé"))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from finta import TA import numpy as np from pathlib import Path # + #yahoo finance stock data (for longer timeframe) import yfinance as yf def stock_df(ticker, start, end): stock = yf.Ticker(ticker) stock_df = stock.history(start = start, end = end) return stock_df start = pd.to_datetime('2015-01-01') end = pd.to_datetime('today') spy_df = stock_df('SPY', start, end) len(spy_df) # + # spy_df["Monetary Gain"] = spy_df["Close"].diff() spy_df['Actual Return'] = spy_df["Close"].pct_change() spy_df.loc[(spy_df['Actual Return']*100 > 1), 'Return Direction'] = 1 spy_df.loc[(spy_df['Actual Return']*100 < 1), 'Return Direction'] = 0 spy_df.loc[(spy_df['Actual Return']*100 < 0), "Return Direction"] = -1 # spy_df['Trades'] = np.abs(spy_df['Trading Signal'].diff()) # spy_df['Strategy Returns'] = spy_df['Actual Return'] * spy_df['Trading Signal'].shift() spy_df.dropna(inplace= True) spy_df.tail(20) # + #per trade commision cost: 2% of total #ex.) if you bought 100 shares at 1/10/22 for a total of $46,270, there is $925.4 commision. #Then you sell again at 1/11/21 for $46,523 , you make a profit of $253. 
But considering commision, you lose a lot more money # + #Adding Technical indicators spy_technical_indicators = pd.DataFrame() spy_technical_indicators["Close"] = spy_df["Close"] spy_technical_indicators["Actual Return"] = spy_df["Actual Return"] #Creating Volume Weighted Average Price 'VWAP' -- Trend Indicator spy_technical_indicators['VWAP'] = TA.VWAP(spy_df) spy_technical_indicators["VWAP Evaluation"] = "Hold" spy_technical_indicators.loc[spy_technical_indicators["VWAP"] < spy_technical_indicators["Close"], 'VWAP Evaluation'] = "Sell" spy_technical_indicators.loc[spy_technical_indicators["VWAP"] > spy_technical_indicators["Close"], 'VWAP Evaluation'] = "Buy" spy_technical_indicators["VWAP Lag"] = spy_technical_indicators["VWAP Evaluation"].shift(1) for index, row in spy_technical_indicators.iterrows(): if (spy_technical_indicators.loc[index, "VWAP Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "VWAP Lag"] == "Sell"): spy_technical_indicators.loc[index, "VWAP Evaluation"] = "VWAP Bearish Signal" if (spy_technical_indicators.loc[index, "VWAP Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "VWAP Lag"] == "Buy"): spy_technical_indicators.loc[index, "VWAP Evaluation"] = "VWAP Buillish Signal" spy_technical_indicators.drop(columns = ["VWAP Lag"], inplace = True) #Creating Exponential Moving Average 'EMA' short(9-days) and long(70-days) for DMAC trading algorithm -- Trend Indicator spy_technical_indicators['EMA_short'] = TA.EMA(spy_df, 9) spy_technical_indicators['EMA_long'] = TA.EMA(spy_df, 70) spy_technical_indicators["DMAC Evaluation"] = "Hold" spy_technical_indicators.loc[spy_technical_indicators["EMA_short"] < spy_technical_indicators["EMA_long"], 'DMAC Evaluation'] = "Sell" spy_technical_indicators.loc[spy_technical_indicators["EMA_short"] > spy_technical_indicators["EMA_long"], 'DMAC Evaluation'] = "Buy" spy_technical_indicators["DMAC Lag"] = spy_technical_indicators["DMAC Evaluation"].shift(1) for index, row in spy_technical_indicators.iterrows(): if (spy_technical_indicators.loc[index, "DMAC Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "DMAC Lag"] == "Sell"): spy_technical_indicators.loc[index, "DMAC Evaluation"] = "DMAC Bearish Signal" if (spy_technical_indicators.loc[index, "DMAC Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "DMAC Lag"] == "Buy"): spy_technical_indicators.loc[index, "DMAC Evaluation"] = "DMAC Buillish Signal" spy_technical_indicators.drop(columns = ["DMAC Lag"], inplace = True) #Creating Bollinger Bands 'BBANDS' -- Volatility Indicator bbands_df = TA.BBANDS(spy_df) spy_technical_indicators = pd.concat([spy_technical_indicators, bbands_df], axis=1) spy_technical_indicators["BBbands Evaluation"] = "Hold" spy_technical_indicators.loc[spy_technical_indicators["BB_UPPER"] < spy_technical_indicators["Close"], 'BBbands Evaluation'] = "Sell" spy_technical_indicators.loc[spy_technical_indicators["BB_LOWER"] > spy_technical_indicators["Close"], 'BBbands Evaluation'] = "Buy" spy_technical_indicators["BBbands Lag"] = spy_technical_indicators["BBbands Evaluation"].shift(1) for index, row in spy_technical_indicators.iterrows(): if (spy_technical_indicators.loc[index, "BBbands Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "BBbands Lag"] == "Sell"): spy_technical_indicators.loc[index, "BBbands Evaluation"] = "BBbands Bearish Signal" if (spy_technical_indicators.loc[index, "BBbands Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "BBbands Lag"] == "Buy"): 
spy_technical_indicators.loc[index, "BBbands Evaluation"] = "BBbands Buillish Signal" spy_technical_indicators.drop(columns = ["BBbands Lag"], inplace = True) #Creating Elder's Force Index 'EFI' -- Volatility Indicator spy_technical_indicators['EFI'] = TA.EFI(spy_df) spy_technical_indicators["EFI Evaluation"] = "Hold" spy_technical_indicators.loc[spy_technical_indicators["EFI"] < 0, 'EFI Evaluation'] = "Buy" spy_technical_indicators.loc[spy_technical_indicators["EFI"] > 0, 'EFI Evaluation'] = "Sell" spy_technical_indicators["EFI Lag"] = spy_technical_indicators["EFI Evaluation"].shift(1) for index, row in spy_technical_indicators.iterrows(): if (spy_technical_indicators.loc[index, "EFI Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "EFI Lag"] == "Sell"): spy_technical_indicators.loc[index, "EFI Evaluation"] = "EFI Bearish Signal" if (spy_technical_indicators.loc[index, "EFI Evaluation"] == "Hold" and spy_technical_indicators.loc[index, "EFI Lag"] == "Buy"): spy_technical_indicators.loc[index, "EFI Evaluation"] = "EFI Buillish Signal" spy_technical_indicators.drop(columns = ["EFI Lag"], inplace = True) spy_technical_indicators.drop(columns = ["Close"], inplace = True) spy_technical_indicators.dropna(inplace=True) display(spy_technical_indicators.head(10)) display(spy_technical_indicators.tail(10)) # + from sklearn.preprocessing import StandardScaler,OneHotEncoder categorical_variables = list(spy_technical_indicators.dtypes[spy_technical_indicators.dtypes == "object"].index) categorical_variables enc = OneHotEncoder(sparse= False) encoded_data = enc.fit_transform(spy_technical_indicators[categorical_variables]) encoded_df = pd.DataFrame(encoded_data, columns = enc.get_feature_names(categorical_variables), index = spy_technical_indicators.index) encoded_df = pd.concat([encoded_df, spy_technical_indicators.drop(columns = categorical_variables)], axis = 1) encoded_df # + import tensorflow as tf from tensorflow.keras.layers import Dense from tensorflow.keras.models import Sequential from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler X = encoded_df.drop(columns = ["Actual Return"]) y = encoded_df["Actual Return"] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) scaler = StandardScaler() X_scaler = scaler.fit(X_train) X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) # + neural = Sequential() number_input_features = len(X.columns) hidden_nodes_layer1 = (number_input_features + 1) // 2 hidden_nodes_layer2 = (hidden_nodes_layer1 + 1) // 2 hidden_nodes_layer3 = (hidden_nodes_layer2 + 1) // 2 neural.add(Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="elu")) neural.add(Dense(units=hidden_nodes_layer2, activation="elu")) neural.add(Dense(units=hidden_nodes_layer3, activation="elu")) neural.add(Dense(units=1, activation="linear")) neural.summary() # + # opt = tf.keras.optimizers.SGD(lr=0.01) neural.compile(loss = "mse", optimizer = "adam", metrics = ["mse"]) model = neural.fit(X_train_scaled, y_train, epochs = 500) # + # Y_prediction = (neural.predict(X_train_scaled) > 0.5).astype("int32") Y_prediction = neural.predict(X_train_scaled) Y_prediction = Y_prediction.squeeze() results = pd.DataFrame( {"Predictions": Y_prediction, "Actual": y_train}) display(results) model_loss, model_accuracy = neural.evaluate(X_train_scaled,y_train,verbose=2) print(f"Loss: {model_loss}, MSE: {model_accuracy}") # + #For binary classification only # Y_test_prediction = 
(neural.predict(X_test_scaled) > 0.5).astype("int32") Y_test_prediction = neural.predict(X_test_scaled) Y_test_prediction = Y_test_prediction.squeeze() test_results = pd.DataFrame( {"Predictions": Y_prediction, "Actual": y_train}) model_loss, model_accuracy = neural.evaluate(X_test_scaled,y_test,verbose=2) print(f"Loss: {model_loss}, MSE: {model_accuracy}") # + import matplotlib.pyplot as plt plt.plot(model.history["loss"]) # - neural.save(Path("Files/Volatility_Indicators.h5")) test_results.plot() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # !conda install pymysql -y import pymysql import pandas as pd # create database qa_api; conn = pymysql.connect(host='localhost', user='****', password='****', db='qa_api', charset='utf8') curs = conn.cursor() conn.commit() sql_create_company = """create table company( stock_code char(6) not null, company_name varchar(50) not null, primary key (stock_code) ); """ sql_create_report = """create table report( report_id int not null, stock_code char(6) not null, title varchar(512), price decimal(10,2), opinion varchar(20), writer varchar(20), source varchar(20), url varchar(512), contents longtext, report_date date, primary key (report_id), foreign key (stock_code) references company (stock_code) ); """ curs.execute(sql_create_company) curs.execute(sql_create_report) conn.commit() df_company = pd.read_csv('company.csv', encoding='cp949') df_company['종목코드'] = df_company['종목코드'].astype(str) df_company['종목코드'] = df_company['종목코드'].apply(lambda x: x.zfill(6)) df_company df_report = pd.read_csv('report.csv', encoding='utf-8') # + df_report['기업코드'] = df_report['기업코드'].astype(str) df_report['기업코드'] = df_report['기업코드'].apply(lambda x: x.zfill(6)) df_report['적정가'] = df_report['적정가'].apply(lambda x: x.replace(",","")) df_report = df_report.where((pd.notnull(df_report)), None) # + sql = "insert into company (stock_code, company_name) values (%s, %s)" for stock_code, company_name in zip(df_company['종목코드'], df_company['회사명']): curs.execute(sql, (stock_code, company_name)) conn.commit() # + sql = """insert into report (report_id, stock_code, title, price, opinion, writer, source, url, contents, report_date) values (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""" for i in df_report.index: try: report_date = df_report.loc[i,'날짜'] stock_code = df_report.loc[i,'기업코드'] title = df_report.loc[i,'제목'] price = df_report.loc[i,'적정가'] opinion = df_report.loc[i,'투자의견'] writer = df_report.loc[i,'작성자'] source = df_report.loc[i,'출처'] url = df_report.loc[i,'PDF_URL'] report_id = df_report.loc[i,'파일명'] contents = df_report.loc[i,'html'] curs.execute(sql, (report_id, stock_code, title, price, opinion, writer, source, url, contents, report_date)) except: print(i) continue conn.commit() conn.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Métodos de los conjuntos c = set() # ### add(): Añade un ítem a un conjunto, si ya existe no lo añade c.add(1) c.add(2) c.add(3) c # ### discard(): Borra un ítem de un conjunto c.discard(1) c c.add(1) c2 = c c2.add(4) c # ### copy(): Crea una copia de un conjunto # *Recordad que los tipos compuestos no se pueden copiar, son como accesos directos por referencia* c2 = c.copy() c2 
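# Note: earlier, `c2 = c` only copied a reference (adding 4 through c2 also changed c); after `c2 = c.copy()`, changes to c2, such as the discard below, no longer affect c.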
c c2.discard(4) c2 c # ### clear(): Borra todos los ítems de un conjunto c2.clear() c2 # ## Comparación de conjuntos c1 = {1,2,3} c2 = {3,4,5} c3 = {-1,99} c4 = {1,2,3,4,5} # ### isdisjoint(): Comprueba si el conjunto es disjunto de otro conjunto # *Si no hay ningún elemento en común entre ellos* c1.isdisjoint(c2) # ### issubset(): Comprueba si el conjunto es subconjunto de otro conjunto # *Si sus ítems se encuentran todos dentro de otro* c3.issubset(c4) # ### issuperset(): Comprueba si el conjunto es contenedor de otro subconjunto # *Si contiene todos los ítems de otro* c3.issuperset(c1) # ## Métodos avanzados # Se utilizan para realizar uniones, diferencias y otras operaciones avanzadas entre conjuntos. # # Suelen tener dos formas, la normal que **devuelve** el resultado, y otra que hace lo mismo pero **actualiza** el propio resultado. c1 = {1,2,3} c2 = {3,4,5} c3 = {-1,99} c4 = {1,2,3,4,5} # ### union(): Une un conjunto a otro y devuelve el resultado en un nuevo conjunto c1.union(c2) == c4 c1.union(c2) c1 c2 # ### update(): Une un conjunto a otro en el propio conjunto c1.update(c2) c1 # ### difference(): Encuentra los elementos no comunes entre dos conjuntos c1 = {1,2,3} c2 = {3,4,5} c3 = {-1,99} c4 = {1,2,3,4,5} c1.difference(c2) # ### difference_update(): Guarda en el conjunto los elementos no comunes entre dos conjuntos c1.difference_update(c2) c1 # ### intersection(): Devuelve un conjunto con los elementos comunes en dos conjuntos c1 = {1,2,3} c2 = {3,4,5} c3 = {-1,99} c4 = {1,2,3,4,5} c1.intersection(c2) # ### intersection_update(): Guarda en el conjunto los elementos comunes entre dos conjuntos c1.intersection_update(c2) c1 # ### symmetric_difference(): Devuelve los elementos simétricamente diferentes entre dos conjuntos # *Todos los elementos que no concuerdan entre los dos conjuntos* c1 = {1,2,3} c2 = {3,4,5} c3 = {-1,99} c4 = {1,2,3,4,5} c1.symmetric_difference(c2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="aqHzrgT7giPz" import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy import time import os # scipy.__version__ # from numba import guvectorize, complex64 # from scipy import constants # print(constants.Boltzmann) # For local runtime # jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 --no-browser # pd.set_option("max_rows", None) # pd.set_option("max_colwidth", None) # + colab={"base_uri": "https://localhost:8080/"} id="lOz8bFKDtz2T" outputId="0f856552-e0c6-4913-a6ca-e67c5cb30087" os.getcwd() # + id="L6cyHrcB313p" # from google.colab import drive # drive.mount('/content/drive', force_remount=True) # + colab={"base_uri": "https://localhost:8080/"} id="Mtvb7tGAgodC" outputId="c5fe64fb-2248-49b6-fe29-571b9bc56785" # constanta na = 2048 # Number of atom in both direction nm = na-1 nx = na # Number of atom in x-direction ny = na # Number of atom in y-direction nl = na//2 # Number of repetation for 2x2 block matrix lx = 1 # Length of wavefunction in x-direction ly = 1 # Length of wavefunction in y-direction dx = lx/nx # Interval x dy = ly/ny # Interval y nrx = nx//2 nry = ny//2 pot = 0 # On-site potensial nt = 4096 # timestep nta = nt//2 # Rounded half of timestep deltaT = 0.02 # interval time ta = (nt/2)*deltaT # Maximum value of Time Propagation Tm 
= ta*(np.pi/(2*np.pi)) dTm = Tm/nta a = 1 # hopping parameter mu = 0 # Chemical Potential T = 300 # temperature # Kb = 0.03564 #1 # boltzmann constant Kb = 0.695 #1 # boltzmann constant # T_au = T /(3.158*10**(5)) beta = 1 / (Kb * T) sum0 = 0 print(dTm) print(Tm) # + id="Y5C86kgfqh_P" # Array Declaration H_even = np.zeros((nx,ny), dtype=float) H_odd = np.zeros((nx,ny), dtype=float) fermiDirac = np.zeros((nx,ny), dtype=complex) fermiDirac_t1 = np.zeros((nx,ny), dtype=complex) # fermiDirac_t2 = np.zeros((nx,ny), dtype=complex) # + id="3YmhRV8dkh7f" ## Declaration of array Hamiltonan def H(H, num): # num = 0 for even, num = 1 for odd if num == 0: d = 0 for i in range(nrx): for j in range(nry): if j == i: sx = 2*i+d sy = 2*j+d H[sx][sy+1] = -a H[sx+1][sy] = -a if num == 1: H[0][nm] = -a H[nm][0] = -a d = 1 for i in range(nrx-1): for j in range(nry-1): if j == i: sx = 2*i+d sy = 2*j+d H[sx][sy+1] = -a H[sx+1][sy] = -a return H ## Normalize Hamiltonan def normalize_H(H, num, sum0): sum0n = 0 for i in range(nx): for j in range(ny): sum0n += H[i][j]*H[i][j] # print(sum0) if num == 0: sum0 += np.sqrt(4*sum0n) elif num == 1: sum0 += np.sqrt(2*sum0n) return sum0 # + id="omFfdsten92w" def fermi(ff, c, psi, mat = fermiDirac, mat_t1 = fermiDirac_t1): # ff = 1 for 1-FermiDirac, c related to normal(0), conjugate(1) # MFD = 0 # Matrix Fermi Dirac TEn = Kb*T # Bottom part of fermi dirac distribution potN = pot/sum0 # Normalize potential bN = sum0/TEn # Normalize beta (BetaN = Beta*HamiltonanNorm ) aN = a/sum0 # Normalize hopping parameter ch = np.cosh(a/TEn) # Cos hiperbolic part sh = np.sinh(bN*aN) # Sin hiperbolic part ## Filling the inverse matrix of fermi dirac: Diagonal Part for i in range(nx): for j in range(ny): if j == i: mat[i][j] = ch * np.exp((bN*potN) - (mu/TEn)) ## Filling the inverse matrix of fermi dirac: 2x2 Block diagonal Part with even index d = 0 for i in range(nrx): for j in range(nry): if i == j : sx = 2*i+d sy = 2*j+d mat[sx][sy+1] = -sh * np.exp((bN*potN) - (mu/TEn)) mat[sx+1][sy] = -sh * np.exp((bN*potN) - (mu/TEn)) ## Filling the inverse matrix of fermi dirac: 2x2 Block diagonal with odd index d = 0 i = 0 for j in range(ny): sx = 2*i+d sy = j mat_t1[sx][sy] = ch*mat[sx][sy] - sh*mat[nm][sy] mat_t1[nm][sy] = -sh*mat[sx][sy] + ch*mat[nm][sy] d = 1 for i in range(nrx-1): for j in range(ny): sx = 2*i+d sy = j mat_t1[sx][sy] = ch*mat[sx][sy] - sh*mat[sx+1][sy] mat_t1[sx+1][sy] = -sh*mat[sx][sy] + ch*mat[sx+1][sy] ## Adding identity matrix to mat_t mat_t2 = np.identity(nx) + mat_t1 ## Inverse Matrix invMat = np.linalg.inv(mat_t2) if ff == 0: if c == 0: invMat_t = invMat elif c == 1: invMat_t = np.conjugate(invMat) elif ff == 1: invMat_t = np.identity(nx) - invMat ## Multiplication (1-F(H-mu)^-1) with psi0 psi1_t = invMat_t @ psi faktorNormalisasi = 1/(np.linalg.norm(psi1_t)*np.sqrt(dx*dy)) psi1_t *= faktorNormalisasi return psi1_t # + id="A48l1nEC_YuG" def currentDens(psi1): psi = np.zeros((nx,ny), dtype = complex) psi_a = np.zeros((nx,ny), dtype = complex) c0 = 0 + 0j s0 = 0 - 1j*a # s0 = 0 - 0.5j*a d=1 for i in range(nl-1): for j in range(ny): sx = 2*i+d sy = j sx1 = sx+1 psi[sx][sy] = c0*psi1[sx][sy] + s0*psi1[sx1][sy] psi[sx1][sy] = s0*psi1[sx][sy] + c0*psi1[sx1][sy] i = 0 for j in range(ny): sx = 2*i sy = j sx1 = sx+1 psi[sx][sy] = c0*psi1[sx][sy] + s0*psi1[nm][sy] psi[nm][sy] = s0*psi1[sx][sy] + c0*psi1[nm][sy] d=1 for i in range(nrx): for j in range(nry): sx = 2*i+d sy = 2*j+d sx = (sx+1) % nx psi_a[sx][sy] = c0*psi1[sx][sy] + s0*psi1[sx1][sy] psi_a[sx1][sy] = s0*psi1[sx][sy] + 
c0*psi1[sx1][sy] d=0 for i in range(nrx): for j in range(nry): sx = 2*i+d sy = 2*j+d sx1 = sx+1 psi_a[sx][sy] = c0*psi1[sx][sy] + s0*psi1[sx1][sy] psi_a[sx1][sy] = s0*psi1[sx][sy] + c0*psi1[sx1][sy] psi0 = psi + psi_a faktorNormalisasi = 1/(np.linalg.norm(psi0)*np.sqrt(dx*dy)) psi0 *= faktorNormalisasi return psi0 # + id="7IavThMlzq3D" def kineticPropagation(d, t, psi1, deltaT): er = np.cos(-deltaT*a) ei = -1.0j*(np.sin(-1.0*deltaT*a)) # er = np.cos(-1.0*dTm*a) # ei = -1.0j*(np.sin(-1.0*dTm*a)) if(d == 0): nry = int(ny/2) for x in range(nx): for y in range(nry): sx = x sy = int((2*y) + t) sy1 = int((sy+1)%ny) psi1[sx][sy] = er*psi1[sx][sy] - ei*psi1[sx][sy1] psi1[sx][sy1] = -ei*psi1[sx][sy] + er*psi1[sx][sy1] elif(d == 1): nrx = int(nx/2) nry = int(ny/2) for x in range(nrx): for y in range(nry): sx = (2*x) + t sy = (2*y) + t sx1 = (sx+1)%nx psi1[sx][sy] = er*psi1[sx][sy] - ei*psi1[sx1][sy] psi1[sx1][sy] = -ei*psi1[sx][sy] + er*psi1[sx1][sy] return psi1 # + id="Q55TDspCQHH2" ## One Cycle of kinetic propagation def oneCycle(psi, deltaT): for d in range(2): for t in range(2): kineticPropagation(d, t, psi, deltaT) faktorNormalisasi = 1/(np.linalg.norm(psi)*np.sqrt(dx*dy)) psi *= faktorNormalisasi return psi # + id="hNKTHvIM6FL3" ## Declare psi1 def psi1_evol(): psi1_t = np.zeros((nta, nx, ny), dtype=complex) psi1_t[0] = fermiDirac1_t # print('{} = {}'.format(0, psi1_t[0])) for k in range(1,nta): psi1_t1 = oneCycle(psi1_t[k-1], deltaT) psi1_t[k] = psi1_t1 # print('{} = {}'.format(k, psi1_t1)) return psi1_t # + id="jVrx9G63wce7" ## Declare psi2 def psi2_evol(): psi2_t = np.zeros((nt,nx,ny), dtype=complex) psi2_t[0] = fermiDirac2_t # print('{} = {}'.format(0, psi2_t[0])) # y= input() for k in range(1,nt): psi2_t1 = oneCycle(psi2_t[k-1], deltaT) psi2_t[k] = psi2_t1 # print('{} = {}'.format(k, psi2_t1)) # y= input() return psi2_t # + id="BTrwiLLDcddF" def omega(): # Tm = ta*(1/(2*np.pi)) #new periode is shifted by factor 0.159(or 1/(2*pi)) hence the freq window following Nyquist Theorm # dTm = (Tm/nta) #delta Tau maks print('Tau Maks: {}, Delta Tau: {}'.format(Tm, dTm)) tau = 0 tau_arr = [] for k in range(nt): if (tau <= Tm): tau_arr.append(tau) tau = tau + dTm #Constanta Opcon Wm = 1/(2*dTm) # Maximum frequency, with Nyquist Theorem applied dWm = 1/(2*Tm) # Frequency interval, with Nyquist Theorem applied print('W Maks: {}, Delta W: {}'.format(Wm, dWm)) W = 0.001 W_arr = np.linspace(0, Wm, nta) W_arr2 = [] for k in range(nt): if (W <= Wm): W_arr2.append(W) W = W + dWm return tau_arr, W_arr,W_arr2,Tm, Wm # + colab={"base_uri": "https://localhost:8080/"} id="sAzxMlyCFEHC" outputId="3d98e81e-481f-4f47-d8b1-6f1ff6943ea2" tau_arr, W_arr,W_arr2, tau_max, W_max = omega() # + colab={"base_uri": "https://localhost:8080/"} id="GvjSYvZiFazZ" outputId="e90cb1c6-94de-48c3-d443-33b038cabf0f" print(W_arr) # + colab={"base_uri": "https://localhost:8080/"} id="DMQbtJrFGTJp" outputId="9a730592-d8cc-4b11-cdcc-8c970a2671e4" print(W_arr2) # + id="eIyG3pM26xrC" ## Correlation of Optical Conductivity def opCon_corr(): cfr_arr = [] # psi1_t = np.array(psi1_evol()) psi1_t = psi1_evol() # psi2_t = np.conjugate(np.array(psi2_evol())) psi2_t = psi2_evol() opCon = [] cfr_arr = [] for k in range(nt): J = np.array(currentDens(psi1_t[k])) opCon_t = psi2_t[k] @ J opCon.append(opCon_t) # opCon.append(2*(opCon_t).imag) opCon_t2 = np.array(opCon) # multiplication opCon[0] with opCon[k] for k in range(nt): cfr = 0 for i in range(nx): for j in range(ny): cfr += opCon_t2[0][i][j] * opCon_t2[k][i][j] # cfr_arr.append((cfr*dx*dy).imag) 
cfr_arr.append(cfr*dx*dy) return np.array(cfr_arr) # + id="MaQPUFqZqrzy" psi0 = np.random.uniform(-1,1,(nx,ny)) + 1j*np.random.uniform(-1,1,(nx,ny)) faktorNormalisasi = 1/(np.linalg.norm(psi0)*np.sqrt(dx*dy)) psi0 *= faktorNormalisasi psi1 = psi0 psi2 = psi0.conjugate() # pd.DataFrame(psi0) # + colab={"base_uri": "https://localhost:8080/"} id="jxP9uLJKc_6i" outputId="54e9835a-0c2a-4087-9139-18394d04bf83" H(H_odd, 1) # pd.DataFrame(H(H_odd, 1)) # + colab={"base_uri": "https://localhost:8080/"} id="DHUAd-zzc6EK" outputId="f52eb2b4-efea-4d1c-bed2-145c7e88b695" H(H_even, 0) # pd.DataFrame(H(H_even, 0)) # + colab={"base_uri": "https://localhost:8080/"} id="nrDp8PDgdIk5" outputId="f77b15ca-214c-47ba-ca7f-57dfb5501a2a" sum0 = normalize_H(H_even, 0, sum0) sum0 = normalize_H(H_odd, 1, sum0) print(sum0) # + id="h_s_JTXEXbhL" ## Correlation of Opctical Conductivity with combine psi def opCon_corr2(psi1): opCon = [] opCon_t = np.zeros((nt,nx,ny), dtype=complex) cfr_arr = np.zeros((nt), dtype=complex) psi1_J = currentDens(psi1) fermiDirac1_t = fermi(1, 0, psi1_J) psi1_J2 = currentDens(fermiDirac1_t) fermiDirac2_t = fermi(0, 1, psi1_J2) for k in range(nt): if k == 0: opCon_t[0] = fermiDirac2_t else : kinProp1 = oneCycle(opCon_t[k-1], 2*deltaT) # kinProp2 = oneCycle(kinProp1, deltaT) opCon_t[k] = kinProp1 # multiplication psi1[0] with opCon[k] cfr = 0 for i in range(nx): for j in range(ny): cfr += psi2[i][j] * opCon_t[k][i][j] # cfr_arr.append((cfr*dx*dy).imag) cfr_arr[k]= cfr*dx*dy # cfr_arr[k]= 2*cfr*dx*dy return cfr_arr # + id="1JdJcx6ei8mE" # try to implement the saving memory with odd even login opcon, and save the correlation # def corr_func(k, opCon_t): def corr_func(opCon_t): cfr = 0 for i in range(nx): for j in range(ny): cfr += psi2[i][j] * opCon_t[i][j] # cfr += psi2[i][j] * opCon_t[k][i][j] # cfr_arr.append((cfr*dx*dy).imag) # cfr_arr[k]= cfr*dx*dy return cfr*dx*dy def opCon_corr3(psi1): # opCon = [] opCon_t = np.zeros((3,nx,ny), dtype=complex) cfr_arr = np.zeros((nta), dtype=complex) psi1_J = currentDens(psi1) fermiDirac1_t = fermi(1, 0, psi1_J) psi1_J2 = currentDens(fermiDirac1_t) # fermiDirac2_t = fermi(0, 1, psi1_J2) fermiDirac2_t = fermi(0, 0, psi1_J2) for k in range(nta): if k == 0: opCon_t[0] = fermiDirac2_t cfr_arr[k] = corr_func(opCon_t[0]) elif k == 1: kinProp1 = oneCycle(opCon_t[0], 2*dTm) opCon_t[1] = kinProp1 cfr_arr[k] = corr_func(opCon_t[1]) elif k%2 == 0: kinProp1 = oneCycle(opCon_t[1], 2*dTm) opCon_t[2] = kinProp1 cfr_arr[k] = corr_func(opCon_t[2]) else : kinProp1 = oneCycle(opCon_t[2], 2*dTm) # kinProp2 = oneCycle(kinProp1, deltaT) opCon_t[1] = kinProp1 cfr_arr[k] = corr_func(opCon_t) # cfr_arr[k]= cfr*dx*dy # cfr_arr[k]= 2*cfr*dx*dy return cfr_arr # + id="R2bostHyYZrw" # try to implement the saving memory with odd even login opcon, and save the correlation def opCon_corr3(psi1): # opCon = [] opCon_t = np.zeros((3,nx,ny), dtype=complex) cfr_arr = np.zeros((nta), dtype=complex) tau = 0 psi1_J = currentDens(psi1) fermiDirac1_t = fermi(1, 0, psi1_J) psi1_J2 = currentDens(fermiDirac1_t) # fermiDirac2_t = fermi(0, 1, psi1_J2) fermiDirac2_t = fermi(0, 0, psi1_J2) for k in range(nta): if tau variable tic = time.time() opCon_corr2_psi3 = opCon_corr3(psi0) #at the end of cell, define variable toc = time.time() running_time = toc - tic print(running_time) # + id="Ov97Dk8-me9E" # tic = time.time() # opCon_corr2_psi2 = opCon_corr2(psi0) #at the end of cell, define variable # toc = time.time() # running_time = toc - tic # print(running_time) # + id="hc0zSCN5nMrv" 
print(opCon_corr2_psi3[3]) # print(opCon_corr2_psi2[3]) # + id="5pe-fOCnmtfY" plt.plot(np.arange(0, nta), opCon_corr2_psi3, label='3 part') # plt.plot(np.arange(0, nt), opCon_corr2_psi2, label='2 part') # plt.xlim(-1, 1000) plt.legend() plt.show() # + id="ZDpSdLhmQd3z" header_save = 'Optical Conductivity {} x {} \n nt={} ; deltaT={}; Tm={}; dTm={} \n Kb={} ; T={}; time={} s'.format(na,na,nt,deltaT,Tm, dTm, Kb,T,running_time) # + id="FBd1lSI-yJXG" # pd.DataFrame(opCon_corr2_psi0) # + id="CN_e0e6ax6sk" print(opCon_corr2_psi3.shape) # + id="KG76_zoPzAuw" # np.savetxt('/content/drive/MyDrive/Data/opCon1.csv', opCon_corr2_psi0.T,fmt='%.12e%+.12ej', delimiter=",", header=header_save,footer=header_save, comments='~~~ ') #true # np.savetxt('/content/drive/MyDrive/Data/Data3/opCon{}_{}.csv'.format(na,Kb), opCon_corr2_psi3.T,fmt='%.12e%+.12ej', delimiter=",", header=header_save, footer=header_save) np.savetxt('opCon{}_{}.csv'.format(na,Kb), opCon_corr2_psi3.T,fmt='%.12e%+.12ej', delimiter=",", header=header_save, footer=header_save) # + id="AZTwP_Lw2UaM" # # !cat '/content/drive/MyDrive/Data/Data2/opCon32_0.695.csv' # # !cat '/opCon512_0.695.csv' # + id="EXxVuzdh29TO" # data = np.loadtxt('/content/drive/MyDrive/Data/Data2/opCon{}_{}.csv'.format(na,Kb), delimiter=',' , dtype=complex) data = np.loadtxt('opCon{}_{}.csv'.format(na,Kb), delimiter=',' , dtype=complex) pd.DataFrame(data) # + id="v_YyN-bwr5vV" # psi1_J = currentDens(psi1) # fermiDirac1_t = fermi(1, 0, psi1_J) # fermiDirac2_t = fermi(0, 0, psi2) # pd.DataFrame(fermiDirac1_t) # + id="TTET0Get2EAe" # print(psi1_evol()) # + id="xh-Rc_E82YeS" # print(psi2_evol()) # + id="MU56SlyqRFmm" # opCon_t2 = opCon_corr() # + id="nANfBCvWPCN6" # x = [elem.real for elem in opCon_t2] # y = [elem.imag for elem in opCon_t2] x = [elem.real for elem in opCon_corr2_psi3] y = [elem.imag for elem in opCon_corr2_psi3] # x = [elem.real for elem in data] # y = [elem.imag for elem in data] # + id="BP41cBF0R5Na" plt.plot(np.arange(0, nta), x, label='real part') plt.plot(np.arange(0, nta), y, label='imag part') # plt.xlim(-1, 1000) plt.legend() plt.show() # + id="rSBTks1o0wnz" tau_arr, W_arr,W_arr2, tau_max, W_max = omega() # + id="f0gg432iktPt" # opCon_t3 = opCon_calc() # opCon_t4 = opCon_calc2(tau_arr) # opCon_t4 = opCon_t2 # print(opCon_t4) # + id="k80MIeDz7Kh3" print(len(tau_arr)) print(len(opCon_corr2_psi3)) tau_arr1 = np.linspace(-tau_max, tau_max, nt) print(tau_arr1) # + id="-OaopQdT12Pt" X = np.zeros(len(opCon_corr2_psi3)) fungsiKorelasi = np.hstack((X,opCon_corr2_psi3)) print(len(fungsiKorelasi)) # X = np.zeros(len(opCon_corr2_psi0)) # fungsiKorelasi = np.hstack((X,opCon_corr2_psi0)) plt.plot(tau_arr1, fungsiKorelasi.real, label='real part') plt.plot(tau_arr1, fungsiKorelasi.imag, label='imag part') # plt.xlim(0, 6) # plt.legend() # + id="Ov2VIo8o9Ucd" # plt.plot(np.arange(-len(opCon_t2), len(opCon_t2)), fungsiKorelasi.real, label='real part') # plt.plot(tau_arr1, fungsiKorelasi.real, label='real part') # plt.xlim(-1, 1) # plt.legend() # + id="pQbWDpD69pnZ" # plt.plot(tau_arr1, fungsiKorelasi.imag, label='Imag part') # plt.xlim(-2, 2) # plt.legend() # + id="YrGRgoFA0nu3" ##Opcon FFT numpy def opConFFT(func): hann = np.hanning(len(func)) # wp = func.real*hann # wp = func.imag*hann wp = func*hann y = np.fft.fft(wp)/(2*np.pi) # y = np.fft.rfft(wp)/(2*np.pi) # y1 = np.abs(y) y1 = y # y1 = y.imag # freq = np.fft.fftfreq(len(func), 0.95*deltaT) freq = np.fft.fftfreq(len(func), 5.9*dTm) # freq = np.fft.rfftfreq(len(func),0.095) # print(freq) freq1 = 
np.fft.fftshift(freq*(2*np.pi)/1) # print(freq1) result = np.fft.fftshift(y1) print(len(wp)) # result = np.abs(y1) return freq1, result # + id="HtQXnapY9JSS" opConFFT_t = opConFFT(fungsiKorelasi) # opConFFT_t = opConFFT(opCon_t2) plt.title('Optical Condictivity\n[2 Dimension]') plt.plot(opConFFT_t[0], opConFFT_t[1]) plt.xlim(0,10) plt.grid() plt.show() # + id="Wm6drtI-6DVz" plt.plot(opConFFT_t[0], np.absolute(opConFFT_t[1].imag), label='Imag part') # plt.plot(opConFFT_t[0], np.absolute(opConFFT_t[1].real), label='Real part') # plt.plot(opConFFT_t[0], np.absolute(opConFFT_t[1]), label='Both part') plt.xlim(0,15) plt.legend() plt.grid() plt.show() # + id="E_d-RUnH3U8Z" def constApply(func): xMid = len(func[0])//2 xLast = len(func[0]) Wm = 1/(2*dTm) # Maximum frequency, with Nyquist Theorem applied dWm = 1/(2*ta) opCon_x = func[0][xMid:xLast] print(len(opCon_x)) omega = np.linspace(0,6,xMid) opCon_valueSlice = func[1][xMid:xLast] # const = -beta*omega # const2 = (np.exp(const)-1)/omega # print(const2) # print(const) opCon_Const = [] opCon = [] for k in opCon_x: if k == 0 : opCon_Const.append(0) else : const1 = -beta*k opCon_Const.append(-(np.exp(const1)-1)/k) # for k in range(xMid, xLast,1): # print(k) # tau = dTm*(k+2) # if k == 0 : # opCon_Const.append(0) # else : # const1 = -beta*k # # print(np.exp(const1)) # opCon_Const.append((np.exp(const1)-1)/k) const = np.array(opCon_Const) for k in range(xMid): # opCon.append(((const[k] * func[1][k].imag)/1)) # opCon.append(((const[k]*200 * opCon_valueSlice[k])/1)) opCon.append(((const[k]* 134.33*opCon_valueSlice[k].imag)/1)) # opCon.append(((opCon_Const[k] * func[1][k])/0.04).real) # opCon_t6 = np.fft.fftshift(opCon) print(opCon_Const) return opCon_x, opCon # + id="JANAlQto49pG" newValue = constApply(opConFFT_t) # + id="3Jq8o7YI5KXc" plt.plot(newValue[0],np.abs(newValue[1])) plt.xlim(0,10) # + id="uE2IttHgG-Ij" ## opcon FFT scipy def opConFFT_scipy(func): g = np.hanning(len(func)) # wp = np.abs(func*g) wp = func.imag*g y = scipy.fft.fft(wp)/(2*np.pi) y1 = y result = scipy.fft.fftshift(y1) freq = scipy.fft.fftfreq(len(func), deltaT) # print(freq) # freq1 = scipy.fft.fftshift(freq*((2*np.pi)/3.05)) freq1 = scipy.fft.fftshift(freq*((2*np.pi))) return freq1, result # + id="tUaithwAJd9w" # opConFFT_sci = opConFFT_scipy(fungsiKorelasi) # max_value = np.max(opConFFT_sci[1].imag) # print(4/max_value) # plt.title('Optical Condictivity\n[2 Dimension]') # plt.plot(opConFFT_sci[0], np.abs(opConFFT_sci[1])) # plt.plot(opConFFT_sci[0], np.abs(opConFFT_sci[1])) # plt.xlim(-16,16) # plt.grid() # plt.show() # + id="mzxpIJV9QjRD" # plt.plot(opConFFT_sci[0], opConFFT_sci[1].imag, label='Imag part') # plt.plot(opConFFT_sci[0], opConFFT_sci[1].real, label='Real part') # plt.xlim(-7,7) # plt.legend() # plt.grid() # plt.show() # + id="Aa3VE7eSQsEt" # plt.title('Optical Condictivity\n[2 Dimension]') # plt.plot(opConFFT_sci[0], opConFFT_sci[1].imag) # plt.xlim(0,8) # plt.grid() # plt.show() # + id="6N-xJoSVpcVn" # result = np.where(opConFFT_t[0] == 0) # print(result) # + id="73u1cWhT5R7k" def constApply(): xVal = len(opConFFT_t[0]) opCon_Const = [] opCon = [] for k in opConFFT_t[0]: if k == 0 : opCon_Const.append(0) else : const1 = -beta*k # print(np.exp(const1)) opCon_Const.append((np.exp(const1)-1)/k) const = np.array(opCon_Const) for k in range(xVal): opCon.append(((const[k] * opConFFT_t[1][k].imag)/1)) # opCon.append(((opCon_Const[k] * opConFFT_t[1][k])/0.04).real) # opCon_t6 = np.fft.fftshift(opCon) return opCon # + id="OmmFXsnp2ooT" # plt.plot(opConFFT_t[0], 
np.abs(opCon_t6)) # plt.xlim(-0.7,10) # plt.ylim(0, 2.5) # plt.grid() # plt.show() # + id="vI3p1VnKumMq" # plt.plot(opConFFT_t[0], opCon_t6) # plt.xlim(-1,10) # plt.ylim(-0.0002, 0.0002) # plt.grid() # plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import getpass a = getpass.getpass(prompt = 'Enter Your Password: ', stream = None) print(a) username = getpass.getuser() print(username) import os print(os.getlogin()) os.environ.get('USERNAME') os.getenv('USERNAME') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # # #
# #
# Prepared by | July 15, 2019 #
# This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. #
# $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ #
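# A quick check that the macros above render as intended: the Hadamard gate maps the basis states to the states defined here, and the bra-ket shorthand gives the expected inner products.
# $ \hadamard \vzero = \vhadamardzero, \qquad \hadamard \vone = \vhadamardone, \qquad \braket{0}{1} = 0 $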
  • Miscellaneous # #
  • Quantum algorithm implementations for beginners # # An article about implementations of well-known quantum algorithms #
  • Course on Quantum Information Science and Quantum Computing # # Gitlab page of the winner of the Teach Me Qiskit competition by IBM #
  • Quantum Quest # # Lecture notes from a 4-week course for high school students #
  • Quantum Computation and Quantum Information # # Book by Nielsen and Chuang #
  • Web site of . You can find links to his lecture notes and blog. #
  • Quantum Game # # An online quantum game about quantum optics #
  • Quantum Computing Series # # A series of notes for quantum computing #
  • Quantum Algorithm Zoo # # A list of quantum algorithms and their complexities. #
  • Quantum Computing for the Very Curious # # A nice website to start learning about quantum computing #
  • Comics # # A comic about quantum computing # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PaddlePaddle 2.0.0b0 (Python 3.5) # language: python # name: py35-paddle1.2.0 # --- # # Project Background - BOTBAY (Bot Harbor) # # BotBay aims to build a personal, human-like robot assistant. Ideally it can plug into different platforms (WeChat, 5G) and serve as a dedicated assistant for everyone's daily work and life. You can give it a name so that it accompanies you for life, and we hope it can keep serving you no matter how your work and life change. The current version equips BotBay with work-message organization and to-do reminders, for example: # 1. Pull the bot into a group chat; it records the text, images and files in the group, automatically stores the files to a cloud drive, and filters the text into meeting minutes; # 2. Mention (@) the bot in the group or message it privately to request the day's "daily report" or "minutes", with support for "sending the minutes by email"; # 3. Simulate a work task and see how the bot reminds me. # # # Demo # ## Video # [Bilibili link](https://www.bilibili.com/video/BV1q64y127Vd/) # # ## Screenshots # ### Account binding and naming the bot # > When a new user starts chatting with the chatbot, it does not know them yet, so it confirms the user account (based on a user system our team built previously) and the bot's own name #
    # # # ### Stopping and starting bot replies # > Since the bot runs on my own WeChat account, an on/off switch was implemented so that normal day-to-day messaging is not affected #
    # # # ### Automatic minutes generation # > A keyword-extraction algorithm judges which group-chat messages are more likely to be important; the resulting minutes can be sent to an email address (simulating the meeting-minutes workflow) #
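# A minimal, hypothetical Python sketch of the keyword-scoring idea described above. The project itself implements this in NodeJS on top of CppJieba (see the segmentation and keyword-extraction section later in this notebook); the jieba-based version below is only an illustration, not the project's code:
# ```
# # Hypothetical illustration: rank chat messages by how many chat-wide keywords they contain.
# import jieba.analyse
#
# def pick_minutes(messages, top_k=10, min_hits=2):
#     text = "\n".join(messages)                                    # pool the whole group chat
#     keywords = set(jieba.analyse.extract_tags(text, topK=top_k))  # extract chat-wide keywords
#     # keep the messages that mention at least `min_hits` of those keywords as candidate minutes
#     return [m for m in messages if sum(kw in m for kw in keywords) >= min_hits]
# ```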
    # # # ### Automatic archiving of group files, images, audio and video (mobile) # > A small engineering mechanism that archives group-chat files, guarding against files expiring or being lost when phones and computers are replaced #
    # # # > This is actually a proper cloud-drive system (based on a network drive our team built previously) #
    # # # ### To-do reminders and assisted actions # > If BOTBAY is connected to a business/office system, it can help you handle pending tasks by asking questions; below we simulate an approval flow for submitting a request form #
    # #
    # # # ### Daily report of archived information # > The collected Text/Audio/Video/Attachment/Image items, together with Room/Contact/mentionList information, are categorized, counted and analyzed #
    # #
    # #
    # # # > There is also a desktop (PC) view #
    # # # # 平台架构 # 本项目采用一入口,一平台,多支撑的模式进行设计与开发,其中: # # * 一入口 - 微信入口,采用chatbot模式实现用户与系统的交互与应答。 # # * 一平台 - botPlatform:托管chatbot,启动wechaty实例,接收消息,按状态机模式处理基础消息响应与逻辑分发。 # # * 多支撑 - paddleWorkers:使用paddleHub提供的支撑服务,本项目中使用paddle提供的图片OCR解析微信消息中的图片文字,今后可拓展不同的paddle服务,支撑chatbot实现更多功能。 # # # 核心逻辑 # ## botPlatform-托管chatbot # > 技术路线为NodeJs+Express+MongoDB,主要关键技术为:状态机、分词与关键词提取 # 由于整体代码量巨大,因此本次只上传了关键部分代码 # # | 序号 | 模块名称 | 功能 | 代码 | # | -------- | -------- | -------- | -------- | # | 1 | CMX-CoreHandler | 实现用户认证、用户管理、角色权限等功能 | 无 | # | 1.1 | user.js | 用户相关功能 | 无 | # | 1.2 | bot.js | chatbot相关功能 | **有** | # | 1.3 | application.js | 应用相关功能 | 无 | # | 2 | CMX-FileHandler | 实现文件处理、自动归档,网盘功能 | 无 | # | 3 | CMX-ResourceHandler | 实现流程处理、表单数据处理功能 | 无 | # # --- # ### 配置信息 # > 这里主要是botWechatMap变量在后面的login过程限制了可以扫码的微信用户白名单 # ``` # var _LANG = 'ch';//默认中文 # const BOTCONFIG = { # autoregistHello: 'hello bot', # botWechatMap: { # porbello: 'https://u.wechat.com/MG3oDlaSML_iJ3AN6me3Uv4'//不是随意的微信扫码都有效,这里配置了白名单 # }, # language: { # ch: { # hello: '您好,我是您的专属助手', # //...等等其它语言 # } # } # }; # ``` # config/index.js # ``` # const config = { # bot: { # enable: true,//开启机器人服务 # tokens: ['']//如果有多个token,可以启动多个实例 # }, # } # ``` # # ### 启动wechaty实例 # ``` # var bots = []; # if (config.bot && config.bot.enable) { # for (let i = 0; i < config.bot.tokens.length; i++) {//如果有多个token,则循环运行实例 # const bot = new Wechaty({ # puppet: new PuppetPadlocal({ # token: config.bot.tokens[i] # }), # name: 'BotBay' # }); # bot.cmx = bot.cmx || {}; # bot.cmx.use = false; # bots.push(bot); # bot # .on('scan', (qrcode, status) => { # bot.cmx.qrcode = qrcode;//有二维码时赋值 # }) # .on('login', async (user) => { # console.log(`User ${user} logged in`); # const contact = bot.userSelf(); # if (BOTCONFIG.botWechatMap[contact.id]) { # console.log(`check pass`); # bot.cmx.use = true;//本bot状态是否为待机 # bot.cmx.qrcode = '';//把二维码输出到前端管理页面用 # bot.cmx.wechatqr = BOTCONFIG.botWechatMap[contact.id];//给前端管理页面显示本bot对应的微信二维码 # } else { # console.log(`check fail`);//如果不是白名单微信扫码,则强制登出,这里复现貌似登出后不能立刻回调到scan事件 # await bot.logout(); # //TOOD 不知道是否需要主动重启wechaty # } # # }) # .on('logout', user => { # console.log(`User ${user} log out`); # bot.cmx.use = false;//bot状态置为待机 # }) # .on('error', e => console.info('Bot', 'error: %s', e)) # .on('message', message => onMessage(message, bot)) # .on('friendship', friendship => onFriendship(friendship, bot)) # .on('room-join', (room, inviteeList, inviter) => onRoomJoin(room, inviteeList, inviter, bot)) # .start(); # } # } # ``` # ### 获取我自己的bot # ``` # async function getMyBot(wechatid) { # return new Promise((resolve, reject) => { # Models.Botlists.findOne({//里面存储了每个人bot的信息 # wechatid: wechatid # }).lean().exec((err, data) => { # if (err || !data) resolve(false); # else { # if (data.owner) { # Models.Userworkspacelinks.findOne({ # user: data.owner # }).lean().exec((dmErr, dmData) => { # if (dmErr || !dmData) resolve(false); # else resolve(Object.assign(data, { # workspace: dmData.workspace # })); # }); # } else { # resolve(false); # } # } # }); # }); # } # ``` # 其中bot的字段大致如下 # ``` # const botlistsScheMa = new Schema({ # nickname: String,//昵称 # owner: String,//所属人 # expires: { type: Date },//过期时间 # state: { type: Number, default: 1 },//状态 # birthday: { type: Date, default: Date.now },//生日 # desc: String,//个人简介 # worldranking: Number,//排名 # level: Number,//等级 # wechatid: String,//关联微信号 # hello: String//自定义触发语 # }); # # ``` # ### 状态机状态枚举 # 1. HELLO - 初始化状态,新添加机器人为好友或使用“变身机器人”触发 # 2. WAITUSERNAME - 检查发现不明确用户账户,等待账户信息 # 3. 
WAITNICKNAME - 检查用户尚未给本机器人起名,等待昵称信息 # 4. FREE - 目前基础信息完整,响应交互 # # 在下文中状态机在不同时机发生状态变化 # # ### 添加好友 # ``` # async function onFriendship(friendship, bot) { # const contact = friendship.contact(); # if (friendship.type() === bot.Friendship.Type.Receive) { // 1. receive new friendship request from new contact # let hasbotinfo = await ((key, wechatid) => { # return new Promise((resolve, reject) => { # Models.Botlists.findOne({ # $or: [{ hello: key }, { wechatid: wechatid }]//根据触发语或微信号检查是否已有机器人 # }).lean().exec((err, data) => { # if (err) resolve(false); # else resolve(data || false); # }); # }); # })(friendship.hello(), contact.id); # if (hasbotinfo === false && friendship.hello() == BOTCONFIG.autoregistHello) {//autoregistHello是默认通用的触发语 # hasbotinfo = 'new wechat user'; # } # if (hasbotinfo !== false) { # await friendship.accept();//接收好友申请 # console.log(`Request from ${contact.name()} is accept succesfully!`); # if (hasbotinfo == 'new wechat user') { # RedisClient.set('BOT-' + contact.id, 'WAITUSERNAME');//状态机置为等待账户名 # await fsmJob(bot, contact); # } else { # if (!isEmpty(hasbotinfo.nickname)) { # RedisClient.set('BOT-' + contact.id, 'HELLO');//状态机置为打招呼 # await fsmJob(bot, contact, hasbotinfo.nickname); # } else { # RedisClient.set('BOT-' + contact.id, 'WAITNICKNAME');//状态机置为等待昵称 # await fsmJob(bot, contact); # } # await ((_query, _updatedata) => { # return new Promise((resolve, reject) => { # Models.Botlists.updateOne(_query, _updatedata, (err, data) => { # if (err) { # console.error(err); # resolve(false); # } else resolve(data); # }); # }); # })({ # _id: hasbotinfo._id # }, { # wechatid: contact.id//更新一下微信号 # }); # } # } else { # RedisClient.del('BOT-' + contact.id); # console.log(`no exist botinfo from ${friendship.hello()}`); # } # } else if (friendship.type() === bot.Friendship.Type.Confirm) { // 2. 
confirm friendship # console.log(`New friendship confirmed with ${contact.name()}`); # } # } # ``` # # ### 接收消息 # ``` # async function onMessage(msg, bot) { # const contact = msg.talker(); # if (contact.id == 'wexin' || msg.self()) { # return; # } # await fsmJob(bot, contact, msg, true);//直接调用状态机动作 # } # ``` # # ### 状态机动作 # ``` # async function fsmJob(bot, contact, msg, reply) { # let FSM = await (() => {//查询当前用户状态 # return new Promise((resolve, reject) => { # RedisClient.get('BOT-' + contact.id, function (err, result) { # if (err) { # resolve(false); # } else { # resolve(result || ''); # } # }); # }); # })(); # if (FSM) # await botDoProcess[FSM](bot, contact, msg, reply);//直接执行对应动作 # else {//说明是新用户 # if (reply && msg) {//说明用户主动发消息给bot # //...有若干代码,主要思想就是根据用户发的消息,进行相应处理 # } # } # } # ``` # 主要逻辑在botDoProcess变量中实现, # ``` # const botDoProcess = { # WAITUSERNAME: async (bot, contact, msg, reply)=>{//接收到的是用户账户名,检查数据库是否存在,存在则与bot绑定}, # WAITNICKNAME: async (bot, contact, msg, reply)=>{//接收到的是bot昵称,更新数据库}, # FREE: async (bot, contact, msg, reply)=>{//处理指令【纪要、纪要发送邮箱、日报、帮助等】,对文本、音频、视频、附件、图片进行处理、归档、统计、分析}, # HELLO: async (bot, contact, nickname)=>{ # await contact.say(BOTCONFIG.language[_LANG].hello + nickname); # RedisClient.set('BOT-' + contact.id, 'FREE');//空闲 # await fsmJob(bot, contact); # }, # } # ``` # # ### 分词与关键词提取 # 使用CppJieba提供底层分词算法实现 # # ## paddleWorkers-提供chatbot支撑服务 # > 本次只使用了图片OCR这一个功能,并且封装为http接口【因为pyton实现的paddleWorker,nodejs实现的botPlatform】,暴露给botPlatform使用,得力于paddlehub的组件成熟度,所以代码量很少,这里给paddlehub点个赞 # # ``` # from flask import request, Flask # import json # import paddlehub as hub # import cv2 # import requests # import os # # # app = Flask(__name__) # ocr = None # # # @app.route('/imageOcr', methods=['GET']) # def image_ocr(): # path = request.args.get('imagePath') # print(path) # file_name = os.path.basename(path) # file_down = requests.get(path) # with open('/mnt/'+file_name,'wb') as f: # f.write(file_down.content) # ocr_res = ocr.recognize_text(images=[cv2.imread('/mnt/'+file_name)]) # data = ocr_res[0]['data'] # res_data = {} # text = [] # for item in data: # text.append(item['text']) # res_data['msg'] = '请求成功' # res_data['code'] = 200 # res_data['data'] = text # os.remove('/mnt/'+file_name) # return json.dumps(res_data,ensure_ascii=False) # # # # def load_model(): # global ocr # ocr = hub.Module(name="chinese_ocr_db_crnn_server") # # # if __name__ == "__main__": # load_model() # app.run(host="0.0.0.0", port=9000) # # ``` # ## webpages-实现前端页面 # > 虽然写前端页面的工作相比于高大上的机器学习、深度学习、人工智能、自然语言处理这些门类,显得不上档次,但是有一个"友好一点点"的界面总还算是件好事。 # # 基本上就是这个样子按组件化编写的页面 # ``` # #
    #

    # Group chat files #

    # # # No data yet~ # #
    #
    # ``` # 开发语言为VUE,使用Echarts的图表,这部分就不赘述如何开发的了,按设计稿实现就好了。 # # # 尚未解决问题 # 1. 目前版本基于状态机的消息处理逻辑是不能应答非标准化的指令的,可以通过引入自然语言处理和多轮对话技术辅助触发状态变化; # 2. 由于wechaty原理基于微信号的消息收发,所以存在添加好友人数上线,目前本方案可支持不添加好友情况下,在微信群中@机器人的方式进行交互,但复杂场景下还是需要添加好友的。考虑到这点,botPlatform在最开始的配置信息中,预留了多个wechaty实例使用的token数组,并且通过循环创建的方式,可以在服务器端启动多个wechaty实例待机,并且根据策略派发实例响应用户交互(ps.不成熟); # 3. 基于wechaty作为消息收发中枢的模式无法满足生产环境下高可用的要求,如果一个实例宕机,其它实例目前没有平滑无缝接管服务的方式,所以只能多拜拜大神了。 # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tensorflow Load and Benchmark Tests # # Using a pretrained model for [Tensorflow flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) # # * Load test the model at fixed rate # * Benchmark the model to find maximum throughput and saturation handling # # ## Setup # # * Create a 3 node GCP cluster with n1-standard-8 node # * Install Seldon Core # # ## TODO # # * gRPC # * Run vegeta on separate node to model servers using affinity/taints # !kubectl create namespace seldon # !kubectl config set-context $(kubectl config current-context) --namespace=seldon # + import sys sys.path.append("../") from vegeta_utils import * # - # ## Put Taint on Nodes # raw=!kubectl get nodes -o jsonpath='{.items[0].metadata.name}' firstNode = raw[0] # raw=!kubectl get nodes -o jsonpath='{.items[1].metadata.name}' secondNode = raw[0] # raw=!kubectl get nodes -o jsonpath='{.items[2].metadata.name}' thirdNode = raw[0] # !kubectl taint nodes '{firstNode}' loadtester=active:NoSchedule # !kubectl taint nodes '{secondNode}' model=active:NoSchedule # !kubectl taint nodes '{thirdNode}' model=active:NoSchedule # ## Benchmark with Saturation Test # %%writefile tf_flowers.yaml apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata: name: tf-flowers spec: protocol: tensorflow transport: rest predictors: - graph: implementation: TENSORFLOW_SERVER modelUri: gs://kfserving-samples/models/tensorflow/flowers name: flowers parameters: - name: model_name type: STRING value: flowers componentSpecs: - spec: containers: - name: flowers resources: requests: cpu: '2' tolerations: - key: model operator: Exists effect: NoSchedule name: default replicas: 1 run_model("tf_flowers.yaml") # Run test to gather the max throughput of the model results = run_vegeta_test("tf_vegeta_cfg.yaml", "vegeta_max.yaml", "11m") print(json.dumps(results, indent=4)) saturation_throughput = int(results["throughput"]) print("Max Throughtput=", saturation_throughput) # ## Load Tests with HPA # # Run with an HPA at saturation rate to check: # * Latencies affected by scaling # # %%writefile tf_flowers.yaml apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata: name: tf-flowers spec: protocol: tensorflow transport: rest predictors: - graph: implementation: TENSORFLOW_SERVER modelUri: gs://kfserving-samples/models/tensorflow/flowers name: flowers parameters: - name: model_name type: STRING value: flowers componentSpecs: - hpaSpec: minReplicas: 1 maxReplicas: 5 metrics: - resource: name: cpu targetAverageUtilization: 70 type: Resource spec: containers: - name: flowers resources: requests: cpu: '1' livenessProbe: failureThreshold: 3 initialDelaySeconds: 60 periodSeconds: 5 successThreshold: 1 tcpSocket: port: http timeoutSeconds: 5 readinessProbe: failureThreshold: 3 initialDelaySeconds: 20 periodSeconds: 5 successThreshold: 1 tcpSocket: port: http timeoutSeconds: 5 
tolerations: - key: model operator: Exists effect: NoSchedule name: default replicas: 1 run_model("tf_flowers.yaml") rate = saturation_throughput duration = "10m" # %env DURATION=$duration # %env RATE=$rate/1s # !cat vegeta_cfg.tmpl.yaml | envsubst > vegeta.tmp.yaml # !cat vegeta.tmp.yaml results = run_vegeta_test("tf_vegeta_cfg.yaml", "vegeta.tmp.yaml", "11m") print(json.dumps(results, indent=4)) print_vegeta_results(results) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### YAML -> JSON conversion import yaml # #### yaml_files/example1.yaml # ------------------------------------------ # # ```yaml # env: # - name: INCLUDE_TEST_PIPELINES # value: FALSE # - name: AIRFLOW_KUBE_NAMESPACE # valueFrom: # fieldRef: # fieldPath: metadata.namespace # - name: SQL_ALCHEMY_CONN # valueFrom: # secretKeyRef: # name: airflow-secrets # key: sql_alchemy_conn # ``` with open("yaml_files/example1.yaml", 'r') as stream: try: yaml1 = yaml.load(stream) except yaml.YAMLError as exc: print(exc) import json print(json.dumps(yaml1, indent=2)) # #### yaml_files/example2.yaml # --------- # ```yaml # invoice: 34843 # date : '2001-01-23' # JSON doesn't support date data type thats why converted to string # bill-to: &id001 # given : Chris # family : Dumars # address: # lines: | # 458 Walkman Dr. # Suite #292 # city : Royal Oak # state : MI # postal : 48046 # ship-to: *id001 # product: # - sku : BL394D # quantity : 4 # description : Basketball # price : 450.00 # - sku : BL4438H # quantity : 1 # description : Super Hoop # price : 2392.00 # tax : 251.42 # total: 4443.52 # comments: > # Late afternoon is best. # Backup contact is Nancy # Billsmer @ 338-4338. 
# ``` with open("yaml_files/example2.yaml", 'r') as stream: try: yaml2 = yaml.load(stream) except yaml.YAMLError as exc: print(exc) print(json.dumps(yaml2, indent=2)) # #### yaml_files/example3.yaml # --- # ```yaml # env: # - DIR: /hello/txt # - USER: surendra # - PASSWORD: # credentials: > # { # "key1": "value1" # "key2": "value2" # } # temp_key: standard_key # ``` with open("yaml_files/example3.yaml", 'r') as stream: try: yaml3 = yaml.load(stream) except yaml.YAMLError as exc: print(exc) print(yaml3) print(yaml3['credentials']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np ND = np.array([1,2,3,4,5]) ND PL = [1,2,3,4,5] PL ND = np.array([[1,2,3,4,5], [1,2,3,4,5]]) ND ND.shape ND.ndim ND.dtype type(ND) ND.size ND3 = np.array([[[1,2,3,4,5], [1,2,3,4,5]], [[1,2,3,4,5], [1,2,3,4,5]]]) ND3 ND3.ndim ND3.size ND3.shape ND0 = np.array(4) ND0 type(ND0) ND0.ndim NDX = np.array([4]) NDX NDX.ndim # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="imJ4pdxMFuob" colab_type="code" outputId="00e50642-4322-4f4e-e497-011de77529b8" colab={"base_uri": "https://localhost:8080/", "height": 74} import keras import numpy as np import pandas as pd import cv2 from keras.models import Sequential from keras.layers import Conv2D,MaxPooling2D, Dense,Flatten from keras.datasets import mnist import matplotlib.pyplot as plt from keras.utils import np_utils from keras.optimizers import SGD # + id="qIP4zEpGGYl1" colab_type="code" outputId="fc05d15f-6713-4e3b-f9e6-849225009a4d" colab={"base_uri": "https://localhost:8080/", "height": 403} # !pip install PyDrive # + id="ijUASkTROl6D" colab_type="code" colab={} import os from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # + id="2yEzz9S4RdR9" colab_type="code" colab={} auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # + id="Z6DDOrSbRqA3" colab_type="code" colab={} download = drive.CreateFile({'id': '1wG0gS-bqjV6yz1YveuxkvHT5_2DOuT05'}) download.GetContentFile('train.csv') train = pd.read_csv('train.csv') # + id="2Rc5CNhVRs4b" colab_type="code" colab={} download = drive.CreateFile({'id': '1q_Zwlu3RncjKq1YpiVtkiMPxIIueGRYB'}) download.GetContentFile('test.csv') test = pd.read_csv('test.csv') # + id="Bzui153jR012" colab_type="code" outputId="fa16d255-5c14-4976-b736-839e39a32c61" colab={"base_uri": "https://localhost:8080/", "height": 499} display(train.info()) display(test.info()) display(train.head(n = 2)) display(test.head(n = 2)) # + id="Yim557wpbSf_" colab_type="code" colab={} train_Y = train['label'] test_Y = test['label'] train_X = train.drop(['label'],axis = 1) test_X = test.drop(['label'],axis = 1) # + id="PyiXHi2DkT4r" colab_type="code" colab={} train_X = train_X.astype('float32') / 255 test_X = test_X.astype('float32')/255 # + id="RpKOAcYvi-wS" colab_type="code" outputId="d640461b-356f-4c33-e1e4-1a46cd289040" colab={"base_uri": "https://localhost:8080/", "height": 1062} display(train_Y) # + id="zRdBQOJ0bVvF" colab_type="code" colab={} train_X = 
train_X.values.reshape(27455,784) test_X = test_X.values.reshape(7172,784) train_Y = keras.utils.to_categorical(train_Y,26) test_Y = keras.utils.to_categorical(test_Y,26) # + id="00fdB0Yvl1d8" colab_type="code" colab={} model = Sequential() model.add(Dense(units=128,activation="relu",input_shape=(784,))) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=26,activation="softmax")) # + id="QIfU7or-fcAM" colab_type="code" outputId="a570dd53-470a-4dd2-f202-090fdd291622" colab={"base_uri": "https://localhost:8080/", "height": 3359} model.compile(optimizer=SGD(0.001),loss="categorical_crossentropy",metrics=["accuracy"]) model.fit(train_X,train_Y,batch_size=32,epochs=100,verbose=1) # + id="tmXKzSMSfgDS" colab_type="code" outputId="226ed580-45e6-479e-9576-2fc469348fd5" colab={"base_uri": "https://localhost:8080/", "height": 92} accuracy = model.evaluate(x=test_X,y=test_Y,batch_size=32) print("Accuracy: ",accuracy[1]) # + id="QFWBt201hlEf" colab_type="code" outputId="14a91e7e-72a3-479d-ec39-789c06ad6168" colab={"base_uri": "https://localhost:8080/", "height": 74} img = test_X[1] test_img = img.reshape((1,784)) img_class = model.predict_classes(test_img) prediction = img_class[0] classname = img_class[0] print("Class: ",classname) # + id="5Yzwqb7Fhojk" colab_type="code" outputId="a22d2978-ca5e-4dd3-fee1-afdb1a1a82e8" colab={"base_uri": "https://localhost:8080/", "height": 382} img = img.reshape((28,28)) plt.imshow(img) plt.title(classname) plt.show() # + id="3NldTeSghsHB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 94} outputId="2ce7b919-50c1-4cde-b2c7-05a8f2d8d717" model.save_weights('model_weights.h5') weights_file = drive.CreateFile({'title' : 'model_weights.h5'}) weights_file.SetContentFile('model_weights.h5') weights_file.Upload() drive.CreateFile({'id': weights_file.get('id')}) # + id="b3MJfJrWyGNH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 94} outputId="9ccecde1-5cca-4397-f338-72155c24d8a2" model.save('model.h5') weights_file = drive.CreateFile({'title' : 'model.h5'}) weights_file.SetContentFile('model.h5') weights_file.Upload() drive.CreateFile({'id': weights_file.get('id')}) # + id="WFgvZoNXqsCJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="b5894724-4551-4efd-ffe9-41c446c360f7" # !pip install h5py pyyaml # + id="y7SmbytZx6Wa" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Correlation Plot # # The `corr_plot()` function creates a fluent builder object offering a set of methods for configuring of beautiful correlation plots. 
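# As a rough sketch of the same fluent builder, a tile-style variant can be configured in one chain. This assumes the builder exposes a `tiles()` method and simply reuses the mpg.csv dataset that the next cell loads:
# +
# Sketch only: tile-style correlation matrix via the corr_plot builder (tiles() assumed available)
import pandas as pd
from lets_plot import LetsPlot
from lets_plot.bistro.corr import corr_plot

LetsPlot.setup_html()
mpg = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
corr_plot(data=mpg, threshold=.5).tiles().build()
# -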
# + import pandas as pd from lets_plot import * from lets_plot.bistro.corr import * LetsPlot.setup_html() # - df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv') corr_plot(data=df, threshold=.5).points().labels()\ .palette_gradient(low='#d7191c', mid='#ffffbf', high='#1a9641')\ .build() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="OHDqKE90ndf_" colab={"base_uri": "https://localhost:8080/"} outputId="aa28f456-266b-4364-d6c7-f71defcba5e5" import time start = time.perf_counter() import tensorflow as tf print(tf.__version__) end = time.perf_counter() print('Elapsed time: ' + str(end - start)) # + [markdown] id="RNuSdnFAnNyD" # ## Tweet Sentiment Extraction # "My ridiculous dog is amazing." [sentiment: positive] # # With all of the tweets circulating every second it is hard to tell whether the sentiment behind a specific tweet will impact a company, or a person's, brand for being viral (positive), or devastate profit because it strikes a negative tone. Capturing sentiment in language is important in these times where decisions and reactions are created and updated in seconds. But, which words actually lead to the sentiment description? In this competition you will need to pick out the part of the tweet (word or phrase) that reflects the sentiment. # # Help build your skills in this important area with this broad dataset of tweets. Work on your technique to grab a top spot in this competition. What words in tweets support a positive, negative, or neutral sentiment? How can you help make that determination using machine learning tools? # # In this competition we've extracted support phrases from Figure Eight's Data for Everyone platform. The dataset is titled Sentiment Analysis: Emotion in Text tweets with existing sentiment labels, used here under creative commons attribution 4.0. international licence. Your objective in this competition is to construct a model that can do the same - look at the labeled sentiment for a given tweet and figure out what word or phrase best supports it. # # Disclaimer: The dataset for this competition contains text that may be considered profane, vulgar, or offensive. 
# Link: https://www.kaggle.com/c/tweet-sentiment-extraction/overview # + colab={"base_uri": "https://localhost:8080/"} id="jlj1Nfr4otNF" outputId="82483a21-2768-472a-a8a8-d1193a5e1506" from google.colab import drive drive.mount('/content/gdrive') # + [markdown] id="gNZrC5SmndgC" # # EDA # + id="gAmkKrZrndgD" colab={"base_uri": "https://localhost:8080/"} outputId="10fe83eb-8f2e-4455-8fce-9e19fffd24c8" import pandas as pd start = time.perf_counter() """ #loading data if using local system train_data = pd.read_csv("data/jigsaw-toxic-comment-train.csv") validation_data = pd.read_csv("data/validation.csv") test_data = pd.read_csv("data/test.csv") """ #loading data if using colab path = "/content/gdrive/MyDrive/Dataset/Tweet Sentiment Extraction/Data/" train_data = pd.read_csv(path+ "train.csv") test_data = pd.read_csv(path+"/test.csv") end = time.perf_counter() print('Elapsed time: ' + str(end - start)) # + id="gfmMCAB4ndgD" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="fff27f80-7a96-4cd1-da3e-65d2ebd1f049" train_data.head() # + colab={"base_uri": "https://localhost:8080/"} id="cYutIXkJmULv" outputId="dc48e276-3833-4b62-977d-e0a9aedee840" train_data.describe # + id="QbjxvdkSndgF" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="850089c1-23d2-43ab-eeef-841d1d91e790" test_data.head() # + colab={"base_uri": "https://localhost:8080/"} id="z2AXq4J-mWMe" outputId="56ffbda1-06be-441e-c270-c17b21aa7da3" test_data.describe # + id="Q4TdsQwPndgG" colab={"base_uri": "https://localhost:8080/"} outputId="647081ed-56ba-4919-eec7-cc38b907931a" pip install wordcloud # + id="v8m6iYGXndgG" colab={"base_uri": "https://localhost:8080/"} outputId="e62447c2-ba7d-4520-a97b-f2f06c691875" pip install plotly # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="0EJs-99LXGaR" # # Overview # ## Data visualization # + [markdown] id="VuzIPUAv3eHK" # ## Visualizing Data # - A fundamental part of the data scientist’s toolkit is data visualization. Although it is very # easy to create visualizations, it’s much harder to produce good ones. # - There are two primary uses for data visualization: # # To explore data / Communicate data # # # + [markdown] id="kUzTRwUf4cHb" # ## Matplotlib # A wide variety of tools exists for visualizing data. We will be beginning with the matplotlib library, which is widely used for simple vizualisations like bar charts, line charts, and scatterplots. # In particular, we will be using the matplotlib.pyplot module. Pyplot # maintains an internal state in which you build up a visualization step by step. # # + [markdown] id="Oy2MAncj6iba" # ## Line Charts # # A line plot is generally used to present observations collected at regular intervals. 
# # > To chart time series data # # # # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="o6PSX4U8LsW7" outputId="ed017fc7-6e15-429d-ff04-979f9309c95c" import matplotlib.pyplot as plt years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] gdp = [300.2, 543.3, 1075.9, 3862.5, 5979.6, 10286.7, 15958.3] plt.plot(years, gdp) plt.show() # + id="RhLm9ZNc3YEV" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="e728bf9d-60b4-437e-8b0a-058cadd6f814" from matplotlib import pyplot as plt years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] gdp = [300.2, 543.3, 1075.9, 3862.5, 5979.6, 10289.7, 15958.3] ## Change the values and check the change # create a line chart, years on x-axis, gdp on y-axis plt.plot(years, gdp, color='red', marker='o', linestyle='dashed') ## Possible other linestyles: ('solid', 'dotted',dashed','dashdot') # add a title plt.title("Nominal GDP") # add a label to the y-axis plt.ylabel("Millions $") plt.xlabel('Years') plt.show() # + [markdown] id="dm1EknhA7Gek" # ## Bar Charts # A bar chart is a good choice when you want to show how some quantity varies among some discrete set of items. # # > Categorical quantities. # + colab={"base_uri": "https://localhost:8080/"} id="HQ1vypepQX64" outputId="82e5cddd-b85d-479b-a1bb-a1f5ffdf52f9" listA = [] for i in range(1,12): listA.append(i) print(listA) # + id="Nl9jtpJJ52k1" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="77a869c3-5472-42e8-f6ae-078faf92b547" movies = ["", "Ben-Hur", "Casablanca", "Gandhi", "West Side Story"] num_oscars = [5, 11, 3, 8, 10] plt.bar(movies, num_oscars,width=0.4) ## try different values plt.title("Favorite Movies Oscars") plt.ylabel("# of Oscar Awards") plt.yticks(listA) plt.xlabel('Movies names') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="fdvM-k9ZTY3J" outputId="21c8b586-8861-4c97-db80-4687af6cce92" listA = [] for i in range(495,506): listA.append(i) listA # + id="PP5R2OQl7QXW" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="76de5bf7-cc5d-48d3-f3b2-31578d084a91" mentions = [500, 515] years = [2013, 2014] plt.bar(years, mentions,width=0.3) plt.xticks(years) plt.ylabel("# of times I heard someone say 'data science'") plt.xlabel('Year') plt.axis([2012.5,2014.5,450,520]) plt.title("Stydying the increament") plt.show() # + [markdown] id="gM9xOJO79pKT" # ## Line Charts # + id="X0tjpvOh9IC1" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="dd81506f-e292-41bd-a3fa-0504b4723ee7" variance = [1, 2, 4, 8, 16, 32, 64, 128, 256] bias_squared = [256, 128, 64, 32, 16, 8, 4, 2, 1] total_error = [x + y for x, y in zip(variance, bias_squared)] xs = [0,1,2,3,4,5,6,7,8] plt.plot(xs, variance, 'g-') # green solid line plt.plot(xs, bias_squared, 'r-.') # red dot-dashed line plt.plot(xs, total_error, 'b:') # blue dotted line plt.xlabel("model complexity") plt.title("The Bias-Variance Tradeoff") plt.show() # + [markdown] id="sY3w0SnJ906p" # ## Scatterplots # # > A scatterplot is the right choice for visualizing the relationship between two paired sets of # data (or more). 
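# A small sketch of the "(or more)" case: a third paired set can be encoded through marker color and size. The two grade lists reuse the values from the next cell; the 'hours studied' list is invented purely for illustration.
# +
import matplotlib.pyplot as plt

test_1 = [99, 90, 85, 97, 80, 78]   # test 1 grades (same values as the cell below)
test_2 = [100, 85, 60, 90, 70, 83]  # test 2 grades (same values as the cell below)
hours = [12, 10, 4, 11, 5, 6]       # hypothetical third variable: hours studied

plt.scatter(test_1, test_2, c=hours, s=[20 * h for h in hours], cmap='viridis')
plt.colorbar(label='hours studied')
plt.xlabel('test 1 grade')
plt.ylabel('test 2 grade')
plt.title('Encoding a third paired variable with color and size')
plt.show()
# -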
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="slDiw16BhMFg" outputId="cc453fc9-32dd-4c3f-d808-b83fdaa634f0" test_1_grades = [ 99, 90, 85, 97, 80,78] test_2_grades = [100, 85, 60, 90, 70,83] plt.scatter(test_1_grades, test_2_grades) plt.title("Axes of tests grades") plt.xlabel("test 1 grade") plt.ylabel("test 2 grade") plt.show() # + id="P739hRb49r9D" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="a37b0afd-cafd-41a7-9590-7aac6e36e035" friends = [ 70, 65, 72, 63, 71, 64, 60, 64, 67] minutes = [175, 170, 205, 120, 220, 130, 105, 145, 190] labels = ['i', 'i', 'i', 'i', 'e', 'e', 'e', 'e', 'e'] plt.scatter(friends, minutes) # label each point for label, friend_count, minute_count in zip(labels, friends, minutes): plt.annotate(label, xy=(friend_count, minute_count), # put the label with its point xytext=(6, -5), # but slightly offset textcoords='offset points') plt.title("Daily Minutes vs. Number of Friends") plt.xlabel("# of friends") plt.ylabel("daily minutes spent on the site") plt.show() # + [markdown] id="cAbbNPWv-VmF" # ##For Further Exploration # Seaborn is built on top of matplotlib and allows you to easily produce prettier (and more complex) visualizations. # # D3.js is a JavaScript library for producing sophisticated interactive visualizations for # the web. Although it is not in Python, it is both trendy and widely used, and it is well # worth your while to be familiar with it. # # Bokeh is a newer library that brings D3-style visualizations into Python. # # ggplot is a Python port of the popular R library ggplot2, which is widely used for creating “publication quality” charts and graphics. It’s probably most interesting if you’re already an avid ggplot2 user, and possibly a little opaque if you’re not. # + [markdown] id="vp_fAJZb-9dp" # Reference: # # > Website : GeekforGeeks.com # # > link: https://machinelearningmastery.com/data-visualization-methods-in-python/ # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # explore dataset # # This set of notebooks is based upon # 1. NASA bearing dataset: http://data-acoustics.com/measurements/bearing-faults/bearing-4/ # Set number 2: 4 accelerometers one on each bearing # 2. Tutorial: https://towardsdatascience.com/machine-learning-for-anomaly-detection-and-condition-monitoring-d4614e7de770 # # Acknowledgement is made for the measurements used in this work provided through data-acoustics.com database and the python code from a tutorial by . 
# + import configparser import logging from ocs_sample_library_preview import * import json import pandas as pd import pprint logger = logging.getLogger() logger.setLevel(logging.CRITICAL) config = configparser.ConfigParser() config.read('config.ini') ocsClient = OCSClient(config.get('Access', 'ApiVersion'), config.get('Access', 'Tenant'), config.get('Access', 'Resource'), config.get('Credentials', 'ClientId'), config.get('Credentials', 'ClientSecret')) namespace_id = config.get('Configurations', 'Namespace') print(namespace_id) # - # determine the dataset start-end dates # they should all be the same, so we'll use the signal variables in later cells import pprint as pprint for i in range(1,5): stream = f'Nasa.bearing{i}.agg' signal_starttime = ocsClient.Streams.getFirstValue(namespace_id,stream,None)['timestamp'] signal_endtime = ocsClient.Streams.getLastValue(namespace_id,stream,None)['timestamp'] print(f'{stream},{signal_starttime},{signal_endtime}') # get the first 10 values from each of the aggregate streams to explore what the data looks like import pprint as pprint for i in range(1,5): signal_starttime = ocsClient.Streams.getFirstValue(namespace_id,f'Nasa.bearing{i}.agg',None)['timestamp'] pprint.pprint(ocsClient.Streams.getRangeValues(namespace_id,f'Nasa.bearing{i}.agg',start=signal_starttime,count=10,value_class=None,skip=0,reverse=False,boundary_type=SdsBoundaryType.Inside)) # retrieve summary information # + # define a function to format the getSummaries output into a DataFrame based upon a stream query to help understand the data def sds_summaries_format(query,start=None,end=None,property=None): if start == None or end == None: print("not implemented, specify start and end parameters") return None df_summaries = None df_summaries = pd.DataFrame(columns=['Count', 'Minimum', 'Maximum', 'Range', 'Total', 'Mean', 'StandardDeviation', 'PopulationStandardDeviation', 'WeightedMean', 'WeightedStandardDeviation', 'WeightedPopulationStandardDeviation', 'Skewness', 'Kurtosis']) for stream in ocsClient.Streams.getStreams(namespace_id,query=query): try: summary = ocsClient.Streams.getSummaries(namespace_id,stream.Id,start=start,end=end,count=1, value_class=None) #pprint.pprint(summary) for key,value in summary[0]['Summaries'].items(): df_summaries.loc[stream.Name,key] = value[f'{property}'] except Exception as e: print(f'getSummaries error: {str(e)}') return(df_summaries) df = sds_summaries_format("nasa.bearing*.agg",start= signal_starttime,end= signal_endtime,property="channel") df.sort_index(inplace=True) df # + # create a dataframe of the aggregate dataset that could be used for later analysis work ocsClient.acceptverbosity=True dfagg = pd.DataFrame() for bearing in range(1,5): values = ocsClient.Streams.getRangeValues(namespace_id,f'nasa.bearing{bearing}.agg',start= signal_starttime,skip=0,count=985,value_class=None,reverse=False,boundary_type=SdsBoundaryType.Exact) df_temp = pd.DataFrame.from_dict(values).set_index('timestamp') # df_temp = df_temp.set_index('timestamp') df_temp.rename(columns={'channel':f'bearing {bearing}'},inplace=True) if dfagg.empty: dfagg = dfagg.append(df_temp) else: dfagg = dfagg.merge(df_temp,on='timestamp') # - dfagg.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="hd8JsUcWmdf8" colab_type="text" # # Finetune GPT-2 
on Joe Rogan Experience transcripts # # By [](https://www.linkedin.com/in/clayajohnson/) # # *Last updated: July 17th, 2020* # # This colab notebook finetunes OpenAi's gpt-2 on transcripts from the Joe Rogan Experience podcast as part of the dialogue modelling transformer network project. # # For more info you can visit [DMTNet](https://github.com/clayajohnson/dialogue_modelling_transformer_network). # # Begin by copying this notebook to your Google Drive to keep it and save your changes. (File -> Save a Copy in Drive) # + [markdown] id="a1vsRh7zWRVM" colab_type="text" # ## Setup # # In order to retrain the model, the workspace needs to be configured correctly with the appropriate resources and libraries. # # Steps: # 1. Clone my repo into the colab notebook # 2. Download the requirements # 3. Configure project compatible version of tensorflow and cuda support # + id="riVEKJ_MZ5hX" colab_type="code" colab={} # Clone the repo with modified training and running scripts # !git clone -b interactive-feature https://github.com/clayajohnson/gpt-2.git # + id="ECSOrg_9RU2n" colab_type="code" colab={} # Move into gpt-2 folder # %cd /content/gpt-2 # + id="mBOpJTTo_aU5" colab_type="code" colab={} # Download contents of requirements.txt # !pip3 install -r requirements.txt # + id="xapT_JtlRie2" colab_type="code" colab={} # Install the compatible version of tensorflow with GPU support # !pip install tensorflow-gpu==1.15.0 # !pip install 'tensorflow-estimator<1.15.0rc0,>=1.14.0rc0' --force-reinstall # + id="ZpnEZMLRSMyx" colab_type="code" colab={} # Install compatible cuda support for project # !wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb # !dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb # !apt-key add /var/cuda-repo-*/7fa2af80.pub # !apt-get update # !apt-get install cuda-9-0 # !export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.0/lib64/ # + id="OrHykbUqCu9O" colab_type="code" colab={} # Move into gpt-2 folder after restarting runtime # %cd /content/gpt-2 # + [markdown] id="3L96aP_5Xkv6" colab_type="text" # ## Finetuning # # First, a model must be downloaded that will then be finetuned. This notebook uses the 345M parameter model but a smaller 117M parameter model is also available. # # Steps: # 1. Download the model # 2. Run the training script `train.py` for approximately 70 steps on the JRE dataset (probably need to adjust values for different datasets) # 3. Run the interactive script `interactive_conditional_samples.py` to test the conversational functionality of the model. # + id="P-l4vFusbkzS" colab_type="code" colab={} # Download the 345 million parameter model # !python3 download_model.py 345M # + id="Qf2m80RLcYGS" colab_type="code" colab={} # Run the training script on the JRE dataset (approx. 70 steps yields good results) go to "Runtime > Interupt execution" to stop training. # !PYTHONPATH=src ./train.py --dataset jre_dataset.txt --model_name 345M --sample_every 10 --learning_rate 0.0001 --restore_from fresh # + id="iFImXoKrhBoA" colab_type="code" colab={} # Run the interactive script on the retrained model # !PYTHONPATH=src ./src/interactive_conditional_samples.py --run_name run1 # + [markdown] id="5zAUfqH3Yuxz" colab_type="text" # ## Saving the model # # Once the model is suitably finetuned, save it to your gdrive. This is done by first mounting your google drive and then copying the checkpoint/run1 folder into a folder `/checkpoints` in your google drive. 
From here, the folder can be zipped and downloaded onto your local machine. # + id="Xwty9bEtQizM" colab_type="code" colab={} # Once you have tested the retrained model and are happy with the results, save the model to your google drive from google.colab import drive drive.mount('/content/gdrive') # %cp -r ./checkpoint/run1/ /content/gdrive/My\ Drive/checkpoints # + [markdown] id="j5yE1Xa4Zm0w" colab_type="text" # # LICENSE # # MIT License # # Copyright (c) 2020 # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import datetime as dt d1 = dt.datetime(2019, 2, 25, 10, 50, tzinfo=dt.timezone.utc) d2 = dt.datetime(2019, 2, 26, 11, 20, tzinfo=dt.timezone.utc) d2 - d1 td = d2 - d1 td.total_seconds() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Evaluate all embeddings generated with parameter_search import os import pandas as pd import numpy as np from pathlib import Path import datetime import pickle import seaborn as sns import matplotlib.pyplot as plt from evaluation_functions import nn, sil # + wd = os.getcwd() DF = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed", "df_focal_reduced.pkl") OUT_COORDS = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "interim", "parameter_search", "grid_search") OUT_EVALS = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "interim", "parameter_search", "grid_search_evals") FIGURES = os.path.join(os.path.sep, str(Path(wd).parents[0]), "reports", "figures", "grid_search") if (not os.path.isdir(OUT_COORDS)): print("Folder with UMAP coordinates doesn't exist: Missing ", OUT_COORDS) for d in [OUT_EVALS, FIGURES]: if (not os.path.isdir(d)): os.mkdir(d) # + spec_df = pd.read_pickle(DF) print(spec_df.shape) labels = spec_df.call_lable.values labeltypes = sorted(list(set(labels))) # + #outname = os.path.join(os.path.sep, OUT_EVALS, 'eval_table_5.csv') #eval_table = pd.read_csv(outname, sep=";") #already_evaluated = [x+'.csv' for x in eval_table[params].astype(str).agg('_'.join, axis=1)] #not_evaluated = list(set(all_embedding_files) - set(already_evaluated)) # - params = ['preprocess_type', 'metric_type', 'duration_method','min_dist', 'spread', 
'n_neighbors', 'n_comps', 'input_type', 'denoised', 'n_mels', 'f_unit', 'bp_filtered', 'n_repeat'] k=5 all_embedding_files = list(sorted(os.listdir(OUT_COORDS))) print(len(all_embedding_files)) # Check if some evaluation already exists outname = os.path.join(os.path.sep, OUT_EVALS, 'eval_table_'+str(k)+'.csv') if (os.path.exists(outname)): eval_table = pd.read_csv(outname, sep=";") #already_evaluated = [x+'.csv' for x in eval_table[params].astype(str).agg('_'.join, axis=1)] already_evaluated = list(eval_table['filename']) not_evaluated = list(set(all_embedding_files) - set(already_evaluated)) print(len(not_evaluated)) if (len(not_evaluated)>0): old_eval_table = eval_table.copy() all_embedding_files = not_evaluated if len(all_embedding_files)>0: eval_colnames = params+ ['S_total'] + ['S_'+x for x in labeltypes] + ['Snorm_total'] + ['Snorm_'+x for x in labeltypes] + ['SIL_total'] + ['SIL_'+x for x in labeltypes]+['knncc_'+x for x in labeltypes]+['knn-acc_'+x for x in labeltypes] #print(eval_colnames) eval_table = np.zeros((len(all_embedding_files), len(eval_colnames))) eval_table = pd.DataFrame(eval_table, columns=eval_colnames) for i,embedding_file in enumerate(all_embedding_files): embedding = np.loadtxt(os.path.join(os.path.sep, OUT_COORDS, embedding_file),delimiter=";") embedding_params_string = embedding_file.replace('.csv', '') embedding_params_list = embedding_params_string.split('_') nn_stats = nn(embedding, labels, k=k) sil_stats = sil(embedding, labels) eval_vector = embedding_params_list + [nn_stats.get_S()] + list(nn_stats.get_ownclass_S()) + [nn_stats.get_Snorm()] + list(nn_stats.get_ownclass_Snorm()) + [sil_stats.get_avrg_score()] + list(sil_stats.get_score_per_class()) + list(nn_stats.knn_cc()) + list(nn_stats.knn_accuracy()) eval_table.loc[i,:] = eval_vector eval_table['filename'] = all_embedding_files eval_table = pd.concat([old_eval_table, eval_table]) eval_table eval_table['knncc_total'] = eval_table[['knncc_'+x for x in labeltypes]].mean(axis=1) eval_table['knn-acc_total'] = eval_table[['knn-acc_'+x for x in labeltypes]].mean(axis=1) outname = os.path.join(os.path.sep, OUT_EVALS, 'eval_table_'+str(k)+'.csv') print(outname) eval_table.to_csv(outname, sep=";", index=False) # ## Plot results k=5 outname = os.path.join(os.path.sep, OUT_EVALS, 'eval_table_'+str(k)+'.csv') eval_table = pd.read_csv(outname, sep=";") eval_table["min_dist"] = pd.to_numeric(eval_table["min_dist"]) eval_table["n_neighbors"] = pd.to_numeric(eval_table["n_neighbors"]) eval_table["spread"] = pd.to_numeric(eval_table["spread"]) eval_table["n_comps"] = pd.to_numeric(eval_table["n_comps"]) eval_table["n_mels"] = pd.to_numeric(eval_table["n_mels"]) eval_table["n_repeat"] = pd.to_numeric(eval_table["n_repeat"]) eval_table print(np.quantile(eval_table.S_total, 0.01)) print(np.quantile(eval_table.S_total, 0.99)) duration_method = eval_table.duration_method.copy() # + dur_dict = {'overlap-only': 'overlap', 'timeshift-overlap': 'tshift-overlap', 'timeshift-pad': 'tshift-pad', 'pairwise-pad': 'pw-pad'} dur_dict = {'overlap': 'ovl', 'pad': 'pad', 'overlap-only': 'ovl-o', 'timeshift-overlap': 'ts-ovl', 'timeshift-pad': 'ts-pad', 'stretched': 'str', 'pairwise-pad': 'pw-pad'} duration_method = [dur_dict[x] if x in dur_dict.keys() else x for x in duration_method] eval_table['duration_method'] = duration_method # + metric_dict = {'correlation': 'corr', 'cosine': 'cosi', 'euclidean': 'euclid', 'manhattan': 'manh'} metric_method = [metric_dict[x] if x in metric_dict.keys() else x for x in eval_table['metric_type']] 
eval_table['metric_type'] = metric_method # + # Remove invalid spread #eval_table = eval_table.loc[eval_table.spread!=0.1,:] # Remove invalid input_type #eval_table = eval_table.loc[eval_table.input_type!='mfccs',:] #eval_table['input_type'] = [x if x!='zmfccs' else 'mfccs' for x in eval_table.input_type] # - calltypes = sorted(list(set(spec_df.call_lable))) pal = sns.color_palette("Set2", n_colors=len(calltypes)) params = ['preprocess_type', 'metric_type', 'duration_method','min_dist', 'spread', 'n_neighbors', 'n_comps', 'input_type','denoised', 'n_mels', 'f_unit', 'bp_filtered', 'n_repeat'] p_default = dict(zip(params[:-1], ['zs', 'euclid', 'pad', 0.0, 1.0, 15, 3, 'melspecs', 'no', 40, 'dB', 'no'])) # + param = 'denoised' print(param) outvar="knn-acc_total" df = eval_table boxplot = df[[param]+[outvar]].boxplot(by=param) plt.ylim(30,70) # - print(np.min(eval_table.S_total)) print(np.max(eval_table.S_total)) print(np.min(eval_table.SIL_total)) print(np.max(eval_table.SIL_total)) print(np.min(eval_table.knncc_total)) print(np.max(eval_table.knncc_total)) y_lower_dict = {"SIL":-0.65, "SIL_total": -0.3, "S":0, "S_total": 30, "knncc":0, "knncc_total": 10, "knn-acc": 0, "knn-acc_total": 30} y_upper_dict = {"SIL":0.65, "SIL_total": 0.3, "S":100, "S_total": 70, "knncc":100, "knncc_total": 45, "knn-acc": 100, "knn-acc_total": 70} # + # BOXPLOTS for outvar in ['S_total', 'SIL_total']: for param in params[:-1]: other_params = set(params).difference([param, 'n_repeat']) boxplot = df[[param]+[outvar]].boxplot(by=param) plt.ylim(y_lower_dict[outvar], y_upper_dict[outvar]) plt.suptitle('') plt.title('') plt.xlabel(param) plt.savefig(os.path.join(os.path.sep, FIGURES,'box_'+outvar+'_'+param+'.jpg')) plt.close() # + # LINE PLOTS for out_v in ["SIL", "S", "knncc"]: outvars = [out_v+'_'+x for x in calltypes] #outvars = [out_v+'_total']+outvars for param in params[:-1]: other_params = set(params).difference([param, 'n_repeat']) df = eval_table #print(df.shape) means = df[[param, out_v+'_total']] df = df[[param]+outvars] melted = pd.melt(df, id_vars=param, value_vars=outvars) melted = melted.sort_values(by=param) sns.lineplot(x=param, y="value", hue="variable", data=melted, palette="Set2", hue_order=outvars, err_style='band') sns.lineplot(x=param, y=out_v+'_total', data=means, color='black') plt.ylabel(out_v) plt.ylim(y_lower_dict[out_v], y_upper_dict[out_v]) lg = plt.legend(bbox_to_anchor=(1.05,1),loc=2, borderaxespad=0.) plt.savefig(os.path.join(os.path.sep, FIGURES,'line_'+out_v+'_'+param+'.jpg'), bbox_extra_artists=(lg,), bbox_inches='tight') plt.close() # + # Lineplots with error bars for out_v in ["SIL", "S", "knncc"]: outvars = [out_v+'_'+x for x in calltypes] color_dict = dict(zip(outvars, pal)) for param in params[:-1]: other_params = set(params).difference([param, 'n_repeat']) df = eval_table #print(df.shape) means = df[[param, out_v+'_total']] df = df[[param]+outvars] levels = sorted(list(set(df[param]))) mean_df = df.groupby([param]).mean() std_df = df.groupby([param]).std() fig, ax = plt.subplots(figsize=(7, 4)) for outvar in outvars: y = mean_df[outvar].values yerr = std_df[outvar].values ax.errorbar(levels, y, yerr=yerr,color=color_dict[outvar]) # linestype=ls ax.set_ylim(y_lower_dict[out_v], y_upper_dict[out_v]) ax.set_title(param) plt.savefig(os.path.join(os.path.sep, FIGURES,'err_'+out_v+'_'+param+'.jpg')) plt.close() # + # Lineplots with error bars with Mean (?) 
for out_v in ["SIL", "S", "knncc"]: #for out_v in ["S"]: outvars = [out_v+'_'+x for x in calltypes] color_dict = dict(zip(outvars, pal)) outvars = [out_v+'_total']+outvars color_dict[out_v+'_total'] = "black" #for param in params[0:2]: for param in params[:-1]: other_params = set(params).difference([param, 'n_repeat']) df = eval_table #print(df.shape) means = df[[param, out_v+'_total']] df = df[[param]+outvars] levels = sorted(list(set(df[param]))) mean_df = df.groupby([param]).mean() std_df = df.groupby([param]).std() fig, ax = plt.subplots(figsize=(7, 4)) for outvar in outvars: y = mean_df[outvar].values yerr = std_df[outvar].values ax.errorbar(levels, y, yerr=yerr,color=color_dict[outvar]) # linestype=ls ax.set_ylim(y_lower_dict[out_v], y_upper_dict[out_v]) ax.set_title(param) plt.savefig(os.path.join(os.path.sep, FIGURES,'errmean_'+out_v+'_'+param+'.jpg')) plt.close() # + #df = df.groupby([param]).mean() # - eval_table.head(5) import statsmodels.api as sm from statsmodels.formula.api import ols print(np.min(eval_table.S_total)) print(np.max(eval_table.S_total)) plot_params = ['preprocess_type', 'metric_type', 'duration_method', 'denoised', 'n_mels', 'f_unit', 'bp_filtered'] # + outvar = 'S_total' for i, param in enumerate(plot_params): print(param) other_params = set(params).difference([param, 'n_repeat']) df = eval_table #for p in other_params: # df = df.loc[df[p]==p_default[p],:] levels = sorted(list(set(df[param]))) print(levels) # STATS mod = ols(outvar+'~'+param, data=df).fit() aov_table = sm.stats.anova_lm(mod, typ=2) print(aov_table) #print(mod.summary()) # - # # S_total and SIL_total # + # BOXPLOTS plot_params = ['n_mels', 'f_unit', 'preprocess_type', 'duration_method', 'denoised', 'bp_filtered', 'metric_type'] title_dict = dict(zip(plot_params, ['Number of mels', 'Spectrogram intensity unit', 'Level of spectrogram preprocessing', 'Dealing with variable duration', 'Denoising: Median subtraction', 'Denoising: Bandpass filtering', 'UMAP: distance metric' ])) n_rows = 2 n_cols = 4 cs = list(range(0,n_cols)) * n_rows rs_list = [[x]*n_cols for x in list(range(0,n_rows))] rs = list() for x in rs_list: for y in x: rs.append(y) #fig.suptitle('Effect of different run parameters') #for outvar in ['knn-acc_total']: for outvar in ['S_total', 'SIL_total', 'knncc_total', 'knn-acc_total']: #for outvar in ['S_total']: fig, axes = plt.subplots(n_rows, n_cols, figsize=(21, 10), sharey=True) for i,param in enumerate(plot_params): #print(param) other_params = set(params).difference([param, 'n_repeat']) df = eval_table #for p in other_params: # df = df.loc[df[p]==p_default[p],:] #print(df.shape) #print(levels) levels = sorted(list(set(df[param]))) #color_dict = {level: "red" if level == p_default[param] else "white" for level in levels} ax = sns.boxplot(ax=axes[rs[i], cs[i]], data=df, x=param, y=outvar, order=levels)#, palette = color_dict) #ax.set_ylabel(fontsize=18) for b,box in enumerate(ax.artists): if b==levels.index(p_default[param]): col = "red" else: col="black" box.set_edgecolor(col) box.set_facecolor("white")# # iterate over whiskers and median lines for j in range(6*b,6*(b+1)): ax.lines[j].set_color(col) ax.set_ylim(y_lower_dict[outvar], y_upper_dict[outvar]) ax.set_title(title_dict[param], fontsize=20) ax.set_xlabel('', fontsize=18) ax.set_ylabel('', fontsize=18) ax.tick_params(labelsize=18) plt.tight_layout() plt.setp(axes[:, 0], ylabel=outvar) fig.delaxes(ax= axes[1,3]) #fig.delaxes(ax= axes[2,3]) plt.savefig(os.path.join(os.path.sep, FIGURES,'box_'+outvar+'_all_urghs.jpg'), 
bbox_inches='tight') #plt.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import pandas as pd import requests import json import plotly.figure_factory as ff import plotly.express as px import plotly.graph_objects as go import seaborn as sns import matplotlib.pyplot as plt import kaleido import plotly # ## Define search engine version # + tags=["parameters"] if "SEARCH_VERSION" not in locals(): SEARCH_VERSION = "new" else: print(SEARCH_VERSION) # - SEARCH_VERSION # ## Import lastest elastic data # + tags=[] df_test = pd.read_csv("./data/elastic_wars.csv", dtype=str) # - df_test.head(3) df_test.columns df_test.shape df_test.dtypes # ## Call last search functions (maybe add description) def find(key, dictionary): for k, v in dictionary.items(): if k == key: yield v elif isinstance(v, dict): for result in find(key, v): yield result elif isinstance(v, list): for d in v: for result in find(key, d): yield result def get_response(url, q): params["q"] = q response = requests.get(url, params=params) time_elapsed = response.elapsed.total_seconds() content = json.loads(response.content) total_results = content[0]["total_results"] total_pages = content[0]["total_pages"] siren_list = list(find("siren", content[0])) return total_results, total_pages, siren_list, time_elapsed url_elastic = "http://api.sirene.dataeng.etalab.studio/search" # Get first 20 results params = {"q": "", "page": "1", "per_page": "20"} ( df_test[f"results_elastic_{SEARCH_VERSION}"], df_test[f"pages_elastic_{SEARCH_VERSION}"], df_test[f"siren_elastic_{SEARCH_VERSION}"], df_test[f"resp_time_elastic_{SEARCH_VERSION}"], ) = ("", "", "", "") df_test.columns df_test.head(3) for index, row in df_test.iterrows(): ( df_test[f"results_elastic_{SEARCH_VERSION}"][index], df_test[f"pages_elastic_{SEARCH_VERSION}"][index], df_test[f"siren_elastic_{SEARCH_VERSION}"][index], df_test[f"resp_time_elastic_{SEARCH_VERSION}"][index], ) = get_response(url_elastic, row["terms"]) df_test elastic_columns = [ col for col in df_test.columns if "elastic" in col and ("result" in col or "pages" in col or "resp_time" in col) ] elastic_columns for col in elastic_columns: df_test[col] = df_test[col].astype("float64") df_test.dtypes df_test.describe() df_test.describe().to_csv( f"./output/describe/describe_{SEARCH_VERSION}.csv", header=True, index=True ) # ## Ranks df_test[f"rank_elastic_{SEARCH_VERSION}"] = "" for ind, row in df_test.iterrows(): if str(row["siren"]) in row[f"siren_elastic_{SEARCH_VERSION}"]: df_test[f"rank_elastic_{SEARCH_VERSION}"][ind] = row[ f"siren_elastic_{SEARCH_VERSION}" ].index(str(row["siren"])) else: df_test[f"rank_elastic_{SEARCH_VERSION}"][ind] = -1 df_test[f"rank_elastic_{SEARCH_VERSION}"] = df_test[ f"rank_elastic_{SEARCH_VERSION}" ].astype("int32") # ## KPIs fig = px.histogram( df_test.sort_values(by=[f"rank_elastic_{SEARCH_VERSION}"]), x=f"rank_elastic_{SEARCH_VERSION}", color_discrete_sequence=["indianred"], title="Distribution Elasticsearch des rangs du bon résultat", ) fig.update_layout(bargap=0.5) fig.update_xaxes(type="category") fig.show() plotly.offline.plot(fig, filename=f"./output/plots/rank_{SEARCH_VERSION}.html") rank_columns = [col for col in df_test.columns if "rank_elastic" in col] rank_columns # + tags=[] fig = go.Figure() rank_dict = {} for rank in rank_columns: df_test[rank] = 
df_test[rank].astype("int32") rank_dict[rank] = df_test.sort_values(by=[rank])[rank] fig.add_trace( go.Histogram( histfunc="count", x=rank_dict[rank], name=rank, ) ) fig.update_layout( title_text="Fréquence des rangs des résultats de la recherche", # title of plot xaxis_title_text="Rang du résulat dans la page", # xaxis label yaxis_title_text="Nombre de requêtes", # yaxis label bargap=0.2, # gap between bars of adjacent location coordinates bargroupgap=0.1, # gap between bars of the same location coordinates ) fig.update_xaxes(type="category") fig.show() # fig.write_image(f"./output/plots/rank_{SEARCH_VERSION}.png") plotly.offline.plot(fig, filename=f"./output/plots/rank_war_{SEARCH_VERSION}.html") # - fig = px.pie( df_test, names=rank_columns[-1], hole=0.7, color=rank_columns[-1], title="Pourcentages des rangs du bon résultat dans la recherche elasticsearch", ) fig.update_traces(textposition="inside", textinfo="percent+label") fig.show() plotly.offline.plot(fig, filename=f"./output/plots/pie_{SEARCH_VERSION}.html") # ### Nombre maximale de requête # df_max = ( df_test[f"results_elastic_{SEARCH_VERSION}"].value_counts(normalize=True) * 100 ).reset_index() df_max[df_max["index"] == "10000.0"].to_csv( f"./output/describe/max_requetes_{SEARCH_VERSION}.csv", header=True, index=True ) df_max[df_max["index"] == "10000.0"] # ### Sauvegarder les dataframes df_test.to_csv(f"./data/elastic_wars_{SEARCH_VERSION}.csv", header=True, index=False) elastic_columns = [col for col in df_test.columns if "elastic" in col] columns_to_save = ["terms", "siren"] columns_to_save = columns_to_save + elastic_columns[-10:] columns_to_save df_test.to_csv( f"./data/elastic_wars.csv", header=True, index=False, columns=columns_to_save ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from common import * # !ls data paths = glob.glob('./data/moto_mask/*') paths[:3] import pyson.vision as pv img = cv2.imread(paths[5], cv2.IMREAD_UNCHANGED) # + # plt.figure(dpi=200) # plt.imshow(img[...,:3][:,:,::-1]) # plt.imshow(img[...,3], alpha=.3) # plt.show() # - mask = img[...,-1] pv.show(mask) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones([10, 10])) pv.show(mask) cnts, hiers = pv.find_contours(mask) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # # #

# # Welcome to QWorld's Bronze

# Bronze is our introductory tutorial on the basics of quantum computation and quantum programming. It is a large collection of [Jupyter notebooks](https://jupyter.org). # # Bronze can be used to organize 16-20 hour workshops or to design a one-semester course for second- or third-year university students. In Bronze, we focus on real numbers and avoid complex numbers to keep the tutorial simpler. Here is a complete list of our workshops using Bronze: QBronze. # # *If you are using Jupyter notebooks for the first time, you can check our very short Introduction for Notebooks.* # # **The open-source toolkits we are using** # - Programming language: Python # - Quantum programming libraries: Qiskit is the main library at the moment. We are extending Bronze to use ProjectQ, Cirq, and pyQuil. # - We use MathJax to display mathematical expressions in HTML files (e.g., exercises). # - We use the open-source interactive tool quantumgame to show quantum coin-flipping experiments.

# Proceed with the content page >>

    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: '''Python Interactive''' # language: python # name: eb2d7657-d636-40cf-996c-6c1457ec907c # --- import pandas as pd pd.DataFrame({"name": ["", ""], "age": [26, 27]}) pd.Series(['', '', '', '']) pd.Series([30, 40, 50], index=['Ankit', 'Rohit', 'Garima'], name='Friends') pd['name'] pd.name friend = pd.read_csv('Friend.csv') friend friend.shape friend.head() friend.head(1) friend.tail(1) fruits = pd.DataFrame({'Apples': [30], 'Bananas': [21]}) fruits # ## 2. # # Create a dataframe `fruit_sales` that matches the diagram below: # # ![](https://i.imgur.com/CHPn7ZF.png) fruit_sale = pd.DataFrame({'Apples': [35, 41], 'Bananas': [21, 34]}, index=['2017 Sales', '2018 Sales']) fruit_sale # ## 3. # # Create a variable `ingredients` with a Series that looks like: # # ``` # Flour 4 cups # Milk 1 cup # Eggs 2 large # Spam 1 can # Name: Dinner, dtype: object # ``` ingredients = pd.Series(['4 cups', '1 cup', '2 large', '1 can'], index=['Flour', 'Milk', 'Eggs', 'Spam'], name='Dinner') ingredients fruit_sale.to_csv('fruit_sale.csv') fruit_sale.Apples fruit_sale['Apples'] fruit_sale['Apples'][0] fruit_sale['Apples'][1] fruit_sale['Apples'][-1] fruit_sale.iloc[0] fruit_sale.iloc[0:1] fruit_sale.iloc[0:2] fruit_sale.iloc[1] fruit_sale.iloc[0:, 1] fruit_sale.loc['2018 Sales', 'Apples'] friends = pd.DataFrame({'ankit': [26, 27, 28], 'rohit': [31, 32, 33]}) friends friends.loc[0, 'ankit'] friends.loc[0:2, 'rohit'] friends.iloc[0:2] friends.iloc[0:2]['ankit'] friends.set_index('Title') friends.set_index('rohit') friends.set_index(['ankit', 'rohit']) friends.ankit == 26 friends.ankit >= 26 friends.ankit > 26 friends.loc[friends.ankit > 26] friends.loc[friends.ankit > 26, 'rohit'] friends.loc[(friends.ankit > 26) & (friends.ankit % 2 == 0)] friends.loc[friends.ankit.isin([27, 28])] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.6 64-bit # language: python # name: python3 # --- # # Análisis exploratorio de *items_ordered_2years* # En este cuadeerno se detalla el proceso de análisis de datos de los pedidos realizados # # Índice de Contenidos # 1. Importación de paquetes # 2. Carga de datos # 3. Análisis de los datos # # 3.1. Variables # # 3.2. Duplicados # # 3.3. Clientes # # 3.4. Ventas totales # # 3.5. Ventas por localización # # 4. Conclusiones # ## Importación de paquetes # Cargamos las librerías a usar. 
# + import pandas as pd import plotly.express as px import random seed = 124 random.seed(seed) # - # ## Carga de datos # Abrimos el fichero y lo cargamos para poder manejar la información df_items = pd.read_csv('../Data/items_ordered_2years.txt', sep='|', on_bad_lines='skip',parse_dates=['created_at']) df_items.sample(5, random_state=seed) # ## Análisis de los datos # En este apartado se analizarán todas las características de las variables print("Dimensiones de los datos:", df_items.shape) df_items.isna().sum() # Existen datos perdidos o no proporcionados de 3 variables distintas percent_missing = df_items.isnull().sum() * 100 / df_items.shape[0] df_missing_values = pd.DataFrame({ 'column_name': df_items.columns, 'percent_missing': percent_missing }) df_missing_values.sort_values(by='percent_missing',ascending=False,inplace=True) print(df_missing_values.to_string(index=False)) # Aunque haya datos perdidos, estos no representan una gran proporción respecto al dataset df_items.info() # Podemos observar que tenemos 1 variable de tipo fecha, 2 variables que son números enteros (el identificador de producto y la cantidad pedida), 3 variables son números decimales (precio base, precio de venta y porcentaje de descuento) y las demás variables que son clásificadas como tipo **object** también las podemos considerar de tipo *string* df_items.describe() # Al echar un vistazo por encima nos podemos dar cuenta de un par de cosas un tanto *extrañas*: # * El mínimo precio base es negativo # * El mínimo precio de venta es 0 # * El mínimo porcentaje de descuento es 1, por lo tanto no hay ningún pedido que no tenga aplicado un porcentaje de descuento df_items.describe(include=object) # Vemos que el número de pedido se repite y esto se debe a que por cada producto distinto que existen dentro de un pedido necesitamos un registro nuevo, y por eso se repide el identificador de pedido # ### Variables # Ahora realizaremos un análisis más específico de cada variable # #### *num_order* # Esta variable es el identificador de los pedidos y la podemos ver repetida ya que en un mismo pedido puede haber diferentes productos, y para cada producto distinto dentro de un mismo pedido se usan distintas entradas. orders = df_items['num_order'].nunique() print(f'Hay {orders} pedidos registrados') df_items['num_order'].value_counts() # El máximo de líneas de pedidos son 60, mientras que hay pedidos con tan solo una línea. # #### *item_id* # La variable *item_id* se usa para identificar el producto dentro del pedido, por lo tanto un mismo producto que este en pedidos distintos tendrá distintos *item_id* items = df_items['item_id'].nunique() print(f'Identificadores : {items}') print(f'Registros : {df_items.shape[0]}') # Como se ve no tenemos identificadores de items como registros del dataset lo que nos lleva a pensar que podría haber registros repetidos df_items['item_id'].value_counts() df_items[df_items['item_id'] == '642b9b87df5b13e91ce86962684c2613'] # Nuestras sospechas se confirman, ya que vemos que el mismo pedido tiene las tres mismas líneas. 
Por tanto, se trata de datos duplicados # #### *created_at* # Instante en el que se realizó el pedido df_items['created_at'].sample(1,random_state=seed) # El formato de la fecha y tiempo es *YYYY-MM-DD HH:MM:SS* # #### *product_id* # Identificador del producto products = df_items['product_id'].nunique() print(f'Hay {products} identificadores de productos') # Productos en más frecuentas en los pedidos df_items['product_id'].value_counts() # Los productos más comunes en los pedidos, no son exactamente los mismos que los que más cantidad se piden. # Productos más vendidos (cantidad) df_best_products = df_items.groupby("product_id",as_index=False).agg({"num_order":"count", "qty_ordered":"sum"}).sort_values("qty_ordered", ascending=False) df_best_products.head() # #### *qty_ordered* # Cantidad pedida de cierto producto para un cierto pedido df_q = df_items.groupby('num_order',as_index=False).agg(total_qty=('qty_ordered','sum'),n_items=('item_id','count')).sort_values('total_qty',ascending=False) df_q.head(10) df_q.tail(10) # En el resumen de la descripción del conjunto de datos, ya veíamos que los mínimos y máximos de las cantidades que se piden eran bastante dispares. Con esta tabla vemos como se reparten las cantidades pedidas respecto a los productos dentro del pedido. df_q['total_qty'].value_counts().head(10) df_q[df_q['total_qty'] > 10].shape[0] # Vemos que lo más común es que en los pedidos haya entre *2* y *10* unidades de productos. Aun así hay un número sustancia del pedidos que tienen cantidades superiores a las *10* unidades. # #### *base_cost* # Precio de coste de una unidad de producto sin IVA # Como vimos al principio esta columna tiene valores perdidos. Veamos si sería posible recuperarlos mirando *base_cost* del mismo producto en otros pedidos. products = df_items[df_items['base_cost'].isna()]['product_id'].drop_duplicates().to_list() print(f'Nº productos con precios nulos: {len(products)}') df_pr = df_items[(df_items['product_id'].isin(products)) & (df_items['base_cost'].notna())][['product_id','base_cost']] print(f"Nº productos que se puede recuperar base_cost: {df_pr['product_id'].nunique()}") df_pr.groupby('product_id',as_index=False).agg(n_cost=('base_cost','count'),mean_cost=('base_cost','mean')).sort_values('n_cost',ascending=False) # Parece ser que se puede recuperar bastantes valores, eso sí hay que decidir que valores imputar porque los productos varían de precio en los diferentes pedidos. Lo más extraño a mencionar es que el precio de coste medio de uno de los productos sea directamente un cero. neg_bcost = df_items[df_items['base_cost'] < 0].shape[0] print(f'Registros con base_cost negativo: {neg_bcost}') # #### *price* # Precio unitario de los productos. pr_zero = df_items[df_items['price'] == 0].shape[0] null_benefits = df_items[(df_items['price'] < df_items['base_cost'])][['product_id','base_cost','price','discount_percent']] print(f'Productos con precio cero: {pr_zero}') print(f'Líneas pedido sin beneficios: {null_benefits.shape[0]}') # Existen registros con precios nulos y también bastantes líneas de pedidos en los que nos se generan beneficios. Puede que estas últimas se traten de descuentos aplicados. null_benefits.corr() # Donde nos surge la gran duda es en que no hay correspondencia entre los costes, precios y descuentos de los productos. Por tanto, podemos intuir que los precios sobre los que se aplican los descuentos no son estos, sino otros, ya que nosotros tenemos los precios finales. 
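# Como esbozo mínimo de esa comprobación (suposición: *discount_percent* está expresado sobre 100 y el descuento se habría aplicado sobre *base_cost*; nada de esto está confirmado por los datos), podemos medir cuántas líneas serían coherentes con esa hipótesis:
chk = df_items[['base_cost', 'price', 'discount_percent']].dropna()
# líneas donde price coincide (±1 céntimo) con base_cost menos el descuento indicado
coherentes = (chk['price'] - chk['base_cost'] * (1 - chk['discount_percent'] / 100)).abs() < 0.01
print(f'Líneas coherentes con descuento sobre base_cost: {coherentes.mean():.2%}')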
# #### *discount_percent* # Porcentaje de descuento aplicado, pero no al precio con el que contamos nosotros. print(f"Líneas con descuento: {df_items[df_items['discount_percent'] > 0].shape[0]}") print(f'Lineas total: {df_items.shape[0]}') # Algo que llama la atención es que todos las líneas de pedidos tienen descuentos print(f"Cantidad de porc. desc.: {df_items['discount_percent'].nunique()}") df_items['discount_percent'].value_counts().sort_values(ascending=False).head(10) df_items['discount_percent'].value_counts().sort_values(ascending=False).tail(10) # Los porcentajes de descuento más comunes son por debajo de *25%*, mientras que los menos comunes oscilan entre el *30% y 50%*. # #### *customer_id* # El identificador del cliente que ha realizado el pedido clients = df_items['customer_id'].nunique() print(f'{clients} clientes han realizado pedidos') df_items['customer_id'].sample(5,random_state=seed) # Este campo es un *hash* con el fin de anonimizar los datos personales. # #### *city* y *zipcode* # Estas variables indican la ciudad en la cuál se han realizado el pedido print(f"Nº de ciudades: {df_items['city'].nunique()}") print(f"Nº de ciudades: {df_items['zipcode'].nunique()}") # Vimos antes que estas dos columnas tienen un número similar de registros nulos, lo que da a entender que puede que coincidan. df_items[(df_items.city.isna()) & (df_items.zipcode.isna())].shape[0] # Tenemos 2910 registros donde no tenemos ninguna información de la dirección, ni la ciudad ni el código postal. A pesar de que no son demasiados registros, es información que no podremos recuperar de ninguna manera, ya que los datos están anonimizados cities = df_items[(df_items['city'].notna()) & (df_items['city'].str.contains('alba'))]['city'] cities.value_counts() # Haciendo una búsqueda sencilla de cadenas, vemos que el campo no está normalizado y hay nombres de ciudades escritos de diferentes maneras. # Normalizamos los nombres de ciudades indexes = df_items['city'].notna().index df_items.loc[indexes, 'city'] = df_items.loc[indexes, 'city'].apply(lambda x: str(x).lower().strip()) print(f"Nº de ciudades: {df_items['city'].nunique()}") # Cuando normalizamos vemos que el número de ciudades disminuye. Aun así, faltarían muchos más métodos que aplicar para llegar a normalizar el campo. # **NOTA** Por otro lado, mientras que estabamos limpiando datos, nos dimos cuenta que hay valores númericos para ciudades y códigos postales imputados como nombres. df_city = df_items[(df_items['city'].notna()) & (df_items['city'].str.isnumeric())] print(f"Nº ciudad con dígitos: {df_city.shape[0]}") df_city[['num_order','city','zipcode']].sample(5,random_state=seed) df_zip = df_items[(df_items['zipcode'].notna()) & (df_items['zipcode'].notna())] df_zip = df_zip[(df_zip['city'].str.isnumeric()) & (~df_zip['zipcode'].str.isnumeric())] print(f"Nº códigos postales y ciudad invertidos: {df_zip.shape[0]}") df_zip[['city','zipcode']].sample(10,random_state=seed) # ### Duplicados # En apartados anteriores intuimos la presencia de registros duplicados, veamos a que se debe. 
print(f"Regitros duplicados: {df_items.duplicated().sum()}") df_items[df_items.duplicated()].head(5) # ### Clientes que más compran # Clientes que más pedidos hacen df_clients = df_items.groupby(['customer_id', 'num_order'],as_index=False).agg({'qty_ordered':'sum'}) # num_order se repite, por tanto no se puede agregar por el de primeras df_clients = df_clients.groupby(['customer_id'],as_index=False).agg({'num_order':'count', 'qty_ordered':'sum'}) df_clients = df_clients.sort_values('num_order',ascending=False) df_clients.head() # Clientes que han comprado una mayor cantidad de productos df_clientes = df_clients.sort_values('qty_ordered',ascending=False) df_clientes.head() # ### Ventas totales # En este apartado veremos las fechas en las cuáles se producen más ventas # Modificamos la columna para que solo muestre la fecha sin la hora df_items.loc[:, 'date'] = df_items['created_at'].dt.date df_items[['created_at','date']].sample(5,random_state=seed) # Agrupamos por fecha para saber la cantidad de productos y el número de pedidos realizados por cada día df_temp = df_items.groupby(['date', 'num_order']).agg({'qty_ordered':'sum'}).reset_index() # num_order se repite, por tanto no se puede agregar por el de primeras df_temp = df_temp.groupby(['date']).agg({'num_order':'count', 'qty_ordered':'sum'}).reset_index() df_temp.sample(5,random_state=seed) fig = px.area(df_temp, x="date", y="num_order", title='Pedidos realizados por día',) fig.show() # Vemos los picos de pedidos en los meses como Noviembre (BlackFriday) y picos en las rebajas tanto de invierno como de verano fig = px.area(df_temp, x="date", y="qty_ordered", title='Cantidad de productos pedidos por día') fig.show() # La cantidad de productos pedidos por día es muy similar en forma a la anterior gráfica, por lo que tienen cierta correlación. Aun así, nos debemos fijar en la escala del eje Y. df_temp.corr() # ### Ventas por localización # Puesto que hemos sacado las coordenadas de cada localización podemos mostrar visualmente donde se han producido más ventas df_temp = df_items.groupby(['city', 'num_order'],as_index=False).agg({'qty_ordered':'sum'}) # num_order se repite, por tanto no se puede agregar por el de primeras df_temp = df_temp.groupby(['city'],as_index=False).agg({'num_order':'count', 'qty_ordered':'sum'}) df_temp.sort_values('num_order',ascending=False).head(10) # Dado que no hemos limpiado aun los datos, los pedidos por localización no son correctos, tal y como podemos ver en la muestra. Algunas ciudades como Madrid se repiten varias veces porque están escritas de forma distinta. Aquí es donde hay que remarcar la importancia de trabajar los datos. # ## Conclusiones # A partir de este análisis hemos deducido: # # * Una misma localización puede estar representada de muchas diferentes maneras. # * Los datos que faltan son pocos. De los campos *base_cost*, *city* y *zipcode* que faltan podemos guiarnos de otras ventas para intentar recuperarlos. # * Hay productos donde el precio base es negativo. # * Existen productos donde el precio de venta es inferior al precio base. # * Todos los pedidos tienen aplicado algún tipo de descuento. # * Existe un pico de ventas en días especiales, sobre todo black friday. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # What is Categorical Data? # What is categorical data? 
# Since we are going to be working on categorical variables in this article, here is a quick refresher on the same with a couple of examples. Categorical variables are usually represented as ‘strings’ or ‘categories’ and are finite in number. Here are a few examples: # # 1. The city where a person lives: Delhi, Mumbai, Ahmedabad, Bangalore, etc. # 2. The department a person works in: Finance, Human resources, IT, Production. # 3. The highest degree a person has: High school, Diploma, Bachelors, Masters, PhD. # 4. The grades of a student: A+, A, B+, B, B- etc. # 5. In the above examples, the variables only have definite possible values. Further, we can see there are two kinds of categorical data # # # 1. Ordinal Data: The categories have an inherent order # 2. Nominal Data: The categories do not have an inherent order # # In Ordinal data, while encoding, one should retain the information regarding the order in which the category is provided. Like in the above example the highest degree a person possesses, gives vital information about his qualification. The degree is an important feature to decide whether a person is suitable for a post or not. # # While encoding Nominal data, we have to consider the presence or absence of a feature. In such a case, no notion of order is present. For example, the city a person lives in. For the data, it is important to retain where a person lives. Here, We do not have any order or sequence. It is equal if a person lives in Delhi or Bangalore. # ## Note:- # Important thing to remember is always determine what type of categorical variable is (ordinal or nominal). On the basis of that only we can encode that variable. # ## Encoding of categorical data # ### Types of encoding are:- # 1. Label Encoding or Ordinal Encoding # 2. One hot Encoding # 3. Dummy Encoding # 4. Effect Encoding # 5. Binary Encoding # 6. BaseN Encoding # 7. Hash Encoding # 8. Target Encoding # ### Label Encoding and Ordinal Encoding # We use this categorical data encoding technique when the categorical feature is ordinal. In this case, retaining the order is important. Hence encoding should reflect the sequence. # # In Label encoding, each label is converted into an integer value. We will create a variable that contains the categories representing the education qualification of a person. # # Package used for Label Encoding is category_encoder and sklearn.preprocessing.ordinalencoder and sklearn.preprocessing.labelencoder # ### Difference between label encoder and ordinal encoder # The only different is that LabelEncoder returned an array, while OrdinalEncoder returned each element inside an array of arrays. 
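# As a minimal sketch of that difference (assuming scikit-learn is installed; the example labels are illustrative only): LabelEncoder takes a 1-D vector of labels and returns a 1-D array, while OrdinalEncoder expects a 2-D feature matrix and returns a 2-D array with one encoded value per row.
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder
degrees = ['High school', 'Masters', 'Diploma']
print(LabelEncoder().fit_transform(degrees))                   # 1-D output, e.g. array([1, 2, 0])
print(OrdinalEncoder().fit_transform([[d] for d in degrees]))  # 2-D output, e.g. array([[1.], [2.], [0.]])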
pip install category_encoders # + #### Ordinal Encoding import category_encoders as ce import pandas as pd train_df=pd.DataFrame({'Degree':['High school','Masters','Diploma','Bachelors','Bachelors','Masters','Phd','High school','High school']}) # create object of Ordinalencoding encoder= ce.OrdinalEncoder(cols=['Degree'],return_df=True, mapping=[{'col':'Degree', 'mapping':{'None':0,'High school':1,'Diploma':2,'Bachelors':3,'Masters':4,'Phd':5}}]) #Original data train_df # - #fit and transform train data df_train_transformed = encoder.fit_transform(train_df) df_train_transformed # + # Or # Sklearn library also has ordinal encoding from sklearn.preprocessing import OrdinalEncoder oen = OrdinalEncoder() train_df['Degree_with_oridnal_encoder'] = oen.fit_transform(train_df.Degree.values.reshape(-1,1)) train_df # - # Ordinal encoding with manually set values degree_dict = {'High school': 0, 'Diploma': 1, 'Bachelors': 2, 'Masters': 3, 'Phd': 4} train_df['degree_ordinal_encoder_manual'] = train_df.Degree.map(degree_dict) train_df ### Label encoder from sklearn.preprocessing import LabelEncoder train_df['Degree_with_label_encoder'] = LabelEncoder().fit_transform(train_df.Degree) train_df # ### One Hot Encoding # We use this categorical data encoding technique when the features are nominal(do not have any order). In one hot encoding, for each level of a categorical feature, we create a new variable. Each category is mapped with a binary variable containing either 0 or 1. Here, 0 represents the absence, and 1 represents the presence of that category. # # These newly created binary features are known as Dummy variables. The number of dummy variables depends on the levels present in the categorical variable. This might sound complicated. Let us take an example to understand this better. Suppose we have a dataset with a category animal, having different animals like Dog, Cat, Sheep, Cow, Lion. Now we have to one-hot encode this data. # + import category_encoders as ce import pandas as pd data=pd.DataFrame({'City':[ 'Delhi','Mumbai','Hydrabad','Chennai','Bangalore','Delhi','Hydrabad','Bangalore','Delhi' ]}) #Create object for one-hot encoding encoder=ce.OneHotEncoder(cols='City',handle_unknown='return_nan',return_df=True,use_cat_names=True) #Original Data data # - #Fit and transform Data data_encoded = encoder.fit_transform(data) data_encoded # + # Or # Sklearn library also has onehotencoding from sklearn.preprocessing import OneHotEncoder ohc = OneHotEncoder() ohe = ohc.fit_transform(data.City.values.reshape(-1,1)).toarray() pde_onehot = pd.DataFrame(ohe, columns=['A','B','C','D','E']) pde_onehot # - # ### Dummy Encoding # Dummy coding scheme is similar to one-hot encoding. This categorical data encoding method transforms the categorical variable into a set of binary variables (also known as dummy variables). In the case of one-hot encoding, for N categories in a variable, it uses N binary variables. The dummy encoding is a small improvement over one-hot-encoding. Dummy encoding uses N-1 features to represent N labels/categories. # # To understand this better let’s see the image below. Here we are coding the same data using both one-hot encoding and dummy encoding techniques. While one-hot uses 3 variables to represent the data whereas dummy encoding uses 2 variables to code 3 categories. 
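# A quick sketch of the N versus N-1 column counts described above (assuming pandas; the three-city example is illustrative only):
import pandas as pd
cities = pd.Series(['Delhi', 'Mumbai', 'Bangalore'], name='City')
print(pd.get_dummies(cities).shape[1])                   # one-hot encoding: 3 columns for 3 categories
print(pd.get_dummies(cities, drop_first=True).shape[1])  # dummy encoding: 2 columns (first level dropped)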
# + import category_encoders as ce import pandas as pd data=pd.DataFrame({'City':['Delhi','Mumbai','Hyderabad','Chennai','Bangalore','Delhi','Hyderabad']}) #Original Data data # - # encode the data data_encoded=pd.get_dummies(data=data,drop_first=True) data_encoded # Here using drop_first argument, we are representing the first label Bangalore using 0. # ### Drawbacks of one hot encoding and dummy encoding # One hot encoder and dummy encoder are two powerful and effective encoding schemes. They are also very popular among the data scientists, But may not be as effective when- # # 1. A large number of levels are present in data. If there are multiple categories in a feature variable in such a case we need a similar number of dummy variables to encode the data. For example, a column with 30 different values will require 30 new variables for coding. # 2. If we have multiple categorical features in the dataset similar situation will occur and again we will end to have several binary features each representing the categorical feature and their multiple categories e.g a dataset having 10 or more categorical columns. # # In both the above cases, these two encoding schemes introduce sparsity in the dataset i.e several columns having 0s and a few of them having 1s. In other words, it creates multiple dummy features in the dataset without adding much information. # # Also, they might lead to a Dummy variable trap. It is a phenomenon where features are highly correlated. That means using the other variables, we can easily predict the value of a variable. # # Due to the massive increase in the dataset, coding slows down the learning of the model along with deteriorating the overall performance that ultimately makes the model computationally expensive. Further, while using tree-based models these encodings are not an optimum choice. # ### Effect Encoding: # This encoding technique is also known as Deviation Encoding or Sum Encoding. Effect encoding is almost similar to dummy encoding, with a little difference. In dummy coding, we use 0 and 1 to represent the data but in effect encoding, we use three values i.e. 1,0, and -1. # # The row containing only 0s in dummy encoding is encoded as -1 in effect encoding. In the dummy encoding example, the city Bangalore at index 4 was encoded as 0000. Whereas in effect encoding it is represented by -1-1-1-1. 
# + import category_encoders as ce import pandas as pd data=pd.DataFrame({'City':['Delhi','Mumbai','Hyderabad','Chennai','Bangalore','Delhi','Hyderabad']}) encoder=ce.sum_coding.SumEncoder(cols='City',verbose=False,) #Original Data data # - encoder.fit_transform(data) # ![What_is_encoding%20_important%20.jpg](attachment:What_is_encoding%20_important%20.jpg) # Qualitative data analysis # Plot of categorical data # Chi Square test # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Creating spectogram for each audio file # load python libraries # %matplotlib inline import numpy as np import pandas as pd import random from scipy.io import wavfile from sklearn.preprocessing import scale import librosa.display import librosa import matplotlib.pyplot as plt import os # ## Function defination def save_melspectrogram(directory_path, file_name, dataset_split, label, sampling_rate=44100): """ Will save spectogram into current directory""" path_to_file = os.path.join(directory_path, file_name) data, sr = librosa.load(path_to_file, sr=sampling_rate, mono=True) data = scale(data) melspec = librosa.feature.melspectrogram(y=data, sr=sr, n_mels=128) # Convert to log scale (dB) using the peak power (max) as reference # per suggestion from Librbosa: https://librosa.github.io/librosa/generated/librosa.feature.melspectrogram.html log_melspec = librosa.power_to_db(melspec, ref=np.max) librosa.display.specshow(log_melspec, sr=sr) # create saving directory directory = './melspectrograms/{dataset}/{label}'.format(dataset=dataset_split, label=label) if not os.path.exists(directory): os.makedirs(directory) plt.savefig(directory + '/' + file_name.strip('.wav') + '.png') def _train_test_split(filenames, train_pct): """Create train and test splits for ESC-50 data""" random.seed(2018) n_files = len(filenames) n_train = int(n_files*train_pct) train = np.random.choice(n_files, n_train, replace=False) # split on training indices training_idx = np.isin(range(n_files), train) training_set = np.array(filenames)[training_idx] testing_set = np.array(filenames)[~training_idx] return {'training': training_set, 'testing': testing_set} # + dataset_dir = 'D:/ESC-50-master' # Load meta data for audio files meta_data = pd.read_csv('cough.csv') labs = meta_data.is_cough unique_labels = labs.unique() meta_data.head() # - # ## Main program for label in unique_labels: print ("Proccesing {} audio files".format(label)) current_label_meta_data = meta_data[meta_data.is_cough == label] datasets = _train_test_split(current_label_meta_data.filename, train_pct=0.8) for dataset_split, audio_files in datasets.items(): for filename in audio_files: directory_path = dataset_dir + '/audio/' save_melspectrogram(directory_path, filename, dataset_split, label, sampling_rate=44100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="11BphXo9mU7D" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.cluster import KMeans from sklearn import neighbors from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # + colab={"base_uri": "https://localhost:8080/", "height": 204} 
id="Edq3iu9Cmrgk" outputId="d23bf301-b1cc-4d67-9d7f-00a3ee8b3688" df = pd.read_csv("/content/drive/MyDrive/Capgemini/electronics.csv") df.head() # + colab={"base_uri": "https://localhost:8080/"} id="Uh5C732Hn8Y2" outputId="39cd4434-e54e-4d0d-d989-5d33e5aaf10f" df.shape # + colab={"base_uri": "https://localhost:8080/"} id="W_7cNA-2qQN4" outputId="cc335fce-6a53-4c7f-dbdd-1a168fed9088" df.info() # + colab={"base_uri": "https://localhost:8080/"} id="qph3t8JmnolJ" outputId="92dd5c59-ea8a-48d3-e9ab-90771111515a" df.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="-W14Z94RozgO" outputId="7db2f10e-01ec-4fd0-cd78-4cade84fb868" df['brand'].unique() # + id="GGFi-b8Dp2XO" df.drop(['user_attr'],axis = 'columns' ,inplace=True) # + id="0qfnCEjYrJHt" df["brand"].fillna( method ='ffill', inplace = True) # + colab={"base_uri": "https://localhost:8080/"} id="kjbG_N60qziD" outputId="2f7c7aed-2a2f-46d9-8657-a02f55613462" df.info() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="uoMhxGTwq7Sv" outputId="6860bac5-053d-4f15-fd1d-da6b9bb33fd1" df.corr() # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="Yyegry3sr3cs" outputId="0f5e7f87-fc8f-46cd-e1ab-bbc39e21d673" sns.countplot(x ='rating', data = df) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 390} id="Vb-WK5Kvr_Q6" outputId="5e550238-3f32-4cc8-9cf7-4e4c1b4e22ed" rating_count = pd.DataFrame(df.groupby('item_id')['rating'].count()) df1 = rating_count.sort_values('rating', ascending=False).head(10) df1 # + [markdown] id="Cp6yPvqeui5t" # #These are the top 10 highest rated items. # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="3RU4wRVbuYhL" outputId="566f42b6-119c-4e84-e265-22d096b313ae" ax = df1.plot.bar(stacked=True) # + [markdown] id="blRiFa6-848e" # #Recommendation based on Brand # + colab={"base_uri": "https://localhost:8080/", "height": 675} id="GaG0yIqLx3zQ" outputId="080d0d75-e7de-4008-f61d-8664d4fb826c" most_items = df.groupby('brand')['item_id'].count().reset_index().sort_values('item_id', ascending=False).head(10).set_index('brand') plt.figure(figsize=(15,10)) ax = sns.barplot(most_items['item_id'], most_items.index, palette='inferno') ax.set_title("Top 10 most rated Brands") ax.set_xlabel("Total number of items") totals = [] for i in ax.patches: totals.append(i.get_width()) total = sum(totals) for i in ax.patches: ax.text(i.get_width()+.2, i.get_y()+.2,str(round(i.get_width())), fontsize=15,color='black') # + [markdown] id="ppk1KNuO-MCT" # #Recommendations based on correlations # + colab={"base_uri": "https://localhost:8080/", "height": 390} id="ivLVBY8v9e8N" outputId="5cb36bd7-f890-43bd-9ab3-4891fe35b1ce" average_rating = pd.DataFrame(df.groupby('item_id')['rating'].mean()) average_rating['ratingCount'] = pd.DataFrame(df.groupby('item_id')['rating'].count()) df2 = average_rating.sort_values('ratingCount',ascending=False).head(10) df2 # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="aIs8cgie9-9I" outputId="e5e22573-e0f3-4925-ebd5-bd0c2f08ddcd" ax = df2.plot.bar(stacked=True) # + [markdown] id="WFXxHJMvDIoA" # # Category based Recommendation # + colab={"base_uri": "https://localhost:8080/", "height": 675} id="5F8nJYyzHb0b" outputId="583c4965-37ce-4fc3-c207-9164e5ace7c5" most_items1 = df.groupby('category')['item_id'].count().reset_index().sort_values('item_id', ascending=False).head(10).set_index('category') plt.figure(figsize=(15,10)) ax = sns.barplot(most_items1['item_id'], most_items1.index, palette='inferno') ax.set_title("Top 
10 most rated Category") ax.set_xlabel("Total number of items") totals = [] for i in ax.patches: totals.append(i.get_width()) total = sum(totals) for i in ax.patches: ax.text(i.get_width()+.2, i.get_y()+.2,str(round(i.get_width())), fontsize=15,color='black') # + colab={"base_uri": "https://localhost:8080/", "height": 359} id="QCdH83qVIZu4" outputId="3bf81bb8-8e70-4145-957b-f6bf7239a4e3" combine_item_rating = df.copy() columns = ['timestamp', 'model_attr','brand', 'year', 'split'] combine_item_rating = combine_item_rating.drop(columns, axis=1) combine_item_rating.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 359} id="MY4F04KdJw4c" outputId="7ba4a099-51e6-40fb-b0ab-fb9ac04f70a5" combine_item_rating = combine_item_rating.dropna(axis = 0, subset = ['category']) item_ratingCount = (combine_item_rating.groupby(by = ['category'])['rating'].count().reset_index().rename(columns = {'rating': 'totalRatingCount'})[['category', 'totalRatingCount']]) item_ratingCount.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="A44HbO0CKZOF" outputId="74a6c849-75d0-4980-cbc2-8ea79b1d3497" rating_with_totalRatingCount = combine_item_rating.merge(item_ratingCount, left_on = 'category', right_on = 'category', how = 'left') rating_with_totalRatingCount.head() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="9wMkQIHdOxLc" outputId="b0e0b8ba-24af-42e5-a3b3-6a2117f0d3f8" df3 = rating_with_totalRatingCount.join(df, lsuffix="DROP").filter(regex="^(?!.*DROP)") df3.head() # + [markdown] id="8oVDq9pZTJOl" # #Recommendation Using **Nearst Neighbors** # + colab={"base_uri": "https://localhost:8080/"} id="SauW8x6KPdJP" outputId="372a923c-8361-41ed-a69a-f1898af7eaa1" from scipy.sparse import csr_matrix from sklearn.neighbors import NearestNeighbors user_rating = df3.drop_duplicates(['user_id','category']) user_rating_pivot = user_rating.pivot(index = 'category', columns = 'user_id', values = 'rating').fillna(0) user_rating_matrix = csr_matrix(user_rating_pivot.values) model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute') model_knn.fit(user_rating_matrix) # + colab={"base_uri": "https://localhost:8080/"} id="yj7C6UNzRoZb" outputId="e9d58644-d2ac-4c42-ad4a-1994139c9cc8" query_index = np.random.choice(user_rating_pivot.shape[0]) print(query_index) distances, indices = model_knn.kneighbors(user_rating_pivot.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 5) # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="L-QBgwyOSCjW" outputId="a7da8b67-a07c-4792-9780-03e69e1fd323" user_rating_pivot.index[query_index] # + colab={"base_uri": "https://localhost:8080/"} id="y-rzpzWxSFhK" outputId="6eaf93b7-a9de-4dab-8113-7e5ca5fd6886" for i in range(0, len(distances.flatten())): if i == 0: print('Recommendations for who purchased {0}:\n'.format(user_rating_pivot.index[query_index])) else: print('{0}: {1}, with Score of {2}:'.format(i, user_rating_pivot.index[indices.flatten()[i]], distances.flatten()[i])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import pandas as pd import lux import numpy as np import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder df = pd.read_csv("dataset.csv").fillna("0").sort_values("Disease").reset_index(drop=True) cat_col = df.select_dtypes("object").columns df = df[cat_col].astype("category") 
df.info() df[cat_col].nunique().reset_index(name='cardinality') data_encoder = LabelEncoder() sym = pd.read_csv("symptom_severity.csv").Symptom sym = list(sym) + ["0"] data_encoder = data_encoder.fit(sym) len(data_encoder.classes_) for col in cat_col: df[col] = df[col].str.replace(" ", "") #df.to_csv("clean_dataset.csv", index=False) for col in cat_col[1:]: df[col] = data_encoder.transform(df[col]) df.head() training_index = [] for x in df.index: if x % 120 == 0 and x != 0: for y in np.random.choice(df[x-120:x].index, size=(24)): training_index.append(y) len(training_index) training_index[0:10] training_data = df.drop(training_index) training_data.tail() test_data = df.iloc[training_index] test_data.head() encoder = LabelEncoder() encoder = encoder.fit(df.Disease) encoder.classes_ train_y = encoder.transform(training_data["Disease"]) train_x = training_data.drop("Disease", axis=1) train_x.shape test_y = encoder.transform(test_data["Disease"]) test_x = test_data.drop("Disease", axis=1) test_x.shape import tensorflow as tf physical_devices = tf.config.list_physical_devices('GPU') tf.config.experimental.set_memory_growth(physical_devices[0], enable=True) #np.reshape(train_x.values, [4059, 1, 17]).view("% group_by(name, dataset_trajectory_type) %>% summarise(density = list(density(agg_perfs, bw = 0.05, from = 0, to = 1, n = 100))) %>% mutate(x = map(density, "x"), y = map(density, "y")) %>% unnest(x, y) %>% ungroup() densities_stacked <- densities %>% group_by(name, x) %>% arrange(dataset_trajectory_type) %>% mutate(norm = sum(y), y = y * y, y = y / sum(y) * norm, y = ifelse(is.na(y), 0, y)) %>% # normalise between 0 and 1 mutate(ymax = cumsum(y), ymin = lag(ymax, default = 0)) %>% ungroup() %>% group_by(name) %>% mutate(ymin = ymin / max(ymax), ymax = ymax / max(ymax)) %>% # normalise so that the maximal density is 1 ungroup() densities_violin <- densities_stacked %>% group_by(name, x) %>% mutate(ymax_violin = ymax - max(ymax)/2, ymin_violin = ymin - max(ymax)/2) %>% ungroup() ggplot(densities_violin) + geom_ribbon( aes( x, ymin = ymin_violin + as.numeric(factor(name,method_order)), ymax = ymax_violin + as.numeric(factor(name,method_order)), fill = dataset_trajectory_type, group = paste0(name, dataset_trajectory_type), ), position = "identity" ) + geom_point(aes(y = match(name,method_order), x = overall), data = agg_perfs_structdr, size = 7, shape = 45, color = "black") + scale_y_continuous(NULL, breaks = seq_along(method_order), labels = method_order, expand = c(0, 0)) + scale_x_continuous("Overall score", limits = c(0, 1), expand = c(0, 0)) + scale_alpha_manual(values = c(`TRUE` = 1, `FALSE` = 0.2)) + coord_flip() + theme_minimal() + theme(axis.text.x = element_text(angle = 45, hjust = 1), legend.position = "top", legend.justification = "center") + guides(fill = guide_legend(nrow = 1), alpha = FALSE) ggsave('./figures/dynbenchmark.overall.violin.pdf',width=12,height=4) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: mmc # language: python # name: mmc # --- # + # hide pakcage non critical warnings import warnings warnings.filterwarnings('ignore') from IPython.core.display import display, HTML display(HTML("")) # %load_ext autoreload # %autoreload 2 # %matplotlib inline # + import matplotlib.pyplot as plt import scipy.io as sio import torch import torch.nn as nn import torch.nn.functional as F import torch.nn.init as init from custom_data import DCCPT_data from config 
import cfg, get_data_dir, get_output_dir, AverageMeter, remove_files_in_dir from convSDAE import convSDAE from tensorboard_logger import Logger import os import random import numpy as np import data_params as dp # - import devkit.api as dk net = convSDAE(dim=[1, 50, 50, 50, 10], output_padding=[0, 1, 0], numpen=4, dropout=0.2, slope=0) net import torch.optim as optim import torch.optim.lr_scheduler as lr_scheduler lr =0.0001 numlayers = 4 lr = 10 maxepoch = 2 stepsize = 10 for par in net.base[numlayers-1].parameters(): par.requires_grad = True for par in net.bbase[numlayers-1].parameters(): par.requires_grad = True for m in net.bbase[numlayers-1].modules(): if isinstance(m, nn.BatchNorm2d): m.training = True # setting up optimizer - the bias params should have twice the learning rate w.r.t. weights params bias_params = filter(lambda x: ('bias' in x[0]) and (x[1].requires_grad), net.named_parameters()) bias_params = list(map(lambda x: x[1], bias_params)) nonbias_params = filter(lambda x: ('bias' not in x[0]) and (x[1].requires_grad), net.named_parameters()) nonbias_params = list(map(lambda x: x[1], nonbias_params)) # + optimizer = optim.SGD([{'params': bias_params, 'lr': 2*lr}, {'params': nonbias_params}], lr=lr, momentum=0.9, weight_decay=0.0, nesterov=True) scheduler = lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.1) # - from torch.utils.tensorboard import SummaryWriter writer = SummaryWriter('fashion_mnist_experiment_1') datadir = get_data_dir("cmnist") trainset = DCCPT_data(root=datadir, train=True, h5=False) trainloader = torch.utils.data.DataLoader(trainset, batch_size=256, shuffle=True) dataiter = iter(trainloader) dataiter.next() images, labels = dataiter.next() images.shape torch.tensor(3) labels net(images, torch.tensor(1)) writer = SummaryWriter('fashion_mnist_experiment_1') writer.add_graph(net, (images, torch.tensor(4))) # writer.close() # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.1.0 # language: julia # name: julia-1.1 # --- # + using Plots # pyplot() plotly() using QSimulator # - # # Transmon Ramsey Oscillations # # Starting from a coherence between two energy eigenstates we expect oscilltions at the detuning between the energy levels. We'll work with natural units of GHz and ns for numerical stability reasons. We'll start with a single 3 level transmon in the lab frame with a 5 GHz qubit frequency and -200 MHz anharmonicity. # we create a specific QSystem q0 = DuffingTransmon("q0", 3, DuffingSpec(5, -0.2)) # we can ask for the Hamiltonian of an QSystem as a Matrix hamiltonian(q0) # a CompositeQSystems is a tensor product structure of QSystem's and is what all the solvers are built around cqs = CompositeQSystem([q0]); add_hamiltonian!(cqs, q0) hamiltonian(cqs) # evolve an initial superposition state for 1 ns times = range(0, stop=1, length=201) |> collect ψ0 = (1/sqrt(2)) * ComplexF64[1; 1; 0] ψs = unitary_state(cqs, times, ψ0); # plot the projection on to the initial state and we expect to see 5 GHz oscillations signal = Float64[real(ψ0'*ψ) for ψ in ψs] expected = 0.5 .+ 0.5*cos.(2π*5 * times) p = plot(times, signal, linewidth=2, label="simulated") plot!(p, times, expected, label="ideal") xlabel!(p, "Time (ns)") # # Rabi Oscillations # # Driving Rabi oscillations in the lab frame is a good example of a parametric time dependent Hamiltonian. 
The drive electric field couples to the transmon dipole or $X$ operator. # # ## Constant Drive qubit_freq = 5.0 q0 = DuffingTransmon("q0", 3, DuffingSpec(5, -0.2)) cqs = CompositeQSystem([q0]); add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, dipole_drive(q0, t -> 0.02*cos(2π*qubit_freq * t)), q0); ψ_init = ComplexF64[1; 0; 0] times = range(0, stop=100, length=101) |> collect ψs = unitary_state(cqs, times, ψ_init); p = plot(times, [abs2(s[1]) for s in ψs], label="Ground State Pop.") plot!(p, times, [abs2(s[2]) for s in ψs], label="Excited State Pop.") xlabel!(p, "Time (ns)") # ## Variable Amplitude Gaussian Pulse # + # write a helper function that returns the drive Hamiltonian at a particular point in time function gaussian(pulse_length, pulse_freq, t; cutoff=2.5) σ = pulse_length/2/cutoff pulse = exp(-0.5*((t-pulse_length/2)/σ)^2) pulse * cos(2π*pulse_freq * t) end function flat(pulse_freq, t) cos(2π*pulse_freq * t) end # + ψ0 = ComplexF64[1; 0; 0] states_flat = [] states_gaussian = [] amps = 0.1 * range(0, stop=1, length=51) pulse_length = 25.0 qubit_freq = 5.0 for amp = amps # first do flat pulse # three level transmon in the lab frame q0 = DuffingTransmon("q0", 3, DuffingSpec(5, -0.2)) cqs = CompositeQSystem([q0]); add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, dipole_drive(q0, t -> amp*flat(qubit_freq, t)), q0); ψs = unitary_state(cqs, [0, pulse_length], ψ0) push!(states_flat, ψs[end]) # now gaussian cqs = CompositeQSystem([q0]); add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, dipole_drive(q0, t -> amp*gaussian(pulse_length, qubit_freq, t)), q0); ψs = unitary_state(cqs, [0, pulse_length], ψ0) push!(states_gaussian, ψs[end]) end # - p1 = plot() for ct = 0:2 plot!(p1, amps*1e3, [abs2(s[ct+1]) for s in states_flat], label="$ct State Pop.") end xlabel!(p1, "Nutation Strength (MHz)") title!(p1, "Flat Pulse") p2 = plot() for ct = 0:2 plot!(p2, amps*1e3, [abs2(s[ct+1]) for s in states_gaussian], label="$ct State Pop.") end xlabel!(p2, "Peak Nutation Strength (MHz)") title!(p2, "Gaussian Pulse") plot(p1,p2, layout=(1,2), size=(800,400)) # # Two Qubit Gates - Parametric Gates in the Lab Frame # # $$ \mathcal{H}(t) = \omega_0 \hat{n}_0 + \Delta_0 \Pi_{2_0} + \omega_1(t)\hat{n}_1 + \Delta_1 \Pi_{2_1} + gX_0X_1$$ # # ## iSWAP # + # parameters from Blue Launch paper q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) freqs = 114:0.5:124 times = 0.0:5:600 dims = (3,3) basis = TensorProductBasis(dims) ψ₀₁ = TensorProductBasisState(basis, (0,1)) ψ₁₀ = TensorProductBasisState(basis, (1,0)) ψ₀ = vec(ψ₁₀) # start in 10 state pop₀₁ = [] pop₁₀ = [] amp = 0.323 for freq = freqs cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, 0.006*X([q0, q1]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*sin(2π*freq/1e3*t)), q1) ψs = unitary_state(cqs, times, ψ₀); push!(pop₀₁, [abs2(ψ[QSimulator.index(ψ₀₁)]) for ψ in ψs]) push!(pop₁₀, [abs2(ψ[QSimulator.index(ψ₁₀)]) for ψ in ψs]) end # - p1 = contour(freqs, times, cat(pop₀₁..., dims=2), fill=true) xlabel!(p1, "Frequency (MHz)") ylabel!(p1, "Time (ns)") p2 = contour(freqs, times, cat(pop₁₀..., dims=2), fill=true) xlabel!(p2, "Frequency (MHz)") plot(p1,p2, layout=(1,2), size=(600,300)) # + # Look more finely at a slice along time to show we are getting full contrast and look at lab frame jaggedness q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) # should get an iSWAP 
interaction at ≈ 122 MHz freq = 118.5/1e3 amp = 0.323 times = 0:200 dims = (3,3) basis = TensorProductBasis(dims) ψ₀₁ = TensorProductBasisState(basis, (0,1)) ψ₁₀ = TensorProductBasisState(basis, (1,0)) ψ₀ = vec(ψ₁₀) # start in 10 state pop₀₁ = [] pop₁₀ = [] cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, 0.006*X([q0, q1]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*sin(2π*freq*t)), q1) ψs = unitary_state(cqs, times, ψ₀); pop₀₁ = [abs2(ψ[QSimulator.index(ψ₀₁)]) for ψ in ψs] pop₁₀ = [abs2(ψ[QSimulator.index(ψ₁₀)]) for ψ in ψs]; # - p = plot(times, pop₀₁, label="01") plot!(p, times, pop₁₀, label="10") # # CZ02 # + # parameters from Blue Launch paper q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) freqs = 90:1:115 times = 0.0:10:300 dims = (3,3) basis = TensorProductBasis(dims) ψ₁₁ = TensorProductBasisState(basis, (1,1)) ψ₀₂ = TensorProductBasisState(basis, (0,2)) ψ₀ = vec(ψ₁₁) # start in 11 state pop₁₁ = [] pop₀₂ = [] amp = 0.245 for freq = freqs cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) add_hamiltonian!(cqs, 0.006*X([q0, q1]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*sin(2π*freq/1e3*t)), q1) ψs = unitary_state(cqs, times, ψ₀); push!(pop₁₁, [abs2(ψ[QSimulator.index(ψ₁₁)]) for ψ in ψs]) push!(pop₀₂, [abs2(ψ[QSimulator.index(ψ₀₂)]) for ψ in ψs]) end pop₁₁ = cat(pop₁₁..., dims=2) pop₀₂ = cat(pop₀₂..., dims=2); # - p1 = contour(freqs, times, pop₁₁, fill=true) xlabel!(p1, "Frequency (MHz)") ylabel!(p1, "Time (ns)") p2 = contour(freqs, times, pop₀₂, fill=true) xlabel!(p2, "Frequency (MHz)") plot(p1,p2, layout=(1,2), size=(600,300)) # # Two Qubit Gates - Parametric Gates in the Rotating Frame # # We can move into the doubly rotating frame. The dipole coupling becomes time dependent and we discard the flip-flip (flop-flop) terms in the Hamiltonian. 
# # $$ \mathcal{H}(t) = \Delta_0 \Pi_{2_0} + \omega_1(t)\hat{n}_1 + \Delta_1 \Pi_{2_1} - \omega_1(0)\hat{n}_1 + e^{i\delta t}\sigma_+\sigma_- + e^{-i\delta t}\sigma_-\sigma_+$$ # + q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) # should get an iSWAP interaction at ≈ 122 MHz freq = 118.5/1e3 amp = 0.323 times = 0:200 dims = (3,3) basis = TensorProductBasis(dims) ψ₀₁ = TensorProductBasisState(basis, (0,1)) ψ₁₀ = TensorProductBasisState(basis, (1,0)) ψ₀ = vec(ψ₁₀) # start in 10 state cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) # add rotating frame Hamiltonian shifts add_hamiltonian!(cqs, -spec(q0).frequency*number(q0), q0) q1_freq = hamiltonian(q1, 0.0)[2,2] add_hamiltonian!(cqs, -q1_freq*number(q1), q1) diff_freq = spec(q0).frequency - q1_freq add_hamiltonian!(cqs, t -> 0.006*.5*X_Y([q0, q1], [diff_freq*t, 0.0]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*sin(2π*freq*t)), q1) ψs = unitary_state(cqs, times, ψ₀); pop₀₁ = [abs2(ψ[QSimulator.index(ψ₀₁)]) for ψ in ψs] pop₁₀ = [abs2(ψ[QSimulator.index(ψ₁₀)]) for ψ in ψs]; # - p = plot(times, pop₀₁, label="01") plot!(p, times, pop₁₀, label="10") # + q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) # should get an CZ02 interaction at ≈ 115 MHz freq = 102.5/1e3 amp = 0.245 times = 0:0.5:200 dims = (3,3) basis = TensorProductBasis(dims) ψ₁₁ = TensorProductBasisState(basis, (1,1)) ψ₀₂ = TensorProductBasisState(basis, (0,2)) ψ₀ = vec(ψ₁₁) # start in 11 state cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) # add rotating frame Hamiltonian shifts add_hamiltonian!(cqs, -spec(q0).frequency*number(q0), q0) q1_freq = hamiltonian(q1, 0.0)[2,2] add_hamiltonian!(cqs, -q1_freq*number(q1), q1) diff_freq = spec(q0).frequency - q1_freq add_hamiltonian!(cqs, t -> 0.006*.5*X_Y([q0, q1], [diff_freq*t, 0.0]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*sin(2π*freq*t)), q1) ψs = unitary_state(cqs, times, ψ₀); pop₁₁ = [abs2(ψ[QSimulator.index(ψ₁₁)]) for ψ in ψs] pop₀₂ = [abs2(ψ[QSimulator.index(ψ₀₂)]) for ψ in ψs]; # - p = plot(times, pop₁₁, label="11") plot!(p, times, pop₀₂, label="02") # # Two Qubit Gates - Soft Shoulders # # Look at how soft shoulders distort the pulse shape. 
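# For reference, the "soft shoulder" drive defined in the cells below can be read as an
# error-function window multiplying the sine carrier; with the parameter names used there
# (`fwhm`, `t₁`, `t₂`, `σ`) the envelope is
#
# $$ A(t) = \tfrac{1}{2}\left[\mathrm{erf}\left(\frac{t - t_1}{\sigma}\right) - \mathrm{erf}\left(\frac{t - t_2}{\sigma}\right)\right], \qquad \sigma = \frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}}, $$
#
# so the drive ramps on around $t_1$ and off around $t_2$, with the edge sharpness set by the rise time.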
# # + using SpecialFunctions: erf q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) # should get an CZ02 interaction at ≈ 115 MHz freqs = 1e-3*(90:1:115) amp = 0.245 risetime = 50 times = (2*risetime):10:500 dims = (3,3) basis = TensorProductBasis(dims) ψ₁₁ = TensorProductBasisState(basis, (1,1)) ψ₀₂ = TensorProductBasisState(basis, (0,2)) ψ₀ = vec(ψ₁₁) # start in 11 state pop₁₁ = Float64[] pop₀₂ = Float64[] for tmax = times # erfsquared pulse parameters fwhm = 0.5 * risetime t₁ = fwhm t₂ = tmax - fwhm σ = 0.5 * fwhm / sqrt(2*log(2)) erf_squared(t) = 0.5 * (erf((t - t₁)/σ) - erf((t - t₂)/σ) ) for freq = freqs cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) # add rotating frame Hamiltonian shifts add_hamiltonian!(cqs, -spec(q0).frequency*number(q0), q0) q1_freq = hamiltonian(q1, 0.0)[2,2] add_hamiltonian!(cqs, -q1_freq*number(q1), q1) diff_freq = spec(q0).frequency - q1_freq add_hamiltonian!(cqs, t -> 0.006*.5*X_Y([q0, q1], [diff_freq*t, 0.0]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*erf_squared(t)*sin(2π*freq*t)), q1) ψ = unitary_state(cqs, tmax, ψ₀); push!(pop₁₁, abs2(ψ[QSimulator.index(ψ₁₁)])) push!(pop₀₂, abs2(ψ[QSimulator.index(ψ₀₂)])) end end pop₁₁ = reshape(pop₁₁, length(freqs), length(times)) pop₀₂ = reshape(pop₀₂, length(freqs), length(times)); # - p1 = contour(times, 1e3*freqs, pop₀₂, fill=true) ylabel!(p1, "Frequency (MHz)") xlabel!(p1, "Time (ns)") title!(p1, "Population 11") p2 = contour(times, 1e3*freqs, pop₀₂, fill=true) xlabel!(p2, "Time (ns)") title!(p2, "Population 02") plot(p1,p2, layout=(1,2), size=(800,400)) # # Pulse Amplitude Noise # # We can estimate the effect of fluctuations on the pulse amplitude with a Krauss map sum of unitaries weighted by a normal distribution. 
# # $$ \mathcal{S}(\rho) = \sum_k \lambda_k U_k\rho U_k^\dagger $$ # + # pick an operating point of the plots above freq = 103/1e3 amp = 0.245 tmax = 190 q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) risetime = 50 fwhm = 0.5 * risetime t₁ = fwhm t₂ = tmax - fwhm σ = 0.5 * fwhm / sqrt(2*log(2)) erf_squared(t) = 0.5 * (erf((t - t₁)/σ) - erf((t - t₂)/σ) ) cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) # add rotating frame Hamiltonian shifts add_hamiltonian!(cqs, -spec(q0).frequency*number(q0), q0) q1_freq = hamiltonian(q1, 0.0)[2,2] add_hamiltonian!(cqs, -q1_freq*number(q1), q1) diff_freq = spec(q0).frequency - q1_freq add_hamiltonian!(cqs, t -> 0.006*.5*X_Y([q0, q1], [diff_freq*t, 0.0]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*erf_squared(t)*sin(2π*freq*t)), q1) U = unitary_propagator(cqs, float(tmax)); # - # project out into the qubit space projector = [1,0]*[1, 0, 0]' + [0, 1]*[0, 1, 0]' projector = projector ⊗ projector U_proj = projector * U * projector' import QuantumInfo: liou, avgfidelity, kraus2liou import Cliffords: Z using Optim using LinearAlgebra: diagm Zrot = θ -> exp(-1im * θ * π * Z) CZ = diagm(0 => [1.0,1.0,1.0,-1.0]) res = optimize(zs -> 1 - avgfidelity(liou((Zrot(zs[1]) ⊗ Zrot(zs[2])) * U_proj), liou(CZ)), [0.0, 0.0]) # + # pick an operating point of the plots above freq = 103/1e3 amp = 0.245 tmax = 190 q0 = DuffingTransmon("q0", 3, DuffingSpec(3.94015, -0.1807)) q1 = PerturbativeTransmon("q1", 3, TransmonSpec(.172, 12.71, 3.69)) risetime = 50 fwhm = 0.5 * risetime t₁ = fwhm t₂ = tmax - fwhm σ = 0.5 * fwhm / sqrt(2*log(2)) erf_squared(t) = 0.5 * (erf((t - t₁)/σ) - erf((t - t₂)/σ) ) Us = [] amp_noises = range(-0.005, stop=0.005, length=101) for amp_noise = amp_noises cqs = CompositeQSystem([q0, q1]) add_hamiltonian!(cqs, q0) # add rotating frame Hamiltonian shifts add_hamiltonian!(cqs, -spec(q0).frequency*number(q0), q0) q1_freq = hamiltonian(q1, 0.0)[2,2] add_hamiltonian!(cqs, -q1_freq*number(q1), q1) diff_freq = spec(q0).frequency - q1_freq add_hamiltonian!(cqs, t -> 0.006*.5*X_Y([q0, q1], [diff_freq*t, 0.0]), [q0,q1]) add_hamiltonian!(cqs, parametric_drive(q1, t -> amp*(1+amp_noise)*erf_squared(t)*sin(2π*freq*t)), q1) push!(Us, unitary_propagator(cqs, float(tmax))); end z_corrs = res.minimizer Us = [projector * U * projector' for U = Us] Us = [Zrot(z_corrs[1]) ⊗ Zrot(z_corrs[2]) * U for U = Us]; # - p = plot(1 .+ amp_noises, [avgfidelity(liou(U), liou(CZ)) for U = Us]) xlabel!(p, "Pulse Amplitude") ylabel!(p, "CZ Fidelity") σ = 0.0005 distribution = exp.(-0.5*(amp_noises/σ).^2) distribution /= sum(distribution) plot(amp_noises, distribution) fids = Float64[] sigmas = range(1e-4, stop=3e-3, length=101) for σ = sigmas distribution = exp.(-0.5*(amp_noises/σ).^2) distribution /= sum(distribution) fid = avgfidelity(kraus2liou([√λ*U for (λ,U) = zip(distribution, Us)]), liou(CZ)) push!(fids, fid) end p = plot(sigmas, fids) xlabel!(p, "Relative Pulse Amplitude σ") ylabel!(p, "CZ Average Fidelity") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Kaggle # + [markdown] _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.029064, "end_time": "2021-12-17T03:38:22.745948", "exception": false, "start_time": 
"2021-12-17T03:38:22.716884", "status": "completed"} tags=[] # # **College Python Project** # ## **CIA - COVID INFO AND ANALYSIS** # + [markdown] papermill={"duration": 0.027488, "end_time": "2021-12-17T03:38:22.80354", "exception": false, "start_time": "2021-12-17T03:38:22.776052", "status": "completed"} tags=[] # ## Overview # *(Source: WHO)* # # Coronavirus disease (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. # # Most people infected with the virus will experience mild to moderate respiratory illness and recover without requiring special treatment. However, some will become seriously ill and require medical attention. Older people and those with underlying medical conditions like cardiovascular disease, diabetes, chronic respiratory disease, or cancer are more likely to develop serious illness. Anyone can get sick with COVID-19 and become seriously ill or die at any age. # # The best way to prevent and slow down transmission is to be well informed about the disease and how the virus spreads. Protect yourself and others from infection by staying at least 1 metre apart from others, wearing a properly fitted mask, and washing your hands or using an alcohol-based rub frequently. Get vaccinated when it’s your turn and follow local guidance. # # The virus can spread from an infected person’s mouth or nose in small liquid particles when they cough, sneeze, speak, sing or breathe. These particles range from larger respiratory droplets to smaller aerosols. It is important to practice respiratory etiquette, for example by coughing into a flexed elbow, and to stay home and self-isolate until you recover if you feel unwell. # ## We have attempted to show how this deadly virus attacked the world # + [markdown] papermill={"duration": 0.029466, "end_time": "2021-12-17T03:38:22.861375", "exception": false, "start_time": "2021-12-17T03:38:22.831909", "status": "completed"} tags=[] # # Initialization # + papermill={"duration": 0.083977, "end_time": "2021-12-17T03:38:22.972854", "exception": false, "start_time": "2021-12-17T03:38:22.888877", "status": "completed"} tags=[] # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # + [markdown] papermill={"duration": 0.028998, "end_time": "2021-12-17T03:38:23.03017", "exception": false, "start_time": "2021-12-17T03:38:23.001172", "status": "completed"} tags=[] # ## # + [markdown] papermill={"duration": 0.028792, "end_time": "2021-12-17T03:38:23.087932", "exception": false, "start_time": "2021-12-17T03:38:23.05914", "status": "completed"} tags=[] # # Importing Libraries # + papermill={"duration": 2.254127, "end_time": "2021-12-17T03:38:25.370159", "exception": false, "start_time": "2021-12-17T03:38:23.116032", "status": "completed"} tags=[] #Data Processing import pandas as pd import numpy as np #Data Visulaisation import plotly.express as px #Machine Learning Libraries import sklearn from sklearn import linear_model from sklearn.utils import shuffle #Miscellaneous import os import warnings warnings.filterwarnings('ignore') # + [markdown] papermill={"duration": 0.026947, "end_time": "2021-12-17T03:38:25.430122", "exception": false, "start_time": "2021-12-17T03:38:25.403175", "status": "completed"} tags=[] # # Reading DataSet # + papermill={"duration": 0.562786, "end_time": "2021-12-17T03:38:26.020326", "exception": false, "start_time": "2021-12-17T03:38:25.45754", "status": "completed"} tags=[] def read_data(path,file): return pd.read_csv(path+"/"+file) path=r'../input/corona-virus-report' world =read_data(path,'worldometer_data.csv') india=read_data('../input/covid19-in-india','covid_19_india.csv') state=pd.read_csv("../input/covid19-in-india/StatewiseTestingDetails.csv") daily = pd.read_csv('../input/covid19-corona-virus-india-dataset/nation_level_daily.csv') vac_data = pd.read_csv('../input/covid-world-vaccination-progress/country_vaccinations.csv') pop_data = pd.read_csv('../input/corona-virus-report/worldometer_data.csv') vac_manu = pd.read_csv('../input/covid-world-vaccination-progress/country_vaccinations_by_manufacturer.csv') state_vac=pd.read_csv('../input/covid19-in-india/covid_vaccine_statewise.csv') daily_records=read_data(path,'day_wise.csv') # + [markdown] papermill={"duration": 0.026623, "end_time": "2021-12-17T03:38:26.074076", "exception": false, "start_time": "2021-12-17T03:38:26.047453", "status": "completed"} tags=[] # # Data Cleaning # ## Making the data more usable for working with it # + papermill={"duration": 0.27686, "end_time": "2021-12-17T03:38:26.377925", "exception": false, "start_time": "2021-12-17T03:38:26.101065", "status": "completed"} tags=[] # For world Vaccination Dataset usa_vac = vac_data[vac_data['country'] == 'United States'] uk_vac = vac_data[vac_data['country'] == 'United Kingdom'] ger_vac = vac_data[vac_data['country'] == 'Germany'] ita_vac = vac_data[vac_data['country'] == 'Italy'] fra_vac = vac_data[vac_data['country'] == 'France'] chi_vac = vac_data[vac_data['country'] == 'China'] rus_vac = vac_data[vac_data['country'] == 'Russia'] isr_vac = vac_data[vac_data['country'] == 'Israel'] uae_vac = vac_data[vac_data['country'] == 'United Arab 
Emirates'] can_vac = vac_data[vac_data['country'] == 'Canada'] jpn_vac = vac_data[vac_data['country'] == 'Japan'] ind_vac = vac_data[vac_data['country'] == 'India'] ino_vac = vac_data[vac_data['country'] == 'Indonesia'] mal_vac = vac_data[vac_data['country'] == 'Malaysia'] ban_vac = vac_data[vac_data['country'] == 'Bangladesh'] nig_vac = vac_data[vac_data['country'] == 'Nigeria'] phi_vac = vac_data[vac_data['country'] == 'Phillipines'] vie_vac = vac_data[vac_data['country'] == 'Vietnam'] egy_vac = vac_data[vac_data['country'] == 'Egypt'] pak_vac = vac_data[vac_data['country'] == 'Pakistan'] usa_vac.drop(usa_vac[usa_vac['daily_vaccinations'].isnull()].index, inplace=True) uk_vac.drop(uk_vac[uk_vac['daily_vaccinations'].isnull()].index, inplace=True) ger_vac.drop(ger_vac[ger_vac['daily_vaccinations'].isnull()].index, inplace=True) ita_vac.drop(ita_vac[ita_vac['daily_vaccinations'].isnull()].index, inplace=True) fra_vac.drop(fra_vac[fra_vac['daily_vaccinations'].isnull()].index, inplace=True) chi_vac.drop(chi_vac[chi_vac['daily_vaccinations'].isnull()].index, inplace=True) rus_vac.drop(rus_vac[rus_vac['daily_vaccinations'].isnull()].index, inplace=True) isr_vac.drop(isr_vac[isr_vac['daily_vaccinations'].isnull()].index, inplace=True) uae_vac.drop(uae_vac[uae_vac['daily_vaccinations'].isnull()].index, inplace=True) can_vac.drop(can_vac[can_vac['daily_vaccinations'].isnull()].index, inplace=True) jpn_vac.drop(jpn_vac[jpn_vac['daily_vaccinations'].isnull()].index, inplace=True) ind_vac.drop(ind_vac[ind_vac['daily_vaccinations'].isnull()].index, inplace=True) ino_vac.drop(ino_vac[ino_vac['daily_vaccinations'].isnull()].index, inplace=True) mal_vac.drop(mal_vac[mal_vac['daily_vaccinations'].isnull()].index, inplace=True) ban_vac.drop(ban_vac[ban_vac['daily_vaccinations'].isnull()].index, inplace=True) nig_vac.drop(nig_vac[nig_vac['daily_vaccinations'].isnull()].index, inplace=True) phi_vac.drop(phi_vac[phi_vac['daily_vaccinations'].isnull()].index, inplace=True) vie_vac.drop(vie_vac[vie_vac['daily_vaccinations'].isnull()].index, inplace=True) egy_vac.drop(egy_vac[egy_vac['daily_vaccinations'].isnull()].index, inplace=True) pak_vac.drop(pak_vac[pak_vac['daily_vaccinations'].isnull()].index, inplace=True) # + papermill={"duration": 0.070395, "end_time": "2021-12-17T03:38:26.475066", "exception": false, "start_time": "2021-12-17T03:38:26.404671", "status": "completed"} tags=[] #For Indian Vaccination Dataset df2=state_vac df2 = df2.rename(columns= {'Updated On':'Date','Total Doses Administered':'TotalDoses','Male(Individuals Vaccinated)':'Male','Female(Individuals Vaccinated)':'Female', 'Total Individuals Vaccinated':'TotalVaccinated',' Covaxin (Doses Administered)':'Covaxin','CoviShield (Doses Administered)':'CoviShield','Sputnik V (Doses Administered)':'Sputnik'}) df2.Date = pd.to_datetime(df2.Date, format="%d/%m/%Y") df3=india df1=state df2 = df2[df2['State'] !='India'] df2 = df2.rename(columns= {'Updated On':'Date','Total Doses Administered':'TotalDoses','Male(Individuals Vaccinated)':'Male','Female(Individuals Vaccinated)':'Female', 'Total Individuals Vaccinated':'TotalVaccinated',' Covaxin (Doses Administered)':'Covaxin','CoviShield (Doses Administered)':'CoviShield','Sputnik V (Doses Administered)':'Sputnik'}) df2.Date = pd.to_datetime(df2.Date, format="%d/%m/%Y") df2_2=df2[df2['Date']=="2021-08-9"] df2_2.dropna() df2_1 = df3[df3['Date']=='2021-08-11'] # + [markdown] papermill={"duration": 0.026933, "end_time": "2021-12-17T03:38:26.528956", "exception": false, "start_time": 
"2021-12-17T03:38:26.502023", "status": "completed"} tags=[] # # Data Visualization # + [markdown] papermill={"duration": 0.026977, "end_time": "2021-12-17T03:38:26.583009", "exception": false, "start_time": "2021-12-17T03:38:26.556032", "status": "completed"} tags=[] # ## World Dataset # + papermill={"duration": 1.304319, "end_time": "2021-12-17T03:38:27.914252", "exception": false, "start_time": "2021-12-17T03:38:26.609933", "status": "completed"} tags=[] features=['TotalCases','TotalDeaths','TotalRecovered','ActiveCases'] for i in features: fig=px.treemap(world.iloc[0:25], values=i, path=['Country/Region'], template='plotly_dark', title="Tree Map depicting Impact of Covid-19 w.r.t {}".format(i)) fig.show() # + papermill={"duration": 0.155446, "end_time": "2021-12-17T03:38:28.100325", "exception": false, "start_time": "2021-12-17T03:38:27.944879", "status": "completed"} tags=[] px.line(daily_records, x='Date', y=['Confirmed', 'Deaths', 'Recovered','Active'], template='plotly_dark', title='Daily trends of Covid-19 cases', labels={'Date':'Month','value':'Statistics'}) # + papermill={"duration": 0.134577, "end_time": "2021-12-17T03:38:28.266527", "exception": false, "start_time": "2021-12-17T03:38:28.13195", "status": "completed"} tags=[] fig2=px.bar(world.iloc[0:20][::-1], y='Country/Region', x=['TotalCases','TotalRecovered', 'ActiveCases','TotalDeaths','Serious,Critical'], template='plotly_dark', title='Severly Hit Countries') fig2.update_xaxes(tickangle=270) fig2.show() # + papermill={"duration": 0.099936, "end_time": "2021-12-17T03:38:28.398669", "exception": false, "start_time": "2021-12-17T03:38:28.298733", "status": "completed"} tags=[] px.pie(world.iloc[0:20], names='Country/Region', values='TotalCases', template='plotly_dark', title='Distribution of Total Cases ') # + [markdown] papermill={"duration": 0.032835, "end_time": "2021-12-17T03:38:28.464256", "exception": false, "start_time": "2021-12-17T03:38:28.431421", "status": "completed"} tags=[] # ## Indian Dataset # + papermill={"duration": 0.116912, "end_time": "2021-12-17T03:38:28.613428", "exception": false, "start_time": "2021-12-17T03:38:28.496516", "status": "completed"} tags=[] from plotly.subplots import make_subplots import plotly.graph_objects as go grouped_data=read_data(path,'full_grouped.csv') grouped_data.head() def country_visualisations(df,country): data_group=df[df['Country/Region']==country] data=data_group.loc[:,['Date','Confirmed','Deaths','Recovered','Active']] figure1=make_subplots(rows=1,cols=4,subplot_titles=('Confirmed','Active','Recovered','Deaths')) figure1.add_trace(go.Scatter(name='Confirmed',x=data['Date'],y=data['Confirmed']),row=1,col=1) figure1.add_trace(go.Scatter(name='Active',x=data['Date'],y=data['Active']),row=1,col=2) figure1.add_trace(go.Scatter(name='Recovered',x=data['Date'],y=data['Recovered']),row=1,col=3) figure1.add_trace(go.Scatter(name='Deaths',x=data['Date'],y=data['Deaths']),row=1,col=4) figure1.update_layout(height=500, width=2000, title_text='Recorded Cases in {}'.format(country), template='plotly_dark') figure1.show() # + papermill={"duration": 0.229602, "end_time": "2021-12-17T03:38:28.875937", "exception": false, "start_time": "2021-12-17T03:38:28.646335", "status": "completed"} tags=[] country_visualisations(grouped_data,'India') country_visualisations(grouped_data,'Peru') # + [markdown] papermill={"duration": 0.03367, "end_time": "2021-12-17T03:38:28.944693", "exception": false, "start_time": "2021-12-17T03:38:28.911023", "status": "completed"} tags=[] # ## World Vaccination 
Situation # + papermill={"duration": 0.084483, "end_time": "2021-12-17T03:38:29.063475", "exception": false, "start_time": "2021-12-17T03:38:28.978992", "status": "completed"} tags=[] fig = go.Figure() fig.add_trace(go.Scatter(x=usa_vac['date'], y=usa_vac['daily_vaccinations'], mode='lines+markers', name='USA')) fig.add_trace(go.Scatter(x=uk_vac['date'],y=uk_vac['daily_vaccinations'], mode='lines+markers', name='UK')) fig.add_trace(go.Scatter(x=ger_vac['date'],y=ger_vac['daily_vaccinations'], mode='lines+markers', name='Germany')) fig.add_trace(go.Scatter(x=ind_vac['date'],y=ind_vac['daily_vaccinations'], mode='lines+markers', name='India')) fig.update_layout(title='Comparison of Daily Vaccinations' , template='plotly_dark' ) fig.show() # + [markdown] papermill={"duration": 0.03606, "end_time": "2021-12-17T03:38:29.136072", "exception": false, "start_time": "2021-12-17T03:38:29.100012", "status": "completed"} tags=[] # ## Indian Vs Pakistan (And China) # ### Not Cricket but Vaccinations # + papermill={"duration": 0.080436, "end_time": "2021-12-17T03:38:29.252625", "exception": false, "start_time": "2021-12-17T03:38:29.172189", "status": "completed"} tags=[] fig = go.Figure() fig.add_trace(go.Scatter(x=ind_vac['date'],y=ind_vac['daily_vaccinations'], mode='lines+markers', name='India')) fig.add_trace(go.Scatter(x=chi_vac['date'],y=chi_vac['daily_vaccinations'], mode='lines+markers', name='China')) fig.add_trace(go.Scatter(x=pak_vac['date'],y=pak_vac['daily_vaccinations'], mode='lines+markers', name='Pakistan')) fig.update_layout(title='Comparison of Daily Vaccinations' , template='plotly_dark') fig.show() # + [markdown] papermill={"duration": 0.037258, "end_time": "2021-12-17T03:38:29.327669", "exception": false, "start_time": "2021-12-17T03:38:29.290411", "status": "completed"} tags=[] # ## Pakistan may have won the cricket match but they did not win at vaccinations per day # # + [markdown] papermill={"duration": 0.036979, "end_time": "2021-12-17T03:38:29.402072", "exception": false, "start_time": "2021-12-17T03:38:29.365093", "status": "completed"} tags=[] # ## Indian Situation # + [markdown] papermill={"duration": 0.037121, "end_time": "2021-12-17T03:38:29.476521", "exception": false, "start_time": "2021-12-17T03:38:29.4394", "status": "completed"} tags=[] # ### Testing Situation In India # + papermill={"duration": 0.813541, "end_time": "2021-12-17T03:38:30.327582", "exception": false, "start_time": "2021-12-17T03:38:29.514041", "status": "completed"} tags=[] state.Date = pd.to_datetime(state.Date, format="%Y/%m/%d") fig = px.line(state, x='Date', y='TotalSamples', color='State', title='Total number of samples collected for Covid-19 testing(Statewise)' , template="plotly_dark") fig.show() # + [markdown] papermill={"duration": 0.050614, "end_time": "2021-12-17T03:38:30.486576", "exception": false, "start_time": "2021-12-17T03:38:30.435962", "status": "completed"} tags=[] # ### Result of Testing # # + papermill={"duration": 0.138953, "end_time": "2021-12-17T03:38:30.674983", "exception": false, "start_time": "2021-12-17T03:38:30.53603", "status": "completed"} tags=[] fig = px.bar(df2_1, x='State/UnionTerritory', y=['Confirmed','Cured','Deaths'], template="plotly_dark") fig.update_layout(xaxis={'categoryorder':'total descending'}) fig.show() # + [markdown] papermill={"duration": 0.050281, "end_time": "2021-12-17T03:38:30.775429", "exception": false, "start_time": "2021-12-17T03:38:30.725148", "status": "completed"} tags=[] # ### Indian Vaccination Status # + papermill={"duration": 0.12962, 
"end_time": "2021-12-17T03:38:30.956384", "exception": false, "start_time": "2021-12-17T03:38:30.826764", "status": "completed"} tags=[] fig = px.bar(df2_2, x='State', y='TotalDoses',title='Total Doses (Jan-Aug)',template="plotly_dark") fig.update_traces(textposition='outside') fig.update_layout(xaxis={'categoryorder':'total descending'}) fig.update_xaxes(tickfont=dict(size=14)) fig.update_yaxes(tickfont=dict(size=14)) fig.show() # + papermill={"duration": 0.147765, "end_time": "2021-12-17T03:38:31.155473", "exception": false, "start_time": "2021-12-17T03:38:31.007708", "status": "completed"} tags=[] fig = px.bar(df2_2, x="State", y=["Covaxin", "CoviShield",'TotalDoses'], template="plotly_dark") fig.update_layout(barmode='stack',legend_orientation="h",legend=dict(x= 0.3, y=1.0), xaxis={'categoryorder':'total descending'}, title_text='Covid-19 Total Vaccinations in India according to type of vaccine', title_x=0.5, width= 1100, height= 500 ) fig.update_xaxes(tickfont=dict(size=14)) fig.update_yaxes(tickfont=dict(size=14)) fig.show() # + [markdown] papermill={"duration": 0.052167, "end_time": "2021-12-17T03:38:31.260736", "exception": false, "start_time": "2021-12-17T03:38:31.208569", "status": "completed"} tags=[] # # Machine Learning Part # + [markdown] papermill={"duration": 0.052202, "end_time": "2021-12-17T03:38:31.365559", "exception": false, "start_time": "2021-12-17T03:38:31.313357", "status": "completed"} tags=[] # ## Attempt At Linear Regression # + papermill={"duration": 0.084136, "end_time": "2021-12-17T03:38:31.501964", "exception": false, "start_time": "2021-12-17T03:38:31.417828", "status": "completed"} tags=[] from sklearn import model_selection from sklearn.linear_model import LinearRegression df2_1=df2_1[df2_1['State/UnionTerritory']!='Maharashtra'] states_clubbed=df2_1[["Confirmed","Cured","Deaths"]] predict="Deaths" X=np.array(states_clubbed.drop(predict,1)) y=np.array(states_clubbed[predict]) X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25) linear = LinearRegression() linear.fit(X_train,y_train) Y_pred = linear.predict(X_test) print(linear.score(X_test, y_test)) print(linear.score(X_train,y_train)) # + papermill={"duration": 0.11594, "end_time": "2021-12-17T03:38:31.673859", "exception": false, "start_time": "2021-12-17T03:38:31.557919", "status": "completed"} tags=[] df2_1 fig = px.scatter(df2_1, x='Cured', y='Deaths', template="plotly_dark") fig.show() # + papermill={"duration": 0.170887, "end_time": "2021-12-17T03:38:31.897854", "exception": false, "start_time": "2021-12-17T03:38:31.726967", "status": "completed"} tags=[] df2_1 # + papermill={"duration": 0.0538, "end_time": "2021-12-17T03:38:32.006676", "exception": false, "start_time": "2021-12-17T03:38:31.952876", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # *best viewed in [nbviewer](https://nbviewer.jupyter.org/github/CambridgeSemiticsLab/BH_time_collocations/blob/master/results/notebooks/time_distribution_and_pos.ipynb)* # # Time Distribution and "Parts of Speech" # ## Beginning Analysis of Hebrew Time Adverbials # ### # # # ! echo "last updated:"; date # ## The Phenomena of "Time Adverbials" # # Time adverbials are crucial and ubiquitous constructions in the clause structures of worldwide languages ([Klein 1994; Haspelmath 1997](../../docs/bibliography.txt)). 
These arguments, which can consist of any unit from single words up to dependent clauses, coordinate with the main verb, alongside other arguments, to locate and modify events/predications. Word and phrasal time adverbials derive from two main sources: words prototypically used as "adverbs," and noun phrases. Adverbs are occasionally distinguished by morphological features. In Biblical Hebrew, for instance, the suffix *-am* distinguishes words like יומם as adverbial. But in Hebrew explicit morphological markers are otherwise rare. Adverbs have to be distinguished syntactically and behaviorally instead. Noun-based time adverbials are the genetic descendants of locative phrases (Haspelmath 1997). They are most frequently prepositional, and they utilize other elements common to noun phrases: definite articles, demonstratives, quantifiers, construct states, etc. # # In this notebook, we (me and you) will set out the primary surface forms of attested time adverbials in the Hebrew Bible, meaning we will compare and quantify all identical and similar phrases. We will visualize and model the distribution of time adverbials across the whole Hebrew Bible. And finally we will investigate the validity of so-called "parts of speech." By what criteria does one define a "part of speech"? How do these categories affect the interpretation of time adverbials in Biblical Hebrew? And is it possible to derive a data-driven approach to such an analysis? #
    # # Python # # Now we import the modules and data needed for the analysis. # + # standard packages from pathlib import Path import collections import networkx as nx import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib import rcParams rcParams['font.serif'] = ['SBL Biblit'] import seaborn as sns #sns.set(font_scale=1.5, style='whitegrid') #from sklearn.decomposition import PCA from adjustText import adjust_text from bidi.algorithm import get_display # bi-directional text support for plotting # import custom modules (kept under /tools) # # !NB! add to Python path to avoid Module Not Found errors from stats.significance import contingency_table, apply_fishers from stats.pca import apply_pca from paths import figs, main_table from tf_tools.load import load_tf from cx_analysis.load import cxs from cx_analysis.search import SearchCX # set up Text-Fabric methods for navigating corpus data TF, API, A = load_tf(silent='deep') A.displaySetup(condenseType='phrase') F, E, T, L = A.api.F, A.api.E, A.api.T, A.api.L # import Construction objects for more corpus data + syntax structures se = SearchCX(A) phrase2cxs = cxs['phrase2cxs'] class2cx = cxs['class2cx'] time_cxs = list(phrase2cxs.values()) # set up path configs for figure images name = 'time_distribution_and_pos' figures = figs.joinpath(name) if not figures.exists(): figures.mkdir(parents=True) def fg_path(fg_name): return figures.joinpath(fg_name) # - # # Dataset # # The dataset used by this analysis is originally based on the ETCBC's [BHSA syntactic corpus](https://github.com/ETCBC/bhsa). BHSA encodes phrases along with syntactic features for the entire Hebrew Bible. One of these features is that of `function`, referring to a phrase's role within a clause. A phrase with `function=Time` (time phrases) is the object of this analysis: phrases that modify time within the argument structure of a clause. # # The BHSA syntax data provides a useful starting point. One of the first challenges in using this data for research, however, is in evaluating the content and quality of the syntactic data. Tagging syntax, as with many things in linguistics, is far from an objective task. Another challenge is found in the BHSA datamodel itself. BHSA, in general, does not contain very much semantic data (e.g. word classes). It also emphasizes clause and discourse elements at the expense of the phrase model. This makes the task of navigating and classifying time phrases more difficult. In order to evaluate the quality and content of the BHSA time phrase data, we need to be able to navigate the phrase structure. # # To augment these shortcomings, this project adds some semantic intelligence and parsing to BHSA time phrases. This is achieved using a custom class called [`Construction`](../../tools/cx_analysis/cx.py). The class, using the [grammar](../../data/cxs) written for it, parse the BHSA time phrases to provide more detailed semantic distinctions (especially for prepositional words) and a more precise parsing of phrase structure. The phrase structure is stored as a directed graph (using NetworkX). That structure is then used to tag features of interest in the time phrases. # # Finally, a [script](../../data/cxs/dataset.py) uses the new data to build a dataset. We import that dataset below as a Pandas `DataFrame` and explore its contents. 
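# Before loading the dataset, here is a minimal toy sketch of the idea behind the `Construction`
# parsing (hypothetical names and structure, not the project's actual `cx_analysis` API): a parsed
# time phrase is held as a directed graph, and phrase-level features are then tagged by inspecting
# its nodes and edges.
# +
import networkx as nx

# toy parse of a phrase like "ביום ההוא" ("on that day"): a head noun plus its modifiers
toy_phrase = nx.DiGraph()
toy_phrase.add_node('יום', role='head')
toy_phrase.add_node('ב', role='preposition')
toy_phrase.add_node('הוא', role='demonstrative')
toy_phrase.add_edge('ב', 'יום', rel='governs')
toy_phrase.add_edge('הוא', 'יום', rel='modifies')

# tag features of interest by walking the graph (mirrors the kind of columns used below)
toy_features = {
    'preposition': any(d['role'] == 'preposition' for _, d in toy_phrase.nodes(data=True)),
    'demonstrative': any(d['role'] == 'demonstrative' for _, d in toy_phrase.nodes(data=True)),
    'bare': toy_phrase.in_degree('יום') == 0,  # no modifiers point at the head
}
toy_features
# -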
times_full = pd.read_csv(main_table, sep='\t') times_full.set_index(['node'], inplace=True) # The full dimensions of the dataset are shown: times_full.shape # We show the top 5 rows of the `DataFrame` below. times_full.head() # **The dataset currently contains some complex phrases which consist of multiple time units coordinated together. For the time being we set those aside to focus on time adverbials which have no more than one profiled time unit.** # # This choice is necessary since complex time phrases have not yet been fully parsed. It is my hope to add them later. times = times_full[~times_full.classi.str.contains('component')] times.shape # The difference in size is compared below: print(f'Size difference in dataset: {times_full.shape[0]-times.shape[0]}') # ## Distribution # # Before describing the time adverbials in depth, let's look at their distribution throughout the Hebrew Bible. We can visualize distribution as across a single dimension, a sequence of clauses. # # For a set of clauses within a single book, `1–N`, create clusters of clauses, where `cluster = a 50 clause stretch`. If a book ends without an even 50 clauses, keep the uneven cluster as either its own cluster (if `N-clauses > 30`) or add it to last cluster in the book. # + # divide texts evenly into slices of 50 clauses clause_segments = [] for book in F.otype.s('book'): clauses = list(L.d(book,'clause')) cluster = [] for i, clause in enumerate(clauses): i += 1 # skip non-Hebrew clauses lang = F.language.v(L.d(clause,'word')[0]) if lang != 'Hebrew': continue cluster.append(clause) # create cluster of 50 if (i and i % 50 == 0): clause_segments.append(cluster) cluster = [] # deal with final uneven clusters elif i == len(clauses): if len(cluster) < 30: clause_segments[-1].extend(cluster) # add to last cluster else: clause_segments.append(cluster) # keep as cluster # - # Let's see how many segments have been made. len(clause_segments) # NB that several segments are slightly larger or smaller than 50. Here are the sizes of the under/over-sized clusters. unevens = [cl for cl in clause_segments if len(cl) != 50] print('lengths of uneven-sized clusters:') for cl in unevens: print(len(cl), end='; ') # We will now iterate through the clusters and tally the number of time adverbials contained within each one. We track along the way the starting points for each new book in the corpus. Those are recorded so they can be plotted. The plot is presented further below as a strip-plot. 
# + # map book names for visualizing # map grouped book names book_map = {'1_Kings': 'Kings', '2_Kings':'Kings', '1_Samuel':'Samuel', '2_Samuel':'Samuel', '1_Chronicles':'Chronicles', '2_Chronicles':'Chronicles',} # book of 12 for book in ('Hosea', 'Joel', 'Amos', 'Obadiah', 'Jonah', 'Micah', 'Nahum', 'Habakkuk', 'Zephaniah', 'Haggai', 'Zechariah', 'Malachi'): book_map[book] = 'Twelve' # Megilloth for book in ('Ruth', 'Lamentations', 'Ecclesiastes', 'Esther', 'Song_of_songs'): book_map[book] = 'Megilloth' # Dan-Neh for book in ('Ezra', 'Nehemiah', 'Daniel'): book_map[book] = 'Daniel-Neh' # + # build strip plot data strip_data = [] covered_nodes = set() bookboundaries = {} # time adverbial slots for testing whether # a clause contains a TA or not ta_slots = set( s for cx in time_cxs for sp in cx for s in sp.slots ) # iterate through constructions and gather book data this_book = None for i, seg in enumerate(clause_segments): for cl in seg: book, chapter, verse = T.sectionFromNode(cl) this_book = book_map.get(book, book) if set(L.d(cl,'word')) & ta_slots: strip_data.append(i+1) # add book boundaries for plotting if this_book not in bookboundaries: bookboundaries[this_book] = i+1 # - strip_title = 'Distribution of time adverbials by segments of ~50 clauses across Hebrew Bible' plt.figure(figsize=(20, 6)) sns.stripplot(x=strip_data, edgecolor='black', linewidth=0.6, color='lightblue', jitter=0.3) plt.xticks(ticks=list(bookboundaries.values()), labels=list(bookboundaries.keys()), rotation='vertical', size=16) plt.ylabel('random "jitter" effect for visibility', size=11) plt.xlabel('sequence of ~50 clause segments across books (x=1 is first 50 clauses of Genesis)', size=11) plt.grid(axis='x') plt.savefig(fg_path('distribution_by_50clauses.png'), dpi=300, bbox_inches='tight') plt.title(strip_title, size=16) print(strip_title) # keep title out of savefig print('x-axis: Nth clause cluster') print('y-axis: random jitter effect for visibility') plt.show() # We can see a slightly sparser population of time adverbials in some of the poetic books, especially Job-Proverbs. # ## Top Surface Forms # + token_counts = pd.DataFrame(times.token.value_counts()) token_counts.head(25) # - # ## Noun / Adverb Distinction? # # In the surface forms we can recognize numerous features which are common to phrases in Hebrew: prepositions, plural morphemes, definite articles, demonstratives. We also see modifications of number with cardinal number quantifiers or qualitative quantifiers like כל. **In the quest to specify the semantics and build semantic classes, we need to know how these various constructions relate to one another.** What does each element contribute to the whole? What is their relationship to one another? How does their use contribute to the precise meaning of a phrase? # # A cursory overview of the time adverbial surface forms above shows that not all time-words combine evenly with these features. Some regularly do, such as יום. But other words like עתה, אז, עולם seem to appear on their own more often, occasionally with a preposition. These are words that are often classified as "particles" or quintessential "adverbs". # # ### Variability Demonstrated: עולם # # A closer inspection of certain so-called particles reveals that these larger tendencies are not always absolute, but variable. Take עולם, for example. Below we count and reveal all of its surface forms. 
olam = times.query('time == "עולם"') olam_tokens = pd.DataFrame(olam.token.value_counts()) display(olam_tokens) # Let's count up the relevant features in the phrases. olam_features = pd.pivot_table(olam, index='time', values=['bare', 'definite', 'time_pl', 'genitive'], aggfunc=np.sum) olam_features['total'] = olam_features.sum(1) olam_features # Inspection of the various forms occurring with עולם as the head shows variable preference for nominalizing constructions. The vast majority (166/175) appear without modifications ("bare") like definite articles and pluralization. But 8 cases do deploy nominalizing constructions such as definites and plurals. # **How do we account for both the tendencies and the variability of words like these?** And how does that inform our analysis of the semantics? # # I propose, in line with the approach of Construction Grammar, to avoid assuming a universal word class (e.g. "noun" versus "adverb"). Rather, parts-of-speech tendencies are manifested through collocational patterns. Some words may be highly associated with a given pattern, and the associations may even be semantically motivated. For instance, the word אישׁ will co-occur frequently with patterns associated with (animate) objects. The high co-occurrence frequency is motivated by the meaning of the word itself. Framed in this way, we can talk more about a continuum of uses: a word may be more or less associated with contexts that indicate, for example, an object (traditionally a "noun"). # # Croft also treats parts of speech as a continuum of semantic features rather than discrete categories (Croft, *Radical Construction Grammar*, 2001). He provides the following illustration for constructions in English which characterize that continuum (2001: 99): # # Croft notes two axes along which language encodes certain functions. The y-axis denotes an object-to-action continuum, whereas the x-axis denotes a reference-to-predication continuum. Words that are prototypical nouns will co-occur often with constructions indicating referentiality (REF) and object attributes (OBJECT). This is the upper left-hand corner. Croft lists indicators of "number" as such a marker. On the opposite corner are words which are heavily construed as actions with predication, i.e. prototypical verbs. These co-occur with morpheme markers like tense or modality. # # Can we model this continuum somehow? Doing so would enable us to look for subtle collocational tendencies above the level of the phrase. For example, in a previous study, I found preliminary evidence that the *yiqṭol* verb tends to collocate with time adverbials which do not nominalize with features such as definite articles (see results [here](https://github.com/CambridgeSemiticsLab/BH_time_collocations/blob/master/archive/2019-10-31/analysis/exploratory/construction_clusters.ipynb)). Or, to put it simply, it seems that *yiqṭol* prefers times with more particle-like than nominal behavior. If we were to have more precise data about time-word behavior, we could more accurately measure these kinds of clause-level tendencies. # ## Test Collocational Tendencies within Time Adverbial Components # The remaining time adverbials are now further filtered based on the exclusions noted above. They will be stored based on the lexeme string of their heads. 
# + head_cols = pd.pivot_table( times, index=['time'], values=['time_pl', 'quantified', 'definite', 'demonstrative', 'ordinal', 'time_sffx', 'bare', 'genitive'], aggfunc=np.sum ) head_cols = head_cols.loc[head_cols.sum(1).sort_values(ascending=False).index] # sort on sum # - head_cols # We now have a co-occurrence matrix that contains all of our counts. Let's have a look at the top values. head_cols.head(10) # ## Dataset Stats # ### Raw sums across features head_cols.sum() # ## Pruning # Some heads will occur only once or so. We will remove any cases with a sample size `< 5`. head_cols_pruned = head_cols[head_cols.sum(1) > 4] # Here is the comparison of size after we've applied our pruning. head_cols.shape # before head_cols_pruned.shape # after # ### Normalization: ratio # # There are a number of ways to normalize the counts so that more and less common words are evenly compared. We can use a contingency-based method, which looks at each count in relation to the size of the dataset. This would allow us to isolate relationships which are statistically significant to a particular word. We can also use a non-contingency method such as a simple ratio (percentage). In this case, the ratio makes a bit more sense because we want to focus more closely on each word's own distributional tendencies. None of the data can be "statistically insignificant". The data is normalized below. Each decimal value is out of 1. # + head_cols_ratio = head_cols_pruned.divide(head_cols_pruned.sum(1), axis=0) head_cols_ratio.head(10) # - head_cols.index.shape # ## Nominalizing Tendencies # # Here we will plot the nominalizing tendencies of words to get an idea about parts of speech. pca_values, loadings = apply_pca(head_cols_ratio, 0, 1) loadings[:3] # + # -- plot words -- x, y = (pca_values.iloc[:,0], pca_values.iloc[:,1]) plt.figure(figsize=(12, 10)) plt.scatter(x, y, color='') #plt.grid(b=None) plt.xticks(size=10) plt.yticks(size=10) plt.xlabel('PC1', size=16) plt.ylabel('PC2', size=16) plt.axhline(color='black', linestyle='-', linewidth=0.5) plt.axvline(color='black', linestyle='-', linewidth=0.5) # -- annotate words -- texts = [] text_strings = head_cols_ratio.index text_strings = [get_display(s).replace('\u05C1','') for s in text_strings] for i, txt in enumerate(text_strings): tx, ty = x[i], y[i] texts.append( plt.text( tx, ty, txt, size=11, weight='heavy', family='serif', ) ) # -- annotate features -- # configure offsets / feature-specific font sizes: offsets = { 'definite': (-0.04, 0.005, 11), 'genitive': (-0.08, 0, 10), 'quantified': (-0.1, -0.001, 10), 'time_sffx': (0, -0.013, 9), 'time_pl': (-0.03, -0.02, 11), 'bare': (0, 0.005, 11), } skip_feat = ['ordinal', 'demonstrative'] for feature in loadings: if feature in skip_feat: continue x_off, y_off, size = offsets.get(feature, (0,0,12)) # config offsets / size fx, fy = loadings[feature][:2] plt.arrow(0, 0, fx, fy, color='blue', linewidth=1, head_width=0) plt.text(fx+x_off, fy+y_off, feature, color='blue', size=size) adjust_text(texts) # clean up overlapping annotations plt.savefig(fg_path('time_head_clusters.png'), dpi=300, bbox_inches='tight') # save plot plt.title('Time Head Clusters based on Collocating Constructions', size=18) plt.show() # - # Cf. Fisher's hyper-geometric # ## Analysis of PCA Results # # Two primary groups with a potential third sub-group. The blue arrows show us the features which are influencing the locations on the graph. The two strongest influences are prepositions and zero-marking. These separate words along the x-axis. 
From the perspective of the x-axis alone, we see mainly two groups, based on collocation with prepositions versus zero-marking. # # Note that the presence of constructions indicating definiteness (ה), quantifiers, demonstratives, and plurality pull the largest cluster to the first and second quadrants (`y>0`). pca_values[:10] # ## Prepositional Tendencies # # Here we plot the prepositional tendencies of various time words. # + prep_cols = times.pivot_table(index='time', columns='preposition', aggfunc='size').fillna(0) prep_cols = prep_cols.loc[prep_cols.sum(1).sort_values(ascending=False).index] prep_cols # + prep_col_ratio = prep_cols.div(prep_cols.sum(1), axis=0) prep_col_ratio # - prep_col_ratio_pruned = prep_col_ratio[prep_cols.sum(1) > 5] # plot only values with > 5 observations # + fig, ax = plt.subplots(figsize=(10,8)) x, y = [prep_col_ratio_pruned.iloc[:,i] for i in (0,1)] ax.scatter(x, y, color='') ax.set_xlabel('ø prep') ax.set_ylabel('prep') # annotate texts = [] text_strings = prep_col_ratio_pruned.index text_strings = [get_display(s).replace('\u05C1','') for s in text_strings] for i, txt in enumerate(text_strings): tx, ty = x[i], y[i] texts.append( ax.text( tx, ty, txt, size=11, weight='heavy', family='serif', ) ) adjust_text(texts) # clean up overlapping annotations plt.savefig(fg_path('time_prep_ratios.png'), dpi=300, bbox_inches='tight') plt.show() # - # **TODO: switch to a scatter plot [like this](https://analyse-it.com/docs/user-guide/distribution/continuous/dot-plot)** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="e1hsulTtnq4o" colab_type="text" # # First we import the modules and libraries necessary for running the **dynamic** Python code # # # + id="VAvkwfQEkPqn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="38f950ae-b808-4a3d-e32d-0007911abbc0" import pandas as pd # !pip install ctrl4ai from ctrl4ai import preprocessing from ctrl4ai import automl from ctrl4ai import datasets from ctrl4ai import helper import numpy as np # + id="XD2qY92qknTG" colab_type="code" colab={} dataset = pd.read_csv('titanic.csv') # + id="gGVL-76GOprM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 697} outputId="a6c169b9-0f49-4dfe-b4a8-ba679296c132" y = dataset['Survived'] # + id="DJDbgE3Q7Boi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d0427951-2350-43e8-f191-9638b3d027e0" len(y.index) # + id="VeC3gExJOpu7" colab_type="code" colab={} dataset = dataset.drop('Survived', axis=1) # + id="0oGYAOz1Opzg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="42cdde43-955c-49b2-967a-79713e3d8377" dataset.head() # + [markdown] id="nvrtL3QsoAxN" colab_type="text" # # # Here we are declaring the x and y which our client will give to us # + [markdown] id="FQPB5PVpoqxt" colab_type="text" # # **Dropping the single-valued columns** # + id="O7uo8-MAmKTp" colab_type="code" colab={} # for dropping the single-valued columns # it checks for single-valued columns and stores them in the list single_valued_column 
from ctrl4ai import helper single_valued_column = [ i for i in dataset.columns if helper.single_valued_col(dataset[i]) == True ] # the columns collected in single_valued_column are then dropped from the dataset dataset = dataset.drop(columns=single_valued_column) # single valued columns are removed from the dataset # + [markdown] id="LF7VcElRpE3X" colab_type="text" # # **Dropping non-categorical non-numeric columns** # + id="ztZDxu9pmKd7" colab_type="code" colab={} # list of columns which are categorical variables in our dataset from ctrl4ai import helper categorical_variable = [ i for i in dataset.columns if helper.check_categorical_col(dataset[i]) == True ] # + id="3IayWGFGmKg8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4b87cca9-0b15-4825-b070-ae3609b8e429" # drop the columns that are neither numeric nor categorical t = preprocessing.drop_non_numeric(dataset) li1 = dataset.columns li2 = t.columns temp3 = [item for item in li1 if item not in li2] # columns that drop_non_numeric would remove li3 = categorical_variable li4 = temp3 temp4 = [item for item in temp3 if item in li3] # non-numeric columns that are categorical, so we keep them to_be_removed = [item for item in temp3 if item not in temp4] new_data_set = dataset.drop(to_be_removed, axis=1) # + id="ltIh1x1VmKjX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="d84c6254-cde5-41ed-98e0-51a537478814" new_data_set.head() # + id="LnipkGoKt4zJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f334234c-c5ca-4a18-cd06-b7461a41a758" new_data_set_1 = preprocessing.drop_null_fields(new_data_set, dropna_threshold=0.3) # + id="o5f_e7Vqs60g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="4c3d1b8c-d6ff-4415-dfc1-fde0ef63597b" new_data_set_1.head() # + id="msZQhANyzVRp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="34289b85-957e-436e-db12-29dd16984d92" # imputing the values in the null fields of the dataframe # for continuous data we impute the mean and for categorical data we impute the mode
new_data_set_2 = preprocessing.impute_nulls(new_data_set_1, method='central_tendency') # + id="9mZWSxSUzVxV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c3a3d2db-db30-417e-fc8f-2e501aeaf476" new_data_set_2.isnull().values.any() # + id="NIXFVy7NzWGq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="708deb71-d448-4bc2-8768-4a389e833210" len(new_data_set_2.index) # + id="qYvAs65W5xn_" colab_type="code" colab={} # + id="PxIgxpri5xrS" colab_type="code" colab={} # + id="LPWtinxBpyxN" colab_type="code" colab={} from ctrl4ai import helper categorical_variable = [ i for i in new_data_set_1.columns if helper.check_categorical_col(new_data_set_1[i]) == True ] # + id="oVS8ud6i8vD2" colab_type="code" colab={} # create dummy variables for multiple categories # drop_first=True keeps k - 1 dummies per category data_set = pd.get_dummies(new_data_set_1, columns=categorical_variable, drop_first=True) # this drops the original Sex and Embarked columns # and creates dummy variables in their place # + id="ihP2P_96HCNG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="4c4cb055-e103-4449-d23e-c476ccb10325" data_set # + id="y8YnDnRNHCQJ" colab_type="code" colab={} final_data_set = data_set.drop('PassengerId', axis=1) # + id="f6HIMR5O6CmF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e5146d85-2f05-440e-f49f-e25a3ef316e6" # + id="lSYTmNyZJEk-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="0e2c0040-6105-4d45-b0fd-350f9491efe0" final_data_set # + id="OlE0hj-kJEoc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="99428176-fbaf-411d-bef9-55f44d81b2d2" len(y.index) # + id="p-V8mqDfXyC0" colab_type="code" colab={} # converting the integer columns of the pandas dataframe to float
final_data_set = final_data_set.astype(float) # + id="RXE3bXCpYuQu" colab_type="code" colab={} X = final_data_set # + id="bRFjnSNE6KYa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4b25153a-018e-4046-9064-e8a4af69d9ad" len(X.index) # + id="Cm79RW5Y6Kpf" colab_type="code" colab={} # + id="9FkA4l1fJEtI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="b4d5ffd7-6e00-4436-f541-86a5bab7183b" X # + id="36AZ8H89dV3N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a1e665ab-cf88-46c7-e441-110a27883a3b" len(X.index) # + id="wDZ29b1rdFsF" colab_type="code" colab={} from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=42) # + id="XLx_8LUxeB6s" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="db3c4fd1-9cd6-4adc-dca3-569fd30cef0b" # null values are still present final_data_set.isnull().values.any() # + id="O6jMffrHisxj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="526d7ce3-eddf-40e2-b1a5-758ca17db946" final_data_set.isnull().sum().sum() # + id="7xsgAdtbis9i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9bd87e2e-37d7-491d-c6ac-53aa6df68767" y_train.shape # + id="5eQcYn7iIUNU" colab_type="code" colab={} from sklearn import preprocessing # + id="4hpxT5TDHCSG" colab_type="code" colab={} X_num = X_train.to_numpy() # + id="WPuwTEFdbgTg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e37ea52e-5b89-41e0-93ec-3cd7d3a08a4d" X_num.shape # + id="U93kz5oyG99y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="bf7f20bb-7a9c-4627-ee1c-c27cb28aab7d" min_max_scaler = preprocessing.MinMaxScaler() X_train_scaled = min_max_scaler.fit_transform(X_num) X_train_scaled # + id="qRhBACOuZnL7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="68974f82-a305-4051-dd99-8e1457618195" X_test_scaled= min_max_scaler.transform(X_test) X_test_scaled min_max_scaler.min_ min_max_scaler.scale_ # + id="FS9dQ_iscdVm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="c59d23c1-d4fb-453b-995c-112682cc982c" X_train_scaled # + id="R1ZCGHTqcdYd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="5e59b9e7-5917-4439-84a9-b6db2df376e4" X_test_scaled # + id="5vCQTP9thdzp" colab_type="code" colab={} X_train_scaled_roundoff = np.around(X_train_scaled, decimals = 4) # + id="ndoNFyvVhd3c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="a2c77ad2-9cc3-4b7f-a960-872240c0a0e5" X_train_scaled_roundoff # + id="0LRy6oyHhd58" colab_type="code" colab={} X_test_scaled_roundoff = np.around(X_test_scaled, decimals = 4) # + id="wlvHOVcacdbZ" colab_type="code" colab={} # import the class from sklearn.linear_model import LogisticRegression # instantiate the model (using the default parameters) logreg = LogisticRegression() # fit the model with data logreg.fit(X_train_scaled,y_train) y_pred=logreg.predict(X_test_scaled) # + id="C6r3t1s8cdeA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="73f16ffb-8693-4b45-8eff-6030a2e2e7b5" X_train_scaled.shape # + id="KYhbdo6764_G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} 
outputId="bbbba7b8-6b69-439c-e057-f098c9830416" y_train.shape # + id="uXPy_xhA68vO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="63e71b67-2cbd-4597-c520-1c24edae2cfb" # import the metrics class from sklearn import metrics cnf_matrix = metrics.confusion_matrix(y_test, y_pred) cnf_matrix # + id="6nJ162L771u5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="1703c099-83e5-4ed9-8d65-d2114ede00fb" # import required modules import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # + id="vjQKbeC3774C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 342} outputId="9a1bc3b6-c4a8-42ff-e944-6c00108ec51d" class_names=[0,1] # name of classes fig, ax = plt.subplots() tick_marks = np.arange(len(class_names)) plt.xticks(tick_marks, class_names) plt.yticks(tick_marks, class_names) # create heatmap sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g') ax.xaxis.set_label_position("top") plt.tight_layout() plt.title('Confusion matrix', y=1.1) plt.ylabel('Actual label') plt.xlabel('Predicted label') # + id="-whH6_MW777F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="d822f000-464e-4b1d-898c-cdb97b28996e" print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) print("Precision:",metrics.precision_score(y_test, y_pred)) print("Recall:",metrics.recall_score(y_test, y_pred)) # + id="gBDFXeVb8A60" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="b364e0dd-605e-46c3-9fbe-6dde77521ccb" y_pred_proba = logreg.predict_proba(X_test)[::,1] fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba) auc = metrics.roc_auc_score(y_test, y_pred_proba) plt.plot(fpr,tpr,label="data 1, auc="+str(auc)) plt.legend(loc=4) plt.show() # + id="LNbOlIKJ8EMM" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib as mp import urllib.request import wget url = 'https://data.cityofchicago.org/resource/38sz-xyf4.json' wget.download(url, '/Users//Documents/NYU CUSP/Urban Spatial Final') df = pd.read_json(url) df df['start_time'] = pd.to_datetime(df['start_time']) df['time'] = pd.DatetimeIndex(df['start_time']).time df['hour'] = df['start_time'].dt.hour df.dtypes df['hour'].value_counts() late = df.loc[(df.hour >= 17)] len(late) late.to_excel(r'C:\Users\\Documents\NYU CUSP\Urban Spatial Final\health_events.xlsx', index = False) df2 = pd.read_csv('C:/Users//Documents/NYU CUSP/Urban Spatial Final/Transportation_Network_Providers_-_Trips.csv') df2.dtypes df2['Trip Start Timestamp'] = pd.to_datetime(df2['Trip Start Timestamp']) df2['Trip End Timestamp'] = pd.to_datetime(df2['Trip End Timestamp']) df2['time'] = pd.DatetimeIndex(df2['Trip Start Timestamp']).time df2['hour'] = df2['Trip Start Timestamp'].dt.hour rideshare_night_trips = df2.loc[(df2.hour >= 18)] rideshare_night_trips rideshare_night_trips.to_excel(r'C:\Users\\Documents\NYU CUSP\Urban Spatial Final\rideshare_night_trips.xlsx', index = False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Almgren and Chriss Model For Optimal Execution of Portfolio 
Transactions # # ### Introduction # # We consider the execution of portfolio transactions with the aim of minimizing a combination of risk and transaction costs arising from permanent and temporary market impact. As an example, assume that you have a certain number of stocks that you want to sell within a given time frame. If you place this sell order directly to the market as it is, transaction costs may rise due to temporary market impact. On the other hand, if you split up into pieces in time, cost may rise due to volatility in the stock price. # # [Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) provided a solution to this problem by assuming the permanent and temporary market impact functions are linear functions of the rate of trading, and that stock prices follow a discrete arithmetic random walk. # # In this notebook, we will take a look at the model used by Almgren and Chriss to solve the optimal liquidation problem. We will start by stating the formal definitions of *trading trajectory*, *trading list*, and *trading strategy* for liquidating a single stock. # # ### Trading Trajectory, Trading List, and Trading Strategy # # We define trading trajectory, trading list, and trading strategy just as Almgren and Chriss did in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Suppose we hold $X$ shares of a stock that we want to liquidate before time $T$. Divide $T$ into $N$ intervals of length $\tau=\frac{T}{N}$ and define: # # - $t_k = k\tau$ to be discrete times, where $k = 0,..,N$. # # # - A **trading trajectory** to be the list $(x_0,..,x_N)$, where $x_k$ is the number of shares we plan to hold at time $t_k$. We require that our initial position $x_0 = X$, and that at liquidation time $T$, $x_N = 0$. # # # - A **trading list** to be $(n_1,..,n_N)$, $n_k = x_{k-1} - x_k$ as the number of shares that we will sell between times $t_{k-1}$ and $t_k$. # # # - A **trading strategy** as a rule for determining $n_k$ from the information available at time $t_{k-1}$. # # Below, we can see a visual example of a trading trajectory, for $N = 12$. # # # ## Price Dynamics # # We will assume that the stock price evolves according to a discrete arithmetic random walk: # # \begin{equation} # S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k # \end{equation} # # for $k = 1,..,N$ and where: # # \begin{equation} # S_k = \text{ stock price at time $k$}\hspace{21.6cm}\\ # \sigma = \text{ standard deviation of the fluctuations in stock price}\hspace{16.3cm}\\ # \tau = \text{ length of discrete time interval}\hspace{20.2cm}\\ # \xi_k = \text{ draws from independent random variables}\hspace{17.8cm} # \end{equation} # # We will denote the initial stock price as $S_0$. The role of $\xi_k$ is to simulate random price fluctuations using random numbers drawn from a Normal Gaussian distribution with zero mean and unit variance. The code below shows us what this price model looks like, for an initial stock price of $S_0 =$ \$50 dollars, a standard deviation of price fluctuations of $\sigma = 0.379$, and a discrete time interval of $\tau = 1$. # + # %matplotlib inline import matplotlib.pyplot as plt import utils # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Set the number of days to follow the stock price n_days = 100 # Plot the stock price as a function of time utils.plot_price_model(seed = 0, num_days = n_days) # - # ## Market Impact # # As we learned previously the price of a stock is affected by market impact that occurs every time we sell a stock. 
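# Before we add those impact terms, it may help to see the impact-free price model written out directly. The cell above relies on the course's `utils.plot_price_model` helper, whose internals are not shown here; the standalone sketch below (with the same illustrative parameter values) simply simulates and plots the arithmetic random walk defined earlier.
# +
import numpy as np
import matplotlib.pyplot as plt

S0, sigma, tau, n_days = 50.0, 0.379, 1.0, 100
np.random.seed(0)
xi = np.random.standard_normal(n_days)            # independent N(0, 1) draws
S = S0 + np.cumsum(sigma * np.sqrt(tau) * xi)     # S_k = S_{k-1} + sigma * tau^(1/2) * xi_k

plt.plot(np.arange(1, n_days + 1), S)
plt.xlabel('Day')
plt.ylabel('Simulated stock price')
plt.show()
# -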
In their model, Almgren and Chriss distinguish between two types of market impact, permanent and temporary market impact. We will now add these two factors into our price model. # # ### Permanent Impact # # Permanent market impact refers to changes in the equilibrium price of a stock as a direct function of our trading. Permanent market impact is called *permanent* because its effect persists for the entire liquidation period, $T$. We will denote the permanent price impact as $g(v)$, and will add it to our price model: # # \begin{equation} # S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right) # \end{equation} # # Here, we assumed the permanent impact function, $g(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $g(v)$ to have the form: # # \begin{equation} # g(v) = \gamma \left(\frac{n_k}{\tau}\right) # \end{equation} # # where $\gamma$ is a constant and has units of (\$/share${}^2$). Replacing this in the above equation we get: # # \begin{equation} # S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \gamma n_k # \end{equation} # # With this form, we can see that for each $n$ shares that we sell, we will depress the stock price permanently by $n\gamma$, regardless of the time we take to sell the stocks. # ### Temporary Impact # # Temporary market impact refers to temporary imbalances in supply and demand caused by our trading. This leads to temporary price movements away from equilibrium. Temporary market impact is called *temporary* because its effect # dissipates by the next trading period. We will denote the temporary price impact as $h(v)$. Given this, the actual stock price at time $k$ is given by: # # \begin{equation} # \tilde{S_k} = S_{k-1} - h\left(\frac{n_k}{\tau}\right) # \end{equation} # # Where, we have again assumed the temporary impact function, $h(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $h(v)$ to have the form: # # \begin{equation} # h(v) = \epsilon \mbox{ sign}(n_k) + \eta \left(\frac{n_k}{\tau}\right) # \end{equation} # # where $\epsilon$ and $\eta$ are constants with units (\$/share) and (\$ time/share${}^2$), respectively. It is important to note that $h(v)$ does not affect the price $S_k$. # ## Capture # # We define the **Capture** to be the total profits resulting from trading along a particular trading trajectory, upon completion of all trades. We can compute the capture via: # # \begin{equation} # \sum\limits_{k=1}^{N} n_k \tilde{S_k} = X S_0 + \sum\limits_{k=1}^{N} \left(\sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right)\right) x_k - \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right) # \end{equation} # # As we can see this is the sum of the product of the number of shares $n_k$ that we sell in each time interval, times the effective price per share $\tilde{S_k}$ received on that sale. # # ## Implementation Shortfall # # We define the **Implementation Shortfall** as the total cost of trading and is given by: # # \begin{equation} # I_s = X S_0 - \sum_{k = 1}^N n_k \tilde{S_k} # \end{equation} # # This is what we seek to minimize when determining the best trading strategy! # # Note that since $\xi_k$ is random, so is the implementation shortfall. Therefore, we have to frame the minimization problem in terms of the expectation value of the shortfall and its corresponding variance. We'll refer to $E(x)$ as the expected shortfall and $V(x)$ as the variance of the shortfall. 
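# Before turning to the expectation and variance, here is how the capture and the implementation shortfall of a single realized episode would be computed. This is only a sketch: the trade list and execution prices below are made-up placeholder numbers, purely for illustration.
# +
import numpy as np

X, S0 = 1_000_000, 50.0                              # shares to liquidate, initial price
n_k = np.full(10, X / 10)                            # example trade list: sell evenly over N = 10 intervals
S_exec = S0 + np.random.normal(0.0, 0.4, size=10)    # placeholder realized execution prices S~_k

capture = np.sum(n_k * S_exec)                       # total proceeds of the trades
shortfall = X * S0 - capture                         # implementation shortfall I_s
print(capture, shortfall)
# -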
Simplifying the above equation for $I_s$, is easy to see that: # # \begin{equation} # E(x) = \sum\limits_{k=1}^{N} \tau x_k g\left(\frac{n_k}{\tau}\right) + \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right) # \end{equation} # # and # # \begin{equation} # V(x) = \sigma^2 \sum\limits_{k=1}^{N} \tau {x_k}^2 # \end{equation} # # The units of $E(x)$ are dollars and the units of $V(x)$ are dollars squared. So now, we can reframe our minimization problem in terms of $E(x)$ and $V(x)$. # For a given level of variance of shortfall, $V(x)$, we seek to minimize the expectation of shortfall, $E(x)$. In the next section we will see how to solve this problem. # # ## Utility Function # # Our goal now is to find the strategy that has the minimum expected shortfall $E(x)$ for a given maximum level of variance $V(x) \ge 0$. This constrained optimization problem can be solved by introducing a Lagrange multiplier $\lambda$. Therefore, our problem reduces to finding the trading strategy that minimizes the **Utility Function** $U(x)$: # # \begin{equation} # U(x) = E(x) + \lambda V(x) # \end{equation} # # The parameter $\lambda$ is referred to as **trader’s risk aversion** and controls how much we penalize the variance relative to the expected shortfall. # # The intuition of this utility function can be thought of as follows. Consider a stock which exhibits high price volatility and thus a high risk of price movement away from the equilibrium price. A risk averse trader would prefer to trade a large portion of the volume immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. Alternatively, if the price is expected to be stable over the liquidation period, the trader would rather split the trade into smaller sizes to avoid price impact. This trade-off between speed of execution and risk of price movement is ultimately what governs the structure of the resulting trade list. # # # # Optimal Trading Strategy # # Almgren and Chriss solved the above problem and showed that for each value # of risk aversion there is a uniquely determined optimal execution strategy. The details of their derivation is discussed in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Here, we will just state the general solution. # # The optimal trajectory is given by: # # \begin{equation} # x_j = \frac{\sinh \left( \kappa \left( T-t_j\right)\right)}{ \sinh (\kappa T)}X, \hspace{1cm}\text{ for } j=0,...,N # \end{equation} # # and the associated trading list: # # \begin{equation} # n_j = \frac{2 \sinh \left(\frac{1}{2} \kappa \tau \right)}{ \sinh \left(\kappa T\right) } \cosh \left(\kappa \left(T - t_{j-\frac{1}{2}}\right)\right) X, \hspace{1cm}\text{ for } j=1,...,N # \end{equation} # # where $t_{j-1/2} = (j-\frac{1}{2}) \tau$. 
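# Both closed-form expressions are straightforward to evaluate numerically. The sketch below (not part of the original notebook) computes an optimal trajectory and its trade list for an illustrative value of $\kappa$; the definition of $\kappa$ in terms of the model parameters is given just below.
# +
import numpy as np

X, T, N = 1_000_000, 60, 60     # shares, liquidation time, number of trades
tau = T / N
kappa = 0.05                    # illustrative value; see the definition of kappa below

t = np.arange(N + 1) * tau
x = X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)     # optimal trajectory x_j

t_half = (np.arange(1, N + 1) - 0.5) * tau
n = 2 * np.sinh(0.5 * kappa * tau) / np.sinh(kappa * T) * np.cosh(kappa * (T - t_half)) * X   # trade list n_j

# sanity checks: the trajectory starts at X, ends at 0, and the trades sum to X
assert np.isclose(x[0], X) and np.isclose(x[-1], 0.0) and np.isclose(n.sum(), X)
# -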
The expected shortfall and variance of the optimal trading strategy are given by: # # # # In the above equations $\kappa$ is given by: # # \begin{align*} # &\kappa = \frac{1}{\tau}\cosh^{-1}\left(\frac{\tau^2}{2}\tilde{\kappa}^2 + 1\right) # \end{align*} # # where: # # \begin{align*} # &\tilde{\kappa}^2 = \frac{\lambda \sigma^2}{\tilde{\eta}} = \frac{\lambda \sigma^2}{\eta \left(1-\frac{\gamma \tau}{2 \eta}\right)} # \end{align*} # # # # Trading Lists and Trading Trajectories # # ### Introduction # # [Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) provided a solution to the optimal liquidation problem by assuming the that stock prices follow a discrete arithmetic random walk, and that the permanent and temporary market impact functions are linear functions of the trading rate. # # Almgren and Chriss showed that for each value of risk aversion there is a unique optimal execution strategy. This optimal execution strategy is determined by a trading trajectory and its associated trading list. The optimal trading trajectory is given by: # # \begin{equation} # x_j = \frac{\sinh \left( \kappa \left( T-t_j\right)\right)}{ \sinh (\kappa T)}X, \hspace{1cm}\text{ for } j=0,...,N # \end{equation} # # and the associated trading list is given by: # # \begin{equation} # n_j = \frac{2 \sinh \left(\frac{1}{2} \kappa \tau \right)}{ \sinh \left(\kappa T\right) } \cosh \left(\kappa \left(T - t_{j-\frac{1}{2}}\right)\right) X, \hspace{1cm}\text{ for } j=1,...,N # \end{equation} # # where $t_{j-1/2} = (j-\frac{1}{2}) \tau$. # # Given some initial parameters, such as the number of shares, the liquidation time, the trader's risk aversion, etc..., the trading list will tell us how many shares we should sell at each trade to minimize our transaction costs. # # In this notebook, we will see how the trading list varies according to some initial trading parameters. # # ## Visualizing Trading Lists and Trading Trajectories # # Let's assume we have 1,000,000 shares that we wish to liquidate. In the code below, we will plot the optimal trading trajectory and its associated trading list for different trading parameters, such as trader's risk aversion, number of trades, and liquidation time. # + # %matplotlib inline import matplotlib.pyplot as plt import utils # We set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Set the number of days to sell all shares (i.e. the liquidation time) l_time = 60 # Set the number of trades n_trades = 60 # Set the trader's risk aversion t_risk = 1e-6 # Plot the trading list and trading trajectory. If show_trl = True, the data frame containing the values of the # trading list and trading trajectory is printed utils.plot_trade_list(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, show_trl = True) # - # # Implementing a Trading List # # Once we have the trading list for a given set of initial parameters, we can actually implement it. That is, we can sell our shares in the stock market according to the trading list and see how much money we made or lost. To do this, we are going to simulate the stock market with a simple trading environment. This simulated trading environment uses the same price dynamics and market impact functions as the Almgren and Chriss model. That is, stock price movements evolve according to a discrete arithmetic random walk and the permanent and temporary market impact functions are linear functions of the trading rate. 
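# Concretely, one step of such an environment fills our order at a price reduced by the temporary impact $h(n_k/\tau)$ and then moves the equilibrium price by the permanent impact term. The rough sketch below only illustrates those update equations with made-up impact constants; the actual environment used here lives in `utils` and is described in another notebook.
# +
import numpy as np

sigma, tau = 0.379, 1.0
gamma, eta, eps = 2.5e-7, 2.5e-6, 0.0625        # made-up impact constants, for illustration only
S, capture = 50.0, 0.0
trade_list = np.full(60, 1_000_000 / 60)        # e.g. a trade list n_j computed as above

for n_k in trade_list:
    S_exec = S - (eps * np.sign(n_k) + eta * n_k / tau)            # temporary impact: S~_k = S_{k-1} - h(n_k / tau)
    capture += n_k * S_exec
    S += sigma * np.sqrt(tau) * np.random.randn() - gamma * n_k    # permanent impact: S_k = S_{k-1} + sigma*sqrt(tau)*xi_k - gamma*n_k

shortfall = 1_000_000 * 50.0 - capture                             # implementation shortfall for this episode
# -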
We are going to use the same environment to train our Deep Reinforcement Learning algorithm later on. # # We will describe the details of the trading environment in another notebook, for now we will just take a look at its default parameters. We will distinguish between financial parameters, such the annual volatility in stock price, and the parameters needed to calculate the trade list using the Almgren and Criss model, such as the trader's risk aversion. # + import utils # Get the default financial and AC Model parameters financial_params, ac_params = utils.get_env_param() print(financial_params) print(ac_params) # + # %matplotlib inline import matplotlib.pyplot as plt import utils # We set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Set the random seed sd = 0 # Set the number of days to sell all shares (i.e. the liquidation time) l_time = 60 # Set the number of trades n_trades = 60 # Set the trader's risk aversion t_risk = 1e-6 # Implement the trading list for the given parameters utils.implement_trade_list(seed = sd, lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk) # - # # The Efficient Frontier of Optimal Portfolio Transactions # # ### Introduction # # [Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) showed that for each value of risk aversion there is a unique optimal execution strategy. The optimal strategy is obtained by minimizing the **Utility Function** $U(x)$: # # \begin{equation} # U(x) = E(x) + \lambda V(x) # \end{equation} # # where $E(x)$ is the **Expected Shortfall**, $V(x)$ is the **Variance of the Shortfall**, and $\lambda$ corresponds to the trader’s risk aversion. The expected shortfall and variance of the optimal trading strategy are given by: # # # # In this notebook, we will learn how to visualize and interpret these equations. # # # The Expected Shortfall # # As we saw in the previous notebook, even if we use the same trading list, we are not guaranteed to always get the same implementation shortfall due to the random fluctuations in the stock price. This is why we had to reframe the problem of finding the optimal strategy in terms of the average implementation shortfall and the variance of the implementation shortfall. We call the average implementation shortfall, the expected shortfall $E(x)$, and the variance of the implementation shortfall $V(x)$. So, whenever we talk about the expected shortfall we are really talking about the average implementation shortfall. Therefore, we can think of the expected shortfall as follows. Given a single trading list, the expected shortfall will be the value of the average implementation shortfall if we were to implement this trade list in the stock market many times. # # To see this, in the code below we implement the same trade list on 50,000 trading simulations. We call each trading simulation an episode. Each episode will consist of different random fluctuations in stock price. For each episode we will compute the corresponding implemented shortfall. After all the 50,000 trading simulations have been carried out we calculate the average implementation shortfall and the variance of the implemented shortfalls. We can then compare these values with the values given by the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model. 
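# In plain NumPy terms, the two quantities being estimated here are just the sample mean and sample variance of the per-episode shortfalls. The next cell does this with the course's `utils.get_av_std`; the self-contained sketch below (reusing the illustrative impact constants from earlier, not the environment's actual defaults) shows the same idea end to end.
# +
import numpy as np

def simulate_episode(n_list, S0=50.0, sigma=0.379, tau=1.0, gamma=2.5e-7, eta=2.5e-6, eps=0.0625):
    """One made-up episode: sell n_list[k] shares each period against a random walk with linear impact."""
    S, capture = S0, 0.0
    for n in n_list:
        capture += n * (S - (eps * np.sign(n) + eta * n / tau))      # fill at the temporarily impacted price
        S += sigma * np.sqrt(tau) * np.random.randn() - gamma * n    # price moves by noise plus permanent impact
    return np.sum(n_list) * S0 - capture                             # implementation shortfall for this episode

trade_list = np.full(60, 1_000_000 / 60)
shortfalls = np.array([simulate_episode(trade_list) for _ in range(50_000)])
shortfalls.mean(), shortfalls.var()    # Monte Carlo estimates of E(x) and V(x)
# -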
# + # %matplotlib inline import matplotlib.pyplot as plt import utils # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Set the liquidation time l_time = 60 # Set the number of trades n_trades = 60 # Set trader's risk aversion t_risk = 1e-6 # Set the number of episodes to run the simulation episodes = 10 utils.get_av_std(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, trs = episodes) # Get the AC Optimal strategy for the given parameters ac_strategy = utils.get_optimal_vals(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk) ac_strategy # - # # Extreme Trading Strategies # # Because some investors may be willing to take more risk than others, when looking for the optimal strategy we have to consider a wide range of risk values, ranging from those traders that want to take zero risk to those who want to take as much risk as possible. Let's take a look at these two extreme cases. We will define the **Minimum Variance** strategy as that one followed by a trader that wants to take zero risk and the **Minimum Impact** strategy at that one followed by a trader that wants to take as much risk as possible. Let's take a look at the values of $E(x)$ and $V(x)$ for these extreme trading strategies. The `utils.get_min_param()` uses the above equations for $E(x)$ and $V(x)$, along with the parameters from the trading environment to calculate the expected shortfall and standard deviation (the square root of the variance) for these strategies. We'll start by looking at the Minimum Impact strategy. # + import utils # Get the minimum impact and minimum variance strategies minimum_impact, minimum_variance = utils.get_min_param() # - # ### Minimum Impact Strategy # # This trading strategy will be taken by trader that has no regard for risk. In the Almgren and Chriss model this will correspond to having the trader's risk aversion set to $\lambda = 0$. In this case the trader will sell the shares at a constant rate over a long period of time. By doing so, he will minimize market impact, but will be at risk of losing a lot of money due to the large variance. Hence, this strategy will yield the lowest possible expected shortfall and the highest possible variance, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of \$197,000 dollars but has a very big standard deviation of over 3 million dollars. minimum_impact # ### Minimum Variance Strategy # # This trading strategy will be taken by trader that wants to take zero risk, regardless of transaction costs. In the Almgren and Chriss model this will correspond to having a variance of $V(x) = 0$. In this case, the trader would prefer to sell the all his shares immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. This strategy will yield the smallest possible variance, $V(x) = 0$, and the highest possible expected shortfall, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of over 2.5 million dollars but has a standard deviation equal of zero. minimum_variance # # The Efficient Frontier # # The goal of Almgren and Chriss was to find the optimal strategies that lie between these two extremes. In their paper, they showed how to compute the trade list that minimizes the expected shortfall for a wide range of risk values. In their model, Almgren and Chriss used the parameter $\lambda$ to measure a trader's risk-aversion. 
The value of $\lambda$ tells us how much a trader is willing to penalize the variance of the shortfall, $V(X)$, relative to expected shortfall, $E(X)$. They showed that for each value of $\lambda$ there is a uniquely determined optimal execution strategy. We define the **Efficient Frontier** to be the set of all these optimal trading strategies. That is, the efficient frontier is the set that contains the optimal trading strategy for each value of $\lambda$. # # The efficient frontier is often visualized by plotting $(x,y)$ pairs for a wide range of $\lambda$ values, where the $x$-coordinate is given by the equation of the expected shortfall, $E(X)$, and the $y$-coordinate is given by the equation of the variance of the shortfall, $V(X)$. Therefore, for a given a set of parameters, the curve defined by the efficient frontier represents the set of optimal trading strategies that give the lowest expected shortfall for a defined level of risk. # # In the code below, we plot the efficient frontier for $\lambda$ values in the range $(10^{-7}, 10^{-4})$, using the default parameters in our trading environment. Each point of the frontier represents a distinct strategy for optimally liquidating the same number of stocks. A risk-averse trader, who wishes to sell quickly to reduce exposure to stock price volatility, despite the trading costs incurred in doing so, will likely choose a value of $\lambda = 10^{-4}$. On the other hand, a trader # who likes risk, who wishes to postpones selling, will likely choose a value of $\lambda = 10^{-7}$. In the code, you can choose a particular value of $\lambda$ to see the expected shortfall and level of variance corresponding to that particular value of trader's risk aversion. # + # %matplotlib inline import matplotlib.pyplot as plt import utils # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Plot the efficient frontier for the default values. The plot points out the expected shortfall and variance of the # optimal strategy for the given the trader's risk aversion. Valid range for the trader's risk aversion (1e-7, 1e-4). utils.plot_efficient_frontier(tr_risk = 1e-6) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # IPython Notebookの練習 # ## はじめに # IPython Notebookは、ブラウザ上でインタラクティブにPythonのプログラムを実行できる、ノートのようなものです。 # 10分で最低限を学びましょう。 # ## セルについて # + active="" # ノートブックは「セル」から構成されています。この文章が書かれている領域も「セル」のひとつです。 # セルにはいくつかの種類があり、この文の領域は「Raw NBConvert」という種類であることが画面上部のツールバーから分かります。 # 他には「Markdown」「Code」といった種類もあり、Markdown記法のテキストやPythonのコードを挿入することができるようになっています。 # - # この領域も含めて、見出しや文章では基本的にMarkdown記法のセルを用いています。 # [リンク(Markdown-Wikipedia)](https://ja.wikipedia.org/wiki/Markdown) # を挿入したり、*強調*したり、線をひいたり↓といったことが可能です。 # # --- # 「Code」のセルでは、Pythonをその場で実行することができます。 # 例えば、下のセルにカーソルを合わせた状態でShift-Enter(Shiftキーを押しながらエンターキーを押す)すると、3という数字が表示されると思います。 print 1+2 # それぞれのセルは自由に編集することができます。また、画面上部のInsertから自由に挿入することもできます。 # 試しに以下のコードを編集したり、新たなセルをつくってみたりしてみましょう。 for i in range(10): print "Hello, World! No. 
%d" % i # ## Pythonプログラムの実行 # 前節では1+2をprintすることができました。もう少し掘り下げてみましょう。 # # ノートブック上では、カーネルを再起動(ツールバーの更新マークから可能)するまでは変数等が保存されます。 # 例えば、以下のようにimport, 変数定義をしたコード上でShift-Enterしてみましょう。 import math a= 1 # この時点でmathがimportされ、aに1が代入されている # 以下を実行(Shift-Enter)すると、importと変数定義がしっかりなされていることがわかりますね。 print math.pi print a # ## 画像の挿入 # 以下のようにして画像を挿入することもできます。 from IPython.display import display, Image display(Image(filename="fig1.jpg")) display(Image(filename="fig2.jpg")) # ## グラフの挿入(Matplotlib) # グラフを挿入するにはmatplotlibが必要です。以下のプログラムでは数値計算ライブラリのnumpyも使用しています。 # # インストール方法は[matplotlib公式サイト](http://matplotlib.org/users/installing.html), [numpy公式サイト](http://www.scipy.org/scipylib/download.html)参照。 # + # %matplotlib inline import matplotlib.pyplot as plt import numpy as np random_data = np.random.randn(100, 100) plt.imshow(random_data, interpolation='none') plt.show() # - # ## その他参考になりそうなこと # * [公式サイト](http://ipython.org/index.html"") # * プログラムを中断したいときは通常ならCtrl-Cを実行しますが、IPython Notebook上ではツールバーの停止マーク■を使います。 # * Pythonのコードの左横に「In [(数字)]:」というように実行順が表示されています。この数字部分が*になっている間はプログラムの実行中です。 # * ショートカットは[こちら](https://ipython.org/ipython-doc/1/interactive/notebook.html#keyboard-shortcuts"Keyboard-shortcuts")を参照 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dimensionality Reduction # *from Python Machine Learning by under the MIT License (MIT)* # # This code might be directly from the book, mine, or a mix. # ## Principal component analysis (PCA) # - Standarize the d-dimensional dataset # - Construct the covariance matrix # - Decompose the covariance matrix into its eigenvectors and eigenvalues # - Select k eigenvectors that correspond to the k largerst eigenvalues where k is the dimensionality of the new feature subspace. # - Construct a projection matrix W from the "top" k eigenvectors. # - Transform the d-dimensional input dataset X using the projection matrix W to obtain the new k-dimensional feature subspace. 
# + import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import numpy as np df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None) df_wine.head() X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) sc = StandardScaler() X_train_std = sc.fit_transform(X_train) X_test_std = sc.transform(X_test) cov_mat = np.cov(X_train_std.T) eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) # + import matplotlib.pyplot as plt tot = sum(eigen_vals) var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)] cum_var_exp = np.cumsum(var_exp) plt.bar(range(1, 14), var_exp, alpha=0.5, align='center', label='individual explained variance') plt.ylabel('Explained variance ratio') plt.xlabel('Principal components') plt.legend(loc='best') plt.show() # + # Make a list of (eigenvalue, eigenvector) tuples and sort the tuples from high to low eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))] eigen_pairs.sort(key=lambda k: k[0], reverse=True) w = np.hstack((eigen_pairs[0][1][:, np.newaxis], eigen_pairs[1][1][:, np.newaxis])) print('Matrix W:\n', w) # + X_train_pca = X_train_std.dot(w) colors = ['r', 'b', 'g'] markers = ['s', 'x', 'o'] for l, c, m in zip(np.unique(y_train), colors, markers): plt.scatter(X_train_pca[y_train == l, 0], X_train_pca[y_train == l, 1], c=c, label=l, marker=m) plt.xlabel('PC 1') plt.ylabel('PC 2') plt.legend(loc='lower left') plt.show() # - # ### PCA in scikit-learn # + from sklearn.decomposition import PCA pca = PCA() X_train_pca = pca.fit_transform(X_train_std) pca = PCA(n_components=2) X_train_pca = pca.fit_transform(X_train_std) X_test_pca = pca.transform(X_test_std) # + from matplotlib.colors import ListedColormap def plot_decision_regions(X, y, classifier, resolution=0.02): # setup marker generator and color map markers = ('s', 'x', 'o', '^', 'v') colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan') cmap = ListedColormap(colors[:len(np.unique(y))]) # plot the decision surface x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1 x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution)) Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T) Z = Z.reshape(xx1.shape) plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap) plt.xlim(xx1.min(), xx1.max()) plt.ylim(xx2.min(), xx2.max()) # plot class samples for idx, cl in enumerate(np.unique(y)): plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl) # + from sklearn.linear_model import LogisticRegression lr = LogisticRegression() lr = lr.fit(X_train_pca, y_train) plot_decision_regions(X_train_pca, y_train, classifier=lr) plt.xlabel('PC 1') plt.ylabel('PC 2') plt.legend(loc='lower left') plt.show() # - plot_decision_regions(X_test_pca, y_test, classifier=lr) plt.xlabel('PC 1') plt.ylabel('PC 2') plt.legend(loc='lower left') plt.show() # #### Explained variance ratios of the different princial components pca = PCA(n_components=None) X_train_pca = pca.fit_transform(X_train_std) pca.explained_variance_ratio_ # ## Linear discriminant analysis (LDA) # - Standarize the d-dimensional dataset # - For each class, compute the d-dimensional mean vector # - Construct the between-class scatter matrix Sb and the within-class scatter matrix Sw. 
# - Compute the eighenvectors and corresponding eigenvalues of the matrix Sw-1Sb # - Choose the k eigenvectors that correspond to the k largest eigenvalues to construct a dxk-dimensional transformation matrix W; the eigenvectors are the columns of this matrix. # - Project the samples onte the new feature subspace using the transformation matrix W. # + # calculate the mean vectors for each class np.set_printoptions(precision=4) mean_vecs = [] for label in range(1, 4): mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0)) # compute the within-class scatter matrix d = 13 # number of features S_W = np.zeros((d, d)) for label, mv in zip(range(1, 4), mean_vecs): class_scatter = np.zeros((d, d)) # scatter matrix for each class for row in X_train_std[y_train == label]: row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors class_scatter += (row - mv).dot((row - mv).T) S_W += class_scatter # sum class scatter matrices S_W = np.zeros((d, d)) for label, mv in zip(range(1, 4), mean_vecs): class_scatter = np.cov(X_train_std[y_train == label].T) S_W += class_scatter mean_overall = np.mean(X_train_std, axis=0) d = 13 # number of features S_B = np.zeros((d, d)) for i, mean_vec in enumerate(mean_vecs): n = X_train[y_train == i + 1, :].shape[0] mean_vec = mean_vec.reshape(d, 1) # make column vector mean_overall = mean_overall.reshape(d, 1) # make column vector S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T) print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1])) # + eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) # Make a list of (eigenvalue, eigenvector) tuples eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))] # Sort the (eigenvalue, eigenvector) tuples from high to low eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True) # Visually confirm that the list is correctly sorted by decreasing eigenvalues print('Eigenvalues in decreasing order:\n') for eigen_val in eigen_pairs: print(eigen_val[0]) # + tot = sum(eigen_vals.real) discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)] plt.bar(range(1, 14), discr, alpha=0.5, align='center', label='individual "discriminability"') plt.ylabel('"discriminability" ratio') plt.xlabel('Linear Discriminants') plt.ylim([-0.1, 1.1]) plt.legend(loc='best') plt.show() # + w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real, eigen_pairs[1][1][:, np.newaxis].real)) X_train_lda = X_train_std.dot(w) colors = ['r', 'b', 'g'] markers = ['s', 'x', 'o'] for l, c, m in zip(np.unique(y_train), colors, markers): plt.scatter(X_train_lda[y_train == l, 0] * (-1), X_train_lda[y_train == l, 1] * (-1), c=c, label=l, marker=m) plt.xlabel('LD 1') plt.ylabel('LD 2') plt.legend(loc='lower right') plt.show() # - # ### LDA in scikit-learn # + from sklearn.linear_model import LogisticRegression from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA lda = LDA(n_components=2) X_train_lda = lda.fit_transform(X_train_std, y_train) lr = LogisticRegression() lr = lr.fit(X_train_lda, y_train) plot_decision_regions(X_train_lda, y_train, classifier=lr) plt.xlabel('LD 1') plt.ylabel('LD 2') plt.legend(loc='lower left') plt.show() # + X_test_lda = lda.transform(X_test_std) plot_decision_regions(X_test_lda, y_test, classifier=lr) plt.xlabel('LD 1') plt.ylabel('LD 2') plt.legend(loc='lower left') plt.show() # - # ## Kernel principal component analysis # # The kernel function can be understood as a function that calculates a dot product 
between two vectors - a measure of similarity. # #### RBF kernel PCA # + from scipy.spatial.distance import pdist, squareform from scipy import exp from scipy.linalg import eigh import numpy as np def rbf_kernel_pca(X, gamma, n_components): """ RBF kernel PCA implementation. Parameters ------------ X: {NumPy ndarray}, shape = [n_samples, n_features] gamma: float Tuning parameter of the RBF kernel n_components: int Number of principal components to return Returns ------------ X_pc: {NumPy ndarray}, shape = [n_samples, k_features] Projected dataset """ # Calculate pairwise squared Euclidean distances # in the MxN dimensional dataset. sq_dists = pdist(X, 'sqeuclidean') # Convert pairwise distances into a square matrix. mat_sq_dists = squareform(sq_dists) # Compute the symmetric kernel matrix. K = exp(-gamma * mat_sq_dists) # Center the kernel matrix. N = K.shape[0] one_n = np.ones((N, N)) / N K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n) # Obtaining eigenpairs from the centered kernel matrix # numpy.eigh returns them in sorted order eigvals, eigvecs = eigh(K) # Collect the top k eigenvectors (projected samples) X_pc = np.column_stack((eigvecs[:, -i] for i in range(1, n_components + 1))) return X_pc # - # #### Example # + from sklearn.datasets import make_circles X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5) plt.show() # + X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2) fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3)) ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1], color='red', marker='^', alpha=0.5) ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1], color='blue', marker='o', alpha=0.5) ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02, color='red', marker='^', alpha=0.5) ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02, color='blue', marker='o', alpha=0.5) ax[0].set_xlabel('PC1') ax[0].set_ylabel('PC2') ax[1].set_ylim([-1, 1]) ax[1].set_yticks([]) ax[1].set_xlabel('PC1') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from scipy.stats import bernoulli import numpy as np import matplotlib.pyplot as plt # %matplotlib inline plt.style.use('ggplot') # Let's say you invested $100 in a stock with a mean monthly return of 1%. But there is dispersion around the mean: the actual returns of the stock each month are 1% + 2% = 3% or 1% - 2% = -1%, with equal probability. By simulating many possible ways this scenario could play out over time, let's look at the distribution of ending values of the portfolio over several time horizons. # We'll model these returns using a _Bernoulli_ random variable, which we can simulate in code using `scipy.stats.bernoulli`. A Bernoulli random variable takes the values 1 or 0 with a probability set by a parameter `p`. def generate_returns(num_returns): p = 0.5 return 0.01 + (bernoulli.rvs(p, size=num_returns)-0.5)*0.04 print(generate_returns(6)) # First, let's look at the distribution of ending values of the stock over 6 months. 
final_values = [100*np.prod(generate_returns(6)+1) for i in range(1,1000)] plt.hist(final_values, bins=20) plt.ylabel('Frequency') plt.xlabel('Value after 6 months') plt.show() # After 6 months, the distribution of possible values looks symmetric and bell-shaped. This is because there are more paths that lead to middle-valued ending prices. Now, let's look at the ending values of the stock over 20 months. final_values = [100*np.prod(generate_returns(20)+1) for i in range(1,1000)] plt.hist(final_values, bins=20) plt.ylabel('Frequency') plt.xlabel('Value after 20 months') plt.show() # Finally, let's look at the ending values of the stock over 100 months. final_values = [100*np.prod(generate_returns(100)+1) for i in range(1,1000)] plt.hist(final_values, bins=20) plt.ylabel('Frequency') plt.xlabel('Value after 100 months') plt.show() # As you can see, the distribution gets less and less normal-looking over time. The upside potential is unlimited—there always exists the possibility that the stock will continue to appreciate over time. The downside potential, however, is limited to zero—you cannot loose more than 100% of your investment. The distribution we see emerging here is distinctly asymmetric—the values are always positive, but there is a long tail on the right-hand side: we say, it is _positively skewed_. The distribution is approaching what's called a _lognormal distribution_. Let's talk more about how this distribution emerges in the next video. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercise 08 # # ## Analyze how travelers expressed their feelings on Twitter # # A sentiment analysis job about the problems of each major U.S. airline. # Twitter data was scraped from February of 2015 and contributors were # asked to first classify positive, negative, and neutral tweets, followed # by categorizing negative reasons (such as "late flight" or "rude service"). 
# + import pandas as pd import numpy as np # %matplotlib inline import matplotlib.pyplot as plt # read the data and set the datetime as the index import zipfile with zipfile.ZipFile('../datasets/Tweets.zip', 'r') as z: f = z.open('Tweets.csv') tweets = pd.read_csv(f, index_col=0) tweets.head() # - tweets.shape # ### Proportion of tweets with each sentiment tweets['airline_sentiment'].value_counts() # ### Proportion of tweets per airline # tweets['airline'].value_counts() pd.Series(tweets["airline"]).value_counts().plot(kind = "bar",figsize=(8,6),rot = 0) pd.crosstab(index = tweets["airline"],columns = tweets["airline_sentiment"]).plot(kind='bar',figsize=(10, 6),alpha=0.5,rot=0,stacked=True,title="Sentiment by airline") # # Exercise 8.1 # # Predict the sentiment using CountVectorizer, stopwords, n_grams, stemmer, TfidfVectorizer # # use Random Forest classifier from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer from sklearn.ensemble import RandomForestClassifier from nltk.stem.snowball import SnowballStemmer from nltk.stem import WordNetLemmatizer X = tweets['text'] y = tweets['airline_sentiment'].map({'negative':-1,'neutral':0,'positive':1}) # # Exercise 8.2 # # Train a Deep Neural Network with the following architecture: # # - Input = text # - Dense(128) # - Relu Activation # - BatchNormalization # - Dropout(0.5) # - Dense(10, Softmax) # # Optimized using rmsprop using as loss categorical_crossentropy # # Hints: # - test with two iterations then try more. # - learning can be ajusted # # Evaluate the performance using the testing set (aprox 55% with 50 epochs) from keras.models import Sequential from keras.utils import np_utils from keras.layers import Dense, Dropout, Activation, BatchNormalization from keras.optimizers import RMSprop from keras.callbacks import History from livelossplot import PlotLossesKeras # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={} colab_type="code" id="WU--cjOphQKk" # Copyright 2020 DeepMind Technologies Limited. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="MGjUZ2Q55OWM" # #Ensemble and Subspace Sampling Experiments on CIFAR10 # # # # # # This notebook illustrates the CIFAR-10 experiments in the paper: # # [Deep Ensembles: A Loss Landscape Perspective](https://arxiv.org/abs/1912.02757) by , and # # # These experiments investigate the effects of ensembling and variational Bayesian methods, please see the paper for more details. 
# + [markdown] colab_type="text" id="U5sMeD_4pTe2" # # Setting up # + cellView="code" colab={} colab_type="code" id="bUXPnWQIEKJh" # %tensorflow_version 1.x import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.contrib import layers import tensorflow_datasets as tfds from scipy.special import softmax from matplotlib import patches as mpatch # Plot Style mpl.style.use('seaborn-colorblind') mpl.rcParams.update({ 'font.size': 14, 'lines.linewidth': 2, 'figure.figsize': (6, 6 / 1.61) }) mpl.rcParams['grid.color'] = 'k' mpl.rcParams['grid.linestyle'] = ':' mpl.rcParams['grid.linewidth'] = 0.5 mpl.rcParams['lines.markersize'] = 6 mpl.rcParams['lines.marker'] = None mpl.rcParams['axes.grid'] = True DEFAULT_FONTSIZE = 13 mpl.rcParams.update({ 'font.size': DEFAULT_FONTSIZE, 'lines.linewidth': 2, 'legend.fontsize': DEFAULT_FONTSIZE, 'axes.labelsize': DEFAULT_FONTSIZE, 'xtick.labelsize': DEFAULT_FONTSIZE, 'ytick.labelsize': DEFAULT_FONTSIZE, 'figure.figsize': (7, 7.0 / 1.4) }) # + [markdown] colab_type="text" id="8_a0OnCQrkFu" # #Getting the datasets # + colab={} colab_type="code" id="AlG96jyJr8cW" def give_me_data(): print("Reading CIFAR10") cifar_data = {} N_val = 500 # Construct a tf.data.Dataset ds_train, ds_test = tfds.load( name="cifar10", split=["train", "test"], batch_size=-1) numpy_train = tfds.as_numpy(ds_train) x_train_raw, y_train_raw = numpy_train["image"], numpy_train["label"] numpy_test = tfds.as_numpy(ds_test) x_test_raw, y_test_raw = numpy_test["image"], numpy_test["label"] N_train = x_train_raw.shape[0] - N_val X_train = x_train_raw[:N_train] y_train = y_train_raw[:N_train] X_val = x_train_raw[N_train:N_train + N_val] y_val = y_train_raw[N_train:N_train + N_val] X_test = x_test_raw y_test = y_test_raw Hn = 32 Wn = 32 Cn = 3 cifar_data['Hn'] = Hn cifar_data['Wn'] = Wn cifar_data['Cn'] = Cn cifar_data['classes'] = 10 cifar_data['X_train'] = X_train.reshape([-1, Hn, Wn, Cn]) cifar_data['X_val'] = X_val.reshape([-1, Hn, Wn, Cn]) cifar_data['X_test'] = X_test.reshape([-1, Hn, Wn, Cn]) cifar_data['y_train'] = y_train.reshape([-1]) cifar_data['y_val'] = y_val.reshape([-1]) cifar_data['y_test'] = y_test.reshape([-1]) return cifar_data # + colab={"height": 35} colab_type="code" executionInfo={"elapsed": 16490, "status": "ok", "timestamp": 1596259193156, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="P9ius35wN76P" outputId="2f9f551f-b17c-4e0d-8442-b4749f18f33c" cifar_ds = give_me_data() X_train = cifar_ds['X_train'] y_train = cifar_ds['y_train'] X_val = cifar_ds['X_val'] y_val = cifar_ds['y_val'] X_test = cifar_ds['X_test'] y_test = cifar_ds['y_test'] Hn = cifar_ds['Hn'] Wn = cifar_ds['Wn'] Cn = cifar_ds['Cn'] classes = cifar_ds['classes'] N_train = X_train.shape[0] N_val = X_val.shape[0] N_test = X_test.shape[0] # + [markdown] colab_type="text" id="DsEjVGc_pqAL" # # Defining a CNN # + colab={} colab_type="code" id="fWU-qKjRvvKV" # Builds a CNN network and returns various graph nodes for training and # evaluation. # In the 'eval' part of the graph, placeholders for network weights (Ws & bs) # are created to facilitate inferencing at given weights (e.g. from subspace # sampling). 
def multilayer_CNN(X_ph_in, y_ph_in, dropout_rate_ph_in=None, filter_sizes=(3, 3, 3, 3), pools=(2, 2, 2, 2), channels=(32, 64, 128, 256), classes=10): net_hooks = { 'weights': {}, 'placeholder': {}, 'train_hook': {}, 'train_output': {}, 'eval_output': {} } f_nonlin = tf.nn.relu with tf.variable_scope('train'): a = X_ph_in Ws = [] bs = [] for i in range(len(filter_sizes)): _, _, _, Cnow = a.get_shape().as_list() W = tf.get_variable( 'Wconv' + str(i), shape=[filter_sizes[i], filter_sizes[i], Cnow, channels[i]], initializer=layers.xavier_initializer(), trainable=True) b = tf.get_variable( 'bconv' + str(i), shape=[1, 1, channels[i]], initializer=layers.xavier_initializer(), trainable=True) Ws.append(W) bs.append(b) h = tf.nn.conv2d(a, W, strides=[1, 1, 1, 1], padding='SAME') + b h = tf.nn.dropout(h, rate=dropout_rate_ph_in) h = tf.nn.max_pool( h, ksize=[1, pools[i], pools[i], 1], strides=[1, pools[i], pools[i], 1], padding='SAME') if i < len(filter_sizes) - 1: a = f_nonlin(h) else: a = h _, H_out, W_out, C_out = a.get_shape().as_list() _, height_final, width_final, channels_final = a.get_shape().as_list() # Final fully connected layer. W_final = tf.get_variable( 'Wfinal', shape=[height_final * width_final * channels_final, classes], initializer=layers.xavier_initializer(), trainable=True) b_final = tf.get_variable( 'bfinal', shape=[1, classes], initializer=layers.xavier_initializer(), trainable=True) Ws.append(W_final) bs.append(b_final) net_hooks['weights']['Ws'] = Ws net_hooks['weights']['bs'] = bs a = tf.matmul( tf.reshape(a, [-1, height_final * width_final * channels_final]), W_final) + b_final y = a net_hooks['train_output']['y'] = y y_ph_onehot = tf.one_hot(y_ph_in, classes, dtype=tf.int32) # Weights and biases for regularization. net_hooks['train_hook']['L2_reg_Ws'] = tf.reduce_sum( [tf.reduce_sum(W_now**2.0) for W_now in Ws]) net_hooks['train_hook']['L2_reg_bs'] = tf.reduce_sum( [tf.reduce_sum(b_now**2.0) for b_now in bs]) loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_ph_onehot, logits=y) net_hooks['train_hook']['loss'] = tf.reduce_mean(loss) # Defining all weights and biases as placeholders so that inference can be # performed at given weight values. 
with tf.variable_scope('eval'): Ws_ph = [] for W in Ws: W_ph_now = tf.placeholder(tf.float32, W.get_shape()) Ws_ph.append(W_ph_now) bs_ph = [] for b in bs: b_ph_now = tf.placeholder(tf.float32, b.get_shape()) bs_ph.append(b_ph_now) a = X_ph_in for i in range(len(filter_sizes)): _, _, _, Cnow = a.get_shape().as_list() h = tf.nn.conv2d( a, Ws_ph[i], strides=[1, 1, 1, 1], padding='SAME') + bs_ph[i] h = tf.nn.dropout(h, rate=dropout_rate_ph_in) h = tf.nn.max_pool( h, ksize=[1, pools[i], pools[i], 1], strides=[1, pools[i], pools[i], 1], padding='SAME') if i < len(filter_sizes) - 1: a = f_nonlin(h) else: a = h _, H_out, W_out, C_out = a.get_shape().as_list() _, height_final, width_final, channels_final = a.get_shape().as_list() last_layer = tf.reshape(a, [-1, height_final * width_final * channels_final]) net_hooks['train_hook']['last_layer'] = last_layer a = tf.matmul(last_layer, Ws_ph[-1]) + bs_ph[-1] y_eval = a net_hooks['eval_output']['pred_eval'] = tf.nn.softmax(y_eval, axis=-1) y_ph_onehot = tf.one_hot(y_ph_in, classes, dtype=tf.int32) loss_eval = tf.nn.softmax_cross_entropy_with_logits( labels=y_ph_onehot, logits=y_eval) net_hooks['eval_output']['loss_eval'] = tf.reduce_mean(loss_eval) net_hooks['eval_output']['y_eval'] = y_eval net_hooks['placeholder']['Ws_ph'] = Ws_ph net_hooks['placeholder']['bs_ph'] = bs_ph return net_hooks # + [markdown] colab_type="text" id="xebqAIwmq9ia" # # Build model graph # + colab={"height": 35} colab_type="code" executionInfo={"elapsed": 971, "status": "ok", "timestamp": 1596259216214, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="-WWIxPS1q5B6" outputId="7098c352-b897-434b-fc79-1972dbf29243" tf.reset_default_graph() X_ph = tf.placeholder(tf.float32, [None, Hn, Wn, Cn]) y_ph = tf.placeholder(tf.int32, [None]) lr_ph = tf.placeholder(tf.float32) dropout_rate_ph = tf.placeholder(tf.float32, []) L2_reg_constant_ph = tf.placeholder(tf.float32, []) architecture = 'CNN' if architecture == 'CNN': with tf.variable_scope('to_get_shape'): # Medium CNN filter_sizes = (3, 3, 3, 3) pools = (2, 2, 2, 2) channels = (64, 128, 256, 256) dummy_cnn_hooks = multilayer_CNN( X_ph, y_ph, dropout_rate_ph_in=dropout_rate_ph, filter_sizes=filter_sizes, pools=pools, channels=channels, classes=classes) # Number of parameters. 
params = dummy_cnn_hooks['weights']['Ws'] + dummy_cnn_hooks['weights']['bs'] flat_params = tf.concat([tf.reshape(v, [-1]) for v in params], axis=0) number_of_params = flat_params.get_shape().as_list()[0] print('Number of free parameters=' + str(number_of_params)) cnn_hooks = multilayer_CNN( X_ph, y_ph, dropout_rate_ph_in=dropout_rate_ph, filter_sizes=filter_sizes, pools=pools, channels=channels, classes=classes) y = cnn_hooks['train_output']['y'] loss = cnn_hooks['train_hook']['loss'] y_eval = cnn_hooks['eval_output']['y_eval'] pred_eval = cnn_hooks['eval_output']['pred_eval'] loss_eval = cnn_hooks['eval_output']['loss_eval'] Ws_ph = cnn_hooks['placeholder']['Ws_ph'] bs_ph = cnn_hooks['placeholder']['bs_ph'] Ws = cnn_hooks['weights']['Ws'] bs = cnn_hooks['weights']['bs'] L2_reg_Ws = cnn_hooks['train_hook']['L2_reg_Ws'] L2_reg_bs = cnn_hooks['train_hook']['L2_reg_bs'] loss_to_be_optimized = loss + L2_reg_constant_ph * (L2_reg_Ws + L2_reg_bs) train_step = tf.train.AdamOptimizer(lr_ph).minimize(loss_to_be_optimized) correct_prediction = tf.equal(tf.cast(tf.argmax(y, 1), dtype=tf.int32), y_ph) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) correct_prediction_eval = tf.equal( tf.cast(tf.argmax(y_eval, 1), dtype=tf.int32), y_ph) accuracy_eval = tf.reduce_mean(tf.cast(correct_prediction_eval, tf.float32)) # + [markdown] colab_type="text" id="8PHJJ59SuMAo" # ## Helper Functions # + colab={} colab_type="code" id="eL52WPi6uPDD" # Helper functions to flatten weights to a vector, and reform a flattened vector # to weights. def flatten(Ws1, bs1): lists_now = [] for W_now in Ws1: lists_now.append(W_now.reshape([-1])) for b_now in bs1: lists_now.append(b_now.reshape([-1])) return np.concatenate(lists_now, axis=0) def get_flat_name(Ws_tf, bs_tf): names = [] for W in Ws_tf: names = names + [W.name] * np.prod(W.get_shape().as_list()) for b in bs_tf: names = names + [b.name] * np.prod(b.get_shape().as_list()) return names def reform(flat1): sofar = 0 Ws_out_now = [] bs_out_now = [] for W in Ws: shape_now = W.get_shape().as_list() size_now = np.prod(shape_now) elements = flat1[sofar:sofar + size_now] sofar = sofar + size_now Ws_out_now.append(np.array(elements).reshape(shape_now)) for b in bs: shape_now = b.get_shape().as_list() size_now = np.prod(shape_now) elements = flat1[sofar:sofar + size_now] sofar = sofar + size_now bs_out_now.append(np.array(elements).reshape(shape_now)) return Ws_out_now, bs_out_now # Brier score for evaluating uncertainty performance. def brier_scores(labels, probs=None, logits=None): """Compute elementwise Brier score. Args: labels: Tensor of integer labels shape [N1, N2, ...] probs: Tensor of categorical probabilities of shape [N1, N2, ..., M]. logits: If `probs` is None, class probabilities are computed as a softmax over these logits, otherwise, this argument is ignored. Returns: Tensor of shape [N1, N2, ...] consisting of Brier score contribution from each element. The full-dataset Brier score is an average of these values. """ assert (probs is None) != ( logits is None), "Exactly one of probs and logits should be None." 
if probs is None: probs = softmax(logits, axis=-1) nlabels = probs.shape[-1] flat_probs = probs.reshape([-1, nlabels]) flat_labels = labels.reshape([len(flat_probs)]) plabel = flat_probs[np.arange(len(flat_labels)), flat_labels] out = np.square(flat_probs).sum(axis=-1) - 2 * plabel return out.reshape(labels.shape) + 1 # + [markdown] colab_type="text" id="W2OrJv2tvdLv" # # The root training loop (Independent solutions) # This section could be time-consuming depending on the size of 'points_to_collect' . # + colab={} colab_type="code" id="T6T6KJVWvNVP" # Choosing a subset of the train data for faster eval. N_train_subset = N_val train_chosen_indices = np.random.choice( range(N_train), N_train_subset, replace=False) X_train_subset = X_train[train_chosen_indices] y_train_subset = y_train[train_chosen_indices] # + colab={} colab_type="code" id="Ls0ziO7ivQo4" # Number of independent solutions to collect. points_to_collect = 5 # Number of ensemble models to use. max_ens_size = 4 # Number of full training trajectory to collect for analysis such as T-SNE. num_trajectory_record = 3 # Collect checkpoints along trajectory for subspace sampling or SWA. collect_solution_after_epoch = 30 # Training hyperparams for each of the runs. epochs = 40 batch_size = 128 dropout = 0.1 L2_constant = 0.0 learning_rate = 1.6e-3 # Defining the LR schedule lr_halving_epoch = 10 lr_halving_maxcount = 1000 lr_halving_power = 2.0 # + colab={} colab_type="code" id="Fw6WhPz-slnf" print("train_step", train_step) Ws_many = [] bs_many = [] losses_many = [] accs_many = [] # Collecting last epochs from each trajectory. Ws_by_epochs_many = [[] for _ in range(points_to_collect)] bs_by_epochs_many = [[] for _ in range(points_to_collect)] # Collecting whole trajectory. Ws_trajectory = [[] for _ in range(num_trajectory_record)] bs_trajectory = [[] for _ in range(num_trajectory_record)] global_init_op = tf.global_variables_initializer() with tf.Session() as sess: for point_id in range(points_to_collect): print("Optimization " + str(point_id) + " with starting lr=" + str(learning_rate) + " dropout rate=" + str(dropout) + " batch=" + str(batch_size) + " L2const=" + str(L2_constant)) sess.run(global_init_op) for e in range(epochs): iterations = int(np.floor(float(N_train) / float(batch_size))) losses_train_list = [] accs_train_list = [] halvings_count_now = np.floor(float(e) / lr_halving_epoch) halvings_to_be_used = np.min([halvings_count_now, lr_halving_maxcount]) learning_rate_now = learning_rate / ( (lr_halving_power)**halvings_to_be_used) for it in range(iterations): indices = np.random.choice(range(N_train), batch_size) X_batch = X_train[indices] y_batch = y_train[indices] feed_dict = { X_ph: X_batch, y_ph: y_batch, lr_ph: learning_rate_now, dropout_rate_ph: dropout, L2_reg_constant_ph: L2_constant, } variables = [train_step, loss, accuracy] _, loss_out, acc_out = sess.run(variables, feed_dict=feed_dict) losses_train_list.append(loss_out) accs_train_list.append(acc_out) # Evaluating current epoch. 
feed_dict = { X_ph: X_val, y_ph: y_val, dropout_rate_ph: 0.0, L2_reg_constant_ph: 0.0, } loss_val_out, acc_val_out, Ws_opt_out_now, bs_opt_out_now = sess.run( [loss, accuracy, Ws, bs], feed_dict=feed_dict) feed_dict[X_ph] = X_train_subset feed_dict[y_ph] = y_train_subset loss_train_out, acc_train_out = sess.run([loss, accuracy], feed_dict=feed_dict) feed_dict[X_ph] = X_test feed_dict[y_ph] = y_test loss_test_out, acc_test_out = sess.run([loss, accuracy], feed_dict=feed_dict) print("e=" + str(e) + " train loss=" + f"{loss_train_out:.4f}" + " train acc=" + f"{acc_train_out:.4f}" + " val loss=" + f"{loss_val_out:.4f}" + " val acc=" + f"{acc_val_out:.4f}" + " test loss=" + f"{loss_test_out:.4f}" + " test acc=" + f"{acc_test_out:.4f}") if e >= collect_solution_after_epoch: Ws_by_epochs_many[point_id].append(Ws_opt_out_now) bs_by_epochs_many[point_id].append(bs_opt_out_now) if point_id < num_trajectory_record: Ws_trajectory[point_id].append(Ws_opt_out_now) bs_trajectory[point_id].append(bs_opt_out_now) # Saving model weights of the last checkpoint. Ws_opt_out_now, bs_opt_out_now = sess.run([Ws, bs]) Ws_many.append(Ws_opt_out_now) bs_many.append(bs_opt_out_now) losses_many.append(loss_val_out) accs_many.append(acc_val_out) # + [markdown] colab_type="text" id="KapGgJupd9SZ" # ###Collecting Validation Predictions # + colab={} colab_type="code" id="8SJntXp5U2Ir" trajectory_preds_test = np.zeros( (num_trajectory_record, epochs, N_val, classes)) independent_preds_test = np.zeros((points_to_collect, N_val, classes)) with tf.Session() as sess: for id_now in range(points_to_collect): for j, W in enumerate(Ws_many[id_now]): feed_dict[Ws_ph[j]] = W for j, b in enumerate(bs_many[id_now]): feed_dict[bs_ph[j]] = b feed_dict[dropout_rate_ph] = 0.0 feed_dict[X_ph] = X_val feed_dict[y_ph] = y_val pred_eval_out = sess.run(pred_eval, feed_dict=feed_dict) independent_preds_test[id_now] = pred_eval_out for id_now in range(num_trajectory_record): for e in range(epochs): for j, W in enumerate(Ws_trajectory[id_now][e]): feed_dict[Ws_ph[j]] = W for j, b in enumerate(bs_trajectory[id_now][e]): feed_dict[bs_ph[j]] = b pred_eval_out = sess.run(pred_eval, feed_dict=feed_dict) trajectory_preds_test[id_now][e] = pred_eval_out # + [markdown] colab_type="text" id="MSyiWg_W9b4F" # #Cosine and Fraction of Disagreement # + colab={} colab_type="code" id="wVnTmgAD9l9Y" def cos_between(v1, v2): """ Returns the angle in radians between vectors 'v1' and 'v2'""" v1_u = v1 / np.linalg.norm(v1) v2_u = v2 / np.linalg.norm(v2) return np.dot(v1_u, v2_u) # + [markdown] colab_type="text" id="FV_a7LZ6BROc" # ### Within Trajectory # + colab={} colab_type="code" id="HLjL_tcXGAZp" trajectory_id = 0 flat_p_list = [] num_epochs = len(Ws_trajectory[trajectory_id]) cos_matrix = np.zeros((num_epochs, num_epochs)) for e in range(num_epochs): flat_p_list.append( flatten(Ws_trajectory[trajectory_id][e], bs_trajectory[trajectory_id][e])) for i in range(num_epochs): for j in range(i, num_epochs): cos_matrix[i][j] = cos_between(flat_p_list[i], flat_p_list[j]) cos_matrix[j][i] = cos_matrix[i][j] plt.imshow(cos_matrix, interpolation="nearest", cmap="bwr", origin="lower") plt.colorbar() plt.grid("off") title = "Cosine Along Train Trajectory" plt.title(title) plt.xlabel("Checkpoint id") plt.ylabel("Checkpoint id") plt.show() # + colab={} colab_type="code" id="gprmfsa2n8a_" preds_now = trajectory_preds_test[trajectory_id] targets_now = y_val classes_predicted = np.argmax(preds_now, axis=-1) fractional_differences = np.mean( classes_predicted.reshape([1, epochs, 
len(targets_now)]) != classes_predicted.reshape([epochs, 1, len(targets_now)]), axis=-1) plt.imshow( fractional_differences, interpolation="nearest", cmap="bwr", origin="lower") plt.colorbar() plt.grid("off") title = "Disagreement Fraction Along Train Trajectory" plt.title(title) plt.xlabel("Checkpoint id") plt.ylabel("Checkpoint id") plt.show() # + [markdown] colab_type="text" id="-FEPT3_OBT-Y" # ###Between Independent Runs # + colab={} colab_type="code" id="Wta3MB2tH4jB" flat_p_list = [] cos_matrix = np.zeros((points_to_collect, points_to_collect)) for i in range(points_to_collect): flat_p_list.append(flatten(Ws_many[i], bs_many[i])) for i in range(points_to_collect): for j in range(i, points_to_collect): cos_matrix[i][j] = cos_between(flat_p_list[i], flat_p_list[j]) cos_matrix[j][i] = cos_matrix[i][j] plt.imshow(cos_matrix, cmap="bwr", origin="lower") plt.colorbar() plt.grid("off") title = "Cosine Between Independent Solutions" plt.title(title) plt.xlabel("Independent Solution") plt.ylabel("Independent Solution") plt.show() # + colab={} colab_type="code" id="Cglsoi44tCnL" preds_now = independent_preds_test targets_now = y_val classes_predicted = np.argmax(preds_now, axis=-1) fractional_differences = np.mean( classes_predicted.reshape( [1, points_to_collect, len(targets_now)]) != classes_predicted.reshape( [points_to_collect, 1, len(targets_now)]), axis=-1) plt.imshow( fractional_differences, interpolation="nearest", cmap="bwr", origin="lower") plt.colorbar() plt.grid("off") title = "Disagreement Fraction Btw Independent Solutions" plt.title(title) plt.xlabel("Independent Solution") plt.ylabel("Independent Solution") plt.show() # + [markdown] colab_type="text" id="8hgROgxil29I" # #T-SNE Visualization # + colab={} colab_type="code" id="W6aVhQ5Yl7A6" from sklearn.manifold import TSNE reshaped_prediction = trajectory_preds_test.reshape([-1, N_val * classes]) prediction_embed = TSNE(n_components=2).fit_transform(reshaped_prediction) traj_embed = prediction_embed.reshape([num_trajectory_record, epochs, 2]) colors_list = ["r", "b", "g"] labels_list = ["traj_{}".format(i) for i in range(num_trajectory_record)] for i in range(num_trajectory_record): plt.plot( traj_embed[i, :, 0], traj_embed[i, :, 1], color=colors_list[i], alpha=0.8, linestyle="", marker="o", label=labels_list[i]) plt.plot( traj_embed[i, :, 0], traj_embed[i, :, 1], color=colors_list[i], alpha=0.3, linestyle="-", marker="") plt.plot( traj_embed[i, 0, 0], traj_embed[i, 0, 1], color=colors_list[i], alpha=1.0, linestyle="", marker="*", markersize=20) plt.legend(loc=1) # + [markdown] colab_type="text" id="ewL6qSnHlNEy" # #Effects of Ensemble + Subspace Sampling # + [markdown] colab_type="text" id="FN3eJMBc55hG" # ## Gaussian Sampling # Sample from model weight space according to a Gaussian distribution formed by last epoch checkpoints along a training trajectory. 
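# + [markdown]
# As a quick illustration of the idea (a toy sketch with made-up shapes, not the actual model weights): stack the flattened late-epoch checkpoints as rows, take the per-coordinate mean and standard deviation, and draw new flat weight vectors from that diagonal Gaussian. The cell below implements the same idea for the real checkpoints, plus a PCA low-rank variant.

# +
# numpy is already imported at the top of this notebook as np.
toy_ckpts = np.random.randn(10, 4)                               # 10 "checkpoints" of a 4-parameter toy model
toy_mean, toy_std = toy_ckpts.mean(axis=0), toy_ckpts.std(axis=0)
toy_samples = np.random.normal(toy_mean, toy_std, size=(30, 4))  # 30 diagonal-Gaussian subspace samples
print(toy_samples.shape)                                         # (30, 4)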
# + colab={} colab_type="code" id="d5TWmKrc53kK" from sklearn.decomposition import PCA def get_gaussian_sample(var_mean, var_std, scale=1.0): var_sample = np.random.normal(var_mean, scale * var_std) return var_sample def get_pca_gaussian_flat_sampling(pca, means, rank, scale=1.0): standard_normals = np.random.normal(loc=0.0, scale=scale, size=(rank)) shifts = pca.inverse_transform(standard_normals) return shifts + means def get_rand_norm_direction(shape): random_direction = np.random.normal(loc=0.0, scale=1.0, size=shape) random_direction_normed = random_direction / np.linalg.norm(random_direction) return random_direction_normed # + colab={} colab_type="code" id="mtGq9Qkj7iEG" # PCA rank. rank = 5 num_sample = 30 dial_gaussian_whole_Ws = [[] for _ in range(points_to_collect)] dial_gaussian_whole_bs = [[] for _ in range(points_to_collect)] pca_gaussian_whole_Ws = [[] for _ in range(points_to_collect)] pca_gaussian_whole_bs = [[] for _ in range(points_to_collect)] for mid in range(points_to_collect): Ws_traj = Ws_by_epochs_many[mid] bs_traj = bs_by_epochs_many[mid] vs_list = [] for i in range(len(Ws_traj)): vs_list.append(flatten(Ws_traj[i], bs_traj[i])) vs_np = np.stack(vs_list, axis=0) means = np.mean(vs_np, axis=0) stds = np.std(vs_np, axis=0) vs_np_centered = vs_np - means.reshape([1, -1]) pca = PCA(n_components=rank) pca.fit(vs_np_centered) for i in range(num_sample): v_sample = get_gaussian_sample(means, stds, scale=1.0) w_sample, b_sample = reform(v_sample) dial_gaussian_whole_Ws[mid].append(w_sample) dial_gaussian_whole_bs[mid].append(b_sample) v_sample = get_pca_gaussian_flat_sampling(pca, means, rank, scale=1.0) w_sample, b_sample = reform(v_sample) pca_gaussian_whole_Ws[mid].append(w_sample) pca_gaussian_whole_bs[mid].append(b_sample) # + [markdown] colab_type="text" id="U_kvSzLKzBSW" # ## Random Sampling # Randomly sample from model weight space around the final checkpoint. # + colab={} colab_type="code" id="PGILtMX7zUHp" # Random samples need to meet this accuracy threshold to be included. 
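# (Note on the sampling below: get_rand_norm_direction returns a unit-norm
# vector, so `scale` is the Euclidean distance of the candidate weights from the
# trained checkpoint; up to 5 random scales/directions are tried per sample, and
# candidates whose validation accuracy falls below the threshold are discarded.)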
acc_threshold = 0.70 rand_Ws = [[] for _ in range(points_to_collect)] rand_bs = [[] for _ in range(points_to_collect)] with tf.Session() as sess: feed_dict = { X_ph: X_val, y_ph: y_val, lr_ph: 0.0, dropout_rate_ph: 0.0, L2_reg_constant_ph: 0.0, } for mid in range(points_to_collect): for i in range(num_sample): vs = flatten(Ws_many[mid], bs_many[mid]) for trial in range(5): scale = 10*np.random.uniform() v_sample = vs + scale * get_rand_norm_direction(vs.shape) w_sample, b_sample = reform(v_sample) for j,W in enumerate(w_sample): feed_dict[Ws_ph[j]] = w_sample[j] for j,b in enumerate(b_sample): feed_dict[bs_ph[j]] = b_sample[j] val_acc = sess.run(accuracy_eval,feed_dict = feed_dict) if val_acc >= acc_threshold: rand_Ws[mid].append(w_sample) rand_bs[mid].append(b_sample) print('Obtaining 1 rand sample at scale {} with validation acc {} at {}th try'.format(scale, val_acc,trial)) break if trial ==4: print('No luck -------------------') # + colab={} colab_type="code" id="NJ5_RCriP-p5" # PCA low-rank approximation of the random samplings pca_gaussian_rand_Ws = [[] for _ in range(points_to_collect)] pca_gaussian_rand_bs = [[] for _ in range(points_to_collect)] for mid in range(points_to_collect): Ws_traj = rand_Ws[mid] bs_traj = rand_bs[mid] vs_list = [] for i in range(len(Ws_traj)): vs_list.append(flatten(Ws_traj[i], bs_traj[i])) vs_np = np.stack(vs_list, axis=0) means = np.mean(vs_np, axis=0) stds = np.std(vs_np, axis=0) vs_np_centered = vs_np - means.reshape([1, -1]) pca = PCA(n_components=rank) pca.fit(vs_np_centered) for i in range(num_sample): v_sample = get_pca_gaussian_flat_sampling(pca, means, rank, scale=1.0) w_sample, b_sample = reform(v_sample) pca_gaussian_rand_Ws[mid].append(w_sample) pca_gaussian_rand_bs[mid].append(b_sample) # + [markdown] colab_type="text" id="nEQ96SeTdnkZ" # ## Collecting predictions from Original and Subspace # + colab={} colab_type="code" id="gKnPAilsoSNB" # Average a list of weights. def average_var(w_list): avg = [[] for _ in w_list[0]] for w_now in w_list: for i, w in enumerate(w_now): avg[i].append(w) for i, v in enumerate(avg): avg[i] = np.mean(np.stack(v, axis=0), axis=0) return avg # Given a list of model weights, feed_dict and a session, returns the model # predictions as a list. def get_pred_list(Ws_list, bs_list, feed_dict, sess): pred_list = [] for id in range(len(Ws_list)): Ws_now = Ws_list[id] bs_now = bs_list[id] for j, W in enumerate(Ws): feed_dict[Ws_ph[j]] = Ws_now[j] for j, b in enumerate(bs): feed_dict[bs_ph[j]] = bs_now[j] pred_eval_out = sess.run(pred_eval, feed_dict=feed_dict) pred_list.append(pred_eval_out) return pred_list # Consider a list of subspaces, each has a list of sampled weights. This # function computes model predictions, ensembles the predictions within each # subspace, and returns the list of ensembled predictions. def get_subspace_pred_list(Ws_subspace_list, bs_subspace_list, feed_dict, sess): subspace_pred = [] num_subspace = len(Ws_subspace_list) for mid in range(num_subspace): pred_list_now = get_pred_list(Ws_subspace_list[mid], bs_subspace_list[mid], feed_dict, sess) subspace_pred.append(np.mean(np.stack(pred_list_now, axis=0), axis=0)) return subspace_pred # + colab={} colab_type="code" id="xeAuHy2xQ39g" # Returns a list of all possible k-subset from [1, ..., n] # Don't scale well for large n. Use with caution. 
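# For intuition: the recursive helper below enumerates the same k-subsets as
# itertools.combinations over 1..n (possibly in a different order). A minimal
# cross-check on a small case:
from itertools import combinations
print(sorted(map(list, combinations(range(1, 5), 2))))
# [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]] -- the 6 = C(4, 2) subsets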
def choose_k_from_n(n, k): if k>n or k <1 or n < 1: return [] if k == n : return [list(range(1, n+1))] if k == 1: return [[i] for i in range(1, n+1)] a = choose_k_from_n(n-1,k) b = choose_k_from_n(n-1, k-1) b_new = [] for g in b: b_new.append(g+[n]) return a+b_new def get_acc_brier(y, pred, is_logits=False): acc = np.mean(np.argmax(pred,axis=-1)==y) if is_logits: brier = brier_scores(y,logits=pred) else: brier = brier_scores(y,probs=pred) return acc, np.mean(brier) # Given a list of model predictions, compute the accuracy and brier score for # each individual model as well as the ensemble of them. def get_all_models_metrics(pred_list,y_test,max_ens_size=5, ens_size_list=None): acc_list = [] acc_list_ensemble = [] b_list = [] b_list_ensemble = [] num_models = len(pred_list) for i in range(num_models): acc,brier = get_acc_brier(y_test, pred_list[i]) acc_list.append(acc) b_list.append(brier) max_ens_size = np.min([max_ens_size, num_models]) if ens_size_list is None: ens_size_list = range(1, max_ens_size+1) for ens_size in ens_size_list: # Pick all possible subset with size of ens_size from available models. # Compute ensemble for each such subset. ens_index_list = choose_k_from_n(num_models,ens_size) ens_acc = [] ens_brier=[] for ens_ind in ens_index_list: ens_pred_list = [] for ind in ens_ind: ens_pred_list.append(pred_list[ind-1]) ens_np = np.mean(np.stack(ens_pred_list,axis=0), axis=0) acc,brier = get_acc_brier(y_test, ens_np) ens_acc.append(acc) ens_brier.append(brier) acc_list_ensemble.append(np.mean(ens_acc)) b_list_ensemble.append(np.mean(ens_brier)) metrics = {'accuracy': {}, 'brier':{},} metrics['accuracy']['individual'] = acc_list metrics['accuracy']['ensemble'] = acc_list_ensemble metrics['brier']['individual'] = b_list metrics['brier']['ensemble'] = b_list_ensemble return metrics # + colab={} colab_type="code" id="RR2UKvMcdpJm" # Get predictions on test data. orig_pred = [] wa_pred = [] # Compute averaged weights. 
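# (A minimal sketch, on toy arrays rather than the model weights, of what the
# averaging below via average_var computes: the elementwise mean of corresponding
# tensors across the late checkpoints, i.e. an SWA-style weight average.)
toy_ckpts_list = [[np.full((2, 2), float(k))] for k in range(3)]          # 3 "checkpoints", one tensor each
toy_avg = [np.mean(np.stack(ts, axis=0), axis=0) for ts in zip(*toy_ckpts_list)]
print(toy_avg[0])                                                         # [[1. 1.] [1. 1.]]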
wa_Ws = [[] for _ in range(points_to_collect)] wa_bs = [[] for _ in range(points_to_collect)] for i in range(points_to_collect): wa_Ws[i] = average_var(Ws_by_epochs_many[i]) wa_bs[i] = average_var(bs_by_epochs_many[i]) with tf.Session() as sess: feed_dict[X_ph] = X_test feed_dict[y_ph] = y_test orig_pred = get_pred_list(Ws_many, bs_many, feed_dict, sess) wa_pred = get_pred_list(wa_Ws, wa_bs, feed_dict, sess) diag_gaus_pred = get_subspace_pred_list(dial_gaussian_whole_Ws, dial_gaussian_whole_bs, feed_dict, sess) pca_gaus_pred = get_subspace_pred_list(pca_gaussian_whole_Ws, pca_gaussian_whole_bs, feed_dict, sess) pca_rand_pred = get_subspace_pred_list(pca_gaussian_rand_Ws, pca_gaussian_rand_bs, feed_dict, sess) rand_pred = get_subspace_pred_list(rand_Ws, rand_bs, feed_dict, sess) # + colab={} colab_type="code" id="sCSsNdA06lm1" max_ens_size = points_to_collect - 1 orig_metrics = get_all_models_metrics( orig_pred, y_test, max_ens_size=max_ens_size) wa_metrics = get_all_models_metrics(wa_pred, y_test, max_ens_size=max_ens_size) diag_metrics = get_all_models_metrics( diag_gaus_pred, y_test, max_ens_size=max_ens_size) pca_metrics = get_all_models_metrics( pca_gaus_pred, y_test, max_ens_size=max_ens_size) rand_metrics = get_all_models_metrics( rand_pred, y_test, max_ens_size=max_ens_size) pca_rand_metrics = get_all_models_metrics( pca_rand_pred, y_test, max_ens_size=max_ens_size) # + [markdown] colab_type="text" id="yQtoJGx817vV" # ##Plot Metrics for Ensembles + Subspace # + colab={} colab_type="code" id="utAi4c29CvgO" title = "Ensemble ACC test" plt.xlabel("Ensemble size") plt.ylabel("Test Acc") ensemble_sizes = np.asarray(range(max_ens_size)) + 1 plt.plot( ensemble_sizes, orig_metrics["accuracy"]["ensemble"], marker="s", label="probs ensembling", color="navy") plt.plot( ensemble_sizes, [np.mean(orig_metrics["accuracy"]["individual"])] * len(ensemble_sizes), label="original", color="blue") plt.plot( ensemble_sizes, [np.mean(diag_metrics["accuracy"]["individual"])] * len(ensemble_sizes), label="Diag", color="pink") plt.plot( ensemble_sizes, [np.mean(pca_metrics["accuracy"]["individual"])] * len(ensemble_sizes), label="PCA", color="green") plt.plot( ensemble_sizes, diag_metrics["accuracy"]["ensemble"], marker="s", label="Diag+Ensemble", color="red") plt.plot( ensemble_sizes, wa_metrics["accuracy"]["ensemble"], marker="s", label="WA+ensembling", color="grey") plt.plot( ensemble_sizes, pca_metrics["accuracy"]["ensemble"], marker="s", label="PCA+Ensemble", color="green") plt.plot( ensemble_sizes, rand_metrics["accuracy"]["ensemble"], marker="s", label="Rand+Ensemble", color="yellow") plt.plot( ensemble_sizes, pca_rand_metrics["accuracy"]["ensemble"], marker="s", label="PCA Rand+Ensemble", color="m") plt.legend() plt.xlim(1, max_ens_size) plt.show() # + colab={} colab_type="code" id="8vLQoHba6eXt" title = "Ensemble Brier test" plt.xlabel("Ensemble size") plt.ylabel("Test Brier") ensemble_sizes = np.asarray(range(max_ens_size)) + 1 plt.plot( ensemble_sizes, orig_metrics["brier"]["ensemble"], marker="s", label="probs ensembling", color="navy") plt.plot( ensemble_sizes, [np.mean(orig_metrics["brier"]["individual"])] * len(ensemble_sizes), label="original", color="blue") plt.plot( ensemble_sizes, [np.mean(diag_metrics["brier"]["individual"])] * len(ensemble_sizes), label="Diag", color="pink") plt.plot( ensemble_sizes, [np.mean(pca_metrics["brier"]["individual"])] * len(ensemble_sizes), label="PCA", color="green") plt.plot( ensemble_sizes, diag_metrics["brier"]["ensemble"], marker="s", label="Diag 
Ensemble", color="red") plt.plot( ensemble_sizes, pca_metrics["brier"]["ensemble"], marker="s", label="PCA Ensemble", color="green") plt.plot( ensemble_sizes, wa_metrics["brier"]["ensemble"], marker="s", label="WA ensembling", color="grey") plt.plot( ensemble_sizes, rand_metrics["brier"]["ensemble"], marker="s", label="Rand Ensemble", color="yellow") plt.plot( ensemble_sizes, pca_rand_metrics["brier"]["ensemble"], marker="s", label="PCA Rand Ensemble", color="m") plt.xlim(1, max_ens_size) plt.legend() plt.show() # + [markdown] colab_type="text" id="bGelr5_KBL-O" # ## Evaluating on Corrupted CIFAR10 # See [paper](https://arxiv.org/pdf/1906.02530.pdf) for data description. # # CAUTION: This section will be very time-consuming. If one wants to test the code quickly, consider only run a small portion of CIFAR10-C -- reducing 'ALL_CORRUPTIONS' to containing only one type, and 'intensity_range' to [0, 1]. # + colab={} colab_type="code" id="K2jeOP8AERPn" ALL_CORRUPTIONS = [ 'gaussian_noise', 'shot_noise', # 'impulse_noise', # 'defocus_blur', # 'frosted_glass_blur', # 'motion_blur', # 'zoom_blur', # 'snow', # 'frost', # 'fog', # 'brightness', # 'contrast', # 'elastic', # 'pixelate', # 'jpeg_compression', # 'gaussian_blur', # 'saturate', # 'spatter', # 'speckle_noise', ] num_models_to_use = 5 ens_size_list = [2, 4] model_names = ['original', 'wa', 'diag', 'pca', 'rand', 'pca_rand'] model_names_plus_ens = [] for i in ens_size_list: model_names_plus_ens = model_names_plus_ens + [ name + '_ens_{}'.format(i) for name in model_names ] all_model_names = model_names + model_names_plus_ens model_to_acc_every_level = {name: [] for name in all_model_names} model_to_brier_every_level = {name: [] for name in all_model_names} intensity_range = range(6) for level in intensity_range: print('========= Level {} ======='.format(level)) model_to_acc_this_level = {name: [] for name in all_model_names} model_to_brier_this_level = {name: [] for name in all_model_names} if level == 0: corruptions = ['no_corruption'] else: corruptions = ALL_CORRUPTIONS for corruption_type in corruptions: if corruption_type is 'no_corruption': ds_test = tfds.load(name='cifar10', split=['test'], batch_size=-1) else: # Load corrupted data. 
corruption_config_name = corruption_type + '_{}'.format(level) ds_test = tfds.load( name='cifar10_corrupted', split=['test'], builder_kwargs={'config': corruption_config_name}, batch_size=-1) numpy_ds = tfds.as_numpy(ds_test) x_test, y_test = numpy_ds[0]['image'], numpy_ds[0]['label'] x_test = x_test.reshape([-1, Hn, Wn, Cn]) y_test = y_test.reshape([-1]) N_test = len(y_test) # Run inference with tf.Session() as sess: feed_dict[X_ph] = x_test feed_dict[y_ph] = y_test #Get predictions for name in model_names: if name is 'original': pred_list = get_pred_list(Ws_many[0:num_models_to_use], bs_many[0:num_models_to_use], feed_dict, sess) elif name is 'wa': # Weight Averaged pred_list = get_pred_list(wa_Ws[0:num_models_to_use], wa_bs[0:num_models_to_use], feed_dict, sess) elif name is 'diag': # Subspacesampling pred_list = get_subspace_pred_list( dial_gaussian_whole_Ws[0:num_models_to_use], dial_gaussian_whole_bs[0:num_models_to_use], feed_dict, sess) elif name is 'pca': pred_list = get_subspace_pred_list( pca_gaussian_whole_Ws[0:num_models_to_use], pca_gaussian_whole_bs[0:num_models_to_use], feed_dict, sess) elif name is 'pca_rand': pred_list = get_subspace_pred_list( pca_gaussian_rand_Ws[0:num_models_to_use], pca_gaussian_rand_bs[0:num_models_to_use], feed_dict, sess) elif name is 'rand': pred_list = get_subspace_pred_list(rand_Ws[0:num_models_to_use], rand_bs[0:num_models_to_use], feed_dict, sess) corrupt_metrics = get_all_models_metrics( pred_list, y_test, ens_size_list=ens_size_list) model_to_acc_this_level[name].append( np.mean(corrupt_metrics['accuracy']['individual'])) model_to_brier_this_level[name].append( np.mean(corrupt_metrics['brier']['individual'])) for i, ens_size in enumerate(ens_size_list): model_to_acc_this_level[name + '_ens_{}'.format(ens_size)].append( corrupt_metrics['accuracy']['ensemble'][i]) model_to_brier_this_level[name + '_ens_{}'.format(ens_size)].append( corrupt_metrics['brier']['ensemble'][i]) for name in all_model_names: model_to_acc_every_level[name].append(model_to_acc_this_level[name]) model_to_brier_every_level[name].append(model_to_brier_this_level[name]) # + [markdown] colab_type="text" id="rGS9GPU2mo5j" # ## Plot the metrics across corruption intensity # + colab={} colab_type="code" id="b5cAs29LkPYu" ens_size_list = [2, 4] model_names = ['original', 'wa', 'diag', 'pca', 'rand', 'pca_rand'] model_names_plus_ens = [] for i in ens_size_list: model_names_plus_ens = model_names_plus_ens + [ name + '_ens_{}'.format(i) for name in model_names ] all_model_names = model_names + model_names_plus_ens # Model names used by Jensen Shannon plots. js_model_names = ['inde', 'traj', 'diag', 'pca', 'rand', 'pca_rand'] model_name_to_color = { 'original': 'indianred', 'wa': 'dimgray', 'diag': 'gold', 'pca': 'blue', 'rand': 'mediumseagreen', 'pca_rand': 'fuchsia', 'inde': 'red', 'traj': 'grey' } for name in model_names: for i in ens_size_list: model_name_to_color[name + '_ens_{}'.format(i)] = model_name_to_color[name] model_name_to_label = { 'original': 'Original', 'wa': 'Weight Avg', 'diag': 'Diag Gaus', 'pca': 'PCA Gaus', 'rand': 'Random', 'pca_rand': 'PCA Random', 'inde': 'Independent', 'traj': 'Train Trajectory' } for name in model_names: model_name_to_label[name + '_ens'] = model_name_to_label[name] + '+Ens' for i in ens_size_list: model_name_to_label[ name + '_ens_{}'.format(i)] = model_name_to_label[name] + '+Ens {}'.format(i) # Line style, currently can only support 2 ensemble sizes. Beyond that, solid # line will be used for all. 
model_name_to_ls = {} for name in model_names: model_name_to_ls[name] = 'dashed' model_name_to_ls[name + '_ens_{}'.format(ens_size_list[0])] = 'dotted' for i in range(1, len(ens_size_list)): model_name_to_ls[name + '_ens_{}'.format(ens_size_list[i])] = 'solid' def plot_metric_over_corrupted_data(metric_name, intensity_range, all_model_names, base_model_names, color_map, ls_map): plt.figure(figsize=(10, 9)) ylabel = { 'acc': 'Accuracy', 'brier': 'Brier', } plt.xlabel('Corruption Intensity', fontsize=16) plt.ylabel(ylabel[metric_name], fontsize=16) label_map = {'original': 'Single'} for ens_size in ens_size_list: label_map['original_ens_{}'.format(ens_size)] = '{}-Ensemble'.format( ens_size) model_to_mean = {name: [] for name in all_model_names} for name in all_model_names: if metric_name is 'acc': model_to_metric_every_level = model_to_acc_every_level[name] elif metric_name is 'brier': model_to_metric_every_level = model_to_brier_every_level[name] for t in intensity_range: model_to_mean[name].append(np.mean(model_to_metric_every_level[t])) if name in label_map: label = label_map[name] else: label = None plt.plot( intensity_range, model_to_mean[name], marker='s', label=label, color=color_map[name], ls=ls_map[name]) legend0 = plt.legend(loc=3, fontsize=12) patch_list = [] for name in base_model_names: patch_list.append( mpatch.Patch( color=model_name_to_color[name], label=model_name_to_label[name])) legend1 = plt.legend(handles=patch_list) ax = plt.gca().add_artist(legend0) plt.xlim(np.min(intensity_range), np.max(intensity_range)) plt.tight_layout() def plot_js_over_corrupted_data(intensity_range, all_model_names, color_map, label_map): plt.figure(figsize=(10, 9)) plt.xlabel('Corruption Intensity', fontsize=16) plt.ylabel('', fontsize=16) model_to_mean = {name: [] for name in all_model_names} for name in all_model_names: model_to_metric_every_level = model_to_js_every_level[name] for t in intensity_range: model_to_mean[name].append(np.mean(model_to_metric_every_level[t])) plt.plot( intensity_range, model_to_mean[name], marker='s', label=label_map[name], color=color_map[name]) plt.xlim(np.min(intensity_range), np.max(intensity_range)) plt.tight_layout() plt.legend() # + colab={} colab_type="code" id="cszvIYjZkHLi" intensity_range = range(6) plot_metric_over_corrupted_data('acc', intensity_range, all_model_names, model_names, model_name_to_color, model_name_to_ls) # + colab={} colab_type="code" id="aLsVQu1SYzaX" intensity_range = range(6) plot_metric_over_corrupted_data('brier', intensity_range, all_model_names, model_names, model_name_to_color, model_name_to_ls) # + [markdown] colab_type="text" id="ZwOzmfhn3Z4l" # ## # + colab={} colab_type="code" id="tqiAL-z73cPe" def get_entropy(p): return np.sum(-p * np.log(p + 1e-5), axis=1) # Compute the Jensen Shannon score given a list of model predictions. 
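# The score computed below is the entropy of the averaged (ensemble) prediction
# minus the average entropy of the individual member predictions: it is ~0 when
# all members agree and grows with disagreement. A tiny self-contained check
# using two toy "models" on a single two-class example:
_p1 = np.array([[0.9, 0.1]])
_p2 = np.array([[0.1, 0.9]])
_p_bar = 0.5 * (_p1 + _p2)
print(np.mean(get_entropy(_p_bar))
      - 0.5 * (np.mean(get_entropy(_p1)) + np.mean(get_entropy(_p2))))
# ~0.37 > 0: the two toy models disagree strongly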
def get_jensen_shannon(pred_list, is_logit=False): n = len(pred_list) if is_logit: p_list = [] for i in range(n): p_list.append(softmax(pred_list[i], axis=1)) else: p_list = pred_list p_np = np.stack(p_list, axis=0) ensemble_pred = np.mean(p_np, axis=0) ensemble_entropy = np.mean(get_entropy(ensemble_pred)) mean_entropy = np.mean(np.sum(-p_np * np.log(p_np + 1e-5), axis=2)) return ensemble_entropy - mean_entropy # + [markdown] colab_type="text" id="mCgvOt_gbRYG" # ### Eval on SVHN # JS on OOD data # + colab={} colab_type="code" id="4WralaRlbVOW" _, ds_test = tfds.load( name="svhn_cropped", split=["train", "test"], batch_size=-1) numpy_ds = tfds.as_numpy(ds_test) X_test_raw, y_test_fine_raw = numpy_ds["image"], numpy_ds["label"] X_test_svhn = X_test_raw y_test_svhn = y_test_fine_raw subsample = np.random.choice(len(y_test_svhn), 10000, replace=False) X_svhn = X_test_svhn[subsample, :, :, :] y_svhn = y_test_svhn[subsample] # + colab={} colab_type="code" id="ICYLKUto1mrv" traj_svhn_js = [] pca_svhn_js = [] diag_svhn_js = [] rand_svhn_js = [] pca_rand_svhn_js = [] with tf.Session() as sess: feed_dict[X_ph] = X_svhn feed_dict[y_ph] = y_svhn feed_dict[lr_ph] = 0.0 feed_dict[dropout_rate_ph] = 0.0 indi_svhn_pred = get_pred_list(Ws_many[0:10], bs_many[0:10], feed_dict, sess) for traj_id in range(points_to_collect): trajectory_svhn_pred = get_pred_list(Ws_by_epochs_many[traj_id], bs_by_epochs_many[traj_id], feed_dict, sess) traj_svhn_js.append(get_jensen_shannon(trajectory_svhn_pred)) diag_gaus_whole_svhn_pred = get_pred_list(dial_gaussian_whole_Ws[traj_id], dial_gaussian_whole_bs[traj_id], feed_dict, sess) diag_svhn_js.append(get_jensen_shannon(diag_gaus_whole_svhn_pred)) pca_gaus_whole_svhn_pred = get_pred_list(pca_gaussian_whole_Ws[traj_id], pca_gaussian_whole_bs[traj_id], feed_dict, sess) pca_svhn_js.append(get_jensen_shannon(pca_gaus_whole_svhn_pred)) pca_rand_svhn_pred = get_pred_list(pca_gaussian_rand_Ws[traj_id], pca_gaussian_rand_bs[traj_id], feed_dict, sess) pca_rand_svhn_js.append(get_jensen_shannon(pca_rand_svhn_pred)) rand_svhn_pred = get_pred_list(rand_Ws[traj_id], rand_bs[traj_id], feed_dict, sess) rand_svhn_js.append(get_jensen_shannon(rand_svhn_pred)) # + colab={} colab_type="code" id="-1j-4_KhAZEg" print("Jensen-Shannon for Independent run is {:.4f}".format( get_jensen_shannon(indi_svhn_pred))) print("Jensen-Shannon for within-trajectory is {:.4f}".format( np.mean(traj_svhn_js))) print("Jensen-Shannon for Diag Gaussian is {:.4f}".format( np.mean(diag_svhn_js))) print("Jensen-Shannon for PCA Gaussian is {:.4f}".format(np.mean(pca_svhn_js))) print("Jensen-Shannon for Rand is {:.4f}".format(np.mean(rand_svhn_js))) print("Jensen-Shannon for PCA Rand is {:.4f}".format(np.mean(pca_rand_svhn_js))) # + [markdown] colab_type="text" id="FLCIw3GwQeaa" # ###Eval on Cifar-10-C # # CAUTION: This section will be very time-consuming. If one wants to test the code quickly, consider only run a small portion of CIFAR10-C -- reducing 'ALL_CORRUPTIONS' to containing only one type, and 'intensity_range' to [0, 1]. # + colab={} colab_type="code" id="UNM4VieiQjAp" # Choose any training trajectory to study the JS of subspace samplings. 
traj_id = 3 model_to_js_every_level = {name: [] for name in js_model_names} intensity_range = range(6) for level in intensity_range: print("========= Level {} =======".format(level)) model_to_js_this_level = {name: [] for name in js_model_names} if level == 0: corruptions = ["no_corruption"] else: corruptions = ALL_CORRUPTIONS for corruption_type in corruptions: if corruption_type is "no_corruption": ds_test = tfds.load(name="cifar10", split=["test"], batch_size=-1) else: # Load corrupted data. corruption_config_name = corruption_type + "_{}".format(level) ds_test = tfds.load( name="cifar10_corrupted", split=["test"], builder_kwargs={"config": corruption_config_name}, batch_size=-1) numpy_ds = tfds.as_numpy(ds_test) x_test, y_test = numpy_ds[0]["image"], numpy_ds[0]["label"] x_test = x_test.reshape([-1, Hn, Wn, Cn]) y_test = y_test.reshape([-1]) N_test = len(y_test) # Run inference with tf.Session() as sess: feed_dict[X_ph] = x_test feed_dict[y_ph] = y_test #Get predictions for name in js_model_names: if name is "inde": pred_list = get_pred_list(Ws_many, bs_many, feed_dict, sess) elif name is "traj": # Weight Averaged pred_list = get_pred_list(Ws_by_epochs_many[traj_id], bs_by_epochs_many[traj_id], feed_dict, sess) elif name is "diag": pred_list = get_pred_list(dial_gaussian_whole_Ws[traj_id], dial_gaussian_whole_bs[traj_id], feed_dict, sess) elif name is "pca": pred_list = get_pred_list(pca_gaussian_whole_Ws[traj_id], pca_gaussian_whole_bs[traj_id], feed_dict, sess) elif name is "pca_rand": pred_list = get_pred_list(pca_gaussian_rand_Ws[traj_id], pca_gaussian_rand_bs[traj_id], feed_dict, sess) elif name is "rand": pred_list = get_pred_list(rand_Ws[traj_id], rand_bs[traj_id], feed_dict, sess) model_to_js_this_level[name].append(get_jensen_shannon(pred_list)) for name in js_model_names: model_to_js_every_level[name].append(model_to_js_this_level[name]) # + colab={} colab_type="code" id="2mcbo-46d4s2" plot_js_over_corrupted_data(intensity_range, js_model_names, model_name_to_color, model_name_to_label) # + [markdown] colab_type="text" id="zCZUtnjKljz-" # #Diversity vs Accuracy # This section plots the Diversiy vs Accuracy plane for different subspace sampling methods, compared to independent solutions: # # # * Random # * Gaussian Diag # * Gaussian Low rank (PCA) # * Dropout # # Note that unlike the previous section "Effects of Ensemble + Subspace Sampling" where we only keep samples with good accuracy performance, in this section we include samples that sacrifices accuracy to explore the full spectrum of function diversity # + colab={} colab_type="code" id="D3eKK8UPz-UB" base_id = 0 base_v = flatten(Ws_many[base_id], bs_many[base_id]) def get_acc_and_diff(pred_class, base_pred, y_label): diff = np.mean(pred_class != base_pred) acc = np.mean(pred_class == y_label) return acc, diff # Given a list of weights, compute their accuracy and their difference to the # base prediction. 
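# (Before the batched version below: a small self-contained check of
# get_acc_and_diff on toy label arrays. The final plot uses diff / (1 - acc) as
# the normalized diversity measure.)
_base = np.array([0, 1, 2, 2, 1])    # predictions of the baseline optimum
_other = np.array([0, 1, 1, 2, 0])   # predictions of another sampled model
_truth = np.array([0, 1, 2, 0, 1])   # ground-truth labels
_acc, _diff = get_acc_and_diff(_other, _base, _truth)
print(_acc, _diff, _diff / (1.0 - _acc))   # -> 0.6, 0.4, and normalized diversity ~1.0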
def get_acc_and_diff_from_weights(Ws_list, bs_list, feed_dict, base_pred): assert len(Ws_list) == len(bs_list) with tf.Session() as sess: pred_list = get_pred_list(Ws_list, bs_list, feed_dict, sess) acc_list = [] diff_list = [] for i in range(len(Ws_list)): acc, diff = get_acc_and_diff( np.argmax(pred_list[i], axis=-1), base_pred, feed_dict[y_ph]) acc_list.append(acc) diff_list.append(diff) return acc_list, diff_list # + [markdown] colab_type="text" id="0oScJ4X09-F9" # ##Independent Solutions # + colab={} colab_type="code" id="TkwIdoOJvgz2" feed_dict = { X_ph: X_test, y_ph: y_test, lr_ph: 0.0, dropout_rate_ph: 0.0, L2_reg_constant_ph: 0.0, } with tf.Session() as sess: orig_pred_list = get_pred_list(Ws_many, bs_many, feed_dict, sess) base_pred = np.argmax(orig_pred_list[base_id], axis=-1) independent_acc = [] independent_diff = [] for i in range(points_to_collect): class_pred = np.argmax(orig_pred_list[i], axis=-1) acc, diff = get_acc_and_diff(class_pred, base_pred, y_test) independent_acc.append(acc) independent_diff.append(diff) data_to_show = [ ("independent optima", "red", 300, "*", 1.0, independent_acc, independent_diff), ] # + colab={} colab_type="code" id="cotIEVchK3Vh" data_to_show = [ ("independent optima", "red", 300, "*", 1.0, independent_acc, independent_diff), ] # + [markdown] colab_type="text" id="hT1pKG9W-BTZ" # ##Gaussian Sampling # + colab={} colab_type="code" id="czoWhgyGAFoC" num_sample = 50 rank = 5 # for PCA Ws_traj = Ws_by_epochs_many[base_id] bs_traj = bs_by_epochs_many[base_id] dial_gaussian_whole_Ws = [] dial_gaussian_whole_bs = [] pca_gaussian_whole_Ws = [] pca_gaussian_whole_bs = [] vs_list = [] for i in range(len(Ws_traj)): vs_list.append(flatten(Ws_traj[i], bs_traj[i])) vs_np = np.stack(vs_list, axis=0) means = np.mean(vs_np, axis=0) stds = np.std(vs_np, axis=0) vs_np_centered = vs_np - means.reshape([1, -1]) pca = PCA(n_components=rank) pca.fit(vs_np_centered) for i in range(num_sample): scale = np.random.uniform() # One can adjust the constant in front of scale to get a fuller range in # diversity-acc plane. v_sample = get_gaussian_sample(means, stds, scale=10.0 * scale) w_sample, b_sample = reform(v_sample) dial_gaussian_whole_Ws.append(w_sample) dial_gaussian_whole_bs.append(b_sample) # One can adjust the scale value to get a fuller range in diversity-acc plane. v_sample = get_pca_gaussian_flat_sampling(pca, means, rank, scale=6.0) w_sample, b_sample = reform(v_sample) pca_gaussian_whole_Ws.append(w_sample) pca_gaussian_whole_bs.append(b_sample) pca_acc, pca_diff = get_acc_and_diff_from_weights(pca_gaussian_whole_Ws, pca_gaussian_whole_bs, feed_dict, base_pred) diag_acc, diag_diff = get_acc_and_diff_from_weights(dial_gaussian_whole_Ws, dial_gaussian_whole_bs, feed_dict, base_pred) # + colab={} colab_type="code" id="mfnrhHSa5Xiw" data_to_show = data_to_show + [ ("diag gaussian", "purple", 10, "o", 0.6, diag_acc, diag_diff), ("pca gaussian", "fuchsia", 10, "o", 0.3, pca_acc, pca_diff), ] # + [markdown] colab_type="text" id="Jn-g6-GZZ5tc" # ##Random Sampling # + colab={} colab_type="code" id="uJpRJaDLaFRa" rand_Ws = [] rand_bs = [] for i in range(num_sample): # One can adjust the constant in front of scale to get a fuller range in # diversity-acc plane. 
scale = 70.0 * np.random.uniform() v_sample = base_v + scale * get_rand_norm_direction(base_v.shape) w_sample, b_sample = reform(v_sample) rand_Ws.append(w_sample) rand_bs.append(b_sample) rand_acc, rand_diff = get_acc_and_diff_from_weights(rand_Ws, rand_bs, feed_dict, base_pred) # + colab={} colab_type="code" id="h1my3OECbzEi" data_to_show = data_to_show + [ ("random subspace", "blue", 3, "o", 0.3, rand_acc, rand_diff), ] # + [markdown] colab_type="text" id="JJWIEJZBZ9Jf" # ##Dropout Sampling # + colab={} colab_type="code" id="MBiCleCgb8r1" def apply_dropout_to_array(anchor_array, dropout_to_use): shape_now = anchor_array.shape random_mask = np.random.rand(*shape_now) mask = (random_mask < (1.0 - dropout_to_use)) array_dropped = (anchor_array * mask) / (1.0 - dropout_to_use) return array_dropped dropout_Ws = [] dropout_bs = [] for i in range(num_sample): # One can adjust the constant to get a fuller range in diversity-acc plane. dropout = 0.1 * np.random.uniform() v_sample = apply_dropout_to_array(base_v, dropout) w_sample, b_sample = reform(v_sample) dropout_Ws.append(w_sample) dropout_bs.append(b_sample) dropout_acc, dropout_diff = get_acc_and_diff_from_weights( dropout_Ws, dropout_bs, feed_dict, base_pred) # + colab={} colab_type="code" id="E5qdldWHb9c4" data_to_show = data_to_show + [ ("dropout subspace", "orange", 10, "o", 1.0, dropout_acc, dropout_diff), ] # + [markdown] colab_type="text" id="VwNL8PPM7DrM" # ##Plotting Figure # + colab={} colab_type="code" id="AqhcI78L52hM" # Functions for computing the analytic limit curves (see paper). def perturbed_reference_analytic(desired_accuracy, reference_accuracy, classes): return (reference_accuracy - desired_accuracy) / ( reference_accuracy + (reference_accuracy - 1.0) / (classes - 1.0)) def random_average_case(desired_accuracy, reference_accuracy, classes): part1 = reference_accuracy * (1.0 - desired_accuracy) part2 = desired_accuracy * (1.0 - reference_accuracy) part3 = (1.0 - reference_accuracy) * (1.0 - desired_accuracy) * (classes - 2.0) / ( classes - 1.0) return part1 + part2 + part3 # + colab={} colab_type="code" id="DjOAuW3F56qu" plt.figure(figsize=(7, 5)) accs_fit_random = np.linspace(0.1, 0.71, 100) diffs_fit_random = random_average_case(accs_fit_random, 0.71, 10) plt.plot( accs_fit_random, diffs_fit_random / (1.0 - accs_fit_random), color="black", linestyle="-.", label="Upper limit") # Analytic limits curve. 
accs_fit_perturbed = np.linspace(0.1, 0.71, 100) diffs_fit_perturbed = perturbed_reference_analytic(accs_fit_perturbed, 0.71, 10) plt.plot( accs_fit_perturbed, diffs_fit_perturbed / (1.0 - accs_fit_perturbed), color="black", linestyle="--", label="Lower limit") for (name_now, color_now, size_now, marker_now, alpha_now, accs_now, diffs_now) in data_to_show: metric_now = np.asarray(diffs_now) / (1.0 - np.asarray(accs_now)) if name_now != "independent optima": plt.scatter( accs_now, metric_now, color=color_now, s=size_now, marker=marker_now, alpha=alpha_now) plt.scatter([], [], color=color_now, label=name_now, s=10, marker=marker_now, alpha=1.0) else: plt.scatter( accs_now, metric_now, color=color_now, s=size_now, marker=marker_now, alpha=alpha_now, label=name_now) if name_now == "independent optima": base_star_color = "green" plt.scatter([accs_now[base_id]], [metric_now[base_id]], color=base_star_color, label="baseline optimum", s=size_now, marker=marker_now, alpha=alpha_now) plt.xlabel("Validation accuracy", fontsize=14) plt.ylabel("Fraction of labels changes / (1.0-accuracy)", fontsize=14) plt.title("MediumCNN on Cifar10", fontsize=18) plt.legend(loc=3, fancybox=True, framealpha=0.5, fontsize=16) plt.ylim([-0.1, 1.2]) plt.xlim([0, 0.78]) plt.show() # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.2.0 # language: julia # name: julia-1.2 # --- using GR GR.__init__() using PackageCompiler compile_incremental("GR", joinpath(dirname(pathof(GR)), "..", "examples", "snoop.jl")) using Random rng = MersenneTwister(1234); x = 0:π/100:2π y = sin.(x) plot(x, y) x = LinRange(0, 1, 51) y = x .- x.^2 scatter(x, y) sz = LinRange(50, 300, length(x)) c = LinRange(0, 255, length(x)) scatter(x, y, sz, c) stem(x, y) histogram(randn(rng, 10000)) plot(randn(50)) oplot(randn(50,3)) x = LinRange(0, 30, 1000) y = cos.(x) .* x z = sin.(x) .* x plot3(x, y, z) angles = LinRange(0, 2pi, 40) radii = LinRange(0, 2, 40) polar(angles, radii) x = 2 .* rand(rng, 100) .- 1 y = 2 .* rand(rng, 100) .- 1 z = 2 .* rand(rng, 100) .- 1 scatter3(x, y, z) c = 999 .* rand(rng, 100) .+ 1 scatter3(x, y, z, c) x = randn(rng, 100000) y = randn(rng, 100000) hexbin(x, y) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) .+ cos.(y) contour(x, y, z) X = LinRange(-2, 2, 40) Y = LinRange(0, pi, 20) x, y = meshgrid(X, Y) z = sin.(x) .+ cos.(y) contour(x, y, z) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) .+ cos.(y) contourf(x, y, z) X = LinRange(-2, 2, 40) Y = LinRange(0, pi, 20) x, y = GR.meshgrid(X, Y) z = sin.(x) .+ cos.(y) contourf(x, y, z) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) + cos.(y) tricont(x, y, z) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) .+ cos.(y) surface(x, y, z) X = LinRange(-2, 2, 40) Y = LinRange(0, pi, 20) x, y = meshgrid(X, Y) z = sin.(x) .+ cos.(y) surface(x, y, z) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) .+ cos.(y) GR.trisurf(x, y, z) x = 8 .* rand(rng, 100) .- 4 y = 8 .* rand(rng, 100) .- 4 z = sin.(x) .+ cos.(y) wireframe(x, y, z) X = LinRange(-2, 2, 40) Y = LinRange(0, pi, 20) x, y = meshgrid(X, Y) z = sin.(x) .+ cos.(y) wireframe(x, y, z) # Create example data X = LinRange(-2, 2, 40) Y = LinRange(0, pi, 20) x, y = meshgrid(X, Y) z = sin.(x) .+ cos.(y) heatmap(z) imshow(z) s = LinRange(-1, 1, 40) x, y, z = GR.meshgrid(s, s, s) v = 1 .- (x 
.^ 2 .+ y .^ 2 .+ z .^ 2) .^ 0.5 isosurface(v, isovalue=0.2) GR.GR3.terminate() volume(randn(rng, 50, 50, 50)) N = 1_000_000 x = randn(rng, N) y = randn(rng, N) shade(x, y) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="UT7OMWIcQR2o" colab_type="text" # Universidade Tecnológica Federal do Paraná # Professor: # Orientando: Enzo Dornelles Italiano # Cálculo Numérico # + [markdown] id="0A_Fr9wJQT36" colab_type="text" # Inicialmente precisamos executar uma vez os códigos abaixo # + [markdown] id="hWOK_ACJdEgW" colab_type="text" # #Códigos # + id="QgmjIRa9QF15" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 819} outputId="c67e5286-0b6d-4a41-fd05-ea75fdf8da89" # !pip3 install prettymatrix import copy import math import numpy as np from sympy import * import prettymatrix import matplotlib.pyplot as plt import plotly.graph_objects as go from prettytable import PrettyTable from numpy.polynomial import Polynomial as P x = symbols('x') def Lagrange(pontos, valor, f): Pn = 0 print("Polinômios coeficientes") for i in range(len(pontos)): mult = 1 multp = 1 div = 1 for j in range(len(pontos)): if i == j: continue mult *= P([-pontos[j][0], 1]) multp *= x - pontos[j][0] div *= pontos[i][0] - pontos[j][0] print("\n>>>>>>>L[%a]<<<<<<<" % i) pprint(multp/div) Pn = Pn + pontos[i][1] * (mult // div) print("Polinômio interpolador de Lagrange p(x) = ", end="") poli = list(Pn) for i in range(len(poli)): print(abs(round(poli[i],8)),end="") if i == 0: print(" ",end="") elif i == 1: print("x ", end="") else: print("x**%o"%i, end=" ") if i != len(poli)-1: if poli[i+1] >= 0: print("+ ", end="") else: print("- ", end="") print("\n") print("Polinômio interpolador avaliado em x =",valor,", é P("+str(valor)+") =" ,Pn(valor)) if f != 0: f = diff(f,x,len(poli)) # print(simplify(f)) maior = abs(f.subs(x,pontos[0][0])) if abs(f.subs(x,pontos[len(pontos)-1][0])) > maior: maior = abs(f.subs(x,pontos[len(pontos)-1][0])) mult = 1 for i in range(len(pontos)): mult *= abs(valor-pontos[i][0]) E = mult * maior / factorial(len(poli)) print("\nLimitante") print("|E("+str(valor)+")| <= ",E.evalf()) def plotL(pontos, xi, xf): l = [] for i in range(len(pontos)): multp = 1 div = 1 for j in range(len(pontos)): if i == j: continue multp *= x - pontos[j][0] div *= pontos[i][0] - pontos[j][0] l.append(multp/div) return l def graficoLagrange(pontos): Pn = 0 for i in range(len(pontos)): mult = 1 div = 1 for j in range(len(pontos)): if i == j: continue mult *= P([-pontos[j][0], 1]) div *= pontos[i][0] - pontos[j][0] Pn = Pn + pontos[i][1] * (mult // div) return Pn def Newton(pontos, valor, f): dif = [] for i in range(len(pontos)): dif.append([]) for i in range(len(pontos)): dif[0].append(pontos[i][1]) for i in range(len(pontos)-1): for j in range(len(pontos)-(i+1)): dif[i+1].append((dif[i][j+1]-dif[i][j])/(pontos[j+i+1][0]-pontos[j][0])) Table = PrettyTable() points=[] for i in range(len(pontos)): points.append(pontos[i][0]) Table.add_column("xk", points) for k in range(len(dif)): while len(dif[k]) < len(pontos): dif[k].append("-") Table.add_column("Dif_"+str(k),dif[k]) print("Tabela") print(Table) Pn = dif[0][0] for i in range(1,len(dif)): temp = 1 for j in range(i): temp *= (x-pontos[j][0]) temp *= dif[i][0] Pn += temp print("Polinômio interpolador p(x) = ",end="") 
print(simplify(Pn)) print("Polinômio interpolador avaliado em x = "+str(valor)+" é p("+str(valor)+") = ", end="") print(round(Pn.subs(x,valor),8)) if f != 0: f = diff(f,x,degree(Pn,x)+1) # print(simplify(f)) maior = abs(f.subs(x,pontos[0][0])) if abs(f.subs(x,pontos[len(pontos)-1][0])) > maior: maior = abs(f.subs(x,pontos[len(pontos)-1][0])) mult = 1 for i in range(len(pontos)): mult *= abs(valor-pontos[i][0]) E = mult * maior / factorial(degree(Pn,x)+1) print("\nLimitante") print("|E("+str(valor)+")| <= ",E.evalf()) def graficoNewton(pontos): dif = [] for i in range(len(pontos)): dif.append([]) for i in range(len(pontos)): dif[0].append(pontos[i][1]) for i in range(len(pontos)-1): for j in range(len(pontos)-(i+1)): dif[i+1].append((dif[i][j+1]-dif[i][j])/(pontos[j+i+1][0]-pontos[j][0])) Pn = dif[0][0] for i in range(1,len(dif)): temp = 1 for j in range(i): temp *= (x-pontos[j][0]) temp *= dif[i][0] Pn += temp return Pn def NewtonGregory(pontos, valor, f): intervalo = pontos[1][0] - pontos[0][0] for i in range(1,len(pontos)): if pontos[i][0] - pontos[i-1][0] != intervalo: return print("Valores de X não são equidistantes") dif = [] for i in range(len(pontos)): dif.append([]) for i in range(len(pontos)): dif[0].append(pontos[i][1]) for i in range(len(pontos)-1): for j in range(len(pontos)-(i+1)): dif[i+1].append((dif[i][j+1]-dif[i][j])) Table = PrettyTable() points=[] for i in range(len(pontos)): points.append(pontos[i][0]) Table.add_column("xk", points) for k in range(len(dif)): while len(dif[k]) < len(pontos): dif[k].append("-") Table.add_column("Dif_"+str(k),dif[k]) print("Tabela") print(Table) Pn = dif[0][0] for i in range(1,len(dif)): temp = 1 for j in range(i): temp *= (x-pontos[j][0]) temp *= (dif[i][0]/(factorial(i)*intervalo**i)) Pn += temp print("Polinômio interpolador p(x) = ",end="") print(Pn) print("Polinômio interpolador avaliado em x = "+str(valor)+" é p("+str(valor)+") = ", end="") print(round(Pn.subs(x,valor),8)) if f != 0: f = diff(f,x,degree(Pn,x)+1) # print(simplify(f)) maior = abs(f.subs(x,pontos[0][0])) if abs(f.subs(x,pontos[len(pontos)-1][0])) > maior: maior = abs(f.subs(x,pontos[len(pontos)-1][0])) mult = 1 for i in range(len(pontos)): mult *= abs(valor-pontos[i][0]) E = mult * maior / factorial(degree(Pn,x)+1) print("\nLimitante") print("|E("+str(valor)+")| <= ",E.evalf()) def graficoNG(pontos): intervalo = pontos[1][0] - pontos[0][0] for i in range(1,len(pontos)): if pontos[i][0] - pontos[i-1][0] != intervalo: return print("Valores de X não são equidistantes") dif = [] for i in range(len(pontos)): dif.append([]) for i in range(len(pontos)): dif[0].append(pontos[i][1]) for i in range(len(pontos)-1): for j in range(len(pontos)-(i+1)): dif[i+1].append((dif[i][j+1]-dif[i][j])) Pn = dif[0][0] for i in range(1,len(dif)): temp = 1 for j in range(i): temp *= (x-pontos[j][0]) temp *= (dif[i][0]/(factorial(i)*intervalo**i)) Pn += temp return Pn def sistLinear(G, B, ordem): y = symbols('y:'+str(ordem)) mY = [] for i in range(len(y)): mY.append(y[i]) D = np.linalg.det(G) tempG = G.copy() for j in range(ordem): for i in range(ordem): tempG[i][j] = B[i] tempD = np.linalg.det(tempG) tempG = G.copy() mY[j] = round(tempD/D, 8) mTemp = [] for i in range(len(mY)): mTemp.append([mY[i]]) mY = mTemp.copy() mY = np.asarray(mY) return mY def spline(pontos, valor): h = [] for i in range(1,len(pontos)): h.append(pontos[i][0] - pontos[i-1][0]) M = np.zeros((len(h)-1,len(h)-1)) for i in range(len(h)-1): if i == 0: M[i][i] = 2*(h[i]+h[i+1]) M[i][i+1] = h[i+1] elif i == len(h)-2: M[i][i] = 
2*(h[i]+h[i+1]) M[i][i-1] = h[i] else: M[i][i] = 2*(h[i]+h[i+1]) M[i][i-1] = h[i] M[i][i+1] = h[i+1] print(prettymatrix.matrix_to_string(M, name='Matriz = ')) B = np.zeros((len(h)-1,1)) for i in range(1,len(h)): B[i-1][0] = 6*((pontos[i+1][1]-pontos[i][1])/h[i]) - 6*((pontos[i][1]-pontos[i-1][1])/h[i-1]) print(prettymatrix.matrix_to_string(B, name='B = ')) mu = sistLinear(M, B, len(h)-1) print("Spline natural: \u03BC0 = 0, \u03BC"+str(len(h))+" = 0\n") print("Resolvendo o sistema linear M*Y=B, temos:") print('\u03BC1 = ', mu[0][0]) print('\u03BC2 = ', mu[1][0]) alpha = np.zeros(len(h)) beta = np.zeros(len(h)) gamma = np.zeros(len(h)) for i in range(len(h)): if i == 0: alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((mu[i][0]/6)*h[i]) - ((0/3)*h[i]) beta[i] = 0/2 gamma[i] = (mu[i][0]-0)/(6*h[i]) elif i == len(mu): alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((0/6)*h[i]) - ((mu[i-1]/3)*h[i]) beta[i] = mu[i-1][0]/2 gamma[i] = (0-mu[i-1][0])/(6*h[i]) else: alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((mu[i][0]/6)*h[i]) - ((mu[i-1]/3)*h[i]) beta[i] = mu[i-1][0]/2 gamma[i] = (mu[i][0]-mu[i-1][0])/(6*h[i]) i = np.linspace(0,len(alpha)-1,len(alpha)) Table = PrettyTable() Table.add_column("i",i) Table.add_column("\u03B1",alpha) Table.add_column("\u03B2",beta) Table.add_column("\u03B3",gamma) print("\nCoeficientes dos polinomios da spline:") print(Table) S = [] for i in range(len(alpha)): S.append(pontos[i][1] + (alpha[i]*(x-pontos[i][0])) + (beta[i]*(x-pontos[i][0])**2) + (gamma[i]*(x-pontos[i][0])**3)) print("\nSpline cúbica natural:\n") for i in range(len(S)): print("P"+str(i)+"(x) = "+str(simplify(S[i]))+" , Intervalo=["+str(pontos[i][0])+","+str(pontos[i+1][0])+"]") print("") c = 0 for i in range(1,len(pontos)): intervalo = [pontos[i-1][0],pontos[i][0]] if valor >= intervalo[0] and valor < intervalo[1]: c = copy.copy(i) break print("Queremos encontrar o valor para f("+str(valor)+") então devemos usar P"+str(c-1)+" pois x = "+str(valor)+" está contido no intervalo = ",intervalo) print("\nLogo, a função em x = "+str(valor)+" é aproximadamente: ",S[1].subs(x,valor)) def graficoSpline(pontos, valor): h = [] for i in range(1,len(pontos)): h.append(pontos[i][0] - pontos[i-1][0]) M = np.zeros((len(h)-1,len(h)-1)) for i in range(len(h)-1): if i == 0: M[i][i] = 2*(h[i]+h[i+1]) M[i][i+1] = h[i+1] elif i == len(h)-2: M[i][i] = 2*(h[i]+h[i+1]) M[i][i-1] = h[i] else: M[i][i] = 2*(h[i]+h[i+1]) M[i][i-1] = h[i] M[i][i+1] = h[i+1] B = np.zeros((len(h)-1,1)) for i in range(1,len(h)): B[i-1][0] = 6*((pontos[i+1][1]-pontos[i][1])/h[i]) - 6*((pontos[i][1]-pontos[i-1][1])/h[i-1]) mu = sistLinear(M, B, len(h)-1) alpha = np.zeros(len(h)) beta = np.zeros(len(h)) gamma = np.zeros(len(h)) for i in range(len(h)): if i == 0: alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((mu[i][0]/6)*h[i]) - ((0/3)*h[i]) beta[i] = 0/2 gamma[i] = (mu[i][0]-0)/(6*h[i]) elif i == len(mu): alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((0/6)*h[i]) - ((mu[i-1]/3)*h[i]) beta[i] = mu[i-1][0]/2 gamma[i] = (0-mu[i-1][0])/(6*h[i]) else: alpha[i] = ((pontos[i+1][1]-pontos[i][1])/h[i]) - ((mu[i][0]/6)*h[i]) - ((mu[i-1]/3)*h[i]) beta[i] = mu[i-1][0]/2 gamma[i] = (mu[i][0]-mu[i-1][0])/(6*h[i]) S = [] for i in range(len(alpha)): S.append(pontos[i][1] + (alpha[i]*(x-pontos[i][0])) + (beta[i]*(x-pontos[i][0])**2) + (gamma[i]*(x-pontos[i][0])**3)) c = 0 for i in range(1,len(pontos)): intervalo = [pontos[i-1][0],pontos[i][0]] if valor >= intervalo[0] and valor < intervalo[1]: c = copy.copy(i) break Pn = S return Pn,c def 
minquaddis(pontos, grau): pts = len(pontos) g = np.zeros((grau+1,pts)) f = [] for j in range(pts): for i in range(grau+1): g[i][j] = pontos[j][0]**i f.append(pontos[j][1]) print("Vetores") for i in range(grau+1): print("g"+str(i+1)+" = ", g[i]) print("f = ", f) print("") B = np.zeros((grau+1,grau+1)) for i in range(grau+1): for j in range(grau+1): soma = 0 for k in range(pts): soma += g[i][k] * g[j][k] B[i][j] = soma print("A matriz dos coeficientes do sistema, no qual denotamos por B é") print(prettymatrix.matrix_to_string(B, name='B = ')) print("E a matriz coluna cuja cada entrada é é:") D = [] for i in range(grau+1): soma = 0 for k in range(pts): soma += g[i][k] * f[k] D.append([soma]) D = np.asarray(D) print(prettymatrix.matrix_to_string(D, name='D = ')) print("Solução do sistema B*Y=D via eliminação de Gauss com pivotamento parcial:") Y = sistLinear(B,D,grau+1) print(prettymatrix.matrix_to_string(Y, name='Y = ')) p = 0 for i in range(grau+1): p += Y[i][0]*x**i print("Polinômio g(x) = ",p) def graficodis(pontos,grau): pts = len(pontos) g = np.zeros((grau+1,pts)) f = [] for j in range(pts): for i in range(grau+1): g[i][j] = pontos[j][0]**i f.append(pontos[j][1]) B = np.zeros((grau+1,grau+1)) for i in range(grau+1): for j in range(grau+1): soma = 0 for k in range(pts): soma += g[i][k] * g[j][k] B[i][j] = soma D = [] for i in range(grau+1): soma = 0 for k in range(pts): soma += g[i][k] * f[k] D.append([soma]) D = np.asarray(D) Y = sistLinear(B,D,grau+1) P = 0 for i in range(grau+1): P += Y[i][0]*x**i return P def minquadcont(f, a, b, grau): grau += 1 g = [] for i in range(grau): g.append(x**i) B = np.zeros((grau,grau)) D = np.zeros((grau,1)) for i in range(grau): for j in range(grau): B[i][j] = integrate(g[i]*g[j], (x, a, b)) D[i][0] = integrate(g[i]*f, (x, a, b)) print("A matriz dos coeficientes do sistema, no qual denotamos por B é") print(prettymatrix.matrix_to_string(B, name='B = ')) print("E a matriz coluna cuja cada entrada é é:") print(prettymatrix.matrix_to_string(D, name='D = ')) Y = sistLinear(B, D, grau) print("Solução do sistema B*Y=D via eliminação de Gauss com pivotamento parcial:") print(prettymatrix.matrix_to_string(Y, name='Y = ')) P = 0 for i in range(grau): P += Y[i][0]*x**i print("Polinômio g(x) = ", P) def graficocont(f, a, b, grau): grau += 1 g = [] for i in range(grau): g.append(x**i) B = np.zeros((grau,grau)) D = np.zeros((grau,1)) for i in range(grau): for j in range(grau): B[i][j] = integrate(g[i]*g[j], (x, a, b)) D[i][0] = integrate(g[i]*f, (x, a, b)) Y = sistLinear(B, D, grau) P = 0 for i in range(grau): P += Y[i][0]*x**i return P # + [markdown] id="VwSFtFUIeaF4" colab_type="text" # #Interpolação # + [markdown] id="605xghvWp4ki" colab_type="text" # ## 1. Polinônimo de Lagrange # + [markdown] id="ziIXgK3Vef24" colab_type="text" # O procedimento aqui é Lagrange(pontos,valor,f(x)) # # Onde pontos é a tabela descrita na forma de matriz, valor é o ponto a ser avaliado e $f(x)$ é a função na qual é possível estimar # o erro. Quando se deseja apenas obter o polinômio interpolador de Lagrange, façamos $f(x)=0$. # # Consideremos dois casos: # # (a) Quando a função $f(x)$ é desconhecida. Neste caso, tomamos f=0 no algoritmo Lagrange(pontos,valor,f(x)). # # Exemplo: Conhecendo-se a seguinte tabela # # | x | -1 | 0 | 3 | # |------|----|---|---| # | f(x) | 15 | 8 | 1 | # # Determine o polinômio interpolador na forma de Lagrange e obtenha uma aproximação para $f(1)$. 
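#
# Before solving, it is worth recalling what Lagrange(pontos, valor, f) actually builds. For $n+1$ tabulated points $(x_i, y_i)$ the interpolating polynomial in Lagrange form is
#
# $$p_n(x) = \sum_{i=0}^{n} y_i \, L_i(x), \qquad L_i(x) = \prod_{j=0,\, j \neq i}^{n} \frac{x - x_j}{x_i - x_j},$$
#
# and, when a nonzero $f$ is supplied, the reported bound comes from the standard error term
#
# $$|E(\bar{x})| \leq \frac{\max |f^{(n+1)}|}{(n+1)!} \prod_{i=0}^{n} |\bar{x} - x_i|,$$
#
# where the routines in this notebook (see the Newton and Newton-Gregory code above) estimate $\max |f^{(n+1)}|$ by evaluating the derivative only at the first and last tabulated $x$, so the printed "Limitante" is a heuristic rather than a rigorous maximum over the interval.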
# # Solução: Como a função $f$ é desconhecida, segundo a instrução acima, consideremos $f=0$ e valor = 1: # + id="4An_iMIZf63g" colab_type="code" colab={} def f(x): return 0 valor = 5 # + [markdown] id="w4i2F-xlf_5y" colab_type="text" # Em seguida, consideremos a tabela dada na forma de matriz: # + id="Cm9Oae-XgCt4" colab_type="code" colab={} pontos = [[0,1],[1,2.3],[4,2.2],[6,3.7]] # + [markdown] id="OvzcZKR5gN1W" colab_type="text" # Logo, basta usar o comando Lagrange(dados,valor,f(x)): # + id="eINe6nSogU91" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="59745cc7-a763-445e-d9aa-ff2f3df1c4c9" Lagrange(pontos,valor,f(x)) # + [markdown] id="TRtulaoehNDB" colab_type="text" # Sabemos que os polinômios coeficientes tem a propriedade que $L_i(x_i)=1$ e $L_i(x_j)=0$ para # $i\neq j$. Podemos ver isso graficamente pelo comando # plotL(pontos, xi, xf), onde $x_i$ é o x inicial do gráfico e $x_f$ é o final. # + id="IsmTPOlWLUTD" colab_type="code" colab={} xi = -1.5 xf = 3.5 result = plotL(pontos, xi, xf) fig = go.Figure() z = np.arange(xi,xf,0.001) y = np.zeros((len(result),len(z))) for i in range(len(result)): for j in range(len(z)): y[i][j] = (result[i].subs(x,z[j])) fig.add_trace(go.Scatter(x=z,y = y[i], name=str(result[i]))) fig.show() # + [markdown] id="gKvucI1-kEW9" colab_type="text" # Para plotar o gráfico do polinômio de Lagrange, basta usar o seguinte comando: # + id="LSvFelhSeLE_" colab_type="code" colab={} result = graficoLagrange(pontos) xi = -1 xf = 7 fig = go.Figure() z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(result(z[i])) a = [] w = [] for i in range(len(pontos)): a.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=a,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=[valor],y=[result(valor)], name="Estimativa", mode="markers")) fig.show() # + [markdown] id="5JRTxuDulgPM" colab_type="text" # (b) Caso em que a $f(x)$ é apresentada. Neste caso, é possível avaliar o erro cometido na interpolação. # + [markdown] id="5EKpj4tXlvGG" colab_type="text" # Exemplo: Considere a função $f(x)=\frac{3+x}{1+x}$ definida nos pontos conforme a tabela: # + id="tHucL_-fl2WF" colab_type="code" colab={} pontos = [[0.1,2.82],[0.2,2.67],[0.4,2.43]] # + [markdown] id="Lila8yWRl7uF" colab_type="text" # Determine o polinomio interpolador de $f(x)$, usando a fórmula de Lagrange. Em seguida, avalie $f(0.25)$ e um limitante superior para o erro. # # Solução: Definamos a função $f$ e o valor = 0.25: # + id="fbqv-gr_mFMI" colab_type="code" colab={} def f(x): return (3+x)/(1+x) valor = 0.25 # + [markdown] id="fqUZuH1_mSD2" colab_type="text" # Logo, basta usar o comando Lagrange(pontos,valor,f(x)): # + id="CK-v-fntmXkF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="e33e9e82-0b90-4f8d-b1b2-515e6e8cd7bc" Lagrange(pontos, valor, f(x)) # + [markdown] id="CfQ2cRalnl9f" colab_type="text" # Sabemos que os polinômios coeficientes tem a propriedade que $L_i(x_i)=1$ e $L_i(x_j)=0$ para $i\neq j$. 
Podemos ver isso graficamente pelo comando plotL(pontos) # + id="xzQsis1dnw_4" colab_type="code" colab={} xi = -0.4 xf = 0.9 result = plotL(pontos, xi, xf) fig = go.Figure() z = np.arange(xi,xf,0.001) y = np.zeros((len(result),len(z))) for i in range(len(result)): for j in range(len(z)): y[i][j] = (result[i].subs(x,z[j])) fig.add_trace(go.Scatter(x=z,y = y[i], name=str(result[i]))) fig.show() # + [markdown] id="QLihP20poTHc" colab_type="text" # Para plotar o gráfico do polinômio de Lagrange, basta usar o seguinte comando: # + id="IsqLWIf7oUq0" colab_type="code" colab={} result = graficoLagrange(pontos) xi = -1 xf = 1.5 fig = go.Figure() z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(result(z[i])) a = [] w = [] for i in range(len(pontos)): a.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=a,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=[valor],y=[result(valor)], name="Estimativa", mode="markers")) fig.show() # + [markdown] id="teBBCbccoisL" colab_type="text" # Como neste exemplo, $f(x)$ é dada, façamos os gráfico de $f(x)$ e $p(x)$ empregando o comando: # + id="giimslewoomC" colab_type="code" colab={} result = graficoLagrange(pontos) xi = -0.5 xf = 1.5 fig = go.Figure() z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(result(z[i])) expr = lambdify(x,f(x)) a = [] for i in range(len(z)): a.append(expr(z[i])) b = [] w = [] for i in range(len(pontos)): b.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=b,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=z,y=a, name='Função f(x)')) fig.add_trace(go.Scatter(x=[valor],y=[result(valor)], name="Estimativa", mode="markers", marker=dict(color="red"))) fig.add_trace(go.Scatter(x=[valor],y=[expr(valor)], name="Valor exato", mode="markers")) fig.show() # + [markdown] id="klFdlsFBpzk9" colab_type="text" # ## 2. Interpolação: Diferenças Divididas: Fórmula de Newton # + [markdown] id="jxm09qNgqURx" colab_type="text" # O procedimento aqui é Newton(pontos,valor,f(x)) # # Determine o polinômio interpolador usando a fórmula de Newton. Além disso, avalie $f(0.7)$ onde $f(x)=e^x+sin(x)$ e exiba um limitante superior para o erro. Caso apenas deseje encontrar o polinômio interpolador, considere $f(x)=0$. 
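#
# For reference, the divided-difference table built by Newton(pontos, valor, f) implements
#
# $$p_n(x) = f[x_0] + \sum_{k=1}^{n} f[x_0, x_1, \ldots, x_k] \prod_{j=0}^{k-1} (x - x_j),$$
#
# with the divided differences defined recursively by $f[x_i] = y_i$ and
#
# $$f[x_i, \ldots, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i},$$
#
# which is exactly the dif table filled column by column in the code above.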
# + id="cs53tnnvqzZ9" colab_type="code" colab={} pontos = [[1.0,1.0],[1.02,0.9888],[1.04,0.9784]] # + [markdown] id="wrIXEHG2q0gW" colab_type="text" # Solução: Inicialmente, definamos $f(x)$ e valor: # + id="rpxKVdMJq367" colab_type="code" colab={} def f(x): return 0 valor = 1.03 # + [markdown] id="iBh8OnverAPQ" colab_type="text" # Logo, # + id="bX2OA2VvrBOs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="2a0fb267-25e1-4ac4-e843-ccd87e09a90d" Newton(pontos,valor,f(x)) # + [markdown] id="soMQE53nsTwk" colab_type="text" # Para plotar o gráfico do polinômio interpolador, basta usar o seguinte comando: # + id="GwWtr-IgsYNq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="4b7defa8-ac03-44a9-aa99-a1fe4e988b4a" result = graficoNewton(pontos) xi = 0 xf = 2 fig = go.Figure() expr = lambdify(x, result) z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(expr(z[i])) a = [] w = [] for i in range(len(pontos)): a.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=a,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=[valor],y=[expr(valor)], name="Estimativa", mode="markers")) fig.show() # + [markdown] id="mbFOOogVtZto" colab_type="text" # Para plotar o gráfico de $f(x)$ e $p(x)$, basta usar o comando: # + id="7WC1CbwAtZTG" colab_type="code" colab={} result = graficoNewton(pontos) xi = -1 xf = 2 fig = go.Figure() expr_res = lambdify(x, result) expr_fun = lambdify(x, f(x)) z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(expr_res(z[i])) a = [] for i in range(len(z)): a.append(expr_fun(z[i])) b = [] w = [] for i in range(len(pontos)): b.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=b,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=z,y=a, name='Função f(x)')) fig.add_trace(go.Scatter(x=[valor],y=[expr_res(valor)], name="Estimativa", mode="markers", marker=dict(color="red"))) fig.add_trace(go.Scatter(x=[valor],y=[expr_fun(valor)], name="Valor exato", mode="markers")) fig.show() # + [markdown] id="22sw9rp3uaPs" colab_type="text" # ## 3. Polinômio de Newton-Grégory # + [markdown] id="kuRhjI9YuiuK" colab_type="text" # O procedimento aqui é NewtonGregory(pontos,valor,f(x)) # # Exemplo: Considere a função $f(x)=\frac{1}{1+x}$ tabelada como segue # + id="X3LDIETIwboG" colab_type="code" colab={} pontos = [[1950,352724],[1960,683908],[1970,1235030],[1980,1814990]] # + [markdown] id="ncPC8j0FwdOW" colab_type="text" # Determine o polinômio interpolador pela fórmula de Newton-Gregory, avalie $f(1,3)$ e exiba um limitante superior para o erro. 
# # Solução: Inicialmente, definamos a função $f(x)$: # + id="Ya2ZaLOYwp6E" colab_type="code" colab={} def f(x): return 0 valor = 1975 # + id="6ramw_VExAPY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="817746c6-7eab-47c1-c86c-0875bd2da6d4" NewtonGregory(pontos, valor, f(x)) # + [markdown] id="uwm7IvkzxgWU" colab_type="text" # Para plotar o gráfico do polinômio interpolador, basta usar o seguinte comando: # + id="DsXRMD6rxiBh" colab_type="code" colab={} result = graficoNG(pontos) xi = 1900 xf = 2000 fig = go.Figure() expr = lambdify(x, result) z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(expr(z[i])) a = [] w = [] for i in range(len(pontos)): a.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=a,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=[valor],y=[expr(valor)], name="Estimativa", mode="markers")) fig.show() # + [markdown] id="dEY_YEC1yHHB" colab_type="text" # Finalmente, plotamos o gráfico de $f(x)$ e $p(x)$: # + id="Y1-IQBgoyJwr" colab_type="code" colab={} result = graficoNG(pontos) xi = 1900 xf = 2000 fig = go.Figure() expr_res = lambdify(x, result) expr_fun = lambdify(x, f(x)) z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(expr_res(z[i])) a = [] for i in range(len(z)): a.append(expr_fun(z[i])) b = [] w = [] for i in range(len(pontos)): b.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=b,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P(x)')) fig.add_trace(go.Scatter(x=z,y=a, name='Função f(x)')) fig.add_trace(go.Scatter(x=[valor],y=[expr_res(valor)], name="Estimativa", mode="markers", marker=dict(color="red"))) fig.add_trace(go.Scatter(x=[valor],y=[expr_fun(valor)], name="Valor exato", mode="markers")) fig.show() # + [markdown] id="xUT2uBVJym7D" colab_type="text" # ## 4. Splines # + [markdown] id="LDSkqJdbyqKC" colab_type="text" # Usaremos o comando spline(pontos,valor), que nos dará, além da spline, o sistema linear e todos os coeficientes necessários para a obtenção da spline. # # O procedimento graficoSpline(pontos, valor) fornece os polinômios da spline e o índice do trecho que contém x = valor, sendo usado em seguida para exibir o gráfico da spline no intervalo $[a,b]$. # # Exemplo: Ajuste os dados da tabela abaixo com uma spline cúbica natural. # + id="AjwEYPtEzJEj" colab_type="code" colab={} pontos = [[1,2],[2,3],[4,7],[6,5]] # + [markdown] id="H1J6eKvjzMon" colab_type="text" # Calcule a função em x = 5.
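#
# A brief reminder of what spline(pontos, valor) sets up: writing $h_i = x_{i+1} - x_i$ and $\mu_i = S''(x_i)$, the natural cubic spline uses $\mu_0 = \mu_n = 0$ and obtains the interior second derivatives from the tridiagonal system
#
# $$h_{i-1}\,\mu_{i-1} + 2(h_{i-1} + h_i)\,\mu_i + h_i\,\mu_{i+1} = 6\left(\frac{y_{i+1} - y_i}{h_i} - \frac{y_i - y_{i-1}}{h_{i-1}}\right), \qquad i = 1, \ldots, n-1,$$
#
# which is the system $M\mu = B$ printed by the routine. Each piece on $[x_i, x_{i+1}]$ is then
#
# $$S_i(x) = y_i + \alpha_i (x - x_i) + \beta_i (x - x_i)^2 + \gamma_i (x - x_i)^3, \qquad \alpha_i = \frac{y_{i+1} - y_i}{h_i} - \frac{h_i(\mu_{i+1} + 2\mu_i)}{6}, \quad \beta_i = \frac{\mu_i}{2}, \quad \gamma_i = \frac{\mu_{i+1} - \mu_i}{6 h_i},$$
#
# which matches the alpha, beta and gamma arrays computed in the code.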
# # Solução: De fato, # + id="5Pguokhf1IoQ" colab_type="code" colab={} valor = 5 # + id="zGvE86CizQMw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 595} outputId="ce216579-04b3-4f32-df25-9d153dbd249e" spline(pontos,valor) # + [markdown] id="1zeEQ6AB0cVU" colab_type="text" # E o gráfico de p(x) é: # + id="D3h7H8Ge0d2q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="f11ecd3f-542b-4e79-b15e-e09111c219f1" result,c = graficoSpline(pontos, valor) xi = -1 xf = 7 fig = go.Figure() for i in range(len(pontos)-1): z = np.arange(pontos[i][0],pontos[i+1][0],0.001) y = [] expr_res = lambdify(x, result[i]) for j in range(len(z)): y.append(expr_res(z[j])) fig.add_trace(go.Scatter(x=z,y=y, name='Polinômio Interpolador P'+str(i)+'(x)')) a = [] w = [] for i in range(len(pontos)): a.append(pontos[i][0]) w.append(pontos[i][1]) expr_res = lambdify(x, result[c-1]) fig.add_trace(go.Scatter(x=a,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=[valor],y=[expr_res(valor)], name="Estimativa", mode="markers", marker=dict(color="red"))) fig.show() # + [markdown] id="nakesq7vO6Tu" colab_type="text" # # Método dos Mínimos Quadrados # + [markdown] id="hVq_NsGnPHrS" colab_type="text" # ## 1. Caso Discreto # + [markdown] id="chNFjn5EP2YC" colab_type="text" # Usaremos o comando minquaddis(pontos,n) # # Exemplo: Ajustar os dados da tabela abaixo por um polinômio de grau 2 # # | x | 2 | -1 | 1 | 2 | # |------|---|----|---|---| # | f(x) | 1 | -3 | 1 | 9 | # # Solução: Definamos os pontos como uma matriz com pares ordenados de x e $f(x)$ # + id="4g0X2BiDRcvs" colab_type="code" colab={} pontos = [[.5,4.4],[2.8,1.8],[4.2,1],[6.7,.4],[8.3,.2]] # + [markdown] id="W9LAIaDdRhFQ" colab_type="text" # Recorrendo ao comando acima, tendo em mente que n = 2, obtemos: # + id="cSF7tzZRRh25" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="df14bd85-fd6d-4da5-f1dc-bcc76519ba36" minquaddis(pontos,1) # + [markdown] id="btEQB-haTNRu" colab_type="text" # Enfim, plotaremos o gráfico de $g(x)$ com os pontos da tabela: # + id="2_olQgSVTP3c" colab_type="code" colab={} result = graficodis(pontos,1) xi = 0 xf = 10 fig = go.Figure() expr_res = lambdify(x, result) z = np.arange(xi,xf,0.001) y = [] for i in range(len(z)): y.append(expr_res(z[i])) b = [] w = [] for i in range(len(pontos)): b.append(pontos[i][0]) w.append(pontos[i][1]) fig.add_trace(go.Scatter(x=b,y=w, name="Pontos da tabela", mode="markers")) fig.add_trace(go.Scatter(x=z,y=y, name='Função f(x)')) fig.show() # + [markdown] id="cPnDWNfiW27O" colab_type="text" # ## 2. Caso Contínuo # + [markdown] id="2XF0OyNxW8R4" colab_type="text" # Neste caso, empregaremos o comando: minquadcont(f,a,b,n) # # Exemplo: Usando o método dos mínimos quadrados, aproxime a função $f(x)=e^{-x}$ no intervalo $[1,3]$ por uma reta. 
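#
# In the continuous case, minimizing $\int_a^b \big(f(x) - \sum_{i=0}^{n} c_i\, g_i(x)\big)^2 \, dx$ with the monomial basis $g_i(x) = x^i$ leads to the normal equations $B\,Y = D$ with
#
# $$B_{ij} = \int_a^b g_i(x)\, g_j(x)\, dx, \qquad D_i = \int_a^b g_i(x)\, f(x)\, dx,$$
#
# which is exactly what minquadcont assembles with sympy's integrate before calling sistLinear. In the discrete case (minquaddis) the integrals are replaced by sums over the tabulated points, $B_{ij} = \sum_k g_i(x_k)\, g_j(x_k)$ and $D_i = \sum_k g_i(x_k)\, f(x_k)$.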
# # Solução: Como de praxe, definamos a função $f$, e os valores de $a,b$ e n: # + id="cRPF5JedXSrs" colab_type="code" colab={} def f(x): return exp(-x) a = 1 b = 3 n = 1 # + [markdown] id="o_S29iKcXbRc" colab_type="text" # Logo, # + id="BnOwYAY5XcDb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 340} outputId="97f23ea7-05bc-4182-ea6e-de097d722d64" minquadcont(f(x),a,b,n) # + [markdown] id="L1sNyEI-X6x0" colab_type="text" # Por fim, façamos os gráficos de $f(x)$ e $g(x)$: # + id="TaHq3seYX94t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="8d9d3424-6b07-4d39-efbc-4673ef83f5e1" result = graficocont(f(x), a, b, n) xi = 0 xf = 4 fig = go.Figure() z = np.arange(xi,xf,0.001) expr_res = lambdify(x, result) expr_fun = lambdify(x, f(x)) y = [] for i in range(len(z)): y.append(expr_res(z[i])) c = np.arange(xi,xf,0.001) w = [] for i in range(len(c)): w.append(expr_fun(c[i])) fig.add_trace(go.Scatter(x=c,y=w, name='Função f(x)')) fig.add_trace(go.Scatter(x=z,y=y, name='Função g(x)')) fig.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline #Cargamos los datos data = pd.read_csv('Indigenous.csv') #Exploramos un ejemplo de los datos data.head() #Comprobamos que el numero de columnas print("Número de columnas:", len((data.columns))) #Contamos el número de filas que hay en nuestro dataset print("Número de filas:", np.shape(data)[0]) #Comparamos regiones reg_plot = sns.catplot(x="Reg", kind="count", palette="winter", data=data, size=5) reg_plot.set_axis_labels( "Región", "Frecuencia") reg_plot.fig.set_size_inches(8,8) #Año data["Year"]=[data["Dat"][i][0:4] for i in range(len(data["Dat"]))] year_plot = sns.catplot(x="Year", kind="count", palette="summer", data=data, size=5) year_plot.set_axis_labels( "Año", "Frecuencia") year_plot.fig.set_size_inches(16,8) #Comparamos tipos de conflicto contp_plot = sns.catplot(x="Contp", kind="count", palette="autumn", data=data, size=5) contp_plot.set_axis_labels( "Tipo de conflicto", "Frecuencia") contp_plot.fig.set_size_inches(8,8) #Comparamos naturalezas de los conflictos contp_plot = sns.catplot(x="Agtp", kind="count", palette="autumn", data=data, size=5) contp_plot.set_axis_labels( "Naturaleza del conflicto", "Frecuencia") contp_plot.fig.set_size_inches(8,8) #Comparamos estados de acuerdos contp_plot = sns.catplot(x="Status", kind="count", palette="spring", data=data, size=5) contp_plot.set_axis_labels( "Estado del acuerdo", "Frecuencia") contp_plot.fig.set_size_inches(8,8) #Comparamos estados de acuerdos contp_plot = sns.catplot(x="Stage", kind="count", palette="spring", data=data, size=5) contp_plot.set_axis_labels( "Fase del proceso de acuerdo", "Frecuencia") contp_plot.fig.set_size_inches(8,8) #Países implicados sns.set(font_scale=1.2) country_plot = sns.catplot(x="Con", kind="count", palette="cubehelix", data=data, size=5) country_plot.set_axis_labels( "País/Población indígena", "Frecuencia") country_plot.fig.set_size_inches(30,8) country_plot.set_xticklabels(rotation=90) data["Con"].unique() # + #Atributos interesantes int_attribute=[] for i in data.columns: if data[i].dtypes=="int64" and data[i].sum()>50: Int_attribute.append(i) print(int_attribute) #Grupos de atributos interesantes groups 
= ['GCh', 'GDis', 'GRa', 'GRe', 'GInd', 'GRef', 'GSoc'] gender = ['GeWom'] state_definition = ['StDef'] governance = ['Pol', 'Ele', 'Civso', 'Pubad'] powersharing = ['Polps', 'Terps', 'Eps'] human_rights = ['HrGen', 'EqGen', 'HrDem', 'Prot', 'HrFra', 'Med', 'HrCit'] socio_econ = ['Dev', 'DevSoc', 'Bus', 'Tax', 'Ce'] security = ['SsrPol', 'SsrArm', 'SsrDdr', 'SsrPsf'] trans_justice = ['TjAm', 'TjVic', 'TjRep', 'TjNR'] # - # Gráfico de barras de sobreviviviente segun clase plot = pd.crosstab(index=data["EqGen"], columns=data["Reg"]).apply(lambda r: r/r.sum() *100, axis=1).plot(kind='bar') data["Terps"].hist(bins=10, ec="black") plt.title("Histograma avg_month_turnover") plt.xlabel("Gasto mensual medio") plt.ylabel("Número de clientes") plt.show() # + plt.figure(figsize=(15,7)) plot = df["Year"].value_counts().sort_index().plot(kind='bar', title='Años') plot = df["Status"].value_counts().sort_index().plot(kind='bar', title='Años') plot = df["StageSub"][df["Stage"]=="Cea"].value_counts().sort_index().plot(kind='bar', title='Años') # + #Año data["Year"]=[data["Dat"][i][0:4] for i in range(len(data["Dat"]))] number=[pd.value_counts(data['Year'])[i] for i in range(len(data["Year"].unique()))] year_count # + tot_reg = [pd.value_counts(data["Reg"])[i] for i in range(len(data["Reg"].unique()))] REG = data["Reg"].unique() for i in range(len(data["Reg"])): for j in range(len(REG)): if data["Reg"][i] == REG[j]: data["GCh"][i] = data["GCh"][i]/tot_reg[j] #data.boxplot("Terps") q1 = np.percentile(data["SsrCrOcr"], 25) q3 = np.percentile(data["SsrCrOcr"], 75) #Como solo hay máximos calculamos el valor extremo superior maximum = q3+(q3-q1)*1.5 #Guardamos los índices de dichos valores extremos outliers = data[data["SsrCrOcr"]>maximum].index #Vemos con que tipo de clientes se corresponden #print(data[data["SsrCrOcr"]>maximum]) #BOSNIA # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import scale from sklearn.cluster import KMeans import seaborn as sns #reading Dataset retail = pd.read_csv("Online+Retail.csv", sep = ',',encoding = "ISO-8859-1", header= 0) # parse date retail['InvoiceDate'] = pd.to_datetime(retail['InvoiceDate'], format = "%d-%m-%Y %H:%M") # - #dropping the na cells order_wise = retail.dropna() # + #RFM implementation amount = pd.DataFrame(order_wise.Quantity * order_wise.UnitPrice, columns = ["Amount"]) #merging amount in order_wise order_wise = pd.concat(objs = [order_wise, amount], axis = 1, ignore_index = False) # - #Monetary Function monetary = order_wise.groupby("CustomerID").Amount.sum() monetary = monetary.reset_index() monetary.head() # + #Frequency function frequency = order_wise[['CustomerID', 'InvoiceNo']] k = frequency.groupby("CustomerID").InvoiceNo.count() k = pd.DataFrame(k) k = k.reset_index() k.columns = ["CustomerID", "Frequency"] # - #creating master dataset master = monetary.merge(k, on = "CustomerID", how = "inner") master.head() # + #Generating recency function recency = order_wise[['CustomerID','InvoiceDate']] maximum = max(recency.InvoiceDate) maximum = maximum + pd.DateOffset(days=1) recency['diff'] = maximum - recency.InvoiceDate #Dataframe merging by recency df = pd.DataFrame(recency.groupby('CustomerID').diff.min()) df = df.reset_index() df.columns = ["CustomerID", "Recency"] # - RFM = 
k.merge(monetary, on = "CustomerID") RFM = RFM.merge(df, on = "CustomerID") RFM.head() # + # outlier treatment for Amount plt.boxplot(RFM.Amount) Q1 = RFM.Amount.quantile(0.25) Q3 = RFM.Amount.quantile(0.75) IQR = Q3 - Q1 RFM = RFM[(RFM.Amount >= Q1 - 1.5*IQR) & (RFM.Amount <= Q3 + 1.5*IQR)] # outlier treatment for Frequency plt.boxplot(RFM.Frequency) Q1 = RFM.Frequency.quantile(0.25) Q3 = RFM.Frequency.quantile(0.75) IQR = Q3 - Q1 RFM = RFM[(RFM.Frequency >= Q1 - 1.5*IQR) & (RFM.Frequency <= Q3 + 1.5*IQR)] # outlier treatment for Recency plt.boxplot(RFM.Recency) Q1 = RFM.Recency.quantile(0.25) Q3 = RFM.Recency.quantile(0.75) IQR = Q3 - Q1 RFM = RFM[(RFM.Recency >= Q1 - 1.5*IQR) & (RFM.Recency <= Q3 + 1.5*IQR)] # + from sklearn.preprocessing import StandardScaler RFM_normal = RFM.drop('CustomerID', axis=1) RFM_normal.Recency = RFM_normal.Recency.dt.days standard_scaler = StandardScaler() RFM_norm1 = standard_scaler.fit_transform(RFM_normal) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This is a jupyter notebook to get all the data from my shapefiles, group together the objects with the same label and then save the data as png images of a size 10000 x 10000 pix (which correspond to the size of my bounding boxes). # + import geopandas as gpd import matplotlib from matplotlib import pyplot as plt from shapely.geometry import Point, Polygon from matplotlib.collections import PatchCollection from descartes.patch import PolygonPatch import pandas as pd import os import numpy as np # Load the box module from shapely to create box objects from shapely.geometry import box ## Raster data library import rasterio import rasterio.features import rasterio.warp from rasterio import plot as rioplot # to display images inline get_ipython().magic(u'matplotlib inline') matplotlib.use('Agg')# not sure what I used it for # some custom files from img_helpers import get_all_images_in_folder, return_polygons from PIL import Image # - global_path = "D:/allegoria/datasets_alegoria/BD/old_data_moselle/BDTOPO/1_DONNEES_LIVRAISON_2019-06-00227/" # ## ROADS # load all the shapely files related to ROADS fp_road = global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/A_RESEAU_ROUTIER/ROUTE.shp" fp_nmrd = global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/A_RESEAU_ROUTIER/ROUTE_NOMMEE.shp" fp_prrd= global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/A_RESEAU_ROUTIER/ROUTE_PRIMAIRE.shp" fp_scrd = global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/A_RESEAU_ROUTIER/ROUTE_SECONDAIRE.shp" # Read file using gpd.read_file() data_road = gpd.read_file(fp_road) data_nmrd = gpd.read_file(fp_nmrd) data_prrd = gpd.read_file(fp_prrd) data_scrd = gpd.read_file(fp_scrd) # make a single table with all the roads, not just a signle type all_roads= pd.concat([data_road, data_nmrd, data_prrd, data_scrd],ignore_index=True ) all_roads.head() # small demo of the roads len(all_roads) # we can plot all the Roads network of cote d'Or all_roads.plot() # Data type - geographical projection of he data used. 
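# The next cell checks the layer's CRS; if a layer ever comes in something other than Lambert-93 (EPSG:2154), it can be re-projected with geopandas before any spatial filtering. A minimal sketch (assuming a geopandas version whose to_crs accepts an epsg keyword; to_crs returns a new GeoDataFrame, so the result must be assigned back):
# +
# re-project the roads layer to EPSG:2154 (Lambert-93)
all_roads = all_roads.to_crs(epsg=2154)
# -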
# check the projection of the data - I need espg 2154, otherwise re-project using geopandas all_roads.crs # ## Houses # + # Load all the data from the BUILDINGS caegory fp_bati = global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/BATI_INDIFFERENCIE.shp" fp_inds =global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/BATI_INDUSTRIEL.shp" # Read file using gpd.read_file() buildings1= gpd.read_file(fp_bati) buildings2 =gpd.read_file(fp_inds) # concaenate the buildings into a single table all_buildings = pd.concat([buildings1, buildings2],ignore_index=True ) # - # special buildings fp_remk = global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/BATI_REMARQUABLE.shp" buildings3 = gpd.read_file(fp_remk) buildings3.NATURE.unique() # churches = buildings3.loc[(buildings3['NATURE'] == "Eglise") | (buildings3['NATURE'] == "Chapelle") | (buildings3['NATURE'] =="Bâtiment religieux divers")] print(len(churches)) towers = buildings3.loc[buildings3['NATURE'] == "Tour, donjon, moulin"] monuments = buildings3.loc[buildings3['NATURE'] == "Monument"] forts = buildings3.loc[buildings3['NATURE'] == 'Fort, blockhaus, casemate'] castels =buildings3.loc[buildings3['NATURE'] =='Château'] ordinary_buildings = buildings3.loc[(buildings3['NATURE'] == "Préfecture") | (buildings3['NATURE'] == "Mairie") | (buildings3['NATURE'] =="Sous-préfecture") | (buildings3['NATURE'] =="Bâtiment sportif") ] all_buildings = pd.concat([all_buildings, ordinary_buildings],ignore_index=True, sort=False) print(len(all_buildings)) all_buildings.to_crs({'init': 'epsg:2154'}) # ## WATER fp_water = global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/D_HYDROGRAPHIE/SURFACE_EAU.shp" fp_cours = global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/D_HYDROGRAPHIE/TRONCON_COURS_EAU.shp" data_water = gpd.read_file(fp_water) data_cours = gpd.read_file(fp_cours) all_water = pd.concat([data_water, data_cours],ignore_index=True,sort=False) all_water.plot() len(all_water) data_water.crs len(all_water) # ## SPORT TERRITORIES fp_sport =global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/TERRAIN_SPORT.shp" data_sport = gpd.read_file(fp_sport) data_sport.plot() len(data_sport) # ## CEMETRIES fp_cemetries = global_path +"BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/CIMETIERE.shp" data_cemetries = gpd.read_file(fp_cemetries) data_cemetries.plot() len(data_cemetries) # ## GREENERY fp_greenery = global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/F_VEGETATION/ZONE_VEGETATION.shp" all_greenery = gpd.read_file(fp_greenery) len(all_greenery) all_greenery.plot() len(data_greenery) # ## AERODROMES fp_aero = global_path + "BDT_2-0_SHP_LAMB93_D057-ED083/E_BATI/PISTE_AERODROME.shp" data_aero = gpd.read_file(fp_aero) data_aero.plot() len(data_aero) # ## RAILROADS fp_rail = global_path + "\BDT_2-0_SHP_LAMB93_D057-ED083/B_VOIES_FERREES_ET_AUTRES/TRONCON_VOIE_FERREE.shp" data_rail = gpd.read_file(fp_rail) data_rail.plot() len(data_rail) # ## BOUNDING BOXES FROM THE IMAGES # Finally, load the bounding boxes from all the jp2 images I have for a department. I actually don't use them later, taking the bounding box from the jp2 meta data, but alternatively they can be used as bounding boxes for vector data. bb_boxes_path = 'D:/allegoria/datasets_alegoria/BD/old_data_moselle/BDORTHO/3_SUPPLEMENTS_LIVRAISON_2019-06-00226/BDO_RVB_0M50_JP2-E080_LAMB93_MOSELLE/dalles.shp' bb_boxes= gpd.read_file(bb_boxes_path) bb_boxes.crs bb_boxes.plot() # ## IMAGES as JP2 # Load all the jp2 images from the folder and store the absolute path in a list. 
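# The helper get_all_images_in_folder used below comes from the local img_helpers module, which is not included here. A hypothetical stand-in with the same signature (purely an assumption about what the helper does, not the author's implementation) could be:
# +
import glob
import os


def get_all_images_in_folder(folder, img_type):
    """Hypothetical helper: return the absolute paths of all files in `folder` whose name ends with `img_type`."""
    pattern = os.path.join(folder, '*' + img_type)
    return sorted(os.path.abspath(p) for p in glob.glob(pattern))
# -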
folder = 'D:/allegoria/datasets_alegoria/BD/old_data_moselle/BDORTHO/1_DONNEES_LIVRAISON_2019-06-00226/BDO_RVB_0M50_JP2-E080_LAMB93_MOSELLE' img_type = '.jp2' image_files = get_all_images_in_folder(folder, img_type) name = image_files[0][-36:] print(name) # ## Save the shape files as .png in correspondence with given images # The function below works very slow, long to execute but the files are really huge - 10000x10000 pixels. # I store each shapefile object on a separate canvas, which is then saved as a .png image. # Attention, this version save the images with a bounding box around them, look at the next script to see how the bounding box can be disabled. # + plt.ioff() # don't plot anything here save_path = 'D:/allegoria/topo_ortho/cotedor/roads/' save_path2 = 'D:/allegoria/topo_ortho/cotedor/buildings/' save_path3 = 'D:/allegoria/topo_ortho/cotedor/water/' save_path4 = 'D:/allegoria/topo_ortho/cotedor/sport/' my_dpi=300 for i in range(25): # range - number of images name = image_files[i][-36:] print(name) with rasterio.open(image_files[i]) as dataset: # Read the dataset's valid data mask as a ndarray. mask = dataset.dataset_mask() # Extract feature shapes and values from the array. for geom, val in rasterio.features.shapes( mask, transform=dataset.transform): # Transform shapes from the dataset's own coordinate # reference system to CRS84 (EPSG:4326). geom = rasterio.warp.transform_geom( dataset.crs, 'epsg:2154', geom, precision=6) # Print GeoJSON shapes to stdout. print(geom) # from now on all the shape data bb_box = geom['coordinates'] polygon_bbox = Polygon(bb_box[0]) # here the first one corresponds to my prjection, not sure what the others are for. # bounding boxes coordinates coordinates = bb_box[0] x_width, y_width = (coordinates[2][0]-coordinates[0][0])*2, (coordinates[0][1]-coordinates[1][1])*2 print(x_width, y_width) #roads sg_roads = all_roads[all_roads.geometry.within(polygon_bbox)] #extract segments of roads name_wpath = save_path + name[1:-4] + '.png' plt.autoscale(False) fig = sg_roads.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)),linewidth=0.12, edgecolor='c') fig.set_xlim([coordinates[0][0],coordinates[2][0]]) fig.set_ylim([coordinates[1][1],coordinates[0][1]]) plt.axis('off') plt.autoscale(False) fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(name_wpath, type="png", dpi= float(my_dpi) * 10) # buildings sg_houses = all_buildings[all_buildings.geometry.within(polygon_bbox)] #extract segments of buildings name_wpath = save_path2 + name[1:-4] + '.png' fig = sg_houses.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), facecolor='c', edgecolor='c') fig.set_xlim([coordinates[0][0],coordinates[2][0]]) fig.set_ylim([coordinates[1][1],coordinates[0][1]]) plt.axis('off') plt.autoscale(False) fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(name_wpath,type="png", dpi=float(my_dpi) * 10) # water sg_water = data_water[data_water.geometry.within(polygon_bbox)] #extract segments of water name_wpath = save_path3 + name[1:-4] + '.png' fig = sg_water.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), facecolor='c', edgecolor='c') fig.set_xlim([coordinates[0][0],coordinates[2][0]]) fig.set_ylim([coordinates[1][1],coordinates[0][1]]) plt.axis('off') plt.autoscale(False) fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(name_wpath,type="png", dpi=float(my_dpi) * 10) #sport sg_sport = 
data_sport[data_sport.geometry.within(polygon_bbox)] #extract segments of sport things name_wpath = save_path4 + name[1:-4] + '.png' fig = sg_sport.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), facecolor='c', edgecolor='c') fig.set_xlim([coordinates[0][0],coordinates[2][0]]) fig.set_ylim([coordinates[1][1],coordinates[0][1]]) plt.autoscale(False) plt.axis('off') fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(name_wpath,type="png", dpi=float(my_dpi) * 10) dataset.close() plt.close('all') # - # ## SAVE all LABELS on ONE image # This script saves the jp2 image as a tif one, and also all the shapefiles in different colors on a separate canvas. The frame around the image is deleted. The final files are of the same size as the original jp2 file. # + plt.ioff() # don't plot anything here import matplotlib as mpl mpl.rcParams['savefig.pad_inches'] = 0 save_path = 'D:/allegoria/topo_ortho/cotedor/labels_png/' my_dpi=300 for i in range(295,305): # range - number of images name = image_files[i][-36:] print(name) with rasterio.open(image_files[i]) as dataset: # Read the dataset's valid data mask as a ndarray. mask = dataset.dataset_mask() # Extract feature shapes and values from the array. for geom, val in rasterio.features.shapes( mask, transform=dataset.transform): # Transform shapes from the dataset's own coordinate # reference system to CRS84 (EPSG:4326). geom = rasterio.warp.transform_geom( dataset.crs, 'epsg:2154', geom, precision=6) # Print GeoJSON shapes to stdout. print(geom) raster = dataset.read() # some setup bb_box = geom['coordinates'] polygon_bbox = Polygon(bb_box[0]) # bounding boxes coordinates coordinates = bb_box[0] x_width, y_width = (coordinates[2][0]-coordinates[0][0])*2, (coordinates[0][1]-coordinates[1][1])*2 print(x_width, y_width) # save the image plt.autoscale(tight=True) fig, ax = plt.subplots(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), frameon=False) ax.set_position([0, 0, x_width/float(10*my_dpi), y_width/float(10*my_dpi)]) fig = rasterio.plot.show(raster, ax=ax) plt.axis('off') plt.savefig('D:/allegoria/topo_ortho/cotedor/imgs_tif/'+name[1:-4]+'.tif', type="tif", dpi= float(my_dpi)*10) # from now on all the shape data # shapefiles sg_roads = all_roads[all_roads.geometry.within(polygon_bbox)] #extract segments of roads sg_houses = all_buildings[all_buildings.geometry.within(polygon_bbox)] #extract segments of buildings sg_water = data_water[data_water.geometry.within(polygon_bbox)] #extract segments of water sg_sport = data_sport[data_sport.geometry.within(polygon_bbox)] #extract segments of sport things name_wpath = save_path + name[1:-4] + '.png' plt.autoscale(False) plt.margins(0) fig = plt.figure() fig = sg_roads.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)),linewidth=0.21, edgecolor='b') sg_houses.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), color ='r', ax = fig) sg_water.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), color ='g', ax=fig) sg_sport.plot(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi)), color ='k', ax = fig) fig.set_xlim([coordinates[0][0],coordinates[2][0]]) fig.set_ylim([coordinates[1][1],coordinates[0][1]]) fig.set_position([0, 0, x_width/float(10*my_dpi), y_width/float(10*my_dpi)]) plt.axis('off') plt.autoscale(False) fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(name_wpath, type="png", dpi= float(my_dpi) * 10) dataset.close() plt.close('all') # - # 
## TEST VISUALIZATION # Just an example of how to plot vector data on a raster image using rasterio and matplotlib. # + get_ipython().magic(u'matplotlib inline') save_path = 'D:/allegoria/topo_ortho/mozelle/' my_dpi=300 i =222 with rasterio.open(image_files[222]) as dataset: #an image name as an input # Read the dataset's valid data mask as a ndarray. mask = dataset.dataset_mask() # Extract feature shapes and values from the array. for geom, val in rasterio.features.shapes( mask, transform=dataset.transform): # Transform shapes from the dataset's own coordinate # reference system to CRS84 (EPSG:4326). geom = rasterio.warp.transform_geom( dataset.crs, 'epsg:2154', geom, precision=6) # Print GeoJSON shapes to stdout. print(geom) raster = rasterio.open(image_files[222]) name = image_files[i][-36:] name_wpath = save_path + name[1:-4] + '.png' bb_box = geom['coordinates'] polygon_bbox = Polygon(bb_box[0]) coordinates = bb_box[0] x_width, y_width = (coordinates[2][0]-coordinates[0][0])*2, (coordinates[0][1]-coordinates[1][1])*2 # shapefiles sg_roads = all_roads[all_roads.geometry.within(polygon_bbox)] #extract segments of roads sg_houses = all_buildings[all_buildings.geometry.within(polygon_bbox)] #extract segments of buildings sg_water = data_water[data_water.geometry.within(polygon_bbox)] #extract segments of water sg_sport = data_sport[data_sport.geometry.within(polygon_bbox)] #extract segments of sport things fig, ax = plt.subplots(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi))) rasterio.plot.show(raster, ax=ax) sg_roads.plot(ax=ax, color='blue', linewidth=0.12) sg_houses.plot(ax=ax, color='red') sg_water.plot(ax=ax, color ='green') sg_sport.plot(ax=ax, color ='black') plt.axis('off') plt.savefig(name_wpath, type="png", dpi= float(my_dpi) * 10) # save the resulting figure # - # ## CUT ALL images and save the pathes ## An attempt to cut all the images right away given the coordinates import csv import os import rasterio import rasterio.mask resolution = 1000 # the resolution in geo coordinates, the real pixel size will be Resolution X 2 # + args = {} args["save_path"] = "D:/allegoria/topo_ortho/ING_processed_margo/moselle_2004/" my_dpi = 300 for i in range(len(image_files)): # range - number of images len(image_files) name = image_files[i][-36:] print(name) with rasterio.open(image_files[i]) as dataset: # Read the dataset's valid data mask as a ndarray. mask = dataset.dataset_mask() # Extract feature shapes and values from the array. for geom, val in rasterio.features.shapes( mask, transform=dataset.transform): # Transform shapes from the dataset's own coordinate # reference system to CRS84 (EPSG:4326). 
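        # (note: the target CRS passed to transform_geom below is actually 'epsg:2154' / Lambert-93,
        # to match the vector layers, not the CRS84 / EPSG:4326 mentioned in the comment above)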
geom = rasterio.warp.transform_geom( dataset.crs, 'epsg:2154', geom, precision=6) raster = rasterio.open(image_files[i]) # some setup bb_box = geom['coordinates'] polygon_bbox = Polygon(bb_box[0]) polygons = return_polygons(image_bounds=polygon_bbox.bounds, resolution=(resolution, resolution)) # cut image into patches, the geo res is used # create a directory where the patches will be stored try: os.mkdir(args["save_path"] + name[:-4]) except: print("already exists!") pd_poly = pd.DataFrame(polygons) pd_poly.to_csv(args["save_path"] + name[:-4] + "/geo_polygons.csv") for count, polygon_patch in enumerate(polygons): # get rastr patch and save out_image, _ = rasterio.mask.mask(raster, [polygon_patch], crop=True) new_im = Image.fromarray(np.swapaxes(out_image, 0, 2)) new_im.save(args["save_path"]+ name[:-4] + "/" + str(count).zfill(4) + "_img.png") #get vector data pixelized and save # shapefiles sg_roads = all_roads[all_roads.geometry.intersects(polygon_patch)] # extract segments of roads sg_houses = all_buildings[ all_buildings.geometry.intersects(polygon_patch)] # extract segments of ordinary buildings sg_churches = churches[churches.geometry.intersects(polygon_patch)] # churches sg_towers = towers[towers.geometry.intersects(polygon_patch)] # towers sg_monuments = monuments[monuments.geometry.intersects(polygon_patch)] # monuments sg_forts = monuments[monuments.geometry.intersects(polygon_patch)] # forts sg_castels = castels[castels.geometry.intersects(polygon_patch)] # chateux sg_water = all_water[all_water.geometry.intersects(polygon_patch)] # extract segments of water sg_sport = data_sport[data_sport.geometry.intersects(polygon_patch)] # extract segments of sport things sg_cemetries = data_cemetries[data_cemetries.geometry.intersects(polygon_patch)] # cemetries sg_aero = data_aero[data_aero.geometry.intersects(polygon_patch)] # aeroports sg_railroads = data_rail[data_rail.geometry.intersects(polygon_patch)] # railroads sg_greenery = all_greenery[all_greenery.geometry.intersects(polygon_patch)] # forests # now get them as image fig, ax = plt.subplots(figsize=(20.0, 20.0), dpi=100) # resolution is fixed for 2000 sg_roads.plot(linewidth=4.0, edgecolor='#FFA500', color='#FFA500', ax=ax) sg_water.plot(color='#0000FF', ax=ax) sg_sport.plot(color='#8A2BE2', ax=ax) sg_houses.plot(color='#FF0000', ax=ax) sg_churches.plot(color='#FFFF00', ax=ax) sg_towers.plot(color="#A52A2A", ax=ax) sg_monuments.plot(color='#F5F5DC', ax=ax) sg_forts.plot(color='#808080', ax=ax) sg_castels.plot(color='#000000', ax=ax) sg_cemetries.plot(color='#4B0082', ax=ax) sg_aero.plot(color='#5F021F', ax=ax) sg_railroads.plot(color='#FF00FF', ax=ax) sg_greenery.plot(color='#00FF00', ax=ax) ax.set_xlim([polygon_patch.bounds[0], polygon_patch.bounds[2]]) ax.set_ylim([polygon_patch.bounds[3], polygon_patch.bounds[1]]) ax.set_xticklabels([]) ax.set_yticklabels([]) ax.axis('off') plt.subplots_adjust(left=0., right=1., top=1., bottom=0.) 
plt.savefig("D:/allegoria/topo_ortho/ING_processed_margo/moselle_2004/" + name[:-4] +"/"+ str(count).zfill(4) +"_lbl.png", dpi= 100, bbox_inches='tight', pad_inches=0) # save the resulting figure plt.close('all') # + image_png =plt.imread('D:/allegoria/topo_ortho/ING_processed_margo/moselle_2004/57-2004-0910-6930-LA93-0M50-E080/0018_lbl.png') image = plt.imread('D:/allegoria/topo_ortho/ING_processed_margo/moselle_2004/57-2004-0910-6930-LA93-0M50-E080/0018_img.png') rot_png = np.rot90(image_png, k=1, axes=(1,0)) plt.figure(figsize=(20,10)) plt.figure(1) plt.subplot(211) plt.imshow(rot_png) plt.subplot(212) plt.imshow(image) plt.show() # - polygons = return_polygons(polygon_bbox.bounds, resolution = (resolution ,resolution)) fig, ax = plt.subplots(figsize=(x_width/float(10*my_dpi), y_width/float(10*my_dpi))) rasterio.plot.show(raster, ax=ax) for i in polygons: plt.plot(*i.exterior.xy) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Load all necessary packages import sys sys.path.insert(1, "../") import numpy as np np.random.seed(0) from aif360.datasets import GermanDataset from aif360.metrics import BinaryLabelDatasetMetric from aif360.algorithms.preprocessing import Reweighing from IPython.display import Markdown, display # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Logic & Control Flow Challenge Problem # # ## Password Strength # Here you will write a function that determines if a password is safe or not. The password will be considered strong enough if its length is greater than or equal to 10 symbols, it has at least one digit, as well as containing one uppercase letter and one lowercase letter in it. The password contains only ASCII latin letters or digits. # # ### Input: # A password as a string. # # ### Output: # Boolean value indicating if the password is safe or not. 
## Fill in this cell with your function def password_strength(password): # To run checks on each character in the password we have to use a for loop # We will learn more about loops next week for character in password: # perform checks on the variable password # + ## Run this cell to check your function ## If your function passes all tests there will be no output ## If there is a problem, python will raise an AssertionError ## Check the error message to see which example is throwing the error assert password_strength("") == True, "Failed Check 1" assert password_strength("password") == False, "Failed Check 2" assert password_strength('') == False, "Failed Check 3" assert password_strength('') == True, "Failed Check 4" assert password_strength('') == False, "Failed Check 5" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from keras import optimizers from keras.callbacks import ModelCheckpoint from keras import backend as K from keras import optimizers from keras.layers import Dense from keras.layers import Dense, Dropout from keras.models import Sequential from keras.wrappers.scikit_learn import KerasClassifier from pandas import ExcelFile from pandas import ExcelWriter from PIL import Image from scipy import ndimage from scipy.stats import randint as sp_randint from sklearn.base import BaseEstimator from sklearn.base import TransformerMixin from sklearn.ensemble import ExtraTreesClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.feature_selection import SelectFromModel from sklearn import datasets from sklearn import metrics from sklearn import pipeline from sklearn.metrics import roc_auc_score, roc_curve from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV from sklearn.model_selection import PredefinedSplit from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import ShuffleSplit from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.preprocessing import FunctionTransformer from sklearn.preprocessing import Imputer from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.utils import resample from tensorflow.python.framework import ops import h5py import keras import matplotlib.pyplot as plt import numpy as np import openpyxl import pandas as pd import scipy import tensorflow as tf import xlsxwriter import numpy as np from keras import layers from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D from keras.models import Model from keras.preprocessing import image from keras.utils import layer_utils from keras.utils.data_utils import get_file from keras.applications.imagenet_utils import preprocess_input import pydot from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.utils import plot_model import keras.backend as K # %load_ext autoreload # %matplotlib inline # + from __future__ import print_function import rdkit from rdkit import Chem from rdkit.Chem import AllChem import pandas as pd import numpy as np from matplotlib import pyplot as plt # %matplotlib inline 
print("RDKit: %s"%rdkit.__version__) # - import keras from sklearn.utils import shuffle from keras.models import Sequential, Model from keras.layers import Conv2D, MaxPooling2D, Input, GlobalMaxPooling2D from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.optimizers import Adam from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau print("Keras: %s"%keras.__version__) data = pd.read_excel(r'IGC50.xlsx') data["mol"] = data["smiles"].apply(Chem.MolFromSmiles) def chemcepterize_mol(mol, embed=20.0, res=0.5): dims = int(embed*2/res) #print(dims) #print(mol) #print(",,,,,,,,,,,,,,,,,,,,,,") cmol = Chem.Mol(mol.ToBinary()) #print(cmol) #print(",,,,,,,,,,,,,,,,,,,,,,") cmol.ComputeGasteigerCharges() AllChem.Compute2DCoords(cmol) coords = cmol.GetConformer(0).GetPositions() #print(coords) #print(",,,,,,,,,,,,,,,,,,,,,,") vect = np.zeros((dims,dims,4)) #Bonds first for i,bond in enumerate(mol.GetBonds()): bondorder = bond.GetBondTypeAsDouble() bidx = bond.GetBeginAtomIdx() eidx = bond.GetEndAtomIdx() bcoords = coords[bidx] ecoords = coords[eidx] frac = np.linspace(0,1,int(1/res*2)) # for f in frac: c = (f*bcoords + (1-f)*ecoords) idx = int(round((c[0] + embed)/res)) idy = int(round((c[1]+ embed)/res)) #Save in the vector first channel vect[ idx , idy ,0] = bondorder #Atom Layers for i,atom in enumerate(cmol.GetAtoms()): idx = int(round((coords[i][0] + embed)/res)) idy = int(round((coords[i][1]+ embed)/res)) #Atomic number vect[ idx , idy, 1] = atom.GetAtomicNum() #Gasteiger Charges charge = atom.GetProp("_GasteigerCharge") vect[ idx , idy, 3] = charge #Hybridization hyptype = atom.GetHybridization().real vect[ idx , idy, 2] = hyptype return vect # + mol = data["mol"][104] v = chemcepterize_mol(mol, embed=10, res=0.2) print(v.shape) plt.imshow(v[:,:,:3]) # - def vectorize(mol): return chemcepterize_mol(mol, embed=12) data["molimage"] = data["mol"].apply(vectorize) X_train = np.array(list(data["molimage"][data["split"]==1])) X_test = np.array(list(data["molimage"][data["split"]==0])) print(X_train.shape) print(X_test.shape) assay = "Activity" y_train = data[assay][data["split"]==1].values.reshape(-1,1) y_test = data[assay][data["split"]==0].values.reshape(-1,1) print(np.shape(y_train)) input_shape = X_train.shape[1:] print(input_shape) from keras.layers import Dense, Dropout def Inception0(input): tower_1 = Conv2D(16, (1, 1), padding='same', activation='relu')(input) tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(tower_1) tower_2 = Conv2D(16, (1, 1), padding='same', activation='relu')(input) tower_2 = Conv2D(16, (5, 5), padding='same', activation='relu')(tower_2) tower_3 = Conv2D(16, (1, 1), padding='same', activation='relu')(input) output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=-1) return output def Inception(input): tower_1 = Conv2D(16, (1, 1), padding='same', activation='relu')(input) tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(tower_1) tower_2 = Conv2D(16, (1, 1), padding='same', activation='relu')(input) tower_2 = Conv2D(16, (5, 5), padding='same', activation='relu')(tower_2) tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input) tower_3 = Conv2D(16, (1, 1), padding='same', activation='relu')(tower_3) output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=-1) return output # + input_img = Input(shape=input_shape) x = Inception0(input_img) x = Inception(x) x = Inception(x) od=int(x.shape[1]) x = MaxPooling2D(pool_size=(od,od), 
strides=(1,1))(x) x = Flatten()(x) x = Dense(100, activation='relu')(x) output = Dense(1, activation='linear')(x) model = Model(inputs=input_img, outputs=output) print(model.summary()) # - from keras.preprocessing.image import ImageDataGenerator generator = ImageDataGenerator(rotation_range=180, width_shift_range=0.1,height_shift_range=0.1, fill_mode="constant",cval = 0, horizontal_flip=True, vertical_flip=True,data_format='channels_last', ) def coeff_determination(y_true, y_pred): from keras import backend as K SS_res = K.sum(K.square( y_true-y_pred )) SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) ) return ( 1 - SS_res/(SS_tot + K.epsilon()) ) # + def get_lr_metric(optimizer): def lr(y_true, y_pred): return optimizer.lr return lr # - optimizer = Adam(lr=0.00025) lr_metric = get_lr_metric(optimizer) model.compile(loss="mse", optimizer=optimizer, metrics=[coeff_determination, lr_metric]) # + #Concatenate for longer epochs Xt = np.concatenate([X_train]*50, axis=0) yt = np.concatenate([y_train]*50, axis=0) batch_size=128 g = generator.flow(Xt, yt, batch_size=batch_size, shuffle=True) steps_per_epoch = 10000/batch_size callbacks_list = [ ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-15, verbose=1, mode='auto',cooldown=0), ModelCheckpoint(filepath="weights.best.hdf5", monitor='val_loss', save_best_only=True, verbose=1, mode='auto') ] history =model.fit_generator(g, steps_per_epoch=len(Xt)//batch_size, epochs=150, validation_data=(X_test,y_test), callbacks=callbacks_list) # + hist = history.history plt.figure(figsize=(10, 8)) for label in ['val_coeff_determination','coeff_determination']: plt.subplot(221) plt.plot(hist[label], label = label) plt.legend() plt.xlabel("Epochs") plt.ylabel("coeff_determination") for label in ['val_loss','loss']: plt.subplot(222) plt.plot(hist[label], label = label) plt.legend() plt.xlabel("Epochs") plt.ylabel("loss") plt.subplot(223) plt.plot( hist['lr'],hist['val_coeff_determination'] ) plt.legend() plt.xlabel("lr") plt.ylabel("val_coeff_determination") plt.subplot(224) plt.plot( hist['lr'],hist['val_loss'] ) plt.legend() plt.xlabel("lr") plt.ylabel("val_loss") plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25, wspace=0.35) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scatter # %matplotlib ipympl import matplotlib.pyplot as plt import numpy as np import mpl_interactions.ipyplot as iplt import ipywidgets as widgets import pandas as pd from matplotlib.colors import to_rgba_array, TABLEAU_COLORS, XKCD_COLORS # ## Basic example # + gif="scatter1.png" N = 50 x = np.random.rand(N) def f_y(x, tau): return np.sin(x*tau)**2 + np.random.randn(N)*.01 fig, ax = plt.subplots() controls = iplt.scatter(x,f_y, tau = (1, 2*np.pi, 100)) # - # ## Using functions and broadcasting # You can also use multiple functions. If there are fewer `x` inputs than `y` inputs then the `x` input will be broadcast to fit the `y` inputs. Similarly `y` inputs can be broadcast to fit `x`. 
You can also choose colors and sizes for each line # + gif="scatter2.png" N = 50 x = np.random.rand(N) def f_y1(x, tau): return np.sin(x*tau)**2 + np.random.randn(N)*.01 def f_y2(x, tau): return np.cos(x*tau)**2 + np.random.randn(N)*.1 fig, ax = plt.subplots() controls = iplt.scatter(x,f_y1, tau = (1, 2*np.pi, 100), c = 'blue', s = 5) _ = iplt.scatter(x,f_y2, controls= controls, c = 'red', s = 20) # - # ## Functions for both x and y # # The function for `y` should accept `x` and then any parameters that you will be varying. The function for `x` should accept only the parameters. # + gif="scatter3.png" N = 50 def f_x(mean): return np.random.rand(N) + mean def f_y(x, mean): return np.random.rand(N) - mean fig, ax = plt.subplots() controls = iplt.scatter(f_x, f_y, mean = (0, 1, 100), s = None, c = np.random.randn(N)) # - # ## Using functions for other attributes # # You can also use functions to dynamically update other scatter attributes such as the `size`, `color`, `edgecolor`, and `alpha`. # # The function for `alpha` needs to accept the parameters but not the xy positions as it affects every point. The functions for `size`, `color` and `edgecolor` all should accept `x, y, ` # # + gif="scatter4.png" N = 50 mean = 0 x = np.random.rand(N) + mean - 0.5 def f(x, mean): return np.random.rand(N) + mean - 0.5 def c_func(x, y, mean): return x def s_func(x, y, mean): return np.abs(40 / (x + 0.001)) def ec_func(x, y, mean): if np.random.rand() > 0.5: return "black" else: return "red" fig, ax = plt.subplots() sliders = iplt.scatter( x, f, mean=(0, 1, 100), c=c_func, s=s_func, edgecolors=ec_func, alpha=0.5, ) # - # ## Modifying the colors of individual points # + gif="scatter5.png" N = 500 x = np.random.rand(N) - 0.5 y = np.random.rand(N) - 0.5 def f(mean): x = (np.random.rand(N) - 0.5) + mean y = 10 * (np.random.rand(N) - 0.5) + mean return x, y def threshold(x, y, mean): colors = np.zeros((len(x), 4)) colors[:, -1] = 1 deltas = np.abs(y - mean) idx = deltas < 0.01 deltas /= deltas.max() colors[~idx, -1] = np.clip(0.8 - deltas[~idx], 0, 1) return colors fig, ax = plt.subplots() sliders = iplt.scatter(x, y, mean=(0, 1, 100), alpha=None, c=threshold) # - # ## Putting it together - Wealth of Nations # Using interactive_scatter we can recreate the interactive [wealth of nations](https://observablehq.com/@mbostock/the-wealth-health-of-nations) plot using Matplotlib! # # # The data preprocessing was taken from an [example notebook](https://github.com/bqplot/bqplot/blob/55152feb645b523faccb97ea4083ca505f26f6a2/examples/Applications/Wealth%20Of%20Nations/Bubble%20Chart.ipynb) from the [bqplot](https://github.com/bqplot/bqplot) library. If you are working in jupyter notebooks then you should definitely check out bqplot! # # # ### Data preprocessing # + # this cell was taken wholesale from the bqplot example # bqplot is under the apache license, see their license file here: # https://github.com/bqplot/bqplot/blob/55152feb645b523faccb97ea4083ca505f26f6a2/LICENSE data = pd.read_json('nations.json') def clean_data(data): for column in ['income', 'lifeExpectancy', 'population']: data = data.drop(data[data[column].apply(len) <= 4].index) return data def extrap_interp(data): data = np.array(data) x_range = np.arange(1800, 2009, 1.) 
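    # np.interp below resamples each (year, value) series onto this common yearly grid;
    # years outside the recorded range are held at the first/last observed value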
y_range = np.interp(x_range, data[:, 0], data[:, 1]) return y_range def extrap_data(data): for column in ['income', 'lifeExpectancy', 'population']: data[column] = data[column].apply(extrap_interp) return data data = clean_data(data) data = extrap_data(data) income_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max)) life_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max)) pop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max)) # - # ### Define functions to provide the data # + def x(year): return data["income"].apply(lambda x: x[year - 1800]) def y(x, year): return data["lifeExpectancy"].apply(lambda x: x[year - 1800]) def s(x, y, year): pop = data["population"].apply(lambda x: x[year - 1800]) return 6000 * pop.values / pop_max regions = data["region"].unique().tolist() c = data["region"].apply(lambda x: list(TABLEAU_COLORS)[regions.index(x)]).values # - # ### Marvel at data # + gif="scatter6.png" fig, ax = plt.subplots(figsize=(10, 4.8)) controls = iplt.scatter( x, y, s=s, year=np.arange(1800, 2009), c=c, edgecolors="k", slider_formats="{:d}", play_buttons=True, play_button_pos="left", ) fs = 15 ax.set_xscale("log") ax.set_ylim([0, 100]) ax.set_xlim([200, income_max * 1.05]) ax.set_xlabel("Income", fontsize=fs) _ = ax.set_ylabel("Life Expectancy", fontsize=fs) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"is_executing": true} # # Roboschool # + pycharm={"is_executing": true, "name": "#%%\n"} import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-08:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # - os.makedirs(job_sub_dir) os.makedirs(job_out_dir) tasks_complement = ['RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] # + ## 2L ss # + pycharm={"is_executing": true, "name": "#%%\n"} tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --gres=gpu:{} # request GPU "generic resource"\n'.format(GPU_NUM)) job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} 
--without_delay_train --exp_name ddpg_n_step_2L_NoDelayTrain_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --exp_name ddpg_n_step_2L_NoDelayTrain_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # ## 2L Start Step 100 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-10:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1', 'RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_cpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_2L_ss100_Roboschool_new --exp_name ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1', 'RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', 
'100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_cpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_1L_ss100_Roboschool_new --exp_name ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ### 3L ss100 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-08:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + # tasks = ['RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', # 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_3L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 3 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_Roboschool_3L_ss100 --exp_name ddpg_n_step_3L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## 1L Start Step 100 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-08:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + # tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1', 'RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 
'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_Roboschool_1L_ss100 --exp_name ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] # tasks = ['RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', # 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_Roboschool_1L_ss100 --exp_name ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## 1L Start Step 10000 # + # tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1', 'RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: 
job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --data_dir spinup_data_ddpg_n_step_1L_ss10000 --exp_name ddpg_n_step_1L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## 2L Start Step 10000 # + # tasks = ['RoboschoolAnt-v1', 'RoboschoolHalfCheetah-v1', 'RoboschoolWalker2d-v1', 'RoboschoolHopper-v1'] tasks = ['RoboschoolInvertedPendulum-v1', 'RoboschoolInvertedPendulumSwingup-v1', 'RoboschoolInvertedDoublePendulum-v1', 'RoboschoolReacher-v1', 'RoboschoolPong-v1'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --data_dir spinup_data_ddpg_n_step_2L_ss10000_complement --exp_name ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - import os job_sub_dir = './job_scripts' jobs = os.listdir('./job_scripts') jobs.sort() i=1 for job in jobs: code = os.system('sbatch {}'.format(os.path.join(job_sub_dir, job))) print('{} ---- {}: {}'.format(i, job, code)) i += 1 # # PyBulletGym import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # ## Start Step 100 # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'ReacherPyBulletEnv-v0', 'PusherPyBulletEnv-v0', 'ThrowerPyBulletEnv-v0', 'StrikerPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: 
for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --exp_name ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + tasks = ['InvertedPendulumPyBulletEnv-v0', 'InvertedPendulumSwingupPyBulletEnv-v0', 'InvertedDoublePendulumPyBulletEnv-v0', 'ReacherPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_cpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_PyBulletEnv_1L_ss100_complement --exp_name ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## 2L Start Step 10000 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'ReacherPyBulletEnv-v0', 'PusherPyBulletEnv-v0', 'ThrowerPyBulletEnv-v0', 'StrikerPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory 
per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --exp_name ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## 2L Start Step 100 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-10:00' JOB_MEMORY = '12000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'InvertedPendulumPyBulletEnv-v0', 'InvertedPendulumSwingupPyBulletEnv-v0', 'InvertedDoublePendulumPyBulletEnv-v0', 'ReacherPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_cpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --data_dir spinup_data_ddpg_n_step_2L_ss100_PyBulletEnv_new --exp_name ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # # Random Action # ## Start Step 10000 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'ReacherPyBulletEnv-v0', 'PusherPyBulletEnv-v0', 'ThrowerPyBulletEnv-v0', 'StrikerPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) 
job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --random_action_baseline --exp_name ddpg_n_step_2L_NoDelayTrain_ss10000_random_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ## Start Step 100 1L import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'ReacherPyBulletEnv-v0', 'PusherPyBulletEnv-v0', 'ThrowerPyBulletEnv-v0', 'StrikerPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --exp_name ddpg_n_step_1L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - import os job_sub_dir = './job_scripts' jobs = os.listdir('./job_scripts') jobs.sort() i=1 for job in jobs: code = os.system('sbatch {}'.format(os.path.join(job_sub_dir, job))) print('{} ---- {}: {}'.format(i, job, code)) i += 1 # ## Start Step 10000 1L import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0', 'ReacherPyBulletEnv-v0', 'PusherPyBulletEnv-v0', 'ThrowerPyBulletEnv-v0', 'StrikerPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time 
(DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --seed {1} --l 1 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --exp_name ddpg_n_step_1L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - import os job_sub_dir = './job_scripts' jobs = os.listdir('./job_scripts') jobs.sort() i=1 for job in jobs: code = os.system('sbatch {}'.format(os.path.join(job_sub_dir, job))) print('{} ---- {}: {}'.format(i, job, code)) i += 1 # ## Nonstationary Env Start Step 100 # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --nonstationary_env --gravity_cycle 1000 --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 100 --exp_name ddpg_n_step_Nonstationary1000_2L_NoDelayTrain_ss100_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # - # ### Nonstationary Env Start Step 10000 import numpy as np import os CPU_NUM = 2 JOB_TIME = '0-06:00' JOB_MEMORY = '8000M' job_sub_dir = './job_scripts' job_out_dir = './job_scripts_output' # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') 
job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --nonstationary_env --gravity_change_pattern gravity_averagely_harder --gravity_cycle 1000 --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --exp_name ddpg_n_step_NonstationaryHarder1000_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --nonstationary_env --gravity_change_pattern gravity_averagely_equal --gravity_cycle 1000 --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --exp_name ddpg_n_step_NonstationaryEqual1000_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + tasks = ['AntPyBulletEnv-v0', 'HalfCheetahPyBulletEnv-v0', 'Walker2DPyBulletEnv-v0', 'HopperPyBulletEnv-v0'] n_steps = ['1', '2', '3', '4', '5'] replay_size = ['1000000', '500000', '100000', '50000', '10000'] seeds = ['0', '1', '2', '3'] for s in seeds: for task in tasks: for n_s in n_steps: for b_s in replay_size: job_filename = 'job_{0}_{1}_{2}_{3}.sh'.format(task, s, n_s, b_s) print(job_filename) with open(os.path.join(job_sub_dir, job_filename), 'w') as job_file: job_file.write('#!/bin/bash\n') job_file.write('#SBATCH --account=def-dkulic\n') job_file.write('#SBATCH --cpus-per-task={} #Maximum of CPU cores per GPU request: 6 on Cedar, 16 on Graham.\n'.format(CPU_NUM)) job_file.write('#SBATCH --mem={} # memory per node\n'.format(JOB_MEMORY)) job_file.write('#SBATCH --time={} # time (DD-HH:MM)\n'.format(JOB_TIME)) job_file.write('#SBATCH --output=./job_scripts_output/ddpg_n_step_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}_%N-%j.out # %N for node name, %j for jobID\n'.format(task, s, n_s, b_s)) job_file.write('## Main processing command\n') job_file.write('module load cuda cudnn \n') job_file.write('source ~/tf_gpu/bin/activate\n') job_file.write('python ./ddpg_n_step.py --env {0} --nonstationary_env --gravity_change_pattern gravity_averagely_easier --gravity_cycle 1000 --seed {1} --l 2 --n_step {2} --replay_size {3} --without_delay_train --start_steps 10000 --exp_name ddpg_n_step_NonstationaryEasier1000_2L_NoDelayTrain_ss10000_{0}_{1}_{2}_{3}'.format(task, s, n_s, b_s)) # + pycharm={"is_executing": true, "name": "#%%\n"} import os job_sub_dir = './job_scripts' jobs = 
os.listdir('./job_scripts') jobs.sort() i=1 for job in jobs: code = os.system('sbatch {}'.format(os.path.join(job_sub_dir, job))) print('{} ---- {}: {}'.format(i, job, code)) i += 1 # + pycharm={"is_executing": true, "name": "#%%\n"} # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # --------------------------------------------------------------- # python best courses https://courses.tanpham.org/ # --------------------------------------------------------------- # Write a Python program which takes two digits m (row) and n (column) as input and generates a two-dimensional array. # The element value in the i-th row and j-th column of the array should be i*j. # Note : # i = 0,1.., m-1 # j = 0,1, n-1. # Input # Input number of rows: 3 # Input number of columns: 4 # Output # [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]] row_num = int(input("Input number of rows: ")) col_num = int(input("Input number of columns: ")) multi_list = [[0 for col in range(col_num)] for row in range(row_num)] for row in range(row_num): for col in range(col_num): multi_list[row][col]= row*col print(multi_list) # + row_num=int(input("row: ")) col_num=int(input("col: ")) row_range=range(row_num) col_range=range(col_num) print(list(row_range)) print(col_range) table=[[0 for col in col_range] for row in row_range] for row in row_range: for col in col_range: table[row][col]=row*col print(table) # - num_list = [1, 2, 3,4,5,6] for num in num_list: print (num) num_list.remove(num) print (num_list) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:khtools--compare-kmers--encodings] # language: python # name: conda-env-khtools--compare-kmers--encodings-py # --- # + from glob import iglob import os import pandas as pd import screed import seaborn as sns from tqdm import tqdm # - # # Change to Quest for Orthologs 2019 data directory # cd ~/data_sm/kmer-hashing/quest-for-orthologs/data/2019/ # ls -lha # # Download orthology and transcription factor data # ## Read orthologous transcription factors visual = pd.read_csv('opisthokont_not_human_visual_system_ensembl_compara.csv') print(visual.shape) visual.head() # # Go to Quest for Orthologs fastas # ## Read species metadata species_metadata = pd.read_csv("species_metadata.csv") print(species_metadata.shape) species_metadata.head() # ### Subset to opisthokonts # Estimated opisthokonta divergence time from http://timetree.org/ t = 1105 opisthokonts = species_metadata.query('divergence_from_human_mya <= @t') print(opisthokonts.shape) opisthokonts.head() opisthokonts.query('scientific_name == "Homo sapiens"') # ## Read Gene Accession file # # ``` # Gene mapping files (*.gene2acc) # =============================== # # Column 1 is a unique gene symbol that is chosen with the following order of # preference from the annotation found in: # 1) Model Organism Database (MOD) # 2) Ensembl or Ensembl Genomes database # 3) UniProt Ordered Locus Name (OLN) # 4) UniProt Open Reading Frame (ORF) # 5) UniProt Gene Name # A dash symbol ('-') is used when the gene encoding a protein is unknown. # # Column 2 is the UniProtKB accession or isoform identifier for the given gene # symbol. This column may have redundancy when two or more genes have identical # translations. 
# # Column 3 is the gene symbol of the canonical accession used to represent the # respective gene group and the first row of the sequence is the canonical one. # ``` # + def read_gene2acc(gene2acc, names=['maybe_ensembl_id', 'uniprot_id', 'canonical_accession']): df = pd.read_csv(gene2acc, sep='\t', header=None, na_values='-', names=names) return df gene2acc = read_gene2acc('Eukaryota/UP000005640_9606.gene2acc') # gene2acc = pd.read_csv('Eukaryota/UP000005640_9606.gene2acc', sep='\t', header=None, na_values='-', names=columns) print(gene2acc.shape) gene2acc.head() # - gene2acc.dropna() # ## Read ID mapping file # # ``` # Database mapping files (*.idmapping) # ==================================== # # These files contain mappings from UniProtKB to other databases for each # reference proteome. # The format consists of three tab-separated columns: # # 1. UniProtKB accession # 2. ID_type: # Database name as shown in UniProtKB cross-references and supported by the ID # mapping tool on the UniProt web site (http://www.uniprot.org/mapping) # 3. ID: # Identifier in the cross-referenced database. # # ``` opisthokonts.head() opisthokonts.query('proteome_id == "UP000000437"') # # Get ID Mapping for uniprot ids from ENSMBL # + dfs = [] for filename in tqdm(sorted(iglob("Eukaryota/*.idmapping"))): # print(filename) basename = os.path.basename(filename) prefix = basename.split('.')[0] species_id, taxa_id = prefix.split("_") # print(f"{species_id=} {taxa_id=}") if species_id in opisthokonts.proteome_id.values: df = pd.read_csv(filename, sep='\t', header=None, names=['uniprot_id', 'id_type', 'db_id']) df['species_id'] = species_id df['taxa_id'] = species_id # Use only Ensembl data # df = df.query('id_type == "Ensembl"') print(df.shape) dfs.append(df) id_mapping = pd.concat(dfs, ignore_index=True) print(id_mapping.shape) id_mapping.head() # - # # Merge id mapping with ensembl compara tfs id_mapping_for_merging = id_mapping.copy() id_mapping_for_merging.columns = "target__" + id_mapping_for_merging.columns id_mapping_for_merging.head() visual.shape # + # %%time visual_uniprot_merge_proteins = visual.merge(id_mapping_for_merging, left_on='target__protein_id', right_on='target__db_id') print(visual_uniprot_merge_proteins.shape) visual_uniprot_merge_proteins.tail() # - visual_uniprot_merge_proteins.type.value_counts() # + # %%time visual_uniprot_merge_genes = visual.merge(id_mapping_for_merging, left_on='target__id', right_on='target__db_id') print(visual_uniprot_merge_genes.shape) visual_uniprot_merge_genes.tail() # - visual_uniprot_merge_genes.type.value_counts() # ## Read in QfO human uniprot ids human_id_mapping = pd.read_csv('Eukaryota/UP000005640_9606.idmapping', sep='\t', header=None, names=['uniprot_id', 'id_type', 'db_id']) human_id_mapping.columns = 'source__' + human_id_mapping.columns print(human_id_mapping.shape) human_id_mapping.head() visual_uniprot_merge_proteins.head() human_id_mapping.head() # ## Merge with human id mapping # %%time visual_uniprot_merge_proteins_with_human = visual_uniprot_merge_proteins.merge( human_id_mapping, left_on='source__protein_id', right_on='source__db_id', how='outer') visual_uniprot_merge_proteins_with_human.columns = visual_uniprot_merge_proteins_with_human.columns.str.replace("source__", 'human__') print(visual_uniprot_merge_proteins_with_human.shape) visual_uniprot_merge_proteins_with_human.query('type == "ortholog_one2one"').head() # ## Spot check known orthologs # # ``` # tr|W5NNY8|W5NNY8_LEPOC Rhodopsin OS=Lepisosteu... 
sp|P08100|OPSD_HUMAN Rhodopsin OS=Homo sapiens... # ``` # + # Lepisosteus oculatus (Spotted gar) spotted_gar_rhodopsin = 'W5NNY8' human_rhodopsin = 'P08100' # - human_id_mapping.query('source__uniprot_id == @human_rhodopsin & source__id_type == "Ensembl_PRO"') id_mapping.head() id_mapping.query('uniprot_id == @spotted_gar_rhodopsin') visual.query('source__protein_id == "ENSP00000296271" & target__protein_id == "ENSLOCP00000022347"') visual_uniprot_merge_proteins_with_human.query('human__protein_id == "ENSP00000296271" & target__protein_id == "ENSLOCP00000022347"') visual_uniprot_merge_proteins_with_human.type.value_counts() # ## Write merged TFs to disk pwd # %time visual_uniprot_merge_proteins_with_human.to_csv('opisthokont_not_human_visual_transduction_ensembl_compara_merged_uniprot.csv.gz', index=False) # %time visual_uniprot_merge_proteins_with_human.to_parquet('opisthokont_not_human_visual_transduction_ensembl_compara_merged_uniprot.parquet', index=False) # ## Make set variable for quick membership evalution visual_orthologs = set(visual_uniprot_merge_proteins_with_human.target__uniprot_id) # ### Prove that the set `tf_orthologs` is faster # %timeit 'Q7Z761' in visual_orthologs # %timeit 'Q7Z761' in visual_uniprot_merge_proteins_with_human.target__uniprot_id # #### Yep, sets are 3 orders of magnitude faster! # # Read non-human proteins and subset if they are an ortholog of a TF # ## Make outdir # + # # ls Eukaryota/ # - not_human_outdir = 'Eukaryota/not-human-visual-transduction-fastas/' # ! mkdir $not_human_outdir # ## How much compute is this? # Number of human transcription factor proteins in the quest for orthologs database # visual_uniprot_merge_proteins_with_human.human__uniprot_id.nunique() n_human_qfo = 93 visual_uniprot_merge_proteins_with_human.head() visual_uniprot_merge_proteins_with_human.type.value_counts() visual_uniprot_merge_proteins_with_human.query('target__species != "homo_sapiens"').target__uniprot_id.nunique() n_not_human_qfo = 344 n_human_qfo * n_not_human_qfo * 0.0006 / 60 / 60 # # ### Whoa so this should take less than an hour? 
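#
# A side note on the membership comparison above, as a small self-contained sketch (toy Series and toy
# accession strings, not the QfO tables): `in` on a pandas Series tests membership against the *index*,
# not the values, so value membership should be checked against `.values` or against a precomputed
# `set` such as `visual_orthologs`, and the set gives a constant-time hash lookup.

# +
import pandas as pd

s = pd.Series(['A0A001', 'Q7Z761', 'P08100'])
print('Q7Z761' in s)          # False -- membership is tested against the index (0, 1, 2)
print('Q7Z761' in s.values)   # True  -- linear scan over the underlying array
print('Q7Z761' in set(s))     # True  -- constant-time lookup, as with visual_orthologs above
# -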
# ## Read in protein fastas with screed # + for filename in iglob('Eukaryota/not-human-protein-fastas/*.fasta'): tf_records = [] basename = os.path.basename(filename) with screed.open(filename) as records: for record in records: name = record['name'] record_id = name.split()[0] uniprot_id = record_id.split('|')[1] if uniprot_id in visual_orthologs: tf_records.append(record) if len(tf_records) > 0: print(filename) print(f"\tlen(tf_records): {len(tf_records)}") with open(f'{not_human_outdir}/{basename}', 'w') as f: for record in tf_records: f.write(">{name}\n{sequence}\n".format(**record)) # - # # Script to run ll /mnt/data_sm/olga/kmer-hashing/quest-for-orthologs/data/2019/Eukaryota/human-visual-transduction-fastas/ ll /mnt/data_sm/olga/kmer-hashing/quest-for-orthologs/data/2019/Eukaryota/not-human-visual-transduction-fastas/ | head # + # %%file qfo_human_vs_opisthokont_tfs.sh # #!/bin/bash OUTDIR=$HOME/data_sm/kmer-hashing/quest-for-orthologs/analysis/2019/visual-transduction/ # mkdir -p $OUTDIR/intermediates # cd $OUTDIR/intermediates PARQUET=$OUTDIR/qfo-eukaryota-visual-transduction-protein.parquet EUKARYOTA=/mnt/data_sm/olga/kmer-hashing/quest-for-orthologs/data/2019/Eukaryota HUMAN=$EUKARYOTA/human-visual-transduction-fastas/human_visual_transduction_proteins.fasta NOT_HUMAN=$EUKARYOTA/not-human-visual-transduction-fastas/ conda activate khtools--encodings--compare-cli time khtools compare-kmers \ --processes 120 \ --ksize-min 3 \ --ksize-max 45 \ --parquet $PARQUET \ --intermediate-parquet \ --fastas2 $HUMAN \ $NOT_HUMAN/* | tee khtools_compare-kmers.log # - pwd # ## Time estimation # # taking ~1000 seconds per non-human sequence n_not_human_qfo_tfs n_not_human_qfo_tfs * 1000 / 120 / 60 / 60 / 24 # Okay, so this will take ~4.7 days to compute running on `lrrr` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #data format library import h5py #numpy import numpy as np import numpy.ma as ma import matplotlib.pyplot as plt # # %matplotlib notebook from sklearn.cluster import KMeans import sys from mpl_toolkits.mplot3d import Axes3D import matplotlib.colors as colors import os from scipy.integrate import odeint sys.path.append('../utils/') import operator_calculations as op_calc import stats from sklearn.linear_model import LinearRegression import delay_embedding as embed import clustering_methods as cl colors_state_=plt.rcParams['axes.prop_cycle'].by_key()['color'][:10] plt.rc('text', usetex=True) plt.rc('font',size=14) # - def Lorenz(state,t,sigma,rho,beta): # unpack the state vector x,y,z = state # compute state derivatives xd = sigma * (y-x) yd = (rho-z)*x - y zd = x*y - beta*z # return the state derivatives return [xd, yd, zd] dt = 0.02 frameRate=1/dt T = 5000 state0 = np.array([-8, -8, 27]) t = np.linspace(0, T, int(T*frameRate)) sigma,rho,beta=10,28,8/3 tseries=np.array(odeint(Lorenz,state0,t,args=(sigma,rho,beta)),dtype=np.float64)[int(len(t)/2):] plt.plot(tseries[:,0],tseries[:,2],lw=.02) plt.show() # # Compute predictability as a function of delay X = tseries[:,0].reshape((tseries.shape[0],1)) #take x variable only # + #to get error estimates in the manuscript we split the trajectory into non-overlapping segments n_seed_range=np.arange(200,1100,200) #number of partitions to examine range_Ks = np.arange(1,12,dtype=int) #range of delays to study h_K=np.zeros((len(range_Ks),len(n_seed_range))) for k,K in 
enumerate(range_Ks): traj_matrix = embed.trajectory_matrix(X,K=K-1) for ks,n_seeds in enumerate(n_seed_range): labels=cl.kmeans_knn_partition(traj_matrix,n_seeds) h = op_calc.get_entropy(labels) h_K[k,ks]=h print('Computed for {} delays and {} seeds.'.format(K,n_seeds)) # - plt.plot(n_seed_range,h_K.T) plt.xlabel('N',fontsize=15) plt.ylabel('h (nats/s)',fontsize=15) plt.show() plt.plot(h_K[:,-1],marker='o') plt.show() K_star=6 traj_matrix = embed.trajectory_matrix(X,K=K_star-1) # # Estimate implied time scales of the reversibilized operator # + #to get error estimates in the manuscript we split the trajectory into non-overlapping segments n_seeds = 1000 n_modes=10 n_samples = 50 #bootstrapping samples range_delays = np.arange(1,21) size = 10000 labels = ma.array(cl.kmeans_knn_partition(traj_matrix,n_seeds),dtype=int) delay_range = np.arange(1,20,1) n_modes=5 ts_traj = np.zeros((len(delay_range),n_modes)) for kd,delay in enumerate(delay_range): P = op_calc.transition_matrix(labels,delay) R = op_calc.get_reversible_transition_matrix(P) ts_traj[kd,:] = op_calc.compute_tscales(R,delay,dt,k=n_modes+1) print(delay) # - plt.plot(delay_range*dt,ts_traj) plt.show() delay = 12 print(delay*dt) P = op_calc.transition_matrix(labels,delay) R = op_calc.get_reversible_transition_matrix(P) eigvals,eigvecs = op_calc.sorted_spectrum(R,k=3) print(eigvals) X=eigvecs[:,1].real plt.figure(figsize=(10,7)) color_abs = np.max(np.abs(X)) plt.scatter(traj_matrix[:,0],traj_matrix[:,-1],c=X[labels],cmap='seismic',s=1,vmin=-color_abs,vmax=color_abs) # plt.xticks(range(-20,21,5)) # plt.ticks() # plt.savefig('Phi_2_Lorenz.png') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="R7QEBylzjfhX" # # # # # + id="JdAig2LEfd2q" # + [markdown] id="TdVu7bFxjOJU" # ## Uncomment below cells and run all of them # + id="6SXxlMnzFCMA" colab={"base_uri": "https://localhost:8080/"} outputId="000993f7-400a-4954-e884-5fe8586c94d8" # ! pip install kaggle # + id="ULlYlIaGFCJN" # ! mkdir ~/.kaggle # + [markdown] id="4o948yf1jQOl" # ### From Kaggle website from your account page generate API Token then download and store the kaggle.json file into the same directory of this notebook. # + id="0NKB1GD4FCHY" # ! cp kaggle.json ~/.kaggle/ # + id="421u9brnFCFd" # ! chmod 600 ~/.kaggle/kaggle.json # + [markdown] id="K_elGLdqjSaY" # # # ### copy the API command for the dataset you want to download from kaggle dataset page # + id="A5DPMo_rFCDk" colab={"base_uri": "https://localhost:8080/"} outputId="8520a5ef-788a-4a30-9666-43749a9b4315" # !kaggle datasets download -d andrewmvd/animal-faces # + id="BDSz2ELvFCBy" colab={"base_uri": "https://localhost:8080/"} outputId="fe627b1b-136c-40b8-e3fe-18f954d75c14" # ! unzip animal-faces.zip # + [markdown] id="vZMLiWc5PHcG" # # The keras deep learning libraries has functions to fit model # # 1. .fit # 2. .fit_generator # # # # + [markdown] id="NeM7g_30PHXL" # # What is difference between keras fit and fit_generator? # # # Both the functions perform on same task, but how they do it is completely different. # # let us disccus each function with example and understand their working. # # # ## fit_geneartor # # # For small and simple dataset we can use .fit function because the dataset do not require any data augmentation. 
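#
# As a minimal sketch of that simple case (the tiny model, array names, and shapes below are purely
# illustrative and not part of this assignment), `.fit` simply takes the whole in-memory dataset at once:

# +
import numpy as np
from tensorflow import keras

X_small = np.random.rand(1000, 32)                 # the entire dataset fits comfortably in RAM
y_small = np.random.randint(0, 2, size=(1000,))    # binary labels, purely for illustration

tiny_model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(32,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
tiny_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# .fit: no generator, no augmentation -- the arrays are passed to the model directly
tiny_model.fit(X_small, y_small, batch_size=32, epochs=2, validation_split=0.2)
# -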
# # However, in real life this is not the case. The dataset we get are often complex and challenging with a large size. Thus is fiting it into the memory is difficult. # # The dataset also require to be augmented to prevent overfitting of the model and increase the ability of our model to generalize # # # # In those cases we need to use Keras .fit_generator # function # # # First step is to initialize the number of epochs we are going to train our neural network along with the batch size. # # Then we initialize a object named gen which is a keras ImageDatagenerator object that will be used to perform data augmentation i.e randomly rotating, shifting, chaning brightness, etc # # # Each new batch of data is randomly adjusted according to the parameters supplied to ImageDataGenerator. # # # Now we need to use keras .fit_generator function to train our neural network # # # # # + id="no6Wg1e4E8V0" # importing all the necessary for example: tensorflow, ImageDataGenerator, etc import os import cv2 import random import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf from tensorflow import keras from keras.preprocessing.image import ImageDataGenerator as gen # + id="PKF9ozLhIDbj" # create a function for plotting images with parameter image array # create a subplot of 9 images using fig, axes # flatten the axes # loop through image array and axes simultaneously using zip function # show image using imshow() # tighten the layout to avoid white space between images # display the subplot def plotting(image_arr): fig,axes = plt.subplot(3,3,figsize=(9,9)) axes = axes.flatten() for img,ax in zip(image_arr,axes): ax.imshow(img) ax.axis('off') plt.tight_layout() plt.show() # + [markdown] id="_Nob9B90bVmn" # # 1. Read Image # # Read the images from the dataset directory # + colab={"base_uri": "https://localhost:8080/"} id="ZbqOFn3gI2iD" outputId="3ca02b88-29e0-47df-8fb4-9478ed4d98e7" # chose a random image from directory using random.choice(os.listdir(image_folder path)) path = r'/content/afhq/train/dog' random_img = random.choice(os.listdir(path)) # print the chosen image print(random_img) # + id="fc6zgnxHI3rO" colab={"base_uri": "https://localhost:8080/"} outputId="cf8a867f-88f8-4936-b687-bb75a7582653" # create a image path varibale to store concatenation of image folder path + chosen image name random_imgpath = path+'/'+random_img print("image Path: ",random_imgpath) # + [markdown] id="PqAjNadjbYWQ" # # 2. 
Plot the sample image chosen # # just plot the image chosen in above cells using matplotlib # + colab={"base_uri": "https://localhost:8080/", "height": 286} id="U4muIpTtI5gM" outputId="cbbf0cd5-bfdb-43b8-f6bc-0c22930bde45" # read image from path and expand dims with axis 0 and store the result in image named variable image = np.expand_dims(cv2.imread(random_imgpath),0) #print(image.shape) # 4 dimensional required set # plot the image using plt.imshow() plt.imshow(image[0]) # + id="PNo57Dp4IpEC" # set train data path in a variable named train_path train_path = "/content/afhq/train" # + id="BzXFeMprII9A" ''' create a variable with reference to ImageDataGenerator with following parameters: gen = ImageDataGenerator( rotation_range= integer value, width_shift_range= float value, height_shift_range= float vaue, zoom_range= float value, channel_shift_range= integer, brightness_range= [min smaller than 1, max grater than 1], horizontal_flip= boolean, vertical_flip= boolean) ''' generate = keras.preprocessing.image.ImageDataGenerator(rotation_range=2,width_shift_range=1.3,height_shift_range=1.8,zoom_range=1.9,channel_shift_range=3, brightness_range=[0.3,0.9],horizontal_flip=True,vertical_flip=True) image_size = 244 batch_size = 48 # image size variable named IMAGE_SIZE with size value # batch size variable named BATCH_SIZE with size value # + id="5GTf3ceJoEhC" # + [markdown] id="72jJvPVMkDF1" # # # Using flow_from_directory function # # 1. The directory must be set to the path where your 'n' classes of folders are present. # 2. The target_size is the size of your input images, every image will be resized to this size. # 3. color_mode: if the image is either black and white or grayscale set 'grayscale' or if the image has three color channels, set 'rgb'. # 4. batch_size: No. of images to be yielded from the generator per batch. # 5. class_mode: Set 'binary' if you have only two classes to predict, if not set to'categorical'. in case if you're developing an Autoencoder system, both input and the output would probably be the same image, for this case set to 'input'. # 6. shuffle: Set True if you want to shuffle the order of the image that is being yielded, else set False. # 7. seed: Random seed for applying random image augmentation and shuffling the order of the image. # + [markdown] id="-EKNKYEucFH4" # # Train generator # # Using ImageDatagenerator function flow_from_directory on training data directory we can generate training data with data augmentation. 
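#
# Quick sanity check before building the generator (a sketch, assuming the afhq folder layout from the
# Kaggle download above): flow_from_directory expects one sub-folder per class under the directory it is
# given, so the class names are simply the sub-folders of train_path.

# +
import os
print(sorted(os.listdir(train_path)))   # expected: ['cat', 'dog', 'wild']
# -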
# + colab={"base_uri": "https://localhost:8080/"} id="5TV2RkLgcDW6" outputId="c63d1a5b-541c-4151-bfb8-318ca027d065" '''create a variable with reference to ImageDataGenerator with following parameters: train_generator = gen.flow_from_directory( train_path, target_size=(IMAGE_SIZE,IMAGE_SIZE), subset='training', batch_size= BATCH_SIZE, color_mode = 'rgb', shuffle=boolean, class_mode= 'categorical' or 'binary', seed= integer value) ''' train_generated = generate.flow_from_directory(train_path,target_size=(image_size,image_size),subset='training',batch_size=batch_size,color_mode='rgb',shuffle=True,class_mode = 'categorical',seed = 42) # + [markdown] id="eTBf0OgmbyIQ" # # ploting the generated images above flow_from_directory method # + colab={"base_uri": "https://localhost:8080/", "height": 719} id="jm8DgGNUIHqN" outputId="7fae3f04-9502-4a0b-e88c-2e7d4bdaa352" # create two variables x,y which will store values returned by train_generator.next() x,y = train_generated.next() fig = plt.figure(figsize=(20,12)) # create figure row = 3 cols =3 for i in range(9): fig.add_subplot(row,cols,i+1) image = x[i].astype(np.uint8) label = y[i] plt.imshow(image) plt.title(label) plt.axis('off') plt.show() # create loop in range 9 or your choice # add a subplot to figure of size you want # create a image named variable to store ith value of x astype np.uint8 # create a label named variable to store ith value of y astype np.uint8 # plot the image using imshow # add the title as actual label to image # show the image # + id="aP8UU-THJ555" # set valid data path in a variable named valid_path val_path = "/content/afhq/val" # + [markdown] id="j2rlj_7ecK4f" # # Valid generator # # Using ImageDatagenerator function flow_from_directory on training data directory we can generate training data with data augmentation. # + colab={"base_uri": "https://localhost:8080/"} id="alr4ElihIkRg" outputId="11291a51-3bed-4c05-ab2f-b6f2b93bba2b" '''create a variable with reference to ImageDataGenerator with following parameters: valid_generator = gen.flow_from_directory( valid_path, target_size=(IMAGE_SIZE,IMAGE_SIZE), shuffle= boolean, class_mode='categorical' or 'binary') ''' val_generator = generate.flow_from_directory(val_path,target_size=(image_size,image_size),shuffle=True,class_mode='categorical') # + [markdown] id="HxdVhGTUcNuc" # # Building model # + [markdown] id="E25E1umPdQOt" # # To train faster we will be using Transer Learning with # # ## MobileNetV2 Model # # MobileNet is a CNN architecture model for Image Classification and Mobile Vision. As compared to other models, running or applying transfer learning using MobileNet consumes very less omputation power. Thus it can be used on devices such as Mobile devices, embedded systems and computers wthout GPU or low specification. # It runs well on browser as browser have computational limitation. 
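#
# One gap worth noting before the next cells: a later cell applies the base model as `x = base_model(inps)`,
# but `inps` is not defined anywhere above. Its own instruction comment says to "define inputs using
# tensorflow Input --> 224,224,3", so presumably something like the sketch below was intended (and
# image_size above would then need to be 224 rather than 244 so the generator batches match this shape).

# +
# assumed definition of the missing input tensor (not present in the original notebook)
inps = keras.Input(shape=(224, 224, 3))
# -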
# # + id="NJLdVT5ALpt-" colab={"base_uri": "https://localhost:8080/"} outputId="3f29182e-841a-4ffa-8cf5-24d463fa5fb7" # create a variable name base_model with MobileNetV2 model from tensorflow application library # weight: imagnet # input shape : 224,224,3 base_model = keras.applications.MobileNetV2(input_shape=(224,224,3),include_top=False,weights="imagenet") # set base model trainable to False base_model.trainable=False # + id="N-5YOncMLq_V" # create a variable and store number of classes i.e 3 classes in this case classes = ['cat','dog','wild'] # + id="JkqPHS7mLtDn" # define inputs using tensorflow Input --> 224,224,3 # set the inputs to base_model x = base_model(inps) # Add GlobalAveragePooling2D layer to base_model x = keras.layers.GlobalAveragePooling2D()(x) # define initializer for model i.e GlorotUniform for uniform distribution of tensor data initializer = tf.keras.initializers.GlorotUniform() # create a variable to store activation, use any sigmoid, softmax (softmax recommended) var = "softmax" # Define a Dense layer output = keras.layers.Dense(3)(x) # Create a model using the layers created model = keras.Model(inps,output) # + [markdown] id="KN8kDJgCdV6M" # # Note: # The last layer has 10 number of classes unit. So the output(predicted labels) will be 10 floating points as the actual label is a single integer number. # # For the last layer, the activation function can be: # 1. None # 2. sigmoid # 3. softmax # # When there is no activation function used inthe model's last layer, we need to set from_logits=True in cross-entropy loss function during compiling model. This loss function will apply sigmoid transformation on predicted label values # + colab={"base_uri": "https://localhost:8080/"} id="9V_rncsSgCCA" outputId="f3ede566-88a4-43ca-c429-2cdfcd008a21" model.summary() # + [markdown] id="QxLH6abPdZ1w" # ### Let's see the model summary # + colab={"base_uri": "https://localhost:8080/"} id="HZS8q0JDLzBN" outputId="eca9f642-0631-43fd-e444-14f9cbb7ff95" # visualize modle summary model.summary() # + [markdown] id="Wf10goLediHF" # ## Compile the model # # Compile the model before training it. Since there are 3 classes, use the tf.keras.losses.CategoricalCrossEntropy() loss with from_logits=True (if activation function is not mentioned while creating model) # + id="hHzPymR0ddog" # Compile the model by passing loss, optimizer and metrics model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),loss= keras.losses.CategoricalCrossentropy(from_logits=True),metrics=['accuracy']) # + [markdown] id="LmE5h-Qva6w4" # # # How fit_generator works exactly? # # In the background, keras fit_generaor uses following process on the data while training a model: # 1. Keras calls the genrator function passed to fit_generator i.e gen.flow_from_directory in our case. # 2. The generator returns the batch of size mentioned while generating image to the fit_generator. # 3. The .fit_generator function accepts the batch of image data, perform backpropagation and updates the weights in our model. # 4. This process is repeated until we have reached the desired number of epochs. # + [markdown] id="3r5zE2dRa07f" # ## Why do we need steps_per_epoch? # # The keras data genrator is meant to loop infinitely i.e it should never return or exit. # # Since the function is intended to loop infinitely, Keras has no ability to determine when one epoch starts and a new epoch begins. # # Therefore, we compute the steps_per_epoch # value as the total number of training data points divided by the batch size. 
Once Keras hits this step count it knows that it's a new epoch. # # + colab={"base_uri": "https://localhost:8080/"} id="6QTwWDXbKA9c" outputId="135433ab-6dc5-47e9-9419-6c8193b70f24" # create a variable name step_size_train and set value as train_generator.n// train_generator.batch_size step_size_train = train_generated.n //train_generated.batch_size # create a variable name step_size_valid and set value as valid_generator.n// train_generator.batch_size step_size_val = val_generator.n//train_generated.batch_size print('train: ',step_size_train,' val: ',step_size_valid) #fit model using fit_generator function by passing following paarameters: model.fit_generator(generator = train_generated, steps_per_epoch = step_size_train, validation_data = val_generator, validation_steps=step_size_val, epochs=5) # + colab={"base_uri": "https://localhost:8080/"} id="Rjr6PHAqoo7O" outputId="0f341073-e038-40ad-854e-c493d6da9a1f" step_size_valid = val_generator.n//train_generated.batch_size print(step_size_valid) # + [markdown] id="FIlpLJFxfgJn" # # Summary # # In this assignement you learned the differences between Keras fit and fitgenerator functions used to train a deep neural network. # # .fit is used when there is no need for data augmentation and teh whole data can be used to fit in memory. # # Whereas, # .fit_generator should be used when data augmenetation to be applied or data is too large to store in memory or whenever there is need for converting image dataset in to baches of images # + [markdown] id="QWLNveojmfs9" # ![link text](http://recyclefloridatoday.org/wp-content/uploads/2020/07/Congratulations-1.jpg) # + [markdown] id="muaF2swfmlT8" # # Please fill the below feedback form about this assignment # # https://forms.zohopublic.in/cloudyml/form/CloudyMLDeepLearningFeedbackForm/formperma/VCFbldnXAnbcgAIl0lWv2blgHdSldheO4RfktMdgK7s # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deep Q Network with Prioritized Experience Replay # ## Imports # + import gym import numpy as np import torch import torch.optim as optim from IPython.display import clear_output from matplotlib import pyplot as plt # %matplotlib inline from timeit import default_timer as timer from datetime import timedelta import math from utils.wrappers import * from agents.DQN import Model as DQN_Agent from networks.networks import DQN from networks.network_bodies import AtariBody from utils.hyperparameters import Config # - # ## Hyperparameters # + config = Config() config.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device = config.device #epsilon variables config.epsilon_start = 1.0 config.epsilon_final = 0.01 config.epsilon_decay = 30000 config.epsilon_by_frame = lambda frame_idx: config.epsilon_final + (config.epsilon_start - config.epsilon_final) * math.exp(-1. 
* frame_idx / config.epsilon_decay) #misc agent variables config.GAMMA=0.99 config.LR=1e-4 #memory config.TARGET_NET_UPDATE_FREQ = 1000 config.EXP_REPLAY_SIZE = 100000 config.BATCH_SIZE = 32 config.PRIORITY_ALPHA=0.6 config.PRIORITY_BETA_START=0.4 config.PRIORITY_BETA_FRAMES = 100000 #Learning control variables config.LEARN_START = 10000 config.MAX_FRAMES=700000 #Nstep controls config.N_STEPS=1 # - # ## Prioritized Replay (Without Rank-Based Priority) class PrioritizedReplayMemory(object): def __init__(self, capacity, alpha=0.6, beta_start=0.4, beta_frames=100000): self.prob_alpha = alpha self.capacity = capacity self.buffer = [] self.pos = 0 self.priorities = np.zeros((capacity,), dtype=np.float32) self.frame = 1 self.beta_start = beta_start self.beta_frames = beta_frames def beta_by_frame(self, frame_idx): return min(1.0, self.beta_start + frame_idx * (1.0 - self.beta_start) / self.beta_frames) def push(self, transition): max_prio = self.priorities.max() if self.buffer else 1.0**self.prob_alpha if len(self.buffer) < self.capacity: self.buffer.append(transition) else: self.buffer[self.pos] = transition self.priorities[self.pos] = max_prio self.pos = (self.pos + 1) % self.capacity def sample(self, batch_size): if len(self.buffer) == self.capacity: prios = self.priorities else: prios = self.priorities[:self.pos] total = len(self.buffer) probs = prios / prios.sum() indices = np.random.choice(total, batch_size, p=probs) samples = [self.buffer[idx] for idx in indices] beta = self.beta_by_frame(self.frame) self.frame+=1 #min of ALL probs, not just sampled probs prob_min = probs.min() max_weight = (prob_min*total)**(-beta) weights = (total * probs[indices]) ** (-beta) weights /= max_weight weights = torch.tensor(weights, device=device, dtype=torch.float) return samples, indices, weights def update_priorities(self, batch_indices, batch_priorities): for idx, prio in zip(batch_indices, batch_priorities): self.priorities[idx] = (prio + 1e-5)**self.prob_alpha def __len__(self): return len(self.buffer) # ## Agent class Model(DQN_Agent): def __init__(self, static_policy=False, env=None, config=None): super(Model, self).__init__(static_policy, env, config) def declare_networks(self): self.model = DQN(self.num_feats, self.num_actions, body=AtariBody) self.target_model = DQN(self.num_feats, self.num_actions, body=AtariBody) def declare_memory(self): self.memory = PrioritizedReplayMemory(self.experience_replay_size, self.priority_alpha, self.priority_beta_start, self.priority_beta_frames) def compute_loss(self, batch_vars): batch_state, batch_action, batch_reward, non_final_next_states, non_final_mask, empty_next_state_values, indices, weights = batch_vars #estimate self.model.sample_noise() current_q_values = self.model(batch_state).gather(1, batch_action) #target with torch.no_grad(): max_next_q_values = torch.zeros(self.batch_size, device=self.device, dtype=torch.float).unsqueeze(dim=1) if not empty_next_state_values: max_next_action = self.get_max_next_state_action(non_final_next_states) self.target_model.sample_noise() max_next_q_values[non_final_mask] = self.target_model(non_final_next_states).gather(1, max_next_action) expected_q_values = batch_reward + ((self.gamma**self.nsteps)*max_next_q_values) diff = (expected_q_values - current_q_values) self.memory.update_priorities(indices, diff.detach().squeeze().abs().cpu().numpy().tolist()) loss = self.huber(diff).squeeze() * weights loss = self.huber(diff) loss = loss.mean() return loss # ## Plot Results def plot(frame_idx, rewards, losses, sigma, 
elapsed_time): clear_output(True) plt.figure(figsize=(20,5)) plt.subplot(131) plt.title('frame %s. reward: %s. time: %s' % (frame_idx, np.mean(rewards[-10:]), elapsed_time)) plt.plot(rewards) if losses: plt.subplot(132) plt.title('loss') plt.plot(losses) if sigma: plt.subplot(133) plt.title('noisy param magnitude') plt.plot(sigma) plt.show() # ## Training Loop # + start=timer() env_id = "PongNoFrameskip-v4" env = make_atari(env_id) env = wrap_deepmind(env, frame_stack=False) env = wrap_pytorch(env) model = Model(env=env, config=config) episode_reward = 0 observation = env.reset() for frame_idx in range(1, config.MAX_FRAMES + 1): epsilon = config.epsilon_by_frame(frame_idx) action = model.get_action(observation, epsilon) prev_observation=observation observation, reward, done, _ = env.step(action) observation = None if done else observation model.update(prev_observation, action, reward, observation, frame_idx) episode_reward += reward if done: model.finish_nstep() model.reset_hx() observation = env.reset() model.save_reward(episode_reward) episode_reward = 0 if np.mean(model.rewards[-10:]) > 19: plot(frame_idx, all_rewards, losses, timedelta(seconds=int(timer()-start))) break if frame_idx % 10000 == 0: plot(frame_idx, model.rewards, model.losses, model.sigma_parameter_mag, timedelta(seconds=int(timer()-start))) model.save_w() env.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Extinction risks for cartilaginous fish # # An exploration of some of the results in [Extinction risk is most acute for the world’s largest and smallest vertebrates](https://www.pnas.org/content/114/40/10678), Ripple et al., PNAS October 3, 2017 114 (40) 10678-10683 # # Specifically, we'll investigate how extinction risks vary by weight for cartilaginous fish. This provides some nice practice with simple linear and logistic regression, with the overall goal of explaining basic diagnostics for both methods. # # All of this (and more!) is in Chapters 2-5 of my Manning book, [Regression: A friendly guide](https://www.manning.com/books/regression-a-friendly-guide). # # This notebook and the relevant CSVs are available in my [regression repo on github](https://github.com/mbrudd/regression), along with other code and data for the book. Clone and fork at will! # ### Imports and settings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm import statsmodels.formula.api as smf from scipy.stats import chi2 from sklearn import metrics import warnings warnings.filterwarnings('ignore') sns.set_theme() plt.rcParams['figure.figsize'] = [8,8] # ### Data # # First things first: we need data! Thanks to the good people at [ROpenSci](https://ropensci.org/), the data from [Fishbase.org](https://www.fishbase.se/search.php) is easily available in the [rfishbase](https://docs.ropensci.org/rfishbase/index.html) package. 
# fish = pd.read_csv("fish.csv") fish.shape fish.columns fish = fish.filter(["Species","Length","Weight"]) fish train = fish.dropna(axis='index') train sns.scatterplot(data=train, x="Length", y="Weight") plt.title("Fish weights versus fish lengths") train = train.assign(LogLength = np.log(train.Length), LogWeight = np.log(train.Weight)) sns.scatterplot(data=train, x="LogLength", y="LogWeight") plt.title("Log(Weight) versus Log(Length)") plt.axvline( np.mean( train["LogLength"] ), linestyle='--') plt.axhline( np.mean( train["LogWeight"] ), linestyle='--') # ### Linear regression # # The basic metric for the strength of a _linear_ relationship is the _correlation coefficient_: # train.LogLength.corr( train.LogWeight ) # This is a very strong correlation! In real life, especially in the social sciences, correlations between .3 and .7 in magnitude are much more common. Having checked the linear relationship, let's fit the regression line: # train_model = smf.ols( "LogWeight ~ LogLength", data=train) train_fit = train_model.fit() train_fit.params # # This model says that # # $$ \log{ \left( \text{Weight} \right) } ~ = ~ -3.322617 + 2.681095 \times \log{ \left( \text{Length} \right) } \ ,$$ # # which is easier to digest after exponentiating: # # $$ \text{Weight} ~ = ~ e^{-3.322617} \times \text{Length}^{2.681095} ~ = ~ 0.036 \times \text{Length}^{2.681095} \ .$$ # # This _power law relationship_ says that weight is roughly proportional to the cube of the length! # # *** # # The _null model_ predicts that _every_ needed/unseen weight equals the average of the known weights: # np.mean( fish["Weight"] ) np.log( np.mean( fish["Weight"] ) ) # Is the regression model better than this at predicting weights? Answering this specific question is the job of the _coefficient of determination_, denoted $R^2$. # # $$R^2 ~ = ~ \frac{ \text{TSS} - \text{SSR} }{ \text{TSS} } ~ = ~ 1 - \frac{ \text{SSR} }{ \text{TSS} }$$ # # You could compute it this way... ( train_fit.centered_tss - train_fit.ssr) / train_fit.centered_tss # # but don't! It's already provided: # train_fit.rsquared # ### Sharks! 
# # The information we need about [cartilaginous fish](https://en.wikipedia.org/wiki/Chondrichthyes) (sharks, rays, skates, sawfish, ghost sharks) comes from the [IUCN Red List](https://www.iucnredlist.org/): sharks = pd.read_csv("chondrichthyes.csv") sharks sharks = sharks.join( fish.set_index("Species"), on="Species") sharks sharks = sharks[ sharks.Length.notna() ] sharks = sharks[ sharks.Category != "Data Deficient" ] sharks # ### Data imputation # # Use the power law relationship to _impute_ the missing weights: # imp = np.exp( train_fit.params.Intercept )*np.power( sharks.Length, train_fit.params.LogLength ) sharks.Weight = sharks.Weight.where( sharks.Weight.notna(), imp ) sharks sharks = sharks.assign(LogLength = np.log(sharks.Length), LogWeight = np.log(sharks.Weight)) sns.scatterplot( data=sharks, x="LogLength", y="LogWeight") plt.title("Log(Weight) versus Log(Length) for sharks") sharks threatened = ["Critically Endangered","Endangered","Vulnerable"] sharks["Threatened"] = sharks["Category"].isin( threatened ).astype('int') sharks = sharks.drop(columns = "Category") sharks null_prob = np.mean(sharks["Threatened"]) null_prob sharks_model = smf.glm("Threatened ~ LogWeight", data=sharks, family=sm.families.Binomial()) sharks_fit = sharks_model.fit() sharks_fit.params # This model says that # # $$\log{ \left( \text{Odds of being threatened} \right) } ~ = ~ -3.173571 + 0.293120 \times \log{\left( \text{Weight} \right) } \ ,$$ # # which is equivalent to a power law: # # $$\text{Odds of being threatened} ~ = ~ .042 \times \text{Weight}^{.293120} \ .$$ # # In other words, bigger fish are more likely to be threatened. # # *** # # - For logistic models, the _deviance_ is analogous to the sum of squared residuals in linear regression analysis; logistic model coefficients minimize the deviance. # # - The _likelihood ratio statistic_ compares the deviances of the simple logistic model and the null model; it's analogous to the coefficient of determination. # # - Unlike the coefficient of determination, the likelihood ratio statistic defies easy interpretation. It's easy to gauge its size, though: it's a $\chi^2$ statistic with $df=1$ (_why_ this is true is another story...). # sharks_fit.null_deviance - sharks_fit.deviance 1 - chi2.cdf(sharks_fit.null_deviance - sharks_fit.deviance, df=1) # # This is astronomically small -- the logistic model that includes `LogLength` is better than the null model that ignores it! # # And if we plot the results, things look pretty good: # sns.regplot(data=sharks, x="LogWeight", y="Threatened", logistic=True, ci=None) plt.savefig("sharks_fit.png") # ### Model assessment # # Ripple et al. stop here with this particular model, but they should have assessed it carefully! We'll look at two options for what to do next. # # #### Binary classification and the ROC curve # # The naive thing to do is to compare the model's fitted probabilities to a threshold of 50% : classify the fish as `Threatened` if the fitted probability is higher than 50%, as `Not threatened` otherwise. 
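# Before thresholding, it is worth seeing what probabilities the fitted power law actually
# produces. A quick sanity check, assuming the `sharks_fit` model from above; the two weights
# are arbitrary illustrative values in the same units as the `Weight` column:
b0 = sharks_fit.params['Intercept']
b1 = sharks_fit.params['LogWeight']
for w in (10.0, 10000.0):
    log_odds = b0 + b1 * np.log(w)
    prob = 1 / (1 + np.exp(-log_odds))
    print(f"Weight {w:>8.0f}: P(threatened) = {prob:.2f}")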
# sharks["Class"] = (sharks_fit.fittedvalues > 0.50).astype(int) sharks pd.crosstab(sharks["Threatened"], sharks["Class"]) np.mean( sharks["Threatened"] == sharks["Class"] ) fpr, tpr, thresholds = metrics.roc_curve(sharks["Threatened"], sharks_fit.fittedvalues) chronic_auc = metrics.auc(fpr, tpr) chronic_auc plt.figure() plt.plot(fpr, tpr, label='ROC curve AUC: %0.2f' % chronic_auc) plt.plot([0,1], [0,1], 'r--', label='Random classification') # plt.xlim([0, 1]) # plt.ylim([0, 1.05]) plt.xlabel('False Positive Rate (1-Specificity)') plt.ylabel('True Positive Rate (Sensitivity)') plt.title('ROC curve for shark extinction risk classifier') plt.legend(loc="lower right") # #### Logistic analogues of $R^2$ # _McFadden's pseudo_-$R^2$ : replace sums of squares with deviances to measure the proportional reduction in the deviance R2_M = 1 - (sharks_fit.deviance / sharks_fit.null_deviance) R2_M # # Or use the native sums of squares in this context: # sharks["Null_residual"] = sharks["Threatened"] - null_prob sharks["Residual"] = sharks["Threatened"] - sharks_fit.fittedvalues sharks["Difference"] = sharks_fit.fittedvalues - null_prob R2_S = np.sum(sharks["Difference"]**2) / np.sum(sharks["Null_residual"]**2) R2_S # # Or compute _Tjur's coefficient of discrimination_: a good model should, on average, assign high probabilities to observed successes (1's) and low probabilities to observed failures (0's) # sharks["Fit_prob"] = sharks_fit.fittedvalues sns.displot( data=sharks, x="Fit_prob", col="Threatened", binwidth=0.2) fit_avgs = sharks.groupby("Threatened").agg(Fit_average=('Fit_prob','mean')) fit_avgs R2_D = fit_avgs["Fit_average"][1] - fit_avgs["Fit_average"][0] R2_D # Yikes! Not a very good model after all. :( # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Document Information Extraction Showcase # This notebook is designed to demonstrate how to easily consume the SAP AI Business Services - Document Information Extraction service. In this demo, we first create a new client to then use the service to extract data from an example invoice. # ## Extract credentials from service key # You require a valid service key for the Document Information Extraction service on the SAP Business Technology Platform. 
For the detailed setup steps see: https://help.sap.com/viewer/5fa7265b9ff64d73bac7cec61ee55ae6/SHIP/en-US/0d68dc0002f0484ba25f85f3170166e0.html # The necessary credentials are the following: # # - url: The URL of the service deployment provided in the outermost hierarchy of the service key json file # - uaa_url: The URL of the UAA server used for authentication provided in the uaa part of the service key json file # - clientid: The clientid used for authentication to the UAA server provided in the uaa part of the service key json file # - clientsecret: The clientsecret used for authentication to the UAA server provided in the uaa part of the service key json file # Please insert your copied service key json here service_key = { "url": "*******", "uaa": { "tenantmode": "*******", "sburl": "*******", "subaccountid": "*******", "clientid": "*******", "xsappname": "*******", "clientsecret": "*******", "url": "*******", "uaadomain": "*******", "verificationkey": "*******", "apiurl": "*******", "identityzone": "*******", "identityzoneid": "*******", "tenantid": "*******", "zoneid": "*******" }, "swagger": "/document-information-extraction/v1/" } url = service_key['url'] uaa_url = service_key['uaa']['url'] client_id = service_key['uaa']['clientid'] client_secret = service_key['uaa']['clientsecret'] # ## Initialize DoxApiClient # Import DOX API client from sap_business_document_processing import DoxApiClient # Instantiate object used to communicate with Document Information Extraction REST API api_client = DoxApiClient(url, client_id, client_secret, uaa_url) # ## (optional) Display access token # Token can be used to interact with e.g. swagger UI to explore DOX API print(api_client.session.token) print(f"\nYou can use this token to authorize here and explore the API via Swagger UI: \n{api_client.base_url}") # ## See list of document fields you can extract # Get the available document types and corresponding extraction fields from utils import display_capabilities capabilities = api_client.get_capabilities() display_capabilities(capabilities) # ## (optional) Create a Client # To use Document Information Extraction, you need a client. This client is used to distinguish and separate data. You can create a new client if you wish to perform the information extraction with a separate client. One 'default' client already exists. 
# Check which clients exist for this tenant api_client.get_clients() # Create a new client with the id 'c_00' and name 'Client 00' api_client.create_client(client_id='c_00', client_name='Client 00') # ## Upload a document and retrieve the extracted result # + # Specify the fields that should be extracted header_fields = [ "documentNumber", "taxId", "purchaseOrderNumber", "shippingAmount", "netAmount", "senderAddress", "senderName", "grossAmount", "currencyCode", "receiverContact", "documentDate", "taxAmount", "taxRate", "receiverName", "receiverAddress" ] line_item_fields = [ "description", "netAmount", "quantity", "unitPrice", "materialNumber" ] # Extract information from invoice document document_result = api_client.extract_information_from_document(document_path='sample-invoice-1.pdf', client_id='default', document_type='invoice', header_fields=header_fields, line_item_fields=line_item_fields) # - # Check the extracted data import json print(json.dumps(document_result, indent=2)) # Let's visualize the extracted values on the invoice document from utils import display_extraction display_extraction(document_result, 'sample-invoice-1.pdf') # ## (optional) Upload Ground Truth # Ground truth values can be uploaded to evaluate the results of the Document Information Extraction # Load ground truth values from json file with open('gt-sample-invoice-1.json') as ground_truth_file: ground_truth = json.load(ground_truth_file) # Add ground truth values to the uploaded invoice api_client.post_ground_truth_for_document(document_id=document_result['id'], ground_truth=ground_truth) # You can now also retrieve the uploaded ground truth values by setting extracted_values to False api_client.get_extraction_for_document(document_id=document_result['id'], extracted_values=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Wildfire Analysis # ## and # CPSC322 # 04.06.21 # ## Sources # # 1970-2007 WADNR Wildfire Data: # # https://data-wadnr.opendata.arcgis.com/datasets/dnr-fire-statistics-1970-2007-1 # # # 2008-2019 WADNR Wildfire Data: # # https://data-wadnr.opendata.arcgis.com/datasets/dnr-fire-statistics-2008-present-1 # ## Source Description # # Both data sets come in the form of CSV or GeoJson files. For our application, CSV files will most likely be easier, but we get more data from GeoJson. # # The most interesting/valuable trait that we could predict is the size of a future fire. We plan to use a discretization of various fire sizes to create a 1-10 level chart similar to the MPG chart. The attribute that currently holds data for the size of a fire is "ACRES_BURNED". When looking at the two datasets, you can see that the 2008-Present dataset has many more attributes. # ## Implementation # # These files are quite large. This may be a challenge for sifting through the data to remove incomplete entries. We have nearly 60,000 independent fire events that have been logged by the Washington Department of Natural Resources for the last 50 years. I anticipate strong results with a dataset this size, and with enough effort, we will have a dataset that is fit for distribution at the end of the project. # # Given that we are trying to predict a binned value, the best thing to do would be to check for correlation between other attributes, and then build our attribute list from attributes with strong correlation. 
We want to avoid perfect multicollinearity, and with a dataset this size, it is improbable that that would happen. From a cursory glance, potentially useful attributes will be County, Fire Cause, and Date. One unique challenge we face is that there are many more attributes from 2008 onward, but we would also like the historical benefit of having 50 years worth of data. It may be necessary to create default values to substitute for missing values, instead of just removing the entry. # ## Potential Impact # # This project was first attempted at "https://github.com/michaelpeterswa/kulo" where location was used to find a continuous output value, using Keras and Tensorflow. That project had mixed results and the model was not well refined, which lead to inconclusive outputs. I (Michael) hope to expand upon that by getting useful predictions that work for future wildfire events. # # These results could be useful for a number of purposes. The first obvious choice would be predicting how at-risk a property is to wildfires in the State of Washington. This metric would be useful to homeowners, insurance underwriters, fire departments, and city planners. Of course, this project won't truly be production ready in the time-frame we have, but it's good to look forward at future potentials. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib.pyplot as plt import numpy as np # ### Set up a series of random uniform distributions # The central limit theorem states that for random processes, if we measure the mean of independent random variates, the distribution of the means of those distributions will be Gaussianly distributed. This result is independent of the character of the random distribution. 
We can show this by using a collection of uniform random variates and measuring th emeans many times n_exp=10000 #number of experiments n_draw=10 n_bins=100 x_min=.15 x_max=.85 means=np.zeros(n_exp) # ### Let's perform the experiment # loop over the number of experiments for i in range (n_exp): #pull 10 random variates from a uniform distribution z=np.random.uniform(0,1,n_draw) #record the mean means[i]=np.mean(z) # ### Define a function to plat a Gaussian def gaussian(x,mu,sigma): return 1/(2.0*np.pi*sigma**2)**.5 * np.exp(-.5*((x-mu)/sigma)**2) # ### Make a histogram # + fig=plt.figure(figsize=(7,7)) y_hist,x_hist, ignored=plt.hist(means, bins=n_bins, range=[x_min,x_max], density=True) xx=np.linspace(x_min,x_max,1000) #set the RMS of the Gaussian sigma=1/12**.5/10**.5 plt.plot(xx,gaussian(xx,.5,sigma), color="red") plt.ylim([0,1.1*gaussian(.5,.5,sigma)]) plt.xlim([x_min,x_max]) #plt.gca().set_aspect(2) plt.xlabel('x') plt.ylabel('y(x)') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + deletable=true editable=true import cv2 import matplotlib.pyplot as plt # %matplotlib inline # + deletable=true editable=true # load the image and show some basic information on it image = cv2.imread('hongkong_airport.jpg') print "width: %d pixels" % (image.shape[1]) print "height: %d pixels" % (image.shape[0]) print "channels: %d" % (image.shape[2]) # + deletable=true editable=true cv_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) plt.axis("off") plt.imshow(cv_rgb) # cv2.imwrite("newimage.jpg", image) # %time # + deletable=true editable=true print "OpenCV Version : %s " % cv2.__version__ # + import sys sys.path.append('/code/opencv_tutorials') from image_utils import Ipy # + deletable=true editable=true # + deletable=true editable=true Ipy.showimage(image) # %time # + deletable=true editable=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: MT # language: python # name: mt # --- # + [markdown] toc=true #


    # # - # # Preprocess Large Holdings File # # #### Converting the raw 50+ GB sas file with the holdings complezte data into a sparse python matrix which can be loaded into memory and more important which can be handled more efficiently by different alogorithms. # #### The logic behind this process is as follows: # # Loading data and transforming it into csv file to work with # # 1. 50+ GB holdings.sas7bdat file containing all the holdings data downloaded directly from wrds using ftp client # 2. Converted into csv using sas7bdat_to_csv utility (Link) # # Two step process to transform file into sparse matrix # Challenge is to convert from row describing one holding to rows describing the holdings of one fund at one point in time. Aslo it is crucial to keep track of which row of the sparse matrix is which fund at wjich date and which colums are which securities. # # 3. Open file in python # 4. Parse through file to make two lists. One with all fund/date combinations (using the comination as an ID) and one with all securities. # 5. Generate sparse matrix with the dimensions "number of fund/date combinations" x "numer of securities" # 6. Parse through large csv file again and fill the percentage_tna (percentage of the fund held in that particular security) number into the right spot of the sparse matrix as determined by two maps based on all fund/date combinations and securities # 7. Save final sparse matrix and tables containing information about which row is which fund/date and which column is which security. # TODO # # Parsing through csv file could be significantly sped up using something like: https://stackoverflow.com/questions/17444679/reading-a-huge-csv-file # ## Import statements # + pycharm={"is_executing": false} import os import sys import feather import numpy as np import pandas as pd from scipy import sparse # - # ## Load File # + pycharm={"is_executing": false} path = '../data/raw/holdings.csv' # - # ## Parse complete file to get all unique fund/date combinations and stocks # + # %%time chunksize = 10 ** 7 unit = chunksize / 184_578_843 # All columns in holdings.csv file # crsp_portno, report_dt, security_rank, eff_dt, percent_tna # nbr_shares, market_val, crsp_company_key, security_name, # cusip, permno, permco, ticker, coupon, maturity_dt reader = pd.read_csv(path, usecols = ['crsp_portno','report_dt', 'crsp_company_key','security_name','cusip','permno','permco','ticker'], dtype = {'crsp_portno': np.int64, 'report_dt': np.int64, 'crsp_company_key': np.int64, 'security_name': str, 'cusip': str, 'permno':str, 'permco':str, 'ticker':str}, low_memory=False, chunksize=chunksize) dfList_1 = [] dfList_2 = [] for i, chunk in enumerate(reader): temp_df_1 = chunk.loc[:,['crsp_portno','report_dt']].drop_duplicates() temp_df_2 = chunk.loc[:,['crsp_company_key','security_name','cusip','permno','permco','ticker']].drop_duplicates() dfList_1.append(temp_df_1) dfList_2.append(temp_df_2) print("{:6.2f}%".format(((i+1) * unit * 100))) # + df_1 = pd.concat(dfList_1,sort=False) df_2 = pd.concat(dfList_2,sort=False) df_1 = df_1.drop_duplicates() df_2 = df_2.drop_duplicates() # + # Generate a unique ID from the portno and the date of a fund/date combination df_1 = df_1.assign(port_id = ((df_1['crsp_portno'] * 1000000 + df_1['report_dt']))) df_1 = df_1.rename(columns = {'report_dt':'report_dt_int'}) df_1 = df_1.assign(report_dt = pd.to_timedelta(df_1['report_dt_int'], unit='D') + pd.Timestamp('1960-1-1')) df_1 = df_1.reset_index(drop = True) df_1 = (df_1 .assign(row = df_1.index) 
.set_index('port_id')) # - df_2 = df_2.reset_index(drop = True) df_2 = (df_2 .assign(col = df_2.index) .set_index('crsp_company_key')) df_1.head(1) df_2.head(1) # ## Parse complete file to generate data for sparse matrix # + chunksize = 10 ** 7 unit = chunksize / 184_578_843 reader = pd.read_csv(path, usecols = ['crsp_portno','report_dt','crsp_company_key','percent_tna'], dtype = {'crsp_portno': np.int64, 'report_dt': np.int64, 'crsp_company_key': np.int64, 'percent_tna':np.float64}, low_memory=False, chunksize=chunksize) # + # TODO pd.merge seems to be faster in this case than df.join # + # %%time dfList = [] df_1_temp = df_1.loc[:,['row']] df_2_temp = df_2.loc[:,['col']] for i, chunk in enumerate(reader): temp_df = chunk.dropna() temp_df = temp_df.assign(port_id = ((temp_df['crsp_portno'] * 1000000 + temp_df['report_dt']))) temp_df.set_index('port_id',inplace=True) temp_df = temp_df.join(df_1_temp, how='left') temp_df.set_index('crsp_company_key',inplace=True) temp_df = temp_df.join(df_2_temp, how='left') temp_df = temp_df[['percent_tna','row','col']] dfList.append(temp_df) print("{:6.2f}%".format(((i+1) * unit * 100))) # - df_sparse = pd.concat(dfList,sort=False) df_sparse.reset_index(drop=True,inplace=True) print(df_sparse.shape) df_sparse.head(3) # ## Delete duplicates # All other filters will be applied later but this one has to be done before sparse matrix is created duplicates_mask = df_sparse.duplicated(['col','row'],keep='last') df_sparse = df_sparse[~duplicates_mask] # ## Check if holdings data makes sense merged_data = pd.merge(df_sparse,df_1[['report_dt','row']],how='left',on='row') # + date = pd.to_datetime('2016-09-30') sum_col = (merged_data .query('report_dt == @date') .groupby(by = ['col']) .sum() .sort_values('percent_tna',ascending = False)) sum_col.join(df_2.set_index('col'),how='left').head(10) # - # ## Change fund info and security info dfs for future use df_1 = df_1[['crsp_portno','report_dt','row']].assign(port_id = df_1.index) df_1.set_index('row',inplace=True) df_1.sample() df_2 = df_2.assign(crsp_company_key = df_2.index) df_2.set_index('col',inplace=True) df_2.sample() # ## Create sparse matrix sparse_matrix = sparse.csr_matrix((df_sparse['percent_tna'].values, (df_sparse['row'].values, df_sparse['col'].values))) # + # Check if all dimensions match # + pycharm={"is_executing": true} print('Number of fund/date combinations: {:12,d}'.format(sparse_matrix.shape[0])) print('Number of unique securities: {:12,d}'.format(sparse_matrix.shape[1])) print('Number of non-zero values in sparse matrix: {:12,d}'.format(sparse_matrix.getnnz())) print() print('Number of rows in fund info df: {:12,d}'.format(df_1.shape[0])) print('Number of rows in fund info df: {:12,d}'.format(df_2.shape[0])) print() match_test = (sparse_matrix.shape[0] == df_1.shape[0]) & (sparse_matrix.shape[1] == df_2.shape[0]) print('Everything matches: {}'.format(match_test)) # - # ## Get Market Value for all security & date pairs np.float64 # + # %%time chunksize = 10 ** 7 unit = chunksize / 184_578_843 # All columns in holdings.csv file # crsp_portno, report_dt, security_rank, eff_dt, percent_tna # nbr_shares, market_val, crsp_company_key, security_name, # cusip, permno, permco, ticker, coupon, maturity_dt reader = pd.read_csv(path, usecols = ['report_dt','crsp_company_key','security_name','market_val'], dtype = {'report_dt': np.int64, 'crsp_company_key': np.int64, 'security_name': str, 'market_val': np.float64}, low_memory=False, chunksize=chunksize) dfList_1 = [] for i, chunk in enumerate(reader): 
temp_df_1 = chunk.drop_duplicates() dfList_1.append(temp_df_1) print("{:6.2f}%".format(((i+1) * unit * 100))) # - f_market_val = pd.concat(dfList_1,sort=False) f_market_val_f = f_market_val.drop_duplicates(subset = ['report_dt','crsp_company_key']) f_market_val_f.shape # ## Save data # #### Sparse matrix containing holdings # + pycharm={"is_executing": true} path = '../data/interim/holdings' sparse.save_npz(path, sparse_matrix) # - # #### Fund/date info # + pycharm={"is_executing": true} path = '../data/interim/row_info.feather' feather.write_dataframe(df_1,path) # - # #### Securities info # + pycharm={"is_executing": true} path = '../data/interim/col_info.feather' feather.write_dataframe(df_2,path) # - # #### Market cap info # + pycharm={"is_executing": true} path = '../data/interim/market_cap.feather' feather.write_dataframe(f_market_val_f,path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas dataset = pandas.read_csv("power.csv", header=1, names=['Time', 'Bus', 'Shunt', 'Load', 'Current', 'Power']) dataset.set_index('Time', inplace=True) cleanset = dataset[dataset.Current < 7.5] ax = cleanset.Current.plot(figsize=(14,4), label='Current (mA)') ax.set_ylabel('Current (mA)') ax.set_xlabel('Time (ms)') ax.set_xlim(0, 8000) ax.annotate("pm_set(0)", xy=(966, 7.1), xytext=(2000, 5.5), arrowprops=dict(facecolor='red', arrowstyle='->')) ax.annotate("RTC alarm", xy=(6020, 1.7), xytext=(6500, 3), arrowprops=dict(facecolor='red', arrowstyle='->')) ax.annotate("", xy=(1150, 3.2), xytext=(6020, 3.2), arrowprops=dict(facecolor='red', arrowstyle='|-|')) ax.annotate('5 sec', xy=(3400, 3.4)) ax.savefig('') cleanset['diff'] = cleanset.Current.diff() cleanset['dblediff'] = cleanset['diff'].diff() cleanset['absdiff'] = abs(cleanset.dblediff) cleanset[cleanset.absdiff > 1] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="A74kXimPnJO3" executionInfo={"status": "ok", "timestamp": 1628926986520, "user_tz": -420, "elapsed": 19654, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="63f398fd-17c5-41fc-f632-0fc5a04ab785" # !pip install lightkurve # + id="jkYwCvnpm970" executionInfo={"status": "ok", "timestamp": 1628927264723, "user_tz": -420, "elapsed": 300, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} import lightkurve as lk import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="JKd5jGDGn5di" executionInfo={"status": "ok", "timestamp": 1628929922311, "user_tz": -420, "elapsed": 355, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="7a089e73-bf42-4d6a-fa85-fafe883a57da" # Search Kepler data for Quarters 6, 7, and 8. 
search_result = lk.search_lightcurve('WASP-33', mission='TESS') # Download and stitch the data together lc = search_result.download() # Plot the resulting light curve lc # + colab={"base_uri": "https://localhost:8080/", "height": 386} id="jAWQi0q3o5R9" executionInfo={"status": "ok", "timestamp": 1628929927333, "user_tz": -420, "elapsed": 1405, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="e92f218a-bd62-455c-92d2-d3471035255f" lc.plot(); # + colab={"base_uri": "https://localhost:8080/", "height": 390} id="FwVjyyB9uuzg" executionInfo={"status": "ok", "timestamp": 1628929929010, "user_tz": -420, "elapsed": 619, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="6668635d-5e74-4367-cf0a-50a27dff6963" pg = lc.to_periodogram(maximum_period=100) pg.plot(view='period'); # + colab={"base_uri": "https://localhost:8080/", "height": 37} id="XCF_VvMjuztH" executionInfo={"status": "ok", "timestamp": 1628929932655, "user_tz": -420, "elapsed": 350, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="0df6bd06-9832-4304-fdda-ece07138eb7b" pg.period_at_max_power # + colab={"base_uri": "https://localhost:8080/", "height": 389} id="zzqKfgWUupFJ" executionInfo={"status": "ok", "timestamp": 1628929935452, "user_tz": -420, "elapsed": 1173, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="a253eb45-d977-4be3-c160-fcd7dac3ac7a" # Create a model light curve for the highest peak in the periodogram lc_model = pg.model(time=lc.time, frequency=pg.frequency_at_max_power) # Plot the light curve ax = lc.plot() # Plot the model light curve on top lc_model.plot(ax=ax, lw=3, ls='--', c='red'); # + colab={"base_uri": "https://localhost:8080/", "height": 386} id="DWuMS-txyk9Z" executionInfo={"status": "ok", "timestamp": 1628929993734, "user_tz": -420, "elapsed": 971, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="e5f6b978-c55b-430a-aab4-ca1c9bf26ad7" # Fold the light curve at the known planet period planet_period = 1.21987 lc.fold(period=planet_period).plot(); # + colab={"base_uri": "https://localhost:8080/", "height": 404} id="H3TbFiJUy182" executionInfo={"status": "ok", "timestamp": 1628930072351, "user_tz": -420, "elapsed": 1690, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="1c7685d9-7128-43ba-c411-444a49eed319" pg = lc.to_periodogram() pg.plot(scale='log'); # + colab={"base_uri": "https://localhost:8080/", "height": 386} id="ZeMOwKW4zGqN" executionInfo={"status": "ok", "timestamp": 1628930101619, "user_tz": -420, "elapsed": 10147, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="49c2727a-7f81-46e5-df91-d6b0b3074ecf" # Remove the signals associated with the 50 highest peaks newlc = lc.copy() for i in range(50): pg = newlc.to_periodogram() model = pg.model(time=newlc.time, frequency=pg.frequency_at_max_power) newlc.flux = newlc.flux / model.flux # Plot the new light curve on top of the original one ax = lc.plot(alpha=.5, label='Original'); newlc.plot(ax=ax, label='New'); # + id="Ncxaw_yezUeV" executionInfo={"status": "ok", "timestamp": 1628930137585, "user_tz": -420, "elapsed": 1665, "user": {"displayName": "", "photoUrl": "", "userId": "00266109630258428955"}} outputId="3966fb5e-6a25-4ddf-f65d-5875860f7350" colab={"base_uri": "https://localhost:8080/"} ax = newlc.fold(period=planet_period).plot(label='Unbinned') 
newlc.fold(period=planet_period).bin(0.1).plot(ax=ax, lw=2, label='Binned'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # All the IPython Notebooks in this lecture series by Dr. are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/002_Python_String_Methods)** # # # Python String `capitalize()` # # In Python, the **`capitalize()`** method converts first character of a string to uppercase letter and lowercases all other characters, if any. # # **Syntax**: # # ```python # string.capitalize() # ``` # ## `capitalize()` Parameters # # The **`capitalize()`** function doesn't take any parameter. # ## Return Value from `capitalize()` # # The **`capitalize()`** function returns a string with the first letter capitalized and all other characters lowercased. It doesn't modify the original string. # + # Example 1: Capitalize a Sentence string = "python is AWesome." capitalized_string = string.capitalize() print('Old String: ', string) print('Capitalized String:', capitalized_string) # + # Example 2: Non-alphabetic First Character string = "+ is an operator." new_string = string.capitalize() print('Old String:', string) print('New String:', new_string) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="zn3Ifb-jxk6I" # # Regiment # + [markdown] id="HLTR6zChxk6Q" # ### Introduction: # # Special thanks to: http://chrisalbon.com/ for sharing the dataset and materials. # # ### Step 1. Import the necessary libraries # + id="6yxP34Exxk6S" import pandas as pd # + [markdown] id="RpIIp4Bjxk6T" # ### Step 2. Create the DataFrame with the following values: # + id="o7INUdvUxk6T" raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'], 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'], 'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], 'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3], 'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]} # + [markdown] id="BHQsHG4Uxk6T" # ### Step 3. Assign it to a variable called regiment. # #### Don't forget to name each column # + id="50lyECRTxk6U" outputId="eeaef976-0112-4fcb-cc0c-0d7700b6c9bf" colab={"base_uri": "https://localhost:8080/", "height": 204} regiment = pd.DataFrame(raw_data, columns = raw_data.keys()) regiment.head(5) # + [markdown] id="XrcbPN5Exk6U" # ### Step 4. What is the mean preTestScore from the regiment Nighthawks? # + id="YP-pwRqaxk6U" outputId="05651984-e6ff-464f-be9c-9f81cd328307" colab={"base_uri": "https://localhost:8080/", "height": 111} regiment[regiment['regiment'] == 'Nighthawks'].groupby('regiment').mean() # + [markdown] id="R3kLTtkrxk6V" # ### Step 5. Present general statistics by company # + id="wF9BHXRbxk6W" outputId="e1b62d18-4e90-4d1e-e1ec-db254581e574" colab={"base_uri": "https://localhost:8080/", "height": 173} regiment.groupby('company').describe() # + [markdown] id="a-GXdneyxk6X" # ### Step 6. What is the mean of each company's preTestScore? 
# + id="bMb_cQ1txk6X" outputId="76213038-7c3d-4ae9-b4d4-4553111c4245" colab={"base_uri": "https://localhost:8080/"} regiment.groupby('company').preTestScore.mean() # + [markdown] id="FPlmg6sixk6X" # ### Step 7. Present the mean preTestScores grouped by regiment and company # + id="zAeCgvEoxk6X" outputId="301aa18b-a2f0-48ee-fad4-db1fd051816c" colab={"base_uri": "https://localhost:8080/"} regiment.groupby(['regiment', 'company']).preTestScore.mean() # + [markdown] id="GwKyJTD2xk6Y" # ### Step 8. Present the mean preTestScores grouped by regiment and company without heirarchical indexing # + id="zfzv0Dcjxk6Y" outputId="517ab74f-39c3-400f-e8fc-9add34b46595" colab={"base_uri": "https://localhost:8080/", "height": 173} regiment.groupby(['regiment','company']).preTestScore.mean().unstack() # + [markdown] id="0pVFnAdSxk6Y" # ### Step 9. Group the entire dataframe by regiment and company # + id="7JesFN3Ixk6Y" outputId="7988387d-bd1a-4207-dae3-572d8030d5ca" colab={"base_uri": "https://localhost:8080/", "height": 266} regiment.groupby(['regiment','company']).mean() # + [markdown] id="8525iRf1xk6Z" # ### Step 10. What is the number of observations in each regiment and company # + id="3eWqWBKHxk6Z" outputId="ed335353-3455-4ad6-9c04-044cc7b57326" colab={"base_uri": "https://localhost:8080/"} regiment.groupby(['company', 'regiment']).size() # + [markdown] id="EgaD1JF5xk6Z" # ### Step 11. Iterate over a group and print the name and the whole data from the regiment # + id="pPCvB9t6xk6Z" outputId="21b2f6ef-cff8-49e8-b71a-251be5427749" colab={"base_uri": "https://localhost:8080/"} for name,group in regiment.groupby('regiment'): print(name) print (group) # Group the dataframe by regiment, and for each regiment, for name, group in regiment.groupby('regiment'): # print the name of the regiment print(name) # print the data of that regiment print(group) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import random def bootstrapdf(df): df = df.sample(frac=1, replace=True) return df def check_for_leaf(df,counter, min_samples, max_depth): unique_classes = np.unique(df) if len(unique_classes) == 1 or len(df)<=min_samples or counter==max_depth: labelcol = df uniq_cls, cnt = np.unique(labelcol, return_counts=True) classification = unique_classes[cnt.argmax()] return classification else: return False def gini_imp_test(df, col_index): df.reset_index(inplace = True, drop = True) classes = df.iloc[:,-1] feature = df.iloc[:,col_index] if len(feature.unique()) == 2: gini_imp = 0 for i in np.unique(feature): idx = np.where(feature == i) label = classes.loc[idx].values a, b = np.unique(label, return_counts = True) list1 = [(i/sum(b))**2 for i in b] prob = 1 - sum(list1) wt = len(idx[0]) / df.shape[0] gini_imp += wt * prob return gini_imp, i else: label = np.sort(feature.unique())[1:-1] best_gini_imp = float('inf') split_val = 0 for i in label: idx1 = np.where(feature > i) idx2 = np.where(feature <= i) if len(idx1[0]) > 2 and len(idx2[0]) > 2: b1, b1cnt = np.unique(classes.loc[idx1].values, return_counts = True) b2, b2cnt = np.unique(classes.loc[idx2].values, return_counts = True) list1 = [(i/sum(b1cnt))**2 for i in b1cnt] list2 = [(i/sum(b2cnt))**2 for i in b2cnt] prob1 = 1 - sum(list1) prob2 = 1 - sum(list2) gini = ((sum(b1cnt)/df.shape[0])*prob1) + ((sum(b2cnt)/df.shape[0])*prob2) if gini < best_gini_imp: 
best_gini_imp = gini split_val = i else: continue return best_gini_imp, split_val def best_node(df, col_list): best_gini_imp = float('inf') value = 0 col = 0 for i in col_list: gini, val = gini_imp_test(df, i) if gini < best_gini_imp: best_gini_imp = gini value = val col = i return col, value def split_df(df, col_index, split_val): feature = df.iloc[:,col_index] if feature.dtypes == object: temp1 = df[df.iloc[:,col_index] == split_val] temp2 = df[df.iloc[:,col_index] != split_val] return temp1, temp2 elif feature.dtypes != object: temp1 = df[df.iloc[:,col_index] <= split_val] temp2 = df[df.iloc[:,col_index] >= split_val] temp1.reset_index(inplace = True, drop = True) temp2.reset_index(inplace = True, drop = True) return temp1, temp2 def check_purity(data): label_column = data[:, -1] unique_classes = np.unique(label_column) if len(unique_classes) == 1: return True else: return False def classify_data(data): label_column = data[:,-1] unique_classes, counts_unique_classes = np.unique(label_column, return_counts=True) index = counts_unique_classes.argmax() classification = unique_classes[index] return classification def metrics(ts_lb,answer): TN = 0 TP = 0 FN = 0 FP = 0 for i,j in zip(ts_lb,answer): if j==1 and i==1: TP += 1 elif(j==1 and i==0): FN += 1 elif(j==0 and i==1): FP += 1 elif(j==0 and i==0): TN += 1 Accuracy = (TP + TN)/(TP + FP + TN + FN) Precision = TP/(TP + FP) Recall = TP/(TP + FN) f1_score = (2*Precision*Recall)/(Precision + Recall) return Accuracy, Precision, Recall, f1_score def decision_tree(df, columns, num_features, counter = 0, min_samples = 10, max_depth = 5): if (check_purity(df.values)) or (counter == max_depth) or (len(df) < min_samples): classification = classify_data(df.values) return classification else: counter += 1 col_list = random.sample(columns, num_features) column, value = best_node(df, col_list) if df.iloc[:,column].dtype == object: columns.remove(column) branch1, branch2 = split_df(df, column, value) if len(branch1) == 0 or len(branch2) == 0: classification = classify_data(df.values) return classification query = "{} <= {}".format(column, value) branch = {query: []} left_branch = decision_tree(branch1, columns, num_features, counter) right_branch = decision_tree(branch2, columns, num_features, counter) if left_branch == right_branch: branch = left_branch else: branch[query].append(left_branch) branch[query].append(right_branch) return branch def random_forest(df, num_trees, num_features): trees = [] for i in range(num_trees): df = bootstrapdf(df) columns = list(df.iloc[:,:-1].columns) tree = decision_tree(df, columns, num_features) trees.append(tree) return trees def predict(model, test_data): classes = [] for tree in model: cls = [] for i in range(len(test_data)): t = tree col,_,val = list(t.keys())[0].split() col = int(col) try: val = float(val) except: val = str(val) key = list(t.keys())[0] key_val = t[key] while True: if test_data.iloc[i,col] <= val: t = t[key][0] if type(t) != dict: cls.append(t) break else: col,_,val = list(t.keys())[0].split() col = int(col) try: val = float(val) except: val = str(val) key = list(t.keys())[0] key_val = t[key] else: t = t[key][1] if type(t) != dict: cls.append(t) break else: col,_,val = list(t.keys())[0].split() col = int(col) try: val = float(val) except: val = str(val) key = list(t.keys())[0] key_val = t[key] cls = [int(i) for i in cls] classes.append(cls) classes = np.array(classes) final_class = [] for i in range(len(test_data)): unique_classes, counts_unique_classes = np.unique(classes[:,i], return_counts=True) 
index = counts_unique_classes.argmax() classification = unique_classes[index] final_class.append(classification) final_class test_data["Class"] = final_class return test_data # + def k_fold(df): num_trees = int(input("Enter number of trees: ")) num_features = int(input("Enter number of features for each split: ")) k = int(input("Enter k value: ")) metrics_list = [] for i in range(k): splitdfs = np.array_split(df, k) test = splitdfs[i] del(splitdfs[i]) train = pd.concat(splitdfs) test.reset_index(inplace = True, drop = True) train.reset_index(inplace = True, drop = True) actual = test.iloc[:,-1] test = test.iloc[:,:-1] model = random_forest(train, num_trees, num_features) results = predict(model, test) Accuracy, Precision, Recall, f1_score = metrics(actual, results["Class"]) metrics_list.append([Accuracy, Precision, Recall, f1_score]) metrics_list = np.array(metrics_list) metrics_list = np.mean(metrics_list, axis = 0) print("Accuracy: ",metrics_list[0]) print("Precision: ",metrics_list[1]) print("Recall: ",metrics_list[2]) print("f1_score: ",metrics_list[3]) return metrics_list # - df1 = pd.read_csv("project3_dataset1.txt", sep = '\t', header=None) k_fold(df1) df2 = pd.read_csv("project3_dataset2.txt", sep = '\t', header=None) k_fold(df2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # VacationPy # ---- # # # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os # Import API key from api_keys import g_key # - # ### Store Part I results into DataFrame # * Load the csv exported in Part I to a DataFrame input_path = "result.csv" df = pd.read_csv(input_path) del df["Unnamed: 0"] # df = df.dropna() df.head() # ### Humidity Heatmap # * Configure gmaps. # * Use the Lat and Lng as locations and Humidity as the weight. # * Add Heatmap layer to map. # + locations = [(i,j) for i,j in zip(df["Lat"],df["Lng"])] weight = df["Humidity"] # Customize the size of the figure figure_layout = { 'width': '800px', 'height': '500px', 'border': '1px solid black', 'padding': '1px', 'margin': '0 auto 0 auto' } fig = gmaps.figure(layout=figure_layout) max_intensity = max(weight) # Create heat layer heat_layer = gmaps.heatmap_layer(locations, weights=weight, dissipating=False, max_intensity=max_intensity, point_radius=1.5) # Add layer fig.add_layer(heat_layer) # Display figure fig # + figure_layout = { 'width': '800px', 'height': '500px', 'border': '1px solid black', 'padding': '1px', 'margin': '0 auto 0 auto' } fig = gmaps.figure(map_type="HYBRID", layout=figure_layout) # Create heat layer heat_layer = gmaps.heatmap_layer(locations, weights=weight, dissipating=False, max_intensity=max_intensity, point_radius=1.5) fig.add_layer(heat_layer) fig # - # ### Create new DataFrame fitting weather criteria # * Narrow down the cities to fit weather conditions. # * Drop any rows will null values. # + ideal_weather_df = df.loc[(df["Max Temp"]<80)&(df["Max Temp"]>70)& (df["Wind Speeds"]<10)& (df["Cloudiness"]==0),:] ideal_weather_df = ideal_weather_df.dropna() ideal_weather_df = ideal_weather_df.reset_index(drop = True) ideal_weather_df # - # ### Hotel Map # * Store into variable named `hotel_df`. # * Add a "Hotel Name" column to the DataFrame. # * Set parameters to search for hotels with 5000 meters. # * Hit the Google Places API for each city's coordinates. 
# * Store the first Hotel result into the DataFrame. # * Plot markers on top of the heatmap. ideal_weather_df['Hotel Name'] = "" ideal_weather_df # + hotel_df = [] base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" params = { "radius": 5000, "types" : "hotel", "key": g_key, } # use iterrows to iterate through pandas dataframe for index, row in ideal_weather_df.iterrows(): lat = row['Lat'] lng = row['Lng'] # add location to params dict params["location"] = f"{lat},{lng}" # assemble url and make API request response = requests.get(base_url, params=params).json() try: ideal_weather_df.loc[index, 'Hotel Name'] = response['results'][0]['name'] hotel_df.append(response['results'][0]['name']) except: ideal_weather_df.loc[index, 'Hotel Name'] = float("NaN") print("skip the city") # - ideal_weather_df = ideal_weather_df.dropna() ideal_weather_df # ## Unfortunately the below codes are not working # + # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """
<dl>
<dt>Name</dt>
<dd>{Hotel Name}</dd>
<dt>City</dt>
<dd>{City}</dd>
<dt>Country</dt>
<dd>{Country}</dd>
</dl>
    """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name #narrowed_city_df=pd.DataFrame() hotel_info = [info_box_template.format(**row) for index, row in ideal_weather_df.iterrows()] locations = ideal_weather_df[["Lat", "Lng"]] city = ideal_weather_df["City"] # - ideal_weather_df.shape # + # Add marker layer ontop of heat map locations = ideal_weather_df[["Lat", "Lng"]] # markers = gmaps.marker_layer(locations) markers = gmaps.marker_layer(locations, info_box_content= [f"HOTEL NAME: \n{hotel}" for hotel in hotel_df]) fig.add_layer(markers) fig # Display Map # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # ## Gaussian Network Moments (GNMs) # # Let $\mathbf{x}\sim\mathcal{N}\left(\mathbf{\mu}, \mathbf{\Sigma}\right)$ and $q(x) = \max(x, 0)$ where $\Phi(x)$ and $\varphi(x)$ are the CDF and PDF of the normal distribution, # # $\mathbb{E}\left[q(\mathbf{x})\right] = \mathbf{\mu}\odot\Phi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right) + \mathbf{\sigma}\odot\varphi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right)$ # where $\mathbf{\sigma} = \sqrt{\text{diag}\left(\mathbf{\Sigma}\right)}$ with # $\odot$ and $\oslash$ as element-wise product and division. # # $\mathbb{E}\left[q^2(\mathbf{x})\right] = # \left(\mathbf{\mu}^2+\mathbf{\sigma}^2\right) \odot \Phi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right) + \mathbf{\mu} \odot \mathbf{\sigma} \odot \varphi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right)$ # where $\text{var}\left[q(\mathbf{x})\right] = \mathbb{E}\left[q^2(\mathbf{x})\right] - \mathbb{E}\left[q(\mathbf{x})\right]^2$ # # $\left.\mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right]\right|_{\mathbf{\mu} = \mathbf{0}} = c\left(\mathbf{\Sigma}\oslash\mathbf{\sigma}\mathbf{\sigma}^\top\right) \odot \mathbf{\sigma}\mathbf{\sigma}^\top$ # where $c(x) = \frac{1}{2\pi}\left(x\cos^{-1}(-x)+\sqrt{1-x^2}\right)$ # (Note: $\left|c(x) - \Phi(x - 1)\right| < 0.0241$) # # $\text{cov}\left[q(\mathbf{x})\right] = \mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right] - \mathbb{E}\left[q(\mathbf{x})\right]\mathbb{E}\left[q(\mathbf{x})\right]^\top$ # where $\left.\text{cov}\left[q(\mathbf{x})\right]\right|_{\mathbf{\mu} = \mathbf{0}} = \left.\mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right]\right|_{\mathbf{\mu} = \mathbf{0}} - \frac{1}{2\pi}\mathbf{\sigma}\mathbf{\sigma}^\top$ # + import sys import torch import unittest from torch import nn from pprint import pprint from types import ModuleType import network_moments.torch as nm seed = 77 # for reproducability def traverse(obj, exclude=[]): data = [] if type(obj) is not ModuleType: return data for e in dir(obj): if not e.startswith('_') and all(e != s for s in exclude): sub = traverse(obj.__dict__[e], exclude) data.append(e if len(sub) == 0 else {e:sub}) return data print(nm.__doc__) print('Network Moments Structure:') pprint(traverse(nm.gaussian, exclude=['tests', 'general'])) # - # ### Testing the tightness of the expressions on using tests runner = unittest.TextTestRunner(sys.stdout, verbosity=2) load = unittest.TestLoader().loadTestsFromModule result = runner.run(unittest.TestSuite([ load(nm.gaussian.affine.tests), load(nm.gaussian.relu.tests), ])) # ### Testing the tightness of the expressions on affine-ReLU-affine networks rand = nm.utils.rand gnm = nm.gaussian.affine_relu_affine print(gnm.special_variance.__doc__) # + length = 3 count 
= 1000000 dtype = torch.float64 device = torch.device('cpu', 0) torch.manual_seed(seed) # input mean and covariance mu = torch.randn(length, dtype=dtype, device=device) cov = rand.definite(length, dtype=dtype, device=device, positive=True, semi=False, norm=1.0) # variables A = torch.randn(length, length, dtype=dtype, device=device) c1 = -A.matmul(mu) # torch.randn(length, dtype=dtype) B = torch.randn(length, length, dtype=dtype, device=device) c2 = torch.randn(length, dtype=dtype, device=device) # analytical output mean and variance out_mu = gnm.mean(mu, cov, A, c1, B, c2) out_var = gnm.special_variance(cov, A, B) # Monte-Carlo estimation of the output mean and variance normal = torch.distributions.MultivariateNormal(mu, cov) samples = normal.sample((count,)) out_samples = samples.matmul(A.t()) + c1 out_samples = torch.max(out_samples, torch.zeros([], dtype=dtype, device=device)) out_samples = out_samples.matmul(B.t()) + c2 mc_mu = torch.mean(out_samples, dim=0) mc_var = torch.var(out_samples, dim=0) # printing the ratios print('Monte-Carlo mean / Analytical mean:') print((mc_mu / out_mu).cpu().numpy()) print('Monte-Carlo variance / Analytical variance:') print((mc_var / out_var).cpu().numpy()) # - # ### Linearization # + batch = 1 num_classes = 10 image_size = (28, 28) dtype = torch.float64 device = torch.device('cpu', 0) size = torch.prod(torch.tensor(image_size)).item() x = torch.rand(batch, *image_size, dtype=dtype, device=device) model = nn.Sequential( nm.utils.flatten, nn.Linear(size, num_classes), ) model.type(dtype) if device.type != 'cpu': model.cuda(device.index) jac, bias = nm.utils.linearize(model, x) A = list(model.children())[1].weight print('Tightness of A (best is zero): {}'.format( torch.max(torch.abs(jac - A)).item())) b = list(model.children())[1].bias print('Tightness of b (best is zero): {}'.format( torch.max(torch.abs(bias - b)).item())) # - # ### Two-stage linearization # + count = 10000 num_classes = 10 image_size = (28, 28) dtype = torch.float64 device = torch.device('cpu', 0) gnm = nm.gaussian.affine_relu_affine size = torch.prod(torch.tensor(image_size)).item() x = torch.rand(1, *image_size, dtype=dtype, device=device) # deep model first_part = nn.Sequential( nm.utils.flatten, nn.Linear(size, 500), nn.ReLU(), nn.Linear(500, 500), nn.ReLU(), nn.Linear(500, 300), ) first_part.type(dtype) relu = nn.Sequential( nn.ReLU(), ) relu.type(dtype) second_part = nn.Sequential( nn.Linear(300, 100), nn.ReLU(), nn.Linear(100, num_classes), ) second_part.type(dtype) if device.type != 'cpu': first_part.cuda(device.index) relu.cuda(device.index) second_part.cuda(device.index) def model(x): return second_part(relu(first_part(x))) # variables A, c1 = nm.utils.linearize(first_part, x) B, c2 = nm.utils.linearize(second_part, relu(first_part(x)).detach()) x.requires_grad_(False) A.squeeze_() c1.squeeze_() B.squeeze_() c2.squeeze_() # analytical output mean and variance mean = x.view(-1) covariance = rand.definite(size, norm=0.1, dtype=dtype, device=device) out_mu = gnm.mean(mean, covariance, A, c1, B, c2) out_var = gnm.special_variance(covariance, A, B) # Monte-Carlo estimation of the output mean and variance normal = torch.distributions.MultivariateNormal(mean, covariance) samples = normal.sample((count,)) out_samples = model(samples.view(-1, *image_size)).detach() mc_mu = torch.mean(out_samples, dim=0) mc_var = torch.var(out_samples, dim=0) # printing the ratios print('Monte-Carlo mean / Analytical mean:') print((mc_mu / out_mu).cpu().numpy()) print('Monte-Carlo variance / Analytical 
variance:') print((mc_var / out_var).cpu().numpy()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.1 64-bit (''mypython'': conda)' # name: python37364bit00d4d3a58f944c1c8375f1b9a9835895 # --- # # select Multi domain protein from pisces # ## Read pisces text import numpy as np import pandas as pd from prody import parsePDB import os import random import time pisces_df = pd.read_csv('../../../pisces/20210225/cullpdb_pc20_res2.0_R0.25_d210225_chains7584', delim_whitespace=True) pisces_df['PDB_ID'] = pisces_df['IDs'].str[: 4] pisces_df['Chain'] = pisces_df['IDs'].str[4] pisces_df # ## Read CATH text # Read CATH domain text cath_domain_df = pd.read_csv('../../../CATH/20191109/cath-domain-list.txt', delim_whitespace=True, skiprows=16, header=None) cath_domain_df = cath_domain_df.drop(list(range(1, 12)), axis=1) cath_domain_df = cath_domain_df.rename({0: 'CATH_Domain'}, axis=1) cath_domain_df['IDs'] = cath_domain_df['CATH_Domain'].str[: 5].str.upper() domain_num_df = pd.DataFrame(cath_domain_df.groupby('IDs').apply(len)).rename({0: 'Domain_num'}, axis=1).reset_index() domain_num_df # ## Concat pisces df and CATH df cdf = pd.merge(pisces_df, domain_num_df, on='IDs', how='inner') cdf # ## select Multidomain entries multidomain_df = cdf.query('Domain_num > 1') multidomain_df multidomain_df.sort_values('Domain_num')[-10: ] multidomain_df_outpath = '../../../pisces/20210225/multidomain_cullpdb_pc20_res2.0_R0.25.csv' multidomain_df.to_csv(multidomain_df_outpath) # ## Get fasta sequence from multidomain df from Bio import SeqIO from pathlib import Path pdb_fasta_path = '../../../PDBseq/pdb_seqres.txt' out_fasta_dir = Path('../../../pisces/20210225/multi-domain_fasta/') records_dict = SeqIO.to_dict(SeqIO.parse(pdb_fasta_path, 'fasta')) for index, rows in multidomain_df.iterrows(): seq_id = rows['PDB_ID'].lower() + '_' + rows['Chain'] try: seq = records_dict[seq_id] except KeyError: print(seq_id) else: out_path = (out_fasta_dir / seq_id.upper()).with_suffix('.fasta') # SeqIO.write(seq, out_path, 'fasta') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- m.controlpanel(fn_of_sol=Weights_pie) # + # %matplotlib inline import matplotlib.pyplot as plt from gpkit import VectorVariable, Variable, Model, units from gpkit.tools import te_exp_minus1 import gpkit import numpy as np from gpkit.constraints.set import ConstraintSet import subprocess def Weights_pie(sol): Wts = sol.program.result["variables"] print " " + "MTOW: %.0f lbs" %(Wts["W_{MTO}"]) + " " + "OEW: %.0f lbs" %(Wts["W_{ZF}"]) the_grid = plt.GridSpec(1, 8) plt.subplot(the_grid[0, 0], aspect=1) plt.pie([Wts["W_{ZF}"],Wts["W_{payload}"],Wts["W_{fuel}"]], labels=['OEW','Payload','Fuel'], explode=(0.05, 0, 0), autopct='%1.1f%%', shadow=True, radius=8.0) plt.subplot(the_grid[0, 7], aspect=1) plt.pie([Wts["W_{CB}"], Wts["W_{AF}"], Wts["W_{wing}"], Wts["W_{vertical}"], Wts["n_{eng}"]*Wts["W_{eng}"], Wts["W_{extra}"]], labels=['Centerbody','Afterbody','Wing','Vertical','Engine','Extra'], explode=(0, 0, 0, 0, 0, 0), autopct='%1.1f%%', shadow=True, radius=8.0) SFC = Variable("SFC", 0.55, "lbs/lbs/hr", "Specific Fuel Consumption") R = Variable("R", 8000, "nmi", "Range") V = Variable("V", 290, "m/s", "Velocity") # rho = Variable("\\rho", 1.23, "kg/m^3", "Air 
Density") rho = Variable("\\rho", 0.3, "kg/m^3", "Air Density") S = Variable("S", 800, "m^2", "Wing Area") n_eng = Variable("n_{eng}", 2, "-", "Number of Engines") lambda_aft = Variable("\\lambda_{aft}", 0.8, "-", "Aftbody Taper Ratio") n_pax = Variable("n_{pax}", 450, "-", "Number of Passengers") W_pax = Variable("W_{pax}", 210, "lbf", "Weight of One Passenger") pi = Variable("\\pi", np.pi, "-", "pi") e = Variable("e", 0.9, "-", "Oswald Efficiency") AR = Variable("AR", 7.0, "-", "Aspect Ratio") MTOW = Variable("W_{MTO}", "lbf", "Maximum Takeoff Weight") z_bre = Variable("z_{bre}", "-", "Breguet Term") L = Variable("L", "lbf", "Lift") CL = Variable("C_L", "-", "Lift Coefficient") D = Variable("D", "lbf", "Drag") CD = Variable("C_D", "-", "Drag Coefficient") CDp = Variable("C_Dp", "-", "Profile Drag Coefficient") ZFW = Variable("W_{ZF}", "lbf", "Zero Fuel Weight") W_cb = Variable("W_{CB}", "lbf", "Centerbody Weight") cabin_area = Variable("S_{cabin}", "m^2", "Cabin Planform Area") W_af = Variable("W_{AF}", "lbf", "Afterbody Weight") aft_area = Variable("S_{aft}", "m^2", "Cabin Planform Area") W_wing = Variable("W_{wing}", "lbf", "Wing Weight") W_vertical = Variable("W_{vertical}", "lbf", "Vertical Tail Weight") W_pay = Variable("W_{payload}", "lbf", "Payload Weight") W_fuel = Variable("W_{fuel}", "lbf", "Fuel Weight") W_eng = Variable("W_{eng}", "lbf", "Engine Weight") W_extra = Variable("W_{extra}", "lbf", "Additional Weight") # Performance Model constraints = [0.5 * rho * V**2 * CL * S >= MTOW, L == MTOW, D >= 1./2 * rho * V**2 * CD * S, z_bre >= R * (SFC/V) * (D/L), W_fuel >= ZFW*te_exp_minus1(z_bre,4)] # Aero Model class XFOIL(): def __init__(self): self.pathname = "/Users/codykarcher/Xfoil/bin/./xfoil" def cd_model(self, x0, max_iter=100): # File XFOIL will save data to polarfile = 'polars.txt' # Remove any existing file. 
Prevents XFOIL Error ls_res = subprocess.check_output(["ls -a"], shell = True) ls = ls_res.split() if polarfile in ls_res: subprocess.call("rm " + polarfile, shell = True) # Open XFOIL and run case proc = subprocess.Popen([self.pathname], stdout=subprocess.PIPE, stdin=subprocess.PIPE) proc.stdin.write('naca2412 \n' + 'oper \n' + "iter %d\n" %(max_iter)+ 'visc \n' + "%.0f \n" % (1.0e6) + #(x0["Re"]) + 'pacc \n' + '\n' + '\n' + "cl %.0f \n" %(x0["C_L"]) + 'pwrt \n' + polarfile + '\n' + 'q') stdout_val = proc.communicate()[0] proc.stdin.close() # Read data from XFOIL output data = np.genfromtxt(polarfile, skip_header=12) CD_star = data[2] return CD_star class SimpleCDModel(ConstraintSet): def as_posyslt1(self): raise ValueError("SimpleCDModel is not allowed to solve as a GP.") def as_gpconstr(self, x0): if not x0: return (CDp >= 0.01) else: xfoilcd = XFOIL().cd_model(x0) return (CDp >= xfoilcd) constraints += [SimpleCDModel([]), CD >= CDp + 5.*(CL**2)/(pi*e*AR)] # constraints += [CD >= Aero_Model] # Weights Model wcb_mtow_exp = 0.166552 wcb_ca_exp = 1.061158 constraints += [ZFW >= (W_cb + W_af + W_wing + W_vertical + n_eng*W_eng + W_extra), W_cb >= 3.*(units.lbf * units.lbf**(-wcb_mtow_exp) * units.meters**(-2*wcb_ca_exp)) * ( 5.698865 * 0.316422 * (MTOW ** wcb_mtow_exp) * cabin_area ** wcb_ca_exp), cabin_area == 0.4*S, W_af >= 6.*(units.lbf * units.lbf**(-.2) * units.meter**(-2))*( (1.0 + 0.05*n_eng) * 0.53 * aft_area * (MTOW**0.2) * (lambda_aft + 0.5) ), aft_area == 0.2*S, W_wing >= 2.*W_cb, W_vertical >= .3*W_wing, W_pay >= n_pax*W_pax, MTOW >= ZFW + W_pay + W_fuel, W_eng >= .5*D, W_extra >= .1*MTOW ] objective = MTOW m = Model(objective, constraints) # m.controlpanel(fn_of_sol=Weights_pie) sol=m.localsolve() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Demos: Lecture 19 # + import pennylane as qml from pennylane import numpy as np from itertools import chain from lecture19_helpers import * # - # ## Demo 1: QAOA from scratch # $$ # \hat{H}_c = \gamma \sum_{ij \in E} (Z_i + Z_j + Z_i Z_j) - 2 \lambda \sum_{i \in V} Z_i # $$ # # $$ # \hat{H}_m = \sum_{i\in V} X_i # $$ edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4)] # + edge_terms = [[qml.PauliZ(i), qml.PauliZ(j), qml.PauliZ(i) @ qml.PauliZ(j)] for (i, j) in edges] edge_terms = list(chain(*edge_terms)) edge_coeffs = [1] * len(edge_terms) vertex_terms = [qml.PauliZ(i) for i in range(5)] vertex_coeffs = [-2] * len(vertex_terms) edge_h = qml.Hamiltonian(edge_coeffs, edge_terms) vertex_h = qml.Hamiltonian(vertex_coeffs, vertex_terms) cost_h = edge_h + vertex_h # - mixer_terms = [qml.PauliX(i) for i in range(5)] mixer_coeffs = [1] * len(mixer_terms) mixer_h = qml.Hamiltonian(mixer_coeffs, mixer_terms) # + dev = qml.device('default.qubit', wires=5) @qml.qnode(dev) def qaoa_circuit(beta_c, alpha_m): # Initalize for wire in range(5): qml.Hadamard(wires=wire) # Apply alternating layers for idx in range(len(beta_c)): qml.ApproxTimeEvolution(cost_h, beta_c[idx], 1) qml.ApproxTimeEvolution(mixer_h, alpha_m[idx], 1) # Return expval of cost H return qml.expval(cost_h) # - n_layers = 7 beta_c = np.random.normal(size=n_layers) alpha_m = np.random.normal(size=n_layers) # + opt = qml.GradientDescentOptimizer(stepsize=0.01) for _ in range(100): (beta_c, alpha_m) = opt.step(qaoa_circuit, beta_c, alpha_m) # - @qml.qnode(dev) def qaoa_probs(beta_c, alpha_m): # 
Initalize for wire in range(5): qml.Hadamard(wires=wire) # Apply alternating layers for idx in range(len(beta_c)): qml.ApproxTimeEvolution(cost_h, beta_c[idx], 1) qml.ApproxTimeEvolution(mixer_h, alpha_m[idx], 1) # Return expval of cost H return qml.probs(wires=range(5)) plot_probs(qaoa_probs(beta_c, alpha_m)) plot_graph(edges, '11010') # ## Demo 2: using `qml.qaoa` graph = nx.Graph(edges) cost_h, mixer_h = qml.qaoa.min_vertex_cover(graph, constrained=False) # + def single_layer(beta_c_single, alpha_m_single): qml.qaoa.cost_layer(beta_c_single, cost_h) qml.qaoa.mixer_layer(alpha_m_single, mixer_h) @qml.qnode(dev) def qaoa_circuit(beta_c, alpha_m): for wire in range(5): qml.Hadamard(wires=wire) qml.layer(single_layer, len(beta_c), beta_c, alpha_m) return qml.expval(cost_h) # + n_layers = 5 beta_c = np.random.normal(size=n_layers) alpha_m = np.random.normal(size=n_layers) opt = qml.GradientDescentOptimizer(stepsize=0.01) for _ in range(100): (beta_c, alpha_m) = opt.step(qaoa_circuit, beta_c, alpha_m) # - @qml.qnode(dev) def qaoa_probs_circuit(beta_c, alpha_m): for wire in range(5): qml.Hadamard(wires=wire) qml.layer(single_layer, len(beta_c), beta_c, alpha_m) return qml.probs(wires=range(5)) plot_probs(qaoa_probs_circuit(beta_c, alpha_m)) plot_graph(edges, '01101') # # 04 - More Types # ## The `None` Type # The `None` object is used to represent the absence of a value. It is similar to `null` in other programming languages. # # Like other "empty" values, such as `0`, `[]` and the empty string `""`, it is `False` when converted to a Boolean variable. # # When entered at the Python console, it is displayed as `None`. # + myNone = None print(myNone) print(None == None) print(bool(None)) # - # ## Tuples # Tuples are very similar to lists, except that they are **immutable** (they cannot be changed). Also, they are created using parentheses (round brackets), rather than square brackets. # # You can access the values in the tuple with their index, just as you did with lists. **However, trying to reassign a value in a tuple causes a `TypeError`**. # + # Creating lists and tuples myList = ["Foo", "Bar", "Spam", "Eggs"] # This is a list myTuple = ("Foo", "Bar", "Spam", "Eggs") # This is a tuple # Accessing values in lists and tuples print(myList[0]) # Get first element in the list print(myTuple[0]) # Get first element in the tuple print(myList[2]) # Get second element in the list print(myTuple[2]) # Get second element in the tuple print(myList[-1]) # Get last element in the list print(myTuple[-1]) # Get last element in th tuple # We can only reassign values in lists, and not in tuples myList[1] = "Sausage" print(myList) # myTuple[1] = "Sausage" # This is illegal and causes a `TypeError` # print(myTuple) # - # Note that, like lists, tuples can be nested within each other. nestedTuple = ((1, 2), (3, 4), (5, 6), (7, 8)) print(nestedTuple) # An empty tuple is created using an empty parenthesis pair. emptyTuple = () print(emptyTuple) # Tuples are commonly used if **reassignment of elements should be avoided**. For example, coordinates are usually created using tuples as changing a single ordinate in the coordinate is undesirable. # **Exercise 04.01**: Write a program that calculates the straight-line distance between the two points in 3D space. # - The first point has coordinates $(-1, 6, 4)$. # - The second point has coordinates $(-4, 10, -8)$. 
# # *Note: for two points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, the straight-line distance between them is given by $$\sqrt{(x_1-x_2)^2 + (y_1-y_2)^2+(z_1-z_2)^2}$$* # + # Write your code here # - # ## Dictionaries # Dictionaries are data structures used to map arbitrary keys to values. # # Each element in a dictionary is represented by a `key: value` pair. # # Dictionaries can be indexed in the same way as lists, using square brackets containing keys. # # Trying to index a key that isn't part of the dictionary returns a `KeyError`. # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} # Elements are key-value pairs print(ages["Alex"]) # Get the value associated with the key "Alex" print(ages["Bob"]) print(ages["Charlie"]) # print(ages[0]) # There is no key 0 so this doesn't work and raises a `KeyError` # - # A dictionary can store any type of data as **values**. # + colors = { "red": [255, 0, 0], # The value can be mutable "green": [0, 255, 0], "blue": [0, 0, 255] } print(colors["red"]) print(colors["green"]) print(colors["blue"]) # - # However, the **keys** of a dictionary are more restrictive. The keys of a dictionary must be **immutable** (i.e. non editable). Thus, the keys of a dictionary **cannot be lists, dictionaries, or sets** (explained later) as these are **mutable**. # # *Note: recall that strings are __immutable__ so they can be used as keys for dictionaries.* # Uncomment the following lines to see the errors that Python produces myDict = { # [1, 2, 3]: "List", # Key is a list - not acceptable # {"A": 1, "B": 2}: "Dictionary" # Key is a dictionary - not acceptable } # An empty dictionary is defined as `{}` or `dict()` (preferred). print({}) print(dict()) print(dict() == {}) # Just like lists, dictionary keys can be assigned to different values. # # However, unlike lists, a new dictionary key can also be assigned a value, not just ones that already exist. # + squares = {1: 1, 2: 4, 3: "Error", 4: 16, 5: 25} print(squares) squares[3] = 9 # Assign the value of the element with key `3` to `9` print(squares) squares[8] = 64 # Make a new key-value pair of (8, 64) print(squares) # - # To determine whether a **key** is in a dictionary, you can use `in` and `not in`, just as you can for a list. # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} print("Alex" in ages) # Is the key "Alex" in `ages`? print("Dave" in ages) # Is the key "Dave" in `ages`? # - # To get the **keys** of a dictionary, use the `.keys()` method of the dictionary. To get the **values** of a dictionary, use the `.values()` method of the dictionary. # # *Note: __none of the aforementioned methods returns a list__, so you will need to explicitly convert their outputs to a list to use list methods. However, you can __still perform `in` and `not in` on them__ without converting to a list*. # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} print(list(ages.keys())) print(list(ages.values())) print("Alex" in ages.keys()) # Is "Alex" in the `ages`'s keys? print(36 in ages.values()) # Is 36 in the `ages`'s values? # - # To get all the key-value pairs of a dictionary, we can use the `.items()` method. However, just like the other two methods, although `in` and `not in` can be used on it, **it is not a list**. Thus explicit conversion to a list is needed if you want it to be a list. # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} print(ages.items()) # Not a list print(list(ages.items())) # - # Running `list(dictionary.items())` returns a list of key-value pairs, where each key-value pair is in a **tuple**. 
The first item in the tuple is the key and the second item is the value. # # We can iterate through all key-value pairs by doing something like this: # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} for keyValuePair in ages.items(): print(keyValuePair[0], keyValuePair[1]) # Remember: index 0 is the key and index 1 is the value # A more concise way of doing that is this: for key, value in ages.items(): # Notice there are now two items before `in` print(key, value) # - # Another useful dictionary method is `get`. It does the same thing as indexing, but if the key is not found in the dictionary it returns another specified value instead (`None`, by default). # + ages = {"Alex": 23, "Bob": 36, "Charlie": 45} print(ages.get("Alex")) # Get the value associated with key "Alex" print(ages.get("Dave")) # Get the value associated with key "Dave" print(ages.get("Dave", "This is returned if not found")) # - # **Exercise 04.02**: The file `titles.json` contains a dictionary containing key-value pairs. The key represents the [Project Gutenberg](https://www.gutenberg.org/) EBook ID and the value the title of the EBook. **This dictionary has the name `titles`**. Note that the key is a **string** representing an **integer**, representing the EBook ID. # # Write code that asks the user to input an EBook ID and returns the title. # - You **should validate** that the input is between 1 and 1000 inclusive. **You do not need to validate whether the input is an integer or not**. # - Print the corresponding title of the book if the input ID is valid; otherwise print `Not Found`. # # The reading and processing of the dictionary is handled for you. # # *Note: the ID `182` does not correspond to any book; you can use this to check your program.* # + # Reading in the dictionary; DO NOT MODIFY import json with open("titles.json", "r") as f: titles = json.load(f) # Write your code here # - # ## List Slicing # List slices provide a more advanced way of retrieving values from a list. Basic list slicing involves indexing a list with **two colon-separated integers**. This returns a new list containing all the values in the old list between the indices. # + squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] print(squares[2:4]) print(squares[3:8]) print(squares[0:1]) # - # Like the arguments to range, the first index provided in a slice is included in the result, but the second isn't. # + squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] # Compare this... listSlice = [] for i in range(3, 8): listSlice.append(squares[i]) print(listSlice) # ...with this print(squares[3:8]) # - # If the first number in a slice is omitted, it is taken to be the start of the list. If the second number is omitted, it is taken to be the end. # + squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] print(squares[:4]) print(squares[4:]) # - # Slicing can also be done on tuples. # + cubes = (1, 8, 27, 64, 125) print(cubes[1:4]) print(cubes[:4]) print(cubes[2:]) # - # Just like `range`, list slices can also have a third number, **representing the step**, to include only certain values in the slice. # + squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] print(squares[::2]) # Take only alternate values print(squares[2:8:2]) print(squares[2:8:3]) # - # **Discussion 04.01**: What is the output of this code? # ```python # squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] # # print(squares[2::4]) # print(squares[8:1:-1]) # print(squares[::-1]) # ``` # Try and predict the output of the code before running the code below. 
# + # Try out the code here # - # **Remark**: Using `[::-1]` as a slice is a common and idiomatic way to reverse a list. # + myList = ["A", "B", "C", "D", "E"] print(myList) print(myList[::-1]) # - # **Exercise 04.03**: A list is given below. # ```python # myList = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400] # ``` # Write *concise* code that outputs the following: # ``` # [169, 144, 121, 100, 81, 64, 49, 36, 25] # ``` # + # The list above is provided here myList = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400] # Write your code here # - # ## List Comprehension # List comprehensions are a useful way of quickly creating lists whose contents obey a simple rule. # # For example, we can do the following: # A list comprehension squares = [i**2 for i in range(10)] # Generates the square numbers from 0 to 9 inclusive print(squares) # **Discussion 04.02**: What is the output of the following code? # ```python # print([i for i in range(1, 20)]) # print([x*2 for x in range(1, 20)]) # print([num**3 for num in range(5, 1, -1)]) # print([2*a+1 for a in range(9)]) # ``` # Try and predict the output of the code before running it below. # + # Try out the code here # - # A list comprehension can also contain an `if` statement to enforce a condition on values in the list. # Adds the square of the current number only if `i % 2 == 0` evenSquares = [i**2 for i in range(10) if i % 2 == 0] print(evenSquares) # **Exercise 04.04**: Create a list of the multiples of 3 from 0 to 19 inclusive. # + # Write your code here # - # Trying to create a list in a very extensive range will result in a `MemoryError`. This code shows an example where the list comprehension runs out of memory. # # This issue is solved by generators, which are covered in subsequent modules. # + # ohNoItsTooBig = [i for i in range(10 ** 100)] # Will produce a `MemoryError` (after a long while) # - # **Exercise 04.05**: Create a list of multiples of 3 from 0 to `N` inclusive, where `N` is an integer that is entered by the user. You may assume that the user always enters an integer. # - Validate that `N` is positive but less than 100. Continue asking for input until an integer that meets this condition is entered. # # *Sample Input* (without the comments): # ``` # 1234 # Invalid # 123 # Invalid # -12 # Invalid # 19 # Valid # ``` # # *Sample output*: # ``` # [0, 3, 6, 9, 12, 15, 18] # ``` # + # Write your code here # - # ## String Formatting (using `f`-Strings) # So far, to combine strings and non-strings, you've converted the non-strings to strings and added them. # # String formatting provides a more powerful way to embed non-strings within strings. From Python 3.6 onwards, we can use `f`-strings to substitute known values into strings. # + nums = [4, 5, 6] # Variables are used by writing `{variable}` inside the `f`-string message = f"My numbers are {nums[0]}, {nums[1]} and {nums[2]}." print(message) # - # **Discussion 04.03**: What do you think the output of the following code will be? # ```python # myList = ["A", 2, 3.45, False, True] # # print(f"First element is {myList[0]}") # print(f"Second element is {myList[1]}") # print("What does this do? {myList[3]}") # print(f"What about booleans? {myList[-1]}") # ``` # Try and predict the output before running the code in the space provided below. # + # Try out the code here # - # **Discussion 04.04**: What do you think the following code will output? 
# ```python # var1 = 1 # var2 = 2 # var3 = 3 # # print(f"{var1} {var1} {var1}") # print(f"{var2} {var3} {var1}") # print(f"{{var2}} {{var3}} {{var1}}") # ``` # Try and predict the output before running the code in the space provided below. # + # Try out the code here # - # The discussion above brings up an important thing to note: to enter curly braces in an `f`-string, you **must use double of them**. # - To type `{` in an `f`-string, write `{{`. # - To type `}` in an `f`-string, write `}}`. variable = [1, 2, 3] print(f"These are curly braces ({{ and }}) and here's my list {variable}.") # **Exercise 04.06**: Write a program that: # - Accepts two strings as input. # - Print the string `[First String] [First String] [Second String] [Second String] [First String] [Second String] [Second String] [First String]`, replacing `[First String]` and `[Second String]` with the appropriate string. # + # Write your code here # - # ## Useful Functions # Python contains many useful built-in functions and methods to accomplish common tasks. # ### Text Functions # Here are some useful string functions. # - `join`: joins a list of strings with another string as a separator. # - `split`: turns a string with a certain separator into a list. # - `replace`: replaces one substring in a string with another. # - `startswith`: determine if there is a substring at the start of a string. # - `endswith`: determine if there is a substring at the end of a string. print(", ".join(["spam", "eggs", "ham"])) print("A long string, separated by commas, wow!".split(",")) print("Hello me".replace("me", "world")) print("this is a sentence".startswith("this")) print("this is a sentence".endswith("sentence")) # **Exercise 04.07**: Write code that splits the string `This is a test` into its individual words. Also write code that combines the list `["This", "is", "another", "test."]` into a single, grammatically sound sentence. # + # Write your code here # - # ### Numeric Functions # - To find the maximum or minimum of some numbers or a list, you can use `max` or `min`. # - To find the distance of a number from zero (its absolute value), use `abs`. # - To round a number to a certain number of decimal places, use `round`. # - To find the sum of numbers in a list, use `sum`. # # *Note: Python's `round` function uses the __[round half to even](https://en.m.wikipedia.org/wiki/Rounding#Round_half_to_even) method__ (also known as Banker's Rounding) for rounding numbers.* # + numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] print(max(numbers)) print(min(numbers)) print(sum(numbers)) print(abs(54)) print(abs(-54)) print(round(2.5)) print(round(3.5)) print(round(4.7515244, 3)) # Round to 3 decimal places print(round(2, 2)) # - # **Exercise 04.08**: The list # ``` # [1, -2, 3.04, 4.055, -5.605, 5.605, 7.123456, -7.123456, 0.005, -0.005] # ``` # contains several numbers. Round each number to 1 decimal place. Then return the maximum and minimum of the absolute value of the rounded numbers. # + # Write your code here # - # ### List Functions # Often used in conditional statements, `all` and `any` take a list as an argument, and return `True` if all or any (respectively) of their arguments evaluate to `True` (and `False` otherwise). # + nums = [55, 44, 33, 22, 11] if all([num > 5 for num in nums]): print("All numbers greater than 5") if any([num % 2 == 0 for num in nums]): print("At least one number is even") # - # The function `enumerate` can be used to iterate through the values and indices of a list simultaneously. 
# + myList = ["Value 1", "Value 2", "Value 3", "Value 4", "Value 5"] for index, value in enumerate(myList): print(index, value) # - # **Exercise 04.09**: The list # ``` # [1, 3, 6, 8, 12, 24, 14, 30, 20, 21] # ``` # contains several integers. Write code that determines whether: # - every value is at least twice its index (0-based indexing) # - at least one value is a multiple of 4 when its index (0-based indexing) is also a multiple of 4 # + # Write your code here # - # ## Assignment 04A # The file `titles.json` contains a dictionary containing key-value pairs. The key represents the [Project Gutenberg](https://www.gutenberg.org/) EBook ID and the value the title of the EBook. **This dictionary has the name `titles`**. Note that the key is a **string** representing an **integer**, representing the EBook ID. The loading of this dictionary is handled for you below. # # ### Task # Write code that does the following. # - Determines whether any book title has longer than 50 characters. # - Determines if every book title has the letter `e` in it (both uppercase and lowercase) # # In addition, generate a list of book titles which has **two consecutive spaces next to each other**. Print out the **first 3 elements of this list** using the format # ``` # The first three titles which have two consecutive spaces are: # [TITLE 1], [TITLE 2], and [TITLE 3]. # ``` # + # Reading in the dictionary; DO NOT MODIFY import json with open("titles.json", "r") as f: titles = json.load(f) # Write your code here # - # ## Assignment 04B # The string # ``` # 1,2 3,4 5,6 7,8 9,10 -1,-2 11,-12 -13,14 15,16 20,20 # ``` # represents a list of 10 tuples (separated by spaces), where each tuple is a pair of two numbers (separated by commas). # # Define the *norm* of a tuple $(x,y)$ to be # $$\sqrt{x^2+y^2}$$ # # ### Task # Generate the list of norms, in the same order as the tuples in the string. Then, # - print the list of norms, where each element is rounded to 2 decimal places. # - find the maximum and minimum norm (**not** rounded to 2 decimal places) and print their sum # - by considering list slicing, print out the 1st, 3rd, 5th, 7th, and 9th norms (1-based indexing, **not** rounded to 2 decimal places) in a list # + # The string defined earlier theString = "1,2 3,4 5,6 7,8 9,10 -1,-2 11,-12 -13,14 15,16 20,20" # Write your code here # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # %matplotlib inline import gym import matplotlib import numpy as np import sys from collections import defaultdict if "../" not in sys.path: sys.path.append("../") from lib.envs.blackjack import BlackjackEnv from lib import plotting matplotlib.style.use('ggplot') # - env = BlackjackEnv() def create_random_policy(nA): """ Creates a random policy function. Args: nA: Number of actions in the environment. Returns: A function that takes an observation as input and returns a vector of action probabilities """ A = np.ones(nA, dtype=float) / nA def policy_fn(observation): return A return policy_fn def create_greedy_policy(Q): """ Creates a greedy policy based on Q values. Args: Q: A dictionary that maps from state -> action values Returns: A function that takes an observation as input and returns a vector of action probabilities. 
""" def policy_fn(observation): A = np.zeros( len(Q[observation]), dtype=float) best_action = np.argmax(Q[observation]) A[best_action] = 1.0 return policy_fn def mc_control_importance_sampling(env, num_episodes, behavior_policy, discount_factor=1.0): """ Monte Carlo Control Off-Policy Control using Weighted Importance Sampling. Finds an optimal greedy policy. Args: env: OpenAI gym environment. num_episodes: Nubmer of episodes to sample. behavior_policy: The behavior to follow while generating episodes. A function that given an observation returns a vector of probabilities for each action. discount_factor: Lambda discount factor. Returns: A tuple (Q, policy). Q is a dictionary mapping state -> action values. policy is a function that takes an observation as an argument and returns action probabilities. This is the optimal greedy policy. Off-policy every-visit MC control (returns π ≈ π∗) Initialize, for all s ∈ S, a ∈ A(s): Q(s, a) ← arbitrary C(s,a) ← 0 π(s) ← a deterministic policy that is greedy with respect to Q Repeat forever: Generate an episode using any soft policy μ: S0,A0,R1,...,ST−1,AT−1,RT ,ST G←0 W←1 Fort=T−1,T−2,... downto0: G ← γG + Rt+1 C(St, At) ← C(St, At) + W Q(St, At) ← Q(St, At) + W/C(St,At) [G − Q(St, At)] π(St) ← argmaxa Q(St,a) (with ties broken consistently) If At ̸= π(St) W←W 1/μ(At |St ) """ # The final action-value function. # A dictionary that maps state -> action values Q = defaultdict(lambda: np.zeros(env.action_space.n)) C = defaultdict(lambda: np.zeros(env.action_space.n)) # Our greedily policy we want to learn target_policy = create_greedy_policy(Q) actions = np.array(xrange(env.action_space.n)) for ep in range(num_episodes): observation = env.reset() episode = [] done = False if num_episodes <= 5: print "starting episode %d" % ep while not done: action_dist = behavior_policy(observation) action = np.random.choice(actions, p=action_dist) next_obs, reward, done, _ = env.step(action) episode.append( (observation, action, reward) ) observation = next_obs if num_episodes <= 5: print "length: %d" % len(episode) G = 0.0 W = 1.0 for t in range( len(episode)-1, -1, -1 ): observation, action, reward = episode[t] if num_episodes <= 5: print "%d %f" % (action, reward) G = (discount_factor * G) + reward C[observation][action] += W Q[observation][action] += (W / C[observation][action]) * (G - Q[observation][action]) if action != np.argmax(target_policy(observation)): break W = W / behavior_policy(observation)[action] return Q, target_policy random_policy = create_random_policy(env.action_space.n) Q, policy = mc_control_importance_sampling(env, num_episodes=5, behavior_policy=random_policy) random_policy = create_random_policy(env.action_space.n) Q, policy = mc_control_importance_sampling(env, num_episodes=500000, behavior_policy=random_policy) # For plotting: Create value function from action-value function # by picking the best action at each state V = defaultdict(float) for state, action_values in Q.items(): action_value = np.max(action_values) V[state] = action_value plotting.plot_value_function(V, title="Optimal Value Function") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.11 64-bit (''fns_hack'': conda)' # name: python3 # --- # # Task C (Monte-Carlo solution) # В селе Азино очень любят играть в азартные игры. # Правила одной из игр таковы. Игрок 125 раз бросает игральную шестигранную кость. 
Из полученной последовательности выпавших чисел выбираются все максимальные по включению подотрезки, где каждое число ровно на 1 больше предыдущего. За каждый такой подотрезок игрок получает выигрыш, равный # 100⋅max(0,(L−2)), где L -- длина подотрезка. # Какую наибольшую целочисленную цену игрок может заплатить за участие в такой игре, чтобы матожидание его прибыли было положительным? # My first solution for the case that any first value of a die roll after the break of sequence belongs to a new sequence. Apparently, I got this condition wrong because the answer got 0 points despite robust results across many runs of simulations import numpy as np wins = [] for i in range(1000000): sample = np.random.choice(np.arange(1, 7), size=125) #print(sample) gains = [] idx = 1 curr = 1 while idx < len(sample): if sample[idx] == sample[idx-1] + 1: curr += 1 else: gains.append(curr) curr = 1 idx +=1 #print(gains) money = list(map(lambda x: max(0, x-2), gains)) wins.append(sum(money)) result = (np.array(wins) *100).mean() print(result) # A solution for the case when a first value after the break doesn't belong to the new sequence. The result is unstable (integer part alters between 27 and 28). As of now, I coudn't come up with an analytical solution for this problem. import numpy as np wins = [] for i in range(1000000): sample = np.random.choice(np.arange(1, 7), size=125) #print(sample) gains = [] idx = 1 curr = 0 while idx < len(sample): if sample[idx] == sample[idx-1] + 1: curr += 1 else: gains.append(curr) curr = 0 idx +=1 #print(gains) money = list(map(lambda x: max(0, 100*x-200), gains)) wins.append(sum(money)) result = np.array(wins).mean() print(result) # # Task D. Fraud in crowd-sourcing # Одним из популярных способов разметки данных является краудсорсинг. Краудсорсинговые платформы, такие как Toloka, позволяют заказчикам размещать задания по разметке данных, а исполнителям выполнять эти задания за вознаграждение от заказчика. # # Довольно распространенный тип краудсорсинговых заданий — бинарная классификация изображений, где исполнителю нужно выбрать один из двух вариантов, например, указать, есть ли на картинке котик. # # Один из заказчиков решил использовать для борьбы с недобросовестными исполнителями (фродерами) механизм контрольных заданий. На каждую страницу с заданиями добавляется одно контрольное задание, для которого известен правильный ответ. Ответы исполнителя на такие задания сравниваются с эталонными, после чего исполнитель может быть заблокирован, если его ответы не будут соответствовать правилу контроля качества. # # Заказчик настроил следующее правило контроля качества: # Если среди первых 10 ответов на контрольные задания более 30% неверных, исполнитель блокируется. # # Далее после каждых новых 5 контрольных ответов выбираются 10 последних контрольных ответов исполнителя, и если среди них более 30% неверных, исполнитель блокируется. # В каждом задании заказчика можно дать один из двух ответов: «Да» или «Нет». Заказчик готовит набор из 100 контрольных заданий. Каждый раз, когда исполнителю нужно будет показать контрольное задание, оно будет случайно равновероятно выбираться из тех контрольных заданий, которые исполнитель еще не выполнял. # # Какое наибольшее число контрольных заданий с эталонным ответом «Да» может быть в подготовленном наборе контрольных заданий, чтобы фродер, всегда отвечающий «Да», с вероятностью не менее 80% был заблокирован после выполнения не более 25 контрольных заданий? # Solution outline: # # Fraudster has to take 5 runs of the test. 
In every successive 10 tasks that a fraudster has to accomplish, he will fail if there are 3 or less positive examples (or, equivalently, 4 or more negative). I denote this probability of a failure as $P$. Then number of a run when he first fails follows geometric distribution $X \sim G_P$. Probability that he'll get caught during first 5 attempts is c.d.f. of X at 5. Thus, the following should hold: # # $F(5) = 1-(1-P)^5 \ge 0.8$ # # or # # $1-P({he\ passed\ 10\ tasks})^5 \ge 0.8$ # # $P({he\ passed\ 10\ tasks}) = P({there's\ at\ most\ 3\ negatives\ among\ 10\ tasks})= \frac{\sum_{i=0}^{3}{K \choose i}{100-K \choose 10-i}}{100 \choose 10}$, # # where K denotes number of negatives among all 100 samples. Now we just need to find such K, that $F_K \ge 0.8$ and $F_{K-1} < 0.8$. And value 100-K would correspond to maximum of positives in the sample. from math import comb # k is number of negatives k=28 s = 0 for i in range(4): s += comb(k, i) * comb(100-k, 10-i) # fail here means fail of a run of tests to identify the fraud. p_fail = s / comb(100, 10) p_success_all = 1 - p_fail ** 5 print(p_fail) print(p_success_all) print('max positives:', 100-k) # And this is my first solution which had a mistake: I put 4 instead of 5 as a power in c.d.f calculation (it corresponds to another interpretation of geometric distribution r. v. that counts number of failures before the first success). from math import comb # k is number of negatives k=30 s = 0 for i in range(4): s += comb(k, i) * comb(100-k, 10-i) p_fail = s / comb(100, 10) p_success_all = 1 - p_fail ** 4 print(p_fail) print(p_success_all) print('max positives:', 100-k) # # Task E. Time of delivery # Время ожидания заказа в Яндекс.Еде можно разложить на 4 составляющие: # 1) время выполнения заказа в ресторане # 2) время на поиск курьера для выполнения заказа # 3) время на дорогу курьера до ресторана # 4) время на дорогу курьера от ресторана до клиента. # # При этом: # - задача 3) начинается сразу после задачи 2) # - задача 4) начинается сразу после задачи 1) и 3) # - задачи 1) и 2) начинаются одновременно. # # В упрощенной модели можно считать, что каждому этапу соответствует некоторое вероятностное распределение на время выполнения процесса в минутах, а именно: # 1. равномерное распределение на [10;30] # 2. равномерное распределение на [2;7] # 3. равномерное распределение на [5;20] # 4. равномерное распределение на [5;15]. # # Какое минимальное время на доставку заказа стоит указывать Яндекс Еде, чтобы хотя бы 95% заказов выполнялись без опозданий? В ответ введите число, округленное до 3 знаков после десятичного разделителя. # ## Outline of solution # Denote time to deliver the order as Y # # $Y = X_4 + max(X_1;\ X_2+X_3)$ # # Then we have to calculate c.d.f of p.d.f of Y incrementally, step by step. It's a long and tedious proccess involving Simpson's distrubution and some tricks. # # Resulting dictribution of $Z=max(X_1;\ X_2+X_3)$ part has several subintervals and behaves differently on them. But, it seemed to me that calculating convolution with $X_4$ only for the last subinterval $z \in [27;30]$ would be enough to get 0.95 percentile of final distribution. So I derived analitycal solution which yielded an answer: $p_{0.95}=45-\sqrt{20}\approx 40.528$. # # The answer proved to be wrong apparently because value of Y=40.52 overlaps with penultimate subinterval of Z or due to a miscalculation. 
# import numpy as np for _ in range(10): size = 100_000_000 X1 = np.random.uniform(low=10, high=30, size=size) X2 = np.random.uniform(low=2, high=7, size=size) X3 = np.random.uniform(low=5, high=20, size=size) X4 = np.random.uniform(low=5, high=15, size=size) Y = X4 + np.max([X1, X2 + X3], axis=0) Y.sort() print(Y[int(0.95*size)]) # Quick Monte-Carlo simulation confirms that analytical derivation is not far off from the truth. # # Task F. Analytics # Участник чемпионата Yandex Cup за каждую решенную задачу получает 2 монеты, которые он может обменять на буквы из набора {А Н Л И Т К}. Цель каждого участника — не только победить, но и составить название одного из направлений — АНАЛИТИКА. # # За 1 монету участник может купить одну букву из набора {А Н Л И Т К}, при этом буквы выдаются независимо и равновероятно. # # Для решения задачи необходимо ответить на два вопроса и указать в поле ввода сумму ответов, округленную до 3 знаков после десятичного разделителя. # # Какое минимальное количество задач должен решить участник, чтобы вероятность собрать заветное слово АНАЛИТИКА была не менее 0.5? # Какова вероятность собрать слово, если человек решит то количество задач, которое получилось в ответе на вопрос 1? # I modeled occurences of each letter as a random vector following a multinomial distribution given multisets of n letters drawn out of alphabet with cardinality 6. For each n we need to count number of positive outcomes and multiply the result by $\frac{1}{6^n}$ (probability of each outcome). # # Since we know that at $n = 9$ we have a single positive outcome, we start investigating n=10, 11..., appending all combinations with replacement of n-9 letters to this initial outcome and evaluating if it's positive. # + from itertools import combinations_with_replacement from copy import copy from math import factorial TRIES = 22 right = [3, 2, 1, 1, 1, 1] goods = [] for comb in combinations_with_replacement(range(6), r=TRIES - 9): variant = copy(right) for i in comb: variant[i] += 1 goods.append(variant) #print(goods) def perms(arr): n = sum(arr) result = factorial(n) for i in arr: result = result / factorial(i) return result for i in goods: if sum(i) != TRIES : print('wrong') print(len(goods)) t_goods = [tuple(x) for x in goods] #print(sorted(t_goods)) goods = list(set(t_goods)) print(len(set(t_goods))) #print(goods) answer = list(map(perms, goods)) print(sum(answer) / 6 ** TRIES) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A2: Bias in data # # For this assignment, I'll compare [Wikipedia](https://www.wikipedia.org) articles about political figures, authored from various countries. By comparing the count of articles from a country to its population, and also the "quality" of articles about politicians, I hope to see and show that the level of coverage varies significantly and could produce bias within the content of the articles. The quality of a given article will be evaluated using the [ORES](https://www.mediawiki.org/wiki/ORES) service. 
# # First, I import the libraries that my Python 3 code will be using: import requests # For the ORES API call import numpy # For replacement of unread values with NaNs import pandas # For dataframing, merging, numeric conversions, and reading CSVs # Next, load the population data from the CSV file (WPDS_2018_data.csv) obtained from [here:](https://www.dropbox.com/s/5u7sy1xt7g0oi2c/WPDS_2018_data.csv?dl=0) # + # Load population data data_population = pandas.read_csv('WPDS_2018_data.csv') # Rename the columns, because we'll want 'country' to merge, later, and because I like short'population' better data_population = pandas.DataFrame({'country':data_population['Geography'], 'population':data_population['Population mid-2018 (millions)']}) # Take a look at its data data_population.head() #data_population # - # To acquire the data regarding Wikipedia pages, I download the archive from [here](https://figshare.com/articles/Untitled_Item/5513449) and expand to be able to open \country\data\page_data.csv. For this assignment, this file is stored in the same working directory as this Python notebook. data_page = pandas.read_csv('page_data.csv') # Take a look at its data data_page.head(4) #data_page # To obtain a quality score from ORES, I adapted code from [here,](https://github.com/Ironholds/data-512-a2/blob/master/hcds-a2-bias_demo.ipynb) (credit to GitHub users [jtmorgan](https://github.com/jtmorgan) and [ironholds](https://github.com/Ironholds)). The following function makes an ORES API call, given a list of revision ides to search for and a call header identifying me as caller. # + headers = {'User-Agent' : 'https://github.com/pking70', 'From' : ''} def get_ores_data(revision_ids, headers): # Define the endpoint endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}' # Specify the parameters - concatenating all the revision IDs together separated by | marks. params = {'project' : 'enwiki', 'model' : 'wp10', 'revids' : '|'.join(str(x) for x in revision_ids) } # Make the call for a response in JSON format api_call = requests.get(endpoint.format(**params)) response = api_call.json() #print(json.dumps(response, indent=4, sort_keys=True)) return response # - # To loop through all the page data, I segment the ids into groups of 100. Then I call ORES and append the response to a new dataframe named predictions. For rev_ids for which there is not a valid response, I replace with a numpy NaN. # # This code can take a while to execute, depending on the count of pages to query (for this run, I have over 47,000). I uncomment the 'print(i)' statement when I want to see progress and not wonder if it is indefinitely looping, but for now it is commented out. # + # So if we grab some example revision IDs and turn them into a list and then call get_ores_data... rev_ids = list(data_page['rev_id']) # Extract the rev_ids from the page data start = 0 # Start at item 0 step = 100 # How many ids to query ORES for at once. 
ORES does not work with large counts, but 100 works predictions = pandas.DataFrame() # A dataframe for the prediction results for i in range(start, len(rev_ids), step): # Loop through all the rev_ids # print(i) # Uncomment this if you want to watch progress rev_ids_set = rev_ids[i:i+step] # Use this number of ids for the ORES call response = get_ores_data(rev_ids_set, headers) # Call ORES for revision in response['enwiki']['scores']: # Loop through the JSON ORES call response try: prediction = response['enwiki']['scores'][revision]['wp10']['score']['prediction'] # Store predictions except: prediction = numpy.nan # When there is not a valid response, store a NaN # In a new dataframe, store revisions and predictions predictions = predictions.append({'revision':revision, 'prediction':prediction}, ignore_index=True) # - # To review the structure of the predictions dataframe: predictions.head() # The predictions data is merged with the page data, joined on their respective revision id fields. The revision field must be converted from string to numeric for this to work. predictions['revision'] = pandas.to_numeric(predictions['revision'], errors='coerce') data_page_prediction = data_page.merge(predictions, left_on='rev_id', right_on='revision') # Take a look at its data data_page_prediction.head() # The prediction+page data is merged with the population data that I loaded into data_population earlier, joined on their respective country fields. I was having trouble with this merge, so I trim any extra spaces from both country fields to possibly improve matching. data_pagepredpop = data_page_prediction.merge(data_population, left_on='country'.strip(), right_on='country'.strip()) # Take a look at its data data_pagepredpop.head() # For the final dataframe, extract the fields we want with the titles requested by the assignment, here. data_final = pandas.DataFrame({'country':data_pagepredpop['country'], 'population':data_pagepredpop['population'], 'article_name':data_pagepredpop['page'], 'revision_id':data_pagepredpop['rev_id'], 'article_quality':data_pagepredpop['prediction']}) # Take a look at its data data_final # To analyze this data, I want to examine which countries have the most (and least) amount of articles on Wikipedia regarding their political figures. I also want to examine the proportion of highly and lowly rated articles for each country. # # Note that the quality of an article has been returned by ORES, according to the scale defined here. # # In short, the prediction column of final_data now contains a value on this spectrum: # # 1. FA - Featured article # 2. GA - Good article # 3. B - B-class article # 4. C - C-class article # 5. Start - Start-class article # 6. Stub - Stub-class article # # To prepare for analysis, I must calculate the count of articles per country, and the per capita ratio of articles per country: data_article_count = data_final.groupby(['country']).size().reset_index(name='count') # Take a look at its data data_article_count.head() # The article count data is merged with the population data, joined on their respective country fields. data_popcount = data_population.merge(data_article_count, left_on='country'.strip(), right_on='country'.strip()) # Take a look at its data data_popcount.head() # I want the per capita proportion of articles to population. I have to convert population and count to numeric, and also multiply population by one million to scale it according to its defined format (remember, it was 'Population mid-2018 (millions)'). 
data_popcount['count'] = pandas.to_numeric(data_popcount['count'], errors='coerce') # Numeric conversion data_popcount['population'] = pandas.to_numeric(data_popcount['population'], errors='coerce') # Numeric conversion data_popcount['per capita'] = 100*(data_popcount['count']/(data_popcount['population']*1000000)) # Ratio calculation # Take a look at its data data_popcount.head() # To see the ten highest-ranked countries in terms of number of politician articles as a proportion of country population: # data_popcount.sort_values(by='per capita', ascending=False).head(10) # To see the ten lowest-ranked countries in terms of number of politician articles as a proportion of country population: data_popcount.sort_values(by='per capita', ascending=True).head(10) # To see the ten highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country: # data_quality = data_final[(data_final['article_quality']=='GA')|(data_final['article_quality']=='FA' )] data_quality = data_quality.groupby(['country']).size().reset_index(name='count_quality') # Take a look at its data data_quality.head() # Merge: data_countqual = data_popcount.merge(data_quality, left_on='country'.strip(), right_on='country'.strip()) # Take a look at its data data_countqual.head() # I want the proportion of highly rated articles to total articles. data_countqual['proportion'] = data_countqual['count_quality']/data_countqual['count'] # Take a look at its data data_countqual.head() # To see the ten highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country: data_countqual.sort_values(by='proportion', ascending=False).head(10) # To see the ten lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country: data_countqual.sort_values(by='proportion', ascending=True).head(10) # However, the above table ranks only countries for which there are high quality articles to count. All the countries that had zero (0) GA or FA articles have been omitted, as their proportion is 0. To rank countries that have no high quality articles, would be impossible. They are all tied at 0, which would make them all equally the "lowest." To see which countries completely lack high quality articles for comparison, first I create a dataframe that contains the counts of all level of quality articles by country: data_allquality = data_final.groupby(['country']).size().reset_index(name='count_allquality') # Take a look at its data data_allquality # There are 180 such countries. # # I can find the indexes within all countries that have quality articles with this merge: data_indexes = pandas.merge(data_allquality.reset_index(), data_quality) data_indexes # Then, by dropping these indexes (the indexes of countries that have high quality articles) I produce a new dataset of countries that lack any high quality articles, which I call data_lowquality: data_lowquality = data_allquality.drop(data_indexes['index']) data_lowquality # In essence, this is the list of countries that have zero (0) highly rated articles. It is not exactly meaningful to rank them; They are above, in alphabetic order: the 37 countries with the lowest possible proportion of highly rated articles. 
# # Finally, I save my data to a CSV for sharing and reproducability: data_final.to_csv('data_final.csv') # For further reflection upon the meaning of the data processing and analysis within this notebook, please see the Readme file. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Basic stats using `Scipy` # In this example we will go over how to draw samples from various built in probability distributions and define your own custom distributions. # # ## Packages being used # + `scipy`: has all the stats stuff # + `numpy`: has all the array stuff # # ## Relevant documentation # + `scipy.stats`: http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html, http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous, http://docs.scipy.org/doc/scipy/reference/stats.html#module-scipy.stats import numpy as np import scipy.stats as st # some special functions we will make use of later on from scipy.special import erfc from matplotlib import pyplot as plt from astropy.visualization import hist import mpl_style # %matplotlib notebook plt.style.use(mpl_style.style1) # There are many probability distributions that are already available in `scipy`: http://docs.scipy.org/doc/scipy/reference/stats.html#module-scipy.stats. These classes allow for the evaluations of PDFs, CDFs, PPFs, moments, random draws, and fitting. As an example lets take a look at the normal distribution. norm = st.norm(loc=0, scale=1) x = np.linspace(-5, 5, 1000) plt.figure(1, figsize=(8, 10)) plt.subplot2grid((2, 2), (0, 0)) plt.plot(x, norm.pdf(x)) plt.xlabel('x') plt.ylabel('PDF(x)') plt.xlim(-5, 5) plt.subplot2grid((2, 2), (0, 1)) plt.plot(x, norm.cdf(x)) plt.xlabel('x') plt.ylabel('CDF(x)') plt.xlim(-5, 5) plt.subplot2grid((2, 2), (1, 0)) sample_norm = norm.rvs(size=100000) hist(sample_norm, bins='knuth', histtype='step', lw=1.5, density=True) plt.xlabel('x') plt.ylabel('Random Sample') plt.tight_layout() # You can calculate moments and fit data: # + for i in range(4): print('moment {0}: {1}'.format(i+1, norm.moment(i+1))) print('best fit: {0}'.format(st.norm.fit(sample_norm))) # - # # Custom probability distributions # Sometimes you need to use obscure PDFs that are not already in `scipy` or `astropy`. When this is the case you can make your own subclass of `st.rv_continuous` and overwrite the `_pdf` or `_cdf` methods. This new sub class will act exactly like the built in distributions. # # The methods you can override in the subclass are: # # + \_rvs: create a random sample drawn from the distribution # + \_pdf: calculate the PDF at any point # + \_cdf: calculate the CDF at any point # + \_sf: survival function, a.k.a. 1-CDF(x) # + \_ppf: percent point function, a.k.a. inverse CDF # + \_isf: inverse survival function # + \_stats: function that calculates the first 4 moments # + \_munp: function that calculates the nth moment # + \_entropy: differential entropy # + \_argcheck: function to check the input arguments are valid (e.g. var>0) # # You should override any method you have analytic functions for, otherwise (typically slow) numerical integration, differentiation, and function inversion are used to transform the ones that are specified. 
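# As a minimal sketch (not from the original notebook): a toy subclass that overrides only `_pdf`
# for a distribution with pdf(x) = 2x on the support [0, 1]. Everything that is not overridden
# (CDF, random draws, moments) falls back on the slower numerical machinery described above.
# The `st` alias is the `scipy.stats` import made at the top of this notebook.

# +
class lin_gen(st.rv_continuous):
    def _pdf(self, x):
        # pdf(x) = 2x, valid on the support [a, b] = [0, 1] declared below
        return 2.0 * x

lin_dist = lin_gen(a=0.0, b=1.0, name='lin')
print(lin_dist.mean())       # evaluated numerically; the analytic value is 2/3
print(lin_dist.rvs(size=3))  # random draws via the numerically inverted CDF
# -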
# # ## The exponentially modified Gaussian distribution # As and example lets create a class for the EMG distribution (https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution). This is the distributions resulting from the sum of a Gaussian random variable and an exponential random variable. The PDF and CDF are: # # \begin{align} # f(x;\mu,\sigma, \lambda) & = \frac{\lambda}{2} \exp{\left( \frac{\lambda}{2} \left[ 2\mu+\lambda\sigma^{2}-2x \right] \right)} \operatorname{erfc}{\left( \frac{\mu + \lambda\sigma^{2}-x}{\sigma\sqrt{2}} \right)} \\ # F(x; \mu, \sigma, \lambda) & = \Phi(u, 0, v) - \Phi(u, v^2, v) \exp{\left( -u + \frac{v^2}{2} \right)} \\ # \Phi(x, a, b) & = \frac{1}{2} \left[ 1 + \operatorname{erf}{\left( \frac{x - a}{b\sqrt{2}} \right)} \right] \\ # u & = \lambda(x - \mu) \\ # v & = \lambda\sigma # \end{align} # + # create a generating class class EMG_gen1(st.rv_continuous): def _pdf(self, x, mu, sig, lam): u = 0.5 * lam * (2 * mu + lam * sig**2 - 2 * x) v = (mu + lam * sig**2 - x)/(sig * np.sqrt(2)) return 0.5 * lam * np.exp(u) * erfc(v) def _cdf(self, x, mu, sig, lam): u = lam * (x - mu) v = lam * sig phi1 = st.norm.cdf(u, loc=0, scale=v) phi2 = st.norm.cdf(u, loc=v**2, scale=v) return phi1 - phi2 * np.exp(-u + 0.5 * v**2) def _stats(self, mu, sig, lam): # reutrn the mean, variance, skewness, and kurtosis mean = mu + 1 / lam var = sig**2 + 1 / lam**2 sl = sig * lam u = 1 + 1 / sl**2 skew = (2 / sl**3) * u**(-3 / 2) v = 3 * (1 + 2 / sl**2 + 3 / sl**4) / u**2 kurt = v - 3 return mean, var, skew, kurt def _argcheck(self, mu, sig, lam): return np.isfinite(mu) and (sig > 0) and (lam > 0) class EMG_gen2(EMG_gen1): def _ppf(self, q, mu, sig, lam): # use linear interpolation to solve this faster (not exact, but much faster than the built in method) # pick range large enough to fit the full cdf var = sig**2 + 1 / lam**2 x = np.arange(mu - 50 * np.sqrt(var), mu + 50 * np.sqrt(var), 0.01) y = self.cdf(x, mu, sig, lam) return np.interp(q, y, x) class EMG_gen3(EMG_gen1): def _rvs(self, mu, sig, lam): # redefine the random sampler to sample based on a normal and exp dist return st.norm.rvs(loc=mu, scale=sig, size=self._size) + st.expon.rvs(loc=0, scale=1/lam, size=self._size) # use generator to make the new class EMG1 = EMG_gen1(name='EMG1') EMG2 = EMG_gen2(name='EMG2') EMG3 = EMG_gen3(name='EMG3') # - # Lets look at how long it takes to create readom samples for each of these version of the EMG: # %time EMG1.rvs(0, 1, 0.5, size=1000) print('=========') # %time EMG2.rvs(0, 1, 0.5, size=1000) print('=========') # %time EMG3.rvs(0, 1, 0.5, size=1000) print('=========') # As you can see, the numerical inversion of the CDF is very slow, the approximation to the inversion is much faster, and defining `_rvs` in terms of the `normal` and `exp` distributions is the fastest. # # Lets take a look at the results for `EMG3`: dist = EMG3(0, 1, 0.5) x = np.linspace(-5, 20, 1000) plt.figure(2, figsize=(8, 10)) plt.subplot2grid((2, 2), (0, 0)) plt.plot(x, dist.pdf(x)) plt.xlabel('x') plt.ylabel('PDF(x)') plt.subplot2grid((2, 2), (0, 1)) plt.plot(x, dist.cdf(x)) plt.xlabel('x') plt.ylabel('CDF(x)') plt.subplot2grid((2, 2), (1, 0)) sample_emg = dist.rvs(size=10000) hist(sample_emg, bins='knuth', histtype='step', lw=1.5, density=True) plt.xlabel('x') plt.ylabel('Random Sample') plt.tight_layout() # As with the built in functions we can calculate moments and do fits to data. **Note** Since we are not using the built in `loc` and `scale` params they are fixed to 0 and 1 in the fit below. 
# + for i in range(4): print('moment {0}: {1}'.format(i+1, dist.moment(i+1))) print('best fit: {0}'.format(EMG3.fit(sample_emg, floc=0, fscale=1))) # - # For reference here is how `scipy` defines this distriubtion (found under the name `exponnorm`): import scipy.stats._continuous_distns as cd np.source(cd.exponnorm_gen) # %time st.exponnorm.rvs(0.5, size=1000) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #**GC Content** letters = ["B","D","E","F","H","I","J","K","L","M","N","O","P","Q","R","S","U","V","W","X","Y","Z"] seq = input("Hello, what is your sequence? ") sequ = seq.upper() if any(elem in letters for elem in sequ): print("I'm sorry, there are non-base letters in your sequence") else: import re seqf = re.sub(" ", "", sequ) cnt = input("Okay, what would you like to count? ") cntu = cnt.upper() if "A" in cntu: a = seqf.count("A") else: a = 0 if "T" in cntu: t = seqf.count("T") else: t = 0 if "C" in cntu: c = seqf.count("C") else: c = 0 if "G" in cntu: g = seqf.count("G") else: g = 0 total = a + t + c + g percent = total/len(seqf) print(cntu + " Count: " + str(total)) print(cntu + " Content: " + str(round(percent,3))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # NeuralEE on CORTEX Dataset # `CORTEX` dataset contains 3005 mouse cortex cells and gold-standard labels for seven distinct cell types. Each cell type corresponds to a cluster to recover. # + import random import numpy as np import torch from neuralee.embedding import NeuralEE from neuralee.dataset import CortexDataset from neuralee._aux import scatter # %matplotlib inline # - # Choose a GPU if a GPU available. It could be defined as follow: # ``` # device = torch.device('cuda:0') # device = torch.device('cuda:1') # device = torch.device('cpu') # ``` device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # To reproduce the following results, fix the random seed. torch.manual_seed(1234) random.seed(1234) np.random.seed(1234) # First, we apply log(1 + x) transformation to each element of the cell-gene expression matrix. # Then, We retain top 558 genes ordered by variance as the original paper. # Finally, we normalize the expression of each gene by subtracting its mean and dividing its standard deviation. cortex_dataset = CortexDataset(save_path='../') cortex_dataset.log_shift() cortex_dataset.subsample_genes(558) cortex_dataset.standardscale() # We apply NeuralEE with different hyper-paramters. # `N_small` takes from {1.0, 0.5, 0.25}, while `N_smalls`= 1.0 means not applied with stochastic optimization. # `lam` takes from {1, 10}. # `perplexity` fixs as 30. 
# + N_smalls = [1.0, 0.5, 0.25] N_str = ["nobatch", "2batches", "4batches"] lams = [1, 10] cortex_dataset.affinity(perplexity=30.0) for i in range(len(N_smalls)): cortex_dataset.affinity_split(N_small=N_smalls[i], perplexity=30.0) for lam in lams: NEE = NeuralEE(cortex_dataset, lam=lam, device=device) results_Neural = NEE.fine_tune() np.save('embedding/CORTEX_' + 'lam' + str(lam) + '_' + N_str[i], results_Neural['X'].numpy()) scatter(results_Neural['X'].numpy(), NEE.labels, cortex_dataset.cell_types) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: lith_pred # language: python # name: lith_pred # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from src.definitions import ROOT_DIR from src.visualization.visualize import plot_ecdf # - # %load_ext autoreload # %autoreload 2 plt.style.use('seaborn-poster') # # Load train data # + train_path = ROOT_DIR / 'data/external' / 'CSV_train.csv' assert train_path.is_file() # - data = pd.read_csv(train_path, sep=';') data.sample(10) data.shape # ## Raw features data.columns # # GR distribution per well # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) sns.kdeplot(data=data, x='GR', hue='WELL', common_norm=True, fill=False, legend=False, ax=ax) plt.title('All GR distribution') plt.show() # - plot_ecdf(data['GR'], 'GR', quantiles=[0.05, 0.95]) sns.displot(data=data, x='GR', kind='ecdf') data['GROUP'].value_counts(normalize=True, dropna=False) # + cond = data['GROUP'] == 'HORDALAND GP.' df = data.loc[cond, :] # - df df.groupby('WELL')['GR'].count()['31/2-19 S'] low_gr, high_gr = 24, 127 # + fig, ax = plt.subplots(1, 1, figsize=(15, 50)) sns.violinplot(data=df, x='GR', y='WELL', common_norm=False, scale="width", fill=False, legend=False, linewidth=0.5, ax=ax) plt.axvline(x=low_gr) plt.axvline(x=high_gr) plt.title('All GR distributions') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Reciprocal Lattices - FCC, BCC # # September 7, 2020 University of Toronto For technical issues: # # *Please view the 2. Reciprocal Space notebook before visiting this one. # # We can now apply our better understanding of the relationship between real space and reciprocal space to familiar crystal structure. In this module, we will construct the reciprocal lattice for the FCC and the BCC crystal. # # First, we will re-construct the real crystal structure object exactly as we did in the Crystal Structure module. The code is initially set up for the FCC lattice exactly as in the Crystal Structure module. As an exercise you can modify it for the BCC crystal structure. # + init_cell=true #Import statements from pymatgen import Structure, Lattice import numpy as np import nglview as ngl import MSE430Funcs.CrysStrucFuncs as csfunc import MSE430Funcs.RLfuncs as rl #Set the lattice parameter, a #Default unit is Angstrom #Ex. 
For Ni, a = 3.49A # ***==================*** a1=3.49 # ***==================*** #Construct the lattice by passing the lattice vectors the lattice function #This function will return the coefficients of the lattice vectors suitable to #build a structure object # *** =========================================================== **** prim_vecs1 = [[1/2*a1, 1/2*a1, 0], [0, 1/2*a1, 1/2*a1], [1/2*a1, 0, 1/2*a1]] # *** =========================================================== *** lattice1=Lattice(prim_vecs1) #Set the atomic species: # *** ========================== *** specie1 = 'Ni' #String of the atomic symbol # *** ========================== *** #Define the basis using fractional coordinates and the species of each atom: basis_coords1 = [[0,0,0]] basis_species1 = [specie1] #Note the order of the specie corresponds to the basis coordinates #Construct a structure object out of the basis and lattice unit_cell = Structure(lattice1, basis_species1, basis_coords1, to_unit_cell=True, coords_are_cartesian=False) #This function adds sites to the unit cell object, so it can be visualized as a conventional unit cell unit_cell_conv = csfunc.cubicCell(unit_cell, a1) view2 = csfunc.visUC(unit_cell_conv, a1) view2 # + [markdown] variables={"prim_vecs1[0][0]": "

    \n"} # The real lattice vectors are (```shift + enter``` to update): # #
    $\vec{a_1}=${{prim_vecs1[0][0]}}$\hat{x} + ${{prim_vecs1[0][1]}}$\hat{y} + ${{prim_vecs1[0][2]}}$\hat{z}$ # #
    #
    $\vec{a_2}=${{prim_vecs1[1][0]}}$\hat{x} + ${{prim_vecs1[1][1]}}$\hat{y} + ${{prim_vecs1[1][2]}}$\hat{z}$ # # #
    $\vec{a_3}=${{prim_vecs1[2][0]}}$\hat{x} + ${{prim_vecs1[2][1]}}$\hat{y} + ${{prim_vecs1[2][2]}}$\hat{z}$
    #
    #
# - # Now that the crystal structure object has been constructed, we can directly request the reciprocal lattice using a built-in function, which uses the following equations to obtain the primitive reciprocal lattice vectors: # #
    $\vec{b_1}=2\pi \frac{\vec{a_2}\times\vec{a_3}}{\vec{a_1}\cdot(\vec{a_2}\times\vec{a_3})}$ # #
    #
    $\vec{b_2}=2\pi \frac{\vec{a_3}\times\vec{a_1}}{\vec{a_2}\cdot(\vec{a_3}\times\vec{a_1})}$ # # #
    $\vec{b_3}=2\pi \frac{\vec{a_1}\times\vec{a_2}}{\vec{a_3}\cdot(\vec{a_1}\times\vec{a_2})}$
    #
    #
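# As a quick cross-check (a sketch, not part of the original module), the same primitive reciprocal
# lattice vectors can be computed directly from the formulas above with NumPy, using the real-space
# primitive vectors `prim_vecs1` defined earlier; the result should match the pymatgen
# `reciprocal_lattice` obtained in the next cell.

# +
a1_vec, a2_vec, a3_vec = [np.array(v) for v in prim_vecs1]

cell_volume = np.dot(a1_vec, np.cross(a2_vec, a3_vec))      # a1 . (a2 x a3)
b1_vec = 2 * np.pi * np.cross(a2_vec, a3_vec) / cell_volume
b2_vec = 2 * np.pi * np.cross(a3_vec, a1_vec) / cell_volume
b3_vec = 2 * np.pi * np.cross(a1_vec, a2_vec) / cell_volume

print(np.round([b1_vec, b2_vec, b3_vec], 2))
# -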
    # + hide_input=false init_cell=true #Obtain the reciprocal lattice from the crystal structure object lat =unit_cell.lattice recip_lat=lat.reciprocal_lattice #Convert to nd array for display purposes rec_lat = np.round(recip_lat.matrix, 2) #Show the reciprocal lattice as points view3 = rl.recipLattice(recip_lat, a1) view3 # + [markdown] variables={"rec_lat[0][0]": "

    \n"} # The reciprocal lattice vectors are (```shift + enter``` to update): # #
    $\vec{b_1}=${{rec_lat[0][0]}}$\hat{x}' + ${{rec_lat[0][1]}}$\hat{y}' + ${{rec_lat[0][2]}}$\hat{z}'$ # #
    #
    $\vec{b_2}=${{rec_lat[1][0]}}$\hat{x}' + ${{rec_lat[1][1]}}$\hat{y}' + ${{rec_lat[1][2]}}$\hat{z}'$ # # #
    $\vec{b_3}=${{rec_lat[2][0]}}$\hat{x}' + ${{rec_lat[2][1]}}$\hat{y}' + ${{rec_lat[2][2]}}$\hat{z}'$
    #
    #
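# For the BCC exercise suggested just below, one standard choice of primitive translation vectors
# (a sketch, reusing the lattice parameter `a1` defined above) is:

# +
prim_vecs_bcc = [[-1/2*a1,  1/2*a1,  1/2*a1],
                 [ 1/2*a1, -1/2*a1,  1/2*a1],
                 [ 1/2*a1,  1/2*a1, -1/2*a1]]
# Substitute these for prim_vecs1 in the construction cell above and re-run;
# the reciprocal lattice of a BCC crystal is an FCC lattice.
# -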
    # # Try inputting the BCC primitive translation vectors to see the BCC reciprocal lattice. # - # ## Brillouin Zone # # The first Brillouin zone can be found by determining the Wigner Seitz cell of the reciprocal lattice. By definition, the Wigner Seitz cell contains a single lattice point at the center. It can be formed by drawing a perpendicular plane at the midsection between the central lattice point and all of its nearest neighbours. # + hide_input=true init_cell=true # %matplotlib notebook bz = lat.get_brillouin_zone() #Display Brillouin Zone in reciprocal space: rl.brilZone(bz, recip_lat) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **Business Understanding** # # Here we are going to answer the following three business questions specific to developers in USA # # - Which Programming Languages are the most Popular in the USA? # - Which programming language is the most popular among young developers? # - What programming language do developers who earn top 5 average Salaries use? # # Answering the above business questions helps those new the field to make a better choice on which programming language to learn based on popularity and how much they can earn if they learn a particular programming language. # ### Data Understanding # **Step 1: A look at all the data** # Data we are using for this analysis is Stack Overflow Annual Developer Survey Data # # More than 100K responses fielded from 183 countries and dependent territoris. Many thanks to [Stack Overflow](https://insights.stackoverflow.com/survey) for making this data publicly available. However, I have selected developers in USA to answer the business questions stipulated above. # # before diving into answering the above questions, let's have a look at the data. #import packages import pandas as pd from collections import defaultdict import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # read all the data survey_df = pd.read_csv('data/survey_results_public.csv', low_memory=False) survey_df.head() # read the schema associated to the data survey_schema = pd.read_csv('data/survey_results_schema.csv') survey_schema.head() # developers per country dev_per_country = survey_df[['Country', 'Respondent']].groupby('Country').count().sort_values(by='Respondent', ascending=False) dev_per_country.iloc[:10,:].plot(kind='bar', figsize=(10,5)) plt.title('Developers by Country') plt.xlabel('Country') plt.ylabel('Developer Count'); # The above data shows the highest number of developers who responded to the survey reside in USA. Our business questions also focus on those developers only. I have left the analysis comparing the result with other countries to you the reader. # **Step 2: A look at the US developers data** # How much percent of the total developers currently reside in the USA? us_survey = survey_df[survey_df.Country=='United States'] print('{0:.2f}% developers are in the USA'.format(100*us_survey.shape[0]/survey_df.shape[0])) # As shown int the cell below we are going to use the column **LanguageWorkedWith** and **Salary** to aswner our business questions. list(survey_schema[survey_schema['Column']=='LanguageWorkedWith'].QuestionText) list(survey_schema[survey_schema['Column']=='Salary'].QuestionText) print("{0:.2f}% responders didn't specify the Language they worked with". 
format(100*us_survey.LanguageWorkedWith.isnull().sum()/us_survey.shape[0])) print("{0:.2f}% responders haven't disclosed Salary". format(100*us_survey.Salary.isnull().sum()/us_survey.shape[0])) # We can see above that LanguageWorkedWith and Salary fields are null but those won't have any impact in this analysis as I have only considered those who have specified thos entries as shown in below analysis. # ### Answering Business quesions # **1. What are the top most programming languages developers worked with?** # # To answer this question we need LanguageWorkedWith column, find all the possible programming languages included in the survey and count each of them to find the total. # # **1.1-Data Preparation** lang_df = us_survey.LanguageWorkedWith.value_counts().reset_index() lang_df.rename(columns={'index':'Language', 'LanguageWorkedWith':'count'}, inplace=True) lang_df.head() all_values = us_survey.LanguageWorkedWith.values expanded_values = [] for x in all_values: expanded_values.extend(str(x).split(';')) lang_list = set(expanded_values) print(len(lang_list)) print(lang_list) #Now we want to see how often each of these individual values appears - I wrote # this function to assist with process - it isn't the best solution, but it gets # the job done and our dataset isn't large enough to computationally hurt us too much. # Thanks to https://github.com/jjrunner/ for the following function def total_count(df, col1, col2, look_for): ''' INPUT: df - the pandas dataframe you want to search col1 - the column name you want to look through col2 - the column you want to count values from look_for - a list of strings you want to search for in each row of df[col] OUTPUT: new_df - a dataframe of each look_for with the count of how often it shows up ''' new_df = defaultdict(int) for val in look_for: for idx in range(df.shape[0]): if val in df[col1][idx]: new_df[val] += int(df[col2][idx]) new_df = pd.DataFrame(pd.Series(new_df)).reset_index() new_df.columns = [col1, col2] new_df.sort_values('count', ascending=False, inplace=True) return new_df lang_df_sep_count = total_count(lang_df, 'Language', 'count', lang_list) lang_df_sep_count.head() lang_df_sep_count['perc'] = 100*lang_df_sep_count['count']/np.sum(lang_df_sep_count['count']) lang_df_sep_count.head() # **1.2. Answer to our first business question** fig, ax = plt.subplots(1,1) fig.set_figwidth(20) x = np.arange(1,39) # Set number of ticks for x-axis ax.set_xticks(x) ax.bar(x, lang_df_sep_count.perc) x_tick_labels = lang_df_sep_count.Language plt.title('Percentage of Programming Language Users for All Ages'); plt.xlabel('Programming Language'); plt.ylabel('Count'); ax.set_xticklabels(x_tick_labels, rotation='vertical', fontsize=10); # The above graph shows the top programming languages developers worked with are C, Java, JavaScript, HTML and CSS. Actual numbers show C and Java are pretty close. Not surprisingly, users of HTML and JavaScipt are very close since there is a high chance to use JavaScript for developering working with HTML and CSS. Now let's look at how usage varies by age. # **2. Which programming language is the most popular among young developers** # # Here we'll look at programming language that are poppular among 18-24 years old age range. 
# # **2.1 Data preparation** # # Here we are going to create a new entry for each programming language #df_lang = survey_df[['Respondent', 'Age', 'LanguageWorkedWith']] df_lang_exp = pd.concat([survey_df.Respondent, survey_df.Age, survey_df.LanguageWorkedWith.str.split(';')], axis=1).dropna() df_stacked = pd.DataFrame({'Respondent':np.repeat(df_lang_exp.Respondent.values, df_lang_exp.LanguageWorkedWith.str.len()), 'Age':np.repeat(df_lang_exp.Age.values, df_lang_exp.LanguageWorkedWith.str.len()), 'LanguageWorkedWith':np.concatenate(df_lang_exp.LanguageWorkedWith.values) }) df_stacked.head() # + from wordcloud import WordCloud lang = df_stacked["LanguageWorkedWith"].value_counts().reset_index() wrds = lang["index"].str.replace(" ","") wc = WordCloud(background_color='white', colormap=plt.cm.viridis, scale=5).generate(" ".join(wrds)) plt.figure(figsize=(10,10)) plt.imshow(wc, interpolation="bilinear") plt.axis("off") plt.title("Word Cloud of Programming Languages in USA\n", fontdict={'size':16, 'weight': 'bold'}); # - df_stacked['Age_lang_Count'] =1 groupby_age_language = df_stacked.groupby(['Age','LanguageWorkedWith']) groupby_age_language = groupby_age_language['Age_lang_Count'].aggregate(np.sum).unstack() groupby_scaled = groupby_age_language.div(groupby_age_language.sum(axis=1), axis=0) groupby_scaled.head() # **2.1 Answers to our second business question** groupby_scaled.iloc[0, :].plot(kind = 'bar', title = 'Programming Language Use for 18 - 24 years olds', figsize=(15,5)) plt.ylabel('Counts') plt.show(); plt.figure(figsize=(20,20)) groupby_scaled.iloc[-1, :].plot(kind = 'bar', title = 'Programming Language Use for Under 18 years olds', figsize=(15,5)) plt.ylabel('Counts') plt.show(); # Leading programming languages in this age group : HTML, JavaScript, CSS, SQL; followed by Java, Python and PHP.As we can see HTML, JavaScript and CSS go hand in hand as anyone working as web developer should have skills related to those languages. groupby_scaled.iloc[:, [4, 17,27]].plot(kind = 'bar', title = 'Programming Language - C++, Python and Java by Age', figsize=(15,10)) plt.ylabel('Counts') # Looking at the three popular programming languages in the above plot, we can see that Python is leading in the under 18 years old age range. We can see a trend in Python users growing steadily. 
# **3. What programming language do developers who earn top 5 average Salaries use?**
#
# **3.1 Data preparation**

df_lang_sal = pd.concat([survey_df.Respondent, survey_df.Salary, survey_df.LanguageWorkedWith.str.split(';')], axis=1).dropna()

df_stacked_sal = pd.DataFrame({'Respondent':np.repeat(df_lang_sal.Respondent.values, df_lang_sal.LanguageWorkedWith.str.len()),
                               'Salary':np.repeat(df_lang_sal.Salary.values, df_lang_sal.LanguageWorkedWith.str.len()),
                               'LanguageWorkedWith':np.concatenate(df_lang_sal.LanguageWorkedWith.values)
                               })

df_stacked_sal.head()

df_stacked_sal['Salary'] = df_stacked_sal['Salary'].replace({'\,':''}, regex=True).astype(float)

# removing those with unusually high salaries to make this analysis more realistic
MAX_LIM = df_stacked_sal.loc[:,'Salary'].quantile(0.95)
df_stacked_sal['Salary'] = df_stacked_sal.loc[df_stacked_sal['Salary'] < MAX_LIM, 'Salary']

# remove those with zero Salary
df_stacked_sal = df_stacked_sal[df_stacked_sal.Salary > 0]

groupby_lang_sal = df_stacked_sal.groupby(['LanguageWorkedWith'])['Salary'].aggregate(np.mean)
groupby_lang_sal.head()

# **3.2 Answer to our third business question**

groupby_lang_sal.plot(kind = 'bar', title = 'Programming Language vs Average Salary', figsize=(15,5))
plt.ylabel('Average Salary')
plt.show();

# ### Results Evaluation

# **1. Which Programming Languages are the most Popular in the USA?**
#
# The graph above shows that the top programming languages developers worked with are C, Java, JavaScript, HTML and CSS. The actual numbers show that C and Java are pretty close. Not surprisingly, the HTML and JavaScript counts are also very close, since developers working with HTML and CSS are very likely to use JavaScript as well.
#
# **2. Which programming language is the most popular among young developers?**
#
# The leading programming languages in this age group are HTML, JavaScript, CSS and SQL, followed by Java, Python and PHP. As we can see, HTML, JavaScript and CSS go hand in hand, since anyone working as a web developer should have skills in all three of them.
#
# **3. What programming language do developers who earn top 5 average Salaries use?**
#
# We found that Clojure, Go, Hack, Objective-C, OCaml and Scala top the list in terms of average Salary.
#

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Mobile Network

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# %matplotlib inline

dataset = pd.read_csv('dandi.csv')
X = dataset.iloc[:, 0:5].values
y = dataset.iloc[:, 5].values

dataset.head()

# ## EDA

# Checking for missing values
sns.heatmap(dataset.isnull(), yticklabels = False, cbar = False,cmap = 'viridis')

df = dataset
df.info()
df.describe()

sns.countplot(x = "STATE", hue = "RSRQ", data = df)
sns.pairplot(df)

# ## Linear Discriminant Analysis
# - feature scaling the data
# - splitting the dataset into train and test data

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1, random_state = 0)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components = 2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)

# ## Logistic Regression
# *Classification*

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

""" Confusion matrix """
print(confusion_matrix(y_test, y_pred))

""" Classification Report """
print(classification_report(y_test, y_pred))

y_pred = classifier.predict(X_test)
for i in range(len(y_pred)):
    if y_pred[i] == 0:
        print("Link Failure at ", X_test[i])

# # SVM
# - Support Vector Machine

from sklearn.svm import SVC
model = SVC()
model.fit(X_train, y_train)
prediction = model.predict(X_test)

print(confusion_matrix(y_test, prediction))
print(classification_report(y_test, prediction))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Training the model
#
# In this tutorial we will use TensorFlow to train a model.

# +
import sys
import platform
import os

print("Python version: {}".format(sys.version))
print("{}".format(platform.platform()))
# -

# # Biomedical Image Segmentation with U-Net
#
# In this code example, we apply the U-Net architecture to segment brain tumors from raw MRI scans as shown below. With relatively little data we are able to train a U-Net model to accurately predict where tumors exist.
#
# The Dice coefficient (the standard metric for the BraTS dataset used in the study) for our model is about 0.82-0.88. Menze et al. [reported](http://ieeexplore.ieee.org/document/6975210/) that expert neuroradiologists manually segmented these tumors with a cross-rater Dice score of 0.75-0.85, meaning that the model’s predictions are on par with the manual segmentations made by expert physicians.
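# For reference, here is a minimal NumPy sketch of the Dice coefficient for a pair of binary
# segmentation masks (included only for illustration; it is not the metric implementation that
# the training code below actually uses): Dice = 2|A ∩ B| / (|A| + |B|).

# +
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice score between two binary masks of the same shape."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3 + 3), roughly 0.67
# -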
# # Since its introduction two years ago, the [U-Net](https://arxiv.org/pdf/1505.04597.pdf0) architecture has been used to create deep learning models for segmenting [nerves](https://github.com/jocicmarko/ultrasound-nerve-segmentation) in ultrasound images, [lungs](https://www.kaggle.com/c/data-science-bowl-2017#tutorial) in CT scans, and even [interference](https://github.com/jakeret/tf_unet) in radio telescopes. # # ## What is U-Net? # U-Net is designed like an [auto-encoder](https://en.wikipedia.org/wiki/Autoencoder). It has an encoding path (“contracting”) paired with a decoding path (“expanding”) which gives it the “U” shape. However, in contrast to the autoencoder, U-Net predicts a pixelwise segmentation map of the input image rather than classifying the input image as a whole. For each pixel in the original image, it asks the question: “To which class does this pixel belong?” This flexibility allows U-Net to predict different parts of the tumor simultaneously. # # This module loads the data generator from `dataloader.py`, creates a TensorFlow/Keras model from `model.py`, trains the model on the data, and then saves the best model. # ### TensorFlow Version Check # # Check to see what version of TensorFlow is installed and if it has [Intel DNNL optimizations](https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html) # + def test_intel_tensorflow(): """ Check if Intel version of TensorFlow is installed """ import tensorflow as tf print("We are using Tensorflow version {}".format(tf.__version__)) major_version = int(tf.__version__.split(".")[0]) if major_version >= 2: from tensorflow.python import _pywrap_util_port print("Intel-optimizations (DNNL) enabled:", _pywrap_util_port.IsMklEnabled()) else: print("Intel-optimizations (DNNL) enabled:", tf.pywrap_tensorflow.IsMklEnabled()) test_intel_tensorflow() # - # ## Training Time! # The bulk of the training section can be broken down in 4 simple steps: # 1. Load the training data # 1. Define the model # 3. Train the model on the data # 4. Evaluate the best model # # #### Step 1 : Loading the BraTS data set from the tf.data loader # + data_path = "../data/decathlon/Task01_BrainTumour/2D_model/" crop_dim=128 # -1 = Original resolution (240) batch_size = 128 seed=816 # - # ## Challenge A: Fill in the missing parameters # # Fill in the parameters for the dataset generator. This includes the location of the NumPy files, the cropping dimension, the batch size, and whether to use data augmentation (flips, rotations). # + from dataloader import DatasetGenerator # TODO: Fill in the missing parameters (...) for the data generator # HINT: We need to pass in the NumPy files to load # We saved those files under "train/*.npz", "validation/*.npz", "testing/*.npz" when we ran 00_Prepare-Data.ipynb ds_train = DatasetGenerator(...) ds_validation = DatasetGenerator(...) ds_testing = DatasetGenerator(...) # - # ## Plot some samples of the dataset # # We can use the DatasetGenerator's plot_samples function to plot a few samples of the dataset. Note that with `augment` set to True, we have randomly cropped, flipped, and rotated the images. ds_train.plot_samples(num_samples=4) ds_validation.plot_samples(num_samples=8) # #### Step 2: Define the model # ## Challenge B: Fill in the missing parameters # # Fill in the parameters for the model definition. 
This includes the number of feature maps for the 1st layer, the learning rate, whether to use dropout, and whether to use upsampling or transposed convolution. # + from model import unet print("-" * 30) print("Creating and compiling model ...") print("-" * 30) # TODO: Fill in the ... unet_model = unet(...) model = unet_model.create_model( ds_train.get_input_shape(), ds_train.get_output_shape()) model_filename, model_callbacks = unet_model.get_callbacks() # # If there is a current saved file, then load weights and start from there. # saved_model = os.path.join(args.output_path, args.inference_filename) # if os.path.isfile(saved_model): # model.load_weights(saved_model) # - # The code snippet below draws the model using Keras' built-in `plot_model`. Compare with the implementation of `model.py` # + from tensorflow.keras.utils import plot_model from IPython.display import Image plot_model(model, to_file='images/model.png', show_shapes=True, show_layer_names=True, rankdir='TB' ) Image('images/model.png') # - # #### Step 3: Train the model on the data # ## Challenge C: Fill in the missing parameters # # Fill in the parameters for the model training. This includes the number epochs and the training/validation datasets. # + import datetime start_time = datetime.datetime.now() print("Training started at {}".format(start_time)) # TODO: Fill in the ... history = model.fit(...) print("Total time elapsed for training = {} seconds".format(datetime.datetime.now() - start_time)) print("Training finished at {}".format(datetime.datetime.now())) # Append training log # with open("training.log","a+") as fp: # fp.write("{}: {}\n".format(datetime.datetime.now(), # history.history["val_dice_coef"])) # - # #### Step 4: Evaluate the best model print("-" * 30) print("Loading the best trained model ...") print("-" * 30) unet_model.evaluate_model(model_filename, ds_testing.get_dataset()) # ## End: In this tutorial, you have learnt: # * What is the U-Net model # * Comparing training times - Tensorflow DNNL vs Tensorflow (stock) # * How to tweak a series of environment variables to get better performance out of DNNL # * How to tweak a series of Tensorflow-related and neural-network specific parameters for better performance # *Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. SPDX-License-Identifier: EPL-2.0* # *Copyright (c) 2019-2020 Intel Corporation* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # XOR Prediction Neural Network # #### A simple neural network which will learn the XOR logic gate. # # I will provide you with any links necessary so that you can read about the different aspects of this NN(Neural Network). # ## Neural Network Info # # #### All information regarding the neural network: # # - Input Layer Units = 2 (Can be modified) # - Hidden Layer Units = 2 (Can be modified) # - Output Layer Units = 1 (Since this is problem specific, it can't be modified) # # - No. 
of hidden layers = 1 # - Learning Algorithm = Backpropagation # # ![arsitektur_NN](Studi_kasus1.jpg) # # Feel free to mess around with it and try out different things. # + import numpy as np # For matrix math import matplotlib.pyplot as plt # For plotting import sys # For printing # - # ### Neural Network Implementation # Initially, I was going to approach this in an Object Oriented manner but I think that it would be much easier to read and implement, functionally. So, let's get started. # ### Training Data # # The XOR logic gate returns true when the number of inputs given is odd and false when they're even. Here is the simple training dataset. # + # The training data. X = np.array([ [0, 1], [1, 0], [1, 1], [0, 0] ]) # The labels for the training data. y = np.array([ [1], [1], [0], [0] ]) outNN = np.zeros(4) # - X y # ### Additional Parameters # These are just additional parameters which are required by the weights for their dimensions. num_i_units = 2 # Number of Input units num_h_units = 2 # Number of Hidden units num_o_units = 1 # Number of Output units # ### Neural Network Parameters # These are the parameters required directly by the NN. Comments should describe the variables. # + # The learning rate for Gradient Descent. learning_rate = 0.01 # The parameter to help with overfitting. reg_param = 0 # Maximum iterations for Gradient Descent. max_iter = 50 # Number of training examples m = 4 # - # ### Weights and Biases # These are the numbers the NN needs to learn to make accurate predictions. # # For the connections being made from the input layer to the hidden layer, the weights and biases are arranged in the following order: **each row contains the weights for each hidden unit**. Then, the shape of these set of weights is: *(number of hidden units X number of input units)* and the shape of the biases for this connection will be: *(number of hidden units X 1)*. # # So, the overall shape of the weights and biases are: # # **Weights1(Connection from input to hidden layers)**: num_h_units X num_i_units # **Biases1(Connection from input to hidden layers)**: num_h_units X 1 # # **Weights2(Connection from hidden to output layers)**: num_o_units X num_h_units # **Biases2(Connection from hidden to output layers)**: num_o_units X 1 # # ### Generating the Weights # # The weights here are going to be generated using a [Normal Distribution(Gaussian Distribution)](http://mathworld.wolfram.com/NormalDistribution.html). They will also be seeded so that the outcome always comes out the same. # + np.random.seed(1) W1 = np.random.normal(0, 1, (num_h_units, num_i_units)) # 2x2 W2 = np.random.normal(0, 1, (num_o_units, num_h_units)) # 1x2 B1 = np.random.random((num_h_units, 1)) # 2x1 B2 = np.random.random((num_o_units, 1)) # 1x1 # - W1 W2 B1 B2 # ### Sigmoid Function # [This](http://mathworld.wolfram.com/SigmoidFunction.html) function maps any input to a value between 0 and 1. # # ![sigmoid-curve.png](attachment:sigmoid-curve.png) # # In my implementation, I have added a boolean which if set to true, will return [Sigmoid Prime(the derivative of the sigmoid function)](http://www.ai.mit.edu/courses/6.892/lecture8-html/sld015.htm) of the input value. This will be used in backpropagation later on. def sigmoid(z, derv=False): if derv: return z * (1 - z) return 1 / (1 + np.exp(-z)) # ### Forward Propagation # [This](https://en.wikipedia.org/wiki/Feedforward_neural_network) is how predictions are made. Propagating the input through the NN to get the output. 
# # In my implementation, the forward function only accepts a feature vector as row vector which is then converted to a column vector. Also, the predict boolean, if set to true, only returns the output. Otherwise, it returns a tuple of the outputs of all the layers. def forward(x, predict=False): a1 = x.reshape(x.shape[0], 1) # Getting the training example as a column vector. z2= W1.dot(a1) + B1 # 2x2 * 2x1 + 2x1 = 2x1 a2 = sigmoid(z2) # 2x1 z3 = W2.dot(a2) + B2 # 1x2 * 2x1 + 1x1 = 1x1 a3 = sigmoid(z3) if predict: return a3 return (a1, a2, a3) # ### Gradients for the Weights and Biases # These variables will contain the gradients for the weights and biases which will be used by gradient descent to update the weights and biases. # # Also, creating the vector which will be storing the cost values for each gradient descent iteration to help visualize the cost as the weights and biases are updated. # + dW1 = 0 # Gradient for W1 dW2 = 0 # Gradient for W2 dB1 = 0 # Gradient for B1 dB2 = 0 # Gradient for B2 cost = np.zeros((max_iter, 1)) # Column vector to record the cost of the NN after each Gradient Descent iteration. # - # ## Training # This is the training function which contains the meat of NN. This contains forward propagation and [Backpropagation](http://neuralnetworksanddeeplearning.com/chap2.html). # # ### Backpropagation # The process of propagating the error in the output layer, backwards through the NN to calculate the error in each layer. Intuition: It's like forward propagation, but backwards. # # Steps(for this NN): # 1. Calculate the error in the output layer(dz2). # 2. Calculate the error in the weights connecting the hidden layer to the output layer using dz2 (dW2). # 3. Calculate the error in the hidden layer(dz1). # 4. Calculate the error in the weights connecting the input layer to the hidden layer using dz1 (dW1). # 5. The errors in the biases are just the errors in the respective layers. # # Afterwards, the gradients(errors) of the weights and biases are used to update the corresponding weights and biases by multiplying them with the negative of the learning rate and scaling it by divinding it by the number of training examples. # # While iterating over all the training examples, the cost is also being calculated simultaneously for each example. Then, a regurlization parameter is added, although for such a small dataset, regularization is unnecessary since to perform well, the NN will have to over fit to the training data. def train(_W1, _W2, _B1, _B2): # The arguments are to bypass UnboundLocalError error for i in range(max_iter): c = 0 dW1 = 0 dW2 = 0 dB1 = 0 dB2 = 0 for j in range(m): sys.stdout.write("\rIteration: {} and {}".format(i + 1, j + 1)) # Forward Prop. a0 = X[j].reshape(X[j].shape[0], 1) # 2x1 z1 = _W1.dot(a0) + _B1 # 2x2 * 2x1 + 2x1 = 2x1 a1 = sigmoid(z1) # 2x1 z2 = _W2.dot(a1) + _B2 # 1x2 * 2x1 + 1x1 = 1x1 a2 = sigmoid(z2) # 1x1 # Back prop. dz2 = a2 - y[j] # 1x1 dW2 += dz2 * a1.T # 1x1 .* 1x2 = 1x2 dz1 = np.multiply((_W2.T * dz2), sigmoid(a1, derv=True)) # (2x1 * 1x1) .* 2x1 = 2x1 dW1 += dz1.dot(a0.T) # 2x1 * 1x2 = 2x2 dB1 += dz1 # 2x1 dB2 += dz2 # 1x1 c = c + (-(y[j] * np.log(a2)) - ((1 - y[j]) * np.log(1 - a2))) sys.stdout.flush() # Updating the text. 
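# Gradient-descent update: the accumulated gradients are averaged over the m training examples
# and each weight/bias is stepped against its gradient, scaled by the learning rate; the
# reg_param terms add the regularization contribution (reg_param is 0 here, so they have no effect).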
_W1 = _W1 - learning_rate * (dW1 / m) + ( (reg_param / m) * _W1) _W2 = _W2 - learning_rate * (dW2 / m) + ( (reg_param / m) * _W2) _B1 = _B1 - learning_rate * (dB1 / m) _B2 = _B2 - learning_rate * (dB2 / m) cost[i] = (c / m) + ( (reg_param / (2 * m)) * ( np.sum(np.power(_W1, 2)) + np.sum(np.power(_W2, 2)) ) ) return (_W1, _W2, _B1, _B2) # ## Running # Now, let's try out the NN. Here, I have called the train() function. You can make any changes you like and then run all the kernels again. I have also plotted the cost function to visual how the NN performed. # # The console printing might be off. # # The weights and biases are then shown. a0 = X[j].reshape(X[j].shape[0], 1) # 2x1 a0 W1, W2, B1, B2 = train(W1, W2, B1, B2) W1 W2 B1 B2 # ### Plotting # Now, let's plot a simple plot showing the cost function with respect to the number of iterations of gradient descent. # + # Assigning the axes to the different elements. plt.plot(range(max_iter), cost) # Labelling the x axis as the iterations axis. plt.xlabel("Iterations") # Labelling the y axis as the cost axis. plt.ylabel("Cost") # Showing the plot. plt.show() # - # # Observation # With the initial parameters, the cost function doesn't look that good. It is decreasing which is a good sign but it isn't flattening out. I have tried, multiple different values but this some seems like the best fit. # # Try out your own values, run the notebook again and see what you get. coba = np.array([ [0, 1], [1, 0], [1, 1], [0, 0] ]) coba # Forward Prop. for j in range(4): a0 = coba[j].reshape(coba[j].shape[0], 1) # 2x1 z1 = W1.dot(a0) + B1 # 2x2 * 2x1 + 2x1 = 2x1 a1 = sigmoid(z1) # 2x1 z2 = W2.dot(a1) + B2 # 1x2 * 2x1 + 1x1 = 1x1 outNN[j] = sigmoid(z2) # 1x1 outNN plt.plot(y, 'bo', linewidth=2, markersize=12) for j in range(4): plt.plot(j,outNN[j], 'r+', linewidth=2, markersize=12) for j in range(4): plt.plot(y, 'bo', j,outNN[j], 'r+', linewidth=2, markersize=12) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="jJH_c67OHs--" import numpy as np import pandas as pd import pdb # + [markdown] id="3q9foll-IKHS" # # ACTG- Semi Synthetic Dataset - Informative # # + colab={"base_uri": "https://localhost:8080/"} id="ggNLtAupIaaE" outputId="f9206da1-7430-4dc2-8904-eb3ed01e7865" #Refernce Taken from https://github.com/paidamoyo/counterfactual_survival_analysis seed=31415 np.random.seed(seed) data_frame = pd.read_csv('/content/ACTG175.csv', index_col=0) to_drop = ['cens', 'days', 'arms', 'pidnum'] data_frame = data_frame.fillna(data_frame.median()) age_data = data_frame[['age']] #print("age description:{}".format(age_data.describe())) age_data =np.array(age_data).reshape(len(age_data)) #print(age_data.shape) mu_age = np.mean(age_data) cd40_data = data_frame[['cd40']] #print("cd40_data description:{}".format(cd40_data.describe())) cd40_data=np.array(cd40_data).reshape(len(cd40_data)) #print(cd40_data.shape) mu_cd40 = np.mean(cd40_data) x_data = data_frame.drop(labels=to_drop, axis=1) #print("covariate description:{}".format(x_data.describe())) x_data =np.array(x_data).reshape(x_data.shape) #print(x_data.shape) beta_one = [ 0.0026987044, 0.0094957416, -0.2047708817, -0.0518243280, -0.2168722467, 0.0076266828, -0.0796099695, 0.6258748940, 0, 0.0009670592, -1.0101809693, -0.4038655688, -1.5959739338, -0.0563572096, 0.5244218189, 0, 0.2280296997, 0.0035548596, -0.0047974742, -0.0121293815, -1.0625208970, 
-0.0004266264,0.0005844290 ] beta_one = np.array(beta_one) #print("beta_one: ", beta_one.shape) assert(beta_one.shape[0] == x_data.shape[1]) beta_zero = [1.148569e-02, 3.896347e-03, -3.337743e-02, -1.215442e-01, -6.036002e-01, 4.563380e-03, -5.217492e-02, 1.414948e+00, 0, 9.294612e-06, 7.863787e-02, 4.756738e-01, -7.807835e-01, -1.766999e-01, 1.622865e-01, 0, 1.551692e-01, 2.793350e-03, -6.417969e-03, -9.856514e-03, -1.127284e+00, 2.247806e-04, 1.952943e-04] beta_zero = np.array(beta_zero) #print("beta_zero: ", beta_zero.shape) assert(beta_zero.shape[0] == x_data.shape[1]) def sigmoid(a): return 1/(1 + np.exp(-a)) N = x_data.shape[0] T_F = np.zeros(N) #Time-to-event Factual T_CF = np.zeros(N) #Time-to-event CounterFactual C_F = np.zeros(N) #Outcome Factual C_CF = np.zeros(N) Y_F = np.zeros(N) #Outcome Factual Y_CF = np.zeros(N) #Outcome CounterFactual delta_F = np.zeros(N) delta_CF = np.zeros(N) W = np.zeros(N) #Treatment Indicator prop = np.zeros(N) time = 'days' lamd_zero = 6 * 1e-4 lamd_one = 6 * 1e-4 ######################### #New variables introduced lamd_0_c and lamd_1_c for making Censoring Time dependent on features lamd_zero_c = 8.8 * 1e-4 lamd_one_c = 8.8 * 1e-4 alpha_c = 0.0050 ######################### alpha = 0.0055 U_0 = np.random.uniform(0,1, size=(N)) U_1 = np.random.uniform(0,1, size=(N)) gamma = -30 b_zero = 0 ######################################################################## data = 'info' #{'info', 'non_info'} Dataset Generation for Informative/Non-Informative Cenosring if data == 'non_info': c_mean_time = 1000 # mean censoring time c_std_time = 100 # std censoring time C = np.random.normal(c_mean_time, c_std_time, size=(N)) for i in range(N): pos_age_i = age_data[i] beta_i = gamma * ((pos_age_i - mu_age) + (cd40_data[i]-mu_cd40))# counfounding balance = 1.5 # parameter to balance prop_i = 1/balance * (0.3 + sigmoid(beta_i)) prop[i] = prop_i W_i = np.random.binomial(n=1, p=prop_i, size=1)[0] W[i] = W_i cov_eff_T_0 = lamd_zero * np.exp(np.dot(x_data[i], beta_zero)) cov_eff_T_1 = lamd_one * np.exp(np.dot(x_data[i], beta_one)) ###changed cov_eff_C_0 = lamd_zero_c * np.exp(np.dot(x_data[i], beta_zero)) cov_eff_C_1 = lamd_one_c * np.exp(np.dot(x_data[i], beta_one)) #### stoch_0 = alpha * np.log(U_0[i]) stoch_1 = alpha * np.log(U_1[i]) ####changed stoch_0_c = alpha_c * np.log(U_0[i]) stoch_1_c = alpha_c * np.log(U_1[i]) ### T_1_i = 1/alpha * np.log(1 - stoch_1/cov_eff_T_1) + b_zero T_0_i = 1/alpha * np.log(1 - stoch_0/cov_eff_T_0) T_F_i = W_i * T_1_i + (1-W_i) * T_0_i T_CF_i = (1-W_i) * T_1_i + W_i * T_0_i if data == 'info': # C_1_i = np.exp(np.random.normal(np.log(T_1_i), 0.01, size=(1))) # C_0_i = np.exp(np.random.normal(np.log(T_0_i), 0.01, size=(1))) C_1_i = 1/alpha_c * np.log(1 - stoch_1_c/cov_eff_C_1) + b_zero C_0_i = 1/alpha_c * np.log(1 - stoch_0_c/cov_eff_C_0) C_F_i = W_i * C_1_i + (1-W_i) * C_0_i C_CF_i = (1-W_i) * C_1_i + W_i * C_0_i Y_F_i = min(T_F_i, C_F_i) Y_CF_i = min(T_CF_i, C_CF_i) delta_F_i = T_F_i <= C_F_i delta_F[i] = delta_F_i delta_CF_i = T_CF_i <= C_CF_i delta_CF[i] = delta_CF_i T_F[i] = T_F_i T_CF[i] = T_CF_i C_F[i] = C_F_i C_CF[i] = C_CF_i Y_F[i] = Y_F_i Y_CF[i] = Y_CF_i elif data == 'non_info': C_i = C[i] Y_F_i = min(T_F_i, C_i) Y_CF_i = min(T_CF_i, C_i) delta_F_i = T_F_i <= C_i delta_F[i] = delta_F_i delta_CF_i = T_CF_i <= C_i delta_CF[i] = delta_CF_i T_F[i] = T_F_i T_CF[i] = T_CF_i Y_F[i] = Y_F_i Y_CF[i] = Y_CF_i # + id="Fv9pJ4nMQzs9" ####Saving Data Frame df = pd.DataFrame(x_data) if data=='non_info': df = df.assign(W = W, y_f = Y_F,e_f = 
delta_F, t_f = T_F,y_cf = Y_CF, e_cf = delta_CF, t_cf = T_CF) elif data=='info': df = df.assign(W = W, y_f = Y_F,e_f = delta_F, t_f = T_F, c_f = C_F ,y_cf = Y_CF, e_cf = delta_CF, t_cf = T_CF, c_cf = C_CF) df.to_csv("actg_semi_{}_new.csv".format(data), index=False) # + colab={"base_uri": "https://localhost:8080/", "height": 593} id="VX6F6MBWoRVM" outputId="86d63831-cd72-4afe-871e-0093ba167e22" import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize =(7, 4)) ax.hist(T_CF,bins=100) plt.xlabel("True_Outcome (Time)") plt.ylabel("Number of Samples") plt.title('Time to event -Counterfactual') import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize =(7, 4)) ax.hist(C_CF,bins=100) plt.xlabel("True_Outcome (Time)") plt.ylabel("Number of Samples") plt.title('Censoring Time - Counterfactual') # + colab={"base_uri": "https://localhost:8080/"} id="HlYf1dZbpZfy" outputId="6df4139e-d0b4-4bb9-fb17-f2f6cf2885ee" unique, counts = np.unique(delta_F, return_counts=True) print('Number of Censored samples', counts[0]) print('Number of uncesnored samples', counts[1]) unique, counts = np.unique(W, return_counts=True) print('Number of Control samples', counts[0]) print('Number of Treated samples', counts[1]) df['e_f'] = delta_F df['e_cf'] = delta_CF df['y_f'] = Y_F df['y_cf'] = Y_CF # + id="DpEsc6SAHoJ0" np.save('covariates', x_data) np.save('treatment', W) data_F = {'y_f': Y_F, 'e_f': delta_F, 't_f': T_F,'y_cf': Y_CF, 'e_cf': delta_CF,'t_cf': T_CF} df = pd.DataFrame.from_dict(data_F) df.to_csv('event_pairs.csv', index=False) # + [markdown] id="gpkvWe9WIVu5" # # Synthetic Dataaset # + [markdown] id="pzegL2wERpdj" # Non-Informative Censoring # + id="kVgsWOT4RncB" colab={"base_uri": "https://localhost:8080/"} outputId="4c819cc2-c611-42b3-e6cd-ad94029b6d5a" import random import numpy as np import pandas as pd np.random.seed(1234) random.seed(1234) mu, sigma = 0, 1 # mean and standard deviation n = 5000 #Total No. 
of Patients X1,X2,X3 = [], [], []#Three 4 dimensional vector features for i in range(n): X1.append(np.random.normal(mu, sigma,4)) X2.append(np.random.normal(mu, sigma,4)) X3.append(np.random.normal(mu, sigma,4)) beta_0, beta_1 = [], [] #Mean Parameters for exponential distribution for i in range(n): gamma = np.array([10,10,10,10]) gamma1 = np.array([10,10,10,10]) beta_0.append(np.square(np.dot(gamma.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma.reshape((1,4)),X1[i].reshape((4,1)))) beta_1.append(np.square(np.dot(gamma1.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma1.reshape((1,4)),X2[i].reshape((4,1)))) X1 = np.asarray(X1) X2 = np.asarray(X2) X3 = np.asarray(X3) d = { 'feature1_1' :X1[:,0],'feature1_2' :X1[:,1],'feature1_3' :X1[:,2],'feature1_4' :X1[:,3], 'feature2_1' :X2[:,0],'feature2_2' :X2[:,1],'feature2_3' :X2[:,2],'feature2_4' :X2[:,3], 'feature3_1' :X3[:,0],'feature3_2' :X3[:,1],'feature3_3' :X3[:,2],'feature3_4' :X3[:,3], } df = pd.DataFrame(data = d) T0 = [] #Outcome Variable Time for Treatment 0 T1 = [] #Outcome Variable Time for Treatment 1 for i in range(n): T0.append(int(np.random.exponential(abs(float(beta_0[i])),1))) T1.append(int(np.random.exponential(abs(float(beta_1[i])),1))) T0_CLIP = np.clip(T0, 3,max(T0)) T1_CLIP = np.clip(T1,3,max(T1)) W = np.zeros((len(df))) for i in range(len(df)): if df['feature3_1'][i] < 1 and df['feature3_1'][i] > -1 and df['feature3_2'][i] < 1 and df['feature3_2'][i] >0: W[i] = 1 df['treatment'] = W c_mean_time = 500 # mean censoring time #{1000,500,200,} c_std_time = 50 # std censoring time #{,50,50} #censoring {,49.9,31.28} U_0 = np.random.uniform(0,1, size=(n)) U_1 = np.random.uniform(0,1, size=(n)) C = np.random.normal(c_mean_time, c_std_time, size=(n)) TF = W*T1_CLIP + (1-W)*T0_CLIP TCF= (1-W)*T1_CLIP + W*T0_CLIP YF = [] YCF = [] delta_F = [] delta_CF = [] for i in range(n): C_i = C[i] YF.append(min(TF[i], C[i])) YCF.append(min(TCF[i], C[i])) delta_F.append(int(TF[i] <=C[i])) delta_CF.append(int(TCF[i] <=C[i])) unique, counts = np.unique(delta_F, return_counts=True) print('Number of Censored samples', counts[0]) print('Number of uncesnored samples', counts[1]) unique, counts = np.unique(W, return_counts=True) print('Number of Control samples', counts[0]) print('Number of Treated samples', counts[1]) df['e_f'] = delta_F df['e_cf'] = delta_CF df['y_f'] = YF df['y_cf'] = YCF # + [markdown] id="COrX246lRuRu" # Synthetic - Informative Censoring (Old Technique of Data Generation) # + id="5SzepWqtLOB4" # seed=31415 # np.random.seed(seed) # mu, sigma = 1, 1 #for informative {1,1} ,for non informative{0,1} mean and standard deviation # N = 5000 #Total No. 
of Patients # data = 'info' #{'info', 'non_info'} Dataset Generation Select 'info' for Informative and 'non_info' for Non-Informative Cenosring # X1,X2,X3 = [], [], []#Three 4 dimensional vector features # for i in range(N): # X1.append(np.random.normal(mu, sigma,4)) # X2.append(np.random.normal(mu, sigma,4)) # X3.append(np.random.normal(mu, sigma,4)) # X1 = np.asarray(X1) # X2 = np.asarray(X2) # X3 = np.asarray(X3) # d = { # 'feature1_1' :X1[:,0],'feature1_2' :X1[:,1],'feature1_3' :X1[:,2],'feature1_4' :X1[:,3], # 'feature2_1' :X2[:,0],'feature2_2' :X2[:,1],'feature2_3' :X2[:,2],'feature2_4' :X2[:,3], # 'feature3_1' :X3[:,0],'feature3_2' :X3[:,1],'feature3_3' :X3[:,2],'feature3_4' :X3[:,3], # } # df = pd.DataFrame(data = d) # x_data = np.array(df) # gamma = np.array([10,10,10,10]) # For Informative-np.array([1,1,1,1]), Non-informative [10,10,10,10] # T_F = np.zeros(N) #Time-to-event Factual # T_CF = np.zeros(N) #Time-to-event CounterFactual # C_F = np.zeros(N) #Censoring Time Factual # C_CF = np.zeros(N) #Censoring Time Counterfactual # Y_F = np.zeros(N) #Outcome Factual # T_0 = np.zeros(N) # T_1 = np.zeros(N) # Y_CF = np.zeros(N) #Outcome CounterFactual # delta_F = np.zeros(N) #Event/Censoring Indicator Factual # delta_CF = np.zeros(N) #Event/Censoring Indicator CounterFactual # W = np.zeros(N) #Treatment Variable # ##################### # for i in range(N): # beta_0_i = np.square(np.dot(gamma.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma.reshape((1,4)),X1[i].reshape((4,1))) # beta_1_i = np.square(np.dot(gamma.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma.reshape((1,4)),X2[i].reshape((4,1))) # T_0_i = np.random.exponential(abs(float(beta_0_i)),1) # T_1_i = np.random.exponential(abs(float(beta_1_i)),1) # if df['feature3_1'][i] < 3 and df['feature3_1'][i] > 1 and df['feature3_2'][i] < 5 and df['feature3_2'][i] >1: #for informative # #if df['feature3_1'][i] < 1 and df['feature3_1'][i] > -1 and df['feature3_2'][i] < 1 and df['feature3_2'][i] >0: #for non-informative # W[i] = 1 # else: # W[i] = 0 # T_F_i = W[i] * T_1_i + (1-W[i]) * T_0_i # T_CF_i = (1-W[i])* T_1_i + W[i] * T_0_i # T_0[i] = T_0_i # T_1[i] = T_1_i # T_F[i] = T_F_i # T_CF[i] = T_CF_i # ###################### # if data == 'non_info': # c_mean_time = 6 # mean censoring time #1000,500 # c_std_time = 0.01 # std censoring time #100,50 # C = np.exp(np.random.normal(np.log(c_mean_time), c_std_time, size=(N))) # print('Non Informative Cesnoring') # elif data == 'info': # print('Informative Censoring') # else: # prit('Type of Censoring Not defined') # ##################### # for i in range(N): # if data == 'info': # C_1_i = np.exp(np.random.normal(np.log(T_1[i]), 0.01, size=(1))) # C_0_i = np.exp(np.random.normal(np.log(T_0[i]), 0.01, size=(1))) # C_F_i = W[i] * C_1_i + (1-W[i]) * C_0_i # C_CF_i = (1-W[i]) * C_1_i + W[i] * C_0_i # Y_F_i = min(T_F[i], C_F_i) # Y_CF_i = min(T_CF[i], C_CF_i) # delta_F_i = T_F[i] <= C_F_i # delta_F[i] = delta_F_i # delta_CF_i = T_CF[i] <= C_CF_i # delta_CF[i] = delta_CF_i # C_F[i] = C_F_i # C_CF[i] = C_CF_i # Y_F[i] = Y_F_i # Y_CF[i] = Y_CF_i # elif data == 'non_info': # C_i = C[i] # Y_F_i = min(T_F[i], C_i) # Y_CF_i = min(T_CF[i], C_i) # delta_F_i = T_F[i] <= C_i # delta_F[i] = delta_F_i # delta_CF_i = T_CF[i] <= C_i # delta_CF[i] = delta_CF_i # Y_F[i] = Y_F_i # Y_CF[i] = Y_CF_i # else: # print('DATA not defined') # + id="tyqYlsZ4lZNo" # import matplotlib.pyplot as plt # fig, ax = plt.subplots(figsize =(10, 7)) # ax.hist(C_CF,bins=100) # plt.xlabel("True_Outcome (Time)") # plt.ylabel("Number 
of Samples") # + [markdown] id="FcuYoqXBxXgW" # # Synthetic Dataset- Informative Censoring (New) # + [markdown] id="HV_cV_PEf-1M" # Three 4-dimensional covariates are sampled from normal distribution: \\ # $\mathbf{x}_1^{(i)}, \mathbf{x}_2^{(i)}, \mathbf{x}_3^{(i)} \sim \mathcal{N}(1,1)$ # # Time-to-event Generation: \\ # $T^{(i)}_0 \sim \exp\Big(\big|(\gamma_1^T \mathbf{x}_3^{(i)})^2 + \gamma_1^T \mathbf{x}_1^{(i)}\big|\Big)$, # $T^{(i)}_1 \sim \exp\Big(\big|(\gamma_1^T \mathbf{x}_3^{(i)})^2 + \gamma_1^T \mathbf{x}_2^{(i)}\big|\Big)$ # # # Censoring Time Generation: \\ # $C^{(i)}_0 \sim \exp\Big(\big|(\gamma_2^T \mathbf{x}_3^{(i)}) + \gamma_2^T \mathbf{x}_1^{(i)}\big|\Big)$, # $C^{(i)}_1 \sim \exp\Big(\big|(\gamma_2^T \mathbf{x}_3^{(i)}) + \gamma_2^T \mathbf{x}_2^{(i)}\big|\Big)$ # # # where, # $\gamma_1,\gamma_2 = [5,5,5,5] $ and $[80,80,80,80] $respectively for informative censoring. # + colab={"base_uri": "https://localhost:8080/"} id="hbk-RmFelDzZ" outputId="8261e4d5-ba22-43b9-a1a3-478f2767a234" #changed import pdb seed=31415 np.random.seed(seed) mu, sigma = 1, 1 mu1,sigma1 = 2,1 #for informative {1,1} ,for non informative{0,1} mean and standard deviation N = 5000 #Total No. of Patients data = 'info' #{'info', 'non_info'} Dataset Generation Select 'info' for Informative and 'non_info' for Non-Informative Cenosring X1,X2,X3 = [], [], []#Three 4 dimensional vector features for i in range(N): X1.append(np.random.normal(mu, sigma,4)) X2.append(np.random.normal(mu, sigma,4)) X3.append(np.random.normal(mu, sigma,4)) X1 = np.asarray(X1) X2 = np.asarray(X2) X3 = np.asarray(X3) d = { 'feature1_1' :X1[:,0],'feature1_2' :X1[:,1],'feature1_3' :X1[:,2],'feature1_4' :X1[:,3], 'feature2_1' :X2[:,0],'feature2_2' :X2[:,1],'feature2_3' :X2[:,2],'feature2_4' :X2[:,3], 'feature3_1' :X3[:,0],'feature3_2' :X3[:,1],'feature3_3' :X3[:,2],'feature3_4' :X3[:,3], } df = pd.DataFrame(data = d) x_data = np.array(df) gamma = np.array([5,5,5,5]) # For Informative-np.array([5,5,5,5]), Non-informative [10,10,10,10] T_F = np.zeros(N) #Time-to-event Factual T_CF = np.zeros(N) #Time-to-event CounterFactual C_F = np.zeros(N) #Censoring Time Factual C_CF = np.zeros(N) #Censoring Time Counterfactual Y_F = np.zeros(N) #Outcome Factual T_0 = np.zeros(N) T_1 = np.zeros(N) Y_CF = np.zeros(N) #Outcome CounterFactual delta_F = np.zeros(N) #Event/Censoring Indicator Factual delta_CF = np.zeros(N) #Event/Censoring Indicator CounterFactual W = np.zeros(N) #Treatment Variable ##################### for i in range(N): beta_0_i = np.square(np.dot(gamma.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma.reshape((1,4)),X1[i].reshape((4,1))) beta_1_i = np.square(np.dot(gamma.reshape((1,4)),X3[i].reshape((4,1)))) + np.dot(gamma.reshape((1,4)),X2[i].reshape((4,1))) T_0_i = np.random.exponential(abs(float(beta_0_i)),1) T_1_i = np.random.exponential(abs(float(beta_1_i)),1) if df['feature3_1'][i] < 3 and df['feature3_1'][i] > 1 and df['feature3_2'][i] < 5 and df['feature3_2'][i] >1: #for informative #if df['feature3_1'][i] < 1 and df['feature3_1'][i] > -1 and df['feature3_2'][i] < 1 and df['feature3_2'][i] >0: #for non-informative W[i] = 1 else: W[i] = 0 T_F_i = W[i] * T_1_i + (1-W[i]) * T_0_i T_CF_i = (1-W[i])* T_1_i + W[i] * T_0_i T_0[i] = T_0_i T_1[i] = T_1_i T_F[i] = T_F_i T_CF[i] = T_CF_i ###################### if data == 'non_info': c_mean_time = 6 # mean censoring time #1000,500 c_std_time = 0.01 # std censoring time #100,50 C = np.exp(np.random.normal(np.log(c_mean_time), c_std_time, size=(N))) print('Non Informative Cesnoring') elif 
data == 'info': print('Informative Censoring') else: print('Type of Censoring Not defined') ##################### for i in range(N): if data == 'info': # C_1_i = np.exp(np.random.normal(np.log(T_1[i]), 0.01, size=(1))) # C_0_i = np.exp(np.random.normal(np.log(T_0[i]), 0.01, size=(1))) #Changed gamma1 = np.array([80,80,80,80]) #pdb.set_trace() beta_0_i = np.dot(gamma1.reshape((1,4)),X3[i].reshape((4,1))) + np.dot(gamma1.reshape((1,4)),X1[i].reshape((4,1))) beta_1_i = np.dot(gamma1.reshape((1,4)),X3[i].reshape((4,1))) + np.dot(gamma1.reshape((1,4)),X2[i].reshape((4,1))) C_0_i = np.random.exponential(abs(float(beta_0_i) ),1) C_1_i = np.random.exponential(abs(float(beta_1_i) ),1) C_F_i = W[i] * C_1_i + (1-W[i]) * C_0_i C_CF_i = (1-W[i]) * C_1_i + W[i] * C_0_i Y_F_i = min(T_F[i], C_F_i) Y_CF_i = min(T_CF[i], C_CF_i) delta_F_i = T_F[i] <= C_F_i delta_F[i] = delta_F_i delta_CF_i = T_CF[i] <= C_CF_i delta_CF[i] = delta_CF_i C_F[i] = C_F_i C_CF[i] = C_CF_i Y_F[i] = Y_F_i Y_CF[i] = Y_CF_i elif data == 'non_info': C_i = C[i] Y_F_i = min(T_F[i], C_i) Y_CF_i = min(T_CF[i], C_i) delta_F_i = T_F[i] <= C_i delta_F[i] = delta_F_i delta_CF_i = T_CF[i] <= C_i delta_CF[i] = delta_CF_i Y_F[i] = Y_F_i Y_CF[i] = Y_CF_i else: print('DATA not defined') unique, counts = np.unique(delta_F, return_counts=True) print('Number of Censored samples', counts[0]) print('Number of uncensored samples', counts[1]) unique, counts = np.unique(W, return_counts=True) print('Number of Control samples', counts[0]) print('Number of Treated samples', counts[1]) df['e_f'] = delta_F df['e_cf'] = delta_CF df['y_f'] = Y_F df['y_cf'] = Y_CF # + colab={"base_uri": "https://localhost:8080/", "height": 919} id="yrfXD3zno8K8" outputId="93d75eba-0a7d-4b71-f4b2-6efc19b61e05" import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize =(10, 7)) ax.hist(T_F,bins=100) plt.xlabel("True_Outcome (Time)") plt.ylabel("Number of Samples") plt.title('Time to event -factual') import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize =(10, 7)) ax.hist(C_F,bins=100) plt.xlabel("True_Outcome (Time)") plt.ylabel("Number of Samples") plt.title('Censoring Time - factual') # + id="DhNJTuTjtseL" ###Saving Data Frame if data=='non_info': df = df.assign(treatment = W, y_f = Y_F,e_f = delta_F, t_f = T_F,y_cf = Y_CF, e_cf = delta_CF, t_cf = T_CF) elif data=='info': df = df.assign(treatment = W, y_f = Y_F,e_f = delta_F, t_f = T_F, c_f = C_F ,y_cf = Y_CF, e_cf = delta_CF, t_cf = T_CF, c_cf = C_CF) df.to_csv("synthetic_{}.csv".format(data), index=False) # + id="6h7lI4DwA6-z" # np.save('covariates', x_data) # np.save('treatment', W) # data_F = {'y_f': Y_F, 'e_f': delta_F, 't_f': T_F, 'y_cf': Y_CF, 'e_cf': delta_CF, 't_cf': T_CF} # df1 = pd.DataFrame.from_dict(data_F) # df1.to_csv('event_pairs.csv', index=False) # + [markdown] id="ADjlzi0EMJAc" # # METABRIC Dataset # + id="8kQsI0x0t--N" colab={"base_uri": "https://localhost:8080/"} outputId="755d568a-625d-492b-dc48-d1ed1a825314" # # %cd /content/drive/MyDrive/Colab Notebooks/Thesis/METABRIC Dataset # import pandas as pd # import numpy as np # import pickle # import pdb # def interpolate(S,vartype): # S_non_nan = S[~S.isna()] # if (vartype == 'real'): # S = S.fillna(S_non_nan.mean()) # if (vartype == 'int'): # S = S.fillna(int(S_non_nan.mean())) # if (vartype == 'cat'): # S = S.fillna(S_non_nan.mode()[0]) # #pdb.set_trace() # return S # fn = "brca_metabric_clinical_data.tsv" # treatment = 'Chemotherapy' #{'Type of Breast Surgery', 'Chemotherapy' , 'Hormone Therapy', 'Radio Therapy'} # time = 'Overall Survival 
(Months)' # censored = 'Overall Survival Status' # outfile = open('outdata','wb') # df = pd.read_csv(fn,sep = "\t") # df.dropna(subset = ['Chemotherapy'],inplace=True) # ####### Step 1 # ####### Drop 'Study ID', 'Sample ID', 'Sample Type' , 'Sex', 'Cohorot' # colsAnalyzed = ['Age at Diagnosis', 'Type of Breast Surgery', 'Cancer Type', 'Cancer Type Detailed', 'Cellularity','Chemotherapy','Pam50 + Claudin-low subtype','ER status measured by IHC','ER Status','Neoplasm Histologic Grade','HER2 status measured by SNP6','HER2 Status','Tumor Other Histologic Subtype', # 'Hormone Therapy','Inferred Menopausal State','Integrative Cluster', 'Primary Tumor Laterality', 'Lymph nodes examined positive','Mutation Count', # 'Nottingham prognostic index', 'Oncotree Code', 'PR Status', 'Radio Therapy', '3-Gene classifier subtype', 'Tumor Size', 'Tumor Stage','Overall Survival (Months)', # 'Overall Survival Status'] # colType = ['real','cat','cat','cat','cat','cat','cat', # 'cat','cat','cat','cat','cat','cat', # 'cat','cat','cat','cat','cat','int', # 'real','cat','cat','cat','cat','int','cat','real', # 'cat'] # df = df[colsAnalyzed] # ####### Step 2 # ####### Replace Nan in continuous valued column using mean & categorical column with mode as in Deep-Hit paper # for i,cols in enumerate(colsAnalyzed): # vartype = colType[i] # df[cols] =interpolate(df[cols],vartype) # ###### Step 3 # ####### Replace categorical columns with One-Hot encoding # for i,cols in enumerate(colsAnalyzed): # if (colType[i] == 'cat'): # one_hot = pd.get_dummies(df[cols]) # if (cols == censored): # df[cols] = df[cols].str.split(':',expand=True) # #pdb.set_trace() # if (cols == treatment): # df[cols] = df[cols] == df[cols].unique()[0] # #pdb.set_trace() # if (not(cols in [censored,treatment])): # one_hot.columns = [cols] * df[cols].unique().shape[0] # df = df.drop(cols,axis = 1) # df = df.join(one_hot) # #### Step 4 # #### change to 'x','t','e','y' format # data = {} # colsAnalyzed.remove(treatment) # data['t'] = df[treatment] # colsAnalyzed.remove(time) # data['y'] = df[time] # colsAnalyzed.remove(censored) # data['e'] = df[censored] ## 0 for Living, 1 for deceased # data['x'] = df[colsAnalyzed] # + [markdown] id="OAt22A0yvu7d" # # Dataset Analysis # + colab={"base_uri": "https://localhost:8080/"} id="Wpwrkxx3F3lS" outputId="60bbdbc4-01f6-454d-ae9c-9ef679f36a54" unique, counts = np.unique(df['treatment'], return_counts=True) print('Number of Control samples', counts[0]) print('Number of Treated samples', counts[1]) unique, counts = np.unique(df['e_f'], return_counts=True) print('Number of Factual Cesnored samples', counts[0]) print('Number of Factual Uncensored samples', counts[1]) # unique, counts = np.unique(delta_CF, return_counts=True) # print('Number of Counterfactual Cesnored samples', counts[0]) # print('Number of Counterfactual Uncensored samples', counts[1]) # + colab={"base_uri": "https://localhost:8080/", "height": 477} id="n7tRd2j-PL6Z" outputId="22327413-b673-46f3-9ca8-f2b12cf09f8a" import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize =(10, 7)) ax.hist(T_F,bins=100) plt.xlabel("True_Outcome(Time)") plt.ylabel("Number of Samples") plt.title('Factual- Complete Data') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Titanic Survival : Predicitve Model Notebook # **Author**:🧕🏿 \ # **Date** : 31th August, 2019 # ### Building Predictive Model - 
Baseline Model # import os import numpy as np import pandas as pd # %matplotlib inline processed_data_path = os.path.join (os.path.pardir, 'data','processed') test_file_path = os.path.join (processed_data_path, 'test.csv') train_file_path = os.path.join (processed_data_path, 'train.csv') train_df = pd.read_csv(train_file_path, index_col = 'PassengerId') test_df = pd.read_csv(test_file_path, index_col = 'PassengerId') train_df.info() test_df.info() # ### Data Preparation X = train_df.loc[:, 'Age':].as_matrix().astype('float') y = train_df['Survived'].ravel() print (X.shape, y.shape) # + #train test split from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) # - print(np.mean(y_train), np.mean(y_test)) # #### Baseline Model from sklearn.dummy import DummyClassifier #creates the model model_dummy = DummyClassifier(strategy='most_frequent', random_state = 0) #training the model model_dummy.fit(X_train,y_train); #Basline Accuracy print('Accuracy Score for the bseline Model : {0: .3f}'.format(model_dummy.score(X_test,y_test))) # + #performance metricz from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score # - #ACCURACY print('accuracy for baseline model: {0: .3f}' .format(accuracy_score(y_test, model_dummy.predict(X_test)))) #CONFUSION MATRIX print('confusion_matrix for baseline model: \n {0}' .format(confusion_matrix(y_test, model_dummy.predict(X_test)))) print('precision for baseline model: {0: .3f}' .format(precision_score(y_test, model_dummy.predict(X_test)))) print('recall for baseline model: {0: .3f}' .format(recall_score(y_test, model_dummy.predict(X_test)))) # ### Submitting the Baseline Model def get_submission_file(model, filename): #converting to the matrix test_X = test_df.as_matrix().astype('float') #make predicitions predictions = model.predict(test_X) #dataframe to submit df_submission = pd.DataFrame({'PassengerId':test_df.index, 'Survived':predictions}) #submission file submission_data_path = os.path.join(os.path.pardir, 'data','external') submission_file_path = os.path.join(submission_data_path, filename) df_submission.to_csv(submission_file_path, index = False) get_submission_file(model_dummy, 'dummy_01.csv') from sklearn.linear_model import LogisticRegression lr_model = LogisticRegression(random_state = 0) lr_model.fit(X_train, y_train); print('Score For logistic Regression : Version 0.0.1 - {0: .2f}'. format(lr_model.score(X_test,y_test))) #ACCURACY print('accuracy for baseline model: {0: .3f}' .format(accuracy_score(y_test, lr_model.predict(X_test)))) #CONFUSION MATRIX print('confusion_matrix for baseline model: \n {0}' .format(confusion_matrix(y_test, lr_model.predict(X_test)))) print('precision for baseline model: {0: .3f}' .format(precision_score(y_test, lr_model.predict(X_test)))) print('recall for baseline model: {0: .3f}' .format(recall_score(y_test, lr_model.predict(X_test)))) get_submission_file(lr_model, 'lr_model.csv'); # + from sklearn.linear_model import LogisticRegression lr_model = LogisticRegression(random_state = 0) # - lr_model.fit(X_train, y_train); print('Score For logistic Regression : Version 0.0.1 - {0: .2f}'. 
format(lr_model.score(X_test,y_test))) # ### Performance Metrics For Logisitic Regression Model #ACCURACY print('accuracy for baseline model: {0: .3f}' .format(accuracy_score(y_test, lr_model.predict(X_test)))) #CONFUSION MATRIX print('confusion_matrix for baseline model: \n {0}' .format(confusion_matrix(y_test, lr_model.predict(X_test)))) print('precision for baseline model: {0: .3f}' .format(precision_score(y_test, lr_model.predict(X_test)))) print('recall for baseline model: {0: .3f}' .format(recall_score(y_test, lr_model.predict(X_test)))) lr_model.coef_ get_submission_file(lr_model, 'lr_model.csv'); # ## Hyper Parameter Optimization lr_model = LogisticRegression(random_state= 0) from sklearn.model_selection import GridSearchCV params = {'C':[1.0,10.0, 50.0,100.0, 1000.0], 'penalty':['l1','l2']} clf = GridSearchCV(lr_model, param_grid = params, cv = 3) clf.fit(X_train, y_train) clf.best_params_ get_submission_file(clf,'03_lr.csv') # # Patch # # Literate notebooks benefit from splitting their code and documentation across several cells. Unfortunately, the nature of the notebook-kernel execution model introduces some constraints upon this, as it is impossible to extend Python local namespaces across different cells. To facilitate this, we introduce the `patch` decorator which operates at runtime and build time to unify separate definitions. # %load_ext literary.module from typing import Callable, Type, TypeVar T = TypeVar("T") # Some wrapper classes store the original object in a named attribute. Here we define a few of the common cases. WRAPPER_NAMES = "fget", "fset", "fdel", "__func__", "func" # Let's implement the *runtime* decorator, which monkeypatches the class with the decorated function def patch(cls: Type) -> Callable[[T], T]: """Decorator to monkey-patch additional methods to a class. At import-time, this will disappear and the source code itself will be transformed Inside notebooks, the implementation below will be used. :param cls: :return: """ def get_name(func): # Fix #4 to support patching (property) descriptors try: return func.__name__ except AttributeError: # Support various descriptors for attr in WRAPPER_NAMES: try: return getattr(func, attr).__name__ except AttributeError: continue # Raise original exception raise def _notebook_patch_impl(func): setattr(cls, get_name(func), func) return func return _notebook_patch_impl # We can now implement a test class to see this decorator in action class TestClass: pass # At runtime, an instantiated class can have new methods attached to its type obj = TestClass() @patch(TestClass) def method_a(self): return "method a" # And we can see that the method behaves as expected assert obj.method_a() == "method a" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lesson 10: Make Classes: advanced topics # # **Udacity Full Stack Web Developer Nanodegree program** # # Part 01. Programming fundamentals and the web # # [Programming foundations with Python](https://www.udacity.com/course/programming-foundations-with-python--ud036) # # # # br3ndonland # ## 01. Advanced Ideas in OOP # # ## 02. Class Variables # # * Class variables can be shared by all instances of the Class. # * The Google Style Guide recommends all caps for class variable names, like `VALID_RATINGS`. # * PEP 8 [recommends](https://www.python.org/dev/peps/pep-0008/#class-names) CapWords for class names. 
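# As a quick illustration of the first point above, a class variable is a single object shared by every instance. The class and ratings list below are illustrative stand-ins for demonstration only, not the course's `media.Movie` code.
# +
class ToyMovie:
    """Toy class illustrating a class variable shared by every instance."""

    # Class variable: defined once on the class and shared by all instances
    # (all caps, as the Google Style Guide recommends).
    VALID_RATINGS = ["G", "PG", "PG-13", "R"]

    def __init__(self, title):
        # Instance variable: unique to each instance.
        self.title = title


toy_story = ToyMovie("Toy Story")
avatar = ToyMovie("Avatar")

# Every instance and the class itself reference the same list object.
assert toy_story.VALID_RATINGS is avatar.VALID_RATINGS is ToyMovie.VALID_RATINGS
print(ToyMovie.__name__, ToyMovie.VALID_RATINGS)
# -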
# # *Instructor notes*: # # Around 0:40 or so, Kunal said with regard to the valid_ratings variable: # # "...This is an array or a list..." # # To avoid confusion down the road: Arrays and lists are actually not the same thing in Python. 99.9% of the time, you'll want to use lists. They're flexible, and good for pretty much every purpose. # # However, there is also an array.array class in Python, which is essentially a thin wrapper on top of C arrays. # # They're not something you'll probably need to use, but read more here if interested: http://www.wired.com/2011/08/python-notes-lists-vs-arrays/ # # # ## 03. Quiz: Using Predefined Class Variables # # We explored three of the built-in class variables: `__doc__`, `__name__`, `__module__`. # # ```python # print(media.Movie.__doc__) # documentation function # print(media.Movie.__name__) # prints the class function name # print(media.Movie.__module__) # prints module name # ``` # ``` # This class provides a way to store movie information. # Movie # media # ``` # import webbrowser webbrowser.__doc__ # ## 04. Inheritance # # This is another major topic in object-oriented programming. Think about it like a family. A child inherits characteristics and genomic material from their parents. Similarly, child classes can inherit information from parent classes # # ## 05. Class Parent # # We create a parent class, and reference it in a child instance. While watching, I was thinking that the parent class should be in a separate file. Kunal addressed this, and he just combined them for ease of demonstration. # # ```python # # Udacity full-stack web developer nanodegree # # 01. Programming fundamentals and the web # # Lesson 10. Make Classes: Advanced Topics # # # class Parent(): # def __init__(self, last_name, eye_color): # print("Parent Constructor called.") # self.last_name # self.eye_color # # # # Normally the child instance would be in a separate file. # # Included here for ease of demonstration. # billy_cyrus = Parent("Cyrus", "blue") # print(billy_cyrus.last_name) # # ``` # # When I first ran this, I was getting `AttributeError: 'Parent' object has no attribute 'last_name'`. I hadn't finished defining parameters in the Parent Class. # # ```python # # Udacity full-stack web developer nanodegree # # 01. Programming fundamentals and the web # # Lesson 10. Make Classes: Advanced Topics # # # class Parent(): # def __init__(self, last_name, eye_color): # print("Parent Constructor called.") # self.last_name = last_name # self.eye_color = eye_color # # # # Normally the child instance would be in a separate file. # # Included here for ease of demonstration. # billy_cyrus = Parent("Cyrus", "blue") # print(billy_cyrus.last_name) # # ``` # ## 06. Quiz: What's the Output? # # The code seems highly redundant. # # *What will the output be?* # # ``` # Parent Constructor called # Child Constructor called # Cyrus # 5 # ``` # # *output*: # # ``` # Parent Constructor called. # Cyrus # Child Constructor called. # Parent Constructor called. # Cyrus # 5 # ``` # When computing the `miley_cyrus` function, the Child Constructor is first called. The Child calls the Parent. The output is then generated. # ## 07. Transitioning to Class `Movie` # ## 08. Updating the Design for Class `Movie` # # We revisit the code from our Class `Movie`. What if we wanted to add a similar Class for TV shows? Movies and TV shows share some characteristics, like the presence of a title and duration. 
We could create a parent function, `class Video():`, and group `class Movie(Video):` and `class TvShow(Video):` as child classes. Inheritance allows us to write code in an intuitive, human-readable way. # # # ## 10. Reusing methods # # Kunal went back to the *inheritance.py* file, and defined a new instance method inside `class Parent()` called `show_info`. # # ```python # def show_info(self): # print("Last Name - " + self.last_name) # print("Eye Color - " + self.eye_color) # # ``` # # He then referenced it with `billy_cyrus.show_info()`. # # ## 11. Method Overriding # # The `show_info()` method is present inside the `class Parent():`. It can be created again inside the `class Child():`, and it will override the parent function. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from scipy import linalg A = np.array([[1, 1], [2, 3]]) print("A array") print(A) b = np.array([[1], [2]]) print("b array") print(b) solution = np.linalg.solve(A, b) print("solution ") print(solution) # validate results print("validation of solution (should be a 0 matrix)") print(A.dot(solution) - b) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os os.chdir(r'D:\D\Edureka\Edureka - 24 June - Python\Class 17') import pandas as pd import numpy as np dataset = pd.read_csv('Churn_Modelling.csv') x=dataset.iloc[:,3:13].values y = dataset.iloc[:,13].values y x from sklearn.preprocessing import LabelEncoder, OneHotEncoder labelencoder_x = LabelEncoder() x[:,1] = labelencoder_x.fit_transform(x[:,1]) labelencoder_x = LabelEncoder() x[:,2] = labelencoder_x.fit_transform(x[:,2]) x onehotencoder = OneHotEncoder(categorical_features=[1]) x = onehotencoder.fit_transform(x).toarray() pd.DataFrame(x) from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2) from sklearn.neighbors import KNeighborsClassifier classifier_knn = KNeighborsClassifier(n_neighbors=801, p=2) classifier_knn.fit(x_train, y_train) y_pred = classifier_knn.predict(x_test) from sklearn.metrics import confusion_matrix confusion_matrix(y_test, y_pred) from sklearn.svm import SVC classifier_svm = SVC(kernel='rbf') classifier_svm.fit(x_train, y_train) y_pred=classifier_svm.predict(x_test) confusion_matrix(y_test, y_pred) # !pip install xgboost from xgboost import XGBClassifier classifier = XGBClassifier() classifier.fit(x_train, y_train) y_pred = classifier.predict(x_test) confusion_matrix(y_test, y_pred) from sklearn.metrics import accuracy_score accuracy_score(y_test, y_pred) from sklearn.model_selection import cross_val_score accuracies = cross_val_score(estimator=classifier, X = x_train, y=y_train, cv=10) accuracies.mean() accuracies.std() accuracies os.chdir(r'C:\Users\Admin\Downloads') dataset = pd.read_csv('Recommend.csv', names = ['user_id','movie_id','rating','timestamp']) dataset n_users = dataset.user_id.unique().shape[0] n_movies=dataset.movie_id.unique().shape[0] from sklearn.model_selection import train_test_split train_data, test_data = train_test_split(dataset, test_size=0.25) train_data_matrix = np.zeros((n_users, n_movies)) pd.DataFrame(train_data_matrix) for line in train_data.itertuples(): 
train_data_matrix[line[1]-1, line[2]-1] = line[3] train_data_matrix[line[1]-1,line[2]-1] train_data_matrix[243,707] pd.DataFrame(train_data_matrix) list(train_data.itertuples()) test_data_matrix = np.zeros((n_users, n_movies)) for line in test_data.itertuples(): test_data_matrix[line[1]-1, line[2]-1] = line[3] pd.DataFrame(test_data_matrix) from sklearn.metrics import pairwise_distances user_similarity = pairwise_distances(train_data_matrix, metric='cosine') mean_user_rating = train_data_matrix.mean(axis=1)[:,np.newaxis] rating_diff = (train_data_matrix-mean_user_rating) user_pred = mean_user_rating + user_similarity.dot(rating_diff)/np.array([np.abs(user_similarity).sum(axis=1)]).T df_out = pd.DataFrame(user_pred) df_out.to_csv('output.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (fn_env) # language: python # name: fn_env # --- # %load_ext line_profiler # %load_ext autoreload # %autoreload 2 import sys import gym import pathlib import typing as tp from scipy.special import softmax import pandas as pd sys.path.insert(0, '/mnt/ubuntu_data_hdd/school/masters/uwaterloo/1b/cs885/project/codenames/codenames_ai/src') from default_game import * from codenames import * from codenames_env import * import scann glove.vectors.shape # + # searcher = scann.scann_ops_pybind.builder(glove.vectors, 20, "dot_product").tree( # num_leaves=2000, num_leaves_to_search=300, training_sample_size=250000).score_ah( # 2, anisotropic_quantization_threshold=0.2).reorder(100).build() # - def compute_recall(neighbors, true_neighbors): total = 0 for gt_row, row in zip(true_neighbors, neighbors): total += np.intersect1d(gt_row, row).shape[0] return total / true_neighbors.size glove.vectorize(["hot dog", "vanilla ice cream"]).shape # %timeit guesser.generate_word_suggestions_mean_approx(["seal" , "antarctica", "meal"], 20) # %timeit guesser.generate_word_suggestions_minimax_approx(["seal" , "antarctica", "meal"], 20) # %timeit guesser.generate_word_suggestions_minimax(["seal" , "antarctica", "meal"], 20) # %timeit guesser.generate_word_suggestions_mean(["seal" , "antarctica", "meal"], 20) guesser.generate_word_suggestions_minimax(["seal" , "antarctica", "meal"], 20) # + tags=[] guesser.give_hint_candidates(["seal" , "antarctica"], strategy="approx_mean") # - codenames_hacked_env.generate_candidates([], 3) codenames_hacked_env = CodenamesEnvHack(glove, wordlist) codenames_hacked_env.start_new_game() codenames_hacked_env.render() type(codenames_hacked_env.current_observation()) def bench_me(): action = codenames_hacked_env.action_space.sample() step = codenames_hacked_env.step(action) print(step[1:3]) bench_me() qq = np.arange(12).reshape(3, 4) em = np.arange(100, 244).reshape(12, 12) em[qq] glove.vectors[[np.array([0, 1])]] @ glove.vectors[np.array([1, 2, 3])].T np.min(glove.vectors[[np.array([0, 1])]] @ glove.vectors[np.array([1, 2, 3])].T, axis=1) # %lprun -f GloveGuesser.generate_word_suggestions_mean_approx bench_me() # %lprun -f bench_me bench_me() codenames_hacked_env.render() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from gensim import models import numpy as np import gensim import gensim.downloader from tqdm 
import tqdm import time import math import os # ## Dataset df = pd.read_csv('data/qqp/train_v1.csv', header=0, index_col=0) df df['question'] = (df['question1'] + "|||" + df['question2']) df[df.is_duplicate == 1].shape[0] df[df.is_duplicate == 0].shape[0] # + question_list = df['question'].to_list() question_list = [question.lower().split() for question in question_list] labels = df['is_duplicate'].to_list() # + max_length = -np.inf for sentence in question_list: if len(sentence) > max_length: max_length = len(sentence) for sentence in question_list: if len(sentence) > max_length: max_length = len(sentence) # - max_length # ## Word Embeddings root_dir = 'logs/qqp' os.makedirs(root_dir, exist_ok=True) # + vector_size = 50 window_size = 5 negative_size = 15 wv_model_file = root_dir + '/' + 'wv_bilstm.pth' # + # wv_model = gensim.downloader.load('glove-wiki-gigaword-50') # wv_model = models.Word2Vec(sentences=sentences, vector_size=vector_size, window=window_size, negative=negative_size).wv # wv_model = models.Word2Vec(corpus_file='data/corpus.txt', vector_size=vector_size, window=window_size, negative=negative_size).wv # wv_model.save(wv_model_file) # del wv_model # - def vectorize_sentences(sentences, wv, sentence_size): voc = wv.key_to_index.keys() unk = wv[''] eos = wv[''] lengths = [] for i, sentence in enumerate(sentences): lengths.append(len(sentence)) for i, token in enumerate(sentence): if token in voc: sentence[i] = wv[token] else: sentence[i] = unk while len(sentence) < sentence_size: sentence.append(eos) return sentences, lengths wv = models.KeyedVectors.load(wv_model_file) wv.add_vectors( ['', ''], [np.zeros(wv.vector_size), np.ones(wv.vector_size)] ) # ## BiLSTM import torch from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence if torch.cuda.is_available(): device = torch.device('cuda') else: device = torch.device('cpu') def save_model(model, file_name): torch.save(model.state_dict(), file_name) def load_model(model, file_name): return model.load_state_dict(torch.load(file_name)) # + hidden_size = 50 sentence_size = 275 num_layers = 2 bidirectional = True batch_size = 128 lr = 0.001 num_epochs = 20 eval_rate = 0.1 model_file = 'logs/qqp/lstm_qqp_v2.pth' # - question_vec, question_lengths = vectorize_sentences(question_list, wv, sentence_size) question_lengths = np.array(question_lengths) labels = np.array(labels) # + # vec_labels = np.zeros((labels.size, labels.max()+1)) # vec_labels[np.arange(labels.size), labels] = 1 # + # vectors = vec_sentences.reshape((-1, 50)) # mu = vectors.mean(axis=0) # sigma = np.sqrt(((vectors - mu) ** 2).mean(axis=0)) # vec_sentences = (vec_sentences - mu) / sigma # + eval_index = int(len(question_vec) * eval_rate) question_train = question_vec[eval_index:] question_eval = question_vec[:eval_index] question_len_train = question_lengths[eval_index:] question_len_eval = question_lengths[:eval_index] label_train = labels[eval_index:] label_eval = labels[:eval_index] # + # question1_train = torch.tensor(question1_train, dtype=torch.float) # question1_eval = torch.tensor(question1_eval, dtype=torch.float) # question2_train = torch.tensor(question2_train, dtype=torch.float) # question2_eval = torch.tensor(question2_eval, dtype=torch.float) question_len_train = torch.tensor(question_len_train, dtype=torch.int) question_len_eval = torch.tensor(question_len_eval, dtype=torch.int) label_train = torch.tensor(label_train, dtype=torch.long) label_eval = torch.tensor(label_eval, dtype=torch.long) # - class 
Classifier(nn.Module): def __init__(self): super(Classifier, self).__init__() lstm_dim = hidden_size * 2 * (2 if bidirectional else 1) self.lstm = nn.LSTM(input_size=vector_size, hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, batch_first=True ) self.fcnn_1 = nn.Linear(in_features=lstm_dim, out_features=64) self.fcnn_2 = nn.Linear(in_features=64, out_features=2) def forward(self, question, question_len): question = pack_padded_sequence(question, question_len.cpu(), batch_first=True, enforce_sorted=False) question, _ = self.lstm(question) question, _ = pad_packed_sequence(question, batch_first=True) avg_pool = torch.mean(question, 1) max_pool, _ = torch.max(question, 1) output = torch.cat([avg_pool, max_pool], dim=1) output = self.fcnn_1(output) output = torch.relu(output) output = self.fcnn_2(output) return output # + # weights = torch.log(1/(train_y.sum(dim=0) / train_y.sum())) # weights = weights.detach().to(device) # weights # - num_train_batch = int(len(question_train) / batch_size) num_eval_batch = int(len(question_eval) / batch_size) classifier = Classifier().to(device) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(classifier.parameters(), lr=lr) # + code_folding=[] min_loss = np.inf for i in range(num_epochs): print(f'---> Epoch {i} <---') time.sleep(0.5) classifier.train() loader = tqdm(range(num_train_batch), postfix={'Epoch': i}) train_losses = [] for i_batch in loader: question, question_len, targets = ( question_train[i_batch*batch_size:(i_batch+1)*batch_size], question_len_train[i_batch*batch_size:(i_batch+1)*batch_size], label_train[i_batch*batch_size:(i_batch+1)*batch_size] ) question = torch.tensor(np.array(question), dtype=torch.float, device=device) targets = targets.to(device) optimizer.zero_grad() outputs = classifier(question, question_len) loss = criterion(outputs, targets) loss.backward() optimizer.step() with torch.no_grad(): train_losses.append(loss.item()) loader.set_postfix({ 'Epoch': i, 'Train loss': np.mean(train_losses) }, refresh=True) time.sleep(0.5) with torch.no_grad(): classifier.eval() loader = tqdm(range(num_eval_batch), postfix={'Epoch': i,}, colour='green') eval_losses = [] eval_scores = [] for i_batch in loader: question, question_len, targets = ( question_eval[i_batch*batch_size:(i_batch+1)*batch_size], question_len_eval[i_batch*batch_size:(i_batch+1)*batch_size], label_eval[i_batch*batch_size:(i_batch+1)*batch_size] ) question = torch.tensor(question, dtype=torch.float, device=device) targets = targets.to(device) outputs = classifier(question, question_len) loss = criterion(outputs, targets) score = (outputs.argmax(dim=1) == targets).detach().cpu().numpy() eval_scores.append(score) eval_losses.append(loss.item()) loader.set_postfix({ 'Epoch': i, 'Eval loss': np.mean(eval_losses), 'Eval score': np.concatenate(eval_scores).mean() }, refresh=True) eval_loss = np.mean(eval_losses) if eval_loss <= min_loss: min_loss = eval_loss save_model(classifier, model_file) loader.write('*** save ***') time.sleep(0.5) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Portfolio Allocation and Optimization: Monte Carlo Simulation and Optimization Algorithm # Uptil now we were focused more on analysing and forcasting Stock prices for Individual Stocks. 
A trader or a Hedge fund manager can optimize their Trading strategies based on the forcast Buy/Sell signal. # We will now act as a Portfolio Manager and try to build a Trading Strategy using Sharpe Ratio and Optimization to demonstrate how a Portfolio can be optimized for higher gains. # # ### Sharpe Ratio # # The Sharpe ratio measures the performance of an investment compared to a risk-free asset, after adjusting for its risk. It is defined as the difference between the returns of the investment and the risk-free return, divided by the standard deviation of the investment. # + import pandas as pd import matplotlib.pyplot as plt from pylab import rcParams import numpy as np import seaborn as sns import os from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV from sklearn.feature_selection import RFECV, SelectFromModel, SelectKBest from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from tqdm import tqdm_notebook from sklearn.preprocessing import StandardScaler from sklearn import metrics from keras.optimizers import RMSprop, SGD, Adam from keras.layers import LSTM, Dense, Dropout from keras.models import Sequential # %matplotlib inline os.chdir(r'N:\STOCK ADVISOR BOT') # - mcd = pd.read_csv('MCD.csv') #best features features_selected = ['Date','Close(t)'] mcd = mcd[features_selected] mcd = mcd.iloc[3500:-60, :] mcd = mcd.rename(columns={'Close(t)':'Close'}) mcd = mcd.set_index('Date') mcd from matplotlib import pyplot as plt plt.figure() plt.plot(mcd["Close"]) plt.title('MCD stock price history') plt.ylabel('Price (USD)') plt.xlabel('Days') plt.legend(['Close'], loc='upper left') plt.show() # ### Lets get the Stock prices for 3 more stocks (Coka cola , and Johnsons & Johnsons) # + gs = pd.read_csv('GS.csv') #best features features_selected = ['Date','Close(t)'] gs = gs[features_selected] gs = gs.iloc[3500:-60, :] gs = gs.rename(columns={'Close(t)':'Close'}) gs = gs.set_index('Date') jnj = pd.read_csv('JNJ.csv') #best features features_selected = ['Date','Close(t)'] jnj = jnj[features_selected] jnj = jnj.iloc[3500:-60, :] jnj = jnj.rename(columns={'Close(t)':'Close'}) jnj = jnj.set_index('Date') ko = pd.read_csv('KO.csv') #best features features_selected = ['Date','Close(t)'] ko = ko[features_selected] ko = ko.iloc[3500:-60, :] ko = ko.rename(columns={'Close(t)':'Close'}) ko = ko.set_index('Date') # - for stock_df in (mcd, ko, gs, jnj): stock_df['Normed Return'] = stock_df['Close'] /stock_df.iloc[0]['Close'] # ### Normalized returns jnj.head() ko.head() # ### Portfolio Allocation # Let take the starting Portfolio Allocations consisting of these 4 stocks as - # # - 25% in MCD # - 10% in KO # - 30% in JNJ # - 35% in GS for stock_df, allo in zip((mcd, ko, jnj, gs),[.25,.1,.3,.35]): stock_df['Allocation'] = stock_df['Normed Return']*allo mcd.head() # Assuming the starting portfolio Position of 1 million $, lets look at how the value of our position changes in each stock # value of each position for stock_df in (mcd, ko, jnj, gs): stock_df['Position Value'] = stock_df['Allocation']*1000000 gs.head(10) # You can see how the value in increasing. 
Lets create a single dataframe for all of the 4 stocks # + # create list of all position values all_pos_vals = [mcd['Position Value'], ko['Position Value'], jnj['Position Value'], gs['Position Value']] # concatenate the list of position values portfolio_val = pd.concat(all_pos_vals, axis=1) # set the column names portfolio_val.columns = ['MCD', 'KO', 'JNJ', 'GS'] # add a total portfolio column portfolio_val['Total'] = portfolio_val.sum(axis=1) # - portfolio_val.head(10) # ### we can see day-by-day how our positions and portfolio value is changing. # plot our portfolio import matplotlib.pyplot as plt # %matplotlib inline portfolio_val['Total'].plot(figsize=(10,8)) # ### We have made 150k for the year # # Lets also look at individual contributions of each stock in our portfolio portfolio_val.drop('Total',axis=1).plot(figsize=(10,8)) # ### Let's move towards implementing MCMC and Optimization # Daily Return portfolio_val['Daily Return'] = portfolio_val['Total'].pct_change(1) # + # average daily return portfolio_val['Daily Return'].mean() # standard deviation portfolio_val['Daily Return'].std() # plot histogram of daily returns portfolio_val['Daily Return'].plot(kind='hist', bins=50, figsize=(4,5)) # - # Calculating the total portfolio return # cumulative portfolio return cum_return = 100 * (portfolio_val['Total'][-1]/portfolio_val['Total'][0] - 1) cum_return # ### Calculate Sharpe Ratio # The Sharpe Ratio is the mean (portfolio return - the risk free rate) % standard deviation. sharpe_ratio = portfolio_val['Daily Return'].mean() / portfolio_val['Daily Return'].std() ASR = (252**0.5) * sharpe_ratio ASR # Annualized Share Ratio is 0.79 # ### Optimization using Monte Carlo Simulation # We will check a bunch of random allocations and analyse which one has the best Sharpe Ratio. # #### This process of randomly guessing is known as a Monte Carlo Simulation. # # We will randomly assign weights to our stocks in the portfolio using mcmc and then calculate the average daily return & SD (Standard deviation) of return. Then we can calculate the Sharpe Ratio for many randomly selected allocations. # We will further use Optimzation Algorithm to minimize for this. # # Minimization is a similar concept to optimization - let's say we have a simple equation y = x2 - the idea is we're trying to figure out what value of x will minimize y, in this example 0. # # This idea of a minimizer will allow us to build an optimizer. 
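# As a minimal sketch of that idea (an illustration, not part of the original analysis), SciPy's `minimize` recovers the minimizer of y = x**2; the portfolio optimizer further below applies the same call to the negative Sharpe Ratio.
# +
from scipy.optimize import minimize

# Toy example: the x that minimizes y = x**2 should come out close to 0.
toy_result = minimize(lambda x: x[0] ** 2, x0=[3.0], method='SLSQP')
print(toy_result.x)
# -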
# concatenate them and rename the columns stocks = pd.concat([mcd.Close, ko.Close, gs.Close, jnj.Close], axis=1) stocks.columns = ['MCD', 'KO', 'GS', 'JNJ'] stocks stocks.pct_change(1).mean() stocks.pct_change(1).head() # log daily return log_return = np.log(stocks/stocks.shift(1)) log_return.head() # + print(stocks.columns) weights = np.array(np.random.random(4)) print('Random Weights:') print(weights) print('Rebalance') weights = weights/np.sum(weights) print(weights) # + # expected return print('Expected Portfolio Return') exp_ret = np.sum((log_return.mean()*weights)*252) print(exp_ret) # expected volatility print('Expected Volatility') exp_vol = np.sqrt(np.dot(weights.T,np.dot(log_return.cov()*252, weights))) print(exp_vol) # Sharpe Ratio print('Sharpe Ratio') SR = exp_ret/exp_vol print(SR) # - # Now we will repeat the above process over 1000 times # + num_ports = 8000 all_weights = np.zeros((num_ports, len(stocks.columns))) ret_arr = np.zeros(num_ports) vol_arr = np.zeros(num_ports) sharpe_arr = np.zeros(num_ports) for ind in range(num_ports): # weights weights = np.array(np.random.random(4)) weights = weights/np.sum(weights) # save the weights all_weights[ind,:] = weights # expected return ret_arr[ind] = np.sum((log_return.mean()*weights)*252) # expected volatility vol_arr[ind] = np.sqrt(np.dot(weights.T,np.dot(log_return.cov()*252, weights))) # Sharpe Ratio sharpe_arr[ind] = ret_arr[ind]/vol_arr[ind] # - sharpe_arr.max() # If we then get the location of the maximum Sharpe Ratio and then get the allocation for that index. This shows us the optimal allocation out of the 8000 random allocations: sharpe_arr.argmax() all_weights[7098, :] # These are the best allocations we have received using MCMC. Lets compare how we did from the original Allocation # Original Allocation - # # - 25% in MCD # - 10% in KO # - 30% in JNJ # - 35% in GS # # MCMC Allocation - # # - 54% in MCD # - 42% in KO # - 3% in JNJ # - 0.1% IN gs # # ### Initial Allocation # + import matplotlib.pyplot as plt # Pie chart, where the slices will be ordered and plotted counter-clockwise: labels = 'MCD', 'KO', 'JNJ', 'GS' sizes = [25, 10, 30, 35] explode = (0, 0.1, 0, 0) # only "explode" the 2nd slice (i.e. 'Hogs') fig1, ax1 = plt.subplots() ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.show() # - # ### After MCMC # + import matplotlib.pyplot as plt # Pie chart, where the slices will be ordered and plotted counter-clockwise: labels = 'MCD', 'KO', 'JNJ', 'GS' sizes = [54, 42, 3, 0.1] explode = (0, 0.1, 0, 0) # only "explode" the 2nd slice (i.e. 'Hogs') fig1, ax1 = plt.subplots() ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.show() # - # ### Portfolio Optimization: Optimization Algorithm # Let's now move on from random allocations to a mathematical optimization algorithm. # # All of the heavy lifting for this optimization will be done with SciPy, so we just have to do a few things to set up the optimization function. 
# + from scipy.optimize import minimize def get_ret_vol_sr(weights): weights = np.array(weights) ret = np.sum(log_return.mean() * weights) * 252 vol = np.sqrt(np.dot(weights.T,np.dot(log_return .cov()*252,weights))) sr = ret/vol return np.array([ret,vol,sr]) # minimize negative Sharpe Ratio def neg_sharpe(weights): return get_ret_vol_sr(weights)[2] * -1 # check allocation sums to 1 def check_sum(weights): return np.sum(weights) - 1 # - # create constraint variable cons = ({'type':'eq','fun':check_sum}) # create weight boundaries bounds = ((0,1),(0,1),(0,1),(0,1)) # initial guess init_guess = [0.25, 0.25, 0.25, 0.25] # First we call minimize and pass in what we're trying to minimize - negative Sharpe, our initial guess, we set the minimization method to SLSQP, and we set our bounds and constraints: opt_results = minimize(neg_sharpe, init_guess, method='SLSQP', bounds=bounds, constraints=cons) opt_results opt_results.x get_ret_vol_sr(opt_results.x) # The Optimal results of the Optimization algorithm are 1.2367 # ### Conclusion # # We have implemented Monte carlo Simulation technique to randomly take a sample and simulate it to find the best sharpe ratio for a given portfolio. # Then we moved to a better appraoch using Optimization Algorithm by Minimize function in Spicy library. # # We get around same Sharpe ratio for both which improves on the initial portfolio allocation we started off with. # # We can form more trading strategies based on other methods too like Pairs trading, Butterfly spread, Bull-bear spread etc. But we just wanted to implement a Trading strategy and see how we can optimize a portfolio. # ### License # MIT License # # Copyright (c) 2020 # # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.0 64-bit (''_dataInterpretation'': pipenv)' # language: python # name: python39064bitdatainterpretationpipenv7d89860b0d4449b6a38409f1c866e0d7 # --- # Checking the rater agreement (pct viewing time) for all frames import glob import pandas as pd import os import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np import scipy.misc from tabulate import tabulate from sklearn.metrics import cohen_kappa_score import krippendorff # Load the ratings of the algorithm and the raters # + tags=[] # Load data from algorithmic tracking raterFiles = glob.glob("data/P*.txt") df_algoFiles = (pd.read_csv(f, header = None) for f in raterFiles) df_algo = pd.concat(df_algoFiles, ignore_index=True, axis = 0) # Load data from manual ratings raterFiles = glob.glob("data/data_Rater*.csv") df_raterFiles = (pd.read_csv(f, header = 0) for f in raterFiles) df_rater = pd.concat(df_raterFiles, ignore_index=True) # - # Prepare the data to get it into long format # + # Only take the last judgement of each rater df_rater.drop_duplicates(subset=['Rater', 'Frame', 'Trial'], keep='last', inplace = True) # Rename columns df_algo.columns = ["Trial", "Label", "1", "2", "3", "4", "5", "6", "VisiblePoints", "7", "8" ] # Add frame number column df_algo["Frame"] = df_algo.groupby(['Trial']).cumcount() # Add column for rater df_algo['Rater'] = 'Algorithm' # Set datatypes df_algo["Trial"] = df_algo["Trial"].astype("string") df_algo["Frame"] = df_algo["Frame"].astype("string") df_algo["Label"] = df_algo["Label"].astype("string") df_rater["Frame"] = df_rater["Frame"].astype("string") df_rater["Trial"] = df_rater["Trial"].astype("string") df_rater["Label"] = df_rater["Label"].astype("string") # Rename the labels to match the AOI from the algorithmic approach df_algo['Label'] = df_algo['Label'].str.replace("Nose","Head") df_algo['Label'] = df_algo['Label'].str.replace("Neck","Chest") df_algo['Label'] = df_algo['Label'].str.replace("LElbow","Left arm") df_algo['Label'] = df_algo['Label'].str.replace("RElbow","Right arm") df_algo['Label'] = df_algo['Label'].str.replace("RKnee","Right leg") df_algo['Label'] = df_algo['Label'].str.replace("LKnee","Left leg") df_algo['Label'] = df_algo['Label'].str.replace("MidHip","Pelvis") # Check the unique values # df_algo['Label'].unique() # - # Merge the data into Long format # + # Merge data frames df = pd.concat([df_algo, df_rater], join='outer', keys=['Trial', 'Frame', 'Rater', 'Label']).reset_index(drop=True) # only keep rows where all ratings are available def filterRows(group): if group.shape[0] > 1: return group df = df.groupby(['Trial', 'Frame']).apply(filterRows).reset_index(drop=True) df.ffill(inplace=True) df = df[['Trial', 'Label', 'VisiblePoints', 'Frame', 'Rater']] df.drop(columns=['VisiblePoints'], inplace=True) df.to_csv("results/DataLong.csv", index=False) # - # Descriptive statistics # + df.Trial.value_counts() # print(df.columns) df.groupby(['Trial', 'Frame']).count() df.groupby('Trial').count() # - # What happens when gaze is on "Other". 
For the human raters, take the mode (most chosen answer) # # + tags=[] def mode_of_frame(group): group['Mode'] = group.Label.mode()[0] group['isAnyOther'] = (group.Label == "Other").any() group['Algorithm_Label'] = group.loc[group.Rater == "Algorithm", 'Label'] return group df = df.groupby(['Trial', 'Frame']).apply(mode_of_frame) # - # Plot what happens when gaze is on "Other" # + # Other plot # %matplotlib inline mpl.style.use('default') # The data pct_algorithm = df.loc[(df.Rater == "Algorithm") & (df.Label == "Other"), 'Mode'].value_counts() pct_rater1 = df.loc[df.isAnyOther == 1, 'Algorithm_Label'].value_counts() # Plot settings # Requires probably on linux: sudo apt-get install dvipng texlive-latex-extra texlive-fonts-recommended cm-super mpl.rcParams.update( { 'font.family': 'serif', 'text.usetex': True, } ) # Figure settings for export pts_document_with = 600. # How wide is the page pts_per_inch = 1. / 72.27 figure_width = pts_document_with * pts_per_inch # Plot fig, axes = plt.subplots(nrows=1, ncols=2, figsize = (figure_width,4), sharey = True) axes[0].set_axisbelow(True) axes[1].set_axisbelow(True) pct_algorithm.plot(kind = 'bar', ax = axes[0], color = '#909090') pct_rater1.plot(kind = 'bar', ax = axes[1], color = '#909090') axes[0].grid(linestyle='dashed') axes[1].grid(linestyle='dashed') # fig.suptitle("AOI classification when subsetting on $Other$") axes[0].set_ylabel("Frames [N]") axes[0].set_title("Manual rating when algorithm judges $Other$") axes[1].set_title("Algorithmic rating when rater judge $Other$") # Save plt.savefig("plots/RaterOtherSubset.svg", bbox_inches='tight') # - # Calculate the agreement betweeen each rater and the algorithm # + # Create rating agreements between raters and algorithm, and among raters. Need data in wide format for this df = df.pivot(index=['Trial', 'Frame'], columns='Rater', values='Label') # Columns with comparison values df['Rater1_Algorithm'] = df.Rater1 == df.Algorithm df['Rater2_Algorithm'] = df.Rater2 == df.Algorithm df['Rater3_Algorithm'] = df.Rater3 == df.Algorithm df['Rater1_Rater2'] = df.Rater1 == df.Rater2 df['Rater1_Rater3'] = df.Rater1 == df.Rater3 df['Rater2_Rater3'] = df.Rater2 == df.Rater3 df['ManualRaters'] = ( (df.Rater1 == df.Rater2) & (df.Rater1 == df.Rater3) & (df.Rater2 == df.Rater3)) # Drop Na's because they can't be converted to int df.dropna(inplace=True) # Calculate the rating agreement rater1_algorithm_pct = ((df.Rater1_Algorithm.astype(int).sum() / df.shape[0]) * 100) rater2_algorithm_pct = ((df.Rater2_Algorithm.astype(int).sum() / df.shape[0]) * 100) rater3_algorithm_pct = ((df.Rater3_Algorithm.astype(int).sum() / df.shape[0]) * 100) rater1_rater2_pct = ((df.Rater1_Rater2.astype(int).sum() / df.shape[0]) * 100) rater1_rater3_pct = ((df.Rater1_Rater3.astype(int).sum() / df.shape[0]) * 100) rater2_rater3_pct = ((df.Rater2_Rater3.astype(int).sum() / df.shape[0]) * 100) rater_all_pct = ((df.ManualRaters.astype(int).sum() / df.shape[0]) * 100) # Back to long format df = df.stack().rename('Label').reset_index(['Frame', 'Trial', 'Rater']) # - # Inter rater reliability # + # Create an index variable for the labels [0 .. 
n] df['Label_ID'], _ = pd.factorize(df.Label) algorithm = df.loc[df.Rater == "Algorithm", 'Label_ID'] rater1 = df.loc[df.Rater == "Rater1", 'Label_ID'] rater2 = df.loc[df.Rater == "Rater2", 'Label_ID'] rater3 = df.loc[df.Rater == "Rater3", 'Label_ID'] rater1_rater2_kappa = cohen_kappa_score(rater1, rater2) rater1_rater3_kappa = cohen_kappa_score(rater1, rater3) rater2_rater3_kappa = cohen_kappa_score(rater2, rater3) rater1_algorithm_kappa = cohen_kappa_score(algorithm, rater1) rater2_algorithm_kappa = cohen_kappa_score(algorithm, rater2) rater3_algorithm_kappa = cohen_kappa_score(algorithm, rater3) # - # Inter rater reliability among manual raters and among all raters # + tags=[] # Initiate lists and rater tags manualList = [] manualRaters = ['Rater1', 'Rater2', 'Rater3'] def append_to_list(group, compareRaters, outList): subset = group[group.Rater.isin(compareRaters)] outList.append(subset.Label_ID.to_list()) # Run for both groups df.groupby(['Trial', 'Frame']).apply(append_to_list, manualRaters, manualList) # Run Krippendorffs kappa kappa_manualRaters = krippendorff.alpha(np.array(manualList).T) # - # Visualize and save the results and the data # + # Create table table = [ ["Comparison all AOI", "Percent agreement [%]", "Reliability [Cohens Kappa]"], ["Rater 1 vs. Algorithm", rater1_algorithm_pct, rater1_algorithm_kappa], ["Rater 2 vs. Algorithm", rater2_algorithm_pct, rater2_algorithm_kappa], ["Rater 3 vs. Algorithm", rater3_algorithm_pct, rater3_algorithm_kappa], ["Rater 1 vs. Rater 2", rater1_rater2_pct, rater1_rater2_kappa], ["Rater 1 vs. Rater 3", rater1_rater3_pct, rater1_rater3_kappa], ["Rater 2 vs. Rater 3", rater2_rater3_pct, rater2_rater3_kappa], ["Among manual raters", rater_all_pct , kappa_manualRaters], ] tabulate_table = tabulate( table, headers="firstrow", floatfmt=".2f", tablefmt="github") print(tabulate_table) # Save table with open('results/Reliability_AllAOI.txt', 'w') as f: f.write(tabulate_table) # Save data df.to_csv("results/data_all.csv") # + # %matplotlib inline mpl.style.use('default') # The data pct_algorithm = (df.loc[df.Rater == "Algorithm", 'Label'].value_counts() / df.loc[df.Rater == "Algorithm"].shape[0]) * 100 pct_rater1 = (df.loc[df.Rater == "Rater1", 'Label'].value_counts() / df.loc[df.Rater == "Rater1"].shape[0]) * 100 pct_rater2 = (df.loc[df.Rater == "Rater2", 'Label'].value_counts() / df.loc[df.Rater == "Rater2"].shape[0]) * 100 pct_rater3 = (df.loc[df.Rater == "Rater3", 'Label'].value_counts() / df.loc[df.Rater == "Rater3"].shape[0]) * 100 # Plot settings # Requires probably on linux: sudo apt-get install dvipng texlive-latex-extra texlive-fonts-recommended cm-super mpl.rcParams.update( { 'font.family': 'serif', 'text.usetex': True, } ) # Figure settings for export pts_document_with = 600. # How wide is the page pts_per_inch = 1. 
/ 72.27 figure_width = pts_document_with * pts_per_inch # Plot fig, axes = plt.subplots(nrows=1, ncols=4, figsize = (figure_width,4), sharey = True) axes[0].set_axisbelow(True) axes[1].set_axisbelow(True) axes[2].set_axisbelow(True) axes[3].set_axisbelow(True) pct_algorithm.plot(kind = 'bar', ax = axes[0], color = '#909090') pct_rater1.plot(kind = 'bar', ax = axes[1], color = '#909090') pct_rater2.plot(kind = 'bar', ax = axes[2], color = '#909090') pct_rater3.plot(kind = 'bar', ax = axes[3], color = '#909090') axes[0].grid(linestyle='dashed') axes[1].grid(linestyle='dashed') axes[2].grid(linestyle='dashed') axes[3].grid(linestyle='dashed') # fig.suptitle("AOI classification") axes[0].set_ylabel("Viewing time [pct]") axes[0].set_title("Algorithmic Labeling") axes[1].set_title("Rater 1") axes[2].set_title("Rater 2") axes[3].set_title("Rater 3") # Save plt.savefig("plots/RaterComparison_All.svg", bbox_inches='tight') # + # How many percent were classified as Other by either party? rater_other = (pct_rater1['Other'] + pct_rater2['Other'] + pct_rater3['Other']) / 3 rater_other_std = np.std([pct_rater1['Other'], pct_rater2['Other'], pct_rater3['Other']]) print(f"Algorithm judged {pct_algorithm['Other']:.2f}% as Other with std {rater_other_std:2f}") print(f"Rater judged {rater_other:.1f}% as Other") table = [ ["Comparison semantic segmentation", "Frames with gaze located on human shape [%]"], ["Algorithm", 100-pct_algorithm['Other']], ["Rater 1", 100-pct_rater1['Other']], ["Rater 2", 100-pct_rater2['Other']], ["Rater 3", 100-pct_rater3['Other']] ] tabulate_table = tabulate( table, headers="firstrow", floatfmt=".2f", tablefmt="latex") print(tabulate_table) df.to_csv("results/data_HumanClassified.csv") # + # F-1 scores # Create rating agreements between raters and algorithm, and among raters. 
Need data in wide format for this df = df.pivot(index=['Trial', 'Frame'], columns='Rater', values='Label') # Columns with comparison values df['Rater1_Algorithm'] = df.Rater1 == df.Algorithm df['Rater2_Algorithm'] = df.Rater2 == df.Algorithm df['Rater3_Algorithm'] = df.Rater3 == df.Algorithm df['Rater1_Rater2'] = df.Rater1 == df.Rater2 df['Rater1_Rater3'] = df.Rater1 == df.Rater3 df['Rater2_Rater3'] = df.Rater2 == df.Rater3 df['ManualRaters'] = ( (df.Rater1 == df.Rater2) & (df.Rater1 == df.Rater3) & (df.Rater2 == df.Rater3)) print(f"Unanimous agreement in {df.ManualRaters.sum()} frames") # + # Take data where unanimous agreement between raters df_unanimous = df.loc[df['ManualRaters'] == True, :] print(df_unanimous.Rater1) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.1 64-bit (''pyUdemy'': conda)' # name: python38164bitpyudemyconda8c705f49a8e643418ce4b1ca64c8ab63 # --- def print_memory_address(var): print(hex(id(var))) my_test_value = 10 # int(10) print_memory_address(my_test_value) # + my_int_value1 = 42 my_int_value2 = 42 my_int_value3 = 42 print_memory_address(my_int_value1) print_memory_address(my_int_value2) print_memory_address(my_int_value3) # + my_float_value1 = 42.0 my_float_value2 = 42.0 my_float_value3 = 42.0 print_memory_address(my_float_value1) print_memory_address(my_float_value2) print_memory_address(my_float_value3) # + my_bool1 = True my_bool2 = True print_memory_address(my_bool1) print_memory_address(my_bool2) # + my_bool1 = False my_bool2 = False print_memory_address(my_bool1) print_memory_address(my_bool2) # + my_none1 = None my_none2 = None print_memory_address(my_none1) print_memory_address(my_none2) # + my_list1 = [1, 2, 3] my_list2 = my_list1 print(my_list1) print(my_list2) print_memory_address(my_list1) print_memory_address(my_list2) my_list1 = [-1, -2, -3] print_memory_address(my_list1) print_memory_address(my_list2) # + a = 10 b = 10 print_memory_address(a) print_memory_address(b) print() b = 8 print_memory_address(a) print_memory_address(b) print() b = 10 print_memory_address(a) print_memory_address(b) print() a = 8 b = 9 print_memory_address(a) print_memory_address(b) print() a = 10 b = 10 print_memory_address(a) print_memory_address(b) print() # + import ctypes my_list1 = [1, 2, 3] my_list2 = my_list1 my_list3 = my_list1 print(ctypes.c_long.from_address(id(my_list1))) # reads the object's reference count (CPython detail: id() is the object's address, whose first field is ob_refcnt) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Dynamical systems # # A (discrete time) dynamical system describes the evolution of the state of a system and # the observations that can be obtained from the state. The general form is # # \begin{eqnarray} # x_0 & \sim & \pi(x_0) \\ # x_t & = & f(x_{t-1}, \epsilon_t) \\ # y_t & = & g(x_{t}, \nu_t) # \end{eqnarray} # # Here, $f$ and $g$ are transition and observation functions. The variables # $\epsilon_t$ and $\nu_t$ are assumed to be unknown random noise components with a known distribution. The initial state, $x_0$, can either be known exactly, or at least an initial state distribution density $\pi$ is known. The model describes the relation between observations $y_t$ and states $x_t$.
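# As a small illustration of the general form above, the cell below sketches how such a system can
# be simulated once $\pi$, $f$ and $g$ are given as Python callables. This is an added sketch for
# clarity only: the helper name `simulate_dynamical_system` and the AR(1)-style example parameters
# are illustrative and are not used elsewhere in this notebook.

# +
import numpy as np


def simulate_dynamical_system(pi, f, g, T):
    """Sample x_0 ~ pi(), then iterate x_t = f(x_{t-1}) and y_t = g(x_t).

    f and g are expected to draw their own noise terms (epsilon_t, nu_t) internally.
    """
    x = pi()
    xs = []
    ys = []
    for _ in range(T):
        x = f(x)          # state transition x_t = f(x_{t-1}, epsilon_t)
        y = g(x)          # observation y_t = g(x_t, nu_t)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)


# Example: a scalar AR(1) state observed in Gaussian noise (illustrative parameters)
xs, ys = simulate_dynamical_system(
    pi=lambda: 0.0,
    f=lambda x: 0.9 * x + np.random.normal(0, 0.1),
    g=lambda x: x + np.random.normal(0, 0.5),
    T=100)
# -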
# ## Frequency modulated sinusoidal signal # # \begin{eqnarray} # \epsilon_t & \sim & \mathcal{N}(0, P) \\ # x_{1,t} & = & \mu + a x_{1,t-1} + \epsilon_t \\ # x_{2,t} & = & x_{2,t-1} + x_{1,t-1} \\ # \nu_t & \sim & \mathcal{N}(0, R) \\ # y_t & = & \cos(2\pi x_{2,t}) + \nu_t # \end{eqnarray} # + # %matplotlib inline import numpy as np import matplotlib.pylab as plt N = 100 T = 100 a = 0.9 xm = 0.9 sP = np.sqrt(0.001) sR = np.sqrt(0.01) x1 = np.zeros(N) x2 = np.zeros(N) y = np.zeros(N) for i in range(N): if i==0: x1[0] = xm x2[0] = 0 else: x1[i] = xm + a*x1[i-1] + np.random.normal(0, sP) x2[i] = x2[i-1] + x1[i-1] y[i] = np.cos(2*np.pi*x2[i]/T) + np.random.normal(0, sR) plt.figure() plt.plot(x1) plt.figure() plt.plot(y) plt.show() # - # ## Stochastic Kinetic Model # # The Stochastic Kinetic Model is a general modelling technique to describe the interactions of a set of objects such as molecules, individuals or items. This class of models is particularly useful in modeling queuing systems, production plants, chemical, ecological, biological systems or biological cell cycles at a sufficiently detailed level. It is a good example of a dynamical model that displays quite interesting and complex behaviour. # # The model is best motivated first with a specific example, known as the Lotka-Volterra predator-prey model: # # ### A Predator-Prey Model (Lotka-Volterra) # # Consider a population of two species, named smiley 😊 and zombie 👹. Our dynamical model will describe the evolution of the number of individuals in this entire population. We define $3$ different event types: # # #### Event 1: Reproduction # # The smiley, denoted by $X_1$, reproduces by division so one smiley becomes two smileys after a reproduction event. # #

    # 😊 $\rightarrow$ 🙂 😊 #

    # # In mathematical notation, we denote this event as # \begin{eqnarray} # X_1 & \xrightarrow{k_1} 2 X_1 # \end{eqnarray} # # Here, $k_1$ denotes the rate constant, the rate at which a _single_ single smiley is reproducing according to the exponential distribution. When there are $x_1$ smileys, each reproducing with rate $k_1$, the rate at which a reproduction event occurs is simply # \begin{eqnarray} # h_1(x, k_1) & = & k_1 x_1 # \end{eqnarray} # The rate $h_1$ is the rate of a reproduction event, increasing proportionally to the number of smileys. # # #### Event 2: Consumption # # The predatory species, the zombies, denoted as $X_2$, transform the smileys into zombies. So one zombie 'consumes' one smiley to create a new zombie. # #

    # 😥 👹 $\rightarrow$ 👹 👹 #

    # # The consumption event is denoted as # \begin{eqnarray} # X_1 + X_2 & \xrightarrow{k_2} 2 X_2 # \end{eqnarray} # # Here, $k_2$ denotes the rate constant, the rate at which a zombie and a smiley meet, and the zombie transforms the smiley into a new zombie. When there are $x_1$ smileys and $x_2$ zombies, there are in total $x_1 x_2$ possible meeting events, With each meeting event occurring at rate $k_2$, the rate at which a consumption event occurs is simply # \begin{eqnarray} # h_2(x, k_2) & = & k_2 x_1 x_2 # \end{eqnarray} # The rate $h_2$ is the rate of a consumption event. There are more consumptions if there are more zombies or smileys. # # #### Event 3: Death # Finally, in this story, unlike Hollywood blockbusters, the zombies are mortal and they decease after a certain random time. # #

    # 👹 $\rightarrow $ ☠️ #

    # # This is denoted as $X_2$ disappearing from the scene. # \begin{eqnarray} # X_2 & \xrightarrow{k_3} \emptyset # \end{eqnarray} # A zombie death event occurs, by a similar argument as reproduction, at rate # \begin{eqnarray} # h_3(x, k_3) & = & k_3 x_2 # \end{eqnarray} # # #### Model # All equations can be written # # \begin{eqnarray} # X_1 & \xrightarrow{k_1} 2 X_1 & \hspace{3cm}\text{Reproduction}\\ # X_1 + X_2 & \xrightarrow{k_2} 2 X_2 & \hspace{3cm}\text{Consumption} \\ # X_2 & \xrightarrow{k_3} \emptyset & \hspace{3cm} \text{Death} # \end{eqnarray} # # More compactly, in matrix form we can write: # # \begin{eqnarray} # \left( # \begin{array}{cc} # 1 & 0 \\ # 1 & 1 \\ # 0 & 1 # \end{array} # \right) # \left( # \begin{array}{cc} # X_1 \\ # X_2 # \end{array} # \right) \rightarrow # \left( # \begin{array}{cc} # 2 & 0 \\ # 0 & 2 \\ # 0 & 0 # \end{array} # \right) # \left( # \begin{array}{cc} # X_1 \\ # X_2 # \end{array} # \right) # \end{eqnarray} # # The rate constants $k_1, k_2$ and $k_3$ denote the rate at which a _single_ event is occurring according to the exponential distribution. # # All objects of type $X_1$ trigger the next event # \begin{eqnarray} # h_1(x, k_1) & = & k_1 x_1 \\ # h_2(x, k_2) & = & k_2 x_1 x_2 \\ # h_3(x, k_2) & = & k_3 x_2 # \end{eqnarray} # # # The dynamical model is conditioned on the type of the next event, denoted by $r(j)$ # # \begin{eqnarray} # Z(j) & = & \sum_i h_i(x(j-1), k_i) \\ # \pi_i(j) & = & \frac{h_i(x(j-1), k_i) }{Z(j)} \\ # r(j) & \sim & \mathcal{C}(r; \pi(j)) \\ # \Delta(j) & \sim & \mathcal{E}(1/Z(j)) \\ # t(j) & = & t(j-1) + \Delta(j) \\ # x(j) & = & x(j-1) + S(r(j)) # \end{eqnarray} # # # # + slideshow={"slide_type": "slide"} # %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0],[1,1],[0,1]]) B = np.array([[2,0],[0,2],[0,0]]) S = B-A N = S.shape[1] M = S.shape[0] STEPS = 50000 k = np.array([0.8,0.005, 0.3]) X = np.zeros((N,STEPS)) x = np.array([100,100]) T = np.zeros(STEPS) t = 0 for i in range(STEPS-1): rho = k*np.array([x[0], x[0]*x[1], x[1]]) srho = np.sum(rho) if srho == 0: break idx = np.random.choice(M, p=rho/srho) dt = np.random.exponential(scale=1./srho) x = x + S[idx,:] t = t + dt X[:, i+1] = x T[i+1] = t plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b') plt.plot(T,X[1,:], '.r') plt.legend([u'Smiley',u'Zombie']) plt.show() # - # ## State Space Representation plt.figure(figsize=(10,5)) plt.plot(X[0,:],X[1,:], '.') plt.xlabel('# of Smileys') plt.ylabel('# of Zombies') plt.axis('square') plt.show() # In this model, the state space can be visualized as a 2-D lattice of nonnegative integers, where each point $(x_1, x_2)$ denotes the number of smileys versus the zombies. # The model simulates a Markov chain on a directed graph where possible transitions are shown as edges where the edge color shade is proportional to the transition probability (darker means higher probability). # # The edges are directed, the arrow tips are not shown. 
There are three types of edges, each corresponding to one event type: # # * $\rightarrow$ Birth # * $\nwarrow$ Consumption # * $\downarrow$ Death # # # # + # %matplotlib inline import networkx as nx import numpy as np import matplotlib.pylab as plt from itertools import product # Maximum number of smileys or zombies N = 20 #A = np.array([[1,0],[1,1],[0,1]]) #B = np.array([[2,0],[0,2],[0,0]]) #S = B-A k = np.array([0.6,0.05, 0.3]) G = nx.DiGraph() pos = [u for u in product(range(N),range(N))] idx = [u[0]*N+u[1] for u in pos] G.add_nodes_from(idx) edge_colors = [] edges = [] for y,x in product(range(N),range(N)): source = (x,y) rho = k*np.array([source[0], source[0]*source[1], source[1]]) srho = np.sum(rho) if srho==0: srho = 1. if x < N-1: # Birth target = (x+1,y) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[0]/srho) if x > 0 and y < N-1: # Consumption target = (x-1,y+1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[1]/srho) if y>0: # Death target = (x,y-1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[2]/srho) G.add_edges_from(edges) col_dict = {u: c for u,c in zip(edges, edge_colors)} cols = [col_dict[u] for u in G.edges() ] plt.figure(figsize=(9,9)) nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.7, edge_color=cols, edge_cmap=plt.cm.gray_r ) plt.xlabel('# of smileys') plt.ylabel('# of zombies') #plt.gca().set_visible('on') plt.show() # - # ## Generic code to simulate an SKM # + def simulate_skm(A, B, k, x0, STEPS=1000): S = B-A N = S.shape[1] M = S.shape[0] X = np.zeros((N,STEPS)) x = x0 T = np.zeros(STEPS) t = 0 X[:,0] = x for i in range(STEPS-1): # rho = k*np.array([x[0]*x[2], x[0], x[0]*x[1], x[1]]) rho = [k[j]*np.prod(x**A[j,:]) for j in range(M)] srho = np.sum(rho) if srho == 0: break idx = np.random.choice(M, p=rho/srho) dt = np.random.exponential(scale=1./srho) x = x + S[idx,:] t = t + dt X[:, i+1] = x T[i+1] = t return X,T # - # ## A simple ecosystem # # Suppose there are $x_1$ rabbits and $x_2$ clovers. Rabbits eat clovers with a rate of $k_1$ to reproduce. Similarly, rabbits die with rate $k_2$ and a clover grows. # # Prey (Clover): 🍀 # Predator (Rabbit): 🐰 # #

    # 🐰🍀 $\rightarrow$ 🐰🐰 #

    #

    # 🐰 $\rightarrow$ 🍀 #

    # # In this system, clearly the total number of objects $x_1+x_2 = N$ is constant. # # ### Probabilistic question # What is the distribution of the number of rabbits at time $t$ # # ### Statistical questions # - What are the parameters $k_1$ and $k_2$ of the system given observations of rabbit counts at specific times $t_1, t_2, \dots, t_K$ # # - Given rabbit counts at time $t$, predict counts at time $t + \Delta$ # # # + # #%matplotlib nbagg # %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,1],[1,0]]) B = np.array([[2,0],[0,1]]) k = np.array([0.02,0.3]) x0 = np.array([10,40]) X,T = simulate_skm(A,B,k,x0,STEPS=10000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b',ms=2) plt.plot(T,X[1,:], '.g',ms=2) plt.legend([u'Rabbit', u'Clover']) plt.show() # - # ## A simple ecological network # # Food (Clover): 🍀 # # Prey (Rabbit): 🐰 # # Predator (Wolf): 🐺 # #

    # 🐰🍀 $\rightarrow$ 🐰🐰 #

    #

    # 🐰 $\rightarrow$ 🍀 #

    #

    # 🐰🐺 $\rightarrow$ 🐺🐺 #

    #

    # 🐺 $\rightarrow$ 🍀 #

    # # # The number of objects in this system are constant # + # #%matplotlib nbagg # %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0,1],[1,0,0],[1,1,0],[0,1,0]]) B = np.array([[2,0,0],[0,0,1],[0,2,0],[0,0,1]]) #k = np.array([0.02,0.09, 0.001, 0.3]) #x0 = np.array([1000,1000,10000]) k = np.array([0.02,0.19, 0.001, 2.8]) x0 = np.array([1000,1,10000]) X,T = simulate_skm(A,B,k,x0,STEPS=50000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.y',ms=2) plt.plot(T,X[1,:], '.r',ms=2) plt.plot(T,X[2,:], '.g',ms=2) plt.legend([u'Rabbit',u'Wolf',u'Clover']) plt.show() # + sm = int(sum(X[:,0]))+1 Hist = np.zeros((sm,sm)) STEPS = X.shape[1] for i in range(STEPS): Hist[int(X[1,i]),int(X[0,i])] = Hist[int(X[1,i]),int(X[0,i])] + 1 plt.figure(figsize=(10,5)) #plt.plot(X[0,:],X[1,:], '.',ms=1) plt.imshow(Hist,interpolation='nearest') plt.xlabel('# of Rabbits') plt.ylabel('# of Wolfs') plt.gca().invert_yaxis() #plt.axis('square') plt.show() # + # %matplotlib inline import networkx as nx import numpy as np import matplotlib.pylab as plt # Maximum number of rabbits or wolves N = 30 k = np.array([0.005,0.06, 0.001, 0.1]) G = nx.DiGraph() pos = [u for u in product(range(N),range(N))] idx = [u[0]*N+u[1] for u in pos] G.add_nodes_from(idx) edge_colors = [] edges = [] for y,x in product(range(N),range(N)): clover = N - (x+y) source = (x,y) rho = k*np.array([source[0]*clover, source[0], source[0]*source[1], source[1]]) srho = np.sum(rho) if srho==0: srho = 1. if x0: # Consumption target = (x-1,y+1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[2]/srho) # if y>0: # Wolf Death # target = (x,y-1) # edges.append((source[0]*N+source[1], target[0]*N+target[1])) # edge_colors.append(rho[3]/srho) # if x>0: # Rabbit Death # target = (x-1,y) # edges.append((source[0]*N+source[1], target[0]*N+target[1])) # edge_colors.append(rho[1]/srho) G.add_edges_from(edges) col_dict = {u: c for u,c in zip(edges, edge_colors)} cols = [col_dict[u] for u in G.edges() ] plt.figure(figsize=(5,5)) nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.4, edge_color=cols, edge_cmap=plt.cm.gray_r ) plt.xlabel('# of smileys') plt.ylabel('# of zombies') #plt.gca().set_visible('on') plt.show() # - # ## Alternative model # # Constant food supply for the prey. # #

    # 🐰🍀 $\rightarrow$ 🐰🐰🍀 #

    #

    # 🐰🐺 $\rightarrow$ 🐺 #

    #

    # 🐺 $\rightarrow$ 🐺🐺 #

    #

    # 🐺 $\rightarrow$ ☠️ #

    # # This model is flawed as it allows predators to reproduce even when no prey is there. # # + # #%matplotlib nbagg # %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0,1],[1,1,0],[0,1,0],[0,1,0]]) B = np.array([[2,0,1],[0,1,0],[0,2,0],[0,0,0]]) k = np.array([4.0,0.038, 0.02, 0.01]) x0 = np.array([50,100,1]) X,T = simulate_skm(A,B,k,x0,STEPS=10000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b',ms=2) plt.plot(T,X[1,:], '.r',ms=2) plt.plot(T,X[2,:], '.g',ms=2) plt.legend([u'Rabbit',u'Wolf',u'Clover']) plt.show() # - # # # 🙀 : Hungry cat # # 😻 : Happy cat # #

    # 🐭🧀 $\rightarrow$ 🐭🐭🧀 #

    #

    # 🐭🙀 $\rightarrow$ 😻 #

    #

    # 😻 $\rightarrow$ 🙀 #

    #

    # 😻 $\rightarrow$ 🙀🙀 #

    #

    # 😻 $\rightarrow$ ☠️ #

    #

    # 🙀 $\rightarrow$ ☠️ #

    # # # # + # #%matplotlib nbagg # %matplotlib inline import numpy as np import matplotlib.pylab as plt death_rate = 1.8 A = np.array([[1,0,0,1],[1,1,0,0],[0,0,1,0],[0,0,1,0],[0,0,1,0],[0,1,0,0]]) B = np.array([[2,0,0,1],[0,0,1,0],[0,1,0,0],[0,2,0,0],[0,0,0,0],[0,0,0,0]]) k = np.array([9.7, 9.5, 30, 3.5, death_rate, death_rate]) x0 = np.array([150,20,10,1]) X,T = simulate_skm(A,B,k,x0,STEPS=5000) plt.figure(figsize=(10,5)) plt.plot(X[0,:], '.b',ms=2) plt.plot(X[1,:], 'or',ms=2) plt.plot(X[2,:], '.r',ms=3) plt.legend([u'Mouse',u'Hungry Cat',u'Happy Cat']) plt.show() # - # ## From Diaconis and Freedman # # A random walk on the unit interval. Start with $x$, choose one of the two intervals $[0,x]$ and $[x,1]$ with equal probability $0.5$, then choose a new $x$ uniformly on the interval. # + # %matplotlib inline import numpy as np # - # ## A random switching system # # \begin{eqnarray} # A(0) & = & \left(\begin{array}{cc} 0.444 & -0.3733 \\ 0.06 & 0.6000 \end{array}\right) \\ # B(0) & = & \left(\begin{array}{c} 0.3533 \\ 0 \end{array}\right) \\ # A(1) & = & \left(\begin{array}{cc} -0.8 & -0.1867 \\ 0.1371 & 0.8 \end{array}\right) \\ # B(1) & = & \left(\begin{array}{c} 1.1 \\ 0.1 \end{array}\right) \\ # w & = & 0.2993 # \end{eqnarray} # # # \begin{eqnarray} # c_t & \sim & \mathcal{BE}(c; w) \\ # x_t & = & A(c_t) x_{t-1} + B(c_t) # \end{eqnarray} # # + #Diaconis and # %matplotlib inline import numpy as np import matplotlib.pylab as plt T = 3000; x = np.matrix(np.zeros((2,T))); x[:,0] = np.matrix('[0.3533; 0]'); A = [np.matrix('[0.444 -0.3733;0.06 0.6000]'), np.matrix('[-0.8 -0.1867;0.1371 0.8]')]; B = [np.matrix('[0.3533;0]'), np.matrix('[1.1;0.1]')]; w = 0.27; for i in range(T-1): if np.random.rand()Open In Colab # + id="0hE-Nq8-Vcwp" import pandas as pd df = pd.read_csv("data.csv") # + colab={"base_uri": "https://localhost:8080/"} id="d9PAPUgZW_ia" outputId="7c894b92-c3d0-4bb0-f520-09bd477ab4d6" df.dtypes # + id="QmRhyh0fW_1H" X = df.iloc[:,:-1].values Y = df.iloc[:,-1].values # + colab={"base_uri": "https://localhost:8080/"} id="RadpdZw9W_7a" outputId="cf033cd4-dede-4239-f567-fbd54971d40a" X # + colab={"base_uri": "https://localhost:8080/"} id="7zUiIw4YXACI" outputId="58d0e45c-6af0-43bd-b9f3-438f7e23237f" Y # + [markdown] id="wyjfRKjcbv3k" # **Encoding categorical data** # + [markdown] id="Oapx8KPPbgxW" # **Encoding the independent data** # # # # + id="FSra9LrpXAIT" import numpy as np from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values=np.nan , strategy = "mean") imputer.fit(X[:,1:3]) X[:,1:3] = imputer.transform(X[:,1:3]) # + colab={"base_uri": "https://localhost:8080/"} id="06uzi5BzmUu7" outputId="70721b1d-3bee-4d94-c9b9-dff22fee7f78" X # + [markdown] id="AEUxc3DYa6wh" # **Encoding the dependent variable** # # + id="NmcHiMRlXANv" from sklearn.preprocessing import LabelEncoder le = LabelEncoder() Y = le.fit_transform(Y) # + colab={"base_uri": "https://localhost:8080/"} id="0zl0LqpaXAQ7" outputId="d26746c9-463e-4ffd-caf5-428780628c38" Y # + id="GthKctsemrvQ" from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers= [("encoder",OneHotEncoder(),[0])], remainder = "passthrough") X = np.array(ct.fit_transform(X)) # + colab={"base_uri": "https://localhost:8080/"} id="6Y0NNdzDn-gK" outputId="bd4228a3-76d2-4a44-e6f0-9d8293c55c04" X # + id="gx1mX4PnoFBB" from sklearn.preprocessing import LabelEncoder le = LabelEncoder() Y = le.fit_transform(Y) # + colab={"base_uri": "https://localhost:8080/"} 
id="Iw814snOoHGq" outputId="415980b8-86a2-442b-d3ed-56f0d418c882" Y # + [markdown] id="JH3rpmsPcItS" # **Splitting the dataset into Training set and Test set** # + id="jUDdTx1qcTSy" from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size = 0.2, random_state = 1) # + colab={"base_uri": "https://localhost:8080/"} id="vI6hb4W2mEmi" outputId="afb4fce3-e595-4c57-94a0-4219aef694cb" X_train # + colab={"base_uri": "https://localhost:8080/"} id="EJXm6u6gmEwo" outputId="012c7778-01d5-4e5e-f1aa-4dc45703987c" X_test # + colab={"base_uri": "https://localhost:8080/"} id="_vmoJhIcmE14" outputId="79e39f4a-e2a0-4ca7-8fa8-40fec6f65111" Y_train # + colab={"base_uri": "https://localhost:8080/"} id="PMAlalahmFUx" outputId="7517fcc0-6f63-49e6-8ad0-9c8fe932d686" Y_test # + [markdown] id="9NWGwzQgqVCy" # **Features Scaling** # + id="uYlMsgtfqbM2" from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train[:,3:] = sc.fit_transform(X_train[:,3:]) X_test = sc.transform(X_test[:,3:]) # + colab={"base_uri": "https://localhost:8080/"} id="5rPMmfPSqbUt" outputId="2b676412-1a37-4015-f90c-7215271682f3" X_train # + colab={"base_uri": "https://localhost:8080/"} id="VxlNBAVvqbc9" outputId="ffccd826-67eb-407c-c12e-43a7f73edd41" X_test # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Observations and Insights f = open("observations/obs.txt", "r") print(f.read()) # ## Dependencies and starter code # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import scipy.stats as st import numpy as np # Study data files mouse_metadata = "data/Mouse_metadata.csv" study_results = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata) study_results = pd.read_csv(study_results) # Combine the data into a single dataset #merge_df = pd.merge(bitcoin_df,dash_df,on="Date") mouse_data = pd.merge(mouse_metadata,study_results, on = "Mouse ID") mouse_data.head(10) # - # ## Summary statistics # + # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen #groupby regimen = mouse_data.groupby('Drug Regimen') #constructing the summary table #mean regimen_df = pd.DataFrame(regimen['Tumor Volume (mm3)'].mean()) regimen_df = regimen_df.rename(columns = {'Tumor Volume (mm3)':'Mean Tumor Volume'}) #median regimen_df['Median Tumor Volume'] = regimen['Tumor Volume (mm3)'].median() #variance regimen_df['Variance Tumor Volume'] = regimen['Tumor Volume (mm3)'].var() #standard deviation regimen_df['Standard Deviation Tumor Volume'] = regimen['Tumor Volume (mm3)'].std() #SEM regimen_df['SEM Volume'] = regimen['Tumor Volume (mm3)'].sem() #sort them by least to greatest mean tumor volume regimen_df = regimen_df.sort_values("Mean Tumor Volume") regimen_df # - # ## Bar plots # + # Generate a bar plot showing number of data points for each treatment regimen using pandas #originally did this but can actually just to a value_counts. comes out sorted descending too #data_points_df = pd.DataFrame(regimen['Drug Regimen'].count()) #data_points_df = data_points_df.sort_values("Data Points",ascending=False) data_points_df = pd.DataFrame(mouse_data['Drug Regimen'].value_counts()) data_points_df = data_points_df.rename(columns = {'Drug Regimen':'Data Points'}) #plot! 
myplot = data_points_df.plot(kind = 'bar',title='Drug Regimen Data Points', legend = False, rot = 80) myplot.set_ylabel("Data Points") #data_points_df # + # Generate a bar plot showing number of data points for each treatment regimen using pyplot #reset index here so we can call the drug regimen column data_points_df = data_points_df.reset_index() #data_points_df #set x and y axis from columns x_axis= data_points_df['index'] y_axis = data_points_df['Data Points'] #construct plot #I'll just change the color here plt.bar(x_axis, y_axis, width = .6, color = 'gray') #rotate the x axis labels plt.xticks(rotation = 80) plt.ylim(0,max(data_points_df['Data Points'])+10) plt.xlabel("Drug Regimen") plt.ylabel('Data Points') plt.title("Drug Regimen Data Points") plt.show() # - # ## Pie plots # + # Generate a pie plot showing the distribution of female versus male mice using pandas #first we drop the mouse id duplicates, since all we care about is each unique mouse and its gender mice_gender = mouse_data.drop_duplicates(subset='Mouse ID') #df with gender and counts of each gender mice_gender_df = pd.DataFrame(mice_gender.groupby('Sex').count()['Mouse ID']) mice_gender_df =mice_gender_df.rename(columns = {'Mouse ID':'Counts'}) #construst plot mice_gender_df.plot(kind ='pie' ,y = 'Counts', figsize = (5.5,5.5),autopct = "%.1f%%", colors = ['pink', 'blue'], legend = False, title = "Mice Gender Distribution",startangle=140) #mice_gender_df # + # Generate a pie plot showing the distribution of female versus male mice using pyplot #similar to before we need to reset the index mice_gender_df = mice_gender_df.reset_index() #use these columns for parameters for the pie chart counts = mice_gender_df['Counts'] gender = mice_gender_df['Sex'] #found a bunch of colors here #https://matplotlib.org/3.1.0/gallery/color/named_colors.html colors = ["magenta","deepskyblue"] #contruct plot plt.figure(figsize = (5,5)) plt.pie(counts, colors=colors,labels=gender, autopct="%.1f%%", startangle=140) plt.title("Mice Gender Distribution") plt.axis("equal") plt.show() # - # ## Quartiles, outliers and boxplots # + #Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. #treatments given in directions top_four_treatments=['Capomulin', 'Ramicane', 'Infubinol','Ceftamin'] #create df with the final tumor volumes for each mouse treated with the top four most promising treatements #we can do a drop duplicate and keep the last row becuase that is the final timepoint for each mouse tumor_df1 = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='last') tumor_df1 = tumor_df1[['Mouse ID','Drug Regimen','Tumor Volume (mm3)']] tumor_df1 = tumor_df1.rename(columns = {'Tumor Volume (mm3)':'Final Tumor Volume'}) #filter the df down drug regimen column with only the top four treatments tumor_df1 = tumor_df1.loc[(tumor_df1['Drug Regimen'].isin(top_four_treatments)),:] tumor_df1=tumor_df1.reset_index(drop=True) #tumor_df1 # + #Calculate the IQR and quantitatively determine if there are any potential outliers. #Going to create a df with the IQR analysis, doesn't say we have to do this but seems like a good way #data frame will include quartiles, iqr, bounds, and outlier values across the four treatments #creating empty lists that will be used to create the IQR analysis df #quartiles, outliers lowerq=[] median=[] upperq=[] outliercount=[] outliervalue=[] #use this loop to seperate each the four drug treatments and calculate quantiles. 
#will append these values to the empty lists for drug in top_four_treatments: drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == drug),:] quartiles = drug_df['Final Tumor Volume'].quantile([.25,.5,.75]) lowerq.append(round(quartiles[0.25],2)) median.append(round(quartiles[0.5],2)) upperq.append(round(quartiles[0.75],2)) #use list comprehensions to create lists with calculated iqr and bounds from quartiles iqr=[upperq[i]-lowerq[i] for i in range(len(lowerq))] lower_bound=[lowerq[i]-1.5*iqr[i] for i in range(len(lowerq))] upper_bound=[upperq[i]+1.5*iqr[i] for i in range(len(upperq))] #this loop will actually create a seperate df with only outliers for each seperate df #then it will append a list of outliers for each seperate treatment to the list 'outliervalue' for i in range(len(top_four_treatments)): drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == top_four_treatments[i]),:] #filter with bounds outlier_df= drug_df.loc[((drug_df['Final Tumor Volume'] < lower_bound[i]) | (drug_df['Final Tumor Volume'] > upper_bound[i])),:] #since some of the regimen don't have any outliers, it creates an empty df, we can use the syntax of the if statement if outlier_df.empty == True: outliervalue.append('n/a') else: outliervalue.append(np.round(outlier_df['Final Tumor Volume'].values[:],2)) #all of our lists should be filled now and we can create our df! iqr_df=pd.DataFrame({"Drug Regimen":top_four_treatments, "Lower Quartile": lowerq, "Median":median, "Upper Quartile": upperq, "IQR":iqr, "Lower Bound":lower_bound, "Upper Bound":upper_bound, "Outlier Values":outliervalue}) #set the index to drug regimen iqr_df=iqr_df.set_index('Drug Regimen') iqr_df # + # Generate a box plot of the final tumor volume of each mouse across four regimens of interest final_volume_values=[] #will append a list of final tumor volumes for each seperate treatment into our list for drug in top_four_treatments: drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == drug)] final_volume_values.append(drug_df["Final Tumor Volume"].tolist()) #contruct fig1, ax1 = plt.subplots(figsize=(8,5)) #add axis titles ax1.set_title('Final Tumor Volume Box Plots\n') ax1.set_xlabel('\nDrug Regimen') ax1.set_ylabel('Final Tumor Volume (mm3)\n') #filerprops to define the outlier markers flierprops = dict(marker='o', markerfacecolor='r', markersize=7, markeredgecolor='black') ax1.boxplot(final_volume_values,flierprops=flierprops) #rename the x axis ticks with the actual drug regimens plt.xticks(np.arange(1,len(top_four_treatments)+1), top_four_treatments) plt.ylim(0,80) #plt.figure(figsize=(10,6)) plt.show() # - # ## Line and scatter plots #find all the mice treated specifically with Capomulin drug = 'Capomulin' cap_mice = mouse_data.loc[(mouse_data['Drug Regimen'] == drug),['Mouse ID','Drug Regimen']] cap_mice=cap_mice.drop_duplicates(subset = 'Mouse ID') #cap_mice # + # Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin #so this cell should create a nice formatted line plot for any mouse, just change mouse variable mouse = 'm601' #filter original df with mouse of our choice mymouse= mouse_data.loc[(mouse_data['Mouse ID'] == mouse),['Mouse ID','Timepoint','Tumor Volume (mm3)','Weight (g)']] #setting axis with columns from df x_axis = mymouse['Timepoint'] y_axis = mymouse['Tumor Volume (mm3)'] #labels plt.xlabel('Timepoint (days)') plt.ylabel('Tumor Volume (mm3)') plt.title(f'Time vs Tumor Volume, Mouse ID: {mouse}\n') #setting lims and ticks using min and max values so it will format nicely for any mouse, 
not just specific to a single mouse plt.xlim(0,x_axis.max()) plt.ylim((round(y_axis.min()-2),(round(y_axis.max())+2))) plt.yticks(np.arange((round(y_axis.min())-4),(y_axis.max()+4),2)) plt.xticks(np.arange((x_axis.min()),(x_axis.max()+5),5)) #more formatting plt.grid() # :O so clip_on=False makes it to where the marker isnt cut off by the edge of the plot!! plt.plot(x_axis,y_axis,marker="o", color="blue", linewidth=1,clip_on=False) plt.show() # + #shows the df with lineplot data for specified mouse #mymouse # + # Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen #filter the original df with only Capomulin treated rows #drug variable was defined earlier as 'Capomulin' drug_filter= mouse_data.loc[(mouse_data['Drug Regimen'] == drug),:] #groupby individual mouse group_mouse = drug_filter.groupby(['Mouse ID']) #create dataframe with mean tumor volume and weight of each individual mice group_mouse_df = pd.DataFrame(group_mouse['Tumor Volume (mm3)'].mean()) group_mouse_df['Weight (g)'] = group_mouse['Weight (g)'].mean() group_mouse_df = group_mouse_df.rename(columns = {'Tumor Volume (mm3)':'Mean Tumor Volume'}) #group_mouse_df.head() # + # Calculate the correlation coefficient and linear regression model for #mouse weight and average tumor volume for the Capomulin regimen, columns for scatterplot #y vs x, directions say to plot mouse weight vs avg tumor volume #doesn't rly make sense becuase mouse weight is the independent variable so I'm going to plot tumor volume vs weight avgtumor = group_mouse_df.iloc[:, 0] weight = group_mouse_df.iloc[:, 1] #(slope,int, r, p, std_err) from linregress lin = st.linregress(weight,avgtumor) regresslinex=np.arange(weight.min(),weight.max()+2,2) line = regresslinex*lin[0] + lin[1] #string presenting equation eq = f"y = {round(lin[0],2)}x+{round(lin[1],2)}" #scatterplot formatting plt.figure(figsize=(10,7)) plt.xlabel('Mouse Weight (g)') plt.ylabel('Average Tumor Volume (mm3)') plt.ylim((round(avgtumor.min()-1.5),(round(avgtumor.max())+1.5))) plt.xlim((round(weight.min()-1.5),(round(weight.max())+1.5))) plt.title(f'Average Tumor Volume vs Mouse Weight for {drug} Regimen\n') plt.scatter(y=avgtumor,x=weight,color='slategrey') #plotting linear regression line with equation annotations plt.plot(regresslinex,line,"b--") variables = 'x = Mouse Weight (g) \ny = Average Tumor Volume (mm3)' plt.annotate(eq,(weight.max()-4,avgtumor.min()+1)) plt.annotate(variables,(weight.max()-4,avgtumor.min())) plt.show() #printing out the linear regression results print(f'Average Tumor Volume vs Mouse Weight for {drug} Regimen\nLinear Regression Model:') print(f'\n{eq}\nx = Mouse Weight (g)\ny = Average Tumor Volume (mm3)') print(f'Correlation Coefficient(R) = {round(lin[2],3)}') # + #ignore cell, was testing something out #lab=['a','a','a','a','a','a','a','a','a','a', # 'b','b','b','b','b','b','b','b','b','b', # 'c','c','c','c','c','c','c','c','c','c',] #numbs=[4,4,4,4,4,4,4,4,8,8, # 4,4,4,4,4,4,4,9,10,9, # 4,4,4,4,4,6,8,6,6,8] #uns=['a','b','c'] #random = pd.DataFrame({'label':lab,'numbs':numbs}) #outliercountss=[] #outlierdfss=[] #outlieractualvalues=[] #for i in range(len(uns)): # smort = random.loc[(random['label'] == uns[i]),:] # out= smort.loc[(smort['numbs'] > 5),:] # outliercountss.append(len(out)) # outlierdfss.append(out) #for i in range(len(outlierdfss)): # outlieractualvalues.append(outlierdfss[i]['numbs'].values[:]) #outlieractualvalues # #summ=pd.DataFrame({'labels':uns, # 'outliers':outlieractualvalues}) #summ # + #whoops, was 
trying to set up a df to show which drug had the largest tumor change based on the average #tumor change across each drug regimen, but we were already given the treatments we need to work with #thought I would do it anyways, created a df with the change in tumor volume, grouped by the drug and then have the average #tumor volume change, spits out the top four. #curiously, it doesn't match up with the four treatments we were given #initial_vol = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='first') #initial_vol = initial_vol[['Mouse ID','Drug Regimen','Tumor Volume (mm3)']] #initial_vol = initial_vol.rename(columns = {'Tumor Volume (mm3)':'Initial Tumor Volume (mm3)'}) #final_vol = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='last') #final_vol = final_vol[['Mouse ID','Tumor Volume (mm3)']] #final_vol = final_vol.rename(columns = {'Tumor Volume (mm3)':'Final Tumor Volume (mm3)'}) #change_df = pd.merge(initial_vol,final_vol, on = "Mouse ID") #change_df['Change in Tumor Volume (mm3)'] = change_df['Initial Tumor Volume (mm3)']-change_df['Final Tumor Volume (mm3)'] #top = change_df.groupby(['Drug Regimen']) #top_df=pd.DataFrame(top['Change in Tumor Volume (mm3)'].mean()) #top_df=top_df.sort_values("Change in Tumor Volume (mm3)",ascending=False) #top_df = top_df.reset_index() #top_df #top_treatments = top_df[:4] #top_treatments #da_best_treatments = top_treatments['Drug Regimen'].values #da_best_treatments #print('The top four most promising treatments:') #for x in da_best_treatments: # print(x) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: poker # language: python # name: poker # --- from pluribus.poker.deck import Deck import math import numpy as np from itertools import combinations import time import dill as pickle # ### Brief Look at Size of Problem # + # additional resources: # https://poker.cs.ualberta.ca/publications/2013-techreport-nl-size.pdf # - deck = Deck() # when dealing with huge combos, this will be more performant # can evaluate the stored number directly cards = [c.eval_card for c in deck] def get_card_combos(num_cards, cards): """ return combos of cards (Card.eval_card) """ # most performant I could find so far return np.asarray(list((combinations(cards, num_cards)))) def ncr(n,r): """ helper function for calculating combination size n choose r""" return int(math.factorial(n)/(math.factorial(r)*math.factorial(n-r))) start = time.time() unique_starting_hands = get_card_combos(2, cards) end = time.time() print(end - start) #52 choose 2 (see: ncr()) print("unique starting_hands: ", len(unique_starting_hands)) start = time.time() flops = get_card_combos(3, cards) # flop combos plus starting hands end = time.time() print(end - start) # + # the following consider Ah,Kh,Jh,7h as different than Ah,Kh,7h,Jh # this might be fair to encode some sense of strategy # - print("unique flops with unique starting hands: ", ncr(52, 2) * ncr(50, 3)) print("unique flops + turns: ", ncr(52, 2) * ncr(50, 3) * ncr(47, 1)) print("unique flops + turns + rivers: ", ncr(52, 2) * ncr(50, 3) * ncr(47, 1) * ncr(46, 1)) # + # both the supplementary and this paper below mention # + # we could also consider Ah,Kh,Jh,7h as not different than Ah,Kh,7h,Jh # this would mean # obviously, we have to consider starting hands as separate from the board # as evaluating Ks,Jh on a Ah,Kh,7h board is different than Ah,Jh on a Ks,Kh,7h # TODO (c): do we care about situations 
likie this? --yes! # - # example print("unique flops with unique starting hands: ", ncr(52, 2) * ncr(50, 3)) print("unique flops + turns: ", ncr(52, 2) * ncr(50, 4)) print("unique flops + turns: ", ncr(52, 2) * ncr(50, 5)) # + # but, really the best way would be to work on how to get lossless # here is an example: https://poker.cs.ualberta.ca/publications/2013-techreport-nl-size.pdf # it'll be a combination of strategy the same hands plus one of the methods above # + # here's a demonstration of the size (not even the largest problem) # TODO (c) we'll need to figure out how to apply lossless to each round # even then we'll still be clustering 2,428,287,420!! (169, flop: 1,286,792, turn: 55,190,538, river: 2,428,287,420) # unless we are using some sort of hip sampling trick # - # ### Notes on the Size of the Problem as it relates to Clustering the Information Situations # - Turns out the second way depicted above will be fine (I think) for the imperfect recall paradigm # - see here: pg.4 under heading "Computing the abstraction for round 1": http://www.cs.cmu.edu/~sandholm/gs3.aaai07.pdf # - Secondly, potentially use lossless as mentioned here # - pg. 274 5th paragragh http://www.ifaamas.org/Proceedings/aamas2013/docs/p271.pdf # - note also the two k means algorithm efficeincies discussed in the paragraph # - this is discussed in detail here: http://www.cs.cmu.edu/~sandholm/gs3.aaai07.pdf # - Thirdly, change the clustering problem to deal with indices of histograms, rather than the entire histogram # - I'm estimating 160 gigabytes to store the 2.5 billion x 8 numpy array needed to perform k means clustering on the river (using OCHS), or about 224 gigabytes to store the 3.5 billion X 8 numpy array needed to do the clustering for the river # - However, on the turn the problem would be at a minimum 5.5 X 10^7 x 200, which is too big # - can use sparse data here: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Appannie 용 스크래퍼 # - # 배운 라이브러리 from selenium import webdriver from selenium.webdriver.common.keys import Keys from bs4 import BeautifulSoup import pandas as pd import csv import time # 찾아본 라이브러리 import pyautogui as gui # 마우스 이동, 키보드 입력 import pyperclip # 클립보드 사용 browser = webdriver.Chrome('C:\chromedriver.exe') browser.get('https://www.appannie.com/intelligence/top-apps/store-rank/ios?date=%272021-07-19%27&country_code=KR&device_code=ios-phone&category_id=100000&store_rank_ios_chart_free$previous_range$chart_compare_facets=!(store_product_rank_grossing__aggr)&store-rank.ios.view=free&store_rank.ios.chart_range=7') # 스크래퍼 작동을 위해 화면이 top에 위치 & 페이지크기 full browser.maximize_window() browser.implicitly_wait(10) html= browser.page_source browser.execute_script("window.scrollTo(0, 950)") # + contents = [] # 1 ~ 91 순위까지 for i in range(1, 92): gui.moveTo(490,236,1) gui.dragTo(1173, 280, 1,button='left') gui.hotkey('ctrl','c') a = pyperclip.paste() gui.click() gui.scroll(-60) list = a.split('\r') list = [i.strip('\n') for i in list] time.sleep(0.6) if len(list) == 3: list.insert(2, '없음') contents.append(list) continue elif len(list) == 4: contents.append(list) continue # else: # break # - # 92 ~ 100 순위 까지 (마지막은 스크롤을 내릴 수 없어서 따로 for문 작성) for i in range(1,10): n = 75 * (i-1) gui.moveTo(490,236+int(n),1) gui.dragTo(1173, 280+int(n), 1,button='left') 
gui.hotkey('ctrl','c') a = pyperclip.paste() gui.click() list = a.split('\r') list = [i.strip('\n') for i in list] time.sleep(0.5) if len(list) == 3: list.insert(2, '없음') contents.append(list) continue elif len(list) == 4: contents.append(list) continue len(contents) contents # 드래그시 열의 마지막 str에서 긴 (Application) 의 경우 1~2문자의 누락가능성 확인 -> 조정하기위한 for문 for content in contents: if content[-1][-1] == 'n': content[-1] = content[-1] + 's)' elif content[-1][-1] == 'o': content[-1] = content[-1] + 'ns)' elif (content[-1] == 'Games') or (content[-1] == 'Kids'): break elif content[-1][-1] == 's': content[-1] = content[-1] + ')' pd.set_option('display.max_row', 101) pd.set_option('display.max_columns', 10) df = pd.DataFrame(data = contents, columns = ['-', '앱이름','회사명','카테고리']) df.index = df.index + 1 df = df.drop(columns = ['-']) df # * Games : Action, Adventure, Board, Card, Casino, Casual, Dice, Education, Family, Kids, Music, Puzzle, Racing, Role Playing, Simulation, Sports, Strategy, Trivia, Word # # * Applications : Books, Business, Catalogs, Developer Tools, Education, Entertainment, Finance, Food and Drink, Graphics & Design, Health and Fitness, Lifestyle, Magazines and Newspapers, Medical, Music, Navigation, News, Photo and Video, Productivity, Reference, Shopping, Social Networking, Sports, Travel, Utilities, Weather # # * Kids : Ages 5 & Under, Ages 6-8, Ages 9-11 # + # 회사명 中 None 은 '없음' 으로 본다 # - df.to_excel('C:\pl/contents.xls') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Factorial digit sum # # # # n! means n × (n − 1) × ... × 3 × 2 × 1 # # For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800, # and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27. # # Find the sum of the digits in the number 100! # # + # %%time import math number = math.factorial(100) textnum = str(number) mysum = 0 for char in textnum: mysum = mysum + int(char) print(mysum) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from plotly import __version__ from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot import plotly.graph_objs as go import pandas as pd print(__version__) # initiate the Plotly Notebook mode to use plotly.offline and inject the plotly.js source files into the notebook. 
init_notebook_mode(connected=True) #What is plotly https://plot.ly/python/user-guide/#why-graphobjs # + #Simple Bar Graph with Plotly data = [go.Bar( x=['oranges', 'apples', 'mangos'], y=[10, 20, 30] )] iplot(data, filename='bar') # + #Data available from WorldBank or by cloning this repo #Data Frame df = pd.read_excel('WorldBankPop2017.xlsx') #Display the shape i.e the number of rows and columns in the data frame df.shape # - #Check and ensure there are no null values in our data df.info() #Check the first 5 rows of the data frame df.head(5) # + #Create a Bar Graph with the 'Country' and 'Population' columns data = [go.Bar( x= df['Country Name'], y= df['2017 [YR2017]'], width = 0.5 )] # Add Title to the x and y axis respectively of your plotly graph layout = go.Layout( title='Countrywise Population Distribution', xaxis=dict( title='Tactics', ), yaxis=dict( title='Total Population', ), ) #Create and plot the figure with Plotly fig = go.Figure(data=data, layout=layout) iplot(fig, filename='basic-bar') # + #Its a little hard to see which country has the highest population unless you click on the bars in the plotly graph above # Lets try to Visualize Data using a Scatter Plot) #May not be useful see https://python-graph-gallery.com/scatter-plot/ trace = go.Scatter( x= df['Country Name'], y= df['2017 [YR2017]'], mode = 'markers' ) data = [trace] # Plot and embed in ipython notebook! https://plot.ly/python/line-and-scatter/ iplot(data, filename='basic-scatter') # + #https://plot.ly/python/choropleth-maps/#reference data = [ dict( type = 'choropleth', locations = df['Country Code'], z = df['2017 [YR2017]'], text = df['Country Name'], showscale = True, marker = dict( line = dict ( color = 'rgb(180,180,180)', width = 0.5 ) ), colorbar = dict( autotick = False, title = 'Population'), ) ] layout = dict( title = '2017 World Population courtest:\ \ WorldBank', geo = dict( showframe = False, showcoastlines = False, projection = dict( type = 'Mercator' ) ) ) fig = dict( data=data, layout=layout ) iplot( fig, validate=False, filename='WorldPOP') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### set data # %matplotlib inline s = np.sin(2 * np.pi * 0.125 * np.arange(20)) plt.plot(s, 'ro-') plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.show() from scipy.linalg import toeplitz S = np.fliplr(toeplitz(np.r_[s[-1], np.zeros(s.shape[0] - 2)], s[::-1])) S[:5, :3] # data를 3개씩 잘라서 그 다음 것을 예측하도록 트레이닝 # # ex) 1-3월 매출 -> 4월 매출 예측 X_train = S[:-1, :3][:, :, np.newaxis] Y_train = S[:-1, 3] X_train.shape, Y_train.shape X_train[0:2] Y_train[0:2] plt.subplot(211) plt.plot([0, 1, 2], X_train[0].flatten(), 'bo-', label="input sequence") plt.plot([3], Y_train[0], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train[1].flatten(), 'bo-', label="input sequence") plt.plot([4], Y_train[1], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show() # ## modeling # + from keras.models import Sequential from keras.layers import SimpleRNN, Dense np.random.seed(0) model = Sequential() model.add(SimpleRNN(10, input_shape=(3, 1))) model.add(Dense(1, activation="linear")) # 0-1 사이의 값이 나오는 문제가 아님 model.compile(loss='mse', optimizer='sgd') # - # 학습전 plt.plot(Y_train, 'ro-', 
label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Before training") plt.show() # 학습 history = model.fit(X_train, Y_train, epochs=100, verbose=0) plt.plot(history.history["loss"]) plt.title("Loss") plt.show() # 학습 후 예측 plt.plot(Y_train, 'ro-', label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("After training") plt.show() # seq-to-seq # `return_seqences = True` : many-to-many # + from keras.layers import TimeDistributed model2 = Sequential() model2.add(SimpleRNN(10, return_sequences=True, input_shape=(3, 1))) model2.add(TimeDistributed(Dense(1, activation="linear"))) # 3차원 텐서 입력 model2.compile(loss='mse', optimizer='sgd') # - # 출력값을 3개짜리 순서열로 #기존 train X_train = S[:-1, :3][:, :, np.newaxis] Y_train = S[:-1, 3] X_train.shape, Y_train.shape X_train2 = S[:-3, 0:3][:, :, np.newaxis] Y_train2 = S[:-3, 3:6][:, :, np.newaxis] X_train2.shape, Y_train2.shape print(X_train[:2]) X_train2[:2] print(Y_train[:2]) Y_train2[:2] # ### train data 모습 plt.subplot(211) plt.plot([0, 1, 2], X_train2[0].flatten(), 'bo-', label="input sequence") plt.plot([3, 4, 5], Y_train2[0].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show() # ### fitting history2 = model2.fit(X_train2, Y_train2, epochs=100, verbose=0) plt.plot(history2.history["loss"]) plt.title("Loss") plt.show() # ### 학습 후 plt.subplot(211) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.plot([4, 5, 6], model2.predict(X_train2[1:2,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.subplot(212) plt.plot([2, 3, 4], X_train2[2].flatten(), 'bo-', label="input sequence") plt.plot([5, 6, 7], Y_train2[2].flatten(), 'ro-', label="target sequence") plt.plot([5, 6, 7], model2.predict(X_train2[2:3,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Third sample sequence") plt.tight_layout() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Convergence Testing: # ### K-Point Convergence: # Using a plane-wave energy cutoff of 520 eV, and Monkhorst pack k-grid densities of $i$ x $i$ x $i$ for $i$ ranging from 1 to 8. 
# + jupyter={"source_hidden": true} import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline sns.set_palette(sns.color_palette('bright')) sns.set_style('darkgrid') kdensity = np.arange(3, 8.1) kconv_energies = np.array([float(line.rstrip('\n')) for line in open('VASP_Outputs/kconv_energy_list.txt')]) f, ax = plt.subplots(1, 2, figsize=(22, 10)) ax[0].plot(kdensity, 1000*(kconv_energies-kconv_energies[-1]), #color='steelblue', marker="o", label="Convergence E0", linewidth=2, linestyle='-') ax[0].grid(True) ax[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax[0].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax[0].set_title("Electronic SCF Energy Convergence wrt K-Point Density", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax[0].margins(0.1) ax[0].ticklabel_format(useOffset=False) plt.setp(ax[0].get_yticklabels(), rotation=20) f.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins ax[1].ticklabel_format(useOffset=False) ax[1].plot(kdensity[1:], 1000*(kconv_energies[1:]-kconv_energies[-1]), color='red', marker="s", label="Convergence E0", linewidth=2) ax[1].grid(True) ax[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax[1].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax[1].set_title("Electronic SCF Energy Convergence wrt K-Point Density", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax[1].margins(0.1) plt.setp(ax[1].get_yticklabels(), rotation=20) plt.savefig('kpoint_Convergence.png') # + jupyter={"source_hidden": true} # %matplotlib inline kdensity = np.arange(3, 8.1) kconv_energies_newencut = np.array([float(line.rstrip('\n')) for line in open('VASP_Outputs/kenconv_energy_list.txt')]) kconv_newencut = np.zeros([6,]) for i in range(len(kdensity)): # Selecting energies for ENCUT of 800 eV kconv_newencut[i] = kconv_energies_newencut[i*13+12] f2, ax2 = plt.subplots(1, 2, figsize=(22, 10)) ax2[0].plot(kdensity, 1000*(kconv_newencut-kconv_newencut[-1]), color='steelblue', marker="s", label="Convergence E0", linewidth=2) ax2[0].grid(True) ax2[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax2[0].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax2[0].set_title("Electronic SCF Energy Convergence wrt K-Density (800 eV ENCUT)", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax2[0].margins(0.1) ax2[0].ticklabel_format(useOffset=False) plt.setp(ax2[0].get_yticklabels(), rotation=20) f2.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins ax2[1].ticklabel_format(useOffset=False) ax2[1].plot(kdensity[1:], 1000*(kconv_newencut[1:]-kconv_newencut[-1]), color='red', marker="s", label="Convergence E0", linewidth=2) ax2[1].grid(True) ax2[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax2[1].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax2[1].set_title("Electronic SCF Energy Convergence wrt K-Density (800 eV ENCUT)", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. 
the gap between the final x value and the x limit of the graph) ax2[1].margins(0.1) plt.setp(ax2[1].get_yticklabels(), rotation=20) plt.show() # + jupyter={"source_hidden": true} ediff = np.array([kconv_energies[i]-kconv_energies[i-1] for i in range(1, len(kconv_energies))]) f1, ax1 = plt.subplots(1, 2, figsize=(22, 7)) ax1[0].plot(kdensity[1:], np.log10(1000*abs(ediff)), color='steelblue', marker="s", label="Convergence E0", linewidth=2) ax1[0].grid(True) ax1[0].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax1[0].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10) ax1[0].set_title("SCF Energy Difference wrt K-Point Density", fontsize=20, pad=20) # pad is offset of title from plot ax1[0].ticklabel_format(useOffset=False) plt.setp(ax1[0].get_yticklabels(), rotation=20) # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax1[0].margins(0.1) f1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins ax1[1].plot(kdensity[2:], np.log10(abs(1000*ediff[1:])), color='red', marker="s", label="Convergence E0", linewidth=2) ax1[1].grid(True) ax1[1].set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax1[1].set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10) ax1[1].set_title("SCF Energy Difference wrt K-Point Density", fontsize=20, pad=20) # pad is offset of title from plot ax1[1].ticklabel_format(useOffset=False) plt.setp(ax1[1].get_yticklabels(), rotation=20) # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax1[1].margins(0.1) plt.show() # - # ### Plane Wave Energy Cutoff Convergence: # Using a Monkhorst pack k-grid density of 5 x 5 x 5, the plane wave energy cutoff was varied from 200 eV to 700 eV. # + jupyter={"source_hidden": true} ecutoff = np.arange(200, 700.5, 20) econv_energies = np.array([float(line.rstrip('\n')) for line in open('VASP_Outputs/econv_energy_list.txt')]) sns.set_style('darkgrid') fec, axec = plt.subplots(1, 2, figsize=(22, 7)) axec[0].plot(ecutoff, 1000*(econv_energies-econv_energies[-1]), color='steelblue', marker="s", label="Convergence E0", linewidth=2) axec[0].grid(True) axec[0].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10) axec[0].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) axec[0].ticklabel_format(useOffset=False) plt.setp(axec[0].get_yticklabels(), rotation=20) axec[0].set_title("Electronic SCF Energy Convergence wrt ENCUT", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) axec[0].margins(0.1) fec.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins axec[1].plot(ecutoff[3:], 1000*(econv_energies[3:]-econv_energies[-1]), color='red', marker="s", label="Convergence E0", linewidth=2) axec[1].grid(True) axec[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10) axec[1].set_ylabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) axec[1].ticklabel_format(useOffset=False) plt.setp(axec[1].get_yticklabels(), rotation=20) axec[1].set_title("Electronic SCF Energy Convergence wrt ENCUT", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. 
the gap between the final x value and the x limit of the graph) axec[1].margins(0.1) plt.savefig('ENCUT_Convergence.png') plt.show() # + jupyter={"source_hidden": true} econv_energies = np.array([float(line.rstrip('\n')) for line in open('VASP_Outputs/econv_energy_list.txt')]) ecdiff = np.array([econv_energies[i]-econv_energies[i-1] for i in range(1, len(econv_energies))]) fec1, axec1 = plt.subplots(1, 1, figsize=(11, 7)) axec1.plot(ecutoff[1:], np.log10(1000*abs(ecdiff)), color='steelblue', marker="s", label="Convergence E0", linewidth=2) axec1.grid(True) axec1.set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad=10) axec1.set_ylabel("Log_10(|SCF Energy Difference|) [meV]", labelpad=10) axec1.set_title("SCF Energy Difference wrt ENCUT", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) axec1.margins(0.1) fec1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins axec1.ticklabel_format(useOffset=False) plt.setp(axec1.get_yticklabels(), rotation=20) #axec1[1].plot(ecutoff[2:], np.log10(abs(ecdiff[1:])), color='red', marker="s", label="Convergence E0", linewidth = 2) # axec1[1].grid(True) #axec1[1].set_xlabel("Plane Wave Energy Cutoff [eV]", labelpad = 10) #axec1[1].set_ylabel("Log_10(|SCF Energy Difference|) [eV]", labelpad = 10) # axec1[1].set_title("SCF Energy Difference wrt ENCUT", fontsize = 20, pad = 20) # pad is offset of title from plot # axec1[1].margins(0.1) # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) plt.show() # - # **Note:** # "The PREC-tag determines the energy cutoff ENCUT, _**if ENCUT is not specified in the INCAR file**_. For PREC=Low, ENCUT will be set to the maximal ENMIN value found on the POTCAR file. For PREC=Medium and PREC=Accurate, ENCUT will be set to maximal ENMAX value found on the POTCAR file. In general, an energy cutoff greater than 130% ENMAX is only required for accurate evaluation of quantities related to the stress tensor (e.g. elastic properties)." # Note that in the case of $Cs_2 AgSbBr_6$, the largest ENMAX in the POTCAR was 250 eV. 
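#
# As a small aside, the ENMAX values referred to above can be read straight from the POTCAR. The sketch below is an illustration only (not part of the original analysis); it assumes the usual "ENMAX = ...; ENMIN = ..." lines and a POTCAR file in the working directory.

# +
import re

def potcar_enmax(path="POTCAR"):
    """Return all ENMAX values (in eV) found in a POTCAR file."""
    pattern = re.compile(r"ENMAX\s*=\s*([0-9.]+)")
    with open(path) as fh:
        return [float(m.group(1)) for line in fh for m in pattern.finditer(line)]

enmax_values = potcar_enmax()
print(max(enmax_values), 1.3 * max(enmax_values))  # largest ENMAX and 130% of it
# -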
# ## 2D Convergence Plot # K-grid density range from 3 x 3 x 3 to 8 x 8 x 8, with ENCUT varying from 200 to 800 eV in 50 eV increments # + jupyter={"source_hidden": true} import altair as alt import numpy as np import pandas as pd kenconv_energies = np.array([float(line.rstrip('\n')) for line in open('VASP_Outputs/kenconv_energy_list.txt')])*1000 kgrid_density = np.arange(3, 9) encut_range = np.arange(200, 801, 50) kenarray = np.zeros((len(encut_range), len(kgrid_density))) # Array of energy values for i in range(len(kgrid_density)): kenarray[:, i] = kenconv_energies[i * len(encut_range): (i+1)*len(encut_range)] x, y = np.meshgrid(kgrid_density, encut_range) # Convert this grid to columnar data expected by Altair source = pd.DataFrame({'K-Grid Density': x.ravel(), 'Plane Wave Energy Cutoff (eV)': y.ravel(), 'Ground State Energy wrt Final Datapoint (meV)': kenarray.ravel()-kenarray.ravel()[-1]}) alt.Chart(source,width=300,height=320,title = '2D Electronic SCF Energy Convergence Plot').mark_rect().encode( x='K-Grid Density:O', y=alt.Y("Plane Wave Energy Cutoff (eV)", sort='descending', type='ordinal'), color='Ground State Energy wrt Final Datapoint (meV):Q' ) # + jupyter={"source_hidden": true} source1 = pd.DataFrame({'K-Grid Density': x.ravel()[len(kgrid_density)*2:], 'Plane Wave Energy Cutoff (eV)': y.ravel()[len(kgrid_density)*2:], 'Ground State Energy wrt Final Datapoint (meV)': kenarray.ravel()[len(kgrid_density)*2:]-kenarray.ravel()[-1]}) alt.Chart(source1,title = '2D Electronic SCF Energy Convergence Plot (Adjusted Scale)').mark_rect().encode( x='K-Grid Density:O', y=alt.Y("Plane Wave Energy Cutoff (eV)", sort='descending', type='ordinal'), color='Ground State Energy wrt Final Datapoint (meV):Q', ) # - # **Seems that the convergence effect of the plane wave energy cutoff is much more significant than that of the k-grid density.** For the k-grid density, a grid of 2 x 2 x 2 is too small for the SCF energy to be calculated (i.e. to be 'self-consistent' within the NELM limit of 100 electronic SCF loops), and that 3 x 3 x 3 is approximately 1.4 meV off the converged energy value for higher k-densities (see 'K-Point Convergence' above), but that k-grid densities of above 4 x 4 x 4 do not significantly improve the convergence of the final energy. # Hence, it is concluded that a k-density of 5 x 5 x 5 (or 4 x 4 x 4) and an ENCUT of +500 eV is sufficient to obtain a converged final energy from VASP for $Cs_2 AgSbBr_6$. # + jupyter={"source_hidden": true} from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import numpy as np # %matplotlib widget fig = plt.figure(figsize = (12,9.5)) ax = fig.gca(projection='3d') sns.set_style('white') # Plot the surface. surf = ax.plot_surface(x, y, kenarray-kenarray[-1,-1], cmap=cm.coolwarm, linewidth=0, antialiased=False) ax.grid(True) ax.set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax.set_zlabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax.set_ylabel("Plane Wave Energy Cutoff [eV]", labelpad=10) ax.set_title("2D Electronic SCF Energy Convergence wrt K-Density & ENCUT", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. 
the gap between the final x value and the x limit of the graph) ax.margins(0.1) ax.ticklabel_format(useOffset=False) plt.setp(ax.get_yticklabels(), rotation=20) f.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins # Add a color bar which maps values to colors. cb = fig.colorbar(surf, shrink=0.5, aspect=5) cb.set_label('Final Energy [meV]', labelpad = -35, y = 1.15, rotation = 0) plt.show() # + jupyter={"source_hidden": true} # %matplotlib widget fig1 = plt.figure(figsize=(12,9.5)) ax1 = fig1.gca(projection='3d') # Plot the surface. surf = ax1.plot_surface(x[2:], y[2:], kenarray[2:]-kenarray[-1,-1], cmap=cm.coolwarm, linewidth=0, antialiased=False) ax1.grid(True) ax1.set_xlabel("K-Grid Density (i x i x i)", labelpad=10) ax1.set_zlabel("Ground State Energy wrt Final Datapoint [meV]", labelpad=10) ax1.set_ylabel("Plane Wave Energy Cutoff [eV]", labelpad=10) ax1.set_title("2D Electronic SCF Energy Convergence wrt K-Density & ENCUT", fontsize=20, pad=20) # pad is offset of title from plot # Adjusting the in-plot margins (i.e. the gap between the final x value and the x limit of the graph) ax1.margins(0.1) ax1.ticklabel_format(useOffset=False) plt.setp(ax1.get_yticklabels(), rotation=20) #fig1.subplots_adjust(bottom=0.3, top=0.85) # Adjusting specific margins # Add a color bar which maps values to colors. cb = fig1.colorbar(surf, shrink=0.5, aspect=5) cb.set_label('Energy Difference [meV]', labelpad = -35, y = 1.15, rotation = 0) #plt.savefig('3D_Conv_Plot') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] tags=["meta", "draft"] # # Differential Evolution # + [markdown] tags=["hide"] # **TODO** # * Livre metaheuristiques p.165 # * [Scipy's implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution) # * https://en.wikipedia.org/wiki/Differential_evolution # * http://www.scholarpedia.org/article/Metaheuristic_Optimization#Differential_Evolution # * [Practical advises](http://www1.icsi.berkeley.edu/~storn/code.html#prac) # - # Differential Evolution (DE) was developed by and in 1995 [Storn95] [Storn97]. # # This algorithm has been made for continuous problems. # # * **[Storn95]**: ., & . (1995). *Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces*. Technical Report TR95-012, International Computer Science Institute, Berkeley, California. [PDF document](ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-012.pdf). # * **[Storn97]**: ., & . (1997). *Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces*. Journal of global optimization, 11(4), 341-359. # ## The Differential Evolution (DE) algorithm # ### Simplified notations # FOREACH generation # # $\quad$ FOREACH individual $i$ (a solution vector) of the population (a set) # # $\quad\quad$ 1. **MUTATION**: # # $\quad\quad\quad$ 1.1 Randomly choose 3 different individuals $\boldsymbol{x}_{r1}$, $\boldsymbol{x}_{r2}$ and $\boldsymbol{x}_{r3}$ # # $\quad\quad\quad\quad$ $\boldsymbol{x}_{r2}$ and $\boldsymbol{x}_{r3}$ are used to automatically define the direction and the amplitude of the mutation # # $\quad\quad\quad\quad$ (i.e. 
the search amplitude and direction) # # $\quad\quad\quad$ 1.2 Compute mutant $\boldsymbol{v}_{i} \leftarrow \boldsymbol{x}_{r1} + F . (\boldsymbol{x}_{r2} - \boldsymbol{x}_{r3})$ # # $\quad\quad$ 2. **CROSSOVER**: # # $\quad\quad\quad$ Compute the "test" individual $\boldsymbol{u}_{i}$ randomly taking each of its (vector) components in either $\boldsymbol{x}_i$ or $\boldsymbol{v}_i$ # # $\quad\quad$ 3. **SELECTION**: # # $\quad\quad\quad$ The test individual $\boldsymbol{u}_{i}$ replace $\boldsymbol{x}_{i}$ in the next generation if it has a better score. # Differential evolution consists of three main steps: # 1. mutation # 2. crossover # 3. selection # # The following sections provides more details about those three steps. # # ### Mutation # # # # ### Crossover # # # # ### Selection # # # **Notations**: # * $D$ : the dimension of input vectors, $D \in \mathbb{N}^*_+$. # * $t$ : the current iteration index (or generation index), $t \in \mathbb{N}^*_+$. # * $T$ : the total number of iteration (or generation), $T \in \mathbb{N}^*_+$. # * $N$ : the size of the population, $N > 3$. # * $F$ : the *differential weight*. In principle, $F \in [0,2]$, but in practice, a scheme with $F \in [0,1]$ is more efficient and stable ([scholarpedia](http://www.scholarpedia.org/article/Metaheuristic_Optimization#Differential_Evolution)). # * $Cr$ : the crossover probability parameter, $Cr \in [0,1]$. # # --- # # **Algorithm's parameters:** $N$, $Cr$ and $F$ # # --- # # **Input:** # * An initial population $(\boldsymbol{x}_{1,1}, \boldsymbol{x}_{2,1}, \dots, \boldsymbol{x}_{N,1})$ with $\boldsymbol{x}_{0,i} \in \mathbb{R}^D$ # # --- # # **for all** generation $t = 1, \dots, T$ **do** # # $\quad$ **for all** individuals $i = 1, \dots, N$ **do** # # $\quad\quad$ **1. Mutation:** # # $\quad\quad\quad$ Randomly choose 3 different individuals $\boldsymbol{x}_{r1,t}$, $\boldsymbol{x}_{r2,t}$ and $\boldsymbol{x}_{r3,t}$ in the population # # $\quad\quad\quad$ Make a donor vector $\boldsymbol{v}_{i,t}$ as the following # # $\quad\quad\quad \boldsymbol{v}_{i,t} \leftarrow \boldsymbol{x}_{r1,t} + F (\boldsymbol{x}_{r2,t} - \boldsymbol{x}_{r3,t})$ # # $\quad\quad$ **2. Crossover:** make a test individual $\boldsymbol{u}_{i,t}$ from $\boldsymbol{x}_{i,t}$ and $\boldsymbol{v}_{i,t}$ # # $\quad\quad\quad$ Randomly choose a dimension index $d_{r,i}$. # # $\quad\quad\quad$ **for all** dimension $d = 1, \dots, D$ **do** # # $$ # u_{d,i,t} \leftarrow # \left\{ # \begin{align} # v_{d,i,t} \quad & \text{ if } \mathcal{B}(Cr) = 1 \text{ or if } d = d_{r,i}\\ # x_{d,i,t} \quad & \text{ otherwise.} # \end{align} # \right. # $$ # # $\quad\quad\quad\quad$ where $\mathcal{B}(Cr)$ is a Bernoulli random variable of parameter $Cr$ # # $\quad\quad\quad\quad$ and where $u_{d,i,t}$, $v_{d,i,t}$ and $x_{d,i,t}$ are respectively the $d^{th}$ component of $\boldsymbol{u}_{i,t}$, $\boldsymbol{v}_{i,t}$ and $\boldsymbol{x}_{i,t}$ # # $\quad\quad$ **3. Selection:** # # $\quad\quad\quad$ **If** $f(\boldsymbol{u}_{i,t}) < f(\boldsymbol{x}_{i,t})$ # # $\quad\quad\quad\quad \boldsymbol{x}_{i,t+1} \leftarrow \boldsymbol{u}_{i,t}$ # # $\quad\quad\quad$ **Else** # # $\quad\quad\quad\quad \boldsymbol{x}_{i,t+1} \leftarrow \boldsymbol{x}_{i,t}$ # # $\quad$ **end for** # # **end for** # + tags=["hide"] # import python packages here... 
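# Below is a minimal NumPy sketch of the DE/rand/1/bin loop described above. It is an
# illustration, not a reference implementation: the objective f, the box `bounds`, and
# the default values for N, F, Cr and T are assumptions chosen for the demo.
import numpy as np

def differential_evolution(f, bounds, N=20, F=0.8, Cr=0.9, T=200, rng=None):
    """Minimize f over a box `bounds` (list of (low, high) pairs) with basic DE."""
    rng = np.random.default_rng(rng)
    D = len(bounds)
    low, high = np.array(bounds, dtype=float).T
    pop = rng.uniform(low, high, size=(N, D))          # initial population
    scores = np.array([f(x) for x in pop])
    for t in range(T):
        for i in range(N):
            # 1. mutation: three distinct individuals, all different from i
            r1, r2, r3 = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # 2. crossover: binomial, with at least one component taken from v
            d_r = rng.integers(D)
            mask = rng.random(D) < Cr
            mask[d_r] = True
            u = np.clip(np.where(mask, v, pop[i]), low, high)
            # 3. selection: keep the test individual only if it scores better
            fu = f(u)
            if fu < scores[i]:
                pop[i], scores[i] = u, fu
    best = int(np.argmin(scores))
    return pop[best], scores[best]

# usage example on the sphere function, whose minimum is at the origin
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 3, rng=0)
print(x_best, f_best)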
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # The Annotated Transformer Tutorial - Reimplementation of Original
#
# This is a reimplementation of the famous [Annotated Transformer](http://nlp.seas.harvard.edu/2018/04/03/attention.html) that I did to better understand the transformer.

# ## Imports & Inits

# %load_ext autoreload
# %autoreload 2

# +
import pdb
import numpy as np
np.set_printoptions(precision=4)
import math, copy, time

import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
sns.set_context(context="talk")
# %matplotlib inline

import torch
from torch import nn
from torch.nn import functional as F
from torch.autograd import Variable
# -

# ## Model Architecture

def clones(module, N):
    """ Produce N identical layers. """
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])

# +
class EncoderDecoder(nn.Module):
    """ A standard Encoder-Decoder architecture. Base for this and many other models. """
    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        """ Take in and process masked source and target sequences. """
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)

class Generator(nn.Module):
    """ Define standard linear + softmax generation step. """
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return F.log_softmax(self.proj(x), dim=-1)
# -

class LayerNorm(nn.Module):
    """ Construct a layernorm module. """
    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2

class SublayerConnection(nn.Module):
    """ A residual connection followed by a layer norm. Note for code simplicity the norm is first as opposed to last. """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        """ Apply residual connection to any sublayer with the same size. """
        return x + self.dropout(sublayer(self.norm(x)))

# ## Encoder

# +
class Encoder(nn.Module):
    """ Core encoder is a stack of N layers. """
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        """ Pass the input (and mask) through each layer in turn. """
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)

class EncoderLayer(nn.Module):
    """ Encoder is made up of self-attn and feed forward.
""" def __init__(self, size, self_attn, feed_forward, dropout): super(EncoderLayer, self).__init__() self.self_attn = self_attn self.feed_forward = feed_forward self.subplayer = clones(SublayerConnection(size, dropout), 2) self.size = size def forward(self, x, mask): x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask)) return self.sublayer[1](x, self.feed_forward) # - # ## Decoder class Decoder(nn.Module): """ Generic N layer decoder with masking. """ def __init__(self, layer, N): super(Decoder, self).__init__() self.layers = clones(layer, N) self.norm = LayerNorm(layer.size) def forward(self, x, memory, src_mask, tgt_mask): """ Pass the input (and mask) through each layer in turn. """ for layer in self.layers: x = layer(x, memory, src_mask, tgt_mask) return self.norm(x) class DecoderLayer(nn.Module): """ Decoder is made of self-attn, src-attn, and feed forward. """ def __init__(self, size, self_attn, src_attn, feed_forward, dropout): super(DecoderLayer, self).__init__() self.size = size self.self_attn = self_attn self.src_attn = src_attn self.feed_forward = feed_forward self.sublayer = clones(SublayerConnection(size, dropout), 3) def forward(self, x, memory, src_mask, tgt_mask): m = memory x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask)) x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask)) return self.sublayer[2](x, self.feed_forward) def subsequent_mask(size): """ Mask out subsequent positions. """ attn_shape = (1, size, size) subsq_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8') return torch.from_numpy(subsq_mask) == 0 fig, ax = plt.subplots(1,1,figsize=(5,5)) ax.imshow(subsequent_mask(20)[0]) # ## Attention def attention(query, key, value, mask=None, dropout=None): """ Compute 'Scaled Dot Product Attention' """ d_k = query.size(-1) scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k) if mask is not None: scores = scores.maksed_fill(mask == 0, -1e9) p_attn = F.softmax(scores, dim=-1) if dropout is not None: p_attn = dropout(p_attn) return torch.matmul(p_attn, value), p_attn class MultiHeadAttention(nn.Module): def __init__(self, h, d_model, dropout=0.1): """ Take in the model size and number of heads. 
""" super(MultiHeadAttention, self).__init__() assert d_model % h == 0 # assume d_v = d_k self.d_k = d_model // h self.h = h self.linears = clones(nn.Linear(d_model, d_model), 4) self.attn = None self.dropout = nn.Dropout(dropout) def forward(self, query, key, value, mask=None): if mask is not None: mask = mask.unsqueeze(1) # same maks is applied to all h heads nbatchs = query.size(0) # 1) Do all the linear projections in a batch from d_model => h x d_k query, key, value = [l(x).view(nbatchs, -1, self.h, self.d_k).transpose(1, 2) for l,x in zip(self.linears, (query, key, value))] # 2) Apply attention on all the projected vectors in a batch x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout) # 3) "concat" using view and apply a final linear x = x.transpose(1, 2).contiguous().view(nbatchs, -1, self.h * self.d_k) return self.linears[-1](x) # ## Feedforward, Embeddings, and Positional Encoding class PositionWiseFeedForward(nn.Module): def __init__(self, d_model, d_ff, dropout=0.1): super(PositionWiseFeedForward, self).__init__() self.w_1 = nn.Linear(d_model, d_ff) self.w_2 = nn.Linear(d_ff, d_model) self.dropout = nn.Dropout(dropout) def forward(self, x): return self.w_2(self.dropout(F.relu(self.w_1(x)))) class Embeddings(nn.Module): def __init__(self, d_model, vocab): super(Embeddings, self).__init__() self.lut = nn.Embedding(vocab, d_model) self.d_model = d_model def forward(self, x): return self.lut(x) * math.sqrt(self.d_model) class PositionalEncoding(nn.Module): "Implement the PE function." def __init__(self, d_model, dropout, max_len=5000): super(PositionalEncoding, self).__init__() self.dropout = nn.Dropout(dropout) # Compute the positional encodings once in log space. pe = torch.zeros(max_len, d_model) position = torch.arange(0, max_len).unsqueeze(1) div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)) pe[:, 0::2] = torch.sin(position * div_term) pe[:, 1::2] = torch.cos(position * div_term) pe = pe.unsqueeze(0) self.register_buffer('pe', pe) def forward(self, x): x = x + Variable(self.pe[:, :x.size(1)], requires_grad=False) return self.dropout(x) fig, ax = plt.subplots(1,1,figsize=(15,5)) pe = PositionalEncoding(20, 0) y = pe.forward(Variable(torch.zeros(1, 100, 20))) ax.plot(np.arange(100), y[0, :, 4:8].data.numpy()) ax.legend(["dim %d"%p for p in [4,5,6,7]]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Lj79fcnLzcxP" # ##Python program to inverse # # + id="vK40mpsBxpWv" import numpy as np # + colab={"base_uri": "https://localhost:8080/"} id="SvS3I9LO3FBV" outputId="373d750d-aa0b-4611-e8df-23b167c3a8e6" A = np.array([[1, 2], [4, 7]]) invA = np.linalg.inv(A) # inverse of matrix A print(invA) # print the inverse of matrix A # + colab={"base_uri": "https://localhost:8080/"} id="7hsWIzIs30MM" outputId="9c9333df-e78e-4f20-8f96-cf2d8a1f3888" C = np.dot(A, invA) # get identity matrix print(C) # print dot product # + [markdown] id="0dXgkzRu4mkA" # ##Python Program to Inverse and Transpose a 3x3 Matrix # # # # + colab={"base_uri": "https://localhost:8080/"} id="2WLGnCGl4kfz" outputId="49c0a867-598d-4d0c-b32a-e5abd3c5de11" A = np.array([[6, 1, 1], [4,-2, 5], [2, 8, 7]]) InvOfA = np.linalg.inv(A) # to inverse matrix A B = np.transpose(A) # to transpose matrix A print(A) print() print(B) # + 
colab={"base_uri": "https://localhost:8080/"} id="H7jWI5PN7LUP" outputId="2564cf94-ea5d-4c68-e551-fd31cffd18aa" dot = np.dot(A, InvOfA) # get dot product of A and its inverse print(dot) # + colab={"base_uri": "https://localhost:8080/"} id="c73338sC1CRR" outputId="17c9edba-7dcb-488f-d48a-9bce0f7f4dbe" # practice M = np.array([[1,7], [-3, 5]]) np.array(M @ np.linalg.inv(M), dtype=int) # get the dot product of M and the inverse of M then convert it to integers to get the identity matrix # + colab={"base_uri": "https://localhost:8080/"} id="Y9akSpsxzncZ" outputId="e1d327d2-faed-468e-e386-7982b9f483ea" # practice A = np.array([[3, 2, 3], [4, 19, 6], [74, 2, 9]]) B = np.linalg.inv(A) np.array(A @ B, dtype=int) # get the dot product of A and B then convert it to integers to get the identity matrix # + [markdown] id="8N8z-jD2AgDR" # ##Coding Activity 3 - Python Exercise 3 # Create a Python Program to inverse and transpose a 4x4 matrix # # # # + colab={"base_uri": "https://localhost:8080/"} id="HhafYCzuAkyW" outputId="cd832072-0e30-493e-d313-e32bd79d6174" A = np.array([[6,1,1,3], [4,-2,5,1], [2,8,7,6], [3,1,9,7] ]) InvA = np.linalg.inv(A) # to inverse matrix A TransposeA = np.transpose(A) # to transpose matrix A print('The inverse of Matrix A is: ') print(InvA) # print inverse of matrix A print() print('The tranpose of Matrix A is: ') print(TransposeA) # print transpose of matrix A # + [markdown] id="Mrk9uXoiAgA9" # # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.1 # language: julia # name: julia-1.7 # --- # # Homework 2 Exercise 3 # Now suppose the causal association between age and weight might be dif- # ferent for boys and girls. Use a single linear regression, with a categorical # variable for sex, to estimate the total causal effect of age on weight separately # for boys and girls. How do girls and boys differ? Provide one or more pos- # terior contrasts as a summary. using StatsPlots using StatisticalRethinking: sr_datadir, PI using StatisticalRethinkingCommon using Statistics, StatsBase import CSV using DataFrames using Gen, Distributions @time data = CSV.read(sr_datadir("Howell1.csv"), DataFrame) children = data[data.age .< 13 , :] boys = children[children.male .== 1, :] girls = children[children.male .== 0, :]; @df boys plot(:age, :weight, seriestype=:scatter, title="Age/Weight (Children)", xlabel="Age (years)", ylabel="weight(kg)", labels="boys", legend=:bottomright) @df girls plot!(:age, :weight, seriestype=:scatter, labels="girls") dt = standardize_column!(children, :age, scale=false) StatsBase.transform!(dt,boys.age) StatsBase.transform!(dt,girls.age) dt @gen function children_age_weight_model(ages, males) a0 ~ normal(12., 7.5) b0 ~ gamma(0.25, 4.) noise0 ~ gamma(1., 1.) a1 ~ normal(12., 7.5) b1 ~ gamma(0.25, 4.) noise1 ~ gamma(1., 1.) 
a = [a0 a1] b = [b0 b1] noise = [noise0 noise1] function f(age,male) return a[male+1] + b[male+1] * age end for (i,age) in enumerate(ages) male = Int(males[i]) {(:y, i)} ~ normal(f(age, male), noise[male+1]) end return f end @df boys plot(:age, :weight, seriestype=:scatter, title="Age/Weight (Children)", xlabel="Age (years)", ylabel="weight(kg)", labels="boys", legend=:bottomright) @df girls plot!(:age, :weight, seriestype=:scatter, labels="girls") priors = [children_age_weight_model((), ()) for _ in 1:15] test_xs = Vector(range(minimum(children.age)-1, maximum(children.age)+1, length=1000)) plot!(test_xs, [f.(test_xs, zeros(Int8,1000)) for f in priors[1:7]], labels=nothing) plot!(test_xs, [f.(test_xs, ones(Int8,1000)) for f in priors[8:end]], labels=nothing) plot!(ylim=(minimum(children.weight) - 10, maximum(children.weight) + 10)) # + observations = Gen.choicemap() for (i,weight) in enumerate(children.weight) observations[(:y, i)] = weight end @time traces, = mcmc(children_age_weight_model, (children.age, children.male), observations, warmup=1_500, steps=15_000); # + params = (:a0, :a1, :b0, :b1, :noise0, :noise1) @time inferred,logweight = infer(traces, children_age_weight_model, (children.age,children.male), params) a0 = inferred[:a0] b0 = inferred[:b0] noise0 = inferred[:noise0] a1 = inferred[:a1] b1 = inferred[:b1] noise1 = inferred[:noise1] @info "Parameters:" a0 b0 noise0 a1 b1 noise1 # - @df boys plot(:age, :weight, seriestype=:scatter, title="Age/Weight (Children)", xlabel="Age (years)", ylabel="weight(kg)", labels="boys", legend=:bottomright) @df girls plot!(:age, :weight, seriestype=:scatter, labels="girls") plot!(test_xs, (@. a0 + b0 * test_xs), linewidth=2., ribbon=noise0, labels="posterior (girls)") plot!(test_xs, (@. a1 + b1 * test_xs), linewidth=2., ribbon=noise1, labels="posterior (boys)") @df boys plot(:age, :weight, seriestype=:scatter, title="Age/Weight (Children)", xlabel="Age (years)", ylabel="weight(kg)", labels="boys", legend=:bottomright) @df girls plot!(:age, :weight, seriestype=:scatter, labels="girls") plot!(test_xs, (@. a0 + b0 * test_xs), labels="posterior (girls)", linewidth=2.) plot!(test_xs, (@. a1 + b1 * test_xs), labels="posterior (boys)", linewidth=2.) test_predictions_girls = @. rand(Normal(a0 + b0 * girls.age, noise0)) test_predictions_boys = @. rand(Normal(a1 + b1 * girls.age, noise1)) plot!(children.age, test_predictions_girls, seriestype=:scatter, alpha=1/2, label="model (girls)") plot!(children.age, test_predictions_boys, seriestype=:scatter, alpha=1/2, label="model (boys)") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ANTARES) # language: python # name: antares_py3.7 # --- __author__ = ' <>, <>' __version__ = '20200114' # yyyymmdd __datasets__ = [''] __keywords__ = ['ANTARES', 'variable'] # # Exploring Elastic Search Database to Find R Corona Borealis Stars # # *, & ANTARES Team* # ### Table of contents # * [Goals & notebook summary](#goals) # * [Disclaimer & Attribution](#attribution) # * [Imports & setup](#import) # * [Authentication](#auth) # * [First chapter](#chapter1) # * [Resources and references](#resources) # # # Goals # This notebook is an example of how to explore the ANTARES alert database for variable stars. 
Here we use the infrared color selection of candidate R Coronae Borealise stars, and search the ZTF time-series photometry to see if there are unknown R CrBs revealing themselves by a significant, irregular dimming (up to 8 mag). # # Summary # We first obtain the candidate R CrBs from WISE color, selected by Tisserand et al. (2012). We then use the ANTARES search API to find time-series photometry of each candidate, and looked for R CrBs candidates that show more than 2 magnitude variability in either g- or r-band. In the end, we use ZTF18abhjrcf as a showcase. # # # # Disclaimer & attribution # If you use this notebook for your published science, please acknowledge the following: # # * Data Lab concept paper: al., "The NOAO Data Laboratory: a conceptual overview", SPIE, 9149, 2014, http://dx.doi.org/10.1117/12.2057445 # # * Data Lab disclaimer: http://datalab.noao.edu/disclaimers.php # # # Imports and setup from antares import dev_kit as dk import matplotlib.pyplot as plt import pandas as pd # # # Read in relevant tables # We use the candidate list from Tisserand (2012), dropping candidates below Dec<-30 (in the ZTF field). Here we read in the list of candidate, specifically their ra and dec. catalog=pd.read_csv('WISE_RCrB.dat') ra=catalog['ra'] dec=catalog['dec'] # # # Querying ANTARES alert database # This cell shows how to call elastic search with ANTARES API. It can search on ZTF object id, RA, Dec, or other properties. For our purpose, we search for variabilities larger than 2 mag in either g- or r-band. For illustration purpose, we only search variability in three of the candidate (id = 228-230) # + from antares_client.search import search #for i in range(len(ra)): #this line will search for the full candidate list for variability for i in [228, 229, 230]: print("Iteration: ",i, "ra",ra[i],"dec",dec[i]) query = { "query": { "bool": { "must": [ { "range": { "ra": { "gte": ra[i]-1./3600., "lte": ra[i]+1./3600., } } }, { "range": { "dec": { "gte": dec[i]-1./3600., "lte": dec[i]+1./3600. } } } ] } } } result_set = search(query) galerts = [ alert for alert in result_set if ( alert["survey"] == 1 and len(alert["properties"]) and alert["properties"]["ztf_fid"] == "1" ) ] gmag = [float(alert["properties"]["ztf_magpsf"]) for alert in galerts] ralerts = [ alert for alert in result_set if ( alert["survey"] == 1 and len(alert["properties"]) and alert["properties"]["ztf_fid"] == "2" ) ] rmag = [float(alert["properties"]["ztf_magpsf"]) for alert in ralerts] if len(gmag)>1 and max(gmag)-min(gmag) > 2.: print("Got a hit on locus_id: ",result_set[0]["locus_id"]," in g-band") if len(rmag)>1 and max(rmag)-min(rmag) > 2.: print("Got a hit on locus_id: ",result_set[0]["locus_id"]," in r-band") # - # # # Extracting light curve related properties # # # Looks like we got a hit. Let's have a look at the last one (locus_id 425493). We first extract relevant properties (MJD, Mag, Mag_err) from this locus. 
# + galerts = [ alert for alert in result_set if ( alert["survey"] == 1 and len(alert["properties"]) and alert["properties"]["ztf_fid"] == "1" ) ] gdate = [float(alert["properties"]["ztf_jd"]) for alert in galerts] gmag = [float(alert["properties"]["ztf_magpsf"]) for alert in galerts] gerr = [float(alert["properties"]["ztf_sigmapsf"]) for alert in galerts] ralerts = [ alert for alert in result_set if ( alert["survey"] == 1 and len(alert["properties"]) and alert["properties"]["ztf_fid"] == "2" ) ] rdate = [float(alert["properties"]["ztf_jd"]) for alert in ralerts] rmag = [float(alert["properties"]["ztf_magpsf"]) for alert in ralerts] rerr = [float(alert["properties"]["ztf_sigmapsf"]) for alert in ralerts] # - # Having the time-series photometry in hand, we can plot the light curve. plt.ylim(max(gmag)+0.1*(max(gmag)-min(gmag)),min(rmag)-0.1*(max(rmag)-min(rmag))) plt.scatter(rdate, rmag, c='red', alpha=0.5) plt.scatter(gdate, gmag, c='green', alpha=0.5) plt.title('Light curve of Locus_id=425493') plt.xlabel('Time [Julian date]') plt.ylabel('Magnitude in g- and r-passband') plt.show() # # Concluding remarks # # Locus_id 425493 (=ZTF18abhjrcf) shows a rapid dropping of more than 2 magnitudes in g-band, and consistent dimming in the r-band as well (thought not as much as g-band). This is similar to the sudden dimming seen in R Coronae Borealis. We subsequently obtained spectra of this source, and confirmed its RCB nature. # # Resources and references # Further reading: # # AAVSO introduction on R Coronae Borealis stars: https://www.aavso.org/vsots_rcrb # # Tisserand (2012) "Tracking down R Coronae Borealis stars from their mid-infrared WISE colours". A&A, 539, 51: https://ui.adsabs.harvard.edu/abs/2012A&A...539A..51T # # Tisserand et al. (2013) "The ongoing pursuit of R Coronae Borealis stars: the ASAS-3 survey strikes again". A&A, 551, 22: https://ui.adsabs.harvard.edu/abs/2013A&A...551A..77T # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Basics # # ### The ABC of Computational Text Analysis (Spring 2022) # ### BA-Seminar at University of Lucerne by # ## Variables # + # define variable* x = "at your service" y = 2 z = ", most of the time." # combine variables int_combo = y * y # for numbers any mathematical operation str_combo = x + z # for text only concatenation with + # show content of variable print(str_combo) # - # ## Data Type Conversion # check the type type(x) # + # convert types (similar for other types) int('100') # convert to integer str(100) # convert to string # include variable in a f-string x = 3 mixed = f"x has the value: {x}" print(mixed) # - # ## Assign vs. 
Compare # + # assign a value to a variable x = 1 word = "Test" # compare two values if they are identical 1 == 2 # False word == "Test" # True # + [markdown] tags=[] # ## iterate with for-loop # + sentence = ["This", "is", "a", "sentence"] # iterate over each element for token in sentence: # do something with the element print(token) # - # ## condition with if-else statement # + sentence = ['This', 'is', 'a', 'sentence'] if len(sentence) == 3: print('This sentence has exactly 3 tokens') elif len(sentence) < 5: print('This sentence has less than 5 tokens') else: print('This sentence is longer than 5 tokens') # - # ## Methods # + tokens = "This is a sentence".split(" ") # split at whitespace print(tokens, type(tokens)) # check the variable tokens.append(".") # add something to a list tokens = " ".join(tokens) # join elements to string print(tokens, type(tokens)) # check the variable # - # ## Functions # + # define a new function def word_properties(word): """ Print some properties of the provided word. It takes any string as argument (variable word). """ # print(), len() and sorted() are functions themeselves word_length = len(word) sorted_letters = sorted(word, reverse=True) print(f"Properties for the word '{word}':") print("length:", word_length, "letters:", sorted_letters) word_properties("computer") # call function with the word "computer" # + [markdown] tags=[] # ## Indexing # + sentence =["This", "is", "a", "sentence"] # element at position X first_tok = sentence[0] # 'This' print(first_tok) # elements of subsequence [start:end] sub_seq = sentence[0:3] # ['This', 'is', 'a'] print(sub_seq) # elements of subsequence backwards sub_seq_back = sentence[-2:] # ['a', 'sentence'] print(sub_seq_back) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # # Sort # # ### Introduction # # Sorting is the process of placing elements from a collection in some kind of order. This suggests that sorting is an important area of study in computer science and sorting also provides instructive examples of how algorithms and data structures work together, and of how the correct choice of data structure can make dramatic improvements in execution speed and memory usage. # # An understanding of recursion allows us to consider more sophisticated procedures for sorting, notably quick sort, one of the fastest sorting algorithms. An understanding of recursive algorithms also leads also to the recognition that data structures themselves can be recursive. This insight is illustrated through an analysis of the heap sort algorithm, in which a recursive data structure known as a heap is used for the purposes of sorting. # ### 1. What is sorting? # # #### Definition # Suppose we have a collection, $C$, of items of data, (and we limit our discussion to dealing mostly with linear structures). Sorting $C$ means rearranging (reordering) its data items, based on an ordering property of each item, into ascending or descending order (and a reordering of a sequence has exactly the same items as the original sequence, but in a different order). # # If it is sorted into ascending order then if item $A$ comes before item $B$ it must also be true that $A \leq B$, according to the particular ordering property we are using. If $C$ is sorted in descending order then we would have $A \geq B$. 
# # A formal, mathematical statement of sorting in, say, ascending order would run as follows:
#
# Given a sequence of $n$ items $(x_1, x_2, \dots, x_n)$, find a reordering $(x'_1, x'_2, \dots, x'_n)$ such that $x'_1 \leq x'_2 \leq \dots \leq x'_n$.
#
# This leads us into writing our specification for ascending sort:
#
# | | |
# |---|---|
# | **Name:** | Sort |
# | **Inputs:** | A sequence of elements $C = \{c_1, c_2, c_3, ..., c_n\}$ |
# | **Outputs:** | A re-ordering of $C$: $C' = \{c'_1, c'_2, c'_3, ..., c'_n\}$ |
# | **Preconditions:** | All $c_i$ in $C$ must have the same ordering property |
# | **Postcondition:** | $c'_1 \leq c'_2 \leq c'_3 \leq ... \leq c'_n$ |
#
# and we continue, in general terms, but conduct our discussion in terms of ascending sort for simplicity.

# #### Complexity
#
# To sort a small number of items, a complex sorting method may be more trouble than it is worth and the overhead may be too high – a simple, rough-and-ready, unsystematic method may suffice. We often refer to such simple strategies as naive sorting or straight sorting methods.
#
# Sorting a large number of items can take a substantial amount of computing resources and the efficiency of a sorting algorithm becomes material, and every significant improvement can matter. Two different units of computation are commonly considered when measuring the complexity and evaluating the overall efficiency of a sorting algorithm (a small counting sketch follows this list):
#
# 1. Comparison, the most commonly used unit, where we determine if one item is larger or smaller than another; and
# 2. Swap, if, during sorting, two values are found to be out of order, exchanging the positions of the two items is a swap. This exchange is costly.
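#
# As a rough illustration of these two units of cost, the short sketch below (not part of the original text) counts comparisons and swaps while sorting a small list with a simple exchange-based pass structure.

# +
def count_comparisons_and_swaps(alist):
    """Sort a copy of alist by repeated exchange passes and count the work done."""
    data = list(alist)
    comparisons = 0
    swaps = 0
    for passnum in range(len(data) - 1, 0, -1):
        for i in range(passnum):
            comparisons += 1                       # one comparison per inner step
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swaps += 1                         # one swap when a pair is out of order
    return data, comparisons, swaps

print(count_comparisons_and_swaps([54, 26, 93, 17, 77, 31, 44, 55, 20]))
# -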
    # ### 2. Naive Sorting # #### Bubble Sort # This algorithm iterates several times over the list. On the first pass it ‘bubbles’ the largest item up to its correct place, through a series of swaps. On the next pass it does the same with the next largest item, and so on, but the length of the section of the sequence over which further comparisons and swaps are made are reduced by 1 containing comparison to the unsorted portion. In this way, the items at the end, which are in their correct positions, no long need to be compared (as they have been sorted). # # + def bubbleSort(alist): for passnum in range(len(alist)-1,0,-1): for i in range(passnum): if alist[i]>alist[i+1]: temp = alist[i] alist[i] = alist[i+1] alist[i+1] = temp # print(alist) TO SEE EACH STEP # Note the use of three-step assignment rather than simulatenous assignment, i.e. a, b = b, a. alist = [54,26,93,17,77,31,44,55,20] bubbleSort(alist) print(alist) # - # To analyze the bubble sort, we note that regardless of how the items are arranged in the initial list, $n−1$ passes will be made to sort a list of size $n$. Hence, the total number of comparisons is the sum of the first $n−1$ integers. The sum $n$ integers is $\frac{1}{2}n^2 + \frac{1}{2}n$ and thus the sum $n-1$ integers is $\frac{1}{2}n^2 + \frac{1}{2}n - n = \frac{1}{2}n^2 - \frac{1}{2}n$. Hence, there are $O(n^2)$ comparisons. In the best case, if the list is already ordered, no exchanges will be made. However, in the worst case, every comparison will cause an exchange. On average, we exchange half of the time. # A bubble sort is often considered the most inefficient sorting method since it must exchange items before the final location is known. These “wasted” exchange operations are very costly. However, because the bubble sort makes passes through the entire unsorted portion of the list, it has the capability to do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must be sorted. A bubble sort can be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop. # The next piece of code shows this modification, which is often referred to as the short bubble. # + def shortBubbleSort(alist): exchanges = True passnum = len(alist)-1 while passnum > 0 and exchanges: exchanges = False for i in range(passnum): if alist[i]>alist[i+1]: exchanges = True temp = alist[i] alist[i] = alist[i+1] alist[i+1] = temp # print(alist) passnum = passnum-1 alist=[20,30,40,90,50,60,70,80,100,110] shortBubbleSort(alist) print(alist) # - # If no exchanges are made during an inner loop, this means the list must already be sorted. No further processing is necessary, and so the short bubble sort algorithm stops at this point, rather than continue with unnecessary iterations. The Boolean variable exchanges changes from false to true as soon as an exchange is made. If it has not changed to true during an iteration of the inner loop, the algorithm terminates. # Both variants of bubble sort are afflicted with one particular potential problem, known as the problem of the hares and the tortoises: # # # The item 99 (a hare), located at the beginning of the list, makes its way speedily up to its place at the end. By contrast, 1 (a tortoise) is situated at the end of the list and crawls down to its correct place at the beginning with tortuous slowness. 
# # In bubble sort, once the largest item in the sequence is encountered, it moves to its place at the end in a single pass. Larger items near the beginning of the sequence are the hares and are quickly swapped up to the end in this way. Smaller items near the end of the sequence are the tortoises, which trudge down to their places at the beginning only after all n − 1 iterations have occurred. # # As with any algorithm, the actual input data that bubble sort is working on may seriously affect its performance. A worst-case scenario for bubble sort is that in which the input list it is given is already sorted, but in reverse order. In such a reverse-sorted list, tortoises will predominate. # # Can anything be done about this? Many have tried, and the bubble sort algorithm has been subjected to many tweaks over the years; but analysis shows that the bubble sort and its minor improvements are inferior to both the insertion and the selection sorts; and in fact the bubblesort has hardly anything to recommend it except its catchy name. # + #### Bubble Sort # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction # Here is a simple approach to prediction of annual water usage where I practiced time-series forecasting. # # **Box-Jenkins Method is used for data analysis using non-seasonal ARIMA (non-stationary) model.** # # The dataset provides the annual water usage in Baltimore from 1885 to 1963, or 79 years of data. # # The values are in the units of liters per capita per day, and there are 79 observations. # # The dataset is credited to Hipel and McLeod, 1994. # # # The RMSE performance measure and walk-forward validation are used for model evaluation. 
# # Literature: # # [Time Series Analysis and Forecasting by Example](https://books.google.si/books/about/Time_Series_Analysis_and_Forecasting_by.html?id=Bqm5kJC8hgMC&printsec=frontcover&source=kp_read_button&redir_esc=y#v=onepage&q&f=false) # # [How to Remove Trends and Seasonality with a Difference Transform in Python](https://machinelearningmastery.com/remove-trends-seasonality-difference-transform-python/) # # [Autocorrelation in Time Series Data](https://www.influxdata.com/blog/autocorrelation-in-time-series-data/) # ## Import libraries # + from matplotlib import pyplot import matplotlib.cm as cm # %matplotlib inline import numpy as np from pandas import read_csv, concat, Grouper, DataFrame, datetime, Series from pandas.plotting import lag_plot, autocorrelation_plot import warnings from statsmodels.tsa.arima_model import ARIMA, ARIMAResults from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.stattools import adfuller from sklearn.metrics import mean_squared_error # - # # Data preparation # ### Import data and split to train/test and validation set # + series = read_csv('water.csv', header=0, index_col=0, parse_dates=True, squeeze=True) split_point = len(series) - 10 # how may points for validation dataset, validation = series[0:split_point], series[split_point:] print('Dataset: %d time points \nValidation: %d time points' % (len(dataset), len(validation))) dataset.to_csv('dataset.csv', header=False) validation.to_csv('validation.csv', header=False) # - # ### Summary statistics # + # summary statistics of time series series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) print(series.describe()) # line plot pyplot.figure(num=0, figsize=(5*2.35,5), dpi=80, facecolor='w', edgecolor='k') series.plot() pyplot.xlabel('time (y)') pyplot.ylabel('water usage (lpcd)') pyplot.title('water usage over time') pyplot.grid(True) pyplot.show() # histogram plot pyplot.figure(num=1, figsize=(5*2,5), dpi=80, facecolor='w', edgecolor='k') #pyplot.figure(1) pyplot.subplot(121) series.hist() pyplot.xlabel('water usage (lpcd)') pyplot.ylabel('') pyplot.title('histogram') pyplot.grid(True) # density plot pyplot.subplot(122) series.plot(kind='kde') pyplot.xlabel('water usage (lpcd)') pyplot.ylabel('') pyplot.title('density plot') pyplot.grid(True) pyplot.tight_layout() pyplot.show() # + # boxplots of time series, the last decade is omitted pyplot.figure(num=2, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) groups = series.groupby(Grouper(freq='10YS')) decades = DataFrame() for name, group in groups: if len(group.values) is 10: decades[name.year] = group.values decades.boxplot() pyplot.xlabel('time (decade)') pyplot.ylabel('water usage (lpcd)') pyplot.title('boxplot, groupd by decades') pyplot.show() # - # ## Persistence model - baseline RMSE # + # evaluate a persistence model # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare data X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test_baseline = X[0:train_size], X[train_size:] # walk-forward / Rolling Window / Rolling Forecast validation history = [x for x in train] predictions = list() for i in range(len(test_baseline)): # predict yhat = history[-1] predictions.append(yhat) # observation obs = test_baseline[i] history.append(obs) #print('Predicted=%.3f, Expected=%3.f' % (yhat, obs)) # report performance 
rmse = np.sqrt(mean_squared_error(test_baseline, predictions)) print('Peristence RMSE: %.3f' % rmse) # plot pyplot.figure(num=2, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.plot(test_baseline) pyplot.plot(predictions, color='red') pyplot.xlabel('time') pyplot.ylabel('water usage (lpcd)') pyplot.title('persistence model') pyplot.grid(True) pyplot.show() # - # # Manually configure ARIMA # ## Detrend data by differencing # + # create and summarize a stationary version of the time series # create a differenced series def difference(dataset): diff = list() for i in range(1, len(dataset)): value = dataset[i] - dataset[i - 1] diff.append(value) return Series(diff) series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) X = series.values X = X.astype('float32') # difference data for detrending stationary = difference(X) stationary.index = series.index[1:] # check if stationary result = adfuller(stationary) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) print('Critical Values:') for key, value in result[4].items(): print('\t%s: %.3f' % (key, value)) # plot differenced data stationary.plot() pyplot.title('differenced data') pyplot.xlabel('time (y)') pyplot.ylabel('d(water usage (lpcd)) / dt') pyplot.show() # save stationary.to_csv('stationary.csv', header=False) # - # One step differencing (d=1) appears to be enough. # ## Autocorrelation and partial autoorrelation # #### estimates for lag *p* and order of MA model *q* # # p is the order (number of time lags) of the autoregressive model # # d is the degree of differencing (the number of times the data have had past values subtracted) # # q is the order of the moving-average model # + # ACF and PACF plots of the time series series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) pyplot.figure() pyplot.subplot(211) plot_acf(series, lags=20, ax=pyplot.gca()) pyplot.xlabel('lag (d)') pyplot.subplot(212) plot_pacf(series, lags=20, ax=pyplot.gca()) pyplot.xlabel('lag (d)') pyplot.tight_layout() pyplot.show() # - # A good starting point for the p is 4 and q as 1. # ### Evaluate a manually configured ARIMA model # + # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare dataa X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] # walk-forward validation history = [x for x in train] predictions = list() for i in range(len(test)): # predict model = ARIMA(history, order=(4,1,1)) model_fit = model.fit(disp=0) yhat = model_fit.forecast()[0] predictions.append(yhat) # observation obs = test[i] history.append(obs) #print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs)) # report performance rmse = np.sqrt(mean_squared_error(test, predictions)) print('RMSE: %.3f' % rmse) # - # Worse performance than baseline (persistence) model! 
# # Grid search ARIMA parameters # ### Define functions # + # evaluate an ARIMA model for a given order (p,d,q) and return RMSE def evaluate_arima_model(X, arima_order): # prepare training dataset X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] history = [x for x in train] # make predictions predictions = list() for t in range(len(test)): model = ARIMA(history, order=arima_order) # model_fit = model.fit(disp=0) model_fit = model.fit(trend='nc', disp=0) # disable the automatic addition of a trend constant yhat = model_fit.forecast()[0] predictions.append(yhat) history.append(test[t]) # calculate out of sample error rmse = np.sqrt(mean_squared_error(test, predictions)) return rmse # evaluate combinations of p, d and q values for an ARIMA model def evaluate_models(dataset, p_values, d_values, q_values): dataset = dataset.astype('float32') best_score, best_order = float("inf"), None for p in p_values: for d in d_values: for q in q_values: order = (p,d,q) try: rmse = evaluate_arima_model(dataset, order) if rmse < best_score: best_score, best_order = rmse, order print('ARIMA%s - RMSE = %.3f' % (order, rmse)) except: continue print('\nBest: ARIMA%s - RMSE = %.3f' % (best_order, best_score)) return best_order # - # ### Run on dataset def grid_search(series): # evaluate parameters p_values = range(0, 3) d_values = range(0, 2) q_values = range(0, 3) warnings.filterwarnings("ignore") best_order = evaluate_models(series.values, p_values, d_values, q_values) return best_order # + # load dataset series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # search best_order = grid_search(series) # - # ### Summarize residual errors - *walk-forward validation* def residual_stats(series, best_order, bias=0): print('-----------------------------') # prepare data X = series.values X = X.astype('float32') train_size = int(len(X) * 0.50) train, test = X[0:train_size], X[train_size:] # walk-forward validation history = [x for x in train] predictions = list() for i in range(len(test)): # predict model = ARIMA(history, order=best_order) model_fit = model.fit(trend='nc', disp=0) yhat = bias + float(model_fit.forecast()[0]) predictions.append(yhat) # observation obs = test[i] history.append(obs) #report performance rmse = np.sqrt(mean_squared_error(test, predictions)) print('RMSE: %.3f\n' % rmse) # errors residuals = [test[i]-predictions[i] for i in range(len(test))] residuals = DataFrame(residuals) residuals_mean = residuals.mean() print('RESIDUAL STATISTICS \n') print(residuals.describe()) # plot pyplot.figure(num=None, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(211) residuals.hist(ax=pyplot.gca()) pyplot.xlabel('residual') pyplot.ylabel('') pyplot.title('histogram') pyplot.grid(True) pyplot.subplot(212) residuals.plot(kind='kde', ax=pyplot.gca()) pyplot.xlabel('residual') pyplot.ylabel('') pyplot.title('density plot') pyplot.grid(True) pyplot.tight_layout() pyplot.show() return residuals_mean # + series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) residuals_mean = residual_stats(series, best_order) # - # We can see that the # distribution has a right shift and that the mean is non-zero at around 1.0. This is perhaps a sign # that the predictions are biased. # # The distribution of residual errors is also plotted. 
The graphs suggest a Gaussian-like # distribution with a longer right tail, providing further evidence that perhaps a power transform # might be worth exploring. # # We could use this information to bias-correct predictions by adding the mean residual error # of 1.081624 to each forecast made. # ### Make bias corrected forecasts # _ = residual_stats(series, best_order, bias = residuals_mean) # Not much of an improvement after bias correction. # # Save finalized model to file # + # monkey patch around bug in ARIMA class def __getnewargs__(self): return ((self.endog),(self.k_lags, self.k_diff, self.k_ma)) ARIMA.__getnewargs__ = __getnewargs__ def save_model(best_order, model_name): # load data series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) # prepare data X = series.values X = X.astype('float32') # fit model model = ARIMA(X, order=best_order) model_fit = model.fit(trend='nc', disp=0) # bias constant bias = residuals_mean # save model model_fit.save(f'model_{model_name}.pkl') np.save(f'model_bias_{model_name}.npy', [bias]) # - save_model(best_order_box, model_name='simple') # # Load and evaluate the finalized model on the validation dataset def validate_models(model_names): # load train dataset dataset = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True) X = dataset.values.astype('float32') history = [x for x in X] # load validation dataset validation = read_csv('validation.csv', header=None, index_col=0, parse_dates=True, squeeze=True) y = validation.values.astype('float32') # plot pyplot.figure(num=None, figsize=(5,5), dpi=80, facecolor='w', edgecolor='k') pyplot.plot(y, color=cm.Set1(0), label='actual') for ind, model_name in enumerate(model_names): # load model model_fit = ARIMAResults.load(f'model_{model_name}.pkl') bias = np.load(f'model_bias_{model_name}.npy') # make first prediction predictions = np.ones(len(y)) yhat = bias + float(model_fit.forecast()[0]) predictions[0] = yhat history.append(y[0]) #print('>Predicted=%.3f, Expected=%3.f' % (yhat, y[0])) # rolling forecasts for i in range(1, len(y)): # predict model = ARIMA(history, order=(2,1,0)) model_fit = model.fit(trend='nc', disp=0) yhat = bias + float(model_fit.forecast()[0]) predictions[i] = yhat # observation obs = y[i] history.append(obs) # print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs)) rmse = np.sqrt(mean_squared_error(y, predictions)) print(f'RMSE {model_name}: %.3f' % rmse) pyplot.plot(predictions, color=cm.Set1(ind+1), label=f'{model_name} predict') pyplot.xlabel('time (d)') pyplot.ylabel('water usage (lpcd)') pyplot.title('Validation') pyplot.legend() pyplot.grid(True) pyplot.show() validate_models(model_names=['simple']) # # Comparison of detrend approaches # ### Linear detrend & Box-Cox transform # + from statsmodels.tsa.tsatools import detrend from scipy.stats import boxcox figsize = (8,4) series = read_csv('dataset.csv', header=0, index_col=0, parse_dates=True, squeeze=True) #print(series.describe()) pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series.plot(color='k', label='data') pyplot.subplot(122) series.plot(kind='kde', color='k', label='data') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # --- linear detrend --- series_linear = detrend(series) #print(result.describe()) pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series_linear.plot(color='k', label='linear detrend') 
pyplot.subplot(122) series_linear.plot(kind='kde', color='k', label='linear detrend') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # --- Box-Cox transform --- series_boxcox, lam = boxcox(series) series_boxcox = Series(series_new) # plot pyplot.figure(num=None, figsize=figsize, dpi=80, facecolor='w', edgecolor='k') pyplot.subplot(121) series_boxcox.plot(color='k', label='Box-Cox') pyplot.subplot(122) series_boxcox.plot(kind='kde', color='k', label='Box-Cox') pyplot.xlabel('') pyplot.ylabel('density') pyplot.title('density plot') pyplot.legend() pyplot.show() # - best_order_simple = grid_search(series) best_order_linear = grid_search(series_linear) best_order_boxcox = grid_search(series_boxcox) _ = residual_stats(series, best_order) _ = residual_stats(series_linear, best_order) _ = residual_stats(series_boxcox, best_order) # + save_model(best_order_simple, model_name='simple') save_model(best_order_linear, model_name='linear') save_model(best_order_boxcox, model_name='boxcox') validate_models(model_names = ['simple', 'linear', 'boxcox']) # - # Predictions with linear detrend and Box-Cox transform have lower RMSE, probably statistically insignificant. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:test_conda_s2v] # language: python # name: conda-env-test_conda_s2v-py # --- # # How to use dataloader import pandas as pd import sklearn2vantage as s2v from sklearn.datasets import load_digits from sqlalchemy import create_engine def make_df(loader): data = loader() columns = data.get("feature_names") df = pd.DataFrame(data.data, columns=columns) df["target"] = pd.Series(data.target) return df engine = create_engine("teradata://dbc:dbc@172.16.58.3:1025/tdwork") df_digits= make_df(load_digits) # + jupyter={"outputs_hidden": true} # replace s2v.tdload_df(df_digits, engine, tablename="tdload_digits", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="") # - pd.read_sql_query("select count(*) from tdload_digits", engine) # + jupyter={"outputs_hidden": true} # insert s2v.tdload_df(df_digits, engine, tablename="tdload_digits", ifExists="insert", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="") # - pd.read_sql_query("select count(*) from tdload_digits", engine) # + jupyter={"outputs_hidden": true} # errror if table already exists s2v.tdload_df(df_digits, engine, tablename="tdload_digits", ifExists="error", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="") # + jupyter={"outputs_hidden": true} # saveIndex, make index ("index"), which is unique, and use xz compressing for scp s2v.tdload_df(df_digits, engine, tablename="tdload_digits", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="", saveIndex=True, indexList=["index"], isIndexUnique=True, compress="xz") # - df_titanic = pd.read_csv("titanic/train.csv").set_index("PassengerId") df_titanic.head() # + jupyter={"outputs_hidden": true} # test with titanic data, with less verbosity s2v.tdload_df(df_titanic, engine, tablename="titanic_train", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="", saveIndex=True, indexList=["PassengerId"], verbose=False) # - df_house = pd.read_csv("house-prices-advanced-regression-techniques/train.csv") df_house.head() # + jupyter={"outputs_hidden": true} # test with house data, with gz compressing s2v.tdload_df(df_house, engine, 
tablename="house_train", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="", compress="gz") # - # create sample data df_sample = pd.DataFrame({"c_int":[1,2,3,4], "c_float":[1.2, -0.3, 0.9, 1.5], "c_bigint":[int(1e12), int(2e12), int(3e12), int(4e12)], "c_bool":[True,False,True,False], "c_string":["a", "b", "c", "d"], "c_list":[[1,2,3], [4,5,6], [7,8,9], [1,2,3]], "c_timedelta":pd.to_timedelta(["00:01:02", "00:03:01", "05:02:01", "1:01:08"]), "c_date":pd.to_datetime(["2020/1/1","1998/12/2", "1980/4/1","2008/1/1"]), "c_timestamp":pd.to_datetime(["2020/1/1 10:10:10.09", "2020/1/1 10:10:10", "2020/1/1 10:10", "2020/1/1 10:10:05"])}) df_sample.head() # + jupyter={"outputs_hidden": true} # test with sample data s2v.tdload_df(df_sample, engine, tablename="t_sample", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="", compress="bz2") # + jupyter={"outputs_hidden": true} # test with sample data and data type overwriting s2v.tdload_df(df_sample, engine, tablename="t_sample", ifExists="replace", ssh_ip="172.16.58.3", ssh_username="root", ssh_password="", compress="bz2", dtype={"c_int": "varchar(5)", "c_bigint":"float"}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Sparks Foundation # ## Task2 - Prediction using Unsupervised ML # - From the given ‘Iris’ dataset, predict the optimum number of clusters and represent it visually. # - Use R or Python or perform this task # - Data can be found at https://bit.ly/3kXTdox -> save as **Iris.csv** # ### Importing the libraries import pandas as pd import matplotlib.pyplot as plt import seaborn as sns #import sklearn from sklearn.cluster import KMeans #from sklearn.metrics import silhouette_score from sklearn.model_selection import train_test_split # ### Loading the dataset # Loading the dataset df = pd.read_csv("Iris.csv") df # Let's check the relationship between the columns sns.set_style("whitegrid") sns.pairplot(df,hue="Species",height=3) plt.show() # Clustering is an unsupervised learning method that allows us to group set of objects based on similar characteristics. In general, it can help you find meaningful structure among your data, group similar data together and discover underlying patterns. # # One of the most common clustering methods is **K-means algorithm**. The goal of this algorithm is to partition the data into set such that the total sum of squared distances from each point to the mean point of the cluster is minimized. 
# + # last column values excluded x = df.iloc[:, :-1].values # last column value y = df.iloc[:, -1].values # Splitting the dataset into the Training set and Test set x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=0) # - # Let's check the optimal k Sum_of_squared_distances = [] K = range(1,10) for num_clusters in K : kmeans = KMeans(n_clusters=num_clusters) kmeans.fit(df[["SepalLengthCm","SepalWidthCm","PetalLengthCm","PetalWidthCm"]]) Sum_of_squared_distances.append(kmeans.inertia_) plt.plot(K,Sum_of_squared_distances,"bx-") plt.xlabel("Values of K") plt.ylabel("Sum of squared distances/Inertia") plt.title("Elbow Method For Optimal k") plt.show() # ### Observing the elbow method for optimal k we can check above that 3 is the number of cluster that we can use # + # Applying Kmeans classifier kmeans = KMeans(n_clusters=3,init = 'k-means++', max_iter = 100, n_init = 10, random_state = 0) y_kmeans = kmeans.fit_predict(x) # Display cluster centers print(f"Cluster centers \n {kmeans.cluster_centers_}") plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1],s = 100, c = 'red', label = 'Iris-setosa') plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1],s = 100, c = 'blue', label = 'Iris-versicolour') plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],s = 100, c = 'green', label = 'Iris-virginica') # Visualising the clusters - On the first two columns # Plotting the centroids of the clusters plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],s = 100, c = 'black', label = 'Centroids') plt.legend() plt.show() # - # ### In the plot above we can observe the centroids of the 3 groups # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 第三章 Factory Pattern # # 從文字上面解釋,就是想成工廠產生不同物件那樣,我今天只要根據不同參數就可以產生不同物件! 那麼書中提到Factory pattern的三種變形 # # - Simple Factory Pattern # - Factory method Pattern # - Abstract Factory Pattern # # 其實第一個跟第二個感覺是沒什麼差別的...精髓都一樣是經由參數來生成不同物件! 下面就是一個簡單的示範藉由參數來建立不同物件! # + from abc import ABCMeta, abstractmethod class Insect(metaclass=ABCMeta): @abstractmethod def intro(self): raise NotImplementedError class Bug(Insect): def intro(self): print("I'm bug") def createInsect(name): cls = globals()[name] return cls() bug = createInsect('Bug') bug.intro() # - # 至於**Abstract Factory Pattern**,書中舉的例子老實講,我並沒有特別的感受,倒不如說我只覺得不就是定義共同interface而已嗎? 
以下來看看書中例子 # + class PizzaFactory(metaclass=ABCMeta): @abstractmethod def createVegPizza(self): pass @abstractmethod def createNonVegPizza(self): pass class IndianPizzaFactory(PizzaFactory): def createVegPizza(self): return DeluxVeggiePizza() def createNonVegPizza(self): return ChickenPizza() class USPizzaFactory(PizzaFactory): def createVegPizza(self): return MexicanVegPizza() def createNonVegPizza(self): return HamPizza() # 這邊定義了pizza工廠應該要有的interface就是做素跟非素兩種pizza class VegPizza(metaclass=ABCMeta): @abstractmethod def prepare(self, VegPizza): pass class NonVegPizza(metaclass=ABCMeta): @abstractmethod def serve(self, VegPizza): pass class DeluxVeggiePizza(VegPizza): def prepare(self): print("Prepare ", type(self).__name__) class ChickenPizza(NonVegPizza): def serve(self, VegPizza): print(type(self).__name__, " is served with Chicken on ", type(VegPizza).__name__) class MexicanVegPizza(VegPizza): def prepare(self): print("Prepare ", type(self).__name__) class HamPizza(NonVegPizza): def serve(self, VegPizza): print(type(self).__name__, " is served with Ham on ", type(VegPizza).__name__) class PizzaStore: def __init__(self): pass def makePizzas(self): for factory in [IndianPizzaFactory(), USPizzaFactory()]: self.NonVegPizza = factory.createNonVegPizza() self.VegPizza = factory.createVegPizza() self.VegPizza.prepare() self.NonVegPizza.serve(self.VegPizza) pizza = PizzaStore() pizza.makePizzas() # - # 上面就是書中舉的例子,老實講就是定義interface,去繼承實作他而已.. 這章節感覺比較沒什麼需要特別記錄的。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 3 - PySpark Python3 Medium # language: python # name: pyspark3-med # --- sc # + nodes = "c18node14.acis.ufl.edu,c18node2.acis.ufl.edu,c18node6.acis.ufl.edu,c18node10.acis.ufl.edu,c18node12.acis.ufl.edu" index = "idigbio" #{"bool": # {"must": [ # {"term":{"recordset":"95773ebb-2f5f-43f0-a652-bfd8d5f4707a"}} # ] # } #}, query = """{"query": { "bool": { "must": [ { "term": {"recordset": "95773ebb-2f5f-43f0-a652-bfd8d5f4707a"} } ] }}}""" # - field_set = ["uuid", "licenselogourl"] fields = ",".join(field_set) df = (sqlContext.read.format("org.elasticsearch.spark.sql") .option("es.read.field.include", fields) .option("es.nodes", nodes) .option("es.query", query) .load("{0}/mediarecords".format(index)) ) print(df.count()) df.printSchema() df.head(10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import torch import torch.nn as nn import torch.optim as optim from random import randint import time import utils device = torch.device("cpu") print(device) # + from utils import check_mnist_dataset_exists data_path = check_mnist_dataset_exists() # download mnist dataset # 60,000 gray scale pictures as well as their label, each picture is 28 by 28 pixels train_data = torch.load(data_path + 'mnist/train_data.pt') train_label = torch.load(data_path + 'mnist/train_label.pt') test_data = torch.load(data_path + 'mnist/test_data.pt') test_label = torch.load(data_path + 'mnist/test_label.pt') # - class three_layer_net(nn.Module): def __init__(self, input_size, hidden_size1, hidden_size2, output_size): super(three_layer_net , self).__init__() # 三层全连接网络MLP self.layer1 = nn.Linear(input_size, hidden_size1, bias=False) self.layer2 = nn.Linear(hidden_size1, hidden_size2, bias=False) self.layer3 = 
nn.Linear(hidden_size2, output_size, bias=False) def forward(self, x): y = self.layer1(x) # 第一层使用ReLU作为激活函数 y_hat = torch.relu(y) z = self.layer2(y_hat) # 第二层使用ReLU作为激活函数 z_hat = torch.relu(z) # 第三层直接输出 scores = self.layer3(z_hat) # prob = torch.softmax(y, dim=1) # 若这里用了softmax,则 # 1. criterion需要用NLLLoss # 2. 需要对输出的分数求log,即log_scores = torch.log(scores) # 3. 最终的损失loss = criterion(log_scores, label) # 以上的步骤其实就是Cross-Entropy Loss的拆分,NLLLoss实际就是在做归一化 # 若使用LogSoftmax # prob = torch.logsoftmax(y, dim=1) # 则不需要求log return scores net = three_layer_net(784, 50, 50, 10) print(net) utils.display_num_param(net) net = net.to(device) # cross-entropy Loss criterion = nn.CrossEntropyLoss() # criterion = nn.NLLLoss() # batch size = 200 bs = 200 def eval_on_test_set(): running_error = 0 num_batches = 0 # test size = 10000 for i in range(0, 10000, bs): # extract the minibatch minibatch_data = test_data[i: i + bs] minibatch_label = test_label[i: i + bs] # reshape the minibatch, 784 = 28 x 28 # 200 x 784 inputs = minibatch_data.view(bs, 784) # feed it to the network scores = net(inputs) # compute the error made on this batch error = utils.get_error(scores, minibatch_label) # add it to the running error running_error += error.item() num_batches += 1 total_error = running_error / num_batches print('test error = ', total_error * 100, ' percent') # + start = time.time() lr = 0.05 # initial learning rate for epoch in range(200): # learning rate strategy: divide the learning rate by 1.5 every 10 epochs if epoch % 10 == 0 and epoch > 10: lr = lr / 1.5 # create a new optimizer at the beginning of each epoch: give the current learning rate. optimizer = torch.optim.SGD(net.parameters() , lr=lr) running_loss = 0 running_error = 0 num_batches = 0 # 先随机排序 # train size = 60000 shuffled_indices = torch.randperm(60000) # train size = 60000 for count in range(0, 60000, bs): # forward and backward pass # set dL/dU, dL/dV, dL/dW to be filled with zeros optimizer.zero_grad() # 随机抽取200条数据, batch size = 200 indices = shuffled_indices[count: count + bs] minibatch_data = train_data[indices] minibatch_label = train_label[indices] # reshape the minibatch, batch size = 200, 784 = 28 x 28 # 200 x 784 inputs = minibatch_data.view(bs, 784) # tell Pytorch to start tracking all operations that will be done on "inputs" inputs.requires_grad_() # forward the minibatch through the net scores = net(inputs) # log_scores = torch.log(scores) # compute the average of the losses of the data points in the minibatch # 一个batch的平均损失 loss = criterion(scores, minibatch_label) # loss = criterion(log_scores, minibatch_label) # backward pass to compute dL/dU, dL/dV and dL/dW loss.backward() # do one step of stochastic gradient descent: U=U-lr(dL/dU), V=V-lr(dL/dU), ... 
optimizer.step() # compute some stats # 获得当前的loss running_loss += loss.detach().item() error = utils.get_error(scores.detach(), minibatch_label) running_error += error.item() num_batches += 1 # compute stats for the full training set # once the epoch is finished we divide the "running quantities" by the number of batches # 总Loss = 每个Batch的Loss累加 / Batch数量累加 = 所有Batch的Loss / Batch数 # 若Batch Size = 1,则Batch数 = 数据集大小 # 若Batch Size = 数据集大小,则Batch数 = 1 total_loss = running_loss / num_batches # 总Error = 每个Batch的Error累加 / Error数量累加 = 所有Batch的Error / Batch数 # 若Batch Size = ,则Batch数 = 数据集大小 # 若Batch Size = 数据集大小,则Batch数 = 1 total_error = running_error / num_batches # 训练一个batch的时间 elapsed_time = time.time() - start # every 10 epoch we display the stats and compute the error rate on the test set if epoch % 10 == 0 : print(' ') print('epoch = ', epoch, ' time = ', elapsed_time, ' loss = ', total_loss, ' error = ', total_error * 100, ' percent lr = ', lr) eval_on_test_set() # + # choose a picture at random idx = randint(0, 10000-1) im = test_data[idx] # diplay the picture utils.show(im) # feed it to the net and display the confidence scores # im.view(1, 784)而不是im.view(784)是因为net是根据有batch size存在而设计的 # 例如batch size = 200,即im.view(200, 784),则input是[[data_1], [data_2], ..., [data_200]] # 而im.view(784)的input是[data_1],少了一个维度 # 1代表了batch size = 1,就是只有一张图 scores = net(im.view(1, 784)) # one 1 x 784 image, 784 = 28 x 28 # dim=1是因为这里的输出是 # [[-7.2764, 8.4730, 2.6842, 1.6302, -3.8437, -1.9697, -0.5854, -0.0792, 2.0861, -0.5462]] # 需要求里面的维度的softmax probs = torch.softmax(scores, dim=1) utils.show_prob_mnist(probs) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="xaXRKmrNTDtj" executionInfo={"status": "ok", "timestamp": 1603611297371, "user_tz": -540, "elapsed": 1909, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="fd54e5e8-a3c9-4a58-90dc-78c6473e4489" colab={"base_uri": "https://localhost:8080/", "height": 377} # !nvidia-smi # + id="HdJ4Of9JTMRJ" executionInfo={"status": "ok", "timestamp": 1603611317894, "user_tz": -540, "elapsed": 22415, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="4b477603-017a-4c15-bc63-60c0102a9e8e" colab={"base_uri": "https://localhost:8080/", "height": 35} from google.colab import drive drive.mount('/content/drive') # + id="OZCkG0XD5IYV" executionInfo={"status": "ok", "timestamp": 1603611318995, "user_tz": -540, "elapsed": 23506, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="98fa6675-6818-413c-e5fd-34520a8f028a" colab={"base_uri": "https://localhost:8080/", "height": 71} # !ls drive/'My Drive'/'WEB_Ask_06devbros'/'ai'/'chatbot' # + id="OcbVtTcs5TdU" executionInfo={"status": "ok", "timestamp": 1603611331902, "user_tz": -540, "elapsed": 36404, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="11c9fe89-bc0a-4307-eae9-11968d1641e0" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !pip install -r drive/'My Drive'/'WEB_Ask_06devbros'/'ai'/'chatbot'/requirements.txt # + id="LywYq8D-5VwE" executionInfo={"status": "ok", "timestamp": 1603611334855, "user_tz": -540, "elapsed": 39348, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} 
outputId="f1f74fa0-dd58-4d10-a5d0-372aae2db1de" colab={"base_uri": "https://localhost:8080/", "height": 161} # !pip install --upgrade tokenizers==0.8.1.rc1 # + id="BkWNHWvZ5m7q" executionInfo={"status": "ok", "timestamp": 1603611334859, "user_tz": -540, "elapsed": 39349, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} import sys sys.path.append('/content/drive/My Drive/WEB_Ask_06devbros/ai/chatbot') # + id="Th_ZZBGj5ZNS" executionInfo={"status": "ok", "timestamp": 1603611342330, "user_tz": -540, "elapsed": 46817, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} import os import numpy as np import torch from model.kogpt2 import DialogKoGPT2 from kogpt2_transformers import get_kogpt2_tokenizer # + id="OW-ZHy_95hIq" executionInfo={"status": "ok", "timestamp": 1603611402269, "user_tz": -540, "elapsed": 106747, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="50cbad04-e25c-45b9-87fd-c22a6cda51e4" colab={"base_uri": "https://localhost:8080/", "height": 317, "referenced_widgets": ["6eb4c4ea7a814072b918e2517044fb64", "", "7e27252a5f7a443abeae6e3c22caf95d", "88b2df8f98df42c89740a57d8d7ba995", "", "", "4a44acef6fcb4e26bd85b02370053adb", "ad7e008a4b1a4368b2da80d1c5df624b", "", "b919300429434e52a08a7853a41e6e5a", "a2f8e115bad54ccda66ba4d9f51fa3af", "0ef14f998ae34562b63109201840d26e", "55f333781b0843abaf020b827aa68540", "8e9c6644584648469156a54fd82fe284", "", "acefd0d0b96144d1adc387da04c2fa3c", "02c9c04531ed4c8da17ed66e71d101ac", "e36fa5b91ce8484d9c8b6ac7952bca60", "c1e841026c6a433db04a9f734859ac3a", "e6a7f47e603f4162ac8a61b0da9a87b4", "737cfeef992a451ba18d22b77ca25e55", "2217a8f04e754aa49a8d02dca03d9c64", "", "e3e5dfec5c5e4dcebd2a29e59f13d03a", "83f1c23076434edcbd9e1883c41a800e", "", "a69843e124d649bb9263ba49238f14e5", "6581ffeef1d74cd5a6d0c9c448e5a89a", "dff03380331a4751832c4518e38d44ea", "94272af87e6742fe8d189b2202981210", "4b8a5797897442238cde001c000aea3a", "", "", "", "", "", "a028c538a62247909c443ecfc6e2ba05", "b3c8ef0098fe4562beb6e2a78801b628", "249e8017dccf49e79e6fa71dfac903b8", "", "", "", "9816c6e78fd24e8398c99600988ea1d0", "", "10280504cb89466792bad4e4e9e72757", "2c530abfd5d3479d8c23edfff230a158", "659e554813ba4c2c937dc7e3e46f19e2", "a8b06eb6286e450b803a8ef4e386a0f7"]} root_path='/content/drive/My Drive/WEB_Ask_06devbros/ai/chatbot' data_path = f"{root_path}/data/wellness_dialog_for_autoregressive.txt" checkpoint_path =f"{root_path}/checkpoint" save_ckpt_path = f"{checkpoint_path}/kogpt2-wellnesee-auto-regressive-20201022-add-chatbotdata.pth" ctx = "cuda" if torch.cuda.is_available() else "cpu" device = torch.device(ctx) # 저장한 Checkpoint 불러오기 checkpoint = torch.load(save_ckpt_path, map_location=device) model = DialogKoGPT2() model.load_state_dict(checkpoint['model_state_dict']) model.eval() tokenizer = get_kogpt2_tokenizer() count = 0 output_size = 200 # 출력하고자 하는 토큰 갯수 # + id="IB3xnLGjQFFo" executionInfo={"status": "ok", "timestamp": 1603611402570, "user_tz": -540, "elapsed": 107038, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="bfdf13fd-65f0-4180-af0e-686acb08b4cf" colab={"base_uri": "https://localhost:8080/", "height": 377} # !nvidia-smi # + id="vSFIAggML5z3" executionInfo={"status": "error", "timestamp": 1603611612335, "user_tz": -540, "elapsed": 316795, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} 
outputId="ed26a65c-6c8f-4dd2-83b0-913a4f5b3e21" colab={"base_uri": "https://localhost:8080/", "height": 1000} while 1: #for i in range(5): sent = input('Question: ') # '요즘 기분이 우울한 느낌이에요' tokenized_indexs = tokenizer.encode(sent) input_ids = torch.tensor([tokenizer.bos_token_id,] + tokenized_indexs +[tokenizer.eos_token_id]).unsqueeze(0) # set top_k to 50 sample_output = model.generate( input_ids=input_ids ) print("Answer: " + tokenizer.decode(sample_output[0].tolist()[len(tokenized_indexs)+1:],skip_special_tokens=True)) print(100 * '-') # for s in kss.split_sentences(sent): # print(s) # + id="EopdYg6ESrwO" executionInfo={"status": "aborted", "timestamp": 1603611612330, "user_tz": -540, "elapsed": 316787, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="qHTNhOayUJsj" colab_type="code" colab={} import re def clean_words(text): text = re.sub("(<.*?>)","",text) text=re.sub("(\W|\d+)"," ",text) text=text.strip() return text # + id="JVIGcTNeU_h0" colab_type="code" colab={} raw = ['..sleepy', 'sleepy!!', '#sleepy', '>>>>>sleepy>>>>', 'sleepy'] # + id="hhp9G6bPVCE7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ee24db54-f05b-484a-f163-daa8e24be88d" clean = [clean_words(r) for r in raw] print(clean) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Understand Data # %cd .. # + import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pickle from Bio.Seq import Seq from Bio import SeqIO from Bio.SeqUtils.ProtParam import ProteinAnalysis # - # ### Data Imports # + input_path = "input_data/" classes = ["cytosolic", "secreted", "mitochondrial", "nuclear"] # Get dictionaries for labels label2index = {key:i for i, key in enumerate(classes)} index2label = dict(zip(label2index.values(), label2index.keys())) # Get training data datasets = dict() for label in classes: datasets[label] = list(SeqIO.parse(input_path+label+".fasta", "fasta")) # Get test data blind_test_data = list(SeqIO.parse(input_path+"blind_test.fasta", "fasta")) # Get number of examples in each category counts = {key:len(val) for i, (key,val) in enumerate(datasets.items())} counts["total"] = sum([len(sublist) for keys, sublist in datasets.items()]) print(label2index) print(counts) print("Number of test data points: ", len(blind_test_data)) # - # Insights: # 1. Need to break up data into 80% train, 10% validation and 10% test. Which take an equal proportion from each class so that proportion of each class same in all data sets. Presents problem of biased towards one class datasets. # ### Understand Data # SeqIO: https://biopython.org/wiki/SeqIO
    # SeqRecord class: https://biopython.org/wiki/SeqRecord # One entry of train data print(datasets[label][0]) print() print("ID: ", datasets[label][0].id) print("Sequence: ", datasets[label][0].seq) print("Name: ", datasets[label][0].name) print("Description: ", datasets[label][0].description) print("Features: ", datasets[label][0].features) print("Annotations: ", datasets[label][0].annotations) # One entry of test data print(blind_test_data[0]) print() print("ID: ", blind_test_data[0].id) print("Sequence: ", blind_test_data[0].seq) print("Name: ", blind_test_data[0].name) print("Description: ", blind_test_data[0].description) print("Features: ", blind_test_data[0].features) print("Annotations: ", blind_test_data[0].annotations) # ### Understand Features # ProtoParam module: https://biopython.org/wiki/ProtParam
    # ProteinAnalysis class: https://biopython.org/DIST/docs/api/Bio.SeqUtils.ProtParam.ProteinAnalysis-class.html
    # ProtoParamData: https://biopython.org/DIST/docs/api/Bio.SeqUtils.ProtParamData-pysrc.html # + # Need to convert SeqRecord.seq object to string for ProteinAnalysis analysed_seq = ProteinAnalysis(str(datasets[label][0].seq)) print("Sequence: ", analysed_seq.sequence) print() print("Count amino acids:\n", analysed_seq.count_amino_acids()) print("Percent amino acids:\n", analysed_seq.get_amino_acids_percent()) print() print("Length: ", analysed_seq.length) print("Molecular weight of protein: ", analysed_seq.molecular_weight()) print("Aromaticity: ", analysed_seq.aromaticity()) print("Instability Index: ", analysed_seq.instability_index()) print("Isoelecric point: ", analysed_seq.isoelectric_point()) # [helix, turn, sheet] print("Secondary structure fraction: ", analysed_seq.secondary_structure_fraction()) print("Charge at pH: ", analysed_seq.charge_at_pH(7)) print("Gravy: ", analysed_seq.gravy()) # [reduced, oxidized] print("Molar extinction coefficient: ", analysed_seq.molar_extinction_coefficient()) print("Monoisotopic: ", analysed_seq.monoisotopic) # print("Flexibility: ", analysed_seq.flexibility()) print() protein_scale = analysed_seq.protein_scale(pickle.load(open("input_data/scales/kd.pickle","rb")), 5) print("Protein scale:\n", protein_scale) # - # ### Look at feature distribution of data full_data = list() for i, (key, val) in enumerate(datasets.items()): full_data.extend(val) # Preprocess the sequence to remove `X` and `B` and `U` amino_acids = ['A','B','C','D','E','F','G','H','I','K','L','M','N','P','Q','R','S','T','U','V','W','Y'] for example in full_data: tmp = list(str(example.seq)) for i, char in enumerate(tmp): if char=='X': char = np.random.choice(amino_acids) tmp[i] = char if char=='B': if np.random.uniform() > 0.5: tmp[i] = 'D' else: tmp[i] = 'N' elif char=='U': tmp[i] = 'C' tmp = ''.join(tmp) example.seq = Seq(tmp) analysed_data = [ProteinAnalysis(str(example.seq)) for example in full_data] print("Before: ", len(analysed_data)) analysed_data = [example for example in analysed_data if example.length < 2000] print("After: ", len(analysed_data)) features = dict() features["length"] = [example.length for example in analysed_data] features["molecular_weight"] = [example.molecular_weight() for example in analysed_data] features["isoelectric_point"] = [example.isoelectric_point() for example in analysed_data] features["aromaticity"] = [example.aromaticity() for example in analysed_data] features["instability_index"] = [example.instability_index() for example in analysed_data] features["charge_at_ph7"] = [example.charge_at_pH(7) for example in analysed_data] features["gravy"] = [example.gravy() for example in analysed_data] plt.figure(figsize=(10,4)) sns.distplot(features["length"], hist = False, color = 'purple') plt.title('Length of Examples in dataset') plt.xlabel('Lengths') plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["molecular_weight"], hist = False, color = 'purple') plt.title('Molecular Weights of Examples in dataset') plt.xlabel('Molecular Weights') plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["isoelectric_point"], hist = False, color = 'purple') plt.title('Isoelectric Points of Examples in dataset') plt.xlabel('Isoelectric Points') plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["aromaticity"], hist = False, color = 'purple') plt.title('Aromaticity of Examples in dataset') plt.xlabel('Aromaticity') 
plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["instability_index"], hist = False, color = 'purple') plt.title('Instability Index of Examples in dataset') plt.xlabel('Instability Index') plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["charge_at_ph7"], hist = False, color = 'purple') plt.title('Charge at pH 7 of Examples in dataset') plt.xlabel('Charge at pH 7') plt.ylabel('% of examples') plt.grid() plt.show() plt.figure(figsize=(10,4)) sns.distplot(features["gravy"], hist = False, color = 'purple') plt.title('Gravy of Examples in dataset') plt.xlabel('Gravy') plt.ylabel('% of examples') plt.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] graffitiCellId="id_vhxf50c" # ### Problem Statement # # # Given a list of integers that contain natural numbers in random order. Write a program to find the longest possible sub sequence of consecutive numbers in the array. Return this subsequence in sorted order. # # *In other words, you have to return the sorted longest (sub) list of consecutive numbers present anywhere in the given list.* # # For example, given the list `5, 4, 7, 10, 1, 3, 55, 2`, the output should be `1, 2, 3, 4, 5` # # **Note** # 1. The solution must take O(n) time. *Can you think of using a dictionary here?* # 2. If two subsequences are of equal length, return the subsequence whose index of smallest element comes first. # # --- # # ### The Idea: # Every element of the given `input_list` could be a part of some subsequence. Therefore, we need a way (using a dictionary) to keep track if an element has already been visited. Also, store length of a subsequence if it is maximum. For this purpose, we have to check in **forward** direction, if the `(element+1)` is available in the given dictionary, in a "while" loop. Similarly, we will check in **backward** direction for `(element-1)`, in another "while" loop. At last, if we have the smallest element and the length of the longest subsequence, we can return a **new** list starting from "smallest element" to "smallest element + length". # # The steps would be: # # # 1. Create a dictionary, such that the elements of input_list become the "key", and the corresponding index become the "value" in the dictionary. We are creating a dictionary because the lookup time is considered to be constant in a dictionary. # # # 2. For each `element` in the `input_list`, first mark it as visited in the dictionary # # - Check in forward direction, if the `(element+1)` is available. If yes, increment the length of subsequence # # - Check in backward direction, if the `(element-1)` is available. If yes, increment the length of subsequence # # - Keep a track of length of longest subsequence visited so far. For the longest subsequence, store the smallest element (say `start_element`) and its index as well. # # # 3. Return a **new** list starting from `start_element` to `start_element + length`. 
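# Before writing the full function, here is a minimal sketch of the step-1 bookkeeping only (using the example list from the problem statement; the forward/backward walk and the length tracking are left for the exercise below):
# +
input_list_example = [5, 4, 7, 10, 1, 3, 55, 2]

# step 1: map each value to its index so that membership checks are O(1) on average
value_to_index = {element: index for index, element in enumerate(input_list_example)}

# probing around the element 4: both neighbours exist, so 4 sits inside the run 3, 4, 5
print(4 + 1 in value_to_index)  # True
print(4 - 1 in value_to_index)  # True
# -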
# + [markdown] graffitiCellId="id_q492hrd" # ### Exercise - Write the function definition here # + graffitiCellId="id_eaee7mz" def longest_consecutive_subsequence(input_list): idx_dict = dict() for idx,element in enumerate(input_list): idx_dict[element] = idx max_length = 1 for element in input_list: if idx_dict[element] == -1: continue else: idx_dict[element] = -1 length = 1 current = element start_element = current while current+1 in idx_dict: idx_dict[current+1] = -1 length += 1 current += 1 current = element while current-1 in idx_dict: idx_dict[current-1] = -1 length += 1 start_element = current-1 current -= 1 if length > max_length: max_length = length max_start = start_element return [element for element in range(max_start, max_start + max_length)] # + [markdown] graffitiCellId="id_7w3exwo" # ### Test - Let's test your function # + graffitiCellId="id_hlznh6q" def test_function(test_case): output = longest_consecutive_subsequence(test_case[0]) if output == test_case[1]: print("Pass") else: print("Fail") # + graffitiCellId="id_z2y7gsr" test_case_1 = [[5, 4, 7, 10, 1, 3, 55, 2], [1, 2, 3, 4, 5]] test_function(test_case_1) # + graffitiCellId="id_a3yf5ol" test_case_2 = [[2, 12, 9, 16, 10, 5, 3, 20, 25, 11, 1, 8, 6 ], [8, 9, 10, 11, 12]] test_function(test_case_2) # + graffitiCellId="id_u5rs0q7" test_case_3 = [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]] test_function(test_case_3) # + graffitiCellId="id_08ng0hs" longest_consecutive_subsequence([5, 4, 7, 10, 1, 3, 55, 2]) # + [markdown] graffitiCellId="id_et1ek54" # # + graffitiCellId="id_r15x1vg" def longest_consecutive_subsequence(input_list): # Create a dictionary. # Each element of the input_list would become a "key", and # the corresponding index in the input_list would become the "value" element_dict = dict() # Traverse through the input_list, and populate the dictionary # Time complexity = O(n) for index, element in enumerate(input_list): element_dict[element] = index # Represents the length of longest subsequence max_length = -1 # Represents the index of smallest element in the longest subsequence starts_at = -1 # Traverse again - Time complexity = O(n) for index, element in enumerate(input_list): current_starts = index element_dict[element] = -1 # Mark as visited current_count = 1 # length of the current subsequence '''CHECK ONE ELEMENT FORWARD''' current = element + 1 # `current` is the expected number # check if the expected number is available (as a key) in the dictionary, # and it has not been visited yet (i.e., value > 0) # Time complexity: Constant time for checking a key and retrieving the value = O(1) while current in element_dict and element_dict[current] > 0: current_count += 1 # increment the length of subsequence element_dict[current] = -1 # Mark as visited current = current + 1 '''CHECK ONE ELEMENT BACKWARD''' # Time complexity: Constant time for checking a key and retrieving the value = O(1) current = element - 1 # `current` is the expected number while current in element_dict and element_dict[current] > 0: current_starts = element_dict[current] # index of smallest element in the current subsequence current_count += 1 # increment the length of subsequence element_dict[current] = -1 current = current - 1 '''If length of current subsequence >= max length of previously visited subsequence''' if current_count >= max_length: if current_count == max_length and current_starts > starts_at: continue starts_at = current_starts # index of smallest element in the current (longest so far) subsequence max_length = current_count # store the 
length of current (longest so far) subsequence start_element = input_list[starts_at] # smallest element in the longest subsequence # return a NEW list starting from `start_element` to `(start_element + max_length)` return [element for element in range(start_element, start_element + max_length)] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## K-Nearest Neighbor Algoritm ### # KNN is a classification algorithm. It is basic to understand. # # K is the number of neighbors you want to look at. Algorithm looks at the classes of nearest k points and classify the point if a class have more points that are nearest to point. # ### Import Libraries ### # I will only use numpy for math and matplotlib for graphs. I will not use any ML libraries. # + import numpy as np import matplotlib.pyplot as plt # Use matplotlib in Jupyter Notebook Outputs # %matplotlib inline # - # ### Define Data ### # For this example I will use fake data, but for better understanding, my data is the accaptance of student to a university and their SAT Score and GPA. # + # Input data - [SAT Score, GPA] X = [[1590,2.9], [1540,2.7], [1600,2.6], [1590,2.7], [1520,2.5], [1540,2.4], [1560,2.3], [1490,2.3], [1510,2.4], [1350,3.9], [1360,3.7], [1370,3.8], [1380,3.7], [1410,3.6], [1420,3.9], [1430,3.4], [1450,3.7], [1460,3.2], [1590,3.9], [1540,3.7], [1600,3.6], [1490,3.7], [1520,3.5], [1540,3.4], [1560,3.3], [1460,3.3], [1510,3.4], [1340,2.9], [1360,2.4], [1320,2.5], [1380,2.6], [1400,2.1], [1320,2.5], [1310,2.7], [1410,2.1], [1305,2.5], [1460,2.7], [1500,2.9], [1300,3.5], [1320,3.6], [1400,2.7], [1300,3.1], [1350,3.1], [1360,2.9], [1305,3.9], [1430,3.0], [1440,2.3], [1440,2.5], [1380,2.1], [1430,2.1], [1400,2.5], [1420,2.3], [1310,2.1], [1350,2.0]] # Labels - Accepted or Rejected Y = ['accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted', 'accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted', 'accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted','accepted', 'rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected', 'rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected', 'rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected','rejected'] # - # ### Plot data to a 2d graph ### # Let's see our data on a graph. I like to see data on graphs. It helps me to understand the problem better when there is an error. 
# + for i in range(len(X)): if Y[i] == 'accepted': plt.scatter(X[i][0], X[i][1], s=120, marker='P', linewidths=2, color='green') else: plt.scatter(X[i][0], X[i][1], s=120, marker='P', linewidths=2, color='red') plt.plot() # - # ### Helper Functions ### # Find which variable is the most in an array of variables def most_found(array): list_of_words = [] for i in range(len(array)): if array[i] not in list_of_words: list_of_words.append(array[i]) most_counted = '' n_of_most_counted = None for i in range(len(list_of_words)): counted = array.count(list_of_words[i]) if n_of_most_counted == None: most_counted = list_of_words[i] n_of_most_counted = counted elif n_of_most_counted < counted: most_counted = list_of_words[i] n_of_most_counted = counted elif n_of_most_counted == counted: most_counted = None return most_counted # ### KNN Algorithm ### # I calculated euclidean distance of every point to the new point and found labels of nearest k points. # # #### Euclidean Distance #### # square root of sum of square of distance between two points in every dimension. # # Like pythagorean theorem: a^2 + b^2 = c^2 # + def find_neighbors(point, data, labels, k=3): # How many dimentions do the space have? n_of_dimensions = len(point) #find nearest neighbors neighbors = [] neighbor_labels = [] for i in range(0, k): # To find it in data later, I get its order nearest_neighbor_id = None smallest_distance = None for i in range(0, len(data)): eucledian_dist = 0 for d in range(0, n_of_dimensions): dist = abs(point[d] - data[i][d]) eucledian_dist += dist eucledian_dist = np.sqrt(eucledian_dist) if smallest_distance == None: smallest_distance = eucledian_dist nearest_neighbor_id = i elif smallest_distance > eucledian_dist: smallest_distance = eucledian_dist nearest_neighbor_id = i neighbors.append(data[nearest_neighbor_id]) neighbor_labels.append(labels[nearest_neighbor_id]) data.remove(data[nearest_neighbor_id]) labels.remove(labels[nearest_neighbor_id]) return neighbor_labels def k_nearest_neighbor(point, data, labels, k=3): # If two different labels are most found, continue to search for 1 more k while True: neighbor_labels = find_neighbors(point, data, labels, k=k) label = most_found(neighbor_labels) if label != None: break k += 1 if k >= len(data): break return label # - # ### Predict label using KNN ### point = [1500, 2.3] k_nearest_neighbor(point, X, Y, k=5) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Visualizing the effect of $L_1/L_2$ regularization # # We use a toy example with two weights $(w_0, w_1)$ to illustrate the effect $L_1$ and $L_2$ regularization has on the solution to a loss minimization problem. # # ## Table of Contents # # 1. [Draw the data loss and the L1/L2L1/L2 regularization curves](#Draw-the-data-loss-and-the-%24L_1%2FL_2%24-regularization-curves) # 2. [Plot the training progress](#Plot-the-training-progress) # 3. [L1L1 -norm regularization leads to "near-sparsity"](#%24L_1%24-norm-regularization-leads-to-%22near-sparsity%22) # 4. 
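# As a formula, the distance between points $a$ and $b$ in $n$ dimensions is $\sqrt{\sum_{d=1}^{n}(a_d - b_d)^2}$.
# A minimal standalone sketch of that formula (each per-dimension difference is squared before summing):
# +
def euclidean_distance(a, b):
    # square root of the sum of squared differences over all dimensions
    return np.sqrt(sum((a_d - b_d) ** 2 for a_d, b_d in zip(a, b)))

# e.g. euclidean_distance([1500, 2.3], [1490, 2.3]) -> 10.0
# -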
[References](#References) # + slideshow={"slide_type": "skip"} import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d import matplotlib.animation as animation import matplotlib.patches as mpatches from torch.autograd import Variable import torch import math # + from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'png') plt.rcParams['savefig.dpi'] = 75 plt.rcParams['figure.autolayout'] = False plt.rcParams['figure.figsize'] = 10, 6 plt.rcParams['axes.labelsize'] = 18 plt.rcParams['axes.titlesize'] = 20 plt.rcParams['font.size'] = 16 plt.rcParams['lines.linewidth'] = 2.0 plt.rcParams['lines.markersize'] = 8 plt.rcParams['legend.fontsize'] = 14 # plt.rcParams['text.usetex'] = True plt.rcParams['font.family'] = "Sans Serif" plt.rcParams['font.serif'] = "cm" # - # ## Draw the data loss and the $L_1/L_2$ regularization curves # # We choose just a very simple convex loss function for illustration (in blue), which has its minimum at W=(3,2): # # #
    # # $$ # \begin{equation} # loss(W) = 0.5(w_0-3)^2 + 2.5(w_1-2)^2 # \label{eq:loss} \tag{1} # \end{equation} # $$ # # The L1-norm regularizer (aka lasso regression; lasso: least absolute shrinkage and selection operator): #
    # # $$ # \begin{equation} # L_1(W) = \sum_{i=1}^{|W|} |w_i| # \label{eq:eq1} \tag{2} # \end{equation} # $$ # The L2 regularizer (aka Ridge Regression and Tikhonov regularization) is not the L2-norm itself but the square of the L2-norm, and this little nuance is sometimes overlooked. #
    # # $$ # \begin{equation} # L_2(W) = ||W||_2^2= \sum_{i=1}^{|W|} w_i^2 # \label{eq:eq3} \tag{3} # \end{equation} # $$ # + def loss_fn(W): return 0.5*(W[0]-3)**2 + 2.5*(W[1]-2)**2 # L1 regularization def L1_regularization(W): return abs(W[0]) + abs(W[1]) def L2_regularization(W): return np.sqrt(W[0]**2 + W[1]**2) fig = plt.figure(figsize=(10,10)) ax = fig.gca(projection="3d") xmesh, ymesh = np.mgrid[-3:9:50j,-2:6:50j] loss_mesh = loss_fn(np.array([xmesh, ymesh])) ax.plot_surface(xmesh, ymesh, loss_mesh); l1_mesh = L1_regularization(np.array([xmesh, ymesh])) ax.plot_surface(xmesh, ymesh, l1_mesh); l2_mesh = L2_regularization(np.array([xmesh, ymesh])) ax.plot_surface(xmesh, ymesh, l2_mesh); # - # ## Plot the training progress # #
    # The diamond contour lines are the values of the L1 regularization. Since this is a contour diagram, all the points on a contour line have the same L1 value.
    # In other words, for all points on a contour line: # $$L_1(W) = \left|w_0\right| + \left|w_1\right| == constant$$ # This is called the L1-ball.
    # L2-balls maintain the equation: $$L_2(W) = w_0^2 + w_1^2 == constant$$ #
    # The oval contour lines are the values of the data loss function. The regularized solution tries to find weights that satisfy both the data loss and the regularization loss. #
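# Putting the two together, the training code below minimizes the combined objective
# $$loss(W) + \alpha \, L_1(W) + \beta \, L_2(W)$$
# i.e. the expression ```loss_fn(W) + alpha * L1_regularization(W) + beta * L2_regularization(W)``` computed in the training cell, with ```alpha``` and ```beta``` weighting the two regularizers.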

    # ```alpha``` and ```beta``` control the strength of the regularization loss versus the data loss. # To see how the regularizers act "in the wild", set ```alpha``` and ```beta``` to a high value like 10. The regularizers will then dominate the loss, and you will see how each of the regularizers acts. #
    # Experiment with the value of alpha to see how it works. # + initial_guess = torch.Tensor([8,5]) W = Variable(initial_guess, requires_grad=True) W_l1_reg = Variable(initial_guess.clone(), requires_grad=True) W_l2_reg = Variable(initial_guess.clone(), requires_grad=True) def L1_regularization(W): return W.norm(1) def L2_regularization(W): return W.pow(2).sum() lr = 0.04 alpha = 0.75 # 1.5 # 4 # 0.4 beta = 0.75 num_steps = 1000 def train(W, lr, alpha, beta, num_steps): guesses = [] for i in range(num_steps): # Zero the gradients of the weights if W.grad is not None: W.grad.data.zero_() # Compute the loss and the gradients of W loss = loss_fn(W) + alpha * L1_regularization(W) + beta * L2_regularization(W) loss.backward() # Update W W.data = W.data - lr * W.grad.data guesses.append(W.data.numpy()) return guesses # Train the weights without regularization guesses = train(W, lr, alpha=0, beta=0, num_steps=num_steps) # ...and with L1 regularization guesses_l1_reg = train(W_l1_reg, lr, alpha=alpha, beta=0, num_steps=num_steps) guesses_l2_reg = train(W_l2_reg, lr, alpha=0, beta=beta, num_steps=num_steps) fig = plt.figure(figsize=(15,10)) plt.axis("equal") # Draw the contour maps of the data-loss and regularization loss CS = plt.contour(xmesh, ymesh, loss_mesh, 10, cmap=plt.cm.bone) # Draw the L1-balls CS2 = plt.contour(xmesh, ymesh, l1_mesh, 10, linestyles='dashed', levels=[range(5)]); # Draw the L2-balls CS3 = plt.contour(xmesh, ymesh, l2_mesh, 10, linestyles='dashed', levels=[range(5)]); # Add green contour lines near the loss minimum CS4 = plt.contour(CS, levels=[0.25, 0.5], colors='g') # Place a green dot at the data loss minimum, and an orange dot at the origin plt.scatter(3,2, color='g') plt.scatter(0,0, color='r') # Color bars and labels plt.xlabel("W[0]") plt.ylabel("W[1]") CB = plt.colorbar(CS, label="data loss", shrink=0.8, extend='both') CB2 = plt.colorbar(CS2, label="reg loss", shrink=0.8, extend='both') # Label the contour lines plt.clabel(CS, fmt = '%2d', colors = 'k', fontsize=14) #contour line labels plt.clabel(CS2, fmt = '%2d', colors = 'red', fontsize=14) #contour line labels # Plot the two sets of weights (green are weights w/o regularization; red are L1; blue are L2) it_array = np.array(guesses) unregularized = plt.plot(it_array.T[0], it_array.T[1], "o", color='g') it_array = np.array(guesses_l1_reg) l1 = plt.plot(it_array.T[0], it_array.T[1], "+", color='r') it_array = np.array(guesses_l2_reg) l2 = plt.plot(it_array.T[0], it_array.T[1], "+", color='b') # Legends require a proxy artists in this case unregularized = mpatches.Patch(color='g', label='unregularized') l1 = mpatches.Patch(color='r', label='L1') l2 = mpatches.Patch(color='b', label='L2') plt.legend(handles=[unregularized, l1, l2]) # Finally add the axes, so we can see how far we are from the sparse solution. 
plt.axhline(0, color='orange') plt.axvline(0, color='orange') print("solution: loss(%.3f, %.3f)=%.3f" % (W.data[0], W.data[1], loss_fn(W))) print("solution: l1_loss(%.3f, %.3f)=%.3f" % (W.data[0], W.data[1], L1_regularization(W))) print("regularized solution: loss(%.3f, %.3f)=%.3f" % (W_l1_reg.data[0], W_l1_reg.data[1], loss_fn(W_l1_reg))) print("regularized solution: l1_loss(%.3f, %.3f)=%.3f" % (W_l1_reg.data[0], W_l1_reg.data[1], L1_regularization(W_l1_reg))) print("regularized solution: l2_loss(%.3f, %.3f)=%.3f" % (W_l2_reg.data[0], W_l2_reg.data[1], L2_regularization(W_l2_reg))) # - # ## $L_1$-norm regularization leads to "near-sparsity" # # $L_1$-norm regularization is often touted as sparsity inducing, but it actually creates solutions that oscillate around 0, not exactly 0 as we'd like.
    # To demonstrate this, we redefine our toy loss function so that the optimal solution for $w_0$ is close to 0 (0.3). # + def loss_fn(W): return 0.5*(W[0]-0.3)**2 + 2.5*(W[1]-2)**2 # Train again W = Variable(initial_guess, requires_grad=True) guesses_l1_reg = train(W, lr, alpha=alpha, beta=0, num_steps=num_steps) # - # When we draw the progress of the weight training, we see that $W_0$ is gravitating towards zero.
    # + # Draw the contour maps of the data-loss and regularization loss CS = plt.contour(xmesh, ymesh, loss_mesh, 10, cmap=plt.cm.bone) # Plot the progress of the training process it_array = np.array(guesses_l1_reg) l1 = plt.plot(it_array.T[0], it_array.T[1], "+", color='r') # Finally add the axes, so we can see how far we are from the sparse solution. plt.axhline(0, color='orange') plt.axvline(0, color='orange'); plt.xlabel("W[0]") plt.ylabel("W[1]") # - # But if we look closer at what happens to $w_0$ in the last 100 steps of the training, we see that it oscillates around 0, but never quite lands there. Why?
    # Well, $dL_1/dw_0$ has constant magnitude, so the L1 term adds a fixed step of ```lr * alpha``` (with the sign of $w_0$) to every update. The weight update step:
    # ```W.data = W.data - lr * W.grad.data```
    # can be expanded to
    # ```W.data = W.data - lr * (alpha + dloss_fn(W)/dW0)```, where ```dloss_fn(W)/dW0``` is the gradient of loss_fn(W) with respect to $w_0$.
    # The oscillations are not constant (although they do have a rhythm) because they are influenced by this latter loss. it_array = np.array(guesses_l1_reg[int(0.9*num_steps):]) for i in range(len(it_array)): print("%.4f\t(diff=%.4f)" % (it_array.T[0][i], abs(it_array.T[0][i]-it_array.T[0][i-1]))) # ## References # #
    ** and and **. # [*Deep Learning*](http://www.deeplearningbook.org), # MIT Press, # 2016. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Hydrological mass-balance output # **New in version 1.5!** # # In two recent PRs ([GH1224](https://github.com/OGGM/oggm/pull/1224), [GH1232](https://github.com/OGGM/oggm/pull/1232) and [GH1242](https://github.com/OGGM/oggm/pull/1242)), we have added a new task in OGGM, `run_with_hydro`, which adds mass-balance and runoff diagnostics to the OGGM output files. # # This task is still experimental - it is tested for consistency and mass-conservation and we trust its output, but its API and functionalites might change in the future (in particular to make it faster and to add more functionality, as explained below). import matplotlib.pyplot as plt import xarray as xr import numpy as np import pandas as pd import seaborn as sns # Make pretty plots sns.set_style('ticks') sns.set_context('notebook') from oggm import cfg, utils, workflow, tasks, graphics from oggm_edu import read_run_results, compute_climate_statistics cfg.initialize(logging_level='WARNING') cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGMHydro') # ### Define the glacier we will play with # For this notebook we use the Hintereisferner, Austria. Some other possibilities to play with: # # - Shallap Glacier: RGI60-16.02207 # - Artesonraju: RGI60-16.02444 ([reference glacier](https://cluster.klima.uni-bremen.de/~github/crossval/1.1.2.dev45+g792ae9c/web/RGI60-16.02444.html)) # - Hintereisferner: RGI60-11.00897 ([reference glacier](https://cluster.klima.uni-bremen.de/~github/crossval/1.1.2.dev45+g792ae9c/web/RGI60-11.00897.html)) # # And virtually any glacier you can find the RGI Id from, e.g. in the [GLIMS viewer](https://www.glims.org/maps/glims). # Hintereisferner rgi_id = 'RGI60-11.00897' # ## Preparing the glacier data # This can take up to a few minutes on the first call because of the download of the required data: # We pick the elevation-bands glaciers because they run a bit faster base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/L3-L5_files/CRU/elev_bands/qc3/pcp2.5/no_match' gdir = workflow.init_glacier_directories([rgi_id], from_prepro_level=5, prepro_border=80, prepro_base_url=base_url)[0] # ## "Commitment run" # This runs a simulation for 100 yrs under a constant climate based on the climate of the last 11 years: # file identifier where the model output is saved file_id = '_ct' tasks.run_with_hydro(gdir, run_task=tasks.run_constant_climate, nyears=100, y0=2014, halfsize=5, store_monthly_hydro=True, output_filesuffix=file_id); # Let's have a look at the output: with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # The last step of hydrological output is NaN (we can't compute it for this year) ds = ds.isel(time=slice(0, -1)).load() # There are plenty of new variables in this dataset! We can list them with: ds # ### Annual runoff # The annual variables are stored as usual with the time dimension. For example: ds.volume_m3.plot(); # The new hydrological variables are also available. 
Let's make a pandas DataFrame of all "1D" (annual) variables: sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] df_annual = ds[sel_vars].to_dataframe() # The hydrological variables are computed on the largest possible area that was covered by glacier ice in the simulation. This is equivalent to the runoff that would be measured at a fixed-gauge station at the glacier terminus. The total annual runoff is: # Select only the runoff variables and convert them to megatonnes (instead of kg) runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] df_runoff = df_annual[runoff_vars] * 1e-9 df_runoff.sum(axis=1).plot(); plt.ylabel('Mt'); # It consists of the following components: # - melt off-glacier: snow melt on areas that are now glacier free (i.e. 0 in the year of largest glacier extent, in this example at the start of the simulation) # - melt on-glacier: ice + seasonal snow melt on the glacier # - liquid precipitaton on- and off-glacier (the latter being zero at the year of largest glacial extent, in this example at start of the simulation) f, ax = plt.subplots(figsize=(10, 6)); df_runoff.plot.area(ax=ax, color=sns.color_palette("rocket")); plt.xlabel('Years'); plt.ylabel('Runoff (Mt)'); plt.title(rgi_id); # As the glacier retreats, total runoff decreases as a result of the decreasing glacier contribution. # ### Monthly runoff # The "2D" variables contain the same data but at monthly resolution, with the dimension (time, month). For example, runoff can be computed the # same way: # Select only the runoff variables and convert them to megatonnes (instead of kg) monthly_runoff = ds['melt_off_glacier_monthly'] + ds['melt_on_glacier_monthly'] + ds['liq_prcp_off_glacier_monthly'] + ds['liq_prcp_on_glacier_monthly'] monthly_runoff *= 1e-9 monthly_runoff.clip(0).plot(cmap='Blues', cbar_kwargs={'label':'Mt'}); plt.xlabel('Months'); plt.ylabel('Years'); # Something is a bit wrong: the coordinates are hydrological months - let's make this better: # + # This should work in both hemispheres maybe? ds_roll = ds.roll(month_2d=ds['calendar_month_2d'].data[0]-1, roll_coords=True) ds_roll['month_2d'] = ds_roll['calendar_month_2d'] # Select only the runoff variables and convert them to megatonnes (instead of kg) monthly_runoff = ds_roll['melt_off_glacier_monthly'] + ds_roll['melt_on_glacier_monthly'] + ds_roll['liq_prcp_off_glacier_monthly'] + ds_roll['liq_prcp_on_glacier_monthly'] monthly_runoff *= 1e-9 monthly_runoff.clip(0).plot(cmap='Blues', cbar_kwargs={'label':'Mt'}); plt.xlabel('Months'); plt.ylabel('Years'); # - monthly_runoff.sel(month_2d=[5, 6, 7, 8]).plot(hue='month_2d'); # The runoff is approx. zero in the winter months, and is high in summer. 
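# Two quantities that are easy to pull out of the objects defined above are the calendar month of peak runoff and the share of the total runoff coming from melt on the glacier. This is a sketch built only on `monthly_runoff` and `df_runoff` as computed in this notebook; it is not an OGGM diagnostic of its own.
# +
# Month of maximum runoff, averaged over the whole simulation
mean_cycle = monthly_runoff.mean(dim='time')
peak_month = int(mean_cycle['month_2d'].values[int(mean_cycle.argmax())])
print('Peak runoff month (calendar):', peak_month)

# Fraction of the annual runoff that comes from melt on the glacier
melt_share = df_runoff['melt_on_glacier'] / df_runoff.sum(axis=1)
melt_share.plot();
plt.ylabel('Glacier melt fraction of total runoff');
# -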
The annual cycle changes as the glacier retreats: monthly_runoff.sel(time=[0, 30, 99]).plot(hue='time'); plt.title('Annual cycle'); plt.xlabel('Month'); plt.ylabel('Runoff (Mt)'); # Let's distinguish between the various components of the monthly runnoff for two 10 years periods (begining and end of the 100 year simulation): # + # Pick the variables we need (the 2d ones) sel_vars = [v for v in ds_roll.variables if 'month_2d' in ds_roll[v].dims] # Pick the first decade and average it df_m_s = ds_roll[sel_vars].isel(time=slice(0, 10)).mean(dim='time').to_dataframe() * 1e-9 # Rename the columns for readability df_m_s.columns = [c.replace('_monthly', '') for c in df_m_s.columns] # Because of floating point precision sometimes runoff can be slightly below zero, clip df_m_s = df_m_s.clip(0) # Same for end df_m_e = ds_roll[sel_vars].isel(time=slice(-11, -1)).mean(dim='time').to_dataframe() * 1e-9 df_m_e.columns = [c.replace('_monthly', '') for c in df_m_s.columns] df_m_e = df_m_e.clip(0) # - f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 7), sharey=True); df_m_s[runoff_vars].plot.area(ax=ax1, legend=False, title='Year 0-10', color=sns.color_palette("rocket")); df_m_e[runoff_vars].plot.area(ax=ax2, title='Year 90-100', color=sns.color_palette("rocket")); ax1.set_ylabel('Monthly runoff (Mt)'); ax1.set_xlabel('Month'); ax2.set_xlabel('Month'); # ## Random climate commitment run # Same as before, but this time randomly shuffling each year of the last 11 years: # file identifier where the model output is saved file_id = '_rd' tasks.run_with_hydro(gdir, run_task=tasks.run_random_climate, nyears=100, y0=2014, halfsize=5, store_monthly_hydro=True, seed=0, unique_samples=True, output_filesuffix=file_id); with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # The last step of hydrological output is NaN (we can't compute it for this year) ds = ds.isel(time=slice(0, -1)).load() # ### Annual runoff sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] df_annual = ds[sel_vars].to_dataframe() # + # Select only the runoff variables and convert them to megatonnes (instead of kg) runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] df_runoff = df_annual[runoff_vars] * 1e-9 f, ax = plt.subplots(figsize=(10, 6)); df_runoff.plot.area(ax=ax, color=sns.color_palette("rocket")); plt.xlabel('Years'); plt.ylabel('Runoff (Mt)'); plt.title(rgi_id); # - # ### Monthly runoff # + ds_roll = ds.roll(month_2d=ds['calendar_month_2d'].data[0]-1, roll_coords=True) ds_roll['month_2d'] = ds_roll['calendar_month_2d'] # Select only the runoff variables and convert them to megatonnes (instead of kg) monthly_runoff = ds_roll['melt_off_glacier_monthly'] + ds_roll['melt_on_glacier_monthly'] + ds_roll['liq_prcp_off_glacier_monthly'] + ds_roll['liq_prcp_on_glacier_monthly'] monthly_runoff *= 1e-9 monthly_runoff.clip(0).plot(cmap='Blues', cbar_kwargs={'label':'Mt'}); plt.xlabel('Months'); plt.ylabel('Years'); # + # Pick the variables we need (the 2d ones) sel_vars = [v for v in ds_roll.variables if 'month_2d' in ds_roll[v].dims] # Pick the first decade and average it df_m_s = ds_roll[sel_vars].isel(time=slice(0, 10)).mean(dim='time').to_dataframe() * 1e-9 # Rename the columns for readability df_m_s.columns = [c.replace('_monthly', '') for c in df_m_s.columns] # Because of floating point precision sometimes runoff can be slightly below zero, clip df_m_s = df_m_s.clip(0) # Same for end df_m_e = ds_roll[sel_vars].isel(time=slice(-11, 
-1)).mean(dim='time').to_dataframe() * 1e-9 df_m_e.columns = [c.replace('_monthly', '') for c in df_m_s.columns] df_m_e = df_m_e.clip(0) # - f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 7), sharey=True); df_m_s[runoff_vars].plot.area(ax=ax1, legend=False, title='Year 0-10', color=sns.color_palette("rocket")); df_m_e[runoff_vars].plot.area(ax=ax2, title='Year 90-100', color=sns.color_palette("rocket")); ax1.set_ylabel('Monthly runoff (Mt)'); ax1.set_xlabel('Month'); ax2.set_xlabel('Month'); # ## CMIP5 projection run # This time, let's start from the estimated glacier state in 2020 and run a projection from there. See the [run_with_gcm](run_with_gcm.ipynb) tutorial. # "Downscale" the climate data from oggm.shop import gcm_climate bp = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/pr/pr_mon_CCSM4_{}_r1i1p1_g025.nc' bt = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/tas/tas_mon_CCSM4_{}_r1i1p1_g025.nc' for rcp in ['rcp26', 'rcp45', 'rcp60', 'rcp85']: # Download the files ft = utils.file_downloader(bt.format(rcp)) fp = utils.file_downloader(bp.format(rcp)) # bias correct them workflow.execute_entity_task(gcm_climate.process_cmip_data, [gdir], filesuffix='_CCSM4_{}'.format(rcp), # recognize the climate file for later fpath_temp=ft, # temperature projections fpath_precip=fp, # precip projections ); for rcp in ['rcp26', 'rcp45', 'rcp60', 'rcp85']: rid = '_CCSM4_{}'.format(rcp) tasks.run_with_hydro(gdir, run_task=tasks.run_from_climate_data, ys=2020, climate_filename='gcm_data', # use gcm_data, not climate_historical climate_input_filesuffix=rid, # use the chosen scenario init_model_filesuffix='_historical', # this is important! Start from 2020 glacier output_filesuffix=rid, # recognize the run for later store_monthly_hydro=True, # add monthly diagnostics ); # ## RCP2.6 file_id = '_CCSM4_rcp26' with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # The last step of hydrological output is NaN (we can't compute it for this year) ds = ds.isel(time=slice(0, -1)).load() ds.volume_m3.plot(); # ### Annual runoff sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] df_annual = ds[sel_vars].to_dataframe() # + # Select only the runoff variables and convert them to megatonnes (instead of kg) runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] df_runoff = df_annual[runoff_vars].clip(0) * 1e-9 f, ax = plt.subplots(figsize=(10, 6)); df_runoff.plot.area(ax=ax, color=sns.color_palette("rocket")); plt.xlabel('Years'); plt.ylabel('Runoff (Mt)'); plt.title(rgi_id); # - # ### Monthly runoff ds_roll = ds.roll(month_2d=ds['calendar_month_2d'].data[0]-1, roll_coords=True) ds_roll['month_2d'] = ds_roll['calendar_month_2d'] # + # Pick the variables we need (the 2d ones) sel_vars = [v for v in ds_roll.variables if 'month_2d' in ds_roll[v].dims] # Pick the first decade and average it df_m_s = ds_roll[sel_vars].isel(time=slice(0, 10)).mean(dim='time').to_dataframe() * 1e-9 # Rename the columns for readability df_m_s.columns = [c.replace('_monthly', '') for c in df_m_s.columns] # Because of floating point precision sometimes runoff can be slightly below zero, clip df_m_s = df_m_s.clip(0) # Same for end df_m_e = ds_roll[sel_vars].isel(time=slice(-11, -1)).mean(dim='time').to_dataframe() * 1e-9 df_m_e.columns = [c.replace('_monthly', '') for c in df_m_s.columns] df_m_e = df_m_e.clip(0) # - f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 7), sharey=True); df_m_s[runoff_vars].plot.area(ax=ax1, legend=False, 
title='Year 0-10', color=sns.color_palette("rocket")); df_m_e[runoff_vars].plot.area(ax=ax2, title='Year 90-100', color=sns.color_palette("rocket")); ax1.set_ylabel('Monthly runoff (Mt)'); ax1.set_xlabel('Month'); ax2.set_xlabel('Month'); # ## RCP8.5 file_id = '_CCSM4_rcp85' with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # The last step of hydrological output is NaN (we can't compute it for this year) ds = ds.isel(time=slice(0, -1)).load() ds.volume_m3.plot(); # ### Annual runoff sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] df_annual = ds[sel_vars].to_dataframe() # + # Select only the runoff variables and convert them to megatonnes (instead of kg) runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] df_runoff = df_annual[runoff_vars] * 1e-9 f, ax = plt.subplots(figsize=(10, 6)); df_runoff.plot.area(ax=ax, color=sns.color_palette("rocket")); plt.xlabel('Years'); plt.ylabel('Runoff (Mt)'); plt.title(rgi_id); # - # ### Monthly runoff ds_roll = ds.roll(month_2d=ds['calendar_month_2d'].data[0]-1, roll_coords=True) ds_roll['month_2d'] = ds_roll['calendar_month_2d'] # + # Pick the variables we need (the 2d ones) sel_vars = [v for v in ds_roll.variables if 'month_2d' in ds_roll[v].dims] # Pick the first decade and average it df_m_s = ds_roll[sel_vars].isel(time=slice(0, 10)).mean(dim='time').to_dataframe() * 1e-9 # Rename the columns for readability df_m_s.columns = [c.replace('_monthly', '') for c in df_m_s.columns] # Because of floating point precision sometimes runoff can be slightly below zero, clip df_m_s = df_m_s.clip(0) # Same for end df_m_e = ds_roll[sel_vars].isel(time=slice(-11, -1)).mean(dim='time').to_dataframe() * 1e-9 df_m_e.columns = [c.replace('_monthly', '') for c in df_m_s.columns] df_m_e = df_m_e.clip(0) # - f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 7), sharey=True); df_m_s[runoff_vars].plot.area(ax=ax1, legend=False, title='Year 0-10', color=sns.color_palette("rocket")); df_m_e[runoff_vars].plot.area(ax=ax2, title='Year 90-100', color=sns.color_palette("rocket")); ax1.set_ylabel('Monthly runoff (Mt)'); ax1.set_xlabel('Month'); ax2.set_xlabel('Month'); # ## Calculating peak water under different climate scenarios # # A typical usecase for simulating and analysing the hydrological ouptuts of glaciers are for **peak water** estimations. For instance, we might want to know the point in time when the annual total runoff from a glacier reaches its maximum under a certain climate scenario. # # The total runoff is a sum of the melt and liquid precipitation, from both on and off the glacier. **Peak water** can be calculated from the 11-year moving average of the total runoff ([Huss and Hock, 2018](http://www.nature.com/articles/s41558-017-0049-x)). # # For Hintereisferner we have alredy run the simulations for the different climate scenarios, so we can sum up the runoff variables and plot the output. # Create the figure f, ax = plt.subplots(figsize=(18, 7), sharex=True) # Loop all scenarios for i, rcp in enumerate(['rcp26', 'rcp45', 'rcp60', 'rcp85']): file_id = f'_CCSM4_{rcp}' # Open the corresponding data. with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # Load the data into a dataframe ds = ds.isel(time=slice(0, -1)).load() # Select annual variables sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] # And create a dataframe df_annual = ds[sel_vars].to_dataframe() # Select the variables relevant for runoff. 
runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] # Convert to mega tonnes instead of kg. df_runoff = df_annual[runoff_vars].clip(0) * 1e-9 # Sum the variables each year "axis=1", take the 11 year rolling mean # and plot it. df_runoff.sum(axis=1).rolling(window=11).mean().plot(ax=ax, label=rcp, color=sns.color_palette("rocket")[i], ) ax.set_ylabel('Annual runoff (Mt)') ax.set_xlabel('Year') plt.title(rgi_id) plt.legend(); # For Hintereisferner, runoff continues to decrease throughout the 21st-century for all scenarios, indicating that **peak water** has already been reached sometime in the past. This is the case for many European glaciers. Let us take a look at another glacier, in a different climatical setting where a different hydrological projection can be expected. # # We pick a glacier (RGI60-15.02420, unnamed) in the Eastern Himalayas. First we need to initialize its glacier directory. # Unnamed glacier rgi_id = 'RGI60-15.02420' gdir = workflow.init_glacier_directories([rgi_id], from_prepro_level=5, prepro_border=80, prepro_base_url=base_url)[0] # Then, we have to process the climate data for the new glacier. # Do we need to download data again? I guess it is stored somewhere, # but this is an easy way to loop over it for bias correction. bp = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/pr/pr_mon_CCSM4_{}_r1i1p1_g025.nc' bt = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/tas/tas_mon_CCSM4_{}_r1i1p1_g025.nc' for rcp in ['rcp26', 'rcp45', 'rcp60', 'rcp85']: # Download the files ft = utils.file_downloader(bt.format(rcp)) fp = utils.file_downloader(bp.format(rcp)) workflow.execute_entity_task(gcm_climate.process_cmip_data, [gdir], # Name file to recognize it later filesuffix='_CCSM4_{}'.format(rcp), # temperature projections fpath_temp=ft, # precip projections fpath_precip=fp, ); # With this done, we can run the simulations for the different climate scenarios. for rcp in ['rcp26', 'rcp45', 'rcp60', 'rcp85']: rid = '_CCSM4_{}'.format(rcp) tasks.run_with_hydro(gdir, run_task=tasks.run_from_climate_data, ys=2020, # use gcm_data, not climate_historical climate_filename='gcm_data', # use the chosen scenario climate_input_filesuffix=rid, # this is important! Start from 2020 glacier init_model_filesuffix='_historical', # recognize the run for later output_filesuffix=rid, # add monthly diagnostics store_monthly_hydro=True, ); # Now we can create the same plot as before in order to visualize **peak water** # Create the figure f, ax = plt.subplots(figsize=(18, 7)) # Loop all scenarios for i, rcp in enumerate(['rcp26', 'rcp45', 'rcp60', 'rcp85']): file_id = f'_CCSM4_{rcp}' # Open the corresponding data in a context manager. with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix=file_id)) as ds: # Load the data into a dataframe ds = ds.isel(time=slice(0, -1)).load() # Select annual variables sel_vars = [v for v in ds.variables if 'month_2d' not in ds[v].dims] # And create a dataframe df_annual = ds[sel_vars].to_dataframe() # Select the variables relevant for runoff. runoff_vars = ['melt_off_glacier', 'melt_on_glacier', 'liq_prcp_off_glacier', 'liq_prcp_on_glacier'] # Convert to mega tonnes instead of kg. df_runoff = df_annual[runoff_vars].clip(0) * 1e-9 # Sum the variables each year "axis=1", take the 11 year rolling mean # and plot it. 
df_runoff.sum(axis=1).rolling(window=11).mean().plot(ax=ax, label=rcp, color=sns.color_palette("rocket")[i] ) ax.set_ylabel('Annual runoff (Mt)') ax.set_xlabel('Year') plt.title(rgi_id) plt.legend(); # Unlike for Hintereisferner, these projections indicate that the annual runoff will increase in all the scenarios for the first half of the century. The higher RCP scenarios can reach **peak water** later in the century, since the excess melt can continue to increase. For the lower RCP scenarios, on the other hand, the glacier might be approaching a new equilibrium, which reduces the runoff earlier in the century ([Rounce et al., 2020](https://www.frontiersin.org/articles/10.3389/feart.2019.00331/full)). After **peak water** is reached (RCP2.6: ~2055, RCP8.5: ~2070 in these projections), the annual runoff begins to decrease. This decrease occurs because the shrinking glacier is no longer able to support the high levels of melt. # ## What's next? # # - return to the [OGGM documentation](https://docs.oggm.org) # - back to the [table of contents](welcome.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python3 # --- #

    SidLabs_Segmenting Neighborhoods in Toronto

    #

    Question#1

    #

    Using pandas to transform the data in the provided table from the Wikipedia page into a pandas dataframe.

    #

    1.1 Importing libraries and solving environments

    # + import numpy as np # importing numpy import pandas as pd # importing pandas pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) import json # importing JSON # #!conda install -c conda-forge geopy --yes from geopy.geocoders import Nominatim # convert to latitude and longitude import requests # importing requests from pandas.io.json import json_normalize # JSON into dataframe import matplotlib.cm as cm # importing matplotlib import matplotlib.colors as colors from sklearn.cluster import KMeans # importing K-Means from sklearn.datasets.samples_generator import make_blobs # #!conda install -c conda-forge folium=0.5.0 --yes import folium # importing folium from bs4 import BeautifulSoup import lxml print("all done here!") # - #

    1.2 Scraping and downloading the data using BeautifulSoup and loading it into a pandas dataframe.

    # + r = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M') soup = BeautifulSoup(r.text, 'html.parser') table=soup.find('table', attrs={'class':'wikitable sortable'}) #heads... headers=table.findAll('th') for i, head in enumerate(headers): headers[i]=str(headers[i]).replace("","").replace("","").replace("\n","") rows=table.findAll('tr') rows=rows[1:len(rows)] # skipping symbols and line feeds... for i, row in enumerate(rows): rows[i] = str(rows[i]).replace("\n","").replace("\n","") df=pd.DataFrame(rows) df[headers] = df[0].str.split("\n", n = 2, expand = True) df.drop(columns=[0],inplace=True) df = df.drop(df[(df.Borough == "Not assigned")].index) df.Neighbourhood.replace("Not assigned", df.Borough, inplace=True) df.Neighbourhood.fillna(df.Borough, inplace=True) df=df.drop_duplicates() # extract titles... df.update( df.Neighbourhood.loc[ lambda x: x.str.contains('title') ].str.extract('title=\"([^\"]*)',expand=False)) df.update( df.Borough.loc[ lambda x: x.str.contains('title') ].str.extract('title=\"([^\"]*)',expand=False)) df.update( df.Neighbourhood.loc[ lambda x: x.str.contains('Toronto') ].str.replace(", Toronto","")) df.update( df.Neighbourhood.loc[ lambda x: x.str.contains('Toronto') ].str.replace("\(Toronto\)","")) # combining neighborhoods with the same post code... df2 = pd.DataFrame({'Postcode':df.Postcode.unique()}) df2['Borough']=pd.DataFrame(list(set(df['Borough'].loc[df['Postcode'] == x['Postcode']])) for i, x in df2.iterrows()) df2['Neighborhood']=pd.Series(list(set(df['Neighbourhood'].loc[df['Postcode'] == x['Postcode']])) for i, x in df2.iterrows()) df2['Neighborhood']=df2['Neighborhood'].apply(lambda x: ', '.join(x)) df2.dtypes # - #
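# As a side note, pandas can parse the Wikipedia table directly with `pd.read_html`, which avoids most of the manual tag handling above. This is a hedged sketch: it assumes the live page still has a table with a 'Borough' header, which may differ from the snapshot scraped here.
# +
# read_html returns a list of dataframes; match on a header we expect in the table
tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M', match='Borough')
wiki_df = tables[0]
# Drop rows whose borough is not assigned, mirroring the cleaning above
wiki_df = wiki_df[wiki_df['Borough'] != 'Not assigned'].reset_index(drop=True)
wiki_df.head()
# -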

    1.2.1 Having a look at our dataframe

    df2.head(10) #

    1.3 Using the .shape method to print the number of rows of the dataframe.

    df2.shape #

    Question#2

    #

    Using the Geocoder package or the CSV file to create a dataframe with longitude and latitude values.

    #

    2.1 Downloading the Toronto longitude and latitude data from the cocl.us site and exploring the dataframe. # #!wget -q -O 'Toronto_long_lat_data.csv' http://cocl.us/Geospatial_data df_lon_lat = pd.read_csv('Toronto_long_lat_data.csv') df_lon_lat.head() #
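# The heading above also mentions the Geocoder package as an alternative to the CSV. A sketch of that route is shown below; it assumes the third-party `geocoder` library with its ArcGIS backend, which is not used elsewhere in this notebook (the CSV is preferred here because the geocoding service can be slow or return empty results).
# +
import geocoder  # third-party package, assumed available; not used in the rest of this notebook

def lookup_latlng(postal_code):
    """Query ArcGIS for a Toronto postal code, retrying until a result comes back."""
    latlng = None
    while latlng is None:
        g = geocoder.arcgis('{}, Toronto, Ontario'.format(postal_code))
        latlng = g.latlng
    return latlng

# Example (hypothetical): lookup_latlng('M5G') -> [latitude, longitude]
# -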

    2.1.1 Converting the column name "Postal Code" in df_lon_lat to match "Postcode" from df2.

    df_lon_lat.columns=['Postcode','Latitude','Longitude'] df_lon_lat.head() #

    2.2 Merging df2 and df_lon_lat into df3.

    df3 = pd.merge(df2, df_lon_lat[['Postcode','Latitude', 'Longitude']], on='Postcode') df3.head(15) #

    Question#3

    #

    # Exploring and clustering the neighborhoods in Toronto. # Also using the geopy library to get the latitude and longitude values for Toronto. #
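# As a forward-looking sketch of the clustering part of this question (the remaining cells of this notebook go as far as geocoding and mapping the neighborhoods), KMeans from scikit-learn can be applied to the latitude/longitude columns of `df3`. Clustering on coordinates alone is a simplification for illustration; the full exercise would normally cluster on venue features.
# +
from sklearn.cluster import KMeans

# Cluster the neighborhoods on their coordinates only (illustrative simplification)
coords = df3[['Latitude', 'Longitude']]
kmeans = KMeans(n_clusters=5, random_state=0).fit(coords)
df3['Cluster'] = kmeans.labels_
df3[['Postcode', 'Borough', 'Cluster']].head(10)
# -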

    #

    3.1 Importing libraries and solving environments

    from geopy.geocoders import Nominatim import matplotlib.cm as cm import matplotlib.colors as colors from sklearn.cluster import KMeans # #!conda install -c conda-forge folium=0.5.0 --yes import folium print('All done, Plz proceed...') # 3.2 Using the geolocator library to obtain the longitude and latitude values for Toronto

    address = 'Toronto, ON' geolocator = Nominatim(user_agent="Toronto") location = geolocator.geocode(address) latitude_toronto = location.latitude longitude_toronto = location.longitude print('The coordinates for Toronto are {}, {}.'.format(latitude_toronto, longitude_toronto)) #
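# Nominatim occasionally times out or returns no match, in which case `geocode` gives back `None` and the attribute access above would fail. A small defensive variant (a sketch, not part of the original notebook) with a hard-coded fallback for Toronto:
# +
from geopy.exc import GeocoderServiceError

try:
    location = geolocator.geocode('Toronto, ON', timeout=10)
except GeocoderServiceError:
    location = None

if location is None:
    # Fall back to well-known coordinates for Toronto if the service is unavailable
    latitude_toronto, longitude_toronto = 43.6532, -79.3832
else:
    latitude_toronto, longitude_toronto = location.latitude, location.longitude
# -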

    3.3 Plotting the neighborhoods on the Toronto map to visualize them

    # + map_toronto = folium.Map(location=[latitude_toronto, longitude_toronto], zoom_start=10) # adding markers to map ... for lat, lng, borough, Neighborhood in zip(df3['Latitude'], df3['Longitude'], df3['Borough'], df3['Neighborhood']): label = '{}, {}'.format(Neighborhood, borough) label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='red', fill=True, fill_color='green', fill_opacity=0.7, parse_html=False).add_to(map_toronto) map_toronto # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import csv import re import math import matplotlib.pyplot as plt import numpy as np import sys import pickle sys.path.append("..") # adds higher directory to python modules path from LoaderPACK.Loader import load_whole_data, load_shuffle_5_min import torch # - # # Section 1: Visualize the eeg-recording and targets # In the following a whole eeg-recording will be loaded: # + device = "cpu" # the load_whole_data laods a whole eeg-recording and targets into memory trainload = load_whole_data(path = "C:/Users/Marc/Desktop/model_data/train_model_data", series_dict = 'train_series_length.pickle', ind = [i for i in range(1, 195 + 1)]) trainloader = torch.utils.data.DataLoader(trainload, batch_size=1, shuffle=False, num_workers=0) it = iter(trainloader) model_input, model_target, model_data = next(it) torch.set_printoptions(edgeitems=2) print(model_input[0]) print() print(model_target[0]) print() print(model_input[0].shape) # - # Thus 4 channels from this individuals recording will be used in the training process. # The network will be trained on only 5 minutes of data. Thus the above recorded data is passed into another dataloader. This dataloader cuts the recording into 5 min. intervals: # + device = "cpu" # the load_whole_data laods a whole eeg-recording and targets into memory trainload = load_whole_data(path = "C:/Users/Marc/Desktop/model_data/train_model_data", series_dict = 'train_series_length.pickle', ind = [i for i in range(1, 195 + 1)]) trainloader = torch.utils.data.DataLoader(trainload, batch_size=1, shuffle=False, num_workers=0) it = iter(trainloader) batch_size = 1 # the load_shuffle_5_min cuts the recording and targets loaded from load_whole_data # into random 5 minutes intervals. loader2 = load_shuffle_5_min(next(it), device) loader2 = torch.utils.data.DataLoader(loader2, batch_size=batch_size, shuffle=True, num_workers=0) it2 = iter(loader2) model_input, model_target, model_data = next(it2) torch.set_printoptions(edgeitems=2) print(model_input[0][0]) print(model_target[0][0]) # - # # Section 2: Equalize dataloading # The Lorenz curve will be used to visualize the equality of the data. def lorenz_curve(ls: list, name: str): """ This function return a lorenz curve for the lenght of the patients eeg recordings. Args: ls (list): list with the length of each eeg recording. name (str): the name - or sting displayed in the title of the plot. 
Return: matplotlib plot """ tot_sum = sum(ls) # get the total sum of intervals nr_patients = len(ls) # get the number of patients sorted_ls = sorted(ls) # sort the list res = [] rang = [i/100 for i in range(0, 100 + 5, 5)] # get the percentage range for per in rang: res.append(sum(sorted_ls[:math.floor(nr_patients * per)])/tot_sum) plt.title(f"Lorenz curve of {name}") plt.xlabel("Cummulative % of patients") plt.ylabel("Cummulative % of intervals") plt.plot(rang, res) plt.plot(rang, rang) return plt # For the train encoding file, each patient can be linked with his recordings: # + path = "C:/Users/Marc/Desktop/model_data/train_model_data" patient_samples = dict() with open(path + '/train_encoding.csv', 'r') as file: ls = csv.reader(file) for rows in ls: m = re.match(r".+/\d+/(\d+)/+.", rows[0]) val_to_dict = patient_samples.get(m.group(1), []) val_to_dict.append(rows[2]) patient_samples[m.group(1)] = val_to_dict # - print(patient_samples) # ## Show the data with no correction # + import time device = "cpu" print(device) # the load_whole_data laods a whole eeg-recording and targets into memory trainload = load_whole_data(path = "C:/Users/Marc/Desktop/model_data/train_model_data", series_dict = 'train_series_length.pickle', ind = [i for i in range(1, 195 + 1)]) trainloader = torch.utils.data.DataLoader(trainload, batch_size=1, shuffle=False, num_workers=0) data_before = [] # list to save the amount of 5 min. cuts per recording start = time.time() for file in trainloader: size = (file[0][0].shape[0], file[0][0].shape[1]) length = math.floor(((size[1]-200*30)/(200*60*5)))*size[0] # the amount of total possible cuts for the recording data_before.append(length) print("time:", time.time()-start) # - # The amount of 5 minute intervals per recording can be displayed with a histogram # + plt.hist(data_before, bins = 30) plt.title("Histogram over intervals per recording before correction") plt.savefig(f"C:/Users/Marc/Desktop/BP photos/hist_exp_before.jpg") plt.show() plt.close() print(np.mean(data_before)) print(np.median(data_before)) # - # It can be seen, that there is a few but very large samples (way larger than the mean). # The above plot is for the amount of 5 min. intervals per recording. A patient might have multiple recordings. Thus in the following the amount of 5 min. intervals per patient will be found: # + patient_before = [] for value in patient_samples.values(): # go though the patient indicies vals = 0 for i in value: vals += data_before[int(i)-1] patient_before.append(vals) # + plt.hist(patient_before, bins = 30) plt.title("Histogram over intervals per patient before correction") plt.savefig(f"C:/Users/Marc/Desktop/BP photos/hist_patient_before.jpg") plt.show() plt.close() print(np.mean(patient_before)) print(np.median(patient_before)) # - # It can be seen from the histogram, that a few patients constitute with a lot of the 5 min. intervals in the data. Let's investigate how much data the top 25% of patients constitute with. 
sorted_patient_before = sorted(patient_before, reverse=True) print("Total number of patients:", len(patient_before)) print("25% of patients amounts to:", int(0.25*len(patient_before))) print("The top 25% of patients constitute with:", sum(sorted_patient_before[:int(0.25*len(patient_before))])/sum(patient_before), "of the data") # This can also be seen in the plot of the Lorenz curve: # + m = lorenz_curve(patient_before, "intervals before correction") plt.savefig(f"C:/Users/Marc/Desktop/BP photos/Lorenz_before.jpg") plt.show() plt.close() # - # Thus it can be seen that a few patients constitute with a large amount of the data. # ## Show the data with correction # To try and eliminate this inequality a few steps has to be made in the dataset: # + import time device = "cpu" print(device) # the load_whole_data laods a whole eeg-recording and targets into memory trainload = load_whole_data(path = "C:/Users/Marc/Desktop/model_data/train_model_data", series_dict = 'train_series_length.pickle', ind = [i for i in range(1, 195 + 1)]) trainloader = torch.utils.data.DataLoader(trainload, batch_size=1, shuffle=False, num_workers=0) data_after = [] nr_of_files_loaded = 0 batch_size = 1 start = time.time() for file in trainloader: # instead of loading the total amount of 5 min. intervals possible # this dataloader tries to weight the recordings # thus short recordings will be represented as much as longer recordings loader2 = load_shuffle_5_min(file, device) loader2 = torch.utils.data.DataLoader(loader2, batch_size=batch_size, shuffle=True, num_workers=0) nr_of_files_loaded += 1 j = 0 for i in loader2: j += 1 data_after.append(j) # append the amount of series loaded print("time:", time.time()-start) print("Number of files loaded in total:", nr_of_files_loaded) # - # This data can now be plotted with a histogram: # + plt.title("Histogram over intervals per recording after correction") plt.hist(data_after, bins = 30) plt.savefig(f"C:/Users/Marc/Desktop/BP photos/hist_exp_after.jpg") plt.show() plt.close() print(np.mean(data_after)) print(np.median(data_after)) # - # Now it can be seen, that there are no longer any extreme cases. # The amount of 5 min. intevals per patient will be found: # + patient_after = [] for value in patient_samples.values(): vals = 0 for i in value: vals += data_after[int(i)-1] patient_after.append(vals) # + plt.title("Histogram over intervals per patient after correction") plt.hist(patient_after, bins = 30) plt.savefig(f"C:/Users/Marc/Desktop/BP photos/hist_patient_after.jpg") plt.show() plt.close() print(np.mean(patient_after)) print(np.median(patient_after)) # - # This again show, that a more fair distribution has been achived. 
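# A single summary number for this kind of Lorenz comparison is the Gini coefficient (0 means every patient contributes the same number of intervals). This is a minimal numpy sketch added for illustration; it is not part of the LoaderPACK code.
# +
def gini(ls):
    """Gini coefficient of a list of interval counts, using the standard discrete formula."""
    arr = np.sort(np.asarray(ls, dtype=float))
    n = arr.size
    index = np.arange(1, n + 1)
    return (2 * np.sum(index * arr)) / (n * np.sum(arr)) - (n + 1) / n

print('Gini before correction:', round(gini(patient_before), 3))
print('Gini after correction: ', round(gini(patient_after), 3))
# -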
# After the correction the amount of data that the top 25% of patients constitute with is: sorted_patient_after = sorted(patient_after, reverse=True) print("Total number of patients:", len(patient_after)) print("25% of patients amounts to:", int(0.25*len(patient_after))) print("The top 25% of patients constitute with:", sum(sorted_patient_after[:int(0.25*len(patient_after))])/sum(patient_after), "of the data") # This more fair distribution can also be seen in the following Lorenz curve: # + m = lorenz_curve(patient_after, "intervals after correction") plt.savefig(f"C:/Users/Marc/Desktop/BP photos/Lorenz_after.jpg") plt.show() plt.close() # - # ## Show the data with extreme correction # + import time device = "cpu" print(device) # the load_whole_data laods a whole eeg-recording and targets into memory trainload = load_whole_data(path = "C:/Users/Marc/Desktop/model_data/train_model_data", series_dict = 'train_series_length.pickle', ind = [i for i in range(1, 195 + 1)]) trainloader = torch.utils.data.DataLoader(trainload, batch_size=1, shuffle=False, num_workers=0) data_ex = [] nr_of_files_loaded = 0 batch_size = 1 start = time.time() for file in trainloader: # the load_shuffle_5_min will now load the data with extreme correction loader2 = load_shuffle_5_min(file, device, ex = True) loader2 = torch.utils.data.DataLoader(loader2, batch_size=batch_size, shuffle=True, num_workers=0) nr_of_files_loaded += 1 j = 0 for i in loader2: j += 1 data_ex.append(j) print("time:", time.time()-start) print("Number of files loaded in total:", nr_of_files_loaded) # - # Find how many intervals each patient consitute with: # + patient_ex = [] for value in patient_samples.values(): vals = 0 for i in value: vals += data_ex[int(i)-1] patient_ex.append(vals) # - # The following shows that extreme equality has been achived: sorted_patient_ex = sorted(patient_ex, reverse=True) print("Total number of patients:", len(patient_ex)) print("25% of patients amounts to:", int(0.25*len(patient_ex))) print("The top 25% of patients constitute with:", sum(sorted_patient_ex[:int(0.25*len(patient_ex))])/sum(patient_ex), "of the data") # Now the Lorenz plot can be created: # + m = lorenz_curve(patient_ex, "intervals with extreme correction") plt.savefig(f"C:/Users/Marc/Desktop/BP photos/Lorenz_low.jpg") plt.show() plt.close() # - # # Section 3: How the dataloading is equalized: # To make sure that data is distributed in the way that is intented, a pickle file with meta data of each recording is used. In the following code, this file will be looked at. # Files of this type is created by the script: "2 series_dict.py" found in the preprocess folder on git. # + import pickle # load the file: with open('C:/Users/Marc/Desktop/model_data/train_model_data/train_series_length.pickle', 'rb') as handle: s_dict = pickle.load(handle) # - # Print the content for the first recording: print(s_dict[1]) # The first number is how many samples the patient delivered (in all of the recorded eeg-sessions). # The second number is how many different times the patient has been recorded. # The third number is how many intervals should be sampled from the recording in total. If a patient has a short recording, this will not be over sampled. # The fourth value return the size of the given recording: (number of usefull channels, the length of the channels). 
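# To make those four fields explicit, the entry can be read out by index. The names below are illustrative and assume exactly the layout described above (samples, number of recordings, intervals to sample, recording size); they are not taken from the LoaderPACK source.
# +
entry = s_dict[1]
print('samples from this patient:    ', entry[0])
print('recordings for this patient:  ', entry[1])
print('intervals to sample in total: ', entry[2])  # this field is what the code below uses
print('(channels, channel length):   ', entry[3])
# -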
# + new_p = list(i for i in np.array(list(s_dict.values()))[...,2]) patient_after = [] for value in patient_samples.values(): vals = 0 for i in value: vals += new_p[int(i)-1] patient_after.append(vals) # - # The following is the histogram and lorenz curve achived earlier # + plt.title("Histogram over intervals per experiment after correction") plt.hist(patient_after, bins = 30) plt.show() print(np.mean(patient_after)) print(np.median(patient_after)) # + m = lorenz_curve(patient_after, "intervals after correction") #plt.savefig(f"C:/Users/Marc/Desktop/BP photos/Lorenz_after.jpg") plt.show() #plt.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import scipy from src import inception_v3_imagenet from src import imagenet_labels from src import utils from src.diff_renderer import make_render_op # - from src.utils import angles_to_matrix # %matplotlib inline width, height = 1000, 1000 mesh = utils.load_obj('resources/dog.obj') original_texture = mesh.texture_image.copy() render_op = make_render_op(mesh, width, height) trans = [0, 0, 2.6] rotation = [0.4, np.pi+.7, 2.9] fscale = 0.4 bgcolor = [0.9, 0.9, 0.9] texture_image = mesh.texture_image view_matrix = np.hstack((angles_to_matrix(rotation) , np.reshape(trans, (3, 1)) )) view_matrix view2_matrix = np.vstack((view_matrix, np.array([0, 0, 0, 1]))) view2_matrix # + pixel_center_offset = 0.5 near = 0.1 far = 100. f = 0.5 * (fmat[0] + fmat[1]) center = [width/2.,height/2.] right = (width-(center[0]+pixel_center_offset)) * (near/f) left = -(center[0]+pixel_center_offset) * (near/f) top = -(height-(center[1]+pixel_center_offset)) * (near/f) bottom = (center[1]+pixel_center_offset) * (near/f) A = (right + left) / (right - left) B = (top + bottom) / (top - bottom) C = (far + near) / (far - near) D = (2 * far * near) / (far - near) projMatrix = np.array([ [2 * near / (right - left), 0, A, 0], [0, 2 * near / (top - bottom), B, 0], [0, 0, C, D], [0, 0, -1, 0] ]) # - homo_v = np.hstack((mesh.v, np.ones((mesh.v.shape[0], 1) ))) homo_v # + # proj_matrix = camera_matrix.dot(view_matrix) proj_matrix = projMatrix.dot(view2_matrix) # unhomo(proj_matrix.dot(homo_v[0,:])) # - abnormal = proj_matrix.dot(homo_v.reshape((-1, 4, 1)))[:, :, 0] XY = (abnormal[:,:] / abnormal[3,:]).T XY # + # plt.set_autoscale_on(False) plt.figure(figsize=(5,5)) plt.scatter(XY[:,0], XY[:, 1], c = XY[:, 2], s=3) # plt.axes().set_aspect('equal', 'datalim') plt.xlim([1, -1]) plt.ylim([1, -1]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Code for trial and error analysis and sanity checks # + import pandas as pd import os import sys import seaborn as sns import matplotlib.pyplot as plt import numpy as np import statsmodels.api as sm import statsmodels.formula.api as smf import nibabel as nib sys.path.append('../') from lib.stats_utils import * # - project_dir = '../../../' data_dir = project_dir + 'data/preproc_T1s/' metadata_dir = '../metadata/' results_dir = '../results/MAGeT/' # ### Check cerebellum volumes of the atlas labels # + atlas_volume_csv = metadata_dir + 'atlas_volumes.csv' atlas_volume_df = pd.read_csv(atlas_volume_csv) cols_L = 
['L_I_II', 'L_III', 'L_IV', 'L_V', 'L_VI', 'L_Crus_I','L_Crus_II', 'L_VIIB', 'L_VIIIA', 'L_VIIIB', 'L_IX', 'L_X', 'L_CM'] cols_R = ['R_I_II', 'R_III', 'R_IV', 'R_V', 'R_VI', 'R_Crus_I', 'R_Crus_II','R_VIIB', 'R_VIIIA', 'R_VIIIB', 'R_IX', 'R_X', 'R_CM'] atlas_volume_df['L_CB'] = atlas_volume_df[cols_L].sum(axis=1) atlas_volume_df['R_CB'] = atlas_volume_df[cols_R].sum(axis=1) atlas_volume_df.head() # + plot_df = atlas_volume_df.copy() plot_df = pd.melt(plot_df, id_vars = ['Subject'], value_vars = ['L_CB','R_CB'] , var_name ='ROI', value_name ='volume') palette = sns.color_palette('husl',2) with sns.axes_style("whitegrid"): g = sns.catplot(x='volume', y='ROI', kind='bar',aspect=3, height=4, palette = palette, data=plot_df) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Read the record.json file and access them # + import json fd=open("record.json",'r') r=fd.read() fd.close() record=json.loads(r) # - # ### List of items available in the record record # ### Code to insert inventory to the record.json # + prod_id = str(input("Enter product id: ")) name = str(input("Enter name: ")) pr = int(input("Enter price: ")) qn = int(input("Enter quantity: ")) cat=str(input("enter the category of the item: ")) disc= int(input("enter the discount available on the item : ")) record[prod_id] = {'name': name, 'pr': pr, 'qn': qn, 'category' : cat, 'Discount' : disc} js = json.dumps(record) fd = open("record.json",'w') fd.write(js) fd.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="NWC5lvk6veIr" colab_type="code" outputId="06ad4c03-542e-460c-d5ef-71324aa2a4b6" colab={"base_uri": "https://localhost:8080/", "height": 34} import cv2 import pandas as pd import matplotlib.pyplot as plt import numpy as np from keras.utils import to_categorical from keras.models import Sequential from keras.layers import Dense, Conv2D, Flatten from sklearn.model_selection import train_test_split from keras.models import Sequential from keras.layers import Dense , Activation , Dropout ,Flatten from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.metrics import categorical_accuracy from keras.models import model_from_json from keras.optimizers import * from keras.layers.normalization import BatchNormalization # + pycharm={"name": "#%%\n"} data = pd.read_pickle("final_data.pkl") # + id="9o18yWi6zq30" colab_type="code" outputId="e33f0365-b014-40d7-8f6d-5465c57f6c98" colab={"base_uri": "https://localhost:8080/", "height": 52} print(data.shape) data = data[data['X'] != None] print(data.shape) # + id="xoBgmky9EizB" colab_type="code" outputId="fb2f455c-2674-4665-948f-65356a7e3d87" colab={"base_uri": "https://localhost:8080/", "height": 34} data_nosketch = data[data['sketch'] == 0] data_nosketch data_nosketch.shape # + id="n7wSYSGXGDgX" colab_type="code" colab={} def return_dataset(data, class_name): for index, row in data.iterrows(): vec = data['X'][index] if index == 0: X = vec y = np.reshape(data[class_name][index], (1, 1)) else: X = np.concatenate((X, vec), axis=1) y = np.concatenate((y, np.reshape(data[class_name][index], (1, 1))), axis=0) return X, 
y # + id="wzeuDHHSGFEg" colab_type="code" outputId="e2a956de-2755-4053-8d37-48f031c844a2" colab={"base_uri": "https://localhost:8080/", "height": 225} X = data['X'] y = data['expression'] def encode(data): print('Shape of data (BEFORE encode): %s' % str(data.shape)) encoded = to_categorical(data) print('Shape of data (AFTER encode): %s\n' % str(encoded.shape)) return encoded encoded_y = encode(y) print(encoded_y) #print(y) X_train, X_test, y_train, y_test = train_test_split(X, encoded_y, test_size = 0.1, random_state = 42) print(y_train.shape) print(X_train.shape) # + id="Fkax8d0AWZ5-" colab_type="code" outputId="dc6848dc-9277-4b16-83f7-0c51d687ccd3" colab={"base_uri": "https://localhost:8080/", "height": 69} index = 0 X_list = list(X_train) y_list = list(y_train) X_list_n = [] X_test_n = [] for example in X_list: index = index + 1 try: if (example.shape != (48, 48, 3)): print(index) except: print(X_list[index - 1]) print(index - 1) X_list.pop(index - 1) y_list.pop(index - 1) for example in X_list: gray = cv2.cvtColor(example, cv2.COLOR_BGR2GRAY) X_list_n.append(gray) for example in list(X_test): gray = cv2.cvtColor(example, cv2.COLOR_BGR2GRAY) X_test_n.append(gray) print(len(X_list)) print(len(y_list)) X_t = np.stack(X_list_n, axis=0) y_t = np.stack(y_list, axis=0) X_tes = np.stack(list(X_test_n), axis=0) y_tes = np.stack((y_test), axis=0) X_t = np.reshape(X_t, (X_t.shape[0], X_t.shape[1], X_t.shape[2], 1)) X_tes = np.reshape(X_tes, (X_tes.shape[0], X_tes.shape[1], X_tes.shape[2], 1)) print(y_tes.shape) # + id="Ijarfa2a2jTQ" colab_type="code" outputId="c6ac87dd-8721-4104-8ebb-737f10acbfc6" colab={"base_uri": "https://localhost:8080/", "height": 286} from skimage import data coins = data.coins() print(type(coins), coins.dtype, coins.shape) plt.imshow(coins, cmap='gray', interpolation='nearest'); # + id="dkwgscewnx3B" colab_type="code" outputId="6c840dcb-2c32-4407-9ea5-448575158c0d" colab={"base_uri": "https://localhost:8080/", "height": 391} print(y_tes.shape) print(X_tes.shape) print("-----------") print(type(X_tes[0])) print(X_tes[0].dtype) print(X_tes[0].reshape((X_tes[0].shape[0], X_tes[0].shape[1])).shape) plt.imshow(X_t[32].reshape((X_tes[0].shape[0], X_tes[0].shape[1])), cmap='gray') plt.show() print(y_t[32]) # + id="xEH8ivdu0SsU" colab_type="code" outputId="994dfe8e-d995-4dc0-d89d-44ae3af8a5f8" colab={"base_uri": "https://localhost:8080/", "height": 141} model = Sequential() # 1 - Convolution model.add(Conv2D(64,(3,3), padding='same', input_shape=(48, 48,1))) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 2nd Convolution layer model.add(Conv2D(128,(5,5), padding='same')) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 3rd Convolution layer model.add(Conv2D(512,(3,3), padding='same')) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # Flattening model.add(Flatten()) # Fully connected layer 1st layer model.add(Dense(256)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Dropout(0.25)) # Fully connected layer 2nd layer model.add(Dense(512)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Dropout(0.25)) model.add(Dense(7, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[categorical_accuracy]) # + id="769G3tFGB945" colab_type="code" 
outputId="4b023a4f-800a-496b-a3cd-a7aefb24e670" colab={"base_uri": "https://localhost:8080/", "height": 69} #train the model history = model.fit(X_t, y_t, batch_size=64, epochs=30, verbose=1, validation_split=0.1111) # + id="MbMcjC8OCAG9" colab_type="code" outputId="61da305a-36c3-4423-d21d-457b7a127e62" colab={"base_uri": "https://localhost:8080/", "height": 139} from sklearn.metrics import classification_report, confusion_matrix pred_list = []; actual_list = [] predictions = model.predict(X_tes) for i in predictions: pred_list.append(np.argmax(i)) for i in y_tes: actual_list.append(np.argmax(i)) confusion_matrix(actual_list, pred_list) # + id="xyqCUSzoBKy4" colab_type="code" colab={} predictions.shape # + id="XaxBLbso7QRV" colab_type="code" colab={} np.sum(np.sum(confusion_matrix(actual_list, pred_list), axis=1), axis=0) # + id="I5K914yF7hnz" colab_type="code" colab={} print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['categorical_accuracy']) plt.plot(history.history['val_categorical_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + id="edNsQS74-MR2" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Figure 3: Cluster-level consumptions # # This notebook generates individual panels of Figure 3 in "Combining satellite imagery and machine learning to predict poverty". # + from fig_utils import * import matplotlib.pyplot as plt import time # %matplotlib inline # - # ## Predicting consumption expeditures # # The parameters needed to produce the plots are as follows: # # - country: Name of country being evaluated as a lower-case string # - country_path: Path of directory containing LSMS data corresponding to the specified country # - dimension: Number of dimensions to reduce image features to using PCA. Defaults to None, which represents no dimensionality reduction. # - k: Number of cross validation folds # - k_inner: Number of inner cross validation folds for selection of regularization parameter # - points: Number of regularization parameters to try # - alpha_low: Log of smallest regularization parameter to try # - alpha_high: Log of largest regularization parameter to try # - margin: Adjusts margins of output plot # # The data directory should contain the following 5 files for each country: # # - conv_features.npy: (n, 4096) array containing image features corresponding to n clusters # - consumptions.npy: (n,) vector containing average cluster consumption expenditures # - nightlights.npy: (n,) vector containing the average nightlights value for each cluster # - households.npy: (n,) vector containing the number of households for each cluster # - image_counts.npy: (n,) vector containing the number of images available for each cluster # # Exact results may differ slightly with each run due to randomly splitting data into training and test sets. 
# #### Panel A # + # Plot parameters country = 'nigeria' country_path = '../data/LSMS/nigeria/' dimension = None k = 5 k_inner = 5 points = 10 alpha_low = 1 alpha_high = 5 margin = 0.25 # Plot single panel t0 = time.time() X, y, y_hat, r_squareds_test = predict_consumption(country, country_path, dimension, k, k_inner, points, alpha_low, alpha_high, margin) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) # - # #### Panel B # + # Plot parameters country = 'tanzania' country_path = '../data/LSMS/tanzania/' dimension = None k = 5 k_inner = 5 points = 10 alpha_low = 1 alpha_high = 5 margin = 0.25 # Plot single panel t0 = time.time() X, y, y_hat, r_squareds_test = predict_consumption(country, country_path, dimension, k, k_inner, points, alpha_low, alpha_high, margin) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) # - # #### Panel C # + # Plot parameters country = 'uganda' country_path = '../data/LSMS/uganda/' dimension = None k = 5 k_inner = 5 points = 10 alpha_low = 1 alpha_high = 5 margin = 0.25 # Plot single panel t0 = time.time() X, y, y_hat, r_squareds_test = predict_consumption(country, country_path, dimension, k, k_inner, points, alpha_low, alpha_high, margin) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) # - # #### Panel D # + # Plot parameters country = 'malawi' country_path = '../data/LSMS/malawi/' dimension = None k = 5 k_inner = 5 points = 10 alpha_low = 1 alpha_high = 5 margin = 0.25 # Plot single panel t0 = time.time() X, y, y_hat, r_squareds_test = predict_consumption(country, country_path, dimension, k, k_inner, points, alpha_low, alpha_high, margin) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import seaborn as sns import datasist as ds sns.set(rc={'figure.figsize': [13, 13]}, font_scale=1.2) # - df = pd.read_csv('titanic.csv') df # df.groupby('Ticket').mode() df.groupby(['Cabin']).sum() df.info() from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() df['Sex']= label_encoder.fit_transform(df['Sex']) df['title']=df['Name'].str.split().str[1].str.split('.').str[0] df['title'].value_counts() top_title = list(df['title'].value_counts().sort_values(ascending=False).head(6).index) top_title for label in top_title: df[label] = np.where(df['title']==label,1,0) df.drop(['title'],axis=1,inplace=True) df df['Cabin'] = df['Cabin'].str[0] df sns.countplot(x='Cabin',hue='Survived', data=df) df.groupby('Cabin').describe()[['Fare','Pclass','Sex']].transpose() # from sklearn.preprocessing import LabelEncoder # label_encoder = LabelEncoder() df['NCabin']= label_encoder.fit_transform(df['Cabin']) df df['NCabin'] = np.where(df['NCabin']==8,np.nan,df['NCabin']) df df.groupby(['Sex','Pclass']).median('NCabin')['NCabin'] from sklearn.impute import KNNImputer imputer = KNNImputer() # cols = ['SibSp','Sex', 'Fare','Pclass','Survived','NCabin'] cols = ['SibSp','Sex', 'Fare','Pclass','Survived','NCabin'] XX = df[cols] cc=imputer.fit_transform(XX) df['NCabin']= cc[:,-1:] df['NCabin'] = df['NCabin'].round() df['NCabin'].value_counts() df['Age'].value_counts() df # + # from sklearn.impute import KNNImputer # imputer = KNNImputer() # cols = ['SibSp', 'Fare','NCabin','Pclass','Survived','Age'] cols = ['SibSp', 'Fare','NCabin','Pclass','Survived','Age'] XX = 
df[cols] cc=imputer.fit_transform(XX) # - df['Age']= cc[:,-1:] df['Age'].value_counts() df.isna().sum() df df = pd.get_dummies(df, columns=['NCabin'], drop_first=True) df df.info() df.drop(['Cabin'], axis=1,inplace=True) # df.drop(['NCabin'], axis=1,inplace=True) df.info() import pandas_profiling # + # pandas_profiling.ProfileReport(df) # - sns.heatmap(df.corr(), annot=True, fmt='.1f') df df.isna().sum().sort_values(ascending=False) missing_columns = list(df.isna().sum()[df.isna().sum()>1].index) df = pd.get_dummies(df, columns=['Embarked'], drop_first=True) df x = df.drop(['Survived','Name','Ticket','PassengerId'], axis=1) y = df['Survived'] x.isna().sum() y from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3) y_train.value_counts() # + # from imblearn.under_sampling import RandomUnderSampler # sampler = RandomUnderSampler() # x_train, y_train = sampler.fit_resample(x_train, y_train) # y_train.value_counts() # - from imblearn.over_sampling import SMOTE sampler = SMOTE() x_train, y_train = sampler.fit_resample(x_train, y_train) y_train.value_counts() # + from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(x_train) x_train = scaler.transform(x_train) x_test = scaler.transform(x_test) # + from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from xgboost import XGBClassifier from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score, fbeta_score, classification_report # - models = { "LR": LogisticRegression(), "KNN": KNeighborsClassifier(), "SVC": SVC(), "DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(), "XGB": XGBClassifier(), "Naive Bayes": GaussianNB() } for name, model in models.items(): print(f'Training Model {name} \n--------------') model.fit(x_train, y_train) y_pred = model.predict(x_test) print(f'Training Accuracy: {accuracy_score(y_train, model.predict(x_train))}') print(f'Testing Accuracy: {accuracy_score(y_test, y_pred)}') print(f'Testing Confusion Matrix: \n{confusion_matrix(y_test, y_pred)}') print(f'Testing Recall: {recall_score(y_test, y_pred)}') print(f'Testing Precesion: {precision_score(y_test, y_pred)}') print(f'Testing F-1: {f1_score(y_test, y_pred)}') print(f'Testing F-Beta: {fbeta_score(y_test, y_pred, beta=0.5)}') print('-'*30) # + model = LogisticRegression() model.fit(x_train, y_train) # - y_pred = model.predict(x_test) y_pred y_test import joblib joblib.dump(model, 'model.h5') joblib.dump(scaler, 'scaler.h5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="egXb7YpqEcZF" # ## Congressional Voting Assignment - Apply the t-test to real data # # Your assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values! 
# # Your goals: # # 1. Load and clean the data (or determine the best method to drop observations when running tests) # 2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01 # 3. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.01 # 4. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference) # # Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis. # + colab={"base_uri": "https://localhost:8080/", "height": 263} colab_type="code" id="nstrmCG-Ecyk" outputId="e24f5399-69a2-42c8-dba9-f9cffb2dd834" # Grab the file from UCI # !wget https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data # + colab={} colab_type="code" id="rBdcYs-o9MH7" # Imports import pandas as pd import numpy as np from scipy import stats import matplotlib.pyplot as plt import seaborn as sns # + colab={"base_uri": "https://localhost:8080/", "height": 280} colab_type="code" id="ksR75_YZ9ZR4" outputId="acb9a84f-468d-49db-d13a-e857df6724bc" # Load Data df = pd.read_csv('house-votes-84.data', header=None, names=['party','handicapped-infants','water-project', 'budget','physician-fee-freeze', 'el-salvador-aid', 'religious-groups','anti-satellite-ban', 'aid-to-contras','mx-missile','immigration', 'synfuels', 'education', 'right-to-sue','crime','duty-free', 'south-africa']) print(df.shape) df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 263} colab_type="code" id="C5VUpcut9f2b" outputId="9815c1bf-be46-4c54-c76c-04c2c3f98da9" # Replace '?' with np.NaN, 'n' with 0, and 'y' with 1 df = df.replace({'?':np.NaN, 'n':0, 'y':1}) df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 323} colab_type="code" id="tcHR0EoV_9rO" outputId="69da2536-eed9-4d04-f1bd-16920795efce" # How many abstentions? (NaNs) df.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 280} colab_type="code" id="RWfKVbSvArDy" outputId="4e792a47-10de-44cf-d586-e26a2bcdbc55" # Create Republicans Dataframe rep = df[df.party == "republican"] print(rep.shape) rep.head() # + colab={"base_uri": "https://localhost:8080/", "height": 280} colab_type="code" id="wvS9kXRDA4Rm" outputId="48d1b509-4a5d-44b2-a1ba-8bbc4e0d2026" # Create Democrats Dataframe dem = df[df.party == "democrat"] print(dem.shape) dem.head() # + [markdown] colab_type="text" id="-TbWrjOsXVPg" # # 1 Sample T-Tests # # In a 1-sample T-test we are testing the mean of one sample against a null hypothesis of our choosing. # # The null hypothesis that we designate depends on how we have encoded our data and the kind of questions that we want to test. # # If I have encoded votes as 0 for no and 1 for yes, I want to test Democratic support for an issue, and I use a null hypothesis of 0, then I am comparing Democrat voting support against a null hypothesis of no Democrat support at all for a given issue. # # If I use a null hypothesis of .5 then I am comparing the democrat voting support against a null hypothesis of democrats being neither in favor or against a particular issue. # # # If I use a null hypothesis of 1 then I am comparing the democrat voting support against a null hypothesis of all democrats being favor of a particular issue. 
# # Lets use the 0 and .5 null-hypotheses to test the significance of those particular claims. They're all valid questions to be asking, they're just posing a slightly different question --testing something different. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="tMOWdS-jXY8s" outputId="f6b3a5a3-8217-468d-94d8-45c913ad7cc6" # Lets test this out on the handicapped-infants issue since it's the first one. # I am just going to omit NaN values from my tests. # Null Hypothesis that Democratic support is 0. stats.ttest_1samp(dem['handicapped-infants'], 0, nan_policy='omit') # + [markdown] colab_type="text" id="gN5r_MUTceNM" # Given the results of the above test I would REJECT the null hypothesis that there is no Democrat support for the handicapped-infants bill at the 95% significance level. # # In Layman's terms It would be a FALSE statment to declare that there is no democratic support for this bill. That's something that you might hear a political pundit declare, but you'll notice that they don't report their alpha value or p-value when they make such claims, they just spew them. --*Tell us how you really feel!* # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="4qqZcb04b4ec" outputId="5246b6de-fb2e-47a9-b98a-b9a85a3df472" # Null Hypothesis that Democrats neither support not oppose the issue stats.ttest_1samp(dem['handicapped-infants'], .5, nan_policy='omit') # + colab={"base_uri": "https://localhost:8080/", "height": 638} colab_type="code" id="1CBhze5ycmcX" outputId="91c98a3f-b41c-4504-c8b1-4db25ee15e58" # Look at vote counts by party and by issue # It's very easy to perform some Interocular Traumatic Tests (IOT) on this data # https://www.r-bloggers.com/inter-ocular-trauma-test/ # We can eyeball the outcomes of some of these before we perform any T-tests. # But which of the differences is statistically significant? dem.apply(pd.Series.value_counts).T # + [markdown] colab_type="text" id="Wnvni1GGe-Ah" # As we look at the above table we see our findings corroborated in the raw voting counts of democrats. They are definitely not all against the handicapped-infants bill hence the very high p-value. We also see that democrats on this issue are not evenly split but that there is a significant margin of support for the handicapped-infants bill. However this margin is not as extreme for our second hypothesis as for our first, and hence our secont 1-sample t-test has a lower t statistic and a higher p-value although it is still significant. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0EtYrjgUfuvc" outputId="626dcf92-7de1-4639-d3f2-97ebec089aea" # Null hypothesis that there is no democratic support for the bill. stats.ttest_1samp(dem['physician-fee-freeze'], 0, nan_policy='omit') # + [markdown] colab_type="text" id="Rf6QmMw1gMT0" # We see that even though this issue has the most extreme Democrat opposition, we cannot conclude that there is no Democrat support for this issue. We will again REJECT the null hypothesis that the mean of democratic support is 0 --or that there is no Democrat support for this issue. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="237TY_jTf3mj" outputId="2067fa70-50fd-4170-90fc-270c962526d1" # Null hypothesis that there is even support for the bill among democrats. 
stats.ttest_1samp(dem['physician-fee-freeze'], .5, nan_policy='omit') # + [markdown] colab_type="text" id="c9rR-KJsglzG" # Here we are again testing the null hypothesis of Democrats being neither for nor against the issue, but being split in their voting. This time we see a strong negative t-statistic and low p-value. The negative sign on this t-statistic suggests that democratic support is much further to the left of .5 (our null hypothesis of even yes/no voting in the party) in other words, this t-statistic says that no only are democrats not split on the issue, they are highly against this policy. Again this is very clear to us as we consult the output of raw vote counts for the democrats. # # As a final test and example of 1-sample T-tests lets try and get something that's not statistically significant. Lets test our split-support hypothesis on the "synfuels" policy. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="eq7X8gdRhxRM" outputId="7a1e07be-f413-4e36-ca96-b168b6c02a57" stats.ttest_1samp(dem['synfuels'], .5, nan_policy='omit') # + [markdown] colab_type="text" id="2CGe5scFiRoh" # Here we FAIL TO REJECT the null hypothesis that democrats voting is even on this issue because our t-statistic is small resulting in a p-value greater than .05. So we would fail to reject this null hypothesis at the 95% level. Remember that we never "accept" the null hypothesis, we only "fail to reject" it. We're not claiming that the null hypothesis is true, we're just stating that our test doesn't have enough statistical power to show otherwise. # + [markdown] colab_type="text" id="o2AtRg6di7wk" # ### We could do the same thing with the Republicans dataframe, but the result would be extremely similar. # + colab={"base_uri": "https://localhost:8080/", "height": 635} colab_type="code" id="gruyk8jojFXp" outputId="565b0519-10d1-44e7-8ef9-9026ed92bf31" # Look at republican voting patterns rep.apply(pd.Series.value_counts).T # + [markdown] colab_type="text" id="VtwHw3c-jN88" # ## What if we didn't split our dataframe up by Republicans and Democrats, then what would that be testing? # # The contents of the overall dataframe that we're working with determines the GENERALIZABILITY of our results. If we're running tests on Democrat voting behavior then our hypothesis tests can only make claims about Democrat voting, they say nothing about Republican support or opposition for an issue, we would have to run those tests on the Republican dataframe. # # But then what do T-tests on the entire dataframe of both Republicans and Democrats say? They're testing the same thing but generalized to all congresspeople and the results are not specific to one party or another. # + colab={"base_uri": "https://localhost:8080/", "height": 635} colab_type="code" id="Z6CU5zb5jZYg" outputId="629d49ae-a36b-421d-f9a9-e82d8c40b3df" # Look at congressional voting patterns df.apply(pd.Series.value_counts).T # + [markdown] colab_type="text" id="4C8o5hBYmCDq" # We'll give one example of this, but it's the same as the T-tests above, just with the context being congress as a whole rather than a specific party. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="FaMnflMtlg-M" outputId="f949fc3b-e3c8-45ec-c124-2f00fedc5cf1" stats.ttest_1samp(df['mx-missile'], .5, nan_policy='omit') # + [markdown] colab_type="text" id="wnxJUIvllxCL" # We FAIL TO REJECT the null hypothesis that congressional support for the 'mx-missile' policy is split (even) among congresspeople. 
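# + [markdown]
# Before moving on to 2-sample tests, here is a small sanity check (an addition to the original notes, assuming `dem` from above is still in scope): the 1-sample t-statistic is just $t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$, with a two-sided p-value from a t distribution with $n-1$ degrees of freedom, so we can reproduce `ttest_1samp` by hand.

# +
import numpy as np
from scipy import stats

votes = dem['handicapped-infants'].dropna()      # drop abstentions, mirroring nan_policy='omit'
mu0 = 0.5                                        # null hypothesis: evenly split support
n = len(votes)

t_manual = (votes.mean() - mu0) / (votes.std(ddof=1) / np.sqrt(n))
p_manual = 2 * stats.t.sf(abs(t_manual), df=n - 1)

print(t_manual, p_manual)
print(stats.ttest_1samp(votes, mu0))             # should agree with the manual values
# -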
# + [markdown] colab_type="text" id="4jpBRb2Vmexi" # # 2-sample T-tests # # Two-sample T-tests are very similar to 1-sample t-tests, except that instead of providing a raw value as a null hypothesis, we will be comparing the mean of a second sample as the alternate hypothesis. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="uowS49VLmu6W" outputId="fd6aeca0-e683-4036-b90a-d181715fc050" stats.ttest_ind(dem['mx-missile'], rep['mx-missile'],nan_policy='omit') # + [markdown] colab_type="text" id="Vm8V54Zqnp4Y" # ## Don't make this mistake! # # Notice that a 2-sample T-test does not give the same results as a 1-sample t-test where the null hypothesis is the mean of a second sample. The test below is representative of a t-test comparing the mean of the first sample to a null-hypothesis value of .115, but it is not representative of comparing the mean of the sample to the mean of a second sample. This is because passing in the mean as a single value does not account for the variance around the mean of the second sample, so the 1-sample t-test against the mean value of another sample has a much higher t-statistic and significance than a test where you pass in the full data of both samples. # # You can avoid this mistake by making sure that you provide a raw null-hypothesis value (like 0, .5, 1) as the null-hypothesis value when performing a 1-sample t-test, and only two separate samples in their entirety to the function when performing a two-sample t-test. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="f7oZcq7wlvgq" outputId="43318bfd-6df4-4900-da4a-63e9c95d1f26" rep_mx_missile_mean = rep['mx-missile'].mean() print(rep_mx_missile_mean) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ZSpJva95nHEc" outputId="450eed41-e050-4cfe-ed09-3ebcca031d54" stats.ttest_1samp(dem['mx-missile'], rep_mx_missile_mean, nan_policy='omit') # + [markdown] colab_type="text" id="5vIu8rIKr2BS" # Also notice that the order in which you pass the two samples to the 2-sample test will reverse the direction of the t-statistic. # # A t-statistic with a positive sign indicates that the first sample mean is higher than the second sample mean, and the significance level indicates whether or not the means are different at the 95% confidence level. # + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="WGn7p3zZr-Uv" outputId="4ef32bcd-f169-4721-ede9-6439631c9c6b" print(dem['mx-missile'].mean()) print(rep['mx-missile'].mean()) stats.ttest_ind(dem['mx-missile'], rep['mx-missile'], nan_policy='omit') # + [markdown] colab_type="text" id="XNw-CZod7hmF" # Here we have reversed the order of which sample we pass in first to demonstrate how `ttest_ind` might result in a negative t-statistic. # # + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="kvaZq-qesODH" outputId="16bb8eff-8301-4cf4-87cd-9f0bdc73a1ee" print(dem['mx-missile'].mean()) print(rep['mx-missile'].mean()) stats.ttest_ind(rep['mx-missile'], dem['mx-missile'], nan_policy='omit') # + [markdown] colab_type="text" id="1DZ3K_x5sboG" # Because of this, in order to ensure consistency of the signs of the t-statistic during 2-sample tests, I suggest sticking with passing in one party as the first argument and the other party as the second argument and sticking with that pattern throughout your testing so that you don't confuse yourself. 
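# + [markdown]
# A small helper for the assignment (an addition, not part of the original lesson): run the 2-sample test on every issue at once, keeping the Democrat-first argument order suggested above so a positive t-statistic always means higher Democrat support. Assumes `df`, `dem` and `rep` are still in scope.

# +
import pandas as pd
from scipy import stats

issues = [c for c in df.columns if c != 'party']

rows = []
for issue in issues:
    t, p = stats.ttest_ind(dem[issue], rep[issue], nan_policy='omit')
    rows.append({'issue': issue, 't_statistic': t, 'p_value': p})

# Positive t: Democrats support the issue more; negative t: Republicans do.
pd.DataFrame(rows).sort_values('p_value')
# -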
# + [markdown] colab_type="text" id="YOJ5ahfgpcRY" # ## Two-sample T-tests for Democrat Support, Republican Support, and no singnificant difference in support between parties: # # + [markdown] colab_type="text" id="K74YMzbIqD4h" # ## Significant Democrat Support # + [markdown] colab_type="text" id="3ObXLZZt6bXZ" # The outcome of this tests indicates that we should reject the null hypothesis that the mean of Democrat votes is equal to the mean of Republican votes for this issue. We would conclude due to the positive t-statistic here that Democrat support is significantly higher than republican support for this issue. # + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="gG06dA3XqDAx" outputId="66ba07be-e6bb-4002-8b06-471c6e10e2fb" stats.ttest_ind(dem['synfuels'], rep['synfuels'], nan_policy='omit') # + [markdown] colab_type="text" id="OfhdLA3ZqGcE" # ## Significant Republican Support # + [markdown] colab_type="text" id="HpiuhNS06pbH" # The outcome of this tests indicates that we should reject the null hypothesis that the mean of democrat votes is equal to the mean of republican votes for this issue. We would conclude due to the positive t-statistic here that Republican support is significantly higher than Democrat support for this issue. # + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="esSjOulYqCh3" outputId="eee6357f-d8a5-4d81-d6b8-3ff9a5e34cb9" print(dem['south-africa'].mean()) print(rep['south-africa'].mean()) stats.ttest_ind(rep['south-africa'], dem['south-africa'], nan_policy='omit') # + [markdown] colab_type="text" id="zN91yLt9qJVq" # ## No Significant Difference in Support # # Due to the insignificant p-value we would FAIL TO REJECT the null hypothesis that the means of Democrat support and Republican support are equal. This means that regardless of the sign on the t-statistic, we cannot make statistically significant claims about the difference between Republican and Democrat support of this issue. # + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="lvRgxQZ4nOpE" outputId="707bd180-f398-4d1b-9dc7-0c131a17560e" print(dem['water-project'].mean()) print(rep['water-project'].mean()) stats.ttest_ind(dem['water-project'], rep['water-project'], nan_policy='omit') # + [markdown] colab_type="text" id="GtrGLl8u-NMB" # # $\chi^2$ Hypothesis Tests # # Chi-square hypothesis tests can only be performed on categorical data to test independence (lack of association) between two variables. # # You can tabulate the different cross-sections of your categorical data by creating a "contingency table" also known as a "cross-tab". If we see substantial differences between categories in the cross-tab, this might indicate a dependence between variables (possibly indicating some level of correlation although correlation and dependence are not perfectly synonymous),however, we must perform a chi-square test in order to test this hypothesis. # # Correlation implies dependence (association), but dependence does not necessarily imply correlation. (Although it is correlated with correlation - you see what I did there?) # + [markdown] colab_type="text" id="BRGcpta8_Fvm" # ## First, a Numpy vs Scipy implementation # # Lets demonstrate a chi-squared test on the "Adult" dataset (1994 census data) from UCI. # # Lets compare gender and binned working hours per week (both are categorical). 
# # # + colab={"base_uri": "https://localhost:8080/", "height": 365} colab_type="code" id="1QXjZvRREDoJ" outputId="34e0b88b-b54a-4c99-c7d2-f0c3ef08e0fe" df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv') print(df.shape) df.head() # + [markdown] colab_type="text" id="p2jshGRLCE2J" # Notice that for chi-squared tests, I will not have to categorically encode my data since I will perform my calculations on the contingency table (cross-tab) and not on the raw dataframe. # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="7X-ZV8kWIRlI" outputId="5232f287-c060-440e-acb4-37de78207c7f" def process_hours(df): cut_points = [0,9,19,29,39,49,1000] label_names = ["0-9","10-19","20-29","30-39","40-49","50+"] df["hours_per_week_categories"] = pd.cut(df["hours-per-week"], cut_points,labels=label_names) return df data = process_hours(df) workhour_by_sex = data[['sex', 'hours_per_week_categories']] workhour_by_sex.head() # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="UWPW5V7pInq8" outputId="3055aac2-11a7-47c6-f910-9a88b6edcf51" workhour_by_sex['sex'].value_counts() # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="9PkgOVwSIute" outputId="e33efd22-0664-400b-9ea0-8d94a8d670c8" workhour_by_sex['hours_per_week_categories'].value_counts() # + [markdown] colab_type="text" id="Ubv7UOknCq6-" # Before we calculate our contingency table, lets make very clear what our null and alternative hypotheses are in this situation. # # $H_{0}$ : There is *no* statistically significant relationship between gender and working hours per week. # # $H_{a}$ : There *is* a statistically significant relationship between gender and working hours per week. # + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="94QdK0noI7fV" outputId="437dc3e5-e31d-4bf4-f476-971faec1ff0d" # Calculate our contingency table with margins contingency_table = pd.crosstab( workhour_by_sex['sex'], workhour_by_sex['hours_per_week_categories'], margins = True) contingency_table # + [markdown] colab_type="text" id="NJaC1rN0DVNa" # Using the contingency table with margins included will make our from-scratch implementation a little bit easier. # # Our code would be more reusable if we calculated totals as we went instead of pulling them from the table, but I wanted to be really clear about what they represented, so we're going to grab the row and column totals from the margins of the contingency table directly. # # + [markdown] colab_type="text" id="h1wRLvskEEpW" # ### Visualizing preferences with a stacked bar-chart. # + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" id="YEzlcsPeDLUx" outputId="94988d11-1f93-47cd-de5d-fdc8167be348" #Assigns the frequency values malecount = contingency_table.iloc[0][0:6].values femalecount = contingency_table.iloc[1][0:6].values #Plots the bar chart fig = plt.figure(figsize=(10, 5)) sns.set(font_scale=1.8) categories = ["0-9","10-19","20-29","30-39","40-49","50+"] p1 = plt.bar(categories, malecount, 0.55, color='#d62728') p2 = plt.bar(categories, femalecount, 0.55, bottom=malecount) plt.legend((p2[0], p1[0]), ('Male', 'Female')) plt.xlabel('Hours per Week Worked') plt.ylabel('Count') plt.show() # + [markdown] colab_type="text" id="yI7mDQz5Zicb" # It's harder to eyeball if these variables might be dependent. I would look at the ratios between 30-39, 40-49 and 50+. 
As we do this we see that males working 40-49 hours per week experiences something like 5x jump from the amount working 30-39 whereas for women it's something like a 2.5x jump. Similarly for men we see a decrease to about 1/3 the size as we move from >40 to the >50 category, but for women the final category is about 1/5 the size. This suggests to me that gender and working hours are not independent. Hopefully this passes the smell test for you as it matches your intuition that has been trained from a lifetime of training data. # + [markdown] colab_type="text" id="17rpjP9MF8QG" # ### Expected Value Calculation # \begin{align} # expected_{i,j} =\frac{(row_{i} \text{total})(column_{j} \text{total}) }{(\text{total observations})} # \end{align} # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="5qJUDgLhJR5Q" outputId="0c1cad1d-5e49-4401-9910-0eb8b69eace4" # Get Row Sums row_sums = contingency_table.iloc[0:2,6].values print(row_sums) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PcR_YTU3KhmH" outputId="3c4dd004-d3af-4fa3-a7d0-89ebfd7e53d7" # Get Column Sums col_sums = contingency_table.iloc[2,0:6].values print(col_sums) # + colab={"base_uri": "https://localhost:8080/", "height": 101} colab_type="code" id="7NhHCkSJMT_p" outputId="501c19c9-e8c9-40d8-cda9-9b18514c804a" # Calculate Expected Values for each cell total = contingency_table.loc['All', 'All'] print("Total number of observations:", total) expected = [] for i in range(len(row_sums)): expected_row = [] for column in col_sums: expected_val = column*row_sums[i]/total expected_row.append(expected_val) expected.append(expected_row) print(np.array(expected)) # + [markdown] colab_type="text" id="5l656fkhFHMI" # ## Chi-Squared Statistic with Numpy # # \begin{align} # \chi^2 = \sum \frac{(observed_{i}-expected_{i})^2}{(expected_{i})} # \end{align} # # For the $observed$ values we will just use a version of our contingency table without the margins as a numpy array. In this way, if our observed values array and our expected values array are the same shape, then we can subtract them and divide them directly which makes the calculations a lot cleaner. No for loops! 
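# + [markdown]
# Aside (an addition): the expected-value table itself can also be built without loops, as an outer product of the margin totals divided by the grand total. This should reproduce the nested-loop result above; it assumes `row_sums`, `col_sums` and `total` are still in scope.

# +
import numpy as np

expected_vectorized = np.outer(row_sums, col_sums) / total
print(expected_vectorized)
# -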
# + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="NDr72RcGI0kE" outputId="4023dbfd-a5a2-42cb-8e9e-f409ff17f0d0" # Get contingency table without margins contingency = pd.crosstab(workhour_by_sex['sex'], workhour_by_sex['hours_per_week_categories']) contingency = contingency.values print(contingency.shape) print(contingency) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="62rXAnuhM7xs" outputId="a3595c18-9d0c-4f1a-8aa9-ff27e7877d4a" chi_squared = ((contingency - expected)**2/(expected)).sum() print(f"Chi-Squared: {chi_squared}") # + [markdown] colab_type="text" id="kJQHlyZVc_PX" # ### Degrees of Freedom # # \begin{align} # DoF = (\text{Number of Rows} -1)\times(\text{Number of Columns}-1) # \end{align} # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ZUdSOrL3PaPe" outputId="d04dff2b-431d-45bd-d28f-4955f23333ce" dof = (len(row_sums)-1)*(len(col_sums)-1) print(f"Degrees of Freedom: {dof}") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Hv0dIDbDQI0b" outputId="687e95db-986b-4347-dda5-d917d9173c75" # Calculate the p-value from the chi_squared and dof p_value = stats.chi2.sf(chi_squared, dof) print(f"P-value: {p_value}") # + colab={"base_uri": "https://localhost:8080/", "height": 151} colab_type="code" id="yRy-4OpZQmNe" outputId="bd3216ed-407d-4779-ff1a-aa6176ff06ab" chi_squared, p_value, dof, expected = stats.chi2_contingency(contingency) print(f"Chi-Squared: {chi_squared}") print(f"P-value: {p_value}") print(f"Degrees of Freedom: {dof}") print("Expected: \n", np.array(expected)) # + [markdown] colab_type="text" id="NG89JCDLURyT" # # Can we perform a Chi2 test on our congressional voting data? # # Is it categorical? Then yes. Lets do it! # # Are political party and voting behavior on the "budget" independent? Lets test it! # + colab={"base_uri": "https://localhost:8080/", "height": 266} colab_type="code" id="BJFHyjnDV0c-" outputId="4ee91f9c-296d-456e-aeed-9f1e200f3c0b" # Load the data again to be safe: df = pd.read_csv('house-votes-84.data', header=None, names=['party','handicapped-infants','water-project', 'budget','physician-fee-freeze', 'el-salvador-aid', 'religious-groups','anti-satellite-ban', 'aid-to-contras','mx-missile','immigration', 'synfuels', 'education', 'right-to-sue','crime','duty-free', 'south-africa']) df = df.replace({'?':np.NaN, 'n':0, 'y':1}) print(df.shape) df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="ZtT_nrl_VoJV" outputId="21928d83-e4a9-474e-ce45-a04bafc86a51" contingency_table = pd.crosstab(df['party'], df['budget']) contingency_table # + colab={"base_uri": "https://localhost:8080/", "height": 118} colab_type="code" id="LkdayAPVWSA-" outputId="676150be-c947-41ea-f41b-1fb5c6c8fe10" chi_squared, p_value, dof, expected = stats.chi2_contingency(contingency_table) print(f"Chi-Squared: {chi_squared}") print(f"P-value: {p_value}") print(f"Degrees of Freedom: {dof}") print("Expected: \n", np.array(expected)) # + colab={} colab_type="code" id="QIq2MFTp3bxJ" # stats.chi2.sf? 
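# + [markdown]
# Note (an addition): `stats.chi2.sf` is the survival function of the chi-square distribution, i.e. `1 - cdf`, so it returns the upper-tail probability of seeing a statistic at least this large under the null hypothesis. A quick illustration with made-up numbers:

# +
from scipy import stats

print(stats.chi2.sf(10.0, 3))        # upper-tail p-value for chi2 = 10 with 3 degrees of freedom
print(1 - stats.chi2.cdf(10.0, 3))   # same value, computed the long way
# -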
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="UgEck9IDWnYC" outputId="db592848-d7f4-424d-b85d-2edf35b3f043" # Calculate the p-value from the chi_squared and dof p_value = stats.chi2.sf(chi_squared, dof) print(f"P-value: {p_value}") # + [markdown] colab_type="text" id="_dwKXzZIk1WS" # We REJECT the null hypothesis that political party and voting on the budget are independent. They must be associated. Which hopefully also passes the smell test. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="xnj4XU2aE2kO" colab_type="text" # # Aleatoric regression # + id="OaOS9uR1E2kR" colab_type="code" colab={} # %matplotlib inline import numpy as np import torch import math import matplotlib.pyplot as plt import imageio import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torchvision.utils import make_grid from tqdm import tqdm, trange import seaborn as sns # + [markdown] id="2IaBfC4dE2kV" colab_type="text" # ## Helper functions # + id="TxRREopXE2kV" colab_type="code" colab={} def normalize(image): """Takes a tensor of 3 dimensions (height, width, colors) and normalizes it's values to be between 0 and 1 so it's suitable for displaying as an image.""" image = image.astype(np.float32) return (image - image.min()) / (image.max() - image.min() + 1e-5) def display_images(images, titles=None, cols=5, interpolation=None, cmap="Greys_r"): """ images: A list of images. I can be either: - A list of Numpy arrays. Each array represents an image. - A list of lists of Numpy arrays. In this case, the images in the inner lists are concatentated to make one image. """ titles = titles or [""] * len(images) rows = math.ceil(len(images) / cols) height_ratio = 1.2 * (rows/cols) * (0.5 if type(images[0]) is not np.ndarray else 1) plt.figure(figsize=(15, 15 * height_ratio), dpi=200) i = 1 for image, title in zip(images, titles): plt.subplot(rows, cols, i) plt.axis("off") # Is image a list? If so, merge them into one image. if type(image) is not np.ndarray: image = [normalize(g) for g in image] image = np.concatenate(image, axis=1) else: image = normalize(image) plt.title(title, fontsize=12) plt.imshow(image, cmap=cmap, interpolation=interpolation) i += 1 def show(img,cmap='viridis'): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest', cmap=cmap) # + id="RT43ZKLoE2kZ" colab_type="code" colab={} def uncertainity_estimate_2d(X, model, iters, l2, dropout_p=0.5, decay=1e-6): outputs = [model(X).cpu().data for i in range(iters)] outputs = torch.stack(outputs) y_mean = outputs.mean(dim=0) y_variance = outputs.var(dim=0) tau = l2 * (1. - dropout_p) / (2. * iters * decay) y_variance += (1. 
/ tau) y_std = np.sqrt(y_variance) return y_mean, y_std # + id="QQ36HKsSE2kc" colab_type="code" colab={} DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") LOADER_KWARGS = {'num_workers': 1, 'pin_memory': True} if torch.cuda.is_available() else {} # + id="eN6LHpaqE2ke" colab_type="code" colab={} BATCH_SIZE = 128 TEST_BATCH_SIZE = 5 train_loader = torch.utils.data.DataLoader( datasets.MNIST( './mnist', train=True, download=True, transform=transforms.ToTensor()), batch_size=BATCH_SIZE, shuffle=True, **LOADER_KWARGS) test_loader = torch.utils.data.DataLoader( datasets.MNIST( './mnist', train=False, download=True, transform=transforms.ToTensor()), batch_size=TEST_BATCH_SIZE, shuffle=False, **LOADER_KWARGS) TRAIN_SIZE = len(train_loader.dataset) TEST_SIZE = len(test_loader.dataset) NUM_BATCHES = len(train_loader) NUM_TEST_BATCHES = len(test_loader) CLASSES = 10 TRAIN_EPOCHS = 10 #20 SAMPLES = 2 TEST_SAMPLES = 10 # + id="lwkLkCbxE2kh" colab_type="code" colab={} class AutoEncoderAleatoric(nn.Module): def __init__(self, p=0.5): self.dropout_p = p super(AutoEncoderAleatoric, self).__init__() self.encoder = nn.Sequential( nn.Linear(28 * 28, 128), nn.ReLU(True), nn.Linear(128, 64), nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3)) self.decoder = nn.Sequential( nn.Linear(3, 12), nn.ReLU(True), nn.Dropout(p=self.dropout_p), nn.Linear(12, 64), nn.ReLU(True), nn.Dropout(p=self.dropout_p), nn.Linear(64, 128), nn.ReLU(True), nn.Dropout(p=self.dropout_p), nn.Linear(128, 128), nn.Tanh()) self.fc_pred = nn.Linear(128, 28 * 28) self.fc_logvar = nn.Linear(128, 28 * 28) def forward(self, x): x = x.view(-1, 28*28) x = self.encoder(x) x = self.decoder(x) pred = torch.tanh(self.fc_pred(x)) logvar = F.softplus(self.fc_logvar(x)) return pred, logvar # + id="bSDp6emIE2kk" colab_type="code" colab={} model = AutoEncoderAleatoric(p=0.1).to(DEVICE) # + id="6cFcZu3FE2kn" colab_type="code" colab={} def gaussian_log_loss(output, target, logvar): term1 = -0.5*torch.exp(-logvar)*(target - output)**2 term2 = 0.5*logvar return (term1 + term2).sum() # + id="hz3Oi_sfE2kp" colab_type="code" colab={} def train_autoencoder(model, optimizer, criterion, epoch): model.train() for batch_idx, (data, target) in enumerate(tqdm(train_loader)): # for batch_idx, (data, target) in enumerate(train_loader): data = data.view(data.size(0), -1) data, target = data.to(DEVICE), target.to(DEVICE) model.zero_grad() out, logvar = model(data) loss = criterion(out, data, logvar) loss.backward() optimizer.step() print('epoch [{}], loss:{:.4f}'.format(epoch, loss.data.item())) # + id="rcBrm00qE2ku" colab_type="code" colab={} outputId="962d2808-90b2-4391-875d-974f97f6291c" TRAIN_EPOCHS = 5 #20 criterion = gaussian_log_loss optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) for epoch in range(TRAIN_EPOCHS): train_autoencoder(model, optimizer, criterion, epoch) # + id="FW7VCCtEE2k0" colab_type="code" colab={} outputId="695895ce-1ebc-4932-9573-6a1da3da5420" mnist_sample, mnist_target = iter(test_loader).next() mnist_sample = mnist_sample.to(DEVICE) print(mnist_target) # sns.set_style("dark") show(make_grid(mnist_sample.cpu())) # + id="ydLgSyrOE2k4" colab_type="code" colab={} outputId="e85dce15-6ad8-495a-f46e-cd64c34c8c3e" out, logvar = model(mnist_sample) out = out.detach().cpu() out = out.view(-1, 1, 28,28) logvar = logvar.detach().cpu() logvar = logvar.view(-1, 1, 28,28) print(out.size()) show(make_grid(out)) # + id="sQIvMh1uE2k7" colab_type="code" colab={} outputId="c04271e3-15ad-4746-bf2e-821594aeb395" x = 
mnist_sample[1,:,:,:] y_mean, y_std = out[1,:,:,:], torch.exp(logvar[1,:,:,:]) res_list = [x.cpu().view(28,28).numpy(), y_mean.view(28,28).numpy(), y_std.view(28,28).numpy()] print(res_list[0].shape) display_images(res_list, titles=['input','mean', 'std'], cols=len(res_list), interpolation=None, cmap="viridis") # + [markdown] id="pYMhCjywE2k-" colab_type="text" # ## Out of domain samples # + id="4FD2Ynb0E2k_" colab_type="code" colab={} outputId="422ca501-e572-4043-8c61-debd31698454" kmnist_loader = torch.utils.data.DataLoader( datasets.KMNIST('./kmnist', train=False, download=True, transform=transforms.ToTensor()), batch_size=5, shuffle=False) # + id="Q4WPES8CE2lC" colab_type="code" colab={} outputId="c6326948-e7b4-4793-96a1-a3760af4d559" kmnist_sample, kmnist_target = iter(kmnist_loader).next() kmnist_sample = kmnist_sample.to(DEVICE) print(kmnist_target) # sns.set_style("dark") show(make_grid(kmnist_sample.cpu())) # + id="38OHYXZPE2lG" colab_type="code" colab={} outputId="ddccf943-ca8d-47dd-c14d-ff38e2bf4da0" out, logvar = model(kmnist_sample) out = out.detach().cpu() out = out.view(-1, 1, 28,28) logvar = logvar.detach().cpu() logvar = logvar.view(-1, 1, 28,28) print(out.size()) show(make_grid(out)) # + id="LucqxP4mE2lK" colab_type="code" colab={} outputId="95edb113-1f09-4d25-d8df-459ac285b4b5" x = kmnist_sample[1,:,:,:] y_mean, y_std = out[1,:,:,:], torch.exp(logvar[1,:,:,:]) res_list = [x.cpu().view(28,28).numpy(), y_mean.view(28,28).numpy(), y_std.view(28,28).numpy()] print(res_list[0].shape) display_images(res_list, titles=['input','mean', 'std'], cols=len(res_list), interpolation=None, cmap="viridis") # + id="vtsAhXOTE2lR" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Chapter5.1 scikit-learn 간단하게 살펴보기 # + from sklearn.datasets import make_blobs X_dataset, y_dataset = make_blobs(centers=[[-0.3, 0.5], [0.3, -0.2]], cluster_std=0.2, n_samples=100, center_box=(-1.0, 1.0), random_state=42) # + from sklearn.linear_model import LogisticRegression # 로지스틱스 회귀 모델 인스턴스 만들기 classifier = LogisticRegression() # 학습 하기 classifier.fit(X_dataset, y_dataset) # + # 테스트 전용 데이터 test_data = [[0.1, 0.1], [-0.5, 0.0]] print(classifier.predict(test_data)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Transitivity # ## Set up modules, variables, functions # + import sys import collections import numpy as np import pandas as pd pd.set_option('display.max_rows', 500) from tf.fabric import Fabric from tf.app import use import matplotlib.pyplot as plt # import some project code from elsewhere # Lord, please forgive me for my import sins sys.path.append('/Users/cody/github/CambridgeSemiticsLab/time_collocations/tools/stats/') from pca import apply_pca from significance import apply_deltaP, apply_fishers sys.path.append('/Users/cody/github/CambridgeSemiticsLab/Gesenius_data/analysis/') from text_show import TextShower # load Text-Fabric with datasets downloaded from Github # NB that URLs of the original Github repos can be inferred # from the pathnames. Just replace github/ with github.com/ locations = [ '~/github/etcbc/bhsa/tf/c', # main ETCBC data '~/github/etcbc/heads/tf/c', # for phrase head data (i.e. 
for complements) ] # load TF data, give main classes shortform names, instantiate the use app with gives convenient methods TF = Fabric(locations=locations) API = TF.load(''' vs vt pdp lex lex_utf8 rela code function freq_lex typ nhead label gloss number mother language ''') A = use('bhsa', api=API) F, E, T, L = A.api.F, A.api.E, A.api.T, A.api.L sys.path.append('../') from clause_relas import in_dep_calc as clause_relator # - def get_spread(array, n): """Retrieve an even spread of indices an array/Series. https://stackoverflow.com/a/50685454/8351428 Args: array: either numpy array or Pandas Series (with multiple indices allowed) n: number of indices to return Returns: indexed array or series """ end = len(array) - 1 spread = np.ceil(np.linspace(0, end, n)).astype(int) indices = np.unique(spread) try: return array[indices] except KeyError: return array.iloc[indices] ts = TextShower( default=['ref','verb_id', 'text' , 'clause', 'has_obj'], stylize=['text', 'clause'] ) # ## Build the dataset # + def clause_has_object(verb, clause_atom, clause): """Search a given clause for any marked objects.""" clause_phrases = L.d(clause, 'phrase') daughters = E.mother.t(clause_atom) daught_relas = set(F.rela.v(d) for d in daughters) daught_codes = set(F.code.v(d) for d in daughters) phrase_functs = set(F.function.v(p) for p in clause_phrases) # we count direct speech clauses following amar as direct objects amar_obj = F.lex.v(verb) == '>MR[' and 999 in daught_codes # evaluate conditions for object presence return any([ 'Objc' in phrase_functs, 'PreO' in phrase_functs, 'PtcO' in phrase_functs, 'Objc' in daught_relas, amar_obj, ]) def cmpl_data(clause): """Tag complement data within a clause""" data = { 'has_cmpl': 0, # 'cmpl_ph_type': np.nan, # 'cmpl_heads': np.nan, } # check for complement cmpl_ph = [p for p in L.d(clause, 'phrase') if F.function.v(p) == 'Cmpl'] # detect presence of complement if cmpl_ph: data['has_cmpl'] = 1 return data dataset = [] for verb in F.pdp.s('verb'): # ignore low frequency clauses if F.freq_lex.v(verb) < 10: continue clause = L.u(verb, 'clause')[0] ca_rela = clause_relator(clause) # ignore non-main clauses to avoid # the problem of, e.g., אשֶׁר subjects / objects # which are not marked in the database, as well as other # potential complications if ca_rela != 'Main': continue # process the data book, chapter, verse = T.sectionFromNode(verb) clause_atom = L.u(verb, 'clause_atom')[0] ref = f'{book} {chapter}:{verse}' vs = F.vs.v(verb) lex = F.lex_utf8.v(verb) lex_node = L.u(verb, 'lex')[0] verb_id = f'{lex}.{vs}' has_obj = 1 * clause_has_object(verb, clause_atom, clause) cl_data = { 'node': verb, 'ref': ref, 'lex_node': lex_node, 'lex': lex, 'text': T.text(verb), 'clause': T.text(clause), 'clause_node': clause, 'clause_atom': clause_atom, 'verb_id': verb_id, 'has_obj': has_obj, } cl_data.update(cmpl_data(clause)) dataset.append(cl_data) vdf = pd.DataFrame(dataset) vdf = vdf.set_index('node') print(vdf.shape) vdf.head(5) # - ts.show(vdf, spread=10) # ## Count verb lexeme object tendencies # + vo_ct = pd.pivot_table( vdf, index='verb_id', columns='has_obj', aggfunc='size', fill_value=0, ) vo_ct = vo_ct.loc[vo_ct.sum(1).sort_values(ascending=False).index] vo_ct.head(10) # - # ### Cull dataset down print('present data shape:') vo_ct.shape vo_ct.sum(1).plot() # + vo_ct2 = vo_ct[vo_ct.sum(1) >= 30] # keep those with N observations print('new shape of data:') vo_ct2.shape # + vo_pr = vo_ct2.div(vo_ct2.sum(1), 0) vo_pr.head(10) # - # # Cluster them import seaborn as sns # + fig, ax = 
plt.subplots(figsize=(5, 8)) sns.stripplot(ax=ax, data=100*vo_pr.iloc[:, 1:2], jitter=0.2, edgecolor='black', linewidth=1, color='lightblue', size=10) ax.set_ylabel('% of verb lexeme with an object', size=14) plt.savefig('/Users/cody/Desktop/verb_objects.png', dpi=300, bbox_inches='tight', facecolor='white') # - # ## Select a subset for transitivity tagging # # We will select the prototypical, unambiguous cases. # # These seem to be those from 0-5% (intransitive) # and those from 60-100%. We'll leave the rest of the cases. tran = vo_pr[vo_pr[1] >= 0.6] itran = vo_pr[vo_pr[1] <= 0.05] tran itran # how many verbs would this account for? vo_ct.loc[np.array(tran.index, itran.index)].sum(1).sum() # ## Export for Inspection # + from pathlib import Path from df2gspread import df2gspread as d2g import gspread from oauth2client.service_account import ServiceAccountCredentials scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive'] credentials = ServiceAccountCredentials.from_json_keyfile_name( '/Users/cody/.config/gspread/service_account.json', scope ) drive_id = Path.home().joinpath('github/CambridgeSemiticsLab/Gesenius_data/data/_private_/keys/tran_folder.txt').read_text().strip() # - gc = gspread.authorize(credentials) tran_sh = gc.create("transitive_verbs", drive_id) itran_sh = gc.create('intransitive_verbs', drive_id) d2g.upload(tran.round(2), tran_sh.id, 'Sheet1', row_names=True, col_names=True, credentials=credentials) d2g.upload(itran.round(2), itran_sh.id, 'Sheet1', row_names=True, col_names=True, credentials=credentials) # ## Cluster on multivariate tendencies # # A lot of dynamic verbs are in the intransitive list. It would be nice to have some separation for categories close to a stative / dynamic dichotomy. It may be possible to do this with a PCA analysis by using the `Cmpl` arguments. # + va_ct = pd.pivot_table( vdf, index='verb_id', values=['has_obj', 'has_cmpl'], aggfunc='sum', fill_value=0, ) vlex_ct = vdf.verb_id.value_counts() vlex_ct = vlex_ct[vlex_ct >= 15] # restrict frquency to N and up va_ct = va_ct.loc[vlex_ct.index] va_pr = va_ct.div(vlex_ct, 0) va_pr.head(25) # - va_pr.shape va_pr.loc['ידע.qal'] va_pr[ (va_pr.has_obj < 0.3) & (va_pr.has_cmpl < 0.1) ] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="sSZjbw6kWch4" # # WBAP Hackathon 2021 # A google colab friendly notebook. It will allow you to use colab gpu to train your hackathon model from your code in a github repo. # + [markdown] id="tm9NDeAhUrCY" # ## Specify the GitHub repository and branch that you want to run from # + id="XwPnR2ibNlh2" branch = 'master' github_url = 'https://github.com/pulinagrawal/WM_Hackathon' # + [markdown] id="crY-fJF8UzWl" # ## Run the cells in this section to setup environment # It can take several minutes to run. # + id="BbsZn_xidsIc" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1806875d-7611-48e5-918c-2a4b5b0db8e0" # !git clone $github_url import os from pathlib import Path os.chdir(str('WM_Hackathon')) # !git checkout $branch # !pip install ray[debug]===0.8.7 # !pip install -r requirements.txt os.chdir(str(Path('..'))) # !git clone https://github.com/Cerenaut/cerenaut-pt-core # !pwd os.chdir(str(Path('.')/'cerenaut-pt-core')) # !pwd # !python setup.py develop os.chdir(str(Path('..')/'WM_Hackathon')) # !pwd # !pip install --ignore-installed --no-deps -e . 
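# + [markdown]
# Optional check (an addition): before training, confirm that Colab actually assigned a GPU to this runtime, since the ray settings below only help if one is visible. This sketch assumes PyTorch was pulled in by requirements.txt; if it was not, substitute any other CUDA check.

# +
import torch

if torch.cuda.is_available():
    print('GPU available:', torch.cuda.get_device_name(0))
else:
    print('No GPU visible - switch the Colab runtime type to GPU')
# -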
# + [markdown] id="crpLmFVqVPf3" # ### Verify that the installed version of ray is '0.8.7' and your current directory is '/content/WM_Hackathon' # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="oqM6Ke2QfBSQ" outputId="b4adfb85-8a9f-47e3-8c96-86676a2211ff" import ray ray.__version__ # + [markdown] id="44aV6gSgZjW_" # ## To enable ray to use GPU # Change to the following config in your agent_av.json # ``` # "agent":{ # "num_workers": 1, # "num_gpus": 1, # "num_gpus_per_worker": 1, # "rollout_fragment_length": 50, # "gamma": 0.8 # } # ``` # # # + colab={"base_uri": "https://localhost:8080/"} id="Ob-HPAVpULmL" outputId="2f82fe4c-4d30-4811-9eec-f8aac1f2a548" import os from pathlib import Path os.chdir('/content/WM_Hackathon') # !echo Your current directory is # !pwd # + [markdown] id="5k3X62AfV-1s" # # Now you can run train_agent.py with your config # + colab={"base_uri": "https://localhost:8080/"} id="guWkf0s-gKlG" outputId="7729a230-1361-4b98-9644-62387e549dc9" # !python train_agent.py m2s-v0 configs/m2s_env.json configs/stub_agent_env_full.json configs/stub_agent_full.json # + [markdown] id="lE8hgC0KWGbY" # ## Apologies. Haven't figured out tensorboard yet. # (This should have technically worked) # + colab={"base_uri": "https://localhost:8080/", "height": 821} id="b_4Ck7cblmA9" outputId="4340422a-a343-4744-db9d-347426b8942d" # %reload_ext tensorboard # %tensorboard --logdir /content/WM_Hackathon/runs # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import mplfinance as mpf import pandas as pd myData = pd.read_csv("issue77_eurusd.csv") myData['Date and Time'] = pd.to_datetime(myData['Gmt time'],format='%d.%m.%Y %H:%M:%S.000') myData.set_index('Date and Time', inplace=True) myData = myData.iloc[-24:] line = mpf.make_addplot(myData["High"],scatter=False) #No issues drawing a line scatter = mpf.make_addplot(myData["High"],scatter=True,secondary_y=False) #Issues drawing scatter mpf.plot(myData, type = 'candle', style='yahoo', addplot= line, figscale=1.8) # - print(pd.__version__) print(mpf.__version__) myData.describe() import numpy as np #scatter['data'].iloc[ 0] = np.nan #scatter['data'].iloc[10] = np.nan #scatter['data'].iloc[-1] = np.nan scatter['data'] mpf.plot(myData, type = 'candle', style='yahoo', addplot= scatter, ) mpf.plot(myData, type = 'candle', style='yahoo', addplot= scatter, set_ylim=(1.1271,1.1286) ) mpf.plot(myData, type = 'candle', style='yahoo', addplot= scatter, volume=True ) mpf.plot(myData, type = 'candle', style='yahoo', addplot= scatter, volume=True, set_ylim=(1.1271,1.1286), set_ylim_panelB=(-0.005,0.006), ) scatter myData.index.values myData['High'].values # #%matplotlib qt # %matplotlib inline import matplotlib.pyplot as plt # + def makeplot(): fig = plt.figure() ax = fig.add_axes([0.0, 0.0, 1.0, 1.0]) #ax.autoscale(enable=False,axis='y') ax.scatter(myData.index.values,myData['High'].values) def makeplot_noas(): fig = plt.figure() ax = fig.add_axes([0.0, 0.0, 1.0, 1.0]) #ax.autoscale(enable=False,axis='y') miny = 1.1274 maxy = 1.1285 ax.set_ylim(ymin=miny,ymax=maxy) minx = min(myData.index.values) maxx = max(myData.index.values) ax.set_xlim(xmin=minx,xmax=maxx) ax.scatter(myData.index.values,myData['High'].values) # - makeplot() makeplot_noas() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 
1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pyspark import SparkConf, SparkContext from pyspark.sql import SparkSession from pyspark.sql import * from pyspark.sql.types import * from pyspark.sql.functions import udf from pyspark.sql.functions import * from pyspark.sql.window import Window NoneType = type(None) import os import socket import hashlib import string import time from osgeo import ogr import geopandas as gpd from pyspark.sql import SparkSession from sedona.register import SedonaRegistrator from sedona.utils import SedonaKryoRegistrator, KryoSerializer # + def createMd5(text): return hashlib.md5(text.encode('utf-8')).hexdigest() md5Udf= udf(lambda z: createMd5(z),StringType()) def clean_lower(text): sentence = text.translate(str.maketrans('', '', '!"#$%&\'()*+,./:;<=>?@[\\]^`{|}~-_”“«»‘')).lower() return " ".join(sentence.split()) cleanLowerUdf= udf(lambda z: clean_lower(z),StringType()) def get_site_from_url(text): return text.split("/")[2] getUrl= udf(lambda z: get_site_from_url(z),StringType()) # - minio_ip = socket.gethostbyname('minio') spark = SparkSession. \ builder. \ appName("Python Spark S3"). \ config("spark.serializer", KryoSerializer.getName). \ config("spark.executor.memory", "80g"). \ config("spark.driver.memory", "80g"). \ config('spark.dirver.maxResultSize', '5g'). \ config("spark.kryo.registrator", SedonaKryoRegistrator.getName). \ config('spark.hadoop.fs.s3a.endpoint', 'http://'+minio_ip+':9000'). \ config("spark.hadoop.fs.s3a.access.key", "minio-access-key"). \ config("spark.hadoop.fs.s3a.secret.key", "minio-secret-key"). \ config('spark.hadoop.fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'). \ config('spark.jars.packages', 'org.apache.sedona:sedona-python-adapter-3.0_2.12:1.0.0-incubating,org.datasyslab:geotools-wrapper:geotools-24.0'). 
\ getOrCreate() SedonaRegistrator.registerAll(spark) # + st= StructType([ StructField("abstract", StringType()), StructField("authors", StringType()), StructField("image", StringType()), StructField("metadata", StringType()), StructField("publish_date", TimestampType()), StructField("text", StringType()), StructField("title", StringType()), StructField("url", StringType()), ]) # - df_news_covid_mexico = spark.read.schema(st).option("timestampFormat", "dd-MM-yyyy").json("s3a://news/covid_mexico/*.json") df_news_covid_mexico.count() df_news_covid_mexico.printSchema() df_news_covid_mexico.show(10) df_news_covid_mexico_date_text = df_news_covid_mexico.select(md5Udf("url").alias("article_id"),"title","url","publish_date",cleanLowerUdf("text").alias("clean_text"),getUrl("url").alias("site")).filter("length(text) >= 2") df_news_covid_mexico_date_text.show(15) df_news_covid_mexico_date_text.count() df_news_covid_mexico_date_text.select("title").show(15,False) url = "jdbc:postgresql://postgres/shared" mode="overwrite" properties = { "user": "shared", "password": os.environ['SHARED_PASSWORD'] } # + #os.environ['SHARED_PASSWORD']='' # - df_news_covid_mexico_date_text.write.jdbc(url=url, table="tb_news_covid_mexico_date_text", mode=mode, properties=properties) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # NumPy # To work with NumPy you should import it import numpy as np # You can create NumPy array from python list my_list = [1,2,3] arr = np.array(my_list) arr # Or without temporary variable arr = np.array([1, 2, 3]) arr # ## arange # Return evenly spaced values within a given interval. np.arange(3) np.arange(3.0) np.arange(3,7) np.arange(3,7,2) # ## zeros # Return a new array of given shape and type, filled with zeros. np.zeros(5) np.zeros((5,), dtype=np.int) np.zeros((2, 1)) np.zeros((2,3)) # ## ones # Return a new array of given shape and type, filled with ones. np.ones(5) np.ones((5,), dtype=np.int) np.ones((2, 1)) np.ones((2,3)) # ## linspace # Return evenly spaced numbers over a specified interval. np.linspace(2.0, 3.0, num=5) np.linspace(2.0, 3.0, num=5, endpoint=False) np.linspace(2.0, 3.0, num=5, retstep=True) # ## random.randint # Return random integers from low (inclusive) to high (exclusive). np.random.randint(2, size=10) np.random.randint(1, size=10) np.random.randint(5, size=(2, 4)) # ## random.normal # Draw random samples from a normal (Gaussian) distribution. np.random.normal(0, 0.1, 10) # ## random.seed # Seed the generator. np.random.seed(101) np.random.randint(2, size=5) # The same np.random.seed(101) np.random.randint(2, size=5) # ## max arr = np.array(np.random.randint(10, size=10)) arr arr.max() # ## min arr.min() # ## mean arr.mean() # ## argmax arr.argmax() # ## reshape # Gives a new shape to an array without changing its data. 
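# An aside (an addition): besides the explicit shapes shown in the next cells, one dimension can be given as -1 and NumPy infers it from the array size.
arr.reshape((2, -1))    # same result as (2, 5) for this 10-element array
arr.reshape((-1, 1))    # column vector, same as (10, 1)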
arr.reshape((2, 5)) arr.reshape((1, 10)) arr.reshape((10, 1)) arr.reshape((5, 2)) # ## Access to elements arr = np.random.randint(10, size=100).reshape((10, 10)) arr arr[5,5] arr[0,0] arr[9,9] arr[:,0] arr[0,:] arr[:3,:3] arr > 5 arr[arr > 5] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: myenv # language: python # name: myenv # --- try: from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession except ImportError as e: printmd('<<<<>>>>') # + sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]")) spark = SparkSession \ .builder \ .getOrCreate() # - # Mean = $\frac{1}{n} \sum_{i=1}^n a_i$ # create a rdd from 0 to 99 rdd = sc.parallelize(range(100)) sum_ = rdd.sum() n = rdd.count() mean = sum_/n print(mean) # Median
    # (1) sort the list
    # (2) pick the middle element
    rdd.collect() rdd.sortBy(lambda x:x).collect() # To access the middle element, we need to access the index. rdd.sortBy(lambda x:x).zipWithIndex().collect() sortedandindexed = rdd.sortBy(lambda x:x).zipWithIndex().map(lambda x:x) n = sortedandindexed.count() if (n%2 == 1): index = (n-1)/2; print(sortedandindexed.lookup(index)) else: index1 = (n/2)-1 index2 = n/2 value1 = sortedandindexed.lookup(index1)[0] value2 = sortedandindexed.lookup(index2)[0] print((value1 + value2)/2) # Standard Deviation:
    # - tells you how wide the data is spread around the mean
    # so if the SD is low, all the values should be close to the mean
    # - to calculate it, first calculate the mean $\bar{x}$
    # - SD = $\sqrt{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})^2}$
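# A cross-check (an addition): Spark's numeric RDDs already expose these summary statistics through a StatCounter, which is handy for validating the manual computations in the following cells. Assumes `rdd` (the 0..99 RDD from above) is still in scope.
print(rdd.mean())    # should equal sum_/n
print(rdd.stdev())   # population standard deviation, matching the formula above
print(rdd.stats())   # count, mean, stdev, min and max in a single pass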
    from math import sqrt sum_ = rdd.sum() n = rdd.count() mean = sum_/n sqrt(rdd.map(lambda x: pow(x-mean,2)).sum()/n) # Skewness
    # - tells us how asymmetrically the data is spread around the mean
    # - positive skew means a longer tail above the mean, negative skew a longer tail below it
    # - Skew = $\frac{1}{n}\frac{\sum_{j=1}^n (x_j- \bar{x})^3}{\text{SD}^3}$, $x_j$ = individual value sd = sqrt(rdd.map(lambda x: pow(x-mean,2)).sum()/n) n = float(n) # cast to float to avoid integer division skw = (1/n)*rdd.map(lambda x : pow(x- mean,3)/pow(sd,3)).sum() skw # Kurtosis
    # # - tells us the shape of the data
    # - indicates outlier content within the data
    # - kurt = $\frac{1}{n}\frac{\sum_{j=1}^n (x_j- \bar{x})^4}{\text{SD}^4}$, $x_j$ = individual value # (1/n)*rdd.map(lambda x : pow(x- mean,4)/pow(sd,4)).sum() # Covariance & Correlation
    # # - how two columns interact with each other
    # - how all columns interact with each other
    # - cov(X,Y) = $\frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(y_i -\bar{y})$ rddX = sc.parallelize(range(100)) rddY = sc.parallelize(range(100)) # to avoid loss of precision use float meanX = rddX.sum()/float(rddX.count()) meanY = rddY.sum()/float(rddY.count()) # since we need to use rddx, rddy same time we need to zip them together rddXY = rddX.zip(rddY) covXY = rddXY.map(lambda x:(x[0]-meanX)*(x[1]-meanY)).sum()/rddXY.count() covXY # Correlation # # - corr(X,Y) =$ \frac{\text{cov(X,Y)}}{SD_X SD_Y}$ #
    # # Measure of dependency - Correlation
    # +1 the columns are perfectly positively correlated
    # 0 the columns show no linear relationship
    # -1 inverse dependency from math import sqrt n = rddXY.count() mean = sum_/n SDX = sqrt(rdd.map(lambda x: pow(x-meanX,2)).sum()/n) SDY = sqrt(rdd.map(lambda y: pow(y-meanY,2)).sum()/n) corrXY = covXY/(SDX *SDY) corrXY # corellation matrix in practice import random from pyspark.mllib.stat import Statistics col1 = sc.parallelize(range(100)) col2 = sc.parallelize(range(100,200)) col3 = sc.parallelize(list(reversed(range(100)))) col4 = sc.parallelize(random.sample(range(100),100)) data = col1 data.take(5) data1 = col1.zip(col2) data1.take(5) # Welcome to exercise one of week two of “Apache Spark for Scalable Machine Learning on BigData”. In this exercise you’ll read a DataFrame in order to perform a simple statistical analysis. Then you’ll rebalance the dataset. No worries, we’ll explain everything to you, let’s get started. # # Let’s create a data frame from a remote file by downloading it: # # + # delete files from previous runs # !rm -f hmp.parquet* # download the file containing the data in PARQUET format # !wget https://github.com/IBM/coursera/raw/master/hmp.parquet # create a dataframe out of it df = spark.read.parquet('hmp.parquet') # register a corresponding query table df.createOrReplaceTempView('df') # - # This is a classical classification data set. One thing we always do during data analysis is checking if the classes are balanced. In other words, if there are more or less the same number of example in each class. Let’s find out by a simple aggregation using SQL. # from pyspark.sql.functions import col counts = df.groupBy('class').count().orderBy('count') display(counts) df.groupBy('class').count().show() spark.sql('select class,count(*) from df group by class').show() # This looks nice, but it would be nice if we can aggregate further to obtain some quantitative metrics on the imbalance like, min, max, mean and standard deviation. If we divide max by min we get a measure called minmax ration which tells us something about the relationship between the smallest and largest class. Again, let’s first use SQL for those of you familiar with SQL. Don’t be scared, we’re used nested sub-selects, basically selecting from a result of a SQL query like it was a table. All within on SQL statement. # spark.sql(''' select *, max/min as minmaxratio -- compute minmaxratio based on previously computed values from ( select min(ct) as min, -- compute minimum value of all classes max(ct) as max, -- compute maximum value of all classes mean(ct) as mean, -- compute mean between all classes stddev(ct) as stddev -- compute standard deviation between all classes from ( select count(*) as ct -- count the number of rows per class and rename it to ct from df -- access the temporary query table called df backed by DataFrame df group by class -- aggrecate over class ) ) ''').show() # The same query can be expressed using the DataFrame API. Again, don’t be scared. It’s just a sequential expression of transformation steps. You now an choose which syntax you like better. # df.show() df.printSchema() # + from pyspark.sql.functions import col, min, max, mean, stddev df \ .groupBy('class') \ .count() \ .select([ min(col("count")).alias('min'), max(col("count")).alias('max'), mean(col("count")).alias('mean'), stddev(col("count")).alias('stddev') ]) \ .select([ col('*'), (col("max") / col("min")).alias('minmaxratio') ]) \ .show() # - # Now it’s time for you to work on the data set. 
First, please create a table of all classes with the respective counts, but this time, please order the table by the count number, ascending. # df1 = df.groupBy('class').count() df1.sort('count',ascending=True).show() # Pixiedust is a very sophisticated library. It takes care of sorting as well. Please modify the bar chart so that it gets sorted by the number of elements per class, ascending. Hint: It’s an option available in the UI once rendered using the display() function. # # + pixiedust={"displayParams": {}} import pixiedust from pyspark.sql.functions import col counts = df.groupBy('class').count().orderBy('count') display(counts) # - # Imbalanced classes can cause pain in machine learning. Therefore let’s rebalance. In the flowing we limit the number of elements per class to the amount of the least represented class. This is called undersampling. Other ways of rebalancing can be found here: # # [https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/](https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0201EN-SkillsNetwork-20647446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) # # + from pyspark.sql.functions import min # create a lot of distinct classes from the dataset classes = [row[0] for row in df.select('class').distinct().collect()] # compute the number of elements of the smallest class in order to limit the number of samples per calss min = df.groupBy('class').count().select(min('count')).first()[0] # define the result dataframe variable df_balanced = None # iterate over distinct classes for cls in classes: # only select examples for the specific class within this iteration # shuffle the order of the elements (by setting fraction to 1.0 sample works like shuffle) # return only the first n samples df_temp = df \ .filter("class = '"+cls+"'") \ .sample(False, 1.0) \ .limit(min) # on first iteration, assing df_temp to empty df_balanced if df_balanced == None: df_balanced = df_temp # afterwards, append vertically else: df_balanced=df_balanced.union(df_temp) # - # Please verify, by using the code cell below, if df_balanced has the same number of elements per class. You should get 6683 elements per class. # $$$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 's descent # # (23 March 1882 – 14 April 1935) was a German mathematician who made important contributions to abstract algebra and theoretical physics (https://en.wikipedia.org/wiki/Emmy_Noether). # # According to the math genealogy project, had 14 doctoral students, who had 76 themselves, ... so until now she has *1365* descendants. 
import networkx, json from francy_widget import FrancyWidget G = networkx.DiGraph() data = json.load(open("noether.json")) nodes = data["nodes"] #print(len(nodes)) nodes_to_keep = {k:nodes[k] for k in nodes if nodes[k][0]<4} edges_to_keep = [e for e in data["edges"] if e[1] in nodes_to_keep] G.add_edges_from(edges_to_keep) def node_options(n): options = {} d = nodes[n] options["layer"] = d[0] options["title"] = "%s (%s)" % (d[2].split(",")[0], d[3]) if n in ["6967", "63779", "6982", "29850", "121808", "191816", "54355", "98035", "44616", "57077", "21851"]: options["type"] = 'diamond' else: options["type"] = 'circle' return options FrancyWidget(G, graphType="directed", height=800, zoomToFit=False, node_options=node_options) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline from pyvista import set_plot_theme set_plot_theme('document') # Load data using a Reader {#reader_example} # ======================== # # To have more control over reading data files, use a class based reader. # This class allows for more fine-grained control over reading datasets # from files. See `pyvista.get_reader`{.interpreted-text role="func"} for # a list of file types supported. # import pyvista from pyvista import examples import numpy as np from tempfile import NamedTemporaryFile # An XML PolyData file in `.vtp` format is created. It will be saved in a # temporary file for this example. # temp_file = NamedTemporaryFile('w', suffix=".vtp") temp_file.name # `pyvista.Sphere`{.interpreted-text role="class"} already includes # `Normals` point data. Additionally `height` point data and `id` cell # data is added. # mesh = pyvista.Sphere() mesh['height'] = mesh.points[:, 1] mesh['id'] = np.arange(mesh.n_cells) mesh.save(temp_file.name) # `pyvista.read`{.interpreted-text role="func"} function reads all the # data in the file. This provides a quick and easy one-liner to read data # from files. # new_mesh = pyvista.read(temp_file.name) print(f"All arrays: {mesh.array_names}") # Using `pyvista.get_reader`{.interpreted-text role="func"} enables more # fine-grained control of reading data files. Reading in a `.vtp`[ file # uses the :class:\`pyvista.XMLPolyDataReader]{.title-ref}. # reader = pyvista.get_reader(temp_file.name) reader # Alternative method: reader = pyvista.XMLPolyDataReader(temp_file.name) # Some reader classes, including this one, offer the ability to inspect # the data file before loading all the data. For example, we can access # the number and names of point and cell arrays. # print(f"Number of point arrays: {reader.number_point_arrays}") print(f"Available point data: {reader.point_array_names}") print(f"Number of cell arrays: {reader.number_cell_arrays}") print(f"Available cell data: {reader.cell_array_names}") # We can select which data to read by selectively disabling or enabling # specific arrays or all arrays. Here we disable all the cell arrays and # the `Normals` point array to leave only the `height` point array. The # data is finally read into a pyvista object that only has the `height` # point array. 
# reader.disable_all_cell_arrays() reader.disable_point_array('Normals') print(f"Point array status: {reader.all_point_arrays_status}") print(f"Cell array status: {reader.all_cell_arrays_status}") reader_mesh = reader.read() print(f"Read arrays: {reader_mesh.array_names}") # We can reuse the reader object to choose different variables if needed. # reader.enable_all_cell_arrays() reader_mesh_2 = reader.read() print(f"New read arrays: {reader_mesh_2.array_names}") # Some Readers support setting different time points or iterations. In # both cases, this is done using the time point functionality. The NACA # dataset has two such points with density. This dataset is in EnSight # format, which uses the `pyvista.EnSightReader`{.interpreted-text # role="class"} class. # filename = examples.download_naca(load=False) reader = pyvista.get_reader(filename) time_values = reader.time_values print(reader) print(f"Available time points: {time_values}") print(f"Available point arrays: {reader.point_array_names}") # First both time points are read in, and then the difference in density # is calculated and saved on the second mesh. The read method of # `pyvista.EnSightReader`{.interpreted-text role="class"} returns a # `pyvista.MultiBlock`{.interpreted-text role="class"} instance. In this # dataset, there are 3 blocks and the new scalar must be applied on each # block. # # + reader.set_active_time_value(time_values[0]) mesh_0 = reader.read() reader.set_active_time_value(time_values[1]) mesh_1 = reader.read() for block_0, block_1 in zip(mesh_0, mesh_1): block_1['DENS_DIFF'] = block_1['DENS'] - block_0['DENS'] # - # The value of [DENS]{.title-ref} is plotted on the left column for both # time points, and the difference on the right. # # + plotter = pyvista.Plotter(shape='2|1') plotter.subplot(0) plotter.add_mesh(mesh_0, scalars='DENS',show_scalar_bar=False) plotter.add_text(f"{time_values[0]}") plotter.subplot(1) plotter.add_mesh(mesh_1, scalars='DENS', show_scalar_bar=False) plotter.add_text(f"{time_values[1]}") # pyvista currently cannot plot the same mesh twice with different scalars plotter.subplot(2) plotter.add_mesh(mesh_1.copy(), scalars='DENS_DIFF', show_scalar_bar=False) plotter.add_text("DENS Difference") plotter.link_views() plotter.camera_position= ((0.5, 0, 8), (0.5, 0, 0), (0, 1, 0)) plotter.show() # - # Reading time points or iterations can also be utilized to make a movie. # Compare to `gif_movie_example`{.interpreted-text role="ref"}, but here a # set of files are read in through a ParaView Data format file. This file # format and reader also return a `pyvista.MultiBlock`{.interpreted-text # role="class"} mesh. # filename = examples.download_wavy(load=False) reader = pyvista.get_reader(filename) print(reader) # For each time point, plot the mesh colored by the height. 
Put iteration # value in top left # # + plotter = pyvista.Plotter(notebook=False, off_screen=True) # Open a gif plotter.open_gif("wave_pvd.gif") for time_value in reader.time_values: reader.set_active_time_value(time_value) mesh = reader.read()[0] # This dataset only has 1 block plotter.add_mesh(mesh, scalars='z', show_scalar_bar=False) plotter.add_text(f"Time: {time_value:.0f}", color="black") plotter.render() plotter.write_frame() plotter.clear() plotter.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import statsmodels.api as sm from statsmodels.regression.rolling import RollingOLS from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings("ignore") # ## The ProShares ETF Product # ***1. “Alternative ETFs”*** # Describe the two types of investments referenced by this term. # # **Solution:** # See Figure 1 of the case. Real estate, commodities, precious metals, currencies, volatility, private equity. # # # ***2. Hedge Funds.*** # (a)Using just the information in the case, what are two measures by which hedge funds are an attractive investment? # # **Solution:** # As seen in Exhibit 1, slide 9-10, the HFRI has a much higher SR than the S&P500 (.84 vs .41) , as well as a significantly smaller drawdown. Slide 7 also emphasizes that the HFRI is not perfectly correlated to equities and bonds, so it provides diversification at the portfolio level. # # # (b)What are the main benefits of investing in hedge funds via an ETF instead of directly? # # **Solution:** # See slide 13 of Exhibit 1, for a summary as given by ProShares. Undoubtedly, the ETF charges lower fees, provides more liquidity, opens access beyond institutional and high-net-worth investors. Furthermore, the investment does not have the idiosyncratic risk of a single fund, subject to that single fund's management, legal risks, etc. Of course, there are drawbacks to investing in the ETF instead of a single fund. Namely, the single fund may deliver excess returns (alpha) via specialized information, market access, skill, etc. # # # ***3. The Benchmarks.*** # (a) Explain as simply as possible how HFRI, MLFM, MLFM-ES, and HDG differ in their construction and purpose. # # **Solution:** # See Footnote7 on Page 3. HFRI was designed to reflect the collective performance of hedge funds through an equally weighted composite of over 2,000 constituent hedge funds that were available to accredited investors. # #
#
# See slide 16 of Exhibit 1 and Page 5. MLFM targets a high correlation to the HFRI. There were six factors in the original Merrill Lynch Factor Model: S&P 500 (U.S. large-cap stocks), Russell 2000 (U.S. small-cap stocks), MSCI EAFE (developed stock markets), MSCI Emerging Markets, the euro/U.S. dollar exchange rate, and the three-month Eurodollar deposit yield.
#
#
#
# See Page 5 of the case. MLFM-ES is an updated version of the MLFM in which all six index components are tradable. The MLFM-ES replaced the three-month Eurodollar deposit yields with U.S. Treasury bills and the dollar/euro exchange rate with ProShares UltraShort Euro (EUO). It has a nearly perfect correlation to the MLFM.
#
#
    # # See Page 5 and Exhibit 2 Factor Models. HDG seeks to track the performance of MLFM-ES to strive for a high correlation to hedge fund beta, thus it has the same factors as MLFM-ES. # # # (b)How well does the Merrill Lynch Factor Model (MLFM) track the HFRI? # # **Solution:** # Exhibit 1, slide 18 shows that MLFM to HFRI has a correlation of 0.90 through 2013. # # # (c)In which factor does the MLFM have the largest loading? (See a slide in Exhibit 1.) # # **Solution:** # By far, the largest factor loading is in t-bills. This is true throughout 2013 and 2014. See Exhibit 1, slides 19-20 for details. # # # (d)What are the main concerns you have for how the MLFM attempts to replicate the HFRI? # # **Solution:** # The factors used in the MLFM are highly correlated, which raises concerns about the factor weights. Also, the dynamic regression used to construct the weights on the six factors was a backward-looking exercise, thus it will always lag behind the changes in hedge fund style. # # # ***4. The HDG Product*** # (a) What does ProShares ETF, HDG, attempt to track? Is the tracking error small? # # **Solution:** # HDG tracks a modified version of the ML Factor Model, MLFM-ES. The Merrill Lynch Factor Model involves indexes which cannot be exactly traded. For that reason, ProShares created a traded version of the Factor Model which replaces non-traded indexes with liquid, traded securities. # # Exhibit 1, slide 22 shows that this modified benchmark tracks the standard ML Factor Model with a correlation of 99.7% when looking at daily data from 2011-2013. # # Exhibit 2 shows that HDG tracks this benchmark closely, though it does not report a numerical estimate. # # (b) HDG is, by construction, delivering beta for investors. Isn't the point of hedge funds to generate alpha? Then why would HDG be valuable? # # **Solution:** HDG may be valuable by delivering complicated or expensive beta to investors. In that sense, a sophisticated \beta" could be valued as \alpha" by investors, especially if delivered in a low-cost ETF. # # And even if HDG is delivering only accessible beta, it could be valuable to a portfolio through its ability to diversify against traditional equity and bond allocations, as shown in Exhibit 1 and discussed as a benefit of HFRI. And at ETF fees, this could be an efficient way of loading into these alternative exposures. # # (c) The fees of a typical hedge-fund are 2% on total assets plus 20% of excess returns if positive. HDG's expense ratio is roughly 1% on total assets. What would their respective net Sharpe Ratios be, assuming both have a gross excess returns of 10% and volatility of 20%? # # **Solution:** The gross returns of the underlying assets have a Sharpe Ratio of 0.50. Net of the 1% fee, the net SR for the ETF would then be 9/20 = 0.45. Net of fees, the hedge-fund has 6% excess returns, which leads to a Sharpe Ratio of 6/20 = 0.30. # # Of course, this calculation is very simple, but it intends to illustrate that performance is sensitive to the high fees traditionally charged by hedge funds. In the numerical example above, the ETF delivers 90% of the asset-level Sharpe Ratio, while the Hedge Fund delivers 60% of the underlying asset Sharpe Ratio. Thus, though the ETF may miss some of the individual hedge-fund premia, it also has a lower hurdle rate given the lower fees. 
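# The fee arithmetic above is easy to check in a couple of lines. This small cell is an addition, not part of the
# original solution; it simply restates the assumptions already given: 10% gross excess return, 20% volatility, a
# 1% ETF expense ratio, and a 2-and-20 hedge-fund fee with the 20% applied to the positive excess return.

# +
gross_excess, vol = 0.10, 0.20

# ETF: flat 1% expense ratio
etf_net = gross_excess - 0.01
# Hedge fund: 2% on assets plus 20% of the (positive) excess return
hf_net = gross_excess - 0.02 - 0.20 * gross_excess

print(f"Gross Sharpe:          {gross_excess / vol:.2f}")   # 0.50
print(f"ETF net Sharpe:        {etf_net / vol:.2f}")        # 0.45
print(f"Hedge fund net Sharpe: {hf_net / vol:.2f}")         # 0.30
# -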
# + hf_data = pd.read_excel('../data/proshares_analysis_data.xlsx', sheet_name = 'hedge_fund_series') hf_data = hf_data.set_index('date') factor_data = pd.read_excel('../data/proshares_analysis_data.xlsx', sheet_name = 'merrill_factors') factor_data = factor_data.set_index('date') other_data = pd.read_excel('../data/proshares_analysis_data.xlsx', sheet_name = 'other_data') other_data = other_data.set_index('date') other_data['SPY US Equity'] = factor_data['SPY US Equity'] # - def summary_stats(df, annual_fac): report = pd.DataFrame() report['Mean'] = df.mean() * annual_fac report['Vol'] = df.std() * np.sqrt(annual_fac) report['Sharpe'] = report['Mean'] / report['Vol'] return round(report, 4) def tail_risk_report(data, q): df = data.copy() df.index = data.index.date report = pd.DataFrame(columns = df.columns) report.loc['Skewness'] = df.skew() report.loc['Excess Kurtosis'] = df.kurtosis() report.loc['VaR'] = df.quantile(q) report.loc['Expected Shortfall'] = df[df < df.quantile(q)].mean() cum_ret = (1 + df).cumprod() rolling_max = cum_ret.cummax() drawdown = (cum_ret - rolling_max) / rolling_max report.loc['Max Drawdown'] = drawdown.min() report.loc['MDD Start'] = None report.loc['MDD End'] = drawdown.idxmin() report.loc['Recovery Date'] = None for col in df.columns: report.loc['MDD Start', col] = (rolling_max.loc[:report.loc['MDD End', col]])[col].idxmax() recovery_df = (drawdown.loc[report.loc['MDD End', col]:])[col] # modify the threshold for recovery from 0 to 0.001 try: report.loc['Recovery Date', col] = recovery_df[recovery_df >= 0].index[0] report.loc['Recovery period (days)'] = (report.loc['Recovery Date'] - report.loc['MDD Start']).dt.days except: report.loc['Recovery Date', col] = None report.loc['Recovery period (days)'] = None return round(report,4) def reg_stats(df, annual_fac): reg_stats = pd.DataFrame(data = None, index = df.columns, columns = ['beta', 'Treynor Ratio', 'Information Ratio']) for col in df.columns: # Drop the NAs in y y = df[col].dropna() # Align the X with y X = sm.add_constant(factor_data['SPY US Equity'].loc[y.index]) reg = sm.OLS(y, X).fit() reg_stats.loc[col, 'beta'] = reg.params[1] reg_stats.loc[col, 'Treynor Ratio'] = (df[col].mean() * annual_fac) / reg.params[1] reg_stats.loc[col, 'Information Ratio'] = (reg.params[0] / reg.resid.std()) * np.sqrt(annual_fac) return reg_stats.astype(float).round(4) def display_correlation(df,list_maxmin=True): corrmat = df.corr() #ignore self-correlation corrmat[corrmat==1] = None sns.heatmap(corrmat) if list_maxmin: corr_rank = corrmat.unstack().sort_values().dropna() pair_max = corr_rank.index[-1] pair_min = corr_rank.index[0] print(f'MIN Correlation pair is {pair_min}') print(f'MAX Correlation pair is {pair_max}') # # 2 Analyzing the Data # **1**. For the series in the “hedge fund series” tab, report the following summary statistics: summary_stats(hf_data.join(factor_data['SPY US Equity']),12) # **2.** For the series in the “hedge fund series” tab, calculate the following statistics related to tail- risk. tail_risk_report(hf_data.join(factor_data['SPY US Equity']),0.05) # **3.** For the series in the “hedge fund series” tab, run a regression of each against SPY (found in the “merrill factors” tab.) Include an intercept. Report the following regression-based statistics: reg_stats(hf_data,12) # # 4. Relative Performance # Discuss the previous statistics, and what they tell us about... # # (a) the differences between SPY and the hedge-fund series? 
# # **Solution:** # The SPY has a higher mean, volatility, and Sharpe Ratio than that of all hedge-fund series. Also, the SPY has smaller tail risks in terms of VaR, and CVaR. Besides, all hedge-fund series have a negative information ratio, which indicates that they fail to beat the market. # # # (b) which performs better between HDG and QAI. # # **Solution:** # Although HDG has a slightly higher mean return, it also has a higher volatility, so it has a smaller Sharpe Ratio than QAI. In addition, HDG contains more tail risks as it has higher VaR, CVaR, and Maximum Drawdown. The recovery period of HDG is also longer than that of QAI. QAI is also less correlated to the market(See the heat map below), and has a higher Treynor Ratio and Information Ratio. Overall, QAI performs better than HDG. # # # (c) whether HDG and the ML series capture the most notable properties of HFRI. # # **Solution:** # Both HDG and the ML series fail to deliver the same high returns compendated with the high risk of HRFI. The HFRI also shows a very high excess kurtosis, but all of the hedge-fund series has a very small excess kurtosis. # # # 5. Report the correlation matrix for these assets. # (a) Show the correlations as a heat map. # (b) Which series have the highest and lowest correlations? display_correlation(hf_data.join(factor_data['SPY US Equity']),True) # **6.** Replicate HFRI with the six factors listed on the “merrill factors” tab. Include a constant, and run the unrestricted regression y = hf_data['HFRIFWI Index'] X = sm.add_constant(factor_data) static_model = sm.OLS(y,X).fit() # (a) Report the intercept and betas. static_model.params.to_frame('Regression Parameters') # (b) Are the betas realistic position sizes, or do they require huge long-short positions? # # **Solution:** # The betas look like realistic position sizes. They do not require huge long-short positions. # # (c) Report the R-squared. round(static_model.rsquared,4) # (d)Report the volatility of $\epsilon^{merr}$, (the tracking error.) # round(static_model.resid.std() * np.sqrt(12),4) # 7. Let’s examine the replication out-of-sample. model = RollingOLS(y,X,window=60) rolling_betas = model.fit().params.copy() rolling_betas rep_IS = (rolling_betas * X).sum(axis=1,skipna=False) rep_OOS = (rolling_betas.shift() * X).sum(axis=1,skipna=False) replication = hf_data[['HFRIFWI Index']].copy() replication['Static-IS-Int'] = static_model.fittedvalues replication['Rolling-IS-Int'] = rep_IS replication['Rolling-OOS-Int'] = rep_OOS replication.corr() replication[['Rolling-OOS-Int','HFRIFWI Index']].plot() # How well does the out-of-sample replication perform with respect to the target? # # **Solution:** # The out-of-sample replication performs very well with respect to the target. It has a very high correlation to the HFRI. # # **8.** We estimated the replications using an intercept. Try the full-sample estimation, but this time without an intercept. y = hf_data['HFRIFWI Index'] X = factor_data static_model_noint = sm.OLS(y,X).fit() # (a) Report the regression beta. How does it compare to the estimated beta with an intercept, βˆmerr? # # **Solution:** # Without an intercept, the betas are almost identical, except the beta in the 3-month T-bills. The T-bills are such low volatility, they act almost like an intercept. Thus, the regression performance is very similar. # betas = pd.DataFrame(static_model.params,columns=['Yes Intercept']).T betas.loc['No Intercept'] = static_model_noint.params betas # (b) the mean of the fitted value, $\check{r}^{hfri}$ . 
How does it compare to the mean of the HFRI? # # **Solution:** # The mean of the fitted value is slightly smaller than the mean of the HFRI. # print("The mean of the fitted value is", round(static_model_noint.fittedvalues.mean(),4)*12) print("The mean of the HFRI is",round(hf_data['HFRIFWI Index'].mean(),4)*12) # (c) the correlations of the fitted values, $\check{r}^{hfri}$ to the HFRI. How does the correlation compare to that of the fitted values of $\hat{r}^{hfri}$? # # **Solution:** # The correlations of the fitted values of model without an intercept to the HFRI are quite high. This correlation are very similar to that of the fitted values of the model with an intercept. # replication['Static-IS-NoInt'] = static_model_noint.fittedvalues replication.corr() # Do you think Merrill and ProShares fit their replicators with an intercept or not? # # **Solution:** # Recall that if our porfolio is trying to deliver hedge-fund returns (including their mean and high SR) # via an ETF, then the replication should definitely not include an intercept in the regression. # It should make the replication factors match the mean, so that the investors in the ETF # match the mean, not just the variation of HFRI. # # # However, if our porfolio is only trying to deliver a hedge, or a similar variation, then we should # include an intercept, and accept that the replication will differ in mean returns by alpha # but will match variation anyway. # # # Overall, it would seem reasonable and necessary for the replication to attempt to deliver the # high mean returns of hedge funds in order to be attractive for investors. For that reason, we # expect Merrill and ProShares are not including an intercept when running their regressions # and setting their factor weights. # # 3 Extensions # **1.** Merrill constrains the weights of each asset in its replication regression of HFRI. Try constraining your weights by re-doing 2.6. # # (a) Use Non-Negative Least Squares (NNLS) instead of OLS.5 # + reg_nnls = LinearRegression(positive=True) y = hf_data['HFRIFWI Index'] X = factor_data # By default, fit_intercept is true so there's no need to add constant nnls_model = reg_nnls.fit(X, y) sum_index = list(factor_data.columns) sum_index.append('constant') coefs = np.append(nnls_model.coef_,nnls_model.intercept_) nnls_reg_summary = pd.DataFrame(coefs,index = sum_index,columns = ['Regression Parameters']) nnls_reg_summary # - # (b) Go further by using a Generalized Linear Model to put separate interval constraints on # each beta, rather than simply constraining them to be non-negative. # # skip. # **2**. Let’s decompose a few other targets to see if they behave as their name suggests. # (a) Regress HEFA on the same style factors used to decompose HFRI. Does HEFA appear to be a currency-hedged version of EFA? # # **Solution:** # Yes, the beta for EFA US Equity is 0.9779, which is very close to 1. The beta for EUO US Equity is the second largest, which is 0.3099. Other betas are relatively small, except the beta for USGG3M Index is very negative, but it has a large p-value. Therefore, we can replicate HEFA very well using only EUO US Equity and EFA. # y = other_data['HEFA US Equity'].dropna() X = sm.add_constant(factor_data.loc[y.index]) sm.OLS(y, X).fit().summary() # **(b)** Decompose TRVCI with the same style factors used to decompose HFRI. The TRVCI Index tracks venture capital funds–in terms of our styles, what best describes venture capital? 
# # **Solution:** # The venture capital fund is very sensitive to the market and the US treasury-bills indicated by the corresponding betas. This means that the venture capital performs well when the market and the economy are good. However, the p-val for the US treasury-bills beta is very high (0.597), so the beta estimate can be unreliable. # y = other_data['TRVCI Index'].dropna() X = sm.add_constant(factor_data.loc[y.index]) sm.OLS(y, X).fit().summary() # **(c)** TAIL is an ETF that tracks SPY, but that also buys put options to protect against market downturns. Calculate the statistics in questions 2.1-2.3 for TAIL. Does it seem to behave as indicated by this description? That is, does it have high correlation to SPY while delivering lower tail risk? # # **Solution:** # The TAIL does not seem to track the SPY well. It has a much lower and negative Sharpe Ratio, so the returns of TAIL are much worse than SPY. In terms of tail risks, it has a very high skewness and kurtosis. However, the VaR and the CVaR are slightly higher than then SPY, and this makes sense since it buys put options for protection. Besides, it has not recovered from the Maximum drawdown yet. We can conclude that it won't have high correlation to SPY. # summary_stats(other_data[['TAIL US Equity','SPY US Equity']],12) tail_risk_report(other_data[['TAIL US Equity','SPY US Equity']],0.05) reg_stats(other_data[['TAIL US Equity','SPY US Equity']],12) # ***3. Geared ETFs*** # (a) Explain conceptually why Levered ETFs may track their index well for a given day but diverge over time. How is this exacerbated in volatile periods like 2008? # # **Solution:** # Even though the tracking error of Levered ETFs in a day is small, the error can become huge over time because of the effect of compounding. During the volatile periods, the Levered ETFs need to be reset daily, so the compounding effect would exacerbate the tracking error. # # (b) Analyze SPXU and UPRO relative to SPY. # # i. Analyze them with the statistics from 2.1-2.3. Do these two ETFs seem to live up to their names? # # **Solution:** # The mean returns of SPXU and UPRO seem to live up to their names, but in terms of Sharpe Ratio, they do not. Also, the two ETFs have more tail risk in terms of VaR and CVaR. The market beta for SPXU is -2.5034 instead of -3, but the market beta for UPRO is 3.1421, which is very close to 3 as claimed. # summary_stats(other_data[['SPXU US Equity','UPRO US Equity','SPY US Equity']], 12) tail_risk_report(other_data[['SPXU US Equity','UPRO US Equity','SPY US Equity']], 0.05) reg_stats(other_data[['SPXU US Equity','UPRO US Equity','SPY US Equity']], 12) (1+other_data[['SPXU US Equity','UPRO US Equity','SPY US Equity']]).cumprod().plot() # In conclusion, I think "levered" ETFs can be a good choice while the overall market is performing well as it can deliver much desirable returns. However, when the market is volatile, the "levered" ETFs are very likely to underperform and it contains more tail risk as well. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 64-bit ('.env') # metadata: # interpreter: # hash: 6b3d8fded84b82a84dd06aec3772984b6fd7683755c4932dae62599619bfeba9 # name: python3 # --- # # Multiple Subplots # # Sometimes it is helpful to compare diffrent views of data side by side To this end, Matplotlib has the ocncept of *subplots: groups of smaller axes that can exist together within a single figure. 
These subplots might be insets, grids of plots, or other more complicated layouts. In this section we'll explore four routines for creating subplots in Matplotlib.

# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import numpy as np

# # `plt.axes`: Subplots by Hand
#
# The most basic method for creating an axes is to use the `plt.axes` function. As we've seen previously, by default this creates a standard axes object that fills the entire figure. `plt.axes` also takes an optional argument that is a list of four numbers in the figure coordinate system. These numbers represent `[left, bottom, width, height]` in the figure coordinate system, which ranges from 0 at the bottom left of the figure to 1 at the top right of the figure.
#
# For example, we might create an inset axes at the top-right corner of another axes by setting the *x* and *y* position to 0.65 (that is, starting at 65% of the width and 65% of the height of the figure) and the *x* and *y* extents to 0.2 (that is, the size of the axes is 20% of the width and 20% of the height of the figure):

ax1 = plt.axes()  # standard axes
ax2 = plt.axes([0.65, 0.65, 0.2, 0.2])

# The equivalent of this command within the object-oriented interface is `fig.add_axes()`. Let's use this to create two vertically stacked axes:

# +
fig = plt.figure()
ax1 = fig.add_axes([0.1, 0.5, 0.8, 0.4], xticklabels=[], ylim=(-1.2, 1.2))
ax2 = fig.add_axes([0.1, 0.1, 0.8, 0.4], ylim=(-1.2, 1.2))

x = np.linspace(0, 10)
ax1.plot(np.sin(x))
ax2.plot(np.cos(x));
# -

# We now have two axes (the top with no tick labels) that are just touching: the bottom of the upper panel (at position 0.5) matches the top of the lower panel (at position 0.1 + 0.4).

# ## `plt.subplot`: Simple Grids of Subplots
#
# Aligned columns or rows of subplots are a common-enough need that Matplotlib has several convenience routines that make them easy to create. The lowest level of these is `plt.subplot()`, which creates a single subplot within a grid. As you can see, this command takes three integer arguments: the number of rows, the number of columns, and the index of the plot to be created in this scheme, which runs from the upper left to the bottom right:

for i in range(1, 7):
    plt.subplot(2, 3, i)
    plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center')

# The command `plt.subplots_adjust` can be used to adjust the spacing between these plots. The following code uses the equivalent object-oriented command, `fig.add_subplot()`:

fig = plt.figure()
fig.subplots_adjust(hspace=0.4, wspace=0.4)
for i in range(1, 7):
    ax = fig.add_subplot(2, 3, i)
    ax.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center')

# We've used the `hspace` and `wspace` arguments of `plt.subplots_adjust`, which specify the spacing along the height and width of the figure, in units of the subplot size (in this case, the space is 40% of the subplot width and height).

# ## `plt.subplots`: The Whole Grid in One Go
#
# The approach just described can become quite tedious when creating a large grid of subplots, especially if you'd like to hide the x-axis and y-axis labels on the inner plots.
#
# For this purpose, `plt.subplots()` is the easier tool to use (note the `s` at the end of `subplots`). Rather than creating a single subplot, this function creates a full grid of subplots in a single line, returning them in a NumPy array. The arguments are the number of rows and number of columns, along with optional keywords `sharex` and `sharey`, which allow you to specify the relationships between different axes.
# # Here we'll create a $2x3$ grid of subplots, where all axes in the same row share their y-axis scale, and all axes in the same column share their x-axis scale: fig, ax = plt.subplots(2, 3, sharex='col', sharey='row') # Note that by specifying `sharex` and `sharey`, we've automatically removed innter labels on the grid to make the plot cleaner. The resulting grid of axes instances is returned within a NumPy array for convenient specification of the desired axes using standard array indexing notation: # axes are in a two-dimensional array, indexed by [row, col] for i in range(2): for j in range(3): ax[i, j].text(0.5, 0.5, str((i, j)), fontsize=18, ha='center') fig # In comparison to `plt.subplot()`, `plt.subplots()` is more consistent with Python's conventional 0-based indexing. # ## `plt.GridSpec`: More Complicated Arrangements # # To go beyond a regular grid to subplots that span multiple rows and columns, `plt.GridSpec()` is the best tool. The `plt.GridSepc()` object does not create a plot by itself; it is imply a convenient interface that is recognized by the `plt.subplot()` command. For example, a gridspec for a grid of two rows and three columns with some specified width and height space looks like this: grid = plt.GridSpec(2, 3, wspace=0.4, hspace=0.3) # From this we can specify subplot locations and extents using the familiary Python slicing syntax: plt.subplot(grid[0, 0]) plt.subplot(grid[0, 1:]) plt.subplot(grid[1, :2]) plt.subplot(grid[1, 2]); # This type of flexible grid alignment has a wide range of uses. I most often use it when creating multi-axes histogram plots like the one shown here: # + # Create some normally distributed data mean = [0, 0] cov = [[1, 1], [1, 2]] x, y = np.random.multivariate_normal(mean, cov, 3000).T # Set up the axes with gridspec fig = plt.figure(figsize=(6, 6)) grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2) main_ax = fig.add_subplot(grid[:-1, 1:]) y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax) x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax) #scatter points on the main axes main_ax.plot(x, y, 'ok', markersize=3, alpha=0.2) # histogram on the attached axes x_hist.hist(x, 40, histtype='stepfilled', orientation='vertical', color='gray') x_hist.invert_yaxis() y_hist.hist(y, 40, histtype='stepfilled', orientation='horizontal', color='gray') y_hist.invert_xaxis() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: DESI master # language: python # name: desi-master # --- # # Test fibermap combination to avoid FITS errors # ## Replicate `assemble_fibermap` script # # ``` # assemble_fibermap -n NIGHT -e EXPID -o OUTFILE # ``` # ### Set up options # + import os import sys import argparse from unittest.mock import patch import glob import warnings import time from pkg_resources import resource_filename import yaml import numpy as np from astropy.table import Table, Column, join from astropy.io import fits from desitarget.targetmask import desi_mask from desiutil.log import get_logger from desiutil.depend import add_dependencies from desispec.io.util import fitsheader, write_bintable, makepath, addkeys, parse_badamps from desispec.io.meta import rawdata_root, findfile, faflavor2program from desispec.io import iotime from desispec.maskbits import fibermask from desispec.io.fibermap import find_fiberassign_file, compare_fiberassign, assemble_fibermap # + os.environ['DESI_LOGLEVEL'] = 'DEBUG' 
os.environ['SPECPROD'] = 'everest' if 'CSCRATCH' not in os.environ: os.environ['CSCRATCH'] = os.path.join(os.environ['HOME'], 'Documents', 'Data', 'scratch') night = '20210922' expid = '00101293' outfile = os.path.join(os.environ['CSCRATCH'], f'fibermap-{expid}.fits') with patch('sys.argv', ['assemble_fibermap', '-n', night, '-e', expid, '-o', outfile]) as foo: parser = argparse.ArgumentParser(usage = "{prog} [options]") parser.add_argument("-n", "--night", type=int, required=True, help="input night") parser.add_argument("-e", "--expid", type=int, required=True, help="spectroscopic exposure ID") parser.add_argument("-o", "--outfile", type=str, required=True, help="output filename") parser.add_argument("-b","--badamps", type=str, help="comma separated list of {camera}{petal}{amp}"+\ ", i.e. [brz][0-9][ABCD]. Example: 'b7D,z8A'") parser.add_argument("--badfibers", type=str, help="filename with table of bad fibers (with at least FIBER and FIBERSTATUS columns)") parser.add_argument("--debug", action="store_true", help="enter ipython debug mode at end") parser.add_argument("--overwrite", action="store_true", help="overwrite pre-existing output file") parser.add_argument("--force", action="store_true", help="make fibermap even if missing input guide or coordinates files") parser.add_argument("--no-svn-override", action="store_true", help="Do not allow fiberassign SVN to override raw data") args = parser.parse_args() print(args) # - # ### Run `assemble_fibermap()` # fibermap = assemble_fibermap(args.night, args.expid, badamps=args.badamps, force=args.force) fibermap = assemble_fibermap(args.night, args.expid, badamps=args.badamps, badfibers_filename=args.badfibers, force=args.force, allow_svn_override=(not args.no_svn_override) ) # ### Write file tmpfile = args.outfile+'.tmp' fibermap.write(tmpfile, overwrite=args.overwrite, format='fits') os.rename(tmpfile, args.outfile) # log.info(f'Wrote {args.outfile}') fibermap fibermap.meta # ## Recreate `assemble_fibermap()` # ### Examine raw file # + log = get_logger() rawfile = findfile('raw', night, int(expid)) try: rawheader = fits.getheader(rawfile, 'SPEC', disable_image_compression=True) except KeyError: rawheader = fits.getheader(rawfile, 'SPS', disable_image_compression=True) rawfafile = fafile = find_fiberassign_file(night, int(expid)) # - rawfafile allow_svn_override=(not args.no_svn_override) #- Look for override fiberassign file in svn tileid = rawheader['TILEID'] if allow_svn_override and ('DESI_TARGET' in os.environ): targdir = os.getenv('DESI_TARGET') testfile = f'{targdir}/fiberassign/tiles/trunk/{tileid//1000:03d}/fiberassign-{tileid:06d}.fits' if os.path.exists(testfile+'.gz'): fafile = testfile+'.gz' elif os.path.exists(testfile): fafile = testfile if rawfafile != fafile: log.info(f'Overriding raw fiberassign file {rawfafile} with svn {fafile}') else: log.info(f'{testfile}[.gz] not found; sticking with raw data fiberassign file') # + force=args.force #- Find coordinates file in same directory dirname, filename = os.path.split(rawfafile) globfiles = glob.glob(dirname+'/coordinates-*.fits') if len(globfiles) == 1: coordfile = globfiles[0] elif len(globfiles) == 0: message = f'No coordinates*.fits file in fiberassign dir {dirname}' if force: log.error(message + '; continuing anyway') coordfile = None else: raise FileNotFoundError(message) elif len(globfiles) > 1: raise RuntimeError(f'Multiple coordinates*.fits files in fiberassign dir {dirname}') # + #- And guide file dirname, filename = os.path.split(rawfafile) globfiles = 
glob.glob(dirname+'/guide-????????.fits.fz') if len(globfiles) == 0: #- try falling back to acquisition image globfiles = glob.glob(dirname+'/guide-????????-0000.fits.fz') if len(globfiles) == 1: guidefile = globfiles[0] elif len(globfiles) == 0: message = f'No guide-*.fits.fz file in fiberassign dir {dirname}' if force: log.error(message + '; continuing anyway') guidefile = None else: raise FileNotFoundError(message) elif len(globfiles) > 1: raise RuntimeError(f'Multiple guide-*.fits.fz files in fiberassign dir {dirname}') # - coordfile, guidefile # + #- Read QA parameters to find max offset for POOR and BAD positioning #- replicates desispec.exposure_qa.get_qa_params, but that has #- circular import if loaded from here param_filename = resource_filename('desispec', 'data/qa/qa-params.yaml') with open(param_filename) as f: qa_params = yaml.safe_load(f)['exposure_qa'] poor_offset_um = qa_params['poor_fiber_offset_mm']*1000 bad_offset_um = qa_params['bad_fiber_offset_mm']*1000 #- Preflight announcements log.info(f'Night {night} spectro expid {expid}') log.info(f'Raw data file {rawfile}') log.info(f'Fiberassign file {fafile}') if fafile != rawfafile: log.info(f'Original raw fiberassign file {rawfafile}') log.info(f'Platemaker coordinates file {coordfile}') log.info(f'Guider file {guidefile}') #---- #- Read and assemble # fa = Table.read(fafile, 'FIBERASSIGN') # fa.sort('LOCATION') with fits.open(fafile) as hdulist: fa = hdulist['FIBERASSIGN'].copy() fa.data.sort(order='LOCATION') # - #- if using svn fiberassign override, check consistency of columns that #- ICS / platemaker used for actual observations; they should never change if fafile != rawfafile: with fits.open(rawfafile) as hdulist: rawfa = hdulist['FIBERASSIGN'].copy() rawfa.data.sort(order='LOCATION') # rawfa = Table.read(rawfafile, 'FIBERASSIGN') # rawfa.sort('LOCATION') badcol = compare_fiberassign(rawfa.data, fa.data) #- special case for tile 80713 on 20210110 with PMRA,PMDEC NaN -> 0.0 if night == 20210110 and tileid == 80713: for col in ['PMRA', 'PMDEC']: if col in badcol: ii = rawfa[col] != fa[col] if np.all(np.isnan(rawfa.data[col][ii]) & (fa.data[col][ii] == 0.0)): log.warning(f'Ignoring {col} mismatch NaN -> 0.0 on tile {tileid} night {night}') badcol.remove(col) if len(badcol)>0: msg = f'incompatible raw/svn fiberassign files for columns {badcol}' log.critical(msg) raise ValueError(msg) else: log.info('svn fiberassign columns used for obervations match raw data (good)') # + #- Tiles designed before Fall 2021 had a LOCATION:FIBER swap for fibers #- 3402 and 3429 at locations 6098 and 6099; check and correct if needed. 
#- see desispec #1380 #- NOTE: this only swaps them if the incorrect combination is found if (6098 in fa.data['LOCATION']) and (6099 in fa.data['LOCATION']): iloc6098 = np.where(fa.data['LOCATION'] == 6098)[0][0] iloc6099 = np.where(fa.data['LOCATION'] == 6099)[0][0] if (fa.data['FIBER'][iloc6098] == 3402) and (fa.data['FIBER'][iloc6099] == 3429): log.warning(f'FIBERS 3402 and 3429 are swapped in {fafile}; correcting') fa.data['FIBER'][iloc6098] = 3429 fa.data['FIBER'][iloc6099] = 3402 #- add missing columns for data model consistency if 'PLATE_RA' not in fa.data.columns.names: plate_ra_col = fa.data.columns['TARGET_RA'].copy() plate_ra_col.name = 'PLATE_RA' fa.data.columns.add_col(plate_ra_col) if 'PLATE_DEC' not in fa.data.columns.names: plate_dec_col = fa.data.columns['TARGET_DEC'].copy() plate_dec_col.name = 'PLATE_DEC' fa.data.columns.add_col(plate_dec_col) #- also read extra keywords from HDU 0 fa_hdr0 = fits.getheader(fafile, 0) if 'OUTDIR' in fa_hdr0: fa_hdr0.rename_keyword('OUTDIR', 'FAOUTDIR') longstrn = fits.Card('LONGSTRN', 'OGIP 1.0', 'The OGIP Long String Convention may be used.') fa_hdr0.insert('DEPNAM00', longstrn) fa.header.extend(fa_hdr0, unique=True) # skipkeys = ['SIMPLE', 'EXTEND', 'COMMENT', 'EXTNAME', 'BITPIX', 'NAXIS'] # addkeys(fa.meta, fa_hdr0, skipkeys=skipkeys) # - fa.header rawheader # + #- Read platemaker (pm) coordinates file; 3 formats to support: # 1. has FLAGS_CNT/EXP_n and DX_n, DX_n (e.g. 20201214/00067678) # 2. has FLAGS_CNT/EXP_n but not DX_n, DY_n (e.g. 20210402/00083144) # 3. doesn't have any of these (e.g. 20201220/00069029) # Notes: # * don't use FIBER_DX/DY because some files are missing those # (e.g. 20210224/00077902) # * don't use FLAGS_COR_n because some files are missing that # (e.g. 20210402/00083144) pm = None numiter = 0 if coordfile is None: log.error('No coordinates file, thus no info on fiber positioning') else: with fits.open(coordfile) as hdulist: pm = hdulist['DATA'].copy() # pm = Table.read(coordfile, 'DATA') #- PM = PlateMaker #- If missing columns *and* not the first in a (split) sequence, #- try again with the first expid in the sequence #- (e.g. 
202010404/00083419 -> 83418) if 'DX_0' not in pm.data.columns.names: log.error(f'Missing DX_0 in {coordfile}') if 'VISITIDS' in rawheader: firstexp = int(rawheader['VISITIDS'].split(',')[0]) if firstexp != rawheader['EXPID']: origcorrdfile = coordfile coordfile = findfile('coordinates', night, firstexp) log.info(f'trying again with {coordfile}') with fits.open(coordfile) as hdulist: pm = hdulist['DATA'].copy() # pm = Table.read(coordfile, 'DATA') else: log.error(f'no earlier coordinates file for this tile') else: log.error('Missing VISITIDS header keywords to find earlier coordinates file') if 'FLAGS_CNT_0' not in pm.data.columns.names: log.error(f'Missing spotmatch FLAGS_CNT_0 in {coordfile}; no positioner offset info') pm = None numiter = 0 else: #- Count number of iterations in file numiter = len([col for col in pm.data.columns.names if col.startswith('FLAGS_CNT_')]) log.info(f'Using FLAGS_CNT_{numiter-1} in {coordfile}') # - fa # + #- Now let's merge that platemaker coordinates table (pm) with fiberassign if pm is not None: pm_table = Table(pm.data) pm_table['LOCATION'] = 1000*pm_table['PETAL_LOC'] + pm_table['DEVICE_LOC'] keep = np.in1d(pm_table['LOCATION'], fa.data['LOCATION']) pm_table = pm_table[keep] pm_table.sort('LOCATION') log.info('{}/{} fibers in coordinates file'.format(len(pm_table), len(fa.data))) #- Create fibermap table to merge with fiberassign file fibermap = Table() fibermap_header = fits.Header({'XTENSION': 'BINTABLE'}) fibermap['LOCATION'] = pm_table['LOCATION'] fibermap['NUM_ITER'] = numiter #- Sometimes these columns are missing in the coordinates files, maybe #- only when numiter=1, i.e. only a blind move but not corrections? if f'FPA_X_{numiter-1}' in pm_table.colnames: fibermap['FIBER_X'] = pm_table[f'FPA_X_{numiter-1}'] fibermap['FIBER_Y'] = pm_table[f'FPA_Y_{numiter-1}'] fibermap['DELTA_X'] = pm_table[f'DX_{numiter-1}'] fibermap['DELTA_Y'] = pm_table[f'DY_{numiter-1}'] else: log.error('No FIBER_X/Y or DELTA_X/Y information from platemaker') fibermap['FIBER_X'] = np.zeros(len(pm_table)) fibermap['FIBER_Y'] = np.zeros(len(pm_table)) fibermap['DELTA_X'] = np.zeros(len(pm_table)) fibermap['DELTA_Y'] = np.zeros(len(pm_table)) if ('FIBER_RA' in pm_table.colnames) and ('FIBER_DEC' in pm_table.colnames): fibermap['FIBER_RA'] = pm_table['FIBER_RA'] fibermap['FIBER_DEC'] = pm_table['FIBER_DEC'] else: log.error('No FIBER_RA or FIBER_DEC from platemaker') fibermap['FIBER_RA'] = np.zeros(len(pm_table)) fibermap['FIBER_DEC'] = np.zeros(len(pm_table)) #- Bit definitions at https://desi.lbl.gov/trac/wiki/FPS/PositionerFlags #- FLAGS_EXP bit 2 is for positioners (not FIF, GIF, ...) 
#- These should match what is in fiberassign, except broken fibers expflags = pm_table[f'FLAGS_EXP_{numiter-1}'] goodmatch = ((expflags & 4) == 4) if np.any(~goodmatch): badloc = list(pm_table['LOCATION'][~goodmatch]) log.warning(f'Flagging {len(badloc)} locations without POS_POS bit set: {badloc}') #- Keep only matched positioners (FLAGS_CNT_n bit 0) cntflags = pm_table[f'FLAGS_CNT_{numiter-1}'] spotmatched = ((cntflags & 1) == 1) num_nomatch = np.sum(goodmatch & ~spotmatched) if num_nomatch > 0: badloc = list(pm_table['LOCATION'][goodmatch & ~spotmatched]) log.error(f'Flagging {num_nomatch} unmatched fiber locations: {badloc}') goodmatch &= spotmatched #- pass forward dummy column for joining with fiberassign fibermap['_GOODMATCH'] = goodmatch #- WARNING: this join can re-order the table fibermap = join(Table(fa.data), fibermap, join_type='left') #- poor and bad positioning dr = np.sqrt(fibermap['DELTA_X']**2 + fibermap['DELTA_Y']**2) * 1000 poorpos = ((poor_offset_um < dr) & (dr <= bad_offset_um)) badpos = (dr > bad_offset_um) | np.isnan(dr) numpoor = np.count_nonzero(poorpos) numbad = np.count_nonzero(badpos) if numpoor > 0: log.warning(f'Flagging {numpoor} POOR positions with {poor_offset_um} < offset <= {bad_offset_um} microns') if numbad > 0: log.warning(f'Flagging {numbad} BAD positions with offset > {bad_offset_um} microns') #- Set fiber status bits missing = np.in1d(fibermap['LOCATION'], pm_table['LOCATION'], invert=True) missing |= ~fibermap['_GOODMATCH'] fibermap['FIBERSTATUS'][missing] |= fibermask.MISSINGPOSITION fibermap['FIBERSTATUS'][poorpos] |= fibermask.POORPOSITION fibermap['FIBERSTATUS'][badpos] |= fibermask.BADPOSITION fibermap.remove_column('_GOODMATCH') else: #- No coordinates file or no positioning iterations; #- just use fiberassign + dummy columns log.error('Unable to find useful coordinates file; proceeding with fiberassign + dummy columns') fibermap = Table(fa.data) fibermap['NUM_ITER'] = 0 fibermap['FIBER_X'] = 0.0 fibermap['FIBER_Y'] = 0.0 fibermap['DELTA_X'] = 0.0 fibermap['DELTA_Y'] = 0.0 fibermap['FIBER_RA'] = 0.0 fibermap['FIBER_DEC'] = 0.0 # Update data types to be consistent with updated value if coord file was used. for val in ['FIBER_X','FIBER_Y','DELTA_X','DELTA_Y']: old_col = fibermap[val] fibermap.replace_column(val,Table.Column(name=val,data=old_col.data,dtype='>f8')) for val in ['LOCATION','NUM_ITER']: old_col = fibermap[val] fibermap.replace_column(val,Table.Column(name=val,data=old_col.data,dtype=np.int64)) # - #- Update SKY and STD target bits to be in both CMX_TARGET and DESI_TARGET #- i.e. if they are set in one, also set in the other. Ditto for SV* for targetcol in ['CMX_TARGET', 'SV0_TARGET', 'SV1_TARGET', 'SV2_TARGET']: if targetcol in fibermap.colnames: for mask in [ desi_mask.SKY, desi_mask.STD_FAINT, desi_mask.STD_BRIGHT]: ii = (fibermap[targetcol] & mask) != 0 iidesi = (fibermap['DESI_TARGET'] & mask) != 0 fibermap[targetcol][iidesi] |= mask fibermap['DESI_TARGET'][ii] |= mask # + #- Add header information from rawfile log.debug(f'Adding header keywords from {rawfile}') # skipkeys = ['EXPTIME',] # addkeys(fibermap.meta, rawheader, skipkeys=skipkeys) fibermap_header.extend(rawheader, strip=True) fibermap_header.remove('EXPTIME') fibermap['EXPTIME'] = rawheader['EXPTIME'] #- Add header info from guide file #- sometimes full header is in HDU 0, other times HDU 1... 
if guidefile is not None: log.debug(f'Adding header keywords from {guidefile}') guideheader = fits.getheader(guidefile, 0) if 'TILEID' not in guideheader: guideheader = fits.getheader(guidefile, 1) if fibermap_header['TILEID'] != guideheader['TILEID']: raise RuntimeError('fiberassign tile {} != guider tile {}'.format( fibermap_header['TILEID'], guideheader['TILEID'])) # addkeys(fibermap.meta, guideheader, skipkeys=skipkeys) fibermap_header.extend(guideheader, strip=True) fibermap_header.remove('EXPTIME') fibermap_header['EXTNAME'] = 'FIBERMAP' for key in ('ZIMAGE', 'ZSIMPLE', 'ZBITPIX', 'ZNAXIS', 'ZNAXIS1', 'ZTILE1', 'ZCMPTYPE', 'ZNAME1', 'ZVAL1', 'ZNAME2', 'ZVAL2'): fibermap_header.remove(key) # - fibermap_header # + #- Early data raw headers had bad >8 char 'FIBERASSIGN' keyword if 'FIBERASSIGN' in fibermap_header: log.warning('Renaming header keyword FIBERASSIGN -> FIBASSGN') fibermap_header.rename_keyword('FIBERASSIGN', 'FIBASSGN') # fibermap.meta['FIBASSGN'] = fibermap.meta['FIBERASSIGN'] # del fibermap.meta['FIBERASSIGN'] #- similarly for early splits in raw data file if 'USESPLITS' in fibermap_header: log.warning('Renaming header keyword USESPLITS -> USESPLIT') fibermap_header.rename_keyword('USESPLITS', 'USESPLIT') # fibermap.meta['USESPLIT'] = fibermap.meta['USESPLITS'] # del fibermap.meta['USESPLITS'] #- Record input guide and coordinates files if guidefile is not None: fibermap_header['GUIDEFIL'] = os.path.basename(guidefile) else: fibermap_header['GUIDEFIL'] = 'MISSING' if coordfile is not None: fibermap_header['COORDFIL'] = os.path.basename(coordfile) else: fibermap_header['COORDFIL'] = 'MISSING' # - badamps = None #- mask the fibers defined by badamps if badamps is not None: maskbits = {'b':fibermask.BADAMPB, 'r':fibermask.BADAMPR, 'z':fibermask.BADAMPZ} ampoffsets = {'A': 0, 'B':250, 'C':0, 'D':250} for (camera, petal, amplifier) in parse_badamps(badamps): maskbit = maskbits[camera] ampoffset = ampoffsets[amplifier] fibermin = int(petal)*500 + ampoffset fibermax = fibermin + 250 ampfibs = np.arange(fibermin,fibermax) truefmax = fibermax - 1 log.info(f'Masking fibers from {fibermin} to {truefmax} for camera {camera} because of badamp entry '+\ f'{camera}{petal}{amplifier}') ampfiblocs = np.in1d(fibermap['FIBER'], ampfibs) fibermap['FIBERSTATUS'][ampfiblocs] |= maskbit badfibers_filename = None #- mask the fibers defined by bad fibers if badfibers_filename is not None: table=Table.read(badfibers_filename) # list of bad fibers that are in the fibermap badfibers=np.intersect1d(np.unique(table["FIBER"]),fibermap["FIBER"]) for i,fiber in enumerate(badfibers) : # for each of the bad fiber, add the bad bits to the fiber status badfibermask = np.bitwise_or.reduce(table["FIBERSTATUS"][table["FIBER"]==fiber]) fibermap['FIBERSTATUS'][fibermap["FIBER"]==fiber] |= badfibermask # + #- NaN are a pain; reset to dummy values for col in [ 'FIBER_X', 'FIBER_Y', 'DELTA_X', 'DELTA_Y', 'FIBER_RA', 'FIBER_DEC', 'GAIA_PHOT_G_MEAN_MAG', 'GAIA_PHOT_BP_MEAN_MAG', 'GAIA_PHOT_RP_MEAN_MAG', ]: ii = np.isnan(fibermap[col]) if np.any(ii): n = np.sum(ii) log.warning(f'Setting {n} {col} NaN to 0.0') fibermap[col][ii] = 0.0 # # Some SV1-era files had these extraneous columns, make sure they are not propagated. # for col in ('NUMTARGET', 'BLOBDIST', 'FIBERFLUX_IVAR_G', 'FIBERFLUX_IVAR_R', 'FIBERFLUX_IVAR_Z', 'HPXPIXEL'): if col in fibermap.colnames: log.debug("Removing column '%s' from fibermap table.", col) fibermap.remove_column(col) # # Some SV1-era files have RELEASE as int32. Should be int16. 
# if fibermap['RELEASE'].dtype == np.dtype('>i2'): log.debug("RELEASE has correct type.") else: log.warning("Setting RELEASE to int16.") fibermap['RELEASE'] = fibermap['RELEASE'].astype(np.int16) #- Some code incorrectly relies upon the fibermap being sorted by #- fiber number, so accomodate that before returning the table fibermap.sort('FIBER') # - fibermap fibermap_header fibermap_hdu = fits.BinTableHDU(fibermap) fibermap_hdu.header.extend(fibermap_header, unique=True) fibermap_hdulist = fits.HDUList([fits.PrimaryHDU(), fibermap_hdu]) fibermap_hdulist.info() fibermap_hdulist[0].header foo = np.array([1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000], dtype=np.int32) foo.astype(np.int16) # ## Look at the list of exposures. exposures = os.path.join(os.environ['DESI_SPECTRO_REDUX'], os.environ['SPECPROD'], "exposures-{SPECPROD}.fits".format(**os.environ)) with fits.open(exposures) as hdulist: exposures = hdulist['EXPOSURES'].data.copy() program = faflavor2program(exposures['FAFLAVOR']) sv1bright = (program == 'bright') & (exposures['SURVEY'] == 'sv1') & (exposures['NIGHT'] == 20210224) exposures['NIGHT'][sv1bright], exposures['EXPID'][sv1bright] exposures_row = np.nonzero((exposures['NIGHT'] == 20210224) & (exposures['EXPID'] == 77952))[0][0] survey_program = "{0}-{1}".format(exposures['SURVEY'][exposures_row], program[exposures_row]) survey_program sv1dark = (program == 'dark') & (exposures['SURVEY'] == 'sv1') & (exposures['NIGHT'] == 20210220) exposures['NIGHT'][sv1dark], exposures['EXPID'][sv1dark] sv2dark = (program == 'dark') & (exposures['SURVEY'] == 'sv2') & (exposures['NIGHT'] == 20210403) exposures['NIGHT'][sv2dark], exposures['EXPID'][sv2dark] sv2bright = (program == 'bright') & (exposures['SURVEY'] == 'sv2') & (exposures['NIGHT'] == 20210324) exposures['NIGHT'][sv2bright], exposures['EXPID'][sv2bright] sv3dark = (program == 'dark') & (exposures['SURVEY'] == 'sv3') & (exposures['NIGHT'] == 20210420) exposures['NIGHT'][sv3dark], exposures['EXPID'][sv3dark] sv3bright = (program == 'bright') & (exposures['SURVEY'] == 'sv3') & (exposures['NIGHT'] == 20210420) exposures['NIGHT'][sv3bright], exposures['EXPID'][sv3bright] maindark = (program == 'dark') & (exposures['SURVEY'] == 'main') & (exposures['NIGHT'] == 20210531) exposures['NIGHT'][maindark], exposures['EXPID'][maindark] mainbright = (program == 'bright') & (exposures['SURVEY'] == 'main') & (exposures['NIGHT'] == 20210531) exposures['NIGHT'][mainbright], exposures['EXPID'][mainbright] # ## Dealing with LONGSTRN # Add this header # ``` # LONGSTRN= 'OGIP 1.0' / The OGIP Long String Convention may be used. # ``` # [Reference](https://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/f_user/node25.html). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1. Named Entity Recognization # + import nltk #nltk.download('maxent_ne_chunker') #nltk.download('words') # + paragraph = """The Taj Mahal was built by Emperor """ # POS Tagging words = nltk.word_tokenize(paragraph) # - tagged_words = nltk.pos_tag(words) # Named entity recognition namedEnt = nltk.ne_chunk(tagged_words) namedEnt.draw() orga = """ ORGANIZATION Georgia-Pacific Corp., WHO PERSON , President Obama LOCATION Murray River, Mount Everest DATE June, 2008-06-29 TIME two fifty a m, 1:30 p.m. 
MONEY 175 million Canadian Dollars, GBP 10.40 PERCENT twenty pct, 18.75 % FACILITY Washington Monument, Stonehenge GPE South East Asia, """ words = nltk.word_tokenize(orga) tagged_words_new = nltk.pos_tag(words) nameentity = nltk.ne_chunk(tagged_words_new) nameentity.draw() # # 2. Text Visuatization # Bag of Words # + import nltk import re paragraph = """Thank you all so very much. Thank you to the Academy. Thank you to all of you in this room. I have to congratulate the other incredible nominees this year. The Revenant was the product of the tireless efforts of an unbelievable cast and crew. First off, to my brother in this endeavor, Mr. Tom Hardy. Tom, your talent on screen can only be surpassed by your friendship off screen … thank you for creating a t ranscendent cinematic experience. Thank you to everybody at Fox and New Regency … my entire team. I have to thank everyone from the very onset of my career … To my parents; none of this would be possible without you. And to my friends, I love you dearly; you know who you are. And lastly, I just want to say this: Making The Revenant was about man's relationship to the natural world. A world that we collectively felt in 2015 as the hottest year in recorded history. Our production needed to move to the southern tip of this planet just to be able to find snow. Climate change is real, it is happening right now. It is the most urgent threat facing our entire species, and we need to work collectively together and stop procrastinating. We need to support leaders around the world who do not speak for the big polluters, but who speak for all of humanity, for the indigenous people of the world, for the billions and billions of underprivileged people out there who would be most affected by this. For our children’s children, and for those people out there whose voices have been drowned out by the politics of greed. I thank you all for this amazing award tonight. Let us not take this planet for granted. I do not take tonight for granted. Thank you so very much.""" # - # we need BOW model becaues any ml or dl algo only understands numbers or float so the BOWM is a necessaty data = nltk.sent_tokenize(paragraph) words = nltk.word_tokenize(paragraph) len(words) for i in range(len(data)): data[i] = data[i].lower() #lowercasing the dataset data[i] = re.sub(r"\W",' ',data[i]) # puncuation chars data[i] = re.sub(r"\s+",' ',data[i]) data # + #Creating The histogram # - #per line tokenizing word2count = {} for datas in data: words = nltk.word_tokenize(datas) for word in words: if word not in word2count.keys(): word2count[word] = 1 #jodi na thake tale atleast #it has apppeared once else: word2count[word] += 1 #jodi thake tale count barachi word2count len(word2count) import heapq # used to find the N most frequent words in the dict # + freq_words = heapq.nlargest(100, word2count, key=word2count.get) # - freq_words # + #Creating the BOG model X = [] # it will contain the BOW model # in a BOw each words is a vector 1 - apppears & 0- not appears for datas in data: vector = [] for word in freq_words: if word in nltk.word_tokenize(datas): vector.append(1) else: vector.append(0) X.append(vector) # - len(data) import numpy as np np.array(X) np.array(X).shape X = np.asarray(X) X # # Text modelling using TF - IDF Model import nltk import re import heapq import numpy as np paragraph = """Thank you all so very much. Thank you to the Academy. Thank you to all of you in this room. I have to congratulate the other incredible nominees this year. 
# # Text modelling using the TF-IDF Model import nltk import re import heapq import numpy as np paragraph = """Thank you all so very much. Thank you to the Academy. Thank you to all of you in this room. I have to congratulate the other incredible nominees this year. The Revenant was the product of the tireless efforts of an unbelievable cast and crew. First off, to my brother in this endeavor, Mr. Tom Hardy. Tom, your talent on screen can only be surpassed by your friendship off screen … thank you for creating a transcendent cinematic experience. Thank you to everybody at Fox and New Regency … my entire team. I have to thank everyone from the very onset of my career … To my parents; none of this would be possible without you. And to my friends, I love you dearly; you know who you are. And lastly, I just want to say this: Making The Revenant was about man's relationship to the natural world. A world that we collectively felt in 2015 as the hottest year in recorded history. Our production needed to move to the southern tip of this planet just to be able to find snow. Climate change is real, it is happening right now. It is the most urgent threat facing our entire species, and we need to work collectively together and stop procrastinating. We need to support leaders around the world who do not speak for the big polluters, but who speak for all of humanity, for the indigenous people of the world, for the billions and billions of underprivileged people out there who would be most affected by this. For our children’s children, and for those people out there whose voices have been drowned out by the politics of greed. I thank you all for this amazing award tonight. Let us not take this planet for granted. I do not take tonight for granted. Thank you so very much."""
# Tokenize sentences dataset = nltk.sent_tokenize(paragraph) for i in range(len(dataset)): dataset[i] = dataset[i].lower() dataset[i] = re.sub(r'\W',' ',dataset[i]) dataset[i] = re.sub(r'\s+',' ',dataset[i]) # per-sentence tokenizing # Creating word histogram word2count = {} for data in dataset: words = nltk.word_tokenize(data) for word in words: if word not in word2count.keys(): word2count[word] = 1 # if the word is not in the dict yet, it has appeared once else: word2count[word] += 1 # if it is already there, increment its count # Selecting best 100 features freq_words = heapq.nlargest(100, word2count, key=word2count.get)
# + # Creating the BoW model X = [] # it will contain the BoW model # in a BoW model each sentence is a binary vector: 1 = the word appears, 0 = it does not for data in dataset: vector = [] for word in freq_words: if word in nltk.word_tokenize(data): vector.append(1) else: vector.append(0) X.append(vector) # - X = np.asarray(X) X
# + # IDF Dictionary word_idfs = {} for word in freq_words: doc_count = 0 for data in dataset: if word in nltk.word_tokenize(data): doc_count += 1 word_idfs[word] = np.log(len(dataset)/(1+doc_count)) # the +1 in the denominator avoids division by zero (a smoothing term) # - len(word_idfs)
# TF Matrix tf_matrix = {} for word in freq_words: doc_tf = [] for data in dataset: frequency = 0 for w in nltk.word_tokenize(data): if word == w: frequency += 1 tf_word = frequency/len(nltk.word_tokenize(data)) doc_tf.append(tf_word) tf_matrix[word] = doc_tf
# TF-IDF Calculation tfidf_matrix = [] for word in tf_matrix.keys(): tfidf = [] for value in tf_matrix[word]: score = value * word_idfs[word] tfidf.append(score) tfidf_matrix.append(tfidf) X = np.asarray(tfidf_matrix).astype('float64') X X.shape X = X.T X.shape
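# For comparison (again my addition, not from the original notebook), scikit-learn's TfidfVectorizer builds a similar matrix in one call; note that its default IDF, log((1+N)/(1+df)) + 1 with smoothing, differs from the log(N/(1+df)) used above, so the values will not match exactly.
# +
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vec = TfidfVectorizer(max_features=100)
X_tfidf = tfidf_vec.fit_transform(dataset).toarray()  # rows = sentences, columns = vocabulary terms
X_tfidf.shape
# -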
id="0PTfnVEFAUdK" fcRecInfinita() # + colab={"base_uri": "https://localhost:8080/"} id="INvW9buRAXOv" outputId="b84cc6ba-3027-44ba-8634-718222b45d94" def fnRec( x ): if x== 0: print('Stop') else: print( x ) fnRec( x-1 ) def main(): print('inicio del programa') fnRec(5) print('fin del programa') main() # + colab={"base_uri": "https://localhost:8080/"} id="3CRig-KPBztZ" outputId="6fb845a9-c9a6-4e8a-b8c9-b5eeef4ac04e" def printRev( x ): if x > 0: print( x ) printRev( x-1 ) print( 3 ) # + colab={"base_uri": "https://localhost:8080/"} id="iHY8t8CMIy1X" outputId="720e3a6b-3a81-487a-fb04-a861237ddb33" def fibonacci( n ): if n == 1 or n == 0: return n else: return(fibonacci(n-1) +fibonacci(n-2) ) print(fibonacci(8)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sitzung 6 # # Diese Skripte sind ausschließlich als Zusatz-Material gedacht. Speziell für diejenigen unter Euch, die einen Einblick in das Programmieren gewinnen wollen. Wenn Du es also leid bist repetitive Tätigkeiten auszuführen und das lieber einer Maschine überlassen willst, bist Du hier genau richtig. # # Die Codes sind nicht für die Klausur relevant, genau genommen haben sie mit dem Lehrstuhl für Statistik __rein gar nichts__ zu tun. # # --- import numpy as np from scipy.special import binom from matplotlib import pyplot as plt from tqdm import trange # ## Verteilungen diskreter Wartezeiten # # Ist es möglich, die Argumentationsweise der Exponentialverteilung auf die Binomialverteilung und die Hypergeometrische Verteilung zu erweitern? # # Sei $$T: \text{Wartezeit auf den ersten Erfolg}$$ # # Wobei $$X: \text{Anzahl der Erfolge}$$ # # Zur Erinnerung: warten heißt, dass bisher nichts passiert ist $X=0$. 
# # For $X \sim Pois(\lambda)$ we have: # # \begin{equation} # P(X=0) = \frac{\lambda^0}{0!}\exp(-\lambda) = \exp(-\lambda) # \end{equation} # # For independent and identically distributed time units: # # \begin{align} # P(X&=0 \text{ in 2 time units}) \\ # &= P(\{X=0\text{ in the first time unit}\} , \{ X=0\text{ in the second time unit} \}) \\ # &= P(\{X=0\text{ in the first time unit}\}) \cdot P(\{ X=0\text{ in the second time unit} \}) \\ # &= \exp(-\lambda) \cdot \exp(-\lambda) = \exp(-2\lambda) # \end{align} # # And in general: # $$P(X=0 \text{ in } t \text{ time units}) = \exp(-\lambda t) = P(T \geq t)$$ # # Thus we can say: # \begin{equation} # P(T \leq t) = 1 - \exp(-\lambda t) # \end{equation} # # ---
# ## Extension to the binomial distribution # # For $X \sim Bin(n, p)$, with $n \in \mathbb{N}, p \in [0, 1]$, the same argument still applies: # # $$P_n(X=0)=\underbrace{{n \choose 0}}_{=1} \overbrace{p^0}^{=1} (1-p)^{n-0} = (1-p)^n$$ # # and # # $$P(T \leq n) = 1 - P_n(X=0) = 1 - (1-p)^n$$ # # ### Check: # # Waiting time until the first six in Mensch-ärgere-dich-nicht (a Ludo-style board game)
trials = 1000000 n = np.arange(0, 100) theoretical = 1 - (1-1/6)**n samples = np.ones(trials) for i in trange(trials): while np.random.randint(low=1, high=7) != 6: samples[i] += 1 values, counts = np.unique(samples, return_counts=True) empirical = counts/trials plt.figure(figsize=(10, 5)) plt.step(n, theoretical, where='post', label='$P_{theoretical}(X < x)$') plt.step(values, empirical.cumsum(), where='post', label='$P_{empirical}(X < x)$') plt.legend() plt.title("Comparison/check of the theoretical distribution function") plt.xlim([0, 40])
# --- # ## Extension to the hypergeometric distribution # # For $X \sim Hyper(N, M, n)$ # # \begin{align} # P_n(X=0) &= \overbrace{\left(\frac{N-M}{N}\right) \cdot \left(\frac{N-M-1}{N-1}\right) \cdot \dots \cdot \left(\frac{N-M-(n-1)}{N-(n-1)}\right)}^{\textit{n factors}} \\ # &= \Large{\frac{\frac{(N-M)!}{(N-M-n)!}}{\frac{N!}{(N-n)!}}} # = \Large{\frac{\frac{(N-M)! \color{red}{n!}}{(N-M-n)!\color{red}{n!}}}{\frac{N!\color{red}{n!}}{(N-n)!\color{red}{n!}}}}\\ # &= \Large{\frac{\color{red}{n!}{N-M \choose n}}{\color{red}{n!}{N \choose n}} = \frac{{N-M \choose n}}{{N \choose n}}} # \end{align} # # and # # $$P(T \leq n) = 1 - P_n(X=0) = 1 - \frac{{N-M \choose n}}{{N \choose n}}$$ # # ### Check: # # How likely is it to survive a round of Russian roulette?
# + N = 6 M = 1 n = np.arange(0, 6) theoretical = 1 - binom(N-M, n)/binom(N, n) samples = np.zeros(trials) for i in trange(trials): x = [1, 2, 3, 4, 5, 6] np.random.shuffle(x) didi_mao = None while didi_mao != 6: didi_mao = x.pop() samples[i] += 1 values, counts = np.unique(samples, return_counts=True) empirical = counts/trials # - plt.figure(figsize=(10, 5)) plt.step(n, theoretical, where='post', label='$P_{theoretical}(X < x)$') plt.step(values, empirical.cumsum(), where='post', label='$P_{empirical}(X < x)$') plt.legend() plt.title("Comparison/check of the theoretical distribution function") plt.xlim([0, 6])
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Lx37tqJNNoIj" # For only one block, by definition, we have: $T = E(K, X)$, where $K$ is the key and $X$ is the message.
    # For a two-block message, the first block is $X$, and the second one, $X_{2}$, is $X \oplus T$. The MAC would be:
    # $E(K, X_2)=E(K,T\oplus(X\oplus T))=E(K, X)=T$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Importation de librairies et lectures de fichiers import pandas as pd import numpy as np import re import matplotlib.pyplot as plt import seaborn as sns # lecture fichiers web=pd.read_excel('C:\\Users\\agued\\OneDrive\\Escritorio\\web.xlsx') liaison=pd.read_excel ('C:\\Users\\agued\\OneDrive\\Escritorio\\liaison.xlsx') erp=pd.read_excel('C:\\Users\\agued\\OneDrive\\Escritorio\\erp.xlsx') # # Preparation de données # ## web # + tags=[] web.head() # - web.shape # + tags=[] web.isnull().sum() # + tags=[] web.nunique().tolist # + active="" # Suppresion de colonnes vides, et des colonnes sans aucun manquant et un seul valeur unique # - web.drop(['tax_class', 'post_content','post_password','post_content_filtered','virtual','downloadable','rating_count'], axis = 1, inplace = True) # ### *on a vu que la colonne 'guid' peut servir de clé aussi avec 1430 valeurs uniques, donc je constate combien de lignes il y a avec un valeur pour 'guid' avec sku null # + tags=[] web[(web['guid'].notnull())& (web['sku'].isnull())] # - # il y a deux lignes mais sans aucune ventes (total_sales), # la colonne Sku sera notre cle pour les jointures, donc je supprime les valeurs nulls, et verifie les doublons # + tags=[] web=web.dropna(subset=['sku']) # - web.duplicated('sku').sum() # Analyse de differences entre les doublons. # + tags=[] values=[15296,19814] web[web.sku.isin(values)] # - # La difference est que la premiere ligne correspond au produit(post_type=product) et la deuxieme corresponde à l'image (post_type=attachement). On supprime les lignes que correspondent à l'image. 
Et le deux colonne liées à elle(post_type et post_mime_type) web.drop(web[web['post_type']=='attachment'].index, inplace=True) web.nunique().tolist # On supprime aussi toutes les colonnes qui nous reste avec un valeur unique web.drop(['average_rating','tax_status','post_status','comment_status','ping_status','post_parent','menu_order','post_type','post_mime_type','comment_count'],axis=1, inplace=True) web.head() # ## erp erp.head() erp.shape erp.isnull().sum() erp.nunique().tolist erp.dtypes round(erp.describe(),2) # ## liaison liaison.head() # + tags=[] liaison.rename(columns={'id_web': 'sku'}, inplace=True) # - liaison.shape liaison.isnull().sum() # # Merge lia_web=pd.merge(liaison,web, on='sku', how='inner') lia_web.isnull().sum() lia_web.shape final=pd.merge(lia_web,erp, on='product_id', how='inner') # + tags=[] final.isnull().sum() # - final.shape final.head() final.nunique().tolist final[(final['stock_quantity']==0 )& (final['stock_status']=='instock')] final['CA']=(final['total_sales']*final['price']) final.sort_values(by = 'CA', ascending = False).head() final['CA'].sum() final[final['total_sales']>5] round(final['price'].describe(),2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import numpy as np import nibabel as nb import ndmg as nd import os import re import sys import numpy as np import nibabel as nb import ndmg.utils as mgu from argparse import ArgumentParser from scipy import ndimage from matplotlib.colors import LinearSegmentedColormap import matplotlib as mpl from nilearn.plotting.edge_detect import _edge_map as edge_map import matplotlib.pyplot as plt from scipy.sparse import lil_matrix import pandas as pd from plotly.offline import plot, iplot, init_notebook_mode import plotly.graph_objs as go import math import matplotlib.image as mpimg init_notebook_mode(connected=True) def plot_mtx(A, title=""): """ A basic function to plot an adjacency matrix. """ Adf = pd.DataFrame(A).stack().rename_axis(['y', 'x']).reset_index(name="Weight") trace = go.Heatmap(x=Adf.x, y=Adf.y, z=Adf.Weight) data = [trace] layout=go.Layout(width=550, height=550, title=title, xaxis=dict(title="Node Out"), yaxis=dict(title="Node In", autorange="reversed")) fig = go.Figure(data=data, layout=layout) iplot(fig) def opaque_colorscale(basemap, reference, vmin=None, vmax=None, alpha=1): """ A function to return a colorscale, with opacities dependent on reference intensities. **Positional Arguments:** - basemap: - the colormap to use for this colorscale. - reference: - the reference matrix. 
""" reference = reference if vmin is not None: reference[reference > vmax] = vmax if vmax is not None: reference[reference < vmin] = vmin cmap = basemap(reference) maxval = np.nanmax(reference) # all values beteween 0 opacity and 1 opaque_scale = alpha*reference/float(maxval) # remaps intensities cmap[:, :, 3] = opaque_scale return cmap def plot_brain(brain, minthr=2, maxthr=95, edge=False): brain = mgu.get_braindata(brain) cmap = LinearSegmentedColormap.from_list('mycmap2', ['black', 'green']) plt.rcParams.update({'axes.labelsize': 'x-large', 'axes.titlesize': 'x-large'}) fbr = plt.figure() if brain.shape == (182, 218, 182): x = [78, 90, 100] y = [82, 107, 142] z = [88, 103, 107] else: shap = brain.shape x = [int(shap[0]*0.35), int(shap[0]*0.51), int(shap[0]*0.65)] y = [int(shap[1]*0.35), int(shap[1]*0.51), int(shap[1]*0.65)] z = [int(shap[2]*0.35), int(shap[2]*0.51), int(shap[2]*0.65)] coords = (x, y, z) labs = ['Sagittal Slice (YZ fixed)', 'Coronal Slice (XZ fixed)', 'Axial Slice (XY fixed)'] var = ['X', 'Y', 'Z'] # create subplot for first slice # and customize all labels idx = 0 min_val, max_val = get_min_max(brain, minthr, maxthr) for i, coord in enumerate(coords): for pos in coord: idx += 1 ax = fbr.add_subplot(3, 3, idx) ax.set_axis_bgcolor('black') ax.set_title(var[i] + " = " + str(pos)) if i == 0: image = ndimage.rotate(brain[pos, :, :], 90) elif i == 1: image = ndimage.rotate(brain[:, pos, :], 90) else: image = brain[:, :, pos] if idx % 3 == 1: ax.set_ylabel(labs[i]) ax.yaxis.set_ticks([0, image.shape[0]/2, image.shape[0] - 1]) ax.xaxis.set_ticks([0, image.shape[1]/2, image.shape[1] - 1]) if edge: image = edge_map(image).data ax.imshow(image, interpolation='none', cmap=cmap, alpha=1, vmin=min_val, vmax=max_val) fbr.set_size_inches(12.5, 10.5, forward=True) fbr.tight_layout() return fbr def plot_overlays(atlas, b0, cmaps=None, minthr=2, maxthr=95, edge=False): plt.rcParams.update({'axes.labelsize': 'x-large', 'axes.titlesize': 'x-large'}) foverlay = plt.figure() atlas = mgu.get_braindata(atlas) b0 = mgu.get_braindata(b0) if atlas.shape != b0.shape: raise ValueError('Brains are not the same shape.') if cmaps is None: cmap1 = LinearSegmentedColormap.from_list('mycmap1', ['black', 'magenta']) cmap2 = LinearSegmentedColormap.from_list('mycmap2', ['black', 'green']) cmaps = [cmap1, cmap2] if b0.shape == (182, 218, 182): x = [78, 90, 100] y = [82, 107, 142] z = [88, 103, 107] else: shap = b0.shape x = [int(shap[0]*0.35), int(shap[0]*0.51), int(shap[0]*0.65)] y = [int(shap[1]*0.35), int(shap[1]*0.51), int(shap[1]*0.65)] z = [int(shap[2]*0.35), int(shap[2]*0.51), int(shap[2]*0.65)] coords = (x, y, z) labs = ['Sagittal Slice (YZ fixed)', 'Coronal Slice (XZ fixed)', 'Axial Slice (XY fixed)'] var = ['X', 'Y', 'Z'] # create subplot for first slice # and customize all labels idx = 0 if edge: min_val = 0 max_val = 1 else: min_val, max_val = get_min_max(b0, minthr, maxthr) for i, coord in enumerate(coords): for pos in coord: idx += 1 ax = foverlay.add_subplot(3, 3, idx) ax.set_title(var[i] + " = " + str(pos)) if i == 0: image = ndimage.rotate(b0[pos, :, :], 90) atl = ndimage.rotate(atlas[pos, :, :], 90) elif i == 1: image = ndimage.rotate(b0[:, pos, :], 90) atl = ndimage.rotate(atlas[:, pos, :], 90) else: image = b0[:, :, pos] atl = atlas[:, :, pos] if idx % 3 == 1: ax.set_ylabel(labs[i]) ax.yaxis.set_ticks([0, image.shape[0]/2, image.shape[0] - 1]) ax.xaxis.set_ticks([0, image.shape[1]/2, image.shape[1] - 1]) if edge: image = edge_map(image).data image[image > 0] = max_val image[image == 0] = 
min_val ax.imshow(atl, interpolation='none', cmap=cmaps[0], alpha=.9) ax.imshow(opaque_colorscale(cmaps[1], image, alpha=.9, vmin=min_val, vmax=max_val)) foverlay.set_size_inches(12.5, 10.5, forward=True) foverlay.tight_layout() return foverlay def get_min_max(data, minthr=2, maxthr=95): ''' data: regmri data to threshold. ''' min_val = np.percentile(data, minthr) max_val = np.percentile(data, maxthr) return (min_val.astype(float), max_val.astype(float)) def side_by_side(f1, f2, t1="", t2=""): fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 8)) axes[0].imshow(f1) axes[1].imshow(f2) axes[0].set_title(t1) axes[1].set_title(t2) return fig # - # # Master-M3R Comparison # # In this notebook, we will compare the individual step-by-step outputs of `master` and `m3r-release` to figure out where they diverge. For steps with a difference, we will additionally rerun each step with the previous matching inputs to figure out exactly where the two branches diverge. # # Preprocessing # # In this step, we will validate that the gradient tables are the same. # + m3r_grad_f = '/inputs/m3r-out/tmp/sub-0025427/ses-1/dwi/preproc/sub-0025427_ses-1_dwi_1.bvec' m3r_t1_f = '/inputs/m3r-out/tmp/sub-0025427/ses-1/dwi/preproc/sub-0025427_ses-1_dwi_t1.nii.gz' master_grad_f = '/inputs/ndmg-out/tmp/sub-0025427/ses-1/dwi/preproc/sub-0025427_ses-1_dwi_1.bvec' master_t1_f = '/inputs/ndmg-out/tmp/sub-0025427/ses-1/dwi/preproc/sub-0025427_ses-1_dwi_t1.nii.gz' # load the gradient tables m3r_grad = np.loadtxt(m3r_grad_f, delimiter=' ') master_grad = np.loadtxt(master_grad_f, delimiter=' ') np.testing.assert_array_equal(m3r_grad, master_grad) # load the t1ws m3r_t1dat = nb.load(m3r_t1_f).get_data() master_t1dat = nb.load(master_t1_f).get_data() np.testing.assert_array_equal(m3r_t1dat, master_t1dat) # - # # Registration # # In this step, we will validate that the registrations produce identical outputs, up to a rotation. Recall that the master branch uses the MNI brain not in MNI space; rather, the affine is flipped. To account for this, we will compare the outputs with and without an appropriate affine flip, to see whether FLIRT is robust to arbitrary rotations. If it is non-robust to rotations, we will use the master branch code coupled with the properly-oriented brains to compute the tractography and graphs to then further check these steps for equality between the two branches. 
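# Before comparing the registered volumes, a minimal sketch (my addition, not code from either branch) of how "identical up to an axis flip" could be tested: reorient both images to the closest canonical (RAS) orientation, which undoes pure flips/permutations encoded in the affine, and only then compare voxel data. The helper name is made up for illustration.
# +
def equal_up_to_orientation(img_a, img_b):
    """Return True if two nibabel images hold the same voxel data once both
    are mapped (by axis permutations/flips only) to RAS orientation."""
    can_a = nb.as_closest_canonical(img_a)
    can_b = nb.as_closest_canonical(img_b)
    print("axis codes: %s vs %s" % (nb.aff2axcodes(img_a.affine), nb.aff2axcodes(img_b.affine)))
    return np.array_equal(can_a.get_data(), can_b.get_data())
# -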
# # ## Unrotated # + m3r_dwi_reg = '/inputs/m3r-out/dwi/registered/sub-0025427_ses-1_dwi_space-MNI152NLin6_res-1x1x1_registered.nii.gz' master_dwi_reg = '/inputs/ndmg-out/reg/dwi/sub-0025427_ses-1_dwi_aligned.nii.gz' m3r_dwidat = nb.load(m3r_dwi_reg).get_data() master_dwidat = nb.load(master_dwi_reg).get_data() np.testing.assert_array_equal(m3r_dwidat, master_dwidat) # - # As we can see, this is because the affines are different: # + template = '/ndmg_atlases/atlas/MNI152NLin6_res-1x1x1_T1w.nii.gz' print("MNI152 Affine") print(nb.load(template).affine) print("M3r Affine") print(nb.load(m3r_dwi_reg).affine) print("Master Affine") print(nb.load(master_dwi_reg).affine) # + ref = nb.load(template).get_data() m3r2tempim = 'm3r2temp.png' master2tempim = 'master2temp.png' m3r2temp = plot_overlays(b0=ref, atlas=m3r_dwidat[:,:,:,0], edge=True) plt.savefig(m3r2tempim) plt.close() master2temp = plot_overlays(b0=ref, atlas=master_dwidat[:,:,:,0], edge=True) plt.savefig(master2tempim) plt.close() masterim = mpimg.imread(master2tempim) m3rim = mpimg.imread(m3r2tempim) # - # Below, we look at the diffusion brain's alignment with the MNI152 template at 1mm resolution and standard orientation: # + # %matplotlib inline dwi_cmp = side_by_side(masterim, m3rim, t1="Master to Temp", t2="M3r to Temp") plt.close() dwi_cmp # - # As we can see, the master branch is backwards wrt the MNI152 template. Since the only difference between `master` and `m3r` through tractography steps is the registration being backwards on `master`, we continue using the tractography from `m3r`, noting that I did not touch anything for preprocessing, registration, and tractography on master with the exception of reorienting everything to standard space. tracks = np.load('/inputs/m3r-out/dwi/fiber/sub-0025427_ses-1_dwi_space-MNI152NLin6_res-1x1x1_fibers.npz')['arr_0'] # ## Graph # # Finally, we compare the graph generation code, using the graphs produced by `m3r` and comparing with the graphs produced from master's copy of the graph generation code. Here, I changed the following lines: # # ``` # def make_graph(self, streamlines, attr=None): # """ # Takes streamlines and produces a graph # **Positional Arguments:** # streamlines: # - Fiber streamlines either file or array in a dipy EuDX # or compatible format. # """ # nlines = np.shape(streamlines)[0] # print("# of Streamlines: " + str(nlines)) # # print_id = np.max((int(nlines*0.05), 1)) # in case nlines*.05=0 # for idx, streamline in enumerate(streamlines): # if (idx % print_id) == 0: # print(idx) # # points = np.round(streamline).astype(int) # p = set() # for point in points: # try: # loc = self.rois[point[0], point[1], point[2]] # except IndexError: # pass # else: # pass # # if loc: # p.add(loc) # # ** edges = set([tuple(sorted(x)) for x in product(p, p)]) # for edge in edges: # ** lst = tuple(sorted([str(node) for node in edge])) # self.edge_dict[lst] += 1 # # ** edge_list = [(str(k[0]), str(k[1]), v) for k, v in self.edge_dict.items()] # self.g.add_weighted_edges_from(edge_list) # ``` # # To: # # ``` # def make_graph(self, streamlines, attr=None): # """ # Takes streamlines and produces a graph # **Positional Arguments:** # streamlines: # - Fiber streamlines either file or array in a dipy EuDX # or compatible format. 
# """ # nlines = np.shape(streamlines)[0] # print("# of Streamlines: " + str(nlines)) # # for idx, streamline in enumerate(streamlines): # if (idx % int(nlines*0.05)) == 0: # print(idx) # # points = np.round(streamline).astype(int) # p = set() # for point in points: # try: # loc = self.rois[point[0], point[1], point[2]] # except IndexError: # pass # else: # pass # # if loc: # p.add(loc) # # ** edges = combinations(p, 2) # for edge in edges: # ** lst = tuple([int(node) for node in edge]) # self.edge_dict[tuple(sorted(lst))] += 1 # # ** edge_list = [(k[0], k[1], v) for k, v in self.edge_dict.items()] # self.g.add_weighted_edges_from(edge_list) # ``` # # Note the changed lines of interest are `**`'d. # # The below function is the original graphgen code from master: # + from __future__ import print_function from itertools import combinations from collections import defaultdict import numpy as np import networkx as nx import nibabel as nb import ndmg import time class graph_master(object): def __init__(self, N, rois, attr=None, sens="dwi"): """ Initializes the graph with nodes corresponding to the number of ROIs **Positional Arguments:** N: - Number of rois rois: - Set of ROIs as either an array or niftii file) attr: - Node or graph attributes. Can be a list. If 1 dimensional will be interpretted as a graph attribute. If N dimensional will be interpretted as node attributes. If it is any other dimensional, it will be ignored. """ self.N = N self.edge_dict = defaultdict(int) self.rois = nb.load(rois).get_data() n_ids = np.unique(self.rois) n_ids = n_ids[n_ids != 0] self.g = nx.Graph(name="Generated by NeuroData's MRI Graphs (ndmg)", date=time.asctime(time.localtime()), source="http://m2g.io", region="brain", sensor=sens, ecount=0, vcount=len(n_ids) ) print(self.g.graph) [self.g.add_node(ids) for ids in n_ids] pass def make_graph(self, streamlines, attr=None): """ Takes streamlines and produces a graph **Positional Arguments:** streamlines: - Fiber streamlines either file or array in a dipy EuDX or compatible format. """ nlines = np.shape(streamlines)[0] print("# of Streamlines: " + str(nlines)) for idx, streamline in enumerate(streamlines): if (idx % int(nlines*0.05)) == 0: print(idx) points = np.round(streamline).astype(int) p = set() for point in points: try: loc = self.rois[point[0], point[1], point[2]] except IndexError: pass else: pass if loc: p.add(loc) edges = combinations(p, 2) for edge in edges: lst = tuple([int(node) for node in edge]) self.edge_dict[tuple(sorted(lst))] += 1 edge_list = [(k[0], k[1], v) for k, v in self.edge_dict.items()] self.g.add_weighted_edges_from(edge_list) def cor_graph(self, timeseries, attr=None): """ Takes timeseries and produces a correlation matrix **Positional Arguments:** timeseries: -the timeseries file to extract correlation for. 
dimensions are [numrois]x[numtimesteps] """ print("Estimating correlation matrix for {} ROIs...".format(self.N)) cor = np.corrcoef(timeseries) # calculate pearson correlation roilist = np.unique(self.rois) roilist = roilist[roilist != 0] roilist = np.sort(roilist) for (idx_out, roi_out) in enumerate(roilist): for (idx_in, roi_in) in enumerate(roilist): self.edge_dict[tuple((roi_out, roi_in))] = float(np.absolute( cor[idx_out, idx_in])) edge_list = [(k[0], k[1], v) for k, v in self.edge_dict.items()] self.g.add_weighted_edges_from(edge_list) pass def get_graph(self): """ Returns the graph object created """ try: return self.g except AttributeError: print("Error: the graph has not yet been defined.") pass def save_graph(self, graphname, fmt='edgelist'): """ Saves the graph to disk **Positional Arguments:** graphname: - Filename for the graph **Optional Arguments:** fmt: - Output graph format """ self.g.graph['ecount'] = nx.number_of_edges(self.g) g = nx.convert_node_labels_to_integers(self.g, first_label=1) if fmt == 'edgelist': nx.write_weighted_edgelist(g, graphname, encoding='utf-8') elif fmt == 'gpickle': nx.write_gpickle(g, graphname) elif fmt == 'graphml': nx.write_graphml(g, graphname) else: raise ValueError('edgelist, gpickle, and graphml currently supported') pass def summary(self): """ User friendly wrapping and display of graph properties """ print("\n Graph Summary:") print(nx.info(self.g)) pass # - parc = '/ndmg_atlases/label/desikan_res-1x1x1.nii.gz' # make the graph with m3r's graphgen code, which is on branch m3r m3r_graph = nd.graph(np.unique(nb.load(parc).get_data()) - 1, parc) m3r_graph.make_graph(tracks) # make the graph using master's original graphgen code above master_graph = graph_master(np.unique(nb.load(parc).get_data()) - 1, parc) master_graph.make_graph(tracks) master_graph.g.nodes() == m3r_graph.g.nodes() master_adj = nx.to_numpy_matrix(master_graph.g) m3r_adj = nx.to_numpy_matrix(m3r_graph.g) np.fill_diagonal(m3r_adj, 0) np.testing.assert_equal(master_adj, m3r_adj) # Showing that if the brains are registered properly on master (and hence, the tracts are properly registered) the resulting graphs are the same. Finally, just to be doubly sure, we will visualize the parcellation with MNI152: parc_dat = nb.load(parc).get_data() f = plot_overlays(b0=ref, atlas=parc_dat, edge=True) plt.close() f # Recall that the dwi image for M3r is shown above to be properly aligned with the properly oriented MNI152. Then by transitivity we can conclude that the `dwi` brain aligns properly with the parcellation as well. 
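# One more small illustration (mine, not from either branch) of why the diagonal of the m3r adjacency matrix had to be zeroed before the assert above: building edges from product(p, p) keeps self-pairs such as (3, 3), while combinations(p, 2) drops them, so the two graph builders can only differ in self-loop (diagonal) entries.
# +
from itertools import combinations, product

p = {3, 7, 11}  # ROI labels hit by one hypothetical streamline
edges_product = set(tuple(sorted(x)) for x in product(p, p))            # includes (3, 3), (7, 7), (11, 11)
edges_combinations = set(tuple(sorted(x)) for x in combinations(p, 2))  # off-diagonal pairs only
print(edges_product - edges_combinations)  # -> the self-loops
# -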
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from PIL import Image # used for loading images import numpy as np import os # used for navigating to image path import imageio # used for writing images import random import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt from tensorflow.keras.optimizers import SGD import pydot from timeit import default_timer as timer from tensorflow.keras import backend as K "ResNet 50 dependencies" from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.applications import resnet50 "GoogLeNet dependencies" from tensorflow.keras import regularizers from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, BatchNormalization from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, ZeroPadding2D from tensorflow.keras.layers import Concatenate from tensorflow.keras.preprocessing.image import ImageDataGenerator # - """Load numpy output files""" pr_im_64 = np.load('../../data/tidy/preprocessed_images/size64_exp5_Pr_Im.npy', allow_pickle=True) pr_po_im_64 = np.load('../../data/tidy/preprocessed_images/size64_exp5_Pr_Po_Im.npy', allow_pickle=True) pr_poim_64 = np.load('../../data/tidy/preprocessed_images/size64_exp5_Pr_PoIm.npy', allow_pickle=True) prpo_im_64 = np.load('../../data/tidy/preprocessed_images/size64_exp5_PrPo_Im.npy', allow_pickle=True) def getImageOneHotVector(image_file_name, classification_scenario = "B"): """Returns one-hot vector encoding for each image based on specified classification scenario: Classification Scenario A (3 classes): {probable, possible, improbable} Classification Scenario B (2 classes): {probable, improbable} Classification Scenario C (2 classes): {{probable, possible}, improbable} Classification Scenario D (2 classes): {probable, {possible, improbable}} """ word_label = image_file_name.split('-')[0] if classification_scenario == "A": if word_label == 'probable' : return np.array([1, 0, 0]) elif word_label == 'possible' : return np.array([0, 1, 0]) elif word_label == 'improbable': return np.array([0, 0, 1]) else : return np.array([0, 0, 0]) # if label is not present for current image elif classification_scenario == "B": if word_label == 'probable' : return np.array([1, 0]) elif word_label == 'improbable' : return np.array([0, 1]) else : return np.array([0, 0]) # if label is not present for current image elif classification_scenario == "C": if word_label in ['probable', 'possible'] : return np.array([1, 0]) elif word_label == 'improbable' : return np.array([0, 1]) else : return np.array([0, 0]) # if label is not present for current image elif classification_scenario == "D": if word_label == 'probable' : return np.array([1, 0]) elif word_label in ['possible', 'improbable'] : return np.array([0, 1]) else : return np.array([0, 0]) # if label is not present for current image #IMG_SIZE = 300 NUM_CLASS = 3 NUM_CHANNEL = 1 CLASSIFICATION_SCENARIO = "A" #IMG_SIZE = 300 NUM_CLASS = 2 NUM_CHANNEL = 1 CLASSIFICATION_SCENARIO = "B" DIR = '../../data/tidy/labeled_images' def processImageData(img_size, channels=1, l=400,t=0,r=3424,b=3024): data = [] image_list = os.listdir(DIR) for img in image_list: label = getImageOneHotVector(img, CLASSIFICATION_SCENARIO) if label.sum() == 0: continue path = 
os.path.join(DIR, img) img = Image.open(path) if channels == 1: img = img.convert('L') # convert image to monochrome img = img.crop((l, t, r, b)) # after cropping, image size is 3024 x 3024 pixels #img_size_w, img_size_h = img.size img = img.resize((img_size, img_size), Image.BICUBIC) data.append([(np.array(img)/255.).T, label])#scale to 0-1 and transpose # flip_img = np.fliplr((np.array(img)/255.).T)# Basic Data Augmentation - Horizontal Flipping # data.append([flip_img, label])#scale to 0-1 and transpose elif channels == 3: img = img.crop((l, t, r, b)) # after cropping, image size is 3024 x 3024 pixels img = img.resize((img_size, img_size), Image.BICUBIC) data.append([(np.array(img)/255.).T, label])#scale to 0-1 and transpose return (data) def splitData(image_array, prop = 0.80, seed_num = 111): """Returns training and test arrays of images with specified proportion - prop:1-prop""" random.Random(seed_num).shuffle(image_array) train_size = int(prop*np.shape(image_array)[0]) train = image_array[:train_size] test = image_array[train_size:] return(train, test) processed_image_data = processImageData(108, channels = NUM_CHANNEL) train_data, test_data = splitData(processed_image_data, seed_num = 111) tr_data, te_data = splitData(pr_im_64, seed_num = 111) processed_image_data[202][1] plt.imshow(processed_image_data[202][0], cmap = 'gist_gray') #plt.savefig( "../../figures/image0.png", dpi=100) def getImageShape(image_array): if NUM_CHANNEL==1: image_shape = np.array([np.expand_dims(x[0],axis=2) for x in image_array]).shape[1:4] elif NUM_CHANNEL==3: image_shape = np.array([x[0] for x in image_array]).shape[1:4][::-1] print(image_shape) return image_shape input_image_shape = getImageShape(tr_data) # + # Train the model. if NUM_CHANNEL == 1: train_array = np.array([np.expand_dims(x[0],axis=2) for x in tr_data]) validation_array = np.array([np.expand_dims(x[0],axis=2) for x in te_data]) elif NUM_CHANNEL == 3: train_array = np.array([x[0] for x in tr_data]) train_array = np.moveaxis(train_array, 1, -1) validation_array = np.array([x[0] for x in te_data]) validation_array = np.moveaxis(validation_array, 1, -1) train_labels = np.array([x[1] for x in tr_data]) validation_labels = np.array([x[1] for x in te_data]) # + #https://github.com/keras-team/keras/issues/5400#issuecomment-408743570 def check_units(y_true, y_pred): if y_pred.shape[1] != 1: y_pred = y_pred[:,1:2] y_true = y_true[:,1:2] return y_true, y_pred def precision(y_true, y_pred): y_true, y_pred = check_units(y_true, y_pred) true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision def recall(y_true, y_pred): y_true, y_pred = check_units(y_true, y_pred) true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def f1(y_true, y_pred): def recall(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def precision(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision y_true, y_pred = check_units(y_true, y_pred) precision = precision(y_true, 
y_pred) recall = recall(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) # + def plot_model_accuracy(hist): plt.plot(hist.history["accuracy"]) plt.plot(hist.history["val_accuracy"]) plt.title("Model Accuracy") plt.ylabel("Accuracy") plt.xlabel("Epoch") plt.legend(["Train", "Validation"], loc="lower right") plt.show() def plot_model_loss(hist): plt.plot(hist.history["loss"]) plt.plot(hist.history["val_loss"]) plt.title("Model Loss") plt.ylabel("Loss") plt.xlabel("Epoch") plt.legend(["Train", "Validation"], loc="upper right") plt.show() # + # opt = keras.optimizers.Adam(learning_rate=0.01) # model.compile(loss='categorical_crossentropy', optimizer=opt) model = models.Sequential([ layers.Conv2D(filters = 64, kernel_size = 7, strides = 2, activation="relu", padding="same", input_shape = input_image_shape), layers.MaxPooling2D(2), layers.Conv2D(128, 3, activation="relu", padding="same"), layers.Conv2D(128, 3, activation="relu", padding="same"), layers.MaxPooling2D(2), layers.Conv2D(256, 3, activation="relu", padding="same"), layers.Conv2D(256, 3, activation="relu", padding="same"), layers.MaxPooling2D(2), layers.Flatten(), layers.Dense(128, activation="relu"), layers.Dropout(0.5), # randomly drop out 50% of the neuorns at each training step layers.Dense(64, activation="relu"), # flatten all outputs layers.Dropout(0.5), layers.Dense(NUM_CLASS, activation="softmax") ]) # + # Compile the model. opt = SGD(lr = 0.001) #default learning rate (lr) = 0.1 model.compile(loss='categorical_crossentropy', optimizer = "adam", metrics=[precision,recall, f1, 'accuracy']) start = timer() hist_seq = model.fit( train_array, train_labels, batch_size = 32, epochs = 5, validation_data=(validation_array, validation_labels) ) end = timer() print(end - start) # Time in seconds, e.g. 
5.380919524002 # - plot_model_accuracy(hist_seq) plot_model_loss(hist_seq) # + jupyter={"outputs_hidden": true} #loss, acc = model.evaluate(testImages, testLabels, verbose = 0) #print(acc * 100) y_pred = model.predict(validation_array, batch_size=32, verbose=1) y_pred_bool = np.argmax(y_pred, axis=1) #print(classification_report(validation_labels, y_pred_bool)) # + jupyter={"outputs_hidden": true} tf.keras.utils.plot_model(model, "../../figures/cnn-model1.png", expand_nested = False, rankdir = "TB", show_shapes=True, dpi=192) # - # ## ResNet 34 class ResidualUnit(tf.keras.layers.Layer): def __init__(self, filters, strides=1, activation="relu", **kwargs): super().__init__(**kwargs) self.activation = tf.keras.activations.get(activation) self.main_layers = [ tf.keras.layers.Conv2D(filters, 3, strides=strides, padding="same", use_bias=False), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(filters, 3, strides=1, padding="same", use_bias=False), tf.keras.layers.BatchNormalization() ] self.skip_layers = [] if strides > 1: self.skip_layers = [ tf.keras.layers.Conv2D(filters, 1, strides=strides, padding="same", use_bias=False), tf.keras.layers.BatchNormalization() ] def call(self, inputs): Z = inputs for layer in self.main_layers: Z = layer(Z) skip_Z = inputs for layer in self.skip_layers: skip_Z = layer(skip_Z) return self.activation(Z + skip_Z) resnet34mod = tf.keras.models.Sequential() resnet34mod.add(tf.keras.layers.Conv2D(64, 7, strides=2, input_shape=input_image_shape, padding="same", use_bias=False)) resnet34mod.add(tf.keras.layers.BatchNormalization()) resnet34mod.add(tf.keras.layers.Activation("relu")) resnet34mod.add(tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding="same")) prev_filters = 64 for filters in [64] * 3 + [128] * 4 + [256] * 6 + [512] * 3: strides = 1 if filters == prev_filters else 2 resnet34mod.add(ResidualUnit(filters, strides=strides)) prev_filters = filters resnet34mod.add(tf.keras.layers.GlobalAvgPool2D()) resnet34mod.add(tf.keras.layers.Flatten()) resnet34mod.add(tf.keras.layers.Flatten()) resnet34mod.add(tf.keras.layers.Dense(NUM_CLASS, activation="softmax")) resnet34mod.compile(loss='binary_crossentropy', optimizer = "adam",# metrics=[ 'accuracy']) #tf.keras.metrics.SpecificityAtSensitivity(0.5), tf.keras.metrics.SensitivityAtSpecificity(0.5), metrics=[precision,recall, f1, 'accuracy']) #metrics=['accuracy']) # + jupyter={"outputs_hidden": true} start = timer() resnet34mod.fit( train_array, train_labels, batch_size = 32, epochs = 4, validation_data=(validation_array, validation_labels) ) end = timer() print(end - start) # Time in seconds, e.g. 
5.380919524002 # - # ## ResNet 50 # + jupyter={"outputs_hidden": true} "Define ResNet 50 model instance (Keras built-in)" rn50 = resnet50.ResNet50(include_top=True, weights=None, input_tensor=None, input_shape=input_image_shape, pooling= 'max', classes=2) rn50.summary() # + "Configure the model with losses and metrics" rn50.compile(loss='categorical_crossentropy', optimizer = "adam", metrics=[precision,recall, f1, 'accuracy']) start = timer() "Fit ResNet 50 to data" hist_rn50 = rn50.fit( train_array, train_labels, batch_size = 32, epochs = 5, validation_data=(validation_array, validation_labels) ) end = timer() print(end - start) # - plot_model_accuracy(hist_rn50) plot_model_loss(hist_rn50) # ## GoogLeNet # ### InceptionV3 """Instantiate the Inception v3 architecture""" iv3 = tf.keras.applications.InceptionV3( include_top=True, weights=None, input_tensor=None, input_shape=input_image_shape, pooling='avg', classes=2, classifier_activation="softmax", ) # + "Configure the model with losses and metrics" iv3.compile(loss='categorical_crossentropy', optimizer = "adam", metrics=[precision,recall, f1, 'accuracy']) start = timer() "Fit Inception v3 to data" hist_iv3 = iv3.fit( train_array, train_labels, batch_size = 32, epochs = 10, validation_data=(validation_array, validation_labels) ) end = timer() print(end - start) # - plot_model_accuracy(hist_iv3) plot_model_loss(hist_iv3) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pySpark # language: python # name: pyspark # --- # # Working with NOAA dataset # # The journal article describing GHCN-Daily is: # # ., , , , and , 2012: An overview of the Global Historical Climatology Network-Daily Database. 
Journal of Atmospheric and Oceanic Technology, 29, 897-910, doi:10.1175/JTECH-D-11-00103.1 # + # #!pip install pyspark # - # For multiple output per cell from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" DATASET_FOLDER = '/media/data-nvme/dev/datasets/WorldBank/' noaa_csv_path = DATASET_FOLDER + 'noaa/ASN*.csv' SPARK_MASTER = 'spark://192.168.0.9:7077' from pyspark import SparkContext from pyspark import SparkConf import shutil # + # import os # os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python3" # os.environ["PYSPARK_PYTHON"]="/usr/bin/python3" # - # # Compute Pi # + conf = SparkConf().setAppName('pi') conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) import random num_samples = 100000000 def inside(p): x, y = random.random(), random.random() return x*x + y*y < 1 count = sc.parallelize(range(0, num_samples)).filter(inside).count() pi = 4 * count / num_samples print(pi) sc.stop() # - # # Line count # + #noaa_csv_path = '/media/data-nvme/dev/datasets/WorldBank//noaa/ASN00060066.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/ASN*.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/*.csv' noaa_csv_path = DATASET_FOLDER + '/small_dataset/*.csv' def count_lines(): # configuration APP_NAME = 'count NOAA all lines' conf = SparkConf().setAppName(APP_NAME) conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) # core part of the script files = sc.textFile(noaa_csv_path) total = files.count() # lineLength = lines.map(lambda s: len(s)) # lineLength.persist() # totalLength = lineLength.reduce(lambda a,b:a+b) # # output results # print(totalLength) sc.stop() return total total = count_lines() #print(f'{totalLength:%.2f}') total # - print(f'{total:,d}') # Number of lines : 1 076 166 433 # # # Filters # # https://docs.opendata.aws/noaa-ghcn-pds/readme.html # # https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt # # Keep only # # ELEMENT Summary # # The five core elements are: # # PRCP = Precipitation (tenths of mm) # SNOW = Snowfall (mm) # SNWD = Snow depth (mm) # TMAX = Maximum temperature (tenths of degrees C) # TMIN = Minimum temperature (tenths of degrees C) # # Variable Columns Type # ------------------------------ # ID 1-11 Character # YEAR 12-15 Integer # MONTH 16-17 Integer # ELEMENT 18-21 Character # VALUE1 22-26 Integer # MFLAG1 27-27 Character # QFLAG1 28-28 Character # # VALUE1 is the value on the first day of the month (missing = -9999). # # "PRCP_ATTRIBUTES" = a,M,Q,S where: # a = DaysMissing (Numeric value): The number of days (from 1 to 5) missing or flagged is provided # M = GHCN-Daily Dataset Measurement Flag (see Section 1.3.a.ii for more details) # Q = GHCN-Daily Dataset Quality Flag (see Section 1.3.a.iii for more details) # S = GHCN-Daily Dataset Source Code (see Section 1.3.a.iv for more details) # # MFLAG1 is the measurement flag for the first day of the month. 
There are # ten possible values: # # Blank = no measurement information applicable # B = precipitation total formed from two 12-hour totals # D = precipitation total formed from four six-hour totals # H = represents highest or lowest hourly temperature (TMAX or TMIN) # or the average of hourly values (TAVG) # K = converted from knots # L = temperature appears to be lagged with respect to reported # hour of observation # O = converted from oktas # P = identified as "missing presumed zero" in DSI 3200 and 3206 # T = trace of precipitation, snowfall, or snow depth # W = converted from 16-point WBAN code (for wind direction) # # QFLAG1 is the quality flag for the first day of the month. There are # fourteen possible values: # # Blank = did not fail any quality assurance check # D = failed duplicate check # G = failed gap check # I = failed internal consistency check # K = failed streak/frequent-value check # L = failed check on length of multiday period # M = failed megaconsistency check # N = failed naught check # O = failed climatological outlier check # R = failed lagged range check # S = failed spatial consistency check # T = failed temporal consistency check # W = temperature too warm for snow # X = failed bounds check # Z = flagged as a result of an official Datzilla # investigation # # - WESF = Water equivalent of snowfall (tenths of mm) # # WV** = Weather in the Vicinity where ** has one of the following values: # # 01 = Fog, ice fog, or freezing fog (may include heavy fog) # 03 = Thunder # 07 = Ash, dust, sand, or other blowing obstruction # 18 = Snow or ice crystals # 20 = Rain or snow shower # # WMO ID is the World Meteorological Organization (WMO) number for the station. If the station has no WMO number (or one has not yet been matched to this station), then the field is blank. # # # # HCN/CRN FLAG = flag that indicates whether the station is part of the U.S. Historical Climatology Network (HCN). There are three possible values: # # Blank = Not a member of the U.S. Historical Climatology or U.S. Climate Reference Networks # HCN = U.S. Historical Climatology Network station # CRN = U.S. Climate Reference Network or U.S. 
Regional Climate Network Station # # !head /media/data-nvme/dev/datasets/WorldBank//noaa/ASN00060066.csv' # !head $DATASET_FOLDER/noaa/ASN00060066.csv # !wc -l $DATASET_FOLDER/noaa/ASN00060066.csv # !grep 2016 $DATASET_FOLDER/noaa/AE000041196.csv | head -1 # !rm -r /media/data-nvme/dev/datasets/WorldBank/year_2016-01 # # Filter on Year # + # %%time #noaa_csv_path = '/media/data-nvme/dev/datasets/WorldBank//noaa/ASN00060066.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/AE*.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/*.csv' noaa_csv_path = DATASET_FOLDER + '/small_dataset/*.csv' def filter_year(): total = 0 # configuration APP_NAME = 'Filter on Year' conf = SparkConf().setAppName(APP_NAME) conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) try: # core part of the script print(f'Processing {noaa_csv_path}...') files_rdd = sc.textFile(noaa_csv_path) year_2016 = files_rdd.filter(lambda s : "\"2016-01-01" in s) output = DATASET_FOLDER + 'year_2016-01' print(f'Saving to {output}') year_2016.saveAsTextFile(output) total = year_2016.count() except Exception as inst: print('ERROR') print(type(inst)) # the exception instance print(inst.args) # arguments stored in .args print(inst) raise finally: sc.stop() return total total = filter_year() #print(f'{totalLength:%.2f}') total # - # # Filter on Rain - Python IN version s = '"USR0000AFRA","2016-01-25","35.8456","-113.055","2063.5","RIZONA, AZ US"," 39","H,,U"," -39","H,,U"," -10",",,U"' s[1:12] s[14:] s = 'ACW00011604 17.1167 -61.7833 TMAX 1949 1949' s[0:11] s[36:] # + # %%time #noaa_csv_path = DATASET_FOLDER + '/noaa/*.csv' noaa_csv_path = DATASET_FOLDER + 'year_2016-01.csv' inventory_path = DATASET_FOLDER + 'ghcnd-inventory.txt' def extract_rain(): total = 0 # configuration APP_NAME = 'Filter on Rain' conf = SparkConf().setAppName(APP_NAME) conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) try: # Load inventory inventory_rdd = sc.textFile(inventory_path) rain_inventory_rdd = inventory_rdd.filter(lambda s : "PRCP" in s) # Now we have a list of all files containings precipitation data # Keep only the first column rain_inventory_rdd = rain_inventory_rdd.filter(lambda s: s[0:11]) rain_inventory = rain_inventory_rdd.collect() # Load the stations data points print(f'Processing {noaa_csv_path}...') files_rdd = sc.textFile(noaa_csv_path+'/*') # Keep only precipitation data rain_rdd = files_rdd.filter(lambda s : s[1:12] in rain_inventory) # Save precipitation data rain_rdd.saveAsTextFile(DATASET_FOLDER + 'year_2016-01_rain') total = rain_rdd.count() except Exception as inst: print('ERROR') print(type(inst)) # the exception instance print(inst.args) # arguments stored in .args print(inst) raise finally: sc.stop() return total total = extract_rain() #print(f'{totalLength:%.2f}') total # - # # Filter on Rain - Join version # !rm -r /media/data-nvme/dev/datasets/WorldBank/year_2016-01_rain # + # %%time #noaa_csv_path = DATASET_FOLDER + '/noaa/*.csv' noaa_csv_path = DATASET_FOLDER + 'year_2016-01' inventory_path = DATASET_FOLDER + 'ghcnd-inventory.txt' sc.stop() total = 0 # configuration APP_NAME = 'Filter on Rain' conf = SparkConf().setAppName(APP_NAME) conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) # Load inventory inventory_rdd = sc.textFile(inventory_path) # Filter on Rain rain_inventory_rdd = inventory_rdd.filter(lambda s : "PRCP" in s) # Now we have a list of all files containings precipitation data # Keep only the first column # Format each RDD as (K, V) to prepare for the join operation rain_inventory_rdd = 
rain_inventory_rdd.map(lambda line : (line[0:11], line[36:])) # Keep code and years #rain_inventory_rdd = rain_inventory_rdd.filter(lambda s: s[0:11]) #rain_inventory = rain_inventory_rdd.collect() # Load the stations data points print(f'Processing {noaa_csv_path}...') all_data_rdd = sc.textFile(noaa_csv_path+'/*') all_data_rdd = all_data_rdd.map(lambda line: (line[1:12], line[14:])) # Keep only precipitation data join = rain_inventory_rdd.join(all_data_rdd) #rain_rdd = files_rdd.filter(lambda s : s[1:12] in rain_inventory) # Save precipitation data output = DATASET_FOLDER + 'year_2016-01_rain' print(f'Saving to {output}') join_output = join.map(lambda x: ','.join([x[0],x[1][1]])) #sc.parallelize(join_output.take(2)).collect() # Flatten the result #join = rdd.map(lambda x: ','.join([x[0],x[1][0],x[1][1]])) # Get all partition on one node, to have one file (don't do it for huge dataset) join_output = join_output.repartition(1) shutil.rmtree(output, ignore_errors=True) join_output.saveAsTextFile(output) total = join.count() sc.stop() total # - # https://stackoverflow.com/questions/56957589/how-to-read-multiple-csv-files-with-different-schema-in-pyspark # !echo '"STATION","DATE","LATITUDE","LONGITUDE","ELEVATION","NAME","PRCP","PRCP_ATTRIBUTES","SNWD","SNWD_ATTRIBUTES","TMAX","TMAX_ATTRIBUTES","TMIN","TMIN_ATTRIBUTES","TAVG","TAVG_ATTRIBUTES"' # !head $DATASET_FOLDER/year_2016-01_rain/part-00000 sc.parallelize(join.take(2)).collect() # "ORLY" ID : 0-20000-0-07149 # # Coordinates: 48.7166666667°N, 2.3844444444°E, 89m # + # # !cd $DATASET_FOLDER/year_2016-01_rain && (ls | xargs cat) > ../year_2016-01_rain.csv # # ! head $DATASET_FOLDER/year_2016-01_rain.csv # + #noaa_csv_path = '/media/data-nvme/dev/datasets/WorldBank//noaa/ASN00060066.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/AE*.csv' #noaa_csv_path = DATASET_FOLDER + '/noaa/*.csv' noaa_csv_path = DATASET_FOLDER + 'year_2016-01.csv' def extract_rain(): total = 0 # configuration APP_NAME = 'Count Rain' conf = SparkConf().setAppName(APP_NAME) conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) try: # core part of the script print(f'Processing {noaa_csv_path}...') files_rdd = sc.textFile(noaa_csv_path+'/*') total = files_rdd.count() except Exception as inst: print('ERROR') print(type(inst)) # the exception instance print(inst.args) # arguments stored in .args print(inst) raise finally: sc.stop() return total total = extract_rain() #print(f'{totalLength:%.2f}') total # - # !head $DATASET_FOLDER/year_2016.csv # + from pyspark.sql.functions import input_file_name from pyspark.sql import SQLContext from pyspark.sql.types import * sqlContext = SQLContext(sc) customSchema = StructType([ \ StructField("asset_id", StringType(), True), \ StructField("price_date", StringType(), True), \ etc., StructField("close_price", StringType(), True), \ StructField("filename", StringType(), True)]) df = spark.read.format("csv") \ .option("header", "false") \ .option("sep","|") \ .schema(customSchema) \ .load(fullPath) \ .withColumn("filename", input_file_name()) # - # # TEST conf = SparkConf().setAppName('test') conf = conf.setMaster(SPARK_MASTER) sc = SparkContext(conf=conf) s = ['"USR0000AFRA","2016-01-25","35.8456","-113.055","2063.5","FRAZIER WELLS ARIZONA, AZ US"," 39","H,,U"," -39","H,,U"," -10",",,U"'] s += ['"FRM00007149","2016-01-01","48.7167","2.3842","89.0","ORLY, FR"," 10",",,E",,," 85",",,E"," 38",",,E"," 62","H,,S"'] s rdd = sc.parallelize(s) #test = rain_inventor_rdd.map(lambda line : (line[0:11], line[36:])) # Keep code and years rdd = 
rdd.map(lambda line : (line[1:11], (line[15:25], line[28:35]))) rdd.collect() rdd.repartition(1).collect() rdd.map(lambda x: ','.join([x[0],x[1][0],x[1][1]])).repartition(1).collect() sc.stop() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="-mWLpB1R6L5J" # # Deep Learning model for Fashion item recognition # + [markdown] colab_type="text" id="RjIg9c8uQd-z" # * Classification of Fashion-MNIST dataset with tensorflow.keras, using a Convolutional Neural Network (CNN) architecture. # # * The dataset contains 70,000 grayscale images in 10 categories. # Label Description: # # `` # 0 T-shirt/top # 1 Trouser # 2 Pullover # 3 Dress # 4 Coat # 5 Sandal # 6 Shirt # 7 Sneaker # 8 Bag # 9 Ankle boot`` # # * The images show individual articles of clothing at low resolution (28 by 28 pixels). # + [markdown] colab_type="text" id="RP1v4b2l6L5P" # ### Imports # + colab={} colab_type="code" id="xX-aKKIKLWAT" from keras import backend as K # Importing Keras backend (by default it is Tensorflow) from keras.datasets import fashion_mnist # Import the mnist dataset from keras.layers import Input, Conv2D, Dense, Dropout, Flatten, MaxPool2D,BatchNormalization # Layers to be used for building our model from keras.models import Model # The class used to create a model from keras.optimizers import Adam from keras.callbacks import EarlyStopping, ModelCheckpoint from keras.utils import np_utils # Utilities to manipulate numpy arrays from tensorflow import set_random_seed # Used for reproducible experiments from sklearn.metrics import confusion_matrix,classification_report from keras import regularizers from keras.preprocessing.image import ImageDataGenerator # for data augmentetion from IPython.display import SVG from keras.utils.vis_utils import model_to_dot import gc import matplotlib.pyplot as plt import numpy as np # %matplotlib inline # + [markdown] colab_type="text" id="YPXP8r-TRedf" # * 60,000 images to train the network and 10,000 images to test it. # * The shape of each image is 28x28 pixels x1 channel (grey). # * **Data Normalization**: The values of the inputs are in [0, 255] so we normalize them to [0, 1] dividing by 255. 
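# A small helper (my addition, not in the original notebook) turning the label table above into a lookup list, which is handy when inspecting individual predictions later on.
# +
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# Example: class_names[9] -> 'Ankle boot'
# -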
# + colab={} colab_type="code" id="O75oVVFrLm7t" batch_size = 128 classes = 10 epochs = 60 (X_train, y_train), (X_test, y_test) = fashion_mnist.load_data() # Reshape data for Conv2D X_train = X_train.reshape(X_train.shape[0], 28, 28, 1) X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) input_shape = (28, 28, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') # Data Normalization X_train /= 255 X_test /= 255 Y_train = np_utils.to_categorical(y_train, classes) Y_test = np_utils.to_categorical(y_test, classes) # + colab={} colab_type="code" id="JrQxXZc-LwJU" # function to plot results def plot_history(hs, epochs, metric): plt.clf() plt.rcParams['figure.figsize'] = [10, 5] plt.rcParams['font.size'] = 16 for label in hs: plt.plot(hs[label].history[metric], label='{0:s} train {1:s}'.format(label, metric)) plt.plot(hs[label].history['val_{0:s}'.format(metric)], label='{0:s} validation {1:s}'.format(label, metric)) x_ticks = np.arange(0, epochs + 1, epochs / 10) x_ticks [0] += 1 plt.xticks(x_ticks) plt.ylim((0, 1)) plt.xlabel('Epochs') plt.ylabel('Loss' if metric=='loss' else 'Accuracy') plt.legend() plt.show() # + colab={} colab_type="code" id="xQdFoPfXLxQC" def clean_up(model): K.clear_session() del model gc.collect() # + [markdown] colab_type="text" id="7e6jjSXOGIoL" # ### Data augmentetion # # Data augmentation is often used in order to improve generalisation properties. Typically, horizontal flipping,zooming shifts are used. However, in this case data augmentation did not improved the classifier's results. Bellow are the the data augmentation settings that tried out. # + colab={"base_uri": "https://localhost:8080/", "height": 111} colab_type="code" id="hUTRVX8dGJfB" outputId="4cc204ae-acac-418d-e6ee-3ae757737a1a" image_generator = ImageDataGenerator( rotation_range=10, zoom_range = 0.1, width_shift_range=0.05, height_shift_range=0.05, horizontal_flip=False, vertical_flip=False, data_format="channels_last", zca_whitening=True ) # fit data for zca whitening image_generator.fit(X_train, augment=True) augment_size=5000 # get transformed images randidx = np.random.randint(X_train.shape[0], size=augment_size) x_augmented = X_train[randidx].copy() y_augmented = Y_train[randidx].copy() x_augmented = image_generator.flow(x_augmented, np.zeros(augment_size), batch_size=augment_size, shuffle=False).next()[0] # append augmented data to trainset X_train2 = np.concatenate((X_train, x_augmented)) Y_train2 = np.concatenate((Y_train, y_augmented)) # + [markdown] colab_type="text" id="ylAgSf3tXcfH" # ### The Model # # - We use a Functional Model with following layers: # # - `Input` layer # - ` 2D Convolution layer` with 16 filters and [5x5] kernel size. # - `Batch Normalization` # - `Max Pooling` # - `Dropout` 25% # - ` 2D Convolution layer` with 32 filters and [5x5] kernel size. # - `Batch Normalization` # - `Max Pooling` # - `Dropout` 25% # - Then, `Flatten` the convolved images so as to input them to a Dense Layer. # - `Dense Layer` hidden layer with output 1024 nodes. # - `Dropout` 25% # - `Output Layer`. # # To compile the model: # - As optimazer we use `Adam`. # - As loss function we use `categorical_crossentropy` because the targets are categorical one-hot encoded. # # To evaluate the model: # - Accuracy on both validation and test set is used. # # # **In order to find the above model the following strategy is used:** # 1. As mentioned above, data augmentetion worsen the classifier's performance; thus, only the original data is used. 
# In case of less data, data augmentation would be useful.
# 2. I chose to build a model with `CNNs` because they outperform other neural networks at image classification tasks. Additionally, I built an MLP model with 2 hidden layers, which had 3% lower accuracy than the CNN model.
#
#  - The proper number of CNN layers was set to two after several trials; in fact, more layers caused overfitting.
#
#  - At the detector stage of the convolutional layers, `ReLU` is used as the activation function. Other activation functions such as ELU were also tried, but with no better results than ReLU.
#
#  - At the pooling stage of the convolutional layers, `max pooling` was chosen with stride [2,2]. Without striding the model had slightly better accuracy, but far more parameters needed to be trained.
#
#  - Each convolutional layer has a different number of `filters`, so the network can learn more details about the unique characteristics of each class. These values were chosen after trials.
#
#  - The `kernel size` was set to [5,5].
#
#  - The `dilation rate` was found not to improve the model's accuracy, so it was set to [1,1].
#
#  - `Striding` was set to [1,1]; since the input images are already small, we do not want to lose any information.
#
#  - `Dropout` was set to 0.25, helping the model avoid overfitting and train faster.
#
#  - `Batch normalization` between the convolution (linear) and detector (non-linear) stages acts as a regularizer, allowing higher learning rates. Here it helped the model train in fewer epochs with slightly better accuracy, so we kept this normalization strategy.
#  - `Kernel regularization` was tested without improving the model's accuracy.
#
# 3. After the convolutional layers, an `MLP` with one hidden layer does the classification, as MLPs are powerful classifiers.
#
#  - `ReLU` is used as the activation function.
#  - `Dropout` of 0.25 is used again at this stage.
#
# 4. As the output activation function we use `softmax`, which is the standard choice for multi-class classification problems.
# 5. `EarlyStopping` is used to find the number of epochs that minimizes the generalization error.
#
#
# + colab={} colab_type="code" id="a1fg2q0_L6eI" def train_model( optimizer, epochs=100, batch_size=128, conv_layers=2, conv_activation='relu', hidden_layers=1, hidden_activation='relu', batch_normalization = False, conv_dropout=False, output_activation='softmax'): np.random.seed(1402) # Define the seed for numpy to have reproducible experiments. set_random_seed(1981) # Define the seed for Tensorflow to have reproducible experiments. model_name = 'gru-{0:d}-{1:d}'.format(conv_layers, epochs) # Define the input layer. input = Input( shape=input_shape, name='Input' ) x = input # Define the convolutional layers. for i in range(conv_layers): x = Conv2D( filters=16*(i+1), kernel_size=(5, 5), strides=(1, 1), padding='same', dilation_rate=(1, 1), activation=conv_activation, #kernel_regularizer=regularizers.l2(0.01), name='Conv2D-{0:d}'.format(i + 1) )(x) if batch_normalization: x = BatchNormalization(axis=2,name='Batch_Normalization-{0:d}'.format(i + 1) )(x) x = MaxPool2D( pool_size=(2, 2), strides=(2, 2), padding='same', name='MaxPool2D-{0:d}'.format(i + 1) )(x) if conv_dropout: x = Dropout( rate=0.25, name='Dropout-{0:d}'.format(i + 1) )(x) # Flatten the convolved images so as to input them to a Dense Layer x = Flatten(name='Flatten')(x) # Define the remaining hidden layers.
for i in range(hidden_layers): x = Dense( units=256, kernel_initializer='glorot_uniform', activation=hidden_activation, name='Hidden-{0:d}'.format(i + 1) )(x) if conv_dropout: x = Dropout( rate=0.25, name='Dropout-{0:d}'.format(3) )(x) # Define the output layer. output = Dense( units=classes, activation=output_activation, name='Output' )(x) # Define the model and train it. model = Model(inputs=input, outputs=output) model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) hs = model.fit( x=X_train, y=Y_train, validation_split=0.1, # use 10% of the training data as validation data epochs=epochs, verbose=0, batch_size=batch_size, callbacks=[EarlyStopping(monitor='val_acc', patience=5, verbose=1), ModelCheckpoint(filepath='{0:s}.chk'.format(model_name), monitor='val_acc', save_best_only=True, save_weights_only=True ) ] ) print('Finished training.') print('------------------') model.summary() # Print a description of the model. return model, hs # + [markdown] colab_type="text" id="zsAcwu0w6L53" # ### Training and Evaluation # + colab={"base_uri": "https://localhost:8080/", "height": 646} colab_type="code" id="24rwYVToNW2o" outputId="24b6fb41-c93e-4fbe-8f64-f18cf13a2980" # Using Adam optimizer = Adam() # 2 Convolutional Layers and 1 dense Layer model, hs = train_model( optimizer=optimizer, epochs=50, batch_size=batch_size, conv_layers=2, conv_activation='relu', batch_normalization=True, conv_dropout=True, hidden_layers=1, output_activation='softmax' ) # Evaluate on test data and show all the results. eval_model = model.evaluate(X_test, Y_test, verbose=1) # Predictions on test set predictions = model.predict(X_test) clean_up(model=model) # + colab={"base_uri": "https://localhost:8080/", "height": 778} colab_type="code" id="x8VtEUu7NgMv" outputId="8c8a3603-2532-447d-fd38-e4a3f200584f" print("Train Loss : {0:.5f}".format(hs.history['loss'][-1])) print("Validation Loss: {0:.5f}".format(hs.history['val_loss'][-1])) print("Test Loss : {0:.5f}".format(eval_model[0])) print("---") print("Train Accuracy : {0:.5f}".format(hs.history['acc'][-1])) print("Validation Accuracy: {0:.5f}".format(hs.history['val_acc'][-1])) print("Test Accuracy : {0:.5f}".format(eval_model[1])) # Plot train and validation error per epoch. plot_history(hs={'Model': hs}, epochs=40, metric='loss') plot_history(hs={'Model': hs}, epochs=40, metric='acc') # + colab={"base_uri": "https://localhost:8080/", "height": 1422} colab_type="code" id="Dhb6_VGSkJVq" outputId="5b46ecb7-4d51-4e7f-dd59-7c9acfa9c826" SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg')) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="35paboT5TvBn" outputId="fb568b2c-7e4d-4080-9b12-8e38226cb8b1" predicted_labels = np.apply_along_axis(np.argmax, 1, predictions) #print('Predicted labels:', predicted_labels) misclassified = y_test != predicted_labels #print('Misclassified:', misclassified) misclassified_indices = np.argwhere(misclassified) #print('Misclassified indices:\n', misclassified_indices) print('Number of misclassified:', len(misclassified_indices)) # - # ### Model's performance # # - This relatively small and easy to be trained CNN model achieved 92.58% accuracy and 25.53% loss on test set (742 out of 10000 images was misclassified). Furthermore, 417,650 parameters was trained in 28 epochs, where the model retains its generalization power. 
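# An added illustrative check (assumes `predictions` and `y_test` from the cells above): the
# misclassification count quoted in the bullet above is simply the number of test images whose
# argmax prediction differs from the true label, and the test accuracy is its complement.
# +
wrong = int(np.sum(np.argmax(predictions, axis=1) != y_test))
print('misclassified: {}, test accuracy: {:.4f}'.format(wrong, 1 - wrong / len(y_test)))
# -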
# + [markdown] colab_type="text" id="wTvKrTiQ6L6C" # ### Misclassified images # + colab={} colab_type="code" id="UR9jtA-jJ1VI" def plot_image(i, predictions, true_labels, img): predictions_array, true_label, img = predictions[i], true_labels[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img.reshape(28,28), cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions, true_labels, show_xticks=False): predictions_array, true_label = predictions[i], true_labels[i] plt.grid(False) plt.xticks([]) if show_xticks: plt.xticks(range(10), class_names, rotation='vertical') else: plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('green') # + colab={"base_uri": "https://localhost:8080/", "height": 578} colab_type="code" id="3R3vpqH2U_nM" outputId="d0d4d01a-e048-4a39-a8a5-b96606597f71" class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] num_rows = 4 num_cols = 4 num_images = num_rows * num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) show_x_ticks = False plt.title('Some misclassified images') for i in range(num_images): if i >= num_images - num_cols: show_x_ticks = True plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(misclassified_indices[i-1][0], predictions, y_test, X_test) plt.subplot(num_rows, 2 * num_cols, 2*i+2) plot_value_array(misclassified_indices[i-1][0], predictions, y_test, show_x_ticks) plt.suptitle('Some misclassified images') plt.show; # - # - As it can been seen from the above plots, even a human eye cannot easily discriminate the proper class for each of those images. Thus, this model is performing quite well at fashion recognition. # + colab={"base_uri": "https://localhost:8080/", "height": 476} colab_type="code" id="L_JNg1k8WCxB" outputId="cd175d41-8c38-4ee2-ca2c-61b36f28ba01" print(confusion_matrix(y_test,predicted_labels)) print(classification_report(y_test,predicted_labels,target_names=class_names)) # + [markdown] colab={} colab_type="code" id="0jQIqKsUY29O" # - Finally, the confusion matrix shows that the model classified 'trouser', 'sandal','sneaker', 'bag' and 'ankle boot' classes over 97% correctly but seemed to struggle quite a bit with 'shirt' class (79% accurate), which was regularly confused with 't-shirt/top', 'pullover' and 'coat' classes. 
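# An added sketch (assumes `confusion_matrix`, `y_test`, `predicted_labels`, and `class_names`
# from the cells above): the per-class accuracies quoted in the discussion, such as ~79% for
# 'shirt', are the diagonal of the confusion matrix divided by the row sums.
# +
cm = confusion_matrix(y_test, predicted_labels)
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(class_names, per_class_accuracy):
    print('{:<12s} {:.2%}'.format(name, acc))
# -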
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''base'': conda)' # name: python3 # --- import pandas as pd import os book = pd.read_csv("sales2019.csv") df = pd.DataFrame(book) df # + df.user_submitted_review.unique() df.title.unique() # + usercount = [0,0,0] x = "Individual" # negative, neutral, positive print(usercount) # - def query(title, customer_type): usercount = [0,0,0] for i in range(len(df)): if df.loc[i, "title"] == title: if customer_type == "all": if df.loc[i, "user_submitted_review"] == "it was okay": usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "Awesome!": usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "I learned a lot": usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "Never read a better book": usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "OK": usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "A lot of material was not needed": usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "The author''s other books were better": usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "Would not recommend": usercount[0] = usercount[0] + 1 elif df.loc[i, "user_submitted_review"] == "Hated it": usercount[0] = usercount[0] + 1 if df.loc[i, "user_submitted_review"] == "it was okay" and df.loc[i, "customer_type"] == customer_type: usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "Awesome!" and df.loc[i, "customer_type"] == customer_type: usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "I learned a lot" and df.loc[i, "customer_type"] == customer_type: usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "Never read a better book" and df.loc[i, "customer_type"] == customer_type: usercount[2] = usercount[2] + 1 elif df.loc[i, "user_submitted_review"] == "OK" and df.loc[i, "customer_type"] == customer_type: usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "A lot of material was not needed" and df.loc[i, "customer_type"] == customer_type: usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "The author''s other books were better" and df.loc[i, "customer_type"] == customer_type: usercount[1] = usercount[1] + 1 elif df.loc[i, "user_submitted_review"] == "Would not recommend" and df.loc[i, "customer_type"] == customer_type: usercount[0] = usercount[0] + 1 elif df.loc[i, "user_submitted_review"] == "Hated it" and df.loc[i, "customer_type"] == customer_type: usercount[0] = usercount[0] + 1 print(usercount) query("Top 10 Mistakes R Beginners Make", "all") # + monthdate = [] for i in range(len(df)): monthdate.append(df.loc[i, "date"].replace("/", "")) len(monthdate) # + for x in range(len(monthdate)): if int(monthdate[x][:2]) > 12: monthdate[x] = monthdate[x][:1] else: monthdate[x] = monthdate[x][:2] monthdate # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Land area by number of involved conflicts as a function of Country # Group Members: , , , , and Ryan (Hsin-Yuan) Wang # Group Name: Allied Against An Anonymous Axis (aka 5A) # Github Repo: 
https://github.com/jenna-jordan/IS590DV-FinalProject import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib import matplotlib.cm as cm from matplotlib.colors import Normalize from matplotlib import gridspec cshapes = pd.read_csv("../Data/CShapes/country_shapes.csv") ucdp = pd.read_csv("../Data/UCDP-PRIO_ArmedConflict/participants_gw.csv") # We start by grouping the dataset by the gw_id which is a number used to identify a country (Gleditsch & Ward number). from there we count the number of unique conflict id's which are assigned to each unique conflict. From there the countries are sorted by the counts is ascending order. sorted_conflict_counts = ucdp.groupby("gw_id")[["conflict_id"]].count().sort_values(by=['conflict_id']) # We then truncate the data into the top 20 and bottom 20 and sort each truncation is descending order, then rename the column to reflect that it is infact a count of conflicts and not a conflict id top = sorted_conflict_counts.tail(20).sort_values(by=['conflict_id'], ascending=False) bottom = sorted_conflict_counts.head(20).sort_values(by=['conflict_id'], ascending=False) top.rename(columns={"conflict_id":"number_of_conflicts"},inplace=True) bottom.rename(columns={"conflict_id":"number_of_conflicts"},inplace=True) print(top) print(bottom) for key,item in top.iterrows(): print(cshapes[cshapes["gw_code"] == key][["country_name","area"]]) for key,item in bottom.iterrows(): print(cshapes[cshapes["gw_code"] == key][["country_name","area"]]) # It is then time to merge the conflict counts with teh cshapes dataset to get the associated area all in one table. top_full = top.merge(cshapes, left_on="gw_id", right_on="gw_code") bottom_full = bottom.merge(cshapes, left_on="gw_id", right_on="gw_code") top_full # + my_cmap = cm.get_cmap('brg') my_cmap2 = cm.get_cmap('hot') my_norm = Normalize(vmin=40, vmax=300) my_norm2 = Normalize(vmin=0, vmax=5) fig2 = plt.figure(figsize=(30,30)) gs = gridspec.GridSpec(2, 2, width_ratios=[20, 1]) ax1 = plt.subplot(gs[0]) ax1.bar(top_full['country_name'],top_full['area'], color=my_cmap(my_norm(top_full['number_of_conflicts'].to_list()))) ax1.ticklabel_format(axis="y", style="plain") ax1.set_title("Top 20 countries with the most conflicts", fontsize=30) ax1.set_xlabel('Country Name', fontsize=20, labelpad=-50, fontweight='bold') ax1.set_ylim([0,25000000]) ax2 = plt.subplot(gs[1]) cb1 = matplotlib.colorbar.ColorbarBase(ax2, cmap=my_cmap, norm=my_norm, orientation='vertical') cb1.set_label('Number of Conflicts', fontsize=20, fontweight='bold', labelpad=20) ax3 = plt.subplot(gs[2]) ax3.bar(bottom_full['country_name'],bottom_full['area'], color=my_cmap2(my_norm2(bottom_full['number_of_conflicts'].to_list()))) ax3.set_title("Top 20 countries with the least conflicts", fontsize=30) ax3.ticklabel_format(axis="y", style="plain") ax3.set_xlabel('Country Name', fontsize=20, labelpad=-50, fontweight='bold') ax3.set_ylim([0,2500000]) ax4 = plt.subplot(gs[3]) cb2 = matplotlib.colorbar.ColorbarBase(ax4, cmap=my_cmap2, norm=my_norm2, orientation='vertical') cb2.set_label('Number of Conflicts', fontsize=20, fontweight='bold', labelpad=20) ax3.set_xlabel('Country Name', fontsize=20) for ax in (ax1,ax3): ax.set_ylabel('Land Area ($km^2$)', fontsize=20, fontweight='bold', labelpad=20) ax.xaxis.set_tick_params(labelsize=20) ax.yaxis.set_tick_params(labelsize=20) for ax_cmap in (ax2,ax4): ax_cmap.xaxis.set_tick_params(labelsize=20) ax_cmap.yaxis.set_tick_params(labelsize=20) for ax in fig2.axes: matplotlib.pyplot.sca(ax) 
plt.xticks(rotation=70) plt.suptitle("Land area of countries with most and least conflicts", fontsize=30, fontweight='bold') plt.subplots_adjust(hspace=200) plt.tight_layout(rect=[0, 0, 1, 0.95]) plt.savefig(fname="LandArea-Conflicts-Top_Bottom.png") # - # The first important thing to note about these graphs is that the y-axis scale in the top20 chart is actually an order of magnitude greater than the bottom20 chart (max of 25,000,000 instead of 2,500,000). Setting the plots to the same scale ended up making the bottom 20 bars almost hidden due to the drastic difference in scale. As such, a limit was placed on the y axis in order to make sure the bars can for the most part show up. This does show that there is a correlation to land area and number of conflicts. The larger a country, the better the chance that they have been involved in a large amount of conflicts. # This then got us interested in what the middle portion looked like, so it was decided to plot the entire series of sorted number of conflicts and see if the expected trend held throughout the whole dataset. All 203 rows. sorted_counts_full = sorted_conflict_counts.merge(cshapes, left_on="gw_id", right_on="gw_code") sorted_counts_full.rename(columns={"conflict_id":"number_of_conflicts"},inplace=True) sorted_counts_full # + my_cmap3 = cm.get_cmap('brg') my_norm3 = Normalize(vmin=0, vmax=300) fig3 = plt.figure(figsize=(50,30)) gs2 = gridspec.GridSpec(1, 2, width_ratios=[20, 1]) nax1 = plt.subplot(gs2[0]) nax1.bar(sorted_counts_full['gw_code'].astype(str),sorted_counts_full['area'], color=my_cmap3(my_norm3(sorted_counts_full['number_of_conflicts'].to_list()))) nax1.ticklabel_format(axis="y", style="plain") #nax1.set_xlabel('Country Name', fontsize=20) nax1.set_xticks([]) nax2 = plt.subplot(gs2[1]) ncb = matplotlib.colorbar.ColorbarBase(nax2, cmap=my_cmap3, norm=my_norm3, orientation='vertical') ncb.set_label('Number of Conflicts', fontsize=20) nax1.set_ylabel('Land Area ($km^2$)', fontsize=20) nax1.xaxis.set_tick_params(labelsize=20) nax1.yaxis.set_tick_params(labelsize=20) nax2.xaxis.set_tick_params(labelsize=20) nax2.yaxis.set_tick_params(labelsize=20) plt.suptitle("Land area of countries compared to number of conflicts", fontsize=30) plt.tight_layout(rect=[0, 0.3, 1, 0.95]) plt.savefig(fname="LandArea-Conflicts-Full.png") # - # The x-axis is intentionally left unlabled due to the inability to display any tick labels due to the closeness of the datapoints. This graph is only intended to show the trend of number of conflicts to land area across the whole dataset. The countries represented by the bars are sorted in order of number of total conflicts which is represented by the colormap as well. This confirmed that the predicted trend of larger countries having been involved in conflicts that was predicted above does hold relatively true throughout the dataset. # + #top_full.to_csv("top_full.csv") #bottom_full.to_csv("bottom_full.csv") #sorted_counts_full.to_csv("sorted_counts_full.csv") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Midterm #1-A # # + import pandas as pd import numpy as np import statsmodels.api as sm from statsmodels.regression.rolling import RollingOLS from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt import seaborn as sns import warnings #warnings.filterwarnings("ignore") # - # ## 1 # # #### 1. 
# False. In practice, MV optimisation does not simply go long the highest-Sharpe-ratio security and short the next-highest one; MV optimisation is largely about reducing the covariance term, so it will actually go long securities that have low correlation with the other securities.
# #### 2. False. An LETF has to re-price (re-lever) the instrument daily, so it loses the cross-product of returns every day compared to a regular levered instrument; e.g. a levered ETF on the S&P will typically have a lower return than, say, a statically 3x-leveraged S&P portfolio. Apart from this, even the process of daily re-leveraging is difficult and incurs significant trading costs.
# #### 3. Bitcoin and other cryptos are notorious for having very high mean returns and very high volatilities. Even if we had a few years of data, an intercept might be required to match the returns of Bitcoin using a traditional portfolio. Hence we will include an intercept.
# #### 4. HDG had a very high correlation with HFRI in sample and a decently high correlation OOS too. HDG was meant to track the Merrill Lynch indices, which in turn tracked HFRI, but since HFRI is an index and HDG is a tradeable ETF, it will never be able to match it exactly; there will always be a tracking error. Moreover, the Sharpe ratio for HDG was also lower.
# #### 5. A hedge fund's alpha is defined as the excess return it earns above the market; this can come from taking on more diverse assets and riskier positions. The Merrill Lynch factors have exposure to assets like currencies, commodities, etc. The hedge fund manager might have also invested in these asset classes, so the alpha changes from a large positive number to a negative one once the returns that the alpha initially captured are included in the model.
#
# ## 2 Allocation
# #### 1. weights
df2 = pd.read_excel(r'proshares_analysis_data.xlsx',sheet_name = 'merrill_factors') df2.set_index(keys = 'date',inplace = True) # + df2ex = df2.sub(df2['USGG3M Index'], axis=0) df2ex.drop(columns = 'USGG3M Index', inplace = True) # + ## as the data is monthly, periods = 12 global one_arr one_arr = np.ones(len(df2ex.columns),) periods = 12 def tangency_portfolio(df,periods): """ the function takes the returns and makes them annualized then calculates the correct weights for a tangency portfolio according to mean variance optimization. After getting the weights, it calculates the mean, volatility and sharpe ratio for a tangency portfolio""" df = df*periods mu_tilde = df.mean() inverse_of_cov = np.linalg.inv(df.cov()) #inverting the covariance matrix omega_tangent = inverse_of_cov@mu_tilde/(one_arr.T@inverse_of_cov@mu_tilde) # calculating tangency weights ## Calculating the weights, mean, vol and sharpe ratio for a MV frontier portfolio mean_tan = omega_tangent@mu_tilde vol_tan = (mean_tan/ np.sqrt(mu_tilde.T@inverse_of_cov@mu_tilde))/np.sqrt(periods) sharpe_tan = mean_tan/vol_tan omega_tangent = pd.Series(omega_tangent, index=mu_tilde.index) return omega_tangent, mean_tan,vol_tan,sharpe_tan omega_tangency, mean_tan,vol_tan,sharpe_tan = tangency_portfolio(df2ex,periods) # - # #### 1. Tangency weights ## tangency weights omega_tangency # + target_mean = 0.02*periods def target_portfolio(df,target_mean,periods): """ the function takes the returns and makes them annualized then calculates the correct weights for a target portfolio according to mean variance optimization, as per the target mean.
After getting the weights, it calculates the mean, volatility and sharpe ratio for a target portfolio""" df = df*periods mu_tilde = df.mean() inverse_of_cov = np.linalg.inv(df.cov()) #inverting the covariance matrix omega_tangent = inverse_of_cov@mu_tilde/(one_arr.T@inverse_of_cov@mu_tilde) # calculating tangency weights delta = ((one_arr.T @ inverse_of_cov @ mu_tilde)/(mu_tilde.T@inverse_of_cov@mu_tilde))*target_mean #calculating the scaling delta omega_target = delta*omega_tangent # calculating target weights ## Calculating the weights, mean, vol and sharpe ratio for a MV frontier portfolio mean_target = omega_target@mu_tilde vol_target = (mean_target/ np.sqrt(mu_tilde.T@inverse_of_cov@mu_tilde))/np.sqrt(periods) sharpe_target = mean_target/vol_target omega_target = pd.Series(omega_target, index=mu_tilde.index) return omega_target, mean_target,vol_target,sharpe_target omega_target, mean_target,vol_target,sharpe_target = target_portfolio(df2ex,target_mean,periods) # - # #### 2. target portfolio, weights omega_target sum(omega_target) wt_rfr = 1- sum(omega_target) wt_rfr # #### As the sum of weights is >1 the portfolio indeed is borrowing the risk free rate with the weight of -0.158 # #### 3. I am assuming the optimized portfolio here means the one with target mean of 0.2 per month. Even in such a case only mean and sd will be different, sharpe still remains the same across all MV optimized portfolios # mean_target vol_target sharpe_target # #### 4. OOS performance df2ex.head() df2ex_is = df2ex['2011':'2018'] omega_target_is, mean_target_is,vol_target_is,sharpe_target_is = target_portfolio(df2ex_is,target_mean,periods) # + years = [2019,2020,2021] def oos_performance(df_excess,years,periods,weights_target_in_sample): oos_data = df_excess[df_excess.index.year.isin(years)]*periods # isolating the oos data cov_mat = oos_data.cov() #covariance matrix mu_tilde_oos = oos_data.mean() # calculating the mean, vol and sharpe for oos data using in sample weights mean_oos = weights_target_in_sample@mu_tilde_oos vol_oos = np.sqrt(weights_target_in_sample.T@cov_mat@weights_target_in_sample)/np.sqrt(periods) sharpe_oos = mean_oos/vol_oos return mean_oos,vol_oos,sharpe_oos mean_oos,vol_oos,sharpe_oos = oos_performance(df2ex,years,periods,omega_target_is) # - mean_oos vol_oos sharpe_oos # #### 5. It probably would have been better if the commodity futures arent as risky or correlated as the equities as, since the correlation would have been low, the oos sample performance might have been more stable. It depends on the correlation matrix of the chosen assets # ## 3 Hedging and Replication df3 = df2ex sns.heatmap(df3.corr()) df3.columns Y = df3['EEM US Equity'] X = df3['SPY US Equity'] # + static_model = sm.OLS(Y,X).fit() # - static_model.params static_model.summary() # #### 1. The optimal hedge ratio is given by B in a one factor regression, therefore the hedge ratio is 0.92566, i.e. we will invest 0.92566 dollars in S&P for every dollar in EEM # #### 2. Our hedged position would simply be returns of EEM - B* returns of SPY hp = pd.DataFrame(index = df3.index) hp.head() df3.columns for i in range(len(df3.index)): hp.iloc[i,0] = df3.iloc[i,1] - static_model.params*df3.iloc[i,0] hp.head() mean = hp.mean() std = hp.std()/np.sqrt(periods) sharpe = mean/std mean std sharpe df3.mean()*periods # #### 3. 
The value of the two means is not equal (-0.007792 vs 0.037784) as SPY and EEM have very high correlation so if we take out the B due to SPY there is very less excess return, infact the very low value of returns shows that the hedge is working very well. The high multicollinearity probably won't affect the hedging as beta is simply the ratio of vol x correlation of the two # #### 4. IWM also has a very high correlation with SPY and EEM as shown in the correlation heatmap above. This means that there will be a lot of multicollinearity, making it almost impossible to trust the betas outputted by the model, so attribution is out of the picture. For hedging again it might not be very useful out of sample due to the high multicollinearity, it might be difficult to get the beta for both the factors, making it difficult to judge how much money needs to be put into each to accurately hedge the portfolio # ## 4 Modeling Risk df4 = df2 df4.columns # + # we need a distribution of returns that captures returns of SPY - returns of EFA # and check if it is greater than zero df4 = df2.sub(df2['EFA US Equity'], axis=0) series = df4['SPY US Equity'] # - mu = series.mean()*periods mu std = series.std()*np.sqrt(12) std h = 10 np.sqrt(10)*mu/std import scipy.stats def p(h, tilde_mu, tilde_sigma): x = - np.sqrt(h) * tilde_mu / tilde_sigma val = scipy.stats.norm.cdf(x) return val probability = p(h,mu,std) probability # #### The probability of SPY outperforming EFA is very high, there is just a 0.04% chance of it not happening over the next 10 years # + #std of september 2021 # - std = ((df2['EFA US Equity']**2).shift(1).rolling(60).mean()[-1])**0.5 # + ## Z-score for 99% is 2.576 # - Var = -2.576*std Var # #### The 99% one month VaR estimate is -0.108 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="x-tuGW9zgZ_X" outputId="a29afa18-8863-4d08-fd73-fa22fa4d9ad3" import numpy as np A = np.array([[4, 10, 8], [10, 26, 26], [8, 26, 61]]) print(A) B = np.array([[44], [128], [214]]) print(B) #AA^-1X = BA^-1 ## Solving for the inverse of A A_inv = np.linalg.inv(A) print(A_inv) ## Solving for BA^-1 X = np.dot(A_inv, B) print(X) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Actor-Critic for MountainCar-v0 import gym import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from matplotlib import pyplot as plt from IPython.display import clear_output # + env = gym.make("MountainCar-v0").env s = env.reset() obs_shape = env.observation_space.shape n_actions = env.action_space.n # - env.observation_space.low class Network(nn.Module): def __init__(self): super(Network, self).__init__() self.dense1 = nn.Linear(obs_shape[0], 400) self.dense2 = nn.Linear(400, n_actions) self.dense3 = nn.Linear(400, 1) def forward(self, x): x = torch.FloatTensor(x) x = self.dense1(x) x = F.relu(x) logits = self.dense2(x) values = self.dense3(x) return logits, values agent = Network() def where(cond, x1, x2): return (1 - cond)*x1 + cond*x2 gamma = 0.99 opt = torch.optim.Adam(agent.parameters(), lr=5e-5) rewards = [] env_rewards = [] actorLoss = [] criticLoss = [] Entropy = 
[] state = env.reset() for i_episode in range(500000): logits, values = agent(state) policy = F.softmax(logits) log_policy = F.log_softmax(logits) action = np.random.choice(n_actions, p=policy.data.numpy()) next_state, env_reward, done, _ = env.step(action) reward = torch.FloatTensor([env_reward + np.sin(3*state[0])]) # np.abs(state[0] + 0.5) next_logits, next_values = agent(next_state) # actor entropy = -torch.mean(policy * log_policy) advantage = reward + gamma * next_values - torch.squeeze(values) advantage = reward if not done else reward + gamma * next_values - torch.squeeze(values) actor_loss = -0.5*entropy - torch.mean(advantage.detach() * log_policy[action]) # critic td_target = reward + gamma * next_values if not done else reward critic_loss = torch.mean((td_target.detach() - values) ** 2) rewards.append(reward) state = next_state env_rewards.append(env_reward) loss = actor_loss + critic_loss loss.backward() opt.step() opt.zero_grad() if done: state = env.reset() continue if i_episode % 1000 == 0: clear_output(True) actorLoss.append(actor_loss.data.numpy()) criticLoss.append(critic_loss.data.numpy()) Entropy.append(entropy.data.numpy()) print(policy.data.numpy()) print("episode: {}, env_reward: {}, reward: {}".format(i_episode + 1, np.mean(env_rewards[-10:]), np.mean(rewards[-10:]) )) print("Critic loss:", criticLoss[-1], "Actor loss:", actorLoss[-1]) plt.figure(figsize=(20, 15)) plt.subplot(411) plt.plot(criticLoss) plt.title("Critic loss") plt.subplot(412) plt.plot(actorLoss) plt.title("Actor loss") plt.subplot(413) plt.plot(Entropy) plt.title("Entropy") plt.subplot(414) plt.plot(env_rewards) plt.title("Environment reward") plt.show() from collections import Counter print(Counter(env_rewards)) # + state = env.reset() for t in range(1000): logits, values = agent(state) action = np.random.choice(n_actions, p=F.softmax(logits).data.numpy()) next_state, reward, done, _ = env.step(action) state = next_state env.render() if done: state = env.reset() continue env.close() # - # ### Actor-critic for MountainCarContinuous-v0 # ### link on tensorflow implementation of A2C for MountainCarContinuous-v0 # https://github.com/dennybritz/reinforcement-learning/blob/master/PolicyGradient/Continuous%20MountainCar%20Actor%20Critic%20Solution.ipynb env = gym.make("MountainCarContinuous-v0") s = env.reset() obs_shape = env.observation_space.shape action_shape = env.action_space.shape class PolicyNetwork(nn.Module): def __init__(self): super(PolicyNetwork, self).__init__() self.dense1 = nn.Linear(obs_shape[0], 200) self.dense2 = nn.Linear(200, 1) self.dense3 = nn.Linear(200, 1) def forward(self, x): x = torch.tensor(x, dtype=torch.float32).to(device) x = F.tanh(self.dense1(x)) self.mu = self.dense2(x) self.sigma = self.dense3(x) self.mu = torch.squeeze(self.mu) self.sigma = F.softplus(torch.squeeze(self.sigma)) + 1e-5 self.normal = torch.distributions.normal.Normal(self.mu, self.sigma) actions = self.normal.sample(sample_shape=torch.Size(action_shape)) actions = torch.clamp(actions, env.action_space.low[0], env.action_space.high[0]) return actions def get_entropy(self): return self.normal.entropy() def get_log_prob(self, action): return self.normal.log_prob(action) class ValueNetwork(nn.Module): def __init__(self): super(ValueNetwork, self).__init__() self.dense1 = nn.Linear(obs_shape[0], 200) self.dense2 = nn.Linear(200, 1) def forward(self, x): x = torch.tensor(x, dtype=torch.float32).to(device) x = F.tanh(self.dense1(x)) v_s = self.dense2(x) return v_s policy_estimator = PolicyNetwork().to(device) 
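# NOTE (added comment): `device` is used by PolicyNetwork/ValueNetwork above and below, but it is
# never defined in this excerpt; presumably it was created earlier in the notebook, e.g. as
#   device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')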
policy_estimator([s]).shape env.action_space.sample().shape value_estimator = ValueNetwork().to(device) def generate_session(tmax=1000): states, actions, rewards, dones, next_states = [], [], [], [], [] s = env.reset() for i in range(tmax): action = policy_estimator([s]) new_s, reward, done, info = env.step(action.cpu().data.numpy()) if done: break states.append(s) actions.append(action) rewards.append(reward) dones.append(done) next_states.append(new_s) s = new_s return states, actions, rewards, dones, next_states states, actions, rewards, dones, next_states = generate_session() def compute_critic_loss(optimizer, states, rewards, next_states, gamma): states = torch.tensor(states, dtype=torch.float32).to(device) next_states = torch.tensor(next_states.astype('float'), dtype=torch.float32).to(device) rewards = torch.tensor(rewards, dtype=torch.float32).to(device) next_v_s = value_estimator(next_states).detach() td_target = rewards + gamma*next_v_s td_error = (td_target - value_estimator(states))**2 loss = torch.mean(td_error) loss.backward() optimizer.step() optimizer.zero_grad() return loss.cpu().data.numpy() def compute_actor_loss(optimizer, states, actions, rewards, next_states, gamma): H = policy_estimator.get_entropy() states = torch.tensor(states, dtype=torch.float32).to(device) actions = torch.tensor(actions, dtype=torch.float32).to(device) rewards = torch.tensor(rewards, dtype=torch.float32).to(device) next_states = torch.tensor(next_states.astype('float'), dtype=torch.float32) td_target = rewards + gamma*value_estimator(next_states) td_error = td_target - value_estimator(states) loss = -0.1*H - policy_estimator.get_log_prob(actions)*td_error.detach() loss.backward() optimizer.step() optimizer.zero_grad() return loss.cpu().data.numpy() def train(gamma=0.99, episodes=10, tmax=1000): opt = torch.optim.Adam(list(policy_estimator.parameters()) + list(value_estimator.parameters()), lr=1e-3) rewards = [] loss_values_L = [] loss_values_J = [] for i_episode in range(episodes): state = env.reset() episode_rewards = [] for t in range(tmax): action = policy_estimator(state) next_state, reward, done, _ = env.step(action) episode_rewards.append(reward) Ltd = compute_critic_loss(opt, state, reward, next_state, gamma) J = compute_actor_loss(opt, state, action, reward, next_state, gamma) if t % 500 == 0: loss_values_J.append(J) loss_values_L.append(Ltd) state = next_state if done: break rewards.append(np.mean(np.array(episode_rewards))) clear_output(True) print("episode: {}, reward: {}".format(i_episode + 1, np.mean(np.array(episode_rewards)))) print(loss_values_J[-1]) plt.subplot(121) plt.plot(loss_values_L) plt.title("Ltd loss") plt.subplot(122) plt.plot(loss_values_J) plt.title("J_hat loss") plt.show() return np.array(rewards) normal = torch.distributions.normal.Normal(0, 1) normal.entropy() train() # + state = env.reset() for t in range(1000): action = policy_estimator(state).to('cpu') next_state, reward, done, _ = env.step(action) state = next_state env.render() if done: break env.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # *for loops seq = [1,2,3,4,5] for num in seq: print("i am number {}".format(num)) # # *while loops # + i = 1 while i<5: print("hello my number {}".format(i)) i+=1 # - # some usefull functions in python # # *Range() # lets see the old tradional way of keeping a sequnce of characters in 
# loops:
for num in seq:
    print(num)

# Instead we use the range() function, which behaves like a generator of numerical values.
for num in range(1, 10):
    print("{} is my lucky number".format(num))

list(range(10))

# # List Comprehension
# This allows you to save a lot of writing.
x = [1, 2, 3, 4, 5]

# Here I want to produce a list with the squared values of the original list x.
# +
out = []
for num in x:
    out.append(num**2)
# -

print(out)

# Now we translate the procedure above into a list comprehension.
[num**2 for num in x]

# So, simply, we can express the same idea as follows:
out = [num**2 for num in x]
print(out)

# Both methods produce the same result, but the list comprehension saves you time.

# # Functions
# + active=""
# The syntax is:
#     def function_name(parameter_1):
#         print(parameter_1)
#
# To call the function, we use the following syntax:
#     function_name(parameter_1)
# -

def my_func(par: str):
    print(par)

my_func("hello")

# * Now let's create a function named square that takes in a number and returns the square of that number.
def square(num):
    """This is a multiline docstring; the function squares the number passed as a parameter."""
    return num**2

square(6)

the_square_is = square(6)
print(the_square_is)

# To see the documentation string, write the function name and press Shift + Tab.

# # Map
# Let's begin by creating a function.
def times2(var):
    return var*2

times2(5)

# * Now for the map() function.
# creating a sequence named seq
seq = [1, 2, 3, 4, 5]

# * map() is a built-in Python function that saves you the step of initializing a for loop and collecting the results yourself.
map(times2, seq)

# * The output above (e.g. Out[34]) shows that we have a map object at a memory location such as 0x7f0d13b3bb80.
# Now let's cast it into a list using the built-in list() function.
list(map(times2, seq))

# # Lambda Expression
# Here we take the times2 function above and rewrite it as a lambda expression.
lambda var: var*2

# * We can assign the expression to a variable and then use that variable like an ordinary function call.
t = lambda var: var*2
t(4)

# Lambda expressions are commonly used with map():
list(map(lambda num:num*2,seq)) # # # filter() built in function # the filter function does have similar structure to the map function, but instead of using the function for evey single element in the list, this will filter them out instead filter(lambda num:num%2 == 0,seq) # * In order to get the results back, we have to cast the filter function into a list list(filter(lambda num:num%2 == 0, seq)) # # # # Some String Methods # consider the following string and the use of the methods within s = "hello, my name is Joseph" # * s.lower # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import topicvisexplorer from ipywidgets import Play from ipywidgets import interact, interactive, fixed, interact_manual import matplotlib.pyplot as plt import seaborn as sns import textwrap from ipywidgets import FloatSlider import json # + vis = topicvisexplorer.TopicVisExplorer("name") # - vis.load_single_corpus_data("models_output/single_corpus_europe_cambridge_analytica_lda_mallet_gensim_new_prepared_data_enero_11.pkl") vis.run() # + type(topicvisexplorer.single_corpus_data['topic_similarity_matrix']) # - # ### Get confussion matrix # + names_excel_europe = [ "Issues after deleting facebook account (e.g, new account makes old facebook accounts suggestion, facebook account was deleted by irrational reasons)", "Cambridge analytica interfered on UK referendum (brexit)", "Facebook has move users out of reach of european privacy law to dodge GDPR", "Bigdata (Big data companies and big data solutions)", "Promote social networks accounts", " hearing (EU and US parliament)", "Inteligence Artificial on the industry", "News about data privacy breaches on Facebook", "Relationship between Scottish National party (snp) and Cambridge Analytica", "Discussion about Facebook company's practices", "People share and discuss different articles thay they read (some of them related to privacy)" ] # - names_excel_northamerica = [ 'Delete of content on Facebook (account, messages, photos,…)', "Comments about ", "Intelligence Artificial as a bussiness or service", "Facebook is censonring speech", "Facebook share information with other companies", "Privacy, as a problem that must be resolved // People problems ?", "Facebook and privacy regulations (such as GDPR)", "Discussion about Fake news", "US election was intervened (by Russia and Cambridge Analytica)", "Trump and inmigration policies (separating child from families)", "PC gaming streaming and Zuckerberg's hearing" ] matrices_dict_test = topicvisexplorer.single_corpus_data['topic_similarity_matrix'] def show_confussion_matrix(omega): omega = omega/100.0 max_width = 45 fig, ax = plt.subplots(figsize=(10,10)) sns.heatmap(matrices_dict_test[omega], annot=True, cmap='vlag_r', vmin=-1, vmax=1, linewidths=.5, ax=ax, xticklabels=names_excel_europe, yticklabels=names_excel_europe) ax.set_xticklabels(textwrap.fill(x.get_text(), max_width) for x in ax.get_xticklabels()) ax.set_yticklabels(textwrap.fill(x.get_text(), max_width) for x in ax.get_yticklabels()) plt.title("Topic similarity metric proposed - Omega: "+str(omega), fontsize = 20, y = 1.02) plt.xlabel('North America', fontsize=16) plt.ylabel('Europe', fontsize=16) from ipywidgets import Play interact(show_confussion_matrix, omega=Play(value=0, min=0, max=100, step=1, interval=200)); # ### Check bubbles positions - Scenario 1 # + data = 
json.loads(topicvisexplorer.single_corpus_data['new_circle_positions']) #single_corpus_data['new_circle_positions'] # - def plot_circles_with_procrustes(lambda_): new_circle_positions_selected = data[str(lambda_)] #print(lambda_, new_circle_positions_selected) x = [i[0] for i in new_circle_positions_selected] y = [i[1] for i in new_circle_positions_selected] plt.figure() plt.plot(x, y, 'o'); # markersize = 15 n = range(len(x)) for i, txt in enumerate(n): plt.annotate(txt, (x[i], y[i]), fontsize =15) plt.show() interact(plot_circles_with_procrustes, lambda_=FloatSlider(min=0, max=1, step=0.01)) topicvisexplorer.single_corpus_data['lda_model'][0].print_topics() vis.run() len(topicvisexplorer.single_corpus_data['relevantDocumentsDict']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import time import math import numpy as np import tensorly as tl import pandas as pd import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec from brokenaxes import brokenaxes from online_tensor_decomposition import * # + # for sample video from cv2 import VideoWriter, VideoWriter_fourcc, imshow, imwrite def make_video(tensor, filename, isColor=True): start = time.time() height = tensor.shape[1] width = tensor.shape[2] FPS = 24 fourcc = VideoWriter_fourcc(*'MP42') video = VideoWriter(filename, fourcc, float(FPS), (width, height), isColor) for frame in tensor: video.write(np.uint8(frame)) video.release() print('created', filename, time.time()-start) # - try: import cPickle as pickle except ImportError: # Python 3.x import pickle results = {} with open('results_0117.p', 'rb') as fp: results = pickle.load(fp) # E5 results = {} with open('results_0127.p', 'rb') as fp: results = pickle.load(fp) results = {} with open('results_0128.p', 'rb') as fp: results = pickle.load(fp) # + def plot_acc(datasets, name): colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", 'fcp') patterns = ( "" , "\\\\\\\\\\" , "////" , "xxxx") markers = ("o", "x", "s", "^", "4") ticks = [e.split('-')[-1] for e in datasets] index = np.arange(3) # create plot fig, axes = plt.subplots(2, 1, figsize = (2.5, 5), dpi = 150) ax1, ax2 = axes ax1.tick_params(axis='y') ax1.set_xlabel('Rank', size=12) ax1.set_xticks(index) ax1.set_xticklabels(ticks) ax1.set_ylabel('Global Fitness', size=12) for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][0] for dataset in datasets] ax1.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5) ax2.tick_params(axis='y') ax2.set_xlabel('Rank', size=12) ax2.set_xticks(index) ax2.set_xticklabels(ticks) ax2.set_ylabel('Average of Local Fitness', size=12) for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][1] for dataset in datasets] ax2.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.savefig(f'./plots/{name}.pdf', bbox_inches = 'tight', pad_inches = 0) # plt.show() plot_acc(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'E1_synthetic') plot_acc(('video-20', 'video-30', 'video-40'), 'E1_video') plot_acc(('stock-15', 'stock-25', 'stock-35'), 'E1_stock') plot_acc(('hall-30', 'hall-35', 'hall-40'), 'E1_hall') 
plot_acc(('korea-30', 'korea-40', 'korea-50'), 'E1_korea') # + def plot_rt(datasets, name): colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", 'fcp') patterns = ( "" , "\\\\\\\\\\" , "////" , "xxxx") markers = ("o", "x", "s", "^", "4") ticks = [e.split('-')[-1] for e in datasets] index = np.arange(3) # create plot fig, ax = plt.subplots(1, 1, figsize = (2.5, 2.5), dpi = 150) plt.yscale('log') ax.tick_params(axis='y') ax.set_xlabel('Rank', size=12) ax.set_xticks(index) ax.set_xticklabels(ticks) ax.set_ylabel('Local Running Time (s)', size=12) for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][3] for dataset in datasets] ax.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.savefig(f'./plots/{name}.pdf', bbox_inches = 'tight', pad_inches = 0) # plt.show() plot_rt(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'E2_synthetic') plot_rt(('video-20', 'video-30', 'video-40'), 'E2_video') plot_rt(('stock-20', 'stock-22', 'stock-24'), 'E2_stock') plot_rt(('hall-30', 'hall-35', 'hall-40'), 'E2_hall') plot_rt(('korea-30', 'korea-40', 'korea-50'), 'E2_korea') # + def plot_E5_error(dataset): markers = ("+", "x", "1", "2") colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", "fcp") fig = plt.figure(figsize = (7, 3), dpi = 150,) plt.ylabel('Local Error Norm', fontsize=12) plt.xlabel('# of Stacked Slices', fontsize=12) # ax1.xaxis.set_label_position('top') split_points, refine_points = results[dataset]['dao'][6] for p in refine_points: plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--') for p in split_points: plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-') for color, marker, lib in zip(colors, markers, libs): verbose_list = results[dataset][lib][5] plt.plot(verbose_list[:,0], verbose_list[:,2], linewidth=1, marker=marker, color=color) plt.savefig('plots/E5_{}_error.pdf'.format(dataset), bbox_inches='tight', pad_inches=0) plot_E5_error('video') def plot_E5_rt(dataset): markers = ("+", "x", "1", "2") colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", "fcp") plt.figure(figsize = (7, 3), dpi = 150,) plt.yscale('log') plt.ylabel('Local Running Time (s)', fontsize=12) plt.xlabel('# of Stacked Slices', fontsize=12) # ax1.xaxis.set_label_position('top') split_points, refine_points = results[dataset]['dao'][6] for p in refine_points: plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--') for p in split_points: plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-') for color, marker, lib in zip(colors, markers, libs): verbose_list = results[dataset][lib][5] plt.plot(verbose_list[:,0], verbose_list[:,1], linewidth=1, marker=marker, color=color) plt.savefig('plots/E5_{}_rt.pdf'.format(dataset), bbox_inches='tight', pad_inches=0) plot_E5_rt('video') # + from matplotlib import gridspec def plot_E5(dataset): markers = ("x", "1", "2", "+") colors = ('mediumseagreen', 'hotpink', '#fba84a', 'dodgerblue') libs = ("dtd", "ocp", "fcp", "dao") fig = plt.figure(figsize = (9, 6), dpi = 150) gs = gridspec.GridSpec(2, 1, height_ratios=[1.5, 1]) ax1 = plt.subplot(gs[0]) ax1.set_ylabel('Local Error Norm', fontsize=12) # ax1.set_xlabel('# of Stacked Slices', fontsize=12) # 
ax1.xaxis.set_label_position('top') split_points, refine_points = results[dataset]['dao'][6] for p in refine_points: ax1.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--') for p in split_points: ax1.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-') for color, marker, lib in zip(colors, markers, libs): verbose_list = results[dataset][lib][5] ax1.plot(verbose_list[:,0], verbose_list[:,2], linewidth=1, marker=marker, color=color) ax2 = plt.subplot(gs[1], sharex = ax1) ax2.set_yscale('log') ax2.set_ylabel('Local Running\nTime (s)', fontsize=12) ax2.set_xlabel('# of Stacked Slices', fontsize=12) # ax1.xaxis.set_label_position('top') split_points, refine_points = results[dataset]['dao'][6] for p in refine_points: ax2.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--') for p in split_points: ax2.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-') for color, marker, lib in zip(colors, markers, libs): verbose_list = results[dataset][lib][5] ax2.plot(verbose_list[:,0], verbose_list[:,1], linewidth=1, marker=marker, color=color) plt.setp(ax1.get_xticklabels(), visible=False) plt.subplots_adjust(hspace=.0) # fig.tight_layout() # otherwise the right y-label is slightly clipped plt.savefig('plots/E5_{}.svg'.format(dataset), bbox_inches='tight', pad_inches=0) plt.show() plot_E5('video') # + def plot_rt(datasets, name): colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", 'fcp') patterns = ( "" , "\\\\\\\\\\" , "////" , "xxxx") markers = ("o", "x", "s", "^", "4") ticks = [e.split('-')[-1] for e in datasets] index = np.arange(3) # create plot fig, axes = plt.subplots(1, 2, figsize = (6, 3), dpi = 150) ax1, ax2 = axes ax1.tick_params(axis='y') ax1.set_xlabel('Rank', size=12) ax1.set_xticks(index) ax1.set_xticklabels(ticks) ax1.set_ylabel('Global Running Time (s)', size=12) for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][2] for dataset in datasets] ax1.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5) ax2.tick_params(axis='y') ax2.set_xlabel('Rank', size=12) ax2.set_xticks(index) ax2.set_xticklabels(ticks) ax2.set_ylabel('Average of \nLocal Running Time (s)', size=12) for i, (color, lib) in enumerate(zip(colors, libs[:-1])): acc_list = [results[dataset][lib][3] for dataset in datasets] ax2.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.savefig(f'./plots/{name}.svg', bbox_inches = 'tight', pad_inches = 0) # plt.show() # plot_rt(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'rt_synthetic') plot_rt(('video-20', 'video-30', 'video-40'), 'rt_video') # plot_rt(('stock-20', 'stock-22', 'stock-24'), 'rt_stock') # plot_rt(('hall-30', 'hall-35', 'hall-40'), 'rt_hall') # plot_rt(('korea-30', 'korea-40', 'korea-50'), 'rt_korea') # - # --- # # Experiment #2 # + from matplotlib import colors def make_rgb_transparent(color, alpha=0.6, bg_rgb=(1,1,1)): rgb = colors.colorConverter.to_rgb(color) return [alpha * c1 + (1 - alpha) * c2 for (c1, c2) in zip(rgb, bg_rgb)] def plot_mem(datasets, name): colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", 'fcp') patterns = ( "" , "\\\\\\\\\\" , "////" , "xxxx") markers = ("o", "x", "s", "^", "4") index = 
np.arange(5) bar_width = 0.2 # create plot fig, ax1 = plt.subplots(figsize = (6, 4), dpi = 150) plt.xticks(index + bar_width*1.5, ('Synthetic', 'Video', 'Stock', 'Hall', 'Korea')) plt.rcParams['hatch.linewidth'] = 0.2 for i, (color, lib) in enumerate(zip(colors, libs)): mem_list = [results[dataset][lib][4] for dataset in datasets] rects1 = ax1.bar(index + bar_width*i, mem_list, bar_width, color=make_rgb_transparent(color, alpha=0.0), label=lib, edgecolor='black', hatch=patterns[i], linewidth=0.5) ax1.set_xlabel('Datasets') ax1.set_ylabel('Memory Usage (byte)') ax1.set_yscale('log') ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][0] for dataset in datasets] for j, acc in enumerate(acc_list): if j == 4: ax2.scatter(index[j] + bar_width*i, acc, 70, color=colors[i], marker=markers[j], linewidth=2) elif j == 1: ax2.scatter(index[j] + bar_width*i, acc, 50, color=colors[i], marker=markers[j], linewidth=2) else: ax2.scatter(index[j] + bar_width*i, acc, 50, color=colors[i], marker=markers[j], facecolors='none', linewidth=2) ax2.tick_params(axis='y') ax2.set_ylabel('Global Fitness', rotation=270, labelpad=15) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.show() # plt.savefig(f'./plots/{name}_mem.pdf', bbox_inches = 'tight', pad_inches = 0) plot_mem(('synthetic-30', 'video-30', 'stock-20', 'hall-30', 'korea-40'), 'E2') # + from matplotlib import colors def make_rgb_transparent(color, alpha=0.6, bg_rgb=(1,1,1)): rgb = colors.colorConverter.to_rgb(color) return [alpha * c1 + (1 - alpha) * c2 for (c1, c2) in zip(rgb, bg_rgb)] def plot_mem(datasets, name): colors = ('dodgerblue','mediumseagreen', 'hotpink', '#fba84a') libs = ("dao", "dtd", "ocp", 'fcp') patterns = ( "" , "\\\\\\\\\\" , "////" , "xxxx") markers = ("o", "x", "s", "^", "4") index = np.arange(5) bar_width = 0.2 # create plot fig, ax1 = plt.subplots(figsize = (6, 4), dpi = 150) plt.xticks(index + bar_width*1.5, ('Synthetic', 'Video', 'Stock', 'Hall', 'Korea')) plt.rcParams['hatch.linewidth'] = 0.2 for i, (color, lib) in enumerate(zip(colors, libs)): mem_list = [results[dataset][lib][4] for dataset in datasets] print(mem_list) rects1 = ax1.bar(index + bar_width*i, mem_list, bar_width, color=make_rgb_transparent(color, alpha=0.7), label=lib, edgecolor='black', hatch=patterns[i], linewidth=0.5) ax1.set_xlabel('Datasets') ax1.set_ylabel('Memory Usage (byte)') ax1.set_yscale('log') ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis for i, dataset in enumerate(datasets): acc_list = [results[dataset][lib][0] for lib in libs] ax2.plot(i + bar_width*index[:4], acc_list, marker="o", color='black', zorder=1) for i, (color, lib) in enumerate(zip(colors, libs)): acc_list = [results[dataset][lib][0] for dataset in datasets] ax2.scatter(index + bar_width*i, acc_list, 30, color='black', marker="o", facecolor=colors[i], linewidth=1.3, zorder=2) ax2.tick_params(axis='y') ax2.set_ylabel('Global Fitness', rotation=270, labelpad=13) fig.tight_layout() # otherwise the right y-label is slightly clipped # plt.show() plt.savefig(f'./plots/{name}_mem.svg', bbox_inches = 'tight', pad_inches = 0) print([results[dataset]['fcp'][4]/results[dataset]['dao'][4] for dataset in datasets]) plot_mem(('synthetic-30', 'video-30', 'stock-20', 'hall-30', 'korea-40'), 'E2') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # 
kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # https://discourse.pymc.io/t/nan-occured-in-optimization-in-a-vonmises-mixture-model/1296 # + # %pylab inline import pymc3 as pm import pymc3.distributions.transforms as tr # - with pm.Model() as model: mu_1 = pm.VonMises('mu_1', mu=0, kappa=1) kappa_1 = pm.Gamma('kappa_1', 1, 1) vm_1 = pm.VonMises.dist(mu=mu_1, kappa=kappa_1) w = pm.Dirichlet('w', np.ones(2)) vm_comps = [vm_1, vm_1] vm = pm.Mixture('vm', w, vm_comps) approx = pm.fit(obj_optimizer=pm.adagrad_window(learning_rate=1e-3)) model.check_test_point() with pm.Model() as model1: mu_1 = pm.VonMises('mu_1', mu=0, kappa=1) kappa_1 = pm.Gamma('kappa_1', 1, 1) vm_1 = pm.VonMises.dist(mu=mu_1, kappa=kappa_1) w = np.ones(2)*.5 # pm.Dirichlet('w', np.ones(2)) vm_comps = [vm_1, vm_1] vm = pm.Mixture('vm', w, vm_comps, transform=tr.circular) with model1: approx = pm.fit(obj_optimizer=pm.adagrad_window(learning_rate=1e-3)) with pm.Model() as model2: mu_1 = pm.VonMises('mu_1', mu=0, kappa=1) kappa_1 = pm.Gamma('kappa_1', 1, 1) vm_1 = pm.VonMises('vm_1', mu=mu_1, kappa=kappa_1) with model2: approx = pm.fit(obj_optimizer=pm.adagrad_window(learning_rate=1e-3)) with pm.Model() as model_: mu_1 = pm.VonMises('mu_1', mu=0, kappa=1) kappa_1 = pm.Gamma('kappa_1', 1, 1) vm_1 = pm.VonMises.dist(mu=mu_1, kappa=kappa_1) w = pm.Dirichlet('w', np.ones(2)) vm_comps = [vm_1, vm_1] vm = pm.Mixture('vm', w, vm_comps, transform=tr.circular) approx = pm.fit(100000, obj_n_mc=10, obj_optimizer=pm.adagrad_window(learning_rate=1e-2) ) plt.plot(approx.hist); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lyric tf-idf DEA # # ## Imports, Inits, and Method definitions ## # + import pandas as pd import numpy as np from sqlalchemy import create_engine import matplotlib.pyplot as plt import matplotlib as mpl import seaborn as sns sns.set() # %matplotlib inline from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, cross_validate from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_curve, roc_auc_score from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn import preprocessing import importlib import mcnulty_methods import word_utils importlib.reload(mcnulty_methods); importlib.reload(word_utils); from mcnulty_methods import get_formatted_feature_df, get_lyrics_for_tracks from word_utils import get_word_counts, generate_word_charts,get_word_total_idf # + mpl.rcParams['axes.titlesize'] = 16 mpl.rcParams['axes.labelsize'] = 16 mpl.rcParams['xtick.labelsize'] = 13 mpl.rcParams['ytick.labelsize'] = 13 test_size = 0.2 random_state = 10 # + def get_artist_term_counts(): term_counts = pd.read_csv('top_artist_terms.csv', index_col='artist_id', names=['artist_id','term', 'term_count']) term_counts = term_counts[~(term_counts['term'] == 'term')] del term_counts['term_count'] return term_counts def get_term_counts(): return pd.read_csv('term_counts.csv', names=['term', 'count']) # - # ## Fetch Tracks for Particular Genres conn = create_engine('postgresql://@localhost:5432/mcnulty_songs').raw_connection() cursor = conn.cursor() genres = ['hip hop', 'metal'] features = 
get_formatted_feature_df(conn, genres=genres) features.set_index('track_id', inplace=True) features.shape features.sample(5) # ## Fetch Lyrics from Tracks ## # + genre_labels = genres unique_words = set() all_lyrics = None hiphop_lyrics = None pop_lyrics = None lyrics_by_genre = dict() remaining_features = features for genre_label in genre_labels: genre_df = remaining_features[(remaining_features['term'] == genre_label)] remaining_features = remaining_features[~(remaining_features.index.isin(genre_df.index))] genre_ids = genre_df.index genre_lyrics = get_lyrics_for_tracks(conn, genre_ids) del genre_lyrics['is_test'] lyrics_by_genre[genre_label] = genre_lyrics if all_lyrics is None: all_lyrics = genre_lyrics else: all_lyrics = pd.concat([all_lyrics, genre_lyrics]) #generate_word_charts(genre_lyrics, genre_label) # + def tf_calculate(series): return series / series.count() total_unique_songs = all_lyrics.index.nunique() all_lyrics['tf'] = all_lyrics.groupby('track_id')['count'].transform(lambda x: x / x.sum()) all_lyrics['word_document_count'] = all_lyrics.groupby('word')['count'].transform('count') #all_lyrics['idf'] = all_lyrics.groupby('word')['count'].transform(lambda x: total_unique_songs / x.count()) all_lyrics['idf'] = np.log10(total_unique_songs / all_lyrics['word_document_count']) all_lyrics['tf_idf'] = all_lyrics['idf'] * all_lyrics['tf'] all_lyrics[all_lyrics['word'] == '&'].head(5) # - # ## Word Analysis and Reshaping for Modeling ## # ## Feature Selection ## # # Starting with the top x words found per song in the dataset, we'll add features and record the results from our classification models def plot_ROC_compute_AUC(model, model_name, X,y): X_val, X_val_test, y_val, y_val_test = train_test_split(X, y, test_size=test_size, random_state=random_state) model.fit(X_val, y_val) y_prob = model.predict_proba(X_val_test)[:,1] auc = roc_auc_score(y_val_test, y_prob) return auc #TODO save these fpr, tpr, _ = roc_curve(y_val_test, y_prob) auc = roc_auc_score(y_val_test, y_prob) plt.plot(fpr, tpr) x = np.linspace(0,1, 100000) plt.plot(x, x, linestyle='--') plt.title('ROC Curve (Pop or Hip Hop)') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend(['Logistic Regression']) return auc def create_test_result_df(rows): test_results = pd.DataFrame(rows, columns=['num_words', 'use_tfidf', 'model','accuracy','precision','recall','f1','auc']) test_results['accuracy'] = test_results['accuracy'].astype(np.float64) test_results['use_tfidf'] = test_results['use_tfidf'].astype(np.bool) test_results['precision'] = test_results['precision'].astype(np.float64) test_results['recall'] = test_results['recall'].astype(np.float64) test_results['f1'] = test_results['f1'].astype(np.float64) test_results['auc'] = test_results['auc'].astype(np.float64) return test_results def create_feature_df(rows): test_results = pd.DataFrame(rows, columns=['num_words','use_tfidf','model','feature','importance']) test_results['use_tfidf'] = test_results['use_tfidf'].astype(np.bool) test_results['importance'] = test_results['importance'].astype(np.float64) return test_results def get_X_Y(word_sample_size, use_tfidf=False): word_song_appearance, total_word_appearance = get_word_counts(all_lyrics) if use_tfidf: word_song_appearance = get_word_total_idf(all_lyrics) word_subset = word_song_appearance.iloc[:word_sample_size] remaining_lyrics = pd.merge(all_lyrics.reset_index(), word_subset[['word']], how='right', on='word') remaining_lyrics.set_index('track_id', inplace=True) if use_tfidf: del 
remaining_lyrics['tf'] del remaining_lyrics['word_document_count'] del remaining_lyrics['idf'] pivote_value = 'tf_idf' else: pivote_value= 'count' tid_lyrics = remaining_lyrics.pivot(columns='word', values=pivote_value) music_features = ['music_duration','music_key','music_loudness', 'music_mode', 'music_tempo', 'music_time_signature'] term_only = features[['term']].reset_index().set_index('track_id') feature_names = list(tid_lyrics.columns) # complete set,= tid_index -> genre -> word_a -> .... -> word_z complete_set = pd.merge(term_only, tid_lyrics, left_index=True, right_index=True, how='right') complete_set.fillna(0, inplace=True) y_text = np.asarray(complete_set.iloc[:,0]) y = np.array([1 if val=='hip hop' else 0 for val in y_text]) X = np.asarray(complete_set.iloc[:,1:]) return X,y,feature_names features['term'].value_counts() '''for i in range (0,2): print('Generating X and y') X, y, feature_names = get_X_Y(200,use_tfidf=i) X_val, X_test, y_val, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state) X_val_fit, X_val_test, y_val_fit, y_val_test = train_test_split(X_val, y_val, test_size=test_size, random_state=random_state) model = RandomForestClassifier(n_estimators=100, class_weight={1 : 1, 0 : 2}) print('Fitting Model') model.fit(X_val_fit, y_val_fit) print('Predicting Values') y_test_pred = model.predict(X_val_test) print('Accuracy: {}'.format(accuracy_score(y_val_test, y_test_pred))) print('Recall: {}'.format(recall_score(y_val_test, y_test_pred))) print('Precision: {}'.format(precision_score(y_val_test, y_test_pred))) print('F1: {}'.format(f1_score(y_val_test, y_test_pred)))'''; # + word_chunk_size = 10 word_upper_bound = 3010 # let's stop at 3000 words feat_results = create_feature_df(None) score_results = create_test_result_df(None) tree100 = RandomForestClassifier(n_estimators=100, class_weight={1 : 1, 0 : 2}) tree1000 = RandomForestClassifier(n_estimators=1000, class_weight={1 : 1, 0 : 2}) tree_models = [tree100] tree_model_names = ['tree100'] log_model = LogisticRegression(penalty='l1', class_weight={1 : 1, 0 : 2}) output_feature_file_name = 'feature_importance_csvs/feature_importance_results.csv' output_score_file_name = 'feature_importance_csvs/feature_score_results.csv' for word_sample_size in range(10, word_upper_bound, word_chunk_size): print('word sample size: {}'.format(word_sample_size)) for i in range (1,2): print('\tis tfidf: {}'.format(i)) X, y, feature_names = get_X_Y(word_sample_size,use_tfidf=i) X_val, X_test, y_val, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state) # For the pandas rows we're generating number_word_columns = [str(word_sample_size)] * len(feature_names) tf_idf_columns = [i] * len(feature_names) # Handle Logistic Regression print('\t\tlog fit') log_model.fit(X_val, y_val) model_columns = ['log'] * len(feature_names) zipped_features = list(zip(number_word_columns, tf_idf_columns, model_columns, feature_names, log_model.coef_[0])) new_results = create_feature_df(zipped_features) # Merge with our results feat_results = feat_results.append(new_results, ignore_index=True) feat_results.to_csv(output_feature_file_name) print('\t\tlog score') ## handle scores scoring = {'accuracy': 'accuracy', 'precision': 'precision', 'recall': 'recall', 'f1': 'f1',} scores = cross_validate(log_model, X_val, y_val, scoring=scoring, cv=5, n_jobs=-1) accuracy = np.mean(scores['test_accuracy']) precision = np.mean(scores['test_precision']) recall = np.mean(scores['test_recall']) f1 = np.mean(scores['test_f1']) auc = 
plot_ROC_compute_AUC(log_model, 'log', X_val, y_val) cv_row = [word_sample_size, i, 'log', accuracy, precision, recall, f1, auc] score_results = score_results.append(create_test_result_df([cv_row]), ignore_index=True) score_results.to_csv(output_score_file_name) # Now our trees cv_rows = [] for idx, tree in enumerate(tree_models): print('\t\t{} fit'.format(tree_model_names[idx])) tree.fit(X_val, y_val) model_columns = [tree_model_names[idx]] * len(feature_names) zipped_features = list(zip(number_word_columns, tf_idf_columns, model_columns, feature_names, tree.feature_importances_)) new_results = create_feature_df(zipped_features) feat_results = feat_results.append(new_results, ignore_index=True) feat_results.to_csv(output_feature_file_name) print('\t\t{} score'.format(tree_model_names[idx])) scores = cross_validate(tree, X_val, y_val, scoring=scoring, cv=5, n_jobs=-1) accuracy = np.mean(scores['test_accuracy']) precision = np.mean(scores['test_precision']) recall = np.mean(scores['test_recall']) f1 = np.mean(scores['test_f1']) #accuracy = np.mean(cross_val_score(tree, X_val, y_val, cv=5, n_jobs=-1, scoring='accuracy')) #precision = np.mean(cross_val_score(tree, X_val, y_val, cv=5, n_jobs=-1, scoring='precision')) #recall = np.mean(cross_val_score(tree, X_val, y_val, cv=5, n_jobs=-1, scoring='recall')) #f1 = np.mean(cross_val_score(tree, X_val, y_val, cv=5, n_jobs=-1, scoring='f1')) auc = plot_ROC_compute_AUC(tree, tree_model_names[idx], X_val, y_val) cv_row = [word_sample_size, i, tree_model_names[idx], accuracy, precision, recall, f1, auc] cv_rows.append(cv_row) score_results = score_results.append(create_test_result_df(cv_rows), ignore_index=True) score_results.to_csv(output_score_file_name) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:jcop_usl] # language: python # name: conda-env-jcop_usl-py # --- # Dataset yang digunakan dapat didownload di: https://github.com/rizalespe/Dataset-Sentimen-Analisis-Bahasa-Indonesia atau menggunakan ***git clone*** seperti contoh dibawah ini. Folder yang di _clone_ tersimpan ke dalam folder tempat file project ini disimpan. # + colab={"base_uri": "https://localhost:8080/"} id="uQhkN4DG2gXb" outputId="120f7340-a07e-4a82-b0a2-7b7de9bc2337" # #!git clone https://github.com/rizalespe/Dataset-Sentimen-Analisis-Bahasa-Indonesia # + [markdown] id="GQLN-l93iF2B" # ## Install Package # - # **Requirement Package**: # # ``` # 1. nltk : https://www.nltk.org/ # 2. Sastrawi: https://github.com/sastrawi/sastrawi # 3. numpy: https://numpy.org/ # 4. pandas: https://pandas.pydata.org/ # 5. 
sklearn: https://scikit-learn.org/stable/ # # ``` # + [markdown] id="LZMhPthAicoE" # # Import Package # + # #!pip install Sastrawi #nltk.download('stopwords') #nltk.download('punkt') # + id="uq4zAXNPih6-" import numpy as np import pandas as pd import re import pickle from string import punctuation import os import json from nltk.corpus import stopwords from nltk.tokenize import word_tokenize from Sastrawi.Stemmer.StemmerFactory import StemmerFactory from Sastrawi.StopWordRemover.StopWordRemoverFactory import StopWordRemoverFactory from sklearn.model_selection import train_test_split # + [markdown] id="Nh5funiyhB5G" # # ```{Utils}``` # + id="UW6_M8jhgyy-" def process_tweet(tweet): # kumpulan stemming factory_stem = StemmerFactory() stemmer = factory_stem.create_stemmer() # kumpulan stopwords factory_stopwords = StopWordRemoverFactory() stopword = factory_stopwords.get_stop_words() + stopwords.words('indonesian') # menghapus kata-kata yang tidak penting seperti @, # tweet = re.sub(r'\$\w*', '', tweet) tweet = re.sub(r'^RT[\s]+', '', tweet) tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) tweet = re.sub(r'#', '', tweet) tweet = re.sub(r'', '', tweet) # tokenizer word tweet_tokens = word_tokenize(tweet) # membersihkan word tweets_clean = [stemmer.stem(word) for word in tweet_tokens if (word not in stopword and word not in punctuation)] return tweets_clean # - def lookup(freqs, word, label): n = 0 pair = (word, label) if (pair in freqs): n = freqs[pair] return n # + id="cKzrfhkFqV3K" def count_tweets(tweets, ys): yslist = np.squeeze(ys).tolist() freqs = {} for y, tweet in zip(yslist, tweets): for word in process_tweet(tweet): pair = (word, y) if pair in freqs: freqs[pair] += 1 else: freqs[pair] = 1 return freqs # + [markdown] id="Hmi7dW_Ttakm" # # Processing data # + [markdown] id="wLVQNDWptjJF" # ### Import data # + id="OnmXDdv7ia_u" df = pd.read_csv("data/dataset_komentar_instagram_cyberbullying.csv") # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="upLfzDUrihHG" outputId="bdc4f034-0896-4e6d-9375-fd848593a09d" df.head() # + colab={"base_uri": "https://localhost:8080/"} id="eIOQcJYdodwa" outputId="ad5180ad-9eee-4e09-c645-9cc84d24a9fa" df.Sentiment.value_counts() # + id="pCoOr7tSzPNC" df.loc[(df.Sentiment == 'negative'),'Sentiment']=0 df.loc[(df.Sentiment == 'positive'),'Sentiment']=1 # + id="fjogwFsJv887" X = pd.DataFrame(df['Instagram Comment Text']) y = df.Sentiment # + id="yn0a5Qw1uuiB" X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # + id="9O8VIbfl0inp" X_train = X_train.values.squeeze().tolist() X_test = X_test.values.squeeze().tolist() y_train = np.array([y_train.values.squeeze().tolist()]) y_test = np.array([y_test.values.squeeze().tolist()]) # - print(process_tweet(X_train[0])) # + [markdown] id="hEOOD0DD0HN9" # ### Build Freqs # - # Cell bisa dijalankan atau langsung saja import file `freqs.json` # + colab={"base_uri": "https://localhost:8080/"} id="mFFFA72t0GJa" outputId="c733f0af-f623-4fb2-ff84-c20ac78dfc72" # freqs = count_tweets(X_train, y_train) # - # check output print("type(freqs) = " + str(type(freqs))) print("len(freqs) = " + str(len(freqs.keys()))) # + # save data ## os.makedirs(name="data", exist_ok=True) ## with open('data/freqs.json', 'wb') as fp: ##pickle.dump(freqs, fp) # - with open('data/freqs_instag.json', 'rb') as f: freqs = pickle.load(f) # + [markdown] id="3nODmHg3LWsh" # # Naive Bayes Algorithm # - def train_naive_bayes(freqs, X_train, y_train): loglikelihood = {} logprior = 0 # menghitung v, jumlah 
dari unik word di dalam vocabulary vocab = set([pair[0] for pair in freqs.keys()]) # pakai set untuk eleminasi word yang sama V = len(vocab) # hitung N_pos and N_neg N_pos = N_neg = 0 for pair in freqs.keys():# bentuk ('benci', 0) # jika label positif atau lebih dari 0 if pair[1] > 0: N_pos += freqs[pair] # else, maka label negatif else: N_neg += freqs[pair] # hitung, jumlah dokumen D = len(y_train[-1]) # hitung D_pos atau jumlah dokumen yang positif D_pos = sum(pos for pos in y_train[-1] if pos > 0) # hitung D_neg atau jumlah dokumen yang negatif D_neg = D - D_pos # hitung logprior logprior = np.log(D_pos) - np.log(D_neg) # untu setiap kata di dalam vocabulary for word in vocab: # dapatkan frekuensi positive dan negatif dalam word freq_pos = lookup(freqs, word, 1.0) freq_neg = lookup(freqs, word, 0.0) # hitung probabilitas setip word adalah positif maupun negatif p_w_pos = (freq_pos + 1) / (N_pos + V) p_w_neg = (freq_neg + 1) / (N_neg + V) # calculate log likelihood dari kata loglikelihood[word] = np.log(p_w_pos/p_w_neg) return logprior, loglikelihood # tes algoritma logprior, loglikelihood = train_naive_bayes(freqs, X_train, y_train) print(logprior) print(len(loglikelihood)) # # Test Naive Bayes Algorithm def naive_bayes_predict(tweet, logprior, loglikelihood): # prosses word word_l = process_tweet(tweet) # inisiasi probabilitas dengan 0 p = 0 # tambah logprior p += logprior for word in word_l: # cek jika kata ada didalam loglikehood ditionary if word in loglikelihood: # tambahkan loglikehood dari kata tersebut ke probabilitas p += loglikelihood.get(word) return p # test dengan tweet sendiri. my_tweet = 'Ganteng sekali dia.' p = naive_bayes_predict(my_tweet, logprior, loglikelihood) print('Output adalah', p) def test_naive_bayes(X_test, y_test, logprior, loglikelihood): y_hats = [] for tweet in X_test: # jika prediksi > 0 if naive_bayes_predict(tweet, logprior, loglikelihood) > 0: # prediksi kelas adalah 1 y_hat_i = 1 else: # else, maka prediksi kelas adalah 0 y_hat_i = 0 # tambahkan hasil prediksi kelas ke list y_hats y_hats.append(y_hat_i) # error adalah rata-rata nilai absolut dari perbedaan y_hats dan y_test error = np.mean(np.absolute(y_hats - y_test[-1])) # akurasi adalah 1 - erros accuracy = 1 - error return accuracy print("Naive Bayes accuracy = %0.4f" % (test_naive_bayes(X_test, y_test, logprior, loglikelihood))) X_test[6] for tweet in ['Cowok mac cuihhh', 'Orang kaya malahn berperilaku sebagai mana orang sederhana', 'Romantisme hanya dia dan istri yg tau.']: p = naive_bayes_predict(tweet, logprior, loglikelihood) print(f'{tweet} -> {p:.2f}') # # Filter words by ratio dari jumlah positive dan negativ def get_ratio(freqs, word): pos_neg_ratio = {'positive':0, 'negative':0, 'ratio':0.0} # gunakan fungsi lookup() untuk menemukan positive count untuk sebuah kata pos_neg_ratio['positive'] = lookup(freqs, word, 1) # gunakan fungsi lookup() untuk meneukan negativ count untuk sebuah kata pos_neg_ratio['negative'] = lookup(freqs, word, 0) # hitung rasio positif negativ count untuk word pos_neg_ratio['ratio'] = (pos_neg_ratio['positive'] + 1) / (pos_neg_ratio['negative'] + 1) return pos_neg_ratio get_ratio(freqs, 'belagu') def get_words_by_threshold(freqs, label, threshold): word_list = {} for key in freqs.keys(): word, _ = key # dapatkan positive atau negative ratio untuk sebuah kata pos_neg_ratio = get_ratio(freqs, word) # jika label adalah 1 dan ratio lebih atau sama dengan threshold if label == 1 and pos_neg_ratio['ratio'] >= threshold: # tambahkan pos_neg_ratio ke dictionary 
word_list[word] = pos_neg_ratio # jika label = 0 dan pos_neg_ratio kurang dari sama dengan threshold elif label == 0 and pos_neg_ratio['ratio'] <= threshold: word_list[word] = pos_neg_ratio return word_list get_words_by_threshold(freqs, label=0, threshold=0.05) get_words_by_threshold(freqs, label=1, threshold=10) # # Error Analysis print('Truth Predicted Tweet') for x, y in zip(X_test, y_test[-1]): y_hat = naive_bayes_predict(x, logprior, loglikelihood) if y != (np.sign(y_hat) > 0): print('%d\t%0.2f\t%s' % (y, np.sign(y_hat) > 0, ' '.join( process_tweet(x)).encode('ascii', 'ignore'))) # # Predict own tweet # + my_tweet = 'Lo mau cari gara-gara' p = naive_bayes_predict(my_tweet, logprior, loglikelihood) print(p) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="95rZbZREmRvE" # # Depth First Search (DFS) # + id="SicRUvhEiaes" # !git clone https://github.com/lmcanavals/algorithmic_complexity.git 2>/dev/null # + id="ILQdP2LYif2y" from algorithmic_complexity.aclib import graphstuff as gs import numpy as np import networkx as nx # + colab={"base_uri": "https://localhost:8080/"} id="0Vn6ZpZNhniI" outputId="e4b62845-b6c1-402a-de33-28146a3cb56d" # %%file 1.adjlist 0 3 4 1 3 5 6 2 4 5 6 3 0 1 7 4 0 2 6 5 1 2 6 7 6 1 2 4 5 7 3 5 # + colab={"base_uri": "https://localhost:8080/", "height": 559} id="cXmVKr8QiVwU" outputId="0a75ba3b-46eb-40e8-a141-faf130e98268" G = nx.read_adjlist('1.adjlist') gs.as_gv(G) # + [markdown] id="RWwrN8jUmZxy" # ## Recursive implementation with networkx graph # + id="HynmAreFhoJC" def _dfs(G, u): if not G.nodes[u]['visited']: G.nodes[u]['visited'] = True for v in G.neighbors(u): if not G.nodes[v]['visited']: G.nodes[v]['π'] = u _dfs(G, v) def dfs(G, s): for u in G.nodes: G.nodes[u]['visited'] = False G.nodes[u]['π'] = -1 _dfs(G, s) # + colab={"base_uri": "https://localhost:8080/", "height": 673} id="j1apoUxqhrqZ" outputId="42748adc-fbe6-490b-f4f2-71cdcc261667" dfs(G, '5') path = [0]*G.number_of_nodes() for v, info in G.nodes.data(): path[int(v)] = int(info['π']) print(path) gs.path2gv(path) # + [markdown] id="RvkHAhzJmfvh" # ## Non recursive implementation with networkx graph # + id="IFM3ooOFiljJ" def dfsWithStack(G, s): for u in G.nodes: G.nodes[u]['visited'] = False G.nodes[u]['π'] = -1 stack = [s] while stack: u = stack.pop() if not G.nodes[u]['visited']: G.nodes[u]['visited'] = True for v in reversed(list(G.neighbors(u))): if not G.nodes[v]['visited']: G.nodes[v]['π'] = u stack.append(v) path = [0]*G.number_of_nodes() for v, info in G.nodes.data(): path[int(v)] = int(info['π']) return path # + colab={"base_uri": "https://localhost:8080/", "height": 673} id="sq1Wtmqrj2-k" outputId="c7ed288b-d716-4585-90c4-800fd4351204" path = dfsWithStack(G, '5') print(path) gs.path2gv(path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="AjiPEaxDWHyi" outputId="ac5b2459-4b1f-44ba-9244-b6818b944e82" #https://stackoverflow.com/questions/50207292/how-to-convert-geotiff-to-jpg-in-python-or-java from osgeo import gdal options_list = [ '-ot Byte', '-of JPEG', '-b 1 2 3', '-scale' ] 
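# A quick gloss on these gdal_translate flags (as I understand them):
#   -ot Byte  -> cast pixel values to unsigned 8-bit
#   -of JPEG  -> write the output in JPEG format
#   -b 1 2 3  -> keep only bands 1-3 (the RGB channels)
#   -scale    -> stretch the input value range to the Byte range 0-255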
options_string = " ".join(options_list) gdal.Translate( '/content/drive/MyDrive/Data/RGB.jpg', '/content/drive/MyDrive/Big/S2A_MSIL2A_20170613T101031_0_55/RGB.tif', options=options_string ) # + colab={"base_uri": "https://localhost:8080/"} id="DzEKLRD9wN6C" outputId="9b421c24-86bd-4bc2-d060-6d24bb7598d3" import os from PIL import Image os.chdir('/content/drive/MyDrive/Data') path = os.getcwd() print(path) im = Image.open('/content/drive/MyDrive/Data/a.tiff') im.convert("RGB").save('/content/drive/MyDrive/Data/out.jpeg', "JPEG", quality=100) # + colab={"base_uri": "https://localhost:8080/"} id="3sPq_g8hszjt" outputId="9dced770-04f9-49b8-f922-be46401490f7" #https://stackoverflow.com/questions/37747021/create-numpy-array-of-images resolution = 64 import glob import numpy as np from PIL import Image X_data = [] files = glob.glob(r"/content/drive/MyDrive/Data/*.jpg") for my_file in files: print(my_file) image = Image.open(my_file).convert('RGB') image = np.array(image) X_data.append(image) print('X_data shape:', np.array(X_data).shape) # + id="kh9fUILIqdje" import io from PIL import Image from PIL import ImageCms def convert_to_srgb(img): '''Convert PIL image to sRGB color space (if possible)''' icc = img.info.get('icc_profile', '') if icc: io_handle = io.BytesIO(icc) # virtual file src_profile = ImageCms.ImageCmsProfile(io_handle) dst_profile = ImageCms.createProfile('sRGB') img = ImageCms.profileToProfile(img, src_profile, dst_profile) return img img = Image.open('/content/drive/MyDrive/Data/RGB.jpg') img_conv = convert_to_srgb(img) if img.info.get('icc_profile', '') != img_conv.info.get('icc_profile', ''): # ICC profile was changed -> save converted file img_conv.save('/content/drive/MyDrive/Data/RGB.jpg', format = 'JPEG', quality = 100, icc_profile = img_conv.info.get('icc_profile','')) # + id="0jQ3zefoWFgC" import cv2 import numpy as np img = cv2.imread('/content/drive/MyDrive/Data/RGB.jpg') res = cv2.resize(img, dsize=(64,64), interpolation=cv2.INTER_CUBIC) # + colab={"base_uri": "https://localhost:8080/"} id="Wsz7VK8-sGoQ" outputId="81b1a2f8-31a8-4277-8ce0-5f33d935f475" # !pip install rasterio # + colab={"base_uri": "https://localhost:8080/", "height": 340} id="XPROntAsr4fl" outputId="0e5b55e7-a604-4dcb-c32b-e1def4ddd97f" #21 classes 256x256 pixels import rasterio from rasterio.plot import show fp = r'/content/drive/MyDrive/UCMerced_LandUse/Images/agricultural/agricultural00.tif' img = rasterio.open(fp) show(img) # + colab={"base_uri": "https://localhost:8080/"} id="sStpckXSsU79" outputId="37a1a8ac-a243-4a40-950e-a1f7e14564eb" print(img.crs) # + colab={"base_uri": "https://localhost:8080/"} id="WUmgf09Ysp6X" outputId="f1236444-0f95-425a-bfa0-de3544777125" print(img.height, img.width) # + colab={"base_uri": "https://localhost:8080/", "height": 286} id="0q7WaBJVtShq" outputId="4792e050-eeb5-4ed0-8557-18b254171559" #for Reading particular band data of image from osgeo import gdal import matplotlib.pyplot as plt image = gdal.Open('/content/drive/MyDrive/UCMerced_LandUse/Images/agricultural/agricultural00.tif', gdal.GA_ReadOnly) # Note GetRasterBand() takes band no.
starting from 1 not 0 band = dataset.GetRasterBand(3) arr = band.ReadAsArray() plt.imshow(arr) # + colab={"base_uri": "https://localhost:8080/"} id="nOQeoAILuB1E" outputId="fea6fa06-4e98-497b-91a0-9569bab26af0" #number of bands image.RasterCount # + colab={"base_uri": "https://localhost:8080/"} id="UuD29n2WuJG-" outputId="3b194c1e-3483-4dd3-e492-824a44f1b264" image.RasterXSize # + colab={"base_uri": "https://localhost:8080/"} id="dZ3tkfpruPpM" outputId="26513015-ef58-4b6a-a216-c30936da2f82" image.RasterYSize # + colab={"base_uri": "https://localhost:8080/"} id="c0tgmnYjuSgA" outputId="25c6e305-a3d4-4ef7-b94b-7ab233ec0596" img = image.GetRasterBand(1) img.GetStatistics(True,True) # shows min, max,mean, s.d. # + colab={"base_uri": "https://localhost:8080/"} id="vWkDcwtSu7Gy" outputId="d53d80f5-b69f-42b0-dd8c-91d732f68979" img = image.GetRasterBand(2) img.GetStatistics(True,True) # shows min, max,mean, s.d. # + colab={"base_uri": "https://localhost:8080/"} id="E8LEUqfgvDYG" outputId="16a8c4ff-ef22-4c6b-de94-935980b33ee4" img = image.GetRasterBand(3) img.GetStatistics(True,True)# shows min, max,mean, s.d. # + id="giTZF8gWwpqA" # + [markdown] id="f9gKgAv5wuqt" # ## Reading tif files and learning through CNN # + id="bR8IKjaDw3O6" import os path = '/content/drive/MyDrive/UCMerced_LandUse/Images' os.chdir(path) # + colab={"base_uri": "https://localhost:8080/"} id="01JMugdBxgQ-" outputId="51008a81-a55c-4829-83a9-3f2a529438fe" x = os.listdir(path) print(x) print(len(x)) # + colab={"base_uri": "https://localhost:8080/"} id="F49wLAaCx_S3" outputId="1b53052a-61ca-4552-a574-cd7f4fe2e8fa" #for Reading particular band data of image from osgeo import gdal import matplotlib.pyplot as plt import os start_path = '.' # current directory for path,dirs,files in os.walk(start_path): print(files) for filename in files: s = os.path.join(path,filename) # Note GetRasterBand() takes band no. starting from 1 not 0 # + colab={"base_uri": "https://localhost:8080/"} id="e2lOA7Ko1aJJ" outputId="99536a08-6894-455a-ceff-48324f0e25c0" from tensorflow.keras.preprocessing.image import ImageDataGenerator # import the needed packages import tensorflow as tf from keras import losses from keras import optimizers from keras import metrics import matplotlib.pyplot as plt import matplotlib.image as img import tensorflow.keras as keras from keras.preprocessing import image import numpy as np from keras.models import model_from_json import os path = '/content/drive/MyDrive/UCMerced_LandUse/Images' os.chdir(path) list_of_dir = os.listdir(path) # define and move to dataset directory datasetdir = path import os os.chdir(datasetdir) # shortcut to the ImageDataGenerator class ImageDataGenerator = keras.preprocessing.image.ImageDataGenerator gen = ImageDataGenerator() iterator = gen.flow_from_directory( os.getcwd(), target_size=(256,256), classes=('harbor', 'river', 'mobilehomepark', 'golfcourse', 'agricultural', 'runway', 'buildings', 'forest', 'airplane', 'baseballdiamond', 'overpass', 'intersection', 'chaparral', 'sparseresidential', 'parkinglot', 'tenniscourt', 'mediumresidential', 'denseresidential', 'beach', 'freeway', 'storagetanks') ) # we can guess that the iterator has a next function, # because all python iterators have one. 
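# With the flow_from_directory defaults (batch_size=32, class_mode='categorical',
# shuffle=True), each call to next() should yield a tuple of a (32, 256, 256, 3)
# float image array and a (32, 21) one-hot label array, and the generator loops
# over the folder indefinitely. Because the labels come back one-hot, a
# 'categorical_crossentropy' loss would be the natural match for the 21-way
# softmax model compiled further below.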
batch = iterator.next() print(len(batch)) print(type(batch[0])) print(batch[0].shape) print(batch[0].dtype) #print(batch[0].max()) #print(batch[1].shape) #print(batch[1].dtype) #print(type(batch[1])) #the first element is an array of 32 images with 256x256 pixels, and 3 color channels, encoded as floats in the range 0 to 255 #The second element contains the 32 corresponding labels. # + colab={"base_uri": "https://localhost:8080/"} id="2WtSwXqD9i8U" outputId="92134010-785e-4093-af8d-433713dc3e25" batch = iterator.next() print(len(batch)) print(type(batch[1])) print(batch[1].shape) print(batch[1].dtype) # + colab={"base_uri": "https://localhost:8080/"} id="Q9oqLY5t9lcs" outputId="94ca4336-0266-451b-9564-5e55926d84c9" #Augmentation by Flipping images #Now, let's make the transformation a bit more complex. This time, the ImageDataGenerator will flip, zoom, and rotate the images on a random basis imgdatagen = ImageDataGenerator( rescale = 1/255., horizontal_flip = True, zoom_range = 0.3, rotation_range = 15., validation_split = 0.1, ) batch_size = 30 height, width = (256,256) train_dataset = imgdatagen.flow_from_directory( os.getcwd(), target_size = (height, width), classes=('harbor', 'river', 'mobilehomepark', 'golfcourse', 'agricultural', 'runway', 'buildings', 'forest', 'airplane', 'baseballdiamond', 'overpass', 'intersection', 'chaparral', 'sparseresidential', 'parkinglot', 'tenniscourt', 'mediumresidential', 'denseresidential', 'beach', 'freeway', 'storagetanks'), batch_size = batch_size, subset = 'training' ) val_dataset = imgdatagen.flow_from_directory( os.getcwd(), target_size = (height, width), classes=('harbor', 'river', 'mobilehomepark', 'golfcourse', 'agricultural', 'runway', 'buildings', 'forest', 'airplane', 'baseballdiamond', 'overpass', 'intersection', 'chaparral', 'sparseresidential', 'parkinglot', 'tenniscourt', 'mediumresidential', 'denseresidential', 'beach', 'freeway', 'storagetanks'), batch_size = batch_size, subset = 'validation' ) model = keras.models.Sequential() initializers = { } model.add( keras.layers.Conv2D( 24, 5, input_shape=(256,256,3), activation='relu', ) ) model.add( keras.layers.MaxPooling2D(2) ) model.add( keras.layers.Conv2D( 48, 5, activation='relu', ) ) model.add( keras.layers.MaxPooling2D(2) ) model.add( keras.layers.Conv2D( 96, 5, activation='relu', ) ) model.add( keras.layers.Flatten() ) model.add( keras.layers.Dropout(0.2) ) model.add( keras.layers.Dense( 21, activation='softmax', ) ) model.summary() model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adamax(lr=0.001), metrics=['acc']) model.fit_generator( train_dataset, validation_data = val_dataset, workers=10, epochs=10, ) # serialize model to JSON https://machinelearningmastery.com/save-load-keras-deep-learning-models/ model_json = model.to_json() with open("model.json", "w") as json_file: json_file.write(model_json) # serialize weights to HDF5 model.save_weights("model.h5") print("Saved model to disk") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py_37_env # language: python # name: py_37_env # --- # # Lecture des données et import du model # + import innvestigate import numpy as np from keras.models import load_model model = load_model('model.h5') import pickle train_x = pickle.load(open('train_x.pickle', "rb")) test_x = pickle.load(open('test_x.pickle', "rb")) test_y = pickle.load(open('test_y.pickle', "rb")) print(test_x.shape) print(test_y.shape) 
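# The labels loaded above appear to be one-hot encoded over 6 classes, so they are
# appended column-wise below and np.lexsort then orders the rows by those label
# columns, grouping samples of the same class together. For reference, integer
# class labels could also be recovered with something like test_y.argmax(axis=1).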
tests = np.append(test_x, test_y, axis=1) print(tests) print(tests.shape) tests_sorted_by_class = tests[np.lexsort((tests[:,-6], tests[:,-5], tests[:,-4], tests[:,-3], tests[:,-2], tests[:,-1]))] # - # # Séparation du dataset par ses classes # + classes_index = {} for i in range(1, 7): #print(i) #print(np.argwhere(tests_sorted_by_class[-i]>0)) classes_index[str(6-i)]=np.argwhere(tests_sorted_by_class[-i]==1) classes = {} classes['0'] = [] classes['1'] = [] classes['2'] = [] classes['3'] = [] classes['4'] = [] classes['5'] = [] for i in tests_sorted_by_class: classes[str(list(i[-6:]).index(1))].append(i[:-6]) print(classes.keys()) for i in classes: print("Len "+str(i)+" = "+str(len(classes[i]))) # + methods = [ ("lrp.z"), ] analyzers = {} method = methods[0] print(methods) print("Analyzer for Method : "+str(method)) analyzer = innvestigate.create_analyzer(method, model) analyzer.fit(train_x, batch_size=256, verbose=1) # - # # Analyse de chaque classe avec LRP et sauvegarde des résultats # + import os classes_name = ["0" ,"1", "2", "3", "4", "5"] analysis_results = {} for i in classes_name: print("Analyzing for class "+str(i)) tmp = [] x = np.array(classes[i]) analysis_results[i] = analyzer.analyze(x) print("analysis results shape : "+str(analysis_results[i].shape)) means = {} for i in classes_name: print("class "+str(classes_name[int(i)])) mat = analysis_results[i] mean = mat.mean(0) means[i] = mean # - # # Import des analyses et représentation en HeatMap # + plots = [] sorted_plots = [] inverse_sort = [] classes_equivalents = ["ALL", "AML", "CLL", "CML", "MDS", "Non-leukemia"] for i in means: plots.append(means[i]) toplot = np.array(plots) print("toplot shape : "+str(toplot.shape)) print(str(toplot)) import seaborn as sns; sns.set() import matplotlib.pyplot as plt print("SHAPE : "+str(toplot.shape)) # We use ax parameter to tell seaborn which subplot to use for this plot f,ax = plt.subplots(1,sharey=True) f.set_figheight(20) f.set_figwidth(25) g1 = sns.heatmap(toplot,cmap="PuBuGn",ax=ax) g1.set(yticklabels = classes_equivalents) plt.yticks(fontsize=20, rotation=0) g1.set_ylabel('') g1.set_xlabel('Importance des variables') # - # # ACP et Clustering Hiéarchique # + from sklearn.preprocessing import StandardScaler import numpy as np import pickle import matplotlib.pyplot as plt import seaborn as sns from scipy.cluster.hierarchy import dendrogram, linkage import scipy.spatial as spatial from sklearn.decomposition import PCA classes_equivalents = ["ALL", "AML", "CLL", "CML", "MDS", "Non-leukemia"] for i in range(6): print("Class "+str(classes_equivalents[i])) sc = StandardScaler() mat = analysis_results[str(i)] Z = sc.fit_transform(mat) acp = PCA(svd_solver='full', n_components=10) x_pca = acp.fit_transform(Z) plt.figure(figsize=(8,6)) sns.clustermap(mat[0]) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + theta0 = 0.6 a0, b0 = 1, 1 print("step 0: mode = unknown") xx = np.linspace(0, 1, 1000) plt.plot(xx, sp.stats.beta(a0,b0).pdf(xx), label="initial") x = sp.stats.bernoulli(theta0).rvs(50) N0, N1 = np.bincount(x, minlength=2) a1, b1 = a0 + N1, b0 + N0 plt.plot(xx, sp.stats.beta(a1, b1).pdf(xx), label="1st") print("step 1: mode = ", (a1 - 1) / (a1 + b1 - 2)) x = sp.stats.bernoulli(theta0).rvs(50) N0, N1 = np.bincount(x, minlength=2) a2, b2 = a1 + N1, b1 + N0 plt.plot(xx, sp.stats.beta(a2, b2).pdf(xx), 
label="2nd"); print("step 2: mode =", (a2 - 1)/(a2 + b2 - 2)) x = sp.stats.bernoulli(theta0).rvs(50) N0, N1 = np.bincount(x, minlength=2) a3, b3 = a2 + N1, b2 + N0 plt.plot(xx, sp.stats.beta(a3, b3).pdf(xx), label="3rd"); print("step 3: mode =", (a3 - 1)/(a3 + b3 - 2)) x = sp.stats.bernoulli(theta0).rvs(50) N0, N1 = np.bincount(x, minlength=2) a4, b4 = a3 + N1, b3 + N0 plt.plot(xx, sp.stats.beta(a4, b4).pdf(xx), label="4th"); print("step 4: mode =", (a4 - 1)/(a4 + b4 - 2)) plt.legend() plt.show() # - from sklearn.datasets import load_boston boston = load_boston() print(boston.DESCR) dfX = pd.DataFrame(boston.data, columns=boston.feature_names) dfY = pd.DataFrame(boston.target, columns=["MEDV"]) df = pd.concat([dfX, dfY],axis=1) df["MEDV"].mean() df["MEDV"].describe() from sklearn.datasets import load_diabetes diabetes = load_diabetes() diabetes.describe from sklearn.datasets import load_iris iris = load_iris() print(iris.DESCR) df = pd.DataFrame(iris.data, columns=iris.feature_names) sy = pd.Series(iris.target, dtype="category") sy = sy.cat.rename_categories(iris.target_names) df['species'] = sy df.head() sns.pairplot(df, hue="species") plt.show() from sklearn.datasets import fetch_20newsgroups newsgroups = fetch_20newsgroups(subset='all') print(newsgroups.target_names) from pprint import pprint pprint(list(newsgroups.target_names)) print(newsgroups.data[1]) from sklearn.datasets import fetch_olivetti_faces olivetti = fetch_olivetti_faces() print(olivetti.DESCR) from sklearn.datasets import fetch_lfw_people lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4) from sklearn.datasets import load_digits digits = load_digits() N=2; M=5; # 두 로우, 5컬럼 fig = plt.figure(figsize=(10,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) for i in range(N): for j in range(M): k = i*M+j ax = fig.add_subplot(N, M, k+1) ax.imshow(digits.images[k], cmap=plt.cm.bone, interpolation="spline36"); ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(digits.target_names[k]) plt.tight_layout() plt.show() # + # ax.imshow? 
# - from sklearn.datasets.mldata import fetch_mldata mnist = fetch_mldata('MNIST original') # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.1 # language: julia # name: julia-1.4 # --- # # ベイズ勉強会 Part 2 ベルヌーイ分布のベイズ推論 # > ベルヌーイ分布のベイズ推論を実践する # # - toc: true # - badges: true # - comments: true # - categories: [bayes] # - image: images/dag1.png # ベイズ勉強会資料は『ベイズ推論による機械学習入門』{% fn 1 %}を元に、途中式計算をできるだけ省略せずに行ったものです。 # # ベルヌーイ分布 # # ベルヌーイ分布は次の確率質量関数で表される確率分布である。 # # この確率質量関数は確率$\mu$で1、$1-\mu$で0を出力する。 # > Important: ベルヌーイ分布の確率質量関数$$Bern(x|\mu) = \mu^x (1-\mu)^{1-x} (x \in \{0, 1\}, \mu \in (0, 1))$$ # ## 問題 # # 今N個のデータ点$\mathcal{D} = \{x_1, x_2, \dots, x_N\} (x_1, \dots, x_N \in \{0, 1\})$が得られたとする。未知のデータ点$x_*$を予測するためにベイズ推測を行いたい。 # # この問題が解ければ、コインを投げた時表が出る確率はどの程度かをベイズ推測で評価できるようになる。 # # ## モデル構築 # # $\mathcal{D}, x_*, \mu$で同時分布を構築する。データ点のとりうる値が2値なので、$\mathcal{D}, x_*$が$\mu$をパラメータに持つベルヌーイ分布から生成されているとする。$\mathcal{D}, x_*, \mu$の関係をDAGで描くと以下のようになる。 # # ![](dags/dag_bern1.png) # # よって同時分布は # # $$ # p(\mathcal{D}, x_*, \mu) = p(\mathcal{D}|\mu)p(x_*|\mu)p(\mu) # $$ # # という尤度関数×事前分布の形で書ける。 # # ### 尤度関数 # # データ点はベルヌーイ分布から独立に生成されるとしているので # # $$ # p(\mathcal{D}|\mu) = \Pi_{n=1}^{N} Bern(x_n|\mu) \\ # p(x_*|\mu) = Bern(x_*|\mu) # $$ # と書ける。 # # ### 事前分布 # # 事前分布$p(\mu)$は$\mu \in (0,1)$を満たすような確率分布である必要がある。これを満たす確率分布に、ベータ分布がある。 # > Important: ベータ分布の確率密度関数 # > $$Beta(\mu|a, b) = C_B(a,b) \mu^{a-1} (1-\mu)^{b-1}$$ # > $$ただし, C_B (a,b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}, \mu \in (0, 1), \{a, b\} \in \mathbb{R^+}を満たす.$$ # $a, b$がベータ分布のパラメータである。 # > Note: ベータ分布の係数とガンマ関数 # > $C_B(a,b)$は正規化されることを保証する係数である。$C_B(a,b)$中の$\Gamma(・)$はガンマ関数である。ガンマ関数は自然数に定義される階乗を一般化したものであり、正の実数$x \in \mathbb{R^+}$に対して、 # > # > $$\Gamma(x) = \int t^{x-1} e^{-t} dt$$ # > # > と定義される。重要な性質として # > # > $$\begin{eqnarray} \Gamma(x+1) &=& x\Gamma(x) \\ \Gamma(1) &=& 1 \end{eqnarray}$$ # > # > を満たし、自然数nに対しては # > # >$$\Gamma(n+1) = n!$$ # > # > が成り立つ。 # ベータ分布を用いることで事前分布$p(\mu)$は # # $$ # p(\mu) = Beta(\mu|a, b) # $$ # # と定義できる。$a,b$も加えてDAGを描き直すと次のように描ける。 # # ![](dags/dag_bern2.png) # # 更に$a,b$を出力する確率分布を考えることもできるが、モデルを複雑化すると計算も煩雑になるのでここまでにしておく。では$a,b$の値はどうするのかというと、事前に決めておくことになる。このような変数を、パラメータのためのパラメータということで、超パラメータ(ハイパーパラメータ)と呼ぶ。 # # ### まとめ # # まとめると、推論のためのモデルは次のように書ける。 # # $$ # \begin{eqnarray} # p(\mathcal{D}, x_*, \mu) &=& p(\mathcal{D}|\mu)p(x_*|\mu)p(\mu) \\ # p(\mathcal{D}|\mu) &=& \Pi_{n=1}^{N} Bern(x_n|\mu) \\ # p(x_*|\mu) &=& Bern(x_*|\mu) \\ # p(\mu) &=& Beta(\mu|a,b) # \end{eqnarray} # $$ # # ## 事後分布の推論 # # 実際に$\mathcal{D}$を観測した後の事後分布$p(\mu|\mathcal{D})$を求める。 # > Note: モデルの事後分布は$p(x_*, \mu|\mathcal{D}) = p(x_*|\mu)p(\mu|\mathcal{D})$だがデータからの学習に関わるのは$p(\mu|\mathcal{D})$の部分のみ。学習のみに推論を絞ってこちらを事後分布と呼ぶ場合も多い。本節でも$p(\mu|\mathcal{D})$を事後分布と呼ぶ。 # ベイズの定理を用いて、 # # $$ # \begin{eqnarray} # p(\mu|\mathcal{D}) &=& \frac{p(\mathcal{D}|\mu)p(\mu)}{p(\mathcal{D})} \\ # &=& \frac{\{\Pi_{n=1}^{N}p(x_n|\mu)\}p(\mu)}{p(\mathcal{D})} \\ # &\propto& \{\Pi_{n=1}^{N}p(x_n|\mu)\}p(\mu) # \end{eqnarray} # $$ # # である。分母は正規化されていることを保証する項であり、分布形状を決めるのは分子の部分であるため3行目では省略している。ベルヌーイ分布もベータ分布も指数部分があり、対数をとると計算が楽になる。 # # $$ # \begin{eqnarray} # \ln p(\mu|\mathcal{D}) &=& \Sigma_{n=1}^{N} \ln p(x_n|\mu) + \ln p(\mu) + const. (対数化により分母は定数項に) \\ # &=& \Sigma_{n=1}^{N} \ln(\mu^{x_n} (1-\mu)^{1-x_n}) + ln(C_B(a,b) \mu^{a-1} (1-\mu)^{b-1}) + const. 
(ベルヌーイ分布, ベータ分布の式を代入) \\ # &=& \Sigma_{n=1}^{N} x_n \ln \mu + \Sigma_{n=1}^{N} (1-x_n) \ln (1-\mu) + (a-1)\ln \mu + (b-1) \ln (1-\mu) + const. (C_B(a,b)は対数化によりconst.に吸収) \\ # &=& (\Sigma_{n=1}^{N} x_n + a - 1)\ln \mu + (N - \Sigma_{n=1}^{N} x_n + b - 1) \ln (1-\mu) + const. # \end{eqnarray} # $$ # # 対数を元に戻すと # # $$ # p(\mu|\mathcal{D}) \propto \mu^{(\Sigma_{n=1}^{N} x_n + a - 1)} (1-\mu)^{N - \Sigma_{n=1}^{N} x_n + b - 1} # $$ # # でありこれはベータ分布の形である。なお定数項は正規化されることを保証する係数となっている。つまり # # $$ # \begin{eqnarray} # p(\mu|\mathcal{D}) &=& Beta(\mu|\hat{a}, \hat{b}) \\ # ただし \hat{a} &=& \Sigma_{n=1}^{N} x_n + a \\ # \hat{b} &=& N - \Sigma_{n=1}^{N} x_n + b # \end{eqnarray} # $$ # # となる。 # > Note: このように、特定の確率分布のパラメータの事前分布とすることで、事後分布が事前分布と同じ形になる確率分布を共役事前分布という。ベルヌーイ分布の共役事前分布はベータ分布である。 # > Note: ベルヌーイ分布の場合は、成功確率パラメータである$\mu$をベータ分布で幅を持たせて推定できることがベイズ推論の意義となる。 # > Note: $N$はデータ点の個数、$\Sigma_{n=1}^{N} x_n$は値が1だったデータ点の個数である。ハイパーパラメータ$a,b$をデータの情報で更新しているという見方ができる。また、$N$が大きくなると、$a,b$が無視できる、すなわちハイパーパラメータが結果に影響しなくなることがわかる。 # ## 予測分布の算出 # # 未知のデータ点$x_*$の予測分布$p(x_*|\mathcal{D})$は$p(x_*, \mu|\mathcal{D}) = p(x_*|\mu)p(\mu|\mathcal{D})$をパラメータ$\mu$について周辺化することで求まる。 # # $$ # \begin{eqnarray} # p(x_*|\mathcal{D}) &=& \int p(x_*|\mu)p(\mu|\mathcal{D}) d\mu \\ # &=& \int Bern(x_*|\mu) Beta(\mu|\hat{a}, \hat{b}) d\mu \\ # &=& C_B(\hat{a},\hat{b}) \int \mu^{x_*} (1-\mu)^{1-x_*} \mu^{\hat{a}-1}(1-\mu)^{\hat{b}-1}d\mu \\ # &=& C_B(\hat{a},\hat{b}) \int \mu^{x_* + \hat{a} -1}(1-\mu)^{1-x_*+\hat{b}-1}d\mu # \end{eqnarray} # $$ # # ここでベータ分布の定義式から # # $$ # \int \mu^{x_* + \hat{a} -1}(1-\mu)^{1-x_*+\hat{b}-1}d\mu = \frac{1}{C_B(x_* + \hat{a}, 1-x_* + \hat{b})} # $$ # # となる。 # > Note: ベータ分布は確率分布なので積分した時1になる。つまり係数$C_B$以外の部分を積分した時の値は係数$C_B$の逆数である。$\int \mu^{x_* + \hat{a} -1}(1-\mu)^{1-x_*+\hat{b}-1}d\mu$はベータ分布の積分から係数$C_B$の部分を除いた形になっている。 # したがって、 # # $$ # \begin{eqnarray} # p(x_*|\mathcal{D}) &=& \frac{C_B(\hat{a},\hat{b})}{C_B(x_* + \hat{a}, 1-x_* + \hat{b})} \\ # &=& \frac{\Gamma(\hat{a}+\hat{b})\Gamma(x_* + \hat{a})\Gamma(1-x_* + \hat{b})}{\Gamma(\hat{a})\Gamma(\hat{b})\Gamma(\hat{a}+\hat{b}+1)} # \end{eqnarray} # $$ # # 複雑な式になっているが$x_*$は0か1しかとり得ないことを利用するともっと単純に書ける。 # # $$ # \begin{eqnarray} # p(x_* = 1|\mathcal{D}) &=& \frac{\Gamma(\hat{a}+\hat{b})\Gamma(1 + \hat{a})\Gamma(\hat{b})}{\Gamma(\hat{a})\Gamma(\hat{b})\Gamma(\hat{a}+\hat{b}+1)} \\ # &=& \frac{\Gamma(\hat{a}+\hat{b}) \hat{a}\Gamma(\hat{a})\Gamma(\hat{b})}{\Gamma(\hat{a})\Gamma(\hat{b})(\hat{a}+\hat{b})\Gamma(\hat{a}+\hat{b})} \\ # &=& \frac{\hat{a}}{\hat{a}+\hat{b}} \\ # p(x_* = 0|\mathcal{D}) &=& \frac{\Gamma(\hat{a}+\hat{b})\Gamma(\hat{a})\Gamma(1 + \hat{b})}{\Gamma(\hat{a})\Gamma(\hat{b})\Gamma(\hat{a}+\hat{b}+1)} \\ # &=& \frac{\hat{b}}{\hat{a}+\hat{b}} # \end{eqnarray} # $$ # # より # # $$ # \begin{eqnarray} # p(x_*|\mathcal{D}) &=& (\frac{\hat{a}}{\hat{a}+\hat{b}})^{x_*} (\frac{\hat{b}}{\hat{a}+\hat{b}})^{1-x_*} \\ # &=& (\frac{\hat{a}}{\hat{a}+\hat{b}})^{x_*} (1-\frac{\hat{a}}{\hat{a}+\hat{b}})^{1-x_*} \\ # &=& Bern(x_*|\frac{\hat{a}}{\hat{a}+\hat{b}}) \\ # &=& Bern(x_*|\frac{\Sigma_{n=1}^{N}x_n + a}{N+a+b}) # \end{eqnarray} # $$ # # と予測分布はベルヌーイ分布の形で書ける。 # > Note: 予測分布についても、$N$が大きくなると、$a,b$が無視できる、すなわちハイパーパラメータが結果に影響しなくなることがわかる。また予測分布の形状が尤度関数と変わっておらず、$\mu$の点推定を行った場合の予測と結局同じことをやっているように見える(特にNが大きければ最尤推定と同じである)が、尤度関数の種類によっては予測分布の形と異なる場合がある。ポアソン分布→負の二項分布、精度未知の1次元ガウス分布→Studentのt分布などの例がある。また、事後分布として$\mu$の分布が得られていて、幅のある推定ができている。 # # Juliaによる実装 # # [Turing.jlのTutorial](https://turing.ml/v0.8.3/tutorials/0-introduction/)より。 # # ## 素朴に # + # パッケージのimport # 乱数生成のためのパッケージ using 
Random # グラフ描画用のパッケージ using Plots # 確率分布の関数を呼び出すパッケージ using Distributions # + # 真の成功確率を0.5と置く p_true = 0.5 # 0~100までの数列 Ns = 0:100; # + # 100回ベルヌーイ試行を行う Random.seed!(12) data = rand(Bernoulli(p_true), last(Ns)) # 最初の5回 data[1:5] # - # 事前分布をベータ分布で置く。ハイパーパラメータはa=1,b=1とする prior_belief = Beta(1, 1); # + #hide_output # アニメーションをつけるためにStatsPlotsパッケージをimport using StatsPlots # ベイズ推論の進行過程をアニメーションに animation = @gif for (i, N) in enumerate(Ns) # 表がでた回数(heads)と裏が出た回数(tails)を計算 heads = sum(data[1:i-1]) tails = N - heads # 事後確率分布 updated_belief = Beta(prior_belief.α + heads, prior_belief.β + tails) # 描画用の関数 plot(updated_belief, size = (500, 250), title = "Updated belief after $N observations", xlabel = "probability of heads", ylabel = "", legend = nothing, xlim = (0,1), fill=0, α=0.3, w=3) vline!([p_true]) end; # - # ![](animations/bernoulli.gif) # # ## Turing.jlを使ったハミルトニアン・モンテカルロによる近似計算 # + # パッケージのimport # Load Turing and MCMCChains. using Turing, MCMCChains # Load the distributions library. using Distributions # Load StatsPlots for density plots. using StatsPlots # - # モデル設定 @model coinflip(y) = begin # 事前分布 p ~ Beta(1, 1) # 試行回数N N = length(y) for n in 1:N # 各試行の結果はベルヌーイ分布で決定する y[n] ~ Bernoulli(p) end end; # + # HMCの設定 iterations = 1000 ϵ = 0.05 τ = 10 # HMCの実行 chain = sample(coinflip(data), HMC(ϵ, τ), iterations, progress=false); # - # 結果のサマリ p_summary = chain[:p] plot(p_summary, seriestype = :histogram) # + # 解析的に解いた事後分布 N = length(data) heads = sum(data) updated_belief = Beta(prior_belief.α + heads, prior_belief.β + N - heads) # HMCによる事後分布の近似を青で描画 p = plot(p_summary, seriestype = :density, xlim = (0,1), legend = :best, w = 2, c = :blue) # 解析的に解いた事後分布を緑で描画 plot!(p, range(0, stop = 1, length = 100), pdf.(Ref(updated_belief), range(0, stop = 1, length = 100)), xlabel = "probability of heads", ylabel = "", title = "", xlim = (0,1), label = "Closed-form", fill=0, α=0.3, w=3, c = :lightgreen) # 真の成功確率を赤で描画 vline!(p, [p_true], label = "True probability", c = :red) # - # {{ '[須山敦志. 杉山将. ベイズ推論による機械学習入門. 
講談社, 2017.](https://www.kspub.co.jp/book/detail/1538320.html)' | fndetail: 1 }} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Se cargan las librerías que se van a utilizar en ambos ejemplos import math import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import sklearn from sklearn.impute import SimpleImputer from sklearn.compose import make_column_transformer from sklearn.model_selection import cross_val_score from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import make_pipeline from sklearn.preprocessing import PolynomialFeatures # <------ library to perform Polynomial Regression from sklearn.linear_model import Ridge from sklearn.linear_model import LinearRegression from sklearn import metrics from sklearn.metrics import mean_squared_error from sklearn.preprocessing import scale from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from sklearn.linear_model import Lasso from sklearn.model_selection import GridSearchCV from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OrdinalEncoder, OneHotEncoder pd.set_option('display.max_rows', 90) # by default is 10, if change to None print ALL pd.set_option('display.max_columns', 90) # by default is 10, if change to None print ALL # + ## 1) EXTRAER DATOS # Los datos pueden encontrarse en diferentes formatos, en nuestro caso están en formato csv. # Se carga la base de datos train = pd.read_csv('train.csv') #Se encuentra en la misma carpeta que el jupyter notebook test = pd.read_csv('test.csv') #Se encuentra en la misma carpeta que el jupyter notebook print(train.shape) print(test.shape) train # - # # Eliminate of some columns # Veamos a eleminar las columnas que tienen más del 50% de sus valores como nulas en train col_plus_50percent_null = train.isnull().sum()[train.isnull().sum()>train.shape[0]/2] col_plus_50percent_null # Observemos que también hay casi las mismas columnas en test test.isnull().sum()[test.isnull().sum()>test.shape[0]/2] # Entonces nos queda features_drop = ['PoolQC','MiscFeature','Alley','Fence'] train = train.drop(features_drop, axis=1) test = test.drop(features_drop, axis=1) # Comprovemos que ya no tenemos esas variables col_plus_50percent_null = train.isnull().sum()[train.isnull().sum()>train.shape[0]/2] col_plus_50percent_null test.isnull().sum()[test.isnull().sum()>test.shape[0]/2] # # Separación de variables # # Separemos las variables en `X_train`, `X_test`, `y_train`, `y_test`, al igual que elijamos que columnas son numericas, ordinales y nominales # + numerical = train.select_dtypes(include=np.number).columns.tolist() numerical.remove('Id') numerical.remove('SalePrice') nominal = train.select_dtypes(exclude=np.number).columns.tolist() # ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", # "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", # "KitchenQual", "Functional", "GarageCond", "PavedDrive"] ordinal = [] X = train[nominal + ordinal + numerical] #LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) X_REAL_test = test[nominal + ordinal + numerical] # 
- # # Pipelines auxiliares # # Para separar mejor el procesamiento de nuestros datos, utilizamos tres pipelines auxiliares # + # Pipeline datos ordinales ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) # Pipeline datos nominales nominal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) # Pipeline datos numéricos numerical_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) # Pegado de los tres pipelines preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, numerical) ]) # - # Finalmente agregamos todo en un solo pipeline # + # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) # ML_model = Lasso(alpha=190) # ML_model = Ridge(alpha=20) ML_model = LinearRegression() complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), ("estimator", ML_model) ]) # - complete_pipeline # # Predicciones # + complete_pipeline.fit(X_train, y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score:', r2_score(y_test, y_pred)) p1 = max(max(y_pred), max(y_test)) p2 = min(min(y_pred), min(y_test)) plt.plot([p1, p2], [p1, p2], 'b-') plt.scatter(y_test,y_pred) # - # # Generación de archivo para Kaggle # + y_REAL_test = complete_pipeline.predict(X_REAL_test) pred=pd.DataFrame(y_REAL_test) sub_df=pd.read_csv('sample_submission.csv') datasets=pd.concat([sub_df['Id'],pred],axis=1) datasets.columns=['Id','SalePrice'] datasets.to_csv('sample_submission.csv',index=False) # - # Para subir el archivo es [aquí](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview/evaluation) # + # FUNCIONA PIPELINE LASSO WITH numerical = train.select_dtypes(include=np.number).columns.tolist() numerical.remove('Id') numerical.remove('SalePrice') nominal = train.select_dtypes(exclude=np.number).columns.tolist() ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", "KitchenQual", "Functional", "GarageCond", "PavedDrive"] ordinal = [] X = train[nominal + ordinal + numerical] #LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) nominal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) numerical_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) # here we are going to instantiate a ColumnTransformer object with a list of tuples # each of which has a the name of the preprocessor # the transformation pipeline (could be a transformer) # and the list of column names we wish to transform preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, 
numerical) ]) ## If you want to test this pipeline run the following code # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) ML_model = Lasso(alpha=1) ML_model = Ridge(alpha=.1) # ML_model = LinearRegression() complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), # ("scaler", StandardScaler()), # No mejora la estimación escalando # ('poly_features', PolynomialFeatures(degree=2)), # empeora con polynomal features ("estimator", ML_model) ]) complete_pipeline.fit(X_train, y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) p1 = max(max(y_pred), max(y_test)) p2 = min(min(y_pred), min(y_test)) plt.plot([p1, p2], [p1, p2], 'b-') plt.scatter(y_test,y_pred) # - # # ALL IN ONE # + # FUNCIONA PIPELINE LASSO WITH numerical = train.select_dtypes(include=np.number).columns.tolist() numerical.remove('Id') numerical.remove('SalePrice') nominal = train.select_dtypes(exclude=np.number).columns.tolist() ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", "KitchenQual", "Functional", "GarageCond", "PavedDrive"] ordinal = [] X = train[nominal + ordinal + numerical] #LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) nominal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) numerical_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) # here we are going to instantiate a ColumnTransformer object with a list of tuples # each of which has a the name of the preprocessor # the transformation pipeline (could be a transformer) # and the list of column names we wish to transform preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, numerical) ]) ## If you want to test this pipeline run the following code # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) ML_model = Lasso(alpha=190) ML_model = LinearRegression() complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), # ("scaler", StandardScaler()), # No mejora la estimación escalando # ('poly_features', PolynomialFeatures(degree=2)), ("estimator", LinearRegression()) ]) complete_pipeline.fit(X_train, y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) p1 = max(max(y_pred), max(y_test)) p2 = min(min(y_pred), min(y_test)) plt.plot([p1, p2], [p1, p2], 'b-') plt.scatter(y_test,y_pred) # - aux = ct.fit_transform(X) aux = pd.df(aux) # + preprocessed_features = preprocessing_pipeline.fit_transform(X_train) preprocessed_features # + from sklearn.linear_model import Lasso from sklearn.model_selection 
import GridSearchCV lasso=Lasso() parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40,45,50,55,100,200,300]} lasso_regressor=GridSearchCV(lasso,parameters,scoring='neg_mean_squared_error',cv=5) lasso_regressor.fit(preprocessing_pipeline.fit_transform(X_train),y_train) print(lasso_regressor.best_params_) print(lasso_regressor.best_score_) # - # # Encontrando alpha de Lasso (alpha = 180) parameters={'alpha':[100,150,170,180,190,200,220,250,300]} ML_model=Lasso() grid = GridSearchCV(ML_model,parameters,scoring='neg_mean_squared_error',cv=5) grid.fit(preprocessing_pipeline.fit_transform(X_train),y_train) # Convert the results of CV into a dataframe results = pd.DataFrame(grid.cv_results_)[['params', 'mean_test_score', 'rank_test_score']] results.sort_values('rank_test_score') # # Encontrando alpha de Ridge (alpha = 20) parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40,45,50,55,100,200,300]} ML_model=Ridge() grid = GridSearchCV(ML_model,parameters,scoring='neg_mean_squared_error',cv=5) grid.fit(preprocessing_pipeline.fit_transform(X_train),y_train) # Convert the results of CV into a dataframe results = pd.DataFrame(grid.cv_results_)[['params', 'mean_test_score', 'rank_test_score']] results.sort_values('rank_test_score') # ## Numeric missing values # # One Hot Encoder model # + # https://salvatore-raieli.medium.com/a-complete-guide-to-linear-regression-using-gene-expression-data-regularization-f980ba6b11f7 model = Lasso(alpha = 180) model.fit(preprocessing_pipeline.fit_transform(X_train), y_train) y_pred = complete_pipeline.predict(X_test) coefs = model.coef_.flatten() names = X_train.columns genes = list(zip(names, coefs)) feature =pd.DataFrame(genes, columns = ["genes", "coefs"]) feature0 = feature.loc[(feature!=0).any(axis=1)] feature0 = feature[(feature != 0).all(1)] feature0.shape, feature.shape print(feature0.shape, feature.shape) coefs =feature0.sort_values(by=['coefs']) plt.figure(figsize=(20, 15)) g = sns.barplot(x="genes", y="coefs", data=coefs, color= "lightblue") g.figsize=(16,10) plt.xticks(rotation=45) # - feature0 # + # FUNCIONA LASSO X = train[['MSSubClass', 'LotArea', 'OverallQual']] y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) model_lasso = Lasso(alpha=0.01) model_lasso.fit(X_train, y_train) y_pred= model_lasso.predict(X_test) print('Predictions with Polynomial Regression') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # + # LASSO PIPELINE FUNCIONA X = train[['MSSubClass','LotArea','OverallQual','LotFrontage']]#LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) imp_mean = SimpleImputer(missing_values =np.nan, strategy='mean') columns_imp_mean = ['LotFrontage'] scaler = StandardScaler() column_trans = make_column_transformer( (imp_mean,columns_imp_mean), remainder = 'passthrough') ML_model = Lasso(alpha=0.01) pipe = make_pipeline(column_trans, ML_model) print(cross_val_score(pipe,X_train,y_train,cv=5)) pipe.fit(X_train,y_train) y_pred= pipe.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) 
print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # - # primero hace la división de cross-validation y después hace el pipeline, # La diferencia de hacerlo así es que entonces cuando toma promedios para calcular como llenar los missing values, # estos promedios son con respecto al cross-validation cross_val_score(pipe,X,y,cv=5,scoring='accuracy').mean() # + # FUNCIONA PIPELINE LASSO WITH nominal = ["MSZoning", "LotShape", "LandContour", "LotConfig", "Neighborhood", "Condition1", "BldgType", "RoofStyle", "Foundation", "CentralAir", "SaleType", "SaleCondition"] ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", "KitchenQual", "Functional", "GarageCond", "PavedDrive"] numerical = ["LotFrontage", "LotArea", "MasVnrArea", "BsmtFinSF1", "BsmtUnfSF", "TotalBsmtSF", "1stFlrSF", "2ndFlrSF", "GrLivArea", "GarageArea", "OpenPorchSF"] X = train[nominal + ordinal + numerical] #LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) nominal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) numerical_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) # here we are going to instantiate a ColumnTransformer object with a list of tuples # each of which has a the name of the preprocessor # the transformation pipeline (could be a transformer) # and the list of column names we wish to transform preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, numerical) ]) ## If you want to test this pipeline run the following code # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), ("scaler", StandardScaler()), # No mejora la estimación escalando ("estimator", LinearRegression()) ]) complete_pipeline.fit(X_train, y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # - # + # FUNCIONA PIPELINE LASSO WITH nominal = ["MSZoning", "LotShape", "LandContour", "LotConfig", "Neighborhood", "Condition1", "BldgType", "RoofStyle", "Foundation", "CentralAir", "SaleType", "SaleCondition"] ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", "KitchenQual", "Functional", "GarageCond", "PavedDrive"] numerical = ["LotFrontage", "LotArea", "MasVnrArea", "BsmtFinSF1", "BsmtUnfSF", "TotalBsmtSF", "1stFlrSF", "2ndFlrSF", "GrLivArea", "GarageArea", "OpenPorchSF"] X = train[nominal + ordinal + numerical] #LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) nominal_pipeline = Pipeline([ 
("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) numerical_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) # here we are going to instantiate a ColumnTransformer object with a list of tuples # each of which has a the name of the preprocessor # the transformation pipeline (could be a transformer) # and the list of column names we wish to transform preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, numerical) ]) ## If you want to test this pipeline run the following code # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) from sklearn.linear_model import LinearRegression complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), ("estimator", LinearRegression()) ]) complete_pipeline.fit(X_train, y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # - # *A general rule of thumb: drop a dummy-encoded column if using a linear-based model, and do not drop it if using a tree-based model* # + true_value = y_test predicted_value = y_pred plt.figure(figsize=(10,10)) plt.scatter(true_value, predicted_value, c='crimson') # plt.yscale('log') # plt.xscale('log') p1 = max(max(predicted_value), max(true_value)) p2 = min(min(predicted_value), min(true_value)) plt.plot([p1, p2], [p1, p2], 'b-') plt.xlabel('True Values', fontsize=15) plt.ylabel('Predictions', fontsize=15) plt.axis('equal') plt.show() # - # The next cell is from [here](https://mahmoudyusof.github.io/general/scikit-learn-pipelines/) # + # The next cell is from https://mahmoudyusof.github.io/general/scikit-learn-pipelines/ train_df = pd.read_csv("train.csv") test_df = pd.read_csv("test.csv") ## let's create a validation set from the training set msk = np.random.rand(len(train_df)) < 0.8 val_df = train_df[~msk] train_df = train_df[msk] nominal = ["MSZoning", "LotShape", "LandContour", "LotConfig", "Neighborhood", "Condition1", "BldgType", "RoofStyle", "Foundation", "CentralAir", "SaleType", "SaleCondition"] ordinal = ["LandSlope", "OverallQual", "OverallCond", "YearRemodAdd", "ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "BsmtExposure", "KitchenQual", "Functional", "GarageCond", "PavedDrive"] numerical = ["LotFrontage", "LotArea", "MasVnrArea", "BsmtFinSF1", "BsmtUnfSF", "TotalBsmtSF", "1stFlrSF", "2ndFlrSF", "GrLivArea", "GarageArea", "OpenPorchSF"] train_features = train_df[nominal + ordinal + numerical] train_label = train_df["SalePrice"] val_features = val_df[nominal + ordinal + numerical] val_label = val_df["SalePrice"] test_features = test_df[nominal + ordinal + numerical] from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OrdinalEncoder, OneHotEncoder ordinal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OrdinalEncoder()) ]) nominal_pipeline = Pipeline([ ("imputer", SimpleImputer(strategy="most_frequent")), ("encoder", OneHotEncoder(sparse=True, handle_unknown="ignore")) ]) numerical_pipeline = Pipeline([ ("imputer", 
SimpleImputer(strategy="mean")), ("scaler", StandardScaler()) ]) from sklearn.compose import ColumnTransformer # here we are going to instantiate a ColumnTransformer object with a list of tuples # each of which has a the name of the preprocessor # the transformation pipeline (could be a transformer) # and the list of column names we wish to transform preprocessing_pipeline = ColumnTransformer([ ("nominal_preprocessor", nominal_pipeline, nominal), ("ordinal_preprocessor", ordinal_pipeline, ordinal), ("numerical_preprocessor", numerical_pipeline, numerical) ]) ## If you want to test this pipeline run the following code # preprocessed_features = preprocessing_pipeline.fit_transform(train_features) from sklearn.linear_model import LinearRegression complete_pipeline = Pipeline([ ("preprocessor", preprocessing_pipeline), ("estimator", LinearRegression()) ]) complete_pipeline.fit(train_features, train_label) # score = complete_pipeline.score(val_features, val_label) # print(score) # predictions = complete_pipeline.predict(test_features) # + # pipe = make_pipeline(column_trans, ML_model) # print(cross_val_score(complete_pipeline,X_train,y_train,cv=5)) # pipe.fit(X_train,y_train) y_pred = complete_pipeline.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # - # + # LASSO PIPELINE FUNCIONA X = train[['MSSubClass','LotArea','OverallQual','LotFrontage']]#LotFrontage y MasVnrType tiene NaNs y = train['SalePrice'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) imp_mean = SimpleImputer(missing_values =np.nan, strategy='mean') columns_imp_mean = ['LotFrontage'] scaler = StandardScaler() column_trans = make_column_transformer( (imp_mean,columns_imp_mean), remainder = 'passthrough') ML_model = Lasso(alpha=0.01) pipe = make_pipeline(column_trans, ML_model) print(cross_val_score(pipe,X_train,y_train,cv=5)) pipe.fit(X_train,y_train) y_pred= pipe.predict(X_test) print('ERRORS OF PREDICTIONS') print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('r2_score', r2_score(y_test, y_pred)) plt.scatter(y_test,y_pred) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.9 64-bit (''bert'': virtualenv)' # metadata: # interpreter: # hash: eff88277fb73975c12d1396ee5027a873de6a4cc7778f3072fe3a9960a29a15a # name: 'Python 3.6.9 64-bit (''bert'': virtualenv)' # --- import numpy as np import pandas as pd import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from torch.optim.lr_scheduler import OneCycleLR from transformers import BertTokenizerFast, BertModel from transformers import AdamW from utils.datasets import TextDataset from utils.lr_finder import run_lr_finder from models.bert import BertForClassification import tqdm.notebook as tqdm MODEL = "neuralmind/bert-base-portuguese-cased" # # Loading tweet dataset train_df = pd.read_csv("/home/kenzo/datasets/cleaned_tweetsentbr/train.tsv", sep="\t", names=["id", "label", "alfa", "text"], index_col=0) test_df = 
pd.read_csv("/home/kenzo/datasets/cleaned_tweetsentbr/test.tsv", sep="\t", names=["id", "label", "alfa", "text"], index_col=0) # + # TODO: Exploração dos dados # + tags=[] tokenizer = BertTokenizerFast.from_pretrained(MODEL) # - train_ds = TextDataset.from_df(train_df, tokenizer, max_seq_len=128) test_ds = TextDataset.from_df(test_df, tokenizer, max_seq_len=128) # # Preparing model # + from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score from functools import partial metrics = { "accuracy": accuracy_score, "precision": partial(precision_score, average="macro"), "recall": partial(recall_score, average="macro"), "f1": partial(f1_score, average="macro"), } # - gpu = torch.device("cuda:2") # + tags=[] bert_model = BertModel.from_pretrained(MODEL) # - model = BertForClassification(bert_model, 3, metrics, device=gpu).cuda(gpu) train_dl = DataLoader(train_ds, batch_size=64, shuffle=True) test_dl = DataLoader(test_ds, batch_size=64, shuffle=True) # # Finding a good learning rate batches = len(train_dl) epochs = 3 optimizer = AdamW(model.parameters(), lr=1e-7) # scheduler = OneCycleLR(optimizer, max_lr=5e-5, steps_per_epoch=batches, epochs=epochs) criterion = nn.CrossEntropyLoss() run_lr_finder(train_dl, model, optimizer, criterion, device=gpu) scheduler = OneCycleLR(optimizer, max_lr=1e-5, steps_per_epoch=batches, epochs=epochs) # + tags=[] train, test = model.fit(epochs, train_dl, test_dl, criterion, optimizer, scheduler=scheduler) # - torch.save(model.state_dict(), "data/checkpoints/bert_tweetsent_br.ckpt") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # + # Reading csv file apps = pd.read_csv(r'C:/Users/Rishabh/Desktop/pd/ml/appstore.csv') # - apps.head(5) # + #Dropping rows having Nan apps=apps.dropna(how='any') apps # + #Dropping unnecessary columns apps.drop(['URL','ID','Subtitle','Icon URL','Description','Developer','Languages','Primary Genre'],axis=1,inplace=True) apps.head(5) # + #Renaming columns apps.rename(columns={'Average User Rating':'Avg','User Rating Count':'UserRating','In-app Purchases':'Inapp','Original Release Date':'Ordate','Current Version Release Date':'crdate'},inplace=True) # - apps.head(5) # + # Checking for the number of months since last update apps['Ordate'] = pd.to_datetime(apps.Ordate) apps['crdate'] = pd.to_datetime(apps.crdate) startyear = apps['Ordate'].dt.year startmonth = apps['Ordate'].dt.month endyear = apps['crdate'].dt.year endmonth = apps['crdate'].dt.month nummonths = (endyear - startyear)*12 + (endmonth - startmonth) apps = apps[nummonths<=6] # - apps.head(5) # + # Games with more than 200 UserRating and more than 4.0 AverageRating apps=apps[apps['UserRating']>=200] apps=apps[apps['Avg']>=4] apps.head(5) # + # Removing words ['Games','Entertainment'] to avoid confusion apps.Genres = apps.Genres.str.replace('Games,','').str.replace('Entertainment','') apps.head(5) # + # Grouping games with Puzzle/Board Genre puzzlegame = apps[apps.Genres.str.contains('Puzzle' or 'Board')] # - puzzlegame # + # Grouping games with Family/Education Genre familygame = apps[apps.Genres.str.contains('Family' or 'Education')] # - familygame # + # Grouping games with Adventure/Role/Role Playing Genre adventuregame = apps[apps.Genres.str.contains('Adventure' or 'Role' or 'Role Playing')] # - adventuregame # + # Grouping games with Action Genre actiongame = 
apps[apps.Genres.str.contains('Action')] # - actiongame # + # Inferences # - p=puzzlegame.UserRating.mean() f=familygame.UserRating.mean() ad=adventuregame.UserRating.mean() ac=actiongame.UserRating.mean() import matplotlib.pyplot as plt import matplotlib.lines as mlines import numpy as np # + # Simple Bar Graph plot of Average User Rating objects = ('Puzzle Games', 'Family Games', 'Adventure Games', 'Action Games') y_pos = np.arange(len(objects)) avgUserRating = [p,f,ad,ac] plt.bar(y_pos, avgUserRating, align='center', alpha=0.7) plt.xticks(y_pos, objects) plt.ylabel('Average User Rating') plt.title('Plot of Average User Rating') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="FhGuhbZ6M5tl" # ##### Copyright 2018 The TensorFlow Authors. # + cellView="form" colab_type="code" id="AwOEIRJC6Une" colab={} #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + cellView="form" colab_type="code" id="KyPEtTqk6VdG" colab={} #@title MIT License # # Copyright (c) 2017 # # Permission is hereby granted, free of charge, to any person obtaining a # # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. # + [markdown] colab_type="text" id="EIdT9iu_Z4Rb" # # Basic regression: Predict fuel efficiency # + [markdown] colab_type="text" id="bBIlTPscrIT9" # # # # # #
    # + [markdown] colab_type="text" id="AHp3M9ZmrIxj" # In a *regression* problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture). # # This notebook uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) Dataset and builds a model to predict the fuel efficiency of late-1970s and early 1980s automobiles. To do this, we'll provide the model with a description of many automobiles from that time period. This description includes attributes like: cylinders, displacement, horsepower, and weight. # # This example uses the `tf.keras` API, see [this guide](https://www.tensorflow.org/guide/keras) for details. # + colab_type="code" id="moB4tpEHxKB3" colab={} # Use seaborn for pairplot # !pip install seaborn # Use some functions from tensorflow_docs # !pip install git+https://github.com/tensorflow/docs # + colab_type="code" id="1rRo8oNqZ-Rj" colab={} from __future__ import absolute_import, division, print_function, unicode_literals import pathlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns # + colab_type="code" id="9xQKvCJ85kCQ" colab={} try: # # %tensorflow_version only exists in Colab. # %tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers print(tf.__version__) # + colab_type="code" id="Qz4HfsgRQUiV" colab={} import tensorflow_docs as tfdocs import tensorflow_docs.plots import tensorflow_docs.modeling # + [markdown] colab_type="text" id="F_72b0LCNbjx" # ## The Auto MPG dataset # # The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/). # # # + [markdown] colab_type="text" id="gFh9ne3FZ-On" # ### Get the data # First download the dataset. # + colab_type="code" id="p9kxxgzvzlyz" colab={} dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data") dataset_path # + [markdown] colab_type="text" id="nslsRLh7Zss4" # Import it using pandas # + colab_type="code" id="CiX2FI4gZtTt" colab={} column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin'] raw_dataset = pd.read_csv(dataset_path, names=column_names, na_values = "?", comment='\t', sep=" ", skipinitialspace=True) dataset = raw_dataset.copy() print(dataset.shape) print(dataset.head()) print(dataset.tail()) dataset.tail() # + [markdown] colab_type="text" id="3MWuJTKEDM-f" # ### Clean the data # # The dataset contains a few unknown values. # + colab_type="code" id="JEJHhN65a2VV" colab={} dataset.isna().sum() # + [markdown] colab_type="text" id="9UPN0KBHa_WI" # To keep this initial tutorial simple drop those rows. # + colab_type="code" id="4ZUDosChC1UN" colab={} dataset = dataset.dropna() dataset.isna().sum() dataset.tail() # + [markdown] colab_type="text" id="8XKitwaH4v8h" # The `"Origin"` column is really categorical, not numeric. 
So convert that to a one-hot: # + colab_type="code" id="gWNTD2QjBWFJ" colab={} dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'}) # + colab_type="code" id="ulXz4J7PAUzk" colab={} dataset = pd.get_dummies(dataset, prefix='', prefix_sep='') dataset.tail() # + [markdown] colab_type="text" id="Cuym4yvk76vU" # ### Split the data into train and test # # Now split the dataset into a training set and a test set. # # We will use the test set in the final evaluation of our model. # + colab_type="code" id="qn-IGhUE7_1H" colab={} train_dataset = dataset.sample(frac=0.8,random_state=0) test_dataset = dataset.drop(train_dataset.index) print(train_dataset.shape) train_dataset # + id="00ASbRmi5_8R" colab_type="code" colab={} print(test_dataset.shape) test_dataset # + [markdown] colab_type="text" id="J4ubs136WLNp" # ### Inspect the data # # Have a quick look at the joint distribution of a few pairs of columns from the training set. # + colab_type="code" id="oRKO_x8gWKv-" colab={} sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde") # + [markdown] colab_type="text" id="gavKO_6DWRMP" # Also look at the overall statistics: # + colab_type="code" id="yi2FzC3T21jR" colab={} train_stats = train_dataset.describe() train_stats.pop("MPG") train_stats = train_stats.transpose() train_stats # + [markdown] colab_type="text" id="Db7Auq1yXUvh" # ### Split features from labels # # Separate the target value, or "label", from the features. This label is the value that you will train the model to predict. # + colab_type="code" id="t2sluJdCW7jN" colab={} train_labels = train_dataset.pop('MPG') test_labels = test_dataset.pop('MPG') train_labels # + id="GHkyg0U998l0" colab_type="code" colab={} test_labels # + [markdown] colab_type="text" id="mRklxK5s388r" # ### Normalize the data # # Look again at the `train_stats` block above and note how different the ranges of each feature are. # + [markdown] colab_type="text" id="-ywmerQ6dSox" # It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input. # # Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on. # + colab_type="code" id="JlC5ooJrgjQF" colab={} def norm(x): return (x - train_stats['mean']) / train_stats['std'] normed_train_data = norm(train_dataset) normed_test_data = norm(test_dataset) normed_train_data # + [markdown] colab_type="text" id="BuiClDk45eS4" # This normalized data is what we will use to train the model. # # Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production. # + [markdown] colab_type="text" id="SmjdzxKzEu1-" # ## The model # + [markdown] colab_type="text" id="6SWtkIjhrZwa" # ### Build the model # # Let's build our model. Here, we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. 
The model building steps are wrapped in a function, `build_model`, since we'll create a second model, later on. # + colab_type="code" id="c26juK7ZG8j-" colab={} def build_model(): model = keras.Sequential([ layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]), layers.Dense(64, activation='relu'), layers.Dense(1) ]) optimizer = tf.keras.optimizers.RMSprop(0.001) model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse']) return model # + colab_type="code" id="cGbPb-PHGbhs" colab={} model = build_model() # + [markdown] colab_type="text" id="Sj49Og4YGULr" # ### Inspect the model # # Use the `.summary` method to print a simple description of the model # + colab_type="code" id="ReAD0n6MsFK-" colab={} model.summary() # + [markdown] colab_type="text" id="Vt6W50qGsJAL" # # Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on it. # + colab_type="code" id="-d-gBaVtGTSC" colab={} example_batch = normed_train_data[:10] example_result = model.predict(example_batch) example_result # + [markdown] colab_type="text" id="QlM8KrSOsaYo" # It seems to be working, and it produces a result of the expected shape and type. # + [markdown] colab_type="text" id="0-qWCsh6DlyH" # ### Train the model # # Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object. # + colab_type="code" id="sD7qHCmNIOY0" colab={} EPOCHS = 1000 history = model.fit( normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[tfdocs.modeling.EpochDots()]) # + [markdown] colab_type="text" id="tQm3pc0FYPQB" # Visualize the model's training progress using the stats stored in the `history` object. # + colab_type="code" id="4Xj91b-dymEy" colab={} hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch hist.tail() # + colab_type="code" id="czYtZS9A6D-X" colab={} plotter = tfdocs.plots.HistoryPlotter(smoothing_std=2) # + colab_type="code" id="nMCWKskbUTvG" colab={} plotter.plot({'Basic': history}, metric = "mae") plt.ylim([0, 10]) plt.ylabel('MAE [MPG]') # + colab_type="code" id="N9u74b1tXMd9" colab={} plotter.plot({'Basic': history}, metric = "mse") plt.ylim([0, 20]) plt.ylabel('MSE [MPG^2]') # + [markdown] colab_type="text" id="AqsuANc11FYv" # This graph shows little improvement, or even degradation in the validation error after about 100 epochs. Let's update the `model.fit` call to automatically stop training when the validation score doesn't improve. We'll use an *EarlyStopping callback* that tests a training condition for every epoch. If a set amount of epochs elapses without showing improvement, then automatically stop the training. # # You can learn more about this callback [here](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping). # + colab_type="code" id="fdMZuhUgzMZ4" colab={} model = build_model() # The patience parameter is the amount of epochs to check for improvement early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10) early_history = model.fit(normed_train_data, train_labels, epochs=EPOCHS, validation_split = 0.2, verbose=0, callbacks=[early_stop, tfdocs.modeling.EpochDots()]) # + colab_type="code" id="LcopvQh3X-kX" colab={} plotter.plot({'Early Stopping': early_history}, metric = "mae") plt.ylim([0, 10]) plt.ylabel('MAE [MPG]') # + [markdown] colab_type="text" id="3St8-DmrX8P4" # The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? 
We'll leave that decision up to you. # # Let's see how well the model generalizes by using the **test** set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world. # + colab_type="code" id="jl_yNr5n1kms" colab={} loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2) print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae)) # + [markdown] colab_type="text" id="ft603OzXuEZC" # ### Make predictions # # Finally, predict MPG values using data in the testing set: # + colab_type="code" id="Xe7RXH3N3CWU" colab={} test_predictions = model.predict(normed_test_data).flatten() a = plt.axes(aspect='equal') plt.scatter(test_labels, test_predictions) plt.xlabel('True Values [MPG]') plt.ylabel('Predictions [MPG]') lims = [0, 50] plt.xlim(lims) plt.ylim(lims) _ = plt.plot(lims, lims) # + [markdown] colab_type="text" id="19wyogbOSU5t" # It looks like our model predicts reasonably well. Let's take a look at the error distribution. # + colab_type="code" id="f-OHX4DiXd8x" colab={} error = test_predictions - test_labels plt.hist(error, bins = 25) plt.xlabel("Prediction Error [MPG]") _ = plt.ylabel("Count") # + [markdown] colab_type="text" id="m0CB5tBjSU5w" # It's not quite gaussian, but we might expect that because the number of samples is very small. # + [markdown] colab_type="text" id="vgGQuV-yqYZH" # ## Conclusion # # This notebook introduced a few techniques to handle a regression problem. # # * Mean Squared Error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems). # * Similarly, evaluation metrics used for regression differ from classification. A common regression metric is Mean Absolute Error (MAE). # * When numeric input data features have values with different ranges, each feature should be scaled independently to the same range. # * If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting. # * Early stopping is a useful technique to prevent overfitting. 
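# As a final, hedged illustration of the earlier caution about reusing the *training* statistics for
# any new data: the sketch below is not part of the original tutorial, and the raw feature values of
# the record are made up purely for demonstration. It only shows how such a record would be prepared
# with the existing `norm` / `train_stats` helpers before calling `model.predict`.

# +
# A hypothetical raw record with the same columns as `train_dataset`
# (after 'Origin' was one-hot encoded into USA / Europe / Japan columns).
new_car = pd.DataFrame([{
    'Cylinders': 4, 'Displacement': 140.0, 'Horsepower': 90.0, 'Weight': 2400.0,
    'Acceleration': 15.0, 'Model Year': 80, 'USA': 1.0, 'Europe': 0.0, 'Japan': 0.0
}], columns=train_dataset.keys())

# Normalize with the statistics computed on the training set -- never with statistics
# derived from the new data itself -- and then predict.
normed_new_car = norm(new_car)
print(model.predict(normed_new_car))
# -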
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 1105} colab_type="code" executionInfo={"elapsed": 14392, "status": "ok", "timestamp": 1554066245719, "user": {"displayName": "", "photoUrl": "https://lh5.googleusercontent.com/-gky4Rdx3FyM/AAAAAAAAAAI/AAAAAAAAB1w/n8k1h4Eyqt8/s64/photo.jpg", "userId": "02667765190758511137"}, "user_tz": 240} id="Wx3nOpUi02-L" outputId="d3c1494b-8e58-4276-a353-40f23145d7c3" # !wget https://github.com/MNRKhan/aps360-project/raw/master/baseline_watershed/baseline_watershed.py # !wget https://github.com/MNRKhan/aps360-project/raw/master/modules/data_loader.py # !wget https://github.com/MNRKhan/aps360-project/raw/master/modules/metrics.py # !wget https://github.com/MNRKhan/aps360-project/raw/master/modules/visualizer.py # + colab={} colab_type="code" id="dXCEOX-qCANF" import numpy as np import random import torch from torch.utils.data import DataLoader from torchvision import transforms from baseline_watershed import * from data_loader import * from metrics import * from visualizer import * # + colab={"base_uri": "https://localhost:8080/", "height": 337535} colab_type="code" executionInfo={"elapsed": 34700, "status": "ok", "timestamp": 1554066266080, "user": {"displayName": "", "photoUrl": "https://lh5.googleusercontent.com/-gky4Rdx3FyM/AAAAAAAAAAI/AAAAAAAAB1w/n8k1h4Eyqt8/s64/photo.jpg", "userId": "02667765190758511137"}, "user_tz": 240} id="-CsW_KtgBqTj" outputId="63f52d8e-b79f-4ed3-91b4-aa400241bd15" # !rm -rf __MACOSX # !rm -rf *.zip # !wget https://github.com/MNRKhan/aps360-project/raw/master/datasets/train2014/data_person_vehicle.zip # !unzip data_person_vehicle.zip # !rm -rf __MACOSX # !rm -rf *.zip # + colab={} colab_type="code" id="cw4052i7RyQX" # Set random seeds torch.manual_seed(360) np.random.seed(360) random.seed(360) # Form dataset transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) dataset = ImageMaskDataset("./data", transform, size=1000) # Dataset sizes size = len(dataset) train_size = int(0.6 * size) valid_size = int(0.2 * size) test_size = size - train_size - valid_size # Splitting datasets train_data, valid_data, test_data = torch.utils.data.random_split(dataset, [train_size, valid_size, test_size]) # Making dataloader valid = DataLoader(valid_data, batch_size=1, shuffle=True, num_workers=0) # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 34661, "status": "ok", "timestamp": 1554066266086, "user": {"displayName": "", "photoUrl": "https://lh5.googleusercontent.com/-gky4Rdx3FyM/AAAAAAAAAAI/AAAAAAAAB1w/n8k1h4Eyqt8/s64/photo.jpg", "userId": "02667765190758511137"}, "user_tz": 240} id="f4FP9AztIg3l" outputId="8232a01c-b8db-4704-e380-c81c73c352e4" print("Full data set: ", size) print("Training size: ", train_size) print("Validation size: ", valid_size) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 46930, "status": "ok", "timestamp": 1554066278368, "user": {"displayName": "", "photoUrl": "https://lh5.googleusercontent.com/-gky4Rdx3FyM/AAAAAAAAAAI/AAAAAAAAB1w/n8k1h4Eyqt8/s64/photo.jpg", "userId": "02667765190758511137"}, "user_tz": 240} id="8t0KVathWdSc" outputId="5a84aad2-78df-4a72-bbe5-4a8365170004" print(watershed_iou(valid_data)) # + colab={} 
colab_type="code" id="7wB4hlQfiYl-" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="CgOelJ11JvRF" # ### Converting preprocessed text into vectors using fasttext embedding. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 18670, "status": "ok", "timestamp": 1560094777544, "user": {"displayName": "", "photoUrl": "", "userId": "10838423032874605851"}, "user_tz": -360} id="SXe8uEAgIRjR" outputId="b9f5eb95-28fb-461b-8668-ee9977f2495c" from google.colab import drive import numpy as np import pandas as pd drive.mount('/content/gdrive') bengali_news_after_preprocessing = pd.read_pickle('/content/gdrive/My Drive/Projects/Bengali Text Classification/Bengali_Text_after_preprocessing.pkl') from sklearn.externals import joblib filename = '/content/gdrive/My Drive/Projects/Bengali Text Classification/fastText_Bangla_content_full.sav' loaded_model = joblib.load(filename) # + colab={} colab_type="code" id="zoutPCQLJIk0" import keras.backend as K import numpy as np number_of_sample, max_number_of_words, word_vector_size = 40000, 50, 32 temp = bengali_news_after_preprocessing.loc[:number_of_sample-1,:max_number_of_words-1] # + colab={} colab_type="code" id="NOhuZMPKKNG4" temp = temp.replace(['ঘস', 'ফগ', 'ঝবঃ', 'ঋন', 'ঊঘ', '\u09e4', 'ওৎ', 'গথ', 'খঢ', 'ঝ’', ' ং', 'ঔ', 'ডড', 'গঘ'], None) X = np.zeros((number_of_sample, max_number_of_words, word_vector_size), dtype=K.floatx()) for i in temp.index: X[i,:,:] = loaded_model.wv[temp.loc[i,:]] # + [markdown] colab_type="text" id="0EqTbmFsLDGd" # ### preparing labels from csv # + colab={} colab_type="code" id="zbz6a88_KTlW" bengali_news = pd.read_pickle('/content/gdrive/My Drive/Projects/Bengali Text Classification/40k_bangla_newspaper_article.p') bengali_news_dataframe = pd.DataFrame(bengali_news) y = bengali_news_dataframe['category'] # + colab={} colab_type="code" id="jENsUpfoLzj7" from sklearn import preprocessing import keras import numpy as np le = preprocessing.LabelEncoder() le.fit(y) enc = le.transform(y) y = keras.utils.to_categorical(enc) # + [markdown] colab_type="text" id="2VSBBww4YleT" # ### Train, Validation and Test Split # + colab={} colab_type="code" id="xFtKBeX7USSX" from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(X, y, shuffle = True, test_size=0.125) X_val, X_test, y_val, y_test = train_test_split(X_val, y_val, shuffle = True, test_size=0.20) # + [markdown] colab_type="text" id="3XlcaonuzHJJ" # ### Training CNN # + colab={} colab_type="code" id="Xni95IcIzDA8" from keras.models import Sequential from keras.layers import Conv1D, Dropout, Dense, Flatten, LSTM, MaxPooling1D, Bidirectional from keras.optimizers import Adam from keras.callbacks import EarlyStopping, TensorBoard model = Sequential() model.add(Conv1D(32, kernel_size=3, activation='relu', padding='same', input_shape=(max_number_of_words, word_vector_size))) model.add(Conv1D(32, kernel_size=3, activation='relu', padding='same')) model.add(Conv1D(32, kernel_size=3, activation='relu', padding='same')) model.add(MaxPooling1D(pool_size=3)) model.add(Flatten()) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.2)) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.25)) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.25)) 
model.add(Dense(13, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0001, decay=1e-6), metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/", "height": 544} colab_type="code" executionInfo={"elapsed": 791, "status": "ok", "timestamp": 1560098458352, "user": {"displayName": "", "photoUrl": "", "userId": "10838423032874605851"}, "user_tz": -360} id="Cq5g8SmHG97m" outputId="d4c1d563-fc1f-4648-95f4-7a30b9868602" model.summary() # + colab={"base_uri": "https://localhost:8080/", "height": 3434} colab_type="code" executionInfo={"elapsed": 63808, "status": "ok", "timestamp": 1560098528074, "user": {"displayName": "", "photoUrl": "", "userId": "10838423032874605851"}, "user_tz": -360} id="mghobpf7ZmNw" outputId="d68ddbb9-95ee-4847-9f83-128474563ac5" history = model.fit(X_train, y_train, batch_size= 500, shuffle=True, epochs= 100, validation_data=(X_val, y_val)) # + colab={} colab_type="code" id="vPP36yLOaKbO" predicts = model.predict(X_test) import numpy as np def decode(le, one_hot): dec = np.argmax(one_hot, axis=1) return le.inverse_transform(dec) y_test = decode(le, y_test) y_preds = decode(le, predicts) # + colab={"base_uri": "https://localhost:8080/", "height": 615} colab_type="code" executionInfo={"elapsed": 760, "status": "ok", "timestamp": 1560098542912, "user": {"displayName": "", "photoUrl": "", "userId": "10838423032874605851"}, "user_tz": -360} id="JHNM_Wbh4rpQ" outputId="70df3a2d-2ef7-4ba1-b49f-1f2c9ffc9568" from sklearn import metrics print(metrics.accuracy_score(y_test, y_preds)) print(metrics.confusion_matrix(y_test, y_preds)) print(metrics.classification_report(y_test, y_preds)) # + colab={} colab_type="code" id="gg3kjzUjP1XJ" accuracy, val_accuracy = np.array(history.history["acc"]), np.array(history.history["val_acc"]) accuracy, val_accuracy = accuracy.reshape(100,1), val_accuracy.reshape(100,1) accuracies = np.concatenate((accuracy,val_accuracy),axis=1) np.savetxt('/content/gdrive/My Drive/Projects/Bengali Text Classification/temp.csv',accuracies,delimiter=",") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import itertools from datetime import timedelta import os import matplotlib.pyplot as plt import matplotlib.patches as mpatches # %matplotlib inline import seaborn as sns sns.set_context("poster") plt.style.use('fivethirtyeight') #plt.style.use('ggplot') plt.rcParams['axes.labelweight'] = 'bold' plt.rcParams['axes.titleweight'] = 'bold' plt.rcParams['figure.titleweight'] = 'bold' from IPython.display import display, HTML import numpy as np import math import datetime import time import sys import networkx as nx import sklearn print("sklearn.__version__:",sklearn.__version__) import pylab as pl import matplotlib.dates as mdates print(sys.version) # + _cell_guid="d2122447-d6a8-4299-be74-5931848568c7" _uuid="f8718cd50278fdf84bbdf59bcb37b5f38c652d81" result_df = pd.read_csv('../input/results_by_booth_2015 - english - v3.csv', encoding='iso-8859-1') print("Columns:") print(result_df.columns) print() print("df shape:",result_df.shape) result_df.tail(5) # + [markdown] _cell_guid="96d063ef-d3c3-4fa9-af4d-a14285316b29" _uuid="1797e9854e9687b28e3e1cf8311680d886d1ccf5" # # Clean Data # + _cell_guid="ee7cc9f7-71d5-4166-a1b4-681835f25c0f" _uuid="456d840bd15ff40bb18838e6e970f07d4449e3f8" result_df = result_df.dropna(axis=0, how='any') 
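# (Added check, not in the original notebook.) After the dropna above, a quick sanity
# assertion that no missing values survive before the vote-based filters below are applied.
assert result_df.isna().sum().sum() == 0, "unexpected NaNs left after dropna"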
result_df = result_df[result_df.votes > 0] result_df.loc[result_df.Registered_voters == 0,'Registered_voters'] = result_df[result_df.Registered_voters == 0].votes result_df.shape # + [markdown] _cell_guid="5de8f866-e7e7-473b-aa7a-822fae9db13c" _uuid="e63634bec9e5605910c3447f33430a015de550f0" # # Overall Votes Per Party # + _cell_guid="61ddd0a2-8ae2-4c23-922c-9ab732da5306" _uuid="0f84e2ca5a4848a40a652cadf7aee72e306bb09e" block_percent = 0.0325 # + _cell_guid="b39a5813-bc01-4337-9b8b-fea560441176" _uuid="dd23e36c69871b31eec0d38856bb1f11f60f6a7c" all_registered_voters = result_df.Registered_voters.sum() all_votes = result_df.proper_votes.sum() print("all registerd voters:",all_registered_voters) print("all_votes:",all_votes) print("vote percentage:",all_votes/all_registered_voters) overall_votes_per_party = result_df.iloc[:,8:].sum() percantage_vote_per_pary = overall_votes_per_party/all_votes percantage_vote_per_pary = percantage_vote_per_pary[percantage_vote_per_pary.values>block_percent] percantage_vote_per_pary.sort_values(ascending=False).plot.bar(alpha=0.7,figsize=(16,6)) # + [markdown] _cell_guid="47870f94-cfe7-40b0-bdc4-1a103dc9f78b" _uuid="73fa01e780715098b2a7dae71d0975700b79420f" # # Group by City and Filter Out Small Parties # + _cell_guid="6192f1b2-f5bf-486d-9d11-9eebbae094e4" _uuid="9b8041e16dc21d673791b1ca8923ca891aa8e377" # Print the large parties large_parties = percantage_vote_per_pary.index.values print(large_parties) # + _cell_guid="1da54da0-9cb3-47aa-aa96-a9b27d95f5d6" _uuid="aad88f8242ca4aadb0202004649b7125fbde0551" non_party_col = list(result_df.iloc[:,0:8].columns) int_columns = [] int_columns.extend(non_party_col) int_columns.extend(list(large_parties)) print(int_columns) # + _cell_guid="863fd234-e09f-4196-9ebd-35099d4612a5" _uuid="d2e4192c33b2d4610568ff35e3d99f388b48d878" res_work_df = result_df.copy() res_work_df = res_work_df[int_columns] res_work_df_city = res_work_df.groupby(['settlement_name_english','Settlement_code'])[int_columns[4:]].sum().reset_index() print(res_work_df_city.shape) res_work_df_city.head(5) # + [markdown] _cell_guid="279a7835-1271-4bdf-8a3c-4c79fc516b48" _uuid="b118190f0225ed5b387120022ea63c5632f2bf28" # # Remove low votings rates # + _cell_guid="e6318e07-ea0b-420e-b7cc-187997b973e3" _uuid="880e80c6103e721946f1c589e3405f4272ca8b47" min_vote_rate = 0.6 min_proper_votes = 300 # - res_work_df = res_work_df_city.copy() res_work_df['vote_rate'] = res_work_df.proper_votes / res_work_df.Registered_voters res_work_df = res_work_df[(res_work_df.vote_rate > min_vote_rate) & (res_work_df.proper_votes > min_proper_votes)] print(res_work_df.shape) res_work_df.sample(10) res_work_df[res_work_df.settlement_name_english.str.contains("||JERU|HAI")] # / / JERUSALEM # # Check if there are bad rows with infinite values res_work_df[res_work_df.vote_rate == np.inf] # # Calculate percentage votes for each city-party res_work_df_percentage_votes = res_work_df.iloc[:,6:-1].div(res_work_df.proper_votes, axis=0) res_work_df_percentage_votes.head(5) # # Clustering res_work_df_percentage_votes_transpose = res_work_df_percentage_votes.transpose() res_work_df_percentage_votes_transpose.head(11) # ## Run K-Means # - Tanspose matrix # - Convert numeric voting rate to (1,0) where 1 means the voting rate in that settelment was above the median X = res_work_df_percentage_votes_transpose X.head(3) # + def above_median(fclist): med = np.median(fclist) return (fclist > med).astype(int) X = X.apply(above_median, axis=1) # - X[1:10] names = 
res_work_df_percentage_votes_transpose.index.tolist() # + from sklearn.cluster import KMeans km = KMeans(n_clusters=4, random_state=0).fit(X) clusters = km.labels_.tolist() clusters # - # ## Visualize Clusters # + from sklearn.manifold import TSNE tsne = TSNE(n_components=2) results_tsne = tsne.fit(X) coords = results_tsne.embedding_ colors = ['blue','red','green','cyan','magenta','yellow','black','white'] label_colors = [colors[i] for i in clusters] plt.figure(figsize=(16,8)) plt.subplots_adjust(bottom = 0.1) plt.scatter( coords[:, 0], coords[:, 1], marker = 'o', c=label_colors ) for label, x, y in zip(names, coords[:, 0], coords[:, 1]): plt.annotate( label, xy = (x, y), xytext = (-20, 20), textcoords = 'offset points', ha = 'right', va = 'bottom', bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5), arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0')) plt.show() # - # # Distance Matrix # + _cell_guid="1ec981ab-1c58-478d-8e3d-cc290c0fdcaa" _uuid="e9f849eab78a5b7dd25df9267058b806aaf88d68" from sklearn.metrics.pairwise import pairwise_distances from sklearn.preprocessing import MinMaxScaler x = res_work_df_percentage_votes_transpose res = pairwise_distances(x, metric='correlation') # cosine / jaccard / correlation / euclidean distance = pd.DataFrame(res, index=res_work_df_percentage_votes_transpose.index, columns= res_work_df_percentage_votes_transpose.index) distance # - # ## Hierarchical Clustering # + import scipy from scipy.cluster import hierarchy labels = distance.index.values.tolist() sq_distance = scipy.spatial.distance.squareform(distance) Z = hierarchy.linkage(sq_distance, 'single') hierarchy.set_link_color_palette(['m', 'c', 'y', 'k']) fig, axes = plt.subplots(1, 1, figsize=(16, 6)) dn1 = hierarchy.dendrogram(Z, ax=axes, above_threshold_color='y', orientation='top', labels=labels) plt.show() # - # ## Heatmap of Distance Matrix Reordered as the Dendrogram new_order_distance = distance.reindex(dn1['ivl']) new_order_distance = new_order_distance[dn1['ivl']] import seaborn as sns ax = sns.heatmap(new_order_distance) # # Build Network distance_cutoff = 1 parties = percantage_vote_per_pary.index.tolist() parties # + import itertools dist_list = list(distance.index) all_2_org_combos = itertools.combinations(dist_list, 2) max_dist = distance.max().max() # Generate graph with nodes: G=nx.Graph() for p in parties: G.add_node(p, name=p, p_vote=float(percantage_vote_per_pary[p]), comm="0") # Connect nodes: for combo in all_2_org_combos: combo_dist = distance[combo[0]][combo[1]] opp_dist = combo_dist - max_dist if distance[combo[0]][combo[1]] < distance_cutoff: G.add_edge(combo[0],combo[1],weight=float(abs(opp_dist))) n = G.number_of_nodes() m = G.number_of_edges() print("number of nodes in graph G: ",n) print("number of edges in graph G: ",m) print() # - # ## Communities and Modularity import community communities = community.best_partition(G) mod = community.modularity(communities,G) print("modularity:", mod) if m > 0: for k,v in communities.items(): G.node[k]['comm'] = str(v) else: print("Not runnig Community algorithm because the graph has no edges") # ## Draw Network # + com_values = [communities.get(node) for node in G.nodes()] p_votes = [d['p_vote'] for n,d in G.nodes(data=True)] node_size=[v * 3000 for v in p_votes] plt.figure(figsize=(12,8)) pos=nx.fruchterman_reingold_layout(G) nx.draw_networkx(G,pos, cmap = plt.get_cmap('jet'), node_color = com_values, node_size=node_size, with_labels=True) plt.show() # + 
_cell_guid="bb1e01df-3277-4c55-90ef-38660e9254af" _uuid="bcb3c7dba6b76ceabc2c9612a4322cc5a046c0de" # + _cell_guid="042d7b5f-bbff-48a6-b871-87a0022374d3" _uuid="6942a0992ba147a35fd4491efd12a68c66142c8c" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Project - Write A Data Sceince Blog post # # Key Steps for Project # Feel free to be creative with your solutions, but do follow the CRISP-DM process in finding your solutions. # # 1) Pick a dataset. # # 2) Pose at least three questions related to business or real-world applications of how the data could be used. # # 3) Create a Jupyter Notebook, using any associated packages you'd like, to: # # Prepare data: # # Gather necessary data to answer your questions # Handle categorical and missing data # Provide insight into the methods you chose and why you chose them # Analyze, Model, and Visualize # # Provide a clear connection between your business questions and how the data answers them. # 4) Communicate your business insights: # # Create a Github repository to share your code and data wrangling/modeling techniques, with a technical audience in mind # Create a blog post to share your questions and insights with a non-technical audience # + # import libraries here; add more as necessary import numpy as np import pandas as pd import math import matplotlib.pyplot as plt # magic word for producing visualizations in notebook # %matplotlib inline # - # ### Pick a dataset # I picked the 2017 survey of Stack Overflow data to answer all of the below questions: # 1. Which part of the world is better for software developers, Western countires or Asian countries? # 2. What is the difference in pay scales of developers in these countries? # 3. What do developers feel about job satisfaction in these countries? # 4. Which part of the world provide better career satisfaction? #Read the data and glimpse it df = pd.read_csv('survey_results_public_2017.csv') display(df.head()) # + #shape of dataframe print(df.shape) #all columns print(df.columns.values) # - df.describe() # ### Which part of the world is better for software developers, Western countires or Asian countries? # Understanding the data from survey 2017 def bar_chart_plot(df, column, title): ''' Plotting a bar chart input args: df: a dataframe column: the column which we want to show title: the title of the chart ''' plt.figure(figsize=(11,7)) status_values = df[column].value_counts() s= (status_values[:10]/df.shape[0]) my_colors = list('rgbkymc') s.plot(kind="bar", color=my_colors,) plt.title(title); #plotting each Professional status bar_chart_plot(df, "Professional", "Developer type?") #Plotting Employment status bar_chart_plot(df, "EmploymentStatus", "Employment Status?") #plotting origin Country bar_chart_plot(df, "Country", "origin country?") #Plotting jobsatisfaction status bar_chart_plot(df, "JobSatisfaction", "How job statisfaction spreaded?") #Plotting salary status bar_chart_plot(df, "Overpaid", "How do professionals feel about overpaid status?") # ### Data preparation # Here I will devide the data columns from "Countries" int to Western, Top Asians and rest of the countires. 
def country_data(df): ''' Return a dataframe with country seperated Parameters: df: a dataframe Returns: df: a dataframe with a new column ''' # For Categorical variables "Country", we seperate them into # three sessions: western, asian and other western = ['United States', 'Liechtenstein', 'Switzerland', 'Iceland', 'Norway', 'Israel', 'Denmark', 'Ireland', 'Canada', 'United Kingdom', 'Germany', 'Netherlands', 'Sweden', 'Luxembourg', 'Austria', 'Finland', 'France', 'Belgium', 'Spain', 'Italy', 'Poland'] top_asians = ['India', 'Thailand', 'Singapore', 'Hong Kong', 'South Korea', 'Japan', 'China', 'Taiwan', 'Malaysia', 'Indonesia', 'Vietnam'] rest_asians = ['Malaysia', 'Indonesia', 'Vietnam', 'Sri Lanka', 'Pakistan', 'Bangladesh'] #Add a new catagory seperating to western and eastern df['west_or_asians'] = df['Country'].apply(lambda x: 'western' if x in western else ('top_asians' if x in top_asians else ('rest_asians' if x in rest_asians else 'other'))) return df # Here I will select some useful columns for the analysis. # - Country: Country they are living # - YearsCodingJob: Years they are coding # - JobSatisfaction & CareerSatisfaction: do they satisfied with their job and career # - EmploymentStatus: Their employment status # - Salary: Their Salary # # I will especially focus on full-time employed professional developer. #function to create a dataframe def data_prep(df): ''' Return useful columns with query condition Parameters: df: a raw data dataframe Returns: useful_df: a filtered dataframe with only useful columns ''' #Getting some useful columns for analysis usefull_col = ['Country', 'YearsCodedJob', 'EmploymentStatus', 'CareerSatisfaction', 'JobSatisfaction', 'JobSeekingStatus', 'HoursPerWeek', 'Salary', 'west_or_asians', 'Overpaid'] usefull_df = pd.DataFrame(df.query("Professional == 'Professional developer' and (Gender == 'Male' or Gender == 'Female') and EmploymentStatus == 'Employed full-time'"))[usefull_col] return usefull_df # + #create a dataframe to work upon and analysis d_df = country_data(df) d_df.head(5) usefull_df = data_prep(d_df) usefull_df.shape # - usefull_df.head(5) # For categorical variable Overpaid, we transfer it to calculatable integer value because we want to find out the mean of their opinion. def underpaid_check(df): """ Parameters: df: a dataframe Returns: dataframe: a converted dataframe with Overpaid column """ pay_mapping = { 'Greatly underpaid' : 1, 'Somewhat underpaid' : 2, 'Neither underpaid nor overpaid' : 3, 'Somewhat overpaid' : 4, 'Greatly overpaid' : 5, np.nan: np.nan } df['Overpaid'] = df['Overpaid'].apply(lambda x: np.nan if x == np.nan else pay_mapping[x] ) return df #Compare selected indicators between western and asians region usefull_df = underpaid_check(usefull_df) compared = usefull_df.groupby(['west_or_asians','YearsCodedJob']).mean() compared # column 'YearsCodedJob' will be transferred to calculatable integer value because we want to find out the mean of how long they have been coded. 
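# Before the dictionary-based helper defined next, here is a small, self-contained sketch (an
# alternative approach, not the one used in this analysis) showing that the same conversion could
# also be done by pulling the leading number out of each answer with a regular expression; the
# sample answers below are just illustrative values taken from the survey's wording.

# +
sample_years = pd.Series(['Less than a year', '1 to 2 years', '20 or more years'])
# Extract the first integer in each string; 'Less than a year' has no digits, so fill it with 0.
as_numbers = sample_years.str.extract(r'(\d+)')[0].astype(float).fillna(0).astype(int)
print(as_numbers.tolist())  # [0, 1, 20]
# -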
def handlingExp(df): """ Convert the working year to integer Parameters: df: a dataframe Returns: dataframe: a converted dataframe with YearsCodedJob column """ exp_mapping = {'1 to 2 years' : 1, '10 to 11 years' : 10, '11 to 12 years' : 11, '12 to 13 years' : 12, '13 to 14 years' : 13, '14 to 15 years' : 14, '15 to 16 years' : 15, '16 to 17 years' : 16, '17 to 18 years' : 17, '18 to 19 years' : 18, '19 to 20 years' : 19, '2 to 3 years' : 2, '20 or more years' : 20, '3 to 4 years' : 3, '4 to 5 years' : 4, '5 to 6 years' : 5, '6 to 7 years' : 6, '7 to 8 years' : 7, '8 to 9 years' : 8, '9 to 10 years' : 9, 'Less than a year' : 0} df_graph = df.reset_index() df_graph['YearsCodedJob'] = df_graph['YearsCodedJob'].apply(lambda x: exp_mapping[x]) df_graph['YearsCodedJob'] = pd.to_numeric(df_graph['YearsCodedJob']) return df_graph # + compared_graph = handlingExp(compared) compared_graph = compared_graph.sort_values(by='YearsCodedJob') compared_graph.set_index('YearsCodedJob', inplace=True) # - compared_graph.head() # ### Its time to evaluate the analysis, based on our questions # # ### Result of 1st question # 1st question is about better countries to work for a developer. For that we will compare the salaries of developers against their years of experience to find out which region has better potential. # + #Plot the Salary Comparison plt.figure(figsize=(12,10)) compared_graph.groupby('west_or_asians')['Salary'].plot(legend=True) plt.title("Salary Comparison between Western and Asian region"); plt.xlabel('Years of Exp') plt.ylabel('Average Salary') # - # As we could see above, Western countries pay better salary to their developers as compared to other parts of the world. # ### Result of 2nd question # 2nd question is about pay scale difference for developers. For that we will compare the overpaid status of developers against their years of experience to find out which region has better potentials. Overpaid status will tell if they feel good about about their remuneration or not. # + #Plot the overpaid status plt.figure(figsize=(12,10)) compared_graph.groupby('west_or_asians')['Overpaid'].plot(legend=True) plt.title("Do developers think they are overpaid?"); plt.xlabel('YearsCodedJob') plt.ylabel('Overpaid status') # - # Result is pretty interesting. As we could see above the "rest_asian" countries's developers said that they are somehow better paid than "Top_Asians" countries but their salary is lesser than Top Asian region's developers. # # And Top Asians feel that they are hugely underpaid than their Western countries counterparts. # ### Result of 3rd question # 3rd question is about Job Satisfaction for a developer. For that we will compare the job satisfaction status of developers against their years of experience to find out which region has better score. # + #Plot the JobSatisfaction status plt.figure(figsize=(12,10)) compared_graph.groupby('west_or_asians')['JobSatisfaction'].plot(legend=True) plt.title("Do developers feel statisfied with job?"); plt.xlabel('YearsCodedJob') plt.ylabel('Job Satisfaction status') # - # Here we can see developers from Western region and other parts of the world are satisfied with their job on an average scale as compared to their counterparts from Asian countries in the beginning of their job. After some 18 years in job, Rest_Asian countries data improves a lot than others, while others go to same level on an average. # # Final observation is gonna be so much interesting to know which part of the world is better. 
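# Before moving on, an aside on the plotting pattern used above: the same per-region lines can be drawn by pivoting the long frame first, which yields an explicitly labelled legend. This is only a minimal sketch and assumes `compared_graph` as built in the cells above.

# +
# Pivot sketch: one column per region, then one line per column
pivoted = compared_graph.reset_index().pivot(index='YearsCodedJob',
                                             columns='west_or_asians',
                                             values='JobSatisfaction')
pivoted.plot(figsize=(12, 8), title='Job satisfaction by region (pivot-based sketch)')
plt.xlabel('Years of Exp')
plt.ylabel('Average Job Satisfaction')
# -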
# # ### Result of 4th question # 4th question is about Career Satisfaction for a developer. For that we will compare the career satisfaction status of developers against their years of experience to find out which region has better potential. # + #Plot the CareerSatisfaction status plt.figure(figsize=(12,10)) compared_graph.groupby('west_or_asians')['CareerSatisfaction'].plot(legend=True) plt.title("Do developers feel statisfied with career?"); plt.xlabel('YearsCodedJob') plt.ylabel('Career Satisfaction status') # - # Here we can see developer from Western region and other parts of the world are satisfied with thier career on an average scale as compared to their counterparts from Asian countries in the beginning of their job. After some 18 years in job, Rest_Asian countries data improves a lot than others, while others go to same level on an average. # ### Combined analysis of Job and Career satisfaction for developers from western and asian countries compared.groupby('west_or_asians').mean().CareerSatisfaction compared.groupby('west_or_asians').mean().JobSatisfaction compared.groupby('west_or_asians').mean().Salary/50 # + #Plot Comparison of Career and Job Satisfaction between Western and Eastern plt.figure(figsize=(12,10)) plt.scatter(compared.groupby('west_or_asians').mean().CareerSatisfaction, compared.groupby('west_or_asians').mean().JobSatisfaction, compared.groupby('west_or_asians').mean().Salary/50, c=['blue','red', 'green','magenta']) plt.title('Comparison of Career and Job Satisfaction\n(Blue: Top_Asians; Red: rest_asians, Green: Other; Magenta: Western)') plt.xlabel('Career Satisfaction') plt.ylabel('Job Satisfaction') # - # As we could see in above comparison scatter plot, combining Salary, Job Satisfaction, Career Satisfaction against their years of exp in coding, developers feel better satisfied with career and job from these part of the world in following order (1st - better, last - lesser) # --> Western - Top_Asians - Others - Rest_Asians # ### Conclusion # - Western countries pay better salary to their developers as compared to other parts of the world. # - Regarding the pay scale difference, we found that "rest_asian" countries's developers said that they are somehow better paid than "Top_Asians" countries but their salary is lesser than Top Asian region. And Top Asians feel that they are hugely underpaid than their Western countries counterparts. # - Regarding the Job Satisfaction, we found that developesr from Western region and other parts of the world are satisfied with their job on an average scale as compared to their counterparts from Asian countries in the beginning of their job. After some 18 years in job, Rest_Asian countries data improves a lot than others, while others go to same level on an average. # - Regarding the Career Satisfaction, we found that developers from Western region and other parts of the world are satisfied with their career on an average scale as compared to their counterparts from Asian countries in the beginning of their job. After some 18 years in job, Rest_Asian countries data improves a lot than others, while others go to same level on an average # # -After combining Salary, Job Satisfaction and Career Satisfaction against their years of exp in coding, developers feel better satisfied with career and job from western countries as compared to others. 
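# A small caution on the scatter plot above: `groupby('west_or_asians').mean()` orders its index alphabetically ('other', 'rest_asians', 'top_asians', 'western'), so a hard-coded colour list is easy to mislabel. The sketch below annotates each point with its group name instead of relying on colours; it assumes `compared` as built earlier and is only an illustrative alternative.

# +
# Annotated scatter sketch: label each region's point directly
region_means = compared.groupby('west_or_asians').mean()
plt.figure(figsize=(8, 6))
plt.scatter(region_means['CareerSatisfaction'], region_means['JobSatisfaction'],
            s=region_means['Salary'] / 50)
for region, row in region_means.iterrows():
    plt.annotate(region, (row['CareerSatisfaction'], row['JobSatisfaction']))
plt.xlabel('Career Satisfaction')
plt.ylabel('Job Satisfaction')
plt.show()
# -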
# ### Final Observation # # Keeping Salary, Job Satisfaction, Career Satisfaction and other factors against their years of exp in coding in mind, Western part of the world is a better place to work for a developer. # !!jupyter nbconvert *.ipynb # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CAPPI插值示例 # 要求pycwr版本>=0.2.15 # %matplotlib inline from pycwr.io import read_auto import numpy as np import matplotlib.pyplot as plt import cartopy.crs as ccrs from pycwr.draw.RadarPlot import plot_xy, Graph, plot_lonlat_map, GraphMap # ## 读取数据部分 # # PRD = read_auto("/Users/zhengyu/OneDrive/Work/13_天气雷达库_pycwr/test_data/Z9040.20190905.175751.AR2.bz2") PRD.fields[0] # ## 根据距离雷达中心的x和y生成cappi数据 x1d = np.arange(-200000, 200001, 1000) ##x方向1km等间距, -200km~200km范围 y1d = np.arange(-200000, 200001, 1000) ##y方向1km等间距, -200km~200km范围 PRD.add_product_CAPPI_xy(XRange=x1d, YRange=y1d, level_height=1500) ##插值1500m高度的 #XRange: np.ndarray, 1d, units:meters # YRange: np.ndarray, 1d, units:meters # level_height: 要插值的高度,常量, units:meters PRD.product ##可以查看1500m的cappi的产品 grid_x, grid_y = np.meshgrid(x1d, y1d, indexing="ij") fig, ax = plt.subplots() plot_xy(ax, grid_x, grid_y, PRD.product.CAPPI_1500) ##画图显示 # + #显示 0.5度仰角进行对比 fig, ax = plt.subplots() graph = Graph(PRD) graph.plot_ppi(ax, 0, "dBZ", cmap="CN_ref") ## 0代表第一层, dBZ代表反射率产品,cmap plt.show() # - # ## 根据经纬度信息生成cappi数据 lon1d = np.arange(117, 121.0001, 0.01) ##lon方向0.01等间距,117-121 范围 lat1d = np.arange(28, 32.0001, 0.01) ##lat方向0.01等间距, 28~32度范围 PRD.add_product_CAPPI_lonlat(XLon=lon1d, YLat=lat1d, level_height=1500) ##插值1500m高度的 # XLon:np.ndarray, 1d, units:degrees # YLat:np.ndarray, 1d, units:degrees # level_height:常量,要计算的高度 units:meters PRD.product ##可查看lat lon坐标的cappi grid_lon, grid_lat = np.meshgrid(lon1d, lat1d, indexing="ij") ax = plt.axes(projection=ccrs.PlateCarree()) plot_lonlat_map(ax, grid_lon, grid_lat, PRD.product.CAPPI_geo_1500, transform=ccrs.PlateCarree()) plt.show()##画图显示 ax = plt.axes(projection=ccrs.PlateCarree()) graph = GraphMap(PRD, ccrs.PlateCarree()) ##叠加地图 graph.plot_ppi_map(ax, 0, "dBZ", cmap="CN_ref") ## 0代表第一层, dBZ代表反射率产品 ax.set_title("Using pycwr for ploting data with map", fontsize=16) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # 说明: # 设计一个使用单词列表进行初始化的数据结构,单词列表中的单词 互不相同 。 # 如果给出一个单词,请判定能否只将这个单词中一个字母换成另一个字母,使得所形成的新单词存在于你构建的字典中。 # 实现 MagicDictionary 类: # 1、MagicDictionary() 初始化对象 # 2、void buildDict(String[] dictionary) 使用字符串数组 dictionary 设定该数据结构,dictionary 中的字符串互不相同 # 3、bool search(String searchWord) 给定一个字符串 searchWord ,判定能否只将字符串中 一个 字母换成另一个字母, # 使得所形成的新字符串能够与字典中的任一字符串匹配。如果可以,返回 true ;否则,返回 false 。 # # 示例: # 输入 # ["MagicDictionary", "buildDict", "search", "search", "search", "search"] # [[], [["hello", "leetcode"]], ["hello"], ["hhllo"], ["hell"], ["leetcoded"]] # 输出 # [null, null, false, true, false, false] # # 解释 # MagicDictionary magicDictionary = new MagicDictionary(); # magicDictionary.buildDict(["hello", "leetcode"]); # magicDictionary.search("hello"); // 返回 False # magicDictionary.search("hhllo"); // 将第二个 'h' 替换为 'e' 可以匹配 "hello" ,所以返回 True # magicDictionary.search("hell"); // 返回 False # magicDictionary.search("leetcoded"); // 返回 False # # # 提示: # 1 <= 
dictionary.length <= 100 # 1 <= dictionary[i].length <= 100 # dictionary[i] 仅由小写英文字母组成 # dictionary 中的所有字符串 互不相同 # 1 <= searchWord.length <= 100 # searchWord 仅由小写英文字母组成 # buildDict 仅在 search 之前调用一次 # 最多调用 100 次 search # + class Node: def __init__(self): self.children = {} self.isWord = False class Trie: def __init__(self): self.root = Node() def insert(self, word): node = self.root for c in word: if c not in node.children: node.children[c] = Node() node = node.children[c] node.isWord = True class MagicDictionary: def __init__(self): self.trie = Trie() def buildDict(self, dictionary) -> None: for word in dictionary: self.trie.insert(word) def check(self, node, word): for c in word: if c not in node.children: return False node = node.children[c] return node.isWord def search(self, searchWord: str) -> bool: node = self.trie.root for i, c in enumerate(searchWord): if c not in node.children: for key in node.children: if self.check(node.children[key], searchWord[i+1:]): return True return False elif len(node.children) > 1: chars = list(node.children.keys()) for char in chars: if char != c and self.check(node.children[char], searchWord[i+1:]): return True node = node.children[c] return False # - obj = MagicDictionary() obj.buildDict(["hello","leetcode"]) param_2 = obj.search(searchWord='hello') print(param_2) ["MagicDictionary", "buildDict", "search", "search", "search", "search"] [[], [["hello","hallo","leetcode"]], ["hello"], ["hhllo"], ["hell"], ["leetcoded"]] def check(self, node, word): for c in word: if c not in node.children: return False node = node.children[c] return node.isWord def search(self, searchWord: str) -> bool: node = self.trie.root count = 0 for i, c in enumerate(searchWord): if c not in node.children: for key in node.children: if self.check(node.children[key], searchWord[i+1:]): return True return False if len(node.children) > 1: count += 1 node = node.children[c] return count == 1 and node.isWord # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Plotting the salinity distribution # + import iris import glob import re from iris.experimental.equalise_cubes import equalise_attributes import matplotlib.pyplot as plt import seaborn # - # %matplotlib inline infiles = glob.glob("/g/data/r87/dbi599/drstree/CMIP5/GCM/CCCMA/CanESM2/historicalGHG/yr/ocean/so/r1i1p1/dedrifted/so_Oyr_CanESM2_historicalGHG_r1i1p1_*.nc") print infiles def get_time_constraint(time_list): """Get the time constraint used for reading an iris cube.""" start_date, end_date = time_list date_pattern = '([0-9]{4})-([0-9]{1,2})-([0-9]{1,2})' assert re.search(date_pattern, start_date) assert re.search(date_pattern, end_date) if (start_date == end_date): year, month, day = start_date.split('-') time_constraint = iris.Constraint(time=iris.time.PartialDateTime(year=int(year), month=int(month), day=int(day))) else: start_year, start_month, start_day = start_date.split('-') end_year, end_month, end_day = end_date.split('-') time_constraint = iris.Constraint(time=lambda t: iris.time.PartialDateTime(year=int(start_year), month=int(start_month), day=int(start_day)) <= t.point <= iris.time.PartialDateTime(year=int(end_year), month=int(end_month), day=int(end_day))) return time_constraint time_constraint = get_time_constraint(['1986-01-01', '2005-12-31']) with iris.FUTURE.context(cell_datetime_objects=True): cube = iris.load(infiles, 'sea_water_salinity' 
& time_constraint) print cube equalise_attributes(cube) iris.util.unify_time_units(cube) cube = cube.concatenate_cube() print cube mean_field = cube.collapsed('time', iris.analysis.MEAN) print mean_field volcello = '/g/data/ua6/drstree/CMIP5/GCM/CCCMA/CanESM2/historicalGHG/fx/ocean/volcello/r0i0p0/volcello_fx_CanESM2_historicalGHG_r0i0p0.nc' volume = iris.load_cube(volcello) print volume print mean_field.data.shape print volume.data.shape print mean_field.data.compressed().shape print volume.data.compressed().shape 40 * 192 * 256 # ## Kernel density estimate # # Weighted KDE calculator from [here](http://nbviewer.jupyter.org/gist/tillahoffmann/f844bce2ec264c1c8cb5) # + import numpy as np from scipy.spatial.distance import cdist class gaussian_kde(object): """Representation of a kernel-density estimate using Gaussian kernels. Kernel density estimation is a way to estimate the probability density function (PDF) of a random variable in a non-parametric way. `gaussian_kde` works for both uni-variate and multi-variate data. It includes automatic bandwidth determination. The estimation works best for a unimodal distribution; bimodal or multi-modal distributions tend to be oversmoothed. Parameters ---------- dataset : array_like Datapoints to estimate from. In case of univariate data this is a 1-D array, otherwise a 2-D array with shape (# of dims, # of data). bw_method : str, scalar or callable, optional The method used to calculate the estimator bandwidth. This can be 'scott', 'silverman', a scalar constant or a callable. If a scalar, this will be used directly as `kde.factor`. If a callable, it should take a `gaussian_kde` instance as only parameter and return a scalar. If None (default), 'scott' is used. See Notes for more details. weights : array_like, shape (n, ), optional, default: None An array of weights, of the same shape as `x`. Each value in `x` only contributes its associated weight towards the bin count (instead of 1). Attributes ---------- dataset : ndarray The dataset with which `gaussian_kde` was initialized. d : int Number of dimensions. n : int Number of datapoints. neff : float Effective sample size using Kish's approximation. factor : float The bandwidth factor, obtained from `kde.covariance_factor`, with which the covariance matrix is multiplied. covariance : ndarray The covariance matrix of `dataset`, scaled by the calculated bandwidth (`kde.factor`). inv_cov : ndarray The inverse of `covariance`. Methods ------- kde.evaluate(points) : ndarray Evaluate the estimated pdf on a provided set of points. kde(points) : ndarray Same as kde.evaluate(points) kde.pdf(points) : ndarray Alias for ``kde.evaluate(points)``. kde.set_bandwidth(bw_method='scott') : None Computes the bandwidth, i.e. the coefficient that multiplies the data covariance matrix to obtain the kernel covariance matrix. .. versionadded:: 0.11.0 kde.covariance_factor : float Computes the coefficient (`kde.factor`) that multiplies the data covariance matrix to obtain the kernel covariance matrix. The default is `scotts_factor`. A subclass can overwrite this method to provide a different method, or set it through a call to `kde.set_bandwidth`. Notes ----- Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a "rule of thumb", by cross-validation, by "plug-in methods" or by other means; see [3]_, [4]_ for reviews. `gaussian_kde` uses a rule of thumb, the default is Scott's Rule. 
Scott's Rule [1]_, implemented as `scotts_factor`, is:: n**(-1./(d+4)), with ``n`` the number of data points and ``d`` the number of dimensions. Silverman's Rule [2]_, implemented as `silverman_factor`, is:: (n * (d + 2) / 4.)**(-1. / (d + 4)). Good general descriptions of kernel density estimation can be found in [1]_ and [2]_, the mathematics for this multi-dimensional implementation can be found in [1]_. References ---------- .. [1] , "Multivariate Density Estimation: Theory, Practice, and Visualization", iley & Sons, New York, Chicester, 1992. .. [2] , "Density Estimation for Statistics and Data Analysis", Vol. 26, Monographs on Statistics and Applied Probability, Chapman and Hall, London, 1986. .. [3] , "Bandwidth Selection in Kernel Density Estimation: A Review", CORE and Institut de Statistique, Vol. 19, pp. 1-33, 1993. .. [4] and , "Bandwidth selection for kernel conditional density estimation", Computational Statistics & Data Analysis, Vol. 36, pp. 279-298, 2001. Examples -------- Generate some random two-dimensional data: >>> from scipy import stats >>> def measure(n): >>> "Measurement model, return two coupled measurements." >>> m1 = np.random.normal(size=n) >>> m2 = np.random.normal(scale=0.5, size=n) >>> return m1+m2, m1-m2 >>> m1, m2 = measure(2000) >>> xmin = m1.min() >>> xmax = m1.max() >>> ymin = m2.min() >>> ymax = m2.max() Perform a kernel density estimate on the data: >>> X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j] >>> positions = np.vstack([X.ravel(), Y.ravel()]) >>> values = np.vstack([m1, m2]) >>> kernel = stats.gaussian_kde(values) >>> Z = np.reshape(kernel(positions).T, X.shape) Plot the results: >>> import matplotlib.pyplot as plt >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.imshow(np.rot90(Z), cmap=plt.cm.gist_earth_r, ... extent=[xmin, xmax, ymin, ymax]) >>> ax.plot(m1, m2, 'k.', markersize=2) >>> ax.set_xlim([xmin, xmax]) >>> ax.set_ylim([ymin, ymax]) >>> plt.show() """ def __init__(self, dataset, bw_method=None, weights=None): self.dataset = np.atleast_2d(dataset) if not self.dataset.size > 1: raise ValueError("`dataset` input should have multiple elements.") self.d, self.n = self.dataset.shape if weights is not None: self.weights = weights / np.sum(weights) else: self.weights = np.ones(self.n) / self.n # Compute the effective sample size # http://surveyanalysis.org/wiki/Design_Effects_and_Effective_Sample_Size#Kish.27s_approximate_formula_for_computing_effective_sample_size self.neff = 1.0 / np.sum(self.weights ** 2) self.set_bandwidth(bw_method=bw_method) def evaluate(self, points): """Evaluate the estimated pdf on a set of points. Parameters ---------- points : (# of dimensions, # of points)-array Alternatively, a (# of dimensions,) vector can be passed in and treated as a single point. Returns ------- values : (# of points,)-array The values at each point. Raises ------ ValueError : if the dimensionality of the input points is different than the dimensionality of the KDE. 
""" points = np.atleast_2d(points) d, m = points.shape if d != self.d: if d == 1 and m == self.d: # points was passed in as a row vector points = np.reshape(points, (self.d, 1)) m = 1 else: msg = "points have dimension %s, dataset has dimension %s" % (d, self.d) raise ValueError(msg) # compute the normalised residuals chi2 = cdist(points.T, self.dataset.T, 'mahalanobis', VI=self.inv_cov) ** 2 # compute the pdf result = np.sum(np.exp(-.5 * chi2) * self.weights, axis=1) / self._norm_factor return result __call__ = evaluate def scotts_factor(self): return np.power(self.neff, -1./(self.d+4)) def silverman_factor(self): return np.power(self.neff*(self.d+2.0)/4.0, -1./(self.d+4)) # Default method to calculate bandwidth, can be overwritten by subclass covariance_factor = scotts_factor def set_bandwidth(self, bw_method=None): """Compute the estimator bandwidth with given method. The new bandwidth calculated after a call to `set_bandwidth` is used for subsequent evaluations of the estimated density. Parameters ---------- bw_method : str, scalar or callable, optional The method used to calculate the estimator bandwidth. This can be 'scott', 'silverman', a scalar constant or a callable. If a scalar, this will be used directly as `kde.factor`. If a callable, it should take a `gaussian_kde` instance as only parameter and return a scalar. If None (default), nothing happens; the current `kde.covariance_factor` method is kept. Notes ----- .. versionadded:: 0.11 Examples -------- >>> x1 = np.array([-7, -5, 1, 4, 5.]) >>> kde = stats.gaussian_kde(x1) >>> xs = np.linspace(-10, 10, num=50) >>> y1 = kde(xs) >>> kde.set_bandwidth(bw_method='silverman') >>> y2 = kde(xs) >>> kde.set_bandwidth(bw_method=kde.factor / 3.) >>> y3 = kde(xs) >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(x1, np.ones(x1.shape) / (4. * x1.size), 'bo', ... label='Data points (rescaled)') >>> ax.plot(xs, y1, label='Scott (default)') >>> ax.plot(xs, y2, label='Silverman') >>> ax.plot(xs, y3, label='Const (1/3 * Silverman)') >>> ax.legend() >>> plt.show() """ if bw_method is None: pass elif bw_method == 'scott': self.covariance_factor = self.scotts_factor elif bw_method == 'silverman': self.covariance_factor = self.silverman_factor elif np.isscalar(bw_method) and not isinstance(bw_method, string_types): self._bw_method = 'use constant' self.covariance_factor = lambda: bw_method elif callable(bw_method): self._bw_method = bw_method self.covariance_factor = lambda: self._bw_method(self) else: msg = "`bw_method` should be 'scott', 'silverman', a scalar " \ "or a callable." raise ValueError(msg) self._compute_covariance() def _compute_covariance(self): """Computes the covariance matrix for each Gaussian kernel using covariance_factor(). 
""" self.factor = self.covariance_factor() # Cache covariance and inverse covariance of the data if not hasattr(self, '_data_inv_cov'): # Compute the mean and residuals _mean = np.sum(self.weights * self.dataset, axis=1) _residual = (self.dataset - _mean[:, None]) # Compute the biased covariance self._data_covariance = np.atleast_2d(np.dot(_residual * self.weights, _residual.T)) # Correct for bias (http://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_covariance) self._data_covariance /= (1 - np.sum(self.weights ** 2)) self._data_inv_cov = np.linalg.inv(self._data_covariance) self.covariance = self._data_covariance * self.factor**2 self.inv_cov = self._data_inv_cov / self.factor**2 self._norm_factor = np.sqrt(np.linalg.det(2*np.pi*self.covariance)) #* self.n # + #matplotlib.pyplot.hist(x, bins=None, range=None, normed=False, weights=None, cumulative=False, bottom=None, histtype='bar', align='mid', orientation='vertical', rwidth=None, log=False, color=None, label=None, stacked=False, hold=None, data=None, **kwargs) plt.hist(mean_field.data.compressed(), weights=volume.data.compressed(), normed=True, histtype='stepfilled') pdf = gaussian_kde(mean_field.data.compressed(), weights=volume.data.compressed()) x = np.linspace(20, 40, 200) y = pdf(x) plt.plot(x, y, label='weighted kde') plt.show() # - def broadcast_array(array, axis_index, shape): """Broadcast an array to a target shape. Args: array (numpy.ndarray): One dimensional array axis_index (int or tuple): Postion in the target shape that the axis/axes of the array corresponds to e.g. if array corresponds to (lat, lon) in (time, depth lat, lon) then axis_index = [2, 3] e.g. if array corresponds to (lat) in (time, depth lat, lon) then axis_index = 2 shape (tuple): shape to broadcast to For a one dimensional array, make start_axis_index = end_axis_index """ if type(axis_index) in [float, int]: start_axis_index = end_axis_index = axis_index else: assert len(axis_index) == 2 start_axis_index, end_axis_index = axis_index dim = start_axis_index - 1 while dim >= 0: array = array[np.newaxis, ...] array = np.repeat(array, shape[dim], axis=0) dim = dim - 1 dim = end_axis_index + 1 while dim < len(shape): array = array[..., np.newaxis] array = np.repeat(array, shape[dim], axis=-1) dim = dim + 1 return array print cube.shape print volume.data.shape broadcast_volume = broadcast_array(volume.data, [1, 3], cube.shape) broadcast_volume.shape print broadcast_volume[0, 10, 100, 100] print broadcast_volume[0, 10, 100, 100] type(broadcast_volume) # + plt.hist(cube.data.compressed(), weights=broadcast_volume.compressed(), normed=True, histtype='stepfilled') pdf = gaussian_kde(cube.data.compressed(), weights=broadcast_volume.compressed()) x = np.linspace(20, 40, 41) y = pdf(x) plt.plot(x, y, label='weighted kde') plt.show() # - # The custom KDE function is fairly computationally expensive. I get memory errors if I try and increase the resolution of x too much (i.e. which would help make the curve smoother). # ## Skew normal distribution from scipy.optimize import curve_fit from scipy.stats import skewnorm hist, bin_edges = np.histogram(cube.data.compressed(), bins=100, normed=True, weights=broadcast_volume.compressed()) mid_points = (bin_edges[1:] + bin_edges[:-1]) / 2. popt, pcov = curve_fit(skewnorm.pdf, mid_points, hist, p0=(0, 35, 1)) popt # With respect to the `skewnorm` functions, `a` skews the distribution, `loc` moves the mean and `scale` makes it taller ($0 < scale < 1$) or shorter ($scale > 1$). 
# + #a = 0 #loc = 0 #scale = 0.5 #x = np.linspace(skewnorm.ppf(0.01, a, loc=loc, scale=scale), skewnorm.ppf(0.99, a, loc=loc, scale=scale), 100) a, loc, scale = popt plt.plot(mid_points, skewnorm.pdf(mid_points, a, loc=loc, scale=scale)) plt.plot(mid_points, hist) plt.hist(cube.data.compressed(), weights=broadcast_volume.compressed(), bins=100, normed=True) plt.xlim(27, 40) plt.show() # - # It appears the skewed normal probability distribution doesn't fit the salinity data that well. Changing the kurtosis also probably wouldn't help that much because you have to have tell and skinny (leptokurtic) or short and fat - this has both. # ## Hyperbolic secant distribution from scipy.stats import hypsecant popt, pcov = curve_fit(hypsecant.pdf, mid_points, hist, p0=(35, 1)) popt # + loc, scale = popt plt.plot(mid_points, hypsecant.pdf(mid_points, loc=loc, scale=scale)) plt.plot(mid_points, hist) plt.hist(cube.data.compressed(), weights=broadcast_volume.compressed(), bins=100, normed=True) plt.xlim(27, 40) plt.show() # - # The hyperbolic secant distribution fits better but the python implementation doesn't allow for skewness (you can google papers on the topic). Masking the marginal seas might also help the situation (or just looking in the non-polar areas). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## List # - append() # - clear() # - copy() -->make deep copy # - count() # - extend () -->in memory operation # - list1 +list2 # - list1.__add__ # - index() # - insert # - pop # - remove # - reverse # - sort print(len(dir(list))) #dir(list) #uncomment to see all avaliable methods # + #help(list) #uncomment to observe all list methods # - name = 'Qasim' name name1 = 'Ali' name2 = 'Raza' name3 = 'Qasim' print(name1,name2,name3) # + # 0 1 2 names = ['Ali', 'Raza', 'Qasim'] # -3 -2 -1 names. 
print(names[1]) print(names[-1]) # - print(type(name1)) print(type(names)) l1 = [] l1.append('A') l1.append('B') l1.append('C') print(l1) print(l1) l1.clear() print(l1) l1 = ['A','B','C'] print(l1) del l1 print(l1) l1 = ["Qasim",'Hassan','Ali'] l2 = l1 # shalloow Copy address same print(id(l1)) print(id(l2)) l3 = l1.copy() # deep copy address change print(id(l1)) print(id(l3)) l3[0] = 'Pakistan' print(l3) print(l1) l1 = ['A','B','C','D','A','A'] l1.count('A') # + l1 = ['A','B','C'] l2 = ['X','Y','Z'] print(l1 + l2) # Inline Operation print(l1,l2) # + l1 = ['a','b','c'] l2 = ['x','y','z'] print(l1,l2) print(l1.__add__(l2)) # Inline Operation print(l1,l2) # + l1 = ['a','b','c'] l2 = ['x','y','z'] l1.extend(l2) print(l1) print(l2) # + l1 = ['a','b','c'] print(l1.index('c')) # + l1 = ['A','B', 'C', 'D','E','B','C'] l1.index('C',3) # + l1 = [] l1.insert(0,'A') # ['A'] l1.insert(0,'B') # ['B','A'] l1.insert(0,'C') # ['C','B','A'] print(l1) l1.insert(1,'Pakistan') print(l1) # + l1 = ['A','B','C'] del l1[-1] # ['A','B'] del l1[-1] # ['A'] l1 # + l2 = [] l1 = ['A','B','C','D'] a = l1.pop() # D l2.append(a) # ['D'] a = l1.pop() # C l2.append(a) # ['D','C'] a = l1.pop() # B l2.append(a) #['D','C','B'] print(l1) print(l2) # + l2 = [] l1 = ['A','B','C','D'] a = l1.pop(1) # B l2.append(a) # ['B'] a = l1.pop(1) # C l2.append(a) # ['B','C'] a = l1.pop(1) # B l2.append(a) #['B','C','D'] print(l1) print(l2) # - l1 = ['A','B','B'] l1.remove('B') l1 l1 = ['A','B','C','D'] l1.reverse() l1 l1 = ['X','Y','Z','A','B'] l1.sort() l1 l1 = ['X','Y','Z','A','B'] l1.sort(reverse=True) l1 # ### Magic or Dunder Methods # + active="" # __class__ # __contains__() # __doc__ # __delitem__() # __eq__() # __le__() # __ge__() # __sizeoff__() # - print(l1) l1.__class__ l1 = ['X','Y','Z','A','B'] print(l1) l1.__contains__('A') l1.__doc__ print(l1) l1.__delitem__(1) l1 [1,2,3].__eq__([3,2,1]) # equal [1,2,3].__le__([3,2,1]) #less then [1,2,3].__sizeof__() [1,2,3].__sizeof__ # ## Tuples` l1 = ['A','B','C'] print(l1) l1[0] = 'Pakistan' print(l1) l1 = ('A','B','C') print(l1) l1[0] = 'Pakistan' print(l1) l1 = ('A',5,'B','A','A','A') l1 l1 = ('A',5,'B','A','A','A') l1.count('A') l1 = ('A',5,'B','A','A','A') l1.index('A',1) help(tuple) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch from librosa import stft import numpy as np import matplotlib.pyplot as plt # t = 2*np.pi*np.linspace(0,1,5000) # x = np.sin(50*t) x = np.random.randn(10,10000).astype(np.float32) lib_output = stft(x[0], n_fft=2048, hop_length=512) window = torch.hann_window(2048) x = torch.nn.Parameter(x) torch_output = torch.stft(x, n_fft=2048, hop_length=512, window=window) plt.imshow(torch.sqrt(torch_output[0].pow(2).sum(-1)), aspect='auto') plt.imshow(abs(lib_output), aspect='auto') np.allclose(torch.sqrt(torch_output[0].pow(2).sum(-1)), abs(lib_output), atol=1e-5, rtol=1e-6) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: metis # language: python # name: metis # --- # ##### Import necessary ibraries from utils import pickle_to, pickle_from, ignore_warnings import pandas as pd import pickle import seaborn as sns import re import nltk from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer pd.set_option('display.max_colwidth', -1) # # Scrub data # 
###### Encoding newly created variable names into 0 or 1 # # Encoding | Label # -- | -- # 0 | FAKE # 1 | REAL # # # + code_folding=[] def encode_label(label): """Encode real as 1, fake as 0, everything else as None""" if label == "REAL": return 1 elif label == "FAKE": return 0 else: return None # + # Load raw data raw = pickle_from('../data/raw/raw.pkl') # + # Apply label encoding function to raw data raw['numeric_label'] = raw['label'].apply(encode_label) raw.head() # - raw[raw.text.str.contains('turtle')] # + # Removing rows that have the string 'sweeping consequences' # These have been observed to be duplicates of a single line raw = raw[~raw['text'].str.contains('sweeping consequence')] # + # Removing rows that have the string 'automatically' # as this is part of html code raw = raw[~raw['text'].str.contains('automatically')] raw = raw.drop(220).reset_index() raw = raw.drop('index',axis = 1) # - # Dropping null values data = raw[['title','text','numeric_label']].dropna() data.info() # ##### Combining text and title data into a single column by name 'news' data['news'] = data['title'] + '. ' + data['text'] data.head() # ## Cleaning text data # # Involves the following
    # # * converting everyting to lowercase # * removing punctuations # * removing numbers # * removing non english words # # + # Converting to lowercase data.text = data.text.apply(lambda x:x.lower()) data['text'] = data['text'].str.replace("’","'") #Removing all punctuations data.text = data.text.str.replace('[^\w\s]','') # Removing numbers data.text = data.text.str.replace('\d+', ' ') # Making sure any double-spaces are single data.text = data.text.str.replace(' ',' ') # + data.sample(5) # + text_data = data[['numeric_label','text']].copy() # - text_data.info() text_data.head(1) # ### Tokenize & Lemmatize # + w_tokenizer = nltk.tokenize.WhitespaceTokenizer() lemmatizer = nltk.stem.WordNetLemmatizer() def lemmatize_text(text): return [lemmatizer.lemmatize(w) for w in w_tokenizer.tokenize(text)] # - text_data['tokenized'] = text_data.text.apply(lemmatize_text) text_data.head(1) # ### Remove english stopwords stop_words = stopwords.words('english') # + stop_words.append('tweet') stop_words.append('home') stop_words.append('headlines') stop_words.append('finance') stop_words.append('news') stop_words.append('was') stop_words.append('has') stop_words.append('said') stop_words.append('wa') stop_words.append('ha') stop_words.append('leave') stop_words.append('comment') stop_words.append('loading') stop_words.append('eaten') stop_words.append('matrix') stop_words.append('extraterrestrial') stop_words.append('wi') stop_words.append('ivt') # - pickle_to(stop_words,'stop_words.pkl') # + text_data['tokenized'] = text_data['tokenized'].apply(lambda x: [item for item in x if item not in stop_words]) # Remove words that have less than 3 characters text_data['text'] = text_data['text'].str.replace(r'\b(\w{1,2})\b', ' ') # + # Create a new column that has the length of each text text_data['token_length'] = text_data.apply(lambda row: len(row['tokenized']), axis=1) # - text_data[text_data['text'].str.contains('phrase block')] text_data[text_data.text.str.contains('automatically')].head() text_data.head() # ### Pickling # + # Pickling pickle_to(text_data,'text_data.pkl') pickle_to(data,'data.pkl') # - raw[raw['label'] == 'REAL'] raw = pickle_from('raw.pkl') raw[raw['label'] == 'FAKE'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- 1 + "DS 1" 3 * "DS 1 " 42 - "DS 1" 1 / 3 # + course = "DS 1" course course # - number = 5 number type(number) float(number) floating_number = float(number) floating_number type(floating_number) pi = 3.14159 pi 5 / 2 5 // 2 5 % 2 6 / 3 6 % 3 int(6/3) int(2.0) 1 + 5 * 6 / 9 - 3 x = 42 y = 2(x+1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: chat bot # language: python # name: chatbot # --- # %load_ext autoreload # %autoreload 2 # + import kampuan as kp kp.test() # - # + phrase='สวัสดี' kp.tokenize(phrase) # - kp.tokenize('ตลาด') # * 2 word 2 syllabus case # # + case_1 ={'กินข้าว':'กาวขิ้น', 'หิวจัง':'หังจิว', 'คำผวน':'ควนผำ', 'นอนแล้ว':'แนวร้อน', 'ตะปู':'ตูปะ', 'นักเรียน':'เนียนรัก', 'ขนม':'ขมหน่ะ'} tmp=[kp.tokenize(key) for key in case_1.keys()] tmp # get # - # # Subwords # # * initial_consonant # * single อักษรเดี่ยว # * sound_level (low/mid/high) # * low_single, low_double # * cluster อักษรควบ # * true_cluster อักษรควบแท้ # * fasle_cluster อักษรควบไม่แท้ # * leading_cluster 
อักษรนำ–อักษรตาม # * hornum_leading หอ นำ # * oarnum_leading ออ นำ # * non_conform_leading นำด้วยสูง หรือ กลาง นคร ขนม # * final_consonant # * type (dead/live) # * sound (eng) # * k: กก # * ng: กง # * t: กด # * m: กม # * n: กน # * p: กบ # * y: เกย # * v: เกอว # * sound_level low/mid/high # * vowel # * tone http://www.thai-language.com/ref/tone-rules # * tone_mark # * tone_sound # + import re from kampuan.lang_tools import get_vowel_pattern,get_vowel_form,remove_tone_mark from kampuan.sub_word import ThaiSubWord vowel_pattern=get_vowel_pattern() def match_pattern(pattern,text): m=re.match(pattern,text) return m def match_vowel_form(text,vowel_patterns): for i,pattern in enumerate(vowel_patterns): m=re.match(pattern,text) if m: return (i,m,pattern) return (-1,text,text) vowel_forms=get_vowel_form(REP='-') test_text =['เขียว', 'เกรียน', 'ตู่', 'ตด', 'กร', 'นอน', 'ขนุน', 'จัง', 'กำ', 'ใหญ่', 'ใคร', 'เดา', 'เปรต', 'เสร็จ', 'ปีน', 'เปี๊ยะ', 'คั่ว', 'แก้ม', 'แช่ง', 'แกล้ม', 'แพร่', 'แข็ง', 'เกลอ', 'เทอญ', 'เกิด', 'เตลิด', 'กลม', 'เลว', 'เสก', 'วัว', 'เยี่ยม', 'เพลง', 'ปลอม', 'หอม', 'เวร', 'อย่าง', 'อยู่', 'อย่า', 'กวน' , 'แวว', 'วน', 'ว่าย', 'สวย', 'เขย', 'เกรง', 'แปลง', 'กรง', 'ธง', 'ครี่', 'แซง', 'แปล', 'แทรก', 'ท้อ', 'ท้อง', 'ใคร', 'เสวย', 'เยี่ยม', 'บรรณ', 'สมุทร', 'จันทร์', 'พันธุ์', 'กริบ', 'อยาก', 'กบ', 'เกษม', 'เออ'] for text in test_text: try: print(text,ThaiSubWord(text)._vowel_form,ThaiSubWord(text)._tone) except AttributeError: print(text,ThaiSubWord(text).two_syllable) # except IndexError: # print('error',text) # plain=remove_tone_mark(text) # (i,m,pattern)=match_vowel_form(plain,vowel_pattern) # print('===') # print(m.groups(),vowel_forms[i],pattern) # - ThaiSubWord(text) # + from kampuan.const import THAI_VOW len(THAI_VOW) # + # case เ แ need to verify ควบกล้ำ # 1 ตัว -> no สะกด # 2 ตัว -> 1 สะกด # 3 ตัว -> 1 สะกด # อักษรนำ # #initialize the special character cases # #คำควบแท้ # self.two_char_combine = ['กร','กล', 'กว', 'คร', 'ขร', 'คล', 'ขล','คว', 'ขว', 'ตร', 'ปร', 'ปล', 'พร', 'พล' ,'ผล', # 'บร','บล','ดร','ฟร','ฟล','ทร','จร','ซร','ปร','สร'] # #คำนำ # self.lead_char_nosound = ['อย','หง','หญ','หน','หม','หย', 'หร','หล','หว'] # #อักษรสูงนำอักษรต่ำ # self._lead_char_high_low = ['ขน', 'ขม','สม','สย','สน','ขย','ฝร','ถล','ผว','ตน','จม','ตล',] # #อักษรสูงนำอักษรกลาง # self._high_char_high_medium = ['ผท','ผด','ผก','ผอ','ผช'] # - import pythainlp pythainlp.thai_tonemarks pythainlp.thai_punctuations ['-'+char for char in pythainlp.thai_below_vowels] ['_'+char for char in pythainlp.thai_tonemarks] ['_'+char for char in pythainlp.thai_above_vowels] [char+'_' for char in pythainlp.thai_lead_vowels] ['_'+char for char in pythainlp.thai_follow_vowels] pythainlp.thai_vowels ThaiSubWord('กริบ')._ex_regex.groups() ThaiSubWord('พันธุ์').__dict__ # + from kampuan.const import MUTE_MARK, THAI_CONS, THAI_TONE, VOWEL_FORMS,df_tone_rule,THAI_VOW def find_mute_vowel(text: str): """extract main mute vowel การันต์ case:('พันธุ์') Args: text (str): word to check Returns: List of tuple List[(int,str)]: return[] if no การันต์ , return [(index, consonant)] """ if MUTE_MARK not in text: return [] else: mark_index = text.index(MUTE_MARK) lead_mute=text[mark_index-1] if lead_mute in THAI_VOW: return [(mark_index-1,lead_mute)] else: return [] find_mute_vowel('พันธุ์') # - ThaiSubWord('เกรง').__dict__ # # kampuan logic: # 1. Input phrase # 2. Split phrase to syllables # 3. Sub word processing, types and tones # 4. preprocessing on two syllable words # 5. Determine two syllable to do the puan # 6. 
Puan process, output new # * Tone assignment # * # 7. Fine tune sound no nymss # * case บัน บรร # * case same vowel sound # * case tune shift # # # + from kampuan import puan_kam # # 1. Input phrase # input_text ='สวัสดี' # # input_text ='ไงสลิ่ม' # def puan_kam_preprocess(input_text): # # 2. Split phrase to syllables # tokenized= tokenize(input_text) # # 3. Sub word processing, types and tones # sub_words=[ThaiSubWord(word) for word in tokenized] # # 4. preprocessing on two syllable words # split_words =[word_split for word in sub_words for word_split in word.split_non_conform() ] # return split_words # def puan_2_kam(a_raw,b_raw,keep_tone =None): # a_raw_tone=a_raw._tone # b_raw_tone=b_raw._tone # # swap vowel # a_target= b_raw._vowel_form_sound.replace('-',a_raw.init_con) # b_target= a_raw._vowel_form_sound.replace('-',b_raw.init_con) # # swap final con # a_target =ThaiSubWord(a_target +b_raw.final_con) # b_target =ThaiSubWord(b_target +a_raw.final_con) # # Swap tone # # assign tones # if keep_tone is None: # if a_target._word_class =='dead' or b_target._word_class =='dead': # keep_tone =False # else: # keep_tone =True # if keep_tone: # a_target_tone=a_raw_tone # b_target_tone =b_raw_tone # else: # a_target_tone=b_raw_tone # b_target_tone =a_raw_tone # # apply tone rules # a_target=ThaiSubWord.pun_wunayook(a_target._raw,a_target_tone) # b_target=ThaiSubWord.pun_wunayook(b_target._raw,b_target_tone) # return a_target ,b_target # def puan_kam_base(input_text='สวัสดี',keep_tone =None, use_first=True,index=None): # if isinstance(input_text,str): # split_words=puan_kam_preprocess(input_text) # else: # split_words =split_words # # 5. Determine two syllable to do the puan # if not index: # n_subwords=len(split_words) # index=(0,0) # if n_subwords ==1: # index=(0,0) # elif n_subwords==2: # index=(0,1) # elif use_first: # index =(0,-1) # else: # index=(1,-1) # # 6. Puan process, output new # # puan kum given two subwords # a_raw=split_words[index[0]] # b_raw=split_words[index[1]] # # apply tone rules # a_target,b_target=puan_2_kam(a_raw,b_raw,keep_tone =keep_tone) # # 7. 
combine # kam_puan= [w._raw for w in split_words] # kam_puan[index[0]]=a_target # kam_puan[index[1]]=b_target # return kam_puan # def puan_kam_all(input_text='สวัสดี'): # keep_tone =[True,False] # use_first =[True,False] # result= {} # count=0 # for k in keep_tone: # for j in use_first: # result[count]=puan_kam_base(input_text=input_text,keep_tone =k,use_first=j) # count+=1 # return result # def puan_kam_auto(input_text='สวัสดี',use_first=None): # split_words=puan_kam_preprocess(input_text) # n_subwords=len(split_words) # index=(0,0) # if n_subwords ==1: # index=(0,0) # elif n_subwords==2: # index=(0,1) # elif n_subwords==3: # if split_words[0]._word_class =='dead': # index=(1,-1) # else: # index=(0,-1) # else:# more than 3 # if use_first is None: # return [puan_kam_base(input_text=input_text,keep_tone =None,index=(i,-1)) for i in [0,1]] # elif use_first: # index=(0,-1) # else: # index=(1,-1) # return puan_kam_base(input_text=input_text,keep_tone =None,index=index) case_1 =['คำผวน', 'นอนแล้ว', 'ตะปู', 'นักเรียน', 'ขนม', 'เรอทัก', 'ลองดู', 'เจอพี่', 'เรอมัก', 'อาไบ้', 'หิวข้าว', 'กะหล่ำ', 'เจอหมึก'] for k in case_1: print(puan_kam(k)) # - case_1 =['มะนาวต่างดุ๊ด', 'กาเป็นหมู', 'ก้างใหญ่', 'อะหรี่ดอย', 'นอนแล้ว', 'ตะปู', 'นักเรียน', 'ขนม', 'เรอทัก', 'สวัสดี'] for k in case_1: print(k) print(puan_kam_auto(input_text=k)) print('===========') case_1 ={'กินข้าว':'กาวขิ้น', 'หิวจัง':'หังจิว', 'คำผวน':'ควนผำ', 'นอนแล้ว':'แนวร้อน', 'ตะปู':'ตูปะ', 'นักเรียน':'เนียนรัก', 'ขนม':'ขมหน่ะ', 'เรอทัก':'รักเธอ'} for k,v in case_1.items(): print(k,v) print(puan_kam(k)) print('===========') # + from kampuan.lang_tools import convert_tone_pair_double_init, convert_tone_pair_single_init,remove_tone_mark from kampuan.const import TONE_MARK_CLASS_INV from kampuan.sub_word import ThaiSubWord import logging test_text=[ 'ไก่', 'เป็ด', 'แคง', 'แข็ง', 'ขาว', 'ด๊วด', 'หมา', 'คราว', 'คน', 'ทราบ', 'ก็', 'ภูมิ', ] for text in test_text: print('-----',text) for i in range(0,5): print(ThaiSubWord.pun_wunayook(text,tone_target=i)) # + from IPython.display import display test_text=[ 'ภูมิ', 'พยาธิ', 'ชาติ', 'สาเหตุ', 'ธาตุ' ] for text in test_text: sw =ThaiSubWord(text) print(text,sw._ex_vw_form) display(sw.__dict__) print() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### What is DCT (discrete cosine transformation) ? # # - This notebook creates arbitrary consumption functions at both 1-dimensional and 2-dimensional grids and illustrate how DCT approximates the full-grid function with different level of accuracies. # - This is used in [DCT-Copula-Illustration notebook](DCT-Copula-Illustration.ipynb) to plot consumption functions approximated by DCT versus original consumption function at full grids. 
# - Written by # - June 19, 2019 # + code_folding=[0, 11] # Setup def in_ipynb(): try: if str(type(get_ipython())) == "": return True else: return False except NameError: return False # Determine whether to make the figures inline (for spyder or jupyter) # vs whatever is the automatic setting that will apply if run from the terminal if in_ipynb(): # # %matplotlib inline generates a syntax error when run from the shell # so do this instead get_ipython().run_line_magic('matplotlib', 'inline') else: get_ipython().run_line_magic('matplotlib', 'auto') # - # Import tools import scipy.fftpack as sf # scipy discrete fourier transform import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy.linalg as lag from scipy import misc from matplotlib import cm # + code_folding=[] ## DCT in 1 dimension grids= np.linspace(0,100,100) # this represents the grids on which consumption function is defined.i.e. m or k c =grids + 50*np.cos(grids*2*np.pi/40) # this is an arbitrary example of consumption function c_dct = sf.dct(c,norm='ortho') # set norm =ortho is important ind=np.argsort(abs(c_dct))[::-1] # get indices of dct coefficients(absolute value) in descending order # + code_folding=[0] ## DCT in 1 dimension for difference accuracy levels fig = plt.figure(figsize=(5,5)) fig.suptitle('DCT compressed c function with different accuracy levels') lvl_lst = np.array([0.5,0.9,0.99]) plt.plot(c,'r*',label='c at full grids') c_dct = sf.dct(c,norm='ortho') # set norm =ortho is important ind=np.argsort(abs(c_dct))[::-1] for idx in range(len(lvl_lst)): i = 1 # starts the loop that finds the needed indices so that an target level of approximation is achieved while lag.norm(c_dct[ind[0:i]].copy())/lag.norm(c_dct) < lvl_lst[idx]: i = i + 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions used") c_dct_rdc=c.copy() c_dct_rdc[ind[needed+1:]] = 0 c_approx = sf.idct(c_dct_rdc) plt.plot(c_approx,label=r'c approx at ${}$'.format(lvl_lst[idx])) plt.legend(loc=0) # + code_folding=[0] ## Blockwise DCT. For illustration but not used in BayerLuetticke. 
## But it illustrates how doing dct in more finely devided blocks give a better approximation size = c.shape c_dct = np.zeros(size) c_approx=np.zeros(size) fig = plt.figure(figsize=(5,5)) fig.suptitle('DCT compressed c function with different number of basis funcs') nbs_lst = np.array([20,50]) plt.plot(c,'r*',label='c at full grids') for i in range(len(nbs_lst)): delta = np.int(size[0]/nbs_lst[i]) for pos in np.r_[:size[0]:delta]: c_dct[pos:(pos+delta)] = sf.dct(c[pos:(pos+delta)],norm='ortho') c_approx[pos:(pos+delta)]=sf.idct(c_dct[pos:(pos+delta)]) plt.plot(c_dct,label=r'Nb of blocks= ${}$'.format(nbs_lst[i])) plt.legend(loc=0) # + code_folding=[0] # DCT in 2 dimensions def dct2d(x): x0 = sf.dct(x.copy(),axis=0,norm='ortho') x_dct = sf.dct(x0.copy(),axis=1,norm='ortho') return x_dct def idct2d(x): x0 = sf.idct(x.copy(),axis=1,norm='ortho') x_idct= sf.idct(x0.copy(),axis=0,norm='ortho') return x_idct # arbitrarily generate a consumption function at different grid points grid0=20 grid1=20 grids0 = np.linspace(0,20,grid0) grids1 = np.linspace(0,20,grid1) c2d = np.zeros([grid0,grid1]) # create an arbitrary c functions at 2-dimensional grids for i in range(grid0): for j in range(grid1): c2d[i,j]= grids0[i]*grids1[j] - 50*np.sin(grids0[i]*2*np.pi/40)+10*np.cos(grids1[j]*2*np.pi/40) ## do dct for 2-dimensional c at full grids c2d_dct=dct2d(c2d) ## convert the 2d to 1d for easier manipulation c2d_dct_flt = c2d_dct.flatten(order='F') ind2d = np.argsort(abs(c2d_dct_flt.copy()))[::-1] # get indices of dct coefficients(abosolute value) # in the decending order # + code_folding=[0] # DCT in 2 dimensions for different levels of accuracy fig = plt.figure(figsize=(15,10)) fig.suptitle('DCT compressed c function with different accuracy levels') lvl_lst = np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1) ax.imshow(c2d) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(np.sort(ind2d[needed+1:]),(grid0,grid1),order='F') c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) ax = fig.add_subplot(2,3,idx+2) ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.imshow(c2d_approx) # + code_folding=[0] ## surface plot of c at full grids and dct approximates with different accuracy levels fig = plt.figure(figsize=(15,10)) fig.suptitle('DCT compressed c function in different accuracy levels') lvl_lst = np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1,projection='3d') ax.plot_surface(grids0,grids1,c2d,cmap=cm.coolwarm) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1)) c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) ax = fig.add_subplot(2,3,idx+2,projection='3d') ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.plot_surface(grids0,grids1,c2d_approx,cmap=cm.coolwarm) # + code_folding=[0] # surface plot of absoulte value of differences of c at full grids and approximated fig = plt.figure(figsize=(15,10)) fig.suptitle('Differences(abosolute value) of DCT compressed with c at full grids in different accuracy levels') lvl_lst = 
np.array([0.999,0.99,0.9,0.8,0.5]) ax=fig.add_subplot(2,3,1,projection='3d') c2d_diff = abs(c2d-c2d) ax.plot_surface(grids0,grids1,c2d_diff,cmap=cm.coolwarm) ax.set_title(r'$1$') for idx in range(len(lvl_lst)): i = 1 while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]: i += 1 needed = i print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used") c2d_dct_rdc=c2d_dct.copy() idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1)) c2d_dct_rdc[idx_urv] = 0 c2d_approx = idct2d(c2d_dct_rdc) c2d_approx_diff = abs(c2d_approx - c2d) ax = fig.add_subplot(2,3,idx+2,projection='3d') ax.set_title(r'${}$'.format(lvl_lst[idx])) ax.plot_surface(grids0,grids1,c2d_approx_diff,cmap= 'OrRd',linewidth=1) ax.view_init(20, 90) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: html # language: python # name: html # --- # ## Bayesian Optimisation with Scikit-Optimize # # In this notebook, we will use **Bayesian Optimization** to select the best **hyperparameters** for a Gradient Boosting Regressor, using the open source Python package [Scikit-Optimize](https://scikit-optimize.github.io/stable/index.html). # # Scikit-Optimize offers an interface that allows us to do the Optimization in a similar way to the GridSearchCV or RandomizedSearchCV from Scikit-learn, through the class [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.html#skopt.BayesSearchCV) # # In this notebook, we will see how to do so. # # # ### Important # # Remember that we use **Bayesian Optimization** when we are looking to optimize functions that are costly, like those derived from neuronal networks. For a Gradient Boosting Machine trained on little data like the one in this notebook, we would probably make a better search if we carried out a Random Search. # # # ### Hyperparameter Tunning Procedure # # To tune the hyper-parameters of our model we need to: # # - define a model # - decide which parameters to optimize # - define the objective function we want to minimize. # # # ### NOTE # # Scikit-Optimize will always **minimize** the objective function, so if we want to maximize a function, for example the roc-auc, we need to **negate** the metric. Thus, instead of maximizing the roc-auc, we minimize the -roc-auc. 
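# Before the actual workflow, here is a minimal sketch (on hypothetical synthetic data, separate from the BayesSearchCV example below) of what negating a "higher is better" metric looks like when the optimiser can only minimise.

# +
# Sketch: an objective that returns minus the cross-validated roc-auc
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X_demo, y_demo = make_classification(n_samples=300, random_state=0)

def neg_auc_objective(max_depth):
    clf = GradientBoostingClassifier(max_depth=max_depth, random_state=0)
    auc = cross_val_score(clf, X_demo, y_demo, cv=3, scoring='roc_auc').mean()
    return -auc  # minimising -auc is the same as maximising auc

neg_auc_objective(2), neg_auc_objective(4)
# -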
# + import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import stats from sklearn.datasets import load_boston from sklearn.ensemble import GradientBoostingRegressor from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split # note that we only need to import the wrapper from skopt import BayesSearchCV # + # load dataset boston_X, boston_y = load_boston(return_X_y=True) X = pd.DataFrame(boston_X) y = pd.Series(boston_y) X.head() # - y.hist(bins=50) plt.title("House median price") plt.show() # + # split dataset into a train and test set X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0) X_train.shape, X_test.shape # - # ## Define the model and Hyperparameter Space # + # set up the model gbm = GradientBoostingRegressor(random_state=0) # + # hyperparameter space param_grid = { 'n_estimators': (10, 120), 'min_samples_split': (0.001, 0.99, 'log-uniform'), 'max_depth': (1, 8), 'loss': ['ls', 'lad', 'huber'], } # + # At the moment of creating this notebook, the BayesSearchCV is not # compatible with sklearn version 0.24, because in this version the # param iid was deprecated from sklearn, and not yet from scikit-optimize # make sure you have version 0.23 of sklearn to run this notebook import sklearn sklearn.__version__ # - # ## Bayesian Optimization # # The rest of the notebook is very similar to that of RandomizedSearchCV, because the BayesSearchCV makes sure to bring forward all of Scikit-learn functionality. # + # set up the search search = BayesSearchCV( estimator=gbm, search_spaces=param_grid, scoring='neg_mean_squared_error', cv=3, n_iter=50, random_state=10, n_jobs=4, refit=True) # find best hyperparameters search.fit(X_train, y_train) # + # the best hyperparameters are stored in an attribute search.best_params_ # + # the best score search.best_score_ # + # we also find the data for all models evaluated results = pd.DataFrame(search.cv_results_) print(results.shape) results.head() # + # we can order the different models based on their performance results.sort_values(by='mean_test_score', ascending=False, inplace=True) results.reset_index(drop=True, inplace=True) # plot model performance and error results['mean_test_score'].plot(yerr=[results['std_test_score'], results['std_test_score']], subplots=True) plt.ylabel('Mean test score') plt.xlabel('Hyperparameter combinations') # + X_train_preds = search.predict(X_train) X_test_preds = search.predict(X_test) print('Train MSE: ', mean_squared_error(y_train, X_train_preds)) print('Test MSE: ', mean_squared_error(y_test, X_test_preds)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # %matplotlib inline # + from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier from IPython.display import display from sklearn import metrics from pathlib import Path from sklearn.tree import export_graphviz import pandas as pd import numpy as np import matplotlib.pyplot as plt import IPython, graphviz import re import math # + def is_numeric_dtype(arr_or_dtype) -> bool: ... def fix_missing(df, col, name, na_dict): """ Fill missing data in a column of df with the median, and add a {name}_na column which specifies if the data was missing. Parameters: ----------- df: The data frame that will be changed. 
    col: The column of data to fix by filling in missing data.
    name: The name of the new filled column in df.
    na_dict: A dictionary mapping column names to the value used to fill their missing data.
        If name is not a key of na_dict, the median will fill any missing data. Also, if name
        is not a key of na_dict and there is no missing data in col, then no {name}_na column
        is created.
    """
    if is_numeric_dtype(col):
        if pd.isnull(col).sum() or (name in na_dict):
            df[name+'_na'] = pd.isnull(col)
            filler = na_dict[name] if name in na_dict else col.median()
            df[name] = col.fillna(filler)
            na_dict[name] = filler
    return na_dict

def numericalize(df, col, name, max_n_cat):
    """ Changes the column col from a categorical type to its integer codes.

    Parameters:
    -----------
    df: A pandas dataframe. df[name] will be filled with the integer codes from col.
    col: The column you wish to change into the categories.
    name: The column name you wish to insert into df. This column will hold the integer codes.
    max_n_cat: If col has max_n_cat or fewer categories, it is left as a categorical column
        (so it can be turned into dummy variables later) instead of being converted to its
        integer codes. If max_n_cat is None, then col will always be converted.
    """
    if not is_numeric_dtype(col) and ( max_n_cat is None or len(col.cat.categories)>max_n_cat):
        df[name] = pd.Categorical(col).codes+1

def proc_df(df, y_fld=None, skip_flds=None, ignore_flds=None, do_scale=False, na_dict=None,
            preproc_fn=None, max_n_cat=None, subset=None, mapper=None):
    """ proc_df takes a data frame df and splits off the response variable, and
    changes the df into an entirely numeric dataframe. For each column of df which
    is not in skip_flds nor in ignore_flds, na values are replaced by the
    median value of the column.

    Parameters:
    -----------
    df: The data frame you wish to process.
    y_fld: The name of the response variable.
    skip_flds: A list of fields that are dropped from df.
    ignore_flds: A list of fields that are ignored during processing.
    do_scale: Standardizes each column in df. Takes Boolean values (True, False).
    na_dict: A dictionary of na columns to add. Na columns are also added if there
        are any missing values.
    preproc_fn: A function that gets applied to df.
    max_n_cat: The maximum number of categories to break into dummy values, instead
        of integer codes.
    subset: Takes a random subset of size subset from df.
    mapper: If do_scale is set as True, the mapper variable calculates the values
        used for scaling of variables during training time (mean and standard deviation).

    Returns:
    --------
    [x, y, nas, mapper(optional)]:
        x: x is the transformed version of df. x will not have the response variable
            and is entirely numeric.
        y: y is the response variable.
        nas: returns a dictionary of which nas it created, and the associated median.
        mapper: A DataFrameMapper which stores the mean and standard deviation of the
            corresponding continuous variables, which is then used for scaling during test time.
""" if not ignore_flds: ignore_flds=[] if not skip_flds: skip_flds=[] if subset: df = get_sample(df,subset) else: df = df.copy() ignored_flds = df.loc[:, ignore_flds] df.drop(ignore_flds, axis=1, inplace=True) if preproc_fn: preproc_fn(df) if y_fld is None: y = None else: if not is_numeric_dtype(df[y_fld]): df[y_fld] = pd.Categorical(df[y_fld]).codes y = df[y_fld].values skip_flds += [y_fld] df.drop(skip_flds, axis=1, inplace=True) if na_dict is None: na_dict = {} else: na_dict = na_dict.copy() na_dict_initial = na_dict.copy() for n,c in df.items(): na_dict = fix_missing(df, c, n, na_dict) if len(na_dict_initial.keys()) > 0: df.drop([a + '_na' for a in list(set(na_dict.keys()) - set(na_dict_initial.keys()))], axis=1, inplace=True) if do_scale: mapper = scale_vars(df, mapper) for n,c in df.items(): numericalize(df, c, n, max_n_cat) df = pd.get_dummies(df, dummy_na=True) df = pd.concat([ignored_flds, df], axis=1) res = [df, y, na_dict] if do_scale: res = res + [mapper] return res # + BASE_PATH = Path("../..") df_raw = pd.read_feather(BASE_PATH / 'bulldozers-raw') df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice') # - def split_vals(a,n): return a[:n], a[n:] n_valid = 12000 n_trn = len(df_trn)-n_valid X_train, X_valid = split_vals(df_trn, n_trn) y_train, y_valid = split_vals(y_trn, n_trn) raw_train, raw_valid = split_vals(df_raw, n_trn) x_sub = X_train[['YearMade', 'MachineHoursCurrentMeter']] # ## Basic data structures class TreeEnsemble(): def __init__(self, x, y, n_trees, sample_sz, min_leaf=5): np.random.seed(42) self.x,self.y,self.sample_sz,self.min_leaf = x,y,sample_sz,min_leaf self.trees = [self.create_tree() for i in range(n_trees)] def create_tree(self): rnd_idxs = np.random.permutation(len(self.y))[:self.sample_sz] return DecisionTree(self.x.iloc[rnd_idxs], self.y[rnd_idxs], min_leaf=self.min_leaf) def predict(self, x): return np.mean([t.predict(x) for t in self.trees], axis=0) class DecisionTree(): def __init__(self, x, y, idxs=None, min_leaf=5): self.x,self.y,self.idxs,self.min_leaf = x,y,idxs,min_leaf m = TreeEnsemble(X_train, y_train, n_trees=10, sample_sz=1000, min_leaf=3) m.trees[0] class DecisionTree(): def __init__(self, x, y, idxs=None, min_leaf=5): if idxs is None: idxs=np.arange(len(y)) self.x,self.y,self.idxs,self.min_leaf = x,y,idxs,min_leaf self.n,self.c = len(idxs), x.shape[1] self.val = np.mean(y[idxs]) self.score = float('inf') self.find_varsplit() # This just does one decision; we'll make it recursive later def find_varsplit(self): for i in range(self.c): self.find_better_split(i) # We'll write this later! 
    def find_better_split(self, var_idx): pass

    @property
    def split_name(self): return self.x.columns[self.var_idx]

    @property
    def split_col(self): return self.x.values[self.idxs,self.var_idx]

    @property
    def is_leaf(self): return self.score == float('inf')

    def __repr__(self):
        s = f'n: {self.n}; val:{self.val}'
        if not self.is_leaf:
            s += f'; score:{self.score}; split:{self.split}; var:{self.split_name}'
        return s

m = TreeEnsemble(X_train, y_train, n_trees=10, sample_sz=1000, min_leaf=3)
m.trees[0]

# ## Single branch

# __Find best split given variable__

ens = TreeEnsemble(x_sub, y_train, 1, 1000)
tree = ens.trees[0]
x_samp,y_samp = tree.x, tree.y

x_samp.columns

tree

m = RandomForestRegressor(n_estimators=1, max_depth=1, bootstrap=False)
m.fit(x_samp, y_samp)

# +
def set_plot_sizes(sml, med, big):
    plt.rc('font', size=sml)          # controls default text sizes
    plt.rc('axes', titlesize=sml)     # fontsize of the axes title
    plt.rc('axes', labelsize=med)     # fontsize of the x and y labels
    plt.rc('xtick', labelsize=sml)    # fontsize of the tick labels
    plt.rc('ytick', labelsize=sml)    # fontsize of the tick labels
    plt.rc('legend', fontsize=sml)    # legend fontsize
    plt.rc('figure', titlesize=big)   # fontsize of the figure title

def draw_tree(t, df, size=10, ratio=0.6, precision=0):
    """ Draws a representation of a random forest in IPython.

    Parameters:
    -----------
    t: The tree you wish to draw
    df: The data used to train the tree. This is used to get the names of the features.
    """
    s=export_graphviz(t, out_file=None, feature_names=df.columns, filled=True,
                      special_characters=True, rotate=True, precision=precision)
    IPython.display.display(graphviz.Source(re.sub('Tree {',
                            f'Tree {{ size={size}; ratio={ratio}', s)))
# -

draw_tree(m.estimators_[0], x_samp, precision=2)

def find_better_split(self, var_idx):
    x,y = self.x.values[self.idxs,var_idx], self.y[self.idxs]
    for i in range(self.n):
        lhs = x<=x[i]
        rhs = x>x[i]
        if rhs.sum()<self.min_leaf or lhs.sum()<self.min_leaf: continue
        lhs_std = y[lhs].std()
        rhs_std = y[rhs].std()
        curr_score = lhs_std*lhs.sum() + rhs_std*rhs.sum()
        if curr_score<self.score:
            self.var_idx,self.score,self.split = var_idx,curr_score,x[i]

DecisionTree.find_better_split = find_better_split

def find_varsplit(self):
    for i in range(self.c): self.find_better_split(i)
    if self.score == float('inf'): return
    x = self.split_col
    lhs = np.nonzero(x<=self.split)[0]
    rhs = np.nonzero(x>self.split)[0]
    self.lhs = DecisionTree(self.x, self.y, self.idxs[lhs])
    self.rhs = DecisionTree(self.x, self.y, self.idxs[rhs])

DecisionTree.find_varsplit = find_varsplit

tree = TreeEnsemble(x_sub, y_train, 1, 1000).trees[0]; tree

tree.lhs

tree.rhs

tree.lhs.lhs

tree.lhs.rhs

cols = ['MachineID', 'YearMade', 'MachineHoursCurrentMeter', 'ProductSize',
        'Enclosure', 'Coupler_System', 'saleYear']

# %time tree = TreeEnsemble(X_train[cols], y_train, 1, 1000).trees[0]
x_samp,y_samp = tree.x, tree.y

m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False)
m.fit(x_samp, y_samp)
draw_tree(m.estimators_[0], x_samp, precision=2, ratio=0.9, size=7)

def predict(self, x): return np.array([self.predict_row(xi) for xi in x])
DecisionTree.predict = predict

# +
def predict_row(self, xi):
    if self.is_leaf: return self.val
    t = self.lhs if xi[self.var_idx]<=self.split else self.rhs
    return t.predict_row(xi)

DecisionTree.predict_row = predict_row
# -

# %time preds = tree.predict(X_valid[cols].values)
plt.scatter(preds, y_valid, alpha=0.05)

metrics.r2_score(preds, y_valid)

m = RandomForestRegressor(n_estimators=1, min_samples_leaf=5, bootstrap=False)
# %time m.fit(x_samp, y_samp)
preds = m.predict(X_valid[cols].values)
plt.scatter(preds, y_valid, alpha=0.05)
metrics.r2_score(preds, y_valid)

# ## Putting it together

# +
class TreeEnsemble():
    def __init__(self, x, y, n_trees, sample_sz, min_leaf=5):
        np.random.seed(42)
        self.x,self.y,self.sample_sz,self.min_leaf = x,y,sample_sz,min_leaf
        self.trees = [self.create_tree() for i in range(n_trees)]

    def create_tree(self):
        idxs = np.random.permutation(len(self.y))[:self.sample_sz]
        return DecisionTree(self.x.iloc[idxs], self.y[idxs],
                            idxs=np.array(range(self.sample_sz)), min_leaf=self.min_leaf)

    def predict(self, x):
        return np.mean([t.predict(x) for t in self.trees], axis=0)

def std_agg(cnt, s1, s2): return math.sqrt(abs((s2/cnt) - (s1/cnt)**2))
# -
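# A quick check (added, not part of the original notebook) that `std_agg` computes a population
# standard deviation from a running count, sum and sum of squares -- exactly the three quantities
# that `find_better_split` maintains incrementally below.

# +
check = np.array([3., 1., 4., 1., 5., 9.])
assert np.isclose(std_agg(len(check), check.sum(), (check**2).sum()), check.std())
# -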
class DecisionTree():
    def __init__(self, x, y, idxs, min_leaf=5):
        self.x,self.y,self.idxs,self.min_leaf = x,y,idxs,min_leaf
        self.n,self.c = len(idxs), x.shape[1]
        self.val = np.mean(y[idxs])
        self.score = float('inf')
        self.find_varsplit()

    def find_varsplit(self):
        for i in range(self.c): self.find_better_split(i)
        if self.score == float('inf'): return
        x = self.split_col
        lhs = np.nonzero(x<=self.split)[0]
        rhs = np.nonzero(x>self.split)[0]
        self.lhs = DecisionTree(self.x, self.y, self.idxs[lhs])
        self.rhs = DecisionTree(self.x, self.y, self.idxs[rhs])

    def find_better_split(self, var_idx):
        x,y = self.x.values[self.idxs,var_idx], self.y[self.idxs]
        sort_idx = np.argsort(x)
        sort_y,sort_x = y[sort_idx], x[sort_idx]
        rhs_cnt,rhs_sum,rhs_sum2 = self.n, sort_y.sum(), (sort_y**2).sum()
        lhs_cnt,lhs_sum,lhs_sum2 = 0,0.,0.
        for i in range(0,self.n-self.min_leaf):
            xi,yi = sort_x[i],sort_y[i]
            lhs_cnt += 1;  rhs_cnt -= 1
            lhs_sum += yi; rhs_sum -= yi
            lhs_sum2 += yi**2; rhs_sum2 -= yi**2
            if i<self.min_leaf or xi==sort_x[i+1]:
                continue
            lhs_std = std_agg(lhs_cnt, lhs_sum, lhs_sum2)
            rhs_std = std_agg(rhs_cnt, rhs_sum, rhs_sum2)
            curr_score = lhs_std*lhs_cnt + rhs_std*rhs_cnt
            if curr_score<self.score:
                self.var_idx,self.score,self.split = var_idx,curr_score,xi

# The U-Net architecture was proposed by Olaf Ronneberger, Philipp Fischer and Thomas Brox in 2015
# in the paper ["UNet: Convolutional Networks for Biomedical Image Segmentation"](https://arxiv.org/abs/1505.04597).
# It is based on the Fully Convolutional Network architecture proposed by Jonathan Long, Evan Shelhamer
# and Trevor Darrell in their paper ["Fully convolutional networks for semantic segmentation"](https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf).
# Here's what the U-Net architecture looks like:

# + colab={"base_uri": "https://localhost:8080/", "height": 484} colab_type="code" id="jejo9pG5oBgV" outputId="4acfdee1-1d0c-4f4d-af6a-bc70b1b1bea4"
## image source: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
from IPython.display import Image
Image("assets/u-net-architecture.png", width=700)

# + [markdown] colab_type="text" id="nYYHm4FQoBgY"
# ### Implementation
# The implementation was taken from https://github.com/usuyama/pytorch-unet and can be found
# in the file `model.py` (and modified if needed):

# + colab={} colab_type="code" id="IUPL0-5ffG3t"
from model import UNet

# We're predicting a one-channel mask for absence or presence of Pneumothorax
channels_out = 1
unet = UNet(channels_out).to(device)

# + [markdown] colab_type="text" id="2EfNI2X3fG3y"
# ### Functions used for loss calculation
#
# Here we're using a weighted average of binary crossentropy loss and [Dice loss](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient) for backpropagation.
# We're reporting this loss, together with both Dice and binary crossentropy losses, as metrics during training.

# + colab={} colab_type="code" id="Dp3cGjjifG3z"
# also pretty much inspired by https://github.com/usuyama/pytorch-unet
def dice_loss(pred, target, smooth = 1.):
    pred = pred.contiguous()
    target = target.contiguous()
    intersection = (pred * target).sum(dim=2).sum(dim=2)
    loss = (1 - ((2.
* intersection + smooth) / (pred.sum(dim=2).sum(dim=2) + target.sum(dim=2).sum(dim=2) + smooth))) return loss.mean() def calc_loss(pred, target, bce_weight=0.5): bce = F.binary_cross_entropy_with_logits(pred, target) pred = torch.sigmoid(pred) dice = dice_loss(pred, target) loss = bce * bce_weight + dice * (1 - bce_weight) return bce, dice, loss def update_metrics(metrics, bce, dice, loss, shape): metrics["bce"] += bce * shape metrics["dice"] += dice * shape metrics["loss"] += loss * shape return metrics def print_metrics(metrics, epoch_samples, phase): outputs = [] for k in metrics.keys(): outputs.append(f"{k}: {metrics[k]/epoch_samples:4f}") print(f"{phase}: {', '.join(outputs)}") # + [markdown] colab_type="text" id="RHMhHkrUfG31" # ## Federated dataset # # #### Hooking PyTorch and creating Virtual Workers # # First we need to override the PyTorch methods so that we can execute commands on remote tensors as if they were on our local machine. This is done by running `hook = sy.TorchHook(torch)`. Then we create a number of VirtualWorkers, which are the entities that are actually located on our local server, but they can be used to simulate the behavior of remote workers: # + colab={"base_uri": "https://localhost:8080/", "height": 108} colab_type="code" id="mc4bZ1YHfG34" outputId="9663f7b3-7282-409c-89dd-bcec7e6cef2b" import syft as sy hook = sy.TorchHook(torch) bob = sy.VirtualWorker(hook, id="bob") alice = sy.VirtualWorker(hook, id="alice") workers = (bob, alice) # + [markdown] colab_type="text" id="7kexOpifoBgj" # ### Creating a federated dataset and a federated dataloader # # We'll use the `dataset.federate()` method from PySyft here for demo purposes, but usually the datasets are already located on different clients. For this to be done, the dataset has to contain its data as the `.data` attribute, and its labels in the `.targets` attribute, so we need to tweak our train and validation datasets as follows: # + colab={} colab_type="code" id="oGEm_IIAoBgk" def reformat_subset_dataset(subset_dataset): subset_dataset.data = [subset_dataset.dataset[i][0] for i in subset_dataset.indices] subset_dataset.targets = [subset_dataset.dataset[i][1] for i in subset_dataset.indices] return subset_dataset dataset_train = reformat_subset_dataset(dataset_train) dataset_val = reformat_subset_dataset(dataset_val) federated_dataset_train = dataset_train.federate(workers) federated_dataset_val = dataset_val.federate(workers) federated_train_loader = sy.FederatedDataLoader(federated_dataset_train, batch_size=BATCH_SIZE ) federated_val_loader = sy.FederatedDataLoader(federated_dataset_val, batch_size=BATCH_SIZE ) dataloaders = { "train": federated_train_loader, "val": federated_val_loader } # + [markdown] colab_type="text" id="Qj3ZToAjoBgm" # ## Federated training without aggregation # # In this mode of training, we send the model to a worker at every iteration, and then retrieve it back, so that we're then able to send it to another worker with updated parameters. Here, the new parameters only take into account the data local to the current worker. 
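# Before the training loop below, a quick sanity check (added, not part of the original notebook;
# it assumes `torch` is already imported, as the existing cells do) of the loss helpers defined above
# on dummy tensors: a confident, correct prediction should give BCE and Dice losses near 0, while a
# confidently wrong one should not.

# +
_target = torch.zeros(2, 1, 8, 8)
_target[:, :, :4, :] = 1.0                    # dummy batch of two 8x8 binary masks
_good_logits = (_target * 2 - 1) * 10.0       # large logits that agree with the mask
_bad_logits = -_good_logits                   # large logits that disagree with the mask
for _name, _logits in [("good", _good_logits), ("bad", _bad_logits)]:
    _bce, _dice, _loss = calc_loss(_logits, _target)
    print(_name, round(_bce.item(), 4), round(_dice.item(), 4), round(_loss.item(), 4))
# -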
# + colab={} colab_type="code" id="h1TBevGcfG36"
def train_model(model, optimizer, num_epochs=25):
    best_model_wts = copy.deepcopy(model.state_dict())
    best_loss = 1e10
    bce_weight = 0.5

    for epoch in range(num_epochs):
        print(f'Epoch {epoch + 1}/{num_epochs}')
        print('-' * 10)
        since = time.time()

        for phase in ['train', 'val']:
            if phase == 'train':
                # Set model to training mode
                model.train()
            else:
                model.eval()

            metrics = defaultdict(float)
            epoch_samples = 0

            for data, target in dataloaders[phase]:
                # Send the model to the worker
                model.send(data.location)
                data, target = data.to(device), target.to(device)

                # Forward prop
                optimizer.zero_grad()
                output = model(data)
                bce, dice, loss = calc_loss(output, target)

                # Backprop if in training phase
                if phase == 'train':
                    loss.backward()
                    optimizer.step()

                # Update the metrics for this epoch
                metrics = update_metrics(metrics, bce.get().item(), dice.get().item(),
                                         loss.get().item(), target.shape[0])

                # count samples
                epoch_samples += data.shape[0]

                # retrieve the model back
                model.get()

            # After each phase print the metrics and
            # calculate the average epoch loss
            print_metrics(metrics, epoch_samples, phase)
            epoch_loss = metrics['loss'] / epoch_samples

            # deep copy the model if the average validation loss has improved
            if phase == 'val' and epoch_loss < best_loss:
                print("saving best model")
                best_loss = epoch_loss
                best_model_wts = copy.deepcopy(model.state_dict())

        # After each epoch print how much time it took
        time_elapsed = time.time() - since
        print(f'{time_elapsed//60:.0f}m {time_elapsed%60:.0f}s')

    # Load the best model
    print(f'Best val loss: {best_loss:4f}')
    model.load_state_dict(best_model_wts)
    return model

# + [markdown] colab_type="text" id="NB3tztrxoBgr"
# #### Now everything is ready for training!

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="yLc1qJjJfG39" outputId="f54368b0-eab7-47a4-9fe6-8061233b3fb1"
optimizer = optim.SGD(unet.parameters(), lr=1e-4)
model = train_model(unet, optimizer, num_epochs=10)

# + [markdown] colab_type="text" id="ZidWA9W7oYr9"
# #### That's it! Now the model can be tested on new data, or trained further!
#

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
## exercise 2.6 ###
import numpy as np
import matplotlib.pyplot as plt

###
# Exercise 2.5 (programming) Design and conduct an experiment to demonstrate the
# difficulties that sample-average methods have for nonstationary problems. Use a modified
# version of the 10-armed testbed in which all the q*(a) start out equal and then take
# independent random walks (say by adding a normally distributed increment with mean
# zero and standard deviation 0.01 to all the q*(a) on each step). Prepare plots like
# Figure 2.2 for an action-value method using sample averages, incrementally computed,
# and another action-value method using a constant step-size parameter, α = 0.1. Use
# ε = 0.1 and longer runs, say of 10,000 steps.
###

# +
k = 10
mean = 0
sd = 0.01
epsilon = 0.1
runs = 2000
steps = 10000
alpha = 0.1

def select_e_greedy_action(Q,epsilon=epsilon):
    return np.random.randint(0,len(Q)) if np.random.uniform(0,1) <= epsilon else np.argmax(Q,axis=0)

def run_k_bandit_problem(method,steps=steps,k=k,mean=mean,sd=sd,alpha=alpha):
    reward = [0]
    Q = k*[0]
    qStar = np.zeros(k)   # all q*(a) start out equal; a numpy array keeps the += random walk element-wise
    N = k*[0]
    for step in range(1,steps+1):
        action = select_e_greedy_action(Q)
        N[action] += 1
        reward.append(qStar[action])
        Q[action] += (1/N[action])*(qStar[action] - Q[action]) if method == 'sample_avg' else (alpha)*(qStar[action] - Q[action])
        qStar += np.random.normal(mean,sd,len(qStar))
    return reward

def run_trials(runs=runs,eps=epsilon,alpha=alpha):
    sample_avg_rwd = [run_k_bandit_problem('sample_avg') for i in range(runs)]
    weighted_avg_rwd = [run_k_bandit_problem('weighted_avg') for i in range(runs)]
    compiled_sample_avg = np.average(sample_avg_rwd,axis=0)
    compiled_weighted_avg = np.average(weighted_avg_rwd,axis=0)
    plt.plot(compiled_sample_avg,label=f'eps={eps}')
    plt.plot(compiled_weighted_avg,label=f"eps = {eps}, alpha = {alpha}")
    plt.legend()
    plt.ylabel('average reward')
    plt.xlabel('num steps')
    plt.show()
# -

run_trials()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

#imports
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
import numpy as np

#load dataset
mnist = fetch_openml(name='mnist_784')
mnist

#split data and labels
X_mnist = mnist.data
Y_mnist = mnist.target

#show first image
# %matplotlib inline
plt.imshow(X_mnist[0].reshape((28,28)))

# +
#plot random images
def draw_batch(dataset,label,nrows,ncols):
    idx_batch = np.random.randint(0,70000,nrows*ncols)
    fig = plt.figure(figsize=(14,14))
    for j in range(0,nrows*ncols):
        plt.subplot(nrows,ncols,j+1)
        plt.imshow(dataset[idx_batch[j]].reshape((28,28)), cmap=plt.cm.binary_r)
        plt.xlabel('Label: '+Y_mnist[idx_batch[j]])
    plt.subplots_adjust(hspace=0.5, wspace=0.5)

draw_batch(X_mnist,Y_mnist,4,4)
# -

#create training and testing dataset
X_train, Y_train, X_test, Y_test = X_mnist[:60000], Y_mnist[:60000], X_mnist[60000:], Y_mnist[60000:]

# +
# train random forest clf
from sklearn.ensemble import RandomForestClassifier

rnd_for_clf = RandomForestClassifier(n_estimators=100)

# +
import time

t0 = time.time()
rnd_for_clf.fit(X_train,Y_train)
t1 = time.time()
# -

print('Training took {:.2f}s'.format(t1-t0))

# +
from sklearn.decomposition import PCA

pca = PCA(n_components=0.95)
X_train_pca = pca.fit_transform(X_train)
# -

X_test_pca = pca.transform(X_test)

rnd_for_clf_pca = RandomForestClassifier(n_estimators=100)
t0 = time.time()
rnd_for_clf_pca.fit(X_train_pca,Y_train)
t1 = time.time()

print('Training took {:.2f}s'.format(t1-t0))

# +
#accuracy score of the 2 classifiers
from sklearn.metrics import accuracy_score

Y_pred = rnd_for_clf.predict(X_test)
score_rnd_for_clf = accuracy_score(Y_test, Y_pred)
Y_pred_pca = rnd_for_clf_pca.predict(X_test_pca)
score_rnd_for_clf_pca = accuracy_score(Y_test, Y_pred_pca)

print('Random Forest score: ',score_rnd_for_clf)
print('Random Forest PCA score: ',score_rnd_for_clf_pca)
# -

# So PCA reduced the accuracy only slightly, but it doubled the training time. In the book the training time was even 4 times longer than without PCA.
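# A quick look (added, not part of the original notebook) at what the 95%-variance PCA above actually kept:

# +
print('components kept: ', pca.n_components_)
print('explained variance: ', pca.explained_variance_ratio_.sum())
# -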
# Now let's also test how PCA behaves with a LogisticRegression.

# +
from sklearn.linear_model import LogisticRegression

log_clf_reg = LogisticRegression(multi_class='multinomial')

t0 = time.time()
log_clf_reg.fit(X_train, Y_train)
t1 = time.time()

print("Training took {:.2f}s".format(t1-t0))
# -

log_clf_reg_pca = LogisticRegression(multi_class='multinomial')

t0 = time.time()
log_clf_reg_pca.fit(X_train_pca, Y_train)
t1 = time.time()

print("Training took {:.2f}s".format(t1-t0))

# +
Y_pred = log_clf_reg.predict(X_test)
score_log_clf_reg = accuracy_score(Y_test, Y_pred)
Y_pred_pca = log_clf_reg_pca.predict(X_test_pca)
score_log_clf_reg_pca = accuracy_score(Y_test, Y_pred_pca)

print('Logistic Regression score: ',score_log_clf_reg)
print('Logistic Regression PCA score: ',score_log_clf_reg_pca)
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="YwSHuP37HUoq" colab_type="text"
# # In-class workshop #1 ("Taller en clase #1")
# Generating random numbers in Python.
# Objectives:
# 1. Familiarize the student with generating random numbers from different distributions in Python.
# 2. Use Python's array-handling tools.
# 3. Use Python's plotting tools.
# ## Submission
# On U-virtual, before the next class, in the link designated as "Taller en clase 1".

# + id="Qo_OPH9SGj1L" colab_type="code" colab={}
import numpy as np            # if you do not have it installed, please run in your console: $pip install numpy
import scipy.stats            # if you do not have it installed, please run in your console: $pip install scipy
import matplotlib as mpl      # if you do not have it installed, please run in your console: $pip install matplotlib
import matplotlib.pyplot as plt

# + [markdown] id="ig65O2p0H60z" colab_type="text"
# ## First part: discrete distributions.
# 1. Create an array of N samples, with N of your choice but greater than 1000, of numbers drawn from each of the corresponding distributions. For each array of numbers:
#   1. Plot the random numbers,
#   2. Sort them from largest to smallest and plot them,
#   3. Build a histogram with m bins and plot it,
#   4. Find the theoretical values of the mean and variance, and compare them with the estimates.

# + id="quUeyxDhIHRQ" colab_type="code" colab={}
# For all the simulations:
N=1000
m=30
np.random.seed(seed=2**32 - 1)  # Fix a seed for the experiments so that the generated numbers are the same on every complete run of this source file.
# + id="0jgQSvLCHNEu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 894} outputId="acf67ab0-fa31-40b7-c226-525f4701f4e4" # bernoulli con cierto p entre 0 y 1 p=0.3 mean, var, skew, kurt = scipy.stats.bernoulli.stats(p, moments='mvsk') r = scipy.stats.bernoulli.rvs(p, size=N) #ver https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html plt.plot(r) plt.title(str(N)+" puntos distribuidos bernoulli con p="+str(p)) plt.show() r.sort() plt.plot(r) plt.title(str(N)+" puntos distribuidos bernoulli ordenados con p="+str(p)) plt.show() plt.hist(r, bins = m) plt.title("Histograma de los "+str(N)+" puntos distribuidos bernoulli con p="+str(p)) plt.show() print("Note que hasta ahora no sabemos el tipo de nuestro arreglo r:") print(type(r)) print("Vamos a hallar algunas estimaciones de estadisticas de r") # ver https://numpy.org/doc/stable/reference/routines.statistics.html avg=r.mean() varest=r.var() print("Los valores teoricos de la media y varianza para la bernoulli con p=",p,"son u=" ,mean,"var=",var) print("Los valores estimados en nuestro experimento de la media y varianza para la bernoulli con p=",p,"son u_est=" ,avg,"var_est=",varest) # + id="85DMs2d4Ispg" colab_type="code" colab={} # binomial con cierto p entre 0 y 1 y cierto n n,p = 10, 0.5 # + id="FmypmPs-Rht-" colab_type="code" colab={} # Geométrica con cierto p entre 0 y 1 p = 0.3 # + id="KXLmQiMBTxWI" colab_type="code" colab={} # Poisson con cierto Lamnda L L = 5 # + id="-MaEjoSgVe9V" colab_type="code" colab={} #Binomial negativa con parametro p después de r fallos r,p = 5,0.5 # + [markdown] id="fF7owdRIWCMq" colab_type="text" # ## Segundo punto Distribuciones contínuas. # # + id="kI-DMnJOZGQt" colab_type="code" colab={} #Para todas las simulaciones: N=1000 m=30 np.random.seed(seed=0)#Para fijar una semilla para los experimentos y que los numeros generados siempre sean los mismos en cada corrida completa de este fuente. 
# + id="q--cxWhAWT9m" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 894} outputId="60138d75-ac10-4be0-e649-8d2ad73d035e" # Normal de media 5 y desviación estandar 0.5 u=5 std=0.5 mean, var, skew, kurt = scipy.stats.norm.stats(moments='mvsk',loc=u,scale=std) r = scipy.stats.norm.rvs (loc=u,scale=std, size=N) #ver https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html plt.plot(r) plt.title(str(N)+" puntos distribuidos normales con media="+str(u)+" y std="+str(std)) plt.show() r.sort() plt.plot(r) plt.title(str(N)+" puntos distribuidos normales ordenados con p="+str(p)) plt.show() plt.hist(r, bins = m) plt.title("Histograma de los "+str(N)+" puntos distribuidos normales con u="+str(u)+ " y std= "+str(std)) plt.show() print("Note que hasta ahora no sabemos el tipo de nuestro arreglo r:") print(type(r)) print("Vamos a hallar algunas estimaciones de estadisticas de r") # ver https://numpy.org/doc/stable/reference/routines.statistics.html avg=r.mean() varest=r.var() print("Los valores teoricos de la media y varianza para la normal son u=" ,mean,"var=",var) print("Los valores estimados en nuestro experimento de la media y varianza son u_est=" ,avg,"var_est=",varest) # + id="chiKc-YVXIdr" colab_type="code" colab={} # Uniforme entre 0 y 255 # + id="JPz6tdkWalDJ" colab_type="code" colab={} #T de Student con paramero K K=1.0 # + id="a5ZOHlaZZ2Q4" colab_type="code" colab={} # Exponencial con Lamnda L L=0.3 # + id="Wyywm9baaTJs" colab_type="code" colab={} #Chi cuadrada con K grados de libertad K=3.0 # + id="_ehXNV9gcqSA" colab_type="code" colab={} #Gamma con parametros k y theta t k,t=2.54,0.43 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Conditional Probability and Conditional Expectation # # ## Intro # # - given some partial information # - or just first "condition" on some appropriate $r.v. 
# \DeclareMathOperator*{\argmin}{argmin} # \newcommand{\ffrac}{\displaystyle \frac} # \newcommand{\Tran}[1]{{#1}^{\mathrm{T}}} # \newcommand{\d}[1]{\displaystyle{#1}} # \newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]} # \newcommand{\dd}{\mathrm{d}} # \newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]} # \newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)} # \newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)} # \newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}} # \newcommand{\I}[1]{\mathrm{I}\left( #1 \right)} # \newcommand{\N}[1]{\mathrm{N} \left( #1 \right)} # \newcommand{\space}{\text{ }} # \newcommand{\bspace}{\;\;\;\;} # \newcommand{\QQQ}{\boxed{?\:}} # \newcommand{\CB}[1]{\left\{ #1 \right\}} # \newcommand{\SB}[1]{\left[ #1 \right]} # \newcommand{\P}[1]{\left( #1 \right)} # \newcommand{\ow}{\text{otherwise}}$ # # ## The Discrete Case # # $\forall$ events $E$ and $F$, the **conditional probability** of $E$ *given* $F$ is defined, as long as $P(F) > 0$, by # # $$P(E\mid F) = \ffrac{P(EF)} {P(F)}$$ # # Hence, if $X$ and $Y$ are discrete $r.v.$, then it's natural to define the **conditional probability mass function** of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$, by: # # $$\begin{align} # p_{X\mid Y}(X\mid Y) &= P\CB{X=x\mid Y=y} \\[0.5em] # &= \ffrac{P\CB{X=x\mid Y=y}} {P\CB{Y=y}} \\ # &= \ffrac{p(x,y)} {p_{Y}(y)} # \end{align}$$ # # and the conditional pdf of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$ is # # $$\begin{align} # F_{X\mid Y}(X\mid Y) &= P\CB{X \leq x\mid Y=y} \\[0.5em] # &= \sum_{a \leq x} p_{X\mid Y}(a\mid Y) # \end{align}$$ # # and finally, the conditional expectation of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$ is defined by # # $$\begin{align} # \EE{X\mid Y=y} &= \sum_{x} x \cdot P\CB{X = x\mid Y=y} \\[0.5em] # &= \sum_{x} x \cdot p_{X\mid Y}(x\mid Y) # \end{align}$$ # # $Remark$ # # If $X$ is independent of $Y$, then all the aforementioned definitions are identical to what we have learned before. # **e.g.** # # If $X_1$ and $X_2$ are independent **binomial** $r.v.$ with respective parameters $(n_1,p)$ and $(n_2,p)$. Find the conditional pmf of $X_1$ given that $X_1 + X_2 = m$. # # >$\begin{align} # P\CB{X_1 = k \mid X_1 + X_2 = m} &= \ffrac{P\CB{X_1 = k , X_1 + X_2 = m}} {P\CB{X_1 + X_2 = m}} \\[0.6em] # &= \ffrac{P\CB{X_1 = k}\cdot P\CB{ X_2 = m-k}} {P\CB{X_1 + X_2 = m}}\\ # &= \ffrac{\d{\binom{n_1} {k} p^k q^{\d{n_1 - k}} \cdot \binom{n_2} {m-k} p^{m-k} q^{\d{n_2 - m + k}}}} {\d{\binom{n_1 + n_2}{m} p^m q^{\d{n_1 + n_2 -m}}}} # \end{align}$ # > # >Here $q = 1-p$ and $X_1 + X_2$ is a **binomial** $r.v.$ as well with parameters $(n_1 + n_2 , p)$. Simplify that, we obtain: # > # > $$P\CB{X_1 = k \mid X_1 + X_2 = m} = \ffrac{\d{\binom{n_1} {k}\binom{n_2} {m-k}}} {\d{\binom{n_1 + n_2}{m}}}$$ # > # >$Remark$ # > # >This is a hypergeometric distribution. # # *** # # **e.g.** # # If $X$ and $Y$ are independent **poisson** $r.v.$ with respective parameters $\lambda_1$ and $\lambda_2$. Find the conditional pmf of $X$ given that $X+Y = n$. 
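# Before working out the Poisson case, here is a quick Monte Carlo check of the hypergeometric
# identity just derived for two independent binomials; this cell is an added sketch, not part of
# the original notes, and the parameters are arbitrary.

# +
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
n1, n2, p, m = 6, 8, 0.3, 5
x1 = rng.binomial(n1, p, size=200000)
x2 = rng.binomial(n2, p, size=200000)
cond = (x1 + x2) == m                     # condition on X1 + X2 = m
for k in range(m + 1):
    print(k, round(np.mean(x1[cond] == k), 4), round(hypergeom.pmf(k, n1 + n2, n1, m), 4))
# -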
# # >We follow the same fashion and can easily get that # > # >$\begin{align} # P\CB{X = k \mid X + Y = n} &= \ffrac{e^{-\lambda_1} \; \lambda_1^k} {k!}\cdot\ffrac{e^{-\lambda_2}\; \lambda_2^{n-k}} {(n-k)!} \left[\ffrac{e^{-\lambda_1 - \lambda_2} \left(\lambda_1 + \lambda_2\right)^{n}} {n!}\right]^{-1} \\ # &= \ffrac{n!} {k!(n-k)!}\left(\ffrac{\lambda_1} {\lambda_1 + \lambda_2}\right)^k\left(\ffrac{\lambda_2} {\lambda_1 + \lambda_2}\right)^{n-k} # \end{align}$ # > # >Given this, we can say that the conditional distribution of $X$ given that $X+Y=n$ is the binomial distribution with parameters $n$ and $\lambda_1/\left(\lambda_1 + \lambda_2\right)$. Hence, we also have # > # >$$\EE{X \mid X+Y = n} = n\ffrac{\lambda_1}{\lambda_1 + \lambda_2}$$ # # *** # ## Continuous Case # # If $X$ and $Y$ have a joint probability density function $f\P{x,y}$, the the ***conditional pdf*** of $X$, given that $Y=y$, is defined for all values of $y$ such that $f_Y\P{y} < 0$, by # # $$f_{X \mid Y} \P{x \mid y} = \ffrac{f\P{x,y}} {f_Y\P{y}}$$ # # Then the expectation: $\EE{X \mid Y = y} = \d{\int_{-\infty}^{\infty}} x \cdot f_{X \mid Y} \P{x \mid y} \;\dd{x}$ # # **e.g.** # # Joint density of $X$ and $Y$: $f\P{x,y} = \begin{cases} # 6xy(2-x-y), & 0 < x < 1, 0 < y < 1\\ # 0, & \text{otherwise} # \end{cases}$ # # Compute the conditional expectation of $X$ given that $Y=y$, where $0 We first need to compute the conditional density: # > # >$f_{X \mid Y} \P{x\mid y} = \ffrac{f\P{x,y}} {f_Y\P{y}} = \ffrac{6xy(2-x-y)} {\d{\int_{0}^{1} 6xy(2-x-y) \;\dd{x}}} = \ffrac{6x(2-x-y)} {4-3y}$ # > # >Hence we can find the expectation # > # >$\EE{X \mid Y=y} = \d{\int_{0}^{1}} x \cdot \ffrac{6x(2-x-y)} {4-3y} \;\dd{x} = \ffrac{5-4y} {8-6y}$ # # *** # # **e.g.** The ***t-Distribution*** # # If $Z$ and $Y$ are independent, with $Z$ having a **standard normal** distribution and $Y$ having a **chi-squared** distribution with $n$ degrees of freedom, then the random variable $T$ defined by # # $$T = \ffrac{Z} {\sqrt{Y/n}} = \sqrt{n} \ffrac{Z} {\sqrt{Y}}$$ # # is said to be a $\texttt{t-}r.v.$ with $n$ degrees of freedom. We now compute its density function: # # >Here the strategy is to use one conditional density function and multiply that with the one just on condition, then integrate them. # > # >$$f_T(t) = \int_{0}^{\infty} f_{T,Y}\P{t,y} \;\dd{y} = \int_{0}^{\infty} f_{T\mid Y}\P{t \mid y} \cdot f_{Y}(y) \;\dd{y}$$ # > # >Since we have already get the pdf for **chi-squared** $r.v.$,so $f_Y(y) = \ffrac{e^{-y/2} y^{n/2-1}} {2^{n/2} \Gamma\P{n/2}}$, for $y > 0$. Then $T$ conditioned on $Y$, which is a **normal distribution** with mean $0$ and variance $\P{\sqrt{\ffrac{n} {y}}}^2 = \ffrac{n} {y}$, so # > # >$f_{T \mid Y} \P{t \mid y} = \ffrac{1} {\sqrt{2\pi n /y}} \exp\CB{-\ffrac{t^2 y} {2n}} = \ffrac{y^{1/2}} {\sqrt{2 \pi n}} \exp\CB{-\ffrac{t^2 y} {2n}}$ for $-\infty < t < \infty $. 
Then: # > # >$$\begin{align} # f_{T}(t) &= \int_{0}^{\infty} \ffrac{y^{1/2}} {\sqrt{2 \pi n}} \exp\CB{-\ffrac{t^2 y} {2n}} \cdot \ffrac{e^{-y/2} y^{n/2-1}} {2^{n/2} \Gamma\P{n/2}} \;\dd{y} \\[0.6em] # &= \ffrac{1} {\sqrt{\pi n} \; 2^{\P{n+1}/2} \; \Gamma\P{n/2}} \int_{0}^{\infty} \exp\CB{-\ffrac{1} {2} \P{1 + \ffrac{t^2} {n}}y} \cdot y^{\P{n-1}/2} \; \dd{y} \\[0.8em] # & \;\;\;\;\text{then we let } \ffrac{1} {2} \P{1 + \ffrac{t^2} {n}}y = x \\[0.8em] # &= \ffrac{\P{1 + \frac{t^2} {n}}^{-\P{n+1}/2} \P{1/2}^{-\P{n+1}/2}} {\sqrt{\pi n} \; 2^{\P{n+1}/2} \; \Gamma\P{n/2}} \int_{0}^{\infty} e^{-x} x^{\P{n-1}/2} \;\dd{x} \\[0.6em] # &= \ffrac{\P{1 + \frac{t^2} {n}}^{-\P{n+1}/2}} {\sqrt{\pi n} \; \Gamma\P{n/2}} \Gamma\P{\ffrac{n-1} {2} + 1} = \ffrac{\Gamma\P{\frac{n+1} {2}}} {\sqrt{\pi n} \; \Gamma\P{\frac{n} {2}}} \P{1 + \ffrac{t^2} {n}}^{-\P{n+1}/2} # \end{align}$$ # # $Remark$ # # It will be much easier if given the joint distribution of $T$ and $Y$. $\texttt{FXXX}$ # # *** # # **e.g.** # # Joint density of $X$ and $Y$: $f\P{x,y} = \begin{cases} # \ffrac{1} {2} y e^{-xy}, & 0 < x < \infty, 0 < y < 2 \\[0.7em] # 0, &\ow # \end{cases}$. So what is $\EE{e^{X/2} \mid Y=1}$? # # > This time will be much easier, we first obtain the conditional density of $X$ given that $Y=1$: # > # >$$\begin{align} # f_{X\mid Y} \P{x \mid 1} &= \ffrac{f\P{x,1}} {f_Y\P{1}} \\ # &= \ffrac{\ffrac{1} {2} e^{-x}} {\d{\int_{0}^{\infty}\frac{1} {2} e^{-x} \; \dd{x}}} = e^{-x} # \end{align}$$ # > # >Hence, we have $\EE{e^{X/2} \mid Y=1} = \d{\int_{0}^{\infty}} e^{x/2} \cdot f_{X\mid Y} \P{x \mid 1} \; \dd{x} = 2$ # # *** # **(V)e.g.** # # Let $X_1$ and $X_2$ be independent **exponential** $r.v.$ with rates $\mu_1$ and $\mu_2$. Find the conditional density of $X_1$ given that $X_1 + X_2 = t$. # # >We'll be using the formula: $f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = \ffrac{f_{\d{X_1, X_1 + X_2}}\P{x, t}} {f_{\d{ X_1 + X_2}}\P{t}}$. The denominator is a *constant at this time*, not given. And as for the numerator, using Jacobian determinant we can find that # > # >$$J = \begin{vmatrix} # \ffrac{\partial x} {\partial x} & \ffrac{\partial x} {\partial y}\\ # \ffrac{\partial x+y} {\partial x} & \ffrac{\partial x+y} {\partial y} # \end{vmatrix} = 1$$ # > # >So our conclusion is: $f_{\d{X_1, X_1 + X_2}}\P{x, t} = f_{\d{X_1, X_2}}\P{x_1,x_2} \cdot J^{-1} = f_{\d{X_1, X_2}}\P{x,t-x}$. Plug in, we have # > # >$$\begin{align} # f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} &= \ffrac{f_{\d{X_1, X_2}}\P{x,t-x}} {f_{\d{X_1 + X_2}}\P{t}} \\ # &= \ffrac{1} {f_{\d{X_1 + X_2}}\P{t}} \cdot \P{\:\mu_1 e^{\d{-\mu_1 x}}}\P{\mu_2 e^{\d{-\mu_2 \P{t-x}}}}, \;\;\;\; 0 \leq x \leq t \\[0.7em] # &= C \cdot \exp\CB{-\P{\mu_1 - \mu_2}t}, \;\;\;\; 0 \leq x \leq t # \end{align}$$ # > # >Here $C = \ffrac{\mu_1 \mu_2 e^{\d{-\mu_2t}}} {f_{\d{X_1 + X_2}}\P{t}} $. The easier situation is when $\mu_1 = \mu_2 = \mu$, then $f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = C, 0 \leq x \leq t$, which is a uniform distribution, yielding that $C = 1/t$. 
# > # >And when they're not equal, we need to use: # > # >$$1 = \int_{0}^{t} f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} \;\dd{x} = \ffrac{C} {\mu_1 - \mu_2} \P{1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}\\[1.6em] # \Longrightarrow C = \ffrac{\mu_1 - \mu_2} {1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}$$ # # >Now we can see the final answer and the byproduct: # > # >$f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = \begin{cases} # 1/t, & \text{if }\mu_1 = \mu_2 = \mu\\[0.6em] # \ffrac{\P{\mu_1 - \mu_2}\exp\CB{-\P{\mu_1 - \mu_2}t}} {1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}, &\text{if } \mu_1 \neq \mu_2 # \end{cases}$ # > # >$ # f_{\d{X_1 + X_2}}\P{t} = \begin{cases} # \mu^2 t e^{-\mu t}, & \text{if }\mu_1 = \mu_2 = \mu\\[0.6em] # \ffrac{\mu_1\mu_2\P{\exp\CB{-\mu_2t} - \exp\CB{-\mu_1t}}} {\mu_1 - \mu_2}, &\text{if } \mu_1 \neq \mu_2 # \end{cases}$ # # $Remark$ # # >When calculate $f_{\d{ X_1 + X_2}}\P{t}$, $t$ is no longer a given constant. # # *** # ## Computing Expectation by Conditioning # # Denote $\EE{X \mid Y}$ as the *function* of the $r.v.$ $Y$ whose value at $Y = y$ is $\EE{X \mid Y = y}$. An extremely important property of conditional expectation is that for all $r.v.$ $X$ and $Y$, # # $$\EE{X} = \EE{\EE{X \mid Y}} = \begin{cases} # \d{\sum_y} \EE{X \mid Y = y} \cdot P\CB{Y = y}, & \text{if } Y \text{ discrete}\\[0.5em] # \d{\int_{-\infty}^{\infty}} \EE{X \mid Y = y} \cdot f_Y\P{y}\;\dd{y}, & \text{if } Y \text{ continuous} # \end{cases}$$ # # We can interpret this as that when we calculate $\EE{X}$ we may take a *weighted average* of the conditional expected value of $X$ given $Y = y$, each of the terms $\EE{X \mid Y = y}$ being weighted by the probability of the event on which it is conditioned. # # **e.g.** The Expectation of the Sum of a Random Number of Random Variables # # The expected number of unqualified cookie products per week at an industrial plant is $n$, and the number of broken cookies in each product are independent $r.v.$ with a common mean $m$. Also assume that the number of broken cookies in each product is independent of the number of unqualified products. Then, what's the expected number of broken cookies during a week? # # >Letting $N$ denote the number of unqualified cookie products and $X_i$ the number of broken cookies inside the $i\texttt{-th}$ product. Then the total broken cookies is $\sum X_i$. Now to bring the sum operation out of the expecation (even linear operation, however $N$ is a $r.v.$), we need to condition that on $N$ # > # >$\bspace \d{\EE{\sum\nolimits_{i=1}^{N} X_i} = \EE{\EE{\sum\nolimits_{i=1}^{N} X_i \mid N}}}$ # > # >But by the independence of $X_i$ and $N$ , we can derive that: # > # >$\bspace \d{\EE{\sum_{i=1}^{N}X_i \mid N =n} = \EE{\sum_{i=1}^{n} X_i} = n\cdot\EE{X}}$, # > # >which yields the function for $N$: $\d{\EE{\sum\nolimits_{i=1}^{N}X_i\mid N} = N \cdot \EE{X}}$ and thus: # > # >$\bspace \d{\EE{\sum_{i=1}^{N}X_i} = \EE{N\cdot \EE{X}} = \EE{N} \cdot \EE{X} = mn}$ # # $Remark$ # # >***Compound Random Variable***, as those the sum of a random number (like the preceding $N$) of $i.i.d.$ $r.v.$ (like the preceding $X_i$) that are also independent of $N$. # **e.g.** The Mean of a Geometric Distribution # # Before, we use derivatives or 错位相减 to obtain that $\EE{X} = \d{\sum_{n=1}^{\infty} n \cdot p(1-p)^{n-1}} = \ffrac{1} {p}$. 
Now a new method can be applied here: # # >Let $N$ be the number of trials required and define $Y$ as # > # >$Y = \bspace \begin{cases} # 1, &\text{if the first trial is a success} \\[0.6em] # 0, &\text{if the first trial is a failure} \\ # \end{cases}$ # > # >Then $\EE{N} = \EE{\EE{N\mid Y}} = \EE{N\mid Y = 1}\cdot P\CB{Y = 1} + \EE{N \mid Y = 0}\cdot P\CB{Y=0}$. However, $\EE{N \mid Y = 1} = 1$ (no more trials needed) and $\EE{N \mid Y = 0} = 1 + \EE{N}$. Substitute these back we have the equation: # > # >$\bspace \EE{N} = 1\cdot p + \P{1 + \EE{N}}\cdot \P{1-p} \space \Longrightarrow \space \EE{N} = 1/p$ # **e.g.** Multinomial Covariances # # Consider $n$ independent trials, each of which results in one of the outcomes $1, 2, \dots, r$, with respective probabilities $p_1, p_2, \dots, p_r$, and $\sum p_i = 1$. If we let $N_i$ denote the number of trials that result in outcome $i$, then $\P{N_1, N_2, \dots, N_r}$ is said to have a ***multinomial distribution***. For $i \neq j$, let us compute $\Cov{N_i, N_j} = \EE{N_i N_j} - \EE{N_i} \EE{N_j}$ # # > Each trial independently results in outcome $i$ with probability $p_i$, and it follows that $N_i$ is binomial with parameters $\P{n, p_i}$, given that $\EE{N_i}\EE{N_j} = n^2 p_i p_j$. Then to compute $\EE{N_i N_j}$, condition on $N_i$ we can obtain: # > # >$\bspace \begin{align} # \EE{N_i N_j}&=\sum_{k=0}^{n} \EE{N_i N_j \mid N_i = k}\cdot P\CB{N_i = k} \\ # &= \sum\nolimits_{k=0}^{n} k\EE{N_j \mid N_i = k} \cdot P\CB{N_i = k} # \end{align}$ # > # >Now given that only $k$ of the $n$ trials result in outcome $i$, each of the other $n-k$ trials independently results in outcome $j$ with probability: $P\P{j \mid \text{not }i} = \frac{p_j} {1-p_i}$, thus showing that the conditional distribution of $N_j$, given that $N_i = k$, is binomial with parameters $\P{n-k,\ffrac{p_j} {1-p_i}}$ and its expectation is: $\EE{N_j \mid N_i = k} = \P{n-k} \ffrac{p_j} {1-p_i}$. # > # >Using this yields: # >$\bspace \begin{align} # \EE{N_i N_j} &= \sum_{k=0}^{n} k\P{n-k} \ffrac{p_j} {1-p_i} P\CB{N_i = k} \\ # &= \ffrac{p_j} {1-p_i} \P{n \sum_{k=0}^{n} kP\CB{N_i = k} - \sum_{k=0}^{n} k^2 P \CB{N_i = k}} \\ # &= \ffrac{p_j} {1-p_i}\P{n\EE{N_i} - \EE{N_i^2}} # \end{align}$ # # >And $N_i$ is a binomial $r.v.$ with parameters $\P{n,p_i}$, thus, $\EE{N_i^2} = \Var{N_i} + \EE{N_i}^2 = np_i\P{1-p_i} + \P{np_i}^2$. Hence, $\EE{N_iN_j} = \ffrac{p_j} {1-p_i} \SB{n^2 p_i - np_i\P{1-p_i} - n^2p_i^2} = n\P{n-1}p_ip_j$, which yields the result: # > # >$\bspace \Cov{N_i N_j} = n\P{n-1}p_ip_j - n^2p_ip_j = -np_ip_j$ # **(V)e.g.** The Matching Rounds Problem from the example of last chapter, the one of randomly choosing hat. Now for those who have already chosen their own hats, leave the room. And then mix the wrongly-matched hats again and reselect. This process continues until each individual has his own hats. # # $\P{1}$ Let $R_n$ be the number of rounds that are necessary when $n$ individuals are initially present. Find $\EE{R_n}$. # # >Follow the results from last example. For each round, on average there'll be only *one* match, no matter how many candidates remain. So intuitively, $\EE{R_n} = n$. Here's an induction proof. Firstly, it's obvious $\EE{R_1} = $, then we assume that $\EE{R_k} = k$ for $k = 1,2,\dots,n-1$, then we find $\EE{R_n}$ by conditioning on $X_n$, which is the number of matches that occur in the first round: # > # >$\bspace \EE{R_n} = \d{\sum_{i=0}^{n} \EE{R_n \mid X_n = i}} \cdot P\CB{X_n = i}$. 
Now given that there're totally $i$ matches in the first round, then we have $\EE{R_n \mid X_n = i} = 1 + \EE{R_{n-i}}$. # > # >$\bspace \begin{align} # \EE{R_n} &= \sum_{i=0}^{n} \P{1 + \EE{R_{n-i}}} \cdot P\CB{X_n = i} \\ # &= 1 + \EE{R_n} P\CB{X_n = 0} + \sum_{i=1}^{n} \EE{R_{n-i}} \cdot P\CB{X_n = i} \\ # &= 1 + \EE{R_n} P\CB{X_n = 0} + \sum_{i=1}^{n} \P{n-i} \cdot P\CB{X_n = i}, \; \text{as the induction hypothesis}\\ # &= 1 + \EE{R_n} P\CB{X_n = 0} + n\P{1-P\CB{X_n = 0}} - \EE{X_n}, \; \P{\EE{X_n} = 1 \text{ as the result before}}\\[0.6em] # &= \EE{R_n} P\CB{X_n = 0} + n\P{1-P\CB{X_n = 0}}\\[0.7em] # & \bspace \text{then we solve the equation,}\\[0.7em] # \Longrightarrow& \space\EE{R_n} = n # \end{align}$ # # $Remark$ # # >Assuming something happened in the first trial... # # $\P{2}$ Let $S_n$ be the total number of selections made by the $n$ individuals for $n \geq 2$. Find $\EE{S_n}$ # # >Still we condition on $X_n$, which gives: # > # >$\bspace \begin{align} # \EE{S_n} &= \sum_{i=0}^{n} \EE{S_n \mid X_n = i} \cdot P\CB{X_n = i} \\ # &= \sum_{i=0}^{n} \P{n + \EE{S_{n-i}}} \cdot P\CB{X_n = i} = n + \sum_{i=0}^{n} \EE{S_{n-i}} \cdot P\CB{X_n = i} # \end{align}$ # > # >And since $\EE{S_0} = 0$ we rewrite it as $\EE{S_n} = n + \EE{S_{n-X_n}}$. To solve this, we first make a guess. What if there were exactly one matach in each round?. # # >Thus, totally there'll be $n+\cdots+1 = n\P{n+1}/2$ selections. So, for $n \geq 2$, we assume that # > # >$\bspace an + bn^2 = n + \EE{a\P{n-X_n} + b\P{n-X_n}^2} = n + a\P{n-\EE{X_n}} + b\P{n^2 - 2n\EE{X_n} + \EE{X_n^2}}$ # > # >Following the results before that $\EE{X_n} = \Var{X_n} = 1$, we can solve this that $a=1$ and $b = 1/2$. So that maybe, $\EE{S_n} = n + n^2/2$. What a guess (2333)! Now we prove it by induction on $n$. # > # > For $n=2$, the number of rounds is a geometric random variable with parameter $p = 1/2$. And it's obvious that the number of selections is twice the number of rounds. Thus, $\EE{S_n} = 4$. Right! # > # >Hence, upon assuming $\EE{S_0} = \EE{S_1} = 0$ and $\EE{S_k} = k+k^2/2$ for $k=2,3,\dots,n-1$: # > # >$\bspace \begin{align} # \EE{S_n} &= n + \EE{S_n} \cdot P\CB{X_n = 0} + \sum_{i=1}^{n} \SB{n-i+\P{n-i}^2}\cdot P\CB{X_n=i}\\ # &= n + \EE{S_n}\cdot P\CB{X_n = 0} + \P{n+n^2/2}\P{1-P\CB{X_n = 0}} - \P{n+1}\EE{X_n} + \ffrac{\EE{X_n^2}} {2}\\[0.7em] # & \bspace \text{then we solve the equation by using }\EE{X_n} = 1 \text{ and } \EE{X_n^2} = 2\\[0.7em] # \Longrightarrow& \space\EE{S_n} = n+n^2/2 # \end{align}$ # # $\P{c}$ Find the expected number of false selections made by one of the $n$ people for $n \geq 2$. # # >Let $C_j$ denote the number of hats chosen by person $j$, then we have $\sum C_j = S_n$. Then taking the expectation, using the fact that each $C_j$ takes the same mean, $\EE{C_j} = \EE{S_n}/ n = 1 + n/2$. Thus, the result is $\EE{C_j - 1} = n/2$. # # $\P{d}$ Suppose the first round, what if the first person cannot meet a match? Given that, find the conditional expected number of matches. # # >Let $Y$ equal $1$ if the first person has a match and $0$ otherwise. Let $X$ denote the number of matches. 
Then, with the result before we have # > # >$\bspace \begin{align} # 1 = \EE{X} &= \EE{X\mid Y=1} P\CB{Y=1} + \EE{X\mid Y=0} P\CB{Y=0} \\ # &= \ffrac{\EE{X\mid Y=1}} {n} + \ffrac{n-1} {n}\EE{X \mid Y = 0} # \end{align}$ # > # >Now given that $Y=1$, then the rest $n-1$ people will choose $n-1$ hats, thus $\EE{X\mid Y=1} = 1+1=2$, thus, the result is $\EE{X \mid Y=0} = \P{n-2}/\P{n-1}$ # **(V)e.g.** Consecutive success # # Independent trials, each of which is a success with probability $p$, are performed until there are $k$ consecutive success. What's the mean of necesssary trials? # # >Let $N_k$ denote the number of necessary trials to obtain $k$ consecutive success and $M_k = \EE{N_k}$. Then we might write: $N_k = N_{k-1} + \texttt{something}$... And actually that something is the number of additional trials needed to go from have $k-1$ successes in a row to having $k$ in a row. We denote that by $A_{k-1,k}$. Then we take the expectation and obtain: # # >$\bspace M_k = M_{k-1} + \EE{A_{k-1,k}}$. Wanna know what's $\EE{A_{k-1,k}}$? There're only two possible result, the $k\texttt{-th}$ trial is a success, or not, then: # > # >$\bspace \EE{A_{k-1,k}} = 1 \cdot p + \P{1+M_k}\cdot\P{1-p} = 1+\P{1-p}M_k \Rightarrow M_k = \ffrac{1} {p} + \ffrac{M_{k-1}} {p}$ # > # >Well, what's $M_1$? Obviously $N_1$ is a geometric with parameter $p$, thus $M_1 = 1/p$ and recursively we have: # > # >$\bspace M_k = \ffrac{1} {p} + \ffrac{1} {p^2} + \cdots \ffrac{1} {p^k}$ # **(V)e.g.** Quick-Sort Algorithm # # Given a set of $n$ distinct values, sort them increasingly. The quick-sort algorithm is defined reursively as: When $n=2$, compare the two values and puts them in appropriate order. When $n>2$, randomly shoose one of the $n$ values, say, $x_i$ and then compares each of the other $n-1$ values with $x_i$, noting which are smaller and which are larger than $x_i$. Letting $S_i$ denote the set of elements smaller than $X_i$ and $\bar{S}_i$ the set of elements greater than $X_i$. Then sort the set $S_i$ and $\bar{S}_i$. That's it. # # One measure of the effectiveness of this algorithm is the expected number of comparison that it makes. Denote by $M_n$ the expected number of comparisons needed by the q-s algorithm to sort a set of $n$ distinct values. To obtain a recursion for $M_n$ we condition on the rank of the initial value selected to obtain # # $$M_n = \sum_{j=1}^{n} \EE{\text{number of comparisons}\mid \text{value selected is actually j}\texttt{-th}\text{ smallest}}\cdot \ffrac{1} {n}$$ # # So $M_n = \d{\sum_{j=1}^{n} \P{n-1 + M_{j-1} + M_{n-j}}\cdot\frac{1} {n}} = n-1 + \ffrac{2} {n} \sum_{k=1}^{n-1} M_k$ ($M_0 = 0$), or equivalently, $nM_n = n\P{n-1} + 2\d{\sum_{k=1}^{n-1}}M_k$. # # 我们用一点数列知识。。。 # # $\bspace \P{n+1}M_{n+1} - mM_n = 2n+2M_n \Longrightarrow M_n = 2\P{n+2}\sum\limits_{k=0}^{n-1}\ffrac{n-k} {\P{n+1-k}\P{n+2-k}} = 2\P{n+2}\sum\limits_{i=1}^{n} \ffrac{i} {\P{i+1}\P{i+2}}$ for $n \geq 1$. # # $$\begin{align} # M_{n+1} &= 2\P{n+2}\SB{\sum_{i=1}^{n}\ffrac{2} {i+2} - \sum_{i=1}^{n}\ffrac{1} {i+1}} \\ # &\approx 2\P{n+2}\SB{\int_{3}^{n+2} \ffrac{2} {x} \;\dd{x} - \int_{2}^{n+1} \ffrac{1} {x} \;\dd{x}}\\ # &= 2\P{n+2}\SB{\log\P{n+2} + \log\P{\ffrac{n+1} {n+2}} + \log 2 -2\log 3}\\ # &\approx 2\P{n+2}\log\P{n+2} # \end{align}$$ # ### Computing Variances by Conditioning # # Here we first gonna use $\Var{X} = \EE{X^2} - \EE{X}^2$ while using the condioning to obtain both the $\EE{X^2}$ and $\EE{X}$. 
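# Before turning to the examples, here is a quick simulation (added, not part of the original notes)
# checking the quick-sort analysis above: the average number of comparisons made by the randomized
# algorithm is compared with the recursion M_n = n - 1 + (2/n) * sum_{k<n} M_k derived there.

# +
import numpy as np

def quicksort_comparisons(values, rng):
    """Number of pivot comparisons made by randomized quick-sort on distinct values."""
    if len(values) <= 1:
        return 0
    pivot = values[rng.integers(len(values))]
    smaller = [v for v in values if v < pivot]
    larger = [v for v in values if v > pivot]
    # the analysis counts one comparison of each non-pivot element against the pivot
    return len(values) - 1 + quicksort_comparisons(smaller, rng) + quicksort_comparisons(larger, rng)

rng = np.random.default_rng(0)
n = 200
avg = np.mean([quicksort_comparisons(list(rng.permutation(n)), rng) for _ in range(200)])

M = [0.0]                                  # M_0 = 0, then M_j = j - 1 + (2/j) * sum_{k<j} M_k
for j in range(1, n + 1):
    M.append(j - 1 + 2 * sum(M) / j)

print(avg, M[n])
# -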
# # **e.g.** Variance of the Geometric $r.v.$ # # Independent trials, each resulting in a success with probability $p$ and are performed in sequence. Let $N$ be the trial number of the first success. Find $\Var{N}$. # # >Still we condition on the first trial: let $Y=1$ if the first trial is a success, and $0$ otherwise. # > # >$\bspace\begin{align} # \EE{N^2} &= \EE{\EE{N^2\mid Y}} \\ # &= \EE{N^2 \mid Y=1}\cdot P\P{Y=1} + \EE{N^2 \mid Y=0}\cdot P\P{Y=0}\\ # &= 1 \cdot p + \EE{\P{1+N}^2} \cdot \P{1-p}\\ # &= 1 + \EE{2N+N^2} \cdot \P{1-p} \\ # &= 1 + 2 \P{1-p}\EE{N} + \EE{N^2}\cdot \P{1-p} # \end{align}$ # > # >And from **e.g.** The Mean of a Geometric Distribution, we've acquired that $\EE{N} = 1/p$, thus we can substitute this back and then solve the equation and get: $\EE{N^2} = \ffrac{2-p} {p^2}$. Then # > # >$\bspace \Var{N} = \ffrac{2-p} {p^2} - \P{\ffrac{1} {p}}^2 = \ffrac{1-p} {p^2}$ # *** # # Then how about (a drink?) $\Var{X \mid Y}$? Here's the proposition. # # $Proposition$ ***conditional variance formula*** # # $\bspace \Var{X} = \EE{\Var{X \mid Y}} + \Var{\EE{X \mid Y}}$ # # $Proof$ # # >$\bspace \begin{align}\EE{\Var{X \mid Y}} &= \EE{\EE{X^2 \mid Y} - \P{\EE{X\mid Y}}^2} \\ # &= \EE{\EE{X^2 \mid Y}} - \EE{\P{\EE{X\mid Y}}^2} \\ # &= \EE{X^2} - \EE{\P{\EE{X\mid Y}}^2} # \end{align}$ # > # >$\bspace \begin{align} # \Var{\EE{X \mid Y}} &= \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{\EE{X \mid Y}}}^2\\ # &= \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{X}}^2 # \end{align}$ # > # >Therefore, $\EE{\Var{X \mid Y}} + \Var{\EE{X \mid Y}} = \EE{X^2} - \P{\EE{X}}^2 = \Var{X}$ # **e.g.** The Variance of a Compound $r.v.$ # # Let $X_1,X_2,\dots$ be $i.i.d.$ $r.v.$ with distribution $F$ having mean $\mu$ and variance $\sigma^2$, and assume that they are independent of the nononegative integer valued $r.v.$ $N$. Here the **compound $r.v.$** is $S = \sum_{i=1}^{N}X_i$. Find its variance. # # >You can directly condition on $N$ to obtain $\EE{S^2}$, well. # > # >$\bspace \Var{S\mid N=n} = \Var{\sum\limits_{i=1}^{n} X_i} = n\sigma^2$ # > # >and with the similar reasoning, we have $\EE{S \mid N=n} = n\mu$. Thus, $\Var{S\mid N} = N\sigma^2$ and $\EE{S \mid N} = N \mu$ and the conditional variance formula gives # > # >$\bspace \Var{S} = \EE{N\sigma^2} + \Var{N\mu} = \sigma^2 \EE{N} + \mu^2 \Var{N}$ # # $Remark$ # # >CANNOT directly write $\Var{S\mid N} = \EE{\P{S\mid N}^2}-\cdots$, because it's not well defined. So the correct way is to first find $\Var{S\mid N = n}$. Then convert this to the variable $\Var{S\mid N} = f\P{N}$ # *** # **e.g.** The Variance in the Matching Rounds Problem # # Following the previous definition, $R_n$ is the number of rounds that are necesssary when $n$ individuals are initially present. Let $V_n$ be its variance. Show that $V_n = n$ for $n \geq 2$. # # > Actually when you try $n=2$, as shown before, it's a geometric with parameter $p = 1/2$ thus $V_2 = \ffrac{1-p} {p^2} = 2$. So the assumption here is $V_j = j$ for $2\leq j \leq n-1$. And when there're $n$ individuals. still we condition that on the first round, or more specificly, let $X$ be the number of matches in the first round. Thus $\EE{R_n \mid X} = 1 + \EE{R_{n-X}}$ and by the previous result, we have $\EE{R_n \mid X} = 1 + n - X$. Also with $V_0 = 0$, we have $\Var{R_n \mid X} = \Var{R_{n-X}} = V_{n-X}$. 
Hence by the **conditional variance formula**, # > # >$\bspace \begin{align} # V_n &= \EE{\Var{R_n \mid X}} + \Var{\EE{R_n \mid X}} \\[0.6em] # &= \EE{V_{n-X}} + \Var{1+n-X} \\[0.5em] # &= \sum_{j=0}^{n} V_{n-j}\cdot P\CB{X=j} + \Var{X} \\ # &= V_n \cdot P \CB{X=0} + \sum_{j=1}^{n} \P{n-j}\cdot P\CB{X=j} + \Var{X} \\ # &= V_n \cdot P \CB{X=0} + n \P{1-P\CB{X=0}} - \EE{X} + \Var{X} # \end{align}$ # > # >Here $\EE{X}$ is given in a example from Chapter 2 and as for $\Var{X}$, with the similar method, we have # > # >$\bspace \begin{align} # \Var{X} &= \sum\Var{X_i} + 2\sum_{i=1}^{N}\sum_{j>i} \Cov{X_i,X_j} \\ # &= N\P{\EE{X_i^2} - \P{\EE{X_i}}^2} + 2\sum_{i=1}^{N}\sum_{j>i} \P{\EE{X_iX_j} - \EE{X_i} \EE{X_j}} \\ # &= N\P{\ffrac{1} {N} - \ffrac{1} {N^2}} + 2 \ffrac{N\P{N-1}} {2} \P{\ffrac{1} {N\P{N-1}} - \ffrac{1} {N^2}} =1 # \end{align}$ # > # >Thus we substitute the value into the last equation and solve is. Then it's proved. # ## Computing Probabilities by Conditioning # # We've already know that by conditioning we can easily calculate the expectation and variance. Now we present how to use this method to find certain probabilities. First we define the ***indicator $r.v.$***: # # $\bspace X = \begin{cases} # 1, &\text{if } E \text{ occurs}\\ # 0, &\ow # \end{cases}$ # # Then $\EE{X} = P\P{E}$ and $\EE{X\mid Y=y} = \P{E\mid Y=y}$ for any $r.v.$ $Y$. Therefore, # # $\bspace P\P{E} = \begin{cases} # \d{\sum_y P\CB{E \mid Y=y} \cdot P\CB{Y=y}}, & \text{if }Y\text{ is discrete}\\ # \d{\int_{-\infty}^{\infty} P\CB{E \mid Y=y} \cdot f_Y\P{y}\;\dd{y}}, & \text{if }Y\text{ is continuous}\\ # \end{cases}$ # # **e.g.** The probability of a $r.v.$ is less than another one # # Let $X$ and $Y$ be two independent continuous $r.v.$ with densities $f_X$ and $f_Y$ respectively. Compute $P\CB{X < Y}$ # # > $\bspace \begin{align} # P\CB{X < Y} &= \int_{-\infty}^{\infty} P\CB{XLet $T$ be the total times it happens. We can directly write that # > # >$\bspace \begin{align} # P\CB{S=n,F=m} &= P\CB{S=n,F=m,T=n+m}\\ # &= P\CB{S=n,F=m \mid T = n+m} \cdot P\CB{T = n+m} # \end{align}$ # > # >or by # > # >$\bspace \begin{align} # P\CB{S=n,F=m} &= \sum\limits_{t=0}^{\infty}P\CB{S=n,F=m \mid T = t} \cdot P\CB{T =t} \\ # &= 0 + \CB{S=n,F=m \mid T = n+m} \cdot P\CB{T = n+m} # \end{align}$ # > # >Thus, since it's just a binomial probability of $n$ successes in a $n+m$ trials, we have # > # >$\bspace \begin{align} # P\CB{S=n,F=m} &= \binom{n+m} {n} p^n \P{1-p}^{m} e^{-\lambda} \ffrac{\lambda^{n+m}} {\P{n+m}!}\\ # &= \ffrac{\P{n+m}!} {n!m!} p^n \P{1-p}^{m} \lambda^n \lambda^m \ffrac{e^{-\lambda p}e^{-\lambda\P{1-p}}} {\P{n+m}!}\\ # &= e^{-\lambda p} \ffrac{\P{\lambda p}^{n}} {n!}\cdot e^{-\lambda \P{1-p}} \ffrac{\P{\lambda \P{1-p}}^{m}} {m!} # \end{align}$ # > # >That's it. # # $Remark$ # # >It's also can be regarded as the product of two independent terms who are only rely on $n$ and $m$ respectively. It follows that $S$ and $F$ are independent. Moreover, # > # >$\bspace P\CB{S=n} = \sum\limits_{m=0}^{\infty}P\CB{S=n,F=m} = e^{-\lambda p} \ffrac{\P{\lambda p}^{n}} {n!}$ # > # >And similar for $P\CB{F=m} = e^{-\lambda \P{1-p}} \ffrac{\P{\lambda \P{1-p}}^{m}} {m!}$ # # $Remark$ # # >We can also generalize the result to the case where each of a Poisson distributed number of events, $N$, with mean $\lambda$ is independently classified as being one of $k$ types with the probability that it is type $i$ being $p_i$. 
And $N_i$ is the number that are classified as type $i$, then: # > # > $N_i$ for $i = 1,2,\dots,k$ are independent Poisson $r.v.$ with respective means $\lambda p_1,\lambda p_2,\dots, \lambda p_k$, and this follows that: # > # >$\bspace \begin{align} # &P\CB{N_1 = n_1, N_2 = n_2,\dots, N_k = n_k} \\ # =\;& P\CB{N_1 = n_1, N_2 = n_2,\dots, N_k = n_k \mid N = n} \cdot P\CB{N = n} \\[0.5em] # =\;& \binom{n} {n_1,n_2,\dots,n_k} \cdot e^{-\lambda} \ffrac{\lambda^n} {n!} \\ # =\;& \prod_{i=1}^{k} e^{-\lambda p_i} \ffrac{\P{\lambda p_i}^{n_i}}{n_i!} # \end{align}$ # **e.g.** The Distribution of the Sum of Independent Bernoulli $r.v.$ # # Let $X_i$ be independent Bernoulli $r.v.$ with $P\CB{X_i = 1} = p_i$. What's the pmf of $\sum X_i$? Here's a recursive way to obtain all these. # # > First let $P_k\P{j} = P\CB{X_1 + X_2 + \cdots + X_k = j}$ and note that $P_k\P{0} = \prod\limits_{i=1}^{k} q_i$, and $P_k\P{k} = \prod\limits_{i=1}^{k} p_i$. Then we condition $P_k\P{j}$ on $X_k$. (We don't contion that on the first event this time, cause we are doing a recursion! RECURSION! F\*\*\*) # > # > $\bspace \begin{align} # P_k\P{j} &= P\CB{X_1 + X_2 + \cdots + X_k = j \mid X_k = 1} \cdot p_k + P\CB{X_1 + X_2 + \cdots + X_k = j \mid X_k = 0} \cdot q_k \\ # &= P_{k-1}\P{j-1} \cdot p_k + P_{k-1}\P{j} \cdot q_k # \end{align}$ # **e.g.** The Best Prize Problem # # $n$ girls appeared in my life in sequence. I have to confess once if I met her or never would I got another chance. The only information I have when deciding whether to confess is the relative rank of that girl compared to ones alreay met. My wish is to maximize the probability of obtaining the best lover. Assuming all $n!$ ordering of the girls are equally likely, how do we do? # # > Strategy: fix a value $k$ and only confess to the first girl after that is better than all first $k$ girls. Using this, we denote $P_k\P{\text{best}}$ as the probability that the best girl is confessed by me. # >And to find that we condition that on $X$, the position of best girl. This gives: # > # >$\bspace \begin{align} # P_k\P{\text{best}} &= \sum_{x=1}^{n} P_k\P{\text{best}\mid X =x}\cdot P\CB{X = x} \\ # &= \ffrac{1} {n}\P{\sum_{x=1}^{k} P_k\P{\text{best}\mid X =x} +\sum_{x=k+1}^{n} P_k\P{\text{best}\mid X =x}}\\ # &= \ffrac{1} {n}\P{0 + \sum_{x=k+1}^{n}\ffrac{k} {i-1}}\\ # &\approx \ffrac{k} {n}\int_{k}^{n-1} \ffrac{1} {x} \;\dd{x} \approx \ffrac{k} {n} \log\P{\ffrac{n} {k}} # \end{align}$ # # >To find its maximum, we differentiate $g\P{x} = \ffrac{x} {n} \log\P{\ffrac{n} {x}}$ and get $x_{\min} = \ffrac{n} {e}$ # **e.g.** Hat match game # # $n$ men with their hats mixed up and then each man randomly select one. What's the probability of no matches? or exactly $k$ matches? # # >We are finding this by conditioning on whether or not the first man selects his own hat, $M$ or $M^c$. Let $E$ denote the event that no matches occur. Then # > # >$\bspace P_n = P\P{E} = P\P{E\mid M} \cdot P\P{M} + P\P{E\mid M^c} \cdot P\P{M^c}=P\P{E\mid M^c}\ffrac{n-1} {n}$ # > # >Following that, since it's given that the first man doesn't get a hat, then there're only $n-1$ men left and thus # > # >$\bspace P\P{E\mid M^c} = P_{n-1} + \ffrac{1} {n-1} P_{n-2}$ # > # >saying that it's equal to the condition when the second person find his hat, or not. # > # >Then since $P_1 = 0$, $P_2 = 0.5$ we can find all of them recursively. # **e.g.** The Ballot Problem # # In an election, candidate $A$ have received $n$ votes, and $B$, $m$ votes with $n>m$. 
Assuming that all orderings are equally likely, show that the probability that $A$ is always ahead in the count of votes is $\P{n-m}/\P{n+m}$. # # >Let $P_{n,m}$ denote the desired probability then: # > # >$\begin{align} # P_{n,m} &= P\CB{A \text{ always ahead}\mid A \text{ receive last vote}} \cdot \ffrac{n} {n+m} \\ # & \bspace + P\CB{A \text{ always ahead}\mid B \text{ receive last vote}} \cdot \ffrac{m} {n+m} \\ # &= \ffrac{n} {n+m} \cdot P_{n-1,m} + \ffrac{m} {n+m} \cdot P_{n,m-1} # \end{align}$ # > # >Then by induction, done! $P_{n,m} = \ffrac{n} {n+m} \cdot \ffrac{n-1-m} {n-1+m} + \ffrac{m} {n+m} \cdot \ffrac{n-m+1} {n+m-1} = \ffrac{n-m} {n+m}$ # # *** # **e.g.** # # Let $U_1,U_2,\dots$ be a sequence of independent uniform $\P{0,1}$ $r.v.$, and let $N = \min\CB{n \geq 2: U_n > U_{n-1}}$ and $M = \min\CB{n \geq 1: U_1 + U_2 + \dots + U_n > 1}$. Surprisingly, $N$ and $M$ have the same probability distribution, and their common mean is $e$! Prove it! # # > For $N$ since all the possible ordering or $U_1,\dots,U_n$ are equally likely, we have: # > # >$\bspace P\CB{U_1 > U_2 > \cdots > U_n} = \ffrac{1} {n!} = P\CB{N > n}$. # > # >For $M$, to use induction method, we intend to prove $P\CB{M\P{x} > n} = x^n/n!$ where $M\P{x} = \min\CB{n\geq 1: U_1 + U_2 + \cdots + U_n > x}$, for $0 < x \leq 1$. When $n=1$, # # > $\bspace P\CB{M\P{x} > 1} = P\CB{U_1 \leq x} = x$ # > # >Then assume that holds true for all $n$ to determine $P\CB{M\P{x} > n+1}$ we condition that on $U_1$ to obtain: # > # >$\bspace \begin{align} # P\CB{M\P{x} > n+1} &= \int_{0}^{1} P\CB{M\P{x} > n+1\mid U_1 = y}\;\dd{y} \\ # & \bspace\text{since }y \text{ cannot exceed }x\text{ , we change the upper limit of integral} \\ # &= \int_{0}^{x} P\CB{M\P{x} > n+1\mid U_1 = y}\;\dd{y} \\ # &= \int_{0}^{x} P\CB{M\P{x-y} > n}\;\dd{y} \\ # &\bspace \text{induction hypothesis}\\ # &= \int_{0}^{x} \ffrac{\P{x-y}^n} {n!}\;\dd{y} \\ # &\using{u=x-y} \int_{0}^{x} \ffrac{u^n} {n!} \;\dd{u} \\ # &= \ffrac{x^{n+1}} {\P{n+1}!} # \end{align}$ # > # >Thus $P\CB{M\P{x} > n} = x^n/n!$ and let $x=1$ and we can draw the final conclusion that $P\CB{M > 1} = 1/n!$, so that $N$ and $M$ have the same distribution. Finally, we have: # > # >$$\EE{M} = \EE{N} = \QQQ \sum_{n=0}^{\infty} 1/n! = e$$ # **e.g.** # # Let $X_1, X_2,\dots,X_n$ be independent continuous $r.v.$ with a common distribution function $F$ and density $f = F'$, suppose that they are to be observed one at a time in sequence. Let $N = \min\CB{n\geq 2: X_n = \text{second largest of }X_1,X_2,\dots,X_n}$ and $M = \min\CB{n\geq 2: X_n = \text{second smallest of }X_1,X_2,\dots,X_n}$. Which one tends to be larger? # # > To find $X_N$ it's natural to condition on the value of $N$. We first let $A_i = \CB{X_i \neq \text{second largest of }X_1,X_2,\dots,X_i}, i\geq 2$. Thus # > # > $$\newcommand{\void}{\left.\right.}P\CB{N=n} = P\P{A_2A_3\cdots A_{n-1}A^c} = \ffrac{1} {2}\ffrac{2} {3}\cdots\ffrac{n-2} {n-1} \ffrac{1} {n} = \ffrac{1} {n\P{n-1}}$$ # > # >Here $A_i$ being independent is because $X_i$ are $i.i.d.$. Then we condition that on $N$ and obtain: # > # >$$\begin{align} # f_{X_{\void_N}}\P{x} &= \sum_{n=2}^{\infty} \ffrac{1} {n\P{n-1}} f_{X_{\void_N}\mid N} \P{x\mid n} \\ # &= \sum_{n=2}^{\infty} \ffrac{1} {n\P{n-1}} \ffrac{n!} {1!\P{n-2}!} \P{F\P{x}}^{n-2}\cdot f\P{x} \cdot \P{1-F\P{x}} \\ # &= f\P{x} \cdot \P{1-F\P{x}} \sum_{n=0}^{\infty} \P{F\P{x}}^i \\ # &= f\P{x} # \end{align}$$ # > # >We are almost there. Now we think their opposite number. 
Let $W_i = -X_i$, then $M = \min\CB{n\geq 2: X_n = \text{second largest of }W_1,W_2,\dots,W_n}$. So similiarly, $W_M$ has the same distribution with $W_1$ just like $X_N$ and $X_1$. Then we drop the minus sign so that $X_M$ has the same distribution with $X_1$. So they all are of the same distribution. # # $Remark$ # # >This is a special case for ***Ignatov's Theorem***, where second could be $k\texttt{th}$ largest/smallest. Still the distribution is $F$, for all $k$! # # *** # **(V)e.g.** # # A population consists of $m$ families. Let $X_j$ denote the size of family $j$ and suppose that $X_1,X_2,\dots,X_m$ are independent $r.v.$ having the common pmf: $p_k = P\CB{X_j = k}, \sum_{i=1}^{\infty}p_k = 1$ with mean $\mu = \sum\limits_k k\cdot p_k$. Suppose a member of whatever family is chosen randomly, and let $S_i$ be the event that the selected individual is from a family of size $i$. Prove that $P\P{S_i} \to \ffrac{ip_i} {\mu}$ as $m \to \infty$. # # > This time we need to condition on a vector of $r.v.$. Let $N_i$ be the number of familier that are of size $i$: $N_i = \#\CB{k:k=1,2,\dots,m:X_k = i}$; and then condition on $\mathbf{X} = \P{X_1,X_2,\dots,X_m}$: # > # >$$P\P{S_i\mid \mathbf{X}} = \ffrac{iN_i} {\sum_{i=1}^{m} X_k}$$ # > # >$$P\P{S_i} = \EE{P\P{S_i\mid X}} # = \EE{\ffrac{iN_i} {\sum_{i=1}^{m} X_k}} # = \EE{\ffrac{iN_i/m} {\sum_{i=1}^{m} X_k/m}}$$ # > # >$\bspace$This follows by the **strong law of large numbers** that $N_i/m$ would converges to $p_i$ as $m \to \infty$, and $\sum_{i=1}^{m} X_k/m \to \EE{X} = \mu$. Thus: $P\P{S_i} \to \EE{\ffrac{ip_i} {\mu}} = \ffrac{ip_i} {\mu}$ as $m \to \infty$ # # *** # **e.g.** # # Consider $n$ independent trials in which each trials results in one of the outcomes $1,2,\dots,k$ with respective probabilities $p_i$ and $\sum p_i = 1$. Suppose further that $n > k$, and that we are interested in determining the probability that each outcome occurs at least once. If we let $A_i$ denote the event that outcome $i$ dose not occur in any of the $n$ trials, then the desired probability is $1 - P\P{\bigcup_{i=1}^{k} A_i}$ and by the inclusion-exclusion theorem, we have # # $$\begin{align} # p\P{\bigcup_{i=1}^{k} A_i} =& \sum_{i=1}^{k} P\P{A_i} - \sum_i\sum_{j>i} P\P{A_iA_j} \\ # &\bspace + \sum_i\sum_{j>i}\sum_{k>j} P\P{A_iA_jA_k} - \cdots + \P{-1}^{k+1} P\P{A_1 \cdots A_k} # \end{align}$$ # # where # # $$\begin{align} # P\P{A_i} &= \P{1-p_i}^{n} \\ # P\P{A_iA_j} &= \P{1-p_i - p_j}^{n}, \bspace i < j\\ # P\P{A_iA_jA_k} &= \P{1-p_i - p_j - p_k}^{n}, \bspace i < j < k\\ # \end{align}$$ # # How to solve this by conditioning on whatever something? # # > Note that if we start by conditioning on $N_k$, the number of times that outcome $k$ occurs, then when $N_k>0$ the resulting conditional probability will equal the probability that all of the outcomes $1,2,\dots,k-1$ occur at least once when $n-N_k$ trails are performed, and each results in outcome $i$ with probability $p_i/\sum_{j=1}^{k-1} p_j$, for $i = 1, 2,\dots,k-1$. Then we could use a similar conditioning step on these terms. # > # >Follow this idea, we let $A_{m,r}$, for $m\leq n, r\leq k$, denote the event that each of the outcomes $1,2,\dots,r$ occurs at least once when $m$ independent trails are performed, where each trial results in one of the outcomes $1,2,\dots,r$ with respective probabilities $p_1/P_r, \dots,p_r/P_r$, where $P_r = \sum_{j=1}^{r}p_j$. Then let $P\P{m,r} = P\P{A_{m,r}}$ and note that $P_{n,k}$ is the desired probability. 
To obtain the expression of $P\P{m,r}$, condition on the number of times that outcome $r$ occurs. This gives rise to: # > # >$\bspace\begin{align} # P_{m,r} &= \sum_{j=0}^{m} P\CB{A_{m,r} \mid r \text{ occurs }j \text{ times}} \binom{m}{j} \P{\ffrac{p_r} {P_r}}^j \P{1 - \ffrac{p_r} {P_r}}^{m-j} \\ # &= \sum_{j=1}^{m-r+1} P_{m-j,r-1}\binom{m}{j} \P{\ffrac{p_r} {P_r}}^j \P{1 - \ffrac{p_r} {P_r}}^{m-j} # \end{align}$ # > # >Starting with $P\P{m,1} = 1$ for $m \geq 1$ and $P\P{m,1} = 0$ for $m=0$. # > # >And this's how this recursion works. We can first find the $P\P{m,2}$ for $m = 2,3,\dots,n-\P{k-2}$ and then $P\P{m,3}$ for $m = 2,3,\dots,n-\P{k-3}$ and so on, up to $P\P{m,k-1}$ for $m = k-1,k,\dots,n-1$. Then we use the recursion to compute $P\P{n,k}$. # # *** # Now we extend our conclusion of how to calculate certain expectations using the conditioning fashion. Here's another formula for this: # # $\bspace\EE{X \mid Y = y} = \begin{cases} # \d{\sum_{w} \EE{X \mid W = w, Y = y} \cdot P\CB{W = w \mid Y = y}}, & \text{if } W \text{ discrete} \\ # \d{\int_{w} \EE{X \mid W = w, Y = y} \cdot f_{W \mid Y}P\P{w \mid y} \;\dd{w}}, & \text{if } W \text{ continuous} # \end{cases}$ # # and we write this as $\EE{X \mid Y} = \EE{\EE{X \mid Y,W}\mid Y}$ # # **e.g.** # # Automobile insurance company classifies its policyholders as one of the types $1, 2, \dots, k$. It supposes that the numbers of accidents that a type $i$ policyholder has in the following years are independent Poisson random variables with mean $\lambda_i$. For a new policyholder, the probability that he being type $i$ is $p_i$. # # Given that a policyholder had $n$ accidents in her first year, what is the expected number that she has in her second year? What is the conditional probability that she has $m$ accidents in her second year? # # >Let $N_i$ denote the number of accidents the policyholder has in year $i$. To obtain $\EE{N_2 \mid N_1 = n}$, condition on her risk type $T$. # # >$$\begin{align} # \EE{N_2 \mid N_1 = n} &= \sum_{j=1}^{k} \EE{N_2\mid T = j, N_1 = n} \cdot P\CB{T = j \mid N_1 = n} \\ # &= \sum_{j=1}^{k} \EE{N_2\mid T = j} \cdot \ffrac{P\CB{T = j, N_1 = n}} {P\CB{N_1 = n}} \\ # &= \ffrac{\sum\limits_{j=1}^{k} \lambda_j \cdot P\CB{N_1 = n \mid T = j} \cdot P\CB{T = j}} {\sum\limits_{j=1}^{k} P \CB{ N_1 = n \mid T = j}\cdot P\CB{T = j}} \\ # &= \ffrac{\sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^{n+1}p_j} {\sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^{n} p_j} # \end{align}$$ # > # >$$\begin{align} # P\CB{N_2 = m \mid N_1 = n} &= \sum_{j=1}^{k} P\CB{N_2 = m \mid T = j, N_1 = n} \cdot P\CB{T = j \mid N_1 = n} \\ # &= \sum_{j=1}^{k} e^{-\lambda_j} \ffrac{\lambda_j^m} {m!} \cdot P\CB{T = j \mid N_1 = n} \\ # &= \ffrac{\sum\limits_{j=1}^{k} e^{-2\lambda_j} \lambda_j^{m+n} p_j} {m! \sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^n p_j} # \end{align}$$ # # $Remark$ # # $\bspace P\P{A \mid BC} = \ffrac{P\P{AB \mid C}} {P\P{B \mid C}}$ # *** # ## Some Applications # See the extended version later if possible.😥 # # ## An Identity for Compound Random Variables # # Let $X_1,X_2,\dots$ be a sequence of $i.i.d.$ $r.v.$ and let $S_n = \sum_{i=1}^{n} X_i$ be the sum of the first $n$ of them, $n \geq 0$, where $S_0 = 0$. Then the **compound random variable** is defined as $S_N = \sum\limits^N X_i$ with the distribution of $N$ called the **compounding distribution**. # # To find $S_N$, first define $M$ as a $r.v.$ that is independnet of the sequence $X_1,X_2,\dots$, and which is such that $P\CB{M = n} = \ffrac{nP\CB{N = n}} {\EE{N}}$. 
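# A quick numerical check of this definition (a sketch added here, not part of the original notes; it assumes a Poisson $N$ with mean $\lambda = 3$): the weights $nP\CB{N=n}/\EE{N}$ sum to one, so $M$ is a legitimate $r.v.$, and for Poisson $N$ the variable $M-1$ has the same Poisson distribution as $N$, a fact derived in the Poisson compounding subsection below.

import numpy as np
from scipy.stats import poisson

lam = 3.0                      # assumed example value of E[N]
n = np.arange(0, 80)
p_N = poisson.pmf(n, lam)      # P{N = n}
p_M = n * p_N / lam            # P{M = n} = n P{N = n} / E[N]

print(p_M.sum())                                       # ~= 1, so M is a proper r.v.
print(np.allclose(p_M[1:], poisson.pmf(n[:-1], lam)))  # M - 1 ~ Poisson(lam)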
# # $Proposition.5$ The Compound $r.v.$ Identity # # $\bspace$For any function $h$, $\EE{S_N h\P{S_N}} = \EE{N} \cdot \EE{X_1 h\P{S_M}}$ # # $Proof$ # # $$\begin{align} # \EE{S_N h\P{S_N}} &= \EE{\sum_{i=1}^{N} X_i h\P{X_N}} \\ # &= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{N} X_i h\P{X_N}\mid N = n} \cdot P\CB{N = n} \\ # &= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{n} X_i h\P{X_N}\mid N = n} \cdot P\CB{N = n} \\ # & \bspace\text{by the independence of }N \text{ and }X_1,X_2,\dots \\ # &= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{n} X_i h\P{X_N}} \cdot P\CB{N = n} \\ # &= \sum_{n=0}^{\infty} \sum_{i=1}^{n} \EE{X_i h\P{S_n}} \cdot P\CB{N = n} \\ # & \bspace\EE{X_ih\P{X_1 + X_2 + \cdots + X_n}} \text{ are symmetric} \\ # &= \sum_{n=0}^{\infty} n \EE{X_1 h\P{S_n}} \cdot P\CB{N = n} \\ # &= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_n}} \cdot P\CB{M = n} \\ # &= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_n} \mid M = n} \cdot P\CB{M = n} \\ # & \bspace\text{independence of }M \text{ and }X_1,X_2,\dots, X_n \\ # &= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_M} \mid M=n} \cdot P\CB{M = n} \\ # &= \EE{N} \EE{X_1 h\P{S_M}} # \end{align}$$ # $Corollary.6$ # # Suppose $X_i$ are positive integer valued $r.v.$, and let $\alpha_j = P\CB{X_1 = j}$, for $j > 0$. Then: # # $\bspace\begin{align} # P\CB{S_N = 0} &= P\CB{N = 0} \\ # P\CB{S_N = k} &= \ffrac{1}{k} \EE{N} \sum_{j=1}^{k} j \alpha_j P\CB{S_{M-1} = k-j}, k > 0 # \end{align}$ # # $Proof$ # # >For $k$ fixed, let # > # >$\bspace h\P{x} = \begin{cases} # 1, & \text{if } x = k \\ # 0, & \text{if } x \neq k # \end{cases}$ # > # >and then $S_N h\P{S_N}$ is either equal to $k$ if $S_N = k$ or is equal to $0$ otherwise. Therefore, # > # >$$\EE{S_Nh\P{S_N}} = k P\CB{S_N = k}$$ # > # >and the compound identity yields: # > # >$$\begin{align} # kP\CB{S_N = k} &= \EE{N} \EE{X_1 h\P{S_M}} \\ # &= \EE{N} \sum_{j=1}^{\infty} \EE{X_1 h\P{S_M} \mid X_1 = j} \alpha_j \\ # &= \EE{N} \sum_{j=1}^{\infty} j \EE{h\P{S_M} \mid X_1 = j} \alpha_j \\ # &= \EE{N} \sum_{j=1}^{\infty} j P\CB{S_M = k \mid X_1 = j} \alpha_j \\ # \end{align}$$ # # >And now, # > # >$$\begin{align} # P\CB{S_M = k \mid X_1 = j} &= P\CB{\sum_{i=1}^{M} X_i = k \mid X_1 = j} \\ # &= P\CB{j + \sum_{i=2}^{M} X_i = k \mid X_1 = j} \\ # &= P\CB{j + \sum_{i=1}^{M-1} X_i = k} \\ # &= P\CB{S_{M-1} = k-j} # \end{align}$$ # # $Remark$ # # That's almost the end. Later we will show how to use this relationship to solve the problem using recursion. # ### Poisson Compounding Distribution # # Using $Proposition.5$, if $N$ is the Poisson distribution with mean $\lambda$, then # # $\bspace\begin{align} # P\CB{M-1 = n} &= P\CB{M = n+1} \\ # &= \ffrac{\P{n+1}P\CB{N = n+1}} {\EE{N}}\\ # &= \ffrac{1} {\lambda} \P{n+1} e^{-\lambda} \ffrac{\lambda^{n+1}} {\P{n+1}!} \\ # &= e^{-\lambda} \ffrac{\lambda^n} {n!} # \end{align}$ # # Thus with $P_n = P\CB{S_N = n}$, the recursion given by $Corollary.6$ can be written as # # $\bspace\begin{align} # P_0 = P\CB{S_N = 0} &= P\CB{N = 0} = e^{-\lambda} \\ # P_k = P\CB{S_N = k} &= \ffrac{1}{k} \EE{N} \sum_{j=1}^{k} j \alpha_j P\CB{S_{M-1} = k-j} = \ffrac{\lambda} {k} \sum_{j=1}^{k} j \alpha_j P_{k-j}, \bspace k > 0 # \end{align}$ # # $Remark$ # # When $X_i$ are chosen to be identical $1$, then the preceding expressions are reduced to a **Poisson** $r.v.$ # # $\bspace\begin{align} # P_0 &= e^{-\lambda} \\ # P_n &= \ffrac{\lambda} {n} P\CB{N = n-1}, \bspace k > 0 # \end{align}$ # *** # **e.g.** # # Let $S$ be a compound **Poisson** $r.v.$ with $\lambda = 4$ and $P\CB{X_i = i} = 0.25, i=1,2,3,4$. 
# # >$\bspace\begin{align} # P_0 &= e^{-\lambda} = e^{-4} \\ # P_1 &= \lambda\alpha_1 P_0 = e^{-4} \\ # P_2 &= \ffrac{\lambda} {2} \P{\alpha_1 P_1 + 2 \alpha_2 P_0} = \ffrac{3} {2} e^{-4} \\ # P_3 &= \ffrac{\lambda} {3} \P{\alpha_1 P_2 + 2 \alpha_2 P_1 + 3 \alpha_3 P_0} = \ffrac{13} {6} e^{-4} \\ # P_4 &= \ffrac{\lambda} {4} \P{\alpha_1 P_3 + 2 \alpha_2 P_2 + 3 \alpha_3 P_1 + 4 \alpha_4 P_0} = \ffrac{73} {24} e^{-4} \\ # P_5 &= \ffrac{\lambda} {5} \P{\alpha_1 P_4 + 2 \alpha_2 P_3 + 3 \alpha_3 P_2 + 4 \alpha_4 P_1 + 5 \alpha_5 P_0} = \ffrac{381} {120} e^{-4} \\ # \end{align}$ # ### Binomial Compounding Distribution # # Suppose $N$ is a binomial $r.v.$ with parmeter $r$ and $p$, then # # $\bspace\begin{align} # P\CB{M-1 = n} &= \ffrac{\P{n+1}P\CB{N = n+1}} {\EE{N}}\\ # &= \ffrac{n+1} {rp} \binom{r} {n+1} p^{n+1} \P{1-p}^{r-n-1} \\ # &= \ffrac{\P{r-1}!} {\P{r-1-n}!n!} p^n \P{1-p}^{r-1-n} # \end{align}$ # # Thus $M-1$ is also a binomial $r.v.$ with parameters $r-1$, $p$. # *** # The missing part in this Charpter might be included in future bonus content... I need to carry on to the next chapter, the Markov Chain // -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .cpp // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: C++14 // language: C++14 // name: xcpp14 // --- // ### Using Decision Tree for Loan Default Prediction // // ### What is our objective ? // * To reliably predict wether a person's loan payment will be defaulted based on features such as Salary, Account Balance etc. // // ### Getting to know the dataset! // LoanDefault dataset contains historic data for loan defaultees, along with their associated financial background, it has the following features. // * Employed - Employment status of the borrower, (1 - Employed | 0 - Unemployed). // * Bank Balance - Account Balance of the borrower at the time of repayment / default. // * Annual Salary - Per year income of the borrower at the time of repayment / default. // * Default - Target variable, indicated if the borrower repayed the loaned amount within the stipulated time period, (1 - Defaulted | 0 - Re-Paid). // // ### Approach // * This is an trivial example for dataset containing class imbalance, considering most of the people will be repaying their loan without default. // * So, we have to explore our data to check for imbalance, handle it using various techniques. // * Explore the correlation between various features in the dataset // * Split the preprocessed dataset into train and test sets respectively. // * Train a DecisionTree (Classifier) using mlpack. // * Finally we'll predict on the test set and using various evaluation metrics such as Accuracy, F1-Score, ROC AUC to judge the performance of our model on unseen data. // // #### NOTE: In this example we'll be implementing 4 parts i.e modelling on imbalanced, oversampled, SMOTE & undersampled data respectively. !wget -q http://datasets.mlpack.org/LoanDefault.csv // Import necessary library headers. #include #include #include #include #include // + // Import utility headers. #define WITHOUT_NUMPY 1 #include "matplotlibcpp.h" #include "xwidgets/ximage.hpp" #include "../utils/preprocess.hpp" #include "../utils/plot.hpp" namespace plt = matplotlibcpp; // - using namespace mlpack; using namespace mlpack::data; using namespace mlpack::tree; // Utility functions for evaluation metrics. 
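// Note (added): the helpers below assume integer class labels (0/1 here); Accuracy is the
// overall fraction of matching predictions, while Precision, Recall and F1 are computed
// per class from the true-positive, false-positive and false-negative counts.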
double Accuracy(const arma::Row& yPreds, const arma::Row& yTrue) { const size_t correct = arma::accu(yPreds == yTrue); return (double)correct / (double)yTrue.n_elem; } double Precision(const size_t truePos, const size_t falsePos) { return (double)truePos / (double)(truePos + falsePos); } double Recall(const size_t truePos, const size_t falseNeg) { return (double)truePos / (double)(truePos + falseNeg); } double F1Score(const size_t truePos, const size_t falsePos, const size_t falseNeg) { double prec = Precision(truePos, falsePos); double rec = Recall(truePos, falseNeg); return 2 * (prec * rec) / (prec + rec); } void ClassificationReport(const arma::Row& yPreds, const arma::Row& yTrue) { arma::Row uniqs = arma::unique(yTrue); std::cout << std::setw(29) << "precision" << std::setw(15) << "recall" << std::setw(15) << "f1-score" << std::setw(15) << "support" << std::endl << std::endl; for(auto val: uniqs) { size_t truePos = arma::accu(yTrue == val && yPreds == val && yPreds == yTrue); size_t falsePos = arma::accu(yPreds == val && yPreds != yTrue); size_t trueNeg = arma::accu(yTrue != val && yPreds != val && yPreds == yTrue); size_t falseNeg = arma::accu(yPreds != val && yPreds != yTrue); std::cout << std::setw(15) << val << std::setw(12) << std::setprecision(2) << Precision(truePos, falsePos) << std::setw(16) << std::setprecision(2) << Recall(truePos, falseNeg) << std::setw(14) << std::setprecision(2) << F1Score(truePos, falsePos, falseNeg) << std::setw(16) << truePos << std::endl; } } // Create a directory named data to store all preprocessed csv. !mkdir -p ./data // Drop the dataset header using sed, sed is an unix utility that prases and transforms text. !cat LoanDefault.csv | sed 1d > ./data/LoanDefault_trim.csv // ### Loading the Data // Load the preprocessed dataset into armadillo matrix. arma::mat loanData; data::Load("./data/LoanDefault_trim.csv", loanData); // Inspect the first 5 examples in the dataset std::cout << std::setw(12) << "Employed" << std::setw(15) << "Bank Balance" << std::setw(15) << "Annual Salary" << std::setw(12) << "Defaulted" << std::endl; std::cout << loanData.submat(0, 0, loanData.n_rows-1, 5).t() << std::endl; // ### Part 1 - Modelling using Imbalanced Dataset // Visualize the distribution of target classes. CountPlot("LoanDefault.csv", "Defaulted?", "", "Part-1 Distribution of target class"); auto img = xw::image_from_file("./plots/Part-1 Distribution of target class.png").finalize(); img // From the above visualization, we can observe that the presence of "0" and "1", so there is a huge class imbalance. For the first part we would not be handling the class imbalance. In order to see how our model performs on the raw imbalanced data // Visualize the distibution of target classes with respect to Employment. CountPlot("LoanDefault.csv", "Defaulted?", "Employed", "Part-1 Distribution of target class & Employed"); auto img = xw::image_from_file("./plots/Part-1 Distribution of target class & Employed.png").finalize(); img // ### Visualize Correlation // Plot the correlation matrix as heatmap. HeatMapPlot("LoanDefault.csv", "coolwarm", "Part-1 Correlation Heatmap", 1, 5, 5); auto img = xw::image_from_file("./plots/Part-1 Correlation Heatmap.png").finalize(); img // Split the data into features (X) and target (y) variables, targets are the last row. arma::Row targets = arma::conv_to>::from(loanData.row(loanData.n_rows - 1)); // Targets are dropped from the loaded matrix. 
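// Note (added): data::Load() stores the CSV transposed, i.e. each column is one
// observation and each row is one feature, which is why the target occupies the last row.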
loanData.shed_row(loanData.n_rows-1); // ### Train Test Split // The data set has to be split into a training set and a test set. Here the dataset has 10000 observations and the test Ratio is taken as 25% of the total observations. This indicates the test set should have 25% * 10000 = 2500 observations and trainng test should have 7500 observations respectively. This can be done using the `data::Split()` api from mlpack. // Split the dataset into train and test sets using mlpack. arma::mat Xtrain, Xtest; arma::Row Ytrain, Ytest; mlpack::data::Split(loanData, targets, Xtrain, Xtest, Ytrain, Ytest, 0.25); // ### Training Decision Tree model // Decision trees start with a basic question, From there you can ask a series of questions to determine an answer. These questions make up the decision nodes in the tree, acting as a means to split the data. Each question helps an individual to arrive at a final decision, which would be denoted by the leaf node. Observations that fit the criteria will follow the “Yes” branch and those that don’t will follow the alternate path. Decision trees seek to find the best split to subset the data. To create the model we'll be using `DecisionTree<>` API from mlpack. // Create and train Decision Tree model using mlpack. DecisionTree<> dt(Xtrain, Ytrain, 2); // ### Making Predictions on Test set // Classify the test set using trained model & get the probabilities. arma::Row output; arma::mat probs; dt.Classify(Xtest, output, probs); // ### Evaluation metrics // // * True Positive - The actual value was true & the model predicted true. // * False Positive - The actual value was false & the model predicted true, Type I error. // * True Negative - The actual value was false & the model predicted false. // * False Negative - The actual value was true & the model predicted false, Type II error. // // `Accuracy`: is a metric that generally describes how the model performs across all classes. It is useful when all classes are of equal importance. It is calculated as the ratio between the number of correct predictions to the total number of predictions. // // $$Accuracy = \frac{True_{positive} + True_{negative}}{True_{positive} + True_{negative} + False_{positive} + False_{negative}}$$ // // `Precision`: is calculated as the ratio between the number of positive samples correctly classified to the total number of samples classified as Positive. The precision measures the model's accuracy in classifying a sample as positive. // // $$Precision = \frac{True_{positive}}{True_{positive} + False_{positive}}$$ // // `Recall`: is calulated as the ratio between the number of positive samples correctly classified as Positive to the total number of Positive samples. The recall measures the model's ability to detect Positive samples. The higher the recall, the more positive samples detected. // // $$Recall = \frac{True_{positive}}{True_{positive} + False_{negative}}$$ // // * The decision of whether to use precision or recall depends on the type of problem begin solved. // * If the goal is to detect all positive samples then use recall. // * Use precision if the problem is sensitive to classifying a sample as Positive in general. // // * ROC graph has the True Positive rate on the y axis and the False Positive rate on the x axis. // * ROC Area under the curve in the graph is the primary metric to determine if the classifier is doing well, the higher the value the higher the model performance. // Save the yTest and probabilities into csv for generating ROC AUC plot. 
data::Save("./data/probabilities.csv", probs); data::Save("./data/ytest.csv", Ytest); // Model evaluation metrics. std::cout << "Accuracy: " << Accuracy(output, Ytest) << std::endl; ClassificationReport(output, Ytest); // Plot ROC AUC Curve to visualize the performance of the model on TP & FP. RocAucPlot("./data/ytest.csv", "./data/probabilities.csv", "Part-1 Imbalanced Targets ROC AUC Curve"); auto img = xw::image_from_file("./plots/Part-1 Imbalanced Targets ROC AUC Curve.png").finalize(); img // From the above classification report, we can infer that our model trained on imbalanced data performs well on negative class but not the same for positive class. // ### Part 2 - Modelling using Random Oversampling // For this part we would be handling the class imbalance. In order to see how our model performs on the randomly oversampled data. We will be using `Resample()` method to oversample the minority class i.e "1, signifying Defaulted" // Oversample the minority population. Resample("LoanDefault.csv", "Defaulted?", 0, 1, "oversample"); // Visualize the distribution of target classes. CountPlot("./data/LoanDefault_oversampled.csv", "Defaulted?", "", "Part-2 Distribution of target class"); auto img = xw::image_from_file("./plots/Part-2 Distribution of target class.png").finalize(); img // From the above plot we can see that after resampling the minority class (Yes) is oversampled to be equal to the majority class (No). This solves our imbalanced data issue for this part. !cat ./data/LoanDefault_oversampled.csv | sed 1d > ./data/LoanDefault_trim.csv // Load the preprocessed dataset into armadillo matrix. arma::mat loanData; data::Load("./data/LoanDefault_trim.csv", loanData); // Plot the correlation matrix as heatmap. HeatMapPlot("./data/LoanDefault_oversampled.csv", "coolwarm", "Part-2 Correlation Heatmap", 1, 5, 5); auto img = xw::image_from_file("./plots/Part-2 Correlation Heatmap.png").finalize(); img // Split the data into features (X) and target (y) variables, targets are the last row. arma::Row targets = arma::conv_to>::from(loanData.row(loanData.n_rows - 1)); // Targets are dropped from the loaded matrix. loanData.shed_row(loanData.n_rows-1); // ### Train Test Split // The dataset has to be split into training and test set. Here the dataset has 19334 observations and the test ratio is taken as 20% of the total observations. This indicates that the test set should have 20% * 19334 = 3866 observations and training set should have 15468 observations respectively. This can be done using the `data::Split()` api from mlpack. // Split the dataset into train and test sets using mlpack. arma::mat Xtrain, Xtest; arma::Row Ytrain, Ytest; mlpack::data::Split(loanData, targets, Xtrain, Xtest, Ytrain, Ytest, 0.25); // ### Training Decision Tree model // We will use `DecisionTree<>` API from mlpack to train the model on oversampled data. // Create and train Decision Tree model using mlpack. DecisionTree<> dt(Xtrain, Ytrain, 2); // ### Making Predictions on Test set // Classify the test set using trained model & get the probabilities. arma::Row output; arma::mat probs; dt.Classify(Xtest, output, probs); // Save the yTest and probabilities into csv for generating ROC AUC plot. data::Save("./data/probabilities.csv", probs); data::Save("./data/ytest.csv", Ytest); // Model evaluation metrics. std::cout << "Accuracy: " << Accuracy(output, Ytest) << std::endl; ClassificationReport(output, Ytest); // Plot ROC AUC Curve to visualize the performance of the model on TP & FP. 
RocAucPlot("./data/ytest.csv", "./data/probabilities.csv", "Part-2 Random Oversampled Targets ROC AUC Curve"); auto img = xw::image_from_file("./plots/Part-2 Random Oversampled Targets ROC AUC Curve.png").finalize(); img // From the above classification report, we can infer that our model trained on oversampled data performs well on both the classes, This proves the fact that imbalanced data has affected the model trained in part one. Also from the ROC AUC Curve, we can infer the True Positive Rate is around 99%, which is a good significance that our model performs well on unseen data. // ### Part 3 - Modelling using Synthetic Minority Oversampling Technique // For this part we would be handling the class imbalance. In order to see how our model performs on the oversampled data using SMOTE. We will be using `SMOTE` API from imblearn to oversample the minority class i.e "1, signifying Defaulted" // Oversample the minority class using SMOTE resampling strategy. Resample("LoanDefault.csv", "Defaulted?", 0, 1, "smote"); // We need to put back the headers manually into the newely sampled dataset for visualization purpose. !sed -i "1iEmployed,Bank Balance,Annual Salary,Defaulted?" ./data/LoanDefault_smotesampled.csv // Visualize the distribution of target classes. CountPlot("./data/LoanDefault_smotesampled.csv", "Defaulted?", "", "Part-3 Distribution of target class"); auto img = xw::image_from_file("./plots/Part-3 Distribution of target class.png").finalize(); img !cat ./data/LoanDefault_smotesampled.csv | sed 1d > ./data/LoanDefault_trim.csv // Load the preprocessed dataset into armadillo matrix. arma::mat loanData; data::Load("./data/LoanDefault_trim.csv", loanData); // Plot the correlation matrix as heatmap. HeatMapPlot("./data/LoanDefault_smotesampled.csv", "coolwarm", "Part-3 Correlation Heatmap", 1, 5, 5); auto img = xw::image_from_file("./plots/Part-3 Correlation Heatmap.png").finalize(); img // Split the data into features (X) and target (y) variables, targets are the last row. arma::Row targets = arma::conv_to>::from(loanData.row(loanData.n_rows - 1)); // Targets are dropped from the loaded matrix. loanData.shed_row(loanData.n_rows-1); // ### Train Test Split // The dataset has to be split into training and test set. The test ratio is taken as 25% of the total observations. This can be done using the `data::Split()` api from mlpack. // Split the dataset into train and test sets using mlpack. arma::mat Xtrain, Xtest; arma::Row Ytrain, Ytest; mlpack::data::Split(loanData, targets, Xtrain, Xtest, Ytrain, Ytest, 0.25); // ### Training Decision Tree model // We will use `DecisionTree<>` API from mlpack to train the model on SMOTE data. // Create and train Decision Tree model. DecisionTree<> dt(Xtrain, Ytrain, 2); // ### Making Predictions on Test set // Classify the test set using trained model & get the probabilities. arma::Row output; arma::mat probs; dt.Classify(Xtest, output, probs); // Save the yTest and probabilities into csv for generating ROC AUC plot. data::Save("./data/probabilities.csv", probs); data::Save("./data/ytest.csv", Ytest); // Model evaluation metrics. std::cout << "Accuracy: " << Accuracy(output, Ytest) << std::endl; ClassificationReport(output, Ytest); // Plot ROC AUC Curve to visualize the performance of the model on TP & FP. 
RocAucPlot("./data/ytest.csv", "./data/probabilities.csv", "Part-3 SMOTE ROC AUC Curve"); auto img = xw::image_from_file("./plots/Part-3 SMOTE ROC AUC Curve.png").finalize(); img // From the above classification report, we can infer that our model trained on SMOTE data performs well on both the classes. Also from the ROC AUC Curve, we can infer the True Positive Rate is around 90%, which is a quantifies that our model performs well on unseen data. But it performs slightly lower than the Oversampled data. // ### Part 4 - Modelling using Random Undersampling // For this part we would be handling the class imbalance by undersampling the majority class, to see how well our model trains and performs on randomly undersampled data. // Since the size of the data set is quite small, undersampling of majority class would not make much sense here. But still we are going forward with this part to get a sense of how our model performs on less amount of data and it's impact on the learning. // Undersample the majority class. Resample("LoanDefault.csv", "Defaulted?", 0, 1, "undersample"); // Visualize the distribution of target classes. CountPlot("./data/LoanDefault_undersampled.csv", "Defaulted?", "", "Part-4 Distribution of target class"); auto img = xw::image_from_file("./plots/Part-4 Distribution of target class.png").finalize(); img // From the above plot we can see that after resampling the majority class (No) is undersampled to be equal to the majority class (Yes). This solves our imbalanced data issue for this part. !cat ./data/LoanDefault_undersampled.csv | sed 1d > ./data/LoanDefault_trim.csv // Load the preprocessed dataset into armadillo matrix. arma::mat loanData; data::Load("./data/LoanDefault_trim.csv", loanData); // Plot the correlation matrix as heatmap. HeatMapPlot("./data/LoanDefault_undersampled.csv", "coolwarm", "Part-4 Correlation Heatmap", 1, 5, 5); auto img = xw::image_from_file("./plots/Part-4 Correlation Heatmap.png").finalize(); img // Split the data into features (X) and target (y) variables, targets are the last row. arma::Row targets = arma::conv_to>::from(loanData.row(loanData.n_rows - 1)); // Targets are dropped from the loaded matrix. loanData.shed_row(loanData.n_rows-1); // ### Train Test Split // The dataset has to be split into training and test set. Here the dataset has 666 observations and the test ratio is taken as 20% of the total observations. This indicates that the test set should have 20% * 666 = 133 observations and training set should have 533 observations respectively. This can be done using the `data::Split()` api from mlpack. // Split the dataset into train and test sets using mlpack. arma::mat Xtrain, Xtest; arma::Row Ytrain, Ytest; mlpack::data::Split(loanData, targets, Xtrain, Xtest, Ytrain, Ytest, 0.25); // ### Training Decision Tree model // We will use `DecisionTree<>` API from mlpack to train the model on SMOTE data. // Create and train Decision Tree model. DecisionTree<> dt(Xtrain, Ytrain, 2); // Classify the test set using trained model & get the probabilities. arma::Row output; arma::mat probs; dt.Classify(Xtest, output, probs); // Save the yTest and probabilities into csv for generating ROC AUC plot. data::Save("./data/probabilities.csv", probs); data::Save("./data/ytest.csv", Ytest); // Model evaluation metrics. std::cout << "Accuracy: " << Accuracy(output, Ytest) << std::endl; ClassificationReport(output, Ytest); // Plot ROC AUC Curve to visualize the performance of the model on TP & FP. 
RocAucPlot("./data/ytest.csv", "./data/probabilities.csv", "Part-4 Random Undersampled targets ROC AUC Curve"); auto img = xw::image_from_file("./plots/Part-4 Random Undersampled targets ROC AUC Curve.png").finalize(); img // From the above classification report, we can infer that our model trained on undersampled data performs well on both the classes compared to imbalanced model in Part 1. Also from the ROC AUC Curve, we can infer the True Positive Rate is around 80% although there is a small flatline, but still performs better than imbalanced model. // ### Conclusion // Models trained on resampled data performs well, but there is still room for improvement. Feel free to play around with the hyperparameters, training data split ratio etc. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Decision Tree # # given dataset $D=\left\{(x^{(i)},y^{(i)})\right\}$ # # decision tree is trying to pick $(feature, value)$ that partition the dataset to subsets # # after that partition, elements in each subsets is similar in total, i.e we gain certainty. # # continue the process, until we get subset that is very pure or partition too many times. # # we thus create a tree called the decision tree. # # when predicting, find leaf subset of that sample, then use typical value of leaf as prediction. # ## information gain # # we use entropy to measure the uncertainty of data. # # for classfication problem, assume $y_{(i)} \in \left\{1,...,k\right\}$, we have the entropy of dataset $D$: # # $$H(D) = E(-log\ p_{i}) = -\sum_{i=1}^{k}p_{i}log\ p_{i}$$ # # $p_{i}$ is the frequency of i-th class, it defines the uncertainty of $D$. # # suppose we partition $D$ according to feature $A$ into $D_{1},...,D_{n}$, we have: # # $$H(D|A)=\sum_{i=1}^{n}\frac{\#D_{i}}{\#D}H(D_{i})$$ # # that is: the uncertainty of $D$ after knowing $A$. # # information gain is uncertainty loss: # # $$g(D,A) = H(D) - H(D|A)$$ # # decision tree ID3 choose feature $A$ that maximize $g(D,A)$ until: # # 1. subset is empty # 2. information gain $g(D,A)\le\epsilon$ # ## information gain ratio # # if use information gain, we prefer feature $A$ such that $\#A$ is large. # # more precisely, we prefer features that is uncertain # # $$H_{A}(D) =-\sum_{i=1}^{n}\frac{\#D_{i}}{\#D}log\ \frac{\#D_{i}}{\#D}$$ # # defines that uncertainty, it is the entropy of viewing category of $A$ as labels. # # to fix that problem, we define the information gain ratio: # # $$g_{R}(D,A)=\frac{g(D,A)}{H_{A}(D)}=\frac{H(D)-H(D|A)}{H_{A}(D)}$$ # # algorithm that uses $g_{R}(D,A)$ is C4.5 # ## pruning # # we need to pruning the decision tree $\Rightarrow $ lower model's complexity $\Rightarrow $ mitigate overfit # # suppose now we have a decision tree $T$, use $\left | T \right | $ to denote the number of leaves of $T$, and these leaves are $T_{1},...,T_{\left | T \right | }$. # # then entropy of leaf $t$: $H(T_{t})$ # # total entropy of these leaves: # # $$C(T) = \sum_{t=1}^{\left | T \right |} \left | T_{t} \right |H(T_{t})$$ # # we want these minimize this entropy, and at the same time limit model's complexity, give rise to the loss function: # # $$C_{\alpha}(T) = C(T) + \alpha\left | T \right |$$ # # in practice, pruning is from leaves to root. # # if pruning a node result in a decrease of the loss function, the operate this pruning. 
# ## CART-classification and regression tree # # CART can solve both the classification and regression problem. # # CART simply uses different strategies for them. # # for regression problem, we try to find feature $j$ and cutting point $s$ that minimize the square error: # # $$\underset{j,s}{min}\left[\underset{c_{1}}{min}\sum_{x_{i} \in R_{1}(j, s)}(y_{i} - c_{1})^{2} + \underset{c_{2}}{min}\sum_{x_{i} \in R_{2}(j, s)}(y_{i} - c_{2})^{2}\right]$$ # # rather than optimizing information gain or information gain ratio. # # CART optimize Gini-index when facing a classification problem: # # $$Gini(D) = E(1 - p_{i}) = \sum_{i=1}^{k}p_{i}(1 - p_{i})$$ # # here, rather than self-information $-log\ p_{i}$ uses in entropy, we use $1 - p_{i}$ to indicate the information of event with probability $p_{i}$. # ## Practice using sklearn # + from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() X = iris.data[:, 2:] # petal length and width y = iris.target # - tree_clf = DecisionTreeClassifier(max_depth=2) tree_clf.fit(X, y) # + """visualize using graphviz, need 1.pip install graphviz, 2.brew install graphviz""" from graphviz import Source from sklearn.tree import export_graphviz export_graphviz(tree_clf, out_file="iris_tree.dot", feature_names=iris.feature_names[2:], class_names=iris.target_names, rounded=True, filled=True ) Source.from_file("iris_tree.dot") # - tree_clf.predict_proba([[5, 1.5]]), tree_clf.predict([[5, 1.5]]) """criterion can switch from gini to entropy""" entropy_tree_clf = DecisionTreeClassifier(criterion="entropy", max_depth=3) """hyper-parameters for regularization""" regularized_tree_clf = DecisionTreeClassifier(max_depth=5, # maximum depth of that tree max_leaf_nodes=20, # maximum number of leaf nodes max_features=8, # maximum number of features when splitting each node min_samples_split=10, # min number of samples of a node before it can split min_samples_leaf=4, # min number of samples of a leaf node min_weight_fraction_leaf=0.01 # same as min_samples_leaf, but by weight frac ) # + """CART(sklearn uses) can also regression""" from sklearn.tree import DecisionTreeRegressor tree_reg = DecisionTreeRegressor(max_depth=3) # - # ### moon dataset # + """make moon dataset""" from sklearn.datasets import make_moons from sklearn.model_selection import train_test_split X, y = make_moons(n_samples=10000, noise=0.4) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) # + """grid search""" from sklearn.model_selection import GridSearchCV param_grid = [{"max_leaf_nodes": [2, 5, 10, 20], "min_samples_split": [3, 4]}] tree_clf = DecisionTreeClassifier() grid_search = GridSearchCV(tree_clf, param_grid, cv=3, verbose=3) grid_search.fit(X_train, y_train) # + """using best estimator to predict""" from sklearn.metrics import accuracy_score y_predict = grid_search.predict(X_test) accuracy_score(y_true=y_test, y_pred=y_predict) # - # ### using multiple trees # + """generate 1000 subsets, each 100 instances""" from sklearn.model_selection import ShuffleSplit n_trees = 1000 n_instances = 100 mini_sets = [] rs = ShuffleSplit(n_splits=n_trees, test_size=len(X_train) - n_instances, random_state=42) # make train_size = len(X_train) - (len(X_train) - n_instances) = n_instances for mini_train_index, mini_test_index in rs.split(X_train): X_mini_train = X_train[mini_train_index] y_mini_train = y_train[mini_train_index] mini_sets.append((X_mini_train, y_mini_train)) # + """train each subset on grid_search.best_estimator_""" from sklearn.base 
import clone import numpy as np forest = [clone(grid_search.best_estimator_) for _ in range(n_trees)] accuracy_scores = [] for tree, (X_mini_train, y_mini_train) in zip(forest, mini_sets): tree.fit(X_mini_train, y_mini_train) y_pred = tree.predict(X_test) accuracy_scores.append(accuracy_score(y_test, y_pred)) np.mean(accuracy_scores) # + """save all predictions""" Y_pred = np.empty([n_trees, len(X_test)], dtype=np.uint8) for tree_index, tree in enumerate(forest): Y_pred[tree_index] = tree.predict(X_test) # + """use majority vote, improve performance""" from scipy.stats import mode y_pred_majority_votes, n_votes = mode(Y_pred, axis=0) accuracy_score(y_test, y_pred_majority_votes.reshape(-1)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_tensorflow_p36 # language: python # name: conda_tensorflow_p36 # --- # Just exploring the labels distribution. import numpy as np import matplotlib.pyplot as plt X_train = np.loadtxt('x_train2.csv', delimiter=',') Y_train = np.loadtxt('y_train2.csv', delimiter=',') Y_train_1d = [np.argmax(vector) for vector in Y_train] plt.hist(Y_train_1d, bins=40) # So the PCA is useless because it does not take into account the order of the symbols, among other things. from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(X_train) X_pca = pca.transform(X_train) plt.scatter(X_pca[:, 0], X_pca[:, 1], alpha = 0.1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="9BfyADmSOdB1" # # **FreeBirds Crew** # # ### We Learn and Grow Together # # drawing # + [markdown] id="FQz3PMQKOkZ3" # ### **Stacks** # A Stack is an Abstract Data Type, Serves as a Collection of elements. # It has two Principals - # 1. Push # 2. Pop # **Push**,which adds an element to the collection, and **Pop**, which removes the most recently added element. # # Stacks uses the *LIFO(Last in First Out)* Principal that defines as, the order of elements in which they are added in stack the last element is present at the top of the Stack. # # ### **Conditions in Stacks** # Stacks has Bounded Capacity such that If the Stack is Full, not have enough space to add more elements then that condition is called **OverFlow**, Similiarly if the Stack has no Elements Such that No Element Can be Popped Out then the Condition is Called **UnderFlow**. # # # drawing # + [markdown] id="MVgcUvV3W8mn" # ## Topics that are Covered in this Session - # 1. Stack Implementation using Python # 2. Stack Implementation using Linked List # 3. Check if given expression is balanced expression or not # 4. Find duplicate parenthesis in an expression # 5. Decode the given sequence to construct minimum number without repeated digits # 6. Inorder Tree Traversal | Iterative & Recursive # 7. Preorder Tree Traversal | Iterative & Recursive # 10. Postorder Tree Traversal | Iterative & Recursive # + [markdown] id="Wai3fYTFnriZ" # ### **Stack Implementation in Python** # # The main functions that needs to be implemented are - # # 1. empty() – Returns if the stack is empty – Time Complexity : O(1) # 2. size() – Returns the size of the stack – Time Complexity : O(1) # 3. top() – Returns a reference to the top element of the stack – Time Complexity : O(1) # 4. 
push(g) – Adds the element ‘g’ at the top of the stack – Time Complexity : O(1) # 5. pop() – Deletes the top element of the stack – Time Complexity : O(1) # # drawing # # # # # We can Implement Stacks in Python by using following Techniques - # 1. List # 2. Collections.deque # 3. Queue.LifoQueue # + [markdown] id="Q6W6BfeCo2qK" # #### **Stacks using List** # Python has build-in Data Structure called **List**, that has **append()** method so we use that instead of **push() and pop()** method that is used to remove elements from **stack** that is also available in Lists and Works in **LIFO Order**. Time Compelexity of LIST Operations are O(n) # # **NOTE** - Unfortunately, List has a some disadvantages as well. The biggest issue is that there comes speed issues if the size of the list grows. The items that are stored in list are placed next to each other in memory if the size grows and more items need to added then python has to do some **Memory Allocations**, due to which the append() operations take more time then ususal. # + id="LhsCNbNmOezK" outputId="fc5aefa6-26a0-4081-82f5-b4866833416c" colab={"base_uri": "https://localhost:8080/", "height": 233} print("Making Stack using List") stack = [] print(stack) # append() function of list used in place of push() function of Stack that add elements in Stack stack.append('Subscribe') stack.append('FreeBirds Crew') stack.append('Youtube') print('Adding Elements....') print(stack) # pop() fucntion of list is same as pop() function in stack that works in LIFO Order. print('\nPopped Elements are: ') print(stack.pop()) print(stack.pop()) print(stack.pop()) print('\nStack gives Index Error of we Try to Pop from the Empty Stack...') print(stack) # + [markdown] id="WtP3B2RWs4TV" # #### **Stacks using Collections.deque** # Let's Implement the Stack using DEQUE Class of Collections Module. DEQUE is better then LIST When we need Quick Append and Pop Opertions from both Ends of the Container. DEQUE has O(1) Time Complexity for append() and pop() as Compared to the LIST which that O(n) Time Complexity. # Same methods on deque as seen in list are used, append() and pop(). # + id="OEMRzxpLq5uT" outputId="aa0b872e-45d7-4eab-f24c-4232b8a59bcd" colab={"base_uri": "https://localhost:8080/", "height": 233} from collections import deque print("Building Stack using Deque Class of Collections Module....") stack = deque() print(stack) # append() function of list used in place of push() function of Stack that add elements in Stack stack.append('a') stack.append('b') stack.append('c') print('Adding Elements....') print(stack) # pop() fucntion of list is same as pop() function in stack that works in LIFO Order. print('\nPopped Elements are: ') print(stack.pop()) print(stack.pop()) print(stack.pop()) print('\nStack gives Index Error of we Try to Pop from the Empty Stack...') print(stack) # + [markdown] id="CqyIpIm-uYDM" # #### **Stacks using Queue.LifoQueue** # QUEUE module has a LIFO Queue, which is a Stack. Data is inserted into Queue using put() function and get() takes data out from the Queue. # There are various functions present in this module: # 1. maxsize – Number of items allowed in the queue. # 2. empty() – Return True if the queue is empty, False otherwise. # 3. full() – Return True if there are maxsize items in the queue. If the queue was initialized with maxsize=0 (the default), then full() never returns True. # 4. get() – Remove and return an item from the queue. If queue is empty, wait until an item is available. # 5. 
get_nowait() – Return an item if one is immediately available, else raise QueueEmpty. # 6. put(item) – Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item. # 7. put_nowait(item) – Put an item into the queue without blocking. # 8. qsize() – Return the number of items in the queue. If no free slot is immediately available, raise QueueFull. [Source](https://www.geeksforgeeks.org) # + id="PHjNOAIVuKA9" outputId="c2ff15f7-85b3-4000-ce7e-3733e69f32c8" colab={"base_uri": "https://localhost:8080/", "height": 251} from queue import LifoQueue print("Making the Stack using LifoQueue...") stack = LifoQueue(maxsize = 3) print(stack) # qsize() is the function that shows the size of the stack print(stack.qsize()) # put() function is used to add elements in the Stack print("Adding Elements in Stack ....") stack.put('a') stack.put('b') stack.put('c') print("Check if it's Full: ", stack.full()) print("Size of the Stack: ", stack.qsize()) # get() fucntion is used to Pop Elements from the Stack. print('\nPop Elements from the Stack ....') print(stack.get()) print(stack.get()) print(stack.get()) print("\nEmpty Stack: ", stack.empty()) # + [markdown] id="0ZoJNrOV8L_g" # ### **Stack Implementation using Linked List** # # Let's Implement Stack using Linked List in Such a Way that A Stack is then a Pointer to the "Head" of the List where Pushing and Popping Operations Happens at the Head of the List, Counter is also Track the Size of the List. # # **NOTE** - Advantage of Linked List is that, we can make a Dynamic Change in Size of Stack, and memory is also dynamically allocated so Stack Overflow cannot happen unless memory is exhausted. # # + id="Rxq2UKjj6PTP" outputId="89855cf0-d9c2-403e-e224-06b37df157b2" colab={"base_uri": "https://localhost:8080/", "height": 161} # Stack Class class Node: def __init__(self, key, next=None): self.key = key self.next = next class Stack: def __init__(self): self.top = None # Adding Elements in Stack using Push Operations def push(self, x): # Allocate the new node in the heap node = Node(x) # Check if stack is full, if not then add Element if node is None: print("Heap Overflow", end='') return print("Inserting ...", x) # set the data in allocated node node.data = x # Set the .next pointer of the new node to point to the current top node of the list node.next = self.top # update top pointer self.top = node # Function to check if the stack is empty or not def isEmpty(self): return self.top is None # Function to return top element in a stack def peek(self): # check for empty stack if not self.isEmpty(): return self.top.data else: print("Stack is empty") return -1 # Function to pop top element from the stack def pop(self): # remove at the beginning # Check for stack Underflow if self.top is None: print("Stack Underflow", end='') return print("Removing ...", self.peek()) # update the top pointer to point to the next node self.top = self.top.next if __name__ == '__main__': stack = Stack() stack.push(1) stack.push(2) stack.push(3) print("Top element is: ", stack.peek()) stack.pop() stack.pop() stack.pop() if stack.isEmpty(): print("Stack is empty") else: print("Stack is not empty") # + [markdown] id="uzGZii1c_zxx" # ### **Check if Given Expression is Balanced Expression or Not** # # drawing # # # + id="ItvQHcCw8OV8" outputId="a1daa15c-eceb-403b-c908-72dc31fc8337" colab={"base_uri": "https://localhost:8080/", "height": 35} from collections import deque # Function to Check if the Expression is Balanced or not def 
CheckExpression(exp): # Check that the Length of the Expression is Even if len(exp) & 1: return False # Make a Stack using DEQUE stack = deque() # Let's Start Traversing for ch in exp: # Append into the Stack when Found Open Braces if ch == '(' or ch == '{' or ch == '[': stack.append(ch) # Find the Close Braces if ch == ')' or ch == '}' or ch == ']': # return false if mismatch is found if not stack: return False # pop character from the stack top = stack.pop() # if the popped character if not an opening brace or does not pair if (top == '(' and ch != ')') or (top == '{' and ch != '}' or (top == '[' and ch != ']')): return False # expression is balanced only if stack is empty at this point return not stack if __name__ == '__main__': exp = "{()}[{}" if CheckExpression(exp): print("The expression is balanced") else: print("The expression is not balanced") # + [markdown] id="ocNC9pXOMN6F" # ### **Find the Duplicate Parenthesis in an Expression** # # Like - # # **Input** - ((x+y))+z # # **Output** - Duplicate Found # # # **Input** - (x+y) # # **Output** - Duplicate not Found # # + id="vIqiOdMP_3zy" outputId="f69898a9-e6b1-446e-e429-95d71696723b" colab={"base_uri": "https://localhost:8080/", "height": 35} from collections import deque def FindDupParenthesis(exp): if len(exp) <= 3: return False stack = deque() for c in exp: # if current in the expression is a not a closing parenthesis if c != ')': stack.append(c) # if current in the expression is a closing parenthesis else: # if we top element in the stack is an opening parenthesis, if stack[-1] == '(': return True # pop till '(' is found for current ')' while stack[-1] != '(': stack.pop() # pop '(' stack.pop() # if we reach here, then the expression does not have any duplicate parenthesis return False if __name__ == '__main__': exp = "((x+y))" # assumes valid expression if FindDupParenthesis(exp): print("The expression have duplicate parenthesis.") else: print("The expression does not have duplicate parenthesis") # + [markdown] id="zq0aXwRAhgtH" # ### **Decode the given sequence to construct minimum number without repeated digits** # # **Sequences -> Output** # # IIDDIDID -> 125437698 # # DDDD -> 54321 # # I denotes the 'Increasing Sequences' and D denotes the 'Decreasing Sequences', Decode it to construct the minimum number without Repeating digits. # + id="9kOqCCPOHBDv" outputId="13ab328f-4a6b-4450-cae9-be26a40c6e09" colab={"base_uri": "https://localhost:8080/", "height": 35} from collections import deque # Function to decode the given sequence to construct minimum number # without repeated digits def decode(seq): result = "" stack = deque() for i in range(len(seq) + 1): stack.append(i + 1) #print(stack) if i == len(seq) or seq[i] == 'I': while stack: result += str(stack.pop()) #print(result) return result if __name__ == '__main__': seq = "DIIDII" # input sequence print("Minimum number is", decode(seq)) # + [markdown] id="GlpXDIfvtDPk" # ### **Inorder Tree Traversal | Iterative & Recursive** # ![](https://leetcode.com/articles/Figures/145_transverse.png) # Trees are traversed in multiple ways in Depth_First_Order(Pre-Order,In-Order, and Post-Order) or Breadth_First_Order(Level Order Traversal) # # As We Know Tree is not a linear data-strcuture, that is from a given node there can be more then one possible way to next node. # # So For Tranversing the Binary Tree (non-empty) using In-Order Tranversal manner, we do three things for Every **Node N** starting from root node of the Tree - # 1. 
(L) Recursively Traverse the Left Subtree, After that Come Again at N # 2. (N) Process N itself also. # 3. (R) Recursively Traverse the Right Subtree, After that ome back to N. # # In this way we can traverse the Tree using In-Order. (Depth First Search) # # ![Image](https://www.techiedelight.com/wp-content/uploads/Inorder-Traversal.png) # + id="BzpwVIlylAEj" outputId="cf67add7-1596-41d4-966b-967d68bd566d" colab={"base_uri": "https://localhost:8080/", "height": 35} #Recursive Implementation class Node: def __init__(self, data=None, left=None, right=None): self.data = data self.left = left self.right = right def inorder_rec(root): #Return None if Tree is Empty if root is None: return #Traverse the Left Part of the Tree inorder_rec(root.left) #Display the Data or Number on Node print(root.data, end=' ') #Traverse the right tree inorder(root.right) if __name__ == '__main__': root = Node(1) root.left = Node(2) root.right = Node(3) root.left.left = Node(10) root.right.left = Node(5) root.right.right = Node(11) root.right.left.left = Node(7) root.right.left.right = Node(8) inorder_rec(root) # + id="lngwes1PlJJ4" outputId="66329096-d6b4-4dc8-88b9-4393c87f984e" colab={"base_uri": "https://localhost:8080/", "height": 35} #Iterative Implementation from collections import deque class Node: def __init__(self, data=None, left=None, right=None): self.data = data self.left = left self.right = right def inorder_itr(root): stack = deque() current = root while stack or current: # if current node is not None, push to the stack and move to its left child if current: stack.append(current) current = current.left else: # else if current node is None, we pop an element from the stack,print it and move to right current = stack.pop() print(current.data, end=' ') current = current.right if __name__ == '__main__': root = Node(1) root.left = Node(2) root.right = Node(3) root.left.left = Node(4) root.right.left = Node(5) root.right.right = Node(6) root.right.left.left = Node(7) root.right.left.right = Node(8) inorder_itr(root) # + [markdown] id="e12G3c-CA3Wp" # ### **Pre-Order Tree Traversal | Iterative & Recursive** # # Trees are traversed in multiple ways in Depth_First_Order(Pre-Order,In-Order, and Post-Order) or Breadth_First_Order(Level Order Traversal) # # As We Know Tree is not a linear data-strcuture, that is from a given node there can be more then one possible way to next node. # # So For Tranversing the Binary Tree (non-empty) using Pre-Order Tranversal manner, we do three things for Every **Node N** starting from root node of the Tree - # 1. (N) Process N itself also. # 2. (L) Recursively Traverse the Left Subtree, After that Come Again at N # 3. (R) Recursively Traverse the Right Subtree, After that ome back to N. # # In this way we can traverse the Tree using In-Order. 
(Depth First Search) # # ![Image](https://www.techiedelight.com/wp-content/uploads/Preorder-Traversal.png) # + id="GRmgWt5olZwx" outputId="9f329d3a-64e9-4dad-a953-a68f4b5eaecc" colab={"base_uri": "https://localhost:8080/", "height": 35} #Recursive Implementation class Node: def __init__(self, data=None, left=None, right=None): self.data = data self.left = left self.right = right def preorder_rec(root): if root is None: return print(root.data, end=' ') preorder_rec(root.left) preorder_rec(root.right) if __name__ == '__main__': root = Node(1) root.left = Node(11) root.right = Node(6) root.left.left = Node(10) root.right.left = Node(5) root.right.right = Node(2) root.right.left.left = Node(13) root.right.left.right = Node(8) preorder_rec(root) # + id="UwY3pmFOlbAC" outputId="cc2413d4-38ce-40ef-edc9-aba6d76792d4" colab={"base_uri": "https://localhost:8080/", "height": 35} #Iterative Implementation from collections import deque class Node: def __init__(self, data, left=None, right=None): self.data = data self.left = left self.right = right def preorder_itr(root): if root is None: return stack = deque() stack.append(root) current = root while stack: # if current node is not None, print it and push its right child to the stack and move to its left child if current: print(current.data, end=' ') if current.right: stack.append(current.right) current = current.left # else if current node is None, we pop a node from the stack set current node to the popped node else: current = stack.pop() if __name__ == '__main__': root = Node(1) root.left = Node(11) root.right = Node(6) root.left.left = Node(10) root.right.left = Node(5) root.right.right = Node(2) root.right.left.left = Node(13) root.right.left.right = Node(8) preorder_itr(root) # + [markdown] id="JLURUshWEI5V" # ### **Pre-Order Tree Traversal | Iterative & Recursive** # # Trees are traversed in multiple ways in Depth_First_Order(Pre-Order,In-Order, and Post-Order) or Breadth_First_Order(Level Order Traversal) # # As We Know Tree is not a linear data-strcuture, that is from a given node there can be more then one possible way to next node. # # So For Tranversing the Binary Tree (non-empty) using Pre-Order Tranversal manner, we do three things for Every **Node N** starting from root node of the Tree - # 1. (L) Recursively Traverse the Left Subtree, After that Come Again at N # 2. (R) Recursively Traverse the Right Subtree, After that ome back to N. # 3. (N) Process N itself also. # # In this way we can traverse the Tree using In-Order. 
(Depth First Search) # # ![Image](https://www.techiedelight.com/wp-content/uploads/Postorder-Traversal.png) # + id="wXJN8nXWCZwf" outputId="0216ebfd-bf34-4e83-dd40-7b7dfd83ad71" colab={"base_uri": "https://localhost:8080/", "height": 35} #Recursive Implementation class Node: def __init__(self, data=None, left=None, right=None): self.data = data self.left = left self.right = right def postorder(root): if root is None: return postorder(root.left) postorder(root.right) print(root.data, end=' ') if __name__ == '__main__': root = Node(1) root.left = Node(11) root.right = Node(6) root.left.left = Node(10) root.right.left = Node(5) root.right.right = Node(2) root.right.left.left = Node(13) root.right.left.right = Node(8) postorder_rec(root) # + id="wsHEV5_DEmKh" outputId="08a38f09-9081-42a5-9069-4cefc468ed06" colab={"base_uri": "https://localhost:8080/", "height": 35} #Iterative Implementation from collections import deque class Node: def __init__(self, key): self.data = key self.left = self.right = None def postorderIterative(root): stack = deque() stack.append(root) out = deque() while stack: current = stack.pop() out.append(current.data) if current.left: stack.append(current.left) if current.right: stack.append(current.right) while out: print(out.pop(), end=' ') if __name__ == '__main__': root = Node(1) root.left = Node(11) root.right = Node(6) root.left.left = Node(10) root.right.left = Node(5) root.right.right = Node(2) root.right.left.left = Node(13) root.right.left.right = Node(8) postorderIterative(root) # + id="yB-Ay0AuFLTQ" #Delete without head pointer def deleteNode(curr_node): temp = curr_node.next curr_node.data = temp.data curr_node.next = temp.next # + id="fAdccY2RJE8n" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # + [markdown] colab_type="text" id="2bwH96hViwS7" # ## Learn with us: www.zerotodeeplearning.com # # Copyright © 2021: Zero to Deep Learning ® Catalit LLC. # + colab={} colab_type="code" id="bFidPKNdkVPg" # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# + [markdown] colab_type="text" id="DvoukA2tkGV4" # # Attention with Keras # + [markdown] id="jeAWZNIdxFI3" # This exercise follows: # https://keras.io/examples/nlp/text_classification_with_transformer/ # + id="usTEmKM7kdVl" from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.datasets import imdb import tensorflow as tf from tensorflow.keras.layers import Layer, Embedding, MultiHeadAttention, Dense, GlobalAveragePooling1D, Dropout, Dense, LayerNormalization, Input from tensorflow.keras.models import Sequential, Model import matplotlib.pyplot as plt from tensorflow.keras.utils import plot_model # - # ## Load the IMDB dataset and its word index # + colab={"base_uri": "https://localhost:8080/"} id="QfN9FD8mkqp-" outputId="65b818d8-c761-4dbc-cce4-c20100cefcd5" vocab_size = 20000 maxlen = 200 (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocab_size) X_train = pad_sequences(X_train, maxlen=maxlen) X_test = pad_sequences(X_test, maxlen=maxlen) word_index = imdb.get_word_index() inverted_word_index = dict((i+3, word) for (word, i) in word_index.items()) inverted_word_index[0] = '' inverted_word_index[1] = '' inverted_word_index[2] = '' # - # Let's check the data shape # + colab={"base_uri": "https://localhost:8080/"} id="v5tkLay9ksg6" outputId="88c45658-b4a4-4d27-8b48-6865fc96140b" X_train.shape # + colab={"base_uri": "https://localhost:8080/"} id="SuoO5CgAlLIz" outputId="166ecfcc-5d83-4c7f-f3d1-8ea1ecc4c03e" X_train[0] # - # Let's check a couple of sentences for consistency # + colab={"base_uri": "https://localhost:8080/", "height": 154} id="hrdoWLJhlM2c" outputId="25085b4b-f948-46fc-b2dc-86e6177a5268" " ".join(inverted_word_index[i] for i in X_train[0]) # + colab={"base_uri": "https://localhost:8080/", "height": 154} id="NM03LbYWlj00" outputId="99e7673a-1719-4e83-ca01-e76c3efaaf2f" " ".join(inverted_word_index[i] for i in X_train[1]) # - # ## Token and Position Embedding # + id="V9thqY9anz8g" class TokenAndPositionEmbedding(Layer): def __init__(self, maxlen, vocab_size, embed_dim, **kwargs): super(TokenAndPositionEmbedding, self).__init__(**kwargs) self.token_emb = Embedding(input_dim=vocab_size, output_dim=embed_dim) self.pos_emb = Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, x): maxlen = tf.shape(x)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) x = self.token_emb(x) return x + positions # - # Let's display a few sentences: # + id="Amo9V3vpoRF_" example_tpe = Sequential([TokenAndPositionEmbedding(maxlen, vocab_size, 32)]) # + id="rn1rrjXyrOOX" n_reviews = 5 # + id="bZYDWHBOoW8n" embedded_sentences = example_tpe(X_train[:n_reviews]) # + colab={"base_uri": "https://localhost:8080/", "height": 734} id="2HTHE3BCq_m_" outputId="e9c859d0-d614-4e40-b56f-c6b2b39d56de" plt.figure(figsize=(10, 10)) for i in range(n_reviews): plt.subplot(n_reviews, 1, i+1) plt.imshow(embedded_sentences.numpy()[i].transpose()) plt.xlabel("word in sentence -->") plt.ylabel("<-- embedding dim") plt.title(f"movie review {i}") plt.tight_layout(); # - # ## Transformer Block # + id="iivbLxlSolvb" class TransformerBlock(Layer): def __init__(self, embed_dim, n_att_heads, n_dense_nodes, rate=0.1, **kwargs): super(TransformerBlock, self).__init__(**kwargs) self.att = MultiHeadAttention(num_heads=n_att_heads, key_dim=embed_dim) self.ffn = Sequential([ Dense(n_dense_nodes, activation="relu"), Dense(embed_dim)] ) self.layernorm2 = LayerNormalization(epsilon=1e-6) self.layernorm1 = LayerNormalization(epsilon=1e-6) 
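        # Two dropout layers: one for the attention output and one for the feed-forward output (applied in call() below)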
self.dropout1 = Dropout(rate) self.dropout2 = Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) # - # ## Exercise 1: # Using either the Sequential or the Functional API in Keras build a transformer classification model with the following architecture: # # ``` # TokenAndPositionEmbedding(... # TransformerBlock(... # GlobalAveragePooling1D(... # Dropout(... # Dense(... # Dropout(... # Dense(2, activation="softmax") # ```` # # Once the model is built, print out the summary. # # You will need to decide a few hyperparameters including: # # - Embedding size # - Number of attention heads # - Size of the dense hidden layer inside the transformer block # - Size of the other dense layers # - Dropout rate # + id="o1DMTFYip7Ct" tags=["solution", "empty"] embed_dim = 32 # Embedding size for each token n_att_heads = 2 # Number of attention heads n_dense_nodes = 32 # Hidden layer size in feed forward network inside transformer model = Sequential([ TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim, input_shape=(maxlen,)), TransformerBlock(embed_dim, n_att_heads, n_dense_nodes), GlobalAveragePooling1D(), Dropout(0.1), Dense(20, activation="relu"), Dropout(0.1), Dense(2, activation="softmax") ]) model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="9aHj4LoItnR6" outputId="00167ed1-e662-4180-f7bb-07b78dc8de21" tags=["solution"] model.summary() # - # ## Exercise 2 # # Compile, train, and evaluate the model. Pay attention to the loss function. We defined the output layer as a `Dense(2, activation="softmax")` so you will need to choose the loss accordingly. # + colab={"base_uri": "https://localhost:8080/"} id="1kjWZocLqGdf" outputId="4ab87a62-7854-4303-b78e-1758bd8f0114" tags=["solution", "empty"] model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"]) h = model.fit(X_train, y_train, batch_size=32, epochs=2, validation_split=0.1) # + colab={"base_uri": "https://localhost:8080/"} id="yShtCMbDqQeI" outputId="3a71f85c-4237-4bf5-ad5c-fbc73bdca018" tags=["solution"] model.evaluate(X_test, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import os import shutil path = "C:/Users/User/img/" category = "Skirt" file_list = os.listdir(path + category) print (file_list) i = 1 for name in file_list: src = os.path.join(path + category, name) if(i < 10): dst = "img_0000000" + "{}".format(i) + '.jpg' elif(10 <= i and i < 100): dst = 'img_000000' + "{}".format(i) + '.jpg' dst = os.path.join(path + category, dst) os.rename(src, dst) i += 1 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Neural Networks # # As a practical example, we are going to implement a ConvNet in `PyTorch`. Again, we start by importing all necessary libraries. # # You can observe that the code structure stays the same! 
# + import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # - # ## 1 Define hyperparameters # # We define some of the hyperparameters beforehand. learning_rate = .01 batch_size = 64 test_batch_size = 1000 n_epochs = 5 # ## 2 Define model # # Our CNN will have two convolutional layers, followed by a ReLU unit and by Max Pooling. On top of the convolutional layers, there will be two fully connected layers. The output is then mapped through a Softmax activation function as before. # + class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Conv2d(in_channels=1, out_channels=20, kernel_size=5, stride=1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = CNN() print(model) # - # ## 3 Loss function and optimiser lossfunction = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # ## 4 Load Data # + train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=True) # + # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(10): ax = fig.add_subplot(2, 10/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title(str(labels[idx].item())) # - # ## 5 Train the network # Now we want to compare the accuracy of the CNN with the accuracy achieved using a multilayer perceptron. # # As a difference to before, we choose smaller batches and for epoch in range(n_epochs): model.train() for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = model(data) loss = lossfunction(output, target) loss.backward() optimizer.step() print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch+1, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) # test model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: output = model(data) # sum up batch loss test_loss += lossfunction(output, target).item() pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print(' Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) # # Prediction # + # obtain test images dataiter = iter(test_loader) images, labels = dataiter.next() # load an image output = model(images) # compute predicted class from NN output _, preds = torch.max(output, 1) # prep images for display images = images.numpy() # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())), color=("green" if preds[idx]==labels[idx] else "red")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Getting Data # Browse [data.gov](https://www.data.gov/): # # - [climate](https://www.data.gov/climate/) # - [Data](https://catalog.data.gov/dataset?groups=climate5434&#topic=climate_navigation) # - Third [HTML](https://www.ncdc.noaa.gov/stormevents/) # # (Some) More Sources # # - [Open Data Census](http://us-cities.survey.okfn.org/) # - [Humanitarian Data Exchange](https://data.humdata.org/) # - [curated data set lists](https://github.com/awesomedata/awesome-public-datasets) # - [curated API lists](https://github.com/abhishekbanthia/Public-APIs) # - [Ohio Checkbook](http://ohiotreasurer.gov/Transparency/Ohios-Online-Checkbook) # # # From Python # # Install and import [requests]() import requests tb_data_dict_url = 'https://extranet.who.int/tme/generateCSV.asp?ds=dictionary' resp = requests.get(tb_data_dict_url) print(resp.text[:500]) import pandas as pd import io # + # pd.read_csv? 
# - buffer = io.StringIO(resp.text) tb = pd.read_csv(buffer) tb.head() # Options available for authentication # [In case of zipfiles...](https://techoverflow.net/2018/01/16/downloading-reading-a-zip-file-in-memory-using-python/) import zipfile def download_extract_zip(url): """ Download a ZIP file and extract its contents in memory yields (filename, file-like object) pairs """ response = requests.get(url) with zipfile.ZipFile(io.BytesIO(response.content)) as thezip: for zipinfo in thezip.infolist(): with thezip.open(zipinfo) as thefile: yield zipinfo.filename, thefile url='http://datasets.wri.org/dataset/540dcf46-f287-47ac-985d-269b04bea4c6/resource/27c271ef-63c3-49c5-a06a-f21bb7b96371/download/globalpowerplantdatabasev110' (filename, file_buffer) = next(download_extract_zip(url)) url='http://datasets.wri.org/dataset/540dcf46-f287-47ac-985d-269b04bea4c6/resource/27c271ef-63c3-49c5-a06a-f21bb7b96371/download/globalpowerplantdatabasev110' for (filename, file_buffer) in download_extract_zip(url): print(filename) url='http://datasets.wri.org/dataset/540dcf46-f287-47ac-985d-269b04bea4c6/resource/27c271ef-63c3-49c5-a06a-f21bb7b96371/download/globalpowerplantdatabasev110' for (filename, file_buffer) in download_extract_zip(url): if filename.endswith('.csv'): powerplants = pd.read_csv(file_buffer) powerplants.head() # See also: [Selenium](https://realpython.com/modern-web-automation-with-python-and-selenium/) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from sklearn.datasets import fetch_openml mnist = fetch_openml('mnist_784', version=1) import matplotlib as mpl import matplotlib.pyplot as plt mnist.keys() # + import numpy as np x, y = mnist["data"], mnist["target"] y = y.astype(np.uint8) # !!! Important to turn the strings into int's, otherwise the classfier will error. x_train, x_test, y_train, y_test = x[:60000], x[60000:], y[:60000], y[6000:] # - y_train_5 = (y_train == 5) y_test_5 = (y_test == 5) # + from sklearn.linear_model import SGDClassifier sgd_clf = SGDClassifier(random_state=42) sgd_clf.fit(x_train, y_train_5) # + not_five = np.array(x_test.iloc[0]) five = np.array(x_test.iloc[15]) print(sgd_clf.predict(not_five.reshape(1, -1))) plt.imshow(not_five.reshape(28,28),cmap="binary") # - print(sgd_clf.predict(five.reshape(1, -1))) plt.imshow(five.reshape(28,28),cmap="binary") from sklearn.model_selection import cross_val_score cross_val_score(sgd_clf, x_train, y_train_5, cv=3, scoring="accuracy") # The above scoring is a bad way to measure performance here, as most of the numbers are not 5. And such a estimator that alway's estimates "not 5", will have a great performance, as this is true most of the time. Their are 2 alternatives that are discussed in the book: # # 1. Confunsion matrix: # - precision = TP/(TP + FP) # - recall = TP/(TP+FN) aka sensitivity or true possive rate # This leads to the PR-curve which you should use when when positive class is rare. # 2. ROC Curve: (the way I already knew) # Plots the true positive rate versus the true negative rate # Use this when the negative class is rare. 
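# Before computing these metrics for the SGD classifier below, here is a minimal,
# self-contained sketch of the two formulas above on a handful of hypothetical
# labels (the numbers are illustrative only, not taken from MNIST):

# +
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true_demo = [0, 0, 0, 0, 1, 1, 1, 0, 0, 1]   # hypothetical ground truth
y_pred_demo = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1]   # hypothetical predictions

# confusion_matrix returns counts in the order TN, FP, FN, TP for binary labels
tn, fp, fn, tp = confusion_matrix(y_true_demo, y_pred_demo).ravel()
print("precision = TP/(TP+FP):", tp / (tp + fp))
print("recall    = TP/(TP+FN):", tp / (tp + fn))

# The scikit-learn helpers should agree with the hand calculation
print(precision_score(y_true_demo, y_pred_demo), recall_score(y_true_demo, y_pred_demo))
# -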
from sklearn.model_selection import cross_val_predict y_train_pred = cross_val_predict(sgd_clf, x_train, y_train_5, cv=3) from sklearn.metrics import confusion_matrix confusion_matrix(y_train_5, y_train_pred) # It's possible to normalize the values (not mentioned in the book) confusion_matrix(y_train_5, y_train_pred, normalize='true') # From the confusion matrix we can calculate the precision and recall. Or we can get them directly from the api: from sklearn.metrics import precision_score, recall_score print("precision score:"+str(precision_score(y_train_5, y_train_pred))) print("recall_score: " + str(recall_score(y_train_5, y_train_pred))) y_scores = cross_val_predict(sgd_clf, x_train, y_train_5, cv=3, method="decision_function") from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_train_5, y_scores) plt.plot(fpr, tpr, linewidth=2) plt.xlabel("false positive rate") plt.ylabel("true positive rate") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Use Cases for Linear and Logistic Regression with Scikit-Learn # # * **Continuous Predictors** # * Linear Regression # * Tree Regression # * **Binary Classifiers** # * Logistic Regression # * Naive Bayes (if Time) # # We'll use my good friend Scikit-Learn: http://scikit-learn.org/stable/ # ## Continuous Variables & Linear Regression # # Outcomes can take on a real-value in a range between $-\inf$ and $\inf$. It's still good if your predictors only take on a range between certain values. # # Formula in the form $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + ... + \beta_n X_n + \epsilon$, where $\epsilon$ is the error term. Includes things we forgot to include, things we can't measure, measurement error, etc. # # Each $\beta$ is the value we expect per a one-unit change in the corresponding $X$. So if $\beta_3 = 4.3$, we expect $Y$ to go up $4.3$ units whenever $X_3$ increase by one-unit. # # Fit in the new data into the formula, multiply by the $\beta$'s and that's your prediction # # We use the form $\hat{Y} = \hat{\beta_0} + \hat{\beta_1} X_1 + \hat{\beta_2} X_2 + ... + \hat{\beta_n} X_n$ where the 'hats' show those are the calculated values, nor the error. # ## Calculating the X's # # We use a series of matrix operations to calculate the values of the $\beta$'s. # # $$ X = # \begin{matrix} # 1 & X_{1,1} & X_{1,2} & X_{1,3} &...& X_{1,n}\\ # 1 & X_{2,1} & X_{2,2} & X_{2,3} &...& X_{2,n}\\ # \vdots & \vdots & \vdots & \vdots& \ddots & \vdots \\ # 1 & X_{m,1} & X_{m,2} & X_{m,3} &...& X_{m,n} # \end{matrix}$$ # # Where the $X$'s are our data points. # # We do some matrix operations: $(X^TX)^{-1}X^T y = \hat{\underline{\beta}}$ to generate our coefficients. Then we use the formula $\hat{Y} = \hat{\underline{\beta}}X$ to get our predictions. # # We don't actually do this. Matrix operations are insanely expensive for computers to perform. Matrix multiplication is $O(n^{2.373})$ under the fastest matrix multiplication algorithms. So computers use a gradient descent algorithm that tries to minimize the squared distance between the predicted outcomes and the actual outcomes. # # We define the residuals, $r$, as $r = \hat{Y} - Y$. We want those residuals to look like random noise, and ideally normally distributed with mean 0. # # # Residuals with a pattern suggest we missed something # # # # ## Let's Look at Some Data # # We'll use the Boston Housing Data for this. 
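# As a small aside, here is a self-contained sketch of the normal-equation formula above,
# $(X^TX)^{-1}X^T y = \hat{\underline{\beta}}$, on synthetic data (illustrative only, not the
# Boston data loaded below):

# +
import numpy as np

rng = np.random.RandomState(0)
x1 = rng.uniform(0, 10, size=50)
y_syn = 2.0 + 3.0 * x1 + rng.normal(0, 0.5, size=50)   # true intercept 2, true slope 3

X_design = np.column_stack([np.ones_like(x1), x1])      # prepend the column of 1's
beta_hat = np.linalg.inv(X_design.T.dot(X_design)).dot(X_design.T).dot(y_syn)
print("estimated [intercept, slope]: " + str(beta_hat))  # should land close to [2, 3]

residuals = X_design.dot(beta_hat) - y_syn               # r = Y_hat - Y, as defined above
print("mean residual (should be ~0): " + str(residuals.mean()))
# -

# In practice `np.linalg.lstsq` (or the gradient-descent approach mentioned above) is
# preferred over forming the inverse explicitly.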
# + import numpy as np import pandas as pd boston = pd.read_csv("/Users/evancolvin/dropbox/projects/PyData/Boston.csv", sep = '\t') cancer = pd.read_csv("/Users/evancolvin/dropbox/projects/PyData/cancer.csv") import matplotlib.pyplot as plt # %matplotlib inline # - boston.head(20) # # # * **crim:** Per Capita Crime per Town # * **zn:** proportion of residential land zoned for lots over 25,000 sq.ft # * **indus:** proportion of non-retail business acres per town # * **chas:** Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). # * **nox:** nitrogen oxides concentration (parts per 10 million) # * **rm:** average number of rooms per dwelling # * **age:** proportion of owner-occupied units built prior to 1940 # * **dis:** weighted mean of distances to five Boston employment centres # * **rad:** index of accessibility to radial highways # * **tax:** full-value property-tax rate per $10,000 # * **ptratio:** pupil-teacher ratio by town # * **black:** 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town # * **lstat:** lower status of the population (percent) # * **medv:** median value of owner-occupied homes in 1000s. <--What we Want to Predict # boston.describe() cancer.describe() # ## Splitting the data sets for cross-validation # # Simple training and test sets with 70% of the day and 30% of the data respectively. from sklearn.cross_validation import train_test_split as split boston_train, boston_test = split(boston, test_size = .30) cancer_train, cancer_test = split(cancer, test_size = .30) # ## Linear Regression features = ['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'black', 'lstat'] target = 'medv' predictor_data = boston_train[features].values median_price = boston_train[target].values # + from sklearn.linear_model import LinearRegression linear_regression = LinearRegression() linear_regression.fit(predictor_data, median_price) y_hat = linear_regression.predict(predictor_data) resids = y_hat - median_price plt.figure(figsize = (10.5, 6)) plt.scatter(range(len(median_price)), resids) print "R2 " + str(linear_regression.score(predictor_data, median_price)) + "\n\n" coefficients = linear_regression.coef_.tolist() print "intercept: " + str(linear_regression.intercept_) for predictor in range(len(features)): print features[predictor] + ": " + str(coefficients[predictor]) # + testing_data = boston_test[features].values testing_targets = boston_test[target].values new_predictions = linear_regression.predict(predictor_data) print "The first predictions for prices are: " for i in range(10): print "\t" + str(new_predictions[i]) #print linear_regression.predict(testing_data)[:10] R2 = linear_regression.score(testing_data, testing_targets) print "\nAnd the R-Squared value for the unseen data is: " + str(R2) # - # ## Regression Trees # # "Prediction trees use the tree to represent the recursive partition. Each of the terminal nodes, or leaves, of the tree represents a cell of the partition, and has attached to it a simple model which applies in that cell only. 
A point x belongs to a leaf if x falls in the corresponding cell of the partition."-- # # * Data "splits" to make the predictions # * Greedy algorithm: makes best choice at each stage; doesn't look far ahead # * May miss a really good split down the line # * Bagging: Combining similar trees and averaging outcome # * Random Forest: Combining vastly different trees and average outcome # * Does this by hiding data from the trees at different time so optimum's look different # * : Can't Overfit # # from sklearn.tree import DecisionTreeRegressor tree = DecisionTreeRegressor(max_depth = 10) tree.fit(predictor_data, median_price) from sklearn.metrics import r2_score print "The R2 for the Tree Regression is " + str(r2_score(median_price, tree.predict(predictor_data))) new_preds = tree.predict(testing_data) print "The R2 for the new data is " + str(r2_score(testing_targets, new_preds)) # ## Classification & Logistic Regression # # ### Logistic Regression # * Feed linear regression into the sigmoid function $S(t) = \frac{1}{1 + e^{-t}}$ where $t$ is the linear regression # * Used for binary classification # * $ \le .5 \Rightarrow$ class 0 # * $ > .5 \Rightarrow$ class 1 # * Need new metrics since $R^2$ doesn't make sense here # * Log-Loss if you predict probabilities (big for Kaggle) # * Accuracy score that looks at the proportion of the correctly classified # # cancer.head(20) cancer.describe() features = ['Clump Thickness', 'Uniformity of Cell Shape', 'Marginal Adhesion', 'Single Epithelial Cell Size', 'Brand Chromatin', 'Normal Nucleoli', 'Mitosis'] target = 'Class' predictor_data = cancer_train[features].values cell_class = cancer_train[target].values from sklearn.linear_model import LogisticRegression logistic_regression = LogisticRegression() logistic_regression.fit(predictor_data, cell_class) from sklearn.metrics import accuracy_score print "The model predicts " + str(accuracy_score(cell_class, logistic_regression.predict(predictor_data))*100) + "% of the cells accurately" validation_data = cancer_test[features].values validation_target = cancer_test[target].values new_preds = logistic_regression.predict(validation_data) print "Also predicts " + str(accuracy_score(validation_target, new_preds)*100) + "% of new observations" # ## Naive Bayes # # New Belief = Old Belief $\times$ Likelihood of Evidence / Evidence $\leftarrow$ Bayes' Rule # # * "Naive" Because the algorithm assumes all predictors are conditionally independent of other predictors, which is dumb # * Uses the top half of Bayes rule and compares the two classes; Selects the larger value for classes # * Probabilities tend towards 0 and 1 # * Principle components vote more than once, skewing the results # * Used for spam classifiers # * Converges into Logistic Regression as the predictors become more and more independent # * Gaussian Naive Bayes assumes underlying probabilities are normally distributed # * You can start with "priors" and weight for data that's skewed towards one classifier or the other # + from sklearn.naive_bayes import GaussianNB gaussian_NB = GaussianNB() old_preds = gaussian_NB.fit(cancer_train[features].values, cell_class) new_preds = gaussian_NB.predict(validation_data) print "Predicts " + str(accuracy_score(validation_target, new_preds) * 100) + "% of unseen observations" #print accuracy_score(cell_class, old_preds) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.11 64-bit 
(''nlp'': conda)' # name: python3 # --- # + a = [[10, 20], [30, 40], [50, 60]] for i in range(len(a)): # 세로 크기 for j in range(len(a[i])): # 가로 크기 a[i] [값, 값]의 len은 2 print(a[i][j], end=' ') print() # + a = [[10, 20], [30, 40], [50, 60]] for i in a: for j in i: print(j) # 인덱스를 1 증가시킴 # + import pandas as pd data = pd.read_csv('한국경제 크롤링.csv', encoding = 'cp949') # - text = data.body # text # + # 품사 태깅 from konlpy.tag import Okt tagged_words = [] okt=Okt() for line in text: tagged_words.append(okt.pos(line)) print(tagged_words[0]) # from konlpy.tag import Kkma # kkma=Kkma() # for line in text: # print(kkma.pos(line)) # + # 빈도수세기 word frequency counter from collections import Counter count = [] for k in tokens: count.append(Counter(k)) print(count[:2], end=" ") # word_list = count.most_common(10) # for v in word_list: # print(v) # + # foreign, josa, punctuation 제거 pos = ["Foreign", "Josa", "Punctuation", "URL"] clean_words = [] for lt in tagged_words: for tags in lt: if tags[1] not in pos: clean_words.append(tags[0]) # clean_words print(clean_words[:20]) # + punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&' punct_mapping = {"‘": "'", "₹": "e", "´": "'", "°": "", "€": "e", "™": "tm", "√": " sqrt ", "×": "x", "²": "2", "—": "-", "–": "-", "’": "'", "_": "-", "`": "'", '“': '"', '”': '"', '“': '"', "£": "e", '∞': 'infinity', 'θ': 'theta', '÷': '/', 'α': 'alpha', '•': '.', 'à': 'a', '−': '-', 'β': 'beta', '∅': '', '³': '3', 'π': 'pi', } def clean_punc(text, punct, mapping): for p in mapping: text = text.replace(p, mapping[p]) for p in punct: text = text.replace(p, f' {p} ') specials = {'\u200b': ' ', '…': ' ... ', '\ufeff': '', 'करना': '', 'है': ''} for s in specials: text = text.replace(s, specials[s]) return text.strip() no_punc = [] for line in text: no_punc.append(clean_punc(line, punct, punct_mapping)) no_punc[0] # - # ! git clone https://github.com/SOMJANG/Mecab-ko-for-Google-Colab.git # ! bash Mecab-ko-for-Google-Colab/install_mecab-ko_on_colab190912.sh # + from eunjeon import Mecab tokenizer= Mecab() tokens = [] for i in no_punc: tokens.append(tokenizer.morphs(i)) print(tokens[0], end=' ') # - # + # tag::ds_create[] import ray # Create a dataset containing integers in the range [0, 10000). ds = ray.data.range(10000) # Basic operations: show the size of the dataset, get a few samples, print the schema. print(ds.count()) # -> 10000 print(ds.take(5)) # -> [0, 1, 2, 3, 4] print(ds.schema()) # -> # end::ds_create[] # - # tag::ds_read_write[] # Save the dataset to a local file and load it back. ray.data.range(10000).write_csv("local_dir") ds = ray.data.read_csv("local_dir") print(ds.count()) # end::ds_read_write[] # + # tag::ds_transform[] # Basic transformations: join two datasets, filter, and sort. ds1 = ray.data.range(10000) ds2 = ray.data.range(10000) ds3 = ds1.union(ds2) print(ds3.count()) # -> 20000 # Filter the combined dataset to only the even elements. ds3 = ds3.filter(lambda x: x % 2 == 0) print(ds3.count()) # -> 10000 print(ds3.take(5)) # -> [0, 2, 4, 6, 8] # Sort the filtered dataset. 
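# ds3 is the union of two identical ranges filtered to even numbers, so each value appears twice in the sorted output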
ds3 = ds3.sort() print(ds3.take(5)) # -> [0, 0, 2, 2, 4] # end::ds_transform[] # + # tag::ds_repartition[] ds1 = ray.data.range(10000) print(ds1.num_blocks()) # -> 200 ds2 = ray.data.range(10000) print(ds2.num_blocks()) # -> 200 ds3 = ds1.union(ds2) print(ds3.num_blocks()) # -> 400 print(ds3.repartition(200).num_blocks()) # -> 200 # end::ds_repartition[] # - # tag::ds_schema_1[] ds = ray.data.from_items([{"id": "abc", "value": 1}, {"id": "def", "value": 2}]) print(ds.schema()) # -> id: string, value: int64 # end::ds_schema_1[] # tag::ds_schema_2[] pandas_df = ds.to_pandas() # pandas_df will inherit the schema from our Dataset. # end::ds_schema_2[] # tag::ds_compute_1[] ds = ray.data.range(10000).map(lambda x: x ** 2) ds.take(5) # -> [0, 1, 4, 9, 16] # end::ds_compute_1[] # + # tag::ds_compute_2[] import numpy as np ds = ray.data.range(10000).map_batches(lambda batch: np.square(batch).tolist()) ds.take(5) # -> [0, 1, 4, 9, 16] # end::ds_compute_2[] # + # tag::ds_compute_3[] def load_model(): # Return a dummy model just for this example. # In reality, this would likely load some model weights onto a GPU. class DummyModel: def __call__(self, batch): return batch return DummyModel() class MLModel: def __init__(self): # load_model() will only run once per actor that's started. self._model = load_model() def __call__(self, batch): return self._model(batch) ds.map_batches(MLModel, compute="actors") # end::ds_compute_3[] # TODO how can we make this more concrete? cpu_intensive_preprocessing = lambda batch: batch gpu_intensive_inference = lambda batch: batch # - # tag::ds_pipeline_1[] ds = ray.data.read_parquet("s3://my_bucket/input_data")\ .map(cpu_intensive_preprocessing)\ .map_batches(gpu_intensive_inference, compute="actors", num_gpus=1)\ .repartition(10)\ .write_parquet("s3://my_bucket/output_predictions") # end::ds_pipeline_1[] # tag::ds_pipeline_2[] ds = ray.data.read_parquet("s3://my_bucket/input_data")\ .window(blocks_per_window=5)\ .map(cpu_intensive_preprocessing)\ .map_batches(gpu_intensive_inference, compute="actors", num_gpus=1)\ .repartition(10)\ .write_parquet("s3://my_bucket/output_predictions") # end::ds_pipeline_2[] # + # tag::parallel_sgd_1[] from sklearn import datasets from sklearn.linear_model import SGDClassifier from sklearn.model_selection import train_test_split @ray.remote class TrainingWorker: def __init__(self, alpha: float): self._model = SGDClassifier(alpha=alpha) def train(self, train_shard: ray.data.Dataset): for i, epoch in enumerate(train_shard.iter_epochs()): X, Y = zip(*list(epoch.iter_rows())) self._model.partial_fit(X, Y, classes=[0, 1]) return self._model def test(self, X_test: np.ndarray, Y_test: np.ndarray): return self._model.score(X_test, Y_test) # end::parallel_sgd_1[] # + # tag::parallel_sgd_2[] ALPHA_VALS = [0.00008, 0.00009, 0.0001, 0.00011, 0.00012] print(f"Starting {len(ALPHA_VALS)} training workers.") workers = [TrainingWorker.remote(alpha) for alpha in ALPHA_VALS] # end::parallel_sgd_2[] # + # tag::parallel_sgd_3[] # Generate training & validation data for a classification problem. X_train, X_test, Y_train, Y_test = train_test_split(*datasets.make_classification()) # Create a dataset pipeline out of the training data. The data will be randomly # shuffled and split across the workers for 10 iterations. 
train_ds = ray.data.from_items(list(zip(X_train, Y_train))) shards = train_ds.repeat(10)\ .random_shuffle_each_window()\ .split(len(workers), locality_hints=workers) # end::parallel_sgd_3[] # - # tag::parallel_sgd_4[] # Wait for training to complete on all of the workers. ray.get([worker.train.remote(shard) for worker, shard in zip(workers, shards)]) # end::parallel_sgd_4[] # tag::parallel_sgd_5[] # Get validation results from each worker. print(ray.get([worker.test.remote(X_test, Y_test) for worker in workers])) # end::parallel_sgd_5[] # + # tag::dask_on_ray_1[] import ray from ray.util.dask import enable_dask_on_ray ray.init() # Start or connect to Ray. enable_dask_on_ray() # Enable the Ray scheduler backend for Dask. # end::dask_on_ray_1[] # + # tag::dask_on_ray_2[] import dask df = dask.datasets.timeseries() df = df[df.y > 0].groupby("name").x.std() df.compute() # Trigger the task graph to be evaluated. # end::dask_on_ray_2[] # + # tag::dask_on_ray_3[] import ray ds = ray.data.range(10000) # Convert the Dataset to a Dask DataFrame. df = ds.to_dask() print(df.std().compute()) # -> 2886.89568 # Convert the Dask DataFrame back to a Dataset. ds = ray.data.from_dask(df) print(ds.std()) # -> 2886.89568 # end::dask_on_ray_3[] # + # tag::ml_pipeline_preprocess[] import ray from ray.util.dask import enable_dask_on_ray import dask.dataframe as dd LABEL_COLUMN = "is_big_tip" enable_dask_on_ray() def load_dataset(path: str, *, include_label=True): # Load the data and drop unused columns. df = dd.read_csv(path, assume_missing=True, usecols=["tpep_pickup_datetime", "tpep_dropoff_datetime", "passenger_count", "trip_distance", "fare_amount", "tip_amount"]) # Basic cleaning, drop nulls and outliers. df = df.dropna() df = df[(df["passenger_count"] <= 4) & (df["trip_distance"] < 100) & (df["fare_amount"] < 1000)] # Convert datetime strings to datetime objects. df["tpep_pickup_datetime"] = dd.to_datetime(df["tpep_pickup_datetime"]) df["tpep_dropoff_datetime"] = dd.to_datetime(df["tpep_dropoff_datetime"]) # Add three new features: trip duration, hour the trip started, and day of the week. df["trip_duration"] = (df["tpep_dropoff_datetime"] - df["tpep_pickup_datetime"]).dt.seconds df = df[df["trip_duration"] < 4 * 60 * 60] # 4 hours. df["hour"] = df["tpep_pickup_datetime"].dt.hour df["day_of_week"] = df["tpep_pickup_datetime"].dt.weekday if include_label: # Calculate label column: if tip was more or less than 20% of the fare. df[LABEL_COLUMN] = df["tip_amount"] > 0.2 * df["fare_amount"] # Drop unused columns. df = df.drop( columns=["tpep_pickup_datetime", "tpep_dropoff_datetime", "tip_amount"] ) return ray.data.from_dask(df) # end::ml_pipeline_preprocess[] # + # tag::ml_pipeline_model[] import torch import torch.nn as nn import torch.nn.functional as F NUM_FEATURES = 6 class FarePredictor(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(NUM_FEATURES, 256) self.fc2 = nn.Linear(256, 16) self.fc3 = nn.Linear(16, 1) self.bn1 = nn.BatchNorm1d(256) self.bn2 = nn.BatchNorm1d(16) def forward(self, *x): x = torch.cat(x, dim=1) x = F.relu(self.fc1(x)) x = self.bn1(x) x = F.relu(self.fc2(x)) x = self.bn2(x) x = F.sigmoid(self.fc3(x)) return x # end::ml_pipeline_model[] # + # tag::ml_pipeline_train_1[] import ray.train as train def train_epoch(iterable_dataset, model, loss_fn, optimizer, device): model.train() for X, y in iterable_dataset: X = X.to(device) y = y.to(device) # Compute prediction error. pred = torch.round(model(X.float())) loss = loss_fn(pred, y) # Backpropagation. 
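        # Clear accumulated gradients, backpropagate the loss, then apply the optimizer update step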
optimizer.zero_grad() loss.backward() optimizer.step() # end::ml_pipeline_train_1[] # - # tag::ml_pipeline_train_2[] def validate_epoch(iterable_dataset, model, loss_fn, device): num_batches = 0 model.eval() loss = 0 with torch.no_grad(): for X, y in iterable_dataset: X = X.to(device) y = y.to(device) num_batches += 1 pred = torch.round(model(X.float())) loss += loss_fn(pred, y).item() loss /= num_batches result = {"loss": loss} return result # end::ml_pipeline_train_2[] # tag::ml_pipeline_train_3[] def train_func(config): batch_size = config.get("batch_size", 32) lr = config.get("lr", 1e-2) epochs = config.get("epochs", 3) train_dataset_pipeline_shard = train.get_dataset_shard("train") validation_dataset_pipeline_shard = train.get_dataset_shard("validation") model = train.torch.prepare_model(FarePredictor()) loss_fn = nn.SmoothL1Loss() optimizer = torch.optim.Adam(model.parameters(), lr=lr) train_dataset_iterator = train_dataset_pipeline_shard.iter_epochs() validation_dataset_iterator = \ validation_dataset_pipeline_shard.iter_epochs() for epoch in range(epochs): train_dataset = next(train_dataset_iterator) validation_dataset = next(validation_dataset_iterator) train_torch_dataset = train_dataset.to_torch( label_column=LABEL_COLUMN, batch_size=batch_size, ) validation_torch_dataset = validation_dataset.to_torch( label_column=LABEL_COLUMN, batch_size=batch_size) device = train.torch.get_device() train_epoch(train_torch_dataset, model, loss_fn, optimizer, device) result = validate_epoch(validation_torch_dataset, model, loss_fn, device) train.report(**result) train.save_checkpoint(epoch=epoch, model_weights=model.module.state_dict()) # end::ml_pipeline_train_3[] # tag::ml_pipeline_train_4[] def get_training_datasets(*, test_pct=0.8): ds = load_dataset("nyc_tlc_data/yellow_tripdata_2020-01.csv") ds, _ = ds.split_at_indices([int(0.01 * ds.count())]) train_ds, test_ds = ds.split_at_indices([int(test_pct * ds.count())]) train_ds_pipeline = train_ds.repeat().random_shuffle_each_window() test_ds_pipeline = test_ds.repeat() return {"train": train_ds_pipeline, "validation": test_ds_pipeline} # end::ml_pipeline_train_4[] # tag::ml_pipeline_train_5[] trainer = train.Trainer("torch", num_workers=4) config = {"lr": 1e-2, "epochs": 3, "batch_size": 64} trainer.start() trainer.run(train_func, config, dataset=get_training_datasets()) model_weights = trainer.latest_checkpoint.get("model_weights") trainer.shutdown() # end::ml_pipeline_train_5[] # + # tag::ml_pipeline_inference[] class InferenceWrapper: def __init__(self): self._model = FarePredictor() self._model.load_state_dict(model_weights) self._model.eval() def __call__(self, df): tensor = torch.as_tensor(df.to_numpy(), dtype=torch.float32) with torch.no_grad(): predictions = torch.round(self._model(tensor)) df[LABEL_COLUMN] = predictions.numpy() return df ds = load_dataset("nyc_tlc_data/yellow_tripdata_2021-01.csv", include_label=False) ds.map_batches(InferenceWrapper, compute="actors").write_csv("output") # end::ml_pipeline_inference[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Merge-and-CountSplitInv # __Input:__ sorted arrays _C_ and _D_ (length _n_/2 each) # __Output:__ sorted array _B_ (length _n_) and the number of split inversions # __Simplifying assumption:__ _n_ is even # *** # $i:=1, j:=1, splitInv:=0$ # __for__ $k := 1$ to _n_ __do__ # __if__ $C[i] < D[j]$ 
__then__ # $B[k]:=C[i], i:=i+1$ # __else__ $//D[j]', '', ''] for name in names: name_split = name.split() name_result = (name_split[1]+' '+name_split[0]). title() print("Processed name: "+ name_result) # Similar axample but with larger number of names and with some names having the same given or last name. names = ['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', ''] for name in names: name_split = name.split() name_result = (name_split[1]+' '+name_split[0]). title() print("Processed name: "+ name_result) # ## Example 3 import pandas names = pandas.read_excel('./names.xlsx') names names['Last name'] = 'DR '+ names['Last name'].str.upper() names['First name'] = names['First name'].str.capitalize() names names = names[['First name', 'Last name']] names # Final code put together # + import pandas names = pandas.read_excel('./names.xlsx') names['Last name'] = 'DR '+ names['Last name'].str.upper() names['First name'] = names['First name'].str.capitalize() names = names[['First name', 'Last name']] writer = pandas.ExcelWriter('names_processed.xlsx') names.to_excel(writer, 'Sheet1') writer.close() # - # ## Example 4 f = open('./names.txt') firstnames = [] lastnames = [] for line in f.readlines(): if line.startswith('LN'): lastnames.append(line.split()[1]) elif line.startswith('FN'): firstnames.append(line.split()[1]) else: pass f.close() firstnames lastnames # ## Example 5 import numpy numpy.set_printoptions(precision=3 , suppress= True) # this is just to make the output look better numbers = numpy.loadtxt('./numpers.txt') numbers reshaped = numbers.reshape(3,3) reshaped reshaped[:,2].mean() # Final code put together import numpy numbers = numpy.loadtxt('./numpers.txt') reshaped = numbers.reshape(3,3) reshaped[:,2].mean() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Breaking daily ranges import pandas as pd from datetime import timedelta, date start_date = date(year=2021, month=9, day=1) end_date = date(year=2021, month=11, day=1) d1=pd.date_range(start_date, end_date, freq="W-FRI") d1 d2=pd.date_range(start_date, end_date, freq="W-MON") d2 ranges=[] ranges.append((pd.Timestamp(start_date), d1[0])) for s,e in zip(d2[:-1], d1[1:]): ranges.append((s,e)) ranges.append((d2[-1], pd.Timestamp(end_date))) ranges # # Breaking hourly ranges # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.6 64-bit # name: python3 # --- #Two-Period Crusonia Model import numpy as np import math import matplotlib.pyplot as plt import pandas as pd # + #Friday's Uility Function; The Objective Function def u(c1, c2): return math.log(c1) + ß*math.log(c2) #The Crusonia Plant's Budget Constraint; The Constraint def c(c1, c2): return c1 + c2/π #Parameters: ß = 0.97 π = 1.05 c0 = 100 # + #System of Equations for the Optimal Bundle: #I derived these equations from the revelent Euler-Lagrange equations. A = np.array([[1+ß,0], [0,(1+ß)/(ß*π)]]) B = np.array([c0,c0]) C = np.linalg.solve(A, B) #Optimal Choice in Periods 1 and 2: c1 = C[0] c2 = C[1] #Printing the Results: print('Initial Mass of the Crusonia Plant:', c(c1, c2)) #This line makes sure that my results are consistent. 
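#At the optimum the budget constraint should return the plant's full initial mass, c0 = 100.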
print("Friday makes the best use of his Crusonia plant when he consumes", c1, "units of Crusonia today and", c2, "units of Crusonia tomorrow.") print("Friday's Total Utility:", u(c1, c2)) # + #Computing Friday's income income = c0/(1+1/π) print("Friday's income:",income) print("The product of Friday's discount factor and his Crusonia plant's productivity:",ß*π) print("The ratio of Friday's consumption tomorrow to his consumption today:", c2/c1) if income > c1: print("Friday saves.") if income < c1: print("Friday dissaves.") if income == c1: print("Friday consumes his income.") print("The ratio of Friday's income to his consumption today:", income/c1) print("The ratio of Friday's income to his consumption tomorrow:", income/c2) #Not quite because I don't have his income tomorrow! # + #Graph of Friday's Consumption Across Time: x = np.array(["Period 1", "Period 2"]) y = np.array([c1, c2]) twoperioddf = pd.DataFrame(data=np.column_stack((x,y)),columns=['Period',"Consumption"]) print(twoperioddf.to_string(index=False)) plt.bar(x,y, width=1.25) font1 = {'family':'MS Reference Sans Serif', 'color':'darkslategray','size':14} font2 = {'family':'MS Reference Specialty', 'color':'black','size':18} plt.xlabel("Period of Time", fontdict = font1) plt.ylabel("Friday's Consumption", fontdict = font1) plt.title("Friday's Optimal Consumption Over Time", fontdict = font2) plt.show() # + #Plotting the Budget Constraint and the Indifference Curve of the Optimal Level of Utility: #Plotting the Crusonia Plant's Budget Constraint: def B(c1): return π*(c0 - c1) #I define Friday's consumption tomorrow as a function of his consumption today def plot_constraint(ax, c0, π): c1_bc = np.array([0, c0]) c2 = B(c1_bc) ax.plot(c1_bc, c2, color='#2f4b7c') ax.fill_between(c1_bc, 0, c2, color='#2f4b7c', alpha=0.2) ax.set_xlabel("Consumption Today", fontdict = font1) ax.set_ylabel("Consumption Tomorrow", fontdict = font1) return ax fig, ax = plt.subplots() plot_constraint(ax, c0, 1/π) plt.axis('square') plt.title("Friday's Economic Problem:", size=12, fontdict = font2) plt.plot(c1, c2, 'o', color='#a05195') plt.subplot() xlist = np.linspace(0, c0+100, 100) # Create 1-D arrays for x,y dimensions ylist = np.linspace(0, c0+100, 100) X,Y = np.meshgrid(xlist, ylist) # Create 2-D grid xlist,ylist values F = np.log(X) + ß*np.log(Y) - u(c1, c2) #Implicit equation for the indifference curve plt.contour(X, Y, F, [0], colors = '#a05195', linestyles = 'solid') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 118} colab_type="code" id="aty8eqFppk0G" outputId="31db7c7b-3723-437e-e4f3-30afb9b91396" from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="bp9fpo1Ikfwn" outputId="9e5281a9-9f36-47eb-e897-3bee5ff96774" # If you are running on Google Colab, uncomment below to install the necessary dependencies # before beginning the exercise. print("Setting up colab environment") # !pip uninstall -y -q pyarrow # !pip install -q https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.8.0.dev5-cp36-cp36m-manylinux1_x86_64.whl # !pip install -q ray[debug] # A hack to force the runtime to restart, needed to include the above dependencies. print("Done installing! 
Restarting via forced crash (this is not an issue).") import os os._exit(0) # + colab={} colab_type="code" id="YHb-I_Vpkiu9" # If you are running on Google Colab, please install TensorFlow 2.0 by uncommenting below.. try: # # %tensorflow_version only exists in Colab. # %tensorflow_version 2.x except Exception: pass # + colab={"base_uri": "https://localhost:8080/", "height": 895} colab_type="code" id="aoxLAjPXnbGM" outputId="2ecd4564-dafc-4ad8-e368-cd7915bd6939" # !pip3 install ray xlrd pandas tensorboardX # + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" id="rgMRfSCAeesr" outputId="8596c1a9-f5ed-42a3-ddde-a175e23dd373" print("Setting up colab environment") # !pip3 uninstall -y -q pyarrow # !pip3 install -q ray[debug] # + colab={} colab_type="code" id="KU1Eb-fkhb1l" # # !pip3 install tensorflow keras scikit-learn pandas # + colab={} colab_type="code" id="ApMfN5xMnJSU" import numpy as np np.random.seed(0) import tensorflow as tf try: tf.get_logger().setLevel('INFO') except Exception as exc: print(exc) import warnings warnings.simplefilter("ignore") from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, LSTM,Dropout from tensorflow.keras.optimizers import SGD, Adam from tensorflow.keras.callbacks import ModelCheckpoint from sklearn.model_selection import train_test_split import ray from ray import tune from ray.tune.examples.utils import get_iris_data import inspect import pandas as pd import matplotlib.pyplot as plt plt.style.use('ggplot') # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 570} colab_type="code" id="JE3O0Kbthb1s" outputId="a1081327-4325-4d6f-880f-660bbd1999eb" # !pip3 install ray[tune] ray[rllib] matplotlib # + colab={} colab_type="code" id="4HRWGKtZpu3i" import os import pandas as pd path = "/content/drive/My Drive/excels/" files = os.listdir(path) files_xls = [f for f in files if f[-3:] == 'xls'] df = pd.DataFrame() for f in files_xls: fp=path+f data = pd.read_excel(fp, 'Sheet') data.columns = ['Time', 'output','outputPitch','outputVolm','Player1BPM','Player1Pitch','Player1Volm', 'Player2BPM','Player2Pitch','Player2Volm', 'Player3BPM','Player3Pitch','Player3Volm', 'Player4BPM','Player4Pitch','Player4Volm'] df = df.append(data) df.columns = ['Time', 'output','outputPitch','outputVolm','Player1BPM','Player1Pitch','Player1Volm', 'Player2BPM','Player2Pitch','Player2Volm', 'Player3BPM','Player3Pitch','Player3Volm', 'Player4BPM','Player4Pitch','Player4Volm'] y = df[['output']] X = df[['Time','Player1BPM','Player1Pitch','Player1Volm', 'Player2BPM','Player2Pitch','Player2Volm', 'Player3BPM','Player3Pitch','Player3Volm', 'Player4BPM','Player4Pitch','Player4Volm']] # + colab={} colab_type="code" id="qjYWgmARnJSY" # df = pd.read_excel('/content/drive/My Drive/excels/A Classic Education - NightOwl.stem.xls', 'Sheet') # df.columns = ['Time', 'output','outputPitch','outputVolm','Player1BPM','Player1Pitch','Player1Volm', 'Player2BPM','Player2Pitch','Player2Volm', 'Player3BPM','Player3Pitch','Player3Volm', 'Player4BPM','Player4Pitch','Player4Volm'] # y = df[['output']] # X = df[['Time','Player1BPM','Player1Pitch','Player1Volm', 'Player2BPM','Player2Pitch','Player2Volm', 'Player3BPM','Player3Pitch','Player3Volm', 'Player4BPM','Player4Pitch','Player4Volm']] # + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="vNj36EimnJSd" outputId="3f9f4677-09d6-435e-abda-5d18716ae1e1" X.head() # + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" 
id="ubR9W1htnJSh" outputId="305af672-fee9-4401-9164-c27f4c967351" y.head() # + colab={} colab_type="code" id="LaJHpUXWnJSv" train_x, test_x, train_y, test_y = train_test_split(X, y, test_size=0.1,shuffle=False,random_state=42) # + colab={} colab_type="code" id="b1J3d_1inJSk" from sklearn.preprocessing import MinMaxScaler scaler_X = MinMaxScaler() X_train_scaled = scaler_X.fit_transform(train_x) X_test_scaled = scaler_X.transform(test_x) # + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="wbdX_iVMWuQ4" outputId="6c0d5b92-fde2-4278-8593-68a86888a3ba" train_x = np.array(X_train_scaled) #X_train_scaled train_x = train_x[:, :, None] print(train_x.shape) test_x = np.array(X_test_scaled)#X_test_scaled test_x = test_x[:, :, None] print(test_x.shape) # + colab={} colab_type="code" id="XXVvvnBNnJSp" ###############------------- ORIGINAL ONE WITH 3 LAYERS------------############ RUNNING SUCCESSFully # def create_model(learning_rate, dense_1, dense_2 ,dense_3): # assert learning_rate > 0 and dense_1 > 0 and dense_2 > 0, "Did you set the right configuration?" # model = Sequential() # model.add(Dense(int(dense_1), input_shape=(13, ), activation='relu', name='fc1')) # model.add(Dense(int(dense_2), activation='relu', name='fc2')) # model.add(Dense(int(dense_2), activation='relu', name='fc3')) # model.add(Dense(1, activation='relu', name='output')) # optimizer = SGD(lr=learning_rate) # model.compile(optimizer, loss='mae', metrics=['mse']) # return model # + colab={} colab_type="code" id="wIUdTSSjTuGo" ###############------------- LSTM ONE WITH 4 LAYERS------------############ def create_model(learning_rate, lstm_1, lstm_2 ,lstm_3, lstm_4): model = Sequential() model.add(LSTM(int(lstm_1), return_sequences=True, input_shape=(13, 1))) model.add(Dropout(0.3)) model.add(LSTM(int(lstm_2), return_sequences=True)) model.add(Dropout(0.3)) model.add(LSTM(int(lstm_3), return_sequences=True)) model.add(Dropout(0.3)) model.add(LSTM(int(lstm_4))) model.add(Dropout(0.0)) model.add(Dense(units=1)) optimizer = SGD(lr=learning_rate) model.compile(optimizer, loss='mae', metrics=['mse']) return model # + colab={} colab_type="code" id="gBL0LyZPnJS0" # model = create_model(learning_rate=0.1, dense_1 = 2, dense_2 = 2,dense_3=2) # + colab={} colab_type="code" id="JphdsLdoTcav" model = create_model(learning_rate=0.1, lstm_1 = 50, lstm_2 = 50,lstm_3=50, lstm_4 = 50) # + colab={"base_uri": "https://localhost:8080/", "height": 440} colab_type="code" id="5xYKcbyznJS3" outputId="16f436c7-bdcb-4b19-c2ea-5090a5ebe21e" model.summary() # + colab={} colab_type="code" id="RvJ_wAUdnJS6" # test_model = model.fit(train_x, train_y, validation_data=(test_x, test_y), verbose=1, batch_size = 512, epochs = 20) # + colab={} colab_type="code" id="j59HXjLvnJS8" def train_on_iris(): model = create_model(learning_rate=0.1, lstm_1 = 512, lstm_2 = 512, lstm_3 = 512, lstm_4 = 512) # This saves the top model. `accuracy` is only available in TF2.0. 
checkpoint_callback = ModelCheckpoint("model.h5", monitor='mse', save_best_only=True, save_freq=2) # Train the model model.fit( train_x, train_y, validation_data=(test_x, test_y), verbose=1, batch_size=512, epochs=20, callbacks=[checkpoint_callback]) return model # + colab={} colab_type="code" id="s6nvr-VBnJS-" #original_model = train_on_iris() # + colab={"base_uri": "https://localhost:8080/", "height": 234} colab_type="code" id="N_NZOohwZ1LR" outputId="ee4ad520-c4d9-4556-a6cb-5b013d755f32" # Open the file with open('/content/drive/My Drive/ZOOOptSearchreport_orig.txt','w') as fh: # Pass the file handle in as a lambda function to make it callable original_model.summary(print_fn=lambda x: fh.write(x + '\n')) print(original_model.summary()) # + colab={} colab_type="code" id="HDNFmEP1nJTB" import tensorflow.keras as keras from ray.tune import track # + colab={} colab_type="code" id="2jqJzMlGBcsg" # config = { # "num_samples": 10 , # "config": { # "iterations": 100, # }, # "stop": { # "timesteps_total": 100 # }, # } # + colab={} colab_type="code" id="_g6k5UzqnJTD" class TuneReporterCallback(keras.callbacks.Callback): def __init__(self, logs={}): self.iteration = 0 super(TuneReporterCallback, self).__init__() def on_epoch_end(self, batch, logs={}): self.iteration += 1 track.log(keras_info=logs, mean_accuracy=logs.get("mse"), mean_loss=logs.get("mae")) # + colab={} colab_type="code" id="z9_z4Kl9V_-u" def tune_iris(config): # TODO: Change me. model = create_model(learning_rate=config["lr"], lstm_1=config["lstm_1"], lstm_2=config["lstm_2"],lstm_3=config["lstm_3"],lstm_4=config["lstm_4"]) checkpoint_callback = ModelCheckpoint( "model.h5", monitor='loss', save_best_only=True, save_freq=2) # Enable Tune to make intermediate decisions by using a Tune Callback hook. This is Keras specific. callbacks = [checkpoint_callback, TuneReporterCallback()] # Train the model model.fit( train_x, train_y, validation_data=(test_x, test_y), verbose=1, batch_size=512, epochs=10, callbacks=callbacks) # + colab={} colab_type="code" id="dpFv-Q8LnJTJ" assert len(inspect.getargspec(tune_iris).args) == 1, "The `tune_iris` function needs to take in the arg `config`." # + colab={"base_uri": "https://localhost:8080/", "height": 361} colab_type="code" id="ejdTIlRUnJTM" outputId="7c516ce6-990b-4a31-9560-c09a7512d169" print("Test-running to make sure this function will run correctly.") tune.track.init() # For testing purposes only. tune_iris({"lr": 0.1, "lstm_1": 64, "lstm_2": 32, "lstm_3": 8, "lstm_4":4}) print("Success!") # + colab={} colab_type="code" id="a-_fsOVynJTO" hyperparameter_space = { "lr": tune.loguniform(0.001, 0.1), "lstm_1": tune.uniform(2, 128), "lstm_2": tune.uniform(2, 128), "lstm_3": tune.uniform(2, 128), "lstm_4": tune.uniform(2, 128), } # + colab={"base_uri": "https://localhost:8080/", "height": 212} colab_type="code" id="0L4ed3Em_UDd" outputId="63cf0ee3-d537-4" pip install -U zoopt # + colab={"base_uri": "https://localhost:8080/"} colab_type="code" id="kEE6ckuFnJTQ" outputId="c9ce3879-a577-485a-c595-83d982082c99" # This seeds the hyperparameter sampling. import numpy as np; np.random.seed(5) hyperparameter_space = hyperparameter_space # TODO: Fill me out. num_samples = 25 #################################################################################################### ################ This is just a validation function for tutorial purposes only. 
#################### HP_KEYS = ["lr", "lstm_1", "lstm_2", "lstm_3", "lstm_4"] assert all(key in hyperparameter_space for key in HP_KEYS), ( "The hyperparameter space is not fully designated. It must include all of {}".format(HP_KEYS)) ###################################################################################################### ray.shutdown() # Restart Ray defensively in case the ray connection is lost. ray.init(log_to_driver=False) # We clean out the logs before running for a clean visualization later. # ! rm -rf ~/ray_results/tune_iris from ray.tune import run from ray.tune.suggest.zoopt import ZOOptSearch from zoopt import ValueType dim_dict = { "lr": (ValueType.CONTINUOUS, [0.001,0.1], 1e-2), "lstm_1": (ValueType.DISCRETE, [2, 512], False), "lstm_2": (ValueType.DISCRETE, [2, 512], False), "lstm_3": (ValueType.DISCRETE, [2, 512], False), "lstm_4": (ValueType.DISCRETE, [2, 512], False), } zoopt_search = ZOOptSearch( algo="Asracos", # only support Asracos currently budget=num_samples, dim_dict=dim_dict, metric="keras_info/mse", mode="min") analysis= tune.run(tune_iris,search_alg=zoopt_search,resources_per_trial = {"cpu": 4, "gpu": 1},num_samples = num_samples) assert len(analysis.trials) ==25, "Did you set the correct number of samples?" # + colab={} colab_type="code" id="i3zWjJ2VnJTS" # Obtain the directory where the best model is saved. print("You can use any of the following columns to get the best model: \n{}.".format( [k for k in analysis.dataframe() if k.startswith("keras_info")])) print("=" * 10) logdir = analysis.get_best_logdir("keras_info/val_loss", mode="min") # We saved the model as `model.h5` in the logdir of the trial. from tensorflow.keras.models import load_model tuned_model = load_model(logdir + "/model.h5") # + colab={} colab_type="code" id="Y8ICEBOZa2iJ" # Open the file with open('/content/drive/My Drive/ZOOOptSearchreport_tune.txt','w') as fh: # Pass the file handle in as a lambda function to make it callable tuned_model.summary(print_fn=lambda x: fh.write(x + '\n')) print(tuned_model.summary()) # + colab={} colab_type="code" id="kelKOf28nJTX" predicted_value = original_model.predict(test_x) predicted_value.shape # + colab={} colab_type="code" id="uoa-qPmkqVDZ" test_y.shape # + colab={} colab_type="code" id="hnc7TVAnnJTZ" predicted_value_tuned = tuned_model.predict(test_x) predicted_value_tuned.shape # + colab={} colab_type="code" id="Gnhq3u25nJTe" # print("Loss is {:0.4f}".format(tuned_loss)) # print("Tuned Error is {:0.4f}".format(tuned_error)) # #print("The original un-tuned model had an accuracy of {:0.4f}".format(original_accuracy)) # predicted_value = tuned_model.predict(test_x) # + colab={} colab_type="code" id="VvzAquVPnJTg" predicted_value_tuned # + colab={} colab_type="code" id="MHMmNkoknJTi" import matplotlib.pyplot as plt # plt.plot(test_y) plt.plot(predicted_value) plt.plot(predicted_value_tuned) plt.show() # + colab={} colab_type="code" id="LBqc-AbPpreY" import matplotlib.pyplot as plt plt.plot(predicted_value) plt.show() # + colab={} colab_type="code" id="HcE1M2P56One" import matplotlib.pyplot as plt plt.plot(predicted_value_tuned) plt.show() # + colab={} colab_type="code" id="uPT8IMhnnJTk" import matplotlib.pyplot as plt plt.plot(test_y) plt.plot(predicted_value_tuned) plt.plot(predicted_value) plt.show() # + colab={} colab_type="code" id="9Y41ADfRnJTp" # %load_ext tensorboard # %tensorboard --logdir ~/ray_results/tune_iris # + colab={} colab_type="code" id="abW7Fq1ahb3H" # --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Nonconvex scalar Conservation Laws # The scalar nonlinear conservation law $q_t + f(q)_x = 0$ is said to be "genuinely nonlinear" over some range of states between $q_l$ and $q_r$ if $f''(q) \neq 0$ for all $q$ between these values. This is true in particular if $f(q)$ is any quadratic function; for example, the flux functions for Burgers' equation and the LWR traffic flow model are genuinely nonlinear for all $q$. The reason this is important is that it means that the characteristic speed $f'(q)$ is either monotonically increasing or monotonically decreasing over the interval between $q_l$ and $q_r$. As discussed already in chapters [Burgers.ipynb](Burgers.ipynb) and [Traffic_flow.ipynb](Traffic_flow.ipynb), if $f'(q)$ is increasing as $q$ varies from $q_l$ to $q_r$ then the initial discontinuity of a Riemann problem spreads out into a smooth single-valued rarefaction wave. On the other hand if $f'(q)$ is decreasing then the discontinuity propagates as a single shock wave with speed given by the Rankine-Hugoniot condition. Hence the Riemann solution for a genuinely nonlinear scalar equation always consists of either a single rarefaction wave or a single shock. # # The solution can be much more complicated if $f''(q)$ vanishes somewhere between $q_l$ and $q_r$, since this means that the characteristic speed may not vary monotonically. As we will illustrate below, for a scalar conservation law of this type the solution to the Riemann problem might consist of multiple shocks and rarefaction waves. These waves all originate from $x=0$ and the Riemann solution is still a similarity solution $q(x,t) = Q(x/t)$ for some single-valued function $Q(\xi)$, but determining the right set of waves is more challenging. Below we present results computed using an elegant form of the solution to general scalar conservation laws due to Osher. # ## Linear degeneracy # # The opposite extreme of genuine nonlinearity is "linear degeneracy". The scalar advection equation has $f(q) = aq$ with constant characteristic speed $f'(q)\equiv a$ and so $f''(q) \equiv 0$ for all values of $q$. Since it is a linear equation, it naturally fails to be genuinely nonlinear over any range of $q$ values. # # Recall that in the Riemann solution for the scalar advection equation (discussed in [Advection.ipynb](Advection.ipynb)) the characteristics are parallel to the propagating discontinuity in the $x$-$t$ plane. They are neither impinging on the discontinuity (as in a shock wave, where the characteristic speed decreases from $q_l$ to $q_r$) nor are they spreading out (as they would in a rarefaction wave with $f'(q)$ increasing). # ## Implications for systems of equations # # For hyperbolic systems of $m$ equations, there are $m$ different characteristic fields. For classical nonlinear systems such as the shallow water equations or the Euler equations of gas dynamics, the Riemann solution generally consists of $m$ waves, each of which is either a discontinuity or a rarefaction wave. The fact that there is only one wave in each family results from the fact that, for these systems, each characteristic speed either varies monotonically through the wave (if the field is genuinely nonlinear) or else is constant across the wave (if the field is linearly degenerate).
In the former case the wave is either a single shock or rarefaction wave, and in the latter case the wave is a "contact discontinuity", as discussed further in [Shallow_water_tracer](Shallow_water_tracer.ipynb) and [Euler.ipynb](Euler.ipynb). As in the case of scalar advection, the characteristic speed is constant across a contact discontinuity ($f'(q) \equiv 0$ between the left and right states). The definition of genuine nonlinearity and linear degeneracy for systems of equations is given in [Shallow_water_tracer](Shallow_water_tracer.ipynb). # # The nonconvex equations studied in this chapter illustrate that even for a scalar problem the Riemann solution becomes much more complicated if the problem fails to be genuinely nonlinear or linearly degenerate. *Systems* of equations that lack the corresponding properties are more difficult still and are beyond the scope of this book, although they do arise in some important applications, such as magnetohydrodynamics (MHD). # ## Osher's Solution # # In this chapter we use Osher's general solution to the scalar nonlinear Riemann problem (valid also for non-convex fluxes), using the formula from (Osher 1984). The Riemann solution is always a similarity solution $q(x,t) = Q(x/t)$ for all $t>0$ (constant on any ray $x=\alpha t$ emanating from the origin, for any constant $\alpha$). The function $Q(\xi)$ is given by # # $$ # Q(\xi) = \begin{cases} # \text{argmin}_{q_l \leq q \leq q_r} [f(q) - \xi q]& \text{if} ~q_l\leq q_r,\\ # \text{argmax}_{q_r \leq q \leq q_l} [f(q) - \xi q]& \text{if} ~q_r\leq q_l.\\ # \end{cases} # $$ # # Recall that $\text{argmin}_{q_l \leq q \leq q_r} G(q)$ returns the value of $q$ for which $G(q)$ is minimized over the indicated interval, while argmax returns the value of $q$ where $G(q)$ is maximized. # # For more discussion, see also Section 16.1 of (LeVeque 2002). # %matplotlib inline # + tags=["hide"] # %config InlineBackend.figure_format = 'svg' import numpy as np from ipywidgets import widgets, fixed from ipywidgets import interact from exact_solvers import nonconvex from exact_solvers import nonconvex_demos # - # If you wish to examine the Python code for this chapter, please see: # - [exact_solvers/nonconvex.py](exact_solvers/nonconvex.py) # - [exact_solvers/nonconvex_demos.py](exact_solvers/nonconvex_demos.py) # # Note that in [exact_solvers/nonconvex.py](exact_solvers/nonconvex.py) we use the function `osher_solution` to define a function `nonconvex_solutions` that evaluates this solution at a set of `xi = x/t` values. It also computes the possibly multi-valued solution that would be obtained by tracing characteristics, for plotting purposes. In [exact_solvers/nonconvex_demos.py](exact_solvers/nonconvex_demos.py), an additional function `make_plot_function` returns a plotting function for use in interactive widgets below. # ## Traffic flow # # First we recall that the Riemann solution for a convex flux consists of a single shock or rarefaction wave. For example, consider the flux function $f(q) = q(1-q)$ from traffic flow (with $q$ now representing the density $\rho$ that was used in [Traffic_flow.ipynb](Traffic_flow.ipynb)). nonconvex_demos.demo1() # The plot on the left above shows a case where the solution is a rarefaction wave that can be computed by tracing characteristics. On the right we see the case for which tracing characteristics would give a multivalued solution (as a dashed line) whereas the correct Riemann solution consists of a shock wave (solid line).
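# As a quick sanity check of Osher's formula above, the cell below is a minimal brute-force sketch that simply optimizes $f(q) - \xi q$ over a fine grid of states. It is illustrative only and is not the `osher_solution` function from `exact_solvers/nonconvex.py` used elsewhere in this chapter; the grid size and the traffic-flow test states ($q_l=0.1$, $q_r=0.6$, the same states plotted below) are choices made here for demonstration.

# +
def osher_state(f, q_l, q_r, xi, n=2001):
    """Evaluate Q(xi) by optimizing f(q) - xi*q on a grid between q_l and q_r (brute force)."""
    q = np.linspace(min(q_l, q_r), max(q_l, q_r), n)
    G = f(q) - xi*q
    # argmin over [q_l, q_r] if q_l <= q_r, argmax over [q_r, q_l] otherwise
    return q[np.argmin(G)] if q_l <= q_r else q[np.argmax(G)]

# Check with the convex traffic-flow flux f(q) = q(1-q): for q_l=0.1, q_r=0.6 the
# solution is a single shock with Rankine-Hugoniot speed s = (f(q_r)-f(q_l))/(q_r-q_l) = 0.3,
# so Q(xi) should jump from 0.1 to 0.6 as xi crosses 0.3.
f_traffic = lambda q: q*(1 - q)
print([round(osher_state(f_traffic, 0.1, 0.6, xi), 2) for xi in (-0.5, 0.0, 0.25, 0.35, 0.5)])
# -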
# For comparison with later examples, we also plot the quadratic flux function $f(q)$ and the linear characteristic speed $f'(q)$ for this range of $q$ values. Plotting $q$ vs. the characteristic speed shows how we can interpret each value of $q$ in the jump discontinuity (represented by the dashed vertical line in the plot on the right below) as propagating to the left or right at its characteristic speed. Since $f'(q)$ is linear in $q$, the rarefaction wave shown above is piecewise linear. f = lambda q: q*(1-q) q_left = 0.1 q_right = 0.6 nonconvex_demos.plot_flux(f, q_left, q_right) # ## Buckley-Leverett Equation # # The Buckley-Leverett equation for two-phase flow is described in Section 16.1.1 of (LeVeque 2002). It has the non-convex flux function # # $$ # f(q) = \frac{q^2}{q^2 + a(1-q)^2} # $$ # where $a$ is some constant, $q=1$ corresponds to pure water and $q=0$ to pure oil, in a saturated porous medium. # # Consider the Riemann problem for water intruding into oil, with $q_l=1$ and $q_r=0$. a = 0.5 f_buckley_leverett = lambda q: q**2 / (q**2 + a*(1-q)**2) q_left = 1. q_right = 0. # ### Plot the flux and its derivative # + tags=["hide"] nonconvex_demos.plot_flux(f_buckley_leverett, q_left, q_right) # - # Again the third plot above shows $q$ on the vertical axis and $f'(q)$ on the horizontal axis (it's the middle figure turned sideways). You can think of this as showing the characteristic velocity for each point on a jump discontinuity from $q=0$ to $q=1$ (indicated by the dashed line), and hence a triple valued solution of the Riemann problem at $t=1$ when each $q$ value has propagated this far. # ### The correct Riemann solution # # Below we show this triple-valued solution together with the correct solution to the Riemann problem, with a shock wave inserted at the appropriate point (as computed using the Osher solution defined above). Note that for this non-convex flux function the Riemann solution consists partly of a rarefaction wave together with a shock wave. # # In the plot on the right, we also show the flux function $f(q)$ as a red curve and the upper boundary of the convex hull of the set of points below the graph for $q_r \leq q \leq q_l$. Note that the convex hull boundary follows the flux function for the set of $q$ values corresponding to the rarefaction wave and then jumps from $q\approx 0.6$ to $q=0$, corresponding to the shock wave. See Section 16.1 of (LeVeque 2002) for more discussion of this construction of the Riemann solution. # + q_left = 1. q_right = 0. plot_function = nonconvex_demos.make_plot_function(f_buckley_leverett, q_left, q_right, xi_left=-2, xi_right=2) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0,max=.9), fig=fixed(0)); # - # Note from the plot on the left above that the triple-valued solution suggested by tracing characteristics (the dashed line) has been partially replaced by a shock wave. By conservation, the areas of the two regions cut off by the shock must cancel out. Moreover, the shock speed coincides with the characteristic speed along at the edge of the rarefaction wave that ends at the shock. In terms of the flux function shown by the dashed curve in the right-most figure above, we see that the shock wave connects $q_r=0$ to the point where the slope of the solid line (the shock speed) agrees with the slope of the flux function (the characteristic speed at this edge of the rarefaction wave). The correct Riemann solution lies along the upper convex hull of the flux function. 
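# The tangency construction described above can also be checked numerically. The cell below is a small illustrative sketch (not part of the `exact_solvers` package) that assumes `scipy` is available in this environment; it solves the tangency condition $f'(q^*) = (f(q^*) - f(q_r))/(q^* - q_r)$ with $q_r = 0$ for the Buckley-Leverett flux defined above, and the root should land near the $q\approx 0.6$ jump seen in the convex-hull plot, with the shock speed equal to $f'(q^*)$ there.

# +
from scipy.optimize import brentq

def df_dq(f, q, h=1e-6):
    """Centered finite-difference approximation of f'(q)."""
    return (f(q + h) - f(q - h)) / (2*h)

# Shock speed = chord slope from q_r = 0 to q (i.e. f(q)/q since f(0)=0);
# tangency occurs where this chord slope equals the characteristic speed f'(q).
tangency = lambda q: df_dq(f_buckley_leverett, q) - f_buckley_leverett(q)/q
q_star = brentq(tangency, 0.2, 0.99)
print("post-shock state q* = %.3f, shock speed = %.3f" % (q_star, df_dq(f_buckley_leverett, q_star)))
# -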
# ### Swapping left and right states # Note what happens if we switch `q_left` and `q_right` in this problem. This now corresponds to oil on the left pushing into water on the right. Since $q_l < q_r$ in this case, the correct Riemann solution corresponds to the lower convex hull of the flux function. # + q_left = 0. q_right = 1. plot_function = nonconvex_demos.make_plot_function(f_buckley_leverett, q_left, q_right, xi_left=-2, xi_right=2) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0,max=.9), fig=fixed(0)); # - # ### Leftward flow # Note that the Buckley-Leverett equation as written above simulates flow from left to right. So if we wanted to model water on the right pushing leftward into oil on the left, we must negate the flux function, as is done in the next cell. Note that this gives the same solution as our original Riemann problem, but flipped in `x`. # + f_BL_leftward = lambda q: -f_buckley_leverett(q) q_left = 0. q_right = 1. plot_function = nonconvex_demos.make_plot_function(f_BL_leftward, q_left, q_right, xi_left=-2, xi_right=2) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0,max=.9), fig=fixed(0)); # - # ## Sinusoidal flux # # As another test, the flux function $f(q) = \sin(q)$ is used in Example 16.1 of (LeVeque 2002). # # First we plot the flux function $f(q)$ and also the characteristic speed $df/dq$. We also plot $q$ as a function of $df/dq$. This again helps us visualize whether the characteristic speed is positive or negative for each value of $q$ along the jump discontinuity in the Riemann data (again indicated by the vertical dashed line), and shows that trying to solve the Riemann problem by tracing characteristics alone would lead to a multi-valued solution. # + f_sin = lambda q: np.sin(q) q_left = np.pi/4. q_right = 15*np.pi/4. nonconvex_demos.plot_flux(f_sin, q_left, q_right) # - # And here is a dynamic version of Figure 16.4 of (LeVeque 2002), illustrating where shocks must be inserted to make the Riemann solution single valued. # + plot_function = nonconvex_demos.make_plot_function(f_sin, q_left, q_right, -1.5, 1.5) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0.,max=.9), fig=fixed(0)); # - # In the figure above, note that the shocks in the Riemann solution correspond to linear segments of the lower boundary of the convex hull of the set of points that lie above the flux function $f(q)$. This is because we chose $q_l < q_r$ in this example. # # If we switch the states so that $q_l > q_r$, then the Riemann solution corresponds to the upper boundary of the convex hull of the set of points that lie below the flux function: # + f_sin = lambda q: np.sin(q) q_left = 15*np.pi/4. q_right = np.pi/4. plot_function = nonconvex_demos.make_plot_function(f_sin, q_left, q_right, -1.5, 1.5) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0.,max=.9), fig=fixed(0)); # - # ## Yet another example # # Here's another example that you can run in a live notebook. Note the collection of shock and rarefaction waves that result from this. In the notebook you can adjust $q_l$ and $q_r$. Notice how the structure of the solution changes as you vary them. What do you think the Riemann solution looks like if you switch the left and right states? Check if you are right. # # Note this example doesn't work in the static html version due to sliders for $q_l$ and $q_r$. # + f = lambda q: 0.25*(1. 
- q)*np.sin(1.5*q) plot_function = nonconvex_demos.make_plot_function_qsliders(f) interact(plot_function, t=widgets.FloatSlider(value=0.8,min=0.,max=.9), q_left=widgets.FloatSlider(value=-3.5,min=-4,max=4), q_right=widgets.FloatSlider(value=3.5,min=-4,max=4), fig=fixed(0)); # - # Experiment with this and other flux functions in this notebook! But please note that the plotting functions, as written, may break down if you try an example with too many oscillations in the flux. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd import numpy as np from datetime import datetime, timezone, timedelta # - # ### Load datasets events_df = pd.read_pickle("data/01_events_df.pkl") category_tree_df = pd.read_pickle("data/01_category_tree_df.pkl") item_properties_df = pd.read_pickle("data/01_item_properties_df.pkl") # ### Data Preparation # Define variables TIME_KEY = "timestamp" USER_KEY = "visitorid" ITEM_KEY = "itemid" SESSION_KEY = "sessionid" # Remove rows with a missing visitorid events_prepared_df = events_df[~np.isnan(events_df[USER_KEY])].copy() # Truncate milliseconds events_prepared_df[TIME_KEY] = (events_prepared_df[TIME_KEY] / 1000).astype(int) # #### Introduce Sessions # Sort data by user and time events_prepared_df.sort_values(by=[USER_KEY, TIME_KEY], ascending=True, inplace=True) # Compute the time difference between consecutive events tdiff = np.diff(events_prepared_df[TIME_KEY].values) # Check which of them are bigger than the session threshold SESSION_THRESHOLD = 30 * 60 split_session = tdiff > SESSION_THRESHOLD split_session = np.r_[True, split_session] # Check where the user changes from one row to the next new_user = events_prepared_df[USER_KEY].values[1:] != events_prepared_df[USER_KEY].values[:-1] new_user = np.r_[True, new_user] # A new session starts when at least one of the two conditions holds new_session = np.logical_or(new_user, split_session) # Compute the session ids session_ids = np.cumsum(new_session) events_prepared_df[SESSION_KEY] = session_ids events_prepared_df.head() def print_stats(data): data_start = datetime.fromtimestamp(data[TIME_KEY].min(), timezone.utc) data_end = datetime.fromtimestamp(data[TIME_KEY].max(), timezone.utc) print('\tEvents: {}\n\tUsers: {}\n\tSessions: {}\n\tItems: {}\n\tSpan: {} / {}'.
format(len(data), data[USER_KEY].nunique(), data[SESSION_KEY].nunique(), data[ITEM_KEY].nunique(), data_start.date().isoformat(), data_end.date().isoformat())) print("Raw dataset:") print_stats(events_prepared_df) # #### Filter # Keep items with >=5 interactions MIN_ITEM_SUPPORT = 5 item_pop = events_prepared_df[ITEM_KEY].value_counts() good_items = item_pop[item_pop >= MIN_ITEM_SUPPORT].index events_prepared_df = events_prepared_df[events_prepared_df[ITEM_KEY].isin(good_items)] # Remove sessions with length < 2 MIN_SESSION_LENGTH = 2 session_length = events_prepared_df[SESSION_KEY].value_counts() good_sessions = session_length[session_length >= MIN_SESSION_LENGTH].index events_prepared_df = events_prepared_df[events_prepared_df[SESSION_KEY].isin(good_sessions)] # let's keep only returning users (with >= 2 sessions) # need to be 3, because we need at least 1 for each training, validation and test set MIN_USER_SESSIONS = 3 MAX_USER_SESSIONS = None sess_per_user = events_prepared_df.groupby(USER_KEY)[SESSION_KEY].nunique() if MAX_USER_SESSIONS is None: # no filter for max number of sessions for each user good_users = sess_per_user[(sess_per_user >= MIN_USER_SESSIONS)].index else: good_users = sess_per_user[(sess_per_user >= MIN_USER_SESSIONS) & (sess_per_user < MAX_USER_SESSIONS)].index events_prepared_df = events_prepared_df[events_prepared_df[USER_KEY].isin(good_users)] print("Filtered dataset:") print_stats(events_prepared_df) # ### Create single train / test split CLEAN_TEST = True def last_session_out_split(data, min_session_length): """ last-session-out split assign the last session of every user to the test set and the remaining ones to the training set """ sessions = data.sort_values(by=[USER_KEY, TIME_KEY]).groupby(USER_KEY)[SESSION_KEY] last_session = sessions.last() train = data[~data[SESSION_KEY].isin(last_session.values)].copy() test = data[data[SESSION_KEY].isin(last_session.values)].copy() if CLEAN_TEST: train_items = train[ITEM_KEY].unique() test = test[test[ITEM_KEY].isin(train_items)] #  Remove sessions in test shorter than min_session_length slen = test[SESSION_KEY].value_counts() good_sessions = slen[slen >= min_session_length].index test = test[test[SESSION_KEY].isin(good_sessions)].copy() train = train.reset_index(drop=True) test = test.reset_index(drop=True) return train, test # assign the last session of every user to the test set and the remaining ones to the training set train_sessions, test_sessions = last_session_out_split(events_prepared_df, MIN_SESSION_LENGTH) validation_train_sessions, validation_test_sessions = last_session_out_split(train_sessions, MIN_SESSION_LENGTH) print("Training set:") print_stats(train_sessions) print("Test set:") print_stats(test_sessions) print("Validation training set:") print_stats(validation_train_sessions) print("Validation test set:") print_stats(validation_test_sessions) # ### Store Datasets events_prepared_df.to_pickle("data/02_events_prepared_df.pkl") train_sessions.to_pickle("data/02_train_sessions.pkl") test_sessions.to_pickle("data/02_test_sessions.pkl") validation_train_sessions.to_pickle("data/02_validation_train_sessions.pkl") validation_test_sessions.to_pickle("data/02_validation_test_sessions.pkl") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.0 64-bit # language: python # name: python3 # --- # + import pandas as pd import seaborn as sns import numpy as np pets_df = 
pd.read_csv('../train.csv') caminhos = [] names = [] for ii in range(0, len(pets_df)): caminho = '../train/'+pets_df.Id[ii]+'.jpg' name = pets_df.Id[ii]+'.jpg' caminhos.append(caminho) names.append(name) pets_df['full_path'] = caminhos pets_df['file_name'] = names pets_df = pets_df.rename(columns={"Pawpularity": "label"}) pets_df.label = pets_df.label # /100 pets_df # + bin_lbls = ['10','20','30','40','50','60','70','80','90','100'] lbls = pd.qcut(pets_df['label'], q= 10,labels=bin_lbls) lbls.head(10) df2 = pets_df df2.label = lbls df2 # + from imblearn.under_sampling import RandomUnderSampler y = df2.label X = df2.drop('label',axis = 1) rus = RandomUnderSampler(random_state=42) X_res, y_res = rus.fit_resample(X, y) # print(X_res) sns.histplot(data = y_res) # - newdf = X_res newdf['label'] = y_res newdf newdf.label.dtype # print(pets_df.label.describe()) # pets_df.label.hist() # + # https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/ from tensorflow.keras.applications import EfficientNetB0 import tensorflow as tf tf.keras.backend.clear_session() IMG_SIZE = 224 size = (IMG_SIZE, IMG_SIZE) batch_size =64 import tensorflow.keras.preprocessing.image as image # datagen = image.ImageDataGenerator(rescale=1/255, validation_split=0.3) datagen = image.ImageDataGenerator(validation_split=0.3) train_generator = datagen.flow_from_dataframe(dataframe=newdf, directory= '../train', #again not sure what's up with directories, took a while to figure out I needed that ending /images x_col='file_name', y_col='label', target_size=(224,224), class_mode='categorical', batch_size=batch_size, subset='training' ) from tensorflow.keras.models import Sequential from tensorflow.keras import layers img_augmentation = Sequential( [ layers.RandomRotation(factor=0.15), layers.RandomTranslation(height_factor=0.1, width_factor=0.1), layers.RandomFlip(), layers.RandomContrast(factor=0.1), ], name="img_augmentation", ) from tensorflow.keras.applications import EfficientNetB0 NUM_CLASSES = 10 def build_model(num_classes): inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) x = img_augmentation(inputs) model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet") # Freeze the pretrained weights model.trainable = False # Rebuild top x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output) x = layers.BatchNormalization()(x) top_dropout_rate = 0.2 x = layers.Dropout(top_dropout_rate, name="top_dropout")(x) outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="pred")(x) # Compile model = tf.keras.Model(inputs, outputs, name="EfficientNet") optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2) model.compile( optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"] ) return model model = build_model(num_classes=NUM_CLASSES) # model.summary() # ds_train = train_generator.map(lambda image, label: (tf.image.resize(image, size), label)) # print(tf.config.experimental.get_memory_usage("GPU:0")) epochs = 40 hist = model.fit(train_generator, epochs=epochs, verbose=1)#, validation_data=ds_test, verbose=2) # + import matplotlib.pyplot as plt def plot_hist(hist): plt.plot(hist.history["accuracy"]) # plt.plot(hist.history["val_accuracy"]) plt.title("model accuracy") plt.ylabel("accuracy") plt.xlabel("epoch") plt.legend(["train", "validation"], loc="upper left") plt.show() plot_hist(hist) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python
3 # language: python # name: python3 # --- import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # + # P3_7417 = pd.read_csv('../input/sibur-csv/P3_7417.csv',parse_dates=["date"], index_col="date") # P3_6751 = pd.read_csv('../input/sibur-csv/P3_6751.csv',parse_dates=["date"], index_col="date") # P4_7745 = pd.read_csv('../input/sibur-csv/P4_7745.csv',parse_dates=["date"], index_col="date") # + # P3_7417 = pd.read_csv('../input/sibur-csv/P3_7417.csv',parse_dates=["date"], index_col="date") # P3_6751 = pd.read_csv('../input/sibur-csv/P3_6751.csv',parse_dates=["date"], index_col="date") # P4_7745 = pd.read_csv('../input/sibur-csv/P4_7745.csv',parse_dates=["date"], index_col="date") # - P3_7417 = pd.read_csv('submit_model1.csv',parse_dates=["date"], index_col="date") P3_6751 = pd.read_csv('submit_model2.csv',parse_dates=["date"], index_col="date") P4_7745 = pd.read_csv('submit_model3.csv',parse_dates=["date"], index_col="date") S = pd.DataFrame() S['pet4'] = P3_7417['pet'] S['pet3'] = P3_6751['pet'] S['pet2'] = P4_7745['pet'] S['pet'] = S.mean(axis=1) S.drop(['pet4','pet3','pet2'],axis=1).to_csv('submit_final.csv') # + # S = pd.DataFrame() # S['pet4'] = P3_7417['pet'] # S['pet3'] = P3_6751['pet'] # S['pet'] = S.mean(axis=1) # S.drop(['pet4','pet3'],axis=1).to_csv('submit_final.csv') # 1.2 scor # - S # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sandbox: Blur Kernel Inversion # + # %matplotlib notebook # %load_ext autoreload # %autoreload 2 # Load motiondeblur module and Dataset class import libwallerlab.projects.motiondeblur as md import libwallerlab.utilities.simulation as sim # Debugging imports import llops as yp import matplotlib.pyplot as plt yp.config.setDefaultBackend('numpy') # - # ## Load Data # + # Generate Object x = sim.brain((256,400)) object_size = yp.shape(x) # Generate blur kernel kernel = md.blurkernel.generate(object_size, 20) # - # # Test Inversion # + kernel_f = yp.Ft(kernel) kernel_inv = yp.iFt(yp.conj(kernel_f) / (yp.abs(kernel_f) ** 2 + 1e-1)) # Get kernel support mask kernel_mask = yp.boundingBox(kernel, return_roi=True) # Filter inverse by kernel mask kernel_inv_mask = kernel_inv * kernel_mask.mask # Convolve object with kernel y = yp.convolve(x, kernel) plt.figure() plt.subplot(231) plt.imshow(yp.abs(kernel)) plt.subplot(232) plt.imshow(yp.abs(kernel_inv)) plt.subplot(233) plt.imshow(yp.abs(kernel_inv_mask)) plt.subplot(234) plt.imshow(yp.abs(y)) plt.subplot(235) plt.imshow(yp.abs(yp.convolve(y, kernel_inv))) plt.subplot(236) plt.imshow(yp.abs(yp.convolve(y, kernel_inv_mask))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import matplotlib.pyplot as plt import seaborn # %pylab inline seaborn.set() # + import sys sys.path.append("..") import cfg # - NOISE_LEVELS = cfg.experiment.noise_levels; NOISE_LEVELS # # IMDB # ## CharCNN results = pd.read_csv('../results/CharCNN_IMDB.csv', index_col=0) results.sample(10) # + results = pd.read_csv('../results/CharCNN_IMDB.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in 
enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('IMDB. Noised test set. CharCNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/FastText_IMDB.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('IMDB. Noised test set. FastText') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/YoonKim_IMDB.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('IMDB. Noised test set. CharCNN+RNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/AttentionedYoonKim_IMDB.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('IMDB. Noised test set. CharCNN+RNN+Attention') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # - # # Mokoron # + results = pd.read_csv('../results/CharCNN_mokoron_backup.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] plot_data if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. 
CharCNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/FastText_mokoron.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. Fasttext') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/YoonKim_mokoron.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. CharCNN+RNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/AttentionedYoonKim_mokoron.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. CharCNN+RNN+Attention') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../../robust-w2v/results/rove_mokoron.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. 
RoVe') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../../robust-w2v/results/rove_mokoron.csv', index_col=0) results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_orig[(results_orig.noise_level_train == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Mokoron. Noised test set. RoVe') plt.xlabel('train noise level') plt.ylabel('test f1') plt.plot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # - # ## Airline tweets # + results = pd.read_csv('../results/CharCNN_airline-tweets.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Airline tweets. Noised test set. CharCNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/FastText_airline-tweets.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Airline tweets. Noised test set. Fasttext') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../results/YoonKim_airline-tweets.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Airline tweets. Noised test set. 
CharCNN+RNN') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + results = pd.read_csv('../../robust-w2v/results/rove_airline_tweets.csv', index_col=0) results_same = results[results['noise_level_test'] > 0] results_orig = results[results['noise_level_test'] == -1] plot_x = [] plot_y = [] j = 0 for i, noise_level in enumerate(NOISE_LEVELS): plot_data = results_same[(results_same.noise_level_train == noise_level) &\ (results_same.noise_level_test == noise_level)] if not plot_data.empty: plot_x.append(round(noise_level, 3)) plot_y.append([]) for _, res in plot_data.iterrows(): plot_y[j].append(res['f1_test']) j += 1 plt.figure(figsize=(10, 8)) plt.title('Airline tweets. Noised test set. RoVe') plt.xlabel('train noise level') plt.ylabel('test f1') seaborn.boxplot(plot_x, plot_y) plt.savefig('IMDB.png') plt.show() # + import matplotlib.pyplot as plt import matplotlib.patches as mpatches from matplotlib.colors import colorConverter as cc import numpy as np def plot_mean_and_CI(mean, lb, ub, color_mean=None, color_shading=None): # plot the shaded range of the confidence intervals plt.fill_between(range(mean.shape[0]), ub, lb, color=color_shading, alpha=.5) # plot the mean on top plt.plot(mean, color_mean) class LegendObject(object): def __init__(self, facecolor='red', edgecolor='white', dashed=False): self.facecolor = facecolor self.edgecolor = edgecolor self.dashed = dashed def legend_artist(self, legend, orig_handle, fontsize, handlebox): x0, y0 = handlebox.xdescent, handlebox.ydescent width, height = handlebox.width, handlebox.height patch = mpatches.Rectangle( # create a rectangle that is filled with color [x0, y0], width, height, facecolor=self.facecolor, # and whose edges are the faded color edgecolor=self.edgecolor, lw=3) handlebox.add_artist(patch) # if we're creating the legend for a dashed line, # manually add the dash in to our rectangle if self.dashed: patch1 = mpatches.Rectangle( [x0 + 2*width/5, y0], width/5, height, facecolor=self.edgecolor, transform=handlebox.get_transform()) handlebox.add_artist(patch1) return patch # + # generate 3 sets of random means and confidence intervals to plot mean0 = np.random.random(50) ub0 = mean0 + np.random.random(50) + .5 lb0 = mean0 - np.random.random(50) - .5 mean1 = np.random.random(50) + 2 ub1 = mean1 + np.random.random(50) + .5 lb1 = mean1 - np.random.random(50) - .5 mean2 = np.random.random(50) -1 ub2 = mean2 + np.random.random(50) + .5 lb2 = mean2 - np.random.random(50) - .5 # plot the data fig = plt.figure(1, figsize=(7, 2.5)) plot_mean_and_CI(mean0, ub0, lb0, color_mean='k', color_shading='k') plot_mean_and_CI(mean1, ub1, lb1, color_mean='b', color_shading='b') plot_mean_and_CI(mean2, ub2, lb2, color_mean='g--', color_shading='g') bg = np.array([1, 1, 1]) # background of the legend is white colors = ['black', 'blue', 'green'] # with alpha = .5, the faded color is the average of the background and color colors_faded = [(np.array(cc.to_rgb(color)) + bg) / 2.0 for color in colors] plt.legend([0, 1, 2], ['Data 0', 'Data 1', 'Data 2'], handler_map={ 0: LegendObject(colors[0], colors_faded[0]), 1: LegendObject(colors[1], colors_faded[1]), 2: LegendObject(colors[2], colors_faded[2], dashed=True), }) plt.title('Example mean and confidence interval plot') plt.tight_layout() plt.grid() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 
3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="NDJBMsRbW2vk" from sklearn.datasets import load_iris iris = load_iris() # + id="KxZCzvfck29h" # iris.data X = iris.data y = iris.target # + id="VCmd3ASylFFs" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2) # + [markdown] id="6buXxpYLrhcJ" # **Nu-Support Vector Classification** # + colab={"base_uri": "https://localhost:8080/"} id="S2_937tzlHdl" outputId="6dc7d999-deff-447c-b451-a934d3cb389a" from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import NuSVC clf = make_pipeline(StandardScaler(), NuSVC()) clf.fit(X_train, y_train) # + colab={"base_uri": "https://localhost:8080/"} id="Ak9OLnFulctY" outputId="11d9e551-dc38-4f30-b586-1258b6ec8019" #Prediction y_pred = clf.predict(X_test) print(y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="AXNTbeCHlfL2" outputId="79f091d3-ebfa-406f-d418-ee4f1c614290" #Check precision from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score print(classification_report(y_test, y_pred)) print("----------------") print(accuracy_score(y_test, y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="szYuhPKqliAh" outputId="ac899566-1dfb-43e7-984f-ec9e591dddc1" from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="6QITKxE6lmd0" outputId="65b1fdfa-4061-4644-af59-b50655e2a22a" import seaborn as sn import pandas as pd import matplotlib.pyplot as plt df_cm = pd.DataFrame(cm, range(3), range(3)) # plt.figure(figsize=(10,7)) sn.set(font_scale=1.4) # for label size sn.heatmap(df_cm, annot=True, annot_kws={"size": 14}) plt.show() # + [markdown] id="stENWeY7tD5P" # **Linear Support Vector Classification** # + colab={"base_uri": "https://localhost:8080/"} id="-5NXYZC9lpoI" outputId="f9934e28-90fe-41aa-d499-362ea3aeb26f" from sklearn.svm import LinearSVC clf1 = make_pipeline(StandardScaler(), LinearSVC(random_state=0, tol=1e-5)) clf1.fit(X_train, y_train) # + colab={"base_uri": "https://localhost:8080/"} id="rP62j3ZrmcIa" outputId="2db0321f-ad17-451b-82a2-eced52ca54e1" y_pred1 = clf1.predict(X_test) print(y_pred1) # + colab={"base_uri": "https://localhost:8080/"} id="1IY9D2Ccmg5o" outputId="6242569d-58b6-4b81-8140-9662e6ebcdc1" from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score print(classification_report(y_test, y_pred1)) print("----------------") print(accuracy_score(y_test, y_pred1)) # + colab={"base_uri": "https://localhost:8080/"} id="iQtATcBFm3vu" outputId="1b487a83-0cba-4773-f2a8-b58b5780501e" from sklearn.metrics import confusion_matrix cm1 = confusion_matrix(y_test, y_pred1) cm1 # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="3DIw84Tjm8b4" outputId="873976c2-46ad-488c-88f6-c0123a87f6a4" df_cm1 = pd.DataFrame(cm1, range(3), range(3)) # Range fitted based on the previous array # plt.figure(figsize=(10,7)) sn.set(font_scale=1.4) # for label size sn.heatmap(df_cm1, annot=True, annot_kws={"size": 12}) # font size , size range based on theprevious plot matrix plt.show() # + id="K4UyfjMZnBkC" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # File 
Size Investigation # Deleting items from bottom to top to try to get under 100 MB # # Item | After Removal | Size (kB) # ---|---|--- # Original | 119424 | - # Single Feature Choropleth | 111034 | 8390 # Final Cluster Maps (two rows) | 102255 | 8779 # P-value DF | 102227 | 28 # Feature DF Sorted | 102183 | 44 # Feature DF Unsorted | 102139 | 44 # Distance Matrix Image | 102125 | 14 # Dendrogram | 102110 | 15 # Distance Matrix Image | 102096 | 14 # Sorted Cluster Index Plot | 102068 | 28 # Distance Matrix Image | 102054 | 14 # Kopt P Values | 102051 | 3 # Scaled DF | 102007 | 44 # 3D Scatter Plot | 101610 | 397 # Pairplot 3-wide Figure | 101353 | 257 # GMM Separation Statistic | 101333 | 20 # Gap Statistic 3 Image | 101271 | 52 # Gap Statistic 2 Image | 101210 | 61 # Gap Statistic 1 Image | 101134 | 74 # Silhouette | 101102 | 32 # Silhouette | 101072 | 30 # Silhouette | 101049 | 23 # Gaussian Weights Grid 26 | 100870 | 179 # 3D Scatter Plot | 96883 | 3987 # Pairplot 3-wide Figure | 96649 | 234 # Gaussian Weights Grid 20 | 96511 | 138 # Pairplot 6-wide Image | 95042 | 1607 # Pairplot 6-wide Image | 93517 | 1525 # Gaussian Weights 30 | 93335 | 282 # Pairplot 11-wide Image | 91716 | 1619 # Pairplot 11-wide Image | 89995 | 1711 # Gaussian Weights 30 | 89828 | 167 # Pairplot 11-Wide Image One Color | 88001 |1827 # PCA Weights DF | 87972 | 29 # List - Pairplot K-Means 5 Features 11 Normal | | # List - Pairplot K-Means 5 Features 11 Uniform | | # List - Pairplot GMM 10/11 Uniform | | # List - Pairplot GMM 8/11 Normal | | # List - Pairplot Mean Shift 10/11 Uniform | | # Single Feature Choropleth | 79665 | 8307 # K-Means Cluster Maps (one row) | 71354 | 8311 # Map: Census Area Difference | 62681 | 8673 # Area Difference Plots | 62625 | 56 # Map: Montreal Zero Pop | 62539 | 86 # Map: FSA Image | 61419 | 1120 # Map: FSA Interactive | 54289 | 7130 *to image* # Map: Invalid FSAs | 39946 | 14343 *to single interactive map* # Map: CT Image | 38816 | 1130 # Map: ADA Image | 37685 | 1131 # Map: DA Image | 36518 | 1167 # Map: Halifax Final Image | 35715 | 1203 # Map: Halifax Interactive | 26765 | 8950 *to image* # Map: Vancouver Image | 25956 | 809 # Map: Vancouver Select Image | 25143 | 813 # Map: Vancouver Interactive | 14049 | 11094 *to image* # Map: DA Image | 12883 | 1166 # # ### Original df_loc (Canada) Description # # [Statistics Canada](https://www.statcan.gc.ca/eng/start) provides several geographic divisions for reporting of census statistics: # # Dissemination Area: DAs are small, relatively stable geographic unit composed of one or more adjacent dissemination blocks with an average population of 400 to 700 persons based on data from the previous Census of Population Program. It is the smallest standard geographic area for which all census data are disseminated. # # Aggregate Dissemination Area: ADAs cover the entire country and, where possible, have a population count between 5,000 and 15,000 people, and respect provincial, territorial, census division (CD), census metropolitan area (CMA) and census agglomeration (CA) with census tract (CT) boundaries in effect for the 2016 Census. # # Census Tract: CTs are small, relatively stable geographic areas that usually have a population of less than 10,000 persons, based on data from the previous Census of Population Program. They are located in census metropolitan areas and in census agglomerations that had a core population of 50,000 or more in the previous census. 
# # Census Metropolitan Area/ Census Agglomeration: A CMA or CA is formed by one or more adjacent municipalities centred on a population centre (known as the core). A CMA must have a total population of at least 100,000 of which 50,000 or more must live in the core based on adjusted data from the previous Census of Population Program. A CA must have a core population of at least 10,000 also based on data from the previous Census of Population Program. To be included in the CMA or CA, other adjacent municipalities must have a high degree of integration with the core, as measured by commuting flows derived from data on place of work from the previous Census Program. # # Census Subdivision: CSD is the general term for municipalities (as determined by provincial/territorial legislation) or areas treated as municipal equivalents for statistical purposes (e.g., Indian reserves, Indian settlements and unorganized territories). # # Census Consolidated Subdivision:A CCS is a group of adjacent census subdivisions within the same census division. Generally, the smaller, more densely-populated census subdivisions (towns, villages, etc.) are combined with the surrounding, larger, more rural census subdivision, in order to create a geographic level between the census subdivision and the census division. # # Census Division: CDs are a group of neighbouring municipalities joined together for the purposes of regional planning and managing common services (such as police or ambulance services). Census divisions are intermediate geographic areas between the province/territory level and the municipality (census subdivision). # # Economic Region: An ER is a grouping of complete census divisions (CDs), with one exception in Ontario, created as a standard geographic unit for analysis of regional economic activity. # # Note: CSDNAME = 'Toronto' gives the same area as the FSAs, which makes sense as this is probably the definition of 'Toronto' # # The meaning of prefixes for NAME (N), UID (U), PUID (PU), TYPE (T), and CODE (C): # * DA U Dissemination Area unique identifier (composed of the 2-digit province/territory unique identifier followed by the 2-digit census division code and the 4-digit dissemination area code) # * PR U,N Province or territory # * CD U,N,T Census Division # * CCS U,N Census Consolidated Subdivision # * CSD U,N,T Census Subdivision # * ER U,N Economic Region # * SAC T,C Statistical Area Classification: Part of are a component of a census metropolitan area, a census agglomeration, a census metropolitan influenced zone or the territories? # * CMA U,PU,N,T Census Metropolitan Area or Census Agglomeration name, PU Uniquely identifies the provincial or territorial part of a census metropolitan area or census agglomeration (composed of the 2-digit province or territory unique identifier followed by the 3-digit census metropolitan area or census agglomeration unique identifier) # * CT U,N Census Tract within census metropolitan area/census agglomeration # * ADA U Aggregate dissemination area unique identifier # # We will use census Distribution Areas as proxies for neighborhoods for cities in Canada. In previous work where the Forward Sortation Areas (first three characters of the postal code) were used as neighborhood proxies, the sizes of many areas were quite large (several kilometers across) and therefore are likely internally non-homogeneous from a features perspective at the walking-distance (500 m) length scale. 
To convert to neighborhood names we can look up the associated census tract as seen on [this](https://en.wikipedia.org/wiki/Demographics_of_Toronto_neighbourhoods) Wikipedia page. # # File lda_000b16g_e.gml was downloaded from the [Statistics Canada: Boundary Files](https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2016-eng.cfm) site. # # Exploring the gml file and computing the area and centroid of the dissemination areas can be done using the [geopandas module](https://geopandas.org/). Geopandas builds upon [osgeo](https://gdal.org/python/index.html), which can also be used to explore and compute with the gml file directly, but in testing geopandas was about 46 times faster than a naive osgeo approach, thanks to vectorization of many calculations (see Appendix). # # Latitude and Longitude need to be obtained from a projection with those units, e.g. [EPSG-4326](https://epsg.io/4326). Area can be calculated from an equal-area projection, e.g. [EPSG-6931](https://epsg.io/6931) (a geodesic area calculation would be more accurate for larger regions, but would have to come from an additional package such as [proj](https://proj.org/); since all regions here are small enough that the curvature of the earth is negligible, and altitude introduces an additional error likely comparable to or larger than the curvature error, we will proceed with the simpler equal-area projection calculation). The geometry is saved as text in the [Well-Known Text (WKT)](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) representation. # # Original Cities_Compared # + [markdown] tags=[] # Cities compared: # * Canada # * Toronto # * Montréal # * Vancouver # * Halifax # * United States of America # * New York City # * Boston # * Chicago # * San Francisco # * France # * Paris # * England # * London # # First, get a list of Forward Sortation Areas as proxies for neighborhoods for cities in Canada. # - # # Original parsing code # ### Osgeo package # # Before finding the geopandas library, I was using [gdal](https://gdal.org/python/index.html) (which is one of many dependencies of geopandas). Using gdal's ogr and osr modules in the osgeo package I read in the gml file, converted coordinates to accurately compute area, and converted coordinates again to latitude and longitude. As a guide to learning the library I adapted code from some [examples](https://pcjericks.github.io/py-gdalogr-cookbook/geometry.html#quarter-polygon-and-create-centroids). Conversion to geojson may be done explicitly as demonstrated [here](https://gis.stackexchange.com/questions/77974/converting-gml-to-geojson-using-python-and-ogr-with-geometry-transformation), but by the time I was going to convert to geojson I found it much easier with geopandas, and this approach was abandoned. # #### Parsing GML file # from osgeo import ogr # from osgeo import osr # # def parseGMLtoDF(fn_gml,limit=None): # '''Reads in a GML file and outputs a DataFrame # # Adds columns for Geometry (in WKT format), Latitude (of centroid), Longitude (of centroid) and Area (in square meters) # ''' # # Add lightweight progress bar, source copied from GitHub # import ipypb # # # Read in the file # source = ogr.Open(fn_gml) # layer = source.GetLayer(0) # there is only one Layer (the FeatureCollection) # # # Get a list of field names to extract (also could be gotten from schema e.g. 
lda_000b16g_e.xsd) # # Fields are in order (sequence) and none are required (minOccurs=0) # layerDefinition = layer.GetLayerDefn() # layerFields = [layerDefinition.GetFieldDefn(i).GetName() for i in range(layerDefinition.GetFieldCount())] # # # Initialize the output dataframe # df = pd.DataFrame(columns=[*layerFields, 'Geometry', 'Latitude', 'Longitude', 'Area']) # # # Extract data from the features # parse_limit = layer.GetFeatureCount() # if not limit is None: # parse_limit = limit # # for i in ipypb.track(range(parse_limit)): # # Get the feature to be processed # feature = layer.GetNextFeature() # # # Copy all fields into an empty dataframe # df_tmp = pd.DataFrame(columns=df.columns) # # # Copy all fields individually, in case some are missing # for i, fieldname in enumerate(layerFields): # if feature.IsFieldSet(i): # df_tmp.loc[0,fieldname] = feature.GetFieldAsString(fieldname) # # # Get the latitude and longitude # inref = feature.GetGeometryRef().GetSpatialReference() # EPSG 3347 # llref = osr.SpatialReference() # llref.ImportFromEPSG(4326) # coordinates for this geometry are latitude, longitude # lltransform = osr.CoordinateTransformation(inref, llref) # geom = feature.GetGeometryRef().Clone() # geom.Transform(lltransform) # centroid = geom.Centroid() # df_tmp['Latitude'] = centroid.GetX() # df_tmp['Longitude'] = centroid.GetY() # df_tmp['Geometry'] = geom.ExportToWkt() # # # Get the area by converting to a locally-appropriate coordinate system # inref = feature.GetGeometryRef().GetSpatialReference() # arref = osr.SpatialReference() # arref.ImportFromEPSG(6931) # TODO: let this choose appropriate projection based on centroid for full-globe applicability, or alter to be a geodesic calculation. Lambert Cylindrical Equal Area 6933, U.S. National Atlas Equal Area Projection 2163, NSIDC EASE-Grid North 3408, WGS 84 / NSIDC EASE-Grid 2.0 North 6931, WGS 84 / NSIDC EASE-Grid 2.0 South 6932, WGS 84 / NSIDC EASE-Grid 2.0 Global/Temperate 6933 # artransform = osr.CoordinateTransformation(inref, arref) # geom = feature.GetGeometryRef().Clone() # Fresh instance to prevent error accumulation from multiple transforms # geom.Transform(artransform) # df_tmp['Area'] = geom.Area() # TODO: investigate proj module, related to osgeo, for geodesic area # # # Add the feature dataframe to the output # df = df.append(df_tmp, ignore_index=True) # # return df # #### Viewing fields # # layerDefinition = s2.GetLayerDefn() # # for i in range(layerDefinition.GetFieldCount()): # print(layerDefinition.GetFieldDefn(i).GetName(),f" \t",s2.GetFeature(0).GetFieldAsString(s2.GetFeature(0).GetFieldIndex(layerDefinition.GetFieldDefn(i).GetName()))) df_CA_DA = parseGMLtoDF('lda_000b16g_e.gml') display(df_CA_DA.head(2)) print(df_CA_DA.shape) # #### Layered parsing code # # From when I thought dataframe copying was the limiting step # # # Read in the file # source = ogr.Open('lda_000b16g_e.gml') # layer = source.GetLayer(0) # there is only one Layer (the FeatureCollection) # # # Get a list of field names to extract (see lda_000b16g_e.xsd) # # Fields are in order (sequence) and none are required (minOccurs=0) # layerDefinition = layer.GetLayerDefn() # layerFields = [layerDefinition.GetFieldDefn(i).GetName() for i in range(layerDefinition.GetFieldCount())] # # # %%time # import ipypb # Lightweight progress bar, source copied from GitHub # # # Initialize the output dataframe - REUSE FOR ALL OF CANADA # df_CA_DA = pd.DataFrame(columns=[*layerFields, 'Geometry', 'Latitude', 'Longitude', 'Area']) # # # Only process a few entries # 
parse_limit = 5 # # # Setup for 2-step dataframe appending, should be ~O(n^(1+e)) instead of ~O(n^2) # n_features = layer.GetFeatureCount() # #max_n_merge = 1000 # n_merge = n_features**0.5//1 # #n_merge = min(n_features**0.5//1,max_n_merge) # How many rows to process before a merge # df_tmp_collect = pd.DataFrame(columns=df_CA_DA.columns) # # # Extract data from the features # layer.ResetReading() # for i in ipypb.track(range(layer.GetFeatureCount())): # # Get the feature to be processed # feature = layer.GetNextFeature() # # # Copy all fields into an empty dataframe # df_tmp = pd.DataFrame(columns=df_CA_DA.columns) # # # Copy all fields individually, in case some are missing # for i, fieldname in enumerate(layerFields): # if feature.IsFieldSet(i): # df_tmp.loc[0,fieldname] = feature.GetFieldAsString(fieldname) # # # Get the latitude and longitude # inref = feature.GetGeometryRef().GetSpatialReference() # EPSG 3347 # llref = osr.SpatialReference() # llref.ImportFromEPSG(4326) # coordinates for this geometry are latitude, longitude # lltransform = osr.CoordinateTransformation(inref, llref) # geom = feature.GetGeometryRef().Clone() # geom.Transform(lltransform) # centroid = geom.Centroid() # df_tmp['Latitude'] = centroid.GetX() # df_tmp['Longitude'] = centroid.GetY() # df_tmp['Geometry'] = geom.ExportToWkt() # # # Get the area by converting to a locally-appropriate coordinate system # inref = feature.GetGeometryRef().GetSpatialReference() # arref = osr.SpatialReference() # arref.ImportFromEPSG(6931) # Lambert Cylindrical Equal Area 6933, U.S. National Atlas Equal Area Projection 2163, NSIDC EASE-Grid North 3408, WGS 84 / NSIDC EASE-Grid 2.0 North 6931, WGS 84 / NSIDC EASE-Grid 2.0 South 6932, WGS 84 / NSIDC EASE-Grid 2.0 Global/Temperate 6933 # artransform = osr.CoordinateTransformation(inref, arref) # geom = feature.GetGeometryRef().Clone() # Fresh instance to prevent error accumulation from multiple transforms # geom.Transform(artransform) # df_tmp['Area'] = geom.Area() # # # Add the feature dataframe to the output # #df_tmp_collect = df_tmp_collect.append(df_tmp, ignore_index=True) # df_tmp_collect = pd.concat([df_tmp_collect, df_tmp], ignore_index=True) # # # Check if 2nd-step dataframe needs to be reset # if df_tmp_collect.shape[0] >= n_merge: # #df_CA_DA = df_CA_DA.append(df_tmp_collect, ignore_index=True) # df_CA_DA = pd.concat([df_CA_DA,df_tmp_collect], ignore_index=True) # df_tmp_collect = pd.DataFrame(columns=df_CA_DA.columns) # # if not parse_limit is None: # parse_limit -= 1 # if parse_limit<=0: # break # #df_CA_DA = df_CA_DA.append(df_tmp_collect, ignore_index=True) # df_CA_DA = pd.concat([df_CA_DA,df_tmp_collect], ignore_index=True) # # ##### Timing Notes # # limit | n_merge | s/it | s total # ---|---|---|--- # 500 | 1 | 0.13 | 1:06 # 500 | 2 | 0.13 | 1:02 # 500 | 10 | 0.12 | 1:01 # 500 | 50 | 0.12 | 0:59.2 # 500 | 237 | 0.11 | 0:57.5 # 500 | 237 | 0.13 | 1:03 # 1000 | 1 | 0.25 | 4:05 # 1000 | 237 | 0.23 | 3:52 # maybe this was max instead of min? # 1000 | 1000 | 0.23 | 3:54 # 1000 | 1000 | 0.24 | 3:55 # pd.concat instead of dataframe.append # 1000 | 237 | 0.22 | 3:38 # pd.concat instead of dataframe.append # 1000 | - | 0.27 | 4:25 # original, no 2-levels # 500 | - | 0.15 | 1:15 # original, no 2-levels # 500 | 237 | 0.07 | 0:34.3 # pd.concat instead of dataframe.append # 2000 | 237 | 0.07 | 2:13 # pd.concat instead of dataframe.append # Have not been resetting df_CA_DA! 
# 4000 | 237 | 0.06 | 4:15 # pd.concat instead of dataframe.append # Resetting df from here on, and concat too, and GetNextFeature instead of GetFeature(int) which may have been the bottleneck then... # 1000 | 237 | 0.06 | 1:01 # 500 | 237 | 0.06 | 0:31.3 # 1000 | 50 | 0.06 | 1:00 # 1000 | 1 | 0.06 | 1:04 # 1000 | 1 | 0.23 | 3:52 # Try once with GetFeature(i), yes, this was the culprit, it must parse anew each time. # 4000 | 1 | 0.09 | 5:41 # 4000 | 5 | 0.06 | 4:11 # 4000 | 20 | 0.06 | 3:41 # 4000 | 50 | 0.06 | 3:54 # 4000 | 50 | 0.06 | 3:57 # 4000 | 100 | 0.06 | 3:58 # 4000 | 175 | 0.06 | 4:03 # 4000 | 237 | 0.06 | 4:08 # 4000 | 500 | 0.06 | 4:18 # 4000 | 500 | 0.06 | 3:50 # 4000 | 1000 | 0.06 | 4:16 # 4000 | 4000 | 0.07 | 4:58 # Oh, also hadn't reset the counter... adding that before last 500 and 30, that's been introducing variability too # 4000 | 1 | 0.06 | 4:16 # 4000 | 30 | 0.06 | 4:17, 4:09 # Might change when tabs are changed (first with changing, second staying on same tab) # 4000 | 60 | 0.06 | 3:53, 4:03 # 4000 | 200 | 0.06 | 3:56 # 4000 | 500 | 0.06 | 3:44 # 4000 | 1000 | 0.06 | 3:55 # 4000 | 4000 | 0.06 | 4:03 # 4000 | - | 0.06 | 4:04 # original, no 2-levels, with GetNextFeature # # Gain is marginal with the double layering... it might be important for very large sets, but at 4000 features it results in at most a 10% speedup. Note that the geopandas speedup is ~4,600%. # #### Getting an appropriate EPSG projection # # Using UTM projections (of which there are 60, each subtending 6 degrees of longitude) is a commonly proposed method for area calculation. I went with the simpler EPSG-6931/2/3, which is easier because one zone encompasses all of Canada (though with an unknown accuracy tradeoff). The UTM-zone snippet shown below (after the geodesic sketch) is included just for reference. 
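# As a cross-check on any projected area, a geodesic area can be computed directly on the ellipsoid with pyproj's Geod class. This is a minimal sketch, not part of the original analysis; it assumes pyproj >= 2.3 and a shapely polygon whose coordinates are in longitude/latitude order (EPSG-4326). The small square below is illustrative only, not real census geometry.

# +
from shapely.geometry import Polygon
from pyproj import Geod

# Illustrative square roughly 1 km across near downtown Toronto (longitude, latitude pairs)
poly_lonlat = Polygon([(-79.40, 43.65), (-79.39, 43.65), (-79.39, 43.66), (-79.40, 43.66)])

geod = Geod(ellps="WGS84")
area_m2, perimeter_m = geod.geometry_area_perimeter(poly_lonlat)
print(abs(area_m2))  # area in square metres; the sign encodes ring orientation, so take abs()
# -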
# # srcSR = osr.SpatialReference() # srcSR.ImportFromEPSG(4326) # WGS84 Geographic # destSR = osr.SpatialReference() # # lyr.ResetReading() # for feat in lyr: # geom = feat.GetGeometryRef() # if not geom.IsEmpty(): # make sure the geometry isn't empty # geom.AssignSpatialReference(srcSR) # you only need to do this if the shapefile isn't set or is set wrong # env = geom.GetEnvelope() # get the Xmin, Ymin, Xmax, Ymax bounds # CentX = ( env[0] + env[2] ) / 2 # calculate the centre X of the whole geometry # Zone = int((CentX + 180)/6) + 1 # see http://gis.stackexchange.com/questions/13291/computing-utm-zone-from-lat-long-point # EPSG = 32600 + Zone # get the EPSG code from the zone and the constant 32600 (all WGS84 UTM North start with 326) # destSR.ImportFromEPSG(EPSG) # create the 'to' spatial reference # geom.TransformTo(destSR) # project the geometry # print geom.GetArea() # get the area in square metres # #### Read in the Canadian data file: Original # gdf = geopandas.read_file('lda_000b16g_e.gml') # gdf.rename_geometry('Geometry', inplace=True) # Default geometry column name is 'geometry'; changed for consistent capitalization of columns # gdf.set_geometry('Geometry') # Renaming is insufficient; this sets special variable gdf.geometry = gdf['Geometry'] # gdf['Area'] = gdf['Geometry'].to_crs(epsg=6931).area # gdf['Centroid'] = gdf['Geometry'].centroid # gdf['Geometry'] = gdf['Geometry'].to_crs(epsg=4326) # gdf['Centroid'] = gdf['Centroid'].to_crs(epsg=4326) # Only the set geometry is converted with gdf.to_crs(); all other geometry-containing columns must be converted explicitly; here we convert all columns explicitly # gdf['Centroid Latitude'] = gdf['Centroid'].geometry.y # gdf['Centroid Longitude'] = gdf['Centroid'].geometry.x # gdf.drop(columns = 'Centroid', inplace=True) # Because WKT Point cannot be serialized to JSON, we drop the Centroid column and keep only its float components # gdf_CA_DA = gdf # Rename because we may be generating additional variables # gdf_CA_DA.head(2) # # Old Vancouver Extended # Old Vancouver Extended: cityname = 'Vancouver' gdf_select = selectRegion(gdf_CA_DA, 'CDNAME', method='contains', names='Greater Vancouver') mp = folium.Map(location=getCityByName(cityname)['centroid']) mp.fit_bounds(getGDFBounds(gdf_select.to_crs(epsg=4326))) m = folium.GeoJson( gdf_select.to_crs(epsg=4326).to_json(), style_function=lambda feature: { 'fillColor': 'yellow', 'fillOpacity': 0.8, 'color':'black', 'weight': 1, 'opacity': 0.2, }, ).add_to(mp) m.add_child(folium.features.GeoJsonTooltip(tooltip_DA)) mp # # Unused Choropleth map development # + # Not used - this naive choropleth displays only one map and doesn't allow much scale customization city_index = 1 # create a plain world map city_map = folium.Map(location=cities[city_index]['centroid'], control_scale=True) city_map.fit_bounds(cities[city_index]['bounds']) folium.Choropleth( geo_data=cities[city_index]['geojson'], data=cities[city_index]['data'], columns=['DAUID', 'Area'], key_on='feature.properties.DAUID', fill_color='YlOrRd', fill_opacity=0.7, line_opacity=0.2, legend_name='Area [square meters]' ).add_to(city_map) # display map city_map # + # Not used - links choropleth to a layercontrol, but does not get the legend on the correct layer from branca.element import MacroElement from jinja2 import Template class BindColormap(MacroElement): """Binds a colormap to a given layer. Parameters ---------- colormap : branca.colormap.ColorMap The colormap to bind. 
""" def __init__(self, layer, colormap): super(BindColormap, self).__init__() self.layer = layer self.colormap = colormap self._template = Template(u""" {% macro script(this, kwargs) %} {{this.colormap.get_name()}}.svg[0][0].style.display = 'block'; {{this._parent.get_name()}}.on('overlayadd', function (eventLayer) { if (eventLayer.layer == {{this.layer.get_name()}}) { {{this.colormap.get_name()}}.svg[0][0].style.display = 'block'; }}); {{this._parent.get_name()}}.on('overlayremove', function (eventLayer) { if (eventLayer.layer == {{this.layer.get_name()}}) { {{this.colormap.get_name()}}.svg[0][0].style.display = 'none'; }}); {% endmacro %} """) # noqa # https://gitter.im/python-visualization/folium?at=5a36090a03838b2f2a04649d print('Area Selection Criterion: CSDNAME contains the city name') import branca.element as bre from branca.colormap import LinearColormap f = bre.Figure() sf = [[],[],[]] city_map = [[],[],[]] colormap = [[],[],[]] cp = [[],[],[]] for i, city in enumerate(cities): sf[i] = f.add_subplot(1,len(cities),1+i) city_map[i] = folium.Map(location=city['centroid'], control_scale=True) sf[i].add_child(city_map[i]) title_html = '''

    {}

    '''.format(city['name']) city_map[i].get_root().html.add_child(folium.Element(title_html)) city_map[i].fit_bounds(city['bounds']) cp[i] = folium.Choropleth(geo_data=city['geojson'], data=city['data'], columns=['DAUID', 'Area'], key_on='feature.properties.DAUID', fill_color='YlOrRd', fill_opacity=0.7, line_opacity=0.2 ) city_map[i].add_child(cp[i]) city_map[i].add_child(folium.map.LayerControl()) city_map[i].add_child(BindColormap(cp[i],cp[i].color_scale)) city['map'] = city_map[i] display(f) # - # # To get a list of functions in a module: # from inspect import getmembers, isfunction # # from somemodule import foo # print(getmembers(foo, isfunction)) # Postal Codes import requests # from bs4 import BeautifulSoup # had to install to environment in Anaconda import lxml # had to install to environment in Anaconda, backdated to 4.6.1 (4.6.2 current) for pandas read_html() import html5lib # had to install to environment in Anaconda (1.1 current) for pandas read_html() # List of coordinate tuples of nonzero elements of matrix r list(zip(*np.nonzero(r))) geopandas.show_versions() # ### Header that won't display on GitHub (tags not supported) # # #
    Applied Data Science Capstone
    # # ####
    In completion of requirements for the IBM Data Science Professional Certificate on Coursera
    # #
    # #
    # # ### Loading/Saving # #### Loading/Saving Without Compression # Standalone # # import dill # with open('GDF_FSA-DA-D.db','wb') as file: # dill.dump(gdf_union,file) # with open('GDF_FSA-DA-D_times.db','wb') as file: # dill.dump(times,file) # with open('GDF_FSA-DA-D_areas.db','wb') as file: # dill.dump(areas,file) # # import dill # with open('GDF_FSA-DA-D.db','r') as file: # gdf_union = dill.load(file) # with open('GDF_FSA-DA-D_times.db','r') as file: # times = dill.load(file) # with open('GDF_FSA-DA-D_areas.db','r') as file: # areas = dill.load(file) # Option Block # # try: # Results have been calculated previously, load them to save time # with open(DIR_RESULTS+'GDF_FSA-DA_D.db','rb') as file: # gdf_union = dill.load(file) # with open(DIR_RESULTS+'GDF_FSA-DA_D_times.db','rb') as file: # times = dill.load(file) # with open(DIR_RESULTS+'GDF_FSA-DA_D_areas.db','rb') as file: # areas = dill.load(file) # print('Results loaded from file') # # except (FileNotFoundError, IOError): # Results not found in file, regenerate them # gdf_union, times, areas = intersectGDF(gdf_CA_FSA_D,'CFSAUID',gdf_CA_DA_D,'DAUID',verbosity=1) # # with open(DIR_RESULTS+'GDF_FSA-DA_D.db','wb+') as file: # dill.dump(gdf_union,file) # with open(DIR_RESULTS+'GDF_FSA-DA_D_times.db','wb+') as file: # dill.dump(times,file) # with open(DIR_RESULTS+'GDF_FSA-DA_D_areas.db','wb+') as file: # dill.dump(areas,file) # print('Results saved to file') # #### Loading/saving with compression # # Original load/compute/save logic for overlap computation: # # try: # Results have been calculated previously, load them to save time # with open(DIR_RESULTS+'GDF_FSA-DA_D_times.db.gz','rb') as file: # times = dill.loads(gzip.decompress(file.read())) # with open(DIR_RESULTS+'GDF_FSA-DA_D_areas.db.gz','rb') as file: # areas = dill.loads(gzip.decompress(file.read())) # try: # If not available (e.g. on github, due to size), reconstruct the intersection gdf # with open(DIR_RESULTS+'GDF_FSA-DA_D.db.gz','rb') as file: # gdf_union = dill.loads(gzip.decompress(file.read())) # except (FileNotFoundError, IOError): # print('Recomputing gdf_union using areas loaded from file') # gdf_union_areas, times_areas, areas_areas= intersectGDFareas(gdf_CA_FSA_D,'CFSAUID',gdf_CA_DA_D,'DAUID',areas_in=areas,verbosity=1) # gdf_union = gdf_union_areas # with open(DIR_RESULTS+'GDF_FSA-DA_D.db.gz','wb+') as file: # file.write(gzip.compress(dill.dumps(gdf_union))) # print('Results loaded from file') # # except (FileNotFoundError, IOError): # Results not found in file, regenerate them # gdf_union, times, areas = intersectGDF(gdf_CA_FSA_D,'CFSAUID',gdf_CA_DA_D,'DAUID',verbosity=1) # # with open(DIR_RESULTS+'GDF_FSA-DA_D.db.gz','wb+') as file: # file.write(gzip.compress(dill.dumps(gdf_union))) # with open(DIR_RESULTS+'GDF_FSA-DA_D_times.db.gz','wb+') as file: # file.write(gzip.compress(dill.dumps(times))) # with open(DIR_RESULTS+'GDF_FSA-DA_D_areas.db.gz','wb+') as file: # file.write(gzip.compress(dill.dumps(areas))) # print('Results saved to file') # ## ExtendBounds Development # ### Original extendBounds Function and Testing def extendBounds(bounds,method='nearestLeadingDigit',scale=10): '''Extend bounds (low, high) to give round numbers for scalebars The low bound is decreased and the high bound is increased according to method to the first number satisfying the method conditions. 
Returns a new bound which includes the old bounds in its entirety Parameters ---------- bounds: (low_bound, high_bound) list or tuple method: str describing the extension method 'nearestLeadingDigit': Bounds are nearest numbers with leading digit followed by zeros 'nearestPower': Bounds are nearest powers of scale (scale must be > 1). For negative numbers, the sign and direction are reversed, the extension performed, then the sign of the result is reversed back. 'nearestMultiple': Bounds are nearest multiple of scale (scale must be > 0) 'round': Bounds are rounded scale: numeric as described in method options Returns ------- 2-element tuple of extended bounds e.g. (newlow,newhigh) ''' if bounds[0]>bounds[1]: print('bounds must be ordered from least to greatest') return None if method=='nearestLeadingDigit': iszero = np.array(bounds)==0 isnegative = np.array(bounds) < 0 offsets = [1 if isnegative[0] else 0, 0 if isnegative[1] else 1] power = [0 if z else np.floor(np.log10(abs(b))) for b, z in zip(bounds, iszero)] firstdigit = [abs(b)//np.power(10,p) for b, p in zip(bounds, power)] exceeds = [abs(b)>f*np.power(10,p) for b, f, p in zip(bounds, firstdigit, power)] newbounds = [abs(b) if not t else (f+o)*np.power(10,p) for b, t, n, f, o, p in zip(bounds, exceeds, isnegative, firstdigit, offsets, power)] newbounds = [-n if t else n for n, t in zip(newbounds,isnegative)] elif method=='nearestPower': try: scale = float(scale) if scale<=1: print('scale should be greater than 1') return None except ValueError: print('scale should be a number greater than 1') return None isnegative = np.array(bounds) < 0 roundfuns = [np.ceil if isnegative[0] else np.floor, np.floor if isnegative[1] else np.ceil] newbounds = [0 if b==0 else np.power(scale, r(np.log10(abs(b))/np.log10(scale))) for b, r in zip(bounds,roundfuns)] newbounds = [-n if t else n for n, t in zip(newbounds,isnegative)] elif method=='nearestMultiple': try: scale = float(scale) if scale<=0: print('scale should be greater than 0') return None except ValueError: print('scale should be a number greater than 0') return None newbounds = [scale*(np.floor(bounds[0]/scale)), scale*(np.ceil(bounds[1]/scale))] elif method=='round': newbounds = [np.floor(bounds[0]), np.ceil(bounds[1])] else: print('Invalid method, see help(extendBounds)') return None return newbounds # print("Testing invalid method") # print(" Expect errors:") # print(f"{extendBounds([11,130],'invalid')}") # print() # # print("Testing method 'nearestLeadingDigit'") # print(" Expect errors:") # print(f"{extendBounds([9,-930],'nearestLeadingDigit')}") # print(f"{extendBounds([-9,-930],'nearestLeadingDigit')}") # print(" Expect success:") # print(f"{extendBounds([11,130],'nearestLeadingDigit')}",) # print(f"{extendBounds([11,130],'nearestLeadingDigit',-1)}") # print(f"{extendBounds([9,930],'nearestLeadingDigit')}") # print(f"{extendBounds([-9,930],'nearestLeadingDigit')}") # print(f"{extendBounds([-990,-930],'nearestLeadingDigit')}") # print(f"{extendBounds([-990,0.05],'nearestLeadingDigit')}") # print(f"{extendBounds([0,0.052],'nearestLeadingDigit')}") # print() # # print("Testing method 'nearestPower'") # print(" Expect errors:") # print(f"{extendBounds([11,130],'nearestPower',-2)}") # print(f"{extendBounds([11,130],'nearestPower',0)}") # print(f"{extendBounds([11,130],'nearestPower',1)}") # print(f"{extendBounds([-11,-130],'nearestPower',10)}") # print(" Expect success:") # print(f"{extendBounds([11,130],'nearestPower')}") # print(f"{extendBounds([10,100],'nearestPower')}") # 
print(f"{extendBounds([11,130],'nearestPower',1.1)}") # print(f"{extendBounds([11,130],'nearestPower',2)}") # print(f"{extendBounds([11,130],'nearestPower',10)}") # print(f"{extendBounds([11,130],'nearestPower',10.)}") # print(f"{extendBounds([-11,130],'nearestPower',10)}") # print(f"{extendBounds([-5100,-130],'nearestPower',10)}") # print(f"{extendBounds([-.0101,-0.00042],'nearestPower',10)}") # print(f"{extendBounds([0,0.00042],'nearestPower',10)}") # print() # # print("Testing method 'nearestMultiple'") # print(" Expect errors:") # print(f"{extendBounds([11,132],'nearestMultiple',-2)}") # print(f"{extendBounds([11,132],'nearestMultiple',0)}") # print(f"{extendBounds([0,-10],'nearestMultiple',100)}") # print(" Expect success:") # print(f"{extendBounds([11,132],'nearestMultiple')}") # print(f"{extendBounds([10,130],'nearestMultiple')}") # print(f"{extendBounds([11.55,132.55],'nearestMultiple',0.1)}") # print(f"{extendBounds([11.55,132.55],'nearestMultiple',1)}") # print(f"{extendBounds([11.55,132.55],'nearestMultiple',100)}") # print(f"{extendBounds([-11,132],'nearestMultiple',10)}") # print(f"{extendBounds([-1121,-132],'nearestMultiple',10)}") # print(f"{extendBounds([-10,-10],'nearestMultiple',10)}") # print(f"{extendBounds([-10,-10],'nearestMultiple',100)}") # print() # # print("Testing method 'round'") # print(" Expect errors:") # print(f"{extendBounds([-11.1,-132.1],'round')}") # print(" Expect success:") # print(f"{extendBounds([11.1,132.1],'round')}") # print(f"{extendBounds([10,130],'round')}") # print(f"{extendBounds([11.1,132.1],'round',-2)}") # print(f"{extendBounds([-11.1,132.1],'round')}") # print(f"{extendBounds([-1100.1,-132.1],'round')}") # Testing invalid method # Expect errors: # Invalid method, see help(extendBounds) # None # # Testing method 'nearestLeadingDigit' # Expect errors: # bounds must be ordered from least to greatest # None # bounds must be ordered from least to greatest # None # Expect success: # [10.0, 200.0] # [10.0, 200.0] # [9, 1000.0] # [-9, 1000.0] # [-1000.0, -900.0] # [-1000.0, 0.05] # [0, 0.06] # # Testing method 'nearestPower' # Expect errors: # scale should be greater than 1 # None # scale should be greater than 1 # None # scale should be greater than 1 # None # bounds must be ordered from least to greatest # None # Expect success: # [10.0, 1000.0] # [10.0, 100.0] # [10.834705943388395, 142.04293198443193] # [8.0, 256.0] # [10.0, 1000.0] # [10.0, 1000.0] # [-100.0, 1000.0] # [-10000.0, -100.0] # [-0.1, -0.0001] # [0, 0.001] # # Testing method 'nearestMultiple' # Expect errors: # scale should be greater than 0 # None # scale should be greater than 0 # None # bounds must be ordered from least to greatest # None # Expect success: # [10.0, 140.0] # [10.0, 130.0] # [11.5, 132.6] # [11.0, 133.0] # [0.0, 200.0] # [-20.0, 140.0] # [-1130.0, -130.0] # [-10.0, -10.0] # [-100.0, -0.0] # # Testing method 'round' # Expect errors: # bounds must be ordered from least to greatest # None # Expect success: # [11.0, 133.0] # [10.0, 130.0] # [11.0, 133.0] # [-12.0, 133.0] # [-1101.0, -132.0] # ### New extendBounds Function # + def extendBound(bound,direction='up',method='nearestLeadingDigit',scale=10): '''Extend bound to next 'round' number Parameters ---------- bound: float or float castable number or a list thereof direction: {'up','down',nonzero number} or a list of these values indicating the direction to round in method: str describing the extension method 'nearestLeadingDigit': Bound is nearest numbers with leading digit followed by zeros 'nearestPower': Bound 
is nearest integer power of scale (scale must be > 1). For negative numbers, the sign and direction are reversed, the extension performed, then the sign of the result is reversed back. 'nearestMultiple': Bound is nearest multiple of scale (scale must be > 0) 'round': Bound is rounded using the default method scale: numeric as described in method options or a list thereof Returns ------- float: the extended bound Notes ----- All inputs, if not single-valued, must be lists of length equal to input bound TODO: extend so that method may also be a list TODO: replace prints with raising errors ''' import numpy as np # Check and adjust the length of inputs unlist_bound = False if not(type(bound) in {list,tuple,range}): bound = [bound] unlist_bound = True acceptable_len = set((1,len(bound))) if not(type(direction) in {list,tuple,range}): direction = [direction] if not(len(direction) in acceptable_len): print('"direction" must have length 1 or length equal to the length of "bound"') return None if (type(method) in {str}): method = [method] if not(len(method) in acceptable_len): print('"method" must have length 1 or length equal to the length of "bound"') return None if not(type(scale) in {list,tuple,range}): scale = [scale] if not(len(scale) in acceptable_len): print('"scale" must have length 1 or length equal to the length of "bound"') return None if len(bound)>1: if len(direction)==1: direction = [direction[0] for b in bound] if len(scale)==1: scale = [scale[0] for b in bound] # If multiple methods are specified, recursively call this function for each method and reassemble results if len(bound)>1 and len(method)>1: ret = np.array([None for b in bound]) for m in list(set(method)): ind = np.where(np.array(method)==m) ret[ind] = extendBound(list(np.array(bound)[ind]),list(np.array(direction)[ind]),m,list(np.array(scale)[ind])) return list(ret) # Convert direction to a logical array roundup try: roundup = [True if d=='up' else False if d=='down' else True if float(d)>0 else False if float(d)<0 else None for d in direction] except: print('direction must be "up", "down", or a non-negative number') return None if any([r==None for r in roundup]): print('direction must be "up", "down", or a non-negative number') return None # Cases for multiple methods handled above, return to string method method = method[0] # Execute the conversions if method=='nearestLeadingDigit': iszero = np.array(bound)==0 isnegative = np.array(bound) < 0 offsets = np.logical_xor(roundup, isnegative) power = [0 if z else np.floor(np.log10(abs(b))) for b, z in zip(bound, iszero)] firstdigit = [abs(b)//np.power(10,p) for b, p in zip(bound, power)] exceeds = [abs(b)>f*np.power(10,p) for b, f, p in zip(bound, firstdigit, power)] newbound = [abs(b) if not t else (f+o)*np.power(10,p) for b, t, n, f, o, p in zip(bound, exceeds, isnegative, firstdigit, offsets, power)] newbound = [-n if t else n for n, t in zip(newbound,isnegative)] elif method=='nearestPower': try: scale = [float(s) for s in scale] if any([s<=1 for s in scale]): print('scale should be greater than 1') return None except ValueError: print('scale should be a number or list of numbers greater than 1') return None isnegative = np.array(bound) < 0 offsets = np.logical_xor(roundup, isnegative) roundfuns = [np.ceil if o else np.floor for o in offsets] newbound = [0 if b==0 else np.power(s, r(np.log10(abs(b))/np.log10(s))) for b, r, s in zip(bound,roundfuns,scale)] newbound = [-n if t else n for n, t in zip(newbound,isnegative)] elif method=='nearestMultiple': try: scale = 
[float(s) for s in scale] if any([s<=0 for s in scale]): print('scale should be greater than 0') return None except ValueError: print('scale should be a number or list of numbers greater than 0') return None roundfuns = [np.ceil if r else np.floor for r in roundup] newbound = [s*(r(b/s)) for b, r, s in zip(bound,roundfuns,scale)] elif method=='round': roundfuns = [np.ceil if r else np.floor for r in roundup] newbound = [f(b) for b, f in zip(bound, roundfuns)] else: print('Invalid method, see help(extendBound)') return None return newbound[0] if unlist_bound else newbound def extendBounds(bounds,method='nearestLeadingDigit',scale=10): if bounds[0]>bounds[1]: print('bounds must be ordered from least to greatest') return None return extendBound(bounds,direction=['down','up'],method=method,scale=scale) # - # ### Newest extendBounds and testing: list-castable # + def extendBound(bound,direction='up',method='nearestLeadingDigit',scale=10): '''Extend bound to next 'round' number Parameters ---------- bound: float or float castable number or a list thereof direction: {'up','down',nonzero number} or a list of these values indicating the direction to round in method: str describing the extension method 'nearestLeadingDigit': Bound is nearest numbers with leading digit followed by zeros 'nearestPower': Bound is nearest integer power of scale (scale must be > 1). For negative numbers, the sign and direction are reversed, the extension performed, then the sign of the result is reversed back. 'nearestMultiple': Bound is nearest multiple of scale (scale must be > 0) 'round': Bound is rounded using the default method scale: numeric as described in method options or a list thereof Returns ------- float: the extended bound Notes ----- All inputs, if not single-valued, must be list-castable and of equal length If all inputs are single-valued, the output is a float, otherwise it is a list of floats ''' import numpy as np # Check and adjust the length of inputs unlist = False try: bound = list(bound) except: try: bound = [bound] unlist = True except: print("Input 'bound' must be numeric or convertible to list type.") return None try: if type(direction)==str: direction = [direction] direction = list(direction) except: try: direction = [direction] except: print("Input 'direction' must be a string or nonzero number or convertible to list type.") return None try: if type(method)==str: method = [method] method = list(method) except: try: method = [method] except: print("Input 'method' must be a string or convertible to list type.") return None try: scale = list(scale) except: try: scale = [scale] except: print("Input 'scale' must be numeric or convertible to list type.") return None inputs = [bound, direction, method, scale] lengths = [len(i) for i in inputs] set_lengths = set(lengths) max_len = max(set_lengths) set_lengths.remove(1) if len(set_lengths)>1: print('Inputs must be of the same length or of length one. See help(extendBound)') return None if max_len>1: # can this be converted to a looped statement? 
if len(bound)==1: bound = bound*max_len if len(direction)==1: direction = direction*max_len if len(method)==1: method = method*max_len if len(scale)==1: scale = scale*max_len unlist = False # If multiple methods are specified, recursively call this function for each method and reassemble results if len(bound)>1 and len(set(method))>1: ret = np.array([None for b in bound]) for m in list(set(method)): ind = np.where(np.array(method)==m) ret[ind] = extendBound(list(np.array(bound)[ind]),list(np.array(direction)[ind]),m,list(np.array(scale)[ind])) return list(ret) # Convert direction to a logical array roundup try: roundup = [True if d=='up' else False if d=='down' else True if float(d)>0 else False if float(d)<0 else None for d in direction] except: print('direction must be "up", "down", or a non-negative number') return None if any([r==None for r in roundup]): print('direction must be "up", "down", or a non-negative number') return None # Cases for multiple methods handled above, return to string method method = method[0] # Execute the conversions if method=='nearestLeadingDigit': iszero = np.array(bound)==0 isnegative = np.array(bound) < 0 offsets = np.logical_xor(roundup, isnegative) power = [0 if z else np.floor(np.log10(abs(b))) for b, z in zip(bound, iszero)] firstdigit = [abs(b)//np.power(10,p) for b, p in zip(bound, power)] exceeds = [abs(b)>f*np.power(10,p) for b, f, p in zip(bound, firstdigit, power)] newbound = [abs(b) if not t else (f+o)*np.power(10,p) for b, t, n, f, o, p in zip(bound, exceeds, isnegative, firstdigit, offsets, power)] newbound = [-n if t else n for n, t in zip(newbound,isnegative)] elif method=='nearestPower': try: scale = [float(s) for s in scale] if any([s<=1 for s in scale]): print('scale should be greater than 1') return None except ValueError: print('scale should be a number or list of numbers greater than 1') return None isnegative = np.array(bound) < 0 offsets = np.logical_xor(roundup, isnegative) roundfuns = [np.ceil if o else np.floor for o in offsets] newbound = [0 if b==0 else np.power(s, r(np.log10(abs(b))/np.log10(s))) for b, r, s in zip(bound,roundfuns,scale)] newbound = [-n if t else n for n, t in zip(newbound,isnegative)] elif method=='nearestMultiple': try: scale = [float(s) for s in scale] if any([s<=0 for s in scale]): print('scale should be greater than 0') return None except ValueError: print('scale should be a number or list of numbers greater than 0') return None roundfuns = [np.ceil if r else np.floor for r in roundup] newbound = [s*(r(b/s)) for b, r, s in zip(bound,roundfuns,scale)] elif method=='round': roundfuns = [np.ceil if r else np.floor for r in roundup] newbound = [f(b) for b, f in zip(bound, roundfuns)] else: print('Invalid method, see help(extendBound)') return None return newbound[0] if unlist else newbound def extendBounds(bounds,method='nearestLeadingDigit',scale=10): if bounds[0]>bounds[1]: print('bounds must be ordered from least to greatest') return None return extendBound(bounds,direction=['down','up'],method=method,scale=scale) # + # Unit test of extendBounds # TODO: check out the builtin unittest module and convert this code to use that testing structure # Get default arguments from https://stackoverflow.com/questions/12627118/get-a-function-arguments-default-value import inspect def get_default_args(func): signature = inspect.signature(func) return { k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty } defaults = get_default_args(extendBound) def 
test_extendBound(bounds,direction=defaults['direction'],method=defaults['method'],scale=defaults['scale'],expected=None): # default arguments taken from extendBound; not sure how to get defaults when not supplied output = extendBound(bounds,direction,method,scale) print('Input:',bounds,direction,method,scale,' Output:',output,' Expected:',expected,' Passed:',output==expected) return output==expected defaults = get_default_args(extendBounds) def test_extendBounds(bounds,method=defaults['method'],scale=defaults['scale'],expected=None): # default arguments taken from extendBounds; not sure how to get defaults when not supplied output = extendBounds(bounds,method,scale) print('Input:',bounds,method,scale,' Output:',output,' Expected:',expected,' Passed:',output==expected) return output==expected print('Testing function extendBounds\n') passed = True print("Testing invalid method") print(" Expect errors:") passed = test_extendBounds([11,130],'invalid',expected=None) and passed print() print("Testing method 'nearestLeadingDigit'") print(" Expect errors:") passed = test_extendBounds([9,-930],'nearestLeadingDigit',expected=None) and passed passed = test_extendBounds([-9,-930],'nearestLeadingDigit',expected=None) and passed print(" Expect success:") passed = test_extendBounds([11,130],'nearestLeadingDigit',expected=[10,200]) and passed passed = test_extendBounds([11,130],'nearestLeadingDigit',-1,expected=[10,200]) and passed passed = test_extendBounds([9,930],'nearestLeadingDigit',expected=[9,1000]) and passed passed = test_extendBounds([-9,930],'nearestLeadingDigit',expected=[-9,1000]) and passed passed = test_extendBounds([-990,-930],'nearestLeadingDigit',expected=[-1000,-900]) and passed passed = test_extendBounds([-990,0.05],'nearestLeadingDigit',expected=[-1000,0.05]) and passed passed = test_extendBounds([0,0.052],'nearestLeadingDigit',expected=[0,0.06]) and passed print() print("Testing method 'nearestPower'") print(" Expect errors:") passed = test_extendBounds([11,130],'nearestPower',-2,expected=None) and passed passed = test_extendBounds([11,130],'nearestPower',0,expected=None) and passed passed = test_extendBounds([11,130],'nearestPower',1,expected=None) and passed passed = test_extendBounds([-11,-130],'nearestPower',10,expected=None) and passed print(" Expect success:") passed = test_extendBounds([11,130],'nearestPower',expected=[10,1000]) and passed passed = test_extendBounds([10,100],'nearestPower',expected=[10,100]) and passed passed = test_extendBounds([11,130],'nearestPower',1.1,expected=[10.834705943388395, 142.04293198443193]) and passed passed = test_extendBounds([11,130],'nearestPower',2,expected=[8,256]) and passed passed = test_extendBounds([11,130],'nearestPower',10,expected=[10,1000]) and passed passed = test_extendBounds([11,130],'nearestPower',10.,expected=[10,1000]) and passed passed = test_extendBounds([-11,130],'nearestPower',10,expected=[-100,1000]) and passed passed = test_extendBounds([-5100,-130],'nearestPower',10,expected=[-10000,-100]) and passed passed = test_extendBounds([-.0101,-0.00042],'nearestPower',10,expected=[-0.1,-0.0001]) and passed passed = test_extendBounds([0,0.00042],'nearestPower',10,expected=[0,0.001]) and passed print() print("Testing method 'nearestMultiple'") print(" Expect errors:") passed = test_extendBounds([11,132],'nearestMultiple',-2,expected=None) and passed passed = test_extendBounds([11,132],'nearestMultiple',0,expected=None) and passed passed = test_extendBounds([0,-10],'nearestMultiple',100,expected=None) and passed print(" Expect 
success:") passed = test_extendBounds([11,132],'nearestMultiple',expected=[10,140]) and passed passed = test_extendBounds([10,130],'nearestMultiple',expected=[10,130]) and passed passed = test_extendBounds([11.55,132.55],'nearestMultiple',0.1,expected=[11.5,132.6]) and passed passed = test_extendBounds([11.55,132.55],'nearestMultiple',1,expected=[11,133]) and passed passed = test_extendBounds([11.55,132.55],'nearestMultiple',100,expected=[0,200]) and passed passed = test_extendBounds([-11,132],'nearestMultiple',10,expected=[-20,140]) and passed passed = test_extendBounds([-1121,-132],'nearestMultiple',10,expected=[-1130,-130]) and passed passed = test_extendBounds([-10,-10],'nearestMultiple',10,expected=[-10,-10]) and passed passed = test_extendBounds([-10,-10],'nearestMultiple',100,expected=[-100,0]) and passed print() print("Testing method 'round'") print(" Expect errors:") passed = test_extendBounds([-11.1,-132.1],'round',expected=None) and passed print(" Expect success:") passed = test_extendBounds([11.1,132.1],'round',expected=[11,133]) and passed passed = test_extendBounds([10,130],'round',expected=[10,130]) and passed passed = test_extendBounds([11.1,132.1],'round',-2,expected=[11,133]) and passed passed = test_extendBounds([-11.1,132.1],'round',expected=[-12,133]) and passed passed = test_extendBounds([-1100.1,-132.1],'round',expected=[-1101,-132]) and passed print() print("Testing array execution") print(" Expect errors:") print(" Expect success:") print(" method") passed = test_extendBound(1.5,'up','nearestLeadingDigit',4,expected=2) and passed passed = test_extendBound(1.5,'up','nearestPower',4,expected=4) and passed passed = test_extendBound(1.5,'up','nearestMultiple',4,expected=4) and passed passed = test_extendBound(1.5,'up','round',4,expected=2) and passed print(" direction") passed = test_extendBound(1.5,'up','nearestMultiple',4,expected=4) and passed passed = test_extendBound(1.5,0.1,'nearestMultiple',4,expected=4) and passed passed = test_extendBound(1.5,'down','nearestMultiple',4,expected=0) and passed passed = test_extendBound(1.5,-0.1,'nearestMultiple',4,expected=0) and passed print(" broadcasting") passed = test_extendBound([1.5],'up','nearestMultiple',1,expected=[2]) and passed passed = test_extendBound([1.5,2.5],'up','nearestMultiple',1,expected=[2,3]) and passed passed = test_extendBound(1.5,['up','down'],'nearestMultiple',1,expected=[2,1]) and passed passed = test_extendBound(1.5,'up',['nearestLeadingDigit','nearestPower','nearestMultiple','round'],3,expected=[2,3,3,2]) and passed passed = test_extendBound(1.5,'up','nearestMultiple',[1,5,10],expected=[2,5,10]) and passed print() print("All tests passed:",passed) # - # ## Import using GeoPandas # # Earlier version def GMLtoGDF(filename): gdf = geopandas.read_file(filename) gdf.rename_geometry('Geometry', inplace=True) # Default geometry column name is 'geometry'; changed for consistent capitalization of columns gdf.set_geometry('Geometry') # Renaming is insufficient; this sets special variable gdf.geometry = gdf['Geometry'] gdf = gdf.set_crs(epsg=3347) # Needed only for FSA file, the others are 3347 and parsed correctly by geopandas, and the pdf in the zip file has the same projection parameters (FSA vs. 
DA, ADA, CT) gdf['Area'] = gdf['Geometry'].to_crs(epsg=6931).area # Equal-area projection # MODIFY THIS to account for validity regions of each geometry gdf['Centroid'] = gdf['Geometry'].centroid gdf['Geometry'] = gdf['Geometry'].to_crs(epsg=4326) # Latitude/Longitude representation gdf['Centroid'] = gdf['Centroid'].to_crs(epsg=4326) # Only the set geometry is converted with gdf.to_crs(); all other geometry-containing columns must be converted explicitly; here we convert all columns explicitly gdf = gdf.set_crs(epsg=4326) # The series and geodataframe can have separate crs; this was found necessary for the geopandas.union function to operate easily gdf['Centroid Latitude'] = gdf['Centroid'].geometry.y gdf['Centroid Longitude'] = gdf['Centroid'].geometry.x gdf.drop(columns = 'Centroid', inplace=True) # Because WKT Point cannot be serialized to JSON, we drop the Centroid column and keep only its float components return gdf # Modified with comments; use standard 'geometry' instead of 'Geometry', preserve original crs def GMLtoGDF(filename): gdf = geopandas.read_file(filename) #gdf.rename_geometry('Geometry', inplace=True) # Removed to revert to standard geopandas naming 'geometry' # Default geometry column name is 'geometry'; changed for consistent capitalization of columns #gdf.set_geometry('Geometry') # Removed to revert to standard geopandas naming 'geometry' # Renaming is insufficient; this sets special variable gdf.geometry = gdf['Geometry'] gdf = gdf.set_crs(epsg=3347) # Needed only for FSA file, the others are 3347 and parsed correctly by geopandas, and the pdf in the zip file has the same projection parameters (FSA vs. DA, ADA, CT) gdf['Area'] = gdf.geometry.to_crs(epsg=6931).area # Equal-area projection # MODIFY THIS to account for validity regions of each geometry gdf['Centroid'] = gdf.geometry.centroid #gdf['Geometry'] = gdf.geometry.to_crs(epsg=4326) # Removed to preserve original crs # Latitude/Longitude representation gdf['Centroid'] = gdf['Centroid'].to_crs(epsg=4326) # Only the set geometry is converted with gdf.to_crs(); all other geometry-containing columns must be converted explicitly; here we convert all columns explicitly #gdf = gdf.set_crs(epsg=4326) # Removed to preserve original crs # The series and geodataframe can have separate crs; this was found necessary for the geopandas.union function to operate easily gdf['Centroid Latitude'] = gdf['Centroid'].geometry.y gdf['Centroid Longitude'] = gdf['Centroid'].geometry.x gdf.drop(columns = 'Centroid', inplace=True) # Because WKT Point cannot be serialized to JSON, we drop the Centroid column and keep only its float components return gdf # Final version without comments def GMLtoGDF(filename): gdf = geopandas.read_file(filename) gdf = gdf.set_crs(epsg=3347) # Needed only for FSA file, the others are 3347 and parsed correctly by geopandas, and the pdf in the zip file has the same projection parameters (FSA vs. 
DA, ADA, CT) gdf['Area'] = gdf.geometry.to_crs(epsg=6931).area # Equal-area projection # MODIFY THIS to account for validity regions of each geometry gdf['Centroid'] = gdf.geometry.centroid gdf['Centroid'] = gdf['Centroid'].to_crs(epsg=4326) # Only the set geometry is converted with gdf.to_crs(); all other geometry-containing columns must be converted explicitly; here we convert all columns explicitly gdf['Centroid Latitude'] = gdf['Centroid'].geometry.y gdf['Centroid Longitude'] = gdf['Centroid'].geometry.x gdf.drop(columns = 'Centroid', inplace=True) # Because WKT Point cannot be serialized to JSON, we drop the Centroid column and keep only its float components return gdf # ## Save/Load (original) # + # Function(s) to encapsulate loading and saving long calculations def loadResults_(name,tuples,fileformat='db',compress=False): '''Loads variables from files Parameters ---------- name: str, file name base (including directory if desired) tuples: list of tuples (varname, suffix), varname: str, the key of the output dict where the data will be stored suffix: str, the string appended to name to generate a full file name fileformat: str, suffix to save the file with (do not include period) compress: bool, True to zip results (appends '.gz' to filename) Returns ------- None if an error was encountered, or Tuple the length of tuples containing for each element of tuples: None if there was an error, or the variable loaded from file at the same position from tuples Notes ----- Files read in binary format with optional gzip encoding This function is the complement to saveResults_() TODO ---- Add option to change save format (text vs. binary) Make fileformat select the save format ''' if type(name)!=str: print('Error: name must be a string') return None if type(fileformat)!=str: print('Error: fileformat must be a string') return None ret = [] for n, s in tuples: fn = name+s+'.'+fileformat+('.gz' if compress else '') try: with open(fn,'rb') as file: ret.append(dill.loads(gzip.decompress(file.read()) if compress else file.read())) except (FileNotFoundError, IOError) as e: ret.append(None) print(f'An error was encountered while reading from file {fn}: {e}') return tuple(ret) def loadResults(name): '''Loads variables 'gdf_union', 'times', and 'areas' from zipped files Parameters ---------- name: str containing the base name of the files Returns ------- None if an error was encountered, or Tuple the length of tuples containing: None if there was an error, or the variable loaded from file at the same position from tuples Notes ----- File names area _.db.gz and are in gzip dill binary format Uses outside variable DIR_RESULTS if available, otherwise put path in name ''' tuples = [('gdf_union',''), ('times','_times'), ('areas','_areas')] return loadResults_(name,tuples,fileformat='db',compress=True) def saveResults_(name,tuples,fileformat='db',compress=False): '''Saves variables to files Parameters ---------- name: str, file name base (including directory if desired) tuples: list of tuples (varname, suffix), var: , the variable to be output to file suffix: str, the string appended to name to generate a full file name fileformat: str, suffix to save the file with (do not include period) compress: bool, True to zip results (appends '.gz' to filename) Returns ------- None if an error was encountered, or Tuple the same length as tuples containing return codes: 0 Failure 1 Success Notes ----- Files written in binary format Files are created if they do not already exist Files are overwritten if they already exist 
TODO ---- Make fileformat determine save format ''' if type(name)!=str: print('Error: name must be a string') return None if type(fileformat)!=str: print('Error: fileformat must be a string') return None ret = [] for v, s in tuples: fn = name+s+'.'+fileformat+('.gz' if compress else '') try: with open(fn,'wb+') as file: file.write(gzip.compress(dill.dumps(v)) if compress else dill.dumps(v)) ret.append(1) except IOError as e: ret.append(0) print(f'An error was encountered while writing to file {fn}: {e}') return tuple(ret) def saveResults(name, gdf_union, times, areas): '''Saves variables 'times', 'areas', and 'gdf_union' to zipped files Parameters ---------- name: str, file name base (including directory if desired) gdf_union: geodataframe of geographic areas, produced from intersectGDF() times: 1d array of computation times, produced from intersectGDF() areas: list of lists of overlap areas, produced from intersectGDF() Returns ------- None if an error was encountered, or Tuple the same length as tuples containing return codes: 0 Failure 1 Success Notes ----- File names area .db.gz and are in gzip dill binary format Use outside variable DIR_RESULTS in construction of name ''' tuples = [(gdf_union,''), (times,'_times'), (areas,'_areas')] return saveResults_(name,tuples,fileformat='db',compress=True) def loadComputeSave(gdf1, key1, gdf2, key2, loadname=None, savename=None, verbosity=1, area_epsg=6931, gdf1b=None, gdf2b=None): '''Returns the overlap of geometries, defaulting to file versions if possible Parameters ---------- gdf_1: GeoDataFrame (must match crs of gdf2, will be utilized for vectorized overlap calculation) keyfield1: column name in gdf1 which uniquely identifies each row and will be used to label the results gdf2: GeoDataFrame (must match crs of gdf1, will be iterated over for overlap calculation) keyfield2: column name in gdf2 which uniquely identifies each row and will be used to label the results loadname: str or None, base name of files to load data from (None -> 'DEFAULT'), see saveResults() savename: str or None, base name of files to save data to (None -> loadname), see loadResults() verbosity: int, detail level of reporting during execution: 0=none, 1=10-100 updates, 2=update every loop and announce exceptions area_epsg: int, convert to this epsg for area calculation gdf1b: gdf1 with all geometries valid, to be used in case of failed overlap with gdf1, if None use gdf1.buffer(0) gdf2b: gdf2 with all geometries valid, to be used in case of failed overlap with gdf2, if None use gdf2.buffer(0) Returns ------- gdf_union: Geodataframe containing columns of nonzero overlap geometries, corresponding gdf1[keyfield1], and corresponding gdf2[keyfield2], where only one value of gdf1[keyfield1] is selected which is the one with maximum overlap area times: List of execution times for each overlap calculation; len(times)=gdf2.shape[0] areas: List of pandas Series of overlap areas; len(areas)=gdf2.shape[0], len(areas[i])=gdf1.shape[0] Notes ----- gdf1 and gdf2 must be set to the same crs Iterates over gdf2, which should have the larger number of rows of {gdf1,gdf2} in order to minimize required memory (assuming geometries are of roughly equal size) ''' verbosity = 1 if savename is None: savename = loadname if not loadname is None else 'DEFAULT' ret = None if loadname is None else loadResults(DIR_RESULTS+loadname) recompute = False saveresults = False if ret is None: recompute = True saveresults = True else: gdf_union, times, areas = ret if gdf_union is None: if areas is None: # 
Recompute recompute = True saveresults = True else: # Reconstruct from areas print("Overlaps will be recomputed based on loaded variable 'areas'") gdf_union, times, areas = intersectGDF(gdf1,key1,gdf2,key2,areas_in=temp_areas,verbosity=1) saveresults = True else: print("Overlaps loaded from file") if recompute: print("Overlaps must be computed") gdf_union, times, areas = intersectGDF(gdf1,key1,gdf2,key2,verbosity=1) if saveresults: saveResults(DIR_RESULTS+savename, gdf_union, times, areas) print("Variables saved to file at "+DIR_RESULTS+savename) return gdf_union, times, areas # - # ### Save/Load new # ## mapCitiesAdjacent Old: def mapCitiesAdjacent(cities,propertyname,title='',tooltiplabels=None): '''Displays cities in separate adjacent maps Cities dict as above Displayed as choropleth keyed to propertyname Header displays title and colormap labeled with keyed propertyname Tooltips pop up on mouseover showing properties listed in tooltiplabels Inspired by https://gitter.im/python-visualization/folium?at=5a36090a03838b2f2a04649d Assumes propertyname is identical in gdf and geojson ''' if (not type(cities)==list) and type(cities)==dict: cities = [cities] f = bre.Figure() div_header = bre.Div(position='absolute',height='10%',width='100%',left='0%',top='0%').add_to(f) map_header = folium.Map(location=[0,0],control_scale=False,zoom_control=False,tiles=None,attr=False).add_to(div_header) div_header2 = bre.Div(position='absolute',height='10%',width='97%',left='3%',top='0%').add_to(div_header) html_header = '''

    {}

    '''.format(title) div_header2.get_root().html.add_child(folium.Element(html_header)) vbounds = getCityBounds(cities,propertyname) vbounds[0] = 0 vbounds = extendBounds(vbounds,'nearestLeadingDigit') cm_header = LinearColormap( colors=['yellow', 'orange', 'red'], index=None, vmin=vbounds[0], vmax=vbounds[1], caption=propertyname ).add_to(map_header) # .to_step(method='log', n=5, data=?), log has log labels but linear color scale for i, city in enumerate(cities): div_map = bre.Div(position='absolute',height='80%',width=f'{(100/len(cities))}%',left=f'{(i*100/len(cities))}%',top='10%').add_to(f) city_map = folium.Map(location=city['centroid'], control_scale=True) div_map.add_child(city_map) title_html = '''

    {}

    '''.format(city['name']) city_map.get_root().html.add_child(folium.Element(title_html)) city_map.fit_bounds(city['bounds']) m = folium.GeoJson( city['geojson'], style_function=lambda feature: { 'fillColor': cm_header.rgb_hex_str(feature['properties'][propertyname]), 'fillOpacity': 0.8, 'color':'black', 'weight': 1, 'opacity': 0.2, }, name=f'Choropleth_{i}' ).add_to(city_map) if not tooltiplabels==None: m.add_child(folium.features.GeoJsonTooltip(tooltiplabels)) return display(f) scalerN = sklearn.preprocessing.Normalizer() scalerN.fit_transform(cities[0]['df_venues']) ind_max = df_tmp['Dwelling Occupancy'].sort_values(ascending=False).index val_max = df_tmp.loc[ind_max,'Dwelling Occupancy'].values df_tmp.loc[ind_max[0],'Dwelling Occupancy'] = val_max[1] # Venues scaled globally with additional factor, Census data and Venue density scaled; from sklearn.preprocessing import StandardScaler scale_column_names = ['Dwelling Density','Log10 (Population Density)','Log10 (1 - Residentiality)','Log10 (Venue Density)'] for i in range(max(df_venues['Category Depth'])+1): features = locals()[f"toronto_grouped_{i}"] vectors = features.drop('Postal Code',1).values vectors = scaler.fit_transform(vectors.reshape(-1,1)).reshape(*list(vectors.shape)) vectors = vectors*1.4 # This factor increases MSS by about the same amount as the census and venue density does features[features.columns.drop('Postal Code')] = vectors features = features.merge(df_final[['Postal Code','Dwelling Density','Log10 (Population Density)','Log10 (1 - Residentiality)']],on='Postal Code').merge(df_venue_density[['Postal Code','Log10 (Venue Density)']],on='Postal Code') add_dict = {'name':f"Super-Scaled Venues Depth {i}\n with scaled Census Data\n and Venue Density", 'labels':features['Postal Code'], 'features':features.drop('Postal Code',1) } add_dict['features'][scale_column_names] = scaler.fit_transform(add_dict['features'][scale_column_names]) k_means_data.append(add_dict) from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import RobustScaler from sklearn.preprocessing import QuantileTransformer scaler = MinMaxScaler() # StandardScaler(), RobustScaler(), MinMaxScaler(), QuantileTransformer(output_distribution='normal', random_state=0) # + m = folium.Map(location=totalCentroid(df_merged), control_scale=True) vbounds = [min(df_merged['Area Ratio']), max(df_merged['Area Ratio'])] cm = LinearColormap( colors=['blue', 'white', 'yellow', 'orange', 'red'], index=[vbounds[0],0,-vbounds[0],vbounds[1]/2,vbounds[1]], vmin=vbounds[0], vmax=vbounds[1], caption='Fractional Area Change' ).add_to(m) # .to_step(method='log', n=5, data=?), log has log labels but linear color scale g = folium.GeoJson( df_merged.to_crs(epsg=4326).to_json(), style_function=lambda feature: { 'fillColor': cm.rgb_hex_str(feature['properties']['Area Ratio']) if abs(feature['properties']['Area Ratio'])>0.01 else 'white',# 'red' if feature['properties']['Area Diff'] else 'yellow', 'fillOpacity': 0.4, 'color':'black', 'weight': 1, 'opacity': 0.2, }, name=f'Choropleth_{i}' ).add_to(m) g.add_child(folium.features.GeoJsonTooltip(['Calc Census Area','Census Area','Area Diff','Area Ratio','CTUID'],localize=True)) display(m) # - # To simplify the data collection, let's construct a single dataframe with all CTs labeled by their city: gdf_cities = pd.concat([c['gdf'].assign(**{'City':c['name']}) for c in cities],axis=0) print(gdf_cities.shape) gdf_cities.head(2) # Old NN Distances # %%time for city in cities: 
    city['gdf']['City'] = city['name']  # For definitive sorting after dataframe concatenation later
    city['gdf'][['NN Index','NN CTUID','NN Distance']] = None
    city['gdf'][['NN Index','NN CTUID','NN Distance']] = city['gdf'][['NN Index','NN CTUID','NN Distance']].astype(object)
    for ii, row in city['gdf'].iterrows():
        # Geodesic distance from this tract's centroid to every other centroid in the city
        df_tmp = city['gdf'][['CTUID','Centroid Latitude','Centroid Longitude']].copy()
        df_tmp['Distance'] = [geopy.distance.distance((row['Centroid Latitude'],row['Centroid Longitude']),(x['Centroid Latitude'],x['Centroid Longitude'])).meters for jj, x in df_tmp.iterrows()]
        df_tmp.sort_values('Distance',inplace=True)
        # df_tmp.drop(index=row.name,axis=0,inplace=True) # To drop self-distance of 0
        city['gdf']['NN Index'].loc[ii] = df_tmp.index.values
        city['gdf']['NN CTUID'].loc[ii] = df_tmp['CTUID'].values
        city['gdf']['NN Distance'].loc[ii] = df_tmp['Distance'].values

# df_vensum: A pandas dataframe, collects summary of venue information, keyed to DAUID
#
for city in cities:
    city['isComplete'], city['df_venues'] = resumableGetNearbyVenues(city['gdf'], None, radius=RADIUS, extend=True)
    print(city['name'], 'venue lookup completed successfully' if city['isComplete'] else 'venue lookup interrupted')
    display(city['df_venues'].tail(2))

for city in cities:
    if not city['isComplete']:
        # Resume an interrupted lookup, continuing from the venues already collected
        city['isComplete'], city['df_venues'] = resumableGetNearbyVenues(city['gdf'], city['df_venues'], radius=RADIUS, extend=True)
        print(city['name'], 'venue lookup completed successfully' if city['isComplete'] else 'venue lookup interrupted')
        display(city['df_venues'].tail(2))
    else:
        print(city['name'], 'venue lookup is already finished')

for jj in range(len(categories_depth_list)):
    fig, axs = plt.subplots(1, len(cities), figsize=(20,4))
    df_venues = pd.DataFrame()
    for ii, city in enumerate(cities):
        colname = f"Venue Category, Depth {jj}"
        df_venues = df_venues.append(city['df_venues'][['CTUID',colname]])
    df_category_counts_all = df_venues[['CTUID',colname]].groupby(colname).count()
    df_category_counts_all.rename(columns={'CTUID':'Unique Categories'}, inplace=True)
    for ii, city in enumerate(cities):
        colname = f"Venue Category, Depth {jj}"
        df_category_counts = city['df_venues'][['CTUID',colname]].groupby(colname).count()
        df_category_counts.rename(columns={'CTUID':'Unique Categories'}, inplace=True)
        plt.sca(axs[ii])
        # All-city distribution (orange) overlaid with this city's distribution (blue)
        df_category_counts_all.hist(log=True, bins=np.logspace(0,5,20), ax=plt.gca(), color='orange')
        df_category_counts.hist(log=True, bins=np.logspace(0,5,20), ax=plt.gca(), color='blue', alpha=0.5)
        plt.gca().set_xscale('log')
        plt.title(f"{city['name']}, Depth {jj}")
        plt.xlabel('Number of venues in a venue category')
        plt.ylabel('Count of venue categories');

# Reset defaults after using yellowbrick
# matplotlib.rcParams.update(matplotlib.rcParamsDefault)
matplotlib.rcParams.update(matplotlib.rcParamsDefault)
matplotlib.rcParams.update(matplotlib.rcParamsDefault)

# Note: yellowbrick changed matplotlib.rcParams and matplotlib.rcParamsDefault
# # Before yellowbrick: matplotlib.rcParams['figure.figsize']=[6.0,4.0]
# # And something set dpi to 72 for saving figures of the gap statistic; the default is 100. Could it be folium or import order?
# # matplotlib.pyplot.rcParams==matplotlib.rcParams
# # For now the only solution is to not do plots after yellowbrick, else restart the kernel (inelegant).
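# As a possible alternative to restarting the kernel, the cell below is a minimal sketch (not from the original analysis) that wraps plotting in `matplotlib.rc_context`, which restores `matplotlib.rcParams` when the block exits. Note that it cannot undo changes a library makes to `matplotlib.rcParamsDefault`, which is why the manual snapshot in the next cell is still useful.

# +
import matplotlib
import matplotlib.pyplot as plt

rc_snapshot = matplotlib.rcParams.copy()   # manual snapshot taken before any styling library runs

with matplotlib.rc_context():
    # rcParams changes made inside this block (e.g. by a visualizer) are discarded on exit
    plt.plot([0, 1], [0, 1])
    plt.close()

matplotlib.rcParams.update(rc_snapshot)    # belt-and-braces restore from the snapshot
# -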
# + jupyter={"outputs_hidden": true} tags=[] MyrcParams = matplotlib.rcParams.copy() MyrcParams # - matplotlib.rcParams.update(MyrcParams) matplotlib.rcParams.update(matplotlib.pyplot.rcParams) # + tags=[] matplotlib.rcParamsDefault # + tags=[] matplotlib.pyplot.rcParamsDefault # - matplotlib.pyplot.rcParams==matplotlib.rcParams # + import mpld3 #mpld3.enable_notebook() import mpl_toolkits fig = plt.figure() ax = mpl_toolkits.mplot3d.Axes3D(fig) #colors = [matplotlib.colors.to_hex(matplotlib.cm.hsv(x/(df_labels.shape[0]))) for x in range(df_labels.shape[0])] ax.scatter(xs=df_features.iloc[:,0],ys=df_features.iloc[:,1],zs=df_features.iloc[:,2],alpha=0.2)#,c=clusters) mpld3.display(fig) # - import ipyvolume as ipv # + fig = ipv.figure() #scatter = ipv.scatter(df_features.iloc[:,0].values, df_features.iloc[:,1].values, df_features.iloc[:,2].values) #ipv.show() ipv.quickscatter(df_features.iloc[:,0], df_features.iloc[:,1], df_features.iloc[:,2], size=1, marker="sphere") # - # ## Appendix # + [markdown] tags=[] # DONE: TODO: For overlaps, store all thresholded overlap areas in a geodataframe, do area extraction afterward? # # DONE: TODO: Convert overlapGDF, overlapGDFareas, and subsequent area examination to units of meters instead of degrees (e.g. add to_crs()) # # DONE: (supersceded) TODO: In the FSA geometry and area section, simplify code to use the newly declared indices instead of repeating computation. # # DONE: TODO: Additional overlap calculations use dataframes to simplify index tracking # # DONE: TODO: Indexing of overlap etc. should go into a dataframe, not be dealt with by indices... # Separate summarization in dataframe completed, still need to convert analysis sections # # TODO: NN Distances should be from centroid projected to other geometry exterior? # # TODO: Refine Foursquare lookup so that it doesn't require editing the functions to adapt to a new dataset # # TODO: For finding venues, also return a summary of counts vs radius steps for visualization # # DONE: TODO: Add comparison with Gaussian Mixture Model, Spectral Clustering, and other Estimation-Maximization clustering approaches like Maximum A-Posteriori Dirichlet Process Mixtures (MAP-DP, which can concurrently optimize for k). (See Extensions section, though Spectral Clustering was left out) # # TODO: Correct LoadResults_ docstring: input tuple - name not used, returns list # # DONE: TODO: Make df_features contain the unnormalized data, and df_features_scaled contain the adjusted data (reversed: df_features_unscaled instead) # # TODO: Functions for map image display/generate/save: abstract to use any map function e.g. with dict mapargs # # TODO: Functions for map feature display: a) look into DualMap to sync layer controls, b) input functions to control scaling e.g. log or feature arithmetic # # DONE: TODO: Next time, just copy params into data dicts? Check if this would be too unreasonable to write around for plotting... (Done BGM section in Extensions) # # TODO: Consider if PCA clusters could originate as an artifact somehow, possibly due to quantization of the venue vector. # # TODO: Add a cluster # - import sys modules = list(set(sys.modules) & set(globals())) print(modules) for module_name in modules: module = sys.modules[module_name] print(module_name, getattr(module, '__version__', 'unknown')) # + # Realative directories DIR_DATA = 'Data/' DIR_RESULTS = 'Results/' import config # login credentials (for Here (geocoding) and Foursquare (venue lookup)) import importlib # for reloading modules (e.g. 
after editing config) import copy # for performing a deepcopy to preserve original variables import inspect # for determining function signatures and default arguments import os # for getting current directory for relative save/load, and filepath functions import sys # for checking system recursion limit import time # for timing and web query rate throttling import dill # for saving/loading results to save on computation time import gzip # for compressing results to fit on github import collections # for testing if parameters are list-like import requests # for web queries (Foursquare) import line_profiler # line profiling # https://mortada.net/easily-profile-python-code-in-jupyter.html # %load_ext line_profiler import pandas # data management tool including dataframes and series, built on numpy import pandas as pd import geopandas # for creating dataframes with geometries, built on pandas, shapely, fiona, and matplotlib import numpy # for high performance array mathematics, used by pandas import numpy as np import geopy # for geocoding and geodesic distance calculation from geopy import distance import pyproj # for more efficient coordinate transformations and geodesic area import shapely # lower-level geometry library used in pyproj, geopy, and geopandas; for specific geometry error identification import ogr # lower-level geospatial library, paired with gdal wrapper library import matplotlib # for plotting import matplotlib.cm # colormaps import matplotlib.cm as cm import matplotlib.colors import matplotlib.pyplot as plt # for convenient plotting functions # %matplotlib inline import folium # for map display import branca # for HTML capability for folium extension import branca.element as bre from branca.colormap import LinearColormap import selenium # for rendering folium HTML maps and saving as images from selenium import webdriver # also requires geckodriver in the environment; available in anaconda import sklearn # machine learning module import sklearn.cluster import sklearn.feature_selection # for p-value calculation import sklearn.metrics # for silhouette scoring import sklearn.mixture # for gaussian mixture models import yellowbrick # for silhouette visualization import yellowbrick.cluster import seaborn import seaborn as sns # for convenient plotting import scipy # for dendrogram display import plotly import plotly.express as px # for interactive 3D scatter plot # Suppress unsightly warningsthat occur during geometric calculation import warnings warnings.filterwarnings("ignore") import logging logging.getLogger('shapely.geos').setLevel(logging.CRITICAL) # + import pkg_resources import types def get_imports(): for name, val in globals().items(): if isinstance(val, types.ModuleType): # Split ensures you get root package, # not just imported function name = val.__name__.split(".")[0] elif isinstance(val, type): name = val.__module__.split(".")[0] # Some packages are weird and have different # imported names vs. system/pip names. Unfortunately, # there is no systematic way to get pip names from # a package's imported name. You'll have to add # exceptions to this list manually! poorly_named_packages = { "PIL": "Pillow", "sklearn": "scikit-learn" } if name in poorly_named_packages.keys(): name = poorly_named_packages[name] yield name imports = list(set(get_imports())) # The only way I found to get the version of the root package # from only the name of the package is to cross-check the names # of installed packages vs. 
imported packages requirements = [] for m in pkg_resources.working_set: if m.project_name in imports and m.project_name!="pip": requirements.append((m.project_name, m.version)) # - [x.project_name for x in pkg_resources.working_set] # + # From https://stackoverflow.com/questions/2572582/return-a-list-of-imported-python-modules-used-in-a-script import ast modules = set() def visit_Import(node): for name in node.names: modules.add(name.name.split(".")[0]) def visit_ImportFrom(node): # if node.module is missing it's a "from . import ..." statement # if level > 0 it's a "from .submodule import ..." statement if node.module is not None and node.level == 0: modules.add(node.module.split(".")[0]) node_iter = ast.NodeVisitor() node_iter.visit_Import = visit_Import node_iter.visit_ImportFrom = visit_ImportFrom # - with open("Appendix.ipynb") as f: fstr = f"{chr(10)}#------------------------------------------{chr(10)}".join([''.join([s for s in cell['source'] if not s[0] in {'%','!'}]) for cell in json.loads(f.read())['cells'] if cell['cell_type']=='code']) node_iter.visit(ast.parse(fstr)) print(modules) print([globals()[m] for m in modules if m in globals()]) # ! jupyter notebook --version # ! jupyter nbconvert --version __file__ print(os.getcwd()) import pipdeptree # deps = !pipdeptree print(chr(10).join([s for s in deps[:-1] if not s[0] in {' ','-','*'}])) import pipdeptree pipdeptree.main() # deps = !pipdeptree deps = deps[:-1] lens = [len(s) for s in deps] inds = np.array(lens)==0 import gdal sum(inds) [s for s in deps if not s[0] in {' ','-','*'}] # !python --version folium.__version__ import collections def isIterable(obj): '''Determines if an object is list-like''' if not isinstance(obj, (str, collections.abc.ByteString)): if isinstance(obj, collections.abc.Sequence): return True try: a = [x for x in obj] # Caution: this could potentially consume significant resources if len(a)<1: return False except TypeError: return False return True return False isIterable([]) len([]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Group A12 submission : HW - 2 # # ### Section 2 - Solution import pandas as pd import numpy as np import datetime as dt import statsmodels.api as sm import matplotlib.pyplot as plt plt.style.use('ggplot') import seaborn as sns import statsmodels.formula.api as smf import numpy # + path_to_data_file = r'proshares_analysis_data.xlsx' # Assuming that the file is in the root/home directory df_sp = pd.read_excel(path_to_data_file,sheet_name = 2) df_sp.set_index('date',inplace =True) S_and_P_data = df_sp['SPY US Equity'] df_hedge_fund = pd.read_excel(path_to_data_file, sheet_name = 1) df_hedge_fund.set_index('date',inplace = True) periods = 12 # - # ### Question 1 # # 1. For the series in the “hedgefundseries” tab, report the following summary statistics: # 1(a) mean # (b) volatility # (c) Sharpe ratioAnnualize these statistics. df_hfs = pd.read_excel('proshares_analysis_data.xlsx',sheet_name = 1).set_index('date') periods = 12 df_hfs_annualized = df_hfs*periods mean = df_hfs_annualized.mean() std = df_hfs_annualized.std()/np.sqrt(periods) sharpe = mean/std df_summary = pd.DataFrame({'Mean(%)':mean*100,"Volatility":std,"Sharpe":sharpe}) df_summary # ### Question 2 # # 2. 
For the series in the “hedgefundseries” tab, , calculate the following statistics related to tail-risk.(a) Skewness(b) Excess Kurtosis (in excess of 3)(c) VaR (.05) - the fifth quantile of historic returns(d) CVaR (.05) - the mean of the returns at or below the fifth quantile(e) Maximum drawdown - include the dates of the max/min/recovery within the max drawdownperiod. # + df_hfs_2 = df_hfs skew =df_hfs_2.skew(axis = 0) # Calculating kurtosis from scipy.stats import kurtosis kurtosis = pd.Series(index = df_hfs_2.skew(axis = 0).index, data = kurtosis(df_hfs_2, fisher=True, axis = 0)) # Calculating VaR(0.05) VaR = pd.Series(index = df_hfs_2.skew(axis = 0).index, data = np.percentile(df_hfs, 0.05, axis = 0)) # Calculating CVaR(0.05) - - the mean of the returns at or below the fifth quantile CVar = df_hfs_2[df_hfs_2 <= VaR].mean() # Maximum drawdown cum_return = (1+ df_hfs).cumprod() rolling_max = cum_return.cummax() drawdown = (cum_return - rolling_max) / rolling_max maxDrawdown = drawdown.min() bottom = pd.Series(pd.to_datetime([drawdown[col].idxmin() for col in drawdown]),index=df_hfs.columns) peak = pd.Series(pd.to_datetime([(cum_return[col][:bottom[col]].idxmax()) for col in cum_return]),index=df_hfs.columns) peakLevels = pd.Series([cum_return[col].loc[peak.loc[col]] for col in cum_return],index=df_hfs.columns) recovered = [] for col in cum_return: for lev in cum_return[col][bottom[col]:]: if lev >= peakLevels[col]: recovered.append(cum_return.index[cum_return[col] == lev][0]) break # - pd.DataFrame({'Skewness':skew,'Excess Kurtosis':kurtosis,'5% VaR':VaR,'5% CVaR':CVar,'Max Drawdown':maxDrawdown,'Peak':peak,'Bottom':bottom,'Recovered':recovered}) df_hfs_2.loc['max drawdown'] = drawdown.min() drawdown.plot(figsize = (15,8),title = 'Drawdown') # ### Question 3 # #### 3. For the series in the "hedge_fund_series" tab, run a regression of each against SPY. Include an intercept. Report the following regression-based statistics # a. Maket Beta # b. Treynor Ratio # c. 
Information Ratio # Annualize the three stats as appropriate ## function to clean the data and replace missing values with 0 def clean(data): if data.isnull().values.any(): return data.fillna(0) else: return data def market_beta(model, x,y, market = False): beta = model.params.to_frame('Parameters').loc[x.columns[1],'Parameters'] return beta def Treynor_Ratio(model,x,y,periods): trey = [] pred = model.predict(x).mean()*periods beta = model.params.to_frame('Parameters').loc[x.columns[1],'Parameters'] trey = pred/beta return trey def Information_Ratio(model,x,y,periods): alpha = model.params.to_frame('Parameters').loc[x.columns[0],'Parameters'] sigma = (y-model.predict(x)).std() info = alpha/(sigma*(periods**0.5)) return info # + def regression(predicted_var, predicting_var, periods = 12, intercept = False): ## for the regression model y = predicted_var y = clean(y) x = predicting_var x = clean(x) if intercept == True: x = sm.add_constant(x) model =sm.OLS(y,x,missing = 'drop').fit() meanr = y.mean()*periods s = meanr/(y.std()*(periods)**0.5) m = market_beta(model,x,y,market=True) t = Treynor_Ratio(model,x,y,periods) i = Information_Ratio(model,x,y,periods) r = model.rsquared return meanr,s,m,t, i,r # + mean_returns = {} sharpe = {} beta = {} trey = {} info = {} r_sq = {} for c in df_hedge_fund.columns: mean_returns[c],sharpe[c],beta[c], trey[c], info[c],r_sq[c] = regression(df_hedge_fund[c],S_and_P_data,intercept = True) df_summary = pd.DataFrame([mean_returns,sharpe,r_sq,beta,trey,info]) df_summary = df_summary.T.rename(columns = {0:'Mean Returns (%)',1:'Sharpe Ratio',2:'R-squared',3:'Market Beta',4:'Treynor Ratio',5:'Information Ratio'}) df_summary['Mean Returns (%)'] = df_summary['Mean Returns (%)']*100 df_summary.loc[S_and_P_data.name] = [S_and_P_data.mean()*100,(S_and_P_data.mean()/S_and_P_data.std())*(periods)**0.5,1,1,'NA','NA'] df_summary # - # ### Question 4 # #### Discuss the previous statistics, and what they tell us about... # ##### (a) the differences between SPY and the hedge-fund series? # ##### (b) which performs better between HDG and QAI. # ##### (c) whether HDG and the ML series capture the most notable properties of HFRI. # #### a. All the hedge funds got better mean returns while having a higher sharpe ratio. For a risk averse investor, S&P could have been a better bet. # #### All the hedge funds had a low information ratio but a high R-squared, this means that they took on a bit more risk but did not deviate too far from the S&P as it can explain so much of the variance of the hedge funds. For the hedge funds with sharpe ratios close to S&P's sharpe ratio (HFRIFWI and MLEIFCTR), the fund managers were able to mean - variance optimize the portfolio well # #### b. HDG has a slightly higher mean return but lower values of Sharpe and Information ratios as compared to QAI. This means that the extra performance came at a cost of higher volatility and probably involved a bit of luck as compared to QAI. Overall it is close but QAI has slightly better statistics # #### c. HFRI has a very high correaltion with the ML series and HDG. Also all the 4 series have a very similar Beta with the S&P, which probably implies that they similar average volatilites. Overall, on average HDG and ML do capture the most notable features of HFRI but they will probably provide lesser mean returns along with lower volatility and a less risk of major loss as HFRI's tails are fatter. This is we believe is a case of average returns and average volatility masking the risky tail events. 
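# The regression-based statistics used in Questions 3-4 (market beta, Treynor ratio, information ratio) can be illustrated on a toy example. The cell below is only a sketch with simulated monthly returns (not the assignment data) and uses one common annualization convention: Treynor = annualized mean return / beta, information ratio = annualized alpha / annualized tracking error.

# +
# Illustrative sketch with simulated monthly returns (not the assignment data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                                        # ten years of monthly returns
spy = pd.Series(rng.normal(0.008, 0.04, n), name='SPY')        # simulated market excess returns
fund = 0.001 + 0.5 * spy + rng.normal(0.0, 0.01, n)            # simulated hedge-fund excess returns

res = sm.OLS(fund, sm.add_constant(spy)).fit()
alpha, beta = res.params['const'], res.params['SPY']

treynor = (12 * fund.mean()) / beta                            # annualized mean return per unit of market beta
info_ratio = (12 * alpha) / (np.sqrt(12) * res.resid.std())    # annualized alpha over annualized tracking error
print(f'beta = {beta:.2f}, Treynor = {treynor:.2%}, IR = {info_ratio:.2f}')
# -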
# ### Question 5 # # # 5. Report the correlation matrix for these assets.(a) Show the correlations as a heat map.(b) Which series have the highest and lowest correlations? path_to_data_file = 'proshares_analysis_data.xlsx' df = pd.read_excel(path_to_data_file,sheet_name='hedge_fund_series') df df.corr() sns.heatmap(df.corr()) # ### HDG US Equity and QAI US Equity has lowest correlation; MLEIFCTR Index and MLEIFCTX Index has highest correlation. # # ### Question 6 # ##### # Replicate HFRI with the six factors listed on the “merrillfactors” tab. Include a constant, and run the unrestricted regression, # # \begin{align} # r^{hfri}_{t} = \alpha^{merr} + x^{merr}_{t} \beta^{merr} + \epsilon^{merr}_{t} (1) # \end{align} # # \begin{align} # \hat r^{hfri}_{t} = \hat\alpha^{merr} + x^{merr}_{t} \hat \beta^{merr} (2) # \end{align} # # ### (a) Report the intercept and betas. # (b) Are the betas realistic position sizes, or do they require huge long-short positions? # (c) Report the R-squared. # (d) Report the volatility of $\epsilon^{merr}$ (the tracking error.) # + from dataclasses import dataclass import warnings # regression function @dataclass class RegressionsOutput: excess_ret_stats: pd.DataFrame params: pd.DataFrame residuals: pd.DataFrame tstats: pd.DataFrame other: pd.DataFrame df: pd.DataFrame def lfm_time_series_regression(df, portfolio_names, factors, annualize_factor=12): excess_ret_stats = pd.DataFrame(index=factors, columns=['average', 'std'], dtype=float) for factor in factors: excess_ret_stats.loc[factor, 'average'] = annualize_factor * df[factor].mean() excess_ret_stats.loc[factor, 'std'] = np.sqrt(annualize_factor) * df[factor].std() excess_ret_stats.loc[factor, 'sharpe_ratio'] = \ excess_ret_stats.loc[factor, 'average'] / excess_ret_stats.loc[factor, 'std'] # Here I'll just report the unscaled skewness excess_ret_stats.loc[factor, 'skewness'] = df[factor].skew() # excess_ret_stats.loc[factor, 'skewness'] = annualize_factor * df[factor].skew() _temp_excess_ret_stats = excess_ret_stats.copy() _temp_excess_ret_stats.loc['const', :] = 0 with warnings.catch_warnings(): warnings.simplefilter("ignore") rhs = sm.add_constant(df[factors]) df_params = pd.DataFrame(columns=portfolio_names) df_other = pd.DataFrame(columns=portfolio_names) df_residuals = pd.DataFrame(columns=portfolio_names) df_tstats = pd.DataFrame(columns=portfolio_names) for portfolio in portfolio_names: lhs = df[portfolio] res = sm.OLS(lhs, rhs, missing='drop').fit() df_params[portfolio] = res.params df_params.loc['const', portfolio] = annualize_factor * res.params['const'] df_other.loc['r_squared', portfolio] = res.rsquared df_other.loc['model_implied_excess_ret', portfolio] = df_params[portfolio] @ _temp_excess_ret_stats['average'] df_other.loc['ave_excess_ret', portfolio] = \ annualize_factor * df[portfolio].mean() df_other.loc['std_excess_ret', portfolio] = \ np.sqrt(annualize_factor) * df[portfolio].std() df_other.loc['skewness_excess_ret', portfolio] = \ annualize_factor * df[portfolio].skew() df_other.loc['sharpe_ratio', portfolio] = \ df_other.loc['ave_excess_ret', portfolio] / df_other.loc['std_excess_ret', portfolio] df_residuals[portfolio] = res.resid df_tstats[portfolio] = res.tvalues regression_outputs = RegressionsOutput( excess_ret_stats.T, df_params.T, df_residuals, df_tstats.T, df_other.T, df) return regression_outputs # + df_hfs = pd.read_excel('proshares_analysis_data.xlsx', sheet_name = 'hedge_fund_series').set_index('date') df_mf = pd.read_excel('proshares_analysis_data.xlsx', sheet_name = 
'merrill_factors').set_index('date') merrill_factors = ['SPY US Equity', 'USGG3M Index', 'EEM US Equity', 'EFA US Equity', 'EUO US Equity', 'IWM US Equity'] df_mf['HFRIFWI Index'] = df_hfs['HFRIFWI Index'] prob6_regs = lfm_time_series_regression(df=df_mf, portfolio_names=['HFRIFWI Index'], factors=merrill_factors) # - # ### (a) Report the intercept and betas. display(prob6_regs.params) # ### (b) Are the betas realistic position sizes, or do they require huge long-short positions? # These betas are realistic position sizes. Only the USGG3M is short and that might not be allowed # ### (c) Report the R-squared. # ### (d) Report the volatility of  E(merr), (the tracking error.) # + display(prob6_regs.other) print('\nR-squared = {:.5f}'.format(np.array(prob6_regs.other['r_squared'])[0])) print('Tracking error = {:.5f}'.format(np.array(prob6_regs.residuals.std() * np.sqrt(12))[0])) # - # ## 7. Let's examine the replication out-of-sample. # + date_range = df_mf.iloc[60:, :].index oos_fitted = pd.Series(index = date_range, name = 'OOS_fit', dtype='float64') for i in range(60, len(df_mf)): date = df_mf.iloc[i:i+1, :].index # date_month_prior = pd.DatetimeIndex([date]).shift(periods = -1, freq = 'M')[0] df_subset = df_mf.iloc[i-60:i, :] with warnings.catch_warnings(): warnings.simplefilter("ignore") rhs = sm.add_constant(df_subset[merrill_factors]) lhs = df_subset['HFRIFWI Index'] res = sm.OLS(lhs, rhs, drop="missing").fit() alpha = res.params['const'] beta = res.params.drop(index='const') x_t = df_mf.loc[date, merrill_factors] predicted_next_value = alpha + x_t @ beta oos_fitted[date] = predicted_next_value oos_fitted.plot(figsize=(14,3)) df_mf.iloc[60:,:]['HFRIFWI Index'].plot() plt.legend() plt.show() None display((pd.DataFrame([oos_fitted, df_mf.iloc[60:,:]['HFRIFWI Index']])).T.corr()) # - # The OOS results perform well, showing almost the same level of replicability - 94.5% correlation between the replicating portfolio and the HFRI. # # 8. 
(a) regression beta # + from dataclasses import dataclass import warnings df_hfs = pd.read_excel('proshares_analysis_data.xlsx', sheet_name = 'hedge_fund_series').set_index('date') df_mf = pd.read_excel('proshares_analysis_data.xlsx', sheet_name = 'merrill_factors').set_index('date') # regression function @dataclass class RegressionsOutput: excess_ret_stats: pd.DataFrame params: pd.DataFrame residuals: pd.DataFrame tstats: pd.DataFrame other: pd.DataFrame df: pd.DataFrame def lfm_time_series_regression(df, portfolio_names, factors, annualize_factor=12): excess_ret_stats = pd.DataFrame(index=factors, columns=['average', 'std'], dtype=float) for factor in factors: excess_ret_stats.loc[factor, 'average'] = annualize_factor * df[factor].mean() excess_ret_stats.loc[factor, 'std'] = np.sqrt(annualize_factor) * df[factor].std() excess_ret_stats.loc[factor, 'sharpe_ratio'] = \ excess_ret_stats.loc[factor, 'average'] / excess_ret_stats.loc[factor, 'std'] # Here I'll just report the unscaled skewness excess_ret_stats.loc[factor, 'skewness'] = df[factor].skew() # excess_ret_stats.loc[factor, 'skewness'] = annualize_factor * df[factor].skew() _temp_excess_ret_stats = excess_ret_stats.copy() _temp_excess_ret_stats.loc['const', :] = 0 with warnings.catch_warnings(): warnings.simplefilter("ignore") rhs = sm.add_constant(df[factors]) df_params = pd.DataFrame(columns=portfolio_names) df_other = pd.DataFrame(columns=portfolio_names) df_residuals = pd.DataFrame(columns=portfolio_names) df_tstats = pd.DataFrame(columns=portfolio_names) for portfolio in portfolio_names: lhs = df[portfolio] res = sm.OLS(lhs, rhs, missing='drop').fit() df_params[portfolio] = res.params df_params.loc['const', portfolio] = 0 df_other.loc['r_squared', portfolio] = res.rsquared df_other.loc['model_implied_excess_ret', portfolio] = df_params[portfolio] @ _temp_excess_ret_stats['average'] df_other.loc['ave_excess_ret', portfolio] = \ annualize_factor * df[portfolio].mean() df_other.loc['std_excess_ret', portfolio] = \ np.sqrt(annualize_factor) * df[portfolio].std() df_other.loc['skewness_excess_ret', portfolio] = \ annualize_factor * df[portfolio].skew() df_other.loc['sharpe_ratio', portfolio] = \ df_other.loc['ave_excess_ret', portfolio] / df_other.loc['std_excess_ret', portfolio] df_residuals[portfolio] = res.resid df_tstats[portfolio] = res.tvalues regression_outputs = RegressionsOutput( excess_ret_stats.T, df_params.T, df_residuals, df_tstats.T, df_other.T, df) return regression_outputs # + merrill_factors = ['SPY US Equity', 'USGG3M Index', 'EEM US Equity', 'EFA US Equity', 'EUO US Equity', 'IWM US Equity'] df_mf['HFRIFWI Index'] = df_hfs['HFRIFWI Index'] prob8_regs = lfm_time_series_regression(df=df_mf, portfolio_names=['HFRIFWI Index'], factors=merrill_factors) display(prob8_regs.params) # - # # 8(b) mean of fitted value # #### the mean of HFRI is print(df_hfs['HFRIFWI Index'].mean()*12) # + a=df_mf['SPY US Equity'].mean() b=df_mf['USGG3M Index'].mean() c=df_mf['EEM US Equity'].mean() d=df_mf['EFA US Equity'].mean() e=df_mf['EUO US Equity'].mean() f=df_mf['IWM US Equity'].mean() mean_fitted=(0.072022*a-0.400591*b+0.072159*c+0.106318*d+0.022431*e+0.130892*f)*12 # - # #### the mean of fitted value is print(mean_fitted) # #### so we can see that compared to mean of MFRI, mean of fitted value is much lower # # 8(c) The correlation # #### parameters without interercept prob8_regs.params # #### parameters with interercept prob6_regs.params # + a=df_mf['SPY US Equity'] b=df_mf['USGG3M Index'] c=df_mf['EEM US Equity'] d=df_mf['EFA US 
Equity'] e=df_mf['EUO US Equity'] f=df_mf['IWM US Equity'] k=[] g=[] t=[] for i in range(len(a)): fitted_no_intece=0.072022*a[i]-0.400591*b[i]+0.072159*c[i]+0.106318*d[i]+0.022431*e[i]+0.130892*f[i] k.append(fitted_no_intece) fitted_intece=0.01376+0.072022*a[i]-0.400591*b[i]+0.072159*c[i]+0.106318*d[i]+0.022431*e[i]+0.130892*f[i] g.append(fitted_intece) t.append(df_hfs['HFRIFWI Index'][i]) df_corr = pd.DataFrame({'HFRIFWI Index':df_hfs['HFRIFWI Index'],'Predicted with intercept':g,'Predicted witout intercept':k}) df_corr.corr()[['HFRIFWI Index']] # - # #### the correlation is very similar so I think they can fit replicators without intercepts as this will help them replicate the index # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from sklearn import tree from sklearn.model_selection import cross_val_score import matplotlib.pyplot as plt import seaborn as sb import numpy as np import pandas as pd # %matplotlib inline # - from IPython.display import SVG from graphviz import Source from IPython.display import display df_train = pd.read_csv('../data/dogs_n_cats.csv') df_train = df_train.sample(frac=1).reset_index(drop=True) df_train.head() y_train = df_train['Вид'] X_train = df_train.drop(['Вид'], axis=1) X_train.head() # + clf = tree.DecisionTreeClassifier(max_depth=5, criterion='entropy') clf.fit(X_train, y_train) score = cross_val_score(clf, X=X_train, y=y_train, cv=5) score # - df_test = pd.read_json('../data/dataset_209691_15.txt') df_test.head() predicts = clf.predict(df_test) l = list(predicts) l.count('собачка') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + nbsphinx="hidden" # Ricordati di eseguire questa cella con Shift+Invio import sys sys.path.append('../') import jupman # - # # # Liste 1 - Introduzione # # # ## [Scarica zip esercizi](../_static/generated/lists.zip) # # [Naviga file online](https://github.com/DavidLeoni/softpython-it/tree/master/lists) # # # Una lista in Python è una sequenza di elementi eterogenei **mutabile**, in cui possiamo mettere gli oggetti che vogliamo. L'ordine in cui li mettiamo viene preservato. # # ## Che fare # # - scompatta lo zip in una cartella, dovresti ottenere qualcosa del genere: # # ``` # # lists # lists1.ipynb # lists1-sol.ipynb # lists2.ipynb # lists2-sol.ipynb # lists3.ipynb # lists3-sol.ipynb # lists4.ipynb # lists4-sol.ipynb # jupman.py # ``` # #
    # # **ATTENZIONE**: Per essere visualizzato correttamente, il file del notebook DEVE essere nella cartella szippata. #
    # # - apri il Jupyter Notebook da quella cartella. Due cose dovrebbero aprirsi, prima una console e poi un browser. Il browser dovrebbe mostrare una lista di file: naviga la lista e apri il notebook `lists1.ipynb` # - Prosegui leggendo il file degli esercizi, ogni tanto al suo interno troverai delle scritte **ESERCIZIO**, che ti chiederanno di scrivere dei comandi Python nelle celle successive. Gli esercizi sono graduati per difficoltà, da una stellina ✪ a quattro ✪✪✪✪ # # # Scorciatoie da tastiera: # # * Per eseguire il codice Python dentro una cella di Jupyter, premi `Control+Invio` # * Per eseguire il codice Python dentro una cella di Jupyter E selezionare la cella seguente, premi `Shift+Invio` # * Per eseguire il codice Python dentro una cella di Jupyter E creare una nuova cella subito dopo, premi `Alt+Invio` # * Se per caso il Notebook sembra inchiodato, prova a selezionare `Kernel -> Restart` # # # ## Creare liste # # Possiamo creare una lista specificando tra parentesi quadre gli elementi che contiene, e separandoli da una virgola. # # Per esempio, in questa lista inseriamo i numeri `7`, `4` e `9`: [7,4,9] # Come tutti gli oggetti in Python, possiamo associarli ad una variabile, in questo caso ce ne inventiamo una chiamata `lista`: lista = [7,4,9] lista # Vediamo meglio che succede in memoria, e compariamo la rappresentazione delle stringhe con quella delle liste: # + # AFFINCHE' PYTHON TUTOR FUNZIONI, RICORDATI DI ESEGUIRE QUESTA CELLA con Shift+Invio # (basta eseguirla una volta sola, la trovi anche all'inizio di ogni foglio) import sys sys.path.append('../') import jupman # + stringa = "prova" lista = [7,4,9] jupman.pytut() # - # Notiamo subito una differenza rilevante. La stringa è rimasta nella regione azzurra dove appaiono le associazioni tra variabili e valori, invece dalla variabile `lista` parte una freccia che punta ad una nuova regione gialla di memoria, che si crea non appena l'esecuzione raggiunge la riga che definisce la lista. # # In seguito approfondiremo meglio le conseguenze di ciò. # In una lista gli stessi elementi possono apparire più volte numeri = [1,2,3,1,3] numeri # In una lista possiamo mettere qualunque elemento, per es. le stringhe: frutti = ["mela", "pera", "pesca", "fragola", "ciliegia"] frutti # Possiamo anche mischiare i tipi di oggetti contenuti in una lista, per esempio possiamo avere interi e stringhe: misto = ["tavolo", 4 ,"sedia", 8, 5, 1, "sedia"] # In Python Tutor apparirà così: # + misto = ["tavolo", 5 , 4, "sedia", 8, "sedia"] jupman.pytut() # - # Per comodità possiamo anche scrivere la lista su più righe (gli spazi in questo caso non contano, ricordati solo di terminare le righe con delle virgole `,`) misto = ["tavolo", 5 , 4, "sedia", 8, "sedia"] # ### Esercizio - proviamo l'errore # # Prova a scrivere la lista qua sopra SENZA mettere una virgola dopo il 5, che errore appare? 
# + # scrivi qui # - # ### Tabelle # # Una lista può anche contenere altre liste: tabella = [ ['a','b','c'], ['d','e','f'] ] # Tipicamente, quando abbiamo strutture come questa, conviene disporle su più righe (non è obbligatorio ma conviene per chiarezza): tabella = [ # inizio listona esterna ['a','b','c'], # lista interna 1 ['d','e','f'] # lista interna 2 ] # fine listona esterna tabella # Vediamo come viene mostrata in Python Tutor: # + tabella = [ ['a','b','c'], ['d','e','f'] ] jupman.pytut() # - # Come detto in precedenza, in una lista possiamo mettere gli elementi che vogliamo, quindi possiamo mischiare liste di dimensioni diverse, stringhe, numeri, etc: ditutto = [ ['ciao',3,'mondo'], 'una stringa', [9,5,6,7,3,4], 8, ] print(ditutto) # Vediamo anche come appare in Python Tutor: # + ditutto = [ ['ciao',3,'mondo'], 'una stringa', [9,5,6,7,3,4], 8, ] jupman.pytut() # - # ### Lista vuota # # Ci sono due modi per creare una lista vuota. # # 1) con parentesi quadre: lista_vuota = [] lista_vuota # 2) Oppure con `list()`: altra_lista_vuota = list() altra_lista_vuota #
    # # **ATTENZIONE**: Quando crei una lista vuota (indipendentemente dalla notazione usata), in memoria viene allocata una NUOVA regione di memoria per accogliere la lista # #
    # # Vediamo meglio cosa vuol dire con Python Tutor: # + a = [] b = [] jupman.pytut() # - # Nota che sono apparse due frecce che puntano a regioni di memoria **differenti**. Lo stesso sarebbe accaduto inizializzando le liste con degli elementi: # + la = [8,6,7] lb = [9,5,6,4] jupman.pytut() # - # E avremmo avuto due liste in regioni di memoria diverse anche mettendo elementi identici dentro le liste: # + la = [8,6,7] lb = [8,6,7] jupman.pytut() # - # Le cose si complicano quando cominciamo ad usare operazioni di assegnazione: la = [8,6,7] lb = [9,5,6,4] lb = la # Scrivendo `lb = la`, abbiamo detto a Python di 'dimenticare' l'assegnazione precedente di `lb` a `[9,5,6,4]`, e di associare invece `lb` alla stesso valore già associato ad `la`, cioè `[8,6,7]`. Quindi, nella memoria vedremo una freccia che parte da `lb` ed arriva a `[8,6,7]`, e la regione di memoria dove stava la lista `[9,5,6,4]` verrà rimossa (non è più associata ad alcuna variabile). Guardiamo che succede con Python Tutor: # + la = [8,6,7] lb = [9,5,6,4] lb = la jupman.pytut() # - # ### Esercizio - scambi di liste # # Prova a scambiare le liste associate alle variabili `la` ed `lb` usando solo assegnazioni e **senza creare nuove liste**. Se vuoi, puoi sovrascrivere una terza variabile `lc`. Verifica che succede con Python Tutor. # # * il tuo codice deve poter funzionare con qualunque valore di `la`, `lb` ed `lc` # # Esempio - dati: # # ```python # la = [9,6,1] # lb = [2,3,4,3,5] # lc = None # ``` # # Dopo il tuo codice, deve risultare: # # ```python # >>> print(la) # [2,3,4,3,5] # >>> print(lb) # [9,6,1] # ``` # + la = [9,6,1] lb = [2,3,4,3,5] lc = None # scrivi qui lc = la la = lb lb = lc #print(la) #print(lb) # - # ### Domanda - creazione di liste # # # Guarda questi due pezzi di codice. Per ciascun caso, prova a pensare come possono essere rappresentati in memoria e verifica poi con Python Tutor. # # - che differenza ci potrà essere? # - quante celle di memoria verranno allocate in totale ? # - quante frecce vedrai ? # # ```python # # primo caso # lb = [ # [8,6,7], # [8,6,7], # [8,6,7], # [8,6,7], # ] # ``` # # ```python # # secondo caso # la = [8,6,7] # lb = [ # la, # la, # la, # la # ] # ``` # primo caso lb = [ [8,6,7], [8,6,7], [8,6,7], [8,6,7], ] jupman.pytut() # + # secondo caso la = [8,6,7] lb = [ la, la, la, la ] jupman.pytut() # - # **RISPOSTA**: Nel primo caso, abbiamo una 'listona' associata alla variabile `lb` che contiene 4 sottoliste ciascuna da 3 elementi. Ogni sottolista viene creata come nuova, quindi in totale in memoria abbiamo 4 celle della listona `lb` + (4 sottoliste * 3 celle ciascuna) = 16 celle # # Nel secondo caso invece abbiamo sempre la 'listona' associata alla variabile `lb` da 4 celle, ma al suo interno contiene dei puntatori alla stessa identica lista `la`. Quindi il numero totale di celle occupate è 4 celle della listona `lb` + (1 sottolista * 3 celle) = 7 celle # ### Esercizio - domino # # Nel tuo quartiere stanno organizzando un super torneo di domino, visto che il primo premio consiste in una tessera per ottenere 10 crostate dalla mitica Nonna Severina decidi di impegnarti seriamente. # # Inizi a pensare a come allenarti e decidi di iniziare a accodare le tessere in maniera corretta. 
# # ```python # tessera1 = [1,3] # tessera3 = [1,5] # tessera2 = [3,9] # tessera5 = [9,7] # tessera4 = [8,2] # ``` # # Date queste tessere genera una lista che conterrà a sua volta due liste: nella prima inserisci una possibile sequenza di tessere collegate; nella seconda lista invece metti le tessere rimaste escluse dalla sequenza di prima. # # Esempio: # # ```python # [ [ [1, 3], [3, 9], [9, 7] ], [ [1, 5], [8, 2] ] ] # ``` # # * **NON scrivere numeri** # * **USA** solo liste di variabili # + #jupman-purge-output tessera1 = [1,3] tessera2 = [3,2] tessera3 = [1,5] tessera4 = [2,4] tessera5 = [3,3] tessera6 = [5,4] tessera7 = [1,2] # scrivi qui sequenza = [tessera1, tessera5, tessera2, tessera4] spaiate = [tessera3, tessera6, tessera7] print([sequenza, spaiate]) # - # ### Esercizio - creare liste 2 # # Inserisci dei valori nelle liste `la`, `lb` tali per cui # # ```python # print([[la,la],[lb,la]]) # ``` # # stampi # # ``` # [[[8, 4], [8, 4]], [[4, 8, 4], [8, 4]]] # ``` # # * **inserisci solo dei NUMERI** # * Osserva in Python Tutor come vengono raffigurate le frecce # + la = [] # inserisci dei numeri lb = [] # inserisci dei numeri print([[la,la],[lb,la]]) # + # SOLUZIONE #jupman-purge-output la = [8,4] lb = [4,8,4] print([[la,la],[lb,la]]) # - # ### Esercizio - creare liste 3 # # Inserire dei valori come elementi delle liste `la`, `lb` e `lc` tali per cui # # ```python # print([[lb,lb,[lc,la]],lc]) # ``` # # stampi # # ``` # [[[8, [7, 7]], [8, [7, 7]], [[8, 7], [8, 5]]], [8, 7]] # ``` # # * **inseriri solo NUMERI oppure NUOVE LISTE DI NUMERI** # * Osservare in Python Tutor come vengono raffigurate le frecce # + la = [] # inserisci elementi (numeri o liste di numeri) lb = [] # inserisci elementi (numeri o liste di numeri) lc = [] # inserisci elementi (numeri o liste di numeri) print([[lb,lb,[lc,la]],lc]) # + # SOLUZIONE #jupman-purge-output la = [8,5] lb = [8,[7,7]] lc = [8,7] print([[lb,lb,[lc,la]],lc]) # - # ### Esercizio - creare liste 4 # # # Inserire dei valori nelle liste `la`, `lb` tali per cui # # ```python # print([[la,lc,la], lb]) # ``` # # stampi # # ``` # [[[3, 2], [[3, 2], [8, [3, 2]]], [3, 2]], [8, [3, 2]]] # ``` # # * **inserire solo NUMERI oppure VARIABILI** `la`,`lb` o `lc` # * Osservare in Python tutor come vengono raffigurate le frecce # + la = [] # inserisci numeri o variabili la, lb, lc lb = [] # inserisci numeri o variabili la, lb, lc lc = [] # inserisci numeri o variabili la, lb, lc print([[la,lc,la], lb]) # + # SOLUZIONE #jupman-purge-output la = [3,2] lb = [8,la] lc = [la,lb] print([[la,lc,la], lb]) # - # ## Convertire sequenze in liste # `list` può anche servire per convertire una qualsiasi sequenza in una NUOVA lista. Un tipo di sequenza che già abbiamo visto sono le stringhe, quindi proviamo a vedere cosa succede se usiamo `list` come fosse una funzione e gli passiamo come parametro una stringa: list("treno") # Abbiamo ottenuto una lista in cui ogni elemento è costituito da un carattere della stringa originale. # Se invece chiamiamo `list` su un'altra lista cosa succede? list( [7,9,5,6] ) # Apparentemente, niente di particolare, otteniamo una lista con gli stessi elementi di partenza. Ma è proprio la stessa lista? Guardiamo meglio con Python Tutor: # + la = [7,9,5,6] lb = list( la ) jupman.pytut() # - # # Notiamo che è stata creata una NUOVA regione di memoria con gli stessi elementi di `la`. 
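# A small extra check (not in the original notebook): `list` makes a *shallow* copy, so the outer list is new while any nested list is still shared between the original and the copy.

# +
la = [7, 9, [5, 6]]
lb = list(la)

print(la is lb)        # False: list() created a brand-new outer list
print(la[2] is lb[2])  # True: the nested list is shared (the copy is shallow)
# -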
# ### Esercizio - gulp # # Data una stringa con caratteri misti minuscoli e maiuscoli, scrivere del codice che crea una lista contente come primo elemento una lista coi caratteri della stringa tutti minuscoli e come secondo elemento una lista contenente tutti i caratteri maiuscoli # # * il tuo codice deve funzionare con qualunque stringa # * se non ricordi i metodi delle stringhe, [guarda qui](https://it.softpython.org/strings/strings3-sol.html) # # Esempio - dato # # ```python # s = 'GuLp' # ``` # # il tuo codice dovrà stampare # # ``` # [['g', 'u', 'l', 'p'], ['G', 'U', 'L', 'P']] # ``` # # + #jupman-purge-output s = 'GuLp' # scrivi qui print([list(s.lower()), list(s.upper())]) # - # ### Domanda - maratona # # Questo codice: # # - produce un errore o assegna qualcosa ad `x` ? # - Dopo la sua esecuzione, quante liste rimangono in memoria? # - Possiamo accorciarlo? # ```python # s = "maratona" # x = list(list(list(list(s)))) # ``` # **RISPOSTA**: Il codice assegna alla variabile `x` la lista `['m', 'a', 'r', 'a', 't', 'o', 'n', 'a']`. La prima volta `list(s)` genera la NUOVA lista `['m', 'a', 'r', 'a', 't', 'o', 'n', 'a']`. Le successive chiamate a `list` prendono in input la lista appena generata `['m', 'a', 'r', 'a', 't', 'o', 'n', 'a']` e continuano a creare NUOVE liste con contenuto identico. Dato che però nessuna lista prodotta eccetto l'ultima viene assegnata ad una variabile, quelle intermedie alla fine dell'esecuzione vengono di fatto eliminate. Possiamo quindi tranquillamente accorciare il codice scrivendo # # ```python # s = "maratona" # x = list(s) # ``` # ### Domanda - catena # # Questo codice: # # - produce un errore o assegna qualcosa ad `x` ? # - Dopo la sua esecuzione, quante liste rimangono in memoria? # # # ```python # s = "catena" # a = list(s) # b = list(a) # c = b # x = list(c) # ``` # **RISPOSTA**: Rimangono in memoria 3 liste ognuna contenente 6 celle. Questa volta le liste permangono in memoria perchè sono associate alle variabili `a`,`b` e `c`. 
Abbiamo 3 e non 4 liste perchè con l'istruzione `c = b` la variabile `c` viene associata alla stessa identica regione di memoria associata alla variabile `b` # ### Esercizio - garaga # # Date # # ```python # sa = "ga" # sb = "ra" # la = ['ga'] # lb = list(la) # ``` # # * Assegnare ad `lc` una lista costruita in modo che una volta stampata produca # # ```python # >>> print(lc) # ``` # ```python # [['g', 'a', 'r', 'a'], ['ga'], ['ga'], ['r', 'a', 'g', 'a']] # ``` # # * **in Python Tutor, TUTTE le frecce dovranno puntare ad una regione di memoria diversa** # # + sa = "ga" sb = "ra" la = ['ga'] lb = list(la) # inserire del codice nella lista lc = [] print(lc) jupman.pytut() # + # SOLUZIONE sa = "ga" sb = "ra" la = ['ga'] lb = list(la) lc = [list(sa + sb), list(la), list(lb), list(sb + sa) ] print(lc) jupman.pytut() # - # ## Continua # # Prosegui con il foglio [Liste 2 - Operatori](https://it.softpython.org/lists/lists2-sol.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import math import random import time from IPython.core.debugger import Tracer; dataset = pd.read_csv("HLTVData/playerStats.csv") dataset.head() # + #mat = dataset.as_matrix() #dataADR = dataset['ADR'] #dataRaiting = dataset['Rating'] #indexMatrix = np.array(range(dataset.shape[0])) # - dataSet = pd.read_csv("HLTVData/playerStats.csv",usecols=['ADR','Rating']) dataSet['Rating'] = dataSet['Rating']*100 dataSet.head() # + #dataSet['Rating'] = dataSet['Rating']*100 #dataSet.head() # + #this never worked #4 is the ADR and 6 is the Rating #s = pd.DataFrame(dataset, index=indexMatrix,columns=(mat[:,4],mat[:,6]) ) #s = pd.DataFrame(dataset, index=np.array(range(dataset.shape[0])), columns=dataset['ADR']) # - X= np.array(dataSet['ADR']) y = np.array(dataSet['Rating']) plt.scatter(X,y) plt.xlabel('ADR') plt.ylabel('Rating') plt.show() data = np.array(dataSet) #data #data[1,0] def SSE(m,b,data): totalError=0.0 totalNan = 0 for i in range(data.shape[0]): if(math.isnan(data[i,0])): totalNan +=1 else: yOutput = m*data[i,0]+b y = data[i,1] error = (y-yOutput)**2 totalError =totalError+ error return totalError m = 5 b = 4 plt.scatter(X,y) plt.plot(X,m*X+b,color='red') plt.show() sse = SSE(m,b,data) print('For the fitting line: y = %sx + %s\nSSE: %.2f' %(m,b,sse)) # + [markdown] language="latex" # # # # o make these ideas more precise, stochastic gradient descent works by randomly picking out a small number m of randomly chosen training inputs. We'll label those random training inputs X_1, X_2, \ldots, X_m, and refer to them as a mini-batch. Provided the sample size m is large enough we expect that the average value of the \nabla C_{X_j} will be roughly equal to the average over all \nabla C_x, that is, # # \begin{eqnarray} # # \frac{\sum_{j=1}^m \nabla C_{X_{j}}}{m} \approx \frac{\sum_x \nabla C_x}{n} = \nabla C, # # \tag{18}\end{eqnarray} # # where the second sum is over the entire set of training data. Swapping sides we get # # \begin{eqnarray} # # \nabla C \approx \frac{1}{m} \sum_{j=1}^m \nabla C_{X_{j}}, # # \tag{19}\end{eqnarray} # # confirming that we can estimate the overall gradient by computing gradients just for the randomly chosen mini-batch. # # # # To connect this explicitly to learning in neural networks, suppose w_k and b_l denote the weights and biases in our neural network. 
Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those, # # \begin{eqnarray} # # w_k & \rightarrow & w_k' = w_k-\frac{\eta}{m} # # \sum_j \frac{\partial C_{X_j}}{\partial w_k} \tag{20}\\ # # # # b_l & \rightarrow & b_l' = b_l-\frac{\eta}{m} # # \sum_j \frac{\partial C_{X_j}}{\partial b_l}, # # \tag{21}\end{eqnarray} # # where the sums are over all the training examples X_j in the current mini-batch. Then we pick out another randomly chosen mini-batch and train with those. And so on, until we've exhausted the training inputs, which is said to complete an epoch of training. At that point we start over with a new training epoch. # # # # # # # - def stochastic_gradient_descent_step(m,b,data_sample): n_points = data_sample.shape[0] #size of data m_grad = 0 b_grad = 0 stepper = 0.0001 #this is the learning rate for i in range(n_points): #Get current pair (x,y) x = data_sample[i,0] y = data_sample[i,1] if(math.isnan(x)|math.isnan(y)): #it will prevent for crashing when some data is missing #print("is nan") continue #you will calculate the partical derivative for each value in data #Partial derivative respect 'm' dm = -((2/n_points) * x * (y - (m*x + b))) #Partial derivative respect 'b' db = - ((2/n_points) * (y - (m*x + b))) #Update gradient m_grad = m_grad + dm b_grad = b_grad + db #Set the new 'better' updated 'm' and 'b' m_updated = m - stepper*m_grad b_updated = b - stepper*b_grad #print('m ', m) ##print('steepr*gradient ',stepper*m_grad) #print('m_updated', m_updated) ''' Important note: The value '0.0001' that multiplies the 'm_grad' and 'b_grad' is the 'learning rate', but it's a concept out of the scope of this challenge. For now, just leave that there and think about it like a 'smoother' of the learn, to prevent overshooting, that is, an extremly fast and uncontrolled learning. ''' return m_updated,b_updated # the diference beetween Gradient Descent, Stolchastich Gradient Descent and Mini-batch Gradient Descent is the next: # # Gradient Descent: You take all the data to compute the gradient. # # Stochastic Gradient Descent: You only take 1 point to compute the gradient (the bath size is 1) It is faster than Gradient Descent but is to noisy and is afected for the data variance. # # Mini-Batch Gradient Descent: you take n points (n< data_size) to compute the gradient. Normally you take n aleatory points. # # As note if you take in Mini-batch gradient descent n==data_size you will be computing normal gradient descent. # The difference between Stochastic Gradient Descent and Mini-batch Gradient descent is the size we take for computing the gradient. # # As you can see in the example bellow I add the first condition to return a sigle random row from the data if the batch_size is equal to 1. But is just for recall the point in the code about the difference between SGD and mini-batch GD. If you analize the code, you can delete the first two line of code and the algorith will result the same for SGD, because if you choose the batch_size equal to 1 it becomes in SGD # + def getSmallRandomDataSample(data, batch_size, shuffle=True): #this method only covers the solution when suffle is true #stolchastic gradient descent #it will take tha batch of size 1, Im just putting this here so you can see the difference. You can delete the next #two lines and it will work. 
if(batch_size==1): return np.array([random.choice(data)]) #mini-batch gradient descent if(batch_size< data.shape[0]): if(shuffle): #the first two line are simulating like if we were choosing randomly points from the data index = np.random.permutation(data.shape[0]) #first suffle the index of data index = index[0:batch_size] #then we take the batch #algorithm for getting the sample_data data_sample=[] for i in index: data_sample.append(data[i]) return np.array(data_sample) # - max_epochs = 100 print('Starting line: y = %.2fx + %.2f - Error: %.2f' %(m,b,sse)) #start = time.time() for i in range(max_epochs): data_sample = getSmallRandomDataSample(data,50) m,b = stochastic_gradient_descent_step(m,b,data_sample) sse = SSE(m,b,data) #if(i%10==0): #end = time.time() #print('time consumtion = ',end-start) #print('iteration ', i) #start = time.time() #print('At step %d - Line: y = %.2fx + %.2f - Error: %.2f' %(i+1,m,b,sse)) print('\nBest line: y = %.2fx + %.2f - Error: %.2f' %(m,b,sse)) print ('m ', m) print('b ', b) plt.scatter(X,y) plt.plot(X,m*X+b,color='red') plt.show() # + #this is to undertand what a permutation does. #a = np.random.permutation(100) #a.sort(kind='quicksort') #a # + #for gradient descent m = 1.4 , b = 0.77 #for SGD m = m 1.33361120453 , b = 3.71735648458 #for SGD batch=1 m 1.33361120453 , b = 3.71735648458 It has been the best line # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="zhRlBVwdE4PD" # ## Preprocessing # + colab={"base_uri": "https://localhost:8080/"} id="CcvJBLfpE4PJ" outputId="6d0e4c0b-65c9-40de-be96-1991a2b63c07" # Import our dependencies from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import pandas as pd import tensorflow as tf #Mounting Drive from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/", "height": 424} id="-nG8VLs7I8kp" outputId="36373bcf-d506-47c8-8965-827f2147d363" # Import and read the charity_data.csv. import pandas as pd application_df = pd.read_csv("./drive/My Drive/Colab Notebooks/alphabet_soup/Resources/charity_data.csv") application_df # + id="AQ2i0n3mE4PM" # Drop the non-beneficial ID columns, 'EIN' and 'NAME'. # YOUR CODE GOES HERE app_df = application_df.drop(["EIN","NAME"],1) # + id="tGZ9ilhkE4PN" colab={"base_uri": "https://localhost:8080/"} outputId="9a81a3d9-51ff-468f-9517-51c4c18c1dff" # Determine the number of unique values in each column. 
# YOUR CODE GOES HERE app_df.nunique() # + id="ljcrwx5SE4PN" colab={"base_uri": "https://localhost:8080/"} outputId="865a5e8e-e073-468d-e772-0c692dd09925" # Look at APPLICATION_TYPE value counts for binning # YOUR CODE GOES HERE app_ct = app_df["APPLICATION_TYPE"].value_counts() app_ct # + colab={"base_uri": "https://localhost:8080/"} id="VL4MK9Kulnuj" outputId="0327acf7-f415-4881-c101-63d749b8bb51" list(app_ct[app_ct < 500].index) # + id="_eiwNZXLE4PO" colab={"base_uri": "https://localhost:8080/"} outputId="49df9cc8-5167-44ea-ea60-9a4d4a37eb87" # Choose a cutoff value and create a list of application types to be replaced # use the variable name `application_types_to_replace` # YOUR CODE GOES HERE application_types_to_replace = list(app_ct[app_ct < 500].index) # Replace in dataframe for app in application_types_to_replace: app_df['APPLICATION_TYPE'] = app_df['APPLICATION_TYPE'].replace(app,"Other") # Check to make sure binning was successful app_df['APPLICATION_TYPE'].value_counts() # + id="PqgSd9oYE4PO" colab={"base_uri": "https://localhost:8080/"} outputId="cecdffd9-ee43-4513-e762-39aa691ff134" # Look at CLASSIFICATION value counts for binning # YOUR CODE GOES HERE class_ct = app_df["CLASSIFICATION"].value_counts() class_ct # + id="gnrzJJaiE4PP" colab={"base_uri": "https://localhost:8080/"} outputId="05923403-d44f-45c9-cfa8-559626323207" # You may find it helpful to look at CLASSIFICATION value counts >1 # YOUR CODE GOES HERE class_ct[class_ct > 1] # + id="YE937bzlE4PQ" colab={"base_uri": "https://localhost:8080/"} outputId="b2340ab3-036c-42fd-ed5a-ab59abcb47f8" # Choose a cutoff value and create a list of classifications to be replaced # use the variable name `classifications_to_replace` # YOUR CODE GOES HERE classifications_to_replace = list(class_ct[class_ct < 1000].index) # Replace in dataframe for cls in classifications_to_replace: app_df['CLASSIFICATION'] = app_df['CLASSIFICATION'].replace(cls,"Other") # Check to make sure binning was successful app_df['CLASSIFICATION'].value_counts() # + id="GxlpqKJAE4PQ" colab={"base_uri": "https://localhost:8080/", "height": 461} outputId="63768ce6-f361-4197-888e-9d05a13c342f" # Convert categorical data to numeric with `pd.get_dummies` # YOUR CODE GOES HERE app_dummies_df = pd.get_dummies(app_df) app_dummies_df # + id="tQz0-KRjE4PR" # Split our preprocessed data into our features and target arrays # YOUR CODE GOES HERE X = app_dummies_df.drop(["IS_SUCCESSFUL"], axis="columns").values y = app_dummies_df["IS_SUCCESSFUL"].values # Split the preprocessed data into a training and testing dataset # YOUR CODE GOES HERE X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=50) # + id="5I4J5jO1E4PR" # Create a StandardScaler instances scaler = StandardScaler() # Fit the StandardScaler X_scaler = scaler.fit(X_train) # Scale the data X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) # + [markdown] id="oU_hiPn-E4PR" # ## Compile, Train and Evaluate the Model # + id="YGBBbWigE4PS" colab={"base_uri": "https://localhost:8080/"} outputId="7922a555-a74a-4680-8339-06cd839a3dc5" # Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer. 
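# (the input dimension is read from the training data itself, so len(X_train[0]) automatically matches the number of feature columns produced by pd.get_dummies above)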
# YOUR CODE GOES HERE number_input_features = len(X_train[0]) hidden_nodes_layer1 = 150 hidden_nodes_layer2 = 100 # hidden_nodes_layer3 = 80 # hidden_nodes_layer4 = 30 nn = tf.keras.models.Sequential() # First hidden layer # YOUR CODE GOES HERE nn.add( tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu") ) # Second hidden layer # YOUR CODE GOES HERE nn.add( tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu") ) # Third hidden layer # YOUR CODE GOES HERE # nn.add( # tf.keras.layers.Dense(units=hidden_nodes_layer3, activation="relu") # ) # # fourth hidden layer # # YOUR CODE GOES HERE # nn.add( # tf.keras.layers.Dense(units=hidden_nodes_layer4, activation="relu") # ) # Output layer # YOUR CODE GOES HERE nn.add( tf.keras.layers.Dense(units=1, activation="sigmoid") ) # Check the structure of the model nn.summary() # + id="dwxmVlxHE4PS" # Compile the model # YOUR CODE GOES HERE nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) # + id="FZCoxUYgE4PS" colab={"base_uri": "https://localhost:8080/"} outputId="c4dbede0-836f-4522-b072-c24caa2431e7" # Train the model # YOUR CODE GOES HERE fit_model = nn.fit(X_train_scaled, y_train, epochs=100) # + id="gDn-iTGYE4PT" colab={"base_uri": "https://localhost:8080/"} outputId="b9992165-c5d5-427a-9a24-d55a9fb6b34e" # Evaluate the model using the test data model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2) print(f"Loss: {model_loss}, Accuracy: {model_accuracy}") # + id="SlhNwsdRE4PT" # Export our model to HDF5 file # YOUR CODE GOES HERE nn.save("./drive/My Drive/Colab Notebooks/alphabet_soup/AlphabetSoupCharity_NOTOptimized.h5") # + id="uuqjmtCP--2G" // -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .java // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Java // language: java // name: java // --- // ## API de Reflexão // // A API de reflexão (Reflection API) permite retornar os metadados dos objetos e os objetos em tempo de execução. // // É possível recuperar o nome de uma classe, seus atributos e métodos, sem uma referência explícita no programa em tempo de compilação. Uma das aplicações comuns de uma API de reflexão é a implementação de um **debugador**. // // O código abaixo possui alguns exemplos simples da API de reflexão. // Este código: // - carrega uma classe ArrayList; // - retorna todos os atributos; // - retorna todos os métodos; // - instancia um objeto ArrayList; // - retorna o tipo do atributo 'size'; // - retorna o tipo de retorno do método 'size()'; // - chama o método size 2 vezes, com tamanho do Array diferentes. // // // Um tutorial com todas as possibilidades está disponível na página [https://docs.oracle.com/javase/tutorial/reflect/index.html](Trail: reflection API). 
// + import java.lang.reflect.*; class Programa { public static void main () throws Exception { Class c = Class.forName("java.util.ArrayList"); System.out.println(c.getName()); int i = 0; for (Field f : c.getDeclaredFields()){ System.out.println(f.getName()); i++; } System.out.println("Número de atributos:"+i+"\n"); i=0; for (Method m : c.getMethods()){ //System.out.println(m.getName()); i++; } System.out.println("Número de métodos:"+i+"\n"); ArrayList lista = (ArrayList)c.newInstance(); c.getDeclaredField("size").setAccessible(true); System.out.println("Tipo do atributo size: "+c.getDeclaredField("size").getType()); Method m = c.getMethod("size",new Class[]{}); System.out.println("tipo de retorno do método size: "+m.getReturnType()); System.out.println("tamanho da lista, chamando o método size: "+m.invoke(lista,new Object[]{})); lista.add(25); System.out.println("tamanho da lista, chamando o método size: "+m.invoke(lista,new Object[]{})); } } Programa.main(); // - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from openml.datasets import list_datasets, get_datasets import pandas as pd from sklearn.model_selection import train_test_split from supervised.automl import AutoML import os from prediction_feature.mljar import add_dataset # + pycharm={"name": "#%%\n"} dataset_dataframe = list_datasets(output_format="dataframe", number_classes='2..20').query('NumberOfInstances < 100000 & NumberOfFeatures < 1000') dataset_dataframe # + pycharm={"name": "#%%\n"} dataset_ids = dataset_dataframe['did'] # + pycharm={"name": "#%%\n"} dataset_dataframe_list = get_datasets(dataset_ids=dataset_ids) # + pycharm={"name": "#%%\n"} dataset = dataset_dataframe_list[8] X, y, _, _ = dataset.get_data( target=dataset.default_target_attribute) print(X) print('y: ') print(y) print('\n\n\n\n') dataset_id = dataset.dataset_id print(dataset_id) dataset_dataframe.loc[dataset_id, dataset_dataframe.columns[6:]] # + pycharm={"name": "#%%\n"} for i in range(len(dataset_dataframe_list[:5])): add_dataset(dataset_dataframe_list[i], dataset_dataframe) # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pyalink.alink import * useLocalEnv(1) from utils import * import os import pandas as pd pd.set_option('display.max_colwidth', 1000) pd.set_option('display.max_rows', 200) DATA_DIR = ROOT_DIR + "iris" + os.sep ORIGIN_FILE = "iris.data"; TRAIN_FILE = "train.ak"; TEST_FILE = "test.ak"; SCHEMA_STRING = "sepal_length double, sepal_width double, "\ + "petal_length double, petal_width double, category string" FEATURE_COL_NAMES = ["sepal_length", "sepal_width", "petal_length", "petal_width"] LABEL_COL_NAME = "category"; VECTOR_COL_NAME = "vec"; PREDICTION_COL_NAME = "pred"; # + #c_1_1 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source\ .link( FirstNBatchOp().setSize(5) )\ .print(); source.firstN(5).print(); # + #c_1_2 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source\ .sampleWithSize(50)\ .lazyPrintStatistics("< after sample with size 50 >")\ .sample(0.1)\ .print(); source\ .lazyPrintStatistics("< origin data >")\ 
.sampleWithSize(150, True)\ .lazyPrintStatistics("< after sample with size 150 >")\ .sample(0.03, True)\ .print(); source_stream = CsvSourceStreamOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source_stream.sample(0.1).print(); StreamOperator.execute(); # + # c_1_3 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source\ .select("*, CASE category WHEN 'Iris-versicolor' THEN 1 " + "WHEN 'Iris-setosa' THEN 2 ELSE 4 END AS weight")\ .link( WeightSampleBatchOp()\ .setRatio(0.4)\ .setWeightCol("weight") )\ .groupBy("category", "category, COUNT(*) AS cnt")\ .print(); # + #c_1_4 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source\ .link( StratifiedSampleBatchOp()\ .setStrataCol("category")\ .setStrataRatios("Iris-versicolor:0.2,Iris-setosa:0.4,Iris-virginica:0.8") )\ .groupBy("category", "category, COUNT(*) AS cnt")\ .print(); source_stream = CsvSourceStreamOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source_stream\ .link( StratifiedSampleStreamOp()\ .setStrataCol("category")\ .setStrataRatios("Iris-versicolor:0.2,Iris-setosa:0.4,Iris-virginica:0.8") )\ .print(); StreamOperator.execute(); # + #c_2 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); print("schema of source:"); print(source.getColNames()); spliter = SplitBatchOp().setFraction(0.9); source.link(spliter); print("schema of spliter's main output:"); print(spliter.getColNames()); print("count of spliter's side outputs:"); print(spliter.getSideOutputCount()); print("schema of spliter's side output :"); print(spliter.getSideOutput(0).getColNames()); spliter\ .lazyPrintStatistics("< Main Output >")\ .link( AkSinkBatchOp()\ .setFilePath(DATA_DIR + TRAIN_FILE)\ .setOverwriteSink(True) ); spliter.getSideOutput(0)\ .lazyPrintStatistics("< Side Output >")\ .link( AkSinkBatchOp()\ .setFilePath(DATA_DIR + TEST_FILE)\ .setOverwriteSink(True) ); BatchOperator.execute(); # + #c_3_1 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source.lazyPrintStatistics("< Origin data >"); scaler = StandardScaler().setSelectedCols(FEATURE_COL_NAMES); scaler\ .fit(source)\ .transform(source)\ .lazyPrintStatistics("< after Standard Scale >"); BatchOperator.execute(); # + #c_3_2 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source.lazyPrintStatistics("< Origin data >"); scaler = MinMaxScaler().setSelectedCols(FEATURE_COL_NAMES); scaler\ .fit(source)\ .transform(source)\ .lazyPrintStatistics("< after MinMax Scale >"); BatchOperator.execute(); # + #c_3_3 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING); source.lazyPrintStatistics("< Origin data >"); scaler = MaxAbsScaler().setSelectedCols(FEATURE_COL_NAMES); scaler\ .fit(source)\ .transform(source)\ .lazyPrintStatistics("< after MaxAbs Scale >"); BatchOperator.execute(); # + #c_4_1 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING)\ .link( VectorAssemblerBatchOp()\ .setSelectedCols(FEATURE_COL_NAMES)\ .setOutputCol(VECTOR_COL_NAME)\ .setReservedCols([LABEL_COL_NAME]) ); source.link( VectorSummarizerBatchOp()\ .setSelectedCol(VECTOR_COL_NAME)\ .lazyPrintVectorSummary("< Origin data >") ); VectorStandardScaler()\ .setSelectedCol(VECTOR_COL_NAME)\ .fit(source)\ .transform(source)\ .link( VectorSummarizerBatchOp()\ 
.setSelectedCol(VECTOR_COL_NAME)\ .lazyPrintVectorSummary("< after Vector Standard Scale >") ); VectorMinMaxScaler()\ .setSelectedCol(VECTOR_COL_NAME)\ .fit(source)\ .transform(source)\ .link( VectorSummarizerBatchOp()\ .setSelectedCol(VECTOR_COL_NAME)\ .lazyPrintVectorSummary("< after Vector MinMax Scale >") ); VectorMaxAbsScaler()\ .setSelectedCol(VECTOR_COL_NAME)\ .fit(source)\ .transform(source)\ .link( VectorSummarizerBatchOp()\ .setSelectedCol(VECTOR_COL_NAME)\ .lazyPrintVectorSummary("< after Vector MaxAbs Scale >") ); BatchOperator.execute(); # + #c_4_2 source = CsvSourceBatchOp()\ .setFilePath(DATA_DIR + ORIGIN_FILE)\ .setSchemaStr(SCHEMA_STRING)\ .link( VectorAssemblerBatchOp()\ .setSelectedCols(FEATURE_COL_NAMES)\ .setOutputCol(VECTOR_COL_NAME)\ .setReservedCols([LABEL_COL_NAME]) ); source\ .link( VectorNormalizeBatchOp()\ .setSelectedCol(VECTOR_COL_NAME)\ .setP(1.0) )\ .firstN(5)\ .print(); # + #c_5 df = pd.DataFrame( [ ["a", 10.0, 100], ["b", -2.5, 9], ["c", 100.2, 1], ["d", -99.9, 100], [None, None, None] ] ) source = BatchOperator\ .fromDataframe(df, schemaStr='col1 string, col2 double, col3 double')\ .select("col1, col2, CAST(col3 AS INTEGER) AS col3") source.lazyPrint(-1, "< origin data >"); pipeline = Pipeline()\ .add( Imputer()\ .setSelectedCols(["col1"])\ .setStrategy('VALUE')\ .setFillValue("e") )\ .add( Imputer()\ .setSelectedCols(["col2", "col3"])\ .setStrategy('MEAN') ); pipeline.fit(source)\ .transform(source)\ .print(); # + #c_6 dict_arr = {'name':['Alice','Bob','Cindy'], 'value':[1,2,3]} pd.DataFrame(dict_arr) # + arr_2D =[ ['Alice',1], ['Bob',2], ['Cindy',3] ] pd.DataFrame(arr_2D, columns=['name', 'value'] ) # + arr_2D =[ ['Alice',1], ['Bob',2], ['Cindy',3] ] pd.DataFrame(arr_2D, columns=['name', 'value'] ) # - # DataFrame <> BatchOperator source = CsvSourceBatchOp()\ .setFilePath("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data")\ .setSchemaStr("sepal_length double, sepal_width double, petal_length double, petal_width double, category string") source.firstN(5).print() df_iris = source.collectToDataframe() df_iris.head() # + iris = BatchOperator\ .fromDataframe( df_iris, "sepal_length double, sepal_width double, petal_length double, " + "petal_width double, category string" ); iris.firstN(5).print() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Module 6: Lecture load data # # # ## Problems # # When you load data from a CSV file, you have to ensure the dataset get assigned correctly the different data types and formats. An area that could be particularly problematic is the one with the DATES, HOURS and TIMESTAMPS, because his formats can vary significantly. # # In the case that the database instance doesn't recognize automatically the data format correctly, or the default configuration doesn't match, you have to fix it manually before load into the database, otherwise it's possible to get errors like ` SQLSTATE=22007` # # ## Fix # # To avoid this errors, on the DB2 console, you can get a preview of the datatypes and the format that are assigned automatically. # If there are any problem, usually Db2 identifies it and shows you with a warning message. # # To fix this, you can check pressing the clock icon to see the different formats available, if there aren't visible yet. # # First, check if already exist a default format that match with your date/hour/timestamp format. If not, select the correct format from the available ones. 
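#
# For example, if you clean the CSV in Python before uploading it, you can rewrite an ambiguous
# date column into the ISO format that Db2 accepts by default. A minimal sketch with pandas
# (the file and column names below are made up, not from this lecture):

# +
import pandas as pd

df = pd.read_csv('orders.csv')  # hypothetical input file with a day-first date column
# parse the dates explicitly, then write them back as ISO strings (YYYY-MM-DD),
# a format the Db2 load console recognizes without extra configuration
df['order_date'] = pd.to_datetime(df['order_date'], format='%d/%m/%Y').dt.strftime('%Y-%m-%d')
df.to_csv('orders_clean.csv', index=False)
# -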
# # # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analysis Part I import numpy as np from cibin import tau_twosided_ci from cibin import hypergeom_conf_interval # ## One-sided confidence intervals # ##### Upper one-sided bounds # Regeneron data n=753 m=752 N=n+m n01 = 59 n11 = 11 n00 = m-n01 n10 = n-n11 alpha=0.05 # retrieve upper one-sided bounds via simultaneous Bonferroni confidence bounds for N_+1 and N_1+ Ndot_1 = hypergeom_conf_interval(n11*N/n, n11, N, 1-alpha/2, alternative="upper") N1_dot = hypergeom_conf_interval(n01*N/70, n01, N, 1-alpha/2, alternative="lower") upp = (Ndot_1[1] - N1_dot[0])/N lower = (Ndot_1[0] - N1_dot[1])/N ci = [lower, upp] ci # ##### Lower one-sided bounds # retrieve lower one-sided bounds via simultaneous Bonferroni confidence bounds for N_+1 and N_1+ Ndot_1 = hypergeom_conf_interval(n11*N/n, n11, N, 1-alpha/2, alternative="lower") N1_dot = hypergeom_conf_interval(n01*N/70, n01, N, 1-alpha/2, alternative="upper") upp = (Ndot_1[1] - N1_dot[0])/N lower = (Ndot_1[0] - N1_dot[1])/N ci = [lower, upp] ci # ## Two-sided confidence intervals # ##### Two-sided bounds with Sterne method # retrieve two-sided bounds via Sterne's method Ndot_1 = hypergeom_conf_interval(round(n11*N/n), n11, N, 1-alpha, alternative="two-sided") N1_dot = hypergeom_conf_interval(round(n01*N/70), n01, N, 1-alpha, alternative="two-sided") lower = (Ndot_1[0] - N1_dot[1])/N upp = (Ndot_1[1] - N1_dot[0])/N ci=[lower,upp] ci # # Two-sided bounds with Li and Ding's method 3 # retrieve two-sided bounds via method 3 in Li and Ding ci = tau_twosided_ci(n11, n10, n01, n00, 0.05,exact=False, reps=1)[0] ci # ## Discussion # The difference between the two-sided bounds are the coverage region of the result. From hypergeometric distribution with sterne method. The lower bound is postive, and the lower bound for method 3 is negative and the upper bound is larger for sterne is larger than the upper bound of method 3. The conclusion might be that the treatment effect might not have such good performance if we use method 3 to retrive to result, which means the press might assume the result to be overestimate. # ## Legitimate # It is legitimate to use one-sided confidence intervals if the outcome only leaves out one side of the result, depending on the type of data we are looking at. For treatment effect, it is not reasonable to have only one side of cumulative distribution, since the one-sided confience interval might not contain the outcome whether it is upper or lower one-sided. And it is not reasonable to have extreme value such as 1 and -1 in 95% confidence intervals. The average effect should be zero to fulfill the null hypothesis. # ## Preferable # It is preferable to use two-sided confidence interval since one-sided intervals might accept extreme outcome or reject outcome that are acceptable if it is two-sided confidence interval. And also for treatment effect, it is reasonable that average confidence intervals contain 1-alpha probabilities by leaving out two sides of result since we can not guarantee that the result of the treatment will always have the worst performance or the best performance from the confidence intervals. 
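#
# As a quick numeric reminder of why the two kinds of bounds differ (a generic normal-approximation
# illustration, separate from the cibin analysis above): at the same 95% level, a two-sided interval
# uses the 97.5th-percentile critical value, while a one-sided bound only needs the 95th.

# +
from scipy.stats import norm

z_two_sided = norm.ppf(0.975)  # ~1.96, critical value for a 95% two-sided interval
z_one_sided = norm.ppf(0.95)   # ~1.645, critical value for a 95% one-sided bound
print(round(z_two_sided, 3), round(z_one_sided, 3))
# -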
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": "OK"}}, "base_uri": "https://localhost:8080/", "height": 75} id="BdC-qKp8ATZj" outputId="0064c671-8632-41dc-9761-92e9f0af5157" #uploading ipynb from local folder to google drive/colab folder from google.colab import files uploaded = files.upload() # + id="xP1KKVOPDaOo" import pandas as pd import numpy as np import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/", "height": 417} id="dPHHaJbIenHL" outputId="9e5875d6-1040-488d-eb53-31afa7d4049a" import io df = pd.read_csv(io.BytesIO(uploaded['AB_NYC_2019.csv'])) df.head() # + id="ZS4z7t9mgy9J" data = df [['neighbourhood_group', 'room_type','latitude','longitude','price','minimum_nights', 'number_of_reviews', 'reviews_per_month', 'calculated_host_listings_count', 'availability_365']] # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="85naq2KmhIPR" outputId="7c3180a4-b533-4bc3-e2fd-376b6706d1ba" data.head(10).T # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="U4tigdOrhbJ6" outputId="91db9f09-ce0d-449b-e19a-d16cda8dfb69" data.describe().T # + colab={"base_uri": "https://localhost:8080/"} id="4GV07THG6k2X" outputId="2a7d6606-5b5d-4ec6-dc79-15a1138204f9" pd.value_counts(data.neighbourhood_group) # + [markdown] id="34qSPGt37B-T" # Q1 Answer: Manhattan # + [markdown] id="VFbiGY0gB-aq" # **setting up validation framework** # + colab={"base_uri": "https://localhost:8080/"} id="HE7jVp5K9Pd8" outputId="cbd4739a-6a10-4b97-9cf3-3de4035a8abc" # !pip install scikit-learn # + id="S5Fpwo4lCORK" from sklearn.model_selection import train_test_split # + colab={"base_uri": "https://localhost:8080/"} id="S66KyzR6gyBj" outputId="6545e681-619e-4f73-c25b-a213dae0a9f5" data.copy() df_full_train, df_test = train_test_split(data, test_size = 0.2, random_state = 42) df_train, df_val = train_test_split(df_full_train, test_size = 0.25, random_state = 42) print( len (df_train), len(df_val), len(df_test)) df_train.reset_index(drop = True) df_val.reset_index(drop = True) df_test.reset_index(drop = True) y_train = df_train['price'] y_val = df_val['price'] y_test = df_test['price'] # + id="MBCNeiRkL-39" del(df_train['price']) del(df_val['price']) del(df_test['price']) # + id="EKVVDXsHL-9J" # + [markdown] id="jBuWekm5CRsf" # **EDA** # + id="FDHTo7RJCUln" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="7969410c-83df-4f1c-9508-2189d3ed1be8" # + colab={"base_uri": "https://localhost:8080/", "height": 504} id="Tq8zNXrCON0Z" outputId="9726e5ec-a49e-4f1d-b4d4-f49f9fd3658e" # + id="RplxKtS_RVcz" above_average_train = (y_train >= 152).astype(int) above_average_val =(y_val >= 152).astype(int) above_average_test = (y_test >= 152).astype(int) # + colab={"base_uri": "https://localhost:8080/"} id="zz-WcxZvVLKZ" outputId="5e2de528-b364-4462-d212-f8bfe8a22f6f" df_train.dtypes # + id="g4FDXxW9Ywe7" # + id="loLPps26Y6eV" # + [markdown] id="jQllWI8wCZFK" # **feature importance/feature engineering: churn rate and risk ratio** # + id="m-QdVyYXCjPY" # + id="I2qhRGGXalbe" # + [markdown] id="ylDjKOqfbvgb" # Answer to Q3 : room type # + [markdown] 
id="qHVqpSQqCjf5" # **feature importance: mutual information** # + id="TQsXMcqlCz_Z" categorical = ['neighbourhood_group', 'room_type'] df_train_categorical = df_train[categorical] # + colab={"base_uri": "https://localhost:8080/"} id="sdPuc2dXcHTy" outputId="18e0a338-aeb3-47b3-ecbf-99cea5cf22af" from sklearn.metrics import mutual_info_score neighbourhood_score = mutual_info_score(df_train_categorical.neighbourhood_group, y_train ) room_type_score = mutual_info_score(df_train_categorical.room_type, y_train ) print (round(neighbourhood_score, 2), round(room_type_score, 2)) # + [markdown] id="KIVzPJDjC0fJ" # **feature importance: correlation** # + id="5ILxH_vzC58K" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="8e75ca37-af49-4abc-d8d9-bb2469efd32e" #correlation matrix train_corr = df_train.corr() train_corr # + colab={"base_uri": "https://localhost:8080/", "height": 504} id="DgwqmIwXcQsS" outputId="6ea4626f-8612-461b-98f3-885d2e16fd02" #visualizing correlation matrix import seaborn as sns ax = sns.heatmap(train_corr, vmin = -1, vmax = 1, center = 0, cmap=sns.diverging_palette(20, 500, n=200), square=True) ax.set_xticklabels(ax.get_xticklabels(),rotation=45, horizontalalignment='right') # + [markdown] id="DzMRPKWhcZxy" # reviews per month, number of reviews # + [markdown] id="cAWVvwbHcVHZ" # # + [markdown] id="iCt9OmK9C-SP" # **one-hot encoding** # + id="QjKxFMq7DB0g" colab={"base_uri": "https://localhost:8080/"} outputId="179357a0-39a6-4f24-f72e-30165eec885d" from sklearn.feature_extraction import DictVectorizer train_dicts = df_train.to_dict(orient = 'records') train_dicts[0] # + id="CNR5Y4KPfX1y" dv = DictVectorizer() X_train = dv.fit_transform(train_dicts) # + id="VryG9CPKgveJ" val_dicts = train_dicts = df_val.to_dict(orient = 'records') # + id="b52SIiGUgsLR" X_val = dv.transform(val_dicts) # + id="hvNY6PRRDEAh" colab={"base_uri": "https://localhost:8080/"} outputId="35820ae4-27c4-4602-f29d-ddf293d24ba3" print(X_train[0]) # + [markdown] id="TWxbXez5DEgK" # **logistic regression** # + id="jTnPclcjDHEZ" # + [markdown] id="TdAMFPdoDKZY" # **training logistic regression with scikit-learn** # + id="rkxikA6nDTQq" # + [markdown] id="JCjXYPg0DhtC" # **model interpretation** # + id="Apps-5uRDpIo" # + [markdown] id="elnw5y0zDpV4" # **using the model** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import time now = time.time() # %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings("ignore") train_dataset = pd.read_csv('DataSets/DatasetsCreated/train_dataset.csv') train_dataset.shape test_dataset = pd.read_csv('DataSets/DatasetsCreated/test_dataset.csv') test_dataset.shape test_dataset = test_dataset.replace([np.inf, -np.inf], 0) test_dataset.head() # ## Categorical Data # + # categorical_features = ['Tag', 'Purchaser', 'Merchant_Popular'] # for feature in categorical_features: # train_dataset[feature] = pd.Categorical(train_dataset[feature]) # One Hot Encoding # train_dataset = pd.concat([train_dataset,pd.get_dummies(train_dataset['Tag'], prefix='Tag',prefix_sep='_', drop_first=True,dummy_na=False)],axis=1) # train_dataset = pd.concat([train_dataset,pd.get_dummies(train_dataset['DayOfWeek'], prefix='DayOfWeek',prefix_sep='_', drop_first=True,dummy_na=False)],axis=1) # - train_dataset.columns[:100] # ## Final 
Features remove_columns = ['User_id', 'Merchant_id', 'Coupon_id', 'Discount_rate','Discount','Date_received', 'Date','Count','RedemptionDuration','DayList','DateTrack','DayNum','First_day', 'MerchantBuyList', 'Merchant_User_Visit', 'MerchantRedeemList','MerchantReleaseList', 'User_Coupon_Redeemed', 'User_Coupon_Ratio','User_discount_Ratio','Discounted_Redeemed', 'Coupon_User_Visit','UserReleaseList','UserRedeemList','CouponRedeemList','CouponReleaseList', 'MerchantUserReleaseList','MerchantUserRedeemList','FirstReleaseDate','UsersRecentMerchants', 'User_Merchant_Ratio', 'Visits', 'Duration', 'Merchant_AvgRate','Merchant_AvgDistance' ,'UniqueUsersCount','AvgDailyUsers','UniqueReleasesCount','ReleasesCount'] unimportant_features = ['DayOfWeek','ImpDay','Merchant_Popular','Purchaser'] non_time_based = ['User_Redeemed_Buy','User_Buys','User_Redeemed','User_Released','User_Ratio', 'Merchant_Ratio','Merchant_Redeemed','Merchant_Buys', 'Merchant_Redeemed_Buy', 'Coupon_Redeemed','Coupon_Released','Coupon_Ratio','UserMerchantCount'] features = list(set(train_dataset.columns)-set(remove_columns)-set(['Target'])-set(unimportant_features)-set(non_time_based)) print('Features to be included:'+str(len(features))) print(features) # ## Positive and Negative classes distribution ax = sns.countplot(x='Target', data=train_dataset) print(train_dataset['Target'].value_counts()) # # Model from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import roc_auc_score from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn import metrics from xgboost import XGBClassifier from xgboost import plot_importance from matplotlib import pyplot from sklearn.model_selection import KFold import xgboost as xgb from sklearn.metrics import precision_score,recall_score from sklearn.model_selection import GridSearchCV # from imblearn.over_sampling import SMOTE def save_model(model): #saving model from sklearn.externals import joblib # Save the model as a pickle in a file joblib.dump(model, 'Model/xgboost.pkl') # Load the model from the file clf_saved= joblib.load('Model/xgboost.pkl') y = list(train_dataset['Target']) X = train_dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=100) X_train, X_val, y_train, y_val= train_test_split(X_train, y_train, test_size=0.175) # + # X_train = train_dataset[train_dataset['DayNum']<=120] # y_train = X_train['Target'] # X_val = train_dataset[(train_dataset['DayNum']>120)&(train_dataset['DayNum']<=150)] # y_val = X_val['Target'] # X_test = train_dataset[train_dataset['DayNum']>150] # y_test = X_test['Target'] # - # ### XGBoost def train_model(): clf = XGBClassifier( n_estimators=500, objective= 'binary:logistic', nthread=3, scale_pos_weight=1, seed=410, alpha=0.2, colsample_bytree=0.8, gamma=1, learning_rate=0.1, max_depth=5, min_child_weight=5, subsample=0.8, smote__ratio=0.005, class_weight={0: 0.05, 1: 0.95}) model = clf.fit(X_train[features], y_train) # save_model(model) ax = plot_importance(model) fig = ax.figure fig.set_size_inches(15, 9) plt.show() return model model = train_model() # + # feature_importances = pd.DataFrame(model.feature_importances_,index = features,columns=['importance']).sort_values('importance',ascending=True) # feature_importances.plot(kind='barh',figsize=(10,20)) # + def evaluate_model(X_check, y_check): predictions = 
(model.predict_proba(X_check[features])[:,1]).tolist() predicted_values = (model.predict(X_check[features])).tolist() dataset = X_check.copy() dataset['Probability'] = [round(i, 6) for i in predictions] output = dataset[['User_id','Merchant_id','Date_received','Probability']] roc_score = round(roc_auc_score(y_check, predictions), 3) print('ROC AUC Score of Probailities: '+ str(roc_score)) print('ROC AUC Curve') fpr, tpr, _ = metrics.roc_curve(y_check, predictions) auc = metrics.roc_auc_score(y_check, predictions) plt.plot(fpr,tpr,label="auc="+str(auc)) plt.legend(loc=4) plt.show() predicted_values = [1 if x>0.5 else 0 for x in predictions] #print('AUC score of Predicted Values') #print(round(roc_auc_score(y_check, predicted_values), 3)) print('______________________________________________________________________') print('\n The classification report for the model:') print(classification_report(y_check, predicted_values) ) results = confusion_matrix(y_check, predicted_values) print('______________________________________________________________________') print('\n The confusion matrix for the model:') print(results) print('______________________________________________________________________') # threshold = np.arange(0,1,0.001) # precision = np.zeros(len(threshold)) # recall = np.zeros(len(threshold)) # for i in range(len(threshold)): # y1 = np.zeros(len(y_check),dtype=int) # y1 = np.where(predictions<=threshold[i],0,1) # precision[i] = precision_score(y_check,y1) # recall[i] = recall_score(y_check,y1) # plt.figure(figsize=(12,9)) # sns.set_style('whitegrid') # sns.lineplot(x=threshold,y=precision) # sns.lineplot(x=threshold,y=recall) # plt.xlabel('Threshold') # plt.title('Recall and Precision Values Vs Threshhold values') # plt.show() return output, dataset # - # ## Validation Data Scores val_output, val_data = evaluate_model(X_val, y_val) val_output.head() # ## Test Data Scores test_output, test_data = evaluate_model(X_test, y_test) test_output.head() # # Test Dataset and Submission File # + predictions = (model.predict_proba(test_dataset[features])[:,1]).tolist() predicted_values = (model.predict(test_dataset[features])).tolist() test = pd.merge(test_dataset[features], test_dataset[['User_id','Merchant_id','Date_received']] , how='left',left_index=True,right_index=True) test['Probability'] = [round(i, 6) for i in predictions] output = test[['User_id','Merchant_id','Date_received','Probability']] output[output['Probability']>0.5] # - output.to_csv('OutputFile.csv',index=False) # # Execution Time of this notebook later = time.time() difference = later - now print('Time taken for the execution of this notebook: '+str(round(difference/60,2))+' mins') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #Import scikit learn dataset library from sklearn import datasets #Load datasets iris=datasets.load_iris() # - #print the label species(setosa,versicolar,virginica) print(iris.target_names) #print the names of four features print(iris.feature_names) #print the iris data(top 5 records) print(iris.data[0:5]) #print the iris labels (0:setosa,1:versicolar,2:virginica) print(iris.target) #Creating a dataframe of given iris dataset import pandas as pd data=pd.DataFrame({'sepal length':iris.data[:,0], 'sepal width':iris.data[:,1], 'petal length':iris.data[:,2], 'petal width':iris.data[:,3], 'species':iris.target}) data #Train 
test split from sklearn.model_selection import train_test_split X=data[['sepal length','sepal width','petal length','petal width']] #Features Y=data['species']#Labels #split dataset into into training set and testing set X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.4)#60% training and 40% testing #Import Random Forest Model from sklearn.ensemble import RandomForestClassifier #Create a Gaussian Classifier clf=RandomForestClassifier(n_estimators=100) #Train the model using the training setsy_pred=clf.predict(X_test) clf.fit(X_train,Y_train) Y_pred=clf.predict(X_test) #Import sckit-learn metrics module for accuracy calculation from sklearn import metrics #Model Accuracy,how often is the classifier correct? print('Accuracy:',metrics.accuracy_score(Y_test,Y_pred)) clf.predict([[3,4,8,9]]) # # Feature Selection feature_imp=pd.Series(clf.feature_importances_,index=iris.feature_names).sort_values(ascending=False) feature_imp import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline #Create a bar plot sns.barplot(x=feature_imp,y=feature_imp.index) #Add labels to your graph plt.xlabel('Feature importance score') plt.ylabel('Features') plt.title('Visualizing Important Features') plt.show() for index, row in data.interrows(): A=row[''] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pickle from matplotlib import pyplot pkl_path = "dipg_vs_normals_training_history.pkl" history = pickle.load(open(pkl_path, "rb")) history pyplot.plot(history['train_loss']) pyplot.plot(history['val_loss']) pyplot.title('model train vs validation loss') pyplot.ylabel('loss') pyplot.xlabel('epoch') pyplot.legend(['train', 'validation'], loc='upper right') pyplot.show() pyplot.plot(history['train_acc']) pyplot.plot(history['val_acc']) pyplot.title('model train vs validation loss') pyplot.ylabel('loss') pyplot.xlabel('epoch') pyplot.legend(['train', 'validation'], loc='upper right') pyplot.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # ## Reguläre Ausdrücke # # # + [markdown] slideshow={"slide_type": "subslide"} # ### Eingangsbeispiel - Standardisierung von Straßennamen # + slideshow={"slide_type": "subslide"} string = '100 NORTH MAIN ROAD' string.replace("ROAD", "RD.") # + slideshow={"slide_type": "subslide"} string = '100 NORTH BROAD ROAD' # problem string.replace("ROAD", "RD.") # + slideshow={"slide_type": "subslide"} string[:-4] + string[-4:].replace('ROAD', 'RD.') # umstaendliche und spezifische loesung # + slideshow={"slide_type": "subslide"} import re re.sub('ROAD$', 'RD.', string) # regulaerer ausdruck # + [markdown] slideshow={"slide_type": "subslide"} # [Reguläre Ausdrücke](https://de.wikipedia.org/wiki/Regul%C3%A4rer_Ausdruck) (regular expressions) spezifizieren Mengen von Zeichenketten, die über verschiedene Operationen identifiziert werden können. Für Data Scraping sind reguläre Ausdrücke sehr hilfreich um z.B. relevante Texte aus Webseiten PDF's zu extrahieren. # # > Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems. 
# # > ** # + [markdown] slideshow={"slide_type": "subslide"} # ### `re` # # Das Paket [re](https://docs.python.org/3/library/re.html) für reguläre Ausdrücke ist in der Standardbibliothek von Python enthalten. # # + slideshow={"slide_type": "subslide"} import re pattern = 'a' string = 'Spam, Eggs and Bacon' # + slideshow={"slide_type": "subslide"} print(re.match(pattern, string)) # Sucht am Anfang des Strings # + slideshow={"slide_type": "subslide"} print(re.search(pattern, string)) # Sucht erstes Auftreten im String # + [markdown] slideshow={"slide_type": "subslide"} # Als Objekt: # + slideshow={"slide_type": "fragment"} pattern = re.compile('a') print(pattern.search(string)) # nur suchen # + slideshow={"slide_type": "fragment"} print(pattern.search(string).group()) # suchen und ausgeben ueber group # + [markdown] slideshow={"slide_type": "subslide"} # ### `re.findall` # Findet alle Vorkommnisse in einem String und gibt diese als *Liste-von-Strings* zurück. # + slideshow={"slide_type": "fragment"} print(string) # + slideshow={"slide_type": "fragment"} print(re.findall('a', string)) # + slideshow={"slide_type": "fragment"} print(re.findall(' ', string)) # + [markdown] slideshow={"slide_type": "subslide"} # ### Sonderzeichen: # # 1. `.` (dot) ist der allgemeinste, reguläre Ausdruck. Er spezifiziert ein beliebiges Zeichen im String. # 2. `^` (carret) bezeichnet den Anfang eines Strings. # 3. `$` (dollar) bezeichnet die Position vor der newline (`\n`) oder das Ende des Strings im `MULTILINE` Modus. # + slideshow={"slide_type": "subslide"} print(string) # + slideshow={"slide_type": "subslide"} print(re.search('.a.', string).group()) # erster treffer # + slideshow={"slide_type": "subslide"} print(re.findall('.a.', string)) # alle treffer # + [markdown] slideshow={"slide_type": "subslide"} # ### Verkettung # # Spezifiziert Strings in bestimmter Reihenfolge. Die Reihenfolge kann negiert werden in dem man eine Menge angibt: `[]`. # + slideshow={"slide_type": "fragment"} print(re.search('AND', 'AND DNA XYZ').group()) # + slideshow={"slide_type": "fragment"} print(re.findall('[AND]', 'AND DNA XYZ')) # + slideshow={"slide_type": "fragment"} print(string) print(re.findall('[amb]', string)) # + [markdown] slideshow={"slide_type": "subslide"} # ### Alternative # # Findet mehrere Alternativen regulärer Ausdrücke. Wird durch `|`-Operator angegeben. # + slideshow={"slide_type": "fragment"} print(re.findall('AND|DNA|RNA', 'AND DNA XYZ')) # + [markdown] slideshow={"slide_type": "subslide"} # ### Weitere Sonderzeichen # Folgende Zeichen haben besondere Bedeutungen in regulären Ausdrücken: # # Zeichen | Bedeutung # -|- # `.`| Beliebiges Zeichen. Mit `DOTALL` auch die Newline (`\n`) # `^`| Anfang eines Strings. Wenn `MULTILINE`, dann auch nach jedem `\n` # `$`| Ende des Strings. Wenn `MULTILINE`, dann auch vor jedem `\n` # `\`| Escape für Sonderzeichen oder bezeichnet eine Menge # `[]`| Definiert eine Menge von Zeichen # `()`| Festlegung von Gruppen # + [markdown] slideshow={"slide_type": "subslide"} # ### Wiederholungen # Spezifiziert Anzahl der Wiederholungen des vorangegangenen regulären Ausdrucks. Folgende Wiederholungen sind möglich: # # Syntax | Bedeutung # -|- # `*` | 0 oder mehr Wiederholungen # `+` | 1 oder mehr Wiederholungen # `{m}` | Genau `m` Wiederholungen # `{m,n}` | Von `m` bis einschließlich `n` # # + slideshow={"slide_type": "subslide"} peter = '''The screen is filled by the face of PETER PARKER, a 17 year old boy. 
High school must not be any fun for Petttter, he's one hundred percent nerd- skinny, zitty, glasses. His face is just frozen there, a cringing expression on it, which strikes us odd until we realize the image is freeze framed.''' # + slideshow={"slide_type": "subslide"} peter # + [markdown] slideshow={"slide_type": "subslide"} # # Die Wiederholungen sind standardmäßig *greedy*, d.h. es wird soviel vom String verbraucht, wie möglich. Dieses Verhalten kann abgeschaltet werden, indem ein `?` nach der Wiederholung gesetzt wird. # + slideshow={"slide_type": "subslide"} print(re.findall('s.*n', peter)) # greedy # + slideshow={"slide_type": "subslide"} print(re.findall('s.*?n', peter)) # non-greedy # + [markdown] slideshow={"slide_type": "subslide"} # Zusatzparameter `re.DOTALL` um über `.` auch `\n` zu erfassen: # + slideshow={"slide_type": "subslide"} print(re.findall('s.*?n', peter, re.DOTALL)) # + slideshow={"slide_type": "subslide"} re.findall('\.', peter) # suche nach punkt, escapen des sonderzeichens "." # + [markdown] slideshow={"slide_type": "subslide"} # Grundsätzlich kann die Regex Kombination, `.*?`, verwendet werden, um mehrere Platzhalter (`.`) beliebig häufig (`*`) vorkommen zu lassen, solange bis das nächste Pattern zum ersten mal gefunden wird (`?`). # + slideshow={"slide_type": "fragment"} string = 'eeeAaZyyyyyyPeeAAeeeZeeeeyy' print(re.findall('A.*Z', string)) # greedy # + slideshow={"slide_type": "fragment"} print(re.findall('A.*?Z', string)) # non-greedy # + [markdown] slideshow={"slide_type": "subslide"} # ### Spezifizierung von Mengen # # Syntax | Äquivalent | Bedeutung # -|-|- # `\d` | `[0-9]` | Ganze Zahlen # `\D` | `[^0-9]` | Alles was keine Zahl ist # `\s` | `[ \t\n\r\f\v]` | Alles was whitespace ist # `\S` | `[^ \t\n\r\f\v] ` | Alles was nicht whitespace ist # `\w` | `[a-zA-Z0-9_]` | Alphanumerische Zeichen und Unterstrich # `\W` | `[^a-zA-Z0-9_]` | Kein alphanumerische Zeichen oder Unterstrich # + slideshow={"slide_type": "subslide"} print(peter) # + slideshow={"slide_type": "subslide"} re.sub('\s', '_', peter) # ersetzen # + slideshow={"slide_type": "subslide"} re.findall('\d', peter) # alle zahlen # + slideshow={"slide_type": "fragment"} re.findall('\d{2}', peter) # zwei aufeinanderfolgende zahlen # + [markdown] slideshow={"slide_type": "subslide"} # ### Look arounds # # Look arounds ermöglichen es Mengen vor und nach Strings zu prüfen ohne diese zu extrahieren. Grundlegende Syntax: `(?`...`)` # # Syntax | Bedeutung # -|- # `(?=`...`)` | *positive lookahead* # `(?!`...`)` | *negative lookahead* # `(?<=`...`)` | *positive lookbehind* # `(?, a 17 year old boy. High school must not be any fun for Petttter, he's one hundred percent nerd- skinny, zitty, glasses. His face is just frozen there, a cringing expression on it, which strikes us odd until we realize the image is freeze framed.''' # Code Übungsaufgabe 1 # + [markdown] slideshow={"slide_type": "subslide"} # ### Übungsaufgabe 2 # # Nutzt reguläre Ausdrücke um eine Liste von URL's nach Bildern (.jpg, .jpeg, .png) zu filtern. 
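# One possible approach, shown here only as a hint (it uses a made-up mini list, not the
# `beispiel_links` list defined in the next cell): a pattern that requires an image extension
# at the end of the URL.

# +
import re

demo_links = ['https://example.org/page/', 'https://example.org/logo.png', 'https://example.org/photo.JPG']  # hypothetical URLs
image_pattern = re.compile(r'\.(jpg|jpeg|png)$', re.IGNORECASE)  # image extension at the end of the string
image_links = [link for link in demo_links if image_pattern.search(link)]
print(image_links)
# -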
# + slideshow={"slide_type": "subslide"} beispiel_links = [ 'https://www.uni-bamberg.de/ma-politik/schwerpunkte/computational-social-sciences/', 'https://www.uni-bamberg.de/fileadmin/_processed_/f/c/csm_Schmid_Finzel_2c34cb23de.jpg', 'https://www.uni-bamberg.de/fileadmin/uni/verwaltung/presse/042_MARKETING/0421_Corporate_Design/Logos-extern/weltoffene-hochschule/Logo-EN-170.png', 'https://www.uni-bamberg.de/fileadmin/_processed_/e/d/csm_2020-04-30_Homeschooling_web_4cf4ce1ad8.jpeg', 'https://www.uni-bamberg.de/soziologie/lehrstuehle-und-professuren/'] # Code Übungsaufgabe 2 # + [markdown] slideshow={"slide_type": "subslide"} #
    #
    # # # ___ # # # **Kontakt: ** (Webseite: www.carstenschwemmer.com, Email: ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: .venv # language: python # name: .venv # --- from nbdev import * # %nbdev_default_export geog # ## Mapping of Australian States # # https://data.gov.au/dataset/ds-dga-bdcf5b09-89bc-47ec-9281-6b8e9ee147aa/details?q=PSMA%20administrative%20boundaries # # https://psma.com.au/wp-content/uploads/2019/06/Administrative-Boundaries-Getting-Started-Guide-New.pdf import pandas as pd import numpy as np import matplotlib.pyplot as plt import geopandas as gpd # %matplotlib inline # https://data.gov.au/geoserver/nsw-suburb-locality-boundaries-psma-administrative-boundaries/wfs?request=GetFeature&typeName=ckan_91e70237_d9d1_4719_a82f_e71b811154c6&outputFormat=json # # https://data.gov.au/dataset/ds-dga-91e70237-d9d1-4719-a82f-e71b811154c6/details # # https://data.gov.au/data/dataset/12eca357-6bad-4130-9c47-eaaf4c11e039/resource/0def58c2-343e-47b2-a270-a35637c2f7b9/download/ntlocalitypolygonshp.zip # + # # !gunzip datasets/geo-data/nsw_locality_polygon_shp/NSW_LOCALITY_POLYGON_shp.shp.gz # - # %nbdev_export def extract_postcode(pc): try: postcode = np.int(pc[-4:]) except: postcode = np.NaN return postcode # %nbdev_export def load_suburb_data(state_name='NSW'): MAP_DATA_PATH = '../data/datasets/geo-data/' STATE_SUBURBS_FILE = 'NSW_Suburbs.geojson.json' suburbs_df = gpd.read_file(MAP_DATA_PATH + STATE_SUBURBS_FILE.replace('NSW', state_name)) suburbs_df['postcode'] = suburbs_df['loc_pid'].apply(lambda pc: extract_postcode(pc)) suburbs_df['postcode'] = suburbs_df['postcode'].astype(np.int64, errors='ignore') suburbs_df.to_excel(MAP_DATA_PATH + STATE_SUBURBS_FILE.replace('NSW', state_name).replace('.geojson.json', '.xlsx')) return suburbs_df suburbs_df = load_suburb_data() suburbs_df.to_feather('tmp_nsw_suburbs.feather') # + #TODO: Get data for all states - check formats - maybe just pull from URL (in JSON) and cache? # - NSW_URL = 'https://data.gov.au/geoserver/nsw-suburb-locality-boundaries-psma-administrative-boundaries/wfs?request=GetFeature&typeName=ckan_91e70237_d9d1_4719_a82f_e71b811154c6&outputFormat=json' nsw_df = gpd.read_file(NSW_URL) tmp_df = gpd.read_file('../data/datasets/geo-data/NSW_Suburbs.geojson.json') nsw_df.equals(tmp_df) # + ## NOTE: The shape file approach didn't work initially - but the GDA2020 file format seems ok but has different/less? 
info # than the geoJSON file - so using geoJSON data instead # set the filepath and load in a shapefile fp = '../data/datasets/geo-data/nsw_state_polygon_shp/NSW_STATE_POLYGON_shp.shp' map_shp_df = gpd.read_file(fp) # + # Lord Howe Island - map_df.iloc[3477] - drop as not part of mainland NSW try: map_df.drop(index=3477, inplace=True) except Exception as e: print(e) # + # now let's preview what our map looks like with no data in it fig, ax = plt.subplots(1, figsize=(20, 12)) ax.axis('off') map_df.plot(ax=ax) # - fig.savefig('testmap.png', dpi=300) # + # Load in other data # Synthetic example counts by post code df = pd.read_excel('../data/numberbypostcode.xlsx') # - df = df[['postcode', 'n']] df['n'] = df['n'].astype(np.float64) # + #merged = map_df.set_index('postcode').join(df.set_index('postcode')) # - merged = pd.merge(map_df, df, how='outer', on='postcode') merged.rename(index=str, columns={'n': 'nComplaints', 'postcode': 'Post code'}, inplace=True) merged['nComplaints'].fillna(0, inplace=True) merged.head() # + # set a variable that will call whatever column we want to visualise on the map variable = 'nComplaints' # set the range for the choropleth vmin, vmax = 100, 1000 # create figure and axes for Matplotlib fig, ax = plt.subplots(1, figsize=(20, 12)) # create map merged.plot(column=variable, cmap='Blues', linewidth=0.8, ax=ax, edgecolor='0.8') # Now we can customise and add annotations # remove the axis ax.axis('off') # add a title ax.set_title('Number of complaints by post code (NSW)', \ fontdict={'fontsize': '25', 'fontweight' : '3'}) # create an annotation for the data source ax.annotate('Source: AFCA', xy=(0.1, .08), xycoords='figure fraction', horizontalalignment='left', verticalalignment='top', fontsize=10, color='#555555') # Create colorbar as a legend sm = plt.cm.ScalarMappable(cmap='Blues', norm=plt.Normalize(vmin=vmin, vmax=vmax)) sm._A = [] cbar = fig.colorbar(sm) # this will save the figure as a high-res png. you can also save as svg # fig.savefig('testmap.png', dpi=300) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:phathom] # language: python # name: conda-env-phathom-py # --- import numpy as np from lapsolver import solve_dense from scipy.optimize import linear_sum_assignment N = 100 costs = np.random.randint(0, 100, (N, N)) # %%timeit -n 100 rows_lapsolver, cols_lapsolver = solve_dense(costs) rows_lapsolver # %%timeit -n 100 rows_scipy, cols_scipy = linear_sum_assignment(costs) rows_lapsolver, cols_lapsolver = solve_dense(costs) rows_scipy, cols_scipy = solve_dense(costs) assert np.all(rows_lapsolver == rows_scipy) assert np.all(cols_lapsolver == cols_scipy) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Pl9sS5bdJHkf" # # [`for` loops](https://docs.python.org/3/tutorial/controlflow.html#for-statements) # + [markdown] id="DsgzuVswJHkp" # ## Looping lists # + id="Lf-k0-vCJHkr" outputId="a4a4a6e1-d18d-4c2a-f6cf-65c886888544" colab={"base_uri": "https://localhost:8080/"} my_list = [1, 2, 3, 4, 'Python', 'is', 'neat'] for item in my_list: print(item) # + [markdown] id="YBKSAFvHJHku" # ### `break` # Stop the execution of the loop. 
# + id="lnlocbXmJHkv" outputId="b4d9eeda-2a1e-4eff-bfe6-700dccaf6471" colab={"base_uri": "https://localhost:8080/"} for item in my_list: if item == 'Python': break print(item) # + [markdown] id="nebLktZUJHkw" # ### `continue` # Continue to the next item without executing the lines occuring after `continue` inside the loop. # + id="DQRaJwQ2JHky" outputId="71ca3e9f-2b9d-4d52-fa1d-0c37f7b3fa7e" colab={"base_uri": "https://localhost:8080/"} for item in my_list: if item == 1: continue print(item) # + [markdown] id="4zMx3-LCJHk0" # ### `enumerate()` # In case you need to also know the index: # + id="ITTsbFSxJHk1" outputId="cf368696-f847-42b3-ecb2-46c429c53b00" colab={"base_uri": "https://localhost:8080/"} for idx, val in enumerate(my_list): print('idx: {}, value: {}'.format(idx, val)) # + [markdown] id="w-IxHOfJJHk2" # ## Looping dictionaries # + id="kCHu0uIdJHk3" outputId="375b15d6-181b-4e4f-b4eb-865b5da283e6" colab={"base_uri": "https://localhost:8080/"} my_dict = {'hacker': True, 'age': 72, 'name': ''} for key in my_dict: print(key) # + id="D4s9YWo6JHk4" outputId="40e2d8ce-366f-4968-a894-bd4a7775ac7f" colab={"base_uri": "https://localhost:8080/"} for key, val in my_dict.items(): print('{}={}'.format(key, val)) # + id="40NlfntSPTbA" outputId="9c06b8ec-cc7f-47ba-a000-3eccab18f067" colab={"base_uri": "https://localhost:8080/"} for key in my_dict.values(): print(key) # + id="L9tuNzTcPd67" outputId="4f7ca8d2-21c2-4c25-f508-2534f1a41e0d" colab={"base_uri": "https://localhost:8080/"} for key in my_dict.keys(): print(key) # + [markdown] id="e0SHrCfKJHk5" # ## `range()` # + id="o2BXwt18JHk5" outputId="3ea3911d-79fc-4a84-dcb4-e608afc5e9f1" colab={"base_uri": "https://localhost:8080/"} for number in range(5): print(number) # + id="jbMkK-ZoJHk6" outputId="d304fe4f-6c96-487c-9acf-dc3fe66c6720" colab={"base_uri": "https://localhost:8080/"} for number in range(2, 5): print(number) # + id="RNXk5IDQJHk7" outputId="9bfd0fe6-aea6-4f96-c597-4f6ef6523652" colab={"base_uri": "https://localhost:8080/"} for number in range(0, 10, 2): # last one is step print(number) # + id="Wyba02wRPO4u" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="uijk_xI6mEXq" colab_type="text" # # Simple Ray tracing in python # In computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and simulating the effects of its encounters with virtual objects. # # ![Ray tracing illustration](https://upload.wikimedia.org/wikipedia/commons/thumb/8/83/Ray_trace_diagram.svg/450px-Ray_trace_diagram.svg.png) # # # + [markdown] id="GZNJh7GMnBJj" colab_type="text" # Ray tracing is a simpler algorithm than rasterization which relies on a complex pipeline of computer graphics techniques and algorithms. While the advantage of Ray tracing are simplicity and ease of implementation, it's notoriously slow compared to rasterization. # In this notebook, we'll attempt to implement a simple ray tracer in Python . Python is not a great choice for such a compute intesive task but as an example, it'll be enough. 
# + id="JQrdtNd8oTIt" colab_type="code" colab={} #import the required modules import numpy as np #np.seterr(divide='ignore', invalid='ignore') import math import cv2 import random import time import matplotlib.pyplot as plt # %matplotlib inline # + id="SocfBBIkoYAh" colab_type="code" colab={} #a short hand for vector normalization def normalize(arr): #return arr/np.sqrt(arr[0]**2+arr[1]**2+arr[2]**2) return arr/np.linalg.norm(arr) # + [markdown] id="l6rN0KzxofIN" colab_type="text" # Python natively doesn't have a 3D vector class but as 3D vectors are metrices only, we can use the numpy library to simulate 3D vector math. # Our simple ray tracer will only have spheres. As an extension, more objects can be added later. # + id="yMvXGVtkowqP" colab_type="code" colab={} # class Sphere(object): def __init__(self, center, radius, surface_color, transparency, reflection,emission_color=np.array([0.0,0.0,0.0])): self.center = center self.radius = radius self.surface_color = surface_color self.emission_color = emission_color self.transparency = transparency self.reflection = reflection #intersection routine: Hit a ray with a sphere and return hit points def intersect(self, ray_origin, ray_dir): l = self.center - ray_origin tca = l.dot(ray_dir) if tca < 0: return False,-1,-1 d2 = l.dot(l) - tca*tca if d2 > self.radius**2: return False,-1,-1 thc = math.sqrt(self.radius**2 - d2) t0 = tca - thc t1 = tca + thc return True,t0,t1 # + id="PeK3e3yCpzzk" colab_type="code" colab={} MAX_RAY_DEPTH=5 # a simple routine for mixing two colors def mix(a,b,mix): return b*mix + a*(1-mix) # + id="lrdCD9k1qeV2" colab_type="code" colab={} def ray_trace(ray_origin, ray_dir, spheres, depth): tnear = math.inf hit = None for sphere in spheres: status,t0,t1 = sphere.intersect(ray_origin, ray_dir) if status: if t0 < 0 : t0 = 1 if t0 < tnear: tnear = t0 hit = sphere if hit == None: return np.array([0.2,0.2,0.2]) surface_color = np.array([0.0,0.0,0.0]) phit = ray_origin + ray_dir * tnear nhit = phit - hit.center nhit = normalize(nhit) # If the normal and the view direction are not opposite to each other # reverse the normal direction. That also means we are inside the sphere so set # the inside bool to true. 
Finally reverse the sign of IdotN which we want # positive.bias bias = 1e-4 inside = False if ray_dir.dot(nhit) > 0: nhit = -1.0 * nhit inside = True if (hit.transparency > 0 or hit.reflection > 0) and depth < MAX_RAY_DEPTH: facing_ratio = -1.0*ray_dir.dot(nhit) fresnel_effect = mix(math.pow(1 - facing_ratio, 3), 1, 0.1) refl_dir = ray_dir - nhit * 2 * ray_dir.dot(nhit) refl_dir = normalize(refl_dir) reflection = ray_trace(phit + nhit*bias, refl_dir, spheres, depth + 1) refraction = np.array([0.0,0.0,0.0]) if hit.transparency > 0: ior = 1.1 eta = 0 if inside: eta = ior else: eta = 1 / ior cos_i = -1.0*nhit.dot(ray_dir) k = abs(1 - eta * eta * (1 - cos_i**2)) #print(k) sk = math.sqrt(k) refr_dir = normalize(ray_dir * eta + nhit * (eta * cos_i - math.sqrt(k) )) #print(refr_dir) refraction = ray_trace(phit - nhit*bias, refr_dir, spheres, depth + 1) surface_color = (reflection * fresnel_effect + refraction*(1 - fresnel_effect)*hit.transparency) * hit.surface_color else: surface_color = np.array([0.0,0.0,0.0]) for i in range(len(spheres)): if spheres[i].emission_color[0] > 0: transmission = np.array([1.0,1.0,1.0]) light_dir = spheres[i].center - phit light_dir = normalize(light_dir) for j in range(len(spheres)): if i != j: t0,t1=0.0,0.0 status,t0,t1= spheres[j].intersect(phit + nhit*bias, light_dir) if status: transmission = np.array([0.0,0.0,0.0]) break surface_color += hit.surface_color * transmission * max(0.0, nhit.dot(light_dir)) * spheres[i].emission_color #print(i,surface_color) return surface_color + hit.emission_color # + id="qwGlKgyHq1FW" colab_type="code" colab={} def render(spheres): height,width = 480,640 image = np.zeros((height,width,3)) fov = 30.0 aspect_ratio = float(width/height) angle = math.tan(math.pi*0.5*fov/180) for y in range(height): for x in range(width): xx = (2 * ((x + 0.5) * 1/width) - 1) * angle * aspect_ratio yy = (1 - 2 * ((y + 0.5) * 1/height)) * angle ray_dir = np.array([xx,yy,-1.0]) ray_dir = normalize(ray_dir) val = ray_trace(np.array([0.0,0.0,20.0]),ray_dir,spheres,0) #print(x,y) image[y,x] = val #save the buffer to a file image = np.float32(image) image = cv2.cvtColor(image,cv2.COLOR_BGR2RGB) plt.imshow(image) # + id="HIJxxRyIrvqI" colab_type="code" outputId="13a9534a-8c55-4f96-ca0a-6ac8edad8ebc" colab={"base_uri": "https://localhost:8080/", "height": 304} if __name__ == "__main__": random.seed(13) spheres = [] #position, radius, surface color, reflectivity, transparency, emission color spheres.append(Sphere(np.array([ 0.0, -10004, -20]), 10000, np.array([0.20, 0.20, 0.20]),0,0)) spheres.append(Sphere(np.array([ 0.0, 0, -20]), 4, np.array([1.0, 0.32, 0.36]),1,0.5)) spheres.append(Sphere(np.array([ 5.0, -1, -15]), 2, np.array([0.90, 0.76, 0.46]), 1,0)) spheres.append(Sphere(np.array([ 5.0, 0, -25]), 3, np.array([0.65, 0.77, 0.97]),1,0)) spheres.append(Sphere(np.array([ -5.5, 0, -15]), 3, np.array([0.90, 0.90, 0.90]), 1,0)) #light spheres.append(Sphere(np.array([ 0.0, 20, -10]), 3, np.array([0.0,0.0,0.0]), 0,0, np.array([3.0,3.0,3.0]))) start = time.time() render(spheres) print(f'total time for rendering:{time.time()-start} seconds') # + id="d9rjXQKWr_W6" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Iris Dataset - Logistic Regression # # In this Jupyter notebook we'll look at the classic Iris dataset and use it to train a logistic regression classifier 
using Scikit-learn. # import libaries import sklearn import numpy as np from sklearn import datasets import matplotlib.pyplot as plt # %matplotlib inline # load and describe Iris data iris = datasets.load_iris() print(iris.DESCR) print('Iris dataset keys: {}'.format(list(iris.keys()))) #print('Target/class names: {}'.format(iris.target_names)) #print('Feature names: {}'.format(iris.feature_names)) iris['data'][:,2:3] # assign features and labels X = iris["data"][:,3:] # petal length Y = (iris["target"] == 1).astype(np.int) # Iris Versicolour = 1 print('X shape: {}'.format(X.shape)) print('Y shape: {}'.format(Y.shape)) print(X,Y) # Logistic regression using scikit-learn from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(X,Y) y_proba = log_reg.predict_proba(X_new) y_proba # + # Predict model's estimated probabilities for flowers with petal lengths 1-7 cm X_new = np.linspace(1,7,1000).reshape(-1,1) # 1000x1 matrix of length 1-7 y_proba = log_reg.predict_proba(X_new) print(y_proba[:,1]) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] decision_boundary = X_new[y_proba[:, 1] >= 0.5] plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Versicolour") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Versicolour") plt.plot(X_new[0],decision_boundary) #plt.plot([decision_boundary, decision_boundary], "k:", linewidth=2) plt.legend() # - print(y_pred) print(Y.shape) # + X = iris["data"][:,2:].reshape(-1,1) # petal width print(X.shape) y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") X.shape X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) plt.show() # - # ### References # Hands on ML" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Scraping Monthly Tweets # This notebook uses the getoldtweet package to scrape monthly tweets from twitters search results page. also it contains sme post processing options in the lower cells. 
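# A short addendum to the Iris logistic-regression cells above, before the scraping sections begin: the decision boundary can also be read directly off the fitted coefficients rather than scanned from predict_proba. This is a hedged sketch that assumes log_reg is the single-feature LogisticRegression fitted above; the boundary is where the linear term crosses zero.
# +
import numpy as np

# p = 0.5 exactly where intercept + coef * x = 0
boundary = -log_reg.intercept_[0] / log_reg.coef_[0, 0]
print('Decision boundary at x = {:.3f} cm'.format(boundary))

# cross-check against the scan over predict_proba used above
X_check = np.linspace(0, 3, 1000).reshape(-1, 1)
p_check = log_reg.predict_proba(X_check)
print(X_check[p_check[:, 1] >= 0.5][0])
# -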
# ## Import Packages # + import os import gc from IPython.display import Audio, display import time from collections import Counter import pandas as pd from datetime import datetime, timedelta from dateutil.relativedelta import relativedelta import GetOldTweets3 as got # - # ## Run Monthly Scrape # ### Construct each group to be scraped #Artificial Intelligence graph terms AI_graph = ['#AI', '#ML', '#NLP', 'Artificial Intelligence','Deep Learning', '"Machine Learning"', 'Natural Language Processing','Neural Network'] #Distributed Ledger graph terms DL_graph = ['Bitcoin', 'Blockchain','Ethereum','distributed ledger','smart contract'] # ### Define Functions # + def allDone(): '''this function outputs a short funny audio when called. Typically this is used to signal a task completion''' display(Audio(url='https://sound.peal.io/ps/audios/000/000/537/original/woo_vu_luvub_dub_dub.wav', autoplay=True)) def update_tweet_csv(path,DF,start,end,delta,Verbose=True): '''This function saves the results of the scrape to the disk. it is meant to be passed within a loop and append data being scraped with each loop to the DF stored on the disk. typically the loop runs daily scrapes for the period of a month''' #if the scrape was successful and the file doesnt exist then create a file and save the DF as a csv if len(DF)>0 and os.path.isfile(path) == False: DF.to_csv(path, index=False) #start and end parameters dont need editing since scrape was successful start, end = start, end #print date scraped, time of scrape, and number of daily tweets scraped if Verbose==True: print(since," // ",datetime.now()," / ", round(len(DF))," tweets/day") #if the scrape is successful and file name exists, then append to it elif len(DF)>0 and os.path.isfile(path) == True: #open the csv of the month being scraped globe = pd.read_csv(path) #append the day scraped globe = globe.append(DF) #save new DF to the csv globe.to_csv(path, index=False) start, end = start, end if Verbose==True: print(since," // ",datetime.now()," // ", round(len(DF))," tweets/day ",len(globe)) #If twitter data was not reached due to any interruptions/block wait then try that day again elif len(DF)==0: if Verbose==True: print(since," // ",datetime.now()," // ", round(len(DF))," tweets/day **") #adjust the start and end dates to retry scraping this day start -= delta end -= delta time.sleep(60) return start, end def tweets_to_df(tweet): '''this function saves the results of the twitter scrapes into lists then creates a DF out of them. this is needed to extract info from the getoldtweets3 generator object''' #initialize lists text, date, hashtag, username, link, keyword, ID = [], [], [], [], [], [], [] #add content to lists using GOT3 "tweet" generator object for tweets in tweet: text.append(str(tweets.text)) date.append(str(tweets.date)) hashtag.append(str(tweets.hashtags)) username.append(str(tweets.username)) link.append(tweets.permalink) keyword.append(word) ID.append(tweets.id) #compile content into a DF DF = pd.DataFrame({'tweet':text, 'date/time':date, 'hashtags':hashtag, 'user':username, 'links':link, 'search':keyword,'tweet_id':ID}) return DF # + [markdown] heading_collapsed=true # #### why twitter has limitations and why you should download in daily intervals: # + [markdown] hidden=true # "The issue here is **Min_position** and **Has_more_items** flags. Twitter's legacy timeline caching system **Haplo** has its limitations. So when you start downloading millions of tweets, it runs out of memory and sometimes returns has_more_items as false. 
You can read about how twitter cache works in here # # https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale.html " # # source: https://github.com/Mottl/GetOldTweets3/issues/3 # - # ### Run Monthly Scrape # Info relating to the steps to be followed below: # - set the start and end date to be scraped # - scrapes are ran in daily intervals because that is the smallest interval allowed by twitter.(e.g. since the results are scraped in descending chronological order if a scrape is ran over a week and gets interrupted,due to hash issues, days worth of data can be lost, however if a single day's scrape gets interrupted then only hours are lost. this saves the user the hassle of rechecking for missing days and rescraping.) # - if the user doesnt want to see process updates set verbose==False in the update_tweet_csv function # # Background info: # - the typical speed of GOT3 is roughly 2.5 million tweets/day # - scraping a month worth's data, using the lists above(AI_graph and DL_graph), takes a full day # - using a different proxy for each request(20 tweets) using services like crawlera reduces scraping speed by 5.5 times. # - it is recommended to use a diffferent IP address for each day of scraping or scraping gets blocked by twitter repeatedly 10 cycles. for word in AI_graph+DL_graph: delta = timedelta(days = 1) #set scrape range (e.g. number of days, ,weeks, months) start = datetime(2019,7,1) - delta #set first day of scrape x = start + 2*delta # x is the element used in the while loop indicating the current start date being scraped stop_point = datetime(2019,8,1) #set final day of scrape, this is not inclusive data_dir = os.getcwd() + '/twitter_data_2019/' file_name = 'globe_' + word + "_" + (start+delta).strftime('%Y-%m') + '.csv' print(file_name, '\nstart: ', datetime.now()) while x < stop_point: try: start += delta end = start + delta since = (start).strftime("%Y-%m-%d") until = (end).strftime("%Y-%m-%d") x = end #Get tweets by query search tweetCriteria = got.manager.TweetCriteria().setQuerySearch(word).setSince(since).setUntil(until) tweet = got.manager.TweetManager.getTweets(tweetCriteria) #store the data as a DF DF = tweets_to_df(tweet) #save the daily scrape to csv on disk and update start & end accordingly path = data_dir + file_name start, end = update_tweet_csv(path,DF,start,end,delta,Verbose=True) #minimize memory retention del [DF, tweet, tweetCriteria] gc.collect() #in case of an error occuring mid a scrape cycle, wait then repeat the cycle except: print('error occured at ', since, datetime.now()) #maintian same date and dont save the data start -= delta end -= delta #wait a while before trying again time.sleep(120) #audio signal when each each phrase/month finishes allDone() # + [markdown] heading_collapsed=true # ## Check Continuity of Data # + [markdown] hidden=true # As mentioned above, due to hash issues and others, twitter sometimes limits the results returned in a search. to detect the missing data use the cell below and discover the number of hours and dates missing. # # info to be filled: # - the "filename" should be changed to the psth of the desired csv # - change the range of dates to be searched for missing data by changing b(end date) and a(start date) # - set the min_hrs parameter which is used to show the days with more than a certain number of hours missing(e.g. 
min_hrs=2 then only dates with more than 2 hrs missing will be printed) # # results: # - percent of hours missed (typically a DF will have <3% of missing data if scrape is done in daily intervals as is recommended) # - a list showing how many days have how many hours missing (# of hrs, num of days) e.g.[(1, 4), (2, 2),..., (23, 5), (24, 2)] # - the number of days with more than min_hrs missing # - date of each day with more than min_hrs missing (this list can later be used to rescrape dates with significant number of hrs/day missing) # + hidden=true #### Check if data is continous filename = 'globe_Bitcoin_130101_170121_6.csv' print(filename, datetime.now()) # get hours scraped, for days change 13 to 10 actual = set([datetime.strptime(date_str[:13],"%Y-%m-%d %H") for date_str in pd.read_csv(filename)['date/time']]) # generate all possible hours in date range b = datetime(2017,1,21) a = datetime(2013,1,1) numhrs = 24*((b.date()-a.date()).days) dateList = [] for x in range(0, numhrs+2): dateList.append((a - timedelta(hours = x))) #the list incomplete/missing dates min_hrs = 1 #the minumum number hours needed to display date hours_missed = sorted(set(dateLis) - actual) #all missing hours counter = Counter([date.date() for date in hours_missed]) #count hours missed per day sort = sorted(counter.items()) #sort in chronological order dates_missed = [date[0] for date in sort if date[1]>min_hrs] #keep dates with more than 2 hours missing #calculate the total number of hours missed as a percentage summary = Counter([date[1] for date in sort]) summary = sorted(summary.items()) total_missed_hours = sum([x[0]*x[1] for x in summary]) print('Missing: ', total_missed_hours*100/numhrs, "%","\n",summary) # create since and until to search twitter for those missing date ranges since_missing = dates_missed until_missing = [dm + timedelta(days=1) for dm in dates_missed] print(" # Days: ", len(dates_missed),"\n", "Ranges sizes: ", since_missing) # + [markdown] heading_collapsed=true # ## Rescrape missing Data # + [markdown] hidden=true # this cell is used to rescrape days with significant missing hours. this cell is optional and can be avoided by simply scraping 1 day at a time as reccomended above. it was only created to ammend scrapes initially done in larger intervals (1 week scrapes). however since twitter's smallest date interval is a date then by setting the scrape to 1 day intervals the highest accracy will be acheived from the start. 
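# For reference, the same continuity check can be written more compactly with pandas.date_range. This is a hedged sketch: it assumes the scraped CSV has the 'date/time' column produced by tweets_to_df above, and the file name and date range are illustrative.
# +
import pandas as pd

filename = 'globe_Bitcoin_130101_170121_6.csv'   # illustrative path
df = pd.read_csv(filename, usecols=['date/time'])

# hours that actually appear in the scrape, floored to the hour
scraped_hours = pd.to_datetime(df['date/time']).dt.floor('H').drop_duplicates()

# every hour that should exist in the scraped range
expected = pd.date_range(start='2013-01-01', end='2017-01-21', freq='H')

missing = expected.difference(pd.DatetimeIndex(scraped_hours))
print('Missing: {:.2f}%'.format(100 * len(missing) / len(expected)))

# days with more than min_hrs missing hours, ready to feed into the rescrape cell below
min_hrs = 2
missing_per_day = pd.Series(missing.date).value_counts()
dates_missed = sorted(missing_per_day[missing_per_day > min_hrs].index)
print(len(dates_missed), dates_missed[:5])
# -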
# + hidden=true # import urllib3 # urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) # import warnings # warnings.filterwarnings('ignore', category=ResourceWarning) #Amended search for missing dates filename = 'globe_' + AI_graph[0] + '_130101_170121_5_missing.csv' print("start: ", filename, datetime.now()) word = AI_graph[0] for i in range(len(since_missing)): if (until_missing[i] - since_missing[i]).days <= 0: continue since = (since_missing[i]).strftime("%Y-%m-%d") until = (until_missing[i]).strftime("%Y-%m-%d") text, date, hashtag, username, link, keyword, ID = [], [], [], [], [], [], [] try: #Get tweets by query search tweetCriteria = got.manager.TweetCriteria().setQuerySearch(word).setSince(since).setUntil(until) tweet = got.manager.TweetManager.getTweets(tweetCriteria) except: print('ERROR: ', since) time.sleep(15) continue #add content to lists for tweets in tweet: text.append(str(tweets.text)) date.append(str(tweets.date)) hashtag.append(str(tweets.hashtags)) username.append(str(tweets.username)) link.append(tweets.permalink) keyword.append(word) ID.append(tweets.id) #compile content into a DF DF = pd.DataFrame({'tweet':text, 'date/time':date, 'hashtags':hashtag, 'user':username, 'links':link, 'search':keyword,'tweet_id':ID}) if len(DF)>0 and os.path.isfile(filename) == False: DF.to_csv(filename, index=False) print(since,"-->",until," // ",datetime.now()," / ", len(DF)/(until_missing[i] - since_missing[i]).days,"rows/days") del [DF, text, date, hashtag, username, link, keyword, ID, tweet, tweetCriteria ] gc.collect() continue elif len(DF)>0 and os.path.isfile(filename) == True: globe = pd.read_csv(filename) globe = globe.append(DF) globe.to_csv(filename, index=False) print(since,"-->",until," // ",datetime.now()," // ", (len(DF))/(until_missing[i] - since_missing[i]).days,"rows/days") del [globe, DF, text, date, hashtag, username, link, keyword, ID, tweet, tweetCriteria ] gc.collect() continue else: print(since," // ",datetime.now()," // "," 0 rows") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: math_for_ML # language: python # name: math_for_ml # --- # # Mathematics for Machine Learning: Linear Algebra # ## Week2 # ## Module 2: # # ### Modulus & inner product (vectors) # #### usefull pakages # + from sympy import solve, Poly, Eq, Function, exp, sqrt, simplify,acos from sympy.abc import x, y, z, a, b, c, d , e, f from sympy.vector import CoordSys3D import numpy as np # - # ### lenght of the vector: C = CoordSys3D('C') # cartesian coordinate r = x*C.i + y*C.j + z*C.k sqrt(r.dot(r)) # ### dot product : s = a*C.i + b*C.j + c*C.k r.dot(s) # #### commutative so s.r = r.s assert r.dot(s) == s.dot(r) # #### distributive: r.(s+t) == r.s + r.t t = d*C.i + e*C.j + f*C.k r.dot(s+t) r.dot(s) + r.dot(t) assert r.dot(s+t).equals(r.dot(s) + r.dot(t)) (r-s).dot(r-s) from sympy.physics.vector import * N = ReferenceFrame('N') (a*N.x+b*N.y).magnitude() s=(a*N.x+b*N.y) r=(x*N.x + y*N.y) (r.dot(s)) r.magnitude()*s.magnitude() Tetha = acos((r.dot(s))/(r.magnitude()*s.magnitude())) Tetha (r.dot(r)) (r.dot(-r)) rn = (y*N.x + -x*N.y) #normal (r.dot(rn)) # ### Projection i, j, k = C.base_vectors() v1 = i + j + k v2 = 3*i + 4*j v1.projection(v2) v2.projection(v1) # #### Scalar Projection v2.projection(v1,scalar=True) == v1.dot(v2)/(v2.magnitude()**2) v2.projection(v1,scalar=True) v1.projection(v2,scalar=True) == v2.dot(v1)/(v1.magnitude()**2) 
v1.projection(v2,scalar=True) # ## Changing basis # 1) v = np.array([1,1,2,3]) b1 = np.array([1,0,0,0]) b2 = np.array([0,2,-1,0]) b3 = np.array([0,1,2,0]) b4 = np.array([0,0,0,0]) v.dot(b1)/(b1.dot(b1)) v.dot(b2)/(b2.dot(b2)) v.dot(b3)/(b3.dot(b3)) v.dot(b3)/(b3.dot(b3)) # ## Vector operations assessment v1 = 2*i + j c = 3*i -4 *j #c2 = i-j #assert c.dot(c2) == 0 assert v1.dot(c)/(c.dot(c)) == c.projection(v1,scalar=True) c.projection(v1,scalar=True) c2.projection(v1) v = -4*i-3*j+8*k b1 = i+2*j+3*k b2 = -2*i+1*j b3 = -3*i-6*j+5*k b1.projection(v,scalar=True) b2.projection(v,scalar=True) b3.projection(v,scalar=True) x = np.array([3, 2, 4]) v = np.array([-1, 2, -3]) t = 2 x_n = x + t * v x_n # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # !pip install fake-useragent # # !pip install selenium # if you want to use webdriver on google colab, please install the following package # # !apt-get update # to update ubuntu to correctly run apt install # # !apt install chromium-chromedriver # # !cp /usr/lib/chromium-browser/chromedriver /usr/bin import fake_useragent from selenium import webdriver print( fake_useragent.VERSION ) # - # ![image.png](attachment:image.png) # Use Selenium to locate the drop-down menu that appears when the mouse is hovered And write the xpath conditional expression in separate lines: # ![image.png](attachment:image.png) # # Collecting the data # + ua = fake_useragent.UserAgent() # .random ua_list = [ua.ie, ua['Internet Explorer'], ua.msie, ua.chrome, ua.google, ua['google chrome'], ua.firefox, ua.ff, ua.safari, ua.opera ] import random current_ua = random.choice(ua_list) header = {'UserAgent': current_ua, 'Connection': 'close' } options = webdriver.ChromeOptions() options.add_argument( "'" + "user-agent=" + header['UserAgent'] + "'" ) options.add_argument('--disable-gpu') # google document mentioned this attribute can avoid some bugs # the purpose of the argument --disable-gpu was to enable google-chrome-headless on windows platform. # It was needed as SwiftShader fails an assert on Windows in headless mode ### earlier.### # it doesn't run the script without opening the browser,but this bug ### was fixed.### # options.add_argument('--headless') is all you need. # The browser does not provide a visualization page. 
# If the system does not support visualization under linux, it will fail to start if you do not add this one # options.add_argument('--headless') # Solve the prompt that chrome is being controlled by the automatic test software options.add_experimental_option('excludeSwitches',['enable-automation']) options.add_experimental_option('useAutomationExtension', False) # set the browser as developer model, prevent the website identifying that you are using Selenium driver = webdriver.Chrome( executable_path = 'C:/chromedriver/chromedriver', # last chromedriver is chromedriver.exe options = options ) # #run command("window.navigator.webdriver")in the Console of the inspection # #result: undefine # means: regular browser driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ delete navigator.__proto__.webdriver; """ } ) driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ Object.defineProperty( navigator, 'webdriver', { get: () => undefined } ) """ } ) # browser.set_window_size(1920, 1080) root = 'https://www.basketball-reference.com/leagues/' page_link = 'NBA_2016_games.html#schedule' url = root + page_link from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import time from selenium.webdriver.common.action_chains import ActionChains import io import pandas as pd wait = WebDriverWait(driver, 2) time_0 = time.time() driver.get( url ) # https://www.selenium.dev/selenium/docs/api/py/webdriver/selenium.webdriver.common.by.html#module-selenium.webdriver.common.by wait.until( EC.presence_of_element_located( ( By.ID, 'content' ), ) ) time_1 = time.time() print(time_1-time_0,':\n',driver.current_url) #import pandas as pd # /html/body/div[2]/div[5]/div[3]/div[2]/table # df = pd.read_html(driver.page_source)[0] # print(df) # /html/body/div[2]/div[5]/div[2]/div[1] month_div_list = driver.find_elements_by_xpath('//div[@id="content"]/div[@class="filter"]/div') month_link_list = [] month_list = [] for m in month_div_list: page_link = m.find_element_by_xpath('./a').get_attribute('href')# .text # get_attribute('href') month_link_list.append(page_link) month_list.append( m.find_element_by_xpath('./a').text ) time_2 = time.time() print('Get all links:', time_2) print( month_list ) def crawlData( month ): driver.get( month[0] ) wait.until( EC.presence_of_element_located( ( By.ID, 'all_schedule' ), ) ) share_export='Get table as CSV (for Excel)' # //*[@id="schedule_sh"]/div/ul/li/div/ul/li[4]/button # /html/body/div[2]/div[5]/ div[3]/div[1]/ div/ul/li/div/ul/li[4]/button # share_export_menu = driver.find_element_by_xpath('//div[@id="schedule_sh"]/div[@class="section_heading_text"]') hidden_submenu = driver.find_element_by_xpath('//div[@id="schedule_sh"]/div[@class="section_heading_text"]' \ '/ul/li/div/ul/li/button[contains(text(),"{}")]'.format(share_export) ) actions = ActionChains( driver ) actions.move_to_element( share_export_menu ) actions.click( hidden_submenu ) actions.perform() wait.until( EC.presence_of_element_located( ( By.ID, 'div_schedule' ), ) ) csv = driver.find_element_by_xpath('//div[@id="div_schedule"]/div/pre[@id="csv_schedule"]').text df = pd.read_fwf( io.StringIO(csv) ) df.to_csv( month[1] + '.csv', #'temp.csv', header=False, index=False, sep='\n') # sep='\n' time_3 = time.time() from multiprocessing import Pool pool = Pool(2) pool.map( crawlData, zip(month_link_list, month_list) ) pool.close() pool.join() driver.quit() # - # 
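# A side note on the parsing step above: the text copied from the csv_schedule <pre> element is comma-separated, so reading it with pd.read_csv is usually more faithful than read_fwf (which splits on fixed column widths). The sketch below is hedged: the number of caption lines preceding the header row in the export is an assumption and may need adjusting after inspecting the copied text.
# +
import io
import pandas as pd

def parse_schedule_csv(csv_text, skip_leading=0):
    """Parse the comma-separated text copied from the 'Get table as CSV' widget.

    skip_leading is a guess at how many caption lines precede the header row;
    adjust it for the page you are scraping.
    """
    return pd.read_csv(io.StringIO(csv_text), skiprows=skip_leading)

# usage (csv is the string grabbed from the pre#csv_schedule element above):
# df = parse_schedule_csv(csv, skip_leading=0)
# df.to_csv(month[1] + '.csv', index=False)
# -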
![image.png](attachment:image.png) # + ua = fake_useragent.UserAgent() # .random ua_list = [ua.ie, ua['Internet Explorer'], ua.msie, ua.chrome, ua.google, ua['google chrome'], ua.firefox, ua.ff, ua.safari, ua.opera ] import random current_ua = random.choice(ua_list) header = {'UserAgent': current_ua, 'Connection': 'close' } options = webdriver.ChromeOptions() options.add_argument( "'" + "user-agent=" + header['UserAgent'] + "'" ) options.add_argument('--disable-gpu') # google document mentioned this attribute can avoid some bugs # the purpose of the argument --disable-gpu was to enable google-chrome-headless on windows platform. # It was needed as SwiftShader fails an assert on Windows in headless mode ### earlier.### # it doesn't run the script without opening the browser,but this bug ### was fixed.### # options.add_argument('--headless') is all you need. # The browser does not provide a visualization page. # If the system does not support visualization under linux, it will fail to start if you do not add this one # options.add_argument('--headless') # Solve the prompt that chrome is being controlled by the automatic test software options.add_experimental_option('excludeSwitches',['enable-automation']) options.add_experimental_option('useAutomationExtension', False) # set the browser as developer model, prevent the website identifying that you are using Selenium driver = webdriver.Chrome( executable_path = 'C:/chromedriver/chromedriver', # last chromedriver is chromedriver.exe options = options ) # #run command("window.navigator.webdriver")in the Console of the inspection # #result: undefine # means: regular browser driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ delete navigator.__proto__.webdriver; """ } ) driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ Object.defineProperty( navigator, 'webdriver', { get: () => undefined } ) """ } ) # browser.set_window_size(1920, 1080) root = 'https://www.basketball-reference.com/leagues/' page_link = 'NBA_2016_games.html#schedule' url = root + page_link from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import time from selenium.webdriver.common.action_chains import ActionChains import io import pandas as pd wait = WebDriverWait(driver, 2) time_0 = time.time() driver.get( url ) # https://www.selenium.dev/selenium/docs/api/py/webdriver/selenium.webdriver.common.by.html#module-selenium.webdriver.common.by wait.until( EC.presence_of_element_located( ( By.ID, 'content' ), ) ) time_1 = time.time() print(time_1-time_0,':\n',driver.current_url) #import pandas as pd # /html/body/div[2]/div[5]/div[3]/div[2]/table # df = pd.read_html(driver.page_source)[0] # print(df) # /html/body/div[2]/div[5]/div[2]/div[1] month_div_list = driver.find_elements_by_xpath('//div[@id="content"]/div[@class="filter"]/div') month_link_list = [] month_list = [] for m in month_div_list: page_link = m.find_element_by_xpath('./a').get_attribute('href')# .text # get_attribute('href') month_link_list.append(page_link) month_list.append( m.find_element_by_xpath('./a').text ) time_2 = time.time() print('Get all links:', time_2) # month_list for page_month_index in range( len(month_list) ): time_3 = time.time() driver.get( month_link_list[page_month_index] ) wait.until( EC.presence_of_element_located( ( By.ID, 'all_schedule' ), ) ) share_export='Get table as CSV (for Excel)' # 
//*[@id="schedule_sh"]/div/ul/li/div/ul/li[4]/button # /html/body/div[2]/div[5]/ div[3]/div[1]/ div/ul/li/div/ul/li[4]/button # share_export_menu = driver.find_element_by_xpath('//div[@id="schedule_sh"]/div[@class="section_heading_text"]') hidden_submenu = driver.find_element_by_xpath('//div[@id="schedule_sh"]/div[@class="section_heading_text"]' \ '/ul/li/div/ul/li/button[contains(text(),"{}")]'.format(share_export) ) actions = ActionChains( driver ) actions.move_to_element( share_export_menu ) actions.click( hidden_submenu ) actions.perform() wait.until( EC.presence_of_element_located( ( By.ID, 'div_schedule' ), ) ) csv = driver.find_element_by_xpath('//div[@id="div_schedule"]/div/pre[@id="csv_schedule"]').text # Process string format data, save it to csv file  df = pd.read_fwf( io.StringIO(csv) ) df.to_csv( month_list[page_month_index] + '.csv', #'temp.csv', header=False, index=False, sep='\n') # sep='\n' print(driver.current_url) driver.quit() # - # ![image.png](attachment:image.png) # # Combine the data in all CSV files and store them in a file and named it with ‘basketball.csv’ # delete "Mexico" in December.csv and then save it # ![image.png](attachment:image.png) # + df_whole = pd.read_csv( month_list[0] + '.csv', sep=',', dtype={ 'PTS': 'int', 'PTS.1': 'int', 'Attend.': 'int', 'Notes': "string", 'Unnamed: 7': 'object', }, ) for m in range( 1, len(month_list) ): cur_m_df = pd.read_csv( month_list[m] + '.csv', sep=',', dtype={ 'PTS': 'int', 'PTS.1': 'int', 'Attend.': 'int', 'Notes': "string", 'Unnamed: 7': 'object', }, ) df_whole = pd.merge( df_whole, cur_m_df, how='outer' ) df_whole.shape # - df_whole.to_csv('basketball.csv', index=False) # # load the dataset using the read_csv function # + import pandas as pd data_filename='basketball.csv' dataset = pd.read_csv(data_filename) dataset.head() # - # # After looking at the output, we can see a number of problems: # # * The date is just a string and not a date object # * From visually inspecting the results, the headings aren't complete or correct # + dataset = pd.read_csv( data_filename, parse_dates=['Date'] ) dataset.columns = ["Date", "Start (ET)", "Visitor Team", "VisitorPts", "Home Team", "HomePts", "OT?", "Score Type","Attend.", "Notes"] dataset.head() # - dataset = dataset.drop('Attend.', axis=1) dataset.head() print(dataset.dtypes) # # Extracting new features # specify our class as 1(True) if the home team wins and 0(False) if the visitor team wins dataset["HomeWin"] = dataset["VisitorPts"] < dataset["HomePts"] dataset.head() y_true = dataset["HomeWin"].values # show the advantage on home team dataset['HomeWin'].mean() # + from collections import defaultdict won_last = defaultdict(int) won_last # - dataset["HomeLastWin"] = 0 dataset["VisitorLastWin"] = 0 dataset.head() # + for row_index, row in dataset.iterrows(): # (row_index, The data of the row as a Series), ..., ... 
home_team = row["Home Team"] visitor_team = row["Visitor Team"] row["HomeLastWin"] = won_last[home_team] # won_last[home_team] : whether the home team won the previous game dataset.at[ row_index, "HomeLastWin" ] = won_last[home_team] dataset.at[ row_index, "VisitorLastWin" ] = won_last[visitor_team] won_last[home_team] = int( row["HomeWin"] ) won_last[visitor_team] = 1-int( row["HomeWin"] ) dataset.head(n=10) # - dataset.iloc[1000:1005] # + from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier( random_state=14 ) # default=”gini” # - X_previous_wins = dataset[ ["HomeLastWin", "VisitorLastWin"] ].values X_previous_wins # + from sklearn.model_selection import cross_val_score import numpy as np # Accuracy = (TP+TN)/(TP+FP+FN+TN) # to use the default 5-fold cross validation scores = cross_val_score( clf, X_previous_wins, y_true, scoring='accuracy' ) print( "Accuracy: {0:.1f}%".format( np.mean(scores)*100 ) ) # - # # Sports outcome prediction # ![image.png](attachment:image.png) # + ua = fake_useragent.UserAgent() # .random ua_list = [ua.ie, ua['Internet Explorer'], ua.msie, ua.chrome, ua.google, ua['google chrome'], ua.firefox, ua.ff, ua.safari, ua.opera ] import random current_ua = random.choice(ua_list) header = {'UserAgent': current_ua, 'Connection': 'close' } options = webdriver.ChromeOptions() options.add_argument( "'" + "user-agent=" + header['UserAgent'] + "'" ) options.add_argument('--disable-gpu') # google document mentioned this attribute can avoid some bugs # the purpose of the argument --disable-gpu was to enable google-chrome-headless on windows platform. # It was needed as SwiftShader fails an assert on Windows in headless mode ### earlier.### # it doesn't run the script without opening the browser,but this bug ### was fixed.### # options.add_argument('--headless') is all you need. # The browser does not provide a visualization page. 
# If the system does not support visualization under linux, it will fail to start if you do not add this one # options.add_argument('--headless') # Solve the prompt that chrome is being controlled by the automatic test software options.add_experimental_option('excludeSwitches',['enable-automation']) options.add_experimental_option('useAutomationExtension', False) # set the browser as developer model, prevent the website identifying that you are using Selenium driver = webdriver.Chrome( executable_path = 'C:/chromedriver/chromedriver', # last chromedriver is chromedriver.exe options = options ) # #run command("window.navigator.webdriver")in the Console of the inspection # #result: undefine # means: regular browser driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ delete navigator.__proto__.webdriver; """ } ) driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ Object.defineProperty( navigator, 'webdriver', { get: () => undefined } ) """ } ) # browser.set_window_size(1920, 1080) root = 'https://www.basketball-reference.com/leagues/' page_link = 'NBA_2015_standings.html' url = root + page_link from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import time from selenium.webdriver.common.action_chains import ActionChains import io import pandas as pd wait = WebDriverWait(driver, 2) driver.get( url ) # https://www.selenium.dev/selenium/docs/api/py/webdriver/selenium.webdriver.common.by.html#module-selenium.webdriver.common.by wait.until( EC.presence_of_element_located( ( By.ID, 'all_expanded_standings' ), ) ) share_export='Get table as CSV (for Excel)' # //*[@id="schedule_sh"]/div/ul/li/div/ul/li[4]/button # /html/body/div[2]/div[5]/ div[3]/div[1]/ div/ul/li/div/ul/li[4]/button # share_export_menu = driver.find_element_by_xpath('//div[@id="expanded_standings_sh"]/div[@class="section_heading_text"]') hidden_submenu = driver.find_element_by_xpath('//div[@id="expanded_standings_sh"]/div[@class="section_heading_text"]' \ '/ul/li/div/ul/li/button[contains(text(),"{}")]'.format(share_export) ) actions = ActionChains( driver ) actions.move_to_element( share_export_menu ) actions.click( hidden_submenu ) actions.perform() wait.until( EC.presence_of_element_located( ( By.ID, 'div_expanded_standings' ), ) ) csv = driver.find_element_by_xpath('//div[@id="div_expanded_standings"]/div/pre[@id="csv_expanded_standings"]').text standings_fileName="standings.csv" df = pd.read_fwf( io.StringIO(csv) ) df.to_csv( standings_fileName, #'temp.csv', header=False, index=False, sep='\n') # sep='\n' print(driver.current_url) driver.quit() # - # ![image.png](attachment:image.png) # + import os standings = pd.read_csv( "standings.csv", skiprows=1 ) standings.head() # + dataset["HomeTeamRanksHigher"] = 0 for index, row in dataset.sort_values(by="Date").iterrows(): home_team = row["Home Team"] visitor_team = row["Visitor Team"] home_rank = standings[ standings["Team"] == home_team ]["Rk"].values[0] visitor_rank = standings[ standings["Team"] == visitor_team ]["Rk"].values[0] dataset.at[index, "HomeTeamRanksHigher"] = int( home_rank < visitor_rank ) dataset.head(n=10) # - X_homehigher = dataset[ ["HomeTeamRanksHigher", "HomeLastWin", "VisitorLastWin"] ].values X_homehigher # + clf = DecisionTreeClassifier( random_state=14 , criterion='entropy' ) # default=”gini” scores = cross_val_score( clf, X_homehigher, y_true, scoring="accuracy" ) print( "Accuracy: 
{0:.1f}%".format( np.mean(scores)*100 ) ) # - # This now scores 61.8 percent even better than our previous result, and now better than just using wheather the teams were winner on previous game to do predictions on every time. Can we do better? # + last_match_winner = defaultdict( int ) dataset["HomeTeamWonLast"] = 0 for index, row in dataset.iterrows(): home_team = row["Home Team"] visitor_team = row["Visitor Team"] teams = tuple( sorted( [home_team, visitor_team] ) ) # We look up in our dictionary to see who won the last encounter between # the two teams. Then, we update the row in the dataset data frame: if last_match_winner[teams] : # == row["Home Team"] home_team_won_last = 1 else: home_team_won_last = 0 dataset.at[index, "HomeTeamWonLast"] = home_team_won_last # Who won this match? if row["HomeWin"]: winner = row["Home Team"] else: winner = row["Visitor Team"] last_match_winner[teams] = winner # - # This feature works much like our previous rank-based feature. However, instead of looking up the ranks, this features creates a tuple called teams, and then stores the previous result in a dictionary( last_match_winner = defaultdict( int ) ). When those two teams play each other next, it recreates this tuple, and looks up the previous result. Our code doesn't differentiate between home games and visitor games, which might be a useful improvement to look at implementing.  dataset.iloc[15:20] # Memphis Grizzlies (Visitor) won the 17th match against Indiana Pacers (home) dataset.iloc[345:405] # Houston Rockets (home) won the 348th match against Los Angeles Lakers (Visitor), so the 384th Houston Rockets (Visitor) and Los Angeles Lakers (home) record (HomeTeamWonLast) value is 0. # # Indiana Pacers lost in the 17th matchup against Memphis Grizzlies, so in the 401st Memphis Grizzlies(home) and Indiana Pacers (visitor) record (HomeTeamWonLast) value is 1. # + X_lastwinner = dataset[ ["HomeTeamWonLast",# The result of the last match between the two teams "HomeTeamRanksHigher", "HomeLastWin", # Results of the last match against other teams "VisitorLastWin", # Results of the last match against other teams ] ] # the Gini index and the cross-entropy are quite similar numerically clf = DecisionTreeClassifier( random_state=14, criterion="entropy" ) scores = cross_val_score( clf, X_lastwinner, y_true, scoring = "accuracy") print( "Accuracy: {0:.1f}%".format( np.mean(scores)*100) ) # - # While decision trees are capable of learning from categorical features, the implementation in scikit-learn requires those features to be encoded as numbers and features, instead of string values. We can use the LabelEncoder transformer to convert the string-based team names into assigned integer values. The code is as follows: # + from sklearn.preprocessing import LabelEncoder encoding = LabelEncoder() encoding.fit( dataset["Home Team"].values ) home_teams = encoding.transform( dataset["Home Team"].values ) visitor_teams = encoding.transform( dataset["Visitor Team"].values ) X_teams = np.vstack( [home_teams, # [ 4, 0, 9, ..., 9, 5, 9] visitor_teams # [ 5, 8, 18, ..., 5, 9, 5] ] ).T X_teams # - # These integers can be fed into the Decision Tree, but they will still be interpreted as continuous features by DecisionTreeClassifier. For example, teams may be allocated integers from 0 to 16. *The algorithm will see teams 1 and 2 as being similar, while teams 4 and 10 will be very different*--but this makes no sense as all. All of the teams are different from each other--two teams are either the same or they are not! 
# using OneHotEncoder # + from sklearn.preprocessing import OneHotEncoder onehot = OneHotEncoder() X_teams = onehot.fit_transform( X_teams ).todense() X_teams[-3:] # - # [9,5]: Represents the combination of Cleveland Cavaliers (as a visitor team, encoded 9) and Golden State Warriors (as a home team, encoded 5) as a category for OneHotEncoder encoding, and its encoding value is not same with [5,9], Golden State Warriors (as a visitor team, encoded 5) and Cleveland Cavaliers (as a home team, encoded 9) as a new combination, the encoding value of OneHotEncoder is different dataset[-3:] # + clf = DecisionTreeClassifier( random_state=14 ) scores = cross_val_score( clf, X_teams, y_true, scoring='accuracy') # default cv=5 print( 'Accuracy: {0:.1f}%'.format(np.mean(scores)*100) ) # + from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier( random_state=14) # default criterion='gini', n_estimators=100 score = cross_val_score( clf, X_teams, y_true, scoring="accuracy" )# default cv=5 print( "Accuracy: {0:.1f}%".format( np.mean(scores)*100 ) ) # + X_all = np.hstack( [X_lastwinner, X_teams] ) clf = RandomForestClassifier( random_state=14 ) # default criterion='gini', n_estimators=100 scores = cross_val_score( clf, X_all, y_true, scoring="accuracy" ) print( "Accuracy: {0:.1f}%".format(np.mean(scores)*100) ) # + X_all = np.hstack( [X_lastwinner, X_teams] ) clf = RandomForestClassifier( random_state=14, n_estimators=500 ) # default criterion='gini', n_estimators=100 scores = cross_val_score( clf, X_all, y_true, scoring="accuracy" ) print( "Accuracy: {0:.1f}%".format(np.mean(scores)*100) ) # + from sklearn.model_selection import GridSearchCV parameter_space = { "max_features":[2,10,'auto'], "n_estimators":[100,200], "criterion":["gini","entropy"], "min_samples_leaf":[2,4,6] } clf = RandomForestClassifier( random_state=14 ) grid = GridSearchCV( clf, parameter_space ) grid.fit( X_all, y_true ) print( "Accuracy: {0:.1f}%".format(grid.best_score_*100) ) # - # we can print out the best model that was found in the grid search. 
The code is as follows print( grid.best_estimator_ ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import gbeflow import os import bebi103 import bokeh.io bokeh.io.output_notebook() import matplotlib.pyplot as plt import altair as alt alt.data_transformers.enable('json') from imp import reload reload(gbeflow) # # Data import and setup name = '20180110_htl_glc-CreateImageSubset-02_sc11_htl_rotate_brt' # %%time vf = gbeflow.VectorField(name) vf.add_image_data(os.path.join('../data',vf.name+'.tif')) p = vf.pick_start_points() vf.save_start_points(p) vf.starts # # Test a variety of $\Delta t$ values Ldt = [1,30,60,120,180] for dt in Ldt: vf.calc_track_set(vf.starts,dt,name='dt '+str(dt),timer=False) vf.tracks.head() alt.Chart(vf.tracks ).mark_point( ).encode( x='x:Q', y='y:Q', row='track:N', color='name:N' ) fig,ax = plt.subplots(len(Ldt),2) for j,dt in enumerate(Ldt): # ax[j,0].imshow(vf.img[0],cmap='Greys') # ax[j,1].imshow(vf.img[-1],cmap='Greys') for i in [0,1]: df = vf.tracks[vf.tracks['name'] == 'dt '+str(dt)] ax[j,i].scatter(df.x,df.y) fig,ax = plt.subplots # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import time import shutil import os import matplotlib.pyplot as plt from tqdm import tqdm # ref.: https://www.kaggle.com/stainsby/fast-tested-rle def rle_encode(img): ''' img: numpy array, 1 - mask, 0 - background Returns run length as string formated ''' pixels = img.flatten() pixels = np.concatenate([[0], pixels, [0]]) runs = np.where(pixels[1:] != pixels[:-1])[0] + 1 runs[1::2] -= runs[::2] return ' '.join(str(x) for x in runs) def rle_decode(mask_rle, shape): ''' mask_rle: run-length as string formated (start length) shape: (height,width) of array to return Returns numpy array, 1 - mask, 0 - background ''' s = mask_rle.split() starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])] starts -= 1 ends = starts + lengths img = np.zeros(shape[0]*shape[1], dtype=np.uint8) for lo, hi in zip(starts, ends): img[lo:hi] = 1 return img.reshape(shape) # - model_name = 'SegUNet_new_version_coord_2channel_at_middle' image_path = '../../data/test_masks/' + model_name + '/' sample_submission_name = '../../data/test_masks/sample_submission.csv' submission_name = '../../data/test_masks/' + model_name + '.csv' with open(sample_submission_name) as f: lines = f.readlines() mask_names = sorted(os.listdir(image_path)) with open(submission_name, 'w') as f: f.write('img,rle_mask') for i, mask_name in enumerate(tqdm(mask_names)): f.write('\n') image_name = mask_name.split('.')[0] + '.jpg' image_array = plt.imread(image_path + mask_name) image_str = rle_encode(image_array) f.write(image_name + ',' + image_str) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Before starting # # For the first point, We started downloading the yellow cab data for January 2018(yellow_tripdata_2018-01.csv). 
Our plan was to make our studies and analysis firstly for one month and then extend our results considering also other years. # Our Data-set contains a really large amount of data so we opted to choose the simple pd.read_csv() module from pandas. # After considering many options such as Dask DataFrames, we decided to use the pandas importing method together with the chucksize option. The solution to working with a massive file with eight milions of lines is to load the file in smaller chunks and analyze with the smaller chunks. Here is our code: import pandas as pd def data_aggregator(path,columnnumber,chunksize): df_list = [] for chunk in pd.read_csv(path,usecols=columnnumber, chunksize=chunksize): df_list.append(pd.DataFrame(chunk).dropna()) result = pd.concat(df_list) del df_list return result JanData="yellow_tripdata_2018-01.csv" #We define the path of our data(of January) JanDF=data_aggregator(JanData,[1,2,4,7,16],10000) print(JanDF.head()) #With this function we can easily import data we need. We select the columns and the dataset we desire and, as we said, #we use chunksize the make the function run faster(and to avoid memory problems). We have printed the head of the dataframe to #show that everything was correct and the df contained columns corresponding to [1,2,4,7,16] indexs. # After defining the function to import the data correctly, We were asked to merge our data with a .json file in the homework repo ([.json](https://github.com/CriMenghini/ADM-2018/blob/master/Homework_2/taxi_zone_lookup.csv)). The data was embedd in the page source-code so we had to apply our knowledge in web-scraping ang get the lines of sourcecode we needed. We used the **Beautiful** package. We firstly collected all the location ids, then the boroughs, then the zones and finally the serving zones. We have done that noticing that in the sourcecode all the above information required where all starting with a and we simply went trough all the values starting with it. After that we merged all together in a Pandas DataFrame called boroghFrame. # + import requests from bs4 import BeautifulSoup page = requests.get("https://github.com/CriMenghini/ADM-2018/blob/master/Homework_2/taxi_zone_lookup.csv") soup = BeautifulSoup(page.content, 'html.parser') ids=[] bor=[] zon=[] srv_zon=[] cell=0 for i in range(2,1327,5): #Firstly We get all the location ids a=soup.find_all('td')[i].get_text() ids.append(a) cell=cell+1 cell=0 for i in range(3,1328,5): #Then all the boroughs a=soup.find_all('td')[i].get_text() bor.append(a) cell=cell+1 cell=0 for i in range(4,1329,5): #After that we get al zones a=soup.find_all('td')[i].get_text() zon.append(a) cell=cell+1 cell=0 for i in range(5,1330,5): #Finally we get all the seving zones a=soup.find_all('td')[i].get_text() srv_zon.append(a) cell=cell+1 # after getting all the informations needed we merged all the lists together data_tuples = list(zip(ids,bor,zon,srv_zon)) boroughFrame=pd.DataFrame(data_tuples,columns = ["PULocationID", "Borough", "Zone", "srv_zon"]) #name columns assigned boroughFrame['PULocationID']=boroughFrame['PULocationID'].apply(int) #We wanted to be sure that PuLocationId were all ints. # - # Then, we finally merged our Dataframes defining this function. We merged using the PULocation id column. 
**So we assumed that the LocationID in the .json file was equal to PULocationId in taxidataset.** def data_aggregator2(df1,df2,oncolumns,jointype): result = pd.merge(df1, boroughFrame, on=oncolumns,how=jointype) return result JanData="yellow_tripdata_2018-01.csv" #We define the path of our data(of January) JanBoroughData=data_aggregator2(JanDF,boroughFrame,['PULocationID'],"inner") print(len(JanBoroughData)) del JanDF,boroughFrame,page,soup,ids,bor,zon,srv_zon #we also clean our memory deleting all the valuables we do not need. # We also noticed, exploring our dataset, that we had some values simply not reasonable and we decided to get rid of them. For example, for some rows the trip distance was equal to 0 or negative and that values would have impacted our analysis. We removed the Nan values from the datasets because we tought that those would have affected our results. We lost approximately 55000 rows JanBoroughData = JanBoroughData[JanBoroughData.trip_distance >0] print(len(JanBoroughData)) # Let's describe a litte bit our datasets. Our original dataset had 16 columns and over 8 millions(yes, millions) rows. It has all the information related to taxi in 7 so called 'Boroughs'. We decided to exclude from our analysis the 'EWR' and 'Unknown' boroughs because they had what appeared to be fictional values. For example, in EWR you can find a total price for the trip of 600$ for a 200 meters path. We have extended these hypothesis also to other monthsd # Anyway, We decided to import just the columns corresping to those indexes:**[1,2,4,7,16]** # # As we said we have only 5 Boroughs: Manhattan, Bronx, Brooklyn, Staten Island and Queens. The Borough with the maximum numbers of observation is undoubtly Manhattan, center of the economical life of New York. Bronx and Staten Island both have a really low number of observations. # # We have two columns, which will be used to calculate the time of the trip, in a datetime format. They show date and time of pickup and dropoff. # # The maximum amount money spent for a single taxi trip has been spent in Queens and corresponds to 3006 dollars fro a trip distance of 189483.84 miles. On average, Queens is the most expensive borough as well with a mean of 11.6 miles per trip. We will see that these result won't be confirmed in we consider # $$ \frac{totalamount}{tripdistance}$$ # JanBoroughData.groupby(['Borogh']).max() JanBoroughData.groupby(['Borogh']).mean() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys, os sys.path.insert(0, os.path.join("..", "..")) # # Project to graph # # A key step in (current) network algorithms is to move each input "event" to the closest point on the network. # # In this notebook, we explore efficient ways to do this. # + # %matplotlib inline import matplotlib.pyplot as plt import geopandas as gpd import os import descartes import numpy as np import open_cp.plot import open_cp.geometry import matplotlib.collections # - datadir = os.path.join("/media", "disk", "Data") areas = gpd.GeoDataFrame.from_file(os.path.join(datadir, "Chicago_Areas.geojson")) tiger_path = os.path.join("/media", "disk", "TIGER Data") filename = os.path.join(tiger_path, "tl_2016_17031_roads") tiger_frame = gpd.GeoDataFrame.from_file(filename) tiger_frame.head() # # Clip to chicago outline # # Slow... 
chicago_outline = areas.unary_union intersects_geometry = tiger_frame.geometry.map(lambda line : line.intersection(chicago_outline)) mask = intersects_geometry.map(lambda geo : geo.is_empty) chicago = tiger_frame[~mask].copy() chicago.geometry = intersects_geometry[~mask] # + fig, ax = plt.subplots(figsize=(12,12)) lc = matplotlib.collections.LineCollection(open_cp.plot.lines_from_geometry(chicago.geometry), color="black", linewidth=0.5) ax.add_collection(lc) patches = open_cp.plot.patches_from_geometry(areas.geometry, fc="blue", ec="none", alpha=0.5, zorder=5) ax.add_collection(matplotlib.collections.PatchCollection(patches)) ax.set_aspect(1.0) xmin, ymin, xmax, ymax = areas.total_bounds xd, yd = xmax - xmin, ymax - ymin ax.set(xlim=[xmin - xd/40, xmax + xd/40], ylim=[ymin - yd/40, ymax + yd/40]) None # - chicago = chicago.to_crs({"init":"epsg:3528"}) # # Projecting to nearest line segment # # ![Diagram](image1.png) # # In the above diagram, we wish to find $t$. We know that $v = b-a$, that $u\cdot v=0$, and that $x-a = tv + u$. Thus # $$ (x-a) \cdot v = t\|v\|^2 \quad\implies\quad t = \|v\|^{-2} (x-a)\cdot v. $$ # + import shapely lines = [ ((1,2), (3,1), (5,4)), ((0.5, 0.5), (1,4), (3,3)) ] lines_shapely = [shapely.geometry.LineString(li) for li in lines] points = np.random.random((10,2)) * 5 projs = [] for pt in points: best = open_cp.geometry.project_point_to_lines(pt, lines) projs.append(best) projs = np.asarray(projs) fig, ax = plt.subplots(figsize=(8,5)) frame = gpd.GeoDataFrame() frame.geometry = lines_shapely frame.plot(ax=ax) ax.scatter(points[:,0], points[:,1]) ax.scatter(projs[:,0], projs[:,1]) for pt, ppt in zip(points, projs): ax.plot([pt[0],ppt[0]], [pt[1],ppt[1]], color="blue") ax.set_aspect(1) # + projs2 = [] for pt in points: best = open_cp.geometry.project_point_to_lines_shapely(pt, lines_shapely) projs2.append(best) projs2 = np.asarray(projs) np.testing.assert_allclose(projs, projs2) # + pp = open_cp.geometry.ProjectPointLinesRTree(lines) projs3 = [] for pt in points: best = pp.project_point(pt) projs3.append(best) projs3 = np.asarray(projs) np.testing.assert_allclose(projs, projs3) # - # # Case study # # - "chicago_all_old.csv" is an old file, now no longer available, which has "correctly" geo-coded data. As such, it makes an interesting test for us. # - Current data is already (pretty much) projected to the center of streets. 
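# A minimal NumPy sketch of the formula derived above, independent of the open_cp helpers: with v = b - a, the foot of the perpendicular from x is at t = (x - a)·v / ||v||², and clamping t to [0, 1] keeps the result on the segment rather than the infinite line.
# +
import numpy as np

def project_point_to_segment(x, a, b):
    """Closest point to x on the segment a-b, using t = (x-a).v / |v|^2 clamped to [0, 1]."""
    x, a, b = np.asarray(x, float), np.asarray(a, float), np.asarray(b, float)
    v = b - a
    t = np.dot(x - a, v) / np.dot(v, v)
    t = min(1.0, max(0.0, t))
    return a + t * v

# quick check: the projection of (1, 1) onto the x-axis segment (0,0)-(2,0) is (1, 0)
print(project_point_to_segment([1, 1], [0, 0], [2, 0]))   # [1. 0.]
# -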
import open_cp.sources.chicago points = open_cp.sources.chicago.load(os.path.join(datadir, "chicago_all_old.csv"), {"BURGLARY"}, type="all") points.time_range # Convert to a sequence of lines lines = [] for geo in chicago.geometry: try: lines.append(list(geo.coords)) except: for x in geo: lines.append(list(x.coords)) out = [] for pt in points.coords.T: best = open_cp.geometry.project_point_to_lines(pt, lines) out.append(best) # + fig, ax = plt.subplots(figsize=(8,8)) ax.scatter(points.xcoords, points.ycoords) lc = matplotlib.collections.LineCollection(open_cp.plot.lines_from_geometry(chicago.geometry), color="black", linewidth=0.5) ax.add_collection(lc) xrange = [355000, 356000] yrange = [560000, 561000] ax.set(xlim=xrange, ylim=yrange) None # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 实验目的: # 仿真不同间距的接收天线对于不同角度目标的探测 # # + from scipy.signal import chirp, spectrogram import matplotlib.pyplot as plt import numpy as np def zero_to_nan(values): """Replace every 0 with 'nan' and return a copy.""" return [float('nan') if x==0 else x for x in values] Fs = 2048; # sampling rate Ts = 1.0/Fs; # sampling interval fft_size = Fs/4 #t = np.arange(0,1,Ts) # time vector fs = 2048 #resolution T = 5 t = np.linspace(0, T, T*fs, endpoint=False) t1 = np.arange(0.5,1.5,Ts) w1 = chirp(t, f0=5, f1=400, t1=T , phi=90,method='linear') w4 = chirp(t, f0=5, f1=400, t1=T , phi=45,method='linear') n1 = np.zeros(256) w2 = np.append(n1, w1) #delay w1 to produce w2 n2 = np.zeros(512) w3 = np.append(n2, w4) #w2 =zero_to_nan(w2) #w2 = np.asarray(w2) w2 =w2[:fs*T] plt.plot(t[:1800],w3[:1800]) plt.plot(t[:1800],w2[:1800]) plt.show plt.savefig('Two Linear Chirp Signals.png', dpi=300) print(w3[700]-w1[700]) # + #fft w1 n = len(w1) # length of the signal k = np.arange(n) T = n/Fs frq = k/T # two sides frequency range frq = frq[range(int(n/2))] # one side frequency rangen Y = np.fft.fft(w1)/fft_size # fft computing and normalization Y = Y[range(int(n/2))] Y = np.clip(20*np.log10(np.abs(Y)), -120, 120) plt.plot(frq,Y) plt.xlabel("frequency(Hz)") plt.ylabel("power(dB)") # - # Δ𝛷=2𝜋Δ𝑑/𝜆 # + from scipy.fftpack import fft, ifft import numpy as np from scipy.signal import chirp t = np.linspace(0, 10, 5001) n = 6 # 1/n *pi initial phase # s_sine= np.sin(50.0 * 2.0*np.pi*t) #sin # s_cosine=np.sin(50.0 * 2.0*np.pi*t+np.pi/2) #cosin s_sine = chirp(t, f0=100, f1=150, t1=10, method='linear') s_cosine=chirp(t, f0=100, f1=150, t1=10, phi=90,method='linear') # r = np.sin(50.0 * 2.0*np.pi*t) # r2 = np.sin(50.0 * 2.0*np.pi*t+np.pi/n) #receive signal r2 = chirp(t, f0=100, f1=150, t1=10, phi=180/n,method='linear') qua_phase = r2*s_sine # receive signal multiply sin signal Quadrature phase in_phase= r2*s_cosine # receive signal multiply cosin signal in phase # plt.plot(t[:50],in_phase[:50],'r') # plt.plot(t[:50],qua_phase[:50],'b') # plt.grid() # plt.show() fft_signal=fft(r2) fft_qua_phase= fft(qua_phase) #quadrature phase fft_in_phase = fft(in_phase) #in phase initial_phase=np.real(np.arctan(fft_qua_phase[0]/fft_in_phase[0])/np.pi) initial_phase =1/(0.5-initial_phase) print('Initial Phase was:1/%.1f'%initial_phase, '*pi') # - initial_phase a =np.real(np.arctan(fft_qua_phase[0]/fft_in_phase[0])/np.pi) 1/(0.5-a) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import cv2 import numpy as np import time net = cv2.dnn.readNet("C:/Users/tarun/Desktop/yolov3 object detection/yolov3_training_last (1).weights", "C:/Users/tarun/Desktop/yolov3 object detection/yolo_custom_detection/yolov3_testing.cfg") classes = ["ball"] layer_names = net.getLayerNames() output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()] camera = cv2.VideoCapture(0) while True: _,image = camera.read() height, width, channels = image.shape # Detecting objects blob = cv2.dnn.blobFromImage(image, 0.00392, (320, 320), (0, 0, 0), True, crop=False) net.setInput(blob) outs = net.forward(output_layers) class_ids = [] confidences = [] boxes = [] for out in outs: for detection in out: scores = detection[5:] class_id = np.argmax(scores) confidence = scores[class_id] if confidence > 0.5: # Object detected center_x = int(detection[0] * width) center_y = int(detection[1] * height) w = int(detection[2] * width) h = int(detection[3] * height) # Rectangle coordinates x = int(center_x - w / 2) y = int(center_y - h / 2) boxes.append([x, y, w, h]) confidences.append(float(confidence)) class_ids.append(class_id) indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4) font = cv2.FONT_HERSHEY_PLAIN for i in range(len(boxes)): if i in indexes: x, y, w, h = boxes[i] label = str(classes[class_ids[i]]) color = (0, 0, 255) cv2.rectangle(image, (x, y), (x + w, y + h), color, 2) cv2.putText(image, label, (x, y -10), font, 3, color, 3) tlc=(x,y) trc=(x+h,y) blc=(x,y+h) brc=(x+w,y+h) center=(center_x,center_y) height=h width=w print("Center of Box:",center,"Height and width:",(h,w),"Coordinates:",tlc,trc,blc,brc) cv2.imshow("Image", image) key = cv2.waitKey(1) if key == 27: break camera.release() cv2.destroyAllWindows() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 4 in-class problems - Solutions # ## Using what you learned in Lab, answer questions 4.7, 4.8, 4.10, 4.11, 4.12, and 4.13 from numpy import sin,cos,sqrt,pi from qutip import * # ### Remember, these states are represented in the HV basis: H = Qobj([[1],[0]]) V = Qobj([[0],[1]]) P45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]]) M45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]]) R = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]]) L = Qobj([[1/sqrt(2)],[1j/sqrt(2)]]) # The sim_transform function creates the matrix $\bar{\mathbf{S}}$ that can convert from one basis to another. As an example, it will create the tranform matrix to convert from HV to ±45 if you run: # # Shv45 = sim_transform(H,V,P45,M45) # creates the matrix Shv45 # # Then you can convert a ket from HV to ±45 by applying the Shv45 matrix: # # Shv45*H # will convert H from the HV basis to the ±45 basis # # To convert operators, you have to sandwich the operator between $\bar{\mathbf{S}}$ and $\bar{\mathbf{S}}^\dagger$: # # Shv45*Ph*Shv45.dag() # converts Ph from HV basis to the ±45 basis. 
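# As a quick numerical check of the sandwich rule described above, written in plain NumPy rather than the QuTiP objects used below. Ph is assumed here to be the projector |H><H| (an assumption for illustration, not a definition taken from the notebook); in the ±45 basis that projector should become 0.5 * [[1, 1], [1, 1]].
# +
import numpy as np

H = np.array([[1], [0]])
V = np.array([[0], [1]])
P45 = (H + V) / np.sqrt(2)
M45 = (H - V) / np.sqrt(2)

# change-of-basis matrix: rows are the new basis bras expressed in the old (HV) basis
S = np.vstack([P45.conj().T, M45.conj().T])
print(S @ H)                            # H in the ±45 basis: (1/sqrt(2), 1/sqrt(2))

Ph = H @ H.conj().T                     # assumed projector onto |H> in the HV basis
Ph_45 = S @ Ph @ S.conj().T             # the "sandwich": the same operator in the ±45 basis
print(Ph_45)                            # expected 0.5 * [[1, 1], [1, 1]]
# -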
def sim_transform(o_basis1, o_basis2, n_basis1, n_basis2): a = n_basis1.dag()*o_basis1 b = n_basis1.dag()*o_basis2 c = n_basis2.dag()*o_basis1 d = n_basis2.dag()*o_basis2 return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]]) # ## 4.11: Express $\hat{R}_p(\theta)$ in ±45 basis def Rp(theta): return Qobj([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]).tidyup() Shv45 = sim_transform(H,V,P45,M45) # ## 4.12: Rp45 = Shv45*Rp(pi/4)*Shv45.dag() Rp45*Shv45*P45 == Shv45*V # convert P45 to the ±45 basis Rp45* Qobj([[1],[0]]) ShvLR = sim_transform(H,V,L,R) ShvLR*Rp(pi/4)*ShvLR.dag() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt from sklearn import cluster, preprocessing, datasets # %matplotlib inline # - np.random.seed(42) # + x, y = datasets.make_blobs(centers=3, n_features=2, n_samples=1500, cluster_std=2.5) x[:, 0] *= 10000 x1, x2 = x.T plt.scatter(x1, x2, alpha=0.5); # - plt.scatter(x1, x2, c=y, alpha=0.5); # Note que os eixos não estão na mesma escala. Ao utilizar uma medida de distância, pode haver distorção no espaço. # + km = cluster.KMeans(n_clusters=3) km.fit(x) plt.scatter(*x.T, c=km.predict(x), alpha=0.5) plt.scatter(*km.cluster_centers_.T, marker='*', s=200, c='r'); # - plt.scatter(x1, x2, c=km.predict(x), alpha=0.5) plt.scatter(*km.cluster_centers_.T, marker='*', s=200, c='r') plt.xlim((-150000, 150000)) plt.ylim((-150000, 150000)); # Para colocar ambos os eixos na mesma escala, basta calcular o mínimo e máximo de cada atributo $X_i$, e então utilizar uma transformação $X'_i = \frac{X_i - min\{X_i\}}{max\{X_i\} - min\{X_i\}}$. # # Essa transformação já está implementada na classe `sklearn.preprocessing.MinMaxScaler`. scaler = preprocessing.MinMaxScaler() x_t = scaler.fit_transform(x) x[:5] x_t[:5] # + km_t = cluster.KMeans(n_clusters=3) km_t.fit(x_t) plt.scatter(*x_t.T, c=km_t.predict(x_t), alpha=0.5) plt.scatter(*km_t.cluster_centers_.T, marker='*', s=200, c='r'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + import time import random import numpy as np import matplotlib.pyplot as plt import pandas as pd from tensorforce.environments import Environment from tensorforce.agents import Agent from tensorforce.execution import Runner # - # Main game # + def get_val(mean,var): return mean+var*2.0*(np.random.random()-0.5) class CustomEnvironment(Environment): def __init__(self, N=3, clist = [10., 10., 10.], vlist = [3.0,4.0,5.0], max_turns = 30, bad_list = [False,False,True]): super().__init__() self.N = N self.clist = clist self.vlist = vlist self.max_turns = max_turns self.turn = 0 self.picked_count = np.zeros(self.N) self.measured_list = [] self.bad_list = bad_list self.reset() def current_mean(self): return [np.mean([el for el in sublist]) for sublist in self.measured_list] #return mean_list def current_stddev(self): return [np.var([el for el in sublist])**.5 for sublist in self.measured_list] def current_norm_stddev(self): stddev_list = self.current_stddev() return stddev_list / max(stddev_list) def states(self): return dict(type='float', shape=(self.N*2,)) def actions(self): return dict(type='int', num_values=self.N) def reset(self): #rshuffle the clist/vlist order, but don't change them. 
z = list(zip(self.clist, self.vlist, self.bad_list)) np.random.shuffle(z) self.clist, self.vlist, self.bad_list = zip(*z) self.measured_list = [] self.turn = 0 #take first measurments for i in range(self.N): this_list = [] for j in range(2): this_list.append(get_val(mean=self.clist[i],var=self.vlist[i])) self.measured_list.append(this_list) #state is the current variance of each point self.state = np.zeros(2*self.N) for i in range(self.N): self.state[i] = np.var(self.measured_list[i])#**.5 self.state[int(self.N)+i] = self.picked_count[i]/float(self.max_turns) return self.state def execute(self, actions): #assert 0 <= actions.item() <= 3 #take another measurement of value 'action' this_val = get_val(mean=self.clist[actions],var=self.vlist[actions]) self.picked_count[int(actions)] += 1.0 self.measured_list[actions].append(this_val) next_state = self.state next_state[actions] = np.var(self.measured_list[actions])#**.5 next_state[self.N+actions] = self.picked_count[(int(actions))] / self.max_turns terminal = False reward = 0 self.turn += 1 if self.bad_list[actions]: #if this is a bad sample reward += 1 #give 1 point if self.turn >= self.max_turns: terminal = True reward = 0.0 #check if we've gotten min score on bad points for i in range(self.N): if self.bad_list[i] and self.picked_count[i] >= 20: reward += 100 #print ('woohoo '+str(i)) return next_state, terminal, reward # - # Testing Policies # + def play_sequential_game(N,clist,vlist,bad_list,max_turns): env = CustomEnvironment(N=N, clist=clist, vlist=vlist, bad_list = bad_list, max_turns=max_turns) sum_points = 0 game_terminated = False iguess = 0 while not game_terminated: best_guess = int(iguess) iguess += 1 next_state, game_terminated, next_reward = env.execute(iguess%env.N) sum_points += next_reward return sum_points def play_omniscent_game(N,clist,vlist,bad_list,max_turns): env = CustomEnvironment(N=N, clist=clist, vlist=vlist, bad_list = bad_list, max_turns=max_turns) sum_points = 0 game_terminated = False while not game_terminated: omniscent_guess = np.argmax(abs(10. 
- np.array(env.current_mean()))) next_state, game_terminated, next_reward = env.execute(omniscent_guess) sum_points += next_reward return sum_points def play_exploring_game(N,clist,vlist,max_turns,bad_list,explore_frac=.1): env = CustomEnvironment(N=N, clist=clist, vlist=vlist, bad_list = bad_list, max_turns=max_turns) sum_points = 0 game_terminated = False while not game_terminated: best_guess = np.argmax(env.current_norm_stddev()) if np.random.random() < explore_frac: best_guess = np.argmin(env.picked_count) next_state, game_terminated, next_reward = env.execute(best_guess) sum_points += next_reward return sum_points # - # + N = 30 clist = 30*[10.0] vlist = np.ones(30)*0.2 vlist[0:5] *= 20.0 #20 times higher variance in bad ones bad_list = 30*[False] bad_list[0:5] = 5*[True] max_turns = 400 print (f'sequential score {play_sequential_game(N, clist, vlist, bad_list, max_turns)}') print (f'omniscent score {play_omniscent_game(N, clist, vlist, bad_list, max_turns)}') #print (f'exploring score {play_exploring_game(N, clist, vlist, bad_list, max_turns)}') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="d44ba9bc-ebf3-49d8-b2db-9d49bf1bec22" tags=[] # # GAN keras로 해보기 # > done # # - toc:true # - branch: master # - badges: false # - comments: false # - author: 최서연 # - categories: [GAN] # - import numpy as np from tensorflow.keras.utils import to_categorical from keras.datasets import cifar10 # #### 데이터 적재 (x_train,y_train),(x_test,y_test)=cifar10.load_data() # 데이터 셋을 적재한다. x_train.shape, x_test.shape, y_train.shape, y_test.shape # - shape에서 첫번째 차원은 index를 가리킴. # - 두 번째 차원은 이미지 높이 # - 세 번째 차원은 이미지 너비 # - 마지막 차원은 RGB 채널 # - 이런 배열을 4차원 tensor라 부름 NUM_CLASSES = 10 x_train = x_train.astype('float32')/255.0 x_test = x_test.astype('float32')/255.0 # 신경망은 -1~1 사이 놓여있게 만들기! y_train = to_categorical(y_train,NUM_CLASSES) y_test = to_categorical(y_test,NUM_CLASSES) # 이미지의 정수 레이블을 one-hot-encoding vector로 바꾸기 # - 어떤 이미지의 class 정수 레이블이 i라면 이 뜻은 one-hot encoding이 i번빼 원소가 1이고 그외에는 모두 0인 길이가 10(class개수)인 vector가 된다. # - 따라서 shape도 바뀐 것을 볼 수 있음 y_train.shape, y_test.shape # #### 모델 만들기 # Sequential 모델은 일렬로 층을 쌓은 네트워크를 빠르게 만들때 사용하기 좋음 # # API 모델은 각각 층으로 볼 수 있음 from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Flatten, Dense model = Sequential([Dense(200, activation = 'relu', input_shape=(32,32,3)), Flatten(), Dense(150, activation = 'relu'), Dense(10,activation = 'softmax'),]) # Sequential 사용한 network from tensorflow.keras.layers import Input, Flatten, Dense from tensorflow.keras.models import Model input_layer = Input(shape=(32,32,3)) x=Flatten()(input_layer) x=Dense(units=200,activation='relu')(x) x=Dense(units=150,activation = 'relu')(x) output_layer = Dense(units=10,activation='softmax')(x) model=Model(input_layer,output_layer) # API 사용한 network model.summary() # - **Input** # - 네트워크의 시작점 # - 입력 데이터 크기를 튜플에 알려주기 # - **Flatten** # - 입력을 하나의 vector로 펼치기 # - 32 * 32 * 3 = 3,072 # - **Dense** # - 이전 층과 완전하게 연결되는 유닛을 가짐 # - 이 층의 각 unit은 이전 층의 모든 unit과 연결됨! 
# - unit의 출력은 이전 층으로부터 받은 입력과 가중치를 곱하여 더한 것 # - **activation function** # - 비선형 활성화 함수 # - 렐루Relu: 입력이 음수이면 0, 그 외는 입력과 동일한 값 출력 # - 리키렐루leaky relu: 음수이면 입력에 비례하는 작은 음수 반환, 그 외는 입력과 동일한 값 출력 # - 시그모이드sigmoid: 층의 출력을 0에서 1사이로 조정하고 싶을 때!(binary/multilabel classification에서 사용) # - 소프트맥스softmax: 층의 전체 출력 합이 1이 되어야 할 때 사용(multiclass classification) # ![](http://androidkt.com/wp-content/uploads/2020/05/Selection_019-1024x490.png) # # ![](https://kjhov195.github.io/post_img/200107/image2.png) # ![](https://miro.medium.com/max/1400/1*IWUxuBpqn2VuV-7Ubr01ng.png) # #### 모델 컴파일 from tensorflow.keras.optimizers import Adam opt=Adam(lr=0.0001) model.compile(loss='categorical_crossentropy',optimizer=opt, metrics = ['accuracy']) # 손실함수와 옵티마이저 연결 # - compile method에 전달 # - metrics에 accuracy 같은 기록하고자 하는 지표 지정 가능 # #### 모델 훈련 model.fit(x_train, # 원본 이미지 데이터 y_train, # one-hot encoding된 class label batch_size = 32, # 훈련 step마다 network에 전달될 sample 개수 결정 epochs = 10, # network가 전체 훈련 data에 대해 반복하여 훈련할 횟수 shuffle = True) # True면 훈련 step마다 batch를 훈련 data에서 중복 허용않고 random하게 추출 # loss는 1.37로 감소했고, accuracy는 51%로 증가했다! # #### 모델 평가 model.evaluate(x_test,y_test) # predic method 사용해서 test 예측결과 확인 CLASSES = np.array(['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']) preds = model.predict(x_test) preds.shape pred_single = CLASSES[np.argmax(preds, axis = -1)] # agrmax로 하나의 예측결과로 바꾸기. axis=-1은 마지막 clss차원으로 배열을 압축하라는 뜻 pred_single.shape actual_single = CLASSES[np.argmax(y_test,axis=-1)] actual_single.shape import matplotlib.pyplot as plt n_to_show = 10 indices = np.random.choice(range(len(x_test)), n_to_show) # + fig=plt.figure(figsize=(15,3)) fig.subplots_adjust(hspace=0.4,wspace=0.5) for i, idx in enumerate(indices): img = x_test[idx] ax=fig.add_subplot(1,n_to_show ,i+1) ax.axis('off') ax.text(0.5,-0.35,'pred= ' + str(pred_single[idx]), fontsize=10, ha='center',transform=ax.transAxes) ax.text(0.5,-0.7,'act= ' + str(actual_single[idx]),fontsize=10, ha='center', transform=ax.transAxes) ax.imshow(img) # - # #### 합성곱 층 # 합성곱은 필터를 이미지의 일부분과 픽셀끼리 곱한 후 결과를 더한 것 # - 이미지 영역이 필터와 비슷할수록 큰 양수가 출력되고 필터와 반대일수록 큰 음수가 출력됨 # - **stride** # - 스트라이드는 필터가 한 번에 입력 위를 이동하는 크기, # - stride = 2로 하면 출력 텐서의 높이와 너비는 입력 텐서의 절반이 된다! # - network 통과하면서 채널 수 늘리고, 텐서의 공간 방향 크기 줄이는 데 사용 # - **padding** # - padding=same은 입력 데이터를 0으로 패딩하여 stride = 1일 때 출력의 크기를 입력 크기와 동일하게 만듦 # - padding=same 지정하면 여러 개의 합성곱 층을 통과할 때 텐서의 크기를 쉽게 파악할 수 있는 유용함! 
input_layer=Input(shape=(32,32,3)) conv_layer_1 = layers.Conv2D(filters = 10, kernel_size = (4,4), strides = 2, padding = 'same')(input_layer) conv_layer_2 = layers.Conv2D(filters = 20, kernel_size = (3,3), strides = 2, padding = 'same')(conv_layer_1) flatten_layer=Flatten()(conv_layer_2) output_layer=Dense(units=10, activation='softmax')(flatten_layer) model=Model(input_layer,output_layer) model.summary() # #### 배치 정규화 # - gradient exploding 기울기 폭주 # - 오차가 네트워크를 통해 거꾸로 전파되면서 앞에 놓인 층의 gradient 계산이 기하급수적으로 증가하는 경우 # - covariate shoft 공변량 변화 # - 네트워크가 훈련됨에 따라 가중치 값이 랜덤한 초깃값과 멀어지기 때문에 인력 스케일을 조정해도 모든 층의 활성화 울력의 scale이 안정적이지 않을 수 있음 # **배치정규화** # - 배치에 대해 각 입력 채널별로 평균과 표준 편차를 계산한 다음 평균을 빼고 표준편차로 나누어 정규화 # ![](https://gaussian37.github.io/assets/img/dl/concept/batchnorm/batchnorm.png) # #### 드롭아웃 층 # 이전 층의 unit 일부를 랜덤하게 선택하여 출력을 0으로 지정 # - 과대 적합 문제 맊기 가능 # #### 합성곱/배치정규화/드롭아웃 적용 input_layer=Input((32,32,3)) x = layers.Conv2D(filters=32, kernel_size=3, strides=1, padding='same')(input_layer) x=layers.BatchNormalization()(x) x=layers.LeakyReLU()(x) x= layers.Conv2D(filters=32, kernel_size=3, strides=2, padding='same')(x) x=layers.BatchNormalization()(x) x=layers.LeakyReLU()(x) x= layers.Conv2D(filters=64, kernel_size=3, strides=1, padding='same')(x) x=layers.BatchNormalization()(x) x=layers.LeakyReLU()(x) x= layers.Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(x) x=layers.BatchNormalization()(x) x=layers.LeakyReLU()(x) x=Flatten()(x) x=Dense(128)(x) x=layers.BatchNormalization()(x) x=layers.LeakyReLU()(x) x=layers.Dropout(rate=0.5)(x) x=Dense(NUM_CLASSES)(x) output_layer = layers.Activation('softmax')(x) model=Model(input_layer,output_layer) # leakyrelu 층과 batchnomalization 층이 뒤따르는 4개의 conv2d 층을 쌓고 만들어진 텐서를 일렬로 펼쳐 128개의 unit을 가진 dense 층에 통과시키고 다시 한 번 batchnomalization층과 leakyrelu층을 거쳐 dropout층을 지나 10개의 unit을 가진 dense 층이 최종 출력을 만든다. model.summary() # kernal인 3 곱하기 3 곱하기 3(컬러 rgb 3개) = 27 곱하기 32 = 896 model.compile(loss='categorical_crossentropy',optimizer=opt, metrics = ['accuracy']) model.fit(x_train, # 원본 이미지 데이터 y_train, # one-hot encoding된 class label batch_size = 32, # 훈련 step마다 network에 전달될 sample 개수 결정 epochs = 10, # network가 전체 훈련 data에 대해 반복하여 훈련할 횟수 shuffle = True) # True면 훈련 step마다 batch를 훈련 data에서 중복 허용않고 random하게 추출 model.evaluate(x_test,y_test,batch_size=1000) # ------------------ # #### GAN keras로 해보기 # https://github.com/davidADSP/GDL_code # ref: https://keras.io/examples/generative/conditional_gan/ # + from tensorflow import keras from tensorflow.keras import layers from tensorflow_docs.vis import embed import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import imageio # - batch_size = 64 num_channels = 1 num_classes = 10 image_size = 28 latent_dim = 128 # + # We'll use all the available examples from both the training and test # sets. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_labels = np.concatenate([y_train, y_test]) # Scale the pixel values to [0, 1] range, add a channel dimension to # the images, and one-hot encode the labels. all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) all_labels = keras.utils.to_categorical(all_labels, 10) # Create tf.data.Dataset. 
dataset = tf.data.Dataset.from_tensor_slices((all_digits, all_labels)) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) print(f"Shape of training images: {all_digits.shape}") print(f"Shape of training labels: {all_labels.shape}") # - generator_in_channels = latent_dim + num_classes discriminator_in_channels = num_channels + num_classes print(generator_in_channels, discriminator_in_channels) # + # Create the discriminator. discriminator = keras.Sequential( [ keras.layers.InputLayer((28, 28, discriminator_in_channels)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) # Create the generator. generator = keras.Sequential( [ keras.layers.InputLayer((generator_in_channels,)), # We want to generate 128 + num_classes coefficients to reshape into a # 7x7x(128 + num_classes) map. layers.Dense(7 * 7 * generator_in_channels), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, generator_in_channels)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) # - class ConditionalGAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(ConditionalGAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim self.gen_loss_tracker = keras.metrics.Mean(name="generator_loss") self.disc_loss_tracker = keras.metrics.Mean(name="discriminator_loss") @property def metrics(self): return [self.gen_loss_tracker, self.disc_loss_tracker] def compile(self, d_optimizer, g_optimizer, loss_fn): super(ConditionalGAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, data): # Unpack the data. real_images, one_hot_labels = data # Add dummy dimensions to the labels so that they can be concatenated with # the images. This is for the discriminator. image_one_hot_labels = one_hot_labels[:, :, None, None] image_one_hot_labels = tf.repeat( image_one_hot_labels, repeats=[image_size * image_size] ) image_one_hot_labels = tf.reshape( image_one_hot_labels, (-1, image_size, image_size, num_classes) ) # Sample random points in the latent space and concatenate the labels. # This is for the generator. batch_size = tf.shape(real_images)[0] random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) random_vector_labels = tf.concat( [random_latent_vectors, one_hot_labels], axis=1 ) # Decode the noise (guided by labels) to fake images. generated_images = self.generator(random_vector_labels) # Combine them with real images. Note that we are concatenating the labels # with these images here. fake_image_and_labels = tf.concat([generated_images, image_one_hot_labels], -1) real_image_and_labels = tf.concat([real_images, image_one_hot_labels], -1) combined_images = tf.concat( [fake_image_and_labels, real_image_and_labels], axis=0 ) # Assemble labels discriminating real from fake images. labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Train the discriminator. 
with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) # Sample random points in the latent space. random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) random_vector_labels = tf.concat( [random_latent_vectors, one_hot_labels], axis=1 ) # Assemble labels that say "all real images". misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: fake_images = self.generator(random_vector_labels) fake_image_and_labels = tf.concat([fake_images, image_one_hot_labels], -1) predictions = self.discriminator(fake_image_and_labels) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights)) # Monitor loss. self.gen_loss_tracker.update_state(g_loss) self.disc_loss_tracker.update_state(d_loss) return { "g_loss": self.gen_loss_tracker.result(), "d_loss": self.disc_loss_tracker.result(), } # + cond_gan = ConditionalGAN( discriminator=discriminator, generator=generator, latent_dim=latent_dim ) cond_gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0003), g_optimizer=keras.optimizers.Adam(learning_rate=0.0003), loss_fn=keras.losses.BinaryCrossentropy(from_logits=True), ) cond_gan.fit(dataset, epochs=5) # + # We first extract the trained generator from our Conditiona GAN. trained_gen = cond_gan.generator # Choose the number of intermediate images that would be generated in # between the interpolation + 2 (start and last images). num_interpolation = 9 # @param {type:"integer"} # Sample noise for the interpolation. interpolation_noise = tf.random.normal(shape=(1, latent_dim)) interpolation_noise = tf.repeat(interpolation_noise, repeats=num_interpolation) interpolation_noise = tf.reshape(interpolation_noise, (num_interpolation, latent_dim)) def interpolate_class(first_number, second_number): # Convert the start and end labels to one-hot encoded vectors. first_label = keras.utils.to_categorical([first_number], num_classes) second_label = keras.utils.to_categorical([second_number], num_classes) first_label = tf.cast(first_label, tf.float32) second_label = tf.cast(second_label, tf.float32) # Calculate the interpolation vector between the two labels. percent_second_label = tf.linspace(0, 1, num_interpolation)[:, None] percent_second_label = tf.cast(percent_second_label, tf.float32) interpolation_labels = ( first_label * (1 - percent_second_label) + second_label * percent_second_label ) # Combine the noise and the labels and run inference with the generator. 
noise_and_labels = tf.concat([interpolation_noise, interpolation_labels], 1) fake = trained_gen.predict(noise_and_labels) return fake start_class = 1 # @param {type:"slider", min:0, max:9, step:1} end_class = 5 # @param {type:"slider", min:0, max:9, step:1} fake_images = interpolate_class(start_class, end_class) # - fake_images *= 255.0 converted_images = fake_images.astype(np.uint8) converted_images = tf.image.resize(converted_images, (96, 96)).numpy().astype(np.uint8) imageio.mimsave("animation.gif", converted_images, fps=1) embed.embed_file("animation.gif") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Time series classification with Mr-SEQL # # Mr-SEQL\[1\] is a univariate time series classifier which train linear classification models (logistic regression) with features extracted from multiple symbolic representations of time series (SAX, SFA). The features are extracted by using SEQL\[2\]. # # \[1\] , , , and Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations in Data Mining and Knowledge Discovery (DAMI), May 2019, https://link.springer.com/article/10.1007/s10618-019-00633-3 # # \[2\] , # “Bounded Coordinate-Descent for Biological Sequence Classification in High Dimensional Predictor Space” (KDD 2011) # # In this notebook, we will demonstrate how to use Mr-SEQL for univariate time series classification with the ArrowHead dataset. # # ## Imports from sklearn import metrics from sklearn.model_selection import train_test_split from sktime.classification.shapelet_based import MrSEQLClassifier from sktime.datasets import load_arrow_head from sktime.datasets import load_basic_motions # ## Load data # For more details on the data set, see the [univariate time series classification notebook](https://github.com/alan-turing-institute/sktime/blob/master/examples/02_classification_univariate.ipynb). X, y = load_arrow_head(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) # ## Train and Test # # Mr-SEQL can be configured to run in different mode with different symbolic representation. # # seql_mode can be either 'clf' (SEQL as classifier) or 'fs' (SEQL as feature selection). If 'fs' mode is chosen, a logistic regression classifier will be trained with the features extracted by SEQL. # 'fs' mode is more accurate in general. # # symrep can include either 'sax' or 'sfa' or both. Using both usually produces a better result. # # + # Create mrseql object ms = MrSEQLClassifier(seql_mode='clf') # use sax by default # ms = MrSEQLClassifier(seql_mode='fs', symrep=['sfa']) # use sfa representations # ms = MrSEQLClassifier(seql_mode='fs', symrep=['sax', 'sfa']) # use sax and sfa representations # fit training data ms.fit(X_train,y_train) # prediction predicted = ms.predict(X_test) # Classification accuracy print("Accuracy with mr-seql: %2.3f" % metrics.accuracy_score(y_test, predicted)) # - # ## Train and Test # Mr-SEQL also supports multivariate time series. Mr-SEQL extracts features from each dimension of the data independently. 
X, y = load_basic_motions(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) # + ms = MrSEQLClassifier() # fit training data ms.fit(X_train,y_train) predicted = ms.predict(X_test) # Classification accuracy print("Accuracy with mr-seql: %2.3f" % metrics.accuracy_score(y_test, predicted)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from IPython.core.display import HTML HTML("") # + [markdown] slideshow={"slide_type": "slide"} # # Lecture 6: Indirect methods for constrained optimization # + [markdown] slideshow={"slide_type": "slide"} # ## Some remarks # * How to deal with problems where the objective function should be maximized, i.e. $\max f(x)$? # * We can use the same methods if we instead minimize the negative of $f$, i.e. $\min -f(x)$ # * The optimal solution $x^*$ is the same for both the problems # - # insert image from IPython.display import Image Image(filename = "Images\MaxEqMin.jpg", width = 200, height = 300) # + [markdown] slideshow={"slide_type": "subslide"} # ## Simple example # + slideshow={"slide_type": "-"} def f_max(x): return -(x-3.0)**2 + 10.0 # clearly x* = 3.0 is the global maximum # + import numpy as np import matplotlib.pyplot as plt x = np.arange(-5.0, 12.0, 0.3) plt.plot(x, f_max(x), 'bo') plt.show() print(f_max(3.0)) # + from scipy.optimize import minimize_scalar # multiply f_max with -1.0 def g(x): return -f_max(x) plt.plot(x, g(x), 'ro') plt.show() # - res = minimize_scalar(g,method='brent') print(res) print(g(res.x)) print(f_max(res.x)) # + [markdown] slideshow={"slide_type": "slide"} # # Constrained optimization # - # Now we will move to studying constrained optimization problems i.e., the full problem # $$ # \begin{align} \ # \min \quad &f(x)\\ # \text{s.t.} \quad & g_j(x) \geq 0\text{ for all }j=1,\ldots,J\\ # & h_k(x) = 0\text{ for all }k=1,\ldots,K\\ # &a_i\leq x_i\leq b_i\text{ for all } i=1,\ldots,n\\ # &x\in \mathbb R^n, # \end{align} # $$ # where for all $i=1,\ldots,n$ it holds that $a_i,b_i\in \mathbb R$ or they may also be $-\infty$ or $\infty$. # + [markdown] slideshow={"slide_type": "slide"} # ## On optimal solutions for constrained problems # * Two types of constraints: equality and inequality constraints # * Inequality constraint $g_i(x)\geq0$ is said to be *active* at point $x$ if $g_i(x)=0$ # * Linear constraints are much easier to consider --> their gradients are constant # * Nonlinear constraints trickier --> gradient changes for different values of decision variables # + [markdown] slideshow={"slide_type": "subslide"} # No constraints # ![](images/unconstrained.png) # *Adopted from Prof. L. (Carnegie Mellon University)* # + [markdown] slideshow={"slide_type": "subslide"} # Inequality constraints # ![](images/inequality_constraints.jpg) # *Adopted from Prof. (Carnegie Mellon University)* # + [markdown] slideshow={"slide_type": "subslide"} # Both inequality and equality constraints # ![](images/constraints.jpg) # *Adopted from Prof. 
(Carnegie Mellon University)* # + [markdown] slideshow={"slide_type": "subslide"} # ## Transforming the constraints # Type of inequality: # $$ # g_i(x)\geq0 \iff -g_i(x)\leq0 # $$ # # + [markdown] slideshow={"slide_type": "fragment"} # Inequality to equality: # $$ # g_i(x)\leq0 \iff g_i(x)+y_i^2=0 # $$ # * $y_i$ is a *slack variable*; constraint is active if $y_i=0$ # * By adding $y_i^2$ no need to add $y_i\geq0$ # * If $g$ is linear, linearity can be preserved by $g_i(x)+y_i=0, y_i\geq0$ # + [markdown] slideshow={"slide_type": "fragment"} # Equality to inequality: # $$ # h_i(x)=0 \iff h_i(x)\geq0 \text{ and } -h_i(x) \geq0 # $$ # + [markdown] slideshow={"slide_type": "slide"} # ## Example problem # For example, we can have an optimization problem # $$ # \begin{align} \ # \min \quad &x_1^2+x_2^2\\ # \text{s.t.} \quad & x_1+x_2-1\geq 0\\ # &-1\leq x_1\leq 1, x_2\leq 3.\\ # \end{align} # $$ # + [markdown] slideshow={"slide_type": "subslide"} # In order to optimize that problem, we can define the following python function: # - import numpy as np def f_constrained(x): return np.linalg.norm(x)**2,[x[0]+x[1]-1, x[0]+1,-x[0]+1,-x[1]+3],[] # + slideshow={"slide_type": "notes"} # #np.linalg.norm?? # + [markdown] slideshow={"slide_type": "subslide"} # Now, we can call the function: # - (f_val,ieq,eq) = f_constrained([1,0]) print("Value of f is "+str(f_val)) if len(ieq)>0: print("The values of inequality constraints are:") for ieq_j in ieq: print(str(ieq_j)+", ") if len(eq)>0: print("The values of the equality constraints are:") for eq_k in eq: print(str(eq_k)+", ") # + [markdown] slideshow={"slide_type": "subslide"} # Is this solution feasible? # - if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]): print("Solution is feasible") else: print("Solution is infeasible") # + [markdown] slideshow={"slide_type": "slide"} # # Indirect and direct methods for constrained optimization # - # There are two categories of methods for constrained optimization: Indirect and direct methods (based on how they treat constraints). # # The main difference is that # # 1. **Indirect** methods convert the constrained optimization problem into a single or a sequence of unconstrained optimization problems, that are then solved. Often, the intermediate solutions do not need to be feasible, but the sequence of solutions converges to a solution that is optimal for the original problem (and, thus, feasible). # + [markdown] slideshow={"slide_type": "fragment"} # 2. **Direct** methods deal with the constrained optimization problem directly. In this case, all the intermediate solutions are feasible. # + [markdown] slideshow={"slide_type": "slide"} # # Indirect methods # + [markdown] slideshow={"slide_type": "slide"} # ## Penalty function methods # - # **IDEA:** Include constraints into the objective function with the help of penalty functions that **penalize constraint violations**. # # * **Exterior** penalty functions (approaching the optimum from outside of the feasible region) # * **Interior** penalty functions (approaching the optimum from inside of the feasible region) # + [markdown] slideshow={"slide_type": "fragment"} # ### Exterior penalty functions # # Let, $\alpha(x):\mathbb R^n\to\mathbb R$ be a function so that # * $\alpha(x)= 0$, for all feasible $x$ # * $\alpha(x)>0$, for all infeasible $x$. 
# # Define a set of optimization problems (depending on parameter $r$) # $$ # \begin{align} \ # \min \qquad &f(x)+r\alpha(x)\\ # \text{s.t.} \qquad &x\in \mathbb R^n # \end{align} # $$ # where 𝛼(𝑥) is a **penalty function** and 𝑟 is a **penalty parameter**. # # for $r>0$. Let $x_r$ be an optimal solution of such problem for a given $r$. # # In this case, the optimal solutions $x_r$ converge to the optimal solution of the constrained problem, when # # * $r\to\infty$, (in exterior penalty functions) # # if such a solution exists. # + [markdown] slideshow={"slide_type": "fragment"} # * All the functions should be continuous # * For each 𝑟, there should exist a solution for penalty functions problem and $𝑥_𝑟$ belongs to a compact subset of $\mathbb R^n$ # + [markdown] slideshow={"slide_type": "subslide"} # For example, good ideas for penalty functions are # * $h_k(x)^2$ for equality constraints, # * $\left(\min\{0,g_j(x)\}\right)^2$ for inequality constraints $g_j(x) \geq 0$, or # * $\left(\max\{0,g_j(x)\}\right)^2$ for inequality constraints $g_j(x) \leq 0$. # + [markdown] slideshow={"slide_type": "slide"} # # Illustrative example # $$ # \min x \\ # \text{ s.t. } -x + 2 \leq 0 # $$ # Let # $$ # \alpha(x) = (\max[0,(-x+2)])^2 # $$ # # Then # # $$ # \alpha(x) = 0, \text{ if }x\geq2 # $$ # $$ # \alpha(x) = (-x+2)^2, \text{ if } x<2 # $$ # # # + [markdown] slideshow={"slide_type": "fragment"} # Minimum of $f(x)+r\alpha(x)$ is at $2-1/2r$ # ![](images/penalty.png) # + [markdown] slideshow={"slide_type": "fragment"} # Then, if $ r \rightarrow \infty$, $$ \text{Min} f(x) + r \alpha(x) = 2 = \text{Min} f(x) $$ # + [markdown] slideshow={"slide_type": "slide"} # In general, a constrained optimization problem in a form of # $$ # \begin{align} \ # \min \quad &f(x)\\ # \text{s.t.} \quad & g_j(x) \geq 0\text{ for all }j=1,\ldots,J\\ # & h_k(x) = 0\text{ for all }k=1,\ldots,K\\ # &x\in \mathbb R^n, # \end{align} # $$ # # can be converted to the following unconstrained optimization problem with a penalty function # # $$ # \alpha(x) = \sum_{j=1}^J{(\min\{0,g_j(x)\})^2} + \sum_{k=1}^K{h_k(x)^2} # $$ # # + slideshow={"slide_type": "subslide"} def alpha(x,f): (_,ieq,eq) = f(x) return sum([min([0,ieq_j])**2 for ieq_j in ieq]) + sum([eq_k**2 for eq_k in eq]) # + [markdown] slideshow={"slide_type": "fragment"} # Let us go back to our example: # $$ # \begin{align} \ # \min \quad &x_1^2+x_2^2\\ # \text{s.t.} \quad & x_1+x_2-1\geq 0\\ # &-1\leq x_1\leq 1, x_2\leq 3.\\ # \end{align} # $$ # + slideshow={"slide_type": "fragment"} alpha([1,0],f_constrained) # + slideshow={"slide_type": "fragment"} def penalized_function(x,f,r): return f(x)[0] + r*alpha(x,f) # - # by increasing r we increase the penalty for being infeasible print(penalized_function([-1,0],f_constrained,10000)) print(penalized_function([-1,0],f_constrained,100)) print(penalized_function([-1,0],f_constrained,10)) print(penalized_function([-1,0],f_constrained,1)) # + [markdown] slideshow={"slide_type": "subslide"} # Let's solve the penalty problem by using Nelder-Mead from scipy.optimize # + slideshow={"slide_type": "-"} from scipy.optimize import minimize res = minimize(lambda x:penalized_function(x,f_constrained,10000000000),# by increasing r we assure convergency [0,0],method='Nelder-Mead', options={'disp': True}) print(res.x) # + (f_val,ieq,eq) = f_constrained(res.x) print("Value of f is "+str(f_val)) if len(ieq)>0: print("The values of inequality constraints are:") for ieq_j in ieq: print(str(ieq_j)+", ") if len(eq)>0: print("The values of the equality 
constraints are:") for eq_k in eq: print(str(eq_k)+", ") if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]): print("Solution is feasible") else: print("Solution is infeasible") # + [markdown] slideshow={"slide_type": "subslide"} # ### How to set the penalty parameter $r$? # - # The penalty parameter should # * be large enough in order for the solutions be close enough to the feasible region, but # * not be too large to # * cause numerical problems, or # * cause premature convergence to non-optimal solutions because of relative tolerances. # # Usually, the penalty term is either # * set as big as possible without causing problems (hard to know), or # * updated iteratively. # # + [markdown] slideshow={"slide_type": "subslide"} # **Note:** # # * We solved our example problem with a fixed value for the penalty parameter $r$. In order to make the penalty function method work in practice, you have to implement the iterative update for $r$. This you can practice in one of the upcoming exercises! # - # $$ # \begin{align} \ # \min \quad &f(x) + \sum_{j=1}^J{r_j(\min\{0,g_j(x)\})^2} + \sum_{k=1}^K{r_kh_k(x)^2} \\ # \text{s.t.} &\\ # &x\in \mathbb R^n, # \end{align} # $$ # + [markdown] slideshow={"slide_type": "fragment"} # * The starting point for solving the penalty problems can be selected in an efficient way. When you set $r_i$ and solve the corresponding unconstrained penalty problem, you get an optimal solution $x_{r_i}$. Then you update $r_i\rightarrow r_{i+1}$ and you can use $x_{r_i}$ as a starting point for solving the penalty problem with $r_{i+1}$. # + [markdown] slideshow={"slide_type": "slide"} # # Barrier function methods # - # **IDEA:** Prevent leaving the feasible region so that the value of the objective is $\infty$ outside the feasible set (an **interior** method). # # This method is only applicable to problems with inequality constraints and for which the set # $$\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$$ # is non-empty. # Let $\beta:\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}\to \mathbb R$ be a function so that $\beta(x)\to \infty$, when $x\to\partial\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$, where $\partial A$ is the boundary of the set $A$. # # Now, define optimization problem # $$ # \begin{align} # \min \qquad & f(x) + r\beta(x)\\ # \text{s.t. } \qquad & x\in \{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}. # \end{align} # $$ # and let $x_r$ be the optimal solution of this problem (which we assume to exist for all $r>0$). # # In this case, $x_r$ converges to the optimal solution of the problem (if it exists), when $r\to 0^+$ (i.e., $r$ converges to zero from the right side (= positive numbers)). # + [markdown] slideshow={"slide_type": "subslide"} # A good idea for a barrier function is $-\frac1{g_j(x)}$. # - # ## Example # $$ # min \text{ } 𝑥 \\ # 𝑠.𝑡. 
−𝑥 + 1 ≤ 0 # $$ # # Let $𝛽(𝑥) = −\frac1{−𝑥+1}$ when $𝑥 ≠ 1$ # # $$ # \min 𝑓(𝑥) + 𝑟𝛽(𝑥) = 𝑥 + \frac{𝑟}{𝑥 − 1} # $$ # # is at 1 + $\sqrt r$ # + [markdown] slideshow={"slide_type": "fragment"} # Then, if $ r \rightarrow 0$, $$ \text{Min} f(x) + r \beta(x) = 1 = \text{Min} f(x) $$ # + [markdown] slideshow={"slide_type": "subslide"} # ![](images/barrierFun.jpg) # + slideshow={"slide_type": "slide"} def beta(x,f): _,ieq,_ = f(x) try: value=sum([1/max([0,ieq_j]) for ieq_j in ieq]) except ZeroDivisionError: value = float("inf") return value # + slideshow={"slide_type": "fragment"} def function_with_barrier(x,f,r): return f(x)[0]+r*beta(x,f) # + slideshow={"slide_type": "fragment"} # let's try to find a feasible starting point print(f_constrained([1,1])) # + slideshow={"slide_type": "fragment"} from scipy.optimize import minimize res = minimize(lambda x:function_with_barrier(x,f_constrained,0.1), [1,1],method='Nelder-Mead', options={'disp': True}) print(res.x) # + slideshow={"slide_type": "subslide"} """ To reduce the number of function evaluations, I eliminated some constraints for the sake of education. Also, here we know the optimum and can check if it does not satisfy the eliminated constraints. But, in practice, we should either increase the limitation of the function evaluations in the code or use a different method that needs fewer function evaluations. """ def f_constrained(x): return np.linalg.norm(x)**2,[x[0]+x[1]-1],[] # - from scipy.optimize import minimize res = minimize(lambda x:function_with_barrier(x,f_constrained,.000000000010), # test different values for r and track the optimum [1,1],method='Nelder-Mead', options={'disp': True}) print(res.x) # + slideshow={"slide_type": "subslide"} (f_val,ieq,eq) = f_constrained(res.x) print("Value of f is "+str(f_val)) if len(ieq)>0: print("The values of inequality constraints are:") for ieq_j in ieq: print(str(ieq_j)+", ") if len(eq)>0: print("The values of the equality constraints are:") for eq_k in eq: print(str(eq_k)+", ") if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]): print("Solution is feasible") else: print("Solution is infeasible") # - # It is 'easy' to see that x* = (0.5,0.5) and f(x*) = 0.5 # # https://www.wolframalpha.com/input/?i=minimize+x%5E2%2By%5E2+on+x%2By%3E%3D1 # + slideshow={"slide_type": "fragment"} print(f_constrained([.5,.5])) # + [markdown] slideshow={"slide_type": "subslide"} # ## Other notes about using penalty and barrier function methods # # * It is worthwhile to consider whether feasibility can be compromised. If the constraints do not have any tolerances, then the barrier function method should be considered. # + [markdown] slideshow={"slide_type": "fragment"} # * Also barrier methods parameter can be set iteratively # + [markdown] slideshow={"slide_type": "fragment"} # * Penalty and barrier functions should be chosen so that they are differentiable (thus $x^2$ above) # + [markdown] slideshow={"slide_type": "fragment"} # * In both methods, the minimum is attained at the limit. # + [markdown] slideshow={"slide_type": "fragment"} # * Different penalty and barrier parameters can be used for different constraints, even for the same problem. 
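# As a minimal sketch of the iterative update for $r$ discussed above (one possible implementation, reusing the penalized_function, alpha and f_constrained defined earlier in this lecture): start from a small penalty parameter, solve the unconstrained penalty problem, then increase $r$ and warm-start the next solve from the previous optimum. The helper name penalty_method and the stopping rule below are just one choice.

# +
from scipy.optimize import minimize

def penalty_method(f, x0, r0=1.0, growth=10.0, tol=1e-8, max_iters=20):
    x = x0
    r = r0
    for _ in range(max_iters):
        # solve the unconstrained penalty problem for the current r,
        # warm-starting from the previous solution x
        res = minimize(lambda y: penalized_function(y, f, r), x, method='Nelder-Mead')
        x = res.x
        if alpha(x, f) < tol:  # squared constraint violation is numerically negligible
            break
        r *= growth  # increase the penalty and repeat
    return x, r

x_opt, r_final = penalty_method(f_constrained, [0, 0])
print(x_opt, r_final)
# -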
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.4 64-bit (''base'': conda)' # name: python37464bitbasecondab708f16640e64910af49310a539c91f5 # --- # # 二分查找 def bin_search(li, val): low = 0 high = len(li) if (high == 0): return -1 while low <= high: mid = low + (high - low) // 2 if li[mid] == val: return mid elif li[mid] < val: low = mid + 1 else: high = mid - 1 return -1 list1 = list(range(100)) bin_search(list1, 3) # # 排序 # ## 冒泡排序 # 稳定排序 def bubble_sort(li): n = len(li) for i in range(n - 1): for j in range(0, n - i - 1): if(li[j] > li[j+1]): li[j], li[j+1] = li[j+1], li[j] list1 = list(range(100)) li = list1[::-1] bubble_sort(li) print(li) # ## 冒泡排序改进 def bubble_sort_update(li, val): n = len(li) for i in range(n - 1): flag = True for j in range(0, n - i - 1): if (li[j] > li[j + 1]): li[j], li[j + 1] = li[j + 1], li[j] flag = False if flag: break # # 选择排序 # 不稳定 def select_sort(li): for i in range(len(li)): minIndex = i for j in range(i, len(li)): if li[j] < li[minIndex]: minIndex = j li[i], li[minIndex] = li[minIndex], li[i] li = list(range(100))[::-1] select_sort(li) print(li) # ## 插入排序 def insert_sort(li): n = len(li) for i in range(1, n): temp = li[i] j = i - 1 while (j >= 0 and li[j] > temp): li[j + 1] = li[j] j = j - 1 li[j + 1] = temp li = list(range(100))[::-1] insert_sort(li) print(li) # ## 快排 # # 最坏情况:有序,由于递归可能会暴栈 # + def partition(li, left, right): tmp = li[left] while left < right: while left < right and li[right] >= tmp: right -= 1 li[left] = li[right] while left < right and li[left] < tmp: left += 1 li[right] = li[left] li[left] = tmp return left def quick_sort(li, left, right): if left < right: mid = partition(li, left, right) quick_sort(li, left, mid - 1) quick_sort(li, mid + 1, right) # - li = list(range(100))[::-1] quick_sort(li, 0, len(li) - 1) print(li) # # 堆排序 # + def sift(li, low, high): """ # 堆调整 # li表示树,low表示树的根,hight表示树的最后一个节点 """ tmp = li[low] i = low j = 2 * i + 1 # 左孩子节点 while j < high: if j + 1 < high and li[j] < li[j + 1]: # 如果右孩子存在且比左孩子大 j += 1 if li[j] > li[i]: li[i] = li[j] i = j j = 2 * i + 1 else: # 两个孩子都比根小 break li[i] = tmp def heap_sort(li): # 1. 构造堆 n = len(li) for low in range(n//2 - 1, -1, -1): sift(li, low, n) # 2. 
出数字 for high in range(n - 1, -1, -1): li[0], li[high] = li[high], li[0] sift(li, 0, high - 1) # - list1 = list(range(100))[::-1] heap_sort(list1) print(list1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests from bs4 import BeautifulSoup import tqdm.notebook as tq import pickle import pandas as pd def get_pages(page_url): headers = { 'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36' } response = requests.get(url=page_url,headers=headers) page_soup = BeautifulSoup(response.text,'lxml') return page_soup urls = ['https://www.transfermarkt.com/arsenal-fc/startseite/verein/11', 'https://www.transfermarkt.com/fc-chelsea/startseite/verein/631', 'https://www.transfermarkt.com/fc-liverpool/startseite/verein/31', 'https://www.transfermarkt.com/manchester-city/startseite/verein/281', 'https://www.transfermarkt.com/manchester-united/startseite/verein/985', 'https://www.transfermarkt.com/tottenham-hotspur/startseite/verein/148', 'https://www.transfermarkt.com/atletico-madrid/startseite/verein/13', 'https://www.transfermarkt.com/fc-barcelona/startseite/verein/131', 'https://www.transfermarkt.com/real-madrid/startseite/verein/418', 'https://www.transfermarkt.com/fc-paris-saint-germain/startseite/verein/583', 'https://www.transfermarkt.com/fc-bayern-munchen/startseite/verein/27', 'https://www.transfermarkt.com/borussia-dortmund/startseite/verein/16', 'https://www.transfermarkt.com/inter-mailand/startseite/verein/46', 'https://www.transfermarkt.com/ac-mailand/startseite/verein/5', 'https://www.transfermarkt.com/juventus-turin/startseite/verein/506', 'https://www.transfermarkt.com/as-rom/startseite/verein/12' ] premier_urls = ['https://www.transfermarkt.com/arsenal-fc/startseite/verein/11', 'https://www.transfermarkt.com/fc-chelsea/startseite/verein/631', 'https://www.transfermarkt.com/fc-liverpool/startseite/verein/31', 'https://www.transfermarkt.com/manchester-city/startseite/verein/281', 'https://www.transfermarkt.com/manchester-united/startseite/verein/985', 'https://www.transfermarkt.com/tottenham-hotspur/startseite/verein/148', 'https://www.transfermarkt.com/aston-villa/startseite/verein/405', 'https://www.transfermarkt.com/brighton-amp-hove-albion/startseite/verein/1237', 'https://www.transfermarkt.com/fc-burnley/startseite/verein/1132', 'https://www.transfermarkt.com/crystal-palace/startseite/verein/873', 'https://www.transfermarkt.com/fc-everton/startseite/verein/29', 'https://www.transfermarkt.com/leeds-united/startseite/verein/399', 'https://www.transfermarkt.com/leicester-city/startseite/verein/1003', 'https://www.transfermarkt.com/newcastle-united/startseite/verein/762', 'https://www.transfermarkt.com/fc-southampton/startseite/verein/180', 'https://www.transfermarkt.com/west-ham-united/startseite/verein/379', 'https://www.transfermarkt.com/wolverhampton-wanderers/startseite/verein/543', 'https://www.transfermarkt.com/fc-fulham/startseite/verein/931', 'https://www.transfermarkt.com/west-bromwich-albion/startseite/verein/984', 'https://www.transfermarkt.com/sheffield-united/startseite/verein/350' ] def get_names(url): soup = get_pages(url) hide = soup.find_all('td',class_='hide') for i in range(len(hide)): name = hide[i].text if name not in names: names.append(name) names = [] for url in tq.tqdm(premier_urls): url1 = 
url+'?saison_id=2020' url2 = url+'?saison_id=2019' get_names(url) get_names(url1) get_names(url2) def process(names): result = [] for name in names: sp = name.split() for word in sp: word = word.lower() if word not in result: result.append(word) return result player_names = process(names) pickle.dump(player_names, open('../data/pretrained/premier_player_names.pkl', 'wb')) coach_names = ['carlo','ancelotti','diego','simeone','julen', 'lopetegui','xavi','ronald','koeman', '','','','', '','','','','','', '','','','','', '','','','','','', '','','','','','', '','','','','mikel','Arteta','','', '','','','','','','','', '','','','','' ] manager_names = process(coach_names) pickle.dump(manager_names, open('../data/pretrained/manager_names.pkl', 'wb')) stadium_names = ['Emirates','Stamford Bridge','Anfield','Etihad','Trafford','Westfalenstadion','Allianz','' '','','Olimpico','Parc des Princes'] stadium = process(stadium_names) pickle.dump(stadium, open('../data/pretrained/stadium_names.pkl', 'wb')) game_url = '../data/match_info.html' soup = BeautifulSoup(open(game_url,'r',encoding = 'utf-8')) game_time = soup.find_all('span',class_='matches__date') team = soup.find_all('span',class_='swap-text__target')[1:] game_times = pd.DataFrame() for i in range(380): game_times = game_times.append({ 'time' : game_time[i].text.split()[0], 'team1' : team[2*i].text, 'team2' : team[2*i+1].text },ignore_index=True) game_time = pd.read_csv('../data/utils/game_time.csv') game_time game_time['time'] = pd.to_datetime(game_time.time) game_time game_time.iloc[0]['time'] from datetime import datetime,timedelta time = [] for i in range(game_time.shape[0]): time.append(game_time.iloc[i]['time']-timedelta(hours=1)) game_time['time']=time game_time game_time['time'] = pd.to_datetime(game_time['time']).dt.time game_time game_time.to_csv('../data/utils/game_time.csv',index=False) type(soup) game_times[:20] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import tensorflow as tf # What we really want to do is building a *custom estimator* which will simplify our life for results tracking and data visualization. Further improvements might include a distributed Cloud version of it to overcome the important computational cost it requires. # + # loading datasets and phreshing folds for evaluations and testing train_ds, test_ds = np.load("train.npz"), np.load("test.npz") partition, x_train, y_train = train_ds["partition"], train_ds["X_train"], train_ds["y_train"] x_test, y_test = test_ds['X_test'], test_ds['y_test'] # - # First attempt, _without_ cross validation # This is the actual training phase: # # for cross validation, we train the data on 4 different models by splitting the dataset in 4 subsets. 
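# The partition array loaded above assigns each example to one of the four folds. As a minimal illustration of the split described here (plain NumPy, independent of the estimator code below; the helper name fold_split is introduced only for this sketch):

# +
def fold_split(X, Y, partition, fold):
    """Return (train, validation) pairs by holding out one fold id (illustrative helper)."""
    train_mask = partition != fold
    val_mask = partition == fold
    return (X[train_mask], Y[train_mask]), (X[val_mask], Y[val_mask])

# e.g. hold out fold 1:
# (x_tr, y_tr), (x_va, y_va) = fold_split(x_train, y_train, partition, 1)
# -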
# +
feature_columns = [tf.feature_column.numeric_column("X", shape=(1000, 20))]
num_hidden_units = [512, 256, 128]

model = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                   hidden_units=num_hidden_units,  # hidden layer sizes defined above
                                   n_classes=10,
                                   model_dir="./checkpoints_tutorial17-1/")
# -

# giving an error, but at least we've fixed it

for i in range(1, 5):
    x_part, y_part, x_val, y_val = x_train[np.where(partition != i)], y_train[np.where(partition != i)], x_train[np.where(partition == i)], y_train[np.where(partition == i)]
    train_input_fn = tf.estimator.inputs.numpy_input_fn(x={'X': x_part}, y=y_part, num_epochs=1, batch_size=128, shuffle=False)
    model.train(input_fn=train_input_fn, steps=10)

final_test_input_fn = tf.estimator.inputs.numpy_input_fn(x={'X': x_test}, y=y_test, num_epochs=20, batch_size=128, shuffle=False)

# +
# the model function
def cnn_clocator(inputs, classes, mode):
    conv1 = tf.layers.conv2d(inputs=inputs, filters=50, kernel_size=3, padding="same", activation=tf.nn.relu)
    conv2 = tf.layers.conv2d(inputs=conv1, filters=80, kernel_size=5, padding="same", activation=tf.nn.relu)
    pool = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    pool_flat = tf.layers.flatten(pool)
# -

len(x_train[np.where(partition == 1)])

# +
train_input_fn = tf.estimator.inputs.numpy_input_fn(x={'X': x_train[np.where(partition == 1)]}, y=y_train[np.where(partition == 1)], num_epochs=1, shuffle=False)
model.train(input_fn=train_input_fn, steps=10)
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# %matplotlib inline
from pak.datasets.Market1501 import Market1501
import matplotlib.pyplot as plt
from keras.models import load_model

import sys
sys.path.insert(0, '../')
from reid import reid

root = 'data_storage'
m1501 = Market1501(root)
X, Y = m1501.get_train()

model = reid.ReId()

im1 = X[0]
im2 = X[2]

fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(121); ax.axis('off')
ax.imshow(im1)
ax = fig.add_subplot(122); ax.axis('off')
ax.imshow(im2)
plt.show()
# -

model.model.summary()

# +
im1 = X[50]
im2 = X[500]

score = model.predict(im1, im2)
print('score:', score)

fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(121); ax.axis('off')
ax.imshow(im1)
ax = fig.add_subplot(122); ax.axis('off')
ax.imshow(im2)
plt.show()

# +
from math import ceil
import cv2
from os.path import isfile, join

A = ['im01', 'im02', 'im03', 'im09', 'im10', 'im04']
B = ['im10', 'im09', 'im07', 'im05', 'im09', 'im08']
A = [ cv2.cvtColor(cv2.imread(join('img', a) + '.png'), cv2.COLOR_BGR2RGB) for a in A ]
B = [ cv2.cvtColor(cv2.imread(join('img', b) + '.png'), cv2.COLOR_BGR2RGB) for b in B ]
assert len(A) == len(B)
n = len(A)

score = model.predict(A, B)
print('score:', score)

fig = plt.figure(figsize=(12, 12))
for i, (a, b, s) in enumerate(zip(A, B, score)):
    ax = fig.add_subplot(int(ceil(n/2)), 4, i*2+1)
    ax.axis('off')
    ax.imshow(a)
    ax = fig.add_subplot(int(ceil(n/2)), 4, i*2+2)
    ax.axis('off')
    ax.imshow(b)
    txt = "{:10.2f}".format(s)
    ax = fig.add_subplot(int(ceil(n/2)), 2, i+1)
    ax.axis('off')
    ax.text(0.3, 0.5, txt, fontsize=16)
    ax.plot([0, 1], [0, 1], alpha=0.0)
    ax.arrow(x=0.5, y=0.4, dx=-0.05, dy=0, head_width=0.05, color='black')
    ax.arrow(x=0.5, y=0.4, dx=0.05, dy=0, head_width=0.05, color='black')
plt.show()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:py39]
#     language: python
#     name: conda-env-py39-py
# ---

# +
import torch
import torch.nn as nn
import git
import os

from torch.optim import SGD
# -

CHECKPOINTS_PATH = os.path.join(os.getcwd(), 'checkpoints')
os.makedirs(CHECKPOINTS_PATH, exist_ok=True)
CHECKPOINTS_PATH

# +
X = torch.rand(10, 10)
y = torch.add(X, 2.)

cfg = dict(d_in=10, d_hidden=8)
model = torch.nn.Sequential(nn.Linear(cfg['d_in'], cfg['d_hidden']),
                            nn.ReLU(),
                            nn.Linear(cfg['d_hidden'], 1),
                            nn.ReLU())
optimizer = SGD(model.parameters(), lr=1e-2)
# -

def get_training_state(cfg, model):
    training_state = {
        "commit_hash": git.Repo(search_parent_directories=True).head.object.hexsha,
        # Model structure (taken from the cfg that is passed in)
        "d_in": cfg['d_in'],
        "d_hidden": cfg['d_hidden'],
        # Model state
        "state_dict": model.state_dict()
    }
    return training_state

torch.save(get_training_state(cfg, model), os.path.join(CHECKPOINTS_PATH, 'my_checkpoint.pth'))

model.load_state_dict(torch.load(os.path.join(CHECKPOINTS_PATH, 'my_checkpoint.pth'))['state_dict'])

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # PUF model and probability distributions
#
# We use the temperature-dependent model for SRAM-PUF as defined by .
# \begin{align*}
# r_i^{(j)}=\left\{\begin{matrix}
# 0 , &\text{if } m_i+d_i\cdot t^{(j)} +w_i^{(j)}\leq \tau,\\
# 1 , &\text{if } m_i+d_i \cdot t^{(j)} +w_i^{(j)}> \tau.
# \end{matrix}\right.
# \end{align*}
# With process variable $m_i \sim \mathcal{N}(\mu_M,\sigma_M^2)$, temperature dependence variable $d_i \sim \mathcal{N}(0,\sigma_D^2)$, noise variable $w_i^{(j)} \sim \mathcal{N}(0,\sigma_N^2)$, and temperature $t^{(j)}$ at time $j$.
# Roel also defines some parameters $\lambda_1 = \sigma_N/\sigma_M$, $\lambda_2 = (\tau-\mu_M)/\sigma_M$, and $\theta= \sigma_N / \sigma_D $.
#
# We refer to the one-probability of cell $i$ as $\xi_i$; in case of known state variables, it is given as $\xi_i = \Pr(X=1) = 1- cdf_N(\tau - m_i-d_i \cdot t^{(j)} )=\Phi(\frac{-\tau + m_i + d_i \cdot t^{(j)} }{\sigma_N}) $.
#
# It is our goal to estimate $\lambda_1,\lambda_2,\theta$ for our dataset.
#
# We will use maximum-likelihood estimation to achieve this.

# ## Settings of parameters

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
import srampufparest

# set the parameters
lambda12 = [0.1,0.004]
theta = 45;

# # verify srampufparest.loadUnique()
# Does it give the same histogram as the code that I used before?

Nsamples = 60
Ndevices = 96
datapath = r'C:\Users\lkusters\surfdrive\python\server_008104\20181125\Unique'
folder = datapath+'\\002_Temp_025'
_,merged1 = srampufparest.loadUnique(folder)

import os
data25 = [[] for i in range(Nsamples)]
for ii in range(Ndevices):
    count = 0;
    for fname in os.listdir(folder):
        if fname.startswith("Unique_dev%03d"%(ii+1) ):
            data = np.fromfile(folder+'\\'+fname, np.uint8)
            data_bits = np.unpackbits(data)
            data25[count].extend(data_bits)
            count += 1;
data25 = np.swapaxes(data25,0,1) # swap axes, s.t.
1st dim is cells, and second is observations # from srampufparest from numpy.random import uniform print(len(merged1)) N25 = [len(d) for d in merged1] # number of measurements per cell sums25 = [sum(n) for n in merged1 ] print(set(N25)) # now generate errorcount errorcount25 = [sum25 if uniform(0,1,1)>(sum25/Nsamples) else Nsamples-sum25 for sum25 in sums25] # original method from numpy.random import uniform print(len(data25)) N25 = [len(d) for d in data25] # number of measurements per cell sums25 = [sum(n) for n in data25 ] print(set(N25)) # now generate errorcount errorcount25 = [sum25 if uniform(0,1,1)>(sum25/Nsamples) else Nsamples-sum25 for sum25 in sums25] # + # ORIGINAL # plot the histogram of the number of observed ones in each cell from scipy.special import comb pmfdata_sum25,edges = np.histogram(sums25,bins=Nsamples+1,range=(-.5,Nsamples+.5)) plt.plot(range(Nsamples+1),[count/len(data25) for count in pmfdata_sum25] ,'o', fillstyle='none') plt.yscale('log') plt.xlabel('k Number of ones for total %d measurements'%(Nsamples)) plt.ylabel('#cells with Number of ones = k ') plt.title('$T=25^oC$') # Functions srampuf pmfhist,centers = srampufparest.getcounts1D(merged1) plt.plot(centers,[count/len(merged1) for count in pmfhist] ) plt.show() # - print(centers) # # Verify fitting method # # Pdf of one-probability (no temperature) # # We find that apparently the approximations by python (in integrator?) are not accurate enough, since the pdf does not sum to 1 (but instead 0.38). Therefore, we instead use the equations to approximate the cdf and then calculate the pdf by integrating (sum over small intervals). # # ## Pdf of the one-probability # \begin{equation} # p_0(\xi|\lambda_1,\lambda_2) # = \frac{\lambda_1 \cdot \phi( \Phi^{-1}(\xi) \cdot \lambda_1 +\lambda_2) }{\phi(\Phi^{-1}(\xi))} # \end{equation} # # First, we show that indeed the total probability does not sum to 1: # ## Cdf of the one-probability # # # The PDF can be approximated as # \begin{equation} # p_0(\xi|\lambda_1,\lambda_2) # = \Pr(\Xi \leq \xi + \Delta\xi) - \Pr(\Xi \leq \xi - \Delta\xi) , # \end{equation} # with # \begin{equation} # \Pr(\Xi \leq \xi) # = \Phi( \Phi^{-1}(\xi) \cdot \lambda_1 +\lambda_2) # \end{equation} # # Second show that we can use cdf and integrate to improve the approximation # + # First plot the pdf l1 = lambda12[0] l2 = lambda12[1] NX = 1000 # number of steps xx = [i for i in np.linspace(0,1,NX)] yy= [norm.cdf(l1*norm.ppf(xi)+l2) for xi in xx] pdf_one = np.diff(yy) pdf_x = [x+0.5/NX for x in xx[:-1]] plt.plot(pdf_x,pdf_one,'o', fillstyle='none') NX = 1000 # number of steps p0_p, p0_xi = srampufparest.pdfp0(l1,l2,NX) plt.plot(p0_xi,p0_p) plt.yscale('log') plt.xlabel('Pr(X=1)=xi') plt.ylabel('Pr(Xi=xi)') plt.show() pred = sum(p0_p) # should be 1 print(pred, 'should be equal to ONE ') # + # Then verify that we can reproduce the histogram of the one counts # plot the histogram of the number of observed ones in each cell from scipy.special import comb pmfhist,centers = srampufparest.getcounts1D(merged1) plt.plot(centers,[count/len(merged1) for count in pmfhist] ,'o', fillstyle='none') plt.yscale('log') plt.xlabel('k Number of ones for total %d measurements'%(Nsamples)) plt.ylabel('#cells with Number of ones = k ') plt.title('$T=25^oC$') # Now the fit l1 = lambda12[0] l2 = lambda12[1] NX = 10000 # number of steps p0_p, p0_xi = srampufparest.pdfp0(l1,l2,NX) pmf_ones = [0]*(Nsamples+1) for nones in range(Nsamples+1): # loop over number of ones pred = [(xi**nones)*((1-xi)**(Nsamples-nones))*pdfp for 
(xi,pdfp) in zip(p0_xi,p0_p) ] pmf_ones[nones] = comb(Nsamples,nones)*sum(pred) plt.plot(range(Nsamples+1),pmf_ones) plt.show() pred = sum(p0_p) # should be 1 print(pred, 'should be equal to ONE ') # + # Now the temperature dependent behavior # settings from Roel: 0.12,0.02,45 l1 = 0.12 l2 = 0.02 theta = 45 NX = 1000 # number of steps idx = srampufparest.getcellskones(merged2,5); # select cells that observed 5 ones in T2 merged1_sel = [merged1[i] for i in idx] # find observations of these cells at T1 hist,centerpoints = srampufparest.getcounts1D(merged1_sel); # generate the histogram p1 = 5/60 # one probability L = 40 # number of observations at T1 dT = 65 # temperature difference between T1 and T2 predict,x = srampufparest.pmfpkones_temperature(NX,dT,l1,l2,theta,p1,L); # predict the histogram plt.plot(x,predict) plt.plot(centerpoints,hist,'o', fillstyle='none') # actual values plt.yscale('log') plt.xlabel('k Number of ones for total %d measurements'%(Nsamples)) plt.ylabel('#cells with Number of ones = k ') # - # # Pdf of one-probability (with temperature) # # # ## Pdf of the one-probability # \begin{equation} # p_1(\xi_2|\xi_1,t_1,t_2,\lambda_1,\lambda_2,\theta) # = \phi \left(\frac{\theta}{t_2-t_1} \cdot (\Phi^{-1}(\xi_{t_2})-\Phi^{-1}(\xi_{t_1})) \right) \cdot \frac{\theta}{t_2-t_1} \cdot \frac{1 }{\phi(\Phi^{-1}(\xi_{t_2} ) ) } # \end{equation} # # First, we show that indeed the total probability does not sum to 1: # ## Cdf of the one-probability # # # The PDF can be approximated as # \begin{equation} # p_1(\xi_2|\xi_1,t_1,t_2,\lambda_1,\lambda_2,\theta) # = \Pr(\Xi_{t_2} \leq \xi_{t_2}+ \Delta\xi|\Xi_{t_1}=\xi_{t_1}) - \Pr(\Xi_{t_2} \leq \xi_{t_2}- \Delta\xi|\Xi_{t_1}=\xi_{t_1}) , # \end{equation} # with # \begin{equation} # \Pr(\Xi_{t_2} \leq \xi_{t_2}|\Xi_{t_1}=\xi_{t_1}) # = \Phi \left(\frac{\theta}{t_2-t_1} \cdot (\Phi^{-1}(\xi_{t_2})-\Phi^{-1}(\xi_{t_1})) \right) # \end{equation} # # We show that we can use cdf and integrate to improve the approximation # + # use the CDF l1 = lambda12[0] l2 = lambda12[1] th = theta NX = 100 # number of steps dT = 25 # temperature difference # first we choose single value for pi pi = 0.1 # xit1 xx = [i for i in np.linspace(0,1,NX)] yy = [norm.cdf( (th/dT) * (norm.ppf(xi)-norm.ppf(pi)) ) for xi in xx] p1_p = np.diff(yy) p1_xi = [x+0.5/NX for x in xx[:-1]] plt.plot(p1_xi,p1_p) plt.yscale('log') plt.xlabel('Pr(X=1)=xi') plt.ylabel('Pr(Xi=xi|pi=%0.2f,dT=%d)'%(pi,dT)) plt.show() pred = sum(p1_p) xx[0] print(pred, 'should be equal to ONE ') # second we integrate over all possible pi, to get the average distribution # we re-use p0_p and p0_xi ytotal = [ sum( [ Pdfpi*norm.cdf( (th/dT) * (norm.ppf(xi)-norm.ppf(pi)) ) for (Pdfpi,pi) in zip(p0_p,p0_xi) ]) for xi in xx] ytotal = np.diff(ytotal) ytotalx = [x+0.5/NX for x in xx[:-1]] plt.plot(ytotalx,ytotal) plt.yscale('log') plt.xlabel('Pr(X=1)=xi') plt.ylabel('Pr(Xi=xi|dT=%d)'%dT) plt.show() pred = sum(ytotal) print(pred, 'should be equal to ONE ') # + # use the CDF l1 = lambda12[0] l2 = lambda12[1] th = theta NX = 100 # number of steps dT = 25 # temperature difference # first we choose single value for pi pi = 0.1 # xit1 p1_p, p1_xi = srampufparest.pdfp1(pi,dT,l1,l2,th,NX) plt.plot(p1_xi,p1_p) plt.yscale('log') plt.xlabel('Pr(X=1)=xi') plt.ylabel('Pr(Xi=xi|pi=%0.2f,dT=%d)'%(pi,dT)) plt.show() pred = sum(p1_p) xx[0] print(pred, 'should be equal to ONE ') # + # plot the histogram of the number of observed ones in each cell pmfdata_sum25,edges = np.histogram(sums25,bins=Nsamples+1,range=(-.5,Nsamples+.5)) 
plt.plot(range(Nsamples+1),[count/len(data25) for count in pmfdata_sum25] ,'o', fillstyle='none') plt.yscale('log') plt.xlabel('k Number of ones for total %d measurements'%(Nsamples)) plt.ylabel('#cells with Number of ones = k ') plt.title('$T=25^oC$') # now also plot the fit pmf_est = calculatePmf_kones_noT(lambda12,Nsamples) plt.plot(range(Nsamples+1),pmf_est) fig1 = plt.gcf() plt.show() fig1.savefig('UNIQUE_kones_pdf.jpg', bbox_inches='tight') # - # # Build the grid for ML estimation # # Now the probability that for any cell we observe $k_1$ ones at temperature $t_1$ and $k_2$ ones at temperature $t_2$, is # \begin{equation} # \Pr_{k_1,k_2}(k_1,k_2 |t_1,t_2,\lambda_1,\lambda_2,\theta) # ={K_1 \choose k_1} {K_2 \choose k_2} \int_0^1 \xi_1^{k_1} (1-\xi_1)^{K_1 -k_1} p_0(\xi_1|\lambda_1,\lambda_2) \int_0^1 \xi_2^{k_2} (1-\xi_2)^{K_2 -k_2} p_1(\xi_2|\xi_1,t_1,t_2 ,\lambda_1,\lambda_2,\theta) d \xi_2 d \xi_1 . # \end{equation} # + from scipy.special import comb # preparations NX = 100 K=60 # number of observations l1 = 0.1213; l2 = 0.0210; th = 45; dT = 25 # difference between temperatures xx = [i for i in np.linspace(0,1,NX)] yy= [norm.cdf(l1*norm.ppf(xi)+l2) for xi in xx] p0_p = np.diff(yy) p0_xi = [x+0.5/NX for x in xx[:-1]] # + from scipy.special import comb import sys K=60 for th in [45,40,50,35,55]: print('Likelihood Pr_{k_1,k_2}(k_1,k_2 |dT,l1,l2,l2), with l1=%0.5f,l2=%0.5f,dT=%d,theta=%d'%(l1, l2, dT, th)) Pk1k2 = [[0]*(K+1) for k in range(K+1)] for k1 in range(K+1): comb1 = comb(K,k1) for k2 in range(K+1): sys.stdout.write("\rCalculating k1=%2.0d,k2=%2.0d" % (k1,k2) ) total = 0 for (xi1,pdfxi1) in zip(p0_xi,p0_p): yy = [norm.cdf( (th/dT) * (norm.ppf(xi)-norm.ppf(xi1)) ) for xi in xx] p1_p = np.diff(yy) p1_xi = [x+0.5/NX for x in xx[:-1]] int2 = sum([(xi2**k2)*((1-xi2)**(K-k2))*pdfxi2 for (xi2,pdfxi2) in zip(p1_xi,p1_p)]) total = total+ pdfxi1*int2*(xi1**k1)*((1-xi1)**(K-k1)) Pk1k2[k1][k2] = comb1*comb(K,k2)*total np.savetxt('Pk1k1_(%d).txt.gz' %th , X, delimiter=' ', newline='\n', header='Likelihood Pr_{k_1,k_2}(k_1,k_2 |dT,l1,l2,l2), with l1=%0.5f,l2=%0.5f,dT=%d,theta=%d'%(l1, l2, dT, th), comments='# ') # - np.savetxt('Pk1k1_(%d).txt.gz' %th , X, delimiter=' ', newline='\n', header='Likelihood Pr_{k_1,k_2}(k_1,k_2 |dT,l1,l2,l2), with l1=%0.5f,l2=%0.5f,dT=%d,theta=%d'%(l1, l2, dT, th), comments='# ') # # Appendix : We can do the same for the pdf of Pe # Again we show that the pdf (approximate by python) does not sum to 1 # + # can we also plot the pdf of Pe ? lambda12 = [0.13,0.02] l1 = lambda12[0] l2 = lambda12[1] NX = 1000 # number of steps xx = [i for i in np.linspace(0,1,NX)] y = [l1*(1-xi)*(norm.pdf(l1*norm.ppf(xi)+l2)+norm.pdf(l1*norm.ppf(xi)-l2))/norm.pdf(norm.ppf(xi)) for xi in xx] plt.plot(xx,y) plt.yscale('log') plt.show() pred = sum(y[1:-1])/NX print(pred) # is not even close to 1 ! # this shows us that we are not accurate enough! # - # Again show that we can use cdf and integrate to improve the approximation # + # what if we use cdf instead? 
lambda12 = [0.12,0.02] l1 = lambda12[0] l2 = lambda12[1] NX = 100 # number of steps xx = [i for i in np.linspace(0,1,NX)] upperlim = [norm.ppf(i) for i in xx] # these are the upper limits of the integral I want to calculate plt.plot(xx,upperlim) plt.yscale('log');plt.xlabel('x');plt.ylabel(r'$\Phi^{-1}(x)$') plt.title('Upper limit of integral') plt.show() # Now we generate the part inside the integral Nu = 500000; uu = np.linspace(-295,45,Nu) # outside of this range it is 0 anyway y = [norm.cdf(-u)*(norm.pdf(l1*u+l2)+norm.pdf(l1*u-l2)) for u in uu]; plt.plot(uu,y) #print(y[90:-355]) plt.yscale('log'); plt.xlabel('u');plt.ylabel(r'$\Phi(-u)\cdot(\phi(\dots)+\dots)$') plt.title('Values of function inside integral') plt.show() # now I wil calculate the values idxs = [np.searchsorted(uu, upper, side="left") for upper in upperlim]; cdfPe = [(340/Nu)*sum(y[:idx])*l1 for idx in idxs] plt.plot(xx,cdfPe) plt.yscale('log');plt.xlabel('x');plt.ylabel(r'$\lambda_1 \int_{-\infty}^{\Phi^{-1}(x)}\dots du$') plt.title('$cdf_{Pe}$') plt.show() # ok! now I have cdf. Can I make pdf ? pdfPe = np.diff(cdfPe) plt.plot([x+0.5/NX for x in xx[:-1]],pdfPe) plt.yscale('log');plt.xlabel('x');plt.ylabel('$\Delta cdf_{Pe}$') plt.title('$pdf_{Pe}$') plt.show() print(sum(pdfPe)) # - # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .ps1 # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: .NET (PowerShell) # language: PowerShell # name: .net-powershell # --- # # T1134.005 - SID-History Injection # Adversaries may use SID-History Injection to escalate privileges and bypass access controls. The Windows security identifier (SID) is a unique value that identifies a user or group account. SIDs are used by Windows security in both security descriptors and access tokens. (Citation: Microsoft SID) An account can hold additional SIDs in the SID-History Active Directory attribute (Citation: Microsoft SID-History Attribute), allowing inter-operable account migration between domains (e.g., all values in SID-History are included in access tokens). # # With Domain Administrator (or equivalent) rights, harvested or well-known SID values (Citation: Microsoft Well Known SIDs Jun 2017) may be inserted into SID-History to enable impersonation of arbitrary users/groups such as Enterprise Administrators. This manipulation may result in elevated access to local resources and/or access to otherwise inaccessible domains via lateral movement techniques such as [Remote Services](https://attack.mitre.org/techniques/T1021), [Windows Admin Shares](https://attack.mitre.org/techniques/T1077), or [Windows Remote Management](https://attack.mitre.org/techniques/T1028). # ## Atomic Tests: # Currently, no tests are available for this technique. # ## Detection # Examine data in user’s SID-History attributes using the PowerShell Get-ADUser cmdlet (Citation: Microsoft Get-ADUser), especially users who have SID-History values from the same domain. (Citation: AdSecurity SID History Sept 2015) Also monitor account management events on Domain Controllers for successful and failed changes to SID-History. (Citation: AdSecurity SID History Sept 2015) (Citation: Microsoft DsAddSidHistory) # # Monitor for Windows API calls to the DsAddSidHistory function. 
(Citation: Microsoft DsAddSidHistory) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9. # # Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy. # # Some notes: # # It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger # When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!" # If you add any additional variables, make sure you use the same names as the ones used in the class # + import tensorflow as tf from os import path, getcwd, chdir path = f"{getcwd()}/../tmp2/mnist.npz" # - def train_mnist(): # please do not remove # model fitting inline comments. class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epochs, logs = {}): if(logs.get('acc')>0.99): print("Reached 99% accuracy so cancelling training!") self.model.stop_training = True callbacks = myCallback() mnist = tf.keras.datasets.mnist # Path to your local dataset here (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) x_train = x_train / 255.0 x_test = x_test / 255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation = 'relu'), tf.keras.layers.Dense(10, activation = 'softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit( x_train, y_train, epochs = 5, callbacks = [callbacks] ) # model fitting return history.epoch, history.history['acc'][-1] train_mnist() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import os import math import data_loader import ResNet as models from config.miccai_settings import * import torch os.environ["CUDA_VISIBLE_DEVICES"] = "1" st = Settings() options = st.get_options() model = models.DANNet(num_classes=2) model.sharedNet.conv1.weight saved_model=None if options['load_initial_weights']: saved_model = torch.load(options['initial_weights_file']) print(options['initial_weights_file']) model.load_state_dict(saved_model.state_dict()) model.sharedNet.conv1.weight for param_tensor in model.state_dict(): print(param_tensor, "\t", model.state_dict()[param_tensor].size()) for param_tensor in model.state_dict(): print(param_tensor, "\t", model.state_dict()[param_tensor].size()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Activity Cliffs # # Quando estamos desenvolvendo um modelo de SAR/SPR (relação estrutura-atividade/propriedade), fazemos a seguinte suposição: # # _"Compostos com estruturas similares terão propriedades similares"_ # # Isso é equivalente a imaginar o espaço químico (um espaço imaginário contendo todas as moléculas nos eixos x e y, e o valor de sua atividade no eixo z) como sendo contínuo, sem grandes variações em pequenas distâncias. 
Porém, nem sempre esse é o caso: pequenas modificações em uma estrutura molecular podem influenciar significativamente a atividade (ou qualquer propriedade). Em um espaço físico, isso seria equivalente a um precipício (ravina, barranco, etc.), ou, em inglês, *cliff*. Dessa forma, quando um par de estruturas muito similares apresentam propriedades muito diferentes, dizemos que se trata de um *activity cliff*. # # O que significa "ser muito similar"? O que significa "propriedades muito diferentes"? Não há definição estabelecida, portanto, escolha os valores de corte da forma que se encaixar melhor ao seu caso. # # Neste notebook vamos verificar a existência de *activity cliffs* em um banco de dados de inibidores enzimáticos, e tentar explorar as razões moleculares por trás desses *cliffs*. # Vamos começar importando o chembl_webresource_client, para trabalharmos com estruturas do ChEMBL from chembl_webresource_client.new_client import new_client activities = new_client.activity def get_inhibitors(target): """ Input: código ChEMBL do alvo. Para descobrir o código do seu alvo, acesse o site do ChEMBL Exemplo: alvo = Catepsina B = "CHEMBL4072" Output: um dataframe contendo SMILES e pChEMBL values (atividade) dos inibidores do alvo """ inhibitors = activities.filter(target_chembl_id=target, pchembl_value__isnull=False) data = [[item["canonical_smiles"], item["pchembl_value"]] for item in inhibitors] df = pd.DataFrame(data, columns=["SMILES", "pChEMBL"]) return df # + # Vamos procurar activity cliffs em inibidores de Catepsina B df = get_inhibitors("CHEMBL4072") # Remover linhas com dados faltando df.dropna(inplace=True) # Remover duplicatas df = df.sort_values("pChEMBL", ascending=False).drop_duplicates(subset="SMILES").reset_index(drop=True) df.head() # - len(df) # Podemos definir *activity cliffs* de acordo com uma diferença de atividade (por exemplo, uma diferença de duas unidades logarítmicas) ou separando os compostos em "ativos" e "inativos" e definindo que um *cliff* contém um par de compostos similares no qual um é ativo e um é inativo. # # Portanto, para definir nossos activity cliffs, vamos escolher o valor de pChEMBL = 6 como valor de corte: compostos com pChEMBL >= 6 serão considerados ativos, e inativos se pChEMBL < 6. # + # A coluna pChEMBL é importada como string, precisamos convertê-la a float df["Active"] = df["pChEMBL"].astype('float64') >= 6 # Número de compostos considerados "ativos" df["Active"].sum() # - # Já temos os valores de atividade dos compostos. Agora, precisamos comparar as similaridades entre as estruturas. Uma forma comum de fazer isso é utilizando *fingerprints* moleculares. # # Como exemplo, vamos utilizar os MACCS *fingerprints*, que consistem em uma lista contendo 166 valores (bits) que podem conter os números 0 ou 1. O valor 0 indica a ausência de uma sub-estrutura, e o valor 1 indica sua ausência. Por exemplo, o 42° bit de um MACCS *fingerprint* corresponde à presença ou ausência de um átomo de flúor: se a estrutura contém flúor, o valor desse bit será 1, caso contrário, será 0. As definições de cada bit podem ser encontradas [neste link](https://github.com/rdkit/rdkit/blob/master/rdkit/Chem/MACCSkeys.py). # # Após calcular os *fingerprints* para todas as estruturas, usaremos o valor do coeficiente de Tanimoto igual a 0.9 como valor de corte para similaridade. Em resumo, o coeficiente de Tanimoto compara dois *fingerprints* moleculares e retorna um valor entre 0 e 1; quanto mais próximo de 1, maior a similaridade. 
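#
# As a quick illustration of the similarity measure described above, the cell below
# (a minimal sketch, not part of the original analysis) computes the Tanimoto coefficient
# between the MACCS fingerprints of two arbitrarily chosen small molecules.

# +
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

# Two illustrative molecules (the SMILES are example inputs, not compounds from the ChEMBL query)
mol_a = Chem.MolFromSmiles("CCO")   # ethanol
mol_b = Chem.MolFromSmiles("CCN")   # ethylamine

fp_a = MACCSkeys.GenMACCSKeys(mol_a)
fp_b = MACCSkeys.GenMACCSKeys(mol_b)

# Tanimoto similarity between the two bit vectors: a value in [0, 1], closer to 1 means more similar
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))
# -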
Assim, pares de compostos contendo um ativo e um inativo, cuja similaridade (coeficiente de Tanimoto) seja maior que 0.9 serão considerados *activity cliffs*. # + from rdkit.Chem import AllChem from rdkit.Chem import MACCSkeys # Criar coluna com RDKit Mol df["rdkit_mol"] = df["SMILES"].apply(Chem.MolFromSmiles) # Criar coluna com MACCS fingerprints df["Morgan"] = df["rdkit_mol"].apply(lambda mol: AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)) # df["MACCS"] = df["rdkit_mol"].apply(MACCSkeys.GenMACCSKeys) # + # Calcular activity cliffs cliffs = [] fps = df["Morgan"].to_list() for i in range(len(fps)): for j in range(i+1, len(fps)): sim = DataStructs.FingerprintSimilarity(fps[i], fps[j], metric=DataStructs.TanimotoSimilarity) if (sim > 0.9) and (df.loc[i]["Active"] != df.loc[j]["Active"]): # Append the indexes of the pair in order (active, inactive) if df.loc[i]["Active"]: cliffs.append((i,j)) else: cliffs.append((j,i)) # - # Criar listas com os RDKit Mol de cada cliff e legendas contendo o índice e o valor da atividade ms = [] leg = [] for item in cliffs: ms.append(df.loc[item[0]]["rdkit_mol"]) leg.append(', '.join((str(item[0]), df.loc[item[0]]["pChEMBL"]))) ms.append(df.loc[item[1]]["rdkit_mol"]) leg.append(', '.join((str(item[1]), df.loc[item[1]]["pChEMBL"]))) # Quantos cliffs foram encontrados len(cliffs) img = Draw.MolsToGridImage(ms, molsPerRow=2, subImgSize=(500, 300), legends=leg) img # Vemos que muitos *activity cliffs* correspondem a isômeros ou a pares moleculares com diferença de um grupo, como um anel aromático. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This extension serves a narrow use case for writing, compiling, error-checking and validating stan models within jupyter notebook. I am not a comfortable R user and STAN currently leans towards R. I wrote this simple script as I wanted an uninterruped workflow for data manipulation, stan model creation and analysis all in jupyter notebook. Once the codemirror support is merged in, sysntax highlighting for stan code could also be enabled. Since few people were asking me for this, I have packaged this as a extension. # # Tested to work on Linux/Mac. Python versions 2.7/3.6 # ## Installation # `pip install git+https://github.com/Arvinds-ds/stanmagic.git` # ## Testing %%stan magic import numpy as np from collections import OrderedDict import pystan # ### Load stanmagic extension # %load_ext stanmagic # #### Optional: Using stanc compiler instead of pystan.stanc compiler # # To get cmdstan installed:- # # 1. Downlad latest cmdstan-xxxx.tar.gz https://github.com/stan-dev/cmdstan/releases # 2. extract to # 3. `cd ` # 4. `make` # 3. `make build` # 4. export PATH=$PATH:/bin/stanc # # # If the compiler is not in your path or you don't want to edit your PATH, you may have to pass --stanc [compiler_path] to %%stan. See below # ### Generate test data X = np.random.randn(100,3) beta = np.array([0.1,0.2,0.3]) alpha = 4 sigma = 1.7 y = np.dot(X,beta) + alpha + np.random.randn(100)*sigma N=100 K=3 data = OrderedDict({'X':X, 'y': y, 'N':N, 'K': K}) # ### 1. %%stan -f [stan_file_name] # # Saves the cell code to a file specified in [stan_file_name]. 
The file name can also be accessed in __`_stan_model.model_file`__ generated in local namespace # + # %%stan -f test.stan data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - _stan_model model = pystan.StanModel(file=_stan_model.model_file) model.sampling(data=data) # ### 2. %%stan -v model_test # Saves the cell code to a code string. The `-v` specifies the object name where the output object is saved. # The model code can be accessed in __`model_test.model_code`__ generated in local namespace # + # %%stan -v model_test data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - model_test model = pystan.StanModel(model_code=model_test.model_code) # ### 3. %%stan -f test3.stan -v test3_model # + # %%stan -f test3.stan -v test3_model data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - test3_model model = pystan.StanModel(file=test3_model.model_file) # ### 3. %%stan -f [stan_file_name] --save_only # # Saves the cell code to a file specified in [stan_file_name]. Skips compile step # + # %%stan -f test1.stan --save_only data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - model = pystan.StanModel(file='test1.stan') # ### 4. %%stan # # Saves the cell code to a code string. The code string can be accessed via __`_stan_model.model_code`__ # + # %%stan data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - _stan_model.model_code model = pystan.StanModel(model_code=_stan_model.model_code) model.sampling(data=data) # ### 5. 
%%stan --stanc pystan # + # %%stan --stanc pystan data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - model = pystan.StanModel(model_code=_stan_model.model_code) # ### 6. %%stan --stanc "~/Downloads/cmdstan-2.16.0/bin/stanc" # + # %%stan --stanc "~/Downloads/cmdstan-2.16.0/bin/stanc" data { int N; int K; matrix[N,K] X; vector[N] y; } parameters{ real alpha; ordered[K] beta; real sigma; } model { alpha ~ normal(0,10); beta[1] ~ normal(.1,.1); beta[2] ~ normal(.2,.1); beta[3] ~ normal(.3,.1); sigma ~ exponential(1); for (n in 1:N) { y[n] ~ normal(X[n]*beta + alpha, sigma); } } generated quantities { vector[N] y_rep; for (n in 1:N) { y_rep[n] = normal_rng(X[n]*beta + alpha, sigma); } } # - model = pystan.StanModel(model_code=_stan_model.model_code) # ### 7. %%stan -f [stan_file_name] -o [cpp_file_name] # Saves the cell code to a file specified in [stan_file_name] and outputs the compiled cpp file to the file name specified by [cpp_file_name] # # ### 8. %% stan -f [stan_file_name] --allow_undefined # passes the --allow_undefined argument to stanc compiler # ### 9.%%stan -f [stan_file_name] --stanc [stanc_compiler_path] # Saves the cell code to a file specified in [stan_file_name] and compiles using the stan compiler specified in [stanc_compiler]. By default, it uses stanc compiler in your path. If your path does not have the stanc compiler, use this option (e.g %%stan binom.stan --stanc "~/cmdstan-2.16.0/bin/stanc") assert True # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # In this notebook I'll implement the metropolis-hastings algorithm for the Ising model. We'll start by considering only one dimension (a chain of atoms). We will represent our system using an array whose elements are either -1 or +1 depending on the spin of the atom. We will also consider periodic boundary conditions - that is, the first atom is the neighbour of the last atom. In effect, we are simulating a loop of atoms. # # Let's start by writing a function to calculate the energy of a state. Note that I do it in a slightly tricky way using array indexing. This is faster than a for loop, which is helpful because we will have to evaluate the energy many times. # + import numpy as np import matplotlib.pyplot as plt from numpy.random import random def getEnergy_1D_loopversion(state, B, J, mu): energy = 0 for i in range(len(state)): # If the atom is the first in the chain if i==0: # Contribution from left neighbour. Left neighbour is last atom in the chain energy = energy - J*state[i]*state[-1] # Contribution from right neighbour energy = energy - J*state[i]*state[i+1] # Contribution from external field energy = energy - mu*B*state[i] # If the atom is the last in the chain elif i==len(state)-1: # Contribution from left neighbour. energy = energy - J*state[i]*state[i-1] # Contribution from right neighbour. 
Right neighbour is the first in the chain energy = energy - J*state[i]*state[0] # Contribution from external field energy = energy - mu*B*state[i] else: # Contribution from left neighbour energy = energy - J*state[i]*state[i-1] # Contribution from right neighbour energy = energy - J*state[i]*state[i+1] # Contribution from external field energy = energy - mu*B*state[i] return energy def getEnergy_1D(state, B, J, mu): energy = 0 # left neighbours energy = energy - J * (state[1:]*state[0:-1]).sum() # Deal with left neighbour of first spin energy = energy - J*state[0]*state[-1] # right neightbours energy = energy - J * (state[0:-1]*state[1:]).sum() # Deal with right neighbour of last spin energy = energy - J*state[-1]*state[0] # magnetic field energy = energy - (mu*B*state).sum() return energy # - # Let's do some quick checks: both methods should give the same value, and you can check these two examples by hand. # + state = np.array([1, 1, 1, 1]) print(getEnergy_1D_loopversion(state, 0, 1, 1)) print(getEnergy_1D(state, 0, 1, 1)) state = np.array([1, 1, -1, -1]) print(getEnergy_1D_loopversion(state, 0, 1, 1)) print(getEnergy_1D(state, 0, 1, 1)) # - # As an aside, we can time both verions of the function. You can see that it we have many atoms, the version that uses indexing is much faster. from timeit import timeit state = np.ones(10000) tNoLoop = timeit('getEnergy_1D(state, 0, 1, 1)', number=100, globals=globals()) tLoop = timeit('getEnergy_1D_loopversion(state, 0, 1, 1)', number=100, globals=globals()) print('Loop version: %.2e s, indexing version: %.2e s. Indexing version is %.2f times faster than \ loop version'%(tLoop, tNoLoop, tLoop/tNoLoop)) # Now let's write a function that decides (randomly) whether a change from `state1` to `state2` should be accepted or rejected. def acceptOrReject(E1, E2, beta): # Note we are defining beta = 1/kT. You should notice from the lesson and notes that every time either k or T appear in # the equations, they always appear in the combination 1/kT. deltaE = E2-E1 if deltaE < 0: return True else: # This is the relative probability of state 2. We need to return True with probability R. Note 0R. x = np.random.random() if x<=R: return True else: return False # Now we can write a function that does implements the MH algorithm. It will return: an array of size N_steps representing the energy at each step, an array `states` of size N_steps * N_spins representing the state of the system at each timestep, and an array `finalState` representing the final state of the system # + def do_MH(initialState, N_steps, beta, B=0, J=1, mu=1): N_spins = len(initialState) energies = np.zeros(N_steps) states = np.zeros((N_steps, N_spins)) currentState = initialState for i in range(N_steps): states[i, :] = currentState currentEnergy = getEnergy_1D(currentState, B, J, mu) energies[i] = currentEnergy # To get a trial new state, randomly choose a spin, and then flip it # Choose a spin spinToFlip = np.random.randint(0, N_spins) # flip it currentState[spinToFlip] *= -1 #Compute the new energy eTrial = getEnergy_1D(currentState, B, J, mu) #accept the change or reject it acceptOrRejectBoolean = acceptOrReject(currentEnergy, eTrial, beta) # if we reject it, we should flip the spin back. If we accept it, we don't need to do anything if not acceptOrRejectBoolean: currentState[spinToFlip] *= -1 finalState = currentState return finalState, states, energies # - # Let's try it! 
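#
# For reference, a self-contained sketch of the acceptance step, assuming the standard
# Metropolis ratio $R = e^{-\beta \Delta E}$ (so $0 < R \leq 1$ whenever $\Delta E \geq 0$);
# the function name here is illustrative, not a replacement for the code above.

# +
import numpy as np

def metropolis_accept(E1, E2, beta):
    # Always accept a move that lowers the energy
    deltaE = E2 - E1
    if deltaE < 0:
        return True
    # Otherwise accept with probability R = exp(-beta*deltaE)
    R = np.exp(-beta * deltaE)
    return np.random.random() <= R
# -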
# # Experiment with the number of spins and magnetic field strength (but don't change `N_steps`- it should always be `10*N_spins`). You should notice that the energy settles down very quickly. When the temperature is higher (lower $\beta$), the energy takes longer to settle down. # # Note that you can't directly enter the temperature in kelvin because I'm not using SI units for $J$ and $\mu$. # # Also note that the magnetisation is defined to be $M = \sum_{i=1}^{N_\text{spins}} \sigma_i$. If the spins are mostly parallel, $|M|$ will be large. If they are not, $|M|$ will be small. # + N_spins = 1000 N_steps = 20*N_spins J = 0.2 mu = 0.33 # Note higher temperature means lower beta beta = 0.2 B = 0.5 # We can start in any configuration because we need to take enough steps that the initial configuration doesn't matter. # Let's start with a random initial configuration initialState = np.ones(N_spins, dtype='int32') for i in range(len(initialState)): # randomly flip each spin to create random initial state if np.random.random() > 0.5: initialState[i] = -initialState[i] finalState, states, energies = do_MH(initialState, N_steps, beta, B, mu=mu, J=J) magnetisation = states.sum(axis=1) plt.figure() ax = plt.gca() l1, = ax.plot(np.arange(N_steps), energies, 'x') plt.xlabel('Step') plt.ylabel('Energy') ax = plt.gca() ax2 = ax.twinx() l2, = ax2.plot(np.arange(N_steps), magnetisation, 'r.') ax2.set_ylabel('Magnetisation') plt.legend((l1, l2), ('Energy', 'Magnetisation')) plt.show() plt.matshow(states.T) ax = plt.gca() ax.set_aspect(1.5) plt.xlabel('Step') plt.yticks([]) plt.title('Spins over time') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=true editable=true # Load required libraries import requests import numpy as np import os from Magics import toolbox as magics from ipywidgets import interact import ipywidgets as widgets # + deletable=true editable=true def graph (lat, lon, param, color, style) : url_fmt = 'http://earthserver.ecmwf.int/rasdaman/ows?service=WCS&version=2.0.1' \ '&request=ProcessCoverages' \ '&query=for c in (%s) return encode(c[Lat(%s), Long(%s), ansi("%s":"%s")], "csv") ' url = url_fmt%(param, lat, lon, "2000-01-01T00:00:00+00:00","2005-12-31T00:00:00+00:00") #r= requests.get(url, #proxies={'http':None} #) #r.raise_for_status() # Store the requested data in a numpy array yy = np.array(eval(r.text.replace('{','[').replace('}',']'))) xx = range(len(yy)) return magics.graph(xx, yy, title = "Time series %s at %d/%d" % ( param , lat, lon), graph = { "graph_line_colour" : color, "graph_line_style" : style }, ) interact(graph, lon=widgets.IntSlider(min=-180,max=180,step=1,value=10,continuous_update=False), lat=widgets.IntSlider(min=-90,max=90,step=1,value=10,continuous_update=False), param = widgets.Dropdown( options=['temp2m', 'precipitation', ], value='temp2m', ), color = widgets.Dropdown( options=[ "ecmwf_blue", 'navy', 'evergreen', ], value='ecmwf_blue',), style = widgets.Dropdown( options=[ "solid", 'dash', 'dot', ], value='solid',) ) # + deletable=true editable=true # + deletable=true editable=true # + deletable=true editable=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # LeNet 
Architeture # # Paper= http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf # import numpy as np import torch import torch.nn as nn # + # The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection # Run this script to enable the datasets download # Reference: https://github.com/pytorch/vision/issues/1938 from six.moves import urllib opener = urllib.request.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib.request.install_opener(opener) # + from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # percentage of training set to use as validation valid_size = 0.2 # convert data to torch.FloatTensor transform = transforms.ToTensor() # choose the training and test datasets train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) # obtain training indices that will be used for validation num_train = len(train_data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # - # Check the impage input volume shape input_shape = next(iter(train_loader))[0].shape input_shape # (W - F + 2P) / S + 1 out_vol = (28 - 5 + 2*2) / 1 + 1 # + class LeNet(nn.Module): def __init__(self, in_channels: int = 1, n_classes: int = 10): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)) self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=(5, 5), stride=(1, 1), padding=(0, 0)) self.conv3 = nn.Conv2d(in_channels=16, out_channels=120, kernel_size=(5, 5), stride=(1, 1), padding=(0, 0)) self.relu = nn.ReLU() self.pool = nn.AvgPool2d(kernel_size=(2, 2), stride=(2, 2)) self.fc1 = nn.Linear(120, 84) self.fc2 = nn.Linear(84, n_classes) def forward(self, x): x = self.relu(self.conv1(x)) x = self.pool(x) x = self.relu(self.conv2(x)) x = self.pool(x) x = self.relu(self.conv3(x)) x = x.view(x.size(0), -1) # batch_size x 120 x 1 x 1 -> 120 x 1 x = self.relu(self.fc1(x)) x = self.fc2(x) return x model = LeNet() model.parameters # + # specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer (stochastic gradient descent) and learning rate = 0.01 optimizer = torch.optim.SGD(model.parameters(), lr=0.01) # + # number of epochs to train the model n_epochs = 5 # initialize tracker for minimum validation loss valid_loss_min = np.Inf # set initial "min" to infinity for epoch in range(n_epochs): # monitor training loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### model.train() # prep model for training for data, target in train_loader: # 
clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item() ###################### # validate the model # ###################### model.eval() # prep model for evaluation for data, target in valid_loader: # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # update running validation loss valid_loss += loss.item() # print training/validation statistics # calculate average loss over an epoch train_loss = train_loss/len(train_loader) valid_loss = valid_loss/len(valid_loader) print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch+1, train_loss, valid_loss )) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(model.state_dict(), 'model.pt') valid_loss_min = valid_loss # - model.load_state_dict(torch.load('model.pt')) # + # initialize lists to monitor test loss and accuracy test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # prep model for evaluation for data, target in test_loader: # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) # compare predictions to true label correct = np.squeeze(pred.eq(target.data.view_as(pred))) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # calculate and print avg test loss test_loss = test_loss/len(test_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(10): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( str(i), 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. 
* np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total))) # + import matplotlib.pyplot as plt # obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds = torch.max(output, 1) # prep images for display images = images.numpy() # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, int(20/2), idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())), color=("green" if preds[idx]==labels[idx] else "red")) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Learning Together - Data Analysis # $e^{i\pi} + 1 = 0$ # Sekarang kita akan mencoba mengolah data yang ada dengan menggunakan data Kaggle Project yang bisa diakses pada pranala berikut: # # http://www.kaggle.com/c/titanic-gettingStarted # # Silakan teman-teman download data _train.csv_ # # _p.s teman-teman bisa buat akun di Kaggle Project dulu untuk akses datanya_ import pandas as pd from pandas import Series, DataFrame # + #Mengambil data train.csv dan menjadikannya bagian dari DataFrame data_df = pd.read_csv('train.csv') #menampilkan 10 data awal yang berhasil dibaca data_df.head(10) # - #membaca general information dari dataset data_df.info() # Dari data diatas, kita dapat mengetahui beberapa informasi umum seperti jumlah penumpang, umur, dll. Namun, perlu kita perhatikan juga bahwa terdapat satu kolom yang sangat menonjol, yaitu Cabin. Kolom Cabin hanya memiliki 204 data non-null sehingga hal ini harus diperhatikan kedepannya. # # Untuk sekarang, mari kita olah dan menggali beberapa informasi lainnya dari data train.csv ini import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline sns.catplot('Sex', kind="count", data=data_df) sns.catplot('Sex', kind="count", hue="Survived", data=data_df) #kita bisa mengganti class attribut dari chart ini sesuai dengan apa yang ini kita perlihatkan, seperti berikut. sns.catplot('Survived', kind="count", hue="Sex", data=data_df) def gender_child(pas): age,sex = pas if age < 20: return "Child" else: return sex data_df['person'] = data_df[['Age', 'Sex']].apply(gender_child,axis=1) data_df[1:11] # #### Data Interpretation # # Dari data diatas, kita bisa lihat kolom sebelah kanan pojok terdapat kolom baru dengan kategori nama 'person' yang berisikan informasi yang ingin kita ketahui tadi (male, female, atau child). sns.catplot('Survived', kind="count", hue="person", data=data_df) # Dan pada akhirnya kita berhasil memisahkan data-data penumpang yang selamat berdasarkan 3 kelas tadi, yaitu male, female, dan child. 
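#
# As a quick numeric companion to the plot above (a minimal sketch, not part of the original
# notebook), the survival rate within each 'person' category can be tabulated directly:

# +
# Fraction of survivors per category, using data_df and the 'person' column built above
print(pd.crosstab(data_df['person'], data_df['Survived'], normalize='index'))
# -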
data_df['Age'].hist(bins=70) data_df['Age'].mean() data_df['person'].value_counts() # + gambar = sns.FacetGrid(data_df, hue='Sex', aspect=4) gambar.map(sns.kdeplot,'Age', shade=False) oldest = data_df['Age'].max() gambar.set(xlim=(0,oldest)) gambar.add_legend() # + gambar = sns.FacetGrid(data_df, hue='person', aspect=4) gambar.map(sns.kdeplot,'Age', shade=True, alpha=0.45) oldest = data_df['Age'].max() gambar.set(xlim=(0,oldest)) gambar.add_legend() # - data_df.head(5) # + #Melihat banyaknya NaN pada kolom Cabin, hal tersebut dapat mengganggu kegiatan analisa data kita kedepannya. #Kita bisa menghapus NaN pada kolom Cabin yang mana data barunya akan kita attributkan ke Dek dek = data_df['Cabin'].dropna() dek.head(10) # + levels = [] for level in dek: levels.append(level[0]) dek_df = DataFrame(levels) dek_df.columns = ['Cabin'] sns.catplot('Cabin', data=dek_df, palette='winter_d', kind="count") # - # Teman-teman bisa perhatikan, terdapat satu keanehan pada plot diatas, yaitu munculnya nilai T value pada hasil plot. Untuk menghilangkan kategori tersebut dari plot, bisa kita gunakan syntax dibawah ini # + dek_df = dek_df[dek_df.Cabin != 'T'] sns.catplot('Cabin', data=dek_df, palette='summer', kind="count") # - # ### Passengers Analysis # # Ingin memeriksa data keseluruhan dari penumpang, apakah mereka memiliki siblings di kapal tersebut ataupun bersama dengan orang tua. Untuk itu, kita akan perhatikan variabel Parch() dan SibSp() data_df['Sendiri'] = data_df.SibSp + data_df.Parch data_df['Sendiri'].head() data_df['Sendiri'].loc[data_df['Sendiri'] > 0] = 'Se-Keluarga' data_df['Sendiri'].loc[data_df['Sendiri'] == 0] = 'Sendiri' data_df.head() # Bisa teman-teman lihat, pada kolom sebelah kanan terdapat atribut baru dengan nama 'Sendiri' dan memiliki nilai yang sudah ditentukan apakan Se-Keluarga atau Sendiri. sns.catplot('Sendiri', data=data_df, palette='winter_d', kind='count') data_df['Selamat'] = data_df.Survived.map({0: 'tidak', 1:'selamat'}) data_df['Selamat'].head() sns.catplot('Selamat', data=data_df, palette='winter_d', kind='count') sns.catplot('Age','Selamat', hue='person',data=data_df, kind='violin') sns.lmplot('Age','Survived',data=data_df) sns.lmplot('Age','Survived',hue='person',data=data_df, palette = 'winter_d') # + generasi = [10,30,60,70,80] sns.lmplot('Age','Survived',hue='Pclass',data=data_df, palette = 'winter_d', x_bins=generasi) # - sns.lmplot('Age','Survived',hue='Sex',data=data_df, palette = 'winter_d', x_bins=generasi) # Dari data dibawah ini dapat kita ketahui, ternyata peluang penumpang Titanic selamat semakin besar apabila anda adalah wanita berumur, sedangkan peluang untuk lelaki berumur semakin tua maka banyak yang tidak selamat. sns.lmplot('Age','Survived',hue='Sendiri',data=data_df, palette = 'winter_d', x_bins=generasi) # Ini untuk informasi korelasi apakah bersama keluarga atau tidak, dapat meningkatkan peluang selamat? Sila teman-teman baca dan pahami! 
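#
# The same question can be checked numerically (a minimal sketch, not part of the original
# notebook): the mean of 'Survived' per group is the survival rate for passengers travelling
# alone versus with family.

# +
print(data_df.groupby('Sendiri')['Survived'].mean())
# -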
# # # Happy Analyzing the Data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv("~/projects/us-education-datasets-unification-project/data/us-education-datasets-unification-project/states_all.csv") df_orig = df.copy() df.shape print('THERE ARE OUTLIERS PRESENT THAT NEED TO BE REMOVED') print('There are: ') print(df['PRIMARY_KEY'].duplicated(keep=False).sum(), 'duplicated PRIMARY_KEY s') print('Unique duplicated keys: \n', df[df['PRIMARY_KEY'].duplicated(keep=False)]['PRIMARY_KEY'].unique()) df[df['PRIMARY_KEY'].duplicated(keep=False)] df['avg_total_expenditure'] = df['TOTAL_EXPENDITURE'] / df['GRADES_ALL_G'] print('2 rows have unreasonably high avg_total_expenditure') df[df['avg_total_expenditure'] > 100] drop_rows= df[df['avg_total_expenditure'] > 100].index.to_list() print(drop_rows) # + plt.hist('avg_total_expenditure', bins=50, log=True, data=df); plt.show() # Drop the 2 outlier rows print('Before drop:', df.shape) df.drop(axis=0, labels= drop_rows, inplace=True) print('After drop:', df.shape) plt.hist('avg_total_expenditure', bins=50, log=True, data=df); plt.show() print('THERE ARE OUTLIERS PRESENT THAT NEED TO BE REMOVED') print('There are: ') print(df['PRIMARY_KEY'].duplicated(keep=False).sum(), 'duplicated PRIMARY_KEY s') print('Unique duplicated keys: \n', df[df['PRIMARY_KEY'].duplicated(keep=False)]['PRIMARY_KEY'].unique()) df[df['PRIMARY_KEY'].duplicated(keep=False)] # + hist_df = df[df['STATE']=='DISTRICT_OF_COLUMBIA'] plt.hist('GRADES_ALL_G', data=hist_df, log=False, bins=50); plt.show() print('2 rows in DISTRICT_OF_COLUMBIAs data have unreasonably small GRADES_ALL_G') # - print('These DISTRICT_OF_COLUMBIA duplicated keys have unreasonably small GRADES_ALL_G') print(df[ (df['STATE'] == 'DISTRICT_OF_COLUMBIA') & (df['GRADES_ALL_G'] < 40000)]['GRADES_ALL_G']) df[ (df['STATE'] == 'DISTRICT_OF_COLUMBIA') & (df['GRADES_ALL_G'] < 40000)] # + print('Drop these bad rows.') print('Before drop df.shape=', df.shape) drop_index= df[(df['STATE'] == 'DISTRICT_OF_COLUMBIA') & (df['GRADES_ALL_G'] < 40000)].index.to_list() print(drop_index) df.drop(axis=0, labels= drop_index, inplace=True) print('After drop df.shape=', df.shape) # + hist_df = df[df['STATE']=='DISTRICT_OF_COLUMBIA'] plt.hist('GRADES_ALL_G', data=hist_df, log=False, bins=50); plt.show() print('REMOVED: 2 rows in DISTRICT_OF_COLUMBIAs data HAD unreasonably small GRADES_ALL_G') # - print(df.shape, df_orig.shape) print('THERE ARE OUTLIERS PRESENT THAT NEED TO BE REMOVED') print('There are: ') print(df['PRIMARY_KEY'].duplicated(keep=False).sum(), 'duplicated PRIMARY_KEY s') print('Unique duplicated keys: \n', df[df['PRIMARY_KEY'].duplicated(keep=False)]['PRIMARY_KEY'].unique()) df[df['PRIMARY_KEY'].duplicated(keep=False)] # + print('The rows are identical except that one has NaN s in place of entries in some columns') print('Drop the row with more NaN s') drop_rows= df[(df['PRIMARY_KEY'] == '2010_DISTRICT_OF_COLUMBIA') & (df['GRADES_ALL_G'].isnull()) ].index.to_list() df[(df['PRIMARY_KEY'] == '2010_DISTRICT_OF_COLUMBIA') & (df['GRADES_ALL_G'].isnull()) ] # + print('Shape before: ', df.shape) df.drop(axis=0, labels=drop_rows, inplace=True) print('Shape after: ', df.shape) # - print('THERE ARE OUTLIERS PRESENT THAT NEED TO BE REMOVED') print('There are: ') 
print(df['PRIMARY_KEY'].duplicated(keep=False).sum(), 'duplicated PRIMARY_KEY s') print('Unique duplicated keys: \n', df[df['PRIMARY_KEY'].duplicated(keep=False)]['PRIMARY_KEY'].unique()) df[df['PRIMARY_KEY'].duplicated(keep=False)] print('Remove non-Original columns') print('Shape before: ', df.shape) df = df[df_orig.columns] print('Shape after: ', df.shape) print(len(df_orig) - len(df), ' rows removed') print('Original: ', df_orig.shape) print('Final: ', df.shape) df.to_csv("~/projects/us-education-datasets-unification-project/data/us-education-datasets-unification-project/states_all-cleaned.csv", \ index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="UCnEwKqXbEuF" from torch import nn from collections import OrderedDict import torch.nn.functional as F import torch from torch.utils.data import DataLoader import torchvision import random from torch.utils.data import Subset from matplotlib import pyplot as plt from torchsummary import summary from torchvision import transforms import progressbar as pb import numpy as np # + id="NsmTAzcudX01" SUM = lambda x,y : x+y # + id="__9hu3JkdUma" def check_equity(property,a,b): pa = getattr(a,property) pb = getattr(b,property) assert pa==pb, "Different {}: {}!={}".format(property,pa,pb) return pa # + id="_dmaEbq1dO54" def module_unwrap(mod:nn.Module,recursive=False): children = OrderedDict() try: for name, module in mod.named_children(): if (recursive): recursive_call = module_unwrap(module,recursive=True) if (len(recursive_call)>0): for k,v in recursive_call.items(): children[name+"_"+k] = v else: children[name] = module else: children[name] = module except AttributeError: pass return children # + id="jvbILdHidK6b" class VGGBlock(nn.Module): def __init__(self, in_channels, out_channels,batch_norm=False): super().__init__() conv2_params = {'kernel_size': (3, 3), 'stride' : (1, 1), 'padding' : 1 } noop = lambda x : x self._batch_norm = batch_norm self.conv1 = nn.Conv2d(in_channels=in_channels,out_channels=out_channels , **conv2_params) #self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop self.bn1 = nn.GroupNorm(16, out_channels) if batch_norm else noop self.conv2 = nn.Conv2d(in_channels=out_channels,out_channels=out_channels, **conv2_params) #self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop self.bn2 = nn.GroupNorm(16, out_channels) if batch_norm else noop self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2)) @property def batch_norm(self): return self._batch_norm def forward(self,x): x = self.conv1(x) x = self.bn1(x) x = F.relu(x) x = self.conv2(x) x = self.bn2(x) x = F.relu(x) x = self.max_pooling(x) return x # + id="k4wZypnxbczs" class Classifier(nn.Module): def __init__(self,num_classes=1): super().__init__() self.classifier = nn.Sequential( nn.Linear(2048, 2048), nn.ReLU(True), nn.Dropout(p=0.5), nn.Linear(2048, 512), nn.ReLU(True), nn.Dropout(p=0.5), nn.Linear(512, num_classes) ) def forward(self,x): return self.classifier(x) # + id="6_XOTpHHbZOU" class VGG16(nn.Module): def __init__(self, input_size, batch_norm=False): super(VGG16, self).__init__() self.in_channels,self.in_width,self.in_height = input_size self.block_1 = VGGBlock(self.in_channels,64,batch_norm=batch_norm) self.block_2 = VGGBlock(64, 128,batch_norm=batch_norm) self.block_3 = VGGBlock(128, 256,batch_norm=batch_norm) self.block_4 = VGGBlock(256,512,batch_norm=batch_norm) 
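        # For 1x32x32 inputs, the four 2x2 max-pools shrink the feature maps
        # 32 -> 16 -> 8 -> 4 -> 2, so flattening the 512-channel output of block_4
        # yields 512 * 2 * 2 = 2048 features, matching the nn.Linear(2048, 2048)
        # at the start of the Classifier defined above.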
@property def input_size(self): return self.in_channels,self.in_width,self.in_height def forward(self, x): x = self.block_1(x) x = self.block_2(x) x = self.block_3(x) x = self.block_4(x) # x = self.avgpool(x) x = torch.flatten(x,1) return x # + id="4h0BLCSfbUI1" class CombinedLoss(nn.Module): def __init__(self, loss_a, loss_b, loss_combo, _lambda=1.0): super().__init__() self.loss_a = loss_a self.loss_b = loss_b self.loss_combo = loss_combo self.register_buffer('_lambda',torch.tensor(float(_lambda),dtype=torch.float32)) def forward(self,y_hat,y): return self.loss_a(y_hat[0],y[0]) + self.loss_b(y_hat[1],y[1]) + self._lambda * self.loss_combo(y_hat[2],torch.cat(y,0)) # + [markdown] id="ihl5WS4mftpp" # ---------------------------------------------------------------------- # + id="wx0fr4tneCwh" DO='TRAIN' # + id="aoTgU1HIfqHc" random.seed(47) # + id="LIXMqkSMfpBZ" combo_fn = SUM # + id="k1gqpu8ifniL" lambda_reg = 1 # + id="Ofj6g7uffaaO" def train(nets, loaders, optimizer, criterion, epochs=20, dev=None, save_param=False, model_name="valerio"): # try: nets = [n.to(dev) for n in nets] model_a = module_unwrap(nets[0], True) model_b = module_unwrap(nets[1], True) model_c = module_unwrap(nets[2], True) reg_loss = nn.MSELoss() criterion.to(dev) reg_loss.to(dev) # Initialize history history_loss = {"train": [], "val": [], "test": []} history_accuracy = {"train": [], "val": [], "test": []} # Store the best val accuracy best_val_accuracy = 0 # Process each epoch for epoch in range(epochs): # Initialize epoch variables sum_loss = {"train": 0, "val": 0, "test": 0} sum_accuracy = {"train": [0,0,0], "val": [0,0,0], "test": [0,0,0]} progbar = None # Process each split for split in ["train", "val", "test"]: if split == "train": for n in nets: n.train() widgets = [ ' [', pb.Timer(), '] ', pb.Bar(), ' [', pb.ETA(), '] ', pb.Variable('ta','[Train Acc: {formatted_value}]') ] progbar = pb.ProgressBar(max_value=len(loaders[split][0]),widgets=widgets,redirect_stdout=True) else: for n in nets: n.eval() # Process each batch for j,((input_a, labels_a),(input_b, labels_b)) in enumerate(zip(loaders[split][0],loaders[split][1])): input_a = input_a.to(dev) input_b = input_b.to(dev) labels_a = labels_a.float().to(dev) labels_b = labels_b.float().to(dev) inputs = torch.cat([input_a,input_b],axis=0) labels = torch.cat([labels_a, labels_b]) # Reset gradients optimizer.zero_grad() # Compute output features_a = nets[0](input_a) features_b = nets[1](input_b) features_c = nets[2](inputs) pred_a = torch.squeeze(nets[3](features_a)) pred_b = torch.squeeze(nets[3](features_b)) pred_c = torch.squeeze(nets[3](features_c)) loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) for n in model_a: layer_a = model_a[n] layer_b = model_b[n] layer_c = model_c[n] if (isinstance(layer_a,nn.Conv2d)): loss += lambda_reg * reg_loss(combo_fn(layer_a.weight,layer_b.weight),layer_c.weight) if (layer_a.bias is not None): loss += lambda_reg * reg_loss(combo_fn(layer_a.bias, layer_b.bias), layer_c.bias) # Update loss sum_loss[split] += loss.item() # Check parameter update if split == "train": # Compute gradients loss.backward() # Optimize optimizer.step() # Compute accuracy #https://discuss.pytorch.org/t/bcewithlogitsloss-and-model-accuracy-calculation/59293/ 2 pred_labels_a = (pred_a >= 0.0).long() # Binarize predictions to 0 and 1 pred_labels_b = (pred_b >= 0.0).long() # Binarize predictions to 0 and 1 pred_labels_c = (pred_c >= 0.0).long() # Binarize predictions to 0 and 1 batch_accuracy_a = (pred_labels_a == 
labels_a).sum().item() / len(labels_a) batch_accuracy_b = (pred_labels_b == labels_b).sum().item() / len(labels_b) batch_accuracy_c = (pred_labels_c == labels).sum().item() / len(labels) # Update accuracy sum_accuracy[split][0] += batch_accuracy_a sum_accuracy[split][1] += batch_accuracy_b sum_accuracy[split][2] += batch_accuracy_c if (split=='train'): progbar.update(j, ta=batch_accuracy_c) if (progbar is not None): progbar.finish() # Compute epoch loss/accuracy epoch_loss = {split: sum_loss[split] / len(loaders[split][0]) for split in ["train", "val", "test"]} epoch_accuracy = {split: [sum_accuracy[split][i] / len(loaders[split][0]) for i in range(len(sum_accuracy[split])) ] for split in ["train", "val", "test"]} # # Store params at the best validation accuracy # if save_param and epoch_accuracy["val"] > best_val_accuracy: # # torch.save(net.state_dict(), f"{net.__class__.__name__}_best_val.pth") # torch.save(net.state_dict(), f"{model_name}_best_val.pth") # best_val_accuracy = epoch_accuracy["val"] print(f"Epoch {epoch + 1}:") # Update history for split in ["train", "val", "test"]: history_loss[split].append(epoch_loss[split]) history_accuracy[split].append(epoch_accuracy[split]) # Print info print(f"\t{split}\tLoss: {epoch_loss[split]:0.5}\tVGG 1:{epoch_accuracy[split][0]:0.5}" f"\tVGG 2:{epoch_accuracy[split][1]:0.5}\tVGG *:{epoch_accuracy[split][2]:0.5}") if save_param: torch.save({'vgg_a':nets[0].state_dict(),'vgg_b':nets[1].state_dict(),'vgg_star':nets[2].state_dict(),'classifier':nets[3].state_dict()},f'{model_name}.pth') # + id="wWiTuCN1fN0g" def test(net,classifier, loader): net.to(dev) classifier.to(dev) net.eval() sum_accuracy = 0 # Process each batch for j, (input, labels) in enumerate(loader): input = input.to(dev) labels = labels.float().to(dev) features = net(input) pred = torch.squeeze(classifier(features)) # https://discuss.pytorch.org/t/bcewithlogitsloss-and-model-accuracy-calculation/59293/ 2 pred_labels = (pred >= 0.0).long() # Binarize predictions to 0 and 1 batch_accuracy = (pred_labels == labels).sum().item() / len(labels) # Update accuracy sum_accuracy += batch_accuracy epoch_accuracy = sum_accuracy / len(loader) print(f"Accuracy: {epoch_accuracy:0.5}") # + id="lDeV1W6me7Ej" def parse_dataset(dataset): dataset.targets = dataset.targets % 2 return dataset # + id="7iwwrQD3e5sL" root_dir = './' # + id="Zx7dBzfre2X9" rescale_data = transforms.Lambda(lambda x : x/255) # Compose transformations data_transform = transforms.Compose([ transforms.Resize(32), transforms.RandomHorizontalFlip(), transforms.ToTensor(), rescale_data, #transforms.Normalize((-0.7376), (0.5795)) ]) test_transform = transforms.Compose([ transforms.Resize(32), transforms.ToTensor(), rescale_data, #transforms.Normalize((0.1327), (0.2919)) ]) # + id="qBYFu2DUe0xU" # Load MNIST dataset with transforms train_set = torchvision.datasets.MNIST(root=root_dir, train=True, download=True, transform=data_transform) test_set = torchvision.datasets.MNIST(root=root_dir, train=False, download=True, transform=test_transform) # + id="iS7seoNnezX7" train_set = parse_dataset(train_set) test_set = parse_dataset(test_set) # + id="6OtJiXl2es--" train_idx = np.random.permutation(np.arange(len(train_set))) test_idx = np.arange(len(test_set)) val_frac = 0.1 n_val = int(len(train_idx) * val_frac) val_idx = train_idx[0:n_val] train_idx = train_idx[n_val:] h = len(train_idx)//2 train_set_a = Subset(train_set,train_idx[0:h]) train_set_b = Subset(train_set,train_idx[h:]) h = len(val_idx)//2 val_set_a = 
Subset(train_set,val_idx[0:h]) val_set_b = Subset(train_set,val_idx[h:]) h = len(test_idx)//2 test_set_a = Subset(test_set,test_idx[0:h]) test_set_b = Subset(test_set,test_idx[h:]) # + id="s5WqGkR0enhP" # Define loaders train_loader_a = DataLoader(train_set_a, batch_size=128, num_workers=0, shuffle=True, drop_last=True) val_loader_a = DataLoader(val_set_a, batch_size=128, num_workers=0, shuffle=False, drop_last=False) test_loader_a = DataLoader(test_set_a, batch_size=128, num_workers=0, shuffle=False, drop_last=False) train_loader_b = DataLoader(train_set_b, batch_size=128, num_workers=0, shuffle=True, drop_last=True) val_loader_b = DataLoader(val_set_b, batch_size=128, num_workers=0, shuffle=False, drop_last=False) test_loader_b = DataLoader(test_set_b, batch_size=128, num_workers=0, shuffle=False, drop_last=False) test_loader_all = DataLoader(test_set,batch_size=128, num_workers=0,shuffle=False,drop_last=False) # Define dictionary of loaders loaders = {"train": [train_loader_a,train_loader_b], "val": [val_loader_a,val_loader_b], "test": [test_loader_a,test_loader_b]} # + id="xAmsywnHelhX" model1 = VGG16((1,32,32),batch_norm=True) model2 = VGG16((1,32,32),batch_norm=True) model3 = VGG16((1,32,32),batch_norm=True) classifier = Classifier(num_classes=1) # + id="fy81iDz9eizI" nets = [model1,model2,model3,classifier] # + id="_QQW2uVLee6L" dev = torch.device('cuda') # + id="eXegV7s_efNq" parameters = set() # + id="Vdw15S30efV7" for n in nets: parameters |= set(n.parameters()) # + id="HNk7ro7ueder" optimizer = torch.optim.SGD(parameters, lr = 0.01) # Define a loss criterion = nn.BCEWithLogitsLoss()#,nn.BCEWithLogitsLoss(),nn.BCEWithLogitsLoss(),_lambda = 1) n_params = 0 # + colab={"base_uri": "https://localhost:8080/"} id="WRPmx_uUeYBk" outputId="34dadfb2-b0ca-4bd1-e8a2-7fbe9a70dade" if (DO=='TRAIN'): train(nets, loaders, optimizer, criterion, epochs=50, dev=dev,save_param=True) else: state_dicts = torch.load('model.pth') model1.load_state_dict(state_dicts['vgg_a']) #questi state_dict vengono dalla funzione di training model2.load_state_dict(state_dicts['vgg_b']) model3.load_state_dict(state_dicts['vgg_star']) classifier.load_state_dict(state_dicts['classifier']) test(model1,classifier,test_loader_all) test(model2, classifier, test_loader_all) test(model3, classifier, test_loader_all) summed_state_dict = OrderedDict() for key in state_dicts['vgg_star']: if key.find('conv') >=0: print(key) summed_state_dict[key] = combo_fn(state_dicts['vgg_a'][key],state_dicts['vgg_b'][key]) else: summed_state_dict[key] = state_dicts['vgg_star'][key] model3.load_state_dict(summed_state_dict) test(model3, classifier, test_loader_all) # + id="UZLTGjlxZozP" colab={"base_uri": "https://localhost:8080/"} outputId="74ca9ebc-e439-476b-d4e3-e174dfb0762d" DO = 'TEST' if (DO=='TRAIN'): train(nets, loaders, optimizer, criterion, epochs=50, dev=dev,save_param=True) else: state_dicts = torch.load('valerio.pth') model1.load_state_dict(state_dicts['vgg_a']) #questi state_dict vengono dalla funzione di training model2.load_state_dict(state_dicts['vgg_b']) model3.load_state_dict(state_dicts['vgg_star']) classifier.load_state_dict(state_dicts['classifier']) test(model1,classifier,test_loader_all) test(model2, classifier, test_loader_all) test(model3, classifier, test_loader_all) summed_state_dict = OrderedDict() for key in state_dicts['vgg_star']: if key.find('conv') >=0: print(key) summed_state_dict[key] = combo_fn(state_dicts['vgg_a'][key],state_dicts['vgg_b'][key]) else: summed_state_dict[key] = state_dicts['vgg_star'][key] 
model3.load_state_dict(summed_state_dict) test(model3, classifier, test_loader_all) # + colab={"base_uri": "https://localhost:8080/"} id="Hx7Mt6yesHIv" outputId="1e47a8f0-57f2-462a-ea2e-3264662e84a8" weights11 = list(model1.block_1.parameters()) weights11 # + colab={"base_uri": "https://localhost:8080/"} id="cDfLTr-HtBi-" outputId="b8a640b7-b613-4253-d4fe-e2ddd78aa75f" weights12 = list(model1.block_2.parameters()) weights12 # + colab={"base_uri": "https://localhost:8080/"} id="QUbnsjRetGWt" outputId="8fc6a05e-7f88-4f0a-cb33-c58d0ac4b4fe" weights13 = list(model1.block_3.parameters()) weights13 # + colab={"base_uri": "https://localhost:8080/"} id="AOIa4hzttI0g" outputId="ac154efb-c9cb-4c2c-a33f-df7b8c89507c" weights14 = list(model1.block_4.parameters()) weights14 # + colab={"base_uri": "https://localhost:8080/"} id="firt-edqsOVz" outputId="b55ba2d8-8349-49b3-e692-b9c9d2544d32" weights21 = list(model2.block_1.parameters()) weights21 # + colab={"base_uri": "https://localhost:8080/"} id="htFLeiH_tQ99" outputId="4f9969fe-c4de-4444-bae5-199a7fa3e16d" weights22 = list(model2.block_2.parameters()) weights22 # + colab={"base_uri": "https://localhost:8080/"} id="mOQXY6P7tS_T" outputId="7a4847ad-3adc-4b71-d4ec-748f25ca801a" weights23 = list(model2.block_3.parameters()) weights23 # + colab={"base_uri": "https://localhost:8080/"} id="uVKCx5zEtU9C" outputId="d1b02cca-23b8-4573-b3e8-160108ac173c" weights24 = list(model2.block_4.parameters()) weights24 # + colab={"base_uri": "https://localhost:8080/"} id="yRJpz35PtYXp" outputId="4b4154fb-3870-495c-86f9-b7be485f041b" weights31 = list(model3.block_1.parameters()) weights31 # + colab={"base_uri": "https://localhost:8080/"} id="pxx-hG7ktbbr" outputId="8efae830-4f73-49a8-94fa-6cf16023e4e0" weights32 = list(model3.block_2.parameters()) weights32 # + id="m2_v6f0wtdiw" colab={"base_uri": "https://localhost:8080/"} outputId="cb592e87-a332-49aa-a27a-01aee9e304b6" weights33 = list(model3.block_3.parameters()) weights33 # + id="1tceyqJItfXM" colab={"base_uri": "https://localhost:8080/"} outputId="4271cec0-646b-4a2d-a12a-466e25634b68" weights34 = list(model3.block_4.parameters()) weights34 # + id="4exExPfCth38" # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="wRJ7gX-in_BL" outputId="f3163007-487a-43c4-87f7-d9251e987cc2" # !pip install --upgrade progressbar2 # + id="8OkFGrWghD-D" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="AlUIBNeC353j" # # Download the Fruit dataset from Kaggle # + id="TZ9RhF955QdM" # !pip install -q kaggle import os os.environ['KAGGLE_USERNAME']='username' os.environ['KAGGLE_KEY']='kaggle_key' # Flower dataset # !kaggle datasets download -d alxmamaev/flowers-recognition # !unzip flowers-recognition.zip # Fruit dataset # !kaggle datasets download -d moltean/fruits # !unzip fruits.zip # + [markdown] id="tkXeHow_8hEv" # # Define labels and number of samples # + id="Y089GTy_8gjY" # labels labels=['sunflower', 'dandelion', 'tulip'] samples=700 labels_unknown=['Walnut', 'Blueberry', 'Watermelon'] # path base_path = './flowers' # path unknown base_unknown_path = './fruits-360/Training' # output dir out_path = './edge_dataset' # + [markdown] id="6LwNAiJs_CfR" # # Generate the dataset # + colab={"base_uri": "https://localhost:8080/"} id="3J0qiTAi_IYI" outputId="03905809-0641-4651-cf11-bc9b77160018" 
import random import shutil # Generate directory structure if not os.path.exists(out_path): print ("Create dir " + out_path) os.mkdir(out_path) for label in labels: dest = out_path + '/' + label.replace(" ", "_") if not os.path.exists(dest): print("Create dest dir ", dest) os.mkdir(dest) # random random.seed(); for label in labels: print("Selected word ["+label+"]") file_list = [] for filename in os.listdir(base_path + '/' + label): # print("Filename: ", filename) file_list.append(filename) random.shuffle(file_list) for i in range(samples): src = base_path + '/' + label + '/' + file_list[i] dest = out_path + '/' + label.replace(" ", "_") + '/' + label + str(i) + '.jpg' # print ("Copy from " + src + " to " + dest) shutil.copyfile(src, dest) # count files print("Image numbers ", len(os.listdir(out_path + '/' + label.replace(" ", "_")))) # + [markdown] id="VVQs-wJWzgPx" # # Generate unknow label dataset # + colab={"base_uri": "https://localhost:8080/"} id="dsp2pCR-zq6s" outputId="838a6d7d-aec2-463b-ad65-496402e17ca0" samples_per_ul = samples // len(labels_unknown) print("Sample per unknown image per label ", samples_per_ul) dest_unknown_path = out_path + '/unknown/'; # Generate directory structure if not os.path.exists(dest_unknown_path): print ("Create dir " + dest_unknown_path) os.mkdir(dest_unknown_path) for ul in labels_unknown: print("Selected word ["+ul+"]") file_list = [] for filename in os.listdir(base_unknown_path + '/' + ul): # print("Filename: ", filename) file_list.append(filename) print("File list size per " + ul + ", ", len(file_list)) random.shuffle(file_list) for i in range(0, samples_per_ul): src = base_unknown_path + '/' + ul + '/' + file_list[i] dest = dest_unknown_path + '/' + ul + str(i) + '.jpg' shutil.copyfile(src, dest) print("Image numbers ", len(os.listdir(out_path + '/unknown'))) # + [markdown] id="mVfHzl4KFIjT" # # Edge Impluse Dataset Upload # + [markdown] id="dAjc4gKlGXON" # ## Install Edge Impulse CLI # + id="UHDkRAAuFO4-" # !npm install -g --unsafe-perm edge-impulse-cli # + [markdown] id="8ngR91wIGi5M" # ## Upload dataset # + id="3xIgp3G8Goq4" # API Key api_key = '' labels.append('unknown') for label in labels: label = label.replace(" ", "_") sample_dir = out_path + '/' + label + '/*.jpg' file_list = [] for filename in os.listdir( out_path + '/' + label + '/'): _, ext = os.path.splitext(filename) if (ext.lower() == '.jpg'): file_list.append(filename) print("Uploading files from ", sample_dir) print("N. Samples ", len(file_list)) cmd = 'edge-impulse-uploader --api-key ' + api_key + ' --label ' + label + ' ' + sample_dir os.system(cmd) print("Done!"); # + [markdown] id="CC_Yvdp2Egl3" # # Delete all dataset # + id="qEtpfC7AEl8Z" # !rm -r ./edge_dataset # + id="R3BLzbHkErrh" # !rm -rf ./fruits-360/ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tuple and its methods tuple = (1,2,2,3,4,4,4,5,5,6,7,8) tuple # + # Returns the number of times a specified value occurs in a tuple tuple.count(4) # + # Searches the tuple for a specified value and returns the position of where it was found tuple.index(8) # + # Print the number of items in the tuple: len(tuple) # - tuple2 = (1,2,3,4,5,6) tuple2 # + # To join two or more tuples tuple3 = tuple + tuple2 # - tuple3 # + # You cannot remove items in a tuple. 
del tuple3 # - tuple3 # # The End # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: proyectovenv # language: python # name: proyectovenv # --- # **Nombre: ** # # **Carné: 21001127** # # **Ciencia de Datos en Python** # # **Sección U** # # **Proyecto Final** # ### Importando librerías necesarias import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression plt.rcParams.update({'figure.max_open_warning': 0}) # ### Importando datos del proyecto # + #Importando los datos del archivo binario de Numpy data = np.load("proyecto_training_data.npy") #Realizando una copia del slicing de los datos originales training = np.copy(data[:int(len(data)*0.8)]) validation = np.copy(data[-int(len(data)*0.2):]) # - # ### Análisis exploratorio de datos # + #Creando un data frame para los datos de entrenamiento con los nombres de las columnas utilizando pandas columns = ["SalePrice", "OverallQual", "1stFlrSF", "TotRmsAbvGrd", "YearBuilt", "LotFrontage"] df_training = pd.DataFrame(training, columns=columns) #Creando un data frame con los datos de validacion con los nombres de las columnas utilizando pandas df_validation = pd.DataFrame(training, columns=columns) #Aplicando la función describe sobre df_training, para mostrar la media, desviación estandar, valor minimo y valor maximo df_training.describe(percentiles=[]) # - # Encontrando el rango de las variables print("Rango de SalePrice: \t", np.ptp(df_training["SalePrice"])) print("Rango de OverallQual: \t", np.ptp(df_training["OverallQual"])) print("Rango de 1stFlrSF: \t", np.ptp(df_training["1stFlrSF"])) print("Rango de TotRmsAbvGrd: \t", np.ptp(df_training["TotRmsAbvGrd"])) print("Rango de YearBuilt: \t", np.ptp(df_training["YearBuilt"])) print("Rango de LotFrontage: \t", np.ptp(df_training["LotFrontage"].dropna())) # Ciclo para imprimir los histogramas de cada variable for x in df_training.columns: plt.figure() # Se utiliza histplot en lugar de la función distplot debido a que ya esta marcada como deprecated sns.histplot(df_training[x], kde=True).set_title("Histograma de "+ x) # Calculo de la correlación para 2 variables (SalePrice y OverallQual) corr_SalePrice_OverallQual = np.corrcoef(df_training['SalePrice'], df_training['OverallQual'])[0,1] # Se imprime el resultado corr_SalePrice_OverallQual # Calculo de la correlación para todas las variables correlation = df_training.corr() correlation # Ciclo para generar todas las scatter plots de las variables con su correlación for x in df_training.columns: for y in df_training.columns: if x != y: plt.figure() plt.title("{} vs {} (Corr = {:.4f})".format(x,y,correlation[x][y])) plt.scatter(df_training[x], df_training[y]) # ### Modelos de regresión lineal # Función para el modelo manual def modelo(x, y, epochs, imprimir_error_cada, lr): # Valores iniciales de m y b mb = np.array([0,0]) # Vector que almacenara los errores por cada epoch errores = np.array([]) # Se crea la estructura de datos para almacenar los resultados de las iteraciones, se elige un diccionario # con la llave igual al numero de iteracion y el valor es el vector de m y b calculados iteraciones = dict() # Se realiza la matriz solicitada a partir de x y una columna de 1 x_1 = np.column_stack((x,np.ones(len(x)))) # Se realizan las iteraciones indicadas for i in range(epochs): # Se calcula y_hat y_hat = np.dot(x_1, mb) # Se calcula el error error = 
np.mean(np.power(y-y_hat, 2))*0.5 # Se almacena el error en un vector errores = np.append(errores, error) # Se comprueba si corresponde imprimir el error basado en el numero de iteracion if (i+1)%imprimir_error_cada == 0: print(error) # Se crea un vector llamado y_1 con el valor igual a la diferencia entre y_hat y y y_1 = y_hat-y # Se calcula los gradientes gradientes = np.mean(x_1 * y_1.reshape(len(y_1),1), axis=0) # Se actualizan los parametros m y b mb = mb - (lr*gradientes) # Se almacenan los datos obtenidos de la iteracion iteraciones[i+1] = mb # Se retorna los valores de las iteraciones en un diccionario y los valores de los errores en un vector return iteraciones, errores # Datos de prueba para entrenar el modelo x = np.array([65,80,68]) y = np.array([208.5,181.5,223.5]) epochs = 10 imprimir = 10 lr = 0.0001 data, errores = modelo(x,y,epochs,imprimir,lr) # Función para graficar el cambio del error basado en las iteraciones (epochs) def cambio_error(errores): fig, ax = plt.subplots() ax.set_xlabel("Iteraciones") ax.set_ylabel("Error") ax.set_title("Cambio del error en el tiempo") ax.plot(np.arange(1,len(errores)+1),errores, label="Errores calculados") ax.legend() cambio_error(errores) # Función para visualizar la evolución del modelo def evolucion_modelo(data, n, x_training, y_training): fig, ax = plt.subplots() x = np.arange(1,x_training.max()+1) x_1 = np.column_stack((x,np.ones(len(x)))) ax.scatter(x_training, y_training) for i in range(len(data)): if (i+1)%n == 0: y = np.dot(x_1, data[i]) ax.plot(x,y, label="Iteracion {}".format(i+1)) ax.legend() # Evaluación de la función con los datos de prueba evolucion_modelo(data, 3, x, y) # Definición de función para modelo scikit-learn def modelo_sklearn (x, y): x = x.reshape(-1, 1) y = y.reshape(-1, 1) reg = LinearRegression().fit(x,y) return reg # Función para calculo de la predicción basado en el modelo manual, modelo sklearn y el promedio de ambos def prediccion(modelo_manual, modelo_sklearn, x): param_modelo_manual = modelo_manual[list(modelo_manual)[-1]] x_1 = np.column_stack((x,np.ones(len(x)))) pred_modelo_manual = np.dot(x_1, param_modelo_manual) pred_modelo_sklearn = np.asarray(modelo_sklearn.predict(x.reshape(-1, 1))).reshape(-1) pred_prom = np.mean(np.array([pred_modelo_manual, pred_modelo_sklearn]), axis=0 ) return pred_modelo_manual, pred_modelo_sklearn, pred_prom # ### Selección de variables # # Para este caso con base en los datos de obtenidos en la sección del análisis exploratorio, se selecciona a como variable independiente a la variable **OverallQual** y como variable dependiente a la variable **SalePrice**. Debido a que tienen un mayor coeficiente de correlación el cual es de ***0.793990***. Asi mismo, tambien se selecciona como segunda variable independiente a **1stFlrSF** ya que tiene un coeficiente de correlación con **SalePrice** de ***0.616289***, a modo de prueba y comparación con la primera variable. Por lo que se procede a entrenar los modelos con estos datos. 
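# Added sketch (not part of the original analysis): it assumes the `correlation` DataFrame computed in the exploratory section above and simply ranks every feature by the absolute value of its correlation with SalePrice, which is the selection criterion described in the paragraph above.
correlation['SalePrice'].drop('SalePrice').abs().sort_values(ascending=False)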
# Entrenamiento de modelo para OverallQual y SalePrice x1 = df_training['OverallQual'].to_numpy() y1 = df_training['SalePrice'].to_numpy() epochs = 1000 imprimir = 1000 lr = 0.0001 data1, errores1 = modelo(x1,y1,epochs,imprimir,lr) cambio_error(errores1) evolucion_modelo(data1, 100, x1, y1) # Entrenamiento de modelo para 1stFlrSF y SalePrice x2 = df_training['1stFlrSF'].to_numpy() y2 = df_training['SalePrice'].to_numpy() epochs = 10 imprimir = 100 lr = 0.000001 data2, errores2 = modelo(x2,y2,epochs,imprimir,lr) cambio_error(errores2) evolucion_modelo(data2, 2, x2, y2) # + # Prediccion para OverallQual y SalePrice x1 = df_training['OverallQual'].to_numpy() y1 = df_training['SalePrice'].to_numpy() epochs = 1000 imprimir = 10000 lr = 0.0001 data1, errores1 = modelo(x1,y1,epochs,imprimir,lr) reg1 = modelo_sklearn (x1, y1) pred_modelo_manual1, pred_modelo_sklearn1, pred_prom1 = prediccion(data1, reg1, df_validation['OverallQual'].to_numpy()) # + # Prediccion para 1stFlrSF y SalePrice x2 = df_training['1stFlrSF'].to_numpy() y2 = df_training['SalePrice'].to_numpy() epochs = 10 imprimir = 10000 lr = 0.000001 data2, errores2 = modelo(x2,y2,epochs,imprimir,lr) reg2 = modelo_sklearn (x2, y2) pred_modelo_manual2, pred_modelo_sklearn2, pred_prom2 = prediccion(data2, reg2, df_validation['1stFlrSF'].to_numpy()) # - # ### Cálculo de error de los modelos de regresión lineal # Definición de la función para calcular el error del modelo manual, modelo scikit-learn y el promedio de los mismos def error(pred_modelo_manual, pred_modelo_sklearn, pred_prom, y): error_modelo_manual = np.mean(np.power(y_original-pred_modelo_manual, 2))*0.5 error_modelo_sklearn = np.mean(np.power(y_original-pred_modelo_sklearn, 2))*0.5 error_modelo_pred_prom = np.mean(np.power(y_original-pred_prom, 2))*0.5 return error_modelo_manual, error_modelo_sklearn, error_modelo_pred_prom # Calculo de errores para OverallQual y SalePrice y_original = df_validation['SalePrice'].to_numpy() error_modelo_manual1, error_modelo_sklearn1, error_modelo_pred_prom1 = error(pred_modelo_manual1, pred_modelo_sklearn1, pred_prom1, y_original) # Calculo de errores para 1stFlrSF y SalePrice y_original = df_validation['SalePrice'].to_numpy() error_modelo_manual2, error_modelo_sklearn2, error_modelo_pred_prom2 = error(pred_modelo_manual2, pred_modelo_sklearn2, pred_prom2, y_original) # Grafica del error por cada modelo para las variables OverallQual y 1stFlrSF fig, ax = plt.subplots() N = 3 ind = np.arange(N) width = 0.35 ax.bar(1, error_modelo_manual1, width, bottom=0, label='Modelo Manual ') ax.bar(1 + width, error_modelo_sklearn1, width, bottom=0, label='Modelo Scikit-learn') ax.bar(1 + 2*width, error_modelo_pred_prom1, width, bottom=0, label='Promedio Modelos') ax.bar(3, error_modelo_manual2, width, bottom=0, label='Modelo Manual') ax.bar(3 + width, error_modelo_sklearn2, width, bottom=0, label='Modelo Scikit-learn') ax.bar(3 + 2*width, error_modelo_pred_prom2, width, bottom=0, label='Promedio Modelos') ax.set_title('Errores') ax.set_xticks(ind + 1 + width) ax.set_xticklabels(['OverallQual', '', '1stFlrSF']) ax.legend() ax.autoscale_view() # ### Conclusiones # # * Para la variable OverallQual el mejor modelo es el que se entreno utilizando Scikit-learn. Para la variable de prueba 1stFlrSF, se encontro que el mejor modelo tambien es el que se entreno utilizando Scikit-learn. # * El modelo manual para OverallQual converge a partir de las 1000 iteraciones con un learning rate de 0.0001. 
Mientras que para la variable de prueba converge a 10 iteraciones con un learning rate de 0.000001. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Clean and explore Galaxy Zoo datasets # This file details the data cleansing of the Galaxy Zoo dataset and explains the correlations found between the dataset's features. # + [markdown] pycharm={"name": "#%% md\n"} #

    # Picture of AM0002 captured by Hubble telescope # + [markdown] pycharm={"name": "#%% md\n"} # ## Abstract # # ### Galaxy zoo # ... # # ### Galaxies classification # ... # + [markdown] pycharm={"name": "#%% md\n"} # ## Pre-processing # .... # ### Library imports # + pycharm={"name": "#%%\n"} import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') # + [markdown] pycharm={"name": "#%% md\n"} # ### Basic analysis # + pycharm={"name": "#%%\n"} from IPython.core.display import display def checkIfHasRowIncompatible(dataset, rowName1, rowName2): return dataset[dataset[rowName1] == 1][dataset[rowName2] == 1].shape[0] > 0 def VisualiseDataset(dataset): print("En-tête du dataset :\n----------\n") display(dataset.head()) print("Informations des types du dataset :\n----------\n") display(dataset.info()) print("\n----------\nTaille du dataset :") display(dataset.shape) print("Informations du dataset :\n----------\n") display(dataset.describe()) print("Pourcentage de valeurs manquantes :\n----------\n") display((df.isna().sum()/df.shape[0]).sort_values()) print("Vérification si valeurs multiple à 1 incompatible :\n----------\n") print("A ligne Spirale == Elliptique : {}\n".format(checkIfHasRowIncompatible(df, "SPIRAL", "ELLIPTICAL"))) print("A ligne Spirale == Incertaine : {}\n".format(checkIfHasRowIncompatible(df, "SPIRAL", "UNCERTAIN"))) print("A ligne Elliptique == Incertaine : {}\n".format(checkIfHasRowIncompatible(df, "ELLIPTICAL", "UNCERTAIN"))) print("Type des valeurs :\n----------\n") display(df.dtypes.value_counts().plot.pie()) # + pycharm={"name": "#%%\n"} data = pd.read_csv("../../Datas/GalaxyZoo1_DR_table2.csv") df = data.copy() VisualiseDataset(df) # + [markdown] pycharm={"name": "#%% md\n"} # '### Basic checklist # - **Targets variables** : Spiral, Elliptical and Uncertain # - **lines and columns** : (667944, 16) # - **Type de variables** : Majortiée de numérique, le reste sont objet (ici des dates) # - **Analyse des valeurs manquantes** : Aucunes valeurs manquantes # + pycharm={"name": "#%%\n"} dfbiased = df.copy() dfbiased.drop(['OBJID', 'P_CS', 'P_EL', 'P_CW', 'P_MG', 'P_ACW', 'P_EDGE', 'RA', 'DEC', 'ELLIPTICAL', 'SPIRAL', 'UNCERTAIN'], axis=1, inplace=True) sns.histplot(dfbiased['NVOTE']) for col in dfbiased.select_dtypes('double'): plt.figure() sns.distplot(df[col]) # + [markdown] pycharm={"name": "#%% md\n"} # Supression des valeurs ou le nombre de vote est inférieur à 50. 
# + pycharm={"name": "#%%\n"} shapeBase = len(df.index) df = df[df.NVOTE >= 15] df = df[df.P_DK > 0.05] shapeAfterModif = len(df.index) print("Nombre de ligne suprimées : ") print(shapeBase - shapeAfterModif) df.shape # + pycharm={"name": "#%%\n"} correlationDf = df.copy() correlationDf.drop(['OBJID', 'NVOTE', 'DEC', 'RA', 'SPIRAL', 'UNCERTAIN', 'ELLIPTICAL'], axis=1, inplace=True) # + pycharm={"name": "#%%\n"} sns.clustermap(correlationDf.corr()) # + [markdown] pycharm={"name": "#%% md\n"} # Création d'un dataset final # + pycharm={"name": "#%%\n"} print("Nombre de galaxies ayant le statut MERGED :", df["P_MG"][df["P_MG"] > 0.4].count()) # + pycharm={"name": "#%%\n"} dfWithPreProcess = df.copy() dfWithPreProcess.drop(['NVOTE', 'P_EL', 'P_CW', 'P_ACW', 'P_EDGE', 'P_DK', 'P_CS', 'P_EL_DEBIASED', 'P_CS_DEBIASED'], axis=1, inplace=True) # + pycharm={"name": "#%%\n"} dfWithPreProcess['MERGED'] = np.where(df['P_MG'] > 0.5, 1, 0) dfWithPreProcess.head() # + pycharm={"name": "#%%\n"} #Merge the three columns by using their name as type in the column "TYPE". columns = ["MERGED", "SPIRAL", "ELLIPTICAL", "UNCERTAIN"] for name in columns: dfWithPreProcess.loc[dfWithPreProcess[name]==1, 'TYPE'] = name #Drop all columns not needed and each rows with type "UNCERTAIN" dfWithPreProcess.drop(index=dfWithPreProcess[dfWithPreProcess['TYPE'] == "UNCERTAIN"].index, columns=['P_MG', 'UNCERTAIN', "MERGED", "ELLIPTICAL", "SPIRAL"], inplace=True) dfWithPreProcess.to_csv('../Datas/dataWithPreProcess.csv', index=False) dfWithPreProcess.head() # + pycharm={"name": "#%%\n"} dfShow = dfWithPreProcess[dfWithPreProcess["TYPE"] == "MERGED"] dfShow.head() # + pycharm={"name": "#%%\n"} dfShow = dfWithPreProcess[dfWithPreProcess["TYPE"] == "ELLIPTICAL"] dfShow.head() # + pycharm={"name": "#%%\n"} dfShow = dfWithPreProcess[dfWithPreProcess["TYPE"] == "SPIRAL"] dfShow.head() # + pycharm={"name": "#%%\n"} dfShow = dfWithPreProcess[dfWithPreProcess["TYPE"] == "UNKNOWN"] dfShow.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # import standard libraries import numpy as np import pandas as pd import os import random from sklearn import preprocessing os.chdir(os.path.join(os.getcwd(), "..", "..", "data", "raw")) # read the data df = pd.read_csv("tweet_product_company.csv", encoding = "ISO-8859-1") # get the head df.head(10) # get the tail df.tail(10) # null count df.isnull().sum() # get the shape df.shape # get the unique emotions np.unique(df["is_there_an_emotion_directed_at_a_brand_or_product"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd df = pd.read_csv('data/all_preds.csv') df = df.loc[df['count'] > 3] df = df.pred # - df.to_csv('data/all_preds_selected.csv', index = False, header = False) df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Window functions # # General Elections were held in the UK in 2015 and 2017. Every citizen votes in a constituency. The candidate who gains the most votes becomes MP for that constituency. 
# # All these results are recorded in a table ge # # yr | firstName | lastName | constituency | party | votes # ---:|-----------|-----------|---------------|-------|------: # 2015 | Ian | Murray | S14000024 | Labour | 19293 # 2015 | Neil | Hay | S14000024 | Scottish National Party | 16656 # 2015 | Miles | Briggs | S14000024 | Conservative | 8626 # 2015 | Phyl | Meyer | S14000024 | Green | 2090 # 2015 | Pramod | Subbaraman | S14000024 | Liberal Democrat | 1823 # 2015 | Paul | Marshall | S14000024 | UK Independence Party | 601 # 2015 | Colin | Fox | S14000024 | Scottish Socialist Party | 197 # 2017 | Ian | MURRAY | S14000024 | Labour | 26269 # 2017 | Jim | EADIE | S14000024 | SNP | 10755 # 2017 | | SMITH | S14000024 | Conservative | 9428 # 2017 | | BEAL | S14000024 | Liberal Democrats | 1388 # import getpass import psycopg2 from sqlalchemy import create_engine import pandas as pd pwd = getpass.getpass() engine = create_engine( 'postgresql+psycopg2://postgres:%s@localhost/sqlzoo' % (pwd)) pd.set_option('display.max_rows', 100) # ## 1. Warming up # # Show the **lastName, party** and **votes** for the **constituency** 'S14000024' in 2017. ge = pd.read_sql_table('ge', engine) ge.loc[(ge['constituency']=='S14000024') & (ge['yr']==2017), ['lastname', 'party', 'votes']] # ## 2. Who won? # # You can use the RANK function to see the order of the candidates. If you RANK using (ORDER BY votes DESC) then the candidate with the most votes has rank 1. # # **Show the party and RANK for constituency S14000024 in 2017. List the output by party** a = ge.loc[(ge['constituency']=='S14000024') & (ge['yr']==2017), ['party', 'votes']] (a.assign(rank=a['votes'].rank(ascending=False)) .sort_values('party')) # ## 3. PARTITION BY # # The 2015 election is a different PARTITION to the 2017 election. We only care about the order of votes for each year. # # **Use PARTITION to show the ranking of each party in S14000021 in each year. Include yr, party, votes and ranking (the party with the most votes is 1).** a = ge[ge['constituency']=='S14000021'].copy() a['posn'] = (a.groupby('yr')['votes'] .rank(ascending=False)) a[['yr', 'party', 'votes', 'posn']].sort_values(['party', 'yr']) # ## 4. Edinburgh Constituency # # Edinburgh constituencies are numbered S14000021 to S14000026. # # **Use PARTITION BY constituency to show the ranking of each party in Edinburgh in 2017. Order your results so the winners are shown first, then ordered by constituency.** a = ge[(ge['constituency'].between('S14000021', 'S14000026')) & (ge['yr']==2017)].copy() a['posn'] = (a.groupby('constituency')['votes'] .rank(ascending=False)) (a[['constituency', 'party', 'votes', 'posn']] .sort_values(['posn', 'constituency'])) # ## 5. Winners Only # # You can use [SELECT within SELECT](https://sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial) to pick out only the winners in Edinburgh. # # **Show the parties that won for each Edinburgh constituency in 2017.** a = ge[(ge['constituency'].between('S14000021', 'S14000026')) & (ge['yr']==2017)].copy() a['posn'] = a.groupby('constituency')['votes'].rank(ascending=False) a.loc[a['posn']==1, ['constituency', 'party']].sort_values('constituency') # ## 6. Scottish seats # # You can use **COUNT** and **GROUP BY** to see how each party did in Scotland. 
Scottish constituencies start with 'S' # # **Show how many seats for each party in Scotland in 2017.** a = ge[(ge['constituency'].str.startswith('S')) & (ge['yr']==2017)].copy() a['posn'] = (a.groupby('constituency')['votes'] .rank(ascending=False)) (a[a['posn']==1].groupby('party')['yr'] .count() .reset_index() .rename(columns={'yr': 'n'})) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="RHA88Oo_1JEz" colab_type="text" # # Data Processing # # - Load image data using PyTorch # - Image transformations # - Preprocess images (resize, crop, normalize) # + [markdown] id="MAerAjY6aeYg" colab_type="text" # ### Setup drive # + [markdown] id="BY8phsHYdicZ" colab_type="text" # Run the following cell to mount your Drive onto Colab. Go to the given URL and once you login and copy and paste the authorization code, you should see "drive" pop up in the files tab on the left. # + id="hurAYstF1TVc" colab_type="code" colab={} from google.colab import drive drive.mount('/content/drive') # + [markdown] id="RzP9vRERdmst" colab_type="text" # Click the little triangle next to "drive" and navigate to the "AI4All Chest X-Ray Project" folder. Hover over the folder and click the 3 dots that appear on the right. Select "copy path" and replace `PASTE PATH HERE` with the path to your folder. # + id="9kac1-X-cXhZ" colab_type="code" colab={} # cd "PASTE PATH HERE" # + [markdown] id="YoRfQsJE493A" colab_type="text" # ### Import necessary libraries # Torchvision, or the PyTorch package, consists of popular datasets, model architectures, and common image transformations for computer vision. # + id="y7qYM0a61JE2" colab_type="code" colab={} import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import random from torch.utils.data import random_split, Subset import torchvision from torchvision import datasets, transforms from utils.plotting import imshow_dataset from utils.datahelper import calc_dataset_stats, get_random_image # + [markdown] id="FlNKHL6V1JFD" colab_type="text" # ### Setup paths # Define paths and load metadata # + id="JurkCoKd1JFF" colab_type="code" colab={} path_to_dataset = os.path.join('data') path_to_images = os.path.join(path_to_dataset, 'images') metadata = pd.read_csv(os.path.join(path_to_dataset, 'metadata_train.csv')) # + [markdown] id="RDefUUni1JFO" colab_type="text" # ### Load images # + [markdown] id="NWu981cD1JFb" colab_type="text" # **Pytorch loads the data using sub-folder names as class labels** # # Navigate to the "images" folder to see what this means. # # + id="7K9vxOog8phL" colab_type="code" colab={} dataset = datasets.ImageFolder(path_to_images, transform=None) dataset # + id="_jl4WNZ41JFo" colab_type="code" colab={} # EXERCISE: Use the function .class_to_idx to see what our classes are # + [markdown] id="WpUeGWTj83pJ" colab_type="text" # **Now let's take a look at the images themselves!** # # Note: The `imshow_dataset` function is defined in the file `utils/plotting.py`. # + id="4-5t6y8x1JFP" colab_type="code" colab={} # plots the first 5 images imshow_dataset(dataset, n=5) # + id="4-eKm8oweTiY" colab_type="code" colab={} # plots 5 random images imshow_dataset(dataset, n=5, rand=True) # + [markdown] id="fng1sPXc9Jak" colab_type="text" # > **Discuss with each other** # > # > What do you notice about the images? What are their dimensions? 
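# A quick added sketch to answer the discussion question above (assumption: `get_random_image` from `utils.datahelper`, imported earlier, returns a PIL image, as it is used with the transforms further below):
im = get_random_image(dataset)
print(im.size, im.mode)      # (width, height) and colour mode of one image
print(dataset.class_to_idx)  # class-name -> label mapping used by ImageFolder (see the exercise above)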
# # # + [markdown] id="aC0ylzSJ_ilu" colab_type="text" # ### Transformations # The transforms module in PyTorch defines various transformations that can be performed on an image. # # Image transformations are used to pre-process images # as well as to "augment" the data. (We will discuss data augmentation in another section.) # # + [markdown] id="E77sTYbmhxUY" colab_type="text" # **Resize the image using transforms** # + id="TxVrO7is_tnn" colab_type="code" colab={} # get a random image from the dataset and resize it im = get_random_image(dataset) im = transforms.Resize(100)(im) im # + id="IzaCwPplAnS_" colab_type="code" colab={} transforms.Resize(50)(im) # + [markdown] id="9kgklEULi8ac" colab_type="text" # **Try out other transformations** # # How do these transformations alter the image? # - `transforms.ColorJitter` # - `transforms.RandomAffine` # - `transforms.RandomHorizontalFlip` # # You can [read more about these transformations here](https://pytorch.org/docs/stable/torchvision/transforms.html) # # # # + id="wUfx08Koi6SV" colab_type="code" colab={} # EXERCISE: Apply different transformations to images and check out the output # # HINT: Use the code above as an example and try transforms functions such as RandomAffine # + [markdown] id="UPdzvLOgljle" colab_type="text" # > **Discuss with each other** # > # > Which transformations could be useful to normalize the dataset? Which transformations could be useful to add diversity to data set? # + [markdown] id="nvscfnfU1JFu" colab_type="text" # ### Examine image dimensions # + [markdown] id="bDKayhybj9wJ" colab_type="text" # Run the code below to calculate the image dimension. # # > **Discuss with each other** # > # > Based on the image dimension, are the images greyscale or color images? # + id="tKuIS_OF1JFv" colab_type="code" colab={} im_sizes = [d[0].size for d in dataset] dimensions = set([len(s) for s in im_sizes]) print(f'Dimensions in dataset: {dimensions}') # + [markdown] id="CkWTitgFkXjq" colab_type="text" # Compare x-ray images to another image # + id="kfqdczpxbOLq" colab_type="code" colab={} # Answer the above question before running this block! from skimage import io color_image = io.imread('https://unsplash.com/photos/twukN12EN7c/download') io.imshow(color_image) print(f'Random color image shape: {color_image.shape}') print(f'Random xray image shape: {get_random_image(dataset).size}') # + [markdown] id="lwBVMapR1JF5" colab_type="text" # **How much do image shapes and sizes vary in the dataset?** # # Run the code below to print the image dimensions for a set of random images # + id="EeawbFJC1JF6" colab_type="code" colab={} im_num = 10 rand_indices = random.sample(range(len(dataset)), im_num) subset = Subset(dataset, rand_indices) print(f'Image dimensions for {im_num} random images') for d in subset: print(d[0].size) # + [markdown] id="UO6YUmYBrt5N" colab_type="text" # **Smallest dimension measurements** # # Calculate the smallest image width and height in the dataset. # + id="NwR2WGDir1tp" colab_type="code" colab={} # EXERCISE: calculate the smallest image width and smallest image height in the # dataset # # HINT: look at blocks above for useful code, use min() to find minimum in a list # + [markdown] id="SFMboM51mNWD" colab_type="text" # > **Discuss with each other** # > # > How should we resize and crop the images? How do the smallest image width and smallest image height constrain our strategy? 
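# An added sketch for the smallest-dimension exercise above (assumption: it reuses the `im_sizes` list of (width, height) tuples collected in the "Examine image dimensions" section):
min_width = min(s[0] for s in im_sizes)
min_height = min(s[1] for s in im_sizes)
print(f'Smallest width: {min_width}, smallest height: {min_height}')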
# + [markdown] id="w2zW9QWB1JGN" colab_type="text" # ### Resize and crop # + [markdown] id="83D3utI1mFGh" colab_type="text" # To make the images the same shape and size for the learning model, we can apply image transformations when loading the data. # # The `transforms.Compose` function puts together a list of image transformations, which are applied in order to the images. # + id="DzyqD6yJ1JGO" colab_type="code" colab={} # EXERCISE: set resize and crop parameters based on your observations above resize_value = # HERE # crop_value = # HERE # # compose transformations data_transforms = transforms.Compose([ transforms.Resize(resize_value), transforms.CenterCrop(crop_value)]) dataset = datasets.ImageFolder(path_to_images, transform=data_transforms) # + id="X4ZiIi7Lmv4e" colab_type="code" colab={} # EXERCISE: compare the images with and without transformation applied. # + id="0C8N8lRZm9FC" colab_type="code" colab={} # EXERCISE: try applying another list of transformations and compare the results # + [markdown] id="u856kmgx1JGT" colab_type="text" # ### Normalize images # + [markdown] id="q7UI9MUn1JGW" colab_type="text" # **Calculate the pixel intensity mean and standard deviation across all images in the dataset.** # # Note: This code takes some time to run. The output is # # - Mean: 0.544 # - Standard Deviation: 0.237 # + id="tORr1xbH1JGX" colab_type="code" colab={} data_transforms = transforms.Compose([ transforms.Resize(resize_value), transforms.CenterCrop(crop_value), transforms.ToTensor()]) dataset = datasets.ImageFolder(path_to_images, transform=data_transforms) data_mean, data_std = calc_dataset_stats(dataset) print(f'Mean: {data_mean:.3f}, Standard Deviation: {data_std:.3f}') # + [markdown] id="t-PKVJ1R1JGg" colab_type="text" # **Add normalization to the transformation list** # # The normalization step is applied on tensors and so is added after the `transforms.ToTensor` step. # + id="nW9X6mR31JGh" colab_type="code" colab={} data_transforms = transforms.Compose([ transforms.Grayscale(), transforms.Resize(resize_value), transforms.CenterCrop(crop_value), transforms.ToTensor(), transforms.Normalize(mean=[data_mean], std=[data_std])]) dataset = datasets.ImageFolder(path_to_images, transform=data_transforms) # + id="ElIWlR4dp7zf" colab_type="code" colab={} # EXERCISE: compare the images with all the transformations applied. # + id="MP4a2Uly1JGs" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import requests import json import codecs import sys, time from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning) # - username = "xxxxxxxxxxx" password = "" headers = {'Content-Type' : "application/json"} # + print "\nCreating custom mmodel..." 
data = {"name" : "Custom model #2", "base_model_name" : "ja-JP_BroadbandModel", "description" : "300312-01 custom model and Add a single user word"} uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations" jsonObject = json.dumps(data).encode('utf-8') resp = requests.post(uri, auth=(username,password), verify=False, headers=headers, data=jsonObject) print "Model creation returns: ", resp.status_code if resp.status_code != 201: print "Failed to create model" print resp.text # - respJson = resp.json() customID = respJson['customization_id'] print "Model customization_id: ", customID corpus_file = "../corpus_utf8.txt" corpus_name = "corpus1" # + print "\nAdding corpus file..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/corpora/"+corpus_name with open(corpus_file, 'rb') as f: r = requests.post(uri, auth=(username,password), verify=False, headers=headers, data=f) print "Adding corpus file returns: ", r.status_code if r.status_code != 201: print "Failed to add corpus file" print r.text # + print "Checking status of corpus analysis..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/corpora/"+corpus_name r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] time_to_run = 10 while (status != 'analyzed'): time.sleep(10) r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] print "status: ", status, "(", time_to_run, ")" time_to_run += 10 print "Corpus analysis done!" # - print "\nListing words..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/words?sort=count" r = requests.get(uri, auth=(username,password), verify=False, headers=headers) print "Listing words returns: ", r.status_code file=codecs.open(customID+".OOVs.corpus", 'wb', 'utf-8') file.write(r.text) print "Words list from added corpus saved in file: "+customID+".OOVs.corpus" # + print "\nAdding multiple words..." data = {"words" : [{"word" : "ごばん", "sounds_like" : ["ゴバン"], "display_as" : "5番"},{"word" : "にじゅよんごう", "sounds_like" : ["ニジュヨンゴウ"], "display_as" : "二十四号"}, {"word" : "ふくむらのりちか", "sounds_like" : ["フクムラノリチカ"], "display_as" : "福村教親"}]} uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/words" jsonObject = json.dumps(data).encode('utf-8') r = requests.post(uri, auth=(username,password), verify=False, headers=headers, data=jsonObject) print "Adding multiple words returns: ", r.status_code # + uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] print "Checking status of model for multiple words..." time_to_run = 10 while (status != 'ready'): time.sleep(10) r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] print "status: ", status, "(", time_to_run, ")" time_to_run += 10 print "Multiple words added!" # + print "\nListing words..." 
uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/words?word_type=user&sort=alphabetical" r = requests.get(uri, auth=(username,password), verify=False, headers=headers) print "Listing user-added words returns: ", r.status_code file=codecs.open(customID+".OOVs.user", 'wb', 'utf-8') file.write(r.text) print "Words list from user-added words saved in file: "+customID+".OOVs.user" # + print "\nTraining custom model..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID+"/train" data = {} jsonObject = json.dumps(data).encode('utf-8') r = requests.post(uri, auth=(username,password), verify=False, data=jsonObject) print "Training request returns: ", r.status_code if r.status_code != 200: print "Training failed to start - exiting!" # + uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/"+customID r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] time_to_run = 10 while (status != 'available'): time.sleep(10) r = requests.get(uri, auth=(username,password), verify=False, headers=headers) respJson = r.json() status = respJson['status'] print "status: ", status, "(", time_to_run, ")" time_to_run += 10 print "Training complete!" print "\nGetting custom models..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations" r = requests.get(uri, auth=(username,password), verify=False, headers=headers) print "Get models returns: ", r.status_code print r.text # - print "\nGetting custom models..." uri = "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations" r = requests.get(uri, auth=(username,password), verify=False, headers=headers) print "Get models returns: ", r.status_code print r.text # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **Deep Neural Network fashion_mnist dataset Train with two Dense layers** # **Import TensorFlow** # import tensorflow as tf print("You are using TensorFlow version", tf.version) if len(tf.config.list_physical_devices('GPU')) > 0: print("You have a GPU enabled.") else: print("Enable a GPU before running this notebook.") # **import keras** # Using Keras TensorFlow API to define neural network from tensorflow import keras import matplotlib.pyplot as plt # **import the Fashion MNIST** dataset = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = dataset.load_data() # **Print the train test images** print(train_images.shape) print(test_images.shape) # **Print train labels** print(train_labels) train_images = train_images / 255.0 test_images = test_images / 255.0 # **Plot the images** plt.figure(figsize=(10,10)) for i in range (25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(train_labels[i]) plt.show() # **Create the layers** #A linear model model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) # **Compile the model** model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) EPOCHS=10 model.fit(train_images, train_labels, epochs=EPOCHS) # **Add plots to observe overfitting, create plots to 
observe overfittings** # + history = model.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=EPOCHS) # + acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history ['val_loss'] epochs_range =range(EPOCHS) plt.figure(figsize=(8,8)) plt.subplot(1,2,1) plt.plot(epochs_range, acc, label = 'Training Accuracy') plt.plot(epochs_range, val_acc, label = 'Validation Accuracy') plt.legend(loc = 'lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label = 'Training Loss') plt.plot(epochs_range, val_loss, label = 'Validation Loss') plt.legend(loc = 'upper right') plt.title('Training and Validation Loss') plt.show() # - test_loss, test_acc = model.evaluate(test_images, test_labels) print('\nTest accuracy:', test_acc) predictions = model.predict(test_images) print(predictions[0]) print(tf.argmax(predictions[0])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="h4uhGmkQZpZL" # !cp "drive/My Drive/Colab Notebooks/bounded lognorm/bounded_lognorm.py" . from bounded_lognorm import bounded_lognorm from warnings import simplefilter simplefilter(action='ignore', category=FutureWarning) from scipy.stats import norm, truncnorm, lognorm, skew, kurtosis, normaltest import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(style='darkgrid') plt.rcParams["font.family"] = "serif" plt.rcParams["figure.figsize"] = [10, 10/1.6] # + [markdown] id="uQckoVP9aMqF" # What are the fastest possible 100m sprint times? # # Let's first load data from the 2004-2016 Olympics. See [here](https://deepblue.lib.umich.edu/data/concern/data_sets/cr56n184r?locale=en) for details on this dataset. # + colab={"base_uri": "https://localhost:8080/", "height": 197} id="p8WKhJqEaKyI" outputId="6092c965-2625-4467-9790-20cc8618cabc" url = 'https://deepblue.lib.umich.edu/data/downloads/qn59q4716?locale=en' sprint_data = pd.read_csv(url, encoding="ISO-8859-1") sprint_data.head() # + [markdown] id="zc0kRcevaWlH" # Isolate the 100m sprint times, clean up the `MARK` field (time in seconds), and convert to average speed (meters per second). 
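# For example (added illustration), a 9.63 s race corresponds to an average speed of 100 m / 9.63 s ≈ 10.38 m/s; the next cell applies this same 100 / time conversion to the cleaned `MARK` column.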
# + colab={"base_uri": "https://localhost:8080/", "height": 197} id="rXCIS24uaTOv" outputId="fe50ef4f-1b20-45ef-ce31-f79d473c3dab" sprint_data = sprint_data[sprint_data.Race == "100 m"] sprint_data['speed'] = sprint_data.MARK.str.split(expand=True)[0] sprint_data['speed'] = pd.to_numeric(sprint_data['speed'],'coerce') sprint_data = sprint_data[sprint_data['speed']>0] sprint_data['speed'] = 100 / sprint_data['speed'] sprint_data.head() # + colab={"base_uri": "https://localhost:8080/", "height": 406} id="ngt-OQseaZGu" outputId="6f926bda-8223-4a85-9556-d331d4a4c201" speeds_M = sprint_data[sprint_data.Gender == 'M']['speed'] shapes, results = bounded_lognorm.fit(speeds_M, MLE=False, verbose=True, flower=0) shapes_rounded = shapes.round(2) label = ( 'mode=' + str(shapes_rounded[0]) + ', sigma=' + str(shapes_rounded[1]) + ', lower=' + str(shapes_rounded[2]) + ', upper=' + str(shapes_rounded[3])) xdata = np.linspace(7, 11, len(speeds_M)) sns.lineplot(xdata, bounded_lognorm.pdf(xdata, *shapes), label=label) sns.distplot(speeds_M[speeds_M>7], kde=False, norm_hist=True) plt.xlabel("Men's 100m Sprints, Average Speeds (m/sec), 2004-2016 Olympics") plt.legend() plt.show() # + [markdown] id="QcxHAZShaj4J" # Convert the estimated upper bound from average speed (m/sec) to time (sec/100m). # + colab={"base_uri": "https://localhost:8080/"} id="GG6hFs9lafYv" outputId="5e373431-6177-4d5f-85fd-49027f1d9e8c" bpt = 100/shapes[-1] bpt.round(2) # + [markdown] id="c_vigAnKap2X" # With a 95% confidence interval of # + colab={"base_uri": "https://localhost:8080/"} id="dPLtYdqjamC3" outputId="c48cb616-940a-4115-b891-abd3d6fb4999" stdev = np.sqrt(np.diag(results[1]))[-1] conf_interval = np.array([shapes[-1] + 2 * stdev, shapes[-1] - 2 * stdev]) (100/conf_interval).round(2) # + [markdown] id="74cR2O61awXs" # For comparison, Usain Bolt holds the current record of 9.58 seconds, set at the World Championships in 2009, so outside this sample. The estimates are consistent with anecdotes that Bolt sometimes showboats before the finish line and could have done better had he given 100%. Also you can find another estimate of 9.44 at [this article](https://www.bbc.com/future/article/20120712-will-we-ever-run-100m-in-9-secs). # + colab={"base_uri": "https://localhost:8080/", "height": 406} id="yZA5gdm7a07o" outputId="81f3a552-a283-464a-9fdc-68bf2d7cdb4e" speeds_W = sprint_data[sprint_data.Gender == 'F']['speed'] shapes, results = bounded_lognorm.fit(speeds_W, MLE=False, verbose=True, fmode=speeds_W.mode()[0], flower=0) shapes_rounded = shapes.round(2) label = ( 'mode=' + str(shapes_rounded[0]) + ', sigma=' + str(shapes_rounded[1]) + ', lower=' + str(shapes_rounded[2]) + ', upper=' + str(shapes_rounded[3])) xdata = np.linspace(7, 10, len(speeds_W)) sns.lineplot(xdata, bounded_lognorm.pdf(xdata, *shapes), label=label) sns.distplot(speeds_W[speeds_W>7], kde=False, norm_hist=True) plt.xlabel("Women's 100m Sprints, Average Speeds (m/sec), 2004-2016 Olympics") plt.legend() plt.show() # + [markdown] id="nYW6FFOKa9EP" # Convert the estimated upper bound from average speed (m/sec) to time (sec/100m). 
# + colab={"base_uri": "https://localhost:8080/"} id="ihNU3QLHa6Tf" outputId="04d529fa-02cd-44f2-80c4-2af3cf278304" bpt = 100/shapes[-1] bpt.round(2) # + colab={"base_uri": "https://localhost:8080/"} id="aU-BKhTMa_oW" outputId="ad56fa08-5ea5-43fa-c4c5-1f1712b22f11" stdev = np.sqrt(np.diag(results[1]))[-1] conf_interval = np.array([shapes[-1] + 2 * stdev, shapes[-1] - 2 * stdev]) (100/conf_interval).round(2) # + [markdown] id="zLKKIevZbCnp" # For comparison, holds the current record of 10.49 seconds, set in 1988(!), so outside this sample. # + [markdown] id="i0BnJl5_bIuQ" # But aren't we assuming that the upper bound, which is outside the range of the sample data, is nonetheless inherent in the data? # # Here is an example of a random sample of 10,000 fictional runners, each having 10 qualities with values ranging from 0 to 1. The sum of the runners' qualities therefore can range from 0 to 10. The perfect runner is a 10 and the perfectly worst runner is a 0 # + colab={"base_uri": "https://localhost:8080/", "height": 407} id="EmtoEEu6bB6E" outputId="3daf23e8-f977-4e7a-b757-e4a233bebc12" runners = np.random.random_sample(size=(10, 10000)) runner_sums = runners.sum(axis=0) sns.distplot(runner_sums, kde=False) # + [markdown] id="zLJEo-UkbnAZ" # Say we don't know the value sum of the perfect runner, but we do of the worst (0). Let's estimate the perfect runner. # + colab={"base_uri": "https://localhost:8080/"} id="nprGESFUbkAq" outputId="6f06f1ef-7877-4b2c-e986-1a2dc6f7e1ae" shapes, results = bounded_lognorm.fit(runner_sums, MLE=False, verbose=True, flower=0) upper_estd = shapes[-1] print("The best possible runner, estimated:", upper_estd.round(2)) # + colab={"base_uri": "https://localhost:8080/"} id="a9i9n17Mbyng" outputId="fd2aec5c-b64a-4454-bb59-f1b0abcbd743" stdev = np.sqrt(np.diag(results[1]))[-1] conf_interval = np.array([upper_estd - 2 * stdev, upper_estd + 2 * stdev]) print("with a ~95% confidence interval of", conf_interval.round(2)) # + [markdown] id="LSbrSZlnb6hX" # These are much closer to the theoretically perfect runner (with value sum = 10) than the best from the sample. # + colab={"base_uri": "https://localhost:8080/"} id="S1hB_Q_pb04I" outputId="c8040fcf-55a9-4c3f-b9a0-99c606f8ee7e" print("The best runner in the sample:", max(runner_sums).round(2)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="ITxNweQ3MjA7" # Author: , Central Bank of India # Kaggle [URL](https://www.kaggle.com/c/GiveMeSomeCredit/overview) # Date: 20th May, 2021 # Published by: # # + [markdown] id="mXo-J_9zGvMg" # ## **Project Objective:**
# ##### To improve on the state of the art in credit scoring by predicting the probability that somebody will experience financial distress in the next two years. # # ### **Context:**
    # ##### Banks play a crucial role in market economies. They decide who can get finance and on what terms and can make or break investment decisions. For markets and society to function, individuals and companies need access to credit.
    # # ##### Credit scoring algorithms, which make a guess at the probability of default, are the method banks use to determine whether or not a loan should be granted. This project requires to improve on the state of the art in credit scoring, by predicting the probability that somebody will experience financial distress in the next two years. # # The aim of this project is to build a model that borrowers can use to help make the best financial decisions. # # Historical data are provided on 250,000 borrowers # + [markdown] id="phEyUcGFKN0C" # ####**Calling Data Manupulation, Plotting, OS & Warning Libraries:** # # + id="Hc7R1t0HG1hQ" # For data manupulation import pandas as pd import numpy as np from tabulate import tabulate from scipy.stats import norm # For visual interpretation import seaborn as sns import matplotlib.pyplot as plt from matplotlib import cm # Call os for setting up navigation and environment import os # + id="Bxh9SzyC3O8-" # Import warnings module import warnings # Do not print warnings on screen warnings.filterwarnings("ignore") # + id="J_natiEZATVQ" # For displaying all outputs from a cell--not just the last from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # + [markdown] id="ta1ELSTFzKC-" # #### **Importing Data To Colab Machine** # # ##### **(1)** Establish a connection to Google Drive. Login to the Google account when prompted and copy a validation token each time we connect to Drive from Colab. After establishing the connection, we will see a new folder appear in the side bar as ‘/gdrive’.
    # # ##### **(2)** Copy the compressed file from '/gdrive' to the virtual machine. Do not remove the original zip from your Drive after copying: the Colab virtual machine is reset between sessions, so we will need to copy the file again at the start of each session.
    # # ##### **(2)** Unzip the data # # # # + colab={"base_uri": "https://localhost:8080/"} id="AyhR3OT-IQ3E" outputId="6a1129e6-8b08-4430-9764-51669d9862bf" # Mount google drive from google.colab import drive drive.mount('/ashok') # + id="eCMKaepOH9Q0" # Complete path to storage location of the .zip file of data zip_path = '/ashok/MyDrive/Colab_data_files/GiveMeSomeCredit' # + colab={"base_uri": "https://localhost:8080/", "height": 36} id="wb5Rw4EN3yl_" outputId="3dfdb88f-6d32-4d1a-d530-3aa005a6fb70" # Check current directory (be sure you're in the directory where Colab operates: '/content') os.getcwd() os.chdir(zip_path) # + [markdown] id="LU9DQFaiil1E" # ### **Reading Data** # + id="YUeZMmDxK2s_" df_train = pd.read_csv('cs-training.csv.zip') df_test = pd.read_csv('cs-test.csv.zip') df_sample = pd.read_csv('sampleEntry.csv.zip') df_DataDictionary = pd.read_excel('Data Dictionary.xls', header=0, index_col=0) # + [markdown] id="NqQtpEPPjlU-" # ##### **Dataset Contain Following Features:** # + colab={"base_uri": "https://localhost:8080/"} id="4-DRXIQmgN2v" outputId="d83f9434-ef92-4e7e-bb34-9b026fbb228c" #df_DataDictionary.style.set_properties(**{'text-align': 'left'}) print(tabulate(df_DataDictionary, headers="firstrow", tablefmt="fancy_grid")) # + [markdown] id="9105cmV5kygE" # #### **Exploring Dataset:** # + colab={"base_uri": "https://localhost:8080/"} id="e1rS893r8JfB" outputId="c551292f-c236-4c4c-baaa-520e5a4d6d49" print(df_train.shape) print(df_test.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 244} id="nZwW7su9IBGc" outputId="ae1e2171-5875-4195-b764-19e892546c01" df_train.head() # + colab={"base_uri": "https://localhost:8080/"} id="kk9eQlya8CE7" outputId="e3ee156e-40bd-4584-f5b9-01bd3312cb6f" df_train.info() # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="8bDrD0USlOcO" outputId="a3538fad-c733-469a-9b56-878c623460ba" df_train.describe().round(2) # + colab={"base_uri": "https://localhost:8080/", "height": 244} id="8XTl_yK3mCI1" outputId="09c75a8c-b9d0-4b2e-eead-55f2afb8571b" df_test.head() # + colab={"base_uri": "https://localhost:8080/"} id="KpLYCNCcJPfG" outputId="dc93eb56-01d5-49aa-c357-77409426c6e3" df_test.info() # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="jeYfV5CBJQn8" outputId="b759cd41-26d4-40de-9efe-a6621ee58c4b" df_test.describe().round(2) # + [markdown] id="_p6I3xZSJ--A" # ###### *Values are missing from both Test & Train Datasets* # + [markdown] id="zgh6gbnOmhvz" # #### **Visual Analysis Of the Missing Values:** # + colab={"base_uri": "https://localhost:8080/", "height": 577} id="n8O_YIDUwp57" outputId="dcce287d-f562-4f48-e080-7ce541c048ad" # Heatmap for missing values in training data plt.figure(figsize=(8,5)) sns.heatmap(df_train.isnull(), cmap="coolwarm", yticklabels=False, cbar=True) plt.show() # + [markdown] id="QTEFHTYC8_Ux" # ###### _There are missing values in **MonthlyIncome** and **NumberOfDependents**_ # # + colab={"base_uri": "https://localhost:8080/"} id="dI7GdNHVOsLj" outputId="e036086f-0ad7-44b1-ebd7-dcf7d8577d7b" # Percentage of data missing round(df_train.isnull().mean() * 100,2) # + [markdown] id="h7YQ7iXVPDDW" # ###### _**MonthlyIncome** has 19.8% missing values and **NumberOfDependents** has 2.6% missing values_ # # + colab={"base_uri": "https://localhost:8080/", "height": 577} id="dDErbglb84Db" outputId="7deaf409-4ec7-4de2-9485-3b5fbd508d76" # Heatmap for missing values in test data plt.figure(figsize=(8,5)) sns.heatmap(df_test.isnull(), cmap="coolwarm", yticklabels=False, cbar=True) 
plt.show() # + [markdown] id="DiopIHaBP1V2" # ###### *There are missing values in __SeriousDlqin2yrs__, __MonthlyIncome__ and __NumberOfDependents__* # + colab={"base_uri": "https://localhost:8080/"} id="riSvoMX1PiAm" outputId="de18290d-b092-4e41-8b29-6b6fd17f0df3" round(df_test.isnull().mean() * 100,2) # + [markdown] id="aA0M0xImQW_m" # ###### _**SeriousDlqin2yrs** has 100%, **MonthlyIncome** has 19.8% and **NumberOfDependents** has 2.59% missing values_ # + [markdown] id="GOcfTKxouLeV" # #### **Fixing Missing Values in the training dataset:** # + id="jaXybJApmy1E" # We will impute the MonthlyIncome with mean df_train['MonthlyIncome'].fillna(df_train['MonthlyIncome'].mean(),inplace=True) # + id="o1mNAY-l8BNz" # We will impute the NumberOfDependents with mode df_train['NumberOfDependents'].fillna(df_train['NumberOfDependents'].mode()[0], inplace=True) # + [markdown] id="Xo1fUnhNKHo8" # #### **Fixing Missing Values in the test dataset:** # + id="qoPuLUl_KMlp" # We will impute the MonthlyIncome with mean df_test['MonthlyIncome'].fillna(df_test['MonthlyIncome'].mean(),inplace=True) # + id="XgioQA-UKYVq" # We will impute the NumberOfDependents with mode df_test['NumberOfDependents'].fillna(df_test['NumberOfDependents'].mode()[0], inplace=True) # + [markdown] id="iXaZmT82N-RT" # #### **Checking if imputations are propoer:** # + colab={"base_uri": "https://localhost:8080/"} id="zcnOkkDhN9Q6" outputId="be967b34-6ec1-45ff-ae06-114d39be3705" df_train.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="BVqUyGq1OZfI" outputId="7a0068ed-6535-440b-d8da-23cb20c95f2f" df_test.isnull().sum() # + [markdown] id="2PEdpb0PSmxM" # ###### _Imputations look good_ # + [markdown] id="-NXHZyjcO7t4" # #### **Visualisation of the data prepared:** # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="93_Qz1fNPBFA" outputId="7a31fd2c-8df6-4413-8790-9a6cc99b0c9d" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(6,6)) sns.countplot(x='SeriousDlqin2yrs',data=df_train, palette="Set1_r") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="CY28s2vywa4J" outputId="7983a04c-a706-47a8-a63d-079ea239577d" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='RevolvingUtilizationOfUnsecuredLines',data=df_train, stat="density", bins=20, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="y6zlNE9Gwb1i" outputId="174d8a34-fd31-40e0-dd33-8697411561b7" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'RevolvingUtilizationOfUnsecuredLines', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read) # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="OBzwFwF0Yskg" outputId="3b7c3f43-5652-4e05-8dd4-a6ce1c6f9717" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='age',data=df_train, stat="density", bins=20, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="AvJHJ1iXmfQN" outputId="61a5ed50-bdf0-4260-94e1-6d4bd3be8cfb" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'age', data = df_train) # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="cKyV6zkAsH8t" 
outputId="e9368390-942a-41fb-dce8-6cb8fa528aea" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberOfTime30-59DaysPastDueNotWorse',data=df_train, stat="density", bins=20, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="SKocIP78sVRZ" outputId="fc6ddcae-c6b1-450c-a8a2-96f58eb7b739" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberOfTime30-59DaysPastDueNotWorse', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="-xTxtJW62_s3" outputId="93e70a64-f27b-40b2-a577-ffa7ffe964b3" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='DebtRatio',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="4Bpx2tpQ2_X3" outputId="1e6fc045-5a60-4e44-81fa-5e67da3fb76e" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'DebtRatio', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="7YbPO8cv7xwS" outputId="d5e26cbb-5119-4902-ccd3-45a9ac663485" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='MonthlyIncome',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="27pZkND_7xdL" outputId="b44ddf99-fce4-4d8b-bc33-12822d143dc8" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'MonthlyIncome', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="ivWwVg717xtN" outputId="d20e3e53-271c-44d0-c520-9d7dce1c406e" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberOfOpenCreditLinesAndLoans',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="-oGww3NM7xag" outputId="c3baebdd-29f1-4d4b-e873-b64fc8dae9fa" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberOfOpenCreditLinesAndLoans', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="Qu2zQGkn7xqH" outputId="3e000257-8629-452b-ca25-5e4b11078d26" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberOfTimes90DaysLate',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="-Pr7zPPs7xS2" outputId="5aa53877-80bd-4029-b5ea-988559a4155a" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberOfTimes90DaysLate', data = df_train, 
showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="x9HLehHV7xj0" outputId="4c7e552a-4298-410a-e2c0-cc109edad9ff" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberRealEstateLoansOrLines',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="FT5VY2YZ7xIL" outputId="e18a854f-5fc1-44a0-ad1b-32a805e15400" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberRealEstateLoansOrLines', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="ugAp_wS67xgP" outputId="8517023d-6228-44ad-b4d4-9986b77cc417" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberOfTime60-89DaysPastDueNotWorse',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="R3YUC7rI7w-M" outputId="07635f58-214f-470d-ed7e-48fa51e57b58" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberOfTime60-89DaysPastDueNotWorse', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="4WctPCWffXyt" outputId="530bd7d8-164d-48c6-f215-af0defbadc34" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.histplot(x='NumberOfDependents',data=df_train, stat="density", bins=25, kde=True, hue='SeriousDlqin2yrs') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 412} id="WsF0ZnpYqJH9" outputId="933d80ed-f128-4aa2-8eaa-21932fa46311" sns.set_theme(style="darkgrid") fig = plt.figure(figsize=(10,6)) sns.set_palette("bright") sns.boxplot(x = 'SeriousDlqin2yrs', y = 'NumberOfDependents', data = df_train, showfliers=False) # removed the outliers as they were making the box plot very small to read # + colab={"base_uri": "https://localhost:8080/", "height": 849} id="ZE4zk7Ep9wm_" outputId="b2fe088e-570c-41f2-dcbe-9a9698f714de" # Correlation fig,ax = plt.subplots(figsize=(10, 10)) sns.heatmap(df_train.corr(), annot=True, cmap='Greens', linewidths=.5, fmt='.2f',ax=ax) plt.show() # + [markdown] id="pHe26R2JT5Xa" # #### **Data Preparation For Modeling** # + id="5Ml1Ora4CG-6" # Dropping unnamed colum from test dataframe df_train.drop('Unnamed: 0', axis=1, inplace=True) # + id="sQF8fe_dC_vJ" # Dropping unnamed colum from test dataframe df_test.drop('Unnamed: 0', axis=1, inplace=True) # + id="MM-AcgAJDo8U" X = df_train.drop('SeriousDlqin2yrs',axis=1) y = df_train['SeriousDlqin2yrs'] # + [markdown] id="g50kXcAeEzV0" # ##### **Splitting Data** # + colab={"base_uri": "https://localhost:8080/"} id="toRSDoXxDuXg" outputId="d37d582b-e358-46af-8225-870960321ccb" from sklearn.model_selection import train_test_split #splitting data into train and test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1000) print(X_train.shape) print(X_test.shape) print(y_train.shape) print(y_test.shape) # + [markdown] id="bPi3Udad3S3n" # #### **Model 1: Using Random 
Forest** # + [markdown] id="vx5wxQn8U93N" # ##### **Calling Data Modeling Libraries** # + id="8zcZWg7zE9hz" from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import BaggingClassifier from sklearn.metrics import confusion_matrix, classification_report from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score,roc_curve,auc from sklearn.model_selection import GridSearchCV, cross_val_score, RandomizedSearchCV from sklearn.tree import DecisionTreeClassifier # + id="4kb9-tGAz8fO" RFC = RandomForestClassifier() # + id="h_QS0Q22yh5P" param_grid = { "n_estimators" : [9, 18, 27, 36, 100, 150], "max_depth" : [2,3,5,7,9], "min_samples_leaf" : [2, 3]} # + id="f5KTFugi0ANE" RFC_random = RandomizedSearchCV(RFC, param_distributions=param_grid, cv=5) # + colab={"base_uri": "https://localhost:8080/"} id="VJ9VVin00LIe" outputId="080fe8a9-6460-4132-bbbf-5713709dc97e" RFC_random.fit(X_train, y_train) # + id="eRDWw9xs0Z4L" best_est_RFC = RFC_random.best_estimator_ # + colab={"base_uri": "https://localhost:8080/"} id="L7kojaAU0cz0" outputId="a881e00a-7bdf-4e77-a56c-d408a8f46aae" print('Training Set Classifier Accuracy %: {:0.2f}'.format(RFC_random.score(X_train, y_train) * 100)) print('Test Set Classifier Accuracy %: {:0.2f}'.format(RFC_random.score(X_test, y_test) * 100)) # + id="YWaw2FSH0l8h" y_pred_RFC = best_est_RFC.predict_proba(X_test) y_pred_RFC = y_pred_RFC[:,1] # + colab={"base_uri": "https://localhost:8080/", "height": 573} id="-JMsVoEe1S6o" outputId="8f7ac0ec-ace7-4e45-efda-b8cd4ce06312" fpr,tpr,_ = roc_curve(y_test, y_pred_RFC) roc_auc = auc(fpr, tpr) plt.figure(figsize=(10,8)) plt.title('Receiver Operating Characteristic') sns.lineplot(fpr, tpr, label = 'AUC = %0.2f' % roc_auc) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'r--') plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() # + [markdown] id="mpHv8CO9XPpd" # #### **Model 2: Using XGBOOST** # + [markdown] id="hei093eHn1Sz" # ##### **Calling Data Modeling Libraries** # + id="J0rMluHXwBpZ" import xgboost as xgb from xgboost import XGBClassifier # + id="Edt13nu2xSFN" from sklearn.model_selection import GridSearchCV, RandomizedSearchCV # + id="UeJDa-kjxUQ6" xgb = XGBClassifier(n_jobs=-1) # Use a grid over parameters of interest param_grid = { 'n_estimators' :[100,150,200,250,300], "learning_rate" : [0.001,0.01,0.0001,0.05, 0.10 ], "gamma" : [ 0.0, 0.1, 0.2 , 0.3 ], "colsample_bytree" : [0.5,0.7], 'max_depth': [3,4] } # + id="6bRNnHjJxe6F" xgb_randomgrid = RandomizedSearchCV(xgb, param_distributions=param_grid, cv=5) # + colab={"base_uri": "https://localhost:8080/"} id="nO5IqQmbxpP2" outputId="8fff5531-ffae-4236-caf0-0b4452df7382" xgb_randomgrid.fit(X_train,y_train) # + id="etsFAb15xqiE" best_est_XGB = xgb_randomgrid.best_estimator_ # + colab={"base_uri": "https://localhost:8080/"} id="bIjN5u8Bxwao" outputId="6d2a91fd-b554-407a-db65-7498d97c621f" print('Training Set Classifier Accuracy %: {:.2f}'.format(xgb_randomgrid.score(X_train, y_train) * 100)) print('Test Set Classifier Accuracy %: {:.2f}'.format(xgb_randomgrid.score(X_test, y_test) * 100)) # + id="fJbaaJUPx4H4" y_pred_XGB = best_est_XGB.predict_proba(X_train) y_pred_XGB = y_pred_XGB[:,1] # + colab={"base_uri": "https://localhost:8080/", "height": 590} id="6iH7V0fMyVtN" outputId="396c0397-c68c-47c4-f0a7-d7c3f3459fd7" fpr,tpr,_ = roc_curve(y_train, y_pred_XGB) roc_auc = auc(fpr, tpr) plt.figure(figsize=(10,8)) plt.title('Receiver Operating Characteristic') sns.lineplot(fpr, tpr, label = 'AUC = %0.2f' % 
roc_auc) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'r--') plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() # + [markdown] id="OYvGRKdzA3AS" # ####**Conclusion:** # # Between the two models we built, we saw that:
    # (1) The area under the ROC curve (AUC) for the Random Forest model, evaluated on the held-out test split, is 0.88.
    # (2) The area under the ROC curve (AUC) for the XGBoost model is 0.87. Note that this ROC curve was computed on the training split (`y_train`) rather than the held-out test split, so the two figures are not directly comparable (see the sketch below).
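#
# The next cell is a minimal sketch (not part of the original analysis) showing one way to score both
# fitted estimators on the same held-out split with `roc_auc_score`, so that the two AUC figures become
# directly comparable. It reuses `best_est_RFC`, `best_est_XGB`, `X_test` and `y_test` defined above.

# +
from sklearn.metrics import roc_auc_score

# Predicted probability of the positive class on the same held-out split for both models
proba_rfc = best_est_RFC.predict_proba(X_test)[:, 1]
proba_xgb = best_est_XGB.predict_proba(X_test)[:, 1]

print('Random Forest test AUC: {:.3f}'.format(roc_auc_score(y_test, proba_rfc)))
print('XGBoost test AUC: {:.3f}'.format(roc_auc_score(y_test, proba_xgb)))
# -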
    # Therefore, it is inferred that RandomForest is the suitable model for this dataset # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Diving into Random Matrix Theory (RMT) # # ### Random symmetric matrix (Furedi & Komlos) # # A = ($a_{ij}$) is an n$\times$n matrix whose entries for i$\ge$j are independent random variables and $a_{ji} = a_{ij}$. # # For every i > j, $\mathbb{E} a_{ij}$ = $\mu$, $\mathbb{D}^2 a_{ij}$ = $\sigma^2$ and $\mathbb{E} a_{ii}$ = $\nu$. # # For any c > 2 $\sigma$ with the probability 1 - o(1), all eigenvalues except for at most o(n) lie in the interval $\mathcal{I} = (- c \sqrt{n}, c \sqrt{n}$). # # If $\mu$ = 0, with the probability 1 - o(1), all eigenvalues belong to $\mathcal{I}$. # # While $\mu$ > 0, only the largest eigenvalues $\lambda_1$ is outside $\mathcal{I}$. # # ### Eigenvalues distribution for some symmetric class of Hermitian matrices (GOE, GUE and GSE) # # **Universality** depends only on a symmetric class, which is related with parameter $\beta$, even for non-Hermitian ensembles: # # $$\beta = \frac{1}{T}.$$ # # + $\beta$ = 0: **Poisson distribution** (unrelated); # # + $\beta$=1: **GOE**; # # + $\beta$=2: **GUE**; # # + $\beta$ = 4: **GSE**; # # + $\beta = \infty$: **limiting distribution** (zero temperature). # + from itertools import combinations from numpy import array, ceil, concatenate, cosh, cumsum, diag, diff, dot, exp, eye, float64, floor, full_like from numpy import gradient, int64, linspace, mat, mean, nonzero, ones, pi, prod, sign, sqrt, trace, trapz, zeros from numpy.linalg import cond, det, eig, eigvals, eigvalsh, inv, norm, qr, slogdet, svd from numpy.random import chisquare, gamma, permutation, rand, randint, randn from scipy.integrate import odeint, quad from scipy.linalg import coshm, cosm, expm, logm, signm, sinhm, sinm, sqrtm, tanhm, tanm from scipy.sparse import spdiags from scipy.special import airy, comb from time import time from sympy import symbols, integrate import matplotlib.pyplot as plt from __future__ import division # - # ## 0 辅助函数 # ### Double factorial (双阶乘) # # In the univariate case, we have the following moments for the Hermite, Laguerre, and Jacobi weight functions: # $$\int_{\mathbb{R}} x^k e^{- x^2 / 2} dx = (2 k - 1) !! = (- 1)^{k / 2} H_k(0),$$ # # $$\int_{[0, \infty)} x^k x^{\gamma} e^{- x}dx = (\gamma + 1)_k = L_k^{\gamma}(0),$$ # # and # # $$\int_{[0, 1]} x^k x^a (1 - x)^b dx = \frac{(a + 1)_k \Gamma(a + b + 2)}{\Gamma(a + 1)\Gamma(a + b + k + 2)} = P_k^{a, b}(0).$$ # In the above, $k \geq 0$ and # # $$\Gamma(m + \frac{1}{2}) = \frac{(2 m - 1) !! 
\sqrt{\pi}}{2^m}$$ # # [随机矩阵引论:理论与实践](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/RM_introduction_python.ipynb): Chapter 0 # ### Determinant (行列式) # # + Define recursively: # # $$det(A) = \Sigma_{\sigma \in S_n} sign(\sigma)\prod A_{i, \sigma_i}.$$ def determinant(A): if len(A) <= 0: return None elif len(A) == 1: return A[0][0] else: s = 0 for i in range(len(A)): # 余子式 Ai = [[row[a] for a in range(len(A)) if a != i] for row in A[1:]] s += A[0][i] * det(Ai) * (-1) ** (i % 2) return s A = randn(5, 5) # %time determinant(A) # + Import **NumPy** library # %time det(A) # ### Permanent (积和式) # # Definition: # # $$per(A) = \Sigma_{\sigma \in S_n} \prod A_{i, \sigma_i}.$$ # # Methods: # # + 1 Define recursively; # + 2 Ryser formula(more faster): By counting multiplications it has efficiency $O(m \times 2^m)$. Assume $S \subseteq M$, $\bar{S}$ denotes the complementary set M \ S, |S| denotes the cardinality; # # $$per(A) = \sum_{S \subseteq M} (- 1)^{|S|} \prod_{j = 1}^m \sum_{i \in \bar{S}} A_{ij}$$ # # + 3 It is related to the polarization identity for symmetric tensors; # # + 4 From the partial derivatives of determinants of some matrices. # # Researchers: # # + ** (AAAI Fellow, 1992; Turing Award, 2010)**, ** (Turing Award, 1974)**, Vardi et. al. # # + , 2010: The permanent of a square matrix. def permanent(A): if len(A) <= 0: return None elif len(A) == 1: return A[0][0] else: s = 0 for i in range(len(A)): # 余子式 Ai = [[row[a] for a in range(len(A)) if a != i] for row in A[1:]] s += A[0][i] * det(Ai) return s # %time permanent(A) # ## 1 Gaussian normal distribution # # The **probability density function (pdf)** of standard normal distribution: # # $$f(x) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x - \mu)^2}{2}}$$ # + h = 1e-5 N = int(8 / h) x = linspace(-4, 4, N) G = randn(N) plt.figure(figsize=(5, 5)) plt.hist(G, normed=True, align=u'mid', label=r'Gaussian samples') plt.plot(x, exp(- x ** 2 / 2) / sqrt(2 * pi), label='Probability density') plt.title('Gaussian normal distribution', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.legend(loc='best') plt.show() # - # ## 2 Gaussian white noise and Brownian motion # + h = 1e-3 N = int(1 / h) x = linspace(0, 1, N) # Gaussian white noise dW = randn(N) * sqrt(h) # Brownian motion W = cumsum(dW) plt.figure(figsize=(5, 5)) plt.plot(x, W) plt.title('Brownian motion', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$W$', fontsize=18) plt.show() # - # ## 3 A simple random matrix test function def cmp_A(A, x): res = full_like(A, 0.0) m = len(A) n = len(A[0]) for i in range(m): for j in range(n): if A[i, j] < x: res[i, j] = 1.0 return res # mean number trials = 20000 N = 20 v = 0 for i in range(trials): A = randn(N, N) # method 1 A = A < 0.5 A = A.astype(float64) # method 2 #A = cmp_A(A, 0.5) # method 3 #A = randint(2, size=(N, N)) #A = A.astype(float64) # entries ±1 with equal probability A = 2 * A - 1 # Q is a random orthogonal matrix Q = qr(randn(N, N))[0] # rotated by Q A = dot(Q, A) v += (A[0, 0] * A[1, 1]) ** 2 - 1 print v / trials # ## 4 Longest Increasing Subsequences (LIS) # # In the language of probability theory, we take a uniform random permutation $\pi$ of {1, 2, ..., n}. We say that $\pi(i_1)$, ..., $\pi(i_k)$ is an increasing subsequence in $\pi$ if $i_1$ < $i_2$ < ... < $i_k$ and $\pi(i_1)$ < $\pi(i_2)$ < ... < $\pi(i_k)$. Let $l_n(\pi)$ be the length of the longest increasing subsequence. 
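# To make the definition concrete, here is a minimal sketch (illustrative, small n only) that enumerates
# every permutation, computes $l_n$ with the standard patience/bisect trick, and tabulates
# $\mathbb{P}(l_n \le k)$ exactly; the unitary-group Monte Carlo estimate of the same quantity appears
# in the cells below.

# +
from itertools import permutations
from bisect import bisect_left

def lis_length(seq):
    # length of the longest (strictly) increasing subsequence of a sequence of distinct values
    tails = []
    for v in seq:
        i = bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

n = 4
counts = {}
for p in permutations(range(1, n + 1)):
    length = lis_length(p)
    counts[length] = counts.get(length, 0) + 1

total = float(sum(counts.values()))            # n!
for k in sorted(counts):
    prob = sum(counts[j] for j in counts if j <= k) / total
    print('P(l_n <= %d) = %.4f' % (k, prob))
# -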
# # There is an interesting link between the **moments of the eigenvalues of Q** and the number of permutations of length n with longest increasing subsequence less than or equal to length k. # # For example: # # + the permutation (5 1 3 2 4) has (1 2 4) and (1 3 4) as the longest increasing subsequences of length 3. # # + the permutation (3 1 8 4 5 7 2 6 9 10) has (1 4 5 7 9 10) and (1 4 5 6 9 10) as the longest increasing subsequences of length 6. # ### Random matrix approach # # $$\mathbb{P}(l_n \le k) = \frac{1}{n!} \int_{U_k} |Tr(Q)|^{2n} dQ$$ # # where dQ denotes normalized Haar measure on the unitary group $U_k$ of k × k matrices Q. # # see, Rains, Odlyzko, Deift, Baik, Johansson, Diaconis, et al. # + n = 4 k = 2 trials = 30000 res = [] for i in range(trials): A = randn(k, k) + randn(k, k) * 1j # QR algorithm does not guarantee nonnegative diagonal entries in R Q = qr(A)[0] #R = qr(A)[1] # obtain a Haar-distributed unitary matrix #Q = dot(Q, diag(sign(diag(R)))) # a simple correction by randomly perturbing the phase Q = dot(Q, diag(exp(2 * pi * 1j * rand(k)))) res.append(abs(trace(Q)) ** (2 * n)) mean(res) # - # ### [Patience sorting](https://en.wikipedia.org/wiki/Patience_sorting) # # > One-person card games are called **solitaire games** in American and **patience games** in British. # + [Algorithm Implementation/Sorting/Patience sort](https://en.wikibooks.org/wiki/Algorithm_Implementation/Sorting/Patience_sort) import bisect, heapq def patiencesort(seq): piles = [] for x in seq: new_pile = [x] i = bisect.bisect_left(piles, new_pile) if i != len(piles): piles[i].insert(0, x) else: piles.append(new_pile) print "longest increasing subsequence has length =", len(piles) # priority queue allows us to retrieve least pile efficiently for i in range(len(seq)): small_pile = piles[0] seq[i] = small_pile.pop(0) if small_pile: heapq.heapreplace(piles, small_pile) else: heapq.heappop(piles) assert not piles foo = [4, 65, 2, 4, -31, 0, 99, 1, 83, 782, 1] patiencesort(foo) print foo # ## 5 Wigner's semi-circle law # # [随机矩阵引论:理论与实践](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/RM_introduction_python.ipynb): Chapter 1, 3, 5 # ## 6 Finite semi-circle law # # + The weight function of GUE: # # $$w(x) = e^{- x^2 / 2}$$ # # + The weight function of LUE: # # $$w(x) = x^a e^{- x / 2}$$ # # + **Hermite polynomial**: # # $$ # H_k(x) = # \begin{cases} # 1, & k = 0 \\ # x, & k = 1 \\ # x H_{k - 1}(x) - (k - 1) H_{k-2}(x), & k \ge 2 # \end{cases} # $$ # # [随机矩阵引论:理论与实践](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/RM_introduction_python.ipynb): Chapter 12 # + n = 5 # Christoffel-Darboux formula x = linspace(-1, 1, 1000) x = x * sqrt(2 * n) * 1.3 x = array(x, dtype=float64) # -1st Hermite polynomial H_old = 0 * x # 0th Hermite polynomial H = 1 + 0 * x for i in range(n): H_new = (sqrt(2) * x * H - sqrt(i) * H_old) / sqrt(i + 1) H_old = H H = H_new H_new = (sqrt(2) * x * H - sqrt(n) * H_old) / sqrt(n + 1) # page 420 of Mehta's k = n * H ** 2 - sqrt(n * (n + 1)) * H_new * H_old # correct normalization multiplied k = k * exp(- x * x) / sqrt(pi) k = array(k, dtype=float64) # rescale on [-1, 1] and the area is π/2 plt.figure(figsize=(5, 5)) plt.plot(x / sqrt(2 * n), k * pi / sqrt(2 * n)) plt.title('Finite semicircle law', fontsize=18) plt.show() # - # ## 7 Consecutive spacings of the eigenvalues of Hermitian random matrices # # ### GOE # # + Nearest neighborhood spacings: # # $$\Delta = \lambda_{k + 1}-\lambda_k$$ # # > We can study Next-Nearest neighborhood spacings as 
well. # # + **Wigner surmise**: # # $$P(s) = \frac{\pi s}{2} e^{- \frac{\pi s^2}{4}}$$ # # where s denotes the spacings. # # [随机矩阵引论:理论与实践](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/RM_introduction_python.ipynb): Chapter 2 # # ### GUE (Edelman & , 2005) # # + It's related to **$Painlev\acute{e}$ V equation** and **zeros of Riemann $\zeta$ function**. # ## 8 Mar$\check{c}$enko-Pastur law # # Eigenvalues distribution of **sample covariance matrices**, namely, singular value distribution of initial Gaussian matrices. # # + A singular Mar$\check{c}$enko-Pastur distribution: # # $$d_{\mu_{MP}}(x) = \frac{\sqrt{4 - x}}{2 \pi \sqrt{x}} dx$$ # # [随机矩阵引论:理论与实践](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/RM_introduction_python.ipynb): Chapter 13, 14 # ## 9 Stochastic differential operator (Sutton & Edelman) # # ### Hermite # # + Soft edge on the right: # # $$A + \frac{2}{\sqrt{\beta}} W$$ # # $$\frac{d^2}{dx^2} - x + \sigma dW$$ # # approximate **Airy operator** A by $\frac{1}{h^2} D_2 - diag(x_1, \dots, x_n)$ and $\frac{2}{\sqrt{\beta}} W$ by $\frac{2}{\sqrt{\beta}} \frac{1}{\sqrt{h}} diag(G_1, \dots, G_n)$ with n = 200, $h = n^{-1/3}$, $x_k = hk$, and $G_1, \dots, G_n$ are i.i.d. standard Gaussians, and over $10^5$ trials. # # ### Jacobi (smallest singular value) # # + hard edge on the left: # # (use forward differences) # # $$\tilde{J}_a + \sqrt{\frac{2}{\beta}} \frac{1}{\sqrt{y}} W$$ # # approximate **Bessel operator** in Liouville form by $-\frac{1}{h} D_1 + (a + \frac{1}{2}) diag(\frac{1}{y_1}, \dots, \frac{1}{y_n})$ and random term by $\sqrt{\frac{2}{\beta}} diag(\frac{1}{\sqrt{y_1}}, \dots, \frac{1}{\sqrt{y_n}}) (\frac{1}{\sqrt{h}} diag(G_1, \dots, G_n)) (\frac{1}{2} (- \Omega D_1 \Omega))$ with n = 2000, $h = \frac{1}{n}$, $y_k = hk$, and $G_1, \dots, G_n$ i.i.d. standard Gaussians, and over $10^5$ trials. # # **Note**: The averaging matrix $\frac{1}{2} (- \Omega D_1 \Omega)$ splits the **noise** over two diagonals instead of one. # # + hard edge on the right: # # $$\tilde{J}_b + \sqrt{\frac{2}{\beta}} \frac{1}{\sqrt{y}} W$$ # # approximate **Bessel operator** in Liouville form by and random term by with n = 200, $h = \frac{1}{n + \frac{a + b + 1}{2}}$, $x_k = hk$, and $G_1, \dots, G_n$ i.i.d. standard Gaussians, and over $10^5$ trials. # # ### Square Laguerre (smallest singular value) # # + hard edge on the left: # # $$J_a + \frac{2}{\sqrt{\beta}} W$$ # # in which the Bessel operator has **type I BCs**. # # approximate **Bessel operator** in Liouville form by $- 2 diag(\sqrt{x_1}, \dots, \sqrt{x_n}) (\frac{1}{h} D_1) + a diag(\frac{1}{\sqrt{x_1}}, \dots, \frac{1}{\sqrt{x_n}})$ and random term by $\frac{2}{\sqrt{\beta}} (\frac{1}{2} (- \Omega D_1 \Omega)) (\frac{1}{\sqrt{h}} diag(G_1, \dots, G_n))$ with n = 2000, $h = \frac{1}{n}$, $x_k = hk$, and $G_1, \dots, G_n$ i.i.d. standard Gaussians, and over $10^5$ trials. # # ### Rectangular Laguerre # # + hard edge on the left: # # $$J_{a - 1} + \frac{2}{\sqrt{\beta}} W$$ # # in which the Bessel operator has **type II BCs**. # # approximate **Bessel operator** in Liouville form by and random term by with n = 200, $h = \frac{1}{n + \frac{a + 1}{2}}$, $x_k = hk$, and $G_1, \dots, G_n$ i.i.d. standard Gaussians, and over $10^5$ trials. 
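# Before turning to the bulk, here is a minimal numerical sketch of the Hermite soft-edge discretization
# described at the start of this section. The parameters follow the text (n = 200, $h = n^{-1/3}$); only
# the number of trials is reduced from the quoted $10^5$ to keep the sketch quick. The histogram of the
# largest eigenvalue should approach the Tracy-Widom shape for the chosen $\beta$.

# +
from numpy import arange, diag, ones, sqrt
from numpy.random import randn
from numpy.linalg import eigvalsh
import matplotlib.pyplot as plt

beta = 2
n = 200
h = n ** (-1.0 / 3)
x = h * arange(1, n + 1)

# deterministic part: (1/h^2) D_2 - diag(x_1, ..., x_n), with D_2 the second-difference matrix
D2 = -2.0 * diag(ones(n)) + diag(ones(n - 1), 1) + diag(ones(n - 1), -1)
A0 = D2 / h ** 2 - diag(x)

trials = 2000
E = []
for _ in range(trials):
    # add the noise term (2 / sqrt(beta)) (1 / sqrt(h)) diag(G_1, ..., G_n)
    A = A0 + (2.0 / sqrt(beta)) * diag(randn(n)) / sqrt(h)
    E.append(max(eigvalsh(A)))

plt.figure(figsize=(8, 8))
plt.hist(E, bins=40, normed=True)
plt.title('Largest eigenvalue of the discretized stochastic Airy operator', fontsize=14)
plt.xlabel(r'$\lambda_{max}$', fontsize=14)
plt.ylabel(r'$\rho(\lambda_{max})$', fontsize=14)
plt.show()
# -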
# # ### Bulk (spacing distribution of the gap between eigenvalues m and m+1) # # + Jacobi near one-half # # ### Hermite near zero # # + For even case (2m $\times$ 2m), sine operators have **type I BCs**: # # $$J_{-\frac{1}{2}} + \frac{2}{\sqrt{2 \beta}} W$$ # # approximate $J_{-\frac{1}{2}}$ by $- 2 diag(\sqrt{x_1}, \dots, \sqrt{x_m}) (\frac{1}{h} D_1) - \frac{1}{2} diag(\frac{1}{\sqrt{x_1}}, \dots, \frac{1}{\sqrt{x_m}})$; approximate $W_{11}, W_{22}$ by $\frac{1}{\sqrt{h}} diag(G_1, \dots, G_m)$; approximate $W_{12}$ by $\frac{1}{\sqrt{h}} diag(G, \dots, G) \frac{1}{2} (- \Omega D_1 \Omega)$ with m = 200, $h = \frac{1}{m}$, $x_k = hk$. # # + For odd case ((2m + 1) $\times$ (2m + 1)), Bessel operators have **type II BCs**. # # ### Tracy-Widom law (Edelman & , 2005) # # $$L^{\beta} = \frac{d^2}{dx^2} - x + \frac{2}{\sqrt{\beta}} dW$$ # ## 10 Girko's circle law # # Take an n$\times$n matrix with independent entries of standard deviation $n^{-1/2}$ and you'll find that its eigenvalues fill the **complex unit disk uniformly** - exactly so in the limit $m \rightarrow \infty$. # # [Beyond Girko's Law](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/beyond_Girko_law.ipynb): Chapter 1 # ## 11 Girko's elliptic law # # [Beyond Girko's Law](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/beyond_Girko_law.ipynb): Chapter 5 # # ## 12 Simple ring law # # [Beyond Girko's Law](https://github.com/brucejunlee/RMT_Theory_Applications/blob/master/beyond_Girko_law.ipynb): Chapter 6 # ## 13 Eigenvalues distribution of non-Hermitian random matrices # # + Real eigenvalues: **Uniform distribution** # + N = 50 trials = 1000 E = [] total = 0 for _ in range(trials): G = (rand(N, N) - 0.5) * 2 es = eigvals(G) for e in es: if e.imag == 0: total += 1 E.append(e.real) print total E = array(E) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Real eigenvalues distribution of non-Hermitian matrices', fontsize=18) plt.show() # + Complex eigenvalues: Scaled **imaginary parts** # + N = 10 trials = 1000 E = [] v = 1 / sqrt(N) total = 0 for _ in range(trials): G1 = randn(N, N) + 1j * randn(N, N) G1 = mat(G1) A = (G1 + G1.H) / 2 # GUE E(trA^2)=N^2 G2 = randn(N, N) + 1j * randn(N, N) G2 = mat(G2) B = (G2 + G2.H)/2 # GUE E(trB^2)=N^2 J = (A + 1j * v * B) / sqrt(N) es = eigvals(J) for e in es: if e.imag != 0: total += 1 # scaled imaginary parts E.append(2 * N * e.imag) print total E = array(E) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Scaled imaginary parts of complex eigenvalues of weakly non-Hermitian matrices', fontsize=18) plt.show() # - # ## 14 Tracy-Widom law # # + Extreme eigenvalues distribution (**edge distribution** or largest eigenvalue distribution) # # > The largest eigenvalue distribution of Hermite ensemble and Laguerre ensemble $\sim$ Tracy-Widom law in **different normalization constants**. 
# # + $\beta$-Hermite ensemble: **Tracy & Widom, 2008** # + $\beta$-Laguerre ensemble: # + $\beta$ = 1: **Johnstone, 2001** # + $\beta$ = 2: **Johansson, 2000** # + N = 40 trials = 5000 E = [] for i in range(trials): A = randn(N, N) + randn(N, N) * 1j A = mat(A) A = (A + A.H) / 2 E.append(sorted(eigvalsh(A))[-1]) E = array(E) # normalization E = N ** (1 / 6) * (E - 2 * sqrt(N)) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Largest eigenvalues distribution of Hermitian matrices', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.show() # + **Limiting** Tracy-Widom density # # > Theoretical probability density function is related to the solution of **$Painlev\acute{e}$ II equation**: # # $$q^{\prime \prime} = s q + 2 q^3$$ # # with the **BCs** $q(s) \sim Ai(s)$ as $s \rightarrow \infty$. # # The probability distributions thus obtained are the famous **Tracy-Widom** distributions. # # The probability distribution $f_2(s)$, corresponding to $\beta = 2$, is given by # # $$f_2(s) = \frac{d}{ds} F_2(s),$$ # # where # # $$F_2(s) = \exp(- \int_s^{\infty} (x - s) q(x)^2 dx).$$ # # The probability distributions $f_1(s)$ and $f_4(s)$ for $\beta = 1$ and $\beta = 4$ are the derivatives of $F_1(s)$ and $F_4(s)$ respectively, which are given by # # $$F_1(s)^2 = F_2(s) \exp(- \int_s^{\infty} q(x) dx)$$ # # and # # $$F_4(\frac{s}{2^{\frac{2}{3}}})^2 = F_2(s) (cosh(\int_s^{\infty} q(x) dx))^2.$$ # # 更多特殊函数查阅[scipy.special](https://docs.scipy.org/doc/scipy-0.16.1/reference/special.html) # # > The **trapezoidal rule** cumsum function is used to approximate numerically the integrals. # > To solve this equation with **odeint**, we must first convert it to **a system of first order equations**. # # $$\frac{d}{ds} \begin{pmatrix} q & \\ p \end{pmatrix} = \begin{pmatrix} p & \\ s q + 2 q^3 \end{pmatrix}.$$ # # Let y be the vector [q, p]. # # **Note**: A system of first order equations can be constructed in another form to simplify the computation. 
def PII(y, s): q, p = y dyds = [p, s * q + 2 * q ** 3] return dyds # + # right endpoint s0 = 5 # left endpoint sn = -8 # initial conditions y0 = [airy(s0)[0], airy(s0)[1]] sspan = linspace(s0, sn, 1000) y = odeint(PII, y0, sspan) q = y[:, 0] dI0 = I0 = J0 = 0 dI = - cumsum((q[:-1] ** 2 + q[1:] ** 2) / 2 * diff(array(sspan))) + dI0 dI = array([0] + list(dI)) I = - cumsum((dI[:-1] + dI[1:]) / 2 * diff(array(sspan))) + I0 I = array([0] + list(I)) J = - cumsum((q[:-1] + q[1:]) / 2 * diff(array(sspan))) + J0 J = array([0] + list(J)) F2 = exp(-I) f2 = gradient(F2, sspan) F1 = sqrt(F2 * exp(-J)) f1 = gradient(F1, sspan) #F4 = sqrt(F2) * cosh(J / 2) #s4 = sspan / (2 ** (2 / 3)) #f4 = gradient(F4, s4) plt.figure(figsize=(5, 5)) plt.plot(sspan, f1, label=r'$\beta = 1$') plt.plot(sspan, f2, label=r'$\beta = 2$') #plt.plot(sspan, f4, label=r'$\beta = 4$') plt.title('Limiting Tracy-Widom law', fontsize=18) plt.xlabel(r'$s$', fontsize=18) plt.ylabel(r'$f_{\beta}(s)$', fontsize=18) plt.legend(loc='best') plt.show() # - # ## 15 Faster Tracy-Widom law # # + Using **symmetric tridiagonal matrix** # # $$H_n^{\beta} \sim \frac{1}{\sqrt{2}} \begin{pmatrix} \mathcal{N}(0, 2) & \chi_{(n - 1)\beta} & \qquad & \qquad & \qquad \\ \chi_{(n - 1)\beta} & \mathcal{N}(0, 2) & \chi_{(n - 2)\beta} & \qquad & \qquad \\ \qquad & \ddots & \ddots & \ddots & \qquad \\ \qquad & \qquad & \chi_{2 \beta} & \mathcal{N}(0, 2) & \chi_{\beta} \\ \qquad & \qquad & \qquad & \chi_{\beta} & \mathcal{N}(0, 2) \end{pmatrix}$$ # # We call it **$\beta$-Hermite ensemble**. # + The observation that if $k = 10 n^{1/3}$, then the largest eigenvalue is determined numerically by the top $k \times k$ segment of n. (This is related to the decay of the **Airy function** that arises in the kernel whose eigenvalues determine the largest eigenvalue distribution. The **'magic number'** 10 here is not meant to be precise. It approximates the index k such that $\frac{v(k)}{v(1)} \approx \epsilon$, where $\epsilon$ = $2^{-52}$ for double precision arithmetic, and v is the eigenvector corresponding to the largest eigenvalue. For small $\beta$, it may be necessary to crank up the number 10 to a larger value.) 
# + n = 1e4 E = [] trials = 10000 alpha = 10 # real beta = 1 # cutoff parameters k = int(round(alpha * n ** (1 / 3))) for i in range(trials): d1 = sqrt(chisquare(array([j * beta for j in range(int(n - 1), int(n - k), -1)]))) d0 = randn(k) H = diag(d1, 1) + diag(d0) # scale so largest eigenvalues is near 1 H = (H + H.T) / sqrt(4 * n * beta) E.append(sorted(eigvalsh(H))[-1]) E = array(E) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Largest eigenvalues distribution of Hermitian matrices', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.show() # - # ## 16 $\beta$-Ensembles with Covariance # # + **** (PhD Thesis, MIT 2014, doctoral advisor: ****) # ### $\beta$-Wishart (Recursive) Model # # + Broken-Arrow Matrix def betaWishart(m, n, beta, D): Z = zeros((n, n), dtype=complex) if n == 1: return sqrt(chisquare(m * beta)) * sqrt(D[0, 0]) else: Z[:-1, :-1] = betaWishart(m, n - 1, beta, D[:-1, :-1]) Z[-1, :-1] = zeros(n - 1) Z[:-1, -1] = array([sqrt(chisquare(beta)) * sqrt(D[-1, -1]) for _ in range(n-1)]) Z[-1, -1] = sqrt(chisquare((m - n + 1) * beta)) * sqrt(D[-1, -1]) return diag(svd(Z)[1]) # m > (n - 1) m = 4 n = 3 beta = 1 D = ones((n, n)) betaWishart(m, n, beta, D) # ### $\beta$-MONOVA Model # # Given $X_{m \times n}$, $Y_{p \times n}$ and $\Omega_{n \times n}$ is diagonal & real, then $\Omega X^{\ast} X \Omega$ have **eigendecomposition**: # # $$U \Lambda U^{\ast},$$ # # and $\Omega X^{\ast} X \Omega (Y^{\ast}Y)^{-1}$ have **eigendecomposition** # # $$V M V^{\ast}.$$ # # Then, # # $$(C, S) = gsvd_C (Y, X\Omega) = (M + I)^{- \frac{1}{2}}$$ # # $\Omega X^{\ast} X \Omega (Y^{\ast}Y)^{-1} \sim \Lambda U^{\ast}(Y^{\ast}Y)^{-1}U \sim \Lambda ((U^{\ast}Y^{\ast})(YU))^{-1} \sim \Lambda (Y^{\ast}Y)^{-1}$ def betaMANOVA(m, n, p, beta, Omega): L = betaWishart(m, n, beta, dot(Omega, Omega)) M = inv(betaWishart(p, n, beta, inv(L))) return sqrtm(M + eye(n)) m = 4 p = 5 n = 3 beta = 1 Omega = ones((n, n)) betaMANOVA(m, n, p, beta, Omega) # ## 17 Smallest eigenvalues distribution (, 2010) # # + N = 6 trials = 10000 E = [] for i in range(trials): A = randn(N, N) + randn(N, N) * 1j A = mat(A) A = (A + A.H) / 2 E.append(sorted(eigvalsh(A))[0]) E = array(E) # normalization E = N ** (1 / 6) * (E - 2 * sqrt(N)) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Smallest eigenvalues distribution of Hermitian matrices', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.show() # + **Bidiagonalization (, 2003)** # # The bidiagonal of $\chi$'s can be viewed as a discretization of a **stochastic Bessel operator** # # $$- \sqrt{x} \frac{d}{dx} + \frac{1}{\sqrt{2}} dW,$$ # # where $dW$ denotes some white noise. # # $$\chi_k \approx \sqrt{k} + \frac{1}{\sqrt{2}}G$$ # # We use k $\times$ k $\chi$'s in the bottom and Gaussians for the rest. # # + Some random matrix statistics of the **multivariate hypergeometric functions** are the largest and smallest eigenvalue of a **Wishart matrix**. # # For example, the Wishart matrix can be written as # # $$L_n^{\beta} = B_n^{\beta} B_n^{\beta T},$$ # # where # # $$B_n^{\beta} = \begin{pmatrix} \chi_{2 a} & \qquad & \qquad & \qquad \\ \chi_{(n - 1)\beta} & \chi_{2 a - \beta} & \qquad & \qquad \\ \qquad & \ddots & \ddots & \qquad \\ \qquad & \qquad & \chi_{\beta} & \chi_{2 a - (n - 1) \beta} \end{pmatrix},$$ # # where $a > \frac{\beta}{2} (n - 1)$. # # We call it **$\beta$-Laguerre ensemble**. 
# # The probability density function of the smallest eigenvalue of the Wishart matrix is # # $$p(x) = x^9 e^{- \frac{3x}{2}} {_{2}F_0^1}(3, 4; ;-2 \frac{I_2}{x})$$ # # and # # $$f(x) = x^{kn} e^{- \frac{n x}{2}} {_{2}F_0^{2 / \beta}}(-k, \beta \frac{n}{2} + 1;;- \frac{2}{x}I_{n - 1}),$$ # # where $k = a - (n - 1)\frac{\beta}{2} -1$ is a nonnegative integer. # + N = 5 trials = 10000 beta = 6 #beta = 0.5 a = 16 #a = 5 E = [] for i in range(trials): d1 = sqrt(chisquare(array([j * beta for j in range(int(N - 1), 0, -1)]))) d0 = sqrt(chisquare(array([2 * a - j * beta for j in range(N)]))) B = diag(d1, -1) + diag(d0) L = dot(B, B.T) E.append(sorted(eigvalsh(L))[0]) E = array(E) plt.figure(figsize=(8, 8)) plt.hist(E, normed=True) plt.title('Smallest eigenvalues distribution of Wishart matrices', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.show() # - # ## 18 Expected number of real eigvalues (, 1993) # # $$\lim_{n \rightarrow \infty} \frac{E_n}{\sqrt{n}} = \sqrt{\frac{2}{\pi}}$$ # # where $E_n$ denotes the **expected number of real eigvalues of standard normalized random matrices** uniformly distributed in [-1, 1]. # # + **Multivariate hypergeometric functions** are used in the proof of this theorem. ns = [1, 100, 200, 300, 400, 500] trials = 100 print r'n trials exprimental En theoretical En time(sec)' for n in ns: t1 = time() E = [] for i in range(trials): A = randn(n, n) es = eigvals(A) total = 0 for e in es: if e.imag == 0: total += 1 E.append(total) E = array(E) t2 = time() print n, ' ', trials, ' ', mean(E), ' ', sqrt(2 * n / pi), ' ', t2 - t1 # ## 19 Quarter-circle law # # + **singular values** distribution of Gaussian random matrices: # # $$\lim_{m \rightarrow \infty} \frac{m \beta}{n} = \gamma \leq 1.$$ # # When $\gamma = 1$, by making the change of variables $x = y^2$, one obtains the well-known **quarter-circle law** for the singular values of a matrix from $G^{\beta}(n, n)$. 
# # + The minimum sigular value of an $N \times N$ standard complex Gaussian matrix H satisfies **(, 1989; Jianhong Shen(Minnesota), 2001)** # # $$\lim_{N \rightarrow \infty} P[N \sigma_{min} > x] = e^{- x - \frac{x^2}{2}}$$ # + m = n = 50 trials = 1000 V = [] a = 0 b = 2 x = linspace(a, b, 1000) for i in range(trials): A = randn(m, n) vs = svd(A)[1] for v in vs: V.append(v) V = array(V) # normalization V = V / sqrt(m) plt.figure(figsize=(8, 8)) plt.hist(V, normed=True, label='Gaussian random matrices') plt.plot(x, sqrt(4 - x ** 2) / pi, label='quarter-circle law') plt.title('Quarter-circle law', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.legend(loc='best') plt.show() # - # ## 20 Condition number distribution # # + **** # # + ****, **** and Demmel # # + [](http://math.mit.edu/~edelman/): **Eigenvalues and Condition Numbers of Random Matrices** (PhD Thesis, 1989) # # $$\lim_{n \rightarrow \infty} P(\frac{\kappa}{n} < x) = e^{- \frac{2}{x} - \frac{2}{x^2}}$$ # # The condition number $\kappa$ is defined as the ratio of the largest to smallest singular values: # # $$\kappa = \lVert A \rVert \lVert A^{-1} \rVert = \frac{\sigma_{max}}{\sigma_{min}}$$ # + n = 20 trials = 5000 C = [] eps = 1e-10 for i in range(trials): A = randn(n, n) + randn(n, n) * 1j A = mat(A) A = (A + A.T) / (4 * sqrt(n)) C.append(cond(A) / n) C = array(C) num_bins = 100 x = linspace(0, 30, 1000) plt.figure(figsize=(8, 8)) plt.hist(C, bins=num_bins, normed=True, label='GUE') plt.plot(x, (2 * x + 4) * exp(- 2 / (x + eps) - 2 / (x + eps) ** 2) / (x + eps) ** 3, label='Edelman law') plt.title('Condition number distribution of GUEs', fontsize=18) plt.xlabel(r'$x$', fontsize=18) plt.ylabel(r'$\rho(x)$', fontsize=18) plt.xlim(0, 30) plt.legend(loc='best') plt.show() # - # ## 21 Log determinant # # The **log-determinant of a Wigner matrix** is a **linear statistic of its singular value**, i.e., the sum of the logarithm of the singular values: $$\log(\lvert det(H_n)\rvert)=2\Sigma_{k=1}^n \log(\sigma_k)$$ # # + For the non-Hermitian case, due to the **instability of the spectrum** for such matrices, we need to work with the log-determinants $\log \lvert det(M_n-z_0) \rvert$ rather than with the **Stieltjes transform** $\frac{1}{n}trace(M_n-z_0)^{-1}$ in order to exploit **Girko's Hermitization method**. # # > & . **Random matrices: universality of local spectral statistics of non-hermitian matrices**. A = randn(5, 5) A = (A + A.T) / 2 logdet = slogdet(A)[1] print logdet # ## 22 Partition the singular values of $GUE_n$ # # $$GUE_{\infty} = LUE_{\infty} \cup LUE_{\infty}$$ # # $$GUE_7 = LUE_3^{\frac{1}{2}} \cup LUE_4^{-\frac{1}{2}}$$ # # + ****, 2015, [The Singular Values of the GUE (Less is More)](http://www-math.mit.edu/~edelman/publications/singular_values.pdf) # + We probabilistically un-mix singular values sampled from the $n \times n$ **GUE** to produce samples distributed as the union of two **Laguerre ensembles**. # + # number of samples trials = 10000 # n = order of GUE n = 7 mid_left = int(floor(n / 2)) mid_right = int(ceil(n / 2)) outlist = zeros((trials, n)) # We need a list of all partitions of {1, 2, ..., n} into two balanced parts. # We generate all subsets of size floor(n / 2) and their complements. 
# a central binomial coefficient cbinom = int(comb(n, mid_left)) # subsets of {0, 1, ..., n - 1} parta = array([c for c in combinations(range(n), mid_left)]) # their complements partb = zeros((cbinom, mid_right), dtype=int64) for prep in range(cbinom): partb[prep, :] = array(list(set(range(n)).difference(set(parta[prep, :])))) partitions = concatenate((parta, partb), axis=1) P = zeros(cbinom) for rep in range(trials): # Sample singular values from the GUE of appropriate size G = randn(n, n) + randn(n, n) * 1j G = mat(G) A = (G + G.H) / 2 eiglist = array(sorted(abs(eigvalsh(A)))) e_n = len(eiglist) # We'll need the differences of the squares of the eigenvalues singdiffs = zeros((e_n, e_n)) sd = eiglist ** 2 * ones(n) for i in range(e_n): singdiffs[i, :] = sd - sd[i] # The n eigenvalues can be partitioned in binom(n, floor(n/2)) ways. # Compute the relative densities with common factors ommited. for prep in range(cbinom): P[prep] = (abs(prod(eiglist[parta[prep, :]])) / prod(singdiffs[parta[prep, :], :][:, partb[prep, :]])) ** 2 # Separate the singular values with each partition occuring proportionally to its density outlist[rep, :] = eiglist[partitions[list(nonzero(cumsum(P) > rand() * sum(P))[0]), :][0]] #plt.figure(figsize=(10, 10)) fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2) ax1.hist(outlist[:, :3].flatten(), label='LUE_3') plt.suptitle('Eigenvalues distribution of GUEs') ax1.legend(loc='best') ax2.hist(outlist[:, 3:].flatten(), label='LUE_4') ax2.legend(loc='best') plt.show() # - # ## 23 Sturm sequences # # > Efficiently compute histograms of eigenvalues for **symmetric tridiagonal matrices** (time complexity $\mathcal{O}(m n)$) and apply these ideas to random matrix ensembles such as the $\beta$-Hermite ensemble, where n is the dimension of the matrix and m is the number of bins (with arbitrary bin centers and widths) desired in the histogram (m is usually much smaller than n). # # ### Definition # # + Lower right corner submatrix sequences: # # $$\{A_0, A_1, \dots, A_n\}$$ # # + Sturm sequences: # # $$\{d_0, d_1, \dots, d_n\} = \{1, |A_1|, \dots, |A_n|\}$$ # # + Sturm ratio sequences: # # $$r_i = \frac{d_i}{d_{i - 1}}, i = 1 \dots n$$ # # ### Lemma 1 # # > The number of **sign changes** in the Sturm sequence is equal to the number of **negative eigenvalues** of A. # # ### Lemma 2 # # > The number of **negative values** in the Sturm ratio sequence is equal to the number of **negative eigenvalues** of A. # # ### Example # # Given a **symmetric tridiagonal matrix** with values $(a_n, a_{n - 1}, \dots, a_1)$ on the diagonal and $(b_{n - 1}, b_{n - 2}, \dots, b_1)$ on the super/sub-diagonal, then # # $$ # d_i = # \begin{cases} # 1, & i = 0 \\ # a_1, & i = 1\\ # a_i d_{i-1} -b_{i-1}^2 d_{i-2}, & i \in \{2, 3, \dots, n\} # \end{cases} # $$ # # equivalently, # # $$ # r_i = # \begin{cases} # a_1, & i = 1\\ # a_i -\frac{b_{i - 1}^2}{r_{i - 1}}, & i \in \{2, 3, \dots, n\} # \end{cases} # $$ # # For $H_n^\beta$, Sturm ratio sequence of $H_n^\beta - \lambda I$: # # $$ # r_{i, \lambda} = # \begin{cases} # G(- \lambda, 1), & i = 1\\ # G(- \lambda, 1) - \frac{\chi_{\beta (i - 1)}^2}{2 r_{i - 1, \lambda}}, & i \in \{2, 3, \dots, n\} # \end{cases} # $$ # # For $i \ge 2$, the density of $r_i$ conditioned on $r_{i - 1}$ is # # $$f_{r_i|r_{i - 1}}(s_i|s_{i - 1}) = \frac{|s_{i - 1}|^{p_i}}{\sqrt{2 \pi}} e^{- \frac{1}{4}[2(s_i + \lambda)^2-z_i^2]^2}D_{- p_i}(z_i)$$ # # where $p_i = \frac{\beta (i - 1)}{2}$ and $z_i = sign(s_{i - 1})(s_i + \lambda + s_{i - 1})$. 
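# A small deterministic check of Lemma 2 and the recurrence above, with arbitrary illustrative entries
# (not taken from the text): the count of negative Sturm ratios equals the count of negative eigenvalues.

# +
from numpy import array, diag
from numpy.linalg import eigvalsh

a = array([2.0, -1.0, 0.5, -3.0, 1.0])   # (a_1, ..., a_n); a_1 sits in the lower-right corner
b = array([1.0, 0.7, -0.3, 2.0])         # (b_1, ..., b_{n-1})

# symmetric tridiagonal matrix with (a_n, ..., a_1) on the diagonal, as in the definition above
T = diag(a[::-1]) + diag(b[::-1], 1) + diag(b[::-1], -1)

r = [a[0]]                               # r_1 = a_1
for i in range(1, len(a)):
    r.append(a[i] - b[i - 1] ** 2 / r[-1])
r = array(r)

print((r < 0).sum())                     # number of negative Sturm ratios
print((eigvalsh(T) < 0).sum())           # number of negative eigenvalues of T
# -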
# # ### Application # # Separators between histogram bins: # # $$\{-\infty, k_1, k_2, \dots, k_{m - 1}, \infty\}$$ # # Histogram sequence: # # $$\{H_1, H_2, \dots, H_m\}$$ # # where $H_i$ is the number of eigenvalues between $k_{i - 1}$ and $k_i$ for $1 \le i \le m$. # # Let $\Lambda(M)$ be the number of negative eigenvalues of a matrix M, then # # $$H_1 = \Lambda(A - k_1 I),$$ # # and # # $$H_i = \Lambda(A - k_i I) - \Lambda(A - k_{i - 1} I),i \in \{2, 3, \dots, m - 1\},$$ # # and # # $$H_m = n - \Lambda(A - k_{m - 1} I).$$ # # ### Reference # # , and ****, **Sturm sequences and random eigenvalue distributions**, Foundations of Computational Mathematics(FoCM), 2009. # + n = 20 alpha = 10 # complex beta = 2 E = [] trials = 1000 for i in range(trials): d1 = sqrt(chisquare(array([j * beta for j in range(int(n - 1), 0, -1)]))) d0 = randn(n) H = diag(d1, 1) + diag(d0) + diag(d1, -1) H = H / sqrt(2) es = eigvalsh(H) E.append(len(es[es < 0])) E = array(E) print 'E(# negative eigenvalues): %f' % mean(E) # - beta = 2 labda = 0 n = 20 num = [] trials = 1000 for j in range(trials): r = [] r1 = 1 for i in range(1, n): r2 = randn() -labda - chisquare(beta * i) / (2 * r1) r.append(r2) r1 = r2 r = array(r) num.append(len(r[r < 0])) num = array(num) print 'E(# negative Sturm ratio): %f' % mean(num) # ## 24 Angular sychronization problem # # > Suppose we want to find n **unknown angles** $\theta_1, \theta_2, \dots, \theta_n \in [0, 2 \pi)$. # # > We are given $m \le \begin{pmatrix} n \\ 2 \end{pmatrix}$ "noisy" measurements $\delta_{ij}$ which are $\theta_i - \theta_j$ with probability p and uniformly chosen from $[0, 2 \pi)$ with probability 1- p. # # > Our goal is to devise a method that with high probability recovers the angles, under some conditions we impose. # # **Method**: # # + **Eigenvector method** # # **References**: # # + [Fast Angular Synchronization for Phase Retrieval via Incomplete Information](https://math.msu.edu/user_content/docs/FastAngularSync_ViswanthanIwen_201520150924162533237.pdf) # # + [Angular Sychronization by Spectral Methods](http://www.mit.edu/~18.338/projects/tran_slides.pdf) # ## 25 [RMT and Boson computer](http://web.mit.edu/18.338/www/projects/napp_slides.pdf) # # > The computer $A$ outputs a sample from a probability distribution $\mathcal{D}_A$. # # > if there exists a classical algorithm that can efficiently output a sample from a distribution close to $\mathcal{D}_A$, then $P^{\#P} = BPP^{NP}$: a drastic consequence for complexity theory! # # + The distribution of determinants of some matrix ensemble: # # $$det(A) = \Sigma_{j = 1 \dots n} (-1)^{i + j} a_{ij} A_{ij}$$ # # + The distribution of permanent absolute values of some matrix ensemble # # $$Per(A) = \Sigma_{j = 1 \dots n} a_{ij} A_{ij}$$ # # > & ; # # + Traditionally, the permanent computation is **NP-hard**. # # + But, using **tropical algebra** (semiring algebra), the permanent computation is changed into **#P**. # # $$Per(A) = \bigoplus_{j = 1 \dots n} a_{ij} \bigotimes A_{ij}$$ # + n = 7 trials = 1000 P = [] for i in range(trials): A = randn(n, n) P.append(abs(permanent(A))) P = array(P) plt.figure(figsize=(8, 8)) plt.hist(P, normed=True, label='GUE') plt.title('Permanent absolute values distribution of Gaussian matrices', fontsize=18) plt.xlabel(r'$|Per(A)|$', fontsize=18) plt.ylabel('Frequency', fontsize=18) plt.show() # - # ## 26 Spike Model # # > The popular **spiked covariance model**: in which several eigenvalues are significantly larger than all the others, which all equal 1. 
# # + **Stein**, 1956: **Seminal work** # # + Feral & **Peche**, 2006 # # + , , , 2013: [Optimal Shrinkage of Eigenvalues in the Spiked Covariance Model](https://arxiv.org/pdf/1311.0851.pdf) # # > **Loss functions**: # # + Stein Loss: # # $$L^{st}(A, B) = (trace(A^{- 1} B - I) - \log (\lvert B \rvert / \lvert A \rvert)) / 2$$ # # Consider the matricial pivot $\Delta = A^{-1/2} B A^{-1/2}$, then # # $$L^{st}(A, B) = (trace(\Delta - I) - \log \lvert \Delta \rvert) = g(\Delta)$$ # # + Entropy/Divergence Loss: # # $$L^{ent}(A, B) = L^{st}(B, A)$$ # # $$L^{div}(A, B) = L^{st}(A, B) + L^{st}(B, A) = \frac{1}{2}[trace(A^{-1} B - I) + trace(B^{-1} A - I)]$$ # # + Bhattarcharya/Matusita Affinity: # # $$L^{aff} = \frac{1}{2} \log \frac{\lvert A + B \rvert / 2}{\lvert A \rvert^{1/2} \lvert B \rvert^{1/2}}$$ # # Using the pivot $\Delta = A^{-1/2} B A^{-1/2}$, then # # $$L^{aff} = \frac{1}{4} \log(\lvert 2 I + \Delta + \Delta^{-1} \rvert /4)$$ # # + Frechet Discrepancy: # # $$L^{fre}(A, B) = trace(A + B - 2 A^{1/2} B^{1/2})$$ # # Using the pivot $\Delta = A^{1/2} - B^{1/2}$, then # # $$L^{fre} = trace(\Delta^2)$$ # # + Squared Error Loss: # # $$L^{F, 1}(A, B) = \lVert A - B \rVert_F^2$$ # # + Squared Error Loss on Precision: # # $$L^{F, 2}(A, B) = \lVert A^{-1} - B^{-1} \rVert_F^2$$ # # + Nuclear Norm Loss: # # $$L^{N, 1}(A, B) = \lVert A - B \rVert_\ast$$ # # where $\ast$ denotes the sum of singular values. # > **Spiked Wigner matrix**: # # + $\lambda \le 1$: $\lambda_{max}(Y) \rightarrow 2$ as $n \rightarrow \infty$; # # + $\lambda > 1$: $\lambda_{max}(Y) \rightarrow \lambda + \frac{1}{\lambda} > 2$ as $n \rightarrow \infty$. labda = 2.0 n = 1000 x = zeros(n) x[0] = 1 x = permutation(x) x = x.reshape(n, 1) A = randn(n, n) W = (A + A.T) / 2 Y = labda * dot(x, x.T) + W / sqrt(n) e = sorted(eigvals(Y))[-1] print e # ## 27 Monte Carlo sampling # # For a sample $\{x_i\}_{i = 1}^K$ of **pdf** p(x): # # $$\int Q(x) p(x) dx \approx \frac{1}{K} \sum_{i = 1}^K Q(x_i).$$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- import simpegSP as SP from SimPEG import Mesh, np from SimPEG.EM import Static # %pylab inline # workdir = "/Documents and Settings/" workdir = "C:/00PLAN_UBC/seepageModeling/Cheong_dam/" fname = "drawdown(3500000sec).csv" #fname = "state(WL40.20m).csv" fluiddata = SP.Utils.readSeepageModel(workdir+fname) # + xyz = fluiddata["xyz"] h = fluiddata["h"] Sw = fluiddata["Sw"] Kx = fluiddata["Kx"] Ky = fluiddata["Ky"] P = fluiddata["P"] Ux = fluiddata["Ux"] Uy = fluiddata["Uy"] Gradx = fluiddata["Gradx"] Grady = fluiddata["Grady"] Uy[Uy>1e10] = np.nan mesh = fluiddata["mesh"] xsurf, ysurf, yup = fluiddata["xsurf"], fluiddata["ysurf"], fluiddata["yup"] hcc = fluiddata["hcc"] actind = fluiddata["actind"] waterind = fluiddata["waterind"] # - mesh.plotGrid() plt.gca().set_aspect('equal', adjustable='box') # + fig = plt.figure(figsize = (12, 5)) ax = plt.subplot(111) dat = Static.Utils.plot2Ddata(xyz, h, ax=ax, ncontour=30, contourOpts={"cmap":"viridis"}) ax.fill_between(xsurf, ysurf, yup, facecolor='white', edgecolor="white") ax.set_xlim(50, 130) ax.set_ylim(25, 44) cb = plt.colorbar(dat[1], orientation="horizontal") cb.set_label("Total head (m)") # + fig = plt.figure(figsize = (12, 5)) ax = plt.subplot(111) dat = Static.Utils.plot2Ddata(xyz, -np.c_[Gradx, Grady], vec=True, ax=ax, ncontour=30, contourOpts={"cmap":"RdPu"}) 
ax.fill_between(xsurf, ysurf, yup, facecolor='white', edgecolor="white") ax.set_xlim(50, 130) ax.set_ylim(25, 44) cb = plt.colorbar(dat[1], orientation="horizontal") cb.set_label("Gradient of total head (m)") # + fig = plt.figure(figsize = (12, 5)) ax = plt.subplot(111) dat = Static.Utils.plot2Ddata(xyz, np.c_[Ux, Uy], vec=True, ax=ax, ncontour=30, contourOpts={"cmap":"jet"}) ax.fill_between(xsurf, ysurf, yup, facecolor='white', edgecolor="white") ax.set_xlim(50, 130) ax.set_ylim(25, 44) cb = plt.colorbar(dat[1], orientation="horizontal") cb.set_label("Gradient of total head (m)") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.0 64-bit (''practical-python-and-opencv-case-studies3'': # pyenv)' # name: python3 # --- # + # USAGE # python rotate.py --image ../images/trex.png import os import argparse import pathlib from typing import Union, Any import cv2 import imutils # Import the necessary packages import numpy as np from PIL import Image from IPython.display import display # Construct the argument parser and parse the arguments # ap = argparse.ArgumentParser() # ap.add_argument("-i", "--image", required=True, help="Path to the image") # args = vars(ap.parse_args()) args = {} current_folder = pathlib.Path(f"{globals()['_dh'][0]}") print(current_folder) # # Calculating path to the input data args["image"] = pathlib.Path(f"{current_folder.parent}/images/trex.png").resolve() print(args["image"]) assert args["image"].exists() # _image = f"{args['image']}" image: Union[np.ndarray, Any] # # # Load the image and show it image = cv2.imread(f"{args['image']}") # In Pillow, the order of colors is assumed to be RGB (red, green, blue). # As we are using Image.fromarray() of PIL module, we need to convert BGR to RGB. temp_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Converting BGR to RGB # SOURCE: https://gist.github.com/mstfldmr/45d6e47bb661800b982c39d30215bc88 display(Image.fromarray(temp_image)) # + # Grab the dimensions of the image and calculate the center # of the image (h, w) = image.shape[:2] # Integer division is used here, denoted as "//" to ensure we receive whole integer numbers. center = (w // 2, h // 2) # Rotate our image by 45 degrees M = cv2.getRotationMatrix2D(center, 45, 1.0) rotated = cv2.warpAffine(image, M, (w, h)) # In Pillow, the order of colors is assumed to be RGB (red, green, blue). # As we are using Image.fromarray() of PIL module, we need to convert BGR to RGB. temp_image = cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB) # Converting BGR to RGB # SOURCE: https://gist.github.com/mstfldmr/45d6e47bb661800b982c39d30215bc88 display(Image.fromarray(temp_image)) # + # Rotate our image by -90 degrees M = cv2.getRotationMatrix2D(center, -90, 1.0) rotated = cv2.warpAffine(image, M, (w, h)) # In Pillow, the order of colors is assumed to be RGB (red, green, blue). # As we are using Image.fromarray() of PIL module, we need to convert BGR to RGB. temp_image = cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB) # Converting BGR to RGB # SOURCE: https://gist.github.com/mstfldmr/45d6e47bb661800b982c39d30215bc88 display(Image.fromarray(temp_image)) # + # Finally, let's use our helper function in imutils.py to # rotate the image by 180 degrees (flipping it upside down) rotated = imutils.rotate(image, 180) cv2.imshow("Rotated by 180 Degrees", rotated) # In Pillow, the order of colors is assumed to be RGB (red, green, blue). 
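# OpenCV, on the other hand, loads and stores images in BGR order.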
# As we are using Image.fromarray() of PIL module, we need to convert BGR to RGB. temp_image = cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB) # Converting BGR to RGB # SOURCE: https://gist.github.com/mstfldmr/45d6e47bb661800b982c39d30215bc88 display(Image.fromarray(temp_image)) # - image.shape type(image) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Conditional and Slice Selection import numpy as np arr = np.arange(0,9) arr = arr.reshape(3,3) up_to_five = arr < 6 up_to_five arr[up_to_five] # it's equivalent to the above but in one go arr[arr < 6] arr = np.arange(40).reshape(8,5) # any boolean operation applicable to the datatype contained in the array arr[arr % 2 == 0] arr # slice notation to get an internal sub matrix arr[1:5, 1:4] arr[:3,3:] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Autoencoder on MNIST dataset # # **Learning Objective** # 1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras # 2. Define the loss using the reconstructive error # 3. Define a training step for the autoencoder using tf.GradientTape() # 4. Train the autoencoder on the MNIST dataset # ## Introduction # This notebook demonstrates how to build and train a convolutional autoencoder. # # Autoencoders consist of two models: an encoder and a decoder. # # # # In this notebook we'll build an autoencoder to recreate MNIST digits. This notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the *generator* as it was trained for 100 epochs. The images increasingly resemble hand written digits as the autoencoder learns to reconstruct the original images. # # # ## Import TensorFlow and other libraries # + from __future__ import absolute_import, division, print_function import glob import imageio import os import PIL import time import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras import layers from IPython import display # - # Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the `EMBED_DIM` to be 64. This is the dimension of the latent space for our autoencoder. np.random.seed(1) tf.random.set_seed(1) BATCH_SIZE = 128 BUFFER_SIZE = 60000 EPOCHS = 60 LR = 1e-2 EMBED_DIM = 64 # intermediate_dim # ## Load and prepare the dataset # # For this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back. (train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data() train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype( "float32" ) train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1] # Next, we define our input pipeline using `tf.data`. The pipeline below reads in `train_images` as tensor slices and then shuffles and batches the examples for training. 
# Batch and shuffle the data train_dataset = tf.data.Dataset.from_tensor_slices(train_images) train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE) train_dataset = train_dataset.prefetch(BATCH_SIZE * 4) test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype( "float32" ) test_images = (test_images - 127.5) / 127.5 # Normalize the images to [-1, 1] # ## Create the encoder and decoder models # # Both our encoder and decoder models will be defined using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model). # ### The Encoder # # The encoder uses [`tf.keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final `Dense` layer. # **Exercise.** Complete the code below to create the CNN-based encoder model. Your model should have `input_shape` to be 28x28x1 and end with a final `Dense` layer the size of `embed_dim`. # TODO 1. def make_encoder(embed_dim): model = tf.keras.Sequential(name="encoder") # TODO: Your code goes here. assert model.output_shape == (None, embed_dim) return model # ### The Decoder # # The decoder uses [`tf.keras.layers.Conv2DTranspose`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose) (upsampling) layers to produce an image from the latent space. We will start with a `Dense` layer with the same input shape as `embed_dim`, then upsample several times until you reach the desired image size of 28x28x1. # **Exercise.** Complete the code below to create the decoder model. Start with a `Dense` layer that takes as input a tensor of size `embed_dim`. Use `tf.keras.layers.Conv2DTranspose` over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits). # # Hint: Experiment with using `BatchNormalization` or different activation functions like `LeakyReLU`. # TODO 1. def make_decoder(embed_dim): model = tf.keras.Sequential(name="decoder") # TODO: Your code goes here. assert model.output_shape == (None, 28, 28, 1) return model # Finally, we stitch the encoder and decoder models together to create our autoencoder. ae_model = tf.keras.models.Sequential( [make_encoder(EMBED_DIM), make_decoder(EMBED_DIM)] ) # Using `.summary()` we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder. ae_model.summary() make_encoder(EMBED_DIM).summary() make_decoder(EMBED_DIM).summary() # Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we've commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder. #TODO 2. def loss(model, original): reconstruction_error = # TODO: Your code goes here. return reconstruction_error # ### Optimizer for the autoencoder # # Next we define the optimizer for model, specifying the learning rate. optimizer = tf.keras.optimizers.SGD(lr=LR) # ### Save checkpoints # # This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted. 
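#
# If a run is interrupted, the most recent checkpoint written below can be restored before
# resuming. A minimal sketch (using the same directory as the next cell and the `ae_model`
# and `optimizer` defined above):

# +
ckpt_dir = "./ae_training_checkpoints"              # same directory as checkpoint_dir below
latest_ckpt = tf.train.latest_checkpoint(ckpt_dir)  # None if no checkpoint has been written yet
if latest_ckpt is not None:
    tf.train.Checkpoint(optimizer=optimizer, model=ae_model).restore(latest_ckpt)
# -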
checkpoint_dir = "./ae_training_checkpoints" checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=ae_model) # ## Define the training loop # # Next, we define the training loop for training our autoencoder. The train step will use `tf.GradientTape()` to keep track of gradient steps through training. # **Exercise.** # Complete the code below to define the training loop for our autoencoder. Notice the use of `tf.function` below. This annotation causes the function `train_step` to be "compiled". The `train_step` function takes as input a batch of images and passes them through the `ae_model`. The gradient is then computed on the loss against the `ae_model` output and the original image. In the code below, you should # - define `ae_gradients`. This is the gradient of the autoencoder loss with respect to the variables of the `ae_model`. # - create the `gradient_variables` by assigning each ae_gradient computed above to it's respective training variable. # - apply the gradient step using the `optimizer` #TODO 3. @tf.function def train_step(images): with tf.GradientTape() as tape: ae_gradients = # TODO: Your code goes here. gradient_variables = # TODO: Your code goes here. # TODO: Your code goes here. # We use the `train_step` function above to define training of our autoencoder. Note here, the `train` function takes as argument the `tf.data` dataset and the number of epochs for training. def train(dataset, epochs): for epoch in range(epochs): start = time.time() for image_batch in dataset: train_step(image_batch) # Produce images for the GIF as we go display.clear_output(wait=True) generate_and_save_images(ae_model, epoch + 1, test_images[:16, :, :, :]) # Save the model every 5 epochs if (epoch + 1) % 5 == 0: checkpoint.save(file_prefix=checkpoint_prefix) print( "Time for epoch {} is {} sec".format(epoch + 1, time.time() - start) ) # Generate after the final epoch display.clear_output(wait=True) generate_and_save_images(ae_model, epochs, test_images[:16, :, :, :]) # **Generate and save images.** # We'll use a small helper function to generate images and save them. def generate_and_save_images(model, epoch, test_input): # Notice `training` is set to False. # This is so all layers run in inference mode (batchnorm). predictions = model(test_input, training=False) fig = plt.figure(figsize=(4, 4)) for i in range(predictions.shape[0]): plt.subplot(4, 4, i + 1) pixels = predictions[i, :, :] * 127.5 + 127.5 pixels = np.array(pixels, dtype="float") pixels = pixels.reshape((28, 28)) plt.imshow(pixels, cmap="gray") plt.axis("off") plt.savefig("image_at_epoch_{:04d}.png".format(epoch)) plt.show() # Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise. generate_and_save_images(ae_model, 4, test_images[:16, :, :, :]) # ## Train the model # Call the `train()` method defined above to train the autoencoder model. # # We'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch # + # TODO 4. # TODO: Your code goes here. # - # ## Create a GIF # # Lastly, we'll create a gif that shows the progression of our produced images through training. 
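#
# Before building the GIF, here is one possible completion of the TODO exercises above
# (a sketch only -- the layer stack and loss are reasonable choices, not the course's
# reference solution):

# +
def make_encoder(embed_dim):
    model = tf.keras.Sequential(name="encoder")
    model.add(tf.keras.layers.Conv2D(32, (3, 3), strides=(2, 2), padding="same",
                                     input_shape=(28, 28, 1)))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(embed_dim))
    assert model.output_shape == (None, embed_dim)
    return model


def make_decoder(embed_dim):
    model = tf.keras.Sequential(name="decoder")
    model.add(tf.keras.layers.Dense(7 * 7 * 64, input_shape=(embed_dim,)))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Reshape((7, 7, 64)))
    model.add(tf.keras.layers.Conv2DTranspose(32, (3, 3), strides=(2, 2), padding="same"))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Conv2DTranspose(1, (3, 3), strides=(2, 2), padding="same",
                                              activation="tanh"))
    assert model.output_shape == (None, 28, 28, 1)
    return model


def loss(model, original):
    # Pixel-wise mean squared reconstruction error between the input and its reconstruction.
    reconstruction_error = tf.reduce_mean(tf.square(model(original) - original))
    return reconstruction_error


@tf.function
def train_step(images):
    with tf.GradientTape() as tape:
        reconstruction_error = loss(ae_model, images)
    # Gradient of the reconstruction loss w.r.t. the autoencoder's trainable variables.
    ae_gradients = tape.gradient(reconstruction_error, ae_model.trainable_variables)
    gradient_variables = zip(ae_gradients, ae_model.trainable_variables)
    optimizer.apply_gradients(gradient_variables)


# TODO 4 then reduces to a single call:
# train(train_dataset, EPOCHS)
# -
#
# One detail worth checking when assembling the GIF: `generate_and_save_images` above writes
# `image_at_epoch_*.png` into the working directory, while the cells below read from
# `./ae_images/`, so the two paths need to agree.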
# Display a single image using the epoch number def display_image(epoch_no): return PIL.Image.open( "./ae_images/image_at_epoch_{:04d}.png".format(epoch_no) ) display_image(EPOCHS) # + anim_file = "autoencoder.gif" with imageio.get_writer(anim_file, mode="I") as writer: filenames = glob.glob("./ae_images/image*.png") filenames = sorted(filenames) last = -1 for i, filename in enumerate(filenames): frame = 2 * (i ** 0.5) if round(frame) > round(last): last = frame else: continue image = imageio.imread(filename) writer.append_data(image) image = imageio.imread(filename) writer.append_data(image) import IPython if IPython.version_info > (6, 2, 0, ""): display.Image(filename=anim_file) # - # Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Youtube videos: # * [https://www.youtube.com/watch?v=kr0mpwqttM0&t=34s] # * [https://www.youtube.com/watch?v=swU3c34d2NQ] # * [https://www.youtube.com/watch?v=FsAPt_9Bf3U&t=35s] # * [https://www.youtube.com/watch?v=KlBPCzcQNU8] # #### First class functions # # We can treat functions just like anyother object or variable. # + def square(x): return x*x def cube(x): return x*x*x # This is custom-built map function which is going to behave like in-bulit map function. def my_map(func, arg_list): result = [] for i in arg_list: result.append(func(i)) return result squares = my_map(square, [1,2,3,4]) print(squares) cubes = my_map(cube, [1,2,3,4]) print(cubes) # - # #### Closures # # A closure closes over free variables from their environment. # + def html_tag(tag): def wrap_text(msg): print('<{0}>{1}<{0}>'.format(tag, msg)) return wrap_text print_h1 = html_tag('h1') print_h1('Test Headline') print_h1('Another Headline') print_p = html_tag('p') print_p('Test Paragraph!') # - # #### Decorators # # Decorators are a way to dynamically alter the functionality of your functions. So for example, if you wanted to # log information when a function is run, you could use a decorator to add this functionality without modifying the # source code of your original function. # + def decorator_function(original_function): def wrapper_function(): print("wrapper executed this before {}".format(original_function.__name__)) return original_function() return wrapper_function def display(): print("display function ran!") decorated_display = decorator_function(display) decorated_display() # + # The above code is functionally the same as below: def decorator_function(original_function): def wrapper_function(): print("wrapper executed this before {}".format(original_function.__name__)) return original_function() return wrapper_function @decorator_function def display(): print("display function ran!") display() # + # Lets make our decorator function to work with functions with different number of arguments # For this we use, *args (arguments) and **kwargs (keyword arguments). 
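# (*args gathers any extra positional arguments into a tuple and **kwargs gathers keyword
# arguments into a dict, so the wrapper can forward them unchanged to the original function.)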
# args and kwargs are convention, you can use any other name you want like *myargs, **yourkeywordargs def decorator_function(original_function): def wrapper_function(*args, **kwargs): print("wrapper executed this before {}".format(original_function.__name__)) return original_function(*args, **kwargs) return wrapper_function @decorator_function def display(): print("display function ran!") @decorator_function def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display() display_info('John', 25) # + # Now let's use a class as a decorator instead of a function class decorator_class(object): def __init__(self, original_function): self.original_function = original_function def __call__(self, *args, **kwargs): # This method is going to behave just like our wrapper function behaved print('call method executed this before {}'.format(self.original_function.__name__)) return self.original_function(*args, **kwargs) @decorator_class def display(): print("display function ran!") @decorator_class def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display() display_info('John', 25) # - # #### Some practical applications of decorators # + #Let's say we want to keep track of how many times a specific function was run and what argument were passed to that function def my_logger(orig_func): import logging logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) #Generates a log file in current directory with the name of original funcion def wrapper(*args, **kwargs): logging.info( 'Ran with args: {}, and kwargs: {}'.format(args, kwargs)) return orig_func(*args, **kwargs) return wrapper @my_logger def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25) # Note: # Since display_info is decorated with my_logger, the above call is equivalent to: # decorated_display = my_logger(display_info) # decorated_display('John', 25) # + def my_timer(orig_func): import time def wrapper(*args, **kwargs): t1 = time.time() result = orig_func(*args, **kwargs) t2 = time.time() - t1 print('{} ran in: {} sec'.format(orig_func.__name__, t2)) return result return wrapper @my_timer def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25) # Note: # Since display_info is decorated with my_timer, the above call is equivalent to: # decorated_display = my_timer(display_info) # decorated_display('John', 25) # (or simply put) # my_timer(display_info('John', 25)) # - # #### Chaining of Decorators # + @my_timer @my_logger def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25) # This is equivalent to my_timer(my_logger(display_info('John', 25))) # The above code will give us some unexpected results. # Instead of printing "display_info ran in: ---- sec" it prints "wrapper ran in: ---- sec" # - # Let's see if switching the order of decorators helps # + @my_logger @my_timer def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25) # This is equivalent to my_logger(my_timer(display_info('John', 25))) # Now this would create wrapper.log instead of display_info.log like we expected. # + # To understand why wrapper.log is generated instead of display_info.log let's look at the following code. 
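# my_timer returns its inner `wrapper` function, so the decorated object's __name__ becomes
# 'wrapper'; my_logger then builds its log-file name from that, producing wrapper.log.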
def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info = my_timer(display_info('John',25)) print(display_info.__name__) # + # So how do we solve this problem # The answer is by using the wraps decorator # For this we need to import wraps from functools module from functools import wraps def my_logger(orig_func): import logging logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) #Generates a log file in current directory with the name of original funcion @wraps(orig_func) def wrapper(*args, **kwargs): logging.info( 'Ran with args: {}, and kwargs: {}'.format(args, kwargs)) return orig_func(*args, **kwargs) return wrapper def my_timer(orig_func): import time @wraps(orig_func) def wrapper(*args, **kwargs): t1 = time.time() result = orig_func(*args, **kwargs) t2 = time.time() - t1 print('{} ran in: {} sec'.format(orig_func.__name__, t2)) return result return wrapper @my_logger @my_timer def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('Hank', 22) # - # #### Decorators with Arguments # + def prefix_decorator(prefix): def decorator_function(original_function): def wrapper_function(*args, **kwargs): print(prefix, "Executed before {}".format(original_function.__name__)) result = original_function(*args, **kwargs) print(prefix, "Executed after {}".format(original_function.__name__), '\n') return result return wrapper_function return decorator_function @prefix_decorator('LOG:') def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="WhzkwOGwBJuq" outputId="bcd0a0a9-b8d8-4496-ad5a-c624b709069c" """Simple travelling salesman problem between cities.""" from ortools.constraint_solver import routing_enums_pb2 from ortools.constraint_solver import pywrapcp def create_data_model(): """Stores the data for the problem.""" data = {} data['distance_matrix'] = [ [0, 10, 15, 20], [5, 0, 9, 10], [6, 13, 0, 12], [8, 8, 9, 0] ] # yapf: disable data['num_vehicles'] = 1 data['depot'] = 0 return data def print_solution(manager, routing, solution): """Prints solution on console.""" print('Objective: {} miles'.format(solution.ObjectiveValue())) index = routing.Start(0) plan_output = 'Route for vehicle 0:\n' route_distance = 0 while not routing.IsEnd(index): plan_output += ' {} ->'.format(manager.IndexToNode(index)) previous_index = index index = solution.Value(routing.NextVar(index)) route_distance += routing.GetArcCostForVehicle(previous_index, index, 0) plan_output += ' {}\n'.format(manager.IndexToNode(index)) print(plan_output) plan_output += 'Route distance: {}miles\n'.format(route_distance) def main(): """Entry point of the program.""" # Instantiate the data problem. data = create_data_model() # Create the routing index manager. manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']), data['num_vehicles'], data['depot']) # Create Routing Model. routing = pywrapcp.RoutingModel(manager) def distance_callback(from_index, to_index): """Returns the distance between the two nodes.""" # Convert from routing variable Index to distance matrix NodeIndex. 
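        # (The solver's internal routing indices are not the same as the node numbers in the
        # distance matrix, which is why IndexToNode is applied before the lookup.)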
from_node = manager.IndexToNode(from_index) to_node = manager.IndexToNode(to_index) return data['distance_matrix'][from_node][to_node] transit_callback_index = routing.RegisterTransitCallback(distance_callback) # Define cost of each arc. routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index) # Setting first solution heuristic. search_parameters = pywrapcp.DefaultRoutingSearchParameters() search_parameters.first_solution_strategy = ( routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC) # Solve the problem. solution = routing.SolveWithParameters(search_parameters) # Print solution on console. if solution: print_solution(manager, routing, solution) if __name__ == '__main__': main() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # IPython code repository # experiment with FOR and WHILE loops # build test data with generators and iterators # # # !pip3 install --upgrade pip # !pip3 install StringGenerator from random import randint, randrange, uniform import string from strgen import StringGenerator as SG import numpy as np # %pdb print(randint(234, 567)) irand = randrange(0, 10) frand = uniform(0, 10) irand frand A = [randint(0, 999) for p in range(0, 11)] C = [randint(0, 999) for q in range(0,11)] D = [randint(0, 999) for q in range(0,11)] A.sort() print(A) print(C) #selection sort: find the minimal value and switch it with the first element #of an array, then sort the rest of the array. def coconut__Sort(C): n = len(C) for d in range(n): minimal = d for e in range(d + 1, n): if C[e] < C[minimal]: minimal = e C[d], C[minimal] = C[minimal], C[d] print(C) coconut__Sort(C) #count the elements in the array, iterate through in increasing order def count_sort(D, k): n = len(D) count = [0] * (k +1) for i in range(n): count[D[i]] += 1 p = 0 for i in range(k +1): for j in range(count[i]): D[p] = i p += 1 print(D) count_sort(D, 10) #return the number of unique values in an array def distinct(E): n = len(E) E.sort() result = 1 for f in range(1, n): if E[f] != E[f -1]: result += 1 return result E = [randint(0, 999) for q in range(0,11)] # %time distinct(E) # %%time #import numpy as np print(len(np.unique(E))) np.random.shuffle(A) def id_generator(size=16, chars=string.ascii_uppercase + string.digits): #return ''.join(random.choice(chars) for _ in range(size)) #return ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(size) #return ''.join(random.choices(string.ascii_uppercase + string.digits, k=size)) return ''.join(random.choices(string.ascii_uppercase + string.digits, k=size)) id_generator() SG("[\l\d]{10}").render_list(5,unique=True) SG("[\l\d]{10}&[\p]").render() import uuid; uuid.uuid4().hex.upper()[0:6] # + import uuid def my_random_string(sl): string_length = sl """Returns a random string of length string_length.""" random = str(uuid.uuid4()) # Convert UUID format to a Python string. random = random.upper() # Make all characters uppercase. random = random.replace("-","") # Remove the UUID '-'. return random[0:string_length] # Return the random string. 
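# Note: uuid4().hex yields only 32 hexadecimal characters, so any requested length above 32
# (e.g. my_random_string(113) further down) is silently capped at 32 characters.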
print(my_random_string(10)) # - # %time my_random_string(113) A = [1, 3, 6, 4, 1, 2] def michael_ellis(e): # write your code in Python 3.6 min = np.min.A #while min not in a: max pass def smallest_missing_positive_integer(A): A.sort() N = len(A) i = 0 previous = 0 while i < N: current = A[i] if current > 0: if current > previous + 1: # breaks consecutiveness return previous + 1 else: previous = current i += 1 return max(previous+1, current+1) print(smallest_missing_positive_integer(A)) def second_largest(A): m1, m2 = float('inf'), float('inf') for x in A: if x >= m1: m1, m2 = x, m1 elif x > m2: m2 = x return m2 second_largest(A) import heapq def second_largest_2(numbers): return heapq.nsmallest(2, numbers)[-1] second_largest_2(A) def second_smallest(A): m1, m2 = float('inf'), float('inf') for x in A: if x <= m1: m1, m2 = x, m1 elif x < m2: m2 = x return m2 s = set(A) print (sorted(s)[1]) second_smallest(A) def minpositive(A): b = set(A) ans = 1 while ans in A: ans += 1 return ans print (minpositive(A)) A = [-1, -5, -3] # + from itertools import count, filterfalse def minpositive2(A): return(next(filterfalse(set(A).__contains__, count(1)))) # - print (minpositive2(A)) #find the binary gap, max sequence of consecutive zeros surrounded by ones def max_gap(x): max_gap_length = 0 current_gap_length = 0 for i in range(x.bit_length()): if x & (1 << i): # Set, any gap is over. if current_gap_length > max_gap_length: max_gap_length = current_gap_length current_gap_length = 0 else: # Not set, the gap widens. current_gap_length += 1 # Gap might end at the end. if current_gap_length > max_gap_length: max_gap_length = current_gap_length return max_gap_length #convert int to a binary, strip zeros from the right side, split a 1, #find max in the list and return the length def binary_gap(N): return len(max(format(N, 'b').strip('0').split('1'))) # %timeit binary_gap(529) bin(529) # %timeit max_gap(529) #rotate an array, each element moves to the right by one, last element goes to beginning x = np.arange(11) #random.randn(3,27,1) x # %timeit print(np.roll(x, 3)) def rotater(l, n): return l[n:] + l[:n] # %timeit print(rotater([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 3)) #find the unpaired value in an odd number of elements def no_pair(y) def no_pair(A): if len(A) == 1: return A[0] A = sorted(A) for i in range (0, len(A), 2): if i+1 == len(A): return A[i] if A[i] != A[i+1]: return A[i] arr = [9,3,9,3,9,7,9,7,1] def no_pair2(arr): result = 0 for number in arr: result ^= number return result no_pair2(arr) no_pair(arr) # + #rotate array elementsarr2 # - #count min number of fixed(y) jumps to >= D. #starting position is x def badger(x, y, d): distance = y- x if distance % d == 0: return distance//d else: return distance//d +1 # %timeit badger(10, 88834999337775, 3.3) def badger2(): pass #perm missing elements,if not a continous list will return highest missing value def llama_milk(m): length = len(m) xor_sum = 0 for index in range(0, length): xor_sum = xor_sum ^ m[index] ^ (index + 1) return xor_sum^(length +1) m =[2,3,1,27] llama_milk(m) # #codility tape_equilibrium # example from codility website # # >Array A = numbers on a tape, range(-1000, 1000) # >Any integer P, such that 0 < P < N, N is an int range(2, 100,00) # >splits this tape into two non-empty parts: # >A[0], A[1], ..., A[P − 1] and A[P], A[P + 1], ..., A[N − 1] # >the difference is the absolute diff between the sum of the first part # >and the sum of the second. 
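#
# Key identity used by the solutions below: with S = sum(A) and prefix(P) = A[0] + ... + A[P-1],
# the split difference is |sum(A[P:]) - sum(A[:P])| = |S - 2*prefix(P)|, so a single pass over
# the prefix sums is enough to find the minimum.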
# import numpy as np #A5 = [3,1,2,4,3] A5 = np.arange(1,66,1) np.random.shuffle(A5) A5 def tape_cutter(A5): a=A5 tablica=[] tablica_1=[] sum_a=0 if len(a) == 2: return abs(a[0]-a[1]) for x in a: sum_a = sum_a + x tablica.append(sum_a) for i in range(len(tablica)-1): wynik=(sum_a-2*tablica[i]) tablica_1.append(abs(wynik)) tablica_1.sort() return tablica_1[0] def tape_cutter2(A5): s = sum(A5) m = float('inf') left_sum = 0 for i in A5[:-1]: left_sum += i m = min(abs(s - 2*left_sum), m) return m # %time tape_cutter(A5) # %time tape_cutter2(A5) # ##Suduko converter # Pass in a string of 81 numbers, using zeros for unknown values place holder. # # # # [stackoverflow_Suduko](https://stackoverflow.com/questions/201461/shortest-sudoku-solver-in-python-how-does-it-work/201771#201771) # # original source = http://norvig.com/sudoku.html # # https://towardsdatascience.com/peter-norvigs-sudoku-solver-25779bb349ce # + import sys def same_row(i,j): return (i/9 == j/9) def same_col(i,j): return (i-j) % 9 == 0 def same_block(i,j): return (i/27 == j/27 and i%9/3 == j%9/3) def r(a): i = a.find('0') if i == -1: sys.exit(a) excluded_numbers = set() for j in range(81): if same_row(i,j) or same_col(i,j) or same_block(i,j): excluded_numbers.add(a[j]) for m in '123456789': if m not in excluded_numbers: r(a[:i]+m+a[i+1:]) if __name__ == '__main__': if len(sys.argv) == 2 and len(sys.argv[1]) == 81: r(sys.argv[1]) else: print ('Usage$ python sudoku.py puzzle') print (' where the puzzle is an 81 character string') print (' representing the puzzle read left-to-right,') print (' top-to-bottom, and 0 is a blank') # - s1 = '070001500503007000014008003000000654000070000246000000700300280000600405005700030' # %tb # + def cross(A, B): "Cross product of elements in A and elements in B." return [a+b for a in A for b in B] digits = '123456789' rows = 'ABCDEFGHI' cols = digits squares = cross(rows, cols) unitlist = ([cross(rows, c) for c in cols] + [cross(r, cols) for r in rows] + [cross(rs, cs) for rs in ('ABC','DEF','GHI') for cs in ('123','456','789')]) units = dict((s, [u for u in unitlist if s in u]) for s in squares) peers = dict((s, set(sum(units[s],[]))-set([s])) for s in squares) # - def test(): "A set of unit tests." assert len(squares) == 81 assert len(unitlist) == 27 assert all(len(units[s]) == 3 for s in squares) assert all(len(peers[s]) == 20 for s in squares) assert units['C2'] == [['A2', 'B2', 'C2', 'D2', 'E2', 'F2', 'G2', 'H2', 'I2'], ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9'], ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']] assert peers['C2'] == set(['A2', 'B2', 'D2', 'E2', 'F2', 'G2', 'H2', 'I2', 'C1', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'A1', 'A3', 'B1', 'B3']) print('All tests pass.') test() print(len(unitlist)) # + def parse_grid(grid): """Convert grid to a dict of possible values, {square: digits}, or return False if a contradiction is detected.""" ## To start, every square can be any digit; then assign values from the grid. values = dict((s, digits) for s in squares) for s,d in grid_values(grid).items(): if d in digits and not assign(values, s, d): return False ## (Fail if we can't assign d to square s.) return values def grid_values(grid): "Convert grid into a dict of {square: char} with '0' or '.' for empties." chars = [c for c in grid if c in digits or c in '0.'] assert len(chars) == 81 return dict(zip(squares, chars)) # - print(squares) # + def assign(values, s, d): """Eliminate all the other values (except d) from values[s] and propagate. 
Return values, except return False if a contradiction is detected.""" other_values = values[s].replace(d, '') if all(eliminate(values, s, d2) for d2 in other_values): return values else: return False def eliminate(values, s, d): """Eliminate d from values[s]; propagate when values or places <= 2. Return values, except return False if a contradiction is detected.""" if d not in values[s]: return values ## Already eliminated values[s] = values[s].replace(d,'') ## (1) If a square s is reduced to one value d2, then eliminate d2 from the peers. if len(values[s]) == 0: return False ## Contradiction: removed last value elif len(values[s]) == 1: d2 = values[s] if not all(eliminate(values, s2, d2) for s2 in peers[s]): return False ## (2) If a unit u is reduced to only one place for a value d, then put it there. for u in units[s]: dplaces = [s for s in u if d in values[s]] if len(dplaces) == 0: return False ## Contradiction: no place for this value elif len(dplaces) == 1: # d can only be in one place in unit; assign it there if not assign(values, dplaces[0], d): return False return values # - def display(values): "Display these values as a 2-D grid." width = 1+max(len(values[s]) for s in squares) line = '+'.join(['-'*(width*3)]*3) for r in rows: print (''.join(values[r+c].center(width)+('|' if c in '36' else '') for c in cols)) if r in 'CF': print(line) print grid1 = '003020600900305001001806400008102900700000008006708200002609500800203009005010300' display(parse_grid(grid1)) # + def cross(A, B): "Cross product of elements in A and elements in B." return [a+b for a in A for b in B] digits = '123456789' rows = 'ABCDEFGHI' cols = digits squares = cross(rows, cols) unitlist = ([cross(rows, c) for c in cols] + [cross(r, cols) for r in rows] + [cross(rs, cs) for rs in ('ABC','DEF','GHI') for cs in ('123','456','789')]) units = dict((s, [u for u in unitlist if s in u]) for s in squares) peers = dict((s, set(sum(units[s],[]))-set([s])) for s in squares) def parse_grid(grid): """Convert grid to a dict of possible values, {square: digits}, or return False if a contradiction is detected.""" ## To start, every square can be any digit; then assign values from the grid. values = dict((s, digits) for s in squares) for s,d in grid_values(grid).items(): if d in digits and not assign(values, s, d): return False ## (Fail if we can't assign d to square s.) return values def grid_values(grid): "Convert grid into a dict of {square: char} with '0' or '.' for empties." chars = [c for c in grid if c in digits or c in '0.'] assert len(chars) == 81 return dict(zip(squares, chars)) def assign(values, s, d): """Eliminate all the other values (except d) from values[s] and propagate. Return values, except return False if a contradiction is detected.""" other_values = values[s].replace(d, '') if all(eliminate(values, s, d2) for d2 in other_values): return values else: return False def eliminate(values, s, d): """Eliminate d from values[s]; propagate when values or places <= 2. Return values, except return False if a contradiction is detected.""" if d not in values[s]: return values ## Already eliminated values[s] = values[s].replace(d,'') ## (1) If a square s is reduced to one value d2, then eliminate d2 from the peers. if len(values[s]) == 0: return False ## Contradiction: removed last value elif len(values[s]) == 1: d2 = values[s] if not all(eliminate(values, s2, d2) for s2 in peers[s]): return False ## (2) If a unit u is reduced to only one place for a value d, then put it there. 
for u in units[s]: dplaces = [s for s in u if d in values[s]] if len(dplaces) == 0:http://norvig.com/top95.txt return False ## Contradiction: no place for this value elif len(dplaces) == 1: # d can only be in one place in unit; assign it there if not assign(values, dplaces[0], d): return False return values def solve(grid): return search(parse_grid(grid)) def search(values): "Using depth-first search and propagation, try all possible values." if values is False: return False ## Failed earlier if all(len(values[s]) == 1 for s in squares): return values ## Solved! ## Chose the unfilled square s with the fewest possibilities n,s = min((len(values[s]), s) for s in squares if len(values[s]) > 1) return some(search(assign(values.copy(), s, d)) for d in values[s]) def some(seq): "Return some element of seq that is true." for e in seq: if e: return e return False # - #permcheck practice example codility, solution by Sheng from codesays.com #permutation is a seq from 1 to N, nothing repeated or missing #return 0 if not a permutation def permcheck(A): counter = [0]*len(A) limit = len(A) for element in A: if not 1 <= element <= limit: return 0 else: if counter[element-1] != 0: return 0 else: counter[element-1] = 1 return 1 a2a = [4,1,3,2,5,9,9,6,7,8] permcheck(a2a) a3b = range(10,123,1) print(a3b) def frog1(X,A): covered = 0 covered_a = [-1]*X for index,element in enumerate(A): if covered_a[element-1] == -1: covered_a[element-1] = element covered += 1 if covered == X: return index return -1 A = [3, 3, 3, 3, 2, 1, 1, 4] A = [1,3,1,4,2,3,5,4] frog1(5, A) B = [9, 4, 7, 3, 7, 2, 4, 4] swap2(A, B, 8) #codility practice example 4, counting #given two arrays, can you swap elements to make the sums equal? def counting(A, m): n = len(A) count = [0] * (m + 1) for k in range(n): count[A[k]] += 1 return count ''' #option 1, swap every pair of elements and then sum def swap1(A, B, m): n = len(A) sum_a = sum(A) sum_b = sum(B) for i in range(n): for j in range(n): change = B[j] - A[i] sum_a += change sum_b -= change if sum_a == sum_b: return True sum_a -= change sum_b += change return False ''' #faster to count the elements and diff the sums def swap2(A, B, m): n = len(A) sum_a = sum(A) sum_b = sum(B) d = sum_b - sum_a if d % 2 == 1: return False d //= 2 count = counting(A, m) for i in range(n): if 0 <= B[i] - d and B[i] -d <= m and count [B[i] - d] > 0: print('swap possible!') #return True print('No swap for you!!!') #return False # + #codility max counters practice example #you have counters set to zero, increase by 1 or set to max #calculate counter value after operations def max_counter(N, A): counters = [0] * N max_value, threshold = 0, 0 for a in A: index = a - 1 if index == N: threshold = max_value else: counters[index] = max(threshold + 1, counters[index] + 1) max_value = max(max_value, counters[index]) for index, counter in enumerate(counters): counters[index] = max(threshold, counter) return counters #source = https://github.com/wouterken/codility-python/blob/master/codility-max-counters.py # - #should return 3 2 2 4 2 max_counter(5, [3,4,4,6,1,4,4]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="NkDV_hsUnloj" # #Data processing # # + id="bWMn18jbAXzc" # !pip install opendatasets --upgrade --quiet # + colab={"base_uri": "https://localhost:8080/"} id="rem6AL0lfwHq" 
outputId="81f3bb4f-322c-4670-8843-2312ba633915" import opendatasets as od dataset_url = "https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data" od.download(dataset_url) # 18053a5c5a08203aeb457de2f5efac0f # + id="rJt-b74Ffypi" # !unzip "/content/Dogs-vs-Cats/data/train.zip" -d "/content/Dogs-vs-Cats/data" &> /dev/null # + id="Yy8me8u1g1fT" # !unzip "/content/Dogs-vs-Cats/data/test.zip" -d "/content/Dogs-vs-Cats/data" &> /dev/null # + id="fX21MYP3hU9O" import os # + id="xyciOIMVBczo" os.mkdir(os.path.join("/content/Dogs-vs-Cats/data/test","test")) # + id="FBJlxctghlIW" os.mkdir(os.path.join("/content/Dogs-vs-Cats/data/train","cats")) os.mkdir(os.path.join("/content/Dogs-vs-Cats/data/train","dogs")) # + id="aE39x2CohvhD" os.mkdir(os.path.join("/content/Dogs-vs-Cats/data","valid")) os.mkdir(os.path.join("/content/Dogs-vs-Cats/data/valid","cats")) os.mkdir(os.path.join("/content/Dogs-vs-Cats/data/valid","dogs")) # + id="dIMar4U3h5iH" train_dir = "/content/Dogs-vs-Cats/data/train" # + id="IifRaClOiY97" import os import pandas as pd import numpy as np import seaborn as sns import random import torch import torchvision import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torchvision.datasets.utils import download_url from torch.utils.data import random_split from tqdm.notebook import tqdm from torchvision import transforms, utils, datasets from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler from sklearn.metrics import classification_report, confusion_matrix import matplotlib.pyplot as plt # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/"} id="kPtmXfmgibZA" outputId="bf098698-e388-4e89-b6a5-db33d220f5d3" import os # train_dir = "./data/train" train_dogs_dir = f'{train_dir}/dogs' train_cats_dir = f'{train_dir}/cats' val_dir = "/content/Dogs-vs-Cats/data/valid" val_dogs_dir = f'{val_dir}/dogs' val_cats_dir = f'{val_dir}/cats' print("Printing data dir") print(os.listdir("/content/Dogs-vs-Cats/data")) print("Printing train dir") # !ls {train_dir} | head -n 5 print("Printing train dog dir") # !ls {train_dogs_dir} | head -n 5 print("Printing train cat dir") # !ls {train_cats_dir} | head -n 5 print("Printing val dir") # !ls {val_dir} | head -n 5 print("Printing val dog dir") # !ls {val_dogs_dir} | head -n 5 print("Printing val cat dir") # !ls {val_cats_dir} | head -n 5 # + id="E1BpUV63igBI" import shutil import re files = os.listdir(train_dir) # Move all train cat images to cats folder, dog images to dogs folder for f in files: catSearchObj = re.search("cat", f) dogSearchObj = re.search("dog", f) if catSearchObj: shutil.move(f'{train_dir}/{f}', train_cats_dir) elif dogSearchObj: shutil.move(f'{train_dir}/{f}', train_dogs_dir) # + colab={"base_uri": "https://localhost:8080/"} id="Go2k924vjNem" outputId="3b34404b-91a8-4184-af0e-a142ec0500b3" print("Printing training cats dir") # !ls {train_cats_dir} | head -n 5 # + colab={"base_uri": "https://localhost:8080/"} id="7qd9Z0FQjIod" outputId="bda7d7df-fe97-4f38-9883-0374994c7718" print("Printing train dog dir") # !ls {train_dogs_dir} | head -n 5 # + id="2uclwKniiwiF" files = os.listdir(train_dogs_dir) for f in files: validationDogsSearchObj = re.search("5\d\d\d", f) if validationDogsSearchObj: shutil.move(f'{train_dogs_dir}/{f}', val_dogs_dir) # + colab={"base_uri": "https://localhost:8080/"} id="af079yh0i1u1" outputId="b60b844c-b35f-4e8f-806e-a873b5f05ad0" print("Printing val dog dir") # !ls {val_dogs_dir} | head -n 5 # + id="oOdWwfYOjGQ5" files = os.listdir(train_cats_dir) for f 
in files: validationCatsSearchObj = re.search("5\d\d\d", f) if validationCatsSearchObj: shutil.move(f'{train_cats_dir}/{f}', val_cats_dir) # + colab={"base_uri": "https://localhost:8080/"} id="qu3V6Kgqi6Be" outputId="187696e7-d38f-4ccc-976a-2e25bc141183" print("Printing val cats dir") # !ls {val_cats_dir} | head -n 5 # + id="4oPTpGNJj5wL" test_dir = "./Dogs-vs-Cats/data/test/test" files = os.listdir("./Dogs-vs-Cats/data/test") for f in files: shutil.move(f'{"./Dogs-vs-Cats/data/test"}/{f}', test_dir) # + colab={"base_uri": "https://localhost:8080/"} id="Fi8CkSH3i-Wo" outputId="45ec863a-acc9-4466-fa59-0f079cb64cf2" test_dir = "./Dogs-vs-Cats/data/test/test" print("Printing from test dir") # !ls {test_dir} | head -n 5 # + [markdown] id="kqFpY1ttnfCs" # #Load Data # # + id="-MKyCRGmjsj9" import os import random import collections import shutil import time import glob import csv import numpy as np import torch import torch.backends.cudnn as cudnn import torch.nn as nn import torch.optim as optim import torch.utils.data as data import torchvision.datasets as datasets import torchvision.models as models import torchvision.transforms as transforms from PIL import Image ROOT_DIR = os.getcwd() DATA_HOME_DIR = './Dogs-vs-Cats/data' # + colab={"base_uri": "https://localhost:8080/", "height": 36} id="CleQsrhFofN_" outputId="2076c6c2-1213-4663-dcf0-39b28da7cd0a" DATA_HOME_DIR # + id="tzk-ZfMYkq0W" # paths data_path = DATA_HOME_DIR split_train_path = data_path + '/train' # full_train_path = data_path + '/train_full/' valid_path = data_path + '/valid' test_path = DATA_HOME_DIR + '/test/test' saved_model_path = './Dogs-vs-Cats/models' submission_path = './Dogs-vs-Cats/submissions' # data batch_size = 8 # model nb_runs = 1 nb_aug = 3 epochs = 5 lr = 1e-4 clip = 0.001 archs = ["resnet152"] model_names = sorted(name for name in models.__dict__ if name.islower() and not name.startswith("__")) best_prec1 = 0 # + colab={"base_uri": "https://localhost:8080/"} id="mpHbK1okk7cw" outputId="a35f6dec-76a5-4496-d896-53404c59fb65" model_names # + id="9MPLQKi2lJRb" def train(train_loader, model, criterion, optimizer, epoch): batch_time = AverageMeter() data_time = AverageMeter() losses = AverageMeter() acc = AverageMeter() end = time.time() # switch to train mode model.train() for i, (images, target) in enumerate(train_loader): # measure data loading time data_time.update(time.time() - end) target = target.cuda() image_var = torch.autograd.Variable(images) label_var = torch.autograd.Variable(target) # compute y_pred y_pred = model(image_var) loss = criterion(y_pred, label_var) # measure accuracy and record loss prec1, prec1 = accuracy(y_pred.data, target, topk=(1, 1)) losses.update(loss.data, images.size(0)) acc.update(prec1, images.size(0)) # compute gradient and do SGD step optimizer.zero_grad() loss.backward() optimizer.step() # measure elapsed time batch_time.update(time.time() - end) end = time.time() # + id="NpRG-C1XlNGm" def validate(val_loader, model, criterion, epoch): batch_time = AverageMeter() losses = AverageMeter() acc = AverageMeter() # switch to evaluate mode model.eval() end = time.time() for i, (images, labels) in enumerate(val_loader): labels = labels.cuda() image_var = torch.autograd.Variable(images) label_var = torch.autograd.Variable(labels) # compute y_pred y_pred = model(image_var) loss = criterion(y_pred, label_var) # measure accuracy and record loss prec1, temp_var = accuracy(y_pred.data, labels, topk=(1, 1)) losses.update(loss.data, images.size(0)) acc.update(prec1, images.size(0)) # measure 
elapsed time batch_time.update(time.time() - end) end = time.time() print(' * EPOCH {epoch} | Accuracy: {acc.avg:.3f} | Loss: {losses.avg:.3f}'.format(epoch=epoch, acc=acc, losses=losses)) return acc.avg # + id="NXN9CycSlSwi" def test(test_loader, model): csv_map = collections.defaultdict(float) # switch to evaluate mode model.eval() for aug in range(nb_aug): print(" * Predicting on test augmentation {}".format(aug + 1)) for i, (images, filepath) in enumerate(test_loader): # pop extension, treat as id to map filepath = os.path.splitext(os.path.basename(filepath[0]))[0] filepath = int(filepath) image_var = torch.autograd.Variable(images) y_pred = model(image_var) # get the index of the max log-probability smax = nn.Softmax() smax_out = smax(y_pred)[0] cat_prob = smax_out.data[0] dog_prob = smax_out.data[1] prob = dog_prob if cat_prob > dog_prob: prob = 1 - cat_prob prob = np.around(prob.cpu(), decimals=4) prob = np.clip(prob, clip, 1-clip) csv_map[filepath] += (prob / nb_aug) sub_fn = submission_path + '{0}epoch_{1}clip_{2}runs'.format(epochs, clip, nb_runs) for arch in archs: sub_fn += "_{}".format(arch) print("Writing Predictions to CSV...") with open(sub_fn + '.csv', 'w') as csvfile: fieldnames = ['id', 'label'] csv_w = csv.writer(csvfile) csv_w.writerow(('id', 'label')) for row in sorted(csv_map.items()): csv_w.writerow(row) print("Done.") # In [7]: # def save_checkpoint( # + id="06fqrCgylYE9" def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'): torch.save(state, filename) if is_best: shutil.copyfile(filename, 'model_best.pth.tar') # + id="zUPe-955la_R" class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count # + id="JxBW_an1ld1o" def adjust_learning_rate(optimizer, epoch): """Sets the learning rate to the initial LR decayed by 10 every 30 epochs""" global lr lr = lr * (0.1**(epoch // 30)) for param_group in optimizer.state_dict()['param_groups']: param_group['lr'] = lr def accuracy(y_pred, y_actual, topk=(1, )): """Computes the precision@k for the specified values of k""" maxk = max(topk) batch_size = y_actual.size(0) _, pred = y_pred.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(y_actual.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0) res.append(correct_k.mul_(100.0 / batch_size)) return res # + id="9Z22_ExhlgF5" class TestImageFolder(data.Dataset): def __init__(self, root, transform=None): images = [] for filename in sorted(glob.glob(test_path + "*.jpg")): images.append('{}'.format(filename)) self.root = root self.imgs = images self.transform = transform def __getitem__(self, index): filename = self.imgs[index] img = Image.open(os.path.join(self.root, filename)) if self.transform is not None: img = self.transform(img) return img, filename def __len__(self): return len(self.imgs) # + id="BNQUlIAhljF7" def shear(img): width, height = img.size m = random.uniform(-0.05, 0.05) xshift = abs(m) * width new_width = width + int(round(xshift)) img = img.transform((new_width, height), Image.AFFINE, (1, m, -xshift if m > 0 else 0, 0, 1, 0), Image.BICUBIC) return img # + id="4u_AVeVLllFz" def main(mode="train", resume=False): global best_prec1 for arch in archs: # create model print("=> Starting {0} on '{1}' model".format(mode, arch)) model = 
models.__dict__[arch](pretrained=True) # Don't update non-classifier learned features in the pretrained networks for param in model.parameters(): param.requires_grad = False # Replace the last fully-connected layer # Parameters of newly constructed modules have requires_grad=True by default # Final dense layer needs to replaced with the previous out chans, and number of classes # in this case -- resnet 101 - it's 2048 with two classes (cats and dogs) model.fc = nn.Linear(2048, 2) if arch.startswith('alexnet') or arch.startswith('vgg'): model.features = torch.nn.DataParallel(model.features) model.cuda() else: model = torch.nn.DataParallel(model).cuda() # optionally resume from a checkpoint if resume: if os.path.isfile(resume): print("=> Loading checkpoint '{}'".format(resume)) checkpoint = torch.load(resume) start_epoch = checkpoint['epoch'] best_prec1 = checkpoint['best_prec1'] model.load_state_dict(checkpoint['state_dict']) print("=> Loaded checkpoint (epoch {})".format(checkpoint['epoch'])) else: print("=> No checkpoint found at '{}'".format(args.resume)) cudnn.benchmark = True # Data loading code traindir = split_train_path valdir = valid_path testdir = test_path normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) train_loader = data.DataLoader( datasets.ImageFolder(traindir, transforms.Compose([ # transforms.Lambda(shear), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize, ])), batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True) val_loader = data.DataLoader( datasets.ImageFolder(valdir, transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize, ])), batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True) test_loader = data.DataLoader( TestImageFolder(testdir, transforms.Compose([ # transforms.Lambda(shear), transforms.Resize(256), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize, ])), batch_size=1, shuffle=False, num_workers=1, pin_memory=False) if mode == "test": test(test_loader, model) return # define loss function (criterion) and pptimizer criterion = nn.CrossEntropyLoss().cuda() if mode == "validate": validate(val_loader, model, criterion, 0) return optimizer = optim.Adam(model.module.fc.parameters(), lr, weight_decay=1e-4) for epoch in range(epochs): adjust_learning_rate(optimizer, epoch) # train for one epoch train(train_loader, model, criterion, optimizer, epoch) # evaluate on validation set prec1 = validate(val_loader, model, criterion, epoch) # remember best Accuracy and save checkpoint is_best = prec1 > best_prec1 best_prec1 = max(prec1, best_prec1) save_checkpoint({ 'epoch': epoch + 1, 'arch': arch, 'state_dict': model.state_dict(), 'best_prec1': best_prec1, }, is_best) # + colab={"base_uri": "https://localhost:8080/"} id="bMEqN1QCATci" outputId="3a55cd8c-0ede-4f17-c285-820b0009abd8" main(mode="train") # + colab={"base_uri": "https://localhost:8080/"} id="Ki983OYtDQPK" outputId="ced0d049-be3e-44ba-8399-6f5efc531921" main(mode="validate", resume='model_best.pth.tar') # + id="6L6PKWUAAS0l" # + colab={"base_uri": "https://localhost:8080/"} id="gzf_-fJAOjax" outputId="16a40740-70cc-4fe3-aa57-598e9b17aa11" main(mode="test", resume='model_best.pth.tar') # + id="r1G4FMcSO6GV" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # 
--- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="q_569ugy6lyY" # > This notebook was created for code illustration of the `ML.now()` course # # # `Univariate Linear Regression` # # **Date Created**: June 6, 2021 # # # **Author**: # [Nishi](https://github.com/queenish001/) # # [[Course Repository](https://github.com/shivanishimpi/MLnow_2.0)] # # + [markdown] id="KmW9S6zjaauQ" # ## Setup # + colab={"base_uri": "https://localhost:8080/"} id="oX_QibhWsnnX" outputId="69493bd4-6429-4d57-f2a5-ed9cc8330fe9" from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/"} id="wYuNn17Isvml" outputId="d36cb0d0-8ade-42ab-ae5c-1fc870b47a52" # cd '/content/drive/MyDrive/GS_ML_WORKSHOP/02_EDA/Student' # + colab={"base_uri": "https://localhost:8080/"} id="oR69jwALs-uP" outputId="4ba195b0-7b79-4426-a235-5ab8b583cd22" # ls # + id="0rYwvbeMtPJg" import os # + id="cnGQRbmes_tZ" import pandas as pd #working with csv or excel files import numpy as np #working with numbers/ arrays/ tensors import tensorflow as tf #framework from tensorflow import keras #API / library import os #using os commands between the python language # + id="VV-Ps6dJtX6B" mathData = pd.read_csv('student-mat.csv', sep=';') #load the csv file as dataframe # + colab={"base_uri": "https://localhost:8080/", "height": 379} id="2_KId_bptr05" outputId="4d709640-35a0-4a5a-a2ca-330fa7fcb842" mathData.head(10) #print the first ten rows of the dataframe # + [markdown] id="i2bal50yG_PZ" # We are just going to consider the columns `G1` and `G3` for univariate linear regression # # $G3_i = G1_i \cdot \theta_1 + \theta_0$ # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="Yw0PNCuTSEDC" outputId="30b9d374-35f0-4d3a-b199-280c3e9a3d88" uniMathData = mathData[['G1', 'G3']] uniMathData.head(5) # + id="eyzP9mL4Hmb5" uniMathData.to_csv('univariate_MathData_2.csv') # + colab={"base_uri": "https://localhost:8080/"} id="I-3qK_nNxnTA" outputId="f449ca74-6d40-41dd-bc42-dfcc4e5d2350" # ls # + [markdown] id="ImV3p6AKxszU" # ## Data visualization # + colab={"base_uri": "https://localhost:8080/", "height": 157} id="RqumcsZLxq4v" outputId="7e172d4b-5db9-4e06-c7c1-14357188e15e" import seaborn as sns sns.palplot(sns.color_palette('PuOr')) #Purple to Orange colors pal = sns.color_palette('PuOr', 6) #print 6 color shades from Purple to Orange pal.as_hex() #set hex code values for colors import matplotlib.pyplot as plt plt.style.use(['seaborn']) sns_colors = ['#c6690c', '#664697'] #orange Purple hex codes sns.set_palette(sns_colors) #set the palette as sns_colors sns.palplot(sns.color_palette(sns_colors)) #plot the color codes # + colab={"base_uri": "https://localhost:8080/"} id="A6MT5E8Fy7MZ" outputId="50c8ed17-a366-4f9a-a0bf-530f3b5e3ab5" uniMathData.columns #columns in the dataframe # + colab={"base_uri": "https://localhost:8080/", "height": 391} id="PrsTwdHHzBbk" outputId="ea99d5af-7339-40d9-9d6a-695359775b99" #pairplot for all the values sns.pairplot(uniMathData, x_vars = ['G1', 'G3'], y_vars = ['G1', 'G3'], diag_kind='kde' ) # + [markdown] id="Klm33YcpKZkl" # ## Data Splits # + colab={"base_uri": "https://localhost:8080/"} id="UwrSOCKS1NhN" outputId="2cc691a0-33f6-41a7-e004-d2f60ee0eb0b" #80-20 train-test percent split trainDataset = uniMathData.sample(frac=0.8, random_state=0) testDataset = uniMathData.drop(trainDataset.index) print(trainDataset.head()) print(testDataset.head()) # + colab={"base_uri": "https://localhost:8080/"} id="YWO2xRV53wV-" 
outputId="2faeaa06-5618-48cc-dc15-bd7e78d87212" print(trainDataset.shape) print(testDataset.shape) # + colab={"base_uri": "https://localhost:8080/"} id="OmDTbvMC32hg" outputId="169f273e-319d-480c-cc37-51a4d21acdb8" # #copy the trainDataset dataframe for getting the features trainFeatures = trainDataset.copy() testFeatures = testDataset.copy() print(trainFeatures.head()) print(testFeatures.head()) # + colab={"base_uri": "https://localhost:8080/"} id="jSR0_uni4ayh" outputId="862cc218-97dd-49b4-9a05-123317c7c5c8" #removing the G3 column and saving it into the labels variable trainLabels = trainFeatures.pop('G3') testLabels = testFeatures.pop('G3') print(trainLabels.head()) print(testLabels.head()) # + colab={"base_uri": "https://localhost:8080/"} id="HnGSqb-q4wk8" outputId="1f8691f8-62c2-4c99-8f7f-17a6c6b02b8b" print(trainFeatures.head()) print(testFeatures.head()) # + id="XPTSq34jk4o_" # univariate -> num(features) = 1 # multvariate -> num(features) > 1 = 11 model = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # + id="4C4GKBdE6iCE" model.compile( loss = 'mean_absolute_error', #minimizing the MAE loss optimizer = tf.keras.optimizers.Adam(0.001), #learning rate specified as 0.001 # optimizer = 'adam', #takes the default learning rate metrics = ['mae', 'mse'] #meanSquare and meanAbsolute error metrics ) # + colab={"base_uri": "https://localhost:8080/"} id="d_A7DrhB6zvm" outputId="11ee327b-d732-4d9f-f15c-66dfad9adf47" numEpochs = 300 history = model.fit(x = trainFeatures, y = trainLabels, validation_data = (testFeatures, testLabels), epochs = numEpochs) # + id="6zYkLGS6wQyy" colab={"base_uri": "https://localhost:8080/"} outputId="c00bec61-d053-4100-ba3c-e3e5d613f3da" print(history) # + colab={"base_uri": "https://localhost:8080/"} id="1lsbk_ohAu1j" outputId="522e4bd9-fdab-438f-ed4c-c882ba14891b" model.summary() # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="P6VnkPim7tH7" outputId="70fc0776-a0a7-45f5-a26d-a927324077fc" # tempString = 'mse' def curvePlots(tempString): plt.plot(history.history[tempString]) plt.plot(history.history[f'val_{tempString}']) plt.xlabel('NumEpochs') plt.ylabel(tempString) plt.legend([tempString, f'val_{tempString}']) plt.show() curvePlots('mse') curvePlots('mae') curvePlots('loss') # + id="esW0eHQbzoj4" colab={"base_uri": "https://localhost:8080/"} outputId="4d8989dd-945d-4235-f4c4-f2b6e8906263" model.predict([1]) # + colab={"base_uri": "https://localhost:8080/"} id="KJrLWamX8Tg-" outputId="15a0d32e-9571-4ff6-9f59-707e4c7c8f01" # testPreds = model.predict(testFeatures).flatten() #array of all prediction values #for single values print(f'Prediction for input value 1: {model.predict([1])}') # for a list of values tempListforPreds = [1,2,3,4,5] print(f''' input List = {tempListforPreds} List of Predictions: {model.predict(tempListforPreds)} List of Predictions (flattened out): {model.predict(tempListforPreds).flatten()} ''') # + id="vJNFh6hc0K6r" colab={"base_uri": "https://localhost:8080/"} outputId="182b0011-f838-43f5-cde1-73897ebd5724" print(testFeatures) # + id="MniGZOu50IyN" testPreds = model.predict(testFeatures).flatten() #array of all prediction values # + id="du0Orwu40Qan" colab={"base_uri": "https://localhost:8080/"} outputId="e6a9bb52-951d-4def-ecab-b075bce442e2" print(len(testPreds)) print(testPreds) # + colab={"base_uri": "https://localhost:8080/", "height": 361} id="EfP8bB_v89Wa" outputId="fe6a06a0-48c9-4375-e9d2-d6f516151b30" # prediciton plot --> how well is your model predicting across the actual labels def predPlot(labels, 
predictions): plt.scatter(labels, predictions) plt.ylabel('Predictions') plt.xlabel('True Value or Labels') plt.axis('equal') plt.axis('square') plt.xlim([0, plt.xlim()[1]]) plt.ylim([0, plt.ylim()[1]]) plt.show() predPlot(testLabels, testPreds) # + colab={"base_uri": "https://localhost:8080/", "height": 361} id="70608Z0o9tWL" outputId="59629145-be63-4b36-eaff-c4f7c056b719" #error plot --> gaussian distribution def errorPlot(preds, labels, counts): errors = preds - labels plt.hist(errors, counts) plt.xlabel('Error') plt.ylabel('Counts') plt.show() errorPlot(testPreds, testLabels, numEpochs) # + [markdown] id="oshf2mUG1vPk" # Note: # # # Validation loss `val_loss` is a metric that tells you how much deviation from the actual label can you expect in the predicted label # # To optimize your predicitons --> # # - Hyperparameter tuning --> `numEpochs`, `optimizer`, `learning_rate`, lossFunctions # + id="1UZhoCFOCoGk" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # ## Classification with RF # + from sklearn.datasets import load_iris from sklearn.cross_validation import cross_val_score, train_test_split from sklearn import ensemble, tree, metrics, clone import numpy as np from sklearn.model_selection import GridSearchCV np.random.seed(10) # set random seed # + iris = load_iris() X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, train_size=0.75, test_size=0.25) rf_classifier = ensemble.RandomForestClassifier(n_estimators=10, # number of trees criterion='gini', # or 'entropy' for information gain max_depth=None, # how deep tree nodes can go min_samples_split=2, # samples needed to split node min_samples_leaf=1, # samples needed for a leaf min_weight_fraction_leaf=0.0, # weight of samples needed for a node max_features='auto', # number of features for best split max_leaf_nodes=None, # max nodes min_impurity_split=1e-07, # early stopping n_jobs=1, # CPUs to use random_state = 10, # random seed class_weight="balanced") # adjusts weights inverse of freq, also "balanced_subsample" or None model = rf_classifier.fit(X_train, y_train) # + print("Score of model with test data defined above:") print(model.score(X_test, y_test)) print() predicted = model.predict(X_test) print("Classification report:") print(metrics.classification_report(y_test, predicted)) print() scores = cross_val_score(model, iris.data, iris.target, cv=10) print("10-fold cross-validation:") print(scores) print() print("Average of 10-fold cross-validation:") print(np.mean(scores)) # - # We can visualize and compare decision paths of decision trees and random forests: # + % matplotlib inline # The visualization code below is adapted from: # Copyright (c) 2016, # All rights reserved. import numpy as np import matplotlib.pyplot as plt from sklearn.externals.six.moves import xrange from sklearn.tree import DecisionTreeClassifier # Parameters n_classes = 3 plot_colors = ('r','orange','yellow','blue') markers = ('D','s','^','o') cmap = plt.cm.RdYlBu plot_step = 0.02 # fine step width for decision surface contours plot_step_coarser = 0.5 # step widths for coarse classifier guesses models = [DecisionTreeClassifier(max_depth=None), ensemble.RandomForestClassifier(n_estimators=10) ] titles = ["Decision Tree: Sepal Length vs. Sepal Width", "Random Forest: Sepal Length vs. Sepal Width", "Decision Tree: Sepal Length vs. 
Petal Length", "Random Forest: Sepal Length vs. Petal Length", "Decision Tree: Sepal Length vs. Petal Width", "Random Forest: Sepal Length vs. Petal Width"] cnt1 = 0 y_labels = ['Sepal Width (cm)','Sepal Width (cm)', 'Petal Length (cm)','Petal Length (cm)', 'Petal Width (cm)', 'Petal Length (cm)'] labels = iris.target_names for pair in ([0,1], [0,2], [0,3]): for model in models: # We only take the two corresponding features X = np.asarray([[x[pair[0]], x[pair[1]]] for x in X_train]) y = y_train.astype(np.float) np.random.seed(10) # Train clf = clone(model) clf = model.fit(X, y) scores = clf.score(X, y) model_title = titles[cnt1] plt.title(model_title) # Now plot the decision boundary using a fine mesh as input to a # filled contour plot x_min, x_max = 0, X[:, 0].max() + 1 y_min, y_max = 0, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) # Plot either a single DecisionTreeClassifier or alpha blend the # decision surfaces of the ensemble of classifiers if isinstance(model, DecisionTreeClassifier): Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) cs = plt.contourf(xx, yy, Z, cmap=cmap) else: # Choose alpha blend level with respect to the number of estimators # that are in use estimator_alpha = 1.0 / len(model.estimators_) for tree in model.estimators_: Z = tree.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap) # Build a coarser grid to plot a set of ensemble classifications # to show how these are different to what we see in the decision # surfaces. These points are regularly space and do not have a black outline xx_coarser, yy_coarser = np.meshgrid(np.arange(x_min, x_max, plot_step_coarser), np.arange(y_min, y_max, plot_step_coarser)) Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(), yy_coarser.ravel()]).reshape(xx_coarser.shape) cs_points = plt.scatter(xx_coarser, yy_coarser, s=15, c=Z_points_coarser, cmap=cmap, edgecolors="none") # Plot the training points, these are clustered together and have a # black outline for i, c in zip(xrange(n_classes), plot_colors): idx = np.where(y == i) plt.scatter(X[idx, 0], X[idx, 1], c=c, cmap=cmap,s=60,marker=markers[i], label=labels[i]) plt.legend(loc=1) plt.xlabel('Sepal Length (cm)') plt.ylabel(y_labels[cnt1]) plt.show() cnt1+=1 # - # ## Regression with RFs # + from sklearn.cross_validation import train_test_split from sklearn.datasets import load_boston from sklearn import ensemble np.random.seed(10) # set random seed boston = load_boston() X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, train_size=0.75, test_size=0.25) rf_reg = ensemble.RandomForestRegressor(n_estimators=10, # number of trees criterion='mse', # how to measure fit max_depth=None, # how deep tree nodes can go min_samples_split=2, # samples needed to split node min_samples_leaf=1, # samples needed for a leaf min_weight_fraction_leaf=0.0, # weight of samples needed for a node max_features='auto', # max feats max_leaf_nodes=None, # max nodes random_state = 10, # random seed n_jobs=1) # how many to run parallel model = rf_reg.fit(X_train, y_train) print(model.score(X_test, y_test)) # + print("Score of model with test data defined above:") print(model.score(X_test, y_test)) print() scores = cross_val_score(model, boston.data, boston.target, cv=10) print("10-fold cross-validation:") print(scores) print() print("Average of 10-fold cross-validation:") print(np.mean(scores)) # - # ### Grid 
Search # + param_grid = {'min_samples_split': range(2,10), 'min_samples_leaf': range(1,10)} model_r = GridSearchCV(ensemble.RandomForestRegressor(), param_grid) model_r.fit(X_train, y_train) best_index = np.argmax(model_r.cv_results_["mean_test_score"]) print(model_r.cv_results_["params"][best_index]) print(max(model_r.cv_results_["mean_test_score"])) print(model_r.score(X_test, y_test)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (python3/3.6.2) # language: python # name: module-python3-3.6.2-python3 # --- import sys sys.path[:] sys.path.insert(0, '/mnt/home/landerson/.local/lib/python3.6/site-packages') import pynbody import matplotlib.pyplot as plt # %matplotlib inline dir = '/mnt/ceph/users/firesims/ananke/Latte/m12m/snapdir_600/' filename = 'snapshot_600' sim = pynbody.load(dir+filename) sim sim.properties sim.unit sim.gas['mass'].units sim.gas['x'].units sim.dark['x'].abs().max() blah = pynbody.analysis.halo.center(sim.gas,vel=False) sim.dark['x'].abs().min() sim.physical_units() sim.original_units() import numpy as np np.min(np.abs(sim.gas['y'])) pynbody.plot.image(sim.gas, qty='rho', threaded=False) # + # pynbody.plot.image? # - pynbody.plot.faceon_image(sim.gas) pynbody.plot.sideon_image(sim.gas) sim.all_keys() sim['x'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Machine Learning Client # # This is a rudimentary client that trains a ML model on sensor data. It is used in three steps: # # # * **Training Data Recording Phase** # # The client receives some sensor data from PureData along with a to-be-predicted label in the last position of the list from PureData. # # * **Training Phase** # # A ML model is then learned on that data to predict the labels provided during the training process # # * **Prediction Phase** # # Finally the client uses the learned ML model to predict one of the learned categories from new sensor data. 
The prediction is sent via UDP in OSC format to PureData # ## Training Data Recording # This should not be necessary, but in case the libraries are not installed, # !pip install -r requirements.txt # + import socket import numpy as np localIP = "127.0.0.1" localPortData = 20007 bufferSize = 64 # Create a datagram socket UDPServerSocketData = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) # Bind to address and ip UDPServerSocketData.bind((localIP, localPortData)) # UDPServerSocketData.settimeout(2) print("UDP Data server up and listening") def parse_message(msg): return [float(x) for x in msg.strip().strip(';').split(" ")] data_and_labels = [] fh = open(f'recording-training-data-{np.random.randint(0,1000)}.txt','w') # Listen for incoming datagrams try: while(True): messageData, addressData = UDPServerSocketData.recvfrom(bufferSize) fh.write(messageData.decode()) data_and_labels.append(parse_message(messageData.decode())) finally: UDPServerSocketData.close() fh.close() # - data_and_labels # ## Training ML Model # # Replace the (rather simple) [NearestCentroid Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestCentroid.html) with [any sklearn classifier](https://scikit-learn.org/stable/supervised_learning.html) from sklearn.neighbors import NearestCentroid from sklearn.metrics import classification_report Xy = np.array(data_and_labels) X, y = Xy[:,:-1], Xy[:,-1] clf = NearestCentroid().fit(X,y) print(classification_report(y, clf.predict(X))) # ## Prediction Phase # + from pythonosc import udp_client localPortPrediction = 7001 serverAddressPort = (localIP, localPortPrediction) osc_client = udp_client.SimpleUDPClient(*serverAddressPort) # Create a datagram socket UDPServerSocketData = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) # Bind to address and ip UDPServerSocketData.bind((localIP, localPortData)) print("UDP Data server up and listening") # Create a datagram socket UDPClientSocketPrediction = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) print("UDP Prediction client ready") fh = open(f'recording-during-prediction-{np.random.randint(0,1000)}.txt','w') # Listen for incoming datagrams try: while(True): message, address = UDPServerSocketData.recvfrom(bufferSize) fh.write(message.decode()) prediction = clf.predict([parse_message(message.decode())[:-1]])[0] bytesToSend = str.encode(f"{prediction}") print(f"data {message} prediction {prediction}") # send prediction to server osc_client.send_message("/prediction", prediction) finally: fh.close() UDPServerSocketData.close() UDPServerSocketPrediction.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 03- Stack & Queue # ## 3.1 Stack # A stack uses LIFO (last-in first-out) ordering. As in the stack of dinner plates, the most recent item added to the stack is the first item to be removed. # It uses the following operations: # - `pop()`: Remove the top item from the stack. # - `push()`: Add an item to the top of the stack. # - `peek()`: Return the top of the stack. # - `isEmpty()`: Return true if and only if the stack is empty. # ### Stacks vs. Array # - An array is a contiguous block of memory. A stack is a first-in-last-out data structure with access only to the top of the data. In stack we lose the ability of constant-time access to the ith item. 
However, it allows constant-time adds and removes, as it doesn't require shifting elements around.
# - Since many languages do not provide a built-in stack facility, a stack is usually backed by either an array or a linked list.
# - In an array, values can be added or deleted on either side, but in a stack the other side is sealed.
# ![array](./images/stack.jpg)
# ## 3.2 Queue
# A queue implements FIFO (first-in first-out) ordering.
# It uses the following operations:
# - `add(item)`: Add an item to the end of the list.
# - `remove()`: Remove the first item in the list.
# - `peek()`: Return the top of the queue.
# - `isEmpty()`: Return true if and only if the queue is empty.
# A queue can be implemented with a linked list. In fact, they are essentially the same thing as long as items are added and removed from opposite sides.
# |Data Structure|Access (avg)|Search (avg)|Insertion (avg)|Deletion (avg)|Access (worst)|Search (worst)|Insertion (worst)|Deletion (worst)|Space Complexity (worst)|
# |:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
# |Stack|O(n)|O(n)|O(1)|O(1)|O(n)|O(n)|O(1)|O(1)|O(n)|
# |Queue|O(n)|O(n)|O(1)|O(1)|O(n)|O(n)|O(1)|O(1)|O(n)|
# ## Stack & Queue in Python
# ### the `list` Built-in
# Python's built-in list type makes a decent stack data structure as it supports push and pop operations in amortized O(1) time.
# Python's lists are implemented internally as dynamic arrays, which means they occasionally need to resize the storage space for their elements when elements are added or removed. The list over-allocates its backing storage so that not every push or pop requires resizing, and you get an amortized O(1) time complexity for these operations.
# The downside is that this makes their performance less consistent than the stable O(1) inserts and deletes provided by a linked-list-based implementation. On the other hand, lists do provide fast O(1) random access to elements on the stack, which can be an added benefit.
# Here's an important performance **caveat** when using lists as stacks:
# To get the amortized O(1) performance for inserts and deletes, new items must be added to the end of the list with the `append()` method and removed again from the end using `pop()`. Stacks based on Python lists grow to the right and shrink to the left.
#
# Adding and removing from the front is much slower and takes O(n) time, as the existing elements must be shifted around to make room for the new element.
#
# ### the `collections.deque` Class
#
# The deque class implements a double-ended queue that supports adding and removing elements from either end in O(1) time (non-amortized).
#
# Because deques support adding and removing elements from either end equally well, they can serve both as queues and as stacks, as the short sketch below illustrates.
#
# Python's deque objects are implemented as doubly-linked lists, which gives them excellent and consistent performance for inserting and deleting elements, but poor O(n) performance for randomly accessing elements in the middle of the stack.
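# A minimal sketch added for clarity (not part of the original notebook), using only the standard library: a `list` works well as a stack when you `append()` and `pop()` at the right end, while `collections.deque` covers the queue operations (`add`, `remove`, `peek`) with `append()`, `popleft()` and indexing.
# +
from collections import deque

# Stack backed by a list: push/pop at the right end, amortized O(1)
stack = []
stack.append('a')        # push('a')
stack.append('b')        # push('b')
print(stack[-1])         # peek() -> 'b'
print(stack.pop())       # pop()  -> 'b'
print(len(stack) == 0)   # isEmpty() -> False ('a' is still on the stack)

# Queue backed by a deque: enqueue on the right, dequeue on the left, both O(1)
queue = deque()
queue.append('a')        # add('a')
queue.append('b')        # add('b')
print(queue[0])          # peek() -> 'a'
print(queue.popleft())   # remove() -> 'a'
print(len(queue) == 0)   # isEmpty() -> False ('b' is still queued)
# -
# The same `deque` also works as a stack via `append()`/`pop()`, which is why it is a reasonable default when you need consistent O(1) behaviour at both ends; `queue.Queue` adds thread-safety on top of the same idea.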
from collections import deque # + SIZE = 100000000 # Declaring deque queue = deque(list(range(SIZE))) mylist = list(range(SIZE)) # - # %timeit mylist[0] # %timeit queue[0] # ## 3.3 Python List Size & Capacity import sys l = [] size = sys.getsizeof(l) print("size of an empty list :", size) # append four element in the list l.append(1) l.append(2) l.append(3) l.append(4) sys.getsizeof(l) # calculate total size of the list after appending four element print("Now total size of a list :", sys.getsizeof(l)) print("Size of each element:", (sys.getsizeof(l) - size) / 4 ) # calculate capacity of the list after appending four element c = (sys.getsizeof(l) - size) // 8 c def get_capacity(mylist): # this functions returns the capacity of an element # 72 is list overhead size and 8 is the size of each element return (sys.getsizeof(mylist) - 72) // 8 # Now look at how python manages size and capacity in list # + mylist = [] for i in range(32): print(len(mylist), get_capacity(mylist)) mylist.append(i) # - # **Best way to reserve space in list:** # # ``` # mylist = [None] * reserve_size # ``` # reserve space for 1000 elements in a list mylist = [None] * 1000 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt import numpy as np import os import sys import sqlite3 as db import datetime as dt import pandas as pd projectDir = os.path.dirname(os.path.abspath(os.path.curdir)) dbFile = os.path.join(projectDir, 'db', '2019_nCov_data.db') conn = db.connect(dbFile) cu = conn.cursor() # - # ## 介绍 # # 接下来的数据分析基于知乎上[Snowino](https://www.zhihu.com/question/368541456/answer/991544464?utm_source=wechat_session&utm_medium=social&utm_oi=40289565671424)的回答。 # # 在此对[Snowino](https://www.zhihu.com/question/368541456/answer/991544464?utm_source=wechat_session&utm_medium=social&utm_oi=40289565671424)的精彩分析表示感谢。 # # ## 前言 # # 需要回答的几个关键问题: # # - 武汉/湖北/全国各大小城市**封城**以后,疫情**传播速度是否得到控制**? # - 湖北现在医疗系统**是否超负荷了**? # - 疫情接下来几天会如何发展? # - 如果我得了这个病,我有多大概率会死? # 回答这些问题前,我们需要谈论数据的可信度。 # + import matplotlib.dates as mdates plt.rcParams['font.family'] = ['Arial Unicode MS'] plt.rcParams['axes.unicode_minus'] = False # get overall deadth count df = pd.read_sql_query("""select time, deadCount, confirmedCount from Overall""", conn) df['date'] = pd.to_datetime(df['time']/1000, unit='s') df = df.set_index('date') dailyMean = df.resample('D').mean() plt.figure(figsize=[9,4]) plt.plot(dailyMean.index, dailyMean['deadCount']/dailyMean['confirmedCount']*100, marker='o') plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m/%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=3)) plt.xticks(rotation=45) # plt.xlabel('日期') plt.ylabel(u'死亡率(%)') plt.ylim([1.5, 4.0]) plt.text(0.5, 0.6, '全国实时表观死亡比例 = 累计死亡数/累计确诊数', transform=plt.gca().transAxes, ) # - # 网上有质疑国家统计数据的言论如下: # # > 实时表观累计死亡比例一直都是2.1%,因此数据是人造的! 
# # 由上图可知,基于数据计算的死亡率并不是一个固定值,因此如上说法并不准确。我们应该相信国家统计数据的**权威性**。 # **武汉/湖北/全国各大小城市封城以后,疫情传播速度是否得到控制?** # # # + # get overall deadth count OverallDf = pd.read_sql_query("""select updateTime, deadCount, confirmedCount, provinceName from Region_Data where provinceName!='湖北省' and country='中国' and provinceName!='待明确地区';""", conn) OverallDf['date'] = pd.to_datetime(OverallDf['updateTime'].apply(int)/1000, unit='s') OverallDf = OverallDf.set_index('date') OverallDf = OverallDf.groupby('provinceName') dailyMeanOverall = OverallDf.resample('D').mean().round().groupby('date') totalConfirmedCount = dailyMeanOverall['confirmedCount'].agg(np.sum)[:-1] plt.figure(figsize=[8, 4]) plt.plot(totalConfirmedCount.index[1:], np.diff(totalConfirmedCount), marker='o') plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m/%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=3)) plt.xticks(rotation=45) # plt.xlabel('日期') plt.ylabel('湖北以外每日新增确诊数') plt.ylim([0, 1400]) # + # get data OverallDf = pd.read_sql_query("""select updateTime, confirmedCount as totalCount, provinceName from Region_Data where provinceName!='湖北省' and country='中国' and provinceName != '待明确地区'""", conn) OverallDf['date'] = pd.to_datetime(OverallDf['updateTime'].apply(int)/1000, unit='s') OverallDf = OverallDf.set_index('date') OverallDf = OverallDf.groupby('provinceName') dailyMeanOverall = OverallDf.resample('D').mean().round().groupby('date') totalCount = (dailyMeanOverall['totalCount']).agg(np.sum)[:-1] HubeiDf = pd.read_sql_query("""select updateTime, confirmedCount as totalCount, provinceName from Region_Data where provinceName='湖北省'""", conn) HubeiDf['date'] = pd.to_datetime(HubeiDf['updateTime'].apply(int)/1000, unit='s') HubeiDf = HubeiDf.set_index('date') HubeiDf = HubeiDf.groupby('provinceName') dailyMeanHubei = HubeiDf.resample('D').mean().round().groupby('date') HubeitotalCount = (dailyMeanHubei['totalCount']).agg(np.sum)[:-1] plt.figure(figsize=[8, 5]) plt.plot(totalCount.index[1:], np.diff(totalCount), marker='o', label='湖北省以外') plt.plot(HubeitotalCount.index[1:], np.diff(HubeitotalCount), marker='*', label='湖北省') plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m/%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=3)) plt.xticks(rotation=45) # plt.xlabel('日期') plt.ylabel('每日新增 确诊+疑似数') plt.legend() plt.ylim([0, 4000]) # - # ## 武汉市的疫情情况 # + WuhanDf = pd.read_sql_query("""select updateTime, confirmedCount from City_Data where cityName='武汉';""", conn) WuhanDf['date'] = pd.to_datetime(WuhanDf['updateTime'].apply(int)/1000, unit='s') WuhanDf = WuhanDf.set_index('date') dailyMeanWuhan = WuhanDf.resample('D').mean().round() fig, ax1 = plt.subplots(figsize=(8, 5)) s1, = ax1.plot( dailyMeanWuhan.index, dailyMeanWuhan['confirmedCount'], color='r', marker='o') ax1.set_ylabel(u'确诊人数') ax1.set_ylim([0, 60000]) ax2 = ax1.twinx() s2, = ax2.plot( dailyMeanWuhan.index[1:], np.diff(dailyMeanWuhan['confirmedCount']), color='k', marker='o') ax2.set_ylabel(u'每日新增确诊人数') ax2.set_ylim([0, 4000]) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m/%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=3)) plt.xlim( [dt.datetime(2020, 1, 23), max(dailyMeanWuhan.index) + dt.timedelta(days=1)]) plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45) plt.title('武汉市新型肺炎疫情情况', fontsize=24) plt.legend( (s1, s2), (u'确诊人数', u'每日新增确诊人数')) # - # ## 鄂州市疫情情况 # + EzhouDf = pd.read_sql_query("""select updateTime, confirmedCount from City_Data where cityName='鄂州';""", conn) EzhouDf['date'] = 
pd.to_datetime(EzhouDf['updateTime'].apply(int)/1000, unit='s') EzhouDf = EzhouDf.set_index('date') dailyMeanEzhou = EzhouDf.resample('D').mean() fig, ax1 = plt.subplots(figsize=(8, 5)) s1, = ax1.plot( dailyMeanEzhou.index, dailyMeanEzhou['confirmedCount'], color='r', marker='o') ax1.set_ylim([0, 1500]) ax1.set_ylabel(u'确诊人数') ax2 = ax1.twinx() s2, = ax2.plot( dailyMeanEzhou.index[1:], np.diff(dailyMeanEzhou['confirmedCount']), color='k', marker='o') ax2.set_ylabel(u'每日新增确诊人数') ax2.set_ylim([0, 20]) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m/%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=3)) plt.xlim( [dt.datetime(2020, 1, 23), max(dailyMeanEzhou.index) + dt.timedelta(days=1)]) plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45) plt.title('鄂州市新型肺炎疫情情况', fontsize=24) plt.legend( (s1, s2), (u'确诊人数', u'每日新增确诊人数'), loc='upper left') // --- // jupyter: // jupytext: // text_representation: // extension: .cpp // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: C++14 // language: C++14 // name: xcpp14 // --- // + [markdown] tags=[] // # Quicksort algorithm // // Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The sub-arrays are then sorted recursively. This can be done in-place, requiring small additional amounts of memory to perform the sorting. // // The steps for in-place quicksort are the following: // // - If the range has less than two elements, return immediately. // - Otherwise pick a value, called a pivot, that occurs in the range. How you choose depends on the partition routine. In this notebook, we will always choose the last element (Lomuto partition scheme). // - Partition the range: reorder its elements, while determining a point of division, so that all elements smaller than the pivot come before the division, while all elements greater than the pivot come after it. // - Recursively apply the quicksort algorithm to the sub-range up to the point of division, and to the sub-range after it. // - #include <vector> #include <iostream> #include <cstdlib> using array_type = std::vector<double>; // The next cell initializes the array that we will use for testing. array_type a(10u); for (size_t i = 0; i < a.size(); ++i) { a[i] = (rand() / (double)RAND_MAX); } a // + [markdown] tags=[] // ## Partition // // Write a function `partition` that partitions an array. It should take the array, the lower bound and the upper bound as arguments, and should return the pivot index. // - size_t partition(array_type& a, size_t low, size_t high) { double pivot = a[high]; size_t index = low - 1; for (size_t i = low; i < high; ++i) { if (a[i] <= pivot) { ++index; std::swap(a[i], a[index]); } } ++index; std::swap(a[index], a[high]); return index; } // You can then test your function by executing the next cell. What do you notice? size_t index = partition(a, 0u, a.size()-1); std::cout << index << std::endl; a // ## Quicksort // // Implement the quicksort function, which divides the portion of an array into two partitions and sorts those. The function should take the array, the lower bound and the upper bound as arguments.
void quicksort(array_type& a, size_t low, size_t high) { if (low >= high) return; size_t p = partition(a, low, high); quicksort(a, low, p-1); quicksort(a, p+1, high); } // Then test it in the next cell quicksort(a, 0, a.size() - 1); a // Having to pass the lower and upper bounds is required by the implementation, but not very convenient for the user. How can you improve the API? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import gensim import pickle infile = open('patientnumber_to_stringtextfile_dict', 'rb') patient_to_stringtextfile = pickle.load(infile) def create_tagged_data(pat_to_textfile): tagged_documents = [] for pat_num in pat_to_textfile: tagged_documents.append(gensim.models.doc2vec.TaggedDocument(words=pat_to_textfile[pat_num], tags=['Patient' + str(pat_num)])) return tagged_documents tagged_documents = create_tagged_data(patient_to_stringtextfile) model = gensim.models.doc2vec.Doc2Vec(tagged_documents, vector_size=400, window=10, min_count=1, workers=6) model.train(tagged_documents, total_examples=len(tagged_documents), epochs=10) model.save('docembeddings_dim_400_window_10') model.docvecs['Patient11'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- while start != destination: nxt = start + 1 min_distance += min(distance[start], sum_val - distance[nxt]) start = nxt return min_distance class Solution: def distanceBetweenBusStops(self, distance, start: int, destination): if start == destination: return 0 presum = [0] for n in distance: presum.append(n + presum[-1]) sum_val = sum(distance) dist = abs(presum[destination] - presum[start]) return min(dist, sum_val - dist) solution = Solution() solution.distanceBetweenBusStops([7,10,1,12,11,14,5,0], 7, 2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # IMPORT PACKAGES from geoscilabs.em import UXO_TEM_Widget as UXO from IPython.display import display from ipywidgets import HBox # # Contents # # This app contains 3 widgets: # # * **Orientation and polarization widget:** This widget allows the user to visualize the orientation, infer the dimensions and change the polarizabilities of compact objects they wish to model. # * **Data visualization widget:** This widget allows the user to visualize the step-off response of compact objects using three commonly used instruments: EM61, TEMTADS, and MPV. # * **Parameter estimation widget:** This widget allows the user to invert synthetic data collected using EM61, TEMTADS or MPV instruments in order to recover the location and primary polarizabilities for a compact object. # # # # Background Theory # # ## Polarization Tensor # # The magnetic dipole moment ${\bf m}$ being experienced by a compact object is given by: # # \begin{equation} # \mathbf{m = Q \, h_p} # \end{equation} # # where ${\bf h_p} = [h_x,h_y,h_z]^T$ is the primary magnetic field caused by the transmitter before shut-off and ${\bf Q}$ is the called the **polarizability tensor**. 
The polarizability tensor is a 3x3 symmetric, positive-definite (SPD) matrix given by: # # \begin{equation} # {\bf Q} = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{12} & q_{22} & q_{23} \\ q_{13} & q_{23} & q_{33} \end{bmatrix} # \end{equation} # # where $q_{ij}$ defines how strongly field component $h_i$ contributes towards $m_j$. # # # # ## Coordinates and Primary Polarizations # # The polarizability tensor for an object depends on its orientation, dimensions and electromagnetic properties. Because the polarizability tensor is SPD, it can be decomposed using the following eigen-decomposition: # # \begin{equation} # {\bf Q = A \, L(t) \, A^T} # \end{equation} # # where # # \begin{equation} # {\bf A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix} \;\;\;\; \textrm{and} \;\;\;\; # {\bf L(t)} = \begin{bmatrix} L_{x'}(t) & 0 & 0 \\ 0 & L_{y'}(t) & 0 \\ 0 & 0 & L_{z'}(t) \end{bmatrix} # \end{equation} # # ${\bf A}$ is an SPD rotation matrix from the coordinate system defined by the UXO ($x',y',z'$) to the field coordinate system ($x,y,z$). ${\bf A}$ is defined by three angles: $\psi,\theta$ and $\phi$. $\theta$ is the azimuthal angle (angle relative to vertical), $\phi$ is the declination (angle relative to North) and $\psi$ is the roll (rotation about the z' axis). # # ${\bf L(t)}$ characterizes the primary polarizabilities of the object. The magnetic dipole moment experienced by the object is a linear combination of polarizabilities $L_{x'},L_{y'}$ and $L_{z'}$. Depending on the dimensions and electromagnetic properties of the object, $L_{x'},L_{y'}$ and $L_{z'}$ may differ. For example: # # * A sphere has primary polarizabilities $L_{x'}=L_{y'}=L_{z'}$ # * A UXO has primary polarizabilities $L_{x'}=L_{y'} < L_{z'}$ # # The decay of the object's polarization for $t > 0$ is given by: # # \begin{equation} # L_{ii}(t) = k_i \Bigg ( 1 + \frac{t^{1/2}}{\alpha_i^{1/2}} \Bigg )^{-\beta_i} e^{-t/\gamma_i} # \end{equation} # # where the decay of the object's polarization is determined by parameters $k_i,\alpha_i,\beta_i$ and $\gamma_i$. # # # # ## Predicting Data # # There are a multitude of instruments used to measure the time-domain responses exhibited by UXOs (EM61, TEMTADS, MPV). For each individual measurement, a transmitter loop produces a primary magnetic field ${\bf h_p} = [h_x,h_y,h_z]^T$ which is turned off at $t=0$. The primary field polarizes the UXO according to its polarizability tensor ${\bf Q}$. The polarization of the object produces a secondary field which induces an EMF in one or more receiver coils. The field component being measured by each receiver coil depends on its orientation. # # Here ${\bf G} = [g_x,g_y,g_z]$ maps the dipole moment experienced by the object to the induced voltage in a receiver coil: # # \begin{equation} # d = {\bf G \, m} = {\bf G \, Q \, h_p} # \end{equation} # # Because it is SPD, the polarizability tensor may be characterized at each time by 6 parameters $(q_{11},q_{12},q_{13},q_{22},q_{23},q_{33})$. The previous expression can ultimately be reformulated as: # # \begin{equation} # d = {\bf P \, q} # \end{equation} # # where # # \begin{equation} # {\bf q^T} = [q_{11} \;\; q_{12} \;\; q_{13} \;\; q_{22}\;\; q_{23} \;\; q_{33}] # \end{equation} # # and # # \begin{equation} # {\bf P} = [h_xg_x \;\; h_xg_y \!+\! h_yg_x \;\; h_xg_z \!+\! h_zg_x \;\; h_yg_y \;\; h_yg_z \!+\! h_zg_y \;\; h_zg_z] # \end{equation} # # Thus in the case that there are $N$ distinct transmitter-receiver pairs, each transmitter-receiver pair is represented as a row within ${\bf P}$.
${\bf q}$ contains all the necessary information to construct ${\bf Q}$ and ${\bf P}$ contains all the geometric information associated with the problem. # # ## Inversion and Parameter Estimation # # When inverting field-collected UXO data there are two primary goals: # # * Accurate location of a target object (recover $x,y,z$) # * Accurate characterization of a target object (by recovering $L_{x'},L_{y'},L_{z'}$) # # For this widget, we will accomplish these goals in two steps. # # ### Step 1 # # In step 1, we intend to recover the location of the target $(x,y,z)$ and the elements of the polarizability tensor $(q_{11},q_{12},q_{13},q_{22},q_{23},q_{33})$ at each time. A basic approach is applied by finding the location and polarizabilities which minimize the following data misfit function: # # \begin{equation} # \begin{split} # \Phi &= \sum_{i=k}^K \Big \| {\bf W_k} \big ( {\bf P \, q_k - d_{k,obs}} \big ) \Big \|^2 \\ # & \textrm{s.t.} \\ # & q_{min} \leq q_{ij}(t) \leq q_{max} \\ # & q_{ii}(t) \geq 0 \\ # & \big | q_{ij}(t) \big | \leq \frac{1}{2} \big ( \; \big | q_{ii}(t) \big | + \big | q_{jj}(t) \big | \; \big ) # \end{split} # \end{equation} # # where ${\bf P}$ depends on the location of the target, $i$ refers to the time-channel, $d_{i,obs}$ is the observed data at time $i$ and ${\bf W_i}$ are a set of weights applied to the data misfit. The constraint assures that negative polarizabilities (non-physical) are not recovered in order to fit the data. # # ### Step 2 # # Once recovered, ${\bf q}$ at each time can be used to construct the corresponding polarizability tensor ${\bf Q}$. Recall that the eigen-decomposition of ${\bf Q}$ is given by: # # \begin{equation} # {\bf Q = A \, L(t) \, A^T} # \end{equation} # # Thus $L_{x'}(t),L_{y'}(t),L_{z'}(t)$ are just the eigenvalues of ${\bf Q}$ and the elements of the rotation matrix ${\bf A}$ are the eigenvectors. Once $L_{x'},L_{y'},L_{z'}$ have been recovered at all times, the curves can be compared against the known primary polarizabilities of objects which are stored in a library. # # ### Practical Considerations # # **Sampling Density:** The optimum line and station spacing depends significantly on the dimensions of the target, its depth and the system being used to perform the survey. It is important to use a sampling density which accurately characterizes TEM anomalies without adding unnecessary time and expense. # # **Excitation Orientation:** The excitation of a buried target occurs parallel to the inducing field. Thus in order to accurately recover polarizations $L_{x′},L_{y′}$ and $L_{z′}$ for the target, we must excite the target significantly from multiple angles. Ideally, the target would be excited from 3 orthogonal directions; thus assuring the data contains significant contributions from each polarization. # # Orientation and Polarization Widget # # ### Purpose # # This app allows the user to visualize the orientation, approximate dimensions and polarizability of compact objects they wish to model with subsequent apps. # # ### Parameter Descriptions # # * $\Phi$: Clockwise rotation about the z-axis # * $\theta$: Azimuthal angle (angle from vertical) # * $\phi$: Declination angle (Clockwise angle from North) # * $k_i,\alpha_i,\beta_i,\gamma_i$: Parameters which characterize the polarization along axis $i$ # + # NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!! 
Out1 = UXO.ImageUXOWidget() display(HBox(Out1.children[0:3])) display(HBox(Out1.children[3:7])) display(HBox(Out1.children[7:11])) display(HBox(Out1.children[11:15])) display(Out1.children[15]) Out1.out # - # # Data Visualization Widget # # ### Purpose # # This widget allows the user to visualize the time-domain response using three commonly used instruments: EM61, TEMTADS, and MPV. On the leftmost plot, the TEM anomaly at the center of the transmitter loop is plotted at a specified time. On the rightmost plot, the TEM decays registered by all receiver coils for a particular transmitter loop are plotted. # # ### Parameter Descriptions # # * TxType: Instrument used to predict data. Set as "EM61", "TEMTADS" or "MPV" # * $x_{true},y_{true},z_{true}$: Location of the object # * $\psi,\theta,\phi$: Angles defining the orientation of the object # * $k_i,\alpha_i,\beta_i,\gamma_i$: Parameters which characterize the polarization along axis $i$ # * Time channel: Adjusts the time in which the TEM anomaly at the center of the transmitter loop is plotted # * X location, Y location: The transmitter location at which you would like to see all decays measured by the receiver coils. # + # NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!! TxType = "EM61" # Set TxType to "EM61", "TEMTADS" or "MPV" Out2 = UXO.ImageDataWidget(TxType) display(HBox(Out2.children[0:3])) display(HBox(Out2.children[3:6])) display(HBox(Out2.children[6:10])) display(HBox(Out2.children[10:14])) display(HBox(Out2.children[14:18])) display(HBox(Out2.children[18:21])) if TxType is "MPV": display(Out2.children[21]) Out2.out # - # # Parameter Estimation Widget # # ### Purpose # # This widget allows the user to invert synthetic data using EM61, TEMTADS or MPV instruments in order to recover the location and primary polarizabilities for a compact object. The goal of this app is to demonstrate how successful recovery depends on: # # * Sampling density # * Excitation orientation # # ### Parameter Descriptions # # * TxType: Instrument used for simulation. Set as "EM61", "TEMTADS" or "MPV" # * $x_{true},y_{true},z_{true}$: True location of the object # * $\psi,\theta,\phi$: True angles defining the orientation of the object # * $k_i,\alpha_i,\beta_i,\gamma_i$: True parameters which characterize the polarization of the object along axis $i$ # * $D_x,D_y$: The x-width and y-width for the cued-interrogation region # * $N_x,N_y$: The number of stations in the x and y direction # * $x_0,y_0,z_0$: Starting guess for the location of the object # + # NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!! 
TxType = "EM61" # Set TxType to "EM61", "TEMTADS" or "MPV" Out3 = UXO.InversionWidget(TxType) display(HBox(Out3.children[0:3])) display(HBox(Out3.children[3:6])) display(HBox(Out3.children[6:10])) display(HBox(Out3.children[10:14])) display(HBox(Out3.children[14:18])) display(HBox(Out3.children[18:22])) display(HBox(Out3.children[22:25])) if TxType is "MPV": display(HBox(Out3.children[25:27])) else: display(Out3.children[25]) Out3.out # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score # Models from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LinearRegression from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeRegressor from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import LinearSVC # - # Dataset source: https://www.kaggle.com/ronitf/heart-disease-uci # For the 'target' column, 0 corresponds to a healthy heart and 1 corresponds to a defective heart. data = pd.read_csv("heart.csv") x = data.drop("target", axis=1) y = data["target"] # # Stratified train and test split train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.1, stratify=y, random_state=1) scaler = StandardScaler() scaled_train_x = scaler.fit_transform(train_x) scaled_test_x = scaler.fit_transform(test_x) # + # model = LogisticRegression() # Training accuracy: 0.8419117647058824 # Testing accuracy: 0.9032258064516129 # model = DecisionTreeRegressor() # Training accuracy: 1.0 # Testing accuracy: 0.8387096774193549 model = RandomForestClassifier(max_depth=4, n_estimators=50) # Training accuracy: 1.0 # Testing accuracy: 0.9032258064516129 # model = KNeighborsClassifier(algorithm="brute", n_jobs=-1) # Training accuracy: 0.8584905660377359 # Testing accuracy: 0.8241758241758241 # model = LinearSVC(C=0.0001) # Training accuracy: 0.8308823529411765 # Testing accuracy: 0.8709677419354839 # - model.fit(scaled_train_x, train_y) x_prediction = model.predict(scaled_train_x) training_data_acc = accuracy_score(x_prediction, train_y) training_data_acc x_prediction = model.predict(scaled_test_x) testing_data_acc = accuracy_score(x_prediction, test_y) testing_data_acc # ## Function to estimate best parameters for the model # + active="" # from sklearn.model_selection import GridSearchCV # parameters = { # "n_estimators": [5,10,50,100,250,500], # "max_depth": [2,4,8,16,32,64,None] # } # # cv = GridSearchCV(model, parameters, cv=10) # cv.fit(scaled_train_x, train_y.values.ravel()) # + active="" # # def display(results): # print(f'Best parameters are: {results.best_params_}') # print("\n") # mean_score = results.cv_results_['mean_test_score'] # std_score = results.cv_results_['std_test_score'] # params = results.cv_results_['params'] # for mean,std,params in zip(mean_score,std_score,params): # print(f'{round(mean,3)} + or -{round(std,3)} for the {params}') # # display(cv) # - # ## Function to estimate the best test size for model # + model = LogisticRegression() # Training accuracy: 0.8636363636363636 # Testing accuracy: 0.9032258064516129 # model = DecisionTreeRegressor() # Training accuracy: 1.0 # Testing accuracy: 0.7741935483870968 # model = 
RandomForestClassifier(max_depth=4, n_estimators=50) # Training accuracy: 1.0 # Testing accuracy: 0.9032258064516129 # model = KNeighborsClassifier(algorithm="brute", n_jobs=-1) # Training accuracy: 0.8636363636363636 # Testing accuracy: 0.8241758241758241 0.3 # model = LinearSVC(C=0.0001) # Training accuracy: 0.8429752066115702 # Testing accuracy: 0.8709677419354839 max_acc = 0 t_size = 0 scaler = StandardScaler() for i in range(1,10,1): x = data.drop("target", axis=1) y = data["target"] train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=(i/10), stratify=y, random_state=1) scaled_train_x = scaler.fit_transform(train_x) scaled_test_x = scaler.fit_transform(test_x) model.fit(scaled_train_x, train_y) x_prediction = model.predict(scaled_test_x) testing_data_acc = accuracy_score(x_prediction, test_y) if testing_data_acc > max_acc: max_acc = testing_data_acc t_size = i print(t_size, max_acc) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="ZvXpKVIZFN1j" # # Import libraries # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=false _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" id="V_H582QaFN1l" outputId="9f22c361-73bd-4af8-d407-8e72a4af25ac" import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import os,cv2 import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg from pylab import rcParams rcParams['figure.figsize'] = 20, 10 from sklearn.utils import shuffle from sklearn.model_selection import train_test_split import keras from keras.utils import np_utils # Input data files are available in the "../input/" directory. 
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from keras.models import Sequential from keras.layers import Dense , Activation , Dropout ,Flatten from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.metrics import categorical_accuracy from keras.models import model_from_json from keras.callbacks import ModelCheckpoint from keras.optimizers import * from keras.layers.normalization import BatchNormalization import os print(os.listdir("../input/ckplus/CK+48")) # Any results you write to the current directory are saved as output from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score, cross_val_predict from sklearn.datasets import make_classification from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau # + [markdown] id="FWsVKbbKFN1o" # # Extracting images from directory # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" id="z9XWP-n9FN1q" outputId="fdbfe86e-4f46-4944-bbcb-103c97c2321d" data_path = '../input/ckplus/CK+48' data_dir_list = os.listdir(data_path) num_epoch=10 img_data_list=[] for dataset in data_dir_list: img_list=os.listdir(data_path+'/'+ dataset) print ('Loaded the images of dataset-'+'{}\n'.format(dataset)) for img in img_list: input_img=cv2.imread(data_path + '/'+ dataset + '/'+ img ) #input_img=cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY) input_img_resize=cv2.resize(input_img,(48,48)) img_data_list.append(input_img_resize) img_data = np.array(img_data_list) img_data = img_data.astype('float32') img_data = img_data/255 img_data.shape # + [markdown] id="OAFTE3bCFN1t" # # Putting label in data # + _uuid="1087480ce5ec7d1d59a5bdecde535a8745424f78" id="Pr_UNkRVFN1u" num_classes = 7 num_of_samples = img_data.shape[0] labels = np.ones((num_of_samples,),dtype='int64') labels[0:134]=0 #135 labels[135:188]=1 #54 labels[189:365]=2 #177 labels[366:440]=3 #75 labels[441:647]=4 #207 labels[648:731]=5 #84 labels[732:980]=6 #249 names = ['anger','contempt','disgust','fear','happy','sadness','surprise'] def getLabel(id): return ['anger','contempt','disgust','fear','happy','sadness','surprise'][id] # + [markdown] id="UAaUqeaNFN1v" # # Splitting train test # + _uuid="0b0ccd8e078f7f084ae69d6a5a641b55b1e6b40e" id="-6g-on6HFN1w" Y = np_utils.to_categorical(labels, num_classes) #Shuffle the dataset x,y = shuffle(img_data,Y, random_state=2) # Split the dataset X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2) x_test=X_test # + [markdown] id="95hIoPkhFN1x" # # Creating Model # + _uuid="9dd6e2d9bf550e162c39a16e089972b81c2e1a7c" id="264y8zVXFN1y" def create_model(): input_shape=(48,48,3) model = Sequential() model.add(Conv2D(6, (5, 5), input_shape=input_shape, padding='same', activation = 'relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(16, (5, 5), padding='same', activation = 'relu')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), activation = 'relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(128, activation = 'relu')) model.add(Dropout(0.5)) model.add(Dense(7, activation = 'softmax')) model.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer='RMSprop') return model # + [markdown] id="LZEVwRi9FN10" # # Model Summary # + id="tmekYJOOFN11" outputId="7f7308cb-30a5-4d45-e56e-59a114eeb8d9" 
model_custom = create_model() model_custom.summary() # + [markdown] id="bgk6BvEGFN14" # # Conduct k-Fold Cross-Validation # + id="7AkPftm7FN15" from sklearn.model_selection import KFold # + id="JKX6xNcPFN16" kf = KFold(n_splits=5, shuffle=False) # + id="LHP_z13HFN17" from keras.preprocessing.image import ImageDataGenerator aug = ImageDataGenerator( rotation_range=25, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,horizontal_flip=True, fill_mode="nearest") # + [markdown] id="zXE6Luq6FN18" # # Training Model # + id="liCcH0XCFN1-" BS = 8 EPOCHS = 200 # + id="WN3ab5RyFN1-" result = [] scores_loss = [] scores_acc = [] k_no = 0 for train_index, test_index in kf.split(x): X_Train_ = x[train_index] Y_Train = y[train_index] X_Test_ = x[test_index] Y_Test = y[test_index] file_path = "/kaggle/working/weights_best_"+str(k_no)+".hdf5" checkpoint = ModelCheckpoint(file_path, monitor='loss', verbose=0, save_best_only=True, mode='min') early = EarlyStopping(monitor="loss", mode="min", patience=8) callbacks_list = [checkpoint, early] model = create_model() hist = model.fit_generator(aug.flow(X_Train_, Y_Train), epochs=EPOCHS,validation_data=(X_Test_, Y_Test), callbacks=callbacks_list, verbose=0) # model.fit(X_Train, Y_Train, batch_size=batch_size, epochs=epochs, validation_data=(X_Test, Y_Test), verbose=1) model.load_weights(file_path) result.append(model.predict(X_Test_)) score = model.evaluate(X_Test_,Y_Test, verbose=0) scores_loss.append(score[0]) scores_acc.append(score[1]) k_no+=1 # + id="ycCkxHSkFN2A" outputId="420323d9-b57e-4e91-c4ca-e2a3aeb6b0c1" print(scores_acc,scores_loss) # + [markdown] id="8FbCbkCiFN2B" # # Taking model with lowest Loss # + id="SDlA-ndqFN2C" outputId="4cd0026f-cc12-488e-e184-b26e88f32242" value_min = min(scores_loss) value_index = scores_loss.index(value_min) print(value_index) # + id="O_vSdyfwFN2D" model.load_weights("/kaggle/working/weights_best_"+str(value_index)+".hdf5") # + id="Uaehfc1hFN2E" best_model = model # + [markdown] id="alqUEkn5FN2F" # # Evaluating model # + _uuid="51ba81dc8d47efc1c996d9b9da0fa9e247bddfe2" id="8I2433r5FN2F" outputId="6193df63-79f5-42a9-f98c-087c04f3b3ed" score = best_model.evaluate(X_test, y_test, verbose=0) print('Test Loss:', score[0]) print('Test accuracy:', score[1]) test_image = X_test[0:1] print (test_image.shape) print(best_model.predict(test_image)) print(best_model.predict_classes(test_image)) print(y_test[0:1]) #predict y_pred = best_model.predict(X_test) # + [markdown] id="78bHu6UYFN2G" # # Visualizing Train,Test--->Accuracy,Loss # + _uuid="7b51b07c551eb7b75d1d14139a50b40985f648e9" id="7gX2FXY6FN2H" outputId="57e5123e-86f1-41fe-8171-7c76bb7a5c07" # visualizing losses and accuracy # %matplotlib inline train_loss=hist.history['loss'] val_loss=hist.history['val_loss'] train_acc=hist.history['accuracy'] val_acc=hist.history['val_accuracy'] epochs = range(len(train_acc)) plt.plot(epochs,train_loss,'r', label='train_loss') plt.plot(epochs,val_loss,'b', label='val_loss') plt.title('train_loss vs val_loss') plt.legend() plt.figure() plt.plot(epochs,train_acc,'r', label='train_acc') plt.plot(epochs,val_acc,'b', label='val_acc') plt.title('train_acc vs val_acc') plt.legend() plt.figure() # + _uuid="52cc9a7562a18d52af242e68100b85bd9025faf6" id="-26pzD1kFN2I" #Model Save best_model.save_weights('model_weights.h5') best_model.save('model_keras.h5') # + [markdown] id="4eFwTU3JFN2J" # # Confusion Matrix # + id="28kcYJzxFN2K" from sklearn.metrics import confusion_matrix results = best_model.predict_classes(X_test) cm = 
confusion_matrix(np.where(y_test == 1)[1], results) #cm = cm.astype(np.float) / cm.sum(axis=1)[:, np.newaxis] # + id="w1rUV0R2FN2L" import seaborn as sns import pandas as pd # + id="NoHyxHReFN2L" label_mapdisgust = ['anger','contempt','disgust','fear','happy','sadness','surprise'] # + id="T354eZoSFN2M" #Transform to df for easier plotting cm_df = pd.DataFrame(cm, index = label_mapdisgust, columns = label_mapdisgust ) # + id="zYPW6a_FFN2N" final_cm = cm_df # + id="gJEPmDoEFN2N" outputId="f1261a22-c8a2-43bd-843a-8912360ce3fb" final_cm # + id="tPNucz_5FN2O" outputId="8a7fd288-c0b8-4d10-fd3b-23ba20a7a7c7" plt.figure(figsize = (5,5)) sns.heatmap(final_cm, annot = True,cmap='Greys',cbar=False,linewidth=2,fmt='d') plt.title('CNN Emotion Classify') plt.ylabel('True class') plt.xlabel('Prediction class') plt.show() # + [markdown] id="Qg7NGJp2FN2P" # # ROC Curve # + id="vVtIMGu0FN2Q" from sklearn.metrics import roc_curve,auc from itertools import cycle # + id="g2T1Xvc_FN2R" new_label = ['anger','contempt','disgust','fear','happy','sadness','surprise'] final_label = new_label new_class = 7 # + id="uvNyLOGmFN2S" #ravel flatten the array into single vector y_pred_ravel = y_pred.ravel() lw = 2 # + id="QGBdmIuVFN2S" outputId="1ddf96f8-4d1b-46ba-d0af-c6690f0443ee" fpr = dict() tpr = dict() roc_auc = dict() for i in range(new_class): fpr[i], tpr[i], _ = roc_curve(y_test[:,i], y_pred[:,i]) roc_auc[i] = auc(fpr[i], tpr[i]) #colors = cycle(['red', 'green','black']) colors = cycle(['red', 'green','black','blue', 'yellow','purple','orange']) for i, color in zip(range(new_class), colors): plt.plot(fpr[i], tpr[i], color=color, lw=lw, label='ROC curve of class {0}'''.format(final_label[i])) plt.plot([0, 1], [0, 1], 'k--', lw=lw) plt.xlim([0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver Operating Characteristic') plt.legend(loc="lower right") plt.show() // --- // jupyter: // jupytext: // text_representation: // extension: .scala // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Scala // language: scala // name: scala // --- // ## Goal chomping // // * We run the most basic _strategic_ prover - a _goal chomper_ that keeps trying a succession of goals. // * Termination is when a goal fails or all goals are finished. // * Proving is only by generation, with the generation including backward reasoning rules. // * So far the goal chomper is untested; we test this. // * We also test updating displays etc. // // // ### The goals and proving. // // * To generate the goals, we start with just `Type`. 
// * To prove them, we include also `Zero`, `One` and `Star` import $cp.bin.`provingground-core-jvm-d7193b6a8f.fat.jar` import provingground._ , interface._, HoTT._, learning._ repl.pprinter() = { val p = repl.pprinter() p.copy( additionalHandlers = p.additionalHandlers.orElse { translation.FansiShow.fansiHandler } ) } val terms = FiniteDistribution.unif[Term](Unit, Zero, Star) val typs = FiniteDistribution.unif[Typ[Term]](Type, Unit, Zero) val ts = TermState(terms, typs) val ts0 = TermState(FiniteDistribution(), FiniteDistribution.unif(Type)) import monix.execution.Scheduler.Implicits.global val lp = LocalProver(ts).sharpen(10) val lp0 = LocalProver(ts0).sharpen(10) val unknownsT = lp0.unknownStatements.map(_.entropyVec.map(_.elem)).memoize val unF = unknownsT.runToFuture import StrategicProvers._ import almond.display._ val chompView = Markdown("## Results from Goal chomping\n") val chT = unknownsT.flatMap(typs => goalChomper(lp, typs)).memoize val chF = chT.runToFuture currentGoal successes.size unF.map(_.size) successes.reverse chF.foreach(s => chompView.withContent(s"## Goal chomping done\n\n * chomped: ${successes.size}").update()) chF.value chF.map(_._3.headOption) // ## Conclusions // // * Plenty of goals were chomped until the first failure. // * The failure could be avoided by strategic backward reasoning. // * Learning would also probably avoid this. // * For a better test, we should run a liberal chomper that keeps going till there are no unknowns left, recroding both successes and failures. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #
    Compléments sur les listes, tuples, dictionnaires
    # ## 1. Compléments sur les listes. # ### Listes en compréhension # Voici un programme qui ajoute des éléments dans une liste façon successive: L=[] for i in range(20): if i%2==0: L.append(i) print(L) # Il existe une manière plus concise de procéder, la définition en compréhension : # Une liste en compréhension L=[i for i in range(20) if i%3==0] print(L) # #### Syntaxes # Voici des types d'instructions possibles pour générer des listes en compréhension: # # * [*fonction de x* **for** *x in ensemble* ] # * [*fonction de x* **if** *condition sur x* **else** *autre fonction de x* **for** *x in ensemble* ] # * [*fonction de x* **for** *x in ensemble* **if** *condition sur x* ] # # #### Exemples: # + #1 L=[3*i for i in range(11)] print(L) #2 from random import * n=10 Des=[randint(1,6) for i in range(n)] print(Des) #3 pileface=[choice(['P','F']) for i in range(20)] print(pileface) # - # ### Exercice 1: # En utilisant la syntaxe des listes en compréhension : # 1. Générer les entiers naturels de 1 à 100. # 2. Générer les multiples de 5 inférieurs ou égaux à 100. # 3. Générer une liste des entiers naturels de 1 à 100 dans laquelle les multiples de 5 seront remplacées par le caractère `*` # + #1. # + #2. # + #3. # - # ### Exercice 2: # En utilisant les fonctions `randint()` ou `choice()` du module `random` et la syntaxe des listes en compréhension: # 1. Générer une liste de 20 entiers naturels aléatoires entre 0 et 255. # 2. Générer 100 caractères au hasard parmi `a`,`b`,`c` # + #1. from random import * # + #2. # - # ### Listes de listes # Pour représenter des tableaux à double entrée(images, matrices...), on peut utiliser une liste de listes. On identifie ainsi un élément du tableau à l'aide de deux indexs. # # Exemple : tableau=[['A','b','c','d'],['E','f','g'],['I','j','k','m'],['N','o','p','q']] print(tableau[0][0]) # ### Exercice 3: # 1. Quel est la lettre correpondant à `tableau[1][2]`? # 2. Quelle instruction permet d'accéder à la lettre 'm' ? # 3. Ajouter au tableau la ligne `['R','s','t','u']`. # 4. Ajouter le caractère `h` à sa place. # 5. Remplacer la caractère `N` par `n`. # + #1. # + #2. # + #3 # + #4 # + #5 # - # ### Exercice 4: # Générer en comprehension une liste de 10 listes contenant chacune 10 entiers binaires aléatoires(0 ou 1). # + # - # ### Avant de poursuivre # Nous avons étudié jusqu’ici deux types de données construits(composites) : les chaînes, qui sont composées de caractères, et les listes, qui sont composées d’éléments de n’importe quel type. # # Rappel sur une différence importante entre chaînes et listes : il n’est pas possible de changer les caractères au sein d’une chaîne existante, alors que l'on peut modifier les éléments d’une liste. En d’autres termes, les listes sont des séquences modifiables, alors que les chaînes de caractères sont des séquences non-modifiables. # # Exemples : # Les listes sont modifiables L=['a','b','d'] L[2]='c' print(L) # les chaînes ne sont pas modifiables mot='abd' mot[2]='c' # Dans la suite de cette feuille deux nouveaux types de données : les tuples et les dictionnaires # ## 2. tuples (ou p-uplets) # Python propose un type de données appelé tuple, qui est assez semblable à une liste mais qui, comme les chaînes, n’est pas modifiable. 
Du point de vue de la syntaxe, un tuple est une collection d’éléments séparés par des virgules : #Exécuter le code ci-dessous tup1 = 1,2,3 tup2 = (6,7) tup3 = 'abc','def' tup4 = ('a',1,'b',2) print(tup1,type(tup1)) print(tup2,type(tup2)) print(tup3,type(tup3)) print(tup4,type(tup4)) # A retenir : # * De simples virgules suffisent à définir un tuple mais pour la lisibilité du code, il est préférable de l'enfermer dans des parenthèses. # ### Opérations sur les tuples #affectations t= 7,8,9 a,b,c = t print(a) # opérateurs + et * : concanténation et répétition t1= (3,2,1) t2 = (6,5,4) t3 = 2*t1+t2 print(t3) # Accéder aux éléments t4 = (2,4) print(t4[0]) # longueur d'un tuple print(len(t3)) # Parcours d'un tuple t5 = (0,1,2,3,4,5,6,7,8,9) for e in t5 : print(e+1, end=' ') # test in b = 3 in (2,4,6) print(b) # les tuples ne sont pas des listes t6 = (2,4,6,8) t6.append(10) #Ajouter un élément à un tuple t7 = 1,2,3,4,5 t7 = t7 + (6,) # ne pas oublier la virgule et les parenthèses print(t7) # A retenir : # * Les opérateurs de concaténation `+` et de multiplication `*` donnent les mêmes résultats que pour les chaînes et les listes. # * On accède aux éléments d'un tuple comme avec les chaînes et les listes. # * On peut déterminer la taille d’un tuple à l’aide de la fonction `len()`. # * On peut le parcourir à l’aide d’une boucle `for`, utiliser l’instruction `in` pour savoir si un élément donné en fait partie, exactement comme pour une liste ou pour une chaîne. # * Les tuples ne sont pas modifiables et on ne peut pas utiliser avec eux ni la méthode `.append()`ni l'instruction `del()`. # * Les tuples sont moins souples que les listes mais c'est aussi leur force. On est sûr que leur contenu ne sera pas modifié par erreur. De plus ils sont beaucoup moins gourmands en ressources et sont éxécutés plus rapidement. # ### Exercice 5 : # Ecrire la fonction `reverse(t)` qui prend en paramètre un tuple de trois valeurs et qui renvoie un tuple contenant les 3 valeurs dans l'ordre inverse. # # Ainsi `reverse(1,2,3)` renvoie `(3,2,1)` # + #Réponse # - # ### Exercice 6 : # Ecrire la fonction `initiales` qui prend en paramètre une chaîne de caractères de type `'NOM Prénom'` et qui renvoie un tuple contenant les initiales des noms et prénoms passés en argument. # # Ainsi `initiales('')` doit renvoyer `('J','D')`. # + #Réponse: # - # ### Exercice 7 : # En utilisant le résultat précédent, compléter la fonction `initiale_nom(noms,lettre)` qui prend en paramètres un tuple de noms et prénoms formatés comme précédemment ainsi qu'un caractère et qui renvoie un tuple contenant les chaînes avec la même initiale de nom. # # Ainsi avec le tuple ci-dessous, `initiale_nom(stars,'S')` doit renvoyer `('','', '')` # + #Réponse stars = ('','','','','','', '','','','','','') def initiale_nom(noms, lettre): resultats=() return resultats #Appels initiale_nom(stars,'S') # - # ### Exercice 8 : # # On se place dans le contexte du codage des couleurs selon le système RGB, déjà vu dans une feuille précédente. Dans une image dite bitmap, chaque pixel contient une couleur composée de ses couches Rouge, Verte et Bleue, chacune de ces couches étant représentée par un entier compris entre $0$ et $255$ # # Dans les logiciels de traitement d'images(Photoshop, Gimp,...), on travaille avec des calques d'images que l'on superpose et que l'on fusionne. 
En fonction des opérations mathématiques utilisées pour fusionner les calques, on obtient des rendus esthétiques différents: # # # Chacune des fonctions demandées correspondent à un mode de fusion de Photoshop , prennent en paramètres deux tuples de trois valeurs`pix1`et `pix2` correspondants aux deux pixels que l'on souhaite fusionner et renvoient un tuple contenant la couleur finale obtenue. # # Pour s'aider, voici une description des formules de certains modes de fusion( en milieu de page) : # https://helpx.adobe.com/fr/after-effects/using/blending-modes-layer-styles.html # # On pourra tester avec les deux pixels `p1=(200,128,63)` et `p2=(125,205,50)`. Fonctions `min()` et `max()` autorisées ! p1 = (200,128,63) p2 = (125,205,50) # 1. `eclaircir(p1,p2)` renvoie `(200,205,63)` # + #1 : remplacer "to do" par les bonnes valeurs def eclaircir(pix1, pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) eclaircir(p1,p2) # - # 2. `obscurcir(p1,p2)` renvoie `(125,128,50)` # + #2 : def obscurcir(pix1, pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) obscurcir(p1,p2) # - # 3. `difference(p1,p2)` renvoie `(75,77,13)` # + #3 : def difference(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) difference(p1,p2) # - # 4. `addition(p1,p2)` renvoie `(255,255,113)` # + #4 : def addition(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) addition(p1,p2) # - # 5. `soustraction(p1,p2)` renvoie `(0,77,0)` # + #5 : def soustraction(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) soustraction(p1,p2) # - # 6. `produit(p1,p2)` renvoie `(98,103,12)` # + #6 : def produit(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) produit(p1,p2) # - # 7. `division(p1,p2)` renvoie `(159,255,202)` # + #7 : def division(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) division(p1,p2) # - # 8. `superposition(p1,p2)` renvoie `(226,230,100)` # + #8 : def superposition(pix1,pix2): r= "to do" g= "to do" b= "to do" return (r,g,b) superposition(p1,p2) # - # ## 3. Dictionnaires # ### Définition et création # Les types de données construits que nous avons abordés jusqu’à présent (chaînes, listes et tuples) sont tous des séquences, c’est-à-dire des suites ordonnées d’éléments. Dans une séquence, il est facile # d’accéder à un élément quelconque à l’aide d’un index (un nombre entier), mais encore faut-il le connaître. # # Les dictionnaires constituent un autre type construit. Ils ressemblent aux # listes( ils sont modifiables comme elles), mais ce ne sont pas des séquences. # Les éléments que nous allons y enregistrer ne seront pas disposés dans un ordre immuable. En revanche, nous pourrons accéder à n’importe lequel d’entre eux à l’aide d’un index que l’on appellera une clé, laquelle pourra être alphabétique ou numérique. # # # Comme dans une liste, les éléments mémorisés dans un dictionnaire peuvent être de n’importe quel type( valeurs numériques, chaînes, listes ou encore des dictionnaires, et même aussi des fonctions). # # Exemple : # # + dico1={} dico1['nom']='Paris-Orly Airport' dico1['ville']='Paris' dico1['pays']='France' dico1['code']='ORY' dico1['gps']=(48.7233333,2.3794444) print(dico1) print(dico1['code']) print(dico1['gps']) # - # Remarques : # # - Des accolades délimitent un dictionnaire. # - Les éléments d'un dictionnaire sont séparés par une virgule. # - Chacun des éléments est une paire d'objets séparés par deux points : une clé et une valeur. # - La valeur de la clé `'code'` est `'ORY'`. # - La valeur de la clé `'gps'` est un tuple de deux flottants. 
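# An added aside (not part of the original worksheet): a few more common operations on the
# dictionary `dico1` defined in the example above; the 'altitude' key is purely illustrative.
# +
dico1['altitude'] = 291                    # add a new key-value pair
print('code' in dico1)                     # membership tests look at the keys -> True
print(dico1.get('iata', 'missing key'))    # .get() avoids a KeyError when a key is absent
del dico1['altitude']                      # remove a key-value pair
print(len(dico1))                          # number of key-value pairs -> 5
# -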
# # ### Exercice 9 : # Voici des données concernant l'aéroport international de Los Angeles : # # * __"Los Angeles International Airport","Los Angeles","United States","LAX",33.94250107,-118.4079971__ # # # En utilisant les mêmes clés que dans l'exemple précédent, créer le dictionnaire 'dico2' qui contiendra les données ci-dessus # + #Réponse # - # ### Méthodes # + #Afficher les clés print(dico1.keys()) #Affichées les valeurs print(dico1.values()) #Afficher les éléments print(dico1.items()) # - # ### Parcours # On peut traiter par un parcours les éléments contenus dans un dictionnaire, mais attention : # * Au cours de l'itération, __ce sont les clés__ qui servent d'index # * L'ordre dans lequel le parcours s'effectue est imprévisible # # Exemple : for element in dico1: print(element) # ### Exercice 10 : # Modifier le programme ci-dessus pour obtenir l'affichage ci-dessous : # + # - # ## 4. Exercices # ### Exercice 11 : # Lors d'une élection, entre 2 et 6 candidats se présentent. Il y a 100 votants, chacun glisse un bulletin avec le nom d'un candidat dans l'urne. Les lignes de code ci-dessous simulent cette expérience. # + from random import * #liste des candidats potentiels noms=['Alice','Bob','Charlie','Daniella','Eva','Fred'] #liste des candidats réels candidats=list(set([choice(noms) for i in range(randint(2,len(noms)))])) #nombre de votants votants=100 #liste des bulletins dans l'urne urne=[choice(candidats) for i in range(votants)] print('Candidats : ',candidats) #print("Contenu de l'urne :", urne) # - # 1. Vérifier que les candidats réels changent à chaque éxecution de la cellule ci-desssus ainsi que le contenu de l'urne. # 2. Compléter la fonction `depouillement(urne)`. Elle prend en paramètre une liste (la liste des bulletins exprimés) et renvoie un dctionnaire. Les paires clés-valeurs sont respectivement constituées du noms d'un candidat réels et du nombre de bulletins exprimés en sa faveur. Par exemple, si la liste des candidats est `['Alice','Charlie,'Bob']`, le résultat pourra s'afficher sous la forme `{'Eva': 317, 'Charlie': 363, 'Alice': 320}` # + # Remplacer "pass" par les bonnes instructions def depouillement(urne): decompte={} for bulletin in urne: if bulletin in decompte: pass else: pass return decompte depouillement(urne) # - # 3. Ecrire la fonction `vainqueur(election)` qui prend en paramètre un dictionnaire contenant le décompte d'une urne renvoyé par la fonction précédente et qui renvoie le nom du vainqueur. # + def vainqueur(election): vainqueur='' vainqueur(depouillement(urne)) # - # ### Exercice 12 # # # Sur le site https://openflights.org/data.html , on trouve des bases de données mondiales aéronautiques.Le fichier `airports.txt` présent dans le dossier de cette feuille contient des informations sur les aéroports. # # # Chaque ligne de ce fichier est formatée comme l'exemple ci-dessous : # # `1989,"Bora Bora Airport","Bora Bora","French Polynesia","BOB","NTTB",-16.444400787353516,-151.75100708007812,10,-10,"U","Pacific/Tahiti","airport","OurAirports"` # # On souhaite extraire les informations suivantes pour chaque aéroport: # * Sa référence unique, un entier. # * Le nom de l'aéroport, une chaîne de caractères # * La ville principale qu'il dessert, une chaîne de caractères # * Le pays de cette ville,une chaîne de caractères # * Le code IATA de l'aéroport composé de 3 lettres en majuscules # * Ses coordonées gps (latitude puis longitude), un tuple de 2 flottants. # **1. 
Compléter les champs ci-dessous pour l'aéroport cité en exemple:** # * ref : # * nom : # * ville : # * pays : # * gps : # **2. La fonction `data_extract` doit parcourir le fichier et extraire les données demandées qu'elle renvoie sous forme d'une liste de dictionnaires.** # # * Chaque élément de la liste est donc un dictionnaire qui correspond à un aéroport. # * Les clés sont les noms des champs que l'on souhaite extraire et les valeurs sont celles associées à chaque aéroport. # # Recopier , modifier et compléter cette fonction pour qu'elle éxécute la tâche demandée : # + #2. def data_extract(chemin): fichier = open(chemin, "r",encoding='utf-8') res = [] # Pour contenir le résultat qui sera une liste de dictionnaires for ligne in fichier: datas = ligne.strip().split(",") # une ligne du fichier res.append( { "ref": int(datas[0]), "nom": datas[1][1:-1], "ville": "A compléter", "pays": datas[3][1:-1], "A compléter": datas[4][1:-1], "gps" : "A compléter" }) fichier.close() return res airports=data_extract('airports.txt') #nombre d'aéroports référencés print("A compléter") #un aéroport au hasard print(choice(airports)) # + #2. Réponse # - # **3. A l'aide d'une liste en compréhension, récupérer la liste des villes françaises desservies par un aéroport** #3. Réponse country='France' res=['A compléter'] # **4. Ecrire la fonction `infos(airports,ville)`.** # # Elle prend en paramètres la liste des dictionnaires de données extraites et une chaîne de caractères(le nom d'une ville). Elle renvoie les informations du ou des aéroports de cette ville dans une liste de dictionnaires: # + #4. def infos(airports,ville): res=[] return res print(infos(airports,'Nice')) # - # **5. Compléter les listes du code ci-dessous pour représenter les points de coordonnées gps de chacun des aéroports de la base de données(liste `X` des longitudes et liste `Y` des latitudes)** # + #5. from matplotlib import pyplot #bibliothèque pour tracer des graphiques X=[] #liste des longitudes Y=[] #liste des latitudes pyplot.figure(figsize = (14,7)) pyplot.xlim(-180, 180) pyplot.ylim(-90, 90) pyplot.plot(X,Y,'rx') # g=green, x est une croix # - # **6. Recopier et modifier le code précédent pour faire apparaître en rouge les aéroports situés en zone tropicale (c'est à dire dont la latitude est comprise entre $-23$ et $+23$)et en bleu les autres** # + #5.Réponse # - # #
    FIN
    -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- square x = x ** 2 square 10 -- # Modules -- Loading a function from a file `baby.hs` which contains a function `doubleMe` :l baby doubleMe 4 doubleMe 3.14 doubleMe pi -- # Creating higher level functions doubleSum x y = doubleMe x + doubleMe y doubleSum 10 20 doubleSum 1 2 + doubleSum 10 12 -- * Functions in Haskell don't have to be in any particular order -- * Functions can be redefined -- + doubleMe x = x * 3 doubleSum x y = doubleMe x + doubleMe y -- - doubleSum 10 20 :l baby.hs -- # If-else statement doubleSmallNumber x = if x > 100 then x else x * 2 -- * else part is mandatory in Haskell -- * if-else statement is an expression as it always returns something doubleSmallNumber 101 doubleSmallNumber 99 doubleSmallNumber' x = (if x > 100 then x else x * 2) + 1 doubleSmallNumber' 101 -- Apostrophe `'` is a valid character in a function name. Usually denoted to write a strict (non-lazy) version or a modified version. conanO'Brien = "It's a-me, Conan O'Brien!" -- * This is also a function with no input parameters -- * The convention is to use camelCase, although not strictly -- * Functions can't begin with uppercase snake_case = "Harrrrry pott-ahhhhh" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # References: # https://github.com/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb # - import torch from torch import nn, optim from torchtext.legacy.data import Field, BucketIterator, TabularDataset from sklearn.model_selection import train_test_split import numpy as np import unicodedata import re import pandas as pd import os import random import spacy import shutil # # !python -m spacy download fr_core_news_sm # # !python -m spacy download en_core_web_sm spacy_fr = spacy.load('fr_core_news_sm') spacy_eng = spacy.load('en_core_web_sm') path = "D:/Datasets/Eng-French Translation" os.chdir(path) df = pd.read_csv('eng_-french.csv') df.columns = ['english', 'french'] # + def unicode2Ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') def normalizeString(s): s = unicode2Ascii(s.lower().strip()) s = re.sub(r"([.!?])", r"\1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) return s # - df['english'] = df['english'].apply(lambda x: normalizeString(x)) df['french'] = df['french'].apply(lambda x: normalizeString(x)) # + MAX_LENGTH = 10 def filter_sentence(rows): if len(rows['english'].split(' ')) < MAX_LENGTH and len(rows['french'].split(' ')) < MAX_LENGTH: return rows else: return np.nan df = df.apply(filter_sentence, axis = 'columns') # - df_sample = df.dropna().reset_index(drop = True) np.random.seed(1234) random.seed(1234) torch.manual_seed(1234) n_samples = 1500 df_sample = df_sample.sample(n_samples).reset_index(drop = True) # + # tokenizers def french_tokenizer(text): return [tok.text for tok in spacy_fr.tokenizer(text)] def english_tokenizer(text): return [tok.text for tok in spacy_eng.tokenizer(text)] # + ENGLISH_TEXT = Field(sequential = True, tokenize = english_tokenizer, lower = True, init_token = "", eos_token = "") FRENCH_TEXT = Field(sequential = True, tokenize = 
french_tokenizer, lower= True, init_token = "", eos_token = "") # - # train - validation split train, valid = train_test_split(df_sample, test_size = 0.25, shuffle = True, random_state = 1234) print("Train : ", train.shape) print("Valid : ", valid.shape) # writing train and valid files into folder if not os.path.exists("inputs"): os.mkdir("inputs") print("inputs folder created succesfully") if not os.path.isfile("/inputs/train.csv"): train.to_csv("train.csv", index = False) print("train.csv written successfully") shutil.move("train.csv", "inputs/train.csv") print("train.csv moved successfully") if not os.path.isfile("/inputs/valid.csv"): valid.to_csv("valid.csv", index = False) print("valid.csv written successfully") shutil.move("valid.csv", "inputs/valid.csv") print("valid.csv moved successfully") else: print("Folder already exists") train.to_csv("train.csv", index = False) print("train.csv written successfully") shutil.move("train.csv", "inputs/train.csv") print("train.csv moved successfully") valid.to_csv("valid.csv", index = False) print("valid.csv written successfully") shutil.move("valid.csv", "inputs/valid.csv") print("valid.csv moved successfully") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Моделирование изменений условий освещения в видеопотоке, построение и сравнение интеграторов кадров видеопотока # # - from src.experiment import * # + pycharm={"name": "#%%\n"} # первая фаза аугментации, первая фаза интеграции convergence_rates, converged_num = conduct_experiment("data/Shoes-sRGB.jpg", 10, "second_stage_aug", "first_stage_int", "first_first", 10) # первая фаза аугментации, вторая фаза интеграции #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, первая фаза интеграции #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, вторая фаза интеграции # вторая фаза аугментации, третья фаза интеграции # третья фаза аугментации, первая фаза интеграции # третья фаза аугментации, вторая фаза интеграции # третья фаза аугментации, третья фаза интеграции # #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # + pycharm={"name": "#%%\n"} # первая фаза аугментации, первая фаза интеграции #conduct_experiment("data/Shoes-sRGB.jpg", 10, # "first_stage", "first_stage", "first_first", 10) # первая фаза аугментации, вторая фаза интеграции #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, первая фаза интеграции #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, вторая фаза интеграции # вторая фаза аугментации, третья фаза интеграции # третья фаза аугментации, первая фаза интеграции # третья фаза аугментации, вторая фаза интеграции # третья фаза аугментации, третья фаза интеграции # #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # + # первая фаза аугментации, первая фаза интеграции conduct_experiment("data/Shoes-sRGB.jpg", 10, "first_stage", "first_stage", "first_first", 10) # первая фаза аугментации, вторая фаза интеграции 
conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, первая фаза интеграции conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # вторая фаза аугментации, вторая фаза интеграции # вторая фаза аугментации, третья фаза интеграции # третья фаза аугментации, первая фаза интеграции # третья фаза аугментации, вторая фаза интеграции # третья фаза аугментации, третья фаза интеграции # #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) #conduct_experiment("data/Shoes-sRGB.jpg", videoflow_size=100, ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os from mapboxgl_notebook.map import MapboxMap from mapboxgl_notebook.sources import GeoJSONSource from mapboxgl_notebook.layers import PointCircleLayer, LineStringLineLayer, PolygonFillLayer from mapboxgl_notebook.properties import Paint from mapboxgl_notebook.interactions import ClickInteraction, HoverInteraction access_token = os.environ.get('') # Data-driven properties directly with mapbox gl expressions (picture can be different - real world dataset!) data_url = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson' source = GeoJSONSource(data_url, source_id='earthquakes') paint = Paint( circle_color=[ 'interpolate', ["linear"], ['get', 'mag'], 1.3, '#0000ff', 2, '#ff0000' ] ) layer = PointCircleLayer(source, paint=paint) interaction = ClickInteraction(layer, properties=['place', 'mag', 'type']) mapbox_map = MapboxMap( style='mapbox://styles/annee/cjnbxyk893b2p2so2pb7z93mu', # lets use another style center=[0,0], zoom=1, access_token=access_token, sources=[source], layers=[layer], interactions=[interaction] ) mapbox_map.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + slideshow={"slide_type": "skip"} # %matplotlib notebook import matplotlib.pyplot as plt import numpy as np import sympy as sp from plots import * sp.init_printing() freqs = [f for f in np.random.standard_cauchy(11) if abs(f) < 10] omega = [2+ f for f in freqs] + [1 - f for f in freqs] + [1] from BondGraphTools import version import BondGraphTools as bgt assert version == "0.3.7" scale = 2 from matplotlib.font_manager import FontProperties def plot_graph(t, x): fontP = FontProperties() fontP.set_size('small') fig = plt.figure(figsize=(scale*4,scale*4)) plt.plot(t,x) ax = fig.gca() ax.set_xlabel('t') ax.set_title(f"System response to {impulse}") ax.legend( [f"$x_{i}$" for i in range(len(x))], bbox_to_anchor=(1.,1.), loc=1, borderaxespad=0., prop=fontP ) return fig def print_tree(bond_graph, pre=""): print(f"{pre}{bond_graph}") try: for component in reversed(bond_graph.components): if pre == "": print_tree(component, pre +"|-" ) else: print_tree(component, pre +"-" ) except AttributeError: pass # - # + [markdown] slideshow={"slide_type": "notes"} # # + [markdown] slideshow={"slide_type": "slide"} # # On Emergence in Complex Physical Systems # # # https://github.com/peter-cudmore # #   # # Dr. . # Systems Biology Labratory, # The School of Chemical and Biomedical Engineering, # The University of Melbourne. 
# + [markdown] slideshow={"slide_type": "subslide"} # Many problems in biology, physics and engineering involve predicting and controlling complex systems, loosely defined as interconnected system-of-systems. Such systems can exhibit a variety of interesting non-equilibrium features such as emergence and phase transitions, which result from mutual interactions between nonlinear subsystems. # # Modelling these systems is a task in-and-of itself, as systems can span many physical domains and evolve on multiple time scales. Nonetheless, one wishes to analyse the geometry of these models and relate both qualitative and quantitative insights back to the physical system. # # Beginning with the modelling and analysis of a coupled optomechanical systems, this talk presents some recent results concerning the existence and stability of emergent oscillations. This forms the basis for a discussion of new directions in symbolic computational techniques for complex physical systems as a means to discuss emergence more generally. # # + [markdown] slideshow={"slide_type": "subslide"} # ## The problem with big systems is that they're _big_... # + [markdown] slideshow={"slide_type": "subslide"} # ## Example: Human Metabolism # + [markdown] slideshow={"slide_type": "-"} #
    # # (Image courtesy of Human Metabolism map https://www.vmh.life ) # + [markdown] slideshow={"slide_type": "subslide"} # # Example: Ecosystems # + [markdown] slideshow={"slide_type": "-"} #
    # + [markdown] slideshow={"slide_type": "subslide"} # ## Complex Physical Systems # # A dynamical system is said to be a _complex physical system_ when: # * It is made up of many _interacting_ parts, or subsystems (High-dimensional). # * The subsystems are not all of the same (Heterogenenous). # * The subsystems are complicated (Nonlinear and/or Noisy). # * There are well defined boundaries between the subsystems (Network Topology). # * **Coupling takes place via resource exchange (Conservation Laws).** # # # > There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. # # \- , 1963. http://www.feynmanlectures.caltech.edu/I_04.html # + [markdown] slideshow={"slide_type": "subslide"} # ## Complex Systems can exhibit _emergence_. # # - _Emergence_ is a phenomenom where the system displays novel new behaviour that could not be produced by individuals alone. # - _Synchronisation_ is the most studied example of emergence, and can occur in systems of coupled oscillator. # + [markdown] slideshow={"slide_type": "fragment"} #
    How can one predict and control emergent phenomena?
    # + [markdown] slideshow={"slide_type": "subslide"} # ## The problem with big systems... # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # $$\begin{align} # \dot{x} &= f(x, u;\lambda),\\ # 0 &= g(x,u;\lambda),\\ # y &= h(x,u). # \end{align} # $$ # # What do we do when $x$ is high dimensional and $f$ doesn't have exploitable structure? # + [markdown] cell_style="split" # ![the connectome](images/connectome.jpg) # + [markdown] slideshow={"slide_type": "fragment"} #
    # How can nonlinear dynamics be "scaled up"? #
    # + [markdown] slideshow={"slide_type": "subslide"} # ## Geometry and Physics # + [markdown] cell_style="split" # Geometric features often correspond to physically interesting features (Noether Theroem). # # # In systems biology in particular: # - Conserved Moieties $\iff$ first integrals # - Transport pathways $\iff$ invariant manifolds. # + [markdown] cell_style="split" # ![](images/phys2geo.svg) # - #
    # How can nonlinear dynamics be "scaled up"? #
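# + [markdown] slideshow={"slide_type": "skip"}
# (Added aside, not from the original talk.) A toy illustration of the "conserved moieties <=>
# first integrals" correspondence on the Geometry and Physics slide above: for a reversible
# reaction A <=> B with mass-action kinetics, the moiety total A + B is a first integral.
# + slideshow={"slide_type": "skip"}
import sympy as sp

A, B, k1, k2 = sp.symbols('A B k1 k2', positive=True)

dA_dt = -k1 * A + k2 * B     # rate of change of [A]
dB_dt = k1 * A - k2 * B      # rate of change of [B]

# d(A + B)/dt vanishes identically along trajectories, so A + B is conserved.
sp.simplify(dA_dt + dB_dt)   # -> 0
# -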
    # + [markdown] slideshow={"slide_type": "subslide"} # ## Goals of this talk # # I want to convince you that: # 1. Emergence is a nonlinear phenomenon, so we need to look at _nonlinear_ systems. # 2. As systems get big, the usual ad-hoc techniques stop working so we need an alternative. # 3. Thinking about energy provides a means to _systematically_ model systems. # 4. Symbolic modelling software makes this scalable. # 5. This provides a pathway to study system level dynamics, in particular emergence. # + [markdown] slideshow={"slide_type": "subslide"} # ## Outline of this talk # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # # ![The Goal](images/Sydney.svg) # + [markdown] cell_style="split" slideshow={"slide_type": "fragment"} # 1. Briefly discuss synchronisation as it's the best example of emergence. # + [markdown] cell_style="split" slideshow={"slide_type": "fragment"} # 2. Discuss some challenges and present solutions for modelling big systems. # + [markdown] cell_style="split" slideshow={"slide_type": "fragment"} # 3. Discuss software to make this work. # + [markdown] slideshow={"slide_type": "slide"} # ## Part 1: Synchronisation as the prototype for emergence # + [markdown] slideshow={"slide_type": "subslide"} # ## The Kuramoto Model # # _Self-entrainment of a population of coupled non-linear oscillators_ . (1975). # # $$ # \text{limit cycle oscillator:}\qquad \dot{z_j} = \gamma(1 - |z_j|^2)z_j + i\omega_j z_j + \frac{K}{n}\sum_{k=1}^nz_k, # \quad j=1,\ldots n, \qquad \gamma \gg 1, 0 \le K, \omega_j \in \mathbb{R} # $$ # + [markdown] cell_style="split" # The phase $\theta_j =\text{Arg}\ z_j$ of each oscillator with a natural frequency $\omega_j$ is given by # # \begin{equation} # \dot{\theta}_j = \omega_j + \frac{K}{n}\sum_{k=1}^n\sin(\theta_k - \theta_j),\qquad j=1,\ldots n # \end{equation} # - When $0\le K K_c$ more oscillator are recruited to collective. # # The value of $K_c$ depends upon the distribution of $\{\omega_j\}$. For symmetric distribtuions we have # $$K_c = \frac{2}{\pi g(0)}$$ # # + cell_style="split" # Omega = Cauchy(2,1) so that K_c = 2 p = KuramotoModel(omega=omega, scale=scale) plt.show() # + [markdown] slideshow={"slide_type": "notes"} # Points: # - Wiener -> Winfree -> Kuramoto # - Comes from studying BZ reaction # - Motion on a strongly attracive limit cycle (invariant manifold) such that coupling # - All-to-all coupling on a complete graph. # - sinusoidal in phase -> linear in complex co-ordinates. # - Kuramoto showed that at $K_c=2$ a Hopf bifurcation creates a synchronised state, that becomes progressive more stable as $K_c$ increases. # + [markdown] slideshow={"slide_type": "subslide"} # ## The Kuramoto Model (Cont.) 
# # $$ # \text{limit cycle oscillator:}\qquad \dot{z_j} = \gamma(1 - |z_j|^2)z_j + i\omega_j z_j + \frac{K}{n}\sum_{k=1}^nz_k, # \quad j=1,\ldots n, \qquad \gamma \gg 1, 0 \le K, \omega_j \in \mathbb{R} # $$ # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # Kuramoto introduced an 'order parameter' $r$ to measure phase coherence # \begin{equation} # z = r\mathrm{e}^{\mathrm{i}\Theta} = \frac{1}{n}\sum_{k=1}^n \exp{\mathrm{i}\theta_k} \implies r = \frac{1}{n}\sum_{k=1}^n \exp\mathrm{i}(\theta_k-\Theta) # \end{equation} # It follows that # $$ # \Im\left[\frac{1}{n}\sum_{k=1}^n\exp i(\theta_k - \theta_j)\right] = # \Im\left[r\exp i(\Theta - \theta_j)\right] # $$ # # # Hence # $$ # \dot{\theta}_j = \omega_j + \frac{K}{n}\sum_{k=1}^n\sin(\theta_k - \theta_j)$$ # # becomes # $$ # \dot{\theta}_j = \omega_j + rK\sin(\Theta - \theta_j). # $$ # + cell_style="split" slideshow={"slide_type": "-"} p = KuramotoOrderModel(omega,scale=scale) plt.show() # + [markdown] slideshow={"slide_type": "notes"} # Points: # - Mean phase is a kind of coordinate for the synchronous manifold. # - Weak interactions with entire populations <=> strong coupling to collective statistics # - Feedback look means that if coupling increases coherence, then $r$ increases asymptotically to $r_\infty = \sqrt{1-K_c/K}$$. # + [markdown] cell_style="center" slideshow={"slide_type": "subslide"} # ## The Status of the Kuramoto Model # # $$ # \dot{\theta}_j = \omega_j + rK\sin(\Theta - \theta_j),\qquad j = 1,\ldots n.\qquad r = \frac{1}{n}\sum_{k=1}^n \exp i (\theta_k - \Theta). # $$ # # - Identical oscillators evolve on a 3 dimensional manifold (Watanabe and Strogatz, Physica D 1994. Ott and Antonsen, Chaos 2008). # - Heterogenous oscillator dynamics represented in terms of collective co-ordinates in the thermodynamic limit (Pikovsky and Rosenblum, Physica D, 2011) and for finite-n (Gottwald, Chaos 2015). # - Active research into applications in biology (particuarly neuroscience), physics and chemsitry. # - Extensions to noisy, graph coupled and with various different coupling mechanisms. # - Very few global results for heterogenous oscillators (Dietert, J. Math. Pures Appl. 2016). # - _No results as yet for geometrical interpretation of transtion to synchrony._ # + [markdown] slideshow={"slide_type": "subslide"} # ## Implications # + [markdown] cell_style="center" # #### From the Kuramoto Model # # $$ # \dot{\theta}_j = \omega_j + rK\sin(\Theta - \theta_j),\qquad j = 1,\ldots n.\qquad r = \frac{1}{n}\sum_{k=1}^n \exp i (\theta_k - \Theta). # $$ # 1. When thinking about emergence, we want to think about mutual coupling between population statistics and individuals. # 2. This means that, for a given system, we need to understand both mechanisms on the individual scale _and_ dynamics on the population scale. # + [markdown] slideshow={"slide_type": "fragment"} # #### For complex physical systems # # 1. We expect to see mutual coupling between the population, particularly via resource competition. # 2. Statistical approaches (like stat-mech) will not be suffient for our purposes. # # + [markdown] slideshow={"slide_type": "fragment"} #
    To look at emergence more generally, we must be able to model nonlinear systems more generally.
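# + [markdown] slideshow={"slide_type": "skip"}
# (Added aside, not from the original talk.) A minimal, self-contained numerical sketch of the
# phase-only Kuramoto model and its order parameter, using plain `numpy` Euler stepping instead of
# the deck's `KuramotoModel` helper. It reuses the frequency list `omega` sampled in the setup cell;
# the coupling values, step size and horizon below are arbitrary illustrative choices.
# + slideshow={"slide_type": "skip"}
import numpy as np

def kuramoto_r(omega, K, dt=0.01, steps=5000, seed=0):
    """Integrate dtheta_j/dt = omega_j + (K/n) sum_k sin(theta_k - theta_j); return |r| at each step."""
    rng = np.random.default_rng(seed)
    w = np.asarray(omega, dtype=float)
    theta = rng.uniform(0.0, 2.0 * np.pi, w.size)
    r_hist = np.empty(steps)
    for i in range(steps):
        z = np.exp(1j * theta).mean()                      # complex order parameter r e^{i Theta}
        r_hist[i] = np.abs(z)
        # mean-field identity: (1/n) sum_k sin(theta_k - theta_j) = Im(z e^{-i theta_j})
        theta = theta + dt * (w + K * np.imag(z * np.exp(-1j * theta)))
    return r_hist

# Compare coupling below and above the critical value (the setup comment puts K_c = 2 here).
for K in (1.0, 4.0):
    r = kuramoto_r(omega, K)
    print(f"K = {K}: mean |r| over the last 1000 steps = {r[-1000:].mean():.2f}")
# -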
    # + [markdown] slideshow={"slide_type": "slide"} #   # #   # #   # # # # Modelling Complex Physical Systems # #   # # ### Inheritance, Composition and Encapsulation # + [markdown] slideshow={"slide_type": "subslide"} # ## Ad-hoc modelling. # + [markdown] cell_style="split" # ![Optomechanial experiment](images/experiment.svg) # + [markdown] cell_style="split" # 1. Start with a Hamiltonian # 2. Derive equations of motion and do a whole bunch of algebra. # 3. Work out the appropriate coordinates (here, a non-standard slow fast system) # 4. More algebra to reduce model. # 5. Investigate the dynamics of the reduced models. # 6. Relate results in reduced model to observables in the original system # # In the case of emergent phenomenon, the 'reduced' subspace involves the whole (or at least a large part of) system. E.g. mean fields. # + [markdown] slideshow={"slide_type": "subslide"} # ## Ad-hoc approaches won't scale. # + [markdown] cell_style="split" slideshow={"slide_type": "-"} #
    # + [markdown] cell_style="split" # As an example: # - individual processes are far more heterogenous # - network topolgy is complicated # - many parameters are unknown # - almost guaranteed to be a differential-algebraic system # - **too big for one person, or even one lab** # # We must have: # - Ways to respresent and manipulate such systems, # - Ways to manage congnitive complexity, # - Ways to automate model capture and reduction, # - Ways to effective share work between researchers # + [markdown] slideshow={"slide_type": "subslide"} # ## Energetic Network Modelling # + [markdown] cell_style="split" # ![Optomechanial experiment](images/experiment.svg) # + [markdown] cell_style="split" # ![Cavity_network](images/cavity_network-01.svg) # + [markdown] slideshow={"slide_type": "subslide"} # ## The Structure of Complex Physical Systems # + [markdown] cell_style="split" # An approach based on 'bond graph' modelling, and port-Hamiltonian systems. # # - Energy is stored in 'state variables' $q,p$ # - Power is distributed via 'power variables' $e,f$ # - Formally describes the hyrdo-mechanical-electrical analogies. # # For example; # - Dissipation $R$ relates $e,f$ variables. (eg. Ohm's law, friction, etc) # - Potential storage $q$ to $e$ (eg, capacitors, gravity) # + [markdown] cell_style="split" # **Translational Mechanics** # - $q, p$ are position and momentum # - $f, e$ are velocity and force # # **Electromagnetics** # # - $q, p$ are charge and flux linkage # - $f, e$ are current and voltage # # **Hydraulics** # - $q, p$ are volume and pressure momentum # - $f, e$ are fow and pressure # # **Chemistry** # - $q,p $ is moles and.. chemical momentum? # - $f,e$ are molar flow and chemical potential # # + [markdown] slideshow={"slide_type": "subslide"} # ## An Object Oriented Representation of Energetic Systems # # Object Oriented Programming (OOP) is a software development paradigm that seeks to manage large, complicated projects by breaking problems into _data_ plus _methods_ that act on the data. # # Three big ideas in OOP are: # 1. _Inheritance_ or is-a relationships. # 2. _Composition_ or has-a relationships. # 3. _Encapsulation_ or infomation hiding. # # This allows for _hierarchical_ and _modular_ design which reduces model complexity. # + [markdown] slideshow={"slide_type": "fragment"} # 'Energetic systems' draws from: # - Network based analysis from engineering; in particular circuit analysis and the more general (and less well known) bond graph methodology, # - Classical mechanics, and in particular recent advances in port based Hamiltonian mechanics, # - The effective was of managing complexity within software engineering. # # All in service of describing _whole systems_ so as to understand _emergent processes_. # + [markdown] cell_style="split" slideshow={"slide_type": "subslide"} # ## Inheritance # #   # # For networked dynamic systems, _inheritance_ means we: # - define what the base components are, # - impose conditions on the dynamical sub-systems, # - describe the interface between nodes. 
# # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # ![Inheritance](images/inheritance.svg) # + [markdown] cell_style="center" slideshow={"slide_type": "subslide"} # ### Definition (Energetic System) # + [markdown] cell_style="split" # An energetic system is a tuple $(M, \mathcal{D}, U,\Phi)$ # where the # * *state space* $M$ is a manifold of $\dim(M) = m\ge 0$ # * *port space* $\mathcal{D} \subset \mathcal{F} \times \mathcal{E}$ where, $\mathcal{E} = \mathcal{F}^*$ and $ \dim{\mathcal{D}} = \mathcal{F}|_\mathcal{D} =n$. # * *control space* $U \subset C^r:\mathbb{R}_+ \rightarrow \mathbb{R^k}$ with $k\ge 0$ # * *constitutive relation* is a smooth map $\Phi: TM \times \mathcal{D} \times U\times\mathbb{R}_+ \rightarrow # \mathbb{R}^{m+n}$ # such that # $$\Phi\left(\frac{dx}{dt},x,f,e,u,t\right)=0.$$ # # $\Phi$ relates the _internal state_ $M$ and the _external environment_ (via $\mathcal{D}$). # + [markdown] cell_style="split" #   # # ![Energetic System](images/EnergeticSystems.svg) # # #   # # The incoming *power* is $P_\text{in} = \left$ for $(f,e)\in \mathcal{D}$ # + [markdown] slideshow={"slide_type": "subslide"} # ## Energy Storage # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # **Example (Potential Energy)** # # Potential energy storage can be defined as # # $$ # \Phi_\text{C}(\dot{x},e,f,x) = # \left(\begin{matrix} # x - Ce\\ # \dot{x} - f # \end{matrix}\right) = 0. # $$ # # **Example (Kinetic Energy)** # # Simiarly for generalised 'kinetic energy' # # $$ # \Phi_\text{L}(\dot{x},e,f,x) = # \left(\begin{matrix} # x - Lf\\ # \dot{x} - e # \end{matrix}\right) = 0. # $$ # # + [markdown] cell_style="split" # # #   # # ![Energetic System](images/EnergeticSystems.svg) # # # + [markdown] slideshow={"slide_type": "subslide"} # ## Port-Hamiltonains # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # One can show that for conservative systems, one can define a storage function $H(x)$ and choose # # $$\Phi(\dot{x}, x, f,e,t) = # \left(\begin{matrix} # \dot{x} - f\\ # e - \nabla_x H(x) # \end{matrix}\right) = 0.$$ # # To recover Hamiltons equations, one must additionally connect ports $(e,f)_i$ to $(e,f)_j$, and hence impose a particular _Dirac structure_ on $\mathcal{D}$. # # **Example (Harmonic Oscillator Part 1)** # Given the storage function # # $$H(x) = \frac{\omega}{2}(x_1^2 + x_2^2)$$ # # we have # # $$\Phi(\dot{x},e,f,x) = (e_1 - \omega x_1, f_1-\dot{x}_1, e_2-\omega x_2, f_2-\dot{x}_2) = 0.$$ # + [markdown] cell_style="split" # # #   # # ![Energetic System](images/EnergeticSystems.svg) # # # + [markdown] slideshow={"slide_type": "subslide"} # # Connecting Ports # + [markdown] cell_style="split" # Two ports can be connected with via a _Dirac structure_ # # **Example (Common Effort Bond)** # # A common _effort_ or force connection is simply the power conserving relation $e_1 =e_2$ and $f_1 = -f_2$. 
# # This can be interpreted as # # $$ # \Phi_\text{Bond}(e,f) = \left(e_1 - e_2, f_1 + f_2\right) = 0 # $$ # # # # + [markdown] cell_style="split" # #   # # ![Energetic System](images/EnergeticSystems.svg) # # # + [markdown] slideshow={"slide_type": "subslide"} # ## Conservation Laws # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # **Example (0 Junction)** # # One can define 'conservation of mass' (equally Kirchoffs voltage law) as # # $$ # \Phi(e,f) # = # \left(\begin{matrix} # e_1 - e_2\\ # \vdots\\ # e_1 - e_{n}\\ # \sum_{k=1}^n f_n # \end{matrix}\right) = 0.$$ # # one can easily check that this implies # # $$ # P_\text{in} = \sum_{k=1}^n e_kf_k = 0. # $$ # # This is called the 'zero junction' for historical reasons... # # + [markdown] cell_style="split" #   # # #   # # ![Energetic System](images/EnergeticSystems.svg) # # + [markdown] slideshow={"slide_type": "subslide"} # ## Dissipation # + [markdown] cell_style="split" slideshow={"slide_type": "-"} # **Example (Dissipation)** # # Linear Dissipation has no state and relates effort $e$ to flow $f$: # # $$ \Phi_\text{R}(e,f) = # e_1 - Rf_1 =0$$ # # such that the power entering the subsystem # # $$P_\text{in} = e_1f_1 = R (f_1)^2 \ge 0$$ # # is always positive, hence dissipation. # # + [markdown] cell_style="split" #   # # #   # # ![Energetic System](images/EnergeticSystems.svg) # # # + slideshow={"slide_type": "skip"} # + [markdown] cell_style="split" slideshow={"slide_type": "subslide"} # ## Inheritance # # For energetic systems: # # ### Nodes are particular _energetic systems_ # Each node is described by a set of differential-algebraic equations $\Phi(\dot{x},x,e,f) = 0$. # # ### Edges are constraints on port variables. # # An edge represents how state is shared between systems. # + [markdown] cell_style="split" # ![Inheritance](images/inheritance.svg) # + [markdown] slideshow={"slide_type": "skip"} # # + [markdown] cell_style="split" slideshow={"slide_type": "subslide"} # ## Composition # #   # # For networked dynamic systems _composition_ means that we can replace nodes with subgraphs and vice-versa. # + [markdown] cell_style="split" # ![Composition](images/composition.svg) # + [markdown] slideshow={"slide_type": "skip"} # # + [markdown] cell_style="split" slideshow={"slide_type": "subslide"} # ## Corollary (Composition) # If $\Psi_1 = (M_1, \mathcal{D}_1, U_1,\Phi_1)$ and $\Psi_2 = (M_2, \mathcal{D}_2, U_2,\Phi_2)$ are energetic systems, then # # $$\begin{eqnarray}\Psi_0 &=& \Psi_1 \oplus\Psi_2\\ # &=& # \left(M_1\oplus M_2,\mathcal{D}_1 \oplus\mathcal{D}_2,U_1\oplus U_2, \Phi_1\oplus\Phi_2\right) # \end{eqnarray}$$ # is also an energetic system. # # Suppose (abusing notation) $\Psi_0 = (\Psi_1,\Psi_2)$ is an energetic system with ports # # $$(e_i, f_i) \in \mathcal{D}_1, \quad (e_j,f_j) \in \mathcal{D}_2$$ # # Then $\Phi_0$ with the additional power conserving constraint # # $$e_i - e_j = 0\qquad f_i+f_j=0$$ # # is also a energetic system # + [markdown] cell_style="split" # ![Composition](images/composition.svg) # + [markdown] slideshow={"slide_type": "skip"} # # + [markdown] cell_style="split" slideshow={"slide_type": "subslide"} # ## Encapsulation # #   # # For a networked dynamical system _encapsulation_ means that we can apply simplification methods to a subgraph so that the replacement system is less complicated, while representing the same behaviour. # #   # # One can also go the other way by replacing a node with a more complicated subgraph. 
# + [markdown] cell_style="split" # ![Encapsulation](images/encapsulation.svg) # + [markdown] slideshow={"slide_type": "subslide"} # # Example: Linear Damped Harmonic Motion # + [markdown] cell_style="split" # Consider the following __nodes__ # $$ # \Phi_\text{C}= # \left(\begin{matrix} # x_c - Ce_c\\ # \dot{x}_c - f_c # \end{matrix}\right) # $$ # # $$ # \Phi_\text{L} = # \left(\begin{matrix} # x_L - Lf_L\\ # \dot{x}_L - e_L # \end{matrix}\right) # $$ # # $$ \Phi_\text{R} = # (e_R - Rf_R)$$ # # $$ # \Phi_\text{0} # = # \left(\begin{matrix} # e_1 - e_2\\ # e_1 - e_3\\ # e_1 - e_\text{port}\\ # f_1+f_2+f_3 + f_\text{port} # \end{matrix}\right)$$ # # Withthe __edges__ as power connections: # # $$P_1 = P_c,\qquad P_2 = P_L\qquad P_3 = P_R.$$ # + [markdown] cell_style="split" # Recall $P_1 = P_c$ implies # $$e_1 = e_c\qquad f_1 = -f_c$$ # # Since $\Phi_0$ implies $e$ are equal # $$ # e_\text{port} = \frac{1}{C}x_c = \dot{x}_L = rf_R # $$ # and the 'flow' sum gives # $$ # f_\text{port} = \dot{x}_c + \frac{1}{RC}x_c +\frac{1}{L}x_L # $$ # # If there is no flow allowed through $f_\text{port}$, then we have the usual equation for damped harmonic motion. # + [markdown] cell_style="split" slideshow={"slide_type": "fragment"} # - It is not difficult to extend this to nonlinaer $\Phi$ # - _Most of the heavy lifting can be done via linear algebra_ # + [markdown] slideshow={"slide_type": "subslide"} # ## Object Oriented Modelling and Energetic Systems # # Energetic systems provide: # - _Inheritance_; an abstract base representation of energetic systems. # - _Composition_; a way to hierarchically compose systems of systems. # - _Encapsulation_; a framework inside which simplifications can occur. # - #
    By systematically modelling physical systems, we can begin to understand system dynamics, and hence emergence.
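# + [markdown] slideshow={"slide_type": "skip"}
# (Added aside, not from the original talk.) A small symbolic check of the damped-harmonic-motion
# example above, done with plain `sympy` rather than bond-graph software: eliminating the inductor
# state from the junction relations (with the external port blocked, f_port = 0) recovers the usual
# second-order equation for the capacitor state x_c.
# + slideshow={"slide_type": "skip"}
import sympy as sp

t = sp.symbols('t')
R, L, C = sp.symbols('R L C', positive=True)
x_c = sp.Function('x_c')(t)      # potential (capacitor) state
x_L = sp.Function('x_L')(t)      # kinetic (inductor) state

# From the example: common effort  e = x_c/C = d(x_L)/dt,  and the flow sum with f_port = 0.
effort_eq = sp.Eq(sp.diff(x_L, t), x_c / C)
flow_sum = sp.diff(x_c, t) + x_c / (R * C) + x_L / L     # = f_port = 0

# Differentiate the flow balance once and eliminate x_L via the effort relation.
second_order = sp.diff(flow_sum, t).subs(sp.diff(x_L, t), effort_eq.rhs)
sp.Eq(sp.simplify(second_order), 0)    # x_c'' + x_c'/(R*C) + x_c/(L*C) = 0: damped harmonic motion
# -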
    # + [markdown] slideshow={"slide_type": "slide"} #   # #   # #   # # # `BondGraphTools` # + [markdown] slideshow={"slide_type": "subslide"} # ## `BondGraphTools`: a `python` library for energetic systems. # # `BondGraphTools` (https://github.com/BondGraphTools) a framework for modelling energetic systems. # * Based upon an extension of bond graph and port-Hamiltonian modelling. # * Provies a simple, *minimal* object-oriented interface for constructing models. # * Implemented in `python` and uses the standard `scipy` stack. # * Performs symbolic model reduction and simulation. # * Simulations with DAE solvers in `julia`. # * Developed with sustainable software practices. # * Intended to be used in _conjunction_ with other tools. # # 'Bond Graphs' are a multi-domain port-based graphical modelling technique used predominantly in mechatronics. # Port-Hamiltonian systems integrate geometric approaches from classical mechanics and control theory with port based modelling. # + [markdown] slideshow={"slide_type": "subslide"} # ## `BondGraphTools` is an API for modelling energetic systems # #   # #
    Hence modelling a complex physical system is equivalent to writing code
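# + [markdown] slideshow={"slide_type": "skip"}
# (Added aside, not from the original talk.) A minimal free-standing warm-up before the
# `Linear_Osc` class on the next slide: an R-C pair joined through a common-flow "1" junction,
# built with `bgt.new` and `bgt.connect` as used elsewhere in this deck. Creating an empty
# composite with `bgt.new(name=...)` and attaching components with `bgt.add` is assumed here
# from the library documentation rather than taken from the talk itself.
# + slideshow={"slide_type": "skip"}
import BondGraphTools as bgt

# Sketch only: an RC "circuit" whose single storage state should relax exponentially.
rc = bgt.new(name="RC")          # assumed: new() without a component string gives an empty composite
r = bgt.new("R", value=1.0)      # linear dissipation
c = bgt.new("C", value=1.0)      # potential (capacitive) storage
law = bgt.new("1")               # common-flow junction: shared f, efforts sum to zero
bgt.add(rc, r, c, law)           # assumed: add() attaches components to the composite
bgt.connect(r, law)
bgt.connect(c, law)
# With R = C = 1 this should reduce to something equivalent to dx_0/dt + x_0 = 0.
rc.constitutive_relations
# -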
    # + [markdown] slideshow={"slide_type": "subslide"} # ## Example: Linear Oscillator # + cell_style="split" class Linear_Osc(bgt.BondGraph): damping_rate = 0.1 #D amping rate common across oscillator array def __init__(self, freq, index): """Linear Oscillator Class Args: freq: Natural (undamped) frequency of this oscillator index: Oscillator number (used for naming). Instances of this class are bond graph models of externally forced damped harmonic oscillators. In the electrical analogy, these is simply an open loop series RLC circuit.""" # Create the components r = bgt.new("R", name="R", value=self.damping_rate) l = bgt.new("I", name="L", value=1/freq) c = bgt.new("C", name="C", value=1/freq) port = bgt.new("SS") conservation_law = bgt.new("1") # Create the composite model and add the components super().__init__( name=f"Osc_{index}", components=(r, l, c, port, conservation_law) ) # Wire the model up for component in (r,l,c): bgt.connect(conservation_law, component) bgt.connect(port, conservation_law) # Expose the SS component as an external port bgt.expose(port, label="P_in") # + [markdown] cell_style="split" # `Linear_Osc` # - _inherits_ from BondGraph, which is a base 'class' containing much of functionality # - is _composed_ of a variety of subcomponents # - _encapsulates_ a one port RLC component. # # + cell_style="split" example_osc = Linear_Osc(1000,1) example_osc.constitutive_relations # + [markdown] slideshow={"slide_type": "subslide"} # ## Automating Model Capture # + cell_style="split" slideshow={"slide_type": "-"} from BondGraphTools.reaction_builder import Reaction_Network TCA_reactions = { "Citrate synthase": ["acetyl-CoA + oxaloacetate + H2O = citrate + CoA-SH"], "Aconitase": ["Citrate = cis-Aconitate + H2O", "cis-Aconitate + H2O = Isocitrate"], "Isocitrate dehydrogenase": ["Isocitrate + NAD = Oxalosuccinate + NADH + H", "Oxalosuccinate = a-Ketoglutarate + CO2" ], "a-Ketoglutarate dehydrogenase": ["a-Ketoglutarate + NAD + CoA-SH = Succinyl-CoA + NADH + H + CO2"], "Succinyl-CoA synthetase": ["Succinyl-CoA + ADP + Pi = Succinate + CoA-SH + ATP"], "Succinate dehydrogenase": ["Succinate + Q = Fumarate + QH2"], "Fumarase": ["Fumarate + H2O = L-Malate"], "Malate dehydrogenase": ["L-Malate + NAD = Oxaloacetate + NADH + H"] } def TCA_Cycle(): reaction_net = Reaction_Network(name="TCA_Cycle") for enzyme in TCA_reactions: for index, reaction in enumerate(TCA_reactions[enzyme]): reaction_name = f"{enzyme} - {index}" reaction_net.add_reaction(reaction, name=reaction_name) return reaction_net # + cell_style="split" tca_bg = TCA_Cycle().as_network_model() tca_bg.constitutive_relations # + [markdown] slideshow={"slide_type": "subslide"} # ## Constructing big models with BondGraphTools. # 1. Define the _nodes_ (processes that act on energy). # 2. Define the _edges_ (the 'power ports', or shared variables). # 3. Feed this into `BondGraphTools`. # 4. ...? # 5. Profit! (Use the resulting equations of motion for whatever you want). # # + slideshow={"slide_type": "subslide"} bgt.draw(tca_bg) plt.show() # + [markdown] slideshow={"slide_type": "subslide"} # # ## State of `BondGraphTools` # # Current Status: # - In active development (v.0.3.7) and active use within the lab. # - Documentation at https://bondgraphtools.readthedocs.io/en/latest/ # - Available on PyPI https://pypi.org/project/BondGraphTools/ # - Source on GitHub https://github.com/BondGraphTools/BondGraphTools # - Manuscript in preparation. 
# + [markdown] slideshow={"slide_type": "fragment"} # ### Planned Future Developments # - Extraction of first integrals and invariant manifolds. # - Robust parameter and control value network. # - Interface for measuring port space. # - Algorithmic model reduction (particularly manifold reductions). # - Bifurcation analysis (particularly fixed point tracking). # + [markdown] slideshow={"slide_type": "subslide"} # # In Summary # + [markdown] slideshow={"slide_type": "fragment"} # - Energetic Modelling gives us a framework to systematically desrcibe complex physical systems. # + [markdown] slideshow={"slide_type": "fragment"} # - `BondGraphTools` provides a way to build and recude big model in symbolic form. # + [markdown] slideshow={"slide_type": "fragment"} # # - The resutls can feed into algorithmic model reduction, parameter estimation and sensitivity analysis. # + [markdown] slideshow={"slide_type": "subslide"} # # Thank You! # # Thanks to # - Gary # - The University of New South Wales # - Prof. , Prof. , . # - The Systems Biology Lab at The University of Melbourne # # # # # # #
    University of Melbourne | ARC Centre of Excellence in Convergent Bio-Nano Science and Technology
    # + [markdown] slideshow={"slide_type": "subslide"} #   # #   # # # Please check out `BondGraphTools` # # # https://github.com/BondGraphTools/ # - print_tree(tca_bg) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import tarfile import urllib.request import sys import glob from bs4 import BeautifulSoup import nltk from string import ascii_lowercase import tensorflow as tf TEMP_DIR = '/tmp/tensorflow_tutorials' WORD_CHARS = set(ascii_lowercase + "'!?-.()") def download_and_cache(url, fname=None, dest=TEMP_DIR): if not os.path.exists(dest): os.makedirs(dest) if fname is None: fname = url.split('/')[-1] fpath = os.path.join(dest, fname) if not os.path.exists(fpath): def _progress(count, block_size, total_size): percentage = float(count * block_size) / float(total_size) * 100.0 sys.stdout.write('\r>> Downloading {} {:1.1f}%'.format(fname, percentage)) sys.stdout.flush() fpath, _ = urllib.request.urlretrieve(url, fpath, _progress) print() statinfo = os.stat(fpath) print('Successfully downloaded', fname, statinfo.st_size, 'bytes.') return fpath # - fpath = download_and_cache('http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz') with tarfile.open(fpath, 'r:gz') as tar: tar.extractall(TEMP_DIR) train_pos = glob.glob(os.path.join(TEMP_DIR, 'aclImdb', 'train/pos/', '*.txt')) train_neg = glob.glob(os.path.join(TEMP_DIR, 'aclImdb', 'train/neg/', '*.txt')) filenames = train_pos + train_neg labels = [1]*len(train_pos) + [0]*len(train_neg) dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) # + def _is_word(word): return set(word.lower()).issubset(WORD_CHARS) def _preprocess_text(input_text, label): soup = BeautifulSoup(input_text, "lxml") sents = nltk.sent_tokenize(soup.get_text()) words = [nltk.word_tokenize(sent) for sent in sents] res = ' '.join(' '.join(word.lower() for word in sent_word if _is_word(word)) for sent_word in words) return res, label def _read_files(filename, label): file_content = tf.read_file(filename) return file_content, label # - dataset = dataset.map(_read_files) def wrapped_func(text, label): return tuple(tf.py_func(_preprocess_text, [text, label], [tf.string, label.dtype])) dataset = dataset.map(wrapped_func) iterator = dataset.make_one_shot_iterator() next_element = iterator.get_next() with tf.Session() as sess: res = sess.run(next_element) print(res) with open(filenames[0], 'r') as fp: print(fp.readlines()) # + # Other resources: # https://cs230-stanford.github.io/tensorflow-input-data.html # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="copyright" # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# + [markdown] id="title" # # Vertex client library: Custom training text classification model for online prediction using exported dataset # # # # #


    # + [markdown] id="overview:custom,exported_ds" # ## Overview # # This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom text classification model for online prediction, using an exported `Dataset` resource. # + [markdown] id="dataset:happydb,tcn" # ### Dataset # # The dataset used for this tutorial is the [Happy Moments dataset](https://www.kaggle.com/ritresearch/happydb) from [Kaggle Datasets](https://www.kaggle.com/ritresearch/happydb). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. # + [markdown] id="objective:custom,exported_ds,online_prediction" # ### Objective # # In this tutorial, you learn how to create a custom model using an exported `Dataset` resource from a Python script in a Docker container using the Vertex client library, and then do a prediction on the deployed model. You can alternatively create models using the `gcloud` command-line tool or online using the Google Cloud Console. # # The steps performed include: # # - Create a Vertex `Dataset` resource. # - Export the `Dataset` resource's manifest. # - Create a Vertex custom job for training a model. # - Import the exported dataset manifest. # - Train the model. # - Retrieve and load the model artifacts. # - View the model evaluation. # - Upload the model as a Vertex `Model` resource. # - Deploy the `Model` resource to a serving `Endpoint` resource. # - Make a prediction. # - Undeploy the `Model` resource. # + [markdown] id="costs" # ### Costs # # This tutorial uses billable components of Google Cloud (GCP): # # * Vertex AI # * Cloud Storage # # Learn about [Vertex AI # pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage # pricing](https://cloud.google.com/storage/pricing), and use the [Pricing # Calculator](https://cloud.google.com/products/calculator/) # to generate a cost estimate based on your projected usage. # + [markdown] id="install_aip" # ## Installation # # Install the latest version of Vertex client library. # + id="install_aip" import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" # ! pip3 install -U google-cloud-aiplatform $USER_FLAG # + [markdown] id="install_storage" # Install the latest GA version of *google-cloud-storage* library as well. # + id="install_storage" # ! pip3 install -U google-cloud-storage $USER_FLAG # + [markdown] id="restart" # ### Restart the kernel # # Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages. # + id="restart" if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) # + [markdown] id="before_you_begin" # ## Before you begin # # ### GPU runtime # # *Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** # # ### Set up your Google Cloud project # # **The following steps are required, regardless of your notebook environment.** # # 1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs. # # 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) # # 3. 
[Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component) # # 4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook. # # 5. Enter your project ID in the cell below. Then run the cell to make sure the # Cloud SDK uses the right project for all the commands in this notebook. # # **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. # + id="set_project_id" PROJECT_ID = "[your-project-id]" # @param {type:"string"} # + id="autoset_project_id" if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud # shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) # + id="set_gcloud_project_id" # ! gcloud config set project $PROJECT_ID # + [markdown] id="region" # #### Region # # You can also change the `REGION` variable, which is used for operations # throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. # # - Americas: `us-central1` # - Europe: `europe-west4` # - Asia Pacific: `asia-east1` # # You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations) # + id="region" REGION = "us-central1" # @param {type: "string"} # + [markdown] id="timestamp" # #### Timestamp # # If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. # + id="timestamp" from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") # + [markdown] id="gcp_authenticate" # ### Authenticate your Google Cloud account # # **If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step. # # **If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. # # **Otherwise**, follow these steps: # # In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page. # # **Click Create service account**. # # In the **Service account name** field, enter a name, and click **Create**. # # In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**. # # Click Create. A JSON file that contains your key downloads to your local environment. # # Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. # + id="gcp_authenticate" # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. 
# If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): # %env GOOGLE_APPLICATION_CREDENTIALS '' # + [markdown] id="bucket:custom" # ### Create a Cloud Storage bucket # # **The following steps are required, regardless of your notebook environment.** # # When you submit a custom training job using the Vertex client library, you upload a Python package # containing your training code to a Cloud Storage bucket. Vertex runs # the code from this package. In this tutorial, Vertex also saves the # trained model that results from your job in the same bucket. You can then # create an `Endpoint` resource based on this output in order to serve # online predictions. # # Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. # + id="bucket" BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} # + id="autoset_bucket" if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP # + [markdown] id="create_bucket" # **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. # + id="create_bucket" # ! gsutil mb -l $REGION $BUCKET_NAME # + [markdown] id="validate_bucket" # Finally, validate access to your Cloud Storage bucket by examining its contents: # + id="validate_bucket" # ! gsutil ls -al $BUCKET_NAME # + [markdown] id="setup_vars" # ### Set up variables # # Next, set up some variables used throughout the tutorial. # ### Import libraries and define constants # + [markdown] id="import_aip:protobuf" # #### Import Vertex client library # # Import the Vertex client library into our Python environment. # + id="import_aip:protobuf" import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value # + [markdown] id="aip_constants" # #### Vertex constants # # Setup up the following constants for Vertex: # # - `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. # - `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. # + id="aip_constants" # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION # + id="labeling_constants:tcn" # Text Dataset type DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml" # Text Labeling type LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml" # + [markdown] id="accelerators:training,prediction,cpu" # #### Hardware Accelerators # # Set the hardware accelerators (e.g., GPU), if any, for training and prediction. 
# # Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: # # (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) # # For GPU, available accelerators include: # - aip.AcceleratorType.NVIDIA_TESLA_K80 # - aip.AcceleratorType.NVIDIA_TESLA_P100 # - aip.AcceleratorType.NVIDIA_TESLA_P4 # - aip.AcceleratorType.NVIDIA_TESLA_T4 # - aip.AcceleratorType.NVIDIA_TESLA_V100 # # # Otherwise specify `(None, None)` to use a container image to run on a CPU. # # *Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support. # + id="accelerators:training,prediction,cpu" if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) if os.getenv("IS_TESTING_DEPOLY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPOLY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) # + [markdown] id="container:training,prediction" # #### Container (Docker) image # # Next, we will set the Docker container images for training and prediction # # - TensorFlow 1.15 # - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest` # - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest` # - TensorFlow 2.1 # - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest` # - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest` # - TensorFlow 2.2 # - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest` # - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest` # - TensorFlow 2.3 # - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest` # - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest` # - TensorFlow 2.4 # - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest` # - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest` # - XGBoost # - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1` # - Scikit-learn # - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest` # - Pytorch # - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest` # - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest` # - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest` # - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest` # # For the latest list, see [Pre-built containers for training](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). 
# # - TensorFlow 1.15 # - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest` # - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest` # - TensorFlow 2.1 # - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest` # - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest` # - TensorFlow 2.2 # - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest` # - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest` # - TensorFlow 2.3 # - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest` # - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest` # - XGBoost # - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest` # - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest` # - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest` # - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest` # - Scikit-learn # - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest` # - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest` # - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest` # # For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) # + id="container:training,prediction" if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2-1" if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) # + [markdown] id="machine:training,prediction" # #### Machine Type # # Next, set the machine type to use for training and prediction. # # - Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for for training and prediction. # - `machine type` # - `n1-standard`: 3.75GB of memory per vCPU. # - `n1-highmem`: 6.5GB of memory per vCPU # - `n1-highcpu`: 0.9 GB of memory per vCPU # - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \] # # *Note: The following is not supported for training:* # # - `standard`: 2 vCPUs # - `highcpu`: 2, 4 and 8 vCPUs # # *Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*. # + id="machine:training,prediction" if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) # + [markdown] id="tutorial_start:custom" # # Tutorial # # Now you are ready to start creating your own custom model and training for Happy Moments. # + [markdown] id="clients:custom,exported_ds" # ## Set up clients # # The Vertex client library works as a client/server model. 
On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. # # You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. # # - Dataset Service for `Dataset` resources. # - Model Service for `Model` resources. # - Endpoint Service for deployment. # - Job Service for batch jobs and custom training. # - Prediction Service for serving. # + id="clients:custom,exported_ds" # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_job_client(): client = aip.JobServiceClient(client_options=client_options) return client def create_dataset_client(): client = aip.DatasetServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client def create_prediction_client(): client = aip.PredictionServiceClient(client_options=client_options) return client clients = {} clients["job"] = create_job_client() clients["dataset"] = create_dataset_client() clients["model"] = create_model_client() clients["endpoint"] = create_endpoint_client() clients["prediction"] = create_prediction_client() for client in clients.items(): print(client) # + [markdown] id="create_aip_dataset" # ## Dataset # # Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. # # ### Create `Dataset` resource instance # # Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following: # # 1. Uses the dataset client service. # 2. Creates an Vertex `Dataset` resource (`aip.Dataset`), with the following parameters: # - `display_name`: The human-readable name you choose to give it. # - `metadata_schema_uri`: The schema for the dataset type. # 3. Calls the client dataset service method `create_dataset`, with the following parameters: # - `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources. # - `dataset`: The Vertex dataset object instance you created. # 4. The method returns an `operation` object. # # An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning. # # You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method: # # | Method | Description | # | ----------- | ----------- | # | result() | Waits for the operation to complete and returns a result object in JSON format. | # | running() | Returns True/False on whether the operation is still running. | # | done() | Returns True/False on whether the operation is completed. | # | canceled() | Returns True/False on whether the operation was canceled. | # | cancel() | Cancels the operation (this may take up to 30 seconds). 
| # + id="create_aip_dataset" TIMEOUT = 90 def create_dataset(name, schema, labels=None, timeout=TIMEOUT): start_time = time.time() try: dataset = aip.Dataset( display_name=name, metadata_schema_uri=schema, labels=labels ) operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset) print("Long running operation:", operation.operation.name) result = operation.result(timeout=TIMEOUT) print("time:", time.time() - start_time) print("response") print(" name:", result.name) print(" display_name:", result.display_name) print(" metadata_schema_uri:", result.metadata_schema_uri) print(" metadata:", dict(result.metadata)) print(" create_time:", result.create_time) print(" update_time:", result.update_time) print(" etag:", result.etag) print(" labels:", dict(result.labels)) return result except Exception as e: print("exception:", e) return None result = create_dataset("happydb-" + TIMESTAMP, DATA_SCHEMA) # + [markdown] id="dataset_id:result" # Now save the unique dataset identifier for the `Dataset` resource instance you created. # + id="dataset_id:result" # The full unique ID for the dataset dataset_id = result.name # The short numeric ID for the dataset dataset_short_id = dataset_id.split("/")[-1] print(dataset_id) # + [markdown] id="data_preparation:text,u_dataset" # ### Data preparation # # The Vertex `Dataset` resource for text has a couple of requirements for your text data. # # - Text examples must be stored in a CSV or JSONL file. # + [markdown] id="data_import_format:tcn,u_dataset,csv" # #### CSV # # For text classification, the CSV file has a few requirements: # # - No heading. # - First column is the text example or Cloud Storage path to text file (.txt suffix). # - Second column the label. # + [markdown] id="import_file:u_dataset,csv" # #### Location of Cloud Storage training data. # # Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage. # + id="import_file:happydb,csv,tcn" IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv" # + [markdown] id="quick_peek:csv" # #### Quick peek at your data # # You will use a version of the Happy Moments dataset that is stored in a public Cloud Storage bucket, using a CSV index file. # # Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows. # + id="quick_peek:csv" if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") # ! gsutil cat $FILE | head # + [markdown] id="import_data" # ### Import data # # Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following: # # - Uses the `Dataset` client. # - Calls the client method `import_data`, with the following parameters: # - `name`: The human readable name you give to the `Dataset` resource (e.g., happydb). # - `import_configs`: The import configuration. # # - `import_configs`: A Python list containing a dictionary, with the key/value entries: # - `gcs_sources`: A list of URIs to the paths of the one or more index files. # - `import_schema_uri`: The schema identifying the labeling type. # # The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break. 
# + id="import_data" def import_data(dataset, gcs_sources, schema): config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}] print("dataset:", dataset_id) start_time = time.time() try: operation = clients["dataset"].import_data( name=dataset_id, import_configs=config ) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print( "after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled(), ) return operation except Exception as e: print("exception:", e) return None import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA) # + [markdown] id="export_dataset" # ### Export dataset index # # Next, you will export the dataset index to a JSONL file which will then be used by your custom training job to get the data and corresponding labels for training your Happy Moments model. Use this helper function `export_data` to export the dataset index. The function does the following: # # - Uses the dataset client. # - Calls the client method `export_data`, with the following parameters: # - `name`: The human readable name you give to the dataset (e.g., happydb). # - `export_config`: The export configuration. # - `export_config` A python list containing a dictionary, with the key/value entries: # - `gcs_destination`: The Cloud Storage bucket to write the JSONL dataset index file to. # # The `export_data()` method returns a long running `operation` object. This will take a few minutes to complete. The helper function will return the long running operation and the result of the operation when the export has completed. # + id="export_dataset" EXPORT_FILE = BUCKET_NAME + "/export" def export_data(dataset_id, gcs_dest): config = {"gcs_destination": {"output_uri_prefix": gcs_dest}} start_time = time.time() try: operation = clients["dataset"].export_data( name=dataset_id, export_config=config ) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print( "after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled(), ) return operation, result except Exception as e: print("exception:", e) return None, None _, result = export_data(dataset_id, EXPORT_FILE) # + [markdown] id="export_dataset_quick_peak:tcn" # #### Quick peak at your exported dataset index file # # Let's now take a quick peak at the contents of the exported dataset index file. When the `export_data()` completed, the response object was obtained from the `result()` method of the long running operation. The response object contains the property: # # - `exported_files`: A list of the paths to the exported dataset index files, which in this case will be one file. # # You will get the path to the exported dataset index file (`result.exported_files[0]`) and then display the first ten JSON objects in the file -- i.e., data items. # # The JSONL format for each data item is: # # { "textGcsUri": path_to_the_text_file, "classificationAnnotation": { "displayName": label } } # + id="export_dataset_quick_peak:tcn" jsonl_index = result.exported_files[0] # ! 
gsutil cat $jsonl_index | head # + [markdown] id="undeploy_model" # ## Undeploy the `Model` resource # # Now undeploy your `Model` resource from the serving `Endpoint` resoure. Use this helper function `undeploy_model`, which takes the following parameters: # # - `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed to. # - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to. # # This function calls the endpoint client service's method `undeploy_model`, with the following parameters: # # - `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed. # - `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed. # - `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource. # # Since this is the only deployed model on the `Endpoint` resource, you simply can leave `traffic_split` empty by setting it to {}. # + id="undeploy_model" def undeploy_model(deployed_model_id, endpoint): response = clients["endpoint"].undeploy_model( endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={} ) print(response) undeploy_model(deployed_model_id, endpoint_id) # + [markdown] id="cleanup" # # Cleaning up # # To clean up all GCP resources used in this project, you can [delete the GCP # project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. # # Otherwise, you can delete the individual resources you created in this tutorial: # # - Dataset # - Pipeline # - Model # - Endpoint # - Batch Job # - Custom Job # - Hyperparameter Tuning Job # - Cloud Storage Bucket # + id="cleanup" delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and "dataset_id" in globals(): clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and "pipeline_id" in globals(): clients["pipeline"].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and "model_to_deploy_id" in globals(): clients["model"].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and "endpoint_id" in globals(): clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and "batch_job_id" in globals(): clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and "job_id" in globals(): clients["job"].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and 
"hpt_job_id" in globals(): clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): # ! gsutil rm -r $BUCKET_NAME # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from neuron import h, gui from neuron.units import ms, mV h.load_file('stdrun.hoc') class Cell: def __init__(self, gid, x, y, z, theta): self._gid = gid self._setup_morphology() self.all = self.soma.wholetree() self._setup_biophysics() self.x = self.y = self.z = 0 # <-- NEW h.define_shape() self._rotate_z(theta) # <-- NEW self._set_position(x, y, z) # <-- NEW def __repr__(self): return '{}[{}]'.format(self.name, self._gid) # everything below here is NEW def _set_position(self, x, y, z): for sec in self.all: for i in range(sec.n3d()): sec.pt3dchange(i, x - self.x + sec.x3d(i), y - self.y + sec.y3d(i), z - self.z + sec.z3d(i), sec.diam3d(i)) self.x, self.y, self.z = x, y, z def _rotate_z(self, theta): """Rotate the cell about the Z axis.""" for sec in self.all: for i in range(sec.n3d()): x = sec.x3d(i) y = sec.y3d(i) c = h.cos(theta) s = h.sin(theta) xprime = x * c - y * s yprime = x * s + y * c sec.pt3dchange(i, xprime, yprime, sec.z3d(i), sec.diam3d(i)) class BallAndStick(Cell): name = 'BallAndStick' def _setup_morphology(self): self.soma = h.Section(name='soma', cell=self) self.dend = h.Section(name='dend', cell=self) self.dend.connect(self.soma) self.soma.L = self.soma.diam = 12.6157 self.dend.L = 200 self.dend.diam = 1 def _setup_biophysics(self): for sec in self.all: sec.Ra = 100 # Axial resistance in Ohm * cm sec.cm = 1 # Membrane capacitance in micro Farads / cm^2 self.soma.insert('hh') for seg in self.soma: seg.hh.gnabar = 0.12 # Sodium conductance in S/cm2 seg.hh.gkbar = 0.036 # Potassium conductance in S/cm2 seg.hh.gl = 0.0003 # Leak conductance in S/cm2 seg.hh.el = -54.3 # Reversal potential in mV # Insert passive current in the dendrite self.dend.insert('pas') for seg in self.dend: seg.pas.g = 0.001 # Passive conductance in S/cm2 seg.pas.e = -65 # Leak reversal potential mV mycell = BallAndStick(0, 0, 0, 0, 0) def create_n_BallAndStick(n, r): """n = number of cells; r = radius of circle""" cells = [] for i in range(n): theta = i * 2 * h.PI / n cells.append(BallAndStick(i, h.cos(theta) * r, h.sin(theta) * r, 0, theta)) return cells my_cells = create_n_BallAndStick(5, 50) ps = h.PlotShape(True) ps.show(0) # + stim = h.NetStim() # Make a new stimulator # Attach it to a synapse in the middle of the dendrite # of the first cell in the network. (Named 'syn_' to avoid # being overwritten with the 'syn' var assigned later.) syn_ = h.ExpSyn(my_cells[0].dend(0.5)) syn_.tau = 2 * ms print('Reversal potential = {} mV'.format(syn_.e)) stim.number = 1 stim.start = 9 ncstim = h.NetCon(stim, syn_) ncstim.delay = 1 * ms ncstim.weight[0] = 0.04 # NetCon weight is a vector. 
# - recording_cell = my_cells[0] soma_v = h.Vector().record(recording_cell.soma(0.5)._ref_v) dend_v = h.Vector().record(recording_cell.dend(0.5)._ref_v) syn_i = h.Vector().record(syn_._ref_i) t = h.Vector().record(h._ref_t) h.finitialize(-65 * mV) h.continuerun(25 * ms) # + import matplotlib.pyplot as plt # %matplotlib inline fig = plt.figure(figsize=(8,4)) ax1 = fig.add_subplot(2, 1, 1) soma_plot = ax1.plot(t, soma_v, color='black', label='soma(0.5)') dend_plot = ax1.plot(t, dend_v, color='red', label='dend(0.5)') rev_plot = ax1.plot([t[0], t[-1]], [syn_.e, syn_.e], label='syn reversal', color='blue', linestyle=':') ax1.legend() ax1.set_ylabel('mV') ax1.set_xticks([]) # Use ax2's tick labels ax2 = fig.add_subplot(2, 1, 2) syn_plot = ax2.plot(t, syn_i, color='blue', label='synaptic current') ax2.legend() ax2.set_ylabel(h.units('ExpSyn.i')) ax2.set_xlabel('time (ms)') plt.show() # - syns = [] netcons = [] for source, target in zip(my_cells, my_cells[1:] + [my_cells[0]]): syn = h.ExpSyn(target.dend(0.5)) nc = h.NetCon(source.soma(0.5)._ref_v, syn, sec=source.soma) nc.weight[0] = 0.05 nc.delay = 5 netcons.append(nc) syns.append(syn) h.finitialize(-65 * mV) h.continuerun(100 * ms) plt.plot(t, soma_v, label='soma(0.5)') plt.plot(t, dend_v, label='dend(0.5)') plt.legend() plt.show() spike_times = [h.Vector() for nc in netcons] for nc, spike_times_vec in zip(netcons, spike_times): nc.record(spike_times_vec) h.finitialize(-65 * mV) h.continuerun(100 * ms) for i, spike_times_vec in enumerate(spike_times): print('cell {}: {}'.format(i, list(spike_times_vec))) # + import matplotlib.pyplot as plt plt.figure() for i, spike_times_vec in enumerate(spike_times): plt.vlines(spike_times_vec, i + 0.5, i + 1.5) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: linking # language: python # name: linking # --- # # Train a ML Classifier to Link FEBRL People Data # Open In Colab # In this tutorial, we'll train a machine learning classifier to score candidate pairs for linking, using supervised learning. We will use the same training dataset as the SimSum classification tutorial, as well as the same augmentation, blocking, and comparing functions. The functions have been included in a separate `.py` file for re-use and convenience, so we can focus on code unique to this tutorial. # # The SimSum classification tutorial included a more detailed walkthrough of augmentation, blocking, and comparing, and since we're using the same functions within this tutorial, details will be light for those steps. Please see the SimSum tutorial if you need a refresher. # ## Google Colab Setup # + # Check if we're running locally, or in Google Colab. try: import google.colab COLAB = True except ModuleNotFoundError: COLAB = False # If we're running in Colab, download the tutorial functions file # to the Colab session local directory, and install required libraries. 
if COLAB: import requests tutorial_functions_url = "https://raw.githubusercontent.com/rachhouse/intro-to-data-linking/main/tutorial_notebooks/linking_tutorial_functions.py" r = requests.get(tutorial_functions_url) with open("linking_tutorial_functions.py", "w") as fh: fh.write(r.text) # !pip install -q recordlinkage jellyfish altair # - # ## Imports # + import itertools import altair as alt import pandas as pd from sklearn.ensemble import AdaBoostClassifier from sklearn.model_selection import train_test_split # - # Grab the linking functions file from github and save locally for Colab. # We'll import our previously used linking functions from this file. import linking_tutorial_functions as tutorial # ## Load Training Data and Ground Truth Labels df_A, df_B, df_ground_truth = tutorial.load_febrl_training_data(COLAB) # ## Data Augmentation for df in [df_A, df_B]: df = tutorial.augment_data(df) # ## Blocking candidate_links = tutorial.block(df_A, df_B) # ## Comparing # + # %%time features = tutorial.compare(candidate_links, df_A, df_B) # - features.head() # ## Add Labels to Feature Vectors # We've augmented, blocked, and compared, so now we're ready to train a classification model which can score candidate record pairs on how likely it is that they are a link. As we did when classifying links via SimSum, we'll append our ground truth values to the features DataFrame. # + df_ground_truth["ground_truth"] = df_ground_truth["ground_truth"].apply(lambda x: 1.0 if x else 0.0) df_labeled_features = pd.merge( features, df_ground_truth, on=["person_id_A", "person_id_B"], how="left" ) df_labeled_features["ground_truth"].fillna(0, inplace=True) df_labeled_features.head() # - # ## Separate Candidate Links into Train/Test # Next, we'll separate our features DataFrame into a train and test set. # + X = df_labeled_features.drop("ground_truth", axis=1) y = df_labeled_features["ground_truth"] X_train, X_test, y_train, y_test = train_test_split( X, y, stratify=y, test_size=0.2 ) # - # ## Train ML Classifier # Though we're using a very simple machine learning model here, the important takeaway is to think of the classification step as a black box that produces a score indicating how likely the model thinks a given candidate record pair is a link. There must be an output score, but *how* that score is generated provides a lot of flexibility. Perhaps you just want to use SimSum, which could be considered an extremely simple "model". Maybe you want to build a neural net to ingest the comparison vectors and produce a score. Generally, in linking, the classification model is the simplest piece, and much more work will go into your blockers and comparators. classifier = AdaBoostClassifier(n_estimators=64, learning_rate=0.5) classifier.fit(X_train, y_train) # ## Predict Using ML Classifier # Here, we'll generate scores for our test set, and format those predictions in a form useful for evaluation. y_pred = classifier.predict_proba(X_test)[:,1] df_predictions = X_test.copy() df_predictions["model_score"] = y_pred df_predictions["ground_truth"] = y_test # ## Choosing a Linking Model Score Threshold # As with SimSum, we're able to examine the resulting score distribution and precision/recall vs. model score threshold plot to determine where the cutoff should be set. # ### Model Score Distribution tutorial.plot_model_score_distribution(df_predictions) # ### Precision and Recall vs. 
Model Score df_eval = tutorial.evaluate_linking(df_predictions) tutorial.plot_precision_recall_vs_threshold(df_eval) tutorial.plot_f1_score_vs_threshold(df_eval) # ### Top Scoring Non-Links # + display_cols = [ "first_name", "surname", "street_number", "address_1", "address_2", "suburb", "postcode", "state", "date_of_birth", "age", "phone_number", "soc_sec_id", "soundex_surname", "soundex_firstname", "nysiis_surname", "nysiis_firstname", ] display_cols = [[f"{col}_A", f"{col}_B"] for col in display_cols] display_cols = list(itertools.chain.from_iterable(display_cols)) # + df_top_scoring_negatives = df_predictions[ df_predictions["ground_truth"] == False ][["model_score", "ground_truth"]].sort_values("model_score", ascending=False).head(n=10) df_top_scoring_negatives = tutorial.augment_scored_pairs(df_top_scoring_negatives, df_A, df_B) with pd.option_context('display.max_columns', None): display(df_top_scoring_negatives[["model_score", "ground_truth"] + display_cols]) # - # ### Lowest Scoring True Links # + df_lowest_scoring_positives = df_predictions[ df_predictions["ground_truth"] == True ][["model_score", "ground_truth"]].sort_values("model_score").head(n=10) df_lowest_scoring_positives = tutorial.augment_scored_pairs(df_lowest_scoring_positives, df_A, df_B) with pd.option_context('display.max_columns', None): display(df_lowest_scoring_positives[["model_score", "ground_truth"] + display_cols]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install pandas # !pip install xlrd # !pip install sklearn # !pip install imblearn import xlrd book = xlrd.open_workbook("Datasheets info.xlsx") sheetMQ2 = book.sheet_by_name("MQ2 - Pololulu") sheetMQ3 = book.sheet_by_name("MQ3 - Sparkfun") sheetMQ4 = book.sheet_by_name("MQ4 - Sparkfun") sheetMQ5 = book.sheet_by_name("MQ5 - Sparkfun") sheetMQ6 = book.sheet_by_name("MQ6 - Sparkfun") sheetMQ7 = book.sheet_by_name("MQ7 - Sparkfun") sheetMQ8 = book.sheet_by_name("MQ8 - Sparkfun") sheetMQ9 = book.sheet_by_name("MQ9 - Haoyuelectronics") sheetMQ131 = book.sheet_by_name("MQ131- Sensorsportal") sheetMQ135 = book.sheet_by_name("MQ135 - HANWEI") sheetMQ303A = book.sheet_by_name("MQ303A - HANWEI") sheetMQ309A = book.sheet_by_name("MQ309A - HANWEI") for row_index in range(1,20): #reading first columns RsR0, LPG, H2, CH4, CO, Alcohol = sheetMQ6.row_values(row_index, start_colx=0, end_colx=6) print(RsR0, " ", LPG, " ", H2, " ", CH4, " ", CO, " ", Alcohol) x_MQ6 = sheetMQ6.col_values(0)[2:] MQ6_LPG = sheetMQ6.col_values(1)[2:] MQ6_H2 = sheetMQ6.col_values(2)[2:] MQ6_CH4 = sheetMQ6.col_values(3)[2:] MQ6_CO = sheetMQ6.col_values(4)[2:] MQ6_Alcohol = sheetMQ6.col_values(5)[2:] def zero_to_nan(values): """Replace every 0 with 'nan' and return a copy.""" return [float('nan') if x==0 else x for x in values] MQ6_H2 =zero_to_nan(MQ6_H2) MQ6_LPG =zero_to_nan(MQ6_LPG) MQ6_CH4 =zero_to_nan(MQ6_CH4) MQ6_CO =zero_to_nan(MQ6_CO) MQ6_Alcohol =zero_to_nan(MQ6_Alcohol) # + import pandas as pd import numpy as np from sklearn.datasets import load_iris #from sklearn.cross_validation import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn import datasets from sklearn import linear_model dataH2 = {'RsRo': x_MQ6, 'H2': MQ6_H2} dataLPG = {'RsRo': x_MQ6, 'LPG': MQ6_LPG} dataCH4 = {'RsRo': x_MQ6, 'CH4': MQ6_CH4} dataCO = {'RsRo': x_MQ6, 'CO': MQ6_CO} dataALcohol = {'RsRo': x_MQ6, 
'Alcohol': MQ6_Alcohol} dfMQ6_H2 = pd.DataFrame(dataH2) dfMQ6_LPG = pd.DataFrame(dataLPG) dfMQ6_CH4 = pd.DataFrame(dataCH4) dfMQ6_CO = pd.DataFrame(dataCO) dfMQ6_Alcohol = pd.DataFrame(dataALcohol) dfMQ6_H2['H2'] = pd.to_numeric(dfMQ6_H2['H2']) dfMQ6_LPG['LPG'] = pd.to_numeric(dfMQ6_LPG['LPG']) dfMQ6_CH4['CH4'] = pd.to_numeric(dfMQ6_CH4['CH4']) dfMQ6_CO['CO'] = pd.to_numeric(dfMQ6_CO['CO']) dfMQ6_Alcohol['Alcohol'] = pd.to_numeric(dfMQ6_Alcohol['Alcohol']) dfMQ6_H2['H2'] = dfMQ6_H2['H2'].replace('',None, regex=True) dfMQ6_LPG['LPG'] = dfMQ6_LPG['LPG'].replace('',None, regex=True) dfMQ6_CH4['CH4'] = dfMQ6_CH4['CH4'].replace('',None, regex=True) dfMQ6_CO['CO'] = dfMQ6_CO['CO'].replace('',None, regex=True) dfMQ6_Alcohol['Alcohol'] = dfMQ6_Alcohol['Alcohol'].replace('',None, regex=True) #Global X_Predict variable X_Predict = dfMQ6_LPG.RsRo.apply(lambda x: [x]).tolist() # - #Model and train H2 dataset2TrainH2 = dfMQ6_H2.copy() dataset2TrainH2.dropna(inplace=True) X_trainH2 = dataset2TrainH2.RsRo.apply(lambda x: [x]).tolist() y_trainH2 = dataset2TrainH2['H2'].tolist() model = linear_model.Lasso(alpha=0.1) model.fit(X_trainH2, y_trainH2) #Predict H2_Predicted = model.predict(X_Predict) #save into MQ2 MQ6_H2 = H2_Predicted #Model and train LPG dataset2TrainLPG = dfMQ6_LPG.copy() dataset2TrainLPG.dropna(inplace=True) X_trainLPG = dataset2TrainLPG.RsRo.apply(lambda x: [x]).tolist() y_trainLPG = dataset2TrainLPG['LPG'].tolist() model = linear_model.Lasso(alpha=0.1) model.fit(X_trainLPG, y_trainLPG) #Predict LPG_Predicted = model.predict(X_Predict) #save into MQ2 MQ6_LPG = LPG_Predicted #Model and train CH4 dataset2TrainCH4 = dfMQ6_CH4.copy() dataset2TrainCH4.dropna(inplace=True) X_trainCH4 = dataset2TrainCH4.RsRo.apply(lambda x: [x]).tolist() y_trainCH4 = dataset2TrainCH4['CH4'].tolist() model = linear_model.Lasso(alpha=0.1) model.fit(X_trainCH4, y_trainCH4) #Predict CH4_Predicted = model.predict(X_Predict) #save into MQ2 MQ6_CH4 = CH4_Predicted #Model and train CO dataset2TrainCO = dfMQ6_CO.copy() dataset2TrainCO.dropna(inplace=True) X_trainCO = dataset2TrainCO.RsRo.apply(lambda x: [x]).tolist() y_trainCO = dataset2TrainCO['CO'].tolist() model = linear_model.Lasso(alpha=0.1) model.fit(X_trainCO, y_trainCO) #Predict CO_Predicted = model.predict(X_Predict) #save into MQ2 MQ6_CO = CO_Predicted #Model and train Alcohol dataset2TrainAlcohol = dfMQ6_Alcohol.copy() dataset2TrainAlcohol.dropna(inplace=True) X_trainAlcohol = dataset2TrainAlcohol.RsRo.apply(lambda x: [x]).tolist() y_trainAlcohol = dataset2TrainAlcohol['Alcohol'].tolist() model = linear_model.Lasso(alpha=0.1) model.fit(X_trainAlcohol, y_trainAlcohol) #Predict Alcohol_Predicted = model.predict(X_Predict) #save into MQ2 MQ6_Alcohol = Alcohol_Predicted # + # %config InlineBackend.figure_formats = ['svg'] # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.lines as mlines import matplotlib.transforms as mtransforms fig, ax = plt.subplots() fig.set_size_inches(9, 5.5, forward=True) fig.set_dpi(200) # only these two lines are calibration curves plt.plot(MQ6_H2, x_MQ6, marker='o', linewidth=1, label='H2') plt.plot(MQ6_LPG, x_MQ6, marker='o', linewidth=1, label='LPG') plt.plot(MQ6_CH4, x_MQ6, marker='o', linewidth=1, label='CH4') plt.plot(MQ6_CO, x_MQ6, marker='o', linewidth=1, label='CO') plt.plot(MQ6_Alcohol, x_MQ6, marker='o', linewidth=1, label='Alcohol') # reference line, legends, and axis labels #line = mlines.Line2D([0, 1], [0, 1], color='black') #transform = ax.transAxes #line.set_transform(transform) #ax.add_line(line) 
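# Log-log axes so the curves read like the datasheet-style Rs/Ro vs. concentration calibration plots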
plt.yscale('log') plt.xscale('log') plt.legend() plt.grid(b=True, which='minor', color='lightgrey', linestyle='--') fig.suptitle('Calibration plot for MQ-6 data') ax.set_xlabel('PPM Concentration') ax.set_ylabel('Rs/Ro') #Save image plt.savefig('MQ6.svg', format = 'svg', dpi = 1200) plt.savefig('MQ6.png') plt.savefig('MQ6.eps', format = 'eps', dpi = 1200) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from logicqubit.logic import * # + def qft(qr): for i in range(len(qr)): for j in range(i): qr[i].CU1(qr[j], pi/float(2**(i-j))) qr[i].H() def iqft(qr): # transformada quântica de Fourier inversa for i in range(len(qr)): for j in range(i): qr[i].CU1(qr[j], -pi/float(2**(i-j))) qr[i].H() def swap(s1, s2): s2.CX(s1) s1.CX(s2) s2.CX(s1) # + # f(x) = 7^x mod 15 # truth table # x - y # 000 - 0001 # 001 - 0111 # 010 - 0100 # 011 - 1101 # 100 - 0001 # 101 - 0111 # 110 - 0100 # 111 - 1011 logicQuBit = LogicQuBit(7, first_left=True) x1 = Qubit() x2 = Qubit() x3 = Qubit() y1 = Qubit() y2 = Qubit() y3 = Qubit() y4 = Qubit() x1.H() x2.H() x3.H() oracle = Oracle([x1, x2, x3]) oracle.addTable(y1, ['011', '111']) oracle.addTable(y2, ['001', '010', '011', '101','110']) oracle.addTable(y3, ['001', '101', '111']) oracle.addTable(y4, ['000', '001', '011', '100','101','111']) logicQuBit.addOracle(oracle) logicQuBit.PrintOperations() qft([x1,x2,x3]) swap(x1,x3) # - psi = x1.getPsiAtAngles(degree = True) psi = {bin(i)[2:].zfill(7):value for i,value in enumerate(psi)} psi res = logicQuBit.Measure([y1,y2,y3,y4], True) #res = logicQuBit.Measure([x1,x2,x3], True) logicQuBit.Plot() print(res) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Distribution Sampling # # This notebook explores lognormal and normally distributed data import numpy as np import matplotlib.pyplot as plt # %matplotlib inline rng = np.random.RandomState(seed=42) # First, let's create a sample randomly drawn from a lognormal distribution mu, sigma = 3., 1. samps = rng.lognormal(mu, sigma, 1000) # Visualize the samples # + fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(1,2,1) count, bins, _ = ax1.hist(samps, 100, density=True, align='mid') x = np.linspace(min(bins), max(bins), 10000) pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))) ax1.plot(x, pdf, linewidth=2, color='r') ax1.set_title('Randomly sampled Lognormal Data') ax2 = fig.add_subplot(1,2,2) _, bins2, _ = ax2.hist(np.log1p(samps), 100, density=True, align='mid') ax2.plot(bins2, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins2 - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r') ax2.set_title('Randomly sampled Lognormal Data, Log-Transformed') plt.tight_layout() plt.show() # - # Ok, we have created a lognormally distributed sample of size n = 1000. Applying a log-transform to the data confirms it is lognormally distributed, meaning it is normally distributed in log-space. # # Something of interest from the numpy documentation is that np.random.RandomState().lognormal(mu, sigma) actually samples from an underlying normal distribution of mean, mu, and standard deviation, sigma. # # This seems to be a consequence (or maybe feature) of the Central Limit Theorem ... more on this later. 
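#
# As a quick sanity check of that claim (a minimal sketch; `log_samps` is just a throwaway name), the log of the samples should be approximately normal with mean `mu` and standard deviation `sigma`:

# +
log_samps = np.log(samps)
print("mean of log(samps): {:.3f} (mu = {})".format(log_samps.mean(), mu))
print("std  of log(samps): {:.3f} (sigma = {})".format(log_samps.std(), sigma))
# -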
# # Let's now measure the uncertainty in this distribution using random sampling with replacement, also known as *bootstrap*. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline plt.rcParams['figure.figsize'] = (15 , 10) sns.set_style('darkgrid') function_val_epoch_elitism = [] function_val_epoch_random_search = [] function_val_epoch_basic_genetic = [] function_val_epoch_diversity = [] # ## Using Elitism # ### Average Results: Minimum Value of fitness is between -8.7 and -9.50. Most of the time it is near -8.7. It also depends on the population size, with higher population size like >1000, it sometimes reached to a value of -9.7. # + function_val_epoch_elitism = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = 1000 GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 CROSSOVER_PROB = 0.1 h = 1e-7 X_SIZE = 5 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): global CROSSOVER_PROB child_chromosome = [] for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > CROSSOVER_PROB): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) generation = 1 count = 1000 population = [] for _ in range(POPULATION_SIZE): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*POPULATION_SIZE) new_generation.extend(population[:s]) s = int(0.90*POPULATION_SIZE) for _ in range(s): parent1 = 
random.choice(population[:POPULATION_SIZE//2]) parent2 = random.choice(population[:POPULATION_SIZE//2]) child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_elitism.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , "k--") # - # ## Using Basic Genetic Algorithm # ### The minimum value of fitness achieved is between -8.4 to -9.0. # + function_val_epoch_basic_genetic = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = 1000 GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 CROSSOVER_PROB = 0.1 h = 1e-7 X_SIZE = 5 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): child_chromosome = [] global CROSSOVER_PROB for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > CROSSOVER_PROB): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) generation = 1 count = 1000 population = [] for _ in range(POPULATION_SIZE): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*POPULATION_SIZE) new_generation.extend(population[:s]) s = int(0.90*POPULATION_SIZE) for _ in range(s): # no elitism parent1 = random.choice(population[:POPULATION_SIZE]) parent2 = random.choice(population[:POPULATION_SIZE]) 
child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_basic_genetic.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , "k--") # - # ## Using Diversity: # + function_val_epoch_diversity = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = 1000 GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 h = 1e-7 X_SIZE = 5 DIVERSITY_PERCENT = 50 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): global DIVERSITY_PERCENT tot = len(self.chromosome) diversity_idx_arr = np.random.choice(range(tot) , replace=False , size=int(DIVERSITY_PERCENT*tot / 100)) child_chromosome = [] for j , gp1 , gp2 in zip(range(tot) , self.chromosome , par2.chromosome): child_part_chromosome = [] for i in range(len(gp1)): if (j*tot+i) in diversity_idx_arr: child_part_chromosome.append(self.mutate(i)) else: probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) generation = 1 count = 1000 population = [] for _ in range(POPULATION_SIZE): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*POPULATION_SIZE) new_generation.extend(population[:s]) s = int(0.90*POPULATION_SIZE) for _ in range(s): # no elitism parent1 = random.choice(population[:POPULATION_SIZE]) parent2 = random.choice(population[:POPULATION_SIZE]) child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation 
get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_diversity.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , "k--") # - # ## Using Random Search # + function_val_epoch_random_search = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = 1000 GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 h = 1e-7 X_SIZE = 5 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): child_chromosome = [] for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > 0.1): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) generation = 1 count = 1000 population = [] for _ in range(POPULATION_SIZE): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*POPULATION_SIZE) new_generation.extend(population[:s]) s = int(0.90*POPULATION_SIZE) for _ in range(s): # Random Search gnome = Individual.create_gnome() new_generation.append(Individual(gnome)) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_random_search.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: 
get_num_arr.append(getNum(l)) print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , "k--") # - # ## Comparison: plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , "k--") plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , "b--") plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , "r--") plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , "g--") plt.legend(["with elitism" , "basic-genetic" , "with diversity" , "random search"]) # ### Clearly, Random Search is worst approach for this kind of the problems, it's complete luck. # ## Elitism: with different sample count # + RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = [50 ,100 , 500 , 1000] GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 CROSSOVER_PROB = 0.1 h = 1e-7 X_SIZE = 5 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): global CROSSOVER_PROB child_chromosome = [] for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > CROSSOVER_PROB): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X color = ["k--" , "b--" , "r--" , "g--" , "y--"] legend = [] for N, c in zip(POPULATION_SIZE , color): np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) function_val_epoch_elitism = [] generation = 1 count = 1000 population = [] for _ in range(N): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*N) new_generation.extend(population[:s]) s = int(0.90*N) for _ 
in range(s): parent1 = random.choice(population[:N//2]) parent2 = random.choice(population[:N//2]) child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) #print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_elitism.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , c) legend.append("N = {}".format(N)) plt.legend(legend) # - # The result above is as expected for a genetic algorithm with elitism. The more the size of the population the diversity and the fitness is maintained at the same time, which in principle yields (most of the time) a better result on increasing the size of the population with a suitable number of epochs. # ## Basic Genetic Algorithm : with different sample count # + function_val_epoch_basic_genetic = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = [50, 100, 500, 1000] GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 CROSSOVER_PROB = 0.1 h = 1e-7 X_SIZE = 5 def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): child_chromosome = [] global CROSSOVER_PROB for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > CROSSOVER_PROB): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) color = ["k--" , "b--" , "r--" , "g--" , "y--"] legend = [] for N, c in zip(POPULATION_SIZE , color): np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) function_val_epoch_basic_genetic = [] generation = 1 count = 1000 population = [] for _ 
in range(N): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*N) new_generation.extend(population[:s]) s = int(0.90*N) for _ in range(s): # no elitism parent1 = random.choice(population[:N]) parent2 = random.choice(population[:N]) child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) #print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_basic_genetic.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , c) legend.append("N = {}".format(N)) plt.legend(legend) # - # Unlike genetic algorithm with elitism, in basic genetic algorithm the complete population get the chance to mate(crossover and mutation) which may or may not improve the results on increasing the size of the population, because increasing the size of the population also expose us to the risk that elite members will not get chance to mate. # ## Diversity: with the different sample counts # + function_val_epoch_diversity = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = [50 , 100 , 500 , 1000] GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 h = 1e-7 X_SIZE = 5 DIVERSITY_PERCENT = 50 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): global DIVERSITY_PERCENT tot = len(self.chromosome) diversity_idx_arr = np.random.choice(range(tot) , replace=False , size=int(DIVERSITY_PERCENT*tot / 100)) child_chromosome = [] for j , gp1 , gp2 in zip(range(tot) , self.chromosome , par2.chromosome): child_part_chromosome = [] for i in range(len(gp1)): if (j*tot+i) in diversity_idx_arr: child_part_chromosome.append(self.mutate(i)) else: probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: 
#print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) color = ["k--" , "b--" , "r--" , "g--" , "y--"] legend = [] for N, c in zip(POPULATION_SIZE , color): np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) function_val_epoch_diversity = [] generation = 1 count = 1000 population = [] for _ in range(N): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*N) new_generation.extend(population[:s]) s = int(0.90*N) for _ in range(s): # no elitism parent1 = random.choice(population[:N]) parent2 = random.choice(population[:N]) child = parent1.mate(parent2) new_generation.append(child) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) #print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_diversity.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Population: {} X: {}\tMinimum Value: {}".format(N, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , c) legend.append("N = {}".format(N)) plt.legend(legend) # - # I hardly need to state that these are by far the best results we have obtained. Just like elitism, this variant is also affected by increasing the population size, and the overall trend is that performance (on average) improves. 
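# For reference, the objective that every variant in this comparison is minimizing can be read straight off the `f1`, `f2` and `f3` definitions in the code above (five variables, each constrained to $[-2.04, 2.04]$):
#
# $$g(X) = \sum_{i=0}^{4} x_i^2 + \sum_{i=0}^{4} \lfloor x_i \rfloor + \sum_{i=0}^{4} i\,x_i^4 + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0,1)$$
#
# The reported "fitness" of a chromosome is therefore a noisy sample of $g$, which is part of why the minima quoted in the commentary vary a little from run to run.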
# ## Random Search : with different sample counts # + function_val_epoch_random_search = [] RANGE_OF_X = [-2.04 , 2.04] POPULATION_SIZE = [50, 100, 500, 1000] GENES = ["01" , "012" , "0123456789", "0123456789"] TARGET_LENGTH = 4 h = 1e-7 X_SIZE = 5 np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) def f1(X): return np.sum(np.square(X)) def f2(X): return np.sum(np.floor(X)) def f3(X): return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0] def g(X): return f1(X) + f2(X) + f3(X) def determine_target_length(range_of_x): n = max(range_of_x) return int(np.ceil(np.log(n)/np.log(2))) def getNum(l): num_str="" if l[0] == "1": num_str+="-" num_str += "{}.{}{}".format(l[1] , l[2], l[3]) return float(num_str) def inRange(l , range_of_x): num = getNum(l) return min(range_of_x)<= num <= max(range_of_x) class Individual(object): def __init__(self,chromosome): self.chromosome = chromosome self.fitness = self.calculate_fitness() @classmethod def mutate(self , digit_num:int): global GENES return random.choice(GENES[digit_num]) @classmethod def create_gnome(self): global TARGET_LENGTH global RANGE_OF_X global X_SIZE gnome = [] for i in range(X_SIZE): while True: l = [self.mutate(i) for i in range(TARGET_LENGTH)] if (inRange(l , RANGE_OF_X)): gnome.append(l) break return gnome def mate(self , par2): child_chromosome = [] for gp1 , gp2 in zip(self.chromosome , par2.chromosome): child_part_chromosome = [] # print(gp1) for i in range(len(gp1)): probability_of_crossover = random.random() if (probability_of_crossover > 0.1): # do crossover probability_of_p1_gene = random.random() if probability_of_p1_gene > 0.5: child_part_chromosome.append(gp1[i]) else: child_part_chromosome.append(gp2[i]) else: # do mutation child_part_chromosome.append(self.mutate(i)) child_chromosome.append(child_part_chromosome) return Individual(child_chromosome) def calculate_fitness(self): global TARGET_LENGTH X = [] for s in self.chromosome: #print(s) #s = ''.join(map(str, self.chromosome)) x = getNum(s) X.append(x) return g(X) global POPULATION_SIZE global TARGET_LENGTH global RANGE_OF_X # TARGET_LENGTH = determine_target_length(RANGE_OF_X) color = ["k--" , "b--" , "r--" , "g--" , "y--"] legend = [] for N, c in zip(POPULATION_SIZE , color): np.random.seed(np.random.randint(low=0 , high=100)) random.seed(np.random.randint(low=0 , high=100)) function_val_epoch_random_search = [] generation = 1 count = 1000 population = [] for _ in range(N): gnome = Individual.create_gnome() population.append(Individual(gnome)) while count!=0: count-=1 population = sorted(population , key = lambda x:x.fitness) # performing elitism new_generation = [] s = int(0.10*N) new_generation.extend(population[:s]) s = int(0.90*N) for _ in range(s): # Random Search gnome = Individual.create_gnome() new_generation.append(Individual(gnome)) if generation % 5 ==0: population = new_generation get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) # print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness)) function_val_epoch_random_search.append(population[0].fitness) generation += 1 get_num_arr = [] for l in population[0].chromosome: get_num_arr.append(getNum(l)) print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness)) plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , c) legend.append("N = {}".format(N)) plt.legend(legend) # - # Clearly, Random 
Search is completely out of this league. It doesn't stand a chance while comparing with the other algorithms. I won't even try it again. It's complete brute force. If you are very lucky(which is impossible) then you have a chance of getting a good result through random search. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + papermill={"duration": 0.024222, "end_time": "2021-01-10T16:03:17.569908", "exception": false, "start_time": "2021-01-10T16:03:17.545686", "status": "completed"} tags=[] import matplotlib.pyplot as plt import numpy as np # + [markdown] papermill={"duration": 0.011109, "end_time": "2021-01-10T16:03:17.592837", "exception": false, "start_time": "2021-01-10T16:03:17.581728", "status": "completed"} tags=[] # # Videoaula # + papermill={"duration": 0.174697, "end_time": "2021-01-10T16:03:17.778706", "exception": false, "start_time": "2021-01-10T16:03:17.604009", "status": "completed"} tags=[] from IPython.display import YouTubeVideo YouTubeVideo("DT02qw14Xzk") # + papermill={"duration": 0.022433, "end_time": "2021-01-10T16:03:17.813592", "exception": false, "start_time": "2021-01-10T16:03:17.791159", "status": "completed"} tags=[] plt.rcParams.update({'font.size':14}) # + papermill={"duration": 0.027191, "end_time": "2021-01-10T16:03:17.853653", "exception": false, "start_time": "2021-01-10T16:03:17.826462", "status": "completed"} tags=[] x = np.arange(5) x # + papermill={"duration": 0.026073, "end_time": "2021-01-10T16:03:17.892995", "exception": false, "start_time": "2021-01-10T16:03:17.866922", "status": "completed"} tags=[] poupanca = [1500, 32000, 54000, 2300, 91000] poupanca # + papermill={"duration": 0.933889, "end_time": "2021-01-10T16:03:18.840310", "exception": false, "start_time": "2021-01-10T16:03:17.906421", "status": "completed"} tags=[] plt.bar(x,poupanca,color = 'limegreen',edgecolor = 'teal',linewidth = 2,\ hatch = '*') plt.ylabel('Poupança (R$)',fontsize = 18) plt.savefig('GraficoBarras.png',dpi = 600,bbox_inches = 'tight') plt.show() # + [markdown] papermill={"duration": 0.013817, "end_time": "2021-01-10T16:03:18.868773", "exception": false, "start_time": "2021-01-10T16:03:18.854956", "status": "completed"} tags=[] # {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} # + papermill={"duration": 0.023998, "end_time": "2021-01-10T16:03:18.906864", "exception": false, "start_time": "2021-01-10T16:03:18.882866", "status": "completed"} tags=[] pessoas = ['Ana','Pedro','Paulo','Carla', 'Andre'] pessoas # + papermill={"duration": 0.991699, "end_time": "2021-01-10T16:03:19.913103", "exception": false, "start_time": "2021-01-10T16:03:18.921404", "status": "completed"} tags=[] plt.bar(x,poupanca,color = 'limegreen',edgecolor = 'teal',linewidth = 2,\ hatch = '*') plt.xticks(x,pessoas,rotation = 45) plt.ylabel('Poupança (R$)',fontsize = 18) plt.savefig('GraficoBarras.png',dpi = 600,bbox_inches = 'tight') plt.show() # + papermill={"duration": 0.023553, "end_time": "2021-01-10T16:03:19.952497", "exception": false, "start_time": "2021-01-10T16:03:19.928944", "status": "completed"} tags=[] from matplotlib.ticker import FuncFormatter # + papermill={"duration": 0.024336, "end_time": "2021-01-10T16:03:19.992718", "exception": false, "start_time": "2021-01-10T16:03:19.968382", "status": "completed"} tags=[] def mil(x,pos): return f'R$ {x*1e-3:.1f}k' # + papermill={"duration": 0.025974, "end_time": 
"2021-01-10T16:03:20.034759", "exception": false, "start_time": "2021-01-10T16:03:20.008785", "status": "completed"} tags=[] formatter = FuncFormatter(mil) # + papermill={"duration": 0.924, "end_time": "2021-01-10T16:03:20.975204", "exception": false, "start_time": "2021-01-10T16:03:20.051204", "status": "completed"} tags=[] fig,ax = plt.subplots() ax.yaxis.set_major_formatter(formatter) barras = plt.bar(x,poupanca,color = 'mediumorchid',edgecolor = 'plum',linewidth = 2,\ hatch = '*') barras[0].set_hatch('+') barras[1].set_hatch('o') barras[4].set_hatch('/') plt.xticks(x,pessoas,rotation = 45) plt.ylabel('Poupança',fontsize = 18) plt.savefig('GraficoBarras.png',dpi = 600,bbox_inches = 'tight') plt.show() # + papermill={"duration": 0.01726, "end_time": "2021-01-10T16:03:21.009735", "exception": false, "start_time": "2021-01-10T16:03:20.992475", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # ## Quantum Machine Learning # # ### What is Machine Learning # # In recent decades the rapid development of cheap and powerful classical processing units has transformed all aspects of society. Artificial intelligence has been pondered by humanity from as far back as the 1800's but has remained the realm of science fiction until numerous great advances in the capability of computers has facilitated many implementations of AI across various industries from medicine to the military. # # Many of these AI technologies are based on machine learning # # ### Why machine learning could benefit from a quantum advantage # # # # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:.conda-gisenv] * # language: python # name: conda-env-.conda-gisenv-py # --- from collections import defaultdict import pandas as pd import os import math #myfolder=r'F:\website\cmis6\Civilworks cost\Work Schedule' #office computer myfolder=r'E:\Website_26_07_2020\cmis6\Civilworks cost\Work Schedule' #home computer inputpath=os.path.join(myfolder,'Schdule Input.xlsx') class GanttBar: def __init__(self,start,duration): self.start_day=start self.duration=duration self.finish_day=self.start_day+self.duration def addTaskUnit(self,unit): self.taskUnit=unit def addTaskVolume(self,volume): self.taskVolume=volume self.productionRate=math.ceil(self.taskVolume/self.duration) # + """Class to represent graph""" class Graph: def __init__(self,vertices): self.graph=defaultdict(list) self.V=vertices self.indeg0=[True]*(self.V+1) self.outdeg0=[True]*(self.V+1) self.edges=[] self.adj=defaultdict(list) self.visited=[False]*(self.V+1) self.path=[] self.durations=[] self.taskchains=[] self.tc_durations=[] self.minimum_start_date=[0]*self.V # function to add an edge to graph def addEdge(self,u,v): self.graph[u].append(v) edge=(u,v) print(edge) self.edges.append(edge) # set indeg0[v] <- false self.indeg0[v] = False # set outdeg0[u] <- false self.outdeg0[u] = False #print(self.graph[u]) # A recursive function used by topologicalSort def topologicalSortUtil(self,v,visited,stack): #mark the current node as visited visited[v]=True # Recur for all the vertices adjacent to this vertex for i in self.graph[v]: if visited[i]==False: self.topologicalSortUtil(i,visited,stack) # Mark the current node as visited. stack.insert(0,v) # The function to do Topological Sort. 
It uses recursive # topologicalSortUtil() def topologicalSort(self): # Mark all the vertices as not visited visited=[False]*(self.V+1) stack=[] # Call the recursive helper function to store Topological # Sort starting from all vertices one by one for i in range(self.V): if visited[i]==False: self.topologicalSortUtil(i,visited,stack) print (stack) def displayGraph(self): print("number of vertices={}".format(self.V)) print(self.graph) print("indegree={}".format(self.indeg0)) print("outdegree={}".format(self.outdeg0)) print("edges of the graph......") print(self.edges) print("durations of tasks") print(self.durations) def dfs(self,s): k=0 #append the node in path #and set visited self.path.append(s) self.visited[s]=True # Path started with a node # having in-degree 0 and # current node has out-degree 0, # print current path if self.outdeg0[s] and self.indeg0[self.path[0]]: print(*self.path) self.taskchains.append(list(self.path)) #for p in self.path: #self.taskchains[k].append(p) #myvalue[k]=self.path # k=k+1 #print(myvalue) # Recursive call to print all paths for node in self.graph[s]: if not self.visited[node]: self.dfs(node) # Remove node from path # and set unvisited #return self.path #print(self.path) #self.taskchains[s]=self.path self.path.pop() self.visited[s]=False #return myvalue def print_all_paths(self): for i in range(self.V): if self.indeg0[i] and self.graph[i]: self.path=[] self.visited=[False]*(self.V+1) self.dfs(i) #print("path={}".format(p)) def addDurations(self,duration_list): self.durations=duration_list def findPathdurations(self,path): total_duration=0 for p in path: total_duration +=self.durations[p-1] return total_duration def findPathCumulativeDurations(self,path): total_duration=0 path_cum=[] for p in path: total_duration +=self.durations[p-1] path_cum.append(total_duration) self.tc_durations.append(path_cum) #return path_cum def findStratDateofTasks(self): for T in range(1,self.V+1): d=self.findMinimumStartDateOfTask(T) print("Task-{} sdate={}".format(T,d)) self.minimum_start_date[T-1]=d def findMinimumStartDateOfTask(self,T): d=self.durations[T-1] duration_list=[] for tc,tcd in zip(self.taskchains,self.tc_durations): if T in tc: index=tc.index(T) sdate=tcd[index]-d+1 duration_list.append(sdate) #print("task found in={}".format(index)) #print("chain={} duration={}".format(tc,tcd)) #print("duration of tasks no={} total_time={}".format(T,d)) max_start_date=max(duration_list) return max_start_date def calculateStartOfAllTasks(self): for chain in self.taskchains: #duration=self.findPathdurations(chain) self.findPathCumulativeDurations(chain) def scheduleTask(self): self.topologicalSort() #self.displayGraph() self.print_all_paths() print(self.taskchains) for chain in self.taskchains: #duration=self.findPathdurations(chain) self.findPathCumulativeDurations(chain) #self.calculateStartOfAllTasks() #print(self.tc_durations) self.findStratDateofTasks() self.findProjectDuration() def createGanttSchedule(self): mytasks=[] for i in range(1,self.V+1): duration=self.durations[i-1] sday=self.minimum_start_date[i-1] bar=GanttBar(sday,duration) mytasks.append(bar) return mytasks def findProjectDuration(self): project_durations=[] for tcd in self.tc_durations: dmax=max(tcd) #print(dmax) project_durations.append(dmax) self.finishDay=max(project_durations) # + #Python program to print topological sorting of a DAG from collections import defaultdict #Class to represent a graph class Graph2: def __init__(self,vertices): self.graph = defaultdict(list) #dictionary containing adjacency List 
self.V = vertices #No. of vertices # function to add an edge to graph def addEdge(self,u,v): self.graph[u].append(v) # A recursive function used by topologicalSort def topologicalSortUtil(self,v,visited,stack): # Mark the current node as visited. visited[v] = True # Recur for all the vertices adjacent to this vertex for i in self.graph[v]: if visited[i] == False: self.topologicalSortUtil(i,visited,stack) # Push current vertex to stack which stores result stack.insert(0,v) # The function to do Topological Sort. It uses recursive # topologicalSortUtil() def topologicalSort(self): # Mark all the vertices as not visited visited = [False]*self.V stack =[] # Call the recursive helper function to store Topological # Sort starting from all vertices one by one for i in range(self.V): if visited[i] == False: self.topologicalSortUtil(i,visited,stack) # Print contents of stack print (stack ) # + g= Graph(6) g.addEdge(5, 2); g.addEdge(5, 0); g.addEdge(4, 0); g.addEdge(4, 1); g.addEdge(2, 3); g.addEdge(3, 1); print ("Following is a Topological Sort of the given graph") g.topologicalSort() # + def text2List(input_text): split_text=input_text.split(',') output_list=[int(x) for x in split_text ] return output_list def buildGraph(input_df): shape=input_df.shape g=Graph(shape[0]) print(shape) for index,row in input_df.iterrows(): v=row['TaskNo'] if row['Predecessor']!=-1: pred_text=str(row['Predecessor']) length=len(pred_text) if length >1: sources=text2List(pred_text) #print(sources) else: sources=[int(pred_text)] #print("sources={}".format(sources)) for u in sources: g.addEdge(u,v) #print("predecosseo no={} value={}".format(length,pred_text)) #print(len(pred_text)) #print(pred_text) duration_list=list(input_df['Duration']) g.addDurations(duration_list) return g # - sheetName='WBS' myframe=pd.read_excel(inputpath,sheet_name=sheetName) myframe.fillna(0,inplace=True) myframe g=buildGraph(myframe) g. 
scheduleTask() print(g.finishDay) mybars=g.createGanttSchedule() g.findProjectDuration() g.findProjectDuration() # + g.topologicalSort() g.displayGraph() g.print_all_paths() # - mybars print(g.durations) for chain in g.taskchains: duration=g.findPathdurations(chain) cum=g.findPathCumulativeDurations(chain) print("tasks={} total duration={}".format(chain,duration)) print("tasks={} cumdistance={}".format(chain, cum)) print( g.tc_durations[1]) max_date=g.findMinimumStartDateOfTask(9) g.findStratDateofTasks() print(g.minimum_start_date) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="irEeFSG-_17e" colab_type="code" colab={} # !pip install tensorflow-gpu # + id="FD7q5Nkl0kvs" colab_type="code" colab={} # !pip install deeppavlov # + id="Jh_PW7dJ-o9x" colab_type="code" colab={} # !python -m deeppavlov install bert_sentiment.json # + id="eJsWV2ndHFRK" colab_type="code" colab={} # !python -m deeppavlov install fasttext_sentiment.json # + id="GX09UGLPEmmN" colab_type="code" colab={} from google.colab import drive drive.mount('/content/drive', force_remount=True) # + id="pjFIKhli8DkZ" colab_type="code" colab={} from deeppavlov import train_model # + id="c73N8OMY9Tnf" colab_type="code" colab={} model = train_model('bert_sentiment.json', download=False) # + id="HQlht50a80DN" colab_type="code" colab={} model = train_model('elmo_sentiment.json') # + id="V4qLxrWdDYNk" colab_type="code" colab={} model = train_model('fasttext_sentiment.json', download=False) # + id="NjqmXDApDDiM" colab_type="code" colab={} # !zip -r /content/model_bert.zip /content/model # + id="vxPuSm7w_zoW" colab_type="code" colab={} # !cp model_bert.zip 'drive/My Drive/Colab Notebooks' # + id="doT7W9-0Eit8" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # README # # --- # # VCP SDKを用いてクラウド上にCoursewareHub環境を構築します。 # ## 変更履歴 # # ### [20.04.0](https://github.com/nii-gakunin-cloud/ocs-templates/releases/tag/release%2F20.04.0)からの変更点 # # # 主に[NII-cloud-operation/CoursewareHub-LC_platform](https://github.com/NII-cloud-operation/CoursewareHub-LC_platform)の [#14](https://github.com/NII-cloud-operation/CoursewareHub-LC_platform/pull/14)から[#25](https://github.com/NII-cloud-operation/CoursewareHub-LC_platform/pull/25)までの変更内容を取り込んでいます。 # # 追加された主要な機能を以下に記します。 # # * 学認連携に基づく認証 # * LMS(Learning Management System)との認証連携 # * managerノードとNFSサーバを別ノードとして構成することを選択できるようになった # * single-userコンテナに割り当てるリソース量をグループや認可タイプ毎に設定できるようになった # ## はじめに # # このアプリケーションテンプレートではVCPで作成したノードに[CoursewareHub](https://github.com/NII-cloud-operation/CoursewareHub-LC_jupyterhub-deploy)環境を構築します。 # ### CoursewareHubのユーザ認証について # # CoursewareHubではユーザの認証機能として以下に示す三つの方式に対応しています。 # # * ローカルユーザ認証 # * CoursewareHubのローカルユーザデータベースを用いてユーザ管理を行う # * 学認連携に基づく認証 # * [学認](https://www.gakunin.jp/)のSPとして登録し、認証連携を行う # * CoursewareHubを直接SPとしては登録せずに、プロキシ(IdPプロキシ)を経由して連携することも可能 # * LMS(Learning Management System)との認証連携 # * [LTI 1.3](http://www.imsglobal.org/spec/lti/v1p3/)による認証連携を行う # * このテンプレートでは連携するLMSとして[Moodle](https://moodle.org/)を想定している # # それぞれの認証機能は共存することが可能になっています。ただし、学認連携認証を用いる場合はコンテナの構成や設定手順が異なります。そのため、それに応じた異なる構築手順を用意しています。 # # 一方 LMSとの認証連携を行う場合は、まずローカルユーザ認証、あるいは学認連携認証の手順でCoursewareHubを構築してください。その後にLMSとの認証連携の設定を追加する手順となっています。 # ### 
コンテナの構成について # # ローカルユーザ認証のみを用いる場合と、学認連携認証を利用する場合とではコンテナの構成が異なります。ここでは、それぞれのコンテナ構成について記します。 # # CoursewareHubでは、学認連携の有無、あるいは連携方法の違いにより以下に示す方式を選択することができます。 # * ローカルユーザ認証のみを利用する # * 学認フェデレーションに参加し、学認のIdPを利用して認証を行う # - IdP-proxyをSPとして学認に登録し、複数のCoursewareHubがIdP-proxyを通して学認のIdPを利用する # - CoursewareHubを直接SPとして学認に登録する # # それぞれの方式に対応する構成図を以下に示します。 # # #### ローカルユーザ認証のみを利用する場合 # # ![モジュール構成a](images/cw-000-a.png) # # #### IdP-proxyを利用する場合 # # ![モジュール構成b](images/cw-000-b.png) # # #### CoursewareHubを直接SPとして登録する場合 # # ![モジュール構成c](images/cw-000-c.png) # ### ノード構成 # # CoursewareHubのノードは役割に応じて以下のものに分類されます # # * manager # - JupyterHub, auth-proxy, PostgreSQLなどのSystemコンテナを実行するノード # - Docker Swarmのmanagerノードとなる # * worker # - single-user Jupyter notebook serverを実行するノード # - Docker Swarm の workerノードとなる # # CoursewareHubではデータやNotebookなどをノード間で共有するためにNFSを利用します。NFSサーバの配置により以下の3つパターン構成が可能となっています。 # # 1. 構成1 # - managerノードにNFSサーバを配置する # 1. 構成2 # - managerノードとNFSサーバを別のノードとして構成する # 1. 構成3 # - 構成2のNFSサーバに、新たなCoursewareHub環境を追加する構成 # #### 構成1 # # managerノードでNFSサーバを実行します。 # # ![構成1](notebooks/images/cw-011-01.png) # #### 構成2 # # managerノードとNFSサーバを分離し別々のノードとして構築します。 # # ![構成2](notebooks/images/cw-021-01.png) # #### 構成3 # # 構成2のNFSサーバに、新たなCoursewareHub環境を追加します。NFSサーバは複数のCoursewareHub環境で共有されます。 # # ![構成3](notebooks/images/cw-031-01.png) # ### 収容設計について # #### managerノード # * システム用コンテナが実行される # - auth-proxyコンテナ # - JupyterHubコンテナ # - PostgreSQLコンテナ # * ユーザが利用する single-userサーバコンテナは実行されない # * NFSサーバをmanagerノードに共存させる場合(構成1)はディスク容量を適切な値に設定する # #### workerノード # * ユーザが利用するsingle-userコンテナが実行される # * single-userコンテナのリソース量として以下の設定を行っている # - 最大CPU利用数 # - 最大メモリ量(GB) # - 保証される最小割当てメモリ量(GB) # * システム全体で必要となるリソース量を見積もるには # - (コンテナに割り当てるリソース量)×(最大同時使用人数)+(システムが利用するリソース量)×(ノード数) # #### 運用例 # * 最大同時使用人数 # - 400 人 # * コンテナに割り当てるリソース量 # - メモリ最小値保証 # - 1GB # - メモリ最大値制限 # - 2GB(swap 4GB) # - CPU最大値制限 # - 200% (2cores) # # 上記の条件で運用を行った際の実績値を示します。 # * managerノード # - vCPU # - 10 # - memory # - 16GB # - HDD # - 800GB # * workerノード # - ノードA # - ノード数 # - 4 # - vCPU # - 30 # - memory # - 100GB # - HDD # - 300GB # - ノードB # - ノード数 # - 1 # - vCPU # - 20 # - memory # - 80GB # - HDD # - 300GB # # > workerノードはリソース量の異なるノードAとノードBで構成されていた。 # # workerノードのメモリ総量は480GB(=100×4+80)となっていますが、これは以下のように見積もっています。 # ``` # (コンテナのメモリ最小値保証)×(最大同時使用人数)+(システム利用分) # = 1GB × 400人 + 80GB # ``` # ## 事前に準備が必要となるものについて # # このアプリケーションテンプレートを実行するにあたって事前に準備が必要となるものについて記します。 # ### VCノード # # ノードを作成するとき必要となるものについて記します。 # * VCCアクセストークン # - VCP SDKを利用してクラウド環境にノード作成などを行うために必要となります # - VCCアクセストークンがない場合はVC管理者に発行を依頼してください # * SSH公開鍵ペア # - VCノードに登録するSSHの公開鍵 # - このNotebook環境内で新たに作成するか、事前に作成したものをこの環境にアップロードしておいてください # * VCノードに割り当てるアドレス # - ノードのネットワークインタフェースに以下に示す何れかのアドレスを指定することができます # - IPアドレス # - MACアドレス # * NTPの設定 # - 学認フェデレーションに参加し SAML 認証を利用する場合、正しい時刻設定が必要となります # - VCノードのNTPサービスを有効にするためには、事前にVCコントローラへの設定が必要となります # - VCコントローラへの設定にはOCS運用担当者への申請が必要となります # ### CoursewareHub # # CoursewareHub環境を構築する際に必要となるものについて記します。 # * CoursewareHubのサーバ証明書 # - CoursewareHubではHTTPSでサーバを公開するため、サーバ証明書とその秘密鍵が必要となります # - 必要に応じて、サーバ証明書の中間CA証明書を準備してください # - サーバ証明書に記載されているホスト名のDNS登録も必要となります # # また事前の段階では不要ですが、学認のIdPを認証に利用する場合は構築手順の過程で # 学認フェデレーションに参加の申請を行う必要があります。 # ### IdP-proxy # # IdP-proxy を構築する際に必要となるものについて記します。 # * IdP-proxyのサーバ証明書 # - IdP-proxyではHTTPSでサーバを公開するため、サーバ証明書とその秘密鍵が必要となります # - 必要に応じて、サーバ証明書の中間CA証明書を準備してください # - サーバ証明書に記載されているホスト名のDNS登録も必要となります # # また事前の段階では不要ですが、学認のIdPを認証に利用する場合は構築手順の過程で # 学認フェデレーションに参加の申請を行う必要があります。 # ## Notebookの一覧 # # テンプレートのNotebook一覧を示します。 # **注意**: # # 
この節ではテンプレートのNotebookへのリンクを示す箇所がありますが、リンク先のNotebookは参照用となっていて**そのままでは実行できません**。 # # > Notebook自体は実行できてしまいますが、パスなどが想定しているものと異なるため正しく処理できずエラーとなります。 # # 次のどちらかの手順で作業用Notebookを作成する必要があります。 # # * 次節の「作業用Notebookの作成」で作業用のNotebookを作成する。 # * テンプレートのNotebookを配置してある `notebooks/` から、この`000-README.ipynb`と同じディレクトリにNotebookをコピーする。 # ### 各Notebookの関連について # # 各Notebookの実行順序などの関係性を示す図を表示します。 # 次のセルを実行すると、各Notebookの関連を示す図を表示します。 # # > 図が表示されずに `` と表示されている場合は、次のセルを `unfreeze` した後に再実行してください。 # # 図に表示される1つのブロックが1つのNotebookに対応しており、ブロックのタイトル部分にNotebookへのリンクが埋め込まれています。 # #### ローカルユーザ認証のみを利用する場合 from IPython.display import SVG # %run scripts/nb_utils.py setup_diag() SVG(filename=generate_svg_diag(diag='images/notebooks-a.diag')) # #### IdP-proxyを利用する場合 from IPython.display import SVG # %run scripts/nb_utils.py setup_diag() SVG(filename=generate_svg_diag(diag='images/notebooks-b.diag')) # #### CoursewareHubを直接SPとして登録する場合 # from IPython.display import SVG # %run scripts/nb_utils.py setup_diag() SVG(filename=generate_svg_diag(diag='images/notebooks-c.diag')) # ### 各Notebookの目次 # 次のセルを実行すると、各Notebookの目次が表示されます。 # # > 目次が表示されずに `` と表示されている場合は、次のセルを `unfreeze` した後に再実行してください。 # # リンクが表示されている項目が一つのNotebookに対応しており、そのサブ項目が各Notebook内の目次になっています。 from IPython.display import Markdown # %run scripts/nb_utils.py Markdown(notebooks_toc()) # ## 作業用Notebookの作成 # # この節のセルを実行することで、テンプレートのNotebookから作業用Notebookを作成することができます。 # まず、作業用Notebookを配置するディレクトリを指定してください。 WORK_DIR = 'work' # 以下のセルを実行すると、Notebook名のドロップダウンリストと「作業開始」ボタンが現れます。 # 「作業開始」ボタンを押すと、テンプレートのNotebookをコピーし、そのNotebookを自動的にブラウザで開きます。 # Notebookの説明を確認しながら実行、適宜修正しながら実行していってください。 # # > このNotebookを Shutdown した後に再度開いた場合、次のセルに既に表示されている「作用開始」ボタンが正しく動作しません。この節のセルをいったん unfreeze した後、セルを再実行してから「作業開始」ボタンをクリックして下さい。 # # 認証方式毎に実行するNotebookが異なるので、対応する節のセルを実行してください。 # ### ローカルユーザ認証のみを利用する場合 from IPython.core.display import HTML # %run scripts/nb_utils.py setup_nb_workdir(WORK_DIR) HTML(generate_html_work_nbs(WORK_DIR, nb_group='group-a')) # ### IdP-proxyを利用する場合 # #### IdP-proxyの構築 from IPython.core.display import HTML # %run scripts/nb_utils.py setup_nb_workdir(WORK_DIR) HTML(generate_html_work_nbs(WORK_DIR, nb_group='group-b0')) # #### CoursewareHubの構築 from IPython.core.display import HTML # %run scripts/nb_utils.py setup_nb_workdir(WORK_DIR) HTML(generate_html_work_nbs(WORK_DIR, nb_group='group-b1')) # ### CoursewareHubを直接SPとして登録する場合 from IPython.core.display import HTML # %run scripts/nb_utils.py setup_nb_workdir(WORK_DIR) HTML(generate_html_work_nbs(WORK_DIR, nb_group='group-c')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="bk0F5gIMV6v6" # # The Sisters Of Mercy - Covers # # I analysed 987 covers of the Sisters from 818 unique artists across Spotify, Bandcamp, Soundcloud and YouTube. # # The TL;DR: # # * Most covered songs: **Lucretia** by a length, then **Alice** and **Marian**. # * Most covered band line-up: **Floodland**, then **FALAA** and **Reptile House** jostling for second place. 
# * Busiest years for covers: **2021**, **2019** and **2020** # * Genre: **53%** are covers in the style of the original material, then **Electronic** 15%, **Metal** 12%, **Acoustic** 9% # * Most likely to be covered by dodgy darklings: **Alice**, **Body Electric**, **Dominion** # * Most likely to be covered by techno bois: **Dominion**, **Body Electric**, **Lucretia** # * Most likely to be covered by angry metalheads: **No Time To Cry**, **This Corrosion**, **Black Planet** # * Most likely to be an acoustic/unplugged cover: **When You Don't See Me**, **Something Fast**, **Marian** # * Most diverse genres of covers: **This Corrosion**, **Temple Of Love** # * Least diverse genres of covers: **Something Fast**, **Alice**, **Body Electric** # * Most prolific cover artists: ****, followed by **nmacog** and **botchandango** # * No covers: **Watch** and **Phantom**, also **Driver**, **Wide Receiver**, **Better Reptile**, **I Will Call You** # + id="R02X_qRSmSdW" colab={"base_uri": "https://localhost:8080/"} outputId="23cdd877-7ea4-4e57-f338-b41d4ea1eec0" pip install requests pandas matplotlib # + id="CyTxCZpCnWYd" #from google.colab import drive #drive.mount('/content/drive') # #!cp drive/'My Drive'/Music/tsom/tsom_covers.csv . # #!ls -la # + colab={"base_uri": "https://localhost:8080/"} id="qhv25C9ZNDoW" outputId="2e0451d6-3d07-4d4a-d9d7-9a1e35f8b4d2" import requests download_url = "https://raw.githubusercontent.com/alan-mcl/tsom-covers/main/tsom_covers.csv" target_csv_path = "tsom_covers.csv" response = requests.get(download_url) response.raise_for_status() # Check that the request was successful with open(target_csv_path, "wb") as f: f.write(response.content) print("Download ready.") # !ls -la # #!cat tsom_covers.csv # + id="Sd3rV4eWn0UH" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="af58ec17-f6c6-4022-c393-277f9182d492" import pandas as pd temp = pd.read_csv("tsom_covers.csv", error_bad_lines=True) tsom = temp.copy() print(type(tsom)) print(len(tsom)) print(tsom.shape) pd.set_option("display.max_rows", None) tsom.head() # + id="Yivpet0gqwXb" colab={"base_uri": "https://localhost:8080/"} outputId="3fc8e485-9eff-451d-a0f0-2c5a2c09c2fe" tsom.info() # + id="KXxw0kr2rMqr" colab={"base_uri": "https://localhost:8080/", "height": 274} outputId="78faf66a-b7b8-4790-e3e2-32337588bfb5" tsom.describe(include=object) # + id="Xi2JvWx0yOUX" all_titles = ["1959", "A Rock And A Hard Place", "Adrenochrome", "Afterhours", "Alice", "Amphetamine Logic", "Anaconda", "Arms", "Better Reptile", "Black Planet", "Black Sail", "Blood Money", "Body And Soul", "Body Electric", "Burn", "Bury Me Deep", "But Genevieve", "Colours", "Come Together", "Crash And Burn", "Detonation Boulevard", "Doctor Jeep", "Dominion", "Driven Like The Snow", "Driver", "Finland Red, Egypt White", "First And Last And Always", "Fix", "Flood I", "Flood II", "Floorshow", "Giving Ground", "Good Things", "Heartland", "I Have Slept With All The Girls In Berlin", "I Was Wrong", "I Will Call You", "Jihad", "Kiss The Carpet", "Lights", "Lucretia, My Reflection", "Marian", "Mother Russia", "More", "Never Land", "Nine While Nine", "No Time To Cry", "On The Wire", "Phantom", "Poison Door", "Possession", "Rain From Heaven", "Ribbons", "Romeo Down", "Show Me", "Some Kind Of Stranger", "Something Fast", "Still", "Summer", "Temple Of Love", "The Damage Done", "This Corrosion", "Torch", "Train", "Under The Gun", "Valentine", "Vision Thing", "Walk Away", "War On Drugs", "Watch", "We Are The Same, Susanne", "When You Don't See Me", "Wide 
Receiver", "Will I Dream?", "You Could Be The One"] lineup_map = { \ "1959" : "Floodland", "A Rock And A Hard Place" : "FALAA", "Adrenochrome" : "Reptile House", "Afterhours" : "FALAA", "Alice" : "Reptile House", "Amphetamine Logic" : "FALAA", "Anaconda" : "Reptile House", "Arms" : "Post-VT Live Only", "Better Reptile" : "Post-VT Live Only", "Black Planet" : "FALAA", "Black Sail" : "Post-VT Live Only", "Blood Money" : "FALAA", "Body And Soul" : "FALAA", "Body Electric" : "Reptile House", "Burn" : "Reptile House", "Bury Me Deep" : "FALAA", "But Genevieve" : "Post-VT Live Only", "Colours" : "Floodland", "Come Together" : "Post-VT Live Only", "Crash And Burn" : "Post-VT Live Only", "Detonation Boulevard" : "Vision Thing", "Doctor Jeep" : "Vision Thing", "Dominion" : "Floodland", "Driven Like The Snow" : "Floodland", "Driver" : "Reptile House", "Finland Red, Egypt White" : "Floodland", "First And Last And Always" : "FALAA", "Fix" : "Reptile House", "Flood I" : "Floodland", "Flood II" : "Floodland", "Floorshow" : "Reptile House", "Giving Ground" : "Floodland", "Good Things" : "Reptile House", "Heartland" : "Reptile House", "I Have Slept With All The Girls In Berlin" : "Post-VT Live Only", "I Was Wrong" : "Vision Thing", "I Will Call You" : "Post-VT Live Only", "Jihad" : "Floodland", "Kiss The Carpet" : "Reptile House", "Lights" : "Reptile House", "Lucretia, My Reflection" : "Floodland", "Marian" : "FALAA", "" : "Floodland", "More" : "Vision Thing", "Never Land" : "Floodland", "Nine While Nine" : "FALAA", "No Time To Cry" : "FALAA", "On The Wire" : "FALAA", "Phantom" : "Reptile House", "Poison Door" : "FALAA", "Possession" : "FALAA", "Rain From Heaven" : "Floodland", "Ribbons" : "Vision Thing", "Romeo Down" : "Post-VT Live Only", "Show Me" : "Post-VT Live Only", "Some Kind Of Stranger" : "FALAA", "Something Fast" : "Vision Thing", "Still" : "Post-VT Live Only", "Summer" : "Post-VT Live Only", "Temple Of Love" : "Reptile House", "The Damage Done" : "Damage/Watch", "This Corrosion" : "Floodland", "Torch" : "Floodland", "Train" : "FALAA", "Under The Gun" : "Post-VT Live Only", "Valentine" : "Reptile House", "Vision Thing" : "Vision Thing", "Walk Away" : "FALAA", "War On Drugs" : "Post-VT Live Only", "Watch" : "Damage/Watch", "We Are The Same, Susanne" : "Post-VT Live Only", "When You Don't See Me" : "Vision Thing", "Wide Receiver" : "FALAA", "Will I Dream?" : "Post-VT Live Only", "You Could Be The One" : "Vision Thing", } # + id="GxyStYdSxNyS" tsom["Title"] = pd.Categorical(tsom["Title"], categories=all_titles) tsom["ArtistGenre"] = pd.Categorical(tsom["ArtistGenre"]) tsom["CoverType"] = pd.Categorical(tsom["CoverType"]) # + id="5VNM12Ef9Yin" tsom["lineup"] = tsom["Title"].map(lineup_map) # + [markdown] id="ny6aFTrpF9l5" # # Methodology # # Basically, I did a lot of detailed searching on each platform. I worked through the list of released and unreleased TSOM and Sisterhood tracks making queries for covers of each. I tried a variety of alternate names for the track and band where applicable. # # Where the platforms supported public playlists, and there are Sisters playlists, I perused all of those for applicable covers. # # Finally, I dug through old Heartland threads about covers and picked up some material that I missed, although often old links were dead. # # "Lucretia, My Reflection" has the most variety of annoying incorrect spellings, although I didn't record that data. Bandcamp was most irritating platform because it doesn't support playlists. 
#
# Despite all this I'm sure I missed some material and expect to be loudly informed about it.
#
# On all platforms I **disqualified** the following material as a "cover" and excluded it from the data set:
#
# * Live or studio bootlegs of the real band.
# * Rips of released material.
# * Techno remixes of original material.
# * Vocal covers over original material (please don't ever do this on YouTube).
# * Mashups of original material (sorry Project Kiss Kass).
# * Covers of the Sisters' *own* covers (e.g. Emma; more common than you'd think).
# * Solo instrumental parts (sorry Wayne and all the YouTube bassists).
# * Speed Kings and NME.
# * Music from those *goddam* karaoke vendors.
# * Covers of that Leonard Cohen song.
# * Abrimaal.

# + [markdown] id="k5EuzVQBGDDr"
# # Songs
#
# Let's look at total covers.
#
# Everyone and his dog has covered **Lucretia** (154), and this *after* I excluded countless "watch me play the Lucretia bassline" YouTube videos. **Alice** (95) and **Marian** (71) are clear second and third places, then **Temple Of Love** (59) and **This Corrosion** (58) are neck and neck. **No Time To Cry** (28) may be a surprise at number 6. **First And Last And Always** (27), **Dominion** (27), **Black Planet** (24) and **Something Fast** (23) round out the top 10.
#
# **Driven Like The Snow** (17) and **1959** (15) make strong showings for songs that have never been performed in the live set, speaking to the enduring strength of the Floodland material.
#
# For one of the better songs on its album, **Possession** (3) has surprisingly few covers. Is this one hard to play, or hard to sing? The same could be said about **Ribbons** (6).
#
# The Steinman collaborations (This Corrosion, Dominion/Mother Russia and More) account for 10.6% of the total covers.
#
# Of the released material, I didn't find any covers of **Watch** and **Phantom**. I'm not surprised by the latter. But Watch deserves a cover or two - it lasted longer in the live set than Damage and the later iterations were better than the released single.
#
# Including unreleased material: **Driver**, **Wide Receiver**, **Better Reptile** and **I Will Call You** are not represented in the data set (I didn't consider Body Politic and Burn It Down to be unique tracks). Wide Receiver is garbage, but Driver is a good song and I was surprised not to find any covers. For the two recent songs I guess we need Von to tell us the actual lyrics before anyone tries them. Maybe a cover that embraces the indistinct mumbling is the way to go?

# + colab={"base_uri": "https://localhost:8080/"} id="XbZsg7yesYH0" outputId="3087f97d-8879-418e-a534-ec6985131fc2"
tsom["Title"].value_counts()

# + colab={"base_uri": "https://localhost:8080/"} id="fV5wQbdJGSl8" outputId="e6539139-6cad-465e-f7ec-442bb8ff9e74"
tsom["Title"].value_counts(normalize=True).mul(100).round(1).astype(str) + '%'

# + [markdown] id="6BwzgMMZMBrx"
# # Genres
# I assigned a genre to each cover after listening to it. This part of the analysis is obviously somewhat subjective, so take it with a pinch of salt.
#
# "Straight" covers are those done in the style of the original material; otherwise I picked from broad genre buckets.
#
# 53% are **Straight** covers. These are all the cover bands and tribute acts, many of the fan-made amateur covers, and a variety of professional dodgy darklings. And techno fans of The Sisterhood.
#
# Nearly 15% are from some "**Electronic**" genre (when the original track isn't). No, I don't care about the difference between your acid house vs synth pop. "Chiptune"-style covers are in this bucket too.
#
# "**Metal**" is next at 12%. I did initially break out a couple of subgenres, mostly because I am more familiar with the landscape here. Eventually I just rolled them up into the Metal bucket.
#
# "**Acoustic**" is fourth at 9%. I included unplugged guitar, piano, and random unpowered instruments like ukulele, violin etc. in here.
#
# "**Alternative**" (6%) is the last significant category and is basically defined by the "alternative" recommendations that Bandcamp sends me weekly. A lot of artists self-identify as some kind of "alternative" too, so this isn't completely subjective.
#
# Industrial, A Capella, Punk, Country, Hip Hop, LoFi, Reggae, Soul and Jazz represent the long tail of genres with very few covers each.

# + colab={"base_uri": "https://localhost:8080/"} id="9C--qgOOuSq6" outputId="98ad1878-f4c6-4057-ccc9-a5c2dfa1d125"
tsom["CoverType"].value_counts(normalize=True).mul(100).round(1).astype(str) + '%'

# + [markdown] id="Q95dVIvJaTS8"
# # Genre by Song
#
# Let's dig deeper into the tracks with 20 or more covers.
#
# **Something Fast** has the most Straight covers (70%). Basically everyone in a lockdown with a guitar and a laptop drum machine put a cover of this on YouTube.
#
# Discounting that, **Alice** (64%), **Body Electric** (62%) and **Dominion** (52%) are most likely to be in the setlist of a tribute band near you. I am surprised **Marian** (51%) is as low as fourth. **No Time To Cry** has the fewest Straight covers at 21%.
#
# **Dominion** (26%), **Body Electric** (24%) and **Lucretia** (23%) are beloved of the Electronic music community. **Something Fast** has zero techno covers even though its name has some amusing synergy with the genre.
#
# Fully 42% of **No Time To Cry** covers are Metal. There is a cluster of metalheads who have redone the well-known Cradle Of Filth cover (not all of whom seem to realise it's originally from the Sisters), making it arguably the most influential Sisters cover in any genre. **This Corrosion** (28%), **Black Planet** (25%) and **More** (25%) are other favourites in this genre. **Dominion** (4%) has the fewest metal covers.
#
# People with a guitar and an emo streak reach for **When You Don't See Me** - 23% of its covers are Acoustic. **Something Fast** (22%) and **Marian** (18%) are obvious acoustic covers, **This Corrosion** (17%) maybe less so at #5. **Body Electric** has no acoustic covers and that is something I'd like to hear.
# + colab={"base_uri": "https://localhost:8080/"} id="N8ZEJnc_UVcL" outputId="a5981984-d856-46c3-fda9-3fed1df2d5e6" filtered = tsom.groupby("Title").filter(lambda x : len(x) >= 20) print("Straight covers") print("-------------------") print((filtered[filtered["CoverType"]=="Straight"]["Title"].value_counts() / filtered["Title"].value_counts() *100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nElectronic covers") print("-------------------") print((filtered[filtered["CoverType"]=="Electronic"]["Title"].value_counts() / filtered["Title"].value_counts() *100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nMetal covers") print("-------------------") print ((filtered[filtered["CoverType"]=="Metal"]["Title"].value_counts() / filtered["Title"].value_counts() *100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nAcoustic covers") print("-------------------") print((filtered[filtered["CoverType"]=="Acoustic"]["Title"].value_counts() / filtered["Title"].value_counts() *100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') # + [markdown] id="vevpuqRj5dlZ" # # Diversity # # I hacked up a "diversity index" for each song by using the sum of squares of the proportions of each genre per song, inverted because big numbers are better. # # Once again taking the songs with more than 20 total covers, **This Corrosion** stands out with the most diverse group of covers. Something about it's combination of cocky beats, relentless groove and over-the-top angst clearly appeals to a wide range of artists. # # **Temple Of Love** is in a clear second place, also attracting the attention of a wide variety of musical styles. **Black Planet**, **First And Last And Always** and **When You Don't See Me** round out the "diversity" top 5. # # **Something Fast** props up the bottom of the list because all it's covers are either straight or acoustic (i.e. straight without drums). **Alice** and **Body Electric** are a predictably second and third least diverse. # + colab={"base_uri": "https://localhost:8080/"} id="TN7prHUC5cMJ" outputId="dfecae30-69e2-4787-ccb4-80b78ef95c6b" filtered = tsom.groupby("Title").filter(lambda x : len(x) >= 20) (filtered.groupby(["Title", "CoverType"])["CoverType"].count()/tsom.groupby(["Title"])["Title"].count()).pow(2).groupby("Title").sum().pow(0.5).pow(-1).sort_values(ascending=False) # + [markdown] id="GgfuF70L1SRN" # # Line-up # Sorting total covers by era, the **Floodland** "line-up" (314) is well out ahead as the most covered, mostly due to the volume of Lucretia (note that I included The Sisterhood tracks in this line-up bucket which may not be entirely fair). **FALAA** (266) and **Reptile House** (263)are pretty even behind that. Despite being an actual live line-up the **Vision Thing** songs (112) are a lot less popular with covering artists. One could make a case for splitting the post-VT live line-ups by lead guitarist or something but I couldn't be arsed. # # All line-ups have roughly the same proportion of straight covers. Floodland unsurprisingly has the highest proportion of Electronic covers (18%). Vision Thing has both the most Acoustic (16%) and Metal (13%) covers, which might speak to its reach outside the traditional Sisters fanbase. 
# + colab={"base_uri": "https://localhost:8080/"} id="TNwtz1HwAgjr" outputId="5944efc9-82e6-4440-d042-aef9612f93a4" tsom["lineup"].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="PHe5jwpw2Man" outputId="a4821956-7e74-4555-d72d-c92e08b8c510" filtered = tsom print("\nFloodland") print("-------------------") print((filtered[filtered["lineup"]=="Floodland"].groupby("CoverType")["Title"].count()/filtered[filtered["lineup"]=="Floodland"]["Title"].count()*100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nFALAA") print("-------------------") print((filtered[filtered["lineup"]=="FALAA"].groupby("CoverType")["Title"].count()/filtered[filtered["lineup"]=="FALAA"]["Title"].count()*100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nReptile House") print("-------------------") print((filtered[filtered["lineup"]=="Reptile House"].groupby("CoverType")["Title"].count()/filtered[filtered["lineup"]=="Reptile House"]["Title"].count()*100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') print("\nVision Thing") print("-------------------") print((filtered[filtered["lineup"]=="Vision Thing"].groupby("CoverType")["Title"].count()/filtered[filtered["lineup"]=="Vision Thing"]["Title"].count()*100).sort_values(ascending=False).head(20).round(1).astype(str) + '%') # + id="E1pruidmu-BY" colab={"base_uri": "https://localhost:8080/"} outputId="fc717f67-8c8a-4f87-8839-e43ce7e812c2" tsom["Year"].value_counts(normalize=True).mul(100).round(1).astype(str) + '%' # + [markdown] id="LDrQ2zSKOVKO" # # Artist # # The median Sisters covers per artist is 1. Most people are happy doing one Sisters cover and then moving on with their lives. The mean is 1.2 but that is propped up by some outliers at the top. # # 81 artists covered 2 or more songs. **** (20) is the most prolific. **nmacog** (15) and **botchandango** (10) are the others in double digits. Look these guys up on Soundcloud and YouTube. # # This data doesn't represent the tribute bands very well because of the limited vids of them that exist on YouTube. # # # + colab={"base_uri": "https://localhost:8080/"} id="AWDgy8MuOgdQ" outputId="49c70a4a-2d93-4332-a334-cf6e70bcae99" tsom["Artist"].value_counts().describe() # + colab={"base_uri": "https://localhost:8080/"} id="GxsqeF-dFXsi" outputId="e1a3984f-1c74-43b0-becf-15bddb2e89ed" print(tsom["Artist"].value_counts()[lambda x : x>=2].count()) tsom["Artist"].value_counts().head(10) # + [markdown] id="qD6o_qJVRwo8" # # Year # # **2021** was the biggest year by far for Sisters covers (140), followed by **2019** (101) and **2020** (92). In general, the number of Sisters covers each year has grown almost every year that I have data for, with a noticeable spike in 2000 when that unspeakable tribute album came out. # # Some of this is recency bias: more recent covers are easier to find and more likely to still be available on the platforms. But the pandemic effects of the last few years and the increasing ease of making, recording and publishing music online certainly have a role too. # # # + colab={"base_uri": "https://localhost:8080/"} id="VduZwvZMzjzZ" outputId="4bab494c-26a6-43aa-cd52-b627b80bb456" tsom.groupby("Year")["Year"].count() # + [markdown] id="OmytVmohe0gh" # # Platform # # **Spotify** has the least covers (71). These are mostly established professional artists with record company deals. **Bandcamp** (80) has only slightly more. I suppose these artists are also professional but less established and/or successful. 
# # **SoundCloud** (395) has lots of covers. Most are solidly in the realm of amateur (disclosure: my own included), or at least "musician isn't my only job" artists. # # **YouTube** (619) has the most covers. This is basically the full gamut from pros who are also on Spotify, to less established bands, to talented amateurs, to your uncle Bob and his dog covering Something Fast. # # There was less overlap than I expected. 84% of covers were on one platform only and 13% were on two platforms. I found only *3* total covers on all four platforms - shout out to the marketing teams for Cradle Of Filth, The Court Of Sybaris and Sleepmask. # # Rehashing the diversity index (heh) suggests that **Bandcamp** hosts the least diversity of covers, which is disappointing. For whatever reason, the covers there are largely straight derivatives of the original material. **Spotify** is only a little better. I expected **SoundCloud** to top the diversity stakes but that honour is held by **YouTube**, even after applying my strict filtering criteria. # + colab={"base_uri": "https://localhost:8080/"} id="nZUana7qgLoW" outputId="c443ee90-88c6-4536-da57-5301d09bfd8b" tsom.filter(["spotify", "bandcamp", "soundcloud", "youtube"]).sum() # + colab={"base_uri": "https://localhost:8080/"} id="J3VYmeF6kGU4" outputId="8157d1eb-42f0-4c44-8fcc-1db3c41f506f" tsom["platforms"] = tsom["spotify"] + tsom["bandcamp"] + tsom["soundcloud"] + tsom["youtube"] print(tsom.groupby("platforms")["Title"].count()) tsom.groupby("platforms")["Title"].count() / len(tsom) *100 # + colab={"base_uri": "https://localhost:8080/"} id="D_rpWMHHKSni" outputId="8bef9587-2317-4cf5-9b06-b3b8237f614e" filtered = tsom[tsom["spotify"] == 1] print((((filtered.groupby(["CoverType"])["CoverType"].count()/len(filtered)).pow(2).sum()) ** 0.5) ** -1 *100) filtered = tsom[tsom["bandcamp"] == 1] print((((filtered.groupby(["CoverType"])["CoverType"].count()/len(filtered)).pow(2).sum()) ** 0.5) ** -1 *100) filtered = tsom[tsom["soundcloud"] == 1] print((((filtered.groupby(["CoverType"])["CoverType"].count()/len(filtered)).pow(2).sum()) ** 0.5) ** -1 *100) filtered = tsom[tsom["youtube"] == 1] print((((filtered.groupby(["CoverType"])["CoverType"].count()/len(filtered)).pow(2).sum()) ** 0.5) ** -1 *100) # + [markdown] id="q9mmemxSeawh" # # Conclusion # # Other than concluding that I spent way too much time on this, I will leave further interpretation up to the reader. It would be interesting to compare Sisters covers to those from other bands, but it sure ain't gonna be me who collects that data. # # I will follow this up with a post listing some of my favourite covers discovered during this exercise. # # PM me if you want to get hold of the data set and the Jupyter notebook (I used Google Colab and Pandas).
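# + [markdown]
# # Appendix: diversity index sketch
# A minimal, self-contained illustration of the "diversity index" used in the Diversity and Platform sections above: square the genre proportions within a group, sum them, take the square root and invert, so a more even genre mix scores higher. The toy data below is invented purely to show the calculation and is not from the covers data set.

# +
import pandas as pd

toy = pd.DataFrame({
    "Title": ["Song A"] * 4 + ["Song B"] * 4,
    "CoverType": ["Straight", "Metal", "Electronic", "Acoustic",   # evenly spread genres
                  "Straight", "Straight", "Straight", "Metal"],    # mostly Straight
})

# Proportion of each genre within each song
props = toy.groupby("Title")["CoverType"].value_counts(normalize=True)

# Inverse of the root sum of squared proportions: bigger = more diverse
diversity = props.pow(2).groupby("Title").sum().pow(0.5).pow(-1)
print(diversity.sort_values(ascending=False))  # Song A (2.0) beats Song B (~1.26)
# -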
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import warnings warnings.filterwarnings('ignore') # ### Train/Test split already done # + #from sklearn.cross_validation import train_test_split # create 80%-20% train-test split #X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5555) # - twoD6_test = pd.read_csv("data/test2d6.csv", index_col='SID') twoD6_train = pd.read_csv("data/training2d6.csv", index_col='SID') twoD6_train.head() # + col_names2D6 = twoD6_train.columns.tolist() print('Column names:') print(col_names2D6) # + # Isolate response variable ActivityScore = twoD6_train['ActivityScore'] y_train = np.where(ActivityScore >= 40,1,0) ActivityScore2 = twoD6_test['ActivityScore'] y_test = np.where(ActivityScore2 >= 40,1,0) # - # looks right sized y_train.shape, y_test.shape y_test # We don't need this column anymore to_drop = ['ActivityScore'] inhib_feat_space = twoD6_train.drop(to_drop,axis=1) inhib_feat_space_test = twoD6_test.drop(to_drop,axis=1) # Pull out features for future use features = inhib_feat_space.columns features_test = inhib_feat_space_test.columns X_train = inhib_feat_space.as_matrix().astype(np.float) X_test = inhib_feat_space_test.as_matrix().astype(np.float) X_train.shape, X_test.shape n_pos1 = y_test.sum() n_pos1 n_pos2 = y_train.sum() n_pos2 # + print('Feature space holds '+repr(X_train.shape[0])+' observations and '+repr(X_test.shape[1])+' features') print('Unique target labels: '+repr(np.unique(y_train))) print('Feature space holds '+repr(X_test.shape[0])+' observations and '+repr(X_test.shape[1])+' features') print('Unique target labels: '+repr(np.unique(y_test))) # - X_test.shape[1] # ## Scale the features before training model # This is important from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) # + from sklearn.cross_validation import KFold def run_cv(X,y,clf_class,**kwargs): # Construct a kfolds object kf = KFold(len(y),n_folds=5,shuffle=True) y_pred = y.copy() # Iterate through folds for train_index, test_index in kf: X_train, X_test = X[train_index], X[test_index] y_train = y[train_index] # Initialize a classifier with key word arguments clf = clf_class(**kwargs) clf.fit(X_train,y_train) y_pred[test_index] = clf.predict(X_test) return y_pred # + from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier as RF from sklearn.neighbors import KNeighborsClassifier as KNN def accuracy(y_true,y_pred): # NumPy interpretes True and False as 1. and 0. 
return np.mean(y_true == y_pred) print("K-nearest-neighbors (training set):") print("%.3f" % accuracy(y_train, run_cv(X_train,y_train,KNN))) print("K-nearest-neighbors (test set):") print("%.3f" % accuracy(y_test, run_cv(X_test,y_test,KNN))) print('Support vector machines (training set):') print("%.3f" % accuracy(y_train, run_cv(X_train,y_train,SVC))) print('Support vector machines (test set):') print("%.3f" % accuracy(y_test, run_cv(X_test,y_test,SVC))) print("Random forest (training set):") print("%.3f" % accuracy(y_train, run_cv(X_train,y_train,RF))) print("Random forest (test set):") print("%.3f" % accuracy(y_test, run_cv(X_test,y_test,RF))) # + from sklearn.metrics import confusion_matrix y_train = np.array(y_train) class_names = np.unique(y_train) confusion_matrices_training = [ ( "K-Nearest-Neighbors training", confusion_matrix(y_train,run_cv(X_train,y_train,KNN)) ), ( "Support Vector Machines training", confusion_matrix(y_train,run_cv(X_train,y_train,SVC)) ), ( "Random Forest taining", confusion_matrix(y_train,run_cv(X_train,y_train,RF)) ), ] y_test = np.array(y_test) class_names = np.unique(y_test) confusion_matrices_test = [ ( "K-Nearest-Neighbors test", confusion_matrix(y_test,run_cv(X_test,y_test,KNN)) ), ( "Support Vector Machines test", confusion_matrix(y_test,run_cv(X_test,y_test,SVC)) ), ( "Random Forest test", confusion_matrix(y_test,run_cv(X_test,y_test,RF)) ), ] #draw_confusion_matrices(confusion_matrices,class_names) confusion_matrices_training, confusion_matrices_test # - roc_auc_score(is_churn, pred_churn) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Parameter Sharing # # ![](./images/parameters_01.png) # # The weights, `w`, are shared across patches for a given layer in a CNN to detect the cat above regardless of where in the image it is located. # # When we are trying to classify a picture of a cat, we don’t care where in the image a cat is. If it’s in the top left or the bottom right, it’s still a cat in our eyes. We would like our CNNs to also possess this ability known as translation invariance. How can we achieve this? # # As we saw earlier, **the classification of a given patch in an image is determined by the weights and biases corresponding to that patch**. # # If we want a cat that’s in the top left patch to be classified in the same way as a cat in the bottom right patch, **we need the weights and biases corresponding to those patches to be the same**, so that they are classified the same way. # # This is exactly what we do in CNNs. The weights and biases we learn for a given output layer are shared across all patches in a given input layer. Note that as we increase the depth of our filter, the number of weights and biases we have to learn still increases, as the weights aren't shared across the output channels. # # There’s an additional benefit to sharing our parameters. If we did not reuse the same weights across all patches, we would have to learn new parameters for every single patch and hidden layer neuron pair. This does not scale well, especially for higher fidelity images. Thus, sharing parameters not only helps us with translation invariance, but also gives us a smaller, more scalable model. # # # ### Padding # ![](./images/parameters_02_padding.png) # # Let's say we have a `5x5` grid (as shown above) and a filter of size `3x3` with a stride of `1`. 
What's the width and height of the next layer? # # As we can see, the width and height of each subsequent layer decreases in the above scheme. # # In an ideal world, we'd be able to maintain the same width and height across layers so that we can continue to add layers without worrying about the dimensionality shrinking and so that we have consistency. How might we achieve this? One way is to simply add a border of `0`s to our original `5x5` image. You can see what this looks like in the below image. # # ![](./images/parameters_03_padding.png) # The same grid with `0` padding. # # This would expand our original image to a `7x7`. With this, we now see how our next layer's size is again a `5x5`, keeping our dimensionality consistent. # # # ### Dimensionality # From what we've learned so far, how can we calculate the number of neurons of each layer in our CNN? # # Given: # * our input layer has a width of `W` and a height of `H` # * our convolutional layer has a filter size `F` # * we have a stride of `S` # * a padding of `P` # * and the number of filters `K`, # # # the following formula gives us the width of the next layer: `W_out =[ (W−F+2P)/S] + 1`. # # The output height would be `H_out = [(H-F+2P)/S] + 1`. # # And the **output depth would be equal to the number of filters** `D_out = K`. # # The output volume would be `W_out * H_out * D_out`. # # Knowing the dimensionality of each additional layer helps us understand how large our model is and how our decisions around filter size and stride affect the size of our network. # # # # # ### Quiz: Convolution Output Shape # # ### Setup # H = height, W = width, D = depth # # * We have an input of shape 32x32x3 (HxWxD) # * 20 filters of shape 8x8x3 (HxWxD) # * A stride of 2 for both the height and width (S) # * With padding of size 1 (P) # * Recall the formula for calculating the new height or width: # # ``` # new_height = (input_height - filter_height + 2 * P)/S + 1 # new_width = (input_width - filter_width + 2 * P)/S + 1 # ``` # # What's the shape of the output? # # The answer format is HxWxD, so if you think the new height is 9, new width is 9, and new depth is 5, then type 9x9x5. # # The answer is 14x14x20. # # We can get the new height and width with the formula resulting in: # ``` # (32 - 8 + 2 * 1)/2 + 1 = 14 # (32 - 8 + 2 * 1)/2 + 1 = 14 # ``` # # The new depth is equal to the number of filters, which is 20. # # This would correspond to the following code: # ``` # input = tf.placeholder(tf.float32, (None, 32, 32, 3)) # filter_weights = tf.Variable(tf.truncated_normal((8, 8, 3, 20))) # (height, width, input_depth, output_depth) # filter_bias = tf.Variable(tf.zeros(20)) # strides = [1, 2, 2, 1] # (batch, height, width, depth) # padding = 'SAME' # conv = tf.nn.conv2d(input, filter_weights, strides, padding) + filter_bias # # ``` # # Note the output shape of `conv` will be [1, 16, 16, 20]. It's 4D to account for batch size, but more importantly, it's not [1, 14, 14, 20]. This is because the padding algorithm TensorFlow uses is not exactly the same as the one above. An alternative algorithm is to switch `padding` from `'SAME'` to `'VALID'` which would result in an output shape of [1, 13, 13, 20]. If you're curious how padding works in TensorFlow, read [this document](https://www.tensorflow.org/api_docs/python/tf/nn/convolution). 
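#
# To make the arithmetic above concrete, here is a small helper (not part of the original lesson code) that applies the `(W - F + 2P)/S + 1` formula with explicit padding; for the quiz setup it reproduces the 14x14x20 answer.
#
# ```
# def conv_output_shape(in_h, in_w, filter_h, filter_w, n_filters, stride, pad):
#     # out = (in - filter + 2 * pad) // stride + 1, and depth = number of filters
#     out_h = (in_h - filter_h + 2 * pad) // stride + 1
#     out_w = (in_w - filter_w + 2 * pad) // stride + 1
#     return out_h, out_w, n_filters
#
# print(conv_output_shape(32, 32, 8, 8, 20, stride=2, pad=1))  # (14, 14, 20)
# ```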
# # In summary TensorFlow uses the following equation for 'SAME' vs 'VALID' # # SAME Padding, the output height and width are computed as: # # `out_height` = ceil(float(in_height) / float(strides[1])) # # `out_width` = ceil(float(in_width) / float(strides[2])) # # VALID Padding, the output height and width are computed as: # # `out_height` = ceil(float(in_height - filter_height + 1) / float(strides[1])) # # `out_width` = ceil(float(in_width - filter_width + 1) / float(strides[2])) # # ### Number of Parameters # # We're now going to calculate the number of parameters of the convolutional layer. The answer from the last quiz will come into play here! # # Being able to calculate the number of parameters in a neural network is useful since we want to have control over how much memory a neural network uses. # # #### Setup # H = height, W = width, D = depth # # * We have an input of shape 32x32x3 (HxWxD) # * 20 filters of shape 8x8x3 (HxWxD) # * A stride of 2 for both the height and width (S) # * Zero padding of size 1 (P) # # #### Output Layer # 14x14x20 (HxWxD) # # #### Hint # Without parameter sharing, each neuron in the output layer must connect to each neuron in the filter. In addition, each neuron in the output layer must also connect to a single bias neuron. # # Convolution Layer Parameters 1 # How many parameters does the convolutional layer have (without parameter sharing)? # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: master-env # language: python # name: master-env # --- import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') # %matplotlib inline import os # # Define path variables # + data_root_dpath = os.path.join("..", "data", "OPP-115") processed_data_dpath = os.path.join(data_root_dpath, "processed_data") catsplit_annotations_115_parsed_dpath = os.path.join(processed_data_dpath, "catsplit_annotations_115_parsed") # - # # First Party Collection/Use df = pd.read_csv(os.path.join(catsplit_annotations_115_parsed_dpath, "First_Party_Collection-Use.csv")) df # # Third Party Sharing/Collection # # User Choice/Control # # User Access/Edit & Deletion # # Data Retention # # Data Security # # Policy Change # # Do Not Track # # International & Specific Audiences # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # I first constructed annual CRFs and then calculate the orientation angles between them and ICRF3. # # Here I presented the second method, which is that I use the mean positions of source within one-year windows to construct the annual CRF. 
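# Before reading the pre-built yearly files below, here is a minimal sketch (not part of the
# original pipeline) of what "mean position within a one-year window" means, using an invented
# session-wise table; the source names and coordinates are toy values for illustration only,
# and the sketch uses a plain unweighted mean.

# +
import pandas as pd

sessions = pd.DataFrame({
    "iers_name": ["SRC_A", "SRC_A", "SRC_A", "SRC_B"],
    "year":      [2019, 2019, 2020, 2019],
    "ra":        [15.690, 15.692, 15.691, 112.580],   # degrees (toy values)
    "dec":       [58.403, 58.405, 58.404, -11.687],   # degrees (toy values)
})

# One entry per source and per one-year window: the mean of its session positions
yearly_positions = (sessions.groupby(["year", "iers_name"])[["ra", "dec"]]
                            .mean()
                            .reset_index())
print(yearly_positions)
# -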
# + import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator import numpy as np from astropy.table import Table import astropy.units as u # My progs from my_progs.catalog.read_icrf import read_icrf3 from my_progs.stat_func.rms_calc import rms_calc from my_progs.catalog.pos_diff import radio_cat_diff_calc from tool_func import calc_orient_new # + icrf3sx = read_icrf3(wv="sx") icrf3def = icrf3sx[icrf3sx["type"] == "D"] # + years = np.concatenate(([1984], np.arange(1986, 2021))) # years = np.concatenate((np.arange(1979, 2021))) num = len(years) # - N0 = np.zeros_like(years) # N1 = np.zeros_like(years) pmt = np.empty((num, 8), dtype=float) err = np.empty((num, 8), dtype=float) N0_d = np.zeros_like(years) # N1_d = np.zeros_like(years) pmt_d = np.empty((num, 8), dtype=float) err_d = np.empty((num, 8), dtype=float) # + for i, year in enumerate(years): # print("\nProcessing time series within {:d}-{:d}".format(year, year+1)) ts_sou = Table.read("../data/yearly-ts-nju-10step/{:d}.dat".format(year), format="ascii") # Add unit information ts_sou["ra"].unit = u.deg ts_sou["dec"].unit = u.deg ts_sou["ra_err"].unit = u.mas ts_sou["dec_err"].unit = u.mas # ICRF3 defining sources pos_oft = radio_cat_diff_calc(ts_sou, icrf3def, sou_name="iers_name") # N0_d[i], N1_d[i], pmt_d[i], err_d[i] = calc_orient(pos_oft) N0_d[i], pmt_d[i], err_d[i] = calc_orient_new(pos_oft) # All sources pos_oft = radio_cat_diff_calc(ts_sou, icrf3sx, sou_name="iers_name") # N0[i], N1[i], pmt[i], err[i] = calc_orient(pos_oft) N0[i], pmt[i], err[i] = calc_orient_new(pos_oft) # - # # ICRF3 defining sources only # + fig, ax = plt.subplots() ax.plot(years, N0_d, "b-o", ms=2, lw=0.5, label="$N_0$") # ax.plot(years, N1_d, "r-s", ms=2, lw=0.5, label="$N_1$") # ax.set_yscale("log") ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("Nb sources", fontsize=12) # ax.legend() # + fig, ax = plt.subplots(figsize=(8, 6)) ax.hlines(200, 1979, 2025, ls="dashed", color="b", lw=0.5) ax.hlines(0, 1979, 2025, ls="dashed", color="k", lw=0.5) ax.hlines(-200, 1979, 2025, ls="dashed", color="r", lw=0.5) ax.errorbar(years, pmt_d[:, 3] + 200, yerr=err_d[:, 3], color="blue", ms=3, fmt="-s", elinewidth=0.5, lw=0.2, label="$\\epsilon_x+0.2$ mas", capsize=1) ax.errorbar(years, pmt_d[:, 4], yerr=err_d[:, 4], color="black", ms=3, fmt="-o", elinewidth=0.5, lw=0.2, label="$\\epsilon_y$", capsize=1) ax.errorbar(years, pmt_d[:, 5] - 200, yerr=err_d[:, 5], color="red", ms=3, fmt="-^", elinewidth=0.5, lw=0.2, label="$\\epsilon_x-0.2$ mas", capsize=1) ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("Rotation ($\mathrm{\mu as}$)", fontsize=12) ax.axis([1983, 2022, -400, 400]) # plt.title("Rotation from 10-step solution", fontsize=15) ax.legend() plt.tight_layout() plt.savefig("../plots/orient-from-yearly-ts-nju.eps") # + fig, ax = plt.subplots(figsize=(8, 6)) ax.hlines(200, 1979, 2021, ls="dashed", color="b", lw=0.5) ax.hlines(0, 1979, 2021, ls="dashed", color="k", lw=0.5) ax.hlines(-200, 1979, 2021, ls="dashed", color="r", lw=0.5) ax.errorbar(years, pmt_d[:, 0] + 200, yerr=err_d[:, 0], color="blue", ms=3, fmt="-s", elinewidth=0.5, lw=0.2, label="$g_x+0.2$ mas", capsize=1) ax.errorbar(years, pmt_d[:, 1], yerr=err_d[:, 1], color="black", ms=3, fmt="-o", elinewidth=0.5, lw=0.2, label="$g_y$", capsize=1) ax.errorbar(years, pmt_d[:, 2] - 200, yerr=err_d[:, 2], color="red", ms=3, fmt="-^", elinewidth=0.5, lw=0.2, label="$g_z-0.2$ mas", capsize=1) ax.axis([1982, 2021, -400, 400]) ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("Glide 
angle ($\mathrm{\mu as}$)", fontsize=12) ax.legend() plt.tight_layout() plt.savefig("../plots/glide-from-yearly-ts-nju.eps") # + wx = pmt_d[:, 3] wy = pmt_d[:, 4] wz = pmt_d[:, 5] wx_err = err_d[:, 3] wy_err = err_d[:, 4] wz_err = err_d[:, 5] # + wmean1, wrms1, wstd1 = rms_calc(wx) wmean2, wrms2, wstd2 = rms_calc(wy) wmean3, wrms3, wstd3 = rms_calc(wz) print("Rotation statistics (No weighted)") print(" Mean RMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:4.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:4.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:4.0f} {:.0f}".format(wmean3, wrms3, wstd3)) wmean1, wrms1, wstd1 = rms_calc(wx, wx_err) wmean2, wrms2, wstd2 = rms_calc(wy, wy_err) wmean3, wrms3, wstd3 = rms_calc(wz, wz_err) print("\nRotation statistics (Weighted)") print(" Mean WRMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:.0f} {:.0f}".format(wmean3, wrms3, wstd3)) # - year_start = 1995 mask = (years >= year_start) # + wmean1, wrms1, wstd1 = rms_calc(wx[mask]) wmean2, wrms2, wstd2 = rms_calc(wy[mask]) wmean3, wrms3, wstd3 = rms_calc(wz[mask]) print("Rotation statistics (No weighted, Remove data < {:d})".format(year_start)) print(" Mean RMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:4.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:4.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:4.0f} {:.0f}".format(wmean3, wrms3, wstd3)) wmean1, wrms1, wstd1 = rms_calc(wx[mask], wx_err[mask]) wmean2, wrms2, wstd2 = rms_calc(wy[mask], wy_err[mask]) wmean3, wrms3, wstd3 = rms_calc(wz[mask], wz_err[mask]) print("Rotation statistics (Weighted, Remove data < {:d})".format(year_start)) print(" Mean WRMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:.0f} {:.0f}".format(wmean3, wrms3, wstd3)) # - # # All sources # + fig, ax = plt.subplots() ax.plot(years, N0, "b-o", ms=2, lw=0.5, label="$N_0$") # ax.plot(years, N1, "r-s", ms=2, lw=0.5, label="$N_1$") # ax.set_yscale("log") ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("No. 
sources", fontsize=12) # ax.legend() # + fig, ax = plt.subplots(figsize=(8, 6)) ax.hlines(200, 1984, 2021, ls="dashed", color="b", lw=0.5) ax.hlines(0, 1984, 2021, ls="dashed", color="k", lw=0.5) ax.hlines(-200, 1984, 2021, ls="dashed", color="r", lw=0.5) ax.errorbar(years, pmt[:, 0] + 200, yerr=err[:, 0], color="blue", ms=3, fmt="-s", elinewidth=0.5, lw=0.2, label="$g_x+0.2$ mas", capsize=1) ax.errorbar(years, pmt[:, 1], yerr=err[:, 1], color="black", ms=3, fmt="-o", elinewidth=0.5, lw=0.2, label="$g_y$", capsize=1) ax.errorbar(years, pmt[:, 2] - 200, yerr=err[:, 2], color="red", ms=3, fmt="-^", elinewidth=0.5, lw=0.2, label="$g_z-0.2$ mas", capsize=1) ax.axis([1984, 2021, -400, 400]) ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("Glide ($\mathrm{\mu as}$)", fontsize=12) ax.legend() plt.tight_layout() # plt.savefig("../plots/glide-from-yearly-ts.eps") # + fig, ax = plt.subplots(figsize=(8, 6)) ax.hlines(200, 1984, 2021, ls="dashed", color="b", lw=0.5) ax.hlines(0, 1984, 2021, ls="dashed", color="k", lw=0.5) ax.hlines(-200, 1984, 2021, ls="dashed", color="r", lw=0.5) ax.errorbar(years, pmt[:, 3] + 200, yerr=err[:, 3], color="blue", ms=3, fmt="-s", elinewidth=0.5, lw=0.2, label="$\\epsilon_x+0.2$ mas", capsize=1) ax.errorbar(years, pmt[:, 4], yerr=err[:, 4], color="black", ms=3, fmt="-o", elinewidth=0.5, lw=0.2, label="$\\epsilon_y$", capsize=1) ax.errorbar(years, pmt[:, 5] - 200, yerr=err[:, 5], color="red", ms=3, fmt="-^", elinewidth=0.5, lw=0.2, label="$\\epsilon_z-0.2$ mas", capsize=1) ax.axis([1984, 2021, -400, 400]) ax.set_xlabel("Epoch (year)", fontsize=12) ax.set_ylabel("Orientation angle ($\mathrm{\mu as}$)", fontsize=12) ax.legend() plt.tight_layout() # plt.savefig("../plots/orient-from-yearly-ts.eps") # + wx = pmt[:, 3] wy = pmt[:, 4] wz = pmt[:, 5] wx_err = err[:, 3] wy_err = err[:, 4] wz_err = err[:, 5] # + wmean1, wrms1, wstd1 = rms_calc(wx) wmean2, wrms2, wstd2 = rms_calc(wy) wmean3, wrms3, wstd3 = rms_calc(wz) print("Rotation statistics (No weighted)") print(" Mean RMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:4.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:4.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:4.0f} {:.0f}".format(wmean3, wrms3, wstd3)) wmean1, wrms1, wstd1 = rms_calc(wx, wx_err) wmean2, wrms2, wstd2 = rms_calc(wy, wy_err) wmean3, wrms3, wstd3 = rms_calc(wz, wz_err) print("Rotation statistics (Weighted)") print(" Mean WRMS Std") print(" uas uas uas") print("R1 {:+4.0f} {:.0f} {:.0f}".format(wmean1, wrms1, wstd1)) print("R2 {:+4.0f} {:.0f} {:.0f}".format(wmean2, wrms2, wstd2)) print("R3 {:+4.0f} {:.0f} {:.0f}".format(wmean3, wrms3, wstd3)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## GeoPandas Demo: Get Counties # This example demonstrates how to grab data from an ArcGIS MapService and pull it into a GeoPandas data frame. # + import requests import pandas as pd import geopandas as gpd # %matplotlib inline # - # ### Fetching some data # We'll tap into a NOAA map server to pull some state boundary features... 
# * Build the request # * Send the request, receive the response # + #Build the request and parameters to fetch county features # from the NOAA ArcGIS map server end point stateFIPS = '37' #This is NC url = 'https://nowcoast.noaa.gov/arcgis/rest/services/nowcoast/mapoverlays_political/MapServer/find' params = {'searchText':stateFIPS, 'contains':'true', 'searchFields':'STATEFP', 'sr':'', 'layers':'2', 'layerDefs':'', 'returnGeometry':'true', 'maxAllowableOffset':'', 'geometryPrecision':'', 'dynamicLayers':'', 'returnZ':'false', 'returnM':'false', 'gdbVersion':'', 'returnUnformattedValues':'false', 'returnFieldName':'false', 'datumTransformations':'', 'layerParameterValues':'', 'mapRangeValues':'', 'layerRangeValues':'', 'f':'json'} # - #Fetch the data response = requests.get(url,params) # ### Examining the response # * Convert the response to a JSON object # * Examine its structure # * Extract the `attributes` and `geometry` elements. #Convert to a JSON object (i.e. a dictionary) respons_js = response.json() #The 'results' object contains a record for each county returned, i.e., a feature results = respons_js['results'] len(results) #Within each item in the results object are the following items results[0].keys() #The 'attributes' item contains the feature attributes results[0]['attributes'] #And the geometry object contains the shape results[0]['geometry'] # ### Convert the elements to dataFrames # * Creating a dataFrame from the Results object # * "Exploding" the dictionary values in the `attributes` and `geometry` columns # * Concatenating dataFrames lengthwise (adding columns) #Create a dataFrame from the results, # keeping just the attributes and geometry objects df = pd.DataFrame(results,columns=('attributes','geometry')) df.head() #Explode the dictionary values into fields dfCounties = df['attributes'].apply(pd.Series) dfGeom = df['geometry'].apply(pd.Series) #Combine the two dfAll = pd.concat((dfCounties,dfGeom),axis='columns') dfAll.head() # ### Converting the [ESRI] geometry coordinates to a [shapely] geometric feature # The `dfAll` dataframe now has all feature attributes and the geometry object stored in the `rings` column. # * Exploring the 'rings' object # * Exploring the `shapely` package: rings, polygons, and multipolygons # * Using shapely to create features # * Converting the dataFrame to geodataFrame # * Plotting the output #Explore the values in the "ring" column, looking at the first row of data rings = dfAll['rings'][0] print ("There is/are {} ring(s) in the record".format(len(rings))) print ("There are {} vertices in the first ring".format(len(rings[0]))) print ("The first vertex is at {}".format(rings[0][0])) # So, the "ring" value in each row of our dataframe contains a *list* of rings, with each ring being a list of coordinates defining the vertices of our polyon. Usually the list of rings only includes one ring, the outer boundary of a single polygon. However, it's possible it contains more than one, e.g. the boundary of Hawaii. # Now we'll extract the first ring object from the ring list of the first record in our dataframe and convert it to a Shapely polygon object. To do this we need to import a few Shapely geometry class objects. 
#Import the shapely objects we'll need from shapely.geometry import LinearRing from shapely.geometry import Polygon #Create a shapely polygon from the first ring ring = rings[0] # Get the outer ring, in coordinates r = LinearRing(ring) # Convert coordinates to shapely ring object s = Polygon(r) # Convert shapely ring object to shapely polygon object s.area # Show the area of the polygon # Now that we've seen the proof of concept, we'll form a Python function that # * takes a list of rings (i.e., the value of one row's `rings` field), # * converts each ring item in this ring list into a Shapely LinearRing object, # * converts *that* into a Shapely polygon object, adding each these polygons to a list, # * and then constructs a Shapely MultiPolygon object from the list of polygons #A function to convert all rings into a Shapely multipolygon object def polyFromRings(rings): #Import necessary Shapely classes from shapely.geometry import LinearRing, Polygon, MultiPolygon #Construct an empty list of polygons polyList = [] #Compile a list of shapely ring objects and convert to polygons for ring in rings: #Construct a ring from the ring coordinates r = LinearRing(ring) #Convert the ring to a shapely polygon s = Polygon(r) #Add the polygon to the polyList polyList.append(s) #Convert the list of polyongs to a multipolygon object multiPoly = MultiPolygon(polyList) return multiPoly # Now, we use Panda's `apply` method to apply the "polyFromRings" function above to each row's "ring" values. #Apply the function to each item in the geometry column dfAll['geometry']=dfAll.apply(lambda x: polyFromRings(x.rings),axis='columns') # ### Convert dataframe to a *geo*dataframe # With the rings successfuly converted to Shapely geometry objects, we can now "upgrade" our Pandas dataframe to a GeoPandas dataframe, capable of spatial analysis #Create a geodataframe from our pandas dataframe (the geometry column must exist) gdf=gpd.GeoDataFrame(dfAll) gdf.head() #Set the projection (obtained from spatialReference column) gdf.crs = {'init': 'epsg:3857'} #Check the data types; note some should be fixed! gdf.dtypes #Convert the `ALAND` and `AWATER` to floating point values gdf['ALAND']=gdf['ALAND'].astype('double') gdf['AWATER']=gdf['AWATER'].astype('double') gdf.dtypes #Use familiar Pandas operation to select a feature and gdf[gdf['NAME'] == 'Durham'].plot(); # ### Saving the data # We can save the attributes to CSV file or save the feature class to a shapefile #Save our attribute data to a shapefile gdf.to_csv("counties_{}.csv".format(stateFIPS)) # Saving data to a shapefile is a bit more finicky. In particular, we need to remove the old "rings" field in the geodataframe. #Delete the 'rings' column from our geodataframe gdf.drop('rings',axis='columns',inplace=True) #Write the data to a file gdf.to_file(driver='ESRI Shapefile',filename="./data/NC_Counties.shp") # ## Recap # So here we've imported a layer from an ESRI Map Service and done the necessary conversions to get this into a GeoPandas dataframe -- and also export it. This reveals a bit about the requirements of a geodataframe, namely the structure of the geometry column and how the Shapely package helps with that. # # From here, we can explore more about the cool things we can do with a GeoPandas dataframe. 
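# As a minimal illustration of that requirement (toy data, not from the NOAA service): a GeoDataFrame is just a DataFrame whose geometry column holds Shapely objects, plus a coordinate reference system. The string-style CRS below assumes a reasonably recent GeoPandas; older versions use the dict form seen earlier.

# +
from shapely.geometry import Polygon
import geopandas as gpd

#A one-row geodataframe built directly from a Shapely polygon
square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
toy_gdf = gpd.GeoDataFrame({"NAME": ["UnitSquare"]}, geometry=[square], crs="EPSG:3857")
print(toy_gdf.area)   # area of the unit square: 1.0
# -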
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="E8uHiX2rYAIL" # # Housing Market # + [markdown] id="AkQclaenYAIQ" # ### Introduction: # # This time we will create our own dataset with fictional numbers to describe a house market. As we are going to create random data don't try to reason of the numbers. # # ### Step 1. Import the necessary libraries # + id="j5QzBZonYAIQ" import pandas as pd import numpy as np # + [markdown] id="gWL2pE8qYAIR" # ### Step 2. Create 3 differents Series, each of length 100, as follows: # 1. The first a random number from 1 to 4 # 2. The second a random number from 1 to 3 # 3. The third a random number from 10,000 to 30,000 # + id="O-JW3Pt5YAIR" s1 = pd.Series(np.random.randint(1,4,100)) s2 = pd.Series(np.random.randint(1,3,100)) s3 = pd.Series(np.random.randint(10000,30000,100)) # + id="U45WEuVZZWI-" outputId="21596bf7-ef17-4b4c-b525-efe0ef4cad91" colab={"base_uri": "https://localhost:8080/"} s1[:5] # + id="LtYTNMoIZZVc" outputId="0b6f0b50-9dd3-4468-c6bb-720172ce2dc1" colab={"base_uri": "https://localhost:8080/"} s2[:10] # + [markdown] id="hhizuWLPYAIR" # ### Step 3. Let's create a DataFrame by joinning the Series by column # + id="rSuzO8J0YAIR" outputId="9ded2755-9203-4657-bfbb-721726381424" colab={"base_uri": "https://localhost:8080/", "height": 202} df = pd.concat([s1,s2,s3], axis=1 ) # axis=1 means columns df.head() # + [markdown] id="W1JruiUOYAIS" # ### Step 4. Change the name of the columns to bedrs, bathrs, price_sqr_meter # + id="0kn-VXYwYAIS" outputId="7a7cf55d-2b78-49ab-f953-d52641428941" colab={"base_uri": "https://localhost:8080/", "height": 202} df = df.rename(columns={0:'bedrs',1:'bathrs',2:'price_sqr_meter'}) df.head() # + [markdown] id="kvDpGUC6YAIS" # ### Step 5. Create a one column DataFrame with the values of the 3 Series and assign it to 'bigcolumn' # + id="NEYrZsbhYAIS" outputId="4e2576d3-a735-4c95-b13b-d84a29f1df3d" colab={"base_uri": "https://localhost:8080/", "height": 662} bigcolumn = pd.concat([s1,s2,s3], axis=0) # bigcolumn.shape bigcolumn = bigcolumn.to_frame() bigcolumn.tail(20) # + [markdown] id="IYp5qdEkYAIS" # ### Step 6. Oops, it seems it is going only until index 99. Is it true? # + id="eDe77WJ6YAIT" outputId="54226734-bab3-4fe7-d086-c4e204bd24be" colab={"base_uri": "https://localhost:8080/", "height": 202} bigcolumn.tail() # + [markdown] id="PHygdjkEYAIT" # ### Step 7. Reindex the DataFrame so it goes from 0 to 299 # + id="1LaOC759YAIT" outputId="4cd11d23-6832-4aff-9de6-89ae7b302c89" colab={"base_uri": "https://localhost:8080/", "height": 202} bigcolumn.reset_index(drop=True, inplace=True) bigcolumn.tail() # + id="_Nvv3qtXa85g" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # [View in Colaboratory](https://colab.research.google.com/github/XinyueZ/tf/blob/master/ipynb/Bundesliga_Results_estimator.ipynb) # + [markdown] id="rg2f5tmt2rcR" colab_type="text" # # Train model to evaluate football result. 
# + id="usywsvFLF2bo" colab_type="code" colab={} import tensorflow as tf from tensorflow.python.data import Dataset import numpy as np import pandas as pd from sklearn.preprocessing import LabelEncoder # + id="M1XjwLH4P00k" colab_type="code" colab={} tf.logging.set_verbosity(tf.logging.INFO) # + [markdown] id="dy53WMcs27Ez" colab_type="text" # Data-source from https://www.kaggle.com/thefc17/bundesliga-results-19932018 # + [markdown] id="TcCdsIeI225b" colab_type="text" # This dataset contains results from every Bundesliga match from 1993-1994 to 2017-2018. It also includes half time results, but only from 1995-96 to 2017-18. Columns include Division (denoted as D1), HomeTeam, AwayTeam, FTHG (final time home goals), FTAG (final time away goals), FTR (full time result), HTHG (half time home goals), HTAG (half time away goals), HTR (half time result), and season. # # Data compiled into one file from this site: http://www.football-data.co.uk/germanym.php # + id="6gRBSBTfiO1x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="e606c1e9-283a-4afc-f13c-bf2cf5c5e80a" df = pd.read_csv("https://dl.dropbox.com/s/3jzvvjl2iqnlqzz/Bundesliga_Results.csv", sep=",") df = df[pd.notnull(df["FTHG"])] df = df[pd.notnull(df["FTAG"])] df = df[pd.notnull(df["FTR"])] df.head() # + id="PWk86xsBDkch" colab_type="code" colab={} def make_dataset_and_labels_and_class_num(df, label_name): """This method will prepare dataset, labels for train, evaluation, test and classes. Args: df: DataFrame format of datasource. label_name: The name of column in datasource which will be as target for train. Return: Tuple of (ds_train, ds_eval, ds_test, label_train, label_eval, label_test, classes) """ target_label_col = "label" #New column name in original table. 
encoder = LabelEncoder() label = encoder.fit_transform(df[label_name]) df.insert(8, target_label_col, label) result_fit = encoder.fit(df[label_name]) random_seed = None np.random.seed(random_seed) ds_train = df.sample(frac=0.9, random_state=random_seed) label_train = ds_train[target_label_col] ds_rest = df.drop(ds_train.index) ds_eval = ds_rest.sample(frac=0.8, random_state=random_seed) label_eval = ds_eval[target_label_col] ds_test = ds_rest.drop(ds_eval.index) label_test = ds_test[target_label_col] return ds_train[["FTHG", "FTAG"]], ds_eval[["FTHG", "FTAG"]], ds_test[["FTHG", "FTAG"]], label_train, label_eval, label_test, result_fit.classes_ # + id="NgkHsXdun_dO" colab_type="code" colab={} x_train, x_eval, x_test, y_train, y_eval, y_test, result_classes = make_dataset_and_labels_and_class_num(df, "FTR") # + id="akLoHfAXAqOi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="70f8a611-2e22-4f30-9656-3b04c9e3a667" result_classes # + id="mgVYA89XoHIV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="f27d75c6-f6ff-4095-c469-490e4dd0b11b" x_train.describe() # + id="oMlOOyPQoI1E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="6ed9f08b-c96d-4b33-bf98-75d8624269dc" x_eval.describe() # + id="xciCTqS2ageV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="61683902-32b5-49bb-e9e4-13525a814ec5" x_test.describe() # + id="Yi1g3htco3st" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c3bcc5e2-700d-4b5c-a7b8-a379f78d367c" x_train.head() # + id="6VRcJQxqpInl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="bf9a3c20-afe1-4c29-de53-452fccdf82d1" x_eval.head() # + id="g5fkf_HZaj2u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="26f4b36b-f104-4e11-d776-116320d0a5f0" x_test.head() # + id="udjYU2TaCClX" colab_type="code" colab={} def input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains a linear regression model of one feature. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Construct a dataset, and configure batching/repeating features = {key:np.array(value) for key,value in dict(features).items()} ds = Dataset.from_tensor_slices((features, targets)) ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified if shuffle: ds = ds.shuffle(buffer_size=10000) # Return the next batch of data features, labels = ds.make_one_shot_iterator().get_next() return features, labels # + id="XauzWrcNCh0J" colab_type="code" colab={} train_input_fn = lambda: input_fn(x_train, y_train) # + id="MUKqyBqocZJ_" colab_type="code" colab={} train_predict_input_fn = lambda: input_fn(x_eval, y_eval, num_epochs=1, shuffle=False) # + id="_QCb9zcOCf3A" colab_type="code" colab={} eval_predict_input_fn = lambda: input_fn(x_eval, y_eval, num_epochs=1, shuffle=False) # + id="YXjdJK8zbfW5" colab_type="code" colab={} test_predict_input_fun = lambda: input_fn(x_test, y_test, num_epochs=1, shuffle=False) # + id="_rvAbHuFjCND" colab_type="code" colab={} STEPS = 5000 # Steps of train loop. 
HIDDEN = [1000, 1000, 1000, 1000] PERIODS = 10 STEPS_PER_PERIOD = STEPS / PERIODS # + id="3AkQHk0ObqPJ" colab_type="code" colab={} # + id="O4Fyeywlz4dm" colab_type="code" colab={} feature_cols = [ tf.feature_column.numeric_column("FTHG"), tf.feature_column.numeric_column("FTAG") ] # + id="wR1wAHC0b8R5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="0ca1c4bf-e4c8-435b-e797-30a0e840a3d3" model = tf.estimator.DNNClassifier( feature_columns = feature_cols, hidden_units = HIDDEN, n_classes = len(result_classes) ) # + id="xibCYiz3by1B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 18530} outputId="6f67dae8-dcbd-4fc4-be1a-b7cac7291f5b" for period in range(0, PERIODS): model.train(input_fn=train_input_fn, steps=STEPS) train_predict = model.predict(input_fn=train_predict_input_fn) eval_predict = model.predict(input_fn=eval_predict_input_fn) # + id="-2swRvPBdXvZ" colab_type="code" colab={} test_predict = model.predict(input_fn=test_predict_input_fun) # + id="GWsXI3wpeins" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="deae8783-fa4b-4f02-e11a-34b15c1b99d7" test_predict = np.array([item['classes'][0] for item in test_predict]) # + id="89JkWcZBfzd6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2618} outputId="0cfe5e53-08a7-4b83-d74b-285e6f8a6f3f" for clazz in test_predict: result = result_classes[int(clazz)] print(result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.11 64-bit (''streamlit'': conda)' # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split # + df = pd.read_csv("Task_furniture v2.csv", sep=";") print(df.shape) print(df.info()) print(df.DwellingType.unique()) print(df.Lifestage.unique) # checking where the missing values are df.isna().any() # + # outliers - detecting by z-score from scipy import stats print(df[(np.abs(stats.zscore(df["Age"])) > 3)]) df[(np.abs(stats.zscore(df["Salary"])) > 3)] # dropping outliers df = df[(np.abs(stats.zscore(df["Age"])) < 3)] # - # pairwise variables visualization # Create the default pairplot sns.pairplot(df.drop("ID", axis=1)) # Create a pair plot colored by continent with a density plot of the # diagonal and format the scatter plots. 
sns.pairplot(df.drop(columns = ['ID']), hue = 'City', diag_kind = 'kde', plot_kws = {'alpha': 0.6, 's': 80, 'edgecolor': 'k'}, size = 4) # missing values df.isnull().sum() # + # encoding the categorical variables from sklearn import preprocessing from sklearn.preprocessing import LabelEncoder def pre_modeling(df, classification = True): df_modelling = df.copy() df_modelling['City'] = df_modelling['City'].astype(str) if classification == False: df_modelling['Gender'] = df_modelling['Gender'].astype(str) df_modelling['Gender'] = df_modelling['Gender'].replace(['0','1'],["Female", "Male"]) # here is for Linear Regression df_modelling = pd.get_dummies(df_modelling, columns = ['Gender', 'City', 'Lifestage', 'DwellingType'], drop_first=True) df_modelling.drop(columns=['Target', 'ID'], inplace=True) df_modelling.dropna(axis=0, subset=['Salary'], inplace=True) else: label_encoder = LabelEncoder() df_modelling.iloc[:, 3] = label_encoder.fit_transform(df_modelling.iloc[:, 3]) df_modelling.iloc[:, 4] = label_encoder.fit_transform(df_modelling.iloc[:, 4]) df_modelling.iloc[:, 5] = label_encoder.fit_transform(df_modelling.iloc[:, 5]) #df_modelling = pd.get_dummies(df_modelling, columns = ['Gender', 'City', 'Lifestage', 'DwellingType'], drop_first=False) df_modelling.drop(columns=['Salary', 'ID'], inplace=True) return df_modelling # print(df_modelling.info()) # df_modelling.head(4) # + # Linear Regression from sklearn.linear_model import LinearRegression import statsmodels.api as sm from scipy import stats df_modelling = pre_modeling(df, classification=False) print(df_modelling.shape) print(df_modelling.head(2)) y = df_modelling.pop('Salary') x = df_modelling X2 = sm.add_constant(x) est = sm.OLS(y, X2) est2 = est.fit() # coeff_parameter = pd.DataFrame(model.coef_, x.columns, columns=['Coefficient']) # print(coeff_parameter) est2.summary() # + from sklearn.ensemble import RandomForestClassifier X_train, X_test, y_train_salary, y_test_salary = train_test_split(x, y, test_size=0.2, random_state=1234) feature_names = [f'feature {i}' for i in range(x.shape[1])] forest = RandomForestClassifier(random_state=1234) forest.fit(X_train, y_train_salary) # + from sklearn.inspection import permutation_importance result = permutation_importance(forest, X_test, y_test_salary, n_repeats=10, random_state=1234, n_jobs=1) forest_importances = pd.Series(result.importances_mean, index=feature_names) fig, ax = plt.subplots() forest_importances.plot.bar(yerr=result.importances_std, ax=ax) ax.set_title("Feature importances using permutation on full model") ax.set_ylabel("Mean accuracy decrease") fig.tight_layout() plt.show() # - x.columns[[0,12]] # + import eli5 from eli5.sklearn import PermutationImportance perm = PermutationImportance(forest, random_state=1234).fit(X_test, y_test_salary) eli5.show_weights(perm, feature_names = X_test.columns.tolist()) # - # # Classification - Target # + # Mixed Naive Bayes for Classification df_modelling = pre_modeling(df, classification=True) y = df_modelling.pop('Target') x = df_modelling print(df_modelling.head(1)) print(y[0:4]) x.head(5) # - x.info() # Use a utility from sklearn to split and shuffle your dataset. 
# split into train and test sets x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1234) # summarize print('Train', x_train.shape, y_train.shape) print('Test', x_test.shape, y_test.shape) print(pd.value_counts(y_train)) print(pd.value_counts(y_test)) # + from mixed_naive_bayes import MixedNB from collections import Counter from imblearn.under_sampling import RandomUnderSampler from imblearn.over_sampling import SMOTE from sklearn.metrics import roc_auc_score from sklearn.metrics import precision_recall_curve from sklearn.metrics import f1_score from sklearn import metrics def naive_bayes_model(x, y, imblance_method): X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1234) nb_mod = MixedNB(categorical_features=[1,2,3,4]) if imblance_method == "No": test_pred = nb_mod.fit(X_train, y_train).predict(X_test) model_roc_auc_score = roc_auc_score(y_test, test_pred) print('roc_auc_score=%.3f' % (model_roc_auc_score)) nb_precision, nb_recall, _ = precision_recall_curve(y_test, test_pred) nb_f1, nb_auc = f1_score(y_test, test_pred), metrics.auc(nb_recall, nb_precision) print('f1=%.3f precision/recall=%.3f' % (nb_f1, nb_auc)) elif imblance_method == "Undersampling": # summarize class distribution print("Before undersampling: ", Counter(y_train)) # define undersampling strategy undersample = RandomUnderSampler(sampling_strategy='majority', random_state = 1234) # fit and apply the transform X_train_under, y_train_under = undersample.fit_resample(X_train, y_train) # summarize class distribution print("After undersampling: ", Counter(y_train_under)) test_pred = nb_mod.fit(X_train_under, y_train_under).predict(X_test) model_roc_auc_score = roc_auc_score(y_test, test_pred) print('roc_auc_score=%.3f' % (model_roc_auc_score)) nb_precision, nb_recall, _ = precision_recall_curve(y_test, test_pred) nb_f1, nb_auc = f1_score(y_test, test_pred), metrics.auc(nb_recall, nb_precision) print('f1=%.3f precision/recall=%.3f' % (nb_f1, nb_auc)) elif imblance_method == "Oversampling": print("Before undersampling: ", Counter(y_train)) # define oversampling strategy SMOTE_mod = SMOTE() # fit and apply the transform X_train_SMOTE, y_train_SMOTE = SMOTE_mod.fit_resample(X_train, y_train) # summarize class distribution print("After oversampling: ", Counter(y_train_SMOTE)) nb_mod = MixedNB(categorical_features=[1,2,3,4]) test_pred = nb_mod.fit(X_train_SMOTE, y_train_SMOTE).predict(X_test) model_roc_auc_score = roc_auc_score(y_test, test_pred) print('roc_auc_score=%.3f' % (model_roc_auc_score)) nb_precision, nb_recall, _ = precision_recall_curve(y_test, test_pred) nb_f1, nb_auc = f1_score(y_test, test_pred), metrics.auc(nb_recall, nb_precision) print('f1=%.3f precision/recall=%.3f' % (nb_f1, nb_auc)) return model_roc_auc_score, nb_f1, nb_auc # - naive_bayes_model(x, y, imblance_method="Oversampling") naive_bayes_model(x, y, imblance_method="Undersampling") naive_bayes_model(x, y, imblance_method="No") x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1234) nb_mod = MixedNB(categorical_features=[1,2,3,4]) nb_mod.fit(x_train, y_train) perm = PermutationImportance(nb_mod, random_state=1234).fit(x_test, y_test) eli5.show_weights(perm, feature_names = x_test.columns.tolist()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": 
"https://localhost:8080/", "height": 50} id="EIOlMOX7lt1R" outputId="f7e21642-a407-4852-8838-794b52129536" from google.colab import drive drive.mount('/content/drive') # %cd /content/drive/My Drive/Github/Biometric # + id="rQLTyIbVkJkU" import os import numpy as np import pandas as pd from tqdm import tqdm import seaborn as sns from sklearn.decomposition import PCA import tensorflow as tf from tensorflow.keras.models import load_model import matplotlib.pyplot as plt import cv2 from PIL import Image from FUNC_script import * # + id="JY0LMeaOl1H_" # Load FaceNet Model model = load_model('Template_poisoning/model/facenet_keras.h5', compile= False) model.trainable= False # + id="iB3r7PVemQRY" # Read Images (160x160x3) def load_img(path, resize=None): img= Image.open(path) img= img.convert('RGB') if resize is not None: img= img.resize((resize, resize)) return np.asarray(img) def Centroid(vector): ''' After the random initialization, we first optimize the glasses using the adversary’s centroid in feature space(Xc) # Arguments Input: feature vector(batch_size x feature) ''' Xc= tf.math.reduce_mean(vector, axis=0) return tf.reshape(Xc, (1, vector.shape[-1])) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="4T-NbZQOn4xS" outputId="da64bf3e-5a94-4944-b54a-db22ba860aa0" # Data Loading def load_data(dir): X=[] for i, sample in enumerate(os.listdir(dir)): image= load_img(os.path.join(dir, sample)) image = cv2.resize(image, (160, 160)) X.append(image/255.0) return np.array(X) X= load_data('Template_poisoning/Croped_data/adversary_images') Target_samples= load_data('Template_poisoning/Croped_data/target_images') X_ex= X.copy() # Copy of X print('Adversarial Batch:',X.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="MquVc5CDn40u" outputId="e20df9fe-e2e6-485d-a46d-27e430245708" # GET Mask mask= load_img('Template_poisoning/final_mask.png') #Sacle(0-255), RGB mask= mask/255.0 mask.shape # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="rl2fkdNSn4vC" outputId="5c5cb737-06fe-4b6f-c99f-32a3a78b0e57" # img_tr= load_img('Template_poisoning/Croped_data/target_images/ben_afflek_0.jpg') # feature_tr= model.predict(img_tr[np.newaxis, :, :, :]) #Target= Generate_target(feature_tr, batch_size= X.shape[0]) Target= Centroid(model.predict(Target_samples)) Target= Generate_target(Target, batch_size= X.shape[0]) Target.shape # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="5hCCDwUYuAPE" outputId="6df0357f-54f0-432d-99a0-580dba33a3e8" delta_x= np.random.uniform(low=0.0, high=1.0, size=X.shape) # Scale(0-1) delta_x.shape # + colab={"base_uri": "https://localhost:8080/", "height": 169} id="VVdsjM3NRFdo" outputId="f1652b27-edd8-4f4d-955e-b6a0480952e4" f, ax= plt.subplots(1, 5, figsize=(14, 4)) image= X*(1-mask)+ delta_x*mask for i in range(5): ax[i].imshow(image[i+5]) ax[i].set_xticks([]); ax[i].set_yticks([]) plt.show() del image # + colab={"base_uri": "https://localhost:8080/", "height": 50} id="jqAjO_ISRlav" outputId="be68deb4-760a-4c1e-ed55-9a6e04829d8a" Xc= Centroid(model.predict(X)) #(1 x 128) print(Xc.shape) Xc= Generate_target(Xc, batch_size=24) #(24 x 128) print(Xc.shape) # + id="MgSGIG5PS6Rn" def loss_object(pred, label, delta= delta_x, direction= False): # Loss= euclidean distance + Delta_x pixel Variance dist= Euclidean_dist(pred, label) variance= Sample_variance(delta_x) if direction: sc= tf.math.subtract(1.0, tf.math.divide(1.0, label.shape[0])) vector_mean= tf.math.multiply(dist, sc) target_dir= tf.math.multiply(vector_mean, dist) 
Loss= tf.math.add(target_dir, tf.cast(variance, dist.dtype)) return Loss Loss= tf.math.add(tf.cast(dist, variance.dtype), variance) return Loss # + id="f98mUqiwRT5h" def back_propagate(model, X, mask, delta_x, label, direction= False): with tf.GradientTape() as g: g.watch(delta_x) X_batch= Generate_sample(X, delta_x, mask) feature= model(X_batch) loss= loss_object(pred= feature, label= label, delta= delta_x, direction= direction) # Get the gradients of the loss w.r.t to the input image. gradient = g.gradient(loss, delta_x) return gradient, tf.reduce_mean(loss).numpy() # + id="3prov3SDUIYV" # Tf Variables X= tf.Variable(X, dtype=tf.float64) delta_x= tf.Variable(delta_x, dtype=tf.float64) mask= tf.Variable(mask, dtype=tf.float64) Xc= tf.Variable(Xc) # + colab={"base_uri": "https://localhost:8080/", "height": 286} id="jECDM3ejPpaJ" outputId="2b747db4-bd84-41ac-9e31-d7e3dac11d72" epoch= 151 Lambda= 0.5 for ep in range(epoch): grad, loss= back_propagate(model, X, mask, delta_x, Xc) # Gradient step delta_x= delta_x - Lambda*grad if ep%10 == 0: print('Epoch: {} Loss: {:.3f}'.format((ep+1), loss)) # + colab={"base_uri": "https://localhost:8080/", "height": 790} id="YbRJMJw9sXAR" outputId="bb6f5b42-4308-45a3-8035-2c5f914b6aa9" Lambda= 0.2 for ep in range(int(3*epoch)): grad, loss= back_propagate(model, X, mask, delta_x, Target, direction= True) # Gradient step delta_x= delta_x - Lambda*grad if ep== 0: delta_x0= tf.identity(delta_x) if ep== 1: delta_x1= tf.identity(delta_x) if ep== 2: delta_x2= tf.identity(delta_x) if ep== 3: delta_x2= tf.identity(delta_x) if ep== 170: Lambda= 0.1 if ep== 300: Lambda= 0.04 if ep%10 == 0: print('Epoch: {}, Loss: {}'.format((ep+1), loss)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="gdBAD-1Gcdyd" outputId="2dde8ee7-bb5e-4d7f-cee5-426aaa77534f" adv_sample0=Generate_sample(X, delta_x, mask) adv_sample0=adv_sample0.numpy() adv_sample1=Generate_sample(X, delta_x1, mask) adv_sample1=adv_sample1.numpy() adv_sample1.shape # + id="WCsgDlmilX_R" adv_sample0=np.clip(adv_sample0, 0, 1) adv_sample1=np.clip(adv_sample1, 0, 1) # + colab={"base_uri": "https://localhost:8080/", "height": 169} id="eEu0NYkBf2Wj" outputId="94177076-8934-4a83-8fc8-e6d204878c20" f, ax= plt.subplots(1, 5, figsize=(14, 4)) for i in range(5): ax[i].imshow(adv_sample0[i+5]) ax[i].set_xticks([]); ax[i].set_yticks([]) plt.show() # + id="a5uspmH0f_s4" adv_feature= model.predict(X_ex) df_adv= pd.DataFrame(adv_feature) # + id="Dotupoy9gdM4" adv_modified_feature0= model.predict(adv_sample0) df_adv_modify0= pd.DataFrame(adv_modified_feature0) adv_modified_feature1= model.predict(adv_sample1) df_adv_modify1= pd.DataFrame(adv_modified_feature1) # + id="olDOGvQ5tx2B" target_feature= model.predict(Target_samples) df_target= pd.DataFrame(target_feature) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="Wi6ac2lxhjOJ" outputId="f67d53ca-913d-424b-a401-6fb5c58a1f20" df_adv['target']= 'Adversarial_sample' df_adv_modify0['target']= 'Adversarial Pubertation initial step' df_adv_modify1['target']= 'Adversarial Pubertation final step' df_target['target']= 'Target_sample' df=pd.concat([df_target, df_adv_modify0,df_adv_modify1, df_adv], ignore_index= True) df.shape # + colab={"base_uri": "https://localhost:8080/", "height": 388} id="7hDpH2Vh96jH" outputId="9f0e1b59-18ca-4711-8109-477b89ce55d8" pca = PCA(n_components=2) # Fit pca to 'X' df1= pd.DataFrame(pca.fit_transform(df.drop(['target'], 1))) df1.shape df1['target']= df.target fig, ax = plt.subplots(figsize=(12, 6)) plt.grid(True) 
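# Descriptive note: the scatter plot completed below projects the 128-D FaceNet
# embeddings onto their first two principal components, coloring points by
# group (target samples, the original adversary images, and the perturbed
# images after the initial and final optimization steps) so the drift of the
# poisoned templates toward the target cluster can be inspected visually.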
plt.xlabel('feature-1'); plt.ylabel('feature-2') sns.scatterplot(x=df1.iloc[:, 0] , y= df1.iloc[:, 1], hue = df1.iloc[:, 2], data= df1, palette='Set1', ax= ax) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="8ZtuUVpcCEno" colab_type="code" colab={} import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # + id="l0gBTuTZCPp_" colab_type="code" colab={} train = pd.read_csv("/content/train_mod.csv") # + id="gJMuBPLsIBdW" colab_type="code" colab={} tr = pd.read_csv("/content/train.csv") y=tr["Survived"] # + id="XI4MvhrtIBhQ" colab_type="code" colab={} # + id="kSigtyYyChRw" colab_type="code" colab={} test = pd.read_csv("/content/test_mod.csv") # + id="wURpx3hFDF5A" colab_type="code" colab={} X_test = test X_test = X_test.drop(columns=["PassengerId"]) # + id="xNilALBuHlVS" colab_type="code" outputId="fd459bb1-f76d-4d8d-d82e-dccc7c9de63c" colab={"base_uri": "https://localhost:8080/", "height": 195} train.head() # + id="hcMtnZeXDNNy" colab_type="code" outputId="e61b5d03-b10a-44bd-efe2-5bf8512862fd" colab={"base_uri": "https://localhost:8080/", "height": 403} from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV,StratifiedKFold,cross_val_score,train_test_split from sklearn.metrics import accuracy_score rfc = RandomForestClassifier(random_state=100) param_grid = { 'n_estimators': [200, 500], 'max_features': ['auto', 'sqrt', 'log2'], 'max_depth' : [4,5,6,7,8], 'criterion' :['gini', 'entropy'] } kf = StratifiedKFold(n_splits=5) grfc = GridSearchCV(estimator = rfc,param_grid=param_grid,cv=kf) X_train,X_tes,y_train,y_test = train_test_split(train,y,random_state=69) grfc.fit(X_train,y_train) # + id="k2ZbRFVyKsr0" colab_type="code" outputId="3b561e5a-6e16-4ace-acb5-27c8c9f85416" colab={"base_uri": "https://localhost:8080/", "height": 34} grfc.best_score_ # + id="d6YBB3hiLbns" colab_type="code" outputId="f7774311-0d5e-4664-e926-070c7c5c56c4" colab={"base_uri": "https://localhost:8080/", "height": 84} grfc.best_params_ # + id="y8155B9zLfBe" colab_type="code" colab={} y_pred1 = grfc.predict(X_tes) # + id="hU4F_HEuLkAA" colab_type="code" outputId="a854c134-9e03-4451-9517-b545d524ee45" colab={"base_uri": "https://localhost:8080/", "height": 34} accuracy_score(y_test,y_pred1) # + id="691ZsEhwLpQM" colab_type="code" colab={} y_pred1 = grfc.predict(X_test) # + id="IF1rxJMtLwfu" colab_type="code" colab={} rfc_grid= pd.DataFrame({ "PassengerId": test["PassengerId"], "Survived": y_pred1 }) # + id="l9SAdpu7MBQl" colab_type="code" colab={} rfc_grid.to_csv("/content/rfc_grid_data12.csv", index=False) # + id="SHnKXzsiMI6M" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" from __future__ import print_function from time import time import keras from keras.datasets import mnist,cifar10 from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D from keras.layers import Dense,Dropout,Activation,Flatten from keras.optimizers import Adam from keras import backend as K from matplotlib import pyplot as plt import random # - #no. of convolutional filters to use filters = 64 #size of pooling area for max pooling pool_size = 2 #convolutional kernel size kernel_size = 3 #load and split the data to train and test (x_cifar_train, y_cifar_train), (x_cifar_test, y_cifar_test) = cifar10.load_data() y_cifar_train = y_cifar_train.reshape(50000,) y_cifar_test = y_cifar_test.reshape(10000,) # + x_train_lt5 = x_cifar_train[y_cifar_train < 5] y_train_lt5 = y_cifar_train[y_cifar_train < 5] x_test_lt5 = x_cifar_test[y_cifar_test < 5] y_test_lt5 = y_cifar_test[y_cifar_test < 5] x_train_gte5 = x_cifar_train[y_cifar_train >= 5] y_train_gte5 = y_cifar_train[y_cifar_train >= 5] - 5 x_test_gte5 = x_cifar_test[y_cifar_test >= 5] y_test_gte5 = y_cifar_test[y_cifar_test >= 5] - 5 # - fig, ax = plt.subplots(2,10,figsize=(10,2.8)) fig.suptitle("Example of training images (from first 5 categories), for the first neural net\n", fontsize=15) axes = ax.ravel() for i in range(20): # Pick a random number idx=random.randint(1,1000) axes[i].imshow(x_train_lt5[idx]) axes[i].axis('off') fig.tight_layout(pad=0.5) plt.show() #set the no. 
of classes and the input shape num_classes = 5 input_shape = (32,32,3) # 3 is for rgb #keras expects 3d images feature_layers = [ Conv2D(filters, kernel_size, padding='valid', input_shape=input_shape), Activation('relu'), Conv2D(filters, kernel_size), Activation('relu'), MaxPooling2D(pool_size=pool_size), Dropout(0.25), Flatten(), ] classification_layers = [ Dense(128), Activation('relu'), Dropout(0.25), Dense(num_classes), Activation('softmax') ] # + # Assignment - Normalize the pixels for cifar images # for the classification layers change the dense to Conv2D and see which one works better # - model_1 = Sequential(feature_layers + classification_layers) def train_model(model, train, test, num_classes): x_train = train[0].reshape((train[0].shape[0],) + input_shape) x_test = test[0].reshape((test[0].shape[0],) + input_shape) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(train[1], num_classes) y_test = keras.utils.to_categorical(test[1], num_classes) model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy']) t1 = time() model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) t2 = time() t_delta = round(t2-t1,2) print('Training time: {} seconds'.format(t_delta)) score = model.evaluate(x_test, y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) batch_size = 128 epochs = 20 #train model for the first 5 categories of images train_model(model_1, (x_train_lt5, y_train_lt5), (x_test_lt5, y_test_lt5), num_classes) model_1.summary() # freeze the features layer for l in feature_layers: l.trainable = False model_2 = Sequential(feature_layers + classification_layers) #train model for the greater than 5 categories of images (last five categrories) train_model(model_2, (x_train_gte5, y_train_gte5), (x_test_gte5, y_test_gte5), num_classes) history_dict = model_1.history.history print(history_dict.keys()) plt.title("Validation accuracy over epcohs",fontsize=15) plt.plot(model_1.history.history['accuracy']) plt.plot(model_1.history.history['val_accuracy']) # plt.plot(history.history['val_accuracy'],lw=3,c='k') plt.grid(True) plt.xlabel("Epochs",fontsize=14) plt.ylabel("Accuracy",fontsize=14) plt.xticks([2*i for i in range(11)],fontsize=14) plt.yticks(fontsize=14) plt.show() # # My code # load dataset (trainX, trainy), (testX, testy) = cifar10.load_data() # summarize loaded dataset print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape)) print('Test: X=%s, y=%s' % (testX.shape, testy.shape)) # plot first few images for i in range(9): # define subplot pyplot.subplot(330 + 1 + i) # plot raw pixel data pyplot.imshow(trainX[i]) # show the figure pyplot.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import tensorflow as tf import random import numpy as np path = 'users.dat' input_vect_size = 0 with open(path, 'rb') as input_file: for line in input_file: line = line.strip() line = line.split()[1:] for item in line: if int(item) > input_vect_size: input_vect_size = int(item) with open(path, 'rb') as input_file: train_set = [] val_set = [] for line in 
input_file: line = line.strip() line = line.split()i #num_saved_items = int(line[0]) #indices = [] full_data = [] for i in range(1, len(line)): full_data.append(int(i)) training_data = get_training_data(np.array(full_data)) val_set.append(make_sparse_from_raw_set(full_data, input_vect_size)) train_set.append(make_sparse_from_raw_set(training_data, input_vect_size)) #indices.append([0, int(line[i]) - 1]) #values = [1] * num_saved_items #shape = [1, input_vect_size] #sv = tf.SparseTensor(indices=indices, values=values, shape=shape) # Need to get the sampled on from test set # - import numpy as np sess = tf.Session() dv = tf.sparse_tensor_to_dense(sv) res = sess.run(dv) a = np.where(res > 0)[1] def make_sparse_from_raw_set(raw_set, size): indices = [[0, i - 1] for i in raw_set] values = [1] * len(indices) shape = [1, size] #return [indices, values, shape] return tf.SparseTensor(indices=indices, values=values, shape=shape) def get_training_data(full_dataset_one_d): '''Input: 1-D vector of just the item numbers that are included in in the full dataset Output: 2-D with the first dimension holding the test set that omits 20% of the data, and the second dimension including the full set. In this application the validation set is the full set. ''' #TODO - Might need to check that training set statisfies a min length full_set = list(full_dataset_one_d) ind = np.random.randint(1, len(full_dataset_one_d), len(full_dataset_one_d)) train_set_ind = np.where(ind <= 0.8 * len(full_dataset_one_d))[0] train_set = full_set[train_set_ind] return train_set b, c = get_validation_set(a) b len_b = b.shape[0] len_a = a.shape[0] print float(len_b) / len_a import tensorflow as tf W_init_val = [tf.constant(range(i, i+2), dtype=tf.float32) for i in range(1, 7, 2)] W_prime_init_val = [tf.constant(range(i, i+2), dtype=tf.float32) for i in range(7, 13, 2)] b_prime_init_val = [tf.constant(2.0, dtype=tf.float32) for _ in xrange(3)] # + import tensorflow as tf sess = tf.Session() optimizer = tf.train.GradientDescentOptimizer(0.01) y_0 = [[1, 0, 1]] y = tf.placeholder(tf.float32, [None, 3]) # Test Gradients weight = dict() # weight['W'] = [tf.Variable(seq) for seq in W_init_val] # weight['W_prime'] = [tf.Variable(seq) for seq in W_prime_init_val] # weight['b'] = tf.Variable(tf.constant([[1.0, 1.0]], dtype=tf.float32)) # weight['b_prime'] = [tf.Variable(seq) for seq in b_prime_init_val] #tf.pack(weight['W_prime'])[0] weight['W'] = tf.Variable(W_init_val, name='W') weight['W_prime'] = tf.Variable(W_prime_init_val, name='W_prime') weight['b'] = tf.Variable(tf.constant([[1.0, 1.0]], dtype=tf.float32), name='b') weight['b_prime'] = tf.Variable(b_prime_init_val, dtype=tf.float32, name='b_prime') z = tf.add(tf.matmul(y, weight['W']), weight['b']) y_hat = tf.add(tf.matmul(z, tf.transpose(weight['W_prime'])), weight['b_prime']) loss_0 = tf.nn.l2_loss(tf.sub(y, y_hat)) # #loss_1 = tf.nn.l2_loss(tf.sub(y, y_hat)) # #cost = 0.5 * tf.reduce_sum(tf.nn.l2_loss(tf.sub(y, y_hat))) # # W_var_before = [sess.run(var) for var in weight['W']] # # W_prime_var_before = [sess.run(var) for var in weight['W_prime']] # #Gradient # #dw_0 = tf.gradients(loss_0, [weight['W'][0]])[0] # #dw_prime_0 = tf. gradients(loss_0, [weight['W_prime'][0]])[0] dw_prime_x = optimizer.compute_gradients(loss_0, var_list=[weight['W'], weight['W_prime'], weight['b'], weight['b_prime']])[1][0] #var_list=[weight['W_prime']])[0][0] # #dw_1 = tf.gradients(loss_0, [weight['W'][1]])[0] # #dw_prime_1 = tf. 
gradients(loss_0, [weight['W_prime'][1]])[0] # #W_var = [sess.run(var) for var in weight['W']] # #W_prime_var = [sess.run(var) for var in weight['W_prime']] # #z_out, y_hat_out, loss, grad_w0, grad_w_prime0, grad_w1, grad_w_prime1 = sess.run((z, y_hat, loss_0, dw_0, dw_prime_0, dw_1, dw_prime_1), feed_dict={y: y_0}) sparse_update = tf.scatter_update(weight['W_prime'], [0, 2], tf.gather(dw_prime_x, [0, 2])) # # #normalized_weights = [tf.nn.l2_normalize(var, 0) for var in weight.iteritems()] # # #w_sum = tf.reduce_sum(normalized_weights) sess.run(tf.initialize_all_variables()) #a = sess.run(dw_prime_x, feed_dict={y: y_0})[1][0] W_prime_var_before = sess.run(weight['W_prime']) # print sess.run(dw_prime_x, feed_dict={y: y_0}) sess.run(sparse_update, feed_dict={y: y_0}) # # #reg_sum = sess.run(w_sum) W_prime_var = sess.run(weight['W_prime']) # - a print W_prime_var_before print '' print W_prime_var # + l = 0.1 loss = tf.nn.l2_loss(tf.sub(y, y_hat)) dloss_d = lambda var: optimizer.compute_gradients(loss, var_list=[var])[0][0] dCost_d = lambda var: dloss_d(var) - tf.mul(l, var) grad_op = lambda var: optimizer.apply_gradients([[dCost_d(var), var]]) W_var_before = [sess.run(var) for var in weight['W']] #cost = lambda var: loss - tf.mul(l, var) #cost = 0.5 * tf.reduce_sum([tf.nn.l2_loss(vect) for vect in tf.sub(z_0, z)]) #cost = 0.5 * tf.reduce_sum(tf.nn.l2_loss(tf.sub(z_0, z))) sess.run(grad_op(weight['W'][0]), feed_dict={y: y_0}) W_var = [sess.run(var) for var in weight['W']] # - W_var_before W_var # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import os import cioppy import getpass ciop = cioppy.Cioppy() data_path = '/workspace/data' username = getpass.getpass("Enter your username:") api_key = getpass.getpass("Enter the API key:") if not os.path.isdir(data_path): os.makedirs(data_path) input_references = ['https://catalog.terradue.com/better-ethz-02-01-01/search?uid=F4636C8462A88D225393E23281C011ED477B22A7'] # Sakurajima 31.614,130.658 input_references = ['https://catalog.terradue.com/better-ethz-02-01-01/search?uid=C858B8C24E1B7ED261F1B0D9AF04A39DB11B1C82'] # Mount Merapi -7.507,110.452 input_references = ['https://catalog.terradue.com/better-ethz-02-01-01/search?uid=BDB9FD29E55F127A9874877E0D662F5D95C356E0'] # + import os import getpass vm_user = getpass.getuser() os.environ['HOME'] = '/home/{}'.format(vm_user) # - for input_reference in input_references: enclosure = ciop.search(end_point=input_reference, params=[('do', 'terradue')], output_fields='enclosure', model='EOP')[0]['enclosure'] print enclosure retrieved = ciop.copy(enclosure, data_path, extract=True, credentials='{}:{}'.format(username, api_key)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from pandas import read_csv, read_excel, crosstab, DataFrame from statistics import mean, median import seaborn as sns import numpy as np import matplotlib.pyplot as plt from scipy import stats from scipy.stats import fisher_exact from helpers.plots import labels, theme, show_corelations theme() # ## Load cleaned data g = read_csv('data/cleaned_answers.csv') g.head(2) # ## Demographic statistics scouts = g[g.is_polish_scout] not_scouts = g[g.is_polish_scout == False] h_mean = 
mean(scouts['total_score_without_images']) nh_mean = mean(not_scouts['total_score_without_images']) g_mean = mean(g['total_score_without_images']) h_mean, nh_mean, g_mean g.place_of_residence.value_counts() # ### Number of valid participants participants_n = len(g) participants_n # ### Number of women g.is_women.describe() len(g[g.is_women]) # Majority participants were women f'{round((len(g[g.is_women])/len(g))*100, 2)}% of participants were women.' # ### Number of scouts g.is_polish_scout.describe() len(g[g.is_polish_scout]) # Only about 1/3 of participants were scouts f'{round((len(g[g.is_polish_scout])/len(g))*100, 2)}% of participants were scouts.' # ### Age of participants f'Mean age of all participants was: {round(mean(g.age), 2)} years.' f'Median age of all participants was: {median(g.age)} years.' # Mean and median age of participants are comparable so I expect little skew in the age distribution. s = g.age.std() # Assuming that age distribution is normal I would expect 95% of participants to be in this age group: mean(g.age) - 2 * s, mean(g.age) + 2 * s fig, ax1 = plt.subplots() ax2 = ax1.twinx() sns.distplot(g.age, kde=False, ax=ax1, hist_kws={'rwidth': 0.5}) sns.distplot(g.age, hist=False, ax=ax2, kde_kws={'bw': .5}); g.age.describe() # ### Age distribution: control group vs Polish population # Those data were used in my master thesis. The data are from 2017 as 2018 data were not available at the time of writing my master thesis. Now I have decided to use 2018 data as they become available. # # GUS 2017 and 2018 data comparison is available in the [GUS_17_and_18](GUS_17_and_18.ipynb) notebook. # # The data is prepared in [Demographics_data](Demographics_data.ipynb) notebook. # **Data 2018 source:** [Central Statistical Office in Poland, *Ludność. Stan i struktura oraz ruch naturalny w przekroju terytorialnym w 2018 r. 
Stan w dniu 31 XII*](https://stat.gov.pl/obszary-tematyczne/ludnosc/ludnosc/ludnosc-stan-i-struktura-oraz-ruch-naturalny-w-przekroju-terytorialnym-w-2018-r-stan-w-dniu-31-xii,6,25.html) # gus_2018 = read_csv('data/gus_2018_cleaned.csv') gus_2018.head() gus_age = np.repeat(gus_2018.age, gus_2018.total) gus_age sns.kdeplot(g[~g.is_polish_scout].age, bw=.5, label="Grupa kontrolna", shade=True) sns.kdeplot(gus_age, bw=.5, label="Populacja wg GUS", shade=True); # Age distribution for all participants was not normal - Shapiro test stats.shapiro(g.age) # ### Age distribution: scouts vs control group theme() sns.kdeplot(g[g.is_polish_scout == False].age, bw=.5, label="Grupa kontrolna", shade=True) sns.kdeplot(g[g.is_polish_scout].age, bw=.5, label="Harcerze", shade=True) sns.rugplot(g['age'], color='.2', height=0.01) labels(title='Rozkład wieku pomiędzy badanymi grupami', x='Wiek', y='Gęstość', legend=True) theme() # ks test sprawdza czy dystrybucje są takie same, im bliżej zera tym większa szansa, że są podobne, im mniejsze pvalue tym bardziej istotne statystycznej # # `ks_2samp` p-value is computed more precisely since SciPy 1.3.0 ([stats.ks_2samp changes log](https://docs.scipy.org/doc/scipy/reference/release.1.3.0.html#scipy-stats-improvements)) - see difference in p-value below: # old stats.ks_2samp(g[g.is_polish_scout].age, g[g.is_polish_scout == False].age, mode='asymp') # new stats.ks_2samp(g[g.is_polish_scout].age, g[g.is_polish_scout == False].age) # z-score = miara efektu, czyli jak ważne stats.skewtest(g[g.is_polish_scout].age) stats.skewtest(g[g.is_polish_scout == False].age) # `scipy.stats.skew` # # >For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the left tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking. # # > Returns: The skewness of values along an axis, returning 0 where all values are equal. # # Source: [SciPy docs](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.skew.html) stats.skew(g[g.is_polish_scout].age) stats.shapiro(g[g.is_polish_scout].age) stats.shapiro(g[g.is_polish_scout == False].age) # ### Sex distribution comparison # Female to male ratio from GUS 2018 data t = gus_2018[['males', 'females']].apply(sum) gus_female_male_ratio = t['females'] / t['males'] print(f'GUS women to men ratio: {round(gus_female_male_ratio, 2)}') # Female to male ratio according to ZHP 2018 data. # # The data is prepared in [Demographics_data > ZHP](Demographics_data.ipynb#ZHP) notebook. # # Data: http://hib.zhp.pl/zhp-w-liczbach/ # # In my Master Thesis I used data from 2017 (`female_male_ratio_zhp = 1.44`) as 2018 was not available at that time. 
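# A minimal sketch of the test applied below: Fisher's exact test on a 2x2
# table of (female, male) counts in two groups. The counts here are invented
# purely for illustration; the real tables are built from the survey responses
# and the ZHP/GUS figures loaded in the following cells.
demo_table = [[60, 40],   # illustrative group 1: females, males
              [45, 55]]   # illustrative group 2: females, males
demo_odds_ratio, demo_p = fisher_exact(demo_table)
demo_odds_ratio, demo_p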
zhp_in_numbers = read_csv('data/zhp_2018_cleaned.csv', index_col=0) zhp_in_numbers.tail(1) zhp_members_total = zhp_in_numbers["Razem:"] zhp_members_total.head(4) zhp_females = zhp_members_total.loc["Wędr K"] + zhp_members_total.loc["Wędr/Inst K"] zhp_males = zhp_members_total.loc["Wędr M"] + zhp_members_total.loc["Wędr/Inst M"] zhp_female_male_ratio = zhp_females / zhp_males print(f'ZHP women to men ratio: {round(zhp_female_male_ratio, 2)}') # Sex distribution: scouts vs control group contingency_table = crosstab(g.sex, g.is_polish_scout) contingency_table is_scout_sex = contingency_table.to_dict() # odds_ratio, pvalue fisher_exact(contingency_table) # pl: Kobieta -> eng: Woman # # pl: Mężczyzna -> eng: Man ct = DataFrame(contingency_table.unstack(), columns=['count']).reset_index() ct # pl: Grupa kontrolna -> eng: Contol group # # pl: Harcerze -> eng: Scouts dictionary_is_polish_scout = { False: 'Grupa kontrolna', True: 'Harcerze' } ct.is_polish_scout = ct.is_polish_scout.replace(dictionary_is_polish_scout) ct.is_polish_scout gr = sns.catplot( x="sex", y="count", hue="is_polish_scout", data=ct, kind="bar", palette="muted", legend=False, height=8, aspect=14 / 8 ) gr.ax.legend(loc='upper right', frameon=False) labels(title='Rozkład płci pomiędzy badanymi grupami', x=' ', y='Liczna respondentów', legend=False); is_scout_sex[True]['Kobieta'] no_scout_sex_ratio_w_to_m = (is_scout_sex[False]['Kobieta']) / (is_scout_sex[False]['Mężczyzna']) scout_sex_ratio_w_to_m = (is_scout_sex[True]['Kobieta']) / (is_scout_sex[True]['Mężczyzna']) print( f'Control group women to men ratio: {round(no_scout_sex_ratio_w_to_m, 2)}\n' f'Scouts women to men ratio: {round(scout_sex_ratio_w_to_m, 2)}' ) gr = sns.FacetGrid(g, row="sex", col="is_polish_scout", margin_titles=True) gr.map(sns.kdeplot, "age", color="steelblue", shade=True, bw=0.45); axes = gr.axes.flatten() axes[0].set_title("Control\ngroup") axes[1].set_title("Scouts"); # ### Place distributipon table = crosstab(g.place_of_residence, g.is_polish_scout) table t = DataFrame(table.unstack(), columns=['count']).reset_index() def licz_procent(wiersz): return 100 * wiersz['count'] / len(g[g.is_polish_scout == wiersz.is_polish_scout]) t = t.assign(procent_grupy=t.apply(licz_procent, axis='columns')) t dict_place_of_residence = { 'Miasto do 50 tys. mieszkańców': 'Miasto\ndo 50 tys.\nmieszkańców', 'Miasto od 50 do 100 tys. mieszkańców': 'Miasto\nod 50 do 100 tys.\nmieszkańców', 'Miasto powyżej 100 tys. 
mieszkańców': 'Miasto powyżej\n100 tys.\nmieszkańców' } t.place_of_residence = t.place_of_residence.replace(dict_place_of_residence) t.place_of_residence dict_is_polish_scout = { False: 'Grupa kontrolna', True: 'Harcerze' } t.is_polish_scout = t.is_polish_scout.replace(dict_is_polish_scout) t.is_polish_scout t.groupby(t.is_polish_scout).apply t # replace column name from 'is_polish_scout' to ' ' t.rename(columns={'is_polish_scout': 'group'}, inplace=True) graph = sns.catplot( x="place_of_residence", y="procent_grupy", hue="group", data=t, kind="bar", palette="muted", aspect=3/2, height=10, legend=False ) graph.ax.legend(loc='upper left', frameon=False) labels( title='Miejsce zamieszkania w badanych grupach', x=' ', y='Procent respondentów w badanych grupach', legend=False ); # + counts = DataFrame( g[~g.is_polish_scout].place_of_residence.value_counts().reset_index() ).rename({'index': 'place', 'place_of_residence': 'total'}, axis='columns') scout_counts = DataFrame( g[g.is_polish_scout].place_of_residence.value_counts().reset_index() ).rename({'index': 'place', 'place_of_residence': 'total'}, axis='columns') control = { 'miasto': sum(counts[counts.place.str.startswith('Miasto')].total), 'wieś': counts[counts.place == 'Wieś'].total.iloc[0] } control # - scouts_miejsce = { 'miasto': sum(scout_counts[scout_counts.place.str.startswith('Miasto')].total), 'wieś': scout_counts[scout_counts.place == 'Wieś'].total.iloc[0] } scouts_miejsce control_sum = control['miasto'] + control['wieś'] 100 * control['miasto'] / control_sum scouts_sum = scouts_miejsce['miasto'] + scouts_miejsce['wieś'] 100 * scouts_miejsce['miasto'] / scouts_sum total_gus = gus_2018[['city_total', 'rural_total']].apply(sum) total_gus # + contingency_table = [ # all populacja [control['miasto'], total_gus.loc['city_total']], # miasto [control['wieś'], total_gus.loc['rural_total']], # wieś ] print('Miasto / wieś') print('Kontrolna', control['miasto'] / control['wieś']) print('Harcerze', scouts_miejsce['miasto'] / scouts_miejsce['wieś']) print('GUS', total_gus.loc['city_total'] / total_gus.loc['rural_total']) stats.fisher_exact(contingency_table), stats.chi2_contingency(contingency_table) # - (control['miasto'] / control['wieś']) / (total_gus.loc['city_total'] / total_gus.loc['rural_total']) # ### Demographics corelations show_corelations(g, only=[ 'age', 'is_women', 'size_of_residence_place', 'is_polish_scout' ]) stats.spearmanr(g.is_polish_scout, g.age); # średniomaly efekt stats.spearmanr(g.is_polish_scout, g.size_of_residence_place) stats.spearmanr(g.is_polish_scout, g.is_women) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="9nIsY0K4fSQr" cellView="form" #@markdown Configuración Inicial from IPython.utils import io from google.colab.data_table import DataTable from IPython.display import display, display_svg from IPython.display import Javascript from IPython.display import Markdown, Latex from IPython.display import Audio, Image from IPython.display import IFrame, HTML with io.capture_output() as capt: # https://matplotlib-axes-aligner.readthedocs.io/en/latest/ # !pip install mpl-axes-aligner # !pip install gradio import gradio as gr import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import patches from mpl_toolkits.mplot3d import Axes3D # https://matplotlib-axes-aligner.readthedocs.io/en/latest/ from mpl_axes_aligner import align 
import random from scipy import constants as const from scipy import stats as st from sympy import Point, Polygon # Avoids scroll-in-the-scroll in the entire Notebook # https://stackoverflow.com/a/66891328 def resize_colab_cell(): display(Javascript( 'google.colab.output.setIframeHeight(0, true, {maxHeight: 5000})' )) get_ipython().events.register('pre_run_cell', resize_colab_cell) def dLatex(self): return display(Latex(self)) def dMarkdown(self): return display(Markdown(self)) # + colab={"base_uri": "https://localhost:8080/", "height": 443} id="I4ZTz2wS8JVU" executionInfo={"status": "ok", "timestamp": 1637724248762, "user_tz": 300, "elapsed": 1258, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="be5fe161-b937-44d1-9018-bcff03b1c501" G_dict = const.physical_constants['Newtonian constant of gravitation'] dMarkdown(f'G: {G_dict}') def sphereGrav(x, z, R, rho): A = 4*np.pi*const.G*rho*R**3 xz = (x/z)**2+1 B = 3*z**2*xz**(3/2) g = A/B if np.isscalar(x): return g else: return np.c_[x, g] xx = np.r_[-100:100:501j] xe = 23 p_0 = { 'z': -20, #(m) 'R': 8, #(m) 'rho': 1000 #(kg/m^3) } p_1 = { 'z': -45, #(m) 'R': 12, #(m) 'rho': 600 #(kg/m^3) } columns = ['x','z','R','rho','g'] df = pd.DataFrame([p_0,p_1], columns=columns[:-1]) df['x'] = xe display(df) phi, theta = np.mgrid[0:1*np.pi:100j, 0:2*np.pi:100j] sp = lambda R: R*np.array([np.sin(phi)*np.cos(theta), np.sin(phi)*np.sin(theta), np.cos(phi) ]) xyz = lambda x, z, R: [sum(_) for _ in zip(sp(R), (x, 0, z))] fig = plt.figure(figsize=plt.figaspect(1)) ax = Axes3D(fig) ax.plot_surface(*xyz(x=0, z=p_0['z'], R=p_0['R']), color='tab:cyan', lw=0, alpha=2/3) ax.plot_surface(*xyz(x=0, z=p_1['z'], R=p_1['R']), color='tab:blue', lw=0, alpha=2/3) ax.set_xlim(-30, +30) ax.set_ylim(-30, +30) ax.set_zlim(-60, 0) ax.set_xlabel('x (m)') ax.set_ylabel('y (m)') ax.set_zlabel('z (m)') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 354} id="nTlpGts65TAh" executionInfo={"status": "ok", "timestamp": 1637724248763, "user_tz": 300, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="08b5a15a-14c8-4fb1-9352-1c98e28aaf8a" p_0b = p_0.copy() p_0b['x'] = xe S_0 = sphereGrav(**p_0b) p_0b['g'] = S_0 table_0 = pd.DataFrame(p_0b, columns=columns, index=[0]) display(table_0) fig, ax_z = plt.subplots() ax_g = ax_z.twinx() ax_z.axhline(c='k', lw=1, zorder=0) ax_g.axvline(100, c='k', lw=1) p = ax_g.plot(*sphereGrav(xx, **p_0).T) c = p[-1].get_color() circle_0 = plt.Circle((0, p_0['z']), p_0['R'], fill=True, color=c) ax_z.add_artist(circle_0) ax_z.set_aspect(1) ax_g.set_yticks(np.r_[0:4e-7:6j]) ax_z.set_yticks(np.r_[-60:0:7j]) ax_z.set_xlim(-100,100) ax_z.set_xticks(np.r_[-92:92:9j]) align.yaxes(ax_g, 0, ax_z, 0, 1/2) ax_z.set_xlabel('x (m)') ax_z.set_ylabel('z (m)', y=1/4) ax_g.set_ylabel('g (mGal)', y=3/4) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 354} id="ccVOJ6g0A4ee" executionInfo={"status": "ok", "timestamp": 1637724249317, "user_tz": 300, "elapsed": 559, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="63cd7915-40b2-401d-c0b7-96510abbef73" p_1b = p_1.copy() p_1b['x'] = xe S_1 = sphereGrav(**p_1b) p_1b['g'] = S_1 table_1 = pd.DataFrame(p_1b, 
columns=columns, index=[1]) display(table_1) fig, ax_z = plt.subplots() ax_g = ax_z.twinx() ax_z.axhline(c='k', lw=1, zorder=0) ax_g.axvline(xe, c='k', lw=1) p = ax_g.plot(*sphereGrav(xx, **p_1).T) c = p[-1].get_color() circle_1 = plt.Circle((0, p_1['z']), p_1['R'], fill=True, color=c) ax_z.add_artist(circle_1) ax_z.set_aspect(1) ax_g.set_yticks(np.r_[0:4e-7:6j]) ax_z.set_yticks(np.r_[-60:0:7j]) ax_z.set_xlim(-100,100) ax_z.set_xticks(np.r_[-92:92:9j]) align.yaxes(ax_g, 0, ax_z, 0, 1/2) ax_z.set_xlabel('x (m)') ax_z.set_ylabel('z (m)', y=1/4) ax_g.set_ylabel('g (mGal)', y=3/4) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 687} id="8kg6LQPxTZnu" executionInfo={"status": "ok", "timestamp": 1637724249632, "user_tz": 300, "elapsed": 319, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="1fa51cee-dd7a-4d53-8e4d-18b9ee194f52" table = pd.concat([table_0, table_1]) display(table) err = abs(S_1-S_0)/(S_1+S_0)*2 dMarkdown(f'err={err:.6g}') fig, ax_z = plt.subplots() ax_g = ax_z.twinx() ax_z.axhline(c='k', lw=1, zorder=0) ax_g.axvline(xe, c='k', lw=1) for p_i in (p_0, p_1): p = ax_g.plot(*sphereGrav(xx, **p_i).T) c = p[-1].get_color() circle_i = plt.Circle((0, p_i['z']), p_i['R'], fill=True, color=c) ax_z.add_artist(circle_i) ax_z.set_aspect(1) ax_g.set_yticks(np.r_[0:4e-7:6j]) ax_z.set_yticks(np.r_[-60:0:7j]) ax_z.set_xlim(-100,100) ax_z.set_xticks(np.r_[-92:92:9j]) align.yaxes(ax_g, 0, ax_z, 0, 1/2) ax_z.set_xlabel('x (m)') ax_z.set_ylabel('z (m)', y=1/4) ax_g.set_ylabel('g (mGal)', y=3/4) plt.show() xxb = np.r_[22.5:23.5:11j] fig, ax = plt.subplots() ax.plot(*sphereGrav(xxb, **p_0).T) ax.plot(*sphereGrav(xxb, **p_1).T) ax.axvline(xe, c='k', lw=1) ax.set_xlabel('x (m)') ax.set_ylabel('g (mGal)') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 319} id="b-o3NKxKdK5Q" executionInfo={"status": "ok", "timestamp": 1637724249966, "user_tz": 300, "elapsed": 339, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="8da355bb-69d9-43cf-bdf3-96a4b737983a" def fun_sphere(p_a, p_b): fig, ax_z = plt.subplots() ax_g = ax_z.twinx() ax_z.axhline(c='k', lw=1, zorder=0) for p_i in (p_a, p_b): p = ax_g.plot(*sphereGrav(xx, **p_i).T) c = p[-1].get_color() circle_i = plt.Circle((0, p_i['z']), p_i['R'], fill=True, color=c, alpha=1/2) ax_z.add_artist(circle_i) ax_z.set_aspect(1) ax_g.set_yticks(np.r_[0:4e-7:6j]) ax_z.set_yticks(np.r_[-60:0:7j]) ax_z.set_xlim(-100,100) ax_z.set_xticks(np.r_[-92:92:9j]) align.yaxes(ax_g, 0, ax_z, 0, 1/2) ax_z.set_xlabel('x (m)') ax_z.set_ylabel('z (m)', y=1/4) ax_g.set_ylabel('g (mGal)', y=3/4) fig.tight_layout(pad=0) plt.close() return fig fun_sphere(p_0, p_1) # + colab={"base_uri": "https://localhost:8080/", "height": 336} id="Bp5dyaPufklt" executionInfo={"status": "ok", "timestamp": 1637724250318, "user_tz": 300, "elapsed": 354, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="24253bd8-035b-463e-d73a-c65701ebd1a4" my_fun = lambda z, R, rho: fun_sphere( p_0.copy(), {'z': z, 'R': R, 'rho': rho}) print(p_1) my_fun(**p_1) # + colab={"base_uri": "https://localhost:8080/", "height": 1039} id="gLjDQDw9hKU_" executionInfo={"status": "ok", "timestamp": 1637724256268, "user_tz": 
300, "elapsed": 5954, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="7573c056-524c-4580-e080-589c7f163e7e" iface = gr.Interface( fn=my_fun, inputs=[gr.inputs.Slider(-60, -10 , .1, default=p_0['z']), gr.inputs.Slider(5, 15 , .1, default=p_0['R']), gr.inputs.Slider(0, 2500 , 10, default=p_0['rho'])], outputs='plot', live=True, allow_flagging=False, allow_screenshot=False, # title='Gravedad Teorica', # description='Valores Teoricos de Gravedad', # article = """

    # # Wikipedia | Theoretical Gravity

    """, examples=[list(p_i.values()) for p_i in (p_1, p_0)], theme='huggingface', # "default", "compact" or "huggingface" layout='unaligned' # 'horizontal', 'unaligned', 'vertical' ) with io.capture_output() as captured: iface.launch(inline=True) print(iface.share_url) IFrame(src=iface.share_url, width=1200, height=1000) # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="9hTyTDZ0Phja" executionInfo={"status": "ok", "timestamp": 1637724256999, "user_tz": 300, "elapsed": 741, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjJYTjYLxPtS7eNvnLLzMN3nzPcMEEgWNX5XBsozg=s64", "userId": "11800081141013004956"}} outputId="ce934db5-73a0-4281-c6b7-8dbaa1e8b018" def Talwani(Model, XZ, rho=600): k = len(Model) lenXZ = len(XZ) xietalist = [Model-XZ[i] for i in range(lenXZ)] lenxieta = len(xietalist) grav = np.empty(lenxieta) for j in range(lenxieta): xi = xietalist[j].T[0] eta = xietalist[j].T[1] sum = 0 for i in range(k): A = (xi[i-1]*eta[i] - xi[i]*eta[i-1])/\ ((xi[i]-xi[i-1])**2 + (eta[i]-eta[i-1])**2) B1 = 0.5*(eta[i] - eta[i-1])*\ np.log((xi[i]**2 + eta[i]**2)/\ (xi[i-1]**2 + eta[i-1]**2)) B2 = (xi[i] - xi[i-1])*\ (np.arctan(xi[i]/eta[i])-\ np.arctan(xi[i-1]/eta[i-1])) sum += A*(B1+B2) grav[j] = sum grav = 1e6*2*const.G*rho*grav return np.c_[XZ[:,0], grav] def draw_eye(s, q=(1,1), d=(0,0), N=50): m = N//2 n = (N-m) q = np.array(q)/2 gauss = st.norm(0, 1/s) f = lambda x: (gauss.pdf(x)-gauss.pdf(1))/\ (gauss.pdf(0)-gauss.pdf(1)) ii = np.r_[-1:+1:(n+1)*1j] jj = np.r_[+1:-1:(m+1)*1j] top = np.c_[ii, +f(ii)][:-1] bottom = np.c_[jj, -f(jj)][:-1] eye = q*np.r_[top, bottom] + d return eye XZ = np.c_[-90:90:101j, 0:0:101j] cosas = np.c_[1:5.7:5j, 40:10:5j, 10:40:5j, -10:-70:5j, 600:1500:5j ] fig, ax_z = plt.subplots() ax_g = ax_z.twinx() ax_z.axhline(c='k', lw=1, zorder=0) for w, qx, qz, dz, rho in cosas[:3]: eye = draw_eye(w, (qx, qz), (0, dz), 100) grav = Talwani(eye, XZ, rho) p = ax_g.plot(*grav.T) c = p[-1].get_color() eye_poly = patches.Polygon( eye, Fill = True, color=c) ax_z.add_patch(eye_poly) ax_z.set_aspect(1) ax_g.set_yticks(np.r_[0:1.6:6j]) ax_z.set_yticks(np.r_[-60:0:7j]) ax_z.set_xlim(-90,90) ax_z.set_ylim(-100,0) ax_z.set_xticks(np.r_[-90:90:9j]) align.yaxes(ax_g, 0, ax_z, 0, 1/2) ax_z.set_xlabel('x (m)') ax_z.set_ylabel('z (m)', y=1/4) ax_g.set_ylabel('g (mGal)', y=3/4) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt from fABBA import fabba_model from fABBA import ABBAbase from fABBA import compress np.random.seed(1) N = 100 ts = [np.sin(0.05*i) for i in range(1000)] # original time series fabba = fabba_model(tol=0.1, alpha=0.1, sorting='norm', scl=1, verbose=0) print(fabba) string = fabba.fit_transform(ts) print(string) inverse_ts = fabba.inverse_transform(string, ts[0]) # print(inverse_ts) plt.plot(ts, label='time series') plt.plot(inverse_ts, label='reconstruction') plt.legend() plt.grid(True, axis='y') plt.savefig('demo.png', bbox_inches='tight') plt.show() from fABBA import digitize from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=5, random_state=0, init='k-means++') abba = ABBAbase(tol=0.1, scl=1, clustering=kmeans, verbose=0) string = abba.fit_transform(ts) print(string) # + from fABBA import compress from fABBA import inverse_compress pieces = compress(ts, 0.1) print(pieces) inverse_ts = 
inverse_compress(pieces, ts[0]) plt.plot(ts, label='time series') plt.plot(inverse_ts, label='reconstruction') plt.legend() plt.grid(True, axis='y') plt.show() # + from fABBA import digitize from fABBA import inverse_digitize string, parameters = digitize(pieces, alpha=0.1, sorting='2-norm', scl=1) # compression of the polygon print(''.join(string)) # prints BbAaAaAaAaAaAaAaC inverse_pieces = inverse_digitize(string, parameters) inverse_ts = inverse_compress(inverse_pieces, ts[0]) # numerical time series # + tags=[] import numpy as np import matplotlib.pyplot as plt from fABBA.load_datasets import load_images from fABBA import image_compress from fABBA import image_decompress from fABBA import fabba_model from cv2 import resize img_samples = load_images() # load fABBA image test samples img = resize(img_samples[0], (100, 100)) # select the first image for test fabba = fabba_model(tol=0.1, alpha=0.01, sorting='2-norm', scl=1, verbose=1, max_len=-1) strings = image_compress(fabba, img) inverse_img = image_decompress(fabba, strings) # - IMG = plt.imread('samples/img/n02101556_4241.jpg') plt.imshow(img) plt.savefig('img.png', bbox_inches='tight') plt.show() plt.imshow(inverse_img) plt.savefig('inverse_img.png', bbox_inches='tight') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn as sk import datetime import seaborn as sns from datetime import date train = pd.read_csv("Data/train_users_2.csv") test = pd.read_csv("Data/test_users.csv") print('the columns name of training dataset:\n',train.columns) print('the columns name of test dataset:\n',test.columns) # + dac_train = train.date_account_created.value_counts() dac_test = test.date_account_created.value_counts() # change data to datetime type dac_train_date = pd.to_datetime(train.date_account_created.value_counts().index) dac_test_date = pd.to_datetime(test.date_account_created.value_counts().index) # Calculate the number of days from the registration time of the first registered user in the training set dac_train_day = dac_train_date - dac_train_date.min() dac_test_day = dac_test_date - dac_train_date.min() plt.scatter(dac_train_day.days, dac_train.values, color = 'r', label = 'train dataset') plt.scatter(dac_test_day.days, dac_test.values, color = 'b', label = 'test dataset') plt.title("Accounts created vs day") plt.xlabel("Days") plt.ylabel("Accounts created") plt.legend(loc = 'upper left') # - tfa_train_dt = train.timestamp_first_active.astype(str).apply(lambda x: datetime.datetime(int(x[:4]), int(x[4:6]), int(x[6:8]), int(x[8:10]), int(x[10:12]), int(x[12:]))) print(tfa_train_dt.describe()) print(train.date_first_booking.describe()) print(test.date_first_booking.describe()) print(train.age.value_counts().head()) # + # Divide age into 4 groups,missing values, too small age, reasonable age, too large age age_train =[train[train.age.isnull()].age.shape[0], train.query('age < 15').age.shape[0], train.query('age <= 15 & age <25').age.shape[0], train.query('age <= 25 & age <35').age.shape[0], train.query('age <= 35 & age <45').age.shape[0], train.query('age <= 45 & age <55').age.shape[0], train.query("age <= 55 & age <65").age.shape[0], train.query("age <= 65 & age <75").age.shape[0], train.query("age <= 75 & age <85").age.shape[0], train.query('age > 85').age.shape[0]] age_test = 
[test[test.age.isnull()].age.shape[0], test.query('age < 15').age.shape[0], test.query("age >= 15 & age <= 85").age.shape[0], test.query('age > 90').age.shape[0]] columns = ['Null', 'age < 15', 'age', 'age > 85'] # plot fig, (ax1,ax2) = plt.subplots(1,2,sharex=True, sharey = True,figsize=(10,5)) sns.barplot(columns, age_train, ax = ax1) sns.barplot(columns, age_test, ax = ax2) ax1.set_title('training dataset') ax2.set_title('test dataset') ax1.set_ylabel('counts') # - def feature_barplot(feature, df_train = train, df_test = test, figsize=(10,5), rot = 90, saveimg = True): feat_train = df_train[feature].value_counts() feat_test = df_test[feature].value_counts() fig_feature, (axis1,axis2) = plt.subplots(1,2,sharex=True, sharey = True, figsize = figsize) sns.barplot(feat_train.index.values, feat_train.values, ax = axis1) sns.barplot(feat_test.index.values, feat_test.values, ax = axis2) axis1.set_xticklabels(axis1.xaxis.get_majorticklabels(), rotation = rot) axis2.set_xticklabels(axis1.xaxis.get_majorticklabels(), rotation = rot) axis1.set_title(feature + ' of training dataset') axis2.set_title(feature + ' of test dataset') axis1.set_ylabel('Counts') plt.tight_layout() if saveimg == True: figname = feature + ".png" fig_feature.savefig(figname, dpi = 75) feature_barplot('gender', saveimg = True) feature_barplot('signup_method') feature_barplot('signup_flow') feature_barplot('language') feature_barplot('affiliate_channel') feature_barplot('first_affiliate_tracked') feature_barplot('signup_app') feature_barplot('first_device_type') feature_barplot('first_browser') def label_barplot(feature, df_train = train, figsize=(10,5), rot = 90, saveimg = False): feat_train = df_train[feature].value_counts() fig = plt.figure(figsize=(8,4)) sns.barplot(feat_train.index.values, feat_train.values) plt.title(feature + ' of training dataset') plt.ylabel('Counts') plt.tight_layout() if saveimg == True: figname = feature + ".png" fig_feature.savefig(figname, dpi = 75) label_barplot('country_destination') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ___ # # # ___ #
# Copyright
#
# For more information, visit us at www.pieriandata.com
    # # # # AutoEncoders on Image Data # ## The Data import pandas as pd import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() plt.imshow(X_train[0]) X_train = X_train/255 X_test = X_test/255 # ## Basic AutoEncoder from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense,Flatten,Reshape from tensorflow.keras.optimizers import SGD 783/2 encoder = Sequential() encoder.add(Flatten(input_shape=[28,28])) encoder.add(Dense(400,activation="relu")) encoder.add(Dense(200,activation="relu")) encoder.add(Dense(100,activation="relu")) encoder.add(Dense(50,activation="relu")) encoder.add(Dense(25,activation="relu")) decoder = Sequential() decoder.add(Dense(50,input_shape=[25],activation='relu')) decoder.add(Dense(100,activation='relu')) decoder.add(Dense(200,activation='relu')) decoder.add(Dense(400,activation='relu')) decoder.add(Dense(28 * 28, activation="sigmoid")) decoder.add(Reshape([28, 28])) autoencoder = Sequential([encoder, decoder]) autoencoder.compile(loss="binary_crossentropy",optimizer=SGD(lr=1.5),metrics=['accuracy']) autoencoder.fit(X_train, X_train, epochs=5,validation_data=[X_test, X_test]) passed_images = autoencoder.predict(X_test[:10]) plt.imshow(passed_images[0]) plt.imshow(X_test[0]) # # AutoEncoders for Denoising Images from tensorflow.keras.layers import GaussianNoise sample = GaussianNoise(0.2) noisey = sample(X_test[0:2],training=True) plt.imshow(X_test[0]) plt.imshow(noisey[0]) # ### Create noise removal autoencoder and train it. import tensorflow as tf import numpy as np # + # TO create the exact same noise as us (optional) tf.random.set_seed(101) np.random.seed(101) encoder = Sequential() encoder.add(Flatten(input_shape=[28,28])) # Add noise to images before going through autoencoder encoder.add(GaussianNoise(0.2)) encoder.add(Dense(400,activation="relu")) encoder.add(Dense(200,activation="relu")) encoder.add(Dense(100,activation="relu")) encoder.add(Dense(50,activation="relu")) encoder.add(Dense(25,activation="relu")) # - decoder = Sequential() decoder.add(Dense(50,input_shape=[25],activation='relu')) decoder.add(Dense(100,activation='relu')) decoder.add(Dense(200,activation='relu')) decoder.add(Dense(400,activation='relu')) decoder.add(Dense(28 * 28, activation="sigmoid")) decoder.add(Reshape([28, 28])) noise_remover = Sequential([encoder, decoder]) noise_remover.compile(loss="binary_crossentropy", optimizer='adam',metrics=['accuracy']) noise_remover.fit(X_train, X_train, epochs=8, validation_data=[X_test, X_test]) ten_noisey_images = sample(X_test[0:10],training=True) denoised = noise_remover(ten_noisey_images[0:10]) n = 1 print("The Original") plt.imshow(X_test[n]) plt.show() print("The Noisey Version") plt.imshow(ten_noisey_images[n]) plt.show() print("After going through denoiser") plt.imshow(denoised[n]) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import numpy as np import json # + def load_jsonl(fname): fin = open(fname, encoding="utf-8") data = [] for line in fin: d = json.loads(line.strip()) data.append(d) return data def save_jsonl(data, filename): with open(filename, "w", encoding="utf-8") as fo: for idx, d in enumerate(data): fo.write(json.dumps(d, ensure_ascii=False)) fo.write("\n") # - # ls .. 
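# Quick round-trip check of the JSONL helpers defined above. The record layout
# shown here ("id", "data", "label", "meta") is an assumption inferred from how
# the annotation files are consumed below, and the path is just a throwaway
# temp file, not part of the original dataset.
_demo_records = [{"id": 0, "data": "ตัวอย่างข้อความ", "label": [], "meta": {"category": 2}}]
save_jsonl(_demo_records, "/tmp/_demo_records.jsonl")
load_jsonl("/tmp/_demo_records.jsonl") == _demo_records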
data = [] DIR = "../Datasets/WisesightSentiment/few-shot/" for i in range(1, 5+1): d = load_jsonl(f"{DIR}/misp{i}/all.jsonl") for sent in d: del sent["id"] sent["meta"]["part"] = i d = pd.DataFrame(d) data.append(d) print("#Annotators:", len(data)) # # Inter-Annotator Agreement # # import sklearn from sklearn.metrics import cohen_kappa_score import statsmodels from statsmodels.stats.inter_rater import fleiss_kappa # + # from sklearn.metrics import cohen_kappa_score # y_true = [2, 0, 2, 2, 0, 1] # y_pred = [0, 0, 2, 2, 0, 2] # # + # from itertools import combinations # pairs = list(combinations(range(5), 2)) # - # ### Sentence-wise # + # import modules import matplotlib.pyplot as mp import pandas as pd import seaborn as sb import numpy as np def has_misspelling(val): return int(len(val)==0) mat = [] for p1 in range(5): row = [] for p2 in range(5): if p2 >= p1: row.append(0) continue d1 = data[p1] d2 = data[p2] d = d1.merge(d2, on="data", how="inner") m1 = d["label_x"].apply(has_misspelling) m2 = d["label_y"].apply(has_misspelling) row.append(cohen_kappa_score(m1, m2)) print(p1, p2, cohen_kappa_score(m1, m2), (m1!=m2).sum()) mat.append(row) # - mask = np.triu(np.ones_like(mat)) # # plotting a triangle correlation heatmap dataplot = sb.heatmap(mat, cmap="YlGnBu", annot=True, mask=mask, vmin=0, vmax=1) fig = dataplot.get_figure() # fig.savefig("../Figures/iaa1.png") # ### Token-wise # + def get_segments(text, labels): seg = [] pt = 0 labels = sorted(labels, key=lambda x: x[0], reverse=False) for l in labels: seg.append({"text": text[pt:l[0]], "misp": False, "int":False, "s": pt, "t":l[0]}) mint = (l[2]=='ตั้งใจ') seg.append({"text": text[l[0]:l[1]], "misp": True, "int":mint, "s": l[0], "t":l[1]}) pt = l[1] seg.append({"text": text[pt:], "misp": False, "int":False, "s": pt, "t":len(text)}) length = sum([len(s["text"]) for s in seg]) assert(length==len(text)) return seg def overlap_segments(text, segments): idx = set() for seg in segments: for s in seg: idx.add(s["s"]) idx.add(s["t"]) idx = sorted(idx) newseg = [] for i, _ in enumerate(idx): if i==0: continue newseg.append({ "text": text[idx[i-1]:idx[i]], "s": idx[i-1], "t": idx[i], }) o = [] for seg in segments: ns = [] for s in newseg: for ref in seg: if s["s"] >= ref["s"] and s["s"] < ref["t"]: break ns.append({ "text": s["text"], "misp": ref["misp"], "int": ref["int"], "s": s["s"], "t": s["t"], }) o.append(ns) return o mat = [] for p1 in range(5): row = [] for p2 in range(5): if p2 >= p1: row.append(0) continue d1 = data[p1] d2 = data[p2] d = d1.merge(d2, on="data", how="inner") s1 = [] s2 = [] for idx, sent in d.iterrows(): seg1 = get_segments(sent["data"], sent["label_x"]) seg2 = get_segments(sent["data"], sent["label_y"]) seg = overlap_segments(sent["data"], [seg1, seg2]) for s in seg[0]: if not s["misp"]: s1.append(0) elif s["int"]: s1.append(1) else: s1.append(2) for s in seg[1]: if not s["misp"]: s2.append(0) elif s["int"]: s2.append(1) else: s2.append(2) row.append(cohen_kappa_score(s1, s2)) print(p1, p2, cohen_kappa_score(s1, s2), (np.array(s1)!=np.array(s2)).sum()*100/len(s1)) mat.append(row) # - mask = np.triu(np.ones_like(mat)) # # plotting a triangle correlation heatmap dataplot = sb.heatmap(mat, cmap="YlGnBu", annot=True, mask=mask, vmin=0, vmax=1) fig = dataplot.get_figure() # fig.savefig("Figures/iaa2.png") # # Intention Labelling Entropy across Sentences from pythainlp.tokenize import word_tokenize engine = "deepcut" # word_tokenize("ฉันรักแมว", engine=engine) reference = 
load_jsonl(f"{DIR}/missplling_train_wisesight_samples.jsonl") reference = pd.DataFrame(reference) O = reference for i, d in enumerate(data): d = d[["data", "label"]] d.columns = ["text", f"misp{i}"] O = O.merge(d, on="text", how="left") # + import copy from tqdm import tqdm from itertools import groupby def norm_word(word): groups = [list(s) for _, s in groupby(word)] ch = [] extraToken = "" for g in groups: if len(g)>=3: extraToken = "" ch.append(g[0]) word = "".join(ch)+extraToken return word def tolabel(n): if n==2: return "neg" elif n==1: return "neu" elif n==0: return "pos" else: raise(f"Unknow label: {n}") merged = [] for idx, row in tqdm(O.iterrows(), total=len(O)): segs = [] for i in range(5): if pd.isna([row[f"misp{i}"]]).all(): seg = get_segments(row["text"], []) else: seg = get_segments(row["text"], row[f"misp{i}"]) segs.append(seg) # seg2 = get_segments(sent["data"], sent["label_y"]) o = overlap_segments(row["text"], segs) tokens = [] # if row["text"]=="อีดออกกก ฟังเพลงล้ะมีโฆษณาเอ็มเคชีสซิ๊ดเเซ่บคือเหี้ยใร": # print(o) for i in range(len(o[0])): s = copy.copy(o[0][i]) mispProb = 0 intProb = 0 # assert(len(o)==5) for j in range(len(o)): if (o[j][i]["misp"]): mispProb += 1/3 if (o[j][i]["int"]): intProb += 1/3 assert(mispProb <= 1) assert(intProb <= 1) if (mispProb < 0.5) and (intProb < 0.5): continue s["int"] = intProb s["msp"] = mispProb if s["text"]=="ใร": print(s, row["text"]) # s["misp"] = s["text"] # del s["text"] # s["int"] = (intProb > 0.5) # s["tokens"] = word_tokenize(s["text"], engine=engine) s["corr"] = None tokens.append(s) merged.append({ "text": row["text"], "category": tolabel(row["meta"]["category"]), "misp_tokens": tokens }) merged = pd.DataFrame(merged) # {"corr": "ไหม", "misp": "มั้ย", "int": true, "s": 67, "t": 71} # - tokenized = load_jsonl(f"{DIR}/../tokenized_train-misp-3000.jsonl") tokenized = pd.DataFrame(tokenized) # + # merged # + # tokenized # - sents = merged.merge(tokenized[["text", "segments"]], on="text") # + from collections import defaultdict cnt = defaultdict(int) cntmsp = defaultdict(list) cntint = defaultdict(list) def cal_entropy(labels): s = pd.Series(labels) counts = s.value_counts() return entropy(counts) for idx, sent in sents.iterrows(): for seg in sent["segments"]: for w in seg[0]: cnt[norm_word(w)] += 1 for m in sent["misp_tokens"]: norm = norm_word(m["text"]) # cntmsp[norm].append(int(m["msp"] > 0.5)) cntint[norm].append(int(m["int"] > 0.5)) # cntint[norm].append(m["int"]) # + from scipy.stats import entropy mispconsis = {} for m in cntint: if (cnt[m] < 5): continue mispconsis[m] = cal_entropy(cntint[m]) # mispconsis # - mispconsis values = {k: v for k, v in sorted(mispconsis.items(), key=lambda item: -item[1])} x = [] y = [] for i, k in enumerate(values): x.append(k) y.append(values[k]) # print(k, values[k]) # if i > 10: # break # + # # import warnings # # warnings.filterwarnings("ignore") # import matplotlib.pyplot as plt # import matplotlib.font_manager as fm # font_dirs = ["../"] # font_files = fm.findSystemFonts(fontpaths=font_dirs) # for font_file in font_files: # fm.fontManager.addfont(font_file) # + # # set font # import matplotlib.pyplot as plt # plt.rcParams['font.family'] = 'TH Sarabun New' # plt.rcParams['xtick.labelsize'] = 20.0 # plt.rcParams['ytick.labelsize'] = 20.0 # # mx = len(x) # # plt.ylim(0, 1) # # plt.xticks(rotation=90) # # plt.rcParams["figure.figsize"] = (20,3) # # plt.xticks([], []) # # plt.bar(x[0:mx], y[0:mx]) # + # y # + plt.rcParams["figure.figsize"] = (10,6) plt.rcParams.update({'font.size': 25}) 
plt.ylabel("No. misspelt words") plt.xlabel("Entropy of the annotated labels (intentional/unintentional)") plt.hist(y, bins=10) plt.savefig('../Figures/int_entropy.png') # + # save_jsonl(merged, f"{DIR}/train-misp-3000.jsonl") # + # from collections import defaultdict # labels = defaultdict(int) # for sent in unified: # labels[sent["category"]] += 1 # # break # labels # - # # Term Frequency # + from collections import defaultdict cnt = defaultdict(int) misp = defaultdict(int) for idx, sent in sents.iterrows(): mispFound = False for seg in sent["segments"]: for m, c in zip(seg[0], seg[1]): if m!=c: mispFound = True cnt["misp"] += 1 n = norm_word(m) misp[n] += 1 cnt["token"] += 1 if mispFound: cnt["sent"] += 1 # # break? # + print("%Mispelling Sentence:", cnt["sent"]*100/len(sents), cnt["sent"]) print() print("#Misspelling:", cnt["misp"]) print("%Misspelling:", cnt["misp"]*100/cnt["token"]) print("#Unique Misspelling Tokens:", len(misp)) # + # from transformers import XLMRobertaTokenizerFast # tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-base') # import itertools # cnt["misp_sub"] = 0 # cnt["corr_sub"] = 0 # unk = tokenizer.convert_tokens_to_ids([""])[0] # for sent in trainmisp: # s = [list(zip(seg[0], seg[1])) for seg in sent["segments"]] # tokens = list(itertools.chain(*s)) # # misptokens = [t[0] for t in tokens] # # midx = tokenizer.convert_tokens_to_ids(misptokens) # # for i in range(len(midx)): # # if midx[i]==unk: # # t = tokenizer.tokenize("_"+misptokens[i])[1:] # # cnt["misp_sub"] += len(t) # # else: # # cnt["misp_sub"] += 1 # cnt["misp_sub"] += len(tokens) # cnt["corr_sub"] += len(tokenizer.tokenize(sent["text"])) # # break # print("%Subtokens Different", abs(cnt["misp_sub"]-cnt["corr_sub"])*100/cnt["corr_sub"]) # - # ## Most common misspelling words # + sortedmisp = {k: v for k, v in sorted(misp.items(), key=lambda item: -item[1])} for i, k in enumerate(sortedmisp): print(k, sortedmisp[k]) if i > 10: break # - # + x = [x for x in sortedmisp] y = [sortedmisp[k] for k in sortedmisp] mint = [int(np.average(cntint[k]) > 0.5) for k in sortedmisp] c = ['b' if i==1 else 'r' for i in mint] mx = 100 # plt.ylim(0, 1.2) # plt.xticks(rotation=90) # plt.rcParams["figure.figsize"] = (20,3) plt.xticks([], []) plt.bar(x[0:mx], y[0:mx], color=c[0:mx]) plt.ylabel("Frequency") plt.xlabel(f"\nMisspelling terms (top {mx})") plt.savefig("../Figures/tf.png") # _x = np.array(range(len(x))) # _y = (np.full(len(x), y[0]))/(_x+1) # plt.plot(_x[0:mx], _y[0:mx], "r") # - print("Common mispelling") cc = 0 obs = 0 for i in range(len(x)): if cc <= 20: print(cc, x[i], y[i], 'int' if mint[i]==1 else 'un') cc += 1 obs += y[i] # + print("Common intentional mispelling") cc = 0 obs = 0 for i in range(len(x)): if mint[i]==1: if cc <= 15: print(cc, x[i], y[i]) cc += 1 obs += y[i] print("#Intentional Words:", cc, obs) # - print("Common unintentional mispelling") cc = 0 obs = 0 for i in range(len(x)): if mint[i]!=1: if cc <= 10: print(x[i], y[i]) cc += 1 obs += y[i] print("#Unintentional Words:", cc, obs) # # Sentiment Class and Misspelling # + from collections import defaultdict from itertools import groupby labels = defaultdict(int) for idx, sent in sents.iterrows(): mispFound = False for seg in sent["segments"]: for m, c in zip(seg[0], seg[1]): if m!=c: mispFound = True if mispFound: labels[sent["category"]] += 1 for l in labels: print(l, labels[l], labels[l]/cnt["sent"]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="zj7d6enfXO0I" # ### Sentiment Analyisis - Natural Language Processing Task # + [markdown] id="OktPX6xTXazv" # ### Imports # + id="pjuSkXtBXJ9L" import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras import datasets import numpy as np import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/"} id="Uc6ZI-C5Xc2q" outputId="05bb156e-cf4e-4907-d6fa-8c2600d7e4ac" print(dir(datasets)) # + [markdown] id="CwXmc7RdXkWz" # ### Loading data # + colab={"base_uri": "https://localhost:8080/"} id="yO1d58mbXivq" outputId="9ba31af1-b398-4718-a64f-61a1fd8edd91" imdb = datasets.imdb.load_data(num_words=10000) # + id="hiSQo5NBXjR9" (X_train, y_train), (X_test, y_test) = imdb # + colab={"base_uri": "https://localhost:8080/"} id="c_FTYu_qXqqZ" outputId="69c823ce-da38-4d99-cb39-907b43402148" X_train[:2], y_train[:2] # + [markdown] id="HeItLjcvXuVc" # ### Concating the data # + id="tQ6cfpR8Xt_1" features = np.concatenate([X_train, X_test]) labels = np.concatenate([y_train, y_test]) # + colab={"base_uri": "https://localhost:8080/"} id="Ke0AxzfMXsre" outputId="9dedd73a-f66a-45fe-d373-427e0483b0fd" class_names =np.array( list(reversed(["positive", "negative"]))) targets = np.unique(labels) class_names, targets # + [markdown] id="gWxwArC3YVWG" # ### Getting all the words that correspond to the Index given # + colab={"base_uri": "https://localhost:8080/"} id="DdKXZRW0YMU6" outputId="9c3b7e0c-3cbe-4b5d-b40f-ba1df455b046" index = datasets.imdb.get_word_index() reverse_index = dict([(value, key) for (key, value) in index.items()]) reverse_index # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="203K2r58YXay" outputId="a8c46276-1e0c-444e-a47f-215ab8eb63b6" reverse_index[59386] # + colab={"base_uri": "https://localhost:8080/", "height": 171} id="dCny-KWbYanh" outputId="45e4c59c-8b97-4726-d93d-4f583284bb17" sent = [reverse_index.get(i - 3, "#") for i in features[0]] " ".join(sent) # + [markdown] id="VMR5zAW1YgtV" # > We are replacing every unknown word with a `#`. And we can see that this is a positive review # + [markdown] id="cbc8Ot6OYpgM" # ### DATA PREPARATION # Now it's time to prepare our data. We will vectorize every review and fill it with zeros so it contains exactly 10,000 numbers. That means we fill every review that is shorter than 10,000 with zeros. We need to do this because the biggest review is nearly that long and every input for our neural network needs to have the `same size`. We will also transform the targets into floats. 
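# As a quick illustration of the encoding described above (a toy, made-up example, not the dataset itself): a review containing word indices [1, 5, 5, 9998] becomes a 10,000-dimensional multi-hot vector with ones at positions 1, 5 and 9998 and zeros everywhere else, which is what the `vectorize` helper in the next cell does row by row.

# +
import numpy as np

toy_review = [1, 5, 5, 9998]   # word indices of a tiny, made-up review
toy_vec = np.zeros(10000)      # one slot per word in the 10,000-word vocabulary
toy_vec[toy_review] = 1        # mark every word that occurs at least once
print(int(toy_vec.sum()))      # 3 distinct words are set
# -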
# + id="DXvQka2_Yc3T" def vectorize(sequences, dim=10000): res = np.zeros((len(sequences), dim)) for i, seq in enumerate(sequences): res[i, seq] = 1 return res # + id="OR0pwhhDYsXw" X = vectorize(features) y = labels.astype(np.float32) # + id="EDxPNnU8YvB1" from sklearn.model_selection import train_test_split # + id="rZrWXJt3YvLE" X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33, test_size=.25 ) # + colab={"base_uri": "https://localhost:8080/"} id="C1fu2SflYzrg" outputId="bacc889a-e965-4cc9-ff70-5208cae5ded6" X_train.shape, y_train.shape, X_test.shape # + [markdown] id="e_JAKomXZdbD" # ### Building a Model # + colab={"base_uri": "https://localhost:8080/"} id="SHTjYWYAZdI0" outputId="b958ef84-9bef-4e98-955f-930a33383412" model = keras.Sequential([ keras.layers.Dense(32, activation='relu', input_shape=(10000, )), keras.layers.Dropout(0.3, noise_shape=None, seed=None), keras.layers.Dense(32, activation='relu'), keras.layers.Dropout(0.3, noise_shape=None, seed=None), keras.layers.Dense(50, activation = "relu"), keras.layers.Dropout(0.2, noise_shape=None, seed=None), keras.layers.Dense(50, activation = "relu"), # Output- Layer keras.layers.Dense(1, activation = "sigmoid") ]) model.summary() # + [markdown] id="LQZ2sLSQaR9H" # ### Compiling the model # + id="Jcgz-8D0Y2j7" model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss = keras.losses.BinaryCrossentropy(from_logits=True), metrics=["accuracy"] ) # + colab={"base_uri": "https://localhost:8080/"} id="4S4S52jiZQYs" outputId="5c52d54e-230c-414f-dfec-6e8b72afe357" BATCH_SIZE = 16 EPOCHS = 5 VALIDATION_SET = (X_test, y_test) model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=VALIDATION_SET) # + [markdown] id="DeSGuQ_rdNmk" # #### Evaluating the model # + colab={"base_uri": "https://localhost:8080/"} id="CS6vwWhGdSOP" outputId="c3386aa2-9755-40ad-b8d6-b62b3facb133" model.evaluate(X_test, y_test) # + [markdown] id="tR5YfbeWdFEG" # #### Making predictions # + colab={"base_uri": "https://localhost:8080/"} id="XSJto463bh87" outputId="9822f43d-7d30-4acd-c1f3-c851407f3fdd" X_test[0], y_test[5] # + colab={"base_uri": "https://localhost:8080/"} id="b9r_Fk23da75" outputId="436c710b-d855-47fa-e870-bdbb76e58d30" np.round(model.predict((X_test[:5]))), y_test[:5] # + colab={"base_uri": "https://localhost:8080/"} id="jYrR7J7be45q" outputId="957bca45-5f5a-49df-c7b6-23cc547a6015" y # + [markdown] id="0O3RsJqRh28n" # That's all # + id="z_P3ajIVggjo" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 7. 
Evaluation and results comparison # Evaluating stored metrics for different experiments # # + import pandas as pd import numpy as np import seaborn as sns import math from IPython.display import Image from matplotlib import pyplot as plt from matplotlib import patches from metrics import save_metrics from metrics import plot_metrics from metrics import init_metrics_file from models import evaluate_model from lightgbm import LGBMClassifier from sklearn.model_selection import train_test_split import time from sklearn.metrics import f1_score from sklearn.model_selection import GridSearchCV sns.set() # - # --- # + #init_metrics_file() # - # ### Define methods to plot metrics # + def plot_classifier(df_data_to_plot, classifier): plt.figure(figsize=(20,5)) x = range(0, df_data_to_plot.shape[1]) _x = np.arange(len(x)) labels = df_data_to_plot.columns plt.xticks(x, labels, rotation='vertical'); plt.bar(_x + 0, height=df_data_to_plot.loc[(classifier, 'accuracy')], width=.1, label='accuracy') plt.bar(_x + .1, height=df_data_to_plot.loc[(classifier, 'f1-score')], width=.1, label='f1-score') plt.bar(_x + .2, height=df_data_to_plot.loc[(classifier, 'auc-roc')], width=.1, label='auc-roc') plt.bar(_x + .3, height=df_data_to_plot.loc[(classifier, 'precision')], width=.1, label='precision') plt.bar(_x + .4, height=df_data_to_plot.loc[(classifier, 'recall')], width=.1, label='recall') plt.xticks(rotation=90); plt.title(f'Metrics comparison for classifier {classifier}'); plt.legend(); def plot_metric(df_data_to_plot, classifiers, metric): plt.figure(figsize=(20,5)) x = range(0, df.shape[1]) _x = np.arange(len(x)) labels = df.columns plt.xticks(x, labels, rotation='vertical'); width = 0 for classifier in classifiers: plt.bar(_x + width, height=df_data_to_plot.loc[(classifier, metric)], width=.1, label=classifier) width = width + .1 plt.xticks(rotation=90); plt.title(f'Metric {metric} comparison for multiple classifiers'); plt.legend(); # - # --- # ### Read metrics from a file df = plot_metrics() df = df.replace(to_replace='-', value=0).astype(float) cols = ['first_dataset', 'balanced', 'with_distance', 'with_angle', 'with_player_ids', 'with_player_stats', 'with_player_stats_tuned', 'with_player_salary', 'short_dist', 'long_dist'] df = df[cols] cols_top = [ 'balanced', 'with_distance', 'with_angle', 'with_player_stats', 'with_player_stats_tuned'] df_top = df[cols_top].copy() df # --- # ### Plot metrics per classifier plot_classifier(df, 'LogReg'); plot_classifier(df, 'LGBM'); plot_classifier(df, 'KNC'); # --- # ### Plot metrics per metric for all classifier plot_metric(df, ['LogReg', 'LGBM', 'KNC'], 'f1-score'); plot_metric(df, ['LogReg', 'LGBM', 'KNC'], 'accuracy'); plot_metric(df, ['LogReg', 'LGBM', 'KNC'], 'auc-roc'); plot_metric(df, ['LogReg', 'LGBM', 'KNC'], 'precision'); plot_metric(df, ['LogReg', 'LGBM', 'KNC'], 'recall'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import csv import operator import numpy as np # %matplotlib notebook import re import json import matplotlib as mpl import matplotlib.pyplot as plt from __future__ import print_function, division a = [] b = [] ab = np.empty((43,2)) with open('geocoordinatedata.csv', 'rt', encoding='utf8') as csvfile: csv.reader(csvfile, delimiter=' ', quotechar='|') sort = sorted(csvfile, key=operator.itemgetter(0)) for eachline in sort: coordinates = 
eachline.split('":{"type":"Point","coordinates":')[1].split('},"source')[0] filtered_tweets = list(filter(bool, re.split('[^a-z]', json.loads(eachline)["text"].lower()))) x = (coordinates.split('[')[1].split(',')[0]) y = (coordinates.split(',')[1].split(']')[0]) x_float, y_float = float(x), float(y) a.append(x_float) b.append(y_float) a_array, b_array = np.asarray(a), np.asarray(b) print(a_array) print() print(b_array) final = np.stack((a_array, b_array), axis=1) print(final) fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(1, 1, 1) plt.scatter(final[:,0], final[:,1], label='Exact Tweet Coordinates') ax.set_xlabel("Longitude: x") ax.set_ylabel("Latitude: y") ax.set_title("A simple plot.") ax.legend() ax.grid("on") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # A Wright-Fisher simulation implemented in C++ via Cython. # # This tutorial implements a Wright-Fisher simulation with mutation and recombination using [Cython](http://www.cython.org). Cython is two things: # # * A grammar/dialect of Python that allows static typing Python and of C/C++ types. # * A static compiler to turn the Cython grammar in to C or C++ code to compile into a Python extension module. # # Cython has a learning curve of its own. A lot of what is shown below reflects best practices. For those, we refer you to the [Cython documentation](https://cython.readthedocs.io/en/latest/). # # In Python, we would use numpy for random number generation and fast access/manipulation of numeric vectors. As fast as numpy is, it has to talk back and forth to Python, meaning we can outperform it by writing code that executes solely on the C/C++ side. In order to do so, we make use of the C++ standard library and the excellent GNU Scientific Library. # # This example is closer to reality for those working in lower-level languages. First, we must build our world, which means defining data types (a C++ class in this case) and functions acting on those types. After all that, we can code up the `simplify` and `evolve` functions. Such is the price of speed. # # Here, we use C++ rather than C so that we don't have to worry about memory management and error handling. Cython will nicely convert any C++ exception into a Python exception, meaning that one big `try/execept` block is sufficient to not leak memory for our `gsl_rng` object. # # However, Cython does not allow *idiomatic* C++ to be written. We do not have access to all C++11 syntax and concepts, nor do we have access to a fully const-correct grammar. Cython is a "C first" tool, and the limited C++ support is simply a fact of life. # # This example is conceptually identical to previous Python examples. It does the following: # # * Model mutations according to an infinitely-many sites scheme # * Model recombination as a uniform Poisson process # * Collect intermediate data during simulation and periodically simplify. # # This tutorial results in a simulation fast enough to obtain many replicates, which allows us to compare the distribution of summary statistics to `msprime`. 
# # First, we load an extension allowing us to write Cython in a notebook: # %load_ext Cython # Set ourselves up for some plotting, too # %matplotlib inline # %config InlineBackend.figure_format = 'svg' # The Cython cell magic directs Cython to generate C++ code and compile is using the C++11 language standard, which we need for `unordered_set`. # # The following code block is long. The length is unavoidable, as the `cdef` functions are only accessible to C++, and therefore they must be in the same scope as our `evolve` function. # # We use [struct.pack](https://docs.python.org/3/library/struct.html) to encode mutation metadata. It is much faster than pickling, and portable across languages because it is equivalent to writing the raw bits to a character string. The metadata we record is the mutation position, generation when it first arose, and the node ID on which it first arose. A "real-world" simulation would probably record effect sizes and other interesting things. # + magic_args="--cplus --compile-args=-std=c++11 -3 -lgsl -lgslcblas -lm" language="cython" # # import msprime # import numpy as np # import struct # cimport cython # cimport numpy as np # from cython.operator cimport dereference as deref # from libc.stdint cimport int32_t, uint32_t # from libcpp.vector cimport vector # from libcpp.unordered_set cimport unordered_set # from libcpp.utility cimport pair # from libcpp.algorithm cimport sort as cppsort # from cython_gsl.gsl_rng cimport * # # # Cython doesn't export all of C++'s standard library, # # so we have to expose a few things we need: # cdef extern from "" namespace "std" nogil: # Iter find[Iter,ValueType](Iter begin, Iter end, const ValueType & value) # Iter max_element[Iter](Iter begin, Iter end) # Iter remove[Iter,ValueType](Iter begin, Iter end, const ValueType & value) # # # Define a class to hold # # all the data for new # # nodes, edges, etc. # # Cython's functionality is # # limited here, so we stick # # to the KISS principle. 
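# # TableData simply buffers the node times, edge intervals and mutation
# # records produced since the last simplify; mut_lookup tracks every
# # segregating position so the infinite-sites model never reuses one.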
# cdef cppclass TableData: # # Node data: # vector[double] time # # Edge data # vector[int32_t] parent, child # vector[double] left, right # # Mutation data # vector[double] pos # vector[int32_t] node, generation # # # lookup table to book-keep # # infinitely-many sites mutation # # model # unordered_set[double] mut_lookup # # TableData(): # time.reserve(10000) # parent.reserve(10000) # child.reserve(10000) # left.reserve(10000) # right.reserve(10000) # pos.reserve(10000) # node.reserve(10000) # generation.reserve(10000) # # void clear(): # time.clear() # parent.clear() # child.clear() # left.clear() # right.clear() # pos.clear() # node.clear() # generation.clear() # # @cython.boundscheck(False) # turn off bounds-checking for entire function # @cython.wraparound(False) # turn off negative index wrapping for entire function # void reset_lookup(np.ndarray[double,ndim=1] pos): # mut_lookup.clear() # cdef size_t i=0 # for i in range(len(pos)): # mut_lookup.insert(pos[i]) # # cdef pair[int32_t, int32_t] pick_parents(const gsl_rng *r, # const int32_t N, # const int32_t first_parental_index): # cdef pair[int32_t, int32_t] rv # cdef int32_t p = gsl_ran_flat(r,0.0,N) # # rv.first = first_parental_index + 2*p # rv.second = rv.first+1 # # "Mendel" # if gsl_rng_uniform(r) < 0.5: # rv.first, rv.second = rv.second, rv.first # return rv # # # cdef void infsites(const gsl_rng *r, const double mu, # const int32_t offspring_node, # const int32_t generation, # TableData & tables): # cdef unsigned nmuts=gsl_ran_poisson(r,mu) # cdef size_t i # cdef double pos # for i in range(nmuts): # pos = gsl_rng_uniform(r) # while tables.mut_lookup.find(pos)!=tables.mut_lookup.end(): # pos = gsl_rng_uniform(r) # tables.mut_lookup.insert(pos) # tables.pos.push_back(pos) # tables.node.push_back(offspring_node) # tables.generation.push_back(generation) # # cdef void poisson_recombination(const gsl_rng * r, # const double recrate, # const int32_t offspring_node, # int32_t pg1, int32_t pg2, # TableData & tables): # cdef unsigned nbreaks = gsl_ran_poisson(r,recrate) # if nbreaks == 0: # tables.parent.push_back(pg1) # tables.child.push_back(offspring_node) # tables.left.push_back(0.0) # tables.right.push_back(1.0) # return # # cdef vector[double] breakpoints # #Track uniqueness of breakpoints # #to avoid double x-overs, which # #would mean an edge with # #left == right, causing msprime # #to raise an exception # cdef unordered_set[double] bp, dx # cdef unsigned index = 0 # cdef double pos # for index in range(nbreaks): # pos = gsl_rng_uniform(r) # if bp.find(pos) != bp.end(): # # Then we have a double # # x-over at pos # dx.insert(pos) # else: # breakpoints.push_back(pos) # bp.insert(pos) # # cppsort(breakpoints.begin(),breakpoints.end()) # # Cython magically translates the for loop # # below into pure c++! 
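# # Erase every breakpoint that also appears in dx: a double x-over at the
# # same position cancels out and would otherwise yield a left == right edge.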
# for pos in dx: # breakpoints.erase(remove(breakpoints.begin(), # breakpoints.end(), # pos),breakpoints.end()) # # if breakpoints.empty(): # tables.parent.push_back(pg1) # tables.child.push_back(offspring_node) # tables.left.push_back(0.0) # tables.right.push_back(1.0) # return # # if breakpoints.front() == 0.0: # pg1,pg2 = pg2,pg1 # else: # breakpoints.insert(breakpoints.begin(),0.0) # # breakpoints.push_back(1.0) # # for index in range(1,breakpoints.size()): # tables.parent.push_back(pg1) # tables.child.push_back(offspring_node) # tables.left.push_back(breakpoints[index-1]) # tables.right.push_back(breakpoints[index]) # pg1,pg2 = pg2,pg1 # # cdef void simplify(TableData & tables, # object nodes, object edges, # object sites, object mutations, # const double dt): # # If no edges, don't bother... # if tables.parent.empty(): # return # # # Push current node times # # further back into the past. # nodes.set_columns(time=nodes.time+dt, # flags=nodes.flags) # # # Reverse new node times so # # that time moves backwards into past # # from current extant generation. # cdef mtime = deref(max_element(tables.time.begin(), # tables.time.end())) # cdef size_t index=0 # for index in range(tables.time.size()): # tables.time[index]=-1.0*(tables.time[index]-mtime) # # # Append data stored on the C++ side. # # We use Cython's typed memoryviews + # # np.asarray. The result is a numpy # # array wrapping the underlying memory # # stored in the C++ vectors. NO COPY # # IS MADE HERE, but msprime will end up # # copying the data from C++ into its # # internal arrays. # nodes.append_columns(time=np.asarray(tables.time.data()), # flags=np.ones(tables.time.size(),dtype=np.uint32)) # # edges.append_columns(left=np.asarray(tables.left.data()), # right=np.asarray(tables.right.data()), # parent=np.asarray(tables.parent.data()), # child=np.asarray(tables.child.data())) # for index in range(tables.pos.size()): # sites.add_row(position=tables.pos[index], # ancestral_state=`0`) # mutations.add_row(site=len(sites)-1, # node=tables.node[index], # derived_state='1', # metadata=struct.pack('iid', # tables.generation[index], # tables.node[index], # tables.pos[index])) # # # samples=np.where(nodes.time==0.0)[0] # msprime.sort_tables(nodes=nodes,edges=edges,sites=sites, # mutations=mutations) # msprime.simplify_tables(samples=samples.tolist(), # nodes=nodes, # edges=edges, # sites=sites, # mutations=mutations) # # tables.clear() # tables.reset_lookup(sites.position) # assert(tables.mut_lookup.size() == len(sites)) # # def evolve(int N, int ngens, double theta, double rho, int gc, int seed): # nodes = msprime.NodeTable() # nodes.set_columns(time=np.zeros(2*N), # flags=np.ones(2*N,dtype=np.uint32)) # edges = msprime.EdgeTable() # sites = msprime.SiteTable() # mutations = msprime.MutationTable() # # cdef double mu = theta/(4*N) # cdef double recrate = rho/(4*N) # cdef TableData tables # cdef gsl_rng * r = gsl_rng_alloc(gsl_rng_mt19937) # gsl_rng_set(r,seed) # cdef size_t generation=0,offspring=0 # cdef int32_t next_offspring_index = 2*N #Same as len(nodes) # cdef int32_t first_parental_index = 0 # cdef int32_t last_gc_time = 0 # try: # for generation in range(ngens): # if generation > 0 and generation % gc == 0.0: # simplify(tables,nodes,edges,sites, # mutations,generation-last_gc_time) # last_gc_time = generation # first_parental_index = 0 # next_offspring_index = len(nodes) # else: # first_parental_index = next_offspring_index - 2*N # # for offspring in range(N): # parents1 = pick_parents(r,N, # first_parental_index) # parents2 = 
pick_parents(r,N, # first_parental_index) # # # Add 2 new nodes # tables.time.push_back(generation+1.0) # tables.time.push_back(generation+1.0) # # # Recombine and mutate # # both offspring nodes # poisson_recombination(r,recrate, # next_offspring_index, # parents1.first,parents1.second, # tables) # infsites(r,mu,next_offspring_index, # generation+1,tables) # # poisson_recombination(r,recrate, # next_offspring_index+1, # parents2.first,parents2.second, # tables) # infsites(r,mu,next_offspring_index+1, # generation+1,tables) # next_offspring_index += 2 # except: # gsl_rng_free(r) # raise # # if tables.time.size() > 0: # simplify(tables,nodes,edges,sites, # mutations,generation+1-last_gc_time) # # gsl_rng_free(r) # # return msprime.load_tables(nodes=nodes,edges=edges, # sites=sites,mutations=mutations) # - # %%time evolve(1000,20000,100,100,603,42) # # Comparison to msprime # # In this section, we compare the distribution of outputs to msprime using [pylibseq](https://github.com/molpopgen/pylibseq), a Python interface to [libsequence](http://molpopgen.github.io/libsequence/) # + from IPython.display import SVG import msprime import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pickle from libsequence.polytable import SimData from libsequence.summstats import PolySIM from libsequence.msprime import make_SimData import concurrent.futures import pandas as pd from collections import namedtuple SummStats=namedtuple('SummStats',['S','pi','D','hprime','rmin']) # - # Let's take a quick tour of pylibseq: # + # Simulate data with msprime ts = msprime.simulate(10,mutation_rate=1,random_seed=666) # Get it into the format expected by pylibseq d = make_SimData(ts) # This should look familiar! :) print(d) # - # Create object to calculate summary stats x = PolySIM(d) # Calculate a few: print(x.thetapi(),x.tajimasd(),x.hprime(),x.rm()) # %%time msprime_raw_data=[] for i in msprime.simulate(10,mutation_rate=100.0/4.0, recombination_rate=100.0/4.0, num_replicates=1000, random_seed=42): d = make_SimData(i) ps = PolySIM(d) # A little check that the two pieces of code agree assert(ps.numpoly() == i.num_mutations) msprime_raw_data.append(SummStats(ps.numpoly(), ps.thetapi(),ps.tajimasd(), ps.hprime(),ps.rm())) # To run the forward simulations, we will use multiple Python processes via Python 3's [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html) library. The short of it is that we need a Python function to send out to different processes and return results, which will be pickled into a future back in the main process. def run_forward_sim(nreps,seed,repid): """ Run our forward sim, calculate a bunch of stats, and return the list. """ np.random.seed(seed) seeds = np.random.randint(0,1000000,nreps) sims = [] for i in range(nreps): ts = evolve(1000,10000,100.0,100.0,1000,seeds[i]) samples = np.random.choice(2000,10,replace=False) assert(all(ts.tables.nodes.time[samples]==0.0)) ts2 = ts.simplify(samples=samples.tolist()) d=make_SimData(ts2) ps=PolySIM(d) sims.append(SummStats(ps.numpoly(), ps.thetapi(), ps.tajimasd(), ps.hprime(), ps.rm())) return sims # %%time x=run_forward_sim(1,66,3511) print(x) # In the next bit, we map our function into four separate processes. # # **Note:** We could use a `concurrent.futures.ThreadPoolExecutor` instead of the process pool executor. However, some of our Cython functions rely on Python types, meaning that the Global Interpreter Lock is a barrier to efficient concurrency. 
In practice, we've found it better to take the hit of pickling between processes so that your simulations can run at 100% CPU in different processes. # %%time fwd_sim_data=[] np.random.seed(666) with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor: futures = {executor.submit(run_forward_sim,50, np.random.randint(0,2000000,1)[0],i): i for i in range(4)} for fut in concurrent.futures.as_completed(futures): fn = fut.result() fwd_sim_data.extend(fn) msprime_df = pd.DataFrame(msprime_raw_data) msprime_df['engine'] = ['msprime']*len(msprime_df.index) fwd_df = pd.DataFrame(fwd_sim_data) fwd_df['engine']=['forward']*len(fwd_df) summstats_df = pd.concat([msprime_df,fwd_df]) # + sns.set(style="darkgrid") g = sns.FacetGrid(summstats_df,col="engine",margin_titles=True) bins = np.linspace(summstats_df.pi.min(),summstats_df.pi.max(),20) g.map(plt.hist,'pi',bins=bins,color="steelblue",lw=0,normed=True); g = sns.FacetGrid(summstats_df,col="engine",margin_titles=True) bins = np.linspace(summstats_df.S.min(),summstats_df.S.max(),20) g.map(plt.hist,'S',bins=bins,color="steelblue",lw=0,normed=True); g = sns.FacetGrid(summstats_df,col="engine",margin_titles=True) bins = np.linspace(summstats_df.D.min(),summstats_df.D.max(),20) g.map(plt.hist,'D',bins=bins,color="steelblue",lw=0,normed=True); g = sns.FacetGrid(summstats_df,col="engine",margin_titles=True) bins = np.linspace(summstats_df.rmin.min(),summstats_df.rmin.max(),20) g.map(plt.hist,'rmin',bins=bins,color="steelblue",lw=0,normed=True); # - from scipy.stats import ks_2samp print(summstats_df.groupby(['engine']).agg(['mean','std'])) ks_2samp(fwd_df.pi,msprime_df.pi) ks_2samp(fwd_df.S,msprime_df.S) ks_2samp(fwd_df.D,msprime_df.D) ks_2samp(fwd_df.rmin,msprime_df.rmin) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GUI to plot results from various codes # + #NOTES: # To install (in a python 3 virtual environment): # - pip install numpy matplotlib ipywidgets # - pip install widget_periodictable # - jupyter nbextension enable --py widget_periodictable # - # Use interactive plots (10x faster than creating PNGs) # %matplotlib notebook # + # For the notebook mode, we need to reduce the default font sie import matplotlib font = { #'family' : 'normal', #'weight' : 'bold', 'size' : 7 } matplotlib.rc('font', **font) # + # The next cell prevents that a cell gets vertical scrolling. # This is important for the final plot, as we have a lot of plots in the same notebook cell. # + language="javascript" # IPython.OutputArea.prototype._should_scroll = function(lines) { # return false; # } # - import json import os import numpy as np import pylab as pl import ipywidgets as ipw import widget_periodictable import matplotlib.colors as mcolors import quantities_for_comparison as qc # + ## Functions (prittifiers) # + def symmetrical_colormap(cmap_settings, new_name = None ): ''' This function take a colormap and create a new one, as the concatenation of itself by a symmetrical fold. 
''' # get the colormap cmap = pl.cm.get_cmap(*cmap_settings) if not new_name: new_name = "sym_"+cmap_settings[0] # ex: 'sym_Blues' # this defined the roughness of the colormap, 128 fine n= 128 # get the list of color from colormap colors_r = cmap(np.linspace(0, 1, n)) # take the standard colormap # 'right-part' colors_l = colors_r[::-1] # take the first list of color and flip the order # "left-part" # combine them and build a new colormap colors = np.vstack((colors_l, colors_r)) mymap = mcolors.LinearSegmentedColormap.from_list(new_name, colors) return mymap def get_conf_nice(configuration_string): """ Convert the configuration string to a nicely typeset string in LaTeX. """ ret_pieces = [] for char in configuration_string: if char in "0123456789": ret_pieces.append(f"$_{char}$") else: ret_pieces.append(char) return "".join(ret_pieces) # + # Get all results from all s that has a results-.json file in the current folder file_prefix = 'results-' file_suffix = '.json' results_folder = os.curdir code_results = {} for fname in os.listdir(results_folder): if fname.startswith(file_prefix) and fname.endswith(file_suffix): label = fname[len(file_prefix):-len(file_suffix)] if not "unaries" in fname: with open(os.path.join(results_folder, fname)) as fhandle: code_results[label] = json.load(fhandle) # - # Defines the colors of the EoS curves and associate one color to each colors = ['#1f78b4', '#33a02c', '#e31a1c', '#ff7f00', '#6a3d9a', '#b15928', '#a6cee3', '#b2df8a', '#fb9a99', '#fdbf6f', '#cab2d6', '#ffff99'] color_code_map = {} index = 0 for plugin_name in code_results: color_code_map[plugin_name] = colors[index] index = index + 1 index = index % len(colors) # Map a name (key of the dictionary) to a function (value of the dictionary), allows the selection # of the quantity to use for the heatmap plot that compares the codes quantity_for_comparison_map = { "delta_per_formula_unit (meV)": qc.delta, #"delta_per_atom": qc.delta_per_atom, "Prefactor*epsilon": qc.epsilon, "Prefactor*B0_rel_diff": qc.B0_rel_diff, "Prefactor*V0_rel_diff": qc.V0_rel_diff, "Prefactor*B1_rel_diff": qc.B1_rel_diff, "Prefactor*|relerr_vec(weight_b0,weight_b1)|": qc.rel_errors_vec_length } # + ## Main function that creates the plots # - def plot_for_element(code_results, element, configuration, selected_codes, selected_quantity, prefactor, b0_w, b1_w, axes, max_val=None): """ For a configuration, loops over the data sets (one set for each code) and plots the data (both the eos points and the birch murnaghan curves). It also calculates the data for the comparison of codes (using the selected quantity: delta, V0_dif, ...) and return them in an heatmap plot. """ # The eos data are plotted straight away in the codes loop, on the contrary we # delay the plotting of the fitted data, so to have the same x range for all. # The fitting curves info are collected in this list. fit_data = [] # Initializations code_names_list = [] color_idx = 0 dense_volume_range = None # Will eventually be a tuple with (min_volume, max_volume) y_range = None # Loop over codes for code_name in sorted(code_results): reference_plugin_data = code_results[code_name] scaling_ref_plugin = qc.get_volume_scaling_to_formula_unit( reference_plugin_data['num_atoms_in_sim_cell'][f'{element}-{configuration}'], element, configuration ) # Get the EOS data try: eos_data = reference_plugin_data['eos_data'][f'{element}-{configuration}'] except KeyError: # This code does not have eos data, but it might have the birch murnaghan parameters # (for instance reference data sets). 
We set eos_data to None and go on eos_data = None # Get the fitted data try: ref_BM_fit_data = reference_plugin_data['BM_fit_data'][f'{element}-{configuration}'] except KeyError: # Set to None if fit data is missing (might be fit failed). We will still plot the # points using a trick to find the reference energy. ref_BM_fit_data = None # Only in no data and fit are present we skip if eos_data is None and ref_BM_fit_data is None: continue # Take care of range. We update the minimum and maximum volume. It is an iterative process # so we have a range that includes all the relevant info for any set of data if ref_BM_fit_data is not None: if dense_volume_range is None: dense_volume_range = (ref_BM_fit_data['min_volume'] * 0.97, ref_BM_fit_data['min_volume'] * 1.03) else: dense_volume_range = ( min(ref_BM_fit_data['min_volume'] * 0.97, dense_volume_range[0]), max(ref_BM_fit_data['min_volume'] * 1.03, dense_volume_range[1]) ) if eos_data is not None: volumes, energies = (np.array(eos_data).T).tolist() if dense_volume_range is None: dense_volume_range = (min(volumes), max(volumes)) else: dense_volume_range = ( min(min(volumes), dense_volume_range[0]), max(max(volumes), dense_volume_range[1])) # Plotting style. It is different for selected and unselected codes. The unselected # codes will be in grey and put on the background. alpha = 1. send_to_back = False if code_name not in selected_codes: curve_color = '#000000' alpha = 0.1 send_to_back = True else: curve_color = color_code_map[code_name] color_idx += 1 # Set energy shift (important to compare among codes!!!) warning_string = '' if ref_BM_fit_data is not None: # Situation when all fit parameters but E0 are present, this hopefully happens only when # only fit data are present. To set to zero is the good choice if ref_BM_fit_data.get('E0') is None: ref_BM_fit_data['E0'] = 0. energy_shift = ref_BM_fit_data['E0'] else: # No fit data, shift selected to be the minimum of the energies. Not correct in general # because we might not have the exact minimum on the grid, or even minimum might be out of range warning_string = " (WARNING NO FIT!)" volumes, energies = (np.array(eos_data).T).tolist() energy_shift = min(energies) # Collect the fitting data to plot later (only later will have correct range) position_to_insert = 0 if send_to_back else len(fit_data) + 1 if ref_BM_fit_data is not None: code_names_list.insert(position_to_insert, code_name) fit_data.insert(position_to_insert, (ref_BM_fit_data, energy_shift, { # Show the label on the fit if no eos data is visible (I want one and only one label), # but don't show it for hidden plots 'label': f'{code_name}{warning_string}' if eos_data is None and send_to_back is False else None, 'alpha': alpha, 'curve_color': curve_color })) # Plot EOS points straigh away. 
if eos_data is not None: volumes, energies = (np.array(eos_data).T).tolist() # Don't show the label for hidden plots label = f'{code_name}{warning_string}' if send_to_back is False else None scaled_en = np.array(energies)/scaling_ref_plugin scaled_en_shift = energy_shift/scaling_ref_plugin scaled_vol = np.array(volumes)/scaling_ref_plugin axes[0].plot(scaled_vol, scaled_en - scaled_en_shift, 'o', color=curve_color, label=label, alpha=alpha) if not send_to_back: if y_range is None: y_range = (min(scaled_en) - scaled_en_shift, max(scaled_en) - scaled_en_shift) else: y_range = ( min(min(scaled_en) - scaled_en_shift, y_range[0]/scaling_ref_plugin), max(max(scaled_en) - scaled_en_shift, y_range[1]/scaling_ref_plugin)) # A check on the dense_volume_range is needed since we are # now out of the loop and it is possible that any code managed to have data for # a paricular element. if dense_volume_range is not None: dense_volumes = np.linspace(dense_volume_range[0], dense_volume_range[1], 100) # Plot all fits and calculate deltas iii=0 collect=[] codezz=[] for ref_BM_fit_data, energy_shift, plot_params in fit_data: reference_eos_fit_energy = qc.birch_murnaghan( V=dense_volumes, E0=ref_BM_fit_data['E0'], V0=ref_BM_fit_data['min_volume'], B0=ref_BM_fit_data['bulk_modulus_ev_ang3'], B01=ref_BM_fit_data['bulk_deriv'] ) axes[0].plot( np.array(dense_volumes)/scaling_ref_plugin, np.array(reference_eos_fit_energy)/scaling_ref_plugin - energy_shift/scaling_ref_plugin, '-', color=plot_params['curve_color'], alpha=plot_params['alpha'] * 0.5, label=plot_params['label'] ) #The way to distinguish selected codes here is quite fragile, based on curve color if plot_params['curve_color'] != '#000000': deltas = [] #Need to compare this to any other selected code for sec in fit_data: if sec[2]['curve_color'] != '#000000': #Collect the values V0_1 = ref_BM_fit_data['min_volume']/scaling_ref_plugin B0_1 = ref_BM_fit_data['bulk_modulus_ev_ang3'] B01_1 = ref_BM_fit_data['bulk_deriv'] V0_2 = sec[0]['min_volume']/scaling_ref_plugin B0_2 = sec[0]['bulk_modulus_ev_ang3'] B01_2 = sec[0]['bulk_deriv'] #calculate delta (or other quantity based on "selected_quantity") and collect func = quantity_for_comparison_map[selected_quantity] res = func(V0_1, B0_1, B01_1, V0_2, B0_2, B01_2, prefactor, b0_w, b1_w) delta = float(res) deltas.append(round(delta,2)) codezz.append(code_names_list[iii]) collect.append(deltas) iii = iii+1 #Plot the heatmaps with deltas to_plot = np.array(collect) # set value from data if max_val is None maxim = max_val or max([abs(de) for de in deltas]) axes[1].imshow(to_plot,cmap=symmetrical_colormap(("Reds",None)), vmin=-maxim, vmax=maxim) axes[1].set_xticks(np.arange(len(codezz))) axes[1].set_yticks(np.arange(len(codezz))) axes[1].set_xticklabels(codezz) axes[1].set_yticklabels(codezz) # Rotate the tick labels and set their alignment. pl.setp(axes[1].get_xticklabels(), rotation=35, ha="right", rotation_mode="anchor") # Loop over data dimensions and create text annotations. 
for i in range(len(codezz)): for j in range(len(codezz)): text = axes[1].text(j, i, to_plot[i, j], ha="center", va="center", color="black") #Some labels and visual choices # Set the y range to (visible) points only, if at least one of the selected codes had EOS data points if y_range is not None: # Make sure that the minimum is zero (or negative if needed) y_range = (min(y_range[0], 0), y_range[1]) axes[0].set_ylim(y_range) axes[0].legend(loc='upper center') axes[0].set_xlabel("Cell volume per formula unit ($\\AA^3$)") axes[0].set_ylabel("$E-TS$ per formula unit (eV)") conf_nice = get_conf_nice(configuration) axes[0].set_title(f"{element} ({conf_nice})") axes[1].set_title(f"{element} ({conf_nice}) -- {selected_quantity}") # + ## Widgets definition and main call to the plot function # + ipw_pref = ipw.FloatText( value=100, description='Prefactor', disabled=False ) ipw_b0 = ipw.FloatText( value=0.1, description='weight_b0', disabled=False ) ipw_b1 = ipw.FloatText( value=0.01, description='weight_b1', disabled=False ) ipw_codes = ipw.SelectMultiple( options=sorted(code_results), value=sorted(code_results), # Select all rows=15, description='Code plugins', disabled=False ) #style = {'description_width': 'initial', 'widget_width':'initial'} ipw_comp_quantity = ipw.Select( options=sorted(quantity_for_comparison_map), value="Prefactor*V0_rel_diff", rows=15, #description="Quantity for code comparison", #style=style #disabled=False ) ipw_periodic = widget_periodictable.PTableWidget(states=1, selected_colors = ["#a6cee3"], disabled_elements=['Bk', 'Cf', 'Es', 'Fm', 'Md', 'No', 'Lr'], selected_elements={'Si': 0}) ipw_output = ipw.Output() fig = None axes_list = None def replot(): global fig, axes_list with ipw_output: if fig is None: ipw_output.clear_output(wait=True) fig, axes_list = pl.subplots(6, 2, figsize=((10,27)), gridspec_kw={"hspace":0.5}) else: for axes in axes_list: axes[0].clear() axes[1].clear() for element in sorted(ipw_periodic.selected_elements.keys()): #Each axes is one line, not a single sublot. So axes[0] will host EoS and fit, axes[1] the deltas for configuration, axes in zip( ['XO', 'XO2', 'XO3', 'X2O', 'X2O3', 'X2O5'], axes_list ): plot_for_element( code_results=code_results, element=element, configuration=configuration, selected_codes=ipw_codes.value, selected_quantity=ipw_comp_quantity.value, prefactor=ipw_pref.value, b0_w=ipw_b0.value, b1_w=ipw_b1.value, axes=axes ) #pl.show() def on_codes_change(event): if event['type'] == 'change': replot() def on_quantity_change(event): if event['type'] == 'change' and event['name'] == 'value': replot() def on_pref_or_weights_change(event): if event['type'] == 'change': replot() last_selected = ipw_periodic.selected_elements def on_element_select(event): global last_selected if event['name'] == 'selected_elements' and event['type'] == 'change': if tuple(event['new'].keys()) == ('Du', ): last_selected = event['old'] elif tuple(event['old'].keys()) == ('Du', ): #print(last_selected, event['new']) if len(event['new']) != 1: # Reset to only one element only if there is more than one selected, # to avoid infinite loops newly_selected = set(event['new']).difference(last_selected) # If this is empty it's ok, unselect all # If there is more than one, that's weird... 
to avoid problems, anyway, I pick one of the two if newly_selected: ipw_periodic.selected_elements = {list(newly_selected)[0]: 0} else: ipw_periodic.selected_elements = {} # To have the correct 'last' value for next calls last_selected = ipw_periodic.selected_elements replot() ipw_codes.observe(on_codes_change) ipw_comp_quantity.observe(on_quantity_change) ipw_pref.observe(on_pref_or_weights_change) ipw_b0.observe(on_pref_or_weights_change) ipw_b1.observe(on_pref_or_weights_change) ipw_periodic.observe(on_element_select) link = ipw.HTML( value="here", ) display(ipw.HBox([ipw_codes, ipw.HTML("       "), ipw.Label('Quantity for comparison'), ipw_comp_quantity])) display(ipw.HBox([ipw.Label("For a description of the quantities for comparison, click"), link])) display(ipw.Label('The following values are used only for some of the `Quantity for comparison` listed above. Look at the quantities name to understand the relevant values for each quantity.')) display(ipw.HBox([ipw_pref,ipw_b0,ipw_b1])) display(ipw_periodic) # - # Display in a different cell, so if there is scrolling, it's independent of the top widgets display(ipw_output) # Trigger first plot replot() # + import io import tqdm import contextlib def plot_all(max_val=None, only_elements=None): all_elements = [ "Ac", "Ag", "Al", "Am", "Ar", "As", "At", "Au", "B", "Ba", "Be", "Bi", "Br", "C", "Ca", "Cd", "Ce", "Cl", "Cm", "Co", "Cr", "Cs", "Cu", "Dy", "Er", "Eu", "F", "Fe", "Fr", "Ga", "Gd", "Ge", "H", "He", "Hf", "Hg", "Ho", "I", "In", "Ir", "K", "Kr", "La", "Li", "Lu", "Mg", "Mn", "Mo", "N", "Na", "Nb", "Nd", "Ne", "Ni", "Np", "O", "Os", "P", "Pa", "Pb", "Pd", "Pm", "Po", "Pr", "Pt", "Pu", "Ra", "Rb", "Re", "Rh", "Rn", "Ru", "S", "Sb", "Sc", "Se", "Si", "Sm", "Sn", "Sr", "Ta", "Tb", "Tc", "Te", "Th", "Ti", "Tl", "Tm", "U", "V", "W", "Xe", "Y", "Yb", "Zn", "Zr" ] if only_elements is not None: all_elements = [elem for elem in all_elements if elem in only_elements] # Avoid interactive creation of figures pl.ioff() def get_axes_from_list(pos, axes_list): num_rows = len(axes_list) num_columns = len(axes_list[0]) assert num_rows*num_columns == 12 assert num_columns % 2 == 0 assert pos < 6 row = pos // (num_columns // 2) column = (pos % (num_columns // 2) ) * 2 return (axes_list[row, column], axes_list[row, column+1]) try: for element in tqdm.tqdm(all_elements): f = io.StringIO() with contextlib.redirect_stdout(f): num_rows = 3 num_cols = 4 fig, axes_list = pl.subplots(num_rows, num_cols, figsize=((5 * num_cols, 4.5 * num_rows)), gridspec_kw={"hspace":0.5}) configurations = ['XO', 'XO2', 'XO3', 'X2O', 'X2O3', 'X2O5'] #Each axes is one line, not a single sublot. So axes[0] will host EoS and fit, axes[1] the deltas for idx, configuration in enumerate(configurations): axes = get_axes_from_list(pos=idx, axes_list=axes_list) plot_for_element( code_results=code_results, element=element, configuration=configuration, selected_codes=ipw_codes.value, selected_quantity=ipw_comp_quantity.value, prefactor=ipw_pref.value, b0_w=ipw_b0.value, b1_w=ipw_b1.value, axes=axes, max_val=max_val ) fig.savefig(f"{element}.png") pl.close(fig) finally: # Reactivates interactive figure creation pl.ion() # + ## Examples - you can set the max value for the color bar, or let each plot have a different maximum defined by ## the maximum value of the data for that element and configuration. 
## In addition, especially for testing, you can decide to create the files only for a few elements ## Finally, you can decide which metric (and parameters) to use directly above, with the widgets. ## Running the function will generate files, in the current folder, named Ac.pdf, Ag.pdf, Al.pdf, ... #plot_all(max_val=2., only_elements= ["Ac", "Ag", "Al", "Am", "Ar"]) #plot_all(max_val=1.) # + ## Otherwise, uncomment this to have a button generate the plots #def on_generate_click(button): # button.disabled = True # try: # plot_all(max_val=2.) # finally: # button.disabled = False # #generate_button = ipw.Button(description="Generate all PNGs", status="success") #generate_button.on_click(on_generate_click) #display(generate_button) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import mdf_toolbox # # Globus Transfer Utilities # While using the Globus Web App to perform data transfers is easy, programmatically doing the same has some overhead, which these utilities try to reduce. # ## `quick_transfer` # `quick_transfer()` starts a transfer and monitors its progress. You must have a `TransferClient` already created and authenticated (see the Authentication Utilities tutorial). # # You must provide: # # `transfer_client`: An authenticated Transfer client # # `source_ep`: The source endpoint ID # # `dest_ep`: The destination endpoint ID # # `path_list`: A list of tuples, where the first element is a source path, # and the second element is the destination path. # Directory paths must end in a slash, and file paths must not. transfer_client = mdf_toolbox.login(services=["transfer"])["transfer"] source_ep = "" dest_ep = "" path_list = [ ("/home/source/files/file.dat", "/home/destination/doc.dat"), ("/home/source/all_reports/", "/home/destination/docs/reports/") ] mdf_toolbox.quick_transfer(transfer_client, source_ep, dest_ep, path_list) # It is normal for Globus Transfer to encounter a few transient errors when transferring data. The transfer is very robust and can usually recover from most issues. By default, `quick_transfer()` will tolerate 10 error events before cancelling the transfer. You can customize this with `retries`, with `-1` indicating infinite retries (until the transfer times out naturally). # # Additionally, you can customize the poll interval (in seconds) with `interval`. mdf_toolbox.quick_transfer(transfer_client, source_ep, dest_ep, path_list, interval=7, retries=-1) # ## `custom_transfer` # `custom_transfer()` is similar to `quick_transfer()`, but allows you greater control over the finer details. It is also more complex. # # All of the required parameters from `quick_transfer()` are required for `custom_transfer()`. `interval` is also available. However, you can additionally set the natural timeout due to inactivity (in seconds) with `inactivity_time`. # # The major difference between the two transfer utilities is that `custom_transfer()` will `yield` any error events and allow you to `.send(False)` to cancel the transfer. transfer_generator = mdf_toolbox.quick_transfer( transfer_client, source_ep, dest_ep, path_list, inactivity_time=600) res = next(transfer) print(res) try: while True: res = transfer.send(True) print(res) except StopIteration: pass # ## `get_local_ep` # `get_local_ep()` attempts to discover the local machine's Globus Connect Personal endpoint ID. 
It does not properly detect Globus Connect Server endpoints, and can sometimes return inaccurate results if the local machine is not running GCP. If there are multiple possible endpoints, the user is prompted to choose the correct one. # # `get_local_ep()` requires a `TransferClient`. # # If possible, it is recommended to get the local EP ID some other way, such as through the Globus Web App or by asking the endpoint's owner. mdf_toolbox.get_local_ep(transfer_client) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # In this notebook, we'll go through some exploratory data analysis, visualize these data points to gain deeper insights, and then build the cost function according to the problem formulation and do a bit of a search for the optimum. # # # The Problem # ![Santa](https://i.imgur.com/reXzZWf.jpg) # # Santa's worhshop runs like a well-oiled machine and the accountants run a tight ship. Let's have a look at how we can accommodate as many families as possible without breaking the bank entirely. # # ## The Deets: # - 5,000 families # - 100 days # - 125 to 300 per day (hard limit) # # ## The Costs: # ### Consolation Gifts: # # This should be interesting to look at. Every choice has a cost and the lower we go in priority, the more gratuitous we get with the families. Look at `9` and `++` getting helicopter rides for each person. Ooph! # # | `choice` | Gift Card | Buffet | Helicopter | # |----------|------------|---------|------------| # | | (fixed) | (per P) | (per P) | # | 0 | 0 | 0 | 0 | # | 1 | 50 | 0 | 0 | # | 2 | 50 | 9 | 0 | # | 3 | 100 | 9 | 0 | # | 4 | 200 | 9 | 0 | # | 5 | 200 | 18 | 0 | # | 6 | 300 | 18 | 0 | # | 7 | 300 | 36 | 0 | # | 8 | 400 | 36 | 0 | # | 9 | 500 | 36 | 199 | # | ++ | 500 | 36 | 398 | # # ### Accounting: # Accounting runs a tight, recursive ship. $N_d$ is the number of occupants on the day $d$. Starting 100 days before, working your way towards Christmas. # # $$\text{accounting penalty} = \sum\limits_{d=100}^1 \frac{N_d - 125}{400} \cdot N_d^{\left(0.5 + \frac{\left|N_d - N_{d+1}\right|}{50}\right)}$$ # # ## Sources: # # Blatantly copying stuff from these notebooks. Give them an upvote! # # - https://www.kaggle.com/inversion/santa-s-2019-starter-notebook (Get a good initial idea) # - https://www.kaggle.com/chewzy/santa-finances-a-closer-look-at-the-costs (Nice EDA) # - https://www.kaggle.com/nickel/santa-s-2019-fast-pythonic-cost-23-s (Really fast Pythonic cost, which I adapted) # - https://www.kaggle.com/xhlulu/santa-s-2019-stochastic-product-search (Great impementation of random search, which I adapted) # - https://www.kaggle.com/ilu000/greedy-dual-and-tripple-shuffle-with-fast-scoring (Better data, which I'm using in the newer kernels.) # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import seaborn as sns import matplotlib.pyplot as plt from numba import njit from itertools import product from time import time # Input data files are available in the "../input/" directory. 
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory start_time = time() end_time = start_time + (7.5 *60 *60) import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # Any results you write to the current directory are saved as output. # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" fpath = '/kaggle/input/santa-workshop-tour-2019/family_data.csv' data = pd.read_csv(fpath, index_col='family_id') fpath = '/kaggle/input/santa-workshop-tour-2019/sample_submission.csv' submission = pd.read_csv(fpath, index_col='family_id') fpath = '/kaggle/input/santa-s-2019-stochastic-product-search/submission_76177.csv' prediction = pd.read_csv(fpath, index_col='family_id').assigned_day.values # - family_sizes = data.n_people.values.astype(np.int8) # # Families Visualized # Quick look at the data and the distribution of family sizes and all that jazz. The families seem to be be mostly smaller than 5 families, and most would like to go right before Christmas, let's dive in: data.head() print("Average number of people per day:", sum(data['n_people'])//100) # + family_size = data['n_people'].value_counts().sort_index() family_size /= 50 plt.figure(figsize=(14,6)) ax = sns.barplot(x=family_size.index, y=family_size.values) ax.set_ylim(0, 1.1*max(family_size)) plt.xlabel('Family Size', fontsize=14) plt.ylabel('Percentage', fontsize=14) plt.title('Members in Family', fontsize=20) plt.show() # - # No surprises there. Most families between 2 and 5 with 4 being the highest. Some families in the long tail, sadly lacking information whether they're catholic. # + fave_day = data['choice_0'].value_counts().sort_index() plt.figure(figsize=(14,6)) ax = sns.barplot(x=fave_day.index, y=fave_day.values) plt.xlabel('$\Leftarrow$ Christmas this way!', fontsize=14) plt.ylabel('Wishes', fontsize=14) plt.title('Primary Choice', fontsize=20) plt.show() # + ninthfave_day = data['choice_9'].value_counts().sort_index() plt.figure(figsize=(14,6)) ax = sns.barplot(x=ninthfave_day.index, y=ninthfave_day.values) plt.xlabel('$\Leftarrow$ Christmas this way!', fontsize=14) plt.ylabel('Wishes', fontsize=14) plt.title('Last Choice', fontsize=20) plt.show() # - # So, Christmas day is the most wished for clearly. In between there are the weekends pretty steady. The weekday wishes decrease towards the farther days. This holds for the first as much as the last. # # Let's look at some costs: # # Costs Visualized # Set up a dict, because they're cheap and we only need two values accessible. We can use this one for later hopefully. Array would probably be fine though. I'm no software engineer. # + cost_dict = {0: [ 0, 0], 1: [ 50, 0], 2: [ 50, 9], 3: [100, 9], 4: [200, 9], 5: [200, 18], 6: [300, 18], 7: [300, 36], 8: [400, 36], 9: [500, 36 + 199], 10: [500, 36 + 398], } def cost(choice, members, cost_dict): x = cost_dict[choice] return x[0] + members * x[1] # - # Let's look at all the costs for each choice for each family. In the image, I'm dropping the choices `0` and `1`, because they're boring (constant for any number of members). # # It's really clear there is a big cliff for `choice_9` and `otherwise`, helicopters gonna helicopter. 
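# To make that cliff concrete, here is what the `cost` helper above returns for a family of 4 at a few choices (fixed gift card plus per-person buffet/helicopter, straight from the table):

# +
for choice in (0, 4, 8, 9, 10):
    print(choice, cost(choice, 4, cost_dict))
# prints: 0 0, 4 236, 8 544, 9 1440, 10 2236 -- choices 9 and 10 jump because of the helicopter rides
# -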
all_costs = {k: pd.Series([cost(k, x, cost_dict) for x in range(2,9)], index=range(2,9)) for k in cost_dict.keys()} df_all_costs = pd.DataFrame(all_costs) plt.figure(figsize=(14,11)) sns.heatmap(df_all_costs.drop([0, 1],axis=1), annot=True, fmt="g") plt.figure(figsize=(14,11)) sns.heatmap(df_all_costs.drop([0, 1, 9, 10],axis=1), annot=True, fmt="g") # We need this one to get a quick cost calculation and to be quite honest, we don't need to look at it technically, but the barcode is kinda cool: # + family_cost_matrix = np.zeros((100,len(family_sizes))) # Cost for each family for each day. for i, el in enumerate(family_sizes): family_cost_matrix[:, i] += all_costs[10][el] # populate each day with the max cost for j, choice in enumerate(data.drop("n_people",axis=1).values[i,:]): family_cost_matrix[choice-1, i] = all_costs[j][el] # fill wishes into cost matrix # - plt.figure(figsize=(40,10)) sns.heatmap(family_cost_matrix) plt.show() # # Accounting Visualized # It seems accounting can throw us off a bit. I cliped it to 4k similar to the gift-cards to make a point. But if you go to the maximum in the corner, the cost easily outpaces paying off every family to stay home. # # That's why you always take care off accounting. def accounting(today, previous): return ((today - 125) / 400 ) * today ** (.5 + (abs(today - previous) / 50)) # + acc_costs = np.zeros([176,176]) for i, x in enumerate(range(125,300+1)): for j, y in enumerate(range(125,300+1)): acc_costs[i,j] = accounting(x,y) plt.figure(figsize=(10,10)) plt.imshow(np.clip(acc_costs, 0, 4000)) plt.title('Accounting Cost') plt.colorbar() print("The maximum cost is a ridiculous:", acc_costs.max()) # - # # Optimized Cost Function [22μs] # This means we can now look up all of the costs for the actual optimization. # # Everyone would love to come on the weekend and of course on the day before Christmas, but ideally we should smooth it out somehow, so accounting doesn't bite us. @njit(fastmath=True) def cost_function(prediction, family_size, family_cost_matrix, accounting_cost_matrix): N_DAYS = 100 MAX_OCCUPANCY = 300 MIN_OCCUPANCY = 125 penalty = 0 accounting_cost = 0 max_occ = False daily_occupancy = np.zeros(N_DAYS + 1, dtype=np.int16) for i, (pred, n) in enumerate(zip(prediction, family_size)): daily_occupancy[pred - 1] += n penalty += family_cost_matrix[pred - 1, i] daily_occupancy[-1] = daily_occupancy[-2] for day in range(N_DAYS): n_next = daily_occupancy[day + 1] n = daily_occupancy[day] max_occ += MIN_OCCUPANCY > n max_occ += MAX_OCCUPANCY < n accounting_cost += accounting_cost_matrix[n-MIN_OCCUPANCY, n_next-MIN_OCCUPANCY] if max_occ: return 1e11 return penalty + accounting_cost cost_function(prediction, family_sizes, family_cost_matrix, acc_costs) # %timeit cost_function(prediction, family_sizes, family_cost_matrix, acc_costs) # Aw yiss. Thanks Crescenzi. I did choose not to return an array, but just a single cost, which apparently shaves off another $\mu s$, not sure if that matters. Let's use the [stochastic product search](https://www.kaggle.com/xhlulu/santa-s-2019-stochastic-product-search/data), because it seems really cool. Let's do two long rounds as well, this is a slight modification not using tqdm, I think more modifications would be great to make it work even better. # # # Versatile Stochastic Search def stochastic_product_search(original, choice_matrix, top_k=2, num_fam=8, n_time=None, n_iter=None, early_stop=np.inf, decrease=np.inf, random_state=42, verbose=1e4): """Randomly sample families, reassign and evaluate. 
At every iterations, randomly sample num_fam families. Then, given their top_k choices, compute the Cartesian product of the families' choices, and compute the score for each of those top_k^num_fam products. Both n_time and n_iter can be set, optimization stops when first one is reached. Arguments: original {1d Array} -- Initial assignment of families choice_matrix {2d Array} -- Choices of each family Keyword Arguments: top_k {int} -- Number of choices to include (default: {2}) num_fam {int} -- Number of families to sample (default: {8}) n_time {int} -- Maximum execution time (default: {None}) n_iter {int} -- Maximum number of executions (default: {None}) early_stop {int} -- Stop after number of stagnant iterations (default: {np.inf}) decrease {int} -- Decrease num_fam after number of stagnant iterations (default: {np.inf}) random_state {int} -- Set NumPy random state for reproducibility (default: {42}) verbose {int} -- Return current best after number of iterations (default: {1e4}) Example: best = stochastic_product_search( choice_matrix=data.drop("n_people", axis=1).values, top_k=3, num_fam=16, original=prediction, n_time=int(3600), n_iter=int(1e5), early_stop=5e6, decrease=5e4, verbose=int(5e3), ) Returns: [1d Array] -- Best assignment of families """ np.random.seed(random_state) i = 0 early_i = 0 opt_time = time() if n_time: max_time = opt_time + n_time else: max_time = opt_time if n_iter: max_iter = n_iter else: max_iter = 0 best = original.copy() best_score = cost_function(best, family_sizes, family_cost_matrix, acc_costs) while ((max_time - time() > 0) or (not n_time)) and ((i < max_iter) or (not n_iter)) and (early_i < early_stop): fam_indices = np.random.choice(choice_matrix.shape[0], size=num_fam) for change in np.array(np.meshgrid(*choice_matrix[fam_indices, :top_k])).T.reshape(-1,num_fam): new = best.copy() new[fam_indices] = change new_score = cost_function(new, family_sizes, family_cost_matrix, acc_costs) if new_score < best_score: best_score = new_score best = new early_i = 0 else: early_i += 1/num_fam if (early_i > decrease) and (num_fam > 2): num_fam -= 1 early_i = 0 if verbose: print(f"Decreasing sampling size to {num_fam}.") if verbose and i % verbose == 0: print(f"Iteration {i:05d}:\t Best score is {best_score:.2f}\t after {(time()-opt_time)//60:.0f} minutues {(time()-opt_time)%60:.0f} seconds.") i += 1 print(f"Final best score is {best_score:.2f} after {(time()-opt_time)//60:.0f} minutues {(time()-opt_time)%60:.0f} seconds.") print(f"Each loop took {1000*(time()-opt_time)/i:.2f} ms.") return best # This is just a quick demonstration for the kernel: best = stochastic_product_search( prediction, choice_matrix=data.drop("n_people", axis=1).values, top_k=1, num_fam=32, n_time=int(60), early_stop=5e6, decrease=5e4, verbose=int(5e3), ) # Now let's just bundle this up and send it off to the elves. submission['assigned_day'] = best final_score = cost_function(best, family_sizes, family_cost_matrix, acc_costs) submission.to_csv(f'submission_{int(final_score)}.csv') # ## Happy Kaggling # I think we can clearly improve this further, large families seem quite costly and there's a trade-off between caching solutions and simply calculating them, considering the optimization of the cost function. Nevertheless, random search seems to be a pretty good fit for this problem. Maybe Genetic algorithms would also work. Either way, this is a small overview over the data analysis. # # Don't forget to upvote the other kernels, they did amazing work. 
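
# As a possible follow-up (an added sketch, not part of the original run): the two longer rounds mentioned above can simply be chained with the helper we already have, first sampling many families shallowly and then fewer families more deeply. The time budgets below are placeholders.

# +
# Added sketch: two successive, longer rounds with the function defined above.
# The n_time values are placeholders; adjust them to the remaining kernel runtime.
best = stochastic_product_search(
    best,
    choice_matrix=data.drop("n_people", axis=1).values,
    top_k=2,
    num_fam=16,
    n_time=int(3 * 60 * 60),
    early_stop=5e6,
    decrease=5e4,
    verbose=int(5e3),
)

best = stochastic_product_search(
    best,
    choice_matrix=data.drop("n_people", axis=1).values,
    top_k=3,
    num_fam=8,
    n_time=max(int(end_time - time()), 60),  # whatever budget is left
    early_stop=5e6,
    decrease=5e4,
    verbose=int(5e3),
)

refined_score = cost_function(best, family_sizes, family_cost_matrix, acc_costs)
submission['assigned_day'] = best
submission.to_csv(f'submission_{int(refined_score)}.csv')
# -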
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Okfr_uhwhS1X" colab_type="text" # # Lambda School Data Science - Making Data-backed Assertions # # This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. # + [markdown] id="lOqaPds9huME" colab_type="text" # ## Assignment - what's going on here? # # Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people. # # Try to figure out which variables are possibly related to each other, and which may be confounding relationships. # # Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack! # + id="F1_ORL6jt9Nj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="a7e89095-0dc9-467c-e4cb-3711a279e31b" # !pip install pandas==0.23.4 #make sure to "restart runtime" to ensure this will run, even if it install and says its satisfied. Runtime-->Restart Runtime and it will work #loading this at the top to ensure it works anytime i want to take a look at the crosstabs # + id="TGUS79cOhPWj" colab_type="code" outputId="4b60cb4a-7b93-4e2a-9fd5-0dbe084401c5" colab={"base_uri": "https://localhost:8080/", "height": 197} # TODO - your code here # Use what we did live in lecture as an example # HINT - you can find the raw URL on GitHub and potentially use that # to load the data with read_csv, or you can upload it yourself import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv') df.head() # + id="NmLiFz7Azorp" colab_type="code" outputId="c9d74eb8-730d-471f-abc2-0bb3825f10c0" colab={"base_uri": "https://localhost:8080/", "height": 34} df.shape # + id="UxqlEKzDzr_Z" colab_type="code" outputId="7bb89c2c-53db-46f4-86a5-10c02244acb3" colab={"base_uri": "https://localhost:8080/", "height": 105} df.dtypes # + id="Cg4DUl_Iz2RV" colab_type="code" outputId="c79dcc92-4da1-4476-b643-34a2a392620b" colab={"base_uri": "https://localhost:8080/", "height": 287} #all integers so lets check some summary statistics df.describe() # + id="kvde5uS90DtN" colab_type="code" outputId="0562eb10-969e-4fd8-810f-4f6a3f0ba793" colab={"base_uri": "https://localhost:8080/", "height": 283} df.plot(x='age', y='weight', kind='scatter') ; # + id="LDlMmJx_1LNW" colab_type="code" outputId="39dccae7-ee7b-4220-d7e7-23e9665fb58c" colab={"base_uri": "https://localhost:8080/", "height": 284} df.plot(x='exercise_time', y='weight', kind='scatter', color='r') ; # + id="x9EAEp12199g" colab_type="code" outputId="d3e334a3-4ffd-438c-f632-b153d52b9683" colab={"base_uri": "https://localhost:8080/", "height": 284} df.plot(x='exercise_time', y='age', kind='scatter', color='r') ; # + id="7GuNkTPOGbBb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 257} outputId="29517807-24b1-4665-aa6e-eb432f0f3889" 
df.plot.hexbin(x='exercise_time', y='age', gridsize=20); #trying out different plots # + id="9x6t6c0p5C3G" colab_type="code" colab={} #both weight and age are related to exercise time, thus making exercise time the confounding variable # + id="loAhA93nrPck" colab_type="code" outputId="a84a8c14-e7bd-4844-ba04-d9f8f82dcfd4" colab={"base_uri": "https://localhost:8080/", "height": 227} #bin the weights, and ages using the crosstab to see if you can pull a more meaningful relationship #need to bin all three of the parameters #it makes sense to bin all the variables so you only have a few categories #pd.crosstab(df['age'], df['exercise_time']) exercise_time_bins = pd.cut(df['exercise_time'], 5) age_bins = pd.cut(df['age'], 5) weight_bins = pd.cut(df['weight'],5) #pd.crosstab(df['exercise_time'], exercise_time_bins, normalize='columns') #pd.crosstab(df['age'], age_bins, normalize='columns') #pd.crosstab(df['weight'], weight_bins, normalize='columns') #pd.crosstab(exercise_time_bins, df['age']) pd.crosstab(exercise_time_bins, age_bins, normalize='columns') #still gives error after downgrading to older version #print('This is a better way to split the data up. We can see five different age groups and the bins of the percentage of exercise time for each age group.') # + id="j7ovfJPZsm7y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 227} outputId="9f593f97-ea3f-4b06-a05f-74b85f54058b" pd.crosstab(exercise_time_bins, weight_bins, normalize='columns') # + id="5DVDA3O2sm3o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 227} outputId="90fdf9bd-466b-4c76-b205-b969be3e4e3f" wa = pd.crosstab(weight_bins, age_bins, normalize='columns') wa # + id="m5GinvL0smu4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 227} outputId="22e0d82b-569a-4e64-f19c-fad9b07af697" wa_subset = wa.iloc[:, [0,1,2]] #the ":" is specifying ALL rows, and the list is specifying which columns wa_subset # + id="L58voteICd3H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 273} outputId="b9daa0af-4120-476b-e24f-3c7d6f414f87" wa_subset.plot(); # + id="kJUAWg-gC_QC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 227} outputId="7f6e1043-31e9-462d-c07b-aeb956fe2544" a_t = pd.crosstab(exercise_time_bins, age_bins, normalize='columns') a_t # + id="k4BZS0fJDoJ1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 344} outputId="ffea2150-b30c-438f-f2e9-9aa647f73031" #plotting the age bins to exercise time. This shows some interesting relationships. You can clearly see the higher age bracket drops off considerably a_t.plot(kind='bar',style='ggplot'); # + id="YctPD_xxJSwc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 453} outputId="033b995b-afae-4593-e4b1-bdad1273b5b8" a_t.plot(kind='bar',stacked=True, figsize=(8, 6)); # + id="cDtTPphLDO50" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 227} outputId="7ab6c566-5289-4e30-9b64-959d446048cd" #using the iloc feature to segment the data to visualize it better a_t_subset = a_t.iloc[:,[0,1,2]] a_t_subset # + id="v3-XW9yMDcRm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 344} outputId="a22b8305-bc7a-467c-c366-cba278e0ca2d" a_t_subset.plot(kind='bar'); #playing with different age groups # + [markdown] id="BT9gdS7viJZa" colab_type="text" # ### Assignment questions # # After you've worked on some code, answer the following questions in this text block: # # 1. 
What are the variable types in the data? # 2. What are the relationships between the variables? # 3. Which relationships are "real", and which spurious? # # + [markdown] id="nhmrJGapoM7k" colab_type="text" # 1. The variable types in the data are all integers. # # numerical --> ordinal - set order, discrete - uses whole numbers like age # categorical --> # + [markdown] id="syCMCXoJoSi9" colab_type="text" # 2. The relationships between the variables lead to a conclusion that both weight and age are related to exercise time, making exercise time the confounding variable and obfuscating the possible relationship between age and weight. One possible conclusion you can draw from a persons age as it relates to their weight is that it is evenly distributed. # + [markdown] id="qkieeiiUpisJ" colab_type="text" # 3. # Real: # * The amount of time spent exercising as a person ages starts to drop considerably after age 60. # * The amount of time spent exercising as a person grows past a weight of around 150-160 lbs drops considerably as well. # # Spurious: # * The relationship between age and weight in this data set. # # # + [markdown] id="_XXg2crAipwP" colab_type="text" # ## Stretch goals and resources # # Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub. # # - [Spurious Correlations](http://tylervigen.com/spurious-correlations) # - [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/) # # Stretch goals: # # - Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it) # - Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn) # + id="MDwqeoIdxlSv" colab_type="code" colab={} #reminder after looking at this data: correlation does not equal causation # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # + arr = np.arange(1,10) print(arr) print(arr+2) print(arr-2) print(arr*2) arr=arr**2 arr # - print('First Four : {}'.format(arr[:4])) print('Last Four : {}'.format(arr[-4:])) print('Start at 2, exclude last 2 : {}'.format(arr[2:-2])) print('Start at 0, pick odd indexes : {}'.format(arr[0:arr.size:2])) print('Sort descending : {}'.format(arr[::-1])) arr = np.array([ ['Alice', 'Beth','Cathy', 'Dorothy'], ['65', '78', '90','81'], ['71', '82', '79', '92'] ]) print(arr) arr[...,-2:] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Paper Graphics # version import datetime print(datetime.date.today()) # %matplotlib notebook run plots.py prec = 500 # prepath where to save the figures savepath = "./figs/" # savepath = "./" # ## Policy Classification: opt, sus, safe # + δ = 1.0 ρ = 0.2 γ = 1.0 rh = 1.0 rl = 0.5 #rmin = 0.2 rmin = 0.3 #rl = 0.4 #rmin = 0.3 xaxis = "δ" yaxis = "γ" # + fig, ax = plt.subplots(1, 3, sharey='row', sharex="col", figsize=(10, 4) ) bott=0.24 topp=0.9 leftt=0.06 rightt=0.98 smallfontsize = 8.5 # colors ov = 0.6 risky_opt_color = (1.0, ov, 0) 
cautious_opt_color = (1.0, 0, ov) both_sus_color = (ov, ov, 1.0) cautious_sus_color = (0.0, ov, 1.0) cautious_safe_color = (0.0, 1.0, ov) plot_optimal_policies(δ, ρ, γ, rh, rl, xaxis=xaxis, yaxis=yaxis, prec=prec, riskyoptcolor=risky_opt_color, cautiousoptcolor=cautious_opt_color, ax=ax[0]) plot_sustainble_policies(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, bothsuscolor=both_sus_color, cautioussuscolor=cautious_sus_color, ax=ax[1]) plot_SOS_policies(δ, ρ, γ, rh, rl, xaxis=xaxis, yaxis=yaxis, prec=prec, cautiousafecolor=cautious_safe_color, ax=ax[2]) # clear labels for a in ax: a.set_ylabel("") a.set_xlabel("") # titles yight = 1.07 fsize = 14 ax[0].annotate("Optimization paradigm", (0.5, yight), xycoords="axes fraction", ha="center", va="center", fontsize=fsize) ax[1].annotate("Sustainability paradigm", (0.5, yight), xycoords="axes fraction", ha="center", va="center", fontsize=fsize) ax[2].annotate("Safe Operating Space\nparadigm", (0.5, yight), xycoords="axes fraction", ha="center", va="center", fontsize=fsize) # numbers ax[0].annotate("a", (0.0, yight), xycoords="axes fraction", weight='demibold', ha="left", va="center", fontsize=12) ax[1].annotate("b", (0.0, yight), xycoords="axes fraction", weight='demibold', ha="left", va="center", fontsize=12) ax[2].annotate("c", (0.0, yight), xycoords="axes fraction", weight='demibold', ha="left", va="center", fontsize=12) # add labels xlab = r"Collapse probability $\delta$" ylab = r"Discount factor $\gamma$" labelfsize = 10 for axis in ax: axis.set_xlabel(xlab) axis.tick_params(axis='both', which='major', labelsize=smallfontsize) axis.xaxis.label.set_fontsize(labelfsize) for axis in [ax[0]]: axis.set_ylabel(ylab) axis.tick_params(axis='both', which='major', labelsize=smallfontsize) axis.yaxis.label.set_fontsize(labelfsize) # Legend plt.subplots_adjust(bottom=bott, top=topp, left=leftt, right=rightt) lfontsize = 9 dy = 0.05 legendtopbottom = 0.07 # - left column left = 0.1 ro = fig.add_axes((left, legendtopbottom, 0.02,0.02), frameon=False, xticks=[], yticks=[]) ro.fill_between(np.linspace(0,1,prec), 0, 1, color=risky_opt_color) ro.annotate(" only risky policy optimal", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) so= fig.add_axes((left, legendtopbottom-dy, 0.02, 0.02), frameon=False, xticks=[], yticks=[]) so.fill_between(np.linspace(0,1,prec), 0, 1, color=cautious_opt_color) so.annotate(" only cautious policy optimal", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # - mid column left = 0.41 bs = fig.add_axes((left, legendtopbottom, 0.02,0.02), frameon=False, xticks=[], yticks=[]) bs.fill_between(np.linspace(0,1,prec), 0, 1, color=both_sus_color) bs.annotate(" both policies sustainable", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) ss = fig.add_axes((left, legendtopbottom-dy, 0.02,0.02), frameon=False, xticks=[], yticks=[]) ss.fill_between(np.linspace(0,1,prec), 0, 1, color=cautious_sus_color) ss.annotate(" only cautious policy sustainable", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # - right column left = 0.746 bsos = fig.add_axes((left, legendtopbottom, 0.02,0.02), frameon=False, xticks=[], yticks=[]) bsos.fill_between(np.linspace(0,1,prec), 0, 1, color=cautious_safe_color) bsos.annotate(" only cautious policy safe", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # space plt.subplots_adjust(wspace=0.1) plt.savefig(savepath + "Results_Policies.png", 
dpi=300) # - # ## Policy Combinations # + fig = plt.figure(figsize=(7, 11)) # ======================== # AXES # ======================== leftax = 0.1 rightax = 0.98 bottax = 0.05 highax = 0.55 dx=0.01 dy=0.008 ax11 = plt.axes([leftax, bottax, 0.5*(rightax-leftax)-dx, 0.5*(highax-bottax)-dy]) ax21 = plt.axes([leftax, dy+bottax+0.5*(highax-bottax), 0.5*(rightax-leftax)-dx, 0.5*(highax-bottax)-dy]) ax12 = plt.axes([dx+leftax+0.5*(rightax-leftax), bottax, 0.5*(rightax-leftax)-dx, 0.5*(highax-bottax)-dy]) ax22 = plt.axes([dx+leftax+0.5*(rightax-leftax), dy+bottax+0.5*(highax-bottax), 0.5*(rightax-leftax)-dx, 0.5*(highax-bottax)-dy]) #ax3 = plt.axes([-d3 , -0.03+bottax+0.2*highax, # 0.04+leftax, 0.03+0.6*(highax-bottax)]) # ========================== # Plottings # ========================== # ρ = 0.2 rmin = 0.3 _plot_PolicyCombinations(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, policy="risky", ax=ax21) ax21.set_title("Risky Policy") ax21.set_ylabel(r"Discount factor $\gamma$") ax21.set_xlabel("") ax21.set_xticklabels([]) _plot_PolicyCombinations(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, policy="safe", ax=ax22) ax22.set_title("Cautious Policy") ax22.set_ylabel(r"") ax22.set_xlabel(r"") ax22.set_yticklabels([]) ax22.set_xticklabels([]) rmin = 0.7 # ====================================== _plot_PolicyCombinations(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, policy="risky", ax=ax11) ax11.set_xlabel(r"Collapse probability $\delta$") ax11.set_ylabel(r"Discount factor $\gamma$") _plot_PolicyCombinations(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, policy="safe", ax=ax12) ax12.set_xlabel(r"Collapse probability $\delta$") ax12.set_yticklabels([]) # Threshold labels ax12.annotate(r"High normative threshold $r_{min}$", (-0.03, 0.5), xycoords="axes fraction", ha="center", va="center", rotation=0, bbox=dict(boxstyle="round", fc=(1.,1.,1.), ec=(0., .0, .0))) ax22.annotate(r"Low normative threshold $r_{min}$", (-.03, 0.5), xycoords="axes fraction", ha="center", va="center", rotation=0, bbox=dict(boxstyle="round", fc=(1.,1.,1.), ec=(0., .0, .0))) d3=0.04 ax3 = plt.axes([0, highax+d3, 0.8, 1-highax-1.1*d3 ]) image = plt.imread("figs/Paradigms_Legend.png") ax3.imshow(image) ax3.set_yticks([]) ax3.axis('off') # clear x- and y-axes # Trefoil Legend left = 0.75 dy = 0.03 legendtopbottom = highax+d3+(1-highax-d3)/2 + 1.2*dy #0.02+highax+d3+3*dy lfontsize = 10 cv = 200/255. 
l1 = fig.add_axes((left, legendtopbottom, 0.06,0.02), frameon=False, xticks=[], yticks=[]) l1.fill_between(np.linspace(0,0.5,prec), 0, 1, color=(cv, cv, 0.)) l1.fill_between(np.linspace(0.5,1,prec), 0, 1, color=(cv, 0, 0)) l1.annotate(" opt & not sus", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) l2 = fig.add_axes((left, legendtopbottom-dy, 0.06,0.02), frameon=False, xticks=[], yticks=[]) l2.fill_between(np.linspace(0,0.5,prec), 0, 1, color=(cv, 0, 0.)) l2.fill_between(np.linspace(0.5,1,prec), 0, 1, color=(cv, 0, cv)) l2.annotate(" opt & not safe", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) l3 = fig.add_axes((left, legendtopbottom-2*dy, 0.06,0.02), frameon=False, xticks=[], yticks=[]) l3.fill_between(np.linspace(0,0.5,prec), 0, 1, color=(0, cv, 0.)) l3.fill_between(np.linspace(0.5,1,prec), 0, 1, color=(cv, cv, 0)) l3.annotate(" safe & not sus", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) l4 = fig.add_axes((left, legendtopbottom-3*dy, 0.06,0.02), frameon=False, xticks=[], yticks=[]) l4.fill_between(np.linspace(0,0.5,prec), 0, 1, color=(0, 0, cv)) l4.fill_between(np.linspace(0.5,1,prec), 0, 1, color=(cv, 0, cv)) l4.annotate(" sus & not safe", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # number labels ax3.annotate("a", (0.04, 0.94), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12) ax21.annotate("b", (0.95, 0.94), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="white") ax11.annotate("c", (0.95, 0.94), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="white") ax22.annotate("d", (0.95, 0.94), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="black") ax12.annotate("e", (0.95, 0.94), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="black") plt.savefig(savepath + "Results_PolicyTrefoil.png", dpi=300) # - # ## Real world examples # + # revision update fig, ax = plt.subplots(1, 3, figsize=(10, 4) ) δ = (0.0, 1.0) ρ = (0.0, 1.0) γ = (0.95, 0.99) rh = (1.0, 1.0) #rl = 0.4 #rmin = 0.3 rl = (0.3, 0.7) #rl = (0.25, 0.75) # rmin = (0.3, 0.5) rmin = (0.1, 0.5) xaxis = "δ" yaxis = "ρ" prec = 101 uprec = 21 plot_PolicyCombinations_withUncertainty(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, plotprec=prec, uprec=uprec, policy="risky", ax=ax[0]) ax[0].set_title("Risky Policy") ax[0].set_xlabel(r"Collapse probability $\delta$") ax[0].set_ylabel(r"Recovery probability $\rho$") plot_PolicyCombinations_withUncertainty(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, plotprec=prec, uprec=uprec, policy="cautious", ax=ax[1]) ax[1].set_title("Cautious Policy") ax[1].set_yticklabels([]) ax[1].set_ylabel("") ax[1].set_xlabel(r"Collapse probability $\delta$") image = plt.imread("figs/Paradigms_Legend_Sparse.png") ax[2].imshow(image) ax[2].set_yticks([]) ax[2].axis('off') # clear x- and y-axes ax0 = plt.axes([0.11, 0.2, 0.22, 0.6]) ax1 = plt.axes([0.425, 0.2, 0.22, 0.6]) δ2 = (0.0, 0.055) ρ2 = (0.0, 0.022) plot_PolicyCombinations_withUncertainty(δ2, ρ2, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, plotprec=prec, uprec=uprec, policy="risky", ax=ax0) plot_PolicyCombinations_withUncertainty(δ2, ρ2, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, plotprec=prec, uprec=uprec, policy="cautious", ax=ax1) ax0.spines['bottom'].set_color('white') ax0.spines['top'].set_color('white') 
ax0.spines['left'].set_color('white') ax0.spines['right'].set_color('white') for t in ax0.xaxis.get_ticklines(): t.set_color('white') for t in ax0.yaxis.get_ticklines(): t.set_color('white') for t in ax0.xaxis.get_ticklabels(): t.set_color("white") for t in ax0.yaxis.get_ticklabels(): t.set_color("white") ax1.set_xticks([0.0, 0.025, 0.05]) ax0.set_xticks([0.0, 0.025, 0.05]) ax0.set_yticks([0.0, 0.01, 0.02]) ax1.set_yticks([0.0, 0.01, 0.02]) ax1.set_xlabel("") ax0.set_xlabel("") ax1.set_ylabel("") ax0.set_ylabel("") props = lambda years: 1/(years+1) #ax0.plot([props(30), props(50)], [0, 0], lw=4, color="yellow") ax0.annotate("Climate", (0.025, 0.0), xycoords="data", color="w", ha="center", va="bottom") ax1.annotate("Climate", (0.025, 0.0), xycoords="data", color="k", ha="center", va="bottom") ax0.annotate("Fisheries", (0.045, 0.02), xycoords="data", color="w", ha="center", va="center") ax1.annotate("Fisheries", (0.045, 0.02), xycoords="data", color="k", ha="center", va="center") ax0.annotate("Farming", (0.01, 0.003), xycoords="data", color="w", ha="center", va="bottom") ax1.annotate("Farming", (0.01, 0.003), xycoords="data", color="k", ha="center", va="bottom") ax[0].annotate("a", (0.95, 0.95), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="white") ax[1].annotate("b", (0.95, 0.95), xycoords="axes fraction", ha="center", va="center", weight='demibold', fontsize=12, color="k") plt.subplots_adjust(wspace=0.08, left=0.06, right=0.98) plt.savefig(savepath + "Policies_RealWorld.png", dpi=300) # - # ## Volumes # %run plots.py plot_ParadigmVolumes() plt.tight_layout() plt.savefig(savepath + "VolumesOfParadigmCombinations.png", dpi=300) # ## Methods Fig: Acceptable States + Sustainable Policies # %run plots.py δ = 1.0 ρ = 0.2 γ = 1.0 rh = 1.0 rl = 0.5 rmin = 0.3 xaxis = "δ" yaxis = "γ" prec = 501 # + fig, ax = plt.subplots(2, 2, sharey='row', sharex="col", figsize=(5, 5) ) bott=0.28 topp=0.9 leftt=0.19 rightt=0.94 smallfontsize = 8.5 # colors ov = 0.6 tol_underboth_color = (1.0, 1.0, 0.0) tol_undercautious_color = (1.0-ov, 1.0, 0.0) tol_underrisky_color = (1.0, 1.0-ov, 0.0) tol_underno_color = (1.0-ov, 1.0-ov, 0.0) both_sus_color = (ov, ov, 1.0) cautious_sus_color = (0.0, ov, 1.0) # the actual plot plot_acceptal_states(δ, ρ, γ, rh, rl, rmin, bothacceptcolor=tol_underboth_color, cautiousacceptcolor=tol_undercautious_color, riskyacceptcolor=tol_underrisky_color, nonacceptcolor=tol_underno_color, state="degraded", xaxis="δ", yaxis="γ", prec=prec, ax=ax[1,0]) plot_acceptal_states(δ, ρ, γ, rh, rl, rmin, bothacceptcolor=tol_underboth_color, cautiousacceptcolor=tol_undercautious_color, riskyacceptcolor=tol_underrisky_color, nonacceptcolor=tol_underno_color, state="prosperous", xaxis="δ", yaxis="γ", prec=prec, ax=ax[0,0]) plot_sustainble_policies(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, bothsuscolor=both_sus_color, cautioussuscolor=cautious_sus_color, ax=ax[1,1]) plot_sustainble_policies(δ, ρ, γ, rh, rl, rmin, xaxis=xaxis, yaxis=yaxis, prec=prec, bothsuscolor=both_sus_color, cautioussuscolor=cautious_sus_color, ax=ax[0,1]) #plot_acceptable_region_to_ax(a, b, g, hhh, hlh, rmin=r_min, axes=ax.T[0]) #plot_suspol_to_ax(a, b, g, hhh, hlh, rmin=r_min, axes=ax.T[1]) # clear labels for axes in ax: map(lambda ax: ax.set_ylabel(""), axes) map(lambda ax: ax.set_xlabel(""), axes) # titles yight = 1.15 fsize = 14 ax[0, 0].annotate("Acceptable states", (0.5, yight), xycoords="axes fraction", ha="center", va="bottom", fontsize=fsize) ax[0, 
1].annotate("Sustainable policies", (0.5, yight), xycoords="axes fraction", ha="center", va="bottom", fontsize=fsize) fig.subplots_adjust(left=0.2) ax[0, 0].annotate("Prosperous state", (-0.4, 0.5), xycoords="axes fraction", ha="right", va="center", rotation=90, fontsize=13) ax[1, 0].annotate("Degraded state", (-0.4, 0.5), xycoords="axes fraction", ha="right", va="center", rotation=90, fontsize=13) # numbers ax[0, 0].annotate("a", (0.95, 0.97), xycoords="axes fraction", ha="right", va="top", fontsize=12) ax[1, 0].annotate("b", (0.95, 0.97), xycoords="axes fraction", ha="right", va="top", fontsize=12) ax[0, 1].annotate("c", (0.95, 0.97), xycoords="axes fraction", ha="right", va="top", fontsize=12) ax[1, 1].annotate("d", (0.95, 0.97), xycoords="axes fraction", ha="right", va="top", fontsize=12) # add labels xlab = r"Collapse probability $\delta$" ylab = r"Discount factor $\gamma$" labelfsize = 11 for axis in ax[1]: axis.set_xlabel(xlab) axis.tick_params(axis='both', which='major', labelsize=smallfontsize) axis.xaxis.label.set_fontsize(labelfsize) for axis in ax.T[0]: axis.set_ylabel(ylab) axis.tick_params(axis='both', which='major', labelsize=smallfontsize) axis.yaxis.label.set_fontsize(labelfsize) ax[0, 1].set_ylabel("") ax[1, 1].set_ylabel("") ax[0, 0].set_xlabel("") ax[0, 1].set_xlabel("") # Legend plt.subplots_adjust(bottom=bott, top=topp, left=leftt, right=rightt) lfontsize = 7.5 dy = 0.04 legendtopbottom = 0.15 # - left column left = 0.16 tuboth = fig.add_axes((left, legendtopbottom, 0.02,0.02), frameon=False, xticks=[], yticks=[]) tuboth.fill_between(np.linspace(0,1,prec), 0, 1, color=tol_underboth_color) tuboth.annotate(" acceptable under both policies", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) tusafe = fig.add_axes((left, legendtopbottom-dy, 0.02,0.02), frameon=False, xticks=[], yticks=[]) tusafe.fill_between(np.linspace(0,1,prec), 0, 1, color=tol_undercautious_color) tusafe.annotate(" acceptable only under cautious policy", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) turisky = fig.add_axes((left, legendtopbottom-2*dy, 0.02,0.02), frameon=False, xticks=[], yticks=[]) turisky.fill_between(np.linspace(0,1,prec), 0, 1, color=tol_underrisky_color) turisky.annotate(" acceptable only under risky policy", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) tuno = fig.add_axes((left, legendtopbottom-3*dy, 0.02,0.02), frameon=False, xticks=[], yticks=[]) tuno.fill_between(np.linspace(0,1,prec), 0, 1, color=tol_underno_color) tuno.annotate(" not acceptable under both policies", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # - right column left = 0.59 bs = fig.add_axes((left, legendtopbottom, 0.02,0.02), frameon=False, xticks=[], yticks=[]) bs.fill_between(np.linspace(0,1,prec), 0, 1, color=both_sus_color) bs.annotate(" both policies sustainable", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) ss = fig.add_axes((left, legendtopbottom-dy, 0.02,0.02), frameon=False, xticks=[], yticks=[]) ss.fill_between(np.linspace(0,1,prec), 0, 1, color=cautious_sus_color) ss.annotate(" only cautious policy sustainable", (1.0, 0.5), xycoords="axes fraction", ha="left", va="center", fontsize=lfontsize) # space plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.savefig(savepath + "Results_AcceptSus.png") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Header # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import math as math import janitor from sklearn.linear_model import LinearRegression import os exec(open("../header.py").read()) # - # # Import raw_data = pd.read_csv(processed_root("01-poetryfoundation/poetry_foundation.csv")) raw_data.head() raw_data.columns # # Clean data_threshold_30 = raw_data.loc[lambda x:x.author_poem_count >= 30] data_threshold_40 = raw_data.loc[lambda x:x.author_poem_count >= 40] data_threshold_50 = raw_data.loc[lambda x:x.author_poem_count >= 50] # ## Train-validation-test split def train_val_test_split(data, splits = [0.7, 0.2, 0.1]): if not np.isclose(sum([0.7,0.2,0.1]), 1): raise RuntimeError("Splits must sum to 1.") train_pt = splits[0] val_pt = splits[0] + splits[1] train_data = data.loc[lambda x:x.author_poem_pct <= train_pt] val_data = data.loc[lambda x: (x.author_poem_pct > train_pt)& (x.author_poem_pct <= val_pt)] test_data = data.loc[lambda x:x.author_poem_pct > val_pt] train_data.name = 'train_data.csv' val_data.name = 'val_data.csv' test_data.name = 'test_data.csv' return train_data, val_data, test_data train_30, val_30, test_30 = train_val_test_split(data_threshold_30) train_40, val_40, test_40 = train_val_test_split(data_threshold_40) train_50, val_50, test_50 = train_val_test_split(data_threshold_50) # # Save datasets def save_datasets(dfs, save_folder): for df in dfs: try: df.to_csv(save_folder + "/" + df.name, index = False) except FileNotFoundError: os.mkdir(save_folder) df.to_csv(save_folder + "/" + df.name, index = False) save_datasets([train_30, val_30, test_30], save_folder = processed_root("02-train-validation-test-split/threshold-30")) save_datasets([train_40, val_40, test_40], save_folder = processed_root("02-train-validation-test-split/threshold-40")) save_datasets([train_50, val_50, test_50], save_folder = processed_root("02-train-validation-test-split/threshold-50")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.model_selection import GridSearchCV # Let's begin with a very simple synthetic dataset. x = np.linspace(0, 1, 1000) # between 0 and 1 y = 3 * x**2 - 2 * x + 12 + np.random.randn(1000) * 0.2 # a second-order polynomial in x, with some noise plt.plot(x, y) # We're going to fit this with a standard polynomial regression. We don't know what degree the data is, so we'll cross-validate against that parameter and select the best model. # + model = Pipeline([ ("polynomial_features", PolynomialFeatures()), # let's take linear x and make polynomial features from it ("linear_regression", LinearRegression(fit_intercept=True)) # then a linear regression on the polynomial features ]) # Note that the LinearRegression has fit_intercept=True because we always want to fit an intercept, # and this is not a parameter we will ever be varying. 
We talked about this at the pub - in linear regressions, # there's no need to scale variables ( LinearRegression does offer a "normalize" argument ), but we want # to look at the actual coefficients that come out of our fits because they have real-world meaning. # Therefore, we don't normalize, and as such, we must allow the linear regression to fit an intercept. # Now we decide on what hyperparameters of the model we allow to vary. # In this case, we'll vary the degree of the polynomial. parameters = { "polynomial_features__degree": [1, 2, 3, 4, 5], } # - # We have a model set out. Now let's fit it and see what the result is like. x.reshape(-1, 1).shape # + gridsearch = GridSearchCV( model, # our pipeline parameters, # the parameters of the pipeline that we want to vary n_jobs=-1, # use all cores on our computer scoring="neg_mean_squared_error", # evaluate the model using mean squared error cv=5 # and perform 5-fold cross-validation to score the model ) gridsearch.fit(x.reshape(-1, 1), y) # sklearn wants *observations* in the first axis and *features* in the second axis # - # So how did we do ? print("Best MSE score ( lower is better ) : {}".format(gridsearch.best_score_)) print("Best hyperparameters : {}".format(gridsearch.best_params_)) # Looks like we did well, and our model selected the correct polynomial for us : degree 2. # Let's look at the actual model from within our gridsearch object. best_model = gridsearch.best_estimator_ print(best_model) # Great, we've recovered a fully tuned pipeline from our gridsearch. Let's make predictions and compare. # + # Predict y_hat from our input features predictions = best_model.predict(x.reshape(-1, 1)) # Plot to compare plt.scatter(x, y, alpha=0.3) plt.plot(x, predictions, lw=5, c="k") # - # Great. Onto a more complicated dataset, with multiple features. # + # Features : number of rooms, number of baths, and square footage of the house df = pd.DataFrame({ "rooms": np.random.randint(1, 6, 1000), # between one and six rooms "baths": np.random.randint(1, 4, 1000), # between one and four baths "sqft": np.random.rand(1000) * 5000 # between 0 and 5000 sqft }) # GROUND TRUTH : house price ( we don't know this relation ) df["price"] = df.rooms * 400 + \ df.baths * 200 + \ df.sqft * 10 + \ df.sqft ** 2 + \ df.rooms * df.baths * df.sqft * 0.5 + \ np.random.randn(1000) * 50000 df.head() # - # Let's build the same model and grid search over the polynomial degree but also over whether we only want interaction terms or if we want to allow cross-terms. Our ground truth is a degree-3 polynomial with both cross- and interaction-terms. # + model = Pipeline([ ("polynomial_features", PolynomialFeatures()), ("linear_regression", LinearRegression(fit_intercept=True)) ]) parameters = { "polynomial_features__degree": [1, 2, 3, 4, 5], "polynomial_features__interaction_only": [True, False] } gridsearch = GridSearchCV( model, parameters, n_jobs=-1, scoring="neg_mean_squared_error", cv=5 ) gridsearch.fit(df[["rooms", "baths", "sqft"]], df.price) print("Our best model has the parameters : \n{}".format(gridsearch.best_params_)) # - # Let's make predictions and compare to our data. 
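
# Before that, one added aside (not from the original write-up): since we deliberately skipped normalization to keep the coefficients interpretable, they can be read straight off the tuned pipeline. `get_feature_names` is the older scikit-learn spelling; newer releases call it `get_feature_names_out`.

# +
# Added sketch: pair the fitted coefficients with their polynomial feature names.
poly = gridsearch.best_estimator_.named_steps["polynomial_features"]
linreg = gridsearch.best_estimator_.named_steps["linear_regression"]

feature_names = poly.get_feature_names(["rooms", "baths", "sqft"])  # get_feature_names_out() on newer scikit-learn
for name, coef in zip(feature_names, linreg.coef_):
    print("{:>20s} : {:15.3f}".format(name, coef))
print("{:>20s} : {:15.3f}".format("intercept", linreg.intercept_))
# -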
df["yhat"] = gridsearch.best_estimator_.predict(df[["rooms", "baths", "sqft"]]) df.head(10) # Residuals plt.scatter(range(1000), df.yhat - df.price) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import make_blobs datos = make_blobs(n_samples=200, n_features=2, centers=4) datos datos[0] datos[1] plt.scatter(datos[0][:,0], datos[0][:,1]) from sklearn.cluster import KMeans modelo = KMeans(n_clusters=4) modelo.fit(datos[0]) modelo.cluster_centers_ modelo.labels_ datos[1] # + fig, (ax1, ax2) = plt.subplots(1,2,figsize=(12,4)) ax1.scatter(datos[0][:,0], datos[0][:,1], c=modelo.labels_) ax1.set_title("Algoritmo k-medias") ax2.scatter(datos[0][:,0], datos[0][:,1], c=datos[1]) ax2.set_title("Datos originales") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="3AjJNo0dZG06" # # # # ## Signals and Systems # # # # # # + [markdown] id="-BC1P-40TMuh" # **a.Impulse Sequence** # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="oV9DX81fTFiC" outputId="4f56049e-1863-44c5-e653-22a367ea30e5" from numpy import* from pylab import* x=array ([-3,-2,-1,0,1,2,3]) y=array ([0,0,0,1,0,0,0]) stem(x,y) xlabel ("<- n ->") ylabel ("Amplitude") title ("Impulse Sequence") grid ("on") show() # + [markdown] id="1_PXY59xUTHM" # **b.Unit Step Sequence** # # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="0MyJW8O3UZvX" outputId="abd686b6-e5e5-4d47-a835-06a4d6a3bf5f" from numpy import* from pylab import* x=array([-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9,10]) y=array ([0,0,0,0,1,1,1,1,1,1,1,1,1,1,1]) stem(x,y) xlabel ("<- n ->") ylabel ("Amplitude") title ("Unit Step Sequence") grid ("on") show () # + [markdown] id="wZnTWthqUeUp" # **c.Pulse Signal** # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="kslrvjP0Ukfb" outputId="1c95f6ed-3051-4115-c39a-069607560ca8" #Pulse Signal from numpy import* from pylab import* y=array ([0,0,1,1,1,0,0]) x=array ([-30,-15.001,-15,0,15,15.001,30]) plot(x,y) xlabel ("<- t ->") ylabel ("Amplitude") title ("Pulse Signal") grid ("on") show () # + [markdown] id="ywESX6Vxpkuy" # # + [markdown] id="3CO4U_phpke4" # # + colab={"base_uri": "https://localhost:8080/", "height": 628} id="OJGBejNLUzhS" outputId="b07bb0d5-0152-45d3-8e6e-2242e9c14f02" from numpy import* from pylab import* x=array ([0,1,2,3,4,5,6,7,8,9,10]) y=array ([0,1,2,3,4,5,6,7,8,9,10]) figure (1) plot (x,y) title ("Ramp Signal") xlabel ("Time t->") ylabel ("Amplitude") grid ('on') figure (2) stem (x,y) title ("Ramp Sequence") xlabel ("No.of Samples") ylabel ("Amplitude") grid ('on') show () # + [markdown] id="8kDaBMz4U5ae" # **e.Sine Wave** # + colab={"base_uri": "https://localhost:8080/", "height": 364} id="QOdz7MFmVA32" outputId="845f7669-631a-4169-bd2e-8d0f576809eb" from numpy import* from scipy import* t=arange(0,2,0.001) f=1000 a=10 y=a*sin(2*3.14*f*t) title("Sine Wave") xlabel('time') ylabel('Amplitude') plot(t,y) grid('on') show() # + [markdown] id="O4NUc0QSVGVz" # **2.Convolution** # + colab={"base_uri": "https://localhost:8080/", "height": 933} id="Kxc8rUG7VVaC" outputId="7cefd9a1-2135-403d-b543-7dcc52d87d1a" import numpy as np from 
pylab import* x=np.array([1,2,3]) h=np.array([1,0,1]) y=np.convolve(x,h) figure (1) title ('x(n)') stem (x) grid ('on') figure (2) title ('h(n)') stem (h) grid ('on') figure (3) title ('y(n)') stem (y) grid ('on') show () # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from __future__ import print_function from distutils.version import LooseVersion as Version import sys OK = '\x1b[42m[ OK ]\x1b[0m' FAIL = "\x1b[41m[FAIL]\x1b[0m" try: import importlib except ImportError: print(FAIL, "Python version 3.9 is required," " but %s is installed." % sys.version) def import_version(pkg, min_ver, fail_msg=""): mod = None try: mod = importlib.import_module(pkg) if pkg in {'PIL'}: ver = mod.VERSION else: ver = mod.__version__ if Version(ver) == min_ver: print(OK, "%s version %s is installed." % (lib, min_ver)) else: print(FAIL, "%s version %s is required, but %s installed." % (lib, min_ver, ver)) except ImportError: print(FAIL, '%s not installed. %s' % (pkg, fail_msg)) return mod # first check the python version pyversion = Version(sys.version) if pyversion >= "3.9": print(OK, "Python version is %s" % sys.version) elif pyversion < "3.9": print(FAIL, "Python version 3.9 is required," " but %s is installed." % sys.version) else: print(FAIL, "Unknown Python version: %s" % sys.version) print() requirements = {'numpy': "1.21.1", 'matplotlib': "3.4.2",'sklearn': "0.24.2", 'pandas': "1.3.1",'xgboost': "1.3.3", 'shap': "0.39.0"} # now the dependencies for lib, required_version in list(requirements.items()): import_version(lib, required_version) # + ## import packages import numpy as np import pandas as pd import warnings import matplotlib import matplotlib.pyplot as plt import seaborn as sns from scipy.special import jn from IPython.display import display, clear_output import time warnings.filterwarnings('ignore') # %matplotlib inline ## model prediction from sklearn import linear_model from sklearn import preprocessing from sklearn.svm import SVR from sklearn.ensemble import RandomForestRegressor,GradientBoostingRegressor from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score ## Reduced features from sklearn.decomposition import PCA,FastICA,FactorAnalysis,SparsePCA import lightgbm as lgb import xgboost as xgb ## Param serach and evaluation from sklearn.model_selection import GridSearchCV,cross_val_score,StratifiedKFold,train_test_split from sklearn.metrics import mean_squared_error, mean_absolute_error # + Train_data = pd.read_csv(r'../data/used_car_train.csv',sep=' ') Test_data = pd.read_csv(r'../data/used_car_testB.csv',sep=' ') print('Train data shape:',Train_data.shape) print('Test data shape:',Test_data.shape) # - Train_data.head() Train_data.info() Train_data.columns Test_data.info() Train_data.describe() Test_data.describe() numerical_cols = Train_data.select_dtypes(exclude = 'object').columns print(numerical_cols) categorical_cols = Train_data.select_dtypes(include = 'object').columns print(categorical_cols) # + ## 选择特征列 feature_cols = [col for col in numerical_cols if col not in ['SaleID','name','regDate','creatDate','price','model','brand','regionCode','seller']] feature_cols = [col for col in feature_cols if 'Type' not in col] ## 提前特征列,标签列构造训练样本和测试样本 X_data = Train_data[feature_cols] Y_data = Train_data['price'] X_test = Test_data[feature_cols] print('X train shape:',X_data.shape) 
print('X test shape:',X_test.shape) # - plt.hist(Y_data) plt.show() plt.close() X_data = X_data.fillna(-1) X_test = X_test.fillna(-1) # + # ## xgb-Model xgr = xgb.XGBRegressor(n_estimators=2, max_depth=5) scores_train = [] scores = [] xgr.fit(X_data,Y_data) pred_train_xgb=xgr.predict(X_data) score_train = r2_score(Y_data,pred_train_xgb) scores_train.append(score_train) print('Train r2:',score_train) # + # ## xgb-Model xgr = xgb.XGBRegressor(n_estimators=2,max_depth = 8) scores_train = [] scores = [] xgr.fit(X_data,Y_data) pred_train_xgb=xgr.predict(X_data) score_train = r2_score(Y_data,pred_train_xgb) scores_train.append(score_train) print('Train r2:',np.mean(score_train)) # - a= ([0.530133664028601,0.5588657862577329]) np.std(a) np.mean(a) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Gaussian blur benchmarking # This notebook compares different implementations of the Gaussian blur filter. # # **Note:** benchmarking results vary heavily depending on image size, kernel size, used operations, parameters and used hardware. Use this notebook to adapt it to your use-case scenario and benchmark on your target hardware. If you have different scenarios or use-cases, you are very welcome to submit your notebook as pull-request! # + import pyclesperanto_prototype as cle from skimage import filters import cupy import cupyx.scipy.ndimage as ndi import time # to measure kernel execution duration properly, we need to set this flag. It will slow down exection of workflows a bit though cle.set_wait_for_kernel_finish(True) # selet a GPU with the following in the name. This will fallback to any other GPU if none with this name is found cle.select_device('RTX') # + # test data import numpy as np test_image = np.random.random([100, 512, 512]) sigma = 10 # - # ## clEsperanto # + # convolve with pyclesperanto result_image = None cl_test_image = cle.push_zyx(test_image) for i in range(0, 10): start_time = time.time() result_image = cle.gaussian_blur(cl_test_image, result_image, sigma_x=sigma, sigma_y=sigma, sigma_z=sigma) print("pyclesperanto Gaussian duration: " + str(time.time() - start_time)) # - # ## cupy # + # convolve with cupy result_image = None cu_test_image = cupy.asarray(test_image) for i in range(0, 10): start_time = time.time() result_image = ndi.gaussian_filter(cu_test_image, output=result_image, sigma=sigma) cupy.cuda.stream.get_current_stream().synchronize() # we need to wait here to measure time properly print("cupy Gaussian duration: " + str(time.time() - start_time)) # - # ## scikit-image # + # convolve with scikit-image result_image = None for i in range(0, 10): start_time = time.time() result_image = filters.gaussian(test_image, output=result_image, sigma=sigma) print("skimage Gaussian duration: " + str(time.time() - start_time)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.patches as mpatches mpl.get_backend() mpl.use('MacOSX') # + def get_location_of_legend_box(leg_artist): frame = leg_artist.get_frame() x0 = frame.get_x() y0 = frame.get_y() x1 = x0 + frame.get_width() y1 = y0 + 
frame.get_height() ax_box_coord = ax.transAxes.inverted().transform([[x0, y0], [x1, y1]]) width = ax_box_coord[1, 0] - ax_box_coord[0, 0] height = ax_box_coord[1, 1] - ax_box_coord[0, 1] lowest_point = ax_box_coord[0, 1] farthest_point = ax_box_coord[1, 0] return width,height,lowest_point,farthest_point def arrange_legend_boxs(legs_hl_list,ax): current_lowest_point = 1 current_farthest_point = 1 last_farthest_point = None have_legend_in_the_column = False ready_legend_artist = [] for item in legs_hl_list: leg_artist = ax.legend(handles=item[0],labels=item[1],loc='upper left',bbox_to_anchor=(current_farthest_point,current_lowest_point)) plt.pause(0.01) width,height,lowest_point,farthest_point = get_location_of_legend_box(leg_artist) if have_legend_in_the_column: if lowest_point < 0: current_farthest_point = max([last_farthest_point,farthest_point]) leg_artist = ax.legend(handles=item[0],labels=item[1],loc='upper left',bbox_to_anchor=(current_farthest_point,1)) plt.pause(0.01) width, height, lowest_point, farthest_point = get_location_of_legend_box(leg_artist) current_lowest_point = lowest_point last_farthest_point = farthest_point ready_legend_artist.append(leg_artist) have_legend_in_the_column = True for artist in ready_legend_artist: ax.add_artist(artist) # - ## test leg1_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) leg2_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) leg3_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) leg4_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) leg5_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) leg6_hl = ([mpatches.Patch(color=i) for i in 'rgb'], ['r', 'g', 'b']) fig,ax = plt.subplots() arrange_legend_boxs([leg1_hl,leg2_hl,leg3_hl,leg4_hl,leg5_hl,leg6_hl],ax) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # AutoEncoder implementation in Lasagne # + import gzip import cPickle import numpy as np import theano import theano.tensor as T import random examples_per_labels = 10 # Load the pickle file for the MNIST dataset. dataset = 'mnist.pkl.gz' f = gzip.open(dataset, 'rb') train_set, dev_set, test_set = cPickle.load(f) f.close() #train_set contains 2 entries, first the X values, second the Y values train_x, train_y = train_set dev_x, dev_y = dev_set test_x, test_y = test_set print 'Train: ', train_x.shape print 'Dev: ', dev_x.shape print 'Test: ', test_x.shape examples = [] examples_labels = [] examples_count = {} for idx in xrange(train_x.shape[0]): label = train_y[idx] if label not in examples_count: examples_count[label] = 0 if examples_count[label] < examples_per_labels: arr = train_x[idx] examples.append(arr) examples_labels.append(label) examples_count[label]+=1 train_subset_x = np.asarray(examples) train_subset_y = np.asarray(examples_labels) print "Train Subset: ",train_subset_x.shape # - # # Autoencoder # This is the code how the autoencoder should work in principle. However, the pretraining seems not to work as the loss stays approx. identical for all epochs. If someone finds the problem, please send me an email. 
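
# One thing that might be worth ruling out (an added suggestion, not a confirmed fix): the inputs live in [0, 1], so a sigmoid output layer is arguably a more natural choice than tanh, and an adaptive update rule such as Adam usually needs less learning-rate tuning than plain Nesterov momentum for a squared-error reconstruction loss. The sketch below builds such a variant for comparison; it reuses `train_x` from the loading cell above, while the original implementation follows right after.

# +
# Added experiment sketch (not a confirmed fix): same layer sizes as the code below,
# but with a sigmoid output and Adam updates, to check whether the reconstruction
# loss moves at all under this setup.
import theano
import theano.tensor as T
import lasagne

x_var = T.fmatrix('x_var')

l_in = lasagne.layers.InputLayer(shape=(None, 28 * 28), input_var=x_var)
l_hid0 = lasagne.layers.DenseLayer(l_in, num_units=100,
                                   nonlinearity=lasagne.nonlinearities.tanh)
l_code = lasagne.layers.DenseLayer(l_hid0, num_units=2,
                                   nonlinearity=lasagne.nonlinearities.tanh)
l_out = lasagne.layers.DenseLayer(l_code, num_units=28 * 28,
                                  nonlinearity=lasagne.nonlinearities.sigmoid)

reconstruction = lasagne.layers.get_output(l_out)
rec_loss = lasagne.objectives.squared_error(reconstruction, x_var).mean()
params = lasagne.layers.get_all_params(l_out, trainable=True)
updates = lasagne.updates.adam(rec_loss, params, learning_rate=1e-3)
train_step = theano.function([x_var], rec_loss, updates=updates)

for epoch in xrange(10):
    epoch_loss = 0.0
    for start in xrange(0, len(train_x) - 99, 100):
        epoch_loss += train_step(train_x[start:start + 100])
    print "sigmoid/adam variant - epoch %d loss: %f" % (epoch, epoch_loss)
# -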
# + import lasagne def autoencoder(n_in, n_hidden, input_var=None): #Input layer, 1 dimension = number of samples, 2 dimension = input, our 28*28 image l_in = lasagne.layers.InputLayer(shape=(None, n_in), input_var=input_var) l_hid0 = lasagne.layers.DenseLayer(incoming=l_in, num_units=100, nonlinearity=lasagne.nonlinearities.tanh, W=lasagne.init.GlorotUniform()) # Our first hidden layer with n_hidden units # As nonlinearity we use tanh, you could also try rectify l_hid = lasagne.layers.DenseLayer(incoming=l_hid0, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.tanh, W=lasagne.init.GlorotUniform()) # Our output layer (a softmax layer) l_out = lasagne.layers.DenseLayer(incoming=l_hid, num_units=n_in, nonlinearity=lasagne.nonlinearities.tanh) return l_hid, l_out # Parameters n_in = 28*28 n_hidden = 2 # Create the network x = T.fmatrix('x') # the data, one image per row y = T.lvector('y') # the labels are presented as 1D vector of [int] labels ae_hid, ae_out = autoencoder(n_in, n_hidden, x) prediction = lasagne.layers.get_output(ae_out) loss = lasagne.objectives.squared_error(prediction, x) loss = loss.mean() params = lasagne.layers.get_all_params(ae_out, trainable=True) updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01, momentum=0.9) train_fn = theano.function(inputs=[x], outputs=loss, updates=updates) def iterate_minibatches(inputs, targets, batchsize, shuffle=False): assert len(inputs) == len(targets) if shuffle: indices = np.arange(len(inputs)) np.random.shuffle(indices) for start_idx in range(0, len(inputs) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield inputs[excerpt], targets[excerpt] number_of_epochs = 10 print "%d epochs" % number_of_epochs for epoch in xrange(number_of_epochs): loss = 0 for batch in iterate_minibatches(train_x, train_y, 100, shuffle=True): inputs, targets = batch loss += train_fn(inputs) print "%d loss: %f" % (epoch,loss) print "Done" # + ############ # Plot it ############ # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as mpatches network_predict_label = lasagne.layers.get_output(ae_hid, deterministic=True) predict_points = theano.function(inputs=[x], outputs=network_predict_label) train_predict = predict_points(train_x) colors = {0: 'b', 1: 'g', 2: 'r', 3:'c', 4:'m', 5:'y', 6:'k', 7:'orange', 8:'darkgreen', 9:'maroon'} plt.figure(figsize=(10, 10)) patches = [] for idx in xrange(0,100): point = train_predict[idx] label = train_y[idx] color = colors[label] line = plt.plot(point[0], point[1], color=color, marker='o', markersize=5) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Heatmap # The `HeatMap` mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance. # # # `HeatMap` is very similar to the `GridHeatMap`, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. `GridHeatMap` offers more control (interactions, selections), and is better suited for a smaller number of points. 
import numpy as np from ipywidgets import Layout import bqplot.pyplot as plt from bqplot import ColorScale # ### Data Input # # - `x` is a 1d array, corresponding to the abscissas of the points (size N) # - `y` is a 1d array, corresponding to the ordinates of the points (size M) # - `color` is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M)) # # Scales must be defined for each attribute: # - a `LinearScale`, `LogScale` or `OrdinalScale` for `x` and `y` # - a `ColorScale` for `color` x = np.linspace(-5, 5, 200) y = np.linspace(-5, 5, 200) X, Y = np.meshgrid(x, y) color = np.cos(X ** 2 + Y ** 2) # ## Plotting a 2-dimensional function # # This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$ fig = plt.figure( title="Cosine", layout=Layout(width="650px", height="650px"), min_aspect_ratio=1, max_aspect_ratio=1, padding_y=0, ) heatmap = plt.heatmap(color, x=x, y=y) fig # ## Displaying an image # # The `HeatMap` can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the `color` attribute # + from scipy.misc import ascent Z = ascent() Z = Z[::-1, :] aspect_ratio = Z.shape[1] / Z.shape[0] # - img = plt.figure( title="Ascent", layout=Layout(width="650px", height="650px"), min_aspect_ratio=aspect_ratio, max_aspect_ratio=aspect_ratio, padding_y=0, ) plt.scales(scales={"color": ColorScale(scheme="Greys", reverse=True)}) axes_options = { "x": {"visible": False}, "y": {"visible": False}, "color": {"visible": False}, } ascent = plt.heatmap(Z, axes_options=axes_options) img # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #hide # %load_ext autoreload # %autoreload 2 #hide from nbdev import * # # Deck of Cards # # > A minimal example of using nbdev to create a python library. # This repo uses code from 's ThinkPython2 # ## Install # `pip install your_project_name` # ## How to use # Playing cards in python! 
from deck_of_cards.deck import Deck d = Deck() print(f'Number of playing cards in the deck: {len(d.cards)}') card = d.pop_card() print(card) # See the [docs](https://yannipapandreou.github.io/deck_of_cards/) for more details # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import division, print_function, absolute_import from __future__ import absolute_import from __future__ import print_function import numpy as np import numpy import PIL from PIL import Image np.random.seed(1337) # for reproducibility from math import sqrt import random from keras.datasets import mnist from keras.models import Sequential, Model from keras.layers import Dense, Dropout, Input, Lambda from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.layers import Flatten from keras.optimizers import RMSprop from keras import backend as K from keras.layers import Concatenate, Dense, LSTM, Input, concatenate import tensorflow as tf import numpy as np import matplotlib.pyplot as plt # + import scipy.io mat = scipy.io.loadmat('/home/aniruddha/deep-learning-projects/Siamese_Networks/Dataset/PaviaCentre.mat') arr = mat['pavia'] arr = np.array(arr) print(arr.shape) import scipy.io mat = scipy.io.loadmat('/home/aniruddha/deep-learning-projects/Siamese_Networks/Dataset/PaviaCentre_gt.mat') arr1 = mat['pavia_gt'] arr1 = np.array(arr1) print(arr1.shape) a=[] label=[] k=0 for i in range(0,arr1.shape[0]): for j in range(0,arr1[i].shape[0]): a.append(arr[i][j]) label.append(arr1[i][j]) a=np.array(a) label=np.array(label) X_train=[] y_train=[] for i in range (0,a.shape[0]): if(label[i]==2): y_train.append(0) if(label[i]==3): y_train.append(1) if(label[i]==4): y_train.append(2) if(label[i]==5): y_train.append(3) if(label[i]==7): y_train.append(4) if(label[i]==8): y_train.append(5) if(label[i]==9): y_train.append(6) if (label[i]==2 or label[i]==3 or label[i]==4 or label[i]==5 or label[i]==7 or label[i]==8 or label[i]==9): X_train.append(a[i]) X_train=np.array(X_train) y_train=np.array(y_train) print(X_train.shape) print(y_train.shape) # + from sklearn.utils import shuffle X_train, y_train = shuffle(X_train, y_train, random_state = 0) from sklearn.preprocessing import StandardScaler X_train = StandardScaler().fit_transform(X_train) from sklearn.decomposition import PCA pca = PCA(n_components=64) X_train = pca.fit_transform(X_train) print(X_train.shape) # + import scipy.io mat = scipy.io.loadmat('/home/aniruddha/deep-learning-projects/Siamese_Networks/Dataset/PaviaU.mat') arr = mat['paviaU'] arr = np.array(arr) import scipy.io mat = scipy.io.loadmat('/home/aniruddha/deep-learning-projects/Siamese_Networks/Dataset/PaviaU_gt.mat') arr1 = mat['paviaU_gt'] arr1 = np.array(arr1) print(arr1.shape) a=[] label=[] k=0 for i in range(0,arr1.shape[0]): for j in range(0,arr1[i].shape[0]): a.append(arr[i][j]) label.append(arr1[i][j]) a=np.array(a) label=np.array(label) print(a.shape) print(label.shape) X_train1=[] y_train1=[] for i in range (0,a.shape[0]): if(label[i]==4): y_train1.append(0) if(label[i]==1): y_train1.append(1) if(label[i]==8): y_train1.append(2) if(label[i]==7): y_train1.append(3) if(label[i]==9): y_train1.append(4) if(label[i]==2): y_train1.append(5) if(label[i]==6): y_train1.append(6) if (label[i]==4 or label[i]==1 or label[i]==8 or label[i]==7 or label[i]==9 or label[i]==2 or label[i]==6): 
X_train1.append(a[i]) X_train1=np.array(X_train1) y_train1=np.array(y_train1) from sklearn.utils import shuffle X_train1, y_train1 = shuffle(X_train1, y_train1, random_state = 0) from sklearn.preprocessing import StandardScaler X_train1 = StandardScaler().fit_transform(X_train1) from sklearn.decomposition import PCA pca = PCA(n_components=64) X_train1 = pca.fit_transform(X_train1) print(X_train1.shape) # - print(X_train.max()) print(X_train1.max()) X_train=X_train.astype('float32') X_train1=X_train1.astype('float32') X_train=X_train/100 X_train1=X_train1/100 # + X_test=X_train[50000:72933,:] y_test=y_train[50000:72933] X_train=X_train[0:50000,:] y_train=y_train[0:50000] print(X_train.shape) print(X_train1.shape) print(X_test.shape) # + learning_rate = 0.01 num_steps = 20 batch_size = 20 total_numbers = 291 display_step = 1000 examples_to_show = 10 # Network Parameters num_hidden_1 = 32 # 1st layer num features num_hidden_2 = 16 # 2nd layer num features (the latent dim) num_input = 64 num_classes = 7 # tf Graph input (only pictures) X = tf.placeholder("float", [None, num_input]) Y = tf.placeholder("float", [None, num_classes]) weights = { 'encoder_h1': tf.Variable(tf.random_uniform([num_input, num_hidden_1], minval=-4*np.sqrt(6.0/(num_input + num_hidden_1)), maxval=4*np.sqrt(6.0/(num_input + num_hidden_1)))), 'encoder_h2': tf.Variable(tf.random_uniform([num_hidden_1, num_hidden_2], minval=-4*np.sqrt(6.0/(num_hidden_1 + num_hidden_2)), maxval=4*np.sqrt(6.0/(num_hidden_1 + num_hidden_2)))), 'decoder_h1': tf.Variable(tf.random_uniform([num_hidden_2, num_hidden_1], minval=-4*np.sqrt(6.0/(num_hidden_1 + num_hidden_2)), maxval=4*np.sqrt(6.0/(num_hidden_1 + num_hidden_2)))), 'decoder_h2': tf.Variable(tf.random_uniform([num_hidden_1, num_input], minval=-4*np.sqrt(6.0/(num_input + num_hidden_1)), maxval=4*np.sqrt(6.0/(num_input + num_hidden_1)))), 'classifier1_h': tf.Variable(tf.random_uniform([num_hidden_2, 10], minval=-4*np.sqrt(6.0/(10 + num_hidden_2)), maxval=4*np.sqrt(6.0/(10 + num_hidden_2)))), 'classifier_h': tf.Variable(tf.random_uniform([10, num_classes], minval=-4*np.sqrt(6.0/(10 + num_classes)), maxval=4*np.sqrt(6.0/(10 + num_classes)))), } biases = { 'encoder_b1': tf.Variable(tf.truncated_normal([num_hidden_1])/sqrt(num_hidden_1)), 'encoder_b2': tf.Variable(tf.truncated_normal([num_hidden_2])/sqrt(num_hidden_2)), 'decoder_b1': tf.Variable(tf.truncated_normal([num_hidden_1])/sqrt(num_hidden_1)), 'decoder_b2': tf.Variable(tf.truncated_normal([num_input])/sqrt(num_hidden_2)), 'classifier1_b': tf.Variable(tf.truncated_normal([10])/sqrt(10)), 'classifier_b': tf.Variable(tf.truncated_normal([num_classes])/sqrt(num_classes)), } # + # Building the encoder def encoder(x): # Encoder Hidden layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1'])) # Encoder Hidden layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2'])) return layer_2 # Building the decoder def decoder(x): # Decoder Hidden layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']), biases['decoder_b1'])) # Decoder Hidden layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']), biases['decoder_b2'])) return layer_2 # Construct model encoder_op = encoder(X) decoder_op = decoder(encoder_op) # Prediction y_pred = decoder_op classify1 = tf.nn.sigmoid(tf.add(tf.matmul(encoder_op, weights['classifier1_h']), 
biases['classifier1_b'])) label_pred = tf.nn.softmax(tf.add(tf.matmul(classify1, weights['classifier_h']), biases['classifier_b'])) y_clipped = tf.clip_by_value(label_pred, 1e-10, 0.9999999) # Targets (Labels) are the input data. y_true = X label_true = Y # Define loss and optimizer, minimize the squared error loss_autoencoder = tf.reduce_mean(tf.pow(y_true - y_pred, 2)) cross_entropy_loss = -tf.reduce_mean(tf.reduce_sum(label_true * tf.log(y_clipped) + (1 - label_true) * tf.log(1 - y_clipped), axis=1)) loss_total = loss_autoencoder+cross_entropy_loss optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss_total) # Initialize the variables (i.e. assign their default value) init = tf.global_variables_initializer() # - from keras.utils import np_utils y_test11 = np_utils.to_categorical(y_test) y_train11 = np_utils.to_categorical(y_train1) print(y_train11.shape) print(y_test11.shape) # define an accuracy assessment operation correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(label_pred, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # + # Start Training # Start a new TF session sess = tf.Session() # Run the initializer sess.run(init) batch_size = 64 num_batch = 614 # Training for i in range(0,400): k = 0 # Prepare Data # Get the next batch of MNIST data (only images are needed, not labels) avg_cost = 0 for j in (0,num_batch): batch_x = X_train1[k:k+batch_size,:] batch_y = y_train11[k:k+batch_size,:] k += 64 #print(j) # Run optimization op (backprop) and cost op (to get loss value) _, l = sess.run([optimizer, loss_total], feed_dict={X: batch_x, Y: batch_y}) avg_cost += l / num_batch print("Epoch:", (i + 1), "cost =", "{:.8f}".format(avg_cost)) print("Epoch:", (i + 1), "accuracy =", "{:.8f}".format(sess.run(accuracy, feed_dict={X: X_train1, Y: y_train11}))) # + # on 200 epoch print(sess.run([accuracy], feed_dict={X: X_test, Y: y_test11})) # + # on 400 epoch print(sess.run([accuracy], feed_dict={X: X_test, Y: y_test11})) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Homework 13: Classes and Structured Data Files # + ### units.py """ Simple unit conversion module. Some basic conversions for length and temperature. """ ft_to_m = 0.3048 m_to_ft = 3.28084 ft_to_in = 12.0 in_to_ft = 1.0/12.0 mi_to_ft = 5280.0 ft_to_mi = 1.0/5280.0 in_to_mi = 1.0/12.0/5280.0 mi_to_in = 5280.0*12.0 m_to_km = 0.001 m_to_dm = 10.0 m_to_cm = 100.0 m_to_mm = 1000.0 m_to_um = 1.0E6 # m to micrometers def K_to_C(T_K) : return T_K - 273.2 def C_to_K(T_C) : return T_C + 273.2 def F_to_C(T_F) : return (T_F - 32.0)/1.8 def C_to_F(T_C) : return T_C*1.8 + 32.0 # - # ### Problem 1 # # #### Part a # # * The homework includes a file ```units.py```. The file should be copied into the same folder as your HW13.ipynb file that you are working on (this file). # * Show the contents of that file in your notebook by typing ```%load units.py``` in a cell and running it. # #### Part b # # * Extend the ```units.py``` module in a new ```ext_units.py``` file with the leading line ```from units import *``` # * Add functions to convert from F to R and R to F. # * Correct the conversions from C to K and from K to C by overriding those functions. # * Test the conversion functions by computing the freezing point of water at $T_{f} = 0^o \; C$ in F, K, and R. 
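# One possible `ext_units.py` along the lines described in Part b above. This is a sketch, not the official solution; it assumes `units.py` from the cell above sits in the same folder.

# +
# ext_units.py
from units import *

def F_to_R(T_F) :
    return T_F + 459.67

def R_to_F(T_R) :
    return T_R - 459.67

# Override the C/K conversions with the exact offset
def K_to_C(T_K) :
    return T_K - 273.15

def C_to_K(T_C) :
    return T_C + 273.15

# Freezing point of water, T_f = 0 C, in F, K, and R
T_f = 0.0
print(C_to_F(T_f), C_to_K(T_f), F_to_R(C_to_F(T_f)))   # 32.0, 273.15, 491.67
# -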
# #### Part c # # * Import the ```ext_units.py``` module and use it in the function you write below. # * Define a function that returns the ideal gas pressure given temperature with n=1 mole and V=1 m^3. # * The function should look like this: ```def P_ig(T, Tu) : ```, # * where, T is the Temperature and Tu is the units of Temperature. # * Hint: Convert all input units to K, then compute pressure (in Pa). # * Test the function by computing the pressures at temperatures of $0^o \; C$, $32^o \; F$, and $491.67 \; R$. # ### Problem 2 # # Retrieve ```thermo.yaml``` with the following command. # + # import or install wget try: import wget except: try: from pip import main as pipmain except: from pip._internal import main as pipmain pipmain(['install','wget']) import wget url = 'https://apmonitor.com/che263/uploads/Main/thermoData.yaml' filename = wget.download(url) print('') print('Retrieved thermoData.yaml') # - # #### Part a # * Write a class for computing thermodynamic properties in a cell below. # * Call the class "thermo" # * Include an ```__init__(self, species)``` function that sets the gas constant Rgas = 8314.46 J/kmol*K. # * Use kmol instead of gmol because kg is the SI unit of mass, not gm. # * The __init__ function should open the file thermoData.yaml included with the homework. Use this code to open the file: # # # import yaml # with open("thermoData.yaml") as yfile : # yfile = yaml.load(yfile) # # # * Also in ```__init__``` Make two arrays that are members of the class called a_lo, and a_hi. # * Get these arrays from the yaml file using something like ```a_lo = yfile[species]["a_lo"]```, where "species" is the string passed as an argument to __init__. When you create an instance of the class, you should give a string argument that is one of the species in the HW13P2.yaml file. # * The two arrays work in two separate temperature ranges: a_lo is for $300 5000: foxes_now += 1 else: rabbits_now = rabbits_now + rabbits_now * rabbit_growth if foxes_now >= 10: foxes_now = 0 rabbits_now -= 3000 rabbit_list.append(rabbits_now) plt.plot(rabbit_list) plt.title("Rabbit population chart") plt.xlabel("Time") plt.ylabel("Rabbits") # - # # Exercise # Create a program that simulates a deer population over a total of 100 time steps. At the beginning there are 1000 deer and normally the deer population increases by 10% per time step. However, when there are more than 5000 deer, the deer are hunted. This hunt lasts 10 time steps and in each time step, instead of population growth, there is a reduction of the population by 15%. In the plot you should not see the number of deer, but the number of deer / 1000. 
# + TIMESTEPS = 100 DEER_GROWTH = 0.1 # 10 % HUNT_LIMIT = 5000 HUNT_EFFECT = 0.15 # 15 % deer_pop = 1000 hunt = 0 deer_list = list() for it in range(TIMESTEPS): if deer_pop > HUNT_LIMIT: hunt = 10 if hunt > 0: deer_pop -= deer_pop * HUNT_EFFECT hunt -= 1 else: deer_pop += deer_pop * DEER_GROWTH deer_list.append(deer_pop) result = list() for value in deer_list: result.append(value/1000) plt.plot(result) plt.title("Deer population chart") plt.xlabel("Time") plt.ylabel("Number of deer") # + TIMESTEPS = 100 DEER_GROWTH = 0.1 # 10 % HUNT_LIMIT = 5000 HUNT_EFFECT = 0.15 # 15 % deer_pop = 1000 # We can use a variable `hunt` to make sure we follow through # the required 10 steps of hunting: hunt = 0 for it in range(TIMESTEPS): if deer_pop > HUNT_LIMIT: hunt = 10 if hunt > 0: print(hunt) # we print out the value of the hunt variable deer_pop -= deer_pop * HUNT_EFFECT hunt -= 1 else: deer_pop += deer_pop * DEER_GROWTH # - # # Helpful stuff # # Recap of the **ternary operator**. # # The *arity* of an operator tells us on how many operands it operates. # # The `+` operator takes **two** numbers and adds them up. It is a **binary** operator: `a + b` 3+5 # We use the hyphen (`-`) both as **unary** and **binary** operator. # As unary operator it negates a number: `- a` # But as binary operator it can perform substraction: `a - b` 2-3 # What *arity* do the following operators have? (*Tipp:* Try them out in Python!) # # + `*` (multiplication) # + `/` (division) # + `not` (operates on True/False values) not (3 < 5) # The **ternary** operator is called **the** *ternary operator* because it is the only one in Python that has three operands. # # ~~~python # a if b else c # ~~~ # # It decides between `a` and `c` by checking whether `b` is `True`. # + a = 1 b = False c = 2 (a if b else c) # - # The same can be accomplished with some effort using an `if`-expression: if b: r = a else: r = c r # The ternary operator is especially useful if we have a variable # for which the value depends one of two cases gr = HUNT_EFFECT if hunt > 0 else DEER_GROWTH gr # apparently currently we are hunting # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.4.0-rc4 # language: julia # name: julia-0.4 # --- using Plots pyplot() default(leg=false,size=(400,400)); y = collect(linspace(0,1,100)) p = subplot(Any[y*i for i in 1:4],n=4, ylabel=["y1","y1","y2","y2"], xlabel=["x1","x2","x1","x2"]) subplot!(link=true) #xlims!(p.plts[1], (-5,50)) #ylims!(p.plts[1], -0.5,1) gplt = p.plts[1].o plot!(p.plts[1], rand(100,5)*3) lims = [Inf,-Inf] for l in gplt.layers Plots.expandLimits!(lims, l.mapping[:y]) end lims p = subplot(repmat(y,1,4),layout=[1,1,2]) using Plots, StatsBase; pyplot() default(size=(600,600),leg=false) p = subplot(Any[rand(sample(10:200))*sample(1:10) for i in 1:6], n=6, link=true) subplot!(xlabel=["x1","x2","x3"], ylabel=["y1","","","y2"]) # + using Plots, OnlineStats qwt() default(size=(800,800),leg=false) n = 1000 x = rand(n) y = 2randn(n) + 0.4x z = 0.2randn(n) - 0.3x - 0.1y + 100 u = randn(n) - 0.5z - 20 v = 0.1randn(n) + x M = [x y z u v] # M = repmat(M, 1, 2) C = cor(CovarianceMatrix(M)) # debugplots() p = corrplot(M, C, labels=["item$i" for i in 1:size(M,2)]) #, size=(600,600), colors=[colorant"orange", colorant"black", colorant"green"]) # - ax = p.plts[end].o.ax lim1, lim2 = ax[:get_xlim]() xticks!(p.plts[end], [lim1,lim2]) for (r,c) in gl @show r,c end length(gl) plot!(p.plts[1], xlim=(-5,5)) 
gui() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from IPython.display import Image Image('mat01.png') import matplotlib.pyplot as plt import numpy as np x = np.linspace(-2*np.pi, 2*np.pi, 128) y1 = np.sin(x) print('the len of x:', len(x)) print('the len of y1:', len(y1)) # print(x) # print(' ') # print(y1) print(np.round(x[0]-x[1],5)) print(np.round(x[-1]-x[-2],5)) plt.plot(x,y1,color='r') # + import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl x = np.linspace(-2*np.pi, 2*np.pi, 128) y1 = np.sin(x) y2 = np.cos(x) plt.title('plot simple image', color='k',fontsize=14) plt.plot(x, y1, 'r', label='y1=sin(x)') plt.plot(x, y2, 'b', label='y2=cos(x)') plt.axvline(x=0, color='k') plt.axhline(y=0, color='k') plt.legend(loc=2,fontsize=14) # - # ## All done !!! # - Please feel free to let me know if there is any questions # - Please subscribe my youtube channel too # - Thank you very much # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:estate] # language: python # name: conda-env-estate-py # --- # + import pandas as pd import re import matplotlib.pyplot as plt import numpy as np from glob import glob pd.set_option('display.max_columns', 50) # %matplotlib inline # - paths = glob('../data/*.csv') data = pd.concat([pd.read_csv(path) for path in paths]) data['sub'].values[0] # + def extract_price(text): price, price_per_sqm, *_ = text.split('zł') price = re.sub(r',', '.', price) price = re.sub(r'[^0-9.]', '', price) price_per_sqm = re.sub(r',', '.', price_per_sqm) price_per_sqm = re.sub(r'[^0-9.]', '', price_per_sqm) return float(price), float(price_per_sqm) def extract_area(text): area = re.sub(r',', '.', text) area = re.sub(r'[^0-9.]', '', area) return float(area) def extract_rooms(text): rooms = re.sub(r',', '.', text) rooms = re.sub(r'[^0-9.]', '', rooms) return int(rooms) def extract_floor(text): if not re.search(r'\d', text): return None, None text = re.sub(r'parter', '1', text) text = re.sub(r'poddasze', '0', text) floor, number_of_floors = None, None if re.search(r'piętro', text): if re.search(r'\(z \d+\)', text): # try: floor, number_of_floors = text.split('z') # except: # print("ERROR", text) number_of_floors = re.sub(r'[^0-9.]', '', number_of_floors) else: floor = text floor = re.sub(r'[^0-9.]', '', floor) else: number_of_floors = re.sub(r'[^0-9.]', '', text) floor = int(number_of_floors) if floor == '0' and number_of_floors is not None else floor floor = int(floor) if floor is not None else None number_of_floors = int(number_of_floors) if number_of_floors is not None else None return floor, number_of_floors def extract_main(main_list): raw_price = main_list[0] raw_area = main_list[1] raw_rooms = main_list[2] raw_floor = main_list[3] return [ *extract_price(raw_price), extract_area(raw_area), extract_rooms(raw_rooms), *extract_floor(raw_floor) ] def extract_sub(text): def split_and_strip(text): key, value = text.split(':') return key.strip(), value.strip() return dict(split_and_strip(x) for x in text) def extract_dotted(text): return {x.strip(): True for x in text} # + def prepare_lat(data): result = pd.DataFrame(data['lat']).reset_index(drop=True) result.columns = ['lat'] return result def prepare_lon(data): result = 
pd.DataFrame(data['lon']).reset_index(drop=True) result.columns = ['lon'] return result def prepare_main(data): main_columns = ['price', 'price_per_sqm', 'area', 'rooms', 'floor', 'number_of_floors'] result = pd.DataFrame(data['main'].map(extract_main).tolist(), columns=main_columns).reset_index(drop=True) result.columns = main_columns return result def prepare_sub(data): pre_data = pd.DataFrame(data['sub'].map(eval).map(extract_sub).tolist()) columns = pre_data.columns result = pre_data.reset_index(drop=True) result.columns = columns return result def prepare_dotted(data): pre_data = pd.DataFrame(data['dotted'].map(eval).map(extract_dotted).tolist()) columns = pre_data.columns result = pre_data.reset_index(drop=True) result.columns = columns return result # - def prepare_data(data): data.coords = data.coords.map(eval) data.place = data.place.map(eval) data.main = data.main.map(eval) data['lat'] = data.coords.map(lambda x: float(x[0][0])) data['lon'] = data.coords.map(lambda x: float(x[1][0])) main_columns = ['price', 'price_per_sqm', 'area', 'rooms', 'floor', 'number_of_floors'] dfs = [ prepare_lat(data), prepare_lon(data), prepare_sub(data), prepare_dotted(data), prepare_main(data) ] final_data = pd.concat(dfs, axis=1) final_data['rok budowy'] = final_data['rok budowy'].apply(np.float64) final_data['rooms'] = final_data['rooms'].apply(np.float64) return final_data # + # data.place = data.place.map(lambda x: x[0].split(', ')) # data['len_place'] = data.place.map(len) # data.sort_values(by='len_place', ascending=False).place # + # from itertools import chain # cur = data.dotted.map(eval).map(extract_dotted) # dd = list(chain.from_iterable(cur.values)) # - data = prepare_data(data) data.columns data plt.scatter(data.lat.values, data.lon.values) plt.show() plt.hist(data.price.values, bins=50) plt.show() plt.hist(data.area.values, bins=100) plt.show() plt.hist(data.rooms.values, bins=100) plt.show() plt.scatter(data.area.values, data.price.values) plt.xlim(0, 300) plt.ylim(0, 2250000) plt.show() # # Simple analysis import xgboost as xgb model = xgb.XGBRegressor() mask = np.random.random(len(data)) < 0.8 columns_of_interest = ['rynek', 'area', 'lat', 'lon', 'rok budowy', 'rooms', 'floor', 'number_of_floors'] train_data = pd.get_dummies(data[columns_of_interest + ['price']].loc[mask]) test_data = pd.get_dummies(data[columns_of_interest + ['price']].loc[~mask]) train_data model.fit(train_data[['rynek_pierwotny', 'area', 'lat', 'lon', 'rok budowy', 'rooms', 'floor', 'number_of_floors']], train_data['price']) error = model.predict(test_data[['rynek_pierwotny', 'area', 'lat', 'lon', 'rok budowy', 'rooms', 'floor', 'number_of_floors']]) - test_data['price'] error np.mean(np.abs(error.values)) plt.hist(error, bins=100) plt.show() len(data) data # !jupyter nbconvert --to script preprocessing.ipynb # # Histogram, binned groupby bins = np.array([20, 30, 50, 80, 110]) groups = data.groupby(pd.cut(data.area, bins)) hist = groups.count().area.values hist = hist / len(data) * 100 hist # # Putting into database # + import json from pymongo import MongoClient host = 'localhost' port = 30017 db_name = 'test3' collection_name = 'otodom_offers' client = MongoClient(host, port) db = client[db_name] collection = db[collection_name] # - data.columns = [x.replace('.', '_') for x in data.columns] records = json.loads(data.T.to_json()).values() collection.insert(records) len(list(collection.find())) def get_nearest_neighbors_mean_price_per_sqm(data=data, lat, lon, nb_nearest=20): distances = (data.lat - 
current_lat)**2 + (data.lon - current_lon)**2 sorted_distances_top = distances.sort_values()[:nb_nearest] mean_result = data.ix[sorted_result_top.index]['price_per_sqm'].mean() return mean_result current_lon = data.lon[300] current_lat = data.lat[300] current_lon, current_lat mean_result # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import networkx as nx from graspy.utils import * from graspy.embed import AdjacencySpectralEmbed from graspy.plot import heatmap, pairplot import seaborn as sns import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") import re FONTSIZE=20 # + df = pd.read_excel('SI 3 Synapse lists.xlsx', sheet=0) new_labels = ['num_contin', 'em_series', 'pre', 'all_post', 'type', 'sections', 'num_post', 'post1', 'post2', 'post3', 'post4'] df.columns = new_labels chem = df.loc[df['type'] == 'chemical'] elec = df.loc[df['type'] == 'electrical'] ax = plt.subplots(1,1, figsize=(10,10))[1] sns.countplot(x='num_post', data=chem) ax.set_title('Chemical synapse partner number', fontsize=FONTSIZE) chem_1 = chem.loc[chem['num_post'] == 1] chem_2 = chem.loc[chem['num_post'] == 2] chem_3 = chem.loc[chem['num_post'] == 3] chem_4 = chem.loc[chem['num_post'] == 4] chem_5 = chem.loc[chem['num_post'] == 5] chem_adics = [chem_1, chem_2, chem_3, chem_4, chem_5] # - names = set() for i, row in chem.iterrows(): pre = row.pre post_names = re.split('[, .]', row.all_post) names.add(pre) [names.add(post) for post in post_names] names.remove('') cell_ids = np.unique(list(names)) inds = range(len(cell_ids)) cell_id_map = dict(zip(cell_ids,inds)) cell_id_map adic_graphs = [nx.MultiDiGraph() for n in range(5)] for i, row in chem.iterrows(): pre = row.pre post_names = row.all_post.split(',') post_names = re.split('[, .]', row.all_post) for g in adic_graphs: g.add_node(pre) [g.add_node(post) for post in post_names] [adic_graphs[row.num_post - 1].add_edge(pre, post, weight=1) for post in post_names] adic_As = [nx.to_numpy_array(g, nodelist=cell_ids) for g in adic_graphs] # + def df_to_adjacency_undirected(df, nodelist, weight=1): rows = [] for i, row in df.iterrows(): post_names = re.split('[, .]', row.all_post) for a in post_names: # pre | post | weight rows.append([row.pre, a, weight]) df_split = pd.DataFrame(rows, columns=['source', 'target', 'weight']) g = nx.MultiGraph() g = nx.from_pandas_edgelist(df_split, create_using=g) return nx.to_numpy_array(g, nodelist=cell_ids) chem_A_undirected = df_to_adjacency_undirected(chem, cell_ids) # - adic_counts # + adic_undirected_graphs = [nx.MultiGraph() for n in range(5)] adic_counts = np.zeros(5) for i, row in chem.iterrows(): pre = row.pre post_names = re.split('[, .]', row.all_post) for g in adic_graphs: g.add_node(pre) [g.add_node(post) for post in post_names] [adic_undirected_graphs[row.num_post - 1].add_edge(pre, post, weight=1) for post in post_names] adic_counts[row.num_post-1] += 1 adic_undirected_As = [nx.to_numpy_array(g, nodelist=cell_ids) for g in adic_undirected_graphs] # - # summing over the columns to get the degree of each node for any-adic num_possible = np.sum(chem_A_undirected, axis=1) num_present = [] # for each adic-graph, getting the number that were actually made [num_present.append(np.sum(A, axis=1)) for ind, A in enumerate(adic_undirected_As)] proportions = [] for adic_num_present in num_present: 
proportions.append(np.divide(adic_num_present, num_possible)) proportions mean_adic_proportions = [prop.sum()/len(cell_ids) for prop in proportions] mean_adic_proportions 2*list(range(1,6)) 5*['mean cell proportion'] + 5*['overall proportion'] mean_adic_proportions + list(adic_props_overall) adicp = pd.DataFrame() num_synapses = sum(adic_counts) adic_props_overall = adic_counts / num_synapses adicp['proportion'] = mean_adic_proportions + list(adic_props_overall) adicp['num_post'] = 2*list(range(1,6)) adicp['method'] = 5*['mean cell proportion'] + 5*['overall proportion'] sns.barplot(x='num_post', y='proportion', hue='method', data=adicp) # Overall proportion is just the total number of synapses in that category, divided by the total number of synapses # # Mean cell proportion: # For each category (biadic, triadic ...) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CS231n_CNN for Visual Recognition # > Stanford University CS231n # # - toc: true # - badges: true # - comments: true # - categories: [CNN] # - image: images/ # --- # - http://cs231n.stanford.edu/ # --- # # Image Classification # # # - **Image Classification:** We are given a **Training Set** of labeled images, asked to predict labels on **Test Set.** Common to report the **Accuracy** of predictions(fraction of correctly predicted images) # # - We introduced the **k-Nearest Neighbor Classifier**, which predicts the labels based on nearest images in the training set # # - We saw that the choice of distance and the value of k are **hyperparameters** that are tuned using a **validation set**, or through **cross-validation** if the size of the data is small. # # - Once the best set of hyperparameters is chosen, the classifier is evaluated once on the test set, and reported as the performance of kNN on that data. # - Nearest Neighbor 분류기는 CIFAR-10 데이터셋에서 약 40% 정도의 정확도를 보이는 것을 확인하였다. 이 방법은 구현이 매우 간단하지만, 학습 데이터셋 전체를 메모리에 저장해야 하고, 새로운 테스트 이미지를 분류하고 평가할 때 계산량이 매우 많다. # # - 단순히 픽셀 값들의 L1이나 L2 거리는 이미지의 클래스보다 배경이나 이미지의 전체적인 색깔 분포 등에 더 큰 영향을 받기 때문에 이미지 분류 문제에 있어서 충분하지 못하다는 점을 보았다. # --- # # Linear Classification # - We defined a **score function** from image pixels to class scores (in this section, a linear function that depends on weights **W** and biases **b**). # # - Unlike kNN classifier, the advantage of this **parametric approach** is that once we learn the parameters we can discard the training data. Additionally, the prediction for a new test image is fast since it requires a single matrix multiplication with **W**, not an exhaustive comparison to every single training example. # # - We introduced the **bias trick**, which allows us to fold the bias vector into the weight matrix for convenience of only having to keep track of one parameter matrix. # 하나의 매개변수 행렬만 추적해야 하는 편의를 위해 편향 벡터를 가중치 행렬로 접을 수 있는 편향 트릭을 도입했습니다 . # # - We defined a **loss function** (we introduced two commonly used losses for linear classifiers: the **SVM** and the **Softmax**) that measures how compatible a given set of parameters is with respect to the ground truth labels in the training dataset. We also saw that the loss function was defined in such way that making good predictions on the training data is equivalent to having a small loss. 
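# The linear score function and the two losses summarized above can be written out in a few lines of NumPy. This is a minimal sketch on made-up toy numbers (the values of `W`, `b`, `x` and the label `y` below are illustrative, not taken from the course code).

# +
import numpy as np

np.random.seed(0)
D, C = 4, 3                       # toy setup: 4 input features, 3 classes
x = np.random.randn(D)            # one flattened "image"
y = 1                             # its correct class
W = 0.01 * np.random.randn(C, D)  # weight matrix
b = np.zeros(C)                   # bias vector

scores = W.dot(x) + b             # linear score function f(x; W, b) = Wx + b

# Multiclass SVM (hinge) loss with margin 1
margins = np.maximum(0, scores - scores[y] + 1.0)
margins[y] = 0
svm_loss = margins.sum()

# Softmax cross-entropy loss on the same scores
shifted = scores - scores.max()   # shift for numeric stability
probs = np.exp(shifted) / np.exp(shifted).sum()
softmax_loss = -np.log(probs[y])

print(svm_loss, softmax_loss)
# -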
# --- # # Optimization # - We developed the intuition of the loss function as a **high-dimensional optimization landscape** in which we are trying to reach the bottom. The working analogy we developed was that of a blindfolded hiker who wishes to reach the bottom. In particular, we saw that the SVM cost function is piece-wise linear and bowl-shaped. # # - We motivated the idea of optimizing the loss function with **iterative refinement**, where we start with a random set of weights and refine them step by step until the loss is minimized. # # - We saw that the **gradient** of a function gives the steepest ascent direction and we discussed a simple but inefficient way of computing it numerically using the finite difference approximation (the finite difference being the value of h used in computing the numerical gradient). # # - We saw that the parameter update requires a tricky setting of the **step size** (or the **learning rate**) that must be set just right: if it is too low the progress is steady but slow. If it is too high the progress can be faster, but more risky. We will explore this tradeoff in much more detail in future sections. # # - We discussed the tradeoffs between computing the **numerical** and **analytic** gradient. The numerical gradient is simple but it is approximate and expensive to compute. The analytic gradient is exact, fast to compute but more error-prone since it requires the derivation of the gradient with math. Hence, in practice we always use the analytic gradient and then perform a **gradient check**, in which its implementation is compared to the numerical gradient. # # - We introduced the **Gradient Descent** algorithm which iteratively computes the gradient and performs a parameter update in loop. # --- # # Backprop # - We developed intuition for what the gradients mean, how they flow backwards in the circuit, and how they communicate which part of the circuit should increase or decrease and with what force to make the final output higher. # # - We discussed the importance of **staged computation** for practical implementations of backpropagation. You always want to break up your function into modules for which you can easily derive local gradients, and then chain them with chain rule. Crucially, you almost never want to write out these expressions on paper and differentiate them symbolically in full, because you never need an explicit mathematical equation for the gradient of the input variables. Hence, decompose your expressions into stages such that you can differentiate every stage independently (the stages will be matrix vector multiplies, or max operations, or sum operations, etc.) and then backprop through the variables one step at a time. # --- # # Neural Network - 1 # - We introduced a very coarse model of a biological **neuron** # # - 실제 사용되는 몇몇 **활성화 함수** 에 대해 논의하였고, ReLU가 가장 일반적인 선택이다. # - 활성화 함수 쓰는 이유 : 데이터를 비선형으로 바꾸기 위해서. 선형이면 은닉층이 1개밖에 안나옴 # # # - We introduced **Neural Networks** where neurons are connected with **Fully-Connected layers** where neurons in adjacent layers have full pair-wise connections, but neurons within a layer are not connected. # # - 우리는 layered architecture를 통해 활성화 함수의 기능 적용과 결합된 행렬 곱을 기반으로 신경망을 매우 효율적으로 평가할 수 있음을 보았다. # # - 우리는 Neural Networks가 **universal function approximators**(NN으로 어떠한 함수든 근사시킬 수 있다)임을 보았지만, 우리는 또한 이 특성이 그들의 편재적인 사용과 거의 관련이 없다는 사실에 대해 논의하였다. They are used because they make certain “right” assumptions about the functional forms of functions that come up in practice. 
# # - 우리는 큰 network가 작은 network보다 항상 더 잘 작동하지만, 더 높은 model capacity는 더 강력한 정규화(높은 가중치 감소같은)로 적절히 해결되어야 하며, 그렇지 않으면 오버핏될 수 있다는 사실에 대해 논의하였다. 이후 섹션에서 더 많은 형태의 정규화(특히 dropout)를 볼 수 있을 것이다. # --- # # Neural Network - 2 # - 권장되는 전처리는 데이터의 중앙에 평균이 0이 되도록 하고 (zero centered), 스케일을 [-1, 1]로 정규화 하는 것 입니다. # - 올바른 전처리 방법 : 예를들어 평균차감 기법을 쓸 때 학습, 검증, 테스트를 위한 데이터를 먼저 나눈 후 학습 데이터를 대상으로 평균값을 구한 후에 평균차감 전처리를 모든 데이터군(학습, 검증, 테스트)에 적용하는 것이다. # # - ReLU를 사용하고 초기화는 $\sqrt{2/n}$ 의 표준 편차를 갖는 정규 분포에서 가중치를 가져와 초기화합니다. 여기서 $n$은 뉴런에 대한 입력 수입니다. E.g. in numpy: `w = np.random.randn(n) * sqrt(2.0/n)`. # # - L2 regularization과 드랍아웃을 사용 (the inverted version) # # - Batch normalization 사용 (이걸쓰면 드랍아웃은 잘 안씀) # # - 실제로 수행할 수 있는 다양한 작업과 각 작업에 대한 가장 일반적인 손실 함수에 대해 논의했다. # --- # # Neural Network - 3 # 신경망(neural network)를 훈련하기 위하여: # # - 코드를 짜는 중간중간에 작은 배치로 그라디언트를 체크하고, 뜻하지 않게 튀어나올 위험을 인지하고 있어야 한다. # # - 코드가 제대로 돌아가는지 확인하는 방법으로, 손실함수값의 초기값이 합리적인지 그리고 데이터의 일부분으로 100%의 훈련 정확도를 달성할 수 있는지 확인해야한다. # # - 훈련 동안, 손실함수와 train/validation 정확도를 계속 살펴보고, (이게 좀 더 멋져 보이면) 현재 파라미터 값 대비 업데이트 값 또한 살펴보라 (대충 ~ 1e-3 정도 되어야 한다). 만약 ConvNet을 다루고 있다면, 첫 층의 웨이트값도 살펴보라. # # - 업데이트 방법으로 추천하는 건 SGD+Nesterov Momentum 혹은 Adam이다. # # - 학습 속도를 훈련 동안 계속 하강시켜라. 예를 들면, 정해진 에폭 수 뒤에 (혹은 검증 정확도가 상승하다가 하강세로 꺾이면) 학습 속도를 반으로 깎아라. # # - Hyper parameter 검색은 grid search가 아닌 random search으로 수행하라. 처음에는 성긴 규모에서 탐색하다가 (넓은 hyper parameter 범위, 1-5 epoch 정도만 학습), 점점 촘촘하게 검색하라. (좁은 범위, 더 많은 에폭에서 학습) # - 추가적인 개선을 위하여 모형 앙상블을 구축하라. # --- # # CNN # - ConvNet 아키텍쳐는 여러 레이어를 통해 입력 이미지 볼륨을 출력 볼륨 (클래스 점수)으로 변환시켜 준다. # # - ConvNet은 몇 가지 종류의 레이어로 구성되어 있다. CONV/FC/RELU/POOL 레이어가 현재 가장 많이 쓰인다. # # - 각 레이어는 3차원의 입력 볼륨을 미분 가능한 함수를 통해 3차원 출력 볼륨으로 변환시킨다. # # - parameter가 있는 레이어도 있고 그렇지 않은 레이어도 있다 (FC/CONV는 parameter를 갖고 있고, RELU/POOL 등은 parameter가 없음). # # - hyperparameter가 있는 레이어도 있고 그렇지 않은 레이어도 있다 (CONV/FC/POOL 레이어는 hyperparameter를 가지며 ReLU는 가지지 않음). # # - stride, zero-padding ... # --- # # Spatial Localization and Detection # # - Classification : 사진에 대한 라벨이 아웃풋 # - Localization : 사진에 대한 상자가 아웃풋 (x, y, w, h) # - Detection : 사진에 대한 여러개의 상자가 아웃풋 DOG(x, y, w, h), CAT(x, y, w, h), ... # - Segmentation : 상자가 아니라 객체의 이미지 형상을 그대로. # - Localization method : localization as Regression, Sliding Window : Overfeat # # - Region Proposals : 비슷한 색깔, 텍스쳐를 기준으로 박스를 생성 # # - Detection : # - R-CNN : Region-based CNN. Region -> CNN # - 문제점 : Region proposal 마다 CNN을 돌려서 시간이 매우 많이든다. # - Fast R-CNN : CNN -> Region # - 문제점 : Region Proposal 과정에서 시간이 든다. # - Faster R-CNN : Region Proposals도 CNN을 이용해서 해보자. # # - YOLO(You Only Look Once) : Detection as Regression # - 성능은 Faster R-CNN보다 떨어지지만, 속도가 매우 빠르다. # --- # # CNNs in practice # - Data Augmentation # - Change the pixels without changing the label # - Train on transformed data # - VERY widely used # # ..... # # 1. Horizontal flips # 2. Random crops/scales # 3. Color jitter # - Transfer learning # # 이미지넷의 클래스와 관련있는 데이터라면 사전학습시 성능이 좋아지는게 이해가되는데 관련없는 이미지 (e.g. mri같은 의료이미지)의 경우도 성능이 좋아지는데 그 이유는 무엇인가? # # -> 앞단에선 엣지, 컬러같은 low level의 feature를 인식, 뒤로갈수록 상위레벨을 인식. lowlevel을 미리 학습해놓는다는 것은 그 어떤 이미지를 분석할 때도 도움이된다! # - How to stack convolutions: # # - Replace large convolutions (5x5, 7x7) with stacks of 3x3 convolutions # - 1x1 "bottleneck" convolutions are very efficient # - Can factor NxN convolutions into 1xN and Nx1 # - All of the above give fewer parameters, less compute, more nonlinearity # # 더 적은 파라미터, 더 적은 컴퓨팅연산, 더 많은 nonlinearity(필터 사이사이 ReLU등이 들어가기에) # - Computing Convolutions: # - im2col : Easy to implement, but big memory overhead. 
# - FFT : Big speedups for small kernels # - "Fast Algorithms" : seem promising, not widely used yet # --- # # Segmentaion # - Semantic Segmentation # - Classify all pixels # - Fully convolutional models, downsample then upsample # - Learnable upsampling: fractionally strided convolution # - Skip connections can help # # ... # # - Instance Segmentation # - Detect instance, generate mask # - Similar pipelines to object detection # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # "Proceso de aprendizaje automático supervisado" # > "Una introducción la automatización del aprendizaje supervisado. Python + Pytorch" # # - toc: false # - branch: master # - badges: true # - comments: true # - categories: [curso_Fast.ai] # - image: images/red.jpg # - hide: false # - search_exclude: true # - metadata_key1: metadata_value1 # - metadata_key2: metadata_value2 # Fuentes: [Practical Deep Learning for Coders (Fast.ai](https://course.fast.ai/), [ - Dot CSV](https://www.youtube.com/channel/UCy5znSnfMsDwaLlROnZ7Qbg) # #
# We start from a **reality** that we want to study.
# By looking for **patterns** we build **models** that simplify this reality and help us gain an advantage.
# Models seek a balance between representing reality well and being simple to use.
# # Elements:
#
# - data: values taken from reality that feed the model
# - parameters: values that can be varied to fit the model
# - error: measures how well our model fits the data
#
# Starting from an initial model, the goal is to **optimize** it (train it, fit it) by varying the parameters in search of the minimum error.

# ## The model: linear regression as a first approach

# Within *machine learning* there are three distinct branches:
#
# - **supervised learning**: the data are labeled (classified), so we know the target we are after
# - **unsupervised learning**: the aim is to extract knowledge from unlabeled data; the target is not known in advance
# - **reinforcement learning**: a system of rewards and penalties is set up to reach the target
#
# We will start with supervised learning, choosing the simplest option: a linear model.

# #### Simple linear regression: one variable
#
# There is a single input variable **x** (the independent variable), and we look for a result **y** that depends on it. There is a linear relationship between the two, so it is represented as a line:
#
# y = b + w1 * x
#
# b and w1 are the parameters: the bias and the slope of the line

# ![](images/linea.png "y = b + w1 * x")

# #### Multiple linear regression
#
# ##### 2 variables
#
# They represent a plane
#
# y = b + w1 * x1 + w2 * x2

# ![](images/plano.png "y = b + w1 * x1 + w2 * x2")

# ##### n variables
#
# They represent a hyperplane
#
# y = b + w1 * x1 + w2 * x2 + w3 * x3 + ... + wn * xn

#
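# A minimal PyTorch sketch of these two cases (the tensors below are made-up examples, meant only to show the shape of the computation):

# +
import torch

# Simple linear regression: one input variable -> a line
x = torch.tensor([1.0, 2.0, 3.0])
b, w1 = 0.5, 2.0
y = b + w1 * x

# Multiple linear regression: n input variables -> a hyperplane
X = torch.randn(10, 3)                    # 10 samples, 3 features
w = torch.tensor([0.4, -1.2, 0.7])        # w1, w2, w3
y_multi = b + X @ w                       # y = b + w1*x1 + w2*x2 + w3*x3

print(y)
print(y_multi.shape)
# -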
# For both simple and multiple linear regression we will have some data taken from reality, and we want to find a model (a line, a plane, ...) that comes close to them:

# ![](images/estimacion01.png)

# For each point we can compute the error (distance) between the actual value and the prediction:
#
# error = actual Y - estimated Y
#

# The **error** (cost) **of the model** will be the mean of these distances:

error_media_abs = ((y_real-y_estimado)).abs().mean()

# ![](images/estimacion02.png)

# In practice, though, the square root of the **mean squared error (MSE)** is usually used, because it penalizes larger errors more heavily:

error_mse = ((y_real-y_estimado)**2).mean().sqrt()

# PyTorch already provides optimized versions of these functions

F.l1_loss(y_real.float(),y_estimado)
F.mse_loss(y_real,y_estimado).sqrt()

#
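# A tiny runnable check of the two measures above on made-up numbers (the vectors `y_real` / `y_estimado` below are invented for illustration):

# +
import torch
import torch.nn.functional as F

y_real     = torch.tensor([3.0, 5.0, 7.0])
y_estimado = torch.tensor([2.5, 5.5, 9.0])

mae  = (y_real - y_estimado).abs().mean()           # mean absolute error
rmse = ((y_real - y_estimado) ** 2).mean().sqrt()   # root of the MSE

# Same values via the built-in losses used above
assert torch.isclose(mae,  F.l1_loss(y_estimado, y_real))
assert torch.isclose(rmse, F.mse_loss(y_estimado, y_real).sqrt())
print(mae.item(), rmse.item())
# -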
#

# ## Learning: gradient descent

# Once we have the model and the error function, we will try different values for the parameters **w and b**, looking for the smallest error, until we find the model that best fits reality.
#
# We want this parameter adjustment to happen autonomously. The tool for that is **gradient descent**.
#
# The *gradient descent* technique consists of moving across the error function looking for the point where it is minimized. To do so, at each position it looks for the direction in which the error decreases the most. This is obtained from the partial derivative at that point with respect to each parameter (the gradient of the function), which is the slope of the curve.
#
# Once we know the direction in which to move, the parameters are recomputed as follows:
#
# w := w - gradient(w) * lr
#
#
# 'lr' is the **learning rate**, which defines the size of the step to take. If it is too small it will take a long time to reach the minimum, and if it is too large it will most likely overshoot and never reach the minimum.

# In PyTorch the following function computes the gradient

funcion_error.backward()

# The computed gradient is shown

parametros.grad

#
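# A minimal end-to-end example of this update rule with autograd, on made-up data (the names follow the snippets above; `lr` and the data values are illustrative):

# +
import torch

x_datos = torch.tensor([1.0, 2.0, 3.0, 4.0])     # made-up inputs
y_real  = torch.tensor([3.1, 5.0, 6.8, 9.2])     # made-up targets (roughly 1 + 2x)

parametros = torch.randn(2, requires_grad=True)  # [b, w1]

y_estimado = parametros[0] + parametros[1] * x_datos
funcion_error = ((y_real - y_estimado) ** 2).mean()

funcion_error.backward()        # fills parametros.grad with the gradient
print(parametros.grad)

lr = 0.01
with torch.no_grad():           # equivalent to updating .data, as shown below
    parametros -= lr * parametros.grad
parametros.grad = None          # reset before the next iteration
# -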
# Now we can put together the supervised learning algorithm. The steps are:
#
# 1- Load the data and choose the model

# 2- Initialize the parameters: this is usually done at random

# The parameters (w) can store their gradient thanks to "requires_grad_()"
parametros = torch.randn(num_parametros).requires_grad_()

# 3- Compute the predictions

# 4- Compute the error (cost)

error = mse(prediciones, valores_reales)

# 5- Compute the gradient

# 6- Take the step: modify the parameters

lr = 1e-5
parametros.data -= lr * parametros.grad.data
parametros.grad = None

# 7- Repeat the process (from step 3) as many times as necessary
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] toc=true
#
# Table of Contents
    # # - # # Random Forest # ## Introduction # # Random forest is a supervised learning algorithm. The "forest" it builds, is an collection of decision trees, usually trained with the “bagging” method. The general idea of the bagging method is that a combination of learning models increases the overall result. # # i.e. random forest builds multiple decision trees and merges them together to get a more accurate and stable prediction. # # It is one of the most used algorithms, because of its simplicity and diversity (it can be used for both classification and regression tasks). # ## Why Random Forest? # # Why go to the trouble of creating a forest of decision trees when a single decision tree already works? # # Trees are easy to build, easy to use, and easy to interpret BUT they are not very accurate. Decision Trees have some intrinsic limitations that a Random Forest helps to address - Decision Trees are prone to overfitting - i.e. they work great with training data but their accuracy is not so great with classifying new samples. # # Random Forests combine the simplicity of Decision Trees with flexibility to deliver a vast improvement in accuracy. # ### Advantages # # - Random forests is considered as a highly accurate and robust method because of the number of decision trees participating in the process. # - It does not suffer from the overfitting problem. The main reason is that it takes the average of all the predictions, which cancels out the biases. # - The algorithm can be used in both classification and regression problems. # - Random forests can also handle missing values. There are two ways to handle these: using median values to replace continuous variables, and computing the proximity-weighted average of missing values. # - You can get the relative feature importance, which helps in selecting the most contributing features for the classifier. # ### Disadvantages # # - Random forests is slow in generating predictions because it has multiple decision trees. Whenever it makes a prediction, all the trees in the forest have to make a prediction for the same given input and then perform voting on it. This whole process is time-consuming. # - The model is difficult to interpret compared to a decision tree, where you can easily make a decision by following the path in the tree. # ## How Random Forest Works? # # Here's a short video on how to build a Random Forest. # + ## Run this cell (shift+enter) to see the video from IPython.display import IFrame IFrame("https://www.youtube.com/embed/J4Wdy0Wc_xQ", width="814", height="509") # - # ## Random Forests vs Decision Trees # - Random forests is a set of multiple decision trees. # - Deep decision trees may suffer from overfitting, but random forests prevents overfitting by creating trees on random subsets. # - Decision trees are computationally faster. # - Random forests is difficult to interpret, while a decision tree is easily interpretable and can be converted to rules. # ## Project - Flower Classification Using Random Forest Algorithm # # You will be building a model on the iris flower dataset, which is a very famous classification set. It comprises the sepal length, sepal width, petal length, petal width, and type of flowers. There are three species or classes: setosa, versicolor, and virginia. You will build a model to classify the type of flower. The dataset is available in the scikit-learn library or you can download it from the UCI Machine Learning Repository. 
# # Here's a link to the dataset if you wish to explore it further - http://archive.ics.uci.edu/ml/datasets/Iris/ import numpy as np import pandas as pd from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier import matplotlib # %matplotlib inline # ## Data Load and Initial Exploration # # The Iris dataset come preloaded in the scikitlearn library. # + #Import scikit-learn dataset library from sklearn import datasets #Load dataset iris = datasets.load_iris() # - # Let's take a quick look at the features of our dataset and the different flower types that the samples are classified into - # print the names of the four features print(iris.feature_names) # print the label species(setosa, versicolor,virginica) print(iris.target_names) # ### Changing Data Format # # The `iris` object that holds our dataset is not a pandas dataframe. Let's store the data in a dataframe to enable further analysis and modeling. type(iris) # + ## converting to a dataframe data=pd.DataFrame({ 'sepal length':iris.data[:,0], 'sepal width':iris.data[:,1], 'petal length':iris.data[:,2], 'petal width':iris.data[:,3], 'species':iris.target }) # - # Let's take a quick look at the data before we move ahead - data.head() data['species'].value_counts() data.hist() # ## Data Pre-processing # # We will not need to do much pre-processing in this case. The data is already in the form we need it. We will do two quick things though - # # - split the features and the labels # - split the data into train and test datasets # + # Import train_test_split function from sklearn.model_selection import train_test_split X=data[['sepal length', 'sepal width', 'petal length', 'petal width']] # Features y=data['species'] # Labels # Split dataset into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # 70% training and 30% test # - # ## Train the Model and Predict # + #Import Random Forest Model from sklearn.ensemble import RandomForestClassifier #Create a Gaussian Classifier clf=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) clf.fit(X_train,y_train) y_pred=clf.predict(X_test) # - # ## Evaluate the Model # Test scores for the test dataset - #Import scikit-learn metrics module for accuracy calculation from sklearn import metrics # Model Accuracy, how often is the classifier correct? print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) # Test scores for the train dataset - print("Accuracy:",metrics.accuracy_score(y_train, clf.predict(X_train))) # ## Hyperparameter Tuning # # Now that you've built a Random Forest classifier how can you improve the accuracy? # # We covered Hyperparameter Tuning in the Decision Trees notebook. Let's see how we can apply that to our current project. We will be ranking the importance of features and then selectively use the most important features to re-train our model. 
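# The notebook proceeds with feature selection below; for the hyperparameters themselves (e.g. `n_estimators`, `max_depth`), a small grid search is the more conventional route. A minimal sketch, reusing the `X_train`/`y_train` split from above (the grid values are illustrative):

# +
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 3, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
# -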
# # The RandomForestClassifier helps us get feature importance scores - feature_imp = pd.Series(clf.feature_importances_,index=iris.feature_names).sort_values(ascending=False) feature_imp # Now that we know the top three features that are pertinent for our model, let's do feature selection accordingly - # Split dataset into features and labels X=data[['petal length', 'petal width','sepal length']] # Removed feature "sepal width" y=data['species'] # Split dataset into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.70, random_state=5) # 70% training and 30% test # Let's retrain our model - # + #Create a Gaussian Classifier clf=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) clf.fit(X_train,y_train) # prediction on test set y_pred=clf.predict(X_test) # - # ... and now for the evaluation. Did the feature selection lead to a favorable outcome? # Model Accuracy, how often is the classifier correct? print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) # Great! As you might notice, our accuracy has gone up from 0.933 to 0.952. # # This project was created using a variety of online resources as references. This informative tutorial by was used extensively - https://www.datacamp.com/community/tutorials/random-forests-classifier-python # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="UwHurPrcu7Cr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 158} outputId="fd5bc4a4-9cfd-4d9b-e0ce-c9d59adb0b57" # este script tenta reproducir os resultados do artigo "BERT for Stock Market Sentiment Analysis" do # autoria deles. # autor deste codigo: steve # notas: verificar que o Runtime do Colab seja GPU e Python 3. 
import datetime import json import os import pprint import random import string import sys import tensorflow as tf # clonamos o git # !test -d bert_repo || git clone https://github.com/stonescenter/bert.git bert_repo if not 'bert_repo' in sys.path: sys.path += ['bert_repo'] if 'COLAB_TPU_ADDR' not in os.environ: print('Not connected to TPU') else: print("Connected to TPU, please use GPU") # + id="3k285wkrA1tD" colab_type="code" colab={} # Available pretrained model checkpoints: # uncased_L-12_H-768_A-12: uncased BERT base model # uncased_L-24_H-1024_A-16: uncased BERT large model # cased_L-12_H-768_A-12: cased BERT large model BERT_MODEL = 'uncased_L-12_H-768_A-12' BERT_BASE_DIR='gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12' BERT_MODEL_HUB = 'https://tfhub.dev/google/bert_' + BERT_MODEL + '/1' #BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1" # configuramos o modelo pre-treinado do bert BERT_PRETRAINED_DIR = 'gs://cloud-tpu-checkpoints/bert/' + BERT_MODEL print('***** BERT pretrained directory: {} *****'.format(BERT_PRETRAINED_DIR)) # !gsutil ls $BERT_PRETRAINED_DIR BERT_BASE_DIR = BERT_PRETRAINED_DIR # !gsutil ls $BERT_BASE_DIR # + id="KreT_7IQONSA" colab_type="code" outputId="4b210875-25f0-4b4c-b2dd-ebc6b24fb8ce" colab={"base_uri": "https://localhost:8080/", "height": 34} # dados temporales durante a session em colab DATA_TRAIN ='bert_repo/BNEWS_DATA/datasetEconomyNews_PN.json' DATA_TEST = 'bert_repo/BNEWS_DATA/predict_data.json' TASK_NAME = 'bnd' # bucket propio para almacenar se nao tem entao sera eliminado constantemente # em cada sesion #BUCKET = 'hd-storage-bucket' #OUTPUT_DIR = 'gs://{}/bert/models/{}'.format(BUCKET, TASK_NAME) OUTPUT_DIR = 'output/bert/{}'.format(TASK_NAME) tf.gfile.MakeDirs(OUTPUT_DIR) print('***** Directorio creado : {} *****'.format(OUTPUT_DIR)) # #!gsutil ls -la $OUTPUT_DIR # !test -d $OUTPUT_DIR if not OUTPUT_DIR in sys.path: sys.path += [OUTPUT_DIR] #with tf.gfile.GFile(OUTPUT_DIR + '/dataDistribution.txt', 'a') as f: # f.write("------ STATISTICS %s ------\n") #f.close() # #!gsutil cat $DIST #comando para editar un file # #%pycat bert_repo/run_classifier.py # #!rm bert_repo/run_classifier.py # #%%writefile bert_repo/run_classifier.py # + id="o5vNQ_athkNL" colab_type="code" colab={} # verificamos se funciona a variavel DATA_TEST dados do teste data_predict = [] with tf.gfile.Open(DATA_TEST,'r') as fp: data_predict = json.load(fp) for block in data_predict: title, text = block["headlineTitle"], block["headlineText"] print(text) print(DATA_TEST) # + id="9RCAB7tjMDND" colab_type="code" colab={} # mandamos a treinar os dados. # !python bert_repo/run_classifier.py \ # --task_name=$TASK_NAME \ # --do_train=true \ # --do_eval=true \ # --do_test=true \ # --do_predict=true \ # --data_dir=$DATA_TRAIN \ # --vocab_file=$BERT_BASE_DIR/vocab.txt \ # --bert_config_file=$BERT_BASE_DIR/bert_config.json \ # --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ # --max_seq_length=64 \ # --train_batch_size=32 \ # --learning_rate=2e-5 \ # --num_train_epochs=10.0 \ # --output_dir=$OUTPUT_DIR \ # --seed=123124124 # + id="DEoqHuQOBIxY" colab_type="code" colab={} # mandamos a testar os dados. Aqui sai um error de incompatibilidade, o motivo parecer ser o tensorflow e os checkpoint # e nao ao codigo. 
# !python bert_repo/run_classifier.py \ # --task_name=$TASK_NAME \ # --do_predict=true \ # --data_dir=$DATA_TEST \ # --vocab_file=$BERT_BASE_DIR/vocab.txt \ # --bert_config_file=$BERT_BASE_DIR/bert_config.json \ # --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ # --max_seq_length=64 \ # --output_dir=$OUTPUT_DIR # + [markdown] id="kUL_2mkLgacx" colab_type="text" # / --- / jupyter: / jupytext: / text_representation: / extension: .q / format_name: light / format_version: '1.5' / jupytext_version: 1.14.4 / kernelspec: / display_name: SQL / language: sql / name: SQL / --- / + [markdown] azdata_cell_guid="27d8be3b-8e80-4e07-b09e-47dd877029a0" / # Variables / + azdata_cell_guid="ec25df8c-d7d0-426c-839f-62aa57e1928a" -- Search Customer by city using variable (scoped to batch) DECLARE @City varchar(20) = 'Toronto'; SET @City = 'Bellevue' SELECT c.CustomerID, c.FirstName + ' ' + c.LastName AS CustomerName, a.City FROM SalesLT.Customer AS c JOIN SalesLT.CustomerAddress AS ca ON c.CustomerID = ca.CustomerID JOIN SalesLT.Address AS a ON ca.AddressID = a.AddressID WHERE a.City = @City -- Note: Do not put GO command between variable declaration and query using that variable (cuz they're sent as different batches for execution) / + azdata_cell_guid="8fda8c8e-1708-44f3-bc02-3b225779b2cd" -- Assign value to variable using SELECT query and output it DECLARE @MaxListPrice int; SELECT @MaxListPrice = MAX(ListPrice) FROM SalesLT.Product PRINT 'Maximum List Price in Product table: ' + CAST(@MaxListPrice AS varchar(20)); / + [markdown] azdata_cell_guid="0e121626-d6f0-4512-90be-bcc2052a262c" / # Conditional Branching / + azdata_cell_guid="c52ded71-0671-469b-b586-b9ab936d8aac" IF 'Yes' = 'yes' PRINT 'True' ELSE PRINT 'False' / + azdata_cell_guid="008edada-1659-4ec6-80ba-546ecbc20feb" -- Display a custom message to the user if UPDATE statement affected any rows DECLARE @pid int = 680; UPDATE SalesLT.Product SET DiscontinuedDate = GETDATE() WHERE ProductID = @pid; DECLARE @RowsAffected int = @@ROWCOUNT; IF @RowsAffected < 1 PRINT 'No product found.' ELSE BEGIN PRINT CAST(@RowsAffected AS varchar(10)) + ' product updated.' 
SELECT * FROM SalesLT.Product WHERE ProductID = @pid; END -- Note: @@ROWCOUNT is a system variable which returns the number of rows affected by the previous query / + [markdown] azdata_cell_guid="a6d729ca-d6ef-43a5-8d87-87be496115f0" / # Looping / + azdata_cell_guid="810d7748-1b11-42ce-9c60-6996ba65f2f7" tags=[] DECLARE @DemoTable AS TABLE (Description varchar(20)); DECLARE @Counter int = 1; WHILE @Counter <= 5 BEGIN INSERT INTO @DemoTable VALUES ('Row ' + CAST(@Counter AS varchar(20))); SET @Counter += 1; END SELECT * FROM @DemoTable; / + [markdown] azdata_cell_guid="4d99f9b9-2d42-4392-9292-356e06e1c5ae" / # Stored Procedures / + azdata_cell_guid="cf43fd3c-f258-4dbf-afbb-c3cbf2a991f2" -- Create a stored procedure to get all Products by ProductCategoryID CREATE PROCEDURE SalesLT.GetProductsByCategoryID (@CategoryID INT = NULL) AS IF @CategoryID IS NULL SELECT ProductID, Name, Color, Size, ListPrice FROM SalesLT.Product; ELSE SELECT ProductID, Name, Color, Size, ListPrice FROM SalesLT.Product WHERE ProductCategoryID = @CategoryID; GO -- Marks the end of procedure definition batch -- Execute procedure without parameters EXEC SalesLT.GetProductsByCategoryID; -- Execute procedure with parameter EXEC SalesLT.GetProductsByCategoryID 6; # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="wrhbqcV-Dp8n" colab={"base_uri": "https://localhost:8080/"} outputId="e23efb09-5ec5-4bd7-8b90-73072bd8bfa2" # !pip install simpledbf # + id="7ShGotOt9xf7" import shutil import urllib.request as request from contextlib import closing import zipfile import shutil import sys import os from pathlib import Path import pandas as pd from simpledbf import Dbf5 import platform def download(url, save_filepath): with requests.get(url, stream=True) as r: r.raise_for_status() with open(save_filepath, 'wb') as f: for chunk in r.iter_content(chunk_size=8192): # If you have chunk encoded response uncomment if # and set chunk_size parameter to None. 
#if chunk: f.write(chunk) def download_ftp(url, savepath): with closing(request.urlopen(url)) as r: with open(savepath, 'wb') as f: shutil.copyfileobj(r, f) root = Path(".") # + id="VnoIvGBO9xgB" dbc_save_folder = root / "SINASC_DBC" dbf_save_folder = root / "SINASC_DBF" csv_save_folder = root / "SINASC_CSV" dbc_save_folder.mkdir(exist_ok=True) dbf_save_folder.mkdir(exist_ok=True) csv_save_folder.mkdir(exist_ok=True) # + id="Z2EpgNZu9xgC" colab={"base_uri": "https://localhost:8080/"} outputId="7888ccd9-e85e-4b2c-89ab-97a065bd5480" SINASC_URLS = [ f"ftp://ftp.datasus.gov.br/dissemin/publicos/SINASC/1996_/Dados/DNRES/DNSP{year}.dbc" for year in range(2010, 2019 + 1) ] for url in SINASC_URLS: print("Downloading", url.split("/")[-1]) save_path = dbc_save_folder / url.split("/")[-1] download_ftp(url, save_path) # + id="whWXpLVzFYnP" # %load_ext rpy2.ipython # + id="un_pBWn7F3Dr" colab={"base_uri": "https://localhost:8080/"} outputId="a926243d-dca3-4c05-c36d-420734918871" language="R" # # Using read.dbc package # # https://pt.linkedin.com/pulse/datasus-conhe%C3%A7a-nova-ferramenta-para-ler-arquivos-dbc-petruzalek # # install.packages("read.dbc") # library("read.dbc") # # + id="S9IFq3KYK0YV" # !ls SINASC_CSV # + id="GPSxZANrHBGT" colab={"base_uri": "https://localhost:8080/"} outputId="15347d58-6e1a-4706-ed7d-4408cfa8f57c" language="R" # files = list.files("SINASC_DBC") # for (file in files){ # # print(file) # # filename = strsplit(file, ".dbc")[1] # # dbc_filepath = paste("./SINASC_DBC/", file, sep="") # # csv_filepath = paste("./SINASC_CSV/", filename, ".csv", sep="") # # df <- read.dbc(dbc_filepath) # # write.csv(df, csv_filepath) # } # + id="WgbL2yLu9xgG" colab={"base_uri": "https://localhost:8080/"} outputId="76a471a8-d260-4510-b1f0-efae412de843" union_df = None dfs = [] cols = ["DTNASC", "QTDFILVIVO", "SEXO", "IDADEMAE", "PESO", "GRAVIDEZ", "CONSULTAS", "RACACOR", "CODMUNNASC", "ESTCIVMAE", "ESCMAE", "PARTO", "IDANOMAL"] union_csv_filename = "union.csv" for csv_filepath in csv_save_folder.glob("*.csv"): if csv_filepath.name == union_csv_filename: continue df = pd.read_csv(csv_filepath, usecols=cols) print(csv_filepath, len(df.columns)) sorted_cols = list(sorted(df.columns)) df = df[cols] dfs += [df] if union_df is None: union_df = df else: union_df = pd.concat([union_df, df]) union_df.to_csv(csv_save_folder / union_csv_filename) # + id="e6GwCGtO9xgH" colab={"base_uri": "https://localhost:8080/", "height": 435} outputId="1a307b55-80d5-4e22-802c-5a2b9e6c944b" import pandas as pd pd.read_csv(csv_save_folder / "union.csv") # + id="TmzZLlnlZdQS" outputId="ce367955-067b-4f97-9a03-c3b36ec1d661" colab={"base_uri": "https://localhost:8080/"} df = pd.read_csv(csv_save_folder / "union.csv") # + id="-hPlLbcRbmpk" outputId="2406772b-3f3e-4120-bb8b-daf029aca381" colab={"base_uri": "https://localhost:8080/"} cities = { 3502101: 'Andradina', 3502804: 'Araãatuba', 3505708: 'Barueri', 3514403: 'Dracena', 3517406: 'Guaãra', 3524808: 'Jales', 3546801: 'Santa Isabel', 3548500: 'Santos', 3548807: 'São Caetano do Sul', 3549805: 'São Josã do Rio Preto', 3550308: 'São Paulo' } cities = {(cod // 10): name for cod, name in cities.items()} cities # + id="h6vwLDvTZQf4" def parse_day(d): d = str(d) day, month, year = int(d[-8:-6]), int(d[-6:-4]), int(d[:-4]) return day def parse_month(d): d = str(d) day, month, year = int(d[-8:-6]), int(d[-6:-4]), int(d[:-4]) return month def parse_year(d): d = str(d) day, month, year = int(d[-8:-6]), int(d[-6:-4]), int(d[-4:]) if year < 2000: year += 2000 return year df = 
df[df.CODMUNNASC.apply(lambda x: x in cities.keys())] df["YEAR"], df["MONTH"], df["DAY"] = 0, 0, 0 df["DAY"] = df.DTNASC.apply(parse_day) df["MONTH"] = df.DTNASC.apply(parse_month) df["YEAR"] = df.DTNASC.apply(parse_year) df["CITYNAME"] = df.CODMUNNASC.apply(lambda x: cities[x]) # + id="adkM7HaSa43s" outputId="79fc6a6d-b954-4fe3-ec06-9e60aa80d818" colab={"base_uri": "https://localhost:8080/", "height": 573} df # + id="nXork2Yeg8Yw" outputId="812f4bd9-27e8-4743-f68d-d6c3f1017d06" colab={"base_uri": "https://localhost:8080/", "height": 1000} pd.set_option('display.max_rows', None) count_df = df.groupby(["YEAR", "MONTH", "CITYNAME", "CODMUNNASC"]).count()[["DTNASC"]] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # STAT 453: Deep Learning (Spring 2021) # Instructor: () # # Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/ # GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21 # # --- # %load_ext watermark # %watermark -a '' -v -p torch # # All-Convolutional Net on Cifar-10 # ## Imports import torch import torchvision import numpy as np import matplotlib.pyplot as plt # From local helper files from helper_evaluation import set_all_seeds, set_deterministic, compute_confusion_matrix from helper_train import train_model from helper_plotting import plot_training_loss, plot_accuracy, show_examples, plot_confusion_matrix from helper_dataset import get_dataloaders_cifar10, UnNormalize # ## Settings and Dataset # + ########################## ### SETTINGS ########################## RANDOM_SEED = 123 BATCH_SIZE = 256 NUM_EPOCHS = 50 DEVICE = torch.device('cuda:2' if torch.cuda.is_available() else 'cpu') # - set_all_seeds(RANDOM_SEED) #set_deterministic() # + ########################## ### CIFAR-10 DATASET ########################## train_transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize((70, 70)), torchvision.transforms.RandomCrop((64, 64)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) test_transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize((70, 70)), torchvision.transforms.CenterCrop((64, 64)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) train_loader, valid_loader, test_loader = get_dataloaders_cifar10( batch_size=BATCH_SIZE, validation_fraction=0.1, train_transforms=train_transforms, test_transforms=test_transforms, num_workers=2) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) print('Image label dimensions:', labels.shape) print('Class labels of 10 examples:', labels[:10]) break # - # ## Model # + ########################## ### MODEL ########################## class AllConvNet(torch.nn.Module): def __init__(self, num_classes): super().__init__() self.net = torch.nn.Sequential( torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=1, bias=False), torch.nn.BatchNorm2d(16), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False), torch.nn.BatchNorm2d(16), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), stride=(1, 1), padding=1, bias=False), torch.nn.BatchNorm2d(32), torch.nn.ReLU(inplace=True), 
torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False), torch.nn.BatchNorm2d(32), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=1, bias=False), torch.nn.BatchNorm2d(64), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False), torch.nn.BatchNorm2d(64), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(in_channels=64, out_channels=num_classes, kernel_size=(3, 3), stride=(1, 1), padding=1, bias=False), torch.nn.BatchNorm2d(10), torch.nn.ReLU(inplace=True), torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten() ) def forward(self, x): x = self.net(x) #probas = torch.softmax(x, dim=1) return x # + model = AllConvNet(num_classes=10) model = model.to(DEVICE) optimizer = torch.optim.SGD(model.parameters(), momentum=0.9, lr=0.1) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, mode='max', verbose=True) minibatch_loss_list, train_acc_list, valid_acc_list = train_model( model=model, num_epochs=NUM_EPOCHS, train_loader=train_loader, valid_loader=valid_loader, test_loader=test_loader, optimizer=optimizer, device=DEVICE, scheduler=scheduler, scheduler_on='valid_acc', logging_interval=100) plot_training_loss(minibatch_loss_list=minibatch_loss_list, num_epochs=NUM_EPOCHS, iter_per_epoch=len(train_loader), results_dir=None, averaging_iterations=200) plt.show() plot_accuracy(train_acc_list=train_acc_list, valid_acc_list=valid_acc_list, results_dir=None) plt.ylim([60, 100]) plt.show() # + model.cpu() unnormalizer = UnNormalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) class_dict = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'} show_examples(model=model, data_loader=test_loader, unnormalizer=unnormalizer, class_dict=class_dict) # - mat = compute_confusion_matrix(model=model, data_loader=test_loader, device=torch.device('cpu')) plot_confusion_matrix(mat, class_names=class_dict.values()) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # here we will be doing dimensionality reduction with PCA and tSNE on the swiss roll datasets. We'll also visualize it. import sealion as sl #import first from sealion.DimensionalityReduction import PCA, tSNE # the 2 dimensionality reduction algorithms import matplotlib.pyplot as plt # for visualization from sklearn.datasets import make_swiss_roll # for getting the data # + X, _ = make_swiss_roll(1000) # load in a 100 points of the swiss roll dataset, all in 3D #First we need to visualize this 3D-data fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:, 0], X[:, 1], X[:, 2], c='red', marker='o') ax.set_xlabel('x-axis') ax.set_ylabel('y-axis') ax.set_zlabel('z-axis') plt.title('Swiss Roll Dataset') plt.show # + # now let's say that actually I need that data in 2D. But I don't just want to remove the z-axis and squish it, # because that wouldn't capture the full dataset very well. #here's what squishing looks like : fig = plt.figure() ax = fig.add_subplot() ax.scatter(X[:, 0], X[:, 1]) plt.xlabel("x-axis") plt.ylabel("y-axis") plt.title("Swiss Roll Squish") # + # YUCK! 
To maintain the structure in lower dimensions, a special type of machine learning # algorithms known as dimensionality reduction come in to play. Let's have a go at it. # the first algorithm we will be trying out is principal component analysis. Here we will be reducing it to 2D. pca = PCA(2) # 2 because we are transforming it to 2D. X_2D = pca.transform(X) # runs very fast # now we can visualize it in 2-Dimensions fig = plt.figure() ax = fig.add_subplot() ax.scatter(X_2D[:, 0], X_2D[:, 1]) plt.xlabel("x-axis") plt.ylabel("y-axis") plt.title("Swiss Roll PCA") # + # now if we wanted to do, we could see use the inverse_transform() method in the PCA class and compare it to our # original, 3D data. X_3D = pca.inverse_transform(X_2D) # give it the transformed data fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(X_3D[:, 0], X_3D[:, 1], X_3D[:, 2], c='red', marker='o') ax.set_xlabel('x-axis') ax.set_ylabel('y-axis') ax.set_zlabel('z-axis') plt.title('Swiss-Roll PCA inverse transform') plt.show() # + # notice how it looks like a plane. That's because PCA finds linear projections for the data to go onto. # Our next algorithm is tSNE, which personally for me was one of the hardest in this library to create. tSNE # struggles with a lot of data, so using bigger processors is usually preferred. # It's time complexity for SeaLion is O(n^2) so this may take some time. tsne = tSNE(2) # 2 dimensions X_2D = tsne.transform(X[:100]) #we'll just do it for the first 100 points fig = plt.figure() ax = fig.add_subplot() ax.scatter(X_2D[:, 0], X_2D[:, 1]) plt.xlabel("x-axis") plt.ylabel("y-axis") plt.title("Swiss Roll tSNE") # Which one do you like more? tSNE gives the better projection, but takes longer. PCA does it much quicker, but does it much worse. These are the choices you # will learn to make. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Probability distribution # 확률은 사건이라는 표본의 집합에 대해 할당된 숫자라고 하였다. # - 어떤 사건에 대해 어느 정도의 확률이 할당되었는지 묘사한 것을 확률 분포라고 한다. # # 대부분의 사건이 시작점과 끝점이라는 두 개의 숫자로 이루어진 단순간 구간을 사용하여 나타낼 수 있다. 따라서 이차원의 함수 $P(a, b)$를 사용하여 구현이 가능하다. # # $$P(\{ a \leq x < b \}) = P(a, b) $$ # # 그러나 구간 하나를 정의하기 위해 숫자 두 개가 필요하다는 점은 불편하다. 구간 시작점을 음수 무한대($-\infty$)로 통일하여 차원을 줄였다. # # 확률 공간의 정의를 이용하여 각 구간들로부터 시작점이 무한대가 아닌 다른 구간에 대한 확률 값도 구할 수 있다. # # $$P(\{ a \leq x < b \}) = P(\{ -\infty \leq x < b \}) - P(\{ -\infty \leq x < a \})$$ # ### cumulative distribution function (누적 분포 함수) # 위와 같은 방법으로 확률 분포를 묘사하는 방법을 누적 분포 함수라 하고, 약자로 **cdf**를 쓴다. # # 실제 확률 변수 $X$에 대한 누적 확률 분포 $F(x)$의 수학적 정의는 다음과 같다. # # $$F(x) = P(\{X < x\}) $$ # # 시계 바늘 확률 문제를 cdf로 표현할 경우 다음과 같이 생겼다. # %matplotlib inline t = np.linspace(-100, 500, 100) F = t / 360 F[t < 0] = 0 F[t > 360] = 1 plt.plot(t, F) plt.ylim(-0.1, 1.1) plt.xticks([0, 180, 360]) plt.title("Cumulative Distribution Function") plt.xlabel("$x$ (deg.)") plt.ylabel("$F(x)$") plt.show() # ### probability density function (확률 밀도 함수) # 누적 분포 함수로는 분포의 형상을 직관적으로 이해하기 힘든 단점이 있다. 다시 말해 어떤 확률 변수 값이 더 자주 나오는지에 대한 정보를 알기 힘들다는 점이다. # # 구할 수는 있지만, 전체 구간을 아주 작은 폭을 가지는 구간들도 나누어 각 구간의 확률을 살펴보아야 한다. 이는 구간 폭(width)을 정하는 데에 추가적 약속이 필요하기 때문에 실효성이 떨어진다. (혹은 각 점에 대한 미분값을 구하거나) # # 이를 극복하기 위해 생각한 것이 절대적 확률이 아닌 **상대적인** 확률 분포 형태만을 보기 위한 확률 밀도 함수이다. # # 기울기를 구하느 수학적 연산이 미분이므로 확률 밀도 함수는 누적분포 함수의 미분으로 정의한다. # # $$\dfrac{dF(x)}{dx} = f(x)$$ # # 적분으로 나타낼 경우 다음과 같다. 
# # $$F(x) = \int_{-\infty}^{x} f(u) du$$ # # - 확률 밀도 함수는 특정 확률 변수 구간의 확률이 다른 구간에 비해 상대적으로 얼마나 높은가를 나타내는 것이며 그** 값 자체가 확률은 아니다 ** # # 특징 # 1. $$\int_{-\infty}^{\infty} f(u)du = 1$$ # 1. $$ f(x) \geq 0 $$ # # 시계 바늘 문제의 확률 밀도함수는 다음과 같다. t = np.linspace(-100, 500, 1000) F = t / 360 F[t < 0] = 0 F[t > 360] = 1 f = np.gradient(F, 600/1000) # 수치미분 plt.plot(t, f) plt.ylim(-0.0001, f.max()*1.1) plt.xticks([0, 180, 360]) plt.title("Probability Density Function") plt.xlabel("$x$ (deg.)") plt.ylabel("$f(x)$") plt.show() # ### Probability mass function (확률 질량 함수) # # 이산 확률 분포는 확률 밀도 함수를 정의할 수 없는 대신 확률 질량 함수가 존재한다. # # 이산 확률 변수 또한 누적 분포 함수를 가지고 있다. x = np.arange(1,7) y = np.array([0.1, 0.1, 0.4, 0.2, 0.1, 0.1]) plt.stem(x, y) plt.xlim(0, 7) plt.ylim(-0.01, 0.5) plt.show() x = np.arange(1,7) y = np.array([0.1, 0.1, 0.4, 0.2, 0.1, 0.1]) z = np.cumsum(y) plt.step(x, z, where="post"); plt.xlim(0, 7); plt.ylim(-0.01, 1.1) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import geopy import geopandas as gpd import matplotlib.pyplot as plt import matplotlib.ticker as mtick # # Donor-Advised Fund Tracker # ##### Exploratory Data Analysis # ---- # This project aims to analyze and map the flow of funds from donor-advised funds to nonprofit organizations, as well as demonstrate a sustainable open-source information project for the nonprofit sector. grantees = pd.read_csv('Grantees.csv') sponsors = pd.read_csv('Sponsors.csv') print('Number of nonunique grantee organizations: ', grantees.shape[0]) print('Number of sponsoring organizations: ', sponsors.shape[0]) # The *sponsors* dataframe indicates the sponsoring organizations of any donor-advised funds, and the *grantees* dataframe aggregates information about the recipients. See [fields.csv](https://github.com/leppekja/IRS-990-DAFs/blob/master/fields.csv) as a reference for field descriptions. 
sponsors.head() grantees.head() # ### Data Cleaning # Both dataframes: # # - remove Unnamed:0 index column # - 8 digit zip codes to 5 # - Check state abbreviations, address standardization # # grantees.csv # # - remove trailing zero on ZIPCd, RecipientEIN, # + grantees.drop(['Unnamed: 0'], axis=1, inplace=True) sponsors.drop(['Unnamed: 0'], axis=1, inplace=True) grantees['ZIPCd'] = grantees['ZIPCd'].astype(str).str[:5] sponsors['ZIPCd'] = sponsors['ZIPCd'].astype(str).str[:5] # Grantees may be school districts, meaning no EIN grantees['RecipientEIN'].fillna(-1, inplace=True) grantees['RecipientEIN'] = grantees['RecipientEIN'].astype(int) # - # ### Data Visualizations # ---- # - how many nonprofit organizations supported by each sponsor bar chart # - how much funding given by each sponsor bar chart # - aggregate income into daf for each sponsor bar chart # - how much funding recieved by each nonprofit organization bar chart # - functions for mapping sponsors outgoing funds # - functions for mapping grantee's incoming funds fig, ax = plt.subplots(figsize=(12,3)) grantees.groupby('Sponsor')['RecipientEIN'].count().plot(kind='bar', ax=ax) grantees['RecipientEIN'].nunique() sponsors.DonorAdvisedFundsHeldCnt.sum() def frequency(data, columns): ''' Displays general visualizations to explore data ''' fig, axs = plt.subplots(len(columns), figsize=(15,4), sharex=False) fig.subplots_adjust(hspace=1) plt.suptitle("Frequency and Proportions of " + ", ".join(columns)) axes_iterator = ([axs] if len(columns) == 1 else axs.flatten()) for ax, col in zip(axes_iterator, columns): twin = ax.twinx() ax.bar(data[col].value_counts().index, data[col].value_counts().values) twin.plot(np.cumsum(data[col].value_counts(normalize=True)), c='r') twin.yaxis.set_major_formatter(mtick.PercentFormatter(1.0)) plt.setp(ax.get_xticklabels(), rotation=30, ha='right', fontsize=12) ax.set_ylabel('Frequency') twin.set_ylabel('Proportion') ax.grid(b=True, which='major') ax.set_axisbelow(b=True) plt.show() sponsors[sponsors['DonorAdvisedFundsHeldCnt'] == sponsors['DonorAdvisedFundsHeldCnt'].max()] sponsors.iloc[sponsors['DonorAdvisedFundsHeldCnt'].nlargest(10).index] # Quick geocoding test using the Census Bureau's [geocoding service](https://geocoding.geo.census.gov/geocoder/locations/addressbatch?form): sponsor_addresses = sponsors.loc[:,['AddressLine1Txt','CityNm','StateAbbreviationCd','ZIPCd']].to_csv('sponsor_addresses.csv') grantee_addresses_1 = grantees.loc[:9999,['AddressLine1Txt','CityNm','StateAbbreviationCd','ZIPCd']].to_csv('grantee_addresses_1.csv') grantee_addresses_2 = grantees.loc[9999:,['AddressLine1Txt','CityNm','StateAbbreviationCd','ZIPCd']].to_csv('grantee_addresses_2.csv') grantees.columns grantees.shape sponsors_locs = gpd.GeoDataFrame(pd.read_csv('sponsor_address_results.csv')) sponsors_locs.head() sponsors_locs['Unnamed: 0'].str.split(',') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib.pyplot as plt # + from flask import Flask, render_template, request, jsonify, redirect import base64 import numpy as np import pickle as pkl import cv2 import os from predict import predict_digit app = Flask(__name__) stats = None img4 = None if not os.path.exists('stats/stats.pkl'): stats = { 'yes':0, 'no':0 } with open('stats/stats.pkl', 'wb') as fp: pkl.dump(stats,fp) @app.route('/') def home(): acc = '100' with 
open('stats/stats.pkl', 'rb') as fp: global stats; stats = pkl.load(fp) total = sum(stats.values()) if total!=0: acc = f"{ 100*stats['yes']/total:.2f} ({stats['yes']}/{total})" return render_template('index.html',data = {'status':False, 'accuracy':acc} ) @app.route('/charrecognize', methods = ['POST']) def predict(): global img4; if request.method == 'POST': data = request.get_json() imagebase64 = data['image'] binary = base64.b64decode(imagebase64) img = np.asarray(bytearray(binary), dtype="uint8") img = cv2.imdecode(img, 0) canny = cv2.Canny( cv2.blur(img, (3,3)), 100, 250) contours, _ = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE ) idx = np.argmax( list( map( cv2.contourArea, contours ) ) ) x,y,w,h = cv2.boundingRect(contours[idx]) img = 255-img[ y:y+h, x:x+w ] img4 = img img = (cv2.resize(img, (20, 20))>127)*1 final = np.zeros( (28,28) ) final[ 4:24,4:24 ] = img final = final.reshape(1,28,28,1) return jsonify({ 'prediction': predict_digit(final), 'status': True }); @app.route('/no', methods=['get']) def no(): with open('stats/stats.pkl', 'wb') as fp: stats['no']+=1 pkl.dump(stats,fp) return redirect('/') @app.route('/yes', methods=['get']) def yes(): with open('stats/stats.pkl', 'wb') as fp: stats['yes']+=1 pkl.dump(stats,fp) return redirect('/') if __name__=='__main__': app.run(host = '0.0.0.0',port = int(3000)) # - img4.min() plt.imshow( cv2.resize(img, (20, 20), cmap='gray' ) cv2.boundingRect(img4[1][0]) plt.imshow(img4[0][ 15:15+243,25:25+156 ]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import copy from difflib import SequenceMatcher import os import pandas as pd from tqdm import tqdm #

    We load the Excel workbook

    # + df_main = pd.read_excel('Compilado02.xlsx', engine='openpyxl', sheet_name=None) for key in df_main.keys(): print(key) # - #

    We compare the names

    # + df_students01 = df_main['alumnas2020'].copy(deep=True) df_students02 = df_main['alumnas2021'].copy(deep=True) # This function will help us to compare strings def similar(a, b): return SequenceMatcher(None, a, b).ratio() # This one makes the comparison industrial def gen_similar(list01, list02): list_similars = [] for word in list01: ratio = 0 for spiegel in list02: if ratio < max(ratio, similar(word, spiegel)): ratio = similar(word, spiegel) solution = (word, ratio, spiegel) list_similars.append(solution) return(list_similars) list01 = df_students01['Nombre'].to_list() list02 = df_students02['Nombre'].to_list() list_similars = gen_similar(list01, list02) # - #
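#
# The nested loop above calls `similar()` twice per candidate, and the `ratio < max(ratio, ...)` test
# is equivalent to `ratio < similar(...)`. As an aside, here is a shorter sketch of the same best-match
# search using Python's built-in `max()` with a key function (it assumes every word has at least one
# candidate in `list02`):

# +
def gen_similar_compact(list01, list02):
    # Same output format as gen_similar: a list of (word, best_ratio, best_match) tuples.
    list_similars = []
    for word in list01:
        best = max(list02, key=lambda candidate: similar(word, candidate))
        list_similars.append((word, similar(word, best), best))
    return list_similars

# Example call (hypothetical names, only to show the output format):
gen_similar_compact(['Maria Perez'], ['Maria Peres', 'Ana Lopez'])
# -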

    We create our relational (matching) column

    # + for row in range(df_students02.shape[0]): name_2021 = df_students02.at[row, 'Nombre'] for i, col in enumerate(['Nombre 2020', 'Name match %']): value = [entry[i] for entry in list_similars if entry[-1] == name_2021] if len(value) != 0: df_students02.at[row, col] = value[0] dict_final = {'students2020': df_students01, 'students2021': df_students02, 'classes': df_main['clases']} # Adicionalmente, se guarda el diccionario en formato de libro en Excel writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter') for df_name, df in dict_final.items(): df.to_excel(writer, sheet_name=df_name) writer.save() # + num = 289 den = 2 while den < 100: if num % den == 0: print(den, num/den) break den += 1 # - #

    We add the 2021 names as the key

    # + names_2020 = dict_final['students2020']['Nombre'].to_list() names_2021 = dict_final['students2021']['Nombre'].to_list() df_2020 = dict_final['students2021'].copy(deep=True) df_2021 = dict_final['students2021'].copy(deep=True) df_classes = dict_final['classes'].copy(deep=True) # We create our new columns with None values for every row df_classes['name 2020'], df_classes['ratio 2020'], df_classes['email 2020'] = None, None, None df_classes['name 2021'], df_classes['ratio 2021'], df_classes['email 2021'] = None, None, None # A list called "cols" is created to order our columns in the future cols = [col for col in df_classes] # Through an iteration the best match-ratio gets chosen for i, row in enumerate(tqdm(range(df_classes.shape[0]))): name = df_classes.at[row, 'Nombre'] solution_2020, solution_2021 = None, None ratio_2020, ratio_2021 = 0, 0 for name_2020 in names_2020: if ratio_2020 < max(ratio_2020, similar(name, name_2020)): ratio_2020 = similar(name, name_2020) solution_2020 = name_2020 for name_2021 in names_2021: if ratio_2021 < max(ratio_2021, similar(name, name_2021)): ratio_2021 = similar(name, name_2021) solution_2021 = name_2021 # In this line we finally fill our rows with the corresponding information df_classes.at[row, 'name 2020'], df_classes.at[row, 'ratio 2020'] = solution_2020, ratio_2020 df_classes.at[row, 'name 2021'], df_classes.at[row, 'ratio 2021'] = solution_2021, ratio_2021 dict_aux = {solution_2020: ('email 2020', df_2020), solution_2021: ('email 2021', df_2021)} # Agregamos los correos for key, value in dict_aux.items(): try: df_classes.at[row, value[0]] = value[1][value[1]['Nombre'] == key]['Mail'].values[0].strip() except: pass # This line orders our dataframe columns as wanted df_classes = df_classes[[cols[0]] + cols[-6:] + cols[1:-6]] # Finally, we update our dictionary, generate a new Excel file, and display our edited dataframe dict_final = {'students2020': df_students01, 'students2021': df_students02, 'classes': df_classes} writer = pd.ExcelWriter('esperemos_que_funcione.xlsx', engine='xlsxwriter') for df_name, df in dict_final.items(): df.to_excel(writer, sheet_name=df_name) writer.save() display(df_classes) # - for key in dict_final.keys(): print(key) display(dict_final['students2021']) #

    Preparing the classes sheet

    # + df_test = dict_final['classes'].copy(deep=True) df_2021 = dict_final['students2021'].copy(deep=True) cols = [col for col in df_test] df_test['Correos'] = None for row in range(df_test.shape[0]): current_row = df_test.loc[row] name = current_row['name 2021'] # En las siguientes líneas corregimos el error en la fecha del workshop if (current_row['Grupo'] == 'Workshop') and (current_row['Días'] == 'Sa'): df_test.at[row, 'Inicio'] = current_row['Inicio'].replace(year=2021) df_test.at[row, 'Termino'] = current_row['Termino'].replace(year=2021) # Agregamos los correos try: df_test.at[row, 'Correos'] = df_2021[df_2021['Nombre'] == name]['Mail'].values[0].strip() except: pass # Utilizando las columnas dejamos la llave al comienzo df_test = df_test[[cols[-1]] + ['Correos'] + cols[1:-1] + [cols[0]]] # Eliminamos los grupos que no iniciaron en 2021, ordenamos por fecha y se borran duplicados df_test = df_test[(df_test['Inicio'].dt.year == 2021) & (df_test['Grupo'] != 'Workshop')] df_test = df_test.sort_values(by='Inicio', ascending=False) #df_test = df_test.drop_duplicates(subset='name 2021', keep='last') df_test.to_excel('mailing_with_duplicates.xlsx') display(df_test) # - #

    Working code up to this point

    #

    Comparisons as dictionaries

    # + df_students01 = df_main['alumnas2020'].copy(deep=True) df_students02 = df_main['alumnas2021'].copy(deep=True) def similar(a, b): return SequenceMatcher(None, a, b).ratio() def gen_similar(title01, list01, title02, list02, title_ratio): dict_similars = {title01: [], title_ratio: [], title02: []} for word in list01: ratio = 0 for spiegel in list02: if ratio < max(ratio, similar(word, spiegel)): ratio = similar(word, spiegel) solution = {title01: word, title_ratio: ratio, title02: spiegel} for key in dict_similars.keys(): dict_similars[key].append(solution[key]) return(dict_similars) list01 = df_students01['Nombre'].to_list() list02 = df_students02['Nombre'].to_list() dict_similars = gen_similar('2020 names', list01, '2021 names', list02, '2020 to 2021 names') # + df_classes = df_main['clases'].copy(deep=True) def gen_similar(list01, list02): list_similars = [] for word in list01: ratio = 0 for spiegel in list02: if ratio < max(ratio, similar(word, spiegel)): ratio = similar(word, spiegel) solution = (word, ratio, spiegel) list_similars.append(solution) return(list_similars) list01 = df_students02['Nombre'].to_list() list02 = df_classes['Nombre'].to_list() list_similars = gen_similar(list01, list02) print(df_classes['Nombre'][0], list_similars) for row in range(df_classes.shape[0]): name_2021 = df_classes.at[row, 'Nombre'] for i, col in enumerate(['Nombre 2021', 'Name match %']): value = [entry[i] for entry in list_similars if entry[-1] == name_2021] if len(value) != 0: df_classes.at[row, col] = value[0] display(df_classes) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Implementing a CGAN for the Iris data set to generate synthetic data # ### Import necessary modules and packages import os while os.path.basename(os.getcwd()) != 'Synthetic_Data_GAN_Capstone': os.chdir('..') from utils.utils import * safe_mkdir('experiments') from utils.data_loading import load_raw_dataset import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from models.VGAN import VGAN_Generator, VGAN_Discriminator from models.CGAN_iris import CGAN_Generator, CGAN_Discriminator import random # ### Set random seed for reproducibility manualSeed = 999 print("Random Seed: ", manualSeed) random.seed(manualSeed) torch.manual_seed(manualSeed) # ### Import and briefly inspect data iris = load_raw_dataset('iris') iris.head() # ### Preprocessing data # Split 50-50 so we can demonstrate the effectiveness of additional data x_train, x_test, y_train, y_test = train_test_split(iris.drop(columns='species'), iris.species, test_size=0.5, stratify=iris.species, random_state=manualSeed) print("x_train:", x_train.shape) print("x_test:", x_test.shape) # ### Model parameters (feel free to play with these) nz = 32 # Size of generator noise input H = 16 # Size of hidden network layer out_dim = x_train.shape[1] # Size of output bs = x_train.shape[0] # Full data set nc = 3 # 3 different types of label in this problem num_batches = 1 num_epochs = 10000 exp_name = 'experiments/iris_1x16' safe_mkdir(exp_name) # ### Adam optimizer hyperparameters # I set these based on the original paper, but feel free to play with them as well. 
lr = 2e-4 beta1 = 0.5 beta2 = 0.999 # ### Set the device device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu") # ### Scale continuous inputs for neural networks scaler = StandardScaler() x_train = scaler.fit_transform(x_train) x_train_tensor = torch.tensor(x_train, dtype=torch.float) y_train_dummies = pd.get_dummies(y_train) y_train_dummies_tensor = torch.tensor(y_train_dummies.values, dtype=torch.float) # ### Instantiate nets netG = CGAN_Generator(nz=nz, H=H, out_dim=out_dim, nc=nc, bs=bs, lr=lr, beta1=beta1, beta2=beta2).to(device) netD = CGAN_Discriminator(H=H, out_dim=out_dim, nc=nc, lr=lr, beta1=beta1, beta2=beta2).to(device) # ### Print models # I chose to avoid using sequential mode in case I wanted to create non-sequential networks, it is more flexible in my opinion, but does not print out as nicely print(netG) print(netD) # ### Define labels real_label = 1 fake_label = 0 # ### Training Loop # Look through the comments to better understand the steps that are taking place print("Starting Training Loop...") for epoch in range(num_epochs): for i in range(num_batches): # Only one batch per epoch since our data is horrifically small # Update Discriminator # All real batch first real_data = x_train_tensor.to(device) # Format batch (entire data set in this case) real_classes = y_train_dummies_tensor.to(device) label = torch.full((bs,), real_label, device=device) # All real labels output = netD(real_data, real_classes).view(-1) # Forward pass with real data through Discriminator netD.train_one_step_real(output, label) # All fake batch next noise = torch.randn(bs, nz, device=device) # Generate batch of latent vectors fake = netG(noise, real_classes) # Fake image batch with netG label.fill_(fake_label) output = netD(fake.detach(), real_classes).view(-1) netD.train_one_step_fake(output, label) netD.combine_and_update_opt() netD.update_history() # Update Generator label.fill_(real_label) # Reverse labels, fakes are real for generator cost output = netD(fake, real_classes).view(-1) # Since D has been updated, perform another forward pass of all-fakes through D netG.train_one_step(output, label) netG.update_history() # Output training stats if epoch % 1000 == 0 or (epoch == num_epochs-1): print('[%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch+1, num_epochs, netD.loss.item(), netG.loss.item(), netD.D_x, netD.D_G_z1, netG.D_G_z2)) with torch.no_grad(): fake = netG(netG.fixed_noise, real_classes).detach().cpu() netG.fixed_noise_outputs.append(scaler.inverse_transform(fake)) print("Training Complete") # ### Output diagnostic plots tracking training progress and statistics # %matplotlib inline training_plots(netD=netD, netG=netG, num_epochs=num_epochs, save=exp_name) plot_layer_scatters(netG, title="Generator", save=exp_name) plot_layer_scatters(netD, title="Discriminator", save=exp_name) # It looks like training stabilized fairly quickly, after only a few thousand iterations. The fact that the weight norm increased over time probably means that this network would benefit from some regularization. # ### Compare performance of training on fake data versus real data # In this next section, we will lightly tune two models via cross-validation. The first model will be trained on the 75 real training data examples and tested on the remaining 75 testing data examples, whereas the second set of models will be trained on different amounts of generated data (no real data involved whatsoever). 
We will then compare performance and plot some graphs to evaluate our CGAN. y_test_dummies = pd.get_dummies(y_test) print("Dummy columns match?", all(y_train_dummies.columns == y_test_dummies.columns)) x_test = scaler.transform(x_test) labels_list = [x for x in y_train_dummies.columns] param_grid = {'tol': [1e-9, 1e-8, 1e-7, 1e-6, 1e-5], 'C': [0.5, 0.75, 1, 1.25], 'l1_ratio': [0, 0.25, 0.5, 0.75, 1]} # ### Train on real data model_real, score_real = train_test_logistic_reg(x_train, y_train, x_test, y_test, param_grid=param_grid, cv=5, random_state=manualSeed, labels=labels_list) # ### Train on various levels of fake data test_range = [75, 150, 300, 600, 1200] fake_bs = bs fake_models = [] fake_scores = [] for size in test_range: num_batches = size // fake_bs + 1 genned_data = np.empty((0, out_dim)) genned_labels = np.empty(0) rem = size while rem > 0: curr_size = min(fake_bs, rem) noise = torch.randn(curr_size, nz, device=device) fake_labels, output_labels = gen_labels(size=curr_size, num_classes=nc, labels_list=labels_list) fake_labels = fake_labels.to(device) rem -= curr_size fake_data = netG(noise, fake_labels).cpu().detach().numpy() genned_data = np.concatenate((genned_data, fake_data)) genned_labels = np.concatenate((genned_labels, output_labels)) print("For size of:", size) model_fake_tmp, score_fake_tmp = train_test_logistic_reg(genned_data, genned_labels, x_test, y_test, param_grid=param_grid, cv=5, random_state=manualSeed, labels=labels_list) fake_models.append(model_fake_tmp) fake_scores.append(score_fake_tmp) # Well, it looks like this experiment was a success. The models trained on fake data were actually able to outperform models trained on real data, which supports the belief that the CGAN is able to understand the distribution of the data it was trained on and generate meaningful examples that can be used to add additional information to the model. # Let's visualize some of the distributions of outputs to get a better idea of what took place iris_plot_scatters(genned_data, genned_labels, "Fake Data", scaler, alpha=0.5, save=exp_name) # Fake data iris_plot_scatters(iris.drop(columns='species'), np.array(iris.species), "Full Real Data Set", alpha=0.5, save=exp_name) # All real data iris_plot_densities(genned_data, genned_labels, "Fake Data", scaler, save=exp_name) # Fake data iris_plot_densities(iris.drop(columns='species'), np.array(iris.species), "Full Real Data Set", save=exp_name) # All real data plot_scatter_matrix(genned_data, "Fake Data", iris.drop(columns='species'), scaler=scaler, save=exp_name) plot_scatter_matrix(iris.drop(columns='species'), "Real Data", iris.drop(columns='species'), scaler=None, save=exp_name) # Finally, I present a summary of the test results ran above fake_data_training_plots(real_range=bs, score_real=score_real, test_range=test_range, fake_scores=fake_scores, save=exp_name) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #




# # uproot: array-based ROOT I/O




    # + # Uproot presents files and subdirectories with the same interface as Python dicts. import uproot file = uproot.open("http://scikit-hep.org/uproot/examples/nesteddirs.root") file.keys() # Exercise: how many objects named "tree" does this file contain? # # (https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory) # + # TTrees are also presented like Python dicts. events = uproot.open("data/Zmumu.root")["events"] events.keys() # + # And there are functions like array (singular) and arrays (plural) to get one or more arrays. print(events.array("E1"), end="\n\n\n\n") print(events.arrays(["E1", "px1", "py1", "pz1"])) # + # Some tricks for arrays (plural): (1) specify an encoding to get strings, rather than raw bytes. # # Note: Python 3 bytestrings start with `b`, strings do not. events.arrays(["E1", "px1", "py1", "pz1"], namedecode="utf-8") # + # Some tricks for arrays (plural): (2) specify an output type to wrap them in a different container: events.arrays(["E1", "px1", "py1", "pz1"], outputtype=tuple) # + # Some tricks for arrays (plural): (2) specify an output type to wrap them in a different container: # (see also tree.pandas.df) import pandas events.arrays(["E1", "px1", "py1", "pz1"], outputtype=pandas.DataFrame) # + # Some tricks for arrays (plural): (3a) use filename wildcards (*, ?, [...]) to select by name-pattern. events.arrays(["E*", "p[xyz]1"]) # + # Some tricks for arrays (plural): (3b) use slashes inside the quotes to use full regular expressions. events.arrays(["/E.*/", "/p[x-z]1/"]) # - #
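# One possible way to attack the earlier exercise about counting objects named "tree" (a sketch,
# assuming uproot 3's allkeys() method, which lists keys recursively as bytestring names that may
# carry a directory prefix and a ";cycle" suffix):

# +
count = sum(1 for key in file.allkeys()
            if key.split(b";")[0].split(b"/")[-1] == b"tree")
count
# -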
# Three ways to get data:
#
# | Mode      | Description                          |
# |-----------|--------------------------------------|
# | Direct    | Read the file and return an array.   |
# | Lazy      | Get an object that reads on demand.  |
# | Iterative | Read arrays in batches of entries.   |
#
# *Lazy arrays or iteration over sets of files.

    # + # Direct: events.array("E1") # + # Lazy: array = events.lazyarray("E1", entrysteps=500) # print([len(x) for x in array.chunks]) array # + # Iterative: for chunk in events.iterate("E1", entrysteps=500): print(len(chunk[b"E1"]), chunk[b"E1"][:5]) # - #

# Advantages and disadvantages of each:
#
# | Mode      | Strength                                                  |
# |-----------|-----------------------------------------------------------|
# | Direct    | Simple; returns pure Numpy arrays if possible.            |
# | Lazy      | Transparently work on data too large to fit into memory.  |
# | Iterative | Control the loading of data into and out of memory.       |
    # + # Exercise: compute numpy.sqrt(E1**2 - px1**2 - py1**2 - pz1**2) in all three modes. # # (https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html) import numpy arrays = events.arrays(["E1", "p[xyz]1"], namedecode="utf-8") E1, px1, py1, pz1 = arrays["E1"], arrays["px1"], arrays["py1"], arrays["pz1"] # This is direct. What about lazy and iterative? result = numpy.sqrt(E1**2 - px1**2 - py1**2 - pz1**2) result # + # Controlling the chunk size: print("Lazy or iteration steps as a fixed number of entries:") for arrays in events.iterate(entrysteps=500): print(len(arrays[b"E1"])) print("\nLazy or iteration steps as a fixed memory footprint:") for arrays in events.iterate(entrysteps="100 kB"): print(len(arrays[b"E1"])) # - #
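# For the earlier exercise (the same invariant-mass expression in all three modes), here is one
# possible sketch of the lazy and iterative versions, reusing the iterate()/lazyarrays() calls
# already shown in this notebook:

# +
# Iterative: accumulate per-chunk results, then concatenate at the end.
chunks = []
for arrays in events.iterate(["E1", "p[xyz]1"], entrysteps=500):
    chunks.append(numpy.sqrt(arrays[b"E1"]**2 - arrays[b"px1"]**2
                             - arrays[b"py1"]**2 - arrays[b"pz1"]**2))
result_iterative = numpy.concatenate(chunks)

# Lazy: the expression is evaluated chunk by chunk, only when the data are actually needed.
lazy = events.lazyarrays(["E1", "p[xyz]1"])
result_lazy = numpy.sqrt(lazy["E1"]**2 - lazy["px1"]**2 - lazy["py1"]**2 - lazy["pz1"]**2)

result_iterative, result_lazy
# -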




# ## Application: feeding data into PyTorch




    # + # Let's train a neural network! First, though, we need to set up a problem. # 2-dimensional for simplicity (easier to visualize and we don't have much data). E1, E2, px1, py1, pz1, px2, py2, pz2, Q1, Q2 = \ events.arrays(["E[12]", "p[xyz][12]", "Q[12]"], outputtype=tuple) E = (E1 + E2) p = numpy.sqrt((px1 + px2)**2 + (py1 + py2)**2 + (pz1 + pz2)**2) # Need to predict opposite sign (deepskyblue) vs same sign (orange) using E and p. # %matplotlib inline import matplotlib.pyplot matplotlib.pyplot.scatter(E[Q1 != Q2], p[Q1 != Q2], c="deepskyblue", edgecolor="black"); matplotlib.pyplot.scatter(E[Q1 == Q2], p[Q1 == Q2], c="orange", edgecolor="black"); matplotlib.pyplot.xlim(0, 200); matplotlib.pyplot.ylim(0, 200); # + import torch # transform inputs to fit PyTorch's expected shape and type X = torch.from_numpy(numpy.dstack([E, p])[0].astype(numpy.float32)) y = torch.from_numpy((Q1 != Q2).astype(numpy.float32).reshape(-1, 1)) neural_network = torch.nn.Sequential( # the neural network topology: torch.nn.Linear(2, 5), # input → hidden: 2 dimensions → 5 dimensions torch.nn.Sigmoid(), # non-linearity applied to each of the 5 components torch.nn.Linear(5, 1)) # hidden → output: 5 dimensions → 1 dimension loss_fn = torch.nn.MSELoss(reduction="sum") optimizer = torch.optim.Adam(neural_network.parameters(), lr=0.001) for i in range(1000): # iterate 1000 times to minimize loss: y_pred - y y_pred = neural_network(X) # neural_network is a function: X ↦ y loss = loss_fn(y_pred, y) optimizer.zero_grad() loss.backward() optimizer.step() # - grid_of_points = numpy.dstack(numpy.mgrid[0:220:20, 0:220:20].astype(numpy.float32)) Z = neural_network(torch.from_numpy(grid_of_points)).detach().numpy().reshape(grid_of_points.shape[:2]) matplotlib.pyplot.contourf(grid_of_points[:, :, 0], grid_of_points[:, :, 1], Z); matplotlib.pyplot.scatter(E[Q1 != Q2], p[Q1 != Q2], c="deepskyblue", edgecolor="black"); matplotlib.pyplot.scatter(E[Q1 == Q2], p[Q1 == Q2], c="orange", edgecolor="black"); matplotlib.pyplot.xlim(0, 200); matplotlib.pyplot.ylim(0, 200); # + # Now let's pretend the sample is so large, we can't load it into memory. # What has changed? What's the same? optimizer = torch.optim.Adam(neural_network.parameters(), lr=0.0001) # learn slower: fewer data points... for E1, E2, px1, py1, pz1, px2, py2, pz2, Q1, Q2 in \ events.iterate(["E[12]", "p[xyz][12]", "Q[12]"], outputtype=tuple, entrysteps=500): E = (E1 + E2) p = numpy.sqrt((px1 + px2)**2 + (py1 + py2)**2 + (pz1 + pz2)**2) X = torch.from_numpy(numpy.dstack([E, p])[0].astype(numpy.float32)) y = torch.from_numpy((Q1 != Q2).astype(numpy.float32).reshape(-1, 1)) for i in range(1000): y_pred = neural_network(X) loss = loss_fn(y_pred, y) optimizer.zero_grad() loss.backward() optimizer.step() # + # Now do it from a sample of jagged arrays (variable number of values per event). events2 = uproot.open("data/HZZ.root")["events"] E_all, px_all, py_all, pz_all, q_all = \ events2.arrays(["Muon_E", "Muon_P[xyz]", "Muon_Charge"], outputtype=tuple) E_all, px_all, py_all, pz_all, q_all # + # Getting two muons from events that have them: print(E_all.counts) E_all[E_all.counts >= 2, 0], E_all[E_all.counts >= 2, 1] # + # Or all pairs, without double-counting: left, right = E_all.argchoose(2).i0, E_all.argchoose(2).i1 E_all[left].flatten(), E_all[right].flatten() # + # Exercise: set up and run the neural network with muons from the jagged sample. 
X = FIXME y = FIXME neural_network = torch.nn.Sequential(torch.nn.Linear(2, 5), torch.nn.Sigmoid(), torch.nn.Linear(5, 1)) loss_fn = torch.nn.MSELoss(reduction="sum") optimizer = torch.optim.Adam(neural_network.parameters(), lr=0.001) for i in range(1000): y_pred = neural_network(X) loss = loss_fn(y_pred, y) optimizer.zero_grad() loss.backward() optimizer.step() # - grid_of_points = numpy.dstack(numpy.mgrid[0:520:20, 0:520:20].astype(numpy.float32)) Z = neural_network(torch.from_numpy(grid_of_points)).detach().numpy().reshape(grid_of_points.shape[:2]) matplotlib.pyplot.contourf(grid_of_points[:, :, 0], grid_of_points[:, :, 1], Z); matplotlib.pyplot.scatter(E[Q1 != Q2], p[Q1 != Q2], c="deepskyblue", edgecolor="black"); matplotlib.pyplot.scatter(E[Q1 == Q2], p[Q1 == Q2], c="orange", edgecolor="black"); matplotlib.pyplot.xlim(0, 500); matplotlib.pyplot.ylim(0, 500); #
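# One possible way to fill in the FIXMEs above (a sketch, not the only valid choice): take the
# first two muons of each event that has at least two, mirroring the pair selection shown a few
# cells earlier, and label each pair by whether the two charges differ.

# +
has_two = E_all.counts >= 2                      # events with at least two muons

# Kinematics of the leading pair (flat Numpy arrays, one entry per selected event).
E_pair = E_all[has_two, 0] + E_all[has_two, 1]
px_pair = px_all[has_two, 0] + px_all[has_two, 1]
py_pair = py_all[has_two, 0] + py_all[has_two, 1]
pz_pair = pz_all[has_two, 0] + pz_all[has_two, 1]
p_pair = numpy.sqrt(px_pair**2 + py_pair**2 + pz_pair**2)

opposite_charge = q_all[has_two, 0] != q_all[has_two, 1]

X = torch.from_numpy(numpy.dstack([E_pair, p_pair])[0].astype(numpy.float32))
y = torch.from_numpy(opposite_charge.astype(numpy.float32).reshape(-1, 1))
# With X and y defined this way, the training loop in the exercise cell can be re-run as-is.
# -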




# ## Caching




    # + # Direct array-reading is simple: every time you ask for an array, it reads from the file (raw bytes may be cached). # # You could avoid duplicate reading/decompressing/formatting by keeping a reference to previously read arrays. # read arrays = events.arrays(["E1", "p[xyz]1"], namedecode="utf-8") # use numpy.sqrt(arrays["px1"]**2 + arrays["py1"]**2) # use again numpy.sqrt(arrays["E1"]**2 - arrays["px1"]**2 - arrays["py1"]**2 - arrays["pz1"]**2) # + # But that would force you to re-arrange your analysis script to satisfy hardware constraints. # # Instead, use uproot's caching mechanism: any dict-like object can be used as a cache. mycache = {} # read and use E1, px1, py1, pz1 = events.arrays(["E1", "p[xyz]1"], outputtype=tuple, cache=mycache) numpy.sqrt(px1**2 + py1**2) # get from cache and use again E1, px1, py1, pz1 = events.arrays(["E1", "p[xyz]1"], outputtype=tuple, cache=mycache) numpy.sqrt(E1**2 - px1**2 - py1**2 - pz1**2) # + # The data are now in the mycache dict, which you can clear whenever you need to. # mycache.clear() mycache # - #

# The point is that naive, exploratory code can become production-ready code in small steps.
#
# Development sequence:

    # # 1. Read directly from the file in early exploration because it requires the least effort. # 2. Insert `mycache = {}` and `cache=mycache` to avoid costly re-reading when you start looking at more data. # 3. Use an `uproot.ArrayCache` to specify an upper limit when you start running out of memory. # 4. Maybe use an external cache like [memcached](https://realpython.com/python-memcache-efficient-caching/) to quickly recover from crashing 90% through your script, to more easily debug that last 10%. # 5. Maybe use a [diskcache](http://www.grantjenks.com/docs/diskcache/tutorial.html) to split data between a small, fast disk and a large, slow disk... # # Most cache libraries in the Python ecosystem use this dict of string → objects interface. # #
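# To make the "dict of string → objects interface" above concrete, here is a minimal sketch of what
# such a cache object looks like (a hypothetical class, not part of uproot; whether a given consumer
# uses item access or .get() is up to that consumer):

# +
class DictLikeCache:
    # Minimal dict-style interface: string keys in, Python objects out.
    def __init__(self):
        self._store = {}

    def __getitem__(self, key):
        return self._store[key]          # raises KeyError on a cache miss

    def __setitem__(self, key, value):
        self._store[key] = value

    def __delitem__(self, key):
        del self._store[key]

    def __contains__(self, key):
        return key in self._store

    def get(self, key, default=None):
        return self._store.get(key, default)

    def keys(self):
        return self._store.keys()
# -

# It could be passed as cache=DictLikeCache() in the calls above, exactly like the plain dict.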

    # + # Example of limited memory: mycache = uproot.ArrayCache("100 kB") # integer number of bytes or string with units events.arrays(cache=mycache) # arrays in cache arrays in file len(mycache), len(events.keys()) # + # With large lazy arrays, you SHOULD use a cache with an upper limit. # (Otherwise, they'll hang onto data forever.) data = uproot.lazyarray( # a bunch or files "data/sample-*-zlib.root", # TTree name in each file "sample", # branch(s) in each file for lazyarray(s) "Af8", # use this cache with an upper limit instead of reading exactly once cache=uproot.ArrayCache("5 kB")) data # + # Compute something to do a pass over all chunks... # data + 100 [chunk.ismaterialized for chunk in data.chunks] # + # With iteration, you SHOULD NOT use a cache because the iteration manages memory explicitly. # You wouldn't want to keep the previous iteration in memory when you're done with it, right? for chunk in uproot.iterate("data/sample-*-zlib.root", "sample", "Af8", entrysteps="0.5 kB"): print(len(chunk[b"Af8"]), chunk[b"Af8"][:5]) # - #




# ## Objects in arrays




    # + # Most of the complexity of ROOT data is simplified by the fact that custom classes are "split" into branches. # uproot sees the branches. tree = uproot.open("http://scikit-hep.org/uproot/examples/Event.root")["T"] tree.show() # branch name streamer type, if any uproot's interpretation # + # In this view, class attributes are _not_ special types and can be read as arrays of numbers. tree.array("fTemperature", entrystop=20) # + # Fixed-width matrices map onto multidimensional arrays, tree.array("fMatrix[4][4]", entrystop=6) # + # branches with multiple leaves ("leaf-list") map onto Numpy record arrays, uproot.open("http://scikit-hep.org/uproot/examples/leaflist.root")["tree"].array("leaflist") # + # and anything in variable-length lists become a JaggedArrays, tree.array("fTracks.fMass2", entrystop=6) # + # even if they're fixed-width within jagged or whatever. tree.array("fTracks.fTArray[3]", entrystop=6) # + # There are some types that ROOT does not split because they are too complex. # For example, *histograms* inside a TTree: tree.array("fH", entrystop=6) # + # Uproot can read objects like this because ROOT describes their layout in "streamers." # Uproot reads the (most common types of) streamers and generates Python classes. # Some of these Python classes have specialized, high-level methods. for histogram in tree.array("fH", entrystop=3): print(histogram.title) print(histogram.values) print("\n...\n") for histogram in tree.array("fH", entrystart=-3): print(histogram.title) print(histogram.values) # + # One type, TLorentzVector, has MANY high-level methods. events3 = uproot.open("data/HZZ-objects.root")["events"] muons = events3.array("muonp4") muons # + # A single TLorentzVector has the methods you'd expect (kinematics, delta_phi, delta_r), muon1, muon2 = muons[0] muon1, muon1.pt, muon1.delta_phi(muon2), (muon1 + muon2).mass # + # but they also apply vectorially (like Numpy functions): muons1 = muons[muons.counts >= 2, 0] # first muon in each event (that has two) muons2 = muons[muons.counts >= 2, 1] # second muon in each event (that has two) muons1, muons1.pt, muons1.delta_phi(muons2), (muons1 + muons2).mass # + # Doubly nested lists like std::vector> aren't handled transparently # because they're stored differently. # # Note: it's an ObjectArray, an array containing list objects. doubly_nested = uproot.open( "http://scikit-hep.org/uproot/examples/vectorVectorDouble.root")["t"]["x"].array() doubly_nested # + # The multidimensional slicing doesn't apply as it does for JaggedArrays. try: doubly_nested[doubly_nested.counts > 0, 0] except Exception as err: print(type(err), err) # + # If you need to work with this type, explicitly convert it into a JaggedArray of JaggedArrays. import awkward jagged = awkward.fromiter(doubly_nested) jagged # + # Now you can do all the things. print(f"jagged[jagged.counts > 0, 0]: {jagged[jagged.counts > 0, 0]}") print(f"jagged.flatten(): {jagged.flatten()}") print(f"jagged.flatten().flatten(): {jagged.flatten().flatten()}") print(f"jagged.sum(): {jagged.sum()}") print(f"jagged.sum().sum(): {jagged.sum().sum()}") # - #




# ## Reading and writing histograms




    # + # As we've seen, histograms have some convenience methods. # They're mostly for conversion to other formats, like Numpy. # # Numpy "histograms" are a 2-tuple of counts and edges. uproot.open("http://scikit-hep.org/uproot/examples/hepdata-example.root")["hpx"].numpy() # + # Similarly for 2-dimensional histograms. uproot.open("http://scikit-hep.org/uproot/examples/hepdata-example.root")["hpxpy"].numpy() # + # It can also be useful to turn histograms into Pandas DataFrames (note the IntervalIndex). uproot.open("http://scikit-hep.org/uproot/examples/Event.root")["htime"].pandas() # + # Or HEPData's YAML format. As Python objects, it's just a little work to make different formats. print(uproot.open("http://scikit-hep.org/uproot/examples/Event.root")["htime"].hepdata()) # + # At the moment, only two kinds of objects can be *written* to ROOT files: TObjString and histograms. # # To write, open a file for writing (create/recreate/update) and assign to it like a dict: file = uproot.recreate("tmp.root", compression=uproot.ZLIB(4)) file["name"] = "Some object, like a TObjString." # + import ROOT pyroot_file = ROOT.TFile("tmp.root") pyroot_file.Get("name") # + # During assignment, uproot recognizes some Pythonic types, such as Numpy histograms. file["from_numpy"] = numpy.histogram(numpy.random.normal(0, 1, 10000)) # + pyroot_file = ROOT.TFile("tmp.root") # refresh the PyROOT file pyroot_hist = pyroot_file.Get("from_numpy") canvas = ROOT.TCanvas("canvas", "", 400, 300) pyroot_hist.Draw("hist") canvas.Draw() # + # 2-dimensional Numpy histograms. file["from_numpy2d"] = numpy.histogram2d(numpy.random.normal(0, 1, 10000), numpy.random.normal(0, 1, 10000)) # + pyroot_file = ROOT.TFile("tmp.root") # refresh the PyROOT file pyroot_hist = pyroot_file.Get("from_numpy2d") pyroot_hist.Draw() canvas.Draw() # - #




    # # ## Writing array data # #




# +
# Obviously, it would be nice to write array data to ROOT files as new TTrees.
# This is an IRIS-HEP summer fellowship project ().
#
# Meanwhile, there are many nice options for saving and sharing Numpy arrays.

# Numpy's NPZ files (really just ZIP)
numpy.savez_compressed("one_array.npz", events.array("E1"))
numpy.savez_compressed("many_arrays.npz", **events.arrays(namedecode="utf-8"))

# HDF5
import h5py
with h5py.File("tmp.hdf5", "w") as file:
    file["E1"] = events.array("E1")

# +
# If the arrays are jagged, lazy, or contain objects (i.e. not pure Numpy), use awkward-array.

# awkward's AWKD files (really just ZIP)
awkward.save("one_array.awkd", events2.array("Muon_Px"), mode="w")
awkward.save("many_arrays.awkd", events2.arrays("Muon_*", namedecode="utf-8"), mode="w")

# HDF5
with h5py.File("tmp.hdf5", "w") as file:
    awkward_file = awkward.hdf5(file)
    awkward_file["Muon_Px"] = events2.array("Muon_Px")

# +
# There are also industry-standard formats for jagged arrays.
awkward.toparquet("one_array.parquet", events2.array("Muon_Px"))
awkward.toparquet("many_arrays.parquet", events2.lazyarrays())

# +
# You can also do some crazy things like "saving the laziness" of lazy arrays to make lightweight skims:

# Read as lazy arrays with "persist virtual",
data = events.lazyarrays(["E*", "p[xyz]*"], persistvirtual=True)

# add a derived feature to the lazy array,
data["mass"] = numpy.sqrt((data["E1"] + data["E2"])**2 -
                          (data["px1"] + data["px2"])**2 -
                          (data["py1"] + data["py2"])**2 -
                          (data["pz1"] + data["pz2"])**2)

# and save it.
awkward.save("derived-feature.awkd", data, mode="w")

# +
# When you read it back, the original features come from the ROOT files and the derived feature comes from the AWKD file:
data2 = awkward.load("derived-feature.awkd")

# Reads from derived-feature.awkd
print(data2["mass"])

# Reads from the original ROOT files (if you don't move them!)
print(data2["E1"])

# +
# Similarly, if you apply masks (cuts) to a lazy array, the cut is saved to AWKD but the original data are in ROOT:
selected = data[data["mass"] < 80]
print(selected)

awkward.save("selected-events.awkd", selected, mode="w")
print(awkward.load("selected-events.awkd"))
# -

#
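# For completeness, a minimal read-back sketch for the HDF5 route (an assumption: the awkward 0.x
# `hdf5` wrapper supports item access for reading as well as writing, which may depend on the version):

# +
with h5py.File("tmp.hdf5", "r") as file:
    awkward_file = awkward.hdf5(file)
    muon_px = awkward_file["Muon_Px"]   # read back the jagged column saved above

print(muon_px)
# -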




    # #

# That's all!

    # #




    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: feml # language: python # name: feml # --- import pandas as pd # + # let's create a toy dataframe with some date variables # first we create a series with the ranges rng_ = pd.date_range('2019-03-05', periods=20, freq='T') # now we convert the series in a dataframe df = pd.DataFrame({'date': rng_}) # output the first 5 rows df.head() # + # let's explore the variable type df.dtypes # + # let's extract the date part df['date_part'] = df['date'].dt.date df['date_part'].head() # + # let's extract the time part df['time_part'] = df['date'].dt.time df['time_part'].head() # + # let's create a toy dataframe where the datetime variable is cast # as object df = pd.DataFrame({'date_var':['Jan-2015', 'Apr-2013', 'Jun-2014', 'Jan-2015']}) df # + # let's explore the variable type df.dtypes # + # let's re-cast the variable as datetime df['datetime_var'] = pd.to_datetime(df['date_var']) df # + # let's extract date and time df['date'] = df['datetime_var'].dt.date df['time'] = df['datetime_var'].dt.time df # + # let's explore the variable types df.dtypes # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing library # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np # # Loading data df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/COUNT/titanic.csv', usecols=[1, 2, 3, 4]) df.head() # # Analysing data plt.pie([np.sum(df['survived'] == 'yes'), np.sum(df['survived'] == 'no')], colors=['g', 'r'], labels=['Survived', "Didn't Survive"]) plt.title('People who survived') plt.show() plt.pie([np.sum(df['sex'] == 'man'), np.sum(df['sex'] == 'women')], colors=['g', 'r'], labels=['Male', "Female"]) plt.title('Sex Distribution') plt.show() plt.pie([np.sum(df['age'] == 'adults'), np.sum(df['age'] == 'child')], colors=['g', 'r'], labels=['Age', "Child"]) plt.title('Age') plt.show() plt.pie([np.sum(df['class'] == '1st class'), np.sum(df['class'] == '2nd class'), np.sum(df['class'] == '3rd class')], labels=['1st class', '2nd class', '3rd class']) plt.title('Class') plt.show() plt.bar(np.arange(3), [np.mean(df[df['class'] == '1st class']['survived'] == 'yes'), np.mean(df[df['class'] == '2nd class']['survived'] == 'yes'), np.mean(df[df['class'] == '3rd class']['survived'] == 'yes')], color='g', tick_label=['1st class', '2nd class', '3rd class']) plt.bar(np.arange(3), height = [np.mean(df[df['class'] == '1st class']['survived'] == 'no'), np.mean(df[df['class'] == '2nd class']['survived'] == 'no'), np.mean(df[df['class'] == '3rd class']['survived'] == 'no')], bottom = [np.mean(df[df['class'] == '1st class']['survived'] == 'yes'), np.mean(df[df['class'] == '2nd class']['survived'] == 'yes'), np.mean(df[df['class'] == '3rd class']['survived'] == 'yes')], color='r', tick_label=['1st class', '2nd class', '3rd class']) plt.title('Percentage Survival per class') plt.ylabel('Percentage survived') plt.show() plt.bar(np.arange(2), [np.mean(df[df['age'] == 'adults']['survived'] == 'yes'), np.mean(df[df['age'] == 'child']['survived'] == 'yes')], color='g', tick_label=['Adult', 'Child']) plt.bar(np.arange(2), [np.mean(df[df['age'] == 'adults']['survived'] == 'no'), np.mean(df[df['age'] == 'child']['survived'] 
== 'no')], bottom = [np.mean(df[df['age'] == 'adults']['survived'] == 'yes'), np.mean(df[df['age'] == 'child']['survived'] == 'yes')], color='r', tick_label=['Adult', 'Child']) plt.title('Percentage Survival according to age') plt.ylabel('Percentage survived') plt.show() plt.bar(np.arange(2), [np.mean(df[df['sex'] == 'man']['survived'] == 'yes'), np.mean(df[df['sex'] == 'women']['survived'] == 'yes')], color='g', tick_label=['Male', 'Female']) plt.bar(np.arange(2), [np.mean(df[df['sex'] == 'man']['survived'] == 'no'), np.mean(df[df['sex'] == 'women']['survived'] == 'no')], bottom = [np.mean(df[df['sex'] == 'man']['survived'] == 'yes'), np.mean(df[df['sex'] == 'women']['survived'] == 'yes')], color='r', tick_label=['Male', 'Female']) plt.title('Percentage Survival according to sex') plt.ylabel('Percentage survived') plt.show() # # Preprocessing X = df.drop('survived', axis=1) y = df['survived'] # ## Converting categorical features into numeric forms # + from sklearn.preprocessing import LabelEncoder import numpy as np class_le = LabelEncoder() class_le = class_le.fit(np.unique(X['class'])) X['class'] = class_le.transform(X['class']) age_le = LabelEncoder() age_le = age_le.fit(np.unique(X['age'])) X['age'] = age_le.transform(X['age']) sex_le = LabelEncoder() sex_le = sex_le.fit(np.unique(X['sex'])) X['sex'] = sex_le.transform(X['sex']) survived_le = LabelEncoder() survived_le = survived_le.fit(np.unique(y)) y = survived_le.transform(y) # + from sklearn.preprocessing import OneHotEncoder import numpy as np enc = OneHotEncoder() enc.fit(X) X = enc.transform(X).toarray() # - # # Training and testing model from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # ## LogisticRegression from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf = clf.fit(X_train, y_train) print("Train error:", clf.score(X_train, y_train)) print("Test error:", clf.score(X_test, y_test)) # ## Decision Tree Classifier from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier() clf = clf.fit(X_train, y_train) print("Train error:", clf.score(X_train, y_train)) print("Test error:", clf.score(X_test, y_test)) # ## Naive Bayes from sklearn.naive_bayes import MultinomialNB clf = MultinomialNB() clf = clf.fit(X_train, y_train) print("Train error:", clf.score(X_train, y_train)) print("Test error:", clf.score(X_test, y_test)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.12 ('kaggle') # language: python # name: python3 # --- import os import pandas import gc import tqdm import numpy import time root = '../resource/csv/' class table: start = time.time() transaction = pandas.read_csv(os.path.join(root, "transactions_train.csv"), dtype={'article_id':str}) submission = pandas.read_csv(os.path.join(root, "sample_submission.csv")) article = pandas.read_csv(os.path.join(root, "articles.csv")) customer = pandas.read_csv(os.path.join(root, "customers.csv")) end = time.time() print('elapsed {} time'.format(end-start)) pass print('unique {} [customer_id] in transaction table'.format(table.transaction['customer_id'].nunique())) len(set.difference(set(table.submission['customer_id']), set(table.transaction['customer_id']))) len(set.difference(set(table.transaction['customer_id']), set(table.submission['customer_id']))) intersection = set.intersection(set(table.submission['customer_id']), 
set(table.transaction['customer_id'])) len(intersection) table.transaction.loc[[i in intersection for i in table.transaction['customer_id']]] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Recommendations tutorial # # In this tutorial we will learn and compare two basic recommendation algorithms: # 1. [FunkSvd](https://medium.com/datadriveninvestor/how-funk-singular-value-decomposition-algorithm-work-in-recommendation-engines-36f2fbf62cac) # 2. [Neural Collaborative Filtering](https://arxiv.org/abs/1708.05031) # # This is a minimal demo adapted from https://github.com/guoyang9/NCF # %matplotlib inline # + import time import os import requests import tqdm import numpy as np import pandas as pd import scipy.sparse as sp import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.data as td import torch.optim as to import matplotlib.pyplot as pl import seaborn as sns # + # Configuration # The directory to store the data data_dir = "data" train_rating = "ml-1m.train.rating" test_negative = "ml-1m.test.negative" # NCF config train_negative_samples = 4 test_negative_samples = 99 embedding_dim = 64 hidden_dim = 32 # Training config batch_size = 256 epochs = 10 # Original implementation uses 20 top_k=10 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # - # ### Download and preprocess the data # # Use Movielens 1M data from the NCF paper authors' implementation https://github.com/hexiangnan/neural_collaborative_filtering # + if not os.path.exists(data_dir): os.mkdir(data_dir) for file_name in [train_rating, test_negative]: file_path = os.path.join(data_dir, file_name) if os.path.exists(file_path): print("Skip loading " + file_name) continue with open(file_path, "wb") as tf: print("Load " + file_name) r = requests.get("https://raw.githubusercontent.com/hexiangnan/neural_collaborative_filtering/master/Data/" + file_name, allow_redirects=True) tf.write(r.content) # + def preprocess_train(): train_data = pd.read_csv(os.path.join(data_dir, train_rating), sep='\t', header=None, names=['user', 'item'], usecols=[0, 1], dtype={0: np.int32, 1: np.int32}) user_num = train_data['user'].max() + 1 item_num = train_data['item'].max() + 1 train_data = train_data.values.tolist() # Convert ratings as a dok matrix train_mat = sp.dok_matrix((user_num, item_num), dtype=np.float32) for user, item in train_data: train_mat[user, item] = 1.0 return train_data, train_mat, user_num, item_num train_data, train_mat, user_num, item_num = preprocess_train() # + def preprocess_test(): test_data = [] with open(os.path.join(data_dir, test_negative)) as tnf: for line in tnf: parts = line.split('\t') assert len(parts) == test_negative_samples + 1 user, positive = eval(parts[0]) test_data.append([user, positive]) for negative in parts[1:]: test_data.append([user, int(negative)]) return test_data test_data = preprocess_test() # - # ### Pytorch dataset class NCFDataset(td.Dataset): def __init__(self, positive_data, item_num, positive_mat, negative_samples=0): super(NCFDataset, self).__init__() self.positive_data = positive_data self.item_num = item_num self.positive_mat = positive_mat self.negative_samples = negative_samples self.reset() def reset(self): print("Resetting dataset") if self.negative_samples > 0: negative_data = self.sample_negatives() data = self.positive_data + negative_data labels = [1] * 
len(self.positive_data) + [0] * len(negative_data) else: data = self.positive_data labels = [0] * len(self.positive_data) self.data = np.concatenate([np.array(data), np.array(labels)[:, np.newaxis]], axis=1) def sample_negatives(self): negative_data = [] for user, positive in self.positive_data: for _ in range(self.negative_samples): negative = np.random.randint(self.item_num) while (user, negative) in self.positive_mat: negative = np.random.randint(self.item_num) negative_data.append([user, negative]) return negative_data def __len__(self): return len(self.data) def __getitem__(self, idx): user, item, label = self.data[idx] return user, item, label # ### Implement recommendation models in Pytorch # # Because this is what people do in 2020 class Ncf(nn.Module): def __init__(self, user_num, item_num, embedding_dim, hidden_dim): super(Ncf, self).__init__() self.user_embeddings = nn.Embedding(user_num, embedding_dim) self.item_embeddings = nn.Embedding(item_num, embedding_dim) self.layers = nn.Sequential( nn.Linear(2 * embedding_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1) ) self.initialize() def initialize(self): nn.init.normal_(self.user_embeddings.weight, std=0.01) nn.init.normal_(self.item_embeddings.weight, std=0.01) for layer in self.layers: if isinstance(layer, nn.Linear): nn.init.xavier_uniform_(layer.weight) layer.bias.data.zero_() def forward(self, user, item): user_embedding = self.user_embeddings(user) item_embedding = self.item_embeddings(item) concat = torch.cat((user_embedding, item_embedding), -1) return self.layers(concat).view(-1) def name(self): return "Ncf" class FunkSvd(nn.Module): def __init__(self, user_num, item_num, embedding_dim): super(FunkSvd, self).__init__() self.user_embeddings = nn.Embedding(user_num, embedding_dim) self.item_embeddings = nn.Embedding(item_num, embedding_dim) self.user_bias = nn.Embedding(user_num, 1) self.item_bias = nn.Embedding(item_num, 1) self.bias = torch.nn.Parameter(torch.tensor(0.0)) self.initialize() def initialize(self): nn.init.normal_(self.user_embeddings.weight, std=0.01) nn.init.normal_(self.item_embeddings.weight, std=0.01) nn.init.normal_(self.user_bias.weight, std=0.01) nn.init.normal_(self.item_bias.weight, std=0.01) def forward(self, user, item): user_embedding = self.user_embeddings(user) user_bias = self.user_bias(user).view(-1) item_embedding = self.item_embeddings(item) item_bias = self.item_bias(item).view(-1) dot = (user_embedding * item_embedding).sum(1) return dot + user_bias + item_bias + self.bias def name(self): return "FunkSvd" # ### Metrics # # - mean hit rate @K # - mean DCG @K # # Test data is organized as a sequence `user -> [positive_item, negative_item_1, ..., negative_item_99]`. Each batch in the test loader contains the data for a single user in the same order. 
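# A tiny worked example of the two metrics (a sketch with made-up item ids, independent of the helper
# functions defined below): if the held-out positive item appears at 0-based position 2 of the top-k
# list, the hit contribution is 1 and the DCG contribution is 1/log2(2 + 2) = 0.5.

# +
import numpy as np

recommended = [42, 17, 7, 99]   # hypothetical top-k recommendations
actual = 7                      # the held-out positive item

hit = int(actual in recommended)                               # -> 1
dcg = np.reciprocal(np.log2(recommended.index(actual) + 2.0))  # -> 1 / log2(4) = 0.5
print(hit, dcg)
# -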
# + def hit_metric(actual, recommended): return int(actual in recommended) def dcg_metric(actual, recommended): if actual in recommended: index = recommended.index(actual) return np.reciprocal(np.log2(index + 2)) return 0 def metrics(model, test_loader, top_k): hits, dcgs = [], [] for user, item, label in test_loader: item = item.to(device) predictions = model(user.to(device), item) _, indices = torch.topk(predictions, top_k) recommended = torch.take(item, indices).cpu().numpy().tolist() item = item[0].item() hits.append(hit_metric(item, recommended)) dcgs.append(dcg_metric(item, recommended)) return np.mean(hits), np.mean(dcgs) # - # ### Basic training loop # # Notes # - resample new negatives at each epoch # - no early stopping, checkpointing, LR decay etc.; this is a demo, remember? def train(model): train_dataset = NCFDataset(train_data, item_num, train_mat, train_negative_samples) train_loader = td.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4) test_dataset = NCFDataset(test_data, item_num, train_mat) test_loader = td.DataLoader(test_dataset, batch_size=test_negative_samples+1, shuffle=False, num_workers=0) loss_function = nn.BCEWithLogitsLoss() optimizer = to.Adam(model.parameters()) history = [] for epoch in range(epochs): model.train() train_loader.dataset.reset() start_time = time.time() for user, item, label in tqdm.tqdm(train_loader): model.zero_grad() prediction = model(user.to(device), item.to(device)) loss = loss_function(prediction, label.to(device).float()) loss.backward() optimizer.step() model.eval() hr, dcg = metrics(model, test_loader, top_k) elapsed = time.time() - start_time history.append({"model": model.name(), "epoch": epoch, "hit_rate": hr, "dcg": dcg, "elapsed": elapsed}) print("[{model}] epoch: {epoch}, hit rate: {hit_rate}, dcg: {dcg}".format(**history[-1])) return history # ### Experiment # # It takes a couple of minutes per epoch on GTX 1080 # + print("# Train NCF") ncf = Ncf(user_num, item_num, embedding_dim, hidden_dim).to(device) ncf_history = train(ncf) print("# Train FunkSVD") svd = FunkSvd(user_num, item_num, embedding_dim).to(device) svd_history = train(svd) # - history = pd.DataFrame(ncf_history + svd_history) # + columns = ["hit_rate", "dcg", "elapsed"] figure, axes = pl.subplots(nrows=1, ncols=3, sharex=True, figsize=(18, 3)) for j, column in enumerate(columns): sns.lineplot(x="epoch", y=column, hue="model", data=history, ax=axes[j]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _cell_guid="7f7382af-cf02-47ac-bcea-7f5f407e28e1" _uuid="2ee8e2efa2101c1264fff2b8a72f846f9879646c" # # Analysis of News Headlines: Topic Modelling with LSA # In this Project, LSA modelling algorithm is explored. These techniques are applied to the 'A Million News Headlines' dataset, which is a corpus of over one million news article headlines published by the ABC. # + _cell_guid="b1f10f39-91d4-45f1-b62f-d1f590e50438" _kg_hide-input=false _kg_hide-output=false _uuid="7edd510ba8ac857514e34d6b38c0466d125cffb9" ## Import required packages and modules # + _uuid="99df159d8549d11d99e8a349a4a7893812b187f2" ## Import Dataset # + [markdown] _cell_guid="81e6c728-92d6-46b6-848f-2124df59fc92" _uuid="ea84e7b744d0d5701e0019253aa9905718e55d72" # ## Exploratory Data Analysis # As usual, it is prudent to begin with some basic exploratory analysis. 
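# A minimal sketch of the first steps outlined below (assumptions: the 'A Million News Headlines' CSV
# has already been loaded into a DataFrame named `data` and its text column is called `headline_text`):

# +
# count missing values per column
print(data.isnull().sum())

# if only a few rows are affected, dropping them is the simplest option
data = data.dropna(subset=['headline_text']).reset_index(drop=True)
print(data.shape)
# -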
# + ## Check the count of NaN Values # + ## If the NaN values are less, then drop them or else replace them with suitable values # + ## Check new data values available # + [markdown] _cell_guid="1cdbefb7-9e8e-4a11-9cee-d39f8f16f557" _uuid="c31f278963d7bdd4c3553ada1fdb6095d50ca658" # First develop a list of the top words used across all one million headlines, giving us a glimpse into the core vocabulary of the source data. Stop words should be omitted here to avoid any trivial conjunctions, prepositions, etc. # + ## Download Stopwords incase you haven't done that before ## you can use nltk.download('stopwords') # + ## Define certain new stopwords that may have no significance in determining top news headlines # + _cell_guid="db6ce2f5-1247-4446-90f2-6aa2ba8af168" _kg_hide-input=true _uuid="59f4ae5e8e06786fa3ebdec7ec3011645aad3544" ## Define helper functions to get top n words ## Defined function must return a tuple of the top n words in a sample and their ## accompanying counts, given a CountVectorizer object and text sample # + _cell_guid="a2701525-298e-4943-8d3e-adad5d0d629d" _uuid="142e18023411d5c61cbe59a3c08c78c58993cdcc" ## plot top 25 words in headlines dataset and their number of occurances ## Pass the new created set of stopwords to count vectoriser function ## Initially try to work on a batch of data instead of entire dataset (Say on 200000 examples) # + [markdown] _cell_guid="781d7425-adcb-4642-8685-39722c5f2a4f" _uuid="a595fd6ead91a95c590d027c155fbc6fe3d800f6" # Next you can generate a histogram of headline word lengths, and use part-of-speech tagging to understand the types of words used across the corpus. This requires first converting all headline strings to TextBlobs and calling the ```pos_tags``` method on each, yielding a list of tagged words for each headline. # + _uuid="ce8f68db595b0ff3535e2c2b57f527b496a1dc99" ## You can download punkt and averaged perceptron tagger for NLTK if required using ## nltk.download('punkt') ## nltk.download('averaged_perceptron_tagger') # + ## Identify Tagged Headlines # + _cell_guid="f8ccacff-5771-4073-ba4a-6ad8bac2785c" _uuid="fc78480a55051d74b1802bc91aeba366b223fcb0" ## For furthur analysis one can try finding average headline word length ## and Part of speech tagging for headline corpus # + _cell_guid="25556701-ea73-4a62-9456-d89f01c738b6" _uuid="2b3abe64f0e4567e508a5fa45057655f2d2fecd5" ## By plotting the number of headlines published per day, per month and per year, ## one can also get a sense of the sample density. # + [markdown] _cell_guid="1627d53c-a120-4cdb-b2c2-a616865a24d0" _uuid="c5983d57a8ae630af799b218c196e9e006ddf2f8" # ## Topic Modelling # You can now apply a clustering algorithm to the headlines corpus in order to study the topic focus of ABC News, as well as how it has evolved through time. To do so, first experiment with a small subsample of the dataset, then scale up to a larger portion of the available data. # + [markdown] _cell_guid="19b5a00b-e1c1-433a-aa4c-490bdd40b798" _uuid="0efcc663a7c6a267b0eba42378d902002677d422" # ### Preprocessing # The only preprocessing step required in our case is feature construction, where we take the sample of text headlines and represent them in some tractable feature space. In practice, this simply means converting each string to a numerical vector. 
This can be done using the ```CountVectorizer``` object from SKLearn, which yields an $n×K$ document-term matrix where $K$ is the number of distinct words across the $n$ headlines in our sample (less stop words and with a limit of ```max_features```). # + [markdown] _cell_guid="dc19ca30-8586-4a29-a0be-cd46e39ae4b3" _uuid="66eeb9d4c023e095911313862580f90d28f0d87f" # Thus you have your (very high-rank and sparse) training data, ```small_document_term_matrix```, and can now actually implement a clustering algorithm. Your choice Latent Semantic Analysis, will take document-term matrix as input and yield an $n \times N$ topic matrix as output, where $N$ is the number of topic categories (which we supply as a parameter). For the moment, we shall take this to be 15. # + _cell_guid="12f50c7e-c8d4-473d-9918-d476faeed788" _uuid="009f7a3aa962a4e3a74f8bc78c4e14b845acf51f" ## To find top 15 topics we set ## n_topics = 15 # + [markdown] _cell_guid="0f3bc959-5f94-497b-afc2-6799ddd2a689" _uuid="2392087f63ed8827e3fb30ffea17d7c95e31c92f" # ### Latent Semantic Analysis # Let's start by experimenting with LSA. This is effectively just a truncated singular value decomposition of a (very high-rank and sparse) document-term matrix, with only the $r=$```n_topics``` largest singular values preserved. # + _cell_guid="5edba11f-1c8c-45ee-a833-f39a6eed0ffe" _uuid="403f67711d38b3acba59c3725cdf3be86536969a" ## Define LSA Model # + [markdown] _cell_guid="b4db9892-57a7-48e7-87e3-df1e8f47e948" _uuid="c90ceacbaa7085d28244b6d66b2ce57dcf9007bd" # Taking the $\arg \max$ of each headline in this topic matrix will give the predicted topics of each headline in the sample. We can then sort these into counts of each topic. # + _cell_guid="48e9a37b-f6a3-4f8c-87ea-4bbe0026fa10" _kg_hide-input=true _uuid="56ea2060287a4efc2ee17ecd3b2f2caf57d5812e" ## Define helper functions to get keys that returns an integer list of predicted topic ## categories for a given topic matrix ## and KeysToCount that returns a tuple of topic categories and their ## accompanying magnitudes for a given list of keys # + [markdown] _cell_guid="c223207c-25e0-47f2-9475-44bb6b8b92fa" _uuid="f317335665331fa8dbc8aadf3a89750c4c3100db" # However, these topic categories are in and of themselves a little meaningless. In order to better characterise them, it will be helpful to find the most frequent words in each. # + _cell_guid="378ec442-fe50-40c1-af07-285c47f06991" _kg_hide-input=true _uuid="c260d3062f3039a58b924428b7013629f99c346d" ## Define helper function get_top_n_words that returns a list of n_topic strings, ## where each string contains the n most common words in a predicted category, in order # + [markdown] _cell_guid="c51c0e10-f1f8-4d41-acff-941ecb6af716" _uuid="44c511893a4403d28df7539caa1b13b32dcc4567" # Thus we have converted our initial small sample of headlines into a list of predicted topic categories, where each category is characterised by its most frequent words. The relative magnitudes of each of these categories can then be easily visualised though use of a bar chart. 
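# A condensed sketch of the LSA step just described (assuming `small_document_term_matrix` and
# `n_topics` are defined as in the surrounding cells; scikit-learn's `TruncatedSVD` is the truncated
# singular value decomposition that LSA amounts to here):

# +
import numpy as np
from sklearn.decomposition import TruncatedSVD

lsa_model = TruncatedSVD(n_components=n_topics)
lsa_topic_matrix = lsa_model.fit_transform(small_document_term_matrix)

# argmax over the topic axis gives one predicted topic per headline,
# which can then be tallied into per-topic counts
lsa_keys = np.argmax(lsa_topic_matrix, axis=1)
categories, counts = np.unique(lsa_keys, return_counts=True)
print(list(zip(categories, counts)))
# -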
# + _cell_guid="5f51277b-37d2-4712-9737-00b50a396e8a" _uuid="a4f5e96898e844072d57e237d307ed589bb6d8cf" ## Visualise each topic vs Number of headlines These will be the most discussed topics ## In case you want to do furthur analysis you can try dimentionality reduction and ## analyse and compare it's result to other techniques like LDA that is left as an optional assignment for you # + [markdown] _cell_guid="0ccf8530-c578-4cad-a705-0a707b426d25" _uuid="26ce1e10578112b422b29c7079c7cd684156abee" # However, this does not provide a great point of conclusion, you can instead use a dimensionality-reduction technique called $t$-SNE, which will also serve to better illuminate the success of the clustering process. # + [markdown] _cell_guid="b8435c0e-5c00-4c8e-8650-db70106f67e3" _uuid="615effdf063641b3f6f1da9576f9ea1de11a27e7" # Now that you have reduced these ```n_topics```-dimensional vectors to two-dimensional representations, you can then plot the clusters using Bokeh. Before doing so however, it will be useful to derive the centroid location of each topic, so as to better contextualise our visualisation. # + _cell_guid="91916c42-685c-4c3b-bc68-2248ac8efcd0" _kg_hide-input=true _uuid="d37e004a79a6d264ca40c43e9dc847597402e318" # Define helper functions that returns a list of centroid vectors from each predicted topic category # + [markdown] _cell_guid="5e600401-a789-422b-a11c-82442c4eb50c" _uuid="98c59c1fbb011e3157212da64ac05d86200c829c" # All that remains is to plot the clustered headlines. Also included are the top three words in each cluster, which are placed at the centroid for that topic. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Goal # # * simulating amplicon fragments for genomes in non-singleton OTUs # # Setting variables # + import os workDir = '/var/seq_data/ncbi_db/genome/Jan2016/ampFrags/' genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_rn/' ampliconFile = '/var/seq_data/ncbi_db/genome/Jan2016/rnammer_aln/otusn_map_nonSingle.txt' # - # # Init # %load_ext rpy2.ipython # %load_ext pushnote # + language="R" # library(dplyr) # library(tidyr) # library(ggplot2) # + if not os.path.isdir(workDir): os.makedirs(workDir) # %cd $workDir # - # simlink amplicon OTU map file tmp = os.path.join(workDir, '../', ampliconFile) # !ln -s -f $tmp . # # Indexing genomes # !head -n 3 $ampliconFile # !cut -f 13 $ampliconFile | head # ## symlinking all genomes of interest # !cut -f 13 $ampliconFile | \ # sort -u | \ # perl -pe 's|^|../bac_complete_rn/|' | \ # xargs -I % ln -s -f % . # !cut -f 13 $ampliconFile | sort -u | wc -l # !find . -name "*.fna" | wc -l # !ls -thlc 2>/dev/null | head -n 4 # ## Making genome -> genome_file index # !cut -f 13 $ampliconFile | perl -pe 's/(.+).fna/\$1\t\$1\.fna/' | sort -u > genome_index.txt # !wc -l genome_index.txt # !head genome_index.txt # # Index genomes # !SIPSim genome_index \ # genome_index.txt \ # --fp . --np 26 \ # > index_log.txt \ # 2> index_log_err.txt # !find . 
-name "*sqlite3.db" | wc -l # # Simulating fragments # # copy primer file # !cp /home/nick/notebook/SIPSim/dev/515F-806R.fna ../ # !SIPSim fragments \ # genome_index.txt \ # --fp $workDir \ # --fr ../515F-806R.fna \ # --fld skewed-normal,9000,2500,-5 \ # --flr None,None \ # --nf 10000 \ # --np 20 \ # 2> ../ampFrags.log \ # > ../ampFrags.pkl # %pushnote SIPSim fragments complete # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from scipy.integrate import RK45,solve_ivp from ODE_potentials import VanDerPolePotential,LotkiVolterraPotential from ODE_samplers import MALA_ODE,ULA_ODE,grad_ascent_ODE,run_eval_test,set_function from multiprocessing import Pool import multiprocessing from zv_cv import Eval_ZVCV from sklearn.preprocessing import PolynomialFeatures import copy from baselines import construct_ESVM_kernel, construct_Tukey_Hanning from optimize import optimize_parallel_new from utils import * # - # Parameters for van-der-Pole and Lotka-Volterra examples: # + typ = 'LV' #'LV' for Lotka-Volterra, 'VdP' for Van-der-Pole method = {"sampler":"MALA"} #switch between ULA and MALA f_type = "sum_comps" if typ == 'VdP': #true parameter value theta_star = 1.0 #initial coordiante and speed y0 = np.array([0.0,2.0],dtype=float) #error of measurements sigma = 0.5 #prior variance sigma_prior = 0.5 elif typ == 'LV': theta_star = np.array([0.6,0.025,0.8,0.025],dtype = float) #initial number of victims and predators y0 = np.array([30.0,4.0],dtype=float) #setting prior parameters sigma_prior = np.array([0.5,0.05,0.5,0.05],dtype = float) mu_prior = np.array([1.0,0.05,1.0,0.05],dtype=float) #measurements error sigma = np.array([0.25,0.25]) # - # Timestaps #initial and last time moments t0 = 0 t_bound = 10 N_steps = 10 #moments of observations t_moments = np.linspace(t0,t_bound,N_steps+1) print(t_moments) # Creating potentials if typ == 'VdP': Cur_pot = VanDerPolePotential(sigma,sigma_prior,t_moments,theta_star,y0,t0,t_bound) elif typ == 'LV': Cur_pot = LotkiVolterraPotential(sigma,mu_prior,sigma_prior,t_moments,theta_star,y0,t0,t_bound) # Sampling (currently with MALA) r_seed = 666 #burn-in period N_burn = 1*10**3 #Train size N_train = 1*10**4 #Test size N_test = 1*10**4 #number of test trajectories n_traj = 1 if typ == 'VdP': #dimension d = 1 #step size step = 1e-3 elif typ == 'LV': #dimension d = 4 #step size step = 5e-6 # ### Construct kernels and sample if typ == 'VdP': params_prior = {"sigma":sigma_prior} elif typ == 'LV': params_prior = {"mu":mu_prior,"sigma":sigma_prior} # ### Compute starting point (maximum likelihood) N_steps_ascent = 5000 traj,traj_grad = grad_ascent_ODE(1453,Cur_pot,step,params_prior,N_steps_ascent,d,typ) theta_mle = traj[-1,:] print("mle for parameters: ",theta_mle) Cur_pot.set_mle(theta_mle) # ### Setting function #step_train = 5e-7 inds_arr = np.array([0]) params = {"ind":0} t_moments = None r_seed = 777 traj = [] traj_grad = [] #generate data nbcores = multiprocessing.cpu_count() trav = Pool(nbcores) res = trav.starmap(ULA_ODE, [(r_seed+i,Cur_pot, step, params_prior, N_burn, N_train, d, typ) for i in range (n_traj)]) trav.close() for i in range(len(res)): traj.append(res[i][0]) traj_grad.append(res[i][1]) #print("accepted = ",res[i][2]) traj = np.asarray(traj) traj_grad = np.asarray(traj_grad) traj_grad = (-1)*traj_grad # ### Apply control variates # + def H(k, x): if k==0: return 1.0 
if k ==1: return x if k==2: return (x**2 - 1)/np.sqrt(2) c = np.zeros(k+1,dtype = float) c[k] = 1.0 h = P.hermite_e.hermeval(x,c) / np.sqrt(sp.special.factorial(k)) return h def compute_H(k,x,d): cur_prod = 1.0 for i in range(d): cur_prod = cur_prod*H(k[i],x[:,i]) return cur_prod # + def generate_lexicographical_2nd_order(d): """ function to generate lexigoraphical polynomials of 1st and 2nd order """ cur_list = [] for i in range(d): cur_permute = np.zeros(d,dtype = int) cur_permute[i] = 1 cur_list.append(np.copy(cur_permute)) cur_permute[i] = 0 for i in range(d): cur_permute = np.zeros(d,dtype = int) cur_permute[i] = 2 cur_list.append(np.copy(cur_permute)) cur_permute[i] = 1 for j in range(i+1,d): cur_permute[j] = 1 cur_list.append(np.copy(cur_permute)) cur_permute[j] = 0 return cur_list a = generate_lexicographical_2nd_order(3) print(a) # - def test_traj(coefs_poly_regr,gamma,r_seed,lag,d,N_test,x0): """ function to perform 1-dimensional martingale decomposition """ X_test,Noise = generate_traj(x0,N_test,gamma,r_seed,d) test_stat_vanilla = np.zeros(N_test,dtype = float) test_stat_vr = np.zeros_like(test_stat_vanilla) #compute number of basis polynomials basis_funcs = generate_lexicographical_2nd_order(d) num_basis_funcs = len(basis_funcs) #compute polynomials of noise variables Z_l poly_vals = np.zeros((num_basis_funcs,N_test), dtype = float) for k in range(len(basis_funcs)): poly_vals[k,:] = compute_H(basis_funcs[k],Noise,d) #initialize function f_vals_vanilla = np.sum(X_test**2,axis=1) #array to store control variates values cvfs = np.zeros_like(f_vals_vanilla) #compute coeffitients bar_a bar_a_1st_order = np.zeros((lag,d,N_test),dtype=float) bar_a_2nd_order = np.zeros((lag,d,d,N_test),dtype=float) #preprocessing X_test = np.concatenate((x0.reshape(1,d),X_test),axis=0) coefs_poly_1st_order = np.zeros((lag,d),dtype=float) coefs_poly_1st_order = coefs_poly_regr[:,1:d+1] coefs_poly_2nd_order = np.zeros((lag,d,d),dtype=float) counter = 0 for i in range(d): for j in range(i,d): coefs_poly_2nd_order[:,i,j] = coefs_poly_regr[:,d+1+counter] counter += 1 for i in range(lag): #first-order coefficients for j in range(d): bar_a_1st_order[i,j,:] = coefs_poly_1st_order[i,j]*np.sqrt(gamma)*sigma(X_test[:-1])[:,j] #sum more coefficients for k in range(j): bar_a_1st_order[i,j,:] += coefs_poly_2nd_order[i,k,j]*np.sqrt(gamma)*sigma(X_test[:-1])[:,j]*((X_test[:-1]+gamma*b(X_test[:-1]))[:,k]) #diagonal part bar_a_1st_order[i,j] += 2*coefs_poly_2nd_order[i,j,j]*np.sqrt(gamma)*sigma(X_test[:-1])[:,j]*((X_test[:-1]+gamma*b(X_test[:-1]))[:,j]) #sum more coefficients for k in range(j+1,d): bar_a_1st_order[i,j] += coefs_poly_2nd_order[i,j,k]*np.sqrt(gamma)*sigma(X_test[:-1])[:,j]*((X_test[:-1]+gamma*b(X_test[:-1]))[:,k]) #second-order coefficients, to be filled bar_a_2nd_order[i,j,j,:] = coefs_poly_2nd_order[i,j,j]*np.sqrt(2)*gamma*sigma(X_test[:-1])[:,j] for k in range(j+1,d): bar_a_2nd_order[i,j,k,:] = coefs_poly_2nd_order[i,j,k]*gamma*sigma(X_test[:-1])[:,j]*sigma(X_test[:-1])[:,k] """ bar_a_0_1[i,1:] = coefs_poly_regr[i,1]*cov[0,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,0]+\ coefs_poly_regr[i,2]*cov[1,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,1]+\ 2*coefs_poly_regr[i,3]*cov[0,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,0]*(X_test[:-1]+gamma*b(X_test[:-1]))[:,0]+\ coefs_poly_regr[i,4]*(((X_test[:-1]+gamma*b(X_test[:-1]))[:,0])*sigma(X_test[:-1])[:,1]*np.sqrt(gamma)*cov[1,1] +\ ((X_test[:-1]+gamma*b(X_test[:-1]))[:,1])*sigma(X_test[:-1])[:,0]*np.sqrt(gamma)*cov[0,1])+\ 
2*coefs_poly_regr[i,5]*cov[1,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,1]*(X_test[:-1]+gamma*b(X_test[:-1]))[:,1] bar_a_0_1[i,0] = coefs_poly_regr[i,1]*cov[0,1]*np.sqrt(gamma)*sigma(x0)[0]+\ coefs_poly_regr[i,2]*cov[1,1]*np.sqrt(gamma)*sigma(x0)[1]+\ 2*coefs_poly_regr[i,3]*cov[0,1]*np.sqrt(gamma)*sigma(x0)[0]*(x0+gamma*b(x0))[0]+\ coefs_poly_regr[i,4]*(((x0+gamma*b(x0))[0])*sigma(x0)[1]*np.sqrt(gamma)*cov[1,1] +\ ((x0+gamma*b(x0))[1])*sigma(x0)[0]*np.sqrt(gamma)*cov[0,1])+\ 2*coefs_poly_regr[i,5]*cov[1,1]*np.sqrt(gamma)*sigma(x0)[1]*(x0+gamma*b(x0))[1] #coefficients with H_1_0 bar_a_1_0[i,1:] = coefs_poly_regr[i,1]*cov[0,0]*np.sqrt(gamma)*sigma(X_test[:-1])[:,0]+\ coefs_poly_regr[i,2]*cov[0,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,1]+\ 2*coefs_poly_regr[i,3]*cov[0,0]*np.sqrt(gamma)*sigma(X_test[:-1])[:,0]*(X_test[:-1]+gamma*b(X_test[:-1]))[:,0]+\ coefs_poly_regr[i,4]*(((X_test[:-1]+gamma*b(X_test[:-1]))[:,0])*sigma(X_test[:-1])[:,1]*np.sqrt(gamma)*cov[0,1] +\ ((X_test[:-1]+gamma*b(X_test[:-1]))[:,1])*sigma(X_test[:-1])[:,0]*np.sqrt(gamma)*cov[0,0])+\ 2*coefs_poly_regr[i,5]*cov[0,1]*np.sqrt(gamma)*sigma(X_test[:-1])[:,1]*(X_test[:-1]+gamma*b(X_test[:-1]))[:,1] bar_a_1_0[i,0] = coefs_poly_regr[i,1]*cov[0,0]*np.sqrt(gamma)*sigma(x0)[0]+\ coefs_poly_regr[i,2]*cov[0,1]*np.sqrt(gamma)*sigma(x0)[1]+\ 2*coefs_poly_regr[i,3]*cov[0,0]*np.sqrt(gamma)*sigma(x0)[0]*(x0+gamma*b(x0))[0]+\ coefs_poly_regr[i,4]*(((x0+gamma*b(x0))[0])*sigma(x0)[1]*np.sqrt(gamma)*cov[0,1] +\ ((x0+gamma*b(x0))[1])*sigma(x0)[0]*np.sqrt(gamma)*cov[0,0]) +\ 2*coefs_poly_regr[i,5]*cov[0,1]*np.sqrt(gamma)*sigma(x0)[1]*(x0+gamma*b(x0))[1] #second-order coefficients bar_a_1_1[i,1:] = 2*coefs_poly_regr[i,3]*gamma*(sigma(X_test[:-1])[:,0])**2*cov[0,0]*cov[0,1]+\ coefs_poly_regr[i,4]*gamma*(sigma(X_test[:-1])[:,0])*(sigma(X_test[:-1])[:,1])*(cov[0,1]**2 + cov[0,0]*cov[1,1])+\ 2*coefs_poly_regr[i,5]*gamma*(sigma(X_test[:-1])[:,1])**2*cov[1,1]*cov[0,1] bar_a_1_1[i,0] = 2*coefs_poly_regr[i,3]*gamma*(sigma(x0)[0])**2*cov[0,0]*cov[0,1]+\ coefs_poly_regr[i,4]*gamma*(sigma(x0)[0])*(sigma(x0)[1])*(cov[0,1]**2 + cov[0,0]*cov[1,1])+\ 2*coefs_poly_regr[i,5]*gamma*(sigma(x0)[1])**2*cov[1,1]*cov[0,1] #coefficients with H_2_0 bar_a_2_0[i,1:] = np.sqrt(2)*coefs_poly_regr[i,3]*gamma #coefficients with H_0_2 bar_a_0_2[i,1:] = np.sqrt(2)*coefs_poly_regr[i,5]*gamma """ #bar_a_1_0 = bar_a_1_0*poly_vals[0,:] #bar_a_0_1 = bar_a_0_1*poly_vals[1,:] #bar_a_1_1 = bar_a_1_1*poly_vals[2,:] #bar_a_2_0 = bar_a_2_0*poly_vals[3,:] #bar_a_0_2 = bar_a_0_2*poly_vals[4,:] counter = 0 for i in range(d): bar_a_1st_order[:,i,:] = poly_vals[i,:]*bar_a_1st_order[:,i,:] bar_a_2nd_order[:,i,i,:] = poly_vals[d + d*(d+1)//2 - (d-i)*(d-i+1)//2,:]*bar_a_2nd_order[:,i,i,:] counter += 1 for j in range(i+1,d): bar_a_2nd_order[:,i,j,:] = poly_vals[d+counter]*bar_a_2nd_order[:,i,j,:] counter += 1 #compute martingale sums M_n_1st = 0.0 M_n_2nd = 0.0 for l in range(N_test): for r in range(min(N_test-l,lag)): M_n_1st += np.sum(bar_a_1st_order[r,:,l]) M_n_2nd += np.sum(bar_a_2nd_order[r,:,:,l]) return np.mean(f_vals_vanilla), np.mean(f_vals_vanilla)-(M_n_1st)/N_test, np.mean(f_vals_vanilla)-(M_n_1st+M_n_2nd)/N_test # # # def approx_q(X_train,Y_train,N_traj_train,lag,max_deg): """ Function to regress q functions on a polynomial basis; Args: X_train - train tralectory; Y_train - function values; N_traj_train - number of training trajectories; lag - truncation point for coefficients, those for |p-l| > lag are set to 0; max_deg - maximum degree of polynomial in regression """ dim = 
X_train[0,:].shape[0] #print("dimension = ",dim) coefs_poly = np.array([]) for i in range(lag): x_all = np.array([]) y_all = np.array([]) for j in range(N_traj_train): y = Y_train[j,i:,0] if i == 0: x = X_train[j,:] else: x = X_train[j,:-i] #concatenate results if x_all.size == 0: x_all = x else: x_all = np.concatenate((x_all,x),axis = 0) y_all = np.concatenate([y_all,y]) #should use polyfeatures here #print("variance: ",np.var(y_all)) #print(y_all[:50]) poly = PolynomialFeatures(max_deg) X_features = poly.fit_transform(x_all) #print(X_features.shape) lstsq_results = np.linalg.lstsq(X_features,y_all,rcond = None) coefs = copy.deepcopy(lstsq_results[0]) coefs.resize((1,X_features.shape[1])) if coefs_poly.size == 0: coefs_poly = copy.deepcopy(coefs) else: coefs_poly = np.concatenate((coefs_poly,coefs),axis=0) return coefs_poly X_train = traj print(X_train.shape) Y_train = X_train[:,:,0].reshape((1,-1,1)) print(Y_train.shape) lag = 100 S_max = 2 #polynomial coefficients coefs_poly = approx_q(X_train,Y_train,n_traj,lag,S_max) #print(coefs_poly.shape) print(coefs_poly) regr_vals = np.zeros((lag,X_train.shape[1]),dtype=float) poly = PolynomialFeatures(S_max) features = poly.fit_transform(X_train[0]) #features = np.zeros((X_train.shape[1],6),dtype=float) #features[:,0] = 1.0 #features[:,1:3] = X_train[0,:,:] #features[:,3] = X_train[0,:,0]**2 #features[:,4] = X_train[0,:,0]*X_train[0,:,1] #features[:,5] = X_train[0,:,1]**2 for i in range(len(regr_vals)): regr_vals[i,:] = np.sum(coefs_poly[i,:]*features,axis=1) # Test our regressors cur_lag = 99 N_pts = 500 plt.figure(figsize=(10, 10)) plt.title("Testing regression model",fontsize=20) plt.plot(Y_train[0,cur_lag:N_pts+cur_lag,0],color='r',label='true function') plt.plot(regr_vals[cur_lag,:N_pts],color='g',label = 'practical approximation') plt.legend(loc = 'upper left',fontsize = 16) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mixed precision training # This module allows the forward and backward passes of your neural net to be done in fp16 (also known as *half precision*). This is particularly important if you have an NVIDIA GPU with [tensor cores](https://www.nvidia.com/en-us/data-center/tensorcore/), since it can speed up your training by 200% or more. # + hide_input=true from fastai.gen_doc.nbdoc import * from fastai.callbacks.fp16 import * from fastai.vision import * # - # ## Overview # To train your model in mixed precision you just have to call [`Learner.to_fp16`](/train.html#to_fp16), which converts the model and modifies the existing [`Learner`](/basic_train.html#Learner) to add [`MixedPrecision`](/callbacks.fp16.html#MixedPrecision). # + hide_input=true show_doc(Learner.to_fp16) # - # For example: path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) model = simple_cnn((3,16,16,2)) learn = Learner(data, model, metrics=[accuracy]).to_fp16() learn.fit_one_cycle(1) # Details about mixed precision training are available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html). We will just summarize the basics here. # # The only parameter you may want to tweak is `loss_scale`. This is used to scale the loss up, so that it doesn't underflow fp16, leading to loss of accuracy (this is reversed for the final gradient calculation after converting back to fp32). 
Generally the default `512` works well, however. You can also enable or disable the flattening of the master parameter tensor with `flat_master=True`, however in our testing the different is negligible. # # Internally, the callback ensures that all model parameters (except batchnorm layers, which require fp32) are converted to fp16, and an fp32 copy is also saved. The fp32 copy (the `master` parameters) is what is used for actually updating with the optimizer; the fp16 parameters are used for calculating gradients. This helps avoid underflow with small learning rates. # # All of this is implemented by the following Callback. # + hide_input=true show_doc(MixedPrecision) # - # ### Callback methods # You don't have to call the following functions yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality. # + hide_input=true show_doc(MixedPrecision.on_backward_begin) # + hide_input=true show_doc(MixedPrecision.on_backward_end) # + hide_input=true show_doc(MixedPrecision.on_loss_begin) # + hide_input=true show_doc(MixedPrecision.on_step_end) # + hide_input=true show_doc(MixedPrecision.on_train_begin) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Gaussian Blur, Medical Images # ### Import resources and display image # + import numpy as np import matplotlib.pyplot as plt import cv2 # %matplotlib inline # Read in the image image = cv2.imread('images/brain_MR.jpg') # Make a copy of the image image_copy = np.copy(image) # Change color to RGB (from BGR) image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB) plt.imshow(image_copy) # - # ### Gaussian blur the image # + # Convert to grayscale for filtering gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY) # Create a Gaussian blurred image gray_blur = cv2.GaussianBlur(gray, (9, 9), 0) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) ax1.set_title('original gray') ax1.imshow(gray, cmap='gray') ax2.set_title('blurred image') ax2.imshow(gray_blur, cmap='gray') # - # ### Test performance with a high-pass filter # + # High-pass filter # 3x3 sobel filters for edge detection sobel_x = np.array([[ -1, 0, 1], [ -2, 0, 2], [ -1, 0, 1]]) sobel_y = np.array([[ -1, -2, -1], [ 0, 0, 0], [ 1, 2, 1]]) # Filter the orginal and blurred grayscale images using filter2D filtered = cv2.filter2D(gray, -1, sobel_x) filtered_blurred = cv2.filter2D(gray_blur, -1, sobel_y) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) ax1.set_title('original gray') ax1.imshow(filtered, cmap='gray') ax2.set_title('blurred image') ax2.imshow(filtered_blurred, cmap='gray') # + # Create threshold that sets all the filtered pixels to white # Above a certain threshold retval, binary_image = cv2.threshold(filtered_blurred, 50, 255, cv2.THRESH_BINARY) plt.imshow(binary_image, cmap='gray') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_amazonei_mxnet_p36 # language: python # name: conda_amazonei_mxnet_p36 # --- # + import os,sys sys.path.append('../../src/python/') import copy import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.compose import ColumnTransformer from sklearn.datasets import make_blobs from sklearn.ensemble import RandomForestClassifier, 
RandomForestRegressor from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV from sklearn.pipeline import Pipeline from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, OrdinalEncoder from sklearn.preprocessing import PolynomialFeatures from sklearn.feature_selection import SelectFromModel from sklearn.preprocessing._encoders import _BaseEncoder # - # # Loading the dataset: # ## Data Set Information: # # Predicting the age of abalone from physical measurements. The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope -- a boring and time-consuming task. Other measurements, which are easier to obtain, are used to predict the age. Further information, such as weather patterns and location (hence food availability) may be required to solve the problem. # # From the original data examples with missing values were removed (the majority having the predicted value missing), and the ranges of the continuous values have been scaled for use with an ANN (by dividing by 200). # # # Attribute Information: # # Given is the attribute name, attribute type, the measurement unit and a brief description. The number of rings is the value to predict: either as a continuous value or as a classification problem. # # ## Name / Data Type / Measurement Unit / Description # ----------------------------- # - Sex / nominal / -- / M, F, and I (infant) # - Length / continuous / mm / Longest shell measurement # - Diameter / continuous / mm / perpendicular to length # - Height / continuous / mm / with meat in shell # - Whole weight / continuous / grams / whole abalone # - Shucked weight / continuous / grams / weight of meat # - Viscera weight / continuous / grams / gut weight (after bleeding) # - Shell weight / continuous / grams / after being dried # - Rings / integer / -- / +1.5 gives the age in years data = pd.read_csv('../../data/raw/abalone.csv') data.columns = [x.replace(' ', '_').lower() for x in data.columns] data data['sex'].value_counts(dropna=False) data.columns data.info() data.describe().transpose() # # Data preparation: TARGET_COL = 'rings' X = data.drop(columns=[TARGET_COL]) y = data[TARGET_COL] x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) print('Train:', x_train.shape[0], 'samples', x_train.shape[1], 'columns') print('Test:', x_test.shape[0], 'samples', x_test.shape[1], 'columns') x_train.shape[1] # # Pre-processing Pipeline: # Creating the transformation: numeric_transformer = Pipeline(steps=[('scaler', MinMaxScaler())]) categorical_transformer = Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore', dtype=np.float32))]) # + # Feature Encoding: # numeric_features = [] numeric_features = ['length', 'diameter', 'height', 'whole_weight', 'shucked_weight', 'viscera_weight', 'shell_weight'] # categorical_features = [] categorical_features = ['sex'] passthrough_features = [] all_features_ = (numeric_features + categorical_features + passthrough_features) features_encoder = ColumnTransformer(transformers=[('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features), # ('passthrough', 'passthrough', passthrough_features), ], remainder='drop') # - # Feature Construction: features_creator = PolynomialFeatures(2, include_bias=False) # Feature Selection: regr_select = 
RandomForestRegressor() features_selector = SelectFromModel(regr_select) preprocessor = Pipeline([('feature_encoding', features_encoder), # ('feature_creation', features_creator), # ('feature_selection', features_selector), ]) # Columns dropped by preprocessor: columns_to_drop = data.columns[~np.isin(data.columns, all_features_)] columns_to_drop x_train_encoded = preprocessor.fit_transform(x_train, y_train) pd.DataFrame(x_train_encoded) x_test_encoded = pd.DataFrame(preprocessor.transform(x_test)) x_test_encoded # # Training the model into the main pipeline: # + # # Training the model and getting the results: # lr_regr = LinearRegression() # estimator = lr_regr.fit(x_train_encoded, y_train) # model_full = Pipeline(steps=[('preprocessor', preprocessor), # ('estimator', lr_regr)]) # # Com o linear regressor e OneHotEncoder houve problema na predição do ONNX # # Aparentemente os coeficientes ligados ao OneHot ficavam com valores inadequados. # + # Training the model and getting the results: rf_regr = RandomForestRegressor() estimator = rf_regr.fit(x_train_encoded, y_train) model_full = Pipeline(steps=[('preprocessor', preprocessor), ('estimator', estimator)]) # - y_train_pred = model_full.predict(x_train) y_test_pred = model_full.predict(x_test) # # Model evaluation: print(f'MAE (train): {mean_absolute_error(y_train, y_train_pred)} MAE (test): {mean_absolute_error(y_test, y_test_pred)}') print(f'MSE (train): {mean_squared_error(y_train, y_train_pred)} MSE (test): {mean_squared_error(y_test, y_test_pred)}') print(f'RMSE (train): {mean_squared_error(y_train, y_train_pred)**0.5} RMSE (test): {mean_squared_error(y_test, y_test_pred)**0.5}') print(f'R2 score (train): {r2_score(y_train, y_train_pred)} R2 score (test): {r2_score(y_test, y_test_pred)}') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # *** # # # # *** # # Label Detection with Python, Google Vision API and google-vision-wrapper # # ### Introduction # In this quick tutorial we are going to show how to use google-vision-wrapper to perform logo detection on images. Please refer to the [Official Github Page](https://github.com/gcgrossi/google-vision-wrapper) for more information. # ### Before you begin # Before starting, it is mandatory to correctly setup a Google Cloud Project, authorise the Google Vision API and generate a .json API key file. Be sure to have fulfilled all the steps in the [Before you Begin Guide](https://cloud.google.com/vision/docs/before-you-begin) before moving on. # ### Imports # + # the main class from gvision import GVisionAPI #other imports import os import cv2 import numpy as np import matplotlib.pyplot as plt # - # ### Read The input image # You can read the input image in the way you prefer, the class accepts 2 formats: # 1. numpy.ndarray # 2. bytes # # The Google Vision API accepts images in bytes format. If you chose to go on with numpy array the wrapper will perform the conversion. I always chose to read the image using OpenCV. #read the image from disk img = cv2.imread(os.path.join(os.getcwd(),'images','dunkin.jpg')) # we are going to use this image: #show the image #transform to RGB -> an OpenCV speciality plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # I'm sorry if you are hungry but this is my favourite ... ! Let's see how to detect their logo on the image. 
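# As an aside, a sketch of the other input route mentioned above: the same file can be read as raw
# bytes with plain Python I/O (the wrapper is said to accept bytes directly, so no OpenCV conversion
# would be needed in that case).

# +
with open(os.path.join(os.getcwd(), 'images', 'dunkin.jpg'), 'rb') as f:
    img_bytes = f.read()

print(type(img_bytes), len(img_bytes))
# -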
# ### Initialize the class
# ```authfile``` is the path to your API key file in .json format.

# path to auth key file
authfile = os.path.join(os.getcwd(),'gvision_auth.json')
gvision = GVisionAPI(authfile)

# ### Perform a request
# The method we are going to use is: ```.perform_request(img,option)```. It accepts 2 parameters:
# 1. the image already loaded
# 2. an option that specifies what kind of request to make
#
# You can access the possible options in these two ways:

# +
# method to print request options
gvision.request_options()

# request options from the class attribute
print('\nPossible Options:')
print(gvision.request_types)
# -

# We are ready to perform the actual request. The body of the response from the API can be accessed using the ```.response``` attribute.

# +
# perform a request to the API
gvision.perform_request(img,'logo detection')

# print the response
print(gvision.response)
# -

# With one logo the output is still OK to parse, but with a very crowded image it can become quite verbose. Thanks to google-vision-wrapper we can obtain the same information in a much more convenient form.

# ### Obtaining the information as list
# The information regarding the detection can be accessed using different methods. In the following, we are going to obtain all the annotations.

# obtaining lists
headers,logos = gvision.logos()
print(headers)
print(logos)

# Remember: for each logo detected (there could be more than 1) a list with the corresponding information is filled, i.e. the first logo is ```logos[0]```. As you can see, each list contains the logo name, the detection confidence and the coordinates of the bounding box surrounding it.

# ### Obtaining the information as pandas DataFrame
# The same information can also be retrieved as a pandas DataFrame for convenience, using the method ```.to_df(option,name)```. It accepts 2 parameters:
# 1. an option, specifying the type of information to dump
# 2. the optional name or id of the image, that will be appended to each row of the DataFrame. Default is set to ```'image'```.
#
# You can access the possible options in the two following ways:

# +
# method to print df options
gvision.df_options()

# df options from the class attribute
print('\nPossible Options:')
print(gvision.df_types)
# -

# Let's obtain the information.

# obtain the information as a pandas DataFrame
df_labels = gvision.to_df('logos','donuts')
df_labels

# ### Draw the results
# You can now draw the results in the way you prefer. I will do it using OpenCV.

# +
# copy the original image
# transform to RGB -> an OpenCV speciality
boxed = cv2.cvtColor(img.copy(), cv2.COLOR_BGR2RGB)

# draw rectangles and information
# on the detected logos
for logo in logos:
    name = logo[0]
    score = logo[1]*100
    tl,tr,br,bl = logo[2],logo[3],logo[4],logo[5]

    cv2.rectangle(boxed, (int(tl[0]),int(tl[1])), (int(br[0]),int(br[1])), (0, 255, 0), 2)

    # draw name and confidence
    cv2.putText(boxed, "{} {:.1f}%".format(name,score), (int(tl[0]), int(tl[1] + 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 255, 0), 2)

# show the image
plt.imshow(boxed)

# save to disk
output = cv2.cvtColor(boxed, cv2.COLOR_RGB2BGR)
cv2.imwrite(os.path.join(os.getcwd(),'assets','output_dunkin.jpg'), output)
# -

# And here we are! With a very few lines of code we were able to detect the logo associated with this box of donuts. I'm sorry, I should also have written something about the Google Maps API and how to find the nearest one to your place, because writing this tutorial made me very, very hungry! I will go and grab a coffee and a 12 box for my colleagues before starting my coding session!
# # ## _'Hasta la ciambella, siempre!'_ # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="rscAyIxioesU" # # MOML - Clustering # # --- # + [markdown] id="eyDmtuJZxbHu" # ### Import de tous les modules nécessaires # + id="cbAlquKoomVW" # %matplotlib inline import plotly.graph_objects as go import plotly.express as px import matplotlib.pyplot as plt from scipy.spatial import distance from scipy.spatial.distance import pdist, squareform from sklearn.cluster import KMeans from scipy.io import arff import pandas as pd import networkx as nx import numpy as np from numpy import linalg as LA # + [markdown] id="gbDkSOTUpB80" # ### Chargement et pré-traitement de la base de donées # + [markdown] id="f8FliIm6pQMJ" # Pour pouvoir utiliser la base de données, il nous faut en retirer: # # # * Les lignes contenant des missing values # * Les features inadaptées aux calculs (```"fruit", "daf", "site"```) # # On transforme également la colonne cible depuis son format de chaîne de charactères vers un format entier. # + colab={"base_uri": "https://localhost:8080/", "height": 215} id="N5gO1gko_jgQ" outputId="d7b3a541-60d8-45d1-c100-48a47d845b93" data, meta = arff.loadarff("Squash.arff") #remplacer Squash.arff par le chemin du fichier qui contient la database sur votre machine df = pd.DataFrame(data) # on retire les rows avec des missing values df.dropna(inplace=True) # on retire les colonnes non numériques, dont on ne va pas se servir del df["site"] del df["daf"] del df["fruit"] # excellent : 0, ok : 1, not_acceptable : 2 df["Acceptability"] = df["Acceptability"].replace([b"excellent", b"ok", b"not_acceptable"], [0, 1, 2]) df.head() #premières valeurs pour avoir une idée des données # + [markdown] id="IsF-qVZRM1E-" # # Clustering par KMeans # # + id="ytdCebzhrK8z" k = 3 # Nombre de clusters model = KMeans(n_clusters=k) model.fit(df) #On stocke les prédictions dans une nouvelle colonne de la dataframe df["km_label"] = model.labels_ # + [markdown] id="dvrCSLktr9x0" # Pour plot les données, on choisit deux features arbitraires : le poids et la saveur. 
# + colab={"base_uri": "https://localhost:8080/", "height": 350} id="YycHs8ONbllJ" outputId="832e1dc6-fdfd-44aa-b374-5e2271a448c8" fig = plt.figure(figsize=(16,5)) # Plot du clustering cible plt.subplot(121) plt.scatter(df["weight"], df["flavour"], c=df["Acceptability"]) plt.xlabel("Poids") plt.ylabel("Saveur") plt.title("Clustering cible") # Plot du clustering donné par KMeans plt.subplot(122) plt.scatter(df["weight"], df["flavour"], c=df["km_label"]) plt.xlabel("Poids") plt.ylabel("Saveur") plt.title("Clustering par KMeans") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="JRtr3tPTvhvI" outputId="4d3290f2-53ac-4f52-f371-5ff4e8bbdb31" # Calcul du taux de prédictions correctes n = df.shape[0] corrects = df.apply(lambda x: (x["Acceptability"] == x["km_label"]), axis=1) n_correct = len(corrects[corrects == True].index) print("Précision :", n_correct/n) # + [markdown] id="GyA8Wc04Nw_-" # # Matrices d'adjacence et de similarité # + id="NI6qpb02N-VH" def dist_sq(x, y): """ fonction qui calcule la distance au carré entre 2 points x et y sont 2 vecteurs (1 x n) """ square_root_distance = distance.euclidean(x,y) #calcul la distance euclidienne sans mettre au carré resu = square_root_distance**2 # on met au carré return resu # + id="kwDA4uVOrBun" def weight(x, y): """ fonction qui calcule le poids entre 2 points connectés """ d = dist_sq(x, y) # appel de la fonction qui calcule la distance entre 2 points resu = np.exp(-d/2) # calcul du poids avec la distance d'après la formule de similarité du cours (NB : ici on posera sigma = 1) return resu # + id="XpJ550uxOCIo" def distance_matrix(data): """ fonction qui calcule la matrice de distance entre les différents points, rend une matrice symétrique """ return squareform(pdist(data.iloc[:], lambda x, y : dist_sq(x, y))) # + id="t8jHG17NRhg8" def simil_no_epsilon(data): """ fonction qui calcule la matrice de similarité sans critère epsilon, pour une utilisation par le spectral clustering Cette fonction implémente une normalisation des distances pour éviter que l'exponentielle dans weight(x, y) n'écrase toutes les grandes distances sur 0 """ dist = distance_matrix(data) max_dist = np.amax(dist) dist = np.array([elem/max_dist for elem in dist]) return squareform(pdist(dist, lambda x, y: weight(x, y))) # + id="eNHXdhO7OL1X" def adjacency_and_similarity_matrix(data, epsilon, bypass=False) : """ fonction qui définit si 2 points sont connectés ou non sur la base de leur distance ou de leur similarité, retourne les matrices d'adjacence et de similarité epsilon : seuil arbitraire pour savoir si 2 point sont connectés ou non sur la base de leur distance bypass = si True, néglige le seuil epsilon (deprecated, utiliser le simil_no_epsilon) """ mat_dist = distance_matrix(data) # appel fonction qui calcule la matrice de distance entre chaque points n= mat_dist.shape[0] # nombre de ligne de la data frame , i.e : nombre de points adja= np.zeros((n,n)) #défintion de la matrice d'adjacence , qui contiendra des 0 ou des 1 simi = np.zeros((n,n))# définition de la matrice de similarité , qui contiendra des poids entre 0 et 1 for i in range (0,n): for j in range (i,n): if ( mat_dist[i,j] < epsilon) or bypass: # critère de connectivité adja[i, j] = 1 adja[j, i] = 1 vec_i = data.iloc[[i],:] vec_j = data.iloc[[j],:] simi[i, j] = weight(vec_i,vec_j) simi[j, i] = simi[i, j] return adja, simi # + colab={"base_uri": "https://localhost:8080/", "height": 335} id="4mZgbqQ1PMPt" outputId="fc19fc43-988d-492a-eb71-de4f44713201" #Graph epsilon = 200000 adja,simi = 
adjacency_and_similarity_matrix(df,epsilon) n = simi.shape[0] G = nx.Graph() for i in range(0,n): G.add_nodes_from([i]) for i in range(0,n): for j in range(0,n): if adja[i,j]==1 : G.add_edge(i, j, weight = simi[i,j]) nx.draw(G, with_labels=True, font_weight='bold') plt.title(f"Graphe de connnectivité pour epsilon = {epsilon}") plt.show() # + [markdown] id="GSGlvqwolu8P" # # Spectral clustering # + id="SD6T-dFv3KAM" def Ls(W): '''Construction de la matrice de Laplace et de la matrice de Laplace normalisée ''' D = np.sum(W, axis=1) # degrés D_ = np.diag(np.power(D, -1/2)) # D^(-1/2) L = np.diag(D) - W # Matrice de Laplace L_norm = np.dot(D_, np.dot(L, D_)) # L normalisée return L_norm # + id="W5IxhVqQmROP" def spectral_clustering(W, nbCluster) : """ Fonction de la methode du spectral clustering. Entrées : W = matrice de similarité / nbCluster = nombre de clusters souhaité Sortie : Renvoie une liste d'array comportant les indices des pts du même cluster. """ #On construit la matrice laplacienne L_norm = Ls(W) #Trier les vecteurs propres dans l'ordre croissant des val. propres eigenValues, eigenVectors = LA.eigh(L_norm) indice = eigenValues.argsort() eigenValues, eigenVectors = eigenValues[indice], eigenVectors[:,indice] V = eigenVectors[:,:nbCluster] #On applique la méthode kmean sur les lignes de V model = KMeans(n_clusters=nbCluster).fit(V) return model.labels_ # + id="QZQXxxHqKXbp" df["sc_label"] = spectral_clustering(simil_no_epsilon(df), 3) # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="TxCaIHHCk7jV" outputId="120fceac-ac52-4987-861c-5610c678602e" fig = plt.figure(figsize=(16,5)) # Plot du clustering cible plt.subplot(121) plt.scatter(df["weight"], df["flavour"], c=df["Acceptability"]) plt.xlabel("Poids") plt.ylabel("Saveur") plt.title("Clustering cible") # Plot du clustering donné par KMeans plt.subplot(122) plt.scatter(df["weight"], df["flavour"], c=df["sc_label"]) plt.xlabel("Poids") plt.ylabel("Saveur") plt.title("Clustering par Spectral Clustering") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="v--tG6lJl6L7" outputId="92962dba-4918-40d1-f8a7-e6beeec352d1" # Calcul du taux de prédictions correctes n = df.shape[0] corrects = df.apply(lambda x: (x["Acceptability"] == x["sc_label"]), axis=1) n_correct = len(corrects[corrects == True].index) print("Précision :", n_correct/n) # + [markdown] id="vvc9tN5YtYMw" # # Unsupervised learning: Méthodes pour choisir le nombre de clusters # + [markdown] id="JDYeMiUbLUor" # ## Eigengap method # + id="W0UjcY3FLXeK" def eigengap(Ls): """méthode qui renvoie l'indice du plus grand écart entre les valeurs propres de la matrice de laplace Cet indice est cense etre le meilleur nombre de clusters Arguments: Ls {numpy array} -- [matrice de laplace] """ w, _ = LA.eigh(Ls) # Les valeurs propres de la matrice de laplace w = np.sort(w) # triees dans l'ordre croissant #calcul des écarts en valeur absolue gaps = np.zeros((len(w))) for i in range(len(w) - 1): gaps[i] = abs(w[i]-w[i+1]) n_vals = 20 #nb des premieres valeurs propres à plot fig = plt.figure(figsize=(10,5)) plt.plot(np.arange(n_vals), w[:n_vals], "x") plt.xticks(np.arange(n_vals)) plt.xlabel("Indice de la valeur propre") plt.ylabel("Valeur propre") plt.title("Eigengaps") #indice du plus grand écart (+1 pour le nb de clusters) return np.argmax(gaps)+1 # + colab={"base_uri": "https://localhost:8080/", "height": 367} id="mQOKVCWKFee_" outputId="7b150cf4-f54e-4ab2-9d17-7573b38cc507" print("k qui maximise l'eigengap: ",eigengap(Ls(simil_no_epsilon(df)))) # + 
[markdown] id="XI7BjaYKIi2N" # ## Méthode Elbow # + id="KYEByjgiIomD" def elbow(data): """méthode qui plot les variances d'un clustering en Kmeans pour des valeurs de k allant de 1 à 10 L'elbow method prescrit de choisir le k qui correspond à un "coude" dans le plot Arguments: data {[DataFrame} -- [les données sur lesquelles effectuer Kmeans] """ vars = [] for k in range(1, 10): model = KMeans(n_clusters=k) model.fit(data) vars.append(model.inertia_) # variances plt.figure(figsize=(8,4)) plt.plot(np.arange(1, 10), vars) plt.xlabel("Nombre de clusters") plt.ylabel("Variance") plt.title("Méthode Elbow") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="pdrLq1rqPVjo" outputId="e32781d0-0361-42cb-91a1-0f3508d7c577" elbow(df) # + [markdown] id="ntscXnx3NF9F" # ## Silhouette method # + id="fwDfq9QGXVm2" def compute_Si(point, cluster, non_neighbors, k): """Calcul du score de silhouette d'un point donné Note: un point dans cluster de taille 1 a toujours un score de 0 Arguments: point {DataFrame} -- [le point dont on calcule le score] cluster {DataFrame} -- [le cluster auquel appartient le point] non_neighbors {DataFrame} -- [tous les points des autres clusters] k {int} -- [nb de clusters] Returns: number -- [le score de silhouette de point] """ if cluster.shape[0] == 1: ai = 0 else: dists_inter = squareform(pdist(pd.concat([point, cluster]))) s_inter = np.sum(dists_inter[0,:]) # somme des distances du point à ses collègues de cluster ai = s_inter/(cluster.shape[0] - 1) # moyenne, avec /n-1 puisque le point lui-même ne compte pas dists_exter = [] for i in range(k): if i == point["km_label"].iat[0]: # on passe ce tour de boucle, # car other_cluster (cf definition ci-dessous) serait vide # pour i le label de point # puisque non_neighbors ne contient pas les points du cluster de point continue other_cluster = non_neighbors[non_neighbors["km_label"] == i] # les points du cluster i if other_cluster.shape[0] == 1: dists_exter_k = ai # si ai s'avere etre le min des distances, Si sera nul else: # cf le calcul de ai, similaire dists_exter_k = np.sum(squareform(pdist(pd.concat([point, other_cluster])))[0,:]) dists_exter_k = dists_exter_k/other_cluster.shape[0] dists_exter.append(dists_exter_k) bi = np.amin(np.array(dists_exter)) if ai == bi: return 0 elif ai < bi: return 1 - ai/bi return 1 - bi/ai # + id="EmNU3glhNLfX" def silhouette(dataframe): """Calcule le meilleur score de silhouette de différents clustering Kmeans pour k allant de 2 à 10 Plus le score est proche de 1, meilleur est le clustering Note: cette fonction plot aussi les valeurs du score Arguments: dataframe {DataFrame} -- [les données pour le clustering] Returns: [int] -- [le meilleur k pour le clustering] """ data = dataframe.copy() # parce que l'on va rajouter/modifier des colonnes Sk = [] # liste qui stockera les scores moyens des k-clusterings for k in range(2, 11): Si= [] # liste qui stockera les scores des points dans le clustering actuel model = KMeans(n_clusters=k) model.fit(data) data["km_label"] = model.labels_ # labels predits for ind in range(data.shape[0]): point = data.iloc[[ind]] label_pt = point["km_label"].iat[0] neighbors = data[data["km_label"] == label_pt] # les points du cluster auquel appartient le point non_neighbors = data[data["km_label"] != label_pt] # les autres points Si.append(compute_Si(point, neighbors, non_neighbors, k)) Sk.append(np.mean(np.array(Si))) plt.plot(np.arange(2, 11), Sk) plt.title("Score de silhouette moyen pour k clusters") plt.xlabel("k") plt.ylabel("Score") 
plt.show() return np.argmax(np.array(Sk)) + 2 # +2 car le range va de 2 à 11, et les indices de listes commencent à 0 # + colab={"base_uri": "https://localhost:8080/", "height": 312} id="9weDuSdURa1c" outputId="5cca6c9c-78c5-4ebc-e7bb-d1649850281c" print("Nombre recommandé de clusters selon la méthode silhouette :", silhouette(df)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:PythonData] * # language: python # name: conda-env-PythonData-py # --- # ## Observations and Insights # Among the several drug treatment regimens, the for most promissing treatments for squamous cell carcinoma (SCC), a commonly occurring form of skin cancer, are Capomulin, Ramicane, Infubinol, and Ceftamin, and from these, Ramicane appears to be the most successful treatment, with an average post treatment tumor volume of 36.56 mm3. This is followed closly by Capomulin with an average post treatment tumor volume of 32.38 mm3. # # # Ovre 45 days of treatment, the SSC tumor volume in mice receiving Campomulin decreased by about 9mm3. # # There is a strong positive correlation of 0.842 (r2 = 0.709) between mouse weight and tumor volume. Inidcating that mice who are overweight, have a much higher perpensity to develop SCC. # # Areas for further analysis would be to see if sex and age has an impact on the efficacy of the drug regimens. # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import scipy.stats as st from scipy.stats import linregress import numpy as np # Study data files mouse_metadata_path = "data/Mouse_metadata.csv" study_results_path = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata_path) study_results = pd.read_csv(study_results_path) # Combine the data into a single dataset merged_data_df =pd.merge(mouse_metadata, study_results, on='Mouse ID', how='left') # Display the data table for preview merged_data_df # - # Checking the number of mice. merged_data_df['Mouse ID'].nunique() # Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. duplicate_data = merged_data_df.loc[merged_data_df.duplicated(subset=['Mouse ID', 'Timepoint'], keep=False)] duplicate_data # Optional: Get all the data for the duplicate mouse ID. merged_data_df.loc[merged_data_df['Mouse ID']=='g989'] # Create a clean DataFrame by dropping the duplicate mouse by its ID. clean_data_df = merged_data_df.loc[merged_data_df['Mouse ID']!='g989'] clean_data_df # Checking the number of mice in the clean DataFrame. clean_data_df['Mouse ID'].nunique() # ## Summary Statistics # + # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # Grouping data by drug regimen and statistics calculations mean = clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean() median = clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median() variance = clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var() std = clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std() std_error_means = clean_data_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem() # This method is the most straighforward, creating multiple series and putting them all together at the end. 
summary_stats_df = pd.DataFrame({'Mean': mean, 'Median': median, 'Variance': variance, 'Standard Deviation': std, 'SEM': std_error_means}) # Display formatted summary statistics summary_stats_df.style.format({'Mean':'{:.3f}', 'Median':'{:.3f}', 'Variance':'{:.3f}', 'Standard Deviation':'{:.3f}', 'SEM':'{:.3f}'}) # + # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # This method produces everything in a single groupby function # - # ## Bar and Pie Charts # + # Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas. # Group data by drug treatment # mice on drug treatments mice_per_treatment = clean_data_df.groupby('Drug Regimen')['Mouse ID'].count() # Generate bar plot and set labels mice_per_treatment_plot = mice_per_treatment.plot(kind='bar', title='Mice per Drug Regimen') mice_per_treatment_plot.set_ylabel('Number of Mice') plt.show() # + # Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot. # Create x-ticker loactions x_axis = np.arange(len(mice_per_treatment)) # Create list of x-axis labels drug_regimens = summary_stats_df.index.tolist() # Generate bar plot and add labels plt.bar(x_axis, mice_per_treatment, align='center') plt.xticks(x_axis, drug_regimens, rotation=90) plt.title('Total Mice per Drug Regimen') plt.xlabel('Drug Regimen') plt.ylabel('Number of Mice') plt.show() # + # Generate a pie plot showing the distribution of female versus male mice using pandas # Grouping data by sex and counting number of mice per sex mice_sex = clean_data_df.groupby('Sex')['Mouse ID'].nunique() # The colors of each section of the pie chart colors = ["pink", "blue"] # Generating pie plot and add formatting and labels mice_sex_plot = mice_sex.plot(kind='pie', title='Male vs Female Mice', autopct="%1.1f%%", figsize=(5,5), fontsize=11, colors=colors) mice_sex_plot.set_ylabel('Percentage of Mice') plt.show() # + # Generate a pie plot showing the distribution of female versus male mice using pyplot # Create list of x-axis labels labels_sex = sorted(clean_data_df['Sex'].unique()) # The colors of each section of the pie chart colors = ["pink", "blue"] # Generating pie plot and add formatting and labels plt.pie(mice_sex, autopct="%1.1f%%", labels=labels_sex, colors=colors) plt.title('Male vs Female Mice') plt.ylabel('Percentage of Mice') plt.axis('equal') plt.show() # - # ## Quartiles, Outliers and Boxplots # + # Calculate the final tumor volume of each mouse across four of the treatment regimens: # Capomulin, Ramicane, Infubinol, and Ceftamin # Start by getting the last (greatest) timepoint for each mouse max_timepoint = clean_data_df.groupby('Mouse ID')['Timepoint'].max() max_timepoint_df = pd.DataFrame(max_timepoint) # Merge this group df with the original dataframe to get the tumor volume at the last timepoint merged_data2_df =pd.merge(clean_data_df, max_timepoint_df, on='Mouse ID') merged_data2_df # + # Put treatments into a list for for loop (and later for plot labels) best_regimens = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'] # Create empty list to fill with tumor vol data (for plotting) tumor_vol_data = [] # Locate the rows which contain mice on each drug and get the tumor volumes mouse_maxTime_df = merged_data2_df.loc[merged_data2_df['Timepoint_x'] == merged_data2_df['Timepoint_y']] mouse_maxTime_df = mouse_maxTime_df.sort_values(by='Drug Regimen').reset_index(drop=True) # Add 
subset for: # Capomulin capomulin_df = mouse_maxTime_df.loc[mouse_maxTime_df['Drug Regimen'] == 'Capomulin'] capomulin_tumorVol = pd.DataFrame(capomulin_df['Tumor Volume (mm3)']) capomulin_tumorVol = capomulin_tumorVol.rename(columns={'Tumor Volume (mm3)':'Capomulin'}).sort_values(by='Capomulin').reset_index(drop=True) # Ramicane ramicane_df = mouse_maxTime_df.loc[mouse_maxTime_df['Drug Regimen'] == 'Ramicane'] ramicane_tumorVol = pd.DataFrame(ramicane_df['Tumor Volume (mm3)']) ramicane_tumorVol = ramicane_tumorVol.rename(columns={'Tumor Volume (mm3)':'Ramicane'}).sort_values(by='Ramicane').reset_index(drop=True) # Infubinol infubinol_df = mouse_maxTime_df.loc[mouse_maxTime_df['Drug Regimen'] == 'Infubinol'] infubinol_tumorVol = pd.DataFrame(infubinol_df['Tumor Volume (mm3)']) infubinol_tumorVol = infubinol_tumorVol.rename(columns={'Tumor Volume (mm3)':'Infubinol'}).sort_values(by='Infubinol').reset_index(drop=True) # Ceftamin ceftamin_df = mouse_maxTime_df.loc[mouse_maxTime_df['Drug Regimen'] == 'Ceftamin'] ceftamin_tumorVol = pd.DataFrame(ceftamin_df['Tumor Volume (mm3)']) ceftamin_tumorVol = ceftamin_tumorVol.rename(columns={'Tumor Volume (mm3)':'Ceftamin'}).sort_values(by='Ceftamin').reset_index(drop=True) # Summarizing Tumor Volumes across Drug Regimens data = [capomulin_tumorVol['Capomulin'], ramicane_tumorVol['Ramicane'], infubinol_tumorVol['Infubinol'], ceftamin_tumorVol['Ceftamin']] # Grouping separate data frams to one drug_Regimen_TumorVol = pd.concat(data, axis=1, keys=best_regimens) # - # ## The final tumor volume of each mouse across four of the most promising treatment regimens are: # drug_Regimen_TumorVol # + # Calculate the IQR and quantitatively determine if there are any potential outliers. # Capomulin cap = capomulin_tumorVol['Capomulin'] quartiles = cap.quantile([.25,.5,.75]).round(2) lowerq = quartiles[0.25] upperq = quartiles[0.75] iqr = round(upperq-lowerq,2) print(f"The lower quartile of Capomulin is: {lowerq}") print(f"The upper quartile of Capomulin is: {upperq}") print(f"The interquartile range of Capomulin is: {iqr}") print(f"The the median of Capomulin is: {quartiles[0.5]} ") lower_bound = round(lowerq - (1.5*iqr),2) upper_bound = round(upperq + (1.5*iqr),2) print(f"Values below {lower_bound} for Capomulin could be outliers.") print(f"Values above {upper_bound} for Capomulin could be outliers.") print("----------------------------------------------------") print("") # Ramicane ram = ramicane_tumorVol['Ramicane'] quartiles = ram.quantile([.25,.5,.75]).round(2) lowerq = quartiles[0.25] upperq = quartiles[0.75] iqr = round(upperq-lowerq,2) print(f"The lower quartile of Ramicane is: {lowerq}") print(f"The upper quartile of Ramicane is: {upperq}") print(f"The interquartile range of Ramicane is: {iqr}") print(f"The the median of Ramicane is: {quartiles[0.5]} ") lower_bound = round(lowerq - (1.5*iqr),2) upper_bound = round(upperq + (1.5*iqr),2) print(f"Values below {lower_bound} for Ramicane could be outliers.") print(f"Values above {upper_bound} for Ramicane could be outliers.") print("----------------------------------------------------") print("") # Infubinol inf = infubinol_tumorVol['Infubinol'] quartiles = inf.quantile([.25,.5,.75]).round(2) lowerq = quartiles[0.25] upperq = quartiles[0.75] iqr = round(upperq-lowerq,2) print(f"The lower quartile of Infubinol is: {lowerq}") print(f"The upper quartile of Infubinol is: {upperq}") print(f"The interquartile range of Infubinol is: {iqr}") print(f"The the median of Infubinol is: {quartiles[0.5]} ") lower_bound = 
round(lowerq - (1.5*iqr),2) upper_bound = round(upperq + (1.5*iqr),2) print(f"Values below {lower_bound} for Infubinol could be outliers.") print(f"Values above {upper_bound} for Infubinol could be outliers.") print("----------------------------------------------------") print("") # Ceftamin cef = ceftamin_tumorVol['Ceftamin'] quartiles = cef.quantile([.25,.5,.75]).round(2) lowerq = quartiles[0.25] upperq = quartiles[0.75] iqr = round(upperq-lowerq,2) print(f"The lower quartile of Ceftamin is: {lowerq}") print(f"The upper quartile of Ceftamin is: {upperq}") print(f"The interquartile range of Ceftamin is: {iqr}") print(f"The the median of Ceftamin is: {quartiles[0.5]} ") lower_bound = round(lowerq - (1.5*iqr),2) upper_bound = round(upperq + (1.5*iqr),2) print(f"Values below {lower_bound} for Ceftamin could be outliers.") print(f"Values above {upper_bound} for Ceftamin could be outliers.") print("----------------------------------------------------") print("") # Determine outliers using upper and lower bounds # + # Generate a box plot of the final tumor volume of each mouse across four regimens of interest fig1, ax1 = plt.subplots(figsize=(7,5)) ax1.set_title('Final Tumor Volume by Drug Regimen') ax1.set_xlabel('Drug Regimens') ax1.set_ylabel('Tumor Volume (mm3)') ax1.boxplot(data, sym='gD') plt.xticks([1,2,3,4],best_regimens) plt.show() # - # ## Line and Scatter Plots # Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin # Create Capomulin data df capomulin_data = clean_data_df.loc[clean_data_df['Drug Regimen'] == 'Capomulin'] capomulin_data # + # Group data by timepoint and get mean of tumor volume tumor_vol_mean = capomulin_data.groupby('Timepoint')['Tumor Volume (mm3)'].mean().round(3) cap_time_tumorVol = pd.DataFrame({'Avg Tumor Volume (Capomulin)':tumor_vol_mean}) # Create list of Timepoints and mean tumor volumes timepoints = list(cap_time_tumorVol.index.values) cap_tumorVol_mean = list(cap_time_tumorVol['Avg Tumor Volume (Capomulin)'].values) # Lineplot of timeseries and mean tumor volume with configuration v, = plt.plot(timepoints, cap_tumorVol_mean, marker="+", color="blue", linewidth="1", label ='Average') plt.title('Capomulin Avg Tumor Volume Time Series') plt.xlabel('Time (Days)') plt.ylabel('Tumor Volume (mm3)') plt.legend(handles= [v], loc="upper right") # Set limits plt.ylim(min(cap_tumorVol_mean)-1,max(cap_tumorVol_mean)+1) plt.show() # + # Extract data for a single mouse ID: b128 b128_capomulin_data = capomulin_data.loc[merged_data_df['Mouse ID']=='b128'].reset_index() # Create list of Timepoints and mean tumor volumes b128_timepoints = b128_capomulin_data['Timepoint'] b128_cap_tumorVol = b128_capomulin_data['Tumor Volume (mm3)'] # Lineplot of timeseries and mean tumor volume with configuration v, = plt.plot(b128_timepoints, b128_cap_tumorVol, marker="+", color="blue", linewidth="1", label ='Mouse ID: b128') plt.title('Capomulin Tumor Volume Time Series') plt.xlabel('Time (Days)') plt.ylabel('Tumor Volume (mm3)') plt.legend(handles= [v], loc="upper right") # Set limits plt.ylim(min(cap_tumorVol_mean)-1,max(cap_tumorVol_mean)+2) plt.show() # - # Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen # Group data by timepoint and get mean of tumor volume cap_mouse_weight = capomulin_data.groupby('Mouse ID')['Weight (g)'].mean().round(3) cap_mouse_tumorVol_mean = capomulin_data.groupby('Mouse ID')['Tumor Volume (mm3)'].mean().round(3) cap_weight_tumorVol = pd.DataFrame({'Mouse Weight 
(g)':cap_mouse_weight, 'Capomulin Avg Tumor Volume (mm3)':cap_mouse_tumorVol_mean}) cap_weight_tumorVol # + # Create scatter plot of weight and mean tumor volume with configuration plt.scatter( cap_weight_tumorVol['Mouse Weight (g)'], cap_weight_tumorVol['Capomulin Avg Tumor Volume (mm3)'], marker="o", facecolor="red", edgecolors="black", alpha = 0.75) plt.title('Mouse weight vs. Avg. Tumor Volume (Capomulin)') plt.xlabel('Mouse Weight (g)') plt.ylabel('Avg Tumor Volume (mm3)') plt.show() # - # ## Correlation and Regression # + # Calculate the correlation coefficient and linear regression model # for mouse weight and average tumor volume for the Capomulin regimen correlation = st.pearsonr(cap_weight_tumorVol['Mouse Weight (g)'], cap_weight_tumorVol['Capomulin Avg Tumor Volume (mm3)']) print(f'The correlation between Mouse Weight and Tumor Volume is {round(correlation[0],2)}') # + # Calculate and add linear regression equation to previous scatter plot x_values = cap_weight_tumorVol['Mouse Weight (g)'] y_values = cap_weight_tumorVol['Capomulin Avg Tumor Volume (mm3)'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eqn = f'Y = {str(round(slope,2))}x + {str(round(intercept,2))}' plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,'r-') plt.annotate(line_eqn,(18,37),fontsize=15,color='black') plt.title('Mouse weight vs. Avg. Tumor Volume (Capomulin)') plt.xlabel('Mouse Weight (g)') plt.ylabel('Avg Tumor Volume (mm3)') print(f'The r-squared is: {rvalue**2}') print(f'The equation of the regression line is: {line_eqn}') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Note # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. # + # Dependencies and Setup import pandas as pd # File to Load (Remember to Change These) file_to_load = "Resources/purchase_data.csv" # Read Purchasing File and store into Pandas data frame purchase_data = pd.read_csv(file_to_load) # - # ## Player Count # * Display the total number of players # # + #create player information player_info=purchase_data.loc[:,['Gender','Age','SN']] # remove duplicates player_info=player_info.drop_duplicates() #create dataframe player_total=player_info.count()[0] total_players=pd.DataFrame({'Total Players':[player_total]}) total_players # - # ## Purchasing Analysis (Total) # * Run basic calculations to obtain number of unique items, average price, etc. 
# # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # # + #Identify number of unique items Unique_items = len(purchase_data["Item Name"].unique()) #averge Purchase Price Avg_price = purchase_data["Price"].mean() #Calculating the Total Number of Purchases Number_purchases = len(purchase_data["Item Name"]) #Calculate Total Revenue Revenue = purchase_data["Price"].sum() #Create data frame purchase_anaylsis=pd.DataFrame({'Number of Unique Items': [Unique_items], 'Total Revenue': [Revenue], 'Number of Purchases': [Number_purchases], 'Average Purchase Price': [Avg_price]}) #Formatting the data purchase_anaylsis = purchase_anaylsis.round(2) purchase_anaylsis['Average Purchase Price']= purchase_anaylsis['Average Purchase Price'].map('${:,.2f}'.format) purchase_anaylsis['Total Revenue']=purchase_anaylsis['Total Revenue'].map('${:,.2f}'.format) purchase_anaylsis = purchase_anaylsis.loc[:,['Number of Unique Items', 'Average Purchase Price', 'Number of Purchases', 'Total Revenue']] purchase_anaylsis # - # ## Gender Demographics # * Percentage and Count of Male Players # # # * Percentage and Count of Female Players # # # * Percentage and Count of Other / Non-Disclosed # # # # + # Basic Calculations gender_demo = player_info["Gender"].value_counts() gender_demo_percents = player_info['Gender'].value_counts(normalize=True).mul(100).round(2).astype(str) + '%' # Data Cleanup gender_demo = pd.DataFrame({"Total Count": gender_demo, "Percentage of Players": gender_demo_percents}) gender_demo = gender_demo.round(2) #Data Gender_demographics gender_demo # - # # ## Purchasing Analysis (Gender) # * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender # # # # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # + # Create Gender Group gender = purchase_data.groupby(["Gender"]) # Data for Table purchase_count = gender["SN"].count() purchase_price = gender["Price"].mean() purchase_value = gender["Price"].sum() # Remove all duplicates duplicates = purchase_data.drop_duplicates(subset='SN', keep="first") new_gender = duplicates.groupby(["Gender"]) # Find normalized data purchase_average = (gender["Price"].sum() / new_gender["SN"].count()) # Create new DataFrame gender_analysis = pd.DataFrame({"Purchase Count": purchase_count, "Average Purchase Price": purchase_price, "Total Purchase Value": purchase_value, "Average Purchase Total by Person": purchase_average}) # DataFrame formatting gender_analysis["Average Purchase Price"] = gender_analysis["Average Purchase Price"].map("${:.2f}".format) gender_analysis["Total Purchase Value"] = gender_analysis["Total Purchase Value"].map("${:.2f}".format) gender_analysis["Average Purchase Total by Person"] = gender_analysis["Average Purchase Total by Person"].map("${:.2f}".format) gender_analysis = gender_analysis[["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Average Purchase Total by Person"]] gender_analysis # - # ## Age Demographics # * Establish bins for ages # # # * Categorize the existing players using the age bins. 
Hint: use pd.cut() # # # * Calculate the numbers and percentages by age group # # # * Create a summary data frame to hold the results # # # * Optional: round the percentage column to two decimal points # # # * Display Age Demographics Table # # + age_bins = [0, 9.90, 14.90, 19.90, 24.9, 29.9, 34.90, 39.90, 9999999] group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", ">40"] #Use Cut to categorize players player_info["Age Ranges"] = pd.cut(player_info["Age"], age_bins, labels=group_names) # Create the table data age_demo_totals = player_info["Age Ranges"].value_counts() age_demo_percents = ((age_demo_totals / player_total) * 100).round(2).astype(str) + '%' age_demographics = pd.DataFrame({"Total Count": age_demo_totals, "Percent of Players": age_demo_percents}) age_demographics = age_demographics.sort_index() age_demographics # - # ## Purchasing Analysis (Age) # * Bin the purchase_data data frame by age # # # * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below # # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # + # Binning age_bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 99999] binLabel = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", ">40"] # Add bins to new dataframe and groupby bin_df = purchase_data.copy() bin_df["Age Groups"] = pd.cut(bin_df["Age"], age_bins, labels=binLabel) binColumn = pd.cut(bin_df["Age"], age_bins, labels=binLabel) grouped_bin = bin_df.groupby(["Age Groups"]) # Data for table binPurchCount = grouped_bin["Age"].count() binPurchAver = grouped_bin["Price"].mean() binPurchTotal = grouped_bin["Price"].sum() # deleting duplicates for new counts binduplicate = purchase_data.drop_duplicates(subset='SN', keep="first") binduplicate["Age Groups"] = pd.cut(binduplicate["Age"], age_bins, labels=binLabel) binduplicate = binduplicate.groupby(["Age Groups"]) binTPPP = (grouped_bin["Price"].sum() / binduplicate["SN"].count()) binTPPP # Create new DF and format Age_Demo = pd.DataFrame({"Purchase Count": binPurchCount, "Average Purchase Price": binPurchAver, "Total Purchase Value": binPurchTotal, "Avg Price Per Person Totals": binTPPP}) Age_Demo["Average Purchase Price"] = Age_Demo["Average Purchase Price"].map("${:.2f}".format) Age_Demo["Total Purchase Value"] = Age_Demo["Total Purchase Value"].map("${:.2f}".format) Age_Demo["Avg Price Per Person Totals"] = Age_Demo["Avg Price Per Person Totals"].map("${:.2f}".format) Age_Demo = Age_Demo[["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Avg Price Per Person Totals"]] Age_Demo.head(10) # - # ## Top Spenders # * Run basic calculations to obtain the results in the table below # # # * Create a summary data frame to hold the results # # # * Sort the total purchase value column in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the summary data frame # # # + top_spend = purchase_data['Item ID'].groupby(purchase_data['SN']).count() # set the dataframe for top spender top_spend = pd.DataFrame(data=top_spend) top_spend.columns = ['Purchase Count'] top_spend['Average Purchase Price'] = round(purchase_data['Price'].groupby(purchase_data['SN']).mean(),2) top_spend['Total Purchase Value'] = purchase_data['Price'].groupby(purchase_data['SN']).sum() top_spend.sort_values(by=['Total Purchase Value'], ascending=False, inplace=True) 
top_spend['Average Purchase Price'] = top_spend['Average Purchase Price'].map('${:,.2f}'.format) top_spend['Total Purchase Value'] = top_spend['Total Purchase Value'].map('${:,.2f}'.format) top_spend.head() # - # ## Most Popular Items # * Retrieve the Item ID, Item Name, and Item Price columns # # # * Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value # # # * Create a summary data frame to hold the results # # # * Sort the purchase count column in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the summary data frame # # # + # Data for table groupItem = purchase_data.groupby(["Item ID", "Item Name"]) groupItemCount = groupItem["SN"].count() groupPriceSum = groupItem["Price"].sum() groupItemPPP = (groupPriceSum / groupItemCount) groupItemValue = (groupItemPPP * groupItemCount) # New DF with formatting Popular_Item = pd.DataFrame({"Purchase Count": groupItemCount, "Item Price": groupItemPPP, "Total Purchase Value": groupItemValue}) Popular_Item = Popular_Item.sort_values("Purchase Count", ascending=False) Popular_Item["Item Price"] = Popular_Item["Item Price"].map("${:.2f}".format) Popular_Item["Total Purchase Value"] = Popular_Item["Total Purchase Value"].map("${:.2f}".format) Popular_Item = Popular_Item[["Purchase Count", "Item Price", "Total Purchase Value"]] Popular_Item.head() # - # ## Most Profitable Items # * Sort the above table by total purchase value in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the data frame # # # + # Data for table groupedItem = purchase_data.groupby(["Item ID", "Item Name"]) groupedItemCount = groupedItem["Gender"].count() groupedSum = groupedItem["Price"].sum() groupedItemPPP = (groupedSum / groupedItemCount) # Make a new DF and format Prof_Val = pd.DataFrame({"Purchase Count": groupedItemCount, "Item Price": groupedItemPPP, "Total Purchase Value": groupedSum}) Prof_Val = Prof_Val.sort_values("Total Purchase Value", ascending=False) Prof_Val["Item Price"] = Prof_Val["Item Price"].map("${:.2f}".format) Prof_Val["Total Purchase Value"] = Prof_Val["Total Purchase Value"].map("${:.2f}".format) Prof_Val = Prof_Val[["Purchase Count", "Item Price", "Total Purchase Value"]] Prof_Val.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="8yI7r1V4K43j" # Copyright 2022 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="uRb_t3yvvZe9" # # Google I/O 2022: Product Fairness Testing for Developers # # --- # **Note: if you want to run this notebook, you need to clone it: use *__"File" -> "Save a copy in Drive"__* to get your own copy that you can play with.** # # # # # + [markdown] id="tTxo-SRaNhRP" # # Content # # This notebook was created for Google I/O workshop on product fairness testing. 
# # ## [1. Load and explore examples in our dataset](#scrollTo=kKQAvjrvNnWO) # # # ## [2. Filter data using the sensitive terms dataset](#scrollTo=mKwIfGoFN_qK) # # # ## [3. Refine our analysis using interactions](#scrollTo=jPHSuAfCUCkk) # + [markdown] id="kKQAvjrvNnWO" # # 1. Load and explore examples in our dataset # Let's load some example data to analyze. These examples are a sample of about 1.1k conversations from WikiDialog, a dataset of 11 million generated conversations. # + id="n4adJk13u9e5" #@title Import relevant libraries # These are standard Python libraries useful to load and filter data. import re import csv import collections import io import logging import random # Pandas and data_table to represent the data and view it nicely. import pandas as pd from google.colab import data_table # The datasets wiki_dialog_url = 'https://raw.githubusercontent.com/google/responsible-innovation/main/data/wiki_dialog.csv' sensitive_terms_url = 'https://raw.githubusercontent.com/google/responsible-innovation/main/data/sensitive_terms.csv' interaction_table_url = 'https://raw.githubusercontent.com/google/responsible-innovation/main/data/interaction_table.csv' # + id="yeUT1W-VvHvS" #@title Load data # functions def view(df, include_index=False): """Display a Pandas data frame as an easy to use data table.""" view_table = data_table.DataTable(df, include_index=include_index, max_columns=100, num_rows_per_page=10) return view_table # Load worksheet. examples = pd.read_csv(wiki_dialog_url, keep_default_na=False) # View data. view(examples[['pid', 'title', 'utterances']]) # + [markdown] id="mKwIfGoFN_qK" # # 2. Filter data using the sensitive terms dataset # Let's try and be a bit more targetted with our exploration using the Sensitive Terms dataset. The dataset consists of a taxonomy of sensitive terms and adjectives to aid fairness analysis. # + id="qi-z7Ed6wyeS" #@title Load the Sensitive Terms dataset. sensitive_terms = pd.read_csv(sensitive_terms_url, keep_default_na=False, converters={ 'sub_cat': str, 'sentiment': str, 'sensitive_characteristic': str, }) view(sensitive_terms) # + id="0WvWNdCgOe2m" #@title Implement matcher for sensitive terms. # Create a regex matcher for the terms in the dataset. We can # use this matcher to efficiently find and extract terms # from the dataset that appear in sentences. term_matcher = re.compile(r'\b(' + '|'.join( f'{term.lower()}' for term in sensitive_terms['term']) + r')\b') def get_matched_terms(text): return set(term_matcher.findall(text.lower())) example_sentence = "He is an abusive man." #@param {type:"string"} get_matched_terms(example_sentence) # + id="qfxJu6hbPwM2" #@title Filter the dataset to rows matching sensitive terms. def get_matched_terms_string(row): """A helper function to return the matched terms as a string.""" matched_terms = get_matched_terms(row['utterances']) return ", ".join(matched_terms) # Extend examples to include the matched terms as another column. # (axis=1) means that we will apply the above function to each row. examples['matched_terms'] = examples.apply(get_matched_terms_string, axis=1) examples_filtered_by_terms = examples[examples['matched_terms'] != ''] view(examples_filtered_by_terms[['pid', 'title', 'utterances', 'matched_terms']]) # + [markdown] id="jPHSuAfCUCkk" # # 3. Refine the analysis using interactions # One observation is that we need to refine our analysis by looking at interactions between the sensitive terms and adjectives. We include a table of interaction terms. 
# # + id="X8oeODojR-OW" #@title Load sensitive-interaction table. interaction_table = pd.read_csv(interaction_table_url, keep_default_na=False) interaction_table = interaction_table.set_index('Interaction Type') view(interaction_table, include_index=True) # + id="3E493OXgUCAH" #@title Implement matcher for sensitive interactions. # Each term can describe a sensitive characteristic or an adjective type. # Store a mapping of them here. sensitive_categories, adjective_categories = {}, {} for _, row in sensitive_terms.iterrows(): label = row['category'] if row['sub_cat']: label += f": {row['sub_cat']}" if row['sentiment'] != 'NULL': label += f"/{row['sentiment']}" adjective_categories[row['term'].lower()] = label if row['sensitive_characteristic'] != "NULL": sensitive_categories[row['term'].lower()] = row['sensitive_characteristic'] # Convert the interaction table into an easier format to find. sensitive_interactions = set() for term1, row in interaction_table.items(): for term2, value in row.items(): if value == 'X': sensitive_interactions.add((term1.strip(), term2.strip())) # Define a function to find interactions. def get_matched_interactions(matched_terms): """Find interactions between the `matched_terms` that might be sensitive.""" interactions = [] matched_terms = sorted(matched_terms) for i, term1 in enumerate(matched_terms): id1 = sensitive_categories.get(term1) adj1 = adjective_categories.get(term1) for term2 in matched_terms[i+1:]: id2 = sensitive_categories.get(term2) adj2 = adjective_categories.get(term2) if (id1, adj2) in sensitive_interactions: interactions.append(f'{id1} ({term1}) x {adj2} ({term2})') elif (id2, adj1) in sensitive_interactions: interactions.append(f'{id2} ({term2}) x {adj1} ({term1})') return set(interactions) example = "aggressive men" #@param{type: 'string'} matched_terms = get_matched_terms(example) get_matched_interactions(matched_terms) # + id="76pOpsHAxX35" #@title Separate the given and generated text. def get_generated_text(row): generated_questions = [] for utterance in row['utterances'].split('\n'): if utterance.startswith("Q:"): generated_questions.append(utterance) return "\n".join(generated_questions) def get_given_text(row): generated_questions = [] for utterance in row['utterances'].split('\n'): if utterance.startswith("A:"): generated_questions.append(utterance) return "\n".join(generated_questions) examples["generated_text"] = examples.apply(get_generated_text, axis=1) examples["given_text"] = examples.apply(get_given_text, axis=1) view(examples[['pid', 'title', 'generated_text', 'given_text']]) # + id="BrQ48aNPYfcl" #@title Filter the dataset to rows that match sensitive interactions. # Filter rows that match any of the aforementioned terms. def get_matched_interactions_string(row): generated_terms = get_matched_terms(row['generated_text']) given_terms = get_matched_terms(row['given_text']) generated_terms.difference_update(given_terms) matched_interactions = get_matched_interactions(generated_terms) return ", ".join(matched_interactions) examples["matched_interactions"] = examples.apply( get_matched_interactions_string, axis=1) examples_filtered_by_interactions = examples[ examples["matched_interactions"] != ""] # + id="YHsRY0y-ev3p" #@title Count the number of interactions. 
examples_filtered_by_interactions["pid"].describe() # + id="4w1h2XeSe_g0" #@title Let's look at the first 8 examples view(examples_filtered_by_interactions.head(n=8)[ ['pid', 'title', 'utterances', 'matched_terms', 'matched_interactions']]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: hark-env # language: python # name: hark-env # --- # + import HARK.ConsumptionSaving.ConsPortfolioFrameModel as cpfm import HARK.ConsumptionSaving.ConsPortfolioModel as cpm import numpy as np from HARK.utilities import ( CRRAutility, ) # - # The `FrameAgentType` is an alternative way to specify a model. # # The library contains a demonstration of this form of model, `ConsPortfolioFrameModel`, which is a replica of the `ConsPortfolioModel`. # # This notebook compares the results of simulations of the two models. # + pct = cpm.PortfolioConsumerType(T_sim=5000, AgentCount=200) pct.cycles = 0 # Solve the model under the given parameters pct.solve() pct.track_vars += [ "mNrm", "cNrm", "Share", "aNrm", "Risky", "Adjust", "PermShk", "TranShk", "bNrm", "who_dies" ] pct.initialize_sim() pct.simulate() # - shock_history = {shock : pct.history[shock] for shock in pct.shocks} shock_history['who_dies'] = pct.history['who_dies'] # + pcft = cpfm.PortfolioConsumerFrameType( T_sim=5000, AgentCount=200, read_shocks = True ) pcft.shock_history = shock_history pcft.cycles = 0 # Solve the model under the given parameters pcft.solve() pcft.track_vars += [ "mNrm", "cNrm", "Share", "aNrm", "Adjust", "PermShk", "TranShk", "bNrm", 'U' ] pcft.initialize_sim() pcft.simulate() # - pcft.history['TranShk'].shape # + import matplotlib.pyplot as plt plt.plot(range(5000), pct.history['PermShk'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['PermShk'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() # - plt.plot(range(5000), pct.history['TranShk'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['TranShk'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() plt.plot(range(5000), pct.history['bNrm'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['bNrm'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() # + #plt.plot(range(5000), pct.history['Risky'].mean(axis=1), label = 'original') #plt.plot(range(5000), pcft.history['Risky'].mean(axis=1), label = 'frames', alpha = 0.5) #plt.legend() # - plt.plot(range(5000), pct.history['aNrm'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['aNrm'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() plt.plot(range(5000), pct.history['mNrm'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['mNrm'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() plt.plot(range(5000), pct.history['cNrm'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['cNrm'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() # **TODO**: Handly Risky as an aggregate value. 
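# Beyond overlaying the means, a quick numerical check of how closely the two simulation histories agree (a sketch; the variable list and the default tolerances of `np.allclose` are arbitrary choices, restricted to variables tracked by both models above):

for var in ["PermShk", "TranShk", "bNrm", "aNrm", "mNrm", "cNrm", "Share"]:
    # element-wise comparison of the tracked histories from the two simulations
    print(var, np.allclose(pct.history[var], pcft.history[var]))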
# + #pct.history['Risky'][:3, :3] # + #pcft.history['Risky'][:3, :3] # - plt.plot(range(5000), pct.history['Share'].mean(axis=1), label = 'original') plt.plot(range(5000), pcft.history['Share'].mean(axis=1), label = 'frames', alpha = 0.5) plt.legend() plt.plot(range(5000), pcft.history['cNrm'].mean(axis=1), label = 'frames - cNrm', alpha = 0.5) plt.plot(range(5000), pcft.history['U'].mean(axis=1), label = 'frames - U', alpha = 0.5) plt.legend() pcft.history['U'] pcft.history['U'].mean(axis=1) pcft.history['U'][0,:] pcft.history['cNrm'][0,:] pcft.parameters['CRRA'] CRRAutility(pcft.history['cNrm'][0,:], 5) pcft.frames[4].parents # # Visualizing the Transition Equations import networkx as nx # + g = nx.DiGraph() g.add_nodes_from([ (frame.name(), {'control' : frame.control, 'reward' : frame.reward, 'aggregate' : frame.aggregate}) for frame in pcft.frames ]) for frame in pcft.frames: for child in frame.children: g.add_nodes_from([ (child.name(), { 'control' : child.control, 'reward' : child.reward, 'aggregate' : child.aggregate })]) g.add_edge(frame.name(), child.name()) # + pos = nx.drawing.layout.kamada_kawai_layout(g) node_options = { "node_size": 2500, "node_color": "white", "edgecolors": "black", "linewidths": 1, "pos" : pos } edge_options = { "node_size": 2500, "width": 2, "pos" : pos } label_options = { "font_size": 12, #"labels" : {node : str(node[0]) if len(node) == 1 else str(node) for node in g.nodes}, "pos" : pos } reward_nodes = [k for k,v in g.nodes(data = True) if v['reward']] control_nodes = [k for k,v in g.nodes(data = True) if v['control']] aggregate_nodes = [k for k,v in g.nodes(data = True) if v['aggregate']] chance_nodes = [node for node in g.nodes() if node not in reward_nodes and node not in control_nodes and node not in aggregate_nodes ] plt.figure(figsize=(16,12)) nx.draw_networkx_nodes(g, nodelist = chance_nodes, node_shape = 'o', **node_options) nx.draw_networkx_nodes(g, nodelist = reward_nodes, node_shape = 'd', **node_options) nx.draw_networkx_nodes(g, nodelist = control_nodes, node_shape = 's', **node_options) nx.draw_networkx_nodes(g, nodelist = aggregate_nodes, node_shape = 'h', **node_options) nx.draw_networkx_edges(g, **edge_options) nx.draw_networkx_labels(g, **label_options) # - # Note that in the HARK `ConsIndShockModel`, from which the `ConsPortfolio` model inherits, the aggregate permanent shocks are considered to be portions of the permanent shocks experienced by the agents, not additions to those idiosyncratic shocks. Hence, they do not show up directly in the problem solved by the agent. This explains why the aggregate income levels are in a separarte component of the graph. # # Building the Solver [INCOMPLETE] # Preliminery work towards a generic solver for FramedAgentTypes. 
controls = [frame for frame in pcft.frames if frame.control] def get_expected_return_function(control): # Input: a control frame # Returns: function of the control variable (control frame target) # that returns the expected return, which is # the sum of: # - direct rewards # - expected value of next-frame states (not yet implemented) # rewards = [child for child in control.children if child.reward] expected_values = [] # TODO ## note: function signature is what's needed for scipy.optimize def expected_return_function(x, *args): ## returns the sum of ## the reward functions evaluated in context of ## - parameters ## - the control variable input # x - array of inputs, here the control frame target # args - a tuple of other parameters needed to complete the function expected_return = 0 for reward in rewards: ## TODO: figuring out the ordering of `x` and `args` needed for multiple downstream scopes local_context = {} # indexing through the x and args values i = 0 num_control_vars = None # assumes that all frame scopes list model variables first, parameters later # should enforce or clarify at the frame level. for var in reward.scope: if var in control.target: local_context[var] = x[i] i = i + 1 elif var in pcft.parameters: if num_control_vars is None: num_control_vars = i local_context[var] = args[i - num_control_vars] i = i + 1 # can `self` be implicit here? expected_return += reward.transition(reward, **local_context) return expected_return return expected_return_function def optimal_policy_function(control): erf = get_expected_return_function(control) constraints = control.constraints ## these will reference the context of the control transition, including scope ## Returns function: ## input: control frame scope ## output: result of scipy.optimize of the erf with respect to constraints ## getting the optimal input (control variable) value return func def approximate_optimal_policy_function(control, grid): ## returns a new function: ## that is an interpolation over optimal_policy_function ## over the grid return func # # Solving methods on the Frame, FrameAgentType # # Using Frame Solution in simulation and comparing result. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Gradient descent in action # ## Goal # # The goal of this lab is to explore how chasing function gradients can find the function minimum. If the function is a loss function representing the quality of a model's fit to a training set, we can use function minimization to train models. # # When there is no symbolic solution to minimizing the loss function, we need an iterative solution, such as gradient descent. # ## Set up # + import numpy as np import pandas as pd from mpl_toolkits.mplot3d import Axes3D # required even though not ref'd! 
from sklearn.linear_model import LinearRegression, LogisticRegression, Lasso, Ridge from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, precision_score, recall_score, log_loss, mean_absolute_error import matplotlib.pyplot as plt import matplotlib as mpl # %config InlineBackend.figure_format = 'retina' # - def normalize(X): X = X.copy() for colname in X.columns: u = np.mean(X[colname]) s = np.std(X[colname]) if s>0.0: X[colname] = (X[colname] - u) / s else: X[colname] = (X[colname] - u) return X def plot3d(X, y, b0_range, b1_range): b0_mesh, b1_mesh = np.meshgrid(b0_range, b1_range, indexing='ij') L = np.zeros(b0_mesh.shape) for i in range(len(b0_range)): for j in range(len(b1_range)): L[i,j] = loss([b0_range[i],b1_range[j]], X=X, y=y) fig = plt.figure(figsize=(5,4)) ax = fig.add_subplot(111, projection='3d') surface = ax.plot_surface(b0_mesh, b1_mesh, L, alpha=0.7, cmap='coolwarm') ax.set_xlabel('$\\beta_0$', fontsize=14) ax.set_ylabel('$\\beta_1$', fontsize=14) # ## Simple function gradient descent # # Let's define a very simple quadratic in one variable, $y = f(x) = (x-2)^2$ and then use an iterative solution to find the minimum value. def f(x) : return (x-2)**2 # We can hide all of the plotting details in a function, as we will use it multiple times. def fplot(f,xrange,fstr='',x0=None,xn=None): plt.figure(figsize=(3.5,2)) lx = np.linspace(*xrange,200) fx = [f(x) for x in lx] plt.plot(lx, fx, lw=.75) if x0 is not None: plt.scatter([x0], [f(x0)], c='orange') plt.scatter([xn], [f(xn)], c='green') plt.xlabel("$x$", fontsize=12) plt.ylabel(fstr, fontsize=12) fplot(f, xrange=(0,4), fstr="$(x-2)^2$") # To minimize a function of $x$, we need the derivative of $f(x)$, which is just a function that gives the slope of the curve at every $x$. # # **1. Define a function returning the derivative of $f(x)$** # # You can ask for symbolic derivatives at a variety of sites, but here's one [solution](https://www.symbolab.com/solver/derivative-calculator/%5Cfrac%7Bd%7D%7Bdx%7D%5Cleft(x-2%5Cright)%5E%7B2%7D). def df(x): return 2*(x-2) #
    # Solution #
    # def df(x): return 2*(x-2)
    #
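# As an optional sanity check, a central finite difference should closely match the analytic derivative at any $x$ (the helper `numeric_df` below is illustrative only):

# +
# illustrative helper: finite-difference approximation of the derivative
def numeric_df(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for xv in (0.0, 0.8, 2.0, 3.5):
    print(xv, df(xv), numeric_df(f, xv))
# -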
    # **2. Pick an initial $x$ location and take a single step according to the derivative** # # Use a learning rate of $\eta = 0.4$. The output should be `1.76`. (Also keep in mind that the minimum value is clearly at $x=2$.) x = .8 # initial x location x -=.4*df(x) print(x) #
    # Solution #
    # x = x - .4 * df(x); print(x)
    # 
    #
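# As a quick check of the arithmetic: $x' = 0.8 - 0.4 \cdot 2(0.8-2) = 0.8 + 0.96 = 1.76$, which matches the expected output above.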
# **Q.** How can we symbolically optimize a quadratic function like this with a single minimum?

# iterate x using df(x) until df(x) is 0 (at x = 2); or solve df(x) = 0 directly

#
    # Solution # When the derivative goes to zero, it means the curve is flat, which in turn means we are at the function minimum. Set the derivative equal to zero and solve for $x$: $\frac{d}{dx} (x-2)^2 = 2(x-2) = 2x-4 = 0$. Solving for $x$ gives $x=2$. #
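# If you want to check the algebra mechanically, a symbolic solver gives the same answer (this assumes `sympy` is installed; it is not otherwise used in this lab):

# +
import sympy as sp

xs = sp.symbols('x')
sp.solve(sp.diff((xs - 2)**2, xs), xs)   # [2]
# -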
# **3. Create a loop that takes five more steps (same learning rate)**
#
# The output should look like:
#
# ```
# 1.952
# 1.9904
# 1.99808
# 1.999616
# 1.9999232
# ```

for i in range(5):
    x -= 0.4*df(x)
    print(x)

#
    # Solution #
    # for i in range(5):
    #     x = x - 0.4 * df(x); print(x)
    # 
    #
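# Why the error shrinks so quickly here: each update gives $x_{k+1} - 2 = (1 - 2\eta)(x_k - 2)$, so with $\eta = 0.4$ the distance to the minimum shrinks by a factor of $1/(1-0.8) = 5$ on every step, exactly as the printed values show.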
    # Notice how fast the iteration moves $x$ to the location where $f(x)$ is minimum! # ### Minimizing a more complicated function # # This iterative minimization approach works for any (smooth) function, assuming we choose a small enough learning rate. For example, let's find one of the minima for $f(x) = x \sin(0.6x)$ in the range \[-1,10\]. The plot should look something like: # # # # Depending on where we start, minimization will find either minimum at $x=0$ or at $8.18$. The location of the lowest function value is called the global minimum and any others are called local minima. # **1. Define a function for $x \sin(0.6x)$** def f(x) : return np.sin(0.6*x)*x #
    # Solution #
    # def f(x) : return np.sin(0.6*x)*x
    # 
    #
    fplot(f, xrange=(-1,10), fstr="$x \sin(0.6x)$") #plt.tight_layout(); plt.savefig("xsinx.png",dpi=150,bbox_inches=0) # **2. Define the derivative function: $\frac{df}{dx} = 0.6x \cos(0.6 x) + \sin(0.6 x)$** def df(x):return 0.6*x*np.cos(0.6*x)+np.sin(0.6*x) #
    # Solution #
    # def df(x): return 0.6*x * np.cos(0.6*x) + np.sin(0.6*x)
    # 
    #
# **3. Pick a random initial value, $x_0$, between -1 and 10; display that value**

x0 = np.random.rand()*11 - 1 # pick value between -1 and 10
x0

# **4. Start $x$ at $x_0$ and iterate 12 times using the gradient descent method**
#
# Use a learning rate of 0.4.

x = x0
for i in range(12):
    x = x - .4 * df(x); print(f"{x:.10f}")

# **5. Plot the starting and stopping locations on the curve**

fplot(f, xrange=(-1,10), fstr="$x \sin(0.6x)$", x0=x0, xn=x)

# **6. Rerun the notebook several times to see how the random start location affects where it terminates.**

# **Q.** Rather than iterating a fixed number of times, what's a better way to terminate the iteration?

# set a threshold on the gradient and stop when |df(x)| is smaller than a certain small value

#
# Solution
# A simple stopping condition is when the (norm of the) gradient goes to zero, meaning that it does not suggest moving in any direction to get a lower loss or function value. We could also stop when the new $x$ location is not substantially different from the previous one.

#
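# One way to write this down (a minimal sketch; the name `minimize_tol` and the tolerance value are illustrative):

# +
# minimal sketch of a tolerance-based stopping rule
def minimize_tol(df, x0, eta=0.4, tol=1e-8, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:      # gradient is effectively zero: the curve is flat here
            break
        x = x - eta * g
    return x

minimize_tol(df, x0=5.0)      # should land near the minimum around x = 8.18
# -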
# ## The effect of learning rate on convergence
#
# Let's move back to the simple function $f(x) = (x-2)^2$ and consider different learning rates to see the effect.

def df(x): return 2*(x-2)

# Let's codify the minimization process in a handy function:

def minimize(df,x0,eta):
    x = x0
    for i in range(10):
        x = x - eta * df(x); print(f"{x:.2f}")

# **1. Update the gradient descent loop to use a learning rate of 1.0**
#
# Notice how the learning rate is so large that the iteration oscillates between two (incorrect) solutions. The output should be:
#
# ```
# 3.20
# 0.80
# 3.20
# 0.80
# 3.20
# 0.80
# 3.20
# 0.80
# 3.20
# 0.80
# ```

minimize(df, x0=0.8, eta=1)

# **2. Update the gradient descent loop to use a learning rate of 2.0**
#
# Notice how the solution diverges when the learning rate is too big. The output should be:
#
# ```
# 5.60
# -8.80
# 34.40
# -95.20
# 293.60
# -872.80
# 2626.40
# -7871.20
# 23621.60
# -70856.80
# ```

minimize(df, x0=0.8, eta=2)

# **3. Update the gradient descent loop to use a learning rate of 0.01**
#
# Notice how **slowly** the solution converges when the learning rate is too small. The output should be:
#
# ```
# 0.82
# 0.85
# 0.87
# 0.89
# 0.92
# 0.94
# 0.96
# 0.98
# 1.00
# 1.02
# ```

minimize(df, x0=0.8, eta=.01)

# **Q.** How do you choose the learning rate $\eta$?

# a small rate will always reach the minimum, but very slowly; a big rate will bounce around the solution or even diverge

#
# Solution
# Unfortunately, the learning rate is specific to each problem. A general strategy is to start with a small $\eta$ and gradually increase it until the iteration starts to oscillate around the solution, then back off a little bit. Having a single global learning rate for un-normalized data usually means very slow convergence: a learning rate small enough to be appropriate for a variable with a small range is unlikely to be appropriate for a variable with a large range. This is overcome with more sophisticated gradient descent methods, such as the Adagrad strategy you will use in your project, which keep a history of gradients and use it to speed up descent in directions whose gradients are historically shallow.

#
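# To make the idea concrete, here is a minimal one-parameter sketch of an Adagrad-style update (illustrative only, not the project implementation): the accumulated squared gradients shrink the effective step size in directions that have seen large gradients.

# +
# minimal Adagrad-style sketch for a single parameter (illustrative only)
def minimize_adagrad(df, x0, eta=1.0, eps=1e-8, n_steps=20):
    x, hist = x0, 0.0
    for _ in range(n_steps):
        g = df(x)
        hist += g * g                          # accumulate squared gradients
        x = x - eta * g / (np.sqrt(hist) + eps)  # per-parameter adaptive step
    return x

minimize_adagrad(df, x0=0.8)   # approaches 2 even with a nominally large eta
# -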
    # ## Examine loss surface for LSTAT var from Boston dataset # # Turning to a common toy data set, the Boston housing data set, let's pick the most important single feature and look at the loss function for simple OLS regression. # **1. Load the Boston data set into a data frame** boston = load_boston() X = pd.DataFrame(boston.data, columns=boston.feature_names) y = boston.target X.head() # **2. Train an OLS linear regression model** lm = LinearRegression() lm.fit(X, y) # **3. Using `rfpimp` package, display the feature importances** # + from rfpimp import * I = importances(lm, X, y) plot_importances(I) # - # **4. LSTAT is most important variable so train a new model with just `X['LSTAT']`** # # Print out the true $\beta_0, \beta_1$ coefficients. # + X_ = X['LSTAT'].values.reshape(-1,1) # Extract just one x variable lm = LinearRegression() lm.fit(X_, y) print(f"True OLS coefficients: {np.array([lm.intercept_]+list(lm.coef_))}") # - # **5. Show marginal plot of LSTAT vs price** fig, ax1 = plt.subplots(figsize=(5,2.0)) ax1.scatter(X_, y, s=15, alpha=.5) lx = np.linspace(np.min(X_), np.max(X_), num=len(X)) ax1.plot(lx, lm.predict(lx.reshape(-1,1)), c='orange') ax1.set_xlabel("LSTAT", fontsize=10) ax1.set_ylabel("price", fontsize=10) plt.show() # **6. Define an MSE loss function for single variable regression** # # $$ # \frac{1}{n} \sum_{i=1}^n (y - (\beta_0 + \beta_1 x^{(i)}))^2 # $$ def loss(B,X,y): # B=[beta0, beta1] y_pred = ... return np.mean(...) #
    # Solution #
    # def loss(B,X,y):
    #     y_pred = B[0] + X*B[1]
    #     return np.mean((y - y_pred)**2)
    # 
    #
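# For reference, the gradient of this loss with respect to $\beta_0$ and $\beta_1$ is
#
# $$
# \frac{\partial L}{\partial \beta_0} = -\frac{2}{n} \sum_{i=1}^n \big(y^{(i)} - (\beta_0 + \beta_1 x^{(i)})\big), \qquad
# \frac{\partial L}{\partial \beta_1} = -\frac{2}{n} \sum_{i=1}^n x^{(i)} \big(y^{(i)} - (\beta_0 + \beta_1 x^{(i)})\big)
# $$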
    # **7. Check the loss function value at the true OLS coordinates** loss(np.array([34.55384088, -0.95004935]), X_, y) # demo loss function at minimum # **8. Plot the loss function in 3D in region around $\beta$s** # # When you enter the correct loss function above, the plot should look something like: # # # # # + b0_range = np.linspace(-50, 120, 70) b1_range = np.linspace(-6, 4, 70) plot3d(X_, y, b0_range, b1_range) #plt.tight_layout(); plt.savefig("boston-loss.png",dpi=150,bbox_inches=0) # - # ### Repeat using normalized data # **1. Normalize the $x$ variables** X_norm = normalize(X) # **2. Retrain the model** # + X_ = X_norm['LSTAT'].values.reshape(-1,1) lm = LinearRegression() lm.fit(X_, y) print(f"True OLS coefficients: {np.array([lm.intercept_]+list(lm.coef_))}") # - # **3. Show the marginal plot again** # # Notice how only the $x$ scale has changed but not $y$, nor has the shape changed. fig, ax1 = plt.subplots(figsize=(5,2.0)) ax1.scatter(X_, y, s=15, alpha=.5) lx = np.linspace(np.min(X_), np.max(X_), num=len(X)) ax1.plot(lx, lm.predict(lx.reshape(-1,1)), c='orange') ax1.set_xlabel("LSTAT", fontsize=10) ax1.set_ylabel("price", fontsize=10) plt.show() # **4. Plot the cost surface with a region around the new minimum location** # + b0_range = np.linspace(15, 30, 70) b1_range = np.linspace(-10, 5, 70) plot3d(X_, y, b0_range, b1_range) # - # **Q.** Compare the loss function contour lines of the unnormalized and normalized variables. # #
# Solution
# The normalized variables clearly result in a bowl-shaped loss function, which gives spherical contours. A gradient descent method with a single learning rate will converge much faster given this shape.

#
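# In fact, after standardizing $x$ (mean 0, variance 1) the curvature matrix of this loss, $\frac{2}{n}\sum_i \begin{pmatrix} 1 & x^{(i)} \\ x^{(i)} & (x^{(i)})^2 \end{pmatrix}$, becomes $2I$, so the contours are circles and one learning rate suits both directions equally well.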
    # **Q.** Look at the loss function directly from above; in which direction do the gradients point? # #
# Solution
# The negative of the gradients will point directly at the location of the minimum loss value. The gradients themselves, however, point in the exact opposite direction.
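# As an optional wrap-up, the pieces above can be combined into a hand-rolled gradient descent on $(\beta_0, \beta_1)$ for the normalized LSTAT data. This is a sketch only; `loss_gradient` is just the analytic gradient written out earlier.

# +
# optional wrap-up sketch: gradient descent on (beta0, beta1) for normalized LSTAT
def loss_gradient(B, X, y):
    y_pred = B[0] + X*B[1]
    return np.array([-2*np.mean(y - y_pred), -2*np.mean((y - y_pred)*X)])

B = np.array([0.0, 0.0])
eta = 0.1
for i in range(1000):
    B = B - eta * loss_gradient(B, X_.flatten(), y)
print(B)   # should be close to the OLS coefficients printed earlier
# -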
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Builder Tutorial number 4 # # The builder tutorials demonstrate how to build an operational GSFLOW model using `pyGSFLOW` from shapefile, DEM, and other common data sources. These tutorials focus on the `gsflow.builder` classes. # # ## Performing watershed (basin) delineation and subbasin delineation # # In this tutorial, we demonstrate how to perform watershed and subbasin delineation to define model boundaries and contributing area zones (subbasins). Performing watershed delineation is necessary for building an operational GSFLOW model. Subbasin delineation is an optional, but useful method that allows users to isolate contributing areas. import os import shapefile import matplotlib.pyplot as plt import matplotlib.colors as mcolors import numpy as np import flopy from gsflow.builder import GenerateFishnet # ### The `FlowAccumulation` class (refresher) # # The `FlowAccumulation` class performs many operations including generating flow direction arrays and flow accumulation arrays. This example notebook focuses is on the `flow_direction` and `flow_accumulation` methods of this class. Other methods are presented in following tutorials. # # The `FlowAccumulation` class has 3 required parameters and 5 optional input parameters: # # **REQUIRED Parameters** # - `data` : resampled dem data array of dimension nrow, ncol (matches modelgrid dimension) # - `xcenters` : a two dimensional array of x coordinate cell centers (dimension nrow, ncol) # - `ycenters` : a two dimensional array of y coordinate cell centers (dimension nrow, ncol) # # **OPTIONAL Parameters** # - `acc_type` : flow accumlation type, currently only "d8" is supported # - `hru_type` : optional hru_type array where 0=inactive, 1=land, 2=lake, and 3=swale # - `closed_basin` : If true hru_type 2 is used in the flow direction calculations. False ignores hru_type 2. Default is False. # - `flow_dir_array` : previously calculated flow direction array. This parameter is used to restart the class without performing flow direction analysis # - `verbose` : boolean flag to print verbose output # # Let's start with importing the class. from gsflow.builder import FlowAccumulation # ## Applying the methods to the Sagehen 50m example problem # # In this example the methods are applied directly to the Sagehen 50m model as they are presented. 
# + # define the input and output data paths input_ws = os.path.join("data", "sagehen", "50m_tutorials") shp_ws = os.path.join("data", "geospatial") output_ws = os.path.join("data", "temp") # define the modelgrid and resampled DEM data paths mg_file = os.path.join(input_ws, "sagehen_50m_grid.bin") dem_data = os.path.join(input_ws, "sagehen_50m_dem.txt") # define the flow direction and flow accumulation data paths flowdir_file = os.path.join(input_ws, "sagehen_50m_flowdir.txt") flowacc_file = os.path.join(input_ws, "sagehen_50m_flowacc.txt") # shapefile pour point shp_file = os.path.join(shp_ws, "model_points.shp") # - # Load the previously processed data # load modelgrid, dem, flow directions, and flow accumulation modelgrid = GenerateFishnet.load_from_file(mg_file) dem_data = np.genfromtxt(dem_data) flow_directions = np.genfromtxt(flowdir_file, dtype=float) flow_accumulation = np.genfromtxt(flowacc_file) # ### Restarting the `FlowAccumulation` class from existing data # # In this tutorial series, the flow direction and flow accumulation calculations were performed in the previous builder tutorial. Instead of re-running these calculations, which can be time consuming for very large models, we can provide the saved flow direction array to the class as a way of restarting the solution. # # To restart from the previous solution, the saved flow direction array is passed to the `flow_dir_array` parameter during instantiation as shown in this example fa = FlowAccumulation( dem_data, modelgrid.xcellcenters, modelgrid.ycellcenters, flow_dir_array=flow_directions, verbose=True ) # Now the `FlowAccumulation` object is ready to perform basin and subbasin deliniation. # ### Watershed (basin) Delineation # # The watershed (basin) delineation routine uses information from the stored flow direction array to map watershed boundaries based on topographical divides within the watershed. This method is not automatic, and gives the user flexibility to define the basin outlet in which the watershed will be defined from. # # The `define_watershed` method performs watershed deliniation and has the following parameters: # - `pour_point` : three seperate input methods can be used to define the pour point as described below # - list of [(xcoordinate, ycoordinate)] location that define the pour point # - list of [(row, column)] location that define the pour point # - shapefile name, file contains a single pour point that defines the basin outlet # - `modelgrid` : modelgrid instance from `GenerateFishnet` (flopy.discretization.StructuredGrid object) # - `fmt` : format of pour point input ("xy" for xy coordinates, "rowcol" for row column, "shp" for shapefile) # # This example demonstrates how to use an xy coordinate that is extracted from an existing shapefile point. 
# + # read in our pour point from a shapefile as an xy coordinate with shapefile.Reader(shp_file) as r: shape = r.shape(0) pour_point = shape.points print(pour_point) # - # Now that the pour point is loaded, watershed delineation can be run using `define_watershed()` watershed = fa.define_watershed(pour_point, modelgrid, fmt="xy") # Now the watershed has been generated and it can be inspected # + fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(1, 1, 1, aspect="equal") pmv = flopy.plot.PlotMapView(modelgrid=modelgrid, ax=ax) pc = pmv.plot_array( watershed, vmin=0, vmax=1, ) plt.plot(*pour_point[0], 'bo') plt.colorbar(pc, shrink=0.7) plt.title("Sagehen 50m watershed delineation (yellow=active)"); # - # ### Subbasin delineation # # Subbasin delineation can be performed in much the same way as watershed delineation.Once again, this method is not automatic, and gives the user flexibility to define the subbasin outlets in which the watershed will be defined from. The benefit of this approach is that the user can identify areas of interest and/or define subbasins based on existing stream gages to isolate the contributing areas to that particular area. # # The `define_subbasins()` method performs subbasin delineation and has the following input parameters: # - `pour_points` : three seperate input methods can be used to define the pour points as described below # - list of [(xcoordinate0, ycoordinate0),...(xcoordinaten, ycoordinaten] locations that define the pour points # - list of [(row0, column0),...(rown, columnn)] locations that define the pour points # - shapefile name, file contains pour points that define the subbasin outlets # - `modelgrid` : modelgrid instance from `GenerateFishnet` (flopy.discretization.StructuredGrid object) # - `fmt` : format of pour point input ("xy" for xy coordinates, "rowcol" for row column, "shp" for shapefile) # # # *Note: this example is for illustrative puposes only and is not applied to the operational Sagehen 50m model.* # Let's create some pour point data! pour_points = [] # add our watershed outlet as row column location pour_points.append(modelgrid.intersect(pour_point[0][0], pour_point[0][1])) pour_points += [(73, 118), (69, 67), (64, 71)] pour_points[::-1] # Now define the subbasins using the "rowcol" fmt for pour points subbasins = fa.define_subbasins( pour_points, modelgrid, fmt="rowcol" ) # Now the subbasins have been generated and can be inspected # + fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(1, 1, 1, aspect="equal") pmv = flopy.plot.PlotMapView(modelgrid=modelgrid, ax=ax) pc = pmv.plot_array( subbasins, vmin=0, vmax=4, masked_values=[0,] ) for r, c in pour_points: xc, yc = modelgrid.xcellcenters[r, c], modelgrid.ycellcenters[r, c] plt.plot(xc, yc, 'ko', ms=4) plt.colorbar(pc, shrink=0.7) plt.title("Sagehen 50m subbasin delineation"); # - # ## Saving the watershed delineation and subbasin delineation arrays for later use # # The builder methods allow the user to save the watershed and subbasin delineation arrays and pick up where they left off in another session or script. # # These arrays can be saved using numpy's `savetxt()` method. 
# # *In the next tutorial we will load the watershed delineation and pick up from where we left off* # + np.savetxt( os.path.join(output_ws, "sagehen_50m_watershed.txt"), watershed.astype(int), delimiter=" ", fmt="%d" ) np.savetxt( os.path.join(output_ws, "sagehen_50m_subbasin.txt"), subbasins.astype(int), delimiter=" ", fmt="%d" ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Installation # # ```shell # python -m pip install requests # conda install -c conda-forge ipyleaflet ipywidgets # ``` # + slideshow={"slide_type": "slide"} from IPython.display import Image # + slideshow={"slide_type": "fragment"} url="http://www.hausgazette.org/py/img/IMG_0939-small.JPG" # + slideshow={"slide_type": "fragment"} Image(url) # + slideshow={"slide_type": "slide"} import requests r = requests.get(url) r.status_code # + slideshow={"slide_type": "fragment"} rawdata = r.content Image(data=rawdata) # + slideshow={"slide_type": "slide"} from PIL import Image as PImage import io im = PImage.open(io.BytesIO(rawdata)) # + slideshow={"slide_type": "fragment"} im # + slideshow={"slide_type": "slide"} print(im.format, "{}x{} {}".format(*im.size, im.mode)) # + slideshow={"slide_type": "fragment"} thumb_size = (256,256) im.thumbnail(thumb_size) # + slideshow={"slide_type": "fragment"} im # + slideshow={"slide_type": "fragment"} print(im.format, "{}x{} {}".format(*im.size, im.mode)) # + slideshow={"slide_type": "slide"} rot_im = im.rotate(270, expand=1) rot_im # + slideshow={"slide_type": "slide"} from ImageMetaData import ImageMetaData meta_data = ImageMetaData(im) # + slideshow={"slide_type": "fragment"} exif_data = meta_data.get_exif_data() exif_data # + slideshow={"slide_type": "slide"} latlng = meta_data.get_lat_lng() print(latlng) # + slideshow={"slide_type": "slide"} # #!conda install -c conda-forge ipyleaflet # #!conda install -c conda-forge ipywidgets # + slideshow={"slide_type": "fragment"} from ipyleaflet import ( Map, Marker ) from IPython.display import display # + slideshow={"slide_type": "slide"} # show map center = (52.5, 13.3) #latlng zoom = 100 m = Map(center=center, zoom=zoom) mark = Marker(location=m.center) m += mark mark.interact(opacity=(0.0,1.0,0.01)) display(m) # + slideshow={"slide_type": "slide"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Import packages # + from keras.optimizers import Adam from keras.models import Sequential, load_model from keras.layers import Dense, AlphaDropout, BatchNormalization from keras.wrappers.scikit_learn import KerasClassifier from sklearn.preprocessing import Normalizer, QuantileTransformer from sklearn.model_selection import train_test_split, RandomizedSearchCV from sklearn.metrics import classification_report from time import time import pandas as pd # - # ### Helper function to get and process the data def dataget(train_path, test_path): train_data = pd.read_csv(train_path) test_data = pd.read_csv(test_path) #Join the train and test data to cleanse and enhance the data df = train_data.append(test_data, ignore_index=True) Titles_Dictionary = { "Capt": "Officer", "Col": "Officer", "Major": "Officer", "Jonkheer": "Royalty", "Don": "Royalty", "Sir": 
"Royalty", "Dr": "Officer", "Rev": "Officer", "the Countess": "Royalty", "Dona": "Royalty", "Mme": "Mrs", "Mlle": "Miss", "Ms": "Mrs", "Mr": "Mr", "Mrs": "Mrs", "Miss": "Miss", "Master": "Master", "Lady": "Royalty" } ## Extract Title and map to the Titles from each Name df['Title'] = df['Name'].apply(lambda x: Titles_Dictionary[x.split(',')[1].split('.')[0].strip()]) ## Fill missing Embarked with 'C' df['Embarked'].fillna('C', inplace=True) ## Note down the Imputed Ages df['Imputed'] = df['Age'].isnull().astype('uint8') columns = ['Age','Fare'] groups = ['Title', 'Embarked'] ## Fill null Ages with the mean Age based on Title, Embarked df[columns] = df.groupby(groups)[columns].transform(lambda x: x.fillna(x.mean())) ## Convert to categorical data categories = ['Title', 'Sex', 'Pclass', 'SibSp', 'Parch', 'Embarked'] df[categories] = df[categories].apply(lambda x: x.astype('category')) df = df.drop(columns=['Cabin', 'Name', 'Ticket']) #df = df.drop(columns=['Title', 'SibSp', 'Imputed', 'Pclass', 'Parch', 'Embarked', 'Fare']) df = df.round(2) original = df.copy() df = pd.get_dummies(df, drop_first=True) test_data = df[df.Survived.isnull()].copy() test_data = test_data.drop(columns=['Survived']) train_data = df.dropna().copy() train_data['Survived'] = train_data['Survived'].astype('uint8') train_data = train_data.drop(columns=['PassengerId']) return original, train_data, test_data # ## Neural Network def build_model(optimizer=Adam(amsgrad=True), total_features=1, activation='elu', units=1, dropout_value=0.3, multi_layer=True, op_activation='sigmoid', loadprevmodel=False, modelname='Titanic-Kaggle-best' ): if loadprevmodel: try: model = load_model(modelname + '.h5') print('Model loaded successfully') except IOError: print('Loading previous model failed, Building a new model') model = Sequential() model.add(Dense(input_dim=total_features, activation=activation, units=units)) if activation == 'selu': model.add(AlphaDropout(dropout_value)) else: model.add(BatchNormalization()) if multi_layer: model.add(Dense(activation=activation, units=max(3,int(units/2)))) if activation == 'selu': model.add(AlphaDropout(min(dropout_value, dropout_value * 0.9))) else: model.add(BatchNormalization()) model.add(Dense(units=1, activation=op_activation)) model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy']) return model train_path = 'train.csv' test_path = 'test.csv' original, train_data, test_data = dataget(train_path, test_path) df = original.copy() df.drop(columns=['PassengerId'], inplace=True) #df.dropna(inplace=True) print(df.head(10)) print(df.describe()) print(df.info()) search = True modelh5 = 'Titanic-Kaggle' loadmodelh5 = 'Titanic-Kaggle-best' batch_size = 891 epochs = 300 normalizer = Normalizer(norm='l1') df = train_data train = df.dropna() y_train = train['Survived'].values x_train = train.drop(columns=['Survived']).values quantile_transformer = QuantileTransformer(output_distribution='normal') X_train = normalizer.fit_transform(x_train) #X_train = quantile_transformer.fit_transform(x_train) # + clf = KerasClassifier(build_model, total_features=X_train.shape[1], units=5, batch_size=batch_size, epochs=epochs, verbose=0 ) dr = [element/100 for element in range(1,50,5)] param_dist = {'units': list(range(int(X_train.shape[1]/2), int(X_train.shape[1] * 2))), 'dropout_value': dr, 'activation': ['elu', 'relu', 'selu'], 'multi_layer': [True,False] } n_iter_search = 2 if search: random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_jobs =1, n_iter=n_iter_search, 
scoring='accuracy', random_state=42) start = time() random_search.fit(X_train, y_train) print("RandomizedSearchCV took %.2f seconds for %d candidates" " parameter settings." % ((time() - start), n_iter_search)) print(random_search.best_score_) print(random_search.best_params_) # + #http://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py # - #params = {'units': 20, 'multi_layer': False, 'dropout_value': 0.31, 'activation': 'relu'} quantile params = {'units': 20, 'multi_layer': False, 'dropout_value': 0.31, 'activation': 'relu'} clf = KerasClassifier(build_model, total_features=X_train.shape[1], batch_size=batch_size, epochs=1500, verbose=0, **params) clf.fit(X_train, y_train) y_true, y_pred = y_train, clf.predict(X_train) print(classification_report(y_true, y_pred)) # + df = test_data resultdf = pd.DataFrame(data=df['PassengerId']) df = df.drop(columns=['PassengerId']) #test_x = quantile_transformer.transform(df) test_x = normalizer.transform(df) predictions = clf.predict(test_x) resultdf['Survived'] = predictions.astype(int) resultdf.to_csv('submission.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Importing, Exporting, Basic Slicing and Indexing. # In terms of the importing and exporting files I would not go depth on it. You can refer the docstring for complete information on the various ways it can be used. A few examples will be given here in regard to this. I would spent sometime on the slicing and indexing arrays. # ### Here are the main steps we will go through # * How to use loadtxt, genfromtxt, and savetxt? # * How to slice and index array? # # This is just little illustration. # # #### How to use loadtxt, genfromtxt, and savetxt?? # * numpy.loadtxt() : Load data from a text file. This function aims to be a fast reader for simply formatted files. # * numpy.genfromtxt(): Load data from a text file, with missing values handled as specified. When spaces are used as delimiters, or when no delimiter has been given as input, there should not be any missing data between two fields. When the variables are named (either by a flexible dtype or with names, there must not be any header in the file (else a ValueError exception is raised). Individual values are not stripped of spaces by default. When using a custom converter, make sure the function does remove spaces. # * numpy.savetxt(): Save an array to a text file. Further explanation of the fmt parameter (%[flag]width[.precision]specifier): # # ##### Example # Here is the general idea, I'll come back to it. import numpy as np # using numpy you can load text file np.loadtxt('file_name.txt') # load csv file np.genfromtxt('file_name.csv', delimiter=',') # you can write to a text file and save it np.savetxt('file_name.txt', arr, delimiter=' ') # you can write to a csv file and save it np.savetxt('file_name.csv', arr, delimiter=',') # #### How to slice and index array? # ndarrays can be indexed using the standard Python x[obj] syntax, where x is the array and obj the selection. There are three kinds of indexing available: field access, basic slicing, advanced indexing. Which one occurs depends on obj. # # The basic slice syntax is i:j:k where i is the starting index, j is the stopping index, and k is the step (k\neq0). 
This selects the m elements (in the corresponding dimension) with index values i, i + k, ..., i + (m - 1) k where m = q + (r\neq0) and q and r are the quotient and remainder obtained by dividing j - i by k: j - i = q k + r, so that i + (m - 1) k < j. # # Check the docstring for complete information on the various ways it can be used. A few examples will be given here: # slicing 1 to 7 gives us: [1 through 6] slice_array = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) slice_array[1:7] # if we do this, we are giving k, which is the step function. in this case step by 2 slice_array[1:7:2] slice_arrays = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])+1 #return the element at index 5 slice_arrays[5] #returns the 2D array element on index value of 2 and 5 slice_arrays[[2,5]] #assign array element on index 1 the value 4 slice_arrays[1] = 100 #assign array element on index [1][3] the value 10 slice_arrays[[1,3]] = 100 slice_arrays #return the elements at indices 0,1,2 on a 2D array: slice_arrays[0:3] #returns the elements at indices 1,100 slice_arrays[:2] slice_2d = np.arange(16).reshape(4,4) slice_2d #returns the elements on rows 0,1,2, at column 4 slice_2d[0:3, :4] #returns the elements at index 1 on all columns slice_2d[:, 1] # return the last two rows slice_2d[-2:10] # returns the last three rows slice_2d[1:] # reverse all the array backword slice_2d[::-1] #returns an array with boolean values slice_2d < 5 #inverts a boolearn array, if its positive arr - convert to negative, vice versa ~slice_2d #returns array elements smaller than 5 slice_2d[slice_2d < 5] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### 1. Which networking configuration interface is newer and has extended capabilities? # ##### Ans: ip # #### 2. Using Predictable Network Interface Device Names (PNIDN) has come into use because: # ##### Ans: # - Many computers are no longer in one location; for example, laptops are on the move, and available interfaces are subject to change # - On modern systems, the order in which network hardware is found is less predictable # - Hardware such as USB devices can be added and removed at runtime # #### 3. Which command(s) will bring the network interface eth0 up and assign an address to it? # ##### Ans: # - sudo ifconfig eth0 up 192.168.1.100 # - sudo ip addr add 192.168.1.100 dev eth0 # #### 4. You can see statistics for the eth0 interface by (select all answers that apply): # ##### Ans: # - doing sudo ifconfig eth0 # - doing sudo ip -s link show eth0 # - looking at /sys/class/net/eth0/statistics # #### 5. What does MTU stand for? # ##### Ans: Maximum Transfer Unit (usually 1500 bytes by default) for Ethernet packets # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] tags=[] # # FBSDE # # Ji, Shaolin, , , and . “Three Algorithms for Solving High-Dimensional Fully-Coupled FBSDEs through Deep Learning.” ArXiv:1907.05327 [Cs, Math], February 2, 2020. http://arxiv.org/abs/1907.05327. 
# # For standard $d$-dimensional Brownian motion and $d$-dimensional Poisson process with intensity $\lambda$ (in every component) consider an FBSDE: # # $$ # \begin{aligned} # dX_t^i &= \frac 1\alpha \exp\left(-\alpha X_t^i\right) \sum_j Z^{ij}_t dW^j_t + \frac 1\alpha \sum_j \log\left(1+r^{ij}_t \exp\left(-\alpha X^i_t\right)\right) dN^j_t & X_0 = x_0 \\ # dY_t^i &= \frac 12 \exp\left(-\alpha X^i_t\right) \sum_j \left(Z^{ij}_t\right)^2 dt + \sum_j Z^{ij}_t dW^j_t + \sum_j r^{ij}_t dN^j_t & Y^i_T= \exp\left(X^i_T\right) # \end{aligned} # $$ # # One can see by Ito's lemma (Oksendal, p.9, 1.2.8) that # # $$ # Y^i_t = \exp(X^i_t) # $$ # + # working configuration without jumps: time_horizon = 0.1, n_paths = 2 ** 16, n_timesteps = 4, n_dimensions = 50 # working configuration with jumps: time_horizon = 0.1, n_paths = 2 ** 12, n_timesteps = 4, n_dimensions = 2 # - import os from makers.gpu_utils import * os.environ["CUDA_VISIBLE_DEVICES"] = str(pick_gpu_lowest_memory()) import numpy as np import tensorflow as tf from keras.layers import Input, Dense, Lambda, Reshape, concatenate, Layer, BatchNormalization from keras import Model, initializers from keras.callbacks import ModelCheckpoint from keras.metrics import mean_squared_error import matplotlib.pyplot as plt from datetime import datetime from keras.metrics import mse from keras.optimizers import Adam from keras.callbacks import Callback import pandas as pd from tqdm import tqdm from itertools import product print("Num GPUs Available: ", len(tf.config.list_physical_devices("GPU"))) # # Inputs # model parameters n_paths = 2 ** 12 n_timesteps = 16 alpha = 0.1 time_horizon = 1. n_dimensions = 2 n_diffusion_factors = 2 n_jump_factors = 2 intensity = 100. dt = time_horizon / n_timesteps # create dirs timestamp = datetime.now().strftime("%Y%m%d-%H%M%S") model_name = f"{timestamp}__{n_dimensions}__{n_paths}__{n_timesteps}__{time_horizon}" tb_log_dir = "/home/tmp/starokon/tensorboard/" + model_name output_dir = f"_output/models/{model_name}" os.makedirs(output_dir, exist_ok=True) # + def b(t, x, y, z, r): return [0. for _ in range(n_dimensions)] def s(t, x, y, z, r): return [[tf.exp(-alpha * x[i]) * z[i][j] / alpha for j in range(n_diffusion_factors)] for i in range(n_dimensions)] def v(t, x, y, z, r): return [[tf.math.log(1 + tf.math.maximum(r[i][j], 0.) * tf.exp(-alpha * x[i])) / alpha for j in range(n_jump_factors)] for i in range(n_dimensions)] def f(t, x, y, z, r): return [tf.exp(-alpha * x[i]) * tf.reduce_sum(z[i] ** 2) / 2. 
for i in range(n_dimensions)] def g(x): return [tf.exp(x[i]) for i in range(n_dimensions)] # - # # Custom layers and callbacks class InitialValue(Layer): def __init__(self, y0, **kwargs): super().__init__(**kwargs) self.y0 = y0 def call(self, inputs): return self.y0 # # Model # + tags=[] def build_model(n_dimensions, n_diffusion_factors, n_jump_factors, n_timesteps, time_horizon): dt = time_horizon / n_timesteps def dX(t, x, y, z, r, dW, dN): def drift(arg): x, y, z, r = arg return tf.math.multiply(b(t, x, y, z, r), dt) a0 = tf.vectorized_map(drift, (x, y, z, r)) def noise(arg): x, y, z, r, dW = arg return tf.tensordot(s(t, x, y, z, r), dW, [[1], [0]]) a1 = tf.vectorized_map(noise, (x, y, z, r, dW)) def jump(arg): x, y, z, r, dN = arg return tf.tensordot(v(t, x, y, z, r), dN, [[1], [0]]) a2 = tf.vectorized_map(jump, (x, y, z, r, dN)) return a0 + a1 + a2 def dY(t, x, y, z, r, dW, dN): def drift(arg): x, y, z, r = arg return tf.math.multiply(f(t, x, y, z, r), dt) a0 = tf.vectorized_map(drift, (x, y, z, r)) def noise(arg): x, y, z, r, dW = arg return tf.tensordot(z, dW, [[1], [0]]) a1 = tf.vectorized_map(noise, (x, y, z, r, dW)) def jump(arg): x, y, z, r, dN = arg return tf.tensordot(r, dN, [[1], [0]]) a2 = tf.vectorized_map(jump, (x, y, z, r, dN)) return a0 + a1 + a2 @tf.function def hx(args): i, x, y, z, r, dW, dN = args return x + dX(i * dt, x, y, z, r, dW, dN) @tf.function def hy(args): i, x, y, z, r, dW, dN = args return y + dY(i * dt, x, y, z, r, dW, dN) paths = [] n_hidden_units = n_dimensions + n_diffusion_factors + n_jump_factors + 10 inputs_x0 = Input(shape=(n_dimensions)) inputs_dW = Input(shape=(n_timesteps, n_diffusion_factors)) inputs_dN = Input(shape=(n_timesteps, n_jump_factors)) # variable x0 y0 = Dense(n_hidden_units, activation='elu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='y1_0')(inputs_x0) y0 = Dense(n_dimensions, activation='elu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='y2_0')(y0) x = inputs_x0 y = y0 # constant x0 # x0 = tf.Variable([[1. for _ in range(n_dimensions)]], trainable=False) # y0 = tf.Variable([[0. 
for _ in range(n_dimensions)]], trainable=True) # x = InitialValue(x0, trainable=False, name='x_0')(inputs_dW) # y = InitialValue(y0, trainable=True, name='y_0')(inputs_dW) z = concatenate([x, y]) z = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='z1_0')(z) z = Dense(n_dimensions * n_diffusion_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='z2_0')(z) # z = BatchNormalization(name='zbn_0')(z) z = Reshape((n_dimensions, n_diffusion_factors), name='zr_0')(z) r = concatenate([x, y]) r = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='r1_0')(r) r = Dense(n_dimensions * n_jump_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='r2_0')(r) # r = BatchNormalization(name='rbn_0')(r) r = Reshape((n_dimensions, n_jump_factors), name='rr_0')(r) paths += [[x, y, z, r]] # pre-compile lambda layers for i in range(n_timesteps): step = InitialValue(tf.Variable(i, dtype=tf.float32, trainable=False))(inputs_dW) dW = Lambda(lambda x: x[0][:, tf.cast(x[1], tf.int32)])([inputs_dW, step]) dN = Lambda(lambda x: x[0][:, tf.cast(x[1], tf.int32)])([inputs_dN, step]) x, y = ( Lambda(hx, name=f'x_{i+1}')([step, x, y, z, r, dW, dN]), Lambda(hy, name=f'y_{i+1}')([step, x, y, z, r, dW, dN]), ) # we don't train z for the last time step; keep for consistency z = concatenate([x, y]) z = Dense(n_hidden_units, activation='relu', name=f'z1_{i+1}')(z) z = Dense(n_dimensions * n_diffusion_factors, activation='relu', name=f'z2_{i+1}')(z) z = Reshape((n_dimensions, n_diffusion_factors), name=f'zr_{i+1}')(z) # z = BatchNormalization(name=f'zbn_{i+1}')(z) # we don't train r for the last time step; keep for consistency r = concatenate([x, y]) r = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name=f'r1_{i+1}')(r) r = Dense(n_dimensions * n_jump_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name=f'r2_{i+1}')(r) r = Reshape((n_dimensions, n_jump_factors), name=f'rr_{i+1}')(r) # r = BatchNormalization(name=f'rbn_{i+1}')(r) paths += [[x, y, z, r]] outputs_loss = Lambda(lambda r: r[1] - tf.transpose(tf.vectorized_map(g, r[0])))([x, y]) outputs_paths = tf.stack( [tf.stack([p[0] for p in paths[1:]], axis=1), tf.stack([p[1] for p in paths[1:]], axis=1)] + [tf.stack([p[2][:, :, i] for p in paths[1:]], axis=1) for i in range(n_diffusion_factors)] + [tf.stack([p[3][:, :, i] for p in paths[1:]], axis=1) for i in range(n_jump_factors)], axis=2) model_loss = Model([inputs_x0, inputs_dW, inputs_dN], outputs_loss) # (n_sample, n_timestep, x/y/z_k, n_dimension) # skips the first time step model_paths = Model([inputs_x0, inputs_dW, inputs_dN], outputs_paths) model_y0 = Model([inputs_x0], y0) return model_loss, model_paths, model_y0 # + [markdown] tags=[] # # Training # - np.set_printoptions(edgeitems=11, linewidth=90, formatter=dict(float=lambda x: "%7.5g" % x)) # create model dt = time_horizon / n_timesteps model_loss, model_paths, model_y0 = build_model(n_dimensions=n_dimensions, n_diffusion_factors=n_dimensions, n_jump_factors=n_dimensions, n_timesteps=n_timesteps, time_horizon=time_horizon) # + # model_loss.summary() # - # x0 = tf.constant(np.full((n_paths, n_dimensions), 1.)) x0 = 1. + 0. 
* tf.random.normal((n_paths, n_dimensions)) dW = tf.sqrt(dt) * tf.random.normal((n_paths, n_timesteps, n_diffusion_factors)) dN = tf.random.poisson((n_paths, n_timesteps), tf.constant(dt * np.array([intensity for _ in range(n_jump_factors)]))) target = tf.zeros((n_paths, n_dimensions)) class Y0Callback(Callback): def __init__(self, filepath=None): super(Y0Callback, self).__init__() self.filepath = filepath self.y0s = [] def on_epoch_end(self, epoch, logs=None): y0 = model_y0(tf.constant([[1. for _ in range(n_dimensions)]])) self.y0s += [y0[0]] print(f"{y0}\n") def on_train_end(self, logs=None): if self.filepath is not None: pd.DataFrame(self.y0s).to_csv(self.filepath) # + # define callbacks callbacks = [] callbacks += [Y0Callback(filepath=os.path.join(output_dir, "y0.csv"))] callbacks += [ModelCheckpoint(os.path.join(output_dir, "model.h5"), monitor="loss", save_weights_only=True, save_best_only=True, overwrite=True)] # callbacks += [tf.keras.callbacks.TensorBoard(log_dir=tb_log_dir, histogram_freq=1, profile_batch=1)] callbacks += [tf.keras.callbacks.TerminateOnNaN()] callbacks += [tf.keras.callbacks.EarlyStopping(monitor="loss", min_delta=1e-4, patience=10)] # - # leave it here to readjust learning rate on the fly adam = Adam(learning_rate=1e-3) model_loss.compile(loss='mse', optimizer=adam) history = model_loss.fit([x0, dW, dN], target, batch_size=128, initial_epoch=0, epochs=1000, callbacks=callbacks) df_loss = pd.DataFrame(history.history['loss']) df_loss.to_csv(os.path.join(output_dir, 'loss.csv')) # + [markdown] tags=[] # # Transfer weights (if needed) # + # try transfer learning from another starting point model_loss.get_layer('y_0').set_weights(m_large.get_layer('y_0').get_weights()) for i in range(n_timesteps): model_loss.get_layer(f'z1_{i}').set_weights(m_large.get_layer(f'z1_{i}').get_weights()) model_loss.get_layer(f'z2_{i}').set_weights(m_large.get_layer(f'z2_{i}').get_weights()) # + tags=[] # transfer learning from cruder discretization model_loss.get_layer('y_0').set_weights(m_small.get_layer('y_0').get_weights()) n_small = 4 for i in range(n_small): for j in range(n_timesteps // n_small): model_loss.get_layer(f'z1_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'z1_{i}').get_weights()) model_loss.get_layer(f'z2_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'z2_{i}').get_weights()) model_loss.get_layer(f'z1_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'z1_{i}').get_weights()) model_loss.get_layer(f'z2_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'z2_{i}').get_weights()) # + # check for exploding gradients before training with tf.GradientTape() as tape: loss = mse(model_loss([dW, dN]), target) # bias of the last dense layer variables = model_loss.variables[-1] tape.gradient(loss, variables) # + [markdown] tags=[] # # Training loop # - for time_horizon, n_paths, n_timesteps, n_dimensions in tqdm(list(product([0.1], [2**12], [4], [2]))): # create model dt = time_horizon / n_timesteps model_loss, model_paths = build_model(n_dimensions=n_dimensions, n_diffusion_factors=n_dimensions, n_jump_factors=n_dimensions, n_timesteps=n_timesteps, time_horizon=time_horizon) # generate paths dW = tf.sqrt(dt) * tf.random.normal((n_paths, n_timesteps, n_diffusion_factors)) dN = tf.random.poisson((n_paths, n_timesteps), tf.constant(dt * np.array([intensity for _ in range(n_jump_factors)]).transpose().reshape(-1))) target = tf.zeros((n_paths, n_dimensions)) # create dirs timestamp = datetime.now().strftime("%Y%m%d-%H%M%S") 
model_name = f"{timestamp}__{n_dimensions}__{n_paths}__{n_timesteps}__{time_horizon}" tb_log_dir = "/home/tmp/starokon/tensorboard/" + model_name output_dir = f"_output/models/{model_name}" os.makedirs(output_dir, exist_ok=True) # define callbacks y0_callback = Y0Callback(filepath=os.path.join(output_dir, "y0.csv")) checkpoint_callback = ModelCheckpoint(os.path.join(output_dir, "model.h5"), monitor="loss", save_weights_only=True, save_best_only=True, overwrite=True) tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=tb_log_dir, histogram_freq=1, profile_batch=1) nan_callback = tf.keras.callbacks.TerminateOnNaN() loss_callback = tf.keras.callbacks.EarlyStopping(monitor="loss", min_delta=1e-4, patience=20) # callbacks callbacks = [ nan_callback, checkpoint_callback, # tensorboard_callback, y0_callback, loss_callback, ] # leave it here to readjust learning rate on the fly adam = Adam(learning_rate=1e-3) model_loss.compile(loss='mse', optimizer=adam) # train history = model_loss.fit([dW, dN], target, batch_size=128, initial_epoch=0, epochs=3000, callbacks=callbacks) df_loss = pd.DataFrame(history.history['loss']) df_loss.to_csv(os.path.join(output_dir, 'loss.csv')) # # Validate # + dW_test = tf.sqrt(dt) * tf.random.normal((n_paths//8, n_timesteps, n_diffusion_factors)) dN_test = tf.random.poisson((n_paths//8, n_timesteps), tf.constant(dt * np.array([1., 1.]).transpose().reshape(-1))) target_test = tf.zeros((n_paths//8, n_dimensions)) model_loss.evaluate([dW_test, dN_test], target_test) # - # # Display paths and loss # load bad model model_loss.load_weights('_models/weights0011.h5') loss = model_loss([dW, dN]).numpy() loss paths = model_paths([dW, dN]).numpy() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #Libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy import stats import os import re # %matplotlib inline def load_production_data(data_path="./"): production_train = [] train_files = sorted(os.listdir(data_path+"/TrainingDataProduction/TrainingDataProduction")) train_files.sort(key=lambda f: int(re.sub('\D', '', f))) for file in train_files: temp = pd.read_csv(data_path+"/TrainingDataProduction/TrainingDataProduction/"+file, sep=";") temp["ProcessCycle"] = int(file.split('.')[0]) production_train.append(temp) production_train = pd.concat(production_train) production_valid = [] valid_files = sorted(os.listdir(data_path+"/ValidationDataProduction/ValidationDataProduction")) valid_files.sort(key=lambda f: int(re.sub('\D', '', f))) for file in valid_files: temp = pd.read_csv(data_path+"/ValidationDataProduction/ValidationDataProduction/"+file, sep=";") temp["ProcessCycle"] = int(file.split('.')[0]) production_valid.append(temp) production_valid = pd.concat(production_valid) production_test = [] test_files = sorted(os.listdir(data_path+"/EvaluationDataProduction/EvaluationDataProduction")) test_files.sort(key=lambda f: int(re.sub('\D', '', f))) for file in test_files: temp = pd.read_csv(data_path+"/EvaluationDataProduction/EvaluationDataProduction/"+file, sep=";") temp["ProcessCycle"] = int(file.split('.')[0]) production_test.append(temp) production_test = pd.concat(production_test) cols_with_no_variance_train = [] cols_with_no_variance_valid = [] for col in production_train.columns: if len(production_train[col].unique()) == 1: 
cols_with_no_variance_train.append(col) for col in production_valid.columns: if len(production_valid[col].unique()) == 1: cols_with_no_variance_valid.append(col) common_cols_with_no_variance = list(set(cols_with_no_variance_train) & set(cols_with_no_variance_valid)) production_train = production_train.drop(common_cols_with_no_variance, axis = 1) production_valid = production_valid.drop(common_cols_with_no_variance, axis = 1) production_test = production_test.drop(common_cols_with_no_variance, axis = 1) production_train["Split"] = "Train" production_valid["Split"] = "Valid" production_test["Split"] = "Test" production_train_val_combined = pd.concat([production_train, production_valid]).reset_index(drop=True) return production_train_val_combined, production_test # + df_train, df_test = load_production_data() # - train, valid = df_train, df_test # + #Erstellen von Dummies, um numerische Werte zu erhalten split = pd.get_dummies(train['Split'],drop_first=True) train.drop(["Split"],axis=1,inplace=True) train = pd.concat([train,split],axis=1) # - train.info() from sklearn.model_selection import train_test_split # + #Init von Trainings- und Testdaten X_train, X_test, y_train, y_test = train_test_split(train.drop('Valid',axis=1), train['Valid'], test_size=0.30, random_state=101) y_train = y_train.values y_test = y_test.values # + from imblearn.over_sampling import SMOTE sm = SMOTE(random_state=12, ratio = 1.0) X_train, y_train = sm.fit_sample(X_train, y_train) print(X_train.shape) print(X_test.shape) print(len(y_train[y_train==0]), 'negative samples in the training set') print(len(y_train[y_train==1]), 'positive samples in the training set') print(len(y_test[y_test==0]), 'negative samples in the test set') print(len(y_test[y_test==1]), 'positive samples in the test set') # - #Principal Component Analysis für die Dimensionsreduktion from sklearn.decomposition import PCA # Make an instance of the Model pca = PCA(n_components = 2) pca_2d = pca.fit_transform(X_train) X_train.info() plt.figure(figsize = (10, 10)) plt.scatter(pca_2d[:,0], pca_2d[:,1], c = y_train) plt.title('PCA scatter plot') plt.show() from sklearn.linear_model import LogisticRegression, LogisticRegressionCV, SGDClassifier from sklearn.model_selection import GridSearchCV # + #Erstellen und Fitten des Modells auf Trainingsdaten logmodel = LogisticRegression(solver="liblinear", max_iter=250) #logmodel = LogisticRegressionCV(cv= 5, class_weight="balanced") logmodel.fit(X_train,y_train) sgdmodel = SGDClassifier(loss="log") sgdmodel.fit(X_train,y_train) # + from keras.models import Sequential from keras.layers import Dense, Dropout from keras.optimizers import SGD, Adam from keras.regularizers import l2 model = Sequential() model.add(Dense(32, input_shape=(X_train.shape[1],), activation='relu')) model.add(Dropout(0.5)) model.add(Dense(32, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile( loss='binary_crossentropy', # loss optimizer="rmsprop", # learning rule #optimizer=Adam(), # learning rule metrics=['accuracy'] # show accuracy ) print(model.summary()) # - # Train the linear classifier on the extracted features model.fit(x=X_train, y=y_train, batch_size=128, epochs=20, validation_split=0.2) from sklearn.neural_network import MLPClassifier mlp = MLPClassifier(activation="logistic") mlp.fit(X_train,y_train) y_pred_mlp = mlp.predict_proba(X_test)[:, 1] from sklearn import metrics from sklearn.metrics import roc_curve, auc evaluation_auc(logmodel) evaluation(logmodel.predict(X_test)) def 
evaluation(predictions): #Auswertung durch Confusion Matrix und dem Built-In Report von Sklearn cnf_matrix = metrics.confusion_matrix(y_test,predictions) print(cnf_matrix) print("\n") print(metrics.classification_report(y_test, predictions)) class_names=[0,1] # name of classes fig, ax = plt.subplots() tick_marks = np.arange(len(class_names)) plt.xticks(tick_marks, class_names) plt.yticks(tick_marks, class_names) # create heatmap sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g') ax.xaxis.set_label_position("top") plt.tight_layout() plt.title('Confusion matrix', y=1.1) plt.ylabel('Actual label') plt.xlabel('Predicted label') def evaluation_auc(model): #AUC als Performanceindikator der alle möglichen Thresholds aggregiert. Je höher der AUC, also die Fläche unter der ROC-Kurve ist, #desto besser kann eine Klasse bestimmt werden. Jedoch ist ein AUC von 0.5 ein Indiz für eine nicht erfolgreiche Bestimmung der Klassen y_pred_proba = model.predict_proba(X_test)[::,1] fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba) auc = metrics.roc_auc_score(y_test, y_pred_proba) plt.plot(fpr,tpr,label="data 1, auc="+str(auc)) plt.legend(loc=4) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # import statements import numpy as np import matplotlib.pyplot as plt #for figures from mpl_toolkits.basemap import Basemap #to render maps import math import json #to write dict with parameters import GrowYourIC from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures cm = plt.cm.get_cmap('viridis') cm2 = plt.cm.get_cmap('winter') # + print("==== Models ====") age_ic_dim = 1e9 #in years rICB_dim = 1221. #in km velocity_center = [0., 100.]#center of the eastern hemisphere center = [0,-80] #center of the western hemisphere units = None #we give them already dimensionless parameters. rICB = 1. age_ic = 1. #Slow translation v_slow = 0.8 omega_slow = 1.57 exponent_slow = 1. 
velocity_slow = geodyn_trg.translation_velocity(velocity_center, v_slow) proxy_type = "growth rate"#"growth rate" proxy_name = "growth rate (km/Myears)" #growth rate (km/Myears)" proxy_lim = None print("=== Model 1 : slow translation, no rotation ===") SlowTranslation = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper parameters = dict({'units': units, 'rICB': rICB, 'tau_ic':age_ic, 'vt': velocity_slow, 'exponent_growth': exponent_slow, 'omega': 0., 'proxy_type': proxy_type, 'proxy_name': proxy_name, 'proxy_lim': proxy_lim}) SlowTranslation.set_parameters(parameters) SlowTranslation.define_units() print("=== Model 2 : slow translation, rotation ===") SlowTranslation2 = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper parameters = dict({'units': units, 'rICB': rICB, 'tau_ic':age_ic, 'vt': velocity_slow, 'exponent_growth': exponent_slow, 'omega': omega_slow, 'proxy_type': proxy_type, 'proxy_name': proxy_name, 'proxy_lim': proxy_lim}) SlowTranslation2.set_parameters(parameters) SlowTranslation2.define_units() #Fast translation v_fast = 10.3 omega_fast = 7.85 time_translation = rICB_dim*1e3/4e-10/(np.pi*1e7) maxAge = 2.*time_translation/1e6 velocity_fast = geodyn_trg.translation_velocity(velocity_center, v_fast) exponent_fast = 0.1 proxy_type = "age" proxy_name = "age (Myears)" #growth rate (km/Myears)" proxy_lim = [0, maxAge] print("=== Model 3 : fast translation, no rotation ===") FastTranslation = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper parameters = dict({'units': units, 'rICB': rICB, 'tau_ic':age_ic, 'vt': velocity_fast, 'exponent_growth': exponent_fast, 'omega': 0., 'proxy_type': proxy_type, 'proxy_name': proxy_name, 'proxy_lim': proxy_lim}) FastTranslation.set_parameters(parameters) FastTranslation.define_units() print("=== Model 4 : fast translation, rotation ===") FastTranslation2 = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper parameters = dict({'units': units, 'rICB': rICB, 'tau_ic':age_ic, 'vt': velocity_fast, 'exponent_growth': exponent_fast, 'omega': omega_fast, 'proxy_type': proxy_type, 'proxy_name': proxy_name, 'proxy_lim': proxy_lim}) FastTranslation2.set_parameters(parameters) FastTranslation2.define_units() # + npoints = 60 #number of points in the x direction for the data set. data_set = data.PerfectSamplingEquator(npoints, rICB = 1.) data_set.method = "bt_point" proxy = geodyn.evaluate_proxy(data_set, FastTranslation, proxy_type="age", verbose = False) data_set.plot_c_vec(FastTranslation, proxy=proxy, cm=cm, nameproxy="age (Myears)") proxy = geodyn.evaluate_proxy(data_set, FastTranslation2, proxy_type="age", verbose = False) data_set.plot_c_vec(FastTranslation2, proxy=proxy, cm=cm, nameproxy="age (Myears)") proxy = geodyn.evaluate_proxy(data_set, SlowTranslation, proxy_type="age", verbose = False) data_set.plot_c_vec(SlowTranslation, proxy=proxy, cm=cm, nameproxy="age (Myears)") proxy = geodyn.evaluate_proxy(data_set, SlowTranslation2, proxy_type="age", verbose = False) data_set.plot_c_vec(SlowTranslation, proxy=proxy, cm=cm, nameproxy="age (Myears)") # + npoints = 50 #number of points in the x direction for the data set. 
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01) data_set.method = "bt_point" surface1 = geodyn.evaluate_proxy(data_set, FastTranslation, proxy_type=FastTranslation.proxy_type, verbose = False) surface2 = geodyn.evaluate_proxy(data_set, FastTranslation2, proxy_type=FastTranslation2.proxy_type, verbose = False) surface3 = geodyn.evaluate_proxy(data_set, SlowTranslation, proxy_type=SlowTranslation.proxy_type, verbose = False) surface4 = geodyn.evaluate_proxy(data_set, SlowTranslation2, proxy_type=SlowTranslation2.proxy_type, verbose = False) X, Y, Z = data_set.mesh_TPProxy(surface1) ## map m, fig = plot_data.setting_map() y, x = m(Y, X) sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none') cbar = plt.colorbar(sc) cbar.set_label(FastTranslation.proxy_name) X, Y, Z = data_set.mesh_TPProxy(surface2) ## map m, fig = plot_data.setting_map() y, x = m(Y, X) sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none') cbar = plt.colorbar(sc) cbar.set_label(FastTranslation2.proxy_name) X, Y, Z = data_set.mesh_TPProxy(surface3) ## map m, fig = plot_data.setting_map() y, x = m(Y, X) sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none') cbar = plt.colorbar(sc) cbar.set_label(SlowTranslation.proxy_name) X, Y, Z = data_set.mesh_TPProxy(surface4) ## map m, fig = plot_data.setting_map() y, x = m(Y, X) sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none') cbar = plt.colorbar(sc) cbar.set_label(SlowTranslation2.proxy_name) # + # perfect repartition in depth (for meshgrid plots) data_meshgrid = data.Equator_upperpart(150,150) data_meshgrid.method = "bt_point" meshgrid1 = geodyn.evaluate_proxy(data_meshgrid, FastTranslation, verbose = False) meshgrid2 = geodyn.evaluate_proxy(data_meshgrid, FastTranslation2, verbose = False) meshgrid3 = geodyn.evaluate_proxy(data_meshgrid, SlowTranslation, verbose = False) meshgrid4 = geodyn.evaluate_proxy(data_meshgrid, SlowTranslation2, verbose = False) fig3, ax3 = plt.subplots(figsize=(8, 2)) X, Y, Z = data_meshgrid.mesh_RPProxy(meshgrid1) sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm) sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k") ax3.set_ylim(-0, 120) fig3.gca().invert_yaxis() ax3.set_xlim(-180,180) cbar = fig3.colorbar(sc) #cbar.set_clim(0, maxAge) cbar.set_label(FastTranslation.proxy_name) ax3.set_xlabel("longitude") ax3.set_ylabel("depth below ICB (km)") fig3, ax3 = plt.subplots(figsize=(8, 2)) X, Y, Z = data_meshgrid.mesh_RPProxy(meshgrid2) sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm) sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k") ax3.set_ylim(-0, 120) fig3.gca().invert_yaxis() ax3.set_xlim(-180,180) cbar = fig3.colorbar(sc) #cbar.set_clim(0, maxAge) cbar.set_label(FastTranslation2.proxy_name) ax3.set_xlabel("longitude") ax3.set_ylabel("depth below ICB (km)") fig3, ax3 = plt.subplots(figsize=(8, 2)) X, Y, Z = data_meshgrid.mesh_RPProxy(meshgrid3) sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm) sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k") ax3.set_ylim(-0, 120) fig3.gca().invert_yaxis() ax3.set_xlim(-180,180) cbar = fig3.colorbar(sc) #cbar.set_clim(0, maxAge) cbar.set_label(SlowTranslation.proxy_name) ax3.set_xlabel("longitude") ax3.set_ylabel("depth below ICB (km)") fig3, ax3 = plt.subplots(figsize=(8, 2)) X, Y, Z = data_meshgrid.mesh_RPProxy(meshgrid4) sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm) sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k") ax3.set_ylim(-0, 120) fig3.gca().invert_yaxis() ax3.set_xlim(-180,180) cbar 
= fig3.colorbar(sc) #cbar.set_clim(0, maxAge) cbar.set_label(SlowTranslation2.proxy_name) ax3.set_xlabel("longitude") ax3.set_ylabel("depth below ICB (km)") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import nibabel as nib import numpy as np data = nib.load("fa_S11.nii.gz") print(data) #Generate image and metadata image_data = data.get_fdata() image_data.shape import matplotlib.pyplot as plt plt.imshow(image_data[:,:,64], cmap="gray") #Sample visualization plt.imshow(image_data[:,:,56], cmap="gray") plt.imshow(image_data[20,:,:], cmap="gray") #Axial visualization arr = [] arr += [image_data] len(arr) for i in range(9,10) : data = nib.load(f"fa_S{i}.nii.gz") img_data = data.get_fdata() arr += [img_data] len(arr) arr_out = np.vstack(np.expand_dims(arr,0)) #Create a batch of np tensors arr_out.shape arr_out_b = np.expand_dims(arr_out , -1) #Get the extra dimension for color arr_out_b.shape import torch x = torch.from_numpy(arr_out_b).float().view(-1,1,96,114,94) #Need to resize to BxCxHxW format x.size() torch.save(x , 'trial_data.pt') #Save torch tensor # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Building a Tracker for NTN-B 2050 # ## Author: # In this notebook we will see all of the details on how to build a tracker for a single brazilian inflation-indexd bond. There are three big steps here: # # 1. Build the daily series of the adjusted nominal face value (Valor Nominal Ajustado, VNA) based on the Brazilian inflation index and its forecasts. # 1. Find the unit price (PU) of the bond each day. # 1. Build a trivial strategy of reinvesting coupons on the correct payment dates. # # **All of the pricing details for the NTN-B bonds are in [this website](http://www.tesouro.fazenda.gov.br/documents/10180/410323/NTN-B_novidades.pdf).** # # We start by importing the relevant libraries, including our own `DayCounts` class from the FinanceHub. import pandas as pd from tqdm import tqdm import os import matplotlib.pyplot as plt from calendars import DayCounts # --- # ## Importing data # There is an excel spreadsheet called `VNA Raw.xlsx`, which contains three worksheets: # * **Diario**: Contains the daily series of the current forecast for next month's inflation rate. # * **Mensal**: Contains the monthly series of brazilian inflation index (IPCA) and its MoM change. # * **Release**: Contains the actual release dates (not the reference dates) of the brazilian inflation index (IPCA). # + file_path = r'data//VNA Raw.xlsx' df_mensal = pd.read_excel(file_path, 'Mensal', index_col=0) df_diario = pd.read_excel(file_path, 'Diario', index_col=0, na_values=['#N/A N/A']) df_release = pd.read_excel(file_path, 'Release') df_release.columns = ['Date', 'IPCA'] display(df_mensal) display(df_diario) display(df_release) # - # Create an empty dataframe with the columns we will use. 
df = pd.DataFrame(index=pd.date_range('2003-03-18', 'today', freq='D'),
                  columns=['dia util', 'ultima virada', 'DU desde virada', 'DU entre viradas',
                           'time fraction', 'proj anbima', 'saiu IPCA', 'ultimo IPCA',
                           'proj IPCA', 'ultimo index', 'VNA'])
df.index.name = 'Date'

# In the DataFrame above, notice that its index has a daily frequency, meaning that it includes weekends and Brazilian holidays. These days have to be dropped from our DataFrame. In order to do that, we need to check which of the dates are in the official Brazilian holiday calendar. The official source for this calendar is ANBIMA.
#
# We can use our `DayCounts` class to do exactly that, and it automatically uses the Brazilian convention of counting business days.

dc = DayCounts('BUS/252', calendar='anbima')
df['dia util'] = dc.isbus(df.index)
df['dia util']

# We are not going to drop these dates yet, but now we have a column `dia util` that tells us whether each date is a business day.

# ---
# ## Computing the Nominal Face Value
#
# As seen on the pdf file in the beginning of this notebook, which gives the details on how to compute the VNA of the bond, we know that the 15th of each month is considered the reference date for the inflation index. So we need to find the series of "the last 15th day".

for d in tqdm(df.index, "Filling 'ultima virada'"):
    if d.day >= 15:
        df.loc[d, 'ultima virada'] = pd.Timestamp(d.year, d.month, 15)
    else:
        if d.month - 1 == 0:
            df.loc[d, 'ultima virada'] = pd.Timestamp(d.year - 1, 12, 15)
        else:
            df.loc[d, 'ultima virada'] = pd.Timestamp(d.year, d.month - 1, 15)

# Now we need to compute how much time has passed since the last 15th day, expressed as a fraction of the month (a month here means "time between two 15th days").

df['DU desde virada'] = dc.days(df['ultima virada'], df.index)
df['DU entre viradas'] = dc.days(df['ultima virada'], df['ultima virada'] + pd.DateOffset(months=1))
df['time fraction'] = df['DU desde virada'] / df['DU entre viradas']

# We add to this data frame the projections for the inflation of the next month.

df['proj anbima'] = df_diario['Anbima+0']/100
df['proj anbima'] = df['proj anbima'].fillna(method='ffill')

# The following loop checks if the inflation index for that month has been released on that date.

# +
df.loc[df.index[0], 'saiu IPCA'] = df_release['Date'].isin([df.index[0]]).any()

for d, dm1 in zip(df.index[1:], df.index[:-1]):
    if d.day <= 15 and df.loc[dm1, 'saiu IPCA']:
        df.loc[d, 'saiu IPCA'] = True
    else:
        df.loc[d, 'saiu IPCA'] = df_release['Date'].isin([d]).any()
# -

# On the column `ultimo IPCA` we are going to put the last available IPCA for a given date. In Excel, this operation would be done with the `VLOOKUP` function with approximate match. In pandas we have an equivalent operation with `pd.merge_asof`.

df_aux = df.index.to_frame()
df_aux.index.name = None
df_aux = pd.merge_asof(df_aux, df_release)
df_aux = df_aux.set_index('Date')
df['ultimo IPCA'] = df_aux['IPCA']/100

# Choose the correct inflation rate to use in the VNA: if the actual value of the IPCA has been released, use it; otherwise, use the projected value from ANBIMA.

df['proj IPCA'] = df['saiu IPCA']*df['ultimo IPCA'] + (1 - df['saiu IPCA'])*df['proj anbima']

# We use the `pd.merge_asof` function to also grab the last available value of the IPCA Index.
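# As a quick aside before applying it, here is a toy illustration of how `pd.merge_asof` behaves (the two frames below are invented for the example and are not part of the spreadsheet): for each row on the left it takes the most recent row on the right whose key is less than or equal to it, which is the pandas analogue of an approximate-match `VLOOKUP`.

# +
left = pd.DataFrame({'Date': pd.to_datetime(['2020-01-10', '2020-01-20', '2020-02-10'])})
right = pd.DataFrame({'Date': pd.to_datetime(['2020-01-08', '2020-02-07']),
                      'IPCA': [0.21, 0.25]})

# each left-hand date is matched with the most recent right-hand row at or before it
pd.merge_asof(left, right, on='Date')
# -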
df_aux = df['ultima virada'].to_frame('Dates')
df_aux['Dates'] = pd.to_datetime(df_aux['Dates'])
df_aux = pd.merge_asof(df_aux, df_mensal.reset_index()[['Dates', 'IPCA Index']])
df_aux.index = df.index
df['ultimo index'] = df_aux['IPCA Index']

# We then compute the VNA.

df['VNA'] = 1000*(df['ultimo index']/1614.62)*((1+df['proj IPCA'])**df['time fraction'])

# Now that everything is done, we can drop the non-business days from our dataframe.

df = df[df['dia util']]
df_vna = df['VNA']
df_vna.plot()

# ---
# ## Pricing the Bonds
#
# First we just arrange the daily data we are going to use (VNA and the bond yields) in a DataFrame.

# +
df = pd.concat([df['VNA'], df_diario['Yield B50']], axis=1).dropna(how='all')
df['Yield B50'] = df['Yield B50'].fillna(method='ffill')
df = df.dropna(how='any')
df['Cupom'] = 0
display(df)
# -

# The following cell creates a series of the 15ths of every August and February. I suggest reading the documentation for `pd.date_range` to understand exactly how it is doing so. After that, it checks if each date is a business day and, if it is not, grabs the next following business day.

dcf_dates = pd.date_range(start='2012-08-15', end='2050-08-15', freq='12SMS')
dcf_dates = dc.busdateroll(dcf_dates, 'following')
df_dcf = pd.DataFrame(index=dcf_dates)

# Now, for each day, it builds the cashflows of the bond, finds the daycounts for all of them, discounts them using the yield for that day and saves the result. It also checks if there is a coupon payment on that date and, if so, saves its value.

for d in tqdm(df.index, 'Pricing'):
    vna_d = df.loc[d, 'VNA']
    rate_d = df.loc[d, 'Yield B50']/100
    df_dcf['DU'] = dc.days(d, df_dcf.index)
    df_dcf['Fluxo'] = ((1.06**0.5) - 1) * vna_d
    df_dcf.loc['2050-08-15', 'Fluxo'] = df_dcf.loc['2012-08-15', 'Fluxo'] + vna_d
    df_dcf['Fluxo Descontado'] = df_dcf['Fluxo']/((1+rate_d)**(df_dcf['DU']/252))
    df.loc[d, 'PU'] = df_dcf['Fluxo Descontado'].sum()

    if d in dcf_dates:
        df.loc[d, 'Cupom'] = ((1.06**0.5) - 1) * vna_d

# Now we have a series for the price of the NTN-Bs, which is not yet our tracker, since we still have to take the coupon payments into account.
#
# ---
# # Building the Tracker
# By dividing the column of coupon payments by the column of the price of the bond, we get how many bonds we can buy on the coupon paying dates. We have to be careful to shift dates by one day, in order to match the dates on which we actually get the cash from the coupon.
#
# We then multiply the amount of bonds we are holding by their price to get the notional of our portfolio.

df['Quantidade'] = 1 + (df['Cupom'].shift(1, fill_value=0)/df['PU']).expanding().sum()
df['Notional'] = df['Quantidade']*df['PU']

# The chart below shows how the holdings of bonds increase over time with the coupon payments.

df['Quantidade'].plot()

# This one shows the difference between the bond price (PU) and the total return index (Notional).

df[['PU', 'Notional']].plot()

# The percent returns of both series are very similar. In fact, they only differ on the dates of coupon payments.
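# Before plotting that relationship, we can verify the claim numerically. This is only a sketch that uses the columns built above: outside the days that follow a coupon payment, the daily returns of `PU` and `Notional` should match exactly.

# +
rets = df[['PU', 'Notional']].pct_change()
no_coupon_effect = df['Cupom'].shift(1, fill_value=0) == 0  # days whose return is unaffected by a coupon received the day before
(rets.loc[no_coupon_effect, 'PU'] - rets.loc[no_coupon_effect, 'Notional']).abs().max()
# -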
import seaborn seaborn.regplot(data=df[['PU', 'Notional']].pct_change(1), x='PU', y='Notional') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.4 64-bit (''conda-env'': conda)' # language: python # name: python37464bitcondaenvconda3f4d7783b740440a9c80fd58cd6c0e7e # --- from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .appName("reading csv") \ .getOrCreate() # ### SparkSession.builder will return a builder. type(SparkSession.builder) type(SparkSession.builder.\ appName("reading csv")) # ### SparkSession.Builder can accept # # - SparkConf # - SparkSession‘s own configuration from pyspark.conf import SparkConf type(SparkSession.builder.\ appName("reading csv").\ config(conf=SparkConf())) type(SparkSession.builder.\ config("spark.some.config.option", "some-value")) type(SparkSession.builder.\ appName("reading csv").\ config(conf=SparkConf()).\ enableHiveSupport()) # ### Call GetOrCreate to return the SparkSession. type(SparkSession.builder.\ appName("reading csv").\ config(conf=SparkConf()).\ enableHiveSupport().\ getOrCreate()) # ## References # # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np df = pd.read_table('species_counts.tsv', header=0, index_col=False, dtype={'country_s':str, '#_Records':np.float64, '#_Species':np.float64, 'geonames_id':np.int32}) geo = pd.read_table('geography_data.txt', header=0, index_col=False, dtype={'country_g':str, 'geonames_id':np.int32, 'area_km2':np.float64, 'percent_water':np.float64, 'population_density':np.float64, 'gdp_nominal':np.float64, 'gdp_ppp':np.float64, 'gini':np.float64, 'hdi':np.float64}) eco = pd.read_table('ecoregions.tsv', header=0, index_col=False, dtype={'ecozone':str, 'biome':str, 'ecoregion':str, 'country_e':str, 'geonames_id':np.int32}) print(list(df)) df.set_index(['geonames_id']) geo.set_index('geonames_id') eco.set_index('geonames_id') all_data = pd.concat([df, geo, eco], axis=1, join_axes=[df.index]) print('complete') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import control import numpy as np import matplotlib.pyplot as plt import kontrol.regulator s = control.tf("s") k = np.random.random() q = np.random.randint(1, 100) wn = np.random.random() plant = k*wn**2 / (s**2 + wn/q*s + wn**2) kd = kontrol.regulator.feedback.critical_damping(plant) regulator = kontrol.regulator.predefined.pid(kd=kd) ki = kontrol.regulator.feedback.add_integral_control( plant, regulator) _, _, _, _, ugf_kd, _ = control.stability_margins( regulator*plant, returnall=True) _, _, _, _, ugf_ki, _ = control.stability_margins( ki/s*plant, returnall=True) oltf_kd = kd*s*plant oltf_ki = ki/s*plant f = np.logspace(-3, 1, 1000) plt.loglog(f, abs(oltf_kd(1j*2*np.pi*f))) plt.loglog(f, abs(oltf_ki(1j*2*np.pi*f))) plt.loglog(f, abs((ki/s*plant.dcgain())(1j*2*np.pi*f))) plt.vlines(ugf_ki/2/np.pi, min(f), max(f)) # - ugf_kd/2/np.pi ugf_ki/2/np.pi # + import numpy as np import scipy.special import kontrol.sensact """Tests for kontrol.sensact.calibration.calibrate""" # Test method="linear" xdata = np.linspace(-1, 1, 1000) m = 
np.random.random() c = np.random.random() ydata = m*xdata + c ## Tests exception try: kontrol.sensact.calibrate(xdata=xdata, ydata=ydata, method="abc") raise except ValueError: pass slope, intercept, linear_range, model = kontrol.sensact.calibrate( xdata=xdata, ydata=ydata, method="linear", return_linear_range=True, return_model=True) assert np.allclose([m, c], [slope, intercept]) assert np.allclose(ydata, model(xdata)) # Test methor="erf" xdata = np.linspace(-3, 3, 1000) a = 1 b = 1 c = 0 d = 0 ydata = a*scipy.special.erf(b*(xdata-c)) + d slope, intercept, linear_range, model = kontrol.sensact.calibrate( xdata=xdata, ydata=ydata, method="erf", return_linear_range=True, return_model=True) assert np.allclose( [model.amplitude, model.slope, model.x_offset, model.y_offset], [a, b, c, d], rtol=1e-3, atol=1e-3) assert np.allclose(model(xdata), ydata, rtol=1e-3, atol=1e-3) # - import numpy as np np.random.random() print([model.amplitude, model.slope, model.x_offset, model.y_offset], [a, b, c, d]) abs(ydata-model(xdata)) # ?np.allclose model(xdata) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="IYfX0b4Qy9yk" r1 = float(input('Digite o lado A ')) r2 = float(input('Digite o lado B ')) r3 = float(input('Digite o lado C ')) if r1 < r2 + r3 and r2 < r1 + r3 and r3 < r1 + r2: print ('Os seguimentos acima podem ser um triangulo', end='') if r1==r2==r3: print(' Equilátero') elif r1 != r2 != r3 !=r1: print (' Escaleno') else: print(' Isoscele') else: print('Não é triangulo') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py37_pytorch] # language: python # name: conda-env-py37_pytorch-py # --- # # Comparing matrix factorization with transformers for MovieLens recommendations using PyTorch-accelerated. # By # The package versions used are: torch==1.10.0 torchmetrics==0.6.0 pytorch-accelerated==0.1.7 # + from pathlib import Path import numpy as np import pandas as pd from statsmodels.distributions.empirical_distribution import ECDF import matplotlib.pyplot as plt # - # ## Prepare Data # For our dataset, we shall use MovieLens-1M, a collection of one million ratings from 6000 users on 4000 movies. This dataset was collected and is maintained by GroupLens, a research group at the University of Minnesota, and released in 2003; it has been frequently used in the Machine Learning community and is commonly presented as a benchmark in academic papers. 
# ### Download Movielens-1M dataset # !wget http://files.grouplens.org/datasets/movielens/ml-1m.zip # !unzip ml-1m.zip # ## Load Data # MovieLens consists of three files, 'movies.dat', 'users.dat', and 'ratings.dat', which have the following formats: dataset_path = Path('ml-1m') # + users = pd.read_csv( dataset_path/"users.dat", sep="::", names=["user_id", "sex", "age_group", "occupation", "zip_code"], encoding='latin-1', engine='python' ) ratings = pd.read_csv( dataset_path/"ratings.dat", sep="::", names=["user_id", "movie_id", "rating", "unix_timestamp"], encoding='latin-1', engine='python' ) movies = pd.read_csv( dataset_path/"movies.dat", sep="::", names=["movie_id", "title", "genres"], encoding='latin-1', engine='python' ) # - users movies ratings # Let's combine some of this information into a single DataFrame, to make it easier for us to work with. ratings_df = pd.merge(ratings, movies)[['user_id', 'title', 'rating', 'unix_timestamp']] ratings_df["user_id"] = ratings_df["user_id"].astype(str) # Using pandas, we can print some high-level statistics about the dataset, which may be useful to us. # + ratings_per_user = ratings_df.groupby('user_id').rating.count() ratings_per_item = ratings_df.groupby('title').rating.count() print(f"Total No. of users: {len(ratings_df.user_id.unique())}") print(f"Total No. of items: {len(ratings_df.title.unique())}") print("\n") print(f"Max observed rating: {ratings_df.rating.max()}") print(f"Min observed rating: {ratings_df.rating.min()}") print("\n") print(f"Max no. of user ratings: {ratings_per_user.max()}") print(f"Min no. of user ratings: {ratings_per_user.min()}") print(f"Median no. of ratings per user: {ratings_per_user.median()}") print("\n") print(f"Max no. of item ratings: {ratings_per_item.max()}") print(f"Min no. of item ratings: {ratings_per_item.min()}") print(f"Median no. of ratings per item: {ratings_per_item.median()}") # - # From this, we can see that all ratings are between 1 and 5 and every item has been rated at least once. As every user has rated at least 20 movies, we don't have to worry about the case of how to recommend items to a user where we know nothing about their preferences - but this is often not the case in the real world! # ### Splitting into training and validation sets # Before we start modeling, we need to split this dataset into training and validations sets. Often, splitting the dataset is done by randomly sampling a selection of rows, which is a good approach in some cases. However, as we intend to train a transformer model on sequences of ratings, this approach will not work for our purposes. This is because, if we were to simply remove a set of random rows, this is not a good representation of the task that we are trying to model; as it is likely that, for some users, ratings from the middle of a sequence will end up in the validation set. # # To avoid this, one approach would be to use a strategy known as 'leave-one-out' validation, in which we select the last chronological rating for each user, given that they have rated some number of items greater than a defined threshold. As this is a good representation of the approach we are trying to model, this is the approach we shall use here. 
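# To make the idea concrete before we implement it on the full DataFrame, here is the same logic applied to a tiny, invented frame (the column names mirror `ratings_df`, but the values are made up purely for illustration): sort by timestamp, group by user, and mark each user's chronologically last rating as validation.

# +
toy = pd.DataFrame({
    'user_id': ['1', '1', '1', '2', '2'],
    'title': ['A', 'B', 'C', 'D', 'E'],
    'unix_timestamp': [10, 20, 30, 50, 40],
})

# keep each user's last rating (by timestamp, not by row order) as the validation example
last_per_user = toy.sort_values('unix_timestamp').groupby('user_id').tail(1)
toy['is_valid'] = toy.index.isin(last_per_user.index)
toy
# -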
# # Let's define a function to get the last n for each user def get_last_n_ratings_by_user( df, n, min_ratings_per_user=1, user_colname="user_id", timestamp_colname="unix_timestamp" ): return ( df.groupby(user_colname) .filter(lambda x: len(x) >= min_ratings_per_user) .sort_values(timestamp_colname) .groupby(user_colname) .tail(n) .sort_values(user_colname) ) get_last_n_ratings_by_user(ratings_df, 1) # We can now use this to define another function to mark the last n ratings per user as our validation set; representing this using the is_valid column: def mark_last_n_ratings_as_validation_set( df, n, min_ratings=1, user_colname="user_id", timestamp_colname="unix_timestamp" ): """ Mark the chronologically last n ratings as the validation set. This is done by adding the additional 'is_valid' column to the df. :param df: a DataFrame containing user item ratings :param n: the number of ratings to include in the validation set :param min_ratings: only include users with more than this many ratings :param user_id_colname: the name of the column containing user ids :param timestamp_colname: the name of the column containing the imestamps :return: the same df with the additional 'is_valid' column added """ df["is_valid"] = False df.loc[ get_last_n_ratings_by_user( df, n, min_ratings, user_colname=user_colname, timestamp_colname=timestamp_colname, ).index, "is_valid", ] = True return df # Applying this to our DataFrame, we can see that we now have a validation set of 6040 rows - one for each user. mark_last_n_ratings_as_validation_set(ratings_df, 1) train_df = ratings_df[ratings_df.is_valid==False] valid_df = ratings_df[ratings_df.is_valid==True] len(valid_df) # Even when considering model benchmarks on the same dataset, to have a fair comparison, it is important to understand how the data has been split and to make sure that the approaches taken are consistent! # ## Creating a Baseline Model # When starting a new modeling task, it is often a good idea to create a very simple model - known as a baseline model - to perform the task in a straightforward way that requires minimal effort to implement. We can then use the metrics from this model as a comparison for all future approaches; if a complex model is getting worse results than the baseline model, this is a bad sign! # # Here, an approach that we can use for this is to simply predict the average rating for every movie, irrespective of context. As the mean can be heavily affected by outliers, let's use the median for this. We can easily calculate the median rating from our training set as follows: median_rating = train_df.rating.median(); median_rating # We can then use this as the prediction for every rating in the validation set and calculate our metrics: # + import math from sklearn.metrics import mean_squared_error, mean_absolute_error predictions = np.array([median_rating]* len(valid_df)) mae = mean_absolute_error(valid_df.rating, predictions) mse = mean_squared_error(valid_df.rating, predictions) rmse = math.sqrt(mse) print(f'mae: {mae}') print(f'mse: {mse}') print(f'rmse: {rmse}') # - # ## Matrix factorization with bias # One very popular approach toward recommendations, both in academia and industry, is matrix factorization. # # In addition to representing recommendations in a table, such as our DataFrame, an alternative view would be to represent a set of user-item ratings as a matrix. 
We can visualize this on a sample of our data as presented below: ratings_df[((ratings_df.user_id == '1') | (ratings_df.user_id == '2')| (ratings_df.user_id == '4')) & ((ratings_df.title == "One Flew Over the Cuckoo's Nest (1975)") | (ratings_df.title == "To Kill a Mockingbird (1962)")| (ratings_df.title == "Saving Private Ryan (1998)"))].pivot_table('rating', index='user_id', columns='title').fillna('?') # As not every user will have rated every movie, we can see that some values are missing. Therefore, we can formulate our recommendation problem in the following way: # # How can we fill in the blanks, such that the values are consistent with the existing ratings in the matrix? # # One way that we can approach this is by considering that there are two smaller matrices that can be multiplied together to make our ratings matrix. # Before we think about training a model, we first need to get the data into the correct format. Currently, we have a title that represents each movie, which is a string; we need to convert this to an integer format so that we can feed it into the model. While we already have an ID representing each user, let's also create our own encoding for this. I generally find it good practice to control all the encodings related to a training process, rather than relying on predefined ID systems defined elsewhere; you will be surprised how many IDs that are supposed to be immutable and unique turn out to be otherwise in the real world! # # Here, we can do this very simply by enumerating every unique value for both users and movies. We can create lookup tables for this as shown below: user_lookup = {v: i+1 for i, v in enumerate(ratings_df['user_id'].unique())} movie_lookup = {v: i+1 for i, v in enumerate(ratings_df['title'].unique())} # Now that we can encode our features, as we are using PyTorch, we need to define a Dataset to wrap our DataFrame and return the user-item ratings. # + from torch.utils.data import Dataset class UserItemRatingDataset(Dataset): def __init__(self, df, movie_lookup, user_lookup): self.df = df self.movie_lookup = movie_lookup self.user_lookup = user_lookup def __getitem__(self, index): row = self.df.iloc[index] user_id = self.user_lookup[row.user_id] movie_id = self.movie_lookup[row.title] rating = torch.tensor(row.rating, dtype=torch.float32) return (user_id, movie_id), rating def __len__(self): return len(self.df) # - # We can now use this to create our training and validation datasets: train_dataset = UserItemRatingDataset(train_df, movie_lookup, user_lookup) valid_dataset = UserItemRatingDataset(valid_df, movie_lookup, user_lookup) # Next, let's define the model. 
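# Before doing so, it is worth sanity-checking a single sample from the Dataset above; this is purely an illustrative peek at what it returns (note that `__getitem__` already relies on `torch`, so we import it here).

# +
import torch  # needed by the Dataset's __getitem__ above

(sample_user, sample_movie), sample_rating = train_dataset[0]
(sample_user, sample_movie, sample_rating)  # integer ids from the lookups plus a float32 rating tensor
# -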
# + import torch from torch import nn class MfDotBias(nn.Module): def __init__( self, n_factors, n_users, n_items, ratings_range=None, use_biases=True ): super().__init__() self.bias = use_biases self.y_range = ratings_range self.user_embedding = nn.Embedding(n_users+1, n_factors, padding_idx=0) self.item_embedding = nn.Embedding(n_items+1, n_factors, padding_idx=0) if use_biases: self.user_bias = nn.Embedding(n_users+1, 1, padding_idx=0) self.item_bias = nn.Embedding(n_items+1, 1, padding_idx=0) def forward(self, inputs): users, items = inputs dot = self.user_embedding(users) * self.item_embedding(items) result = dot.sum(1) if self.bias: result = ( result + self.user_bias(users).squeeze() + self.item_bias(items).squeeze() ) if self.y_range is None: return result else: return ( torch.sigmoid(result) * (self.y_range[1] - self.y_range[0]) + self.y_range[0] ) # - # As we can see, this is very simple to define. Note that because an embedding layer is simply a lookup table, it is important that when we specify the size of the embedding layer, it must contain any value that will be seen during training and evaluation. Because of this, we will use the number of unique items observed in the full dataset to do this, not just the training set. We have also specified a padding embedding at index 0, which can be used for any unknown values. PyTorch handles this by setting this entry to a zero-vector, which is not updated during training. # # Additionally, as this is a regression task, the range that the model could predict is potentially unbounded. While the model can learn to restrict the output values to between 1 and 5, we can make this easier for the model by modifying the architecture to restrict this range prior to training. We have done this by applying the sigmoid function to the model's output - which restricts the range to between 0 and 1 - and then scaling this to within a range that we can define. # ### Train with PyTorch accelerated # At this point, we would usually start writing the training loop; however, as we are using pytorch-accelerated, this will largely be taken care of for us. However, as pytorch-accelerated tracks only the training and validation losses by default, let's create a callback to track our metrics. 
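# The callback below leans on the `update` / `compute` / `reset` cycle of `torchmetrics`; a minimal sketch of that pattern in isolation (with made-up tensors) looks like this.

# +
import torchmetrics

metric_collection = torchmetrics.MetricCollection(
    {"mse": torchmetrics.MeanSquaredError(), "mae": torchmetrics.MeanAbsoluteError()}
)
metric_collection.update(torch.tensor([3.0, 4.0]), torch.tensor([3.5, 4.0]))  # accumulate one batch of (preds, targets)
metric_collection.update(torch.tensor([1.0]), torch.tensor([2.0]))            # ...and another
print(metric_collection.compute())  # aggregated over everything seen since the last reset
metric_collection.reset()
# -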
# + from functools import partial from pytorch_accelerated import Trainer, notebook_launcher from pytorch_accelerated.trainer import TrainerPlaceholderValues, DEFAULT_CALLBACKS from pytorch_accelerated.callbacks import EarlyStoppingCallback, SaveBestModelCallback, TrainerCallback, StopTrainingError import torchmetrics # - # Let's create a callback to track our metrics class RecommenderMetricsCallback(TrainerCallback): def __init__(self): self.metrics = torchmetrics.MetricCollection( { "mse": torchmetrics.MeanSquaredError(), "mae": torchmetrics.MeanAbsoluteError(), } ) def _move_to_device(self, trainer): self.metrics.to(trainer.device) def on_training_run_start(self, trainer, **kwargs): self._move_to_device(trainer) def on_evaluation_run_start(self, trainer, **kwargs): self._move_to_device(trainer) def on_eval_step_end(self, trainer, batch, batch_output, **kwargs): preds = batch_output["model_outputs"] self.metrics.update(preds, batch[1]) def on_eval_epoch_end(self, trainer, **kwargs): metrics = self.metrics.compute() mse = metrics["mse"].cpu() trainer.run_history.update_metric("mae", metrics["mae"].cpu()) trainer.run_history.update_metric("mse", mse) trainer.run_history.update_metric("rmse", math.sqrt(mse)) self.metrics.reset() # Now, all that is left to do is to train the model. PyTorch-accelerated provides a notebook_launcher function, which enables us to run multi-GPU training runs from within a notebook. To use this, all we need to do is to define a training function that instantiates our Trainer object and calls the train method. # # Components such as the model and dataset can be defined anywhere in the notebook, but it is important that the trainer is only ever instantiated within a training function. def train_mf_model(): model = MfDotBias( 120, len(user_lookup), len(movie_lookup), ratings_range=[0.5, 5.5] ) loss_func = torch.nn.MSELoss() optimizer = torch.optim.AdamW(model.parameters(), lr=0.01) create_sched_fn = partial( torch.optim.lr_scheduler.OneCycleLR, max_lr=0.01, epochs=TrainerPlaceholderValues.NUM_EPOCHS, steps_per_epoch=TrainerPlaceholderValues.NUM_UPDATE_STEPS_PER_EPOCH, ) trainer = Trainer( model=model, loss_func=loss_func, optimizer=optimizer, callbacks=( RecommenderMetricsCallback, *DEFAULT_CALLBACKS, SaveBestModelCallback(watch_metric="mae"), EarlyStoppingCallback( early_stopping_patience=2, early_stopping_threshold=0.001, watch_metric="mae", ), ), ) trainer.train( train_dataset=train_dataset, eval_dataset=valid_dataset, num_epochs=30, per_device_batch_size=512, create_scheduler_fn=create_sched_fn, ) notebook_launcher(train_mf_model, num_processes=2) # Comparing this to our baseline, we can see that there is an improvement! # ## Sequential recommendations using a transformer # Using matrix factorization, we are treating each rating as being independent from the ratings around it; however, incorporating information about other movies that a user recently rated could provide an additional signal that could boost performance. For example, suppose that a user is watching a trilogy of films; if they have rated the first two instalments highly, it is likely that they may do the same for the finale! # # One way that we can approach this is to use a transformer network, specifically the encoder portion, to encode additional context into the learned embeddings for each movie, and then using a fully connected neural network to make the rating predictions. 
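# Before pre-processing the data, here is a small, self-contained sketch of the encoder component we are about to build around (random tensors, purely illustrative): a `nn.TransformerEncoder` takes a batch of embedded sequences together with a boolean `src_key_padding_mask`, in which `True` marks padding positions that attention should ignore.

# +
from torch import nn

batch_size, seq_len, emb_size = 2, 10, 120
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=emb_size, nhead=12, batch_first=True),
    num_layers=1,
)

x = torch.randn(batch_size, seq_len, emb_size)              # a batch of embedded movie sequences
pad_mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)
pad_mask[0, :4] = True                                       # pretend the first sequence starts with 4 padding tokens

encoder(x, src_key_padding_mask=pad_mask).shape              # output keeps the (batch, seq_len, emb_size) shape
# -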
# ### Pre-processing the data # The first step is to process our data so that we have a time-sorted list of movies for each user. Let's start by grouping all the ratings by user: grouped_ratings = ratings_df.sort_values(by='unix_timestamp').groupby('user_id').agg(tuple).reset_index() grouped_ratings # Now that we have grouped by user, we can create an additional column so that we can see the number of events associated with each user grouped_ratings['num_ratings'] = grouped_ratings['rating'].apply(lambda row: len(row)) # Let's take a look at the new dataframe grouped_ratings # Now that we have grouped all the ratings for each user, let's divide these into smaller sequences. To make the most out of the data, we would like the model to have the opportunity to predict a rating for every movie in the training set. To do this, let's specify a sequence length s and use the previous s-1 ratings as our user history. # # As the model expects each sequence to be a fixed length, we will fill empty spaces with a padding token, so that sequences can be batched and passed to the model. Let's create a function to do this. # # We are going to arbitrarily choose a length of 10 here. sequence_length = 10 def create_sequences(values, sequence_length): sequences = [] for i, v in enumerate(values): seq = values[:i+1] if len(seq) > sequence_length: seq = seq[i-sequence_length+1:i+1] elif len(seq) < sequence_length: seq =(*(['[PAD]'] * (sequence_length - len(seq))), *seq) sequences.append(seq) return sequences # To visualize how this function works, let's apply it, with a sequence length of 3, to the first 10 movies rated by the first user. These movies are: grouped_ratings.iloc[0]['title'][:10] # Applying our function, we have: create_sequences(grouped_ratings.iloc[0]['title'][:10], 3) # As we can see, we have 10 sequences of length 3, where the final movie in the sequence is unchanged from the original list. # Now, let's apply this function to all of the features in our dataframe grouped_cols = ['title', 'rating', 'unix_timestamp', 'is_valid'] for col in grouped_cols: grouped_ratings[col] = grouped_ratings[col].apply(lambda x: create_sequences(x, sequence_length)) grouped_ratings.head(2) # Currently, we have one row that contains all the sequences for a certain user. However, during training, we would like to create batches made up of sequences from many different users. To do this, we will have to transform the data so that each sequence has its own row, while remaining associated with the user ID. We can use the pandas 'explode' function for each feature, and then aggregate these DataFrames together. exploded_ratings = grouped_ratings[['user_id', 'title']].explode('title', ignore_index=True) dfs = [grouped_ratings[[col]].explode(col, ignore_index=True) for col in grouped_cols[1:]] seq_df = pd.concat([exploded_ratings, *dfs], axis=1) seq_df.head() # Now, we can see that each sequence has its own row. However, for the is_valid column, we don't care about the whole sequence and only need the last value as this is the movie for which we will be trying to predict the rating. Let's create a function to extract this value and apply it to these columns. # + def get_last_entry(sequence): return sequence[-1] seq_df['is_valid'] = seq_df['is_valid'].apply(get_last_entry) # - seq_df # Also, to make it easy to access the rating that we are trying to predict, let's separate this into its own column. 
seq_df['target_rating'] = seq_df['rating'].apply(get_last_entry) seq_df['previous_ratings'] = seq_df['rating'].apply(lambda seq: seq[:-1]) seq_df.drop(columns=['rating'], inplace=True) # To prevent the model from including padding tokens when calculating attention scores, we can provide an attention mask to the transformer; the mask should be 'True' for a padding token and 'False' otherwise. Let's calculate this for each row, as well as creating a column to show the number of padding tokens present. seq_df['pad_mask'] = seq_df['title'].apply(lambda x: (np.array(x) == '[PAD]')) seq_df['num_pads'] = seq_df['pad_mask'].apply(sum) seq_df['pad_mask'] = seq_df['pad_mask'].apply(lambda x: x.tolist()) # in case we serialize later # Let's inspect the transformed data seq_df # All looks as it should! Let's split this into training and validation sets and save this. train_seq_df = seq_df[seq_df.is_valid == False] valid_seq_df = seq_df[seq_df.is_valid == True] # ### Training the model # As we saw previously, before we can feed this data into the model, we need to create lookup tables to encode our movies and users. However, this time, we need to include the padding token in our movie lookup. user_lookup = {v: i+1 for i, v in enumerate(ratings_df['user_id'].unique())} def create_feature_lookup(df, feature): lookup = {v: i+1 for i, v in enumerate(df[feature].unique())} lookup['[PAD]'] = 0 return lookup movie_lookup = create_feature_lookup(ratings_df, 'title') # Now, we are dealing with sequences of ratings, rather than individual ones, so we will need to create a new dataset to wrap our processed DataFrame: class MovieSequenceDataset(Dataset): def __init__(self, df, movie_lookup, user_lookup): super().__init__() self.df = df self.movie_lookup = movie_lookup self.user_lookup = user_lookup def __len__(self): return len(self.df) def __getitem__(self, index): data = self.df.iloc[index] user_id = self.user_lookup[str(data.user_id)] movie_ids = torch.tensor([self.movie_lookup[title] for title in data.title]) previous_ratings = torch.tensor( [rating if rating != "[PAD]" else 0 for rating in data.previous_ratings] ) attention_mask = torch.tensor(data.pad_mask) target_rating = data.target_rating encoded_features = { "user_id": user_id, "movie_ids": movie_ids, "ratings": previous_ratings, } return (encoded_features, attention_mask), torch.tensor( target_rating, dtype=torch.float32 ) train_dataset = MovieSequenceDataset(train_seq_df, movie_lookup, user_lookup) valid_dataset = MovieSequenceDataset(valid_seq_df, movie_lookup, user_lookup) # Now, let's define our transformer model! As a start, given that the matrix factorization model can achieve good performance using only the user and movie ids, let's only include this information for now. 
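# Before defining it, a quick look at the shapes coming out of this Dataset helps when reading the forward pass (an illustrative check only): the movie ids and padding mask cover the full sequence of 10 positions, while the previous ratings only cover the 9 movies before the one we are predicting for (the ratings are not used by the first model below, but will be shortly).

# +
(sample_features, sample_mask), sample_target = train_dataset[0]

# expected: torch.Size([10]) movie ids, torch.Size([9]) previous ratings, torch.Size([10]) mask, scalar target
(sample_features["movie_ids"].shape, sample_features["ratings"].shape, sample_mask.shape, sample_target)
# -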
class BstTransformer(nn.Module): def __init__( self, movies_num_unique, users_num_unique, sequence_length=10, embedding_size=120, num_transformer_layers=1, ratings_range=(0.5, 5.5), ): super().__init__() self.sequence_length = sequence_length self.y_range = ratings_range self.movies_embeddings = nn.Embedding( movies_num_unique + 1, embedding_size, padding_idx=0 ) self.user_embeddings = nn.Embedding(users_num_unique + 1, embedding_size) self.position_embeddings = nn.Embedding(sequence_length, embedding_size) self.encoder = nn.TransformerEncoder( encoder_layer=nn.TransformerEncoderLayer( d_model=embedding_size, nhead=12, dropout=0.1, batch_first=True, activation="gelu", ), num_layers=num_transformer_layers, ) self.linear = nn.Sequential( nn.Linear( embedding_size + (embedding_size * sequence_length), 1024, ), nn.BatchNorm1d(1024), nn.Mish(), nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.Mish(), nn.Dropout(0.2), nn.Linear(512, 256), nn.BatchNorm1d(256), nn.Mish(), nn.Linear(256, 1), nn.Sigmoid(), ) def forward(self, inputs): features, mask = inputs encoded_user_id = self.user_embeddings(features["user_id"]) user_features = encoded_user_id encoded_movies = self.movies_embeddings(features["movie_ids"]) positions = torch.arange( 0, self.sequence_length, 1, dtype=int, device=features["movie_ids"].device ) positions = self.position_embeddings(positions) transformer_features = encoded_movies + positions transformer_output = self.encoder( transformer_features, src_key_padding_mask=mask ) transformer_output = torch.flatten(transformer_output, start_dim=1) combined_output = torch.cat((transformer_output, user_features), dim=1) rating = self.linear(combined_output) rating = rating.squeeze() if self.y_range is None: return rating else: return rating * (self.y_range[1] - self.y_range[0]) + self.y_range[0] # We can see that, as a default, we feed our sequence of movie embeddings into a single transformer layer, before concatenating the output with the user features - here, just the user ID - and using this as the input to a fully connected network. Here, we are using only a simple positional encoding that is learned to represent the sequence in which the movies were rated; using a sine- and cosine-based approach provided no benefit during my experiments, but feel free to try it out if you are interested! # # Once again, let's define a training function for this model; except for the model initialization, this is identical to the one we used to train the matrix factorization model. def train_seq_model(): model = BstTransformer( len(movie_lookup), len(user_lookup), sequence_length, embedding_size=120 ) loss_func = torch.nn.MSELoss() optimizer = torch.optim.AdamW(model.parameters(), lr=0.01) create_sched_fn = partial( torch.optim.lr_scheduler.OneCycleLR, max_lr=0.01, epochs=TrainerPlaceholderValues.NUM_EPOCHS, steps_per_epoch=TrainerPlaceholderValues.NUM_UPDATE_STEPS_PER_EPOCH, ) trainer = Trainer( model=model, loss_func=loss_func, optimizer=optimizer, callbacks=( RecommenderMetricsCallback, *DEFAULT_CALLBACKS, SaveBestModelCallback(watch_metric="mae"), EarlyStoppingCallback( early_stopping_patience=2, early_stopping_threshold=0.001, watch_metric="mae", ), ), ) trainer.train( train_dataset=train_dataset, eval_dataset=valid_dataset, num_epochs=10, per_device_batch_size=512, create_scheduler_fn=create_sched_fn, ) notebook_launcher(train_seq_model, num_processes=2) # We can see that this is a significant improvement over the matrix factorization approach! 
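# Before adding more features, it is worth sketching how predictions come out of this model at inference time. The snippet below is illustrative only: it uses a freshly constructed (untrained) `BstTransformer` purely to show the batching mechanics; for real predictions you would first reload the weights persisted during training (presumably via the checkpoint written by `SaveBestModelCallback`, however you choose to store it).

# +
from torch.utils.data import DataLoader

seq_model = BstTransformer(len(movie_lookup), len(user_lookup), sequence_length, embedding_size=120)
seq_model.eval()  # for real use, load trained weights before this step

(features, mask), targets = next(iter(DataLoader(valid_dataset, batch_size=4)))
with torch.no_grad():
    preds = seq_model((features, mask))

(preds, targets)  # predicted ratings (within ratings_range) next to the actual last rating of each sequence
# -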
# ### Adding additional data # So far, we have only considered the user ID and a sequence of movie IDs to predict the rating; it seems likely that including information about the previous ratings made by the user would improve performance. Thankfully, this is easy to do, and the data is already being returned by our dataset. Let's tweak our architecture to include this: class BstTransformer(nn.Module): def __init__( self, movies_num_unique, users_num_unique, sequence_length=10, embedding_size=120, num_transformer_layers=1, ratings_range=(0.5, 5.5), ): super().__init__() self.sequence_length = sequence_length self.y_range = ratings_range self.movies_embeddings = nn.Embedding( movies_num_unique + 1, embedding_size, padding_idx=0 ) self.user_embeddings = nn.Embedding(users_num_unique + 1, embedding_size) self.ratings_embeddings = nn.Embedding(6, embedding_size, padding_idx=0) self.position_embeddings = nn.Embedding(sequence_length, embedding_size) self.encoder = nn.TransformerEncoder( encoder_layer=nn.TransformerEncoderLayer( d_model=embedding_size, nhead=12, dropout=0.1, batch_first=True, activation="gelu", ), num_layers=num_transformer_layers, ) self.linear = nn.Sequential( nn.Linear( embedding_size + (embedding_size * sequence_length), 1024, ), nn.BatchNorm1d(1024), nn.Mish(), nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.Mish(), nn.Dropout(0.2), nn.Linear(512, 256), nn.BatchNorm1d(256), nn.Mish(), nn.Linear(256, 1), nn.Sigmoid(), ) def forward(self, inputs): features, mask = inputs encoded_user_id = self.user_embeddings(features["user_id"]) user_features = encoded_user_id movie_history = features["movie_ids"][:, :-1] target_movie = features["movie_ids"][:, -1] ratings = self.ratings_embeddings(features["ratings"]) encoded_movies = self.movies_embeddings(movie_history) encoded_target_movie = self.movies_embeddings(target_movie) positions = torch.arange( 0, self.sequence_length - 1, 1, dtype=int, device=features["movie_ids"].device, ) positions = self.position_embeddings(positions) encoded_sequence_movies_with_position_and_rating = ( encoded_movies + ratings + positions ) encoded_target_movie = encoded_target_movie.unsqueeze(1) transformer_features = torch.cat( (encoded_sequence_movies_with_position_and_rating, encoded_target_movie), dim=1, ) transformer_output = self.encoder( transformer_features, src_key_padding_mask=mask ) transformer_output = torch.flatten(transformer_output, start_dim=1) combined_output = torch.cat((transformer_output, user_features), dim=1) rating = self.linear(combined_output) rating = rating.squeeze() if self.y_range is None: return rating else: return rating * (self.y_range[1] - self.y_range[0]) + self.y_range[0] # We can see that, to use the ratings data, we have added an additional embedding layer. For each previously rated movie, we then add together the movie embedding, the positional encoding and the rating embedding before feeding this sequence into the transformer. Alternatively, the rating data could be concatenated to, or multiplied with, the movie embedding, but adding them together worked the best out of the approaches that I tried. # # As Jupyter maintains a live state for each class definition, we don't need to update our training function; the new class will be used when we launch training: notebook_launcher(train_seq_model, num_processes=2) # We can see that incorporating the ratings data has improved our results slightly! 
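# A short aside on the additive combination used above (random tensors, just to make the shapes explicit): the movie and rating embeddings are both `(batch, sequence_length - 1, embedding_size)`, while the learned positional embeddings are `(sequence_length - 1, embedding_size)` and broadcast across the batch when the three are summed.

# +
B, S, E = 4, 10, 120
encoded_movies = torch.randn(B, S - 1, E)   # embeddings of the 9 previously rated movies
rating_embs = torch.randn(B, S - 1, E)      # embeddings of the corresponding ratings (index 0 reserved for padding)
positions = torch.randn(S - 1, E)           # positional embeddings, shared across the batch

(encoded_movies + rating_embs + positions).shape  # broadcasts to (B, S - 1, E)
# -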
# ### Adding user features # In addition to the ratings data, we also have more information about the users that we could add into the model. To remind ourselves, let's take a look at the users table: users # Let's try adding in the categorical variables representing the users' sex, age groups, and occupation to the model, and see if we see any improvement. While occupation looks like it is already sequentially numerically encoded, we must do the same for the sex and age_group columns. We can use the 'LabelEncoder' class from scikit-learn to do this for us, and append the encoded columns to the DataFrame: from sklearn.preprocessing import LabelEncoder le = LabelEncoder() users['sex_encoded'] = le.fit_transform(users.sex) users['age_group_encoded'] = le.fit_transform(users.age_group) users["user_id"] = users["user_id"].astype(str) # Now that we have all the features that we are going to use encoded, let's join the user features to our sequences DataFrame, and update our training and validation sets. seq_with_user_features = pd.merge(seq_df, users) train_df = seq_with_user_features[seq_with_user_features.is_valid == False] valid_df = seq_with_user_features[seq_with_user_features.is_valid == True] # Let's update our dataset to include these features. class MovieSequenceDataset(Dataset): def __init__(self, df, movie_lookup, user_lookup): super().__init__() self.df = df self.movie_lookup = movie_lookup self.user_lookup = user_lookup def __len__(self): return len(self.df) def __getitem__(self, index): data = self.df.iloc[index] user_id = self.user_lookup[str(data.user_id)] movie_ids = torch.tensor([self.movie_lookup[title] for title in data.title]) previous_ratings = torch.tensor( [rating if rating != "[PAD]" else 0 for rating in data.previous_ratings] ) attention_mask = torch.tensor(data.pad_mask) target_rating = data.target_rating encoded_features = { "user_id": user_id, "movie_ids": movie_ids, "ratings": previous_ratings, "age_group": data["age_group_encoded"], "sex": data["sex_encoded"], "occupation": data["occupation"], } return (encoded_features, attention_mask), torch.tensor( target_rating, dtype=torch.float32 ) train_dataset = MovieSequenceDataset(train_df, movie_lookup, user_lookup) valid_dataset = MovieSequenceDataset(valid_df, movie_lookup, user_lookup) # We can now modify our architecture to include embeddings for these features and concatenate these embeddings to the output of the transformer; then we pass this into the feed-forward network. 
class BstTransformer(nn.Module): def __init__( self, movies_num_unique, users_num_unique, sequence_length=10, embedding_size=120, num_transformer_layers=1, ratings_range=(0.5, 5.5), ): super().__init__() self.sequence_length = sequence_length self.y_range = ratings_range self.movies_embeddings = nn.Embedding( movies_num_unique + 1, embedding_size, padding_idx=0 ) self.user_embeddings = nn.Embedding(users_num_unique + 1, embedding_size) self.ratings_embeddings = nn.Embedding(6, embedding_size, padding_idx=0) self.position_embeddings = nn.Embedding(sequence_length, embedding_size) self.sex_embeddings = nn.Embedding( 3, 2, ) self.occupation_embeddings = nn.Embedding( 22, 11, ) self.age_group_embeddings = nn.Embedding( 8, 4, ) self.encoder = nn.TransformerEncoder( encoder_layer=nn.TransformerEncoderLayer( d_model=embedding_size, nhead=12, dropout=0.1, batch_first=True, activation="gelu", ), num_layers=num_transformer_layers, ) self.linear = nn.Sequential( nn.Linear( embedding_size + (embedding_size * sequence_length) + 4 + 11 + 2, 1024, ), nn.BatchNorm1d(1024), nn.Mish(), nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.Mish(), nn.Dropout(0.2), nn.Linear(512, 256), nn.BatchNorm1d(256), nn.Mish(), nn.Linear(256, 1), nn.Sigmoid(), ) def forward(self, inputs): features, mask = inputs user_id = self.user_embeddings(features["user_id"]) age_group = self.age_group_embeddings(features["age_group"]) sex = self.sex_embeddings(features["sex"]) occupation = self.occupation_embeddings(features["occupation"]) user_features = user_features = torch.cat( (user_id, sex, age_group, occupation), 1 ) movie_history = features["movie_ids"][:, :-1] target_movie = features["movie_ids"][:, -1] ratings = self.ratings_embeddings(features["ratings"]) encoded_movies = self.movies_embeddings(movie_history) encoded_target_movie = self.movies_embeddings(target_movie) positions = torch.arange( 0, self.sequence_length - 1, 1, dtype=int, device=features["movie_ids"].device, ) positions = self.position_embeddings(positions) encoded_sequence_movies_with_position_and_rating = ( encoded_movies + ratings + positions ) encoded_target_movie = encoded_target_movie.unsqueeze(1) transformer_features = torch.cat( (encoded_sequence_movies_with_position_and_rating, encoded_target_movie), dim=1, ) transformer_output = self.encoder( transformer_features, src_key_padding_mask=mask ) transformer_output = torch.flatten(transformer_output, start_dim=1) combined_output = torch.cat((transformer_output, user_features), dim=1) rating = self.linear(combined_output) rating = rating.squeeze() if self.y_range is None: return rating else: return rating * (self.y_range[1] - self.y_range[0]) + self.y_range[0] notebook_launcher(train_seq_model, num_processes=2) # Here, we can see a slight decrease in the MAE, but a small increase in the MSE and RMSE, so it looks like these features made a negligible difference to the overall performance. # In writing this article, my main objective has been to try and illustrate how these approaches can be used, and so I've picked the hyperparameters somewhat arbitrarily; it's likely that with some hyperparameter tweaks, and different combinations of features, these metrics can probably be improved upon! # # Hopefully this has provided a good introduction to using both matrix factorization and transformer-based approaches in PyTorch, and how pytorch-accelerated can speed up our process when experimenting with different models! 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:ocr] * # language: python # name: conda-env-ocr-py # --- # In this notebook, I am applying some standard text cleaning steps to the text files. These are steps that would be applied in any standard situation and so are necessary to avoid "stacking the deck" against the OCR-generated text. # # These are: # - removing special characters # - reconnecting line endings (changed mind on this idea). # - remove single character words (excluding "a" and "i") # # + import os import re project_directory = "/Users/jeriwieringa/Documents/Research/ocr-and-nlp/" input_directory = "data/text/ocr-generated/" output_directory = "data/text/corrected" # - file_list = os.listdir(os.path.join(project_directory, input_directory)) file_list[:5] def open_and_read(file_name): with open(os.path.join(project_directory, input_directory, file_name), 'r') as f: text_blob = f.read() return text_blob def remove_special_characters(text_blob): # text_blob = re.sub("[^!,.:;-?](?= |$)", "", text_blob) text_blob = re.sub("[^.,?!a-zA-Z0-9 ]", "", text_blob) # text_blob = re.sub("[^\w]", "", text_blob) # text_blob = re.sub("[\n\r]", " ", text_blob) return text_blob def remove_single_character_words(text_blob): "Not working" cleaned = re.sub("(?<=(^|\s))[^aiAI](\s|$)", " ", text_blob) return cleaned def write_to_file(file_name, text_blob): with open(os.path.join(project_directory, output_directory, file_name), 'w') as o: o.write(text_blob) return for file_name in file_list: if not file_name.startswith('.'): contents = open_and_read(file_name) cleaned_contents = remove_special_characters(contents) # cleaned_contents = remove_single_character_words(cleaned_contents) write_to_file(file_name, cleaned_contents) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pickle # + train_sentences = [] train_ancorra_tags = [] train_features = [] dev_sentences= [] dev_ancorra_tags = [] dev_features = [] # - with open('Data/Training_sentences.pkl', 'rb') as f: train_sentences = pickle.load(f) with open('Data/Training_ancorra_tags.pkl', 'rb') as f: train_ancorra_tags = pickle.load(f) with open('Data/Training_features.pkl', 'rb') as f: train_features = pickle.load(f) with open('Data/Development_sentences.pkl', 'rb') as f: dev_sentences = pickle.load(f) with open('Data/Development_ancorra_tags.pkl', 'rb') as f: dev_ancorra_tags = pickle.load(f) with open('Data/Development_features.pkl', 'rb') as f: dev_features = pickle.load(f) # %%time from sklearn_crfsuite import CRF crf = CRF( algorithm='lbfgs', c1=0.01, c2=0.1, max_iterations=100, all_possible_transitions=True ) crf.fit(train_features, train_ancorra_tags) from sklearn_crfsuite import metrics from sklearn_crfsuite import scorers # + pred_ancorra_tags = crf.predict(dev_features) print("F1 score on Dev Data ") print(metrics.flat_f1_score(dev_ancorra_tags, pred_ancorra_tags, average='weighted',labels=crf.classes_)) ### Look at class wise score print(metrics.flat_classification_report( dev_ancorra_tags, pred_ancorra_tags, labels=crf.classes_, digits=3 )) # + test_sentences = [] test_ancorra_tags = [] test_features = [] with open('Data/Testing_sentences.pkl', 'rb') as f: test_sentences = pickle.load(f) with 
open('Data/Testing_ancorra_tags.pkl', 'rb') as f: test_ancorra_tags = pickle.load(f) with open('Data/Testing_features.pkl', 'rb') as f: test_features = pickle.load(f) # - pred_test_ancorra_tags = crf.predict(test_features) print("F1 score on Test Data ") print(metrics.flat_f1_score(test_ancorra_tags, pred_test_ancorra_tags, average='weighted',labels=crf.classes_)) ### Look at class wise score print(metrics.flat_classification_report( test_ancorra_tags, pred_test_ancorra_tags, labels=crf.classes_, digits=3 )) with open('crf_model.pkl', 'wb') as model: pickle.dump(crf, model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import statsmodels.api as sm from statsmodels.sandbox.regression.predstd import wls_prediction_std from statsmodels.regression.linear_model import RegressionResults # + # turns a ascii raster file into a list of values def raster_to_list(filename): data_list = [] with open(filename, 'r') as fn: for line in fn: try: float(line[0]) # see if the line represents data line_list = line.rstrip().split(' ') # reformat line string for value in line_list: data_list.append(float(value)) except: continue return data_list # + # convert the relevant rasters model_output = raster_to_list('ABM_Output_Mean.asc') observations = raster_to_list('cov_density.asc') parcels = raster_to_list('parcels_1910/parcels_1910.asc') water_distance = raster_to_list('DistToWater/disttowater.asc') # + # turn the raster data into a table. Each relevant pixel is a row data_tuples = list(zip(range(len(model_list)), model_output, observations, parcels, water_distance)) df = pd.DataFrame(data_tuples, columns=['ID', 'model', 'observation', 'parcel', 'water_distance']) # only include pixels that were undeveloped at the start of the model run and that aren't water # we know those won't have covenants df = df.loc[(df['parcel'] == 0) & (df['water_distance'] > 0)] # check the head of the dataframe to confirm it looks correcnt df.head(20) # - # display a scatter plot of the relationship between the modeled output and observation data # If the model is good, there should be a positive correlation plt.scatter(df['model'], df['observation']) # + # create a linear regression model of the relationship between the predicted and observed values # data from dataframe X = df['model'].values.reshape(-1, 1) y = df['observation'].values.reshape(-1, 1) # stats model needs a constant to create an intercept X = sm.add_constant(X) model = sm.OLS(y, X) results = model.fit() print(results.summary()) # - residualsdf = pd.DataFrame({'ID': df['ID'], 'Residual': results.resid}) residualsdf.head(20) # + # turn the residuals table into an ascii raster # Import into ArcGIS to check if they are spatially autocorrelated with open('residuals.asc', 'w') as residuals: residuals.write( "ncols 41\nnrows 70\nxllcorner 473979.5318\nyllcorner 4970867.4064\ncellsize 250\nNODATA_value -9999\n" ) for i in range(len(model_output)): try: residuals.write(str(residualsdf.loc[i]['Residual'])) except: residuals.write('0') if i % 41 == 40: residuals.write('\n') else: residuals.write(' ') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:devmar] * # language: python # 
name: conda-env-devmar-py # --- # ## AutoML Experiment using SDK V2 # # Make sure SDK v2 is installed, via. the documentation in this [README.md](https://msdata.visualstudio.com/Vienna/_git/sdk-cli-v2?path=%2FREADME.md&_a=preview). # # Also ensure that the above installation is done in a conda environment where AutoML SDK v1 is already installed. You may also need to install MLFlow to do some operations. (e.g. via. `pip install azureml-mlflow`) # + # utility methods # Currently, there's no SDK v2 equivalent of v1's 'show_output' or 'wait_for_completion' functionality, # that prints the AutoML iteration info def show_output(client, job) -> None: # This doesn't appear to stream anything at the moment client.jobs.stream(created_job.name) def wait_for_completion(client, job, poll_duration: int = 30) -> None: """Poll for job status every `poll_duration` seconds, until it is terminated""" import time from azure.ml._operations.run_history_constants import RunHistoryConstants cur_status = client.jobs.get(job.name).status print("Current job status: ", cur_status) while cur_status not in RunHistoryConstants.TERMINAL_STATUSES: time.sleep(poll_duration) cur_status = client.jobs.get(job.name).status print("Current job status: ", cur_status) def download_outputs(client, job) -> None: # This does not download any logs (no models as well, since this is at the parent run level) client.jobs.download(job.name, download_path="./outputs") # For the child run level, currently this throws an exception saying it's not supported for the job type try: first_child_run = "{}_0".format(job.name) client.jobs.download(first_child_run, download_path="./outputs/") except Exception as e: import traceback print(str(e)) traceback.print_exc() def print_studio_url(job, open_in_new_tab: bool = False) -> None: # TODO: Any easier way to get the URL? print("Studio URL: ", job.interaction_endpoints['Studio'].endpoint) if open_in_new_tab: import webbrowser webbrowser.open(job.interaction_endpoints['Studio'].endpoint) # + # Global imports from azure.ml import MLClient from azure.core.exceptions import ResourceExistsError from azure.ml.entities.workspace.workspace import Workspace from azure.ml.entities.compute.compute import Compute from azure.ml.entities.assets import Data # + subscription_id = '381b38e9-9840-4719-a5a0-61d9585e1e91' resource_group_name = 'gasi_rg_neu' workspace_name = 'gasi_ws_neu' experiment_name = "3-automl-remote-compute-run" # + # Create an MLClient # A resource group must already be existing at this point client = MLClient(subscription_id, resource_group_name) # default_workspace_name=workspace) # + # Set the default workspace for the Client, creating one if it doesn't exist. workspace = Workspace(name=workspace_name) try: client.workspaces.create(workspace) except ResourceExistsError as re: print(re) client.default_workspace_name = workspace_name # + # Set or create compute cpu_cluster_name = "cpucluster" compute = Compute("amlcompute", name=cpu_cluster_name, size="STANDARD_D2_V2", min_instances=0, max_instances=3, idle_time_before_scale_down=120) # Load directly from YAML file # compute = Compute.load("./compute.yaml") try: # TODO: This currently results in an exception in Azure ML, please create compute manually. 
client.compute.create(compute) except ResourceExistsError as re: print(re) except Exception as e: import traceback print("Could not create compute.", str(e)) traceback.print_exc() # + # Upload dataset dataset_name = "train_dataset_beer" training_data = Data(name=dataset_name, version=1, local_path="./data") # Load directly from YAML file # training_data = Data.load("./data.yaml") try: data = client.data.create_or_update(training_data) print("Uploaded to path : ", data.path) print("Datastore location: ", data.datastore) except Exception as e: print("Could not create dataset. ", str(e)) # + # Initialize MLFlow, setting the tracking URI to AzureML, and changing the active experiment import mlflow ##### NOTE: This is SDK v1 API ##### # TODO: How do we get this from MLClient? Tracking URI can't be obtained from v2 Workspace object from azureml.core import Workspace as WorkspaceV1 ws = WorkspaceV1(workspace_name=workspace_name, resource_group=resource_group_name, subscription_id=subscription_id) #################################### mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) # Set the active experiment, creating one if it doesn't exist mlflow.set_experiment(experiment_name) # Get Experiment Details experiment = mlflow.get_experiment_by_name(experiment_name) print("Experiment_id: {}".format(experiment.experiment_id)) print("Artifact Location: {}".format(experiment.artifact_location)) print("Tags: {}".format(experiment.tags)) print("Lifecycle_stage: {}".format(experiment.lifecycle_stage)) print("\nRegistry URI: {}".format(mlflow.get_registry_uri())) print("\nCurrent tracking uri: {}".format(mlflow.get_tracking_uri())) # + from azure.ml._restclient._2020_09_01_preview.machinelearningservices.models import GeneralSettings, LimitSettings, DataSettings, TrainingDataSettings, ValidationDataSettings, TrainingSettings from azure.ml._restclient._2020_09_01_preview.machinelearningservices.models._azure_machine_learning_workspaces_enums import TaskType, OptimizationMetric from azure.ml._schema.compute_binding import InternalComputeConfiguration from azure.ml.entities import AutoMLJob from azure.ml.entities.job.automl.forecasting import ForecastingSettings from azure.ml.entities.job.automl.featurization import FeaturizationSettings compute = InternalComputeConfiguration(target=cpu_cluster_name) general_settings = GeneralSettings(task_type=TaskType.FORECASTING, primary_metric= OptimizationMetric.NORMALIZED_ROOT_MEAN_SQUARED_ERROR, enable_model_explainability=True) # TODO: Seems like a bug here, max_trials=3 + max_concurrent_trials=4 seems to only trigger one child run limit_settings = LimitSettings(job_timeout=60, max_trials=4, max_concurrent_trials=4, enable_early_termination=False) # TODO: How can we reuse the 'data' object created above? 
training_data_settings = TrainingDataSettings(dataset_arm_id="train_dataset_beer:1", target_column_name="BeerProduction") validation_data_settings = ValidationDataSettings(n_cross_validations=5) data_settings = DataSettings(training_data=training_data_settings, validation_data=validation_data_settings) featurization_settings = FeaturizationSettings(featurization_config="auto") training_settings = TrainingSettings(enable_dnn_training=False) forecasting_settings = ForecastingSettings(country_or_region_for_holidays="US", forecast_horizon=12, target_rolling_window_size=0, time_column_name="DATE") ### get unique job name for repeated trials ### ### This can be skipped, in which case a random guid is generated for the job name import time job_name = "simplebeerjob{}".format(str(int(time.time()))) ################################################ extra_automl_settings = {"save_mlflow": True} automl_job = AutoMLJob( # name=job_name, compute=compute, general_settings=general_settings, limit_settings=limit_settings, data_settings=data_settings, forecasting_settings=forecasting_settings, training_settings=training_settings, featurization_settings=featurization_settings, properties=extra_automl_settings, ) ######## For loading directly from YAML ######## # from pathlib import Path # from azure.ml.entities import Job, AutoMLJob # job_path_yaml = Path("./automl_beer_job.yml") # automl_job = Job.load(job_path_yaml) automl_job # - # Submit job # TODO: There appears to be a bug here (repro: try executing this cell twice) created_job = client.jobs.create_or_update(automl_job) created_job # + # Get Studio URL, open in new tab print_studio_url(created_job) # Wait until the job is finished wait_for_completion(client, created_job) # Download logs + outputs locally download_outputs(client, created_job) # - # ## Code below currently doens't work # + from pprint import pprint from mlflow.tracking import MlflowClient from mlflow.entities import ViewType def print_model_info(models): import datetime import time for m in models: print("--") print("Name: {}".format(m.name)) print("Time Created: {}".format(m.creation_timestamp)) # print("description: {}".format(m.description)) mlflow_client = MlflowClient() experiment = mlflow_client.get_experiment_by_name(experiment_name) print(experiment) mlflow_client.list_run_infos(experiment.experiment_id, run_view_type=ViewType.ACTIVE_ONLY) dir(mlflow_client) mlflow_client.list_registered_models() # best_run = client.search_runs(experiment_ids=[experiment.id], filter_string="", run_vew_type=ViewType.ACTIVE_ONLY, max_results=1, order_by=[f"metrics.{primary_metric} DESC"])[0] # best_models = client.search_model_versions(f"name='{best_run.id}'") # for rm in client.list_registered_models(): # pprint(dict(rm), indent=4) # + # TODO: What's the API to get Experiment / id via SDK v2.0? from azureml.core.experiment import Experiment, ViewType experiment = Experiment(workspace=ws, name="3-automl-remote-compute-run") client = MlflowClient() print(client.list_registered_models()) print(dir(client)) best_run = client.search_runs( experiment_ids=[experiment.id], filter_string="", max_results=1, order_by=[f"metrics.{OptimizationMetric.NORMALIZED_ROOT_MEAN_SQUARED_ERROR} DESC"])[0] best_models = client.search_model_versions(f"name='simplebeerjob1620684744'") best_model = best_models[0] # we may store 1 or 2 models depending on how our API proposal goes. 
# If sklearn and onnx are flavors of the same model, this would only contain one, # if they are stored separately, we'll have 2 and we'll need to specify an aditional filter # the above is requiring us to name the model after the child run id, it should be achievable without that, # need to sync with some folks, but if getting that run's model isn't really supported, something like # the below would be convenient: model_filter = f"parent_run_id='simplebeerjob1620684744';sort_by_metric=\'{OptimizationMetric.NORMALIZED_ROOT_MEAN_SQUARED_ERROR}\'" models = client.list_registered_models(model_filter) best_model = models[0] # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Insert no SQLite usando Variaveis import sqlite3 import random import time import datetime # + # criando conexão conn = sqlite3.connect('dsa.db') # cursor c = conn.cursor() # criar tabela def create_table(): c.execute('CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, '\ 'prod_name TEXT, valor REAL)') # inserir linha def data_insert(): c.execute("INSERT INTO produtos VALUES('2018-05-02 12:34:45', 'Teclado', 130.00 )") conn.commit() c.close() conn.close() # usando variáveis para inserir dados def data_insert_var(): new_date = datetime.datetime.now() new_prod_name = 'Monitor' new_valor = random.randrange(50,100) c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES (?, ?, ?)", (new_date, new_prod_name, new_valor)) conn.commit() # - # gerando valores e inserindo na tabela for i in range(10): data_insert_var() time.sleep(1) c.close() conn.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from pyodesys.tests._robertson import get_ode_exprs import sympy as sp from sympy import symbols sp.init_printing() fj = [get_ode_exprs(reduced=reduced) for reduced in range(4)] t = symbols('t') y = A, B, C = symbols('A B C') inits = symbols('A0 B0 C0') p = symbols('k1 k2 k3') p1 = p2 = p3 = p + inits y1, y2, y3 = [B, C], [A, C], [A, B] f0, j0 = fj[0][0](t, y, p, backend=sp), fj[0][1](t, y, p, backend=sp) f0, sp.Matrix(j0) f1, j1 = fj[1][0](t, y1, p1, backend=sp), fj[1][1](t, y1, p1, backend=sp) f1, sp.Matrix(j1) f2, j2 = fj[2][0](t, y2, p2, backend=sp), fj[2][1](t, y2, p2, backend=sp) f2, sp.Matrix(j2) f3, j3 = fj[3][0](t, y3, p3, backend=sp), fj[3][1](t, y3, p3, backend=sp) f3, sp.Matrix(j3) diff1 = sp.Matrix(f1).jacobian(y1) - sp.Matrix(j1) diff1.simplify() diff1 diff2 = sp.Matrix(f2).jacobian(y2) - sp.Matrix(j2) diff2.simplify() diff2 diff3 = sp.Matrix(f3).jacobian(y3) - sp.Matrix(j3) diff3.simplify() diff3 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="B7YlhS4gYc8s" outputId="7ed228ec-718f-43ae-bf46-9511ead2581e" from google.colab import drive drive.mount('/content/drive') # + [markdown] id="FB7LuLHBlpvM" # # Clone the Repository # + colab={"base_uri": "https://localhost:8080/"} id="0d35c532" 
outputId="cb3b1d58-fdfb-48f5-9fa6-dadc8a12fc24" # !git clone https://github.com/i1idan/schizophrenia-diagnosis-eeg-signals.git # + colab={"base_uri": "https://localhost:8080/"} id="6X3Blwu5ZKWq" outputId="77e2207f-1941-4df9-d692-dd19d7811671" import os os.chdir('/content/schizophrenia-diagnosis-eeg-signals') # !git pull origin main # + [markdown] id="pfZzZofyZVB_" # # Install Requirements # + colab={"base_uri": "https://localhost:8080/"} id="f3uUKOYfZUiA" outputId="1e927faf-42ef-421a-f8e8-7d640bfb6b07" # !pip install -r requirements.txt # + [markdown] id="lCirzbPuZyuQ" # # Get Data # + colab={"base_uri": "https://localhost:8080/"} id="tsJQv-K5Z0k-" outputId="87b9103c-18b2-4af4-ab91-2797b4e663e3" # !gdown --id 1jnWHWrArzJQIvny0cQkfPP42hEJAp_56 # !mv DATA.mat /content/schizophrenia-diagnosis-eeg-signals/data/DATA.mat # + [markdown] id="4130c324" # # Training Models # + id="98c9e21c" # Number of training in a sequence multi_train = 10 epochs = 200 model_name = "WaveletCustom" data_path = "./data/DATA.mat" # checkpoints = "/content/drive/MyDrive/schizophrenia/checkpoints" checkpoints = "./checkpoints" batch_size = 4 early_stopping = 100 reduce_lr = 50 seed = 1234 # + colab={"base_uri": "https://localhost:8080/"} id="4bf6ef6b" outputId="87919b59-9c9b-4601-d6a1-c93a5e63b286" csv_files = [] for n in range(multi_train): dir_name = f"{n}" new_seed = seed + n # preserve reproducibility # !PYTHONHASHSEED=0 # !TF_DETERMINISTIC_OPS=0 # !TF_CUDNN_DETERMINISTIC=0 # !python train.py --model-name $model_name --epochs $epochs --seed $new_seed --dir-name $dir_name --checkpoints $checkpoints --batch-size $batch_size print(f"-----------------------------train {n} is done! ----------------------------------") # + [markdown] id="0977a1b7" # # Get Metrics # + id="34cf3098" outputId="d581a2c5-f802-4e46-ebdb-8f9220da1b11" from utils.group_metrics import get_mean_std import os csv_files = [os.path.join(checkpoints, model_name ,f"{n}", "log.csv") for n in range(multi_train)] metrics = get_mean_std(csv_files, arguments=("accuracy", "loss", "val_accuracy", "val_loss"), operators=(max, min, max, min) ) print(metrics) # + [markdown] id="be7e53a2" # # Get Sensitivity & Specificity # + from utils.group_metrics import get_conf_mean_std import os conf_matrixes = [os.path.join(checkpoints, model_name ,f"{n}", "conf_matrix.csv") for n in range(multi_train)] metrics = get_conf_mean_std(conf_matrixes) metrics # - # _:)_ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="MD4QHCipmmwb" #pip install lime # + [markdown] id="8Ehrsgnxmmwh" # ## Learning Objectives # # Today we will be covering the full ML pipeline in all of its glory, starting from good clean data (that is a big if) to the final predictions. # + [markdown] id="lfNsqgtWmmwk" # ## The full picture # # With cross validation we can show you the full picture of model building (after you have done the hard work of data munging). The magic that cross validation unlocks is twofold # # 1. It allow you to have more training data and therefore get better performance and more accurate representations of your performance # 2. It actually simplifies the process. You will no longer need to keep 3 sets of data and you can get by with just two. 
# # Let's get started: # + colab={"base_uri": "https://localhost:8080/"} id="bEDpd62Ymmwm" outputId="b1590fbd-b9b9-4b1b-9008-909f63aaf3d1" from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split boston_data = load_boston() # we make our test set X_train, X_test, y_train, y_test = train_test_split(boston_data['data'], boston_data['target'], test_size=0.2, random_state=1) # and we make our validation set X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1) # + [markdown] id="n54ReACZmmwn" # Our next step will be to define the model that we are looking at: # + id="BCtbh1pZmmwn" from sklearn.tree import DecisionTreeRegressor reg = DecisionTreeRegressor() # + [markdown] id="G89rhvJSmmwo" # Then we determine which parameters we would like to search over: # + id="93w6LJL3mmwp" params = { 'max_depth': range(2, 20, 2), 'min_samples_leaf': range(5, 25, 5) } # + [markdown] id="Gk4AVMzFmmwr" # And finally we use GridSearchCV which will search over the parameters doing cross validation to determine their performance: # + id="errL7RCQmmws" from sklearn.model_selection import GridSearchCV gs = GridSearchCV(reg, params, scoring='neg_mean_absolute_error') # + colab={"base_uri": "https://localhost:8080/"} id="GUjhKdo_mmwt" outputId="9fbfd86e-9d5e-4b39-ec16-d69df8037f3d" gs.fit(X_train, y_train) # + [markdown] id="5W3xw-JAmmwv" # We get a lot of goodies. We can see the best score and estimator: # + colab={"base_uri": "https://localhost:8080/"} id="dTXFh8vzmmww" outputId="aef08d0c-58e2-4b63-e60b-e167df9e8e01" gs.best_score_ # + colab={"base_uri": "https://localhost:8080/"} id="3AqFJcOOmmwx" outputId="a5d997b5-ee5e-4980-f21c-934fd761315d" gs.best_estimator_ # + [markdown] id="NwmsHXQCmmwy" # And we get to use the grid search object as that estimator as well: # + colab={"base_uri": "https://localhost:8080/"} id="Nad8IE67mmwz" outputId="8c83fbc4-102e-4fd9-aafe-bf8fe2425040" gs.predict(X_train[:5]) # + [markdown] id="HnC03GeCmmwz" # ## A note on hyperparam tuning # # Grid search might be becoming a bit old school in the next few years, with advancements like random search, hyperband, bayesian hyperparam search and more we might use a more advanced way to search through available params. That being said it is good to know and still widely used in ML. 
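# As a taste of one of those alternatives (a sketch, assuming scikit-learn and scipy are available), `RandomizedSearchCV` samples a fixed number of parameter settings instead of exhaustively trying every combination, while exposing the same fit/predict interface as `GridSearchCV`:
# +
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

# sample 10 random (max_depth, min_samples_leaf) pairs from the same ranges as above
param_dist = {
    'max_depth': randint(2, 20),
    'min_samples_leaf': randint(5, 25),
}
rs = RandomizedSearchCV(DecisionTreeRegressor(), param_dist, n_iter=10,
                        scoring='neg_mean_absolute_error', random_state=1)
rs.fit(X_train, y_train)  # reuses the training split created earlier in this notebook
rs.best_score_, rs.best_params_
# -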
# + [markdown] id="1La0GAO5mmw0" # ### Model Explainer # + colab={"base_uri": "https://localhost:8080/", "height": 339} id="uDkff6mAmmw0" outputId="beabd30a-b71c-4f59-f033-05f1760b4082" import lime import lime.lime_tabular import numpy as np # + id="OusDO-yZmmw1" categorical_features = np.argwhere(np.array([len(set(boston_data.data[:,x])) for x in range(boston_data.data.shape[1])]) <= 10).flatten() # + id="8JHolL61mmw2" explainer = lime.lime_tabular.LimeTabularExplainer(X_train, feature_names=boston_data.feature_names, class_names=['price'], categorical_features=categorical_features, verbose=True, mode='regression') # + id="V7aH35qgmmw2" i = 15 exp = explainer.explain_instance(X_test[i], gs.predict, num_features=5) # + id="Thb7-_Cemmw4" exp.show_in_notebook(show_table=True) # + id="-vSd7yDOmmw5" exp.as_list() # + id="8DhsVujFmmw6" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Velocity and Acceleration of a point of a rigid body # # > , # > [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab) # > Federal University of ABC, Brazil # + [markdown] slideshow={"slide_type": "skip"} # This notebook shows the expressions of the velocity and acceleration of a point on rigid body, given the angular velocity of the body. # + [markdown] slideshow={"slide_type": "slide"} # ## Frame of reference attached to a body # # The concept of reference frame in Biomechanics and motor control is very important and central to the understanding of human motion. For example, do we see, plan and control the movement of our hand with respect to reference frames within our body or in the environment we move? Or a combination of both? # The figure below, although derived for a robotic system, illustrates well the concept that we might have to deal with multiple coordinate systems. # #
# Figure. Multiple coordinate systems for use in robots (figure from Corke (2017)).
    # # For three-dimensional motion analysis in Biomechanics, we may use several different references frames for convenience and refer to them as global, laboratory, local, anatomical, or technical reference frames or coordinate systems (we will study this later). # There has been proposed different standardizations on how to define frame of references for the main segments and joints of the human body. For instance, the International Society of Biomechanics has a [page listing standardization proposals](https://isbweb.org/activities/standards) by its standardization committee and subcommittees: # + [markdown] slideshow={"slide_type": "slide"} # ## Position of a point on a rigid body # # The description of the position of a point P of a rotating rigid body is given by: # # # \begin{equation} # {\bf\vec{r}_{P/O}} = x_{P/O}^*{\bf\hat{i}'} + y_{P/O}^*{\bf\hat{j}'} # \end{equation} # # # where $x_{P/O}^*$ and $y_{P/O}^*$ are the coordinates of the point P position at a reference state with the versors described as: # # # \begin{equation} # {\bf\hat{i}'} = \cos(\theta){\bf\hat{i}}+\sin(\theta){\bf\hat{j}} # \end{equation} # # # # \begin{equation} # {\bf\hat{j}'} = -\sin(\theta){\bf\hat{i}}+\cos(\theta){\bf\hat{j}} # \end{equation} # # # # # # Note that the vector ${\bf\vec{r}_{P/O}}$ has always the same description for any point P of the rigid body when described as a linear combination of ${\bf\hat{i}'}$ and ${\bf\hat{j}'}$. # # + [markdown] slideshow={"slide_type": "slide"} # ## Translation of a rigid body # # Let's consider now the case in which, besides a rotation, a translation of the body happens. This situation is represented in the figure below. In this case, the position of the point P is given by: # # # \begin{equation} # {\bf\vec{r}_{P/O}} = {\bf\vec{r}_{A/O}}+{\bf\vec{r}_{P/A}}= {\bf\vec{r}_{A/O}}+x_{P/A}^*{\bf\hat{i}'} + y_{P/A}^*{\bf\hat{j}'} # \end{equation} # # # # + [markdown] slideshow={"slide_type": "slide"} # ## Angular velocity of a body # # The magnitude of the angular velocity of a rigid body rotating on a plane is defined as: # # # \begin{equation} # \omega = \frac{d\theta}{dt} # \end{equation} # # # Usually, it is defined an angular velocity vector perpendicular to the plane where the rotation occurs (in this case the x-y plane) and with magnitude $\omega$: # # # \begin{equation} # \vec{\bf{\omega}} = \omega\hat{\bf{k}} # \end{equation} # # # # # + [markdown] slideshow={"slide_type": "slide"} # ## Velocity of a point with no translation # + [markdown] slideshow={"slide_type": "slide"} # First we will consider the situation with no translation. The velocity of the point P is given by: # # # \begin{equation} # {\bf\vec{v}_{P/O}} = \frac{d{\bf\vec{r}_{P/O}}}{dt} = \frac{d(x_{P/O}^*{\bf\hat{i}'} + y_{P/O}^*{\bf\hat{j}'})}{dt} # \end{equation} # # + [markdown] slideshow={"slide_type": "slide"} # To continue this deduction, we have to find the expression of the derivatives of # ${\bf\hat{i}'}$ and ${\bf\hat{j}'}$. This is very similar to the derivative expressions of ${\bf\hat{e_R}}$ and ${\bf\hat{e_\theta}}$ of [polar basis](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/PolarBasis.ipynb). 
# # # \begin{equation} # \frac{d{\bf\hat{i}'}}{dt} = -\dot{\theta}\sin(\theta){\bf\hat{i}}+\dot{\theta}\cos(\theta){\bf\hat{j}} = \dot{\theta}{\bf\hat{j}'} # \end{equation} # # # # \begin{equation} # \frac{d{\bf\hat{j}'}}{dt} = -\dot{\theta}\cos(\theta){\bf\hat{i}}-\dot{\theta}\sin(\theta){\bf\hat{j}} = -\dot{\theta}{\bf\hat{i}'} # \end{equation} # # + [markdown] slideshow={"slide_type": "slide"} # Another way to represent the expressions above is by using the vector form to express the angular velocity $\dot{\theta}$. It is usual to represent the angular velocity as a vector in the direction ${\bf\hat{k}}$: ${\bf\vec{\omega}} = \dot{\theta}{\bf\hat{k}} = \omega{\bf\hat{k}}$. Using this definition of the angular velocity, we can write the above expressions as: # # # \begin{equation} # \frac{d{\bf\hat{i}'}}{dt} = \dot{\theta}{\bf\hat{j}'} = \dot{\theta} {\bf\hat{k}}\times {\bf\hat{i}'} = {\bf\vec{\omega}} \times {\bf\hat{i}'} # \end{equation} # # # # \begin{equation} # \frac{d{\bf\hat{j}'}}{dt} = -\dot{\theta}{\bf\hat{i}'} = \dot{\theta} {\bf\hat{k}}\times {\bf\hat{j}'} ={\bf\vec{\omega}} \times {\bf\hat{j}'} # \end{equation} # # + [markdown] slideshow={"slide_type": "slide"} # So, the velocity of the point P in the situation of no translation is: # # # \begin{equation} # {\bf\vec{v}_{P/O}} = \frac{d(x_{P/O}^*{\bf\hat{i}'} + y_{P/O}^*{\bf\hat{j}'})}{dt} = x_{P/O}^*\frac{d{\bf\hat{i}'}}{dt} + y_{P/O}^*\frac{d{\bf\hat{j}'}}{dt}=x_{P/O}^*{\bf\vec{\omega}} \times {\bf\hat{i}'} + y_{P/O}^*{\bf\vec{\omega}} \times {\bf\hat{j}'} = {\bf\vec{\omega}} \times \left(x_{P/O}^*{\bf\hat{i}'}\right) + {\bf\vec{\omega}} \times \left(y_{P/O}^*{\bf\hat{j}'}\right) ={\bf\vec{\omega}} \times \left(x_{P/O}^*{\bf\hat{i}'}+y_{P/O}^*{\bf\hat{j}'}\right) # \end{equation} # # # # \begin{equation} # {\bf\vec{v}_{P/O}} = {\bf\vec{\omega}} \times {\bf{\vec{r}_{P/O}}} # \end{equation} # # # This expression shows that the velocity vector of any point of a rigid body is orthogonal to the vector linking the point O and the point P. # # It is worth to note that despite the above expression was deduced for a planar movement, the expression above is general, including three dimensional movements. # + [markdown] slideshow={"slide_type": "slide"} # ## Relative velocity of a point on a rigid body to another point # # To compute the velocity of a point on a rigid body that is translating, we need to find the expression of the velocity of a point (P) in relation to another point on the body (A). 
So: # # # \begin{equation} # {\bf\vec{v}_{P/A}} = {\bf\vec{v}_{P/O}}-{\bf\vec{v}_{A/O}} = {\bf\vec{\omega}} \times {\bf{\vec{r}_{P/O}}} - {\bf\vec{\omega}} \times {\bf{\vec{r}_{A/O}}} = {\bf\vec{\omega}} \times ({\bf{\vec{r}_{P/O}}}-{\bf{\vec{r}_{A/O}}}) = {\bf\vec{\omega}} \times {\bf{\vec{r}_{P/A}}} # \end{equation} # # # # + [markdown] slideshow={"slide_type": "slide"} # ## Velocity of a point on rigid body translating # # The velocity of a point on a rigid body that is translating is given by: # # # \begin{equation} # {\bf\vec{v}_{P/O}} = \frac{d{\bf\vec{r}_{P/O}}}{dt} = \frac{d({\bf\vec{r}_{A/O}}+x_{P/A}^*{\bf\hat{i}'} + y_{P/A}^*{\bf\hat{j}'})}{dt} = \frac{d{\bf\vec{r}_{A/O}}}{dt}+\frac{d(x_{P/A}^*{\bf\hat{i}'} + y_{P/A}^*{\bf\hat{j}'})}{dt} = {\bf\vec{v}_{A/O}} + {\bf\vec{\omega}} \times {\bf{\vec{r}_{P/A}}} # \end{equation} # # + [markdown] slideshow={"slide_type": "slide"} # Below is an example of a body rotating with the angular velocity of $\omega = \pi/10$ rad/s and translating at the velocity of # ${\bf\vec{v}} = 0.7 {\bf\hat{i}} + 0.5 {\bf\hat{j}}$ m/s. The red arrow indicates the velocity of the geometric center of the body and the blue arrow indicates the velocity of the lower point of the body # + slideshow={"slide_type": "slide"} import numpy as np import matplotlib.pyplot as plt # %matplotlib notebook from matplotlib.animation import FuncAnimation from matplotlib.patches import FancyArrowPatch t = np.linspace(0,13,10) omega = np.pi/10 #[rad/s] voa = np.array([[0.7],[0.5]]) # velocity of center of mass fig = plt.figure() plt.grid() ax = fig.add_axes([0, 0, 1, 1]) ax.axis("on") plt.rcParams['figure.figsize']=5,5 def run(i): ax.clear() theta = omega * t[i] phi = np.linspace(0,2*np.pi,100) B = np.squeeze(np.array([[2*np.cos(phi)],[6*np.sin(phi)]])) Baum = np.vstack((B,np.ones((1,np.shape(B)[1])))) roa = voa * t[i] R = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]]) T = np.vstack((np.hstack((R,roa)), np.array([0,0,1]))) BRot = R@B BRotTr = T@Baum plt.plot(BRotTr[0,:],BRotTr[1,:], roa[0], roa[1],'.') plt.fill(BRotTr[0,:],BRotTr[1,:], 'g') vVoa = FancyArrowPatch(np.array([float(roa[0]), float(roa[1])]), np.array([float(roa[0]+5*voa[0]), float(roa[1]+5*voa[1])]), mutation_scale=20, lw=2, arrowstyle="->", color="r", alpha=1) ax.add_artist(vVoa) element = 75 Vp = np.array([voa[0]-omega*BRot[1,element], voa[1]+omega*BRot[0,element]]) vVP = FancyArrowPatch(np.array([float(BRotTr[0,element]), float(BRotTr[1,element])]), np.array([float(BRotTr[0,element]+5*Vp[0]), float(BRotTr[1,element]+5*Vp[1])]), mutation_scale=20, lw=2, arrowstyle="->", color="b", alpha=1) ax.add_artist(vVP) plt.xlim((-10, 20)) plt.ylim((-10, 20)) ani = FuncAnimation(fig, run, frames = 50,repeat=False, interval =500) plt.show() # + [markdown] slideshow={"slide_type": "slide"} # ## Acceleration of a point on a rigid body # # The acceleration of a point on a rigid body is obtained by deriving the previous expression: # # # \begin{equation} # {\bf\vec{a}_{P/O}} = {\bf\vec{a}_{A/O}} + \dot{\bf\vec{\omega}} \times {\bf{\vec{r}_{P/A}}} + {\bf\vec{\omega}} \times {\bf{\vec{v}_{P/A}}} = {\bf\vec{a}_{A/O}} + \dot{\bf\vec{\omega}} \times {\bf{\vec{r}_{P/A}}} + {\bf\vec{\omega}} \times ({\bf\vec{\omega}} \times {\bf{\vec{r}_{P/A}}}) = {\bf\vec{a}_{A/O}} + \ddot{\theta}\bf\hat{k} \times {\bf{\vec{r}_{P/A}}} - \dot{\theta}^2{\bf{\vec{r}_{P/A}}} # \end{equation} # # # The acceleration has three terms: # # - ${\bf\vec{a}_{A/O}}$ -- the acceleration of the point O. 
# - $\ddot{\theta}\bf\hat{k} \times {\bf{\vec{r}_{P/A}}}$ -- the acceleration of the point P due to the angular acceleratkion of the body. # - $- \dot{\theta}^2{\bf{\vec{r}_{P/A}}}$ -- the acceleration of the point P due to the angular velocity of the body. It is known as centripetal acceleration. # # # # Below is an example of a rigid body with an angular acceleration of $\alpha = \pi/150$ rad/s$^2$ and initial angular velocity of $\omega_0 = \pi/100$ rad/s. Consider also that the center of the body accelerates with ${\bf\vec{a}} = 0.01{\bf\hat{i}} + 0.05{\bf\hat{j}}$, starting from rest. # # + slideshow={"slide_type": "slide"} t = np.linspace(0, 20, 40) alpha = np.pi/150 #[rad/s^2] angular acceleration omega0 = np.pi/100 #[rad/s] angular velocity aoa = np.array([[0.01],[0.05]]) # linear acceleration fig = plt.figure() plt.grid() ax = fig.add_axes([0, 0, 1, 1]) ax.axis("on") plt.rcParams['figure.figsize']=5,5 theta = 0 omega = 0 def run(i): ax.clear() phi = np.linspace(0,2*np.pi,100) B = np.squeeze(np.array([[2*np.cos(phi)],[6*np.sin(phi)]])) Baum = np.vstack((B,np.ones((1,np.shape(B)[1])))) omega = alpha*t[i]+omega0 #[rad/s] angular velocity theta = alpha/2*t[i]**2 + omega0*t[i] # [rad] angle voa = aoa*t[i] # linear velocity roa = aoa/2*t[i]**2 # position of the center of the body R = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]]) T = np.vstack((np.hstack((R,roa)), np.array([0,0,1]))) BRot = R@B BRotTr = T@Baum plt.plot(BRotTr[0,:],BRotTr[1,:], roa[0], roa[1],'.') plt.fill(BRotTr[0,:],BRotTr[1,:],'g') element = 75 ap = np.array([aoa[0] - alpha*BRot[1,element] - omega**2*BRot[0,element], aoa[1] + alpha*BRot[0,element] - omega**2*BRot[1,element]]) vVP = FancyArrowPatch(np.array([float(BRotTr[0,element]), float(BRotTr[1,element])]), np.array([float(BRotTr[0,element]+5*ap[0]), float(BRotTr[1,element]+5*ap[1])]), mutation_scale=20, lw=2, arrowstyle="->", color="b", alpha=1) ax.add_artist(vVP) plt.xlim((-10, 20)) plt.ylim((-10, 20)) ani = FuncAnimation(fig, run, frames=50,repeat=False, interval=500) plt.show() # + [markdown] slideshow={"slide_type": "slide"} # ## Problems # # 1. Solve the problems 16.2.5, 16.2.10, 16.2.11 and 16.2.20 from [Ruina and Rudra's book](http://ruina.tam.cornell.edu/Book/index.html). # 2. Solve the problems 17.1.2, 17.1.8, 17.1.9, 17.1.10, 17.1.11 and 17.1.12 from [Ruina and Rudra's book](http://ruina.tam.cornell.edu/Book/index.html). # # + [markdown] slideshow={"slide_type": "slide"} # ## Reference # # - , (2019) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. # - (2017) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). 2nd ed. Springer-Verlag Berlin. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: healpyvenv # language: python # name: healpyvenv # --- # # Unit conversion with broadband detectors looking at the CMB # - categories: [cmbs4, pysm, simonsobs] import healpy as hp import numpy as np import matplotlib.pyplot as plt from pysm3 import units as u import pysm3 as pysm # %matplotlib inline dip = hp.synfast([0,1], lmax=1, nside=128) * u.V hp.mollview(dip, unit=dip.unit) # We measure the sky with out broadband instrument, we assume we only measure the CMB solar dipole, # initially the units are arbitrary, for example Volts of our instrument. 
# # Next we calibrate on the solar dipole, which is known to be 3.3 mK. calibration_factor = 2 * 3.3 * u.mK_CMB / (dip.max() - dip.min()) calibration_factor calibrated_dip = calibration_factor * dip calibrated_dip hp.mollview(calibrated_dip, unit=calibrated_dip.unit) # First we simplify and consider a delta-frequency instrument at 300 GHz center_frequency = 300 * u.GHz dip_peak = calibrated_dip.max() calibrated_dip.max() calibrated_dip.max().to(u.mK_RJ, equivalencies=u.cmb_equivalencies(center_frequency)) calibrated_dip.max().to(u.MJy/u.sr, equivalencies=u.cmb_equivalencies(center_frequency)) # Next we assume instead that we have a broadband instrument, of 20% bandwidth, with uniform response in that range. # For simplicity, we only take 4 points. freq = [270, 290, 310, 330] * u.GHz weights = [1, 1, 1, 1] weights /= np.trapz(weights, freq) weights # The instrument bandpass is defined in power so we can transform our signal in MJy/sr at the 4 reference frequencies, # then integrate. dip_peak_MJysr = dip_peak.to(u.MJy/u.sr, equivalencies=u.cmb_equivalencies(freq)) dip_peak_MJysr integrated_SR = np.trapz(dip_peak_MJysr * weights, freq) integrated_SR # This is different than assuming uniform bandpass in $K_{CMB}$, where instead we would recover the same result of the delta-bandpass: np.trapz(dip_peak * weights, freq) SR = u.MJy/u.sr # We use the PySM 3 function to compute unit conversion given a bandpass # # $ \tilde{I}[unit_{out}] = \tilde{I}[unit_{in}] \dfrac{ \int C_{unit_{in}}^{Jy~sr^{-1}}(\nu) g(\nu) d\nu} { \int C_{unit_{out}}^{Jy~sr^{-1}}(\nu) g(\nu) d\nu} $ # # which comes from equating in power: # # $ \tilde{I}[unit_{out}]{ \int C_{unit_{out}}^{Jy~sr^{-1}}(\nu) g(\nu) d\nu} = \tilde{I}[unit_{in}]\int C_{unit_{in}}^{Jy~sr^{-1}}(\nu) g(\nu) d\nu $ SR pysm.utils.bandpass_unit_conversion(freq, weights=weights, output_unit=u.mK_CMB, input_unit=SR) integrated_SR * _ 1 * u.mK_CMB / (1 * SR) # We can doublecheck the implementation of the PySM function by executing it here: K_CMB_to_MJysr = ((1*SR).to(u.mK_CMB, equivalencies=u.cmb_equivalencies(freq)))/(1*SR) K_CMB_to_MJysr # Integrating the `K_CMB_to_MJysr` conversion factor is wrong, we always need to do the integral in power, # therefore we integrate the inverse and then take its inverse. np.trapz(K_CMB_to_MJysr * weights, freq) 1/np.trapz(1/K_CMB_to_MJysr * weights, freq) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Coding Exercise #0301 # ### 1. Discrete probability distribution: import scipy.stats as st import matplotlib.pyplot as plt import numpy as np # %matplotlib inline # #### 1.1. Binomial: # Sample size and the p parameter. n=10 p=0.5 # Probability distribution at x = 5. st.binom.pmf(5, n, p) # Quantile at alpha = 0.3 # More about quantiles in the Unit III. st.binom.ppf(0.3, n, p) # P(0 <= x <= 5) st.binom.cdf(5,n,p) # P(3 <= x <=7) st.binom.cdf(7,n,p)-st.binom.cdf(2,n,p) # Visualizing the probability distribution. x=np.arange(0,11) plt.scatter(x, st.binom.pmf(x,n,p),color='red') plt.show() # #### 1.2. Poisson: # The lambda parameter. lamb = 2 # Probability distribution at x = 2. st.poisson.pmf(2,lamb) # Quantile at alpha = 0.5 # More about quantiles in the Unit III. st.poisson.ppf(0.5,lamb) # P(0 <= x <= 5) st.poisson.cdf(5,lamb) # P(3 <= x <=7) st.poisson.cdf(7,lamb)-st.poisson.cdf(2,lamb) # Visualizing the probability distribution. 
x=np.arange(0,11) plt.scatter(x, st.poisson.pmf(x,lamb),color='green') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Build Week 4 (Python3) # language: python # name: bw4 # --- import matplotlib.pyplot as plt import numpy as np import pandas as pd df = pd.read_csv('../data/merged_data.csv') df.head() # + id_a = '06w9JimcZu16KyO3WXR459' id_b = '6XzyAQs2jU4PMWmnFlEQLW' track_a = df[df['track_id'] == id_a] track_b = df[df['track_id'] == id_b] artist_a = track_a['artist_name'].values[0] artist_b = track_b['artist_name'].values[0] name_a = track_a['track_name'].values[0] name_b = track_b['track_name'].values[0] label_a = f'{artist_a} - {name_a}'[:30] label_b = f'{artist_b} - {name_b}'[:30] # - track_df = pd.concat([track_a, track_b]).drop(columns=['duration_ms', 'popularity']) vis_labels = [label_a, label_b] # + labels = [ 'acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'valence' ] num_vals = len(labels) angles = [n / float(num_vals) * 2 * np.pi for n in range(num_vals)] angles += angles[:1] # make cyclic to connect vertices in polygon # Set figure settings fig, ax = plt.subplots(figsize=(6, 6), subplot_kw=dict(polar=True)) ax.set_theta_offset(np.pi / 2) ax.set_theta_direction(-1) ax.set_thetagrids(np.degrees(angles), labels) ax.set_rlabel_position(0) ax.set_yticks([0.20, 0.40, 0.60, 0.80]) ax.set_yticklabels(['0.20', '0.40', '0.60', '0.80']) ax.set_ylim(0, 1) # Plot and fill the radar polygons feature_df = track_df[labels].reset_index().drop(columns=['index']) colors = ['#EF019F', '#780150'] for i, color in enumerate(colors): values = feature_df.loc[i].values.flatten().tolist() values += values[:1] # make cyclic to connect vertices in polygon ax.plot( angles, values, color=color, linewidth=1, linestyle='solid', label=vis_labels[i] ) ax.fill(angles, values, color=color, alpha=0.25) # Set feature labels so they don't overlap the chart for label, angle in zip(ax.get_xticklabels(), angles): if angle in [0, np.pi]: label.set_horizontalalignment('center') elif 0 < angle < np.pi: label.set_horizontalalignment('left') else: label.set_horizontalalignment('right') ax.legend(loc='best') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Dans ce projet, nous allons analyser l'évolution du Covid 19 dans le monde # Source des données: https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide # # Voici quelques questions qui peuvent guider notre analyse # * Quels sont les pays avec les plus grand nombres de cas ? # * Quels sont les pays avec un taux de mortalité élevé ? # * Le confinement a-t-il eu un effet sur le nombre de cas ? 
# * Comparer la situation par continent # # Importer les données import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # !pip install xlrd==1.2.0 df=pd.read_excel('datasets/COVID-19-geographic-disbtribution-worldwide-2020-12-14.xlsx') df.head() df.info() df.describe() df.columns # Calculating the percentage of null values: # CODE: df.isna().sum().sum()/len(df) df.dropna(inplace=True) # # Pays avec les plus grand nombres de cas df.columns df_by_country=df.groupby('countriesAndTerritories')['cases', 'deaths'].sum().sort_values('cases',ascending=False) df_by_country # # Pays avec Taux de mortalité élevé df_by_country['mortality_rate']=df_by_country['deaths']/df_by_country['cases'] #Sorting the values for the mortality rate in the descending order plt.figure(figsize=(15,10)) ax = df_by_country['mortality_rate'].sort_values(ascending=False).head(20).plot.bar() ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha="right") ax.set_xlabel("Country") ax.set_ylabel("Mortality rate") ax.set_title("Countries with highest mortality rates") # # Pays avec le plus de morts #sorting the number of deaths in the descending order plt.figure(figsize=(10,6)) ax=df_by_country['deaths'].sort_values(ascending=False).head(5).plot(kind='bar') ax.set_xticklabels(ax.get_xticklabels(), rotation=45) ax.set_title("Countries suffering the most fatalities from COVID-19") ax.set_xlabel("Countries") ax.set_ylabel("Number of deaths") # # Effet du confinement sur le nombre de cas df_by_month = df.groupby('month')['cases','deaths'].sum() df_by_month.head() # + fig=plt.figure(figsize=(15,10)) ax1=fig.add_subplot(1,2,1) ax2=fig.add_subplot(1,2,2) df_by_month['cases'].plot(kind='line',ax=ax1) ax1.set_title("Nombre total de cas de covid par mois") ax1.set_xlabel("Mois") ax1.set_ylabel("Nombre de cas") df_by_month['deaths'].plot(kind='line',ax=ax2) ax2.set_title("Nombre total de morts par mois") ax2.set_xlabel("Mois") ax2.set_ylabel("Nombre de morts") # - # Quand on regarde de façon globale on a pas l'impression d'une diminution. Essayons de voir le cas de certains pays en particulier, qui ont un confinement strict # + df_germany = df[df.countriesAndTerritories == 'Germany'] df_germany_monthwise = df_germany.groupby('month')['cases','deaths'].sum() df_germany_grouped = df_germany_monthwise.reset_index() df_uk = df[df.countriesAndTerritories == 'United_Kingdom'] df_uk_monthwise = df_uk.groupby('month')['cases','deaths'].sum() df_uk_grouped = df_uk_monthwise.reset_index() df_france = df[df.countriesAndTerritories == 'France'] df_france_monthwise = df_france.groupby('month')['cases','deaths'].sum() df_france_grouped = df_france_monthwise.reset_index() df_italy = df[df.countriesAndTerritories == 'Italy'] df_italy_monthwise = df_italy.groupby('month')['cases','deaths'].sum() df_italy_grouped = df_italy_monthwise.reset_index() # + fig=plt.figure(figsize=(20,15)) ax1=fig.add_subplot(2,2,1) df_uk_grouped.plot(kind='line',x='month',y='cases',ax=ax1) ax1.set_title("Evolution du covid en UK") ax2=fig.add_subplot(2,2,2) df_france_grouped.plot(kind='line',x='month',y='cases',ax=ax2) ax2.set_title("Evolution du covid en France") ax3=fig.add_subplot(2,2,3) df_italy_grouped.plot(kind='line',x='month',y='cases',ax=ax3) ax3.set_title("Evolution du covid en Italy") ax4=fig.add_subplot(2,2,4) df_germany_grouped.plot(kind='line',x='month',y='cases',ax=ax4) ax4.set_title("Evolution du covid en Allemagne") plt.show() # - # # Situation par continent ? 
df.columns df['continentExp'].value_counts() df.groupby('continentExp')['cases', 'deaths'].sum().sort_values('cases') df.groupby('continentExp')['cases', 'deaths'].sum().sort_values('deaths') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Visualizing Iris Dataset using PCA and t-SNE # This notebook is based on [The Iris Dataset](http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html#sphx-glr-auto-examples-datasets-plot-iris-dataset-py) example from scikit-learn website # + # %matplotlib inline import pandas as pd from sklearn.datasets import load_iris from sklearn.decomposition import PCA from sklearn.manifold import TSNE import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from mpl_toolkits.mplot3d import Axes3D import seaborn as sns # - sns.set() sns.set(rc={"figure.figsize": (10, 8)}) PALETTE = sns.color_palette('deep', n_colors=3) CMAP = ListedColormap(PALETTE.as_hex()) RANDOM_STATE = 42 # ## Loading the dataset dataset = load_iris() features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'] target = 'species' iris = pd.DataFrame( dataset.data, columns=features) iris[target] = dataset.target iris.head() # ## Defining plotting functions def plot_iris_2d(x, y, title, xlabel="1st eigenvector", ylabel="2nd eigenvector"): sns.set_style("darkgrid") plt.scatter(x, y, c=iris['species'], cmap=CMAP, s=70) plt.title(title, fontsize=20, y=1.03) plt.xlabel(xlabel, fontsize=16) plt.ylabel(ylabel, fontsize=16) def plot_iris_3d(x, y, z, title): sns.set_style('whitegrid') fig = plt.figure(1, figsize=(8, 6)) ax = Axes3D(fig, elev=-150, azim=110) ax.scatter(x, y, z, c=iris['species'], cmap=CMAP, s=40) ax.set_title(title, fontsize=20, y=1.03) fsize = 14 ax.set_xlabel("1st eigenvector", fontsize=fsize) ax.set_ylabel("2nd eigenvector", fontsize=fsize) ax.set_zlabel("3rd eigenvector", fontsize=fsize) ax.w_xaxis.set_ticklabels([]) ax.w_yaxis.set_ticklabels([]) ax.w_zaxis.set_ticklabels([]) # ## Plotting first two components plot_iris_2d( x = iris['sepal_length'], y = iris['sepal_width'], title = 'Plotting first two components', xlabel = 'Sepal length', ylabel = 'Sepal width') # ## 2D Plotting with PCA pca = PCA(n_components=2) points = pca.fit_transform(iris[features]) plot_iris_2d( x = points[:,0], y = points[:,1], title = 'Iris dataset visualized with PCA') # ## 2D plotting with t-SNE tsne = TSNE(n_components=2, n_iter=1000, random_state=RANDOM_STATE) points = tsne.fit_transform(iris[features]) plot_iris_2d( x = points[:, 0], y = points[:, 1], title = 'Iris dataset visualized with t-SNE') # ## 3D plotting with PCA pca = PCA(n_components=3) points = pca.fit_transform(iris[features]) plot_iris_3d( x = points[:,0], y = points[:,1], z = points[:,2], title = "Iris dataset visualized with PCA") # ## 3D plotting with t-SNE tsne = TSNE(n_components=3, n_iter=5000, random_state=RANDOM_STATE) points = tsne.fit_transform(iris[features]) plot_iris_3d( x = points[:,0], y = points[:,1], z = points[:,2], title = "Iris dataset visualized with t-SNE") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Aprendizaje Automático y Big Data # ## Práctica 2 : Regresión Logistica # # y # # # ### PARTE 1 # ### Regresión 
logística # ### 1.1. Visualización de los datos # + import numpy as np from pandas.io.parsers import read_csv import matplotlib.pyplot as plt from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter from mpl_toolkits.mplot3d import Axes3D import scipy.optimize as opt def carga_csv(file_name): valores = read_csv(file_name, header=None).to_numpy() return valores.astype(float) # - # ### 1.1. Visualización de los datos def draw_graph(file_name, labels = ['y = 1', 'y = 0']): datos = carga_csv(file_name) X = datos[:,:-1] Y = datos[:,-1] # Obtiene un vector con los índices de los ejemplos positivos pos = np.where (Y == 1) posn = np.where (Y == 0) # Dibuja los ejemplos positivos plt.scatter(X[pos, 0], X[pos, 1], marker='+', c='k', label = labels[0]) plt.scatter(X[posn, 0], X[posn, 1], marker='o', c='y', label = labels[1]) plt.legend(loc='upper right') plt.show() draw_graph("ex2data1.csv", ['Admited', 'Not admited']) # ### 1.2. Función sigmoide def sigmoide(Z): sigmoide = 1 / (1 + np.exp(-Z)) return sigmoide # ### 1.3. Cálculo de la función de coste y su gradiente def normalizar(X): mu = np.mean(X, axis=0) sigma = np.std(X, axis=0) X_norm = (X-mu)/sigma return(X_norm, mu, sigma) def coste_logistico(T, X, Y): G = sigmoide(np.dot(X, T)) #falla porq G es todo unos no se muy bien porq coste = (- 1 / (len(X))) * (np.dot(Y, np.log(G)) + np.dot((1 - Y), np.log(1 - G))) return coste def gradiente_logistico(T, X, Y): m = len(Y) G = sigmoide(np.matmul(X, T)) Dif = (np.transpose(G) - Y) gradiente = (1/m)*np.dot(Dif, X) return gradiente def descenso_gradiente(X, Y): n = len(Y) X_2 = np.hstack([np.ones([n,1]), X]) X_nor, mu, s = normalizar(X) X_nor2 = np.hstack([np.ones([n,1]), X_nor]) m = np.shape(X_2)[1] T = np.zeros([m,1]) T = gradiente_logistico(T, X_nor2, Y) c = coste_logistico(T[0], X_nor2, Y) return T[0] , c # + def main2(): datos = carga_csv("ex2data1.csv") X = datos[:,:-1] Y = datos[:,-1] P , c = descenso_gradiente(X,Y) print(P, c) main2() # - # ### 1.4. Cálculo del valor óptimo de los parámetros def optimiza(): datos = carga_csv("ex2data1.csv") X = datos[:,:-1] Y = datos[:,-1] n = len(Y) X = np.hstack([np.ones([n,1]), X]) m = np.shape(X)[1] T = np.zeros([m,1]) result = opt.fmin_tnc(func=coste_logistico, x0=T, fprime = gradiente_logistico,args=(X, Y)) theta_opt = result[0] return theta_opt optimiza() def draw_graph(file_name, labels = ['y = 1', 'y = 0'], line = False, Theta = []): datos = carga_csv(file_name) X = datos[:,:-1] Y = datos[:,-1] # Obtiene un vector con los índices de los ejemplos positivos pos = np.where (Y == 1) posn = np.where (Y == 0) # Dibuja los ejemplos positivos plt.scatter(X[pos, 0], X[pos, 1], marker='+', c='k', label = labels[0]) plt.scatter(X[posn, 0], X[posn, 1], marker='o', c='y', label = labels[1]) plt.legend(loc='upper right') if(line): pinta_frontera_recta(X, Y, Theta, plt) plt.show() def frontera(X, Y, theta, plt): x1_min, x1_max = X[:, 0].min(), X[:, 0].max() x2_min, x2_max = X[:, 1].min(), X[:, 1].max() x1, x2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max)) h = sigmoide(np.c_[np.ones((x1.ravel().shape[0], 1)), x1.ravel(), x2.ravel()].dot(theta)) h = h.reshape(x1.shape) plt.contour(x1, x2, h, [0.5], linewidths=1, colors='b') draw_graph("ex2data1.csv", ['Admited', 'Not admited'], line= True, Theta = optimiza()) # ### 1.5. 
Evaluación de la regresión logística def evaluacion(): datos = carga_csv("ex2data1.csv") X = datos[:,:-1] Y = datos[:,-1] n = len(Y) X = np.hstack([np.ones([n,1]), X]) Theta = optimiza() unos = sigmoide(np.matmul(X,Theta)) >= 5 compara = unos == Y porcentaje = sum(compara) / n return porcentaje evaluacion() # mal por lo mismo coste y # ### PARTE 2 # ### Regresión logística regularizada draw_graph("ex2data2.csv") # ### 2.1. Mapeo de los atributos import sklearn.preprocessing as pr def plot_decisionboundary(X, Y, theta, poly): plt.figure() x1_min, x1_max = X[:, 0].min(), X[:, 0].max() x2_min, x2_max = X[:, 1].min(), X[:, 1].max() xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max)) h = sigmoide(poly.fit_transform(np.c_[xx1.ravel(), xx2.ravel()]).dot(theta)) h = h.reshape(xx1.shape) plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors='g') plt.savefig("boundary.pdf") plt.close() datos = carga_csv("ex2data1.csv") X = datos[:,:-1] Y = datos[:,-1] n = len(Y) X = np.hstack([np.ones([n,1]), X]) m = np.shape(X)[1] T = np.ones([6,1]) plot_decisionboundary(X, Y, T, pr.PolynomialFeatures(2)) # ### 2.2. Cálculo de la función de coste y su gradiente def coste_regularizado(T, X, Y, landa): G = sigmoide(np.dot(X, T)) #falla porq G es todo unos no se muy bien porq coste = (- 1 / (len(X))) * (np.dot(Y, np.log(G)) + np.dot((1 - Y), np.log(1 - G))) + (landa / 2*(len(X))) return coste def gradiente_regularizado(T, X, Y, landa): m = len(Y) G = sigmoide(np.matmul(X, T)) Dif = (np.transpose(G) - Y) gradiente = (1/m)*np.dot(Dif, X) + (landa/m)*T[j] return gradiente # ### 2.3. Cálculo del valor óptimo de los parámetros def optimiza(): datos = carga_csv("ex2data1.csv") X = datos[:,:-1] Y = datos[:,-1] n = len(Y) X = np.hstack([np.ones([n,1]), X]) m = np.shape(X)[1] T = np.zeros([m,1]) ############################################## ######## Falta el mapeo de atributos ######### ############################################## result = opt.fmin_tnc(func=coste_regularizado, x0=T, fprime = gradiente_regularizado,args=(X, Y)) theta_opt = result[0] return theta_opt # ### 2..4. Efectos de la regularización # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ⊕ [7. Extracting Information from Text](http://www.nltk.org/book/ch07.html) # # + import nltk sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")] grammar = "NP: {
    ?*}" cp = nltk.RegexpParser(grammar) result = cp.parse(sentence) print(result) # + import os from IPython.display import Image, display from nltk.draw import TreeWidget from nltk.draw.util import CanvasFrame # ⊕ [How do you make NLTK draw() trees that are inline in iPython/Jupyter - Stack Overflow](https://stackoverflow.com/questions/31779707/how-do-you-make-nltk-draw-trees-that-are-inline-in-ipython-jupyter) def jupyter_draw_nltk_tree(tree): cf = CanvasFrame() tc = TreeWidget(cf.canvas(), tree) tc['node_font'] = 'arial 13 bold' tc['leaf_font'] = 'arial 14' tc['node_color'] = '#005990' tc['leaf_color'] = '#3F8F57' tc['line_color'] = '#175252' cf.add_widget(tc, 10, 10) cf.print_to_file('tmp_tree_output.ps') cf.destroy() os.system('convert tmp_tree_output.ps tmp_tree_output.png') display(Image(filename='tmp_tree_output.png')) os.system('rm tmp_tree_output.ps tmp_tree_output.png') # + import konlpy import nltk # POS tag a sentence sentence = u'만 6세 이하의 초등학교 취학 전 자녀를 양육하기 위해서는' words = konlpy.tag.Okt().pos(sentence) # Define a chunk grammar, or chunking rules, then chunk grammar = """ NP: {*?} # Noun phrase VP: {*} # Verb phrase AP: {*} # Adjective phrase """ parser = nltk.RegexpParser(grammar) chunks = parser.parse(words) print("# Print whole tree") print(chunks.pprint()) print("\n# Print noun phrases only") for subtree in chunks.subtrees(): if subtree.label()=='NP': print(' '.join((e[0] for e in list(subtree)))) print(subtree.pprint()) # - # Display the chunk tree # chunks.draw() # jupyter_draw_nltk_tree(chunks) display(chunks) # + from nltk.tree import Tree # from IPython.display import display tree = Tree.fromstring('(S (NP this tree) (VP (V is) (AdjP pretty)))') display(tree) # - display(chunks) parser = nltk.RegexpParser(r'NP: {<[NJ].*>+}') tree = parser.parse(nltk.corpus.brown.tagged_sents()[0]) display(tree) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- from pulp import * L = 4 W = 2 H = 2 n = 8 l = [1, 1, 1, 1, 2, 2, 2, 2, ] w = [1, 1, 1, 1, 1, 1, 1, 1, ] h = [2, 2, 2, 2, 1, 1, 1, 1, ] M = max([L, W, H]) m = LpProblem(sense=LpMinimize) x = [ LpVariable("x%d" %i, lowBound=0, upBound=L, cat=LpInteger) for i in range(n) ] y = [ LpVariable("y%d" %i, lowBound=0, upBound=M, cat=LpInteger) for i in range(n) ] z = [ LpVariable("z%d" %i, lowBound=0, upBound=H, cat=LpInteger) for i in range(n) ] a = [[LpVariable("a%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] b = [[LpVariable("b%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] c = [[LpVariable("c%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] L = LpVariable("L") m += L for i in range(n): for j in range(n): m += x[i] + l[i] <= x[j] + M * (1 - a[i][j]) m += y[i] + w[i] <= y[j] + M * (1 - b[i][j]) m += z[i] + h[i] <= z[j] + M * (1 - c[i][j]) if i < j: m += a[i][j] + a[j][i] + b[i][j] + b[j][i] + c[i][j] + c[j][i] == 1 for i in range(n): m += x[i] <= L - l[i] m += y[i] <= W - w[i] m += z[i] <= H - h[i] # %%time m.solve(PULP_CBC_CMD(threads=4)) from pulp import * # L = 4 # W = 2 # H = 2 # n = 8 # l = [1, 1, 1, 1, 2, 2, 2, 2, ] # w = [1, 1, 1, 1, 1, 1, 1, 1, ] # h = [2, 2, 2, 2, 1, 1, 1, 1, ] L = 6 W = 2 H = 2 n = 10 l = [1, 1, 2, 2, 2, 2, 1, 1, 1, 1, ] w = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, ] h = [2, 2, 1, 1, 1, 1, 1, 1, 1, 1, ] M = max([L, W, H]) m = LpProblem(sense=LpMinimize) x = [ LpVariable("x%d" %i, lowBound=0, upBound=L, cat=LpInteger) for i in range(n) ] y = [ 
LpVariable("y%d" %i, lowBound=0, upBound=M, cat=LpInteger) for i in range(n) ] z = [ LpVariable("z%d" %i, lowBound=0, upBound=H, cat=LpInteger) for i in range(n) ] a = [[LpVariable("a%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] b = [[LpVariable("b%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] c = [[LpVariable("c%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] L = LpVariable("L") m += L for i in range(n): for j in range(n): m += x[i] + l[i] <= x[j] + M * (1 - a[i][j]) m += y[i] + w[i] <= y[j] + M * (1 - b[i][j]) m += z[i] + h[i] <= z[j] + M * (1 - c[i][j]) if i < j: m += a[i][j] + a[j][i] + b[i][j] + b[j][i] + c[i][j] + c[j][i] == 1 for i in range(n): m += x[i] <= L - l[i] m += y[i] <= W - w[i] m += z[i] <= H - h[i] # %%time m.solve(PULP_CBC_CMD(threads=4)) from pulp import * # L = 4 # W = 2 # H = 2 # n = 8 # l = [1, 1, 1, 1, 2, 2, 2, 2, ] # w = [1, 1, 1, 1, 1, 1, 1, 1, ] # h = [2, 2, 2, 2, 1, 1, 1, 1, ] # # L = 6 # W = 2 # H = 2 # n = 10 # l = [1, 1, 2, 2, 2, 2, 1, 1, 1, 1, ] # w = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, ] # h = [2, 2, 1, 1, 1, 1, 1, 1, 1, 1, ] # L = 8 W = 2 H = 2 n = 12 l = [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, ] w = [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, ] h = [2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, ] M = max([L, W, H]) m = LpProblem(sense=LpMinimize) x = [ LpVariable("x%d" %i, lowBound=0, upBound=L, cat=LpContinuous) for i in range(n) ] y = [ LpVariable("y%d" %i, lowBound=0, upBound=M, cat=LpContinuous) for i in range(n) ] z = [ LpVariable("z%d" %i, lowBound=0, upBound=H, cat=LpContinuous) for i in range(n) ] a = [[LpVariable("a%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] b = [[LpVariable("b%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] c = [[LpVariable("c%02d%02d" %(i, j), cat=LpBinary) for j in range(n)] for i in range(n)] L = LpVariable("L") m += L for i in range(n): for j in range(n): m += x[i] + l[i] <= x[j] + M * (1 - a[i][j]) m += y[i] + w[i] <= y[j] + M * (1 - b[i][j]) m += z[i] + h[i] <= z[j] + M * (1 - c[i][j]) if i < j: m += a[i][j] + a[j][i] + b[i][j] + b[j][i] + c[i][j] + c[j][i] == 1 for i in range(n): m += x[i] <= L - l[i] m += y[i] <= W - w[i] m += z[i] <= H - h[i] # %%time m.solve(PULP_CBC_CMD(threads=4, msg=True)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercises # + # Exercise 1 - Create an object called roc1 from the class below, passing 2 parameters, and then call attributes and methods. from math import sqrt class Rocket(): def __init__(self, x=0, y=0): self.x = x self.y = y def move_rocket(self, x_increment=0, y_increment=1): self.x += x_increment self.y += y_increment def print_rocket(self): print(self.x, self.y) # + # Exercise 2 - Create a class called Person () with attributes: name, city, phone, and email. Use at least 2 special methods # in your class. Create an object of your class and make a call to at least one of its special methods. # + # Exercise 3 - Create the Smartphone class with 2 attributes, size and interface and create the MP3Player class with the # attributes capacity. The MP3player class must inherit the attributes from the Smartphone class. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Anscombe's Quartet Tutorial # # Author: # # ## Tutorial goals: # # * explore some simple statistics # * illustrate the importance of visualization # * robust vs non-robust statistics # * create and use pandas data frames # * derive and fit simple linear regression # * the role of outliers # * explore bayesian approach # ## Background # # The Anscombe data set shows multiple groups of data which have similiar summary statistics, **but** as we will see, there are some differences in the simple statistics, which will made clear by the distinction of robust and non-robust statistics. # # This data set was created by the statistician in 1973 to demonstrate both the importance of graphing data before analyzing it and the effect of outliers on statistical properties. # # There is also an interesting paper by Chattergee and Firat on how to derive arbitrary amounts of data that fit these criteria: # # http://www.tandfonline.com/doi/abs/10.1198/000313007X220057#.VjDpsZeUirM # # ### Load libraries import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # ### Load the Seaborn Anscombe data set anscombe = sns.load_dataset("anscombe") #anscombe #anscombe.describe() # does not work need to break up sns.lmplot(x="x", y="y", col="dataset", data=anscombe, aspect=.5); # ### Visualization # # Plotting the data allows one to see that while each quartet is fit by the same linear function, the grouping and clustering of the data is different in each case. This can be quickly seen by plotting. # # Recall: # # * a picture is worth a thousand words # ## Build up our own Pandas data frame # # start with numpy arrays # + x1 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]) x2 = np.array([10,8,13,9,11,14,6,4,12,7,5]) x3 = np.array([10,8,13,9,11,14,6,4,12,7,5]) x4 = np.array([8, 8, 8,8,8,8,8,19,8,8,8]) y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]) y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]) y3 = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]) x4 = np.array([8,8,8,8,8,8,8,19,8,8,8]) y4 = np.array([6.58,5.76,7.71,8.84,8.47,7.04,5.25,12.50,5.56,7.91,6.89]) a_xs = [x1,x2,x3,x4] a_ys = [y1,y2,y3,y4] # - sns.regplot(x=x1, y=y1) # ### Load local pdapt libraries import os os.chdir('/home/matej/develop/pdapt') os.getcwd() # import my statistical library (pdapt stats) import pdapt_lib.machine_learning.stats as pstats pstats.standard_deviation(x1) # ## Summary statistics # # Notice that the robust statistics like the median, interquartile range (IQR), etc, **do** vary while the non-robust statisics: mean and standard deviation do not and very similar for each x and y data set in the quartet. # summary stats for each set should be same eg compare means for x in a_xs: pstats.summary(x) print() for y in a_ys: pstats.summary(y) print() # This value shows a close grouping in the y ranges of the data pstats.interquartile_range(y3) # This value shows a close grouping in the x ranges of the data pstats.interquartile_range(x4) # print out y4 and standarized equivalent.. 
# note standardized value of 12.5 is above abs(2) # which is another definition of and an outlier s = list(zip(y4, pstats.standardize(y4))) sorted(s,key=lambda x: abs(x[1])) def winsorization(v, edge): """ input: v (vector), edge (percentile of vector to replace with quantile value) output winsorized vector """ vs = sorted(v) lower_value_limit = pstats.quantile(v, edge) upper_value_limit = pstats.quantile(v, 1.0 - edge) lower_value = vs[vs.index(lower_value_limit)+1] upper_value = vs[vs.index(upper_value_limit)] w = [] for i in v: if i < lower_value: w.append(lower_value) elif i > upper_value: w.append(upper_value) else: w.append(i) return w a = [92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, -40, 101, 86, 85, 15, 89, 89, 28, -5, 41] # should replace -40 with -5 and 1053 with 101 w = winsorization(a,0.05) w print(pstats.mean(a)) print(pstats.mean(w)) # ### Outliers # # Some notes: # # * The 3rd and 4th parts of the dataset (quartet) have outliers which skew the mean and standard deviation to be equal to the other cases. # # * Especially in cases of small sample sizes, it is not necessarily true that the outliers should be removed, they may be part of the naturally occuring underlying distribution of the population of interest. On the other hand, the outliers could be due to instrument failure, or the result of someone answering the wrong question. # # * *one must investigate the outliers!* # ## Linear Regression # # Here we will solve for the coefficients a few different ways, using: # # 1. general solution via linear algebra # 2. a statistical relation # 3. analytical solution to simple regression # ### 1. General Solution # To fit a linear regression # # https://en.wikipedia.org/wiki/Linear_regression # # we need to solve a general linear equation, # # $\mathbf{Ax} = \mathbf{b}$ # # https://en.wikipedia.org/wiki/System_of_linear_equations # # but here we are solving for the coefficients (weights) so looks more like # # $\mathbf{A w}=\mathbf{y}$ # # # # + from numpy import arange,array,ones,linalg from pylab import plot,show # simple example xi = arange(0,9) A = array([ xi, ones(9)]) print(A) y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24] # solve it w = linalg.lstsq(A.T,y)[0] print(w) # - # ### 2. Statistical Relation for Simple Regression # # the theory of regression to the mean has some nice relationships. For example, # to solve the following: # # $\mathbf{y} = \mathrm{w_0} \mathbf{x} + \mathrm{w_1}$, # # use the statistical relations: # # $\mathrm{w_0} = \frac{\hat{\rho}(\mathbf{x},\mathbf{y}) \hat{\sigma}_y} {\hat{\sigma}_x}$ # # and # # $\mathrm{w_1} = \bar{\mathbf{y}} - \mathrm{w_0} \bar{\mathbf{x}}$ # # # where $\hat{\rho}$ is the sample correlation between **x** and **y** and $\hat{\sigma}$ is the sample standard deviation. # # test the above w0_stat = pstats.correlation(xi,y)*pstats.standard_deviation(y)/pstats.standard_deviation(xi) w1_stat = pstats.mean(y) - w0_stat*pstats.mean(xi) print(w0_stat,w1_stat) # + # could have used scipy from scipy import stats slope, intercept, r_value, p_value, std_err = stats.linregress(xi,y) print(slope, intercept, r_value, p_value, std_err) line = w[0]*xi+w[1] plot(xi,line,'r-',xi,y,'o') show() # - # ### 3. Analytical Solution for Simple Regression # # The analytical solution for the optimal fit is done by first creating a cost function, here we use a residual sum of squares cost function, # # $J_{RSS} = \sum_i^n \left( y_i - \hat{y}_i \right) ^2$ # # which pits the outcomes ($y_i$) against our predictions ($\hat{y}_i$). 
# # $J_{RSS} = \sum_i^n \left( y_i - \left( \mathrm{w_0} \mathbf{x} + \mathrm{w_1} \right) \right) ^2$ # # Now to obtain the optimal fit parameters we take the gradient of the cost function # # # $\frac{\partial{J_{RSS}}}{\partial{\mathrm{w_0}}} = -2 \sum_i^n \left( y_i - \left( \mathrm{w_0} \mathbf{x} + \mathrm{w_1} \right) \right)\mathbf{x} $ # # and # # $\frac{\partial{J_{RSS}}}{\partial{\mathrm{w_1}}} = -2 \sum_i^n \left( y_i - \left( \mathrm{w_0} \mathbf{x} + \mathrm{w_1} \right) \right) $ # # and by setting the gradient to zero and solving, we obtain the slope $\mathrm{w_0}$, # # $\mathrm{w_0} =\frac{ y_i x_i - y_i x_i / n }{ x_i^2 - x_i x_i / n}$ # # # and intercept ($\mathrm{w_1}$): # # $\mathrm{w_1} = \sum_i y_i/n - \mathrm{w_0}\sum_i x_i/n$ # # # we will need a dot product def dot(x,y): return sum(x_i * y_i for x_i, y_i in zip(x,y)) def solve_simple_regression(x,y): """ returns w_0 (slope) and w_1 (intercept)""" n = float(len(x)) slope = (dot(y,x) - ((sum(y)*sum(x))/n))/(dot(x,x)-((sum(x)*sum(x))/n)) intercept = sum(y)/n - slope*sum(x)/n return slope, intercept slope, intercept = solve_simple_regression(xi,y) print(slope, intercept) # ## Bayesian analysis # ## Summary # # * Some simple statistics **do** differ for the anscombe dataset (median etc) # # * outliers can complicate interpretation and should be carefully considered # note see file mkm_notebooks/license.txt for license of this notebook. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #Import all reuired libraries import pandas as pd import datetime import geopandas as gpd from shapely.geometry import Point from datetime import datetime from datetime import timedelta from sklearn import preprocessing import category_encoders as ce from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans import matplotlib.pyplot as plt import numpy as np from yellowbrick.cluster import KElbowVisualizer from sklearn.metrics import silhouette_samples, silhouette_score import matplotlib.cm as cm import os # The primary dataset which will be used in this analysis is the victim based part 1 crime table, produced by the Baltimore City Police Department. It reports crimes based upon each victim, rather than occurance, and thus each row can be thought of as a victim. #Crime = pd.read_csv('https://data.baltimorecity.gov/api/views/wsfq-mvij/rows.csv?accessType=DOWNLOAD') Crime = pd.read_csv('C:/Users/cyeager/Desktop/UMBC/DATA 602/Homework2/Crime.csv',sep=",") print(Crime.columns) #Keep only the columns we want df=Crime.filter(['CrimeDate', 'CrimeTime','Description','Latitude','Longitude']) #Concat the date and time columns into a single feature df['DT']=df['CrimeDate'] + ' ' + df['CrimeTime'] #df #reformat into a new datetime column df['DT']=pd.to_datetime(df['DT']) #df # Here we zip the latitude and longitude features into one, then apply the point function so that they can be displayed as a single dot of a map for each crime. 
df['Coordinates'] = list(zip(df['Longitude'], df['Latitude'])) df['Coordinates'] = df['Coordinates'].apply(Point) #All data prior to Jan 1 2014 was found to be sparse and inconsistent and thus is removed df=df[(df['DT'] >= '2014-01-01 00:00:01')] df['DT'].min() #Save of a copy of this to come back to if we mess it up df_saved=df #Save of a copy of this to come back to if we mess it up df_saved.to_pickle('CrimemapData.pkl') df = pd.read_pickle('CrimemapData.pkl') #Remove na values, only a few are lost in proportion to overall size, not worth investigating to potentially fix or interpolate print(df.shape) df=df.dropna(axis=0,how='any') df=df.dropna(axis=1,how='any') print(df.shape) # We are going to break this data down into 2 hours intervals of time where we look at all of the crime which is taking place in the city during that snapshot and see if we can find a pattern that acts as a precursor to homicides/shootings. The first thing we do here is figure out how many 2 hours groups we can get from this dataset: print(df['DT'].min()) print(df['DT'].max()) T=(df['DT'].max())-(df['DT'].min()) #here we find the total number of hours contained within our df and then see how many 2 hour chunks there are within in. #We can create 29951 images days, seconds = T.days, T.seconds hours = days * 24 + seconds // 3600 hours/2 #We want each type of crime to be a specific color ClusterColors = ['#E47A2E', '#3F69AA', '#D5AE41', '#BD3D3A','#BD3D3A',"#006E6D",'#7F4145','#223A5E','#BD3D3A', '#3F69AA', '#D5AE41', '#766F57','#E47A2E',"#006E6D",'#7F4145','#223A5E',"#9C9A40","#9C9A40",'blue','green','red','purple','brown','#BD3D3A', '#3F69AA', '#D5AE41', '#766F57','#E47A2E',"#006E6D",'#7F4145','#223A5E',"#9C9A40"] #Here we assign the color and size to each crime type point CrimeColours = list(zip(list(df['Description'].unique()), ClusterColors)) CrimeColours=pd.DataFrame(CrimeColours) CrimeColours['shape']=50 #Rename Colmns CrimeColours.columns=['crime','color','shape'] df = pd.merge(df, CrimeColours, left_on=["Description"], right_on=['crime']) # Running the loop to create almost 30k maps is very time consuming. Via trial and error it was found that the df had a few flaws which were not immediately noticeable. One of these was that a hadful of points actually fell outside the city due to geocoding errors. Here we find those points via spatially joining all of the data to a polygon of the city, then filtering the main df of those without a match. Outsidezbmore=gpd.sjoin(BaltimoreCommunities,gpd.GeoDataFrame(df, geometry='Coordinates').set_crs("EPSG:4326"), op='contains') Outsidezbmore.index_right main_list = np.setdiff1d(list(df.index),list(Outsidezbmore.index_right)) main_list print(df.shape) df=df.drop(df.index[main_list]) print(df.shape) #save it again df.to_pickle('CrimemapDataClensed.pkl') df = pd.read_pickle('CrimemapDataClensed.pkl') df.head() # Finally, the loop below creates a new df for each 2 hour window within the master dataframe, then appends each of these new, smaller dfs to a list. maybe=[] X=df['DT'].min() Y=df['DT'].min()- timedelta(hours=2) for i in range(29950): X=X + timedelta(hours=2) Y=Y + timedelta(hours=2) maybe.append(df[(df.DT >=Y) & (df.DT <=X)]) #print(str(Y)+" - "+str(X)) # We create a list of the 2 hour dfs which contain either homicide or shooting for identification purposes since this is a classification model. Using this data we will place each image into a folder that fits its classification (homicide/shooting or none). 
searchfor=["SHOOTING","HOMICIDE"] um=[] for i in range(len(maybe)): um.append(maybe[i][maybe[i]['Description'].str.contains('|'.join(searchfor))].empty) #Read in the neighborhoods boundary shapefile from API BaltimoreCommunities = gpd.read_file('https://data.baltimorecity.gov/api/geospatial/peh2-3qyi?method=export&format=Shapefile') len(maybe) # The final data issue to overcome was filtering those 2-hour windows in which no crime at all took place. These would cause an error, as the mapping operation is disrupted by empty dfs. Rather than deleting them, I simply incorporated an if statement with a 'continue' line into the main mapping loop. dud=[] for i in range(len(maybe)): dud.append(maybe[i].empty) skiplist=[i for i,x in enumerate(dud) if x == True] #skiplist #Only 137 of the nearly 30k 2-hour windows contained no crime. len(skiplist) # This loop creates one map image for each 2-hour window. A point is displayed for each crime, with a specific color that corresponds to its type. I've also spatially joined crime to the neighborhood in which it belongs and created a red outline for each neighborhood where there was crime; all others are outlined in black. Once the map is created, it's given a unique name and saved as a .png file into a folder which designates its status as having a homicide or not. for i in range(0,len(maybe)): if maybe[i].empty==True: continue base = BaltimoreCommunities.plot(color='none', edgecolor='black',alpha=.5) base.set_aspect('equal') base.set_axis_off() ax=gpd.GeoDataFrame(maybe[i], geometry='Coordinates').set_crs("EPSG:4326").plot(ax=base,c=maybe[i]['color'], markersize = maybe[i]['shape'],alpha=.7) gpd.sjoin(BaltimoreCommunities,gpd.GeoDataFrame(maybe[i], geometry='Coordinates').set_crs("EPSG:4326"), op='contains').plot(ax=base,edgecolor='red',color='none') fig = ax.figure File=(os.getcwd()+"\Maps"+"\\"+str(um[i])+"\\"+maybe[i].DT.max().strftime(format='%Y%m_%I%M')+str(i)+".png") fig.savefig(File, bbox_inches = 'tight',pad_inches = 0) plt.close('all') # Out of our 29951 images, 'False' designates that there was a homicide or shooting during that 2-hour window. There were 4824 of these. The remaining 24989 (designated 'True') were windows of time without a shooting or homicide. 
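# As a quick check on the class counts reported above (an editorial addition, not part of the original notebook), the `um` flags built earlier can be tallied directly; `um[i]` is True when the i-th 2-hour window contains no shooting or homicide.

# +
# Tally the classification flags: False = window contains a shooting/homicide,
# True = window does not.
from collections import Counter

class_counts = Counter(um)
print("Windows with a shooting/homicide (False):", class_counts[False])
print("Windows without a shooting/homicide (True):", class_counts[True])
# -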
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # argv: # - /Users/marc/venvs/edv-pilot/bin/python # - -m # - ipykernel_launcher # - -f # - '{connection_file}' # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nteract={"transient": {"deleting": false}} # # Demo Laplace Mechanism Confidence Interval # # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} from eeprivacy import (laplace_mechanism, laplace_mechanism_confidence_interval, laplace_mechanism_epsilon_for_confidence_interval, private_mean_with_laplace) import matplotlib.pyplot as plt import numpy as np import pandas as pd import matplotlib as mpl from scipy import stats np.random.seed(1234) # Fix seed for deterministic documentation mpl.style.use("seaborn-white") MD = 20 LG = 24 plt.rcParams.update({ "figure.figsize": [25, 7], "legend.fontsize": MD, "axes.labelsize": LG, "axes.titlesize": LG, "xtick.labelsize": LG, "ytick.labelsize": LG, }) # Exact ci = laplace_mechanism_confidence_interval( epsilon=1.0, sensitivity=1, confidence=0.95 ) print(f"Exact CI: {ci}") # Stochastic trials = [] for t in range(1000): res = laplace_mechanism( values=np.zeros(1000), epsilon=1.0, sensitivity=1.0 ) trials.append(np.quantile(res, 0.975)) plt.hist(trials, bins=20) plt.title("CI from stochastic trial") plt.xlabel("CI") plt.ylabel("Count") plt.show() # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # Now the reverse epsilon = laplace_mechanism_epsilon_for_confidence_interval( target_ci=3, sensitivity=1, confidence=0.95 ) print(epsilon) epsilon = laplace_mechanism_epsilon_for_confidence_interval( target_ci=3, sensitivity=1, confidence=0.95 ) print(epsilon) def compute_laplace_epsilon(target_ci, sensitivity, quantile=.95): """ Returns the ε for the Laplace Mechanism that will produce outputs +/-`target_ci` at `quantile` confidence for queries with `sensitivity`. e.g. compute_laplace_epsilon(5, 1, quantile=0.99) Returns ε for counting queries that should be within +/-5 of the true count at 99% confidence. 
""" quantile = 2 * quantile - 1 epsilon = -sensitivity * np.log(2 - 2 * quantile) / target_ci return epsilon compute_laplace_epsilon(17520, 8760, 0.95) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} trials = [] for t in range(20): A = private_mean_with_laplace(values=[0, 0, 0], epsilon=1.0, lower_bound=0, upper_bound=1) trials.append(A) plt.hist(trials, bins=20) plt.show() trials = [] values = np.random.laplace(4, scale=1, size=1000) for t in range(1000): A = private_mean_with_laplace(values=values, epsilon=0.1, lower_bound=0, upper_bound=20) trials.append(A) print(np.mean(values)) plt.hist(values, bins=20) plt.show() plt.hist(trials, bins=20) plt.show() print(np.mean(trials)) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from types import MappingProxyType d={1:'A'} d_proxy = MappingProxyType(d) # + pycharm={"name": "#%%\n"} d_proxy # + pycharm={"name": "#%%\n"} d_proxy[1] # + pycharm={"name": "#%%\n"} d_proxy[2] # + pycharm={"name": "#%%\n"} d[2] = 'B' # + pycharm={"name": "#%%\n"} d_proxy # + pycharm={"name": "#%%\n"} d_proxy[2] # + pycharm={"name": "#%%\n"} d_proxy[3] = 'x' # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ECE225A Project - Google Play Store Apps # ## 1. Introduction # # Nowadays, Android operating system is one of the main mobile operating systems based on the Linux kernel developed by Google LLC. Today, many smartphones manufactured by various companies choose Android to be their operating system. # # Google Play (previously Android Market) is a digital distribution service operated and developed by Google LLC. It serves as the official app store for the Android operating system, allowing users to browse and download applications developed with the Android software development kit (SDK) and published through Google. # # As we all know, smartphones today have become one of the necessities in most people's life. We depend on different apps on our smartphones to fulfill many of our demands, including working, living, entertainment and etc. As the official app store for Android - one of the most widely used mobile operating system, Google Play Store includes those various apps. Therefore, by taking a look at the statistics of apps on Google Play Store, we can have a glance on people do with their smartphones # ## 2. Analysis and Visualization on the Dataset and Some Results # 'googleplaystore.csv' shows some details about applications on Google Play Store. 
# #### Read Data # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.stats import seaborn as sns import warnings warnings.simplefilter(action='ignore', category=FutureWarning) # - data = pd.read_csv('googleplaystore.csv') data # #### Categories # Legend of the pie chart starts from the most category and move counterclockwise. # + category = data.groupby(by='Category')['Category'].count().sort_values(ascending=False).drop('1.9') label = category.index.tolist() patches, texts = plt.pie(category,shadow=True) plt.title('All categories of Google Play Store Apps',fontsize=20) plt.legend(patches, label,ncol=4,fontsize=8,bbox_to_anchor=(0.9, 0.8)) plt.axis('equal') plt.show() # - top = 5 x = range(top) plt.bar(x,category.head(n=top),width=0.5) plt.title('Top 5 Categories of Google Play Store Apps',fontsize=15) plt.xticks(x,category.index,fontsize=8) plt.ylabel('Number of Apps',fontsize=12) plt.show() # From the plots, we can see that FAMILY (living) and GAME (entertainment) are the most popular categories, showing that people use mobile applications mostly during living. After those, work involving techniques appears (including MEDEICAL, BUSINESS, SPORTS, FINANCE, etc). # #### Ratings # Ratings reflect how users like the applications. The higher the ratings, the more user like the app from many aspects (usage, efficiency, price, etc). # + rating = data['Rating'].dropna() n_bins = 135 n,bins,patches = plt.hist(rating,n_bins,density=1,color='b',alpha=0.1) plt.title('Overall rating',fontsize=15) plt.xlim((0,6)) mu = rating.mean() sigma = rating.std() y = scipy.stats.norm.pdf(bins,mu,sigma) plt.plot(bins,y,'r--') sns.distplot(rating,color='b',bins=n_bins) plt.show() # - # The general shape of the overall rating looks roughly like Gaussian distributed centered at about 4, implying that most users feel OK with their apps while a few are awful and a few are excellent. # + family = data[data['Category']=='FAMILY'] family_r = family['Rating'].dropna() n_bins = 20 n,bins,patches = plt.hist(family_r,n_bins,density=1,color='b',alpha=0.1) plt.title('Family Apps rating',fontsize=15) plt.xlim((0,6)) mu = family_r.mean() sigma = family_r.std() y = scipy.stats.norm.pdf(bins,mu,sigma) plt.plot(bins,y,'r--') sns.distplot(family_r,color='b',bins=n_bins) plt.show() # - # Family Applications are the most popular category, so we take a look at it and compare with the overall. The shape of the family applications rating looks very similar to the overall rating while it has a higher mean and smaller variance, implying that family applications seem to be over averge. # + medical = data[data['Category']=='MEDICAL'] medical_r = medical['Rating'].dropna() n_bins = 20 n,bins,patches = plt.hist(medical_r,n_bins,density=1,color='b',alpha=0.1) plt.title('Medical Apps rating',fontsize=15) plt.xlim((0,6)) mu = medical_r.mean() sigma = medical_r.std() y = scipy.stats.norm.pdf(bins,mu,sigma) plt.plot(bins,y,'r--') sns.distplot(medical_r,color='b',bins=n_bins) plt.show() # - # Although medical apps are less than family apps, because of its higher requirements of techniques, medical apps appear to have generally higher rating. However, also because of the strict demand of techniques, poorly-performed apps receive much lower ratings. This happens to many other technique-depended categories, for example, business. 
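# To complement the visual comparison of these rating distributions (this numeric check is an editorial addition, not part of the original analysis), the `rating`, `family_r` and `medical_r` series defined above can be summarized directly.

# +
# Print sample size, mean and standard deviation for each ratings group
# compared above.
for name, r in [('overall', rating), ('family', family_r), ('medical', medical_r)]:
    print(f"{name:8s} n={len(r):5d}  mean={r.mean():.3f}  std={r.std():.3f}")
# -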
# #### Type # + type = data.groupby(by='Type')['Type'].count().sort_values(ascending=False).drop('0') plt.bar(range(2),type) plt.title('Type of Google Play Store Apps',fontsize=15) plt.xticks(range(2),type.index,fontsize=8) plt.ylabel('Number of Apps',fontsize=12) plt.show() # - # Clearly, most apps on Google Play Store are free for installation since Android welcomes all companies to develop release their own apps. # + paid = data[data['Type']=='Paid'] paid_c = paid.groupby(by='Category')['Category'].count().sort_values(ascending=False) top = 5 x = range(top) plt.bar(x,paid_c.head(n=top),width=0.5) plt.title('Top 5 Categories of Paid Google Play Store Apps',fontsize=15) plt.xticks(x,paid_c.index,fontsize=8) plt.ylabel('Number of Apps',fontsize=12) plt.show() # - # #### Paid Apps # Among all paid apps, FAMILY is still the most populated category. However, we want to know whether it's because the total number is large or the percentage is large. In addition, we can see that MEDICAL is the second most populated one. So we want to look more into the percentage to make a comparison. # + label = paid_c.index.tolist() df = pd.DataFrame(columns=['Category','Percentage']) for i in label: p = float(paid_c[paid_c.index==i]/category[category.index==i]*100) df = df.append({'Category': i, 'Percentage': p}, ignore_index=True) df_c = df.sort_values(by='Percentage',ascending=False) plt.bar(x,df_c['Percentage'].head(n=top),width=0.5) plt.title('Top 5 Categories (Percentage) of Paid Apps',fontsize=15) plt.xticks(x,df_c['Category'].head(n=top),fontsize=6) plt.ylabel('Percentage',fontsize=12) plt.show() # - # From this graph, it's clear that MEDICAL has the most percentage of paid apps. And the next is PERSONALIZE. From this graph, we understand that the more specialized the app is, the more probable that it is a paid app. top = 5 x = np.arange(top) plt.bar(x,paid_c.head(n=top),width=0.3,label='number') plt.bar(x+0.3,df['Percentage'].head(n=top),width=0.3,label='percentage') plt.xticks(x,paid_c.index,fontsize=8) plt.legend() plt.show() # This is a graph showing number and percentage together, so that we can get a better feel of those paid apps categories. We can see that number of paid FAMILY is large mainly because the total number is large, while the number of paid MEDICAL is large because the percentage of paid apps is large. # #### Content Rating # Content rating shows what group of users the apps are designed for. # + content = data.groupby(by='Content Rating')['Content Rating'].count().sort_values(ascending=False).drop('Unrated') label = content.index.tolist() patches, texts = plt.pie(content,shadow=True) plt.title('Content Rating of Google Play Store Apps',fontsize=20) plt.legend(patches, label,loc='upper right',bbox_to_anchor=(1.1, 1.0)) plt.axis('equal') plt.show() # - x = range(5) plt.bar(x,content,width=0.5) plt.title('Content Rating of Google Play Store Apps',fontsize=15) plt.xticks(x,content.index,fontsize=8) plt.ylabel('Number of Apps',fontsize=12) plt.show() # As expected, most apps are designed for everyone. Aside from those apps, we can see that there are a lot of apps designed for teens since teens nowadays have relative more time and master the ability to get used to most new techniques. They like to play with electronic devices and have the curiosity to make them functional to replace some traditional ways to deal with problems. 
So, we would like to focus more on this specific group and investigate what kinds of apps they like the most and whether that category structure varies from the overall trend or not. # #### Apps for Teens teen = data[data['Content Rating']=='Teen'] teen_c = teen.groupby(by='Category')['Category'].count().sort_values(ascending=False) teen_c top = 5 x = range(top) plt.bar(x,teen_c.head(n=top),width=0.5) plt.title('Top 5 Categories of Teen Google Play Store Apps',fontsize=15) plt.xticks(x,teen_c.index,fontsize=7) plt.ylabel('Number of Apps',fontsize=12) plt.show() # Comparing this bar chart with the bar chart for the overall apps, we can find some obvious differences. For teens, the number of apps related to their interests and to their life and study increases substantially. Apps in categories like Game, Social and Personalization are very popular among teens. Teens love playing games and other forms of entertainment. They need social, personalization and shopping apps to help them with their everyday life. Unlike adults, teens don't have much knowledge or experience in professional areas like medicine and business, so the number of apps specialized for those high-tech areas is very small. Additionally, teens don't need to worry about everything concerning their family, as their parents deal with many of those problems, so the relative number of family apps is smaller than in the overall data. The categories of apps for teens correctly reflect the characteristics of teens. # + rating_t = teen['Rating'].dropna() n_bins = 25 n,bins,patches = plt.hist(rating_t,n_bins,density=1,color='b',alpha=0.1) plt.title('Overall teen rating',fontsize=15) plt.xlim((0,6)) mu = rating.mean() sigma = rating.std() y = scipy.stats.norm.pdf(bins,mu,sigma) plt.plot(bins,y,'r--') sns.distplot(rating_t,color='b',bins=n_bins) plt.show() # - # From this histogram, we find that the overall rating of apps for teens on Google Play Store appears to have a higher mean than the overall rating for all apps, showing that the quality of those apps is generally above average. # + game = teen[teen['Category']=='GAME'] game_r = game['Rating'].dropna() n_bins = 20 n,bins,patches = plt.hist(game_r,n_bins,density=1,color='b',alpha=0.1) plt.title('Game Apps teen rating',fontsize=15) plt.xlim((0,6)) mu = game_r.mean() sigma = game_r.std() y = scipy.stats.norm.pdf(bins,mu,sigma) plt.plot(bins,y,'r--') sns.distplot(game_r,color='b',bins=n_bins) plt.show() # - # ## 3. Conclusion # # Overall, as the official app platform for the Android operating system, Google Play Store serves a large variety of app categories, from living, which provides convenience or entertainment in our everyday life, to working, which requires professional techniques. While different categories of apps may have different quality and features, their general quality according to the ratings shares similar trends. Most of them could be considered satisfactory, while a few show excellent quality (ratings close to 5) and a few perform terribly (ratings close to 1). As an open-source operating system, Android welcomes companies and individuals to release their own apps. As a result, most of the apps are free. Among paid apps, some require professional knowledge, such as medical apps, and some are designed for specialization, such as personalization, so users are willing to pay for the convenience. Considering the group of users that the apps are designed for, most apps are aimed at everyone. 
However, there are quite many apps designed specially for teenagers. Speaking of teenagers, they have relatively more time for and the ability and curiosity to adapt to new techniques including those apps on their smartphones. At the same time, they loves playing games or other entertainment. In addition, they are new to the society so they don't have or need a lot of knowledge involving some professional areas. Most of the time, they only need to deal with some simple everyday problems for living, such as reading books and magazines, shopping, work out timetables for them to follow and etc. The apps also make full consideration about those characteristics. Game, entertainment and sports are very popular among teenagers to provide a variety of choices during their leisure time. Social apps help teenagers to familiarize with the society. Personalization, shopping, books and comics, dating and photography apps are also popular to provide a convenient and confortable environment of teenagers' daily life. Different from the apps for adults, we can see that apps require too much professional knowledge such as medical and finance are not very popular among teenagers. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: county-scraping-env # language: python # name: county-scraping-env # --- import pandas as pd import requests import json from datetime import date # ## Get raw data from [map](https://cocogis.maps.arcgis.com/apps/webappviewer/index.html?id=fea1f3021a50455495b7e7e11325ecd4) # + urls = [ 'https://services3.arcgis.com/42Dx6OWonqK9LoEE/ArcGIS/rest/services/FoodResources4_9_20_CombinedFrom211/FeatureServer/0/query', # 191 'https://services3.arcgis.com/42Dx6OWonqK9LoEE/arcgis/rest/services/SchoolMealPickups/FeatureServer/0/query', # 617 'https://services3.arcgis.com/42Dx6OWonqK9LoEE/arcgis/rest/services/SeedLibraries/FeatureServer/0/query' # 6 ] dfs = [] params = { "f":"json", "where":"1=1", "returnGeometry":"true", "outFields":"*", } for url in urls: res = requests.get(url, params=params) df = pd.json_normalize(json.loads(res.text)['features']) dfs.append(df) # - dfs[0].head(10) # + today = date.today() filename = 'contra_costa_foodresources_raw_%s.csv' % (today.strftime("%m_%d_%Y")) dfs[0].to_csv(filename) filename = 'contra_costa_schoolmeals_raw_%s.csv' % (today.strftime("%m_%d_%Y")) dfs[1].to_csv(filename) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: python3-datasci # language: python # name: python3-datasci # --- # # Assignment 03: Regression, Learning Curves and Regularization # # **Due Date:** Friday 10/02/2020 (by 5pm) # # **Please fill these in before submitting, just in case I accidentally mix up file names while grading**: # # Name: # # CWID-5: (Last 5 digits of cwid) # ## Introduction # # In this exercise we will be using what you have learned about linear regression, polynomial regression and # regularization, to explore an artificial dataset. # # I have generated a secret dataset. The dataset uses a polynomial combination of a single parameter. # The unknown function is no less than degree 3, but no more than a degree 15 polynomial. And some random # noise has also been added into the function, so that fitting it is not a completely trivial or obvious # exercise. 
Since the dataset is generated from a polynomial function, the output labels `y` are # real valued numbers. And thus you will be performing a regression fitting task in this assignment. # # Your task, should you choose to accept it, is to load and explore the data from this function. Your ultimate # goal is to try your best to determine the degree of the polynomial used, and the values of the parameters # then used in the secret function. Because of the noise added to the data you are given, you will not be able # to exactly recover the parameters used to generate the artificial data. You will even find that determining the # exact degree of the generating polynomial function is not possible. How you apply polynomial fitting and # regularization techniques can give different and better or worse approximations of the true underlying function. # # In the below cells, I give instructions for the tasks you should attempt. You will need to load the data and # visualize it to begin with. Then you will be asked to apply polynomial fitting and regularization in an attempt # to fit the data. But ultimately, at the end, you will be asked to take what you have discovered, and try and # give your best answer for the polynomial degree and best fitted polynomial parameters for this unknown data set. # + import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # By convention, we often just import the specific classes/functions # from scikit-learn we will need to train a model and perform prediction. # Here we include all of the classes and functions you should need for this # assignment from the sklearn library, but there could be other methods you might # want to try or would be useful to the way you approach the problem, so feel free # to import others you might need or want to try from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.linear_model import ElasticNet from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline # notebook wide settings to make plots more readable and visually better to understand np.set_printoptions(suppress=True) # - # notebook wide settings to make plots more readable and visually better to understand np.set_printoptions(suppress=True) plt.rc('axes', labelsize=14) plt.rc('xtick', labelsize=12) plt.rc('ytick', labelsize=12) plt.rc('figure', titlesize=18) plt.rc('legend', fontsize=14) plt.rcParams['figure.figsize'] = (12.0, 8.0) # default figure size if not specified in plot plt.style.use('seaborn-darkgrid') # ## Part 1: Load Data, Explore and Visualize # ------------ # # You have been given a set of 100 (artificial) data points in the file named `assg-03-data.csv` in our # data subdirectory. Start by loading this file into a pandas dataframe. Explore the data a bit. # Use the `describe()` function to get a sense of the number of values (there should be `m=100` samples), # and their mean and variance. There are 2 columns, where `x` is the feature, and `y` is the function # value, or in other words the label we will use for the regression fitting task. What is the range of # the `x` features? What is the range of the `y` output label here? # # Also plot a scatter plot of the data to get a sense of the function shape. Does it appear linear # or nonlinear? # load and explore the data. 
Use describe and other functions # minimal data cleaning / preparation is needed, but if you load using pandas you will have to split out the x input column separate from the y output labels. # + # visualize the data we loaded using a simple scatter plot # - # ## Part 2: Create an Overfit Model # # You have been told the degree of the polynomial function governing the data you just loaded is somehwere # between 3 and 15. Lets start by overfitting a degree 20 polynomial regression to the data. # In the next cells, use `PolynomialFeatures` and scikit-learn `LinearRegression()` to create a best fit # degree 20 polynomial model of the data. # overfit with a degree 20 polynomial, display cross validation of fit # no regularization # In the next cell, use introspection of your fitted model to display the intercept and fitted coefficient parameters. # Also discover the overall $R^2$ score of the fit. Display them here for future reference. # # You should of course get a single intercept parameter, but 20 fitted theta coefficients. Is there anything # you think you can learn looking at the coefficients for this degree 20 fit? For example, knowing that # the true degree is less than 20 for the underlying function, do you have any guess at this point of what # degree the polynomial might be that underlies this data? # # You should discover you get a pretty good $R^2$ score here, probably above 0.96. This indicates a good # fit that explains a lot of the variance seen. But since we have a very high degree model, you should # be worried at this point that at least some of that performance is coming from overfitting to the noise # present in the data instead of the true function that underlies the data. # display the intercept and coefficients of the fit as well as the R^2 score here # And finally for this part, visualize the fit from this degree 20 model. Plot the raw data as a scatter # plot again, and then use the `predict()` function to visualize the predictions made by the degree 20 # polynomial. # # Any insights from this visualization? Do you see evidence of the type of extreme overfitting that we # saw in the lectures? Especially around the ends of the data? # + # visualize the fit of the degree 20 polynomial here. Start by plotting the raw data as a scatter plot # then display the fitted model using the predict() member function # - # ## Part 3: Cross Validation of Degree 20 Model # ------------------------------------------ # # In these next parts of the assignment, we will walk you through applying regularization and using cross validation # on your degree 20 model to try and discover a model that is not overfitting the noise # present in the data set. This will hopefully lead to better insights on the true nature of the # function that may be generating the data you are analyzing. # # First of, for convenience, we recreate the `plot_learning_curves()` function from our textbook for # our use in this part of the assignment. Recall that this function, if you give it a # scikit-learn `Pipeline()` model, and the `X` input data and `y` labels, will perform # a series of cross validation trainings using the model and plot the results. In this case, the # function trains the model with a single input, then 2 inputs, and so on, and displays the # resulting model predictions on the data it trained with, and on the held back validation data. 
# As discussed in our lectures, these learning curves can help us determine whether a model is overfitting or # underfitting, and what performance we can expect from a properly powerful trained and fitted model. def plot_learning_curves(model, X, y): """Plot learning curves obtained with training the given scikit-learn model with progressively larger amounts of the training data X. Nothing is returned explicitly from this function, but a plot will be created and the resulting learning curves displayed on the plot. Parameters ---------- model - A scikit-learn estimator model to be trained and evaluated. X - The input training data y - The target labels for training """ # we actually split out 20% of the data solely for validation, we train on the other 80% X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2) # keep track of history of the training and validation cost / error function train_errors, val_errors = [], [] # train on 1 to m of the data, up to all of the data in the split off training set for m in range (1, len(X_train)): # fit/train model on the first m samples of the data model.fit(X_train[:m], y_train[:m]) # get model predictions y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) # determine RMSE errors and save history for plotting train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) # plot the resulting learning curve plt.plot(np.sqrt(train_errors), 'r-', linewidth=2, label='train') plt.plot(np.sqrt(val_errors), 'b-', linewidth=2, label='val') plt.xlabel('Training set size') plt.ylabel('RMSE') plt.legend(fontsize=18) # Using the above function, display the learning curve for a degree 20 polynomial. If you didn't do # already, you will need a scikit-learn `Pipeline` here that first applies a `PolynomialFeatures` transformation # to get a degree 20 set of input features, and then performs linear regression on the pipeline after # creating the polynomial features. # # Use your pipeline transform/fit model to call the `plot_learning_curves()` function and display the # learning curves for a degree 20 polynomial, that remember we should suspect is overfitting the # data at this point. # + # create a pipeline here if needed for a degree 20 set of PolynomialFeatures that is then # trained with a standard LinearRegression # plot the learning curves. You may need to change the limits of your plot, because if the data is overfitting # the performance on the validation data may be very bad compared to on the training data. # - # You can and will get different results when you plot the learning curve here because the train/validation # split is done randomly each time. So you should probably run the above plot of your learning curves # more than 1 time, to get a feel for what results you can get. # # But you should observe 2 important points here # # 1. Does it seem like the model is overfitting? e.g. Are you observing that often there is a very big gap in # performance between the validation and training RMSE measure, where validation RMSE is very bad compared to # what is seen with the data model was trained with. # 2. You should make a note of what performance is reached on the training data RMSE measure with this overfit # model. This level of RMSE for training can probably be approached with a properly tuned model that # generalizes well, and can thus get this performance on data it has not seen before. 
# # You might want to display the intercept, coefficients and $R^2$ score again here of the model you fit, # just to confirm they are similiar to what you saw the first time. But if you are using a pipeline, you # may need to access these parameters in a slightly different way now. # display the intercept and coefficients of the fit as well as the R^2 score here # ## Part 4: Applying Ridge Regularization # --------------------- # # In this section you will be asked to perform some of the regularization techniques we discussed in our # lectures on linear regression and regularization. The goal here is to try and get a better idea of what the # true degree of the governing function might be, as well as the values of the coefficients of this function. # # Lets start by trying a simple "ridge" regularization. Recall ridge regression applies $\ell_2$ penalities # (e.g. it adds in the square of the coefficient $\theta_i^2$) which tends to reduce (but not eliminate) # parameters that are not necessary for minimizing the fitness function. # # As before, create a pipeline that creates degree 20 polynomial features. But apply and fit a ridge # regression for this part of the assignment. Try and explore alpha values to get a good model. # A "good" model is one here that shows you are no longer overfitting. You can tell you are no longer # overfitting if there is no longer a big gap between training and validation performance. Also you should compare # the validation RMSE here to that achieved on the training data previously. If your validation RMSE here # is approaching that seen on only the training data before, the generalization of this fitted model # with regularization is doing well. # + # apply l2 "ridge" regularization to a degree 20 set of polynomial features # plot the learning curves you observe for a good example of a ridge regularization fit # - # When you think you have a relatively good alpha parameter for your ridge regression, display the # intercept, coefficients and $R^2$ score of you fitted model here for future reference. Compare the # coefficients here to the previous overfitted model. Do you have any insights on the degree of the # underlying polynomial from looking at your coefficients here? # display intercept, coefficients and R^2 fit score here # ## Part 5: Applying Lasso Regularization # ----------------------- # # In this part of the assignment we will next apply lasso regression. Recall that lasso regression is # the same as $\ell_1$ norm penality, which in practical terms means we use the absolute value of # the coefficient in our regularization penalty term. # # As before, create a pipeline that creates degree 20 polynomial features. But apply and fit a lasso # regression for this part of the assignment. You should explore `alpha` values around 0.001 to # 0.05 or so. You may get warnings with this fit, try setting the tolerance parameter `tol` to # 0.1 here, and maybe increasing `max_iter` as well, though it is not nesessary to completely eliminate # warnings to still get a relatively good fit here. And actually you may want to explore higher values # of `alpha`, as the higher you go, the more pressure on the fitted model to eliminate terms. This may give # you better insights into the true degree of the underlying polynomial function used to generate the # data here. # # Once you have a good example, use your best fitted model to display the learning curves using # the lasso regularization. 
# # Try and determine an `alpha` parameter that you think is working well here for the regularization. You are # doing "well" here if your learning curves indicate you are not ovrefitting, and you are approaching # RMSE performance on your validation data somewhere around where training RMSE achieved in previous # overfitting model. And you are doing well if you maybe have some idea of the cutoff that looks likely for # the true degree of the underlying function you are attempting to model. # + # apply l1 "lasso" regularization, to try and make unused terms drop out # plot the learning curves you observe for a good example of a lasso regularization fit # - # When you think you have found relatively good `alpha` and other parameters, display the intercept, coefficients # and $R^2$ scores again for this fitted model using lasso regularization. Any observations about # the coefficients now? Compare them to the degree 20 model with no regularization. You may be able to # start getting some ideas on the true degree of the polynomial from the results here. # display intercept, coefficients and R^2 fit score here # ## Part 6: Give me Your Best Model Estimate # # Taking what you have observed from the previous parts 3-5 of the assignment, try and give you best guess/estimate # for the true degree of the underlying polynomial. Your lasso regularization results might be most useful # for this determination, try finding values of `alpha` that are obviously too big (maybe by watching your # $R^2$) score), and then reduce this a bit to get an estimate on the upper bound of the number of terms in # the true polynomial. Recall that because of noise you won't be able to get a perfect answer here. Also it # helps to know, especially for even powers of the polynomial, that coefficients here are often only # reflecting lower even power effects. So for example, for a true function with only a $x^2$ term, you might # still get $x^4$ and $x^8$ coefficients using lasso regression, even with relatively high values of `alpha`. # # In any case, choose your best estimate of the "true" degree of the underlying polynomial function. Then train # a final linear regression with no or maybe slight $\ell_2$ regularization to try and get a best estimate # of the true model coefficients. You should do a little bit of testing again using the learning curves # and checking your $R^2$ fit score to determine that the model appears to be able to fit as well as # your degree 20 models with regularization. But then train a final model using all of the data, and report # the intercept, coefficients and $R^2$ fit you achieve with your best estimated model for this assignment. # + # estimate the polynomial degree and create your best model. You might want to try first with no regularization, # and then maybe with a little bit of l-2 (ridge) regulariztion and compare. # you should confirm that you model does not overfit and performs as well as your degree 20 models with # regularization # - # display intercept, coefficients and R^2 fit score here # Once you settle on your best model, do a final trainin of it with all of the data. Display # your intercept, coefficients and $R^2$ fit score for this best model. # # Then, visualize the fit of your best model. Once again scatter plot the raw data. Then using the # `predict()` function from you scikit-learn model, show the predicted regression as a line on top of the # raw data points. 
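# For reference, the "scatter the raw data, then overlay `predict()`" pattern described above looks roughly like the sketch below. This is an editorial illustration on synthetic data with an arbitrarily chosen degree-3 pipeline; it is not the answer to this assignment and does not use the assignment's dataset.

# +
# Generic illustration of plotting a fitted polynomial pipeline over raw data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
x_demo = np.sort(rng.uniform(-3, 3, 100)).reshape(-1, 1)              # synthetic inputs
y_demo = 0.5 * x_demo**3 - x_demo + rng.normal(0, 2, x_demo.shape)    # synthetic labels

demo_model = Pipeline([
    ('poly', PolynomialFeatures(degree=3, include_bias=False)),
    ('linreg', LinearRegression()),
])
demo_model.fit(x_demo, y_demo)

plt.scatter(x_demo, y_demo, alpha=0.6, label='raw data')
plt.plot(x_demo, demo_model.predict(x_demo), 'r-', linewidth=2, label='model predictions')
plt.legend()
plt.show()
# -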
# + # perform one final fit of our best performing estimated model on all the data # fit on all the data # - # display intercept, coefficients and R^2 fit score here # + # visualize the fit, start with the raw data points we are fitting # display the best model fit # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Load some commonly-used packages import matplotlib import matplotlib.pyplot as plt import pandas as pd import numpy as np import math import sys , os # Load a few more useful packages import pySPM import spiepy # - # # Define some helper functions # + def load_sxm(filepath,channel='Z',direction='forward',preflatten='True'): """ Import an SXM image file and prepare it for use in SPIEPy functions. input 'filename' -> string optional Inputs -> strings output -> SPIEPy image object """ # Load the SXM image into Python data = pySPM.SXM(filepath) image = data.get_channel(channel , direction = direction) data.closefile() # Create an empty SPIEPy image object for further use im = spiepy.Im() # Assign the to the 'data' attribute of the new SPIEPy image object im.data = image.pixels if preflatten == 'True': # Pre-flatten to remove any really large-scale tilt in the original image. # This helps further processing steps to work better. im_output, _ = spiepy.flatten_xy(im) else: im_output = im.copy() return im_output def create_mask(im,mask_method = 'mask_by_mean'): """ Creates a mask to exclude certain parts of an image. This can be useful for excluding troublesome sections of an image during flattening. Inputs: im -> SPIEPy image object """ if mask_method == 'mean': # Use this if this if there is contamination, but no atomic resolution mask = spiepy.mask_by_mean(im) elif mask_method == 'peak-trough': # Use this if there is a fair amount of contamination in the imge, but also atomic resolution mask , _ = spiepy.mask_by_troughs_and_peaks(im) elif mask_method == 'step': # Use this if there are step edges in the image mask = spiepy.locate_steps(im, 4) else: print('Unknown masking type.') mask = 'NaN' return mask def plot_masked_image(im,mask): # Make a masked array, to visualize the mask superimposed over the image masked_image = np.ma.array(im.data, mask = mask) palette = spiepy.NANOMAP palette.set_bad('#00ff00', 1.0) # Show the masked image plt.imshow(masked_image, cmap = spiepy.NANOMAP, origin = 'lower') def gen_masked_multiplot(im_unleveled , mask , im_leveled , titles = ['Masked Image' , 'Leveled Image']): # Generate masked image of unleveled image masked_image = np.ma.array(im_unleveled.data, mask = mask) palette = spiepy.NANOMAP palette.set_bad('#00ff00', 1.0) # Generate a subplot figure for plotting. fig , (ax1 , ax2) = plt.subplots(1, 2) # Plot the masked image on the subplot figure ax1.imshow(masked_image, cmap = spiepy.NANOMAP, origin = 'lower') ax1.set_title(titles[0]) # Plot the leveled image on the subplot figure ax2.imshow(im_leveled.data, cmap = spiepy.NANOMAP, origin = 'lower') ax2.set_title(titles[1]) return fig , (ax1 , ax2) # - # # Import an image # The load_sxm function defined above also performs an initial first order plane fit to the image as a means of 'pre-flattening'. 
# + # Set some color pallete options from SPIEPy palette = spiepy.NANOMAP palette.set_bad('#00ff00', 1.0) # Specify a file for testing purposes path_directory = 'C:\\Users\\Jesse\\OneDrive\\Python-Stuff\\Research\\STM AI Project\\Sample_Images\\' filename = 'STM_WTip_Au(111)_Conditioning_127.sxm' filepath = path_directory + filename # Load the test image image = load_sxm(filepath) # Plot the test image plt.imshow(image.data , cmap = spiepy.NANOMAP, origin = 'lower') # - # # Demonstrate a few of the masking features # ## Leveling with Terraces # + # Locate the step mask_step = create_mask(image,mask_method = 'step') # Flatten the image, taking the step into account im_leveled_step, _ = spiepy.flatten_xy(image, mask_step) # Make a compound plot to show the results fig_step , axes_step = gen_masked_multiplot(image , mask_step , im_leveled_step , titles = ['Step Mask' , 'Leveled Image']) # - # ## Leveling with mask-by-mean # This keeps everything within a certain range around the mean of all elements in the image and ignores everything outside of this range. # + # Generate the mask mask_mean = create_mask(image,mask_method = 'mean') # Flatten the image, taking the step into account im_leveled_mean, _ = spiepy.flatten_xy(image, mask_mean) # Make a compound plot to show the results fig_mean , axes_mean = gen_masked_multiplot(image , mask_mean , im_leveled_mean , titles = ['Mask by Mean' , 'Leveled Image']) # - # # Leveling with peak-trough mask # # + # Generate the mask mask_pt = create_mask(image,mask_method = 'peak-trough') # Flatten the image, taking the step into account im_leveled_pt, _ = spiepy.flatten_xy(image, mask_pt) # Make a compound plot to show the results fig_pt , axes_pt = gen_masked_multiplot(image , mask_pt , im_leveled_pt , titles = ['Mask by Peak-Trough' , 'Leveled Image']) # - # # Level with a polynomial plane fit # Note: SPIEPy will only perform polynomial plane with up to second order polynomials (no cubic fitting natively). # + # Flatten the image using a quadratic plane fit. 
image_poly , _ = spiepy.flatten_poly_xy(image , deg = 2) # Show the flattened image plt.imshow(image_poly.data , cmap = spiepy.NANOMAP, origin = 'lower') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import flopy.modflow as fm import mfexport # ## export a model # # #### load the model using flopy m = fm.Modflow.load('lpr_inset.nam', model_ws='data/lpr') m # #### define the location of the model grid # * spacing in meters grid = mfexport.MFexportGrid(delr=m.dis.delr.array * .3048, # grid spacing in meters delc=m.dis.delc.array * .3048, xul=557571, yul=446166, # upper left corner in CRS epsg=3070 # epsg reference for projected CRS ) # ### export all data arrays and stress period data (`MfLists`) to shapefiles, rasters and PDFs # * array data are written to compressed GeoTIFFs # * transient lists are written to shapefiles # * only model cells in the list are included # * stress period data are included as attributes; only periods where stresses change are included # * all data are displayed in PDFs using `matplotlib.pyplot.imshow` # %%capture mfexport.export(m, grid, output_path='postproc') # ### export a package # %%capture mfexport.export(m, grid, 'wel', output_path='postproc') # ### summarize model input df = mfexport.summarize(m) df.head(20) # ### Export model results # # #### cell by cell flows # * written to compressed GeoTIFFs mfexport.export_cell_budget('data/lpr/lpr_inset.cbc', grid, kstpkper=(4, 0), output_path='postproc') # #### heads # * compressed GeoTIFFs # * shapefiles of equipotential contours # + headsfile = 'data/lpr/lpr_inset.hds' mfexport.export_heads(headsfile, grid, hdry=m.upw.hdry, hnflo=m.bas6.hnoflo, kstpkper=(4, 0), output_path='postproc') # - # #### drawdown # * compressed GeoTIFFs # * shapefiles of equipotential contours mfexport.export_drawdown(headsfile, grid, hdry=m.upw.hdry, hnflo=m.bas6.hnoflo, kstpkper0=(4, 0), kstpkper1=(4, 8), output_path='postproc') # #### SFR results # * shapefile of SFR water balance results # * PDFs of simulated stream flows and stream/aquifer interactions # %%capture mf2005_sfr_outputfile = 'data/lpr/lpr_inset.sfr.out' outfiles = mfexport.export_sfr_results(mf2005_sfr_outputfile=mf2005_sfr_outputfile, model=m, grid=grid, kstpkper=(4, 0), output_length_units='feet', output_time_units='seconds', output_path='postproc' ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + import numpy as np from calcbsimpvol import calcbsimpvol S = np.asarray(268.55) # Underlying Price K = np.asarray([275.0]) # Strike Price tau = np.asarray([9/365]) # Time to Maturity r = np.asarray(0.01) # Interest Rate q = np.asarray(0.00) # Dividend Rate cp = np.asarray(1) # Option Type P = np.asarray([0.31]) # Market Price imp_vol = calcbsimpvol(dict(cp=cp, P=P, S=S, K=K, tau=tau, r=r, q=q)) print(imp_vol) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:default] * # language: python # name: conda-env-default-py # --- # # Simple Meshing Workflow # # This is a simple workflow for generating 1D and 2D meshes. 
Note this workflow can do simple 3D meshes as well, especially for simple topography like hexahedral meshes that track a DEM or similar. # # Often, even in very simple simulations, we use variable vertical resolution (dz). For many processes (infiltration vs runoff generation, thermal processes, vegetation) much of the dynamics happens in the very near surface soil (within the top centimeters to meters, or in the critical zone). That said, groundwater depths or the annual thermal signal can propogate to many meters below the surface. To resolve the near surface dynamics without using so many cells that the simulation is unnecessarily expensive to run, we often use small dz (\~10cm or less) at the surface and much coarser (\~1-5 m) at depth. # # This workflow, and most ATS workflows, make **terrain following, extruded** meshes. These meshes can be unstructured in map-view, but follow the same vertical spacing everywhere. This is not required for ATS, but is the most commonly used type of mesh. # # Note that an unstructured mesh is completely specified by: # * nodal coordinates # * topology (faces as collections of nodes, cells as collections of faces) # * labeled sets (collections of nodes, faces, or cells) # # This workflow uses the "meshing_ats" tool provided with the ATS repository. # # ## Note on mesh dimensionality # # All meshes have two dimensions -- the "spatial" dimension (the dimension of the coordinates of the nodes) and the "manifold" dimension (the dimension of a single cell). ATS uses: # # * 3 space, 3 manifold: standard volumetric mesh in 3D # * 3 space, 2 manifold: the surface of the Earth # * 2 space, 2 manifold: a flattened surface mesh whose z-coordinate has been removed # * 1 space, 1 manifold: very few processes use a true 1D mesh # # ## Note on labels # # Labels must be unique. Therefore we use the following conventions to make life simpler across all ATS meshes. This is not _required_, but is strongly encouraged. # # * 1 : the bottom boundary faces of the domain # * 2 : the top (surface) boundary faces of the domain # * 3 : all other boundary faces (side boundaries) # * 4-9 : reserved for user-provided sets, e.g. outlets, observation points, etc. # * 10-99 : Land Cover side sets, typically NLCD indices are used directly # * 100 : geologic layer material IDs, e.g. rock types # * 999 : reserved for bedrock # * 1000-9999 : soil layer material IDs, e.g. soil types # * 10000+ : user-defined, no limitations # # ## Note on format # # We write these meshes to ExodusII files. ExodusII can handle two types of meshes -- "fixed type" meshes, which contain only elements of a given type (e.g. tetrahedrons, hexahedrons, triangular prisms, etc) or "polyhedral meshes." In the former, faces are not stored -- instead, each cell is a list of nodes ordered in a particular way. In the latter, faces are stored explicitly as a list of nodes, and cells are stored as a list of faces. ATS meshing workflows tend to write the latter, because it is actually simpler, and typed meshes can be stored as polyhedral meshes. # # Vis tools like VisIt and Paraview tend to do better with fixed type meshes, however. If you write a fixed type mesh in polyhedral form, MSTK's `meshconvert` utility can be used to detect and change the type automatically. 
This is built and installed with ATS: # # $AMANZI_TPLS_DIR/bin/meshconvert transect.exo transect_fixed_format.exo # # # + import sys,os sys.path.append(os.path.join(os.environ['ATS_SRC_DIR'],'tools','meshing_ats')) import meshing_ats import numpy as np from matplotlib import pyplot as plt # - # # ## Column Mesh # # Columns are useful for getting started, comparing to lab experiments, or for large, flat horizontal extents, where all of the dynamics are assumed to be 1D. # # Note that we use a "column of cells" to make clear that this is still a 3D volume mesh, but has only 1 cell in map-view. # # In this exercise we will make a column of cells of three types -- organic-rich layer, mineral soil layer, and bedrock. The organic-rich layer will be the top 25cm, the mineral soil from 25cm to 2m, and the bedrock from 2m to 40m depth. # # Make a surface mesh that is a 1m x 1m box in map-view. # # note that 2D here refers to the "manifold" dimension -- nodes are 3D, but this # describes a surface consisting of once cell. m2c = meshing_ats.Mesh2D.from_Column(0.,0.,0., width=1) print(f'# of cells: {m2c.num_cells()}') # + # Prepare layer extrusion data # # Meshes are extruded in the vertical by "layer", where a layer may # consist of multiple cells in the z direction. These layers are # logical units to make construction easier, and may or may not # correspond to material type (organic/mineral soil). # # The extrusion process is then given four lists, each of length # num_layers. # layer_types = [] # a list of strings that tell the extruding # code how to do the layers. See meshing_ats # documentation for more, but here we will use # only "constant", which means that dz within # the layer is constant. layer_data = [] # this data depends upon the layer type, but # for constant is the total thickness of the layer layer_ncells = [] # number of cells (in the vertical) in the layer. # The dz of each cell is the layer thickness / number of cells. layer_mat_ids = []# The material ID. This may be either a constant int (for # unform layering) or an array of size [ncells_vertical x ncells_horizontal] in the layer # where each entry corresponds to the material ID of that cell. # first, prescribe a single layer, 5 cells @ 5cm = 25 cm of organic-rich soil layer_types.append('constant') layer_data.append(0.25) layer_ncells.append(5) layer_mat_ids.append(1001) # organic-rich soil ID number current_depth = 0.25 # next, we want a mineral soil layer ranging from 0.25m to 2m deep # -- add another layer of 5 cells @ 5cm = 25 more cm of mineral soil layer_types.append('constant') layer_data.append(0.25) layer_ncells.append(5) layer_mat_ids.append(1002) # mineral-soil ID number current_depth += 0.25 # -- next, we start to grow dz by factors of 2, til we hit 2m dz = .05 i = 0 while current_depth < 2: dz *= 2 layer_types.append("constant") layer_data.append(dz) layer_ncells.append(1) layer_mat_ids.append(1002) # mineral soil current_depth += dz i += 1 # -- where are we at? print(f"After the mineral layer, dz = {dz}, current_depth={current_depth}") # By careful choice of the number of cells and dz, we are now at 2m. # # Now add in a bunch of 2m cells to reach 40 m, of equal dz that is ~2m. 
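# (Added sanity check, not part of the original workflow.) Starting from 0.5 m
# depth with dz = 0.05 m, the doubling loop above adds 0.1 + 0.2 + 0.4 + 0.8 = 1.5 m,
# so current_depth lands on 2 m (up to round-off) and the remaining 38 m divides
# evenly into 2 m cells (38 // 2 = 19) below.
assert abs(current_depth - 2.0) < 1e-6, current_depth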
layer_types.append("constant") layer_data.append(38) layer_ncells.append(38//2) # note, // is integer division layer_mat_ids.append(999) # bedrock material ID current_depth += 38 # print a summary of the final layering meshing_ats.summarize_extrusion(layer_types, layer_data, layer_ncells, layer_mat_ids) # - # Turn this into a 3D mesh (column of cells), and save it to disk m3c = meshing_ats.Mesh3D.extruded_Mesh2D(m2c, layer_types, layer_data, layer_ncells, layer_mat_ids) if os.path.exists('column.exo'): os.remove('column.exo') m3c.write_exodus('column.exo') # ## Transect Mesh # # As a next step, make a transect mesh. This mesh is also a 3D volumetric mesh, but is conceptually a 2D mesh. Think of this as a cross-section of a hillslope. # + # Specify the top surface, given by z(x). # # 1 km long hillslope, 10% slope, 100 cells (or 101 nodes) in x. x = np.linspace(0,1000,101) z = 100 - 0.1*x print(f'# of x and z coordinates: {len(x)}, {len(z)}') # plot the surface topography plt.plot(x,z); plt.xlabel('x distance (m)'); plt.ylabel('z elevation (m)') # make the (manifold) 2D mesh. m2 = meshing_ats.Mesh2D.from_Transect(x,z) print(f'# of cells: {m2.num_cells()}') # + # In this mesh, we vary the organic layer thickness across the hillslope. # # Changing organic layer thickness def organic_thickness(x): """This function is the thickness of the layer we want to vary as a function of distance down the slope""" if x < 100: thickness = 0.5 elif ((100 <= x) and (x <= 200)): thickness = -0.0045*x + 0.95 elif ((200 < x) and (x < 800)): thickness = 0.05 elif ((800 <= x) and (x <= 900)): thickness = 0.0025*x - 1.95 else: thickness = 0.3 return thickness org_layer_thickness = np.array([organic_thickness(xx) for xx in m2.coords[:,0]]) plt.plot(x, org_layer_thickness[0:101]); plt.xlabel('x distance (m)'); plt.ylabel('org. layer thickness (m)'); # + # geometry of the transect extrusion transect_layer_types = [] transect_layer_data = [] transect_layer_ncells = [] depth = [] # bookkeeping for material IDs current_depth = 0 # We use the same dz as the above column, but because the material ID will change # at a given depth in x, we spell it out with 1 cell per transect. 
# # 10 cells @ 5cm dz = .05 depth.append(current_depth) for i in range(10): transect_layer_types.append('constant') transect_layer_data.append(dz) transect_layer_ncells.append(1) current_depth += dz depth.append(current_depth) # grow dz by factors of 2, til we hit 2m i = 0 while current_depth < 2: dz *= 2 transect_layer_types.append("constant") transect_layer_data.append(dz) transect_layer_ncells.append(1) current_depth += dz depth.append(current_depth) # 2m cells to 40m dz = 2 while current_depth < 40: transect_layer_types.append("constant") transect_layer_data.append(dz) transect_layer_ncells.append(1) current_depth += dz depth.append(current_depth) # + # calculate the cell centroid depth depth = np.array(depth) transect_layer_depth = (depth[0:-1] + depth[1:])/2 # allocate 2D matrix for material id, (# surface cells, # layers) n_layers = len(transect_layer_data) transect_layer_mat_ids=np.zeros((n_layers, m2.num_cells()), 'i') for j in range(m2.num_cells()): for i in range(n_layers): if (transect_layer_depth[i] < org_layer_thickness[j]): transect_layer_mat_ids[i,j] = 1001 elif transect_layer_depth[i] < 2: transect_layer_mat_ids[i,j] = 1002 else: transect_layer_mat_ids[i,j] = 999 # - # print out the layer information for the first column of cells meshing_ats.summarize_extrusion(transect_layer_types, transect_layer_data, transect_layer_ncells, transect_layer_mat_ids, 0) # make the mesh, save it as an exodus file m3 = meshing_ats.Mesh3D.extruded_Mesh2D(m2, transect_layer_types,transect_layer_data, transect_layer_ncells, transect_layer_mat_ids) if os.path.exists('transect.exo'): os.remove('transect.exo') m3.write_exodus('transect.exo') # Now convert the file from "polyhedral" to "fixed format" and open it in VisIt or Paraview. # # $AMANZI_TPLS_DIR/bin/meshconvert transect.exo transect_fixed_format.exo # # ![transect1.pnt](./transect1.png) # # ![transect2.png](./transect2.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import glob import pandas as pd import numpy as np import cv2 import mediapipe as mp mp_drawing = mp.solutions.drawing_utils mp_drawing_styles = mp.solutions.drawing_styles mp_hands = mp.solutions.hands # + # Done def gesture_preprocess(landmark): """ convert landmarks for trainable data 66 features X (21): 0-20 Y (21): 21-41 Z (21): 42-62 X,Y,Z range (3): 63-65 params landmark: mediapipe landmark for 1 hand params label: str return: np.array (1,66) """ lm_x = np.array([]) lm_y = np.array([]) lm_z = np.array([]) for hlm in landmark.landmark: lm_x = np.append(lm_x, hlm.x) lm_y = np.append(lm_y, hlm.y) lm_z = np.append(lm_z, hlm.z) data_gest = [lm_x, lm_y, lm_z] x_rng, y_rng, z_rng = lm_x.max()-lm_x.min(), lm_y.max()-lm_y.min(), lm_z.max()-lm_z.min() data_gest = np.ravel([(k-k.min())/(k.max()-k.min()) for i, k in enumerate(data_gest)]) data_gest = np.append(data_gest, [x_rng, y_rng, z_rng]) return data_gest.reshape(1,-1) # for lm in IMAGE_RESULTS[0].multi_hand_landmarks: # print(gesture_preprocess(lm)) # + # Done def data2df(data, train, label, y=None): ''' add data to training df / label df param data: np.array param train: df param label: np.array param y: str return data: df, np.array ''' data_gest = gesture_preprocess(data) train = pd.concat([train, pd.DataFrame(data_gest)]) label = np.append(label, f'{y}') return train, label # train_df = pd.DataFrame() # label = np.array([]) # for lm in 
IMAGE_RESULTS[0].multi_hand_landmarks: # train_df, label = data2df(lm, train_df, label, 'rock') # train_df # label # + # For webcam input: train_df = pd.DataFrame() label = np.array([]) label_class = -1 cap = cv2.VideoCapture(0) with mp_hands.Hands( max_num_hands=1, model_complexity=1, min_detection_confidence=0.5, min_tracking_confidence=0.5) as hands: while cap.isOpened(): # hotkeys for generating dataset if cv2.waitKey(10) == ord('f'): label_class = 0 # hit print("recording hit") elif cv2.waitKey(10) == ord('g'): label_class = 1 # stand print("recording stand") elif cv2.waitKey(10) == ord('h'): label_class = 2 # split print("recording split") elif cv2.waitKey(10) == ord('j'): label_class = 3 # reset print("recording reset") elif cv2.waitKey(10) == ord('k'): label_class = -1 print("recording stopped") success, image = cap.read() if not success: print("Ignoring empty camera frame.") # If loading a video, use 'break' instead of 'continue'. continue # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) results = hands.process(image) # Draw the hand annotations on the image. image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) if results.multi_hand_landmarks: for hand_landmarks in results.multi_hand_landmarks: if label_class >= 0: train_df, label = data2df(hand_landmarks, train_df, label, f'{label_class}') print('training added') mp_drawing.draw_landmarks( image, hand_landmarks, mp_hands.HAND_CONNECTIONS, mp_drawing_styles.get_default_hand_landmarks_style(), mp_drawing_styles.get_default_hand_connections_style()) # cv2.imshow('MediaPipe Hands', cv2.flip(image, 1)) cv2.imshow('MediaPipe Hands', image) if cv2.waitKey(5) & 0xFF == 27: break cap.release() cv2.destroyAllWindows() # - # export training data train_df.to_csv('train_X.csv', header=False, index=False) pd.DataFrame(label).to_csv('train_y.csv', header=False, index=False) # label.tofile('train_y.csv', sep=',') print(train_df.shape) print(label.shape) # + # For static images: filenames = glob.glob("test_imgs/*.jpg") IMAGE_FILES = [cv2.imread(img) for img in filenames] IMAGE_RESULTS = [] test_idx = 0 with mp_hands.Hands( static_image_mode=True, max_num_hands=2, min_detection_confidence=0.5) as hands: for idx, image in enumerate(IMAGE_FILES[0:test_idx+1]): # Read an image, flip it around y-axis for correct handedness output (see # above). # image = cv2.flip(cv2.imread(file), 1) # Convert the BGR image to RGB before processing. results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) IMAGE_RESULTS.append(results) # Print handedness and draw hand landmarks on the image. 
print('Handedness:', results.multi_handedness) if not results.multi_hand_landmarks: continue image_height, image_width, _ = image.shape annotated_image = image.copy() for hand_landmarks in results.multi_hand_landmarks: print('hand_landmarks:', hand_landmarks) # train_df, label = data2df(hand_landmarks, train_df, label, f'{label}') f'Index finger tip coordinates: (', f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width}, ' f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height})' ) mp_drawing.draw_landmarks( annotated_image, hand_landmarks, mp_hands.HAND_CONNECTIONS, mp_drawing_styles.get_default_hand_landmarks_style(), mp_drawing_styles.get_default_hand_connections_style()) clf_result = '' cv2.putText(image, 'Fedex', (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (36,255,12), 2) cv2.imshow('debug gesture', annotated_image) cv2.waitKey(0) cv2.destroyAllWindows() # cv2.imwrite( # '/tmp/annotated_image' + str(idx) + '.png', annotated_image) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # TensorFlow 'raw' :: MNIST MLP # ============================== # # This is a quick illustration of a single-layer network training on the MNIST data. # # ( Credit for the original workbook : :: https://github.com/aymericdamien/TensorFlow-Examples ) # + slideshow={"slide_type": "slide"} import numpy as np import tensorflow as tf import matplotlib.pyplot as plt # %matplotlib inline import gzip import pickle # - # Seed for reproducibility np.random.seed(42) # ### Get the MNIST data # Put it into useful subsets, and show some of it as a sanity check # + slideshow={"slide_type": "slide"} # Download the MNIST digits dataset (only if not present locally) import os import urllib.request mnist_data = '../datasets' mnist_pkl_gz = mnist_data+'/mnist.pkl.gz' if not os.path.isfile(mnist_pkl_gz): if not os.path.exists(mnist_data): os.makedirs(mnist_data) print("Downloading MNIST data file") urllib.request.urlretrieve( 'http://deeplearning.net/data/mnist/mnist.pkl.gz', mnist_pkl_gz) print("MNIST data file available locally") # + slideshow={"slide_type": "slide"} # Load training and test splits as numpy arrays train, val, test = pickle.load(gzip.open(mnist_pkl_gz), encoding='iso-8859-1') X_train, y_train = train X_val, y_val = val X_test, y_test = test # + slideshow={"slide_type": "fragment"} # The original 28x28 pixel images are flattened into 784 dimensional feature vectors X_train.shape # + slideshow={"slide_type": "slide"} # Plot the first few examples plt.figure(figsize=(12,3)) for i in range(10): plt.subplot(1, 10, i+1) plt.imshow(X_train[i].reshape((28, 28)), cmap='gray', interpolation='nearest') plt.axis('off') # - # ### Create the Network # # + # Network Parameters n_input = 784 # MNIST data input (img shape: 28*28) n_hidden_1 = 256 # 1st layer number of features n_hidden_2 = 256 # 2nd layer number of features n_classes = 10 # MNIST total classes (0-9 digits) # tf Graph input x = tf.placeholder("float32", [None, n_input], name='x_input') #y = tf.placeholder("int32", [None, n_classes], name='y_target') # originally, a one-hot label y = tf.placeholder("int32", [ None, ], name='y_target') # This is the label index instead # + slideshow={"slide_type": "slide"} # Create model def multilayer_perceptron(x, weights, biases): # Hidden layer with RELU activation layer_1 = tf.add(tf.matmul(x, 
weights['h1']), biases['h1']) layer_1 = tf.nn.relu(layer_1) # Hidden layer with RELU activation layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']) layer_2 = tf.nn.relu(layer_2) # Output layer with linear activation out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] return out_layer # + # Store layers weight & bias weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) } biases = { 'h1': tf.Variable(tf.random_normal([n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_classes])) } # Construct model logits = multilayer_perceptron(x, weights, biases) pred = tf.argmax(logits, 1) # - # ### Set up the Loss Function # So that we can perform Gradient Descent to improve the networks' parameters during training # Define optimizer for the labels (expressed as a onehot encoding) labels = tf.one_hot(indices=y, depth=10) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) # ### Set up the Training Function # # Parameters for the training phase learning_rate = 0.001 TRAINING_EPOCHS = 10 BATCH_SIZE = 100 display_step = 1 # + slideshow={"slide_type": "fragment"} # Define optimizer optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) #optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost) # - # ### Set up the Initializer # NB: Do this after creating all the variables (including those inplicitly create in, say AdamOptimizer) since this figures out all the variables that need initializing at the point it is defined, apparently. # + slideshow={"slide_type": "slide"} # Define an 'op' that initializes the variables init = tf.global_variables_initializer() # - # ### Batching of Training # For efficiency, we operate on data in batches, so that (for instance) a GPU can operate on multiple examples simultaneously # We'll choose a batch size, and calculate the number of batches in an "epoch" # (approximately one pass through the data). N_BATCHES = len(X_train) // BATCH_SIZE # + slideshow={"slide_type": "slide"} # For training, we want to sample examples at random in small batches def batch_gen(X_, y_, N): while True: idx = np.random.choice(len(y_), N) yield X_[idx], y_[idx] # - # Minibatch generator(s) for the training and validation sets train_batches = batch_gen(X_train, y_train, BATCH_SIZE) # Try sampling from the batch generator. # Plot an image and corresponding label from the training batcher to verify they match. X_batch, y_batch = next(train_batches) plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest') print(y_batch[0]) print((X_batch.shape, y_batch.shape)) # + # Plot an image and corresponding label from the validation set to verify they match. X_batch, y_batch = X_val, y_val plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest') print(y_batch[0]) print((X_batch.shape, y_batch.shape)) # - # ### Test function to check final accuracy # + #correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) # with one-hots correct_prediction = tf.equal(pred, tf.cast(y, tf.int64)) # with indices # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32")) # - # ### Finally, the Training... # For each epoch, we call the training function N_BATCHES times, accumulating an estimate of the training loss and accuracy. 
# # * Then we evaluate the accuracy on the validation set. # # TODO : Print out the ratio of loss in the validation set vs the training set to help recognize overfitting. # + # Launch the graph with tf.Session() as sess: sess.run(init) # Running this 'op' initialises the network weights # Training cycle for epoch in range(TRAINING_EPOCHS): avg_cost = 0. # Loop over all batches for _ in range(N_BATCHES): batch_x, batch_y = next(train_batches) #print(batch_x.shape, batch_y.shape) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x:batch_x, y:batch_y}) # Compute average loss avg_cost += c / N_BATCHES # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch+1), "cost=","{:.2f}".format(avg_cost)) print("Optimization Finished!") # Test model accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_val[0:10], y:y_val[0:10] }) print("Validation Accuracy: %.2f%% for first 10 examples" % ( 100. * accuracy_, )) print("Validation Accuracy: %.2f%%" % ( 100. * accuracy.eval({ x: X_val, y: y_val, }),)) print("DONE") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # License Information # + active="" # This sample IPython Notebook is shared by Microsoft under the MIT license. Please check the LICENSE.txt file in the directory where this IPython Notebook is stored for license information and additional details. # - # # NYC Data wrangling using IPython Notebook and SQL Data Warehouse # This notebook demonstrates data exploration and feature generation using Python and SQL queries for data stored in Azure SQL Data Warehouse. We start with reading a sample of the data into a Pandas data frame and visualizing and exploring the data. We show how to use Python to execute SQL queries against the data and manipulate data directly within the Azure SQL Data Warehouse. # # This IPNB is accompanying material to the data Azure Data Science in Action walkthrough document (https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-process-sql-walkthrough/) and uses the New York City Taxi dataset (http://www.andresmh.com/nyctaxitrips/). # ## Read data in Pandas frame for visualizations # We start with loading a sample of the data in a Pandas data frame and performing some explorations on the sample. # # We join the Trip and Fare data and sub-sample the data to load a 0.1% sample of the dataset in a Pandas dataframe. We assume that the Trip and Fare tables have been created and loaded from the taxi dataset mentioned earlier. If you haven't done this already please refer to the 'Move Data to SQL Server on Azure' article linked from the Cloud Data Science process map. 
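# As a concrete illustration of the join + ~0.1% sub-sample described above, a query of the
# following shape could be used (a hedged sketch only: `nyctaxi.trip` and `nyctaxi.fare` are
# hypothetical placeholder names for the Trip and Fare tables, and the deterministic filter on
# pickup minute and second keeps roughly 1/60 * 4/60, i.e. about 0.1%, of the rows). It would be
# run with `pd.read_sql(sample_query, conn)` once the connection `conn` is created in the cells below.

# +
sample_query = '''
SELECT t.*, f.payment_type, f.fare_amount, f.tip_amount
FROM nyctaxi.trip t
JOIN nyctaxi.fare f
  ON t.medallion = f.medallion
 AND t.hack_license = f.hack_license
 AND t.pickup_datetime = f.pickup_datetime
WHERE DATEPART(mi, t.pickup_datetime) = 0
  AND DATEPART(ss, t.pickup_datetime) < 4
'''
# -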
# #### Import required packages in this experiment
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
import matplotlib.pyplot as plt
import pyodbc
import os
import tables
import time

# #### Set plot inline
# %matplotlib inline

# #### Initialize Database Credentials
SERVER_NAME = ''
DATABASE_NAME = ''
USERID = ''
PASSWORD = ''
DB_DRIVER = ''

# #### Create Database Connection
driver = 'DRIVER={' + DB_DRIVER + '}'
server = 'SERVER=' + SERVER_NAME
database = 'DATABASE=' + DATABASE_NAME
uid = 'UID=' + USERID
pwd = 'PWD=' + PASSWORD
CONNECTION_STRING = ';'.join([driver, server, database, uid, pwd, 'TDS_VERSION=7.3;Port=1433'])
print CONNECTION_STRING
conn = pyodbc.connect(CONNECTION_STRING)

# #### Report number of rows and columns in table
# +
nrows = pd.read_sql('''SELECT SUM(rows) FROM sys.partitions WHERE object_id = OBJECT_ID('.')''', conn)
print 'Total number of rows = %d' % nrows.iloc[0,0]

ncols = pd.read_sql('''SELECT count(*) FROM information_schema.columns WHERE table_name = ('') AND table_schema = ('')''', conn)
print 'Total number of columns = %d' % ncols.iloc[0,0]
# -

# #### Report number of rows and columns in table
# +
nrows = pd.read_sql('''SELECT SUM(rows) FROM sys.partitions WHERE object_id = OBJECT_ID('.')''', conn)
print 'Total number of rows = %d' % nrows.iloc[0,0]

ncols = pd.read_sql('''SELECT count(*) FROM information_schema.columns WHERE table_name = ('') AND table_schema = ('')''', conn)
print 'Total number of columns = %d' % ncols.iloc[0,0]
# -

# #### Read-in data from SQL Data Warehouse
# +
t0 = time.time()

# load only a small percentage of the joined data for some quick visuals
df1 = pd.read_sql('''select top 10000 t.*, f.payment_type, f.fare_amount, f.surcharge, f.mta_tax, f.tolls_amount, f.total_amount, f.tip_amount from . t, . f where datepart("mi",t.pickup_datetime)=0 and t.medallion = f.medallion and t.hack_license = f.hack_license and t.pickup_datetime = f.pickup_datetime''', conn)

t1 = time.time()
print 'Time to read the sample table is %f seconds' % (t1-t0)
print 'Number of rows and columns retrieved = (%d, %d)' % (df1.shape[0], df1.shape[1])
# -

# #### Descriptive Statistics
# Now we can explore the sample data.
We start with looking at descriptive statistics for trip distance: df1['trip_distance'].describe() # #### Box Plot # Next we look at the box plot for trip distance to visualize quantiles df1.boxplot(column='trip_distance',return_type='dict') # #### Distribution Plot fig = plt.figure() ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) df1['trip_distance'].plot(ax=ax1,kind='kde', style='b-') df1['trip_distance'].hist(ax=ax2, bins=100, color='k') # #### Binning trip_distance trip_dist_bins = [0, 1, 2, 4, 10, 1000] df1['trip_distance'] trip_dist_bin_id = pd.cut(df1['trip_distance'], trip_dist_bins) trip_dist_bin_id # #### Bar and Line Plots # The distribution of the trip distance values after binning looks like the following: pd.Series(trip_dist_bin_id).value_counts() # We can plot the above bin distribution in a bar or line plot as below pd.Series(trip_dist_bin_id).value_counts().plot(kind='bar') pd.Series(trip_dist_bin_id).value_counts().plot(kind='line') # We can also use bar plots for visualizing the sum of passengers for each vendor as follows vendor_passenger_sum = df1.groupby('vendor_id').passenger_count.sum() print vendor_passenger_sum vendor_passenger_sum.plot(kind='bar') # #### Scatterplot # We plot a scatter plot between trip_time_in_secs and trip_distance to see if there is any correlation between them. plt.scatter(df1['trip_time_in_secs'], df1['trip_distance']) # To further drill down on the relationship we can plot distribution side by side with the scatter plot (while flipping independentand dependent variables) as follows df1_2col = df1[['trip_time_in_secs','trip_distance']] pd.scatter_matrix(df1_2col, diagonal='hist', color='b', alpha=0.7, hist_kwds={'bins':100}) # Similarly we can check the relationship between rate_code and trip_distance using a scatter plot plt.scatter(df1['passenger_count'], df1['trip_distance']) # #### Correlation # Pandas 'corr' function can be used to compute the correlation between trip_time_in_secs and trip_distance as follows: df1[['trip_time_in_secs', 'trip_distance']].corr() # ## Sub-Sampling the Data in SQL # In this section we used a sampled table we pregenerated by joining Trip and Fare data and taking a sub-sample of the full dataset. # # The sample data table named '' has been created and the data is loaded when you run the PowerShell script. # #### Report number of rows and columns in the sampled table # + nrows = pd.read_sql('''SELECT SUM(rows) FROM sys.partitions WHERE object_id = OBJECT_ID('.')''', conn) print 'Number of rows in sample = %d' % nrows.iloc[0,0] ncols = pd.read_sql('''SELECT count(*) FROM information_schema.columns WHERE table_name = ('') AND table_schema = ('')''', conn) print 'Number of columns in sample = %d' % ncols.iloc[0,0] # - # We show some examples of exploring data using SQL in the sections below. We also show some useful visualizatios that you can use below. Note that you can read the sub-sample data in the table above in Azure Machine Learning directly using the SQL code in the reader module. # ## Exploration in SQL # In this section, we would be doing some explorations using SQL on the 1% sample data (that we created above). # #### Tipped/Not Tipped Distribution # + query = ''' SELECT tipped, count(*) AS tip_freq FROM . GROUP BY tipped ''' pd.read_sql(query, conn) # - # #### Tip Class Distribution # + query = ''' SELECT tip_class, count(*) AS tip_freq FROM . 
GROUP BY tip_class ''' tip_class_dist = pd.read_sql(query, conn) tip_class_dist # - # #### Plot the tip distribution by class tip_class_dist['tip_freq'].plot(kind='bar') # #### Daily distribution of trips query = ''' SELECT CONVERT(date, dropoff_datetime) as date, count(*) as c from . group by CONVERT(date, dropoff_datetime) ''' pd.read_sql(query,conn) # #### Trip distribution per medallion query = '''select medallion,count(*) as c from . group by medallion''' pd.read_sql(query,conn) # #### Trip distribution by medallion and hack license query = '''select medallion, hack_license,count(*) from . group by medallion, hack_license''' pd.read_sql(query,conn) # #### Trip time distribution query = '''select trip_time_in_secs, count(*) from . group by trip_time_in_secs order by count(*) desc''' pd.read_sql(query,conn) # #### Trip distance distribution query = '''select floor(trip_distance/5)*5 as tripbin, count(*) from . group by floor(trip_distance/5)*5 order by count(*) desc''' pd.read_sql(query,conn) # #### Payment type distribution query = '''select payment_type,count(*) from . group by payment_type''' pd.read_sql(query,conn) query = '''select TOP 10 * from .''' pd.read_sql(query,conn) # We have now explored the data and can import the sampled data in Azure Machine Learning, add some features there and predict things like whether a tip will be given (binary class), the tip amount (regression) or the tip amount range (multi-class) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import logging import pandas as pd import matplotlib.pyplot as plt from world_rowing import api, livetracker, utils, dashboard logging.basicConfig(level=logging.INFO) logging.getLogger().setLevel(logging.INFO) # - # # Live tracking # # Running the cell below will automatically update the graph with the livetracking data and predictions for the end of the race. 
dash = dashboard.Dashboard.load_last_race(figsize=(14, 10)) dash.update() # # Viewing livetracking data for previous races # # You can view the livetracking data for previous races as well races = api.get_competition_races() races.iloc[:10] race = races.iloc[8] dash = dashboard.Dashboard.from_race_id( race.name, figsize=(14, 10)) dash.update() # # Viewing competition PGMTs # # Unfortunately doesn't work for the Olympics comp_pgmts = api.get_competition_pgmts() group_boat_pgmts = comp_pgmts.groupby('Boat') boat_pgmts = group_boat_pgmts\ .first()\ .sort_values('PGMT', ascending=False) boat_pgmts # + f, ax = plt.subplots(figsize=(12, 8)) for boat in boat_pgmts.index: pgmt = group_boat_pgmts.get_group(boat).PGMT.sort_values(ascending=False) ax.step(range(pgmt.size), pgmt.values, label=boat, where='post') ax.set_xlim(0, 10) ax.set_ylim(0.9, comp_pgmts.PGMT.max() + .01) ax.legend(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from bokeh.io import curdoc from bokeh.layouts import row, widgetbox from bokeh.models import ColumnDataSource from bokeh.models.widgets import Slider, TextInput from bokeh.plotting import figure from bokeh.io import show, output_notebook # - output_notebook() # + N = 200 x = np.linspace(0, 4*np.pi, N) y = np.sin(x) source = ColumnDataSource(data=dict(x=x, y=y)) # + plot = figure(plot_height=400, plot_width=400, title="my sine wave", tools="crosshair,pan,reset,save,wheel_zoom", x_range=[0, 4*np.pi], y_range=[-2.5, 2.5]) plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6) text = TextInput(title="title", value='my sine wave') offset = Slider(title="offset", value=0.0, start=-5.0, end=5.0, step=0.1) amplitude = Slider(title="amplitude", value=1.0, start=-5.0, end=5.0, step=0.1) phase = Slider(title="phase", value=0.0, start=0.0, end=2*np.pi) freq = Slider(title="frequency", value=1.0, start=0.1, end=5.1, step=0.1) # + def update_title(attrname, old, new): plot.title.text = text.value text.on_change('value', update_title) def update_data(attrname, old, new): # Get the current slider values a = amplitude.value b = offset.value w = phase.value k = freq.value # Generate the new curve x = np.linspace(0, 4*np.pi, N) y = a*np.sin(k*x + w) + b source.data = dict(x=x, y=y) for w in [offset, amplitude, phase, freq]: w.on_change('value', update_data) # + inputs = widgetbox(text, offset, amplitude, phase, freq) # curdoc().add_root(row(inputs, plot, width=800)) # curdoc().title = "Sliders" show(row(inputs, plot, width=800)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Demo grid object with functions # + import pandas as pd import pandapower as pp import os,sys,inspect import matplotlib import matplotlib.pyplot as pyplt import plotly.graph_objs as go import datetime from datetime import date, timedelta import copy currentdir = os.path.abspath(os.getcwd()) parentdir = os.path.dirname(currentdir) sys.path.insert(0,parentdir) from capacitymap.analysis import analysis_check from capacitymap import controllers analysis_check.check_violations.counter=0 from capacitymap.grid import grid # - # ## Create grid object and plot grid at normal operation #create a grid instance for testing empt_df = 
pd.DataFrame(columns=['Start', 'End', 'Object_type','ObjectID', 'Status']) net = pp.from_json( os.path.join(currentdir,'data\svedala\svedala.json')) network = grid.Grid('test',net, {}, empt_df) go.Figure(network.plot_traces()) network.project_df #empty tabel network.config_dict #no configuration changes planned # ## Store new project # + #new incoming project, here one line will be taken out-of-service for a time period new_request = {'Start':datetime.datetime(2021,9,1), 'End':datetime.datetime(2021,9,3), 'Object_type':'line', 'ObjectID':network.grid.line.name.loc[3], 'Status':False} new_request2 = {'Start':datetime.datetime(2021,9,5), 'End':datetime.datetime(2021,9,15), 'Object_type':'line', 'ObjectID':network.grid.line.name.loc[9], 'Status':False} #project is stored network.store_project_and_update_config(new_request) network.store_project_and_update_config(new_request2) # - network.project_df #a record for every timestep, here 1 day is created for the stored project and the configuration change is stored on every timestep network.config_dict # ## Get grid model for specific date date = datetime.datetime(2021,9,1) network.active_config = network.config_dict[date] network.config() go.Figure(network.plot_traces()) network.restore() #restore grid mdel back to normal grid # ## Remove planned project project_to_drop = new_request network.remove_project_and_update_config(project_to_drop) network.project_df network.config_dict # ## Test new request before saving new_proj = {'Start':datetime.datetime(2021,9,1), 'End':datetime.datetime(2021,9,3), 'Object_type':'line', 'ObjectID':network.grid.line.name.loc[4], 'Status':False} #Generate dict of configuration related to new project conf1 = grid.generate_config_from_project(new_proj) conf1 # + #combine new project dict with already saved projects from config_dict config_to_test = grid.combine_dict(conf1,network.config_dict) # select active config based on time and config_dict with approved plans and new plan test_date = datetime.datetime(2021,9,1) network.active_config = config_to_test[test_date] network.config() # Analyse active config analysis_res, exp = analysis_check.check_violations(network.grid) print(analysis_res) # Restore to normal before proceeding network.restore() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Car Price Prediction # + #Reading the dataset import pandas as pd df = pd.read_csv('car data.csv') # - #Reading the columns an first five rows df.head() #checking total rows and columns, we have 8 features and 1 target column(selling price) df.shape #checking the unique values of the categorical features print(df['Fuel_Type'].unique()) print(df['Seller_Type'].unique()) print(df['Transmission'].unique()) print(df['Owner'].unique()) #Check missing or null values df.isnull().sum() df.describe() df.columns #Removing the car name column as its irrelevant final_dataset =df[[ 'Year', 'Selling_Price', 'Present_Price', 'Kms_Driven', 'Fuel_Type', 'Seller_Type', 'Transmission', 'Owner']] final_dataset.head() #Adding a new feature as current year which is 2020, this is needed to calculate the age of the car. final_dataset['Current_year']=2020 final_dataset.head() #Arriving at the age of the car. 
final_dataset['no_year']=final_dataset['Current_year']-final_dataset['Year'] final_dataset.head() #Inplace=True is to make it a permanent operation #Dropping the Year and current year column final_dataset.drop(['Year'],axis=1,inplace=True) final_dataset.drop(['Current_year'],axis=1,inplace=True) final_dataset.head() #pd.dummies is used to apply one hot encoding technique, drop first is used to drop the first column from creatng dummy variable trap. final_dataset=pd.get_dummies(final_dataset,drop_first=True) final_dataset.head() #Checking the correlation between various features. final_dataset.corr() import seaborn as sns #checking different vizualizations with the various features available sns.pairplot(final_dataset) import matplotlib.pyplot as plt # %matplotlib inline #cormat variable is created to be used ffor creation of a heatmap vizualization corrmat=final_dataset.corr() top_corr_features=corrmat.index plt.figure(figsize=(10,10)) #plot heat map g=sns.heatmap(final_dataset[top_corr_features].corr(),annot=True,cmap="RdYlGn" ) final_dataset.head() #independent and dependent feature x=final_dataset.iloc[:,1:] y=final_dataset.iloc[:,0] x.head() y.head() #Feature importance UNderstanding which feature is important from sklearn.ensemble import ExtraTreesRegressor model=ExtraTreesRegressor() model.fit(x,y) #checking the most important features print(model.feature_importances_) #plot graph for the most important feature feat_importances=pd.Series(model.feature_importances_,index =x.columns) feat_importances.nlargest(5).plot(kind='barh') plt.show() from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test=train_test_split(x,y,test_size=0.2) from sklearn.ensemble import RandomForestRegressor rf_random =RandomForestRegressor() #Hyper parameters #RandomizedSearchCV helps to find out find best parameters considering how many fmax features, depth etc is used. from sklearn.model_selection import RandomizedSearchCV import numpy as np n_estimators =[int(x) for x in np.linspace(start=100, stop=1200,num=12)] print(n_estimators) #No of trees in the forest n_estimators =[int(x) for x in np.linspace(start=100, stop=1200,num=12)] #no of features to consider in every split max_features=['auto','sqrt'] #max no of levels in a tree max_depth=[int(x) for x in np.linspace(5,30,num=6)] #max_depth.append(None) #Min non of samples needed to split a node min_samples_split=[2,5,10,15,100] #min no of samples needed at each leaf node min_samples_leaf=[1,2,5,10] #create a random grid random_grid={'n_estimators':n_estimators, 'max_features':max_features, 'max_depth':max_depth, 'min_samples_split':min_samples_split, 'min_samples_leaf':min_samples_leaf} print(random_grid) from sklearn.model_selection import RandomizedSearchCV rf=RandomForestRegressor() # + active="" # rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, scoring='neg_mean_squared_error', n_iter=10, cv=5, verbose=2, random_state=42, n_jobs=1) # - rf_random.fit(x_train,y_train) predictions = rf_random.predict(x_test) predictions #Checking the difference from our pedictions to our actuals sns.distplot(y_test-predictions) #Creating vizualizations comaring our predictions with our actual values. 
plt.scatter(y_test,predictions) # + import pickle #open a file where you can store the data #file =open('random_forest_regression_model.pkl','wb') #dump information to that file pickle.dump(rf_random,open('random_forest_regression_model.pkl','wb')) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests, json, os import psycopg2 from datetime import datetime from dateutil import tz import pandas as pd requests.__version__ pd.__version__ # # Make Connection to Google cloud function import urllib.request QUERY_FUNC = 'https://us-central1-divvy-bike-shari-1562130131708.cloudfunctions.net/query_from_divvy_cloudsql' # %%time reslst = requests.post(QUERY_FUNC, json={'stationid': '35, 192'}).content.decode('utf-8') def to_dataframe(raw): lst_of_lst = [v.split(',') for v in raw.split('\n')] df = pd.DataFrame(lst_of_lst, columns=['timestamp', 'stationid', 'bikes_avail', 'docks_avail']) return df reslst = requests.post(QUERY_FUNC, json={'stationid': '35,192,100,2'}).content.decode('utf-8') df = to_dataframe(reslst) df.shape df.info() tmp = df[df.stationid == '35'].copy() tmp['timeindex'] = pd.to_datetime(df['timestamp']).dt.tz_localize('utc').dt.tz_convert('US/Central') tmp['month'] = tmp.timeindex.apply(lambda x: x.month) tmp['day'] = tmp.timeindex.apply(lambda x: x.day) tmp['hour'] = tmp.timeindex.apply(lambda x: x.hour) tmp.bikes_avail = tmp.bikes_avail.astype('int') tmp.docks_avail = tmp.docks_avail.astype('int') min_df = tmp.groupby(['month', 'day', 'hour'])[['timeindex', 'bikes_avail', 'docks_avail']].min()\ .reset_index().rename(columns={"bikes_avail": "min_bikes", "docks_avail": "min_docks"}) max_df = tmp.groupby(['month', 'day', 'hour'])[['bikes_avail', 'docks_avail']].max()\ .reset_index().rename(columns={"bikes_avail": "max_bikes", "docks_avail": "max_docks"}) ave_df = tmp.groupby(['month', 'day', 'hour'])[['bikes_avail', 'docks_avail']].mean()\ .reset_index().rename(columns={"bikes_avail": "ave_bikes", "docks_avail": "ave_docks"}) min_df.merge(max_df, on=['month', 'day', 'hour']).merge(ave_df, on=['month', 'day', 'hour']) # + # df.bikes_avail.rolling(12).min() # - # # Visualize data import plotly from plotly.offline import iplot, plot plotly.__version__ # # Query Live Station Status (Up to now) DIVVY_STATION_URL = 'https://gbfs.divvybikes.com/gbfs/en/station_information.json' # + res = requests.get(DIVVY_STATION_URL) jsonres = res.json() station_json = json.dumps(jsonres['data']['stations']) station_status_df = pd.read_json(station_json) # - cleaned_stationdata = station_status_df[['capacity', 'lat', 'lon', 'station_id', 'name', 'short_name']] cleaned_stationdata.to_csv('station.csv') cleaned_stationdata.head(5) # # Make Connection to postgres # ## Initialize connection # + gcp_sql_username = os.environ.get('gcp_sql_username') gcp_sql_password = os.environ.get('gcp_sql_password') conn = psycopg2.connect(user=gcp_sql_username, password=, host='localhost', port='5432') # - # ## Query data # + # %%time DISPLAY_ROWS = 10000 cur = conn.cursor() cur.execute('SELECT * FROM divvylivedata WHERE stationid = %s;' %('192')) print("Total rows: {}\nDisplayed rows: {}\n".format(cur.rowcount, DISPLAY_ROWS)) row_counter = 1 row = cur.fetchone() while row is not None and row_counter <= DISPLAY_ROWS: # print(','.join([str(v) for v in row])) row = cur.fetchone() row_counter += 1 print(row_counter) cur.close() # - # ### 
Convert unix timestamps into timestamps and consider timezone utc_timestamp = datetime.utcfromtimestamp(1565246835).strftime('%Y-%m-%d %H:%M:%S') print(utc_timestamp) # + # METHOD 1: Hardcode zones: from_zone = tz.gettz('UTC') to_zone = tz.gettz('America/Chicago') # # METHOD 2: Auto-detect zones: # from_zone = tz.tzutc() # to_zone = tz.tzlocal() # utc = datetime.utcnow() utc = datetime.strptime(utc_timestamp, '%Y-%m-%d %H:%M:%S') # Tell the datetime object that it's in UTC time zone since # datetime objects are 'naive' by default utc = utc.replace(tzinfo=from_zone) # Convert time zone central = utc.astimezone(to_zone) print("Local time in Chicago: ", central) # - # ## Close connection if conn: conn.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="1Ba0xpg6BZMe" colab_type="code" outputId="6a8d5097-ff5d-4663-f1ba-c43de7393f20" colab={"base_uri": "https://localhost:8080/", "height": 34} # %pylab inline # + id="WA5sm9e1Bdqh" colab_type="code" colab={} import numpy as np import pandas as pd from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn import model_selection from sklearn.neural_network import MLPClassifier from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score # + id="0rpoZ9HfBmnD" colab_type="code" outputId="805d3e0a-6b0a-4f1d-c2eb-70cd0ce6e9e0" colab={"base_uri": "https://localhost:8080/", "height": 34} from google.colab import drive drive.mount("/content/gdrive", force_remount=True) # + id="pxZ-sHTCBqgb" colab_type="code" colab={} # Read the dataset crimes_stacked = pd.read_csv('gdrive/My Drive/Crime Predictor/crimes_stacked.csv', index_col=None) # + id="Fj7akM9lBt2T" colab_type="code" colab={} # Get rid of useless columns X = crimes_stacked[['Arrest', 'Domestic', 'Beat', 'Community Area', 'Latitude', 'Longitude', 'Year', 'Hour', 'Closest police station']] # Extract the target feature y = crimes_stacked['Primary Type'] # + id="cpWbQ_noB5z3" colab_type="code" colab={} # Fill Nan with 0 X = X.fillna(0) # Gather column names feature_names = list(X) target_names = list(y) # + id="1zensZ6GB6VS" colab_type="code" outputId="6c48ff5e-7af0-4f6b-82e1-27ce416832fa" colab={"base_uri": "https://localhost:8080/", "height": 51} from sklearn.preprocessing import MinMaxScaler # Separate data into training and testing datasets X = MinMaxScaler().fit_transform(X) seed = 42 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed) # Rescale the data scaler = StandardScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # + id="dpQyregrB9J4" colab_type="code" outputId="8a4ef908-fdeb-4049-f9a7-e31d9ae4f096" colab={"base_uri": "https://localhost:8080/", "height": 34} # Use grid search with 3 K-Fold to find the best parameters scoring = 'accuracy' kfold = model_selection.KFold(n_splits=3, random_state=seed) # Parameters to be tested parameters = {'activation':['logistic', 'tanh', 'relu'], 'hidden_layer_sizes':[(50, 15), (25, 10), (10, 5)], 'solver':['sgd', 'adam'], 'learning_rate_init':[0.001, 0.01]} # Comparison mlp = MLPClassifier() clf = GridSearchCV(mlp, parameters, cv=kfold) print("kFold Done") clf.fit(X_train, y_train) print("Fitted") print("Best estimator: ", clf.best_estimator_) # + 
id="n2s82FZ9CBv3" colab_type="code" colab={} # Train with the best parameters mlp = clf.best_estimator_ mlp.fit(X_train, y_train) # + id="m5nfXpwmCCV7" colab_type="code" colab={} results = cross_val_score(dt, X, y) print("Accuracy: %.3f%% (%.3f%%)") % (results.mean()*100.0, results.std()*100.0) # + id="lbupSf3RCEhk" colab_type="code" colab={} print("Current loss:", mlp.loss_) # + id="7Hza0D5PCHCz" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="F-ynZ8-x8QOu" outputId="fe7318cd-0fe8-4651-f784-bbd3dd5cb3b9" #1 #Função Principal def f_fahrenheit(F): C = (F-32) * 5/9 return C #Teste Função f_fahrenheit(255) # + id="Kuh8Z7dZ8jlP" #2 def hipotenusa(a, b): h = ((a**2) + (b**2)) ** 0.5 return h #Teste Função hipotenusa(52, 44) # + id="a5WAaY1U8vxL" #3 def aprovacao(nota_1, nota_2): if (nota_1 + nota_2) / 2 >= 6: print("PARABÉNS! VOCÊ FOI APROVADO") else: print("ESTUDE MAIS!!") #Teste Função aprovacao(7, 5) # + id="G96imoZp8yw7" # + id="it3fYxzB-4fB" #9 def divisivel(x, y): if (x % y == 0): return 1 else: return 0 #Teste Função divisivel(10, 10) # + id="ReDeUvLa--Jt" #10 def maior_que(a, b): if a > b: return a else: return b #Teste Função maior_que(2, 3) #maior_que(7, 4) # + id="EspwtrVd-_BS" #11 def pol_cm(pol): cm = pol * 2.54 return cm #Teste Função pol_cm(55) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="Jj16tLhUvqQW" outputId="407c09cc-1408-465c-f770-6fead4fa6e24" # !pip install pycrypto # + colab={"base_uri": "https://localhost:8080/"} id="NcXt4pqDv9fW" outputId="9b7e2b6e-1339-42a9-d95d-e5e6f2214908" from Crypto.Cipher import AES #importring AES from os import urandom #importing OS import binascii key = urandom(32) iv = urandom(16) print(key,iv) # + colab={"base_uri": "https://localhost:8080/"} id="HwD5rHAxwDE5" outputId="8b24f6f4-b732-45dc-c3c2-0d6680f056ba" ############################# ENCRYPTION ####################################### encrypterObj = AES.new(key, AES.MODE_ECB, iv) encryptedText = encrypterObj.encrypt(('useDemo password'.encode())) print('Encrypted Password : ',encryptedText) # + colab={"base_uri": "https://localhost:8080/"} id="vg5eA994wF4J" outputId="fc089ae9-5ab8-4300-bdaf-cb9c5d93fed2" ############################# DECRYPTION ####################################### decrypterObj = AES.new(key, AES.MODE_ECB, iv) decryptedText = decrypterObj.decrypt(encryptedText) decryptedPassword = decryptedText.decode('UTF-8') print('Decrypted message : ',decryptedText.decode()) # + colab={"base_uri": "https://localhost:8080/"} id="70phYWSkwIlR" outputId="f2ff7998-0894-4fcd-dce6-fb6980aefbfb" if ('useDemo password' == decryptedPassword): print('Password Encrypted and Decrypted Successfully') else: print('not same') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Základy # # ## Poznámky k jupyter notebook # # Zachovejte PEP8 za každou 
cenu (pokud není potřeba jinak). # # V jupyter notebook je možné odkazovat na objekty vytvořené v předchozích buňkách (třeba reference na třídu, nebo proměnnou). Není ale možné přistupovat k buňkám samotným (například zastavit nekonečnou smyčku). # # Poslední řádek v buňce se vytiskne jako výstup, pokud vrací nějaký výstup. V případě že potřebujete tisknout i jiné řádky, použijte funkci `print()`. Příklad: print(11) 22 11 22 a = 11 22 a = 11 a print(11) print(a) # ## Čísla, text a základní operace # # Detailní informace: https://realpython.com/python-data-types/ # # Následuje příklad vytvoření proměnných obsahující celé číslo, reálné číslo a text. a = 1 print(a) b = 1. print(b) c = "1" print(c) d = True # d = False print(d) # ### type() - jak zjistit typ # Typ proměnné je možné ověřit následovně: type(a) type(b) type(c) type(d) type(int) # ### Základní operace # Následuje pár ilustrativních příkladů pro získání představy o tom jaké operace jsou možné a jaké ne: 1 + 2 1. - 2 "1" + "2" "1" + 1 "a" - "b" 3 * 4 "abc" * 3 "já ne" + " ko" * 10 + "ktám" 3 / 2 3 // 2 3 % 2 # Code example 1.0 # output should be equal to dividend dividend = 100 divisor = 6 result = dividend // divisor leftover = dividend % divisor result * divisor + leftover 3**2 3**(1/2) 3**1/2 # Pozor, `bool` variable se chová jako číslo: `True` ~ 1, `False` ~ 0. True + 1 True - 1 False + 1 True * 2 # !!! True + "abc" # Provádění některých operací s proměnnou `in place` je možné následovně: a = 1 print(a) a += 2 print(a) a *= 3 print(a) a -= 10 print(a) # ### Indexování, slicing # Indexování a slicing jsou operace které vám umožňují udělat podvýběr z nějaké kolekce. Python indexuje od `0`. Je možné používat i negativní index. Poslední element v kolekci má index `-1`. Následují příklady na textu: text = "Hello world!" text text[:] text[0:-1] text[3:5] text[2:-2] len(text) # Pozor, né každý objekt má index! Viz následující příklad. # !!! a = 1554254 a[1] a = str(a) int(a[1]) # ### dir() - jak zjistit všechny vlastnosti objektu # Pomocí funkce `dir()` můžete najít všechny vlastnosti/parametry/funkce připojené k libovolnému objektu. Pomocí této funkce tedy můžete zjistit, jestli má objekt definované operace jako sčítání, odčítání atp.. Následují příklady (výstup není tisknut, protože je dlouhý): a_dir = dir(a) root_dir = dir() # ## Kontejnery # Kontejnery jsou datové typy, které slouží ke skladování jiných (nebo stejných) datových typů. # ### List # List je nejvíc běžný kontejner. Je vymezen pomocí hranatých závorek `[]`. Je možný ho měnit přístupem přes index. a = [] type(a) b = ["1", 3.0, "a", 4] b b[2] b[2] = 3 b b[2:3] = ["a", "b", "c"] b c = ["1", "b"] + [3] * 3 c b[1] = c d = [c, b] d len(d) # Zajímavost, převod textu na list: a = list("Hello World!") a a = "Hello World!".split(" ") a # ### Tuple # Tuple je něco jako list. Rozdíl je, že tuple nelze měnit přístupem přes index. Je vymezen pomocí kulatých závorek. a = (1, 2, 3, "a") type(a) # !!! a[1] = 1 b = a + (1, 2) * 2 b c = (a, b) c # ### Dictionary (slovník) # Slovník je neseřazená kolekce párů (klíč, hodnota). Kolekce není řazená protože jejím indexem jsou klíče. Slovník je tedy možné chápat jako list, v kterém index nemusí být jen celá čísla, ale cokoliv (jakýkoliv hashovatelný objekt). Slovník je vymezent pomocí složených závore `{}`. Následují příklady. a = {} type(a) a = {"key1": "value1", "key2": "value2"} a a["key1"] a["key1"] = 133 a a["new_key"] = 2.0 a # Některé užitečné funkce jsou ukázány v následujících příkladech. 
Kompletní seznam vlastnosti/funkcí je možné získat pomocí funkce `dir()`. a.keys() a.items() a.get("key2", "Does not exist") a.get("key100", "Does not exist") # ### Set (množina) # Datový typ `set` představuje množinu, podobně jako jí známe z matematiky. Tento typ není často používán. Zde je uveden spíše pro zajímavost. V běžné praxi se používá pouze jako trik, jak se zbavit duplicitních elementů v kontejneru. a = (1, 3, 4, 5, 3, 5, 1, 8, 8, 8, 1) a = set(a) a = tuple(a) a # ## Řizení chodu programu # ### Vyhodnocení výrazu # Následují ukázky využítí různých operátorů pro porovnání. 1 > 2 1 <= 2 "a" == "a" 1 == 2 "1" == 1 True == 1 False == 1 3 > 2 > 1 (1 > 2) or (3 == 3) (1 > 2) and (3 < 4) # Negace může být realizována pomocí `not` před výrazem, nebo pomocí operátoru `!=`. 1 != 1 not 1 == 1 # Pozor, `list` a `str` jsou `True`, pokud nejsou prázdné. True == [] True == [1, 2] True == "abc" True == "" (not True == "") == (len("") == 0) # ### Podmínka IF # Příklady podmínky IF (a ELSE, ELIF) následují: if True: print("It is True") a = 0 if not a == 0: print("It is True") else: print("It is not True") a = 0 if a: print("It is True") else: print("It is not True") a = 2 if a == 1: print("a is 1") elif a == 2: print("a is 2") else: print("a is different from 1 and 2") # Něco navíc: Python obsahuje i takzvaný *ternary operator*: a = 1 b = "abc" if a == 1 else "def" b # ### Smyčka FOR # Smyčka FOR slouží k iterování přes iterovatelný objekt. a = "Hello" for letter in a: print(letter) a = ["1", "a", [1, 2]] for item in a: print(item) # Pokud potřebujete for smyčku použít jako počítadlo, můžete iterovat přes řadu čísel získanou pomocí funkce `range`. for idx in range(1, 4): print(idx) a = [1, 4, 5, 8] for idx in range(0, len(a)): print(idx, a[idx]) a = {"key1": "value1", "key2": "value2"} for key, value in a.items(): print(key, value) # Něco navíc: v pythonu existuje *list comprehension* a generátory. Výstupem *list comprehension* je list. Výstupem generátoru je iterovatelný objekt. # list comprehension a = ["Hello " + name for name in ["John", "Alice", "Bob"]] a # generator a = ("Hello " + name for name in ["John", "Alice", "Bob"]) print(a) for item in a: print(item) # ### Smyčka WHILE # Smyčka while slouží k iterování do doby než je splněna podmínka. Příklady následují. a = 0 while a < 5: print("Waiting...", a) a += 1 # ## Import # Příkaz import slouží k importu dalších funkcí (tříd, objektů, ...) z externích souborů. Importovat je možné balíčky obsažené v instalaci Pythonu, balíčky instalované z exterích zdrojů, nebo vlastní lokálně skladované balíčky. V tuto chvíli budeme importovat pouze balíčky ze základní instalace Pythonu. Následují příklady: import time time.time() from time import time time() from time import time as t t() # Poznámka: ověřit úspěšnost importu je možné opět pomocí funkce dir(). Stejně tak je možné pomocí funkce dir obsah balíčku. # ## Přístup k souborům # + f = open("data/example.txt", "w") f.write("Hello!") f.close() f = open("data/example.txt", "r") content = f.read() f.close() print(content) # + with open("data/example.txt", "w") as f: f.write("Hello again!") with open("data/example.txt", "r") as f: content = f.read() print(content) # - # ## K zapamatování # ### Funkce používají kulaté závory, do kterých se píšou argumenty. Následují různé správné použití: print() print(1) print(1, end=" ") list() list([1, 3, 3]) # ### Index používá hranaté závorky. 
Následují příklady: [1, 2, 3, 4][:3] (1, 3, 3)[-1] "abadsfqwe"[2:4] # ### K adresování vlastnosti nebo funkce nějakého objektu, se používá tečka (objekt.funkce). Následující příklady: (1, 3, 5, 4, 1, 2, 4).count(1) "fa sdf ad".split(" ") "_".join(["a", "b", "c"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Set data dice1 = [1,2,3,4,5,6] dice2 = [1,2,3,4,5,6] total = len(dice1) * len(dice2) total_possibilities = 0 # Verify possibilities for i in range(len(dice1)): for j in range(len(dice2)): if (dice1[j] + dice1[i]) <= 9: total_possibilities = total_possibilities + 1 print("{0} + {1} = {2}".format(dice1[i], dice2[j], (dice1[j] + dice1[i]))) # Get probability probability = total_possibilities / total print("Probability: {0}/{1} = {2}".format(total_possibilities,total,probability)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6.13 ('switched-on-leibnitz') # language: python # name: python3 # --- # # Numeros reales # # Laboratorio de cálculo diferencial e integral 2. # # Prof. (Gory). # ## Tipos numéricos # Los tipos (o clases) numéricos en python son estructuras de datos que tienen un valor numérico asignado. Cuando se habla de un valor numérico, se hace referencia a un elemento del conjunto de los números naturales $\mathbb{N}$, de números enteros $\mathbb{Z}$, de números racionales $\mathbb{Q}$, de números reales $\mathbb{R}$ ó de números complejos $\mathbb{C}$. # # ***Definición***: Un tipo de dato numérico de $n$ bits para el conjunto de números $\mathcal{N}\in \{ \mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C} \}$, es una función que asignan a cada secuencia de $n$ bits, un único elemento del conjunto $\mathcal{N}$. # # Los siguientes son los tipos de datos numéricos por defecto en python. # Como hacer un tipo de datos print(type(1)) print(type(1.23)) # ### El problema de representación # Las computadoras actuales poseen memoria limitada, así que definir un tipo de dato numérico de $n >> 1$ bits, no valdrá la pena si no es posible almacenar una secuencia de $n$ bits en memoria. Se debe elegir entonces una $n$ no muy grande para poder almacenar varios números en el ordenador y que dicho tamaño sea estandarizado para compartir y resultados entre máquinas. # # Así que el dominio de un tipo de dato numérico $T$ es un conjunto finito de las $2^n$ secuencias distintas de bits en memoria, y al ser $T$ inyectivo, los números que pueden ser representados por este tipo son un subconjunto finito de $\mathcal{N}$ de $2^n$ elementos. # # ***Definición***: Un número $x\in\mathcal{N}$ es *representable* mediante el tipo de dato numérico $T$ si y sólamente sí $x$ está en la imagen de $T$. # # El hecho que el subconjunto de números representables por un tipo es finito, implica que **no es posible** definir un tipo numérico que pueda representar al conjunto de los números reales en su totalidad. Sin embargo el tipo de datos usual para el conjunto de números reales es el *float*, y los números representables por este tipo se llaman *Números flotantes* que al conjunto de estos lo denotaremos por $\mathbb{A}$. # La distribución de $\mathbb{A}$ sobre $\mathbb{R}$ se puede apreciar en la siguiente imagen. # # ***Pregunta*** ¿Cuál es la diferencia de esta imagen con la regla? 
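# The following cell is an added sketch (not part of the original lab; it assumes Python 3.9+ for `math.ulp`): it prints the gap between a float and the next representable float, illustrating that the set of representable floats is dense near 0 and increasingly sparse at larger magnitudes.

# Added example: spacing between consecutive representable floats
import math

for x in [1.0, 100.0, 1e8, 1e16]:
    # math.ulp(x) is the distance from x to the next representable float
    print(f"x = {x:.0e}  ->  gap to next float = {math.ulp(x):.3e}")
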
# Errores con el punto flotante print(0.1, 0.2, 0.3) print(0.1 + 0.2) print(0.1 + 0.2 - 0.3) # ### Aproximando reales con flotantes # A pesar de no ser un tipo de dato perfecto, el *float* ha tenido un papel fundamental en la computación científica, la simulación, los gráficos por computadora, y la ingeniería, ramas que han generado productos escenciales en el desarrollo humano. # # Los números flotantes $\mathbb{A}$, hoy en día siguen siendo una forma útil de representar y aproximar números reales. A continuación se mostrará este hecho con un algoritmo que calcula una aproximación de $\sqrt{2}$ con un error menor que un $\epsilon>0$ dado. Recuérdese que $\sqrt{2}$ es un número irracional que posee una expansión decimal infinita no periódica. # ***Algoritmo***: Aproximación de raiz cuadrada de 2 # 1. Calcular $x_0$ el mayor entero menor o igual que $\sqrt{2}$, es decir $x_0 = \max\{z \in \mathbb{Z} \lvert z^2 \leq 2\}$. Este entero cumple que $\lvert \sqrt{2}-x_0 \lvert < 1$, osea que la aproximación tiene un error menor a 1. # 1. Iterativamente calcular $d_i$ el mayor dígito que cumple $$ x_i^2 = \left( x_0 + \sum_{k=1}^i d_k10^{-k} \right)^2 \leq 2 $$ Esta aproximación tiene un error menor a $10^{-i}$. # 1. Continuar el proceso hasta obtener la $n$-ésima aproximación $x_n$ que tenga error $10^{-n} < \epsilon$ # Aproximación de raiz de 2 def raiz_2(error=0.001): # Calcular x_0 x_0 = 0 while x_0**2 <= 2: x_0 += 1 if x_0**2 == 2: return x_0 x_i = x_0 - 1 err = 1 print('Aproximación con error menor que', err, '=', x_i) # Calcular x_i while error < err: err = err/10 for d_i in range(1,10): x_temp = x_i + d_i*err if x_temp**2 > 2: break if x_temp**2 == 2: return x_temp x_i = x_temp - err print('Aproximación con error menor que', err, ':', x_i) return x_i # Probando la función raiz_2(error=0.000000001) import math math.sqrt(2) - raiz_2(0.00001) # ## Funciones de $\mathbb{R}$ en $\mathbb{R}$ # De la misma forma que es posible representar valores numéricos de distintas formas, también existen estructuras de datos variadas para representar funciones de los reales en sí mismos. El tipo de codificación que se elija dependerá del subconjunto de funciones que se esté estudiando. # ### Funciones de $\mathbb{A}$ en $\mathbb{A}$. # # Python nos permite definir funciones que tomen un parámetro de tipo _float_ y que regrese otro dato del mismo tipo después de ejecutar un bloque de código llamado cuerpo. # + import numpy as np # Funciones de A en A def f(x): return -(x+1)**2 + 80 def g(x): return np.sin(x)/3 # + print('Llamada directa:', 'f({0}) = {1}'.format(3.1416, f(3.1416))) print('Iterar una lista:') dominio = [1, 2.55, 3.1416, 7.77] for x in dominio: print(f'f({x}) = {f(x)}') print('Mapear una lista:') parejas = list( map(f, dominio) ) print(parejas) print('Numpy') print(g(dominio)) # - composicion = lambda x: g(f(x)) composicion(1) # Este tipo de funciones son buenas para representar funciones continuas que puedan ser escritas algorítmicamente. Debido al siguiente teorema. # # ***Teorema***: Sean $f$ y $g$ funciones continuas de $\mathbb{R}$ en $\mathbb{R}$. Si $f(q) = g(q), \forall q\in\mathbb{Q}$ entonces $f = g$. # # ***Demostración***: Considérese la función continua $h = f-g$ que se anula en los racionales. 
Por densidad de los números racionales en los reales, para cualquier $r\in\mathbb{R}$ existe una secuencia $\{q_i\}_{\mathbb{N}}$ de racionales tales que $\lim_{n\rightarrow\infty}q_i = r$, y por continuidad de h se cumple que $$ 0 = \lim_{n\rightarrow\infty}f(q_i) = f(r)$$ Así que $h$ es la función constante 0, demostrando que $f = g \;\blacksquare$ # # # ### Polinomios en computación simbólica # La computación simbólica o álgebra computacional, es la rama de las ciencias de la computación que estudia la manipulación algoritmica de expresiones matemáticas y su implementación en software. # + import sympy x = sympy.symbols('x') p = x**2 - 2*x +1 Dp = sympy.diff(p) print('p:', p) print('p factorizado:', sympy.factor(p)) print('Dp:', Dp) print('p * Dp:', p*Dp) print('p * Dp expandido:', sympy.expand(p*Dp)) # - # ### Graficación de funciones # Para graficar una función de $\mathbb{A}$ en $\mathbb{A}$ se puede usar la libreria matplotlib. from matplotlib import pyplot as plt import numpy as np # %matplotlib inline # + # Graficando f y g # Evaluar el dominio con las funciones x = np.arange(-10, 10, 0.05) fx = list(map(f, x)) gx = g(x) hx = g(x+1) # Graficar fig, ax = plt.subplots(1, 3, figsize=(20, 6)) # Izquierda ax[0].plot(x,fx) ax[0].set_title('Tiro vertical') ax[0].set_xlabel('tiempo') ax[0].set_ylabel('altura') # Centro ax[1].plot(x,gx) ax[1].set_title('Movimiento armónico') ax[1].set_xlabel('tiempo') ax[1].set_ylabel('Distancia al punto de equilibro') # Derecha ax[2].plot(x,hx) ax[2].plot(x,gx) ax[2].set_title('Movimiento armónico') ax[2].set_xlabel('tiempo') ax[2].set_ylabel('Distancia al punto de equilibro') # - # ## Límites # Recordemos la caracterización del límite una función por sucesiones. # # **Teorema**: Sean $f: \mathbb{R}$ \rightarrow \mathbb{R}$ y $a \in [a,b]$. El límite $\lim_{x\rightarrow a}f(x)$ existe si y solamente si para toda sucesión $\{s_n\}_{\mathbb{N}}$ de reales que converge a $a$, la sucesión $\{f(s_n)\}_{\mathbb{N}}$ de imágenes bajo $f$, también es convergente. Además el límite de las sucesiones coincide con el límite de la función. # # Lo poderoso de esta caracterización es que para comprobar la existencia de un límite de funciones, basta con verificar que *todas* las sucesiones de elementos en el dominio que convergen al punto límite, sean convergentes. Aunque no es posible hacer el cálculo para cada sucesión (hay infinitas de ellas), si calculamos el límite de una de estas sucesiones y es convergente, el límite de la función debe ser igual al límite calculado, que nos da información suficiente para formular una hipótesis. # El siguiente algoritmo aproxima el límite de dos sucesiones particulares # # ***Algoritmo***: Entradas $\leftarrow$ ($f$ la función a calcular el límite, $a$ el punto límite) # 1. Calcular los primeros $n$ términos de la sucesión $\epsilon = \{2^{-n}\}_{\mathbb{N}}$ que es convergente a 0. # 2. Usar esta sucesión para generar otras dos sucesiones, $\mathcal{I} = a - \epsilon$ y $\mathcal{D} = a + \epsilon$ ambas convergentes a $a$. # 3. A partir de estas sucesiones obtener las sucesiones de imágenes $f(\mathcal{I})$ y $f(\mathcal{D})$ y la sucesión de diferencias $\delta = \lvert f(\mathcal{I}) - f(\mathcal{D}) \lvert$. # 4. Si la sucesión de diferencias tiende a 0 entonces $f(\mathcal{I})$ y $f(\mathcal{D})$ son convergentes y convergen al mismo límite. 
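# A minimal added sketch of the algorithm above (independent of the fuller `limite` function defined next), estimating lim_{x->0} sin(x)/x from the two one-sided sequences:

import numpy as np

n = np.arange(20)
epsilon = 2.0 ** -n                      # sequence converging to 0
a = 0.0
f = lambda x: np.sin(x) / x
f_left, f_right = f(a - epsilon), f(a + epsilon)
delta = np.abs(f_left - f_right)
print(delta[-1])                         # ~0: both image sequences approach the same value
print((f_left[-1] + f_right[-1]) / 2)    # ~1: the estimated limit
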
# Cálculo de límite def limite(f, a): n = np.arange(20) epsilon = 1/ 2 ** n izquierda = a - epsilon derecha = a + epsilon f_i = f(izquierda) f_d = f(derecha) promedio = (f_i + f_d)/2 delta = np.abs(f_i - f_d) fig, ax = plt.subplots(1, 2, figsize=(20, 6)) # Imagenes ax[0].plot(n, f_i, label='f(I)') ax[0].plot(n, f_d, label='f(D)') ax[0].plot(n, promedio, label='L') ax[0].set_title('Sucesiones') ax[0].set_xlabel('N') ax[0].set_ylabel('Imagen') ax[0].legend() # Diferencia ax[1].plot(n, delta) ax[1].set_title('Diferencia') ax[1].set_xlabel('N') ax[1].set_ylabel('Delta') return promedio[-1] # ### Funcionamiento del algoritmo # + #caso 1 def f1(x): return np.sin(x) lim = limite(f1, 0) print('Limite:', lim) # + #caso 2 def f2(x): return np.sin(x)/x lim = limite(f2, 0) print('Limite:', lim) # + #caso 3 def f3(x): return np.abs(x)/x lim = limite(f3, 0) print('Limite:', lim) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ML: Movie Learning # # We're going to do some form of movie text wrangling using Python. At this point, we have acquired a list of movie titles, along with some other data scraped from Wikipedia. Unfortunately, these other data are inconsistently formatted and will be a bit difficult to work with. Since we want to get plot synopses from the OMDb API anyway, we can get these other data of interest from here as well. # # # # ## 1. Data Collection # # ### 1.1. The OMDb API # # We're going to use the OMDb API, which is free once again (http://www.omdbapi.com/). To use it, you need to get an API key. You can get a free key, which limits to 1000 requests per day. More requests (and a poster API) are available if you patronize the OMDb Patreon. The OMDb website explains how the API works pretty well. We'll use the `requests` package to make calls to the OMDb API. # # There are two ways we can get movie data using this API, either by movie title or IMDb ID. We'll want to be able to handle either, as I have a feeling that it will be easier to get a random list of IMDb ID's than a random list of movie titles? It is also more exact to use ID's since movie titles aren't unique (e.g.,"The Mummy" can either refer to the Brendan Fraser masterpiece, or the Tom Cruise dumpster fire). # # Note, I'm using format strings (`f'some {text}'`) which is a Python 3.6 feature (equivalent to `'some {}.format(text)'`). # + import os import json import requests from dotenv import load_dotenv, find_dotenv #find .env automagically by walking up directories until it's found dotenv_path = find_dotenv() # load up the entries as environment variables load_dotenv(dotenv_path) # + API_KEY = os.environ.get('OMDB_API_KEY') def get_movie_data(name, year=None, api_key=API_KEY, full_plot=False): """Returns json from OMDb API for movie.""" api_url = f'http://private.omdbapi.com/?apikey={api_key}' # There are actually utilities that can automatically escape invalid characters # but here we do the manual dumb solution name = name.lower().replace(' ', '+') # Can either manually extend the url with parameters or... #api_url += f'&t={name}' # if year is not None: # api_url += f'&y={year}' # if full_plot: # api_url += '&plot=full' # response = requests.get(api_url) # ... have `requests` do it for you! 
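    # (Added note) Passing `params=` lets `requests` URL-encode the values and
    # append them as a query string, e.g. ...&t=snakes+on+a+plane&y=2006&plot=full,
    # so we don't need to escape special characters ourselves.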
body = {'t': name} if year is not None: body['y'] = year if full_plot: body['plot'] = 'full' response = requests.get(api_url, params=body) # Throw error if API call has an error if response.status_code != 200: raise requests.HTTPError( f'Couldn\'t call API. Error {response.status_code}.' ) # Throw error if movie not found # if response.json()['Response'] == 'False': # raise ValueError(response.json()['Error']) return response.json() # - # Let's test it out, and see what kind of information we get from our request. response_json = get_movie_data('Snakes on a Plane') print(json.dumps(response_json, indent=4)) # With year provided response_json = get_movie_data('Snakes on a Plane', year=2006) print(json.dumps(response_json, indent=4)) # With incorrect year provided response_json = get_movie_data('Snakes on a Plane', year=2005) print(json.dumps(response_json, indent=4)) # With full plot: response_json = get_movie_data('Snakes on a Plane', full_plot=True) print(json.dumps(response_json, indent=4)) # Film without a lot of data # Note you can leave out the apostrophe and OMDb will still find the film response_json = get_movie_data('Boarding School Girls\' Pajama Parade', full_plot=False) print(json.dumps(response_json, indent=4)) # Nonsense film that doesn't exist response_json = get_movie_data('Phantom of the cheese cake') print(json.dumps(response_json, indent=4)) # Nonsense film that doesn't exist response_json = get_movie_data('ar') print(json.dumps(response_json, indent=4)) # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # Load Packages library(ggplot2) library(dplyr) library(plyr) # Load data filename <- "./data/Encode_HMM_data.txt" data <- read.csv(filename, sep = '\t', header = FALSE) # Examine Data head(data) # Rename the columns for visualization names(data)[1:4] <- c("chrom", "start", "stop", "type") # Take a look at dataset head(data) # ## Create a Bar Plot # Bar Plot ggplot(data, aes(x=chrom, fill=type))+ geom_bar() # Save plot in png format png("figs/plot.png") ggplot(data, aes(x=chrom, fill=type))+ geom_bar() dev.off() # Save plot in pdf format pdf("figs/plot.pdf") ggplot(data, aes(x=chrom, fill=type))+ geom_bar() dev.off() # Save plot in jpeg format jpeg("figs/plot.jpeg") ggplot(data, aes(x=chrom, fill=type))+ geom_bar() dev.off() # Save plot in tiff format tiff("figs/plot.tiff") ggplot(data, aes(x=chrom, fill=type))+ geom_bar() dev.off() # Save plot in png format in high resulation png("figs/plot.png", 1000, 1000) ggplot(data, aes(x=chrom, fill=type))+ geom_bar() dev.off() # ## Basic Statistics # Check data matrix dim(data) # Summary statistics summary(data) # Data structure str(data) # Summary for chromosomes summary(data$chrom) # Summary for type summary(data$type) # Summary for start position summary(data$start) # Summary for stop position summary(data$stop) # Create a new column is called size data$size = data$stop - data$start # Now, take a look at dataset head(data) # Summary for size summary(data$size) # Mean size mean(data$size) # Meadian size median(data$size) # Standard deviation sd(data$size) # Max size max(data$size) # Min size min(data$size) # ## Customize Plot # Bar Plot ggplot(data, aes(x=chrom, fill=type))+ geom_bar() # Remove `chr` prefix data$chrom <- factor(gsub("chr", "", data$chrom)) # See the result with `chr` removed summary(data$chrom) # Now plot the data ggplot(data, aes(x=chrom, fill=type))+ geom_bar() 
# Reorder the chromosomes numerically
data$chrom <- factor(data$chrom, levels = c(seq(1,22), "X", "Y"))

# See the ordered chromosomes
summary(data$chrom)

# Plot the reordered chromosomes
ggplot(data, aes(x=chrom, fill=type)) + geom_bar()

data$type <- revalue(data$type, c("1_Active_Promoter"="Promoter", "4_Strong_Enhancer"="Enhancer", "8_Insulator"="Insulator"))
data

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Deutsch-Jozsa algorithm
# %matplotlib inline
from qiskit import *

N = 4  # number of input qubits (the circuit uses N + 1 qubits in total)

# Step 0: Prepare the superposition state
prep_circuit = QuantumCircuit(N + 1, N)
prep_circuit.x(N)            # working qubit starts in |1>
prep_circuit.h(range(N+1))   # generate the equal superposition
prep_circuit.barrier()
prep_circuit.draw(output="mpl")

# Step 1: Send the input through the blackbox
# Here we will experiment with 3 blackboxes
# The first is the constant function f(x) = 1
constant_circuit = QuantumCircuit(N+1, N)
constant_circuit.x(N)
constant_circuit.barrier()
constant_circuit.draw(output="mpl")

# The second blackbox implements f(x) = x mod 2, which is a balanced function
mod2_circuit = QuantumCircuit(N+1, N)
mod2_circuit.cx(0, N)
mod2_circuit.barrier()
mod2_circuit.draw(output="mpl")

# The third circuit implements a function with period 4 and values {1, 0, 0, 1}
# Before we code the circuit, here is a classical representation of the function
def blackbox_3(x):
    if x % 4 == 0 or x % 4 == 3:
        return 1
    else:
        return 0

periodic_circuit = QuantumCircuit(N+1, N)
periodic_circuit.cx(0, N)
periodic_circuit.cx(1, N)
periodic_circuit.x(N)
periodic_circuit.barrier()
periodic_circuit.draw(output="mpl")

# Step 2: Apply Hadamard to all input qubits and measure
measure_circuit = QuantumCircuit(N+1, N)
measure_circuit.h(range(N))
measure_circuit.measure(range(N), range(N))
measure_circuit.draw(output="mpl")

# An example of what the assembled circuit looks like
(prep_circuit + periodic_circuit + measure_circuit).draw(output="mpl")

# Now we simulate each function; to do so, we create an auxiliary function that runs the assembled circuit
def simulate_circuit(prep, blackbox, measuring):
    """Returns the counts of the circuit assembled from the three sub-circuits"""
    circuit = prep + blackbox + measuring
    simulator = Aer.get_backend("qasm_simulator")
    job = execute(circuit, simulator, shots = 2**16)
    result = job.result()
    count = result.get_counts()
    return count

# +
# Recall that all measurements are 0 if f is constant, and at least one measurement is 1 if f is balanced
# -

# For a constant function, we expect all measurements to be 0
count_constant = simulate_circuit(prep_circuit, constant_circuit, measure_circuit)
visualization.plot_histogram(count_constant)

# For a balanced function, we expect at least one measurement to be 1
count_mod2 = simulate_circuit(prep_circuit, mod2_circuit, measure_circuit)
visualization.plot_histogram(count_mod2)

# We try again with another balanced function to verify that at least one measurement is 1
count_periodic = simulate_circuit(prep_circuit, periodic_circuit, measure_circuit)
visualization.plot_histogram(count_periodic)

# The results match our predictions!
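# As an added sketch (not in the original notebook), the decision rule above can be applied directly to a counts dictionary: with an ideal simulator, a constant f yields only the all-zero bitstring on the N input qubits, and anything else indicates a balanced f.

def classify(counts):
    zero_key = "0" * N  # all-zero measurement on the N input qubits
    return "constant" if set(counts) == {zero_key} else "balanced"

print(classify(count_constant))  # expected: constant
print(classify(count_mod2))      # expected: balanced
print(classify(count_periodic))  # expected: balanced
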
# For purposes of reproducibility, the Qiskit version is qiskit.__qiskit_version__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #math import numpy as np import scipy.stats as stats import scipy.linalg as linalg import scipy.special #graphing import matplotlib.pyplot as plt #stats import statsmodels.api as sm from statsmodels.base.model import GenericLikelihoodModel # + mu = 3 sigma = 2 values_distr =stats.logistic(loc=mu, scale=sigma) def gen_data(values = values_distr, num_true = 4): nobs = 1000 #parameters min_bids =num_true max_bids =num_true bid_types = range(min_bids,max_bids+1) prob_type = [1/len(bid_types)]*len(bid_types) bidders = np.random.choice(bid_types, nobs, p=prob_type) bidders = np.sort(bidders) bids = [] for i in bid_types: #count number of obs num_i = sum(i == bidders) bids_i = values.rvs(size=(num_i,i)) bids_i = np.sort(bids_i, axis=1) bids_i = bids_i[:,-2] bids = np.concatenate((bids, bids_i)) #draw bids return bids, nobs bids,nobs = gen_data(values_distr) # + class Auction(GenericLikelihoodModel): def __init__(self, *args, values_distr=stats.logistic, **kwargs): super(Auction,self).__init__(*args,**kwargs) self.values_distr = values_distr def loglikeobs(self, params): bids = self.endog i = self.exog[:,0] cdf = self.values_distr.cdf(bids,loc=params[0],scale=max(params[1],1e-5)) pdf = self.values_distr.pdf(bids,loc=params[0],scale=max(params[1],1e-5)) factorial = scipy.special.factorial(i)/scipy.special.factorial(i-2) order_cdf = factorial*pdf*cdf**(i-2)*(1-cdf) #np.log(i) + np.log(cdf) + (i-1)*np.log((1-cdf)) #second highest order statistic return np.log(order_cdf) np.random.seed() yn,nobs = gen_data(stats.logistic(loc=mu, scale=sigma)) model = Auction(yn, np.ones(len(yn))*4 ) model_fit = model.fit(start_params=[mu,sigma],disp=False) model_fit.summary() # + def setup_shi(yn,num_bidders1 = 10, num_bidders2 = 2): model1 = Auction(yn,np.ones(len(yn))*num_bidders1) model1_fit = model1.fit(start_params=[mu,sigma],disp=False) ll1 = model1.loglikeobs(model1_fit.params) grad1 = model1.score_obs(model1_fit.params) hess1 = model1.hessian(model1_fit.params) #fit logistic values model2 = Auction(yn,np.ones(len(yn))*num_bidders2) model2_fit = model2.fit(start_params=[mu,sigma],disp=False) ll2 = model2.loglikeobs(model2_fit.params) grad2 = model2.score_obs(model2_fit.params) hess2 = model2.hessian(model2_fit.params) return ll1,grad1,hess1,ll2,2, grad2,hess2,2 yn,nobs = gen_data() ll1,grad1,hess1,ll2,k1, grad2,hess2,k2 = setup_shi(yn, num_bidders1 = 10, num_bidders2 = 2) # + def compute_eigen(yn,num_bidders1 = 10, num_bidders2 = 2): ll1,grad1,hess1,ll2,k1, grad2,hess2,k2 = setup_shi(yn,num_bidders1 = 10, num_bidders2 = 2) hess1 = hess1/len(ll1) hess2 = hess2/len(ll2) k = k1 + k2 n = len(ll1) #A_hat: A_hat1 = np.concatenate([hess1,np.zeros((k1,k2))]) A_hat2 = np.concatenate([np.zeros((k2,k1)),-1*hess2]) A_hat = np.concatenate([A_hat1,A_hat2],axis=1) #B_hat, covariance of the score... B_hat = np.concatenate([grad1,-grad2],axis=1) #might be a mistake here.. 
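    # (Added note) B_hat currently has one row per observation and k1+k2 columns
    # (the two stacked score vectors); np.cov treats rows as variables, so the
    # transpose below yields the (k1+k2) x (k1+k2) covariance of the scores.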
B_hat = np.cov(B_hat.transpose()) print(B_hat[0:2,2:]) #compute eigenvalues for weighted chisq sqrt_B_hat= linalg.sqrtm(B_hat) W_hat = np.matmul(sqrt_B_hat,linalg.inv(A_hat)) W_hat = np.matmul(W_hat,sqrt_B_hat) V,W = np.linalg.eig(W_hat) print(V) return V n_sims = 5000 yn,nobs = gen_data(num_true = 4) model_eigs = compute_eigen(yn,num_bidders1 = 10, num_bidders2 = 2) eigs_tile = np.tile(model_eigs,n_sims).reshape(n_sims,len(model_eigs)) normal_draws = stats.norm.rvs(size=(n_sims,len(model_eigs))) weighted_chi = ((normal_draws**2)*eigs_tile).sum(axis=1) plt.hist(weighted_chi,density=True,bins=25) plt.show() # - yn,nobs = gen_data(num_true = 10) model_eigs = compute_eigen(yn,num_bidders1 = 10, num_bidders2 = 2) eigs_tile = np.tile(model_eigs,n_sims).reshape(n_sims,len(model_eigs)) normal_draws = stats.norm.rvs(size=(n_sims,len(model_eigs))) weighted_chi = ((normal_draws**2)*eigs_tile).sum(axis=1) plt.hist(weighted_chi,density=True,bins=25) plt.show() yn,nobs = gen_data(num_true = 2) model_eigs = compute_eigen(yn,num_bidders1 = 10, num_bidders2 = 2) eigs_tile = np.tile(model_eigs,n_sims).reshape(n_sims,len(model_eigs)) normal_draws = stats.norm.rvs(size=(n_sims,len(model_eigs))) weighted_chi = ((normal_draws**2)*eigs_tile).sum(axis=1) plt.hist(weighted_chi,density=True,bins=25) plt.show() yn,nobs = gen_data(num_true = 13) model_eigs = compute_eigen(yn,num_bidders1 = 10, num_bidders2 = 2) eigs_tile = np.tile(model_eigs,n_sims).reshape(n_sims,len(model_eigs)) normal_draws = stats.norm.rvs(size=(n_sims,len(model_eigs))) weighted_chi = ((normal_draws**2)*eigs_tile).sum(axis=1) plt.hist(weighted_chi,density=True,bins=25) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Demo for data clustering # # In this notebook we present *clustering* functionalities of the `preprocess` module. 
# # ### Clustering # # A synthetic data set is created and we perform clustering using the available partitioning functions: # # - [**Section 1**](#variable_bins): Variable bins # - [**Section 2**](#predefined_bins): Pre-defined variable bins # - [**Section 3**](#zero_neighborhood_bins): Zero-neighborhood variable bins # - [**Section 4**](#mixture_fraction_bins): Bins of mixture fraction vector # # *** # **Should plots be saved?** save_plots = False # *** # + from PCAfold import preprocess import matplotlib.pyplot as plt import numpy as np # Set some initial parameters: colors = ['#6a6e7a'] k_colors = ['#0e7da7', '#ceca70', '#b45050', '#2d2d54'] data_point = 2 font_size = 16 # Function for plotting: def visualize_clustering(x, y, idx, xticks_list, xlim_list=[-1.1,1.1], xname=r'$var$'): populations = preprocess.get_populations(idx) n_clusters = len(np.unique(idx)) figure = plt.figure(figsize=(7, 4)) figureSubplot = plt.subplot(1,1,1) for k in range(0,n_clusters): plt.scatter(x[np.where(idx==k)], y[np.where(idx==k)], color=k_colors[k], marker='.', linewidth=data_point, label='$k_' + str(k) + '$ - ' + str(populations[k])) plt.axis('equal') plt.xlim(xlim_list), plt.ylim([-0.1,1.1]) plt.xticks(xticks_list), plt.yticks([0,1]) plt.xlabel(xname) plt.grid(alpha=0.2) plt.title('Clustered data set', fontsize=font_size) plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.2), fancybox=True, shadow=True, ncol=4, fontsize=font_size-4, markerscale=2) # - # *** # # ## Clustering based on a single vector # # [**Go up**](#header) # # We start with partitioning the synthetic two-dimensional data set based on bins of a variable vector. # # Create a synthetic two-dimensional data set: var = np.linspace(-1,1,100) y = -var**2 + 1 # Plot the synthetic data set: figure = plt.figure(figsize=(7, 4)) figureSubplot = plt.subplot(1,1,1) plt.scatter(var, y, color=colors[0], marker='.', linewidth=data_point,) plt.axis('equal') plt.xlim([-1.1,1.1]), plt.ylim([0,1]) plt.xticks([-1,0,1]), plt.yticks([0,1]) plt.xlabel(r'$var$') plt.grid(alpha=0.2) plt.title('Original data set', fontsize=font_size) if save_plots==True: plt.savefig('../images/tutorial-clustering-original-data-set.png', dpi = 500, bbox_inches='tight') # # ### Cluster with `variable_bins` into $k=4$ clusters: # # [**Go up**](#header) (idx_variable_bins) = preprocess.variable_bins(var, 4, verbose=True) # Visualize clustering of the data set: visualize_clustering(var, y, idx_variable_bins, [-1,-0.5,0,0.5,1]) if save_plots==True: plt.savefig('../images/tutorial-clustering-variable-bins-k4.png', dpi = 500, bbox_inches='tight') # # ### Cluster with `predefined_variable_bins` into $k=4$ clusters: # # [**Go up**](#header) split_values = [-0.6, 0.4, 0.8] (idx_predefined_variable_bins) = preprocess.predefined_variable_bins(var, split_values, verbose=True) # Visualize clustering of the data set: visualize_clustering(var, y, idx_predefined_variable_bins, [-1, -0.6, 0.4, 0.8, 1]) if save_plots==True: plt.savefig('../images/tutorial-clustering-predefined-variable-bins-k4.png', dpi = 500, bbox_inches='tight') # # ### Cluster with `zero_neighborhood_bins` into $k=3$ clusters with `split_at_zero=True`: # # [**Go up**](#header) (idx_zero_neighborhood_bins) = preprocess.zero_neighborhood_bins(var, 3, zero_offset_percentage=10, split_at_zero=False, verbose=True) # Visualize clustering of the data set: visualize_clustering(var, y, idx_zero_neighborhood_bins, [-1, -0.2, 0.2, 1]) if save_plots==True: plt.savefig('../images/tutorial-clustering-zero-neighborhood-bins-k3.png', dpi = 
500, bbox_inches='tight') # ### Cluster with `zero_neighborhood_bins` into $k=4$ clusters with `split_at_zero=True`: # # [**Go up**](#header) (idx_zero_neighborhood_bins_split_at_zero) = preprocess.zero_neighborhood_bins(var, 4, zero_offset_percentage=10, split_at_zero=True, verbose=True) # Visualize clustering of the data set: visualize_clustering(var, y, idx_zero_neighborhood_bins_split_at_zero, [-1, -0.2, 0, 0.2, 1]) if save_plots==True: plt.savefig('../images/tutorial-clustering-zero-neighborhood-bins-split-at-zero-k4.png', dpi = 500, bbox_inches='tight') # # ### Cluster with `mixture_fraction_bins` into $k=4$ clusters with `Z_stoich=0.4`: # # [**Go up**](#header) Z = np.linspace(0,1,100) y_Z = (-25/9)*Z**2 + (20/9)*Z + (5/9) (idx_mixture_fraction_bins) = preprocess.mixture_fraction_bins(Z, 4, 0.4, verbose=True) # Visualize clustering of the data set: visualize_clustering(Z, y_Z, idx_mixture_fraction_bins, [0, 0.2, 0.4, 0.7, 1], xlim_list=[-0.01,1.01], xname=r'$Z$') if save_plots==True: plt.savefig('../images/tutorial-clustering-mixture-fraction-bins-k4.png', dpi = 500, bbox_inches='tight') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" id="hfV503AtcBDp" #Importing required libraries import numpy as np import pandas as pd import io import matplotlib.pyplot as plt # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" id="Y4rK9ffYcBEP" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 258} outputId="263fbe28-d739-4713-afa6-013d39a94178" # reading the csv file, and deleting 2 columns from the file, checking first few rows of the file from google.colab import files uploaded = files.upload() data = pd.read_csv(io.BytesIO(uploaded['BuyComputer.csv'])) data.drop(columns=['User ID',],axis=1,inplace=True) data.head() # + colab={"base_uri": "https://localhost:8080/"} id="jEoi7Gm1AfmZ" outputId="8337fec4-0cb0-4090-ec58-91c2e55667b8" from google.colab import drive drive.mount('/content/drive') # + _uuid="4cb45e28344e7e245ab398e9f4f5272ef21d2129" id="jwuPgU6_cBE8" colab={"base_uri": "https://localhost:8080/"} outputId="b8ce868f-254e-4683-a391-22eed47c179b" #Declare label as last column in the source file label = data.iloc[:,-1].values print(label) # + _uuid="2e7a145fa49435ad9578ec2827f76a70cc99f2e1" id="2lhBrOp8cBFX" colab={"base_uri": "https://localhost:8080/"} outputId="f0b00225-9be7-48ff-9b80-711c089c0d4c" # Declaring X as all columns excluding last X = data.iloc[:,:-1].values print(X) # + _uuid="dffb1f3e19e19964995ac827bf55108b5815ff67" id="t8nwbTn6cBFp" # Splitting the data from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(X, label, test_size = 0.3, random_state = 137) # + _uuid="7d4ed14782e114ae3282f20d3754121398e6d232" id="U4bUiVVFcBGD" # Scaling the data from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) # + id="3pkIY8gOwgAC" # print(X_train) # + _uuid="2ff7415e3e0e0673d59051cfe6154c63d3312a32" id="W5yGgzqbcBGc" colab={"base_uri": 
"https://localhost:8080/"} outputId="16eab247-966c-45d4-8d10-d1e148aa3bf7" #Variables required to calculate sigmoid function y_pred = [] print(X_train[0]) len_x = len(X_train[0]) w = [] b = 0.2 print(len_x) # + _uuid="a228174207f4631be4f26a0cc05e379f3f58aa56" id="ZbqwTM0bcBGr" colab={"base_uri": "https://localhost:8080/"} outputId="1b3be8c0-09df-4bd5-c0ab-98f25bba3ab1" entries = len(X_train[:,0]) entries # + _uuid="5d4d6e47ee65c9c7404e60fcf8f05c11708546b3" id="vEV7Nn73cBG7" colab={"base_uri": "https://localhost:8080/"} outputId="190e86ff-a39f-4ea9-8993-b676632d876a" for weights in range(len_x): w.append(0) w # + _uuid="18dbd2196d72527a82d30ab88ed2aa8d10bd01ce" id="_fAtpylNcBHM" # Sigmoid function def sigmoid(z): return 1/(1 + np.exp(-z)) # + _uuid="daa0f87fdbf98591cb9f51b8dc7157dc399ca827" id="kfchkScTcBHd" def predict(inputs): return sigmoid(np.dot(w, inputs) + b) # + _uuid="4126f842d072ccd40019cc283b767a014e2ee074" id="K2ryTgglcBHt" #Loss function def loss_func(y,a): # Using the formula provided in the pdf # Here, a is htheta J = -(y * np.log(a) + (1-y) * np.log(1-a)) return J # + _uuid="fc0ceb65c69f4ee0c3f28e050744229dc90c621b" id="1KW3eDpmcBIA" dw = [] db = 0 J = 0 alpha = 0.1 for x in range(len_x): dw.append(0) # + _uuid="e4be38e9b500ae0c5a7134296a3055675c4fb2d8" id="ipqdFLP3cBIO" #Repeating the process 3000 times training_epochs = 3000 for epoch in range(training_epochs): for i in range(entries): sampl = X_train[i] a = predict(sampl) dz = a - Y_train[i] J += loss_func(Y_train[i], a) for j in range(len_x): dw[j] = dw[j] + (sampl[j] * dz) db += dz J = J/entries db = db/entries for x in range(len_x): dw[x] = dw[x] / entries for x in range(len_x): w[x] = w[x] - (alpha * dw[x]) b = b - (alpha * db) J = 0 # + _uuid="5479ccb6073ed1ea310ef7de01b2935fc3ec400e" id="7Q585AdrcBIs" colab={"base_uri": "https://localhost:8080/"} outputId="6b820147-fca4-42f2-bf9c-137b12c299bf" #Printing weight print(w) # + _uuid="a939c247b8a092f74c9843975612daa85c423621" id="rEiF-bNHcBJB" colab={"base_uri": "https://localhost:8080/"} outputId="1a5eb763-9370-4a61-e143-61bb940d3835" #Printing bias print(b) # + _uuid="b7ae24169a21c7ac8ea0787f4a38a0de3e07a6b5" id="MPt5nUcpcBJR" #Predicting the label for i in range(len(Y_test)): y_pred.append(predict(X_test[i])) # + _uuid="967ad1b72305ad792a5d50e4d8b8a07632f7b241" id="79HPPz7jcBJg" #Printing actual and predicted values in a table for i in range(len(y_pred)): if(y_pred[i] > 0.5): y_pred[i] = 1 else: y_pred[i] = 0 # + _uuid="a59807150900082ab876ef0200c6c7f8f93e098c" id="sdZDj_iVcBJt" colab={"base_uri": "https://localhost:8080/"} outputId="d9e15d03-3bf1-4b51-ccb6-96fc4e4f5565" # Calculating accuracy of prediction correct = 0 for i in range(len(y_pred)): if(y_pred[i] == Y_test[i]): correct += 1 print('Correct prediction is ', (correct / len(y_pred)) * 100) # + [markdown] id="x6nmajpzhAEn" # #Using sklearn LogisticRegression model # + _uuid="9aaade066015e04f20dd7eb1d37339be75ca3836" _kg_hide-output=true id="iG-BK4i9cBKH" colab={"base_uri": "https://localhost:8080/"} outputId="7ad15939-8a60-4d6a-9fa4-20b31aacb033" # Fitting Logistic Regression to the Training set from sklearn.linear_model import LogisticRegression LR = LogisticRegression(random_state = 137) #Fit LR.fit(X_train, Y_train) #predicting the test label with LR. 
Predict always takes X as input y_predLR = LR.predict(X_test) correctness = 0 for i in range(len(y_pred)): if(y_pred[i] == Y_test[i]): correctness += 1 print('Prediction : ', (correctness / len(y_pred)) * 100) # + [markdown] id="Y8sYVBu-iSW-" # **Exercise:** # # Try logistic regression on BuyComputer dataset and set Random state=Your_RollNumber (last 3 digit of ID, incase if you don't have ID) # # Done above # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Relatório de Análise I # ## Importando a Base de Dados import pandas as pd #importa a biblioteca pandas - a comunidade simplifica como pd # importando pd.read_csv('dados/aluguel.csv', sep = ';') dados = pd.read_csv('dados/aluguel.csv', sep = ';') # Armazena o dataframe em uma variável dados #faz a leitura da variável type(dados) #Le o tipo da variável dados.info() #Visualiza as informações da variável dados dados dados.head(10) # Mostra os 10 primeiras linhas da variável dados # ## Informações Gerais sobre a Base de Dados dados.dtypes # Visualiza os tipos das variáveis pd.DataFrame(dados.dtypes) # transforma o dtypes em DataFrame pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados']) # Visualiza as informações em dataframe tipos_de_dado = pd.DataFrame(dados.dtypes, columns = ['Tipos de Dados']) # Armazena essas informações em uma variável tipos_de_dado.columns.name = 'Variáveis' # Vai dar nome a coluna tipos_de_dado # Leitura da Variáveldados. dados.shape # Comando que le a quantidade de linha , número de variáveis. print(f'A base de dado apresenta {dados.shape[0]} registros e {dados.shape[1]} variáveis.') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Read Images from TFRecords Format # > Read Images from TFRecords Format # # - toc: true # - badges: true # - comments: true # - categories: [TFRecords, TensorFlow, Image Processing] # - image: images/chart-preview.png import numpy as np import tensorflow as tf import glob # %matplotlib inline # ## 1. TFRecord Format # # - doesn't know anything about image formats # - can save both dense arrays or image formats # - in contrast to imread and imsave TF decouples reading/decoding and encoding/writting # ## 2. Reading Unknown Data raw_records = tf.data.TFRecordDataset("images/TFRecords/my-tfR.tfrecords") for raw_record in raw_records.take(1): print("") #example = tf.train.Example() #example.ParseFromString(raw_record.numpy()) #example # ## 3. TFRecord format (PNG raw file) # + raw_image_dataset = tf.data.TFRecordDataset("images/TFRecords/my-tfR.tfrecords") image_feature_description = { 'height': tf.io.FixedLenFeature([], tf.int64), 'width': tf.io.FixedLenFeature([], tf.int64), 'no_c': tf.io.FixedLenFeature([], tf.int64), 'raw_image': tf.io.FixedLenFeature([], tf.string), } def _parse_image_function(example_proto): # Parse the input tf.train.Example proto using the dictionary above. 
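    # (Added note) Each serialized Example holds the raw PNG bytes plus its
    # height/width/channel counts; parse_single_example returns a dict of tensors
    # keyed by image_feature_description, and decode_png turns the byte string
    # back into a uint8 image tensor.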
example = tf.io.parse_single_example(example_proto,image_feature_description) raw_image = example["raw_image"]#.numpy() #this is a tensor with bytes raw_image = tf.io.decode_png(raw_image,3) # this is a tensor with float32 shape_h = example["height"] shape_w = example["width"] no_c = example["no_c"] #raw_image = tf.reshape(raw_image, [400, 600,3]) #raw_image = tf.cast(raw_image, tf.float32) return raw_image parsed_image_dataset = raw_image_dataset.map(_parse_image_function) parsed_image_dataset # - for image in parsed_image_dataset.take(1): print(image.shape) # ## 4. TFRecord format (JPEG raw file) # + #raw_image_dataset = tf.data.TFRecordDataset("images/TFRecords/my-tfR-JPEG.tfrecords") def _parse_image_fct(example_proto): image_feature_description = { 'height': tf.io.FixedLenFeature([], tf.int64), 'width': tf.io.FixedLenFeature([], tf.int64), 'no_c': tf.io.FixedLenFeature([], tf.int64), 'raw_image': tf.io.FixedLenFeature([], tf.string), } # Parse the input tf.train.Example proto using the dictionary above. example = tf.io.parse_single_example(example_proto,image_feature_description) raw_image = example["raw_image"] #this is a tensor with bytes raw_image = tf.io.decode_jpeg(contents = raw_image, channels = 0) shape_h = example["height"] shape_w = example["width"] no_c = example["no_c"] #print(shape_h, shape_w, no_c) #raw_image = tf.reshape(raw_image, [shape_h, shape_w, no_c]) raw_image = tf.cast(raw_image, tf.float32) return raw_image #parsed_image_dataset = raw_image_dataset.map(_parse_image_function) def load_dataset(filename): dataset = tf.data.TFRecordDataset(filename) dataset = dataset.map(_parse_image_fct) return dataset def get_dataset(filename, BATCH_SIZE): dataset = load_dataset(filename) dataset = dataset.shuffle(2048) #dataset = dataset.prefetch() dataset = dataset.batch(BATCH_SIZE) return dataset BATCH_SIZE = 2 filename = "images/TFRecords/my-tfR-JPEG.tfrecords" dataset = get_dataset(filename, BATCH_SIZE) tfR_image = next(iter(dataset)) # - for i in range(2): fig = plt.figure(figsize = (10, 10)) ax1 = fig.add_subplot(212) ax1 = ax1.imshow(tfR_image[i, :, :, :]/255.) plt.colorbar(ax1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # This notebook runs a covariance based Monte Carlo propagation in ADAM from adam import PropagationParams from adam import OpmParams from adam import ConfigManager from adam import ProjectsClient from adam import AdamProcessingService from adam import MonteCarloResults from adam import OrbitEventType import matplotlib.pyplot as plt # This sets up authenticated access to the server. It needs to be done before pretty much everything you want to do with ADAM. # ConfigManager loads the config set up via adamctl. # See the README at https://github.com/B612-Asteroid-Institute/adam_home/blob/master/README.md config = ConfigManager().get_config('dev') project = ProjectsClient().get_project_from_config(config) aps = AdamProcessingService() # ## Example Inputs # The PropagationParameters sent to the propagation API will operate on either Keplerian or Cartesian elements. 
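# As an added illustration (not part of the ADAM client API), the sketch below converts Keplerian elements to a Cartesian position/velocity state with standard two-body formulas, to show how the two input styles relate; it reuses the 2000 SG344 elements defined in the next cell.

import numpy as np

def kep_to_cart(a, e, inc, raan, argp, M, mu):
    """Angles in radians, a in km, mu in km^3/s^2; returns r (km) and v (km/s)."""
    # Solve Kepler's equation M = E - e*sin(E) with Newton's method
    E = M
    for _ in range(50):
        E -= (E - e*np.sin(E) - M) / (1 - e*np.cos(E))
    # Position and velocity in the perifocal frame
    r_mag = a * (1 - e*np.cos(E))
    r_pf = np.array([a*(np.cos(E) - e), a*np.sqrt(1 - e**2)*np.sin(E), 0.0])
    v_pf = np.sqrt(mu*a)/r_mag * np.array([-np.sin(E), np.sqrt(1 - e**2)*np.cos(E), 0.0])
    # Rotate perifocal -> inertial: R3(-raan) * R1(-inc) * R3(-argp)
    cO, sO = np.cos(raan), np.sin(raan)
    ci, si = np.cos(inc), np.sin(inc)
    cw, sw = np.cos(argp), np.sin(argp)
    R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci,  sO*si],
                  [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci, -cO*si],
                  [sw*si,            cw*si,              ci]])
    return R @ r_pf, R @ v_pf

r_km, v_kms = kep_to_cart(146222900.0, 0.066958,
                          *np.radians([0.112, 191.912, 275.347, 35.681]),
                          mu=132712440041.93938)
print(r_km, v_kms)
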
# + # from https://newton.spacedys.com/neodys/index.php?pc=1.1.1.1&n=2000SG344 # true_anomaly_deg is also available instead of mean_anomaly_deg keplerian_elements = { 'semi_major_axis_km': 146222900, 'eccentricity': 0.066958, 'inclination_deg': 0.112, 'ra_of_asc_node_deg' : 191.912, 'arg_of_pericenter_deg' : 275.347, 'mean_anomaly_deg' : 35.681, 'gm' : 132712440041.93938 #JPL Horizons GM } # Lower triangular covariance matrix (21 elements in a list) covariance = [3.94346903514E+03, + \ -1.40266786788E-04, 5.00812620000E-12, + \ -2.91357694324E-04, 1.06017205000E-11, 3.15658331000E-11, + \ -3.83826656095E-03, 1.40431472000E-10, 2.32155752000E-09, 8.81161492000E-07, + \ -1.09220523817E-02, 3.62452521000E-10, -1.53067748000E-09, -8.70304198000E-07, 9.42413982000E-07, + \ -2.96713683611E-01, 1.05830167000E-08, 2.23110293000E-08, 2.93564832000E-07, 7.81029359000E-07, 2.23721205000E-05] # - # ### Set Parameters # # Commented parameters are optional. Uncomment to use. # + propagation_params = PropagationParams({ 'start_time': '2017-10-04T00:00:00Z', # propagation start time in ISO format 'end_time': '2017-10-11T00:00:00Z', # propagation end time in ISO format 'project_uuid': config['workspace'], 'description': 'Jupyter Keplerian Covariance Monte Carlo Example', 'monteCarloDraws': 10, 'propagationType': 'MONTE_CARLO', 'stopOnImpact': True, 'step_size': 86400, 'stopOnCloseApproach': False, 'stopOnImpactAltitudeMeters': 500000, 'closeApproachRadiusFromTargetMeters': 7000000000 }) opm_params = OpmParams({ 'epoch': '2017-10-04T00:00:00Z', 'keplerian_elements': keplerian_elements, 'keplerian_covariance': covariance, # object covariance # 'mass': 500.5, # object mass # 'solar_rad_area': 25.2, # object solar radiation area (m^2) # 'solar_rad_coeff': 1.2, # object solar radiation coefficient # 'drag_area': 33.3, # object drag area (m^2) # 'drag_coeff': 2.5, # object drag coefficient }) # - # ### Submit and Run Propagation batch_run = aps.execute_batch_propagation( project, propagation_params, opm_params, object_id="KeplerianCovarianceObject01", user_defined_id="KeplerianCovarianceWorkbook" ) # ### Get Status # See example notebook on how to search the ADAM system for previous submitted jobs print(batch_run.check_status()) batch_run.wait_for_complete(max_wait_sec=500, print_waiting = True) print() print(batch_run.check_status()) # ### Get Summary Statistics stats = batch_run.get_summary() print(stats) print(stats.get_misses()) # ### Get Ephemeris for specific run eph = batch_run.get_ephemeris_as_dataframe(2) print(eph) eph.plot(x='Epoch', y=['X','Y','Z']) eph.plot(x='Epoch', y=['Vx','Vy','Vz']) ephem_raw_data = batch_run.get_ephemeris_content(2) print(ephem_raw_data) # ### Get ending state vector # + close_approach_states = batch_run.get_states_dataframe(OrbitEventType.CLOSE_APPROACH) print("\nClose State Vectors") if not close_approach_states.empty: print(f'First close approach state:\n{close_approach_states.loc[0]}') else: print("None") impact_states = batch_run.get_states_dataframe(OrbitEventType.IMPACT) print("\nImpact State Vectors") if not impact_states.empty: print(f'First impact end state:\n{impact_states.loc[0]}') else: print("None") miss_states = batch_run.get_states_dataframe(OrbitEventType.MISS) print("\nMiss State Vectors") if not miss_states.empty: print(f'First miss end state:\n{miss_states.loc[0]}') else: print("None") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 
Python 3.8.1 64-bit ('.env') # metadata: # interpreter: # hash: 94acae6b494b26e47a2c1ab0d73ccd231e7e575de7735a2bb644a41097a8ec67 # name: Python 3.8.1 64-bit ('.env') # ---

# Building software can be challenging, but maintaining it after that initial build can be even more so. Being able to test that the software behaves as expected is crucial to building robust applications that users depend upon, and being able to automate that testing is even better! There are other posts on this blog about testing, such as [Introduction to Pytest & Pipenv](https://jackmckew.dev/introduction-to-pytest-pipenv.html), but this post focuses on a very specific type of testing: **property based testing**.
#
# Property based testing differs from conventional example based testing in that it generates the test data that drives your tests and, even better, it can help find the boundaries at which the tests fail.
#
# To demonstrate the power of property based testing, we're going to build some tests for the old faithful multiplication operator in Python.
#
# To help with this, we are going to use a few packages:
#
# - pytest (testing framework)
# - hypothesis (property testing package)
# - ipytest (to enable running tests in jupyter notebooks)
#
# Before we dive in, let's set up ipytest and use some **example-based testing** to verify the multiplication operator.

# +
import ipytest
import pytest

ipytest.autoconfig()

def multiply(number_1, number_2):
    return number_1 * number_2

# +
# %%run_pytest[clean]

def test_example():
    assert multiply(3,3) == 9
    assert multiply(5,5) == 25
    assert multiply(4,6) == 24
# -

# Fantastic, our examples passed the test! Now let's make sure a test can fail.

# +
# %%run_pytest[clean]

def test_fail_example():
    assert multiply(3,3) == 9
    assert multiply(3,5) == 150
# -

# Perfect! The test fails as expected and even tells us nicely which line of code it failed on. If we had lots of these examples to check, we could simplify things with pytest's **parametrize** decorator.

# +
# %%run_pytest[clean]

@pytest.mark.parametrize('number_1, number_2, expected', [
    (3,3,9),
    (5,5,25),
    (4,6,24)
])
def test_multiply(number_1,number_2,expected):
    assert expected == multiply(number_1,number_2)
# -

# Is this enough testing to verify our function? We're only testing a few conditions that we'd expect to work, while in reality it's the cases nobody foresees that are the most valuable to capture in our tests. It also raises another issue: the developer writing the tests may choose to write 2 or 2000 test cases, but that alone doesn't guarantee the behaviour is truly covered.
#
# ## Introduce Property Based Testing
#
# Property based testing is a form of generative testing: we don't supply specific examples with inputs and expected outputs. Rather, we define certain properties and generate randomized inputs to check that the properties hold. In addition to this, property based testing can also `shrink` failing inputs to find the exact boundary condition at which a test fails.
#
# While this doesn't 100% replace example-based testing, both approaches have their use, and property based tests have a lot of potential for effective testing. Now let's implement the same tests as above using property based testing with `hypothesis`.
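# Before diving into the tests, a quick added aside (assuming an interactive session; `.example()` is meant for exploration, not for use inside tests): a strategy object can be asked for sample values directly, which gives a feel for the data hypothesis will generate.

import hypothesis.strategies as st

print(st.integers().example())                          # some arbitrary int
print(st.integers(min_value=0, max_value=9).example())  # a bounded draw
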
from hypothesis import given import hypothesis.strategies as st # + # %%run_pytest[clean] @given(st.integers(),st.integers()) def test_multiply(number_1,number_2): assert multiply(number_1,number_2) == number_1 * number_2 # - # Note that we've used the `given` decorator which makes our test parametrized, and use strategies which cover the types of input data to generate. As per the hypothesis documentation *Most things should be easy to generate and everything should be possible*, we can find more information on them here: # # Now this doesn't look any different to last time, so what even changed! Let's change our multiply function so it behaves strangely and see if we can see hypothesis shrink the failures in action. Shrinking is whenever it finds a failure, it'll try to get to the absolute boundary case to help us find the potential cause and even better it'll remember this failure for next time so it doesn't poke it's head up again! def bad_multiply(number_1,number_2): if number_1 > 30: return 0 if number_2 < 0: return 0 return number_1 * number_2 # + # %%run_pytest[clean] @given(st.integers(),st.integers()) def test_bad_multiply(number_1,number_2): assert bad_multiply(number_1,number_2) == number_1 * number_2 # - # Fantastic, we can see that the failure has been shrunken to `number_1` being 31 and `number_2` being 1 which is one integer off the 'bad' boundary conditions we'd introduced into the multiply function. # # Hopefully this has introduced the power of property based testing and can help make software more robust for everyone! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="qQjfwiaBNwVq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 442} outputId="2dc2b9a0-eaa2-40aa-ed94-608ea76d8322" executionInfo={"status": "ok", "timestamp": 1583688930721, "user_tz": -60, "elapsed": 11490, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} # !pip install --upgrade tables # !pip install eli5 # + id="tmZGPwE5M7PA" colab_type="code" colab={} import pandas as pd import numpy as np from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import mean_absolute_error as mae from sklearn.model_selection import cross_val_score import eli5 from eli5.sklearn import PermutationImportance # + id="jnBh8R2cNcaD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="88d49554-93ab-4c7b-b0fb-a79db87834d7" executionInfo={"status": "ok", "timestamp": 1583689159863, "user_tz": -60, "elapsed": 836, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} # cd "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car" # + [markdown] id="IiTMIZBoOTLu" colab_type="text" # ## Wczytywanie danych # # + id="ZoZ9s1SIOW3a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="16b2bcac-1585-4238-d412-c72b9f543283" executionInfo={"status": "ok", "timestamp": 1583689188624, "user_tz": -60, "elapsed": 4397, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df = pd.read_hdf('data/car.h5') df.shape # + id="scl5HBEpOzMy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="7e664574-909e-4729-8fdc-912e9712b5e6" executionInfo={"status": "ok", "timestamp": 1583689197411, "user_tz": -60, "elapsed": 
1116, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df.columns # + id="sDJnB9DIO2Ix" colab_type="code" colab={} # + [markdown] id="Z4tUmvMwO7EH" colab_type="text" # ## Dummy Model # + id="k_N1Cv8WO9X8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="30744f7f-b887-4590-879e-0b1531712236" executionInfo={"status": "ok", "timestamp": 1583689255920, "user_tz": -60, "elapsed": 816, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df.select_dtypes(np.number).columns # + id="hGX_nrZcPEgS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="22e2af37-aa62-4d7c-9b9c-9c45f665e983" executionInfo={"status": "ok", "timestamp": 1583692444821, "user_tz": -60, "elapsed": 806, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} feats = ['car_id'] X = df[ feats ].values y = df['price_value'].values model = DummyRegressor() model.fit(X, y) y_pred = model.predict(X) mae(y, y_pred) # + id="x5_QjaD5PdeC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="59cbc8b3-e6e1-4417-e2e6-c450398300fe" executionInfo={"status": "ok", "timestamp": 1583692797593, "user_tz": -60, "elapsed": 1185, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} [x for x in df.columns if 'price' in x] # + id="tFvSiXfxclEA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="10e31acd-e79e-4224-a63a-358fc733d5c0" executionInfo={"status": "ok", "timestamp": 1583692818106, "user_tz": -60, "elapsed": 1336, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df['price_currency'].value_counts() # + id="bKynYA_fcqDN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="dd67ffff-b98b-4c79-c5eb-6325eef4b283" executionInfo={"status": "ok", "timestamp": 1583693493690, "user_tz": -60, "elapsed": 1812, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df['price_currency'].value_counts(normalize=True) * 100 # Żeby zobaczyć jaki procent stanowią wartości wyrażone w EUR # + id="OlfCfRcefO31" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="357cd9a5-d6dc-4782-bfe0-2eb8787514e7" executionInfo={"status": "ok", "timestamp": 1583693675316, "user_tz": -60, "elapsed": 861, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} df = df[ df['price_currency'] != 'EUR' ] df.shape # + [markdown] id="ewLxajTRgar5" colab_type="text" # Features # + id="r5izDDAgf2uK" colab_type="code" colab={} SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorized_values else: df[feat + SUFFIX_CAT] = factorized_values # print(feat) # + id="s-g8eBMlggsb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3d5c3e7c-a5d1-4cad-8564-4383903ee153" executionInfo={"status": "ok", "timestamp": 1583697095681, "user_tz": -60, "elapsed": 845, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} cat_feats = [x for x in df.columns if SUFFIX_CAT in x ] cat_feats = [x for x in cat_feats if 'price' not in x] len(cat_feats) # + id="HSH_CDcwrv0B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ca42445b-51bf-4e7a-d288-a92169682ce7" executionInfo={"status": "ok", "timestamp": 
1583700924439, "user_tz": -60, "elapsed": 5803, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} X = df[cat_feats].values y = df['price_value'].values model = DecisionTreeRegressor(max_depth=5) scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error') np.mean(scores) # + id="e8X1-EDotdiy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="a23f51b9-d628-41d0-e4db-2046f3297508" executionInfo={"status": "ok", "timestamp": 1583701190106, "user_tz": -60, "elapsed": 50641, "user": {"displayName": "", "photoUrl": "", "userId": "13310166437012434645"}} m = DecisionTreeRegressor(max_depth=5) m.fit(X, y) imp = PermutationImportance(m, random_state=0).fit(X, y) eli5.show_weights(imp, feature_names=cat_feats) # + id="jQ1b5ADQ8D-Z" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Read data from Rigol DS1054Z scope # https://readthedocs.org/projects/ds1054z/downloads/pdf/stable/ # ### Import the libraries from ds1054z import DS1054Z import matplotlib.pyplot as plt import numpy as np import math import scipy.io as sio import scipy.signal as sig from scipy.fft import rfft, rfftfreq import pyvisa as visa import time import os import shutil # ### Define plot mode. # Interactive mode is helpful for visuallizing the program execution # + # #%matplotlib widget # - # ### Verify scope connection scope = DS1054Z('192.168.1.206') print(scope.idn) # ### Test Description # This sheet is designed to take data on a Rigol DS1054z oscilloscope with channel 1 of the scope connected to an MSP6729 magnetic pick up viewing the chuck of a 7" x 10" mini lathe. # # Channel 1 common of the oscilloscope is connected to the magnetic pick up shield wire and the black wire. The signal of channel 1 is connected to the red wire of the mag pick up. # ![alt text](IMG_8440_Scope2MagPickup_Labels.png "Oscilloscope Connections") # ### Define functions used in the test # #### This is the function that sets the trigger level # def b_set_trigger(d_trigger_level = 1e-01): """Set the trigger configuration Keyword arguments: d_trigger_level -- Voltage level to trigger scope (default: 0.1 volts) Return values: [None] """ scope.write(':trigger:edge:source CHAN1') scope.write(':trigger:edge:level ' + format(d_trigger_level)) scope.single() # #### Function that contains the commands that setup the scope def b_setup_scope(scope, d_ch1_scale=5.e-1, timebase_scale=5e-2, d_trigger_level = 1e-01, b_single = True): """Setup Rigol ds1054z to read a 3/8-24 magnetic pickup Keyword arguments: scope -- Connection to scope d_ch1_scale -- Channel 1 scale (default: 0.5 volts) timebase_scale -- Time scale for data (default: 0.005 seconds) d_trigger_level -- Voltage level to trigger scope (default: 0.1 volts) b_trigger -- If true, then use trigger levels (default: True) Return values: d_ch1_scale_actual -- The closest value chosen by the scope """ scope.timebase_scale = timebase_scale scope.run() scope.display_channel(1,enable=True) scope.set_probe_ratio(1,1) scope.set_channel_scale(1,"{:e}".format(d_ch1_scale) +'V') scope.write(':CHANnel1:COUPling AC') scope.display_channel(2,enable=False) scope.display_channel(3,enable=False) scope.display_channel(4,enable=False) # Do we need a trigger? 
    if b_single:
        # Set the scope to capture after trigger
        b_set_trigger(d_trigger_level)
    else:
        # No trigger, useful for seeing the scope data when you aren't sure
        # what the signal looks like
        scope.write(":TRIGger:SWEep AUTO")

    return scope.get_channel_scale(1)

# #### Verify the help comments are at least somewhat on point
help(b_setup_scope)

# #### Define the function that acquires data from the scope
# This one is a little tricky because it can take time to acquire the signal, so there are pause statements that allow data to accumulate at the scope. If the acquisition terminates before the sampling is complete there will be NaN's in the list. In this case the NaN's are converted to zeros to allow processing to continue. It can be helpful to see a partial waveform to troubleshoot timing at the scope.

def d_get_data(i_ch=1, timebase_scale=5e-2):
    """Get data from the scope

    Keyword arguments:
    i_ch -- 1-based index of channel to sample (default: 1)
    timebase_scale -- Time scale used to estimate the acquisition delay (default: 0.05 seconds)

    Return values:
    np_d_ch1 -- numpy array of values from the scope

    """
    # Calculate the delay time
    d_time_delay = timebase_scale*32 + 1.

    # Acquire the data
    time.sleep(d_time_delay)
    d_ch1 = scope.get_waveform_samples(i_ch, mode='NORM')
    time.sleep(d_time_delay)
    scope.run()

    # Convert the list to a numpy array and replace NaN's with zeros
    np_d_ch1 = np.array(d_ch1)
    np_d_ch1 = np.nan_to_num(np_d_ch1)

    return np.array(np_d_ch1)

# #### Verify the help text
help(d_get_data)

# #### Define the class that extracts features from the data
class cl_sig_features:
    """Class to manage signal features on scope data

    Example usage:
    cl_test = cl_sig_features(np.array([1., 2., 3.]), 1.0)

    Should produce:

    print('np_d_ch1: ' + np.array2string(cl_test.np_d_ch1))
    print('timebase_scale: ' + '%0.3f' % cl_test.timebase_scale)
    print('i_ns: ' + '%3.f' % cl_test.i_ns)
    print('d_t_del: ' + '%0.3f' % cl_test.d_t_del)
    print('d_time' + np.array2string(cl_test.d_time))

    np_d_ch1: [1. 2. 3.]
    timebase_scale: 1.000
    i_ns: 3
    d_t_del: 4.000
    d_time[0. 4. 8.]
""" def __init__(self, np_d_ch1, timebase_scale): self.__np_d_ch1 = np_d_ch1 self.__timebase_scale = float(timebase_scale) self.__np_d_rpm = np.zeros_like(self.np_d_ch1) self.__d_thresh = np.NaN self.__d_events_per_rev = np.NaN @property def np_d_ch1(self): """Numpy array containing the scope data""" return self.__np_d_ch1 @property def timebase_scale(self): """Scope time scale""" return self.__timebase_scale @property def i_ns(self): """Number of samples in the scope data""" self.__i_ns = len(self.__np_d_ch1) return self.__i_ns @property def d_t_del(self): """Delta time between each sample""" self.__d_t_del = (12.*float(self.timebase_scale))/float(self.i_ns) return self.__d_t_del @property def d_time(self): """Numpy array with time values, in seconds""" self.__d_time = np.linspace(0,(self.i_ns-1),self.i_ns)*self.d_t_del return self.__d_time @property def d_fs(self): """Sampling frequeny in hertz""" self.__d_fs = 1.0/(self.__d_time[1]-self.__d_time[0]) return self.__d_fs @property def np_d_ch1_filt(self): """ Return the signal, filtered with Savitsky-Golay""" self.__i_win_len = 31; self.__i_poly_order = 1; self.__np_d_ch1_filt = sig.savgol_filter(self.np_d_ch1, self.__i_win_len, self.__i_poly_order); self.__str_filt_desc = ('Savitsky-Golay | Window Length: ' + '%3.f' % self.__i_win_len + ' | Polynomial Order: ' + '%2.f' % self.__i_poly_order) self.__str_filt_desc_short = 'SGolay' return self.__np_d_ch1_filt @property def str_filt_desc(self): "Complete Filt description of the Savitsky-Golay filter design" return self.__str_filt_desc @property def str_filt_desc_short(self): """Short Filt description, useful for plot legend labels""" return self.__str_filt_desc_short @property def np_d_ch1_filt1(self): """ Return the signal, filtered with butter FIR filter""" self.__i_poles = 1 if self.d_fs < 300: self.__d_wn = fs/8 else: self.__d_wn = 100 self.__sos = sig.butter(self.__i_poles, self.__d_wn, btype='low', fs=self.d_fs, output = 'sos') self.__np_d_ch1_filt1 = sig.sosfilt(self.__sos, self.np_d_ch1) self.__str_filt1_desc = ('Butterworth | Poles: ' + '%2.f' % self.__i_poles + ' | Lowpass corner (Hz): ' + '%0.2f' % self.__d_wn) self.__str_filt1_desc_short = 'Butter' return self.__np_d_ch1_filt1 @property def str_filt1_desc(self): "Complete Filt1 description of the Butterworth filter design" return self.__str_filt1_desc @property def str_filt1_desc_short(self): """Short Filt1 description, useful for plot legend labels""" return self.__str_filt1_desc_short @property def np_d_eventtimes(self): """Numpy array of trigger event times""" return self.__np_d_eventtimes @property def d_thresh(self): """Trigger threshold value""" return self.__d_thresh @property def np_d_rpm(self): """Estimated RPM values""" return self.__np_d_rpm @property def d_events_per_rev(self): """Events per revolution""" return self.__d_events_per_rev @np_d_ch1.setter def np_d_ch1(self, np_d_ch1): self.__np_d_ch1 = np_d_ch1 @timebase_scale.setter def timebase_scale(self, timebase_scale): self.__timebase_scale = timebase_scale # Method for calculating the spectrum for a real signal def d_fft_real(self): """Calculate the half spectrum since this is a real-valued signal""" d_y = rfft(np_d_ch1) d_ws = rfftfreq(self.i_ns, 1./self.d_fs) return([d_ws, d_y]) # Plotting method, time domain signals. 
def plt_sigs(self): """Plot out the data in this signal feature class in the time domain Return values: handle to the plot """ plt.figure() plt.plot(self.d_time, self.np_d_ch1) plt.plot(self.d_time, self.np_d_ch1_filt) plt.plot(self.d_time, self.np_d_ch1_filt1) plt.grid() plt.xlabel("Time, seconds") plt.ylabel("Channel output, volts") plt.legend(['as-aquired', self.str_filt_desc_short, self.str_filt1_desc_short]) plt.show() self.__plot_handle = plt.gcf() return self.__plot_handle # Plotting method for single-sided (real signal) spectrum def plt_spec(self): """Plot data in frequency domain. This method assumes a real signal Return values: handle to the plot """ self.__spec = self.d_fft_real() plt.figure() plt.plot(self.__spec[0], np.abs(self.__spec[1])) plt.grid() plt.xlabel("Frequency, hertz") plt.ylabel("Channel amplitude, volts") plt.show() self.__plot_handle = plt.gcf() return [self.__plot_handle, self.__spec[0], self.__spec[1]] # Plotting method for the eventtimes def plt_eventtimes(self): """Plot event data in time. Return values: list: [handle to the plot, np array of eventtimes] """ # The eventtimes all should have threshold value for voltage self.__np_d_eventvalue = np.ones_like(self.__np_d_eventtimes)*self.d_thresh # Put up the the plot time plt.figure() plt.plot(self.__d_time, self.np_d_ch1) plt.plot(self.np_d_eventtimes, self.__np_d_eventvalue, "ok") plt.xlabel('Time, seconds') plt.ylabel('Amplitude, volts') plt.legend(['as-aquired', 'eventtimes']) plt.title('Amplitude and eventtimes vs. time') self.__plot_handle = plt.gcf() return [self.__plot_handle, self.__np_d_eventtimes] # Plotting method for the eventtimes def plt_rpm(self): """Plot rpm data in time. Return values: list: [handle to the plot, np array of RPM values] """ # Put up the the plot time fig,ax1 = plt.subplots() ax2 = ax1.twinx() ax1.plot(self.__d_time, self.np_d_ch1) ax2.plot(self.np_d_eventtimes, self.__np_d_rpm, "ok") ax1.set_xlabel('Time, seconds') ax1.set_ylabel('Amplitude, volts') ax2.set_ylabel('Event speed, RPM') plt.legend(['as-aquired', 'RPM']) plt.title('Amplitude and eventtimes vs. time') plt.show() self.__plot_handle = plt.gcf() return [self.__plot_handle, self.__np_d_rpm] # Estimate triggers for speed, public method def np_d_est_triggers(self, i_direction=0, d_thresh=0, d_hyst=0.1, i_kernel=5, b_verbose=False): """ This method estimates speed by identifying trigger points in time, a given threshold and hysteresis. When the signal level crosses the threshold, the trigger holds off. The trigger holds off until the signal crosses hysteresis levels. Hysteresis is defined relative to the threshold voltage. The trigger times can be used to estimate the rotating speed. Keyword arguments: i_direction -- 0 to search for threshold on rising signal, 1 to search on a falling signal. d_thresh -- Threshold value (default: 0.0 volts for zero crossings) d_hyst -- Hysteresis value (default: 0.1 volts) i_kernel -- Number of samples to consider in estimating slope, must be an odd number (default: 5) b_verbose -- Print the intermediate steps (default: False). Useful for stepping through the method to troubleshoot or understand it better. Return values: np_d_eventtimes -- numpy array with list of trigger event times """ # Store to local private member, it gets used in other places in the class self.__d_thresh = d_thresh # Initialize trigger state to hold off: the trigger will be active # once the signal crosses the hysteresis b_trigger_hold = True # half kernel get used a lot self.__i_half_kernel = int((i_kernel - 1)/2.) 
# Use smoothing and derivative functions of S-G filter for estimating rise/fall self.__np_d_ch1_dir = sig.savgol_filter(self.np_d_ch1, i_kernel, 1, deriv=1); # Initiate state machine: one state for rising signal, 'up', (i_direction = 0) # and another for falling signal, 'down', (i_direction = 1) self.__d_hyst_abs = 0 idx_event = 0 self.__np_d_eventtimes = np.zeros_like(self.np_d_ch1) if i_direction == 0: # Define the absolute hysteretic value, rising self.__d_hyst_ab = self.__d_thresh - d_hyst # Loop through the signal for idx,x in enumerate(self.np_d_ch1): # Intermediate results if b_verbose: print('idx: ' + '%2.f' % idx + ' | x: ' + '%0.5f' % x + ' | s-g: ' + '%0.4f' % self.__np_d_ch1_dir[idx]) # The trigger leaves 'hold-off' state if the slope is # negative and we fall below the threshold if (x <= self.__d_hyst_ab and self.__np_d_ch1_dir[idx] < 0 and b_trigger_hold == True): # Next time the signal rises above the threshold, trigger # will be set to hold-off state b_trigger_hold = False # If we are on the rising portion of the signal and there is no hold off # state on the trigger, trigger, and change state if (x >= self.__d_thresh and self.__np_d_ch1_dir[idx] > 0 and b_trigger_hold == False): # Change state to hold off b_trigger_hold = True # Estimate time of crossing with interpolation if idx>0: # Interpolate to estimate the actual crossing from the 2 nearest points xp = np.array([self.np_d_ch1[idx-1], self.np_d_ch1[idx]]) fp = np.array([self.__d_time[idx-1], self.__d_time[idx]]) self.__np_d_eventtimes[idx_event] = np.interp(d_thresh, xp, fp) # More intermediate results if b_verbose: print('xp: ' + np.array2string(xp) + ' | fp: ' + np.array2string(fp) + ' | d_thresh: ' + '%0.4f' % d_thresh + ' | eventtimes: ' + '%0.4f' % self.__np_d_eventtimes[idx_event]) # Increment the eventtimes index idx_event += 1 else: # Define the absolute hysteretic value, falling self.__d_hyst_ab = self.__d_thresh + d_hyst # Loop through the signal for idx,x in enumerate(self.np_d_ch1): # Intermediate results if b_verbose: print('idx: ' + '%2.f' % idx + ' | x: ' + '%0.5f' % x + ' | s-g: ' + '%0.4f' % self.__np_d_ch1_dir[idx]) # The trigger leaves 'hold-off' state if the slope is # positive and we rise above the threshold if (x >= self.__d_hyst_ab and self.__np_d_ch1_dir[idx] > 0 and b_trigger_hold == True): # Next time the signal rises above the threshold, trigger # will be set to hold-off state b_trigger_hold = False # If we are on the falling portion of the signal and there is no hold off # state on the trigger, trigger, and change state if (x <= self.__d_thresh and self.__np_d_ch1_dir[idx] < 0 and b_trigger_hold == False): # Change state to hold off b_trigger_hold = True # Estimate time of crossing with interpolation if idx>0: # Interpolate to estimate the actual crossing from the 2 nearest points xp = np.array([self.np_d_ch1[idx-1], self.np_d_ch1[idx]]) fp = np.array([self.__d_time[idx-1], self.__d_time[idx]]) self.__np_d_eventtimes[idx_event] = np.interp(d_thresh, xp, fp) # More intermediate results if b_verbose: print('xp: ' + np.array2string(xp) + ' | fp: ' + np.array2string(fp) + ' | d_thresh: ' + '%0.4f' % d_thresh + ' | eventtimes: ' + '%0.4f' % self.__np_d_eventtimes[idx_event]) # Increment the eventtimes index idx_event += 1 # Remove zero-valued element self.__np_d_eventtimes = np.delete(self.__np_d_eventtimes, np.where(self.__np_d_eventtimes == 0)) return self.__np_d_eventtimes # Method to estimate the RPM values def d_est_rpm(self, d_events_per_rev=1): """ Estimate the RPM from the signal using 
eventtimes which must have calculate with a previous call to the method np_d_est_triggers. """ # Store the new value in the object self.__d_events_per_rev = d_events_per_rev # Calculate the RPM using the difference in event times self.__np_d_rpm = 60./(np.diff(self.np_d_eventtimes)*float(d_events_per_rev)) # To keep the lengths the same, append the last sample self.__np_d_rpm = np.append(self.__np_d_rpm, self.__np_d_rpm[len(self.__np_d_rpm)-1]) return self.__np_d_rpm # Save the data def b_save_data(self, str_data_prefix = 'testclass', idx_data=1): """ Save the data in the object to a .csv file Keyword arguments: str_data_prefix -- String with file prefix (defaults to 'testclass') idx_data -- File index (defaults to 1) Return values: True if write succeeds """ str_file = str_data_prefix + '_' '%03.0f' % idx_data + '.csv' file_data = open(str_file,'w+') file_data.write('X,CH1,Start,Increment,\n') str_line = 'Sequence,Volt,Volt,0.000000e-03,' + str(self.d_t_del) file_data.write(str_line+'\n') for idx_line in range(0, self.i_ns): str_line = str(idx_line) + ',' + '%0.5f' % self.np_d_ch1[idx_line] + ',' + ',' file_data.write(str_line+'\n') file_data.close() return True # #### Verify help and class structure help(cl_sig_features) # ### Setup the test sequence # Seed the sequence, these typically work well for 200-500 RPM and the mag pick-up gapped at 10 mils str_data_prefix = 'test010' idx_data = 0 d_timebase_scale = 1e-1 d_ch1_scale = 5.e-1 # ### Acquisition loop # This loop has several steps: # # #### Acquire discovery signal # The code does not assume an RPM so it derives it from signal features. # # #### Scope setup # Setup the vertical and horizontal scales on the scope. For the first pass, no trigger is used and time scale is set so that at least 2 revolutions of the lathe should be seen in the signal. This lets us see if the signal is valid and how we want to configure the trigger. # # #### Initial acquisition # Once the setup is complete acquire the data and push the information into the signal feature class. The speed will be used to set the timescale so that we get about 5 events in the scope window # # #### Visualize the data # A few plots are presented of the scope data. Useful for troubleshooting # # #### Estimate the speed # With the event frequency known, the scope timebase_scale can be calculated while True: # Setup the scope for the trial sample acquisition d_ch1_scale = b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale, d_trigger_level = 1e-01, b_single = False) # Acquire the test sample np_d_ch1 = d_get_data(i_ch=1, timebase_scale=d_timebase_scale) # Instatiate the class, send the waveform samples and scales cl_sig_no_trigger = cl_sig_features(np_d_ch1, d_timebase_scale) # Plot out the signal hp = cl_sig_no_trigger.plt_sigs() # The shape of the response is similar as speed increased, but the # triggering threshold has to increase to accomodate the higher # amplitudes d_thresh_est = 0.2 * (d_ch1_scale/0.5) # Calculate the trigger event times np_d_eventtimes = cl_sig_no_trigger.np_d_est_triggers(i_direction=0, d_thresh=d_thresh_est, d_hyst=0.2, b_verbose=False) np_d_eventtimes # Visualize the eventtimes hp = cl_sig_no_trigger.plt_eventtimes() # Calculated the desired timebase_scale print("d_timebase_scale (prior to adjustment): " '%0.3f' % d_timebase_scale) d_timebase_scale = (6./12.)*(np.mean(np.diff(np_d_eventtimes))) print("d_timebase_scale (after adjustment): " '%0.3f' % d_timebase_scale) # Check for clipping and correct scaling. 
The scope has 8 vertical division so the # total voltage range on the screen is 8 * d_ch1_scale d_pkpk = np.max(np_d_ch1) - np.min(np_d_ch1) print("d_pkpk: " + "%0.4f" % d_pkpk) d_volts_scale = (8*d_ch1_scale) print("d_volts_scale: " + "%0.4f" % d_volts_scale) if ( d_pkpk > d_volts_scale ): print("Voltage scale reduced") d_ch1_scale = d_ch1_scale*2. # Could be the vertical scale is too small, check for that if ( abs( d_volts_scale/d_pkpk > 2. )): print("Voltage scale increased") d_ch1_scale = d_ch1_scale/2. # The scope trigger setting scales with the overall amplitude since the # shape of the response is similar d_trigger_level_est = 0.2 * (d_ch1_scale/0.5) # Reset the scope with the adjusted features, set to trigger on single sample b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale, d_trigger_level = 1e-01, b_single = True) # Acquire the sample np_d_ch1 = d_get_data(i_ch=1, timebase_scale=d_timebase_scale) # Reset back to free-run b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale, d_trigger_level = 1e-01, b_single = False) # Instatiate the class, send the waveform samples and scales cl_sig_no_trigger = cl_sig_features(np_d_ch1, d_timebase_scale) # Visualize the data hp = cl_sig_no_trigger.plt_sigs() # Save it off to a file b_file_save = cl_sig_no_trigger.b_save_data(str_data_prefix = str_data_prefix, idx_data = idx_data) # Wait for the next speed adjustment and continue idx_data += 1 time.sleep(2) input("Press Enter to continue...") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 循环 # - 循环是一种控制语句块重复执行的结构 # - while 适用于广度遍历 # - for 开发中经常使用 # #广度遍历 # #深度遍历 # ## while 循环 # - 当一个条件保持真的时候while循环重复执行语句 # - while 循环一定要有结束条件,否则很容易进入死循环 # - while 循环的语法是: # # while loop-contunuation-conndition: # # Statement # ## 示例: # sum = 0 # # i = 1 # # while i <10: # # sum = sum + i # i = i + 1 # ## 错误示例: # sum = 0 # # i = 1 # # while i <10: # # sum = sum + i # # i = i + 1 # - 一旦进入死循环可按 Ctrl + c 停止 # ## EP: # ![](../Photo/143.png) # ![](../Photo/144.png) # # 验证码 # - 随机产生四个字母的验证码,如果正确,输出验证码正确。如果错误,产生新的验证码,用户重新输入。 # - 验证码只能输入三次,如果三次都错,返回“别爬了,我们小网站没什么好爬的” # - 密码登录,如果三次错误,账号被锁定 # 只要有break出现,不管嵌套在哪个循环里,只要执行,while循环结束。 import random a=random.randint(1000,9999) print(a) yonghu=eval(input("验证码")) b=1 while a!=yonghu: b=b+1 if b==4: print("别爬了,小网站没什么好爬的") break else: a=random.randint(1000,9999) print(a) yonghu=eval(input("验证码")) if a==yonghu: print("验证码正确") import random a1=random.randint(65,90) a2=random.randint(65,90) a3=random.randint(65,90) a4=random.randint(65,90) b1=chr(a1) b2=chr(a2) b3=chr(a3) b4=chr(a4) print(b1,b2,b3,b4) yonghu=input("验证码") zong=b1+b2+b3+b4 for i in range(2): if zong==yonghu: print("输入正确") break else: a1=random.randint(65,90) a2=random.randint(65,90) a3=random.randint(65,90) a4=random.randint(65,90) b1=chr(a1) b2=chr(a2) b3=chr(a3) b4=chr(a4) print(b1,b2,b3,b4) yonghu=input("验证码") else: print("小网站 不值得") for i in range(3): # ## 尝试死循环 # ## 实例研究:猜数字 # - 你将要编写一个能够随机生成一个0到10之间的且包括两者的数字程序,这个程序 # - 提示用户连续地输入数字直到正确,且提示用户输入的数字是过高还是过低 # ## 使用哨兵值来控制循环 # - 哨兵值来表明输入的结束 # - ![](../Photo/54.png) # ## 警告 # ![](../Photo/55.png) # ## for 循环 # - Python的for 循环通过一个序列中的每个值来进行迭代 # - range(a,b,k), a,b,k 必须为整数 # - a: start # - b: end # - k: step # - 注意for 是循环一切可迭代对象,而不是只能使用range #怎么查看对象可不可以迭代__(迭代:可以一个一个拿出来) #name.__iter__ # # 在Python里面一切皆对象 # ## EP: # - ![](../Photo/145.png) a=1 sum=0 for 
i in range(1002): sum =sum+a print(sum) # ## 嵌套循环 # - 一个循环可以嵌套另一个循环 # - 每次循环外层时,内层循环都会被刷新重新完成循环 # - 也就是说,大循环执行一次,小循环会全部执行一次 # - 注意: # > - 多层循环非常耗时 # - 最多使用3层循环 # ## EP: # - 使用多层循环完成9X9乘法表 # - 显示50以内所有的素数 # ## 关键字 break 和 continue # - break 跳出循环,终止循环 # - continue 跳出此次循环,继续执行 #可代替continue for i in range(10): if i==5: pass else: print(i) # + for i in range(1,10): for j in range(1,i+1): print(i,"*",j,"=",i*j,end=" ") print() # - for a in range(1,10): for i in range(2,50): if i%a==0: a=a+1 pass elif i>3 and i%3==0: pass elif i%5==0: pass elif i==49: pass else: print(i) break for i in range(5,50): for a in range(2,i): if i%a==0: break else: print(i) # ## 注意 # ![](../Photo/56.png) # ![](../Photo/57.png) # # Homework # - 1 # ![](../Photo/58.png) list_1=[] for i in range(10000): a=input("如果想停止 输入abc ") list_1.append(a) if a=="abc": list_1.pop() print(list_1) break num=0 for c in list_1: c=int(c) num=c+num d=num/i print(num) print(float(d)) # - 2 # ![](../Photo/59.png) a=10000 b=0 c=0 for i in range(9): a=a+a*0.05 b=b+a c=a print(a) for d in range(3): a=a+a*0.05 c=c+a print(c) # - 4 # ![](../Photo/60.png) list_2=[] list_3=[] for a in range(100,1000): if a%5==0 and a%6==0: list_2.append(a) else: pass print(list_2[1:11]) print(list_2[11:21]) print(list_2[21:31]) # - 5 # ![](../Photo/61.png) n=1 while n**2<12000: n=n+1 print(n) m=12000 while m**3>12000: m=m-1 print(m) # - 6 # ![](../Photo/62.png) money=eval(input("")) year=eval(input("")) # - 7 # ![](../Photo/63.png) # - 8 # ![](../Photo/64.png) # - 9 # ![](../Photo/65.png) # - 10 # ![](../Photo/66.png) # + list4=[] for a in range(1,10000): for b in range(1,a): b=b+1 if a%b==0: list4.append(b) else: break print(list4) # - # - 11 # ![](../Photo/67.png) for i in range(1,8,2): for j in range(2,8): if i!=j: print(i,j) print("共有",21) # - 12 # ![](../Photo/68.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Region Adjacency Graphs # # This example demonstrates the use of the `merge_nodes` function of a Region # Adjacency Graph (RAG). The `RAG` class represents a undirected weighted graph # which inherits from `networkx.graph` class. When a new node is formed by # merging two nodes, the edge weight of all the edges incident on the resulting # node can be updated by a user defined function `weight_func`. # # The default behaviour is to use the smaller edge weight in case of a conflict. # The example below also shows how to use a custom function to select the larger # weight instead. # # + from skimage.future.graph import rag import networkx as nx from matplotlib import pyplot as plt import numpy as np def max_edge(g, src, dst, n): """Callback to handle merging nodes by choosing maximum weight. Returns a dictionary with `"weight"` set as either the weight between (`src`, `n`) or (`dst`, `n`) in `g` or the maximum of the two when both exist. Parameters ---------- g : RAG The graph under consideration. src, dst : int The vertices in `g` to be merged. n : int A neighbor of `src` or `dst` or both. Returns ------- data : dict A dict with the "weight" attribute set the weight between (`src`, `n`) or (`dst`, `n`) in `g` or the maximum of the two when both exist. 
""" w1 = g[n].get(src, {'weight': -np.inf})['weight'] w2 = g[n].get(dst, {'weight': -np.inf})['weight'] return {'weight': max(w1, w2)} def display(g, title): """Displays a graph with the given title.""" pos = nx.circular_layout(g) plt.figure() plt.title(title) nx.draw(g, pos) nx.draw_networkx_edge_labels(g, pos, font_size=20) g = rag.RAG() g.add_edge(1, 2, weight=10) g.add_edge(2, 3, weight=20) g.add_edge(3, 4, weight=30) g.add_edge(4, 1, weight=40) g.add_edge(1, 3, weight=50) # Assigning dummy labels. for n in g.nodes(): g.nodes[n]['labels'] = [n] gc = g.copy() display(g, "Original Graph") g.merge_nodes(1, 3) display(g, "Merged with default (min)") gc.merge_nodes(1, 3, weight_func=max_edge, in_place=False) display(gc, "Merged with max without in_place") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="KYfzS8gZ1qCd" # # Relatório de Análise VII # + id="O8KBPPpKwXqH" import pandas as pd # + colab={"base_uri": "https://localhost:8080/", "height": 254} id="Uul4mehi0Vvl" outputId="5b80235a-902c-42f5-b935-20b486126d8e" dados = pd.read_csv('/content/aluguel.csv', sep= ';') dados.head(7) # + id="imTvAKvY1zXn" bairros = ['Barra da Tijuca', 'Copacabana', 'Ipanema', 'Leblon', 'Botafogo', 'Flamengo', 'Tijuca'] selecao = dados['Bairro'].isin(bairros) dados = dados[selecao] # + colab={"base_uri": "https://localhost:8080/"} id="fXKZw-ff2tpr" outputId="ad31ad36-2f39-4f73-9257-16853c021b31" dados['Bairro'].drop_duplicates() # + colab={"base_uri": "https://localhost:8080/"} id="br3eKpDI20lW" outputId="4e9c6f88-e6a5-4c7d-fbff-a82da9f13141" grupo_bairro = dados.groupby('Bairro') type(grupo_bairro) # + colab={"base_uri": "https://localhost:8080/"} id="FyfDXS_H286N" outputId="31003c22-a864-4c56-8c62-8e633ed36328" for bairro, data in grupo_bairro: print('{} -> {}'.format(bairro, data.Valor.mean())) # + colab={"base_uri": "https://localhost:8080/"} id="tH4F_0NY3DxS" outputId="f3995643-6946-4c26-beef-1a880f7647dc" grupo_bairro['Valor'].mean().round(2) # + [markdown] id="uKrN4QjI9bvY" # # Estatíscas Descritivas # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="bMOMxkLZ9bAb" outputId="4f789ec8-2f93-4193-ae69-a9e5009d9be5" grupo_bairro['Valor'].describe().round(2) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="kxfg65Cf9rcW" outputId="a86ac180-75dc-4f0e-f0a8-93a68843ba17" grupo_bairro['Valor'].aggregate(['min', 'max']).rename(columns = {'min': 'Minimo', 'max': 'Maximo'}) # + id="tMl5d_-k-RrR" # %matplotlib inline import matplotlib.pyplot as plt plt.rc('figure', figsize= (20, 10)) # + colab={"base_uri": "https://localhost:8080/", "height": 712} id="RND4_92N_DJO" outputId="e59b1c2e-01bb-4cdd-e1f2-e419ee7a2ed6" fig = grupo_bairro['Valor'].mean().plot.bar(color= 'blue') fig.set_ylabel('Valor do Aluguel') fig.set_title('Valor Médio do Aluguel por Bairro', {'fontsize': 22}) # + id="w0REBI9h_bs-" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bayesian Optimisation Xmas # This notebook implements a xmas-themed guessing game to try and find the maximum of a black-box function. # It is part of this [repository](https://github.com/JohnReid/BayesOptJournalClub). 
# + from dataclasses import astuple import ipywidgets as widgets from IPython.display import display import numpy as np from scipy.interpolate import interp1d import pandas as pd import altair as alt import tensorflow as tf # For Gaussian processes import gpflow from gpflow.utilities import print_summary, set_trainable # For Bayesian optimisation import trieste from trieste.bayesian_optimizer import OptimizationResult from trieste.utils.objectives import mk_observer from trieste.acquisition.rule import OBJECTIVE, EfficientGlobalOptimization from trieste.acquisition import NegativeLowerConfidenceBound, ExpectedImprovement # Supporting code in the repository import blackbox as bb from importlib import reload reload(bb); # - # ## Santa's snowy scenario # It is Xmas Eve and Santa has to deliver his presents to all the children around the world. # Unfortunately it has been a really cold winter and some houses are completely blanketed by snow. # # Santa does have a shovel and could dig down to the chimney of each house to deliver his presents if only he knew where the chimneys were. # Santa does know that the chimney is the highest point in the skyline of each house. # # Fortunately Rudolph can help. # He can measure the depth of the snow at any point with his antlers but unfortunately this takes some time and the measurements are noisy. # Furthermore Rudolph demands a boiled sweet (his favourite treat) every time he measures the snow depth and Santa only has so many sweets. # # Santa has one night to deliver his presents, how can he find the chimneys in time? # # ### Skyline # Let's see what a typical house's skyline looks like. # `x` represents the location and `depth` measures the depth below the snowline. x, depth = bb.random_submerged_skyline(100) house = pd.DataFrame(dict(x=x, depth=depth)) house_chart = ( alt.Chart(house) .mark_line() .encode(x='x', y='depth')) house_chart.properties(width=800, height=300) # ### Grid-search strategy # One strategy to find the chimney is to lay down a grid and ask Rudolph to measure the height at each grid point. # # Let's fix the standard deviation of the Rudolph's measurement error and choose a number of grid points: error_sd = widgets.FloatSlider( value=.04, min=0.02, max=.1, step=0.01, description='error sd:', readout_format='.2f', ) npoints = widgets.IntSlider( value=20, min=3, max=100, step=1, description='grid size:', readout_format='d' ) display(error_sd) display(npoints) # Rudolph makes his measurements. measurements = pd.DataFrame(dict(x=np.linspace(house['x'].min(), house['x'].max(), npoints.value))) f = interp1d(house['x'], house['depth']) measurements['f'] = f(measurements['x']) measurements['y'] = measurements['f'] + bb.rng.normal(0, error_sd.value, size=npoints.value) measurements_chart = ( alt.Chart(measurements) .mark_point(color='red') .encode(x='x', y='y')) (measurements_chart + house_chart).properties(width=800, height=300) # We can see that there are many wasted measurements in locations where we are reasonably sure the maximum is not located. # In addition it is still difficult to be sure where the chimney is. # # How can we do better? # # ## Bayesian optimisation # This is where Bayesian optimisation (BO) comes in. # Bayesian optimisation is going to do better than grid (or random) search by choosing where to measure the depth of snow in some sort of *optimal* way. 
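# As a preview of the loop described just below, here is a minimal sketch of the idea on a toy 1-D problem. This is an illustration only and makes its own assumptions: it uses scikit-learn's `GaussianProcessRegressor` as the surrogate and a simple upper-confidence-bound rule as the acquisition function, rather than the GPflow/Trieste machinery used later in this notebook, and the noisy objective is made up for the example.

# +
# Sketch of a Bayesian-optimisation-style loop (assumed pieces: sklearn GP surrogate,
# UCB acquisition, a made-up noisy 1-D objective on [-1, 1]).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def noisy_objective(x):
    # Hypothetical black box: a smooth bump near x = 0.3 plus measurement noise
    return np.exp(-8.0 * (np.asarray(x) - 0.3) ** 2) + 0.05 * rng.normal(size=np.shape(x))

x_obs = np.array([-0.6, 0.0, 0.6])          # initial noisy measurements
y_obs = noisy_objective(x_obs)
x_grid = np.linspace(-1.0, 1.0, 201)

for _ in range(10):
    # Update the model of the underlying function given the data so far
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05 ** 2)
    gp.fit(x_obs.reshape(-1, 1), y_obs)
    mean, sd = gp.predict(x_grid.reshape(-1, 1), return_std=True)
    # Choose the next measurement location (upper confidence bound)
    x_next = x_grid[np.argmax(mean + 2.0 * sd)]
    # Ask the black box for a noisy measurement at that location
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, noisy_objective(x_next))

print('estimated maximiser:', x_grid[np.argmax(gp.predict(x_grid.reshape(-1, 1)))])
# -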
# # Bayesian optimisation requires: # - a *prior on the underlying function* to be optimised (some understanding of houses' skylines and their depths) # - a *prior on the measurement error* (some understanding of how good Rudolph is at measuring depth) # # Using some underlying theory, Bayesian optimisation then iterates around the following loop: # - *choose the next measurement location* so as to minimise the total number of measurements required (the acquisition function) # - ask the black-box function for a *noisy measurement* at that location # - *update its model* of the underlying function and measurement error (computes a posterior given prior and data) # # Some stopping criterion is used to avoid iterating indefinitely. # Bayesian optimisation returns the posterior for the underlying function to use as you see fit. # In particular you may wish to obtain a posterior over the location of the maximum of the function. # # ## Optimisation guessing game # However, before we try Bayesian optimisation, we will try to optimise a black-box function ourselves. # Your task will be to find the maximum of the underlying function using as few function evaluations as possible. # # Unfortunately we do not have a suitable Bayesian model for house skylines so we will use a Gaussian process prior on functions instead. # Some typical functions are shown below to give you an idea what the underlying function might look like. blackbox1 = bb.GPBlackBox(ndim=1) xx = blackbox1.xgrid(npoints=100)[:, 0] prior_samples = pd.concat([ pd.DataFrame(dict(x=xx, f=bb.prior_draw(blackbox1.kernel, xx), sample=s)) for s in range(3)]) prior_chart = ( alt.Chart(prior_samples) .mark_line() .encode(x='x', y='f', color='sample:N')) prior_chart.properties(width=800, height=300) # ### A function of one input # Now it is your turn. # You have to optimise a function of one input with domain `[-1, 1]`. # To start you off you already have three evaluations. # Remember these are noisy versions of the underlying objective function. blackbox1 = bb.GPBlackBox(ndim=1) y = blackbox1([-.6])[0][0] y = blackbox1([.6])[0][0] blackbox1.plot_xy().properties(width=800, height=300) # Now you should choose which input point (`x0`) to evaluate the function at next: x0 = widgets.FloatSlider( value=0.2, min=bb.DOMAIN_MIN, max=bb.DOMAIN_MAX, step=0.01, description='x0:', readout_format='.2f', ) display(x0) # Repeatedly adjust the slider above and execute the following cell to make new evaluations and plot all the evaluations `y` so far. y = blackbox1([x0.value])[0][0] print(f'Evaluated black box at {x0.value}; result={y}') blackbox1.plot_xy().properties(width=800, height=300) # Now go back and choose another point at which to evaluate the black box function. # # If you have finished evaluating the function at different points and you are confident where the maximum is, you can make a guess before executing the cells below. The cell below shows the function as a line and the noisy data we received as evaluations of it: f1 = blackbox1.sample_f(100) chart1f = ( alt.Chart(f1) .mark_line() .encode(x='x0', y='f')) chart1y = blackbox1.plot_xy() chart1 = alt.layer(chart1y, chart1f) chart1.properties(width=800, height=300) # Note that we didn't actually sample the underlying function `f` until the cell above despite having already seen the noisy observations `y`. # If you execute the cell above again, you will have a different sample for `f` that is also consistent with our noisy observations `y`. 
# We can do this because Gaussian processes have a nice consistency property under marginalisation. # # ### Bayesian optimisation for 1-dimensional input # Now we will try Bayesian optimisation using the [Trieste](https://secondmind-labs.github.io/trieste/) package which uses [GPflow](https://gpflow.readthedocs.io/en/master/) for Gaussian process regression. # This section is based on the Trieste tutorial. # + model1 = gpflow.models.GPR((f1['x0'].to_numpy().reshape((-1, 1)), f1['f'].to_numpy().reshape((-1, 1))), kernel=gpflow.kernels.Matern52(lengthscales=.4), noise_variance=1.1e-6) def blackbox_bo1(x): mean, var = model1.predict_y(tf.reshape(x, (-1, 1))) return - bb.rng.normal(loc=mean, scale=np.sqrt(blackbox1.noise_variance)) # - # #### Initial sample over search space # + observer = mk_observer(blackbox_bo1, OBJECTIVE) lower_bound = tf.cast([bb.DOMAIN_MIN], gpflow.default_float()) upper_bound = tf.cast([bb.DOMAIN_MAX], gpflow.default_float()) search_space = trieste.space.Box(lower_bound, upper_bound) num_initial_points = 5 initial_query_points = search_space.sample(num_initial_points) initial_data = observer(initial_query_points) # - # #### Model the objective function kernel = gpflow.kernels.Matern52(lengthscales=0.4 * tf.ones(1,)) objective_model = gpflow.models.GPR(astuple(initial_data[OBJECTIVE]), kernel=kernel, noise_variance=blackbox1.noise_variance) set_trainable(objective_model.kernel.lengthscales, False) set_trainable(objective_model.likelihood, False) print_summary(objective_model) model = {OBJECTIVE: trieste.models.create_model_interface( { "model": objective_model, "optimizer": gpflow.optimizers.Scipy(), "optimizer_args": {"options": dict(maxiter=100)}, } )} # #### Acquisition function # Could try either of the two below. # `ExpectedImprovement` is the default in Trieste. # `NegativeLowerConfidenceBound` is probably more robust to model mis-specification. acq_fn = NegativeLowerConfidenceBound() # acq_fn = ExpectedImprovement() rule = trieste.acquisition.rule.EfficientGlobalOptimization(acq_fn.using(OBJECTIVE)) # #### Optimisation loop bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space) result: OptimizationResult = bo.optimize(15, initial_data, model, acquisition_rule=rule) if result.error is not None: raise result.error dataset = result.datasets[OBJECTIVE] # #### Explore results # First use the optimiser's model to predict the underlying function, `f`, with uncertainty. mean, var = model[OBJECTIVE].model.predict_f(f1['x0'].to_numpy().reshape((-1, 1))) bo_f1 = pd.DataFrame(dict(x0=f1['x0'], f=- mean.numpy()[:, 0], sd=np.sqrt(var.numpy()[:, 0]))) # Next retrieve the evaluations the optimiser received from the black box. observations = pd.DataFrame(dict(x0=dataset.query_points.numpy()[:, 0], y=- dataset.observations.numpy()[:, 0])) # Now plot the optimiser's estimate of the underlying function in green (with error bars +/- two standard deviations), # the black box evaluations in red and the true underlying function in blue. 
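# The layered chart below is built with Altair's `+` operator, which simply overlays the individual charts (the observations, the error bars, the optimiser's posterior mean and the true function) on shared axes.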
chart_bo_f1 = ( alt.Chart(bo_f1) .mark_line(color='green') .encode(x='x0:Q', y='f:Q')) chart_bo_ferror = ( alt.Chart(bo_f1.assign(fmin=bo_f1['f'] - 2 * bo_f1['sd'], fmax=bo_f1['f'] + 2 * bo_f1['sd'])) .mark_errorbar(color='green') .encode(x="x0:Q", y="fmin:Q", y2="fmax:Q")) chart_obs = ( alt.Chart(observations) .mark_circle(color='red', size=60) .encode(x='x0:Q', y='y:Q')) (chart_obs + chart_bo_ferror + chart_bo_f1 + chart1f).properties(width=800, height=300) # ### 2-dimensional input # If you wish, you can try the same optimisation problem but for a function of two variables. blackbox2 = bb.GPBlackBox(ndim=2) x0 = widgets.FloatSlider( value=0.5, min=bb.DOMAIN_MIN, max=bb.DOMAIN_MAX, step=0.01, description='x0:', readout_format='.2f', ) x1 = widgets.FloatSlider( value=0.5, min=bb.DOMAIN_MIN, max=bb.DOMAIN_MAX, step=0.01, description='x1:', readout_format='.2f', ) w = widgets.Box([x0, x1]) # Choose which input point (x0, x1) to evaluate the function at: display(w) # Evaluate the function and plot all the evaluations so far: y = blackbox2([x0.value, x1.value])[0][0] print(f'Evaluated black box at ({x0.value}, {x1.value}); result={y}') blackbox2.plot_xy() # If you have finished evaluating the function at different points and you are confident where the maximum is, you can make a guess before executing the cells below. # # Now show the underlying function f (without noise) as a heatmap and the noisy data we received as evaluations of it: f2 = blackbox2.sample_f(51) chart2f = ( alt.Chart(f2) .mark_square(size=60) .encode(x=alt.X('x0:Q', scale=alt.Scale(domain=bb.DOMAIN)), y=alt.Y('x1:Q', scale=alt.Scale(domain=bb.DOMAIN)), color=alt.Color('f:Q', scale=alt.Scale(scheme=bb.COLOURSCHEME, domainMid=0)))) chart2y = blackbox2.plot_xy() chart2 = alt.layer(chart2f, chart2y) chart2.properties(width=800, height=300) // --- // jupyter: // jupytext: // text_representation: // extension: .cs // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: .NET (C#) // language: C# // name: .net-csharp // --- // + [markdown] dotnet_interactive={"language": "pwsh"} // # Backup Validation and DBCC CHECKDB using Rubrik Live Mount // - // ## Backup Validatation of a database using Live Mount // Live mount allows for near instant recovery of a database. If a database restore/export normally takes hours, then live mounting a database will take a few minutes. Live Mount does a full recovery of a database to either the same SQL Server Instance with a different database name or another SQL Server Instance with the same or different database name. The recovery of the database is much faster, because Rubrik does not need to copy the contents of the backup from the Rubrik Cluster back to the SQL Server. All of the recovery work is done on the Rubrik cluster itself. Then the database files are presented to the SQL Server Instance via a secure SMB3 share that is only accessible by the machine the share is mounted to. // // Live Mounting a database is great for a lot of different use cases: // - DBA Backup validation testing // - Object level recovery // - Developer testing // - DevOps Automation // - Reporting databases // - Database migration application smoke test validation. // // A key parameter is RecoveryDateTime. All dates in Rubrik are stored in UTC format. This parameter is expecting a fully qualified date and time in UTC format. example value is 2018-08-01T02:00:00.000Z. In the example below, we are pulling the latest recovery point that Rubrik knows about. 
// // **This article serves as a way to demonstrate how to use Live Mount for Backup Validation.** // // ***The code examples below make use of the Rubrik, SQL Server and dbatools Powershell Modules. This is meant to be an example and not the explicit way to achieve backup validation and database integrity checks. Please review this content and use as a way to write your own validation process.*** // ### Set up environment for all next steps. // + dotnet_interactive={"language": "pwsh"} $Server = $Rubrik.Server.cdm02 $Token = $Rubrik.token.cdm02 $SourceSQLServerInstance = "rp-sql19s-001.perf.rubrik.com" $SourceDatabaseName = "AdventureWorks2019" $TargetSQLServerInstance = "rp-sql19s-001.perf.rubrik.com" $MountedDatabaseName = "AdventureWorks2019_LiveMount" // - // ### Connect to the Rubrik Cluster // + dotnet_interactive={"language": "pwsh"} Connect-Rubrik -Server $Server -Token $Token // - // ### Get details about the database from the Rubrik Cluster // + dotnet_interactive={"language": "pwsh"} $RubrikDatabase = Get-RubrikDatabase -Name $SourceDatabaseName -ServerInstance $SourceSQLServerInstance # $RubrikDatabase | Format-List * // - // ### Mount the database to a SQL Server // The below example will live mount a database to the latest recovery point that Rubrik knows about. Depending on the recovery model of the database and the backups that have been run against the database, this could include the snapshot and the transaction log backups. // + dotnet_interactive={"language": "pwsh"} $TargetInstance = Get-RubrikSQLInstance -ServerInstance $TargetSQLServerInstance $RubrikRequest = New-RubrikDatabaseMount -id $RubrikDatabase.id ` -TargetInstanceId $TargetInstance.id ` -MountedDatabaseName $MountedDatabaseName ` -recoveryDateTime (Get-date (Get-RubrikDatabase -id $RubrikDatabase.id).latestRecoveryPoint) ` -Confirm:$false Get-RubrikRequest -id $RubrikRequest.id -Type mssql -WaitForCompletion // - // ### Confirm that database is live mounted // A Live mount of a database is the equivalent to doing a T-SQL Restore with your native backups. SQL Server has recovered the snapshot via the SQL Server VSS Writer, and if applicable, rolled the database forward to a point in time chosen by the user. This means we have applied all transactions from the time the snapshot has happened until the point in time chosen. Once a database has been Live Mounted to a SQL Server, the database is ready for any read/write query you would like to run. // + dotnet_interactive={"language": "pwsh"} $Query = "SELECT name, state_desc FROM sys.databases" Invoke-Sqlcmd -ServerInstance $TargetSQLServerInstance -Query $Query | Format-Table // - // ## DBCC CHECKDB on Live Mounted Database // #### Look Where Live Mount Database Files Reside // A Live Mounted database is a database that resides on the Rubrik Storage. It is then presented back to the SQL Server via an secure SMB3 share. When you look at the database files, you will see they reside on a UNC path. // + dotnet_interactive={"language": "pwsh"} $Query = "SELECT DB_NAME() as DB_Name , type_desc , name as logical_name , physical_name FROM sys.database_files" Invoke-Sqlcmd -ServerInstance $TargetSQLServerInstance -Query $Query -Database $MountedDatabaseName // - // Because this database is sitting on a UNC path, network latency can slow down access to the files. Additionally, the files are not sitting on your production storage array, so performance will not be the same. 
When you do a DBCC CHECKDB, an hidden database snapshot is created on the same location as the database files. DBCC CHECKDB, then runs its checks against the hidden snapshot. In this case, they will be created on the UNC path where the live mount is residing on. // // To make things peform a bit better, you should create your database snapshot of the live mounted database on the storage that is attached to the SQL Server. This will consume next to no storage on your SQL Server, but can help increase the performance of the DBCC CHECKDB operation. // ### Create the database snapshot based off of the live mount // + dotnet_interactive={"language": "pwsh"} $SnapshotName = "$($MountedDatabaseName)_DBCC" $DefaultSQLPaths = Get-DbaDefaultPath -SqlInstance $TargetSQLServerInstance New-DbaDbSnapshot -SQLInstance $TargetSQLServerInstance -Database $MountedDatabaseName -Path $DefaultSQLPaths.Data -Name $SnapshotName | Format-List // - // ### Run DBCC CHECKDB // + dotnet_interactive={"language": "pwsh"} $results = Invoke-Sqlcmd -Query "dbcc checkdb(); select @@spid as SessionID;" -ServerInstance $TargetSQLServerInstance -Database $SnapshotName $spid = "spid" + $results.sessionID Get-SqlErrorLog -ServerInstance $($TargetSQLServerInstance) | where-object { $_.Source -eq $spid } | Sort-Object -Property Date -Descending | Select -First 1 // - // ### Remove database snapshot // + dotnet_interactive={"language": "pwsh"} Remove-DbaDbSnapshot -SqlInstance $TargetSQLServerInstance -Snapshot $SnapshotName -Confirm:$false // - // ## Unmount the Database Live Mount // + dotnet_interactive={"language": "pwsh"} $RubrikDatabaseMount = Get-RubrikDatabaseMount -MountedDatabaseName $MountedDatabaseName -TargetInstanceId $TargetInstance.id $RubrikRequest = Remove-RubrikDatabaseMount -id $RubrikDatabaseMount.id -Confirm:$false # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ELMO/ARAVEC/FASTTEXT Baseline for NSURL-2019 Shared Task 8 # In this notebook, we will walk you through the process of reproducing the ELMO/ARAVEC/FASTTEXT baseline for the NSURL-2019 Shared Task 8. # ## Loading Required Modules # We start by loading the needed python libraries. import os import tensorflow as tf import pandas as pd from tensorflow import keras from sklearn.metrics import f1_score import gensim import numpy as np import fasttext from embed_classer import embed # ## Loading Data # Using pandas, we can load and inspect the training, validation, and testing datasets as follows: df_train = pd.read_csv("../../data/nsurl/q2q_similarity_workshop_v2.1.tsv", sep="\t") df_test = pd.read_csv("../../private_datasets/q2q/q2q_no_labels_v1.0.tsv", sep="\t") # Below we list the 5 first entries in the training data. df_train.head() # And last but not least, the first 5 entries in the test data. 
df_test.head() # ## Model Preparation # We start by setting the randomisation seed and the maximum sentence length: tf.random.set_seed(123) max_sentence_len = 20 # + model_type = "fasttext" if model_type == "aravec": model_path = '../pretrained/full_uni_sg_300_twitter.mdl' size = 300 elif model_type == "fasttext": model_path = '../pretrained/cc.ar.300.bin' size = 300 elif model_type == "elmo": model_path= '../pretrained' size = 1024 # - # Next we load our model of choice: embedder = embed(model_type, model_path) # Then we define the input and output to the model: q1_input = keras.Input(shape=(max_sentence_len, size), name='q1') q2_input = keras.Input(shape=(max_sentence_len, size), name='q2') label = keras.Input(shape=(1,), name='label') # This is followed by defining the structure of the network: feat_1 = tf.abs(q1_input - q2_input) feat_2 = q1_input*q2_input forward_layer = tf.keras.layers.LSTM(size) backward_layer = tf.keras.layers.LSTM(size, go_backwards=True) masking_layer = tf.keras.layers.Masking() rnn = tf.keras.layers.Bidirectional(forward_layer, backward_layer=backward_layer) q1_logits = rnn(q1_input) q2_logits = rnn(q2_input) feat_1 = tf.abs(q1_logits - q2_logits) feat_2 = q1_logits*q2_logits logits = keras.layers.Dense(size*2, activation=tf.nn.sigmoid)(tf.keras.layers.concatenate([q1_logits, q2_logits, feat_1, feat_2])) logits = keras.layers.Dense(1, activation=tf.nn.sigmoid)(logits) # Then we construct and compile the model: model = keras.Model(inputs=[q1_input, q2_input], outputs=logits) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # ## Model Training # First we perpare the inputs and outputs to be fed to the model during training: q1 = df_train["question1"].tolist() q2 = df_train["question2"].tolist() X1_train = embedder.embed_batch(q1, max_sentence_len) X2_train = embedder.embed_batch(q2, max_sentence_len) Y_train = df_train["label"] # Next we fit the data: model.fit([X1_train, X2_train], Y_train, epochs=10, batch_size=32) # ## Submission Preperation # We perpare the features for each testset instance as follows: x1_test = embedder.embed_batch(df_test["question1"].tolist(), max_sentence_len) x2_test = embedder.embed_batch(df_test["question2"].tolist(), max_sentence_len) # Then we predict the labels for each: predictions = (model.predict([x1_test, x2_test])>0.5).astype(int) # We perpare the predictions as a pandas dataframe. df_preds = pd.DataFrame(data=predictions, columns=["prediction"], index=df_test["QuestionPairID"]) df_preds.reset_index(inplace=True) # In the final step, we save the predictions as required by the competition guidelines. if not os.path.exists("./predictions/{}".format(model_type)): os.makedirs("./predictions/{}".format(model_type), exist_ok=True) df_preds.to_csv("./predictions/{}/q2q.tsv".format(model_type), index=False, sep="\t") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="I9Jp-W4SKboc" .--,-``-. ,---,. / / '. ,--, ,' .' \ ,---, / ../ ; ,--.'| ,---.' .' | ,---.'| \ ``\ .`- '| | : | | |: | | | : \___\/ \ :: : ' : : : / ,--.--. : : : \ : || ' | : | ; / \ : |,-. / / / ' | | | : \.--. .-. || : ' | \ \ \ | | : | | . | \__\/: . .| | / : ___ / : |' : |__ ' : '; | ," .--.; |' : |: | / /\ / :| | '.'| | | | ; / / ,. || | '/ :/ ,,/ ',- .; : ; | : / ; : .' 
\ : |\ ''\ ; | , / | | ,' | , .-./ \ / \ \ .' ---`-' `----' `--`---' `-'----' `--`-,,-' . Results may vary. Godspeed. By: . A mobile browser friendly audio transcriber and text translator built using txtai --- --- --- Hardware requirements: Ability to open browser and internet (don't press play on this cell obv) # + [markdown] id="_33L79RfK-fT" # --- # --- # --- # + id="KJzbtfMp0Wnq" #@title Press play and follow the prompts below. This will upload your audio file and convert filetype automatically. Proceed when finished. { display-mode: "form" } #@markdown --- from google.colab import files import sys import os import re import IPython from IPython.display import clear_output uploaded = files.upload() filename = next(iter(uploaded)) print(uploaded) for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) #converting to 16khz .wav from input # !ffmpeg -i $filename -acodec pcm_s16le -ac 1 -ar 16000 out.wav final_wav = 'out.wav' #installing and initializing transcription and translation # !pip install git+https://github.com/neuml/txtai#egg=txtai[pipeline] clear_output() from txtai.pipeline import Transcription from txtai.pipeline import Translation # Create transcription model transcribe = Transcription("facebook/wav2vec2-large-960h") # Create translation model translate = Translation() clear_output() # + id="-K2YJJzsVtfq" #@title Please input the two letter language code for your desired translation output. { display-mode: "form" } #@markdown Example: es for Espanol. from IPython.display import Audio, display text = transcribe("out.wav") Language = "" #@param {type: "string"} translated = translate(text, Language) clear_output() print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') display(Audio("out.wav")) print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('Transcription:') print('---------------------------------------------------------------') print(text) print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('Translated Text:') print('---------------------------------------------------------------') print(translated) print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') print('---------------------------------------------------------------') # + cellView="form" id="9BRW5s5hOeFP" #@title Press Play to download .txt file of transcription and translation to your device. #@markdown This is recommended as translations are lost after browser closes. 
{ display-mode: "form" } with open('translated.txt', 'w') as f: f.write('Original:\n') f.write(text + "\n") f.write('Translated:\n') f.write(translated) files.download('translated.txt') # + [markdown] id="S4p6Hd9dDU63" # --- # --- # --- # Shoutout to txtai (https://github.com/neuml/txtai) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- import tensorflow as tf import matplotlib.pyplot as plt # + tf.set_random_seed(777) # for reproducibility X = [1, 2, 3] Y = [1, 2, 3] W = tf.placeholder(tf.float32) # Our hypothesis for linear model X * W hypothesis = X * W # cost/loss function cost = tf.reduce_mean(tf.square(hypothesis - Y)) # Launch the graph in a session. sess = tf.Session() # Initializes global variables in the graph. sess.run(tf.global_variables_initializer()) # Variables for plotting cost function 그래프를 그리기위해 W_history = [] cost_history = [] for i in range(-30, 50): curr_W = i * 0.1 curr_cost = sess.run(cost, feed_dict={W: curr_W}) W_history.append(curr_W) cost_history.append(curr_cost) # Show the cost function plt.plot(W_history, cost_history) plt.xlabel("W") plt.ylabel("cost(W)") plt.show() # + x_data = [1, 2, 3] y_data = [1, 2, 3] # Try to find values for W and b to compute y_data = W * x_data + b # We know that W should be 1 and b should be 0 # But let's use TensorFlow to figure it out W = tf.Variable(tf.random_normal([1]), name='weight') X = tf.placeholder(tf.float32) Y = tf.placeholder(tf.float32) # Our hypothesis for linear model X * W hypothesis = X * W # cost/loss function cost = tf.reduce_mean(tf.square(hypothesis - Y)) # Minimize: Gradient Descent using derivative: W -= learning_rate * derivative learning_rate = 0.1 gradient = tf.reduce_mean((W * X - Y) * X) descent = W - learning_rate * gradient update = W.assign(descent) # Launch the graph in a session. sess = tf.Session() # Initializes global variables in the graph. sess.run(tf.global_variables_initializer()) for step in range(21): sess.run(update, feed_dict={X: x_data, Y: y_data}) print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W)) # + tf.set_random_seed(777) # for reproducibility # tf Graph Input X = [1, 2, 3] Y = [1, 2, 3] # Set wrong model weights W = tf.Variable(-3.0) # Linear model hypothesis = X * W # cost/loss function cost = tf.reduce_mean(tf.square(hypothesis - Y)) # Minimize: Gradient Descent Magic optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1) train = optimizer.minimize(cost) # Launch the graph in a session. sess = tf.Session() # Initializes global variables in the graph. 
sess.run(tf.global_variables_initializer()) for step in range(100): print(step, sess.run(W)) sess.run(train) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys print(sys.version) """ Created on January 30 2019 @author: @contact: @web: www.ncaplar.com """ # + language="javascript" # try { # require(['base/js/utils'], function (utils) { # utils.load_extension('code_prettify/code_prettify'); # utils.load_extension('collapsible_headings/main'); # utils.load_extension('codefolding/edit'); # utils.load_extension('codefolding/main'); # utils.load_extension('execute_time/ExecuteTime'); # utils.load_extension('toc2/main'); # }); # } # catch (err) { # console.log('toc2 load error:', err); # } # - # make notebook nice and wide to fill the entire screen from IPython.core.display import display, HTML display(HTML("")) # + import numpy as np np.set_printoptions(suppress=True) np.seterr(divide='ignore', invalid='ignore') # astropy from astropy.io import * from astropy.io import fits import pickle #matplotlib import matplotlib import matplotlib.pyplot as plt from matplotlib.colors import LogNorm matplotlib.rcParams.update({'font.size': 18}) # %config InlineBackend.rc = {} # %matplotlib inline # - # # Original detector map and observed data # + detectorMapOriginal=fits.open('/Users/nevencaplar/Documents/PFS/DetectorMap/detectorMap-sim-1-r.fits') detectorMap5833=fits.open('/Users/nevencaplar/Documents/PFS/DetectorMap/pfsDetectorMap-005833-r1.fits') hdu = detectorMapOriginal["FIBERID"] fiberIds = hdu.data fiberIds = fiberIds.astype(np.int32) fiberIds.shape list_of_illuminated_slit_fibers_in_January_24_data=[32,63,111,192,223,255,289,308,342,401,418,464,518,525,587,620] # + list_of_illuminated_science_fibers_in_January_24_data=[] for i in list_of_illuminated_slit_fibers_in_January_24_data: list_of_illuminated_science_fibers_in_January_24_data.append(np.where(fiberIds==i)[0][0]) list_of_illuminated_science_fibers_in_January_24_data # + with open('/Users/nevencaplar/Documents/PFS/Data_Nov_14/Dataframes/finalHgAr_expanded.pkl', 'rb') as f: finalHgAr=pickle.load(f) # I know this was the brightest spot in the old data finalHgAr[finalHgAr["old_index"]==25] # - detectorMap5833[3].data[599][1] plt.figure(figsize=(8,8)) plt.scatter(detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list[0][:,1], detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list[0][:,2], s=15, c='red', cmap='seismic', alpha=1,edgecolor='black') plt.ylim(0,4176) plt.xlim(0,4096) plt.xlabel('x position on chip') plt.ylabel('y position on chip') plt.title('Original DetectorMap' ) detectorMapOriginal[3].data detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_x=[] detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_y=[] for j in list_of_illuminated_science_fibers_in_January_24_data: print(j) detectorMapOriginal_single_fiber_x=detectorMapOriginal[3].data[detectorMapOriginal[3].data['index']==j] detectorMapOriginal_single_fiber_y=detectorMapOriginal[4].data[detectorMapOriginal[4].data['index']==j] detectorMapOriginal_single_fiber_as_list_x=[] detectorMapOriginal_single_fiber_as_list_y=[] # because I do not know how to work with numpy.record for i in range(len(detectorMapOriginal_single_fiber_y)): detectorMapOriginal_single_fiber_as_list_x.append(list(detectorMapOriginal_single_fiber_x[i])) 
detectorMapOriginal_single_fiber_as_list_y.append(list(detectorMapOriginal_single_fiber_y[i])) detectorMapOriginal_single_fiber_as_array_x=np.array(detectorMapOriginal_single_fiber_as_list_x) detectorMapOriginal_single_fiber_as_array_y=np.array(detectorMapOriginal_single_fiber_as_list_y) detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_x.append(detectorMapOriginal_single_fiber_as_array_x) detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_y.append(detectorMapOriginal_single_fiber_as_array_y) # + def find_nearest(array, value): array = np.asarray(array) idx = (np.abs(array - value)).argmin() return array[idx] x_y_values_for_912=[] for i in range(16): value_nearest=find_nearest(detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_y[i][:,2],912.58) y_value_for_912_single_fiber=detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_y[i][detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_y[i][:,2]==value_nearest] value_nearest=find_nearest(detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_x[i][:,1],y_value_for_912_single_fiber[0][1]) x_value_for_912_single_fiber=detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_x[i][detectorMapOriginal_for_illuminated_fiber_Jan_24_2019_as_list_x[i][:,1]==value_nearest] x_y_values_for_912.append([x_value_for_912_single_fiber,y_value_for_912_single_fiber]) # - x_y_values_for_912 # + x_y_values_for_912_concise=[] for i in range(16): x_y_values_for_912_concise.append([x_y_values_for_912[i][0][0][2],x_y_values_for_912[i][1][0][1]]) x_y_values_for_912_concise=np.array(x_y_values_for_912_concise) list_of_x_coordinates=[272,486,679,726,1329,1606,1712,2086,2296,2412,2618,2998,3192,3595,4132,4325] list_of_y_coordinates=[3414,3408,3391,3391,3384,3381,3378,3378,3378,3378,3380,3387,3387,3407,3433,3420] plt.figure(figsize=(8,8)) plt.scatter(x_y_values_for_912_concise[:,0], x_y_values_for_912_concise[:,1], s=15, c='red', cmap='seismic', alpha=1,edgecolor='black') plt.scatter(list_of_x_coordinates, list_of_y_coordinates, s=15, c='blue', cmap='seismic', alpha=1,edgecolor='black') plt.ylim(0,4176) plt.xlim(0,4406) plt.xlabel('x position on chip') plt.ylabel('y position on chip') plt.title('Original DetectorMap' ) # - plt.figure(figsize=(12,8)) plt.plot([3415,3405,3391,3391,3384,3383,3379,3378,3378,3378,3380,3387,3387,3408,3431,3421],label='data',marker='o') plt.plot(x_y_values_for_912_concise[:,1],label='predicted',marker='o') plt.xlabel('fiber nr. (counting from the left)') plt.ylabel('y-position') plt.legend() plt.title('for Ar line at 912.58 nm') x_y_values_for_912_concise[:1] x_y_values_for_912_concise[:,1] '' # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lomb-Scargle performance on Evenly Spaced Signal # # This notebook tests the performance of the Lomb-Scargle algorithm as implemented in Pyleoclim and therefore Scipy. In using this algorithm, we found spurious peaks at the Nyquist frequency on "perfect" sinusoidal signal. # # An example of such spurious results can be found in [this notebook](https://github.com/KnowledgeCaptureAndDiscovery/autoTS/blob/master/notebooks/Methods/Lomb-Scargle%20Performance_RvsPython.ipynb). 
# #
import pyleoclim as pyleo
import matplotlib.pyplot as plt
import scipy.signal as signal
import numpy as np

# ## Example from an unevenly-spaced signal
#
# ### Using the scipy implementation directly
#
# The code is taken from the [Scipy example](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lombscargle.html).

# +
A = 2.
w = 1.
phi = 0.5 * np.pi
nin = 1000
nout = 100000
frac_points = 0.9  # Fraction of points to select

r = np.random.rand(nin)
x = np.linspace(0.01, 10*np.pi, nin)
x = x[r >= frac_points]
y = A * np.sin(w*x+phi)

# Use the scipy.signal implementation on this signal
f = np.linspace(0.01, 10, nout)
pgram = signal.lombscargle(x, y, f, normalize=True)

#plot
plt.plot(f, pgram)
plt.show()
# -

# ### Using the Pyleoclim implementation
#
# The implementation in Pyleoclim uses windows (hanning by default) and WOSA (number of segments with 50% overlap = 3). The first step is to create a pyleoclim.Series object and apply the spectral method with parameter 'lomb_scargle'.
#
# First, let's use the frequency vector defined in the example from scipy, which is passed through the settings parameter.

ts = pyleo.Series(time=x, value=y)
psd = ts.spectral(method='lomb_scargle', settings={'freq': f})
psd.plot()

# Next, let's use the default frequency vector for this method, which is based on the REDFIT implementation of Lomb-Scargle.

psd2 = ts.spectral(method='lomb_scargle', freq_method='lomb_scargle')
psd2.plot()

# There are no spurious peaks at the Nyquist frequency.
#
# ## Using an evenly-spaced signal

# +
t = np.arange(1, 2001)
sig = np.cos(2*np.pi*1/20*t)

ts2 = pyleo.Series(time=t, value=sig)
psd3 = ts2.spectral(method='lomb_scargle', freq_method='lomb_scargle')
ts2.plot()
psd3.plot()
# -

# We find a spurious peak at the Nyquist frequency. Let's cut the frequency vector off earlier.

freq = np.linspace(1/5, 1/2000, nout)
psd4 = ts2.spectral(method='lomb_scargle', settings={'freq': freq})
psd4.plot()

# There is still a spurious value at the last frequency, independent of Nyquist.
#
# Let's remove 40% of the data points from our original signal.

# +
del_percent = 0.4
n_del = int(del_percent*np.size(t))
deleted_idx = np.random.choice(range(np.size(t)), n_del, replace=False)

time_unevenly = np.delete(t, deleted_idx)
signal_unevenly = np.delete(sig, deleted_idx)

ts3 = pyleo.Series(time=time_unevenly, value=signal_unevenly)
psd5 = ts3.spectral(method='lomb_scargle', freq_method='lomb_scargle')
ts3.plot()
psd5.plot()
# -

# No spurious peak at the end, but a less clearly defined peak and decay, which is to be expected from this degraded signal. The method is still capable of clearly identifying the 20-year periodicity.
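# The edge artifact above suggests a simple guard worth trying: build the frequency vector explicitly and keep it strictly below the Nyquist frequency of the evenly-spaced series before passing it through `settings`. This is only a minimal sketch using the `t`, `sig`, and `ts2` objects defined above; the 90% cutoff and the 2000 frequency bins (and the names `freq_guard`, `psd_guard`) are arbitrary illustration choices, not part of the original analysis.

# +
dt = np.median(np.diff(t))   # sampling step of the evenly-spaced series (here dt = 1)
f_nyq = 1.0 / (2.0 * dt)     # Nyquist frequency for even sampling

# lowest resolvable frequency ~ 1/record length; stop well short of the Nyquist edge
freq_guard = np.linspace(1.0 / (t[-1] - t[0]), 0.9 * f_nyq, 2000)

psd_guard = ts2.spectral(method='lomb_scargle', settings={'freq': freq_guard})
psd_guard.plot()
# -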
# ### Scipy Implementation of the perfect harmonic # + t = np.arange(1,2001) sig = np.cos(2*np.pi*1/20*t) f=np.linspace(1/2000,1/2,5000) pgram = signal.lombscargle(t,sig, f, normalize=True) #plot fig, ax = plt.subplots() ax.plot(f, pgram_window) ax.set_xscale('log', nonpositive='clip') ax.set_yscale('log', nonpositive='clip') plt.show() # - # ### Windowing effect # # + pgram_window = signal.lombscargle(t,sig*signal.get_window('hann',len(t)), f) #plot fig, ax = plt.subplots() ax.plot(f, pgram_window) ax.set_xscale('log', nonpositive='clip') ax.set_yscale('log', nonpositive='clip') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:env-TM2020] * # language: python # name: conda-env-env-TM2020-py # --- # + import re import os import torch import random import torch.nn as nn from os import listdir from tqdm.notebook import tqdm from collections import Counter from gensim.models import Word2Vec from torch.nn import functional as F from sklearn.metrics import accuracy_score from torch.utils.data import Dataset, DataLoader # - # ![Image of Yaktocat](https://i.postimg.cc/QNp5pFwd/sdf.png) # # The layers are as follows: # - **Tokenize:** This is not a layer for LSTM network but a mandatory step of converting our words into tokens (integers) # - **Embedding Layer:** that converts our word tokens (integers) into embedding of specific size # - **LSTM Layer:** defined by hidden state dims and number of layers # - **Fully Connected Layer:** that maps output of LSTM layer to a desired output size # - **Sigmoid Activation Layer:** that turns all output values in a value between 0 and 1 # - **Output:** Sigmoid output from the last timestep is considered as the final output of this network # + pycharm={"name": "#%%\n"} class RNN(nn.Module): def __init__(self): super(RNN, self).__init__() self.embedding_dim = 64 self.output_dim = 2 self.hidden_dim = 25 self.no_layers = 2 #lstm self.lstm = nn.LSTM(input_size=self.embedding_dim, hidden_size=self.hidden_dim, num_layers=self.no_layers, batch_first=True) # dropout layer self.dropout = nn.Dropout(0.3) # linear and sigmoid layer self.fc = nn.Linear(self.hidden_dim, self.output_dim) self.sig = nn.Sigmoid() def forward(self, x): batch_size = x.size(0) # embeddings and lstm_out embeds = x hidden = self.init_hidden(batch_size) hidden = tuple([each.data for each in hidden]) #print(embeds.shape) #[50, 500, 1000] lstm_out, hidden = self.lstm(embeds, hidden) # dropout and fully connected layer out = self.dropout(lstm_out) out = self.fc(out) # sigmoid function sig_out = self.sig(out) # reshape to be batch_size first sig_out = sig_out[:, -1] # get last batch of labels # return last sigmoid output and hidden state return sig_out def init_hidden(self, batch_size): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x batch_size x hidden_dim, # initialized to zero, for hidden state and cell state of LSTM h0 = torch.zeros((self.no_layers, batch_size, self.hidden_dim)) c0 = torch.zeros((self.no_layers, batch_size, self.hidden_dim)) hidden = (h0, c0) return hidden # + [markdown] pycharm={"name": "#%% md\n"} # Preparing dataset object and embeddings # + pycharm={"name": "#%%\n"} class RnnDataset(Dataset): def __init__(self, data, model = None): self.data = data self.length = len(self.data) self.words2index = {} padding_string = "PADDING___PADDING" threshold = 5 sentences = [] #max size sentence size for 
each label (total mean of the size of all the sentences of that label) max_size = 0 cnt = Counter() for l,s in data: max_size += len(s) for w in s: cnt[w] += 1 max_size = int(max_size / self.length) #Add padding new_train_data = [] for l,s in data: new_s = [w if cnt[w] > threshold else "unk" for w in s] if len(new_s) < max_size: pad = [padding_string] * (max_size - len(new_s)) new_s = new_s + pad elif len(new_s) > max_size: new_s = new_s[0:max_size] new_train_data.append((l,new_s)) sentences.append(new_s) self.data = new_train_data if model is None : self.model = Word2Vec(sentences=sentences, min_count=-1, workers=4, size=64) self.model.build_vocab(cnt.keys(), update=True) self.model.save("w2vemb_rnn.model") else: self.model = model def __len__(self): return self.length def __getitem__(self, index): label,sentence = self.data[index] sentence = [w if w in self.model else "unk" for w in sentence] out = torch.tensor(self.model.wv[sentence],dtype=torch.float) label = torch.tensor(label,dtype=torch.long) return out, label # + [markdown] pycharm={"name": "#%% md\n"} # preprocessing sentences: removing symbols, extra spaces, digits and tokenizing # + pycharm={"name": "#%%\n"} def preprocess_token(s): # Removing all non-word character except letters and numbers s = re.sub(r"[^\w\s]", '', s) # Replacing all extra whitespaces with no space s = re.sub(r"\s+", '', s) # replacing digits with no space s = re.sub(r"\d", '', s) return s def tokenize(x_train): word_list = [] for word in x_train.lower().split(): word = preprocess_token(word) if word != '': word_list.append(word) return word_list # + [markdown] pycharm={"name": "#%% md\n"} # Preparing dataset instance by reading reviews and labels from files, followed by performing above defined preprocessing # + pycharm={"name": "#%%\n"} def get_rnn_dataset(path : str, optional_file : str = None): num_folder = 2 i = 0 data = [] cwd = os.getcwd() while i < num_folder: label = i folder_name = cwd + "/" + path + "/" + str(i) file_names = listdir(folder_name) for file_name in file_names: file_path = folder_name + "/" + file_name f = open(file_path,"r",encoding="utf-8") s = tokenize(f.read()) data.append((label,s)) f.close() i += 1 random.shuffle(data) if optional_file is None : return RnnDataset(data) return RnnDataset(data, Word2Vec.load(optional_file)) # + pycharm={"name": "#%%\n"} def train_rnn(rnn_instance: RNN, dataloader, epochs = 10): # loss and optimization functions lr=0.005 criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(rnn_instance.parameters(), lr=lr) clip = 5 for epoch in range(epochs): iterator = tqdm(dataloader) for inputs, labels in iterator: # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history # h = tuple([each.data for each in h]) output = rnn_instance(inputs) # calculate the loss and perform backprop loss = criterion(output, labels) rnn_instance.zero_grad() loss.backward() #`clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. 
nn.utils.clip_grad_norm_(rnn_instance.parameters(), clip) optimizer.step() # + pycharm={"name": "#%%\n"} # initializing RNN instance rnn_inst = RNN() # reading data and preparing dataset instance rnn_dataset = get_rnn_dataset("/train") # + pycharm={"name": "#%%\n"} # preparing training, validation and test set from the above obtained dataset # train= 80% | valid = 10% | test = 10% train_size = int(0.8 * len(rnn_dataset)) rem_size = len(rnn_dataset) - train_size train_dataset, rem_dataset = torch.utils.data.random_split(rnn_dataset, [train_size, rem_size]) valid_size = int(rem_size/2) test_size = rem_size - valid_size validation_dataset, test_dataset = torch.utils.data.random_split(rem_dataset, [valid_size, test_size]) # + pycharm={"name": "#%%\n"} # preparing dataloaders batch_size = 50 train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size) validation_loader = DataLoader(validation_dataset, shuffle=True, batch_size=batch_size) test_loader = DataLoader(test_dataset, shuffle=True, batch_size=batch_size) # + pycharm={"name": "#%%\n"} train_rnn(rnn_inst, train_loader, 10) # + pycharm={"name": "#%%\n"} # save model after training torch.save(rnn_inst.state_dict(), "rnn.pt") # + pycharm={"name": "#%%\n"} # initializing new RNN instance using pretrained model rnn_loaded = RNN() rnn_loaded.load_state_dict(torch.load("rnn.pt")) rnn_loaded.eval() # + pycharm={"name": "#%%\n"} # evaluation function to evaluate the model's performance w.r.t. dataset def evaluate(clf, data_loader): true_labels = [] inf_labels = [] for data, labels in data_loader: out = clf(data) cls = torch.argmax(F.softmax(out, dim=1), dim=1) inf_labels.extend(cls.detach().numpy().tolist()) true_labels.extend(labels.numpy().tolist()) return accuracy_score(true_labels, inf_labels) # + pycharm={"name": "#%%\n"} evaluate(rnn_loaded, validation_loader) # + pycharm={"name": "#%%\n"} evaluate(rnn_loaded, test_loader) # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="TNUKcdpHEvRD" colab_type="code" outputId="37b474a1-73e3-4203-b220-1a561adf4587" colab={"base_uri": "https://localhost:8080/", "height": 122} # !pip install -i https://test.pypi.org/simple/ lfr-lambdata-pt5 # + id="7Y-bxkYTFXcE" colab_type="code" outputId="6629eff2-da8a-4836-e425-b6c4196f61c9" colab={"base_uri": "https://localhost:8080/", "height": 34} from my_lambdata.my_mod import enlarge #import method from module x = 11 print(enlarge(x)) # + [markdown] id="lD8RAvBdEh8U" colab_type="text" # #Test State Abbreviation function # + id="WEJfE8zSEwde" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="a3334b30-c06d-4a5a-9c6d-eb2760014799" import pandas as pd from my_lambdata.assignment import add_state_name # import method from module import numpy as np abbrev_df = pd.DataFrame({"abbrev": ["CA", "CO", "CT", "WA", "TX"]}) abbrev_df2=add_state_name(abbrev_df) abbrev_df2.head() # + [markdown] id="bxe8KmbQEwxU" colab_type="text" # #Test timestamp abbreviation function # + id="lsG3R-2EE6jI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="9c632360-2886-42a4-dec2-13add1024736" from my_lambdata.assignment import split_timestamp #import method from module df3 = pd.DataFrame({"timestamp":["2010-01-04", "2012-05-04", "2008-11-02"]}) 
df_timestamp_split = split_timestamp(df3) df_timestamp_split.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Constant Memory on GPU

    # When you using the hybridizer, you probably want to use some constant memory on the device. # # This lab let you learn how to have some constant memory on the device. # --- # ## Run code on the GPU # # Using hybridizer to run some code, we need to: # - adding `[EntryPoint]` on method of interest. # - Using `ThreadIdx.x`(Position of the thread in its block), `BlockIdx.x`(Position of the Block in its Grid), `BlockDim.x`(Number of thread in the block) and `GridDim.x`(Number of block in the grid). According to this:![Thread And Block](Images/threadAndBlock.jpg) # # - By default, hybridizer configure with 128 threads by block and 16 * number on multiprocessor block. you can modify it with the `SetDistrib` function that you call on the wrapper before execute your kernel. # ```csharp # wrapped.SetDistrib(number_of_blocks, number_of_threads).mykernel(...); # ``` # - Add boilerplate code ton invoke method on gpu # # Modify the [01-gpu-run.cs](../../edit/05_ConstantMemory/01-gpu-run/01-gpu-run.cs) so the `Run` method runs on a GPU. # # If you get stuck, you can refer to the [solution](../../edit/05_ConstantMemory/01-gpu-run/solution/01-gpu-run.cs). # !hybridizer-cuda ./01-gpu-run/01-gpu-run.cs -o ./01-gpu-run/01-gpu-run.exe -run # --- # ## Constant memory # # Now, after you run the method on the gpu, you can create a static member that goes on the GPU's constant memory. The GPU's constant memory can accelerate performance when lots of thread are reading the same adress by passing it in a cache. # For that we need to: # - create a public static member and decorate it like the following # ```csharp # [HybridConstant(Location = ConstantLocation.ConstantMemory)] # public static type variableName = value; # ``` # # Modify the [02-memory-constant](../../edit/05_ConstantMemory/02-memory-constant/02-memory-constant.cs) so data is now a static member and go in constant memory. # # If you get stuck, you can refer to the [solution](../../edit/05_ConstantMemory/02-memory-constant/solution/02-memory-constant.cs). # !hybridizer-cuda ./02-memory-constant/02-memory-constant.cs -o ./02-memory-constant/02-memory-constant.exe -run # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline np.random.seed(0) # - # # Model definition # Logistic regression can be thought of as an extension of the perceptron: instead of just looking at which half-space the new example $x$ belongs to, we also account for the distance of that example from the decision boundary and return a probability that $x$ belongs to the positive class. 
# # In order to calculate that probability we use a non-linear function called the *sigmoid function*: # $$g(z) = {1 \over {1 + e^{-z}}}$$ # + def sigmoid(z): return 1 / (1 + np.exp(-z)) X = np.linspace(-5, 5) y = sigmoid(X) plt.plot(X, y) plt.show() # - # Thus, our hypothesis is defined as follows: # $$h_\theta(x) = g(\theta^T x) = {1 \over {1 + e^{-\theta^T x}}}$$ def h(x, theta): dot = np.dot(theta.T, x) return sigmoid(dot)[0][0] # We will use binary cross-entropy as our cost function: # $$J(\theta) = -{1 \over m} \sum_{i=1}^{m} \Big( y^{(i)} log(h_\theta(x^{(i)})) + (1 - y^{(i)}) log(1 - h_\theta(x^{(i)})) \Big)$$ def J(X, y, theta): m = X.shape[0] h = sigmoid(np.matmul(X, theta)) in_sum = np.multiply(y, np.log(h)) + np.multiply(1 - y, np.log(1 - h)) return -1 / m * np.sum(in_sum) # The partial derivative of $J(\theta)$ for each $\theta_j$: # $${\partial \over {\partial \theta_j}} J(\theta) = {1 \over m} \sum_{i=1}^{m} \Big((h_\theta(X^{(i)}) - y^{(i)}) \cdot X^{(i)}_j \Big)$$ def J_deriv(X, y, theta): m = X.shape[0] err = sigmoid(np.matmul(X, theta)) - y return 1 / m * np.matmul(X.T, err) def gradient_descent(X, y, starting_theta, alpha=0.01, epochs=5): theta = np.copy(starting_theta) errors = [J(X, y, theta)] for epoch in range(epochs): derivs = J_deriv(X, y, theta) theta -= alpha * derivs errors.append(J(X, y, theta)) return theta, errors # # Linear regression in practice # ### 1. Generating data # + from sklearn.datasets import make_blobs X1, y_train = make_blobs(n_samples=100, centers=2, n_features=2, random_state=2) X_train = np.append(np.ones((X1.shape[0], 1)), X1, axis=1) sns.scatterplot(x=X1[:, 0], y=X1[:, 1], hue=y_train) X2 = np.array([[-4, -10], [2, -10], [4, -2.5], [2, 2.5], [-1, -5]]) # some random points on the scatterplot X_test = np.append(np.ones((X2.shape[0], 1)), X2, axis=1) plt.scatter(x=X2[:, 0], y=X2[:, 1], marker='X', s=20 ** 2, color='red') plt.show() # - # ### 2. Training the model # + theta, errors = gradient_descent(X_train, y_train, np.random.normal(size=X_train.shape[1]), alpha=0.1, epochs=1000) y_test = (sigmoid(np.matmul(X_test, theta)) > 0.5).astype(int) theta, y_test # + k = -theta[1] / theta[2] b = -theta[0] / theta[2] line_X1 = np.linspace(-4, 4) line_X2 = k * line_X1 + b # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(6.4 * 2, 4.8)) sns.scatterplot(x=X1[:, 0], y=X1[:, 1], hue=y_train, ax=ax1) sns.scatterplot(x=X2[:, 0], y=X2[:, 1], hue=y_test, legend=False, marker='X', s=20 ** 2, ax=ax1) ax1.plot(line_X1, line_X2, 'r') ax1.set_title('Trained model') ax2.plot(errors) ax2.set_title('Error') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Diabetes Data Analysis

    # # All information regarding the features and dataset can be found in this research arcticle: # Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records # # In this project we want to know how different features affect diabetes in general. # For this kernel, we will be using a diabetes readmission dataset to explore the different frameworks for model explainability # # # Machine learning models that can be used in the medical field should be interpretable. # Humans should know why these models decided on a conclusion. # The problem is the more complex an ML model gets the less interpretable it gets. # In this kernel we will examine techniques and frameworks in interpreting ML models. #import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # read the file and create a pandas dataframe data = pd.read_csv('dataset/diabetic_data.csv') # check the dimensions of the data data.shape # first 5 rows of data data.head() # we get discriptive statistics of numerical variable #discribtion of numerical data data[['time_in_hospital','num_lab_procedures','num_procedures','num_medications', 'number_outpatient','number_emergency','number_inpatient', 'number_diagnoses']].describe() #no of unique patient len(np.unique(data['patient_nbr'])) # + Remove duplicate recod based on patient_nbr column #remove duplicate patient recods data = data.drop_duplicates(subset = 'patient_nbr', keep = 'first') # + Plot some column data # the response variable 'readmitted' in the original dataset contains three categories. # 11% of patients were readmitted within 30 days (<30) # 35% of patients were readmitted after 30 days (>30) # 54% of patients were never readmitted (NO) data.groupby('readmitted').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Readmitted Data") plt.show() data['readmitted'] = pd.Series([0 if val in ['NO', '>30'] else val for val in data['readmitted']], index=data.index) data['readmitted'] = pd.Series([1 if val in ['<30'] else val for val in data['readmitted']], index=data.index) #values counts of readmited column data.readmitted.value_counts() # the response variable 'readmitted' in the original dataset contains three categories. 
# 11% of patients were readmitted within 30 days (<30) # 35% of patients were readmitted after 30 days (>30) # 54% of patients were never readmitted (NO) data.groupby('readmitted').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Readmitted Data") plt.show() # + Drop unnecessary column from data # remove irrelevant features data.drop(['encounter_id','patient_nbr', 'weight', 'payer_code','max_glu_serum','A1Cresult'], axis=1, inplace=True) # remove rows that have NA in 'race', 'diag_1', 'diag_2', or 'diag_3' and 'gender' # remove rows that have invalid values in 'gender' data = data[data['race'] != '?'] data = data[data['diag_1'] != '?'] data = data[data['diag_2'] != '?'] data = data[data['diag_3'] != '?'] data = data[data['gender'] != 'Unknown/Invalid'] # check 'age' feature data.groupby('age').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Age") plt.show() # > from above graph we see that 60 to 100 age group data is large so we make single group for this # Recategorize 'age' so that the population is more evenly distributed data['age'] = pd.Series(['[20-60)' if val in ['[20-30)', '[30-40)', '[40-50)', '[50-60)'] else val for val in data['age']], index=data.index) data['age'] = pd.Series(['[60-100)' if val in ['[60-70)','[70-80)','[80-90)', '[90-100)'] else val for val in data['age']], index=data.index) data.groupby('age').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Age") plt.show() data.groupby('number_outpatient').size().plot(kind='bar') plt.ylabel('Count') plt.title("number_outpatient v/s Count ") plt.show() data.groupby('number_emergency').size().plot(kind='bar') plt.title("number_emergency v/s Count") plt.ylabel('Count') plt.show() data.groupby('number_inpatient').size().plot(kind='bar') plt.title("number_inpatient v/s Count") plt.ylabel('Count') plt.show() # remove the other medications data.drop(['metformin', 'repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride', 'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone', 'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide', 'examide', 'citoglipton', 'glyburide-metformin', 'glipizide-metformin', 'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone','insulin'], axis=1, inplace=True) # + # Recategorize 'age' so that the population is more evenly distributed data['discharge_disposition_id'] = pd.Series(['Home' if val in [1] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['Anather' if val in [2,3,4,5,6] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['Expired' if val in [11,19,20,21] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['NaN' if val in [18,25,26] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['other' if val in [7,8,9,10,12,13,14,15,16,17,22,23,24,27,28,29,30] else val for val in data['discharge_disposition_id']], index=data.index) # - # original 'admission_source_id' contains 25 levels # reduce 'admission_source_id' into 3 categories data['admission_source_id'] = pd.Series(['Emergency Room' if val == 7 else 'Referral' if val in [1,2,3] else 'NaN' if val in [15,17,20,21] else 'Other source' for val in data['admission_source_id']], index=data.index) # original 'admission_type_id' contains 8 levels # reduce 
'admission_type_id' into 2 categories data['admission_type_id'] = pd.Series(['Emergency' if val == 1 else 'Other type' for val in data['admission_type_id']], index=data.index) # Extract codes related to heart disease data = data.loc[data['diag_1'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430']) | data['diag_2'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430']) | data['diag_3'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430'])] data.shape import random #create variable emergency visits data['emergency_visits'] = [random.randint(0, 5) for _ in range(26703)] #create variable emergency visits data['acuity_of_admission'] = [random.randint(1, 5) for _ in range(26703)] #create variable emergency visits data['comorbidity'] = [random.randint(1, 15) for _ in range(26703)] categarical_colmun=["age","race","gender","medical_specialty","change","diabetesMed","discharge_disposition_id","admission_source_id","admission_type_id","diag_1","diag_2","diag_3"] dtypes = {c: 'category' for c in categarical_colmun} data=data.astype(dtypes) # conver categarical variable into categary code for i in categarical_colmun: data[i]=data[i].cat.codes # apply square root transformation on right skewed count data to reduce the effects of extreme values. # here log transformation is not appropriate because the data is Poisson distributed and contains many zero values. data['number_outpatient'] = data['number_outpatient'].apply(lambda x: np.sqrt(x + 0.5)) data['number_emergency'] = data['number_emergency'].apply(lambda x: np.sqrt(x + 0.5)) data['number_inpatient'] = data['number_inpatient'].apply(lambda x: np.sqrt(x + 0.5)) # + # feature scaling, features are standardized to have zero mean and unit variance feature_scale_cols = ['time_in_hospital', 'num_lab_procedures', 'num_procedures', 'num_medications', 'number_diagnoses', 'number_inpatient', 'number_emergency', 'number_outpatient'] from sklearn import preprocessing scaler = preprocessing.StandardScaler().fit(data[feature_scale_cols]) data_scaler = scaler.transform(data[feature_scale_cols]) data_scaler_df = pd.DataFrame(data=data_scaler, columns=feature_scale_cols, index=data.index) data.drop(feature_scale_cols, axis=1, inplace=True) data = pd.concat([data, data_scaler_df], axis=1) # - # create X (features) and y (response) X = data.drop(['readmitted'], axis=1) y = data['readmitted'] y.value_counts() # #### Find Top Features in data # split X and y into cross-validation (75%) and testing (25%) data sets from sklearn.model_selection import train_test_split X_cv, X_test, y_cv, y_test = train_test_split(X, y, test_size=0.25) # + # fit Random Forest model to the cross-validation data from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier() forest.fit(X_cv, y_cv) importances = forest.feature_importances_ # make importance relative to the max importance feature_importance = 100.0 * (importances / importances.max()) sorted_idx = np.argsort(feature_importance) feature_names = list(X_cv.columns.values) feature_names_sort = [feature_names[indice] for indice in sorted_idx] pos = np.arange(sorted_idx.shape[0]) + .5 print('Top 10 features are: ') for feature in feature_names_sort[::-1][:10]: print(feature) # plot the result plt.figure(figsize=(12, 10)) plt.barh(pos, feature_importance[sorted_idx], align='center') plt.yticks(pos, feature_names_sort) plt.title('Relative Feature 
Importance', fontsize=20) plt.show() # - # ### Use over sampling method for handle imbalanace data from imblearn.over_sampling import SMOTE from sklearn import metrics from collections import Counter oversample = SMOTE() X, y = oversample.fit_resample(X, y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3) # ### Logistic Regession from sklearn.linear_model import LogisticRegression clf=LogisticRegression(solver='saga') clf.fit(X_train, y_train) print(clf.score(X_test, y_test)) from sklearn.metrics import cohen_kappa_score y_pred=clf.predict(X_test) cohen_kappa_score(y_test, y_pred) #Classification Score from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) # ### Random Forest rf = RandomForestClassifier() rf.fit(X_train, y_train) print(rf.score(X_test, y_test)) # cohen kappa score from sklearn.metrics import cohen_kappa_score y_pred=rf.predict(X_test) cohen_kappa_score(y_test, y_pred) #Classification Score from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import numpy as np import matplotlib.pylab as plt # %matplotlib inline # Create sample data num_samples_class = 1000 positive_samples = 4 * np.random.randn(num_samples_class) + 4 negative_samples = 2 * np.random.randn(num_samples_class) - 8 x = np.concatenate((negative_samples, positive_samples), axis=0) y = np.zeros(num_samples_class*2) y[num_samples_class:] = 1 y_onehot = np.zeros((num_samples_class*2, 2)) y_onehot[:num_samples_class, 0] = 1 y_onehot[num_samples_class:, 1] = 1 plt.figure(figsize=(10, 10)) plt.subplot(2,1,1) res = plt.hist(x, bins=100) plt.subplot(2,1,2) res = plt.hist([positive_samples, negative_samples], bins=100) x = np.atleast_2d(x).T print(x.shape) print(y_onehot.shape) print(y.shape) # Create logistic regression model w_init = np.random.randn(1) print('Initial weight value = {}'.format(w_init[0])) w = tf.Variable(w_init, dtype=tf.float32) b = tf.Variable(1.0) weighted_x = w * x + b y_prob_pos = tf.nn.sigmoid(weighted_x) y_prob_neg = 1 - y_prob_pos y_prob = tf.concat([y_prob_neg, y_prob_pos], 1) # Loss: MSE loss = tf.nn.l2_loss(y_prob - y_onehot) # Logistic accuracy correct_prediction = tf.equal(tf.arg_max(y_prob, 1), tf.arg_max(y_onehot, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Optimizer optimizer = tf.train.GradientDescentOptimizer(0.0001) train = optimizer.minimize(loss) # Main loop accuracy_vals = [] loss_vals = [] with tf.Session() as session: session.run(tf.global_variables_initializer()) init_vals = session.run([loss, w, b]) print('Initial values: loss={} w={} b={}'.format(*init_vals)) for step in range(150): print('Step {}'.format(step)) vals = session.run([train, loss, accuracy, w, b]) loss_val, accuracy_val = vals[1:3] accuracy_vals.append(accuracy_val) loss_vals.append(loss_val) print('loss={} accuracy={} w={} b={}'.format(*vals[1:])) print() plt.plot(accuracy_vals, label='Accuracy') plt.legend() plt.show() plt.plot(loss_vals, label='Loss') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In 
Colab # + [markdown] id="oOmT6u-jTlI6" # How to do a fill when you also want to have different colors along the boundary of the object being filled. # + id="mNDcb4tEl4Xr" # !pip install ColabTurtlePlus # + id="fDp3eovimAmy" import ColabTurtlePlus.Turtle as T # + [markdown] id="vEnQACygSIB8" # Here we draw a square with two sides in red and two sides in blue, then a circle drawn in green. When the fill is done, all the sides and the circle are changed to the last pen color, green. # + id="2loZU_XvmBTa" T.initializeTurtle((500,500)) T.width(5) T.fillcolor("yellow") T.begin_fill() T.pencolor("red") T.forward(200) T.left(90) T.forward(200) T.left(90) T.pencolor("blue") T.forward(200) T.left(90) T.forward(200) T.pencolor("green") T.circle(100) T.end_fill() # + [markdown] id="_PCoXrtnTHab" # So to fix this to keep the original side/circle colors, we do the entire drawing first in one color and with the fill, then return to the starting point and initial heading to repeat the drawing with the desired colors. # + id="JmhOybcMSATq" T.initializeTurtle((500,500)) init_pos = T.pos() init_angle = T.heading() T.width(1) T.fillcolor("yellow") T.begin_fill() T.forward(200) T.left(90) T.forward(200) T.left(90) T.forward(200) T.left(90) T.forward(200) T.circle(100) T.end_fill() T.jumpto(init_pos) T.setheading(init_angle) T.width(5) T.speed(13) T.pencolor("red") T.forward(200) T.left(90) T.forward(200) T.left(90) T.pencolor("blue") T.forward(200) T.left(90) T.forward(200) T.pencolor("darkgreen") T.circle(100) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.0.3 # language: julia # name: julia-1.0 # --- # # Simple Linear Regression # # * We can use gradients to do gradient descent to find the parameters of a simple linear model. # + using Flux, Flux.Tracker using Flux.Tracker: gradient, update!, data using Plots # + # Create some data. We already know the parameters we're going to # try to find; the slope m is 0.5 and the y-intercept is 1. # y = mx + b X_train = collect(-5.:5.) y_train = .5X_train .+ 1 scatter(X_train, y_train, xlabel="X_train", ylabel="y_train", title="y_train = m*X_train + b = 0.5x + 1", leg=false) # + # Now, create some parameters for a linear model that we'll train to # match the data created above. # We'll use a slope of 0.4 and a y intercep of 2; # pretty close to the originals of 0.5 and 1. # We can tell Flux to treat an array as a parameter to be tracked with # the 'param' function. After that, Flux will keep track of what operations # have been performed on these variables. m = param(Float64[.4]) b = param(Float64[2]) # - # Now define a very simple model, which uses the parameters # created above. model(X) = m .* X .+ b # + # How close is the model to the actual data right now? scatter(X_train, y_train, xlabel="X_train", ylabel="y_train",leg=false) plot!(X_train, data(model(X_train)), leg=false) # + # To train the model we need a loss function. # We can use mean squared error. This is for demonstration, would normally use # Flux's mse function. function loss(X, y) ŷ = model(X) sum((y .- ŷ).^2) end # - # # Train the Model # # * Now train the model. # * Would normally use just the `train` function which would take an optimizer object. The optimizer object would then update the parameters after each epoch/batch. # * Using `gradient` and `update` here for demo purposes. 
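# For reference, the manual loop below is plain gradient descent on the squared-error loss defined above; writing the learning rate as $\eta$:
#
# $$L(m,b)=\sum_i\big(y_i-(m\,x_i+b)\big)^2,\qquad m\leftarrow m-\eta\,\frac{\partial L}{\partial m},\qquad b\leftarrow b-\eta\,\frac{\partial L}{\partial b}$$
#
# `update!` applies these parameter shifts using the gradients tracked by Flux and then clears them.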
# # + learning_rate = 0.001 for i in 1:100 grads = gradient( () -> loss(X_train, y_train), # anonymous function that takes no arguments. Params([m, b]) ) # Update will update the parameter values and clear the gradients on # the tracked objects (m and b). update!(m, -learning_rate .*grads[m]) update!(b, -learning_rate .*grads[b]) push!(losses, data(loss(X_train, y_train))) end scatter(X_train, y_train, xlabel="X_train", ylabel="y_train", title="y_train = mx + b = 0.5x + 1", leg=false) plot!(X_train, data(model(X_train)), leg=false) # + # After 100 iterations, the model is pretty close to the original data. # The learned parameters are: data(m), data(b) # - plot(losses) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + """Hartree-Fock Procedure for Approximate Quantum Chemistry""" __authors__ = ["","", "", ""] __email__ = ["", "", "", ""] __credits__ = [""] __copyright__ = "(c) 2008-2020, The Psi4Education Developers" __license__ = "BSD-3-Clause" __date__ = "2020-07-28" # - # ## Hartree-Fock Procedure for Approximate Quantum Chemistry import psi4 import numpy as np from scipy import linalg as splinalg # ### I. Warm up with the H atom # By solving the Hydrogen atom Schrödinger equation, we saw that the expression for the different # energy levels is given by: # # $$E_{n} = -\frac{m_e e^4}{8\epsilon_0^2h^2n^2}=-\frac{m_ee^4}{2(4\pi\epsilon_0)^2\hbar^2n^2}\qquad(1)$$ # # Atomic units (a.u.) are based on fundamental quantities so that many physical constants have numerical values of 1: # # | Symbol | Quantity | Value in a.u. | Value in SI units # |---|---|---|---| # | $e$ | electron charge| 1 |$1.602\times 10^{-19}$ C | # | $m_e$ | electron mass| 1 |$9.110\times 10^{-31}$ kg | # | $\hbar$ | angular momentum| 1 |$1.055\times 10^{-34}$ J s | # | $a_0$ | Bohr radius (atomic distance unit)| 1 |$5.292\times 10^{-11}$ m | # | $E$ | Hartree energy (atomic energy unit)| 1 |$4.360\times 10^{-18}$ J | # | $4\pi\epsilon_0$ | vacuum permittivity| 1 |$1.113\times 10^{-10}$ C$^2$/J m |. # # #### Question: Verify that the two forms of Eq.1 agree. # **Answer:** # #### Question: Rewrite $E_n$ in atomic units. # # **Answer:** # #### Calculate: In the cell below, write the formula to compute the exact H atom ground state energy in SI units and explicitly convert from SI to hartrees # + # ==> Toy Example: The Hydrogen Atom <== # Define fundamental constants in SI units m_e = 9.1093837015e-31 # kg e = 1.602176634e-19 # C epsilon_0 = 8.8541878128e-12 # F / m h = 6.62607015e-34 # J*s n = 1 # Define a.u. to SI energy conversion factor (https://en.wikipedia.org/wiki/Hartree) hartree2joules = 4.359744650e-18 # Compute ground state energy of H atom in SI units using constants above E_1 = # # Convert to atomic units E_1_au = # print(f'The exact ground state energy of the H atom in SI units is: {E_1} J') print(f'The exact ground state energy of the H atom in atomic units is: {E_1_au} Eh') # - # We obtained the Hydrogen atom energy expression above by solving the Schrödinger equation exactly. But what happens if we cannot do this? # # That's where Hartree-Fock molecular orbital theory comes in! 
Just as a test case, let's use Psi4 to compute the Hartree-Fock wavefunction and energy for the Hydrogen atom: # # #### Calculate: In the cell below, use psi4 to compute the exact H atom ground state energy in SI units and hartrees # + # ==> Compute H atom energy with Hartree-Fock using Psi4 <== # the H atom has a charge of 0, spin multiplicity of 2 (m_s=1/2) # and we place it at the xyz origin (0,0,0) h_atom = psi4.geometry(""" 0 2 H 0 0 0 """) # specify the basis basis = 'd-aug-cc-pv5z' # set computation options psi4.set_options({'basis': basis, 'reference': 'rohf', 'scf_type': 'pk'}) # compute energy e = psi4.energy('scf') psi4.core.clean() print(f"The Hartree-Fock ground state energy of the H atom in SI units is: {e * psi4.constants.hartree2J} J") print(f"The Hartree-Fock ground state energy of the H atom in atomic units is: {e} Eh") # - # # In this lab activity, you will build and diagonalize the Fock matrix to determine the MO coefficients and energies for a molecule. We will be using the functions of the Psi4 quantum chemistry software package to compute the integrals we need. The following notebook will lead you through setting up your molecule, establishing the basis set, and forming and diagonalizing the Fock matrix. Be sure to run each cell as your proceed through the notebook. # # ### II. The Hartree-Fock procedure # The Schrödinger equation has the structure of an eigenvalue equation # # $$\hat{H}|\psi\rangle = E|\psi\rangle$$ # # In Hartree-Fock theory, this is reexpresed in terms of the Fock matrix, $F$, a matrix of wavefunction amplitudes for each MO, $C$, and the overlap matrix, $S$, # # $$FC = SCE.\qquad(\text{2})$$ # # The Fock matrix for a closed-shell system is # $$ # F = H + 2J - K # $$ # where $H$ is the one electron "core" Hamiltonian, $J$ is the Coulomb integral matrix, and $K$ is the exchange integral matrix. The definitions of $J$ and $K$ depend on the coefficients, $C$. We see here the central premise of SCF: To get the Fock matrix, we need the coefficient matrix, but to compute the coefficient matrix we need the Fock matrix. When $S$ is not equal to the identity matrix (i.e. the basis is not orthonormal), then this is a pseudo-eigenvalue problem and is even harder to solve. This is our task. # # We will # - review Dirac notation # - introduce the overlap matrix # - learn how to build an orthogonalization matrix # - learn how to calculate the density # - learn how to calculate the Coulomb and Exchange integral matrices # - learn how to diagonalize the Fock matrix # - build an iterative procedure to converge HF energy # ### III. Orthogonalizing the AO basis set # Crash course in Dirac notation: # # The wavefunction, $\psi(x)$ can be represented as a *column vector*, $|\psi\rangle$ . The complex conjugate of the wavefunction, $\psi^*(x)$ is also represented by a vector which is the complex conjugate transpose of $|\psi\rangle$. 
# # \begin{align} # \psi(x)&\rightarrow|\psi\rangle\quad\text{column vector}\\ # \psi^*(x)&\rightarrow\langle\psi|\quad\text{row vector} # \end{align} # # The normalization of a wavefunction $\psi(x)$ is an integral # # $$\int\psi^*(x)\psi(x)\;\mathrm{d}x=1.$$ # # In Dirac notation, it is replaced with a vector equation # # $$\langle\psi|\psi\rangle=1.$$ # # The orthogonality of two wavefunctions, $\psi_i(x)$ and $\psi_j(x)$, which, in integral notation, is # # $$\int\psi_i^*(x)\psi_j(x)\;\mathrm{d}x=0,$$ # # becomes, in Dirac notation, # # $$\langle\psi_i|\psi_j\rangle=0.$$ # # #### Calculate: Define two vectors, `phi1` and `phi2`, with two elements each, that are normalized, in the sense $\langle\phi_i|\phi_i\rangle=1$, and orthogonal in sense that $\langle\phi_i|\phi_j\rangle=0$. Recall that one defines a vector `v` with the two elements `1` and `2` through the command `v=np.array([1,2])`. # + # define a first basis vector and a second, orthogonal vector phi1 = # phi2 = # print(F'Phi1: {phi1}') print(F'Phi2: {phi2}') print()#empty line # - # Note: Numpy commands for vector operations assuming `v`= vector: # # inner product (scalar product) of two vectors: `v.dot(v)` # # complex conjugate of a vector: `v.conj()` # # inner product of $v^\dagger v$: `v.conj().dot(v)` # # # #### Calculate: Use the commands demonstrated above to show that `phi1` is normalized. # + # calculate normalization phi1_norm = # print(F' = {phi1_norm}') # - # #### Calculate: Similarly, enter the three remaining calculations below to show that `phi1` and `phi2` are orthonormal. # + # calculate normalization print(F' = {}') # calculate orthogonality print(F' = {}') print(F' = {}') # - # ### IV. The overlap integrals # For a set of basis functions, $\phi_i(\tau)$, where $\tau$ is a shorthand for all the coordinates of all the particles, we can calculate the overlap integrals between the basis functions in the following way # # $$S_{ij}=\int {\rm d}\tau\; \phi_i^*(\tau)\phi_j(\tau).$$ # # #### Question: Define $S_{ij}$ using Dirac notation. # **Answer:** # #### Question: For an orthonormal basis, what does the overlap integral array, `S`, look like? # **Answer:** # #### Calculate: the terms `S_ij` using the basis vectors `phi1` and `phi2`. # calculate the overlap (inner product) of the vectors S_11 = # S_12 = S_21 = S_22 = print('The ij elements of S:') print(S_11,S_12) print(S_21,S_22) print() # ### V. Constructing the overlap matrix # These overlap integrals, $S_{ij}$, can be interpreted as the elements on the $i$-th row and $j$-th column of a matrix, $S$. Let's propose a matrix, $S$, made of the overlap integrals $S_{ij}$. We can build $S$ systematically in the following way. First, make a matrix, $B$, composed of our basis vectors as columns, # # $$ B = \left(\begin{array}{ccc}|& |&|\\ \phi_1 &\phi_2&\phi_3\\|&|&|\end{array}\right).$$ # # We will use the symbol $\dagger$ to indicate the complex conjugate of the transpose of a matrix. 
So # # $$B^\dagger = (B^T)^*.$$ # + # construct the overlap matrix from matrix of basis vectors vector_length = phi1.size #length of the vector space phi1_column = phi1.reshape(vector_length,1) #this makes phi a column vector phi2_column = phi2.reshape(vector_length,1) # put together (concatenate) the vectors into the matrix B B = np.concatenate((phi1_column,phi2_column),axis=1) print(F'The matrix B:\n{B}') B_dagger = B.conj().T print(F'The matrix B^\dagger:\n{B_dagger}') # - # Now, multiplying the rows of $B^\dagger$ by the columns of $B$ (normal matrix multiplication) produces a matrix of the overlap integrals in the correct locations. we defined to be the matrix $S$. # # $$B^\dagger B =\left(\begin{array}{ccc}-& \phi_1^*&-\\-& \phi_2^* &-\\-&\phi_3&-\end{array}\right) \left(\begin{array}{ccc}|& |&|\\ \phi_1 &\phi_2&\phi_3\\|&|&|\end{array}\right)\equiv S.$$ # Note: Numpy commands for matrix operations assuming `v` = vector, and `M` = matrix: # # matrix vector product: `M.dot(v)` # # matrix matrix product: `M.dot(M)` # # matrix complex conjugate: `M.conj()` # # matrix transpose: `M.T` # #### Calculate: Use `B` and `B_dagger` and the matrix rules above to calculate the matrix `S`. # calculate S from matrix of basis vectors S = # print(F'The matrix of eigenvectors in columns B =\n {B} \n\nand S = B^\dagger B =\n{S}\n') # ### VI. Einstein implicit summation notation # Matrix multiplication is defined through # # $$(AB)_{pq}= \sum_i A_{p,i}B_{i,q}\qquad\text{explicit summation}$$ # # Note that there is a repeated index, $i$, in the summation. In implicit summation notation, Einstein notation, we do not write the $\sum$ and treat the summation as understood. # # $$(AB)_{pq}=A_{p,i}B_{i,q}\qquad\text{implicit summation}$$ # # Using implicit summation for the case at hand, $B^\dagger B$ gives # # $$S_{pq}=(B^\dagger B)_{pq}= (B^\dagger)_{p,i}B_{i,q}\qquad\text{implicit summation}$$ # $$= (B^*)_{i,p}B_{i,q}\qquad\text{implicit summation}$$ # # where $B^*$ is the complex conjugate of $B$ (no transpose). Note that the two sets of indices, $(i,p)$ and $(i,q)$, in the input matrices become one set, $(p,q)$, in the product. # # *(Python programming aside: There is a convenient function in `numpy` called `einsum()`, which is one of the crown jewels of the numpy library. In short, `einsum` lets you perform various combinations of multiplying, summing, and transposing matrices very efficiently. A good tutorial about `einsum` can be found at http://ajcr.net/Basic-guide-to-einsum/. To specify the operations you want to do, you use the **Ein**stein **Sum**mation convention.)* # # To calculate the sum # # $$S_{pq} = (B^*)_{i,p}B_{i,q}\qquad\text{implicit summation}$$ # # use `S=np.einsum('ip,iq->pq',B.conj(),B)`, as demonstrated below. We will use `einsum()` in several places. # # Let's try this with a simple basis set of two (perhaps) orthonormal vectors. # #### Question: Describe how the notation of the `np.einsum` command coorelates to the implicit summation formula written above. # ****Answer**** # #### Calculate: Use the function `np.einsum()` to calculate the matrix `S`, and confirm that your answer is the same as above. # calculate S from Einstein sum S = # print(F'S from Einstein notation:\n{S}') # #### Question: Propose a different orthonormal basis, modify `phi1` and `phi2`, and verify that `S` still has the same form. There are infinitely many choices. It isn't complex... or *is* it?! # #### Question: Propose what will happen to `S` if the vectors are not orthonormal. 
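# One way to check this numerically is sketched below; the specific vectors (and the names `phi1_no`, `phi2_no`, `B_no`, `S_no`) are arbitrary illustration choices, and any normalized but non-orthogonal pair behaves similarly. Rebuilding `S` with the same Einstein-summation recipe then yields nonzero off-diagonal elements.

# +
phi1_no = np.array([1.0, 0.0])                    # normalized
phi2_no = np.array([1.0, 1.0]) / np.sqrt(2)       # normalized, but not orthogonal to phi1_no

B_no = np.stack([phi1_no, phi2_no], axis=1)       # basis vectors as columns
S_no = np.einsum('ip,iq->pq', B_no.conj(), B_no)  # same recipe as above
print(S_no)                                       # off-diagonal entries are 1/sqrt(2) ~ 0.707
# -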
# #### Calculate: Test your prediction! # ### VII. Gaussian atomic orbital basis set # H$_2$O is a small but interesting molecule to use in our exploration. # # #### Question: Answer the following questions for the H$_2$O molecule: # ##### How many electrons are there in total? # ##### How many occupied molecular orbitals would you expect? # Before we can begin to implement the HF procedure, we need to specifcy the molecule and basis set that we will be using. We will also set the memory usage for our calcluation and the output file name. # + # ==> Set Basic Psi4 Options <== # Memory specification psi4.set_memory('500 MB') numpy_memory = 2 # No NumPy array can exceed 2 MB in size # set output file psi4.core.set_output_file('output.dat', False) # specify the basis # basis = 'cc-pvdz' basis = 'sto-3g' # Set computation options psi4.set_options({'basis': basis, 'scf_type': 'pk', 'e_convergence': 1e-8}) # ==> Define Molecule <== # Define our model of water -- # we will distort the molecule later, which may require C1 symmetry mol = psi4.geometry(""" O H 1 1.1 H 1 1.1 2 104 symmetry c1 """) # compute energy SCF_E_psi = psi4.energy('scf') psi4.core.clean() print(f"The Hartree-Fock ground state energy of the water is: {SCF_E_psi} Eh") # - # Next, we need to build the wavefunction from the basis functions. We store the wavefunction in a variable called `wfn`. We use the function `nalpha()` provided by the wavefunction object we created above, `wfn`, to determine the number of orbitals with spin alpha, which will be doubly occupied orbitals for close shelled systems. We save this answer as a variable called `ndocc` (number of doubly occupied orbitals). # # #### Calculate: Execute the code below and confirm that the number of doubly occupied orbitals matches your expectation for the molecule you chose. # + # ==> Compute static 1e- and 2e- quantities with Psi4 <== wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option('basis')) # number of spin alpha orbitals (doubly occupied for closed-shell systems) ndocc = wfn.nalpha() nbf = wfn.basisset().nbf() print(F'Number of occupied orbitals: {ndocc}') print(F'Number of basis functions: {nbf}') # - # Next we will examine the atomic orbital basis set. To do this, we have to set up a data structure, called a class, to calculate the molecular integrals. (Psi4 will do the nasty calculus for us.) We will call this data structure `mints` (Molecular INTegralS). We use the function `ao_overlap` to calculate the overlap integrals between all the AO basis functions. We cast the result to a numpy array called `S`. # # _(Python programming aside: `asarray()` is a special case of `array()` that does not copy arrays when compatible and converts array subclasses to base class ndarrays. https://stackoverflow.com/questions/14415741/what-is-the-difference-between-numpys-array-and-asarray-functions)_ # + # Construct a molecular integrals object mints = psi4.core.MintsHelper(wfn.basisset()) # Overlap matrix as a psi4 Matrix object S_matrix = mints.ao_overlap() # Overlap matrix converted into an ndarray S = np.asarray(S_matrix) print(F'Shape of S is {S.shape}') # - # #### Question: Explain the shape (number of rows and columns) of `S` in terms of the AO basis set we chose. # Examine the contents of `S`. 
print(S) #the full matrix may be somewhat hard to read based on the basis set # + # Look at the first few elements def peak(S,nrows=4,ncols=4): print(F'Here is a peak at the first {nrows} x {ncols} elements of the matrix:\n{S[:nrows,:ncols]}') peak(S) # - # #### Question: Based on your observations of `S` in the AO basis, answer the following questions # ##### What do the diagonal elements of `S` indicate? # ##### What do the off-diagonal elements of `S` indicate? # ##### Does the Gaussian atomic orbital basis set form an orthonormal basis? # We can perform this test programmatically as well, with a few python tricks. Construct an array of the same size as the overlap array (`S`) that has 1's along the diagonal and 0's everywhere else. Then compare that array to the `S` array to determine if the AO basis is orthonormal. # + # example of testing for orthonormality # define a function def isBasisOrthonormal(S): # get the number of rows of S size_S = S.shape[0] # construct an identity matrix, I -- "eye", get it?!? Ha ha! Math is so funny! identity_matrix = np.eye(size_S) # are all elements of S numerically close to the identity matrix? # We won't test for equality because there can be very small numerical # differences that we don't care about orthonormal_check = np.allclose(S, identity_matrix) print(F'Q:(T/F) The AO basis is orthonormal? A: {orthonormal_check}') return orthonormal_check # use the function isBasisOrthonormal(S) # - # #### Question: Does the result agree with what you determined above? Explain. # ### VIII. An orthogonalization matrix # Recall that if we had used the hydrogen atom wavefunctions as our basis set, the AO wavefunctions would all be orthonormal. Since we used a basis set of Gaussian wavefuctions, this may not be the case. We will now introduce some tools to fix it! # # Since our AO basis set was not orthonormal, we seek to construct an orthogonalization matrix, $A$, such that ${\bf A}^{\dagger}{\bf SA} = {\bf 1}$. # # **Motivation:** If ${\bf A}$ and ${\bf S}$ were real numbers $a$ and $s$ (not matrices), this would be simple to solve. First, $a^\dagger=a$ because a real number is the same as its hermitian transpose. By simple algebra we can solve for a, # # $$a^\dagger s a=1$$ # $$a^\dagger s a=a s a=a^2s=1$$ # $$\Rightarrow{}a=s^{-1/2}$$ # # In linear algebra (with matrices instead of numbers) this is more complicated, but numpy and the mints class can take care of the details for us! Leaving out the details, we will calculate # # $${\bf A}={\bf S}^{-1/2}$$. # # #### Calculate: Use the function `np.linalg.inv()` to calculate the inverse of `S`, and the function `splinalg.sqrtm()` to take its (matrix) square root. Execute the code below and examine the matrix `A`. # + # ==> Construct AO orthogonalization matrix A <== # inverse of S using np.linalg.inv # matrix square root of the inverse of S using splinalg.sqrtm A = # peak(A) # - # #### Question: What do you observe about the elements of `A`? Is the matrix real or complex? Is the matrix symmetric or not? # Our basis set $B$ is not orthonormal, so we want to take linear combinations of its columns to make a new basis set, $B'$, that is orthonormal. We define a new matrix, $A$, in terms of that transformation, # # $$B' = BA.$$ # # The new overlap matrix will be # # \begin{align} # S' &= B'^\dagger B',\\ # &= (BA)^\dagger (BA),\\ # &= A^\dagger B^\dagger BA,\\ # &= A^\dagger S A. 
# \end{align} # The matrix $A$ makes the proper linear combination of the columns of $B$ and $A^\dagger$ makes the linear combinations of the rows of $B^\dagger$. This is a very common structure of transformation matrices. Because $S$ is real and symmetric, $A$ is also real and symmetric, so $A^\dagger=A$. The transformation becomes simply # # $$S' = A S A.$$ # #### Calculate: Use the orthogonalization matrix `A` to transform the overlap matrix, `S`. Check the transformed overlap matrix, `S_p`, to make sure it represents an orthonormal basis. # + # Transform S with A and assign the result to S_p S_p = # isBasisOrthonormal(S_p) # - # #### Question: The product A S A does not take the complex conjugate transpose of A. What conditions (properties of A) make that ok? # ### IX. The Fock Matrix Transformed to an Orthonormal Basis # We can now return to Eq.2 using our orthogonalization matrix $A$. A common linear algebra trick is to "insert one." In this case, the matrix $A$ times its inverse is, by definition the identity matrix, $AA^{-1}={\bf1}$. We can put that factor of one anywhere in an equation that is useful to us, and, then, typically we regroup terms in a way we want. In this case, we insert one bewteen $FC$ and $SC$. # # \begin{align} # FC&=SCE\\ # F({\bf1})C&=S({\bf1})CE\\ # FAA^{-1}C&=SAA^{-1}CE # \end{align} # Multiplying on the left by $A$ then gives # # $$ # AFAA^{-1}C=ASAA^{-1}CE # $$ # # We can recognize the transformation $S'=ASA$ on the right hand side and similarly define $F'=AFA$ on the left hand side. Lastly, we define a transformed coefficient matrix, $C'=A^{-1}C$. Our transformed Fock equation reads # # \begin{align} # F'C'&=S'C'E,\\ # &=C'E. # \end{align} # # The last line follows because $S'={\bf1}$ in our new basis. We now have an eigenvalue problem that we can solve by matrix diagonalization. # # In the expression # # $$C'=A^{-1}C,$$ # # the matrix $A^{-1}$ transforms the coefficients, $C$, into the orthogonalized basis set. We will also need a way to transform those coefficients, $C'$, back to the original AO basis. # # #### Question: Based on the definition of $C'$, propose a definition of $C$ in terms of $A$ and $C'$. Justify your equation. # **Answer:** # ### X. Initial guess for the Fock Matrix is the one electron Hamiltonian # To get the Fock matrix, we need the coefficient matrix, but to compute the coefficient matrix we need the Fock matrix. So we start with a guess for the Fock matrix, which is the core Hamiltonian matrix. # Build core Hamiltonian T = np.asarray(mints.ao_kinetic()) V = np.asarray(mints.ao_potential()) H = T + V # #### Calculate: In the cell below, use the core Hamiltonian matrix as your initial guess for the Fock matrix. Transform it with the same A matrix you used above. To calculate the eigenvalues, `vals`, and eigenvectors, `vecs`, of matrix `M` using `vals, vecs = np.linalg.eigh(M)`. # + # Guess for the Fock matrix # Transformed Fock matrix, F_p # Diagonalize F_p for eigenvalues & eigenvectors with NumPy # - # #### Calculate: Display, i.e., `print`, the coefficent matrix and confirm it the correct size # Now that we have the coefficents in the transformed basis, we need to go back and get the coefficients in the original AO basis. # # #### Calculate: Use `A` and the formula you proposed previously to transform the coefficient matrix back to the AO basis. Confirm that the resulting matrix appears reasonable, i.e., similar size and magnitude # Transform the coefficient matrix back into AO basis # ### XI. 
The Density Matrix # Recall, the Fock matrix is # # $$ # F = H + 2J - K # $$ # where $H$ is the one electron "core" Hamiltonian, $J$ is the Coulomb integral matrix, and $K$ is the exchange integral matrix. The HF energy can be expressed in explicit terms of one and two electron integrals # $$ # E_{HF} = \sum_i^{elec}\langle i|h_i|i\rangle + \sum_{i>j}^{elec}[ii|jj]-[ij|ji] # $$ # Expanding the orbitals in terms of basis functions, we find # $$ # [ii|jj]=\sum_{pqrs}c^*_{pi}c_{qi}c^*_{rj}c_{sj}\int{\rm d}\tau\; \phi_p^*(1)\phi_q(1)\frac{1}{r_{ij}}\phi_r^*(2)\phi_s(2)\qquad{\text{(3)}} # $$ # # First, look at the coefficients in Eq.3. They come in two pairs of complex-conjugates, $c^*_{pi}c_{qi}$ and $c^*_{ri}c_{si}$. The diagonal terms, when $p=q$ for example, are the probability of some basis function $p$ contributing to the MO $i$. We will sum each term over the occupied orbitals, $i$, to form the "density matrix" # $$ # D_{pq}=\sum_i^{occ} c^*_{pi}c_{qi}. # $$ # # We are going to construct the density matrix from the occupied orbitals. To get a matrix of just the occupied orbitals, use the coefficient matrix in the original AO basis, and take a slice to include all rows and just the columns that represent the occupied orbitals. # Grab occupied orbitals (recall: ndocc is the number of doubly occupied orbitals we found earlier) C_occ = C[:, :ndocc] print(F'The shape of C_occ is {C_occ.shape}') # #### Calculate: Build the density matrix, `D`, from the occupied orbitals, `C_occ`, using the function `np.einsum()`. (Recall Section VI.) # + # Build density matrix from occupied orbitals D = # print(F'The shape of D is {D.shape}') # - # ### XII. Coulomb and Exchange Integrals and the SCF Energy # # The integral on the right of Eq.3 is super important. It has four indicies, $p,q,r,s$, so formally it is a tensor. It accounts for the repulsion between pairs of electrons, so it is called the electron repulsion integral tensor, $I$, # $$ # I_{pqrs} = \int{\rm d}\tau\; \phi_p^*(1)\phi_q(1)\frac{1}{r_{ij}}\phi_r^*(2)\phi_s(2). # $$ # First, we can build the electron-repulsion integral (ERI) tensor, which stores the electron repulsion between the atomic orbital wavefunctions. Mints does all the work for us! # Build electron repulsion integral (ERI) Tensor I = np.asarray(mints.ao_eri()) # Eq.3 can be expressed in terms of $I$ and $D$ as # \begin{align} # [ii|jj] &= \sum_{pqrs}D_{pq}D_{rs}I_{pqrs},\\ # &=\sum_{pq}D_{pq}\sum_{rs}D_{rs}I_{pqrs}. # \end{align} # The term $\sum_{rs}D_{rs}I_{pqrs}$ is the effective repulsion felt by one electron due to the other electrons in the system. This term is the Coulomb integral matrix # $$ # J_{pq}=\sum_{rs}D_{rs}I_{pqrs}. # $$ # #### Calculate: Define J in terms of the density matrix, `D`, and the electron repulsion integral tensor, `I`, using `np.einsum()`. # + #Define J # - # Similarly, # \begin{align} # [ij|ji] &= \sum_{pqrs}D_{ps}D_{rq}I_{pqrs},\\ # &= \sum_{ps}D_{ps}\sum_{rq}D_{rq}I_{pqrs}. # \end{align} # corrects the repulsion due to electrons "avoiding each other" due to their Fermionic (antisymmetric w.r.t. exchange) character. This term is the exchange integral matrix # $$ # K_{ps}=\sum_{rq}D_{rq}I_{pqrs}. # $$ # #### Calculate: Define K in terms of the density matrix, `D`, and the electron repulsion integral tensor, `I`, using `einsum()`. # + #Define K # - # #### Calculate: Define F in terms of H, J, and K. (Recall Section XI.) 
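# *(Optional aside before you define `F` in the next cell: a minimal sketch, using made-up random arrays rather than the real `D` and `I`, for checking that an einsum index string contracts a four-index tensor with a matrix the way you intend. `nb` is just a toy dimension.)*

# +
# toy shapes only -- this is not the real density matrix or ERI tensor
nb = 3
M_toy = np.random.rand(nb, nb)            # stands in for a matrix like D
T_toy = np.random.rand(nb, nb, nb, nb)    # stands in for a 4-index tensor like I

# 'rs,pqrs->pq' sums over the repeated indices r and s, leaving a (p,q) matrix
toy_contraction = np.einsum('rs,pqrs->pq', M_toy, T_toy)
print(F'Toy contraction has shape {toy_contraction.shape}')
# -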
# + #Define F as a function of H, J, and K # - # A more convenient form of the SCF energy is in terms of a sum over the AO basis functions # $$ # E = E_{nuc} + \sum_{pq}(H_{pq} + F_{pq})D_{pq}, # $$ # where $E_{nuc}$ is the nuclear repulsion, which psi4 can calculate for us as well. # #### Calculate: SCF energy based on H, F, and D using `np.einsum()`. # *(Hint: The right hand side of the equation is the sum of the product of two terms, each of which has two indices (p and q). The result, E, is a number, so it has no indices. In `einsum()` notation, this case will be represented with the indices for the matrices on the left of the `->`, and no index on the right of the `->`. For example, in the case of just one matrix, the sum of all its elements of a matrix `M` is `sum_of_m = np.einsum('pq->',M )`. In your answer below, be sure to account for any modifications required of an element-wise product of two matrices.)* E_nuc = mol.nuclear_repulsion_energy() SCF_E = # print(F'Energy = {SCF_E:.8f}') # #### Question: Based on the result of the calculation in Section VII, is this a reasonable answer? # **Answer:** # #### Question: Describe a procedure (i.e. identify the steps/commands) that will updated coefficients and compute a new density matrix based on the new definition of the Fock matrix. (Recall Section X) # **Answer** # #### Calculate: Using the procedure proposed above, calculate the updated coefficients # + # Update density matrix # - # You have just completed one cycle of the SCF calculation! # # # Now we will use the density matrix to build the Fock matrix. The code block below sets up a skeleton of the Hartree-Fock procedure. The basic steps are: # 1. Calculate the Fock Matrix based on the density matrix previously defined from a one electron hamiltonian # 2. Calculate the energy from the Fock matrix. # 3. Check and see if the energy has converged by comparing the current energy to the previous energy and seeing if it is within the convergence threshold. # 4. If the energy has not converged, transform the Fock matrix, and diagonalize the transformed Fock matrix to get the energy and MO coefficients. Then transform back to the original AO basis, pull the occupied orbitals, and reconstruct the density matrix. # + # ==> Nuclear Repulsion Energy <== E_nuc = mol.nuclear_repulsion_energy() # ==> SCF Iterations <== # Pre-iteration energy declarations SCF_E = 0.0 E_old = 0.0 # ==> Set default program options <== # We continue recalculating the energy until it converges to the level we specify. # The varible `E_conv` is where we set this level of convergence. We also set a # maximum number of iterations so that if our calculation does not converge, it # eventually stops and lets us know that it did not converge. # Maximum SCF iterations MAXITER = 40 # Energy convergence criterion E_conv = 1.0e-6 print('==> Starting SCF Iterations <==\n') # Begin Iterations for scf_iter in range(1, MAXITER + 1): # Build Fock matrix (Section XII) # # Compute SCF energy SCF_E = # print(F'SCF Iteration {scf_iter}: Energy = {SCF_E:.8f} dE = {SCF_E - E_old:.8f}') # Check to see if the energy is converged. If it is break out of the loop. # If it is not, set the current energy E_old if (abs(SCF_E - E_old) < E_conv): break E_old = SCF_E # Compute new coefficient & density matrices (Section X & XI) # # MAXITER exceeded? 
if (scf_iter == MAXITER): psi4.core.clean() raise Exception("Maximum number of SCF iterations exceeded.") # Post iterations print('\nSCF converged.') print(F'Final RHF Energy: {SCF_E:.6f} [Eh]') # - # Compare your results to Psi4 by computing the energy using `psi4.energy()` in the cell below. # + # ==> Compare our SCF to Psi4 <== # Call psi4.energy() to compute the SCF energy SCF_E_psi = psi4.energy('SCF') psi4.core.clean() # Compare our energy value to what Psi4 computes assert psi4.compare_values(SCF_E_psi, SCF_E, 6, 'My SCF Procedure') # - # #### Question: Modify the value of E_conv and describe its affect the number of iterations. # ### XIII. Using Hartree-Fock to Justify Molecular Structure # # Why is CO$_2$ linear? Why is H$_2$O bent? Why is CH$_4$ tetrahedral? Why is FeF$_6$ octahedral? In general # chemistry, we used valence shell electron pair repulsion (VSEPR) theory to justify molecular structures # by invoking a _repulsion_ between both bonding and non-bonding pairs of electrons. The reality of molecular # structure is more complicated, however. # # In this section of the lab, we will use the same Hartree-Fock method that we implemented above to justify the # bent structure of water by computing the electronic energy of H$_2$O at a variety of bond angles. # # + # ==> Scanning a Bond Angle: Flexible Water <== # Import a library to visualize energy profile import matplotlib.pyplot as plt # %matplotlib inline # Define flexible water molecule using Z-matrix flexible_water = """ O H 1 0.96 H 1 0.96 2 {} """ # Scan over bond angle range between 90 & 180, in 5 degree increments scan = {} for angle in range(90, 181, 5): # Make molecule mol = psi4.geometry(flexible_water.format(angle)) # Call Psi4 e = psi4.energy('scf/cc-pvdz', molecule=mol) #e = psi4.energy('scf/sto-3g', molecule=mol) # Save energy in dictionary scan[angle] = e # - # Visualize energy profile x = list(scan.keys()) y = list(scan.values()) plt.plot(x,y,'ro-') plt.xlabel('H-O-H Bond Angle ($^{\circ}$)') plt.ylabel('Molecular Energy ($E_h$)') plt.show() # Using the energy profile we generated above, justify the experimentally measured water bond angle of 104.5$^{\circ}$ in the cell below. 
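# *(A minimal sketch you may find helpful before writing your answer: locate the lowest-energy angle on the 5-degree grid using the `scan` dictionary built above.)*

# +
# angle (on this coarse grid) with the lowest computed SCF energy
best_angle = min(scan, key=scan.get)
print(F'Minimum energy on the grid occurs near {best_angle} degrees: {scan[best_angle]:.6f} Eh')
# -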
# **Answer** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7 GPU # language: python # name: tensorflow # --- import os import numpy as np import cv2 import argparse import time from tqdm import tqdm # + #for renaming #for filename in os.listdir(): # file =filename.split(".") # if(file[-1]=="txt"): # new_list=[] # #print(filename) # with open(filename, "r+") as f: # old = f.read() # old=old.split("\n") # for i in old: # i=i.split(" ") # i[0]='2' # new_list.append(" ".join(i)) # with open(filename, "w") as f: # f.write("\n".join(new_list)) # - # # Data Augmentation #convert from Yolo_mark to opencv format def yoloFormattocv(x1, y1, x2, y2, H, W): bbox_width = x2 * W bbox_height = y2 * H center_x = x1 * W center_y = y1 * H voc = [] voc.append(center_x - (bbox_width / 2)) voc.append(center_y - (bbox_height / 2)) voc.append(center_x + (bbox_width / 2)) voc.append(center_y + (bbox_height / 2)) return [int(v) for v in voc] # convert from opencv format to yolo format # H,W is the image height and width def cvFormattoYolo(corner, H, W): bbox_W = corner[3] - corner[1] bbox_H = corner[4] - corner[2] center_bbox_x = (corner[1] + corner[3]) / 2 center_bbox_y = (corner[2] + corner[4]) / 2 return corner[0], round(center_bbox_x / W, 6), round(center_bbox_y / H, 6), round(bbox_W / W, 6), round(bbox_H / H, 6) class yoloRotatebbox: def __init__(self, filename, image_ext, angle): assert os.path.isfile(filename + image_ext) assert os.path.isfile(filename + '.txt') self.filename = filename self.image_ext = image_ext self.angle = angle # Read image using cv2 self.image = cv2.imread(self.filename + self.image_ext, 1) rotation_angle = self.angle * np.pi / 180 self.rot_matrix = np.array( [[np.cos(rotation_angle), -np.sin(rotation_angle)], [np.sin(rotation_angle), np.cos(rotation_angle)]]) def rotateYolobbox(self): new_height, new_width = self.rotate_image().shape[:2] f = open(self.filename + '.txt', 'r') f1 = f.readlines() new_bbox = [] H, W = self.image.shape[:2] for x in f1: bbox = x.strip('\n').split(' ') if len(bbox) > 1: (center_x, center_y, bbox_width, bbox_height) = yoloFormattocv(float(bbox[1]), float(bbox[2]), float(bbox[3]), float(bbox[4]), H, W) upper_left_corner_shift = (center_x - W / 2, -H / 2 + center_y) upper_right_corner_shift = (bbox_width - W / 2, -H / 2 + center_y) lower_left_corner_shift = (center_x - W / 2, -H / 2 + bbox_height) lower_right_corner_shift = (bbox_width - W / 2, -H / 2 + bbox_height) new_lower_right_corner = [-1, -1] new_upper_left_corner = [] for i in (upper_left_corner_shift, upper_right_corner_shift, lower_left_corner_shift, lower_right_corner_shift): new_coords = np.matmul(self.rot_matrix, np.array((i[0], -i[1]))) x_prime, y_prime = new_width / 2 + new_coords[0], new_height / 2 - new_coords[1] if new_lower_right_corner[0] < x_prime: new_lower_right_corner[0] = x_prime if new_lower_right_corner[1] < y_prime: new_lower_right_corner[1] = y_prime if len(new_upper_left_corner) > 0: if new_upper_left_corner[0] > x_prime: new_upper_left_corner[0] = x_prime if new_upper_left_corner[1] > y_prime: new_upper_left_corner[1] = y_prime else: new_upper_left_corner.append(x_prime) new_upper_left_corner.append(y_prime) # print(x_prime, y_prime) new_bbox.append([bbox[0], new_upper_left_corner[0], new_upper_left_corner[1], new_lower_right_corner[0], new_lower_right_corner[1]]) return new_bbox def rotate_image(self): """ Rotates an image (angle in degrees) and 
expands image to avoid cropping """ height, width = self.image.shape[:2] # image shape has 3 dimensions image_center = (width / 2, height / 2) # getRotationMatrix2D needs coordinates in reverse order (width, height) compared to shape rotation_mat = cv2.getRotationMatrix2D(image_center, self.angle, 1.) # rotation calculates the cos and sin, taking absolutes of those. abs_cos = abs(rotation_mat[0, 0]) abs_sin = abs(rotation_mat[0, 1]) # find the new width and height bounds bound_w = int(height * abs_sin + width * abs_cos) bound_h = int(height * abs_cos + width * abs_sin) # subtract old image center (bringing image back to origin) and adding the new image center coordinates rotation_mat[0, 2] += bound_w / 2 - image_center[0] rotation_mat[1, 2] += bound_h / 2 - image_center[1] # rotate image with the new bounds and translated rotation matrix rotated_mat = cv2.warpAffine(self.image, rotation_mat, (bound_w, bound_h)) return rotated_mat if __name__ == "__main__": angels=[45,70,90,115,135,155,180,205,225,250,270,295,315] for filename in tqdm(os.listdir()): file =filename.split(".") if(file[-1]=="jpg"): image_name=file[0] image_ext="."+file[1] else: continue for angle in angels: im = yoloRotatebbox(image_name, image_ext, angle) bbox = im.rotateYolobbox() image = im.rotate_image() # to write rotateed image to disk cv2.imwrite(image_name+'_' + str(angle) + '.jpg', image) file_name = image_name+'_' + str(angle) + '.txt' #print("For angle "+str(angle)) if os.path.exists(file_name): os.remove(file_name) # to write the new rotated bboxes to file for i in bbox: with open(file_name, 'a') as fout: fout.writelines( ' '.join(map(str, cvFormattoYolo(i, im.rotate_image().shape[0], im.rotate_image().shape[1]))) + '\n') #for imges only os.chdir("D:\\My ML Projects\\lenskart task\\sub-train\\Non-Power Reading\\Wayfarer") if __name__ == "__main__": #angels=[45,55,90,115,135,155,180,205,225,250,270,295,315,345] angels=[90,180,275] for filename in tqdm(os.listdir()): file =filename.split(".") if(file[-1]=="jpg"): image_name=file[0] image_ext="."+file[1] else: continue for angle in angels: im = yoloRotatebbox(image_name, image_ext, angle) image = im.rotate_image() # to write rotateed image to disk cv2.imwrite(image_name+'_' + str(angle) + '.jpg', image) from skimage.transform import rotate from skimage.util import random_noise, img_as_ubyte from skimage.io import imread import cv2, random # + def h_flip(image): return np.fliplr(image) def v_flip(image): return np.flipud(image) def add_noise(image): return random_noise(image) def blur_image(image): return cv2.GaussianBlur(image,(9,9),0) # - transformations={"horizontal flip":h_flip, "vertical flip": v_flip, "adding noise":add_noise, "making blur":blur_image } def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/eyeframe/Wayfarer" augmentated_path="D:/My ML Projects/lenskart task/sub-train/eyeframe/Wayfarer" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=1100 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, 
transformed_image) #i+=1 Augment_images_gerenator() def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Aviator" augmentated_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Aviator" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=1000 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, transformed_image) #i+=1 Augment_images_gerenator() def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Oval" augmentated_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Oval" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=1100 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, transformed_image) #i+=1 Augment_images_gerenator() def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Wayfarer" augmentated_path="D:/My ML Projects/lenskart task/sub-train/Non-Power Reading/Wayfarer" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=200 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, transformed_image) #i+=1 Augment_images_gerenator() def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/eyeframe/Wayfarer" augmentated_path="D:/My ML Projects/lenskart task/sub-train/eyeframe/Wayfarer" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=1100 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, transformed_image) #i+=1 
Augment_images_gerenator() def Augment_images_gerenator(): image_path="D:/My ML Projects/lenskart task/sub-train/sunglasses/Rectangle" augmentated_path="D:/My ML Projects/lenskart task/sub-train/sunglasses/Rectangle" images=[] for im in os.listdir(image_path): images.append(os.path.join(image_path,im)) images_to_generate=300 i=1 for i in tqdm(range(images_to_generate)): image=random.choice(images) original_image=imread(image) transformed_image=None n=0 transformation_count=random.randint(1,len(transformations)) while n<=transformation_count: key=random.choice(list(transformations)) transformed_image=transformations[key](original_image) n+=1 new_image_path="%s/aug_%s.jpg"%(augmentated_path,i) transformed_image=img_as_ubyte(transformed_image) transformed_image=cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB) cv2.imwrite(new_image_path, transformed_image) #i+=1 Augment_images_gerenator() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="8BW2rOQfd9kx" colab_type="code" outputId="2361460c-2a44-4b95-c83b-ccd71093600e" colab={"base_uri": "https://localhost:8080/", "height": 52} from google.colab import drive drive.mount('/gdrive') # %cd /gdrive/'My Drive'/ # + id="UxzrGOcsZTJ9" colab_type="code" colab={} import numpy as np # + id="cp0lzQpfZTKW" colab_type="code" colab={} trainImages = np.load('./diabetic-retinopathy-resized/1x2/trainImages.npy') testImages = np.load('./diabetic-retinopathy-resized/1x2/testImages.npy') y_train = np.load('./diabetic-retinopathy-resized/1x2/y_train.npy') y_test = np.load('./diabetic-retinopathy-resized/1x2/y_test.npy') # + id="ux_GwMX8ZTKi" colab_type="code" outputId="d5cf1b28-3bab-4695-c7aa-a23766cfa9e8" colab={"base_uri": "https://localhost:8080/", "height": 86} print(trainImages.shape) print(testImages.shape) print(y_train.shape) print(y_test.shape) # + id="0zvMC0sHZTKu" colab_type="code" outputId="b2587e2c-c1f7-41b3-aac7-db24ebf243bc" colab={"base_uri": "https://localhost:8080/", "height": 52} print(trainImages.max()) print(testImages.max()) # + id="h5myAgrycc5-" colab_type="code" colab={} # Normalize trainImages = trainImages/255 testImages = testImages/255 # + id="FdlhksgnZTK5" colab_type="code" outputId="d601c955-1ccc-4bd5-f8fc-29534782c090" colab={"base_uri": "https://localhost:8080/", "height": 64} from tensorflow.keras import datasets, layers, models # + id="-BTf6EJwZTLC" colab_type="code" colab={} def convolutional_model(): # create model model = models.Sequential() model.add(layers.Conv2D(128, (5, 5), activation='relu', input_shape=(256, 256, 3))) model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) # model.add(layers.Dropout(0.3)) model.add(layers.Conv2D(64, (5,5), strides = (1,1), activation='relu')) model.add(layers.MaxPooling2D(pool_size = (2,2), strides = (2,2))) # model.add(layers.Dropout(0.5)) model.add(layers.Conv2D(32, (5,5), strides = (1,1), activation='relu')) model.add(layers.MaxPooling2D(pool_size = (2,2), strides = (2,2))) # model.add(layers.Dropout(0.3)) model.add(layers.Flatten()) model.add(layers.Dense(100, activation='relu')) # model.add(layers.Dropout(0.3)) model.add(layers.Dense(2, activation='softmax')) # model = models.Sequential() # model.add(layers.Conv2D(16, (5, 5), activation='relu', input_shape=(256, 256, 3))) # model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) # 
model.add(layers.Conv2D(8, (2, 2), activation='relu')) # model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) # model.add(layers.Flatten()) # model.add(layers.Dense(100, activation='relu')) # model.add(layers.Dense(2, activation='softmax')) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) return model # + id="9Wq_OaajZTLK" colab_type="code" outputId="435074d5-74b2-4ace-db04-15181f57c7b9" colab={"base_uri": "https://localhost:8080/", "height": 541} model = convolutional_model() # + id="D1ii7PQAZTLQ" colab_type="code" outputId="1e789d0f-f04f-42fd-aff7-5f2ef8a7dccf" colab={"base_uri": "https://localhost:8080/", "height": 1000} model.fit(trainImages, y_train, epochs=40, batch_size=100, validation_data=(testImages, y_test)) # + id="lkPTmCSkdOEX" colab_type="code" outputId="560b2827-7a3e-4568-f7eb-ed961f60763f" colab={"base_uri": "https://localhost:8080/", "height": 52} scores = model.evaluate(testImages, y_test, verbose=0) print('Accuracy: {}% \n Error: {}'.format(scores[1], 1 - scores[1])) # + id="x7GkOIoueMWf" colab_type="code" outputId="547940aa-ec13-49e6-d51c-ba3847beb393" colab={"base_uri": "https://localhost:8080/", "height": 52} from sklearn import metrics y_pred = model.predict(testImages) matrix = metrics.confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1)) matrix # + id="8HRPUBPgdVJH" colab_type="code" outputId="6c7beab6-27b1-45e9-8a33-b6396fa854cc" colab={"base_uri": "https://localhost:8080/", "height": 34} model.save("training_1x2_1.h5") print("Saved model to disk") # + id="GB931bhIuDns" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt import os from scipy.optimize import minimize plt.rcParams['figure.figsize'] = [12, 18] plt.rcParams.update({'font.size': 18}) # Solve y = Theta * s for "s" n = 1000 # dimension of s p = 200 # number of measurements, dim(y) Theta = np.random.randn(p,n) y = np.random.randn(p) # L1 Minimum norm solution s_L1 def L1_norm(x): return np.linalg.norm(x,ord=1) constr = ({'type': 'eq', 'fun': lambda x: Theta @ x - y}) x0 = np.linalg.pinv(Theta) @ y # initialize with L2 solution res = minimize(L1_norm, x0, method='SLSQP',constraints=constr) s_L1 = res.x # - # L2 Minimum norm solution s_L2 s_L2 = np.linalg.pinv(Theta) @ y # + fig,axs = plt.subplots(2,2) axs = axs.reshape(-1) axs[0].plot(s_L1,color='b',LineWidth=1.5) axs[0].set_ylim(-0.2,0.2) axs[1].plot(s_L2,color='r',LineWidth=1.5) axs[1].set_ylim(-0.2,0.2) axs[2].hist(s_L1,bins=np.arange(-0.105,0.105,0.01),rwidth=0.9) axs[3].hist(s_L2,bins=np.arange(-0.105,0.105,0.01),rwidth=0.9) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- sub_list=input(list) listy=input(list) flag=0 if (set(sub_list).issubset(set(listy))): flag=1 if (flag): print("it's a Match") else: print("it's Gone") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Preprocessing text for Deep Learning # # This notebooks performs basic preprocessing on the Quora datasets 
using the method(s) from `common.nlp.sequence_preprocessing`. The goal is to prepare the data for using sequence models. This means some information, such as punctuation and capitals, are removed. This information should be captured in another process if we want to use it. # + import sys sys.path.append("../..") import numpy as np import pandas as pd from common.nlp.sequence_preprocessing import WORD_MAP, preprocess_text_for_dl # - # load data with right data types (this is important for the IDs in particular) dtypes = {"qid": str, "question_text": str, "target": int} train = pd.read_csv("../data/train.csv", dtype=dtypes) test = pd.read_csv("../data/test.csv", dtype=dtypes) # ### Preprocessing steps # The `preprocess_text_for_DL` method performs the following steps: # 1. Convert text to lower case # 2. Replace shorthand writing (such as `won't`) to their full form (`will not`) # 3. Selectively remove, retain or ignore punctuations # 4. Replace numbers with # (eg: 1 by #, 23 by ##, 1993 by ####) # # The `common.nlp.sequence_preprocessing` module contains a variable `WORD_MAP` that specifies the mapping to use in step 2. This mapping is used by default, but can be overridden in the function call. WORD_MAP # The punctuation that is removed is that from `string.punctuation`. import string string.punctuation # Preprocessing can be done for a single dataset or more datasets at once (all positional arguments are assumed to be datasets). # one dataset example just_train = preprocess_text_for_dl(train, text_col="question_text", word_map=WORD_MAP, puncts_ignore='/-', puncts_retain='&') just_train.head() just_train['question_text'][7] # multiple dataset example cleaned_train, cleaned_test = preprocess_text_for_dl(train, test, puncts_ignore='/-', puncts_retain='&') cleaned_train.shape, cleaned_test.shape # ### Results # Let's look at some (random) results. def sample_result(idx=None): if idx is None: idx = np.random.randint(0, len(cleaned_train)) print(idx) print(train["question_text"].iloc[idx]) print(cleaned_train["question_text"].iloc[idx]) # run this cell to see a random preprocessed instance and its original sample_result() # #### Some examples we might need to deal with differently... sample_result(1287932) # Should hyphens be replaced with a space instead of removed? (Handled!) sample_result(378162) # With slashes, maybe we should just choose one of the options (e.g., the most common one in the corpus?) to maintain a logical sentence? (Handled!) sample_result(1012331) # Maybe we should replace $ with `dollar` and replace general abbreviations like y.o. (years old) with full words? # # Also, we probably have to replace digits. sample_result(1068603) # What about `'s` that indicates possesion? By removing the apostrofe, the word turns into plural form or results in a misspelling of a name like here. sample_result(535806) # Some weird characters are still there apparently. 
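# *(A minimal sketch, not part of `common.nlp.sequence_preprocessing`: one way we could normalize or drop the remaining non-ASCII characters if we decide to handle them. The example string is made up.)*

# +
import unicodedata

def strip_non_ascii(text):
    # decompose accented characters into base character + combining mark,
    # then drop anything that still cannot be encoded as ASCII
    normalized = unicodedata.normalize("NFKD", text)
    return normalized.encode("ascii", "ignore").decode("ascii")

# hypothetical usage on a single string (not applied to the datasets here)
strip_non_ascii("café déjà vu…")
# -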
# # __For the biggest part it seems to work fine though.__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (YelpEnv) # language: python # name: yelpenv # --- # + # install yelp library # # !pip install yelp # - client_id = "Yba0SJQtkua4MCvdfFfUrg" api_key = "" url = "https://api.yelp.com/v3/businesses/search" response = requests.get() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from functools import partial from itertools import product import numpy as np import pandas as pd from graspy.cluster import GaussianCluster from joblib import Parallel, delayed from scipy.stats import mannwhitneyu, ttest_ind, ks_2samp from src import generate_truncnorm_sbms_with_communities, estimate_embeddings # + def estimate_community(embeddings, n_clusters): predicted_labels = ( GaussianCluster(n_clusters, n_clusters, "all").fit_predict(embeddings) + 1 ) # ari = adjusted_rand_score(true_labels, predicted_labels) return predicted_labels def compute_statistic(tests, pop1, pop2): res = np.zeros(len(tests)) for idx, test in enumerate(tests): if test.__name__ == "multiscale_graphcorr": statistic, pval, _ = test(pop1, pop2, reps=250, is_twosamp=True) elif test.__name__ == "test": statistic, pval = test(pop1, pop2, reps=250) else: # for other tests, do by edge statistic, pval = test(pop1, pop2) res[idx] = pval return res def run_experiment( m, block_1, block_2, mean_1, mean_2, var_1, var_2, mean_delta, var_delta, n_clusters, reps, tests, ): total_n = block_1 + block_2 r, c = np.triu_indices(total_n, k=1) res = np.zeros((reps, 2, len(tests))) for i in np.arange(reps).astype(int): pop1, pop2, true_labels = generate_truncnorm_sbms_with_communities( m=m, block_1=block_1, block_2=block_2, mean_1=mean_1, mean_2=mean_2, var_1=var_1, var_2=var_2, mean_delta=mean_delta, var_delta=var_delta, ) pop1_edges = pop1[:, r, c] pop2_edges = pop2[:, r, c] true_edges = (true_labels[:, None] + true_labels[None, :])[r, c] sig_edges = np.zeros((len(tests), total_n, total_n))[:, r, c] for j in np.unique(true_edges): tmp_labels = true_edges == j tmp_pop1_edges = pop1_edges[:, tmp_labels].ravel() tmp_pop2_edges = pop2_edges[:, tmp_labels].ravel() pvals = compute_statistic(tests, tmp_pop1_edges, tmp_pop2_edges) for p_idx, pval in enumerate(pvals): if pval <= 0.05: sig_edges[p_idx][tmp_labels] = 1 prec = (sig_edges[:, true_edges == 0]).sum(axis=1) / sig_edges.sum( axis=1 ) np.nan_to_num(prec, False) recall = (sig_edges[:, true_edges == 0]).sum(axis=1) / ( true_edges == 0 ).sum(axis=0) res[i] = np.array((prec, recall)) res = res.mean(axis=0).reshape(-1) to_append = [ m, mean_1, mean_2, var_1, var_2, mean_delta, var_delta, *res, ] return to_append # + spacing = 50 block_1 = 25 # different probability block_2 = 25 mean_1 = 0 mean_2 = 0 var_1 = 0.25 var_2 = 0.25 mean_delta = 0 mean_deltas = np.linspace(mean_1, 1 - mean_1, spacing + 1) #var_deltas = np.linspace(0, 3, spacing + 1) var_delta = 0 reps = 50 n_clusters = range(2, 5) ms = np.linspace(0, 250, spacing + 1)[1:].astype(int) tests = [ks_2samp, mannwhitneyu, ttest_ind] partial_func = partial( run_experiment, block_1=block_1, block_2=block_2, mean_1=mean_1, mean_2=mean_2, var_1=var_1, var_2=var_2, var_delta=var_delta, #mean_delta=mean_delta, n_clusters=n_clusters, reps=reps, tests=tests, ) 
args = [dict(m=m, mean_delta=mean_delta) for m, mean_delta in product(ms, mean_deltas)] #args = sum(zip(reversed(args), args), ())[: len(args)] #args = sum(zip(reversed(args), args), ())[: len(args)] res = Parallel(n_jobs=-1, verbose=7)(delayed(partial_func)(**arg) for arg in args) # + cols = [ "m", "mean_1", "mean_2", "var_1", "var_2", "mean_delta", "var_delta", *[ f"omni_{metric}_{test.__name__}" for metric in ["precision", "recall"] for test in tests ], ] res_df = pd.DataFrame(res, columns=cols) res_df.to_csv( f"./results/20200321_truth_means.csv", index=False ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch # + #----------------------------张量的一些常见操作------------------------------ # - #torch.trunk可以将一个张量在dim维度(默认为0)上分割成多个张量 b = torch.rand([3,2]) b c,d = torch.chunk(b, chunks=2) c,d c,d = torch.chunk(b, chunks=2, dim=1) c,d b, torch.reshape(b,[2,3]), torch.reshape(b,[-1]) #torch.scatter: 根据dim和index将src张量中填充到结果张量相应位置中 #self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0 #self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1 #self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2 src = torch.arange(1, 11).reshape((2, 5)) index = torch.tensor([[0, 1, 2, 0]]) src,torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src) index = torch.tensor([[0, 1, 2], [0, 1, 4]]) src,torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src) #torch.split(tensor, split_size_or_sections, dim=0),其和trunk不同之处是可以传入列表 a = torch.arange(10).reshape(5,2) a,torch.split(a, 2) torch.split(a, [1,4]) # torch.squeeze(input, dim=None, *, out=None) → Tensor:将input所有维度为1的维度移除 # When dim is given, a squeeze operation is done only in the given dimension. a = torch.rand([3,2]) a,a.shape torch.reshape(a,[3,1,2]) torch.squeeze(torch.reshape(a,[3,1,2])) b = torch.reshape(a, [3,1,2,1,1]) b torch.squeeze(b,dim=1).shape # dim不能为元组和列表 #torch.stack(tensors, dim=0, *, out=None) → Tensor #Concatenates a sequence of tensors along a new dimension. #All tensors need to be of the same size. b = torch.rand([3,2]) a,b torch.stack([a,b]),torch.stack([a,b]).shape torch.stack([a,b], dim=1),torch.stack([a,b], dim=1).shape torch.stack([a,b], dim=2),torch.stack([a,b], dim=1).shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 31009 - Final Project - Data Clean # ### Ada, Rohit, Dylan # import numpy as np import pandas as pd import re import nltk from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from collections import Counter import seaborn as sns import matplotlib.pyplot as plt from IPython.core.display import display, HTML import string from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences import tensorflow as tf from tqdm import tqdm dat = pd.read_csv("train.csv") dat.head() # ## 1. 
Data Processing: # + ## Helper functions def Clean_URL(text): ## remove URL hyper links url = re.compile(r'https?://\S+|www\.\S+') return url.sub(r'',text) def Clean_html(text): ## remove html tags html=re.compile(r'<.*?>') return html.sub(r'',text) def Clean_emoji(text): ## remove emojis pattern = re.compile("[" u"\U0001F600-\U0001F64F" u"\U0001F300-\U0001F5FF" u"\U0001F680-\U0001F6FF" u"\U0001F1E0-\U0001F1FF" u"\U00002702-\U000027B0" u"\U000024C2-\U0001F251" "]+", flags=re.UNICODE) return pattern.sub(r'', text) ## Remove punctuations in the text def Clean_punct(text): table = str.maketrans('','',string.punctuation) return text.translate(table) from spellchecker import SpellChecker spell = SpellChecker() ## Spelling Checking def Correct_spellings(text): corrected = [] misspelled = spell.unknown(text.split()) for word in text.split(): if word in misspelled: corrected.append(spell.correction(word)) else: corrected.append(word) return " ".join(corrected) # - # ### Data Clean: #Apply all helper functions dat['text'] = dat['text'].apply(lambda x : Clean_URL(x)) dat['text'] = dat['text'].apply(lambda x : Clean_html(x)) dat['text'] = dat['text'].apply(lambda x : Clean_emoji(x)) dat['text'] = dat['text'].apply(lambda x : Clean_punct(x)) dat['text'] = dat['text'].apply(lambda x : Correct_spellings(x)) dat.to_csv(r'Cleaned_Train.csv', index = False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install pycaret from pycaret.datasets import get_data import pandas as pd df = get_data("titanic") df = df.drop(["Name", "PassengerId", "Ticket", "Cabin"], axis = 1) df["Sex"] = pd.factorize(df["Sex"])[0] dummy = pd.get_dummies(df['Embarked'], prefix='Cabin') df = pd.concat([df.drop("Embarked", axis=1), dummy], axis = 1) df.head() from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score # + from sklearn.model_selection import train_test_split y = df["Survived"] X = df.drop("Survived", axis = 1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) fill_age = X_train["Age"].mean() X_train["Age"] = X_train["Age"].fillna(fill_age) X_test["Age"] = X_test["Age"].fillna(fill_age) # + from sklearn.linear_model import LogisticRegression def train_model(model: Any, X_train, X_test, y_train, y_test): clf = model.fit(X_train, y_train) y_pred = clf.predict(X_test) acc = accuracy_score(y_test, y_pred) return {"model": model.__class__.__name__, "params": model.get_params(), "accuracy": acc} train_model(LogisticRegression(), X_train, X_test, y_train, y_test) # - train_model(KNeighborsClassifier(), X_train, X_test, y_train, y_test) # + from prefect import Flow, task, unmapped from typing import Any import prefect @task(nout=4) def create_data(): df = get_data("titanic") df = df.drop(["Name", "PassengerId", "Ticket", "Cabin"], axis = 1) df["Sex"] = pd.factorize(df["Sex"])[0] dummy = pd.get_dummies(df['Embarked'], prefix='Cabin') df = pd.concat([df.drop("Embarked", axis=1), dummy], axis = 1) y = df["Survived"] X = df.drop("Survived", axis = 1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) fill_age = X_train["Age"].mean() X_train["Age"] = 
X_train["Age"].fillna(fill_age) X_test["Age"] = X_test["Age"].fillna(fill_age) return X_train, X_test, y_train, y_test @task def get_models(): return [LogisticRegression(random_state=42), KNeighborsClassifier(), DecisionTreeClassifier(), SVC(), RandomForestClassifier(n_estimators=200, max_depth=4, random_state=42), RandomForestClassifier(n_estimators=100, max_depth=3, random_state=42)] @task def train_model(model: Any, X_train, X_test, y_train, y_test): clf = model.fit(X_train, y_train) y_pred = clf.predict(X_test) acc = accuracy_score(y_test, y_pred) return {"model": model.__class__.__name__, "params": model.get_params(), "accuracy": acc} @task def get_results(results): res = pd.DataFrame(results) prefect.context.logger.info(res) return res with Flow("distributed") as flow: X_train, X_test, y_train, y_test = create_data() models = get_models() training_runs = train_model.map(models, unmapped(X_train), unmapped(X_test), unmapped(y_train), unmapped(y_test)) get_results(training_runs) # - from prefect.executor import LocalDaskExecutor flow.executor = LocalDaskExecutor() flow.run() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # # Lecture 5: Common Random Variables and How to Sample Them # %matplotlib inline import matplotlib as mpl mpl.rcParams['figure.dpi']= 300 import matplotlib.pyplot as plt import seaborn as sns import numpy as np # ## Pseudo-random Number Generators (PRNG) # # PRNG's are used to generate random integers between zero and a maximum number, say $m$. # # ### The middlesquare algorithm () # # 1. Take a number and square it. # 2. Pad the result with zeros to get to the desired number of digits. # 3. Take the middle digits of the resulting number. # 4. Repeat. # # Here is an implementation using strings. def middlesquare(s, digits=4): # Square the number s2 = s ** 2 # Turn the resulting number into a string padding with zeros to get to the desired number of digits s2_str = str(s2).zfill(2*digits) # Keep only the middle middle_str = s2_str[digits/2:][:-digits/2] return int(middle_str) seed = 1234 s = seed for _ in range(20): s = middlesquare(s, digits=4) print s # Unfortunately, the middlesquare algorithms results in periodic sequences with very small period. For example: seed = 540 s = seed for _ in range(20): s = middlesquare(s, digits=4) print s # ### Linear Congruential Generator (LCG) # The linear congruential generator works as follows. You pick three big integers $a$, $b$ and $m$. # Pick a seed $x_0$. # Then iterate: # $$ # x_{i+1} = (a x_i + b)\mod m # $$ def lcg(x, a=123456, b=978564, m=6012119): return (a * x + b) % m seed = 1234 s = seed for _ in range(20): s = lcg(s) print s # The good thing about LCG is that you can prove a lot of stuff about it using group theory and that you know that the maximum possible number is $m$. # ### Mersenne Twister PRNG # Numpy uses the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) to generate random numbers. # Its details are more complicated than LCG, but it is still initialized by an integer seed. 
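# *(Aside, a minimal sketch: you can peek at the generator's internal state with `np.random.get_state()`; for the Mersenne Twister this is a vector of 624 32-bit integers, and its exact contents depend on the seed.)*

# +
np.random.seed(12345)
state = np.random.get_state()
# first entry is the algorithm name, second is the state vector
print state[0], len(state[1])
# -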
# You can test it as follows: # set the seed np.random.seed(12345) # print 5 integers from 0 to 6012119 for _ in range(5): print np.random.randint(0, 6012119) # see what the seed does - Here is what happens if you rerun the code above: for _ in range(5): print np.random.randint(0, 6012119) # And here is what happens if you reset the seed to its original value and rerun the code np.random.seed(12345) for _ in range(5): print np.random.randint(0, 6012119) # So, resetting the seed gives you the same sequence. In your numerical simulations you should always set the seed by hand in order to ensure the reproducibility of your work. # ## Sampling from the uniform distribution # # If we have a PRNG that samples between zero and a big integer, say $m$, we can create a generator that samples from the uniform distribution. # If $d$ is the sample from the PRNG, then # $$ # x = \frac{d}{m}, # $$ # is approximately uniformly distributed. # Let's experiment with this idea. # + # The maximum integer m = 6012119 # First a uniform random generator based on lcg lcg_seed = 123456 # A seed of lcg lcg_state = lcg_seed # Internal state of lcg def unif_lcg(): global lcg_state lcg_state = lcg(lcg_state) return lcg_state / (1. * m) # The 1. in the denominator ensures # that the division is done in floating point arithmetic print 'LCG Uniform Samples:' for _ in range(5): print unif_lcg() # And let's also do it with Mersenne Twister from numpy np.random.seed(123456) def unif_mt(): return np.random.randint(0, m) / (1. * m) print '\nMT Uniform Samples:' for _ in range(5): print unif_mt() # - # Which one of the two is better? There are many statistical tests that we would like our uniform random number generator to go through. First (and most importantly) the empirical histograms of the generated numbers should be uniform. Let's test this. # How many numbers to sample: N = 100 lcg_X = [unif_lcg() for _ in range(N)] mt_X = [unif_mt() for _ in range(N)] # Plot the histograms fig, ax = plt.subplots() ax.hist(lcg_X, normed=True, alpha=0.5, label='LGC_unif') ax.hist(mt_X, normed=True, alpha=0.5, label='MT_unif') ax.set_xlabel('$x$') ax.set_ylabel('$p(x)$') plt.legend(loc='best') # ### Question 01 # + Hmm, we probably need to increase the number of samples to observe this statistic better. Increase $N$ from 100 to $1,000$ and then to $10,000$. How do the distributions look like now? # # + A second thing that we would like to test is whether or not consecutive numbers are all independent (Idependent identically distributed). Unfortunately, we need more theory than we know to do this. # # + For future reference, note that you should not really use ``unif_mt`` to generate uniform random numbers. Numpy already implements this in ``numpy.random.rand``. We provide an example right below. # Generate some random numbers with numpy's unif_mt: X = np.random.rand(10) print X # ## The Bernoulli Distribution # The Bernoulli distribution arises from a binary random variable representing the outcome of an experiment with a given probability of success. # Let us encode success with 1 and failure with 0. # Then, we say that the random variable # $$ # X\sim\mathcal{B}(\theta), # $$ # is a Bernoulli random variable with parameter $\theta$ if: # $$ # X = \begin{cases} # 1,\;\text{with probability}\;\theta,\\ # 0,\;\text{otherwise}. 
# \end{cases} # $$ # Another way to write the same thing is through the probability density function of $X$: # $$ # p(x) = \theta \delta(x-1) + (1-\theta)\delta(x), # $$ # where we used Dirac's delta to talk about point masses. # To sample from it, we do the following steps: # # + Sample a uniform number $u$ (i.e., a number of $\mathcal{U}([0,1])$). # # + If $u\le \theta$, then set $x = 1$. # # + Otherwise, set $x = 0$. # # Let's see if this process does indeed produce the desired result. # + def sample_bernoulli(theta): u = np.random.rand() if u <= theta: return 1 return 0 for _ in range(10): print sample_bernoulli(0.3) # - # Let's do a histogram like before N = 1000 X = [sample_bernoulli(0.3) for _ in range(N)] fig, ax = plt.subplots() ax.hist(X, alpha=0.5) ax.set_xlabel('$x$') ax.set_ylabel('$p(x)$') # Ok, it looks fine. About $\theta N$ samples went to 1 and $(1-\theta)N$ samples went to 0. # ## Sampling Discrete Distributions # Consider a generic discrete random variable $X$ taking $m$ different values. # Without loss of generality, you may assume that these values are integers $\{0, 1,2,\dots,m-1\}$ (they are just the labels of the discrete objects anyway). # Let us assume that # $$ # p(X=k) = p_k, # $$ # where, of course, we must have: # $$ # p_k \ge 0, # $$ # and # $$ # \sum_{k=0}^{m-1} p_k = 1. # $$ # Remember, that an succint way to write this is using the Dirac delta: # $$ # p(x) = \sum_{k=0}^{m-1}p_k\delta(x-k). # $$ # In any case, here is how you sample from such a distribution: # # + Draw a uniform sample $u$. # + Find the index $j\in\{0,1,\dots,m-1\}$ such that: # $$ # \sum_{k=0}^{j-1}p_k \le u < \sum_{k=0}^j. # $$ # + Then, your sample is $j$. # # Let's code it. def sample_discrete(p): """ Sample from a discrete probability density. :param p: An array specifying the probability of each possible state. The number of states ``m=len(p)``. :returns: A random integer. (btw this is how you document a python function) """ m = len(p) u = np.random.rand() c = 0. for j in range(m): c += p[j] if u <= c: return j # Let's test it with a four-state discrete random variable with probabilities p = [0.2, 0.3, 0.4, 0.1] # Let's take 1,000 samples N = 1000 X = [sample_discrete(p) for _ in range(N)] # and do the empirical histrogram fig, ax = plt.subplots() ax.hist(X, alpha=0.5) ax.set_xlabel('$x$') ax.set_ylabel('$p(x)$') # Of course, numpy already implements this functionality. Here is how to do the same thing numpy: X_np = np.random.choice(np.arange(4), # The objects that you want to sample (here integers, 0,1,2,3) p=p, # The probability of sampling each object size=N # How many samples you want ) # Let's compare the two histograms fig, ax = plt.subplots() ax.hist(X, alpha=0.5, label='Our implementation') ax.hist(X_np, alpha=0.5, label='Numpy implementation') ax.set_xlabel('$x$') ax.set_ylabel('$p(x)$') plt.legend(loc='best') # ## The Binomial Distribution # # The Binomial distribution gives you the number of successes in $N$ tries of a random experiment with probability of success $\theta$. # We write: # $$ # X\sim \mathcal{B}(N,\theta). # $$ # You can easily simulate it (excersize) by noticing that: # $$ # X = \sum_{i=1}^N X_i, # $$ # where # $$ # X_i \sim \mathcal{B}(\theta), # $$ # are indepdent Bernoulli trials. # We can also show that: # $$ # p(X=k) = \left(\begin{array}{c}N\\ k\end{array}\right)\theta^k(1-\theta)^{N-k}. # $$ # Let's plot this distribution for various $N$'s. # We will use the built-in ``scipy.stats`` functionality for this one. 
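# *(Before switching to ``scipy.stats``, here is a minimal sketch of the exercise mentioned above: sampling a Binomial by summing independent Bernoulli draws. It reuses the `sample_bernoulli` function defined earlier.)*

# +
def sample_binomial(N, theta):
    # number of successes in N independent Bernoulli(theta) trials
    return sum(sample_bernoulli(theta) for _ in range(N))

for _ in range(5):
    print sample_binomial(10, 0.3)
# -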
# For your future reference, you can find the ``scipy.stats`` documentation [here](https://docs.scipy.org/doc/scipy/reference/stats.html). # + import scipy.stats as st def plot_binom_pdf(N, theta): k = np.arange(N) + 1. # From 1 to N p_k = st.binom(N, theta).pmf(k) # pmf is short for probability mass function # which is the right terminology for a discrete variable # (i.e., we use 'mass' instead of 'density') fig, ax = plt.subplots() ax.plot(k, p_k, 'o', color='b') ax.vlines(k, 0, p_k, colors='b', lw=5, alpha=0.5) ax.set_xlabel('$x$') ax.set_ylabel('$p(x)$') ax.set_title(r'$\mathcal{B}(N=%d, \theta=%.2f)$' % (N, theta)) # the 'r' is required to render # '\' character correctly plot_binom_pdf(4, 0.3) # - # Ok, now let's play with $N$. plot_binom_pdf(10, 0.3) # ### Question 02 # + Start increasing $N$. Try really big numbers. Does the result remind you of a familiar distribution? # # + Play a little bit with $\theta$. What happens as you move it around? # ## Inverse Sampling # How do you sample an arbitrary univariate continuous random variable $X$ with CDF $F(x)$? # In this scenario, *inverse sampling* is the way to go. # It relies on the observation that the random variable # $$ # Y = F^{-1}(U), # $$ # where $F^{-1}$ is the inverse of the CDF of $X$ and $U\sim\mathcal{U}([0,1])$, has exactly the same distribution as $X$. # # We will demonstrate this by example. To this end, let us consider an exponential random variable: # $$ # T \sim \mathcal{E}(r), # $$ # where $r > 0$ is known as the *rate parameter*. # The exponential distribution describes the time that passes between random events that occur at a constant rate $r$. # Its PDF is: # $$ # p(t) = re^{-rt}, # $$ # and its CDF is: # $$ # F(t) = p(T\le t) = 1 - e^{-rt}. # $$ # We plot it next. r = .5 # Events occur at a rate of 0.5 per minute fig, ax = plt.subplots() t = np.linspace(0., 5. / r, 100) ax.plot(t, st.expon(scale=1./r).cdf(t)) ax.set_xlabel('$t$') ax.set_ylabel(r'$F(t) = p(T <= t)$') ax.set_title(r'$T\sim\mathcal{E}(r=%.2f)$' % r) # To sample $T$ using inverse sampling, we need the inverse of the CDF. This is easily shown to be: # $$ # F^{-1}(u) = -\frac{\ln(1-u)}{r}. # $$ # Let's see if this is going to give us the right samples. # We will compare the empirical histogram obtained by inverse sampling to the actual PDF $p(t)$. # + def sample_exp(r): u = np.random.rand() return -np.log(1. - u) / r N = 10000 T = [sample_exp(r) for _ in range(N)] fig, ax = plt.subplots() ax.hist(T, alpha=0.5, normed=True, bins=100, label='Histogram of samples') ax.plot(t, st.expon(scale=1./r).pdf(t)) ax.set_xlabel('$t$') ax.set_ylabel('$p(t)$') ax.set_title(r'$T\sim\mathcal{E}(r=%.2f)$' % r) plt.legend(loc='best') # - # ### Questions 03 # # + Implement inverse sampling for a univariate Gaussian with zero mean and unit variance. Use ``scipy.stats`` to find the inverse CDF of the Gaussian (it is ``st.norm.ppf``). # ## The Central Limit Theorem # Let $X_1,X_2,\dots$ be iid random variables with mean $\mu$ and variance $\sigma^2$. # Define their average: # $$ # S_N = \frac{X_1+\dots+X_N}{N}. # $$ # The Central Limit Theorem (CLT) states that: # $$ # S_N \sim \mathcal{N}(S_N|\mu, \frac{\sigma^2}{N}), # $$ # for large $N$. # That is, $S_N$ starts to look Gaussian. # Let's test it for the Exponential distribution. # We will use ``numpy.random.exponential`` to sample from the exponential. # + r = 0.5 N = 5 # How many iid variables are we going to sum M = 10000 # How many times do you want to sample Ts = np.random.exponential(scale=1./r, size=(N, M)) # Notice that it uses the inverse of the rate.
# It is always a good idea to look at the documentation # if you are unsure. # These are the samples of SN: SN = np.sum(Ts, axis=0) / N # Notice that I am only summing the rows fig, ax = plt.subplots() ax.hist(SN, bins=100, normed=True, alpha=0.5, label='Empirical histogram of $S_N$') mu_CLT = 1. / r # CLT mean sigma_CLT = np.sqrt(1. / (N * r**2)) # CLT standard deviation Ss = np.linspace(SN.min(), SN.max(), 100) ax.plot(Ss, st.norm(loc=mu_CLT, scale=sigma_CLT).pdf(Ss), label='CLT Gaussian') ax.set_xlabel('$S_N$') ax.set_ylabel('$p(S_N)$') ax.set_title('CLT: Exponential by Gaussian (N=%d)' % N) plt.legend(loc='best') # - # ### Questions 04 # # + Start increase $N$ and observe the convergence. # + Go back to the Bernoulli distribution. What are its mean and variance? What is the mean and the variance of the Gaussian approximating the sum of idenpdent Bernoulli distributions? Verify this result numerically (copy paste the code above and make the appropriate changes). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analysis # 1. The peak of the temperature trend seems to be around 25 degrees latitude. # 2. There seems to be a high degree of humidity at a majority of points near the equator. # 3. Wind speeds seem to be the highest between -25 and 75 degrees latitude. # + #Import dependencies import pandas as pd import numpy as np import random import requests import json import csv import openweathermapy.core as owm import matplotlib.pyplot as plt from datetime import datetime from citipy import citipy #Importing personal API key from own config file from config import api_key # - # # Generating a list of cities #Making an empty list to store randomly generated longitude and latitude data longi = [] latit = [] for i in range(1500): longi.append(float(random.uniform(-180.00,180.00))) latit.append(float(random.uniform(-90.00,90.00))) #Creating empty lists to store the cities and their names cities = [] city_names = [] #Finding the nearest city given the random coordinates for i in range(len(longi)): cities.append(citipy.nearest_city(latit[i],longi[i])) #Finding the city info and appending to empty list for city in cities: city_names.append(city.city_name) #Inputting the city name, lat, and lon into a dataframe city_df = pd.DataFrame({"City": city_names, "Latitude": latit, "Longitude": longi}) #Dropping any duplicate cities unique_city_df = city_df.drop_duplicates(subset = ["City"]) unique_city_df.head() #print(len(unique_city_df)) print(len(unique_city_df)) # # Grabbing city weather data #Grabbing data using API url = "http://api.openweathermap.org/data/2.5/weather?" 
units = "imperial" target_cities = city_df["City"] country = [] date = [] max_temp = [] humidity = [] cloudiness = [] wind_speed = [] for target_city in target_cities: query_url = f"{url}appid={api_key}&units={units}&q=" response_url = query_url + target_city print(f"Processing record for {target_city}: {response_url}") try: weather_data = requests.get(query_url + target_city).json() country_data = weather_data["sys"]["country"] date_data = weather_data["dt"] temperature = weather_data["main"]["temp_max"] humidity_data = weather_data["main"]["humidity"] cloudiness_data = weather_data["clouds"]["all"] wind_data = weather_data["wind"]["speed"] except KeyError: print("Pull was unsuccessful") country.append(country_data) date.append(date_data) max_temp.append(temperature) humidity.append(humidity_data) cloudiness.append(cloudiness_data) wind_speed.append(wind_data) weather_dict = {"City": target_cities, "Cloudiness": cloudiness, "Country": country, "Date": date, "Humidity": humidity, "Lat": latit, "Lon": longi, "Max Temp": max_temp, "Wind Speed": wind_speed} #Putting the weather dictionary into a dataframe weather_df = pd.DataFrame(weather_dict) weather_df = weather_df.drop_duplicates(subset="City") weather_df = weather_df.dropna(how="any",inplace=False) weather_df.head() # # Temperature (F) vs. Latitude plt.scatter(weather_df["Lat"],weather_df["Max Temp"],c="Blue", alpha=0.75) cur_date = datetime.now() cur_date = cur_date.strftime("%Y-%m-%d") plt.xlim(-95,95) plt.ylim(0,100) plt.title(f"City Latitude vs. Max Temperature {cur_date}") plt.xlabel("Latitude") plt.ylabel("Max Temperature (F)") plt.grid(True) plt.savefig("./Images/temp_vs_lat.jpg") plt.show() # # Humidity (%) vs. Latitude plt.scatter(weather_df["Lat"],weather_df["Humidity"],c="Blue", alpha=0.75) cur_date = datetime.now() cur_date = cur_date.strftime("%Y-%m-%d") plt.xlim(-95,95) plt.ylim(0,110) plt.title(f"City Latitude vs. Humidity {cur_date}") plt.xlabel("Latitude") plt.ylabel("Humidity (%)") plt.grid(True) plt.savefig("./Images/humidity_vs_lat.jpg") plt.show() # # Cloudiness (%) vs. Latitude plt.scatter(weather_df["Lat"],weather_df["Cloudiness"],c="Blue", alpha=0.75) cur_date = datetime.now() cur_date = cur_date.strftime("%Y-%m-%d") plt.xlim(-95,95) plt.ylim(-10,120) plt.title(f"City Latitude vs. Cloudiness {cur_date}") plt.xlabel("Latitude") plt.ylabel("Cloudiness (%)") plt.grid(True) plt.savefig("./Images/cloudiness_vs_lat.jpg") plt.show() # # Wind Speed (mph) vs. Latitude plt.scatter(weather_df["Lat"],weather_df["Wind Speed"],c="Blue", alpha=0.75) cur_date = datetime.now() cur_date = cur_date.strftime("%Y-%m-%d") plt.xlim(-95,95) plt.ylim(-5,50) plt.title(f"City Latitude vs. Wind Speed {cur_date}") plt.xlabel("Latitude") plt.ylabel("Wind Speed (mph)") plt.grid(True) plt.savefig("./Images/windspeed_vs_lat.jpg") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # > This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. # # # 15.7. Analyzing a nonlinear differential system: Lotka-Volterra (predator-prey) equations # Here, we conduct a brief analytical study of a famous nonlinear differential system: the Lotka-Volterra equations, also known as predator-prey equations. 
This simple model describes the evolution of two interacting populations (e.g. sharks and sardines), where the predators eat the preys. This example illustrates how we can use SymPy to obtain exact expressions and results for fixed points and their stability. from sympy import * init_printing() var('x y') var('a b c d', positive=True) # The variables x and y represent the populations of the preys and predators, respectively. The parameters a, b, c and d are positive parameters (described more precisely in "How it works..."). The equations are: # # $$\begin{align} # \frac{dx}{dt} &= f(x) = x(a-by)\\ # \frac{dy}{dt} &= g(x) = -y(c-dx) # \end{align}$$ f = x * (a - b*y) g = -y * (c - d*x) # Let's find the fixed points of the system (solving f(x,y) = g(x,y) = 0). solve([f, g], (x, y)) (x0, y0), (x1, y1) = _ # Let's write the 2D vector with the two equations. M = Matrix((f, g)); M # Now we can compute the Jacobian of the system, as a function of (x, y). J = M.jacobian((x, y)); J # Let's study the stability of the two fixed points by looking at the eigenvalues of the Jacobian at these points. M0 = J.subs(x, x0).subs(y, y0); M0 M0.eigenvals() # The parameters a and c are strictly positive, so the eigenvalues are real and of opposite signs, and this fixed point is a saddle point. Since this point is unstable, the extinction of both populations is unlikely in this model. M1 = J.subs(x, x1).subs(y, y1); M1 M1.eigenvals() # The eigenvalues are purely imaginary so this fixed point is not hyperbolic, and we cannot draw conclusions about the qualitative behavior of the system around this fixed point from this linear analysis. However, one can show with other methods that oscillations occur around this point. # > You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). # # > [IPython Cookbook](http://ipython-books.github.io/), by [](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt import math import os # + regression_example = '../Data/RegressionExamples/BikeTrain/train.csv' binary_class_example = '../Data/ClassExamples/DiabetesData/pima-indians-diabetes.data.txt' multi_class_example = '../Data/ClassExamples/Iris/iris.data.csv' data_type_example = '../Data/ClassExamples/TwitterAWS/aml_training_dataset.csv' # - # ## Labeled Data # Labeled Data - contains the features and target attribute with correct answer # ## Training Set # Part of labeled data that is used for training the model. 60-70% of the labeled data is used for training # ## Evaluation Set # 30-40% of the labeled data is reserved for checking prediction quality with known correct answer # ## Training Example # Training Example or a row. Contains one complete observation of features and the correct answer # ## Features # Known by different names: Columns, Features, variables, attributes. These are values that define a particular example. Values for some of the features could be missing or invalid in real-world datasets. So, some cleaning may have to be done before feeding to Machine Learning # # 1. Input Feature - Variables that are provided as input to the model # 2. Target Attribute - Variable that model needs to predict # # #
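# As a small illustration of the 60-70% / 30-40% split described above (a generic sketch with made-up column names, not part of the original notebook), scikit-learn's ``train_test_split`` can carve a labeled dataset into training and evaluation sets:

# +
from sklearn.model_selection import train_test_split

toy = pd.DataFrame({'feature_1': range(10),
                    'feature_2': range(10, 20),
                    'target': [0, 1] * 5})          # hypothetical labeled data
X_toy = toy[['feature_1', 'feature_2']]             # input features
y_toy = toy['target']                               # target attribute
X_train, X_eval, y_train, y_eval = train_test_split(X_toy, y_toy,
                                                    train_size=0.7, random_state=42)
print(len(X_train), len(X_eval))                    # 7 training rows, 3 evaluation rows
# -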

# ## AWS ML Data Types
# Data from the training set needs to be mapped to one of these data types:
#
# 1. **Binary.** Can contain only two states, 1/0.
#     * Positive values: 1, y, yes, t, true
#     * Negative values: 0, n, no, f, false
#     * Values are case-insensitive and AWS ML converts the above values to 1 or 0
# 2. **Categorical.** Qualitative attribute that describes something about an observation. Examples:
#     * Day of the week: Sunday, Monday, Tuesday, ...
#     * Size: XL, L, M, S
#     * Month: 1, 2, 3, 4, ..., 12
#     * Season: 1, 2, 3, 4
# 3. **Numeric.** Measurements and counts are represented as numeric types.
#     * Discrete: 20 cars, 30 trees, 2 ships
#     * Continuous: 98.7 degrees F, 32.5 KMPH, 2.6 liters
# 4. **Text.** A string of words. AWS ML automatically tokenizes at white-space boundaries. Examples:
#     * product descriptions, comments, reviews
#     * Tokenized: 'Wildlife Photography for beginners' => {'Wildlife', 'Photography', 'for', 'beginners'}
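# As a tiny illustration of the binary mapping above (my own sketch in pandas, not AWS ML code), the case-insensitive conversion could be mimicked like this:

# +
binary_map = {'1': 1, 'y': 1, 'yes': 1, 't': 1, 'true': 1,
              '0': 0, 'n': 0, 'no': 0, 'f': 0, 'false': 0}
raw_values = pd.Series(['Yes', 'n', 'TRUE', '0'])        # mixed-case raw inputs
print(raw_values.str.lower().map(binary_map).tolist())   # -> [1, 0, 1, 0]
# -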

# # Algorithms
#
# ## Linear Regression
# Predict a numeric value as output based on given features.
#
# Examples:
# * What is the market value of a car?
# * What is the current value of a house?
# * For a product, how many units can we sell?
#
# ### Concrete Example
# Kaggle Bike Rentals - Predict the number of bike rentals every hour. The total should include both casual rentals and registered-user rentals.
#
# Input Columns/Features = ['datetime', 'season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed']
#
# Output Column/Target Attribute = 'count'
#
# count = casual + registered
# Option 1: Predict casual and registered counts separately and then sum them up
# Option 2: Predict count directly

# read the bike train csv file
df = pd.read_csv(regression_example)
df.head()
df.corr()
df['count'].describe()
df.season.value_counts()
df.holiday.value_counts()
df.workingday.value_counts()
df.weather.value_counts()
df.temp.describe()
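# As a rough sketch of Option 2 (my own illustration with scikit-learn, not part of the original notebook, and assuming the usual Kaggle bike-train columns loaded above), a plain regression baseline could look like this:

# +
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

bike_features = ['season', 'holiday', 'workingday', 'weather',
                 'temp', 'atemp', 'humidity', 'windspeed']
X_bike = df[bike_features]
y_bike = df['count']
Xb_train, Xb_test, yb_train, yb_test = train_test_split(X_bike, y_bike,
                                                        train_size=0.7, random_state=42)
baseline = LinearRegression().fit(Xb_train, yb_train)
print(baseline.score(Xb_test, yb_test))   # R^2 of the simple baseline
# -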

# ## Binary Classification
# Predict a binary class as output based on given features.
#
# Examples:
# * Do we need to follow up on a customer review?
# * Is this transaction fraudulent or a valid one?
# * Are there signs of onset of a medical condition or disease?
# * Is this considered junk food or not?
#
# ### Concrete Example
# pima-indians-diabetes - Predict if a given patient has a risk of getting diabetes
#
# https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes
#
# Input Columns/Features = ['preg_count', 'glucose_concentration', 'diastolic_bp', 'triceps_skin_fold_thickness', 'two_hr_serum_insulin', 'bmi', 'diabetes_pedi', 'age']
#
# Output Column/Target Attribute = 'diabetes_class'. 1 = diabetic, 0 = normal

df = pd.read_csv(binary_class_example)
df.head()
df.columns
df.corr()
df.age.value_counts().head()
df.diabetes_class.value_counts()
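# A short sketch of a baseline binary classifier for this dataset (my own illustration with scikit-learn, assuming the column names listed above are present in the file):

# +
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

diab_features = ['preg_count', 'glucose_concentration', 'diastolic_bp',
                 'triceps_skin_fold_thickness', 'two_hr_serum_insulin',
                 'bmi', 'diabetes_pedi', 'age']
X_diab = df[diab_features]
y_diab = df['diabetes_class']
Xd_train, Xd_test, yd_train, yd_test = train_test_split(X_diab, y_diab,
                                                        train_size=0.7, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(Xd_train, yd_train)
print(clf.score(Xd_test, yd_test))   # accuracy of the baseline classifier
# -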

# ## Multiclass Classification
# Predict a class as output based on given features.
#
# Examples:
# 1. How healthy is the food based on given ingredients? Classes: Healthy, Moderate, Occasional, Avoid.
# 2. Identify the type of mushroom based on features.
# 3. What type of advertisement can be placed for this search?
#
# ### Concrete Example
# Iris Classification - Predict the type of Iris plant based on flower measurements
    # https://archive.ics.uci.edu/ml/datasets/Iris # # Input Columns / Features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'] # # Output Column / Target Attribute = 'class'. # # Class: Iris-setosa, Iris-virginica, Iris-versicolor df = pd.read_csv(multi_class_example) np.random.seed(5) # print 10 random rows df.iloc[np.random.randint(0, df.shape[0], 10)] df.columns df['class'].value_counts() df.sepal_length.describe() # # Data Types df = pd.read_csv(data_type_example) df.columns df[['description', 'favourites_count', 'favorited', 'text', 'screen_name', 'trainingLabel']].head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Bezier Curve # # # This example showcases the `~.patches.PathPatch` object to create a Bezier # polycurve path patch. # # + import matplotlib.path as mpath import matplotlib.patches as mpatches import matplotlib.pyplot as plt Path = mpath.Path fig, ax = plt.subplots() pp1 = mpatches.PathPatch( Path([(0, 0), (1, 0), (1, 1), (0, 0)], [Path.MOVETO, Path.CURVE3, Path.CURVE3, Path.CLOSEPOLY]), fc="none", transform=ax.transData) ax.add_patch(pp1) ax.plot([0.75], [0.25], "ro") ax.set_title('The red point should be on the path') plt.show() # - # ------------ # # References # """""""""" # # The use of the following functions, methods, classes and modules is shown # in this example: # # import matplotlib matplotlib.path matplotlib.path.Path matplotlib.patches matplotlib.patches.PathPatch matplotlib.axes.Axes.add_patch # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Set default matplotlib figure size import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [10, 5] # current version of seaborn generates a bunch of warnings that we'll ignore import warnings warnings.filterwarnings("ignore") import statsmodels.api as sm import seaborn as sns from pandas import DataFrame, read_excel # - # # Models for the housing market # # This dataset contains (506 cases) information collected by the U.S Census Service concerning housing in the area of Boston Mass. # # Toolkit: # * **Seaborn, statistical data visualization [docs](https://seaborn.pydata.org/api.html)** # * **Statsmodels, statistical models in python [docs](https://www.statsmodels.org/stable/index.html)** # # ``` # CRIM - per capita crime rate by town # ZN - proportion of residential land zoned for lots over 25,000 sq.ft. # INDUS - proportion of non-retail business acres per town. # CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise) # NOX - nitric oxides concentration (parts per 10 million) # RM - average number of rooms per dwelling # AGE - proportion of owner-occupied units built prior to 1940 # DIS - weighted distances to five Boston employment centres # RAD - index of accessibility to radial highways # TAX - full-value property-tax rate per $10,000 # PTRATIO - pupil-teacher ratio by town # B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town # LSTAT - perc. 
lower status of the population # MEDV - Median value of owner-occupied homes in $1000's # ``` df = read_excel('../data/boston-dataset.xls') f1 = sns.heatmap(df.corr(), annot=True) f2 = sns.regplot(y='RM', x='MEDV', data=df) # + x = df['RM'] y = df['MEDV'] x = sm.add_constant(x) # Note the difference in argument order model = sm.OLS(y, x).fit() predictions = model.predict(x) # make the predictions by the model # Print out the statistics model.summary() # + x = df[['RM', 'TAX', 'LSTAT']] y = df['MEDV'] x = sm.add_constant(x) # Note the difference in argument order model = sm.OLS(y, x).fit() predictions = model.predict(x) # make the predictions by the model # Print out the statistics model.summary() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: navigation # language: python # name: navigation # --- # # Navigation # # --- # # # ### 1. Start the Environment # # Import some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/). # + import random import torch import numpy as np import matplotlib.pyplot as plt from collections import deque from unityagents import UnityEnvironment from src.dqn_agent import Agent from utils.util import save_checkpoint # %matplotlib inline # %load_ext autoreload # %autoreload 2 # - # Load Banana Environment from Unity ML-Agents # **Note:** On MacOS you might get an `OSError: handle is closed`. In this case you should restart the Kernel of the notebook (go to Kernel tab and click on Restart). env = UnityEnvironment(file_name="env/Banana.app") # Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. brain_name = env.brain_names[0] brain = env.brains[brain_name] # ### 2. Train Agent with (Vanilla) DQN # A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas. # # The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. agent = Agent(state_size=37, action_size=4, seed=0) # + def dqn(n_episodes=1800, max_t=300, eps_start=1.0, eps_end=0.01, eps_decay=0.995): """Deep Q-Learning. Params ====== n_episodes (int): maximum number of training episodes max_t (int): maximum number of timesteps per episode eps_start (float): starting value of epsilon, for epsilon-greedy action selection eps_end (float): minimum value of epsilon eps_decay (float): multiplicative factor (per episode) for decreasing epsilon """ scores = [] # list containing scores from each episode scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon delta = 1e-5 MAX_AVG_REWARD = 0. 
for i_episode in range(1, n_episodes+1): env_info = env.reset(train_mode=True)[brain_name] # reset the environment state = env_info.vector_observations[0] # get the current state score = 0 # initialize the score for t in range(max_t): # learn for max_t steps action = agent.act(state, eps) # choose an action by following a behavioral policy env_info = env.step(action)[brain_name] # send the action to the environment next_state = env_info.vector_observations[0] # observe the next state and reward reward = env_info.rewards[0] done = env_info.local_done[0] # see if episode has finished agent.step(state, action, reward, next_state, done) # learn from experience score += reward # update the score state = next_state # roll over the state to next time step if done: # exit loop if episode finished break scores_window.append(score) # save most recent score scores.append(score) # save most recent score eps = max(eps_end, eps_decay*eps) # decrease epsilon avg_reward = np.mean(scores_window) # Maybe log some statistics print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, avg_reward), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, avg_reward)) # Note: Taks is solved if agent receives at least an average score of 13 over the last 100 episodes if avg_reward >= 13 and avg_reward > MAX_AVG_REWARD and i_episode >= 100: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, avg_reward)) save_checkpoint({ 'episode': i_episode, 'state_dict': agent.qnetwork_local.state_dict(), 'avg_reward': avg_reward }, 'checkpoints/checkpoint.pth') if abs(MAX_AVG_REWARD - avg_reward) < delta: break else: MAX_AVG_REWARD = avg_reward return scores # train the agent scores = dqn() # plot the scores fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() env.close() # - # ### 3. 
Watch a Smart Agent # + # load load weights of qnetwork of smart agent load_path = 'checkpoints/checkpoint.pth' cuda = torch.cuda.is_available() if cuda: checkpoint = torch.load(load_path) else: # Load GPU model on CPU checkpoint = torch.load(load_path, map_location=lambda storage, loc: storage) agent.qnetwork_local.load_state_dict(checkpoint['state_dict']) num_episodes_test = 1 max_t = 300 for i in range(num_episodes_test): env_info = env.reset(train_mode=False)[brain_name] state = env_info.vector_observations[0] for t in range(max_t): action = agent.act(state) env_info = env.step(action)[brain_name] state = env_info.vector_observations[0] rewards = env_info.rewards[0] done = env_info.local_done[0] print("\rTimestep: {}\tIs it done: {}\tReward received: {}".format(t, done, rewards), end="") if done: break env.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys sys.path.append("src") import pandas as pd import matplotlib.pyplot as plt import torch from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader import torchvision as tv import numpy as np from PIL import Image from IPython.display import display from tqdm.notebook import tqdm # - import aegan.aegan as aegan # + noise_fn = lambda x: torch.randn((x, 16), device='cpu') gan = aegan.AEGAN(16, noise_fn, None) gan.encoder.load_state_dict(torch.load("remote_results/checkpoints/epoch_02249/encoder.pt", map_location="cpu")) gan.encoder.train(False) encoder = gan.encoder # + transform = tv.transforms.Compose([ # tv.transforms.Resize((96, 96)), # tv.transforms.RandomAffine(0, translate=(5 / 96, 5 / 96), fillcolor=(255, 255, 255)), # tv.transforms.ColorJitter(hue=0.5), # tv.transforms.RandomHorizontalFlip(p=0.5), tv.transforms.ToTensor(), tv.transforms.Normalize((0.5, 0.5, 0.5,), (0.5, 0.5, 0.5,)) ]) folder = ImageFolder("data", transform) loader = DataLoader(folder, batch_size=1, shuffle=False) reals = [im for im, _ in loader] # + def revert(im): im = im[0] im -= im.min() im /= im.max() im = im.numpy().transpose((1, 2, 0)) im = np.array(im * 255, dtype=np.uint8) return im display(Image.fromarray(revert(reals[9]))) display(Image.open('data/sprites/10007.png.96x96.png')) # + display(reals[9].shape) torch.cat(reals, 0).shape # - with torch.no_grad(): z_reals = torch.cat([gan.encoder(reals[i]) for i in tqdm(range(len(reals)))]).numpy() z_reals.shape # + fig, axs = plt.subplots(4, 4, figsize=(18, 18)) for i in range(4): for j in range(4): ax = axs[i, j] ii = i + 4 jj = j + 4 ax.scatter(z_reals[:, ii], z_reals[:, jj]) ax.set_title(f"{ii} vs {jj}") ax.set_xlim(-1.5, 1.5) ax.set_ylim(-1.5, 1.5) plt.show() # - z_fakes = noise_fn(500) # + fig, axs = plt.subplots(4, 4, figsize=(18, 18)) for i in range(4): for j in range(4): ax = axs[i, j] ii = i + 4 jj = j + 4 ax.scatter(z_fakes[:, ii], z_fakes[:, jj]) ax.set_title(f"{ii} vs {jj}") ax.set_xlim(-1.5, 1.5) ax.set_ylim(-1.5, 1.5) plt.show() # + noise_fn_ = lambda x: torch.randn((x, 16), device='cpu') / 2 z_fakes_ = noise_fn_(500) fig, axs = plt.subplots(4, 4, figsize=(18, 18)) for i in range(4): for j in range(4): ax = axs[i, j] ii = i + 4 jj = j + 4 ax.scatter(z_fakes_[:, ii], z_fakes_[:, jj]) ax.set_title(f"{ii} vs {jj}") ax.set_xlim(-1.5, 1.5) ax.set_ylim(-1.5, 1.5) plt.show() # - z_reals.mean(0) z_reals.std(0) 
gan.generator.load_state_dict(torch.load("remote_results/checkpoints/epoch_02249/generator.pt", map_location="cpu")) gan.generator.train(False) gen = gan.generator # + with torch.no_grad(): s = torch.from_numpy(z_reals[0].reshape(1, 16)) fake = gen(s) display(Image.fromarray(revert(fake))) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ML # language: python # name: ml # --- from smtplib import SMTP_SSL from email.message import EmailMessage import pandas as pd import password destinatari = pd.read_excel("AEAround the World - 22 Dicembre ore 21_30(1-97).xlsx")["Posta elettronica"].unique() messaggio = """Gentilissima/o, Il link all'evento di stasera alle 21.30 è https://polimi-it.zoom.us/j/85442361591 Buona serata, Il team di AEA Polimi www.aeapolimi.it """ def invia_mail(s,f): s = SMTP_SSL(host='smtp.office365.com', port=587) s.login("", password.) msg = EmailMessage() msg.set_content(messaggio) msg['Subject'] = 'Link zoom evento AEAround The World' msg['From'] = "" msg['Bcc'] = ", ".join(destinatari[s:f]) s.send_message(msg) s.quit() #for s,f in [[0,100], [100,200], [200,len(destinatari)]]: #invia_mail(s,f) # + #invia_mail(0,97) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Initially I forked from this [kernel](https://www.kaggle.com/khursani8/fast-ai-starter-resnet34), changed architecture to ResNet 50, added augmentation and did some initial tuning of parameters like learning rate. # In later versions I plugged in OptimizedRounder class and Ben's processing functions. # # Libraries import # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" from fastai import * from fastai.vision import * import pandas as pd import matplotlib.pyplot as plt import numpy as np import os import scipy as sp from functools import partial from sklearn import metrics from collections import Counter from fastai.callbacks import * import PIL import cv2 # + # Set seed for all def seed_everything(seed=1358): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True seed_everything() # - # # Ben's Preprocessing Functions # These functions are taken from famous kernel https://www.kaggle.com/ratthachat/aptos-updatedv14-preprocessing-ben-s-cropping. Below I am showing how they can be applied for fast.ai pipeline. 
# + def crop_image1(img,tol=7): # img is image data # tol is tolerance mask = img>tol return img[np.ix_(mask.any(1),mask.any(0))] def crop_image_from_gray(img,tol=7): if img.ndim ==2: mask = img>tol return img[np.ix_(mask.any(1),mask.any(0))] elif img.ndim==3: gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) mask = gray_img>tol check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0] if (check_shape == 0): # image is too dark so that we crop out everything, return img # return original image else: img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))] img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))] img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))] # print(img1.shape,img2.shape,img3.shape) img = np.stack([img1,img2,img3],axis=-1) # print(img.shape) return img def load_ben_color(path, sigmaX=10): image = cv2.imread(path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = crop_image_from_gray(image) image = cv2.resize(image, (IMG_SIZE, IMG_SIZE)) image=cv2.addWeighted ( image,4, cv2.GaussianBlur( image , (0,0) , sigmaX) ,-4 ,128) return image # - # # Data PATH = Path('../input/aptos2019-blindness-detection') df = pd.read_csv(PATH/'train.csv') df.head() # !ls ../input/resnet50/ # # copy pretrained weights for resnet50 to the folder fastai will search by default Path('/tmp/.cache/torch/checkpoints/').mkdir(exist_ok=True, parents=True) # !cp '../input/resnet50/resnet50.pth' '/tmp/.cache/torch/checkpoints/resnet50-19c8e357.pth' df.diagnosis.value_counts() # So our train set is definitely imbalanced, majority of images are normal (without illness). # # Model # + IMG_SIZE = 512 def _load_format(path, convert_mode, after_open)->Image: image = cv2.imread(path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = crop_image_from_gray(image) image = cv2.resize(image, (IMG_SIZE, IMG_SIZE)) image=cv2.addWeighted ( image,4, cv2.GaussianBlur( image , (0,0), 10) ,-4 ,128) return Image(pil2tensor(image, np.float32).div_(255)) #return fastai Image format vision.data.open_image = _load_format src = ( ImageList.from_df(df,PATH,folder='train_images',suffix='.png') .split_by_rand_pct(0.2, seed=42) .label_from_df(cols='diagnosis',label_cls=FloatList) ) src # - tfms = get_transforms(do_flip=True, flip_vert=True, max_rotate=0.10, max_zoom=1.3, max_warp=0.0, max_lighting=0.2) # Let's train with small image size first to get some rough approximation data = ( src.transform(tfms,size=128) .databunch() .normalize(imagenet_stats) ) data # + # Definition of Quadratic Kappa from sklearn.metrics import cohen_kappa_score def quadratic_kappa(y_hat, y): return torch.tensor(cohen_kappa_score(torch.round(y_hat), y, weights='quadratic'),device='cuda:0') learn = cnn_learner(data, base_arch=models.resnet50 ,metrics=[quadratic_kappa],model_dir='/kaggle',pretrained=True) # - # Find a good learning rate learn.lr_find() learn.recorder.plot() lr = 1e-2 learn.fit_one_cycle(3, lr) # Now switching to 224x224 size which is usually used for ResNet 50: # progressive resizing learn.data = data = ( src.transform(tfms,size=224) .databunch() .normalize(imagenet_stats) ) learn.lr_find() learn.recorder.plot() lr = 1e-2 learn.fit_one_cycle(5, lr) # + learn.unfreeze() learn.lr_find() learn.recorder.plot() # - learn.fit_one_cycle(5, slice(1e-6,1e-3)) # # Metric Optimization # This part is taken from @abhishek great kernel: https://www.kaggle.com/abhishek/optimizer-for-quadratic-weighted-kappa valid_preds = learn.get_preds(ds_type=DatasetType.Valid) class OptimizedRounder(object): def __init__(self): self.coef_ = 0 def _kappa_loss(self, coef, 
X, y): X_p = np.copy(X) for i, pred in enumerate(X_p): if pred < coef[0]: X_p[i] = 0 elif pred >= coef[0] and pred < coef[1]: X_p[i] = 1 elif pred >= coef[1] and pred < coef[2]: X_p[i] = 2 elif pred >= coef[2] and pred < coef[3]: X_p[i] = 3 else: X_p[i] = 4 ll = metrics.cohen_kappa_score(y, X_p, weights='quadratic') return -ll def fit(self, X, y): loss_partial = partial(self._kappa_loss, X=X, y=y) initial_coef = [0.5, 1.5, 2.5, 3.5] self.coef_ = sp.optimize.minimize(loss_partial, initial_coef, method='nelder-mead') print(-loss_partial(self.coef_['x'])) def predict(self, X, coef): X_p = np.copy(X) for i, pred in enumerate(X_p): if pred < coef[0]: X_p[i] = 0 elif pred >= coef[0] and pred < coef[1]: X_p[i] = 1 elif pred >= coef[1] and pred < coef[2]: X_p[i] = 2 elif pred >= coef[2] and pred < coef[3]: X_p[i] = 3 else: X_p[i] = 4 return X_p def coefficients(self): return self.coef_['x'] optR = OptimizedRounder() optR.fit(valid_preds[0],valid_preds[1]) coefficients = optR.coefficients() print(coefficients) # # # # Predictions # test_df = pd.read_csv(PATH/'test.csv') # test_df.head() sample_df = pd.read_csv(PATH/'sample_submission.csv') sample_df.head() learn.data.add_test(ImageList.from_df(sample_df,PATH,folder='test_images',suffix='.png')) preds,y = learn.get_preds(DatasetType.Test) test_predictions = optR.predict(preds, coefficients) sample_df.diagnosis = test_predictions.astype(int) sample_df.head() sample_df.to_csv('submission.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib from matplotlib import pyplot as plt import numpy as np # ## Задание 1 fig, ax = plt.subplots() rect = matplotlib.patches.Rectangle((-1, -1), 2, 2) ax.add_patch(rect) plt.axis([-2, 2, -1.5, 1.5]) plt.show() # ## Задание 2 x = np.arange(0, 10, 0.2) y = np.sin(x) plt.scatter(x, y) plt.xlabel('x') plt.ylabel('y') plt.show() # ## Задание 3 # + np.random.seed(4) lam = 1.5 pois_dist = np.random.poisson(1.5, 10001) plt.hist(pois_dist, max(pois_dist), density=1, facecolor='g', alpha=0.75) x = np.arange(min(pois_dist), max(pois_dist)) y = [((lam**i * np.exp(-lam)) / np.math.factorial(i)) for i in x] plt.plot(x, y, '--') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #

# # Sentence splitter and Tokenization

    # ## Sentence splitter: string = 'Every one of us is, in the cosmic perspective, precious. If a human disagrees with you, let him live. In a hundred billion galaxies, you will not find another.' from nltk.tokenize import sent_tokenize _sent = sent_tokenize(string) print(_sent) import nltk.tokenize.punkt #This tokenizer divides a text into a list of sentences, #by using an unsupervised algorithm to build a model for abbreviation #words, collocations, and words that start sentences. tokenizer = nltk.tokenize.punkt.PunktSentenceTokenizer() tokenizer.tokenize(string) # ## Tokenization: # ### Word Tokenizing: a = 'Hi NLTK students ! level s10' # simplest tokenizer: uses white space as delimiter. print(a.split()) from nltk.tokenize import word_tokenize word_tokenize(a) # Another method using TreebankWorldTokenizer from nltk.tokenize import TreebankWordTokenizer tokenizer = TreebankWordTokenizer() print(tokenizer.tokenize(a)) # ### Removing Noise # + import re # Example of removing numbers: def remove_numbers(text): return re.sub(r'\d+', "", text) # - txt = 'This a sample sentence in English, \n with whitespaces and many numbers 123456!' print('Removed numbers:', remove_numbers(txt)) # + # example of removing punctuation from text import string def remove_punctuation(text): words = word_tokenize(text) pun_removed = [w for w in words if w.lower() not in string.punctuation] return " ".join(pun_removed) # - b = 'This a great course of NLP using Python and NLTK!!! for this year 2017, isnt.?' print(remove_punctuation(b)) from nltk.tokenize import regexp_tokenize, wordpunct_tokenize, blankline_tokenize # # + : one or more times | \w : character or digit regexp_tokenize(b, pattern='\w+') regexp_tokenize(b, pattern='\d+') c = 'The capital is raising up to $100000' regexp_tokenize(c, pattern='\w+|\$') # # + = one or more times regexp_tokenize(c, pattern='\w+|\$[\d]') regexp_tokenize(c, pattern='\w+|\$[\d\.]+|\S+') wordpunct_tokenize(b) blankline_tokenize(b) # ## References: # # https://docs.python.org/2/library/tokenize.html # # http://www.nltk.org/_modules/nltk/tokenize.html # # http://www.nltk.org/_modules/nltk/tokenize/punkt.html # # http://www.nltk.org/_modules/nltk/tokenize/treebank.html # # http://www.nltk.org/_modules/nltk/tokenize/regexp.html # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Instagram Follow Automation to Particular Accounts from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait import time import random # ## Step 1 - Chrome Driver driver = webdriver.Chrome('D:\chromedriver_win32\chromedriver.exe') driver.get('https://www.instagram.com') # ## Step 2 - Log in # + username = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//input[@name='username']"))).send_keys("MY_USERNAME") password = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='password']"))).send_keys("") try : submit = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[@type='submit']"))).click() print('berhasil login') except : print('gagal login') # - # ## Step 3 - Handle Alerts notNow_alert = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, 
"//button[contains(text(),'Not Now')]"))) notNow_alert.click() # ## Step 4 - Search Account time.sleep(1) searchBox = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//input[@placeholder='Search']"))) searchBox.send_keys('wahh') time.sleep(1) searchBox.send_keys(Keys.ENTER) time.sleep(1) searchBox.send_keys(Keys.ENTER) # ## Step 5 - Button Follow Click time.sleep(4) followBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(),'Follow')]"))) followBtn.click() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="tCf8JnDCuVBR" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73} outputId="22526756-3979-4428-b397-649aaa1b0cf9" import pandas as pd from google.colab import files data_to_load = files.upload() # + id="T1uOZPtoucBr" # importing advertising.csv import io advertising_multi = pd.read_csv(io.BytesIO(data_to_load['advertising.csv'])) # + [markdown] id="1mR6A3-AuhPy" # **Step 1: CLEANING DATA AND GETTING IMPORTANT INFO FROM OUR CSV FILE** # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="6uRiSfvVUb2U" outputId="f0779b66-b174-4f45-ee49-6d2a8f92f6ae" # Looking at the first five rows advertising_multi.head() # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="NV5L1B3_Uljj" outputId="15e97427-518c-487d-e0a7-65e92762cdc8" # Looking at the last five rows advertising_multi.tail() # + colab={"base_uri": "https://localhost:8080/"} id="mAQkgR1EU_-q" outputId="da35ba08-2c4e-4d54-d8e2-84aaf987a031" # What type of values are stored in the columns? advertising_multi.info() # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Z19Pl9HmVJPy" outputId="8f7379f2-6673-4090-e466-1854a350130f" # Let's look at some statistical information about our dataframe. advertising_multi.describe() # + [markdown] id="zisPS5fHVtFC" # **Step 2: Visualising Data** # + id="3PznXQkXVwtJ" import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 743} id="j_1im3qBWJHK" outputId="d2497024-31e6-4090-cac2-13ed8d66c09a" # Let's plot a pair plot of all variables in our dataframe sns.pairplot(advertising_multi) # + colab={"base_uri": "https://localhost:8080/", "height": 567} id="rtQuSvOHbaNh" outputId="499be2c4-b8e2-499d-a828-081c14ab33df" # Visualize the relationship between the features and the response using scatterplots sns.pairplot(advertising_multi, x_vars=['TV', 'Radio', 'Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='scatter') # + [markdown] id="4t3sUnlpdGc4" # **Step 3: Splitting the Data for Training and Testing** # + id="MZU2raI8dL7w" # Putting feature variable to X X = advertising_multi[['TV', 'Radio', 'Newspaper']] # Putting response variable to y y = advertising_multi['Sales'] # + id="gy9TqTBEdtX5" # random state is the seed used by the random number generator, it can be any integer. 
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=100) # + [markdown] id="8C_DorQJeUVg" # **Step 4: Performing Linear Regression** # + colab={"base_uri": "https://localhost:8080/"} id="-Xnbt8J_eYUA" outputId="c21ea06a-d186-4074-818e-3d3177d40503" # import LinearRegression from sklearn from sklearn.linear_model import LinearRegression # Representing LinearRegression as lr lr = LinearRegression() # Fit the model using lr.fit() lr.fit(X_train, y_train) # + [markdown] id="Fnc7-GNtenYo" # **Step 5: Model Evaluation** # + colab={"base_uri": "https://localhost:8080/"} id="Mmr6fsuLerjQ" outputId="d68de467-d255-47c5-d7c3-c1dd8cea3e8a" # Print the intercept and coefficients print(lr.intercept_) # + colab={"base_uri": "https://localhost:8080/", "height": 143} id="CoLNE5_RfA6I" outputId="cdb598bb-2914-4ed1-f919-9b8cc3e1c7c3" # Let's see the coefficient coeff_df = pd.DataFrame(lr.coef_, X_test.columns, columns=['Coefficient']) coeff_df # + [markdown] id="bUbMhKa3htqI" # From the above result we may infern that if TV price increases by 1 unit it will affect sales by 0.045 units # + [markdown] id="iOtcqPAFh8YP" # **Step 6: Predictions** # + id="lC6_LFyHh7D3" # Making predictions using the model y_pred = lr.predict(X_test) # + [markdown] id="eoKsheOxiPiH" # **Step 7: Calculating Error Terms** # + id="Bw7MIj0_ieMf" from sklearn.metrics import mean_squared_error, r2_score mse = mean_squared_error(y_test, y_pred) r_squared = r2_score(y_test, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="FoR1h692izEg" outputId="e92aee7e-ec6a-4600-ca7a-77551196def1" print('Mean Squared Error: ', mse) print('R squared value: ', r_squared) # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="95L9sDq-jJav" outputId="03da4eaa-1994-4949-a736-8c3e2966c67d" plt.scatter(y_test, y_pred) plt.xlabel('Y test') plt.ylabel('Y Prediction') # + [markdown] id="bSKfo8PUi9VP" # **Optional Step: Checking for P-value using STATSMODELS** # + colab={"base_uri": "https://localhost:8080/"} id="LMsHUNGGjC73" outputId="808c73b8-7cee-49f9-badd-4220c679148a" import statsmodels.api as sm X_train_sm = X_train # Unlike SKLearn, statsmodels don't automatically fit a constant. # So you need to use the method sm.add_constant(X) in order to add a constant. 
X_train_sm = sm.add_constant(X_train_sm) # Create a fitter model in one line lr_1 = sm.OLS(y_train, X_train_sm).fit() # print the coefficients lr_1.params # + colab={"base_uri": "https://localhost:8080/"} id="jdfjjj4NkirA" outputId="a3951ee9-3d70-42e0-cb2c-180734f8a446" print(lr_1.summary()) # + [markdown] id="s2A4F7TElLQX" # From the above we can see that Newspaper is insignificant # + colab={"base_uri": "https://localhost:8080/", "height": 341} id="vDKs9kKUlQ9_" outputId="0f96d4c6-2a27-4930-dcc4-b767d9bf5243" plt.figure(figsize= (5,5)) sns.heatmap(advertising_multi.corr(), annot=True) # + [markdown] id="KMGeWdrXl-oP" # **Step 8: Implementing the results and running the model again** # + [markdown] id="I3YhwJ1CmJ63" # From the above we can see that Newspaper is insignificant # + id="PjIPb1Y0mLaG" # Removing Newspaper from our dataset X_train_new = X_train[['TV', 'Radio']] X_test_new = X_test[['TV', 'Radio']] # + colab={"base_uri": "https://localhost:8080/"} id="ZMmsmFekmgCa" outputId="9337f0af-b720-448b-dd48-7bbdddf9913d" # Model building lr.fit(X_train_new, y_train) # + id="zfTADnIimFlm" # Making predictions y_pred_new = lr.predict(X_test_new) # + colab={"base_uri": "https://localhost:8080/", "height": 333} id="KVJp3BHam7xP" outputId="2cf7a9af-d6d0-4a07-a4c0-88ce842b28a6" # Actual vs Predicted c = [i for i in range(1,61,1)] fig = plt.figure() plt.plot(c, y_test, color="blue", linewidth=2.5, linestyle="-") plt.plot(c, y_pred, color="red", linewidth=2.5, linestyle="-") fig.suptitle('Actual and Predicted', fontsize=20) plt.xlabel('Index', fontsize=18) plt.ylabel('Sales', fontsize=16) # + colab={"base_uri": "https://localhost:8080/", "height": 333} id="IoiF3VrXnNle" outputId="ab05bf10-6118-4847-94e2-723f3f8736e5" # Error terms c = [i for i in range(1,61,1)] fig = plt.figure() plt.plot(c, y_test-y_pred, color="blue", linewidth=2.5, linestyle="-") fig.suptitle('Error Terms', fontsize=20) plt.xlabel('Index', fontsize=18) plt.ylabel('ytest-ypred', fontsize=16) # + id="OH26mqcOncet" mse2 = mean_squared_error(y_test, y_pred_new) r_squared2 = r2_score(y_test, y_pred_new) # + colab={"base_uri": "https://localhost:8080/"} id="6iY5FsgJnnLe" outputId="cdb54ef4-7a98-4df8-fbc8-f5bbb8632312" print('Mean Squared Error: ', mse2) print('R squared value: ', r_squared2) # + colab={"base_uri": "https://localhost:8080/"} id="hjJF56kVoi6_" outputId="46a781aa-4d46-426a-e78f-c32eca7f52b0" X_train_final = X_train_new # Unlike SKLearn, statsmodels don't automatically fit a constant. # So you need to use the method sm.add_constant(X) in order to add a constant. X_train_final = sm.add_constant(X_train_final) # Create a fitter model in one line lr_final = sm.OLS(y_train, X_train_final).fit() # print the summary print(lr_final.summary()) # + [markdown] id="GXY5vJkzpob2" # *Haya Ngoja tuone shida ilikuwa nini kwenye Newspaper... 
(Hii ni optional)* # # We're going to use **Simple Linear Regression: Newspaper(X) and Sales(y)** # + colab={"base_uri": "https://localhost:8080/"} id="J4cvhmATp3F9" outputId="04d59654-5ad5-4549-99dd-8e89c706de3f" import numpy as np x_news = advertising_multi['Newspaper'] y_news = advertising_multi['Sales'] # Data Split X_train, X_test, y_train, y_test = train_test_split(x_news, y_news, train_size=0.7, random_state=110) # Required only in the case of simple linear regression X_train = X_train[:, np.newaxis] X_test = X_test[:, np.newaxis] # Linear regression from sklearn lm = LinearRegression() # Fitting the model lm.fit(X_train, y_train) # Making predictions y_pred_np = lm.predict(X_test) # Computing mean square error and r square from sklearn library. mse_np = mean_squared_error(y_test, y_pred_np) r_sqaured_np = r2_score(y_test, y_pred_np) # Printing print('Mean Squared Error: ', mse_np) print('R squared value: ', r_sqaured_np) # + [markdown] id="pvxK17sduAIt" # From above, ukicheck kwenye MSE, unaona kabisa namna kuna values kama 23 zinaaingua but also kwenye RSV, chance ya model kusaidia ni 0.8%... # # Ndo maana 'Newspaper' ilikuwa ni insignificant # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Scientist Nanodegree/ Udacity # # # # # # # # # #
# # Crime in Vancouver (Canada) 2003-2017
    # # # # # # # # # # Business Understanding # # ### Crimes in Vancouver have become a very serious issue, it can be seen that the crimes rate in Vancouver has slightly increased in the past three years, so in this project we are going to investigate and analyses the dataset crims in Vancouver(Canada) between 2003 to 2017 which to find the reasons for committing these crimes, and to try to find a solution. # # # ### __I am going to perform a consideration on crims in Vancouver Dataset from Kaggle__. # # ### In order to analyse the dataset I have come up with some questions which seem are important in this project: # # # ### __Questions__ # #### __Q1/Which year has the highest crime rate in Vancouver?__ # #### __Q2/What is the most committed type of crime between year 2003 to 2017? What is the correlation between the type of crime and Time?__ # #### __Q3/Find the corrleation between the type of crime and Time?__ # # ________________________________________________________________________________________________________________________________ # # ## Columns description: # # | __Column name__ | __Description__ | # |-------------|---------------| # |-------------|---------------| # | TYPE | Type of Crime| # | YEAR | Year when the reported crime activity occurred| # | MONTH | Month when the reported crime activity occurred| # | DAY | Day when the reported crime activity occurred| # | HOUR | Hour when the reported crime activity occurred| # | MINUTE | Minute when the reported crime activity occurred| # | HUNDRED_BLOCK | Generalized location of the report crime activity| # | NEIGHBOURHOOD | Neighbourhood where the reported crime activity occurred| # | X | Coordinate values projected in UTM Zone 10| # | Y | Coordinate values projected in UTM Zone 10| # | Latitude | Coordinate values converted to Latitude| # | Longitude| Coordinate values converted to Longitude| # imported the libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder, OneHotEncoder import seaborn as sns # %matplotlib inline import warnings warnings.filterwarnings('ignore') # ## Data Understanding # # ##### This project will be using dataset from Kaggle and in this section we will understand this data # Load dataset df=pd.read_csv('crime.csv') def Data_understanding(): ' This function can only use to read the dataset and getting information about the dataset where are working with' #print the top 30 rows print('\n\nExplaring the dataframe:\n\n',df.head(31)) # getting the basic information about this data print('\n\nThe dataset describtion is:\n\n',df.describe()) # counting the number of rows and columns number_of_rows= df.shape[0] number_of_columns=df.shape[1] print('\n\nThe number of rows:\n\n',number_of_rows) print('\n\nThe number of columns:\n\n',number_of_columns) # Getting information about the coulmns print('\n\nColumn_Names:\n\n',df.columns) return Data_understanding # To get information about the dataset Data_understanding() # changing the data type to categorical for col in ['TYPE','NEIGHBOURHOOD','HUNDRED_BLOCK']: df[col] = df[col].astype('category') # ## Prepare Data # # In this section, some preparation steps will be performed which can help to answer all the questions before using the dataset: # #### 1- Searching for missing values in each column # #### 2- Fillin all the missing values # #### 3- Drop unwanted columns # #### 3- Using the one-Hot-Encoding method to deal with categorical data such as TYPE, NEIGHBOURHOOD # Checking the missing values 
df.isnull().sum() # ### After checking the missing values which show only exist in column "HOUR" & "MINUTE", "NEIGHBOURHOOD" & "HUNDRED_BLOCK". My understanding is that missing values in the columns which nan-values is that these nan-values mean many crimes are not reported correctly with location and time, Thus it's a better idea to use zero to fill in these missing values. # Finding the number of Missing vlaues missing_values=df.isnull().sum() missing= missing_values.sum() print('The number of missing values is',missing) #Fillin all the missing values with zero df = df.fillna(df.mode().iloc[0]) # Finding the number of Missing vlaues missing_values=df.isnull().sum() missing= missing_values.sum() print('The number of missing values is',missing) # + # drop unwanted columns columns_un_used=['X', 'Y', 'Latitude','MINUTE','HUNDRED_BLOCK', 'Longitude'] df.drop(axis=0,columns=columns_un_used,inplace=True) # - df.head() #Using the one-Hot-Encoding method to deal with categorical data and convert it to numerical varibles label = LabelEncoder() df['NEIGHBOURHOOD']= label.fit_transform(df['NEIGHBOURHOOD'].astype('category')) df # # Answer Questions base on dataset # + ## Q1/Which year has the highest crime rate in Vancouver? # check the most frequent values in the YEAR column df_check=df['YEAR'].value_counts().idxmax() print('The Highest Crime rate was in',df_check) # - # plotting histogram plt.hist(df.YEAR,bins=30) plt.xlim(xmin=2003, xmax = 2017) plt.title('The Crime rate in Vancouver between 2003-2017') plt.xlabel('Year') plt.show() # ### As we can see above the highest year of committing crimes in Vancouver was in 2003 and 2004 # + # Q2/What is the most committed type of crime between year 2003 to 2017? What is the corrleation between the type of crime and Time? # counting the most frequent type of crime df_check=df['TYPE'].value_counts().idxmax() print('the most committed type of crime is',df_check) # - # plotting a graph plt.figure(figsize = (20,8)) sns.countplot(df['HOUR'],hue=df['TYPE']) # ## As we can see above Theft from Vehicle is the common crime in Vancouver which has the highest rate between the year 2003 until 2017. # ## From the chart above It can be seen that Theft from vehicle increased in the evening starting from 6 pm until late at night. On the other hand, in the morning it seems that there wasn't a lot of crimes, I think that because most of the stores open early. So there is no chance for a criminal to steal between those hours. # + # Q3/ What the relation between the type of crime and the neighborhood that has a high rate of crimes? 
list1=list() mylist=list() for ii in df.TYPE.cat.categories: list1.append(df[df.TYPE==ii].NEIGHBOURHOOD) mylist.append(ii) sns.set_style('whitegrid') fig, ax = plt.subplots() fig.set_size_inches(11.7,8.27) plt.title('The Relation Between Type of Crime and The Neighbourhood' , fontsize=20, color='Black',fontname='Console') plt.ylabel('Type of crime',fontsize=20,color='Black') plt.xlabel('Neigbourhood Areas',fontsize=20,color='Black') plt.yticks(fontsize=10) plt.xticks(fontsize=10.5) h=plt.hist(list1,stacked=True, bins=20, rwidth=1,label=mylist) plt.legend(frameon=True,fancybox=True,shadow=True,framealpha=0 ,fontsize=10) plt.show() # - # ## From the chart, we can see that neighborhood 0 which represents Arbutus Ridge and neighborhood 1 which represent Central Business District have the most crime rate which means these areas should be check by the police because it has a high crime rate comparing with another neighborhood which seems it has a low crime rate, As well as Theft from vehicle and theft of the vehicle, are the most committed crimes in these areas. # # # In conclusion, # # ### Despite the fact that Vancouver has become the safest place to visit, the crime rate is slightly increased. To reduce the number of crimes, the relevant institutions in the region should increase CCTV cameras, particularly in areas that have a high crime rate. # # ## Reference: # #### 1- https://medium.com/@wise_plum_macaw_832/crime-in-vancouver-canada-2003-2017-e5ece4ec3080 (Blog-post) # #### 2-https://www.kaggle.com/wosaku/crime-in-vancouver # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # SLU2 - Subsetting data in Pandas: Examples notebook # # In this example notebook we will be working with airbnb's listing dataset to over the concepts learned in this unit: # # # + import pandas as pd # This is an option to preview less rows in the notebook's cells' outputs pd.options.display.max_rows = 10 # - # ## 1 - Using the index # ### Read airbnb_input data with neighborhood as index # + # Read the data in file airbnb_input.csv into a pandas DataFrame and use column neighborhood as the DataFrame index. df = pd.read_csv('data/airbnb_input.csv', index_col='neighborhood') # Preview the first rows of the DataFrame. df.head() # - # ### Sort index alphabetically (descending) df = df.sort_index(ascending=False) df # ### Reset neighborhood index, keeping it as a column, and setting host_id as index df = df.reset_index(drop=False).set_index('host_id').sort_index() df # ### Add host with id 1 df.loc[1] = ['Alvalade',567,'Private room',2,4.5,2,1,24] df # As we can see the our new host in the last poistion of our dataframe. This means that is no longer sorted along the index. df = df.sort_index() # ### Select last 7 rows df[-7:] df.iloc[-7:] # ### Select between positions 25 and 33 df[25:33] df.iloc[25:33] # ### Select between host 1 and 150000 df.loc[1:15000] # ### Select columns reviews and price for hosts with id between 150000 and 600000 df.loc[15000:60000,['reviews','price']] # ### Update reviews to value 5 for for room_id 33348. 
df.loc[df.room_id==33348,'reviews']=5 df.loc[df.room_id==33348,:] # ### Drop *host_id* 1 and column *accommodates* # Drop row df_new = df.drop(labels=1) # Drop column df_new = df_new.drop(columns='accommodates') # Show header df_new.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # total number of check-in records in the Gowalla dataset CHECKINS = 6442892 ci_path = '../../Dataset/loc-gowalla_totalCheckins.txt' ci_file = open(ci_path, 'r') ci_lines = ci_file.readlines() ci_file.close() ci_data = np.zeros((CHECKINS, 2), dtype=np.int32) # each tab-separated line holds user id, check-in time, latitude, longitude and location id; keep the user id (column 0) and the location id (column 4) for ci in range(CHECKINS): line_split = ci_lines[ci].split("\t") ci_data[ci, 0] = int(line_split[0]) ci_data[ci, 1] = int(line_split[4]) print(ci_data[:10]) np.save('../../Dataset/ci_ids.npy', ci_data) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # data-hub: # # * [Website](https://data-hub.ir/) # * [Youtube](https://www.youtube.com/channel/UCrBcbQWcD0ortWqHAlP94ug) # * [Github](https://github.com/datahub-ir) # * Telegram Channel: @**data_hub_ir** # * Telegram Group: **@data_jobs** # + [markdown] colab_type="text" id="PqdtdLcGZdzy" # #
    #

    # Introduction #

    #
    #
    #
    #
    # Data visualization is an important part of data analysis. Visualization helps us recognize the relationships between variables and identify which variables are important or can affect the value of another variable. #
    #
    #
    # The matplotlib library is one of the plotting libraries in the Python programming language. The matplotlib.pyplot module contains functions that help us draw a plot by specifying its basic components.
    # + [markdown] colab_type="text" id="dB0TvnlwZdz2" #
    # In this tutorial we use the mpg dataset. It contains information collected on 234 cars. The columns of this dataset are:
    # # # > * model: model name # > * displ: engine size # > * year: year of manufacture # > * cyl: number of cylinders # > * hwy: distance travelled per gallon of fuel on the highway # > * cty: distance travelled per gallon of fuel in the city # > * fl: fuel type # > * class: car class # + [markdown] colab_type="text" id="4Bq4vxIEZdz7" #
    #

    # Installing the matplotlib library #

    #
    #
    #
    #
    # To install or update the matplotlib library, you can enter the following command on the command line:
    # + [markdown] colab_type="text" id="6YCXxEf_Zdz9" # ```bash # pip install matplotlib # ``` # - # !python --version # !pip install matplotlib pip show matplotlib # + [markdown] colab_type="text" id="-F6iIi2FZdz-" # #
    # After the installation is complete, you need to import the library into your code in order to use it; note that for convenience we define the abbreviation plt for matplotlib.pyplot.
    # + colab={} colab_type="code" id="a1SivckAZd0A" import numpy as np import pandas as pd import matplotlib.pyplot as plt # + [markdown] colab_type="text" id="v33_18jlZd0L" # #
    #

    # The plot function #

    #
    # # - x = np.linspace(0, 25, 250) plt.plot(x, np.sin(x)) plt.plot(x, np.cos(x), '--') plt.plot(x, np.cos(x)+.1) plt.show() plt.scatter(x[:10], x[:10]+2) plt.show() plt.scatter(x, np.sin(x)) plt.show() # #
    # The plot function can be used to draw any arbitrary function. It takes a set of x and y values as input and draws a curve through them. # In the example below, the sin function is drawn at ten points in the interval $[-\pi, \; \pi]$.
    # + colab={"base_uri": "https://localhost:8080/", "height": 265} colab_type="code" id="eIwOts-MZd0N" outputId="30fcb413-1453-43a2-e082-af1f170e9b11" pi = np.pi x = np.linspace(start=-pi, stop=pi, num=10) y = np.sin(x) plt.plot(x, y) plt.show() # + [markdown] colab_type="text" id="gflerBgzZd0S" #
    # We use the show function to display the plot.
    #
    #
    # In the plot below, draw different variations of the previous plot by changing the markersize, marker, linewidth, linestyle, and color parameters.
    #
    # The markersize and linewidth parameters are natural numbers. The table below lists some of the possible values for the marker and linestyle parameters.
    # + [markdown] colab_type="text" id="tCwaJ7mXZd0T" # # | پارامتر |مقدار | # |:----------|:------------------:| # | `marker` | '.' '*' '+' | # | `linestyle` | '-' '--' '-.' ':' | # # + colab={"base_uri": "https://localhost:8080/", "height": 265} colab_type="code" id="9zSUmeevZd0V" outputId="95ac5e9b-b4c6-47f4-93fe-e0a3ae553e6a" plt.plot(x, y, color='red', linewidth=1, linestyle=':', marker='+', markersize=10) plt.show() # + [markdown] colab_type="text" id="SwOI33KSZd0X" #
    # As you have probably noticed, setting the marker parameter lets us show the exact location of the data points on the plot.
    # - age = [21,12,32,45,37,18,28,52,5,40,48,15] height = [160,135,170,165,173,168,175,159,105,171,155,158] # + # Set figure size plt.figure(figsize=(12,6)) # Add a main title plt.title("Plot of Age vs. Height (in cms)\n", fontsize=20, fontstyle='italic') # X- and Y-label with fontsize plt.xlabel("Age (years)", fontsize=16) plt.ylabel("Height (cms)", fontsize=16) # Turn on grid plt.grid(True) # Set Y-axis value limit plt.ylim(100,200) # X- and Y-axis ticks customization with fontsize and placement plt.xticks([i*5 for i in range(12)], fontsize=15) plt.yticks(fontsize=15) # Main plotting function with choice of color, marker size, and marker edge color plt.scatter(x=age, y=height, c='orange', s=150, edgecolors='k') # Adding a bit of text to the plot plt.text(x=15, y=105, s="Height increases up to around \n20 years and then tapers off", fontsize=15, rotation=30, linespacing=2) plt.text(x=22, y=185, s="Nobody has a height beyond 180 cm", fontsize=15) # Adding a vertical line plt.vlines(x=20, ymin=100, ymax=180, linestyles='dashed', color='blue', lw=3) # Adding a horizontal line plt.hlines(y=180, xmin=0, xmax=55, linestyles='dashed', color='red', lw=3) # Adding a legend plt.legend(['Height in cms'], loc=2, fontsize=14) # Final show method plt.show() # + [markdown] colab_type="text" id="G5n1cXtxZd0Z" #
    #

    # Scatter plot #

    #
    # #
    #
    #
    # In a scatter plot we can draw the data in a coordinate system using two or three numerical features. We use the scatter function for this. #
    # First, we read the mpg dataset.
    # + colab={} colab_type="code" id="9hqHkQeaZ8R0" mpg_csv = "mpg.csv" # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="1XovJTBNZd0Z" outputId="e01eead9-49eb-4f7d-e7ae-e96691164462" df = pd.read_csv(mpg_csv) df.head() # + [markdown] colab_type="text" id="fuvQnVG9Zd0c" #
    #
    #
    # As an example, we draw a scatter plot of the distance travelled per gallon of fuel (hwy) against the engine size (displ).
    # # + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="azIDARniZd0f" outputId="1a09814e-9dc6-4bc9-b3ee-007f9ef5511d" plt.scatter(x=df['displ'], y=df['hwy'], c='blue', alpha=0.4) plt.title('hwy - displ') plt.xlabel('displ') plt.ylabel('hwy') plt.show() # + [markdown] colab_type="text" id="LKdfc7ZAZd0h" #
    #
    # To use the scatter function we need to specify the values of each dimension. We also use the alpha parameter to set the transparency of each marker, so wherever the markers appear darker, more data points are actually stacked on top of each other. # We use the title function to set the plot title and the xlabel and ylabel functions to label the axes.
    #
    #
    # + # color and size each point using random values rng = np.random.RandomState(0) colors = rng.rand(len(df)) sizes = 1000 * rng.rand(len(df)) # marker size proportional to the magnitude of the random numbers plt.scatter(x=df['displ'], y=df['hwy'], c=colors, s=sizes, alpha=0.3, cmap='viridis') plt.colorbar() # + [markdown] colab_type="text" id="FyzJAl3XZd0i" #
    # Now we want to color the data points according to the number-of-cylinders feature.
    # - df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="CWvuQUBFZd0j" outputId="1f28a2cf-07ac-48d5-be5d-c03670e703bc" plt.scatter(x=df['displ'], y=df['hwy'], c=df['cyl'],alpha=0.5) plt.title('hwy - displ') plt.xlabel('displ') plt.ylabel('hwy') plt.colorbar() plt.show() # + [markdown] colab_type="text" id="oKNYsScbZd0l" #
    # Using the colorbar function we can display the color scale. As expected, cars with more cylinders consume more fuel; a quick numerical check of this follows below.
    #
    #
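    # As a quick numerical check (an added sketch; it assumes df is the mpg dataframe loaded above), we can compare the mean highway mileage for each cylinder count:
    # +
    # mean hwy per number of cylinders; lower values mean higher fuel consumption
    df.groupby('cyl')['hwy'].mean()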
    # - # In Matplotlib, the figure (an instance of the class plt.Figure) can be thought of as a single container that contains all the objects representing axes, graphics, text, and labels. The axes (an instance of the class plt.Axes) is what we see above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization. Throughout this book, we'll commonly use the variable name fig to refer to a figure instance, and ax to refer to an axes instance or group of axes instances. # + fig = plt.figure() ax = plt.axes() x = np.linspace(0, 10, 1000) ax.plot(x, np.sin(x)) # - # Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background # (see [Two Interfaces for the Price of One](04.00-Introduction-To-Matplotlib.ipynb#Two-Interfaces-for-the-Price-of-One) for a discussion of these two interfaces): plt.plot(x, np.sin(x)) x = np.linspace(0, 10, 1000) fig, ax = plt.subplots() ax.plot(x, np.sin(x), '-b', label='Sine') ax.plot(x, np.cos(x), '--r', label='Cosine') ax.axis('equal') leg = ax.legend() ax.legend(loc='upper left', frameon=False) fig ax.legend(frameon=False, loc='lower center', ncol=2) fig # + [markdown] colab_type="text" id="mxvWzLnvZd0l" #
    #

    # 3D plots #

    #
    # #
    #
    # #
    # In this section we plot the mpg dataset using three numerical features in a coordinate system. For this we use the mplot3d toolkit alongside matplotlib.
    # + colab={"base_uri": "https://localhost:8080/", "height": 248} colab_type="code" id="W585be9aZd0l" outputId="347745c5-38e7-4e2e-ba30-585c0aa81906" from mpl_toolkits import mplot3d fig = plt.figure() ax = plt.axes(projection='3d') ax.scatter3D(df['displ'], df['hwy'], df['cyl']) plt.show() # - ax = plt.axes(projection='3d') ax.scatter3D(df['displ'], df['hwy'], df['cyl'], c=df['cyl']) plt.show() # + [markdown] colab_type="text" id="PtrpUW1oZd0n" #
    # As can be seen, adding the number-of-cylinders feature clusters the data quite well.
    #
    #
    # - fig.savefig('my_figure.png') # + [markdown] colab_type="text" id="IZXyaXNEZd0n" #
    #

    # Bar plot #

    #
    #
    #
    #
    # This plot lets us display and compare a numerical quantity across the different values of a categorical variable.
    # + colab={"base_uri": "https://localhost:8080/", "height": 279} colab_type="code" id="_Ik5KGUvZd0o" outputId="07983cef-9567-44fa-ce80-1555c74b6c9b" groups = ['G1', 'G2', 'G3', 'G4', 'G5'] scores = [20, 34, 30, 32, 27] bar_width = 0.3 plt.bar(groups, scores, width=bar_width, color='black') plt.xlabel('Groups') plt.ylabel('Scores') plt.show() # + [markdown] colab_type="text" id="Ev3ryF3xZd0p" #
    #
    # In the next bar plot we show the mean hwy and cty for the different car classes side by side.
    # - df['class'].unique() # + colab={"base_uri": "https://localhost:8080/", "height": 265} colab_type="code" id="DKqqZ_p_Zd0q" outputId="49e05b77-970b-4757-9fed-1d6093a07c3a" classes = df['class'].unique() barwidth = 0.3 cty_mean = [] hwy_mean = [] for x in classes: cty_mean.append(df[df['class'] == x]['cty'].mean()) hwy_mean.append(df[df['class'] == x]['hwy'].mean()) # - index = pd.factorize(classes)[0] + 1 index plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon') plt.legend() plt.show() plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon') plt.legend() plt.xticks(index, classes) plt.show() plt.bar(index, hwy_mean, barwidth, color='red', label='mean highway miles per gallon') plt.legend() plt.xticks(index, classes) plt.show() plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon') plt.bar(index + barwidth, hwy_mean, barwidth, color='purple', label='mean highway miles per gallon') plt.xticks(index + barwidth / 2, classes) plt.legend() plt.show() # + [markdown] colab_type="text" id="vloc3yxZZd0r" #
    # The xticks function takes the positions of the categories and their names and displays them on the x axis. # When we draw several plots in one figure, we can set the label parameter of each plot and call the legend function to show the label assigned to each plot. # Note that in this figure we drew two plots at the same time; in general we can draw any number of plots of any desired type in a single figure.
    #
    #
    # - # ### Histogram weight = [55,35,77,68,70,60,72,69,18,65,82,48] import numpy as np plt.figure(figsize=(5,5)) # Main plot function 'hist' plt.hist(weight, color='red', edgecolor='k', alpha=0.75, bins=7) plt.title("Histogram of patient weight", fontsize=18) plt.xlabel("Weight in kgs", fontsize=15) plt.xticks(fontsize=15) plt.yticks(fontsize=15) plt.show() # + [markdown] colab_type="text" id="yyfoO9mxZd0s" #
    #

    # Box plot #

    #
    # #
    #
    #
    # A box plot displays a summary of the data: the minimum, first quartile, median, third quartile, and maximum (see the short computation added after this note).
    #
    # #
    #
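    # As a short added illustration (assuming df is the mpg dataframe used above), the same five-number summary can be computed directly with pandas:
    # +
    # minimum, quartiles and maximum of cty and hwy, the summary described above
    df[['cty', 'hwy']].quantile([0, 0.25, 0.5, 0.75, 1.0])
    # -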
    # + [markdown] colab_type="text" id="M9Ign8a9Zd0s" #
    # In the code below, the plotting functions defined on the dataframe are used to draw the chart. For further reading you can refer to this link.
    # - df.describe() # + colab={"base_uri": "https://localhost:8080/", "height": 265} colab_type="code" id="R2QHEaBhZd0s" outputId="c747f8e5-d2cb-4924-a245-7d83c36a2092" plt.style.use('ggplot') df.boxplot(column=['cty', 'hwy'],showmeans=True) plt.show() # + [markdown] colab_type="text" id="i38zbKFRZd0v" #
    #
    #
    # In the plot below, the data are first grouped by the class feature, and then a box plot is drawn for each group.
    # + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" id="J33CuBE9Zd0w" outputId="643534ef-b071-41ab-b16b-c8a53aec3b6d" df.boxplot(column=['hwy'], by='class') plt.show() # - # ## Pandas DataFrames support some visualizations directly! df.plot.scatter('displ', 'hwy') plt.show() df['hwy'].plot.hist(bins=5,figsize=(5,5),edgecolor='k') plt.xlabel('hwy percentage') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: pompy # kernelspec: # display_name: mothpy # language: python # name: python3 # --- # # MothPy # # `mothpy` is a NumPy based implementation of moth-inspired navigation strategies that uses # `pompy` library to create the puff, wind and concentration models. see `pompy/Readme.md` # for details # # ### What is this repository for? # # This Python package allows simulation moth-like navigators in dynamic 2D odour # concentration fields spread in turbulent flows # # ### Installation and requirements # # Python # Numpy # Scipy # Matplotlib # Seaborn # Pompy https://github.com/InsectRobotics/pompy # import numpy as np import scipy as sp import matplotlib.pyplot as plt # %matplotlib inline from IPython.display import Image # import pompy # ### This notebook shows interactively the run of the script # `python compare_navigators_in_different_wind_conditions.py` # + inputHidden=false outputHidden=false # demo of the output Image('../img/Demonstration_of_different_navigation_strategies.png') # - # ## How to build figures for the paper # # ### Set up the navigators (optional) # # The file Casting_competition initiates the navigators to compete in the simulation. Four loops initiate four equal sized groups of navigators, their properties can be changed within these loops - navigation and casting strategies, location, and so on. # For more information about navigators check out the models file. # # ### Set up the wind and plume conditions (optional) # # The file Compare_navigtors... initiates the main loop. For each iteration a new plume and wind model are initiated for the simulation to occur in. The function generate_job dictates the terms of the simulation in terms of wind and plume partameteres. In order to set the simulation enter the required parameters as input for generate_job. For example here - # ``` # for i in range(4): # job_file_name = 'job'+ str(i)+ '.json' # data_file_name = 'data'+ str(i)+ '.json' # #titles_file_name = 'titles'+ str(i)+ '.json' # generate_job(char_time = 3.5, amplitude =0.1, job_file = job_file_name, # t_max =15, puff_release_rate = 100, # puff_spread_rate = 0.0001*(1+i), # dt = 0.01, num_it = 1) # ``` # # The only value that changes is the puff spread rate, varying from 0.0001 to 0.0004. # Make sure that only one variable of the simulation changes with each iteration. Multivaribale changes will create problems later on. # ### Run `comapare_navigators.py` # # When the file is run the wind and plume paramters that have been set are saved into "job" files, one JSON file for each iteration (job0.json, job1.json...). # The trajectories of the navigators are saved as "data" files, (data0.json, data1.json), on which the later analyses will be made. # Notice the following line: # # ```python # #save_plot(job_file_name,data_file_name,title,navigator_titles) # ``` # # uncomment this line to save an image for each navigation attempt in the deafulat settings, that means 800 pictures). 
That could supply useful input in some cases. # if the package installed, this is not necessary import sys sys.path.append('../mothpy') # + # # %run compare_navigators_in_different_wind_conditions.py # Run 4 jobs from job_generator import generate_job from casting_competition import create_trajectory_data from graphics_from_file import save_plot for i in range(4): job_file_name = 'job'+ str(i)+ '.json' data_file_name = 'data'+ str(i)+ '.json' #titles_file_name = 'titles'+ str(i)+ '.json' generate_job(char_time = 7, amplitude = 0.1, job_file = job_file_name, t_max =20, puff_release_rate = 100, puff_spread_rate = 0.0001*(1+i), dt = 0.01, num_it = 1) navigator_titles = create_trajectory_data(job_file_name,data_file_name) title = 'loop ' +str(i) # save_plot(job_file_name,data_file_name,title,navigator_titles) #save_detection_plot(job_file_name,data_file_name,navigators_titles) print ('finished simulation number ' + str(i+1)) # - # #### Run Line_graphs # The file `line_graphs.py` plots bar graphs of the four different simulations. It read from the Data and Job files, so those could be replaced and . There is no need to set up anything for this file, just run it: # # ``` python line_graphs.py ``` # # # The output should look like the figures below. # # # + inputHidden=false outputHidden=false from line_graphs import * num_jobs = 4 (xlabel,values)=process_jobs(num_jobs) legends = ('A','B','C','D') liberzonlist =[] Benellilist = [] lfslist =[] fslist = [] for i in range(num_jobs): loop = str(i) data_list = get_data('data'+loop+'.json',4) #print (data_list[0]) liberzonlist.append(data_list[0]) Benellilist.append((data_list[1])) lfslist.append((data_list[2])) fslist.append((data_list[3])) #[succ_prec ,average_time_,average_efficiency] lib_succ, lib_avg,lib_efficiency = zip(*liberzonlist) Bene_succ, Bene_avg,Bene_efficiency = zip(*Benellilist) lfs_succ, lfs_avg, lfs_efficiency = zip(*lfslist) fs_succ, fs_avg,fs_efficiency = zip(*fslist) fig,ax = plt.subplots() n_groups = 4 bar_width = 0.3 opacity = 0.4 index = np.arange(0, 2*n_groups, 2) """ data0 =(lib_succ[0],Bene_succ[0],lfs_succ[0],fs_succ[0]) data1 =(lib_succ[1],Bene_succ[1],lfs_succ[1],fs_succ[1]) data2 =(lib_succ[2],Bene_succ[2],lfs_succ[2],fs_succ[2]) data3 =(lib_succ[3],Bene_succ[3],lfs_succ[3],fs_succ[3]) """ def set_charts(data0,data1,data2,data3): global chart chart = plt.bar(index, data0, bar_width,color = 'white', edgecolor='black') chart = plt.bar(index+bar_width, data1, bar_width,color = 'white', hatch = '+' ,edgecolor='black') chart = plt.bar(index+2*bar_width, data2, bar_width,color = 'white', hatch = '\\', edgecolor='black') chart = plt.bar(index+3*bar_width, data3, bar_width,color = 'black', hatch = '*', edgecolor='black') set_charts(lib_succ,Bene_succ,lfs_succ,fs_succ) if xlabel == 'puff_spread_rate': ax.set_xlabel(r'$\sigma$') else: ax.set_xlabel(xlabel) ax.set_ylabel('Success (%)') ax.xaxis.label.set_fontsize(17) ax.yaxis.label.set_fontsize(17) #ax.set_title('Success Percentage vs '+ xlabel) plt.xticks(index+bar_width*1.5, (str(values[0]), str(values[1]), str(values[2]),str(values[3]))) plt.legend(legends) plt.tight_layout() plt.savefig('Success Percentage vs '+ xlabel) plt.show() #create second graph fig,ax = plt.subplots() n_groups = 4 bar_width = 0.3 opacity = 0.4 index = np.arange(0, 2*n_groups, 2) #plt.xlabel(xlabel) #plt.ylabel('Average Navigation Time (ratio)') if xlabel == 'puff_spread_rate': ax.set_xlabel(r'$\sigma$') else: ax.set_xlabel(xlabel) ax.set_ylabel('T/'+r'$\tau$') set_charts(lib_avg,Bene_avg,lfs_avg,fs_avg) 
plt.subplots_adjust(bottom=0.15) plt.xticks(index+bar_width*1.5, (str(values[0]), str(values[1]), str(values[2]),str(values[3]))) #ax.set_title('Average Navigation Time vs '+ xlabel) ax.xaxis.label.set_fontsize(17) ax.yaxis.label.set_fontsize(17) plt.legend(legends) # plt.savefig('Average Navigation Time vs '+ xlabel) # plt.show() # - # ## How to manage and design new navigators # ### initiating a navigator # Let us look at this example from the casting_competition file: # ``` # navigator1 = models.moth_modular(sim_region, cd['x_start'], cd['y_start'], cd['nav_type'] , cd['cast_type'], cd['wait_type']) # ``` # The navigator is initiated with its initial x and y coordinates and the modes of navigating, casting and waiting. # ### wait, cast and nav types # A navigator is an object of the ```moth_modular``` class. It has an attribute to define each movement type, ```wait_type, cast_type, nav_type```. # The attribute itself can be an integer or a string, it doesn't matter, but it should correlate to a signifier inside of the corresponding function. For example, let's look at the casting function - # ``` # def cast(self,wind_vel_at_pos): # if self.state != 'cast' : # #if this is the beginning of a new casting phase # self.change_direction() # # if self.cast_type == 0: # self.u=0 # self.v=0 # # if self.cast_type == 1: # self.calculate_wind_angle(wind_vel_at_pos) # self.u = -self.speed*np.cos(self.gamma+self.wind_angle) # self.v = self.speed*np.sin(self.gamma+self.wind_angle) # # if self.cast_type == 2: # #define different betas for different casting patterns # self.cast2(wind_vel_at_pos) # ``` # The function, like all movement functions, takes as input the parameters of the navigator and the wind velocity at its position (as calculated by the wind model). # The first conditional changes the direction of casting from the previous direction. This has nothing to do with the cast type. # The second, third and fourth conditionals are dependent on the cast type, and use it as an indicator of how to move. Note that the function can call upon other functions. The structure of the ```wait``` and ```navigate``` functions is very similar - the function sets the velocity (u,v) of the navigator. The actual time step is performed in the update function. # #### defining new movement types # In order to create a new waiting, casting or navigation mode, first open the models file. For example, let's say we would like to design a new waiting mode. First, we should define a condition within the waiting function. # ``` # def wait(self,wind_vel_at_pos): # if self.wait_type == 'example wait type': # ``` # Now, if the navigator was initiated so that its wait type attribute is 'example wait type', the wait function will be directed into the actions we define under that conditional. Secondly, define the changes you would like to make to the velocity of the navigator: # ``` # def wait(self,wind_vel_at_pos): # if self.wait_type == 'example wait type': # self.u *= 1.1 # self.v *= 1.1 # ``` # The same approach should be applied to any of the movement functions; see the sketch below.
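# As an added illustration of that last point (a hypothetical sketch, not code taken from the repository), a new casting mode plugged into the existing `cast` function would follow exactly the same pattern:
# ```
# def cast(self, wind_vel_at_pos):
#     ...
#     if self.cast_type == 'example cast type':
#         # same expressions as cast_type 1 above, but with twice the speed
#         self.calculate_wind_angle(wind_vel_at_pos)
#         self.u = -2 * self.speed * np.cos(self.gamma + self.wind_angle)
#         self.v = 2 * self.speed * np.sin(self.gamma + self.wind_angle)
# ```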
# After we defined the new conditional, we can use it when initiatin a new navigator: # # ``` # navigator1 = models.moth_modular(sim_region, cd['x_start'], cd['y_start'], cd['nav_type'] , cd['cast_type'], 'example wait type') # ``` # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="TMboOZrPfK9-" colab_type="code" outputId="729375a4-2103-48e1-acf0-c63b4551397b" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !pip install tensorflow==2.0 # + id="a8koFovcoFar" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dbbea0cf-5e1d-47eb-8c56-2dc7e3e9f94a" # install necessary packages import tensorflow as tf print(tf.__version__) # + id="p14Zc30VkFre" colab_type="code" colab={} import os import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler import joblib import seaborn as sns sns.set(color_codes=True) import matplotlib.pyplot as plt # %matplotlib inline from numpy.random import seed # from tensorflow import set_random_seed # import tensorflow as tf # tf.logging.set_verbosity(tf.logging.ERROR) from tensorflow.keras.layers import Input, Dense, Dropout, LSTM, TimeDistributed, RepeatVector from tensorflow.keras.models import Model from tensorflow.keras import regularizers # + id="KZrGz7qdkIXm" colab_type="code" colab={} # set random seed tf.random.set_seed(10) # + id="yiGOYYWCkLmw" colab_type="code" outputId="5f82ef2a-21e5-472f-cc9b-2a24867db33b" colab={"base_uri": "https://localhost:8080/", "height": 276} df = pd.read_csv('/content/data.csv') df.info() # + id="FupDGc1ck2xH" colab_type="code" outputId="ad31b69c-fed4-4561-dd3f-bed28af8d54a" colab={"base_uri": "https://localhost:8080/", "height": 207} df.nunique() # + id="UrFGyrC5k6nJ" colab_type="code" colab={} df.drop(labels=['Timestamp', 'jobName', 'values_ConveyorBeltTimestamp', '_timestamp', '_version'], axis=1, inplace=True) # + id="-YPEDLr5l3Do" colab_type="code" outputId="37525bca-fdcd-4a1f-bef9-362a04bcdf56" colab={"base_uri": "https://localhost:8080/", "height": 190} df.info() # + id="MGX1AMQUnuAr" colab_type="code" outputId="19187d5f-1fd1-4db3-98f7-0d2332f99472" colab={"base_uri": "https://localhost:8080/", "height": 198} df.head() # + id="YYa8NnI9n_Rk" colab_type="code" colab={} df_new = df.groupby('Timestamp_HR', as_index=False)['values_PostStage', 'values_MidStage', 'values_PreStage'].mean() # + id="mkSpNBdMn_lx" colab_type="code" outputId="7ca1068d-f5e1-4fe8-b15f-7e9707bd5359" colab={"base_uri": "https://localhost:8080/", "height": 173} df_new.info() # + id="HgBmpkV9n_pt" colab_type="code" colab={} # set index df_new.set_index('Timestamp_HR', inplace=True) # + id="SmIQ9lgbl4lL" colab_type="code" outputId="9c1ca273-40f3-41f8-e176-2ae678d9bed5" colab={"base_uri": "https://localhost:8080/", "height": 155} df_new.info() # + id="soWNYsj9oRh7" colab_type="code" outputId="d7a5f427-f9b8-440e-8be5-2cfafb72a827" colab={"base_uri": "https://localhost:8080/", "height": 228} df_new.head() # + id="M4zbKP7ppfx7" colab_type="code" colab={} df_new.to_csv('sensor.csv') # + id="0AQjkdcOpo56" colab_type="code" colab={} # Define train and test dataset df_new.drop(labels=['values_MidStage', 'values_PreStage'], axis=1, inplace=True) # + id="LMBn8XKaDT1J" colab_type="code" outputId="f67a5970-aab6-49b0-e45b-cada273f7182" colab={"base_uri": 
"https://localhost:8080/", "height": 228} df_new.head() # + id="aSoWAm2Tt1e4" colab_type="code" colab={} # split into train and test split train = df_new[:2162] val = df_new[2162:] # + id="9_wCrlcIuWyn" colab_type="code" outputId="a2c97ad3-2189-47c8-dcc6-5c5bc206b78d" colab={"base_uri": "https://localhost:8080/", "height": 34} train.shape, val.shape # + id="F9SbGERGuh9a" colab_type="code" outputId="f2a99e68-fb50-4d8d-8c4c-0be0badb362c" colab={"base_uri": "https://localhost:8080/", "height": 439} # Only fot values_PostStage fig, ax = plt.subplots(figsize=(14, 6), dpi=80) ax.plot(train['values_PostStage'], label='values_PostStage', color='blue', animated = True, linewidth=1) plt.legend(loc='lower left') ax.set_title('Sensor Training Data', fontsize=16) plt.show() # + id="VXRorA1P9DXF" colab_type="code" outputId="a9de549d-7c9b-45b2-a409-15803978c437" colab={"base_uri": "https://localhost:8080/", "height": 318} # Only fot values_PostStage fig, ax = plt.subplots(figsize=(10, 4), dpi=80) ax.plot(val['values_PostStage'], label='values_PostStage', color='blue', animated = True, linewidth=1) plt.legend(loc='lower left') ax.set_title('Sensor Training Data', fontsize=16) plt.show() # + id="Zo8-hRaH9lMd" colab_type="code" colab={} # transforming data from the time domain to the frequency domain using fast Fourier transform train_fft = np.fft.fft(train) val_fft = np.fft.fft(val) # + id="ehx4sMyq9wW5" colab_type="code" outputId="68f8b5d0-a14c-4aef-85e3-6810d81ad46a" colab={"base_uri": "https://localhost:8080/", "height": 977} # frequencies of the healthy sensor signal fig, ax = plt.subplots(figsize=(14, 10), dpi=80) ax.plot(train_fft[:,0], label='values_PostStage', color='blue', animated = True, linewidth=1) plt.legend(loc='top left') ax.set_title('Sensor Training Data', fontsize=16) plt.show() # + id="GSabHhF--Klq" colab_type="code" outputId="518d56d8-1a42-47d3-dc90-41a0df512e18" colab={"base_uri": "https://localhost:8080/", "height": 715} # frequencies of the healthy sensor signal fig, ax = plt.subplots(figsize=(14, 10), dpi=80) ax.plot(val_fft[:,0], label='values_PostStage', color='blue', animated = True, linewidth=1) plt.legend(loc='lower left') ax.set_title('Sensor Training Data', fontsize=16) plt.show() # + id="NStXQexz-gGk" colab_type="code" outputId="2b43341e-cda8-4067-e109-a4b2369e7cd7" colab={"base_uri": "https://localhost:8080/", "height": 34} # first normalize it to a range between 0 and 1. 
# normalize the data scaler = MinMaxScaler() X_train = scaler.fit_transform(train) X_val = scaler.transform(val) scaler_filename = "scaler_data" joblib.dump(scaler, scaler_filename) # + id="-4L158SGCzQb" colab_type="code" outputId="ce6f362d-c1b2-4c10-f00e-adbb225999c7" colab={"base_uri": "https://localhost:8080/", "height": 34} X_train.shape # + id="bGRkwG1qDiTk" colab_type="code" outputId="8fb3767d-2242-4dca-a8b1-d9839e35747b" colab={"base_uri": "https://localhost:8080/", "height": 34} X_val.shape # + id="qYNu24gNCquB" colab_type="code" outputId="76ad173d-7602-43a1-bd3a-d16ce06e49df" colab={"base_uri": "https://localhost:8080/", "height": 52} # reshape inputs for LSTM [samples, timesteps, features] X_train = X_train.reshape(X_train.shape[0], 1, X_train.shape[1]) print("Training data shape:", X_train.shape) X_val = X_val.reshape(X_val.shape[0], 1, X_val.shape[1]) print("Test data shape:", X_val.shape) # + id="wXL72BLxDqJJ" colab_type="code" colab={} # define the autoencoder network model def autoencoder_model(X): inputs = Input(shape=(X.shape[1], X.shape[2])) L1 = LSTM(16, activation='relu', return_sequences=True, kernel_regularizer=regularizers.l2(0.00))(inputs) L2 = LSTM(4, activation='relu', return_sequences=False)(L1) L3 = RepeatVector(X.shape[1])(L2) L4 = LSTM(4, activation='relu', return_sequences=True)(L3) L5 = LSTM(16, activation='relu', return_sequences=True)(L4) output = TimeDistributed(Dense(X.shape[2]))(L5) model = Model(inputs=inputs, outputs=output) return model # + id="82bpQ1asDu1L" colab_type="code" outputId="203447a8-a773-4f5c-dc97-6c50851c1eb8" colab={"base_uri": "https://localhost:8080/", "height": 397} # create the autoencoder model model = autoencoder_model(X_train) model.compile(optimizer='adam', loss='mae') model.summary() # + id="AeM6e8XDDyGd" colab_type="code" outputId="02c769aa-3c87-4cb7-db9d-a593081f9d51" colab={"base_uri": "https://localhost:8080/", "height": 1000} # fit the model to the data nb_epochs = 30 batch_size = 100 history = model.fit(X_train, X_train, epochs=nb_epochs, batch_size=batch_size, validation_split=0.05).history # + id="Emf4BOUNEJyu" colab_type="code" outputId="c8ecf847-8483-457f-b9cc-7a31ef0f611e" colab={"base_uri": "https://localhost:8080/", "height": 268} plt.plot(history['loss'], label='train') plt.plot(history['val_loss'], label='val') plt.legend(); # + id="a6tIpM0dEsJX" colab_type="code" colab={} # Finding Anomalies X_train_pred = model.predict(X_train) train_mae_loss = np.mean(np.abs(X_train_pred - X_train), axis=1) # + id="rWsoIKZyFOJq" colab_type="code" outputId="834a8075-7ee3-455a-f4f1-04f5b1965c09" colab={"base_uri": "https://localhost:8080/", "height": 268} # look at the error: sns.distplot(train_mae_loss, bins=50, kde=True); # + id="Xhhyj-B9FPtE" colab_type="code" colab={} THRESHOLD = 0.03 # + id="YUjWnQr1FZMy" colab_type="code" colab={} # Let’s calculate the MAE on the test data: X_val_pred = model.predict(X_val) val_mae_loss = np.mean(np.abs(X_val_pred - X_val), axis=1) # + id="yeZ9rvWwFhIu" colab_type="code" colab={} # We’ll build a DataFrame containing the loss and the anomalies (values above the threshold): val_score_df = pd.DataFrame(index=val.index) val_score_df['loss'] = val_mae_loss val_score_df['threshold'] = THRESHOLD val_score_df['anomaly'] = val_score_df.loss > val_score_df.threshold val_score_df['values_PostStage'] = val.values_PostStage # + id="43zpWR_jF_hl" colab_type="code" outputId="d56fc438-2f77-437f-9149-b1875d4f3ae4" colab={"base_uri": "https://localhost:8080/", "height": 307} plt.plot(val_score_df.index, 
val_score_df.loss, label='loss') plt.plot(val_score_df.index, val_score_df.threshold, label='threshold') plt.xticks(rotation=25) plt.legend(); # + id="0G2Uyq-oFwna" colab_type="code" outputId="1078c5d3-0713-45ed-da9c-c07ba55bf59d" colab={"base_uri": "https://localhost:8080/", "height": 228} # Looks like we’re thresholding extreme values quite well. Let’s create a DataFrame using only those: anomalies = val_score_df[val_score_df.anomaly == True] anomalies.head() # + id="87EhLWFtHBVO" colab_type="code" outputId="e746e8b4-7d31-4b23-9036-9945004d2343" colab={"base_uri": "https://localhost:8080/", "height": 324} plt.plot( val.index, val.values_PostStage, label='values_PostStage' ); sns.scatterplot( anomalies.index, anomalies.values_PostStage, color=sns.color_palette()[3], s=52, label='anomaly' ) plt.xticks(rotation=25) plt.legend(); # + id="S2L_r7y_I26A" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # ### Using fmriprep # [fmriprep](https://fmriprep.readthedocs.io/en/stable/) is a package developed by the Poldrack lab to do the minimal preprocessing of fMRI data required. It covers brain extraction, motion correction, field unwarping, and registration. It uses a combination of well-known software packages (e.g., FSL, SPM, ANTS, AFNI) and selects the 'best' implementation of each preprocessing step. # # Once installed, `fmriprep` can be invoked from the command line. We can even run it inside this notebook! The following command should work after you remove the 'hashtag' `#`. # # However, running fmriprep takes quite some time (we included the hashtag to prevent you from accidentally running it). You'll most likely want to run it in parallel on a computing cluster. # + # #!fmriprep \ # # --ignore slicetiming \ # # --ignore fieldmaps \ # # --output-space template \ # # --template MNI152NLin2009cAsym \ # # --template-resampling-grid 2mm \ # # --fs-no-reconall \ # # --fs-license-file \ # # ../license.txt \ # # ../data/ds000030 ../data/ds000030/derivatives/fmriprep participant # - # The command above consists of the following parts: # - \"fmriprep\" calls fmriprep # - `--ignore slicetiming` tells fmriprep to _not_ perform slice timing correction # - `--ignore fieldmaps` tells fmriprep to _not_ perform distortion correction (unfortunately, there are no field maps available in this data set) # - `--output-space template` tells fmriprep to normalize (register) data to a template # - `--template MNI152NLin2009cAsym` tells fmriprep that the template should be MNI152 version 6 (2009c) # - `--template-resampling-grid 2mm` tells fmriprep to resample the output images to 2mm isotropic resolution # - `--fs-license-file ../../license.txt` tells fmriprep where to find the license.txt-file for freesurfer - you can ignore this # - `bids` is the name of the folder containing the data in bids format # - `output_folder` is the name of the folder where we want the preprocessed data to be stored, # - `participant` tells fmriprep to run only at the participant level (and not, for example, at the group level - you can forget about this) # # The [official documentation](http://fmriprep.readthedocs.io/) contains all possible arguments you can pass. # ### Using nipype # fmriprep makes use of [Nipype](https://nipype.readthedocs.io/en/latest/), a pipelining tool for preprocessing neuroimaging data. 
Nipype makes it easy to share and document pipelines and run them in parallel on a computing cluster. If you would like to build your own preprocessing pipelines, a good resource to get started is [this tutorial](https://miykael.github.io/nipype_tutorial/). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf import cv2 as cv with tf.gfile.FastGFile('frozen_inference_graph.pb', 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Session() as sess: # Restore session sess.graph.as_default() tf.import_graph_def(graph_def, name='') img = cv.imread('cow.jpg') rows = img.shape[0] cols = img.shape[1] inp = cv.resize(img, (300, 300)) inp = inp[:, :, [2, 1, 0]] # Run the model out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'), sess.graph.get_tensor_by_name('detection_scores:0'), sess.graph.get_tensor_by_name('detection_boxes:0'), sess.graph.get_tensor_by_name('detection_classes:0')], feed_dict={'image_tensor:0': inp.reshape(1, inp.shape[0], inp.shape[1], 3)}) # coco_dataset class 18 - Dog # coco_dataset class 20 - Sheep # coco_dataset class 21 - Cow # Visualize detected bounding boxes. num_detections = int(out[0][0]) dog_count = 0 sheep_count = 0 cow_count = 0 for i in range(num_detections): classId = int(out[3][0][i]) score = float(out[1][0][i]) bbox = [float(v) for v in out[2][0][i]] if score > 0.3: if (classId == 18): dog_count += 1 if (classId == 20): sheep_count += 1 if (classId == 21): cow_count += 1 x = bbox[1] * cols y = bbox[0] * rows right = bbox[3] * cols bottom = bbox[2] * rows cv.rectangle(img, (int(x), int(y)), (int(right), int(bottom)), (0, 0, 0), thickness=2) if (dog_count != 0): label = 'No of dogs ' + str(dog_count) cv.putText(img,label, (0,25), cv.FONT_HERSHEY_DUPLEX, 1, (0,255,0)) if (sheep_count != 0): label = 'No of sheeps ' + str(sheep_count) cv.putText(img, label, (0,25), cv.FONT_HERSHEY_DUPLEX, 1, (0,0,255)) if (cow_count != 0): label = 'No of cows ' + str(cow_count) cv.putText(img, label, (0,25), cv.FONT_HERSHEY_DUPLEX, 1, (255,255,0)) cv.imshow('Image', img) cv.waitKey() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit ('usr') # name: python3 # --- # + id="fTZYaL1ANwxu" # Following https://github.com/huggingface/notebooks/blob/master/examples/translation.ipynb # + id="566GHCfaILnG" # Load the TensorBoard notebook extension # %load_ext tensorboard # Clear any logs from previous runs # !rm -rf ./logs/ # + colab={"base_uri": "https://localhost:8080/"} id="CpHoG6l5Nx1E" executionInfo={"status": "ok", "timestamp": 1628500934784, "user_tz": -120, "elapsed": 97113, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="b86e5bee-1850-4c55-d78a-104ecfc4e8d1" from google.colab import drive drive.mount('drive') # + colab={"base_uri": "https://localhost:8080/"} id="FUM2HkRlKpUE" executionInfo={"status": "ok", "timestamp": 1628501035852, "user_tz": -120, "elapsed": 524, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="3fa0fd22-8c50-4f14-e19d-4136ae7e264e" # 
!unzip lc-quad-wikidata-2021-07-09.zip # + id="EXgnU7gQOGGS" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628500934787, "user_tz": -120, "elapsed": 18, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="b8068c78-67ab-4f53-a8c7-404b6d0f5b89" # !nvidia-smi # + id="Q9TjriZbNwxy" model_checkpoint = 't5-small' gdir = 'drive/My Drive/Colab Notebooks/t5-2021-07-11/' model_name='sparql-translator-t5-2021-08-08-large' model_path='./models/'+model_name ds_path= 'lc-quad-wikidata-2021-07-09' # + id="Acx8YqC8Ok-H" # !mkdir models # + id="fz6l2bnROu-W" # !pip install datasets transformers sacrebleu -qqq # + id="WGFsMmwlNwxz" from datasets import load_dataset, load_metric, Dataset, load_from_disk raw_datasets = load_from_disk(ds_path) # + id="ak3Fc0pDUG8_" colab={"base_uri": "https://localhost:8080/", "height": 253} executionInfo={"status": "ok", "timestamp": 1628501050514, "user_tz": -120, "elapsed": 4102, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="cbad26bd-65b0-4831-b092-51b7f930e388" # !pip install tqdm==4.49.0 # + id="-dRUZHjeNwxz" from tqdm import tqdm # + id="gqgySQUJFZev" colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["ed506f08601040e8b373290d1122f473", "75cb744503cb4d59aa8464af590b8912", "08860bcdbba644188c36feda590e42da", "9920ab5ff73445a6aaa8f4510626a039", "531a28ce7f014769ad297b0c9c71635b", "63752066a91c469bb17cb3b0c9822f53", "1afe950c10bb40179af4ae10212b769f", "", "67c4c17e413b4d0fa035bb892abf240f", "", ""]} executionInfo={"status": "ok", "timestamp": 1628501051236, "user_tz": -120, "elapsed": 739, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="9e79ef5e-aa1f-4b2b-c23d-6cd68cd2b720" metric = load_metric("sacrebleu") # + id="HGDdBpGfNwxz" # Preprocessing # + id="UnY4aKkPNwx0" colab={"base_uri": "https://localhost:8080/", "height": 113, "referenced_widgets": ["c539809a354b4d8b91fc0962f8897225", "629731748a8b4158ab7ab9ed5d01d1dd", "691f8e5dc8224226bce91db3a0d4f0de", "ff9cdc17850a4e6183d6f7b2c31bd16a", "", "", "", "", "eb7e8ddc9217450d9f8ad8b9c8db82af", "63f778a0af934159a50c760596e80f3f", "d0218ea78c9a4f238e83e35145b3d719", "c7073fe66c864ca0b47e6e146a1cba8e", "06f309eaffcb49c1a5aa792bed9b348c", "", "", "01699bed9a9e45edae815b20fe0163a7", "bf0501456ece4a7d99ecda20d80586a3", "", "", "", "370111135ec5432c85a8662668279c8a", "f983e734d0f940098f1feb550f6dd7bc", "514e3b2e5cd446808a1899e0b49c16e2", "", "3338874a7d8c4e24864f7c8a79b6b494", "b136deab206a440ca0a8954559975622", "44cc2e9972584bea94d861fed1b63aaf", "8588827b5a6e4087be1345210999c9e3", "8b398d8b1d8d459fb0d441c7bec3ac62", "bb9e98a3d29c499f85e4ab3323f0ff30", "b67b13dad4e14233a46ba75f63ced5d1", "bff85e6eb4be444c8b80ce7a450fe90e", "cb044677cf67427985bb4ef576e76db6"]} executionInfo={"status": "ok", "timestamp": 1628501062561, "user_tz": -120, "elapsed": 11330, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="695e7074-cc33-469e-a323-931bfe4f173a" from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) # + id="LtJhGYmEaJAr" 
colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501062563, "user_tz": -120, "elapsed": 19, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="534fc079-ec03-4df3-b97e-63ea8fd58ab0" print(raw_datasets['test']['translation'][0]['sparql']) # + id="5iL-TpK3mUk8" colab={"base_uri": "https://localhost:8080/", "height": 35} executionInfo={"status": "ok", "timestamp": 1628501062566, "user_tz": -120, "elapsed": 16, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="39c012ac-3d8d-462b-bd87-6d546f96fe1d" tokenizer.decode(tokenizer(raw_datasets['test']['translation'][0]['sparql'])['input_ids']) # + id="EZdjRilTNwx1" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501062568, "user_tz": -120, "elapsed": 17, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="263cbad6-1635-4953-a26e-89652d18a3c2" print(raw_datasets['test']['translation'][0]['en'],tokenizer(raw_datasets['test']['translation'][0]['en'])) # + id="tNf9AWfpNwx1" prefix = "translate English to Sparql: " # + tags=[] id="kkidcZ_hNwx2" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501062954, "user_tz": -120, "elapsed": 398, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="a9b17439-f19a-4e02-d507-fa80f5d40b6e" max_input_length = 0 max_target_length = 0 for d in tqdm(raw_datasets['train']['translation']): len_en = len(d['en']) len_qry = len(d['sparql']) if len_en > max_input_length: max_input_length=len_en if len_qry > max_target_length: max_target_length=len_qry # + id="JyEtHVN6Nwx2" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501062955, "user_tz": -120, "elapsed": 11, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="f1b94bf9-87e6-4964-d85e-9965252eefe3" print(max_input_length, max_target_length) # + id="OXiURhR-Nwx3" source_lang = "en" target_lang = "sparql" def preprocess_function(examples): inputs = [] targets= [] for ex in examples["translation"]: inputs.append(prefix + ex[source_lang]) targets.append(ex[target_lang]) model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs # + id="qmaK1UX0Nwx3" colab={"base_uri": "https://localhost:8080/", "height": 81, "referenced_widgets": ["df458fbd27154e9da34ad7c40dac4972", "c761a0828bbe452495badcf480f569fa", "e448844fbe91486f9fcc3fbe0a8335f9", "fde4d7035795479a887f055cd08b023a", "", "3ef1ace60831437fb2513aa76f5d13c7", "", "f43fe3695925442e93b6348796ce9f4f", "607796fb886946e68a2918d30e462295", "", "", "", "2678da2eb22e496db3429559ad05e5a0", "7ef1c54a35b54cdea4ce6bbe80fc4ee6", "abdfd23f71f647899182974013ada503", "965e825363f34036b3b1b95ecae66b32", 
"7c0f763aadbe4403ac8b8b1ab69a7155", "8fdfb3e4c13a4032b2b342866a060c22", "", "", "a0fb2219fb7d4f0ca13ba9e684c669aa", "347f90c961784ee7a66b798e8e07c043"]} executionInfo={"status": "ok", "timestamp": 1628501072550, "user_tz": -120, "elapsed": 9602, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="1a4cd3bc-0e34-4d36-ec8a-ad274efba063" tokenized_datasets = raw_datasets.map(preprocess_function, batched=True) # + id="knRoh_8wNwx4" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501072552, "user_tz": -120, "elapsed": 14, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="9a038412-9bf0-4ccb-9d01-63b2ee215d3f" tokenized_datasets # + id="eUiaZ1Y0Nwx4" # Fine-tuning the model # + id="rUJNfPjvNwx5" colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["6f260fef1c70436d9549801c615d3bb3", "92625a6d0c2144f7b728f682917aeb55", "c8c3ee1999df430f88a06d594e6b3d4e", "fec6412bc353490687c6655fb361f109", "41e671a7afe942d7955a7fc2aac9dc05", "b7a5e60e91be432f90189ae567156191", "329983b598f44153ab1c38490e83d4a5", "abe8706090ce4645add8d60736eb2f31", "cc0240d6105a419e92c9d063e2aebc78", "46f24cb47059491a8db4f8fbcffe8aff", "42a73d2ebd9246f08233dfbe85687608"]} executionInfo={"status": "ok", "timestamp": 1628501083180, "user_tz": -120, "elapsed": 10636, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="8f3c8f5d-46ae-421a-b45f-9f403a99ff3a" from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) # + id="Qt8rECNwNwx5" batch_size = 8 args = Seq2SeqTrainingArguments( model_name, evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=3, num_train_epochs=1, predict_with_generate=True, fp16=True, ) # + id="wHZPXO3lNwx5" data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) # + id="UO8h_ilTLM3Y" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501083181, "user_tz": -120, "elapsed": 18, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="acfa5a4c-fee2-4335-910e-87892ed50fc7" fake_preds = ["hello there", "general kenobi"] fake_labels = [["hello there"], ["general kenobi"]] metric.compute(predictions=fake_preds, references=fake_labels) # + id="7TzcEj3jKSlD" # The last thing to define for our Seq2SeqTrainer is how to compute # the metrics from the predictions. 
We need to define # a function for this, which will just use the metric we loaded earlier, # and we have to do a bit of pre-processing to decode the predictions into texts: import numpy as np def postprocess_text(preds, labels): preds = [pred.replace('?',' ?').replace('.', ' .').strip() for pred in preds] labels = [[label.replace('?',' ?').replace('.', ' .').strip()] for label in labels] return preds, labels def compute_metrics(eval_preds): preds, labels = eval_preds if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels) result = {"bleu": result["score"]} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] result["gen_len"] = np.mean(prediction_lens) result = {k: round(v, 4) for k, v in result.items()} return result # + id="9ZhWPx09Nwx6" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628501093036, "user_tz": -120, "elapsed": 9869, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="6daf2183-1ec8-4cd3-88e1-7ede335987bd" trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) # + colab={"base_uri": "https://localhost:8080/"} id="SgIrXB8xNTld" executionInfo={"status": "ok", "timestamp": 1628501355186, "user_tz": -120, "elapsed": 262167, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="34a7da8b-0e20-42b2-c82d-9549260868eb" # !tensorboard dev upload --logdir runs # + id="qS5Q9hT2Nwx6" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1628504218136, "user_tz": -120, "elapsed": 2862968, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="f056b058-a0ce-44a9-e649-263ca8d64381" # %%time trainer.train() # + id="MdHQ-R8XNwx6" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628504219031, "user_tz": -120, "elapsed": 911, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="a1cb9adf-0c4b-4e9e-b50a-b65fb9bdbbdf" trainer.save_model(model_path) # + id="MeNvtqwLfj4H" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628504219032, "user_tz": -120, "elapsed": 24, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="e65f364a-6cd3-4450-e35f-957b17cbb364" # !ls -l --block-size=M {model_path} # + colab={"base_uri": "https://localhost:8080/"} id="NivCzVmChYft" executionInfo={"status": "ok", "timestamp": 
1628504219033, "user_tz": -120, "elapsed": 13, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="3bc097d9-5e9d-4f6c-80bb-335c1c00cec0" # !mkdir drive/MyDrive/models/{model_path} # + id="ueXkcK7rf5SI" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628504219527, "user_tz": -120, "elapsed": 500, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj7ldLv317JVrv6jtloGcYIgoqAhT08NCy6q5wtrA=s64", "userId": "06955205838397477388"}} outputId="dbbbbcc5-0029-4569-a535-7c19d5b6faf4" # !cp {model_path}/* drive/MyDrive/models/{model_path}/ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA from sklearn.metrics import classification_report, confusion_matrix Smarket = pd.read_csv("/Users/arpanganguli/Documents/Finance/ISLR/Datasets/Smarket.csv", index_col = 'SlNo') Smarket.head() from sklearn.model_selection import train_test_split X = np.array(Smarket[['Lag1', 'Lag2']]) y = np.array(Smarket['Direction']) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2016, random_state=101) ldafit = LDA() ldafit.fit(X_train, y_train) ldafit.priors_ ldafit.means_ ldafit.coef_ ldapred = ldafit.predict(X_test) print(confusion_matrix(y_test, ldapred)) print(classification_report(y_test, ldapred)) # **The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. Therefore, # this model is able to correctly classify 57% of the test data (which I believe is great given the vagaries of the stock # market!!** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd import numpy as np from sklearn import tree from sklearn.datasets import load_iris, load_breast_cancer, load_wine from sklearn.model_selection import cross_validate, cross_val_score, train_test_split, GridSearchCV from sklearn.pipeline import Pipeline from statistics import mean, stdev # Todo: remove once have pip install import sys sys.path.insert(0, 'C:\python_projects\ArithmeticFeatures_project\ArithmeticFeatures') from ArithmeticFeatures import ArithmeticFeatures # todo: fix once have pip install np.random.seed(0) # - # ## Methods to load data # + # This provides methods to test with 3 of the toy datasets provided by sklearn. These are used below # to test the accuracy of models using RotationFeatures. 
def get_iris(): iris = load_iris() X, y = iris.data, iris.target X = pd.DataFrame(X, columns=iris['feature_names']) return X, y def get_breast_cancer(): X, y = load_breast_cancer(return_X_y=True, as_frame=True) return X,y def get_wine(): X, y = load_wine(return_X_y=True, as_frame=True) return X,y # - # ## Examples 1 to 4: Simple examples using the ArithmeticFeatures class # + X,y = get_iris() arith = ArithmeticFeatures() # Example 1: Using fit() then transform(), and get_feature_names() arith.fit(X) X1 = pd.DataFrame(arith.transform(X), columns=arith.get_feature_names()) print("\nExample 1:") display(X1.head()) # Example 2: Using fit_transform() and get_params() X2 = pd.DataFrame(arith.fit_transform(X), columns=arith.get_feature_names()) print("\nExample 2:") display(X2.head()) print("Calling get_params(): ", arith.get_params()) # Example 3: Using a two-column numpy array print("\nExample 3:") X = np.arange(6).reshape(3, 2) print("type of X: ", type(X)) X3 = pd.DataFrame(arith.fit_transform(X), columns=arith.get_feature_names()) display(X3.head()) # Example 4: Using a two-column python matrix print("\nExample 4:") X = np.arange(6).reshape(3, 2).tolist() print("type of X: ", type(X)) X4 = pd.DataFrame(arith.fit_transform(X), columns=arith.get_feature_names()) display(X4.head()) # - # ## Example 5: Comparing the accuracy when using the arithmetic-based generated features vs using only the original features. # + # Test sklearn's decision tree using either the original or the original plus the generated features. In most cases there # is an improvement in the f1 score, as well as a reduction in the variation between folds, and smaller decision trees, and # hence greater interpretability. def test_classification(X, y): # The depth is limited to 4 in order to maintain the interpretability of the trees induced. dt = tree.DecisionTreeClassifier(max_depth=4, random_state=0) scores = cross_validate(dt, X, y, cv=5, scoring='f1_macro', return_train_score=True, return_estimator=True) test_scores = scores['test_score'] train_scores = scores['train_score'] score_name = "f1_score: " avg_test_score = test_scores.mean() scores_std_dev = stdev(test_scores) avg_train_score = train_scores.mean() estimators = scores['estimator'] total_num_nodes = 0 for est in estimators: total_num_nodes += est.tree_.node_count avg_num_nodes = total_num_nodes / len(estimators) print("\n Average f1 score on training data: ", round(avg_train_score,3)) print(" Average f1 score on test data: ", round(avg_test_score,3)) print(" Std dev of f1 scores on test data: ", round(scores_std_dev,3)) print(" Average number of nodes: ", round(avg_num_nodes,3)) # Given a dataframe X, return an extended dataframe, having the same set of rows, but additional, generated columns def get_extended_X(X): arith = ArithmeticFeatures() extended_X = pd.DataFrame(arith.fit_transform(X), columns=arith.get_feature_names()) return extended_X # Given a method to load a dataset, load the dataset and test the accuracy of a sklearn decision tree with and without # the extended features. 
def test_dataset(load_method, file_name): print("\n\n*********************************************") print("Calling for " + file_name) print("*********************************************") X,y = load_method() print("\nUsing original features only") test_classification(X, y) print("\nUsing arithmetic-based features") extended_X = get_extended_X(X) test_classification(extended_X, y) test_dataset(get_iris, "Iris") test_dataset(get_breast_cancer, "Breast Cancer") test_dataset(get_wine, "Wine") # - # ## Example 6: Using an ArithmeticFeatures in a sklearn pipeline # + X,y = get_iris() pipe = Pipeline([('arith', ArithmeticFeatures()), ('dt', tree.DecisionTreeClassifier())]) # Example getting the training score pipe.fit(X,y) sc = pipe.score(X,y) print('Training Accuracy: %.3f' % sc) # Example getting the cross validated accuracy n_scores = cross_val_score(pipe, X, y, scoring='accuracy', n_jobs=-1, error_score='raise') print('Cross validated Accuracy: %.3f (%.3f)' % (mean(n_scores), stdev(n_scores))) # - # ## Example 7: Using a pipeline and grid search to optimize the hyperparameters used by ArithmeticFeatures and by the Decision Tree # + X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) parameters = { 'arith__support_plus': (True, False), 'arith__support_mult': (True, False), 'arith__support_minus': (True, False), 'arith__support_div': (True, False), 'arith__support_min': (True, False), 'arith__support_max': (True, False), 'dt__max_depth': (3,4,5) } gs_clf = GridSearchCV(pipe, parameters) gs_clf.fit(X_train, y_train) s = gs_clf.score(X_test, y_test) print("score: ", s) b = gs_clf.best_params_ b # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} #
# # Introduction to Data Science and Programming
#
# ## Class 11: Scientific programming with numpy
#
# IT University of Copenhagen, Fall 2019
#
# Instructor:
    # - # # Source # This notebook was adapted from: # * Scientific Python course by # * a lecture of (http://jrjohansson.github.io) # * Python for Data Analysis by # + [markdown] slideshow={"slide_type": "slide"} # ## Numpy - multidimensional data arrays # ### Introduction # - # The `numpy` package (module) is used in almost all numerical computation using Python. It is a package that provide high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good. # # To use `numpy` you need to import the module. The standard way is: import numpy as np # In the `numpy` package the terminology used for vectors, matrices and higher-dimensional data sets is *array*. Numpy's array object is called `ndarray`, for N-dimensional array. # # # + slideshow={"slide_type": "slide"} # Ignore this for now, it is only for plotting import matplotlib.pyplot as plt # + [markdown] slideshow={"slide_type": "slide"} # ## Creating `numpy` arrays # - # There are a number of ways to initialize new numpy arrays, for example from # # * a Python list or tuples # * using functions that are dedicated to generating numpy arrays, such as `arange`, `linspace`, etc. # * reading data from files # + [markdown] slideshow={"slide_type": "slide"} # ### From lists # - # For example, to create new vector and matrix arrays from Python lists we can use the `numpy.array` function. # a vector: the argument to the array function is a Python list v = np.array([1,2,3,4]) v # a matrix: the argument to the array function is a nested Python list M = np.array([[1, 2], [3, 4]]) M # + [markdown] slideshow={"slide_type": "slide"} # The vector has 1 dimension, the matrix has 2. We learn this with `numpy.ndim`. # - np.ndim(v), np.ndim(M) # + [markdown] slideshow={"slide_type": "-"} # The `v` and `M` objects are both of the type `ndarray` that the `numpy` module provides. # - type(v), type(M) # + [markdown] slideshow={"slide_type": "slide"} # The difference between the `v` and `M` arrays is their shapes. We can get information about the shape of an array by using the `ndarray.shape` property. # - v.shape M.shape # + [markdown] slideshow={"slide_type": "slide"} # The number of elements in the array is available through the `ndarray.size` property: # - v.size M.size # + [markdown] slideshow={"slide_type": "slide"} # Equivalently, we could use the function `numpy.shape` and `numpy.size` # - np.shape(M) np.size(M) # + [markdown] slideshow={"slide_type": "slide"} # So far the `numpy.ndarray` looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type? # # There are several reasons: # # * Python lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing. # * Numpy arrays are **statically typed** and **homogeneous**. The type of the elements is determined when the array is created. # * Numpy arrays are memory efficient. # * Because of the static typing, fast implementation of mathematical functions such as multiplication and addition of `numpy` arrays can be implemented in a compiled language (C and Fortran is used). 
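# As a quick illustration of the speed difference (a minimal sketch added here, not part of the original lecture), we can time an element-wise square on a plain Python list versus a numpy array. The array size and repeat count below are arbitrary demonstration values:

# +
import timeit

py_list = list(range(100_000))
np_arr = np.arange(100_000)

t_list = timeit.timeit(lambda: [x * x for x in py_list], number=100)
t_arr = timeit.timeit(lambda: np_arr * np_arr, number=100)

print(f"list comprehension: {t_list:.3f} s   numpy: {t_arr:.3f} s")
# -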
# # Using the `dtype` (data type) property of an `ndarray`, we can see what type the data of an array has: # - v.dtype # + [markdown] slideshow={"slide_type": "slide"} # We get an error if we try to assign a value of the wrong type to an element in a numpy array: # - v[0] = "hello" # + [markdown] slideshow={"slide_type": "slide"} # If we want, we can explicitly define the type of the array data when we create it, using the `dtype` keyword argument: # - M = np.array([[1, 2], [3, 4]], dtype=complex) M # Common data types that can be used with `dtype` are: `int`, `float`, `complex`, `bool`, `object`, etc. # # We can also explicitly define the bit size of the data types, for example: `int64`, `int16`, `float128`, `complex128`. # + [markdown] slideshow={"slide_type": "slide"} # ### Using array-generating functions # - # For larger arrays it is inpractical to initialize the data manually, using explicit python lists. Instead we can use one of the many functions in `numpy` that generate arrays of different forms. Some of the more common are: # #### arange # create a range x = np.arange(0, 10, 1) # arguments: start, stop, step. Like the function range for lists! x # + slideshow={"slide_type": "slide"} x = np.arange(-0.5, 0.5, 0.1) #note that here we can use floats and non-integer steps. You could not do this with lists x # + [markdown] slideshow={"slide_type": "slide"} # #### linspace and logspace # - # using linspace, both end points ARE included np.linspace(0, 10, 25) # + slideshow={"slide_type": "slide"} np.logspace(0, 1, 5, base=10) # - import math np.logspace(0, 1, 5, base=math.exp(1)) # + [markdown] slideshow={"slide_type": "slide"} # #### Random data # - # uniform random numbers in [0,1] np.random.rand(5,5) # + [markdown] slideshow={"slide_type": "slide"} # standard normal distributed random numbers $\mu = 0$ and $\sigma^2=1$ # - np.random.randn(5,5) # + [markdown] slideshow={"slide_type": "slide"} # There is a huge variety of functions that you can use: # # # # + [markdown] slideshow={"slide_type": "slide"} # and you can generate samples from all the major distributions. # Have a look at the documentation at https://docs.scipy.org/doc/numpy-1.14.1/reference/routines.random.html: # # + [markdown] slideshow={"slide_type": "slide"} # #### zeros and ones # - # You can also create arrays filled with the same element: np.zeros((3,3)) np.ones((3,3)) # + slideshow={"slide_type": "slide"} np.zeros((3,3))+2 # We will see more matrix and vector operations later # - np.ones((3,3))*4 # We will see more matrix and vector operations later # + [markdown] slideshow={"slide_type": "slide"} # The function `numpy.empty` initializes an "empty" array. # + slideshow={"slide_type": "-"} np.empty((3,3)) # - # It’s not safe to assume that np.empty will return an array of all zeros. In many cases it will return uninitialized garbage values! 
# + [markdown] slideshow={"slide_type": "slide"} # ## Manipulating arrays # ### Indexing # We can index elements in an array using square brackets and indices: # - v = np.array([1,2,3,4]) M = np.array([[1, 2], [3, 4]]) # v is a vector, and has only one dimension, taking one index v[0] # + slideshow={"slide_type": "slide"} # M is a matrix, or a 2 dimensional array, taking two indices M[0,1] # - # # + [markdown] slideshow={"slide_type": "slide"} # If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array) # - M M[1] # + [markdown] slideshow={"slide_type": "slide"} # *** # ### Quick exercise # If I have a list of lists, what is the syntax to access the first element of the first list? Pay attention to the difference with arrays! # *** # + [markdown] slideshow={"slide_type": "slide"} # ### Index slicing # - # Index slicing is the technical name for the syntax `M[lower:upper:step]` to extract part of an array: A = np.array([1,2,3,4,5]) # It works in the same way as for **lists**. A[1:3] A[1:3] = [-2,-3] A A[::] # lower, upper, step all take the default values # + slideshow={"slide_type": "slide"} A[::2] # step is 2, lower and upper defaults to the beginning and end of the array # - A[:3] # first three elements A[3:] # elements from index 3 # + [markdown] slideshow={"slide_type": "slide"} # Negative indices counts from the end of the array (positive index from the beginning): # - A[-1:] A[-2:] # + [markdown] slideshow={"slide_type": "slide"} # Index slicing works exactly the same way for multidimensional arrays: # + A = np.array([[n+m*10 for n in range(5)] for m in range(5)]) A # - # a block from the original array A[1:4, 1:4] # + [markdown] slideshow={"slide_type": "slide"} # # + [markdown] slideshow={"slide_type": "slide"} # The devil is in the details! # - A[4:,:] A[4,:] # + [markdown] slideshow={"slide_type": "slide"} # ### An index slice only creates a view! # An important distinction from lists is that array slices are *views* on the original array: The data is not copied, and any modifications to the view will be reflected in the source array! # - arr = np.arange(10) arr arr_slice = arr[5:8] arr_slice arr_slice[:] = 666 arr arr[:] = 999 arr_slice # If you want a copy instead, use: `arr[5:8].copy()` # + [markdown] slideshow={"slide_type": "slide"} # ### Fancy indexing # Fancy indexing is the name for when an array or list is used in-place of an index: # - A row_indices = [1, 2, 3] A[row_indices,:] # this selects the second, third and fourth row of A, and all its columns A[row_indices] #this is equivalent to the expression above # Fancy indexing, unlike slicing, always copies the data into a new array. # + slideshow={"slide_type": "slide"} row_indices = [1, 2, 3] A # + slideshow={"slide_type": "-"} col_indices = [1, 2, -1] # remember, index -1 means the last element A[row_indices, col_indices] # - # *** # ### Quiz # If this is matrix `A`: # `[[ 0, 1, 2, 3, 4], # [10, 11, 12, 13, 14], # [20, 21, 22, 23, 24], # [30, 31, 32, 33, 34], # [40, 41, 42, 43, 44]]` # # Then what is the output of `A[[1,2,3], [1,-1,2]]`? # *** # + # A[[1,2,3], [1,-1,2]] # Uncomment to see the solution. 
# - # #### Masks # + [markdown] slideshow={"slide_type": "slide"} # We can also use index masks: If the index mask is an Numpy array of data type `bool`, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element: # - B = np.array([n for n in range(5)]) B row_mask = np.array([True, False, True, False, False]) B[row_mask] # same thing row_mask = np.array([1,0,1,0,0], dtype=bool) #1 is true, 0 is false B[row_mask] # + [markdown] slideshow={"slide_type": "slide"} # This feature is very useful to conditionally select elements from an array, using for example comparison operators: # - x = np.arange(-1, 5, 0.5) x mask = (2 < x) & (x < 4.5) # Always use parentheses for mask conditions. Only then can you join them with & and | mask x[mask] # The Python keywords `and` and `or` do not work with boolean arrays. Use instead `&` and `| `. # + [markdown] slideshow={"slide_type": "slide"} # ## Functions for extracting data from arrays and creating arrays # - # ### where # The index mask can be converted to position index using the `where` function x = np.arange(-1, 5, 0.5) mask = (2 < x) & (x < 4.5) indices = np.where(mask) print(type(indices)) indices x[indices] # this indexing is equivalent to the fancy indexing x[mask] # + slideshow={"slide_type": "slide"} x # - # Setting values with boolean arrays works in a common-sense way. To set all of the negative values to 0 we need only do: x[x < 0] = 0 x # + [markdown] slideshow={"slide_type": "slide"} # ## File I/O # - # ### Comma-separated values (CSV) # A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). # !head stockholm_temperatures.dat # head is a shell command that displays the beginning of a file # The file stockholm_temperatures.dat contains the temperature in Stockholm since 1800 until 2011. The first three columns are respectively year, month and day, and the last column is the temperature. # + [markdown] slideshow={"slide_type": "slide"} # To read data from such files into Numpy arrays we can use the `numpy.loadtxt` function. For example: # - data = np.loadtxt('files/stockholm_temperatures.dat') data.shape # + slideshow={"slide_type": "slide"} # Ignore this code for now - we will explain it later. fig, ax = plt.subplots(figsize=(14,4)) ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,3]) ax.set_title('Temperatures in Stockholm') ax.set_xlabel('Year') ax.set_ylabel('Temperature (C)'); # + [markdown] slideshow={"slide_type": "slide"} # Using `numpy.savetxt` we can store a Numpy array to a file in CSV format: # - M = np.random.rand(3,3) M np.savetxt("random-matrix.csv", M) # !head random-matrix.csv np.savetxt("random-matrix.csv", M, fmt='%.5f') # fmt specifies the format # !head random-matrix.csv # + [markdown] slideshow={"slide_type": "slide"} # ### Numpy's native file format (uncompressed) # - # `np.save` and `np.load` are the two workhorse functions for efficiently saving and loading array data on disk. Arrays are saved by default in an uncompressed raw binary format with file extension `.npy`. 
np.save("random-matrix.npy", M) # !file random-matrix.npy # file is a shell command that displays the file type np.load("random-matrix.npy") # + [markdown] slideshow={"slide_type": "slide"} # ### Numpy's native file format (compressed) # - # You save multiple arrays in a zip archive using `np.savez` and passing the arrays as keyword arguments: np.savez('array_archive.npz', a=M, b=data) # When loading an .npz file, you get back a dict-like object which loads the individual arrays: arch = np.load('array_archive.npz') arch['a'] # + [markdown] slideshow={"slide_type": "slide"} # ### Special values # - a = np.array([-1, 0, 1, 2]) a = a/0 a # [-1/0 0/0 1/0 2/0] # + slideshow={"slide_type": "slide"} np.nan == np.nan # nan is not equal to anything, not even nan # + slideshow={"slide_type": "-"} np.isnan(a) # nan is nan # + slideshow={"slide_type": "-"} np.isinf(a) # nan is not infinite # + slideshow={"slide_type": "-"} np.isfinite(a) # nan is not finite # + [markdown] slideshow={"slide_type": "slide"} # #### Selecting subsets of a real data set # - # We can select subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above), and run computations later. # # For example, if we want to calculate the average temperature in 1971 only, we can create a mask in the following way: # + slideshow={"slide_type": "slide"} # reminder, the temperature dataset is stored in the data variable: np.shape(data) # - mask = (data[:,0] == 1971) data[mask,0] # + slideshow={"slide_type": "slide"} data[mask,3] # + slideshow={"slide_type": "slide"} print("This is the mean temperature in Stockholm in 1971: "+str(np.mean(data[mask,3]))) # + [markdown] slideshow={"slide_type": "slide"} # If we are interested in the average temperature only in a particular month, say February, then we can create a index mask and use it to select only the data for that month using: # - np.unique(data[:,1]) # the month column takes values from 1 to 12 mask_feb = (data[:,1] == 2) # the temperature data is in column 3 np.mean(data[mask_feb,3]) # + [markdown] slideshow={"slide_type": "slide"} # #### More functions: sum, prod, and trace # - d = np.arange(0, 10) d # sum up all elements np.sum(d) # product of all elements np.prod(d+1) # cummulative sum np.cumsum(d) # cummulative product np.cumprod(d+1) # + [markdown] slideshow={"slide_type": "slide"} # When you have two dimensional objects, you can specificy along which dimension (axis) you want to perform the sum (or mean, or the maximum, etc.) # # - x = np.array([[1, 1], [2, 2]]) print(x.sum(axis=0)) # columns (first dimension) print(x[:, 0].sum(), x[:, 1].sum()) print(x.sum(axis=1)) # rows (second dimension) print(x[0, :].sum(), x[1, :].sum()) # + [markdown] slideshow={"slide_type": "slide"} # ## Reshaping, resizing and stacking arrays # - # The shape of an Numpy array can be modified without copying the underlaying data, which makes it a fast operation even for large arrays. A n, m = A.shape B = A.reshape((1,n*m)) B # + slideshow={"slide_type": "slide"} B[0,0:5] = 5 # modify the array B # - A # and the original variable is also changed. B is only a different view of the same data # + [markdown] slideshow={"slide_type": "slide"} # We can also use the function `flatten` to make a higher-dimensional array into a vector. But this function creates a copy of the data. 
# - B = A.flatten() B B[0:5] = 10 B A # now A has not changed, because B's data is a copy of A's, not refering to the same data # + [markdown] slideshow={"slide_type": "slide"} # ## Adding a new dimension: newaxis # - # With `newaxis`, we can insert new dimensions in an array, for example converting a vector to a column or row matrix: v = np.array([1,2,3]) v.shape # + slideshow={"slide_type": "slide"} # make a column matrix of the vector v v[:,np.newaxis] # - # column matrix v[:,np.newaxis].shape # row matrix v[np.newaxis,:].shape # + [markdown] slideshow={"slide_type": "slide"} # ## Stacking and repeating arrays # - # Using function `repeat`, `tile`, `vstack`, `hstack`, and `concatenate` we can create larger vectors and matrices from smaller ones: # ### tile and repeat a = np.array([[1, 2], [3, 4]]) # repeat each element 3 times np.repeat(a, 3) # tile the matrix 3 times np.tile(a, 3) # + [markdown] slideshow={"slide_type": "slide"} # ### concatenate # - b = np.array([[5, 6]]) np.concatenate((a, b), axis=0) np.concatenate((a, b.T), axis=1) # + [markdown] slideshow={"slide_type": "slide"} # ### hstack and vstack # - np.vstack((a,b)) np.hstack((a,b.T)) # + [markdown] slideshow={"slide_type": "slide"} # ## Using arrays in conditions # - # When using arrays in conditions,for example `if` statements and other boolean expressions, one needs to use `any` or `all`, which requires that any or all elements in the array evalutes to `True`: M if (M > 5).any(): print("at least one element in M is larger than 5") else: print("no element in M is larger than 5") if (M > 5).all(): print("all elements in M are larger than 5") else: print("all elements in M are not larger than 5") # + [markdown] slideshow={"slide_type": "slide"} # ## Using numpy on a photo # - # This code loads a photo. from skimage import io photo = io.imread('files/balloons.jpg') type(photo) # + slideshow={"slide_type": "-"} photo.shape # + slideshow={"slide_type": "slide"} np.ndim(photo) # - photo # + slideshow={"slide_type": "slide"} # This code plots a photo. Ignore this for now - we will cover plotting in a later class. plt.imshow(photo); # + slideshow={"slide_type": "slide"} plt.imshow(photo[20:120, 350:420]); # + slideshow={"slide_type": "slide"} plt.imshow(photo[::-1]); # + slideshow={"slide_type": "slide"} plt.imshow(photo[::2, ::2]); # + slideshow={"slide_type": "slide"} # when np.where is used with 3 arguments, it replaces all True elements with the second, and all False elements with the third photo_masked = np.where(photo > 155, photo, 0) plt.imshow(photo_masked); # + slideshow={"slide_type": "slide"} photo_onlyblue = np.copy(photo) photo_onlyblue[:, :, :-1] = 0 plt.imshow(photo_onlyblue); # - photo_bw = np.copy(photo) mask_dark = photo.mean(axis=2) < 127 photo_bw[mask_dark, :] = 0 photo_bw[~mask_dark, :] = 255 plt.imshow(photo_bw); # + [markdown] slideshow={"slide_type": "slide"} # ## Linear algebra # - # Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication. # ### Scalar-array operations # We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers. 
v1 = np.arange(0, 5) v1 * 2 v1 + 2 # + slideshow={"slide_type": "slide"} A * 2 # - A + 2 # + [markdown] slideshow={"slide_type": "slide"} # ### Element-wise array-array operations # - # When we add, subtract, multiply and divide arrays with each other, the default behaviour is **element-wise** operations: A * A # element-wise multiplication v1 * v1 # + [markdown] slideshow={"slide_type": "slide"} # If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row: # - A.shape, v1.shape A * v1 # + [markdown] slideshow={"slide_type": "slide"} # ### Matrix algebra # - # What about matrix mutiplication? There are two ways. We can either use the `dot` function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments: np.dot(A, A) np.dot(A, v1) np.dot(v1, v1) # + [markdown] slideshow={"slide_type": "slide"} # Alternatively, we can cast the array objects to the type `matrix`. This changes the behavior of the standard arithmetic operators `+, -, *` to use matrix algebra. # - M = np.matrix(A) v = np.matrix(v1).T # make it a column vector v M * M # + slideshow={"slide_type": "slide"} M * v # - # inner product v.T * v # with matrix objects, standard matrix algebra applies v + M*v # + [markdown] slideshow={"slide_type": "slide"} # If we try to add, subtract or multiply objects with incompatible shapes we get an error: # - v = np.matrix([1,2,3,4,5,6]).T M.shape, v.shape M * v # See also the related functions: `inner`, `outer`, `cross`, `kron`, `tensordot`. Try for example `help(kron)`. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="l4H_FcOA4zPk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="38a16b45-896d-4b5e-9899-f5d199cf4743" import tensorflow as tf import numpy as np a = tf.constant(np.arange(1,9), dtype=tf.int32) print(a) # + id="OKzcyl4d8SDj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="58e5ba0c-0c41-48ae-ef3a-8bb9829781ed" import tensorflow as tf import numpy as np a = np.arange(1,3, dtype=np.int32) b = tf.convert_to_tensor(a) print(a) # + id="pfOwRS_j-iOJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="9e51e73d-42a6-4b4c-8672-2c0670a7183b" import tensorflow as tf a = tf.random.normal([5,5,5], mean=0, stddev=2) b = tf.random.truncated_normal([5,5,5], mean=0, stddev=2) c = tf.random.uniform([3,3], minval=-2, maxval=2) print(c) # + id="EjgEc6Gf6Htw" colab_type="code" colab={} import tensorflow as tf tf.ones([3,3,3]) tf.zeros([6,7],dtype=tf.int16) c=tf.fill([3,3,3,3], 0) #print(c) # + id="-nThi983WK8x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="be5e0b4f-5442-410c-ac9a-631b9b6fd028" x1 = tf.constant ([1., 2., 3.], dtype=tf.float64) print(x1) x2 = tf.cast (x1, tf.int32) print(x2) print (tf.reduce_min(x2), tf.reduce_max(x2)) # + id="c2CKCGNaWN-l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="0508f943-1049-416b-c0c4-a28ddd170429" x=tf.constant( [ [ 1, 2, 3], [ 2,2,3]]) print(x) print(tf.reduce_mean( x )) print(tf.reduce_sum( x, axis=1 )) # + id="G62hhws2XGzr" colab_type="code" colab={} w = tf.Variable(tf.random.normal([2, 2], mean=0, stddev=1)) # + id="BckqgzqdZjo_" colab_type="code" 
colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="20c96484-8500-4041-c4c9-aa847ae9fd4c" a = tf.ones([1, 3]) b = tf.fill([1, 3], 3.) print(a) print(b) print(tf.add(a,b)) print(tf.subtract(a,b)) print(tf.multiply(a,b)) print(tf.divide(b,a)) # + id="V7K0YWJAaHEw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="9a909fd8-f812-4840-f836-42dd77d228e4" a = tf.fill([2, ], 3.) print(a) print(tf.pow(a, 3)) print(tf.square(a)) print(tf.sqrt(a)) b = tf.fill([1, 2], 3.) print(b) print(tf.square(b)) print(tf.pow(b,3)) print(tf.sqrt(b)) # + id="XNw8cYzlarX6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="33c09304-9eb4-4bc4-a375-5ae6687b9ef9" a = tf.ones([3, 2]) b = tf.fill([2, 3], 3.) print(tf.matmul(a,b)) # + id="lNSQY4ZPb602" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="aab68644-234b-4181-9cc1-1045a2a534d0" import tensorflow as tf features = tf.ones([5, 4]) labels = tf.constant([12,22,44,21]) dataset = tf.data.Dataset.from_tensor_slices((features[:,:1], features[:,1:])) print(dataset) for element in dataset: print(element) # + id="p6ikAeG2eSf_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7187a972-e53e-4db7-afa8-cb69634bc3f4" with tf.GradientTape() as tape: # w = tf.Variable(tf.constant(3.0)) # loss = tf.pow(w,2) w = tf.Variable(3.0) loss = tf.square(w) grad = tape.gradient(loss, w) print(grad) # + id="Cp1L1h7chQC1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="330e42d3-84aa-4b6f-ca52-2ccee695a17f" seq = ['one', 'two', 'three'] i = 9 for j, element in enumerate(seq): print(i, element) print(j, element) # + id="Zvzg2R3ChsxT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="c5407bd0-b967-4014-e808-c353a266895c" classes = 4 labels = tf.constant([1,2,5]) # max value 5 is out of range labels = tf.constant([1,2,4]) # max value is 4 at a maximum labels = tf.constant([-1, 0, 4]) # min value -1 is out of range, too output = tf.one_hot(labels, depth=classes) print(output) # + id="To2HY4wCLfVd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="2e3e2db6-00da-470e-98be-519a702073a3" import tensorflow as tf t = tf.nn.softmax([2.,3.,1.]) print(t) t_ = tf.constant([2.,3,1]) t__ = tf.nn.softmax(t_) print(t__) # They are definitely the same thing!!!!! 
# + id="Zy704UN7ioD1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0a7cbd40-5455-4dad-9549-7e5cbf7f4004" y = tf.constant([1., 2, -8]) # The value should be half, bfloat16, float, double y_pro = tf.nn.softmax(y) print("After softmax, y_pro is: ", y_pro) # + id="NohojfUGlIxt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="62d40d08-75cd-4bb8-e0fe-ab24234806d2" w = tf.Variable(3) w.assign_sub(1) print(w) # + id="RweP5mLEm_cl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="c57c1de8-3bb2-4fbe-a52c-fae329a6294c" import numpy as np test = np.array([[1, 2, 3], [2, 3, 4], [5, 4, 3], [8, 7, 2]]) print(test) print( tf.argmax (test, axis=0)) # 返回每一列(经度)最大值的索引 print( tf.argmax (test, axis=1)) # 返回每一行(纬度)最大值的索引 print(tf.argmax(test)) # default axis is axis 0 # + id="YxsmLsMznS3H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="7139c305-b1ba-49e6-be92-8c37320e0cd8" import numpy as np np.random.seed(4) a = np.random.randint(100, size=(2,2)) print(a) # + id="_Aer-HJVmzyc" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + #python deep_dream.py path_to_your_base_image.jpg prefix_for_results #python deep_dream.py img/mypic.jpg results/dream from __future__ import print_function from keras.preprocessing.image import load_img, img_to_array import numpy as np import scipy import argparse from keras.applications import inception_v3 from keras import backend as K from keras.preprocessing import image import keras import tensorflow as tf # + ## load striped CAV, layer 9 import os import cav working_dir = '/home/tyler/Desktop/tcav_on_azure' subpath = 'striped_sub_1-random500_0-mixed9' cav_path = 'cavs/' + subpath + '-linear-0.1.pkl' path = os.path.join(working_dir, cav_path) this_cav = cav.CAV.load_cav(path) layer_9_cav = this_cav.cavs[0] # + K.set_learning_phase(0) # Build the InceptionV3 network with our placeholder. # The model will be loaded with pre-trained ImageNet weights. model = inception_v3.InceptionV3(weights='imagenet',include_top=False) dream = model.input print('Model loaded.') # + # Playing with these hyperparameters will also allow you to achieve new effects step = 0.05 # Gradient ascent step size num_octave = 1 # Number of scales at which to run gradient ascent octave_scale = 1.4 # Size ratio between scales iterations = 50 # Number of ascent steps per scale max_loss = 100000000000 base_image_path = '/home/tyler/Desktop/tcav_on_azure/concepts/horse_sub_1/img100.jpg' #result_prefix = '/home/tyler/Desktop/tcav_on_azure/results/test' settings = { 'features': { 'mixed9': 10 },} # - img = preprocess_image(base_image_path) # + layer_9_cav = layer_9_cav.reshape(-1,1) layer_9_cav_K = K.constant(layer_9_cav) layer_dict = dict([(layer.name, layer) for layer in model.layers]) sess = K.get_session() # Define the loss. #loss = K.variable(0.) #for layer_name in settings['features']: # Add the L2 norm of the features of a layer to the loss. # assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.' # coeff = settings['features'][layer_name] # x = layer_dict[layer_name].output # acts = x # We avoid border artifacts by only involving non-border pixels in the loss. 
# scaling = K.prod(K.cast(K.shape(x), 'float32')) #loss += 3 # if K.image_data_format() == 'channels_first': # loss += coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling # else: # loss += coeff * K.sum(K.square(x[:, 2: -2, 2: -2, :])) / scaling # Compute the gradients of the dream wrt the loss. #grads = K.gradients(loss, model.input)[0] # Normalize gradients. #grads /= K.maximum(K.mean(K.abs(grads)), K.epsilon()) # Set up function to retrieve the value of the loss and gradients given an input image. #outputs = [loss, grads] #fetch_loss_and_grads = K.function([model.input], outputs) loss_2 = K.variable(0.) for layer_name in settings['features']: assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.' coeff = settings['features'][layer_name] acts = layer_dict[layer_name].output #flat_act = np.reshape(np.asarray(acts).squeeze(), -1) #flat_act_norm = keras.utils.normalize(flat_act) #loss2 = euclidean_distance(vec_norm(layer_9_cav),flat_act_norm) #loss_2 += K.sum(K.square(K.reshape(acts,(131072,)) - layer_9_cav_K)) #loss_2 += K.dot(K.reshape(acts,(1,131072)),K.transpose(layer_9_cav_K)) loss_2 -= K.dot(K.reshape(acts,(1,131072)),layer_9_cav_K) #loss_2 = layer_9_cav_K #loss_2 = loss grads_2 = K.gradients(loss_2, model.input)[0] grads_2 /= K.maximum(K.mean(K.abs(grads_2)), K.epsilon()) outputs_2 = [loss_2, grads_2, acts] fetch_loss_and_grads_2 = K.function([model.input], outputs_2) def eval_loss_and_grads(x): outs = fetch_loss_and_grads_2([x]) loss_value = outs[0] grad_values = outs[1] return loss_value, grad_values #def eval_loss_and_grads(x): # outs = fetch_loss_and_grads(x) # loss_value = get_loss(x) # grads = K.gradients(loss, model.input)[0] # grads /= K.maximum(K.mean(K.abs(grads)), K.epsilon()) # return loss_value, grads def gradient_ascent(x, iterations, step, max_loss=None): for i in range(iterations): loss_value, grad_values = eval_loss_and_grads(x) if max_loss is not None and loss_value > max_loss: break print('..Loss value at', i, ':', loss_value) #print(loss.eval()) x -= step * grad_values return x def save_img(img, fname): pil_img = deprocess_image(np.copy(img)) scipy.misc.imsave(fname, pil_img) # - # + tf.logging.set_verbosity(0) img_pic = image.load_img(base_image_path, target_size=(299, 299)) img = image.img_to_array(img_pic) img = np.expand_dims(img, axis=0) img = inception_v3.preprocess_input(img) #original_img = np.copy(img) img = gradient_ascent(img,iterations=iterations,step=step,max_loss=max_loss) save_img(img, fname='results/test_1.png') # - img_path = 'results/test_1.png' test_img = image.load_img(img_path, target_size=(299, 299)) test_img # + tf.logging.set_verbosity(0) img_pic = image.load_img(base_image_path, target_size=(299, 299)) img = image.img_to_array(img_pic) img = preprocess_image(base_image_path) if K.image_data_format() == 'channels_first': original_shape = img.shape[2:] else: original_shape = img.shape[1:3] successive_shapes = [original_shape] for i in range(1, num_octave): shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape]) successive_shapes.append(shape) successive_shapes = successive_shapes[::-1] original_img = np.copy(img) shrunk_original_img = resize_img(img, successive_shapes[0]) for shape in successive_shapes: #print('Processing image shape', shape) #img = resize_img(img, shape) img = gradient_ascent(img, iterations=iterations, step=step, max_loss=max_loss) #upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape) #same_size_original = resize_img(original_img, shape) #lost_detail = 
same_size_original - upscaled_shrunk_original_img #img += lost_detail #shrunk_original_img = resize_img(original_img, shape) save_img(img, fname='results/test_1.png') # + #img # - # ## Working layer_name = 'mixed9' layer_out = layer_dict[layer_name].output layer_out img_in = shrunk_original_img img_in.shape new_acts = fetch_loss_and_grads_2([img_in])[0] new_acts img_in layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: img_in}) layer_9_acts[0][5][0] new_acts[0][5][0] # ## New Loss def get_loss(this_img): layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: this_img}) flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1) loss += euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act)) return loss get_loss(original_img) original_img.shape sess = K.get_session() #my_graph = tf.get_default_graph() # + #my_graph.get_collection() # - sess model.input # + this_img = original_img loss = K.variable(0.) layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{model.input: this_img}) flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1) loss += euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act)) #K.clear_session() # + #loss # + #loss.eval(sess) # + #K.clear_session() # - # + #endpoints_v3 # - model.input # + #img.shape # - layer_9_acts = layer_dict[layer_name].output layer_9_acts x.shape sess.run(bottlenecks_tensors[bottleneck_name], {self.ends['input']: examples}) # + #bottlenecks_tensors # - layer_9_cav img.shape model.input # + #sess.run(bottlenecks_tensors[bottleneck_name],{model.input: img}) # + #layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: img}) #flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1) # + #layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: x}) #flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1) #euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act)) # - # ## Static functions # + def preprocess_image(image_path): # Util function to open, resize and format pictures # into appropriate tensors. img = load_img(image_path) img = img_to_array(img) img = np.expand_dims(img, axis=0) img = inception_v3.preprocess_input(img) return img def deprocess_image(x): # Util function to convert a tensor into a valid image. if K.image_data_format() == 'channels_first': x = x.reshape((3, x.shape[2], x.shape[3])) x = x.transpose((1, 2, 0)) else: x = x.reshape((x.shape[1], x.shape[2], 3)) x /= 2. x += 0.5 x *= 255. 
x = np.clip(x, 0, 255).astype('uint8') return x def resize_img(img, size): img = np.copy(img) if K.image_data_format() == 'channels_first': factors = (1, 1, float(size[0]) / img.shape[2], float(size[1]) / img.shape[3]) else: factors = (1, float(size[0]) / img.shape[1], float(size[1]) / img.shape[2], 1) return scipy.ndimage.zoom(img, factors, order=1) def euclidean_distance(a,b): return np.linalg.norm(a-b) def vec_norm(vec): return vec / np.linalg.norm(vec) def get_bottleneck_tensors(): """Add Inception bottlenecks and their pre-Relu versions to endpoints dict.""" graph = tf.get_default_graph() bn_endpoints = {} for op in graph.get_operations(): # change this below string to change which layers are considered bottlenecks # use 'ConcatV2' for InceptionV3 # use 'MaxPool' for VGG16 (for example) if 'ConcatV2' in op.type: name = op.name.split('/')[0] bn_endpoints[name] = op.outputs[0] return bn_endpoints endpoints_v3 = dict( input=model.inputs[0].name, input_tensor=model.inputs[0], logit=model.outputs[0].name, prediction=model.outputs[0].name, prediction_tensor=model.outputs[0], ) bottlenecks_tensors = get_bottleneck_tensors() bottleneck_name = 'mixed9' #Process: # Load the original image. # Define a number of processing scales (i.e. image shapes), from smallest to largest. # Resize the original image to the smallest scale. # For every scale, starting with the smallest (i.e. current one): # Run gradient ascent # Upscale image to the next scale # Reinject the detail that was lost at upscaling time # Stop when we are back to the original size. #To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, # and compare the result to the (resized) original image. # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Aula 3: Python - um pouco mais - avançado # # + [markdown] slideshow={"slide_type": "subslide"} # Exercício da semana passada, resolvendo usando alguns conceitos (filosóficos) de python: # # - DRY: don't repeat yourself (não se repita) # - Loops para simplificar # + slideshow={"slide_type": "subslide"} ## carregar arquivo com a lista de especies lista_especies = open('../dados/lista_de_especies.txt', 'r').read() lista_especies = lista_especies.split('\n') # pegamos as ocorrências únicas, ou seja, o nome de cada espécie que aparece na lista todas_especies = set(lista_especies) # criamos um dicionario vazio que será preenchido mais pra frente dicionario_especies = {} # iteramos pelas especies [DRY+loops] for especie in todas_especies: if especie != '': # podemos obter a ocorrência de uma variavel em um lista usando o método count() ocorrencias = lista_especies.count(especie) # adicionamos um novo registro ao dicionário dicionario_especies[especie] = ocorrencias # vamos exibir na tela a quantidade de organismos de cada especie de forma iterativa, utilizando o método .items(): for especie,ocorrencia in dicionario_especies.items(): print(f"A espécie {especie} possui {ocorrencia} ocorrencias.") # + [markdown] slideshow={"slide_type": "subslide"} # **O que veremos hoje?** # # - como declarar funções e importar pacotes em python # - pacote para manipulação numérica de informações # - pacote para visualização de dados (1D) # # + [markdown] slideshow={"slide_type": "subslide"} endofcell="--" # **material de apoio** # # - numpy guia 
de usuário: https://numpy.org/doc/stable/numpy-user.pdf # - numpy: https://ncar-hackathons.github.io/scientific-computing/numpy/intro.html # - numpy + scipy: https://github.com/gustavo-marques/hands-on-examples/blob/master/scientific-computing/numpy_scipy.ipynb # - matplotlib documentação: https://matplotlib.org/3.3.2/contents.html # - # -- # + [markdown] slideshow={"slide_type": "slide"} # # Python - um pouco mais - avançado # # - pequenos problemas, pequenos trechos de códigos # - funções e métodos # + [markdown] slideshow={"slide_type": "subslide"} # ## Funções (functions) # # - bloco de código independente # - uma ou mais ações específicas (funções) # - retorna ou não alguma informação # # # # # # # + [markdown] slideshow={"slide_type": "subslide"} # ## Funções (functions) # # - bloco de código independente # - uma ou mais ações específicas (funções) # - retorna ou não alguma informação # - sintaxe: # # # ![image.png](https://www.fireblazeaischool.in/blogs/wp-content/uploads/2020/06/Capture-1.png) # # Exemplo prático: # + slideshow={"slide_type": "subslide"} def K_para_C(T_em_K): # sequencia de bloco de códigos identados print('Estamos dentro de uma função!') # conversão T_em_C = T_em_K - 273.15 return T_em_C # + slideshow={"slide_type": "-"} temp = 273.15 temperatura = K_para_C(temp) print(temperatura) # + [markdown] slideshow={"slide_type": "subslide"} # Podemos deixar uma função mais completa adicionando informações sobre como ela funciona. Para isso, usamos as ```docstrings```: # + slideshow={"slide_type": "-"} def K_para_C(T_em_K): """ Este conjunto de texto entre aspas triplas se chamada docstring. É usada para passar informações importantes sobre a função, como: valores que ela recebe e retorna, o que ela faz, se é baseada em algum artigo científico, por exemplo, e muito mais. Quando tivermos dúvida sobre alguma função, podemos buscar o docstring dela utilizando o comando: K_para_C? No nosso caso, essa é uma função para converter uma temperatura de Kelvin para Celsius argumentos ---------- T_em_K: float Temperatura em Kelvin para ser convertida. returns ------- T_em_C: float Temperatura em graus Celsius. """ # sequencia de bloco de códigos identados print('Estamos dentro de uma função!') # conversão T_em_C = T_em_K - 273.15 return T_em_C # + slideshow={"slide_type": "-"} # K_para_C? # + [markdown] slideshow={"slide_type": "slide"} # # Pacotes (packages ou libraries) # # - coleção de funções # - objetivo específico # - podemos criar um # - podemos (e iremos sempre) importar um pacote # + [markdown] slideshow={"slide_type": "slide"} # # NumPy: pacote numérico # # - background de qualquer pacote científico hoje em dia # - utilizado para operações numéricas (matriciais ou escalares) # - alta performance # - Importamos da seguinte forma: # + slideshow={"slide_type": "-"} import numpy as np # + [markdown] slideshow={"slide_type": "subslide"} # # **Conceitos** # # Lembrando alguns conceitos matemáticos: # # - Vetores (N,): # # $V = \begin{bmatrix} # 1 & 2 & 3 # \end{bmatrix}$ # # - Matrizes (NxM, linhasxcolunas): # # $M = \begin{bmatrix} # 1 & 2 & 3\\ # 4 & 5 & 6 # \end{bmatrix}$ # # Vamos ver alguns exemplos e entender na prática a funcionalidade do NumPy. 
# + slideshow={"slide_type": "subslide"} # primeiro importamos e só o precisamos fazer uma vez em cada código import numpy as np V = np.array([1, 2, 3]) # vetor: 1D M = np.array([[1, 2, 3], [4, 5, 6]]) # matriz 2D M # + [markdown] slideshow={"slide_type": "subslide"} # Apesar de falarmos em vetores e matrizes, para o numpy é tudo a mesma coisa. O que diferencia é apenas a dimensão. # # Nota: ```ndarray``` significa n-dimensional array (matriz com n-dimensões) # - type(M), type(V) # + [markdown] slideshow={"slide_type": "subslide"} # Mas como saber quantas dimensões uma matriz tem? # + slideshow={"slide_type": "-"} V.ndim, M.ndim # + [markdown] slideshow={"slide_type": "subslide"} # Podemo verificar o tamanho de uma matriz utilizando o método .shape. Se quisermos saber o tamanho geral da matriz (número total de itens), usamos o método. size: # + slideshow={"slide_type": "-"} # V.shape, V.size print(V) # + slideshow={"slide_type": "-"} # M.shape, M.size print(M) # + [markdown] slideshow={"slide_type": "subslide"} # **Utilidade** # # - muito similar a listas # - numpy nos permite trabalhar com: # - operações matriciais # - performance computacional alta # # **Nota**: # # - matrizes somente com mesmo tipo de variável (dtype) # - o tipo de variável mais restritivo comanda o dtype da matriz: # - A = np.array([0,'teste']) A # + [markdown] slideshow={"slide_type": "subslide"} # Podemos também indicar o tipo de matriz que queremos, ao cria-la, utilizando o argumento ```dtype``` e inserindo alguma das opções que já nos é conhecida (int, float, bool e complex): # - c = np.array([1, 2, 3], dtype=complex) c # + [markdown] slideshow={"slide_type": "subslide"} # **Principais funções disponíveis no Numpy** # + slideshow={"slide_type": "subslide"} # criar um vetor ordenado (crescente) de 1 a 1 x = np.arange(0, 100, 1) x # + slideshow={"slide_type": "subslide"} # podemos criar o mesmo vetor, mas na ordem descrescente x_inverso = np.arange(100, 0, -1) x_inverso # + slideshow={"slide_type": "subslide"} # criar um vetor de 0 a 100, com de 10 intervalos y = np.linspace(0, 100, 10) y # + slideshow={"slide_type": "subslide"} # criar uma grade x,y = np.mgrid[0:5, 0:5] x # + slideshow={"slide_type": "subslide"} # utilizando números aleatórios np.random.rand(5,3) # + [markdown] slideshow={"slide_type": "subslide"} # Outros métodos: # # cumsum, dot, det, sort, max, min, argmax, argmin, sqrt, e outros. # # Testem o que estes métodos fazem. # # Lembre que parar usar você deve: # # ```python # x = np.arange(0,10,1) # soma = x.sum() # ``` # + [markdown] slideshow={"slide_type": "-"} # Para quem vem do matlab: # # http://wiki.scipy.org/NumPy_for_Matlab_Users # - # + [markdown] slideshow={"slide_type": "slide"} # # Pacote de visualização: Matplotlib # # - base para qualquer visualização de dados # - é um saco importar, porém com o tempo fica menor pior # - muito poderoso em termos de controle dos elementos do gráfico e da estrutura em si, isto é: # - estruturas complexas de eixos de visualização podem ser manipuladas por este pacote # # **Importar** # # ```python # import matplotlib.pyplot as plt # ``` # # Quando utilizando um notebook, como no caso da aula, precisamos ativar o modo de exibir as figuras dentro do próprio notebook. 
Usamos então: # # ```python # # # %matplotlib inline # ``` # + [markdown] slideshow={"slide_type": "subslide"} # **Elementos de uma figura** # # ![image1](https://raw.githubusercontent.com/storopoli/ciencia-de-dados/e350e1c686cd00b91c0f316f33644dfd2322b2e3/notebooks/images/matplotlib-anatomy.png) # # Fonte: Prof Storopoli [https://github.com/storopoli/ciencia-de-dados] # + slideshow={"slide_type": "subslide"} import matplotlib.pyplot as plt # quando usando notebook, use o comando mágico abaixo para exibir as figuras no proprio arquivo: # %matplotlib inline # + [markdown] slideshow={"slide_type": "subslide"} # Gráfico único na figura: # + slideshow={"slide_type": "-"} fig = plt.figure() # porém, como não existe informação plotada, nada é exibido abaixo. # + [markdown] slideshow={"slide_type": "subslide"} # Refazendo o comando, mas inserindo um plot 1D simples: # + slideshow={"slide_type": "-"} fig = plt.figure() plt.plot([1,2,3]) # + [markdown] slideshow={"slide_type": "subslide"} # Múltiplos gráficos em uma mesma figura: # + slideshow={"slide_type": "-"} fig,axes = plt.subplots(nrows=1, ncols=2) axes[1] # - # Note: # # - subplots já estruturados # - a variável ```axes```é uma matriz 1D contendo os subplots # + slideshow={"slide_type": "subslide"} fig,axes = plt.subplots(nrows=1, ncols=2) # exibindo o tipo de axes print(type(axes)) # plotando no gráfico da esquerda (1o gráfico) ax = axes[0] ax.plot([1,2,3]) # plotando no gráfico da direita (2o gráfico) axes[1].plot([3,2,1]) # + [markdown] slideshow={"slide_type": "subslide"} # Como o eixo y (ordenada) compartilham o mesmo intervalo, podemos usar: # + slideshow={"slide_type": "-"} fig,axes = plt.subplots(nrows=2, ncols=1, sharex=True) # plotando no gráfico da esquerda (1o gráfico) ax = axes[0] ax.plot([1,2,3]) # plotando no gráfico da direita (2o gráfico) axes[1].plot([3,2,1]) # + [markdown] slideshow={"slide_type": "subslide"} # E ainda podemos especificar o tamanho da nossa figura com o argumento ```figsize```. Este argumento também funciona em ```plt.figure()```. # + slideshow={"slide_type": "-"} fig,axes = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(20,5)) # plotando no gráfico da esquerda (1o gráfico) ax = axes[0] ax.plot([1,2,3]) # plotando no gráfico da direita (2o gráfico) axes[1].plot([3,2,1]) axes[2].plot([3,2,1]) # + [markdown] slideshow={"slide_type": "subslide"} # ### Customizando nossos gráficos # + slideshow={"slide_type": "-"} fig,axes = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(15,5)) # plotando no gráfico da esquerda (1o gráfico) ax = axes[0] ax.plot([1,2,3], 'r') # plotando no gráfico da direita (2o gráfico) axes[1].plot([3,2,1], 'g-o') # + [markdown] slideshow={"slide_type": "subslide"} # Para exemplificar uma função de visualização, vamos transformar o código acima em uma função: # + slideshow={"slide_type": "-"} def plot_simples(): fig,axes = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(15,5)) # plotando no gráfico da esquerda (1o gráfico) ax = axes[0] ax.plot([1,2,3], 'r-') # plotando no gráfico da direita (2o gráfico) axes[1].plot([3,2,1], 'g-o') # retornamos fig e axes pq precisaremos no futuro return fig,axes # + slideshow={"slide_type": "subslide"} # usando a função. 
Note que atribuimos o retorno da função a duas variáveis fig,axes = plot_simples() # inserindo rótulos ao eixo das abscissas (x) e ordenada (y): axes[0].set_xlabel('Abscissa', fontsize=20) axes[0].set_ylabel('Ordenada', fontsize=20) axes[1].set_xlabel('Abscissa', fontsize=20) # adicionando um título para cada subplot axes[0].set_title('Figura da esquerda', fontsize=25) axes[1].set_title('Figura da direita', fontsize=25) # podemos configurar o intervalo discreto das abscissas e ordenadas axes[0].set_xticks(np.arange(0,3,1)) # podemos ainda trocar os rótulos de cada tick axes[0].set_xticklabels(['primeiro', 'segundo', 'terceiro']) # configurar os limites axes[1].set_xlim([0,10]) # inserindo outros elementos: axes[0].grid('--', alpha=.3) axes[1].grid('--', alpha=.3) # axes[0].fill_between([0,1,2], 0, 2) # - # + [markdown] slideshow={"slide_type": "subslide"} # **Tipos de gráficos para 1D** # # - linha: ```plt.plot()``` # - barra: ```plt.barh()``` # - histograma: ```plt.hist()``` # # Podemos plotar, de forma semelhante, histogramas ou gráficos de barra ou histograma: # + slideshow={"slide_type": "-"} fig,axes = plt.subplots(1,2,figsize=(15,5)) # grafico de barras horizontal axes[0].barh(x,y) # histograma para x _,_,_ = axes[1].hist(x,5) # bonus: personalizacao dos subplots usando apenas uma linha com compreensão de listas (list comprehension) _ = [ax.grid('--', alpha=.3) for ax in axes] # + slideshow={"slide_type": "subslide"} # precisando de ajuda? help(plt.plot) # # plt.plot? # + [markdown] slideshow={"slide_type": "subslide"} # Enfim, para **salvar** uma figura, utilizamos o método ```plt.savefig()```: # # - formatos disponíveis: pdf, png, jpg, tif, svg # - dpi: qualidade da figura salva # - bbox_to_inches: use ```tight``` para remover espaços em branco ao redor # + slideshow={"slide_type": "-"} # usando a nossa função fig,ax = plot_simples() # salvando a figura plt.savefig('nome_da_figure.png', dpi=150, bbox_to_inches='tight') # + [markdown] slideshow={"slide_type": "subslide"} # Exercício: # # Utilizando o dicionário criado com a lista de espécies, plote um gráfico de barras horizontais, utilizando ```plt.barh()```. # # **dica**: use ```list()``` para converter as chaves e valores do dicionário para uma lista. # + # solução especies = list(dicionario_especies.keys()) ocorrencias = list(dicionario_especies.values()) fig = plt.figure(figsize=(15,5)) ax = plt.barh(especies, ocorrencias) plt.xlabel('Ocorrências') plt.title('Titulo para o gráfico') # + [markdown] slideshow={"slide_type": "subslide"} # # É claro que existem diversas formas de visualização de gráficos de uma dimensão no python. Apresentamos dois tipos bem básicos, porém muito utilizados no dia a dia de um cientista. # # Para mais exemplo, navegue pela documentação do matplotlib. Ao longo do curso iremos explorar diversos formatos de visualização e explicaremos cada caso conforme uso. # + [markdown] slideshow={"slide_type": "subslide"} # Exercícios de casa: perfis verticais # # Arquivos com temperatura e salinidade climatológica do World Ocean Atlas (WOA) serão fornecidos para uma região específica do litoral Norte de São Paulo: Ubatuba. # # 1. carregue os dados com numpy usando genfromtxt ou loadtxt(sep=','), sendo uma variável para cada propriedade # # ```python # temperatura = np.loadtxt('../dados/salinidade_woa2018_ubatuba_60m.csv', delimiter=',') # ``` # # 2. explore a estrutura da matriz que virá. Por exemplo, identifique: # - o que é cada coluna? E cada linha? # - como acessá-los pelo indexamento de matrizes? 
# # Esteja familiarizado com a matriz fornecida antes de prosseguir para a visualização, assim erros que poderão assustar serão evitados. # - # codigo para baixar o arquivo, caso você esteja rodando este notebook no Google Colab # !wget --directory-prefix=../dados/ https://raw.githubusercontent.com/nilodna/python-basico/feature_iojr-shortcourse/dados/temperatura_woa2018_ubatuba_60m.csv # !wget --directory-prefix=../dados/ https://raw.githubusercontent.com/nilodna/python-basico/feature_iojr-shortcourse/dados/salinidade_woa2018_ubatuba_60m.csv # + slideshow={"slide_type": "subslide"} # explorando matrizes de temperatura e salinidade temperatura = np.loadtxt('../dados/temperatura_woa2018_ubatuba_60m.csv', delimiter=',') temperatura[:, 12] # + [markdown] slideshow={"slide_type": "subslide"} # Nível 1: # - faça subplots de perfis verticais para o mês de janeiro. Cada subplot será uma propriedade (temperatura e salinidade) # # Nível 2: # - Faça uma média dos meses de verão e meses de inverno # - plot as médias em dois subplots, para cada propriedade # # Nível 3: # - plote todos os meses da climatologia em cada figura (uma para cada propriedade como feito nos anteriores). # - mantenha consistência em cores ou marcadores para cada mês # # Bônus: # - plote em um só gráfico a temperatura e salinidade para um mês a sua escolha. # - sabendo que os limites serão diferentes, adicione um segundo eixo (eixo gêmeo) à figura usando ```ax.twinx()```. # # **dicas** # - use ```:``` para acessar todos os valores de uma dimensão da matriz # - estilize seu perfil vertical com gradeamento (grid), rotulos (labels) nos eixos. legenda (legend), etc... # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # # Trees - for BHSA data (Hebrew) # This notebook composes syntax trees out of the BHSA data. # To this end, it imports the module ``etcbc.trees``, which contains the logic to # synthesize trees from the data as it lies encoded in an EMDROS database and has been translated to LAF. # # ``etcbc.trees`` can also work for other data, such as the CALAP database of parts of the Syriac Peshitta. # # This notebook invokes the functions of ``etcbc.trees`` to generate trees for every sentence in the Hebrew Bible. # After that it performs a dozen sanity checks on the trees. # There are 13 sentences that do not pass some of these tests (out of 71354). # # They will be excluded from further processing, since they probably have been coded wrong in the first place. # ## BHSA data and syntax trees # The process of tree construction is not straightforward, since the BHSA data have not been coded as syntax trees. # Rather they take the shape of a collection of features that describe observable characteristics of the words, phrases, clauses and # sentences. Moreover, if a phrase, clause or sentence is discontinuous, it is divided in *phrase_atoms*, *clause_atoms*, # or *sentence_atoms*, respectively, which are by definition continuous. # # There are no explicit hierarchical relationships between these objects. But there is an implicit hierarchy: *embedding*. # Every object carries with it the set of word occurrences it contains. # # The module ``etcbc.trees`` constructs a hierarchy of words, subphrases, phrases, clauses and sentences based on the embedding # relationship. # # But this is not all. 
The BHSA data contains a *mother* relationship, which in some cases denotes a parent relationship between # objects. The module ``etcbc.trees`` reconstructs the tree obtained from the embedding by # using the mother relationship as a set of instructions to move certain nodes below others. # In some cases extra nodes will be constructed as well. # ## The embedding relationship # ### Objects: # The BHSA data is coded in such a way that every object is associated with a *type* and a *monad set*. # # The *type* of an object, $T(O)$, determines which features an object has. ETCBC types are ``sentence``, ``sentence_atom``, # ``clause``, ``clause_atom``, ``phrase``, ``phrase_atom``, ``subphrase``, ``word``. # # There is an implicit *ordering of object types*, given by the sequence above, where ``word`` comes first and # ``sentence`` comes last. We denote this ordering by $<$. # # The *monad set* of an object, $m(O)$, is the set of word occurrences contained by that object. # Every word occurrence in the source has a unique sequence number, so monad sets are sets of sequence numbers. # # Note that # when a sentence contains a clause which contains a phrase, the sentence, clause, and phrase contain words directly as monad sets. The fact that a sentence contains a clause is not marked directly, it is a consequence of the monad set embedding. # # ### Definition (monad set order): # There is a natural order on monad sets, following the intuition that monad sets with smaller elements come before monad set with bigger elements, and embedding monad sets come before embedded monad sets. # Hence, if you enumerate a set of objects that happens to constitute a tree hierarchy based on monad set embedding, and you enumerate those objects in the monad set order, you will walk the tree in pre-order. # # This order is a modification of the one as described in (Doedens 1994, 3.6.3). # # > Doedens, Crist-Jan (1994), *Text Databases. One Database Model and Several Retrieval Languages*, number 14 in Language and Computers, Editions Rodopi, Amsterdam, Netherlands and Atlanta, USA. ISBN: 90-5183-729-1, http://books.google.nl/books?id=9ggOBRz1dO4C. # The order as defined by Doedens corresponds to walking trees in post-order. # # For a lot of processing, it is handy to have a the stack of embedding elements available when working with an element. That is the advantage of pre-order over post-order. It is very much like SAX parsing. # # Here is as the formal definition of my order: # # $m_1 < m_2$ if either of the following holds: # # 1. $m_2 \subset m_1$ (note that the embedder comes first!) # 1. $m_1 \not\subset m_2 \wedge m_2 \not\subset m_1 \wedge \min(m_1 \setminus m_2) < \min(m_2 \setminus m_1)$ # # We will not base our trees on *all* object types, since in the BHSA data they do not constitute a single hierarchy. # We will restrict ourselves to the set $\cal O = \{$ ``sentence``, ``clause``, ``phrase``, ``word`` $\}$. # # ### Definition (directly below): # Object type $T_1$ is *directly below* $T_2$ ( $T_1 <_1 T_2 $ ) in $\cal O$ if $T_1 < T_2$ and there is no $T$ in $\cal O$ with # $T_1 < T < T_2$. # # Now we can introduce the notion of (tree) parent with respect to a set of object types $\cal O$ # (e.g. ): # # ### Definition (parent) # Object $A$ is a parent of object $B$ if the following are true: # 1. $m(A) \subseteq\ m(B)$ # 2. $T(A) <_1 T(B)$ in $\cal O$. # ## The mother relationship # While using the embedding got us trees, using the mother relationship will give us more interesting trees. 
# In general, the *mother* in the ETCBC points to an object on which the object in question is, in some sense, dependent. # The nature of this dependency is coded in a specific feature on clauses, the ``clause_constituent_relation``. # For a list of values this feature can take and the associated meanings, see the notebook # [clause_phrase_types](http://nbviewer.ipython.org/github/ETCBC/laf-fabric-nbs/blob/master/valence/clause_phrase_types.ipynb) in this directory. # # Here is a description of what we do with the mother relationship. # # If a *clause* has a mother, there are three cases for the *clause_constituent_relation* of this clause: # # 1. its value is in $\{$ ``Adju``, ``Objc``, ``Subj``, ``PrAd``, ``PreC``, ``Cmpl``, ``Attr``, ``RgRc``, ``Spec`` $\}$ # 2. its value is ``Coor`` # 3. its value is in $\{$ ``Resu``, ``ReVo``, ``none`` $\}$ # # In case 3 we do nothing. # # In case 1 we remove the link of the clause to its parent and add the clause as a child to either the object that the mother # points to, or to the parent of the mother. We do the latter only if the mother is a word. We will not add children to words. # # In the diagrams, the red arrows represent the mother relationship, and the black arrows the embedding relationships, and the fat black arrows the new parent relationships. The gray arrows indicated severed parent links. # # # # In case 2 we create a node between the mother and its parent. # This node takes the name of the mother, and the mother will be added as child, but with name ``Ccoor``, and the clause which points to the mother is added as a sister. # # This is a rather complicated case, but the intuition is not that difficult. Consider the sentence: # # John thinks that Mary said it and did it # # We have a compound object sentence, with ``Mary said it`` and ``did it`` as coordinated components. # The way this has been marked up in the BHSA database is as follows: # # ``Mary said it``, clause with ``clause_constituent_relation``=``Objc``, ``mother``=``John thinks``(clause) # # ``and did it``, clause with ``clause_constituent_relation``=``Coor``, ``mother``=``Mary said it``(clause) # # So the second coordinated clause is simply linked to the first coordinated clause. Restructuring means to create a parent for both coordinated clauses and treat both as sisters at the same hierarchical level. See the diagram. # # # # ### Note on order # When we add nodes to new parents, we let them occupy the sequential position among its new sisters that corresponds with the monad set ordering. # # ### Note on discontinuity # Sentences, clauses and phrases are not always continuous. Before restructuring it will not always be the case that if you # walk the tree in pre-order, you will end up with the leaves (the words) in the same order as the original sentence. # Restructuring generally improves that, because it often puts an object under a non-continuous parent object precisely at the location that corresponds with the a gap in the parent. # # However, there is no guarantee that every discontinuity will be resolved in this graceful manner. # When we create the trees, we also output the list of monad numbers that you get when you walk the tree in pre-order. # Whenever this list is not monotonic, there is an issue with the ordering. # # ### Note on incest # If a mother points to itself or a descendant of itself, we have a grave form of incest. 
In these cases, the restructuring algorithm will disconnect a parent link without introducing a new link to the tree above it: a whole fragment of the tree becomes disconnected and will get lost. # # Sanity check 6 below reveals that this occurs in fact 4 times in the BHSA version 4 (it occurred 13 times in the BHSA # 3 version). # We will exclude these trees from further processing. # # ### Note on adultery # If a mother points outside the sentence of the clause on which it is specified we have a form of adultery. # This should not happen. Mothers may point outside their sentences, but not in the cases that trigger restructuring. # Yet, the sanity checks below reveal that this occurs twice. We will exclude these cases from further processing. # + import sys import collections import random # %load_ext autoreload # %autoreload 2 import laf from laf.fabric import LafFabric from etcbc.preprocess import prepare from etcbc.lib import Transcription, monad_set from etcbc.trees import Tree fabric = LafFabric() tr = Transcription() # - # The engines are fired up now, all ETCBC data we need is accessible through the ``fabric`` and ``tr`` objects. # # Next we select the information we want and load it into memory. API = fabric.load('etcbc4', '--', 'trees', { "xmlids": {"node": False, "edge": False}, "features": (''' oid otype monads g_cons_utf8 sp rela typ label ''',''' mother '''), "prepare": prepare, }, verbose='NORMAL') exec(fabric.localnames.format(var='fabric')) show_ccr = collections.defaultdict(lambda: 0) for n in NN(): otype = F.otype.v(n) if otype == 'clause': show_ccr[F.rela.v(n)] += 1 for c in sorted(show_ccr): print("{:<8}: {}x".format(c, show_ccr[c])) # Before we can apply tree construction, we have to specify the objects and features we work with, since the tree constructing # algorithm can also work on slightly different data, with other object types and different feature names. # # We also construct an index that tells to which verse each sentence belongs. # + type_info = ( ("word", ''), ("subphrase", 'U'), ("phrase", 'P'), ("clause", 'C'), ("sentence", 'S'), ) type_table = dict(t for t in type_info) type_order = [t[0] for t in type_info] pos_table = { 'adjv': 'aj', 'advb': 'av', 'art': 'dt', 'conj': 'cj', 'intj': 'ij', 'inrg': 'ir', 'nega': 'ng', 'subs': 'n', 'nmpr': 'n-pr', 'prep': 'pp', 'prps': 'pr-ps', 'prde': 'pr-dem', 'prin': 'pr-int', 'verb': 'vb', } ccr_info = { 'Adju': ('r', 'Cadju'), 'Attr': ('r', 'Cattr'), 'Cmpl': ('r', 'Ccmpl'), 'CoVo': ('n', 'Ccovo'), 'Coor': ('x', 'Ccoor'), 'Objc': ('r', 'Cobjc'), 'PrAd': ('r', 'Cprad'), 'PreC': ('r', 'Cprec'), 'Resu': ('n', 'Cresu'), 'RgRc': ('r', 'Crgrc'), 'Spec': ('r', 'Cspec'), 'Subj': ('r', 'Csubj'), 'NA': ('n', 'C'), } tree_types = ('sentence', 'clause', 'phrase', 'subphrase', 'word') (root_type, leaf_type, clause_type) = (tree_types[0], tree_types[-1], 'clause') ccr_table = dict((c[0],c[1][1]) for c in ccr_info.items()) ccr_class = dict((c[0],c[1][0]) for c in ccr_info.items()) root_verse = {} for n in NN(): otype = F.otype.v(n) if otype == 'verse': cur_verse = F.label.v(n) elif otype == root_type: root_verse[n] = cur_verse # - # Now we can actually construct the tree by initializing a tree object. # After that we call its ``restructure_clauses()`` method. # # Then we have two tree structures for each sentence: # # * the *etree*, i.e. the tree obtained by working out the embedding relationships and nothing else # * the *rtree*, i.e. 
the tree obtained by restructuring the *etree* # # We have several tree relationships at our disposal: # # * *eparent* and its inverse *echildren* # * *rparent* and its inverse *rchildren* # * *eldest_sister* going from a mother clause of kind ``Coor`` (case 2) to its daughter clauses # * *sisters* being the inverse of *eldest_sister* # # where *eldest_sister* and *sisters* only occur in the *rtree*. # # This will take a while (25 seconds approx on a MacBook Air 2012). tree = Tree(API, otypes=tree_types, clause_type=clause_type, ccr_feature='rela', pt_feature='typ', pos_feature='sp', mother_feature = 'mother', ) tree.restructure_clauses(ccr_class) results = tree.relations() parent = results['rparent'] sisters = results['sisters'] children = results['rchildren'] elder_sister = results['elder_sister'] msg("Ready for processing") # # Checking for sanity # Let us see whether the trees we have constructed satisfy some sanity constraints. # After all, the algorithm is based on certain assumptions about the data, but are those assumptions valid? # And restructuring is a tricky operation, do we have confidence that nothing went wrong? # # 1. How many sentence nodes? From earlier queries we know what to expect. # 1. Does any sentence have a parent? If so, there is something wrong with our assumptions or algorithm. # 1. Is every top node a sentence? If not, we have material outside a sentence, which contradicts the assumptions. # 1. Do you reach all sentences if you go up from words? If not, some sentences do not contain words. # 1. Do you reach all words if you go down from sentences? If not, some words have become disconnected from their sentences. # 1. Do you reach the same words in reconstructed trees as in embedded trees? If not, some sentence material has got lost during the restructuring process. # 1. From what object types to what object types does the parent relationship link? Here we check that parents do not link object types that are too distant in the object type ranking. # 1. How many nodes have mothers and how many mothers can a node have? We expect at most one. # 1. From what object types to what object types does the mother relationship link? # 1. Is the mother of a clause always in the same sentence? If not, foreign sentences will be drawn in, leading to (very) big chunks. This may occur when we use mother relationships in cases where ``clause_constituent_relation`` has different values than the ones that should trigger restructuring. # 1. Has the max/average tree depth increased after restructuring? By how much? This is meant as an indication by how much our tree structures improve in significant hierarchy when we take the mother relationship into account. #1 msg("Counting {}s ... (expecting 66045 (in etcbc3: 71354))".format(root_type)) msg("There are {} {}s".format(len(set(NN(test=F.otype.v, value=root_type))), root_type)) #2 msg("Checking parents of {}s ... (expecting none)".format(root_type)) exceptions = set() for node in NN(test=F.otype.v, value=root_type): if node in parent: exceptions.add(node) if len(exceptions) == 0: msg("No {} has a parent".format(root_type)) else: msg("{} {}s have a parent:".format(len(exceptions), root_type)) for n in sorted(exceptions): p = parent[n] msg("{} {} [{}] has {} parent {} [{}]".format( root_type, n, F.monads.v(n), F.otype.v(p), p, F.monads.v(p) )) # + #3 (again a check on #1) msg("Checking the types of root nodes ... 
(should all be sentences)") exceptions = collections.defaultdict(lambda: []) sn = 0 for node in NN(): otype = F.otype.v(node) if otype not in type_table: continue if otype == root_type: sn += 1 if node not in parent and node not in elder_sister and otype != root_type: exceptions[otype].append(node) if len(exceptions) == 0: msg("All top nodes are {}s".format(root_type)) else: msg("Top nodes which are not {}s:".format(root_type)) for t in sorted(exceptions): msg("{}: {}x".format(t, len(exceptions[t])), withtime=False) msg("{} {}s seen".format(sn, root_type)) for c in exceptions[clause_type]: (s, st) = tree.get_root(c, 'e') v = root_verse[s] msg("{}={}, {}={}={}, verse={}".format(clause_type, c, root_type, st, s, v), withtime=False) # + #4, 5 def get_top(kind, rel, rela, multi): seen = set() top_nodes = set() start_nodes = set(NN(test=F.otype.v, value=kind)) next_nodes = start_nodes msg("Starting from {} nodes ...".format(kind)) while len(next_nodes): new_next_nodes = set() for node in next_nodes: if node in seen: continue seen.add(node) is_top = True if node in rel: is_top = False if multi: for c in rel[node]: new_next_nodes.add(c) else: new_next_nodes.add(rel[node]) if node in rela: is_top = False if multi: for c in rela[node]: new_next_nodes.add(c) else: new_next_nodes.add(rela[node]) if is_top: top_nodes.add(node) next_nodes = new_next_nodes top_types = collections.defaultdict(lambda: 0) for t in top_nodes: top_types[F.otype.v(t)] += 1 for t in top_types: msg("From {} {} nodes reached {} {} nodes".format(len(start_nodes), kind, top_types[t], t), withtime=False) msg("Embedding trees") get_top(leaf_type, tree.eparent, {}, False) get_top(root_type, tree.echildren, {}, True) msg("Restructd trees") get_top(leaf_type, tree.rparent, tree.elder_sister, False) get_top(root_type, tree.rchildren, tree.sisters, True) msg("Done") # - #6 msg("Verifying whether all monads are preserved under restructuring") errors = [] #i = 10 for snode in NN(test=F.otype.v, value=root_type): declared_monads = monad_set(F.monads.v(snode)) results = {} thisgood = {} for kind in ('e', 'r'): results[kind] = set(int(F.monads.v(l)) for l in tree.get_leaves(snode, kind) if F.otype.v(l) == leaf_type) thisgood[kind] = declared_monads == results[kind] #if not thisgood[kind]: #print("{} D={}\nL={}".format(kind, declared_monads, results[kind])) #i -= 1 #if i == 0: break if False in thisgood.values(): errors.append((snode, thisgood['e'], thisgood['r'])) msg("{} mismatches:".format(len(errors))) mine = min(20, len(errors)) skip = {e[0] for e in errors} for (s, e, r) in errors[0:mine]: msg("{} embedding: {}; restructd: {}".format(s, 'OK' if e else 'XX', 'OK' if r else 'XX'), withtime=False) #7 msg("Which types embed which types and how often? ...") for kind in ('e', 'r'): plinked_types = collections.defaultdict(lambda: 0) parent = tree.eparent if kind == 'e' else tree.rparent kindrep = 'embedding' if kind == 'e' else 'restructd' for (c, p) in parent.items(): plinked_types[(F.otype.v(c), F.otype.v(p))] += 1 msg("Found {} parent ({}) links between types".format(len(parent), kindrep)) for lt in sorted(plinked_types): msg("{}: {}x".format(lt, plinked_types[lt]), withtime=False) #8 msg("How many mothers can nodes have? 
...") mother_len = {} for c in NN(): lms = list(C.mother.v(c)) nms = len(lms) if nms: mother_len[c] = nms count = collections.defaultdict(lambda: 0) for c in tree.mother: count[mother_len[c]] += 1 msg("There are {} tree nodes with a mother".format(len(tree.mother))) for cnt in sorted(count): msg("{} nodes have {} mother{}".format(count[cnt], cnt, 's' if cnt != 1 else ''), withtime=False) #9 msg("Which types have mother links to which types and how often? ...") mlinked_types = collections.defaultdict(lambda: set()) for (c, m) in tree.mother.items(): ctype = F.otype.v(c) mlinked_types[(ctype, F.rela.v(c), F.otype.v(m))].add(c) msg("Found {} mother links between types".format(len(parent))) for lt in sorted(mlinked_types): msg("{}: {}x".format(lt, len(mlinked_types[lt])), withtime=False) #10 msg("Counting {}s with mothers in another {}".format(clause_type, root_type)) exceptions = set() for node in tree.mother: if F.otype.v(node) not in type_table: continue mnode = tree.mother[node] snode = tree.get_root(node, 'e') smnode = tree.get_root(mnode, 'e') if snode != smnode: exceptions.add((node, snode, smnode)) msg("{} nodes have a mother in another {}".format(len(exceptions), root_type)) for (n, sn, smn) in exceptions: msg("[{} {}]({}) occurs in {} but has mother in {}".format(F.otype.v(n), F.monads.v(n), n, sn, smn), withtime=False) # + #11 msg("Computing lengths and depths") ntrees = 0 rntrees = 0 total_depth = {'e': 0, 'r': 0} rtotal_depth = {'e': 0, 'r': 0} max_depth = {'e': 0, 'r':0} rmax_depth = {'e': 0, 'r': 0} total_length = 0 for node in NN(test=F.otype.v, value=root_type): ntrees += 1 total_length += tree.length(node) this_depth = {} for kind in ('e', 'r'): this_depth[kind] = tree.depth(node, kind) different = this_depth['e'] != this_depth['r'] if different: rntrees += 1 for kind in ('e', 'r'): if this_depth[kind] > max_depth[kind]: max_depth[kind] = this_depth[kind] total_depth[kind] += this_depth[kind] if different: if this_depth[kind] > rmax_depth[kind]: rmax_depth[kind] = this_depth[kind] rtotal_depth[kind] += this_depth[kind] msg("{} trees seen, of which in {} cases restructuring makes a difference in depth".format(ntrees, rntrees)) if ntrees > 0: msg("Embedding trees: max depth = {:>2}, average depth = {:.2g}".format(max_depth['e'], total_depth['e'] / ntrees)) msg("Restructd trees: max depth = {:>2}, average depth = {:.2g}".format(max_depth['r'], total_depth['r'] / ntrees)) if rntrees > 0: msg("Statistics for cases where restructuring makes a difference:") msg("Embedding trees: max depth = {:>2}, average depth = {:.2g}".format(rmax_depth['e'], rtotal_depth['e'] / rntrees)) msg("Restructd trees: max depth = {:>2}, average depth = {:.2g}".format(rmax_depth['r'], rtotal_depth['r'] / rntrees)) msg("Total number of leaves in the trees: {}, average number of leaves = {:.2g}".format(total_length, total_length / ntrees)) # - # # Writing Trees # After all these checks we can proceed to print out the tree structures as plain, bracketed text strings. # # Per tree we also print a string of the monad numbers that you get when you walk the tree in pre-order. # And we produce object ids from the EMDROS database and node ids from the LAF version. # # First we apply our algorithms to a limited set of interesting trees and a random sample. # For those cases we also apply a ``debug_write()`` method that outputs considerably more information. # # This output has been visually checked by and . 
# + def get_tag(node): otype = F.otype.v(node) tag = type_table[otype] if tag == 'P': tag = F.typ.v(node) elif tag == 'C': tag = ccr_table[F.rela.v(node)] is_word = tag == '' pos = pos_table[F.sp.v(node)] if is_word else None monad = int(F.monads.v(node)) if is_word else None text = '"{}"'.format(F.g_cons_utf8.v(node)) if is_word else None return (tag, pos, monad, text, is_word) def passage_roots(verse_label): sought = [] grab = -1 for n in NN(): if grab == 1: continue otype = F.otype.v(n) if otype == 'verse': check = F.label.v(n) == verse_label if check: grab = 0 elif grab == 0: grab = 1 if grab == 0 and otype == root_type: sought.append(n) return sought def showcases(cases, ofile): out = outfile(ofile) for snode in cases: out.write("\n====================\n{}\n{}\n{} bhs_id={} laf_node={}:\n".format( root_verse[snode], cases[snode], root_type, F.oid.v(snode), snode, )) for kind in ('e', 'r'): out.write("\nTree based on monad embedding {}\n\n".format( "only" if kind == 'e' else " and mother+clause_constituent relation" )) (tree_rep, words_rep, bmonad) = tree.write_tree(snode, kind, get_tag, rev=False, leafnumbers=False) out.write("{}\n\n{}\n".format(words_rep, tree_rep)) out.write("\nDepth={}\n".format(tree.depth(snode, kind))) out.write(tree.debug_write_tree(snode, kind, legenda=kind=='r')) out.close() # below holds for etcbc3, in etcbc4 we have less problem cases problem_desc = collections.OrderedDict(( (1131739, "debug reorder"), (1131712, "interesting"), (1131701, "interesting"), (1140469, "subject clause order"), (passage_roots(' GEN 01,16')[1], "interesting"), (1164864, "interesting"), (1143081, "cyclic mothers"), (1153973, "cyclic mothers"), (1158971, "cyclic mothers"), (1158971, "cyclic mothers"), (1160416, "cyclic mothers"), (1160464, "cyclic mothers"), (1161141, "nested cyclic mothers: C.coor => C.attr => P below first C.coor"), (1163666, "cyclic mothers"), (1164830, "cyclic mothers"), (1167680, "cyclic mothers"), (1170057, "cyclic mothers"), (1193065, "cyclic mothers"), (1199681, "cyclic mothers"), (1199682, "mother points outside sentence"), )) fixed_sample = ( 1167680, 1167152, 1145250, 1154339, 1136677, 1166385, 1198984, 1152969, 1153930, 1150648, 1168396, 1151917, 1164750, 1156719, 1148048, 1138673, 1134184, 1156789, 1156600, 1140469, ) sample_size = 20 sample = {} fsample = collections.OrderedDict() mother_keys = list(sorted(tree.mother)) for s in range(20): r = random.randint(0, len(mother_keys) - 1) snode = tree.get_root(tree.mother[mother_keys[r]], 'e')[0] sample[snode] = 'random sample in {}s with {}s with mothers'.format(root_type, clause_type) for snode in fixed_sample: fsample[snode] = 'random sample in {}s with {}s with mothers'.format(root_type, clause_type) #showcases(problem_desc, "tree_notabene.txt") showcases(sample, 'trees_random_{}.txt'.format(sample_size)) #showcases(fsample, 'trees_fixed_{}.txt'.format(len(fsample))) # - # Finally, here is the production of the whole set of trees. 
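#
# The note on discontinuity above said that the list of monad numbers produced by a pre-order walk should be monotonic; whenever it is not, the tree still has an ordering problem. A minimal sketch of such a check on a plain list of monad numbers (``is_monotonic`` is a hypothetical helper, not part of ``etcbc.trees``):

# +
def is_monotonic(monads):
    # True when every monad number is at least as big as its predecessor,
    # i.e. the pre-order walk yields the words in their original textual order
    return all(a <= b for a, b in zip(monads, monads[1:]))

print(is_monotonic([1, 2, 3, 5]))  # True
print(is_monotonic([1, 3, 2]))     # False: an ordering issue remains
# -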
msg("Writing {} trees".format(root_type)) trees = outfile("trees.txt") verse_label = '' s = 0 chunk = 10000 sc = 0 for node in NN(): if node in skip: continue otype = F.otype.v(node) oid = F.oid.v(node) if otype == 'verse': verse_label = F.label.v(node) continue if otype != root_type: continue (tree_rep, words_rep, bmonad) = tree.write_tree(node, 'r', get_tag, rev=False, leafnumbers=False) trees.write("\n#{}\tnode={}\toid={}\tbmonad={}\t{}\n{}\n".format( verse_label, node, oid, bmonad, words_rep, tree_rep, )) s += 1 sc += 1 if sc == chunk: msg("{} trees written".format(s)) sc = 0 trees.close() msg("{} trees written".format(s)) close() # # Preview # Here are the first lines of the output. # !head -n 25 {my_file('trees.txt')} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + language="bash" # conda install pandas_datareader pytables # + # %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (19, 8) # + from lmk.ticker import Ticker ticker = Ticker("000001.SS") # The Grand Stock Market Bubble ticker.retrieve_history("2015-02-01", "2015-09-30") ticker.visualize("V,C,CL,LMK,WM,PV") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="guuhSs1qXxN-" colab_type="code" colab={} import nltk import gensim import pandas as pd import datetime # + id="T-x0n9Cr2kf7" colab_type="code" outputId="1fecb4e4-6e9d-40fc-de1a-21139ab3a540" colab={"base_uri": "https://localhost:8080/", "height": 88} import spacy # !python -m spacy download en_core_web_lg # + id="sTpx0bDpBVmL" colab_type="code" colab={} nlp = spacy.load("en_core_web_lg") # + id="vL0AmlCe8Ihp" colab_type="code" colab={} """Gives you a size 300 array of all the relevant entries.""" def aggregate_score(all_relevant_entries): sum = [0]*300 for i in all_relevant_entries.values: # Note: I'm using SpaCy's vectorizer here, but transition to Google's thing # if this doesn't work at all doc = nlp(i[0]) sum = sum + doc.vector * i[1] return sum # + [markdown] id="7HRTqFFBdB86" colab_type="text" # # Dealing with the Reddit data # + id="y04hw8MxdGMt" colab_type="code" colab={} red = pd.read_csv("reddits.csv", index_col=0) # + id="EGIj3Gxyj5Eo" colab_type="code" colab={} def utc_to_date(utc): return datetime.datetime.utcfromtimestamp(utc).strftime('%Y-%m-%d') # + id="SIJlLTF1dKQZ" colab_type="code" outputId="4fe6c069-0548-4933-f6d6-aa93719fc6da" colab={"base_uri": "https://localhost:8080/", "height": 204} red.head() # + id="UXXEqCnwdLFI" colab_type="code" colab={} red2 = red.copy() red2["date"] = red2.created_utc.apply(utc_to_date) # + id="RMgRk3DxkA-c" colab_type="code" colab={} aggs = red2.groupby("date").apply(aggregate_score) # + id="wtSazab2kwAk" colab_type="code" outputId="8b763adf-3d80-40b8-88b1-4bf676707a46" colab={"base_uri": "https://localhost:8080/", "height": 238} aggs # + id="1cww2UHclR4d" colab_type="code" outputId="cf47078b-79d1-4b84-8f8a-3809ff7a01f1" colab={"base_uri": "https://localhost:8080/", "height": 419} final_vectors = pd.DataFrame(data={"vector": aggs}, index=red2.date.unique()) final_vectors # + id="x9RHIkWjog9j" colab_type="code" colab={} stock_market = pd.read_csv("stock_market.csv", index_col=0) # + id="1pkJNJrsq-tY" colab_type="code" outputId="e18b8e43-c424-472e-b8a5-c6f42d32c0ab" 
colab={"base_uri": "https://localhost:8080/", "height": 450} sm = stock_market.drop(columns=["year", "month", "day", "utc"]) sm # + id="-2AoCIoKrB58" colab_type="code" outputId="662cc8e9-37ed-42ad-eead-ddee2b965d56" colab={"base_uri": "https://localhost:8080/", "height": 450} sm = sm.rename(columns={"1. open": "open", "4. close": "close"}) sm["open"] = sm["open"].shift(1) sm.dropna(axis=0, inplace=True) sm # + id="cxNaMD3avzHO" colab_type="code" outputId="429f3fa4-592d-4c52-87b4-1b78b7744ea5" colab={"base_uri": "https://localhost:8080/", "height": 450} newsm = sm.join(final_vectors, on="index", how="inner") newsm = newsm.rename(columns={"1. open": "open", "4. close": "close"}) newsm # newsm.vector['2020-02-10'] # + id="Dut9QyvLrKDH" colab_type="code" colab={} all_data = newsm all_data.to_csv("all_data.csv") # + id="GJbj36xpre0n" colab_type="code" outputId="7c08c184-45ac-4a47-b452-f45389348328" colab={"base_uri": "https://localhost:8080/", "height": 450} all_data # + [markdown] id="Ovp_gaUGxnPZ" colab_type="text" # # Making a Model # + id="_8wWFfv-4EOM" colab_type="code" colab={} import pickle # + id="tjgC_udF7hhe" colab_type="code" colab={} with open('data.pkl', 'wb') as output: pickle.dump(all_data, output) # + id="SFOBWQ6g7rm_" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) # # Open in Callysto # # Statistics Project # # For this project, you will collect numerical data about a topic that interests you. Then you will perform a statistical analysis of your data and report on your findings. You will be expected to present your findings and your predictions using a Jupyter notebook, plus be prepared to explain your results when asked questions. There are a number of starter notebooks in the [AccessingData](./AccessingData) folder. # # For more background on statistics, check out this [Introductory Statistics Textbook](https://openstax.org/books/introductory-statistics/pages/1-introduction), this [online course](http://www.learnalberta.ca/content/t4tes/courses/senior/math20-2/index.html?l1=home&l2=4-m4&l3=4-m4-l01&page=m4/m20_2_m4_l01.html&title=Mathematics%2020-2:%20Module%204:%20Lesson%201), or these [class notes](https://sites.google.com/a/share.epsb.ca/ms-carlson-s-math-site/20-2-class-notes). # # ## Part 1: Creating an Action Plan # # Create a research question on a statistical topic you would like to answer. # # Think of some subjects that interest you. Then make a list of topics that are related to each # subject. Once you have chosen several topics, do some research to see which topic would best # support a project. Of these, choose the one that you think is the best. # # Some questions to consider when selecting a topic: # * Does the topic interest you? # * Is the topic practical to research? # * Can you find enough numerical data to do a statistical analysis? # * Is there an important issue related to the topic? # * Will your audience appreciate your presentation? # # You may choose a topic where you collect the data yourself through surveys, interviews, and direct observations (*primary data*) **OR** where you find data that has already been collected through other sources such as websites, newspapers, magazines, etc. (*secondary data*). 
# # ### Primary Data # * To make sure that you have an adequate amount of information with which you can perform a statistical analysis, **you must collect at least 100 data values**. If you are collecting your own data, and it involves surveying people, please note that if you collect any [personally identifiable information](https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda_brief/#_h2) such as age and name then you'll need to obtain people's consent. # # #### Ideas for Primary Data Topics: # Feel free to choose a topic that is not in this list, but keep in mind that you need to be able to find enough data. # # * People's: height, shoe size, etc. # * Family size, number of children in a family, the time separation between siblings, age # difference between mother and father # * Number in a household of: pets, TV’s, books, mobile devices, vehicles, etc. # * Number of mobile devices that a person has owned # * Number of hours of television watched in a day or week # * Number of songs on a person’s playlist # * Number of hours on the phone in a day or week # * Number of text messages sent in a day or week # * Mass of apples or any other type of produce # * The number of items of your favourite product sold in a day (contact local retailers) # * The length (time or distance) of a person's commute to school or work # * The length of a poet’s poems (either by lines or words) # * The size of classes # * The number or percentage of each gender in classes # * How long people can keep their eyes open or stand on one foot # * Number of people on the different school clubs or sports teams # # ### Secondary Data # If you choose this type of data, please make sure your research question is well-defined. To make sure that you have an adequate amount of information with which you can perform a statistical analysis, you must find at least 100 data values. # # You may want to do a comparison (e.g. compare climate in provinces, compare career stats of two or more hockey players, etc.) in order to get enough information to perform a statistical analysis. # # #### Ideas for Secondary Data Topics: # You can use the starter notebooks in the [AccessingData folder](./AccessingData). Feel free to choose a topic that is not in that list, but keep in mind that you need to be able to find enough data. # # ### Creating Your Research Question or Statement # A good question requires thought and planning. A well-written research question or statement clarifies exactly what your project is designed to do. It should have the following characteristics: # # * The research topic is easily identifiable and the purpose of the research is clear. # * The question/statement is focused. The people who are listening to or reading the question/statement will know what you are going to be researching. # # ### Evaluating Your Research Question or Statement # * Does the question or statement clearly identify the main objective of the research? # * Are you confident that the question or statement will lead you to sufficient and approprate data to reach a conclusion? # * Can you use the statistical methods you learned in class to analyze the data? # * Is the question or statement interesting? Does it make you want to learn more? # # ### Your Turn: # A. Write a research question for your topic. # B. Use the above checklist to evaluate your question. Adjust your question as needed. # C. 
Be prepared to discuss with your teacher your research question and your plan for collecting or finding the data. # # ## Part 2: Carrying Out Your Research # # A. Decide if you will use primary data, secondary data, or both. Explain how you made your decision. # B. Make a plan you can follow to collect your data. # C. Collect the data. There is a sheet at the back of this booklet for you to record your data. # * If using primary data, describe your data collection method(s). # * If using secondary data, you must record detailed information about your source(s), so that you can cite them in your report. # * Consider the type of data you need and ensure that you have a reliable source (or sources) for that data, especially if you are using secondary data. # # ## Part 3: Analyzing Your Data # # Statistical tools can help you analyze and interpret the data you collect. You need to think carefully about which statistical tools are most applicable to your topic. Some tools may work well; others may not be applicable with your topic. Keep in mind that you will be marked on the thoroughness of your statistical analyis, so don’t try to scrape by with the bare minimum! # # ### Tools # * Data Table # * Visualization(s) (e.g. Histogram, Frequency Polygon, Scatterplot) # * Measures of Central Tendency (Mean, Median, Mode) # * Which is the most appropriate for measuring the “average” of your data? # * Measures of Dispersion: Range and Standard Deviation # * Comment on the dispersion of your data. # * Outliers # * Are there outliers in your data? Do these skew the results? Would it be more appropriate to remove the outliers before calculating measures of central tendency or dispersion? # * Normal Distribution # * Does your data approximate a normal distribution? Explain why or why not. # * Z-Scores # * Find the z-score of a number of significant data points. If your data is normally distributed, find the percentage of data that is below or above a significant data point. # # ### Your Turn: # A. Determine which statistical tools are appropriate for your data. # B. Meet with your teacher to evaluate your plan for analyzing your data. Be prepared to explain why you chose the statistical tools you did. Modify your plan as necessary. # C. Use statistics to analyze your data. # * Your analysis must include a table and a graphical display (histogram or frequency polygon) of the data you collected. Make sure these are neat and labeled on a scale large enough to display to the class. # * Include all appropriate measures of central tendency, measures of dispersion, and outliers. # * Comment on trends in your data. Interpret the statistical measures you used. What conclusions can you draw? # * If you have collected data on more than one group, person, time frame, or region, comment on the differences between them. What conclusions can you draw? # # ### Example Projects # # [Income Per Person](example-project-income-per-person.ipynb) # # [Soccer Data](example-project-soccer.ipynb) # # ## Part 4: The Final Product and Presentation # Your final presentation should be more than just a factual written report of the information you have found. To make the most of your hard work, select a format for your final presentation that will suit your strengths, as well as your topic. You will be presenting the results of your research topic via video (approximately 3 to 5 minutes), and must include visuals. 
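#
# As a quick illustration of the tools listed in Part 3, the sketch below computes the measures of central tendency and dispersion plus a z-score for a made-up sample of 100 hypothetical commute times; it is only an example, so substitute your own data.

# +
import numpy as np
import pandas as pd

# hypothetical sample: 100 commute times in minutes (replace with your own data)
rng = np.random.default_rng(1)
commute = pd.Series(np.round(rng.normal(loc=30, scale=8, size=100), 1))

print("mean:", commute.mean())
print("median:", commute.median())
print("mode:", commute.mode().tolist())
print("range:", commute.max() - commute.min())
print("standard deviation:", commute.std())

# z-score of one significant data point, e.g. a 45-minute commute
z = (45 - commute.mean()) / commute.std()
print("z-score of a 45-minute commute:", round(z, 2))

# a simple histogram of the sample
commute.plot(kind="hist", bins=10, title="Commute times (minutes)");
# -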
# # ### Evaluating Your Own Presentation # Before giving your presentation, you can use these questions to decide if your presentation will be effective: # * Did I define my topic well? What is the best way to define my topic? # * Is my presentation focused? Will my audience (classmates and teacher) find it focused? # * Did I organize my information effectively? Is it obvious that I am following a plan in my presentation? # * Does my topic suit one presentation format better than others? # * From which presentation format do I think my audience will gain the greatest understanding? # * Am I satisfied with my presentation? What might make it more effective? # * What unanswered questions might my audience have? # # ### Your Turn: # A. Choose a format for your presentation, and create your presentation. # B. Meet with your teacher to discuss your presentation plans and progress. # C. Present your topic via video. # D. Submit your statistical analysis and written conclusions. # # ## Statistical Research Project Scoring Rubric # # |Criteria|4 Excellent|3 Proficient|2 Adequate|1 Limited|Insufficient or Blank| # |---|---|---|---|---|---| # |Data Collection|**pertinent** and from **reliable** and **documented** sources|**relevant** and from **substantially reliable** and **documented** sources|**suitable** but from **questionable** or **partially documented** sources|unsuitable|no data collected| # |Process and Reflection (Checkpoints)|prepared to discuss topic, plans, and methods insightfully|prepared to discuss topic, plans, and methods|partially prepared to discuss topic, plans, and method vaguely|unprepared to discuss topic, plans, and method with difficulty or incorrectly|no discussion| # |Statistical Calculations|thorough statistical analysis with accurate calculations|adequate statistical analysis with essentially correct calculations|superficial statistical analysis with partially correct calculations|minimal statistical analysis with flawed calculations|no statistical calculations performed| # |Interpretation|astute and insightful|credible and understandable|rudimentary and minimal|incomplete or flawed|no interpretation of the data and statistical measures| # |Organization and Presentation|purposeful and compelling|logical and effective|reasonable and simplistic|disorganized and ineffective manner|no organization or presentation| # # [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Author : # github link : https://github.com/amirshnll/Wine # dataset link : http://archive.ics.uci.edu/ml/datasets/Wine # email : # - import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd # + col_names = ['class', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', ' Nonflavanoid phenols','Proanthocyanins','Color intensity','Hue','OD280/OD315 of diluted wines','Proline'] wine =pd.read_csv("wine.csv",header=None, names=col_names) # - wine.head() # + inputs =wine.drop('class',axis='columns') target = wine['class'] # - inputs # + # ایجاد دو دسته داده های آموزشی و تست برای ارزیابی عملکرد from sklearn.model_selection import train_test_split X_train, X_test, 
y_train, y_test = train_test_split(inputs, target, test_size=0.3,random_state=109) # + # توابع رگوسیون from sklearn.linear_model import LogisticRegression logreg = LogisticRegression() logreg.fit(X_train,y_train) y_pred=logreg.predict(X_test) # - # ایجاد ماتریس در هم ریختگی from sklearn import metrics cnf_matrix = metrics.confusion_matrix(y_test, y_pred) cnf_matrix # + #ساخت ماتریس درهم ریختگی برای ارزیابی عملکرد یک طبقه بندی و نشان دادن تعداد درست و نادرست class_names=[0,1] # name of classes fig, ax = plt.subplots() tick_marks = np.arange(len(class_names)) plt.xticks(tick_marks, class_names) plt.yticks(tick_marks, class_names) # ساخت هیت مپ sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g') ax.xaxis.set_label_position("top") plt.tight_layout() plt.title('Confusion matrix', y=1.1) plt.ylabel('Actual label') plt.xlabel('Predicted label') # - print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 02_Autograd # In this notebook, we will see how to compute the derative of tensors. from __future__ import print_function import torch # ## Auto Graph & Gradient Computation # # It is the `autograd` package that provides automatic differentiation for all operations on Tensors. # It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different. # # If you set the attribute `.requires_grad` of some tensor as `True`, it starts to track all operations on it. # When you finish your computation you can call `.backward()` and have all the gradients computed automatically. # The gradient for this tensor will be accumulated into `.grad` attribute. # # There’s one more class which is very important for autograd implementation - a `Function`. # # `Tensor` and `Function` are interconnected and build up an acyclic graph, that encodes a complete history of computation. Each tensor has a `.grad_fn` attribute that references a `Function` that has created the `Tensor` (except for Tensors created by the user - their `grad_fn` is None). # + x = torch.ones(1) # create a tensor with requires_grad=False (default) print(x.requires_grad) y = torch.ones(1) # another tensor with requires_grad=False z = x + y print(z.requires_grad) # then autograd won't track this computation. let's verify! # z.backward() w = torch.ones(1, requires_grad=True) print(w.requires_grad) # add to the previous result that has require_grad=False total = w + z # the total sum now requires grad! print(total.requires_grad) # no computation is wasted to compute gradients for x, y and z, which don't require grad print(z.grad == x.grad == y.grad == None) # you can also manually enable gradients for a tensor, but use this with caution! 
x = torch.ones(1) print(x.requires_grad) x.requires_grad_(True) print(x.requires_grad) # + # create graph x = torch.tensor([3], dtype=torch.float, requires_grad=True) y = 2*x +3 print(x, y) print(x.requires_grad, y.requires_grad) # - print(y.grad_fn, y.grad_fn.next_functions[0][0], y.grad_fn.next_functions[0][0].next_functions[0][0], sep='\n') print(y.grad_fn.next_functions[0][0].next_functions[0][0].next_functions) y.backward() # calculate dy / dx == d(2*x + 3) / dx == 2 print(x, x.grad) # dy / dx is stored" # To stop a tensor from tracking history, you can call `.detach()` to detach it from the computation history, and to prevent future computation from being tracked. z = x.detach() print(z, z.requires_grad) print(z.grad) # To prevent tracking history (and using memory), you can also wrap the code block in `with torch.no_grad():`. This can be particularly helpful when evaluating a model because the model may have trainable parameters with requires_grad=True, but for which we don’t need the gradients. x = torch.zeros(1, requires_grad=True) with torch.no_grad(): y = x * 2 print(y.requires_grad) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Activation Functions # import pandas as pd import matplotlib as plt import numpy as np from IPython.display import Image # In this chapter, we will tackle a few of the activation functions and discuss their roles. We use different activation functions for different cases, and understanding how they work can help you properly pick which of them is best for your task. # # # The activation function is applied to the output of a neuron (or layer of neurons), which modifies outputs. We use activation functions because if the activation function itself is nonlinear, it allows for neural networks with usually two or more hidden layers to map nonlinear functions. We’ll be showing how this works in this chapter. # # # ### In general, your neural network will have two types of activation functions. The first will be the activation function used in hidden layers, and the second will be used in the output layer. # ### The Step Activation Function # In a single neuron, if the weights · inputs + bias results in a value greater than 0, the neuron will fire and output a 1; otherwise, it will output a 0. # Image("Images/ActF.png", width = "500") # ### The Linear Activation Function # # A linear function is simply the equation of a line. It will appear as a straight line when graphed, where y=x and the output value equals the input. Image("Images/sc.png", width = "500") # ### The Sigmoid Activation Function # # The problem with the step function is it’s not very informative. When we get to training and network optimizers, you will see that the way an optimizer works is by assessing individual impacts that weights and biases have on a network’s output. # # The problem with a step function is that it’s less clear to the optimizer what these impacts are because there’s very little information gathered from this function. It’s either on (1) or off (0). It’s hard to tell how “close” this step function was to activating or deactivating. # # Maybe it was very close, or maybe it was very far. In terms of the final output value from the network, it doesn’t matter if it was close to outputting something else. 
Thus, when it comes time to optimize weights and biases, it’s easier for the optimizer if we have activation functions that are more granular and informative. Image("Images/Sigmoid.png", width = "500") # # As mentioned earlier, with “dead neurons,” it’s usually better to have a more granular approach for the hidden neuron activation functions. In this case, we’re getting a value that can be reversed to its original value; the returned value contains all the information from the input, contrary to # a function like the step function, where an input of 3 will output the same value as an input of 300,000. The output from the Sigmoid function, being in the range of 0 to 1, also works better with neural networks — **especially compared to a range of negative to positive infinity — and adds nonlinearity.** # ### The ReLU Activation Function # The rectified linear activation function is simpler than the sigmoid. It’s quite literally y=x, clipped # at 0 from the negative side. If x is less than or equal to 0, then y is 0 — otherwise, y is equal to x. # # # This simple yet powerful activation function is the most widely used activation function at the time of writing for various reasons — mainly speed and efficiency. While the sigmoid activation function isn’t the most complicated, it’s still much more challenging to compute than the ReLU activation function. The ReLU activation function is extremely close to being a linear activation function while remaining nonlinear, due to that bend after 0. This simple property is, however, very effective. Image("Images/Relu.png", width = "500") # There are certainly problems in life that are linear in nature: for example, if we are trying to figure out the cost of some number of shirts, we know the cost of an individual shirt, and there are no bulk discounts, then the equation to calculate the price of any number of those shirts is # a linear equation. # # Other problems in life are not so simple, like the price of a home. The number of factors that come into play, such as size, location, time of year attempting to sell, number of rooms, yard, neighborhood, and so on, makes the pricing of a home a nonlinear equation. Many of the more interesting and hard problems of our time are nonlinear. The main attraction of neural networks is their ability to solve nonlinear problems. # The finance world deals mostly with non-linear problems, and this is where activation functions shine the most. # # ## Visual link to how neurons are being fitted in non-linear data # # https://nnfs.io/mvp/ # + #### ReLU activation function code inputs = [0,2,-1,3.3,-2.7,1.1,2.2,-100] # collect the outputs output = [] for i in inputs: if i >0: output.append(i) else: output.append(0) print(output) # - # The ReLU in this code is a loop where we’re checking if the current value is greater than 0. If it is, we’re appending it to the output list, and if it’s not, we’re appending 0. This can be written more simply, as we just need to take the largest of two values: 0 or the neuron value. 
For example: # + inputs = [0, 2, -1, 3.3, -2.7, 1.1, 2.2, -100] output = [] for i in inputs: output.append(max(0,i)) print(output) # + import numpy as np inputs = [0, 2, -1, 3.3, -2.7, 1.1, 2.2, -100] # np.maximum takes the element-wise maximum of 0 and each input value output = np.maximum(0, inputs) print(output) # + import nnfs from nnfs.datasets import spiral_data # + ## ReLU activation class Activation_ReLU: # forward pass def forward(self, inputs): # calc output values from inputs self.output = np.maximum(0,inputs) # Let's apply this to the dense layer outputs in our code # Create dataset X, y = spiral_data(samples=100, classes=3) # create dense layer with 2 input features and 3 output values dense1 = Layer_Dense(2,3) # ReLU activation activation1 = Activation_ReLU() # Make a forward pass of our training data through this layer dense1.forward(X) # Forward pass through activation func. # Takes in output from previous layer activation1.forward(dense1.output) print(activation1.output[:5]) # - # As you can see, negative values have been clipped (modified to be zero). That’s all there is to the rectified linear activation function used in the hidden layer. Let’s talk about the activation function that we are going to use on the output of the last layer. # ### The Softmax Activation Function # # In the case of classification, what we want to see is a prediction of which class the network “thinks” the input represents. The distribution returned by the softmax activation function represents confidence scores for each class and will add up to 1. The predicted class is associated with the output neuron that returned the largest confidence score. Still, we can also note the other confidence scores in our overarching algorithm/program that uses this network. # # For example, if our network has a confidence distribution for two classes: [0.45, 0.55], the prediction is the 2nd class, but the confidence in # this prediction isn’t very high. Maybe our program would not act in this case since it’s not very confident. # # The softmax function exponentiates each output and divides it by the sum of the exponentiated outputs of its sample: # # $$S_{i,j} = \frac{e^{z_{i,j}}}{\sum_{l=1}^{L} e^{z_{i,l}}}$$ # # That might look daunting, but we can break it down into simple pieces and express it in Python code, which you may find is more approachable than the formula above. To start, here are example outputs from a neural network layer: # layer_outputs = [4.8, 1.21, 2.385] # The first step for us is to “exponentiate” the outputs. We do this with Euler’s number, e, which is roughly 2.71828182846 and referred to as the “exponential growth” number. Exponentiating is taking this constant to the power of the given parameter: # # $$y = e^x$$ # # # Both the numerator and the denominator of the Softmax function contain e raised to the power of z, where z, given indices, means a singular output value — the index i means the current sample and the index j means the current output in this sample. The numerator exponentiates the current output value and the denominator takes a sum of all of the exponentiated outputs for a given sample. 
# We then need to calculate these exponentiated values to continue:

# +
# Values from the previous output when we described
# what a neural network is
layer_outputs = [4.8, 1.21, 2.385]

# e - mathematical constant, we use E here to match a common coding
# style where constants are uppercased
E = 2.71828182846  # you can also use math.e

# +
# For each value in a vector, calculate the exponential value
exp_values = []
for output in layer_outputs:
    exp_values.append(E**output)

print("exponentiated values:")
print(exp_values)
# -

# Exponentiation serves multiple purposes. To calculate the probabilities, we need non-negative values. Imagine the output as [4.8, 1.21, -2.385] — even after normalization, the last value will still be negative since we’ll just divide all of them by their sum. A negative probability (or confidence) does not make much sense. The exponential of any number is always non-negative — it returns 0 for negative infinity, 1 for the input of 0, and increases for positive values.
#
# The exponential function is a monotonic function. This means that, with higher input values, outputs are also higher, so we won’t change the predicted class after applying it while making sure that we get non-negative values. It also adds stability to the result, as the normalized exponentiation depends more on the differences between numbers than on their magnitudes. Once we’ve exponentiated, we want to convert these numbers to a probability distribution (converting the values into a vector of confidences, one for each class, which add up to 1 for the whole vector). What that means is that we’re about to perform a normalization where we take a given value and divide it by the sum of all of the values. For our outputs, exponentiated at this stage, that’s what the equation of the Softmax function describes next — take a given exponentiated value and divide it by the sum of all of the exponentiated values. Since each output value normalizes to a fraction of the sum, all of the values are now in the range of 0 to 1 and add up to 1 — they share the probability of 1 between themselves. Let’s add the sum and normalization to the code:

# +
# Now normalize values
norm_base = sum(exp_values)
norm_values = []
for value in exp_values:
    # dividing each exponentiated value by the sum turns it into a fraction of that sum
    norm_values.append(value / norm_base)

print("Normalised exponentiated values:")
print(norm_values)
print("Sum of normalised values:", sum(norm_values))
# -

# ### We can achieve the same thing using NumPy

# +
import numpy as np

# Values from the previous output when we described
# what a neural network is
layer_outputs = [4.8, 1.21, 2.385]

# for every value in layer_outputs, calculate the exponential value
exp_values = np.exp(layer_outputs)
print('exponentiated values:')
print(exp_values)

# Now normalise values
norm_values = exp_values / np.sum(exp_values)
print('normalized exponentiated values:')
print(norm_values)
print('sum of normalized values:', np.sum(norm_values))
# -

# **There are two main pervasive challenges with neural networks: “dead neurons” and very large numbers (referred to as “exploding” values). “Dead” neurons and enormous numbers can wreak havoc down the line and render a network useless over time. The exponential function used in the softmax activation is one of the sources of exploding values. Let’s see some examples of how and why this can easily happen:**

print(np.exp(10))
print(np.exp(100))
print(np.exp(1000))

# It doesn’t take a very large number, in this case a mere 1,000, to cause an overflow error.
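# As a quick numerical sanity check (a small sketch added here for illustration, reusing the example values rather than any layer code), we can confirm the two properties we rely on to avoid this overflow: the exponential of a very negative number approaches 0, the exponential of 0 is exactly 1, and subtracting a constant from every input leaves the normalized (softmax) result unchanged:

# +
import numpy as np

demo_outputs = np.array([4.8, 1.21, 2.385])

# exp tends to 0 for very negative inputs and equals 1 at 0
print(np.exp(-np.inf), np.exp(0))  # 0.0 1.0

# softmax of the raw values
probs = np.exp(demo_outputs) / np.sum(np.exp(demo_outputs))

# softmax after subtracting the largest value; the biggest exponent is now e**0 = 1,
# so the exponentiation can never overflow
shifted = demo_outputs - np.max(demo_outputs)
probs_shifted = np.exp(shifted) / np.sum(np.exp(shifted))

print(probs)
print(probs_shifted)                       # same values as probs
print(np.allclose(probs, probs_shifted))   # True
# -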
We know the exponential function tends toward 0 as its input value approaches negative infinity, and the output is 1 when the input is 0 (as shown in the chart earlier): # # # **With Softmax, thanks to the normalization, we can subtract any value from all of the inputs, and it will not change the output:** # + #Softmax activation class Activation_Softmax: #forward pass def forward(self,inputs): #get unnormalised probab exp_values = np.exp(inputs - np.max(inputs, axis=1, keepdims=True)) #normalise them for each sample probabilities = exp_values / np.sum(exp_values, axis = 1, keepdims=True) self.output = probabilities # - # # + softmax = Activation_Softmax() softmax.forward([[1,2,3]]) print(softmax.output) # + softmax.forward([[-2, -1, 0]]) # subtracted 3 - max from the list print(softmax.output) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="NgIo7_iIhmnR" colab_type="code" colab={} ## Ecrit par ## pour le cours Deep Learning du Data Engineering Master de l'EHTP ## Avril 2020 # + id="mSMcZsei_IkH" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt # + [markdown] id="bm0X7gA3hT-l" colab_type="text" # ## Exercice 1 # + id="q0l781677q3z" colab_type="code" outputId="b3d174fc-7937-4af1-e9c9-48fad8c5d21c" colab={"base_uri": "https://localhost:8080/", "height": 212} # !wget https://leseco.ma/wp-content/uploads/2018/03/ehtp.jpg # + id="O4mWfbsK-W7Y" colab_type="code" outputId="1de9fc6d-c618-4651-fdfd-afcf94d51257" colab={"base_uri": "https://localhost:8080/", "height": 34} from keras.preprocessing.image import array_to_img, img_to_array, load_img # + id="X03o2WkJ-cq6" colab_type="code" colab={} im = load_img('ehtp.jpg' ) imRGB = img_to_array(im) # + id="fBS12Xa_-hNQ" colab_type="code" outputId="0aa5ece9-cf03-4fd4-c5fa-2567aeeff3d9" colab={"base_uri": "https://localhost:8080/", "height": 34} img_to_array(im).shape # + id="FDyh7jd2_w6N" colab_type="code" colab={} imR, imG, imB = imRGB.copy(), imRGB.copy(), imRGB.copy() imR[:,:,[1,2]]=0. imG[:,:,[0,2]]=0. imB[:,:,[0,1]]=0. 
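# The image array loaded above has shape (height, width, 3), one plane per RGB channel, so zeroing two of the three channel planes in each copy keeps only the red, green, or blue component that is visualized in the next cell. A quick check (added here for illustration; it only assumes the `imRGB`, `imR`, `imG`, `imB` arrays from the previous cell):

# +
# imR should now contain non-zero data only in channel 0 (red),
# imG only in channel 1 (green) and imB only in channel 2 (blue)
print(imRGB.shape)                              # (height, width, 3)
print(imR[:, :, 1].max(), imR[:, :, 2].max())   # both 0.0
print(imG[:, :, 0].max(), imG[:, :, 2].max())   # both 0.0
print(imB[:, :, 0].max(), imB[:, :, 1].max())   # both 0.0
# -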
# + id="SxemcG9q-1oQ" colab_type="code" outputId="ca512672-d5a7-45d7-feff-faec8e82288a" colab={"base_uri": "https://localhost:8080/", "height": 211} plt.figure(figsize=(30,20)) plt.subplot(1,4,1) plt.imshow(im) plt.subplot(1,4,2) plt.imshow(array_to_img(imR)) plt.subplot(1,4,3) plt.imshow(array_to_img(imG)) plt.subplot(1,4,4) plt.imshow(array_to_img(imB)) # + [markdown] id="k3x1_ZwLhY5E" colab_type="text" # ## Exercice 2 # + [markdown] id="UsiItu-g60fe" colab_type="text" # un réseau appris sur MNIST permuté peut aussi prédire avec un bon score les images permuté de MNIST et ça montre que le MLP ne donne pas d'importance à la structure locale # + id="ZvsOH-FHzrXf" colab_type="code" colab={} import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout from keras.optimizers import RMSprop, Adam import matplotlib.pyplot as plt # + id="-sCyXzXny7I7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="700a4337-cf77-4343-e866-d5239a450828" # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() # + id="mR9A6qsBy7MM" colab_type="code" colab={} rng = np.random.RandomState(42) # + id="w7cy11H6y7O6" colab_type="code" colab={} perm = rng.permutation(784) # + id="RbU144YC0QFT" colab_type="code" colab={} x_train_perm = x_train.reshape(60000, 784)[:,perm].reshape(60000, 28, 28) x_test_perm = x_test.reshape(10000, 784)[:,perm].reshape(10000, 28, 28) # + id="WXYdZBqV0QIE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="730ddb76-f54e-4a72-a97f-50ac3e2ed4fc" randindices=np.random.randint(x_train.shape[0],size=3) plt.figure() plt.subplot(2,3,1) plt.imshow(x_train[randindices[0],:,:], cmap='gray') plt.subplot(2,3,2) plt.imshow(x_train[randindices[1],:,:], cmap='gray') plt.subplot(2,3,3) plt.imshow(x_train[randindices[2],:,:], cmap='gray') ####### plt.subplot(2,3,4) plt.imshow(x_train_perm[randindices[0],:,:], cmap='gray') plt.subplot(2,3,5) plt.imshow(x_train_perm[randindices[1],:,:], cmap='gray') plt.subplot(2,3,6) plt.imshow(x_train_perm[randindices[2],:,:], cmap='gray') # + id="gTo9G1id1aUS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="fd19ed88-206f-40d2-f7ed-f2010726823d" x_train_perm = x_train_perm.reshape(60000, 784) x_test_perm = x_test_perm.reshape(10000, 784) x_train_perm = x_train_perm.astype('float32') x_test_perm = x_test_perm.astype('float32') x_train_perm /= 255 x_test_perm /= 255 print(x_train_perm.shape[0], 'train samples') print(x_test_perm.shape[0], 'test samples') # + id="ia45f62U2J54" colab_type="code" colab={} num_classes = 10 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) # + id="5x9kQ5hi2NBZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="a9983822-09d8-48f2-c03d-a33a8c8a66ca" model = Sequential() model.add(Dense(512, activation='relu', input_shape=(784,))) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(num_classes, activation='softmax')) model.summary() # + id="Nu5vUJXa2NKI" colab_type="code" colab={} model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3), metrics=['accuracy']) # + id="Qu7_D5gD2NO9" colab_type="code" colab={} batch_size = 128 epochs = 20 # + id="Zy-x0eJ42NR8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 745} 
outputId="7117eb32-054a-428b-d7ce-8f32e21b2f3b" history = model.fit(x_train_perm, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test_perm, y_test)) # + id="qwvG4KL12ZSH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="df2d9321-c981-4e50-c8f8-3ea0361e33a0" score = model.evaluate(x_test_perm, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # + id="WEl2LS1b2gHA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="eeeb3a72-8cc3-4cd6-b552-7b6944117e11" randindices=np.random.randint(x_test.shape[0],size=3) plt.figure() plt.subplot(2,3,1) plt.imshow(x_test[randindices[0],:,:], cmap='gray') plt.subplot(2,3,2) plt.imshow(x_test[randindices[1],:,:], cmap='gray') plt.subplot(2,3,3) plt.imshow(x_test[randindices[2],:,:], cmap='gray') ####### plt.subplot(2,3,4) plt.imshow(np.reshape(x_test_perm[randindices[0],:],(28,28)), cmap='gray') plt.subplot(2,3,5) plt.imshow(np.reshape(x_test_perm[randindices[1],:],(28,28)), cmap='gray') plt.subplot(2,3,6) plt.imshow(np.reshape(x_test_perm[randindices[2],:],(28,28)), cmap='gray') # + id="UaAWR5sU2gKI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="18fb7c65-2b9f-43cc-ecfe-23edc43c1ed1" predictions=model.predict(x_test_perm) print("Le réseau reconnaît le chiffre "+ str(np.argmax(predictions[randindices[0],:])) + ' avec une confiance ' + str(np.max(predictions[randindices[0],:]) * 100) + '%.') print("Le réseau reconnaît le chiffre "+ str(np.argmax(predictions[randindices[1],:])) + ' avec une confiance ' + str(np.max(predictions[randindices[1],:]) * 100) + '%.') print("Le réseau reconnaît le chiffre "+ str(np.argmax(predictions[randindices[2],:])) + ' avec une confiance ' + str(np.max(predictions[randindices[2],:]) * 100) + '%.') # + [markdown] id="nDQ5vmH37HaG" colab_type="text" # # Exercice 3 : LeNet5 pour MNIST # + [markdown] id="NPASvcBGWRA8" colab_type="text" # a little bit modified version of LeNet5 # + id="HvHT0x5FVB81" colab_type="code" colab={} from keras.layers import Conv2D, MaxPooling2D, Flatten # + id="q_fLUZs_Vrlf" colab_type="code" colab={} # Reshape the dataset into 4D array x_train = x_train.reshape(x_train.shape[0], 28,28,1) x_test = x_test.reshape(x_test.shape[0], 28,28,1) # + id="pdcVTvkz7Iqk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 443} outputId="19dae512-47d4-41fb-da95-8126b4c8b594" model = keras.Sequential() model.add(Conv2D(filters=6, kernel_size=(5, 5), activation='relu', padding='valid', input_shape=(28,28,1))) model.add(MaxPooling2D()) model.add(Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='valid')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(units=120, activation='relu')) model.add(Dense(units=84, activation='relu')) model.add(Dense(units=10, activation = 'softmax')) model.summary() # + id="IgSChBBs2XRm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 745} outputId="70acf4b5-505b-4fc8-9486-244135e212ca" model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) # + id="Is1aDwnY2aCL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="c11fe66b-517e-4075-be1f-17a996b21993" plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='val') 
plt.legend() # + id="pVAZ4iIz2ZWK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="99a20e50-a58f-4981-8a7a-4b9850a791a2" score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # + id="2TB_fNKR7i9S" colab_type="code" colab={} # + [markdown] id="qyJ3s_OS7kE4" colab_type="text" # # Exercice 4: LSTM pour prédicition de nombre de voyageurs d'une compagnie aérienne # # Source: https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ # + id="yFgCUkTC77sa" colab_type="code" colab={} from keras.layers import LSTM import pandas as pd from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error # + id="189srTNo7vlz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 212} outputId="0bbf983a-0e1d-448a-a776-769560db1f07" # !wget https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv # + id="okxFjwmV8BRH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 197} outputId="9b021239-6f0d-4750-e90f-9d7f6c238832" dataset = pd.read_csv('airline-passengers.csv', usecols=[1], engine='python') dataset.head() # + id="hkbFy0cS_WIg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="91a1fa4f-26d0-46a9-b267-96c3ebfdc82f" plt.plot(dataset) # + id="D2s_soj_AaHR" colab_type="code" colab={} # normalize the dataset scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # + id="3jZWyopoEcR2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3a021ad1-2995-403e-8aca-5fe530b84946" # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] print(len(train), len(test)) # + id="YRgnRyHNEcUl" colab_type="code" colab={} # convert an array of values into a dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return np.array(dataX), np.array(dataY) # + id="9dIXNATLEcZp" colab_type="code" colab={} # reshape into X=t and Y=t+1 look_back = 1 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # + id="cWPIBg82EcX9" colab_type="code" colab={} # reshape input to be [samples, time steps, features] trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # + id="z_0EvJVhEmlz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="9252671a-be23-49df-a20c-faf57f03e1f4" # create and fit the LSTM network model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.summary() # + id="32y1ZoXTEmon" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a6da28ce-c750-4242-fc0b-5a559ccf637d" model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2) # + id="dtnhYdObEmrY" colab_type="code" colab={} # make predictions trainPredict = model.predict(trainX) testPredict = model.predict(testX) # invert predictions trainPredict = scaler.inverse_transform(trainPredict) #trainY = scaler.inverse_transform([trainY]) testPredict = 
scaler.inverse_transform(testPredict) #testY = scaler.inverse_transform([testY]) # calculate root mean squared error #trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0])) #print('Train Score: %.2f RMSE' % (trainScore)) #testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0])) #print('Test Score: %.2f RMSE' % (testScore)) # + id="oA6fjz0VEmuk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="708ac4fa-e74d-4e70-ed9a-a0eb4ebc1c2c" # shift train predictions for plotting trainPredictPlot = np.empty_like(dataset) trainPredictPlot[:, :] = np.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = np.empty_like(dataset) testPredictPlot[:, :] = np.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(scaler.inverse_transform(dataset)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Notebook - Fractopo – KB11 Fracture Network Analysis # # `fractopo` enables fast and scalable analysis of two-dimensional georeferenced fracture and lineament datasets. These are typically created with remote sensing using a variety of background materials: fractures can be extracted from outcrop orthomosaics and lineaments from digital elevation models (DEMs). Collectively both lineament and fracture datasets can be referred to as *trace* datasets or as *fracture networks*. # # `fractopo` implements suggestions for structural geological studies by [Peacock and Sanderson (2018)](https://doi.org/10.1016/j.earscirev.2018.06.006): # # > **Basic geological descriptions should be followed by measuring their # geometries and topologies, understanding their age relationships,** # kinematic and mechanics, and developing a realistic, data-led model # for related fluid flow. # # `fractopo` characterizes the individual and overall geometry and topology of fractures and the fracture network. Furthermore the age relations are investigated with determined topological cross-cutting and abutting relationship between fracture sets. # # Whether `fractopo` evolves to implement all the steps in the quote remains to be seen! The functionality grows as more use cases require implementation. # ## Development imports (just skip to next heading, Data, if not interested!) # Avoid cluttering outputs with warnings. # + import warnings warnings.filterwarnings("ignore") # - # `geopandas` is the main module which `fractopo` is based on. It along with `shapely` and `pygeos` implement all spatial operations required for two-dimensional fracture network analysis. `geopandas` further implements all input-output operations like reading and writing spatial datasets (shapefiles, GeoPackages, GeoJSON, etc.). import geopandas as gpd # `geopandas` uses `matplotlib` for visualizing spatial datasets. import matplotlib.pyplot as plt # During local development the imports might be a bit messy as seen here. Not interesting for end-users. # + # This cell's contents only for development purposes. 
from importlib.util import find_spec if find_spec("fractopo") is None: import sys sys.path.append("../../") # - # ## Data # # Fracture network data consists of georeferenced lineament or fracture traces, manually or automatically digitized, and a target area boundary that delineates the area in which fracture digiziting has been done. The boundary is important to handle edge effects in network analysis. `fractopo` only has a stub (and untested) implementation for cases where no target area is given so I strongly recommend to always delineate the traced fractures and pass the target area to `Network`. # # `geopandas` is used to read and write spatial datasets. Here we use `geopandas` to both download and load trace and area datasets that are hosted on GitHub. A more typical case is that you have local files you wish to analyze in which case you can replace the url string with a path to the local file. E.g. # # ``` python # # Local trace data # trace_data_url = "~/data/traces.gpkg" # ``` # # The example dataset here is from an island south of Loviisa, Finland. The island only consists of outcrop quite well polished by glaciations. The dataset is in `ETRS-TM35FIN` coordinate reference system. # + # Trace and target area data available on GitHub trace_data_url = "https://raw.githubusercontent.com/nialov/fractopo/master/tests/sample_data/KB11/KB11_traces.geojson" area_data_url = "https://raw.githubusercontent.com/nialov/fractopo/master/tests/sample_data/KB11/KB11_area.geojson" # Use geopandas to load data from urls traces = gpd.read_file(trace_data_url) area = gpd.read_file(area_data_url) # Name the dataset name = "KB11" # - # ### Visualizing trace map data # # `geopandas` has easy methods for plotting spatial data along with data coordinates. The plotting is based on `matplotlib`. # + # Initialize the figure and ax in which data is plotted fig, ax = plt.subplots(figsize=(9, 9)) # Plot the loaded trace dataset consisting of fracture traces. traces.plot(ax=ax, color="blue") # Plot the loaded area dataset that consists of a single polygon that delineates the traces. area.boundary.plot(ax=ax, color="red") # Give the figure a title ax.set_title(f"{name}, Coordinate Reference System = {traces.crs}") # - # ## Network # # So far we have not used any `fractopo` functionality, just `geopandas`. Now we use the `Network` class to create `Network` instances that can be thought of as abstract representations of fracture networks. The fracture network contains traces and a target area boundary delineating the traces. # # To characterize the topology of a fracture network `fractopo` determines the topological branches and nodes ([Sanderson and Nixon 2015](https://doi.org/10.1016/j.jsg.2015.01.005)). # # - Nodes consist of trace endpoints which can be isolated or snapped to end at another trace. # - Branches consist of every trace segment between the aforementioned nodes. # # Automatic determination of branches and nodes is determined with the `determine_branches_nodes` keyword. If given as `False`, they are not determined. You can still use the `Network` object to investigate geometric properties of just the traces. # # `Network` initialization should be supplied with information regarding the trace dataset: # # - `truncate_traces` # # - If you wish to only characterize the network within the target area boundary, the input traces should be cropped to the target area boundary. This is done when `truncate_traces` is given as `True.` `True` recommended. 
# # - `circular_target_area` # # - If the target area is a circle `circular_target_area` should be given as `True`. A circular target area is recommended to avoid orientation bias in node counting. # # - `snap_threshold` # # - To determine topological relationships between traces the abutments between traces should be snapped to some tolerance. This tolerance can be given here. E.g. when digitizing in QGIS with snapping turned on, the tolerance is probably much lower than even `0.001`. This represents the lowest distance between nodes that will be interpreted by `fractopo`. If you are doing millimetre scale or lower interpretations you might have to lower this value, otherwise `0.001` is probably fine. # # - The trace validation functionality of `fractopo` can be (and should be) used to check that there are no topological errors within a certain tolerance. # Import the Network class from fractopo from fractopo import Network # Create Network and automatically determine branches and nodes # The Network instance is saved as kb11_network variable. kb11_network = Network( traces, area, name=name, determine_branches_nodes=True, truncate_traces=True, circular_target_area=True, snap_threshold=0.001, ) # ### Visualizing fracture network branches and nodes # # We can similarly to the traces visualize the branches and nodes with `geopandas` plotting. # + # Import identifier strings of topological branches and nodes from fractopo.general import CC_branch, CI_branch, II_branch, X_node, Y_node, I_node # Function to determine color for each branch and node type def assign_colors(feature_type: str): if feature_type in (CC_branch, X_node): return "green" if feature_type in (CI_branch, Y_node): return "blue" if feature_type in (II_branch, I_node): return "black" return "red" # - # | Branch or Node Type | Color | # |---------------------|-------| # | C - C, X | Green | # | C - I, Y | Blue | # | I - I, I | Black | # | Other | Red | # #### Branches fix, ax = plt.subplots(figsize=(9, 9)) kb11_network.branch_gdf.plot( colors=[assign_colors(bt) for bt in kb11_network.branch_types], ax=ax ) area.boundary.plot(ax=ax, color="red") # #### Nodes fix, ax = plt.subplots(figsize=(9, 9)) # Traces kb11_network.trace_gdf.plot(ax=ax, linewidth=0.5) # Nodes kb11_network.node_gdf.plot( c=[assign_colors(bt) for bt in kb11_network.node_types], ax=ax, markersize=10 ) area.boundary.plot(ax=ax, color="red") # ## Geometric Fracture Network Characterization # # The most basic geometric properties of traces are their **length** and **orientation**. # # **Length** is the overall travel distance along the digitized trace. The length of traces individually is usually not interesting but the value **distribution** of all of the lengths is ([Bonnet et al. 2001](https://doi.org/10.1029/1999RG000074)). `fractopo` uses another Python package, `powerlaw`, for determining power-law, lognormal and exponential distribution fits. The wrapper around `powerlaw` is thin and therefore I urge you to see its [documentation](https://github.com/jeffalstott/powerlaw) and associated [article](https://doi.org/10.1371/journal.pone.0095816) for more info. # # **Orientation** of a trace (or branch, or any line) can be defined in multiple ways that approach the same result when the line is **sublinear**: # # - Draw a straight line between the start and endpoints of the trace and calculate the orientation of that line. # # - This is the approach used in `fractopo`. Simple, but when the trace is curvy enough the simplification might be detrimental to analysis. 
# # - Plot each coordinate point of a trace and fit a linear regression trend line. Calculate the orientation of the trend line. # # - Calculate the orientation of each segment between coordinate points resulting in multiple orientation values for a single trace. # ### Length distributions # #### Traces # Plot length distribution fits (powerlaw, exponential and lognormal) of fracture traces using powerlaw fit, fig, ax = kb11_network.plot_trace_lengths() # Fit properties print(f"Automatically determined powerlaw cut-off: {fit.xmin}") print(f"Powerlaw exponent: {fit.alpha - 1}") print( f"Compare powerlaw fit to lognormal: R, p = {fit.distribution_compare('power_law', 'lognormal')}" ) # #### Branches # Length distribution of branches fit, fig, ax = kb11_network.plot_branch_lengths() # Fit properties print(f"Automatically determined powerlaw cut-off: {fit.xmin}") print(f"Powerlaw exponent: {fit.alpha - 1}") print( f"Compare powerlaw fit to lognormal: R, p = {fit.distribution_compare('power_law', 'lognormal')}" ) # ### Rose plots # # A rose plot is a histogram where the bars have been oriented based on pre-determined bins. `fractopo` rose plots are length-weighted and equal-area. Length-weighted means that each bin contains the total length of traces or branches within the orientation range of the bin. # # The method for calculating the bins and reasoning for using **equal-area** rose plots is from publication by [ (2020)](https://doi.org/10.1016/j.earscirev.2019.103055). # Plot azimuth rose plot of fracture traces and branches azimuth_bin_dict, fig, ax = kb11_network.plot_trace_azimuth() # Plot azimuth rose plot of fracture branches azimuth_bin_dict, fig, ax = kb11_network.plot_branch_azimuth() # ## Topological Fracture Network Characterization # # The determination of branches and nodes are essential for characterizing the topology of a fracture network. The topology is the collection of properties of the traces that do not change when the traces are transformed continously i.e. the traces are not cut but are extended or shrinked. In geological terms the traces can go through ductile transformation without losing their topological properties but not brittle transformation. Furthermore this means the end topology of the traces is a result of brittle transformation(s). # # At its simplest the proportion of different types of branches and nodes are used to characterize the topology. # # Branches can be categorized into three main categories: # # - **C–C** is connected at both endpoints # # - **C-I** is connected at one endpoint # # - **I-I** is not connected at either endpoint # # Nodes can be similarly categorized into three categories: # # - **X** represents intersection between two traces # # - **Y** represents abutment of one trace to another # # - **I** represents isolated termination of a trace # # Furthermore **E** node and any **E**-containing branch classification (e.g. **I-E**) are related to the trace area boundary. Branches are always cropped to the boundary and branches that are cut then have a **E** node as end endpoint. # ### Node and branch proportions # # The proportion of the different types of nodes and branches have direct implications for the overall connectivity of a fracture network (Sanderson and Nixon 2015). # # The proportions are plotted on ternary plots. The plotting uses [python-ternary](https://github.com/marcharper/python-ternary). 
kb11_network.node_counts # Plot ternary XYI-node proportion plot fig, ax, tax = kb11_network.plot_xyi() kb11_network.branch_counts # Plot ternary branch (C-C, C-I, I-I) proportion plot fig, ax, tax = kb11_network.plot_branch() # ### Crosscutting and abutting relationships # # If the geometry and topology of the fracture network are investigated together the cross-cutting and abutting relationships between orientation-binned fracture sets can be determined. Traces can be binned into sets based on their orientation (e.g. N-S oriented traces could belong to Set 1 and E-W oriented traces to Set 2). If the endpoints of the traces of sets are examined the abutment relationship between can be determined i.e. which abuts in which (e.g. does the N-S oriented Set 1 traces abut to E-W oriented Set 2 or do they crosscut each other equal amounts.) # Sets are defaults print(f"Azimuth set names: {kb11_network.azimuth_set_names}") print(f"Azimuth set ranges: {kb11_network.azimuth_set_ranges}") # Plot crosscutting and abutting relationships between azimuth sets figs, fig_axes = kb11_network.plot_azimuth_crosscut_abutting_relationships() # ## Numerical Fracture Network Characterization Parameters # # The quantity, total length and other geometric and topological properties of the traces, branches and nodes within the target area can be determined as numerical values. For the following parameters I refer you to the following articles: # # - [Mauldon et al. 2001](https://doi.org/10.1016/S0191-8141(00)00094-8) # - [Sanderson and Nixon 2015](https://doi.org/10.1016/j.jsg.2015.01.005) # + tags=[] kb11_network.parameters # - # ## Contour Grids # # To visualize the spatial variation of geometric and topological parameter values the network can be sampled with a grid of rectangles. From the center of each rectangle a sampling circle is made which is used to do the actual sampling following [Nyberg et al. 2018 (See Fig. 3F)](https://doi.org/10.1130/GES01595.1). # # Sampling with a contour grid is time-consuming and therefore the following code is not executed within this notebooks by default. The end result is embedded as images. Paste the code from the below cell blocks to a Python cell to execute them. # # To perform sampling with a cell width of 0.75 $m$: # # ~~~python # sampled_grid = network.contour_grid( # cell_width=0.75, # ) # ~~~ # # To visualize results for parameter *Fracture Intensity P21*: # # ~~~python # network.plot_contour(parameter="Fracture Intensity P21", sampled_grid=sampled_grid) # ~~~ # # Result: # # ![*Fracture Intensity P21*](https://raw.githubusercontent.com/nialov/fractopo/master/docs_src/imgs/kb11_contour_grid_p21.png) # # To visualize results for parameter *Connections per Branch*: # # ~~~python # network.plot_contour(parameter="Connections per Branch", sampled_grid=sampled_grid) # ~~~ # # Result: # # ![*Connections per Branch*](https://raw.githubusercontent.com/nialov/fractopo/master/docs_src/imgs/kb11_contour_grid_cpb.png) # ## Tests to verify notebook consistency assert hasattr(kb11_network, "contour_grid") and hasattr(kb11_network, "plot_contour") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Aim # # Inverse detection efficiency completeness calculation on TESS data. Currently only uses geometric probabilities. # # IDEM as in Appendix A of Hsu 2018 (https://arxiv.org/pdf/1803.10787.pdf). 
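# As a brief note on the method (a summary added here; it mirrors the computation carried out in the cells below, with the detection probability reduced to the geometric transit probability), the inverse-detection-efficiency estimate of the occurrence rate in a period-radius bin is
#
# $$ f_{\mathrm{bin}} \approx \frac{1}{N_\star} \sum_{i \in \mathrm{bin}} \frac{1}{p_i}, \qquad p_i \approx \frac{R_\star}{a_i}, $$
#
# where the sum runs over the detected planets falling in the bin, $N_\star$ is the number of searched stars, $a_i$ is the semi-major axis of planet $i$, and $p_i$ is its transit probability. Since only geometric probabilities are used here, $1/p_i$ is the weight applied to each TOI in the binned histogram below.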
# + import pandas as pd from matplotlib import pyplot as plt import numpy as np import sys sys.path.append('..') from tqdm.notebook import tqdm from os import path from dev import utils # %load_ext autoreload # %autoreload 2 # - stellar = utils.get_tess_stellar().drop_duplicates("ID") print("Initial number of targets: {}".format(len(stellar))) stellar = stellar[np.isfinite(stellar['rad'])] stellar = stellar[np.isfinite(stellar['mass'])] print("Cut to {} targets".format(len(stellar))) planetary = utils.get_tois(force_redownload=True) exclude_candidates = True print("Initial number of TOIs: {}".format(len(planetary))) if exclude_candidates: planetary = planetary[np.logical_or(planetary["toi_pdisposition"] == "KP", planetary["toi_pdisposition"] == "CP")] planetary = planetary[np.isfinite(planetary['toi_prad'])] planetary = planetary[np.isfinite(planetary['toi_period'])] print("Cut to {} TOIs".format(len(planetary))) # let's group all these planets into buckets based on planet size and orbital period! radii = np.array(planetary['toi_prad']) _ = plt.hist(np.log(radii), bins=100) periods = np.array(planetary['toi_period']) _ = plt.hist(np.log(periods), bins=100) long_planets = planetary[planetary["toi_period"] > 27.4] for planet in long_planets.iterrows(): planet = planet[1] num_sectors = len(planet["sectors"].split()) if planet["toi_period"] > 27.4 * num_sectors: print("TOI {0} has a period of {1} days and was observed in {2} sectors.".format(planet["toi_id"], planet["toi_period"], num_sectors)) print("Public comment on this TOI: {}".format(planet["comment"])) # exomast: plug in the name of the planet, get all the observations # https://exo.mast.stsci.edu/ plt.plot(np.log(periods), np.log(radii), ".k") plt.xlabel("Log-period (ln days)") plt.ylabel("Log-radius (ln Earth radii)") # planetary = planetary[planetary['TOI Disposition'] == "PC"] combined = pd.merge(planetary, stellar, left_on="TIC", right_on="ID") periods = combined['toi_period'].values prads = combined['toi_prad'].values len(combined) # + def get_a(period, mstar, Go4pi=2945.4625385377644/(4*np.pi*np.pi)): # dfm.io/posts/exopop """ Compute the semi-major axis of an orbit in Solar radii. :param period: the period in days :param mstar: the stellar mass in Solar masses """ return (Go4pi*period*period*mstar) ** (1./3) pgeoms = combined['rad'].values / get_a(periods, combined['mass'].values) # - _ = plt.hist(pgeoms) def pcomp(period, rstars=stellar['rad'].values, mstars=stellar['mass'].values): ''' For each of the stars in the stellar catalog, computes the probability of detection of the planet with the given period and radius Arguments --------- stellar : pd.DataFrame The full stellar catalog, as extracted from the merge on MAST and the MIT TEV database. period : scalar The period value for the TOI, in days. prad : scalar The radius value for the TOI, in R_Earths. Returns ------- pcomps : np.ndarray The probability of each of the detections. 
''' pgeoms = rstars / get_a(period, mstars) return pgeoms ij_filename = '../data/idem_pdets_{}.npy'.format(exclude_candidates) if not path.exists(ij_filename): pdet_ij = np.empty((len(periods), len(stellar))) for i, period in enumerate(tqdm(periods, total=len(combined))): pdet_ij[i] = pcomp(period) np.save(ij_filename, pdet_ij) else: pdet_ij = np.load(ij_filename) i_filename = '../data/idem_pdets_i.npy'.format(exclude_candidates) if not path.exists(i_filename): pdet_i = np.nanmean(pdet_ij, axis=1) np.save(i_filename, pdet_i) else: pdet_i = np.load(i_filename) plt.hist(pdet_i) weights = np.nan_to_num(1 / pgeoms) period_bins = np.array([0.5, 1.25, 2.5, 5, 10, 20, 40, 80, 160, 320]) rp_bins = np.array([0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 4, 6, 8, 12, 16]) counts = np.histogram2d(periods, prads, bins=[period_bins, rp_bins])[0] N = np.histogram2d(periods, prads, bins=[period_bins, rp_bins], weights=weights) f = N[0] / len(stellar) plt.imshow(f) plt.colorbar() _ = plt.xticks(list(range(len(rp_bins))), rp_bins) plt.xlabel(r"Radius ($R_E$)") _ = plt.yticks(list(range(len(period_bins))), period_bins) plt.ylabel("Period (days)") plt.title("TESS completeness: exclude candidates = {}".format(exclude_candidates)) sigmas = np.divide(f, np.sqrt(counts), out=np.zeros_like(f), where=counts!=0) plt.imshow(sigmas) plt.colorbar() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # + [markdown] colab_type="text" id="0-GIXNONlAn6" # # Integers # + [markdown] colab_type="text" id="GfqrqQEClAn8" # *Question 1* # # Find the type of the value of the variables below. # # x = 1 # # x = 1.5 # + colab={} colab_type="code" id="8nSmzEmXlAn9" outputId="fdf6df61-8b46-4660-e8b8-c886fd8e6ec2" x = 1 type(x) # + colab={} colab_type="code" id="4po0bQenlAoB" outputId="0acfb169-f4ef-4877-f644-3e71d6b45044" x = 1.5 type(x) # + [markdown] colab_type="text" id="II-QQMx5lAoD" # *Question 2* # # Divide 7 by 3. # + colab={} colab_type="code" id="CSIJvYF-lAoE" outputId="fcb200ff-f6bf-438d-9ad5-a944d535b218" print ( 7 / 3) # + [markdown] colab_type="text" id="llL3SNt2lAoG" # *Question 3* # # Divide 7 by 3 but the answer should be in integer value. # + colab={} colab_type="code" id="7ccedYWklAoH" outputId="f4d8a77f-3c33-4276-914f-29711e11c3b4" print ( 7 // 3) # + colab={} colab_type="code" id="c8O-PLrflAoI" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''base'': conda)' # name: python3 # --- # ### Word to Vec # + # #! 
pip install plotly_express # - import gensim OHCO = ['book_id', 'chap_num', 'para_num', 'sent_num', 'token_num'] BAG = OHCO[:4] # Paragraphs # BAG = OHCO[:5] # Sentences window = 5 import pandas as pd import numpy as np from gensim.models import word2vec from sklearn.manifold import TSNE import plotly_express as px # %matplotlib inline LIB = pd.read_csv('LIB.csv') for i in LIB['book_title']: if 'Austen' in i: print(i) # + aus_books = [] for i in LIB.iterrows(): if 'austen' in i[1][3]: aus_books.append(i[1][0]) # - aus_books TOKENS = pd.read_csv('TOKEN2.csv') t1=TOKENS corpus1 = t1[~t1.pos.str.match('NNPS?')]\ .groupby(BAG)\ .term_str.apply(lambda x: x.tolist())\ .reset_index()['term_str'].tolist() m1 = word2vec.Word2Vec(corpus1, window=window, min_count=200, workers=4) # coords = pd.DataFrame(index=range(len(model.wv))) # coords['label'] = [w for w in model.wv] # coords['vector'] = coords['label'].apply(lambda x: model.wv.get_vector(x)) coords1 = pd.DataFrame(index=range(len(m1.wv.key_to_index))) coords1['label'] = m1.wv.index_to_key coords1['vector'] = coords1['label'].apply(lambda x: m1.wv.get_vector(x)) tsne_model1 = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23) tsne_values1 = tsne_model1.fit_transform(coords1['vector'].tolist()) coords1['x'] = tsne_values1[:,0] coords1['y'] = tsne_values1[:,1] coords1.head() px.scatter(coords1, 'x', 'y', text='label', height=1000, title='Jane Austen Word Plot').update_traces(mode='text') def complete_analogy_1(A, B, C, n=2): try: return m1.wv.most_similar(positive=[B, C], negative=[A])[0:n] except KeyError as e: print('Error:', e) return None complete_analogy_1('mother', 'herself', 'father') complete_analogy_1('sister', 'me', 'brother') complete_analogy_1('you', 'thought', 'me') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Cotton Plant Diesease Detector # # Description # ## The dataset is organized into three folders (train, val and test) and contains subfolders for each image category . There are 2293 plant with and without diesease images (JPEG) and 4 categories. # # Problem Statement # - Detect Dieseases in Cotton Plant # # Constraints # - False Negative , is the biggest constraint for any botany based ML & DL based problems, we need to minimize this. # - Latancy is not the problem as tradition approches takes weeks. 
# # Benifits # - Predicting damage in eyes without using tradition or having domain expertise # ## Importing Liberaries # + import numpy as np import pickle import cv2 import seaborn as sb from tqdm import tqdm from os import listdir from sklearn.preprocessing import LabelBinarizer from tensorflow.keras.models import Sequential from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.layers import Activation, Flatten, Dropout, Dense from tensorflow.keras import backend as K from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing import image from tensorflow.keras.preprocessing.image import img_to_array from sklearn.preprocessing import MultiLabelBinarizer from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau import tensorflow as tf from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.layers import GlobalAveragePooling2D from tensorflow.keras.layers import Dense from tensorflow.keras.applications.inception_v3 import InceptionV3 from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing import image_dataset_from_directory # - # ## Preparing some variables # + train_dir = '/Users/abuzaid/Downloads/Machine Learning/Pianalystics/Cotton/data/train' val_dir = '/Users/abuzaid/Downloads/Machine Learning/Pianalystics/Cotton/data/val' test_dir = '/Users/abuzaid/Downloads/Machine Learning/Pianalystics/Cotton/data/test' EPOCHS = 20 BATCH_SIZE = 256 #Batch size IMG_SIZE = (150, 150) # Image Size learning_rate = 0.001 train_dataset = image_dataset_from_directory(train_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE,) val_dataset = image_dataset_from_directory(val_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE,) test_dataset = image_dataset_from_directory(test_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE) # - # ## Image Augumentation # + training_datagen = ImageDataGenerator(rescale = 1./255,rotation_range=40,width_shift_range=0.2,height_shift_range=0.2, shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest') val_datagen = ImageDataGenerator(rescale = 1./255) testing_datagen = ImageDataGenerator(rescale = 1./255) # + # Passing the images from datagenerator train_generator = training_datagen.flow_from_directory(train_dir, target_size=(150,150), class_mode='categorical', batch_size=126) # Passing Validation data in datagenator val_generator = val_datagen.flow_from_directory(val_dir, target_size=(150,150), class_mode='categorical', batch_size=126) # Passing Test data in datagenator test_generator = testing_datagen.flow_from_directory(test_dir, target_size=(150,150), class_mode='categorical', batch_size=126) # - # ## EDA # Ploting Diffrent Images class_names = train_dataset.class_names plt.figure(figsize=(10, 10)) for images, labels in train_dataset.take(1): for i in range(len(class_names)): ax = plt.subplot(2, 2, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[i]) plt.axis("off") # ## Barplot # + def plot(dirr): # Simple function to count all images in all three folders according to their classes image_list, label_list = [], [] root_dir = listdir(dirr) plant_count = {} for plant_disease_folder in root_dir: count = 0 plant_disease_image_list = 
listdir(f"{dirr}/{plant_disease_folder}/") for image in plant_disease_image_list: image_directory = f"{dirr}/{plant_disease_folder}/{image}" if image_directory.endswith(".jpg") == True or image_directory.endswith(".JPG") == True: count += 1 plant_count[plant_disease_folder] = count return plant_count train_dat = plot(train_dir) val_dat = plot(val_dir) test_dat = plot(test_dir) dat = val_dat ,test_dat for i in dat: keys = list(i.keys()) for key in keys: val = i.get(key) train_dat[key] =+ val data = train_dat plt.figure(figsize=(10, 7)) plt.bar(x = list(data.keys()), height = list(data.values())) plt.xticks(rotation = 90) plt.show() # - # ## PDF plt.figure(figsize=(10,5)) sb.distplot(list(data.values())) plt.show() # ## Transfer Learning/Creating Model # + # Get the InceptionV3 model so we can do transfer learning base_inception = InceptionV3(weights='imagenet', include_top=False, input_shape=(150, 150, 3)) # Add a global spatial average pooling layer out = base_inception.output out = GlobalAveragePooling2D()(out) out = Dense(512, activation='relu')(out) out = Dense(512, activation='relu')(out) predictions = Dense(len(class_names), activation='softmax')(out) model = Model(inputs=base_inception.input, outputs=predictions) # only if we want to freeze layers for layer in base_inception.layers: layer.trainable = False # Compile opt = Adam(lr=learning_rate, decay=learning_rate / EPOCHS) model.compile(loss='categorical_crossentropy', optimizer=opt,metrics=["accuracy"]) # - earlystoping=tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='min', baseline=None, restore_best_weights=True) history = model.fit(train_generator, epochs=40, steps_per_epoch=10, validation_data = val_generator, verbose = 1, validation_steps=3, callbacks = [earlystoping]) # # 88% Accuracy # + acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) #Train and validation accuracy plt.plot(epochs, acc, 'b', label='Training accurarcy') plt.plot(epochs, val_acc, 'r', label='Validation accurarcy') plt.title('Training and Validation accurarcy') plt.legend() plt.figure() #Train and validation loss plt.plot(epochs, loss, 'b', label='Training loss') plt.plot(epochs, val_loss, 'r', label='Validation loss') plt.title('Training and Validation loss') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.3 64-bit (conda) # name: python383jvsc74a57bd05c6bee9180bd5f215609a25bd8cb44b61e55e337adc560d50a1b40fec4435080 # --- # + import matplotlib.pyplot as plt import scipy as sp from scipy.io import loadmat import numpy as np import time from skimage.transform import rescale, resize, downscale_local_mean from skimage import data import sys sys.path.append("../../../LocalGraphClustering/") sys.path.append("../../../LocalGraphClustering/notebooks/") from localgraphclustering import * import warnings warnings.filterwarnings('ignore') # + import matplotlib.pyplot as plt import scipy as sp from scipy.io import loadmat import numpy as np import time from skimage.transform import rescale, resize, downscale_local_mean from skimage import data import sys sys.path.append("../../../LocalGraphClustering/") sys.path.append("../../../LocalGraphClustering/notebooks/") from localgraphclustering import * import warnings warnings.filterwarnings('ignore') 
import multiprocessing as mp def clutser_improvement(g,ratio,nprocs): records = [] def wrapper(q_in,q_out): while True: seed = q_in.get() if seed is None: break R = [seed] R.extend(g.neighbors(seed)) R = list(set(R)) one_hop = R.copy() for node in one_hop: R.extend(g.neighbors(node)) R = list(set(R)) cond_R = g.set_scores(R)["cond"] mqi_output = flow_clustering(g,R,method="mqi_weighted") print("2-hop exp. - mqi",len(R),cond_R,len(mqi_output[0]),mqi_output[1]) sl_output1 = flow_clustering(g,R,method="sl_weighted",delta=0.01) print("2-hop exp. - sl_output1",len(R),cond_R,len(sl_output1[0]),sl_output1[1]) sl_output2 = flow_clustering(g,R,method="sl_weighted",delta=0.1) print("2-hop exp. - sl_output2",len(R),cond_R,len(sl_output2[0]),sl_output2[1]) sl_output3 = flow_clustering(g,R,method="sl_weighted",delta=1) print("2-hop exp. - sl_output3",len(R),cond_R,len(sl_output3[0]),sl_output3[1]) q_out.put((seed,"2-hop",cond_R,mqi_output[1],sl_output1[1],sl_output2[1],sl_output3[1])) n = g._num_vertices np.random.seed(seed=123) seeds = np.random.choice(range(n),size=int(ratio*n), replace=False) q_in,q_out = mp.Queue(),mp.Queue() procs = [mp.Process(target=wrapper,args=(q_in,q_out)) for i in range(nprocs)] for seed in seeds: q_in.put(seed) for i in range(nprocs): q_in.put(None) for p in procs: p.start() ncounts = 0 t1 = time.time() while ncounts < len(seeds): ncounts += 1 record = q_out.get() print(ncounts,len(seeds),record[2],record[3],record[4],record[5],record[6]) print(time.time()-t1) records.append(record) for p in procs: p.join() return records #graph = GraphLocal("com-orkut.ungraph.txt") graph = GraphLocal("../../dataset/lawlor-spectra-k32.edgelist","edgelist") g = graph.largest_component() ratio = 0.005 records = clutser_improvement(g,ratio,40) # + colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728'] # import pickle # rptr = open("ncp_records_bfs.p","rb") # records = pickle.load(rptr) # rptr.close() improvement = np.array([[i[3]/i[2],i[4]/i[2],i[5]/i[2],i[6]/i[2]] for i in records]) import statsmodels.api as sm from statsmodels.distributions.empirical_distribution import ECDF f, ax = plt.subplots(1,1,figsize=(8,4),gridspec_kw={"wspace":0.05}) ax_inset = f.add_axes([0.2,0.6,0.2,0.2]) x_inset = np.array(range(0,101))*0.01 ax_inset.set_xlim(-0.0001,1.02) ax_inset.set_ylim(-0.0001,1.02) ax.set_xlim(-0.0001,1) ax.set_ylim(-0.0001,6) for i in range(improvement.shape[1]): subset = improvement[:,i] kde = sm.nonparametric.KDEUnivariate(subset) kde.fit(kernel="tri",fft=False,bw=0.03,cut=0,clip=(-1*float("inf"),1)) ax.plot(kde.support,kde.density,color=colors[i],linewidth=3) ax.hist(subset,color=colors[i],bins=100,density=True,alpha=0.2) ecdf = ECDF(subset) ax_inset.plot(x_inset,ecdf(x_inset),color=colors[i],linewidth=3) ax_inset.set_title("CDF") ax_inset.spines['top'].set_visible(False) ax_inset.spines['right'].set_visible(False) # ax_inset.spines['left'].set_visible(False) # ax_inset.spines['bottom'].set_visible(False) ax.set_xlabel(r"$\phi($"+"Improved) / "+r"$\phi($"+"2-hop BFS)",fontsize=15) #ax.set_title("Conductance improvement",fontsize=15) ax.set_ylabel("Probability density",fontsize=15) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='y', which='both', length=5,width=2) ax.tick_params(axis='x', which='both', length=5,width=2) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(15) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(15) 
f.legend(ax.get_lines(),["MQI",r"LFI ($\delta=0.01$)",r"LFI ($\delta=0.1$)",r"LFI ($\delta=1$)"], fancybox=True, shadow=True,fontsize=15,bbox_to_anchor=(0.7,0.95)) f.savefig("cluster_improvement_BFS.pdf",format='pdf',bbox_inches='tight') # + f, ax = plt.subplots(1,1,figsize=(8,4),gridspec_kw={"wspace":0.05}) ax_inset = f.add_axes([0.2,0.6,0.2,0.2]) x_inset = np.array(range(0,101))*0.01 ax_inset.set_xlim(-0.0001,1.02) ax_inset.set_ylim(-0.0001,1.02) ax.set_xlim(-0.0001,1) ax.set_ylim(-0.0001,6) for i in range(1,improvement.shape[1]): subset = improvement[:,i] kde = sm.nonparametric.KDEUnivariate(subset) kde.fit(kernel="tri",fft=False,bw=0.03,cut=0,clip=(-1*float("inf"),1)) ax.plot(kde.support,kde.density,color=colors[i],linewidth=3) ax.hist(subset,color=colors[i],bins=100,density=True,alpha=0.2) ecdf = ECDF(subset) ax_inset.plot(x_inset,ecdf(x_inset),color=colors[i],linewidth=3) f.legend(ax.get_lines(),[r"LFI ($\delta=0.01$)",r"LFI ($\delta=0.1$)",r"LFI ($\delta=1$)"], fancybox=True, shadow=True,fontsize=15,bbox_to_anchor=(0.72,0.9)) ax_inset.set_title("CDF") ax_inset.spines['top'].set_visible(False) ax_inset.spines['right'].set_visible(False) ax.set_xlabel(r"$\phi($"+"LocalFlowImprove) / "+r"$\phi($"+"MQI)",fontsize=15) #ax.set_title("Conductance improvement",fontsize=15) ax.set_ylabel("Probability density",fontsize=15) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='y', which='both', length=5,width=2) ax.tick_params(axis='x', which='both', length=5,width=2) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(15) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(15) plt.show() f.savefig("cluster_improvement_MQI_BFS.pdf",format='pdf',bbox_inches='tight') # + import multiprocessing as mp def clutser_improvement(g,ratio,rholist,nprocs): records = [] def wrapper(q_in,q_out): while True: seed = q_in.get() if seed is None: break gamma = 0.01/0.99 alpha = 1.0-1.0/(1.0+gamma) for rho in rholist: l1reg_output = spectral_clustering(g,[seed],rho=rho,alpha=alpha,method="l1reg-rand",cpp=False) mqi_output = flow_clustering(g,l1reg_output[0],method="mqi_weighted") sl_output1 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=0.01) sl_output2 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=0.1) sl_output3 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=1) print(rho,len(l1reg_output[0]),len(mqi_output[0]),len(sl_output1[0]),len(sl_output2[0]),len(sl_output3[0])) q_out.put((rho,"node",l1reg_output[1],mqi_output[1],sl_output1[1],sl_output2[1],sl_output3[1])) for rho in rholist: R = [seed] R.extend(g.neighbors(seed)) l1reg_output = spectral_clustering(g,R,rho=rho,alpha=alpha,method="l1reg-rand",cpp=False) mqi_output = flow_clustering(g,l1reg_output[0],method="mqi_weighted") sl_output1 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=0.01) sl_output2 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=0.1) sl_output3 = flow_clustering(g,l1reg_output[0],method="sl_weighted",delta=1) print(rho,len(l1reg_output[0]),len(mqi_output[0]),len(sl_output1[0]),len(sl_output2[0]),len(sl_output3[0])) q_out.put((rho,"set",l1reg_output[1],seed)) n = g._num_vertices np.random.seed(seed=123) seeds = np.random.choice(range(n),size=int(ratio*n), replace=False) q_in,q_out = mp.Queue(),mp.Queue() procs = [mp.Process(target=wrapper,args=(q_in,q_out)) for i in range(nprocs)] for seed in seeds: q_in.put(seed) for i in 
range(nprocs): q_in.put(None) for p in procs: p.start() ncounts = 0 t1 = time.time() while ncounts < len(rholist)*len(seeds): ncounts += 1 record = q_out.get() print(ncounts,len(rholist)*len(seeds),record[2]) print(time.time()-t1) records.append(record) for p in procs: p.join() return records #graph = GraphLocal("com-orkut.ungraph.txt") graph = GraphLocal("../../dataset/lawlor-spectra-k32.edgelist","edgelist") g = graph.largest_component() ratio = 0.001 rholist = [1.0e-5,1.0e-6,1.0e-7] records = clutser_improvement(g,ratio,rholist,40) # + # import pickle # rptr = open("ncp_records8.p","rb") # records = pickle.load(rptr) # rptr.close() improvement = np.array([[i[3]/i[2],i[4]/i[2],i[5]/i[2],i[6]/i[2]] for i in records]) f, ax = plt.subplots(1,1,figsize=(8,4),gridspec_kw={"wspace":0.05}) ax_inset = f.add_axes([0.2,0.6,0.2,0.2]) x_inset = np.array(range(0,101))*0.01 ax_inset.set_xlim(-0.0001,1.02) ax_inset.set_ylim(-0.0001,1.02) ax.set_xlim(-0.0001,1) ax.set_ylim(-0.0001,6) for i in range(improvement.shape[1]): subset = improvement[:,i] kde = sm.nonparametric.KDEUnivariate(subset) kde.fit(kernel="tri",fft=False,bw=0.03,cut=0,clip=(-1*float("inf"),1)) ax.plot(kde.support,kde.density,color=colors[i],linewidth=3) ax.hist(subset,color=colors[i],bins=100,density=True,alpha=0.2) ecdf = ECDF(subset) ax_inset.plot(x_inset,ecdf(x_inset),color=colors[i],linewidth=3) ax_inset.set_title("CDF") ax_inset.spines['top'].set_visible(False) ax_inset.spines['right'].set_visible(False) # ax_inset.spines['left'].set_visible(False) # ax_inset.spines['bottom'].set_visible(False) ax.set_xlabel(r"$\phi($"+"Improved) / "+r"$\phi($"+"Seeded PR)",fontsize=15) #ax.set_title("Conductance improvement",fontsize=15) ax.set_ylabel("Probability density",fontsize=15) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='y', which='both', length=5,width=2) ax.tick_params(axis='x', which='both', length=5,width=2) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(15) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(15) f.legend(ax.get_lines(),["MQI",r"LFI ($\delta=0.01$)",r"LFI ($\delta=0.1$)",r"LFI ($\delta=1$)"], fancybox=True, shadow=True,fontsize=15,bbox_to_anchor=(0.7,0.95)) f.savefig("cluster_improvement.pdf",format='pdf',bbox_inches='tight') # + f, ax = plt.subplots(1,1,figsize=(8,4),gridspec_kw={"wspace":0.05}) ax_inset = f.add_axes([0.2,0.6,0.2,0.2]) x_inset = np.array(range(0,101))*0.01 ax_inset.set_xlim(-0.0001,1.02) ax_inset.set_ylim(-0.0001,1.02) ax.set_xlim(-0.0001,1) ax.set_ylim(-0.0001,6) for i in range(1,improvement.shape[1]): subset = improvement[:,i] kde = sm.nonparametric.KDEUnivariate(subset) kde.fit(kernel="tri",fft=False,bw=0.03,cut=0,clip=(-1*float("inf"),1)) ax.plot(kde.support,kde.density,color=colors[i],linewidth=3) ax.hist(subset,color=colors[i],bins=100,density=True,alpha=0.2) ecdf = ECDF(subset) ax_inset.plot(x_inset,ecdf(x_inset),color=colors[i],linewidth=3) f.legend(ax.get_lines(),[r"LFI ($\delta=0.01$)",r"LFI ($\delta=0.1$)",r"LFI ($\delta=1$)"], fancybox=True, shadow=True,fontsize=15,bbox_to_anchor=(0.72,0.9)) ax_inset.set_title("CDF") ax_inset.spines['top'].set_visible(False) ax_inset.spines['right'].set_visible(False) ax.set_xlabel(r"$\phi($"+"LocalFlowImprove) / "+r"$\phi($"+"MQI)",fontsize=15) #ax.set_title("Conductance improvement",fontsize=15) ax.set_ylabel("Probability density",fontsize=15) ax.spines['top'].set_visible(False) 
ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='y', which='both', length=5,width=2) ax.tick_params(axis='x', which='both', length=5,width=2) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(15) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(15) plt.show() f.savefig("cluster_improvement_MQI.pdf",format='pdf',bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Jupyter Notebook test # # Uploaded 10/19/2019 # + # !python -V import numpy as np import matplotlib.pyplot as plt # %matplotlib notebook # - from mpl_toolkits import mplot3d fig = plt.figure() ax = plt.axes(projection='3d') # + ax = plt.axes(projection='3d') # Data 3d zline = np.linspace(0, 15, 1000) # linearly spaced, 0 to 15, 1000 points) xline = np.sin(zline) # sine of zline yline = np.cos(zline) # cosine of zline ax.plot3D(xline, yline, zline, 'gray') # Data for scattered points along the line-spiral zdata = 15 * np.random.random(100) # 100 random points along z-axis. xdata = np.sin(zdata) + 0.1 * np.random.randn(100) # random perturbation on x-axis. ydata = np.cos(zdata) + 0.1 * np.random.randn(100) # random perturbation on y-axis. # Spelling change from Github edit window -- worked! Did not corrupt my Jupyter notebook. ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens'); # - # ### 3D Contour Plots # + def f(x, y): return np.sin(np.sqrt(x**2 + y**2)) # sine wave of circle. x = np.linspace(-6, 6, 30) # linearly spaced line, -6 to +6, 30 points. y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) # + # Plotting - MatLab style fig = plt.figure() ax = plt.axes(projection='3d') ax.contour3D(X, Y, Z, 50, cmap='binary') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') # Increase viewing tilt and rotate ax.view_init(60, 35); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="y8HYCkhXvsoq" # ## Standard Imports # + id="BLcUO6J7umhS" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LinearRegression # + [markdown] id="UCPEnPDpyM14" # ## Build Test Data # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="rHCoapSd4REN" outputId="b306983c-b246-4c37-ee5c-0993943abfa9" # a = 3 b = 2 c = 1 a = [1,2,3,4,5,1,0] b = [2,1,5,3,7,2,2] c = [6,3,4,5,7,1,1] tot = [13,11,23,223,236,28,25] df = pd.DataFrame({'a':a,'b':b,'c':c,'tot':tot}) df.head() # + [markdown] id="P2qL4cLdygus" # # Create Linear Regression Model # + colab={"base_uri": "https://localhost:8080/"} id="Ui_DU7V6yF4x" outputId="546cc593-9c2f-4339-c89b-15700d09cfba" # Assign the Training Data X = df[['a','b','c']] y = df.tot # Select a linear model model = LinearRegression() # Train the model model.fit(X, y) # + [markdown] id="-EwIrPgwyut8" # # Test Model Accuracy # + colab={"base_uri": "https://localhost:8080/"} id="5WbE5XUgyn96" outputId="02fcc6c4-badd-48c2-a316-9e76df0229d8" model.coef_ # + colab={"base_uri": "https://localhost:8080/"} id="8ZJrRnicyoHM" outputId="cf2b6fd1-d550-4e73-bb3b-a6cff042da72" # Make a prediction 
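# The hypothetical test rows below follow the same [a, b, c] feature layout the model was
# trained on; model.predict returns one predicted 'tot' value per row.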
X_test = [[1,10,22],[2,4,8],[3,3,3],[20,2,10]] y_test = [45,22,18,74] y_pred = model.predict(X_test) print('pred') print(y_pred) print('test') print(y_test) # + [markdown] id="BzwKnU7xy4oI" # # More Complex Example # + [markdown] id="jhzm-wa7v2gy" # ## Get Data # + id="PmD0nWFJvj6E" colab={"base_uri": "https://localhost:8080/"} outputId="827d224c-d970-4124-daaf-4a48027f6ebf" # ! wget https://raw.githubusercontent.com/SuperDataWorld/Python/main/Data/bikerental.csv # + colab={"base_uri": "https://localhost:8080/", "height": 293} id="iaQxndU_uWXz" outputId="f5eaa818-dc3f-4b0c-bd2a-130b119939b2" df = pd.read_csv('bikerental.csv') df.head() # + [markdown] id="mbgciFDA26TB" # # Deal with Categorical Data # + colab={"base_uri": "https://localhost:8080/", "height": 293} id="gyBFZK9q1jGS" outputId="7b805110-ac98-470e-b590-e83eb26d0475" df.head() # + colab={"base_uri": "https://localhost:8080/"} id="eeFK_ybc6DSm" outputId="166477d3-4efa-473f-c247-50919f49f0cf" df.yr.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="nlZ6anAo6aJk" outputId="c35a86a3-3bd7-41a1-887b-3591320d277a" df.mnth.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="JO3zuZjF6ucb" outputId="e9b54df8-6d8e-4e69-aff5-b99c3eb50bd2" df.columns # + id="lWwAZCeg6aG3" df.drop(['instant','dteday','registered', 'casual'], axis = 1, inplace = True) # + [markdown] id="_WPhJfhU4Pqq" # ## Create Data for ML and Split # + id="Ajx7i4xH7Q5x" y = df['cnt'] X = df.drop(['cnt'], axis = 1) # + id="TxTJGnhT4f-o" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2, random_state = 42) # + [markdown] id="u99KJyAm44_y" # ## Train and Fit Model # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="Vg2m3fYB_ImZ" outputId="76a7d7e1-0402-4562-9789-f5c42bfa4e84" X # + colab={"base_uri": "https://localhost:8080/"} id="SwFVSE07455Y" outputId="205a2957-fd97-490a-a58a-b7661b5cbe98" model = LinearRegression() model.fit(X_train, y_train) # + id="Rc6nZrl_45qf" y_pred = model.predict(X_test) # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="O_oTORw77plL" outputId="5fcfefbb-d208-49e9-b461-8ee032f62ca1" y_test.hist() # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="V8VpOySX8ABY" outputId="38b52c7e-c508-4459-d685-b567a0b0449b" pd.Series(y_pred).hist() # + [markdown] id="RJlkllSN56Rz" # ## Performance # + id="h31x8pfh54av" from sklearn.metrics import mean_squared_error, r2_score # + colab={"base_uri": "https://localhost:8080/"} id="WgZuktM753-q" outputId="99874096-1532-416b-a5da-41184bc004de" print('Coefficients:', model.coef_) print('Intercept:', model.intercept_) print('Root Mean squared error (RMSE):', np.sqrt(mean_squared_error(y_pred, y_test))) print('Coefficient of determination', r2_score(y_pred, y_test)) # + id="9BZntzs9-bN-" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="a14871d7-8609-41b2-8319-2e7bf218ecbd" plt.scatter(y_test, y_pred) # + colab={"base_uri": "https://localhost:8080/", "height": 425} id="xvVHqHubB1ah" outputId="bce229f8-6ee1-4e34-f890-a58ec411c919" df2 = pd.DataFrame([pd.Series(y_test.values), pd.Series(y_pred), pd.Series(X_test.mnth.values)]) df2 = df2.transpose() df2.columns = ['Actual', 'Prediction', 'Month'] df2 = df2.groupby(by = 'Month').sum().reset_index() df2['% diff'] = round((df2['Prediction'] / df2['Actual']) - 1, 2) df2 # + id="7olLcK18ELeE" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' 
# jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="hPxSVqcvqPHO" # # TFX SDK examples # # This notebook contains a series of related examples based on the "Chicago Taxi Pipeline", that show using the TFX SDK. It includes examples of how to use custom Python functions and custom containers. # # # + [markdown] id="d-Av6cm0oBFV" # ## Setup # # Before you run this notebook, ensure that your Google Cloud user account and project are granted access to the Managed Pipelines Experimental. To be granted access to the Managed Pipelines Experimental, fill out this [form](http://go/cloud-mlpipelines-signup) and let your account representative know you have requested access. # # This notebook is intended to be run on [AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks). See the "AI Platform Notebooks" section in the Experimental [User Guide](https://docs.google.com/document/d/1JXtowHwppgyghnj1N1CT73hwD1caKtWkLcm2_0qGBoI/edit?usp=sharing) for more detail on creating a notebook server instance. # # **To run this notebook on AI Platform Notebooks**, click on the **File** menu, then select "Download .ipynb". Then, upload that notebook from your local machine to AI Platform Notebooks. (In the AI Platform Notebooks left panel, look for an icon of an arrow pointing up, to upload). # # The notebook will probably run on [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) as well, though currently you may see some ignorable warnings. # # # + [markdown] id="fwZ0aXisoBFW" # We'll first install some libraries and set some variables. # # Ensure python 3 is being used. # + colab={"base_uri": "https://localhost:8080/", "height": 35} executionInfo={"elapsed": 383, "status": "ok", "timestamp": 1611787891259, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioeoRKji2UzXPs8FGm3SW5pdlIyXsTsVcz7ReBnA=s64", "userId": "10115205878095222258"}, "user_tz": 480} id="PQ-QwavmqPHP" outputId="8b8e258c-7f2e-4184-f7ae-6f06cb021fdf" import sys sys.version # + [markdown] id="fZ-GWdI7SmrN" # Set `gcloud` to use your project. **Edit the following cell before running it**. # + id="pD5jOcSURdcU" PROJECT_ID = 'your-project-id' # <---CHANGE THIS # + [markdown] id="GAaCPLjgiJrO" # Set `gcloud` to use your project. # + id="VkWdxe4TXRHk" # !gcloud config set project {PROJECT_ID} # + [markdown] id="gckGHdW9iPrq" # If you're running this notebook on colab, authenticate with your user account: # + id="kZQA0KrfXCvU" import sys if 'google.colab' in sys.modules: from google.colab import auth auth.authenticate_user() # + [markdown] id="aaqJjbmk6o0o" # ----------------- # # **If you're on AI Platform Notebooks**, authenticate with Google Cloud before running the next section, by running # ```sh # gcloud auth login # ``` # **in the Terminal window** (which you can open via **File** > **New** in the menu). You only need to do this once per notebook instance. # + id="6F08xSY0qPHV" # !gsutil cp gs://cloud-aiplatform-pipelines/releases/latest/kfp-1.5.0rc5.tar.gz . # !gsutil cp gs://cloud-aiplatform-pipelines/releases/latest/aiplatform_pipelines_client-0.1.0.caip20210415-py3-none-any.whl . 
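# Optionally, before installing, a quick check (a sketch) that the two release archives copied by the
# previous cell actually landed in the current working directory; the filenames are taken from the
# `gsutil cp` commands above.

# +
import os

for fname in ['kfp-1.5.0rc5.tar.gz',
              'aiplatform_pipelines_client-0.1.0.caip20210415-py3-none-any.whl']:
    print(fname, 'found' if os.path.exists(fname) else 'MISSING')
# -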
# + id="MQJl_6EUoBFr" if 'google.colab' in sys.modules: USER_FLAG = '' else: USER_FLAG = '--user' # + id="iyQtljP-qPHY" # !pip install {USER_FLAG} pip==20.2.4 --upgrade # !pip install {USER_FLAG} kfp-1.5.0rc5.tar.gz aiplatform_pipelines_client-0.1.0.caip20210415-py3-none-any.whl --upgrade # !pip install {USER_FLAG} -i https://pypi-nightly.tensorflow.org/simple tfx[kfp]==0.30.0.dev20210418 "google-cloud-storage>=1.37.1" --upgrade # + [markdown] id="mZeSQybKwbpp" # If you're on colab, and you got a prompt to restart the runtime after TFX installation, rerun some setup now before proceeding. As before, **edit the following to define your project id** before you run the next cell. # + id="PpkxFp93xBk5" import sys if 'google.colab' in sys.modules: PROJECT_ID = 'your-project-id' # <---CHANGE THIS # !gcloud config set project {PROJECT_ID} from google.colab import auth auth.authenticate_user() USER_FLAG = '' # + id="KHTSzMygoBF6" if not 'google.colab' in sys.modules: # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) # + [markdown] id="BDnPgN8UJtzN" # Check the TFX version. It should be == 0.30.0. # + id="6jh7vKSRqPHb" # Check version # !pip show tfx # + [markdown] id="aDtLdSkvqPHe" # ### Setup variables # # Let's set up some variables used to customize the pipelines below. **Edit the values in the cell below before running it**. # + id="EcUseqJaE2XN" # PATH=%env PATH # %env PATH={PATH}:/home/jupyter/.local/bin # Required Parameters USER = 'YOUR_LDAP' # <---CHANGE THIS BUCKET_NAME = 'YOUR_BUCKET_NAME' # <---CHANGE THIS PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, USER) PROJECT_ID = 'YOUR_PROJECT_ID' # <---CHANGE THIS REGION = 'us-central1' API_KEY = 'YOUR_API_KEY' # <---CHANGE THIS print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT)) # + [markdown] id="hvb0SspyqPH4" # ## Part 2: Custom Python functions # # In this section, we will create components from Python functions. We won't be doing any real ML— these simple functions are just used to illustrate the process. # # We'll use [Cloud Build](https://cloud.google.com/cloud-build) to build the container image that runs the functions. # # # + id="08r9af-yoBGv" # !mkdir -p custom_fns # + id="oaEJd_sYoBGy" # %cd custom_fns # + [markdown] id="mhYjn9Fj6mdo" # We begin by writing a preprocessing function that enables the user to specify different split fractions between training and test data. # + id="cHNtKTuiqPH4" # %%writefile my_preprocess.py import os import tensorflow as tf # Used for writing files. from tfx.types.experimental.simple_artifacts import Dataset from tfx.dsl.component.experimental.decorators import component from tfx.dsl.component.experimental.annotations import OutputArtifact @component def MyPreprocess(training_data: OutputArtifact[Dataset]): with tf.io.gfile.GFile(os.path.join(training_data.uri, 'training_data_file.txt'), 'w') as f: f.write('Dummy training data') # We'll modify metadata and ensure that it gets passed to downstream components. training_data.set_string_custom_property('my_custom_field', 'my_custom_value') training_data.set_string_custom_property('uri_for_output', training_data.uri) # + [markdown] id="MtapXcbSqPH6" # Let's write a second component that uses the training data produced. 
# + id="27ZEf2xQqPH7" # %%writefile my_trainer.py import os import tensorflow as tf from tfx.types.experimental.simple_artifacts import Dataset from tfx.types.standard_artifacts import Model from tfx.dsl.component.experimental.decorators import component from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter @component def MyTraining(training_data: InputArtifact[Dataset], model: OutputArtifact[Model], num_iterations: Parameter[int] = 100): # Let's read the contents of training data and write to the metadata. with tf.io.gfile.GFile( os.path.join(training_data.uri, 'training_data_file.txt'), 'r') as f: contents = f.read() model.set_string_custom_property('contents_of_training_data', contents) model.set_int_custom_property('num_iterations_used', num_iterations) # + [markdown] id="zJJma2rNqPH9" # Let's write a finalizer component that collects all metadata, and dumps it. Ensure that PIPELINE_ROOT is correctly defined before creating the template. # + id="kw3ZfNyu7GSg" PIPELINE_ROOT # + id="5XRaIluaqPH9" collector_template = f""" import os import tensorflow as tf import json from google.protobuf import json_format from tfx.types.experimental.simple_artifacts import Dataset from tfx.types.standard_artifacts import Model from tfx.dsl.component.experimental.decorators import component from tfx.dsl.component.experimental.annotations import * from tfx.utils import json_utils OUTPUT_LOCATION = '{PIPELINE_ROOT}/python_function_pipeline/metadata.json' @component def MetadataCollector(training_data: InputArtifact[Dataset], model: InputArtifact[Model]): artifacts = [ json_format.MessageToDict(x) for x in [training_data.mlmd_artifact, model.mlmd_artifact] ] with tf.io.gfile.GFile(OUTPUT_LOCATION, 'w') as f: f.write(json.dumps(artifacts, indent=4)) """ with open('metadata_collector.py', 'w') as f: f.write(collector_template) # + [markdown] id="N0Nno65wqPH_" # Next, let's package the above into a container. We'll do this manually using a Dockerfile. # + id="GB4yCHshqPIE" # %%writefile Dockerfile FROM gcr.io/tfx-oss-public/tfx:0.27.0 WORKDIR /pipeline COPY ./ ./ ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + [markdown] id="BippqtrLoBHI" # Next, we'll use Cloud Build to build the container image and upload it to GCR. # + id="IPmVRi7L8JOS" # !echo $PROJECT_ID # + id="DxiLqXxPqPH_" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom:{USER} . # + [markdown] id="lIrGHQzFqPII" # Next, let's author a pipeline using these components. # + id="j43snQpRqPII" import os # Only required for local run. from tfx.orchestration.metadata import sqlite_metadata_connection_config from tfx.orchestration.pipeline import Pipeline from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner from my_preprocess import MyPreprocess from my_trainer import MyTraining from metadata_collector import MetadataCollector PIPELINE_NAME = "function-based-pipeline-v2{}".format(USER) def function_based_pipeline(pipeline_root): preprocess = MyPreprocess() training = MyTraining( training_data=preprocess.outputs['training_data'], num_iterations=10000) collect = MetadataCollector( training_data=preprocess.outputs['training_data'], model=training.outputs['model']) pipeline_name = "function-based-pipeline-v2{}".format(USER) return Pipeline( pipeline_name=PIPELINE_NAME, pipeline_root=pipeline_root, # Only needed for local runs. 
metadata_connection_config=sqlite_metadata_connection_config('metadata.sqlite'), components=[preprocess, training, collect]) # + [markdown] id="tzyAQJrUqPIK" # Let's make sure this pipeline works locally, using the local runner: # + id="PgBSXC12qPIL" from tfx.orchestration.local.local_dag_runner import LocalDagRunner LocalDagRunner().run(function_based_pipeline(pipeline_root='/tmp/pipeline_root')) # + [markdown] id="ofs15ZiWqPIN" # Check that the metadata was produced locally. # + id="KXUzLgYUqPIN" from ml_metadata import metadata_store from ml_metadata.proto import metadata_store_pb2 connection_config = metadata_store_pb2.ConnectionConfig() connection_config.sqlite.filename_uri = 'metadata.sqlite' connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE store = metadata_store.MetadataStore(connection_config) store.get_artifacts() # + [markdown] id="zWlObcW9qPIP" # ### Run the pipeline with Managed Pipelines # + [markdown] id="zwXwGvrV6Rcc" # Now we're ready to run the pipeline! We're constructing the client using the API_KEY that you created during setup. # + id="ehuX7MUt6Rcf" from aiplatform.pipelines import client my_client = client.Client( project_id=PROJECT_ID, region=REGION, api_key=API_KEY ) # + id="jTCk8EkRqPIQ" config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig( project_id=PROJECT_ID, display_name='function-based-pipeline-v2{}'.format(USER), default_image='gcr.io/{}/caip-tfx-custom:{}'.format(PROJECT_ID, USER)) runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner( config=config, output_filename='pipeline.json') runner.compile( function_based_pipeline( pipeline_root=os.path.join(PIPELINE_ROOT, PIPELINE_NAME)), write_out=True) my_client.create_run_from_job_spec('pipeline.json') # + [markdown] id="xHhmznCBqPIT" # When the pipeline is complete, we can check the final output file to see the metadata produced. # + id="HQdAv3RKqPIU" MD_URI = 'gs://{}/pipeline_root/{}/python_function_pipeline/metadata.json'.format(BUCKET_NAME, USER) MD_URI # !gsutil cat {MD_URI} # + id="SiSY2vLx7SbV" # %cd .. # + [markdown] id="CCaF4s3dqPIV" # ## Part 3: Custom containers # # # In this section, we will build custom containers and chain them together as a pipeline. # # This illustrates how we can pass data (using uris) to custom containers. # # + [markdown] id="rvuOnyfg5dzY" # # ### Container 1: Generate examples # # First, we'll define and write out the `generate_examples.py` code: # + id="MyrXfan--u6o" # !mkdir -p generate # + id="j4mRhnC9qPIW" # %%writefile generate/generate_examples.py import argparse import json import os import numpy as np import tensorflow as tf import tensorflow_datasets as tfds def _serialize_example(example, label): example_value = tf.io.serialize_tensor(example).numpy() label_value = tf.io.serialize_tensor(label).numpy() feature = { 'examples': tf.train.Feature( bytes_list=tf.train.BytesList(value=[example_value])), 'labels': tf.train.Feature(bytes_list=tf.train.BytesList(value=[label_value])), } return tf.train.Example(features=tf.train.Features( feature=feature)).SerializeToString() def _tf_serialize_example(example, label): serialized_tensor = tf.py_function(_serialize_example, (example, label), tf.string) return tf.reshape(serialized_tensor, ()) def generate_examples(training_data_uri, test_data_uri, config_file_uri): (train_data, test_data), info = tfds.load( # Use the version pre-encoded with an ~8k vocabulary. 'imdb_reviews/subwords8k', # Return the train/test datasets as a tuple. 
split=(tfds.Split.TRAIN, tfds.Split.TEST), # Return (example, label) pairs from the dataset (instead of a dictionary). as_supervised=True, with_info=True) serialized_train_examples = train_data.map(_tf_serialize_example) serialized_test_examples = test_data.map(_tf_serialize_example) filename = os.path.join(training_data_uri, "train.tfrecord") writer = tf.data.experimental.TFRecordWriter(filename) writer.write(serialized_train_examples) filename = os.path.join(test_data_uri, "test.tfrecord") writer = tf.data.experimental.TFRecordWriter(filename) writer.write(serialized_test_examples) encoder = info.features['text'].encoder config = { 'vocab_size': encoder.vocab_size, } config_file = os.path.join(config_file_uri, "config") with tf.io.gfile.GFile(config_file, 'w') as f: f.write(json.dumps(config)) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--training_data_uri', type=str) parser.add_argument('--test_data_uri', type=str) parser.add_argument('--config_file_uri', type=str) args = parser.parse_args() generate_examples(args.training_data_uri, args.test_data_uri, args.config_file_uri) # + [markdown] id="TvseVvSDqPIX" # Next, we'll create a Dockerfile that builds a container to run `generate_examples.py`. Here we use a Google DLVM container image as our base. You may use your own image as the base image as well. Note that we're also installing the `tensorflow_datasets` library. # + id="ymsKgvwkqPIY" # %%writefile generate/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest WORKDIR /pipeline COPY generate_examples.py generate_examples.py RUN pip install tensorflow_datasets ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + id="yAcPO09H-90d" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom-container-generate:{USER} generate # + [markdown] id="XH541EW0qPId" # ### Container 2: Train Examples # Next, we'll do the same for the 'Train Examples' custom container. We'll first write out a `train_examples.py` file, then build a container that runs it. 
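# Before that, an optional local sanity check (a sketch, not part of the pipeline): if a copy of the
# generated `train.tfrecord` is pulled down (for example with `gsutil cp` from the `training_data`
# output URI -- the local path below is an assumption), the records can be parsed back with the same
# feature layout that `generate_examples.py` wrote.

# +
import tensorflow as tf

local_tfrecord = 'train.tfrecord'  # assumed local copy of the generated TFRecord file
feature_spec = {
    'examples': tf.io.FixedLenFeature((), tf.string, default_value=''),
    'labels': tf.io.FixedLenFeature((), tf.string, default_value=''),
}
for raw_record in tf.data.TFRecordDataset([local_tfrecord]).take(1):
    parsed = tf.io.parse_single_example(raw_record, feature_spec)
    example = tf.io.parse_tensor(parsed['examples'], tf.int64)  # encoded review tokens
    label = tf.io.parse_tensor(parsed['labels'], tf.int64)      # 0/1 sentiment label
    print('encoded review length:', int(example.shape[0]), 'label:', int(label.numpy()))
# -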
# + id="W-LOa4Fr_ILY" # !mkdir -p train # + id="ZFc8KfNBqPId" # %%writefile train/train_examples.py import argparse import json import os import numpy as np import tensorflow as tf def _parse_example(record): f = { 'examples': tf.io.FixedLenFeature((), tf.string, default_value=''), 'labels': tf.io.FixedLenFeature((), tf.string, default_value='') } return tf.io.parse_single_example(record, f) def _to_tensor(record): examples = tf.io.parse_tensor(record['examples'], tf.int64) labels = tf.io.parse_tensor(record['labels'], tf.int64) return (examples, labels) def train_examples(training_data_uri, test_data_uri, config_file_uri, output_model_uri, output_metrics_uri): train_examples = tf.data.TFRecordDataset( [os.path.join(training_data_uri, 'train.tfrecord')]) test_examples = tf.data.TFRecordDataset( [os.path.join(test_data_uri, 'test.tfrecord')]) train_batches = train_examples.map(_parse_example).map(_to_tensor) test_batches = test_examples.map(_parse_example).map(_to_tensor) with tf.io.gfile.GFile(os.path.join(config_file_uri, 'config')) as f: config = json.loads(f.read()) model = tf.keras.Sequential([ tf.keras.layers.Embedding(config['vocab_size'], 16), tf.keras.layers.GlobalAveragePooling1D(), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.summary() model.compile( optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) train_batches = train_batches.shuffle(1000).padded_batch( 32, (tf.TensorShape([None]), tf.TensorShape([]))) test_batches = test_batches.padded_batch( 32, (tf.TensorShape([None]), tf.TensorShape([]))) history = model.fit( train_batches, epochs=10, validation_data=test_batches, validation_steps=30) loss, accuracy = model.evaluate(test_batches) metrics = { 'loss': str(loss), 'accuracy': str(accuracy), } model_json = model.to_json() with tf.io.gfile.GFile(os.path.join(output_model_uri, 'model.json'), 'w') as f: f.write(model_json) with tf.io.gfile.GFile(os.path.join(output_metrics_uri, 'metrics.json'), 'w') as f: f.write(json.dumps(metrics)) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--training_data_uri', type=str) parser.add_argument('--test_data_uri', type=str) parser.add_argument('--config_file_uri', type=str) parser.add_argument('--output_model_uri', type=str) parser.add_argument('--output_metrics_uri', type=str) args = parser.parse_args() train_examples(args.training_data_uri, args.test_data_uri, args.config_file_uri, args.output_model_uri, args.output_metrics_uri) # + [markdown] id="UYaUI5gaqPIf" # Next, we'll create a Dockerfile that builds a container to run train_examples.py: # + id="UaGrpO6gqPIf" # %%writefile train/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest WORKDIR /pipeline COPY train_examples.py train_examples.py ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + id="ETZl3Hkz_Spr" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom-container-train:{USER} train # + [markdown] id="ljvt7BTtqPIk" # ### Define a container-based pipeline # # Now we're ready to define a pipeline that uses these containers. 
# + id="ZwibfgFNqPIk" import os from tfx.orchestration.pipeline import Pipeline from tfx.types.standard_artifacts import Model from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner from tfx.types.experimental.simple_artifacts import Dataset from tfx.types.experimental.simple_artifacts import File from tfx.types.experimental.simple_artifacts import Metrics from tfx.dsl.component.experimental.container_component import create_container_component from tfx.dsl.component.experimental.placeholders import InputUriPlaceholder from tfx.dsl.component.experimental.placeholders import OutputUriPlaceholder import absl from tfx.orchestration import metadata from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner def container_based_pipeline(): generate = create_container_component( name='GenerateExamples', outputs={ 'training_data': Dataset, 'test_data': Dataset, 'config_file': File, }, image = 'gcr.io/{}/caip-tfx-custom-container-generate:{}'.format(PROJECT_ID, USER), command=[ 'python', '/pipeline/generate_examples.py', '--training_data_uri', OutputUriPlaceholder('training_data'), '--test_data_uri', OutputUriPlaceholder('test_data'), '--config_file_uri', OutputUriPlaceholder('config_file'), ]) train = create_container_component( name='Train', inputs={ 'training_data': Dataset, 'test_data': Dataset, 'config_file': File, }, outputs={ 'model': Model, 'metrics': Metrics, }, image='gcr.io/{}/caip-tfx-custom-container-train:{}'.format(PROJECT_ID, USER), command=[ 'python', '/pipeline/train_examples.py', '--training_data_uri', InputUriPlaceholder('training_data'), '--test_data_uri', InputUriPlaceholder('test_data'), '--config_file_uri', InputUriPlaceholder('config_file'), '--output_model_uri', OutputUriPlaceholder('model'), '--output_metrics_uri', OutputUriPlaceholder('metrics'), ]) generate_component = generate() train_component = train( training_data=generate_component.outputs['training_data'], test_data=generate_component.outputs['test_data'], config_file=generate_component.outputs['config_file']) pipeline_name = "container-based-pipeline-v2{}".format(USER) return Pipeline( pipeline_name=pipeline_name, enable_cache=True, pipeline_root=os.path.join(PIPELINE_ROOT, pipeline_name), components=[generate_component, train_component]) container_based_pipeline = container_based_pipeline() # + [markdown] id="Tm1_zvCpqPIm" # *Now* let's run the pipeline! # + id="0sDWS15mqPIm" config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig( project_id=PROJECT_ID, display_name='container-based-pipeline-v2{}'.format(USER)) runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner( config=config, output_filename='pipeline.json') runner.compile( container_based_pipeline, write_out=True) my_client.create_run_from_job_spec('pipeline.json') # + [markdown] id="bjcJibzaqPIq" # ## Part 4: Caching # In Part 3, the pipeline was run with the cache enabled. Let's try to run the same pipeline again after the one above has finished. We should see it complete immediately since all steps were cached (which you can tell from the green arrow displayed for the components that used caching): # # + id="rtVq5j6fqPIr" my_client.create_run_from_job_spec('pipeline.json') # + [markdown] id="XV69L_dRaEJr" # # + [markdown] id="cBGhTAgZqPIt" # Now, let's disable the cache and run it again. 
This time, it should re-run all steps: # + id="1rDVTDdhqPIt" container_based_pipeline.enable_cache = False runner.compile(container_based_pipeline, write_out=True) my_client.create_run_from_job_spec('pipeline.json') # + [markdown] id="E1v_yRk-qPIv" # ## Part 5: Specifying Task-based dependencies # # In this section, we will run two steps of a pipeline using task-based dependencies (rather than I/O dependencies) to schedule them. We'll build and use the same container for both steps. # + id="bqTIUEHiFV_0" # !mkdir -p task_based # + id="6x3i6hleqPIv" # %%writefile task_based/task_based_step.py import os import tensorflow as tf from tfx.types.experimental.simple_artifacts import File from tfx.dsl.component.experimental.decorators import component from tfx.dsl.component.experimental.annotations import OutputArtifact, Parameter @component def MyTaskBasedStep( output_file: OutputArtifact[File], step_number: Parameter[int] = 0, contents: Parameter[str] = ''): # Write out whatever string was passed in to the file. with tf.io.gfile.GFile(os.path.join(output_file.uri, 'output.txt'), 'w') as f: f.write('Step {}: Contents: {}'.format(step_number, contents)) # + [markdown] id="i1ABNuNhqPIx" # Write out the docker file. # + id="ETCaQQu-qPIy" # %%writefile task_based/Dockerfile FROM gcr.io/tfx-oss-public/tfx:0.27.0 WORKDIR /pipeline COPY task_based_step.py task_based_step.py ENV PYTHONPATH="/pipeline:${PYTHONPATH}" # + [markdown] id="-xlPjxjOqPI2" # Build a container image for the new component. # + id="-AQpnBPnFeG0" # !gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom-task-based:{USER} task_based # + [markdown] id="N4P-rwRLqPI4" # Let's create a pipeline with this simple component. Note the `add_upstream_node` call. # + id="xX91D9JJHVgN" # %cd task_based # + id="RnJ7aiQWqPI4" import os from tfx.orchestration.pipeline import Pipeline from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner from task_based_step import MyTaskBasedStep def task_based_pipeline(): step_1 = MyTaskBasedStep(step_number=1, contents="This is step 1") step_2 = MyTaskBasedStep( step_number=2, contents="This is step 2", instance_name='MyTaskBasedStep2') step_2.add_upstream_node(step_1) pipeline_name = "task-dependency-based-pipeline-v2{}".format(USER) return Pipeline( pipeline_name=pipeline_name, pipeline_root=os.path.join(PIPELINE_ROOT, pipeline_name), components=[step_1, step_2]) task_based_pipeline = task_based_pipeline() # + [markdown] id="4Pcq62-IbHc3" # Run the pipeline: # + id="KrEZzwrPqPI6" config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig( project_id=PROJECT_ID, default_image='gcr.io/{}/caip-tfx-custom-task-based:{}'.format(PROJECT_ID, USER)) runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner( config=config, output_filename='pipeline.json') runner.compile( task_based_pipeline, write_out=True) my_client.create_run_from_job_spec('pipeline.json') # + id="dLr2uixy7cc2" # %cd .. # + [markdown] id="XuIN0GPeqPHh" # ## Part 6: Chicago Taxi Pipeline # # In this section, we'll run the canonical Chicago Taxi Pipeline. 
# # ### What's new # # If you're familiar with the previous version, here's what's new: # # - Support for resolvers: LatestArtifactResolver and LatestBlessedModelResolver # - FileBasedExampleGen # - A Python client SDK to talk to the Alpha service # + [markdown] id="PE3r-NU_yp9m" # We'll first do some imports: # + id="t5RHVqCvqPHh" from typing import Any, Dict, List, Optional, Text import absl import os import tensorflow as tf import tensorflow_model_analysis as tfma from tfx.extensions.google_cloud_big_query.example_gen.component import BigQueryExampleGen from tfx.components import CsvExampleGen from tfx.components import Evaluator from tfx.components import ExampleValidator from tfx.components import InfraValidator from tfx.components import Pusher from tfx.components import ResolverNode from tfx.components import SchemaGen from tfx.components import StatisticsGen from tfx.components import Trainer from tfx.components import Transform from tfx.dsl.experimental import latest_artifacts_resolver from tfx.dsl.experimental import latest_blessed_model_resolver from tfx.orchestration import pipeline as tfx_pipeline from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner from tfx.proto import pusher_pb2 from tfx.proto import trainer_pb2 from tfx.types import standard_artifacts from tfx.utils import dsl_utils from tfx.types import channel # + [markdown] id="c2VR2BM-yuKY" # Next, we'll define the [BigQuery](https://cloud.google.com/bigquery/docs) query that we want to use with the `BigQueryExampleGen` component. # + id="hbnWZCLwqPHk" # Define the query used for BigQueryExampleGen. QUERY = """ SELECT pickup_community_area, fare, EXTRACT(MONTH FROM trip_start_timestamp) AS trip_start_month, EXTRACT(HOUR FROM trip_start_timestamp) AS trip_start_hour, EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS trip_start_day, UNIX_SECONDS(trip_start_timestamp) AS trip_start_timestamp, pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude, trip_miles, pickup_census_tract, dropoff_census_tract, payment_type, company, trip_seconds, dropoff_community_area, tips FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE (ABS(FARM_FINGERPRINT(unique_key)) / 0x7FFFFFFFFFFFFFFF) < 0.000001""" # Data location for the CsvExampleGen. CSV_INPUT_PATH = 'gs://ml-pipeline/sample-data/chicago-taxi/data' # + [markdown] id="0lc88uEhzAkI" # ### Create a helper function to construct the TFX pipeline # # # + id="5LL-fA0HqPHn" # Create a helper function to construct a TFX pipeline. def create_tfx_pipeline( query: Optional[Text] = None, input_path: Optional[Text] = None, ): """Creates an end-to-end Chicago Taxi pipeline in TFX.""" if bool(query) == bool(input_path): raise ValueError('Exact one of query or input_path is expected.') if query: example_gen = BigQueryExampleGen(query=query) else: example_gen = CsvExampleGen(input_base=input_path) beam_pipeline_args = [ # Uncomment to use Dataflow. # '--runner=DataflowRunner', # '--experiments=shuffle_mode=auto', '--temp_location=' + os.path.join(PIPELINE_ROOT, 'dataflow', 'temp'), # '--region=us-central1', # '--disk_size_gb=100', '--project={}'.format(PROJECT_ID) # Always needed for BigQueryExampleGen. 
] module_file = 'gs://ml-pipeline-playground/tfx_taxi_simple/modules/taxi_utils.py' statistics_gen = StatisticsGen(examples=example_gen.outputs['examples']) schema_gen = SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False) example_validator = ExampleValidator( statistics=statistics_gen.outputs['statistics'], schema=schema_gen.outputs['schema']) transform = Transform( examples=example_gen.outputs['examples'], schema=schema_gen.outputs['schema'], module_file=module_file) # Fetch the latest trained model under the same context for warm-starting. latest_model_resolver = ResolverNode( instance_name='latest_model_resolver', resolver_class=latest_artifacts_resolver.LatestArtifactsResolver, model=channel.Channel(type=standard_artifacts.Model)) trainer = Trainer( transformed_examples=transform.outputs['transformed_examples'], schema=schema_gen.outputs['schema'], base_model=latest_model_resolver.outputs['model'], transform_graph=transform.outputs['transform_graph'], train_args=trainer_pb2.TrainArgs(num_steps=10), eval_args=trainer_pb2.EvalArgs(num_steps=5), module_file=module_file, ) # Get the latest blessed model for model validation. model_resolver = ResolverNode( instance_name='latest_blessed_model_resolver', resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver, model=channel.Channel(type=standard_artifacts.Model), model_blessing=channel.Channel(type=standard_artifacts.ModelBlessing)) # # Set the TFMA config for Model Evaluation and Validation. # # This is # eval_config = tfma.EvalConfig( # model_specs=[tfma.ModelSpec(signature_name='eval')], # metrics_specs=[ # tfma.MetricsSpec( # metrics=[tfma.MetricConfig(class_name='ExampleCount')], # thresholds={ # 'binary_accuracy': # tfma.MetricThreshold( # value_threshold=tfma.GenericValueThreshold( # lower_bound={'value': 0.5}), # change_threshold=tfma.GenericChangeThreshold( # direction=tfma.MetricDirection.HIGHER_IS_BETTER, # absolute={'value': -1e-10})) # }) # ], # slicing_specs=[ # tfma.SlicingSpec(), # tfma.SlicingSpec(feature_keys=['trip_start_hour']) # ]) # This eval_config is for blindly blessing the model. eval_config = tfma.EvalConfig( model_specs=[ tfma.ModelSpec(signature_name='eval'), ], metrics_specs=[ tfma.MetricsSpec(metrics=[ tfma.config.MetricConfig( class_name='ExampleCount', # Bless the model as long as there are at least one example. threshold=tfma.config.MetricThreshold( value_threshold=tfma.GenericValueThreshold( lower_bound={'value': 0}))), ]), ], slicing_specs=[tfma.SlicingSpec()]) evaluator = Evaluator( examples=example_gen.outputs['examples'], model=trainer.outputs['model'], baseline_model=model_resolver.outputs['model'], eval_config=eval_config) pusher = Pusher( model=trainer.outputs['model'], model_blessing=evaluator.outputs['blessing'], push_destination=pusher_pb2.PushDestination( filesystem=pusher_pb2.PushDestination.Filesystem( base_directory=os.path.join(PIPELINE_ROOT, 'model_serving')))) components=[ example_gen, statistics_gen, schema_gen, example_validator, transform, latest_model_resolver, trainer, model_resolver, evaluator, pusher ] return tfx_pipeline.Pipeline( pipeline_name='taxi-pipeline-v2{}'.format(USER), pipeline_root=PIPELINE_ROOT, components=components, beam_pipeline_args=beam_pipeline_args ) # + [markdown] id="E3B_ALPmzOop" # ### Compile and run the pipeline # # We'll first use the helper function to create the pipeline, passing it the BigQuery query we defined. 
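# Aside: the `WHERE` clause in `QUERY` keeps a deterministic sample of roughly 0.0001% of the taxi
# trips by hashing `unique_key`. Below is a rough Python analogue of that hash-based sampling idea,
# using `hashlib` as a stand-in for BigQuery's `FARM_FINGERPRINT` (so the specific rows kept would differ):

# +
import hashlib

def keep_row(unique_key, fraction=1e-6):
    # Deterministic pseudo-random value in [0, 1) derived from the key, analogous to
    # ABS(FARM_FINGERPRINT(unique_key)) / 0x7FFFFFFFFFFFFFFF in the query above.
    h = int.from_bytes(hashlib.md5(unique_key.encode()).digest()[:8], 'big')
    return (h / 2 ** 64) < fraction

# With a 1% fraction over 100k synthetic keys, roughly 1,000 keys survive.
print(sum(keep_row(str(i), fraction=0.01) for i in range(100000)))
# -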
# + id="yFxe5RjvqPHq" bigquery_taxi_pipeline = create_tfx_pipeline(query=QUERY) # + [markdown] id="JY7viEgkqPHy" # Compile the pipeline. This creates the `pipeline.json` job spec. # + id="Iex7qZB9qPHy" runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner( config=kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig( project_id=PROJECT_ID ), output_filename='pipeline.json') _ = runner.compile(bigquery_taxi_pipeline, write_out=True) # + [markdown] id="OGgtF1g-qPH1" # Now we're ready to run the pipeline! We're constructing the client using the API_KEY that you created during setup. # + id="1ZmXfxJ5rhCx" from aiplatform.pipelines import client my_client = client.Client( project_id=PROJECT_ID, region=REGION, api_key=API_KEY ) # + [markdown] id="aqKbLBjizlBs" # Run the pipeline using the client's `create_run_from_job_spec` method. # + id="QOQXoa5wAEB0" my_client.create_run_from_job_spec('pipeline.json') # + [markdown] id="6Kgtx8-bW7cM" # ### Monitor the pipeline run in the Cloud Console # # Once you've deployed the pipeline run, you can monitor it in the [Cloud Console](https://console.cloud.google.com/ai/platform/pipelines) under **AI Platform (Unified)** > **Pipelines**. # # Click in to the pipeline run to see the run graph, and click on a step to view the job detail and the logs for that step. # # As you look at the pipeline graph, you'll see that you can inspect the artifacts passed between the pipeline steps. # + [markdown] id="gjSnifQ1csR5" # In the pipeline run's UI, which shows the DAG, you should be able to see that the model is 'blessed' by the evaluator. # # + [markdown] id="TezenrizqPH3" # Now let's see if the 'blessed' model can # be retried by another pipeline job as the baseline model for model validation. # # First, to expedite the process, let's turn on the cache. Then we'll submit the pipeline for execution. # + id="YgLUoKtGsYnY" import time bigquery_taxi_pipeline.enable_cache = True runner.compile(bigquery_taxi_pipeline, write_out=True) my_client.create_run_from_job_spec('pipeline.json', name='big-query-taxi-pipeline-{}-2nd-{}'.format(USER, str(int(time.time())))) # + [markdown] id="t44-06setIKu" # You should be able to see that all the steps before Trainer are cached. And a 'blessed' model is used as the baseline model for the Evaluator component. # # # # # + [markdown] id="LEImagWTdJ-g" # Next, let's try out a taxi pipeline which uses a file location as the data source. # + id="NEu-NFSsqPHv" fbeg_taxi_pipeline = create_tfx_pipeline(input_path=CSV_INPUT_PATH) runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner( config=kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig( project_id=PROJECT_ID), output_filename='pipeline.json') runner.compile(fbeg_taxi_pipeline, write_out=True) my_client.create_run_from_job_spec('pipeline.json', name='file-based-taxi-pipeline-{}-3rd-{}'.format(USER, str(int(time.time())))) # + [markdown] id="89fYarRLW7cN" # ----------------------------- # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CSE 163 Final Project # ## Stock Price Prediction using Reddit News Headlines import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt from pprint import pprint from sklearn.model_selection import RandomizedSearchCV from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score,recall_score,precision_score,f1_score from sklearn.model_selection import GridSearchCV from sklearn.metrics import confusion_matrix from sklearn.linear_model import SGDClassifier from sklearn.ensemble import RandomForestClassifier from subprocess import check_output from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Conv1D, GlobalMaxPooling1D # ## Data preparation # + data = pd.read_csv('Combined_News_DJIA.csv') train = data[data['Date'] <= '2015-01-01'] test = data[data['Date'] >= '2014-12-31'] # combine all the headlines into one string for each dates in lowercase trainheadlines = [] testheadlines = [] for row in range(0,len(train.index)): trainheadlines.append(' '.join(str(x).lower() for x in train.iloc[row,2:27])) for row in range(0, len(test.index)): testheadlines.append(' '.join(str(x).lower() for x in test.iloc[row, 2:27])) data.describe() # - # Since we wish to predict up and down of stock price based on news headlines, we have assumed that a single word (i.e. unigram ) will not contain enough information of sentiment of word towards up and down of stock price. # # First we will naively verify the claim by calculating accuracy score of each n-grams on LogisticRegression and use the best n-gram. 
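# As a toy illustration of what `ngram_range` changes (a made-up two-headline corpus, not the project
# data): unigrams produce single-word features, while bigrams produce two-word features that preserve
# some word order.

# +
from sklearn.feature_extraction.text import TfidfVectorizer

toy_headlines = ["stocks fall on rate fears", "stocks rise on strong earnings"]
for n in (1, 2):
    toy_vectorizer = TfidfVectorizer(ngram_range=(n, n))
    toy_vectorizer.fit(toy_headlines)
    print(n, sorted(toy_vectorizer.get_feature_names()))
# -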
# ## Vectorizing using Tf-Idf and n-gram # ### Purpose: To find the best n-gram for future use # + # Use logistic regression to choose naively choose best n-gram # We will also use Tf-idf in addition to n-gram to give a score to the words from headlines to identify which word or # words best represents a set of headlines, and use that score of each word or words as input to the unigramVectorizer = TfidfVectorizer( min_df=0.03, max_df=0.97, max_features = 200000, ngram_range = (1, 1)) bigramVectorizer = TfidfVectorizer( min_df=0.03, max_df=0.97, max_features = 200000, ngram_range = (2, 2)) uniTrain = unigramVectorizer.fit_transform(trainheadlines) biTrain = bigramVectorizer.fit_transform(trainheadlines) uniTest = unigramVectorizer.transform(testheadlines) biTest = bigramVectorizer.transform(testheadlines) print("unigram train set shape: ", uniTrain.shape) print("bigram train set shape: ", biTrain.shape) print("unigram test set shape: ", uniTest.shape) print("bigram test set shape: ", biTest.shape) # - biLogisticR = LogisticRegression() uniLogisticR = LogisticRegression() biLogisticR.fit(biTrain, train['Label']) uniLogisticR.fit(uniTrain, train['Label']) # Accuracy score of two uni_accuracy = accuracy_score(test['Label'], uniLogisticR.predict(uniTest)) bi_accuracy = accuracy_score(test['Label'], biLogisticR.predict(biTest)) print("Unigram accuracy score: ", uni_accuracy) print("Bigram accuracy score: ", bi_accuracy) # show highest coefficient and corresponding feature def head_tail(vectorizer, model): features = vectorizer.get_feature_names() coeff = list(model.coef_)[0] df = pd.DataFrame({"features": features, "coefficient": coeff}) df = df.sort_values('coefficient', ascending=False) result_df = pd.concat([df.head(), df.tail()]) return result_df head_tail(bigramVectorizer,biLogisticR) # Although, bigram had accuracy greater than random, the words(features) are not yet intuitive. # # It looks like two word is better than one word, but still not enough to hold meaningful intuition. trigramVectorizer = TfidfVectorizer(min_df=0.03, max_df=0.97, max_features = 200000, ngram_range = (3, 3)) triTrain = trigramVectorizer.fit_transform(trainheadlines) triTest = trigramVectorizer.transform(testheadlines) triLogisticR = LogisticRegression() triLogisticR.fit(triTrain, train['Label']) tri_accuracy = accuracy_score(test['Label'], triLogisticR.predict(triTest)) tri_accuracy head_tail(trigramVectorizer,triLogisticR) # combine all uni, bi, and tri allVectorizer = TfidfVectorizer(min_df=0.03, max_df=0.97, max_features = 200000, ngram_range = (1, 3)) allTrain = allVectorizer.fit_transform(trainheadlines) allTest = allVectorizer.transform(testheadlines) allLogisticR = LogisticRegression() allLogisticR.fit(allTrain, train['Label']) all_accuracy = accuracy_score(test['Label'], allLogisticR.predict(allTest)) all_accuracy head_tail(allVectorizer,allLogisticR) # With the results above, we procede our stock price up and down prediction with **bi-gram** tf_idf # ### Logisitic regression with bigram # # Previously, We already used Logistic Regression with bi-gram without any hyperparameter tuning. # The follwoing is confusion matrix with accuracy score. 
print("Bigram accuracy score: ", bi_accuracy) confusion_matrix(test['Label'],biLogisticR.predict(biTest)) plt.matshow(confusion_matrix(test['Label'],biLogisticR.predict(biTest))) # ### Hyperparameter tuning on Test data(No validation set) # sns.set() best_c = 0 best_acc = 0 best_model = None data_acc = [] for i in np.arange(0.1, 1.0, 0.001): model = LogisticRegression(C = i) model.fit(biTrain, train['Label']) y_train_pred = model.predict(biTrain) train_acc = accuracy_score(train['Label'], y_train_pred) y_test_pred = model.predict(biTest) test_acc = accuracy_score(test['Label'], y_test_pred) data_acc.append({'C': i, 'train accuracy': train_acc, 'test accuracy': test_acc}) if(test_acc > best_acc): best_acc = test_acc best_c = i best_model = model data_acc = pd.DataFrame(data_acc) sns.relplot(kind='line', x='C', y='train accuracy', data=data_acc) plt.savefig('train_acc.png') sns.relplot(kind='line', x='C', y='test accuracy', data=data_acc) plt.savefig('test_acc.png') print("best_c: ", best_c) print("best_acc: ", best_acc) print("best_model: ", best_model) confusion_matrix(test['Label'],best_model.predict(biTest)) # The result above is not quite regularized since it tested it's hyperparameters on test dataset, which make the model overfit to the test data. We will improve this by using cross validation and using validation set to test hyper parameters. # With the visualization best hyperparameter C, which is inverse of regularization strength. Now with this idea, let's peform GRID SEARCH of range from 0.4 to 0.6 for each 0.001 steps # + grid_values = {'penalty': ['l2'],'C':np.arange(0.45, 0.55, 0.0001)} grid_clf_acc = GridSearchCV(LogisticRegression(), param_grid = grid_values, verbose=1, cv=3) grid_clf_acc.fit(biTrain, train['Label']) #Predict values based on new parameters y_pred_acc = grid_clf_acc.predict(biTest) # New Model Evaluation metrics print('Accuracy Score : ' + str(accuracy_score(test['Label'],y_pred_acc))) print('Precision Score : ' + str(precision_score(test['Label'],y_pred_acc))) print('Recall Score : ' + str(recall_score(test['Label'],y_pred_acc))) print('F1 Score : ' + str(f1_score(test['Label'],y_pred_acc))) print('best model: ' + str(grid_clf_acc.best_estimator_)) #Logistic Regression (Grid Search) Confusion matrix confusion_matrix(test['Label'],y_pred_acc) # - # We can see decrease in accuray by 0.01 but this is acceptable because this model is no longer overfit to test data. Now, let this logistic regression accuracy be our baseline for the accuracy of futher prediction by other classifiers.
# Let's try Random Forest.

# ### Base Random Forest Model

# Create a random forest classifier
base_randomForest = RandomForestClassifier(n_estimators=100, random_state=1)
# Train the model on the training set, then predict on the test set
base_randomForest.fit(biTrain, train['Label'])
y_pred = base_randomForest.predict(biTest)
base_randomForest_acc = accuracy_score(test['Label'], y_pred)
print("Base RandomForest Accuracy: ", base_randomForest_acc)

# A simple random forest classifier showed about 0.52 accuracy, which is still worse than our baseline.
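# (Sketch) A quick 3-fold cross-validated estimate of the same base random forest; it is slower, but
# less sensitive to the particular train/test split than the single score above.

# +
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=1),
    biTrain, train['Label'], cv=3)
print("3-fold CV accuracies:", cv_scores, "mean:", cv_scores.mean())
# -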
    # Let's do radom search to narrow down the hyperparameter space to improve the accuracy by hyperparameter tuning later in Grid Search. # ### RandomSearch Grid # Number of trees in random forest n_estimators = np.arange(100, 1000, 100) # Number of features to consider at /3every split max_features = ['sqrt'] # auto is using all features, sqrt is usually good for classification # Maximum number of levels in tree max_depth = np.arange(10, 110, 10) max_depth = np.append(max_depth,None) # Minimum number of samples required to split a node min_samples_split = [2, 5, 10] # Minimum number of samples required at each leaf node min_samples_leaf = [1, 2, 4] # Method of selecting samples for training each tree bootstrap = [True, False] # Create the random grid random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap, 'random_state': [1]} pprint(random_grid) # ### RandomSearch randomForest = RandomForestClassifier() randomForest_random = RandomizedSearchCV(estimator = randomForest, param_distributions = random_grid, n_iter = 100, cv = 3, verbose=1, n_jobs = -1) # Fit the random search model randomForest_random.fit(biTrain, train['Label']) # ### Let's checkout best estimator of RandomSearch def evaluate(model, test_features, test_labels): predictions = model.predict(test_features) errors = abs(predictions - test_labels) mape = 100 * np.mean(errors / test_labels) accuracy = 100 - mape print('Model Performance') print('Average Error: {:0.4f} degrees.'.format(np.mean(errors))) print('Accuracy = {:0.2f}%.'.format(accuracy)) return accuracy # Best parameters from random search print(randomForest_random.best_estimator_) print(accuracy_score(test['Label'], randomForest_random.predict(biTest))) # ### Now let's use this values to tune hyperparameters more precisesly # ### Grid Search n_features = randomForest_random.best_estimator_.n_features_ max_feature_by3 = n_features // 3 param_grid = { 'bootstrap': [True], 'max_depth': np.arange(20, 51, 10), 'max_features': ['sqrt', max_feature_by3], 'min_samples_leaf': [4, 5], 'min_samples_split': [8, 10, 12], 'n_estimators': np.arange(300, 1000, 100), 'random_state': [1] } # Instantiate the grid search model grid_search = GridSearchCV(estimator = RandomForestClassifier(), param_grid = param_grid, cv = 3, n_jobs = -1, verbose = 1) grid_search.fit(biTrain,train['Label']) n_features = randomForest_random.best_estimator_.n_features_ max_feature_by3 = n_features // 3 param_grid = { 'bootstrap': [True], 'max_depth': np.arange(20, 51, 10), 'max_features': ['sqrt', max_feature_by3], 'min_samples_leaf': [4, 5], 'min_samples_split': [8, 10, 12], 'n_estimators': np.arange(400, 600, 50), 'random_state': [1] } # Instantiate the grid search model grid_search2 = GridSearchCV(estimator = RandomForestClassifier(), param_grid = param_grid, cv = 3, n_jobs = -1, verbose = 1) grid_search2.fit(biTrain,train['Label']) # We will definitely compare accuracy of models grid_search.best_estimator_ accuracy_score(test['Label'],grid_search.predict(biTest)) # + from sklearn.tree import export_graphviz import pydot best_grid_search = grid_search.best_estimator_ # Pull out one tree from the forest tree = best_grid_search.estimators_[5] # Export the image to a dot file export_graphviz(tree, out_file = 'tree.dot', feature_names = feature_list, rounded = True, precision = 1) # Use dot file to create a graph (graph, ) = pydot.graph_from_dot_file('tree.dot') # 
Write graph to a png file graph.write_png('tree.png') # - # model 1: Hyperparameter tuning on Test data(No validation set) # model 2: Hyperparameter tuning using validation set # model 3: Base RandomForest # model 4: RandomForest using RandomSearch # model 5: RandomForest using GridSearch models = ['model 1', 'model 2', 'model 3', 'model 4', 'model 5'] accs = [0.5883905013192612, 0.575197889182058, 0.5224274406332454, 0.5224274406332454, 0.5277044854881267] sns.set(style="whitegrid") sns.barplot(x=models, y=accs, palette="rocket") plt.title('Compare Models\' Accuracy Score') plt.ylabel('Accuracy') plt.savefig('compare_acc.png') epochs = np.arange(1, 10, 1) dense = np.arange(8, 65, 8) models = [] acc_result = [] for i in epochs: for j in dense: model = Sequential() model.add(Dense(j,input_shape=(biTrain.shape[1],))) model.add(Dropout(0.2)) model.add(Activation('relu')) model.add(Dense(j)) model.add(Dropout(0.2)) model.add(Activation('relu')) model.add(Dense(1)) model.add(Activation('sigmoid')) model.summary() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) model.fit(biTrain,train['Label'],batch_size=32,epochs=i,verbose=1,validation_split=0.2) test_loss, test_acc = model.evaluate(biTest, test['Label']) models.append(model) acc_result.append({'epochs': i, 'dense': j, 'test accuracy': test_acc}) acc_result[acc_result['test accuracy'] == max(acc_result['test accuracy'])] acc_result acc_result = pd.DataFrame(acc_result) sns.set(rc={'figure.figsize':(16, 8)}) sns.scatterplot(x='epochs', y='dense', hue="test accuracy", size="test accuracy",data=acc_result, sizes=(10, 500)) plt.savefig('nn.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import division, print_function import os from collections import defaultdict import numpy as np from scipy import interp import pandas as pd import matplotlib.pyplot as plt import seaborn.apionly as sns from mlxtend.plotting import plot_decision_regions from sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold from sklearn.metrics import accuracy_score import comptools as comp import comptools.analysis.plotting as plotting # color_dict allows for a consistent color-coding for each composition color_dict = comp.analysis.get_color_dict() # %matplotlib inline # - df_sim_train, df_sim_test = comp.load_sim(config='IC86.2012') feature_list, feature_labels = comp.analysis.get_training_features() comp_class = True target = 'MC_comp_class' if comp_class else 'MC_comp' comp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe'] pipeline_str = 'BDT' pipeline = comp.get_pipeline(pipeline_str) df_sim_train.MC_comp.value_counts() energybins = comp.analysis.get_energybins() # ## Visualizing feature space cos_zenith_min = 0.85 cos_zenith_max = 0.9 cos_zenith_mask = np.logical_and(df_sim_train['lap_cos_zenith'] <= cos_zenith_max, df_sim_train['lap_cos_zenith'] >= cos_zenith_min) cos_zenith_mask.sum() # + training_comps = ['PPlus', 'Fe56Nucleus'] comp_to_label_dict = {'PPlus': 0, 'Fe56Nucleus': 1} cmaps = ['Blues', 'Oranges'] def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # Get restricted training DataFrame comps_mask = df_sim_train['MC_comp'].isin(training_comps) df_train = 
df_sim_train.loc[comps_mask].copy() df_train['target'] = df_train['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) # pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_train[feature_list], df_train['target']) fig, ax = plt.subplots() for MC_comp, cmap in zip(training_comps, cmaps): comp_mask = df_train['MC_comp'] == MC_comp sns.kdeplot(df_train.loc[cos_zenith_mask & comp_mask, 'log_s125'], df_train.loc[cos_zenith_mask & comp_mask, 'log_dEdX'], n_levels=10, cmap=cmap, label=MC_comp, alpha=0.5, ax=ax) # Plot decision region using mlxtend ax = plot_decision_regions(df_train.loc[cos_zenith_mask, feature_list].values, df_train.loc[cos_zenith_mask, 'target'].values, clf=pipeline, feature_index=[1, 2], filler_feature_values={0: np.mean([cos_zenith_min, cos_zenith_max])}, res=0.02, legend=1, colors='C0,C1', hide_spines=False, ax=ax) ax.set_ylabel('$\mathrm{\\log_{10}(dE/dX)}$') ax.set_xlabel('$\mathrm{\\log_{10}(S_{125})}$') zenith_range_str = '{:0.2f} '.format(cos_zenith_min) + \ '$\mathrm{ \leq \cos(\\theta_{reco}) \leq }$' + \ ' {:0.2f}'.format(cos_zenith_max) ax.text(-0.3, 3.05, zenith_range_str, fontsize=14) ax.set_xlim(-0.5, 2.5) ax.set_ylim(0, 3.5) ax.grid() ax.legend(loc='upper center') decision_region_outfile = os.path.join(comp.paths.figures_dir, 'decision_regions', 'P-Fe-only.png') comp.check_output_dir(decision_region_outfile) plt.savefig(decision_region_outfile) plt.show() # + training_comps = ['O16Nucleus', 'Fe56Nucleus'] comp_to_label_dict = {'O16Nucleus': 0, 'Fe56Nucleus': 1} cmaps = ['Reds', 'Oranges'] def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # Get restricted training DataFrame comps_mask = df_sim_train['MC_comp'].isin(training_comps) df_train = df_sim_train.loc[comps_mask].copy() df_train['target'] = df_train['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) # pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_train[feature_list], df_train['target']) fig, ax = plt.subplots() for MC_comp, cmap in zip(training_comps, cmaps): comp_mask = df_train['MC_comp'] == MC_comp sns.kdeplot(df_train.loc[cos_zenith_mask & comp_mask, 'log_s125'], df_train.loc[cos_zenith_mask & comp_mask, 'log_dEdX'], n_levels=10, cmap=cmap, label=MC_comp, alpha=0.5, ax=ax) # Plot decision region using mlxtend ax = plot_decision_regions(df_train.loc[cos_zenith_mask, feature_list].values, df_train.loc[cos_zenith_mask, 'target'].values, clf=pipeline, feature_index=[1, 2], filler_feature_values={0: np.mean([cos_zenith_min, cos_zenith_max])}, res=0.02, legend=1, colors='C3,C1', hide_spines=False, ax=ax) ax.set_ylabel('$\mathrm{\\log_{10}(dE/dX)}$') ax.set_xlabel('$\mathrm{\\log_{10}(S_{125})}$') zenith_range_str = '{:0.2f} '.format(cos_zenith_min) + \ '$\mathrm{ \leq \cos(\\theta_{reco}) \leq }$' + \ ' {:0.2f}'.format(cos_zenith_max) ax.text(-0.3, 3.05, zenith_range_str, fontsize=14) ax.set_xlim(-0.5, 2.5) ax.set_ylim(0, 3.5) ax.grid() ax.legend(loc='upper center') decision_region_outfile = os.path.join(comp.paths.figures_dir, 'decision_regions', 'O-Fe-only.png') comp.check_output_dir(decision_region_outfile) plt.savefig(decision_region_outfile) plt.show() # + training_comps = ['PPlus', 'Fe56Nucleus', 'O16Nucleus'] comp_to_label_dict = {'PPlus': 0, 'Fe56Nucleus': 1, 'O16Nucleus':2} cmaps = 
['Blues', 'Oranges', 'Reds'] def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # Get restricted training DataFrame comps_mask = df_sim_train['MC_comp'].isin(training_comps) df_train = df_sim_train.loc[comps_mask].copy() df_train['target'] = df_train['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_train[feature_list], df_train['target']) fig, ax = plt.subplots() for MC_comp, cmap in zip(training_comps, cmaps): comp_mask = df_train['MC_comp'] == MC_comp sns.kdeplot(df_train.loc[cos_zenith_mask & comp_mask, 'log_s125'], df_train.loc[cos_zenith_mask & comp_mask, 'log_dEdX'], n_levels=10, cmap=cmap, label=MC_comp, alpha=0.5, ax=ax) # # Plot decision region using mlxtend # ax = plot_decision_regions(df_train.loc[cos_zenith_mask, feature_list].values, # df_train.loc[cos_zenith_mask, 'target'].values, # clf=pipeline, feature_index=[1, 2], # filler_feature_values={0: np.mean([cos_zenith_min, cos_zenith_max])}, # res=0.02, legend=1, colors='C0,C1,C3', hide_spines=False, # ax=ax) ax.set_ylabel('$\mathrm{\\log_{10}(dE/dX)}$') ax.set_xlabel('$\mathrm{\\log_{10}(S_{125})}$') zenith_range_str = '{:0.2f} '.format(cos_zenith_min) + \ '$\mathrm{ \leq \cos(\\theta_{reco}) \leq }$' + \ ' {:0.2f}'.format(cos_zenith_max) ax.text(-0.3, 3.05, zenith_range_str, fontsize=14) ax.set_xlim(-0.5, 2.5) ax.set_ylim(0, 3.5) ax.grid() ax.legend(loc='upper center') decision_region_outfile = os.path.join(comp.paths.figures_dir, 'decision_regions', 'P-O-Fe-only.png') comp.check_output_dir(decision_region_outfile) plt.savefig(decision_region_outfile) plt.show() # + training_comps = ['PPlus', 'Fe56Nucleus', 'O16Nucleus', 'He4Nucleus'] comp_to_label_dict = {'PPlus': 0, 'Fe56Nucleus': 1, 'O16Nucleus':2, 'He4Nucleus': 3} cmaps = ['Blues', 'Oranges', 'Reds', 'Purples'] def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # Get restricted training DataFrame comps_mask = df_sim_train['MC_comp'].isin(training_comps) df_train = df_sim_train.loc[comps_mask].copy() df_train['target'] = df_train['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_train[feature_list], df_train['target']) fig, ax = plt.subplots() for MC_comp, cmap in zip(training_comps, cmaps): comp_mask = df_train['MC_comp'] == MC_comp sns.kdeplot(df_train.loc[cos_zenith_mask & comp_mask, 'log_s125'], df_train.loc[cos_zenith_mask & comp_mask, 'log_dEdX'], n_levels=10, cmap=cmap, label=MC_comp, alpha=0.5, ax=ax) # Plot decision region using mlxtend ax = plot_decision_regions(df_train.loc[cos_zenith_mask, feature_list].values, df_train.loc[cos_zenith_mask, 'target'].values, clf=pipeline, feature_index=[1, 2], filler_feature_values={0: np.mean([cos_zenith_min, cos_zenith_max])}, res=0.02, legend=1, colors='C0,C1,C3,C4', hide_spines=False, ax=ax) ax.set_ylabel('$\mathrm{\\log_{10}(dE/dX)}$') ax.set_xlabel('$\mathrm{\\log_{10}(S_{125})}$') zenith_range_str = '{:0.2f} '.format(cos_zenith_min) + \ '$\mathrm{ \leq \cos(\\theta_{reco}) \leq }$' + \ ' {:0.2f}'.format(cos_zenith_max) ax.text(-0.3, 3.05, zenith_range_str, fontsize=14) ax.set_xlim(-0.5, 2.5) 
ax.set_ylim(0, 3.5) ax.grid() ax.legend(loc='upper center') decision_region_outfile = os.path.join(comp.paths.figures_dir, 'decision_regions', 'P-He-O-Fe.png') comp.check_output_dir(decision_region_outfile) plt.savefig(decision_region_outfile) plt.show() # - # ## P+He light & O+Fe heavy skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_accuracy = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} for train_index, test_index in skf.split(df_sim_train[feature_list], df_sim_train['target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train.iloc[train_index] df_sim_test_fold = df_sim_train.iloc[test_index] # Fit pipeline with training data for this fold pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_label = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) pred_comp = np.array([comp.label_to_comp(i) for i in pred_label]) log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values light_mask = pred_comp == 'light' counts_light = np.histogram(log_enegy[light_mask], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_light = counts_light/counts_total cv_accuracy[MC_comp].append(frac_light) df_frac_light = pd.DataFrame.from_dict(cv_accuracy) df_frac_light # + fig, ax = plt.subplots() for MC_comp in df_frac_light.columns: frac_light_mean = df_frac_light[MC_comp].mean(axis=0) frac_light_std = df_frac_light[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction classified as light') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nLight: P, He | \t Heavy: O, Fe') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-classified-light_4-comp-training.png') plt.savefig(outfile) plt.show() # - # ## P light & Fe heavy skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_accuracy = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} for train_index, test_index in skf.split(df_sim_train[feature_list], df_sim_train['target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train.iloc[train_index] df_sim_test_fold = df_sim_train.iloc[test_index] # Restrict to only training on Fe56Nucleus and PPlus simulation iron_proton_mask = df_sim_train_fold['MC_comp'].isin(['Fe56Nucleus', 'PPlus']).values # Fit pipeline with training data for this fold pipeline.fit(df_sim_train_fold[feature_list][iron_proton_mask], df_sim_train_fold['target'][iron_proton_mask]) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_label = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) pred_comp = np.array([comp.label_to_comp(i) for i in pred_label]) log_enegy = 
df_sim_test_fold['lap_log_energy'][comp_mask].values light_mask = pred_comp == 'light' counts_light = np.histogram(log_enegy[light_mask], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_light = counts_light/counts_total cv_accuracy[MC_comp].append(frac_light) df_frac_light = pd.DataFrame.from_dict(cv_accuracy) df_frac_light # + fig, ax = plt.subplots() for MC_comp in df_frac_light.columns: frac_light_mean = df_frac_light[MC_comp].mean(axis=0) frac_light_std = df_frac_light[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction classified as light') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nLight: P | \t Heavy: Fe') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-classified-light_P-Fe-training-only.png') plt.savefig(outfile) plt.show() # + comp_to_label_dict = {'PPlus': 0, 'Fe56Nucleus': 1} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # - skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_correct = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation proton_iron_mask = df_sim_train['MC_comp'].isin(['PPlus', 'Fe56Nucleus']).values for train_index, test_index in skf.split(df_sim_train.loc[proton_iron_mask, feature_list], df_sim_train.loc[proton_iron_mask, 'target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train[proton_iron_mask].iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train[proton_iron_mask].iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) correctly_identified = pred_comp == df_sim_test_fold['target'][comp_mask] log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values counts_correct = np.histogram(log_enegy[correctly_identified], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_correct = counts_correct/counts_total cv_correct[MC_comp].append(frac_correct) df_frac_correct = pd.DataFrame.from_dict({key: value for key, value in cv_correct.items() if len(value) != 0}) df_frac_correct # + fig, ax = plt.subplots() for MC_comp in df_frac_correct.columns: frac_light_mean = df_frac_correct[MC_comp].mean(axis=0) frac_light_std = df_frac_correct[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, 
color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction correctly classified') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nClass 1: P | \t Class 2: Fe') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-correct_P-light-Fe-heavy-training.png') plt.savefig(outfile) plt.show() # - # ## P light & He heavy # + comp_to_label_dict = {'PPlus': 0, 'He4Nucleus': 1} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # - skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_accuracy = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation proton_helium_mask = df_sim_train['MC_comp'].isin(['PPlus', 'He4Nucleus']).values for train_index, test_index in skf.split(df_sim_train.loc[proton_helium_mask, feature_list], df_sim_train.loc[proton_helium_mask, 'target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train[proton_helium_mask].iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train[proton_helium_mask].iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values light_mask = pred_comp == 0 counts_light = np.histogram(log_enegy[light_mask], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_light = counts_light/counts_total cv_accuracy[MC_comp].append(frac_light) df_frac_light = pd.DataFrame.from_dict({key: value for key, value in cv_accuracy.items() if len(value) != 0}) df_frac_light # + fig, ax = plt.subplots() for MC_comp in df_frac_light.columns: frac_light_mean = df_frac_light[MC_comp].mean(axis=0) frac_light_std = df_frac_light[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction classified as light') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nLight: P | \t Heavy: He') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-classified-light_P-light-He-heavy-training.png') plt.savefig(outfile) 
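# Optional summary sketch (not in the original notebook): alongside the figure saved
# above, print the energy-averaged fraction classified as light for each composition,
# mirroring the np.concatenate(...).mean()/.std() pattern used later in this notebook.
for MC_comp in df_frac_light.columns:
    frac_all = np.concatenate(df_frac_light[MC_comp].values)
    print('{}: fraction classified as light = {:0.2f} +/- {:0.2f}'.format(
        MC_comp, np.nanmean(frac_all), np.nanstd(frac_all)))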
plt.show() # - skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_correct = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation proton_helium_mask = df_sim_train['MC_comp'].isin(['PPlus', 'He4Nucleus']).values for train_index, test_index in skf.split(df_sim_train.loc[proton_helium_mask, feature_list], df_sim_train.loc[proton_helium_mask, 'target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train[proton_helium_mask].iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train[proton_helium_mask].iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) correctly_identified = pred_comp == df_sim_test_fold['target'][comp_mask] log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values counts_correct = np.histogram(log_enegy[correctly_identified], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_correct = counts_correct/counts_total cv_correct[MC_comp].append(frac_correct) df_frac_correct = pd.DataFrame.from_dict({key: value for key, value in cv_correct.items() if len(value) != 0}) df_frac_correct # + fig, ax = plt.subplots() for MC_comp in df_frac_correct.columns: frac_light_mean = df_frac_correct[MC_comp].mean(axis=0) frac_light_std = df_frac_correct[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction correctly classified') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nClass 1: P | \t Class 2: He') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-correct_P-light-He-heavy-training.png') plt.savefig(outfile) plt.show() # - # ## P light & He+O+Fe heavy # + comp_to_label_dict = {'PPlus': 0, 'He4Nucleus': 1, 'O16Nucleus': 1, 'Fe56Nucleus': 1} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) # - skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_accuracy = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} for train_index, test_index in skf.split(df_sim_train[feature_list], df_sim_train['target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train.iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train.iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for 
this fold pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values light_mask = pred_comp == 0 counts_light = np.histogram(log_enegy[light_mask], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_light = counts_light/counts_total cv_accuracy[MC_comp].append(frac_light) df_frac_light = pd.DataFrame.from_dict(cv_accuracy) df_frac_light # + fig, ax = plt.subplots() for MC_comp in df_frac_light.columns: frac_light_mean = df_frac_light[MC_comp].mean(axis=0) frac_light_std = df_frac_light[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_light_mean, yerr=frac_light_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction classified as light') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title(r'\underline{Class definitions}'+'\nLight: P | \t Heavy: He, O, Fe') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-classified-light_P-light-else-heavy-training.png') plt.savefig(outfile) plt.show() # - # ## Pairwise classification from itertools import combinations MC_comp_list = ['PPlus', 'He4Nucleus', 'O16Nucleus', 'Fe56Nucleus'] get_MC_comp_name = {'PPlus':'P', 'He4Nucleus':'He', 'O16Nucleus':'O', 'Fe56Nucleus':'Fe'} a = list(combinations(MC_comp_list, 2)) comp1, comp2 = a[0] comp1 for comp1, comp2 in list(combinations(MC_comp_list, 2)): comp1_name = get_MC_comp_name[comp1] comp2_name = get_MC_comp_name[comp2] print(comp1_name, comp2_name) comp_to_label_dict = {comp1: 0, comp2: 1} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_correct = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation comps_mask = df_sim_train['MC_comp'].isin([comp1, comp2]).values for train_index, test_index in skf.split(df_sim_train.loc[comps_mask, feature_list], df_sim_train.loc[comps_mask, 'target']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train[comps_mask].iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train[comps_mask].iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) correctly_identified = 
pred_comp == df_sim_test_fold['target'][comp_mask] log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values counts_correct = np.histogram(log_enegy[correctly_identified], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_correct = counts_correct/counts_total cv_correct[MC_comp].append(frac_correct) df_frac_correct = pd.DataFrame.from_dict({key: value for key, value in cv_correct.items() if len(value) != 0}) fig, ax = plt.subplots() for MC_comp in df_frac_correct.columns: frac_mean = df_frac_correct[MC_comp].mean(axis=0) frac_std = df_frac_correct[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_mean, yerr=frac_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) for composition, name, y_coord in zip([comp1, comp2], [comp1_name, comp2_name], [0.25, 0.05]): acc_mean = np.concatenate(df_frac_correct[composition]).mean() acc_std = np.concatenate(df_frac_correct[composition]).std() ax.text(6.6, y_coord, '{} accuracy:\n {:0.2f} +/- {:0.2f}'.format(name, acc_mean, acc_std), fontsize=16) ax.axhline(0.5, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction correctly classified') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') # ax.set_title(r'\underline{Class definitions}'+'\nClass 1: {} | \t Class 2: {}'.format(comp1_name, comp2_name)) ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) # ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) ax.legend(loc='lower right') outfile = os.path.join(comp.paths.figures_dir, 'fraction-correct_{}-light-{}-heavy-training.png'.format(comp1_name, comp2_name)) plt.savefig(outfile) plt.show() # + comp_to_label_dict = {'PPlus': 0, 'He4Nucleus': 1, 'O16Nucleus': 2, 'Fe56Nucleus': 3} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_correct = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation for train_index, test_index in skf.split(df_sim_train[feature_list], df_sim_train['MC_comp']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train.iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train.iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) correctly_identified = pred_comp == df_sim_test_fold['target'][comp_mask] log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values counts_correct = np.histogram(log_enegy[correctly_identified], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_correct = 
counts_correct/counts_total cv_correct[MC_comp].append(frac_correct) df_frac_correct = pd.DataFrame.from_dict({key: value for key, value in cv_correct.items() if len(value) != 0}) fig, ax = plt.subplots() for MC_comp in df_frac_correct.columns: frac_mean = df_frac_correct[MC_comp].mean(axis=0) frac_std = df_frac_correct[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_mean, yerr=frac_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) np.concatenate(df_frac_correct.Fe56Nucleus).mean() np.concatenate(df_frac_correct.Fe56Nucleus).std() ax.axhline(0.25, marker='None', ls='-.', c='k', label='Guessing') ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max) ax.set_ylim(0, 1) ax.set_ylabel('Fraction correctly classified') ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_title('4-composition classification') ax.grid() # leg = plt.legend(loc='upper center', frameon=False, # bbox_to_anchor=(0.5, # horizontal # 1.25),# vertical # ncol=3, fancybox=False) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False) outfile = os.path.join(comp.paths.figures_dir, 'fraction-correct_4-component-training.png') plt.savefig(outfile) plt.show() # + comp_to_label_dict = {'PPlus': 0, 'O16Nucleus': 1, 'Fe56Nucleus': 2} def comp_to_label(composition): try: return comp_to_label_dict[composition] except KeyError: raise KeyError('Incorrect composition ({}) entered'.format(composition)) skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=2) cv_correct = {MC_comp: [] for MC_comp in df_sim_train.MC_comp.unique()} # Restrict to only training on PPlus and He4Nucleus simulation comps_mask = df_sim_train['MC_comp'].isin(['PPlus', 'O16Nucleus', 'Fe56Nucleus']).values for train_index, test_index in skf.split(df_sim_train.loc[comps_mask, feature_list], df_sim_train.loc[comps_mask, 'MC_comp']): # Get training/testing data for this fold df_sim_train_fold = df_sim_train[comps_mask].iloc[train_index].copy() df_sim_train_fold['target'] = df_sim_train_fold['MC_comp'].apply(comp_to_label) df_sim_test_fold = df_sim_train[comps_mask].iloc[test_index].copy() df_sim_test_fold['target'] = df_sim_test_fold['MC_comp'].apply(comp_to_label) # Fit pipeline with training data for this fold pipeline = comp.get_pipeline(pipeline_str) pipeline.named_steps['classifier'].set_params(loss='deviance') pipeline.fit(df_sim_train_fold[feature_list], df_sim_train_fold['target']) # Get fraction of events classified as light for each MC composition for MC_comp in df_sim_train_fold.MC_comp.unique(): comp_mask = df_sim_test_fold['MC_comp'] == MC_comp pred_comp = pipeline.predict(df_sim_test_fold[feature_list][comp_mask]) correctly_identified = pred_comp == df_sim_test_fold['target'][comp_mask] log_enegy = df_sim_test_fold['lap_log_energy'][comp_mask].values counts_correct = np.histogram(log_enegy[correctly_identified], bins=energybins.log_energy_bins)[0] counts_total = np.histogram(log_enegy, bins=energybins.log_energy_bins)[0] frac_correct = counts_correct/counts_total cv_correct[MC_comp].append(frac_correct) df_frac_correct = pd.DataFrame.from_dict({key: value for key, value in cv_correct.items() if len(value) != 0}) fig, ax = plt.subplots() for MC_comp in df_frac_correct.columns: frac_mean = df_frac_correct[MC_comp].mean(axis=0) frac_std = df_frac_correct[MC_comp].values.std(axis=0) plotting.plot_steps(energybins.log_energy_bins, frac_mean, yerr=frac_std, color=color_dict[MC_comp], label=MC_comp, ax=ax) np.concatenate(df_frac_correct.Fe56Nucleus).mean() 
np.concatenate(df_frac_correct.Fe56Nucleus).std()

ax.axhline(1/3, marker='None', ls='-.', c='k', label='Guessing')
ax.set_xlim(energybins.log_energy_min, energybins.log_energy_max)
ax.set_ylim(0, 1)
ax.set_ylabel('Fraction correctly classified')
ax.set_xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$')
ax.set_title('3-composition classification')
ax.grid()
# leg = plt.legend(loc='upper center', frameon=False,
#                  bbox_to_anchor=(0.5,  # horizontal
#                                  1.25),  # vertical
#                  ncol=3, fancybox=False)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
outfile = os.path.join(comp.paths.figures_dir, 'fraction-correct_3-component-training.png')
plt.savefig(outfile)
plt.show()
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.5 64-bit (''indicadores'': conda)'
#     name: python3
# ---

# +
from sklearn.datasets import fetch_openml
import pandas as pd

mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.keys()
# -

X, y = mnist['data'], mnist['target']
X.shape

# Split before plotting so that X_test is defined when a test digit is visualized below
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]

# +
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt

some_digit = X_test[0]
some_digit_image = some_digit.reshape(28, 28)

plt.imshow(some_digit_image, cmap=mpl.cm.binary)
plt.axis("off")
plt.show()
# -

# ## K-nearest neighbors

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# +
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.metrics import make_scorer

param_grid = [
    {'n_neighbors': [3, 5, 7, 9], 'weights': ['uniform', 'distance']}
]

scoring = {'accuracy': make_scorer(accuracy_score),
           'precision': make_scorer(precision_score, average='macro'),
           'recall': make_scorer(recall_score, average='macro'),
           'f1_macro': make_scorer(f1_score, average='macro'),
           'f1_weighted': make_scorer(f1_score, average='weighted')}

model = KNeighborsClassifier()
grid_search = GridSearchCV(model, param_grid, cv=2, scoring=scoring, verbose=3, refit=False)
grid_search.fit(X_train, y_train)
cv_results = pd.DataFrame.from_dict(grid_search.cv_results_)
# -

cv_results.dtypes

cv_results.loc[cv_results['rank_test_f1_weighted']==1]

from sklearn.metrics import accuracy_score

# With refit=False the grid search keeps no fitted best estimator, so refit a
# classifier with the best-ranked parameters before computing predictions.
best_params = cv_results.loc[cv_results['rank_test_f1_weighted'] == 1, 'params'].iloc[0]
best_knn = KNeighborsClassifier(**best_params).fit(X_train, y_train)
prediction = best_knn.predict(X_test)

# %time print('KNN Accuracy: %.3f' % accuracy_score(y_test, prediction))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.5 64-bit (''anaconda3'': virtualenv)'
#     language: python
#     name: python385jvsc74a57bd072544b4b71f926df8e6550fd872403aeb49cdc47d952f1d1cea45f1032de013d
# ---

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.decomposition import PCA

df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)

df_wine.head()

df_wine.isnull().sum()

plt.figure(figsize=(10,8))
sns.heatmap(df_wine.corr(), annot=True)

df_wine[0].value_counts()

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_wine.iloc[:,1:], df_wine.iloc[:,0],)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
# 
language: python # name: python3 # --- # # Title # The title of the notebook should be coherent with file name. Namely, file name should be: # *author's initials_progressive number_title.ipynb* # For example: # *EF_01_Data Exploration.ipynb* # # ## Purpose # State the purpose of the notebook. # # ## Methodology # Quickly describe assumptions and processing steps. # # ## WIP - improvements # Use this section only if the notebook is not final. # # Notable TODOs: # - todo 1; # - todo 2; # - todo 3. # # ## Results # Describe and comment the most important results. # # ## Suggested next steps # State suggested next steps, based on results obtained in this notebook. # # Setup # # ## Library import # We import all the required Python libraries # + # Data manipulation import pandas as pd import numpy as np import os import glob from biopandas.pdb import PandasPdb from pymol import cmd # Options for pandas pd.options.display.max_columns = 50 pd.options.display.max_rows = 30 # Visualizations import plotly import plotly.graph_objs as go import plotly.offline as ply plotly.offline.init_notebook_mode(connected=True) import matplotlib.pyplot as plt import seaborn as sns # Autoreload extension if 'autoreload' not in get_ipython().extension_manager.loaded: # %load_ext autoreload # %autoreload 2 # - # ### Change directory # If Jupyter lab sets the root directory in `notebooks`, change directory. if "notebook" in os.getcwd(): os.chdir("..") # ## Local library import # We import all the required local libraries libraries # + # Include local library paths import sys # sys.path.append("./src") # uncomment and fill to import local libraries # Import local libraries # import src.utilities as utils # - # # Parameter definition # We set all relevant parameters for our notebook. By convention, parameters are uppercase, while all the # other variables follow Python's guidelines. # # # Data import # We retrieve all the required data for the analysis. # # Data processing # Put here the core of the notebook. Feel free to further split this section into subsections. # # References # We report here relevant references: # 1. author1, article1, journal1, year1, url1 # 2. author2, article2, journal2, year2, url2 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Computational Astrophysics # ## Partial Differential Equations. 01 Generalities # # --- # ## # # Observatorio Astronómico Nacional\ # Facultad de Ciencias\ # Universidad Nacional de Colombia # # --- # ### About this notebook # # In this notebook we present some of the generalities about systems of Partial Differential Equations. # # `. Numerical Methods for Physics. (1999). Chapter 6 - 7 ` # # --- # ## Partial Differential Equations (PDEs) # # A PDE is a relation between the partial derivatives of an unknown # function and the independent variables. The order of the highest derivative sets the *order* of the PDE. # # A PDE is *linear* if it is of the first degree in the dependent # variable (i.e. the unknown function) and in its partial derivatives. # # If each term of a PDE contains either the dependent variable # or one of its partial derivatives, the PDE is called *homogeneous*. # Otherwise it is *non-homogeneous*. # # # # # # # --- # ### Types of Partial Differential Equations # # There are three general types of PDEs # # 1. hyperbolic # 2. parabolic # 3. 
elliptic
#
# Not all PDEs fall into one of these three types, but many
# PDEs used in practice do.
#
# These classes of PDEs model different sorts
# of phenomena, display different behavior, and require different numerical
# techniques for their solution.
#
# It is not always straightforward to see (to show and prove) what type
# of PDE a given PDE is!
#
# ### Linear Second Order Differential Equation
#
# Consider a function $u=u(x,y)$ satisfying the linear second order differential equation
#
# \begin{equation}
# a \partial^2_{xx} u + b \partial^2_{xy} u + c \partial^2_{yy} u + d \partial_x u + e \partial_y u + f u = g\,\,,
# \end{equation}
#
# This equation is straightforwardly categorized based on the discriminant,
#
# \begin{equation}
# b^2 - 4ac \left\{ \begin{array}{lcr}
# < 0 & \rightarrow & \text{elliptic},\\
# = 0 & \rightarrow & \text{parabolic},\\
# > 0 & \rightarrow & \text{hyperbolic}.
# \end{array}\right.
# \end{equation}
#
# The names come from the analogy with conic sections.

# ---
# ## Hyperbolic PDEs
#
# Hyperbolic equations in physics and astrophysics
# describe **dynamical** processes and systems that generally start
# at some initial time $t_0=0$ with some initial conditions. Hence, the equations are then integrated in time.
#
# The prototypical linear second-order hyperbolic equation is the
# homogeneous wave equation,
#
# \begin{equation}
# c^2 \partial^2_{xx} u - \partial^2_{tt} u = 0\,\,,
# \end{equation}
#
# where $c$ is the wave speed.

# ---
# Another class of hyperbolic equations are the **first-order
# hyperbolic systems**. In one space dimension and assuming a linear
# problem, this is
#
# \begin{equation}
# \partial_t u + A \partial_x u = 0\,\,,
# \end{equation}
#
# where $u(x,t)$ is a state vector with $s$ components and $A$
# is a $s \times s$ matrix.
#
# The problem is called *hyperbolic* if
# $A$ has only real eigenvalues and is diagonalizable, i.e., has a complete set
# of linearly independent eigenvectors so that one can construct a
# matrix
#
# \begin{equation}
# \Lambda = Q^{-1} A Q\,\,,
# \end{equation}
#
# where $\Lambda$ is diagonal and has real numbers on its diagonal.
#
# **Example**
#
# An example of these equations is the linear **advection equation**, in which the function $u=u(t,x)$ satisfies
#
# \begin{equation}
# \partial_t u + v \partial_x u = 0\,\,,
# \end{equation}
#
# which is first order, linear, and homogeneous.
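# As a small illustration (not part of the original notes), the advection equation above
# can be integrated with the simplest stable finite-difference scheme, the first-order
# upwind method, $u_i^{n+1} = u_i^n - v \frac{\Delta t}{\Delta x}(u_i^n - u_{i-1}^n)$ for $v>0$.
# The grid sizes and the Gaussian initial profile below are arbitrary choices for the sketch.

# +
import numpy as np

v = 1.0                      # advection speed
nx, nt = 200, 100            # number of grid points and time steps
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / v            # CFL number of 0.5 keeps the scheme stable

u = np.exp(-100.0 * (x - 0.3)**2)   # initial Gaussian pulse
for n in range(nt):
    # upwind update with periodic boundaries via np.roll; valid for v > 0
    u = u - v * dt / dx * (u - np.roll(u, 1))

# after nt steps the pulse has been advected a distance v * nt * dt to the right
# -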
# ---
#
# Another example is given by the non-linear first-order systems. Consider the equation
#
# \begin{equation}
# \partial_t u + \partial_x F(u) = 0\,\,,
# \end{equation}
#
# where $F(u)$ is the **flux** and may or may not be non-linear in $u(t,x)$.
# We can re-write this PDE in **quasi-linear** form, by introducing
# the Jacobian
#
# \begin{equation}
# \bar{A} = \frac{\partial F}{\partial u}\,\,,
# \end{equation}
#
# and writing
#
# \begin{equation}
# \partial_t u + \bar{A}\partial_x u = 0\,\,.
# \label{eq:pde_quasilin1}
# \end{equation}
#
# This PDE is hyperbolic if $\bar{A}$ has real eigenvalues and is
# diagonalizable.
#
# The **equations of hydrodynamics** are a key example
# of a non-linear, first-order hyperbolic PDE system.

# ### Boundary Conditions for Hyperbolic Problems
#
# One must specify either von Neumann, Dirichlet, or Robin boundary
# conditions:
#
# 1. **Dirichlet Boundary Conditions**
#
# \begin{equation}
# \begin{aligned}
# u(x=0,t) &= \Phi_1(t)\,\,,\\
# u(x=L,t) &= \Phi_2(t)\,\,.
# \end{aligned}
# \end{equation}
#
# 2. **von Neumann Boundary Conditions**
#
# \begin{equation}
# \begin{aligned}
# \partial_x u(x=0,t) &= \Psi_1(t)\,\,,\\
# \partial_x u(x=L,t) &= \Psi_2(t)\,\,.
# \end{aligned}
# \end{equation}
#
# Note that in a multi-dimensional problem $\partial_x$ turns into the derivative in the direction of the normal to the boundary.
#
# 3. **Robin Boundary Conditions**
#
# Let $a_1, b_1, a_2, b_2$ be real numbers with $a_i \neq 0$ and $b_i \neq 0$.
#
# \begin{equation}
# \begin{aligned}
# a_1 u(x=0,t) + b_1 \partial_x u(x=0,t) &= \Psi_1(t)\,\,,\\
# a_2 u(x=L,t) + b_2 \partial_x u(x=L,t) &= \Psi_2(t)\,\,.
# \end{aligned}
# \end{equation}
#
# Dirichlet and von Neumann boundary conditions are recovered if either
# both $a_i$ or both $b_i$ vanish.
# Note that in a multi-dimensional problem $\partial_x$ turns into
# the derivative in the direction of the normal to the boundary.

# ---
# ## Parabolic PDEs
#
# Parabolic PDEs describe processes that are slowly changing, such as
# the slow diffusion of heat in a medium, sediments in ground water, and
# radiation in an opaque medium. Parabolic PDEs are second order and
# generally have the shape
#
# \begin{equation}
# \partial_t u - k \partial^2_{xx} u = f\,\,.
# \end{equation}
#
# ### Initial Conditions for Parabolic Problems
#
# One must specify $u(x,t=0)$ at all $x$.
#
# ### Boundary Conditions for Parabolic Problems
#
# Dirichlet, von Neumann or Robin boundary conditions.
# If the boundary conditions are independent of time, the system will
# evolve towards a steady state ($\partial_t u = 0$). In this case,
# one may set $\partial_t u = 0$ for all times and treat the differential equation as an elliptic equation.

# ---
# ## Elliptic PDEs
#
# Elliptic equations describe systems that are static, in steady state
# and/or in equilibrium. There is no time dependence. A typical elliptic
# equation is the Poisson equation,
#
# \begin{equation}
# \nabla^2 u = f\,\,,
# \end{equation}
#
# which one encounters in Newtonian gravity and in electrodynamics.
# $\nabla^2$ is the Laplace operator, and $f$ is a given scalar function
# of position. Elliptic problems may be linear ($f$ does not depend on
# $u$ or its derivatives) or non-linear ($f$ depends on $u$ or its
# derivatives).
#
# ### Initial Conditions for Elliptic Problems
#
# Do not apply, since there is no time dependence.
#
# ### Boundary Conditions for Elliptic Problems
#
# Dirichlet, von Neumann or Robin boundary conditions.

# ---
#
# ## Numerical Methods for PDEs
#
# There is no such thing as a general robust method for the solution of
# generic PDEs. Each type (and each sub-type) of PDE requires a different
# approach. Real-life PDEs may be of mixed type or may have special
# properties that require knowledge about the underlying physics for their
# successful solution.
#
# There are three general classes of approaches to solving PDEs:
#
# ### 1. Finite Difference Methods.
#
# The differential operators are approximated using their finite-difference
# representation on a given grid. A sub-class of finite-difference methods,
# so-called finite-volume methods, can be used for PDEs arising from
# conservation laws (e.g., the hydrodynamics equations).
#
# Finite difference/volume methods have polynomial convergence for
# smooth functions.
#
# ### 2. Finite Element Methods.
#
# The domain is divided into cells (**elements**). 
The solution # is represented as a single function (e.g., a polynomial) on each # cell and the PDE is transformed to an algebraic problem for the matching # conditions of the simple functions at cell interfaces. # # Finite element methods can have polynomial or exponential convergence # for smooth functions. # # ### 3. Spectral Methods. # The solution is represented by a linear combination of known # functions (e.g. trigonometric functions or special # polynomials). The PDE is transformed to a set of algebraic # equations (or ODEs) for the amplitudes of the component # functions. A sub-class of these methods are the collocation # methods. In them, the solution is represented on a grid and the # spectral decomposition of the solution in known functions is # used to estimate to a high degree of accuracy the partial # derivatives of the solution on the grid points. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction: Get Date/Time Usage # # In this notebook, we will see how to use the `get_datetime_info` function to extract date and time information (month, week, day of week, hour, minute, fraction of day, etc.) from a datetime in a pandas dataframe. This function will provide us with 17 attributes we can use for modeling or to examine patterns. # + # Standard data science libraries import pandas as pd import numpy as np # Options for pandas pd.options.display.max_columns = 25 # Display all cell outputs from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = 'all' from get_datetime_info import get_datetime_info # - # The data we are using is building energy data. It does not really matter what the data is so long as it has a datetime. data = pd.read_csv('building_one_with_tz.csv', header = [0, 1], index_col = 0).loc[:, 'Energy'] data.head() # We will use the `index` here that includes a time zone. __As an important note, the function converts times into local time when calculating the attributes if a time zone is passed in.__ To disable this behavior, do not pass in a time zone. # The function takes in a number of parameters: # # * `df`: The dataframe # * `date_col`: a string with the name of the column containing the datetimes. This can also be "index" to use the index of the dataframe # * `timezone`: string with the timezone of the building. If provided, the times are converted to local time # * `drop`: boolean for whether the original column should be dropped form the dataframe # # The return is a new dataframe with the 17 columns containing the attributes. data_with_info = get_datetime_info(df=data, date_col='index', timezone='America/New_York', drop=False) data_with_info.head() # We can also run this without a timezone which means none of the times will be converted. data_without_tz = get_datetime_info(df=data, date_col='index', timezone=None, drop=False) data_without_tz.head() # All of the attributes are now given in utc time. # We can also use a column (not the index) with the same effect. data_with_info2 = get_datetime_info(data.reset_index(), date_col='measured_at', timezone='America/New_York', drop = False) data_with_info2.head() np.all(np.equal(data_with_info.iloc[:, 3:].values, data_with_info2.iloc[:, 4:].values)) # As we can see, both methods return the same values. 
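# An optional usage sketch, not part of the original notebook. The column names
# 'dayofweek' and 'hour' below are assumptions about what get_datetime_info returns;
# inspect data_with_info.columns for the actual names. The point is simply that the
# extracted attributes make grouped summaries (e.g. a weekly load profile) one-liners.
if {'dayofweek', 'hour'}.issubset(data_with_info.columns):
    weekly_profile = data_with_info.groupby(['dayofweek', 'hour']).mean(numeric_only=True)
    weekly_profile.head()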
# # We now have a dataframe with many date and time attributes that we can use as needed. This function is useful for rapidly creating new features for fitting a model or finding trends in data. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook contains code to make figures and undertake analysis for a report on the spread of the variant of concern across the UK. # # Author: # (https://github.com/ViralVerity) # + import sys import geopandas import pandas as pd from collections import defaultdict from collections import Counter import datetime as dt import matplotlib.pyplot as plt import numpy as np import math import seaborn as sns import scipy.stats import csv import matplotlib as mpl # + map_data = "../Data/gadm36_GBR_2.json" eire = geopandas.read_file("../Data/gadm36_IRL_1.json") ni = geopandas.read_file("../Data/NI_counties.geojson") level_3 = geopandas.read_file("../Data/gadm36_GBR_3.json") level_2 = geopandas.read_file(map_data) level_3 = level_3.to_crs("EPSG:3395") level_2 = level_2.to_crs("EPSG:3395") # + rate_db = pd.read_csv("../Data/growth_rates_UTLANAME_06012021.csv") ####This data is unavailable for use due to data protection issues#### genome_data = "input_data/20210122_Genomes_UTLA_corrected.csv" ##### figdir = "figures/" result_dir = "result_files/" # - # ## Make custom location dataframe to match to rate calculations utla = set(rate_db["prov"]) adm2s = list(level_2["NAME_2"]) adm3s = list(level_3["NAME_3"]) # + #locations in the rate dataframe that match 1:1 with locations in the GADM dataframe conversion_dict = {} conversion_dict["Perth and Kinross"] = "Perthshire and Kinross" conversion_dict["Kingston upon Hull, City of"] = "Kingston upon Hull" conversion_dict["St. 
Helens"] = "Saint Helens" conversion_dict["Armagh City, Banbridge and Craigavon"] = "Armagh, Banbridge and Craigavon" conversion_dict["Rhondda Cynon Taf"] = "Rhondda, Cynon, Taff" conversion_dict["Glasgow City"] = "Glasgow" conversion_dict["Comhairle nan Eilean Siar"] = "Eilean Siar" conversion_dict["Bristol, City of"] = "Bristol" conversion_dict["Dundee City"] = "Dundee" conversion_dict["Herefordshire, County of"] = "Herefordshire" conversion_dict["Ards and North Down"] = "North Down and Ards" conversion_dict["Derry City and Strabane"] = "Derry and Strabane" conversion_dict["City of Edinburgh"] = "Edinburgh" conversion_dict["County Durham"] = "Durham" conversion_dict["Isle of Anglesey"] = "Anglesey" conversion_dict["Aberdeen City"] = "Aberdeen" conversion_dict["Hackney and City of London"] = "Hackney" conversion_dict["Bedford"] = "Bedfordshire" conversion_dict["Na h-Eileanan Siar"] = "Eilean Siar" reverse_conversion_dict = {} for k,v in conversion_dict.items(): reverse_conversion_dict[v.upper().replace(" ","_")] = k.upper().replace(" ","_") # - #GADM locations which need to be merged together merge_dict = {} merge_dict["Cornwall"] = "Cornwall and Isles of Scilly" merge_dict["Isles of Scilly"] = "Cornwall and Isles of Scilly" merge_dict["Bournemouth"] = "Bournemouth, Christchurch and Poole" merge_dict["Poole"] = "Bournemouth, Christchurch and Poole" #These locations are going to be split up, so the larger polygon is removed first remove_london = level_2.loc[level_2["NAME_2"] != "Greater London"] remove_lincs = remove_london.loc[remove_london["NAME_2"] != "Lincolnshire"] remove_sefton = remove_lincs.loc[remove_lincs["NAME_2"] != "Sefton"] # + #merge together those locations in the merge dict for_merging = [] for location in remove_sefton["NAME_2"]: if location in merge_dict: for_merging.append(merge_dict[location]) else: for_merging.append(location) remove_sefton["Multi_loc"] = for_merging merged_locs = remove_sefton.dissolve(by="Multi_loc") mergeds = [] for multi_loc in merged_locs.index: mergeds.append(multi_loc) merged_locs["NAME_2"] = mergeds # - #Get the london boroughs ready to_add = level_3.loc[level_3["NAME_2"] == "Greater London"] to_add["NAME_2"] = to_add["NAME_3"] # + #Get lincolnshire components ready lincs = level_3.loc[level_3["NAME_2"] == "Lincolnshire"] for_merging = [] for location in lincs["NAME_3"]: if location != "North East Lincolnshire": for_merging.append("Lincolnshire") else: for_merging.append(location) lincs["Multi_loc"] = for_merging linc_merged_locs = lincs.dissolve(by="Multi_loc") mergeds = [] for multi_loc in linc_merged_locs.index: mergeds.append(multi_loc) linc_merged_locs["NAME_2"] = mergeds # - #Get Sefton/Liverpool ready sefton = level_3.loc[level_3["NAME_2"] == "Sefton"] sefton["NAME_2"] = sefton["NAME_3"] #beds = level_3.loc[level_3["NAME_2"] == "Luton"] #Add them all together custom_df = merged_locs.append(linc_merged_locs).append(sefton).append(to_add) london_boroughs = [] for i in level_3.loc[level_3["NAME_2"] == "Greater London"]["NAME_3"]: london_boroughs.append(i.upper().replace(" ","_")) #A quick test to make sure all of locations in the rate dataframe are accounted for lower_londons = [] for i in london_boroughs: lower_londons.append(i.replace("_", " ").title()) new_adm2s = list(custom_df["NAME_2"]) for i in utla: if i not in conversion_dict and i not in adm2s and i not in merge_dict.values() and i not in lower_londons: print(i) # ## Growth rates across UK # + #This is six days before the current date # cutoff = dt.date(2020,12,8) # cutoff = 
dt.date(2020,12,15) # cutoff = dt.date(2020,12,22) cutoff = dt.date(2020,12,17) cutoff_string = cutoff.strftime("%Y-%m-%d") print(cutoff_string) #Convert strings to date objects rate_db['times'] = pd.to_datetime(rate_db['times']).dt.date #Remove more recent times to account for reporting lag new_rates = rate_db.loc[rate_db["times"] < cutoff] # + #Calculate seven day rolling average for each location, and take the most recent value df_dict = defaultdict(list) for place in utla: first = new_rates.loc[new_rates["prov"] == place] rolling_mean = first.rates.rolling(window=7).mean() last_mean = float(rolling_mean.iloc[[-1]]) df_dict["name"].append(place) df_dict["Rate"].append(last_mean) average_rate = pd.DataFrame(df_dict) # + #Convert any names that need converting to match the custom dataframe new_names = [] for place in average_rate["name"]: if place in conversion_dict: new_name = conversion_dict[place] else: new_name = place new_names.append(new_name) average_rate["NAME_2"] = new_names # - #Combine the dataframe with shapes with the rates with_rates = custom_df.merge(average_rate) # + #Plot the rates fig, ax = plt.subplots(figsize=(15,15)) with_rates = with_rates.to_crs("EPSG:3395") eire = eire.to_crs("EPSG:3395") eire["geometry"] = eire.geometry.simplify(1000) with_rates["geometry"] = with_rates.geometry.simplify(1000) with_rates.plot(ax=ax, column="Rate", legend=True) eire.plot(ax=ax, color="darkgrey") plt.title(cutoff_string) ax.axis("off") plt.savefig(f'{figdir}rate_map_{cutoff_string}.png', format='png') plt.savefig(f'{figdir}rate_map_{cutoff_string}.pdf', format='pdf', dpi=5) plt.show() # - # ## Map of location where variant is present and when first sample was detected. # aggregates = {} with open("input_data/adm2_aggregation.csv") as f: next(f) for l in f: toks = l.strip("\n").split(",") aggregates[toks[0]] = toks[1] nice_names = {"BIRMINGHAM|COVENTRY|DUDLEY|SANDWELL|SOLIHULL|WALSALL|WOLVERHAMPTON":"West Midlands", "DERBY|DERBYSHIRE|LEICESTER|LEICESTERSHIRE|LINCOLNSHIRE|NORTHAMPTONSHIRE|NOTTINGHAM|NOTTINGHAMSHIRE|RUTLAND":"East Midlands", "BOLTON|BURY|MANCHESTER|OLDHAM|ROCHDALE|SALFORD|STOCKPORT|TAMESIDE|TRAFFORD|WIGAN":"Greater Manchester", "EAST_SUSSEX|WEST_SUSSEX":"Sussex", "BRADFORD|CALDERDALE|KIRKLEES|LEEDS|WAKEFIELD":"West Yorkshire", "GATESHEAD|NEWCASTLE_UPON_TYNE|NORTH_TYNESIDE|SOUTH_TYNESIDE|SUNDERLAND": "Tyne and Wear", "BARNSLEY|DONCASTER|ROTHERHAM|SHEFFIELD": "South Yorkshire", "BRACKNELL_FOREST|READING|SLOUGH|WEST_BERKSHIRE|WINDSOR_AND_MAIDENHEAD|WOKINGHAM":"Berkshire", "BRACKNELL_FOREST|BUCKINGHAMSHIRE|READING|SLOUGH|WEST_BERKSHIRE|WINDSOR_AND_MAIDENHEAD|WOKINGHAM":"Berkshire and Buckinghamshire", 'KNOWSLEY|SAINT_HELENS|SEFTON|WIRRAL':"Merseyside", "CHESHIRE_EAST|CHESHIRE_WEST_AND_CHESTER":"Cheshire", "RHONDDA_CYNON_TAFF": "RHONDDA,_CYNON,_TAFF" } #Pull out the genomic data #Two sequences required additional cleaning, and two sequences had locations which were too ambiguous, so these were left out count = 0 location_to_dates = defaultdict(list) with open(genome_data) as f: data = csv.DictReader(f) for line in data: date = dt.datetime.strptime(line["sample_date"], "%Y-%m-%d").date() loc = line["adm2"] if line["lineage"] == "B.1.1.7": if loc != "" and loc != "Needs_manual_curation" and loc != "THURROCK|GREATER_LONDON|KENT" and loc != "THURROCK|KENT|MEDWAY": if "|" in loc and loc not in nice_names.keys(): new_name = "" bits = loc.split("|") for i in bits: if i in aggregates: next_bit = aggregates[i] else: next_bit = i if len(new_name) == 0: new_name += next_bit else: 
new_name += "|" + next_bit elif loc in nice_names.keys(): new_name = nice_names[loc].upper().replace(" ","_") elif "|" not in loc and loc in aggregates: new_name = aggregates[loc].upper().replace(" ","_") # if "EAST_SUSSEX" in loc: # print(loc, new_name) else: new_name = loc final_name = "" if "|" in new_name: name_lst = [] name_lst.extend(new_name.split("|")) name_set = set(name_lst) if len(name_set) == 1: final_name = list(name_set)[0] else: pass else: final_name = new_name if final_name != "": location_to_dates[final_name].append(date) # + #OLD #Deal with sequence locations that are combinations of GADM adm2s #These are mostly commonly used aggregations like Merseyside that we split up to ease matching sequences to shapefiles. #The adm2 cleaning step that provides input for this step is a product of the grapevine pipeline which is run every day #Find all the ambiguities and their component parts ambiguous = [] ambiguous_dict = defaultdict(set) clusters = [] for adm2 in location_to_dates.keys(): if "|" in adm2: ambiguous.append(frozenset(adm2.split("|"))) amb_set = set(ambiguous) for group in amb_set: for group2 in amb_set: if group != group2: if group & group2: group |= group2 for group2 in amb_set: if group != group2: if group & group2: group |= group2 clusters.append(group) for cluster in clusters: for place in cluster: ambiguous_dict[place] = "|".join(sorted(cluster)) #Merge the dates together into the new merged locations newlocation_to_dates = defaultdict(list) for key, value in location_to_dates.items(): if "|" in key: lookup = key.split("|")[0] new_loc = ambiguous_dict[lookup] else: if key in ambiguous_dict: new_loc = ambiguous_dict[key] else: new_loc = key newlocation_to_dates[new_loc].extend(value) # + #Find the earliest date of each new location loc_to_earliest = defaultdict(list) all_earliest = [] for k,v in newlocation_to_dates.items(): all_earliest.append(min(v)) count = 0 date_to_num = {} num_to_date = {} one_day = dt.timedelta(1) day_range = (max(all_earliest) - min(all_earliest)).days date = min(all_earliest) for num in range(day_range+1): date_to_num[date] = num num_to_date[num] = date date += one_day for k,v in location_to_dates.items(): loc_to_earliest["NAME_2"].append(k) loc_to_earliest["earliest_date"].append(min(v)) loc_to_earliest["date_number"].append(date_to_num[min(v)]) earliest_date = pd.DataFrame(loc_to_earliest) earliest_date.to_csv(result_dir + "variant_present.csv") # + #Merge the shape files together ambiguous_dict = aggregates for_merging = [] for location in level_2["NAME_2"]: if location.upper().replace(" ","_") in ambiguous_dict: for_merging.append(ambiguous_dict[location.upper().replace(" ","_")]) else: for_merging.append(location.upper().replace(" ","_")) level_2["Multi_loc"] = for_merging level_2_merged = level_2.dissolve(by="Multi_loc") mergeds = [] for multi_loc in level_2_merged.index: mergeds.append(multi_loc) level_2_merged["NAME_2"] = mergeds # + correct_crs = level_2_merged.to_crs("EPSG:4326") correct_crs correct_crs.to_file("merged_adm2_NI_districts.shp") # + # Remove Northern Ireland LADs and replace with counties to match with sequence data no_ni = level_2_merged.loc[level_2_merged["NAME_1"] != "Northern Ireland"] ni["NAME_2"] = ni["CountyName"] ni = ni.to_crs("EPSG:3395") ni_fixed = no_ni.append(ni) # - print(ni.crs) with_earliest = ni_fixed.merge(earliest_date, how="outer") # + #get present/more than one here with_earliest.loc[with_earliest["earliest_date"].isnull()] for k,v in location_to_dates.items(): if len(v) == 1: print(k) # + 
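#Note on this cell: it draws the map of when the variant was first sampled in each
#merged adm2 area. The numeric "date_number" column built above is the plotted column,
#the colourbar ticks are converted back to calendar dates via num_to_date, and areas
#with no recorded variant sequence are filled in light grey.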
#Plot the results mpl.rcParams["path.simplify"] = True mpl.rcParams["path.simplify_threshold"] = 1.0 fig, ax = plt.subplots(figsize=(15,15)) with_earliest = with_earliest.to_crs("EPSG:3395") eire = eire.to_crs("EPSG:3395") eire["geometry"] = eire.geometry.simplify(1000) with_earliest["geometry"] = with_earliest.geometry.simplify(1000) with_earliest.plot(ax=ax, cmap='viridis', legend=True, column="date_number", legend_kwds={"shrink":0.75}, missing_kwds={"color": "lightgrey","label": "No variant recorded"}) eire.plot(ax=ax, color="darkgrey") colourbar = ax.get_figure().get_axes()[1] yticks = colourbar.get_yticks() colourbar.set_yticklabels([num_to_date[ytick] for ytick in yticks]) ax.axis("off") plt.savefig(figdir + "variant_first_date.png", format='png') plt.savefig(figdir + "variant_first_date.pdf", format='pdf') plt.show() # - # ### Converting to utla for ease of comparison utla_list = [] for i in utla: utla_list.append(i.upper().replace(" ","_")) # + adm2_names = [] for name in box_df["location"]: adm2_names.append(name) utla_names = {} for name in adm2_names: new_name = [] if "|" in name: toks = name.split("|") for i in toks: if i in reverse_conversion_dict: new_name.append(reverse_conversion_dict[i]) elif i in utla_ambig2: new_name.extend(utla_ambig2[i].split("|")) else: new_name.append(i) if i not in utla_list: print(i) else: if name in reverse_conversion_dict: new_name.append(reverse_conversion_dict[name]) elif name in utla_ambig2: new_name.extend(utla_ambig2[name].split("|")) else: new_name.append(name) if name not in utla_list: print(name) final = "|".join(new_name) utla_names[name] = final # + df_dict = defaultdict(list) for k,v in utla_names.items(): df_dict["location"].append(k) df_dict["utla"].append(v) if k in newlocation_to_dates: df_dict["earliest_sequence"].append(min(newlocation_to_dates[k])) else: df_dict["earliest_sequence"].append("NA") utla_df = pd.DataFrame(df_dict) box_df.merge(utla_df) box_df.to_csv("utla_earliest_seq_rate.csv") # + #Some descriptive measures print(len(newlocation_to_dates)) small_locs = [] count = 0 for i,v in newlocation_to_dates.items(): if len(v) <=5: small_locs.append(i) else: count += 1 print(count) print(len(small_locs)) # - # ## Frequency calculation (early late and updated) # + all_dates = [] with open("all_dates_locs_updated.csv") as f: next(f) for l in f: toks = l.strip("\n").split(",") date = dt.datetime.strptime(toks[0], "%Y-%m-%d").date() if toks[1] == "UK": all_dates.append(date) print(max(all_dates)) # - count = 0 for i in all_dates: if i > dt.date(2020,12,17): count += 1 print(count) # + all_locs_to_dates_early = defaultdict(list) all_locs_to_dates_late = defaultdict(list) all_locs_to_dates_latest = defaultdict(list) all_locs_to_dates = defaultdict(list) early_start = dt.date(2020,11,25) early_finish = dt.date(2020,12,2) late_start = dt.date(2020,12,10) late_finish = dt.date(2020,12,17) later_start = dt.date(2020,12,18) later_finish = dt.date(2020,12,21) with open("all_dates_locs_updated.csv") as f: next(f) for l in f: toks = l.strip("\n").split(",") date = dt.datetime.strptime(toks[0], "%Y-%m-%d").date() if toks[1] == "UK": if toks[2] != "" and toks[2] != "Needs_manual_curation": if date >= early_start and date <= early_finish: all_locs_to_dates_early[toks[2]].append(date) all_locs_to_dates[toks[2]].append(date) if date >= late_start and date <= late_finish: all_locs_to_dates_late[toks[2]].append(date) all_locs_to_dates[toks[2]].append(date) if date >= later_start and date <= later_finish: 
all_locs_to_dates_latest[toks[2]].append(date) all_locs_to_dates[toks[2]].append(date) elephants_early = defaultdict(list) elephants_late = defaultdict(list) elephants_latest = defaultdict(list) for k,v in location_to_dates.items(): lst_early = [] lst_late = [] lst_later = [] for i in v: if i >= early_start and i <= early_finish: lst_early.append(i) if i >= late_start and i <= late_finish: lst_late.append(i) if i >= later_start and i <= later_finish: lst_later.append(i) if len(lst_early) != 0 or len(lst_late) != 0 or len(lst_later) != 0: elephants_early[k] = lst_early elephants_late[k] = lst_late elephants_latest[k] = lst_later # - def make_merged_locations(input_dict): output_dict = defaultdict(list) for key, value in input_dict.items(): if "|" in key: lookup = key.split("|")[0] new_loc = ambiguous_dict[lookup] else: if key in ambiguous_dict: new_loc = ambiguous_dict[key] else: new_loc = key output_dict[new_loc].extend(value) return output_dict # + #Find all the ambiguities and their component parts ambiguous = [] ambiguous_dict = defaultdict(set) clusters = [] for adm2 in all_locs_to_dates.keys(): if "|" in adm2: ambiguous.append(frozenset(adm2.split("|"))) amb_set = set(ambiguous) for group in amb_set: for group2 in amb_set: if group != group2: if group & group2: group |= group2 for group2 in amb_set: if group != group2: if group & group2: group |= group2 for group2 in amb_set: if group != group2: if group & group2: group |= group2 clusters.append(group) for cluster in clusters: for place in cluster: ambiguous_dict[place] = "|".join(sorted(cluster)) #Merge the dates together into the new merged locations new_locs_early_all = make_merged_locations(all_locs_to_dates_early) new_locs_late_all = make_merged_locations(all_locs_to_dates_late) new_locs_later_all = make_merged_locations(all_locs_to_dates_latest) new_elephants_early = make_merged_locations(elephants_early) new_elephants_late = make_merged_locations(elephants_late) new_elephants_latest = make_merged_locations(elephants_latest) # + early_frequencies = {} late_frequencies = {} latest_frequencies = {} for loc, lst in new_elephants_early.items(): try: early_frequencies[loc] = len(lst)/len(new_locs_early_all[loc]) except ZeroDivisionError: early_frequencies[loc] = 0 for loc, lst in new_elephants_late.items(): try: late_frequencies[loc] = len(lst)/len(new_locs_late_all[loc]) except ZeroDivisionError: late_frequencies[loc] = 0 for loc, lst in new_elephants_latest.items(): try: latest_frequencies[loc] = len(lst)/len(new_locs_later_all[loc]) except ZeroDivisionError: latest_frequencies[loc] = 0 # + df_dict = defaultdict(list) for k,v in late_frequencies.items(): df_dict["Location"].append(k) df_dict["Early_frequency"].append(early_frequencies[k]) df_dict["Early_count_all"].append(len(new_locs_early_all[k])) df_dict["Early_count_elephants"].append(len(new_elephants_early[k])) df_dict["Late_frequency"].append(v) df_dict["Late_count_all"].append(len(new_locs_late_all[k])) df_dict["Late_count_elephants"].append(len(new_elephants_late[k])) df_dict["Later_frequency"].append(latest_frequencies[k]) df_dict["Later_count_all"].append(len(new_locs_later_all[k])) df_dict["Later_count_elephants"].append(len(new_elephants_latest[k])) freq_df = pd.DataFrame(df_dict) freq_df.to_csv("Early_and_late_and_later_frequencies.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Data 
Ingestion and Filtering - pipeline for the topology classifier with Apache Spark # # **1. Data Ingestion** is the first stage of the pipeline. Here we will read the ROOT file from HDFS into a Spark dataframe using [Spark-ROOT](https://github.com/diana-hep/spark-root) reader and then we will create the Low Level Features (LLF) and High Level Features datasets. # # To run this notebook we used the following configuration: # * *Software stack*: Spark 2.4.3 # * *Platform*: CentOS 7, Python 3.6 # * *Spark cluster*: Analytix # pip install pyspark or use your favorite way to set Spark Home, here we use findspark import findspark findspark.init('/home/luca/Spark/spark-2.4.3-bin-hadoop2.7') #set path to SPARK_HOME # + # Configure according to your environment pyspark_python = "/bin/python" spark_root_jar="https://github.com/diana-hep/spark-root/blob/master/jars/spark-root_2.11-0.1.17.jar?raw=true" from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName("1-Data Ingestion") \ .master("yarn") \ .config("spark.driver.memory","8g") \ .config("spark.executor.memory","14g") \ .config("spark.executor.cores","8") \ .config("spark.executor.instances","50") \ .config("spark.dynamicAllocation.enabled","false") \ .config("spark.jars",spark_root_jar) \ .config("spark.jars.packages","org.diana-hep:root4j:0.1.6") \ .config("spark.pyspark.python",pyspark_python) \ .config("spark.eventLog.enabled","false") \ .getOrCreate() # - # Check if Spark Session has been created correctly spark # Add a file containing functions that we will use later spark.sparkContext.addPyFile("utilFunctions.py") # ## Read the data from HDFS #
    # As first step we will read the samples into a Spark dataframe using Spark-Root. We will select only a subset of columns present in the original files. # + PATH = "hdfs://analytix/Training/Spark/TopologyClassifier/lepFilter_rawData/" samples = ["qcd_lepFilter_13TeV, "ttbar_lepFilter_13TeV", "Wlnu_lepFilter_13TeV"] requiredColumns = [ "EFlowTrack", "EFlowNeutralHadron", "EFlowPhoton", "Electron", "MuonTight", "MuonTight_size", "Electron_size", "MissingET", "Jet" ] # + from pyspark.sql.functions import lit dfList = [] for label,sample in enumerate(samples): print("Loading {} sample...".format(sample)) tmpDF = spark.read \ .format("org.dianahep.sparkroot.experimental") \ .load(PATH + sample + "/*.root") \ .select(requiredColumns) \ .withColumn("label", lit(label)) dfList.append(tmpDF) # - # Merge all samples into a single dataframe df = dfList[0] for tmpDF in dfList[1:]: df = df.union(tmpDF) # Let's have a look at how many events there are for each class. Keep in mind that the labels are mapped as follow # * $0=\text{QCD}$ # * $1=\text{t}\bar{\text{t}}$ # * $2=\text{W}+\text{jets}$ # Print the number of events per sample and the total (label=null) df.rollup("label").count().orderBy("label", ascending=False).show() # Let's print the schema of one of the required columns. This shows that the # **schema is complex and nested** (the full schema is even more complex) df.select("EFlowTrack").printSchema() # ## Create derivate datasets # # Now we will create the LLF and HLF datasets. This is done by the function `convert` below which takes as input an event (i.e. the list of particles present in that event) and do the following steps: # 1. Select the events with at least one isolated electron/muon (implemented in `selection`) # 2. Create the list of 801 particles and the 19 low level features for each of them # 3. Compute the high level features # + # Import Lorentz Vector and other functions for pTmaps from utilFunctions import * def selection(event, TrkPtMap, NeuPtMap, PhotonPtMap, PTcut=23., ISOcut=0.45): """ This function simulates the trigger selection. Foreach event the presence of one isolated muon or electron with pT >23 GeV is required """ if event.Electron_size == 0 and event.MuonTight_size == 0: return False, False, False foundMuon = None foundEle = None l = LorentzVector() for ele in event.Electron: if ele.PT <= PTcut: continue l.SetPtEtaPhiM(ele.PT, ele.Eta, ele.Phi, 0.) 
pfisoCh = PFIso(l, 0.3, TrkPtMap, True) pfisoNeu = PFIso(l, 0.3, NeuPtMap, False) pfisoGamma = PFIso(l, 0.3, PhotonPtMap, False) if foundEle == None and (pfisoCh+pfisoNeu+pfisoGamma) foundMuon[5]: return True, foundEle, foundMuon else: return True, foundMuon, foundEle if foundEle != None: return True, foundEle, foundMuon if foundMuon != None: return True, foundMuon, foundEle return False, None, None # + import math import numpy as np from pyspark.ml.linalg import Vectors def convert(event): """ This function takes as input an event, applies trigger selection and create LLF and HLF datasets """ q = LorentzVector() particles = [] TrkPtMap = ChPtMapp(0.3, event) NeuPtMap = NeuPtMapp(0.3, event) PhotonPtMap = PhotonPtMapp(0.3, event) if TrkPtMap.shape[0] == 0: return Row() if NeuPtMap.shape[0] == 0: return Row() if PhotonPtMap.shape[0] == 0: return Row() # # Get leptons # selected, lep, otherlep = selection(event, TrkPtMap, NeuPtMap, PhotonPtMap) if not selected: return Row() particles.append(lep) lepMomentum = LorentzVector(lep[1], lep[2], lep[3], lep[0]) # # Select Tracks # nTrk = 0 for h in event.EFlowTrack: if nTrk>=450: continue if h.PT<=0.5: continue q.SetPtEtaPhiM(h.PT, h.Eta, h.Phi, 0.) if lepMomentum.DeltaR(q) > 0.0001: pfisoCh = PFIso(q, 0.3, TrkPtMap, True) pfisoNeu = PFIso(q, 0.3, NeuPtMap, False) pfisoGamma = PFIso(q, 0.3, PhotonPtMap, False) particles.append([q.E(), q.Px(), q.Py(), q.Pz(), h.PT, h.Eta, h.Phi, h.X, h.Y, h.Z, pfisoCh, pfisoGamma, pfisoNeu, 1., 0., 0., 0., 0., float(np.sign(h.PID))]) nTrk += 1 # # Select Photons # nPhoton = 0 for h in event.EFlowPhoton: if nPhoton >= 150: continue if h.ET <= 1.: continue q.SetPtEtaPhiM(h.ET, h.Eta, h.Phi, 0.) pfisoCh = PFIso(q, 0.3, TrkPtMap, True) pfisoNeu = PFIso(q, 0.3, NeuPtMap, False) pfisoGamma = PFIso(q, 0.3, PhotonPtMap, False) particles.append([q.E(), q.Px(), q.Py(), q.Pz(), h.ET, h.Eta, h.Phi, 0., 0., 0., pfisoCh, pfisoGamma, pfisoNeu, 0., 0., 1., 0., 0., 0.]) nPhoton += 1 # # Select Neutrals # nNeu = 0 for h in event.EFlowNeutralHadron: if nNeu >= 200: continue if h.ET <= 1.: continue q.SetPtEtaPhiM(h.ET, h.Eta, h.Phi, 0.) pfisoCh = PFIso(q, 0.3, TrkPtMap, True) pfisoNeu = PFIso(q, 0.3, NeuPtMap, False) pfisoGamma = PFIso(q, 0.3, PhotonPtMap, False) particles.append([q.E(), q.Px(), q.Py(), q.Pz(), h.ET, h.Eta, h.Phi, 0., 0., 0., pfisoCh, pfisoGamma, pfisoNeu, 0., 1., 0., 0., 0., 0.]) nNeu += 1 for iTrk in range(nTrk, 450): particles.append([0., 0., 0., 0., 0., 0., 0., 0.,0., 0.,0., 0., 0., 0., 0., 0., 0., 0., 0.]) for iPhoton in range(nPhoton, 150): particles.append([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) for iNeu in range(nNeu, 200): particles.append([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) # # High Level Features # myMET = event.MissingET[0] MET = myMET.MET phiMET = myMET.Phi MT = 2.*MET*lepMomentum.Pt()*(1-math.cos(lepMomentum.Phi()-phiMET)) HT = 0. nJets = 0. nBjets = 0. 
for jet in event.Jet: if jet.PT > 30 and abs(jet.Eta)<2.6: nJets += 1 HT += jet.PT if jet.BTag>0: nBjets += 1 LepPt = lep[4] LepEta = lep[5] LepPhi = lep[6] LepIsoCh = lep[10] LepIsoGamma = lep[11] LepIsoNeu = lep[12] LepCharge = lep[18] LepIsEle = lep[16] hlf = Vectors.dense([HT, MET, phiMET, MT, nJets, nBjets, LepPt, LepEta, LepPhi, LepIsoCh, LepIsoGamma, LepIsoNeu, LepCharge, LepIsEle]) # # return the Row of low level features and high level features # return Row(lfeatures=particles, hfeatures=hlf, label=event.label) # - # Finally apply the function to all the events features = df.rdd \ .map(convert) \ .filter(lambda row: len(row) > 0) \ .toDF() features.printSchema() # ## Save the datasets as Parquet files # + dataset_path = "hdfs://analytix/Training/Spark/TopologyClassifier/dataIngestion_full_13TeV" num_partitions = 3000 # used in DataFrame coalesce operation to limit number of output files # %time features.coalesce(num_partitions).write.partitionBy("label").parquet(dataset_path) # - print("Number of events written to Parquet:", spark.read.parquet(dataset_path).count()) spark.stop() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from sklearn.datasets import make_blobs import matplotlib.pyplot as plt from numpy import where plt.rcParams['figure.dpi'] = 300 plt.rcParams['figure.autolayout'] = True plt.style.use('science') # 生成样本 X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2) with plt.style.context(['science', 'scatter']): for class_value in range(3): # 找出各类别样本点的索引 row_ix = where(y == class_value) # 以不同颜色绘制样本点 plt.scatter(X[row_ix, 0], X[row_ix, 1]) plt.show() # + from sklearn.datasets import make_blobs from keras.layers import Dense from keras.models import Sequential from keras.optimizers import SGD from keras.utils import to_categorical import matplotlib.pyplot as plt plt.rcParams['figure.dpi'] = 300 plt.rcParams['figure.autolayout'] = True plt.style.use('science') def create_dataset(): # 生成训练样本 X, y = make_blobs(n_samples=1000, centers=20, n_features=100, cluster_std=2, random_state=2) # one-hot编码 y = to_categorical(y) # 划分 train&test n_train = 500 trainX, testX = X[:n_train, :], X[n_train:, :] trainy, testy = y[:n_train], y[n_train:] return trainX, trainy, testX, testy def evaluate_model(n_nodes, trainX, trainy, testX, testy): # 添加输出和类别变量 n_input, n_classes = trainX.shape[1], testy.shape[1] # 定义模型 model = Sequential() model.add(Dense(n_nodes, input_dim=n_input, activation='relu', kernel_initializer='he_uniform')) model.add(Dense(n_classes, activation='softmax')) # 编译模型 opt = SGD(lr=0.01, momentum=0.9) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) # 训练模型 history = model.fit(trainX, trainy, epochs=100, verbose=0) # 评估模型 _, test_acc = model.evaluate(testX, testy, verbose=0) return history, test_acc # 准备数据 trainX, trainy, testX, testy = create_dataset() # 节点数评估列表 num_nodes = [1, 2, 3, 4, 5, 6, 7] for n_nodes in num_nodes: history, result = evaluate_model(n_nodes, trainX, trainy, testX, testy) print('nodes=%d: %.3f' % (n_nodes, result)) plt.plot(history.history['loss'], label=str(n_nodes)) plt.legend() plt.show() # + # study of mlp learning curves given different number of layers for multi-class classification from sklearn.datasets import make_blobs from keras.models import Sequential from keras.layers import Dense from 
keras.optimizers import SGD from keras.utils import to_categorical import matplotlib.pyplot as plt plt.rcParams['figure.dpi'] = 300 plt.rcParams['figure.autolayout'] = True plt.style.use('science') def create_dataset(): # 生成训练样本 X, y = make_blobs(n_samples=1000, centers=20, n_features=100, cluster_std=2, random_state=2) # one-hot编码 y = to_categorical(y) # 划分 train&test n_train = 500 trainX, testX = X[:n_train, :], X[n_train:, :] trainy, testy = y[:n_train], y[n_train:] return trainX, trainy, testX, testy def evaluate_model(n_layers, trainX, trainy, testX, testy): # 添加输出和类别变量 n_input, n_classes = trainX.shape[1], testy.shape[1] # 定义模型 model = Sequential() model.add(Dense(10, input_dim=n_input, activation='relu', kernel_initializer='he_uniform')) for _ in range(1, n_layers): model.add(Dense(10, activation='relu', kernel_initializer='he_uniform')) model.add(Dense(n_classes, activation='softmax')) # 编译模型 opt = SGD(lr=0.01, momentum=0.9) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) # 训练模型 history = model.fit(trainX, trainy, epochs=100, verbose=0) # 评估模型 _, test_acc = model.evaluate(testX, testy, verbose=0) return history, test_acc trainX, trainy, testX, testy = create_dataset() all_history = [] # 设置评估的层数 num_layers = [1, 2, 3, 4, 5] for n_layers in num_layers: history, result = evaluate_model(n_layers, trainX, trainy, testX, testy) print('layers=%d: %.3f' % (n_layers, result)) plt.plot(history.history['loss'], label=str(n_layers)) plt.legend() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Remise à niveau Python # # Toutes les séances de notre module seront présentées dans des documents au format iPython notebook, associé à l'extension `.ipynb`. # Ce format permet de joindre dans le même document: # # - du texte dans un format à balises simple, le format [markdown](http://markdowntutorial.com/), # - du code à exécuter en [Python](https://docs.python.org/3.7/) (ici la version 3.7), # - le résultat des commandes Python à la suite des cellules, que ce soit un résultat textuel ou une image. # # ### À noter # Les cellules en jaune (sur le modèle suivant) sont des exercices à faire! #
    Exercise: Do the exercises
    # ## Quelques éléments de syntaxe Python # # Python est un langage typé dynamiquement, on n’est donc pas obligé de déclarer le type des variables. # On pourra par exemple exécuter le programme suivant en plaçant le curseur dans la cellule et en tapant `Maj+Enter`. a = 3 b = 4.0 c = "a * b : " print(c + str(a * b)) # On préférera cette notation print("a * b : {}".format(a * b)) # Les opérateurs habituels sur les entiers et les flottants sont disponibles. # La boucle `for` permet de parcourir des objets itérables comme des listes, des tuples etc. # La fonction `range` permet de générer une séquence d’entiers : for x in range(2, 7): # le 7 est exclu print(x) #
    # Important: note that indentation is significant in Python; it is what defines code blocks. #
    # La conditionnelle est définie classiquement : if a == 3: print("a = 3") else: print("a = something else...") # Le mot clé pour définir une fonction est `def`. # La documentation d'une fonction peut-être écrite au format `__doctest__` en utilisant les triples guillemets. # Les exemples d'utilisation indiqués dans la documentation `__doctest__` servent de test unitaire. def fact(n): """Renvoie la factorielle de n. >>> fact(6) 720 >>> [fact(n) for n in range(6)] [1, 1, 2, 6, 24, 120] Si n est négatif, une exception de type ValueError est levée. >>> fact(-1) Traceback (most recent call last): ... ValueError: n doit être un entier positif """ res = 1 if n < 0: raise ValueError("n doit être un entier positif") while n > 0: res = n * res n = n - 1 return res fact(6) # Il est important de bien remplir la documentation, on pourrait en avoir besoin par la suite: help(fact) # La fonction `doctest.testmod()` permet de tester toutes les fonctions d'un module donné (ici `__main__`): import doctest doctest.testmod() # Le format iPython notebook permet également de consulter la documentation dans une pop-up à part: # ?fact # **Note :** # # Si vous rencontrez des difficultés de syntaxe Python, vous pouvez consulter: # # - la documentation officielle https://docs.python.org/3.7/, # - ce [site](https://www.google.fr) bien pratique, ou mieux, [celui-ci](https://duckduckgo.com/). # ## Structures de données # # Python propose un certain nombres de structures de données de base munies de facilités syntaxiques et algorithmiques. # #
    # Rule no. 1: The whole art of Python programming lies in choosing (and adapting) the right data structures. #
    # # ### Chaînes de caractères # # Les chaînes de caractères sont écrites à l’aide de guillemets simples, doubles ou retriplées (multi-lignes). "hel" + 'lo' # + a = """Hello - dear students; - dear all""" print (a) # - a[0] a[2:4] a[-1] a[-8:] a = "heLLo" (a + ' ') * 2 len(a) a.capitalize() " hello ".strip() "hello y’all".split() # ### Listes a = [1, "deux", 3.0] a[0] len(a) a.append(1) a a.count(1) 3 in a a[1] = 2 a a.sort() a a[1:3] # On peut utiliser le mécanisme de *compréhension de liste* pour construire une liste facilement. # Par exemple, pour # construire une liste contenant les carrés des valeurs comprises entre 1 et 5 : [i for i in range(5)] # similar to list(i for i in range(5)) # similar to list(range(5)) [str(i) for i in range(5)] [i**2 for i in range(5)] [i**2 for i in range(5) if i&1 == 0] # smarter than i%2 == 0 [x.upper() for x in "hello"] # even with strings # Rappelons également le constructeur `sorted` qui crée une liste triée à partir d'une structure itérable ou d'un générateur: sorted(i**2 for i in range(-5, 5)) # ### Ensembles a = set() a.add(1) a.add(2) a.add(3) a.add(1) a a.intersection({2, 3, 4}) # set theory a.isdisjoint({4, 5}) [i == 2 for i in a] # list comprehension set([1, 2, 4, 1]) # conversion # ### Dictionnaires d = dict() d[1] = ('Ain', "Bourg-en-Bresse") d[2] = ('Aisne', "Laon") d d[1] d['1'] d d.keys() d.values() d.items() e = {} for key, value in d.items(): e[("%02d" % key)] = value e # No order in dictionary in Python 3.5 (but ok in 3.6) {("%02d" % key): value for key, value in d.items()} #
    # Exercise: Build and document a function that computes the list of prime numbers less than or equal to n. (An illustrative sketch follows the solution-loading cell below.) #
    # write your code here! # # %load solutions/prime.py ", ".join(str(i) for i in prime(100)) # #
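# One possible implementation, shown here only as an illustrative sketch (the actual course solution is the `solutions/prime.py` file loaded above, whose contents are not reproduced here). It uses simple trial division and a doctest in the style introduced earlier:
# +
def prime(n):
    """Return the list of prime numbers less than or equal to n.

    >>> prime(10)
    [2, 3, 5, 7]
    """
    primes = []
    for candidate in range(2, n + 1):
        # keep candidate if no smaller prime divides it (trial division)
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes
# -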
    # Exercise: Write a program that reads the file 00-lorem-ipsum.txt and counts the number of occurrences of each word in it. (A sketch based on collections.Counter is given below.) #
    # # You can start from the following template: # ```python # with open("data/00-lorem-ipsum.txt") as f: # print(f.readlines()) # ``` # write your code here! # # %load solutions/lorem_ipsum.py # # %load solutions/lorem_ipsum_alt.py # # %load solutions/lorem_ipsum_alt2.py # #
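# An illustrative sketch of one way to do it (the graded solutions are the `solutions/lorem_ipsum*.py` files loaded above and are not reproduced here); it assumes the `data/00-lorem-ipsum.txt` path from the template and uses `collections.Counter`:
# +
from collections import Counter

with open("data/00-lorem-ipsum.txt") as f:
    # normalise case before splitting into words
    text = f.read().lower()

# strip basic punctuation so "word," and "word" are counted together
for char in ",.;:!?":
    text = text.replace(char, " ")

word_counts = Counter(text.split())
word_counts.most_common(10)
# -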
    # Reminder: The whole art of Python programming lies in choosing the right data structures. (See, for example, the page on collections.) #
    # # # # ## La bibliothèque `numpy` de Python # # `numpy` est une extension du langage Python qui permet de manipuler des tableaux multi-dimensionnels et/ou des matrices. # Elle est souvent utilisée conjointement à l'extension `scipy` qui contient des outils relatifs: # # - aux intégrateurs numériques (`scipy.integrate`); # - à l'algèbre linéaire (`scipy.linalg`); # - etc. # # Des fonctionnalités simples illustrant le fonctionnement de `numpy` sont présentées ci-dessous. # En cas de souci, n'hésitez pas à vous référer à: # # - la documentation de `numpy` [ici](https://docs.scipy.org/doc/numpy/reference/); # - la documentation de `scipy` [ici](https://docs.scipy.org/doc/scipy/reference/) pour ce que vous ne trouvez pas dans `numpy`. import numpy as np # ### Types des données embarquées # # On peut créer un tableau `numpy` à partir d'une structure itérable (tableau, tuple, liste) Python. # La puissance de `numpy` vient du fait que tous les éléments du tableau sont forcés au même type (le moins disant). # + # Définition d'un tableau à partir d'une liste tableau = [2, 7.3, 4] print('>>> Liste Python: type %s' % type(tableau)) for l in tableau: print('{%s, %s}' % (l, type(l)), end=" ") print() print() # Création d'un tableau numpy tableau = np.array(tableau) print('>>> Tableau numpy: type %s' % type(tableau)) for l in tableau: print('{%s, %s}' % (l, type(l)), end=" ") print() print('On retrouve alors le type de chaque élément dans dtype: %s' % tableau.dtype) # - # ### Performance # # On reproche souvent à Python d'être lent à l'exécution. C'est dû à de nombreux paramètres, notamment la flexibilité du langage, les nombreuses vérifications faites à notre insu (Python ne présume de rien sur vos données), et surtout au **typage dynamique**. # Avec `numpy`, on connaît désormais une fois pour toute le type de chaque élément du tableau; de plus les opérations mathématiques sur ces tableaux sont alors codées en C (rapide!) # # Observez plutôt: tableau = [i for i in range(1, 10000000)] array = np.array(tableau) # %timeit double = [x * 2 for x in tableau] # %timeit double = array * 2 # ### Vues sur des sous-ensembles du tableau # # Il est possible avec `numpy` de travailler sur des vues d'un tableau à $n$ dimensions qu'on aura construit. # On emploie ici le mot **vue** parce qu'une modification des données dans la vue modifie les données dans le tableau d'origine. # # Observons plutôt: tableau = np.array([[i+2*j for i in range(5)] for j in range(4)]) print(tableau) # On affiche les lignes d'indices 0 à 1 (< 2), colonnes d'indices 2 à 3 (< 4) sub = tableau[0:2, 2:4] print(sub) # L'absence d'indice signifie "début" ou "fin" sub = tableau[:3, 2:] print(sub) # On modifie sub sub *= 0 print(sub) #
    # Warning: this is why we spoke of a view! #
    print(tableau) # ### Opérations matricielles # # `numpy` donne accès aux opérations matricielles de base. # + a = np.array([[4,6,7,6]]) b = np.array([[i+j for i in range(5)] for j in range(4)]) print(a.shape, a, sep="\n") print() print(b.shape, b, sep="\n") # - # Produit matriciel (ou vectoriel) print(np.dot (a, b)) #
    # Warning: unlike Matlab, the arithmetic operators +, -, * act element-wise. #
    # # Pour bien comprendre la différence: # + import numpy.linalg a = np.array([[abs(i-j) for i in range(5)] for j in range(5)]) inv_a = numpy.linalg.inv(a) # L'inverse print(a) print(inv_a) # - print(inv_a * a) print("\nDiantre!!") print(np.dot(inv_a, a)) print("\nC'est si facile de se faire avoir...") #
    # Note: since Python 3.5, the @ operator can be used for matrix multiplication. #
    print(inv_a @ a) # # La bibliothèque `matplotlib` de Python # # `matplotlib` propose un ensemble de commandes qui permettent d'afficher des données de manière graphique, d'afficher des lignes, de remplir des zones avec des couleurs, d'ajouter du texte, etc. # # L'instruction `%matplotlib inline` avant l'import permet de rediriger la sortie graphique vers le notebook. # %matplotlib inline import matplotlib.pyplot as plt # L'instruction `plot` prend une série de points en abscisses et une série de points en ordonnées: plt.plot([1, 2, 3, 4], [1, 4, 9, 16]) # Il y a un style par défaut qui est choisi de manière automatique, mais il est possible de sélectionner: # # - les couleurs; # - le style des points de données; # - la longueur des axes; # - etc. plt.plot([1, 2, 3, 4], [1, 4, 9, 16], 'ro-') plt.xlim(0, 6) plt.ylim(0, 20) plt.xlabel("Temps") plt.ylabel("Argent") # Il est recommandé d'utiliser `matplotlib` avec des tableaux `numpy`. # + # échantillon à 200ms d'intervalle t = np.arange(0., 5., 0.2) # red dashes, blue squares and green triangles plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^') # - # Enfin il est possible d'afficher plusieurs graphes côte à côte. # Notez que l'on peut également gérer la taille de la figure (bitmap) produite. # + fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(10, 10)) # Vous pouvez choisir des palettes de couleurs « jolies » avec des sites comme celui-ci : # http://paletton.com/#uid=7000u0kllllaFw0g0qFqFg0w0aF ax[0,0].plot(t, np.sin(t), '#aa3939') ax[0,1].plot(t, np.cos(t), '#aa6c39') ax[1,0].plot(t, np.tan(t), '#226666') ax[1,1].plot(t, np.sqrt(t), '#2d882d') # - # Un bon réflexe semble être de commencer tous les plots par: # # ```python # fig, ax = plt.subplots() # ``` # #
    # Exercise: Plot the graph of the function $t \mapsto e^{-t} \cdot \cos(2\,\pi\,t)$ for $t\in[0,5]$. (A possible sketch is given below.) #
    #
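# A possible sketch for the exercise above (it reuses the `numpy` and `matplotlib` imports from earlier in this notebook and the recommended `fig, ax = plt.subplots()` pattern):
# +
t = np.linspace(0, 5, 500)

fig, ax = plt.subplots()
ax.plot(t, np.exp(-t) * np.cos(2 * np.pi * t))
ax.set_xlabel("t")
ax.set_ylabel(r"$e^{-t} \cos(2\pi t)$")
# -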
    # Exercise: Starting from polar coordinates, produce the $(x,y)$ coordinates for the function $r=\sin(5\,\theta)$, then plot them. #
    # # Consigne : n'utiliser que des tableaux et des fonctions `numpy` pour produire les données à tracer. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="MpyHvucW9uFF" # ## PROBLEM STATEMENT 1. # + colab={"base_uri": "https://localhost:8080/"} id="yRfaHwXY9tN4" outputId="989a9c60-ea3c-4120-c515-db72e5c4e6da" num=[] x=0 for x in range(1,11): while True: y=float(input("Enter a number:")) if y<5: break else: print("Invalid, please put a number less than 5") continue num.append(y) x+=1 x=0 summation=0 print("The numbers you entered are:",num) print("") for x in num: if x<5: summation=summation+x print("The sum of numbers that are less than 5 is:",summation) # + [markdown] id="oeeADn9r90Xg" # ## PROBLEM STATEMENT 2. # + colab={"base_uri": "https://localhost:8080/"} id="UzqzXCAx-Wo9" outputId="3f9dd0e0-df4c-428f-adde-ac85f5341b64" num = 1; data = {}; while (num<=5): c = str(num); data[num] = input("Enter a number:") print("The value for number " +c+ " is " +data[num]) num+=1 if(num==6): sum= int(data[1])+int(data[5]) print("The sum of the first and last number is:"+str(sum)) # + [markdown] id="xT7X1gn892qW" # ## PROBLEM STATEMENT 3. # + colab={"base_uri": "https://localhost:8080/"} id="xUQgHnjT_D1M" outputId="0cdacfeb-98b0-465b-e812-8dd5508acef7" grade = int(input("Enter your grade:")) if grade>=90: print("A") elif grade>=80: print("B") elif grade>=70: print("C") elif grade>=60: print("D") else: print("F") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (moneyball) # language: python # name: moneyball # --- # # Vectorizing Raw Data: Count Vectorization # ### Count vectorization # # Creates a document-term matrix where the entry of each cell will be a count of the number of times that word occurred in that document. # ### Read in text # + import pandas as pd import re import string import nltk pd.set_option('display.max_colwidth', 100) stopwords = nltk.corpus.stopwords.words('english') ps = nltk.PorterStemmer() # faster, but not as good data = pd.read_csv("SMSSpamCollection.tsv", sep='\t') data.columns = ['label', 'body_text'] # - data.shape # ### Create function to remove punctuation, tokenize, remove stopwords, and stem def clean_text(text): text = "".join([word.lower() for word in text if word not in string.punctuation]) tokens = re.split('\W+', text) text = [ps.stem(word) for word in tokens if word not in stopwords] return text # ### Apply CountVectorizer # + from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer(analyzer=clean_text) X_counts = count_vect.fit_transform(data['body_text']) print(X_counts.shape) print(count_vect.get_feature_names()) # - # ### Apply CountVectorizer to smaller sample # + data_sample = data[0:20] count_vect_sample = CountVectorizer(analyzer=clean_text) X_counts_sample = count_vect_sample.fit_transform(data_sample['body_text']) print(X_counts_sample.shape) print(count_vect_sample.get_feature_names()) # - # ### Vectorizers output sparse matrices # # _**Sparse Matrix**: A matrix in which most entries are 0. 
In the interest of efficient storage, a sparse matrix will be stored by only storing the locations of the non-zero elements._ X_counts_sample X_counts_df = pd.DataFrame(X_counts_sample.toarray()) X_counts_df X_counts_df.columns = count_vect_sample.get_feature_names() X_counts_df -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- -- ### The `Selection` Widgets -- -- + Dropdown -- + RadioButtons -- + ToggleButtons -- + Select -- + SelectMultiple -- These widgets can be used to choose between multiple alternatives. The `SelectMultiple` widget allows multiple selections, whereas `Dropdown`, `RadioButtons`, `ToggleButtons`, and `Select` only allow one selection. {-# LANGUAGE OverloadedStrings #-} import IHaskell.Display.Widgets -- + -- Allows single selection tgbs <- mkToggleButtons -- Allows multiple selections msel <- mkSelectMultiple -- + setField msel Description "Functions to show (One or more)" setField msel Options (OptionLabels ["sin", "cos"]) setField tgbs Description "Plot style" setField tgbs Options (OptionLabels ["line", "point"]) -- - -- The cell below requires `Chart` and `Chart-cairo` to be installed. -- + import Graphics.Rendering.Chart.Easy hiding (tan) import Graphics.Rendering.Chart.Backend.Cairo import qualified Data.ByteString as B import Data.Text (pack, unpack) import IHaskell.Display (base64) import Control.Applicative ((<$>)) import Control.Monad (when, forM) import Data.Maybe (fromJust) dset :: [(String, [(Double, Double)])] dset = [("sin", zmap sin r), ("cos", zmap cos r)] where zmap f xs = zip xs (map f xs) r = [0, 0.1 .. 6.3] i <- mkImageWidget setField i Width 500 setField i Height 500 -- Redraw the plot based on values from the widgets refresh = do -- Read values from the widgets funs <- map unpack <$> getField msel SelectedValues sty <- unpack <$> getField tgbs SelectedValue let pts = zip funs (map (fromJust . flip lookup dset) funs) opts = def { _fo_size = (500, 500) } toFile opts ".chart" $ do layout_title .= "Plotting: " ++ unwords funs if sty == "line" then mapM_ (\(s, ps) -> plot (line s [ps])) pts else mapM_ (\(s, ps) -> plot (points s ps)) pts img <- B.readFile ".chart" setField i B64Value (base64 img) -- Add event handlers to make widgets work setField msel SelectionHandler refresh setField tgbs SelectionHandler refresh -- Trigger event to show empty grid initially triggerSelection msel -- - -- Display the widgets msel tgbs i -- The `Dropdown`, `RadioButtons` and `Select` widgets behave just like the `ToggleButtons` widget. They have the same properties, and the same functionality. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ChordNet # Benchmark for musical chord recognition # import module from music_code import chordnet # initialize c = chordnet.ChordNet() # connect to MySQL database, verify credentials c.connect() # run Tkinter app c.main() # human level performance on chord quality recognition c.KPI() # visualize KPI moving average c.display() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="azReToS7wWX3" # #### Goal of the Project # # This project is designed for you to practice and solve the activities that are based on the concepts covered in the following lessons: # # 1. Simple linear regression III - Model Evaluation # # + [markdown] id="fR_SN7K6475D" # ### Problem Statement # # The most important factor for an Insurance Company is to determine what premium charges must be paid by an individual. The charges depend on various factors like age, gender, income, etc. # # Build a model that is capable of predicting the insurance charges a person has to pay depending on his/her age using simple linear regression. Also, evaluate the accuracy of your model by calculating the value of error metrics such as R-squared, MSE, RMSE, and MAE. # # # # + [markdown] id="lZt4yKiJwrUs" # # #### Activity 1: Analysing the Dataset # # - Create a Pandas DataFrame for **Insurance** dataset using the below link. This dataset consists of following columns: # # |Field|Description| # |---:|:---| # |age|Age of primary beneficiary| # |sex|Insurance contractor gender, female or male| # |bmi|Body mass index| # |children|Number of children covered by health insurance/number of dependents| # |region|Beneficiary's residential area in the US, northeast, southeast, southwest, northwest| # |charges|Individual medical costs billed by health insurance| # # # **Dataset Link:** https://s3-student-datasets-bucket.whjr.online/whitehat-ds-datasets/insurance_dataset.csv # # - Print the first five rows of the dataset. Check for null values and treat them accordingly. # # - Create a regression plot with `age` on X-axis and `charges` on Y-axis to identify the relationship between these two attributes. # # # # + id="6U6NaAy4WQgs" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f3998c7d-424a-4e13-e2b3-27c8de02d422" # Import modules import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # Load the dataset # Dataset Link: 'https://s3-student-datasets-bucket.whjr.online/whitehat-ds-datasets/insurance_dataset.csv' df = pd.read_csv('https://s3-student-datasets-bucket.whjr.online/whitehat-ds-datasets/insurance_dataset.csv') # Print first five rows using head() function df.head() # + id="jg7hAMJ4jKC5" colab={"base_uri": "https://localhost:8080/"} outputId="74373fd9-949f-4251-e838-671f5c839dc4" # Check if there are any null values. 
If any column has null values, treat them accordingly df.isnull().sum() # + id="A8RW5WbUuR88" colab={"base_uri": "https://localhost:8080/", "height": 496} outputId="9c124bfe-fc10-481e-f419-6e8cf7ac118e" # Create a regression plot between 'age' and 'charges' plt.figure(figsize = (21, 7)) sns.regplot(df['age'], df['charges'], color = 'red') plt.show() # + [markdown] id="uG9YxYbpjgVG" # --- # + [markdown] id="uDTmlU-Mz0fI" # #### Activity 2: Train-Test Split # # We have to determine the effect of `age` on insurance charges. Thus, `age` is the feature variable and `charges` is the target variable. # # Split the dataset into training set and test set such that the training set contains 67% of the instances and the remaining instances will become the test set. # + id="Ku_loAWZ0LXr" # Split the DataFrame into the training and test sets. from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(df['age'], df['charges'], test_size = 0.33, random_state = 41) # + [markdown] id="hCPg7ClP0Om1" # --- # + [markdown] id="ud8dLfCGjh0E" # #### Activity 3: Model Training # # Implement simple linear regression using `sklearn` module in the following way: # # 1. Reshape the feature and the target variable arrays into two-dimensional arrays by using `reshape(-1, 1)` function of numpy module. # 2. Deploy the model by importing the `LinearRegression` class and create an object of this class. # 3. Call the `fit()` function on the LinearRegression object and print the slope and intercept values of the best fit line. # # + id="Xost35Q1XreI" colab={"base_uri": "https://localhost:8080/"} outputId="ae893bfa-c308-465b-bd49-589772fe41f7" # 1. Create two-dimensional NumPy arrays for the feature and target variables. # Print the shape or dimensions of these reshaped arrays def reshape_(frame): return frame.values.reshape(-1, 1) x_train_reshaped = reshape_(x_train) x_test_reshaped = reshape_(x_test) y_train_reshaped = reshape_(y_train) y_test_reshaped = reshape_(y_test) print(x_train_reshaped.shape) print(x_test_reshaped.shape) print(y_train_reshaped.shape) print(y_test_reshaped.shape) # + id="U9iIV06LXuQP" colab={"base_uri": "https://localhost:8080/"} outputId="d4c34423-c903-40c0-c8dd-bad05e77d7ba" # 2. Deploy linear regression model using the 'sklearn.linear_model' module. from sklearn.linear_model import LinearRegression # Create an object of the 'LinearRegression' class. Anythingyoulike = LinearRegression() # 3. Call the 'fit()' function Anythingyoulike.fit(x_train_reshaped, y_train_reshaped) # Print the slope and intercept values print(f"Slope: {Anythingyoulike.coef_[0][0]}\nIntercept: {Anythingyoulike.intercept_[0]}") # + [markdown] id="cAPgWR45mrCo" # --- # + [markdown] id="CvcLZdremtHY" # #### Activity 4: Model Prediction and Evaluation # # Predict the values for both training and test sets by calling the `predict()` function on the LinearRegression object. Also, calculate the $R^2$, MSE, RMSE and MAE values to evaluate the accuracy of your model. # + id="hc3RPNgsX5-0" colab={"base_uri": "https://localhost:8080/"} outputId="bd08793d-2146-4805-adc8-deaead5fe3d8" # Predict the target variable values for both training set and test set # Call 'r2_score', 'mean_squared_error' & 'mean_absolute_error' functions of the 'sklearn' module. Calculate RMSE value by taking the square root of MSE. 
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error y_train_pred = Anythingyoulike.predict(x_train_reshaped) y_test_pred = Anythingyoulike.predict(x_test_reshaped) # Print these values for both training set and test set print(f"Train Set\n{'-' * 50}") print(f"R-squared: {r2_score(y_train_reshaped, y_train_pred):.3f}") print(f"Mean Squared Error: {mean_squared_error(y_train_reshaped, y_train_pred):.3f}") print(f"Root Mean Squared Error: {np.sqrt(mean_squared_error(y_train_reshaped, y_train_pred)):.3f}") print(f"Mean Absolute Error: {mean_absolute_error(y_train_reshaped, y_train_pred):.3f}") print(f"\n\nTest Set\n{'-' * 50}") print(f"R-squared: {r2_score(y_test_reshaped, y_test_pred):.3f}") print(f"Mean Squared Error: {mean_squared_error(y_test_reshaped, y_test_pred):.3f}") print(f"Root Mean Squared Error: {np.sqrt(mean_squared_error(y_test_reshaped, y_test_pred)):.3f}") print(f"Mean Absolute Error: {mean_absolute_error(y_test_reshaped, y_test_pred):.3f}") # + [markdown] id="Bp0p4IT-Dn_w" # --- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="dlrMB-Qy2taV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} executionInfo={"status": "ok", "timestamp": 1596261217035, "user_tz": 420, "elapsed": 374, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="526f59e2-397b-43cc-cd3d-6f92bf792690" #Declare a float value and store it in a variable. a=10.5 #Check the type and print the id of the same. print(type(a)) print(id(a)) # + id="IL4fVDHg2wbI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} executionInfo={"status": "ok", "timestamp": 1596261529626, "user_tz": 420, "elapsed": 938, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="80ecdcba-142c-42af-cb08-56f3f1d8cbb3" #Arithmatic Operations on float #Take two different float values. #Store them in two different variables. #Do below operations on them:- #Find sum of both numbers a=6.78 b=5.55 sum=a+b print(sum) #Find differce between them difference=a-b print(difference) #Find the product of both numbers. product=a*b print(product) #Find value after dividing first num with second number div=a/b print(div) #Find the remainder after dividing first number with second number modulo=a%b print(modulo) #Find the quotient after dividing first number with second number quotient=a/b print(quotient) #Find the result of first num to the power of second number. power=a**b print(power) # + id="atCFaH6c2492" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1596261588373, "user_tz": 420, "elapsed": 310, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="98786473-98b7-4b56-ad10-86de9dfb3042" #Comparison Operators on float #Take two different float values. #Store them in two different variables. 
#Do below operations on them:- #Compare these two numbers with below operator:- #Greater than, '>' #Smaller than, '<' #Greater than or equal to, '>=' #Less than or equal to, '<=' #Observe their output(return type should be boolean) c=3.67 d=3.72 print(c>d) print(c=d) print(c<=d) # + id="agi3U3863A5i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} executionInfo={"status": "ok", "timestamp": 1596261702493, "user_tz": 420, "elapsed": 424, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="92f81aaa-e2fe-416a-ae04-51e83d0467b7" #Equality Operator #Take two different float values. #Store them in two different variables. #Equuate them using equality operator (==, !=) #Observe the output(return type should be boolean) c1=5.78 c2=7.89 c3=5.78 print(c1==c2) print(c3==c1) print(c1!=c3) # + id="3HLlgaEt3EnJ" colab_type="code" colab={} #Logical operators #Observe the output of below code #Cross check the output manually print(10.20 and 20.30) #both are true and second value taken >Output is 20.3 print(0.0 and 20.30) #First is false so first value taken->Output is 0.0 print(20.30 and 0.0) #Goes to till second and second value is false so second is taken>Output is 0.0 print(0.0 and 0.0) #First is false so first value is taken->Output is 0.0 print(10.20 or 20.30) #First is True so first value is taken>Output is 10.2 print(0.0 or 20.30) #Goes to till second and second is true second value is taken->Output is 20.3 print(20.30 or 0.0) #First is True so first value is taken->Output is 20.3 print(0.0 or 0.0) #Goes to till second and secod is also false and second value is taken>Output is 0.0 print(not 10.20) #-Not of true is false->Output is False print(not 0.0) #Not of false is True>Output is True # + id="7YdyRNtk3UZd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} executionInfo={"status": "ok", "timestamp": 1596261733701, "user_tz": 420, "elapsed": 312, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="fff30d07-c96b-4532-cf68-183d8b5b547e" #What is the output of expression inside print statement. Cross check before running the program. a = 10.20 b - 10.20 print(a is b) #True or False? True 10.20<256 --false print(a is not b) #True or False? False. --true # Why the Id of float values are different when the same value is assigned to two different variables # ex: a = 10.5 b=10.5. but id will be same if I assign the variable having float i.e. a=c then both a anc c's # Id are same # + id="iTvnIMLd3VLW" colab_type="code" colab={} #Bitwise operation is not applicable between instances of float. ## Why the Id of float values are different when the same value is assigned to two different variables ## ex: a = 10.5 b=10.5. but id will be same if I assign the variable having float i.e. a=c then both a anc c's ## Id are same #Object reusability concept is not applicable on float values. 
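#A short demonstration of the point above. Whether two equal float literals end up
#as one object or two is a CPython implementation detail (constant folding and caching),
#so the result of 'a is b' below is not guaranteed either way; 'a is c' is always True
#because assignment simply binds a second name to the same object.
a = 10.5
b = 10.5
c = a
print(id(a), id(b), id(c))
print(a is b)   #implementation-dependent: identity of equal literals is not guaranteed
print(a is c)   #True: c was assigned from a, so both names refer to one object
print(a == b)   #True: '==' compares values, 'is' compares identity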
# + id="29AVJHpx3YUo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} executionInfo={"status": "ok", "timestamp": 1596261791974, "user_tz": 420, "elapsed": 454, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-FXsetuIjTxg/AAAAAAAAAAI/AAAAAAAAAO4/TYiDKhUGRd0/s64/photo.jpg", "userId": "09258938025824762674"}} outputId="97ae1c08-0f43-4062-daa8-88501d0c7ec6" #Membership operation #in, not in are two membership operators and it returns boolean value print('2.7' in 'Python2.7.8') #True print(10.20 in [10,10.20,10+20j,'Python']) #True print(10.20 in (10,10.20,10+20j,'Python')) # True print(20.30 in {1,20.30,30+40j}) # True print(2.3 in {1:100, 2.3:200, 30+40j:300}) # True print(10 in range(20)) # True # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np from scipy import signal import matplotlib.pyplot as plt # %matplotlib auto image = cv2.imread('../data/Lena.png') KSIZE = 11 ALPHA = 2 kernel = cv2.getGaussianKernel(KSIZE, 0) kernel = -ALPHA * kernel @ kernel.T kernel[KSIZE//2, KSIZE//2] += 1 + ALPHA print(kernel.shape, kernel.dtype, kernel.sum()) filtered = cv2.filter2D(image, -1, kernel) plt.figure(figsize=(8,4)) plt.subplot(121) plt.axis('off') plt.title('image') plt.imshow(image[:, :, [2, 1, 0]]) plt.subplot(122) plt.axis('off') plt.title('filtered') plt.imshow(filtered[:, :, [2, 1, 0]]) plt.tight_layout(True) plt.show() cv2.imshow('before', image) cv2.imshow('after', filtered) cv2.waitKey() cv2.destroyAllWindows() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Shadow price and slack exercise pt1 # You are planning what cupcakes a bakery should make. The bakery can make either: # # regular size cupcake: profit = $5 # # a jumbo cupcake twice the regular size: profit = $10 # # There are 2 constraints on oven hours (30) and worker hours (65). This scenario has been modeled in PuLP for you and a solution found. The model status, decision variables values, objective value (i.e. profit), the shadow prices and slack of the constraints have been printed in the shell. # # The sample script is a copy of that code. You will adjust the constraints to see how the optimal solution changes. # # # Increase the 1st constraint to 31. Run the code and see the change in the objective compared to the original solution. # # Increase the 2nd constraint to 80. Run the code and see no change in the objective compared to the original solution. # # Decrease the 2nd constraint to 60. Run the code and see no change in the objective compared to the original solution. # # Decrease the 2nd constraint to 59. Run the code and see a change in the objective value, and the amount of production. 
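# Before adjusting the constraints by hand, recall what the printed shadow prices and slacks predict: relaxing the right-hand side of a binding constraint by one unit should change the optimal objective by roughly that constraint's shadow price, while a constraint with positive slack can move within its slack without changing the objective at all. The cell below is a minimal, self-contained sketch of that check (separate from the exercise code that follows, using the original right-hand sides of 30 oven hours and 65 worker hours); the model and variable names here are illustrative.
# +
from pulp import LpProblem, LpMaximize, LpVariable, value

def solve_bakery(oven_hours, worker_hours):
    # Same bakery as in the exercise: regular cupcakes earn $5, jumbo cupcakes $10
    prob = LpProblem("Bakery_sketch", LpMaximize)
    r = LpVariable("regular", lowBound=0)
    j = LpVariable("jumbo", lowBound=0)
    prob += 5 * r + 10 * j
    prob += 0.5 * r + 1 * j <= oven_hours, "oven"
    prob += 1 * r + 2.5 * j <= worker_hours, "worker"
    prob.solve()
    return prob

base = solve_bakery(30, 65)
relaxed = solve_bakery(31, 65)  # one extra oven hour

oven = base.constraints["oven"]
print("oven shadow price:", oven.pi, "| oven slack:", oven.slack)
print("objective change from one extra oven hour:",
      value(relaxed.objective) - value(base.objective))
# -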
import pandas as pd from pulp import * # + # Define Constraints, Solve, Print Status, Variables, Objective model = LpProblem("Maximize Bakery Profits", LpMaximize) R = LpVariable('Regular_production', lowBound=0, cat='Continuous') J = LpVariable('Jumbo_production', lowBound=0, cat='Continuous') model += 5 * R + 10 * J, "Profit" # Adjust the constraint model += 0.5 * R + 1 * J <= 30 model += 1 * R + 2.5 * J <= 59 # + # Solve Model, Print Status, Variables, Objective, Shadow and Slack model.solve() print("Model Status: {}".format(pulp.LpStatus[model.status])) for v in model.variables(): print(v.name, "=", v.varValue) print("Objective = $", value(model.objective)) # - model.constraints.items() model.constraints.keys() model.constraints.values() o = [{'name':name, 'shadow price':c.pi, 'slack': c.slack} for name, c in model.constraints.items()] print(pd.DataFrame(o)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dependency Grammar in NLTK # (C) 2019-2021 by [](http://damir.cavar.me/) # Based on the [NLTK HOWTO Dependency](https://www.nltk.org/howto/dependency.html). # We load the DependencyGrammar module from NLTK Grammar: from nltk.grammar import DependencyGrammar # We can load different Dependency Grammar parsers from NLTK: from nltk.parse import ( DependencyGraph, ProjectiveDependencyParser, NonprojectiveDependencyParser, ) treebank_data = """Pierre NNP 2 NMOD Vinken NNP 8 SUB , , 2 P 61 CD 5 NMOD years NNS 6 AMOD old JJ 2 NMOD , , 2 P will MD 0 ROOT join VB 8 VC the DT 11 NMOD board NN 9 OBJ as IN 9 VMOD a DT 15 NMOD nonexecutive JJ 15 NMOD director NN 12 PMOD Nov. NNP 9 VMOD 29 CD 16 NMOD . . 9 VMOD """ dg = DependencyGraph(treebank_data) dg.tree().pprint() for head, rel, dep in dg.triples(): print( '({h[0]}, {h[1]}), {r}, ({d[0]}, {d[1]})' .format(h=head, r=rel, d=dep) ) # ### Dependency Version of the Penn Treebank from nltk.corpus import dependency_treebank t = dependency_treebank.parsed_sents()[0] print(t.to_conll(3)) # doctest: +NORMALIZE_WHITESPACE # "Using the output of zpar (like Malt-TAB but with zero-based indexing)": zpar_data = """ Pierre NNP 1 NMOD Vinken NNP 7 SUB , , 1 P 61 CD 4 NMOD years NNS 5 AMOD old JJ 1 NMOD , , 1 P will MD -1 ROOT join VB 7 VC the DT 10 NMOD board NN 8 OBJ as IN 8 VMOD a DT 14 NMOD nonexecutive JJ 14 NMOD director NN 11 PMOD Nov. NNP 8 VMOD 29 CD 15 NMOD . . 
7 P """ zdg = DependencyGraph(zpar_data, zero_based=True) print(zdg.tree()) # ## Projective Dependency Parsing grammar = DependencyGrammar.fromstring(""" 'fell' -> 'price' | 'stock' 'price' -> 'of' 'the' 'of' -> 'stock' 'stock' -> 'the' """) print(grammar) dp = ProjectiveDependencyParser(grammar) for t in sorted(dp.parse(['the', 'price', 'of', 'the', 'stock', 'fell'])): print(t) # ## Non-Projective Dependency Parsing grammar = DependencyGrammar.fromstring(""" 'taught' -> 'play' | 'man' 'man' -> 'the' 'play' -> 'golf' | 'dog' | 'to' 'dog' -> 'his' """) print(grammar) dp = NonprojectiveDependencyParser(grammar) g, = dp.parse(['the', 'man', 'taught', 'his', 'dog', 'to', 'play', 'golf']) print(g.root['word']) print(g) x = dp.parse(['the', 'man', 'taught', 'his', 'dog', 'to', 'play', 'golf']) for i in x: print(i) for _, node in sorted(g.nodes.items()): if node['word'] is not None: print('{address} {word}: {d}'.format(d=node['deps'][''], **node)) print(g.tree()) # **(C) 2021 by [](http://damir.cavar.me/) <<>>** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Problems with Monte Carlo integration # # By default, we convert "continuous" predictions into grid based predictions by using monte carlo integration: evaluate the kernel at random points and average. # # This seems to introduce a large amount of error... The following demonstrates this with artificial, random data. However, I noticed this problem on data from Chicago. # + # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import open_cp.retrohotspot import open_cp.data import open_cp.predictors import open_cp.evaluation # - n = 1000 times = np.random.random(n) * 365 times = times * (np.timedelta64(1, "D") / np.timedelta64(1, "s")) * np.timedelta64(1, "s") times = np.sort(np.datetime64("2016-01-01") + times) points = open_cp.data.TimedPoints(times, np.random.random((2,n)) * 1000) mask = [[False]*100]*100 grid = open_cp.data.MaskedGrid(10, 10, 0, 0, mask) predictor = open_cp.retrohotspot.RetroHotSpot() predictor.data = points predictor.weight = open_cp.retrohotspot.TruncatedGaussian(100, 10) cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = 5 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) # + fig, ax = plt.subplots(ncols=2, figsize=(14,8)) ax[0].pcolor(*pred.mesh_data(), pred.intensity_matrix, cmap="Greys") p = open_cp.evaluation.top_slice_prediction(pred, 0.1) ax[1].pcolor(*p.mesh_data(), p.intensity_matrix, cmap="Greys") # - # Repeat: Form the prediction again, top slice the top 10% of risk again, and compare the selected cells (same as looking at the "mask" of the top sliced prediction. They differ by a lot!) cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = 5 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1) np.sum(p.intensity_matrix.mask ^ p.intensity_matrix.mask) np.sum(p1.intensity_matrix.mask ^ p1.intensity_matrix.mask) np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask) # You might argue that 5 samples is too small... 
# + cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = 25 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) p = open_cp.evaluation.top_slice_prediction(pred, 0.1) cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = 25 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1) # - np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask) # We have added a "reproducible" mode, whereby we don't sample at random points, but rather on a regular sub-grid # + cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = -5 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) p = open_cp.evaluation.top_slice_prediction(pred, 0.1) cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01")) cts_pred.samples = -5 pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid) p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1) # - np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + [markdown] slideshow={"slide_type": "slide"} # # Kryptos 4 solver # # I had an idea that K4 is based on a re-arrangement and then a Vignère cipher substitution. # # This code will create random permutations of the 97 characters in K4 (or will accept a user-created permutation) and will work backwards through the 64th through 74th letters of BERLINCLOCK to see what the key would be # # Plaintext (pt): ...XXXBERLINCLOCKXXXXXXXXXXXXXXXXXXXXXXXXX # # Key (kout): ... 
# # Ciphertext (ct): ...NFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCARXXX # # K1/K2 method = ELYOIECBAQK # # Standard method = SYMLWRQOYBQ # + slideshow={"slide_type": "skip"} import random import sys import time import math k1 = '' plaintext1 = 'BETWEENSUBTLESHADINGANDTHEABSENCEOFLIGHTLIESTHENUANCEOFIQLUSION' posct1 = 0 k2 = '' plaintext2 = 'ITWASTOTALLYINVISIBLEHOWSTHATPOSSIBLE?THEYUSEDTHEEARTHSMAGNETICFIELDXTHEINFORMATIONWASGATHEREDANDTRANSMITTEDUNDERGRUUNDTOANUNKNOWNLOCATIONXDOESLANGLEYKNOWABOUTTHIS?THEYSHOULDITSBURIEDOUTTHERESOMEWHEREXWHOKNOWSTHEEXACTLOCATION?ONLYWWTHISWASHISLASTMESSAGEXTHIRTYEIGHTDEGREESFIFTYSEVENMINUTESSIXPOINTFIVESECONDSNORTHSEVENTYSEVENDEGREESEIGHTMINUTESFORTYFOURSECONDSWESTXLAYERTWO' posct2 = 0 k4 = 'OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR' plaintext4 = 'BERLINCLOCK' posct4 = 63 alph = 'KRYPTOSABCDEFGHIJLMNQUVWXZ' vig = ['ABCDEFGHIJLMNQUVWXZKRYPTOS', 'BCDEFGHIJLMNQUVWXZKRYPTOSA', 'CDEFGHIJLMNQUVWXZKRYPTOSAB', 'DEFGHIJLMNQUVWXZKRYPTOSABC', 'EFGHIJLMNQUVWXZKRYPTOSABCD', 'FGHIJLMNQUVWXZKRYPTOSABCDE', 'GHIJLMNQUVWXZKRYPTOSABCDEF', 'HIJLMNQUVWXZKRYPTOSABCDEFG', 'IJLMNQUVWXZKRYPTOSABCDEFGH', 'JLMNQUVWXZKRYPTOSABCDEFGHI', 'LMNQUVWXZKRYPTOSABCDEFGHIJ', 'MNQUVWXZKRYPTOSABCDEFGHIJL', 'NQUVWXZKRYPTOSABCDEFGHIJLM', 'QUVWXZKRYPTOSABCDEFGHIJLMN', 'UVWXZKRYPTOSABCDEFGHIJLMNQ', 'VWXZKRYPTOSABCDEFGHIJLMNQU', 'WXZKRYPTOSABCDEFGHIJLMNQUV', 'XZKRYPTOSABCDEFGHIJLMNQUVW', 'ZKRYPTOSABCDEFGHIJLMNQUVWX', 'KRYPTOSABCDEFGHIJLMNQUVWXZ', 'RYPTOSABCDEFGHIJLMNQUVWXZK', 'YPTOSABCDEFGHIJLMNQUVWXZKR', 'PTOSABCDEFGHIJLMNQUVWXZKRY', 'TOSABCDEFGHIJLMNQUVWXZKRYP', 'OSABCDEFGHIJLMNQUVWXZKRYPT', 'SABCDEFGHIJLMNQUVWXZKRYPTO'] #Find the number of matches (out of a maximum 11) def findRating(segin, segpt): totalmatch = 0 totallen = len(segin) for z in range(0, totallen): if segin[z] == segpt[z]: totalmatch += 1 return(totalmatch) #Turn k4 into a scrambled string, also return the cyphertext per-scrambled with the eventual BERLINCLOCK letters numbered #k4 #This is k4 = A B C D E F G H I J K L M N O P Q #Index in k4 = 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17... #scramlist #Index scram = 88 3 76 9 74 81 27 22 70 61 12 13 23 6 9 14 89... #scramit (aka scram) #New scram = J C N I L O T S V Z L M R F I N Y... #Will be cipharray #Index in BC = [32,66,6,61,19,95,88,2,76,1,74] #Will be uplower #k4 with up = A B c d e F g H i j k l m n o p q #Will be fftprep #fft 0 or 1 = [1, 1, 0, 0, 0, 1, 0, 1...] sicne 1, 2, 6, and 8 are BERLINCLOCK indicies def capsct(strk4): testnew = strk4 #Create a list from 0 to the length of the input string templist = list(range(0, len(testnew))) random.shuffle(templist) scramit = "" for followit in templist: scramit += testnew[followit] return(scramit,templist) #Now that we have a match, find the placement of the original ciphertext letters before they with placed together as BERLINCLOCK def makefft(k4in, bcarray): uplower = k4in #Find the original 11 indicies of the scrambled BERLINCLOCK cyphertext cipharray = bcarray[63:74] #Loop through the berlinclock indicies and make those 11 characters uppercase in the ciphertext ulout = "" fftprep = [] for ctall in range(0, len(uplower)): flagfound = 0 for loopca in range(0, len(cipharray)): #If this is character 13 in the ciphertext and [13] was found in the BC array, we have a match! 
if ctall == cipharray[loopca]: flagfound = 1 break if flagfound == 1: ulout += uplower[ctall].upper() fftprep += [1] else: ulout += uplower[ctall].lower() fftprep += [0] return(ulout, fftprep) print("Starting at " + time.strftime('%X %x %Z') + "\n") #Only look at ratings of 8/11 or higher ratebest = 8 #PLK test at 7 #ratebest = 7 snipbest = "" wordbest = "" #for i in range(0, 1): for i in range(0, 100000): #scram = k4 (slist,ctten) = capsct(k4) scram = ''.join(slist) #print(scram) kout = "" posct = posct4 groupit = "" #Do a Vigenere translation based on K1 and K2 on the new random (scram) arrangement for j in plaintext4: #print("j= " + j) alphpos = alph.find(j) #print("alphapos= " + str(alphpos)) cttry = scram[posct] #print("cttry= " + cttry) groupit += cttry knew = '' for k in vig: vigtry = k[alphpos] if vigtry == cttry: #print("k= " + k) knew = k[0] break #print(knew) kout += knew posct += 1 #At this point. we have scram and kout #print(kout + ' = ' + scram) #Test for a match of an English word #fin = open('short.txt') fin = open('words.txt') for linein in iter(fin): testin = fin.readline() testin = testin.strip() testin = testin.upper() #testin = "PALIMPSEST" #print(testin) #Only look at dictionary words of at least length 6 or greater if len(testin) > 5: testline = "" testline = testline + testin + testin + testin + testin + testin testline = testline + testin + testin + testin + testin + testin testline = testline + testin + testin + testin + testin + testin #print(testline) testseg = testline[posct4:(posct4+len(kout))] #print(testseg) ratenew = findRating(testseg, kout) if ratenew >= ratebest: #ratebest = ratenew snipbest = kout wordbest = testin (lowerout, arrout) = makefft(k4, ctten) print("") print("Rating: " + str(ratenew) + " out of " + str(len(kout))) print(lowerout) #print(arrout) print("Groupit: " + groupit) print("Segment: " + snipbest) print("Word: " + wordbest) print(scram) print("") #print(ctten) #longerarrout = arrout + [0,0,0] #N = len(longerarrout) #print("Len: " + str(N)) #W = np.fft.fft(longerarrout) #freq = np.fft.fftfreq(N,1) #absW = abs(W) #absW[0] = 0 #idx = absW.argmax(axis=0) #idxval = (np.amax(absW)+1) #max_f = abs(freq[idx]) #myest = int(round(1/max_f)) #print("Period estimate: ", myest) #print("Period estimate: ", (1/max_f)) #print("Strength: ", idxval) #print("") #plt.subplot(211) #plt.scatter([max_f,], [np.abs(W[idx]),], s=100,color='r') #plt.plot(freq[:N/2], abs(W[:N/2])) #plt.xlabel(r"$f$") #plt.subplot(212) #plt.plot(1.0/freq[:N/2], abs(W[:N/2])) #plt.scatter([1/max_f,], [np.abs(W[idx]),], s=100,color='r') #plt.xlabel(r"$1/f$") #plt.xlim(0,20) #plt.show() fin.close() if ((i % 1000 == 0) and (i > 0)): print("Iteration: " + str(i) + " at " + time.strftime('%X %x %Z')) print("Done at " + time.strftime('%X %x %Z')) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import findspark # @uthor findspark.init() # , 8851129962 from pyspark import SparkContext, SparkConf conf = SparkConf().setAppName("remove crutch_wrd") sc = SparkContext(conf=conf) def crutch_wrd(filename): #here a mere assumed file is taken strg=[] #we can do this for many files in one go and get the result f=open(filename,'r') #like if we have a list of names of files we can map the function str=f.readlines() #crutch_wrd in RDD and get resulted list of lines. 
for i in str: # which can be then mapped with function cr to get the desired result i=i.strip('\n') strg=strg+[i] return(strg) f.close() k=crutch_wrd("ab.txt") #input text file in place of k_rdd=sc.parallelize(k) #as k is a list we can convert it #into RDD in order parallize the task def cr(x): c=0 ls=['um','ah','you know','uh','actually','really','literally','suddenly','basically','definitely','very','truly','honestly','absolutely','totally','seriously','obviously','well','maybe','i guess','somehow','i suppose','regardless','nevertheless','for what it’s worth','er','so','um'] x=x.lower for i in ls: if(i in x): c=c+1 x1=x.split(" ") return(c/len(x1)) from operator import add k1=k_rdd.map(cr).reduce(add)# k1 is the degree of crutch #words used in whole text # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="-1n9MiopHRRf" colab_type="text" # Lambda School Data Science # # *Unit 2, Sprint 1, Module 3* # # --- # + [markdown] colab_type="text" id="7IXUfiQ2UKj6" # # Ridge Regression # # ## Assignment # # We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices. # # But not just for condos in Tribeca... # # Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`). # # Use a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.** # # The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal. # # - [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test. # - [ ] Do one-hot encoding of categorical features. # - [ ] Do feature selection with `SelectKBest`. # - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). # - [ ] Fit a ridge regression model with multiple features. # - [ ] Get mean absolute error for the test set. # - [ ] As always, commit your notebook to your fork of the GitHub repo. # # # ## Stretch Goals # - [ ] Add your own stretch goal(s) ! # - [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥 # - [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html). # - [ ] Learn more about feature selection: # - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) # - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) # - [mlxtend](http://rasbt.github.io/mlxtend/) library # - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) # - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson. 
# - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients. # - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way. # - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). # + colab_type="code" id="o9eSnDYhUGD7" colab={} # %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # + colab_type="code" id="QJBD4ruICm1m" colab={} import pandas as pd import pandas_profiling # Read New York City property sales data df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv') # Change column names: replace spaces with underscores df.columns = [col.replace(' ', '_') for col in df] # SALE_PRICE was read as strings. # Remove symbols, convert to integer df['SALE_PRICE'] = ( df['SALE_PRICE'] .str.replace('$','') .str.replace('-','') .str.replace(',','') .astype(int) ) # + id="m31CM-x6HRRn" colab_type="code" colab={} # BOROUGH is a numeric column, but arguably should be a categorical feature, # so convert it from a number to a string df['BOROUGH'] = df['BOROUGH'].astype(str) # + id="5zov_VXaHRRp" colab_type="code" colab={} # Reduce cardinality for NEIGHBORHOOD feature # Get a list of the top 10 neighborhoods top10 = df['NEIGHBORHOOD'].value_counts()[:10].index # At locations where the neighborhood is NOT in the top 10, # replace the neighborhood with 'OTHER' df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' # + id="elG1IGkBHRRr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 434} outputId="15ee5119-bcb3-49fa-e565-711b29791c34" df.head() # + id="qm0W69YpSK26" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="372b7966-b772-4fb1-ba03-f15e420a6da5" df['SALE_PRICE'].value_counts() # + id="_NWtCt8WSbc-" colab_type="code" colab={} condition = (df['SALE_PRICE'] > 100000) & (df['SALE_PRICE'] < 2000000) # + id="TFREPgloVhRE" colab_type="code" colab={} df = df.loc[condition] # + id="dyEFJAywVnwe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="9b963c4e-907c-4fdc-961a-69e7ea61190e" df['SALE_PRICE'].value_counts() # + id="nHUSdpOQWRS7" colab_type="code" colab={} OFD = df[df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS'] # + id="rdigZAhyMbxV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 640} outputId="98f7db33-9b30-4d88-edf9-d90485e63b6e" OFD # + id="W7NaJmn8TvVa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d06126c6-885b-4088-d79a-ac2c41dc4960" OFD.shape # + id="jvFpf9xbHoyK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="d8bcf77a-5cd2-41da-f64b-1f15bb8b4170" OFD.dtypes # + id="d_ilLoLDMViT" colab_type="code" 
colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="8a05f9ac-6da9-4b3c-ccb1-fed7fbbe461b" # Create cutoff day OFD['SALE_DATE'] = pd.to_datetime(OFD['SALE_DATE'], infer_datetime_format = True) cutoff = pd.to_datetime('04/01/2019') # + id="W0vMvYfWNNK2" colab_type="code" colab={} #Train and test train = OFD[OFD['SALE_DATE'] < cutoff] test = OFD[OFD['SALE_DATE'] >= cutoff] # + id="ncfKfqcDOC1T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="e8b859bd-f5c5-4a4b-a309-3c5eb5b62e93" train.head(), train.tail() # + id="KH7TuNDJOO7B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="16b1fc9d-7baa-4be5-c969-28098bc8bef9" test.head(), test.tail() # + id="pyHWzNeKOphO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c90858f8-21de-4ae5-965f-96b643de1774" print(train.shape, test.shape) # + id="4HRsws7AOtak" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="e63aa88e-e88d-4937-cb2b-e5752c878b7d" # Check for unique values of object columns train.select_dtypes(exclude = 'number').describe().T # + id="MpCxXeMtNl5x" colab_type="code" colab={} # Create features and target for one-hot coding target = 'SALE_PRICE' high_cardinality = ['ADDRESS', 'LAND_SQUARE_FEET', 'LAND_SQUARE_FEET', 'SALE_DATE'] NaN = ['EASE-MENT', 'APARTMENT_NUMBER'] features = train.columns.drop([target] + high_cardinality + NaN) X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # + id="z0ditX0nP4dz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="70d5ee41-d8c2-4830-f875-34011c237e88" print(X_train.shape) X_train.head() # + id="CcKRdSzbP_I1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="a5e57ac7-56d0-4548-ca2c-df0a0a60dedb" print(X_test.shape) X_test.head() # + id="EOFzjtMsQECw" colab_type="code" colab={} # One-hot encoder import category_encoders as ce encoder = ce.OneHotEncoder(use_cat_names = True) X_train = encoder.fit_transform(X_train) X_test = encoder.transform(X_test) # + id="acV_VPiOQoUg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 249} outputId="3984d3c5-e61a-4654-b939-7018775d7540" print(X_train.shape) X_train.head() # + id="1iIdCE7WQtlO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 249} outputId="20d88c7d-9768-4f69-8063-a4166afa2a6f" print(X_test.shape) X_test.head() # + id="r5j3gFRPSOGC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="4b844b05-fa15-40e4-fc99-b1eb791bc388" # Perform feature selection with SelectKBest from sklearn.feature_selection import f_regression, SelectKBest selector = SelectKBest(score_func = f_regression, k = 20) X_train_selected = selector.fit_transform(X_train, y_train) X_test_selected = selector.transform(X_test) # + id="olDyggn-TZRv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a6e965ee-ca9c-49a9-a26b-6e62a9cb02f2" print(X_train_selected.shape, X_test_selected.shape) # + id="MMGqtbyOTih8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 874} outputId="66da9f8e-628a-4b5e-9a02-8ff1cf920370" # Which features were selected? 
all_names = X_train.columns selected_mask = selector.get_support() selected_names = all_names[selected_mask] unselected_names = all_names[~selected_mask] print('Selected Features') for name in selected_names: print(name) print('') print('Features Not Selected:') for name in unselected_names: print(name) # + id="Cibuz-t5UyKm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="d3e43f95-2d72-4750-97f5-43690b895af1" # Scaling features from sklearn.linear_model import LinearRegression as LR from sklearn.metrics import mean_absolute_error as MAE for k in range(1, len(X_train.columns) + 1): print(f'{k} features:') selector = SelectKBest(score_func = f_regression, k = k) X_train_selected = selector.fit_transform(X_train, y_train) X_test_selected = selector.transform(X_test) model = LR() model.fit(X_train_selected, y_train) y_pred = model.predict(X_test_selected) mae = MAE(y_test, y_pred) print(f'Test MAE: ${mae:,.0f} \n') # + id="dx0Ms_oaWsFL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="9f257cdb-0d41-419f-d8b9-dee846361cec" # Fit Ridge Regression and find MAE for test set # %matplotlib inline from IPython.display import display, HTML from ipywidgets import interact import matplotlib.pyplot as plt from sklearn.linear_model import Ridge from sklearn.preprocessing import StandardScaler for alpha in [10**1, 10**2, 10**3, 10**4, 10**5, 10**6]: # Scale data before doing Ridge Regression scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) # Fit Ridge Regression model display(HTML(f'Ridge Regression, with alpha = {alpha}')) model = Ridge(alpha = alpha) model.fit(X_train_scaled, y_train) y_pred = model.predict(X_test_scaled) # Get Test MAE mae = MAE(y_test, y_pred) display(HTML(f'Test Mean Absolute Error: ${mae:,.0f}')) # Plot coefficients # Plot coefficients coefficients = pd.Series(model.coef_, X_train.columns) plt.figure(figsize=(16,8)) coefficients.sort_values().plot.barh(color='grey') plt.xlim(-1000,1000) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 트리 순회하기 # 문제 링크: https://www.acmicpc.net/problem/1991 # # 일단 문제를 열심히 읽었는데 문제는 어렵지 않다. 기본적인 2진트리 순회니까. # 문제는 입력부를 가지고 트리를 구성하는게 더 어려운 듯한 느낌이 ㅋ. # # > 첫째 줄에는 이진 트리의 노드의 개수 N(1≤N≤26)이 주어진다. # > 둘째 줄부터 N개의 줄에 걸쳐 각 노드와 그의 왼쪽 자식 노드, 오른쪽 자식 노드가 주어진다. # > 노드의 이름은 A부터 차례대로 영문자 대문자로 매겨지며, 항상 A가 루트 노드가 된다. 자식 노드가 없는 경우에는 .으로 표현된다. # # 먼저 트리부터 만들자. # 그리고 트리를 순회해서 노드를 찾는 메소드도 만들어야 한다. # BST가 아니므로 BST search를 쓸 수 없음에 주의! # 값을 찾기 위해 중위순회의 변형으로 구현을 해 보았다. 
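# (Design note.) Because node names are unique capital letters and N is at most 26, an
# alternative to the traversal-based find below is to keep a dict from name to node, which
# makes each insert an O(1) lookup. This is only a sketch of that alternative, not the
# implementation the notebook uses; `SimpleNode`, `nodes`, and `insert_via_dict` are
# hypothetical names.

# +
class SimpleNode:
    def __init__(self, value):
        self.value = value
        self.left = self.right = None

nodes = {'A': SimpleNode('A')}  # 'A' is always the root per the problem statement

def insert_via_dict(value, lvalue, rvalue):
    node = nodes[value]  # O(1) lookup instead of walking the tree
    if lvalue != '.':
        node.left = nodes.setdefault(lvalue, SimpleNode(lvalue))
    if rvalue != '.':
        node.right = nodes.setdefault(rvalue, SimpleNode(rvalue))

insert_via_dict('A', 'B', 'C')
insert_via_dict('B', 'D', '.')
print(nodes['D'].value)  # D
# -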
# + class TreeNode: def __init__(self, value): self.value = value self.left = self.right = None # 주어진 문제의 입력을 처리하기 위한 함수 def insert(self, value, lvalue, rvalue): # it always returns a node has value node = find(self, value) if lvalue != '.': node.left = TreeNode(lvalue) if rvalue != '.': node.right = TreeNode(rvalue) def find(node, value): if node is None: return elif node.value == value: return node else: left = find(node.left, value) if left is not None: return left right = find(node.right, value) return right def preorder(node, array): if node is not None: array.append(node.value) preorder(node.left, array) preorder(node.right, array) def inorder(node, array): if node is not None: inorder(node.left, array) array.append(node.value) inorder(node.right, array) def postorder(node, array): if node is not None: postorder(node.left, array) postorder(node.right, array) array.append(node.value) root = TreeNode('A') root.insert('A', 'B', 'C') root.insert('B', 'D', '.') print(find(root, 'A').value) print(find(root, 'B').value) print(find(root, 'C').value) print(find(root, 'D').value) # print(find(root, 'E').value) error # - # 이제 주어진 문제의 입력을 처리해 보자. # + input_arr='''7 A B C B D . C E F E . . F . G D . . G . .'''.split('\n') input_len = input_arr[0] root = TreeNode('A') for line in input_arr[1:]: arr = line.split() root.insert(arr[0], arr[1], arr[2]) pre = [] in_arr = [] post = [] preorder(root, pre) inorder(root, in_arr) postorder(root, post) print(''.join(pre)) print(''.join(in_arr)) print(''.join(post)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.02177, "end_time": "2022-03-21T00:29:43.359044", "exception": false, "start_time": "2022-03-21T00:29:43.337274", "status": "completed"} tags=[] # # # # Example: Train SimCLR on CIFAR10 # # In this tutorial, we will train a SimCLR model using lightly. The model, # augmentations and training procedure is from # `A Simple Framework for Contrastive Learning of Visual Representations `_. # # The paper explores a rather simple training procedure for contrastive learning. # # - # !nvidia-smi # + [markdown] papermill={"duration": 0.020242, "end_time": "2022-03-21T00:29:43.401855", "exception": false, "start_time": "2022-03-21T00:29:43.381613", "status": "completed"} tags=[] # ## Imports # # Import the Python frameworks we need for this tutorial. # # # + papermill={"duration": 6.108637, "end_time": "2022-03-21T00:30:02.386483", "exception": false, "start_time": "2022-03-21T00:29:56.277846", "status": "completed"} tags=[] import torch import torch.nn as nn import torchvision import pytorch_lightning as pl import lightly from lightly.data import LightlyDataset from torchvision.datasets import CIFAR10 from torch.utils.data import Subset import sys sys.path.append('../src') from utils import get_classes, generate_embeddings, custom_collate_fn from my_resnet import resnet20 # %matplotlib inline # + [markdown] papermill={"duration": 0.02944, "end_time": "2022-03-21T00:30:02.445604", "exception": false, "start_time": "2022-03-21T00:30:02.416164", "status": "completed"} tags=[] # ## Configuration # # We set some configuration parameters for our experiment. # Feel free to change them and analyze the effect. 
# # # + papermill={"duration": 0.044385, "end_time": "2022-03-21T00:30:02.519766", "exception": false, "start_time": "2022-03-21T00:30:02.475381", "status": "completed"} tags=[] num_workers = 2 batch_size = 128 seed = 1 max_epochs = 150 input_size = 32 # image height, assume its always square # Let's set the seed for our experiments data_path = "../data/cifar10" NORMAL_CLASS = 'dog' pl.seed_everything(seed) # + [markdown] papermill={"duration": 0.029471, "end_time": "2022-03-21T00:30:02.644733", "exception": false, "start_time": "2022-03-21T00:30:02.615262", "status": "completed"} tags=[] # ## Setup data augmentations and loaders # # + cifar10_train = CIFAR10(data_path, download=True, train=True) cifar10_test = CIFAR10(data_path, download=False, train=False) classes_ids_train = get_classes(cifar10_train) # long! classes_ids_test = get_classes(cifar10_test) # - # `collate_fn` Additional augmentations such as vertical flip or random rotation (90 degrees). # By adding these augmentations we learn our model invariance regarding the # orientation of the images. E.g. we don't care if a shirt is upside down # but more about the strcture which make it. # + jupyter={"outputs_hidden": false} papermill={"duration": 2.54491, "end_time": "2022-03-21T00:30:05.219039", "exception": false, "start_time": "2022-03-21T00:30:02.674129", "status": "completed"} tags=[] dataset_train = LightlyDataset.from_torch_dataset(Subset(cifar10_train, classes_ids_train[NORMAL_CLASS])) dataset_test = LightlyDataset.from_torch_dataset(Subset(cifar10_train, classes_ids_train[NORMAL_CLASS])) dataloader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=True, collate_fn=custom_collate_fn, drop_last=True, num_workers=num_workers) dataloader_test = torch.utils.data.DataLoader(dataset_test, batch_size=batch_size, shuffle=False, drop_last=False, collate_fn=custom_collate_fn, num_workers=num_workers) # + [markdown] papermill={"duration": 0.02912, "end_time": "2022-03-21T00:30:05.362705", "exception": false, "start_time": "2022-03-21T00:30:05.333585", "status": "completed"} tags=[] # # 1. Create the SimCLR Model # Now we create the SimCLR model. We implement it as a PyTorch Lightning Module # and use custom ResNet-20 backbone provided by . Lightly provides implementations # of the SimCLR projection head and loss function in the `SimCLRProjectionHead` # and `NTXentLoss` classes. We can simply import them and combine the building # blocks in the module. We will import constructed model from our `src`. # # # - from simclr_model import SimCLRModel # + [markdown] papermill={"duration": 0.030096, "end_time": "2022-03-21T00:30:05.493609", "exception": false, "start_time": "2022-03-21T00:30:05.463513", "status": "completed"} tags=[] # We first check if a GPU is available and then train the module # using the PyTorch Lightning Trainer. # # # + jupyter={"outputs_hidden": false} papermill={"duration": 1559.208489, "end_time": "2022-03-21T00:56:04.731487", "exception": false, "start_time": "2022-03-21T00:30:05.522998", "status": "completed"} tags=[] gpus = 1 if torch.cuda.is_available() else 0 device = "cuda" if torch.cuda.is_available() else "cpu" resnet_backbone = resnet20(num_classes=1) model = SimCLRModel(resnet_backbone, img_size = input_size) trainer = pl.Trainer( max_epochs=50, gpus=gpus, progress_bar_refresh_rate=10 ) trainer.fit(model, dataloader_train) # - torch.save(model.state_dict(), '../weights/weights_simclr') # # 2. 
Embeddings extraction # Next we create a helper function to generate embeddings # from our test images using the model we just trained. # Note that only the backbone is needed to generate embeddings, # the projection head is only required for the training. # Make sure to put the model into eval mode for this part! # # model.to(device) model.eval() embeddings_test, filenames_test = generate_embeddings(model, dataloader_test, device) embeddings_train, filenames_train = generate_embeddings(model, dataloader_train, device) print(f'Shape of TEST embeddings {embeddings_test.shape}') print(f'Shape of TRAIN embeddings {embeddings_train.shape}') # # 3. Calculate Hausdorff distance between poin clouds # + from scipy.spatial.distance import directed_hausdorff hausdorff_dist = directed_hausdorff(embeddings_train, embeddings_test)[0] print(f'Hausdorff Dist: {hausdorff_dist:.3f}') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true #

# # Table of Contents
    # # - # # Import the json and pprint libraries import pandas as pd import numpy as np import json import pprint from collections import Counter # + import watermark # %load_ext watermark # %watermark -n -v -iv # - # # Load the JSON data and look for potential issues with open('data/allcandidatenewssample.json') as f: candidatenews = json.load(f) len(candidatenews) pprint.pprint(candidatenews[0:2]) pprint.pprint(candidatenews[0]['source']) # # Check for differences in the structure of the dictionaries Counter([len(item) for item in candidatenews]) pprint.pprint(next(item for item in candidatenews if len(item) < 9)) # checking the usage of next pprint.pprint((item for item in candidatenews if len(item) < 9)) pprint.pprint(next(item for item in candidatenews if len(item) > 9)) pprint.pprint([item for item in candidatenews if len(item) == 2][0:10]) candidatenews = [item for item in candidatenews if len(item) > 2] len(candidatenews) # # Generate counts from the JSON data politico = [item for item in candidatenews if item['source'] == "Politico"] len(politico) pprint.pprint(politico[0:2]) # # Get the source data and confirm that it has the anticipated length sources = [item.get('source') for item in candidatenews] type(sources) len(sources) sources[0:5] pprint.pprint(Counter(sources).most_common(10)) # # Fix any errors in the values in the dictionary for newsdict in candidatenews: newsdict.update((k, 'The Hill') for k, v in newsdict.items() if k == 'source' and v == 'TheHill') # + # Usage of item.get('source') instead of item['source']. # This is handy when there might be missing keys in a dictionary. get returns None when the key # is missing, but we can use an optional second argument to specify a value to return. sources = [item.get('source') for item in candidatenews] # - pprint.pprint(Counter(sources).most_common(10)) # # Create a pandas DataFrame candidatenewsdf = pd.DataFrame(candidatenews) candidatenewsdf.dtypes # # Confirm that we are getting the expected values for source candidatenewsdf.rename(columns={'date': 'storydate'}, inplace=True) candidatenewsdf['storydate'] = candidatenewsdf['storydate'].astype( 'datetime64[ns]') candidatenewsdf.shape candidatenewsdf['source'].value_counts(sort=True).head(10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ## Imports import sys import numpy as np import matplotlib from matplotlib import pyplot as plt import warnings sys.path.append("../.") import handybeam import handybeam.world import handybeam.tx_array_library import handybeam.tx_array import handybeam.visualise import handybeam.samplers.lambert_sampler from handybeam.solver import Solver matplotlib.rcParams['figure.figsize'] = [20,8] # + # Initialise the world world = handybeam.world.World() # Initialise a solver solver = Solver(parent = world) # Add a transmitter array to the world world.tx_array = handybeam.tx_array_library.rectilinear(parent = world) # Instruct the solver to solve for the activation coefficients solver.single_focus_solver(x_focus = 0 , y_focus = 0, z_focus = 100e-3) # Set the size of the sampling grid (along each axis) N = 300 # Set the grid point spacing before lambert projection delta = 0.01 # Set the radius of the hemisphere radius = 0.11 # Add a rectilinear sampling grid to the world lamb_sampler = world.add_sampler(handybeam.samplers.lambert_sampler.LambertSampler(parent = world, origin 
= np.array((0,0,0)), radius = 100e-3, required_resolution = 10e-3)) # Propagate the acoustic field world.propagate() # Visualise the result lamb_sampler.visualise_all_in_one() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Log Parsing - Implement Code
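# The cells below are left as TODO stubs. As a rough illustration only (not the intended
# solution), here is one way the robocopy summary block might be parsed. The `RobocopyMetrics`
# class, the regular expression, and the column meanings are assumptions about a typical
# English-locale robocopy log and may need adjusting for the sample files used below.

# +
import re

class RobocopyMetrics:
    """Illustrative container for a few summary counters (hypothetical)."""
    def __init__(self, total=0, copied=0, failed=0):
        self.total, self.copied, self.failed = total, copied, failed

def parse_robocopy_summary(text):
    # A typical English-locale summary row looks roughly like:
    #     Files :       120       118         0         0         2         0
    # with columns Total, Copied, Skipped, Mismatch, FAILED, Extras; the layout can vary
    # by robocopy version and locale, so treat this pattern as an assumption.
    m = re.search(r"Files\s*:\s*(\d+)\s+(\d+)\s+\d+\s+\d+\s+(\d+)", text)
    if m is None:
        return None
    total, copied, failed = (int(g) for g in m.groups())
    return RobocopyMetrics(total=total, copied=copied, failed=failed)
# -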

    import json from json import JSONEncoder import re import time # + # TODO: Define log metrics class # - # http://stackoverflow.com/questions/3768895/how-to-make-a-class-json-serializable class MyEncoder(JSONEncoder): def default(self, o): return o.__dict__ def process_robocopy_log(file_list): # TODO: Parsing code for robocopy log return # + file_list = [ r"..\Data\RobocopyLog\rocopylog_invalid_source.txt", r"..\Data\RobocopyLog\rocopylog.txt"] start_time = time.time() process_robocopy_log(file_list) end_time = time.time() print('Elapsed seconds: {0:.2f}s'.format(end_time-start_time)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: bts # language: python # name: bts # --- # + from pathlib import Path import pandas as pd raw = Path('../data/raw') interim = Path('../data/interim') df = pd.read_csv(Path(raw) / 'mlb_elo.csv') events = pd.read_pickle(Path(interim) / 'events.pkl') game_logs = pd.read_pickle(Path(interim) / 'game_logs.pkl') # + df = df[df.season >= 1920] df['N'] = df.groupby(['date', 'team1', 'team2'])['date'].transform('count') df['n'] = df.groupby(['date', 'team1', 'team2'])['date'].cumcount() + 1 df['group_id'] = df.groupby(['date', 'team1', 'team2']).ngroup() df = df.sort_values(['group_id', 'n'], ascending=[True, False]) df['L_elo1_post'] = df.groupby(['group_id'])['elo1_post'].shift(1) df['equal'] = (df['L_elo1_post'] == df['elo1_pre']).astype('int') df['order'] = df.groupby(['group_id'])['equal'].cumsum() + 1 df['double_id'] = 0 df.loc[df.N >= 2 , 'double_id'] = df.order df['GAME_ID'] = df['team1'] + df['date'].str.replace('-', '') + df['double_id'].astype('str') # + game_list = game_logs.loc[game_logs.year >= 1920, ['GAME_ID', 'VisitingTeam', 'HomeTeam']] game_list.columns = ['GAME_ID_MAIN', 'AWAY_TEAM_ID', 'HOME_TEAM_ID'] game_list['rest'] = game_list['GAME_ID_MAIN'].str.slice(3,12) game_list['NEW_HOME_TEAM_ID'] = game_list['HOME_TEAM_ID'] game_list['NEW_AWAY_TEAM_ID'] = game_list['AWAY_TEAM_ID'] recode = { 'CHA': 'CHW', 'NYA': 'NYY', 'KCA': 'KCR', 'NYN': 'NYM', 'CHN': 'CHC', 'LAN': 'LAD', 'SLN': 'STL', 'SFN': 'SFG', 'SDN': 'SDP', 'PHA': 'OAK', 'BRO': 'LAD', 'MON': 'WSN', 'WS1': 'MIN', 'NY1': 'SFG', 'BSN': 'ATL', 'CAL': 'ANA', 'SLA': 'BAL', 'TBA': 'TBD', 'FLO': 'FLA', 'MLN': 'ATL', 'KC1': 'OAK', 'WAS': 'WSN', 'WS2': 'TEX', 'MIA': 'FLA', 'LAA': 'ANA', 'SE1': 'MIL', } for old_code, new_code in recode.items(): game_list.loc[game_list.NEW_HOME_TEAM_ID == old_code, 'NEW_HOME_TEAM_ID'] = new_code game_list.loc[game_list.NEW_AWAY_TEAM_ID == old_code, 'NEW_AWAY_TEAM_ID'] = new_code game_list['GAME_ID'] = game_list['NEW_HOME_TEAM_ID'] + game_list['rest'] # + test = pd.merge(df, game_list, on=['GAME_ID'], indicator = True, how='right') test = test[[ 'elo1_pre', 'elo2_pre', 'elo_prob1', 'elo_prob2', 'rating1_pre', 'rating2_pre', 'rating_prob1', 'rating_prob2', 'pitcher1_rgs', 'pitcher2_rgs', 'pitcher1_adj', 'pitcher2_adj', 'GAME_ID_MAIN', 'HOME_TEAM_ID', 'AWAY_TEAM_ID' ]] test.columns = [ 'elo_pre1', 'elo_pre2', 'elo_prob1', 'elo_prob2', 'rating_pre1', 'rating_pre2', 'rating_prob1', 'rating_prob2', 'pitcher_rgs1', 'pitcher_rgs2', 'pitcher_adj1', 'pitcher_adj2', 'GAME_ID', 'TEAM_ID1', 'TEAM_ID2' ] test_wide = pd.wide_to_long( test, ['elo_pre', 'elo_prob', 'rating_pre', 'rating_prob', 'pitcher_rgs', 'pitcher_adj', 'TEAM_ID'], i='GAME_ID', j='Home' ) test_wide = test_wide.reset_index() test_wide['Home'] = 2 - test_wide['Home'] test_wide = 
test_wide.set_index(['GAME_ID', 'TEAM_ID']) # - test_wide # + # class color: # BOLD = '\033[1m' # END = '\033[0m' # print(color.BOLD + 'RETROSHEETS ONLY' + color.END) # print(test[test._merge == 'right_only'].HOME_TEAM_ID.value_counts().to_string()) # print(test[test._merge == 'right_only']['GAME_ID'].count()) # print(color.BOLD + '538 ONLY' + color.END) # print(test[(test._merge == 'left_only') & (test.playoff.isna())]['team1'].value_counts().to_string()) # print(color.BOLD + 'BOTH' + color.END) # print(test[test._merge == 'both'].HOME_TEAM_ID.value_counts().to_string()) # test.groupby(['team2', 'AWAY_TEAM_ID']).size() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Importing packages # Throughout this tutorial, we will use the following common Python packages: # Use these packages to easily access files on your hard drive import os, sys, glob # The Numpy package allows you to manipulate data (mainly numerical) import numpy as np # The Pandas package allows more advanced data manipulation e.g. in structured data frames import pandas as pd # The Matplotlib package is for plotting - uses the same syntax as plotting in Matlab (figures, axes etc) import matplotlib.pyplot as plt # Seaborn is a higher-level package for plotting that calls functions in Matplotlib, # you can usually input your Pandas dataframes to get pretty plots in 1 or 2 lines import seaborn as sns # We will use Scipy for advanced computation like model fitting import scipy # ## Solutions # #### 1. Create two lists that separate numbers (eg. from 1-100) divisible by 3 and numbers not divisible by 3. y1 = [] y2 = [] for x in np.arange(1,100): if x%3 == 0: #if divisible by 3, put the number in y1 y1.append(x) else: #if not divisible by 3, put the number in y1 y2.append(x) # display(y1) # display(y2) # #### 2. Keep generating random numbers until a generated number is greater than 0.8 and store the number of times it takes you to get this number u = np.random.rand() n = 1 while u < 0.8: u = np.random.rand() n = n+1 display(u) display(n) # #### 3. Generate some random data in two variables of equal length and make a scatter plot using matplotlib var1 = np.random.rand(100) var2 = np.random.rand(100) plt.scatter(var1,var2) # #### 4. Generate some data for a linear relationship between two variables (e.g. age and height of schoolchildren), put them in a Pandas dataframe with 2 named columns, and use Seaborn to create a scatterplot with regression line age = 5 + np.random.rand(100)*7 height = 108 + (152-108)*((age-5)/7) + np.random.randn(100)*20 age_height = pd.DataFrame.from_dict({'age':age,'height':height}).sort_values(by=['age','height']) display(age_height.head()) sns.regplot(data = age_height, x = 'age', y = 'height') # #### 5. Create a Pandas dataframe with height data for 5 age groups and use Seaborn to turn this into a barplot with errorbars and an overlaid stripplot or swarmplot. 
age_height['group'] = age_height['age'].apply(lambda x: np.floor(x)-4) age_height sns.barplot(data = age_height.query('group < 6'), x = 'group', y = 'height', alpha = .5) sns.swarmplot(data = age_height.query('group < 6'), x = 'group', y = 'height') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd # %matplotlib inline # - # --- # ![''](./04_src/04_21_01.png) # # ![''](./04_src/04_21_02.png) df = pd.DataFrame({'Older':[9, 4, 7, 3, 8, 6, 2, 8, 4, 5, 7, 5, 2, 6, 6], 'Younger':[7, 9, 6, 7, 8, 6, 7, 6, 6, 8, 9, 7, 8, 6, 9]}) df df.describe() # --- # ![''](./04_src/04_22_01.png) # # ![''](./04_src/04_22_02.png) df = pd.DataFrame({'Younger':[8, 6, 6, 7, 8, 7, 8, 8], 'Older':[7, 5, 8, 5, 7, 6, 8, 5]}) df df.Younger.var(), df.Older.var() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np submissions_df = pd.read_csv("/Users/frankkelly/Downloads/collated - Tabellenblatt1.csv") submissions_df.head() submissions_df = submissions_df.loc[1:,:] submissions_df.head() submissions_df.columns grade_df = submissions_df[["ID", "Rating", "Grade", "Rating: [a, b, c, d]"]] grade_df.head() pure_cool_df = submissions_df[["Coolness, attractiveness", "Coolness", "Coolness [+, ++, +++]"]] pure_cool_df.head() pure_cool_df cool_dict = {} cool_dict["+"] = 1 cool_dict["++"] = 2 cool_dict["+++"] = 3 cool_dict # + # def cool_series_convert(series_in): # return [cool_dict[x] if x in cool_dict.keys() else np.nan for x in series_in.values] # - def get_plus(series_in): return [len(''.join([x for x in val if x == '+'])) if val is not np.nan else np.nan for val in series_in.values ] pure_cool_df.iloc[1] get_plus(pure_cool_df.iloc[1]) cool_df = pure_cool_df.apply(lambda x: get_plus(x), axis=1) cool_df.head() pure_grade_df = grade_df[["Rating", "Grade", "Rating: [a, b, c, d]"]].apply(lambda x: x.str.lower(), axis=1) pure_grade_df.head() scoring_dict = {} scoring_dict["a"] = 4 scoring_dict["b"] = 3 scoring_dict["c"] = 2 scoring_dict["d"] = 1 scoring_dict # + # def series_convert(series_in): # list_out = [] # for x in series_in: # if x is not np.nan: # list_out.append(scoring_dict[y]) # else: # list_out.append(0) # return list_out def series_convert(series_in): return [scoring_dict[x] if x in scoring_dict.keys() else np.nan for x in series_in.values] # - pure_grade_df.iloc[0] pure_grade_numerical_df = pure_grade_df.apply(lambda x: series_convert(x), axis=1) grade_column = pure_grade_numerical_df.mean(axis=1) print(grade_column[:5]) coolness_column = cool_df.mean(axis=1) score_column = (grade_column + coolness_column)/2 score_column submissions_df.columns def remove_nan(list_in): list_listin = list(list_in) for item in list_listin: if item is np.nan: list_listin.remove(item) return list_listin remove_nan(['no', np.nan, np.nan]) mode = lambda x: x.str.lower().mode()[0] if len(x) > 2 else str(x.values) # + category_column = submissions_df[['Category', 'Category.1', 'cat']].apply(lambda x:str(x.values), axis=1) level_column = submissions_df[['Level', 'Level.1', 'level']].apply(mode, axis=1) pycon_column = submissions_df[['pycon', 'Suggest to Pycon', 'Suggest to Pycon: [yes, no]']]\ .apply(mode, axis=1) long_slot_column = 
submissions_df[['long slot', 'Long Slot', 'Long slot: [yes, no]']]\ .apply(mode, axis=1) print(long_slot_column[:5]) print(pycon_column[:5]) print(level_column[:5]) # - pd.Series(['beginner', 'Beginner', np.nan]).str.lower().mode() final_df = pd.concat([submissions_df[["ID"]], category_column, \ level_column, score_column, grade_column, \ coolness_column, pycon_column, long_slot_column], axis=1) final_df.columns=["ID", "category", "level", "score", "grade", "coolness", "pycon", "long-slot"] final_df.head() top50_df = final_df.sort_values(by="score", ascending=False).head(50) top50_df top30_df.to_csv("../data/top30entries.csv") top21_df.level.value_counts() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Day 5, Part B: Using Your Trained Policy # ## Learning goals # - What to do with the trained policy # ## Definitions # - **Simulation environment**: Notice that this is not the same as the python/conda environment. The simulation environment is the simulated world where the reinforcement learning takes place. It provides opportunities for an agent to learn and explore, and ideally provides challenges that aid in efficient learning. # - **Agent (aka actor or policy)**: An entity in the simulation environment that performs actions. The agent could be a person, a robot, a car, a thermostat, etc. # - **State variable**: An observed variable in the simulation environment. They can be coordinates of objects or entities, an amount of fuel in a tank, air temperature, wind speed, etc. # - **Action variable**: An action that the agent can perform. Examples: step forward, increase velocity to 552.5 knots, push object left with force of 212.3 N, etc. # - **Reward**: A value given to the agent for doing something considered to be 'good'. Reward is commonly assigned at each time step and cumulated during a learning episode. # - **Episode**: A learning event consisting of multiple steps in which the agent can explore. It starts with the unmodified environment and continues until the goal is achieved or something prevents further progress, such as a robot getting stuck in a hole. Multiple episodes are typically run in loops until the model is fully trained. # - **Model (aka policy or agent)**: An RL model is composed of the modeling architecture (e.g., neural network) and parameters or weights that define the unique behavior of the model. # - **Policy (aka model or agent)**: The parameters of a model that encode the best choices to make in an environment. The choices are not necessarily good ones until the model undergoes training. The policy (or model) is the "brain" of the agent. # - **Replay Buffer**: A place in memory to store state, action, reward and other variables describing environmental state transitions. It is effectively the agent's memory of past experiences. # - **On-policy**: The value of the next action is determined using the current actor policy. # - **Off-policy**: The value of the next action is determined by a function, such as a value function, instead of the current actor policy. # - **Value function**: Function (typically a neural network) used to estimate the value, or expected reward, of an action. # ## I've trained. I'm happy with the results. Now what? # Training a RL policy can be very time consuming and expensive, so you want to make sure it's put to good use. 
Before we try to make use of the trained model, let's be sure it's ready to use. In previous notebooks, we have been saving the models (e.g., Day 1 Part C), but there are additional nuances that can be helpful during training. # # - Consider auto-saving the highest reward policy during training # - Consider auto-saving periodically in case you need to pause a long training run or there is a power outage # # A lot of the frameworks have these things included, but you want to verify or put those things in by hand (or add more for your own interests) should you need them. TD3 has some, and we'll mention those below. # # You might also want to stand up a database to store a large number of policy snapshots or to keep the model buffer state-action transitions (Methods that learn from previously determined transitions are a new field in RL). # # >**There's a golden rule you may have learned from late nights writing school essays and the power goes out: "Save. And save often."** # From our original stable_baselines3 CartPole: # # ```python # model.save("ppo_cartpole") # ``` # # In this case, saving produced a `ppo_cartpole.zip` file; others might produce a NumPy `.npy`, or simply an 'object'. These always contain the model values, but some also include other aspects of training, like episode number, so you can restart training where you left off. That depends on the library you use. In any case, **the file is the artifact that you spent all your time and money producing - the stored values in the neural network**. # # If you still have the ppo_cartpole.zip around, you can load it up and put it to use; otherwise, rerun Day1_PartC and create it. # # Now, import the same boilerplate: import os import gym from stable_baselines3 import PPO # Create our environment from `gym.make()` and load the zip back in to a variable using stable-baselines3's PPO load utilty. env = gym.make("CartPole-v1") model = PPO.load("ppo_cartpole") # We'll go ahead and do the same render we did before to 'see it in action', but lets take a look at what we have already: obs = env.reset() obs # This array is the environment state at reset. Feel free to re-run the above cell a few times, you'll see different results for each run. (Use control-enter to re-run without leaving the cell) # Passing that environment state to the policy in `model.predict(obs)` returns the policy action to take given the current environment. action, _states = model.predict(obs) action # Believe it or not, for CartPole, that's the *entire* ballgame. All that time and effort gives you a policy that delivers one thing: the action to take given the state of the environment. # # >**Remember that you are building a tool.** # # You *can* hook things up to run an entire episode and play things out like a simulation/game/etc., or you could just take that single one-off state->action converter and drop it into another piece of code. Maybe you have a theory-based algorithm that solves your problem perfectly, except for that one blind point where your algorithm has a divide-by-zero (shrug), so in that exception catch you drop in your trained policy to do a bit more than just a simple 'default action.' # # >**Try running the next two cells, again and again, to advance the environment forward; predict->step->predict->step** obs, rewards, dones, info = env.step(action) obs, rewards action, _states = model.predict(obs) action # We can, of course, just run the thing through the episode (or 1k steps in a loop below) given our policy. 
But we just want to remind you, you don't have to do just that with the policy you've trained. # # >**Your policy is now a function that pops out 'best actions' when you ask it to** # # Set up a multiplayer game, for example, and every time the computer gets a turn (or opportunity to move in some way... maybe on a set polling interval, or maybe 0.05 seconds, or whatever) you pass the observations to your policy, and your AI player can act. obs = env.reset() for _ in range(1000): action, _states = model.predict(obs) obs, rewards, dones, info = env.step(action) env.render() env.env.viewer.close() # For the case of TD3, there are save and load functions built in, and they look like this: # # ```python # def save(self, filename): # torch.save(self.critic.state_dict(), filename + "_critic") # torch.save(self.critic_optimizer.state_dict(), filename + "_critic_optimizer") # # torch.save(self.actor.state_dict(), filename + "_actor") # torch.save(self.actor_optimizer.state_dict(), filename + "_actor_optimizer") # # def load(self, filename): # self.critic.load_state_dict(torch.load(filename + "_critic")) # self.critic_optimizer.load_state_dict(torch.load(filename + "_critic_optimizer")) # self.critic_target = copy.deepcopy(self.critic) # # self.actor.load_state_dict(torch.load(filename + "_actor")) # self.actor_optimizer.load_state_dict(torch.load(filename + "_actor_optimizer")) # self.actor_target = copy.deepcopy(self.actor) # ``` # # It's simply using the PyTorch functions `torch.save()` and `torch.load()` to load the objects for actor and critic - then, when we request action and state updates, we're now asking a loaded (trained or partially trained) policy: `policy.select_action(np.array(state))` # ## Load Ant and have it perform some actions # # There's a lot of code in the next two cells, but it is rather simple in what it's doing: # - import/load boilerplate # - register the environment # - load the policy # # The `load_policy()` in this case is nearly identical to the first half of the `main()` function we were playing with in our Ant examples. It just stops as soon as it has the TD3 load accomplished, with correct parameters. We don't need most of them, but we're bringing them along for the ride, just in case. 
# + import numpy as np import torch import gym import pybullet_envs import os import time import sys from pathlib import Path sys.path.append(str(Path().resolve().parent)) import utils import TD3 from numpngw import write_apng from gym.envs.registration import registry, make, spec def register(id, *args, **kvargs): if id in registry.env_specs: return else: return gym.envs.registration.register(id, *args, **kvargs) register(id='MyAntBulletEnv-v0', entry_point='override_ant_random:MyAntBulletEnv', max_episode_steps=3000, reward_threshold=2500.0) # - def load_policy(env_name_var): args = { "policy" : "TD3", # Policy name (TD3, DDPG or OurDDPG) "env" : env_name_var, # OpenAI gym environment name "seed" : 0, # Sets Gym, PyTorch and Numpy seeds "start_timesteps" : 25e3, # Time steps initial random policy is used "eval_freq" : 5e3, # How often (time steps) we evaluate "max_timesteps" : 2e6, # Max time steps to run environment "expl_noise" : 0.1, # Std of Gaussian exploration noise "batch_size" : 256, # Batch size for both actor and critic "discount" : 0.99, # Discount factor "tau" : 0.007, # Target network update rate "policy_noise" : 0.2, # Noise added to target policy during critic update "noise_clip" : 0.5, # Range to clip target policy noise "policy_freq" : 2, # Frequency of delayed policy updates "save_model" : "store_true", # Save model and optimizer parameters "load_model" : "default", # Model load file name, "" doesn't load, "default" uses file_name } file_name = f"{args['policy']}_{args['env']}_{args['seed']}_{args['tau']}" print("---------------------------------------") print(f"Policy: {args['policy']}, Env: {args['env']}, Seed: {args['seed']}") print("---------------------------------------") if not os.path.exists("./results"): os.makedirs("./results") if args['save_model'] and not os.path.exists("./models"): os.makedirs("./models") env = gym.make(args['env']) # Set seeds env.seed(args['seed']) env.action_space.seed(args['seed']) torch.manual_seed(args['seed']) np.random.seed(args['seed']) state_dim = env.observation_space.shape[0] action_dim = env.action_space.shape[0] max_action = float(env.action_space.high[0]) kwargs = { "state_dim": state_dim, "action_dim": action_dim, "max_action": max_action, "discount": args['discount'], "tau": args['tau'], } # Initialize policy if args['policy'] == "TD3": # Target policy smoothing is scaled wrt the action scale kwargs["policy_noise"] = args['policy_noise'] * max_action kwargs["noise_clip"] = args['noise_clip'] * max_action kwargs["policy_freq"] = args['policy_freq'] policy = TD3.TD3(**kwargs) if args['load_model'] != "": policy_file = file_name if args['load_model'] == "default" else args['load_model'] policy.load(f"./models/{policy_file}") return policy policy = load_policy("MyAntBulletEnv-v0") env = gym.make("MyAntBulletEnv-v0", render=True) env.seed(0) state, done = env.reset(), False state # Now that everything's set up, you can view the ant and **step through the simulation using the next cell** (we could have even just made that it's own function - call it 'advance' or something). Control-enter will run the cell without advancing to the next cell. # # At this point, it's fun to keep the simulation window and the notebook both visible (I shrink my notebook to see the window on the side). The view can be adjusted with control-click and your mouse wheel. 
# **This cell can be used to advance the ant, one step at a time:** action = policy.select_action(np.array(state)) state, reward, done, _ = env.step(action) env.robot.body_xyz # Maybe mid-course we want to change the target the ant is walking to (which is then in the obs space): env.robot.walk_target_x = -10 env.robot.walk_target_y = -10 # Go back to the cell above and advance the ant. It should turn and go toward this new target. There might be some momentum built up and some maneuvering to turn around, so advance it enough times to actually see it turn and start walking again. # # You get the picture: the policy controls what actions are taken; it's an artifact that we save and load, but beyond that it's up to us how it gets used. # ## Walk to each coordinate # For fun, let's set it up so we can pass in a list of walking-path coordinates the ant needs to visit. Maybe this list could be provided by another path-finding AI or a classical control scheme. my_list = [(3,3),(0,3),(-6,-6),(0,-6),(9,9),(0,9),(0,0)] for i in my_list: env.robot.walk_target_x = i[0] env.robot.walk_target_y = i[1] path_done = False counter_i = 0 while not path_done: action = policy.select_action(np.array(state)) state, reward, done, _ = env.step(action) time.sleep(1. / 100.) #comment out to run at max local system speed counter_i += 1 if counter_i > 500: path_done = True dist = np.linalg.norm([i[0]-env.robot.body_xyz[0],i[1]-env.robot.body_xyz[1]]) if dist < 0.2: path_done = True # Don't let all that power go to your head... poor little ant. # # Try changing the list up a few times and see the ant run different routes. # # >**Can you make a path that Ant will follow?** # # This particular policy is from the 5 million time-step custom ant with no reward modification - it will mostly get the job done, but there will be a few instances where it just doesn't make the next point happen (this is why we give each waypoint the 500 time-step time-out). # # >**For fun, try taking one of your modified ants with custom reward and send it through a similar challenge.** # # - What happens if you manually tweak some of the robot's internal state values as it's moving? # - Is the ant robust to observation signal noise? How might you modify the training course so it would handle real-world sensor noise/errors/corruptions that it might encounter (as if the policy were placed into a real robot)? # - What modifications might you want to make to the environment that the ant is trained in? # - The observables it sees - would they be general enough to handle the 'real world'? # # There are lots of things to consider and weigh when building out your RL environment, training, and how you use the policy, but hopefully by this point you can start to answer some of these questions and think about what you might do yourself. Best of luck!
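# As a starting point for the observation-noise question above, here is a minimal sketch of one way to probe robustness: corrupt the observation before it reaches the policy and watch whether the ant still reaches its targets. The helper name `noisy_step` and the noise level `sigma` are made up for illustration; `policy`, `env`, and `state` come from the cells above:
#
# ```python
# sigma = 0.05  # standard deviation of the injected sensor noise; 0.0 reproduces the clean behaviour
#
# def noisy_step(state, sigma):
#     """Hand the policy a noise-corrupted copy of the observation, then step the environment."""
#     noisy_obs = np.array(state) + np.random.normal(0.0, sigma, size=np.shape(state))
#     action = policy.select_action(noisy_obs)
#     return env.step(action)
#
# for _ in range(500):
#     state, reward, done, _ = noisy_step(state, sigma)
# ```
#
# Sweeping `sigma` upward until the gait falls apart gives a rough sense of how much sensor corruption this particular policy can tolerate.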
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [py27] # language: python # name: Python [py27] # --- # + # %matplotlib notebook import itertools import logging from functools import partial from collections import defaultdict, OrderedDict import gensim import matplotlib from matplotlib import rc import matplotlib.pyplot as plt from matplotlib import colors as plt_colors import numpy as np import pandas as pnd import os from sklearn.cluster import * from sklearn.preprocessing import normalize from sklearn.cross_validation import train_test_split from sklearn.decomposition import PCA, RandomizedPCA from sklearn.manifold import TSNE from sklearn import svm, metrics from multiprocessing import Pool from knub.thesis.util import * pnd.set_option("display.max_colwidth", 200) LIGHT_COLORS = ["#a6cee3", "#b2df8a", "#fb9a99", "#fdbf6f", "#cab2d6", "#ffff99"] # light colors DARK_COLORS = ["#1f78b4", "#33a02c", "#e31a1c", "#ff7f00", "#6a3d9a"] # dark colors rc('font', **{'family': 'serif', 'serif': ['Computer Modern']}) rc('text', usetex=True) SIZE = 11 rc('font', size=SIZE) # controls default text sizes rc('axes', titlesize=SIZE) # fontsize of the axes title rc('axes', labelsize=SIZE) # fontsize of the x and y labels rc('xtick', labelsize=SIZE) # fontsize of the tick labels rc('ytick', labelsize=SIZE) # fontsize of the tick labels rc('legend', fontsize=SIZE) # legend fontsize rc('figure', titlesize=SIZE) # fontsize of the figure title def cm2inch(*tupl): inch = 2.54 if isinstance(tupl[0], tuple): return tuple(i/inch for i in tupl[0]) else: return tuple(i/inch for i in tupl) # + def evaluate_single(df_param): nr_inclusions = len(df_param) nr_succ = len(df_param[df_param.successful]) return float(nr_succ) / nr_inclusions def evaluate_df(df_param, print_eval): for method, df_group in df_param.groupby("method"): if "mixture" in method: continue succ_prob = evaluate_single(df_group) if print_eval: print "%s: %.2f" % (method, succ_prob) def evaluate_file(f, print_eval=True): df_file = pnd.read_csv("/home/knub/Repositories/master-thesis/webapp/out/" + f, sep="\t", header=None) df_file.columns = ["inclusion_idx", "inclusion_id", "method", "words", "intruder", "selected_word", "successful"] evaluate_df(df_file, print_eval) return df_file # - for f_name in sorted([f for f in os.listdir("../webapp/out") if f.endswith(".txt")]): print f_name evaluate_file(f_name) print # **Evaluating all results** # + dfs = [] for f_name in sorted([f for f in os.listdir("../webapp/out") if f.endswith(".txt")]): df_tmp = evaluate_file(f_name, print_eval=False) df_tmp["evaluator"] = f_name.replace(".txt", "").replace("20_", "") dfs.append(df_tmp) df_all = pnd.concat(dfs) evaluate_df(df_all, print_eval=True) # + def aggregate_samples(df_param): return all(df_param) df_agreement = df_all.groupby(["inclusion_id", "method"], as_index=False)["successful"].agg(aggregate_samples).reset_index() pnd.reset_option('display.max_rows') df_agreement.groupby("method")["successful"].agg(["mean", "count"]) # - df_agreement.groupby(["method"])["mean"].mean() # + def build_word_intrusion(df_param): models = sorted(list(set(df_param.method))) evaluators = sorted(list(set(df_param.evaluator))) df_return = pnd.DataFrame(columns=models, index=evaluators) df_return.index.name = "evaluator" df_return.columns.name = "model" print models for evaluator, df_group_evaluator in df_param.groupby("evaluator"): for model, 
df_group_model in df_group_evaluator.groupby("method"): df_return.set_value(evaluator, model, evaluate_single(df_group_model)) return df_return def plot_word_intrusion(df_param): df_param = pnd.melt(df_param.reset_index(), id_vars=['evaluator']) df_param = df_param.dropna() df_param = df_param.sort_values(["model", "value"]) models = sorted(list(set(df_param["model"]))) for m in models: df_param.loc[df_param["model"] == m, "offset"] = range(len(df_param["model"] == m)) models = OrderedDict([(v, k+1) for k, v in enumerate(models)]) scatters = np.linspace(-0.25, 0.25, 8) df_param["scatter_x"] = df_param.apply(lambda x: models[x.model] + scatters[x.offset], axis=1) plt.figure(figsize=cm2inch(12, 6), dpi=300) plt.xlim(0.6, 4.8) plt.ylim((0.,1.0)) plt.xticks(models.values(), ["LDA", "LFTM", "TopicVec", "WELDA"], rotation='horizontal') plt.scatter(df_param["scatter_x"], df_param["value"], c="black", marker='x', s=20, alpha=1.0, lw = 0.5) width = 0.25 for m in models: mean = df_param[df_param.model == m]["value"].mean() #plt.axhline(y=mean) #print models[m] plt.hlines(y=mean, xmin=models[m]-width, xmax=models[m]+width, linestyles="dashed", linewidth=0.5) plt.text(models[m] + width + 0.03, mean, "$%.2f$" % mean, verticalalignment="center") df_no_mixture = df_all[df_all.method.apply(lambda x: "mixture" not in x)] df_20 = df_no_mixture[df_no_mixture.method.apply(lambda x: "20" in x)] df_50 = df_no_mixture[df_no_mixture.method.apply(lambda x: "20" not in x)] df_word_intrusion_20 = build_word_intrusion(df_20) plot_word_intrusion(df_word_intrusion_20) df_word_intrusion_50 = build_word_intrusion(df_50) plot_word_intrusion(df_word_intrusion_50) # - df_word_intrusion_20 df_word_intrusion_50 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import tensorflow as tf from sklearn.utils import shuffle from sklearn.model_selection import train_test_split from pprint import pprint from math import ceil from time import time import matplotlib.pyplot as plt # %matplotlib inline # - BATCH_SIZE = 16 EPOCHS = 10 EPOCHS_BEFORE_SWA = 5 ALPHA1_LR = 0.1 ALPHA2_LR = 0.001 # # Data Preparation # # - load `train` and `test` subsets (CIFAR-10) # - split `train` into `train` + `valid` (80/20%, stratified split on labels) # # - create data generators (with `keras.preprocessing.image.ImageDataGenerator`): # - one ImageDataGenerator with data augmentation (horizontal flips, random translations) for train set # - three ImageDataGenerator without data augmentation for train, valid and test subset # - why `train` ? : to fit Batch Norm statistics without augmentation # + print("... loading CIFAR10 dataset ...") (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() y_train = np.squeeze(y_train) y_test = np.squeeze(y_test) x_train, y_train = shuffle(x_train, y_train) x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2, stratify=y_train, random_state=51) # cast samples and labels x_train = x_train.astype(np.float32) / 255. x_val = x_val.astype(np.float32) / 255. x_test = x_test.astype(np.float32) / 255. 
y_train = y_train.astype(np.int32) y_val = y_val.astype(np.int32) y_test = y_test.astype(np.int32) print("\tTRAIN - images {} | {} - labels {} - {}".format(x_train.shape, x_train.dtype, y_train.shape, y_train.dtype)) print("\tVAL - images {} | {} - labels {} - {}".format(x_val.shape, x_val.dtype, y_val.shape, y_val.dtype)) print("\tTEST - images {} | {} - labels {} - {}\n".format(x_test.shape, x_test.dtype, y_test.shape, y_test.dtype)) # + generator_aug = tf.keras.preprocessing.image.ImageDataGenerator(width_shift_range=8, height_shift_range=8, fill_mode='constant', cval=0.0, horizontal_flip=True) generator = tf.keras.preprocessing.image.ImageDataGenerator() # python iterator object that yields augmented samples iterator_train_aug = generator_aug.flow(x_train, y_train, batch_size=BATCH_SIZE) # python iterators object that yields not augmented samples iterator_train = generator.flow(x_train, y_train, batch_size=BATCH_SIZE) iterator_valid = generator.flow(x_val, y_val, batch_size=BATCH_SIZE) iterator_test = generator.flow(x_test, y_test, batch_size=BATCH_SIZE) # + # test x, y = iterator_train_aug.next() img = x[0]*255 print("x : {} | {}".format(x.shape, x.dtype)) print("y : {} | {}".format(y.shape, y.dtype)) plt.figure(figsize=(6,6)) plt.imshow(img.astype(np.uint8)) # - # # Build a network with MovingFreeBatchNorm from swa_tf import moving_free_batch_norm, StochasticWeightAveraging config = tf.ConfigProto() config.allow_soft_placement = True config.gpu_options.allow_growth = True sess = tf.Session(config=config) with tf.name_scope('inputs'): batch_x = tf.placeholder(shape=[None, 32, 32, 3], dtype=tf.float32) batch_y = tf.placeholder(shape=[None, ], dtype=tf.int64) is_training_bn = tf.placeholder(shape=[], dtype=tf.bool) use_moving_statistics = tf.placeholder(shape=[], dtype=tf.bool) learning_rate = tf.placeholder(shape=[], dtype=tf.float32) # + # network similar to a VGG11 with tf.name_scope('network'): x = tf.layers.conv2d(batch_x, filters=64, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.max_pooling2d(x, pool_size=2, strides=2, data_format='channels_last', padding='same') x = tf.layers.conv2d(x, filters=128, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.max_pooling2d(x, pool_size=2, strides=2, data_format='channels_last', padding='same') x = tf.layers.conv2d(x, filters=256, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.conv2d(x, filters=256, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.max_pooling2d(x, pool_size=2, strides=2, data_format='channels_last', padding='same') x = tf.layers.conv2d(x, filters=512, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, 
use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.conv2d(x, filters=512, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.max_pooling2d(x, pool_size=2, strides=2, data_format='channels_last', padding='same') x = tf.layers.conv2d(x, filters=512, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) x = tf.layers.conv2d(x, filters=512, kernel_size=3, strides=1, use_bias=True, activation=tf.nn.relu, data_format='channels_last', padding='same') x = moving_free_batch_norm(x, axis=-1, training=is_training_bn, use_moving_statistics=use_moving_statistics, momentum=0.99) # Global Average Pooling x = tf.reduce_mean(x, axis=[1, 2]) logits = tf.layers.dense(x, units=10, use_bias=True) # - model_vars = tf.trainable_variables() pprint(model_vars) # + # operations to update moving averages in batch norm layers # to run before updating weights! (with tf.control_dependencies()) update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # operations to update (mean, variance, n) in batch norm layers update_bn_ops = tf.get_collection('UPDATE_BN_OPS') # operations to reset (mean, variance, n) to zero reset_bn_ops = tf.get_collection('RESET_BN_OPS') pprint(update_ops) pprint(update_bn_ops) pprint(reset_bn_ops) # group these operations update_ops = tf.group(*update_ops) update_bn_ops = tf.group(*update_bn_ops) reset_bn_ops = tf.group(*reset_bn_ops) # - with tf.name_scope('loss'): loss_tf = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=batch_y)) acc_tf = tf.reduce_mean(tf.cast(tf.equal(batch_y, tf.argmax(logits, axis=1)), dtype=tf.float32)) with tf.name_scope('optimizer'): opt = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9) with tf.control_dependencies([update_ops,]): train_op = opt.minimize(loss_tf, var_list=model_vars) with tf.name_scope('SWA'): swa = StochasticWeightAveraging() swa_op = swa.apply(var_list=model_vars) # Make backup variables with tf.variable_scope('BackupVariables'): backup_vars = [tf.get_variable(var.op.name, dtype=var.value().dtype, trainable=False, initializer=var.initialized_value()) for var in model_vars] # operation to assign SWA weights to model swa_to_weights = tf.group(*(tf.assign(var, swa.average(var).read_value()) for var in model_vars)) # operation to store model into backup variables save_weight_backups = tf.group(*(tf.assign(bck, var.read_value()) for var, bck in zip(model_vars, backup_vars))) # operation to get back values from backup variables to model restore_weight_backups = tf.group(*(tf.assign(var, bck.read_value()) for var, bck in zip(model_vars, backup_vars))) global_init_op = tf.global_variables_initializer() sess.run(global_init_op) # ## Set Learning rate policy def get_learning_rate(step, epoch, steps_per_epoch): if epoch < EPOCHS_BEFORE_SWA: return ALPHA1_LR if step > int(0.9 * EPOCHS * steps_per_epoch): return ALPHA2_LR length_slope = int(0.9 * EPOCHS * steps_per_epoch) - EPOCHS_BEFORE_SWA * steps_per_epoch return ALPHA1_LR - ((ALPHA1_LR - ALPHA2_LR) / length_slope) * (step - EPOCHS_BEFORE_SWA * steps_per_epoch) # + steps_per_epoch_train = iterator_train.n//BATCH_SIZE all_steps = [] all_lr = [] step = 0 for
epoch in range(EPOCHS): for _ in range(steps_per_epoch_train): all_steps.append(step) all_lr.append(get_learning_rate(step, epoch, steps_per_epoch_train)) step += 1 plt.figure(figsize=(16,4)) plt.plot(all_steps, all_lr) # - # ## Launch traninig # - create a function to fit statistics with train subset # - create a function to run inference on [train/]val/test subsets (with/without moving statistics # - create a function to run inference with SWA weights steps_per_epoch_train = int(ceil(iterator_train.n/BATCH_SIZE)) steps_per_epoch_val = int(ceil(iterator_valid.n/BATCH_SIZE)) steps_per_epoch_test = int(ceil(iterator_test.n/BATCH_SIZE)) def fit_bn_statistics(): sess.run(reset_bn_ops) feed_dict = {is_training_bn: True, use_moving_statistics: True} for _ in range(steps_per_epoch_train): x, y = iterator_train.next() feed_dict[batch_x] = x sess.run(update_bn_ops, feed_dict=feed_dict) def inference(iterator, with_moving_statistics=True): feed_dict = {is_training_bn: False, use_moving_statistics: with_moving_statistics} all_acc = [] all_loss = [] nb_steps = int(ceil(iterator.n/BATCH_SIZE)) for _ in range(nb_steps): x, y = iterator.next() feed_dict[batch_x] = x feed_dict[batch_y] = y acc_v, loss_v = sess.run([acc_tf, loss_tf], feed_dict=feed_dict) all_acc.append(acc_v) all_loss.append(loss_v) return np.mean(all_acc), np.mean(all_loss) # %%time # test your random model ? acc, loss = inference(iterator_valid, with_moving_statistics=True) print("Random model : acc={:.5f} loss={:.5f}".format(acc, loss)) # %%time # now fit batch norm statistics and make inference again fit_bn_statistics() acc, loss = inference(iterator_valid, with_moving_statistics=False) print("Random model : acc={:.5f} loss={:.5f}".format(acc, loss)) feed_dict_train = {is_training_bn: True, use_moving_statistics: True} # + step = 0 start = time() acc_train = [] loss_train = [] acc_val = [] loss_val = [] acc_val_bn = [] loss_val_bn = [] acc_val_swa = [] loss_val_swa = [] for epoch in range(EPOCHS): acc = [] loss = [] for _ in range(steps_per_epoch_train): feed_dict_train[learning_rate] = get_learning_rate(step, epoch, steps_per_epoch_train) x, y = iterator_train_aug.next() feed_dict_train[batch_x] = x feed_dict_train[batch_y] = y acc_v, loss_v, _ = sess.run([acc_tf, loss_tf, train_op], feed_dict=feed_dict_train) acc.append(acc_v) loss.append(loss_v) step += 1 acc = np.mean(acc) loss = np.mean(loss) acc_train.append((epoch,acc)) loss_train.append((epoch,loss)) print("TRAIN @ EPOCH {} : acc={:.5f} loss={:.5f} in {:.3f} s".format(epoch, acc, loss, time()-start)) acc, loss = inference(iterator_valid, with_moving_statistics=True) acc_val.append((epoch,acc)) loss_val.append((epoch,loss)) print("VALID @ EPOCH {} : acc={:.5f} loss={:.5f} in {:.3f} s".format(epoch, acc, loss, time()-start)) # now fit batch norm statistics and make inference again fit_bn_statistics() acc, loss = inference(iterator_valid, with_moving_statistics=False) acc_val_bn.append((epoch,acc)) loss_val_bn.append((epoch,loss)) print("VALID updated bn @ EPOCH {} : acc={:.5f} loss={:.5f} in {:.3f} s".format(epoch, acc, loss, time()-start)) if epoch >= EPOCHS_BEFORE_SWA: sess.run(swa_op) if epoch > EPOCHS_BEFORE_SWA: sess.run(save_weight_backups) sess.run(swa_to_weights) fit_bn_statistics() acc, loss = inference(iterator_valid, with_moving_statistics=False) acc_val_swa.append((epoch,acc)) loss_val_swa.append((epoch,loss)) print("VALID with SWA @ EPOCH {} : acc={:.5f} loss={:.5f} in {:.3f} s".format(epoch, acc, loss, time()-start)) sess.run(restore_weight_backups) # - # --- # 
jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from sympy.physics.mechanics import ReferenceFrame,Point,dynamicsymbols from sympy.physics.mechanics import Point from sympy import latex,pprint,symbols,init_printing from sympy.algebras.quaternion import Quaternion import numpy as np import sys sys.path.append("../tools") from vis import Visualizer # %matplotlib notebook init_printing() # For pretty-printing symbols # Model definition for the amusement-park ride example a=ReferenceFrame('A') # Define the point O o=Point('O') # Model parameters l,r,h=symbols('L,R,h') # Motion variables q1,q2=dynamicsymbols('q1,q2') # Reference frame B b=a.orientnew('B','Axis',(q1,a.x)) p=o.locatenew('P',l*b.y) # Reference frame C c=b.orientnew('C','Axis',(q2,b.z)) q=p.locatenew('Q',r*c.y+h*c.z) # Visualization (including STL surfaces) # Build a visualization object from the inertial reference frame and the origin point vis=Visualizer(a,o) # Add frames and points to be visualized (frame, point, geometry) vis.add(a,o,shape='assets/Atraccion_Parque_base.stl') vis.add(b,o,shape='assets/Atraccion_Parque_link1.stl') vis.add(c,p,shape='assets/Atraccion_Parque_disk.stl') vis.add(a,o,frame_scale=50) vis.add(b,o,frame_scale=50) vis.add(c,q,frame_scale=50) # Modify the motion variables and verify the change in the position and # orientation of the frames vis.plot({l:35,r:30,h:12,q1:0,q2:0}) # - # Modify the motion variables to change the kinematic configuration of the model # You can do this from this cell to update the model without having to regenerate the figure. vis.plot({l:35,r:30,h:12,q1:0.5,q2:1.2}) # + # This makes it easy to create animations # Run this cell and click on figure 1 to watch the motion animation. import matplotlib from matplotlib.animation import FuncAnimation tt=np.linspace(0,10,500) qq1=np.sin(2*np.pi*0.1*tt) qq2=np.linspace(0,np.pi*2*10,len(tt)) def animfunc(i,qq1,qq2): vis.plot({l:35,r:30,h:12,q1:qq1[i],q2:qq2[i]}) FuncAnimation(vis.fig,animfunc,fargs=(qq1,qq2),interval=50) # --- # jupyter: # jupytext: # text_representation: # extension: .sh # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SSH # language: bash # name: ssh # --- # # Tutorial: SSH Agent # # A tutorial for the SSH Kernel with [ssh-agent](https://linux.die.net/man/1/ssh-agent). # ## Start jupyter notebook with ssh-agent # # In console: # # ``` # $ ssh-agent jupyter notebook # ``` # ## Generate a key and connect to the server # # In a `Terminal` in the Jupyter notebook, generate a key with a passphrase and connect to the server with it. # # ![image.png](attachment:image.png) # # ``` # $ ssh-keygen -f /tmp/id_ed25519_key -t ed25519 -m PEM # Enter passphrase (empty for no passphrase): # Enter same passphrase again: # Your identification has been saved in /tmp/id_ed25519_key. # Your public key has been saved in /tmp/id_ed25519_key.pub. # The key fingerprint is: # SHA256:FtwAprPsfW5Fu/SwQLbMS+TUw8gJLvyX2/8zTj8apsY masaru@garnet # The key's randomart image is: # +--[ED25519 256]--+ # | o.. | # | o.. o | # | .o. oo=. | # | .oo. O.= | # | oo OS+ o | # | . ...X = | # | . .o.B.= o. | # | o+ +E+.+o | # | .. .o.++oo| # +----[SHA256]-----+ # # $ edit ~/.ssh/config # # # (You must customize these definitions to your environment.)
# # + Host testagent # + Hostname localhost # + User root # + Port 11122 # + IdentityFile /tmp/id_ed25519_key # # $ ssh testagent # The authenticity of host '[localhost]:11122 ([::1]:11122)' can't be established. # ECDSA key fingerprint is SHA256:. # Are you sure you want to continue connecting (yes/no)? yes # Warning: Permanently added '[localhost]:11122' (ECDSA) to the list of known hosts. # root@localhost's password: # :~# exit # # $ ssh-copy-id -i /tmp/id_ed25519_key testagent # /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/tmp/id_ed25519_key.pub"/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are alre # ady installed/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to insta # ll the new keys # root@localhost's password: # # Number of key(s) added: 1 # # Now try logging into the machine, with: "ssh 'testagent'" # and check to make sure that only the key(s) you wanted were added. # # # $ ssh testagent id # Enter passphrase for key '/tmp/id_ed25519_key':uid=0(root) gid=0(root) groups=0(root) # ``` # ## Add a private key identities to the authentication agent # # ``` # $ ssh-add /tmp/id_ed25519_key # Enter passphrase for /tmp/id_ed25519_key: # Identity added: /tmp/id_ed25519_key () # # # Now you can connect without passphrase # $ ssh testagent id # uid=0(root) gid=0(root) groups=0(root) # ``` # ## Execute commands in notebook # %login testagent id # %logout # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd import numpy as np df_prop=pd.read_csv("/home/yash/Downloads/dataset/Affline_Dataset/Property dataset/Properties.csv") df_pta=pd.read_csv("/home/yash/Downloads/dataset/Affline_Dataset/training dataset/Accounts_properties.csv") df_prop.head() df_pta.head() acquired_props=df_pta.id_props.unique() df_pta.shape acquired_props.shape all_props=df_prop.id_props.unique() all_props.shape forSale=[x for x in all_props if x not in acquired_props] len(forSale) type(all_props) forSale=np.setdiff1d(all_props,acquired_props) forSale.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="A0TQHyu8cAU0" # # Springboard Data Science Career Track Unit 4 Challenge - Tier Two Complete # # ## Objectives # Hey! Great job getting through those challenging DataCamp courses. You're learning a lot in a short span of time. Let's see how this new knowledge can help solve a real world problem. # # In this notebook, we're going to apply the skills you've been learning, bridging the gap between the controlled environment of DataCamp and the *slightly* messier work that data scientists do with actual datasets! # # Here’s the mystery we’re going to solve: ***which boroughs of London have seen the greatest increase in housing prices, on average, over the last two decades?*** # # # A borough is just a fancy word for district. You may be familiar with the five boroughs of New York… well, there are 32 boroughs within Greater London [(here's some info for the curious)](https://en.wikipedia.org/wiki/London_boroughs). 
Some of them are more desirable areas to live in, and the data will reflect that with a greater rise in housing prices. # # This is the Tier Two notebook. Don't sweat it if you got stuck on the highest difficulty. You'll have a lot more guidance this time around. If you get stuck again, you can always drop down to Tier One. Just remember to come back around and redo the project on the higher difficulty. # # This challenge will make use of only what you learned in the following DataCamp courses: # - Prework courses (Introduction to Python for Data Science, Intermediate Python for Data Science) # - Data Types for Data Science # - Python Data Science Toolbox (Part One) # - pandas Foundations # - Manipulating DataFrames with pandas # - Merging DataFrames with pandas # # Of the tools, techniques and concepts in the above DataCamp courses, this challenge should require the application of the following: # - **pandas** # - **data ingestion and inspection** (pandas Foundations, Module One) # - **exploratory data analysis** (pandas Foundations, Module Two) # - **tidying and cleaning** (Manipulating DataFrames with pandas, Module Three) # - **transforming DataFrames** (Manipulating DataFrames with pandas, Module One) # - **subsetting DataFrames with lists** (Manipulating DataFrames with pandas, Module One) # - **filtering DataFrames** (Manipulating DataFrames with pandas, Module One) # - **grouping data** (Manipulating DataFrames with pandas, Module Four) # - **melting data** (Manipulating DataFrames with pandas, Module Three) # - **advanced indexing** (Manipulating DataFrames with pandas, Module Four) # - **matplotlib** (Intermediate Python for Data Science, Module One) # - **fundamental data types** (Data Types for Data Science, Module One) # - **dictionaries** (Intermediate Python for Data Science, Module Two) # - **handling dates and times** (Data Types for Data Science, Module Four) # - **function definition** (Python Data Science Toolbox - Part One, Module One) # - **default arguments, variable length, and scope** (Python Data Science Toolbox - Part One, Module Two) # - **lambda functions and error handling** (Python Data Science Toolbox - Part One, Module Four) # + [markdown] colab_type="text" id="IzUYCh-EcAU2" # ## The Data Science Pipeline # Data Science is magical. In this case study, you'll get to apply some complex machine learning algorithms. But as [](https://www.youtube.com/watch?v=oUs1uvsz0Ok) reminds us, there is no substitute for simply **taking a really, really good look at the data.** Sometimes, this is all we need to answer our question. # # Data Science projects generally adhere to the four stages of Data Science Pipeline: # 1. Sourcing and loading # 2. Cleaning, transforming, and visualizing # 3. Modeling # 4. Evaluating and concluding # + [markdown] colab_type="text" id="u8vWP1fccAU3" # ### 1. Sourcing and Loading # # Any Data Science project kicks off by importing ***pandas***. The documentation of this wonderful library can be found [here](https://pandas.pydata.org/). As you've seen, pandas is conveniently connected to the [Numpy](http://www.numpy.org/) and [Matplotlib](https://matplotlib.org/) libraries. # # ***Hint:*** This part of the data science pipeline will test those skills you acquired in the pandas Foundations course, Module One. # + [markdown] colab_type="text" id="Eni457VLcAU4" # #### 1.1. Importing Libraries # + colab={} colab_type="code" id="S1MNvAo4cAU4" # Let's import the pandas, numpy libraries as pd, and np respectively. 
import _ _ _ as _ _ _ import _ _ _ as _ _ _ # Load the pyplot collection of functions from matplotlib, as plt import _ _ _.pyplot as _ _ _ # + [markdown] colab_type="text" id="nXaI_sTJcAU7" # #### 1.2. Loading the data # # # Your data comes from the [London Datastore](https://data.london.gov.uk/): a free, open-source data-sharing portal for London-oriented datasets. # + colab={} colab_type="code" id="yQaBWqficAU8" # First, make a variable called url_LondonHousePrices, and assign it the following link, enclosed in quotation-marks as a string: # https://data.london.gov.uk/download/uk-house-price-index/70ac0766-8902-4eb5-aab5-01951aaed773/UK%20House%20price%20index.xls _ _ _ = "https://data.london.gov.uk/download/uk-house-price-index/70ac0766-8902-4eb5-aab5-01951aaed773/UK%20House%20price%20index.xls" # The dataset we're interested in contains the Average prices of the houses, and is actually on a particular sheet of the Excel file. # As a result, we need to specify the sheet name in the read_excel() method. # Put this data into a variable called properties. properties = pd._ _ _(url_LondonHousePrices, sheet_name='Average price', index_col= None) # + [markdown] colab_type="text" id="Z7D2LXfMcAU-" # ### 2. Cleaning, transforming, and visualizing # This second stage is arguably the most important part of any Data Science project. The first thing to do is take a proper look at the data. Cleaning forms the majority of this stage, and can be done both before or after Transformation. # # The end goal of data cleaning is to have tidy data. When data is tidy: # # 1. Each variable has a column. # 2. Each observation forms a row. # # Keep the end goal in mind as you move through this process, every step will take you closer. # # # # ***Hint:*** This part of the data science pipeline should test those skills you acquired in: # - Intermediate Python for data science, all modules. # - pandas Foundations, all modules. # - Manipulating DataFrames with pandas, all modules. # - Data Types for Data Science, Module Four. # - Python Data Science Toolbox - Part One, all modules # + [markdown] colab_type="text" id="h4ta27NycAU_" # #### 2.1. Exploring the data # + colab={} colab_type="code" id="QXVWC7ZecAU_" # First off, let's use .shape feature of pandas DataFrames to look at the number of rows and columns. _ _ _._ _ _ # + colab={} colab_type="code" id="R04S1lGycAVB" # Using the .head() method, let's check out the state of our dataset. _ _ _.head() # + [markdown] colab_type="text" id="-5ffSAxDcAVD" # Oh no! What are you supposed to do with this? # # You've got the data, but it doesn't look tidy. At this stage, you'd struggle to perform analysis on it. It is normal for your initial data set to be formatted in a way that is not conducive to analysis. A big part of our job is fixing that. # # Best practice is for pandas DataFrames to contain the observations of interest as rows, and the features of those observations as columns. You want tidy DataFrames: whose rows are observations and whose columns are variables. # # Notice here that the column headings are the particular boroughs, which is our observation of interest. The first column contains datetime objects that capture a particular month and year, which is a variable. Most of the other cell-values are the average proprety values of the borough corresponding to that time stamp. # # Clearly, we need to roll our sleeves up and do some cleaning. # + [markdown] colab_type="text" id="7_STF8sZcAVE" # #### 2.2. 
Cleaning the data (Part 1) # Data cleaning has a bad rep, but remember what your momma told you: cleanliness is next to godliness. Data cleaning can be really satisfying and fun. In the dark ages of programming, data cleaning was a tedious and difficult ordeal. Nowadays, new and improved tools have simplified the process. Getting good at data cleaning opens up a world of possibilities for data scientists and programmers. # # The first operation you want to do on the dataset is called **transposition**. you *transpose* a table when you flip the columns into rows, and *vice versa*. # # If you transpose this DataFrame then the borough names will become the row indices, and the date time objects will become the column headers. Since your end goal is tidy data, where each row will represent a borough and each column will contain data about that borough at a certain point in time, transposing the table bring you closer to where you want to be. # # Python makes transposition simple. # # Each pandas DataFrame already has the *.T* attribute which is the transposed version of that DataFrame. # # Assign the transposed version of the original to a new variable and call it *properties_T*. # # Boom! You’ve got a transposed table to play with. # + colab={} colab_type="code" id="MElF4IBacAVF" # Do this here properties_T = _ _ _ # + colab={} colab_type="code" id="E10vl2gbcAVG" # Let's check the head of our new Transposed DataFrame. _ _ _.head() # + [markdown] colab_type="text" id="fHvxke9XcAVI" # You've made some progress! But with new progress comes new issues. For one, the row indices of your DataFrame contain the names of the boroughs. You should never have a piece of information you want to analyze as an index, this information should be within the DataFrame itself. The indices should just be a unique ID, almost always a number. # # Those names are perhaps the most important piece of information! Put them where you can work with them. # + colab={} colab_type="code" id="HdouAxnTcAVJ" # To confirm what our row indices are, let's call the .index variable on our properties_T DataFrame. _ _ _.index # + colab={} colab_type="code" id="aao6yKGFcAVL" # Our suspicion was correct. # Call the .reset_index() method on properties_T to reset the indices, and the reassign the result to properties_T: properties_T = properties_T._ _ _ # + colab={} colab_type="code" id="tGj6YX1VcAVN" # Now let's check out our DataFrames indices: properties_T._ _ _ # + [markdown] colab_type="text" id="UZO2SqYTcAVP" # # Progress! # # The indicies are now a numerical RangeIndex, exactly what you want. # # **Note**: if you call the reset_index() line more than once, you'll get an error because a whole extra level of row indices will have been inserted! If you do this, don't worry. Just hit Kernel > Restart, then run all the cells up to here to get back to where you were. # # + colab={} colab_type="code" id="-q99HGLscAVP" # Call the head() function again on properties_T to check out the new row indices: _ _ _.head() # + [markdown] colab_type="text" id="BaW90dtbcAVR" # You're getting somewhere, but our column headings are mainly just integers. The first one is the string 'index' and the rest are integers ranging from 0 to 296, inclusive. # # # For the ultimate aim of having a *tidy* DataFrame, turn the datetimes found along the first row (at index 0) into the column headings. 
The resulting DataFrame will have boroughs as rows, the columns as dates (each representing a particular month), and the cell-values as the average property value sold in that borough for that month. # + colab={} colab_type="code" id="hvdvy-8rcAVR" # To confirm that our DataFrame's columns are mainly just integers, call the .columns feature on our DataFrame: properties_T._ _ _ # + [markdown] colab_type="text" id="C_-aS8tncAVT" # To confirm that the first row contains the proper values for column headings, use the ***iloc[] method*** on our *properties_T* DataFrame. Use index 0. You'll recall from DataCamp that if you use single square brackets, you'll return a series. If you use double square brackets, a DataFrame is returned. # + colab={} colab_type="code" id="6xWcjc1qcAVU" # Call the iloc[] method with double square brackets on the properties_T DataFrame, to see the row at index 0. properties_T._ _ _ # + [markdown] colab_type="text" id="szi9ZHrHcAVW" # **Notice that these values are all the months from January 1995 to August 2019, inclusive**. You can reassign the columns of your DataFrame the values in the row at index 0 by making use of the *.columns* feature. # + colab={} colab_type="code" id="ioW2FRhKcAVW" # Try this now. properties_T.columns = _ _ _.iloc[0] # + colab={} colab_type="code" id="OBHmvVH_cAVa" # Check out our DataFrame again: properties_T.head() # + [markdown] colab_type="text" id="l9Pa-QvrcAVb" # Drop the row at index 0! # # A good way to do this is reassign *properties_T* the value given by the result of calling ***drop()*** on it, with 0 passed to that method. # + colab={} colab_type="code" id="TiWzfXKfcAVc" # Have a go at this now. _ _ _ = properties_T._ _ _ # + colab={} colab_type="code" id="B32bbCFQcAVd" # Now check out our DataFrame again to see how it looks. properties_T.head() # + [markdown] colab_type="text" id="R1Teru9ecAVh" # You're slowly but surely getting there! Exciting, right? # # **Each column now represents a month and year, and each cell-value represents the average price of houses sold in borough of the corresponding row**. # # You have total control over your data! # + [markdown] colab_type="text" id="UazH1Wh_cAVi" # #### 2.3. Cleaning the data (Part 2) # You can see from the *.head()* list call that you need to rename some of your columns. # # 'Unnamed: 0' should be something like 'London Borough' and 'NaN' should be changed. # # Recall, that pandas DataFrames have a ***.rename()*** method. One of the keyworded arguments to this method is *columns*. You can assign it a dictionary whose keys are the current column names you want to change, and whose values are the desired new names. # # **Note**: you can change the 'Unnamed: 0' name of the first column just by including that string as a key in our dictionary, but 'NaN' stands for Not A Number, and is denoted by *pd.NaT*. Do not use quotes when you include this value. NaN means Not A Number, and NaT means Not A Time - both of these values represent undefined or unrepresenable values like 0/0. They are functionally Null values. Don't worry, we'll help you with this. # # Call the **rename()** method on *properties_T* and set the *columns* keyword equal to the following dictionary: # {'Unnamed: 0':'London_Borough', pd.NaT: 'ID'} # , then reassign that value to properties_T to update the DataFrame. # + colab={} colab_type="code" id="M5kHlbxucAVi" # Try this here. 
properties_T = properties_T._ _ _(_ _ _ = {'Unnamed: 0':'London_Borough', pd.NaT: 'ID'}) # + colab={} colab_type="code" id="CI5yUUK6cAVl" # Let's check out the DataFrame again to admire our good work. properties_T.head() # + [markdown] colab_type="text" id="Rd8YFt0CcAVn" # You're making great leaps forward, but your DataFrame still has lots of columns. Let's find out exactly how many it has by calling ***.columns*** on our DataFrame. # # + colab={} colab_type="code" id="ZnBbY8iFcAVn" # Try this here. properties_T.columns # + [markdown] colab_type="text" id="O1ZskDM0cAVp" # #### 2.4. Transforming the data # Our data would be tidier if we had fewer columns. # # In fact a ***single*** column for time be better than nearly 300? This single column will contain all of the datetimes in our current column headings. # # **Remember** the two most important properties of tidy data are: # 1. **Each column is a variable.** # # 2. **Each row is an observation.** # # One of the miraculous things about pandas is ***melt()***, which enables us to melt those values along the column headings of our current DataFrame into a single column. # # Let's make a new DataFrame called clean_properties, and assign it the return value of ***pd.melt()*** with the parameters: *properties_T* and *id_vars = ['Borough', 'ID']*. # # The result will be a DataFrame with rows representing the average house price within a given month and a given borough. Exactly what we want. # + colab={} colab_type="code" id="eZ4-eIHAcAVp" # Try this here: _ _ _ = pd._ _ _(properties_T, id_vars= ['London_Borough', 'ID']) # + colab={} colab_type="code" id="ukbTKJ9gcAVr" clean_properties.head() # + [markdown] colab_type="text" id="GveQuAwCcAVs" # Awesome. This is looking good. # # We now want to rename the '0' column 'Month', and the 'value' column 'Average_price'. # # Use the ***rename()*** method, and reassign *clean_properties* with the result. # + colab={} colab_type="code" id="Cef4yksfcAVt" # Re-name the column names _ _ _ = clean_properties._ _ _(columns = {0: 'Month', 'value': 'Average_price'}) # + colab={} colab_type="code" id="9CCkY4mdcAVv" # Check out the DataFrame: clean_properties.head() # + [markdown] colab_type="text" id="RSUuXokncAVx" # You need to check out the data types of our clean_properties DataFrame, just in case you need to do any type conversions. # + colab={} colab_type="code" id="BW6IB1vzcAVx" # Let's use the .dtypes attribute to check the data types of our clean_properties DataFrame: clean_properties._ _ _ # + [markdown] colab_type="text" id="rz9pr-TBcAVz" # You should change the Average_price column to a numeric type, specifically, a float. # # Call the ***to_numeric()*** method on *pd*, pass the 'Average_price' column into its brackets, and reassign the result to the *clean_properties* 'Average_price' column. # + colab={} colab_type="code" id="W3UQDMuNcAV0" # Try this here clean_properties['_ _ _'] = pd._ _ _(clean_properties['Average_price']) # + colab={} colab_type="code" id="eP80X64kcAV2" # Check out the new data types: clean_properties.dtypes # + colab={} colab_type="code" id="AjYRqVAycAV3" # To see if there are any missing values, we should call the count() method on our DataFrame: clean_properties._ _ _() # + [markdown] colab_type="text" id="NGqAbVlMcAV5" # #### 2.5. Cleaning the data (Part 3) # Houston, we have a problem! # # There are fewer data points in some of the columns. Why might this be? Let's investigate. 
# # Since there are only 32 London boroughs, check out the unique values of the 'London_Borough' column to see if they're all there. # # Just call the ***unique()*** method on the London_Borough column. # + colab={} colab_type="code" id="jWsQtcrtcAV6" # Do this here. clean_properties['_ _ _']._ _ _() # + [markdown] colab_type="text" id="Sk9R8sNpcAV7" # Aha! Some of these strings are not London boroughs. You're basically , getting ever closer solving the mystery! # # The strings that don't belong: # - 'Unnamed: 34' # - 'Unnamed: 37' # - 'NORTH EAST' # - 'NORTH WEST' # - 'YORKS & THE HUMBER' # - 'EAST MIDLANDS' # - 'WEST MIDLANDS' # - 'EAST OF ENGLAND' # - 'LONDON' # - 'SOUTH EAST' # - 'SOUTH WEST' # - 'Unnamed: 47' # - 'England' # # Go see what information is contained in rows where London_Boroughs is 'Unnamed’ and, if there’s nothing valuable, we can drop them. To investigate, subset the clean_properties DataFrame on this condition. # + colab={} colab_type="code" id="Mk1e3woZcAV8" # Subset clean_properties on the condition: df['London_Borough'] == 'Unnamed: 34' to see what information these rows contain. clean_properties[_ _ _['_ _ _'] == '_ _ _'].head() # + colab={} colab_type="code" id="P1Uqu0q9cAV9" # Let's do the same for 'Unnamed: 37': clean_properties[clean_properties['London_Borough'] _ _ _ 'Unnamed: 37'].head() # + [markdown] colab_type="text" id="2mRyfAgXcAV_" # These rows don't contain any valuable information. Delete them. # # # + colab={} colab_type="code" id="OlvJPQ6KcAWA" # Let's look at how many rows have NAs as their value for ID. # To this end, subset clean_properties on the condition: clean_properties['ID'].isna(). # Notice that this line doesn't actually reassign a new value to clean_properties. _ _ _[clean_properties['ID']._ _ _()] # + [markdown] colab_type="text" id="DOPMYa-8cAWD" # You always have a ***choice*** about how to deal with Null (NaN) values. We'll teach you two methods today: # 1. filtering on ***notna()*** # 2. reassigning on ***dropna()*** # # Try ***notna()*** first. ***notna()*** will return a series of booleans, where the value will be true if there's a not a null and false if there is a null. # # Make a new variable called *NaNFreeDF1* and assign it the result of filtering *clean_properties* on the condition: *clean_properties['Average_price'].notna()* # + colab={} colab_type="code" id="ZY9aC1zHcAWD" # Try your hand at method (1) here: _ _ _ = clean_properties[_ _ _['Average_price'].notna()] NaNFreeDF1.head(48) # + colab={} colab_type="code" id="EJJnp-AtcAWF" # If we do a count on our new DataFrame, we'll see how many rows we have that have complete information: NaNFreeDF1._ _ _() # + [markdown] colab_type="text" id="47b-DUF7cAWH" # Looks good! # # For completeness, now use ***dropna()***. ***dropna()*** will drop all null values. # # Make a new variable called *NaNFreeDF2*, and assign it the result of calling ***dropna()*** on *clean_properties*. # + colab={} colab_type="code" id="NCQd17hfcAWI" # filtering the data with NaN values _ _ _ = clean_properties._ _ _() NaNFreeDF2.head(48) # + colab={} colab_type="code" id="bTaY4oO5cAWJ" # Let's do a count on this DataFrame object: NaNFreeDF2.count() # + colab={} colab_type="code" id="P3X4CvA0cAWM" NaNFreeDF2['London_Borough'].unique() # + [markdown] colab_type="text" id="KVd0KUb8cAWR" # Both these methods did the job! Thus, you can pick either resultant DataFrame. 
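# If the difference between the two approaches feels abstract, here is a tiny self-contained sketch (toy numbers, nothing to do with the London data) showing that filtering one column on *notna()* and calling *dropna()* on the whole DataFrame can keep different rows once the NaNs live in different columns:
#
# ```python
# import numpy as np
# import pandas as pd
#
# toy = pd.DataFrame({'borough': ['A', 'B', None, 'D'],
#                     'avg_price': [100.0, np.nan, 300.0, 400.0]})
#
# kept_by_notna = toy[toy['avg_price'].notna()]  # keeps the row whose borough is missing
# kept_by_dropna = toy.dropna()                  # drops any row with a NaN in *any* column
#
# print(kept_by_notna.shape, kept_by_dropna.shape)  # (3, 2) vs (2, 2)
# ```
#
# In this dataset both methods evidently give usable results (as noted above), so you can pick either; the toy example is just a reminder that they are not interchangeable in general.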
# + colab={} colab_type="code" id="KFb1y4u-cAWR" # Using the .shape attribute, compare the dimenions of clean_properties, NaNFreeDF1, and NaNFreeDF2: print(clean_properties._ _ _) print(_ _ _.shape) print(NaNFreeDF2.shape) # + [markdown] colab_type="text" id="R1s7zSOMcAWT" # Our suggestions is to pick NaNFreeDF2. # # Go drop the rest of the invalid 'London Borough' values. # # An elegant way to do this is to make a list of all those invalid values, then use the *isin()* method, combined with the negation operator *~*, to remove those values. Call this list *nonBoroughs*. # + colab={} colab_type="code" id="ZKv_F2zTcAWT" # A list of non-boroughs. nonBoroughs = ['Inner London', 'Outer London', 'NORTH EAST', 'NORTH WEST', 'YORKS & THE HUMBER', 'EAST MIDLANDS', 'WEST MIDLANDS', 'EAST OF ENGLAND', 'LONDON', 'SOUTH EAST', 'SOUTH WEST', 'England'] # + [markdown] colab_type="text" id="htCU6bAlcAWV" # Filter *NanFreeDF2* first on the condition that the rows' values for *London_Borough* is *in* the *nonBoroughs* list. # + colab={} colab_type="code" id="A93oUrRJcAWV" # Do this here. NaNFreeDF2[NaNFreeDF2.London_Borough._ _ _(_ _ _)] # + [markdown] colab_type="text" id="7dGYKzu5cAWW" # Now put the negation operator *~* before the filter statement to get just those rows whose values for *London_Borough* is **not** in the *nonBoroughs* list: # + colab={} colab_type="code" id="QGsHnDUgcAWX" NaNFreeDF2[_ _ _NaNFreeDF2.London_Borough._ _ _(nonBoroughs)] # + [markdown] colab_type="text" id="XMd2zFW_cAWa" # Then just execute the reassignment: # + colab={} colab_type="code" id="3Y9Tm7BVcAWa" _ _ _ = NaNFreeDF2[~NaNFreeDF2._ _ _.isin(nonBoroughs)] # + colab={} colab_type="code" id="91DZpDsxcAWe" NaNFreeDF2.head() # + [markdown] colab_type="text" id="2-P-vXBFcAWg" # Make a new variable called simply *df*, which is what data scientists typically call their final-stage DataFrame that's ready for analysis. # + colab={} colab_type="code" id="6sUjFQ1-cAWh" # Do that here. _ _ _ = NaNFreeDF2 # + colab={} colab_type="code" id="IGgspjXGcAWi" df.head() # + colab={} colab_type="code" id="h7rvtjQ_cAWk" df.dtypes # + [markdown] colab_type="text" id="Gh-4l57acAWm" # #### 2.6. Visualizing the data # It'll help to get a visual idea of the price shift occurring in the London boroughs. # # Restrict your observations to Camden for now. # # How have housing prices changed since 1995? # + colab={} colab_type="code" id="-oow_JhLcAWm" # First of all, make a variable called camden_prices, and assign it the result of filtering df on the following condition: # df['London_Borough'] == 'Camden' camden_prices = df[_ _ _['_ _ _'] == 'Camden'] # Make a variable called ax. Assign it the result of calling the plot() method, and plugging in the following values as parameters: # kind ='line', x = 'Month', y='Average_price' _ _ _ = camden_prices.plot(_ _ _ ='line', _ _ _ = 'Month', y='_ _ _') # Finally, call the set_ylabel() method on ax, and set that label to the string: 'Price'. ax._ _ _('_ _ _') # + [markdown] colab_type="text" id="onOBD9CKcAWo" # To limit the amount of temporal data-points, it's useful to extract the year from every value in your *Month* column. 300 is more datapoints than we need. # # To this end, apply a ***lambda function***. The logic works as follows. You'll: # 1. look through the `Month` column # 2. extract the year from each individual value in that column # 3. store that corresponding year as separate column # + colab={} colab_type="code" id="rEpQSO0QcAWo" # Try this yourself. 
df['Year'] = df['Month'].apply(lambda t: t.year) # Call the tail() method on df df._ _ _() # + [markdown] colab_type="text" id="Nna5hT-PcAWp" # To calculate the mean house price for each year, you first need to **group by** the London_Borough and Year columns. # # Make a new variable called *dfg*, and assign it the result of calling the ***groupby()*** method on *df*. Plug in the parameters: by=['Borough', 'Year']. Chain ***mean()*** onto your function to get the average of values. # # We've helped you with this line, it's a little tricky. # + colab={} colab_type="code" id="U8erRWaccAWq" # Using the function 'groupby' will help you to calculate the mean for each year and for each Borough. ## As you can see, the variables Borough and Year are now indices dfg = _ _ _._ _ _(by=['London_Borough', 'Year']).mean() dfg.sample(10) # + colab={} colab_type="code" id="UeIsrPVBcAWr" # Let's reset the index for our new DataFrame dfg, and call the head() method on it. dfg = dfg._ _ _() dfg.head() # + [markdown] colab_type="text" id="gsJ0h6i-cAWs" # ### 3. Modeling # Now comes the really exciting stuff. # # You'll create a function that will calculate a ratio of house prices, that compares the price of a house in 2018 to the price in 1998. # # Call this function create_price_ratio. # # You want this function to: # # 1. Take a filter of dfg, specifically where this filter constrains the London_Borough, as an argument. For example, one admissible argument should be: **dfg[dfg['London_Borough']=='Camden']**. # # 2. Get the Average Price for that borough for 1998 and, seperately, for 2018. # # 3. Calculate the ratio of the Average Price for 1998 divided by the Average Price for 2018. # # 4. Return that ratio. # # Once you've written this function, you'll use it to iterate through all the unique London Boroughs and work out the ratio capturing the difference of house prices between 1998 and 2018. # # ***Hint***: This section should test the skills you acquired in: # - Python Data Science Toolbox (Part 1), all modules # # + colab={} colab_type="code" id="P2Fx4GnscAWs" # Here's where you should write your function: def _ _ _(d): y1998 = float(d['Average_price'][d['Year']==1998]) y2018 = float(d['Average_price'][d['Year']==2018]) ratio = [_ _ _/_ _ _] return ratio # + colab={} colab_type="code" id="-L3XXDOfcAWu" # Test out the function by calling it with the following argument: # dfg[dfg['London_Borough']=='Barking & Dagenham'] create_price_ratio(_ _ _[dfg['London_Borough']=='Barking & Dagenham']) # + colab={} colab_type="code" id="4iznOphscAWv" # We want to do this for all of the London Boroughs. # First, let's make an empty dictionary, called final, where we'll store our ratios for each unique London_Borough. final = {} # + colab={} colab_type="code" id="vXocw7CccAWw" # Now let's declare a for loop that will iterate through each of the unique elements of the 'London_Borough' column of our DataFrame dfg. # Call the iterator variable 'b'. for _ _ _ in dfg['London_Borough'].unique(): # Let's make our parameter to our create_price_ratio function: i.e., we subset dfg on 'London_Borough' == b. 
borough = dfg[dfg['_ _ _'] == _ _ _] # Make a new entry in the final dictionary whose value's the result of calling create_price_ratio with the argument: borough final[b] = create_price_ratio(borough) # We use the function and incorporate that into a new key of the dictionary print(final) # + [markdown] colab_type="text" id="WNyHvOG8cAWy" # Now you have a dictionary with data about the ratio of average prices for each borough between 1998 and 2018, but you can make it prettier by converting it to a DataFrame. # + colab={} colab_type="code" id="VqkwL6mUcAWy" # Make a variable called df_ratios, and assign it the result of calling the DataFrame method on the dictionary final. _ _ _ = pd.DataFrame(final) # + colab={} colab_type="code" id="padD0pVNcAW0" # Call the head() method on this variable to check it out. df_ratios.head() # + colab={} colab_type="code" id="01lBHJxrcAW2" # All we need to do now is transpose it, and reset the index! df_ratios_T = _ _ _.T df_ratios = df_ratios_T.reset_index() df_ratios.head() # + colab={} colab_type="code" id="zqAiWFF5cAW5" # Let's just rename the 'index' column as 'London_Borough', and the '0' column to '2018'. df_ratios._ _ _(columns={'index':'Borough', 0:'2018'}, inplace=True) df_ratios.head() # + colab={} colab_type="code" id="1Bj_msyacAW6" # Let's sort in descending order and select the top 15 boroughs. # Make a variable called top15, and assign it the result of calling sort_values() on df_ratios. _ _ _ = df_ratios._ _ _(by='2018',ascending=False).head(15) print(top15) # + colab={} colab_type="code" id="Dat6ffSNcAW7" # Let's plot the boroughs that have seen the greatest changes in price. # Make a variable called ax. Assign it the result of filtering top15 on 'Borough' and '2018', then calling plot(), with # the parameter kind = 'bar'. _ _ _ = top15[['Borough','_ _ _']].plot(kind='bar') ax.set_xticklabels(top15.Borough) # + [markdown] colab_type="text" id="dh8A7TaAcAW8" # ### 4. Conclusion # Congratulation! You're done. Excellent work. # # What can you conclude? Type your conclusions below. # # We hope you enjoyed this practical project. It should have consolidated your data cleaning and pandas skills by looking at a real-world problem with the kind of dataset you might encounter as a budding data scientist. 
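# Purely as a recap of the pipeline you just assembled (with made-up numbers, not the London data), the 'group, average, ratio, rank' chain from section 3 can be sketched in a few self-contained lines. The toy borough names and prices below are illustrative only:
#
# ```python
# import pandas as pd
#
# toy = pd.DataFrame({'London_Borough': ['Camden', 'Camden', 'Hackney', 'Hackney'],
#                     'Year': [1998, 2018, 1998, 2018],
#                     'Average_price': [120000.0, 820000.0, 90000.0, 750000.0]})
#
# toy_g = toy.groupby(['London_Borough', 'Year'])['Average_price'].mean().reset_index()
#
# def price_ratio(borough_df):
#     """1998 average price divided by 2018 average price (smaller ratio = bigger increase)."""
#     p1998 = borough_df.loc[borough_df['Year'] == 1998, 'Average_price'].iloc[0]
#     p2018 = borough_df.loc[borough_df['Year'] == 2018, 'Average_price'].iloc[0]
#     return p1998 / p2018
#
# ratios = {b: price_ratio(toy_g[toy_g['London_Borough'] == b])
#           for b in toy_g['London_Borough'].unique()}
# print(sorted(ratios.items(), key=lambda kv: kv[1]))  # smallest ratio first = largest growth first
# ```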
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tf2 # language: python # name: tf2 # --- # # Load Library import pandas as pd import os import numpy as np # # Load Data DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/" HOUSING_PATH = os.path.join("datasets", "housing") HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz" def load_housing_data(housing_path=HOUSING_PATH): csv_path = os.path.join(housing_path, "housing.csv") return pd.read_csv(csv_path) housing = load_housing_data() housing.head() # # Split into train test housing['income_cat'] = pd.cut(housing['median_income'], bins=[0, 1.5, 3.0, 4.5, 6.0, np.inf], labels=[1,2,3,4,5]) # + from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing['income_cat']): train = housing.loc[train_index] test = housing.loc[test_index] # - for set_ in (train, test): set_.drop("income_cat", axis=1, inplace=True) # + train_labels = train["median_house_value"].copy() train = train.drop("median_house_value", axis=1) test_labels = test["median_house_value"].copy() test = test.drop("median_house_value", axis=1) # - # # Prepare the data # + from sklearn.base import BaseEstimator, TransformerMixin rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6 class CombinedAttributesAdder(BaseEstimator, TransformerMixin): def __init__(self, add_bedrooms_per_room = True): self.add_bedrooms_per_room = add_bedrooms_per_room def fit(self, X, y=None): return self def transform(self, X): rooms_per_household = X[:, rooms_ix] / X[:, households_ix] population_per_household = X[:, population_ix] / X[:, households_ix] if self.add_bedrooms_per_room: bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix] return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room] else: return np.c_[X, rooms_per_household, population_per_household] # + from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy='median')), ('attribs_adder', CombinedAttributesAdder()), ('std_scaler', StandardScaler()) ]) from sklearn.compose import ColumnTransformer train_num = train.drop('ocean_proximity', axis=1) num_attribs = list(train_num) cat_attribs = ['ocean_proximity'] full_pipeline = ColumnTransformer([ ("num", num_pipeline, num_attribs), ("cat", OneHotEncoder(), cat_attribs) ]) # - train_prepared = full_pipeline.fit_transform(train) # + from sklearn.svm import SVR regr = SVR(C=1.0, kernel='linear') regr.fit(train_prepared, train_labels) # + from sklearn.model_selection import cross_val_score scores1 = cross_val_score(regr, train_prepared, train_labels, scoring='neg_mean_squared_error') scores1_rmse = np.sqrt(-scores1) # - regr2 = SVR(C=3.0, kernel='rbf') regr2.fit(train_prepared, train_labels) # cross-validate the RBF-kernel model scores2 = cross_val_score(regr2, train_prepared, train_labels, scoring='neg_mean_squared_error') scores2_rmse = np.sqrt(-scores2) def display_scores(scores): print("Scores:", scores) print("Mean:", scores.mean()) print("Standard deviation:", scores.std()) display_scores(scores1_rmse) display_scores(scores2_rmse) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: 
'1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data (pre) processing using `DataTransformer` and `Pipeline` # # In this notebook, we will demonstrate how to perform some common preprocessing tasks using `darts` # # As a toy example, we will use the [Monthly Milk Production dataset](https://www.kaggle.com/tejasgosavi/monthlymilkproductionpounds). # ## The `DataTransformer` abstraction # # `DataTransformer` aims to provide a unified way of dealing with transformations of `TimeSeries`: # # - `transform()` is implemented by all transformers. This method takes in either a `TimeSeries` of a sequence of `TimeSeries`, applies the transformation and returns it as a new `TimeSeries`/sequence of `TimeSeries. # - `inverse_transform()` is implemented by transformers for which an inverse transformation function exists. It works in a similar way as `transform()` # - `fit()` allows transformers to extract some information from the time series first before calling `transform()` or `inverse_transform()` # # ## Setting up the example # + # fix python path if working locally from utils import fix_pythonpath_if_working_locally fix_pythonpath_if_working_locally() # %load_ext autoreload # %autoreload 2 # %matplotlib inline # + import pandas as pd import matplotlib.pyplot as plt import numpy as np from darts import TimeSeries from darts.models import ExponentialSmoothing from darts.dataprocessing.transformers import ( Scaler, MissingValuesFiller, Mapper, InvertibleMapper, ) from darts.dataprocessing import Pipeline from darts.metrics import mape from darts.utils.statistics import check_seasonality, plot_acf, plot_residuals_analysis from darts.utils.timeseries_generation import linear_timeseries from darts.datasets import MonthlyMilkDataset, MonthlyMilkIncompleteDataset import warnings warnings.filterwarnings("ignore") import logging logging.disable(logging.CRITICAL) # - # ### Reading the data and creating a time series # + series = MonthlyMilkDataset().load() print(series) series.plot() # - # ## Using a transformer: Rescaling a time series using `Scaler`. # # Some applications may require your datapoints to be between 0 and 1 (e.g. to feed a time series to a Neural Network based forecasting model). This is easily achieved using the default `Scaler`, which is a wrapper around `sklearn.preprocessing.MinMaxScaler(feature_range=(0, 1))`. scaler = Scaler() rescaled = scaler.fit_transform(series) print(rescaled) # This scaling can easily be inverted too, by calling `inverse_transform()` back = scaler.inverse_transform(rescaled) print(back) # Note that the `Scaler` also allows to specify other scalers in its constructor, as long as they implement `fit()`, `transform()` and `inverse_transform()` methods on `TimeSeries` (typically scalers from `scikit-learn`) # ## Another example : `MissingValuesFiller` # # Let's look at handling missing values in a dataset. incomplete_series = MonthlyMilkIncompleteDataset().load() incomplete_series.plot() # + filler = MissingValuesFiller() filled = filler.transform(incomplete_series, method="quadratic") filled.plot() # - # Since `MissingValuesFiller` wraps around `pd.interpolate` by default, we can also provide arguments to the `pd.interpolate()` function when calling `transform()` filled = filler.transform(incomplete_series, method="quadratic") filled.plot() # ## `Mapper` and `InvertibleMapper`: A special kind of transformers # # Sometimes you may want to perform a simple `map()` function on the data. 
This can also be done using data transformers. `Mapper` takes in a function and applies it to the data elementwise when calling `transform()`. # # `InvertibleMapper` also allows to specify an inverse function at creation (if there is one) and provides the `inverse_transform()` method. # + lin_series = linear_timeseries(start_value=0, end_value=2, length=10) squarer = Mapper(lambda x: x ** 2) squared = squarer.transform(lin_series) lin_series.plot(label="original") squared.plot(label="squared") plt.legend() # - # ### More complex (and useful) transformations # # In the Monthly Milk Production dataset used earlier, some of the difference between the months comes from the fact that some months contain more days than others, resulting in a larger production of milk during those months. This makes the time series more complex, and thus harder to predict. # + training, validation = series.split_before(pd.Timestamp("1973-01-01")) model = ExponentialSmoothing() model.fit(training) forecast = model.predict(36) plt.title("MAPE = {:.2f}%".format(mape(forecast, validation))) series.plot(label="actual") forecast.plot(label="forecast") plt.legend() # - # To take this fact into account and achieve better performance, we could instead: # # 1. Transform the time series to represent the average daily production of milk for each month (instead of the total production per month) # 2. Make a forecast # 3. Inverse the transformation # # Let's see how this would be implemented using `InvertibleMapper` and `pd.timestamp.days_in_month` # # (Idea taken from ["Forecasting: principles and Practice"](https://otexts.com/fpp2/transformations.html) by Hyndman and Athanasopoulos) # To transform the time series, we have to divide a monthly value (the data point) by the number of days in the month given by the value's corresponding timestamp. # # `map()` (and thus `Mapper` / `InvertibleMapper`) makes this convenient by allowing to apply a transformation function which uses both the value and its timestamp to compute the new value: `f(timestamp, value) = new_value` # + # Transform the time series toDailyAverage = InvertibleMapper( fn=lambda timestamp, x: x / timestamp.days_in_month, inverse_fn=lambda timestamp, x: x * timestamp.days_in_month, ) dailyAverage = toDailyAverage.transform(series) dailyAverage.plot() # + # Make a forecast dailyavg_train, dailyavg_val = dailyAverage.split_after(pd.Timestamp("1973-01-01")) model = ExponentialSmoothing() model.fit(dailyavg_train) dailyavg_forecast = model.predict(36) plt.title("MAPE = {:.2f}%".format(mape(dailyavg_forecast, dailyavg_val))) dailyAverage.plot() dailyavg_forecast.plot() plt.legend() # - # Inverse the transformation # Here the forecast is stochastic; so we take the median value forecast = toDailyAverage.inverse_transform(dailyavg_forecast) plt.title("MAPE = {:.2f}%".format(mape(forecast, validation))) series.plot(label="actual") forecast.plot(label="forecast") plt.legend() # ## Chaining transformations : introducing `Pipeline` # # Now suppose that we both want to apply the above transformation (daily averaging), and rescale the dataset between 0 and 1 to use a Neural Network based forecasting model. Instead of applying these two transformations separately, and then inversing them separately, we can use a `Pipeline`. # pipeline = Pipeline([toDailyAverage, scaler]) transformed = pipeline.fit_transform(training) transformed.plot() # If all transformations in the pipeline are invertible, the `Pipeline` object is too. 
back = pipeline.inverse_transform(transformed) back.plot() # Recall now the incomplete series from `monthly-milk-incomplete.csv`. Suppose that we want to encapsulate all our preprocessing steps into a Pipeline consisting of a `MissingValuesFiller` for filling the missing values and a `Scaler` for scaling the dataset between 0 and 1. # + incomplete_series = MonthlyMilkIncompleteDataset().load() filler = MissingValuesFiller() scaler = Scaler() pipeline = Pipeline([filler, scaler]) transformed = pipeline.fit_transform(incomplete_series) # - # Suppose we have trained a Neural Network and produced some predictions. Now, we want to scale back our data. Unfortunately, since the `MissingValuesFiller` is not an `InvertibleDataTransformer` (why on Earth would someone want to insert missing values in the results!?), the inverse transformation will raise an exception: _ValueError: Not all transformers in the pipeline can perform inverse_transform_. # # Frustrating, right? Fortunately, you don't have to re-run everything from scratch, excluding the `MissingValuesFiller` from the `Pipeline`. Instead, you can just set the `partial` argument of the `inverse_transform` method to True. In this case, the inverse transformation is performed while skipping the non-invertible transformers. back = pipeline.inverse_transform(transformed, partial=True) # ## Processing multiple TimeSeries # # Often, we have to deal with multiple TimeSeries. Darts supports sequences of TimeSeries as input to transformers and pipelines, so that you don't have to take care of processing each series separately. Furthermore, it will take care of storing the parameters used for each TimeSeries (e.g., with scalers). # + series = MonthlyMilkDataset().load() incomplete_series = MonthlyMilkIncompleteDataset().load() multiple_ts = [incomplete_series, series[:10]] filler = MissingValuesFiller() scaler = Scaler() pipeline = Pipeline([filler, scaler]) transformed = pipeline.fit_transform(multiple_ts) for ts in transformed: ts.plot() # - back = pipeline.inverse_transform(transformed, partial=True) for ts in back: ts.plot() # ## Monitoring & parallelising data processing # # Sometimes we also have to deal with huge datasets. In these cases, processing each sample sequentially can take quite a long time. Darts can help both with monitoring the transformations and with processing multiple samples in parallel, when possible. # # Setting the `verbose` parameter in each transformer or pipeline will create some progress bars: # + series = MonthlyMilkIncompleteDataset().load() huge_number_of_series = [series] * 10000 scaler = Scaler(verbose=True, name="Basic") transformed = scaler.fit_transform(huge_number_of_series) # - # We now know for how long we will have to wait. But since nobody loves wasting time waiting, we can leverage multiple cores of our machine to process data in parallel. We can do so by setting the `n_jobs` parameter (same usage as in `sklearn`). # # **Note**: the speed-up closely depends on the number of available cores and the 'CPU-intensiveness' of the transformation. 
# setting n_jobs to -1 will make the library using all the cores available in the machine scaler = Scaler(verbose=True, n_jobs=-1, name="Faster") scaler.fit(huge_number_of_series) back = scaler.transform(huge_number_of_series) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #from autograd.scipy.stats import norm #from autograd.scipy.stats import gamma #import autograd.numpy as np from scipy.stats import norm from scipy.stats import gamma import numpy as np import matplotlib.pyplot as plt from astropy.table import Table, unique, Column, hstack, vstack #from autograd import grad from xdgmm import XDGMM #from autograd import hessian from scipy.optimize import minimize import matplotlib as mpl # + #import autograd.numpy as np #import autograd.numpy.linalg as npla #import autograd.scipy.misc as scpm import scipy.misc as scpm #import scipy.special.logsumexp as logsumexp def mog_logprob(x, means, icovs, lndets, pis): """ compute the log likelihood according to a mixture of gaussians with means = [mu0, mu1, ... muk] icovs = [C0^-1, ..., CK^-1] lndets = ln [|C0|, ..., |CK|] pis = [pi1, ..., piK] (sum to 1) at locations given by x = [x1, ..., xN] """ xx = np.atleast_2d(x) D = xx.shape[1] centered = xx[:,:,np.newaxis] - means.T[np.newaxis,:,:] solved = np.einsum('ijk,lji->lki', icovs, centered) logprobs = - 0.5*np.sum(solved * centered, axis=1) - (D/2.)*np.log(2*np.pi) \ - 0.5*lndets + np.log(pis) logprob = scpm.logsumexp(logprobs, axis=1) if np.isscalar(x) or len(x.shape) == 1: return logprob[0] else: return logprob # + def predicted_color(A, c, alpha=0.3): return c + A*alpha def predicted_magnitude(A, M, d, beta=0.16): return M + A*beta + 5.*np.log10(d*1e3/10.) def logjoint(theta, chat, mhat, varpihat, sigmac, sigmam, sigmavarpi, pis, means, icovs, lndets): lnA, M, c, lnd = theta A, d = np.exp(lnA), np.exp(lnd) lnp_c = norm.logpdf(chat, predicted_color(A, c), sigmac) lnp_m = norm.logpdf(mhat, predicted_magnitude(A, M, d), sigmam) lnp_varpi = norm.logpdf(varpihat, 1./d, sigmavarpi) lnp_Mc = mog_logprob(np.array([c,M]), means, icovs, lndets, pis) if (A >= 0) & (A < 5): lnp_A = 0 else: lnp_A = -np.inf if (d > 0) & (d < 10): lnp_d = 0 else: lnp_d = -np.inf string = 'p(c):{0} p(m):{1} p(varpi):{2} p(Mc):{3} p(A):{4} p(d):{5}'.format(lnp_c, lnp_m, lnp_varpi, lnp_Mc, lnp_A, lnp_d) #print(string) return np.sum([lnp_c, lnp_m, lnp_varpi, lnp_Mc, lnp_A, lnp_d]) # + def plotXdgmm(xdgmm, ax, c='k', lw=1, label='prior', step=0.001): ts = np.arange(0, 2. * np.pi, step) #magic amps = xdgmm.weights mus = xdgmm.mu Vs = xdgmm.V for gg in range(xdgmm.n_components): if amps[gg] == np.max(amps): label=label else: label=None w, v = np.linalg.eigh(Vs[gg]) points = np.sqrt(w[0]) * (v[:, 0])[:,None] * (np.cos(ts))[None, :] + \ np.sqrt(w[1]) * (v[:, 1])[:,None] * (np.sin(ts))[None, :] + \ mus[gg][:, None] ax.plot(points[0,:], points[1,:], c, lw=lw, alpha=amps[gg]/np.max(amps), rasterized=True, label=label) def plotgaussian1sigma(ax, mu, V, c='k', lw=1, label='prior', step=0.001, alpha=1.0): ts = np.arange(0, 2. 
* np.pi, step) #magic w, v = np.linalg.eigh(V) points = np.sqrt(w[0]) * (v[:, 0])[:,None] * (np.cos(ts))[None, :] + \ np.sqrt(w[1]) * (v[:, 1])[:,None] * (np.sin(ts))[None, :] + \ mu[:, None] ax.plot(points[0,:], points[1,:], c, lw=lw, alpha=alpha, rasterized=True, label=label) return points # - ncomp = 256 filename = 'rjce_fullcmd_sfd_0.01_{0}G.fits'.format(ncomp) xdgmm = XDGMM(filename=filename, method='Bovy') lndets = np.array([np.linalg.slogdet(s)[1] for s in xdgmm.V]) icovs = np.array([np.linalg.inv(s) for s in xdgmm.V]) means = xdgmm.mu pis = xdgmm.weights datahigh = Table.read('dustHighLat-result.fits.gz') datalow = Table.read('dustLowLat-result.fits.gz') data = vstack((datahigh, datalow)) mhat = data['w2mpro'] w2hat = data['w2mpro'] hhat = data['h_m'] chat = data['h_m'] - data['w2mpro'] varpihat = data['parallax'] sigmac = np.sqrt(data['h_msigcom']**2 + data['w2mpro_error']**2.) sigmam = data['w2mpro_error'] sigmavarpi = data['parallax_error'] sigmaw2 = data['w2mpro'] sigmah = data['h_msigcom'] sample = xdgmm.sample(int(len(chat)/10)) # + method= None #'L-BFGS-B' i = 1 fig, ax = plt.subplots(1, 1, figsize=(10, 10)) xlim = [-0.5, 1.0] ylim = [-6, 10] nbins = 100 xbins = np.linspace(xlim[0], xlim[1], nbins) ybins = np.linspace(ylim[0], ylim[1], nbins) H, xe, ye = np.histogram2d(sample[:,0], sample[:,1], bins=(xbins, ybins)) im = ax.pcolormesh(xe, ye, H.T, norm=mpl.colors.LogNorm(), cmap=plt.get_cmap('Blues')) im.set_rasterized(True) for i in range(0, 20): args = (chat[i], mhat[i], varpihat[i], sigmac[i], sigmam[i], sigmavarpi[i], pis, means, icovs, lndets) theta_0 = np.array([0.1, 1., 0.08, 0.5]) # A, M, c, d = theta obj = lambda *args: -1.*logjoint(*args) res = minimize(obj, theta_0, args=args, method=method) #, jac=gobj) theta_hat = res.x print('Dust: {0:.3f} Absmag: {1:0.3f} color: {2:0.3f} distance: {3:0.3f}'.format(theta_hat[0], theta_hat[1], theta_hat[2], theta_hat[3])) am = mhat[i] - 5.*np.log10(1./(varpihat[i]/1e3*10)) ax.scatter(chat[i], am, s=10, c='black') ax.scatter(theta_hat[2], theta_hat[1], c='red', s=10) plt.arrow(chat[i], am, theta_hat[2] - chat[i], theta_hat[1] - am) ax.set_xlabel('H-W2', fontsize=20) ax.set_ylabel('$M_{W2}$', fontsize=20) ax.set_xlim(xlim) ax.set_ylim(ylim[::-1]) plt.savefig('posteriors.png', rasterized=True) # - xx, yy = np.meshgrid(xe, ye) values = np.zeros(len(xx.flatten())) for i, (x, y) in enumerate(zip(xx.flatten(), yy.flatten())): values[i] = mog_logprob(np.array([x, y]), means, icovs, lndets, pis) values[values < -10] = -10 from scipy.integrate import simps simps(simps(np.exp(values).reshape(xx.shape), ye), xe) # + method='L-BFGS-B' i = 1 fig, ax = plt.subplots(1, 1, figsize=(10, 10)) xlim = [-0.5, 1.0] ylim = [-6, 10] nbins = 100 xbins = np.linspace(xlim[0], xlim[1], nbins) ybins = np.linspace(ylim[0], ylim[1], nbins) im = ax.contourf(xx, yy, values.reshape(xx.shape), 100, vmin=-10, vmax=10) fig.colorbar(im,ax=ax) #im.set_rasterized(True) for j in range(0, 20): i = np.random.randint(0, len(bright)) args = (chat[bright[i]], mhat[bright[i]], varpihat[bright[i]], sigmac[bright[i]], sigmam[bright[i]], sigmavarpi[bright[i]], pis, means, icovs, lndets, xdgmm) theta_0 = np.array([0.1, 2., 0.08, 1.0]) obj = lambda *args: -1.*logjoint(*args) gobj = grad(obj) hobj = hessian(obj) res = minimize(obj, theta_0, args=args, method=method, jac=gobj) theta_hat = res.x sigma_hat = np.linalg.inv(hobj(theta_hat, chat[bright[0]], mhat[bright[0]], varpihat[bright[0]], sigmac[bright[0]], sigmam[bright[0]], sigmavarpi[bright[0]], pis, means, icovs, lndets, xdgmm)) 
print('Dust: {0:.3f}+/-{1:0.3f} Absmag: {2:0.3f}+-{3:0.3f} color: {4:0.3f}+-{5:0.3f} distance: {6:0.3f}+-{7:0.3f}'.format(theta_hat[0], np.sqrt(sigma_hat[0,0]), theta_hat[1], np.sqrt(sigma_hat[1,1]), theta_hat[2], np.sqrt(sigma_hat[2,2]), theta_hat[3], np.sqrt(sigma_hat[3,3]))) am = mhat[bright[i]] - 5.*np.log10(1./(varpihat[bright[i]]/1e3*10)) #plotXdgmm(xdgmm, ax[1], lw=2) V = np.zeros((2,2)) V[0,0] = sigmac[0]**2. V[1,1] = sigmam[0]**2 + sigmavarpi[0]**2 p = plotgaussian1sigma(ax, np.array([chat[bright[i]], am]), V, alpha=0.5) V = np.zeros((2,2)) V[0,0] = sigma_hat[2,2] V[1,0] = sigma_hat[2,1] V[0,1] = sigma_hat[1,2] V[1,1] = sigma_hat[1,1] p = plotgaussian1sigma(ax, np.array([theta_hat[2], theta_hat[1]]), V, c='red') #A, M, c, d = theta plt.arrow(chat[bright[i]], am, theta_hat[2] - chat[bright[i]], theta_hat[1] - am) ax.set_xlabel('H-W2', fontsize=20) ax.set_ylabel('$M_{W2}$', fontsize=20) ax.set_xlim(xlim) ax.set_ylim(ylim[::-1]) #ax.set_title(l, fontsize=20) plt.savefig('posteriors.png', rasterized=True) # - # + plt.contourf(xx, yy, values.reshape(xx.shape), 100, vmin=-10, vmax=10, cmap=plt.get_cmap('Blues')) plt.colorbar() plt.gca().invert_yaxis() # - np.exp(values) # + fig, ax = plt.subplots(1, 3, figsize=(30, 10)) xlim = [-0.5, 1.0] ylim = [-6, 10] nbins = 100 xbins = np.linspace(xlim[0], xlim[1], nbins) ybins = np.linspace(ylim[0], ylim[1], nbins) for c, ab, a in zip([chat, sample[:,0]], [absmag, sample[:,1]], [ax[0], ax[2]]): H, xe, ye = np.histogram2d(c, ab, bins=(xbins, ybins)) im = a.pcolormesh(xe, ye, H.T, norm=mpl.colors.LogNorm(), cmap=plt.get_cmap('Blues'))#vmax=75))#, vmax=20) #, vmin=-100, vmax=100) im.set_rasterized(True) plotXdgmm(xdgmm, ax[1], lw=2) V = np.zeros((2,2)) V[0,0] = sigmac[0]**2. 
V[1,1] = sigmam[0]**2 + sigmavarpi[0]**2 p = plotgaussian1sigma(ax[2], np.array([chat[bright[i]], mhat[bright[i]] - 5.*np.log10(1./(varpihat[bright[i]]/1e3*10))]), V) V = np.zeros((2,2)) V[0,0] = sigma_hat[2,2] V[1,0] = sigma_hat[2,1] V[0,1] = sigma_hat[1,2] V[1,1] = sigma_hat[1,1] p = plotgaussian1sigma(ax[2], np.array([theta_hat[2], theta_hat[1]]), V) #A, M, c, d = theta labels = ['data', 'gaussian mixture', 'samples'] for a , l in zip(ax, labels): a.set_xlabel('H-W2', fontsize=20) a.set_ylabel('$M_{W2}$', fontsize=20) a.set_xlim(xlim) a.set_ylim(ylim[::-1]) a.set_title(l, fontsize=20) # - theta_hat print(p) plot gmm plot chat, mhat-> Mhat plot c, M plot arrow # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # BPMN Workflow (Microservices) # ---------------------------------------------- # # ![](images/Microservices-BPMN.png) # # - - - # # Die [Business Process Model and Notation](https://de.wikipedia.org/wiki/Business_Process_Model_and_Notation) (BPMN, deutsch Geschäftsprozessmodell und -notation) ist eine grafische Spezifikationssprache in der Wirtschaftsinformatik und im Prozessmanagement. Sie stellt Symbole zur Verfügung, mit denen Fach-, Methoden- und Informatikspezialisten Geschäftsprozesse und Arbeitsabläufe modellieren und dokumentieren können. # # [Camunda BPM](https://camunda.com/de/) ist ein in Java geschriebenes freies Workflow-Management-System, mit dem Geschäftsprozesse in BPMN 2.0 definiert und ausgeführt werden können. # # Das nachfolgende BPMN Beispiel basiert auf dem Blog Eintrag [Use Camunda without touching Java and get an easy-to-use REST-based orchestration and workflow engine](https://blog.bernd-ruecker.com/use-camunda-without-touching-java-and-get-an-easy-to-use-rest-based-orchestration-and-workflow-7bdf25ac198e) von . # # - - - # Das Beispiel besteht aus folgenen Microservices # # * Einer BPMN **Workflow Engine** (Camunda) mit einen Prozess zur Freigabe von Rechnungen. # * Ein **Frontend** in HTML und JS implementiert zum starten des obigen Prozesses. # * Ein **Backend** zum Verbuchen der Rechnungen implementiert in Java. # ! ./kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/bpmn/bpmn-backend.yaml # ! ./kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/bpmn/bpmn-frontend.yaml # ! ./kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/bpmn/camunda.yaml # - - - # Nach dem Starten der Microservices müssen wir den BPMN Prozess in der Workflow Engine veröffentlichen. # ! wget https://raw.githubusercontent.com/mc-b/bpmn-tutorial/master/RechnungStep3.bpmn # ! curl -w "\n" -H "Accept: application/json" -F "deployment-name=rechnung" -F "enable-duplicate-filtering=true" -F "deploy-changed-only=true" -F "RechnungStep3.bpmn=@RechnungStep3.bpmn" http://camunda:8080/engine-rest/deployment/create # - - - # Nun können wir neue Rechnungen über das Frontend erfassen und diese in der Workflow Engine anschauen. Die URLs sind wie folgt: # ! echo "BPMN Workflow Engine: https://ip cluster/camunda # User/Password: " # ! echo "Frontend : https://ip cluster/frontend/" # Im `Cockpit` sollte unter `Process Definitions` ein Prozess `RechnungStep3` ersichtlich sein. Fehlt dieser, den Befehl oben für die Veröffentlichung des Prozesses nochmals ausführen. # # Mittels dem Frontend oder nachfolgenden Befehl können wir diesen Prozess starten. 
Der gestartete Prozess ist dann unter `Tasklist` ersichtlich. # # + language="bash" # curl \ # -H "Content-Type: application/json" \ # -X POST \ # -d '{ "variables": { "rnr": {"value": "123", "type": "long"}, "rbetrag": {"value": "200.00", "type": "String"} } }' \ # http://camunda:8080/engine-rest/process-definition/key/RechnungStep3/start # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Getting Started # The simplest tasks are `pypolycontain` is defining polytopic objects, performing operations, and visualizing them. First, import the package alongside `numpy`. import numpy as np import pypolycontain as pp # ## Objects # ### H-polytope # We define an H-polytope $\mathbb{P}=\{x \in \mathbb{R}^2 | Hx \le h\}$. We give the following numericals for H and h: # $$ # H=\left( \begin{array}{cc} 1 & 1 \\ -1 & 1 \\ 0 & -1 \end{array}\right), # h= \left( \begin{array}{c} 1 \\ 1 \\ 0 \end{array}\right). # $$ H=np.array([[1,1],[-1,1],[0,-1]]) h=np.array([1,1,0]) A=pp.H_polytope(H,h) # This a triangle as it is defined by intersection of 3 half-spaces in $\mathbb{R}^2$. In order to visualizate the polytope, we call the following function. Note the brackets around `visualize` function - it takes in a list of polytopes as its primary argument. pp.visualize([A],title=r'$A$') # ### AH-polytope # We define an AH-polytope as $t+T\mathbb{P}$ with the following numbers. The transformation represents a rotation of $30^\circ$ and translation in $x$ direction. t=np.array([5,0]).reshape(2,1) # offset theta=np.pi/6 # 30 degrees T=np.array([[np.cos(theta),np.sin(theta)],[-np.sin(theta),np.cos(theta)]]) # Linear transformation B=pp.AH_polytope(t,T,A) pp.visualize([B],title=r'$B$') # ### Zonotope # We define a zonotope as $\mathbb{Z}=x+G[-1,1]^{n_p}$, where $n_p$ is the number of rows in $p$. x=np.array([4,0]).reshape(2,1) # offset G=np.array([[1,0,0.5],[0,0.5,-1]]).reshape(2,3) C=pp.zonotope(x=x,G=G) pp.visualize([C],title=r'$C$') # ## Visualization # The `visualize` function allows for visualizing multiple polytopic objects. pp.visualize([A,C,B],title=r'$A,B,C$') # You may have noticed the colors. Here are the default colors for various polytopic objects. Using color argument you can change the color. # # | Object | Default Color | # |:-----------:|:-----:| # | H-polytope | Red | # | Zonotope | Green | # | AH-polytope | Blue | # ### Options # `visualize` has a set of options. While it only supports 2D plotting, it can take high dimensional polytopes alongside the argument `tuple_of_projection_dimensions`. Its ```default``` is `[0,1]`, meaning the projection into the first and second axis is demonstrated. # # You can also add a separate `subplot` environment to plot the polytopes. Take the following example: import matplotlib.pyplot as plt fig,ax=plt.subplots() fig.set_size_inches(6, 3) pp.visualize([A,B,C],ax=ax,fig=fig) ax.set_title(r'A triangle (red), rotated by 30 degrees (blue), and a zonotope (green)',FontSize=15) ax.set_xlabel(r'$x$',FontSize=15) ax.set_ylabel(r'$y$',FontSize=15) ax.axis('equal') # ## Operations # `pypolycontain` supports a broad range of polytopic operations. The complete list of operations is [here](https://pypolycontain.readthedocs.io/en/latest/operations.html). 
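# Before moving on to the individual operations below, one brief aside on the `tuple_of_projection_dimensions` option mentioned in the Options subsection above. The following is a hedged sketch (not part of the original notebook): the keyword name is taken from the text above, and the zonotope numbers are made up for illustration.
import numpy as np
import pypolycontain as pp

# A 3-D zonotope, visualized by projecting it onto the first and third coordinates
x3 = np.array([0, 0, 0]).reshape(3, 1)      # center in R^3
G3 = np.array([[1.0, 0.0, 0.5],
               [0.0, 1.0, -0.5],
               [0.5, 0.5, 1.0]])            # three generators in R^3
Z3 = pp.zonotope(x=x3, G=G3)
pp.visualize([Z3], tuple_of_projection_dimensions=[0, 2], title=r'$Z_3$ projected onto axes 0 and 2')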
# ### Minkowski sum D=pp.operations.minkowski_sum(A,B) D.color=(0.9, 0.9, 0.1) pp.visualize([D,A,B],title=r'$A$ (red),$B$ (blue), $D=B\oplus C$ (yellow)') # ### Convex Hull # Let's take the convex-hull of $A$ and $C$. E=pp.convex_hull(A,C) E.color='purple' pp.visualize([E,A,C],alpha=0.5,title=r'$A$ (red),$C$ (green), $E={ConvexHull}(A,C)$ (purple)') # ### Intersection # Let's take the intersection of $B$ and $D$. F=pp.intersection(D,E) F.color='black' pp.visualize([D,E,F],alpha=1,title=r'$D$ (yellow), $E$ (purple), $F=D \cap E$ (black)') # ### Bounding Box # We can compute the bounding box of a polytopic object. For instance, let's compute the bounding box of $D$. G=pp.bounding_box(D) pp.visualize([G,D],title=r'$D$ and its bounding box') # Or the following: mylist=[A,B,C] pp.visualize([pp.bounding_box(p) for p in mylist]+mylist,title=r'$A,B,C$ and their bounding boxes') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- total = totmil = menor = cont = 0 barato = " " while True: produto= str(input('Nome do Produto: ')) preco= float(input('Preço: R$')) cont +=1 total +=preco if preco > 1000: totmil +=1 if cont == 1: menor = preco barato = produto resp = ' ' while resp not in 'SN': resp = str(input('Quer continuar [S/N]: ')).strip().upper()[0] if resp == 'N': break print('FIM DO PROGRAMA') print(f'O total da compra foi R${total:.2f}') print(f'Temos {totmil} custando mais que R$ 1000.00') print(f'O produto com menor preço é {produto} que custa R${menor:.2f}') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Introduction to Gaussian Process Models # Gaussian process (GP) models serve as approximations of computationally expensive (time-consuming) black-box functions. To reduce the number of times the expensive function must be queried during optimization, the GP is used to guide the sampling decisions in the parameter space and only the "most promising" parameters are selected for evaluation. # A GP model treats the function it approximates like the realization of a stochastic process: # $m_{GP}(\theta) = \mu + Z(\theta)$, # where $\mu$ represents the mean of the stochastic process and $Z(\theta) \sim \mathcal{N}(0,\sigma^2)$ is the deviation from the mean. # The correlation between two random variables $Z(\theta_k)$ and $Z(\theta_l)$ is defined by a kernel, e.g., the squared exponential (also Radial basis function) kernel: # \begin{equation} # Corr(Z(\theta_k),Z(\theta_l)) = \exp(-\sum_{i=1}^d \gamma_i|\theta_k^{(i)}-\theta_l^{(i)}|^{q_i}) # \end{equation}, # with $\gamma_i$ determining how quickly the correlation in dimension $i$ decreases, and $q_i$ refelcts the smoothness of the function in dimension $i$ # Denoting $\mathbf{R}$ as the matrix whose $(k,l)$-th element is given as the correlation above, maximum likelihood estimation is used to determine the GP parameters $\mu$, $\sigma^2$, and $\gamma_i$. 
Then, at an unsampled point $\theta^{new}$, the GP prediction is \begin{equation} # m_{\text{GP}}(\theta^{\text{new}})=\hat{\mu}+\mathbf{r}^T\mathbf{R}^{-1}(\mathbf{f}-\mathbf{1}\hat\mu), # \end{equation} # where $\mathbf{1}$ is a vector of ones of appropriate dimension and $\mathbf{f}$ is the vector of function values obtained so far, and # \begin{equation} # \boldsymbol{r}= # \begin{bmatrix} # Corr\left(Z(\theta^{\text{new}}), Z(\theta_1)\right)\\ # \vdots\\ # Corr\left(Z(\theta^{\text{new}} # ), Z(\theta_n)\right) # \end{bmatrix}. # \end{equation} # The corresponding mean squared error is # \begin{equation} # s^2(\theta^{\text{new}})=\hat{\sigma}^2\left( 1-\boldsymbol{r}^T\boldsymbol{R}^{-1}\boldsymbol{r} +\frac{(1-\boldsymbol{1}^T\boldsymbol{R}^{-1}\boldsymbol{r})^2}{\mathbf{1}^T\boldsymbol{R}^{-1}\mathbf{1}}\right) # \end{equation} # with # \begin{equation} # \hat{\mu} = \frac{\mathbf{1}^T\boldsymbol{R}^{-1}\mathbf{f}}{\mathbf{1}^T\boldsymbol{R}^{-1}\mathbf{1}} # \end{equation} # and # \begin{equation} # \hat{\sigma}^2=\frac{(\mathbf{f}-\mathbf{1}\hat{\mu})^T\boldsymbol{R}^{-1}(\mathbf{f}-\mathbf{1}\hat{\mu})}{n}. # \end{equation} # Python has a good implementation of GPs where you can choose different kernels. # First, we need (input, output) data pairs. Inputs are parameters where we query the function (for simplicity, the example has an inexpensive function). From the Sckit-Learn website: https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy_targets.html import numpy as np from matplotlib import pyplot as plt from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, ConstantKernel, Matern, RationalQuadratic, ExpSineSquared, WhiteKernel from scipy.optimize import minimize from scipy.spatial import distance import scipy.spatial as scp from scipy.stats import norm from pyDOE import * #needed if Latin hypercube design is used import warnings warnings.filterwarnings("ignore") def f(x): """The function we want to approximate.""" return x * np.sin(x) xlow = 0 #lower bound on x xup = 10 #upper bound on x dim = 1 #dimension of the problem lhs_wanted = False np.random.seed(420) if not(lhs_wanted): #when not using space-filling design X = np.atleast_2d([1., 3., 7., 8.]).T #select some points where we evaluate the function # Function evaluations y = f(X).ravel() # Other options for creating space filling designs is latin hypercube sampling: if lhs_wanted: ninit=6 #6 initial evaluations init_design = lhs(dim, samples =ninit, criterion='maximin') #initial design in [0,1]^dim X = xlow+(xup-xlow)*init_design #scale to [xlow,xup] # Function evaluations y = f(X).ravel() # **Exercise:** run the code with different initial samples, i.e., try lhs_wanted = False and lhs_wanted = True and compare the sampling history # + # Select a GP kernel (here RBF or squared exponential) kernel = RBF() gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,n_restarts_optimizer=9) # Fit the GP to the input-output data gp.fit(X, y) # - # Make some good-looking plots def plot_the_gp(X, y, gp, xnew): #select a bunch of points where we want to make predictions wioth the GP x = np.atleast_2d(np.linspace(0, 10, 1000)).T # Make the GP prediction at the points where no evaluations were taken - also return predicted uncertainty y_pred, sigma = gp.predict(x, return_std=True) plt.figure() plt.plot(x, f(x), 'r:', label=r'$f(x) = x\,\sin(x)$') plt.plot(X, y, 'r.', markersize=10, label='Observations') plt.plot(x, y_pred, 'b-', label='Prediction') 
if len(xnew)>0: plt.plot(X[-1], y[-1], 'gs', markersize=10, label='Newest sample') plt.fill(np.concatenate([x, x[::-1]]), np.concatenate([y_pred - 1.9600 * sigma, (y_pred + 1.9600 * sigma)[::-1]]), alpha=.5, fc='b', ec='None', label='95% confidence interval') plt.xlabel('$x$') plt.ylabel('$f(x)$') plt.ylim(-10, 20) plt.legend(loc='upper left') plt.show() plot_the_gp(X, y, gp, []) # **Optional Exercise:** check out the Scikit-Learn website https://scikit-learn.org/stable/modules/gaussian_process.html#kernels-for-gaussian-processes and experiment around with different basic kernels, kernel parameters and kernel combinations, e.g., # - does using "kernel = RBF(10, (1e-2, 1e2))" change anything? # - what happens when you use "kernel = Matern(length_scale=1.0, nu=1.5)" # - try "kernel = 1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1, alpha_bounds=(1e-5, 1e15))" # - "kernel = 1.0 * ExpSineSquared( # length_scale=1.0, # periodicity=3.0, # length_scale_bounds=(0.1, 10.0), # periodicity_bounds=(1.0, 10.0),)" # - use a combination of kernels: "kernel = RBF()+WhiteKernel(noise_level=.001)" using different noise_levels # # **Exercise:** Change the inputs of the GP (i.e., the training samples) and see how the GP predictions change (use fewer or more points, use different points in [0,10], e.g., "X=np.atleast_2d(np.random.uniform(0,10,5)).T" # Takeaway: the quality and accuracy of the GP highly depends on the trianing data and the kernel used # # Adaptive Optimization with the GP # GP models are often used in optimization algorithms. In each iteration of the optimization, a new sample point is selected by maximizing the expected improvement (EI): # \begin{equation} # \mathbb{E}(I)(\theta) = s(\theta)\left(v\Phi(v)+\phi(v) \right), # \end{equation} # where # \begin{equation} # v=\frac{f^{\text{best}}-m_{\text{GP}}(\theta)}{s(\theta)} # \end{equation} # and $\Phi$ and $\phi$ are the normal cumulative distribution and density functions, respectively, and $s(\theta)=\sqrt{s^2(\theta)}$ is the square root of the mean squared error. # # The function $\mathbb{E}(I)(\theta)$ can be maximized with any python optimization library. The point $\theta^{\text{new}}$ where it reaches its maximum will be the new point where $f$ is evaluated. 
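# As a quick numeric illustration of the expected improvement formula above (a hedged sketch with made-up numbers, not part of the original notebook): with a best observed value of 1.0 and a GP that predicts mean 0.5 with standard deviation 0.2 at a candidate point, the EI comes out to roughly 0.50.
from scipy.stats import norm

f_best = 1.0   # best (smallest) function value observed so far
m_gp = 0.5     # GP mean prediction at the candidate point
s = 0.2        # square root of the GP mean squared error at that point

v = (f_best - m_gp) / s                        # v = 2.5
ei_value = s * (v * norm.cdf(v) + norm.pdf(v))
print(ei_value)                                # approximately 0.50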
# define expected improvement function def ei(x, gpr_obj, Xsamples, Ysamples): #expected improvement dim = len(x) x= x.reshape(1, -1) min_dist=np.min(scp.distance.cdist(x, Xsamples)) if min_dist<1e-6: #threshold for when points are so close that they are considered indistinguishable expected_improvement=0.0 return expected_improvement mu, sigma = gpr_obj.predict(x.reshape(1, -1), return_std=True) mu_sample = gpr_obj.predict(Xsamples) mu_sample_opt = np.min(Ysamples) # In case sigma equals zero with np.errstate(divide='ignore'): Z = (mu_sample_opt-mu) / sigma expected_improvement = (mu_sample_opt-mu) * norm.cdf(Z) + sigma * norm.pdf(Z) expected_improvement[sigma == 0.0] == 0.0 answer=-1.*expected_improvement #to maximize EI, you minimize the negative of it return answer def plot_the_ei(gpr_obj, X, Y): x = np.atleast_2d(np.linspace(0, 10, 1000)).T expimp=np.zeros(1000) for ii in range(1000): expimp[ii] = -ei(x[ii], gpr_obj, X, Y) plt.figure() plt.plot(x, expimp, 'k--', label='Expected improvement') plt.plot(X, np.zeros(X.shape[0]), 'rx', markersize=10, label='Observation sites') #plt.plot(X[-1],0, 'gs', markersize=10, label='Newest sample') plt.xlabel('$x$') plt.ylabel('$EI(x)$') #plt.ylim(-10, 20) plt.legend(loc='upper left') plt.show() # do your GP iterations: maximize EI, select new point, evaluate new point, update GP, maximize EI, .... n_GP_samples = 20 # allow 50 evaluations of f bound_list = np.array([[xlow, xup]]) xnew=[] while X.shape[0]< n_GP_samples: gpr_obj = GaussianProcessRegressor(kernel=kernel, random_state=0,normalize_y=True, n_restarts_optimizer=10).fit(X, y) #create the GP plot_the_gp(X, y, gpr_obj, xnew) plot_the_ei(gpr_obj, X, y) #compute next point by maximizing expected improvement, multi-start optimization xnew = [] fnew =np.inf for ii in range(10): x0 = xlow + (xup-xlow) * np.random.rand(1,dim) #random starting point for optimizing expected improvement res= minimize(ei,np.ravel(x0),method='SLSQP',bounds=bound_list, args=(gpr_obj, X, y)) dist = np.min(scp.distance.cdist(np.asmatrix(res.x), X)) #make sure new point is sufficiently far away from already sampled points if np.min(dist)>1e-6 and res.success: #1e-3 is tunable x_ = np.asmatrix(res.x) if res.fun< fnew: xnew = x_ fnew = res.fun else: #use random point as new point x_ = np.asarray(xlow) + np.asarray(xup-xlow) * np.asarray(np.random.rand(1,dim)) #random starting point fv= ei(x_, gpr_obj, X, y) if len(xnew)== 0 or fv < fnew: xnew = np.asmatrix(x_) fnew= fv fval = f(np.ravel(xnew)) #update Xsamples and Ysamples arrays X=np.concatenate((X, np.asmatrix(xnew)), axis = 0) Y_ = np.zeros(len(y)+1) Y_[0:len(y)]= y Y_[-1]=fval y =Y_ minID=np.argmin(y) #find index of best point print('best point: ', X[minID]) print('best value: ', y[minID]) print('Number evaluations: ', X.shape[0]) # From the images of the expected improvement we can see that the peaks are becoming increasingly narrow, to almost looking like jump-discontinuities. This means that for an optimizer that tries to find the maximum of the expected improvement function, it becomes increasingly harder to find the optimum and sampling becomes more "random" in the space because the function is flat and EI values are the same everywhere except at the jumps. 
# Takeaways: # - GPs can be useful to guide the search during optimization # - They shine when the number of function evaluations is severely limited # - The expected improvement function helps to select points that are the "most promising" next evaluations # - The expected improvement function is multimodal and becomes increasingly harder to optimize # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] toc=true id="b6Q30JGIVCZ_" colab_type="text" #

# Table of Contents
    # # + [markdown] id="ena_VA5wVCaC" colab_type="text" # ## Why is gradient descent so bad at optimizing polynomial regression?# # # Question from Stackexchange: # https://stats.stackexchange.com/questions/350130/why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression # # # # + [markdown] id="XCD0Ml6cVCaD" colab_type="text" # ### Linear regression # # #### Cost function # $J(\theta) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)}) - y^{(i)})^2 $ # # $J(\theta) = \frac{1}{2m}(X\theta - y)^T(X\theta - y) $ (vectorized version) # # #### Gradient # $\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{m}X^T(X\theta - y) $ # # ##### Hessian # $\frac{\partial^2 J(\theta)}{\partial \theta^2} = \frac{1}{m}X^T X $ # # ## Polynomial regression # The design matrix is of the form: # # $ \mathbf{X = [1 , x , x^2 , x^3 , ... , x}^n]$ # # ### Libraries # + id="ZwF8glsTVCaE" colab_type="code" colab={} outputId="543623b2-9017-414d-bcaa-65a18d77e876" import numpy as np import pandas as pd import scipy.optimize as opt from sklearn import linear_model import statsmodels.api as sm from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D # %matplotlib inline plt.style.use('seaborn-white') # + [markdown] id="0UUb_8-dVCaK" colab_type="text" # ### Helper Functions # + id="bdtydzViVCaL" colab_type="code" colab={} def costfunction(theta,X,y): m = np.size(y) theta = theta.reshape(-1,1) #Cost function in vectorized form h = X @ theta J = float((1./(2*m)) * (h - y).T @ (h - y)); return J; def gradient_descent(theta,X,y,alpha = 0.0005,num_iters=1000): m = np.size(y) J_history = np.empty(num_iters) count_history = np.empty(num_iters) theta_1_hist, theta_2_hist = [], [] for i in range(num_iters): #Grad function in vectorized form h = X @ theta theta = theta - alpha * (1/m)* (X.T @ (h-y)) #Tracker values for plotting J_history[i] = costfunction(theta,X,y) count_history[i] = i theta_1_hist.append(theta[0,0]) theta_2_hist.append(theta[1,0]) return theta, J_history,count_history, theta_1_hist, theta_2_hist def grad(theta,X,y): #Initializations theta = theta[:,np.newaxis] m = len(y) grad = np.zeros(theta.shape) #Computations h = X @ theta grad = (1./m)*(X.T @ ( h - y)) return (grad.flatten()) def polynomial_features(data, deg): data_copy=data.copy() for i in range(1,deg): data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1'] return data_copy def hessian(theta,X,y): m,n = X.shape X = X.values return ((1./m)*(X.T @ X)) # + [markdown] id="Z5OFLPaDVCaO" colab_type="text" # # : Comparing results for high order polynomial regression # + [markdown] id="z6EZnMUTVCaP" colab_type="text" # ### Initializing the data # + id="gCsxXFIcVCaQ" colab_type="code" colab={} #Create data from sin function with uniform noise x = np.linspace(0.1,1,40) noise = np.random.uniform( size = 40) y = np.sin(x * 1.5 * np.pi ) y_noise = (y + noise).reshape(-1,1) y_noise = y_noise - y_noise.mean() #Centering the data degree = 7 X_d = polynomial_features(pd.DataFrame({'X0':1,'X1': x}),degree) # + [markdown] id="dKCLEGiNVCaS" colab_type="text" # ### Closed form solution # + id="c6bh8aG-VCaT" colab_type="code" colab={} outputId="e12b5a6b-4aa2-4c88-cbee-e5d0db0dc31f" def closed_form_solution(X,y): return np.linalg.inv(X.T @ X) @ X.T @ y coefs = closed_form_solution(X_d.values,y_noise) coefs # + [markdown] id="abQUWPItVCaX" colab_type="text" # ### Numpy only fit # + id="E1SUg29QVCaY" colab_type="code" colab={} outputId="9ee7af1e-5c2f-4af4-99f2-bb24141657a8" stepsize = .1 theta_result_1,J_history_1, count_history_1, theta_1_hist, 
theta_2_hist = gradient_descent(np.zeros((len(X_d.T),1)).reshape(-1,1), X_d,y_noise,alpha = stepsize,num_iters=5000) display(theta_result_1) # + [markdown] id="6ZlFJj7HVCac" colab_type="text" # ### Sciy optimize fit using first order derivative only # #### Comment: BFGS does very well but requires adjustment of options # In particular tolerance must be made smaller as the cost function is very flat near the global minimum # + id="el05qJjRVCad" colab_type="code" colab={} outputId="8da1342d-9c0d-471c-c388-03bb184ce35a" import scipy.optimize as opt theta_init = np.ones((len(X_d.T),1)).reshape(-1,1) model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise), method = 'BFGS', jac = grad, options={'maxiter':1000, 'gtol': 1e-10, 'disp' : True}) model_t.x # + [markdown] id="48DuMFgFVCah" colab_type="text" # ### Sciy optimize fit using hessian matrix # #### As expected, 2nd order information allows to converge much faster # + id="4zzQ6TzyVCai" colab_type="code" colab={} outputId="2891901f-e3d7-4ebd-91e0-46698c9396df" import scipy.optimize as opt theta_init = np.ones((len(X_d.T),1)).reshape(-1,1) model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise), method = 'dogleg', jac = grad, hess= hessian, options={'maxiter':1000, 'disp':True}) model_t.x # + [markdown] id="I2AeCv-LVCam" colab_type="text" # ### Sklearn fit # + id="Ty5H0fbbVCan" colab_type="code" colab={} outputId="f52e0fa2-95c2-4702-d454-fd50bd2b29a2" from sklearn import linear_model model_d = linear_model.LinearRegression(fit_intercept=False) model_d.fit(X_d,y_noise) model_d.coef_ # + [markdown] id="WmU5-2DJVCar" colab_type="text" # ### Statsmodel fit # + id="3jv3fVKgVCas" colab_type="code" colab={} outputId="b765eb17-9ce7-4a2c-d10f-c442840e0798" import statsmodels.api as sm model_sm = sm.OLS(y_noise, X_d) res = model_sm.fit() print(res.summary()) # + [markdown] id="y7hS4N4cVCaw" colab_type="text" # # : Repeating the experiment with 2 polynomial variables and visualizing the results # Here we will focus on a 2-D design matrix with $x$ and $x^2$ values. 
The y values have been centered so we will ignore the constant term and y-intercept # + [markdown] id="sCQJQoCYVCa2" colab_type="text" # ### Initializing the data # + id="xPExOVv-VCa3" colab_type="code" colab={} #Create data from sin function with uniform noise x = np.linspace(0.1,1,40) #Adjusting the starting point to reduce numerical instability noise = np.random.uniform( size = 40) y = np.sin(x * 1.5 * np.pi ) y_noise = (y + noise).reshape(-1,1) y_noise = y_noise - y_noise.mean() #Centering the data #2nd order polynomial only degree = 2 X_d = polynomial_features(pd.DataFrame({'X1': x}),degree) #Starting point for gradient descent - see later diagrams initial_theta = np.array([0,-2]).reshape(-1,1) # + id="rE19XQzLVCa6" colab_type="code" colab={} outputId="2232fba1-ea63-48ed-810c-360cfb96005c" X_d = X_d[['X1','X2']] X_d.head() # + [markdown] id="QxogL3ibVCbA" colab_type="text" # ### Closed form solution # + id="Wz_PCN89VCbC" colab_type="code" colab={} outputId="ab042a44-6b41-4aa0-f797-bf0a650b69a2" def closed_form_solution(X,y): return np.linalg.inv(X.T @ X) @ X.T @ y coefs = closed_form_solution(X_d.values,y_noise) coefs # + [markdown] id="peqaeZnOVCbM" colab_type="text" # ### Numpy only fit # + id="klz--J-MVCbN" colab_type="code" colab={} outputId="4471e4f2-cc4b-4e56-c257-6671a499ad3f" stepsize = .3 theta_result,J_history, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta,X_d,y_noise,alpha = stepsize,num_iters=10000) display(theta_result) # + [markdown] id="Y7buiryKVCbQ" colab_type="text" # ### Plotting the gradient descent convergence and resulting fits # + id="JJ_ocEDWVCbQ" colab_type="code" colab={} outputId="67afa845-bf7c-4d91-97ed-5ba4595871f2" fig = plt.figure(figsize = (18,8)) #Looping through different stepsizes for s in [.001,.01,.1,1]: theta_calc,J_history_1, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta, X_d,y_noise,alpha = s,num_iters=5000) #Plot gradient descent convergence ax = fig.add_subplot(1, 2, 1) ax.plot(count_history_1, J_history_1, label = 'Grad. desc. stepsize: {}'.format(s)) #Plot resulting fits on data ax = fig.add_subplot(1, 2, 2) ax.plot(x,X_d@theta_calc, label = 'Grad. desc. 
stepsize: {}'.format(s)) #Adding plot features ax = fig.add_subplot(1, 2, 1) ax.axhline(costfunction(coefs, X_d, y_noise), linestyle=':', label = 'Closed form minimum') ax.set_xlabel('Count') ax.set_ylabel('Cost function') ax.set_title('Plot of convergence: Polynomial regression x, x^2 ={}'.format(degree)) ax.legend(loc = 1) ax = fig.add_subplot(1, 2, 2) ax.scatter(x,y_noise, facecolors = 'none', edgecolor = 'darkblue', label = 'f(x) + noise') ax.plot(x,X_d@coefs, linestyle=':', label = 'Closed form fit') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('Noisy data and gradient descent fits'.format(degree)) ax.legend() plt.show() # + [markdown] id="iTZ6wJCnVCbT" colab_type="text" # ### Plotting the cost function in 3D # + id="a-r86kJ8VCbU" colab_type="code" colab={} outputId="d80e0c07-e066-4fe0-be90-618fa64871b9" #Creating the dataset (as previously) X = X_d.values #Setup of meshgrid of theta values T0, T1 = np.meshgrid(np.linspace(0,6,100),np.linspace(0,-8,100)) #Computing the cost function for each theta combination zs = np.array( [costfunction(np.array([t0,t1]).reshape(-1,1), X, y_noise.reshape(-1,1)) for t0, t1 in zip(np.ravel(T0), np.ravel(T1)) ] ) #Reshaping the cost values Z = zs.reshape(T0.shape) #Computing the gradient descent theta_result,J_history, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta,X,y_noise,alpha = 0.3,num_iters=5000) #Angles needed for quiver plot anglesx = np.array(theta_1)[1:] - np.array(theta_1)[:-1] anglesy = np.array(theta_2)[1:] - np.array(theta_2)[:-1] # %matplotlib inline fig = plt.figure(figsize = (16,8)) #Surface plot ax = fig.add_subplot(1, 2, 1, projection='3d') ax.plot_surface(T0, T1, Z, rstride = 5, cstride = 5, cmap = 'jet', alpha=0.5) ax.plot(theta_1,theta_2,J_history, marker = '*',markersize = 4, color = 'r', alpha = .2, label = 'Gradient descent') ax.plot(coefs[0],coefs[1], marker = '*', color = 'black', markersize = 10) ax.set_xlabel('theta 1') ax.set_ylabel('theta 2') ax.set_zlabel('Cost function') ax.set_title('Gradient descent: Root at {}'.format(theta_calc.flatten().round(2))) ax.view_init(45, -45) ax.legend() #Contour plot ax = fig.add_subplot(1, 2, 2) ax.contour(T0, T1, Z, 70, cmap = 'jet') ax.quiver(theta_1[:-1], theta_2[:-1], anglesx, anglesy, scale_units = 'xy', angles = 'xy', scale = 1, color = 'r', alpha = .9) ax.plot(coefs[0],coefs[1], marker = '*', color = 'black', markersize = 10) ax.set_xlabel('theta 1') ax.set_ylabel('theta 2') ax.set_title('Gradient descent: Root at {}'.format(theta_calc.flatten().round(2))) ax.legend() plt.legend() plt.show() # + [markdown] id="ZCkux-XkVCbX" colab_type="text" # ### Sciy optimize fit # + id="KgKB3RjKVCbY" colab_type="code" colab={} outputId="010fdf28-95e5-4828-d320-b48a0577600b" import scipy.optimize as opt theta_init = np.ones((len(X_d.T),1)).reshape(-1,1) model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise), method = 'dogleg', jac = grad, hess= hessian, options={'maxiter':1000}) model_t.x # + [markdown] id="1Ac6wJI3VCba" colab_type="text" # ### Sklearn fit # + id="iZm2TQxcVCbb" colab_type="code" colab={} outputId="96670be3-069e-491f-bc8e-0baa1a0c9fce" from sklearn import linear_model model_d = linear_model.LinearRegression(fit_intercept=False) model_d.fit(X_d,y_noise) model_d.coef_ # + [markdown] id="OyHSwPZ_VCbf" colab_type="text" # ### Statsmodel fit # + id="MJml09sEVCbg" colab_type="code" colab={} outputId="ba4a3528-22ce-4518-d147-3bfc5c18f9cc" import statsmodels.api as sm model_sm = sm.OLS(y_noise, X_d) res = model_sm.fit() 
print(res.summary()) # + id="B3sJJNQVVCbj" colab_type="code" colab={} # + id="wrUNeaZeVCbl" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import sys sys.path.append('../mylib') from yahoo import Yahoo # - # my_yahoo = Yahoo('CVS') my_yahoo = Yahoo('HYG') type(my_yahoo) s_dict=my_yahoo.get() s_dict.get('open') s_dict['open'] for key in s_dict.keys(): print(key) def return_shortened_list(keyword, full_key_enum): short_list = [] for the_key in full_key_enum: # print(the_key) if keyword in the_key: short_list.append(the_key) return(short_list) my_ratios = return_shortened_list('Ratio', s_dict.keys()) my_ratios full_list = s_dict.keys() my_operating = return_shortened_list('oper', full_list) my_operating my_margins = return_shortened_list('Margin', full_list) my_margins my_prices = return_shortened_list('Price', full_list) my_prices import pandas as pd def build_df(my_list): df_dict={} for stock_element in my_list: #print(stock_element) val = s_dict.get(stock_element) print(stock_element, val) df_dict[stock_element] = val df=pd.DataFrame(df_dict, index=[0]) return df my_price_df = build_df(my_prices) my_price_df my_price_df.dtypes # + my_ratio_df = build_df(my_ratios) my_margin_df = build_df(my_margins) my_operating_df = build_df(my_operating) # - my_ratio_df my_margin_df my_operating_df s_dict['exchange'] s_dict['priceToBook'] s_dict['pegRatio'] s_dict['dividendYield'] s_dict['fundFamily'] # industry s_dict['industry'] for key in s_dict.keys(): print(key) s_dict['market'] s_dict['morningStarRiskRating'] #annualHoldingsTurnover s_dict['annualHoldingsTurnover'] s_dict['longName'] s_dict['shortName'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Demonstrates the usage of the interactive input form # The goal of the interactive input form is to assist the simulator. In a user friendly way, it aims to get an estimate from the user of the `z2jh_cost_simulator`, about for example the maximum users for every given hour during a full day. 
# For live changes to the z2jh_cost_simulator package - by avoiding caching # %load_ext autoreload # %autoreload 2 # + from z2jh_cost_simulator.input_form import InteractiveInputForm workweek_day = InteractiveInputForm() weekend_day = InteractiveInputForm() display(workweek_day.get_input_form("Maximum number of users per day on a weekday")) display(weekend_day.get_input_form("Maximum number of users per day on a weekend")) # - workweek_day.get_data() weekend_day.get_data() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="YOXixjyxYnh6" colab_type="code" colab={} #Google Colab 上の場合はインストールが必要 # !pip install simpleOption # + id="bkZWsOchYmgw" colab_type="code" colab={} #グラフ描画等に必要なライブラリ読込 from simpleOption import * import pandas as pd import numpy as np % matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_style('darkgrid') # + id="qMbEwYSaYmhD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2f124b63-73de-4119-cc66-b5b0ef0d327b" # simlpe な使い方の例(理論価格計算) op = Option('02/P20000') setting(20250, 26, 20190204) #マーケット情報(IV26%と仮定) print(f'理論価格={op.v():.2f}' ) # + id="FBC43y5sZRis" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="ea7afa67-f9f9-46b2-ddef-72f1b496c183" #簡単なグラフの例 x = np.arange(19000, 21000) #グラフを描く範囲(日経平均価格範囲) plt.plot(x, np.vectorize(op.v)(x), label= op ) plt.legend(loc="best") # + id="cPOqlb_AYmhb" colab_type="code" colab={} outputId="3b77be7e-0fd5-4360-b0fb-b4b76bdbef13" # + id="hKX1rpXBYmhl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="5df1f808-60ff-4a61-f614-af1405e7b264" # 簡単な複合ポジのグラフ描画例 p = Portfolio( """ 02/C21000[1] 02/C21250[-2] 02/C21500[1] """) setting(20250, 26, 20190204) #マーケット情報設定(IV26%と仮定) x = np.arange(20000, 22000) #グラフを描く範囲(日経平均価格範囲) setting(20250, 26, 20190204) #マーケット情報(IV26%と仮定) plt.plot(x, np.vectorize(p.v)(x), label= 'Batafly_feb04' ) setting(evaluationDate=20190207) #日付を経過させてみる plt.plot(x, np.vectorize(p.v)(x), label= 'Batafly_feb07' ) plt.plot(x, np.vectorize(p.pay)(x), label= 'Payoff',linestyle="dashed" ) plt.legend(loc="best") # + id="vNfhGb2nYmhv" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: python3 (dev) # language: python # name: dev # --- # # Recursion Notes and Algorithms # Notes taken from: Data Structures and Algorithms in Python 1st edition by Goodrich, ., Tamassia, , Michael (2013) Hardcover Hardcover – 1732 from nbdev.showdoc import * from nbdev.imports import * # + # default_exp recursion # - # ## Examples # # Some examples of recursive functions # ### Factorial # + # exports # O(n) using n+1 activations each of which use O(1) operations def factorial(n): if n == 0: return 1 return n * factorial(n-1) # - test_eq(factorial(0), 1) test_eq(factorial(1), 1) test_eq(factorial(3), 6) # ### Draw Ruler # # English ruler similar to a fractal. Its shape has a self-repetitive structure. 
# # In general, an interval with a central tick length $L \le 1$ is composed of: # # * An interval with a central tick length $L − 1$ # * A single tick of length L # * An interval with a central tick length L − 1 # > Note: for $c \ge 0$ a call to draw_interval(c) results in precisely $2^c -1$ lines of output # # Base case draw_interval(0): $2^0 -1$ = 0 lines of output # # Number of lines printed by draw_interval(c) is one more than twice the number generated by a call to draw_interval(c-1) # # > $1 + 2(2^{c-1} -1) = 1 + 2^c - 2 = 2^c -1$ # + # exports # for c >=0 a call to draw_interval(c) results in precisely 2^c - 1 calls def draw_line(tick_length, tick_label=""): """Draw line with given tick length and optional label""" line = '-'*tick_length if tick_label: line += ' ' + tick_label print(line) # - draw_line(1) draw_line(2) draw_line(3) draw_line(4, "four") # exports def draw_interval(center_length): """Draw tick length based upon a central tick length""" if center_length > 0: draw_interval(center_length-1) draw_line(center_length) draw_interval(center_length-1) draw_interval(1) draw_interval(2) draw_interval(3) # exports def draw_ruler(num_inches, major_length): """draw english ruler with given number of inches and number of subdivisions""" draw_line(major_length, str(0)) for i in range(1, 1+ num_inches): draw_interval(major_length-1) draw_line(major_length, str(i)) draw_ruler(2,4) draw_ruler(0,4) # ### Binary Search # + # exports def binary_search(data, target, low, high): """return True if target is found""" if low > high: return False mid = (low + high) // 2 print(mid) if target == data[mid]: return True elif target < data[mid]: return binary_search(data, target, low, mid-1) else: return binary_search(data, target, mid+1, high) # - binary_search([0,1,2,3,4,5,6,7],6,0,7) # ## File Systems # ```python # Algorithm DiskUsage(path): # # Input: A string designating a path to a file-system entry # Output: The cumulative disk space used by that entry and any nested entries # total = size(path) # {immediate disk space used by the entry} # if path represents a directory then # for each child entry stored within directory path do # total = total + DiskUsage(child) #{recursive call} # return total # ``` # Useful module is os # # * os.path.getsize(path): returns immediate disk usage used by file or directory identified as path # * os.path.isdir(path): returns true if designated string path is a directory # * os.listdir(path): returns list of strings that are names of all entries within a directory # * os.path.join(path, filename): joins a path and a filename import os # exports def disk_usage(path): """returns disk usage for a path""" total = os.path.getsize(path) if os.path.isdir(path): for filename in os.listdir(path): child_path = os.path.join(path, filename) total += disk_usage(child_path) print(f"{total:<10} {path}") return total # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="NYbON1c1ImwA" # # Aula 1 # + [markdown] id="KVZOBoSHJL2_" # ## Colab e Kaggle # + [markdown] id="zegDJZcnInbu" # Nós vamos usar uma base de dados do Kaggle chamada [Fraud Detection Example](https://www.kaggle.com/gopalmahadevan/fraud-detection-example) e ela tem uma fração de dados do [PaySim](https://github.com/EdgarLopezPhD/PaySim), um simulador de dados financeiros feito exatamente para detecção de fraude. 
# + [markdown] id="CgXdNw8NInyO" # **Variáveis do dataset** # # **step** - mapeia uma unidade de tempo no mundo real. Neste caso, 1 passo é 1 hora de tempo. Total de etapas 744 (simulação de 30 dias). # # **type** - CASH-IN, CASH-OUT, DEBIT, PAYMENT and TRANSFER. # (caixa-de-entrada, caixa-de-saida, débito, pagamento e transferência) # # **amount** - valor da transação em moeda local. # # **nameOrig** - cliente que iniciou a transação # # **oldbalanceOrg** - saldo inicial antes da transação # # **newbalanceOrig** - novo saldo após a transação # # **nameDest** - cliente que é o destinatário da transação # # **oldbalanceDest** - destinatário do saldo inicial antes da transação. # Observe que não há informações para clientes que começam com M (Comerciantes). # # **newbalanceDest** - novo destinatário do saldo após a transação. Observe que não há informações para clientes que começam com M (Comerciantes). # # **isFraud** - São as transações feitas pelos agentes fraudulentos dentro da simulação. Neste conjunto de dados específico, o comportamento fraudulento dos agentes visa lucrar ao assumir o controle das contas dos clientes e tentar esvaziar os fundos transferindo para outra conta e depois sacando do sistema. # # **isFlaggedFraud** - O modelo de negócios visa controlar transferências massivas de uma conta para outra e sinaliza tentativas ilegais. Uma tentativa ilegal neste conjunto de dados é uma tentativa de transferir mais de 200.000 em uma única transação. # # + [markdown] id="ZJjM0peYKpoX" # # Aula 2 # + [markdown] id="WvF1j4BXKtPI" # ## Análise com Pandas # + id="Rlx_Y8c8Kaf_" import pandas as pd # + colab={"base_uri": "https://localhost:8080/", "height": 424} id="5CechMX5KzWo" outputId="cd990986-54c0-485e-d5fc-25fd9619f42b" df = pd.read_csv('/content/fraud_dataset_example.csv') df # + colab={"base_uri": "https://localhost:8080/"} id="XXWS0Wgxu8ai" outputId="04ece531-6d3b-4e34-ad54-2a1aad5f877a" df.columns # + [markdown] id="UkiZK5ecLf-P" # ### Trazendo as colunas de fraude para o começo do dataset # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="Zfe1fPk8K3N-" outputId="41c92c02-9660-4f32-9a4e-95c83cc013db" df = df[['isFraud', 'isFlaggedFraud', 'step', 'type', 'amount', 'nameOrig', 'oldbalanceOrg', 'newbalanceOrig', 'nameDest', 'oldbalanceDest', 'newbalanceDest']] df.head() # + [markdown] id="aDu_pDqULo2v" # ### Renomeando as colunas # + [markdown] id="veR6FVfPo9hY" # Criando um dicionário # ``` # colunas = { # 'isFraud': 'fraude', # 'isFlaggedFraud':'super_fraude', # 'step':'tempo', # 'type':'tipo', # 'amount':'valor', # 'nameOrig':'cliente1', # 'oldbalanceOrg':'saldo_inicial_c1', # 'newbalanceOrig':'novo_saldo_c1', # 'nameDest':'cliente2', # 'oldbalanceDest':'saldo_inicial_c2', # 'newbalanceDest':'novo_saldo_c2', # } # ``` # + id="WVv3nNpVK382" colunas = { 'isFraud': 'fraude', 'isFlaggedFraud':'super_fraude', 'step':'tempo', 'type':'tipo', 'amount':'valor', 'nameOrig':'cliente1', 'oldbalanceOrg':'saldo_inicial_c1', 'newbalanceOrig':'novo_saldo_c1', 'nameDest':'cliente2', 'oldbalanceDest':'saldo_inicial_c2', 'newbalanceDest':'novo_saldo_c2', } # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="upO8-lWoK43d" outputId="a90ebd9d-38f9-42c6-a288-11ea557c1429" df = df.rename(columns = colunas) df.head() # + [markdown] id="tVJ4ujv4Ltcf" # ### Outras informações do dataset # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Jycj1niNK5o2" outputId="d8842333-a307-4b5e-de02-c74d903986e2" df.describe() # + [markdown] id="z-_wceH5MOTv" # O 
método describe() fornece as informações sobre: # # **count** - Conta a quantidade de número de valores não vazios. Com esses valores podemos entender melhor o tamanho da amostra. # # **mean** - O valor médio, em média aritmética. Como ele faz uma média aritmética nem sempre mostra a realidade da maior parte dos casos do banco de dados. # # **std** - O desvio padrão. É a medida de como os dados se dispersam em relação à média, ou seja, o quanto eles estão espalhados. # # **min** e **max** - Valores que auxiliam a identificar a amplitude da amostra, entre o valor mínimo e máximo. # # **quartis** - Valores que nos mostram de que forma os dados foram distribuídos, por exemplo em 50% é a mediana e metade dos valores são inferiores a X valor, a outra metade é superior àquele valor. # # Para saber mais sobre esse método, acesse o artigo [Ampliando a análise com o Describe](https://www.alura.com.br/artigos/ampliando-a-analise-com-describe). # # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Djo35BPUK8EO" outputId="0f2ed2f1-a7e5-4cd3-f3ed-0a1938e974cd" df.describe().T # + colab={"base_uri": "https://localhost:8080/"} id="vA7jiRlMK83_" outputId="d7ccddac-9296-4e5f-e97b-1d1ab3f18898" df.shape # + colab={"base_uri": "https://localhost:8080/"} id="IznxOdQsy7af" outputId="2fc15746-76d8-4d04-dd75-d5402b6b4f4d" df.info() # + [markdown] id="eJazaOp9LFMQ" # ### Verificando a variável target # + colab={"base_uri": "https://localhost:8080/"} id="f22w57FzLKuM" outputId="b6641c5b-6a9c-4205-b88b-0c55fd862df2" df.groupby('fraude').tempo.count() # + colab={"base_uri": "https://localhost:8080/"} id="PRlXlN2QLLXP" outputId="44e65488-3e69-46bd-b662-3f70b7360b04" df.isnull().values.any() # + [markdown] id="RBpETqBaMizu" # ## Encoding # + [markdown] id="KrCm9qJEisBI" # ### Pandas Profiling # + [markdown] id="Jchv8BEBxojL" # Instalando o Pandas Profiling # # ```!pip install -U pandas-profiling``` # + id="fHrjk6e4Mlv0" # + id="NzOgDCuSMnNP" # + [markdown] id="irDrx1CuiuKZ" # ### Aplicando o Encoding # + [markdown] id="nrkpbqfHnfdQ" # #### **Tipos de encoding** # # **Label Encoding** - Renomea as classes com valores numéricos de 1 a **n**, sendo n o número de classes. Pode existir hierarquia entre as classes. # # **One-Hot Encoding** - Transforma as variáveis em **n** colunas binárias, sendo n o número de classes. Todas as classes são analisadas de forma igual, quando tiver a ocorrência dela a coluna terá o valor 1 e quando não o valor 0, isso acontece para as demais colunas criadas. 
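# A minimal sketch (not part of the original course notebook) of the two encodings described above, applied to a hand-written sample of the `tipo` column; the variable names here are illustrative only.

# +
import pandas as pd

exemplo = pd.DataFrame({'tipo': ['CASH_OUT', 'PAYMENT', 'TRANSFER', 'PAYMENT']})

# Label Encoding: each transaction type becomes an integer code (may imply an order)
exemplo['tipo_label'] = exemplo['tipo'].astype('category').cat.codes

# One-Hot Encoding: each transaction type becomes its own 0/1 column
exemplo_onehot = pd.get_dummies(exemplo['tipo'], prefix='tipo')

print(exemplo)
print(exemplo_onehot)
# -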
# # # + id="zXdieOLkNA8O" # + [markdown] id="HwOXYrTCosAQ" # #### Removendo variáveis # + id="FMjzuys9NCVn" # + [markdown] id="2dMihrBKNIEF" # # Aula 3 # + [markdown] id="pfWzAWlrhfYR" # ## Regressão Logística # + id="A1rnkJ59NJ_2" # + [markdown] id="j44GLj2AhilI" # ## Balanceamento de dados # + id="3s33yYyhhmVn" # + [markdown] id="Kyvew0CchnhR" # ## Formulando as hipóteses # + id="IbcNWU4vhqGY" # + [markdown] id="9sz-sfx4icAQ" # **Inserir as hipóteses aqui!** (dê um duplo clique na célula) # + [markdown] id="Lxuitc62hxuB" # # Aula 4 # + [markdown] id="Sxc3b5Qkh2th" # ## Árvore de Decisão # + id="bgRmKIfahy2q" # + [markdown] id="TVqmiB4yhzbg" # ## Random Forest # + id="tiZa5pIhh2Vs" # + [markdown] id="-gdlvQUjh6WD" # ## Análise de Métricas # + id="fshZb8kBh8Z9" # + [markdown] id="8r6EWH_Rh9Y4" # # Aula 5 # + [markdown] id="I6gTbG8hiAR5" # ## Melhorando o modelo # + id="FWHfVIvxh-SC" # + [markdown] id="yt-E3yVUh_PR" # ## Resultados Finais # + id="4yJaNnRsiFI5" # + [markdown] id="hmI3dXt5iFf5" # ## Conclusão # + [markdown] id="Ofyqowo8iI8x" # **Inserir as soluções para cada hipótese aqui!** (dê um duplo clique na célula) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="iyMFL8dK3StB" colab={"base_uri": "https://localhost:8080/"} outputId="3added65-6be9-4b5f-ea02-ff3431f927f5" # !pip install sentencepiece # + id="e0x1Zzn13YQW" import sentencepiece as spm import time # + colab={"base_uri": "https://localhost:8080/"} id="ajTf5tl9Iuai" outputId="4265530f-e0b4-4623-a24c-5a6331f4235c" from google.colab import drive drive.mount('/content/drive/') # + id="N6py_iBFAteN" colab={"base_uri": "https://localhost:8080/"} outputId="df7a86e6-fd87-430f-a776-df967abf9274" dataPath = 'drive/MyDrive/Colab_Notebooks/am-mono' print(dataPath) # + [markdown] id="KPMLMlcxKhd0" # # **Function that processes number of lines, blank lines, sentences, and words in a give file.** # + id="D-aRN9g5DQyv" def processFile(filename): lines, blanklines, sentences, words = 0, 0, 0, 0 try: file = open(dataPath, 'r') except IOError: print(f"Unable to open file '%s'" % dataPath) for line in file: lines += 1 if line.startswith('\n'): blanklines += 1 else: # assuming that each sentence ends with :: or ? 
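# '።' below is the Ethiopic full stop that marks the end of an Amharic sentence, hence it is counted alongside '?'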
sentences += line.count('።') + line.count('?') tempwords = line.split(None) words += len(tempwords) file.close() print(f"-"*50) print(f"Lines : {lines:,d}") print(f"Blank lines : {blanklines:,d}") print(f"Sentences : {sentences:,d}") print(f"Words : {words:,d}") # + colab={"base_uri": "https://localhost:8080/"} id="H4fAlF1wGWsV" outputId="a0416faf-8d94-4e35-c14a-126342d7cf6e" processFile(dataPath) # + [markdown] id="Ml5U20NDK7Th" # # **Train sentencePiece for Amharic using BPE (Byte Pair Encoding)** # + colab={"base_uri": "https://localhost:8080/"} id="Y8CcKZVZ6h-5" outputId="c19377a3-ea44-4a7e-d1b6-5be372ac6ada" start_time = time.time() spm.SentencePieceTrainer.train(input=dataPath, model_prefix='am-bpe', vocab_size=16000, model_type="bpe", character_coverage=1.0) print("--- %s seconds ---" % (time.time() - start_time)) # + colab={"base_uri": "https://localhost:8080/"} id="7d1K4guC7dB2" outputId="f1413394-fd0b-46c4-85fc-c23e98548b3b" model = spm.SentencePieceProcessor(model_file='am-bpe.model') model.encode('ለአማርኛ ተናጋሪዎች የቀረበ መረጃ።', out_type=str) # + [markdown] id="736gQDu3LkHg" # # **Train sentencePiece for Amharic using Unigram.** # + colab={"base_uri": "https://localhost:8080/"} id="-ro6Og0N_Z3t" outputId="a7a88f29-bc10-4c34-e649-8dd926c438d2" start_time = time.time() spm.SentencePieceTrainer.train(input=dataPath, model_prefix='am-unigram', vocab_size=16000, model_type="unigram", character_coverage=1.0) print("--- %s seconds ---" % (time.time() - start_time)) # + colab={"base_uri": "https://localhost:8080/"} id="3LWP782zAX58" outputId="305c435b-60f2-4d63-b0e6-75be67691889" model = spm.SentencePieceProcessor(model_file='am-unigram.model') model.encode('ለአማርኛ ተናጋሪዎች የቀረበ መረጃ።', out_type=str) # + [markdown] id="ta0__9mOLnYa" # # **Train sentencePiece for Amharic using charcters** # + colab={"base_uri": "https://localhost:8080/"} id="BZ8bOOfpBaUJ" outputId="b5ea48f9-3e6f-4fb2-9205-e979b5efcb48" start_time = time.time() spm.SentencePieceTrainer.train(input=dataPath, model_prefix='am-char', vocab_size=16000, model_type="char", character_coverage=1.0) print("--- %s seconds ---" % (time.time() - start_time)) # + colab={"base_uri": "https://localhost:8080/"} id="T3epS8_TBaUL" outputId="32b3ffff-48a0-46e6-b928-c2a0fbcb45b7" model = spm.SentencePieceProcessor(model_file='am-char.model') model.encode('ለአማርኛ ተናጋሪዎች የቀረበ መረጃ።', out_type=str) # + [markdown] id="gUAvPZTPLuYR" # # **Train sentencePiece for Amharic using words** # + colab={"base_uri": "https://localhost:8080/"} id="ETUjzrjDBxDk" outputId="e20f68de-2a14-49a3-9b79-dd7cbe713154" start_time = time.time() spm.SentencePieceTrainer.train(input=dataPath, model_prefix='am-word', vocab_size=16000, model_type="word", character_coverage=1.0) print("--- %s seconds ---" % (time.time() - start_time)) # + colab={"base_uri": "https://localhost:8080/"} id="JaYURM8uBxDm" outputId="2c2b1602-381a-4200-a6d0-1a82eed29bd9" model = spm.SentencePieceProcessor(model_file='am-word.model') model.encode('ለአማርኛ ተናጋሪዎች የቀረበ መረጃ።', out_type=str) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="pWWAuDqukdyj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 651, "referenced_widgets": ["9a40d6298e5245b78b004bb9a2cd8ee7", "94f4c1242def40e79ceca38b68c950ee", "c52e0e0c7a1f4b2090c5f2ec11877c3e", 
"14f32865008644b49f67a1457b9c498a", "35af5368826a405381b1c2901deac07b", "f242f4292e5f4d6da5901c230f6d6697", "223aa01f04a6403584bc9e519d66e133", "0e25fe1acd0a48db8e3f3baf5e8948ae", "ee0ad9ec2fa5440e96ed515c862d9a00", "dc496a4b58ef4805aba9fd1911394cd7", "7d76865129ff4537b00974d835b39af6", "", "9f193b4f02964be28897c653addab2e2", "", "", "f46723c9abe8433890783856f7e42d38", "", "9be074cfd647476dbf0a6b396a43f04a", "89b0d5ef4a934ce1bdc9c9cd5cc6209e", "", "", "3083c2ba81b341ae8d9f7fed49bda3d9", "9f9d66f8acec45adb9856dccaf402686", "01284b3137e4401a879dd7b09df41ab7", "", "", "c1cba18894df4aa1a6e7fa33db5865a6", "", "c50dd3f6415e4de3bb1e5a08ccc544da", "3ea02d1106bb4918b96f6de302a4911c", "8c19029135134492baf8e671e0d2b951", "a6e1b7dc02bb47f4a0bc77fa8d2affe2"]} outputId="ce69e049-5274-49a7-8502-0b0b167ae813" import numpy as np import torch import torch.nn as nn import torch.optim as optim import torch.nn.init as init import torchvision.datasets as dset import torchvision.transforms as transforms from torch.utils.data import DataLoader batch_size = 256 learning_rate = 0.0002 num_epoch = 10 mnist_train = dset.FashionMNIST("../", train=True, transform=transforms.ToTensor(), target_transform=None, download=True) mnist_test = dset.FashionMNIST("../", train=False, transform=transforms.ToTensor(), target_transform=None, download=True) train_loader = torch.utils.data.DataLoader(mnist_train,batch_size=batch_size, shuffle=True,num_workers=2,drop_last=True) test_loader = torch.utils.data.DataLoader(mnist_test,batch_size=batch_size, shuffle=False,num_workers=2,drop_last=False) class Linear(nn.Module): def __init__(self): super(Linear,self).__init__() self.layer = nn.Sequential( nn.Linear(784,300), nn.ReLU(), nn.Linear(300,100), nn.ReLU(), nn.Linear(100,10) ) def forward(self,x): out = x.view(-1, 784) out = self.layer(out) return out device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) model = Linear().to(device) loss_func = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) loss_arr =[] for i in range(num_epoch): for j,[image,label] in enumerate(train_loader): x = image.to(device) y_= label.to(device) optimizer.zero_grad() output = model.forward(x) loss = loss_func(output,y_) loss.backward() optimizer.step() if j % 1000 == 0: print(loss) loss_arr.append(loss.cpu().detach().numpy()) correct = 0 total = 0 pred_labels = [] with torch.no_grad(): for image,label in test_loader: x = image.to(device) y_= label.to(device) output = model.forward(x) _,output_index = torch.max(output,1) total += label.size(0) correct += (output_index == y_).sum().float() pred = output_index.to('cpu').numpy() pred_labels.extend(pred) print("Accuracy of Test Data: {}".format(100*correct/total)) np.savetxt('submit.txt', pred_labels, fmt='%d') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import os from itertools import product import numpy as np import re import matplotlib.pyplot as plt import matplotlib import seaborn as sb font = {'family' : 'Arial', 'size' : 14} matplotlib.rc('font', **font) matplotlib.rc('ytick', labelsize=14) matplotlib.rc('xtick', labelsize=14) matplotlib.rcParams["figure.dpi"] = 140 # - folder = 'downsample_metrics/' filenames = os.listdir(folder) methods = ['X_scvi', 'X_ldvae', 'X_expimap_1', 'X_cdec_lr_0.05_pca_True'] p = 
re.compile("scIB_IE_(.*)_B_donor_CT_celltype.*") dct = {} for file in filenames: dat = pd.read_pickle(folder+file) ds = dat.pop('adata_name') method = p.search(file).group(1) df = pd.DataFrame(dat, index=[method]) if ds in dct: dct[ds] = dct[ds].append(df) else: dct[ds] = df for ds in dct: dct[ds]['overall'] = dct[ds].mean(axis=1) metrics_overall = None for ds in dct: df = pd.DataFrame(dct[ds]['overall'].to_dict(), index=[ds]) metrics_overall = df if metrics_overall is None else metrics_overall.append(df) dct.keys() dct['blood_ssample_0_5.h5ad'].loc[methods].T.plot.bar(figsize=(11,9), colormap='viridis', ylim=(0,1)) metrics_overall.loc[:, methods].plot.bar(figsize=(10,8), colormap='viridis', ylim=(0,1)) metrics_overall.index = ['0.005', '0.01', '0.05', '0.1', '0.25', '0.5', '1'] rnm = {'X_scvi': 'scVI', 'X_cdec_lr_0.05_pca_True': 'scVI (non-amortized)', 'X_expimap_1': 'expiMap', 'X_ldvae': 'LDVAE'} metrics_overall = metrics_overall.rename(columns=rnm) methods = ['expiMap', 'scVI', 'scVI (non-amortized)', 'LDVAE'] metrics_overall = metrics_overall.loc[:, methods] cm = matplotlib.cm.get_cmap('viridis') colors = [] for i in np.linspace(0, 1, 4): colors.append(cm(i)) colors[-2] = cm(0.7) colors[-1] = cm(0.92) metrics_overall.loc[:, methods].plot.line(style='.-', color=colors, figsize=(9,7)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using presamples with Brightway2 # # This Notebook is meant to accompany the official documentation on [readthedocs](https://presamples.readthedocs.io/en/latest/use_with_bw2.html). # # The official documentation provides more context, but does not show all the code (e.g. it doesn't show the creation of LCI databases, the formatting of matrices for display, etc.). # This Notebook contains all this extra code, but is much skimpier on context. You should probably read the official docs first, if you have not done so yet. # + [markdown] toc=true #

# Table of Contents
    # # - # ## Importing required modules # Principal modules import presamples as ps import brightway2 as bw bw.projects.set_current("presamples doc") # Other modules import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # The matrices used in the example are not sparse, so we ignore the SparseEfficiencyWarning import warnings from scipy.sparse import SparseEfficiencyWarning warnings.filterwarnings("ignore", category=SparseEfficiencyWarning) # ## Formating functions # The following functions were written to format some of the objects that are encountered in LCA. # # They are of limited interest and can be skipped. # + # Format matrices to DataFrame def format_matrices(database=None, demand=None, lca=None, simple=True): if lca is None: if demand is None: act = bw.Database(database).random() demand={act:act.get('production amount', 1)} lca = bw.LCA(demand) lca.lci() rev_activity_dict, rev_product_dict, rev_bio_dict = lca.reverse_dict() def get_name_with_units(act_key, simple): act = bw.get_activity(act_key) if simple: return "\n{}\n{}\n".format(act.key, act['unit']) else: return "\n{} ({})\n{}\n".format(act['name'], act['unit'], act.key) col_names = [get_name_with_units(rev_activity_dict[i], simple) for i in np.arange(lca.inventory.shape[1])] techno_row_names = [get_name_with_units(rev_product_dict[i], simple) for i in np.arange(lca.technosphere_matrix.shape[0])] bio_row_names = [get_name_with_units(rev_bio_dict[i], simple) for i in np.arange(lca.biosphere_matrix.shape[0])] lca.demand_array = np.eye(lca.technosphere_matrix.shape[0]) A_formatted = pd.DataFrame(index=techno_row_names, columns=col_names, data=lca.technosphere_matrix.todense()) B_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=lca.biosphere_matrix.todense()) S = lca.solve_linear_system() G = lca.biosphere_matrix * S invA_formatted = pd.DataFrame(index=col_names, columns=techno_row_names, data=S) G_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=G) return A_formatted, B_formatted, invA_formatted, G_formatted # Format matrices to figure def matrix_to_plot(matrix_as_df, return_fig=True, title=None, save_path=None, title_size=14, scaling_ratio=10): w = 2 + matrix_as_df.shape[1]/scaling_ratio h = 2 + matrix_as_df.shape[0]/scaling_ratio plt.figure(figsize=(w,h)) matrix_plot = sns.heatmap( matrix_as_df, annot=True, cbar=False, cmap=(['white']), mask=(matrix_as_df==0).values, linewidths=1, linecolor='grey', square=True ) fig = matrix_plot.get_figure() if title: plt.title(title, fontsize=title_size) if save_path: fig.savefig(save_path, bbox_inches="tight", ext="jpg") if return_fig: return fig # - # ## Biosphere database bio = bw.Database("bio") bio.register() bio.write({ ("bio", "emission"): { 'categories': ['water'], 'name': 'Some emission', 'type': 'emission', 'unit': 'kg' }, ("bio", "water in"): { 'categories': ['natural resource'], 'exchanges': [], 'name': 'Water in', 'type': 'natural resource', 'unit': 'm3' }, ("bio", "water out"): { 'categories': ['water'], 'exchanges': [], 'name': 'Water out', 'type': 'emission', 'unit': 'm3' }, ("bio", "land from"): { 'categories': ('natural resource', 'land'), 'exchanges': [], 'name': 'Transformation, from x', 'type': 'natural resource', 'unit': 'm2' }, ("bio", "land to"): { 'categories': ('natural resource', 'land'), 'exchanges': [], 'name': 'Transformation, to y', 'type': 'natural resource', 'unit': 'm2' }, }) # ## LCIA methods # + # For pollutants m_name = ("mock method", "pollutant 
emission") bw.Method(m_name).register() m_as_method = bw.Method(m_name) m_as_method.metadata['unit'] = "kg_eq emission" data_as_list = [(('bio', 'emission'), 1)] m_as_method.write(data_as_list) # For water use m_name = ("mock method", "water use") bw.Method(m_name).register() m_as_method = bw.Method(m_name) m_as_method.metadata['unit'] = "m3 water" data_as_list = [ (('bio', 'water in'), 1), (('bio', 'water out'), -1) ] m_as_method.write(data_as_list) # For land transformation m_name = ("mock method", "land transformation") bw.Method(m_name).register() m_as_method = bw.Method(m_name) m_as_method.metadata['unit'] = "m2 land" data_as_list = [ (('bio', 'land from'), 1), (('bio', 'land to'), -0.5) ] m_as_method.write(data_as_list) # - # ## Example 1: - Static scenario analysis: changing supplier # # Accompanies the [corresponding section of the ReadTheDocs](https://presamples.readthedocs.io/en/latest/use_with_bw2.html#bw2_eg1) # ### Writing initial database # Writing database my_db = bw.Database('db1') my_db.register() my_db.write({ ('db1', 'a1'): { 'type': 'process', 'name': 'Production A1', 'unit': 'kg', 'location': 'GLO', 'reference product': 'A', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'comment': 'represents cradle-to-gate LCI', 'exchanges': [ { 'name': 'A', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'a1'), 'type': 'production', }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 2, 'input': ('bio', 'emission'), 'type': 'biosphere', }, ], }, ('db1', 'a2'): { 'type': 'process', 'name': 'Production A2', 'unit': 'kg', 'location': 'GLO', 'reference product': 'A', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'comment': 'represents cradle-to-gate LCI', 'exchanges': [ { 'name': 'A', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'a2'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 1, 'input': ('bio', 'emission'), 'type': 'biosphere', }, ], }, ('db1', "b"): { 'type': 'process', 'name': 'Producer b', 'unit': 'kg', 'location': 'GLO', 'reference product': 'b', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'exchanges': [ { 'name': 'b', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'b'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'A', 'unit': 'kg', 'amount': 0.6, 'input': ('db1', 'a1'), 'type': 'technosphere', }, { 'name': 'A', 'unit': 'kg', 'amount': 0.4, 'input': ('db1', 'a2'), 'type': 'technosphere', }, ], }, }) # ### Initial system # **Initial product system**: # ![bw2_eg1_ps_initial](../source/images/bw2_eg1a.jpg) # **Initial A matrix**: A, _, _, _ = format_matrices(database='db1', simple=True) matrix_to_plot(A, return_fig=True, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../source/images/bw2_eg1_A_orig.jpg"); # ### Scenario # ![bw2_eg1_ps_initial](../source/images/bw2_eg1b.jpg) # ### matrix_data scenario_array = np.array( [ 1, # New value for exchange between ('db1', 'a2') and ('db1', 'b') 0 # New value for exchange between ('db1', 'a1') and ('db1', 'b') ]).reshape(-1, 1) scenario_indices = [ (('db1', 'a2'), ('db1', 'b'), 'technosphere'), (('db1', 'a1'), ('db1', 'b'), 'technosphere') ] scenario_matrix_data = [(scenario_array, scenario_indices, 'technosphere')] # ### Creating presamples package scen_pp_id, scen_pp_path = ps.create_presamples_package( matrix_data = scenario_matrix_data, ) scen_pp_path # ### Using presamples in an LCA # #### Without presamples # LCA without presamples lca_wo = bw.LCA(demand={('db1', 'b'): 1}) 
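# load_lci_data() builds the technosphere and biosphere matrices; with no presamples package attached to this LCA object, the matrices keep the values from the database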
lca_wo.load_lci_data() A_wo, _, _, _ = format_matrices(lca=lca_wo) matrix_to_plot(A_wo, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../source/images/bw2_eg1_A_without_ps.jpg"); # #### With presamples # LCA with presamples lca_w = bw.LCA(demand={('db1', 'b'): 1}, presamples=[scen_pp_path]) lca_w.load_lci_data() A_w, _, _, _ = format_matrices(lca=lca_w) matrix_to_plot(A_w, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../source/images/bw2_eg1_A_with_ps.jpg"); # #### Presamples are not persistent # LCA without presamples lca_wo = bw.LCA(demand={('db1', 'b'): 1}) lca_wo.load_lci_data() A_wo, _, _, _ = format_matrices(lca=lca_wo) matrix_to_plot(A_wo); # ## Example 2 - Using presamples for time series # **Context** # Supply of a varies over time: # # # **matrix_data** time_array = np.array( [ [0.9, 0.8, 0.6, 0.3, 0.6, 0.5],#, 0.9, 1, 0.9, 1, 0.8, 0.6, 0.4, 0.2], [0.1, 0.2, 0.4, 0.7, 0.4, 0.5]#, 0.1, 0, 0.1, 0, 0.2, 0.4, 0.6, 0.8] ] ) time_array.shape time_indices = [ (('db1', 'a2'), ('db1', 'b'), 'technosphere'), (('db1', 'a1'), ('db1', 'b'), 'technosphere') ] time_matrix_data = [(time_array, time_indices, 'technosphere')] # **create presamples package** time_pp_id, time_pp_path = ps.create_presamples_package( matrix_data = time_matrix_data, seed='sequential' ) # **LCA** lca = bw.LCA({('db1', 'b'):1}, presamples=[time_pp_path], method=("mock method", "pollutant emission")) lca.lci() A, _, _, _ = format_matrices(lca=lca) matrix_to_plot(A, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../source/images/bw2_eg2_A0.jpg"); lca.presamples.update_matrices() A, _, _, _ = format_matrices(lca=lca) matrix_to_plot(A, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../source/images/bw2_eg2_A1.jpg"); lca = bw.LCA({('db1', 'b'):1}, presamples=[time_pp_path]) print("Times updated\tIndex value\tInput from a1\tInput from a2") for i in range(10): if i == 0: lca.lci() else: lca.presamples.update_matrices() from_a1 = lca.technosphere_matrix[ lca.product_dict[('db1', 'a1')], lca.activity_dict[('db1', 'b')] ] from_a2 = lca.technosphere_matrix[ lca.product_dict[('db1', 'a2')], lca.activity_dict[('db1', 'b')] ] index_value = lca.presamples.matrix_indexer[0].index print(i, "\t\t", index_value, "\t\t", from_a1, "\t\t", from_a2) lca = bw.LCA({('db1', 'b'):1}, presamples=[time_pp_path], method=("mock method", "pollutant emission")) for i in range(6): if i == 0: lca.lci() lca.lcia() else: lca.presamples.update_matrices() lca.redo_lci() lca.redo_lcia() print(i, lca.score) time_not_seq_pp_id, time_not_seq_pp_path = ps.create_presamples_package( matrix_data = time_matrix_data, seed=42 ) lca = bw.LCA({('db1', 'b'):1}, presamples=[time_not_seq_pp_path]) print("Times updated\tIndex value\tInput from a1\tInput from a2") for i in range(10): if i == 0: lca.lci() else: lca.presamples.update_matrices() from_a1 = lca.technosphere_matrix[ lca.product_dict[('db1', 'a1')], lca.activity_dict[('db1', 'b')] ] from_a2 = lca.technosphere_matrix[ lca.product_dict[('db1', 'a2')], lca.activity_dict[('db1', 'b')] ] index_value = lca.presamples.matrix_indexer[0].index print(i, "\t\t", index_value, "\t\t", from_a1, "\t\t", from_a2) # ## Example 3 # Before: # ![bw2_eg3_ps_initial](../source/images/bw2_eg3_before.jpg) # After: # ![bw2_eg3_ps_after](../source/images/bw2_eg3_after.jpg) # Writing database my_db = bw.Database('db2') my_db.register() my_db.write({ ('db2', 'a'): { 'type': 'process', 'name': 'Production a', 'unit': 'kg', 'location': 'GLO', 'reference product': 'A', 'production amount': 1, 'activity 
type': 'ordinary transforming activity', 'comment': 'represents cradle-to-gate LCI', 'exchanges': [ { 'name': 'A', 'unit': 'kg', 'amount': 1.0, 'input': ('db2', 'a'), 'type': 'production', }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 0.5, 'input': ('bio', 'emission'), 'type': 'biosphere', }, ], }, ('db2', "b"): { 'type': 'process', 'name': '', 'unit': 'kg', 'location': 'GLO', 'reference product': 'b', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'exchanges': [ { 'name': 'b', 'unit': 'kg', 'amount': 1.0, 'input': ('db2', 'b'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'A', 'unit': 'kg', 'amount': 1, 'input': ('db2', 'a'), 'type': 'technosphere', }, { 'name': '', 'unit': 'kg', 'amount': 0.6, 'input': ('bio', 'emission'), 'type': 'biosphere', }, ], }, }) eg3_matrix_data = [ ( np.array([1.2]).reshape(1, 1), # Only one value, but array still needs to have two dimensions [(('db2', 'a'), ('db2', 'b'), 'technosphere')], 'technosphere' ), ( np.array([0.4]).reshape(1, 1), # Again, only one value [(('bio', 'emission'), ('db2', 'b')),], # No need to specify the exchange type, there is only one type 'biosphere' ) ] eg3_pp_id, eg3_pp_path = ps.create_presamples_package(matrix_data = eg3_matrix_data) lca0 = bw.LCA({('db2', 'b'):1}, method=('mock method', 'pollutant emission')) lca1 = bw.LCA({('db2', 'b'):1}, method=('mock method', 'pollutant emission'), presamples=[eg3_pp_path]) lca0.lci() lca0.lcia() lca1.lci() lca1.lcia() lca1.score/lca0.score # ## Example 4 - Balancing sampled exchange values # Writing database my_db = bw.Database('db3') my_db.register() my_db.write({ ('db3', 'a'): { 'type': 'process', 'name': 'Production a', 'unit': 'kg', 'location': 'GLO', 'reference product': 'A', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'exchanges': [ { 'name': 'A', 'unit': 'kg', 'amount': 1.0, 'input': ('db3', 'a'), 'type': 'production', }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 3, 'input': ('bio', 'emission'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(3), 'scale': np.log(np.sqrt(1.2)), }, { 'name': 'fuel', 'unit': 'kg', 'amount': 1, 'input': ('db3', 'fuel'), 'type': 'technosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': np.log(np.sqrt(1.2)), }, ], }, ('db3', "fuel"): { 'type': 'process', 'name': 'fuel production', 'unit': 'kg', 'location': 'GLO', 'reference product': 'b', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'comment': 'Represents cradle-to-gate emissions', 'exchanges': [ { 'name': 'fuel', 'unit': 'kg', 'amount': 1.0, 'input': ('db3', 'fuel'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 0.5, 'input': ('bio', 'emission'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(0.5), 'scale': np.log(np.sqrt(1.2)), }, ], }, }) mc = bw.MonteCarloLCA({('db3', 'a'):1}, method=("mock method", "pollutant emission")) print("Fuel\t\tEmission\tRatio") for _ in range(10): arr[_]=next(mc) fuel = mc.technosphere_matrix[ mc.product_dict[('db3', 'fuel')], mc.activity_dict[('db3', 'a')], ] emission = mc.biosphere_matrix[ mc.biosphere_dict[('bio', 'emission')], mc.activity_dict[('db3', 'a')], ] print("{:.3}\t\t{:.3}\t\t{:.6}".format(-fuel, emission, -emission/fuel)) df = pd.DataFrame(columns=['Parameter', 'Balanced', 'Amount']) # + mc = bw.MonteCarloLCA({('db3', 'a'):1}, method=("mock method", "pollutant emission")) for i in range(1000): next(mc) fuel = mc.technosphere_matrix[ mc.product_dict[('db3', 'fuel')], 
mc.activity_dict[('db3', 'a')], ] emission = mc.biosphere_matrix[ mc.biosphere_dict[('bio', 'emission')], mc.activity_dict[('db3', 'a')], ] df=df.append({'Parameter': 'Fuel', 'Balanced': 'False', 'Amount': -fuel}, ignore_index=True) df=df.append({'Parameter': 'Emissions', 'Balanced': 'False', 'Amount': emission}, ignore_index=True) df=df.append({'Parameter': 'Ratio', 'Balanced': 'False', 'Amount': -emission/fuel}, ignore_index=True) # - fuel_consumption = np.random.lognormal(mean=np.log(1), sigma=np.log(np.sqrt(1.2)), size=1000) emissions = fuel_consumption * 3 balanced_samples = np.stack([fuel_consumption, emissions], axis=0) balanced_indices = [ (('db3', 'fuel'), ('db3', 'a'), 'technosphere'), (('bio', 'emission'), ('db3', 'a'), 'biosphere'), ] matrix_data = ps.split_inventory_presamples(balanced_samples, balanced_indices) bio_data = matrix_data[0] bio_data[0][0, 0:10], bio_data[1], bio_data[2] techno_data = matrix_data[1] techno_data[0][0, 0:10], techno_data[1], techno_data[2] balanced_id, balanced_path = ps.create_presamples_package( matrix_data=ps.split_inventory_presamples(balanced_samples, balanced_indices) ) mc_balanced = bw.MonteCarloLCA({('db3', 'a'):1}, method=("mock method", "pollutant emission"), presamples=[balanced_path]) for i in range(1000): next(mc_balanced) fuel = mc_balanced.technosphere_matrix[ mc_balanced.product_dict[('db3', 'fuel')], mc_balanced.activity_dict[('db3', 'a')], ] emission = mc_balanced.biosphere_matrix[ mc_balanced.biosphere_dict[('bio', 'emission')], mc_balanced.activity_dict[('db3', 'a')], ] df=df.append({'Parameter': 'Fuel', 'Balanced': 'True', 'Amount': -fuel}, ignore_index=True) df=df.append({'Parameter': 'Emissions', 'Balanced': 'True', 'Amount': emission}, ignore_index=True) df=df.append({'Parameter': 'Ratio', 'Balanced': 'True', 'Amount': -emission/fuel}, ignore_index=True) g = sns.FacetGrid(df, row="Balanced", col="Parameter", margin_titles=True) g.map(sns.boxplot, "Amount", orient='V') g.savefig(r"../source/images/eg4_plot.jpeg") ratio_balanced.min(), ratio_balanced.max(), fuel_consumption for i in range(10): next(mc) print("iteration: ", i,"\n\tindexer count: ", mc.presamples.matrix_indexer[0].count,"\n\tindexer index: ", mc.presamples.matrix_indexer[0].index) lca = bw.LCA({('db1', 'b'):1}, presamples=[time_pp_path], method=("mock method", "pollutant emission")) lca.presamples.reset_sequential_indices() print("iteration\tindex") for i in range(time_array.shape[1]): lca.presamples.update_matrices() lca.lci() print(i, "\t\t", lca.presamples.matrix_indexer[0].index) #print("iteration: ", i,"\n\tindexer count: ", lca.presamples.matrix_indexer[0].count,"\n\tindexer index: ", lca.presamples.matrix_indexer[0].index) for i in range(time_array.shape[1]): if i == 0: print(i, lca.presamples.matrix_indexer[0].count, lca.presamples.matrix_indexer[0].index) lca.lci() A, _, _, _ = format_matrices(lca=lca) matrix_to_plot(A) else: lca.presamples.update_matrices() print(i, lca.presamples.matrix_indexer[0].count, lca.presamples.matrix_indexer[0].index) lca.lci() A, _, _, _ = format_matrices(lca=lca) matrix_to_plot(A) # + if lca.presamples.update_package_indices() lca.presamples.matrix_indexer[0].count # - lca.lci() lca.presamples.update_matrices() lca.presamples.update_package_indices() lca.presamples.advance_indices lca = bw.LCA({('db1', 'b'):1}, presamples=[time_pp_path], method=("mock method", "pollutant emission")) for t in range(time_array.shape[1]): print("time step", t) lca.load_lci_data() lca.lci() lca.lcia() print(lca.score) A, _, _, _ = 
format_matrices(lca=lca) matrix_to_plot( A, return_fig=True, title="Technosphere matrix \n{}\nTime step {}".format("A", t), save_path=r"../source/images/eg2_A_{}.jpg".format(t)); indexer = lca.presamples.matrix_indexer[0] next(indexer) indexer.index # ## LCA matrices and the case for using presamples # At its barest expression, LCA models can be represented with three matrices and a vector: # # * the technosphere matrix $\mathbf{A}$, describing the links among activities in the technosphere (technosphere exchanges) # * the biosphere matrix $\mathbf{B}$, satellite matrix describing the exchanges between the activities and the environment (elementary flows) # * the characterization matrix $\mathbf{C}$, giving unit impact factors for elementary flows with the environment (characterisation factors) # * the final demand vector **f** # # An impact score per functional unit is given by $\mathbf{g} = \mathbf{CBA^{-1}f}$ # # Presamples can replace values in any these matrices as calculations are carried out. # # Storing and injecting specific values in LCA matrices can improve LCA calculations in many ways: # # * Storing and reusing data characterizing given scenarios makes scenario analysis much easier. # * It can easily integrate time series. # * It can use pre-generated static or stochastic values that were generated by complex, non-linear models, allowing the # LCA model to capture system dynamics more accurately. # * It is possible to account to correlation across parameters during Monte Carlo Simulations (e.g. for correlation # between characterization factors, between fuel use and CO2 emissions, etc. # * Since sampled data can be used directly, it is unnecessary to fit data to a distribution. # ## Defining the input matrix_data # GET FROM RST # ### Formating functions (Notebook version only) # The following functions were written to format some of the objects that are encountered in LCA. # These are presented here, but are excluded from the documentation found on [readthedocs](https://presamples.readthedocs.io/). # If you prefer a less cluttered view of the use of presamples, we suggest you visit the docs instead. 
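# As a tiny self-contained numerical sketch (made-up 2x2 matrices, unrelated to the example databases used below) of the relationship $\mathbf{g} = \mathbf{CBA^{-1}f}$ described above:

# +
import numpy as np

A = np.array([[1.0, -0.4],
              [0.0,  1.0]])      # toy technosphere matrix (2 activities)
B = np.array([[2.0, 1.0]])       # toy biosphere matrix (1 elementary flow)
C = np.array([[1.0]])            # toy characterization matrix (1 impact category)
f = np.array([1.0, 0.0])         # final demand vector

s = np.linalg.solve(A, f)        # scaling vector, s = A^{-1} f
g = C @ B @ s                    # impact score per functional unit
print(g)
# -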
# Import required modules import numpy as np import pandas as pd # The matrices used in the example are not sparse, so we ignore the SparseEfficiencyWarning import warnings from scipy.sparse import SparseEfficiencyWarning warnings.filterwarnings("ignore", category=SparseEfficiencyWarning) import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # + # Format matrices to DataFrame def format_matrices(database=None, demand=None, lca=None): if lca is None: if demand is None: act = bw.Database(database).random() demand={act:act.get('production amount', 1)} lca = bw.LCA(demand) lca.lci() rev_activity_dict, rev_product_dict, rev_bio_dict = lca.reverse_dict() def get_name_with_units(act_key): act = bw.get_activity(act_key) return "\n{} ({})\n{}\n".format(act['name'], act['unit'], act.key) col_names = [get_name_with_units(rev_activity_dict[i]) for i in np.arange(lca.inventory.shape[1])] techno_row_names = [get_name_with_units(rev_product_dict[i]) for i in np.arange(lca.technosphere_matrix.shape[0])] bio_row_names = [get_name_with_units(rev_bio_dict[i]) for i in np.arange(lca.biosphere_matrix.shape[0])] lca.demand_array = np.eye(lca.technosphere_matrix.shape[0]) A_formatted = pd.DataFrame(index=techno_row_names, columns=col_names, data=lca.technosphere_matrix.todense()) B_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=lca.biosphere_matrix.todense()) #TODO CHANGE NAMES S = lca.solve_linear_system() G = lca.biosphere_matrix * S invA_formatted = pd.DataFrame(index=col_names, columns=techno_row_names, data=S) G_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=G) return A_formatted, B_formatted, invA_formatted, G_formatted def format_matrices_simple(database=None, demand=None, lca=None): if lca is None: if demand is None: act = bw.Database(database).random() demand={act:act.get('production amount', 1)} lca = bw.LCA(demand) lca.lci() rev_activity_dict, rev_product_dict, rev_bio_dict = lca.reverse_dict() def get_name_with_units(act_key): act = bw.get_activity(act_key) return "\n{}\n{}\n".format(act.key, act['unit']) col_names = [get_name_with_units(rev_activity_dict[i]) for i in np.arange(lca.inventory.shape[1])] techno_row_names = [get_name_with_units(rev_product_dict[i]) for i in np.arange(lca.technosphere_matrix.shape[0])] bio_row_names = [get_name_with_units(rev_bio_dict[i]) for i in np.arange(lca.biosphere_matrix.shape[0])] lca.demand_array = np.eye(lca.technosphere_matrix.shape[0]) A_formatted = pd.DataFrame(index=techno_row_names, columns=col_names, data=lca.technosphere_matrix.todense()) B_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=lca.biosphere_matrix.todense()) #TODO CHANGE NAMES S = lca.solve_linear_system() G = lca.biosphere_matrix * S invA_formatted = pd.DataFrame(index=col_names, columns=techno_row_names, data=S) G_formatted = pd.DataFrame(index=bio_row_names, columns=col_names, data=G) return A_formatted, B_formatted, invA_formatted, G_formatted # Format matrices to figure def matrix_to_plot(matrix_as_df, return_fig=True, title=None, save_path=None, title_size=14, scaling_ratio=4): w = 2 + matrix_as_df.shape[1]/scaling_ratio h = 2 + matrix_as_df.shape[0]/scaling_ratio plt.figure(figsize=(w,h)) matrix_plot = sns.heatmap( matrix_as_df, annot=True, cbar=False, cmap=(['white']), mask=(matrix_as_df==0).values, linewidths=1, linecolor='grey', square=True ) fig = matrix_plot.get_figure() if title: plt.title(title, fontsize=title_size) if save_path: fig.savefig(save_path, bbox_inches="tight", ext="jpg") if 
return_fig: return fig # - import brightway2 as bw bw.projects.set_current("presamples doc") bio = bw.Database("bio") bio.register() bio.write({ ("bio", "emission"): { 'categories': ['water'], 'name': 'Some emission', 'type': 'emission', 'unit': 'kg' }, ("bio", "water in"): { 'categories': ['natural resource'], 'exchanges': [], 'name': 'Water in', 'type': 'natural resource', 'unit': 'm3' }, ("bio", "water out"): { 'categories': ['water'], 'exchanges': [], 'name': 'Water out', 'type': 'emission', 'unit': 'm3' }, ("bio", "land from"): { 'categories': ('natural resource', 'land'), 'exchanges': [], 'name': 'Transformation, from x', 'type': 'natural resource', 'unit': 'm2' }, ("bio", "land to"): { 'categories': ('natural resource', 'land'), 'exchanges': [], 'name': 'Transformation, to y', 'type': 'natural resource', 'unit': 'm2' }, }) # ## Example 1 - Static scenario analysis: changing supplier my_db = bw.Database('db1') my_db.register() my_db.write({ ('db1', "a1"): { 'type': 'process', 'name': 'a, producer a1', 'unit': 'kg', 'location': 'GLO', 'reference product': 'a', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'comment': 'Normal activity, uncertainty lognormal', 'exchanges': [ { 'name': 'a', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'a1'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 1, 'input': ('bio', 'emission'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': 0.1, }, { 'name': 'land from', 'unit': 'square meter', 'amount': 1, 'input': ('bio', 'land from'), 'type': 'biosphere', 'uncertainty type': 5, 'loc': 1, 'minimum': 0.5, 'maximum': 1.5, }, { 'name': 'land to', 'unit': 'square meter', 'amount': 1, 'input': ('bio', 'land to'), 'type': 'biosphere', 'uncertainty type': 5, 'loc': 1, 'minimum': 0.5, 'maximum': 1.5, }, { 'name': 'water in', 'unit': 'cubic meter', 'amount': 1, 'input': ('bio', 'water in'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': 0.1, }, { 'name': 'water out', 'unit': 'cubic meter', 'amount': 1, 'input': ('bio', 'water out'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': 0.1, } ], }, ('db1', "a2"): { 'type': 'process', 'name': 'a, producer a2', 'unit': 'kg', 'location': 'GLO', 'reference product': 'a', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'comment': 'Normal activity, uncertainty triangular', 'exchanges': [ { 'name': 't', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'a2'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'Some emission', 'unit': 'kg', 'amount': 2, 'input': ('bio', 'emission'), 'type': 'biosphere', 'uncertainty type': 5, 'loc': 2, 'minimum': 1, 'maximum': 3, }, { 'name': 'land from', 'unit': 'square meter', 'amount': 1, 'input': ('bio', 'land from'), 'type': 'biosphere', 'uncertainty type': 5, 'loc': 1, 'minimum': 0.5, 'maximum': 1.5, }, { 'name': 'land to', 'unit': 'square meter', 'amount': 1, 'input': ('bio', 'land to'), 'type': 'biosphere', 'uncertainty type': 5, 'loc': 1, 'minimum': 0.5, 'maximum': 1.5, }, { 'name': 'water in', 'unit': 'cubic meter', 'amount': 1, 'input': ('bio', 'water in'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': 0.1, }, { 'name': 'water out', 'unit': 'cubic meter', 'amount': 1, 'input': ('bio', 'water out'), 'type': 'biosphere', 'uncertainty type': 2, 'loc': np.log(1), 'scale': 0.1, } ], }, ('db1', "b"): { 'type': 'process', 'name': 'x, producer b', 'unit': 'kg', 'location': 'GLO', 'reference 
product': 'x', 'production amount': 1, 'activity type': 'ordinary transforming activity', 'exchanges': [ { 'name': 'x', 'unit': 'kg', 'amount': 1.0, 'input': ('db1', 'b'), 'type': 'production', 'uncertainty type': 0, }, { 'name': 'a', 'unit': 'kg', 'amount': 0.6, 'input': ('db1', 'a1'), 'type': 'technosphere', 'uncertainty type': 2, 'loc': np.log(0.6), 'scale': 0.1 }, { 'name': 'a', # input from a2 'unit': 'kg', 'amount': 0.4, 'input': ('db1', 'a2'), 'type': 'technosphere', 'uncertainty type': 2, 'loc': np.log(0.4), 'scale': 0.1 }, ], }, }) A_formatted, B_formatted, invA_formatted, G_formatted = format_matrices_simple(database='db1') matrix_to_plot(A_formatted, title=None, save_path=r"../source/images/bw2_eg1_A_orig.jpeg"); matrix_data = [ (np.array((0, 1)).reshape(-1, 1), [ (('db1', 'a1'), ('db1', 'b'), 'technosphere'), (('db1', 'a2'), ('db1', 'b'), 'technosphere'), ], 'technosphere' ) ] import presamples as ps _, pp_path = ps.create_presamples_package(matrix_data=matrix_data) pp_path lca = bw.LCA({('db1', 'b'): 1}, presamples=[pp_path]) A_formatted_w_ps, B_formatted_w_ps, invA_formatted_w_ps, G_formatted_w_ps = format_matrices_simple(lca=lca) matrix_to_plot(A_formatted_w_ps, save_path=r"../source/images/eg1_A_after.jpeg"); # ## Simple database used in documentation # We need some life cycle inventory (LCI) data to showcase the use of `presamples`. We use a very simple set of fake # activities contained in a database "db": # ![title](../data/A_as_diagram.jpg) # The importing of "db" in Brightway2 is done in another Notebook, available [here](https://github.com/PascalLesage/presamples/blob/master/docs/notebooks/Importing_sample_databases_for_documentation.ipynb). You need to run that other Notebook in order to import the data on your own computer. # Once imported, you can access the database by setting your current brightway project to the one where the data was imported ("presamples doc" if you followed along in the other notebook). # Import Brightway2 and switch to project with the sample databases: import brightway2 as bw bw.projects.set_current("presamples doc") bw.databases # The actual data contained in the database can be presented via the technosphere $\mathbf{A}$ and biosphere $\mathbf{B}$ matrices. In the following matrix images, row and column headers show both the name and the key of the activity or elementary flow. # Generate DataFrame versions of the matrices A, B, _, _ = format_matrices(database='db1') matrix_to_plot(A, return_fig=True, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../data/A.jpg"); # Plot the A matrix matrix_to_plot(A, return_fig=True, title="Technosphere matrix \n$\mathbf{A}$", save_path=r"../data/A.jpg"); # Plot the B matrix matrix_to_plot(B, return_fig=True, title="Biosphere matrix \n$\mathbf{B}$", save_path=r"../data/B.jpg"); # ## Run through # To create a presample package, one needs to pass: # - information about matrix indices, # - values for the cells at these matrix indices. # # Take a simple case: we want to (attributionally) analyse a scenario where production activity prod_A1 begins to send all its waste to treatment activity treat_W1. This can be done by replacing two numbers in the A matrix. 
# - The exchange from prod_A1 to market_A is set to 0 # - The exchange from prod_A1 to treat_W1 is set to 1 # # The corresponding matrix indices and samples (in this case, one single observation per parameter) are defined as follows: # Import packages: import presamples as ps indices = [ (('my database', 'treat_1'), ('my database', 'prod_A1'), 'technosphere'), (('my database', 'treat_market'), ('my database', 'prod_A1'), 'technosphere'), ] samples = np.array([-0.2, 0]).reshape(-1, 1) ps_id, ps_path = ps.create_presamples_package(matrix_data=[(samples, indices, 'technosphere')]) lca = bw.LCA({('my database', 'prod_Amarket'): 1}, method=('fake method', 'emission')) lca_with_presamples = bw.LCA({('my database', 'prod_Amarket'):1}, presamples=[ps_path], method=('fake method', 'emission',)) lca.lci() lca_with_presamples.lci() lca.lcia() lca_with_presamples.lcia() lca_with_presamples.technosphere_matrix.todense()==lca.technosphere_matrix.todense() A, B, invA, G = format_matrices(lca) format_A_matrix(lca).loc[:, str(bw.get_activity(('my database', 'prod_A1')))] format_A_matrix(lca_with_presamples).loc[:, str(bw.get_activity(('my database', 'prod_A1')))] lca_with_presamples.score/lca.score # ## Passing matrix data to presample creation # SEE NOTEBOOK FOR FULL DESCIPTIONS # To write presamples, one must provide two things: # - Values, with each row representing a cell in a given matrix and each column one set of values for these cells; # - Indices, which inform the matrix and the matrix coordinates that the values correspond to. # # Values are stored in # Suppose we want to (attributionally) analyse the scenario where production activity prod_A1 begins to send all its waste to treatment activity treat_W1. This can be done by replacing two numbers in the A matrix: # - The exchange from prod_A1 to market_A is set to 0 # - The exchange from prod_A1 to treat_W1 is set to 1 # # This scenario can be expressed as a combination of values (0, 1) and matrix indices for a given matrix (A). # ## Using presamples in LCA # ## Using presamples in MonteCarloLCA # ## Fixed sum helper model # ## Kronecker delta helper model # ## Parameterized brightway models # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tutorial for AQCEL # ・Quantum Gate Pattern recognition (level = 1, 2, 3) #
    # ・Circuit Optimization (level = 1, 2) #
    # + import sys sys.path.append('..') import warnings warnings.simplefilter('ignore') from qiskit import(QuantumCircuit, QuantumRegister, ClassicalRegister) # - # ## Example Circuit # + q = QuantumRegister(6, 'q') c = ClassicalRegister( 6, 'c') circ = QuantumCircuit(q,c) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.x(2) circ.x(3) circ.x(2) circ.x(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.x(4) circ.x(1) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.h(2) circ.h(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.x(5) circ.cry(1,2,0) circ.toffoli(1,5,2) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.h(2) circ.h(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.toffoli(3,4,1) circ.toffoli(1,5,2) circ.cry(1,2,0) circ.toffoli(1,5,2) circ.toffoli(3,4,1) circ.x(2) circ.ry(1,0) circ.x(2) circ.toffoli(3,4,1) circ.toffoli(1,5,2) circ.cry(2,2,0) circ.toffoli(1,5,2) circ.toffoli(3,4,1) circ.h(1) circ.cry(2,2,0) circ.toffoli(1,5,2) circ.cx(3,4) circ.cx(1,5) circ.cry(1,2,0) circ.cx(1,5) circ.cx(3,4) circ.x(2) circ.cx(1,0) circ.toffoli(2,3,0) circ.x(2) circ.cx(3,4) circ.cx(1,5) circ.cry(2,2,0) circ.cx(1,5) circ.cx(3,4) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.x(2) circ.x(3) circ.x(2) circ.x(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) for i in range(6): circ.measure(i,i) circ.draw(output='mpl',fold=100) # - # ## Pattern recognition from aqcel.optimization import pattern example2 = pattern.recognition( circuit=circ, level=2, n_patterns=4, min_num_nodes=4, max_num_nodes=8, min_n_repetition=2) circ_max, designated_gates2 = example2.quantum_pattern() circ_max.draw(output='mpl') # ## Circuit Optimization level1 from aqcel.optimization import slim print(circ.depth(), ',', circ.__len__()) print('Gate counts:', circ.count_ops()) example1 = slim.circuit_optimization( circuit=circ, threshold=None) circ_op = example1.slim() print(circ_op.depth(), ',', circ_op.__len__()) print('Gate counts:', circ_op.count_ops()) circ_op.draw(output='mpl',fold=100) # ## Circuit Optimization level 2 from aqcel.optimization import optimization example3 = optimization.optimizer( circuit=circ, slim_level=2, pattern_level =2, n_patterns=4, min_num_nodes=4, max_num_nodes=8, min_n_repetition=2) circ_op2 = example3.slimer() circ_op2.draw(output='mpl',fold=30) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Kaggle # + [markdown] papermill={"duration": 0.041289, "end_time": "2022-02-16T09:14:00.970097", "exception": false, "start_time": "2022-02-16T09:14:00.928808", "status": "completed"} tags=[] # The data: # 112 stocks book samples over 10 min time buckets # 112 stocks trade data over 10 min time buckets # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.047169, "end_time": "2022-02-16T09:14:02.058695", "exception": false, "start_time": "2022-02-16T09:14:01.011526", "status": "completed"} tags=[] import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) import os files=[] for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: files.append(os.path.join(dirname, filename)) # + papermill={"duration": 0.05043, "end_time": "2022-02-16T09:14:02.150954", "exception": false, "start_time": "2022-02-16T09:14:02.100524", "status": "completed"} tags=[] s='86' sf=[] sf=[file for file in files if str('=' + s) in file] # + papermill={"duration": 0.90678, "end_time": "2022-02-16T09:14:03.097593", "exception": false, "start_time": "2022-02-16T09:14:02.190813", "status": "completed"} tags=[] a=pd.read_parquet(sf[0]) b=pd.read_parquet(sf[1]) a.info(),b.info() # + papermill={"duration": 0.062861, "end_time": "2022-02-16T09:14:03.202516", "exception": false, "start_time": "2022-02-16T09:14:03.139655", "status": "completed"} tags=[] b.head() # + papermill={"duration": 0.660997, "end_time": "2022-02-16T09:14:03.904799", "exception": false, "start_time": "2022-02-16T09:14:03.243802", "status": "completed"} tags=[] b[['seconds_in_bucket','time_id']].hist() # + papermill={"duration": 0.061046, "end_time": "2022-02-16T09:14:04.010086", "exception": false, "start_time": "2022-02-16T09:14:03.94904", "status": "completed"} tags=[] a.info() # + papermill={"duration": 1.108175, "end_time": "2022-02-16T09:14:05.162794", "exception": false, "start_time": "2022-02-16T09:14:04.054619", "status": "completed"} tags=[] b.ask_price1.hist(bins=200) b.bid_price1.hist(bins=200) # + papermill={"duration": 1.734906, "end_time": "2022-02-16T09:14:06.94158", "exception": false, "start_time": "2022-02-16T09:14:05.206674", "status": "completed"} tags=[] import os import pandas as pd import numpy as np from sklearn.metrics import r2_score import glob import seaborn as sns from sklearn.model_selection import train_test_split as split from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.neighbors import KNeighborsRegressor from sklearn.metrics import mean_squared_log_error as msle from sklearn.metrics import mean_squared_error as mse from sklearn.metrics import r2_score list_order_book_file_train = glob.glob('/kaggle/input/optiver-realized-volatility-prediction/book_train.parquet/*') list_trade_file_train = glob.glob('/kaggle/input/optiver-realized-volatility-prediction/trade_train.parquet/*') train = pd.read_csv('../input/optiver-realized-volatility-prediction/train.csv') def log_return(list_stock_prices): return np.log(list_stock_prices).diff() def realized_volatility(series_log_return): return np.sqrt(np.sum(series_log_return**2)) def rmspe(y, y_pred): return (np.sqrt(np.mean(np.square((y - y_pred) / y)))) # + [markdown] papermill={"duration": 0.046082, "end_time": "2022-02-16T09:14:07.031758", "exception": false, "start_time": "2022-02-16T09:14:06.985676", "status": "completed"} tags=[] # # Feature creation # This code creates raw features based on the raw data: # from the book data, per stock and time bucket: # - realized volatility # - sum of log returns # - sum of bid and ask sizes for each level # - mean spread # # from the trade data, per stock and time bucket: # - trade size # - order count # # *trade price is left out since it is accaunted for by 'wap' # + papermill={"duration": 357.332947, "end_time": "2022-02-16T09:20:04.409698", "exception": false, "start_time": "2022-02-16T09:14:07.076751", "status": "completed"} tags=[] def book_features(file_path, prediction_column_name): df_book_data = pd.read_parquet(file_path) df_book_data['wap'] =(df_book_data['bid_price1'] * 
df_book_data['ask_size1']+df_book_data['ask_price1'] * df_book_data['bid_size1']) / ( df_book_data['bid_size1']+ df_book_data[ 'ask_size1']) df_book_data['log_return'] = df_book_data.groupby(['time_id'])['wap'].apply(log_return) df_book_data = df_book_data[~df_book_data['log_return'].isnull()] df_book_data['spread'] = df_book_data['ask_price1'] - df_book_data['bid_price1'] df_book_features = pd.DataFrame(df_book_data.groupby(['time_id'])['log_return'].agg(realized_volatility)).reset_index() df_book_features = df_book_features.rename(columns = {'log_return':prediction_column_name}) df_agg_data = pd.DataFrame(df_book_data.groupby(['time_id'])[['log_return','bid_size1','ask_size1','bid_size2','ask_size2']].sum()).reset_index() stock_id = file_path.split('=')[1] df_book_features['stock_id'] = stock_id df_agg_data = df_agg_data.rename(columns = {'log_return':'sum_log_return'}) df_agg_data['row_id'] = df_agg_data['time_id'].apply(lambda x:f'{stock_id}-{x}') df_spread = pd.DataFrame(df_book_data.groupby(['time_id'])['spread'].mean()).reset_index() df_spread['row_id'] = df_spread['time_id'].apply(lambda x:f'{stock_id}-{x}') df_book_features = df_book_features.merge(df_agg_data) df_book_features = df_book_features.merge(df_spread) return df_book_features df_book_features = pd.DataFrame() for file in list_order_book_file_train: a=book_features(file,prediction_column_name='realized_vol') df_book_features=pd.concat([df_book_features,a]) # + [markdown] papermill={"duration": 0.044107, "end_time": "2022-02-16T09:20:04.501953", "exception": false, "start_time": "2022-02-16T09:20:04.457846", "status": "completed"} tags=[] # **The resulting book features df:** # + papermill={"duration": 0.174667, "end_time": "2022-02-16T09:20:04.721762", "exception": false, "start_time": "2022-02-16T09:20:04.547095", "status": "completed"} tags=[] df_book_features.info() # + papermill={"duration": 10.731448, "end_time": "2022-02-16T09:20:15.502568", "exception": false, "start_time": "2022-02-16T09:20:04.77112", "status": "completed"} tags=[] def trade_features(file_path): df_trade_data = pd.read_parquet(file_path) df_trade_features = pd.DataFrame(df_trade_data.groupby(['time_id'])[['size','order_count']].sum()).reset_index() df_trade_features = df_trade_features.rename(columns = {'size':'trade_size'}) stock_id = file_path.split('=')[1] df_trade_features['stock_id'] = stock_id df_trade_features['row_id'] = df_trade_features['time_id'].apply(lambda x:f'{stock_id}-{x}') return df_trade_features df_trade_features = pd.DataFrame() for file in list_trade_file_train: a=trade_features(file) df_trade_features=pd.concat([df_trade_features,a]) # + [markdown] papermill={"duration": 0.04443, "end_time": "2022-02-16T09:20:15.592082", "exception": false, "start_time": "2022-02-16T09:20:15.547652", "status": "completed"} tags=[] # **the resulting trade data features df:** # + papermill={"duration": 0.154811, "end_time": "2022-02-16T09:20:15.791542", "exception": false, "start_time": "2022-02-16T09:20:15.636731", "status": "completed"} tags=[] df_trade_features.info() # + papermill={"duration": 0.586851, "end_time": "2022-02-16T09:20:16.42372", "exception": false, "start_time": "2022-02-16T09:20:15.836869", "status": "completed"} tags=[] df_raw_features = df_book_features.merge(df_trade_features,how='left') # + papermill={"duration": 0.169499, "end_time": "2022-02-16T09:20:16.638299", "exception": false, "start_time": "2022-02-16T09:20:16.4688", "status": "completed"} tags=[] df_raw_features.info() # + papermill={"duration": 0.083688, 
"end_time": "2022-02-16T09:20:16.767707", "exception": false, "start_time": "2022-02-16T09:20:16.684019", "status": "completed"} tags=[] df_trade_features.stock_id.nunique() # + [markdown] papermill={"duration": 0.046893, "end_time": "2022-02-16T09:20:16.860026", "exception": false, "start_time": "2022-02-16T09:20:16.813133", "status": "completed"} tags=[] # # Feature engeneering # + [markdown] papermill={"duration": 0.045578, "end_time": "2022-02-16T09:20:16.95165", "exception": false, "start_time": "2022-02-16T09:20:16.906072", "status": "completed"} tags=[] # First we will calculate the future realized volatility for each time bucket and stock (this is the target). # # by doing that we have to depart with all the last time buckets - it won't have future data. # + papermill={"duration": 0.160626, "end_time": "2022-02-16T09:20:17.15758", "exception": false, "start_time": "2022-02-16T09:20:16.996954", "status": "completed"} tags=[] df_raw_features['future_vol'] = df_raw_features.groupby(['stock_id'])['realized_vol'].shift(-1) df_raw_features = df_raw_features[~df_raw_features['future_vol'].isnull()] # + [markdown] papermill={"duration": 0.044889, "end_time": "2022-02-16T09:20:17.24768", "exception": false, "start_time": "2022-02-16T09:20:17.202791", "status": "completed"} tags=[] # The bid and ask sizes will be used to claculate the book depth: # - the ratio between first level bid and ask i.e. 'baratio' # - the ratio between the bid size and the average time bucket trade size i.e. 'bratio' # - the ratio between the ask size and the average time bucket trade size i.e. 'aratio' # - the average order # + papermill={"duration": 0.268327, "end_time": "2022-02-16T09:20:17.56126", "exception": false, "start_time": "2022-02-16T09:20:17.292933", "status": "completed"} tags=[] df_raw_features.groupby(['stock_id','time_id'])['trade_size'].sum().unstack().mean(axis=1) # + papermill={"duration": 0.417669, "end_time": "2022-02-16T09:20:18.024397", "exception": false, "start_time": "2022-02-16T09:20:17.606728", "status": "completed"} tags=[] df_raw_features['baratio'] = df_raw_features['bid_size1'] / df_raw_features['ask_size1'] sizes = df_raw_features.groupby(['stock_id','time_id'])['trade_size'].sum().unstack().mean(axis=1) df_raw_features = df_raw_features.merge(sizes.rename('mean_trade_size'), left_on='stock_id', right_on='stock_id') df_raw_features['bratio']=(df_raw_features['bid_size1'] + df_raw_features['bid_size2'])/df_raw_features['mean_trade_size'] df_raw_features['aratio']=(df_raw_features['ask_size1'] + df_raw_features['ask_size2'])/df_raw_features['mean_trade_size'] df_raw_features['mean_order']=df_raw_features['trade_size']/df_raw_features['order_count'] df_raw_features.head() # + papermill={"duration": 0.070614, "end_time": "2022-02-16T09:20:18.141316", "exception": false, "start_time": "2022-02-16T09:20:18.070702", "status": "completed"} tags=[] # df_raw_features['bratio']=(df_raw_features['bid_size1'] + df_raw_features['bid_size2'])/df_raw_features['mean_trade_size'] # df_raw_features['aratio']=(df_raw_features['ask_size1'] + df_raw_features['ask_size2'])/df_raw_features['mean_trade_size'] df_raw_features.head() # + [markdown] papermill={"duration": 0.046053, "end_time": "2022-02-16T09:20:18.233867", "exception": false, "start_time": "2022-02-16T09:20:18.187814", "status": "completed"} tags=[] # - most of the features seem to distribute log normaly. 
so in order for them to fit the scale of our target - I will log them # + [markdown] papermill={"duration": 0.046022, "end_time": "2022-02-16T09:20:18.326442", "exception": false, "start_time": "2022-02-16T09:20:18.28042", "status": "completed"} tags=[] # Transformations # + papermill={"duration": 0.146329, "end_time": "2022-02-16T09:20:18.51936", "exception": false, "start_time": "2022-02-16T09:20:18.373031", "status": "completed"} tags=[] logged = df_raw_features[['spread','baratio','bratio','aratio','mean_order']] logged = np.log1p(logged) #hundreds = df_raw_features[['realized_vol','future_vol']]*100 # + [markdown] papermill={"duration": 0.046138, "end_time": "2022-02-16T09:20:18.612973", "exception": false, "start_time": "2022-02-16T09:20:18.566835", "status": "completed"} tags=[] # # EDA # now we will see some analysis of the calculated features # + papermill={"duration": 0.159977, "end_time": "2022-02-16T09:20:18.820384", "exception": false, "start_time": "2022-02-16T09:20:18.660407", "status": "completed"} tags=[] df_final_features = df_raw_features.copy()[['time_id','stock_id','row_id','realized_vol','future_vol', 'sum_log_return']] df_final_features = df_final_features.merge(logged,left_index=True, right_index=True) #df_final_features = df_final_features.merge(hundreds,left_index=True, right_index=True) df_final_features.head() # + [markdown] papermill={"duration": 0.047292, "end_time": "2022-02-16T09:20:18.914769", "exception": false, "start_time": "2022-02-16T09:20:18.867477", "status": "completed"} tags=[] # # Removing outliars # + papermill={"duration": 0.165518, "end_time": "2022-02-16T09:20:19.12765", "exception": false, "start_time": "2022-02-16T09:20:18.962132", "status": "completed"} tags=[] df_final_features = df_final_features[df_final_features['future_vol']<0.06] df_final_features = df_final_features[df_final_features['realized_vol']<0.06] df_final_features = df_final_features[df_final_features['spread']<0.012] # + papermill={"duration": 0.056634, "end_time": "2022-02-16T09:20:19.231624", "exception": false, "start_time": "2022-02-16T09:20:19.17499", "status": "completed"} tags=[] len(df_final_features) # + papermill={"duration": 2.522131, "end_time": "2022-02-16T09:20:21.801393", "exception": false, "start_time": "2022-02-16T09:20:19.279262", "status": "completed"} tags=[] df_final_features.hist(bins=80,figsize=(9,6)) # + [markdown] papermill={"duration": 0.048914, "end_time": "2022-02-16T09:20:21.899524", "exception": false, "start_time": "2022-02-16T09:20:21.85061", "status": "completed"} tags=[] # - on most cases the features seem to distribute log normaly (except for the returns which are allready logged) # + papermill={"duration": 0.634648, "end_time": "2022-02-16T09:20:22.583491", "exception": false, "start_time": "2022-02-16T09:20:21.948843", "status": "completed"} tags=[] sns.heatmap(df_final_features.corr().abs()) df_final_features.corr() # + papermill={"duration": 4.845853, "end_time": "2022-02-16T09:20:27.479823", "exception": false, "start_time": "2022-02-16T09:20:22.63397", "status": "completed"} tags=[] #sns.pairplot(df_final_features.iloc[:,2:], height=1.5) df_final_features.plot.scatter('realized_vol','spread') df_final_features.plot.scatter('realized_vol','future_vol') df_final_features.plot.scatter('aratio','bratio') df_final_features.columns # + [markdown] papermill={"duration": 0.055201, "end_time": "2022-02-16T09:20:27.59079", "exception": false, "start_time": "2022-02-16T09:20:27.535589", "status": "completed"} tags=[] # - the spread is heavily 
correlated with the realized volatility # - the ask ratio and the bid ratio are correlated, but the scatter isn't so clear. # remove one? log them? # - the other features are not very correlated to each other and to the target. # - question - the outliers indicate special trade points. do we leave them? seperate? # + papermill={"duration": 0.074791, "end_time": "2022-02-16T09:20:27.720065", "exception": false, "start_time": "2022-02-16T09:20:27.645274", "status": "completed"} tags=[] df_final_features.head() # + [markdown] papermill={"duration": 0.054979, "end_time": "2022-02-16T09:20:27.829841", "exception": false, "start_time": "2022-02-16T09:20:27.774862", "status": "completed"} tags=[] # # Linear Regression # - aratio and bratio are correleted. so for the linear regression I'll use only one # + papermill={"duration": 0.14292, "end_time": "2022-02-16T09:20:28.028029", "exception": false, "start_time": "2022-02-16T09:20:27.885109", "status": "completed"} tags=[] #df_final_features.columns X = df_final_features[['realized_vol','sum_log_return','spread','baratio','bratio']] y = df_final_features['future_vol'] lin_model_1 = LinearRegression(fit_intercept=False).fit(X, y) # + papermill={"duration": 0.065281, "end_time": "2022-02-16T09:20:28.149154", "exception": false, "start_time": "2022-02-16T09:20:28.083873", "status": "completed"} tags=[] list(zip(X.columns, lin_model_1.coef_)) # + papermill={"duration": 1.854211, "end_time": "2022-02-16T09:20:30.058625", "exception": false, "start_time": "2022-02-16T09:20:28.204414", "status": "completed"} tags=[] y_pred = lin_model_1.predict(X) ax = sns.scatterplot(x=y, y=y_pred) ax.plot(y, y, 'r') # + papermill={"duration": 0.068534, "end_time": "2022-02-16T09:20:30.184424", "exception": false, "start_time": "2022-02-16T09:20:30.11589", "status": "completed"} tags=[] mse(y, y_pred, squared=False) # + papermill={"duration": 0.074203, "end_time": "2022-02-16T09:20:30.31579", "exception": false, "start_time": "2022-02-16T09:20:30.241587", "status": "completed"} tags=[] R2 = round(r2_score(y_true = y, y_pred = y_pred),3) RMSPE = round(rmspe(y = y, y_pred = y_pred),3) print(f'Performance of the linear prediction: R2 score: {R2}, RMSPE: {RMSPE}') # + [markdown] papermill={"duration": 0.057885, "end_time": "2022-02-16T09:20:30.432852", "exception": false, "start_time": "2022-02-16T09:20:30.374967", "status": "completed"} tags=[] # # Decision Tree # + papermill={"duration": 5.62758, "end_time": "2022-02-16T09:20:36.118153", "exception": false, "start_time": "2022-02-16T09:20:30.490573", "status": "completed"} tags=[] X = df_final_features[['realized_vol','sum_log_return','spread','baratio','bratio','aratio']] y = df_final_features['future_vol'] dt_model_1 = DecisionTreeRegressor(max_leaf_nodes=1000).fit(X, y) # + papermill={"duration": 1.858068, "end_time": "2022-02-16T09:20:38.033765", "exception": false, "start_time": "2022-02-16T09:20:36.175697", "status": "completed"} tags=[] y_pred = dt_model_1.predict(X) ax = sns.scatterplot(x=y, y=y_pred) ax.plot(y, y, 'r') # + papermill={"duration": 0.278488, "end_time": "2022-02-16T09:20:38.372017", "exception": false, "start_time": "2022-02-16T09:20:38.093529", "status": "completed"} tags=[] pd.Series(dt_model_1.feature_importances_, index=X.columns).sort_values(ascending=False).plot.barh() # + papermill={"duration": 0.082238, "end_time": "2022-02-16T09:20:38.517178", "exception": false, "start_time": "2022-02-16T09:20:38.43494", "status": "completed"} tags=[] msle(y, y_pred)**0.5 # + papermill={"duration": 
0.088929, "end_time": "2022-02-16T09:20:38.677499", "exception": false, "start_time": "2022-02-16T09:20:38.58857", "status": "completed"} tags=[] R2 = round(r2_score(y_true = y, y_pred = y_pred),3) RMSPE = round(rmspe(y = y, y_pred = y_pred),3) print(f'Performance of DT model: R2 score: {R2}, RMSPE: {RMSPE}') # + [markdown] papermill={"duration": 0.066159, "end_time": "2022-02-16T09:20:38.823623", "exception": false, "start_time": "2022-02-16T09:20:38.757464", "status": "completed"} tags=[] # # KNN # + papermill={"duration": 6.763423, "end_time": "2022-02-16T09:20:45.658694", "exception": false, "start_time": "2022-02-16T09:20:38.895271", "status": "completed"} tags=[] knn_model_1 = KNeighborsRegressor(n_neighbors=4).fit(X, y) y_pred = knn_model_1.predict(X) # + papermill={"duration": 1.812258, "end_time": "2022-02-16T09:20:47.532323", "exception": false, "start_time": "2022-02-16T09:20:45.720065", "status": "completed"} tags=[] ax = sns.scatterplot(x=y, y=y_pred) ax.plot(y, y, 'r') # + papermill={"duration": 0.081922, "end_time": "2022-02-16T09:20:47.677349", "exception": false, "start_time": "2022-02-16T09:20:47.595427", "status": "completed"} tags=[] R2 = round(r2_score(y_true = y, y_pred = y_pred),3) RMSPE = round(rmspe(y = y, y_pred = y_pred),3) print(f'Performance of KNN: R2 score: {R2}, RMSPE: {RMSPE}') # + [markdown] papermill={"duration": 0.068148, "end_time": "2022-02-16T09:20:47.827698", "exception": false, "start_time": "2022-02-16T09:20:47.75955", "status": "completed"} tags=[] # # Validating the Decision tree # + [markdown] papermill={"duration": 0.063548, "end_time": "2022-02-16T09:20:47.955409", "exception": false, "start_time": "2022-02-16T09:20:47.891861", "status": "completed"} tags=[] # > To be continued with better models # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.12 64-bit # name: python3 # --- # ## Introdução e prelinares # Até a pg 61 import mxnet as mx from mxnet import np, npx npx.set_np() x=np.arange(12) x,type(x) x.shape x.size x=x.reshape(3,4) x X=np.arange(12).reshape(3,4) Y=np.array([[2,1,4,3], [1,2,3,4], [4,3,2,1]]) X Y np.concatenate([X, Y], axis=0), np.concatenate([X, Y], axis=1) X==Y x=np.array([1,2,3]).reshape(3,1) x y=np.array([1,2]).reshape(1,2) y x+y X,Y X[-1] X[1:3] X[0:3, 1:3]=13 X X>Y,X,Y T1 = np.arange(12).reshape(1, 6, 2) T1,np.arange(12) T2 = np.ones(shape=(6,6,1)) T2 # ## Data Preprocessing # # pg 51 import os os.makedirs(os.path.join('../../data','tmp'), exist_ok=True) data_file=os.path.join('../../data','tmp','house_tiny.csv') with open(data_file,'w') as f: f.write('NumRooms,Alley,Price\n') # Column names f.write('NA,Pave,127500\n') # Each row represents a data example f.write('2,NA,106000\n') f.write('4,NA,178100\n') f.write('NA,NA,140000\n') import pandas as pd data=pd.read_csv(data_file) print(data) inputs, outputs=data.iloc[:,0:2], data.iloc[:,2] print(inputs) inputs=inputs.fillna(inputs.mean()) print(inputs) inputs=pd.get_dummies(inputs, dummy_na=True) print(inputs) X, y=np.array(inputs.values), np.array(outputs.values) X, y # ### fazer exercicios da pg 53 t3=np.random.choice(np.array([1,2,np.nan]), (20, 5),p=np.array([0.4,0.4, 0.3])) t3 df=pd.DataFrame(t3.asnumpy(),columns=['a','b','c','e','f']) print(df) df.isnull().mean() df.columns[df.isnull().mean() < 0.25] res1=df[df.columns[df.isnull().mean() < 0.25]] print(res1) resInput=np.array(res1.values) resOutput=np.random.rand(20) resInput,resOutput # 
## Linear Algebra # pg 53 A=np.arange(20).reshape(5,4) A,A.mean(axis=0),A.mean(axis=1) A.sum(),A.sum(axis=1),A.sum(axis=1,keepdims=True) A/A.sum(axis=1,keepdims=True),A.cumsum(axis=1) # ### Produto Hadamard (Element-wise) e Produto de Matriz # * dot # A1=np.array([1,2,3,4]).reshape(2,2) B1=np.array([-1,3,4,2]).reshape(2,2) A1,B1,A1*B1,np.dot(A1,B1),np.dot(B1,A1) A1=np.array([1,2,3,4]) B1=np.array([-1,3,4,2]) A1,B1,A1*B1,np.dot(A1,B1) y=np.ones(4) x.reshape(1,3), y # ### Norm # pg 63 # # - norma vetorial # # # ### Norm L2 # \[3,-4] # # sqrt{3² + -4²}=5 # u=np.array([3,-4]) u,np.linalg.norm(u) # ###Norm L1 # # \[3,-4] # # |3|+|-4|=7 # np.abs(u).sum() # ### Frobenius norm # Aplica a L2 sobre Matrizes # # ### Exercícios pg 65 # # #### Confrontar com https://hy38.github.io/D2L-2-linear-algebra # # #### 1. Prove that the transpose of a matrix A’s transpose is A : (𝐀T)⊤=A(𝐀^T)^⊤ = A(AT)⊤=A A=np.arange(25).reshape(5,5) AT=A.T TA=AT.T res=(A==TA).reshape(A.size) #type(res),type(res.asnumpy()), #np.diff(res,prepend=False, append=False) np.count_nonzero(res%2==1)==A.size # # #### 2. Given two matrices A and B, show that the sum of transposes is equal to the transpose of a sum: A⊤+B⊤= (A+B)⊤ # A1=np.array([1,2,3,4,5,6,7,8,9]).reshape(3,3) A2=A1.copy() B1=np.array([2,4,6,8,10,12,14,16,18]).reshape(3,3) B2=B1.copy() A1,A2,B1,B2 R1=A1.T + B1.T R2=(A1+B1).T res=(R1==R2).reshape(R1.size) np.count_nonzero(res%2==1)==R1.size # #### 3. Given any square matrixA, is A+A⊤ always symmetric? Why? # t1=A1+A1.T t2=B1+B1.T res1=(t1==t1.T).reshape(t1.size) res2=(t2==t2.T).reshape(t2.size) res1,res2 np.count_nonzero(res1%2==1)==res1.size , np.count_nonzero(res2%2==1)==res2.size # #### 4. We defined the tensorX # of shape (2, 3, 4) in this section. What is the output oflen(X)? T=np.ones((2,3,4)) T,T.size,T.shape # #### 5. For a tensorXof arbitrary shape, does len(X) always correspond to the length of a certain axis ofX? What is that axis? # first # #### 6.Run A / A.sum(axis=1) # and see what happens. Can you analyze the reason? t=A.sum(axis=1) A,t,A/t # #### 8. Consider a tensor with shape (2, 3, 4). What are the shapes of the summation outputs alongaxis 0, 1, and 2? A=mx.nd.arange(1,25).reshape(2,3,4) A,A.sum(0),A.sum(1),A.sum(2) # #### 9. Feed a tensor with 3 or more axes to the linalg.norm function and observe its output. Whatdoes this function compute for tensors of arbitrary shape? 
# https://www.educative.io/edpresso/what-is-the-nplinalgnorm-method-in-numpy # A=(np.arange(24)-5).reshape(2,3,4) A , np.linalg.norm(A),np.linalg.norm(A,axis=0),np.linalg.norm(A,axis=1),np.linalg.norm(A,axis=2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="4fc265b0" # # NLP_TP Transformers # # + [markdown] id="145be2de" # ## 01- Sentiment analysis # + colab={"base_uri": "https://localhost:8080/"} id="583ef981" outputId="9d9f6111-4c40-43c4-faa3-fae61c43bb0d" # !pip install transformers # + colab={"base_uri": "https://localhost:8080/"} id="10e24977" outputId="24112ed9-83d9-41cd-a742-e1a0b7332d43" from transformers import pipeline nlp = pipeline("sentiment-analysis", model="nlptown/bert-base-multilingual-uncased-sentiment") result = nlp("bien dit")[0] print(result) print(f"label: {result['label']}, with score: {round(result['score']*100, 2)}%") result = nlp("mauvais travail")[0] print(f"label: {result['label']}, with score: {round(result['score']*100, 2)}%") # + [markdown] id="80f031f4" # ## 02- Text generation # + id="4668bfb2" from transformers import pipeline # + colab={"base_uri": "https://localhost:8080/"} id="63bf964e" outputId="7d975737-cdc1-4560-a6ad-55aa7224ba4a" # Frensh text_generator_fr = pipeline('text-generation', model='dbddv01/gpt2-french-small') print(text_generator_fr("je lis un", max_length=50, do_sample=False)) # + colab={"base_uri": "https://localhost:8080/"} id="bc8556ac" outputId="67a9e4d0-e6d2-441b-9b41-bd0a0e530d82" # Arabic text_generator_Ar = pipeline('text-generation', model='akhooli/gpt2-small-arabic') print(text_generator_Ar("في المغرب السياحة الجبلية ", max_length=50, do_sample=False)) # + [markdown] id="1af36c17" # ## 03- Name entity recognition (NER) # + colab={"base_uri": "https://localhost:8080/"} id="3a2364c9" outputId="4fef81d1-b475-412d-c977-8eb4595ce87d" from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline ner_english_recognition = pipeline("ner", model="dslim/bert-base-NER", tokenizer="dslim/bert-base-NER") ner_arabic_recognition = pipeline("ner", model="hatmimoha/arabic-ner", tokenizer="hatmimoha/arabic-ner") ner_french_recognition = pipeline("ner", model="gilf/french-postag-model", tokenizer="gilf/french-postag-model") print(ner_arabic_recognition("في المغرب السياحة الجبلية")) print(ner_english_recognition("good time")) print(ner_french_recognition("je lis un")) # + [markdown] id="dff416e2" # ## 04- Question answering # + colab={"base_uri": "https://localhost:8080/"} id="511e712a" outputId="7c74ef42-77d4-493a-d166-98b633d117d3" from transformers import pipeline question_answering = pipeline("question-answering") context = """ Le ville de Salé se trouve dans le plateau côtier large de 10 à 50 km, formé de plaines douces inclinées vers l’Océan Atlantique qui s'étend de Rabat-Salé à Skhirate-Témara, et du littoral atlantique au barrage Sidi Mohammed ben Abdellah9. L'altitude de la ville de Salé et du plateau côtier tout entier ne dépasse pas les 100 m10. Le fleuve Bouregreg qui sépare Rabat et Salé, donne une vallée plus ou moins large selon les endroits, pénétrant d’une quinzaine de kilomètres en amont de l’embouchure, surplombée par les plateaux de Bettana, Sala Al Jadida et de la commune rurale de Shoul du côté de Salé, et par ceux des quartiers de Hassan, El Youssoufia, Nahda et Akkrach du côté de Rabat. 
L'« arrière-pays » de Rabat-Salé est plutôt vert loin de l'urbanisation de masse, notamment grâce à la présence des forêts de la Mamora et de Témara, à proximité. """ question = "Quelle est Sala Al Jadida?" result = question_answering(question=question, context=context) print("Reponse:", result['answer']) # + [markdown] id="c9a9c291" # ## 05- Filling masked text # + colab={"base_uri": "https://localhost:8080/"} id="3c913305" outputId="0b4dafe1-c4ab-4fee-fc0a-6bef5384d0b8" from transformers import pipeline nlp = pipeline("fill-mask") from pprint import pprint pprint(nlp(f"Les coronavirus sont des {nlp.tokenizer.mask_token} de la famille des Coronaviridae.")) # + colab={"base_uri": "https://localhost:8080/"} id="cc8ba494" outputId="77ed09b5-e13d-4096-80ac-99140e943bc8" #Arabic arabic_fill_mask = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-ca') pprint(arabic_fill_mask("جمعيات تدق ناقوس الخطر بشأن استنزاف الموارد[MASK]‬ بالجنوب الشرقي .")) # + [markdown] id="6f56d9cc" # ## 06- Summarization # + colab={"base_uri": "https://localhost:8080/"} id="6186b099" outputId="4ad638b3-9a4c-429e-de28-565f368c9993" from transformers import pipeline summarizer = pipeline("summarization") ARTICLE = """ Le Maroc était connu sous le nom de royaume de Marrakech, sous les trois dynasties qui avaient cette ville comme capitale. Puis, sous le nom de royaume de Fès, sous les dynasties qui résidaient à Fès. Au xixe siècle, les cartographes européens mentionnaient toujours un « royaume de Maroc », en indiquant l'ancienne capitale « Maroc » (pour Marrakech). Sous la dynastie des Alaouites, toujours au pouvoir, le pays est passé de l'appellation d'« Empire chérifien » à celle de « royaume du Maroc » en 195725, le sultan ben Youssef en devenant le roi, en tant que . Il peut être aussi surnommé « Royaume chérifien », en référence au souverain alaouite, descendant du prophète de l'islam Mahomet, qualifié de « chérif ». 
""" print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False)) # + [markdown] id="d3e5558b" # ## 07- Translation # + colab={"base_uri": "https://localhost:8080/"} id="5269606b" outputId="8cea9d0f-b82d-49c1-a27f-371f6af5d79c" from transformers import pipeline # English to french translator = pipeline("translation_en_to_fr") from transformers import AutoTokenizer, AutoModelForSeq2SeqLM pprint(translator("This allows people to understand complex terms or phrases.", max_length=40)) # + colab={"base_uri": "https://localhost:8080/"} id="a7451bc5" outputId="c4901140-8429-444c-873f-7fd97b12285d" # english to Arabic from transformers import MarianTokenizer, MarianMTModel tokenizer = MarianTokenizer.from_pretrained("marefa-nlp/marefa-mt-en-ar") model = MarianMTModel.from_pretrained("marefa-nlp/marefa-mt-en-ar") text = "Mountain tourism in Morocco" translated_tokens = model.generate(**tokenizer.prepare_seq2seq_batch(text, return_tensors="pt")) Output_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens] print(Output_text) # + colab={"base_uri": "https://localhost:8080/", "height": 181, "referenced_widgets": ["", "", "8ffc79499f63482b82122e66a9811d7a", "14fff933a0b24375804d082133a4dd59", "0beead98f5b1460d976ed506a1123129", "", "98267d9b90f441508572dac91e2755e8", "", "", "c561e407f6754b68b093e96e0a93486c", "", "", "", "", "", "24338d0496b645a1a025f19037f18aa5", "09fabcb54c0e42568124f91f56a22288", "", "2e36e37457d4439a91bf9ac538356ee5", "23ec9403ed1e4ca1848ffa302bffc0d9", "", "", "3fae572982f14e7094d947507f8de719", ""]} id="e032ce8d" outputId="5076fd6e-f965-436f-c65d-58ace5647822" # Arabic to English from transformers import MBartForConditionalGeneration, MBart50TokenizerFast text_ar = "السياحة الجبلية في المغرب" model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer.src_lang = "ar_AR" encoded_ar = tokenizer(text_ar, return_tensors="pt") generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) pprint(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) # + [markdown] id="7531300a" # ## 08- Feature extraction # + colab={"base_uri": "https://localhost:8080/"} id="f397f772" outputId="65ce663d-7f02-4a76-cbd3-2deac6026905" from sklearn.feature_extraction.text import CountVectorizer # sentences. 
sentences = [ "This is a sample sentence", "I am interested in politics", "You are a very good software engineer, engineer.",] vectorizer = CountVectorizer(stop_words='english') vectorizer.fit(sentences) vectorizer.get_feature_names() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt from jupyterthemes import jtplot import copy as cp # # Notebook para comparar f1 entre kriging y GP # + # se cargan los datos de entrenamiento train_data = pd.read_csv('../../GP_Data/cy17_spc_assays_rl6_entry.csv') train_cols = ['midx','midy','midz','cut'] test_data = pd.read_csv('../../GP_Data/12_BLASTHOLE_ASS_ENTRY.csv') test_cols = ['POINTEAST','POINTNORTH','POINTRL'] # se definen los estilos de los graficos jtplot.style(theme='onedork',figsize = (20,10)) # + # funciones utiles para ahorrar lineas y hacer codigo # mas legible def get_years(path_estimacion): estimacion_sorted = add_year_month_sorted(path_estimacion) df_years = estimacion_sorted['year'] seen = set() YEARS = [] for item in df_years: if item not in seen: seen.add(item) YEARS.append(item) return YEARS def add_year_month_sorted(path_estimacion): # se agregan las columans year y month a # al archivo dque contiene las estimacines estimacion_original = pd.read_csv(path_estimacion) # solamente se consideran las estimaciones # que tienen f1 dado (i.e. no -99) y cut # mayor que cero estimacion_filtrada = estimacion_original.loc[(estimacion_original['f1'] > 0) & (estimacion_original['cut'] > 0)] f1 = estimacion_filtrada['f1'].as_matrix() estimacion = cp.copy(estimacion_filtrada) estimacion = estimacion.assign(year=((f1-1)/12).astype(int) + 2014) estimacion = estimacion.assign(mes=((f1-1)%12 + 1).astype(int)) estimacion_sorted = estimacion.sort_values('f1',ascending=True) return estimacion_sorted def dicc_error_bloque(path_estimacion): estimacion_sorted = add_year_month_sorted(path_estimacion) YEARS = get_years(path_estimacion) dicc_anual = dict() for year in YEARS: plt.figure() df_by_year = estimacion_sorted.loc[estimacion_sorted['year']==year] meses = df_by_year['mes'] dicc_promedios_mensual = dict() for mes in meses: cut_mes = df_by_year.loc[df_by_year['mes']==mes]['cut'] cut_poz_mes = df_by_year.loc[df_by_year['mes']==mes]['cut_poz'] cuociente = np.divide(cut_poz_mes, cut_mes) promedio_mensual = cuociente.mean() dicc_promedios_mensual[mes] = promedio_mensual dicc_anual[year] = dicc_promedios_mensual return dicc_anual #for year,dicc in dicc_anual: # plt.figure() # for mes, promedio in dicc: # plt.plot(mes,promedio) def dicc_error_volumen(path_estimacion): estimacion_sorted = add_year_month_sorted(path_estimacion) YEARS = get_years(path_estimacion) dicc_anual = dict() for year in YEARS: plt.figure() df_by_year = estimacion_sorted.loc[estimacion_sorted['year']==year] meses = df_by_year['mes'] seen = set() MESES = [] # se eliminan los meses repetidos for mes in meses: if mes not in seen: seen.add(mes) MESES.append(mes) dicc_cuociente_mensual = dict() # mes: cut_poz.mean()/cut.mean() for mes in MESES: cut_mes = df_by_year.loc[df_by_year['mes']==mes]['cut'] cut_poz_mes = df_by_year.loc[df_by_year['mes']==mes]['cut_poz'] cuociente = np.divide(cut_poz_mes.mean(), cut_mes.mean()) dicc_cuociente_mensual[mes] = cuociente dicc_anual[year] = dicc_cuociente_mensual _plot_f1(dicc_anual,YEARS) return dicc_anual 
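# Added illustration (hypothetical toy values, not project data): the two error
# definitions above differ in general. dicc_error_bloque averages the per-block
# ratios cut_poz/cut within each month, while dicc_error_volumen divides the
# monthly mean of cut_poz by the monthly mean of cut. `np` is the numpy alias
# imported at the top of this notebook.
_cut_poz_toy = np.array([1.2, 0.8, 2.0])
_cut_toy = np.array([1.0, 1.0, 4.0])
print(np.mean(_cut_poz_toy / _cut_toy))        # mean of per-block ratios, ~0.83
print(_cut_poz_toy.mean() / _cut_toy.mean())   # ratio of means, ~0.67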
def _plot_f1(dicc,years): df_dicc = pd.DataFrame.from_dict(dicc) f1 = df_dicc[years[0]] for year in years[1:]: f1 = pd.concat([f1,df_dicc[year]],names=['f1']) f1 = f1.dropna() # elimina todas las filas con Nan f1_df = pd.DataFrame(f1.as_matrix(), index=pd.date_range('1/1/'+str(years[0]), periods=f1.shape[0],freq='MS')) f1_df.columns = ['f1'] f1_df.plot(style = 'bo-') plt.axhline(y=1.1, color='g', linestyle='-') plt.axhline(y=0.9, color='g', linestyle='-') return f1_df # - # # Error por bloque $\frac{1}{N}\sum_{i=1}^N \text{f1}_i$ # calcular cuociente de los promedios path_estimacion = 'mp_test_all_2.csv' diccionario = dicc_error_volumen(path_estimacion) plt.show() # calcular cuociente de los promedios path_estimacion = 'mp_test_all_3.csv' diccionario = dicc_error_volumen(path_estimacion) plt.show() estimacion_ordenada = add_year_month_sorted(path_estimacion) YEARS = get_years(path_estimacion) estimacion_ordenada estimacion_ordenada_by_year = estimacion_ordenada.loc[estimacion_ordenada['year']==2017] estimacion_ordenada_by_year # + # calcular cuociente y promediar path_estimacion = 'mp_test_all_2.csv' diccionario = dicc_error_bloque(path_estimacion) diccionario # - df_dicc = pd.DataFrame.from_dict(diccionario) df_dicc annos = get_years(path_estimacion) serie_concat = _plot_f1(diccionario,annos) serie_concat plt.show() plt.figure(); df_dicc.plot(style='k--', label='Series'); plt.show() # + ts = pd.Series(np.random.randn(12), index=pd.date_range('1/1/2000', periods=12,freq='MS')) ts = ts.cumsum() df = pd.DataFrame(np.random.randn(12, 4), index=ts.index, columns=list('ABCD')) df = df.cumsum() plt.figure(); df.plot(); # plt.axhline(y=1.5, color='r', linestyle='-') plt.show() # - df toy_list = [1,2] toy_list[1:] # + path_estimacion = 'mp_test_all_2.csv' diccionario_vol = dicc_error_volumen(path_estimacion) diccionario_vol # - # cuantos puntos test no se analizan por falta de f1 estimacion_original = pd.read_csv(path_estimacion) estimacion_original estimacion_filtrada = estimacion_original.loc[(estimacion_original['f1'] > 0)] estimacion_filtrada # + # esta linea solo es para ver si se actualiza en GitHub # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: scraping # language: python # name: scraping # --- # %load_ext nb_black from bs4 import BeautifulSoup from lxml import etree import requests from lxml.etree import tostring from collections import defaultdict mapping = { "mahmood & blanco": "mahmood e blanco", "rettore & ditonellapiaga": "ditonellapiaga con donatella rettore", "highsnub & hu": "highsnob e hu", "aka7even": "aka 7even", "aka7ven": "aka 7even", "ditonellapiaga con rettore": "ditonellapiaga con donatella rettore", } bets_map = defaultdict(list) # ### Unibet bets_page = open("bets.html", "r").read() bets_soup = BeautifulSoup(bets_page, "html.parser") dom = etree.HTML(str(bets_soup)) rows = dom.xpath("/html/body/table/tbody//tr") for row in rows: singer = row.xpath("td[3]/a/text()")[0].lower() if singer in mapping: singer = mapping[singer] bet = float(row.xpath("td[5]/text()")[0]) bets_map[singer].append(bet) # ### Goldbet bets_page = open("bets_2.html", "r").read() bets_soup = BeautifulSoup(bets_page, "html.parser") dom = etree.HTML(str(bets_soup)) rows = dom.xpath("//td[contains(@class,'match aview')]") for row in rows: singer = row.xpath("span[1]/text()")[0].lower() if singer in mapping: singer = mapping[singer] bet = float(row.xpath("span[2]/text()")[0]) 
bets_map[singer].append(bet) bets_map singers_page = open("singers.html", "r").read() soup_singers = BeautifulSoup(singers_page, "html.parser") singers_map = {} dom_singers = etree.HTML(str(soup_singers)) rows_singers = dom_singers.xpath("/html/body/div//div[@class='song-card']") for row in rows_singers: singer = row.xpath("div/div[2]/div[1]/div/h1/text()")[0].lower().replace('’',"'") soup_temp = BeautifulSoup(tostring(row.xpath("div/div[2]/h1[2]")[0]), "html.parser") quotation = float(soup_temp.getText().replace('Baudi','').strip()) singers_map[singer] = quotation singers_map # + singers = [] costs = [] bets = [] scores = [] for k, v in singers_map.items(): singers.append(k) costs.append(v) current_bet = sum(bets_map[k]) / len(bets_map[k]) bets.append(current_bet) scores.append((1 / current_bet) / v) # - list(zip(singers, costs, bets, scores)) # + from pulp import LpProblem, LpMaximize, LpVariable, LpInteger def maximization_scores(singers, costs, bets, scores): P = range(len(singers)) # Declare problem instance, maximization problem prob = LpProblem("Portfolio", LpMaximize) # Declare decision variable x, which is 1 if a # singer is part of the portfolio and 0 else x = LpVariable.matrix("x", list(P), 0, 1, LpInteger) # Objective function -> Maximize scores prob += sum(scores[p] * x[p] for p in P) ### Constraint definition # constraint for the number of singers in the team prob += sum(x[p] for p in P) == 5 # constraint on the total amount of credits prob += sum(costs[p] * x[p] for p in P) <= 100 # constraint for having every player only once in the final team for elem in singers: prob += sum(x[p] for p in P if singers[p] == elem) <= 1 # Start solving the problem instance prob.solve() # Extract solution portfolio = [(singers[p], costs[p], bets[p], scores[p]) for p in P if x[p].varValue] portfolio = sorted(portfolio, key=lambda x: x[3], reverse=True) return portfolio # - portfolio = maximization_scores(singers, costs, bets, scores) portfolio sorted(list(zip(singers, costs, scores)), key=lambda x: x[2], reverse=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch from torch import nn import torch.nn.functional as F class CNNCifar(nn.Module): def __init__(self, num_classes): super(CNNCifar, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, num_classes) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x # + net_glob = CNNCifar(num_classes = 10).to("cuda") inp = torch.randn([1,3,32,32]).cuda() # log_probs = net_glob(inp) # inp = torch.randn([3,32,32]).cuda() flops1 = FlopCountAnalysis(net_glob, inp) print("Total FLOPs: " + str(flops1.total())) print(flop_count_table(flops1)) # - class MLP_from_CNN(nn.Module): def __init__(self): super(MLP_from_CNN, self).__init__() self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net_glob = MLP_from_CNN().to("cuda") inp = torch.randn(400,1).cuda() flops1 = FlopCountAnalysis(net_glob, inp) print(flops1.total()) 
print(flop_count_table(flops1)) # + class CNNCifar2(nn.Module): def __init__(self, num_classes): super(CNNCifar2, self).__init__() self.conv1 = nn.Conv2d(3, 5, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(5, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, num_classes) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net_glob = CNNCifar2(num_classes = 10).to("cuda") inp = torch.randn([1,3,32,32]).cuda() # log_probs = net_glob(inp) # inp = torch.randn([3,32,32]).cuda() flops1 = FlopCountAnalysis(net_glob, inp) print("Total FLOPs: " + str(flops1.total())) print(flop_count_table(flops1)) # - class MLP(nn.Module): def __init__(self, dim_in, dim_hidden, dim_out): super(MLP, self).__init__() self.layer_input = nn.Linear(dim_in, dim_hidden) self.relu = nn.ReLU() self.dropout = nn.Dropout() self.layer_hidden = nn.Linear(dim_hidden, dim_out) def forward(self, x): x = x.view(-1, x.shape[1]*x.shape[-2]*x.shape[-1]) x = self.layer_input(x) x = self.dropout(x) x = self.relu(x) x = self.layer_hidden(x) return x from fvcore.nn import FlopCountAnalysis from fvcore.nn import flop_count_table net_glob = MLP(dim_in=784, dim_hidden= 0, dim_out=10).to("cuda") inp = torch.randn(784,1).cuda() flops1 = FlopCountAnalysis(net_glob, inp) flops1.total() print(flop_count_table(flops1)) n1 = MLP(dim_in=784, dim_hidden=150, dim_out=10).to("cuda") inp = torch.randn(784,1).cuda() flops2 = FlopCountAnalysis(n1, inp) flops2.total() #GlobalNet 200 neurons in hidden layer print(flop_count_table(flops1)) #N2 N3 N4 150 neurons in hidden layer print(flop_count_table(flops2)) setting_array = [8,0,0,2] # + a = 10 * setting_array[0] b = a + 10 * setting_array[1] c = b + 10 * setting_array[2] print(0,a) print(a,b) print(b,c) print(d,100) # - setting = str(setting_array).replace(",","").replace(" ","").replace("[","").replace("]","") with open('temp.txt', 'a+') as f: print('printing to a file.', file=f) import winsound duration = 1000 # milliseconds freq = 940 # Hz winsound.Beep(freq, 500) winsound.Beep(38, 500) winsound.Beep(freq, 500) winsound.Beep(38, 500) winsound.Beep(freq, 500) winsound.Beep(38, 500) winsound.Beep(freq, 500) winsound.Beep(38, 500) winsound.Beep(freq, 500) winsound.Beep(38, 500) winsound.Beep(freq, 500) winsound.Beep(38, 500) import os dir = './save/non-iid-mnist/Update_every_round' os.listdir(dir) ll = list(os.listdir(dir)) ll for e in ll: if e.endswith('.txt'): id = (e.split('_')[-1][0:4]) with open(dir +'/' + e) as f: # print(e) print(id) for line in f: if line.startswith("Tr"): # print(line) # print(line[-6:-1]) # print("Testing Accuracy: "+line[-6:-1]) print("Training Accuracy: "+line[-6:-1]) # pass # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python3 # --- # + # default_exp FE # - # # FE # # > This module contains functions to generate parametric LS-Dyna simulations. 
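# The sampling performed further below in `FE.get_samples` combines a latin
# hypercube design from pyDOE with inverse-CDF (`ppf`) scaling from scipy. A
# minimal sketch of that idea, assuming two hypothetical parameters: a uniform
# one on [40, 65] (the range used in the FE_parameters.yaml example later in this
# notebook) and a normal one with mean 60 and an assumed SD of 5. This is only an
# illustration, not the class implementation itself.
# +
from pyDOE import lhs
from scipy.stats import uniform
from scipy.stats.distributions import norm

unit_doe = lhs(2, samples=5)                        # 5 samples in the unit square
par_a = uniform(40, 65 - 40).ppf(unit_doe[:, 0])    # scale column 0 to U(40, 65)
par_b = norm(loc=60, scale=5).ppf(unit_doe[:, 1])   # scale column 1 to N(60, 5)
print(par_a, par_b)
# -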
#hide from nbdev.showdoc import * # + #export from pyDOE import lhs import numpy as np from scipy.stats.distributions import norm from scipy.stats import uniform import yaml from qd.cae.dyna import KeyFile import os import pandas as pd from diversipy.hycusampling import maximin_reconstruction as maxmin from pathlib import PurePath class FE(): """ This Class contains set of methods which performs reading of the .yaml file and replaces values of the input parameters with newly generated sample data sets. And then, new key files are generated for simulation. ----------- INPUTS ----------- settigs : Input file for FE simulations to get the user input """ def __init__(self, settings): self.settings = settings self.folders_count=0 self._read_user_input() def _read_user_input(self): """ gets the user input details from the settings.yaml file. Returns ------- fin_dir : Final path of the created directory self.Run : Number of runs self.para_list : A .yaml file containing the parameters/ features/ variables for sampling with appropriate values as subkeys in the same file. self.key : .key file containg the initial simulation details. """ """ gets the user input details from the settings.yaml file. Returns ------- fin_dir : Final path of the created directory self.Run : Number of runs self.para_list : A .yaml file containing the parameters/ features/ variables for sampling with appropriate values as subkeys in the same file. self.key : .key file containg the initial simulation details. """ with open(self.settings,'r') as file: inp = yaml.load(file, Loader=yaml.FullLoader) inp_vals=[*inp.values()] inp_keys=[*inp.keys()] req=['baseline_directory','simulations'] for names in req: if names not in inp_keys: raise Exception(names +" not in dynakit_FE.yaml file") if inp[names] == None: raise Exception(names +" value not in dynakit_FE.yaml file") if isinstance(inp['simulations'], int) == True: self.Run=inp['simulations'] self.int='yes' self.Flag=1 elif isinstance(inp['simulations'], str) == True: self.DOE=pd.read_csv(inp['simulations']) self.int='no' self.Run=len(self.DOE) self.Flag=1 else: print('Enter either a Integer or a .csv Input') self.cwd=os.getcwd() base_dir=PurePath(inp['baseline_directory']) self.basepath=os.path.abspath(base_dir) self.fin_dir=os.path.dirname(self.basepath) self.basename=base_dir.name self.dyna_dir = os.path.join(self.fin_dir,'.dynakit') self.para_list='FE_parameters.yaml' self.key=inp['main_key'] self.fol_name=self.basename.split('_')[0] if os.path.exists(self.dyna_dir): if [name for name in os.listdir(self.dyna_dir) if name.endswith(".csv")] == []: os.rmdir(self.dyna_dir) try: os.mkdir(self.dyna_dir) except OSError as err: print('Adding new samples to the existing directory') self.Flag=0 return self.fin_dir , self.Run , self.key , self.para_list def read_parameters(self): """ converts the .yaml file to a dictionary Parameters ---------- self.para_list : the config.yaml file with the user inputs Returns ------- z : the .yaml file in dictionary format """ os.chdir(self.fin_dir) with open(self.para_list,'r') as file: parameter_list = yaml.load(file, Loader=yaml.FullLoader) dynParams = {k: v for k, v in parameter_list['parameters'].items() if parameter_list['parameters'][k]['type'] == 'dynaParameter'} self.dynaParameters = pd.DataFrame.from_dict(dynParams) onparams = {k: v for k, v in dynParams.items() if dynParams[k]['status'] == True } self.new_par=pd.DataFrame.from_dict(onparams) on=self.new_par.loc['parameter'] self.on_params=on.to_list() return self.dynaParameters def 
get_samples(self): """ samples the data based on the .yaml file using normal / uniform distribution and lhs library Parameters ---------- vars : values assigned to the sub keys in the .yaml file self.Run : Number of samples required Returns ------- Data : samples matrix in a list """ os.chdir(self.dyna_dir) if self.int=='yes': self.col_names=self.dynaParameters.loc['parameter'] elif self.int=='no': self.col_names=self.DOE.columns if self.int =='yes': DOE_s = lhs(len(self.new_par.loc['parameter']),samples = self.Run) j=0 self.DOE=np.zeros((self.Run,len(self.dynaParameters.loc['parameter']))) for i in range(0,len(self.dynaParameters.loc['parameter'])): if self.dynaParameters.loc['parameter'][i] in self.on_params: self.DOE[:,i]=DOE_s[:,j] j+=1 else: self.DOE[:,i]=1 save_file=pd.DataFrame(self.DOE) os.chdir(self.dyna_dir) save_file.to_csv('DOE.csv', index=False) minimum_val = self.dynaParameters.loc['min'] maximum_val = self.dynaParameters.loc['max'] for j in range(0,len(self.dynaParameters.loc['parameter'])): if self.dynaParameters.loc['parameter'][j] in self.on_params: if self.dynaParameters.loc['distribution'][j]=='Uniform': self.DOE[:,j]=uniform(self.dynaParameters.loc['min'][j], self.dynaParameters.loc['max'][j] - self.dynaParameters.loc['min'][j]).ppf(self.DOE[:, j]) elif self.dynaParameters.loc['distribution'][j]=='Normal': self.DOE[:, j] = norm(loc=self.dynaParameters.loc['mean'][j], scale=self.dynaParameters.loc['SD'][j]).ppf(self.DOE[:, j]) else: self.DOE[:,j]=self.dynaParameters.loc['default'][j] elif self.int=='no': os.chdir(self.dyna_dir) df=self.DOE df.to_csv('DOE.csv', index=False) self.DOE=np.array(self.DOE) return self.DOE def add_samples(self): """ adds samples of the data based on the .yaml file using normal / uniform distribution and lhs library Parameters ---------- vars : values assigned to the sub keys in the .yaml file self.Run : Number of samples required self.fin_dir : final path of the created directory Returns ------- Data : samples matrix in a list """ os.chdir(self.cwd) os.chdir(self.fin_dir) self.folders_count =len([name for name in os.listdir(os.getcwd()) if name.startswith(self.fol_name)])-1 os.chdir(self.dyna_dir) if os.path.isfile('DOE.csv'): old_DOE_s=pd.read_csv('DOE.csv') else: print('No preexisting DOE found!') if self.int=='yes': self.col_names=self.dynaParameters.loc['parameter'] elif self.int=='no': self.col_names=self.DOE.columns if self.int=='yes': old_DOE=np.zeros((self.folders_count,len(self.new_par.loc['parameter']))) old=old_DOE_s.values j=0 for i in range(0,len(self.dynaParameters.loc['parameter'])): if self.dynaParameters.loc['parameter'][i] in self.on_params: old_DOE[:,j]=old[:,i] j+=1 data_add=lhs(len(self.new_par.loc['parameter']),samples = self.Run) DOE_new_add= maxmin(self.Run,len(self.new_par.loc['parameter']), num_steps=None, initial_points=data_add, existing_points=old_DOE, use_reflection_edge_correction=None, dist_matrix_function=None, callback=None) new_DOE=np.zeros((self.Run,len(self.dynaParameters.loc['parameter']))) j=0 for i in range(0,len(self.dynaParameters.loc['parameter'])): if self.dynaParameters.loc['parameter'][i] in self.on_params: new_DOE[:,i]=DOE_new_add[:,j] j+=1 else: new_DOE[:,i]=1 df=pd.DataFrame(new_DOE) os.chdir(self.dyna_dir) df.to_csv('DOE.csv', mode='a', header=False, index=False) self.DOE= pd.read_csv('DOE.csv') for j in range(0,len(self.dynaParameters.loc['parameter'])): if self.dynaParameters.loc['parameter'][j] in self.on_params: if self.dynaParameters.loc['distribution'][j]=='Uniform': 
self.DOE.values[:,j]=uniform(self.dynaParameters.loc['min'][j], self.dynaParameters.loc['max'][j] - self.dynaParameters.loc['min'][j]).ppf(self.DOE.values[:, j]) elif self.dynaParameters.loc['distribution'][j]=='Normal': self.DOE.values[:, j] = norm(loc=self.dynaParameters.loc['mean'][j], scale=self.dynaParameters.loc['SD'][j]).ppf(self.DOE.values[:, j]) else: self.DOE.values[:,j]=self.dynaParameters.loc['default'][j] self.DOE=self.DOE.values elif self.int=='no': os.chdir(self.dyna_dir) df=self.DOE df.to_csv('DOE.csv', mode='a', header=False, index=False) self.DOE=np.array(self.DOE) return self.DOE def generate_keyfile(self): """ Generate the new updated .key file and a FE_Parameters.yaml file containing respective sampled values for each parameters in new folders. Parameters ---------- self.newkey : a new key file with an updated file path. self.fin_dir : final path of the created directory self.Run : Number of samples required self.para_num : number of parameters/variables/features self.para_names : Names of parameters/variables/features self.DOE : samples matrix in a list Returns ------- fldolder in the directory """ os.chdir(self.basepath) kf=KeyFile(self.key) os.chdir(self.fin_dir) key_parameters=kf["*PARAMETER"][0] key_parameters_array=np.array(kf["*PARAMETER"][0]) # Creating a dictionary with key and it's values: key_dict={} R_index=[] for i in range(0,len(key_parameters_array)): if key_parameters_array[i].startswith('R'): R_index.append(i) f=key_parameters_array[i].split(' ') key_dict[f[1]]=f[-1] par_lis=[*key_dict.keys()] os.chdir(self.fin_dir) self.folders_count =len([name for name in os.listdir(os.getcwd()) if name.startswith(self.fol_name)]) for run in range(0,self.Run): os.mkdir('{}_{:03}'.format(self.fol_name,(run+self.folders_count))) os.chdir('{}_{:03}'.format(self.fol_name,(run+self.folders_count))) FE_Parameters = {} for para in range(0,len(self.col_names)): for i in range(0,len(R_index)): if par_lis[i] == self.col_names[para]: key_parameters[i+1,1] = self.DOE[run+self.folders_count-1,para] kf.save("run_main_{:03}.key".format((run+self.folders_count))) FE_Parameters[par_lis[i]] = key_parameters[i+1,1] with open('simulation_Parameters.yaml','w') as FE_file: yaml.dump(FE_Parameters,FE_file,default_flow_style = False) os.chdir(self.fin_dir) def get_simulation_files(self): """ Runs all the methods of pre-process class """ self.read_parameters() if self.Flag==1: self.get_samples() elif self.Flag==0: self.add_samples() self.generate_keyfile() # - # # Using the library to run the program. 
# To access the Dynakit library, the user need to fill necessary key inputs in dynakit_FE.yaml and FE_parameters.yaml files # # - User input in dynakit_FE.yaml # # ```yaml # # Path to baseline input file # # baseline_directory : 'example\test_project\base_000' # # # If a number is given, the DoE is generated for the given number # # If you prefer to provide the DoE, input the DoE as a csv and give its name here # # simulations: 5 # # ``` # # - Inputs to the FE_parameters.yaml file are given as below, # # ```yaml # parameters: # 'delta velocity' : # type : dynaParameter # status: off # parameter : DV # default : 60 # max : 65 # min : 40 # distribution: Uniform/Normal # ``` #hide import os path_to_run=os.getcwd() # + # In this case, dynakit_FE.yaml will read from the working directory: project =FE("example/test_project/dynakit_FE.yaml") # if dynakit_FE.yaml will be in a different directory: # project = FE(r'D:\Project_1\dynakit_FE.yaml') # reads FE_parameters.yaml file and creates dictionary for the parameters with type 'dynaparameters' df=project.read_parameters() #samples the data for the given parameter ranges and type of distribution specified by the user. df_DOE=project.get_samples() # Generates the key files based on the sampled data for each parameters project.generate_keyfile() # + # hide os.chdir(path_to_run) with open('example/test_project/dynakit_FE.yaml','r') as file: dyna = yaml.load(file, Loader=yaml.FullLoader) project_path=os.path.dirname(os.path.abspath(dyna['baseline_directory'])) file_name=[name for name in os.listdir(project_path) if not (name.startswith('.'))][0].split('_')[0] # --------------------test 1----------------------------------------------- os.chdir(project_path) assert os.path.exists('.dynakit') == True # -------------------test for config---------------------------------------- os.chdir(path_to_run) with open('example/test_project/FE_parameters.yaml','r') as file: para_list = yaml.load(file, Loader=yaml.FullLoader) a={k: v for k, v in para_list['parameters'].items() if para_list['parameters'][k]['type'] == 'dynaParameter'} assert len([*a.keys()])==len(df.columns) assert len([*a.values()][0])==len(df) assert [*a.values()][0]['max'] == df.iloc[:,0]['max'] # -------------------------Test for get_samples ------------------------------ assert df.iloc[:,0]['min']max(df_DOE[:,0]) # -----------------------test for the generate_keyfile------------------------- assert dyna['simulations'] == len([name for name in os.listdir(project_path) if name.startswith(file_name)])-1 # save the length of the DOE to compare later os.chdir(project_path) doe_o=len(pd.read_csv(".dynakit/DOE.csv")) # - # ### Augmentation # - Adding more samples to the existing sample set. 
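# `add_samples` above delegates the actual point placement to diversipy's
# `maximin_reconstruction`, which pushes the new runs away from the existing DOE.
# As a rough, numpy-only sketch of that maximin idea (greedy farthest-candidate
# selection over random candidates; purely illustrative, not the diversipy
# routine used by the class):
# +
import numpy as np

rng = np.random.default_rng(0)
existing = rng.random((8, 2))       # hypothetical points already in the DOE (unit square)
candidates = rng.random((200, 2))   # hypothetical candidate locations for new runs

new_points = []
for _ in range(4):  # add 4 new points, one at a time
    pool = np.vstack([existing] + new_points) if new_points else existing
    # distance from each candidate to its nearest already-placed point
    d = np.min(np.linalg.norm(candidates[:, None, :] - pool[None, :, :], axis=2), axis=1)
    new_points.append(candidates[np.argmax(d)][None, :])

print(np.vstack(new_points))
# -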
#hide os.chdir(path_to_run) # + # Re-initiate the program for augmenting project = FE('example/test_project/dynakit_FE.yaml') df=project.read_parameters() # augmentation method for sampling df_DOE=project.add_samples() # to generate key files project.generate_keyfile() # + #hide #--------------------------test for augmentation--------------------------- os.chdir(path_to_run) with open('example/test_project/dynakit_FE.yaml','r') as file: set_list = yaml.load(file, Loader=yaml.FullLoader) os.chdir(project_path) doe_n=pd.read_csv(".dynakit/DOE.csv") assert doe_o+set_list['simulations'] == len(doe_n) # - # ### Run all methods: # - Run the complete program in a single step #hide os.chdir(path_to_run) # + # initiate the program for augmenting project = FE('example/test_project/dynakit_FE.yaml') # command to execute all the methods project.get_simulation_files() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### 1. You want to select author's last name from a table, but you only remember the author’s last name starts with the letter B, which string pattern can you use? # ##### Ans: SELECT lastname from author where lastname like ‘B%’ # #### 2. In a SELECT statement, which SQL clause controls how the result set is displayed? # ##### Ans: ORDER BY clause # #### 3. Which SELECT statement eliminates duplicates in the result set? # ##### Ans: SELECT distinct(country) from author # #### 4. What is the default sorting mode of the ORDER BY clause? # ##### Ans: Ascending # #### 5. Which of the following can be used in a SELECT statement to restrict a result set? # ##### Ans: #
#
# - HAVING
# - GROUP BY
# - DISTINCT
#
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # CSC398 Report: Participation in industry-based Design and Tech Conferences
#
# Submitted by: (1005095068)
#
    # # # # # ### Introduction # The purpose of this report is to analyse the participation of people in a particular chosen industry-based Design and Tech Conference i.e. The Future of Innovation, Technology, and Creativity(FITC) Toronto. # # Participation in technology is changing over the years. It is important to analyse the trends and address the issues of under-representation of certain ethnic groups and genders. This report is an attempt to get an insight into how the community is represented and how its diversity changes over the years in the area of Design and Tech. To be specific, we will be looking at the years 2002-2019. # # To achieve this, we will be testing multiple hypotheses. We will be testing the change in the population, representation of genders, and various ethnic groups. # # Data Collection: In order to test these hypothesis, we had to collect relevant data. This was achieved by a web scrapper tool called Scrapy to extract names from the conference websites. Further information on names like gender and race-ethnicaity was obtained using a paid API i.e. NamSor. # ### Hypothesis: Number of people attending the Design and Tech conference increases each year. # # The number of participants varies each year and the new technologies coming up rapidly and also new fields being introduced, it would interesting to test this hypothesis to measure the impact on Design and Tech conferences. # # To test this, we will be using Linear Regression.
# First, we will set our Null Hypothesis.
# # Null Hypothesis: Population remains the same over the years.
    # This implies that our coefficient of x i.e. B(Beta Value) in a linear regression model(y = Bx + c) must be 0. # # In the analysis, we will be using various python libraries for testing this. # + import csv import numpy as np import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm # Read population data from the CSV file. data = pd.read_csv("population_data.csv") data # Year 1 corresponds to '2002' and similarly other index values map to consective years untill 2019 i.e. 17. # - data.describe() # #### Define the dependent(y) and the independent variable(x1) y = data['Population'] x1 = data['Year'] # #### Explore the Data plt.scatter(x1,y) plt.xlabel('Year', fontsize = 20) plt.ylabel('Population', fontsize = 20) plt.show() x = sm.add_constant(x1) results = sm.OLS(y,x).fit() results.summary() plt.scatter(x1,y) yhat = 1.5077*x1 + 56.8431 #These values are from Regression Table. 27.700 is Year coef and 335.1 is constant. fig = plt.plot(x1,yhat, lw=4, c='orange', label = 'regression line') plt.xlabel('Year', fontsize = 20) plt.ylabel('Population', fontsize = 20) plt.xlim(0) plt.ylim(0) plt.show() #
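# As a convenience, the slope (B value) and its p-value that the analysis below reads off the
# regression table can also be pulled directly from the fitted results object. This is a minimal
# sketch using the `results` variable fitted above.
beta_year = results.params['Year']  # estimated coefficient of Year (the B value)
p_year = results.pvalues['Year']    # two-sided p-value for the Year coefficient
print("Year coefficient (B):", round(beta_year, 4), "p-value:", round(p_year, 4))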

# #### Observations

    # # The first two dots in the graph above represents the initial years of the conference. Therefore, since the conference was new, it makes sense to have low attendance. These two years of data should be treated as outliers and be removed from the analysis. In further analysis, we will look at the sudden jump in population after 2 initial years. # # Data Analysis after removing the outliers import matplotlib.pyplot as plt data1 = pd.read_csv("population_data_modified.csv") y1 = data1['Population'] x2 = data1['Year'] x = sm.add_constant(x2) results = sm.OLS(y1,x).fit() results.summary() plt.scatter(x2,y1) plt.xlabel('Year', fontsize = 20) plt.ylabel('Population', fontsize = 20) yhat = -0.1956*x1 + 78.35 #These values are from Regression Table. -0.1956 is Year coef and 78.35 is constant. fig = plt.plot(x1,yhat, lw=4, c='orange', label = 'regression line') plt.xlim(0) plt.ylim(0) plt.show() #
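# The modified CSV read above was prepared outside the notebook. A minimal sketch of how it could
# be derived from the original table, assuming it is simply population_data.csv with the first two
# (outlier) years dropped:
data_no_outliers = data.iloc[2:].reset_index(drop=True)  # drop the first two conference years
data_no_outliers.head()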

# #### Analysis

    # The P-value for year = 0.736 (> 0.05) suggests that there is no significant impact of change in time over the population of the conference. Also, the P-value = 0.000 for population suggests that this result is definitely not by chance. # The B-value(co-effecient of x) is -0.1956. This is incosistent with our Null Hypothesis i.e. B = 0. However, it is important to note that the difference between 0 and -0.1956 is very small and it is difficult to make a strong conclusion about the Null Hypothesis. # # Evidence: # The yellow line in the graph suggests that the population remained similar over the years. To support this claim, an additional research was performed. Acording to the observations, it was noticed that the location of the conference had a strong influence over the population of the conference. In the initial years, the conference was held at different locations. However, after initial years, the conference has been held at the same location from past several years. This explains the low attendance in the initial years and constant after that. This also suggests that the organisers invited a limited number of people as per their capacity and the data trends of this conference cannot be used to generalise the trend in the industry as a whole. Therefore, we fail to reject the Null Hypothesis. # # Furthermore, we cannot make any strong conclusions about our initial hypothesis i.e. the population of the conference increases over the years. #

# ### Hypothesis: The gender composition is changing over the years.

# # I decided to test this hypothesis because it is crucial: women in tech are usually under-represented, and there is an uneven split between the number of males and females attending these conferences. It is important to analyse and study these data so that the participation of the under-represented gender can be better encouraged. Even though more and more women are being encouraged and welcomed into the tech field, I still feel they are under-represented. #
    # In order to determine the gender of the participants, I used NamSor tool which tells me about the likely gender of each participant. # # #### Exploring the Data import pandas as pd import numpy as np import matplotlib.pyplot as plt # data to plot # Read gender data from the CSV file. gender_data = pd.read_csv("gender_data.csv") gender_data # Year 1 corresponds to '2002' and similarly other index values map to consective years untill 2019 i.e. 18. gender_data.describe() # + x = gender_data["Year"] y1 = gender_data["Male"] y2 = gender_data["Female"] # create a plot fig, ax = plt.subplots() index = np.arange(18) bar_width = 0.35 opacity = 0.8 rects1 = plt.bar(index, y2, bar_width, alpha=opacity, color='b', label='Female') rects2 = plt.bar(index + bar_width, y1, bar_width, alpha=opacity, color='g', label='Male') plt.xlabel('Years') plt.ylabel('Males and Females') plt.title('Female to Male Ratio') plt.xticks(index + bar_width, ('02', '03', '04', '05','06','07','08','09','10','11','12','13','14','15','16','17','18','19')) plt.legend() plt.tight_layout() plt.show() # - # #### Observations # It is important to note that I judged the accuracy of the results of NamSor tool based on two evidences. First, I performed a manual analysis of the known names whose genders I am aware of. It matched all the actual genders. Secondly, most of the names have a confidence probability of over 0.89 which is very high. However, some of the names were 'pseudo-names' like 'GMUNK' which might be more famous in the industry. I highly doubt the accuracy of NamSor API on such names. However, the number of such names is very small(3-5). Therefore, it won't have a significant impact on the data. # # Also, the NamSor tool only segregates a name by either male or female. It does not account for the people who identify themselves as LGBTQ communities. The conference is based in Toronto. According to Statistics Canada(https://www.statcan.gc.ca/eng/dai/smr08/2015/smr08_203_2015#a3), less than 2% Canadian population identified themselves belonging to the LGBTQ community. Therefore, it is less likely that this bias in data will have a significant impact on the overall trend of the data. # # Another important observation is that, in the year 2002, there were no women at the conference. The representation was very uneven. # # Looking at the bar graph, we can clearly see that there is an uneven split in the men-women representation in this conference. However, we need to verify our claim statistically. # # To verify my claim, I will be using Linear Regression analysis. To model the data, we will be taking the ratio of females to the total population. # # Therefore, we need a Null Hypothesis and the Alternative Hypothesis. # # Null Hypothesis: The female to total ratio remains the same over the years, i.e. B(beta) value which is co-efficient of x in linear regression remains 0. # # Alternative Hypothesis: The female to total ratio increases over the years i.e. B > 0 # # #### Linear Regression import statsmodels.api as sm gender_data = pd.read_csv("gender_data.csv") y3 = gender_data['Ratio'] x1 = sm.add_constant(x) results = sm.OLS(y3,x1).fit() results.summary() plt.scatter(x,y3) yhat = +0.0153*x + 0.0120 #These values are from Regression Table. 0.0153 is Year coef and 0.0120 is constant. 
fig = plt.plot(x,yhat, lw=4, c='orange', label = 'regression line') plt.xlabel('Year', fontsize = 20) plt.ylabel('Female to Total Ratio', fontsize = 20) plt.xlim(0) plt.ylim(0) plt.title('Linear Regression') plt.show() # #### Result and Summary # # We can see from the regression table and our graph that the B value(coefficient of x) is non-zero. Also, the p-value is 0.000 which suggests it is statistically significant and there is very weak evidence for the null hypothesis. It also suggests that this result is not by-chance. Therefore, our Null-Hypothesis is not true. This implies that the female to total ratio is not constant over the years. # # Therefore, our Alternative Hypothesis is correct. Actually, the value of B = 0.153 > 0. This is really encouraging as it indicates that the participation of women is increasing over the years and heading towards the even split in gender ratios. Therefore, the regression analysis supports our original hypothesis that female to total population ratio has improved over the years. # #
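# The gender_data.csv file already contains the 'Ratio' column used in the regression above. As a
# sketch of how such a column could be derived from the raw counts (assuming 'Male' and 'Female'
# hold per-year attendance counts), 'Ratio_check' is a hypothetical column added only for comparison:
gender_data['Ratio_check'] = gender_data['Female'] / (gender_data['Female'] + gender_data['Male'])
gender_data[['Year', 'Female', 'Male', 'Ratio', 'Ratio_check']].head()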
    # # ### Hypothesis 3: The race ethnicity is unevenly distributed in the Big Data conferences. # # In any conference, there are people who come from different cultural backgrounds, different countries, and different races of ethnicity. This is an important factor such as diversity that leads to different perspectives at a conference. A dominant group also tends to lead the direction in which the future of the field goes. Therefore, I chose this hypothesis to further analyse the participation of various groups in the conference. # # We used the NameSor tool to determine the race-ethnicity of the participants. The data is based on the race-ethnicity of the US. The US is full of people and immigrants from various cultures and race should be a good data set to determine the likely ethnicity of the participants. # # Data Representation import pandas as pd import numpy as np import matplotlib.pyplot as plt # data to plot # Read ethnicity data from the CSV file. ethnicity_data = pd.read_csv("ethnicity_data.csv") ethnicity_data ethnicity_data.describe() # + # Year 1 corresponds to '2002' and similarly other index values map to consective years untill 2019 i.e. 18. x = ethnicity_data["Year"] y1 = ethnicity_data["Asian"] y2 = ethnicity_data["Black, Non Latino"] y3 = ethnicity_data["Hispano Latino"] y4 = ethnicity_data["White, Non Latino"] # create a plot fig, ax = plt.subplots() index = np.arange(18) bar_width = 0.2 opacity = 0.8 rects1 = plt.bar(index, y1, bar_width, alpha=opacity, color='b', label='Asian') rects2 = plt.bar(index + bar_width, y2, bar_width, alpha=opacity, color='g', label='Black, Non Latino') rects3 = plt.bar(index + 2*bar_width, y3, bar_width, alpha=opacity, color='r', label='Hispano Latino') rects2 = plt.bar(index + 3*bar_width, y4, bar_width, alpha=opacity, color='y', label='White, Non Latino') plt.xlabel('Years') plt.ylabel('Number of people attending the conference') plt.title('Race Ethnicity Demographics') plt.xticks(index + bar_width, ('02', '03', '04', '05','06','07','08','09','10','11','12','13','14','15','16','17','18','19')) plt.legend() # plt.tight_layout() plt.show() # - # Looking at the bar, graph we can clearly see the race-ethnicity distribution is not equal in this conference. However, we need to verify our claim statisticly. # # To verify my claim statistically, I will be using ANOVA F-test (Analysis of Variance). # # Therefore, we need a Null Hypothesis and Alternative Hypothesis. # # Null Hypothesis: The race ethnicity distribution is equal i.e. mean of each race group is equal. # # Alternative Hypothesis: Atleast one race group's mean is different. # # #### ANOVA F-Test # generate a boxplot to see the data distribution by race. Using boxplot, we can easily detect the differences # between different race ethnicity. ethnicity_data.boxplot(column=['Asian', 'Black, Non Latino', 'Hispano Latino', 'White, Non Latino'], grid=False) # load packages import scipy.stats as stats # stats f_oneway functions takes the groups as input and returns F and P-value fvalue, pvalue = stats.f_oneway(ethnicity_data['Asian'], ethnicity_data['Black, Non Latino'], ethnicity_data['Hispano Latino'], ethnicity_data['White, Non Latino']) print(fvalue, pvalue) # Observations and Results # # Here we can clearly see that the result of pvalue = 9.748241731748067e-28 (P < 0.05). This implies that there is significant difference in the means of the all the race groups. According to the box-plot graph, we can see that highest representation is by White, Non Latino community. 
The second highest is Asians, followed by Black and Hispano Latinos. This representation of ethnic groups has been relatively consistent over past 17 years. Since this conference is held in Toronto, I compared the above data with ethnic demographics of Toronto. (https://www.thestar.com/news/gta/2018/09/30/toronto-is-segregated-by-race-and-income-and-the-numbers-are-ugly.html). The article displays that Toronto has similar ethnic diversity when compared with this conference. # # # Therefore, our null hypothesis is rejected. Our alternative hypothesis that atleast one group has different mean is accepted. Observing the graphs and other articles, I can clearly see White community dominating this space. The same observation was supported by our analysis. Therefore, our original hypothesis that race ethnicity is unevenly distributed as been verified. # # # #### Summary # In our report, we tested various hypotheses to determine the diversity in the participation of people in the Design and Tech industry conference. One of the interesting points to note in this analysis is that participation in the conference is highly affected by the location i.e. Toronto. A similar trend was seen in all the hypotheses. The number of people invited is limited and often the same people are invited over the years. The representation of both genders is moving towards the desired goal of equal split every year. The ethnic demographics are unevenly distributed. I feel that the conference is limited to local speakers. This also explains the unpopularity of the conference internationally. Even though this analysis displays many interesting trends, the data presented cannot be used to generalize similar participation in the overall field of Design and Technology. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression # - ccpp = pd.read_excel("~/Downloads/Folds5x2_pp.xlsx") ccpp #Rename Columns to make them better descriptive and unddable rename_columns = ccpp.rename(columns = {"V": "Exhaust_Vacuum", "AP": "Ambient_Pressure", "RH": "Relative_Humidity", "AT": "Temperature", "PE": "Electricity_output"}) rename_columns #Check for missing Values in the dataset ccpp.isna().sum() rename_columns.info() rename_columns.describe() #Check for Outliers of any sort fig = plt.figure(figsize = (10, 7)) plt.boxplot(rename_columns) rename_columns.shape fig= sns.lmplot(x = 'Relative_Humidity', y='Electricity_output',data=rename_columns) plt.title("Electricity_output VS Relative_Humidity") fig= sns.lmplot(x = 'Ambient_Pressure', y='Electricity_output',data=rename_columns) plt.title("Ambient_Pressure VS Electrical_output") fig= sns.lmplot(x = 'Temperature', y='Electricity_output',data=rename_columns) plt.title("Electricity_output VS Temperature") fig= sns.lmplot(x = 'Exhaust_Vacuum', y='Electricity_output',data=rename_columns) plt.title("Electricity_output VS Exhaust_Vacuum") ##HeatMap Can also be used to check the correlation between the dependent variable and the Independent variable plt.figure(figsize=(15, 10)) sns.heatmap(rename_columns.corr(), annot=True) y = rename_columns['Electricity_output'] X = rename_columns.drop(columns=['Electricity_output'], axis=1) 
y.shape X.shape #Splitting the DataSet into Train and Test. The second Train and Set will be used for validation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.20, random_state=45) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1) # + from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor(random_state=1, max_depth=4) model.fit(rename_columns,x) RandomForestRegressor(max_depth=4, random_state=1) # - features = rename_columns.columns importances = model.feature_importances_ indices = np.argsort(importances)[-4:] plt.title('Feature Importances') plt.barh(range(len(indices)), importances[indices], color='green', align='center') plt.yticks(range(len(indices)), [features[i] for i in indices]) plt.xlabel('Relative Importance') plt.show() model = LinearRegression() power_output = model.fit(X_val, y_val) power_output coef_deter = power_output.score(X_val, y_val) coef_deter 1 - (1-model.score(X_val, y_val)) * (len(y_val)-1)/(len(y_val)-X_val.shape[1]-1) intercept = model.coef_ intercept X_predict = power_output.predict(X_test) X_predict from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score print("MAE : ",mean_absolute_error(y_test, X_predict)) print("MSE : ", mean_squared_error(y_test, X_predict)) print("R2 Sore : ", r2_score(y_test, X_predict)) from sklearn.linear_model import Ridge model_ = Ridge() power_output = model_.fit(X_val, y_val) power_output coef_deter = power_output.score(X_val, y_val) coef_deter 1 - (1-model_.score(X_val, y_val)) * (len(y_val)-1)/(len(y_val)-X_val.shape[1]-1) intercept = model_.coef_ intercept X_predict = power_output.predict(X_test) X_predict from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score print("MAE : ",mean_absolute_error(y_test, X_predict)) print("MSE : ", mean_squared_error(y_test, X_predict)) print("R2 Sore : ", r2_score(y_test, X_predict)) # + train_score = [] test_score = [] random_state = 10 for i in range(random_state): X_train1, X_test1, y_train1, y_test1 = train_test_split(X_val, y_val, test_size = 0.30, random_state = i) linear_model_1 = LinearRegression() linear_model_1.fit(X_train1, y_train1) y_train_predict_mimax = linear_model_1.predict(X_train1) y_test_predict_minmax = linear_model_1.predict(X_test1) train_score = np.append(train_score, r2_score(y_train1, y_train_predict_mimax)) test_score = np.append(test_score, r2_score(y_test1, y_test_predict_minmax)) print(X_train1.shape) print(y_train1.shape) # - print('R2 train: %.2f +/- %.2f' % (np.mean(train_score),np.std(train_score))) print('R2 train: %.2f +/- %.2f' % (np.mean(train_score),np.std(train_score))) # + from yellowbrick.regressor import ResidualsPlot model = LinearRegression() visualizer = ResidualsPlot(model, hist=True, qqplot=False) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.show() # - *Train the model with the Linear Regression model since it performs better than the Ridge Regression model = LinearRegression() power_output = model.fit(X_train, y_train) coef_deter = power_output.score(X_train, y_train) coef_deter intercept = model.coef_ intercept y_predict = power_output.predict(X_test) from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score print("MAE : ",mean_absolute_error(y_test, y_predict)) print("MSE : ",mean_squared_error(y_test, y_predict)) print("R2 score : ",r2_score(y_test, y_predict)) From the R2 score above it shows that the variables in our regression equation can explain ~93% 
of the change in our output variable.* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # The shutil module offers a number of high-level operations on files and collections of files. In particular, functions are provided which support file copying and removal. For operations on individual files. import shutil print(dir(shutil)) # **shutil.copyfileobj(fsrc, fdst[, length])** - method in Python is used to copy the contents of a file-like object to antoher file-like object. By default this method copy data in chunks and if want we can also specify the buffer size through length parameter. # This method copies the content of the file from the current file position to the end of the file import shutil source = 'file.txt' fsrc = open(source, 'r') dest = 'file_copy.txt' fdst = open(dest, 'w') shutil.copyfileobj(fsrc, fdst) print("Contents of the file object copied sucessfully") fsrc.close() fdst.close() # **shutil.copyfile(src, dst, *, follow_symlinks=True)** - Copy the contents (no metadata) of the file named src to a file named dst and return dst in the most efficient way possible. src and dst are path-like objects or path names given as strings import shutil import os import glob print(os.listdir()) resp = shutil.copyfile('file.txt', 'file2.txt') print(f"The response is - {resp}") print(os.listdir()) # **shutil.copymode()** - It is used to copy the permission bits from the given source path to given destination path. The shutil.copymode() method does not affect the file content or owner and group information print(glob.glob('*')) src = 'app.py' dest = 'file.txt' print(f"Permission bits of source: {oct(os.stat(src).st_mode)[-3:]}") print(f"Permission bits of source: {oct(os.stat(dest).st_mode)[-3:]}") # shutil.copymode(src, dest) # print(f"Permission bits of source: {oct(os.stat(dest).st_mode)[-3:]}") # **shutil.copystat(source, destination, *, follow_symlinks = True)** - It is used to copy the permission bits, last access time, last modification time and flags value from the given source path to given destination. This method does not affect the file content and owner and group info # + import time src = 'basicLogging.py' dest = 'file2.txt' print("Before using shutil.copystat() method:") print("Source metadata:") print("Permission bits:", oct(os.stat(src).st_mode)[-3:]) print("Last access time:", time.ctime(os.stat(src).st_atime)) print("Last modification time:", time.ctime(os.stat(src).st_mtime)) print("--------------------------------------------------------------") print("\nDestination metadata:") print("Permission bits:", oct(os.stat(dest).st_mode)[-3:]) print("Last access time:", time.ctime(os.stat(dest).st_atime)) print("Last modification time:", time.ctime(os.stat(dest).st_mtime)) shutil.copystat(src, dest) print("\nDestination metadata:") print("Permission bits:", oct(os.stat(dest).st_mode)[-3:]) print("Last access time:", time.ctime(os.stat(dest).st_atime)) print("Last modification time:", time.ctime(os.stat(dest).st_mtime)) # - # **shutil.copy(src,dst, *, follow_symlinks=True)** - Copies the file src to the file or directory dst. src and dst should be path-like objects or strings. If dst specifies a directory, the file will be copied into dst using the base filename from src. Returns the path to the newly created file. 
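# A small sketch of the return-value behaviour described above. 'Test_sketch' is a hypothetical
# directory name used only for this illustration; 'file.txt' is the sample file used earlier.
import os
import shutil
os.makedirs('Test_sketch', exist_ok=True)
new_path = shutil.copy('file.txt', 'Test_sketch')  # dst is a directory, so the base filename of src is kept
print(new_path)                                    # e.g. 'Test_sketch/file.txt'
shutil.rmtree('Test_sketch')                       # clean up the illustration directory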
import glob glob.glob('*') import os os.mkdir('Test') import shutil shutil.copy('app.py', 'Test') shutil.copy('app.py', 'yoyo.txt') # **shutil.copy2(src, dst, *, follow_symlinks=True)** - Identical to copy except that copy2() also attempts to preserve file metadata. shutil.copy2('app.py', 'yoyoyo.txt') shutil.__all__ # **shutil.ignore_patterns(*patterns)** - This factory function creates a function that can be used as a callable for copytree()'s ignore argument, ignoring files and directories that match one of the glob-style patterns provided # **shutil.copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2, ignore_dangling_symlinks=False, dirs_exist_ok=False)** - Recursively copy an entire directory tree rooted at src to a directory named dst and returned the destination directory. dir_exists_ok dictates whether to raise an exception in case dst or any missing parent directory already exists. import os import glob import shutil if os.path.isdir('ToTest'): os.rmdir('ToTest') else: os.mkdir("ToTest") shutil.copytree(os.path.abspath(''), os.path.abspath('ToTest'), ignore=shutil.ignore_patterns('.ipynb_checkpoints')) os.path.abspath('') os.listdir() # **shutil.rmtree(path, ignore_errors=False, onerror=None)** - Delete an entire directory tree; path must point to a directory (but not a symbolic link to a directory) for i in os.listdir(): if os.path.isdir(i): print(i) os.rmdir('ToTest') shutil.rmtree('ToTest') # **shutil.move(src, dst, copy_function=copy2)** - Recursively move a file or directory (src) to another location (dst) and return the destination. src = glob.glob('*') dst = os.path.split(os.path.abspath(''))[0] shutil.move(os.path.abspath('Test'), os.path.split(os.path.abspath(''))[0]) os.path.abspath('Test') os.mkdir(f"{os.path.split(os.path.abspath(''))[0]}\\TOTESTYO") os.path.split(os.path.abspath(''))[0] # **shutil.disk_usage(path)** - Return disk usage statistics about the given path as a named tuple with the attributes total, used and free, which are the amount of total, used and free space, in bytes. Path may be a file or a directory shutil.disk_usage('C:') # **shutil.chown(path, user=None, group=None)** - Change owner user and/or group of the given path import os import glob import shutil from pathlib import Path os.listdir() # + path = 'app.py' info = Path(path) user = info.owner() group = info.group() print("Current owner and group of the specified path") print("Owner:", user) print("Group:", group) # - # **shutil.which(cmd, mode=os.F_OK | os.X_OK, path=None)** - Return the path to an executable which would be run if the given cmd was called. If no cmd would be called, return None. print(shutil.which('python')) print(shutil.which('gcc')) print(shutil.which('git')) # **exception shutil.Error** - This exception collects exception that are raised during a multi-line operation. FOr copytree() the exception argument is a list of 3-tuples (srcname, dstname, exception) # # Archiving Operations # # **shutil.make_archive(base_name, format[, root_dir[, base_dir[, verbose[, dry_run[, owner[, group[, logger]]]]]]])** - Create an archive file (such as zip or tar) and return its name shutil.make_archive('Archive_Test', 'zip') archive_name = os.path.expanduser(os.path.join('~', 'myarchive')) archive_name os.path.expanduser(os.path.join('~', '.ssh')) # **shutil.get_archive_formats()** - Return a list of supported formats for archivomg. 
Each elements of the returned sequence is a tuple (name, description) import os import glob import shutil os.listdir() shutil.get_archive_formats() # **shutil.register_archive_format(name, function[, extra_args[, description]])** - Register an archiver for the format name. def myzip(): pass shutil.register_archive_format('myzip', myzip, description='myzip file') shutil.get_archive_formats() # **shutil.unregister_archive_format(name)** - Remove the archive format name from the list of supported formats. shutil.unregister_archive_format('myzip') shutil.get_archive_formats() # **shutil.unpack_archive(filename[, extract_dir[, format]])** - Unpack an archive. filename is the full path of the archive. shutil.unpack_archive(os.path.abspath('file.zip'), 'C:\\Users\\\\Standard Library\\FORTESTYO', 'zip') # **shutil.register_unpack_format(name, extensions, function[, extra_args[, description]])** - register an un[acl format. name is the name of the format and extensions is a list of extensions corresponding to the format like `.zip` for Zip file def yoyo(): pass shutil.register_unpack_format('yoyo', '[.yoyo]', yoyo, description='yoyo') shutil.get_unpack_formats() # **shutil.unregister_unpack_format(name)** - Unregister an unpack format. name is the name of the formnat shutil.unregister_unpack_format('yoyo') shutil.get_unpack_formats() # **shutil.get_unpack_formats()** - Return a list of all registered formats for unpacking. Each element of the returned sequence is a tuple (name, extensions, description). shutil.get_unpack_formats() # # Querying the size of the output terminal # **shutil.get_terminal_size(fallback=(columns, lines))** - Get the size of the terminal window shutil.get_terminal_size() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.10 64-bit (''fcc_tf'': conda)' # name: python3 # --- # + import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt # + (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data() # Normalize pixel values train_images, test_images = train_images / 255.0, test_images / 255.0 class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] # + IMG_INDEX = 2 plt.imshow(train_images[IMG_INDEX], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[IMG_INDEX][0]]) plt.show() # + model = models.Sequential() # Convolutional Base model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) # Classifier model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10)) # - model.summary() # + model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True), metrics=['accuracy']) history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels)) # - test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print(test_acc) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="fWjWlshjLsok" 
colab_type="code" colab={} import keras from keras.datasets import mnist from tensorflow.python.keras import Sequential from tensorflow.python.keras.layers import Dense, Dropout from tensorflow.compat.v1.keras.optimizers import RMSprop # + id="mUZCxpzbL1P1" colab_type="code" colab={} #Carregando treino e teste #função possui parentes (x_treino, y_treino),(x_teste, y_teste) = mnist.load_data() # + id="-2EPDhqyNM18" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="bb64add2-cb9a-4a71-a10b-777711fee341" #how many images I have for train? print("img for train",len(x_treino)) #how many image for test? print("img for test", len(x_teste)) #who type x_treino? print("Tipo do x_treino", type(x_treino)) #get first image primeira_imagem = x_treino[0] print(primeira_imagem) # + id="RO-DKCK9OSIj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="d740eb7c-68d0-46eb-db9e-062122ef847c" import matplotlib.pyplot as plt indice = 15000 plt.imshow(x_treino[indice]) plt.show # + id="v3VfulcHQPh3" colab_type="code" colab={} #achatando matrizes de pixel e transformando em uma unica lista com valores entre 0 e 1 quantidade_treino = len(x_treino) quantidade_teste = len(x_teste) resolucao_imagem = x_treino[0].shape resolucao_total = resolucao_imagem[0] * resolucao_imagem[1] x_treino = x_treino.reshape(quantidade_treino, resolucao_total) x_teste = x_teste.reshape(quantidade_teste, resolucao_total) # + id="oCyxlqK2S6Za" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="9b745769-8d60-4051-d8cd-86cc47042ad7" #normalização dos dados #255 vira 1 # 0 vira 0 #assim por diante x_treino = x_treino.astype('float32') x_teste = x_teste.astype('float32') x_treino /= 255 x_teste /= 255 print(type(x_treino[0][350])) print(x_treino[0][350]) # + id="Jg19KxBeUzJD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="86d61317-9c7f-4329-e685-c72ee6dc7073" #dados normalizados print("Dados normalizados", x_treino[0]) # + id="_iHKq-iVWLwY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="8b5d47a1-98f7-43c4-c2ae-5e801a836013" #camada de saida valores_unicos = set(y_treino) quantidade_valores_unicos = len(valores_unicos) print(valores_unicos) #transformação dos valores unicos em variaveis categoricas #representação categorica de um numero em uma rede neural # numero 0 -> [1,0,0,0,0,0,0,0,0,0] #... 
#numero 9 - > [0,0,0,0,0,0,0,0, 9] print("y_treino[0 antes", y_treino[0] ) y_treino = keras.utils.to_categorical(y_treino, quantidade_valores_unicos) y_teste = keras.utils.to_categorical(y_teste, quantidade_valores_unicos) print("y_treino[0] depois", y_treino[0]) #executa apenas 1 vez ou pode quebrar todo código # + id="zd0TMqnLWmoX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="89485325-6d1c-4f10-a4a5-cba92234a12e" #criando modelo de rede neural model = Sequential() #primeira hidden layer com 30 neuronios e função de ativação ReLu #Na camada 1 , informamos input shape, que é (784,) , ela é uma tupla, entao virgula é essencial model.add(Dense(450, activation='relu', input_shape=(resolucao_total, ))) #adicionando dropout pra evitar overfit model.add(Dropout(0.2)) #segunda hidden layer , essa tem pedaços maiores, entao sera cada uma com 20 neutonios + relu model.add(Dense(300, activation='relu')) #mais um regularizador apos a 2 hidden model.add(Dropout(0.2)) #finalizamos com camada de saida, output , informando valores unicos model.add(Dense(quantidade_valores_unicos, activation='softmax')) #exibe modelo criado model.summary() # + id="SCm8A_R0d-A5" colab_type="code" colab={} model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy']) # + id="xhEv3cUkfNfc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 727} outputId="8be60b21-b4ab-43ea-9bb5-95634fc9aaf7" #Treinar o modelo history = model.fit(x_treino,y_treino, batch_size=128, epochs=20 ,verbose=1 ,validation_data= (x_teste, y_teste)) # + id="FtCIpTgEgLA-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 371} outputId="2c919ac3-256e-455f-ff1f-a59413a5ab91" #Fazendo nossas previsoes indice = 9800 #Qual valor categorico de y_test[indice] print("Valor categorico em y_teste[indice]", y_teste[indice]) #parece ser 7 #print(x_teste[indice]) imagem = x_teste[indice].reshape((1, resolucao_total)) #print(imagem) #fazendo previsao prediction = model.predict(imagem) print("Previsão :", prediction) #prediction_class = model.predict_classes(imagem) import numpy as np prediction_class = np.argmax(model.predict(imagem), axis= -1) print("Previsão ajustada", prediction_class) (x_treino_img, y_treino_img), (x_teste_img, y_teste_img) = mnist.load_data() plt.imshow(x_teste_img[indice], cmap=plt.cm.binary) # + id="LJvZf22eh12X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2c3153a0-8083-44db-8588-dfa34ef47751" #Fazendo nossas previsoes while True: print("digite um valor entre 0 e 10000") indice = int(input()) if indice == -1: break #Qual valor categorico de y_test[indice] print("Valor categorico em y_teste[indice]", y_teste[indice]) #print(x_teste[indice]) imagem = x_teste[indice].reshape((1, resolucao_total)) #print(imagem) #fazendo previsao prediction = model.predict(imagem) print("Previsão :", prediction) #prediction_class = model.predict_classes(imagem) import numpy as np prediction_class = np.argmax(model.predict(imagem), axis= -1) print("Previsão ajustada", prediction_class) (x_treino_img, y_treino_img), (x_teste_img, y_teste_img) = mnist.load_data() plt.imshow(x_teste_img[indice], cmap=plt.cm.binary) plt.show() # + id="L4UEqjzwkW0C" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="KBkpRgBCBS2_" # !pip 
install -q gpt-2-simple import gpt_2_simple as gpt2 from datetime import datetime from google.colab import files # + id="P8wSlgXoDPCR" gpt2.download_gpt2(model_name="124M") # + id="6OFnPCLADfll" file_name = "DickinsonCorpus.txt" # + id="aeXshJM-Cuaf" sess = gpt2.start_tf_sess() gpt2.finetune(sess, dataset=file_name, model_name='124M', steps=1000, restore_from='fresh', run_name='run1', print_every=10, sample_every=200, save_every=500 ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 # computer vision library import helper import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg # %matplotlib inline # - # Image data directories image_dir_training = "../Data/Day-Night-Train/" image_dir_test = "../Data/Day-Night-Test/" # Using the load_dataset function in helper.py # Load training data IMAGE_LIST = helper.load_dataset(image_dir_training) # Standardize all training images STANDARDIZED_LIST = helper.standardize(IMAGE_LIST) # + # Display a standardized image and its label # Select an image by index image_num = 0 selected_image = STANDARDIZED_LIST[image_num][0] selected_label = STANDARDIZED_LIST[image_num][1] # Display image and data about it plt.imshow(selected_image) print("Shape: "+str(selected_image.shape)) print("Label [1 = day, 0 = night]: " + str(selected_label)) # - # Find the average Value or brightness of an image def avg_brightness(rgb_image): # Convert image to HSV hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV) # Add up all the pixel values in the V channel sum_brightness = np.sum(hsv[:,:,2]) area = 600*1100.0 # pixels # find the avg avg = sum_brightness/area return avg # + # Testing average brightness levels # Look at a number of different day and night images and think about # what average brightness value separates the two types of images # As an example, a "night" image is loaded in and its avg brightness is displayed image_num = 190 test_im = STANDARDIZED_LIST[image_num][0] avg = avg_brightness(test_im) print('Avg brightness: ' + str(avg)) plt.imshow(test_im); # - # # Classification and Visualizing Error # # In this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). # --- # ### TODO: Build a complete classifier # # Set a threshold that you think will separate the day and night images by average brightness. 
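# As a reference point, below is a minimal sketch of one way the estimate_label stub that follows
# could be completed. The threshold value of 100 is an assumption and would need to be tuned by
# checking avg_brightness on a number of day and night training images.
def estimate_label_sketch(rgb_image, threshold=100):
    # Extract the average brightness feature from the standardized RGB image
    avg = avg_brightness(rgb_image)
    # Predict day (1) when brightness is above the threshold, night (0) otherwise
    return 1 if avg > threshold else 0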
# This function should take in RGB image input def estimate_label(rgb_image): ## TODO: extract average brightness feature from an RGB image # Use the avg brightness feature to predict a label (0, 1) predicted_label = 0 ## TODO: set the value of a threshold that will separate day and night images ## TODO: Return the predicted_label (0 or 1) based on whether the avg is # above or below the threshold return predicted_label # + ## Test out your code by calling the above function and seeing # how some of your training data is classified # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Project 3 # + students = {'a':1,'b':2,'c':3,'d':4,'e':5,'f':6,'g':7,'h':8,'i':9,'j':10,} def score_check(): x = input(" What is your name?").lower() if x in students: print(x, "Your score is",students[x]) # y = students.values() # print(y) else: print("I don't know you") while True: score_check() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # How to find optimal parameters using RandomizedSearchCV # + def BESTPARA(): print() print(format('How to find parameters using RandomizedSearchCV','*^60')) import warnings warnings.filterwarnings("ignore") # load libraries from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.model_selection import RandomizedSearchCV from sklearn.ensemble import GradientBoostingClassifier from scipy.stats import uniform as sp_randFloat from scipy.stats import randint as sp_randInt # load the iris datasets dataset = datasets.load_wine() X = dataset.data; y = dataset.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) model = GradientBoostingClassifier() parameters = {'learning_rate': sp_randFloat(), 'subsample' : sp_randFloat(), 'n_estimators' : sp_randInt(100, 1000), 'max_depth' : sp_randInt(4, 10) } randm = RandomizedSearchCV(estimator=model, param_distributions = parameters, cv = 2, n_iter = 10, n_jobs=-1) randm.fit(X_train, y_train) # Results from Random Search print("\n========================================================") print(" Results from Random Search " ) print("========================================================") print("\n The best estimator across ALL searched params:\n", randm.best_estimator_) print("\n The best score across ALL searched params:\n", randm.best_score_) print("\n The best parameters across ALL searched params:\n", randm.best_params_) print("\n ========================================================") BESTPARA() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- a = np.matrix('1 0 0; 0 1 0; 0 0 1') a a.H b=np.array([[1,5],[3,2],[5,6]]) b x = np.matrix(b) x a.dot(b) np.dot(a,b) np.cross(a,b) np.linalg.det(a) np.linalg.eig(a)[0] vec1 = np.array([1, 2, 3]) print(vec1) vec2 = np.array([[3, 2], [2, 1], [1, 0]]) print(vec2) vec2 = vec2.transpose() print(vec2) vec1 * vec2 vec2 * vec1 vec1.diagonal() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 
Python [conda env:pyprobml] # language: python # name: conda-env-pyprobml-py # --- # # Discrete Probability Distribution Plot # + import jax import jax.numpy as jnp import matplotlib.pyplot as plt import seaborn as sns try: from probml_utils import savefig, latexify except ModuleNotFoundError: # %pip install -qq git+https://github.com/probml/probml-utils.git from probml_utils import savefig, latexify # + tags=["hide-input"] latexify(width_scale_factor=2) # + # Bar graphs showing a uniform discrete distribution and another with full mass on one value. N = 4 def make_graph(probs, N, save_name, fig=None, ax=None): x = jnp.arange(1, N + 1) if fig is None: fig, ax = plt.subplots() ax.bar(x, probs, align="center") ax.set_xlim([min(x) - 0.5, max(x) + 0.5]) ax.set_xticks(x) ax.set_yticks(jnp.linspace(0, 1, N + 1)) ax.set_xlabel("$x$") ax.set_ylabel("$Pr(X=x)$") sns.despine() if len(save_name) > 0: savefig(save_name) return fig, ax uniform_probs = jnp.repeat(1.0 / N, N) _, _ = make_graph( uniform_probs, N, "uniform_histogram_latexified" ) # Do not add .pdf or .png as it is automatically added by savefig method delta_probs = jnp.array([1, 0, 0, 0]) _, _ = make_graph(delta_probs, N, "delta_histogram_latexified"); # - # ## Demo # # You can see different examples of discrete distributions by changing the seed in the following demo. # + from ipywidgets import interact @interact(random_state=(1, 10), N=(2, 10)) def generate_random(random_state, N): key = jax.random.PRNGKey(random_state) probs = jax.random.uniform(key, shape=(N,)) probs = probs / jnp.sum(probs) fig, ax = make_graph(probs, N, "") ax.set_yticks(jnp.linspace(0, 1, 11)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: u4-s3-dnn # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="IXV7ia0IuMxK" colab_type="text" # #

    #

    # # # Major Neural Network Architectures Challenge # ## *Data Science Unit 4 Sprint 3 Challenge* # # In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: # recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Autoencoders. In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures not your ability to fit a model with high accuracy. # # __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime locally, on AWS SageMaker, on Colab or on a comparable environment. If something is running longer, double check your approach! # # ## Challenge Objectives # *You should be able to:* # * Part 1: Train a LSTM classification model # * Part 2: Utilize a pre-trained CNN for object detection # * Part 3: Describe a use case for an autoencoder # * Part 4: Describe yourself as a Data Science and elucidate your vision of AI # + [markdown] colab_type="text" id="-5UwGRnJOmD4" # # ## Part 1 - LSTMSs # # Use a LSTM to fit a multi-class classification model on Reuters news articles to distinguish topics of articles. The data is already encoded properly for use in a LSTM model. # # Your Tasks: # - Use Keras to fit a predictive model, classifying news articles into topics. # - Report your overall score and accuracy # # For reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the LSTM code we used in class. # # __*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done! # + colab_type="code" id="DS-9ksWjoJit" outputId="a51a706a-9634-41dc-f517-85e8c453b8ea" colab={"base_uri": "https://localhost:8080/", "height": 52} from tensorflow.keras.datasets import reuters (X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None, skip_top=0, maxlen=None, test_split=0.2, seed=723812, start_char=1, oov_char=2, index_from=3) # + colab_type="code" id="fLKqFh8DovaN" outputId="a876e314-7565-44c2-969f-2f0ca61cf699" colab={"base_uri": "https://localhost:8080/", "height": 106} # Demo of encoding word_index = reuters.get_word_index(path="reuters_word_index.json") print(f"Iran is encoded as {word_index['iran']} in the data") print(f"London is encoded as {word_index['london']} in the data") print("Words are encoded as numbers in our dataset.") # + id="9knT-ENEx4l8" colab_type="code" colab={} from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding from tensorflow.keras.layers import LSTM # + colab_type="code" id="_QVSlFEAqWJM" colab={} # Do not change this line. You need the +1 for some reason. 
max_features = len(word_index.values()) + 1 max_len = 80 batches = 32 # + id="HnSW1aKVypvh" colab_type="code" outputId="7015bda1-a016-4520-f789-6a9ab589775c" colab={"base_uri": "https://localhost:8080/", "height": 35} X_train0 = sequence.pad_sequences(X_train, maxlen=max_len) X_test0 = sequence.pad_sequences(X_test, maxlen=max_len) X_train0.shape, X_test0.shape # + id="zbKNaDsBzK-V" colab_type="code" outputId="98fcaae4-2a42-43cc-c85a-c2702a645be0" colab={"base_uri": "https://localhost:8080/", "height": 159} X_train0[0] # + id="4mYQidEW53yB" colab_type="code" outputId="bdb145a2-4683-4ba0-ae5c-2e264fe69db2" colab={"base_uri": "https://localhost:8080/", "height": 35} len(X_train0) # + id="VoTB_UD-352g" colab_type="code" outputId="725f5c4b-3eb8-41f9-ef6c-b6f4110d5b40" colab={"base_uri": "https://localhost:8080/", "height": 35} len(y_train) # + id="tdjkPp406eKa" colab_type="code" outputId="6da179ef-d071-4ade-b9dd-598c114d48e6" colab={"base_uri": "https://localhost:8080/", "height": 35} len(set(y_train)) # + id="LvgKzWTC8kQF" colab_type="code" outputId="8d12be6d-60ae-4a32-bd56-6869531a4406" colab={"base_uri": "https://localhost:8080/", "height": 35} len(set(y_test)) # + id="r3DqY9Fs48pW" colab_type="code" outputId="d253d737-195b-4b8b-85f1-a5392ef23a1e" colab={"base_uri": "https://localhost:8080/", "height": 55} model = Sequential() model.add(Embedding(max_features, 64)) model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(46, activation='softmax')) # + id="hOeVAoUl9Egb" colab_type="code" outputId="ca048c98-1d1c-4096-b509-7b393bbcba02" colab={"base_uri": "https://localhost:8080/", "height": 266} model.compile( loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) model.summary() # + id="JgAaUiPP9f5G" colab_type="code" outputId="ea046b45-999f-4270-aaac-b9ca333b90ff" colab={"base_uri": "https://localhost:8080/", "height": 301} presto = model.fit( X_train0, y_train, batch_size=batches, epochs=8, validation_data=(X_test0, y_test) ) # + [markdown] nteract={"transient": {"deleting": false}} id="-IpcPJDMuMxr" colab_type="text" # ## Sequence Data Question # #### *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?* # # The 'pad_sequences' method reshapes the input arrays to the same length, in this case specified to 80, by either trimming the latter values that are beyond index position 80 for that observation, or adding 0's to the beginning until each observation is an array of length 80. This allows the input arrays to be of a uniform length for the input layer of the net. # # # # ## RNNs versus LSTMs # #### *What are the primary motivations behind using Long-ShortTerm Memory Cell unit over traditional Recurrent Neural Networks?* # # An RNN is the most basic form of Sequential / Recursive Neural Network types, but it does not remember inputs and outputs from previous sequence positions of the sequential data, as LSTM's do, and RNN's in their most vanilla form are more sensitive to the vanishing gradient descent problem, making it harder to train the earlier layers of an RNN than in an LSTM specifically. # # # ## RNN / LSTM Use Cases # #### *Name and Describe 3 Use Cases of LSTMs or RNNs and why they are suited to that use case* # # RNN's and LSTM's are well suited to any type of sequential data because the chronology of data input and learning from the sequentially 'earlier' data points matters. Their recursive structure preserves the importance of these sequence orders. 
Such sequential data includes year-over-year government failure events, as I worked with in Unit 2; or maybe a health tracker app in which data from today is not effected by data from tomorrow, but data from yesterday and before; or even tracking a contagion in epidemiology, as infection is chronological. # + [markdown] colab_type="text" id="yz0LCZd_O4IG" # # ## Part 2- CNNs # # ### Find the Frog # # Time to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs: # # # + id="X3abv5qlBEwI" colab_type="code" outputId="a95de119-0a89-43c3-8088-90820120f553" colab={"base_uri": "https://localhost:8080/", "height": 35} from google.colab import drive drive.mount('/content/drive') # + colab_type="code" id="whIqEWR236Af" colab={} from skimage.io import imread_collection from skimage.transform import resize #This might be a helpful function for you path = '/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/' images = imread_collection(path + '*.jpg') # images = imread_collection('./frog_images/*.jpg') # + colab_type="code" id="EKnnnM8k38sN" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="a0091f4d-4bf0-4b9c-8567-8be464f2413c" print(type(images)) print(type(images[0]), end="\n\n") print("Each of the Images is a Different Size") print(images[0].shape) print(images[1].shape) # + [markdown] colab_type="text" id="si5YfNqS50QU" # Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model. Print out the predictions in any way you see fit. # # *Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog` # # *Stretch goal* - Check for other things such as fish. # + id="1RuX9p9N921n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="45f16c65-c5ac-4535-ddd7-3380fe727644" import numpy as np from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions from tensorflow.keras.layers import Dense, GlobalAveragePooling2D from tensorflow.keras.models import Model # This is the functional API resnet = ResNet50(weights='imagenet', include_top=False) # + id="XP7W2-sJNd3_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="d1ee06f3-6dde-4c7f-a04b-6da804e84f65" import matplotlib.pyplot as plt photo = images[0] plt.imshow(photo) # + id="YYmV9IQzQslO" colab_type="code" colab={} def process_img_path(img_path): return image.load_img(img_path, target_size=(224, 224)) # + id="9mFEU37hQus9" colab_type="code" colab={} img0_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/cristiane-teston-bcnfJvEYm1Y-unsplash.jpg') img1_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/drew-brown-VBvoy5gofWg-unsplash.jpg') img2_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/ed-van-duijn-S1zA6AR50X8-unsplash.jpg') img3_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/elizabeth-explores-JZybccsrB-0-unsplash.jpg') img4_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/jacky-watt-92W5jPbOj48-unsplash.jpg') img5_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML 
Engineering/frog_images/jared-evans-VgRnolD7OIw-unsplash.jpg') img6_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/joel-henry-Rcvf6-n1gc8-unsplash.jpg') img7_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/marcus-neto-fH_DOdTt-pA-unsplash.jpg') img8_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/matthew-kosloski-sYkr-M78H6w-unsplash.jpg') img9_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/mche-lee-j-P8z4EOgyQ-unsplash.jpg') img10_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/priscilla-du-preez-oWJcgqjFb6I-unsplash.jpg') img11_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/saturday_sun-_q37Ca0Ll4o-unsplash.jpg') img12_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/serenity-mitchell-tUDSHkd6rYQ-unsplash.jpg') img13_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/yanna-zissiadou-SV-aMgliWNs-unsplash.jpg') img14_prepped = process_img_path('/content/drive/My Drive/Colab Notebooks/U4 ML Engineering/frog_images/zdenek-machacek-HYTwWSE5ztw-unsplash (1).jpg') # + id="9taVENUXWwZX" colab_type="code" colab={} prepped_imgs = [img0_prepped, img1_prepped, img2_prepped, img3_prepped, img4_prepped, img5_prepped, img6_prepped, img7_prepped, img8_prepped, img9_prepped, img10_prepped, img11_prepped, img12_prepped, img13_prepped, img14_prepped] # + id="Q2Nj5Z4-WUey" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="57e9b8c6-c5ac-4d54-9fca-0c212efceb43" plt.imshow(img0_prepped) # + id="zQ_60FtxEWO3" colab_type="code" colab={} def has_frog(img): pic = plt.imshow(img) x=image.img_to_array(img) x=np.expand_dims(x, axis=0) x=preprocess_input(x) model = ResNet50(weights='imagenet') # , include_top=False) features = model.predict(x) results = decode_predictions(features, top=3)[0] return pic, print(results) # + id="entAhzsNOpYg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 305} outputId="44ef9a14-7657-454c-a166-0f63468553d3" has_frog(img0_prepped) # + id="KKXbTEUTOvNC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 803} outputId="5bec2c8a-297a-424f-f844-dd54d058430d" for img in prepped_imgs: print(has_frog(img)) # + [markdown] id="QVbihwEbc7K9" colab_type="text" # # Ugh sorry about the rush job # + id="FFOcrh0GcYlT" colab_type="code" colab={} # + [markdown] colab_type="text" id="XEuhvSu7O5Rf" # # ## Part 3 - Autoencoders # # Describe a use case for an autoencoder given that an autoencoder tries to predict its own input. # # __*Your Answer:*__ # # It could simply be used for file compression of a novel, not-yet-well-defined file type. Jpegs, gifs, etc already have well defined compression algorhithms, but we may not always be working with such file types. But we still need to be able to send them and reverse-query them, much like a reverse-image search would use an autoencoder to compress various images to search them more rapidly. # + [markdown] colab_type="text" id="626zYgjkO7Vq" # # ## Part 4 - More... # + [markdown] colab_type="text" id="__lDWfcUO8oo" # Answer the following questions, with a target audience of a fellow Data Scientist: # # - What do you consider your strongest area, as a Data Scientist? 
# # I feel like I have a strength for verifying whether code worked as intended, whether it's a model, analytics, or data processing. Sometimes the code fails silently, and we need to run checks that the output the interpreter just internalized meets the parameters we intended. Much like we'd check a multiplication problem by dividing. I feel like I have a knack for that. # # - What area of Data Science would you most like to learn more about, and why? # # I want to learn more about machine learning, how it can be taught to many people at a young age, and how instances of ML like algorithmic trading can onboard more people into high-return ownership of liquid capital. I feel that a synthesis of these applications of ML could help offset the destructiveness of automation-driven obsolescence. # # And furthermore I'd like to investigate not only how to prevent automated poverty, but also how to prevent automated extinction. Not only do I want us to be able to harness the computerized oxen, I want us to not be horned and hooved by it. I'm interested in defining value alignment, algorithmic accountability, and using AI to hedge global catastrophic risks rather than amplifying them. # # - Where do you think Data Science will be in 5 years? # # I'm a bit pessimistic about the recovery of the general economy that most people think will simply resume after the quarantines are lifted. I think the scare this brought to businesses of all sizes will accelerate investment and hiring specifically for automation engineers, because all of a sudden businesses want any and every way to continue operations during a disease outbreak. Computer-driven automation has been constant since the 1960s, but I think this will only accelerate automation efforts. The American public thought they were being lectured to when they were warned about automation in 2016, and many voted with an emotional reaction to that lecturing rather than rationally. I think automation will feel less hypothetical by 2025; it will be a much more obvious sour taste in everyone's mouth by then. Anyway, where I'm going with this is that of all the subtypes of data science job titles you see out there, expect a sharp uptick in positions for "automation engineers" or other closely related titles. # # - What are the threats posed by AI to our society? # # Any type of extinction you could imagine. How about we start with that? Lethal Autonomous Weapons Systems don't even have to end up in the hands of "bad guys". The so-called "good guys" that created them could take a totalitarian backslide, and there's a recipe for a future of slavery and genocide with really shiny gadgets in the mix. So many ways it could go wrong. For me, the root of the danger is the same paradigm that preceded World War I, the same paradigm which also led to the (ONGOING) nuclear cold war. That paradigm is the MOMENTUM that these technologies have all their own. The apes that created them have every incentive to acquire them before the "bad guys" do, and once they exist in a functional form, they tend to proliferate and get abused. # # - How do you think we can counteract those threats? # # Game theory and studies of information asymmetry might have some interesting applications at the very algorithmic level of ML for building structures of accountability, adversarial accountability, value alignment, and purging of perverse incentives / perverse optimization. # # - Do you think achieving General Artificial Intelligence is ever possible? 
# # Yes. It's just a matter of getting silicon to do what carbon-oxygen-nitrogen complexes in our brains have been doing for hundreds of thousands of years. When I put it that way, does it not seem inevitable? Evolution did it without even a desire to attain it. # # A few sentences per answer is fine - only elaborate if time allows. # + [markdown] colab_type="text" id="_Hoqe3mM_Mtc" # ## Congratulations! # # Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist. # # + id="u_ssBJCYuMyT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="49b95fc9-77ee-4749-ebe9-0b4fad95c8a7" from IPython.display import HTML HTML("""

    <!-- GIPHY embed ("via GIPHY") - markup stripped in export -->

    """) # + id="k-RLBxzAjn34" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: xpython # language: python # name: xpython # --- # + [markdown] deletable=false editable=false # Copyright 2020 , and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. # # - # # Logistic Regression # # So far, we have looked at two broad kinds of supervised learning, classification and regression. # Classification predicts a class label for an observation (i.e., a row of the dataframe) and regression predicts a numeric value for an observation. # # Logistic regression is a kind of regression that is primarily used for classification, particularly binary classification. # It does this by predicting the **probability** (technically the log-odds) of the positive class assigned label `1`. # If the probability is above a threshold, e.g .50, then this predicted numeric value is interpreted as a classification of `1`. # Otherwise, the predicted numeric value is interpreted as a classification of `0`. # So **logistic regression predicts a numeric probability that we convert into a classification.** # # Logistic regression is widely used in data science classification tasks, for example to: # # * categorize a person as having diabetes or not having diabetes # * categorize an incoming email as spam or not spam # # Because logistic regression is also regression, it captures the relationship between an outcome/dependent variable and the predictor/independent variables in a similar way to linear regression. # The major difference is that the coefficients in logistic regression can be interpreted probabilistically, so that we can say how much more likely a predictor variable makes a positive classification. # # The most common kind of logistic regression is binary logistic regression, but it is possible to have: # # * Binary/binomial logistic regression # * Multiclass/Multinomial logistic regression # * Ordinal logistic regression (there is an order among the categories) # # # ## What you will learn # # In the sections that follow you will learn about logistic regression, an extension of linear regression, and how it can be used for classification. # We will study the following: # # - The math behind logistic regression # - Interpreting logistic regression coefficients # - Evaluating classification performance # # ## When to use logistic regression # # Logistic regression works best when you need a classifier and want to be able to interpret the predictor variables easily, as you can with linear regression. # Because logistic regression is fundamentally regression, it has the some assumptions of linearity and additivity, which may not be appropriate for some problems. # Binary logistic regression is widely used and scales well, but multinomial variants typically begin to have performance problems when the number of classes is large. # ## Mathematical Foundations of Logistic Regression for Binary Classification # # We briefly review in this section the mathematical formulation of logistic regression for binary classification problems. # That is, the predicted categories are just two (say, 1 or 0) and each object or instance belongs to one and only one category. 
# # Logistic regression expresses the relationship between the output variable, also called dependent variable, and the predictors, also called independent variables or features, in a similar way to linear regression with an additional twist. # The additional twist is necessary in order to transform the typical continuous value of linear regression onto a categorical value (0 or 1). # # **From Linear Regression to Logistic Regression** # # Let us review first the basics of linear regression and then discuss how to transform the mathematical formulation of linear regression such that the outcome is categorical. # # In a typical linear regression equation, the output variable $Y$ is related to $n$ predictor variables $X_j$ ($j=1,n$) using the following linear relation, where the output $Y$ is a linear combination of the predictors $X_j$ with corresponding weights (or coefficients) $\beta_{j}$: # # $$Y = {\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j}$$ # # In linear regression, the output $Y$ has continuous values between $-\inf$ and $+\inf$. In order to map such output values to just 0 and 1, we need to apply the sigmoid or logistic function. # # $$\sigma (t) = \frac{1}{1 + e^{-t}}$$ # # A graphical representation of the sigmoid or logistic function is shown below (from Wikipedia). # The important part is that the output values are in the interval $(0,1)$ which is close to our goal of just predicted values 1 or 0. # # # #
# Figure 1. The logistic function. Source: Wikipedia
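# + [markdown]
# A minimal numeric sketch of the logistic (sigmoid) function defined above, confirming that inputs ranging from very negative to very positive are squashed into the interval $(0,1)$:

# +
import numpy as np

def sigmoid(t):
    # logistic function: 1 / (1 + e^(-t))
    return 1.0 / (1.0 + np.exp(-t))

print(sigmoid(np.array([-10.0, -2.0, 0.0, 2.0, 10.0])))
# approximately [0.0000454, 0.1192, 0.5, 0.8808, 0.9999546]
# -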
    # # When applied to the $Y = {\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j}$ from linear regression we get the following formulation for logistic regression: # $$\frac{1}{1 + e^{{\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j}}}$$ # # The net effect is that the the typical linear regression output values ranging from $-\inf$ and $+\inf$ are now bound to $(0,1)$, which is typical for probabilities. That is, the above formulation can be interpreted as estimating the probability of instance $X$ (described by all predictors $X_j$) belonging to class 1. # # $$ P( Y=1 | X ) = \frac{1}{1 + e^{{\beta}_{0} + \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j}}}$$ # # The probability of class 0 is then: # # $$ P( Y=0 | X ) = 1 - P( Y=1 | X ) $$ # # Values close to 0 are deemed to belong to class 0 and values close to 1 are deemed to belong to class 1, thus resulting in a categorical output which is what we intend in logistic regression. # # # # # Interpreting the Coefficients in Logistic Regression # # One of the best ways to interpret the coefficients in logistic regression is to transform it back into a linear regression whose coefficients are easier to interpret. # From the earlier formulation, we know that: # # $$ Y = P( Y=1 | X ) = \frac{1}{1 + e^{{\beta}_{0} + \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j}}}$$ # # Applying a log function on both sides, we get: # # $$ log \frac{P ( Y=1 | X )}{1- P( Y=1 | X )} = \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j} $$ # # On the left-hand of the above expression we have the log odds defined as the ratio of the probability of class 1 versus the probability of class 0. Indeed, this expression $\frac{P ( Y=1 | X )}{1- P( Y=1 | X )}$ is the odds because $1- P( Y=1 | X )$ is the probability of class 0, i.e., $P( Y=0 | X )$. # # Therefore, we conclude that the log odds are a linear regression of the predictor variables weighted by the coefficients $\beta_{j}$. Each such coefficient therefore indicates a change in the log odds when the corresponding predictor changes with a unit (in the case of numerical predictors). # # You may feel more comfortable with probabilities than odds, but you have probably seen odds expressed frequently in the context of sports. # Here are some examples: # # - 1 to 1 means 50% probability of winning # - 2 to 1 means 67% probability of winning # - 3 to 1 means 75% probability of winning # - 4 to 1 means 80% probability of winning # # Odds are just the probability of success divided by the probability of failure. # For example 75% probability of winning means 25% probability of losing, and $.75/.25=3$, and we say the odds are 3 to 1. # # Because log odds are not intuitive (for most people), it is common to interpret the coefficients of logistic regression as odds. # When a log odds coefficient has been converted to odds (using $e^\beta$), a coefficient of 1.5 means the positive class is 1.5 times more likely given a unit increase in the variable. # # Peformance Evaluation # # Performance evaluation for logistic regression is the same as for other classification methods. # The typical performance metrics for classifiers are accuracy, precision, and recall (also called sensitivity). # We previously talked about these, but we did not focus much on precision, so let's clarify that. # # In some of our previous classification examples, there are only two classes that are equally likely (each is 50% of the data). # When classes are equally likely, we say they are **balanced**. 
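# + [markdown]
# A quick numeric aside on the coefficient interpretation above (converting a log-odds coefficient to an odds multiplier with $e^{\beta}$), before continuing with performance evaluation; the coefficient value here is hypothetical:

# +
import numpy as np

beta = 0.405  # hypothetical log-odds coefficient from a fitted logistic regression
print(np.exp(beta))  # ~1.5: a unit increase in the predictor makes the positive class ~1.5 times more likely
# -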
# If our classifier is correct 60% of the time with two balanced classes, we know it is 10% better than chance. # # However, sometimes things are very unbalanced. # Suppose we're trying to detect a rare disease that occurs once in 10,000 people. # In this case, a classifier that always predicts "no disease" will be correct 99.99% of the time. # This is because the **true negatives** in the data are so much greater than the **true positives** # Because the metrics of accuracy and specificity use true negatives, they can be somewhat misleading when classes are imbalanced. # # In contrast, precision and recall don't use true negatives at all (see the figure below). # This makes them behave more consistently in both balanced and imbalance data. # For these reasons, precision, recall, and their combination F1 (also called f-measure) are very popular in machine learning and data science. # #
    # #
    # #
# Figure 2. A confusion matrix. Note recall is an alternate label for sensitivity.
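# + [markdown]
# A small sketch of the imbalanced-class problem described above, using made-up labels and scikit-learn's metrics module (imported as `metrics` later in the diabetes example): a classifier that always predicts the majority class gets high accuracy but zero precision and recall for the rare positive class.

# +
import numpy as np
import sklearn.metrics as metrics

# 1,000 made-up observations: 990 negatives and 10 rare positives
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.zeros_like(y_true)  # lazily predict "negative" every time

print(metrics.accuracy_score(y_true, y_pred))                    # 0.99 -- looks great
print(metrics.recall_score(y_true, y_pred, zero_division=0))     # 0.0  -- misses every positive
print(metrics.precision_score(y_true, y_pred, zero_division=0))  # 0.0  -- no true positives
# -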
    # # # In some cases, maximizing precision or recall may be preferred. # For instance, a high recall is highly recommended when making medical diagnosis since it is preferrable to err on mis-diagnosing someone as having cancer as opposed to missing someone who indeed has cancer, i.e., the method should try not to miss anyone who may indeed have cancer. # This idea is sometimes referred to as **cost-sensitive classification**, because there may be an asymmetric cost toward making one kind of mistake vs. another (i.e. FN vs. FP). # # In general, there is a trade-off between precision and recall. # If precision is high then recall is low and vice versa. # Total recall (100% recall) is achievable by always predicting the positive class, i.e., label all instances as positive, in which case precision will be very low. # # In the case of logistic regression, you can imagine that we changed the threshold from .50 to a higher value like .90. # This would make many observations previously classified as 1 now classified as 0. # What was left of 1 would be very likely to be 1, since we are 90% confident (high precision). # However, we would have lost all of the 1s between 50-90% (low recall). # # # + [markdown] slideshow={"slide_type": "slide"} # # Example: Diabetes or no Diabetes # # The type of dataset and problem is a classic supervised binary classification. # Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes. # # To solve the problem we will have to analyze the data, do any required transformation and normalisation, apply a machine learning algorithm, train a model, check the performance of the trained model and iterate with other algorithms until we find the most performant for our type of dataset. # # # ## The Pima Indians Dataset # # The Pima are a group of Native Americans living in Arizona. # A genetic predisposition allowed this group to survive normally to a diet poor of carbohydrates for years. # In recent years, because of a sudden shift from traditional agricultural crops to processed foods, together with a decline in physical activity, made them develop the highest prevalence of type 2 diabetes, and for this reason they have been subject of many studies. # # This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. # The dataset includes data from 768 women with 8 characteristics, in particular: # # | Variable | Type | Description | # |----------|-------|:--------------------------------------------------------------------------| # | pregnant | Ratio | Number of times pregnant | # | glucose | Ratio | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | # | bp | Ratio | Diastolic blood pressure (mm Hg) | # | skin | Ratio | Triceps skin fold thickness (mm) | # | insulin | Ratio | 2-Hour serum insulin (mu U/ml) | # | bmi | Ratio | Body mass index (weight in kg/(height in m)^2) | # | pedigree | Ratio | Diabetes pedigree function | # | age | Ratio | Age (years) | # | label | Ratio | Diagnosed with diabetes (0 or 1) | # # **Source:** This dataset was taken from the UCI Machine Learning Repository library. # # # + [markdown] slideshow={"slide_type": "slide"} # ## The problem # # The type of dataset and problem is a classic supervised binary classification. 
# Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes. # # To solve the problem we will have to analyze the data, do any required transformation and normalization, apply a machine learning algorithm, train a model, check the performance of the trained model and iterate with other algorithms until we find the most performant for our type of dataset. # # # # ## Get the data # # - First import `pandas` as `pd` so we can read the data file into a dataframe # - # # Because our data file doesn't have a header (i.e., column names), we need to define these: # # - Create variable `col_names` # - Set it to to a list containing: `"pregnant", "glucose", "bp", "skin", "insulin", "bmi", "pedigree", "age", "label"` # - Create variable `dataframe` # - Set it to `with pd do read_csv` using a list containing # - `"datasets/pima-indians-diabetes.csv"` # - freestyle `header=None` # - freestyle `names=col_names` # - `dataframe` (to display) # # # ## Clean the data # # As you noticed when displaying the dataframe, something is wrong. # Often the first row of a data file will be a **header** row that gives the names of the columns. # In comma separated value (csv) format, the header and each following row of data are divided into columns using commas. # However, in this case, something different is going on. # # Let's take a closer look at the first 20 rows: # # - `with dataframe do head using 20` # As you can see, the first 9 rows (rows 0 to 8) are what we might expect in column headers. # Since we manually specified the column names when we loaded the dataframe, these rows are "junk", and we should get rid of them. # One way to do that is to get a sublist of rows from dataset that excludes them: # # - Set `dataframe` to `in list dataframe get sublist from #10 to last` # - `dataframe` # While the dataframe may look OK now, there is a subtle problem. # When `pandas` reads data from a file, it uses what it finds in the column to decide what kind of variable that column is. # Since the first column originally had some header information in it, `pandas` doesn't think it is numeric. # So we need to tell `pandas` to correct it: # # - `import numpy as np` # Convert everything in the dataframe to numeric: # # - Set `dataframe` to `with dataframe do astype using from np get float32` # ## Explore the data # # ### Descriptive statistics # # - `with dataframe do describe using` # There are some zeros which are really problematic. # Having a glucose or blood pressure of 0 is not possible for a living person. # Therefore we assume that variables with zero values in all variables except `pregnant` and `label` are actually **missing data**. # That means, for example, that a piece of equipment broke during blood pressure measurement, so there was no value. 
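# + [markdown]
# The block-based steps below carry this out; for reference, here is a compact plain-pandas sketch of the same zero-to-median replacement (column names taken from the table above, `dataframe` as loaded earlier). It works on a copy and mirrors, rather than replaces, the steps that follow.

# +
# Illustrative only: work on a copy so the block-based steps below start from the same data
df_sketch = dataframe.copy()

# Columns where a zero really means "missing" (everything except pregnant and label)
zero_means_missing = ["glucose", "bp", "skin", "insulin", "bmi", "pedigree", "age"]

for col in zero_means_missing:
    # replace zeros with the column median
    df_sketch[col] = df_sketch[col].replace(0, df_sketch[col].median())
# -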
# # - Create variable `dataframe2` # - Set it to `with dataframe do drop using` a list containing # - freestyle `columns=["pregnant","label"]` # # # Now replace all the zeros in the remaining columns with the median in those columns: # # - `with dataframe2 do replace using` a list containing # - `0` # - `with dataframe2 do median using` # - freestyle `inplace=True` # Add the two missing columns back in: # # - Set dataframe to `with dataframe2 do assign using` a list containing # - freestyle `pregnant = dataframe["pregnant"]` # - freestyle `label = dataframe["label"]` # - `dataframe` (to display) # ### Correlations # # One of the most basic ways of exploring the data is to look at correlations. # As we previously discussed, correlations show you how a variable is related to another variable. # When the correlation is further away from zero, the variables are more strongly related: # # - Create `corr` and set to `with dataframe do corr` using nothing # - Output `corr` # This is a correlation matrix. # The diagonal is 1.0 because each variable is perfectly correlated with itself. # You might also notice that the upper and lower triangular matrices (above/below the diagonal) are mirror images of each other. # # Sometimes its easier to interpret a correlation matrix if we plot it in color with a heatmap. # # First, the import `plotly` for plotting: # # - `import plotly.express as px` # To display the correlation matrix as a heatmap: # # - `with px do imshow using` a list containing # - `corr` # - A freestyle block **with a notch on the right** containing `x=`, connected to `from corr get columns` # - A freestyle block **with a notch on the right** containing `y=`, connected to `from corr get columns` # This is the color that represents zero: # ![image.png](attachment:image.png) # So anything darker is a negative correlation, and anything lighter is a positive one. # As you can see, most of the negative correlations are weak and so not very interesting. # The most positive correlations are pink-orange at around .55, which is a medium correlation. # ### Histograms # # Another way to try to understand the data is to create histograms of all the variables. # As we briefly discussed, a histogram shows you the count (on the y-axis) of the number of data points that fall into a certain range (also called a bin) of the variable. # # It can be very tedious to make a separate plot for each variable when you have many variables. # The best way is to do it in a loop: # # - `for each item i in list` `from dataframe get columns` (use the green loop) # - Set `fig` to `with px do histogram using` a list containing # - `dataframe` # - `x=` followed by `i` **Hint**: ![image.png](attachment:image.png) # - Empty freestyle followed by `with fig do show using` # **Hint**: ![image.png](attachment:image.png) # Often we omit `with fig do show using` because Jupyter always displays the last "thing" in a cell. # In this case, however, we want to display multiple things using one cell, so we need to explicitly display each one. # From these histograms we observe: # # - Only `glucose`, `bp`, and `bmi` are normal # - Everything else has larger mass on the lower end of the scale (i.e. on the left) # ## Prepare train/test sets # # We need to split the dataframe into training data and testing data, and also separate the predictors from the class labels. 
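# + [markdown]
# The block-based steps below walk through this one piece at a time; as a compact reference, here is a plain scikit-learn sketch of the same split (illustrative only, with `_sketch` names so it does not clobber the variables built in the steps that follow):

# +
import sklearn.model_selection as model_selection

X_sketch = dataframe.drop(columns=["label"])   # predictors
Y_sketch = dataframe[["label"]]                # class labels

# Returns [Xtrain, Xtest, Ytrain, Ytest]; 20% of rows are held out for testing
splits_sketch = model_selection.train_test_split(X_sketch, Y_sketch, test_size=0.2)
# -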
# # Let's start by dropping the label: # # - Create variable `X` # - Set it to `with dataframe do drop using` a list containing # - freestyle `columns=["label"]` # - `X` (to display) # Save a dataframe with just `label` in `Y`: # # - Create variable `Y` # - Set it to `dataframe [ ] ` containing the following in a list # - `"label"` # - `Y` (to display) # To split our `X` and `Y` into train and test sets, we need an import: # # - `import sklearn.model_selection as model_selection` # Now do the splits: # # - Create variable `splits` # - Set it to `with model_selection do train_test_split using` a list containing # - `X` (the features in an array) # - `Y` (the labels in an array) # - freestyle `test_size=0.2` (the proportion of the dataset to include in the test split) # ## Logistic regression model # # We need libraries for: # # - Logistic regression # - Performance metrics # - Ravel # # As well as libraries we need to standardize: # # - Scale # - Pipeline # # So do the following imports: # # - `import sklearn.linear_model as linear_model` # - `import sklearn.metrics as metrics` # - ~~`import numpy as np`~~ (already imported above) # - `import sklearn.preprocessing as pp` # - `import sklearn.pipeline as pipe` # We're going to make a pipeline so we can scale and train in one step: # # - Create variable `std_clf` # - Set it to `with pipe do make_pipeline using` a list containing # - `with pp create StandardScaler using` # - `with linear_model create LogisticRegression using` # We can treat the whole pipeline as a classifier and call `fit` on it: # # - `with std_clf do fit using` a list containing # - `in list splits get # 1` (this is Xtrain) # - `with np do ravel using` a list containing # - `in list splits get # 3` (this is Ytrain) # Now we can get predictions from the model for our test data: # # - Create variable `predictions` # - Set it to `with std_clf do predict using` a list containing # - `in list splits get # 2` (this is Xtest) # - `predictions` (to display) # # ## Assessing the model # # To get the accuracy: # # - `print create text with` # - "Accuracy:" # - `with metrics do accuracy_score using` a list containing # - `in list splits get # 4` (this is `Ytest`) # - `predictions` # To get precision, recall, and F1: # # - `print with metrics do classification_report using` a list containing # - `in list splits get # 4` (this is `Ytest`) # - `predictions` # Notice how the recall is much lower for `1` (diabetes), the rare class. # Finally, let's create an ROC plot. 
# To create the plot, we need predicted probabilities (for class `1`) as well as the ROC metrics using these probabilities and the true class labels: # # - Create variable `probs` # - Set it to `with std_clf do predict_proba using` a list containing # - `in list splits get # 2` (this is Xtest) # - Create variable `rocMetrics` # - Set it to `with metrics do roc_curve using` a list containing # - `in list splits get # 4` (this is Ytest) # - freestyle `probs[:,1]` (this is the positive class probabilities) # - Set `fig` to `with px do line using` a list containing # - freestyle `x=rocMetrics[0]` # - freestyle `y=rocMetrics[1]` # - `with fig do update_yaxes using` a list containing # - freestyle `title_text="Recall/True positive rate"` # - `with fig do update_xaxes using` a list containing # - freestyle `title_text="False positive rate"` # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PythonData # language: python # name: python3 # --- # ### Deliverable 1: Preprocessing the Data for a Neural Network # + # Import our dependencies from sklearn.model_selection import train_test_split import pandas as pd # Import and read the charity_data.csv. application_df = pd.read_csv("charity_data.csv") application_df.head() # Drop the non-beneficial ID columns, 'EIN' and 'NAME'. # also, I'm dropping ask amount because it's numeric, and I'm dropping income amount just because it has 9 options, and that's a lot combinatorially. # I could partition the income amount into "other", but I just didn't want to take the time application_df=application_df.drop(['EIN', 'NAME','ASK_AMT','INCOME_AMT'], axis=1) # Determine which values to replace if counts are less than ...? replace_application = list(application_df['APPLICATION_TYPE'].value_counts().loc[application_df['APPLICATION_TYPE'].value_counts() < 1100].index) # Replace in dataframe for app in replace_application: application_df.APPLICATION_TYPE = application_df.APPLICATION_TYPE.replace(app,"Other") # Determine which values to replace if counts are less than ..? 
replace_class = list(application_df['CLASSIFICATION'].value_counts().loc[application_df['CLASSIFICATION'].value_counts() < 1900].index) # Replace in dataframe for cls in replace_class: application_df.CLASSIFICATION = application_df.CLASSIFICATION.replace(cls,"Other") # + from helper_functions import predict_row, choose, do_the_split, plot_stuff from sklearn.metrics import accuracy_score # Split the preprocessed data into a training and testing dataset all_train, all_test = train_test_split( application_df, test_size=0.33) depth = 2 metadata = do_the_split(all_train,0,depth,'IS_SUCCESSFUL') # import pandas as pd # small_df = pd.DataFrame({ # 'a':[0,0,0,0,1,1,1,1], # 'b':[0,0,1,1,0,0,1,1], # 'c':[0,1,0,1,0,1,0,1], # 'y':[1,0,0,1,0,0,0,1], # }) #metadata = do_the_split(small_df,0,4,'y') #small_df['pred']=[choose(predict_row(small_df.iloc[i],metadata)) for i in range(len(small_df.index))] #accuracy_score(small_df['y'],small_df['pred']) import matplotlib.pyplot as plt ys= all_train['IS_SUCCESSFUL'].value_counts().sort_index() ys_in = metadata['ys_in'] ys_out = metadata['ys_out'] ys_in_in = metadata['next_split_in']['ys_in'] ys_in_out = metadata['next_split_in']['ys_out'] ys_out_in = metadata['next_split_out']['ys_in'] ys_out_out = metadata['next_split_out']['ys_out'] plt.bar(ys.index,ys) ys_list = plot_stuff(0,depth,[],metadata) i = 2 for ys in ys_list: plt.bar([i,i+1],[ys[0],ys[1]]) i = i + 2 plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="2cOCCaF6Q4ys" # ### Key Terms # # Complete the following questions to solidify you're understanding of the key concepts. Feel free to use the lecture notebook or any other resources to help you, but do not copy and paste answers from the internet instead use your own words to present your understanding. # # ---- # + [markdown] id="e_bDYmbjRHNc" # What is a general use programming language? ***The purpose of a programing language is to give the computer instructions on how to preform a task. Takes in the code, which translates into ones and zeros. That is the compiler's job. *** # + [markdown] id="X4j6a8y4RHGG" # Who invented Python and when? # *** created Python in the 1980s to make it easier to read and write *** # + [markdown] id="RHkgvf39RHBm" # What is the difference between front-end and back-end? # **A front-end is what you can see, while a back-end is what is behind the scenes.** # + [markdown] id="oTkkpwhVRG80" # What is a GUI? # ***GUI can be used to carry out commands using icons, menus, etc. *** # + [markdown] id="nHjQwFW1RG4C" # What is an API? # **Application programing interface, sets of deffinitions for intergrating** # + [markdown] id="RWzJmW61RGzR" # What is Open Source Software? # ***A software that can allow anyone to change or modify the code. *** # + [markdown] id="gob_Rgp8RGqT" # What is a development environment? # ***A software application that can allow you to code and give instructions to the computer. *** # + [markdown] id="lKduQMuWSYoC" # What is meant by local and remote in the context of computers? # ***Local is restricted to the place here. While remote is another place.*** # + [markdown] id="cb6Coc48Sgev" # What is an operating system? # ***Software that provides hardware, recources, and services. *** # + [markdown] id="7tRiStzGSltk" # What is a kernel? 
# ***The computer program that has control of the computer's operating system. Basicly has control of everything.*** # + [markdown] id="WdOAakQJSloH" # What is a shell? # ***Shows the code to the person operating the device. Can modify the code if needed.*** # + [markdown] id="hn5QICK0Slio" # What shell do we have access to in Colab notebooks? # ***It is the outermost layer of the system.*** # + [markdown] id="dmgmzh-ISldB" # What is an interpreter? # ***Translates one statement from the program into machine language. *** # + [markdown] id="A2q30j02S3G3" # What is a value in programming? # ***A entity that can be manipulated in the program by the person coding. *** # + [markdown] id="KI34llQnS--5" # What is an expression in programming? # ***Values and functions are combined by the compiler to make a new value. *** # + [markdown] id="pmKFicLMTCcD" # What is syntax? # ***The "rules" of programing. Basicly like grammer, but you have to code correctly to get the computer to understand. *** # + [markdown] id="B8m5jqm6TLHE" # What do we call the process of discovering and resolving errors? # ***We call them finding bugs, and we can debug the code, and fix it.*** # + [markdown] id="OJR_RDQpTRyR" # ### Code # + [markdown] id="zPvODBfiTWCP" # Let's revisit some of the things we practiced in the lecture. In the code cell below print your name to the console without first declaring it as a variable. # + id="mZb-v_UwTO7B" colab={"base_uri": "https://localhost:8080/"} outputId="77ae3334-5eaf-4c27-d54d-d2667c7847f9" print("Anthony") # + [markdown] id="sZPksnwpTnTD" # Now declare your first name and last name as separate variables and combine them in the print statement. # + id="oqmZRhYLTztw" firstname=("Anthony") lastname=("Maini") # + [markdown] id="cNe3K4WZT2_0" # In the cell below run the "Zen of Python" easter egg. # + id="FSkN7Q52UKyU" colab={"base_uri": "https://localhost:8080/"} outputId="4e914330-ba91-4720-a908-50861378e21e" >>> import this # + [markdown] id="2ADI5kQAUMLI" # ### Explore # + [markdown] id="vchHFmicUOid" # This portion of the assignment contains things we didn't explicitly cover in the lecture, instead encouraging you to explore and experiment on your own to discover some of the different operators and expressions in Python. For each expression first describe what you expect to happen before running the code cell. # # Documentation for Python's numeric operators can be found [here](https://docs.python.org/3.10/library/stdtypes.html#numeric-types-int-float-complex) # + [markdown] id="_lTiBbJMU28S" # #### `5 + 2 * 2` # # What do you expect to happen? # ***Add 5 + 2 and multiply the sum of that number by 2.*** # + id="ALTC2aYRUNRe" # + colab={"base_uri": "https://localhost:8080/"} id="yvCBm1GLebIt" outputId="ff6e9530-023d-4758-a905-8f069a4fa357" 5+2*2 # + [markdown] id="zSMDH8osVEEN" # #### `2 / 3` # # What do you expect to happen? # ***Program will divide 2 by 3.*** # + id="FYMUlyCEVHD1" colab={"base_uri": "https://localhost:8080/"} outputId="9fbd0719-2404-4ea1-f8e2-1b3d1760db12" 2/3 # + [markdown] id="c8LQrbNQVIID" # #### `2.5 * 10` # # What do you expect to happen? # ***Multiply 2.5 times 10. *** # + id="IuE7GclzVOpO" colab={"base_uri": "https://localhost:8080/"} outputId="0b7546f4-7455-4849-df9b-bfa1f28eb540" 2.5*10 # + [markdown] id="YfKfY31nVSfy" # #### `a` # # What do you expect to happen? 
# ***Varible will be unchanged.*** # + [markdown] id="Y_FQACtgVVL8" # # + id="pukzPvzXVUgM" colab={"base_uri": "https://localhost:8080/", "height": 166} outputId="4e3a1f65-4d07-4bcb-fbe8-bcf71e3dade3" a # + [markdown] id="x_G2qoXLVhVj" # #### `'a'` # # # What do you expect to happen? # ***Respond with 'a'*** # + id="e0PEkzRHVjVo" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c30e6a91-d01f-4d27-f4a5-317721697568" 'a' # + colab={"base_uri": "https://localhost:8080/"} id="TWs3jpSsfl4w" outputId="a65f971a-5dc9-45be-cd93-b07b5cd7fdc7" 521//5 # + [markdown] id="2kCnvuRvVprG" # #### `521 // 5` # # What do you expect to happen? # ***521 is the floored quotient of x and y*** # + [markdown] id="QWOocovcV3i6" # # + id="n9QgKjHxV7oX" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="i0xRL3kp8p5J" colab_type="text" # In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula. # # So, imagine if house pricing was as easy as a house costs 50k + 50k per bedroom, so that a 1 bedroom house costs 100k, a 2 bedroom house costs 150k etc. # # How would you create a neural network that learns this relationship so that it would predict a 7 bedroom house as costing close to 400k etc. # # Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands' etc. # # + id="1G5YoHI28k4P" colab_type="code" outputId="0f9f0d1e-6428-405f-ff32-a7e24734d29e" colab={"base_uri": "https://localhost:8080/", "height": 35} try: # %tensorflow_version 2.x except Exception: pass import tensorflow as tf import numpy as np from tensorflow import keras # + id="3cejZj9N84jm" colab_type="code" colab={} def house_model(y_new): xs=[] ys=[] for i in range(1,10): xs.append(i) ys.append((1+float(i))*50) xs=np.array(xs,dtype=float) ys=np.array(ys, dtype=float) model = keras.Sequential([keras.layers.Dense(units = 1, input_shape = [1])]) model.compile(optimizer='sgd', loss='mean_squared_error') model.fit(xs, ys, epochs = 4500) return (model.predict(y_new)[0]/100) # + id="TQqO_Uo99Gzd" colab_type="code" outputId="946760af-2523-412e-bb55-d2953d7438e2" colab={"base_uri": "https://localhost:8080/", "height": 1000} prediction = house_model([7.0]) print(prediction) # + id="yKYebUkR9Hbs" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:capstone] * # language: python # name: conda-env-capstone-py # --- # # Processing pipeline # + import sys sys.path.insert(1, '/Users/jakoliendenhollander/capstone/capstone') import warnings import pandas as pd import numpy as np import tidy_functions.load_data import tidy_functions.clean_data import tidy_functions.merge_data import tidy_functions.feature_engineering warnings.filterwarnings(action='ignore') pd.set_option('display.max_columns', None) # To display all columns # - # ## Read in data # + # Reading in survey data from csv into a dictionary of dataframes. 
dfs_country = tidy_functions.load_data.load_survey_data("/Users/jakoliendenhollander/capstone/capstone/data/CMU_Global_data/Full_Survey_Data/country/smooth/", "country") # Concatenating individuals dataframes from the dictionary into one dataframe for regions. survey_data = pd.concat(dfs_country, ignore_index=True) # Corona stats covid_cases = pd.read_csv("/Users/jakoliendenhollander/capstone/capstone/data/Corona_stats/owid-covid-data.csv") print('Read in covid data completed.') # Mask wearing requirements mask_wearing_requirements = pd.read_csv("/Users/jakoliendenhollander/capstone/capstone/data/data-nbhtq.csv") print('Read in mask wearing requirements data completed.') # - # ## Cleaning data # + # Survey data survey_data = tidy_functions.clean_data.delete_other_gender(survey_data) survey_data = tidy_functions.clean_data.deal_with_NaNs_masks(survey_data) # Corona stats covid_cases = tidy_functions.clean_data.deal_with_NaNs_corona_stats(covid_cases) # Mask wearing requirements mask_wearing_requirements = tidy_functions.clean_data.prepare_mask_req(mask_wearing_requirements) mask_wearing_requirements = tidy_functions.clean_data.dummies_mask_req(mask_wearing_requirements) mask_wearing_requirements = tidy_functions.clean_data.dummies_public_mask_req(mask_wearing_requirements) mask_wearing_requirements = tidy_functions.clean_data.dummies_indoors_mask_req(mask_wearing_requirements) mask_wearing_requirements = tidy_functions.clean_data.dummies_transport_mask_req(mask_wearing_requirements) mask_wearing_requirements = tidy_functions.clean_data.data_types_mask_req(mask_wearing_requirements) # HDI hdi_data = tidy_functions.clean_data.rename_hdi_countries("/Users/jakoliendenhollander/capstone/capstone/data/","hdro_statistical_data_tables_1_15_d1_d5.xlsx") dict_hdi = tidy_functions.clean_data.create_hdi_dict(hdi_data) dict_hdi_levels = tidy_functions.clean_data.create_hdi_levels_dict(hdi_data) # - # ## Merging data covid_merge = tidy_functions.merge_data.merge_corona_stats(survey_data,covid_cases) requirements_merge = tidy_functions.merge_data.merge_mask_req(covid_merge,mask_wearing_requirements) hdi_merge = tidy_functions.merge_data.create_hdi_columns(requirements_merge, dict_hdi, dict_hdi_levels) # ## Feature engineering date_fixed = tidy_functions.feature_engineering.insert_month(hdi_merge) requirement_date = tidy_functions.feature_engineering.add_requirement_by_date(date_fixed) df = requirement_date.copy() df_select = df[['date','smoothed_pct_worked_outside_home_weighted','smoothed_pct_grocery_outside_home_weighted', 'smoothed_pct_ate_outside_home_weighted','smoothed_pct_attended_public_event_weighted', 'smoothed_pct_used_public_transit_weighted','smoothed_pct_direct_contact_with_non_hh_weighted', 'smoothed_pct_wear_mask_all_time_weighted','smoothed_pct_wear_mask_most_time_weighted', 'total_cases_per_million','median_age','hdi','cur_mask_recommended','cur_mask_not_required', 'cur_mask_not_required_recommended','cur_mask_not_required_universal','cur_mask_required_part_country', 'cur_mask_everywhere_in_public','cur_mask_public_indoors','cur_mask_public_transport']] df_select.isna().sum() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="wSfiuAAsYFGv" outputId="6e850544-e4b3-4350-d1b5-99140fe0a18b" # # !conda install -c conda-forge cairosvg pycairo # # !conda install -c anaconda 
tk # !pip install flatland-rl # + id="5wH_6CZnYMjC" import random import sys import copy import os import pickle import datetime from argparse import ArgumentParser, Namespace from collections import namedtuple, deque, Iterable from pathlib import Path # base_dir = Path(__file__).resolve().parent.parent # sys.path.append(str(base_dir)) import matplotlib.pyplot as plt import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from flatland.envs.rail_env import RailEnv from flatland.envs.rail_generators import sparse_rail_generator, random_rail_generator, complex_rail_generator from flatland.envs.schedule_generators import sparse_schedule_generator from flatland.envs.observations import TreeObsForRailEnv # + id="6jNItEtJZePx" class DuelingQNetwork(nn.Module): """Dueling Q-network (https://arxiv.org/abs/1511.06581)""" def __init__(self, state_size, action_size, hidsize1=128, hidsize2=128): super(DuelingQNetwork, self).__init__() # value network self.fc1_val = nn.Linear(state_size, hidsize1) self.fc2_val = nn.Linear(hidsize1, hidsize2) self.fc4_val = nn.Linear(hidsize2, 1) # advantage network self.fc1_adv = nn.Linear(state_size, hidsize1) self.fc2_adv = nn.Linear(hidsize1, hidsize2) self.fc4_adv = nn.Linear(hidsize2, action_size) def forward(self, x): val = F.relu(self.fc1_val(x)) val = F.relu(self.fc2_val(val)) val = self.fc4_val(val) # advantage calculation adv = F.relu(self.fc1_adv(x)) adv = F.relu(self.fc2_adv(adv)) adv = self.fc4_adv(adv) return val + adv - adv.mean() # + id="d0RZneMGbSJZ" Experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"]) class ReplayBuffer: """Fixed-size buffer to store experience tuples.""" def __init__(self, action_size, buffer_size, batch_size, device): """Initialize a ReplayBuffer object. 
Params ====== action_size (int): dimension of each action buffer_size (int): maximum size of buffer batch_size (int): size of each training batch """ self.action_size = action_size self.memory = deque(maxlen=buffer_size) self.batch_size = batch_size self.device = device def add(self, state, action, reward, next_state, done): """Add a new experience to memory.""" e = Experience(np.expand_dims(state, 0), action, reward, np.expand_dims(next_state, 0), done) self.memory.append(e) def sample(self): """Randomly sample a batch of experiences from memory.""" experiences = random.sample(self.memory, k=self.batch_size) states = torch.from_numpy(self.__v_stack_impr([e.state for e in experiences if e is not None])) \ .float().to(self.device) actions = torch.from_numpy(self.__v_stack_impr([e.action for e in experiences if e is not None])) \ .long().to(self.device) rewards = torch.from_numpy(self.__v_stack_impr([e.reward for e in experiences if e is not None])) \ .float().to(self.device) next_states = torch.from_numpy(self.__v_stack_impr([e.next_state for e in experiences if e is not None])) \ .float().to(self.device) dones = torch.from_numpy(self.__v_stack_impr([e.done for e in experiences if e is not None]).astype(np.uint8)) \ .float().to(self.device) return states, actions, rewards, next_states, dones def __len__(self): """Return the current size of internal memory.""" return len(self.memory) def __v_stack_impr(self, states): sub_dim = len(states[0][0]) if isinstance(states[0], Iterable) else 1 np_states = np.reshape(np.array(states), (len(states), sub_dim)) return np_states # + id="rZFwHl0MZgJ_" class Policy: def step(self, state, action, reward, next_state, done): raise NotImplementedError def act(self, state, eps=0.): raise NotImplementedError def save(self, filename): raise NotImplementedError def load(self, filename): raise NotImplementedError class DDDQNPolicy(Policy): """Dueling Double DQN policy""" def __init__(self, state_size, action_size, parameters, evaluation_mode=False): self.evaluation_mode = evaluation_mode self.state_size = state_size self.action_size = action_size self.double_dqn = True self.hidsize = 1 if not evaluation_mode: self.hidsize = parameters.hidden_size self.buffer_size = parameters.buffer_size self.batch_size = parameters.batch_size self.update_every = parameters.update_every self.learning_rate = parameters.learning_rate self.tau = parameters.tau self.gamma = parameters.gamma self.buffer_min_size = parameters.buffer_min_size # Device if parameters.use_gpu and torch.cuda.is_available(): self.device = torch.device("cuda:0") # print("🐇 Using GPU") else: self.device = torch.device("cpu") # print("🐢 Using CPU") # Q-Network self.qnetwork_local = DuelingQNetwork(state_size, action_size, hidsize1=self.hidsize, hidsize2=self.hidsize).to(self.device) if not evaluation_mode: self.qnetwork_target = copy.deepcopy(self.qnetwork_local) self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=self.learning_rate) self.memory = ReplayBuffer(action_size, self.buffer_size, self.batch_size, self.device) self.t_step = 0 self.loss = 0.0 def act(self, state, eps=0.): state = torch.from_numpy(state).float().unsqueeze(0).to(self.device) self.qnetwork_local.eval() with torch.no_grad(): action_values = self.qnetwork_local(state) self.qnetwork_local.train() # Epsilon-greedy action selection if random.random() > eps: return np.argmax(action_values.cpu().data.numpy()) else: return random.choice(np.arange(self.action_size)) def step(self, state, action, reward, next_state, done): assert not 
self.evaluation_mode, "Policy has been initialized for evaluation only." # Save experience in replay memory self.memory.add(state, action, reward, next_state, done) # Learn every UPDATE_EVERY time steps. self.t_step = (self.t_step + 1) % self.update_every if self.t_step == 0: # If enough samples are available in memory, get random subset and learn if len(self.memory) > self.buffer_min_size and len(self.memory) > self.batch_size: self._learn() def _learn(self): experiences = self.memory.sample() states, actions, rewards, next_states, dones = experiences # Get expected Q values from local model q_expected = self.qnetwork_local(states).gather(1, actions) if self.double_dqn: # Double DQN q_best_action = self.qnetwork_local(next_states).max(1)[1] q_targets_next = self.qnetwork_target(next_states).gather(1, q_best_action.unsqueeze(-1)) else: # DQN q_targets_next = self.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(-1) # Compute Q targets for current states q_targets = rewards + (self.gamma * q_targets_next * (1 - dones)) # Compute loss self.loss = F.mse_loss(q_expected, q_targets) # Minimize the loss self.optimizer.zero_grad() self.loss.backward() self.optimizer.step() # Update target network self._soft_update(self.qnetwork_local, self.qnetwork_target, self.tau) def _soft_update(self, local_model, target_model, tau): # Soft update model parameters. # θ_target = τ*θ_local + (1 - τ)*θ_target for target_param, local_param in zip(target_model.parameters(), local_model.parameters()): target_param.data.copy_(tau * local_param.data + (1.0 - tau) * target_param.data) def save(self, filename): torch.save(self.qnetwork_local.state_dict(), filename + ".local") torch.save(self.qnetwork_target.state_dict(), filename + ".target") def load(self, filename): if os.path.exists(filename + ".local"): self.qnetwork_local.load_state_dict(torch.load(filename + ".local")) if os.path.exists(filename + ".target"): self.qnetwork_target.load_state_dict(torch.load(filename + ".target")) def save_replay_buffer(self, filename): memory = self.memory.memory with open(filename, 'wb') as f: pickle.dump(list(memory)[-500000:], f) def load_replay_buffer(self, filename): with open(filename, 'rb') as f: self.memory.memory = pickle.load(f) def test(self): self.act(np.array([[0] * self.state_size])) self._learn() # + id="Fv2o6uY5Z1vh" def max_lt(seq, val): """ Return greatest item in seq for which item < val applies. None is returned if seq was empty or all items in seq were >= val. """ max = 0 idx = len(seq) - 1 while idx >= 0: if seq[idx] < val and seq[idx] >= 0 and seq[idx] > max: max = seq[idx] idx -= 1 return max def min_gt(seq, val): """ Return smallest item in seq for which item > val applies. None is returned if seq was empty or all items in seq were >= val. 
""" min = np.inf idx = len(seq) - 1 while idx >= 0: if seq[idx] >= val and seq[idx] < min: min = seq[idx] idx -= 1 return min def norm_obs_clip(obs, clip_min=-1, clip_max=1, fixed_radius=0, normalize_to_range=False): """ This function returns the difference between min and max value of an observation :param obs: Observation that should be normalized :param clip_min: min value where observation will be clipped :param clip_max: max value where observation will be clipped :return: returnes normalized and clipped observatoin """ if fixed_radius > 0: max_obs = fixed_radius else: max_obs = max(1, max_lt(obs, 1000)) + 1 min_obs = 0 # min(max_obs, min_gt(obs, 0)) if normalize_to_range: min_obs = min_gt(obs, 0) if min_obs > max_obs: min_obs = max_obs if max_obs == min_obs: return np.clip(np.array(obs) / max_obs, clip_min, clip_max) norm = np.abs(max_obs - min_obs) return np.clip((np.array(obs) - min_obs) / norm, clip_min, clip_max) def _split_node_into_feature_groups(node) -> (np.ndarray, np.ndarray, np.ndarray): data = np.zeros(6) distance = np.zeros(1) agent_data = np.zeros(4) data[0] = node.dist_own_target_encountered data[1] = node.dist_other_target_encountered data[2] = node.dist_other_agent_encountered data[3] = node.dist_potential_conflict data[4] = node.dist_unusable_switch data[5] = node.dist_to_next_branch distance[0] = node.dist_min_to_target agent_data[0] = node.num_agents_same_direction agent_data[1] = node.num_agents_opposite_direction agent_data[2] = node.num_agents_malfunctioning agent_data[3] = node.speed_min_fractional return data, distance, agent_data def _split_subtree_into_feature_groups(node, current_tree_depth: int, max_tree_depth: int) -> (np.ndarray, np.ndarray, np.ndarray): if node == -np.inf: remaining_depth = max_tree_depth - current_tree_depth # reference: https://stackoverflow.com/questions/515214/total-number-of-nodes-in-a-tree-data-structure num_remaining_nodes = int((4 ** (remaining_depth + 1) - 1) / (4 - 1)) return [-np.inf] * num_remaining_nodes * 6, [-np.inf] * num_remaining_nodes, [-np.inf] * num_remaining_nodes * 4 data, distance, agent_data = _split_node_into_feature_groups(node) if not node.childs: return data, distance, agent_data for direction in TreeObsForRailEnv.tree_explored_actions_char: sub_data, sub_distance, sub_agent_data = _split_subtree_into_feature_groups(node.childs[direction], current_tree_depth + 1, max_tree_depth) data = np.concatenate((data, sub_data)) distance = np.concatenate((distance, sub_distance)) agent_data = np.concatenate((agent_data, sub_agent_data)) return data, distance, agent_data def split_tree_into_feature_groups(tree, max_tree_depth: int) -> (np.ndarray, np.ndarray, np.ndarray): """ This function splits the tree into three difference arrays of values """ data, distance, agent_data = _split_node_into_feature_groups(tree) for direction in TreeObsForRailEnv.tree_explored_actions_char: sub_data, sub_distance, sub_agent_data = _split_subtree_into_feature_groups(tree.childs[direction], 1, max_tree_depth) data = np.concatenate((data, sub_data)) distance = np.concatenate((distance, sub_distance)) agent_data = np.concatenate((agent_data, sub_agent_data)) return data, distance, agent_data def normalize_observation(observation, tree_depth: int, observation_radius=0): """ This function normalizes the observation used by the RL algorithm """ data, distance, agent_data = split_tree_into_feature_groups(observation, tree_depth) data = norm_obs_clip(data, fixed_radius=observation_radius) distance = norm_obs_clip(distance, 
normalize_to_range=True) agent_data = np.clip(agent_data, -1, 1) normalized_obs = np.concatenate((np.concatenate((data, distance)), agent_data)) return normalized_obs # + id="OdzAav6yZ2nl" def train_agent(n_episodes): # Environment parameters n_agents = 1 x_dim = 50 y_dim = 50 n_cities = 2 max_rails_between_cities = 2 max_rails_in_city = 3 seed = 42 # Observation parameters observation_tree_depth = 5 observation_radius = 10 # Exploration parameters eps_start = 1.0 eps_end = 0.01 eps_decay = 0.997 # for 2500ts # Set the seeds random.seed(seed) np.random.seed(seed) # Observation builder tree_observation = TreeObsForRailEnv(max_depth=observation_tree_depth) # Setup the environment env = RailEnv( width=x_dim, height=y_dim, # rail_generator=sparse_rail_generator( # max_num_cities=n_cities, # seed=seed, # grid_mode=False, # max_rails_between_cities=max_rails_between_cities, # max_rails_in_city=max_rails_in_city # ), # schedule_generator=sparse_schedule_generator(), rail_generator=random_rail_generator(), number_of_agents=n_agents, obs_builder_object=tree_observation ) env.reset(True, True) # Calculate the state size given the depth of the tree observation and the number of features n_features_per_node = env.obs_builder.observation_dim n_nodes = 0 for i in range(observation_tree_depth + 1): n_nodes += np.power(4, i) state_size = n_features_per_node * n_nodes # The action space of flatland is 5 discrete actions action_size = 5 # Max number of steps per episode # This is the official formula used during evaluations max_steps = int(4 * 2 * (env.height + env.width + (n_agents / n_cities))) action_dict = dict() # And some variables to keep track of the progress scores_window = deque(maxlen=100) # todo smooth when rendering instead completion_window = deque(maxlen=100) scores = [] completion = [] action_count = [0] * action_size agent_obs = [None] * env.get_num_agents() agent_prev_obs = [None] * env.get_num_agents() agent_prev_action = [2] * env.get_num_agents() update_values = False # Training parameters training_parameters = { 'buffer_size': int(1e6), 'batch_size': 128, 'update_every': 8, 'learning_rate': 0.5e-4, 'tau': 1e-3, 'gamma': 0.99, 'buffer_min_size': 0, 'hidden_size': 512, 'use_gpu': True } make_dir(CHECKPOINT_DIR) # write params to checkpoint dir params_file = os.path.join(CHECKPOINT_DIR, 'params.txt') with open(params_file, "w") as file1: file1.write(f'n_agents={n_agents}' + '\n') file1.write(f'x_dim={x_dim}' + '\n') file1.write(f'y_dim={y_dim}' + '\n') file1.write(f'n_cities={n_cities}' + '\n') file1.write(f'max_rails_between_cities={max_rails_between_cities}' + '\n') file1.write(f'max_rails_in_city={max_rails_in_city}' + '\n') file1.write(f'seed={seed}' + '\n') file1.write(f'observation_tree_depth={observation_tree_depth}' + '\n') file1.write(f'observation_radius={observation_radius}' + '\n') file1.write(f'buffer_size={training_parameters["buffer_size"]}' + '\n') file1.write(f'batch_size={training_parameters["batch_size"]}' + '\n') file1.write(f'update_every={training_parameters["update_every"]}' + '\n') file1.write(f'learning_rate={training_parameters["learning_rate"]}' + '\n') file1.write(f'tau={training_parameters["tau"]}' + '\n') file1.write(f'gamma={training_parameters["gamma"]}' + '\n') file1.write(f'buffer_min_size={training_parameters["buffer_min_size"]}' + '\n') file1.write(f'hidden_size={training_parameters["hidden_size"]}' + '\n') file1.write(f'use_gpu={training_parameters["use_gpu"]}' + '\n') # Double Dueling DQN policy policy = DDDQNPolicy(state_size, action_size, 
Namespace(**training_parameters)) for episode_idx in range(1, n_episodes + 1): score = 0 # Reset environment obs, info = env.reset(regenerate_rail=True, regenerate_schedule=True) # Build agent specific observations for agent in env.get_agent_handles(): if obs[agent]: agent_obs[agent] = normalize_observation(obs[agent], observation_tree_depth, observation_radius=observation_radius) agent_prev_obs[agent] = agent_obs[agent].copy() # Run episode for step in range(max_steps - 1): for agent in env.get_agent_handles(): if info['action_required'][agent]: # If an action is required, we want to store the obs at that step as well as the action update_values = True action = policy.act(agent_obs[agent], eps=eps_start) action_count[action] += 1 else: update_values = False action = 0 action_dict.update({agent: action}) # Environment step next_obs, all_rewards, done, info = env.step(action_dict) # Update replay buffer and train agent for agent in range(env.get_num_agents()): # Only update the values when we are done or when an action was taken and thus relevant information is present if update_values or done[agent]: policy.step(agent_prev_obs[agent], agent_prev_action[agent], all_rewards[agent], agent_obs[agent], done[agent]) agent_prev_obs[agent] = agent_obs[agent].copy() agent_prev_action[agent] = action_dict[agent] if next_obs[agent]: agent_obs[agent] = normalize_observation(next_obs[agent], observation_tree_depth, observation_radius=10) score += all_rewards[agent] if done['__all__']: break # Epsilon decay eps_start = max(eps_end, eps_decay * eps_start) # Collection information about training tasks_finished = np.sum([int(done[idx]) for idx in env.get_agent_handles()]) completion_window.append(tasks_finished / max(1, env.get_num_agents())) scores_window.append(score / (max_steps * env.get_num_agents())) completion.append((np.mean(completion_window))) scores.append(np.mean(scores_window)) action_probs = action_count / np.sum(action_count) if episode_idx % 100 == 0: end = "\n" torch.save(policy.qnetwork_local, os.path.join(CHECKPOINT_DIR, str(episode_idx) + '.pth')) action_count = [1] * action_size else: end = " " print('\rTraining {} agents on {}x{}\t Episode {}\t Average Score: {:.3f}\tDones: {:.2f}%\tEpsilon: {:.2f} \t Action Probabilities: \t {}'.format( env.get_num_agents(), x_dim, y_dim, episode_idx, np.mean(scores_window), 100 * np.mean(completion_window), eps_start, action_probs ), end=end) pickle_list(scores, os.path.join(CHECKPOINT_DIR, 'scores.pkl')) pickle_list(completion, os.path.join(CHECKPOINT_DIR, 'completion.pkl')) # Plot overall training progress at the end plt.plot(scores) plt.show() plt.savefig(os.path.join(CHECKPOINT_DIR, 'scores.png')) plt.plot(completion) plt.show() plt.savefig(os.path.join(CHECKPOINT_DIR, 'completion.png')) # + def make_dir(dir_path): if not os.path.exists(dir_path): os.makedirs(dir_path) def get_timestamp(): ct = datetime.datetime.now() return str(ct).split('.')[0].replace(' ', '').replace('-', '').replace(':', '') def pickle_list(l, file_path): with open(file_path, 'wb') as f: pickle.dump(l, f) # + colab={"base_uri": "https://localhost:8080/", "height": 669} id="II-EzccEYq1y" outputId="a3e0e8ee-df05-41fc-c180-c18b4bdbe471" n_episodes = 1000 CHECKPOINT_DIR = '/scratch/ns4486/flatland-reinforcement-learning/single-agent/checkpoints' CHECKPOINT_DIR = os.path.join(CHECKPOINT_DIR, get_timestamp()) train_agent(n_episodes) # + id="aVuIiUAxa7nR" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="a36e9e20-001" colab_type="text" # #1. Install Dependencies # First install the libraries needed to execute recipes, this only needs to be done once, then click play. # # + id="a36e9e20-002" colab_type="code" # !pip install git+https://github.com/google/starthinker # + [markdown] id="a36e9e20-003" colab_type="text" # #2. Get Cloud Project ID # To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. # # + id="a36e9e20-004" colab_type="code" CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) # + [markdown] id="a36e9e20-005" colab_type="text" # #3. Get Client Credentials # To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. # # + id="a36e9e20-006" colab_type="code" CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) # + [markdown] id="a36e9e20-007" colab_type="text" # #4. Enter CM360 Log Audit Parameters # Downloads Campaign manager logs and allows audits. # 1. Wait for BigQuery->->->CM_* to be created. # 1. Wait for BigQuery->->->Barnacle_* to be created, then copy and connect the following data sources. # 1. Join the StarThinker Assets Group to access the following assets # 1. Copy Barnacle Profile Advertiser Map and connect. # 1. Copy Barnacle Profile Campaign Map and connect. # 1. Copy Barnacle Profile Site Map and connect. # 1. Copy Barnacle Profiles Connections and connect. # 1. Copy Barnacle Report Delivery Profiles and connect. # 1. Copy Barnacle Roles Duplicates and connect. # 1. Copy Barnacle Roles Not Used and connect. # 1. Copy Barnacle Site Contacts Profiles and connect. # 1. If reports checked, copy Barnacle Profile Report Map and connect. # 1. Copy Barnacle Report. # 1. When prompted choose the new data sources you just created. # 1. Or give these intructions to the client. # Modify the values below for your use case, can be done multiple times, then click play. # # + id="a36e9e20-008" colab_type="code" FIELDS = { 'auth_read': 'user', # Credentials used for reading data. 'auth_write': 'service', # Credentials used for writing data. 'accounts': [], # Comma separated CM account ids. 'days': 7, # Number of days to backfill the log, works on first run only. 'recipe_project': '', # Google BigQuery project to create tables in. 'recipe_slug': '', # Google BigQuery dataset to create tables in. } print("Parameters Set To: %s" % FIELDS) # + [markdown] id="a36e9e20-009" colab_type="text" # #5. Execute CM360 Log Audit # This does NOT need to be modified unless you are changing the recipe, click play. 
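# + [markdown]
# Before the recipe cell below, a note on how the pieces fit together: json_set_fields is
# used here to substitute each {'field': {...}} placeholder in TASKS with the matching entry
# from FIELDS, after which execute runs the resolved recipe. The sketch that follows is
# illustrative only; it is not StarThinker code, just a simplified stand-in for that
# placeholder-filling pattern, using a made-up fill_fields helper and a toy structure.

# +
# Illustrative sketch (NOT the StarThinker implementation): fill {'field': ...}
# placeholders from a values dictionary, the way FIELDS is wired into TASKS below.
def fill_fields(node, values):
    if isinstance(node, dict):
        if set(node) == {'field'}:
            spec = node['field']
            return values.get(spec['name'], spec.get('default'))
        return {key: fill_fields(value, values) for key, value in node.items()}
    if isinstance(node, list):
        return [fill_fields(value, values) for value in node]
    return node

toy_task = {'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'default': ''}}}
print(fill_fields(toy_task, {'recipe_slug': 'cm_logs'}))  # {'dataset': 'cm_logs'}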
# # + id="a36e9e20-010" colab_type="code" from starthinker.util.configuration import Configuration from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'dataset': { 'description': 'The dataset will hold log table, Create it exists.', 'hour': [ 1 ], 'auth': 'user', 'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}} } }, { 'dcm_log': { 'description': 'Will create tables with format CM_* to hold each endpoint via a call to the API list function. Exclude reports for its own task.', 'hour': [ 2 ], 'auth': 'user', 'accounts': { 'single_cell': True, 'values': {'field': {'name': 'accounts','kind': 'integer_list','order': 2,'default': [],'description': 'Comma separated CM account ids.'}} }, 'days': {'field': {'name': 'days','kind': 'integer','order': 3,'default': 7,'description': 'Number of days to backfill the log, works on first run only.'}}, 'out': { 'auth': 'user', 'project': {'field': {'name': 'recipe_project','kind': 'string','order': 4,'default': '','description': 'Google BigQuery project to create tables in.'}}, 'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 5,'default': '','description': 'Google BigQuery dataset to create tables in.'}} } } } ] json_set_fields(TASKS, FIELDS) execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #imports from datasets import load_dataset from thai2transformers.metrics import classification_metrics from sklearn.metrics import f1_score import pandas as pd import numpy as np from tqdm.auto import tqdm # + #parameters class Args: dataset_name_or_path = 'thainer' feature_col = 'tokens' label_col = 'pos_tags' metric_for_best_model = 'f1_macro' seed = 2020 data_dir = '~/Downloads/LST20_Corpus' args = Args() # - if args.dataset_name_or_path == 'lst20': dataset = load_dataset(args.dataset_name_or_path,data_dir=args.data_dir) else: dataset = load_dataset(args.dataset_name_or_path) dataset if args.dataset_name_or_path == 'thainer' and args.label_col== 'ner_tags': dataset = dataset.map(lambda examples: {'ner_tags': [i if i not in [13,26] else 27 for i in examples[args.label_col]]}) train_valtest_split = dataset['train'].train_test_split(test_size=0.2, shuffle=True, seed=args.seed) dataset['train'] = train_valtest_split['train'] dataset['validation'] = train_valtest_split['test'] val_test_split = dataset['validation'].train_test_split(test_size=0.5, shuffle=True, seed=args.seed) dataset['validation'] = val_test_split['train'] dataset['test'] = val_test_split['test'] tag_labels = dataset['train'].features[args.label_col].feature.names tag_labels = [tag_labels[i] for i in range(len(tag_labels)) if i not in [13,26]] elif args.dataset_name_or_path == 'thainer' and args.label_col== 'pos_tags': train_valtest_split = dataset['train'].train_test_split(test_size=0.2, shuffle=True, seed=args.seed) dataset['train'] = train_valtest_split['train'] dataset['validation'] = train_valtest_split['test'] val_test_split = dataset['validation'].train_test_split(test_size=0.5, shuffle=True, seed=args.seed) dataset['validation'] = val_test_split['train'] dataset['test'] = 
val_test_split['test'] tag_labels = dataset['train'].features[args.label_col].feature.names else: tag_labels = dataset['train'].features[args.label_col].feature.names dataset if args.dataset_name_or_path == 'thainer': from transformers import AutoTokenizer mbert_tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased') def pre_tokenize(token, space_token): token = token.replace(' ', space_token) return token def is_not_too_long(example, max_length=510): tokens = sum([mbert_tokenizer.tokenize( pre_tokenize(token, space_token='<_>')) for token in example[args.feature_col]], []) return len(tokens) < max_length dataset['test'] = dataset['test'].filter(is_not_too_long) dataset['test'] # + # %%time #get sentence forms def generate_sents(dataset, idx): features = dataset[idx][args.feature_col] labels = dataset[idx][args.label_col] return [(features[i], labels[i]) for i in range(len(features))] train_sents = [generate_sents(dataset['train'],i) for i in range(len(dataset['train']))] valid_sents = [generate_sents(dataset['validation'],i) for i in range(len(dataset['validation']))] test_sents = [generate_sents(dataset['test'],i) for i in range(len(dataset['test']))] len(train_sents), len(valid_sents), len(test_sents) # + #generate x,y def extract_features(doc, window=3, max_n_gram=3): #padding for words doc = ['xxpad' for i in range(window)] + doc + ['xxpad' for i in range(window)] doc_features = [] #for each word for i in range(window, len(doc)-window): #bias term word_features = ['bias'] #ngram features for n_gram in range(1, min(max_n_gram+1,2+window*2)): for j in range(i-window,i+window+2-n_gram): feature_position = f'{n_gram}_{j-i}_{j-i+n_gram}' #word word_ = f'{"|".join(doc[j:(j+n_gram)])}' word_features += [f'word_{feature_position}={word_}'] #append to feature per word doc_features.append(word_features) return doc_features def generate_xy(all_tuples): #target y = [[str(l) for (w,l) in t] for t in all_tuples] #features x_pre = [[w for (w,l) in t] for t in all_tuples] x = [extract_features(x_, window=2, max_n_gram = 2) for x_ in tqdm(x_pre)] return x, y x_train, y_train = generate_xy(train_sents) if args.dataset_name_or_path=='lst20': import random random.seed(args.seed) x_train_small = random.sample(x_train,10000) random.seed(args.seed) y_train_small = random.sample(y_train,10000) else: x_train_small = x_train y_train_small = y_train x_valid, y_valid = generate_xy(valid_sents) x_test, y_test = generate_xy(test_sents) # + import pycrfsuite from sklearn.metrics import classification_report def train_crf(model_name, c1, c2, x_train, y_train, max_iterations=500): # Train model trainer = pycrfsuite.Trainer(verbose=True) for xseq, yseq in tqdm(zip(x_train, y_train)): trainer.append(xseq, yseq) trainer.set_params({ 'c1': c1, 'c2': c2, 'max_iterations': max_iterations, 'feature.possible_transitions': True, 'feature.minfreq': 3.0, }) trainer.train(f'{model_name}_{c1}_{c2}.model') def evaluate_crf(model_path, features, labels, tag_labels): tagger = pycrfsuite.Tagger() tagger.open(model_path) y_pred = [] for xseq in tqdm(features, total=len(features)): y_pred.append(tagger.tag(xseq)) preds = [int(tag) for row in y_pred for tag in row] labs = [int(tag) for row in labels for tag in row] return classification_report(labs,preds, target_names = tag_labels,digits=4),\ f1_score(labs,preds,average='micro'),\ f1_score(labs,preds,average='macro') # - hyperparams = [] for c1 in tqdm([0.,0.5,1.]): for c2 in tqdm([0.,0.5,1.]): train_crf(args.dataset_name_or_path,c1,c2,x_train_small,y_train_small) 
report, f1_micro, f1_macro = evaluate_crf(f'{args.dataset_name_or_path}_{c1}_{c2}.model', x_valid, y_valid, tag_labels) print(report) d = {'c1':c1, 'c2':c2, 'f1_micro':f1_micro, 'f1_macro': f1_macro, 'report':report} hyperparams.append(d) hyperparams_df = pd.DataFrame(hyperparams).sort_values('f1_macro',ascending=False).reset_index(drop=True) best_hyperparams = hyperparams_df.iloc[0,:].to_dict() hyperparams_df print(best_hyperparams['report']) #final model c1, c2 = best_hyperparams['c1'], best_hyperparams['c2'] train_crf(f'{args.dataset_name_or_path}_{args.label_col}_best',c1,c2,x_train,y_train) # + #debug c1, c2 = 0.5, 0.0 if args.dataset_name_or_path=='lst20' and args.label_col=='ner_tags': report, f1_micro, f1_macro = evaluate_crf(f'{args.dataset_name_or_path}_{args.label_col}_best_{c1}_{c2}.model', x_test, y_test, tag_labels[:-1]) #test set of lst20 does not have E_TTL print(report) else: report, f1_micro, f1_macro = evaluate_crf(f'{args.dataset_name_or_path}_{args.label_col}_best_{c1}_{c2}.model', x_test, y_test, tag_labels) print(report) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests import json import csv import pandas as pd import numpy as np import matplotlib.pyplot as plt # import GetOldTweets3 as got # import random import pickle import seaborn as sns import sys # #### Reading of CSV df = pd.read_csv('combined_df.csv') df.drop(["Unnamed: 0"], axis=1, inplace=True) df.set_index("user_id", inplace=True) df.iloc[0].values.reshape(1,-1).shape df # #### Step 1: Imagine if we have a user's data (i.e one row of the csv) in a list. test = df.iloc[0].values.tolist() test # #### Step 2: We can now make use the API. endpoint_url = 'https://dt4o8agm5c.execute-api.us-east-1.amazonaws.com/v1' # #### Step 5: Make either a single row request or multiple users row request: # + request = { "instances": [ { "features": [test] # from [10] } ] } # Note that each row is something like {"features":[[val1, val2]]} multiple_request = { "instances": [ { "features": [test] }, { "features": [test] } , { "features": [test] } ] } # - response = requests.post(endpoint_url, data=json.dumps(request)) result # #### Multiple data example: response2 = requests.post(endpoint_url, data=json.dumps(multiple_request)) result2 = response2.json() result2 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from datetime import tzinfo, timedelta, datetime, timezone # + ZERO = timedelta(0) HOUR = timedelta(hours=1) SECOND = timedelta(seconds=1) # A class capturing the platform's idea of local time. # (May result in wrong values on historical times in # timezones where UTC offset and/or the DST rules had # changed in the past.) 
import time as _time STDOFFSET = timedelta(seconds = -_time.timezone) if _time.daylight: DSTOFFSET = timedelta(seconds = -_time.altzone) else: DSTOFFSET = STDOFFSET DSTDIFF = DSTOFFSET - STDOFFSET class LocalTimezone(tzinfo): def fromutc(self, dt): assert dt.tzinfo is self stamp = (dt - datetime(1970, 1, 1, tzinfo=self)) // SECOND args = _time.localtime(stamp)[:6] dst_diff = DSTDIFF // SECOND # Detect fold fold = (args == _time.localtime(stamp - dst_diff)) return datetime(*args, microsecond=dt.microsecond, tzinfo=self, fold=fold) def utcoffset(self, dt): if self._isdst(dt): return DSTOFFSET else: return STDOFFSET def dst(self, dt): if self._isdst(dt): return DSTDIFF else: return ZERO def tzname(self, dt): return _time.tzname[self._isdst(dt)] def _isdst(self, dt): tt = (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, dt.weekday(), 0, 0) stamp = _time.mktime(tt) tt = _time.localtime(stamp) return tt.tm_isdst > 0 Local = LocalTimezone() # A complete implementation of current DST rules for major US time zones. def first_sunday_on_or_after(dt): days_to_go = 6 - dt.weekday() if days_to_go: dt += timedelta(days_to_go) return dt # US DST Rules # # This is a simplified (i.e., wrong for a few cases) set of rules for US # DST start and end times. For a complete and up-to-date set of DST rules # and timezone definitions, visit the Olson Database (or try pytz): # http://www.twinsun.com/tz/tz-link.htm # http://sourceforge.net/projects/pytz/ (might not be up-to-date) # # In the US, since 2007, DST starts at 2am (standard time) on the second # Sunday in March, which is the first Sunday on or after Mar 8. DSTSTART_2007 = datetime(1, 3, 8, 2) # and ends at 2am (DST time) on the first Sunday of Nov. DSTEND_2007 = datetime(1, 11, 1, 2) # From 1987 to 2006, DST used to start at 2am (standard time) on the first # Sunday in April and to end at 2am (DST time) on the last # Sunday of October, which is the first Sunday on or after Oct 25. DSTSTART_1987_2006 = datetime(1, 4, 1, 2) DSTEND_1987_2006 = datetime(1, 10, 25, 2) # From 1967 to 1986, DST used to start at 2am (standard time) on the last # Sunday in April (the one on or after April 24) and to end at 2am (DST time) # on the last Sunday of October, which is the first Sunday # on or after Oct 25. DSTSTART_1967_1986 = datetime(1, 4, 24, 2) DSTEND_1967_1986 = DSTEND_1987_2006 def us_dst_range(year): # Find start and end times for US DST. For years before 1967, return # start = end for no DST. if 2006 < year: dststart, dstend = DSTSTART_2007, DSTEND_2007 elif 1986 < year < 2007: dststart, dstend = DSTSTART_1987_2006, DSTEND_1987_2006 elif 1966 < year < 1987: dststart, dstend = DSTSTART_1967_1986, DSTEND_1967_1986 else: return (datetime(year, 1, 1), ) * 2 start = first_sunday_on_or_after(dststart.replace(year=year)) end = first_sunday_on_or_after(dstend.replace(year=year)) return start, end class USTimeZone(tzinfo): def __init__(self, hours, reprname, stdname, dstname): self.stdoffset = timedelta(hours=hours) self.reprname = reprname self.stdname = stdname self.dstname = dstname def __repr__(self): return self.reprname def tzname(self, dt): if self.dst(dt): return self.dstname else: return self.stdname def utcoffset(self, dt): return self.stdoffset + self.dst(dt) def dst(self, dt): if dt is None or dt.tzinfo is None: # An exception may be sensible here, in one or both cases. # It depends on how you want to treat them. The default # fromutc() implementation (called by the default astimezone() # implementation) passes a datetime with dt.tzinfo is self. 
return ZERO assert dt.tzinfo is self start, end = us_dst_range(dt.year) # Can't compare naive to aware objects, so strip the timezone from # dt first. dt = dt.replace(tzinfo=None) if start + HOUR <= dt < end - HOUR: # DST is in effect. return HOUR if end - HOUR <= dt < end: # Fold (an ambiguous hour): use dt.fold to disambiguate. return ZERO if dt.fold else HOUR if start <= dt < start + HOUR: # Gap (a non-existent hour): reverse the fold rule. return HOUR if dt.fold else ZERO # DST is off. return ZERO def fromutc(self, dt): assert dt.tzinfo is self start, end = us_dst_range(dt.year) start = start.replace(tzinfo=self) end = end.replace(tzinfo=self) std_time = dt + self.stdoffset dst_time = std_time + HOUR if end <= dst_time < end + HOUR: # Repeated hour return std_time.replace(fold=1) if std_time < start or dst_time >= end: # Standard time return std_time if start <= std_time < end - HOUR: # Daylight saving time return dst_time Eastern = USTimeZone(-5, "Eastern", "EST", "EDT") Central = USTimeZone(-6, "Central", "CST", "CDT") Mountain = USTimeZone(-7, "Mountain", "MST", "MDT") Pacific = USTimeZone(-8, "Pacific", "PST", "PDT") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch import os import collections from collections import defaultdict from torch.utils import data # + def read_data_nmt(data_path): """Load the English-French dataset.""" with open(data_path, 'r', encoding = 'utf-8') as f: return f.read() raw_text = read_data_nmt('../data/fra-eng/fra.txt') #tab separated string print(raw_text[:20]) # - # After downloading the dataset, we proceed with several preprocessing steps for the raw text data. # For instance, we replace non-breaking space with space, convert uppercase letters to lowercase ones, and # insert space between words and punctuation marks. # + def preprocess_nmt(text): """Preprocess the English-French dataset.""" def no_space(char, prev_char): return char in set(',.!?') and prev_char != ' ' # Replace non-breaking space with space, and convert uppercase letters to # lowercase ones text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower() # Insert space between words and punctuation marks out = [' ' + char if i > 0 and no_space(char, text[i - 1]) else char for i, char in enumerate(text)] return ''.join(out) text = preprocess_nmt(raw_text) print(text[:80]) # - # Different from character-level tokenization in Section 8.3, for machine translation we prefer word-level tokenization here (state-of-the-art models may use more advanced tokenization techniques). The following tokenize_nmt function tokenizes the the first num_examples text sequence pairs, where each token is either a word or a punctuation mark. This function returns two lists of token lists: source and target. Specifically, source[i] is a list of tokens from the ith text sequence in the source language (English here) and target[i] is that in the target language (French here). 
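# + [markdown]
# Before the notebook's own tokenize_nmt cell below, a tiny standalone check of the
# tab-separated format it expects (the sample strings here are made up; the real data
# comes from fra.txt): each line holds an English sentence, a tab, then its French
# translation, and word-level tokens are produced by splitting on spaces.

# +
# Illustrative sketch of the expected input/output shape of word-level tokenization.
sample = 'go .\tva !\nhi .\tsalut !'
pairs = [line.split('\t') for line in sample.split('\n') if line]
src_tokens = [p[0].split(' ') for p in pairs if len(p) == 2]
tgt_tokens = [p[1].split(' ') for p in pairs if len(p) == 2]
print(src_tokens)  # [['go', '.'], ['hi', '.']]
print(tgt_tokens)  # [['va', '!'], ['salut', '!']]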
# + def tokenize_nmt(text, num_examples=None): """Tokenize the English-French dataset.""" source, target = [], [] for i, line in enumerate(text.split('\n')): if num_examples and i > num_examples: break parts = line.split('\t') if len(parts) == 2: source.append(parts[0].split(' ')) target.append(parts[1].split(' ')) return source, target source, target = tokenize_nmt(text) print(source[10], target[10]) print(source[20], target[20]) # - # ### Vocabulary # # Since the machine translation dataset consists of pairs of languages, we can build two vocabularies for # both the source language and the target language separately. # With word-level tokenization, the vocabulary size will be significantly larger than that using character-level # tokenization. To alleviate this, here we treat infrequent tokens that appear less than 2 times as the # same unknown (\) token. Besides that, we specify additional special tokens s # uch as for padding (\) sequences to the same length in minibatches, and for marking the # beginning (\) or end (\) of sequences. # Such special tokens are commonly used in natural language processing tasks. # + class Vocab: """Vocabulary for text.""" def __init__(self, tokens=None, min_freq=0, reserved_tokens=None): if tokens is None: tokens = [] if reserved_tokens is None: reserved_tokens = [] # Sort according to frequencies counter = count_corpus(tokens) self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True) # The index for the unknown token is 0 self.unk, uniq_tokens = 0, [''] + reserved_tokens uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens] self.idx_to_token, self.token_to_idx = [], dict() for token in uniq_tokens: self.idx_to_token.append(token) self.token_to_idx[token] = len(self.idx_to_token) - 1 def __len__(self): return len(self.idx_to_token) def __getitem__(self, tokens): if not isinstance(tokens, (list, tuple)): return self.token_to_idx.get(tokens, self.unk) return [self.__getitem__(token) for token in tokens] def to_tokens(self, indices): if not isinstance(indices, (list, tuple)): return self.idx_to_token[indices] return [self.idx_to_token[index] for index in indices] def count_corpus(tokens): """Count token frequencies.""" # Here `tokens` is a 1D list or 2D list if len(tokens) == 0 or isinstance(tokens[0], list): # Flatten a list of token lists into a list of tokens tokens = [token for line in tokens for token in line] return collections.Counter(tokens) # - src_vocab = Vocab(source, min_freq=2, reserved_tokens=['', '', '']) len(src_vocab) # In machine translation, each example is a pair of source and target text sequences, where each text sequence may have different lengths. # # For computational efficiency, we can still process a minibatch of text sequences at one time by truncation and padding. Suppose that every sequence in the same minibatch should have the same length num_steps. If a text sequence has fewer than num_steps tokens, we will keep appending the special \ token to its end until its length reaches num_steps. Otherwise, we will truncate the text sequence by only taking its first num_steps tokens and discarding the remaining. In this way, every text sequence will have the same length to be loaded in minibatches of the same shape. 
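# + [markdown]
# A note before the next cell: the special-token string literals in this copy of the
# notebook appear to have been stripped on export (the empty strings in
# reserved_tokens=['', '', ''] and in expressions like vocab['']). In the d2l.ai reference
# implementation this section follows, those tokens are '<unk>', '<pad>', '<bos>' and
# '<eos>', and the demo call below was presumably
# truncate_pad(src_vocab[source[0]], 10, src_vocab['<pad>']) with its closing parenthesis
# intact. A minimal, self-contained sketch of the intended padding step, under that
# assumption:

# +
# Hedged sketch of truncation/padding with an explicit padding token; the token
# names ('<pad>' etc.) are assumed, since the originals were lost in this copy.
def truncate_pad_sketch(line, num_steps, padding_token):
    if len(line) > num_steps:
        return line[:num_steps]                                # truncate long sequences
    return line + [padding_token] * (num_steps - len(line))   # pad short ones

pad_id = 1  # hypothetical index of '<pad>' in a small vocabulary
print(truncate_pad_sketch([47, 4], 10, pad_id))  # [47, 4, 1, 1, 1, 1, 1, 1, 1, 1]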
# + def truncate_pad(line, num_steps, padding_token): """Truncate or pad sequences.""" if len(line) > num_steps: return line[:num_steps] # Truncate return line + [padding_token] * (num_steps - len(line)) # Pad truncate_pad(src_vocab[source[0]], 10, src_vocab[''] # - # Now we define a function to transform text sequences into minibatches for training. We append the special \ token to the end of every sequence to indicate the end of the sequence. When a model is predicting by generating a sequence token after token, the generation of the \\ token can suggest that the output sequence is complete. Besides, we also record the length of each text sequence excluding the padding tokens. # + def load_array(data_arrays, batch_size, is_train=True): """Construct a PyTorch data iterator.""" dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) def build_array_nmt(lines, vocab, num_steps): """Transform text sequences of machine translation into minibatches.""" lines = [vocab[l] for l in lines] lines = [l + [vocab['']] for l in lines] array = torch.tensor( [truncate_pad(l, num_steps, vocab['']) for l in lines]) valid_len = (array != vocab['']).type(torch.int32).sum(1) return array, valid_len def load_data_nmt(data_path,batch_size, num_steps, num_examples=600): """Return the iterator and the vocabularies of the translation dataset.""" text = preprocess_nmt(read_data_nmt(data_path)) source, target = tokenize_nmt(text, num_examples) src_vocab = Vocab(source, min_freq=2, reserved_tokens=['', '', '']) tgt_vocab = Vocab(target, min_freq=2, reserved_tokens=['', '', '']) src_array, src_valid_len = build_array_nmt(source, src_vocab, num_steps) tgt_array, tgt_valid_len = build_array_nmt(target, tgt_vocab, num_steps) data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len) data_iter = load_array(data_arrays, batch_size) return data_iter, src_vocab, tgt_vocab data_path = '../data/fra-eng/fra.txt' train_iter, src_vocab, tgt_vocab = load_data_nmt(data_path,batch_size=2, num_steps=8) for X, X_valid_len, Y, Y_valid_len in train_iter: print('Batch of 2 sentences:') print('X:', X.type(torch.int32)) print('valid lengths for X:', X_valid_len) print('Y:', Y.type(torch.int32)) print('valid lengths for Y:', Y_valid_len) break # - print(src_vocab.token_to_idx) print(tgt_vocab.token_to_idx) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1: Cell physiology # ## This notebook shows the code and calculations used to determine: # ### - Cell volume # ### - Chlorophyll content (a and c) # # (Table 1 in the manuscript) ## Required packages import pandas as pd import numpy as np import csv import math as m # ## Cell volumes # Cell length and width were determined using the straight-line tool in ImageJ (Schindelin J, Rueden CT, Hiner MC & Eliceiri KW (2015) The ImageJ ecosystem: An open platform for biomedical image analysis. 
Mol Reprod Dev 82: 518–529) ## Representative image of LL acclimated cells from IPython.display import Image Image("1_Pt_LL_acclim.jpg") # + ## Read in the cell dimensions cell_dim = pd.read_csv('1_cell_measurements.csv') ## Pull out the High Light acclimated cell measurements HL_vols = list() HL_dims = cell_dim[['HL_len','HL_wid']] ## Iterate over the length and width measurements to determine the cell volumes for i in HL_dims.index: ## Get the volume of the core ellipse ### Semi-major axis: (total length - 2x width)/2 smaj = (HL_dims.iloc[i]['HL_len']-2*HL_dims.iloc[i]['HL_wid'])/2 ### Semi-minor axis: (width/2) smin = HL_dims.iloc[i]['HL_wid']/2 ellipse = (4./3.)*np.pi*(smaj*(smin*smin)) ### Get the volume of the cones cones = (1./3.)*np.pi*(np.power((HL_dims.iloc[i]['HL_wid']/4),2)+((HL_dims.iloc[i]['HL_wid']/4)*1)+1)*(HL_dims.iloc[i]['HL_wid']) HL_vols.append(ellipse+(2*cones)) ## Pull out the Low Light acclimated cell measurements LL_vols = list() LL_dims = cell_dim[['LL_len','LL_wid']] LL_dims.dropna(inplace=True) ## Iterate over the length and width measurements to determine the cell volumes for i in LL_dims.index: ## Get the volume of the core ellipse ### Semi-major axis: (total length - 2x width)/2 smaj = (LL_dims.iloc[i]['LL_len']-2*LL_dims.iloc[i]['LL_wid'])/2 ### Semi-minor axis: (width/2) smin = LL_dims.iloc[i]['LL_wid']/2 ellipse = (4./3.)*np.pi*(smaj*(smin*smin))\ ### Get the volume of the cones cones = (1./3.)*np.pi*(np.power((LL_dims.iloc[i]['LL_wid']/4),2)+((LL_dims.iloc[i]['LL_wid']/4)*1)+1)*(LL_dims.iloc[i]['LL_wid']) LL_vols.append(ellipse+(2*cones)) print("HL Mean: ", np.floor(np.mean(HL_vols))) print("HL Std: ", np.floor(np.std(HL_vols))) print("LL Mean: ", np.floor(np.mean(LL_vols))) print("LL Std: ", np.floor(np.std(LL_vols))) # - # ## Chlorophyll content # # Based on Ritchie 2008 PHOTOSYNTHETICA 46 (1): 115-126, 2008 # # 5 mL of cells were concentrated and resuspended in 2 mL MeOH # # So [chl] in ug/cell = ug_chl/mL * 2 mL MeOH / (1e6 * total cells) ## Cell densities (total cells, in millions) # High light acclimated chl_cell={'Pt_chl_F1' : 52.5,# High light acclimated 'Pt_chl_F2' : 44.0, # High light acclimated 'Pt_chl_F3' : 38.7, # High light acclimated 'Pt_chl_F4' : 66.0, # High light acclimated 'Pt_chl_F5' : 22.0, # Low light acclimated 'Pt_chl_F6' : 21.5, # Low light acclimated 'Pt_chl_F7' : 26.0} # Low light acclimated # + ## Read in and plot the data import matplotlib.pyplot as plt import seaborn as sns sns.set() ## Read in the blank subtracted data Pt_chl = pd.read_excel('1_Cell_Physiology.xlsx',sheet_name='Chlorophyll') x = Pt_chl.wavelength.values fig, axes = plt.subplots(4, 4, figsize=(10, 7)) blank = Pt_chl['f/2 blank'].values samples = [i for i in Pt_chl.columns if i != 'wavelength'] start = 1 for s in samples: # get values for the sample vals = [float(val) for val in Pt_chl[s].values] plt.subplot(4,4,start) plt.plot(x,vals,'b') plt.title(s) start = start+1 plt.tight_layout() plt.show() # - # ## The plots above are total absorbace (A) per cm pathlength. # ### As the pathlength was 1 cm, the values are total absorbance. 
# ### Next, we will use the chlorophyll extinction coefficients to determine concentration # + ## Define functions for determining chlorophlly a,c, total def calc_chla(sample_ID,chl_DF): # Pull the absorbance data for the desired sample OD_list = [750,665,652,632] abs_values = dict() for o in OD_list: abs_values[o] = float(chl_DF[chl_DF['wavelength']==o][sample_ID].values[0]) chla = (16.4351*(abs_values[665]-abs_values[750]))-(6.4151*(abs_values[652]-abs_values[750]))-(3.2416*(abs_values[632]-abs_values[750])) return chla def calc_chlc(sample_ID,chl_DF): # Pull the absorbance data for the desired sample OD_list = [750,665,652,632] abs_values = dict() for o in OD_list: abs_values[o] = float(chl_DF[chl_DF['wavelength']==o][sample_ID].values[0]) chlc = (-1.5492*(abs_values[665]-abs_values[750]))-(12.8087*(abs_values[652]-abs_values[750]))+(34.2247*(abs_values[632]-abs_values[750])) return chlc def calc_chl(sample_ID,chl_DF): # Pull the absorbance data for the desired sample OD_list = [750,665,652,632] abs_values = dict() for o in OD_list: abs_values[o] = float(chl_DF[chl_DF['wavelength']==o][sample_ID].values[0]) total_chl = (1.0015*(abs_values[665]-abs_values[750]))+(12.9241*(abs_values[652]-abs_values[750]))+(27.9603*(abs_values[632]-abs_values[750])) return total_chl # + ## Calculate the chlorophyll content ### Units are grams chlorophyll per m3 of methanol samples = [i for i in Pt_chl.columns if i != 'wavelength'] chl_values = pd.DataFrame(index=['Chl_total','Chla','Chlc']) for s in samples: chl_values[s] = [calc_chl(s,Pt_chl),calc_chla(s,Pt_chl),calc_chlc(s,Pt_chl)] print(chl_values) # - # ## These values are in units of g m^-3 MeOH # ### g/m3 converted to pg/mL : 1e12/1e6 = 1e6 # ### Next, normalize by total cells and multiply by 2 since there is 2 mL of MeOH ## Take the flask cell data from the above dictionary for fl,cell in chl_cell.items(): df_cols = [col for col in chl_values if fl in col] # Get the right column for c in df_cols: temp_data = chl_values[c].values/(cell*1e6)*2*1e6 # Normalize by cell count, multiply by 2mL MeOH, convert g/m3 to pg/cell chl_values[c]=temp_data # Write to DataFrame # + ## Print the mean and std for HL and LL acclimated cells ### Separate the HL and LL flasks HL_fl = ['F1','F2','F3','F4'] ## HL flasks LL_fl = ['F5','F6','F7'] ## LL flasks HL_col = [] for f in HL_fl: HL_col= HL_col+[c for c in chl_values if f in c] ## Separate the DataFrame HL_chl = chl_values[HL_col] print('HL mean chla: ',np.around(HL_chl.loc['Chla'].mean(),3), 'picograms per cell') print('HL std chla: ',np.around(HL_chl.loc['Chla'].std(),3), 'picograms per cell') print('HL mean chlc: ',np.around(HL_chl.loc['Chlc'].mean(),3), 'picograms per cell') print('HL std chlc: ',np.around(HL_chl.loc['Chlc'].std(),3), 'picograms per cell') LL_col = [] for f in LL_fl: LL_col= LL_col+[c for c in chl_values if f in c] ## Separate the DataFrame LL_chl = chl_values[LL_col] print('LL mean chla: ',np.around(LL_chl.loc['Chla'].mean(),3), 'picograms per cell') print('LL std chla: ',np.around(LL_chl.loc['Chla'].std(),3), 'picograms per cell') print('LL mean chlc: ',np.around(LL_chl.loc['Chlc'].mean(),3), 'picograms per cell') print('LL std chlc: ',np.around(LL_chl.loc['Chlc'].std(),3), 'picograms per cell') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="n6JMkNPuIgT1" # # 
Enhanced-Analytic-System-for-Smart-University-Assistance # Name - # Roll No - 1729048 # Project Name - Be-Friend # Github Repo - [Link](https://github.com/rahulbordoloi/Enhanced-Analytic-System-for-Smart-University-Assistance/) # Email - , # Language - Python # Project is Done on Google Colab. # + [markdown] colab_type="text" id="jA9RNdbzIjZL" # * Libraries Pre-requisites - [requirements.txt](https://drive.google.com/file/d/1RmyCxSOJBOnDc-I3Xn8a_laz58f1pi4b/view?usp=sharing) # # * Download Pre-loaded Model - [Pickle Link](https://drive.google.com/file/d/1jRVVnEVPGb2_6UXqfTjOg-MpzqolknZz/view?usp=sharing) # # # To install , download the file and run - # ``` # # # !pip install -r requirements.txt # ``` # * RAM of 8GB is preferred if run on Local. # # # # # # # + [markdown] colab_type="text" id="YtMOtMtBIjZL" # # 1. Import Dataset and Libraries # + colab_type="code" id="jFU_GbOQIjZL" colab={} from google.colab import drive drive.mount('/content/drive') # + id="CwiSRcAMJ1_f" colab_type="code" colab={} import warnings warnings.filterwarnings("ignore") # + colab_type="code" id="T_ytc8G8IjZM" colab={} import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # + colab_type="code" id="aW1Sbm7GIjZN" colab={} train = pd.read_csv('/content/drive/My Drive/Minor Final/train.csv', error_bad_lines=False, header=0, skiprows=[0], encoding='ISO-8859-1') # + id="TrZ5XButOXbo" colab_type="code" colab={} test = pd.read_csv('/content/drive/My Drive/Minor Final/test.csv', error_bad_lines=False, header=0, skiprows=[0], encoding='ISO-8859-1') # + id="DhCBu341OBx4" colab_type="code" colab={} train.head(2) # + colab_type="code" id="U2c0Mgc4IjZQ" colab={} print(train.shape,test.shape) # + [markdown] colab_type="text" id="nHHUf5-MIjZR" # # 2. Working on Train [Feature Engg and Selection] # + [markdown] colab_type="text" id="Vh6YmqBuIjZS" # Visualising and Dropping off the Completely Null Columns # + colab_type="code" id="Q0eEaiulIjZS" colab={} #how many values are missing in each column. train.isnull().sum() # + [markdown] id="1SlbKp3PO7Mh" colab_type="text" # Result : There's not even a single null value in the whole dataset.
    # Therefore we can except a plain heatmap. # + colab_type="code" id="lOSUTdUqIjZV" colab={} #visualizing and observing the null elements in the dataset plt.figure(figsize=(10,10)) sns.heatmap(train.isnull(), cbar = False, cmap = 'YlGnBu') #ploting missing data #cbar, cmap = colour bar, colour map # + colab_type="code" id="iATWgb4QIjZX" colab={} train # + [markdown] id="aaVSX2a4Pfom" colab_type="text" # To check for duplicate columns # + colab_type="code" id="WQ09xoqGIjZj" colab={} x = set() # set as to store only the unique values for i in range(train.shape[1]): c1 = train.iloc[:, i] for j in range(i + 1, train.shape[1]): c2 = train.iloc[:, j] if c1.equals(c2): x.add(train.columns.values[j]) for col in x: print(col) # + [markdown] id="9wMNtLqdPk15" colab_type="text" # Result : All the columns are unique in nature # + colab_type="code" id="S9UX1IPIIjZk" colab={} #gives out no of unqiue elements per column train.nunique() # + colab_type="code" id="t8ZeGJJ3IjZl" colab={} train.info() # + colab_type="code" id="a7uUXCz0IjZm" colab={} train.describe() # + colab_type="code" id="EkUUztUNIjZn" colab={} train.columns # + colab_type="code" id="_CDfH0XpIjZo" colab={} train.drop(['Candidate ID','Name','Number of characters in Original Name','Month of Birth','Year of Birth'], # dropping unnecessary columns inplace = True, axis = 1) # + id="iXkmLrm_dO46" colab_type="code" colab={} train.drop(['State (Location)'], inplace = True, axis = 1) # because it isn't necessary and cause more ambiguity in our data. # + colab_type="code" id="4-2W0kWtIjZp" colab={} train.head(5) # + id="Eq7CcP5yf85F" colab_type="code" colab={} train[train['Degree of study'] == 'X'].shape[0] # + [markdown] id="tFShHFStg5hu" colab_type="text" # Therefore, Degree of Study is a Quasi-Constant Column # + id="Mo5mCJSBhO7c" colab_type="code" colab={} number_of_occ_per = 653/train.shape[0] * 100 print(str(number_of_occ_per) + '%') # + colab_type="code" id="V5gXM5obiicn" colab={} #encoding categorical data Gender from sklearn.preprocessing import LabelEncoder l = LabelEncoder() x=pd.DataFrame() x.loc[:,'Degree'] = l.fit_transform(train.loc[:,'Degree of study']) # + id="AVJGxlmlkAMI" colab_type="code" colab={} x # + id="qisOUQXJiuUp" colab_type="code" colab={} from sklearn.feature_selection import VarianceThreshold qconstant_filter = VarianceThreshold(threshold=0.01) qconstant_filter.fit(x) len(x.columns[qconstant_filter.get_support()]) # len(train.columns[constant_filter.get_support()]) # + [markdown] id="WwPAU0bhkmSp" colab_type="text" # Therefore, Degree of study must be dropped. 
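# + [markdown]
# For context on the quasi-constant check above: VarianceThreshold keeps only features whose
# variance exceeds the chosen threshold, so a column dominated by a single value gets filtered
# out. A minimal standalone illustration on toy data (not this dataset), shown before the
# notebook drops the column in the next cell:

# +
# Toy illustration of quasi-constant filtering: the almost-constant column has
# variance of about 0.0099 (below 0.01) and is flagged, while the varying column is kept.
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

toy = pd.DataFrame({
    'almost_constant': [0] * 99 + [1],
    'varies': list(range(100)),
})
toy_filter = VarianceThreshold(threshold=0.01)
toy_filter.fit(toy)
print(toy.columns[toy_filter.get_support()].tolist())  # ['varies']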
# + id="439P7qxmklRI" colab_type="code" colab={} train.drop(['Degree of study'], inplace = True, axis = 1) # + colab_type="code" id="LMFgyNVOIjZs" colab={} train.shape # + id="TELOHqrRSL_z" colab_type="code" colab={} train['10th Completion Year'].unique(), train['Year of Completion of college'].unique(), train['Quantitative Ability 1'].unique(), train['Domain Skills 1'].unique(), train['Analytical Skills 1'].unique() # + id="KQtuzErZFYx3" colab_type="code" colab={} train.replace(to_replace='MD', value = float(0.0), inplace = True) # + id="EtMjOT0mVBib" colab_type="code" colab={} train['Quantitative Ability 1'] = pd.to_numeric(train['Quantitative Ability 1']) train['Domain Skills 1'] = pd.to_numeric(train['Domain Skills 1']) train['Analytical Skills 1'] = pd.to_numeric(train['Analytical Skills 1']) # + id="2cWYqjDBUe0M" colab_type="code" colab={} train['Performance'].unique(), train['Quantitative Ability 1'].unique() # + id="iIttxZXK1TdU" colab_type="code" colab={} train['Performance'].replace(to_replace=0.0, value = 'MD', inplace = True) # + id="kQVSbdKg1aUr" colab_type="code" colab={} train['Performance'].unique() # + id="0LkQg6E1Ui_T" colab_type="code" colab={} train.info() # + [markdown] colab_type="text" id="x3-_EQa5IjZ0" # Checking and learning about the train set's skewness. # + colab_type="code" id="l7JWJrekIjZ1" colab={} #checking the skewness of the train set train.skew(axis = 0, skipna = True) # + [markdown] colab_type="text" id="wogFfIpeIjZ2" # Inference : some columns seems to be a lightly left-skewed.
    # They are: 10th percentage, English 4, Quantitative Ability 2, Quantitative Ability 3, Domain Skills 2, Domain Test 4, Analytical Skills 1, Analytical Skills 2
    # These columns need to be later transformed. # + [markdown] colab_type="text" id="yYVQ_uJEIjZ4" # Checking for Correlation # + id="MO33cDdv21X3" colab_type="code" colab={} # gives out the columns which are highly correlated amongst each other def correlation(train, threshold = 0.90): corr_col = set() # Set of all the names of deleted columns corr_m = train.corr() for i in range(len(corr_m.columns)): for j in range(i): if (corr_m.iloc[i, j] >= threshold) and (corr_m.columns[j] not in corr_col): col = corr_m.columns[i] # getting the name of column corr_col.add(col) return corr_col print(correlation(train,0.9)) # + [markdown] id="z0X4GSIN4HBV" colab_type="text" # Inference : There is no correlated columns # + colab_type="code" id="kvw4VSOpyEmr" colab={} train.columns # + [markdown] colab_type="text" id="t4GIAUOAIjaJ" # # 3. Visualizations # + colab_type="code" id="EhmTh0jzIjaJ" colab={} #plotting pairwise relationships in train sns.pairplot(train) # + colab_type="code" id="8dPq-7UxIjaK" colab={} # Distribution Plot and ‘Boxplot of payment_amount’ to learn about its distribution and also, to know about outliers if present. plt.figure(figsize=(8,6)) plt.subplot(1,2,1) plt.title('English Marks Distribution Plot') sns.distplot(train['English 1']) plt.subplot(1,2,2) plt.title('English Marks Spread') sns.boxplot(y=train['English 1']) plt.show() plt.tight_layout() plt.figure(figsize=(8,6)) plt.subplot(1,2,1) plt.title('Quantitative Ability Distribution Plot') sns.distplot(train['Quantitative Ability 1']) plt.subplot(1,2,2) plt.title('Quantitative Ability Spread') sns.boxplot(y=train['Quantitative Ability 1']) plt.show() plt.tight_layout() plt.figure(figsize=(8,6)) plt.subplot(1,2,1) plt.title('Domain Skills Distribution Plot') sns.distplot(train['Domain Skills 1']) plt.subplot(1,2,2) plt.title('Domain Skills Spread') sns.boxplot(y=train['Domain Skills 1']) plt.show() plt.tight_layout() plt.figure(figsize=(8,6)) plt.subplot(1,2,1) plt.title('Analytical Skills Distribution Plot') sns.distplot(train['Analytical Skills 1']) plt.subplot(1,2,2) plt.title('Analytical Skills Spread') sns.boxplot(y=train['Analytical Skills 1']) plt.show() plt.tight_layout() # + colab_type="code" id="csCUUCe0IjaL" colab={} # Checking out the distribution of variables across different variables in train set. plt.figure(figsize=(25, 6)) df = pd.DataFrame(train.groupby(['Performance'])['English 1'].mean().sort_values(ascending = False)) df.plot.bar() plt.title('Performance vs English') plt.show() df = pd.DataFrame(train.groupby(['Performance'])['Quantitative Ability 1'].mean().sort_values(ascending = False)) df.plot.bar() plt.title('Performance vs Quantitative Ability') plt.show() df = pd.DataFrame(train.groupby(['Performance'])['Domain Skills 1'].mean().sort_values(ascending = False)) df.plot.bar() plt.title('Performance vs Domain Skills') plt.show() df = pd.DataFrame(train.groupby(['Performance'])['Analytical Skills 1'].mean().sort_values(ascending = False)) df.plot.bar() plt.title('Performance vs Analytical Skills') plt.show() # + colab_type="code" id="Nc4ixJivIjaM" colab={} train.columns # + [markdown] colab_type="text" id="RSUNWrGMIjaO" # # 4. 
Mapping Test and Train # + colab_type="code" id="e2IHOxL3IjaO" colab={} #copying the columns from train cols = train.columns.to_list() cols # + colab_type="code" id="Ihvo3eo_IjaP" colab={} print(test.shape , train.shape) # + colab_type="code" id="67iIGXxJIjaP" colab={} #mapping the features test = test.reindex(columns=cols) test.shape # + [markdown] colab_type="text" id="DrcY9dxqIjaQ" # # 5. Data Preprocessing # + colab_type="code" id="X2HPnoIyIjaQ" colab={} train.columns # + [markdown] colab_type="text" id="xGg2gyazIjaR" # Encoding categorical variables # + colab_type="code" id="7Ynt8xo4IjaS" colab={} # !pip install --upgrade category_encoders # + id="hpe76GDOfUN2" colab_type="code" colab={} #encoding categorical data Gender from sklearn.preprocessing import LabelEncoder l = LabelEncoder() train.loc[:,'Gender'] = l.fit_transform(train.loc[:,'Gender']) # train.loc[:, '12th Completion year'] = l.fit_transform(train.loc[:, '12th Completion year']) # train.loc[:, '10th Completion Year'] = l.fit_transform(train.loc[:, '10th Completion Year']) # + id="6qfokRjISkmh" colab_type="code" colab={} train.loc[:,'Performance']=l.fit_transform(train.loc[:,'Performance']) # + colab_type="code" id="q6hzqje8IjaT" colab={} from category_encoders import TargetEncoder encoder = TargetEncoder() train['Specialization in study'] = encoder.fit_transform(train['Specialization in study'], train['Performance']) # train['10Y'] = encoder.fit_transform(train['10th Completion Year'], train['Performance']) # train['12Y'] = encoder.fit_transform(train['12th Completion year'], train['Performance']) # + id="NomyNERLTUqE" colab_type="code" colab={} encoder = TargetEncoder() train['Year of Completion of college'] = encoder.fit_transform(train['Year of Completion of college'], train['Performance']) # + colab_type="code" id="f27cRCKuTxLL" colab={} encoder = TargetEncoder() train['12th Completion year'] = encoder.fit_transform(train['12th Completion year'], train['Performance']) # + colab_type="code" id="1oOotmgeTxRU" colab={} encoder = TargetEncoder() train['10th Completion Year'] = encoder.fit_transform(train['10th Completion Year'], train['Performance']) # + id="3xR3rpNZ6e_K" colab_type="code" colab={} train.head(5) # + colab_type="code" id="WflyGKP3IjaV" colab={} train.describe() # + colab_type="code" id="AfuGVaQ5IjaW" colab={} #Correlation using heatmap plt.figure(figsize = (10, 10)) hm = train.corr().where(np.tril(np.ones(train.corr().shape)).astype(np.bool)) sns.heatmap(hm, annot = True, cmap="YlGnBu") plt.show() # + id="TMnQR1Yhzthz" colab_type="code" colab={} train.columns # + [markdown] colab_type="text" id="y_SnH-C0IjaW" # Splitting train set into x and y i.e independent variable vector and dependent variable vector. # + colab_type="code" id="WLdb3tAkIjaX" colab={} x = train.drop(['Performance'], axis = 1) y = train.loc[:,'Performance'] # + colab_type="code" id="FJL9O2MDIjaX" colab={} print(x.shape,y.shape) # + colab_type="code" id="1m4zU_JrIjaY" colab={} y = y.values.reshape(-1,1) # + colab_type="code" id="MNq1g_KiIjaZ" colab={} #plotting distribution plot of newly encoded variables plt.figure(figsize=(8,6)) plt.subplot(1,2,1) sns.distplot(x['Year of Completion of college']) plt.subplot(1,2,2) sns.distplot(x['10th Completion Year']) plt.figure(figsize=(8,6)) plt.subplot(1,2,1) sns.distplot(x['12th Completion year']) plt.subplot(1,2,2) sns.distplot(x['Specialization in study']) # + [markdown] id="lIKgSadxVhTQ" colab_type="text" # Inference : It is highly skewed. 
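# + [markdown]
# Before the notebook's own transformation cells, a standalone illustration of how the
# candidate transforms compare on synthetic positively-skewed data (made-up numbers, not
# the train set): square-root and log both reduce the skew, while Box-Cox, which requires
# strictly positive values and returns (transformed_values, lambda), usually brings it
# closest to zero.

# +
# Synthetic example only: compare the skewness left behind by sqrt, log and Box-Cox.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
skewed = pd.Series(rng.lognormal(mean=0.0, sigma=1.0, size=1000))

print('raw    :', round(skewed.skew(), 3))
print('sqrt   :', round(np.sqrt(skewed).skew(), 3))
print('log    :', round(np.log(skewed).skew(), 3))
print('boxcox :', round(pd.Series(stats.boxcox(skewed)[0]).skew(), 3))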
# + colab_type="code" id="KyWNQCTGVaaD" colab={} #checking the skewness of the train set x.skew(axis = 0, skipna = True) # + [markdown] colab_type="text" id="GDaVbRCgIjaZ" # Reducing skewness of the features according to their skewness amount. # + colab_type="code" id="LzpgJjU1IjaZ" colab={} #trying square-root and log transformations crim = np.log(x['Year of Completion of college']) crim_s = np.sqrt(x['Year of Completion of college']) print(crim.skew(),crim_s.skew()) # + colab_type="code" id="bcX-ll8vIjaa" colab={} #Observing the distribution plot of ‘Year of Completion of college’ after boxcox transformation. from scipy import stats crim_b = stats.boxcox(x['Year of Completion of college'])[0] pd.Series(crim_b).skew() sns.distplot(crim_b); # + [markdown] id="MV6I-tuHcNzW" colab_type="text" # Boxcox will be the best transformation for 'Year of Completion of college' # + colab_type="code" id="CYt2QyhRIjab" colab={} x.skew() # + colab_type="code" id="c0G9OFAvIjac" colab={} a = np.sqrt(x['12th Completion year']) #square-root transformation b = np.log(x['12th Completion year']) #logarithimic transformation print(a.skew(),b.skew()) a = np.sqrt(x['10th Completion Year']) #square-root transformation b = np.log(x['10th Completion Year']) #logarithimic transformation print(a.skew(),b.skew()) # + colab_type="code" id="MHdfa4rCIjac" colab={} #Updating the required pandas series. a = pd.Series(stats.boxcox(x['10th Completion Year'])[0]) b = pd.Series(stats.boxcox(x['12th Completion year'])[0]) c = pd.Series(stats.boxcox(x['Specialization in study'])[0]) d = pd.Series(stats.boxcox(x['Year of Completion of college'])[0]) print(a.skew(), b.skew(), c.skew(), d.skew()) # + colab_type="code" id="tgyAtwtHIjad" colab={} x['10th Completion Year'] = a x['12th Completion year'] = b x['Specialization in study'] = c x['Year of Completion of college'] = d # + id="k4MMLJsub1qP" colab_type="code" colab={} x.skew() # + colab_type="code" id="uS9zahNRIjae" colab={} x.head(1) # + colab_type="code" id="_Zvx6cWaIjam" colab={} x.shape # + colab_type="code" id="gx50pUc9Ijan" colab={} x['12th Completion year'].describe() # + [markdown] colab_type="text" id="yupaiGfYIjap" # Standard Scaling all the features to come under a common range. # + colab_type="code" id="Mh4ZkFaQIjap" colab={} from sklearn.preprocessing import StandardScaler sc= StandardScaler() x = sc.fit_transform(x) # + colab_type="code" id="6rDGYsH5Ijaq" colab={} x # + colab_type="code" id="x6Y81NDvIjar" colab={} y # + [markdown] colab_type="text" id="oHC_vkk8Ijas" # # 6. Splitting Train into x_train/y_train # + id="NdvokqCynLaL" colab_type="code" colab={} y.shape # + colab_type="code" id="DvoefmDkIjas" colab={} from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25) # + colab_type="code" id="NppjVrXpIjat" colab={} print(x_train.shape,y_train.shape) # + colab_type="code" id="VpWH3RH9Ijat" colab={} print(x_test.shape,y_test.shape) # + [markdown] colab_type="text" id="5_5eu-zZIjau" # #7. Now, Model Testing! # + [markdown] colab_type="text" id="ojQj8NWVIjau" # **1. 
Logistic Regression** # + colab_type="code" id="nR58bQIWIjau" colab={} # fitting simple linear regression to the training set from sklearn.linear_model import LogisticRegression classifier = LogisticRegression(random_state=0) classifier.fit(x_train, y_train) # + colab_type="code" id="j86KN2kiSCfw" colab={} # predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] id="-xEECPYGgT_4" colab_type="text" # Checking Accuracies # + colab_type="code" id="RhO8K5LbSCfy" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + id="FPVzN0GMjcNZ" colab_type="code" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="NUrz_GT6Yjdq" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="GzFEKzNQkmw_" # **2. Random Forest** # + colab_type="code" id="jJWyjA-ikmxB" colab={} #fitting random forest classifier to the training set from sklearn.ensemble import RandomForestClassifier as rfc classifier = rfc(n_estimators=100,criterion='entropy',random_state=0) classifier.fit(x_train, y_train) # + colab_type="code" id="_j5JOYXJpNgv" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="LWU0cg5RkmxC" # Checking Accuracies # + colab_type="code" id="pCnklzoZkmxD" colab={} classifier.score(x_test, y_test) # + colab_type="code" id="hWcw1CCKkmxF" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="KXBkFlWDkmxK" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="QQr6JMLukmxO" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="Lv7eOVZFpldL" # **3. Kernel - SVM** # + colab_type="code" id="vcA9hLg8pldM" colab={} #fitting kernel SVM to the training set from sklearn.svm import SVC classifier = SVC(kernel='rbf', random_state=0) classifier.fit(x_train, y_train) # + colab_type="code" id="iGGEmqS0pldN" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="Lk1H1kNspldR" # Checking Accuracies # + colab_type="code" id="TNwIfxn9pldR" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="KBxh_yLipldU" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="A3d4HegxpldX" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="vVNB972jqJez" # **4. 
Linear - SVM** # + colab_type="code" id="qyv_nvg5qKCD" colab={} # fitting kernel SVM to the training set from sklearn.svm import SVC classifier = SVC(kernel='linear', random_state=0) classifier.fit(x_train, y_train) # + colab_type="code" id="BPnFj_ptqKCE" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="Tu4JBCKCqKCG" # Checking Accuracies # + colab_type="code" id="cQh-nVWLqKCG" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="flNGrZkGqKCI" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="jNDZHrVmqKCK" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="a-OIVjqqqJtd" # **5. K-NN** # + colab_type="code" id="P9YbA9F2qJte" colab={} #fitting knn to the training set from sklearn.neighbors import KNeighborsClassifier as knc classifier=knc(n_neighbors=10,metric='minkowski', p = 2) classifier.fit(x_train, y_train) # + colab_type="code" id="kzXoK6TkqJtf" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="qwffZTkZqJtg" # Checking Accuracies # + colab_type="code" id="OtzUfJ4aqJtg" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="iTcx2WA9qJti" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="qHDJG2-HqJtk" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="YR3IiUvEqKCC" # **6. Decision Tree** # + colab_type="code" id="SUAluYI2qJe0" colab={} #fitting decision tree classifier to the training set from sklearn.tree import DecisionTreeClassifier as dtc classifier = dtc(criterion='entropy' , random_state=0) classifier.fit(x_train, y_train) # + colab_type="code" id="SZmJSmWnqJe6" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="3QpF3xYKqJe8" # Checking Accuracies # + colab_type="code" id="RKs-uwlXqJe8" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="Syc4YY4hqJe-" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="pzd1R0k0qJfA" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="UrSGd67DrgEK" # **7. 
Naive Bayes** # + colab_type="code" id="_0nk7euMrgEL" colab={} #fitting naive bayes to the training set from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(x_train, y_train) # + colab_type="code" id="vAq6XGmqrgEM" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="0CkWQEDfrgEP" # Checking Accuracies # + colab_type="code" id="cZYPyK3VrgEP" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="Euv8VFXxrgER" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="xNbEdu2ergEU" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="hzJFtHsZsUyb" # **8. XGBoost Classifier** # + colab_type="code" id="mgZ61NhmsUyc" colab={} #fitting XGBoost to the Training Set from xgboost import XGBClassifier classifier=XGBClassifier() classifier.fit(x_train,y_train) # + colab_type="code" id="rp0VUMK-sUyi" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="SuMgmVuksUyk" # Checking Accuracies # + colab_type="code" id="amoRMRDSsUyk" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="2_fNRIA7sUyo" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="nly8fw_KsUyr" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="GKnpiryhu-F2" # **9. GradientBoosting Classifier** # + colab_type="code" id="UZek8h6Su-F2" colab={} #fitting XGBoost to the Training Set from sklearn.ensemble import GradientBoostingClassifier classifier = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1) classifier.fit(x_train, y_train) # + colab_type="code" id="mrttc1IRu-F4" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="A-ltyB2Qu-F7" # Checking Accuracies # + colab_type="code" id="_PJxZHTeu-F8" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="y0a8X3vAu-F9" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="P8OLxso9u-F_" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="1zNRqf29vA4U" # **10. 
AdaBoost Classifier** # + colab_type="code" id="8Lovw1dcvA4V" colab={} #fitting XGBoost to the Training Set from sklearn.ensemble import AdaBoostClassifier dt = dtc() classifier = AdaBoostClassifier(n_estimators = 100, base_estimator = dt, learning_rate = 1) classifier.fit(x_train,y_train) # + colab_type="code" id="NuV0xAWYvA4X" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="p2wBVObmvA4Y" # Checking Accuracies # + colab_type="code" id="Rv6s4RBvvA4Y" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="qCSxXgy3vA4c" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="Hq1wipf3vA4e" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="7ORgTR_wxF25" # **11. CatBoost Classifier** # + id="VATZrvaLxKuo" colab_type="code" colab={} # !pip install catboost # + colab_type="code" id="VmQcpYovxF26" colab={} #fitting CatBoost to the Training Set from catboost import CatBoostClassifier classifier = CatBoostClassifier(iterations=100, learning_rate=0.01) classifier.fit(x_train,y_train, eval_set = (x_test, y_test)) # + colab_type="code" id="Z9dpYysixF2-" colab={} #predicting the test set results y_pred=classifier.predict(x_test) # + [markdown] colab_type="text" id="DJFEn_uPxF3A" # Checking Accuracies # + colab_type="code" id="KtKGYjDPxF3A" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') # + colab_type="code" id="Q3GS2R7pxF3D" colab={} print(classification_report(y_test, y_pred)) # + colab_type="code" id="bo_znuiOxF3F" colab={} #applying k-fold cross validation from sklearn.model_selection import cross_val_score as cvs accuracies = cvs(estimator=classifier,X=x_train,y=y_train,cv=10) print(accuracies.mean()) print(accuracies.std()) # + [markdown] colab_type="text" id="eDoNlbY3IjbD" # **12. Light GBM** # + colab_type="code" id="rbooI6RhIjbD" colab={} # !pip install lightgbm # + colab_type="code" id="WpOPh3A1IjbE" colab={} import lightgbm as lgbm # from sklearn import preprocessing # + colab_type="code" id="veOuOZS7IjbG" colab={} # kfold = KFold(n_splits=5, random_state = 0, shuffle = True) model_lgb = lgbm.LGBMClassifier(n_iterations =50, silent = False) model_lgb.fit(x_train, y_train) model_lgb.score(x_test, y_test) # + [markdown] id="wTSSaAezfgUF" colab_type="text" # Therefore, the accuracies are -
# * Logistic Regression: 0.67
# * Random Forest: 0.68
# * Kernel SVM: 0.69
# * Linear SVM: 0.71
# * KNN: 0.67
# * Decision Tree: 0.50
# * Naive Bayes: 0.43
# * XGBoost: 0.66
# * Gradient Boosting: 0.60
# * AdaBoost: 0.50
# * LightGBM: 0.65
# * CatBoost: 0.65
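# + [markdown]
# The scores above were collected by re-running the same fit / evaluate cells once per
# model. A compact way to reproduce the comparison in a single pass is sketched below
# (a minimal sketch, assuming the `x_train` / `y_train` split from section 6 and the
# same hyperparameters as the cells above; the xgboost / lightgbm / catboost models can
# be appended to the dictionary in the same way):

# +
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

candidates = {
    'logistic regression': LogisticRegression(random_state=0),
    'random forest': RandomForestClassifier(n_estimators=100, criterion='entropy', random_state=0),
    'kernel svm': SVC(kernel='rbf', random_state=0),
    'linear svm': SVC(kernel='linear', random_state=0),
    'knn': KNeighborsClassifier(n_neighbors=10, metric='minkowski', p=2),
    'decision tree': DecisionTreeClassifier(criterion='entropy', random_state=0),
    'naive bayes': GaussianNB(),
    'gradient boosting': GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1),
    # note: newer scikit-learn versions rename base_estimator to estimator
    'adaboost': AdaBoostClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, learning_rate=1),
}

# 10-fold cross-validated accuracy on the training split, as in the individual cells above
for name, clf in candidates.items():
    scores = cross_val_score(estimator=clf, X=x_train, y=y_train, cv=10)
    print('%-20s mean=%.3f std=%.3f' % (name, scores.mean(), scores.std()))
# -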
    # + [markdown] colab_type="text" id="jFMye-RjIjbP" # # So, our Best Model Selected according to its performance is Linear-SVM ! # + [markdown] colab_type="text" id="DaBQMxZ3IjbP" # # 8. Hyperparameter Tuning and Model Optimization # + [markdown] colab_type="text" id="kYiW7PxNIjbQ" # Using GridSearch for searching best hyperparameter. # Model: Linear-SVM # + colab_type="code" id="0tYW6WVsIjbQ" colab={} from sklearn.model_selection import GridSearchCV from sklearn.model_selection import StratifiedKFold # + id="Wr0-r7Al4BXp" colab_type="code" colab={} print(SVC().get_params().keys()) # + _cell_guid="c7496af3-31ed-4195-8797-7888f4410c3e" _uuid="31a5e026eac04f2edee67e09918ceddaf27ff629" colab_type="code" id="qdUq8aX-IjbQ" colab={} #using grid search method to find out the best groups of hyperparameters clas_cv = GridSearchCV(SVC(), {'C' : [1.0, 10, 100], "gamma" : ['scale'], 'kernel': ['linear'], 'kernel': ['linear'], 'random_state': [0], 'cache_size' : [200], 'degree': [3], 'coef0' : [0.0], 'decision_function_shape' : ['ovr'], 'tol' : [0.001], 'gamma' : ['scale'], 'max_iter' : [-1] }, scoring = 'accuracy', cv = 10, verbose=1) clas_cv.fit(x_train,y_train) # + id="_stSV4NOkms9" colab_type="code" colab={} best_parameters=clas_cv.best_params_ # + _cell_guid="cc978005-bea6-4ae8-a9ad-e7e7faef8f50" _uuid="2981044867820f15b04dc054b0d5a27acc76179b" colab_type="code" id="HoZ9bEzzIjbS" colab={} #dictionary of the best parameters clas_cv.best_params_ # + [markdown] _cell_guid="88574b66-b3c6-45f2-bf87-abb18714d34e" _uuid="85de70c233d62f08fe560b169f7465806c2dde54" colab_type="text" id="KY9yL4ZgIjbT" # Train data using Linear-SVM with best parameters # + _cell_guid="d7917d17-2f9a-48d3-97f6-4f579b24fb87" _uuid="dba01178f5fec3076f113d85f132828e59d6004e" colab_type="code" id="PYDnh4s_IjbT" colab={} c_svm = SVC(**clas_cv.best_params_) c_svm.fit(x_train,y_train) # + [markdown] _cell_guid="e19d3903-b989-4920-89d7-991573c6bf4a" _uuid="9917f23302d627e44ad847df8a7bf1cd27d75d4d" colab_type="text" id="mWXWwOfmIjbU" # Predicting the Results. # + _cell_guid="2c1f987b-4b3e-46fb-9e97-82aad2058b44" _uuid="a5660ec00c22ff2f55d51c26566b8490f3f248ed" colab_type="code" id="x8IsmZ0VIjbU" colab={} y_pred = c_svm.predict(x_test) y_pred # + [markdown] _cell_guid="6b7fd0fa-b746-4205-ac75-f5c6d6b28cf0" _uuid="5ec47c743cb6aed59901df918a26564fb0e74050" colab_type="text" id="5JgfK0QXIjbV" # Evaluating its Score # + colab_type="code" id="JEN3t_pVOStv" colab={} from sklearn.metrics import confusion_matrix, classification_report cm=confusion_matrix(y_test, y_pred) plt.figure(figsize = (5,5)) sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.ylabel('Truth') print(confusion_matrix(y_test, y_pred)) ; print("\n\n") # + id="QniwlIZhifrA" colab_type="code" colab={} print(classification_report(y_test, y_pred)) # + id="9u5PcPRgIfH1" colab_type="code" colab={} c_svm.score(x_test, y_test), c_svm.score(x_train, y_train) # + [markdown] colab_type="text" id="6axCZo0JIjbY" # # 9. Saving the Model # + colab_type="code" id="kMxOR0fqIjbZ" colab={} import pickle filename = 'amcat.sav' pickle.dump(c_svm, open(filename, 'wb')) # + id="1GipEsIZK5Up" colab_type="code" colab={} # to check the integrity loaded_model = pickle.load(open(filename,'rb')) result = loaded_model.score(x_test, y_test) print(result) # + [markdown] colab_type="text" id="OFP1s3ZuIjbZ" # # 10. 
Creating a ML Pipeline # + colab_type="code" id="sF-QHf1VIjbZ" colab={} from sklearn.pipeline import Pipeline # + [markdown] id="jT-Iehsi15uA" colab_type="text" # Pre-Processing # + id="rzZy8NrATGUL" colab_type="code" colab={} ## ingore from sklearn.compose import ColumnTransformer from sklearn.preprocessing import FunctionTransformer label_enc = ['Gender'] target_enc = ['Specialization in study', 'Year of Completion of college', '12th Completion year', '10th Completion Year'] boxcox_f = ['10th Completion Year', '12th Completion year', 'Specialization in study', 'Year of Completion of college'] preprocessor = ColumnTransformer( transformers=[ ('boxcox', FunctionTransformer(stats.boxcox, validate=False), boxcox_f), ('label', FunctionTransformer(LabelEncoder, validate=False), label_enc), ('target', FunctionTransformer(TargetEncoder, validate=False), target_enc), ('scale', StandardScaler())], remainder = 'passthrough') # + [markdown] id="Ciwm30Q72R7U" colab_type="text" # Modeling # + id="3rVGiYkcUeMR" colab_type="code" colab={} ## ingore log_reg = Pipeline(steps=[('preprocess', preprocessor), ('linear_svm', SVC()) ]).fit(x_train, y_train); # + colab_type="code" id="-P3xheFLIjba" colab={} pipe = Pipeline([('standard', StandardScaler()), #('boxcox'), stats.boxcox()), ('l-svm', SVC())]) # + colab_type="code" id="fETU3ZceIjbb" colab={} pipe.fit(x_train, y_train) # + colab_type="code" id="_iHX2DIAIjbb" colab={} score = pipe.score(x_test, y_test) print('Linear-SVM pipeline test accuracy: %.3f' % score) # + [markdown] id="iTpv_qcB1nZj" colab_type="text" # # 11. Generating Requirements File # + id="tvpVDLDj0BWB" colab_type="code" colab={} # !pip freeze > requirements.txt # + [markdown] colab_type="text" id="yBVggzRoIjbc" # # End. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="ongEoQjaTJQk" colab_type="text" # # Intstall coach # Just use pip # + id="sRS7Z_lfGrs6" colab_type="code" colab={} pip install rl_coach # + [markdown] colab_type="text" id="9qn7Qf9gHZLk" # # AI Week Workshop # + [markdown] colab_type="text" id="kEcvEIxqHZLt" # ### ***Add new environmen***t # # In this section we will implement the short corridor environment from Sutton & Barto Book. # # ![alt text](https://drive.google.com/uc?id=1rYLI9dC92sfpF0BVxVENF964MfWJkxZq) # # * Three non terminal states- The location of the agent # # * The observations are one-hot encoding of the states # * Actions are reversed in the second state # # # * Reward is -1 for each time step # # # # # # + [markdown] id="G2__PdzbNrPi" colab_type="text" # ##### ***Helper function*** # The following code snippet contains some defines and an one-hot encoding helper function. # + id="lnMACvXo_y3U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="54c9c4f8-cac8-4086-c9ce-a6f5860d7c56" # %%writefile short_corridor_env_helpper.py import numpy as np LEFT = 0 RIGHT = 1 START_STATE = 0 GOAL_STATE = 3 NUM_STATES = 4 REVERSE_STATE = 1 def to_one_hot(state): observation = np.zeros((NUM_STATES,)) observation[state] = 1 return observation # + [markdown] colab_type="text" id="RBxwrlSzHZLu" # ##### ***Write short corridor environment*** # Compete the following functions: # function and the step function # # 1. is_done - will return a boolean . True only at termination state # # 2. 
reset - Resets environment to initial state # 3. step - Returns the next observation, reward, and the boolean flag done # # # # # # * **complete code** # # + colab_type="code" outputId="02dae064-339e-4997-a111-b64fced6448e" id="uxwi5S1vHZLw" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile short_corridor_env.py import numpy as np import gym from gym import spaces from short_corridor_env_helpper import * class ShortCorridorEnv(gym.Env): def __init__(self): # Class constructor- Initializes class variables and sets initial state self.observation_space = spaces.Box(0, 1, shape=(NUM_STATES,)) self.action_space = spaces.Discrete(2) self.reset() def reset(self): ''' Resets the environment to start state ''' # Boolean. True only if the goal state is reached self.goal_reached = ??? # An integer representing the state. Number between zero and three self.current_state = ??? observation = to_one_hot(???) return observation def _is_done(self, current_state): ''' return done a Boolean- True only if we reached the goal state ''' # ??? return done def step(self, action): ''' Returns the next observation, reward, and the boolean flag done ''' if action ==LEFT: step = -1 elif action == RIGHT: step = ??? if self.current_state == REVERSE_STATE: ### Replace step = -1 with step = 1 and vise versa # ??? self.current_state += step self.current_state = max(0, self.current_state) observation = to_one_hot(self.current_state) reward = ??? done = self._is_done(self.current_state) return observation, reward, done, {} # + [markdown] colab_type="text" id="tO6bmhG1HZL0" # ##### ***Write preset to run existing agent on the new environment*** # *We will use the same preset from DQN example*. # # Since our environment is already using Gym API we are almost good to go. # # When selecting the environment parametes in the preset use **GymEnvironmentParameters** and pass the path of the environment source code using the level parameter # + colab_type="code" id="wVmJ2CfbHZL1" outputId="b0efca7e-119e-4f0c-9026-1c7bbe706796" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile short_corridor_dqn_preset.py from rl_coach.environments.gym_environment import GymEnvironmentParameters from rl_coach.filters.filter import NoInputFilter, NoOutputFilter from rl_coach.agents.dqn_agent import DQNAgentParameters from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager from rl_coach.graph_managers.graph_manager import SimpleSchedule from rl_coach.memories.memory import MemoryGranularity #################### # Graph Scheduling # #################### schedule_params = SimpleSchedule() ######### # Agent # ######### agent_params = DQNAgentParameters() agent_params.input_filter = NoInputFilter() agent_params.output_filter = NoOutputFilter() # DQN params # ER size agent_params.memory.max_size = (MemoryGranularity.Transitions, 40000) ############### # Environment # ############### env_params = GymEnvironmentParameters(level='short_corridor_env:ShortCorridorEnv') ################# # Graph Manager # ################# graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params, schedule_params=schedule_params) # + [markdown] colab_type="text" id="mCwoQwnGHZL5" # ##### ***Run new preset*** # + colab_type="code" id="4db-w4ETHZL6" colab={} # !coach -p /content/short_corridor_dqn_preset.py:graph_manager # + [markdown] colab_type="text" id="aMs0JnUBHZL8" # ### ***Add new agent*** # Coach modularity makes adding an agent a clean and simple task. 
# Tipicaly consists of four parts: # # # 1. Implement an agent spesific network head (and loss) # 2. Implement exploration policy (optional) # 3. Define new parametes class that extends `AgentParametes` # 4. Implement a preset to run the agent on some environment # # # + [markdown] id="MtfJT8XbmsUo" colab_type="text" # ##### ***Write stochastic output layer*** # We use stochastic policy, meaning that we only produce the probability of going left and going right. # This layer takes in the input from previous layer, the middleware, and outputs two numbers. # + id="KIYd8t8GnCSu" colab_type="code" outputId="76bb7100-76d1-4be3-cc55-22e4cc7acbd8" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile probabilistic_layer.py import tensorflow as tf from rl_coach.architectures.tensorflow_components.layers import Dense class ProbabilisticLayer(object): def __init__(self, input_layer, num_actions): super().__init__() scores = Dense(num_actions)(input_layer, name='logit') self.event_probs = tf.nn.softmax(scores, name="policy") # define the distributions for the policy and the old policy self.policy_distribution = tf.contrib.distributions.Categorical(probs=self.event_probs) def log_prob(self, action): return self.policy_distribution.log_prob(action) def layer_output(self): return self.event_probs # + [markdown] colab_type="text" id="wNelbGV7HZME" # ##### ***Implement network head i.e. implement the loss*** # The Head needs to inherit from the base class `Head`. # # Inorder to maximize the sum of rewards, we want to go in the following direction $-\Sigma_i A_i \nabla_Wlog(\pi(a_i|x_i))$ # # $- A_i \nabla_Wlog[\pi(a_i|x_i)]$ # # `Complete code` # # # + colab_type="code" outputId="dd33ee5d-2eba-4e45-a230-009bd4b37fd2" id="hUnA2ZJHHZMF" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile simple_pg_head.py import tensorflow as tf from rl_coach.architectures.tensorflow_components.heads.head import Head from rl_coach.base_parameters import AgentParameters from rl_coach.spaces import SpacesDefinition from probabilistic_layer import ProbabilisticLayer class SimplePgHead(Head): def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str, head_idx: int = 0, is_local: bool = True): super().__init__(agent_parameters, spaces, network_name) self.exploration_policy = agent_parameters.exploration def _build_module(self, input_layer): # Define inputs actions = tf.placeholder(tf.int32, [None], name="actions") advantages = tf.placeholder(tf.float32, [None], name="advantages") # Two actions, left or right policy_distribution = ProbabilisticLayer(input_layer, num_actions=2) # calculate loss log_prob = policy_distribution.log_prob(???) mudulated_log_prob = ??? expected_mudulated_log_prob = tf.reduce_mean(mudulated_log_prob) ### Coach bookkeeping # List of placeholders for additional inputs to the head #(except from the middleware input) self.input.append(???) # The output of the head, which is also the output of the network. self.output.append(???) # Placeholder for the target that we will use to train the network self.target = ??? # The loss that we will use to train the network self.loss = ??? tf.losses.add_loss(self.loss) # + [markdown] colab_type="text" id="3Eqej3v50a91" # ##### ***Define exploration policy*** # Every iteration we want to sample from the network output distribution i.e. 
toss a bias coin to get the agent actual move # # **`Complete code`** # + colab_type="code" outputId="4efed15d-3956-4ff1-f072-4ec6592da0e0" id="zuYBPgnI0a-A" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile simple_pg_exploration.py import numpy as np from rl_coach.exploration_policies.exploration_policy import ExplorationPolicy, ExplorationParameters from rl_coach.spaces import ActionSpace class DiscreteExplorationParameters(ExplorationParameters): @property def path(self): return 'simple_pg_exploration:DiscreteExploration' class DiscreteExploration(ExplorationPolicy): """ Discrete exploration policy is intended for discrete action spaces. It expects the action values to represent a probability distribution over the action """ def __init__(self, action_space: ActionSpace): """ :param action_space: the action space used by the environment """ super().__init__(action_space) def get_action(self, probabilities): # choose actions according to the probabilities action = np.random.choice(self.action_space.actions, p=???) return chosen_action, probabilities # + [markdown] colab_type="text" id="XG5y-G43HZL9" # ##### ***Define new agent parameters*** # Coach is modular! # # Each class in Coach has a complementary parameters class which defines its constructor. # This is also true for the agent. The agent has a complementary `AgentParameters` class. This class enable to select the paramenters of the agent sub modules. # # It consists of the following four parts: # # # # 1. algorithm # 2. exploration # 3. memory # 4. Networks # # # + colab_type="code" outputId="047d9045-f825-4456-8340-328db9c78bac" id="bTAcPbnsHZL-" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile simple_pg_params.py from rl_coach.architectures.embedder_parameters import InputEmbedderParameters from rl_coach.architectures.head_parameters import HeadParameters from rl_coach.architectures.middleware_parameters import FCMiddlewareParameters from rl_coach.base_parameters import NetworkParameters, AlgorithmParameters, \ AgentParameters from rl_coach.exploration_policies.additive_noise import AdditiveNoiseParameters from rl_coach.exploration_policies.categorical import CategoricalParameters from rl_coach.memories.episodic.single_episode_buffer import SingleEpisodeBufferParameters from rl_coach.spaces import DiscreteActionSpace, BoxActionSpace from rl_coach.agents.policy_optimization_agent import PolicyGradientRescaler from simple_pg_exploration import DiscreteExplorationParameters class SimplePgAgentParameters(AgentParameters): def __init__(self): super().__init__(algorithm=SimplePGAlgorithmParameters(), #exploration=CategoricalParameters(), exploration=DiscreteExplorationParameters(), memory=SingleEpisodeBufferParameters(), networks={"main": SimplePgTopology()}) @property def path(self): #return 'simple_pg_agent:SimplePgAgent' return 'rl_coach.agents.policy_gradients_agent:PolicyGradientsAgent' # Since we are adding a new head we need to tell coach the heads path class SimplePgHeadParams(HeadParameters): def __init__(self): super().__init__(parameterized_class_name="AiWeekHead") @property def path(self): return 'simple_pg_head:SimplePgHead' class SimplePgTopology(NetworkParameters): def __init__(self): super().__init__() self.input_embedders_parameters = {'observation': InputEmbedderParameters()} self.middleware_parameters = FCMiddlewareParameters() self.heads_parameters = [SimplePgHeadParams()] class SimplePGAlgorithmParameters(AlgorithmParameters): """ :param 
num_steps_between_gradient_updates: (int) The number of steps between calculating gradients for the collected data. In the A3C paper, this parameter is called t_max. Since this algorithm is on-policy, only the steps collected between each two gradient calculations are used in the batch. """ def __init__(self): super().__init__() # TOTAL_RETURN # FUTURE_RETURN # FUTURE_RETURN_NORMALIZED_BY_EPISODE # FUTURE_RETURN_NORMALIZED_BY_TIMESTEP # Q_VALUE # A_VALUE # TD_RESIDUAL # DISCOUNTED_TD_RESIDUAL # GAE self.policy_gradient_rescaler = PolicyGradientRescaler.FUTURE_RETURN self.num_steps_between_gradient_updates = 20000 # this is called t_max in all the papers # + [markdown] colab_type="text" id="_8RT_QFdHZMH" # ##### ***Write preset to run new agent on short corridor*** # complete code # * **complete code** # * **Hint: look at DQN preset** # # + colab_type="code" outputId="ce90a661-5691-4b91-8bf0-cc9af56df38f" id="D1PLQuNaHZMI" colab={"base_uri": "https://localhost:8080/", "height": 35} # %%writefile short_corridor_new_agent_preset.py from rl_coach.base_parameters import VisualizationParameters from rl_coach.core_types import EnvironmentEpisodes, EnvironmentSteps from rl_coach.environments.gym_environment import GymEnvironmentParameters from rl_coach.filters.filter import NoInputFilter, NoOutputFilter from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager from rl_coach.graph_managers.graph_manager import SimpleSchedule from rl_coach.memories.memory import MemoryGranularity from rl_coach.schedules import LinearSchedule from simple_pg_params import SimplePgAgentParameters #################### # Graph Scheduling # #################### schedule_params = SimpleSchedule() ######### # Agent # ######### agent_params = ??? agent_params.input_filter = NoInputFilter() agent_params.output_filter = NoOutputFilter() ############### # Environment # ############### env_params = GymEnvironmentParameters(level='short_corridor_env:ShortCorridorEnv') ################# # Graph Manager # ################# graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params, schedule_params=schedule_params) # + [markdown] colab_type="text" id="Ufc_EAe3HZMK" # ##### ***Run preset of the new agent on the new environment*** # # **`Complete code`** # # # # + colab_type="code" id="ad2xpEu8HZML" colab={} # ??? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Comparision: FEM vs. other methods # # [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/danhtaihoang/hidden-variables/master?filepath=sphinx%2Fcodesource%2Fhidden.ipynb) # # We compare the performance of Free Energy Minimization (FEM) with other existing methods based on mean field approximations and Maximum Likelihood Estimation (MLE). # # First of all, we import the necessary packages to the jupyter notebook: # + import numpy as np import sys import matplotlib.pyplot as plt import simulate import inference # %matplotlib inline np.random.seed(1) # - # We generate a true interaction matrix `w0` and time series data `s0`. # + # parameter setting: n0 = 40 # number of variables g = 4.0 # interaction variability parameter w0 = np.random.normal(0.0,g/np.sqrt(n0),size=(n0,n0)) # generating time-series data l = int(4*(n0**2)) s0 = simulate.generate_data(w0,l) # - # Suppose only a subset of variables is observed. 
nh0 = 15 n = n0 - nh0 s = s0[:,:n].copy() # We use a number of hidden variables as its actual value. nh = nh0 # Because we will plot the prediction result of every methods, let us write a plot function. def plot_result(w0,w): plt.figure(figsize=(13.2,3.2)) plt.subplot2grid((1,4),(0,0)) plt.title('observed to observed') plt.plot([-2.5,2.5],[-2.5,2.5],'r--') plt.scatter(w0[:n,:n],w[:n,:n]) plt.xticks([-2,0,2]) plt.yticks([-2,0,2]) plt.xlabel('actual interactions') plt.ylabel('inferred interactions') plt.subplot2grid((1,4),(0,1)) plt.title('hidden to observed') plt.plot([-2.5,2.5],[-2.5,2.5],'r--') plt.scatter(w0[:n,n:],w[:n,n:]) plt.xticks([-2,0,2]) plt.yticks([-2,0,2]) plt.xlabel('actual interactions') plt.ylabel('inferred interactions') plt.subplot2grid((1,4),(0,2)) plt.title('observed to hidden') plt.plot([-2.5,2.5],[-2.5,2.5],'r--') plt.xticks([-2,0,2]) plt.yticks([-2,0,2]) plt.scatter(w0[n:,:n],w[n:,:n]) plt.xlabel('actual interactions') plt.ylabel('inferred interactions') plt.subplot2grid((1,4),(0,3)) plt.title('hidden to hidden') plt.plot([-2.5,2.5],[-2.5,2.5],'r--') plt.scatter(w0[n:,n:],w[n:,n:]) plt.xticks([-2,0,2]) plt.yticks([-2,0,2]) plt.xlabel('actual interactions') plt.ylabel('inferred interactions') plt.tight_layout(h_pad=1, w_pad=1.5) plt.show() # ## Naive Mean-Field approximation print('nMF:') cost_obs,w,sh = inference.infer_hidden(s,nh,method='nmf') w,sh = inference.hidden_coordinate(w0,s0,w,sh) plot_result(w0,w) # ## Thouless-Anderson-Palmer mean field approximation print('TAP:') cost_obs,w,sh = inference.infer_hidden(s,nh,method='tap') w,sh = inference.hidden_coordinate(w0,s0,w,sh) plot_result(w0,w) # ## Exact mean field approximation print('eMF:') cost_obs,w,sh = inference.infer_hidden(s,nh,method='emf') w,sh = inference.hidden_coordinate(w0,s0,w,sh) plot_result(w0,w) # ## Maximum Likelihood Estimation print('MLE:') cost_obs,w,sh = inference.infer_hidden(s,nh,method='mle') w,sh = inference.hidden_coordinate(w0,s0,w,sh) plot_result(w0,w) # ## Free Energy Minimization print('FEM:') cost_obs,w,sh = inference.infer_hidden(s,nh,method='fem') w,sh = inference.hidden_coordinate(w0,s0,w,sh) plot_result(w0,w) # From the above results, we conclude that FEM outperforms other existing methods. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Curve Fitting with Bayesian Ridge Regression # # # Computes a Bayesian Ridge Regression of Sinusoids. # # See `bayesian_ridge_regression` for more information on the regressor. # # In general, when fitting a curve with a polynomial by Bayesian ridge # regression, the selection of initial values of # the regularization parameters (alpha, lambda) may be important. # This is because the regularization parameters are determined by an iterative # procedure that depends on initial values. # # In this example, the sinusoid is approximated by a polynomial using different # pairs of initial values. # # When starting from the default values (alpha_init = 1.90, lambda_init = 1.), # the bias of the resulting curve is large, and the variance is small. # So, lambda_init should be relatively small (1.e-3) so as to reduce the bias. # # Also, by evaluating log marginal likelihood (L) of # these models, we can determine which one is better. # It can be concluded that the model with larger L is more likely. 
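# + [markdown]
# As a quick recap of the model that `BayesianRidge` assumes (it is precisions, not
# variances, that `alpha_init` and `lambda_init` seed): the noise and the weights are
# both Gaussian,
#
# $$p(y \mid X, w, \alpha) = \mathcal{N}(y \mid Xw, \alpha^{-1}I), \qquad
# p(w \mid \lambda) = \mathcal{N}(w \mid 0, \lambda^{-1}I),$$
#
# and both precisions $\alpha$ and $\lambda$ are then re-estimated by iteratively
# maximizing the log marginal likelihood $L$, which is why a poor choice of initial
# values can leave the fit in a high-bias regime, as the example below illustrates.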
# # + print(__doc__) # Author: <> import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import BayesianRidge def func(x): return np.sin(2*np.pi*x) # ############################################################################# # Generate sinusoidal data with noise size = 25 rng = np.random.RandomState(1234) x_train = rng.uniform(0., 1., size) y_train = func(x_train) + rng.normal(scale=0.1, size=size) x_test = np.linspace(0., 1., 100) # ############################################################################# # Fit by cubic polynomial n_order = 3 X_train = np.vander(x_train, n_order + 1, increasing=True) X_test = np.vander(x_test, n_order + 1, increasing=True) # ############################################################################# # Plot the true and predicted curves with log marginal likelihood (L) reg = BayesianRidge(tol=1e-6, fit_intercept=False, compute_score=True) fig, axes = plt.subplots(1, 2, figsize=(8, 4)) for i, ax in enumerate(axes): # Bayesian ridge regression with different initial value pairs if i == 0: init = [1 / np.var(y_train), 1.] # Default values elif i == 1: init = [1., 1e-3] reg.set_params(alpha_init=init[0], lambda_init=init[1]) reg.fit(X_train, y_train) ymean, ystd = reg.predict(X_test, return_std=True) ax.plot(x_test, func(x_test), color="blue", label="sin($2\\pi x$)") ax.scatter(x_train, y_train, s=50, alpha=0.5, label="observation") ax.plot(x_test, ymean, color="red", label="predict mean") ax.fill_between(x_test, ymean-ystd, ymean+ystd, color="pink", alpha=0.5, label="predict std") ax.set_ylim(-1.3, 1.3) ax.legend() title = "$\\alpha$_init$={:.2f},\\ \\lambda$_init$={}$".format( init[0], init[1]) if i == 0: title += " (Default)" ax.set_title(title, fontsize=12) text = "$\\alpha={:.1f}$\n$\\lambda={:.3f}$\n$L={:.1f}$".format( reg.alpha_, reg.lambda_, reg.scores_[-1]) ax.text(0.05, -1.0, text, fontsize=12) plt.tight_layout() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using GeoPandas # # **Setting up the conda env:** # # ``` # conda create -n geo python=3.8 # conda activate geo # conda install mamba -c conda-forge # mamba install geemap geopandas descartes rtree=0.9.3 -c conda-forge # mamba install ipython-sql sqlalchemy psycopg2 -c conda-forge # ``` # # **Sample dataset:** # - [nyc_data.zip](https://github.com/giswqs/postgis/raw/master/data/nyc_data.zip) (Watch this [video](https://youtu.be/fROzLrjNDrs) to load data into PostGIS) # # **References**: # - [Introduction to PostGIS](https://postgis.net/workshops/postgis-intro) # - [Using SQL with Geodatabases](https://desktop.arcgis.com/en/arcmap/latest/manage-data/using-sql-with-gdbs/sql-and-enterprise-geodatabases.htm) # ## Connecting to the database import os from sqlalchemy import create_engine host = "localhost" database = "nyc" user = os.getenv('SQL_USER') password = os.getenv('SQL_PASSWORD') connection_string = f"postgresql://{user}:{password}@{host}/{database}" engine = create_engine(connection_string) from sqlalchemy import inspect insp = inspect(engine) insp.get_table_names() # ## Reading data from PostGIS import geopandas as gpd sql = 'SELECT * FROM nyc_neighborhoods' gdf = gpd.read_postgis(sql, con=engine) gdf gdf.crs # ## Writing files out_dir = os.path.expanduser('~/Downloads') if not os.path.exists(out_dir): os.makedirs(out_dir) out_json = os.path.join(out_dir, 
'nyc_neighborhoods.geojson') gdf.to_file(out_json, driver="GeoJSON") out_shp = os.path.join(out_dir, 'nyc_neighborhoods.shp') gdf.to_file(out_shp) gdf.crs # ## Measuring area gdf = gdf.set_index("name") gdf["area"] = gdf.area gdf["area"] # ## Getting polygon bounary gdf['boundary'] = gdf.boundary gdf['boundary'] # ## Getting polygon centroid gdf['centroid'] = gdf.centroid gdf['centroid'] # ## Making maps gdf.plot() gdf.plot("area", legend=True, figsize=(10, 8)) gdf = gdf.set_geometry("centroid") gdf.plot("area", legend=True,figsize=(10, 8)) ax = gdf["geom"].plot(figsize=(10, 8)) gdf["centroid"].plot(ax=ax, color="black") gdf = gdf.set_geometry("geom") # ## Reprojecting data sql = 'SELECT * FROM nyc_neighborhoods' gdf = gpd.read_postgis(sql, con=engine) gdf_crs = gdf.to_crs(epsg="4326") gdf_crs geojson = gdf_crs.__geo_interface__ # ## Displaying data on an interative map import geemap m = geemap.Map(center=[40.7341, -73.9113], zoom=10, ee_initialize=False) m style = { "stroke": True, "color": "#000000", "weight": 2, "opacity": 1, "fill": True, "fillColor": "#0000ff", "fillOpacity": 0.4, } m.add_geojson(geojson, style=style, layer_name="nyc neighborhoods") sql2 = 'SELECT * FROM nyc_subway_stations' gdf_subway = gpd.read_postgis(sql2, con=engine) gdf_subway_crs = gdf_subway.to_crs(epsg="4326") subway_geojson = gdf_subway_crs.__geo_interface__ m.add_geojson(subway_geojson, layer_name="nyc subway stations") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import geopandas as gpd import matplotlib.pyplot as plt import numpy as np import osmnx as ox import pandas as pd ox.config(log_console=True, use_cache=True) weight_by_length = False # - df = pd.read_csv('data/ACS_17_5YR_B01003_with_ann.csv', dtype={'Id2':str}) df = df.rename(columns={'Estimate; Total':'pop_total'}) len(df) gdf = gpd.read_file('data/tl_2018_37_place', dtype={'GEOID':str}) len(gdf) places_all = pd.merge(gdf, df, how='inner', left_on='GEOID', right_on='Id2') len(places_all) places = places_all.sort_values('pop_total', ascending=False).head(25).reset_index()[['NAME', 'pop_total', 'geometry']] len(places) # ## Get the street networks and their edge bearings def reverse_bearing(x): return x + 180 if x < 180 else x - 180 bearings = {} for label, row in places.iterrows(): geometry = row['geometry'] place = row['NAME'] # get the graph G = ox.graph_from_polygon(geometry, network_type='drive') # calculate edge bearings Gu = ox.add_edge_bearings(ox.get_undirected(G)) # don't weight bearings, just take one value per street segment b = pd.Series([d['bearing'] for u, v, k, d in Gu.edges(keys=True, data=True)]) bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True') # ## Visualize it def count_and_merge(n, bearings): # make twice as many bins as desired, then merge them in pairs # prevents bin-edge effects around common values like 0° and 90° n = n * 2 bins = np.arange(n + 1) * 360 / n count, _ = np.histogram(bearings, bins=bins) # move the last bin to the front, so eg 0.01° and 359.99° will be binned together count = np.roll(count, 1) return count[::2] + count[1::2] # function to draw a polar histogram for a set of edge bearings def polar_plot(ax, bearings, n=36, title=''): bins = np.arange(n + 1) * 360 / n count = count_and_merge(n, bearings) _, division = np.histogram(bearings, bins=bins) frequency = count / count.sum() division = 
division[0:-1] width = 2 * np.pi / n ax.set_theta_zero_location('N') ax.set_theta_direction('clockwise') x = division * np.pi / 180 bars = ax.bar(x, height=frequency, width=width, align='center', bottom=0, zorder=2, color='#003366', edgecolor='k', linewidth=0.5, alpha=0.7) ax.set_ylim(top=frequency.max()) title_font = {'family':'Arial', 'size':24, 'weight':'bold'} xtick_font = {'family':'Arial', 'size':10, 'weight':'bold', 'alpha':1.0, 'zorder':3} ytick_font = {'family':'Arial', 'size': 9, 'weight':'bold', 'alpha':0.2, 'zorder':3} ax.set_title(title.upper(), y=1.05, fontdict=title_font) ax.set_yticks(np.linspace(0, max(ax.get_ylim()), 5)) yticklabels = ['{:.2f}'.format(y) for y in ax.get_yticks()] yticklabels[0] = '' ax.set_yticklabels(labels=yticklabels, fontdict=ytick_font) xticklabels = ['N', '', 'E', '', 'S', '', 'W', ''] ax.set_xticklabels(labels=xticklabels, fontdict=xtick_font) ax.tick_params(axis='x', which='major', pad=-2) # + # create figure and axes n = len(places) ncols = int(np.ceil(np.sqrt(n))) nrows = int(np.ceil(n / ncols)) figsize = (ncols * 5, nrows * 5) fig, axes = plt.subplots(nrows, ncols, figsize=figsize, subplot_kw={'projection':'polar'}) # plot each city's polar histogram for ax, place in zip(axes.flat, places['NAME'].sort_values().values): polar_plot(ax, bearings[place].dropna(), title=place) # add super title and save full image suptitle_font = {'family':'Arial', 'fontsize':60, 'fontweight':'normal', 'y':1.07} fig.suptitle('City Street Network Orientation', **suptitle_font) fig.tight_layout() fig.subplots_adjust(hspace=0.35) fig.savefig('street-orientations.png', dpi=120, bbox_inches='tight') plt.close() # - places # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''base'': conda)' # language: python # name: python3 # --- # # Import Libraries import json import pandas as pd import emoji import re from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics import classification_report # # Retrieve Data f1 = open('tweets.json') data_tweets = json.load(f1).items() user_label = pd.read_csv('labeled_users_clean.csv') # Rename column for merging user_label.rename(columns={"screen_name": "id"}, inplace=True) print(user_label.head()) print(len(user_label)) data_tweets = pd.DataFrame.from_dict(data_tweets) data_tweets.rename(columns={0: "id", 1: "tweets"}, inplace=True) print(data_tweets[:5]) print(len(data_tweets)) # CSV data after cleaning and selection alldata = pd.merge(data_tweets, user_label, on='id') print(len(alldata)) # Extract 3 useful columns ID, tweets, age idAndTweets = alldata[["id","tweets","age"]] # # Clean data - remove mentions, urls, hashtags, emoji, punctuations, special chars, stop words # + #remove mentions, urls, hashtags. 
regexMap = {r"@[\w]+": "", r"http[\S]+": "", r"#[\w]+": ""} def cleaning(data): t = data for regx in regexMap.keys(): t = re.sub(regx, regexMap[regx], str(t)) return t #remove emojis def deEmojify(data): t = data return emoji.get_emoji_regexp().sub('', str(t)) idAndTweets["tweets"] = idAndTweets["tweets"].apply(cleaning) idAndTweets["tweets"] = idAndTweets["tweets"].apply(deEmojify) idAndTweets.head() # - # removing punctuations, numbers, and special characters idAndTweets['tweets'] = idAndTweets['tweets'].str.replace("[^a-zA-Z#]", " ") idAndTweets.head(10) #remove stop words import nltk from nltk.corpus import stopwords nltk.download('stopwords') stop_words = set(stopwords.words('english')) idAndTweets['tweets'] = idAndTweets['tweets'].apply( lambda x: ' '.join([w for w in x.split() if w not in stop_words])) idAndTweets.head() print(idAndTweets.head()) print(idAndTweets.size) # # TF-IDF vectorizer & Logistic Regression, Oversampling, 5-fold cross validation from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from imblearn.over_sampling import SMOTE tfidf = TfidfVectorizer(max_features=7500, ngram_range=(1, 2)) X = tfidf.fit_transform(idAndTweets['tweets']) y = idAndTweets['age'] X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=1/4.0, random_state=0) oversample = SMOTE() X_train, y_train = oversample.fit_resample(X_train, y_train) # + # K-fold (5-fold) from sklearn.model_selection import KFold from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from imblearn.over_sampling import SMOTE tfidf = TfidfVectorizer(max_features=7500, ngram_range=(1, 2)) X = tfidf.fit_transform(idAndTweets['tweets']) y = idAndTweets['age'] kf = KFold(n_splits=5) kf.get_n_splits(X) for train_index, test_index in kf.split(X): #print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] # Oversampling oversample = SMOTE() X_train, y_train = oversample.fit_resample(X_train, y_train) # - # Logistic Regression from sklearn.linear_model import LogisticRegression log = LogisticRegression(multi_class='multinomial', solver='lbfgs').fit(X_train, y_train) y_pred = log.predict(X_test) print(classification_report(y_test, y_pred)) # # Complement Naive Bayes # + import nltk from nltk.tokenize import RegexpTokenizer training_dataset = idAndTweets.sample(frac=0.7, ignore_index=True) training_dataset.head() test_dataset = idAndTweets.drop(training_dataset.index) token = RegexpTokenizer(r'[a-zA-Z0-9]+') vec = CountVectorizer(stop_words='english', ngram_range=(1, 3), tokenizer=token. 
tokenize) x_train = vec.fit_transform(training_dataset['tweets'].values) y_train = training_dataset['age'] x_test = vec.transform(test_dataset['tweets']) y_test= test_dataset['age'] # - from sklearn.naive_bayes import ComplementNB model = ComplementNB() model.fit(x_train, y_train) model.score(x_train,y_train) model.score(x_test,y_test) y_prediction = model.predict(x_test) print(classification_report(y_test,y_prediction)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data wrangling # ## import data import pandas as pd import numpy as np url = "https://raw.githubusercontent.com/lopez-isaac/project-data-sets/master/world%202018%20lol.csv" main_df = pd.read_csv(url) main_df.head() # + ### If importing locally #path = "/Users/isaaclopez/Downloads/world 2018 lol.csv" #main_df = pd.read_csv(path) # - # ## Clean up data frame #remove display limits pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) # #copy data clean_up = main_df # + #drop timebased or Unnecessary columns drop = ["gameid","url","league","split","date","week","game","patchno","playerid","player","team",] clean_up = main_df.drop(columns=drop) # - #check clean_up.head() #check for null values clean_up.isnull().sum() # + ## Fix nan values #fill nan values with either 1 or 0 fill_list =[1,0] clean_up['fbaron'] = clean_up['fbaron'].fillna(pd.Series(np.random.choice(fill_list, size=len(clean_up.index)))) #fill nan values with the column mean clean_up["fbarontime"] = clean_up["fbarontime"].fillna((clean_up["fbarontime"].mean())) #fill nan values with either 1 or 0 clean_up["herald"] = clean_up["herald"].fillna(pd.Series(np.random.choice(fill_list,size=len(clean_up.index)))) #drop the column mostly nan values clean_up = clean_up.drop(columns=["heraldtime"]) # - #check clean_up.isnull().sum() clean_up.head() clean_up.dtypes # + #change data type object to int object_int = ["doubles","triples","quadras","pentas"] for x in object_int: clean_up[x] = clean_up[x].replace(' ', np.NaN) #fill nan values with the column mode clean_up[x] = clean_up[x].fillna(clean_up[x].mode()[0]) clean_up[x] = clean_up[x].astype(float) # - clean_up["dmgshare"] = clean_up["dmgshare"].replace(' ', np.NaN) clean_up["dmgshare"] = clean_up["dmgshare"].astype(float) clean_up["dmgshare"] = clean_up["dmgshare"].fillna((clean_up["dmgshare"].mean())) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].replace(' ', np.NaN) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].astype(float) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].fillna((clean_up["earnedgoldshare"].mean())) #drop to many blanks clean_up["visiblewardclearrate"].value_counts() #drop to many blanks clean_up["invisiblewardclearrate"].value_counts() clean_up = clean_up.drop(columns=["visiblewardclearrate","invisiblewardclearrate"]) clean_up.isnull().sum() # + #need to drop towers destoryed causing indirect leakage #teamtowerkills 0 #opptowerkills clean_up = clean_up.drop(columns=["opptowerkills","teamtowerkills"]) # - # # Feature Engineering feat_engin = clean_up print(clean_up.shape) clean_up.head() feat_engin["blue_side"] = (feat_engin["side"] == "Blue") feat_engin.head() blue_side = {True:1,False:0} feat_engin["blue_side"] = feat_engin["blue_side"].replace(blue_side) #def blue_victory(result,side): #if result == side: #return 1 #else: #return 0 Tournament = feat_engin # # 
machine learning, predict match winner # ## majority value baseline target = Tournament["result"] #unique value percentages for target target.value_counts(normalize=True) # ## train, validation, test split from sklearn.model_selection import train_test_split # + #train and test split 85% and 15% split train, test = train_test_split(Tournament, train_size=.85, test_size=.15, stratify=Tournament['result'], random_state=42) Tournament.shape,train.shape,test.shape # + #train and validation split. split based of test length train, val = train_test_split(train, test_size = len(test), stratify=train['result'], random_state=42) Tournament.shape,train.shape,test.shape,val.shape # - # ## random forest classifier import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline # + # assign features and target target = "result" features = train.columns.drop(["result","side"]) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] y_test = test[target] # - features train.select_dtypes(exclude='number').describe().T.sort_values(by='unique') # + # make the pipeline rfc_pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) ) # Fit on train, score on val rfc_pipeline.fit(X_train, y_train) print('Validation Accuracy', rfc_pipeline.score(X_val, y_val)) # - # ### Random forest importance import matplotlib.pyplot as plt # + rfc = rfc_pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rfc.feature_importances_, X_train.columns) n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='lightblue'); # - # ## eli5 permuation import eli5 from eli5.sklearn import PermutationImportance # + #eli doesnt work with pipline(workaround) transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_transformed = transformers.fit_transform(X_train) X_val_transformed = transformers.transform(X_val) model = RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) model.fit(X_train_transformed, y_train) # + permuter = PermutationImportance( model, scoring='accuracy', n_iter=5, random_state=42 ) permuter.fit(X_val_transformed, y_val) # + feature_names = X_val.columns.tolist() eli5.show_weights( permuter, top=None, # show permutation importances for all features feature_names=feature_names ) # - # ## random forest updated after permutation (feat inportance) minimum_importance = 0 mask = permuter.feature_importances_ > minimum_importance features = X_train.columns[mask] X_train = X_train[features] print('Shape after removing features:', X_train.shape) # + X_val = X_val[features] pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val)) # - from sklearn.metrics import accuracy_score y_pred = pipeline.predict(X_val) print('Validation Accuracy', accuracy_score(y_val, y_pred)) # ## xgboost model from xgboost import XGBClassifier encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) X_train_encoded = 
encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) eval_set = [(X_train_encoded, y_train), (X_val_encoded, y_val)] xgb_model = XGBClassifier( n_estimators=500, # <= 1000 trees, depends on early stopping max_depth=7,# try deeper trees because of high cardinality categoricals learning_rate=0.5, # try higher learning rate objective="multi:softmax", num_class=n+1, n_jobs=-1 ) xgb_model.fit(X_train_encoded, y_train, eval_set=eval_set, eval_metric='merror', early_stopping_rounds=50 ) results = xgb_model.evals_result() train_error = results['validation_0']['merror'] val_error = results['validation_1']['merror'] epoch = range(1, len(train_error)+1) plt.plot(epoch, train_error, label='Train') plt.plot(epoch, val_error, label='Validation') plt.ylabel('Classification Error') plt.xlabel('Model Complexity (n_estimators)') plt.ylim((0.18, 0.22)) # Zoom in plt.legend(); # ## partitial dependency plot (pdpbox) specifically for categorial feature from pdpbox.pdp import pdp_isolate, pdp_plot from pdpbox import pdp #isolate crunches the numbers #plot is for grpahing # Use Ordinal Encoder, outside of a pipeline pdp_encoder = ce.OrdinalEncoder() X_val_encoded_pdp = pdp_encoder.fit_transform(X_val) pdp_model = RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) pdp_model.fit(X_val_encoded_pdp, y_val) features # + #categorical setup #need to use pre permutation features to make it work # feature = the feature we want to change other features stay static pdp_feature = "side" pdp_dist = pdp.pdp_isolate(model=pdp_model, dataset=X_val_encoded_pdp, model_features=features, feature=pdp_feature) pdp.pdp_plot(pdp_dist, pdp_feature); # + pdp_feature = 'teamkills' isolated = pdp_isolate( model=pipeline, dataset=X_val, model_features=X_val.columns, feature=pdp_feature ) # - pdp_plot(isolated, feature_name=pdp_feature); # ## shap import shap # + # Get an individual observation to explain. # For example, the 0th row from the test set. row = X_val_encoded.iloc[[0]] row # - # What was the answer y_val.iloc[[0]] # What does the model predict for this apartment? xgb_model.predict(row) processor = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_processed = processor.fit_transform(X_train) X_val_processed = processor.transform(X_val) eval_set = [(X_train_processed, y_train), (X_val_processed, y_val)] model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc', early_stopping_rounds=10) from sklearn.metrics import roc_auc_score X_test_processed = processor.transform(X_val) class_index = 1 y_pred_proba = model.predict_proba(X_val_processed)[:, class_index] print(f'Test ROC AUC for class {class_index}:') print(roc_auc_score(y_val, y_pred_proba)) # Ranges from 0-1, higher is better # Why did the model predict this? 
explainer = shap.TreeExplainer(model) row_processed = processor.transform(row) shap_values = explainer.shap_values(row_processed) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row, link='logit' # For classification, this shows predicted probabilities ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import sqlalchemy import psycopg2 import os import numpy as np import requests import re import copy from pandas.api.types import is_numeric_dtype from sklearn.linear_model import LinearRegression from sqlalchemy import create_engine import datetime from datetime import datetime as dt import sys import importlib sys.path.append('..') import modules.transforms importlib.reload(modules.transforms) from modules.transforms import * DBname=os.environ['DB_NAME'] postgres_psswd=os.environ['POSTGRES_PASSWORD'] postgres_user=os.environ['POSTGRES_USER'] postgres_port=str(os.environ['POSTGRES_PORT']) # A long string that contains the necessary Postgres login information postgres_str = ('postgresql://'+postgres_user+':'+postgres_psswd+'@'+DBname+':'+postgres_port+'/superset') # Create the connection cnx = create_engine(postgres_str) #guardar paso en csv path='/data/ETLcache/' now = dt.now() timestamp = now.strftime("_%d%m%Y_%H%M%S") # + #OBTENER RUTAS DE CARPETAS DISPONIBLES EN REPO COVID DE MINISTERIO DE CIENCIAS DE CHILE url="https://github.com/MinCiencia/Datos-COVID19/tree/master/output/" request=requests.get(url) root=url.split('github.com')[1] prefix=url.split('/Min')[0] folders=request.text.split(root) #product folders for idx in range(len(folders)): f=folders[idx] f=f.split('">')[0] folders[idx]=f folders=[prefix+root+f for f in folders if 'producto' in f] # + #DEFINIR LISTADO DE PRODUCTOS QUE SE VAN A CONSULTAR DE REPO DE MINISTERIO DE CIENCIAS productos=['producto5','producto3','producto10','producto13','producto25','producto14','producto19','producto37','producto53','producto54','producto52','producto50','producto48','producto24' ] # - #LISTADO DE CARPETAS (RUTAS) FILTRADO, SEGÚN PRODUCTOS A CONSULTAR fs=[] for p in productos: fs.extend([ x for x in folders if p==x.rsplit('/',1)[1]]) #guardar listado de productos consultados en DF (para posteriormente guardar en BD POstgres) productos=pd.DataFrame(productos) # + #crear esquema "tracking" si es que no existe en BD Postgres (para llevar registro de los productos consultados) k='tracking' if not cnx.dialect.has_schema(cnx, k): print('schema '+k+' does not exist, creating it') cnx.execute(sqlalchemy.schema.CreateSchema(k)) else: print('schema '+k+' exists, will not be created') productos.to_sql('min_ciencias_products',con=cnx,schema=k,if_exists='replace') # - #LEER ARCHIVOS DENTRO DE RUTAS CORRESPONDIENTES A PRODUCTOS, GUARDARLOS EN DICCIONARIO "TABLES" tables={} for f in fs: files_folder=f.rsplit('/')[-1] files_folder_prefix='/MinCiencia/Datos-COVID19/blob/master/output/'+files_folder+'/' tables[files_folder]={} raw_path='https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/'+files_folder+'/' req=requests.get(f) files=req.text.split(files_folder_prefix) for idx in range(len(files)): files[idx]=files[idx].split('">')[0] files=[raw_path+f for f in files if '.csv' in f] for fl in files: try: print('reading file: '+fl) table=pd.read_csv(fl) 
table_name='_'.join(fl.rsplit('/',2)[-2:]).strip('.csv') tables[files_folder][table_name]=table except Exception as e: print(str(e)) pass #crear "producto11" como caso especial de producto 4 tables['producto11']={} tables['producto11']['producto4']=pd.read_csv('https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto11/bulk/producto4.csv') tables['producto11']['producto4'].columns=[x.replace('*','x').replace('%','porcentaje_') for x in tables['producto11']['producto4'].columns] ortables=copy.deepcopy(tables) # + #normalizar tablas transpuestas for s in tables.keys(): for n in tables[s].keys(): #print(n) if '_T' in n[-2:]: print('fixing transposed table: '+n) tables[s][n]=transpose(tables[s][n]) #y cambiar campos "fecha" (donde haya) a tipo datetime for s in tables.keys(): print('working schema: '+s) for n in tables[s].keys(): if 'fecha' in [str(x).lower() for x in tables[s][n].columns]: print('converting to date in table: '+n) tables[s][n]=to_datetime(tables[s][n]) elif 'Fecha' in [str(x).lower() for x in tables[s][n].columns]: print('converting to date in table: '+n) tables[s][n]=to_datetime(tables[s][n]) for s in tables.keys(): print('working schema: '+s) for n in tables[s].keys(): print(tables[s][n].columns) if 'fecha' in [str(x) for x in tables[s][n].columns]: print('adding reverse date and index in table: '+n) tables[s][n]=reverse_idx(tables[s][n],date_field='fecha') elif 'Fecha' in [str(x) for x in tables[s][n].columns]: print('adding reverse date and index in table: '+n) tables[s][n]=reverse_idx(tables[s][n],date_field='Fecha') # + #tables['producto5']['producto5_TotalesNacionales_T']['Fecha']=pd.to_datetime(tables['producto5']['producto5_TotalesNacionales_T']['Fecha']) # + #calcular índice y fechas invertidos para tablas en que se desea extraer último dato #to_reverse_idx=[{'producto54':'producto54_r.nacional'},{'producto53':'producto53_confirmados_nacional'}, # {'producto53':'producto53_confirmados_regionale'}] #for n in to_reverse_idx: # k,i=list(n.items())[0] # print('calculating reverse index and date field in table: '+str(k)+' : '+str(i)) # tables[k][i]=reverse_idx(tables[k][i],date_field='fecha') #to_reverse_idx=[{'producto5':'producto5_TotalesNacionales_T'}, # {'producto14':'producto14_FallecidosCumulativo_T'}, # {'producto14':'producto14_FallecidosCumulativo_std'}] #for n in to_reverse_idx: # k,i=list(n.items())[0] # print('calculating reverse index and date field in table: '+str(k)+' : '+str(i)) # tables[k][i]=reverse_idx(tables[k][i],date_field='Fecha') # - #agregar variable dummy para región metropolitana for s in tables.keys(): for n in tables[s].keys(): #print(n) if 'region' in [x.lower() for x in tables[s][n].columns]: print('creando variable dummy para región metropolitana en tabla: '+n) tables[s][n]=make_dummies(tables[s][n],fields=['region'],keep_strings=['metropolitana'],value_map={'metropolitana':{0:'Resto de Chile',1:'Región Metropolitana'}}) for k in tables.keys(): if not cnx.dialect.has_schema(cnx, k): print('schema '+k+' does not exist, creating it') cnx.execute(sqlalchemy.schema.CreateSchema(k)) else: print('schema '+k+' exists, will not be created') for schema in tables.keys(): for name in tables[schema].keys(): try: df=tables[schema][name] name=name.replace('-','_') print("creating table "+name+' ,schema: '+schema) df.to_sql(name, schema=schema,con=cnx,if_exists='replace') print("saving table"+path+name+timestamp+'.csv in cache') df.to_csv(path+name+timestamp+'.csv',encoding='utf-8') except Exception as e: print(str(e)) pass 
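# The schema bootstrap above calls `has_schema` and `execute` directly on the Engine,
# which newer SQLAlchemy (2.0) no longer allows. A sketch of the same idempotent
# "create schema if missing" step written against an explicit connection and the
# inspector (assuming SQLAlchemy 1.4+; `engine_url` is a placeholder):

# +
from sqlalchemy import create_engine, inspect
from sqlalchemy.schema import CreateSchema

def ensure_schema(engine, schema_name):
    """Create `schema_name` if it does not already exist (no-op otherwise)."""
    if schema_name in inspect(engine).get_schema_names():
        print('schema ' + schema_name + ' exists, will not be created')
        return
    print('schema ' + schema_name + ' does not exist, creating it')
    with engine.begin() as conn:  # begin() commits automatically on success
        conn.execute(CreateSchema(schema_name))

# ensure_schema(create_engine(engine_url), 'tracking')
# -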
os.system('jupyter nbconvert --output /home/jovyan/work/ETLdocs/' + 'ETL_covid-chile.html' + ' --to html ' + '/home/jovyan/work/ETL/covid-chile.ipynb') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from pandas import Series from matplotlib import pyplot series = Series.from_csv('Rainfall.csv', header=0) print(series.head()) series.plot() pyplot.show() from pandas import Series from matplotlib import pyplot from pandas.tools.plotting import lag_plot series = Series.from_csv('Rainfall.csv', header=0) lag_plot(series) pyplot.show() from pandas import Series from pandas import DataFrame from pandas import concat from matplotlib import pyplot series = Series.from_csv('Rainfall.csv', header=0) values = DataFrame(series.values) dataframe = concat([values.shift(1), values], axis=1) dataframe.columns = ['t-1', 't+1'] result = dataframe.corr() print(result) # ### Autocorrelation Plots from pandas import Series from matplotlib import pyplot from pandas.tools.plotting import autocorrelation_plot series = Series.from_csv('Rainfall.csv', header=0) autocorrelation_plot(series) pyplot.show() from pandas import Series from matplotlib import pyplot from statsmodels.graphics.tsaplots import plot_acf series = Series.from_csv('Rainfall.csv', header=0) plot_acf(series, lags=31) pyplot.show() # ### Persistence Model # + from pandas import Series from pandas import DataFrame from pandas import Series from pandas import DataFrame from pandas import concat from matplotlib import pyplot from sklearn.metrics import mean_squared_error series = Series.from_csv('Rainfall.csv', header=0) # create lagged dataset values = DataFrame(series.values) dataframe = concat([values.shift(1), values], axis=1) dataframe.columns = ['t-1', 't+1'] # split into train and test sets X = dataframe.values train, test = X[1:len(X)-7], X[len(X)-7:] train_X, train_y = train[:,0], train[:,1] test_X, test_y = test[:,0], test[:,1] # persistence model def model_persistence(x): return x # walk-forward validation predictions = list() for x in test_X: yhat = model_persistence(x) predictions.append(yhat) test_score = mean_squared_error(test_y, predictions) print('Test MSE: %.3f' % test_score) # plot predictions vs expected pyplot.plot(test_y) pyplot.plot(predictions, color='red') pyplot.show() # - # ### Autoregression Model # An autoregression model is a linear regression model that uses lagged variables as input variables. 
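# For lag order $p$ this is
#
# $$y_t = c + \varphi_1 y_{t-1} + \varphi_2 y_{t-2} + \dots + \varphi_p y_{t-p} + \varepsilon_t$$
#
# The `AR` class imported in the next cell is deprecated in current statsmodels
# releases in favour of `AutoReg`. A minimal sketch of the same fit/predict flow on
# a synthetic series (illustrative only, not the Rainfall data):

# +
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
toy = np.cumsum(rng.normal(size=200))      # a random-walk-like toy series
toy_train, toy_test = toy[:-7], toy[-7:]

ar_fit = AutoReg(toy_train, lags=5).fit()  # the lag order is chosen explicitly
print(ar_fit.params)                       # intercept + 5 lag coefficients
ar_pred = ar_fit.predict(start=len(toy_train), end=len(toy_train) + len(toy_test) - 1)
print(ar_pred)                             # 7 out-of-sample forecasts
# -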
from pandas import Series from matplotlib import pyplot from statsmodels.tsa.ar_model import AR from sklearn.metrics import mean_squared_error series = Series.from_csv('Rainfall.csv', header=0) # split dataset X = series.values train, test = X[1:len(X)-7], X[len(X)-7:] # train autoregression model = AR(train) model_fit = model.fit() print('Lag: %s' % model_fit.k_ar) print('Coefficients: %s' % model_fit.params) # make predictions predictions = model_fit.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False) for i in range(len(predictions)): print('predicted=%f, expected=%f' % (predictions[i], test[i])) error = mean_squared_error(test, predictions) print('Test MSE: %.3f' % error) # plot results pyplot.plot(test) pyplot.plot(predictions, color='red') pyplot.show() # #### In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors or deviations—that is, the difference between the estimator and what is estimated. from pandas import Series from matplotlib import pyplot from statsmodels.tsa.ar_model import AR from sklearn.metrics import mean_squared_error series = Series.from_csv('Rainfall.csv', header=0) # split dataset X = series.values train, test = X[1:len(X)-7], X[len(X)-7:] # train autoregression model = AR(train) model_fit = model.fit() window = model_fit.k_ar coef = model_fit.params # walk forward over time steps in test history = train[len(train)-window:] history = [history[i] for i in range(len(history))] predictions = list() for t in range(len(test)): length = len(history) lag = [history[i] for i in range(length-window,length)] yhat = coef[0] for d in range(window): yhat += coef[d+1] * lag[window-d-1] obs = test[t] predictions.append(yhat) history.append(obs) print('predicted=%f, expected=%f' % (yhat, obs)) error = mean_squared_error(test, predictions) print('Test MSE: %.3f' % error) # plot pyplot.plot(test) pyplot.plot(predictions, color='red') pyplot.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Optional: Different Actication Functions # # It is __optional__ but recommended for you to implement these activation functions in ```exercise_code/networks/layer.py```, both forward and backward pass, as a choice of hyperparameter. # # __Note__: We provide you with the sigmoid activation function for your convenience, which you've already worked with in previous exercises. # Recall that activation functions introduce more non-linearity to the network. Here we introduce several kinds of activation functions: # # * Sigmoid # # $$Sigmoid(x) = \frac{1}{1 + exp(-x)}$$ # # Figure4 # # * ReLU # # $$ReLU(x) = max(0, x)$$ # # Figure2 # # * Leaky ReLU # # $$LeakyReLU(x) = max(0.01x, x)$$ # # Figure3 # # * Tanh # # $$Tanh(x) = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}$$ # # Figure3 # %load_ext autoreload # %autoreload 2 # + from exercise_code.tests.layer_tests import * print(ReluTest()()) print() print(LeakyReluTest()()) print() print(TanhTest()()) # - # __Hint__: # # If you have implemented the extra activation functions, please run the following cell to check whether you have did it the right way. # # Otherwise just skip the cell. 
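# The forward formulas above translate directly into NumPy, and each backward pass
# multiplies the upstream gradient by the local derivative. The exercise's layer
# interface in `exercise_code/networks/layer.py` is not shown here, so this is only
# a standalone sketch of the math, not the required class structure:

# +
import numpy as np

def relu_forward(x):
    return np.maximum(0, x)

def relu_backward(dout, x):
    return dout * (x > 0)                    # derivative is 1 where x > 0, else 0

def leaky_relu_forward(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def leaky_relu_backward(dout, x, slope=0.01):
    return dout * np.where(x > 0, 1.0, slope)

def tanh_forward(x):
    return np.tanh(x)

def tanh_backward(dout, x):
    return dout * (1 - np.tanh(x) ** 2)      # d/dx tanh(x) = 1 - tanh(x)^2
# -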
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np img = cv2.imread("imgs/bookpage.jpg") retval, threshold = cv2.threshold(img, 12, 255, cv2.THRESH_BINARY) cv2.imshow('original',img) cv2.imshow('threshold',threshold) cv2.waitKey(0) cv2.destroyAllWindows() grayscaled = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) retval2, threshold2 = cv2.threshold(grayscaled, 12, 255, cv2.THRESH_BINARY) cv2.imshow('original',img) cv2.imshow('threshold',threshold) cv2.imshow('threshold2',threshold2) cv2.waitKey(0) cv2.destroyAllWindows() # + # Gaussian Adaptive Threshold # - gaus = cv2.adaptiveThreshold(grayscaled, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 115, 1) # take grayscaled img with max of 255, first take the adaptive gaussian treshold, then make it binary cv2.imshow('original',img) cv2.imshow('threshold',threshold) cv2.imshow('Gaussian',gaus) cv2.imshow('threshold2',threshold2) cv2.waitKey(0) cv2.destroyAllWindows() retval3,otsu = cv2.threshold(grayscaled,125,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU) # Otsu threshold. Not good for this example. # + cv2.imshow('original',img) cv2.imshow('threshold',threshold) cv2.imshow('threshold2',threshold2) cv2.imshow('Gaussian',gaus) cv2.imshow('Otsu', otsu) cv2.waitKey(0) cv2.destroyAllWindows() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # import the library # %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # sklearn :: utils from sklearn.model_selection import train_test_split # sklearn :: models from sklearn.linear_model import LinearRegression from sklearn.neighbors import KNeighborsRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor # sklearn :: evaluation metrics from sklearn.metrics import mean_absolute_error from sklearn.metrics import mean_squared_error from sklearn.model_selection import KFold sns.set_style('whitegrid') # - # # Problem definition # Apply regression models to predict the amount of purchase # # Load the data # Loading the data - we decided to work with a subset of our dataset because of our limited computational power during the experiments df_original = pd.read_csv('../../Data/Modeling/black_friday_processed_100K.csv') df = df_original.copy() print(df.columns) df.head() print(df.shape) print(df.dtypes) # # Feature Engineering # + # Removing unnecessary columns del df['User_ID'] del df['Product_ID'] # Transforming the categorical columns to numerical for col in ['Gender', 'Age', 'Occupation', 'City_Category', 'Stay_In_Current_City_Years', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3']: df_dummies = pd.get_dummies(df[col], prefix=col) df = pd.concat([df, df_dummies], axis=1) # Remove the original columns del df[col] df.head() # - df.dtypes # Selecting the columns X_columns = [x for x in df.columns if x != 'Purchase'] # and df.loc[:,x].dtype != object] y_column = ['Purchase'] list(X_columns) # # Model Training # + # Spliting the data using sklearn train_test_split threshold = 0.8 X = df[X_columns] y = df[y_column] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1.0-threshold, 
shuffle=True) print('X_train', X_train.shape) print('y_train', y_train.shape) print('X_test', X_test.shape) print('y_test', y_test.shape) # - # # Model Evaluation # # Experiments # + def model_training(model_name, model, X_train, y_train): model.fit(X_train, y_train) return model def model_prediction(model, X_test): y_pred = model.predict(X_test) return y_pred def model_evaluation(model_name, y_test, y_pred): print(model_name) print('MAE', mean_absolute_error(y_test, y_pred)) print('RMSE', np.sqrt(mean_squared_error(y_test, y_pred))) plt.scatter(y_test, y_pred, alpha=0.3) plt.plot(range(0,10), range(0,10), '--r', alpha=0.3, label='Line1') plt.title(model_name) plt.xlabel('True Value') plt.ylabel('Predict Value') plt.show() print('') def run_experiment(model_name, model, X_train, y_train, X_test): train_model = model_training(model_name, model, X_train, y_train) predictions = model_prediction(train_model, X_test) model_evaluation(model_name, y_test, predictions) run_experiment('Linear Regression', LinearRegression(), X_train, y_train, X_test) run_experiment('KNN 2', KNeighborsRegressor(2), X_train, y_train, X_test) run_experiment('KNN 5', KNeighborsRegressor(5), X_train, y_train, X_test) run_experiment('Decision Tree', DecisionTreeRegressor(), X_train, y_train, X_test) run_experiment('Random Forest 10', RandomForestRegressor(10), X_train, y_train, X_test) run_experiment('Random Forest 100', RandomForestRegressor(100), X_train, y_train, X_test) # - # # Cross Validation # + # Evaluating our machine learning models based in our data sample models = [ ('LinearRegression', LinearRegression()), ('RandomForestRegressor10', RandomForestRegressor(n_estimators=10)), ('RandomForestRegressor100', RandomForestRegressor(n_estimators=100, n_jobs=4)), ('KNeighborsRegressor', KNeighborsRegressor()), ('DecisionTreeRegressor', DecisionTreeRegressor()) ] k = 5 results = {} for m in models: print('MODEL', m[0]) results[m[0]] = {'mae':[], 'rmse':[]} kf = KFold(n_splits=k) for train_index, test_index in kf.split(X): X_train_k, X_test_k = X.values[train_index], X.values[test_index] y_train_k, y_test_k = y.values[train_index], y.values[test_index] model = m[1] model.fit(X_train_k, y_train_k.ravel()) y_pred = model.predict(X_test_k) mae = mean_absolute_error(y_test_k, y_pred) rmse = np.sqrt(mean_squared_error(y_test_k, y_pred)) results[m[0]]['mae'].append(mae) results[m[0]]['rmse'].append(rmse) # - # Plotting the results of our evaluation for metric in ['mae', 'rmse']: values = [] labels = [] for model, result_values in results.items(): for m, v in result_values.items(): if m == metric: labels.append(model) values.append(v) plt.figure(figsize=(12,6)) plt.title(metric) plt.boxplot(values) plt.xticks(range(1, len(labels)+1), labels, rotation='horizontal') plt.show() # # Error Analysis # Choosing one model to work with model = RandomForestRegressor(100) model.fit(X_train, y_train) y_pred = model.predict(X_test) # Verifying which are the most important features fi = [] for i, col in enumerate(X_test.columns): fi.append([col, model.feature_importances_[i]]) pd.DataFrame(fi).sort_values(1, ascending=False) # As we can observe, the 10 first most important features are related to Product_Category_1. 
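# Because the dummies were created with `pd.get_dummies(prefix=col)`, each original
# column's importance is split across many dummy columns; rolling the per-dummy
# importances back up makes the dominance of Product_Category_1 easier to see.
# A sketch using the `model` and `X_test` fitted above (the suffix-stripping rule is
# an assumption about the dummy naming):

# +
import pandas as pd

importances = pd.Series(model.feature_importances_, index=X_test.columns)
grouped = (importances
           .groupby(importances.index.str.rsplit('_', n=1).str[0])  # strip the dummy suffix
           .sum()
           .sort_values(ascending=False))
print(grouped.head(10))
# -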
X_test.head() # Verifying the absolute error for predicted purchase amounts df_test = pd.DataFrame(X_test).copy() df_test['Purchase'] = y_test df_test['prediction'] = y_pred df_test['abs_error'] = abs(df_test['Purchase']-df_test['prediction']) df_test.sort_values(by='abs_error', ascending=False).round() # Plotting the absolute error plt.hist(df_test['abs_error'], bins=30) plt.show() plt.cla() plt.clf() plt.close() df_error = df_test[df_test['abs_error']>15000] print(len(df_error)) df_error.head() df_error.describe() df_error.corr()['abs_error'].dropna().sort_values() df[df['Age_18-25']==1][['Gender_M']].head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.10 64-bit (''tensorflow2_p36'': conda)' # name: python3 # --- # + [markdown] id="TA21Jo5d9SVq" # # # ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_FR.ipynb) # # # # + [markdown] id="CzIdjHkAW8TB" # # **Detect entities in French text** # + [markdown] id="wIeCOiJNW-88" # ## 1. Colab Setup # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="CGJktFHdHL1n" outputId="478d8289-41bc-447f-c7fa-50c485937f8e" # Install PySpark and Spark NLP # ! pip install -q pyspark==3.1.2 spark-nlp # Install Spark NLP Display lib # ! pip install --upgrade -q spark-nlp-display # + [markdown] id="eCIT5VLxS3I1" # ## 2. Start the Spark session # + id="sw-t1zxlHTB7" import json import pandas as pd import numpy as np from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F from sparknlp.annotator import * from sparknlp.base import * import sparknlp from sparknlp.pretrained import PretrainedPipeline spark = sparknlp.start() # + [markdown] id="9RgiqfX5XDqb" # ## 3. Select the DL model # + id="LLuDz_t40be4" # If you change the model, re-run all the cells below. # Applicable models: wikiner_840B_300 MODEL_NAME = "wikiner_840B_300" # + [markdown] id="2Y9GpdJhXIpD" # ## 4. Some sample examples # + id="vBOKkB2THdGI" # Enter examples to be transformed as strings in this list text_list = [ """ III (né le 28 octobre 1955) est un magnat des affaires, développeur de logiciels, investisseur et philanthrope américain. Il est surtout connu comme le co-fondateur de Microsoft Corporation. Au cours de sa carrière chez Microsoft, Gates a occupé les postes de président, chef de la direction (PDG), président et architecte logiciel en chef, tout en étant le plus grand actionnaire individuel jusqu'en mai 2014. Il est l'un des entrepreneurs et pionniers les plus connus du révolution des micro-ordinateurs des années 1970 et 1980. Né et élevé à Seattle, Washington, Gates a cofondé Microsoft avec son ami d'enfance en 1975, à Albuquerque, au Nouveau-Mexique; il est devenu la plus grande société de logiciels informatiques au monde. Gates a dirigé l'entreprise en tant que président-directeur général jusqu'à sa démission en tant que PDG en janvier 2000, mais il est resté président et est devenu architecte logiciel en chef. À la fin des années 1990, Gates avait été critiqué pour ses tactiques commerciales, considérées comme anticoncurrentielles. Cette opinion a été confirmée par de nombreuses décisions de justice. 
En juin 2006, Gates a annoncé qu'il passerait à un poste à temps partiel chez Microsoft et à un emploi à temps plein à la Bill & Melinda Gates Foundation, la fondation caritative privée que lui et sa femme, , ont créée en 2000. [ 9] Il a progressivement transféré ses fonctions à et . Il a démissionné de son poste de président de Microsoft en février 2014 et a assumé un nouveau poste de conseiller technologique pour soutenir le nouveau PDG .""", """La Joconde est une peinture à l'huile du XVIe siècle créée par Léonard. Il se tient au Louvre à Paris.""" ] # + [markdown] id="XftYgju4XOw_" # ## 5. Define Spark NLP pipeline # + colab={"base_uri": "https://localhost:8080/"} id="lBggF5P8J1gc" outputId="0c62ad2c-b4b5-4e25-8285-32c3185f9551" document_assembler = DocumentAssembler() \ .setInputCol('text') \ .setOutputCol('document') tokenizer = Tokenizer() \ .setInputCols(['document']) \ .setOutputCol('token') # The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the # pipeline should match. Same applies for the other available models. if MODEL_NAME == "wikiner_840B_300": embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \ .setInputCols(['document', 'token']) \ .setOutputCol('embeddings') elif MODEL_NAME == "wikiner_6B_300": embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \ .setInputCols(['document', 'token']) \ .setOutputCol('embeddings') elif MODEL_NAME == "wikiner_6B_100": embeddings = WordEmbeddingsModel.pretrained('glove_100d') \ .setInputCols(['document', 'token']) \ .setOutputCol('embeddings') ner_model = NerDLModel.pretrained(MODEL_NAME, 'fr') \ .setInputCols(['document', 'token', 'embeddings']) \ .setOutputCol('ner') ner_converter = NerConverter() \ .setInputCols(['document', 'token', 'ner']) \ .setOutputCol('ner_chunk') nlp_pipeline = Pipeline(stages=[ document_assembler, tokenizer, embeddings, ner_model, ner_converter ]) # + [markdown] id="mv0abcwhXWC-" # ## 6. Run the pipeline # + id="EYf_9sXDXR4t" empty_df = spark.createDataFrame([['']]).toDF('text') pipeline_model = nlp_pipeline.fit(empty_df) df = spark.createDataFrame(pd.DataFrame({'text': text_list})) result = pipeline_model.transform(df) # + [markdown] id="UQY8tAP6XZJL" # ## 7. 
Visualize results # + colab={"base_uri": "https://localhost:8080/", "height": 656} id="Ar32BZu7J79X" outputId="43fdadc7-fe1d-4d92-b908-af0aa6ddea96" from sparknlp_display import NerVisualizer NerVisualizer().display( result = result.collect()[0], label_col = 'ner_chunk', document_col = 'document' ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Intel, 2018) # language: python # name: c009-intel_distribution_of_python_3_2018 # --- import keras from keras.models import Sequential from keras.models import Model from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D import numpy as np from keras.models import load_model from keras.layers.merge import concatenate from keras.layers import Input, merge model = Sequential() model.add(Conv2D(128, kernel_size=(2, 2), activation="relu",input_shape=(4,4,16),padding = 'valid')) model.add(Conv2D(128, kernel_size=(2, 2), activation="relu",padding = 'valid')) model.add(Flatten()) model.add(Dense(128,activation="relu")) model.add(Dropout(0.25)) model.add(Dense(4,activation="softmax")) model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy']) #model.save('learn_32.h5') model1 = Sequential() model1.add(Conv2D(128, kernel_size=(3, 3), activation="relu",input_shape=(4,4,16),padding = 'same')) model1.add(Conv2D(128, kernel_size=(2, 2), activation="relu",padding = 'valid')) model1.add(Conv2D(128, kernel_size=(2, 2), activation="relu",padding = 'valid')) model1.add(Flatten()) model1.add(Dense(128,activation="relu")) model1.add(Dropout(0.25)) model1.add(Dense(4,activation="softmax")) model1.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy']) #model1.save('my_highest.h5') model2 = Sequential() model2.add(Conv2D(128, kernel_size=(2, 2), activation="relu",input_shape=(4,4,16),padding = 'valid')) model2.add(Conv2D(128, kernel_size=(2, 2), activation="relu",input_shape=(4,4,16),padding = 'valid')) model2.add(Flatten()) model2.add(Dense(256,activation="relu")) model2.add(Dropout(0.5)) model2.add(Dense(4,activation="softmax")) model2.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy']) #model2.save('my_low.h5') # + inputs=Input((4,4,16)) conv = inputs FLITERS=128 conv41 = Conv2D(filters=FLITERS,kernel_size=(4,1),kernel_initializer= 'he_uniform')(conv) conv14 = Conv2D(filters=FLITERS,kernel_size=(1,4),kernel_initializer= 'he_uniform')(conv) conv22 = Conv2D(filters=FLITERS,kernel_size=(2,2),kernel_initializer= 'he_uniform')(conv) conv33 = Conv2D(filters=FLITERS,kernel_size=(3,3),kernel_initializer= 'he_uniform')(conv) conv44 = Conv2D(filters=FLITERS,kernel_size=(4,4),kernel_initializer= 'he_uniform')(conv) hidden=concatenate([Flatten()(conv41),Flatten()(conv14),Flatten()(conv22),Flatten()(conv33),Flatten()(conv44)]) x = keras.layers.BatchNormalization()(hidden) x = keras.layers.Activation('relu')(hidden) for width in [512,128]: x= Dense(width,kernel_initializer = 'he_uniform')(x) x=keras.layers.BatchNormalization()(x) x =keras.layers.Activation('relu')(x) outputs = Dense(4,activation = 'softmax')(x) model3 = Model(inputs, outputs) model3.summary() model3.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy']) #model3.save('learn_finally.h5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # --- # + # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.interpolate import requests import datetime # %load_ext autoreload # %autoreload 2 import ptvsd ptvsd.enable_attach() # - from access_treasurydirectgov import * import bond_analytics as ba from db_manager_UST import * date_range = pd.date_range(start='2020/08/06', end='2020/08/16', freq='D') dn = db_manager_UST() df = dn.retrieve_as_of(datetime.datetime(2020,8,6)) df_p = df[df['type'].isin(['Note', 'Bond'])] ttm = (df_p['maturityDate'] - df_p['date']).dt.days / 365.0 plt.figure() plt.plot(ttm, df_p['endOfDay'], '.') plt.show() df['securityTerm'].value_counts() df[df['securityTerm'] == '20-Year 6-Month'] get_cusip_info('912810FR4')[1] df = read_hist_data_from_treasurydirectgov(datetime.datetime(2020,8,20)) df any([True, True]) datetime.date.today() # + tags=[] adf = [] for dt in date_range: print(dt) try: df = read_hist_data_from_treasurydirectgov(dt) except Exception as ex: print(ex) continue df['Date'] = dt adf.append(df) df_a = pd.concat(adf) # - df_note = df_a[df_a['SECURITY TYPE'].isin(['MARKET BASED NOTE', 'MARKET BASED BOND'])].copy() df[['CUSIP','SECURITY TYPE','RATE', 'MATURITY DATE']].T.to_dict()[33]['MATURITY DATE'] # + def calc_yield(row): dt_start = datetime.datetime(2019,1,1) dt_end = row['MATURITY DATE'] coupon = row['RATE'] today = row['Date'] price = row['END OF DAY'] if price < 0.01: price = 0.5*(row['BUY'] + row['SELL']) b = ba.USConventional(row['SECURITY TYPE'], dt_start, dt_end, coupon) y = b.get_yield(today, price) return y df_note['Yield'] = df_note.apply(calc_yield, axis = 1) # - df_note_dt = df_note[df_note['Date'] == '2020-07-30'] df_p = df_note_dt plt.figure() x = df_p['MATURITY DATE'] y = df_p['Yield'] plt.plot(x, y, 'o') plt.show() import pymongo myclient = pymongo.MongoClient('mongodb://localhost:27017/') mydb = myclient['mydatabase'] myclient.list_database_names() mycol = mydb['customers'] x = mycol.find_one() print(x) # + tags=[] print(mydb.list_collection_names()) # + tags=[] collist = mydb.list_collection_names() if "customers" in collist: print("The collection exists.") # - mydict = {'name':'john', 'address':'highway 37'} x = mycol.insert_one(mydict) mycol.find_one() collist = mydb.list_collection_names() if "customers" in collist: print("The collection exists.") x.inserted_id mycol.insert_many(df_note.to_dict(orient = 'records')) pd.DataFrame(mycol.find({'BUY':{'$lt':0.1}}, {'_id':0})) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # # MNIST Veri Seti ile Rakam Tanıma # # https://www.kaggle.com/c/digit-recognizer/ # # MNIST veri seti $28\times 28$ pikselden oluşan elle yazılmış rakamlar içeren bir veri seti. Veri seti 32000 eğitim ve 10000 test rakamı içeriyor. # # Normalde çok sınıflı bir sınıflandırma problemi olan rakam tanıma problemini güdümsüz öğrenme yöntemiyle ele alacağız. # # Şimdi paketleri yükleyerek örnek bir dosya okutalım. # + # %matplotlib inline import pandas as pd import numpy as np import time as time from skimage.io import imread, imshow, show, imsave import matplotlib.pyplot as plt train_size = 32000 test_size = 10000 picture_size = 28 image = imread('train/0.png', as_grey = True) imshow(image) show() print(image) # - # 'Train' ve 'Test' klasörlerindeki veriyi okuyarak başlayalım. 
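# Note: newer scikit-image releases renamed `imread`'s `as_grey=` keyword to
# `as_gray=`, so on a current install the read calls in this notebook need the new
# spelling; everything else stays the same. A one-line sketch with the same file
# used above:

# +
from skimage.io import imread

image = imread('train/0.png', as_gray=True)  # same call as above, with the renamed keyword
print(image.shape)                           # (28, 28)
# -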
# +
df_train = np.zeros((train_size, picture_size*picture_size))
for i in range(train_size):
    img = imread('train/' + str(i) + '.png', as_grey=True)
    img = np.reshape(img, (1,picture_size*picture_size))
    df_train[i,:]=img/255

df_train=pd.DataFrame(df_train)
df_train.columns = ['pixel_' + str(i) for i in range(picture_size*picture_size)]
y_train = pd.read_csv('train/labels_train.csv')

df_test = np.zeros((test_size, picture_size*picture_size))
for i in range(train_size, train_size+test_size):
    img = imread('test/' + str(i) + '.png', as_grey=True)
    img = np.reshape(img, (1,picture_size*picture_size))
    df_test[i-train_size,:]=img/255

df_test=pd.DataFrame(df_test)
df_test.columns = ['pixel_' + str(i) for i in range(picture_size*picture_size)]
y_test = pd.read_csv('test/labels_test.csv')

print(len(df_train))
print(len(y_train))
print(len(df_test))
print(len(y_test))
print(df_train.iloc[1,:])
# -

# After reading in the data set, we will use the Mixture Model approach from the unsupervised learning family. MM is a soft-clustering method: besides a hard member/non-member assignment, it also gives the probability that an observation belongs to each cluster. In this method the components correspond to profiles; here the components will look like silhouettes of the digits. Let's train the model with 50 components.

# +
from sklearn.mixture import GMM

gmm = GMM(n_components = 50)
gmm.fit(df_train)

# +
means = gmm.means_
for i in range(len(means)):
    component = means[i,:]
    component = np.reshape(component,(picture_size,picture_size))
    imshow(component)
    plt.title("Component " + str(i))
    show()
# -

# As we can see, the components look quite a lot like digits. Now let's build a new data frame from these probabilities.

# +
probabilities = pd.DataFrame(gmm.predict_proba(df_train))
print(probabilities.head())
# -

# We can use these probabilities as input to a Gradient Boosting model, i.e. for supervised learning.
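# (Aside: `sklearn.mixture.GMM` as used above has been removed from recent
# scikit-learn releases; the replacement is `GaussianMixture`, which exposes the
# same `fit`, `means_` and `predict_proba`. A small sketch on synthetic data, with
# illustrative names only:)

# +
import numpy as np
from sklearn.mixture import GaussianMixture

demo_X = np.random.rand(200, 784)          # stand-in for the flattened digit matrix
demo_gmm = GaussianMixture(n_components=5, covariance_type='diag', random_state=0)
demo_gmm.fit(demo_X)
print(demo_gmm.means_.shape)               # (5, 784): one "profile" per component
print(demo_gmm.predict_proba(demo_X[:3]))  # soft cluster memberships per row
# -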
# + from sklearn.ensemble import GradientBoostingClassifier gbc = GradientBoostingClassifier() gbc.fit(probabilities,y_train) y_pred = gbc.predict(gmm.predict_proba(df_test)) from sklearn.metrics import confusion_matrix, accuracy_score print('Accuracy score of GBC: ' + str(accuracy_score(y_test,y_pred))) df_results = pd.DataFrame(confusion_matrix(y_test, y_pred)) df_results.index = [['Label_' + str(i) for i in range(10)]] df_results.columns = [['Pre_' + str(i) for i in range(10)]] print('Confusion matrix for GBC: ') print(df_results) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: automl_titanic # language: python # name: automl_titanic # --- # + import pandas as pd train = pd.read_csv('./data/train.csv') train.shape # + from sklearn.tree import DecisionTreeClassifier model = DecisionTreeClassifier() # - help(model.fit) train.columns X = train[['PassengerId', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']] y = train['Survived'] model.fit(X, y) train.info() X = train[['PassengerId', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare']] y = train['Survived'] model.fit(X, y) X.info() X['Age'].fillna(X['Age'].median(), inplace=True) X.info() model.fit(X, y) test = pd.read_csv('./data/test.csv') test.shape model.predict(test) # + lil_test = test[['PassengerId', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare']] model.predict(lil_test) # - lil_test['Age'].fillna(lil_test['Age'].median(), inplace=True) model.predict(lil_test) lil_test.info() lil_test['Fare'].fillna(lil_test['Fare'].median(), inplace=True) model.predict(lil_test) # + y_predicted = model.predict(lil_test) output = pd.DataFrame({'PassengerId': lil_test['PassengerId'], 'Survived': y_predicted}) output # - output.to_csv('./submissions/scikit_basic.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Building and fitting NN # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" import sys import pandas as pd import numpy as np import scipy.sparse as sparse from scipy.sparse.linalg import spsolve import random import os import scipy.stats as ss import scipy from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from catboost import CatBoostClassifier, Pool, sum_models, to_classifier from sklearn.model_selection import KFold from sklearn.utils.class_weight import compute_class_weight import implicit # + #df = pd.read_csv("../input/feature-creating-v2/train_3lags_v3.csv", low_memory=False) # + #df = df.loc[df['order_count'] == 2] # + #df.head() # + #threshold = 0.0005 #counts = df['service_title'].value_counts(normalize=True) #df2 = df.loc[df['service_title'].isin(counts[counts > threshold].index), :] # + #sub15 = pd.read_csv("../input/catboost-fitting-smart/submission_15.csv") # + #d_134 = df2.loc[df2['service_title'] == 134][:121500] # + #d_98 = df2.loc[df2['service_title'] == 98][:121500] # + #df2 = df2.loc[df2['service_title'] != 134] #df2 = df2.loc[df2['service_title'] != 98] # + #df2 = pd.concat([df2,d_134], axis=0) #df2 = pd.concat([df2,d_98], axis=0) # + #df2.to_csv('train_3lags_semibalanced.csv', index=False) # - # Downloading prepared data df2 = 
pd.read_csv("../input/irkutsk/train_3lags_semibalanced.csv", low_memory=False) # + #df2 = pd.read_csv("../input/irkutsk/train_3lags_v4.csv", low_memory=False) # + #df2.drop(['Unnamed: 0'], axis=1, inplace=True) # - df2['service_title'].value_counts(normalize=True)[:30] # + #sub = pd.read_csv("../input/nn-sub/nn_sub_5fold_2.csv") # + #df2['service_title'].value_counts(normalize=True)[:30] # + #sub['service_title'] = 1259 # + #sub.to_csv('check_1259.csv', index=False) # + #sub['service_title'].value_counts(normalize=True)[:30] # - df2 = df2.sample(frac=1).reset_index(drop=True) df_test = df2[['service_title', 'order_count']] df2.drop(['service', 'service_title', 'mfc', 'internal_status', 'external_status', 'order_type', 'department_id', 'custom_service_id', 'service_level', 'is_subdep', 'is_csid', 'proc_time', 'dayofweek', 'day_part', 'person', 'sole', 'legal', 'auto_ping_queue', 'win_count', 'month', 'week', 'year'], axis=1 , inplace=True) df_train = df2.loc[df2['order_count'] > 1] df_test = df_test.loc[df_test['order_count'] > 1] df_train.reset_index(inplace=True) df_test.reset_index(inplace=True) df_train.drop(['index'], axis=1, inplace=True) df_test.drop(['index'], axis=1, inplace=True) # + #mask = (df_train_mask['order_count'] == 5) #z_valid = df_train_mask[mask] #df_train_mask.loc[mask, 'service_4'] = np.nan # - df_train.columns categorical = ['service_1', 'service_title_1', 'mfc_1', 'internal_status_1', 'external_status_1', 'order_type_1', 'department_id_1', 'custom_service_id_1', 'service_level_1', 'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'day_part_1', 'month_1', 'week_1', 'year_1', 'person_1', 'sole_1', 'legal_1', 'auto_ping_queue_1', 'service_2', 'service_title_2', 'mfc_2', 'internal_status_2', 'external_status_2', 'order_type_2', 'department_id_2', 'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2', 'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'month_2', 'week_2', 'year_2', 'legal_2', 'auto_ping_queue_2','service_3', 'service_title_3', 'mfc_3', 'internal_status_3', 'external_status_3', 'order_type_3', 'department_id_3', 'custom_service_id_3', 'service_level_3', 'is_subdep_3', 'is_csid_3', 'month_3', 'week_3', 'year_3', 'dayofweek_3', 'day_part_3', 'person_3', 'sole_3', 'legal_3', 'auto_ping_queue_3', 'requester_type', 'gender'] cat = ['service_1', 'service_title_1', 'mfc_1', 'internal_status_1', 'external_status_1', 'department_id_1', 'custom_service_id_1', 'month_1', 'week_1', 'year_1', 'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'service_2', 'service_title_2', 'mfc_2', 'internal_status_2', 'external_status_2', 'department_id_2', 'month_2', 'week_2', 'year_2', 'custom_service_id_2', 'is_subdep_2', 'is_csid_2', 'dayofweek_2', 'service_3', 'service_title_3', 'mfc_3', 'internal_status_3', 'external_status_3', 'department_id_3', 'custom_service_id_3', 'is_subdep_3', 'is_csid_3', 'month_3', 'week_3', 'year_3', 'dayofweek_3', 'requester_type', 'gender'] X = df_train.drop(['requester'], axis=1) y = df_test['service_title'] # + X.drop(['service_2', 'service_title_2', 'mfc_2', 'internal_status_2', 'external_status_2', 'order_type_2', 'department_id_2', 'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2', 'proc_time_2', 'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'legal_2', 'auto_ping_queue_2', 'win_count_2', 'month_2', 'week_2', 'year_2'], axis=1 , inplace=True) X.drop(['service_3', 'service_title_3', 'mfc_3', 'internal_status_3', 'external_status_3', 'order_type_3', 'department_id_3', 'custom_service_id_3', 'service_level_3', 
'is_subdep_3', 'is_csid_3', 'proc_time_3', 'dayofweek_3', 'day_part_3', 'person_3', 'sole_3', 'legal_3', 'auto_ping_queue_3', 'win_count_3', 'month_3', 'week_3', 'year_3'], axis=1 , inplace=True) # + categorical = ['service_1', 'service_title_1', 'mfc_1', 'internal_status_1', 'external_status_1', 'order_type_1', 'department_id_1', 'custom_service_id_1', 'service_level_1', 'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'day_part_1', 'month_1', 'week_1', 'year_1', 'person_1', 'sole_1', 'legal_1', 'auto_ping_queue_1', 'requester_type', 'gender'] cat = ['service_1', 'service_title_1', 'mfc_1', 'internal_status_1', 'external_status_1', 'department_id_1', 'custom_service_id_1', 'month_1', 'week_1', 'year_1', 'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'requester_type', 'gender'] # - X[cat] = X[cat].astype('Int64') X[cat] = X[cat].astype('object') # + #X[cat] = X[cat].astype('Int64') #X[cat] = X[cat].astype('object') # - X[categorical] = X[categorical].fillna('NA') # Function to reduce ram usage def reduce_mem_usage(df): """ iterate through all the columns of a dataframe and modify the data type to reduce memory usage. """ start_mem = df.memory_usage().sum() / 1024**2 print(('Memory usage of dataframe is {:.2f}' 'MB').format(start_mem)) for col in df.columns: col_type = df[col].dtype if col_type != object: c_min = df[col].min() c_max = df[col].max() if str(col_type)[:3] == 'int': if c_min > np.iinfo(np.int8).min and c_max <\ np.iinfo(np.int8).max: df[col] = df[col].astype(np.int8) elif c_min > np.iinfo(np.int16).min and c_max <\ np.iinfo(np.int16).max: df[col] = df[col].astype(np.int16) elif c_min > np.iinfo(np.int32).min and c_max <\ np.iinfo(np.int32).max: df[col] = df[col].astype(np.int32) elif c_min > np.iinfo(np.int64).min and c_max <\ np.iinfo(np.int64).max: df[col] = df[col].astype(np.int64) else: if c_min > np.finfo(np.float16).min and c_max <\ np.finfo(np.float16).max: df[col] = df[col].astype(np.float16) elif c_min > np.finfo(np.float32).min and c_max <\ np.finfo(np.float32).max: df[col] = df[col].astype(np.float32) else: df[col] = df[col].astype(np.float64) else: df[col] = df[col].astype('category') end_mem = df.memory_usage().sum() / 1024**2 print(('Memory usage after optimization is: {:.2f}' 'MB').format(end_mem)) print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem)) return df X = reduce_mem_usage(X) X = X[categorical] X['person_1'] = X['person_1'].astype('int32') X['sole_1'] = X['sole_1'].astype('int32') X['legal_1'] = X['legal_1'].astype('int32') X['auto_ping_queue_1'] = X['auto_ping_queue_1'].astype('int32') from sklearn import preprocessing labeling = [] for col in X.columns: d = pd.DataFrame() d[col] = X[col].unique() le = preprocessing.LabelEncoder() le.fit(d[col]) d[col+'_l'] = le.transform(d[col]) d = d.sort_values(by=[col+'_l']).reset_index(drop=True) labeling.append(d) d = pd.DataFrame() d['service_title'] = y.unique() le = preprocessing.LabelEncoder() le.fit(d['service_title']) d['service_title'+'_l'] = le.transform(d['service_title']) d = d.sort_values(by=['service_title'+'_l']).reset_index(drop=True) labeling_y = d labeling[0] i = 0 for col in X.columns: X[col] = X[col].map(labeling[i].set_index(col).to_dict()[col+'_l']) i += 1 X y = y.map(labeling_y.set_index('service_title').to_dict()['service_title'+'_l']) # + #s_w = compute_class_weight(class_weight='balanced', classes=labeling_y['service_title_l'], y=y) weights_l = labeling_y[:] weights_l['weights'] = 1 weights = pd.DataFrame(y) weights = pd.merge(weights, weights_l, how='left', 
left_on='service_title', right_on='service_title_l') #weights = np.array(weights['weights']) # - weights['service_title_y'].value_counts(normalize=True)[:30] weights['weights'].loc[weights['service_title_y'] == 4] = 0.4 weights['weights'].loc[weights['service_title_y'] == 603] = 0.6 weights['weights'].loc[weights['service_title_y'] == 98] = 0.9 weights['weights'].loc[weights['service_title_y'] == 134] = 0.87 weights = np.array(weights['weights']) for col in X.columns: print(col, len(X[col].unique())) import tensorflow as tf import tensorflow.keras.backend as K from sklearn.model_selection import StratifiedKFold # + # Detect hardware, return appropriate distribution strategy try: # TPU detection. No parameters necessary if TPU_NAME environment variable is # set: this is always the case on Kaggle. tpu = tf.distribute.cluster_resolver.TPUClusterResolver() print('Running on TPU ', tpu.master()) except ValueError: tpu = None if tpu: tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) else: # Default distribution strategy in Tensorflow. Works on CPU and single GPU. strategy = tf.distribute.get_strategy() print("REPLICAS: ", strategy.num_replicas_in_sync) # + AUTO = tf.data.experimental.AUTOTUNE BATCH_SIZE = 64 * strategy.num_replicas_in_sync LEARNING_RATE = 1e-3 * strategy.num_replicas_in_sync EPOCHS = 5 # - X.columns len(X['service_1'].unique()) def generate_dataset(idxT, idxV): trn_weights = weights[idxT, ] val_weights = weights[idxV, ] #n samples trn_input_ids = np.array(X.index)[idxT,] # Trainset trn_input_service_1 = np.array(X['service_1'])[idxT,] trn_input_service_title_1 = np.array(X['service_title_1'])[idxT,] trn_input_mfc_1 = np.array(X['mfc_1'])[idxT,] trn_input_internal_status_1 = np.array(X['internal_status_1'])[idxT,] trn_input_external_status_1 = np.array(X['external_status_1'])[idxT,] trn_input_order_type_1 = np.array(X['order_type_1'])[idxT,] trn_input_department_id_1 = np.array(X['department_id_1'])[idxT,] trn_input_custom_service_id_1 = np.array(X['custom_service_id_1'])[idxT,] trn_input_service_level_1 = np.array(X['service_level_1'])[idxT,] trn_input_is_subdep_1 = np.array(X['is_subdep_1'])[idxT,] trn_input_is_csid_1 = np.array(X['is_csid_1'])[idxT,] trn_input_dayofweek_1 = np.array(X['dayofweek_1'])[idxT,] trn_input_day_part_1 = np.array(X['day_part_1'])[idxT,] trn_input_month_1 = np.array(X['month_1'])[idxT,] trn_input_week_1 = np.array(X['week_1'])[idxT,] trn_input_year_1 = np.array(X['year_1'])[idxT,] trn_input_person_1 = np.array(X['person_1'])[idxT,] trn_input_sole_1 = np.array(X['sole_1'])[idxT,] trn_input_legal_1 = np.array(X['legal_1'])[idxT,] trn_input_auto_ping_queue_1 = np.array(X['auto_ping_queue_1'])[idxT,] trn_input_requester_type = np.array(X['requester_type'])[idxT,] trn_input_gender = np.array(X['gender'])[idxT,] trn_service_title = np.array(pd.get_dummies(y))[idxT,].astype('int32') # Validation set val_input_service_1 = np.array(X['service_1'])[idxV,] val_input_service_title_1 = np.array(X['service_title_1'])[idxV,] val_input_mfc_1 = np.array(X['mfc_1'])[idxV,] val_input_internal_status_1 = np.array(X['internal_status_1'])[idxV,] val_input_external_status_1 = np.array(X['external_status_1'])[idxV,] val_input_order_type_1 = np.array(X['order_type_1'])[idxV,] val_input_department_id_1 = np.array(X['department_id_1'])[idxV,] val_input_custom_service_id_1 = np.array(X['custom_service_id_1'])[idxV,] val_input_service_level_1 = np.array(X['service_level_1'])[idxV,] 
val_input_is_subdep_1 = np.array(X['is_subdep_1'])[idxV,] val_input_is_csid_1 = np.array(X['is_csid_1'])[idxV,] val_input_dayofweek_1 = np.array(X['dayofweek_1'])[idxV,] val_input_day_part_1 = np.array(X['day_part_1'])[idxV,] val_input_month_1 = np.array(X['month_1'])[idxV,] val_input_week_1 = np.array(X['week_1'])[idxV,] val_input_year_1 = np.array(X['year_1'])[idxV,] val_input_person_1 = np.array(X['person_1'])[idxV,] val_input_sole_1 = np.array(X['sole_1'])[idxV,] val_input_legal_1 = np.array(X['legal_1'])[idxV,] val_input_auto_ping_queue_1 = np.array(X['auto_ping_queue_1'])[idxV,] val_input_requester_type = np.array(X['requester_type'])[idxV,] val_input_gender = np.array(X['gender'])[idxV,] val_service_title = np.array(pd.get_dummies(y))[idxV,].astype('int32') # Generating tf.data object train_dataset = ( tf.data.Dataset .from_tensor_slices(({'service_1':trn_input_service_1, 'service_title_1': trn_input_service_title_1, 'mfc_1': trn_input_mfc_1, 'internal_status_1': trn_input_internal_status_1, 'external_status_1': trn_input_external_status_1, 'order_type_1': trn_input_order_type_1, 'department_id_1': trn_input_department_id_1, 'custom_service_id_1': trn_input_custom_service_id_1, 'service_level_1': trn_input_service_level_1, 'is_subdep_1': trn_input_is_subdep_1, 'is_csid_1': trn_input_is_csid_1, 'dayofweek_1': trn_input_dayofweek_1, 'day_part_1': trn_input_day_part_1, 'month_1': trn_input_month_1, 'week_1': trn_input_week_1, 'year_1': trn_input_year_1, 'person_1': trn_input_person_1, 'sole_1': trn_input_sole_1, 'legal_1': trn_input_legal_1, 'auto_ping_queue_1': trn_input_auto_ping_queue_1, 'requester_type': trn_input_requester_type, 'gender': trn_input_gender}, {'service_title': trn_service_title}, trn_weights)) .shuffle(2048) .batch(BATCH_SIZE) .prefetch(AUTO) ) valid_dataset = ( tf.data.Dataset .from_tensor_slices(({'service_1':val_input_service_1, 'service_title_1': val_input_service_title_1, 'mfc_1': val_input_mfc_1, 'internal_status_1': val_input_internal_status_1, 'external_status_1': val_input_external_status_1, 'order_type_1': val_input_order_type_1, 'department_id_1': val_input_department_id_1, 'custom_service_id_1': val_input_custom_service_id_1, 'service_level_1': val_input_service_level_1, 'is_subdep_1': val_input_is_subdep_1, 'is_csid_1': val_input_is_csid_1, 'dayofweek_1': val_input_dayofweek_1, 'day_part_1': val_input_day_part_1, 'month_1': val_input_month_1, 'week_1': val_input_week_1, 'year_1': val_input_year_1, 'person_1': val_input_person_1, 'sole_1': val_input_sole_1, 'legal_1': val_input_legal_1, 'auto_ping_queue_1': val_input_auto_ping_queue_1, 'requester_type': val_input_requester_type, 'gender': val_input_gender}, {'service_title': val_service_title}, val_weights)) .batch(BATCH_SIZE) .cache() .prefetch(AUTO) ) return trn_input_ids.shape[0]//BATCH_SIZE, train_dataset, valid_dataset def scheduler(epoch): return LEARNING_RATE * 0.2**epoch np.array(pd.get_dummies(y))[[0, 1],].astype('int32') def build_model(): service_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_1') service_title_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_title_1') mfc_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='mfc_1') internal_status_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='internal_status_1') external_status_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='external_status_1') order_type_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='order_type_1') department_id_1 = tf.keras.layers.Input((1,), dtype=tf.int32, 
name='department_id_1') custom_service_id_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='custom_service_id_1') service_level_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_level_1') is_subdep_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_subdep_1') is_csid_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_csid_1') dayofweek_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='dayofweek_1') day_part_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='day_part_1') month_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='month_1') week_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='week_1') year_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='year_1') person_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='person_1') sole_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='sole_1') legal_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='legal_1') auto_ping_queue_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='auto_ping_queue_1') requester_type = tf.keras.layers.Input((1,), dtype=tf.int32, name='requester_type') gender = tf.keras.layers.Input((1,), dtype=tf.int32, name='gender') service_1_embedding = tf.keras.layers.Embedding(len(X['service_1'].unique()), 11, input_length=1, name='service_1_embedding')(service_1) service_title_1_embedding = tf.keras.layers.Embedding(len(X['service_title_1'].unique()), 50, input_length=1, name='service_title_1_embedding')(service_title_1) mfc_1_embedding = tf.keras.layers.Embedding(len(X['mfc_1'].unique()), 50, input_length=1, name='mfc_1_embedding')(mfc_1) internal_status_1_embedding = tf.keras.layers.Embedding(len(X['internal_status_1'].unique()), 5, input_length=1, name='internal_status_1_embedding')(internal_status_1) external_status_1_embedding = tf.keras.layers.Embedding(len(X['external_status_1'].unique()), 14, input_length=1, name='external_status_1_embedding')(external_status_1) order_type_1_embedding = tf.keras.layers.Embedding(len(X['order_type_1'].unique()), 2, input_length=1, name='order_type_1_embedding')(order_type_1) department_id_1_embedding = tf.keras.layers.Embedding(len(X['department_id_1'].unique()), 46, input_length=1, name='department_id_1_embedding')(department_id_1) custom_service_id_1_embedding = tf.keras.layers.Embedding(len(X['custom_service_id_1'].unique()), 26, input_length=1, name='custom_service_id_1_embedding')(custom_service_id_1) service_level_1_embedding = tf.keras.layers.Embedding(len(X['service_level_1'].unique()), 3, input_length=1, name='service_level_1_embedding')(service_level_1) is_subdep_1_embedding = tf.keras.layers.Embedding(len(X['is_subdep_1'].unique()), 1, input_length=1, name='is_subdep_1_embedding')(is_subdep_1) is_csid_1_embedding = tf.keras.layers.Embedding(len(X['is_csid_1'].unique()), 1, input_length=1, name='is_csid_1_embedding')(is_csid_1) dayofweek_1_embedding = tf.keras.layers.Embedding(len(X['dayofweek_1'].unique()), 4, input_length=1, name='dayofweek_1_embedding')(dayofweek_1) day_part_1_embedding = tf.keras.layers.Embedding(len(X['day_part_1'].unique()), 2, input_length=1, name='day_part_1_embedding')(day_part_1) month_1_embedding = tf.keras.layers.Embedding(len(X['month_1'].unique()), 6, input_length=1, name='month_1_embedding')(month_1) week_1_embedding = tf.keras.layers.Embedding(len(X['week_1'].unique()), 26, input_length=1, name='week_1_embedding')(week_1) year_1_embedding = tf.keras.layers.Embedding(len(X['year_1'].unique()), 1, input_length=1, name='year_1_embedding')(year_1) person_1_embedding = 
tf.keras.layers.Embedding(len(X['person_1'].unique()), 1, input_length=1, name='person_1_embedding')(person_1) sole_1_embedding = tf.keras.layers.Embedding(len(X['sole_1'].unique()), 1, input_length=1, name='sole_1_embedding')(sole_1) legal_1_embedding = tf.keras.layers.Embedding(len(X['legal_1'].unique()), 1, input_length=1, name='legal_1_embedding')(legal_1) auto_ping_queue_1_embedding = tf.keras.layers.Embedding(len(X['auto_ping_queue_1'].unique()), 1, input_length=1, name='auto_ping_queue_1_embedding')(auto_ping_queue_1) requester_type_embedding = tf.keras.layers.Embedding(len(X['requester_type'].unique()), 2, input_length=1, name='requester_type_embedding')(requester_type) gender_embedding = tf.keras.layers.Embedding(len(X['gender'].unique()), 1, input_length=1, name='gender_embedding')(gender) concatenated = tf.keras.layers.Concatenate()([service_1_embedding, service_title_1_embedding, mfc_1_embedding, internal_status_1_embedding, external_status_1_embedding, order_type_1_embedding, department_id_1_embedding, custom_service_id_1_embedding, service_level_1_embedding, is_subdep_1_embedding, is_csid_1_embedding, dayofweek_1_embedding, day_part_1_embedding, month_1_embedding, year_1_embedding, person_1_embedding, sole_1_embedding, legal_1_embedding, auto_ping_queue_1_embedding, requester_type_embedding, gender_embedding]) #out = tf.keras.layers.Flatten()(concatenated) #out = tf.keras.layers.Dense(512, activation='relu')(out) #out = tf.keras.layers.Dense(256, activation='relu')(out) #out = tf.keras.layers.Dense(256, activation='relu')(out) #out = tf.keras.layers.Conv1D(128, 2, padding='same')(concatenated) #out = tf.keras.layers.LeakyReLU()(out) #out = tf.keras.layers.Conv1D(64, 2, padding='same')(out) #out = tf.keras.layers.Flatten()(out) conv_0 = tf.keras.layers.Conv1D(128, 3, padding='same', activation='relu')(concatenated) conv_1 = tf.keras.layers.Conv1D(128, 2, padding='same', activation='relu')(concatenated) conv_2 = tf.keras.layers.Conv1D(128, 6, padding='same', activation='relu')(concatenated) conv_0 = tf.keras.layers.Conv1D(64, 3, padding='same', activation='relu')(conv_0) conv_1 = tf.keras.layers.Conv1D(64, 2, padding='same', activation='relu')(conv_1) conv_2 = tf.keras.layers.Conv1D(64, 6, padding='same', activation='relu')(conv_2) concatenated_tensor = tf.keras.layers.Concatenate(axis=1)([conv_0, conv_1, conv_2]) out = tf.keras.layers.Flatten()(concatenated_tensor) out = tf.keras.layers.Dense(len(y.unique()), activation='softmax', name='service_title')(out) model = tf.keras.models.Model(inputs=[service_1, service_title_1, mfc_1, internal_status_1, external_status_1, order_type_1, department_id_1, custom_service_id_1, service_level_1, is_subdep_1, is_csid_1, dayofweek_1, day_part_1, month_1, week_1, year_1, person_1, sole_1, legal_1, auto_ping_queue_1, requester_type, gender], outputs=out) optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE) model.compile(loss = 'categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"]) return model n_splits = 5 # + VER='v5' DISPLAY=1 # USE display=1 FOR INTERACTIVE service_title = np.zeros((X.shape[0], 1)) skf = StratifiedKFold(n_splits=n_splits,shuffle=True,random_state=777) for fold, (idxT, idxV) in enumerate(skf.split(X,y)): print('#'*25) print('### FOLD %i'%(fold+1)) print('#'*25) # Cleaning everything K.clear_session() tf.tpu.experimental.initialize_tpu_system(tpu) # Building model with strategy.scope(): model = build_model() n_steps, trn_dataset, val_dataset = generate_dataset(idxT, idxV) reduce_lr = 
tf.keras.callbacks.LearningRateScheduler(scheduler) sv = tf.keras.callbacks.ModelCheckpoint( '%s-learnedemb-%i.h5'%(VER,fold), monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=True, mode='auto', save_freq='epoch') hist = model.fit(trn_dataset, epochs=EPOCHS, verbose=DISPLAY, callbacks=[sv, reduce_lr], validation_data=val_dataset) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # change working directory to the project root import os os.chdir('../../') import sys sys.path.append('models/utils') sys.path.append('models/brian2') sys.path.append('models/aln') # + jupyter={"outputs_hidden": false} # aln-imports import defaultParameters as dp import fitparams as fp import aEIF_extended as IF_Models # python-imports #% matplotlib inline import matplotlib.pyplot as plt import matplotlib.colors import numpy as np # - params = dp.loadDefaultParams(singleNode=1) # # Calculate and plot effective electrical field input # + jupyter={"outputs_hidden": false} EIFNeuron = IF_Models.EIFModel() print("aEIF C: {}".format(params['C']*1e-12)) print("aEIF gL: {}".format(params['gL']*1e-9)) # use default parameters for the BS neuron, aEIF_extended.py #EIFNeuron.C_soma = 1.0e-2 # Soma membrane capacitance (F / m2) #EIFNeuron.rhom_soma = 28.0e-1 # Soma membrane resistivity (Ohm m2) #EIFNeuron.d_soma = 10e-6 # Soma diameter ( m ) EIFNeuron.d_dend = 2.0e-6 # Dendritic tree diameter (m) EIFNeuron.L = 1200.0e-6 # Dendritic tree length (m) EIFNeuron.updateParams() print("BS C: {}".format(EIFNeuron.C_s)) print("BS gL: {}".format(EIFNeuron.gL)) EIFNeuron.polarizationTransfer # + jupyter={"outputs_hidden": false} def current_for_adex(freq, EIFNeuron): # impedance z_adex = params['gL']*1e-9 * (1-1 * np.exp((-70-params['VT'])/params['DeltaT'])) + params['C']*1e-12 * 1j* 2*np.pi * freq z_bs = EIFNeuron.gL * (1 - 1 * np.exp((-65e-3-EIFNeuron.VT)/EIFNeuron.deltaT)) + EIFNeuron.C_s * 1j* 2*np.pi * freq current = z_adex * EIFNeuron.polarizationTransfer(freq) return np.abs(current) freq = 30 print("{} pA".format(current_for_adex(freq, EIFNeuron)*1e12)) # + jupyter={"outputs_hidden": false} plt.figure(figsize=(3.5, 2.0), dpi=600) maxfr = 100 freqs = np.linspace(0, maxfr, maxfr+1) amps = [] for f in freqs: amps.append(current_for_adex(f, EIFNeuron)*1e12) cmap = plt.cm.cool_r norm = matplotlib.colors.Normalize(vmin=1, vmax=10) def c(c): return cmap(norm(c)) col = 'k' for amp in [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20]: plt.plot(freqs, np.multiply(amps,amp), c=c(amp), label='{} V/m'.format(amp)) plt.ylim(0, 130) plt.xlim(0, 80) #plt.hlines(0.2, 0, maxfr, color='k', linestyle='--') #plt.hlines(0.4, 0, maxfr, color='k', linestyle='--') plt.grid(linewidth=0.2) plt.locator_params(axis='x', nbins=10) plt.locator_params(axis='y', nbins=10) plt.tick_params(axis='both', which='major', labelsize=6) plt.xlabel("Input frequency [Hz]") plt.ylabel("Input current [pA]") col = 'k' fs = 6.5 import scipy.misc for i, amp in enumerate([1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20]): freq_x = (10-amp)*4+2 plotfreqs = [4, 8, 11, 14, 17, 21, 25, 30, 35, 40, 45][::-1] freq_x = plotfreqs[i] amp_val = np.multiply(amps,amp)[freq_x] + 1 rot = 1/scipy.misc.derivative(lambda amp: np.multiply(amps,amp)[freq_x], freq_x) * 200 rots = [3, 6, 13, 20, 30, 35, 50, 55, 58, 60, 60] rot = rots[i] bbox_props = dict(fc="w", ec="white", pad=0.0, alpha=1.0) plt.text(freq_x, amp_val, '{} 
V/m'.format(amp), fontsize=5, color=col, bbox=bbox_props, rotation=rot, ha='center', va='center') plt.savefig("figures/equivalent_electric_field.png") plt.savefig("figures/equivalent_electric_field.png") # - # # Inverse problem # + jupyter={"outputs_hidden": false} # invert problem by brute force: def find_efield_amp(freq, current_amp): # current_amp: amplitude of current in pA that you want to convert to field # freq: the frequency of the oscillation return current_amp/(current_for_adex(freq, EIFNeuron)*1e12) amps = [10, 40, 60, 80, 100] freqs = np.linspace(0, 100, 101) field_amps = [] plt.figure(figsize=(3, 1), dpi=600) for amp in amps: field_amps = [] for freq in freqs: field_amp = find_efield_amp(freq, amp) field_amps.append(field_amp) #print(amp, field_amp) plt.plot(freqs, field_amps, label="{} pA".format(amp)) plt.locator_params(axis='y', nbins=6) plt.locator_params(axis='x', nbins=8) plt.grid() plt.legend(loc=1, prop={'size': 6}) plt.xlabel("Stimulataion frequency [Hz]") plt.ylabel("E-Field [V/m]") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + language="bash" # # echo "test" > test # # + language="bash" # # for i in {1..100}; do cp test "test_$i"; done # # + language="bash" # # echo "COVID" >> test_100 # # + language="bash" # # cat test_100 # # + language="bash" # # grep -rnw -e "COVID" # # + language="bash" # # rm test_* test # # + language="bash" # # whoami # + language="bash" # # pwd # + language="bash" # # ls # + language="bash" # # ls -l # + language="bash" # # ls -F # + language="bash" # # cd .. # + language="bash" # # find *.csv # + language="bash" # # find *.csv | less # + language="bash" # # ls -t | find *.csv | head -n1 # + language="bash" # # mkdir exercises # + language="bash" # # cd exercise | touch test ; ls # + language="bash" # # cat test # + language="bash" # # echo "something" test ; cat test # + language="bash" # # echo "anything" >> test # + language="bash" # # cp test test_1 # + language="bash" # # for i in {1..100}; do cp test "test_$i"; done # + language="bash" # # rm test_* # + language="bash" # # mkdir exercise_1 ; mkdir exercise_2 # + language="bash" # # find exer* # + language="bash" # # touch exercise_1/test # + language="bash" # # cd exercise_1 ; ls # + language="bash" # # mv test ../exercise_2 # + language="bash" # # cd exercise_2 ; ls # + language="bash" # # cat billionaires.csv | head -n2 # + language="bash" # # wc -l billionaires.csv # + language="bash" # # awk '/China/ {print}' billionaires.csv | head -n5 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt # # Step 0 and 1 # Import Data (only FD001) # Download data from S3 # + import boto3 import sagemaker #role = sagemaker.get_execution_role() role = '' region = boto3.Session().region_name print(region) print(role) # Test bucket name immersionday-sagemaker-test bucket_name = 'iiot-book-data' file_name_train = 'train_FD001.txt' file_name_test = 'test_FD001.txt' import boto3 s3 = boto3.resource('s3') s3.Bucket(bucket_name).download_file(file_name_train, file_name_train) s3.Bucket(bucket_name).download_file(file_name_test,file_name_test) # - # Import Data (only FD001) # step 1: read the 
dataset columns = ['unitid', 'time', 'set_1','set_2','set_3'] columns.extend(['sensor_' + str(i) for i in range(1,22)]) df = pd.read_csv(file_name_train, delim_whitespace=True,names=columns) df_test = pd.read_csv(file_name_test, delim_whitespace=True,names=columns) # ## Prepare the data set # Remove unusefull columns. # + # only engine 1 i=1 # prepare model columns_feature=['sensor_4','sensor_7'] file_name_train = 'train.csv' file_name_test = 'test.csv' dataset_train=df[(df.unitid ==i)] dataset_test=df_test[(df_test.unitid ==i)] # - # ## Save on local test print(dataset_train) np.savetxt(os.path.join('./container/local_test/test_dir/input/data/training',file_name_train), dataset_train, delimiter=",",fmt='%s') # --- # # Build and deploy Docker File # # Now we can build our docker file, test locally and deploy on the cloud # + ## Local Test # + import os import subprocess def run_interactive(command, working_directory='.'): p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=working_directory) for line in p.stdout.readlines(): print (line) retval = p.wait() # build docker file run_interactive('docker build -t rul-estimator .','container') # - # ## Train the model # Train model 'container/local_test/train_local.sh rul-estimator' # # ### Sart locally # From command line: # Start 'container/local_test/serve_local.sh rul-estimator' # ## Test local model run_interactive('./predict_local.sh ../../test.csv', 'container/local_test') # --------- # # Publish image on AWS ECS # publish image on AWS ECS file run_interactive('./build_and_push.sh rul-estimator','container/') print('now you can eccass to https://console.aws.amazon.com/ecs/ ') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.9 64-bit (''blog'': conda)' # metadata: # interpreter: # hash: a791f4b9424e3975ed64217961a14e58af61de2cbd0b16cd6f0b3a8e9eaf4389 # name: 'Python 3.7.9 64-bit (''blog'': conda)' # --- # # Difference quotients from scratch with Python # # # From the [Data Science from Scratch book](https://www.oreilly.com/library/view/data-science-from/9781492041122/). 
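# As a reminder of the math implemented below: for a function $f$ and a step size $h > 0$, the difference quotient
#
# $$ \frac{f(x + h) - f(x)}{h} $$
#
# approximates the derivative $f'(x)$, and the approximation improves as $h \to 0$. The partial version used for gradient estimation perturbs only the $i$-th coordinate of a vector argument and leaves the others unchanged.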
# ## Libraries and helper functions import pandas as pd import altair as alt from typing import List Vector = List[float] Vector # + def dot(vector1: Vector, vector2: Vector) -> float: assert len(vector1) == len(vector2) return sum(v1 * v2 for v1, v2 in zip(vector1, vector2)) assert dot([1, 2, 3], [4, 5, 6]) == 32 # + def sum_of_squares(v: Vector) -> Vector: return dot(v, v) assert sum_of_squares([1, 2, 3]) == 14 # - # ## Difference quotient # + from typing import Callable def difference_quotient( f: Callable[[float], float], x: float, h: float ) -> float : return (f(x + h) - f(x)) / h # - # ## A simple estimate def square(x: float) -> float: return x * x def derivative_x2(x: float) -> float: return 2 * x xs = range(-10, 11) actuals = [derivative_x2(x) for x in xs] actuals estimates = [difference_quotient(square, x, h=0.001) for x in xs] estimates df = pd.DataFrame({'actuals': actuals, 'estimates': estimates}).reset_index() df = df.melt(id_vars='index') df.sample(10) alt.Chart(df).mark_circle(opacity=0.75).encode( alt.X('index:Q'), alt.Y('value:Q'), alt.Size('variable:N'), alt.Color('variable:N') ).properties(title='Actual derivatives and estimated quotients') # ## Calculating an i-th difference quotient def partial_difference_quotient( f: Callable[[Vector], float], v: Vector, i: int, h: float ) -> float: """Return i-th parital difference quotient of `f` at a`v`""" w = [ v_j + (h if j == i else 0) for j, v_j in enumerate(v) ] return (f(w) - f(v)) / h def estimate_gradient( f: Callable[[Vector], float], v: Vector, h: float = 0.0001 ): return [ partial_difference_quotient(f, v, i, h) for i in range(len(v)) ] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt class EKF: def __init__(self, P, A, Q, R): ''' P: Predicted estimate covariance A: The state-transition model Q: The covariance of the process noise H: The observation model R: The covariance of the observation noise K: Kalman gain ''' self.P = P self.A = A self.Q = Q self.H = None self.R = R self.K = None def step(self, x, z): # Predict x = np.dot(self.A, x) self.P = np.dot(self.A, np.dot(self.P, self.A.T)) + self.Q # Kalman Gain self.H = self.get_H(x) self.K = np.dot(np.dot(self.P, self.H.T), np.linalg.pinv(np.dot(self.H, np.dot(self.P, self.H.T)) + self.R)) # Estimate x = x + np.dot(self.K, (z - self.get_pred(x))) # Update P self.P = self.P - np.dot(self.K, np.dot(self.H, self.P)) return x def get_pred(self, x): return np.matrix([np.sqrt(np.square(x[0, 0]) + np.square(x[2, 0]))]) def get_H(self, x): return np.matrix([[x[0, 0] / (np.sqrt(np.square(x[0, 0]) + np.square(x[2, 0]))), 0, x[2, 0] / (np.sqrt(np.square(x[0, 0]) + np.square(x[2, 0])))]]) # ### Data # + # Set Random Seed np.random.seed(42) # Generate Data position = 0 velocity = 100 altitude = 1000 dt = 0.05 num = 400 mea_dis = [] mea_pos = [] mea_vel = [] mea_alt = [] for i in range(num): v = velocity + 5 * np.random.randn() a = altitude + 10 * np.random.randn() p = position + velocity * dt d = np.sqrt(np.square(a) + np.square(p)) + p * 0.05 * np.random.randn() mea_dis.append(d) mea_pos.append(p) mea_vel.append(v) mea_alt.append(a) position = p # - # ### Run # + # Set Parameters x = np.matrix([[0], [90], [1100]]) P = np.matrix([[10, 0, 0], [0, 10, 0], [0, 0, 10]]) A = np.matrix([[1, 0.05, 0], [0, 1, 0], [0, 0, 1]]) Q = np.matrix([[0, 0, 0], [0, 
0.001, 0], [0, 0, 0.001]]) R = np.matrix([10]) # Kalman Filter kalman = EKF(P, A, Q, R) est_pos = [] est_vel = [] est_alt = [] Ps = [] Ks = [] for z in mea_dis: z = np.matrix([z]) x = kalman.step(x, z) est_pos.append(x[0, 0]) est_vel.append(x[1, 0]) est_alt.append(x[2, 0]) est_dis = np.sqrt(np.square(est_pos) + np.square(est_alt)) # - # ### Graph # + plt.figure(figsize=(25, 4)) plt.subplot(1, 4, 1) plt.plot(mea_pos, 'o', alpha=0.5, color='k', label='Data') plt.plot(est_pos, lw=3, color='r', label='Estimation (EKF)') plt.legend() plt.xlim(0, num) plt.xlabel('Time') plt.ylabel('Position') plt.subplot(1, 4, 2) plt.plot(mea_vel, 'o', alpha=0.5, color='k', label='Data') plt.plot(est_vel, lw=3, color='r', label='Estimation (EKF)') plt.legend() plt.xlim(0, num) plt.xlabel('Time') plt.ylabel('Velocity') plt.subplot(1, 4, 3) plt.plot(mea_alt, 'o', alpha=0.5, color='k', label='Data') plt.plot(est_alt, lw=3, color='r', label='Estimation (EKF)') plt.legend() plt.xlim(0, num) plt.xlabel('Time') plt.ylabel('Altitude') plt.subplot(1, 4, 4) plt.plot(mea_dis, 'o', alpha=0.5, color='k', label='Data') plt.plot(est_dis, lw=3, color='r', label='Estimation (EKF)') plt.legend() plt.xlim(0, num) plt.xlabel('Time') plt.ylabel('Distance') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="cM9RkMXxmg6X" # ## Mask Adaptivity Detection Using YOLO # # Mask became an essential accessory post COVID-19. Most of the countries are making face masks mandatory to avail services like transport, fuel and any sort of outside activity. It is become utmost necessary to keep track of the adaptivity of the crowd. This notebook from DeepHiveMind contains implementation of a face mask adaptivity tracker using YOLO. 
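# At its core the tracker turns per-frame YOLO detections into a single "mask adaptivity" ratio and maps it to a POOR / MODERATE / HIGH band. A standalone sketch of that logic follows; the full pipeline later embeds it inside `get_predection()`, and the reading that class 0 means "masked" and class 1 means "unmasked" is inferred from the drawing code further down.

# +
# Standalone helpers mirroring the index logic used later inside get_predection()
# (a sketch; class 0 = masked face, class 1 = unmasked face is an inferred assumption).
def mask_adaptivity_index(num_masked, num_unmasked):
    """Fraction of detected faces wearing a mask; -1 when no faces were detected."""
    total = num_masked + num_unmasked
    return num_masked / total if total > 0 else -1

def adaptivity_band(index):
    """Same 0.50 / 0.70 thresholds the notebook uses when annotating frames."""
    if index == -1:
        return "NO DETECTIONS"   # the notebook simply skips the banner in this case
    if index < 0.50:
        return "POOR"
    if index < 0.70:
        return "MODERATE"
    return "HIGH"

print(adaptivity_band(mask_adaptivity_index(3, 7)))   # 0.3 -> POOR
print(adaptivity_band(mask_adaptivity_index(8, 2)))   # 0.8 -> HIGH
# -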
# # ![cover](https://i.ibb.co/RBG6m20/cover.png) # + colab_type="code" id="ero65aTmh5qu" colab={} import warnings import numpy as np import argparse import time import cv2 import os warnings.filterwarnings("ignore") # + [markdown] colab_type="text" id="YMRWP7xZlxB8" # ### Prepare DarkNet Environment # + colab_type="code" id="0XQThXbm5L6T" outputId="292b0e16-e31f-47b8-df22-1a019cd77b1b" colab={"base_uri": "https://localhost:8080/", "height": 1000} # Create DarkNet Environment def prepare_environment(): os.environ['PATH'] += ':/usr/local/cuda/bin' # !rm -fr darknet # !git clone https://github.com/AlexeyAB/darknet/ # !apt install gcc-5 g++-5 -y # !ln -s /usr/bin/gcc-5 /usr/local/cuda/bin/gcc # !ln -s /usr/bin/g++-5 /usr/local/cuda/bin/g++ # %cd darknet # !sed -i 's/GPU=0/GPU=1/g' Makefile # !sed -i 's/OPENCV=0/OPENCV=1/g' Makefile # !make # get yolov3 weights # #!wget https://pjreddie.com/media/files/darknet53.conv.74.weights # !chmod a+x ./darknet # !apt install ffmpeg libopencv-dev libgtk-3-dev python-numpy python3-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libtiff5-dev libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libv4l-dev libtbb-dev qtbase5-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip status='Completed' return status prepare_environment() # + colab_type="code" id="zCYwTSgQzpwN" outputId="0fe4f8e0-1b70-4249-9b19-849e1e32bac6" colab={"base_uri": "https://localhost:8080/", "height": 121} from google.colab import drive drive.mount('/content/drive') # + colab_type="code" id="ImCWrY-e0ZpQ" outputId="67fa87f4-5c73-40f7-ef91-a2f47ae0ae03" colab={"base_uri": "https://localhost:8080/", "height": 185} os.listdir('/content/drive/My Drive/darknet/YOLO_Custom') # + [markdown] colab_type="text" id="8GjNCDE4ZSqK" # ### Get Tiny YOLO Weight (Skip if Resuming Training) # + colab_type="code" id="xDOXcABQVpRn" outputId="4e370ab0-f1c7-4868-b981-2e0cdefa45b6" colab={"base_uri": "https://localhost:8080/", "height": 0} # !wget --header="Host: pjreddie.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --header="Referer: https://github.com/AlexeyAB/darknet" --header="Cookie: __utma=134107727.1364647705.1589636782.1589689587.1589901067.3; __utmz=134107727.1589901067.3.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided)" --header="Connection: keep-alive" "https://pjreddie.com/media/files/yolov3-tiny.weights" -c -O 'yolov3-tiny.weights' # + colab_type="code" id="Ryv56nNnW-V9" outputId="3be424c8-3cae-4de9-eed7-150fea6e1adb" colab={"base_uri": "https://localhost:8080/", "height": 0} # !./darknet partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15 # + [markdown] colab_type="text" id="f8dP_2ym7kZB" # # + [markdown] colab_type="text" id="eBrnmAohZzPX" # ### Copy Required Files from Drive # + colab_type="code" id="28bjxQKML3Nr" colab={} # Copy fils from Google Drive to the VM local filesystem # !cp -r "/content/drive/My Drive/darknet/YOLO_Custom" /content/darknet/YOLO_Custom # + [markdown] colab_type="text" id="UvPkfqEUaAiH" # ### Train # + [markdown] colab_type="text" id="DA9t5O5XaGJ6" # Use the below command to train yolo: # # # # !./darknet 
detector train data_file_path config_file_path training_weights_path log_path # + [markdown] colab_type="text" id="f7GmL-ttsvQN" # To train yolov3 instead of tiny yolo, replace the following files: # # use ***yolov3_train_cfg*** instead of ***yolov3-tiny_obj_train.cfg*** # use ***yolov3_test_cfg*** instead of ***yolov3-tiny_obj_test.cfg*** for testing purpose. # Replace the yolov3 weight link with the tiny yolo weight link # + colab_type="code" id="Q5JJidTpPoJs" outputId="510cdf8b-2fe2-4390-e315-1c2aae241c3a" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !./darknet detector train "/content/darknet/YOLO_Custom/obj.data" "/content/darknet/YOLO_Custom/yolov3-tiny_obj_train.cfg" "/content/darknet/yolov3-tiny.conv.15" "train.log" -dont_show # + [markdown] colab_type="text" id="y7e5GdAPimAJ" # ### Utility Functions # + colab_type="code" id="sbBdherw_PTU" colab={} # Define threshold for confidence score and Non max supression here confthres=0.2 nmsthres=0.1 path="./" def get_labels(label_dir): return open(label_dir).read().split('\n') def get_colors(LABELS): # initialize a list of colors to represent each possible class label np.random.seed(42) COLORS = np.random.randint(0, 255, size=(len(LABELS), 3),dtype="uint8") return COLORS def get_weights(weights_path): # derive the paths to the YOLO weights and model configuration weightsPath = os.path.sep.join([yolo_path, weights_path]) return weightsPath def get_config(config_path): configPath = os.path.sep.join([yolo_path, config_path]) return configPath def load_model(configpath,weightspath): # load our YOLO object detector trained on COCO dataset (80 classes) print("[INFO] loading YOLO from disk...") net = cv2.dnn.readNetFromDarknet(configpath, weightspath) return net # + colab_type="code" id="gmvObbr3_Ubz" colab={} #https://medium.com/analytics-vidhya/object-detection-using-yolo-v3-and-deploying-it-on-docker-and-minikube-c1192e81ae7a def get_predection(image,net,LABELS,COLORS): (H, W) = image.shape[:2] # determine only the *output* layer names that we need from YOLO ln = net.getLayerNames() ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()] # construct a blob from the input image and then perform a forward # pass of the YOLO object detector, giving us our bounding boxes and # associated probabilities blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False) net.setInput(blob) start = time.time() layerOutputs = net.forward(ln) #print(layerOutputs[0]) end = time.time() # show timing information on YOLO #print("[INFO] YOLO took {:.6f} seconds".format(end - start)) # initialize our lists of detected bounding boxes, confidences, and # class IDs, respectively boxes = [] confidences = [] classIDs = [] num_class_0 = 0 num_class_1 = 0 # loop over each of the layer outputs for output in layerOutputs: # loop over each of the detections for detection in output: # extract the class ID and confidence (i.e., probability) of # the current object detection scores = detection[5:] #print(detection) classID = np.argmax(scores) # print(classID) confidence = scores[classID] # filter out weak predictions by ensuring the detected # probability is greater than the minimum probability if confidence > confthres: # scale the bounding box coordinates back relative to the # size of the image, keeping in mind that YOLO actually # returns the center (x, y)-coordinates of the bounding # box followed by the boxes' width and height box = detection[0:4] * np.array([W, H, W, H]) (centerX, centerY, width, height) = box.astype("int") # 
use the center (x, y)-coordinates to derive the top and # and left corner of the bounding box x = int(centerX - (width / 2)) y = int(centerY - (height / 2)) # update our list of bounding box coordinates, confidences, # and class IDs boxes.append([x, y, int(width), int(height)]) confidences.append(float(confidence)) classIDs.append(classID) if(classID==0): num_class_0 +=1 elif(classID==1): num_class_1 +=1 # apply non-maxima suppression to suppress weak, overlapping bounding # boxes idxs = cv2.dnn.NMSBoxes(boxes, confidences, confthres, nmsthres) if(num_class_0>0 or num_class_1>0): index= num_class_0/(num_class_0+num_class_1) else: index=-1 #print(index,) # ensure at least one detection exists if len(idxs) > 0: # loop over the indexes we are keeping for i in idxs.flatten(): # extract the bounding box coordinates (x, y) = (boxes[i][0], boxes[i][1]) (w, h) = (boxes[i][2], boxes[i][3]) # draw a bounding box rectangle and label on the image color = [int(c) for c in COLORS[classIDs[i]]] cv2.rectangle(image, (x, y), (x + w, y + h), color, 2) text = "{}".format(LABELS[classIDs[i]]) #print(boxes) #print(classIDs) #cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,0.5, color=(69, 60, 90), thickness=2) cv2.rectangle(image, (x, y-5), (x+62, y-15), color, cv2.FILLED) cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,0.5, (0,0,0), 1) if(index!=-1 and index<.50): cv2.rectangle(image, (40, 46), (220, 16), (0,0,255), cv2.FILLED) cv2.putText(image,'Mask Adaptivity: POOR',(40,40),cv2.FONT_HERSHEY_SIMPLEX,0.5, (0,255,255), 1) elif(index>=.50 and index<.70): cv2.rectangle(image, (40, 46), (255, 16), (0, 165, 255), cv2.FILLED) cv2.putText(image,'Mask Adaptivity: MODERATE',(40,40),cv2.FONT_HERSHEY_SIMPLEX,0.5, (0,0,0), 1) elif(index>=0.70): cv2.rectangle(image, (40, 46), (220, 16), (42,236,42), cv2.FILLED) cv2.putText(image,'Mask Adaptivity: HIGH',(40,40),cv2.FONT_HERSHEY_SIMPLEX,0.5, (0,0,0), 1) return image # + colab_type="code" id="UapmhRcxfr_J" colab={} # Method to predict Image def predict_image(img_path): image = cv2.imread(img_path) nets=load_model(CFG,Weights) #Colors=get_colors(Lables) Colors=[(42,236,42),(0,0,255)] res=get_predection(image,nets,Lables,Colors) # image=cv2.cvtColor(image,cv2.COLOR_BGR2RGB) # show the output image cv2_imshow(res) cv2.waitKey() # + colab_type="code" id="uDlmOMWChG23" colab={} # Method to predict Video def predict_video(input_path,output_path): vid = cv2.VideoCapture(input_path) op=output_path height, width = None, None writer = None print('[Info] processing Video (It may take several minutes to Run)..') while True: grabbed, frame = vid.read() # Checking if the complete video is read if not grabbed: break if width is None or height is None: height, width = frame.shape[:2] frame=get_predection(frame,nets,Lables,Colors) if writer is None: fourcc = cv2.VideoWriter_fourcc(*"mp4v") writer = cv2.VideoWriter(op, fourcc, 27,(frame.shape[1], frame.shape[0]), True) writer.write(frame) print ("[INFO] Cleaning up...") writer.release() vid.release() print ("[INFO] Prediction Completed.") # + colab_type="code" id="VeDnJ1Frohs6" colab={} # This will not work in colab, as colab can't access local hardware import time def predict_web_cam(): #stream = cv2.VideoCapture(0) sess = K.get_session() while True: # Capture frame-by-frame grabbed, frame = stream.read() if not grabbed: break # Run detection start = time.time() image = Image.fromarray(frame) output_image = get_predection(image,nets,Lables,Colors) end = time.time() print("Inference time: {:.2f}s".format(end - 
start)) # Display the resulting frame cv2.imshow('Web Cam',np.asarray(output_image)) if cv2.waitKey(1) & 0xFF == ord('q'): break stream.release() cv2.destroyAllWindows() # + [markdown] colab_type="text" id="RVTB5zwwg5HQ" # ### Set the Variables (Must before Prediction) # + colab_type="code" id="JGotYYL5goIP" colab={} # Set the pah for test config file, label directory and weights CFG='/content/darknet/YOLO_Custom/yolov3-tiny_obj_test.cfg' label_dir='/content/darknet/YOLO_Custom/obj.names' #cfgpath=test_config_path Weights='/content/darknet/YOLO_Custom/yolov3-tiny_obj_train_tiny8.weights' Lables=get_labels(label_dir) # + [markdown] colab_type="text" id="mNjBMQk1ggXn" # ### Predict Image # + colab_type="code" id="1Nd17EMo_X2n" outputId="f3117978-4f58-46a5-dddc-9f5b0af46124" colab={"base_uri": "https://localhost:8080/", "height": 512} from google.colab.patches import cv2_imshow img_path='/content/buckey.jpg' predict_image(img_path) # + [markdown] colab_type="text" id="q2DUMTvFiLxb" # ### Predict Video # + colab_type="code" id="_mDqXjOumaNs" outputId="f9ae3f64-3f01-4556-b26e-788df8c75df6" colab={"base_uri": "https://localhost:8080/", "height": 50} input_path='/content/in.mp4' output_path='/content/out.mp4' predict_video(input_path,output_path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="oqyGsgQXkorC" # Add Multi-line Comment - CTRL+/ # Online Compiler:- # Google Colab # Repl.it # Local Compiler:- [IDE] # Anaconda Navigator # PyCharm # + colab={"base_uri": "https://localhost:8080/"} id="pYLlaQaqau0v" outputId="41a8caf5-8b22-4532-e41c-b99f64905f50" # This is my first program in Python (SLoC) ''' This is a Multi-line Comment Author: Sanket Date: 02/11/2021 ''' print("hello world") # + colab={"base_uri": "https://localhost:8080/"} id="4-PLV-e6ebRQ" outputId="b98de00e-3e1c-4208-ebd6-b8a93fc7d078" # To RUN the Code Snippet - PLAY or CTRL+Enter or CMD+Enter print("hello world") print('hello world') print('''yorker''') print("""youtube""") # + colab={"base_uri": "https://localhost:8080/", "height": 167} id="j0BJf9xvglio" outputId="5cea8fbd-1639-4b53-b149-f01c4a5db381" # Python is a case-sensitive language PRINT("helloworld") # + colab={"base_uri": "https://localhost:8080/"} id="RCEi6BdMhMue" outputId="d9a05668-131b-4bcd-8126-4ec2db37a19b" print("helloworld") # Built-in Function # + colab={"base_uri": "https://localhost:8080/"} id="cAiApylzhWhi" outputId="dd213855-2de9-4c73-af9d-8c8dd4e83eab" x = 45 # definition of x -> Declaration + Initialization print(x) # + colab={"base_uri": "https://localhost:8080/", "height": 185} id="9rVq1g1DhdKO" outputId="a378f541-12f5-4fd2-c055-1f07ce339cc7" NUM = 600 print(num) # + colab={"base_uri": "https://localhost:8080/"} id="GXCVAtwJiItR" outputId="aa405de9-08b2-421c-fd80-f8cebaf60c31" # List Of Operators in Python # Arithmetic Operators (+,-,/,*,**,//,%) print(3+4) # + Addition Operator - Binary Operator print(3-4) print(3*4) print(13/4) # float division operator print(12/4) # float division operator print(13//4) # integer division operator - 3.25 --> it trims the fractional part and returns integer part --> 3 print(23//3) # returns integer -> 7 print(12**2) # Power Operator -> base ** exponent -> 12 ** 2 -> 144 print(45 % 20) # Modulo Operator - Remainder Value -> 5 print( 2020 % 4 ) # Remainder -> 0 # + colab={"base_uri": 
"https://localhost:8080/"} id="W2CdTHi4l0HX" outputId="7884d33c-ce7e-46e0-d8cf-4689305fb6c1" # Relational Operators (<,>,>=,<=,!=,==) return True/False -> Boolean(1/0)(T/F) print(3<4) print(3>4) print(3<=4) print(3>=4) print(23 != 3) # not-equal-to [inequality operator] -> True print(23 == 3) # equal-to [equality operator] -> False # + colab={"base_uri": "https://localhost:8080/"} id="RURGrrx1nAf0" outputId="018724c8-cefe-4e67-f80a-3aedacd9948c" # Logical Operators (and, or, not) return True/False -> Boolean(1/0)(T/F) print( 56<4 and 56>78) # and operator returns True if both operands are True print( True and True) print( False and False) print( False and True) print( True and False) # + id="fK5RzZZslnUV" # + colab={"base_uri": "https://localhost:8080/"} id="bK55Qyzoot8A" outputId="e18c66c4-cb6e-46fd-eb16-9548d8fb82c1" print( 56<4 or 56>78) # or operator returns True if atleast one of the operands is True print( 56<4 or 56>78) # False or False -> False print( True or True) print( False or False) print( False or True) print( True or False) # + colab={"base_uri": "https://localhost:8080/"} id="JebvaW3HpqXp" outputId="f0790957-3997-4a64-dd44-107958944020" # not is a unary operator - inverts the value (True->False) (False->True) print( not True) # not True -> False print( not (56<67)) # not True -> False age = 30 # + colab={"base_uri": "https://localhost:8080/", "height": 132} id="6nQrFWUGnz7X" outputId="8401f31b-f469-4c2e-f8af-fa4c246f3ed8" True = 34 # + id="2w64f4utmSai" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="8a9bdf11" # # Author : # + [markdown] id="5810d787" # ### TASK 1:Iris Flowers Classification ML Project # + [markdown] id="1123e586" # ##### Problem Statement :This particular ML project is usually referred to as the “Hello World” of Machine Learning. The iris flowers dataset contains numeric attributes, and it is perfect for beginners to learn about supervised ML algorithms, mainly how to load and handle data. Also, since this is a small dataset, it can easily fit in memory without requiring special transformations or scaling capabilities. 
# + [markdown] id="1825e1a6" # Dataset link : http://archive.ics.uci.edu/ml/datasets/Iris # + [markdown] id="02a4e0d3" # #### Step-1 : Lets First import required liberies # + id="da1a7931" import pandas as pd import numpy as np import scipy.stats as st import os import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.tree import DecisionTreeClassifier from sklearn.linear_model import LinearRegression from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.preprocessing import PolynomialFeatures from sklearn import svm from sklearn.metrics import classification_report # + id="a595ac6b" outputId="f0e901de-b619-490d-b1c2-440b2f8d2c88" df = pd.read_csv('./Datasets/iris.csv') df.head(10) # + id="c1237d32" outputId="e0c99446-26ae-479b-a360-9306f61a322e" df.shape # + id="972f186e" outputId="fc690c63-f417-4cc5-ff3c-c92cfb8c9f97" df.tail(10) # + id="006704f5" outputId="194fd627-80b6-4547-f12d-0e8fa09a0ff1" df.sample(10) # + id="30600be5" outputId="832686c4-b86d-410d-d5e8-9d5bee2ba0bf" df["variety"].value_counts() # + id="e4e68199" outputId="ecba6ce5-e18b-4c52-b037-40215e46be7e" df.describe() # + id="ceda418f" outputId="3f49d60f-2191-4e54-acc2-4ef486d020ed" df.variety.unique() # + id="2d3e2e09" outputId="2a9dbf44-929e-4785-fa22-53c9354db394" df.nunique() # + id="ef36cb24" outputId="ebb1f626-1add-4b22-e8e8-3d4802ee6d61" df.isnull().sum() # + id="8850f547" outputId="5b87e9d4-a2d2-4b3d-dcac-f78dd8e203aa" df = df.rename(columns = {'sepal.length': 'sepal_length', 'petal.length': 'petal_length', 'sepal.width': 'sepal_width' , 'petal.width': 'petal_width', 'variety' : 'Species'}) print(df) # + [markdown] id="04f53ca5" # #### Lets Visualize the input data # + id="0ced9c42" outputId="1c33ae53-c8ad-4551-9e7a-92a38e2d54a4" sns.pairplot(df, hue = "Species") plt.show() # + id="71a36e94" outputId="cd9e49f0-a5e7-4193-d16b-49ae22e8eb5b" def histplots(): fig,axes=plt.subplots(2,2,figsize=(10,10)) df['sepal_length'].hist(ax=axes[0][0]) df['petal_length'].hist(ax=axes[0][1]) df['petal_width'].hist(ax=axes[1][0]) df['sepal_width'].hist(ax=axes[1][1]) plt.show() histplots() # + id="7b424e44" outputId="7f7b3da4-8f26-445f-d4be-f51f2d74926d" def barplots(): fig,axes=plt.subplots(2,2,figsize=(10,10)) sns.barplot(x=df.Species,y=df['sepal_length'],ax=axes[0][0]) sns.barplot(x=df.Species,y=df['petal_length'],ax=axes[0][1]) sns.barplot(x=df.Species,y=df['petal_width'],ax=axes[1][0]) sns.barplot(x=df.Species,y=df['sepal_width'],ax=axes[1][1]) plt.show() barplots() # + [markdown] id="eb4d8200" # #### Correlation # + id="2768fad3" outputId="21c30950-a082-408f-e69b-f271dbfd1442" df.corr() # + id="35110462" outputId="4c0b0ed9-88fb-468f-9b37-fe028cc9ec5f" corr = df.corr() fig, ax = plt.subplots(figsize=(5,5)) sns.heatmap(corr, annot=True, ax=ax) # + id="6fe3cb4d" outputId="a9caa973-c816-445a-be69-fa8cb88f31e0" le = LabelEncoder() df['Species'] = le.fit_transform(df['Species']) df.sample(15) # + id="765d1cd3" outputId="5ac70553-b02c-479f-b8cb-984ec868a7c0" x = df.iloc[:,:4].values y = df.iloc[:,4].values x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=10) print(x_train.shape) print(x_test.shape) print(y_train.shape) print(y_test.shape) # + [markdown] id="c89b7bf1" # # Supervised ML Algorithms # + id="2008d2ee" 
outputId="78a770f6-6229-4a87-c7d7-34586d2f1f08" print('-------------------------------------- LINEAR REGRESSION ----------------------------------------------') model = LinearRegression() model.fit(x_train,y_train) y_pred = model.predict(x_test) sc_lr = round(model.score(x_test, y_test) * 100 , 2) print("Accuracy: ", str(sc_lr) , " %" ) # + id="f6f152a1" outputId="0936eb6f-599e-4a30-841d-7ff6383e7d49" print('-------------------------------------- LOGISTIC REGRESSION ----------------------------------------------') model2 = LogisticRegression() model2.fit(x_train,y_train) sc_logr = round(model2.score(x_test, y_test) * 100,2) print("Accuracy: ", str(sc_logr) , " %") # + id="3cf8ea86" outputId="ba975543-87d7-407d-9e3e-0ce54ab6e84f" print('-------------------------------------- NAIVE BAYES ----------------------------------------------') nb = GaussianNB() nb.fit(x_train,y_train) y_pred_nb = nb.predict(x_test) score_nb = round(accuracy_score(y_pred_nb,y_test)*100,2) print("Accuracy: "+str(score_nb)+" %") print(classification_report(y_test, y_pred_nb)) # + id="6bac356d" outputId="9baa5718-e027-48a5-c81f-be3546a69884" print('--------------------------------------KNN CLASSIFIER -----------------------------------------------') model3 = KNeighborsClassifier() model3.fit(x_train,y_train) sc_knn = round(model3.score(x_test, y_test) * 100,2) print("Accuracy: ", str(sc_knn) , " %") # + id="4ab47d40" outputId="475686de-a6e2-4013-feb8-22450f0e7d61" print('--------------------------------------DECISION TREE CLASSIFIER------------------------------------------------') model4 = DecisionTreeClassifier() model4.fit(x_train, y_train) sc_dt= round(model4.score(x_test, y_test) * 100 , 2) print("Accuracy: ", str(sc_dt) , "%") # + id="0665afdd" outputId="01671e9a-b3e9-4855-8beb-d085d43c9e2a" print('--------------------------------------SVM ------------------------------------------------') sv = svm.SVC(kernel='linear') sv.fit(x_train, y_train) y_pred_svm = sv.predict(x_test) sc_svm = round(accuracy_score(y_pred_svm,y_test)*100,2) print("Accuracy: "+ str(sc_svm) +" %") print(classification_report(y_test, y_pred_svm)) # + [markdown] id="9ed5d2ad" # # Comparison # + id="591e7438" outputId="1ec7a206-0dca-49a7-b812-badc198a02a2" scores_plt = [sc_lr , sc_logr , score_nb, sc_dt, sc_svm, sc_knn] algorithms = ["Linear Regression","Logistic Regression","Naive Bayes","Decision tree","Support Vector Machine", "KNN"] sns.set(rc={'figure.figsize':(11,6)}) plt.xlabel("Algorithms") plt.ylabel("Accuracy score") sns.barplot(algorithms,scores_plt) plt.show() # + id="34959f72" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Disaster Tweets # * A **Natural Language Processing** task that detemines is talking about natural disaster or not using the [Kaggle](https://www.kaggle.com/c/nlp-getting-started) dataset # ### 1. Data Cleaning # #### Imports import matplotlib.pyplot as plt from nltk.tokenize import word_tokenize from nltk.corpus import stopwords import seaborn as sns from collections import Counter import re import pandas as pd # > **Loading the data:** disaster = pd.read_csv('train.csv') disaster.head() # > **We are interested in the** text and target. 
tweets = disaster["text"].values targets = disaster["target"].values print(tweets[:1]) targets[:50] # > **Data Cleaning** - We want to remove: # # # * urls # * non word chars # * stopwords - A stop word is a commonly used word (such as “the”, “a”, “an”, “in”) that a search engine has been programmed to ignore, both when indexing entries for searching and when retrieving them as the result of a search query. test = " '@Alexis_Sanchez: happy to see my teammates and training hard ?? goodnight gunners.?????? http://t.co/uc4j4jHvGR'" stopwords = stopwords.words('english') a = re.sub(r'https?://(\S+|www)\.\S+', ' ',test) # reomove urls b = re.sub(r'\W|_', ' ', a) # remove punctuactions and numbers and underscores c = re.sub(r'\s+', ' ', b).strip() # remove double spacing c.lower() # > **Now lets do it for all the tweets** tweets_cleaned1 = [] for tweet in tweets: a = re.sub(r'https?://(\S+|www)\.\S+', ' ',tweet) b = re.sub(r'\W|_', ' ', a) c = re.sub(r'\d', " ", b) d = re.sub(r'[^a-zA-Z]', " ", c) e = re.sub(r'\s+', ' ', d).strip() tweets_cleaned1.append(e.lower()) tweets_cleaned1 tweets_cleaned2 = [] for sent in tweets_cleaned1: words = word_tokenize(sent) tweets_cleaned2.append(" ".join([word for word in words if word not in stopwords])) tweets_cleaned2[:10] # ### Balancing the tweets conter = Counter(targets) conter # + tweets_balanced = [] targets_balanced = [] c0, c1 = 0, 0 for i in range(len(targets)): if targets[i] == 1 and c1 <= conter[1]: targets_balanced.append(targets[i]) tweets_balanced.append(tweets_cleaned2[i]) c1 += 1 elif targets[i] == 0 and c0 < conter[1]: targets_balanced.append(targets[i]) tweets_balanced.append(tweets_cleaned2[i]) c0 += 1 else: continue # - Counter(targets_balanced), tweets_balanced[:2] # ### Visualisation sns.countplot(x=targets_balanced); plt.show() # ### Counting unique words # + count = Counter() for sent in tweets_balanced: words = word_tokenize(sent) for word in words: count[word] +=1 # - count count.most_common(3) unique_words = len(count) unique_words # #### Spliting the Data from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(tweets_balanced, targets_balanced, test_size =.2, random_state=43) # #### Create vectors of words from tensorflow.keras.preprocessing.text import Tokenizer tokenizer = Tokenizer(num_words=unique_words) tokenizer.fit_on_texts(X_train) word_index = tokenizer.word_index word_index X_train_tokens = tokenizer.texts_to_sequences(X_train) X_test_tokens = tokenizer.texts_to_sequences(X_test) X_train_tokens[:2] # > **We need to apply the ``pad_sequence`` to each sentence to balance the lenght of each sentence**. Thi is because our NN accepts the data with the same shape. 
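# To make the shapes concrete, here is a toy illustration of post-padding and post-truncating, independent of the tweet data; the real call on the tokenized tweets follows in the next cell.

# +
# Toy example of what the next cell does to the tokenized tweets: shorter sequences
# are zero-padded on the right, longer ones are truncated on the right.
from tensorflow.keras.preprocessing.sequence import pad_sequences

toy = [[5, 2, 9], [7, 1, 3, 8, 4, 6]]
print(pad_sequences(toy, maxlen=5, padding="post", truncating="post"))
# [[5 2 9 0 0]
#  [7 1 3 8 4]]
# -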
from tensorflow.keras.preprocessing.sequence import pad_sequences max_words = 20 # maximum number of unique words X_train_tokens_padded = pad_sequences(X_train_tokens, maxlen=max_words, padding="post", truncating="post") X_test_tokens_padded = pad_sequences(X_test_tokens, maxlen=max_words, padding="post", truncating="post") X_train_tokens_padded[1], X_train_tokens[1], X_train[1] # #### Fliping the words and it's index flipped_word_index = dict([(i, word) for (word, i) in word_index.items()]) flipped_word_index X_train_tokens[1] ## reverse this to a sentence " ".join([flipped_word_index[i] for i in X_train_tokens[1]]) # ### Creating a ``RNN`` - Tweets Classification # #### imports from tensorflow import keras import numpy as np model = keras.Sequential([ keras.layers.Embedding(unique_words, 32, input_length=max_words), keras.layers.LSTM(64, dropout=.2), keras.layers.Dense(64, activation='relu'), keras.layers.Dense(16, activation='relu'), keras.layers.Dense(1, activation='sigmoid') ]) model.compile( loss = keras.losses.BinaryCrossentropy(from_logits=False), metrics= ['accuracy'], optimizer = keras.optimizers.Adam(lr=1e-3) ) # + EPOCHS = 5 VERBOSE = 1 VALIDATION_DATA = (X_test_tokens_padded,np.array(y_test, dtype="float32")) BATCH_SIZE = 128 history = model.fit(X_train_tokens_padded, np.array(y_train, dtype="float32"), batch_size=BATCH_SIZE, epochs=EPOCHS, verbose = VERBOSE, validation_data = VALIDATION_DATA) # - print(np.round(model.predict(X_test_tokens_padded[:5]))), y_test[:5], X_test[:5], X_test_tokens_padded[5] # + cate = ["Not Disatser", "Disater"] def makePrediction(sent): tokens = tokenizer.texts_to_sequences([sent]) padded = np.array(pad_sequences(tokens, maxlen=20, padding="post", truncating="post")) print(cate[np.round(model.predict(padded))[0][0].astype('int32')]) makePrediction('we are in floods') # - # ### Saving the Model. 
model.save('models/disaster_tweets.h5') print('Saved') clf = keras.models.load_model('models/disaster_tweets.h5') clf.predict(X_test_tokens_padded[:5]) # ### Labling Tweets in the test.csv disaster = pd.read_csv('test.csv') disaster.head() tweets = disaster["text"].values tweets_cleaned1 = [] for tweet in tweets: a = re.sub(r'https?://(\S+|www)\.\S+', ' ',tweet) b = re.sub(r'\W|_', ' ', a) c = re.sub(r'\d', " ", b) d = re.sub(r'[^a-zA-Z]', " ", c) e = re.sub(r'\s+', ' ', d).strip() tweets_cleaned1.append(e.lower()) tweets_cleaned2 = [] for sent in tweets_cleaned1: words = word_tokenize(sent) tweets_cleaned2.append(" ".join([word for word in words if word not in stopwords])) tweets_cleaned2[:5] tokens = tokenizer.texts_to_sequences(tweets_cleaned2) tokens_padded = pad_sequences(tokens, maxlen=max_words, padding="post", truncating="post") tokens_padded[0] # ### Making Predictions predictions = np.round(model.predict(tokens_padded)).astype('int32') predictions predictions_targets = [pred[0] for pred in predictions] predictions_targets[:10] # ### Creating a new pandas csv with labeled tweets columns_names = ["text", "target"] rows = np.column_stack([tweets_cleaned2,predictions_targets ]) dataframe = pd.DataFrame(rows, columns=columns_names) dataframe.to_csv('tweets_labeled.csv') print("Saved") tweets = pd.read_csv('tweets_labeled.csv') tweets.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- A = set([ 1, 2, 3, 3, 2]) A B = frozenset(['H', 'T']) B A1 = set([1, 2, 3, 4]) A2 = set([2, 4, 6]) A3 = set([1, 2, 3]) A4 = set([2, 3, 4, 5, 6]) A1.union(A2) A2 | A1 A1 | A2 A3.intersection(A4) A4 & A3 A3.issubset(A1) A3 <= A1 A3 <= A2 A3 <= A3 A3 < A3 A1.difference(A2) A1 - A2 nullset = set([]) nullset nullset < A1 nullset.intersection(A1) nullset.union(A1) all_set = set(["HH", "HT", "TH", "TT"]) all set_2 = frozenset(set('HH')) set_2 {frozenset(set(( ))) : 0, frozenset(set('HH')) : 0.25, frozenset(set('HT')) : 0.25, } # + {frozenset(set(( ))) : 0, frozenset(set('HH')) : 0.3, frozenset(set('HT')) : 0.2, frozenset(set('TT')) : 0.3, frozenset(set('TH')) : 0.2, frozenset(set(['HH', 'HT'])) : 0.5, frozenset(set(['HH', 'TT'])) : 0.5, frozenset(set(['HH', 'TH'])) : 0.5, frozenset(set(['HT', 'TT'])) : 0.5, frozenset(set(['HT', 'TH'])) : 0.5, frozenset(set(['TT', 'TH'])) : 0.5, frozenset(set(['HH', 'HT', 'TT'])) : 0.75, frozenset(set(['HH', 'HT', 'TH'])) : 0.75, frozenset(set(['HH', 'TT', 'TH'])) : 0.75, frozenset(set(['HT', 'TT', 'TH'])) : 0.75, frozenset(set(['HH', 'HT', 'TT', 'TH'])) : 1.0 } print(set_1, '\n', set_2, '\n', set_3, '\n',set_4, '\n',set_5, '\n',set_6, '\n',set_7, '\n',set_8, '\n',set_9, '\n',set_10,'\n', set_11, '\n',set_12, '\n',set_13, '\n', set_14, '\n',set_15, '\n',set_16) # - # 연습 문제 3 # # # # 1. 다음 주 친구와의 약속에서 친구가 늦을 확률 # 표본 공간 : {Y, N} # # 2. 내일 아침 반찬으로 김치찌개가 나올 확률 # 표본 공간 : {"김치찌개", "부대찌개", "된장찌개"} # # # 표본의 공간 크기가 무한대인 경우 # # 3. 다음 주 월요일 오후 1시부터 오후 2시사이에 식사를 할 확률 # 표본 공간 : { 1 <= x <= 2 } 사이의 실수 # # 4. 
오늘 저녁 집에 들어갔을 때, 내방 온도가 20도인 경우 # 표본 공간 : { 5<= x <= 25 } 사이의 실수 # {frozenset(): 0, frozenset({2}): 0.2, frozenset({2, 4}): 0.2, frozenset({3}): 0.3, frozenset({1, 2}): 0.7, frozenset({4}): 0, frozenset({1, 4}): 0.5, frozenset({2, 3, 4}): 0.5, frozenset({1, 2, 4}): 0.7, frozenset({3, 4}): 0.3, frozenset({2, 3}): 0.5, frozenset({1}): 0.5, frozenset({1, 3}): 0.8, frozenset({1, 3, 4}): 0.8, frozenset({1, 2, 3}): 1.0, frozenset({1, 2, 3, 4}): 1.0} # 시나리오 작성( = 리얼 월드를 만들어내고 있는 것.) # 전체 12명 # 1. 남자, 23살, 긴 머리 # 2. 여자, 15살, 짧은 머리 # 3. 남자, 17살, 짧은 머리 # 4. 여자, 32살, 긴 머리 # 5. 여자, 40살, 짧은 머리 # 6. 남자, 28살, 짧은 머리 # 7. 남자, 58살, 짧은 머리 # 8. 여자, 7살, 긴 머리 # 9. 여자, 20살, 긴 머리 # 10. 남자, 11살, 짧은 머리 # 11. 남자, 21살, 긴 머리 # 12. 여자, 98살, 짧은 머리 # $$ P(A) = \frac{6}{12} = 0.5 $$ # $$ P(B) = \frac{5}{12}$$ # #### 2. 여집합의 확률 # $$ P\{A^C\} = 1 - P\{A\} = 1 - 0.5 = 0.5, P\{A^C\} = 0.5 $$ # $$ P\{B^C\} = 1 - P\{B\} = 1 - \frac{5}{12} = \frac{7}{12}, P\{B^C\} = \frac{7]{12} $$ # | | 범인이 머리가 길다 : $P(B) = 0.5 $ | 범인이 머리가 길지 않다 | # | :------------ | :-----------: | -------------------: | # | 범인이 남자다 : $ P(A) = 0.6 | 2명 | 4명 | 6명 | # | 범인이 여자다 | 3명 | 3명 | 6명 # | 계 | 5명 | 7명 | 12명 | from pgmpy.factors.discrete import JointProbabilityDistribution as JPD # 1. 주변확률 # - $ P(A) = 0.5 $ # - $ P(B) = \frac{5}{12} $ # 2. 결합확률 # - $ P(A, B) = \frac{2}{12} $ # 3. 조건부 확률 # - $ P(A | B) = \frac{P(A | B)}{P(B)} = \frac{2}{5}$ # 4. 독립인가? # - $ P(A, B) = \frac{1}{12}, P(A)P(B) = \frac{1}{12} $ # # $ \therefore P(A, B)와 P(A)P(B)가 같으므로 독립이다. $ j1 = JPD(['A', 'B'], [2, 2], np.array([2, 4, 3, 3]) / 12 ) print(j1) m1a = j1.marginal_distribution(['A'] , inplace = True) print(m1a) j1 = JPD(['A', 'B'], [2, 2], np.array([3, 9, 7, 1]) / 20) print(j1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- query1 = """query all($a: string) { all(func: eq(%s, $a)) { uid } }""" print(query1%("name")) # + import datetime d=datetime.datetime(1980, 1, 1, 23, 0, 0, 0).isoformat() print(d) datetime.datetime.fromtimestamp(1548508162215/1000) # + from sagas.ofbiz.entities import OfEntity as e, oc from sagas.ofbiz.entity_gen import get_dgraph_type import json def to_isoformat(datetime_val): dt=datetime.datetime.fromtimestamp(datetime_val/1000) return dt.isoformat() def filter_json_val(model, json_val): result={} for k,v in json_val.items(): if v is not None: if is_dt_field(model, k): result[k]=to_isoformat(v) else: result[k]=v return result def is_dt_field(model, fld_name): fld=model.getField(fld_name) vt=get_dgraph_type(fld.getType()) return vt=='datetime' val=e().refProduct('GZ-2002') print(val['createdTxStamp']) jval=json.loads(e('json').refProduct('GZ-2002')) print(jval['createdTxStamp']) print(datetime.datetime.fromtimestamp(jval['createdTxStamp']/1000)) print(to_isoformat(jval['createdTxStamp'])) model=oc.delegator.getModelEntity('Product') print(filter_json_val(model, jval)) # - import sagas.ofbiz.entities as ee ent=ee.entity('Product') val=ent.record('GZ-2002') print(ent.to_json(val,True)) schema = """ mo_name: string @index(exact) . mo_relation: uid @reverse . mo_total: int . mo_last_modified: datetime . mo_table: string . mo_service: string . mo_form_uri: string . mo_field: uid . mo_type: string . mo_mode: string . mo_primary: bool . mo_primary_keys: [string] @index(term) . 
""" # print(schema) from sagas.ofbiz.entities import OfEntity as e, MetaEntity, oc, format ent_name="Person" ent=MetaEntity(ent_name) flds=[str(fld.getName()) for fld in ent.model.getFieldsIterator()] print(flds) prikeys=['*' if fld.getIsPk() else ' ' for fld in ent.model.getFieldsIterator()] print(prikeys) print(ent.primary) # + from sagas.ofbiz.entities import OfEntity as e, MetaEntity, oc, format from sagas.ofbiz.entity_gen import get_dgraph_type ent_name="Person" ent=MetaEntity(ent_name) flds=[] for fld in ent.model.getFieldsIterator(): # print(fld.getName(), fld.getType(), fld.getIsPk(), fld.getIsAutoCreatedInternal()) if not fld.getIsAutoCreatedInternal(): flds.append({'name':fld.getName(), 'type':fld.getType(), 'schema_type': get_dgraph_type(fld.getType()), 'primary':fld.getIsPk()}) print(flds[:3]) ent_desc={'name':ent_name, 'primary_keys':ent.primary, 'field':flds} # - types=['id', 'currency-precise'] for t in types: print(get_dgraph_type(t)) # + from sagas.ofbiz.entities import OfEntity as e, MetaEntity, oc, format import sagas.ofbiz.entities as es from sagas.ofbiz.entity_gen import get_dgraph_type # ents=[str(v) for v in es.all_entities()] ents=es.all_entities(False) print(ents[:5]) servs=es.oc.all_service_names() print(servs[:5]) # + def deduce_type(a,b): src=set([a,b]) result='string' if src==set(['string','int']): result='string' elif src==set(['int','float']): result='float' elif src==set(['string','datetime']): result='datetime' else: raise ValueError('cannot reduce: %s-%s', a,b) return result print(deduce_type('string','int')) # + def collect_internal_fields(ent_name, fld_mapping): ent=MetaEntity(ent_name) for fld in ent.model.getFieldsIterator(): # print(fld.getName(), fld.getType(), fld.getIsPk(), fld.getIsAutoCreatedInternal()) if fld.getIsAutoCreatedInternal(): fld_name=str(fld.getName()) fld_type=get_dgraph_type(fld.getType()) fld_mapping[fld_name]=fld_type def to_schema(var_map): all_types=[] for k,v in var_map.items(): all_types.append("%s: %s ."%(k,v)) schema='\n'.join(all_types) return schema fld_map={} collect_internal_fields('Person', fld_map) print(fld_map) print(to_schema(fld_map)) # + def collect_fields(ent_name, fld_mapping, skip_fields, messages): ent=MetaEntity(ent_name) for fld in ent.model.getFieldsIterator(): # print(fld.getName(), fld.getType(), fld.getIsPk(), fld.getIsAutoCreatedInternal()) if not fld.getIsAutoCreatedInternal(): fld_name=str(fld.getName()) fld_type=get_dgraph_type(fld.getType()) if fld_name in fld_mapping: if fld_type!=fld_mapping[fld_name]: messages.append("%s's field %s has different field type: %s-%s"% (ent_name, fld_name, fld_type, fld_mapping[fld_name]) ) skip_fields.append((ent_name, fld_name, fld_type)) dtype=deduce_type(fld_type, fld_mapping[fld_name]) fld_mapping[fld_name]=dtype else: pass else: fld_mapping[fld_name]=fld_type fld_mapping={} skip_fields=[] messages=[] for ent in ents: collect_fields(ent, fld_mapping, skip_fields, messages) print('deduce fields', len(skip_fields)) print('total fields', len(fld_mapping)) for sf in skip_fields[:5]: print(sf) for m in messages: if 'datetime' in m: print(m) # + import pickle import redis r = redis.StrictRedis('localhost') p_mydict = pickle.dumps(fld_mapping) r.set('entitySchema',p_mydict) # - read_dict = r.get('entitySchema') entity_schema = pickle.loads(read_dict) print(len(entity_schema)) limit=0 # mo_table: string . 
for k,v in entity_schema.items(): print("%s: %s ."%(k,v)) limit=limit+1 if limit>=5: break all_types=[] for k,v in entity_schema.items(): all_types.append("%s: %s ."%(k,v)) print(all_types[:5]) schema='\n'.join(all_types) import sagas.graph.graph_manager as g gm=g.GraphManager() gm.reset_schema(schema) from sagas.ofbiz.entities import oc,finder,OfEntity # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="8nKdL3Crw81q" # **Remember to change runtime type to GPU.** # # Runtime > Change runtime type > GPU # + [markdown] id="IYCkiK58apNm" # **Clone git repo** # + colab={"base_uri": "https://localhost:8080/"} id="RGRkm-Hkn6Tl" executionInfo={"status": "ok", "timestamp": 1620635105101, "user_tz": -480, "elapsed": 8946, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="f1b017be-57c8-47ad-beb2-b670a991581c" # !git clone https://github.com/wein98/webapp-first-order-motion.git # + colab={"base_uri": "https://localhost:8080/"} id="bSB2YeZEoCiJ" executionInfo={"status": "ok", "timestamp": 1620635107572, "user_tz": -480, "elapsed": 802, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="7f4d3467-e500-43fc-acf3-f2dd0a2283d1" # cd webapp-first-order-motion/first-order-motion # + [markdown] id="R6l5IVMrxbDt" # **Image and video input here.** # # Upload the your input image and video under /content/webapp-first-order-motion/first-order-motion. # + [markdown] id="WXNmh9efHHTx" # **Download first order motion pretrained model.** # + colab={"base_uri": "https://localhost:8080/"} id="EX7GytT7HDpG" executionInfo={"status": "ok", "timestamp": 1620635134538, "user_tz": -480, "elapsed": 13105, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="1f2ba425-a9a7-4a14-b37d-6e8774587850" # !gdown https://drive.google.com/uc?id=19eg-JkeauMAOlIBJPdIrAzgocAjRWp7T # + [markdown] id="llamfcYueQow" # Function to display output in Google Colab # + id="R2k6gBtMbByc" executionInfo={"status": "ok", "timestamp": 1620636611937, "user_tz": -480, "elapsed": 809, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} import imageio import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from skimage.transform import resize from IPython.display import HTML import warnings warnings.filterwarnings("ignore") def display(image_path, video_path, generated_vid_path): source_image = imageio.imread(image_path) reader = imageio.get_reader(video_path) generateReader = imageio.get_reader(generated_vid_path) source = resize(source_image, (256, 256))[..., :3] fps = reader.get_meta_data()['fps'] driving_video = [] try: for im in reader: driving_video.append(im) except RuntimeError: pass reader.close() driving = [resize(frame, (256, 256))[..., :3] for frame in driving_video] fps = generateReader.get_meta_data()['fps'] generated_video = [] try: for im in generateReader: generated_video.append(im) except RuntimeError: pass generateReader.close() generated = [resize(frame, (256, 256))[..., :3] for frame in generated_video] fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) ims = [] for i in range(len(driving)): cols = [source] cols.append(driving[i]) if generated is not None: cols.append(generated[i]) im = plt.imshow(np.concatenate(cols, axis=1), animated=True) plt.axis('off') ims.append([im]) ani = 
animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) plt.close() return ani # + [markdown] id="e-xoIIeKaBIF" # ## Run the demo.py with our provided inputs # # change the `src_img` and `src_vid` arguments respectively to your uploaded filename to run your own input. # + colab={"base_uri": "https://localhost:8080/"} id="dRGwEkoKo8Ti" executionInfo={"status": "ok", "timestamp": 1620634837817, "user_tz": -480, "elapsed": 97937, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="7f22082f-d048-4695-a01c-92c31574ed9f" # !python demo.py # + colab={"base_uri": "https://localhost:8080/", "height": 452} id="IyRUdGWKdu8K" executionInfo={"status": "ok", "timestamp": 1620636647561, "user_tz": -480, "elapsed": 33191, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="69cce4fb-dda5-44eb-f28e-a40628f544a0" HTML(display('stage1_image.jpg', "stage1_video.mp4", "generated.mp4").to_html5_video()) # + id="udsezZ51pCrv" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1620635201528, "user_tz": -480, "elapsed": 61055, "user": {"displayName": "", "photoUrl": "", "userId": "03685829748003472020"}} outputId="908a6254-c136-452c-9412-3c988e762060" # !python demo.py --src_img obama.jpg --src_vid musk.mp4 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Solution to exercise 1 # # ### Dynamic Programming with # The question was: Does there always exists a $x \in [0, \infty)$ that solves the equation # $$ # x # = c (1-\beta) + \beta # \sum_{k=1}^K \max \left\{ # w_k ,\, x # \right\} # \, p_k # $$ # Is it unique? Suggest a strategy for computing it. # # We are assuming here that $\beta \in (0, 1)$. # There were hints, as follows: # # * Use the metric space $(\mathbb R, \rho)$ where $\rho(x, y) = |x-y|$ # # * If $x_1, \ldots, x_K$ are any $K$ numbers, then # # $$ \left| \sum_{k=1}^K x_k \right| \leq \sum_{k=1}^K |x_k| $$ # # * For any $a, x, y$ in $\mathbb R$, # # $$ # \left| # \max \left\{ a,\, x \right\} - \max \left\{ a,\, y \right\} # \right| # \leq | x - y | # $$ # # # You can convince yourself of the second inequality by sketching and checking different cases... # ### Solution # We're going to use the contraction mapping theorem. Let # # $$ # f(x) # = c (1-\beta) + \beta # \sum_{k=1}^K \max \left\{ # w_k ,\, x # \right\} # \, p_k # $$ # # We're looking for a fixed point of $f$ on $\mathbb R_+ = [0, \infty)$. # Using the hints above, we see that, for any $x, y$ in $\mathbb R_+$, we have # # \begin{align} # |f(x) - f(y)| # & = \left| # \beta \sum_{k=1}^K \max \left\{ # w_k ,\, x # \right\} \, p_k # - # \beta \sum_{k=1}^K \max \left\{ # w_k ,\, y # \right\} \, p_k # \right| # \\ # & = \beta\, \left| # \sum_{k=1}^K [\max \left\{ # w_k ,\, x # \right\} - \max \left\{ # w_k ,\, y # \right\} ]\, p_k # \right| # \\ # & \leq \beta\,\sum_{k=1}^K # \left| # \max \left\{ # w_k ,\, x # \right\} - \max \left\{ # w_k ,\, y # \right\} # \right| p_k # \\ # & \leq \beta\,\sum_{k=1}^K # \left| # x - y # \right| p_k # \\ # \end{align} # Since $\sum_k p_k = 1$, this yields # # $$ |f(x) - f(y)| \leq \beta |x - y| $$ # # Hence $f$ is a contraction map on $\mathbb R_+$, and therefore has a unique fixed point $x^*$ such that $f^n(x) \to x^*$ as $n \to \infty$ from any $x \in \mathbb R_+$. 
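# The contraction property also gives an explicit error bound for the plain fixed-point iteration used a few cells below: for any starting point $x \in \mathbb R_+$,
#
# $$ |f^n(x) - x^*| \leq \beta^n \, |x - x^*| $$
#
# so the error shrinks geometrically at rate $\beta$. With $\beta = 0.9$, the 50 iterations run below cut the initial error by a factor of roughly $0.9^{50} \approx 0.005$.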
# Let's plot $f$ when # # * $K = 2$ # * $w_1 = 1$ and $w_2 = 2$ # * $p_1 = 0.3$ and $p_3 = 0.7$ # * $c=1$ and $\beta = 0.9$ # + import numpy as np import matplotlib.pyplot as plt c, w1, w2, p1, p2, β = 1, 1, 2, 0.3, 0.7, 0.9 def f(x): return c * (1 - β) + β * (max(x, w1)*p1 + max(x, w2)*p2) # + xvec = np.linspace(0, 4, 100) yvec = [f(x) for x in xvec] fig, ax = plt.subplots() ax.plot(xvec, yvec, label='$f$') ax.plot(xvec, xvec, 'k-', label='$45$') ax.legend() plt.show() # - # Now let's compute that fixed point by iteration: # + x = 1.0 x_vals = [] for i in range(50): x_vals.append(x) x = f(x) fig, ax = plt.subplots() ax.plot(x_vals) ax.set(xlabel="$n$", ylabel="$f^n(x)$") ax.grid() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # ## D3Factor # EldenRing!!!(bushi # + from Crypto.Util.number import bytes_to_long, getPrime from secret import msg from sympy import nextprime from gmpy2 import invert from hashlib import md5 flag = 'd3ctf{'+md5(msg).hexdigest()+'}' p = getPrime(256) q = getPrime(256) assert p > q n = p * q e = 0x10001 m = bytes_to_long(msg) c = pow(m, e, n) N = pow(p, 7) * q phi = pow(p, 6) * (p - 1) * (q - 1) d1 = getPrime(2000) d2 = nextprime(d1 + getPrime(1000)) e1 = invert(d1, phi) e2 = invert(d2, phi) print(f'c = {c}') print(f'N = {N}') print(f'e1 = {e1}') print(f'e2 = {e2}') ''' c = 2420624631315473673388732074340410215657378096737020976722603529598864338532404224879219059105950005655100728361198499550862405660043591919681568611707967 N = 1476751427633071977599571983301151063258376731102955975364111147037204614220376883752032253407881568290520059515340434632858734689439268479399482315506043425541162646523388437842149125178447800616137044219916586942207838674001004007237861470176454543718752182312318068466051713087927370670177514666860822341380494154077020472814706123209865769048722380888175401791873273850281384147394075054950169002165357490796510950852631287689747360436384163758289159710264469722036320819123313773301072777844457895388797742631541101152819089150281489897683508400098693808473542212963868834485233858128220055727804326451310080791 e1 = 425735006018518321920113858371691046233291394270779139216531379266829453665704656868245884309574741300746121946724344532456337490492263690989727904837374279175606623404025598533405400677329916633307585813849635071097268989906426771864410852556381279117588496262787146588414873723983855041415476840445850171457530977221981125006107741100779529209163446405585696682186452013669643507275620439492021019544922913941472624874102604249376990616323884331293660116156782891935217575308895791623826306100692059131945495084654854521834016181452508329430102813663713333608459898915361745215871305547069325129687311358338082029 e2 = 1004512650658647383814190582513307789549094672255033373245432814519573537648997991452158231923692387604945039180687417026069655569594454408690445879849410118502279459189421806132654131287284719070037134752526923855821229397612868419416851456578505341237256609343187666849045678291935806441844686439591365338539029504178066823886051731466788474438373839803448380498800384597878814991008672054436093542513518012957106825842251155935855375353004898840663429274565622024673235081082222394015174831078190299524112112571718817712276118850981261489528540025810396786605197437842655180663611669918785635193552649262904644919 ''' # - # $e_1d_1=1(\bmod \varphi(n)), e_2d_2=1(\bmod \varphi(n))$ # # -> 
$e_1e_2(d_1-d_2)-(e_2-e_1)=0 (\bmod \varphi(n))$ # # 得到模意义下的方程$e_1e_2x-(e_2-e_1)=0 (\bmod \varphi(n))$ 有一个解$d_1-d_2$,而$\varphi(n) = p^{r-1}(p-1)(q-1)$,因此$d_1-d_2$也是方程 # $e_1e_2x-(e_2-e_1)=0 (\bmod p^{r-1})$ # # 在本题$|d_1-d_2|$较小,所以可以对$e_1e_2x-(e_2-e_1)=0 (\bmod n)$使用Coppersmith方法求出$|d_1-d_2|$(????) # # 再将解代入计算$\mathrm{gcd}(e_1e_2x_0-(e_2-e_1), n)$即可得到$p^{r-1}$ # 从而分解$n$ # # From r3kapig👇 # + # s a g e from Crypto.Util.number import * from hashlib import md5 import gmpy2 r = 7 N = 1476751427633071977599571983301151063258376731102955975364111147037204614220376883752032253407881568290520059515340434632858734689439268479399482315506043425541162646523388437842149125178447800616137044219916586942207838674001004007237861470176454543718752182312318068466051713087927370670177514666860822341380494154077020472814706123209865769048722380888175401791873273850281384147394075054950169002165357490796510950852631287689747360436384163758289159710264469722036320819123313773301072777844457895388797742631541101152819089150281489897683508400098693808473542212963868834485233858128220055727804326451310080791 e1 = 425735006018518321920113858371691046233291394270779139216531379266829453665704656868245884309574741300746121946724344532456337490492263690989727904837374279175606623404025598533405400677329916633307585813849635071097268989906426771864410852556381279117588496262787146588414873723983855041415476840445850171457530977221981125006107741100779529209163446405585696682186452013669643507275620439492021019544922913941472624874102604249376990616323884331293660116156782891935217575308895791623826306100692059131945495084654854521834016181452508329430102813663713333608459898915361745215871305547069325129687311358338082029 e2 = 1004512650658647383814190582513307789549094672255033373245432814519573537648997991452158231923692387604945039180687417026069655569594454408690445879849410118502279459189421806132654131287284719070037134752526923855821229397612868419416851456578505341237256609343187666849045678291935806441844686439591365338539029504178066823886051731466788474438373839803448380498800384597878814991008672054436093542513518012957106825842251155935855375353004898840663429274565622024673235081082222394015174831078190299524112112571718817712276118850981261489528540025810396786605197437842655180663611669918785635193552649262904644919 c = 2420624631315473673388732074340410215657378096737020976722603529598864338532404224879219059105950005655100728361198499550862405660043591919681568611707967 x = polygen(Zmod(N)) f = e1 * e2 * x - (e2 - e1) f = f.monic() roots = f.small_roots(beta=((r - 1) / (r + 1) - 0.01), epsilon=0.05) if not roots: print("qwq") exit() res = gmpy2.iroot(gmpy2.gcd(int(f(roots[0])), N), r - 1) assert res[1] print("p =", res[0]) p = res[0] q = N // (p**7) n = p * q phi = (p - 1) * (q - 1) d = inverse(0x10001, phi) msg = long_to_bytes(power_mod(c, d, n)) print(msg) flag = 'd3ctf{' + md5(msg).hexdigest() + '}' print(flag) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Get Started # --- # %run nmt_translate.py _ = predict(s=10000, num=1, plot=True) _ = predict(s=10000, num=10) _ = predict(s=0, num=10) _ = predict(s=10000, num=10, r_filt=.5) _ = predict(s=10000, num=10, p_filt=.5) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import random import pandas as pd import numpy as np # Here, **load_files** function from the scikit-learn library is to import dataset. # # * **train_files, valid_files, test_files** - numpy arrays containing file paths to images # * **train_targets, valid_targets, test_targets** - numpy arrays containing onehot-encoded classification labels # * **dog_names** - list of string-valued dog breed names for translating labels # + id="92bb87fa" outputId="41eee3cf-3cc3-4050-81ae-633766b1404d" from sklearn.datasets import load_files from keras.utils import np_utils import numpy as np from glob import glob # define function to load train, test, and validation datasets def load_dataset(path): data = load_files(path) dog_files = np.array(data['filenames']) dog_targets = np_utils.to_categorical(np.array(data['target']), 133) return dog_files, dog_targets # load train, test, and validation datasets train_files, train_targets = load_dataset(r'../input/dogs-dataset/dogImages/train') valid_files, valid_targets = load_dataset(r'../input/dogs-dataset/dogImages/valid') test_files, test_targets = load_dataset(r'../input/dogs-dataset/dogImages/test') # load list of dog names dog_names = [item[20:-1] for item in sorted(glob(r'../input/dogs-dataset/dogImages/train/*//'))] # print statistics about the dataset print('There are %d total dog categories.' % len(dog_names)) print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files]))) print('There are %d training dog images.' % len(train_files)) print('There are %d validation dog images.' % len(valid_files)) print('There are %d test dog images.'% len(test_files)) # - # Below cell is to record the size of images as they have varying sizes. And some pictures have more than one dog. # + id="7774f1d0" ## What are the dog image sizes. ## They are all varying sizes, for some images there are more than one dog from matplotlib.pyplot import figure, imshow, axis from matplotlib.image import imread def showImages(list_of_files, col=10, wSize=5, hSize=5, mypath='.'): fig = figure(figsize=(wSize, hSize)) number_of_files = len(list_of_files) row = 10 if (number_of_files % col != 0): row += 1 for i in range(row+10): a=fig.add_subplot(row, col, i + 1) image = imread(list_of_files[i]) imshow(image) axis('off') # - # Displaying the images for which we need to test (1st parameter in the function), 2nd and 3rd parameters specify the width and height of the images. 4th parameter is to specify how many images we want to display in a column. # + id="8d926144" showImages(test_files, wSize=20, hSize=20, col=4) # - # Below function is to plot the breed distribution. 
# + id="698c5916" ## this function plot the breeds distribution def plot_breed(df): labels = [] for i in range(df.shape[0]): labels.append(dog_names[np.argmax(df[i])]) df_labels = pd.DataFrame(np.array(labels), columns=["breed"]).reset_index(drop=True) fig, ax = plt.subplots(figsize=(10,30)) df_labels['breed'].value_counts().plot(ax=ax, kind='barh').invert_yaxis() ax.set_title('Distribution of Dog breeds') # + id="0b19100e" ## breed distribution in test data import matplotlib.pyplot as plt plot_breed(test_targets) # + id="2ec18058" ## breed distribution in train data plot_breed(train_targets) # + id="609663a8" def display_img(img_path): img = cv2.imread(img_path) cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) imgplot = plt.imshow(cv_rgb) return imgplot # + id="d0daaee3" import cv2 import numpy as np from matplotlib import pyplot as plt #### this function returns the shape of image, image itself and the intensity distribution of an image def img_hist(df_image, label): img = cv2.imread(df_image) color = ('b','g','r') for i,col in enumerate(color): histr = cv2.calcHist([img],[i],None,[256],[0,256]) plt.plot(histr,color = col) plt.xlim([0,256]) print(dog_names[np.argmax(label)]) print(img.shape) plt.show() #plt.imshow(img) display_img(df_image) # + id="8a54aa5a" ## here I checked the image sizes, and the intensity distribution of images from the same breed ## and the result shows images have different resolution, zoom and lightening conditins even for the same breed ## it makes this task even more challenging img_hist(train_files[3], train_targets[3]) # + id="b28f665c" img_hist(train_files[4],train_targets[4]) # + id="385b78fb" img_hist(train_files[57], train_targets[57]) # - # Below function is to find the list of labels and save them as a pandas data # + id="19a0dc97" labels_train = [] labels_test = [] for i in range(train_files.shape[0]): labels_train.append(dog_names[np.argmax(train_targets[i])]) for i in range(test_files.shape[0]): labels_test.append(dog_names[np.argmax(test_targets[i])]) # - # This function plots the breeds distribution in the train dataset # + id="47058c1d" from sklearn.preprocessing import LabelEncoder def dist_breed(labels): encoder = LabelEncoder() breeds_encoded = encoder.fit_transform(labels) n_classes = len(encoder.classes_) breeds = pd.DataFrame(np.array(breeds_encoded), columns=["breed"]).reset_index(drop=True) breeds['freq'] = breeds.groupby('breed')['breed'].transform('count') avg = breeds.freq.mean() title = 'Distribution of Dog Breeds in training Dataset\n (%3.0f samples per class on average)' % avg f, ax = plt.subplots(1, 1, figsize=(10, 6)) ax.set_xticks([]) ax.hlines(avg, 0, n_classes - 1, color='white') ax.set_title(title, fontsize=18) _ = ax.hist(breeds_encoded, bins=n_classes) return(breeds["freq"].describe()) # + id="639eb127" dist_breed(labels_train) # - # We use a pre-trained ResNet-50 model to detect dogs in images. The first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image. 
# + id="83af70f2" from tensorflow.python.keras.applications.resnet import ResNet50 # define ResNet50 model ResNet50_model = ResNet50(weights='imagenet') # + [markdown] id="f9147305" # ### **Pre-process the Data** # When using TensorFlow as backend, Keras CNNs require a 4D array (4D tensor) as input, with shape (nb_samples, rows, columns, channels), where nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively. # # The path_to_tensor() function takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is 224 x 224 pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape (1,224,224,3) # # The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape (nb_samples, 224, 224, 3) # # Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. We take nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in this dataset. # + id="456f081a" from keras.preprocessing import image from tqdm import tqdm def path_to_tensor(img_path): # loads RGB image as PIL.Image.Image type img = image.load_img(img_path, target_size=(224, 224)) # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3) x = image.img_to_array(img) # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor return np.expand_dims(x, axis=0) def paths_to_tensor(img_paths): list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)] return np.vstack(list_of_tensors) # + [markdown] id="_fGd7JZ4g8GR" # ### **Making Predictions with ResNet-50** # Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as [103.939, 116.779, 123.68] and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image (implemented in the imported function preprocess_input) # # Using the model to extract the predictions: Predict method, returns an array whose -th entry is the model's predicted probability that the image belongs to the -th ImageNet category (implemented in the ResNet50_predict_labels function) # # By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). 
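# As a quick sanity check (an added sketch, not part of the original notebook), the raw probability vector can be turned into human-readable ImageNet labels with the `decode_predictions` helper that ships with Keras; `train_files[0]` is just an arbitrary example image path.

# +
from tensorflow.python.keras.applications.imagenet_utils import preprocess_input, decode_predictions

# preprocess a single image and inspect the top-3 ImageNet classes ResNet-50 assigns to it
sample = preprocess_input(path_to_tensor(train_files[0]))
probs = ResNet50_model.predict(sample)        # shape (1, 1000)
print(decode_predictions(probs, top=3)[0])    # list of (class_id, label, probability) tuples
# -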
# + id="ea14db27" from tensorflow.python.keras.applications.imagenet_utils import preprocess_input, decode_predictions def ResNet50_predict_labels(img_path): # returns prediction vector for image located at img_path img = preprocess_input(path_to_tensor(img_path)) return np.argmax(ResNet50_model.predict(img)) # + [markdown] id="uh28hyT9zylA" # ### **Write a Dog Detector** # In the dictionary, the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive). # # Implemented using dog_detector() function, which returns True if a dog is detected in an image (and False if not) # + id="ssbmkbctzuBS" ### returns "True" if a dog is detected in the image stored at img_path def dog_detector(img_path): prediction = ResNet50_predict_labels(img_path) return ((prediction <= 268) & (prediction >= 151)) # + [markdown] id="1qsI28fb329m" # ### **Create a CNN to Classify Dog Breeds** # - # ### Pre-process the Data # We rescale the images by dividing every pixel in every image by 255. # + id="JKGYVTt337Ww" from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True # pre-process the data for Keras train_tensors = paths_to_tensor(train_files).astype('float32')/255 valid_tensors = paths_to_tensor(valid_files).astype('float32')/255 test_tensors = paths_to_tensor(test_files).astype('float32')/255 # + [markdown] id="zp9dmodi848S" # Created a 4-layer CNN in Keras that classifies dog breeds with Relu activation function. The model starts with an input image of (224,224,3) color channels. The first layer produces an output with 16 feature channels that is used as an input for the next layer. The second, third, and last layer have 32, 64, 128 filters, respectively, with max-pooling of size 2. It would be ideal for input and output features to have the same size. So, I decided to use same padding, to go off the edge of images and pad with zeros, for all the layers in my network with the stride of 1. The number of nodes in the last fully connected layer is 133, the same size as dog categories, with softmax function to get the probabilities. 
# + id="zu1w36GS87Zi" from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D from keras.layers import Dropout, Flatten, Dense from keras.models import Sequential model = Sequential() # architucture # layer 1 model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(224,224,3))) model.add(MaxPooling2D(pool_size=2)) # layer 2 model.add(Conv2D(filters=32, kernel_size=2 , padding='same' , activation='relu')) model.add(MaxPooling2D(pool_size=2)) # layer 3 model.add(Conv2D(filters=64 , kernel_size=2 , padding='same' , activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.4)) # layer 4 model.add(Conv2D(filters=128 , kernel_size=2 , padding='same' , activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.4)) # 2 fully connected layers model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.4)) model.add(Dense(133,activation='softmax')) model.summary() # + id="_asGMIRi9Ug5" # compile the model model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) # - # Using model checkpointing to save the model that attains the best validation loss. # + id="hJD9c-4y9Uah" from keras.callbacks import ModelCheckpoint epochs = 25 checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', verbose=1, save_best_only=True) history = model.fit(train_tensors, train_targets, validation_data=(valid_tensors, valid_targets), epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1) # + id="rFDa3Fjy-XgR" # Load the Model with the Best Validation Loss model.load_weights('saved_models/weights.best.from_scratch.hdf5') # + [markdown] id="FAuEHzrv_M2Q" # ### **Testing the model** # - # Trying out model on the test dataset of dog images. Aim to get test accuracy is greater than 1% # + id="iLYv-rhQ-ctz" # get index of predicted dog breed for each image in test set dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors] # report test accuracy test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions) print('Test accuracy: %.4f%%' % test_accuracy) # - # In order to make the most of our few training examples, we will "augment" them via a number of random transformations, so that our model would never see twice the exact same picture. This helps prevent overfitting and helps the model generalize better. 
# + id="jMA7rW0m_09g" ## data augmentation from keras.preprocessing.image import ImageDataGenerator ## create a generator that rotate, zoom and flip the images traingen = ImageDataGenerator(rotation_range=40, width_shift_range=0.1, height_shift_range=0.1, rescale=1/255, shear_range=0.04, zoom_range=0.2, horizontal_flip=True, vertical_flip= False, fill_mode='nearest') validgen = ImageDataGenerator(rescale=1/255) ## apply the generator on test and valid sets traingen.fit(train_tensors) validgen.fit(valid_tensors) df_training = traingen.flow(train_tensors , train_targets , batch_size = 20) df_validation = validgen.flow(valid_tensors , valid_targets, batch_size = 20) # + id="t6Rd7IFmACpD" from tensorflow.keras.optimizers import Adam model.compile(optimizer= Adam(), loss='categorical_crossentropy', metrics=['accuracy']) # + id="ae9ncQK9Aehf" from keras.callbacks import ModelCheckpoint checkpointer = ModelCheckpoint(filepath='saved_models/weights.initial_scratch_model_aug.hdf5', verbose = 0, save_best_only=True) model.fit_generator(df_training, epochs = 25 , steps_per_epoch = train_tensors.shape[0]//32 , callbacks=[checkpointer] , verbose=1 , validation_data= df_validation , validation_steps = valid_tensors.shape[0]//32) # + id="ISNxHGrqAgtn" ## Here, the data augmenation accuracy becomes lower than original data. It usually causes overfitting, but in this case it is underfitting. ## need to work more on the generator dog_breed_predictions_aug = [np.argmax(model.predict(np.expand_dims(tensor, axis = 0))) for tensor in test_tensors] test_accuracy_aug = 100*np.sum(np.array(dog_breed_predictions_aug)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions_aug) print('Test accuracy with Data Augmentation: %.f%%' % test_accuracy_aug) # + def extract_VGG16(tensor): from keras.applications.vgg16 import VGG16, preprocess_input return VGG16(weights='imagenet', include_top=False).predict(preprocess_input(tensor)) def extract_VGG19(tensor): from keras.applications.vgg19 import VGG19, preprocess_input return VGG19(weights='imagenet', include_top=False).predict(preprocess_input(tensor)) def extract_Resnet50(tensor): from keras.applications.resnet50 import ResNet50, preprocess_input return ResNet50(weights='imagenet', include_top=False).predict(preprocess_input(tensor)) def extract_Xception(tensor): from keras.applications.xception import Xception, preprocess_input return Xception(weights='imagenet', include_top=False).predict(preprocess_input(tensor)) def extract_InceptionV3(tensor): from keras.applications.inception_v3 import InceptionV3, preprocess_input return InceptionV3(weights='imagenet', include_top=False).predict(preprocess_input(tensor)) # - # ## **Use a CNN to Classify Dog Breeds** # To reduce training time without sacrificing accuracy, train a CNN using transfer learning. # + id="N0gUR8IKB8In" # Obtain Bottleneck Features import numpy as np bottleneck_features = np.load(r'../input/vgg16-data/DogVGG16Data.npz') train_VGG16 = bottleneck_features['train'] valid_VGG16 = bottleneck_features['valid'] test_VGG16 = bottleneck_features['test'] train_VGG16.shape[1:] # + [markdown] id="PSx-Vmv6CCpO" # ### **Model Architecture** # The model uses the the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax. 
# + id="P8p2EunSCCCd" VGG16_model = Sequential() VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:])) VGG16_model.add(Dense(133, activation='softmax')) VGG16_model.summary() # + [markdown] id="mRSjvE4GCTQN" # ### **Compile the Model** # + id="yuN43deVCOit" VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # + [markdown] id="jeO5dzdlCXnO" # ### **Train the Model** # + id="AHD6mi8FCr_F" checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', verbose=1, save_best_only=True) VGG16_model.fit(train_VGG16, train_targets, validation_data=(valid_VGG16, valid_targets), epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1) # + id="RfTdyVYxCzrN" # Load the Model with the Best Validation Loss VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5') # + id="GszWUUYSC4JX" # Test the Model # get index of predicted dog breed for each image in test set VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16] # report test accuracy test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions) print('Test accuracy: %.4f%%' % test_accuracy) # + [markdown] id="wBgbnUQdDGb7" # ### **Predict Dog Breed with the Model** # + id="V_ltcr5pDDnk" #from extract_bottleneck_features import extract_VGG16 def VGG16_predict_breed(img_path): # extract bottleneck features bottleneck_feature = extract_VGG16(path_to_tensor(img_path)) # obtain predicted vector predicted_vector = VGG16_model.predict(bottleneck_feature) # return dog breed that is predicted by the model return dog_names[np.argmax(predicted_vector)] # + id="HFR3oPi0DLSD" def display_img(img_path): img = cv2.imread(img_path) cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) imgplot = plt.imshow(cv_rgb) return imgplot # + [markdown] id="4mxQpGQqDW4j" # ### **Creating a CNN to Classify Dog Breeds (using Transfer Learning)** # + id="wyicqcA8DWc2" def other_bottleneck_features(path): bottleneck_features = np.load(path) train = bottleneck_features['train'] valid = bottleneck_features['valid'] test = bottleneck_features['test'] return train,valid,test # + id="ooph6KB7F_4s" ### Obtain bottleneck features from another pre-trained CNN. 
train_Xception , valid_Xception, test_Xception = other_bottleneck_features('../input/xception/DogXceptionData.npz') train_Resnet50 , valid_Resnet50, test_Resnet50 = other_bottleneck_features('../input/resnet/DogResnet50Data.npz') train_VGG19 , valid_VGG19, test_VGG19 = other_bottleneck_features('../input/vgg16-data/DogVGG16Data.npz') train_Inception , valid_Inception, test_Inception = other_bottleneck_features('../input/inception/DogInceptionV3Data.npz') # + id="XdimmgdBKUpT" Xception_model = Sequential() Xception_model.add(GlobalAveragePooling2D(input_shape=(train_Xception.shape[1:]))) Xception_model.add(Dense(133, activation='softmax')) Xception_model.summary() # + id="6WVp-LPsK1nq" Resnet50_model = Sequential() Resnet50_model.add(GlobalAveragePooling2D(input_shape=(train_Resnet50.shape[1:]))) Resnet50_model.add(Dense(133, activation='softmax')) Resnet50_model.summary() # + id="XvpQrbJrK2rh" VGG19_model = Sequential() VGG19_model.add(GlobalAveragePooling2D(input_shape=(train_VGG19.shape[1:]))) VGG19_model.add(Dense(133, activation='softmax')) VGG19_model.summary() # + id="hn5CPkW3LABg" Inception_model = Sequential() Inception_model.add(GlobalAveragePooling2D(input_shape=(train_Inception.shape[1:]))) Inception_model.add(Dense(133, activation='softmax')) Inception_model.summary() # + id="uAXce5uZLKRb" ### Compile the model Xception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) Resnet50_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) VGG19_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) Inception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # + id="Zx4uxC9JLOxQ" ### Train the model. checkpointer_Xception = ModelCheckpoint(filepath='saved_models/weights.best.Xception.hdf5', verbose=1 , save_best_only =True) Xception_history = Xception_model.fit(train_Xception, train_targets, validation_data = (valid_Xception , valid_targets), epochs=25, batch_size=20, callbacks=[checkpointer_Xception], verbose=1) # + id="LQDewPdXLUUw" ### Train the model. checkpointer_Resnet50 = ModelCheckpoint(filepath='saved_models/weights.best.Resnet50.hdf5', verbose=1 , save_best_only =True) Resnet50_model.fit(train_Resnet50, train_targets, validation_data = (valid_Resnet50 , valid_targets), epochs=25, batch_size=20, callbacks=[checkpointer_Resnet50], verbose=1) # + id="SgksSweLLYuK" ### Train the model. checkpointer_VGG19 = ModelCheckpoint(filepath='saved_models/weights.best.VGG19.hdf5', verbose=1 , save_best_only =True) VGG19_model.fit(train_VGG19, train_targets, validation_data = (valid_VGG19 , valid_targets), epochs=25, batch_size=20, callbacks=[checkpointer_VGG19], verbose=1) # + id="XXhiaAMlLd9w" ### Train the model. checkpointer_Inception = ModelCheckpoint(filepath='saved_models/weights.best.Inception.hdf5', verbose=1 , save_best_only =True) Inception_model.fit(train_Inception, train_targets, validation_data = (valid_Inception , valid_targets), epochs=25, batch_size=20, callbacks=[checkpointer_Inception], verbose=1) # + id="OpwnhEz0Lh93" ### Load the model weights with the best validation loss. 
Xception_model.load_weights('saved_models/weights.best.Xception.hdf5') Resnet50_model.load_weights('saved_models/weights.best.Resnet50.hdf5') VGG19_model.load_weights('saved_models/weights.best.VGG19.hdf5') Inception_model.load_weights('saved_models/weights.best.Inception.hdf5') # + id="JeseeoSgLlub" Xception_model.load_weights('saved_models/weights.best.Xception.hdf5') # + id="YPzhuf_7Lni3" # a function that returns the prediction accuracy on test data def evaluate_model (model, model_name,tensors,targets): predicted = [np.argmax(model.predict(np.expand_dims(feature, axis=0))) for feature in tensors] test_accuracy = 100*np.sum(np.array(predicted)==np.argmax(targets, axis=1))/len(predicted) print (f'{model_name} accuracy on test data is {test_accuracy}%') # + id="dDgL7H84LrLo" ### Calculate classification accuracy on the test dataset. evaluate_model(Xception_model, "Xception" , test_Xception, test_targets) evaluate_model(Resnet50_model,"Resnet50", test_Resnet50, test_targets) evaluate_model(VGG19_model,"VGG19", test_VGG19, test_targets) evaluate_model(Inception_model,"InceptionV3", test_Inception, test_targets) # + id="e6LZLlEMLwnC" ## plot the history of loss and accuracy for train and valid data for the best model, Xception loss = Xception_history.history['loss'] val_loss = Xception_history.history['val_loss'] plt.figure(figsize=(10,8)) plt.plot(loss,"--", linewidth=3 , label="train") plt.plot(val_loss, linewidth=3 , label="valid") plt.legend(['train','test'], loc='upper left') plt.grid() plt.ylabel('loss') plt.xlabel('Epoch') plt.title('Xception Model Loss') plt.legend(['train','test'], loc='upper left') plt.show() # + id="BRmAzjxnLyCL" ## Xception model with augmentation and fine tuning from keras.optimizers import SGD from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator from keras.layers.normalization import BatchNormalization Xception_model_aug = Sequential() Xception_model_aug.add(GlobalAveragePooling2D(input_shape=(train_Xception.shape[1:]))) Xception_model_aug.add(BatchNormalization()) Xception_model_aug.add(Dense(133, activation='softmax')) traingen = ImageDataGenerator(rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, rescale=1/255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') validgen = ImageDataGenerator(rescale=1/255) traingen.fit(train_Xception) validgen.fit(valid_Xception) df_training = traingen.flow(train_Xception , train_targets , batch_size = 20) df_validation = validgen.flow(valid_Xception , valid_targets, batch_size = 20) checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception.hdf5', verbose = 0, save_best_only=True) sgd = SGD(lr= 1e-3 , decay=1e-6, momentum=0.9 , nesterov = True) # compile Xception_model_aug.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) Xception_model_aug.fit_generator(df_training, epochs = 25 , steps_per_epoch = train_Xception.shape[0]//20 , callbacks=[checkpointer] , verbose=1 , validation_data= df_validation , validation_steps = valid_Xception.shape[0]//20) # + id="S_7TUQmjL8qf" ## here, again, with data augentation the accuracy decreased evaluate_model(Xception_model_aug, "fine_tuned Xception" , test_Xception, test_targets) # + id="C_lEJkI3MAwG" ## finetuned the Exception model by minimizing the cross-entropy loss function using stochastic gradient descent ## and learning rate of 0.001 Xception_model_aug = Sequential() Xception_model_aug.add(GlobalAveragePooling2D(input_shape=(train_Xception.shape[1:]))) # 
Xception_model_aug.add(BatchNormalization()) Xception_model_aug.add(Dense(133, activation='softmax')) checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception.hdf5', verbose = 0, save_best_only=True) sgd = SGD(lr= 1e-3 , decay=1e-6, momentum=0.9 , nesterov = True) # compile Xception_model_aug.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) Xception_model_aug.fit(train_Xception , train_targets, validation_data = (valid_Xception, valid_targets), shuffle = True, batch_size = 20, epochs = 25, verbose = 1) # + id="Y6YTFcoQMHGE" ## with fine tuning the accuracy increased by almost 1.5% evaluate_model(Xception_model_aug, "fine_tuned Xception" , test_Xception, test_targets) # + id="h1r-dQF5MHwX" ## I also wanted to fine tune the Resnet50 model to see how the accuracy will change, since I was sure it woul perform well ## on this kind of animal images. Resnet50_model_fine = Sequential() Resnet50_model_fine.add(GlobalAveragePooling2D(input_shape=(train_Xception.shape[1:]))) # Xception_model_aug.add(BatchNormalization()) Resnet50_model_fine.add(Dense(133, activation='softmax')) checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.Resnet50.hdf5', verbose = 0, save_best_only=True) sgd = SGD(lr= 1e-3 , decay=1e-6, momentum=0.9 , nesterov = True) # compile Resnet50_model_fine.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) Resnet50_model_fine.fit(train_Xception , train_targets, validation_data = (valid_Xception, valid_targets), shuffle = True, batch_size = 20, epochs = 25, verbose = 1) # + id="n5zwB1pBMMFA" ## the same accuracy as Xception evaluate_model(Resnet50_model_fine, "fine_tuned Resnet50" , test_Xception, test_targets) # + id="qv9DRx7BMO2P" ### a function that takes a path to an image as input ### and returns the dog breed that is predicted by the model. def Xception_predict_breed (img_path): # extract the bottle neck features bottleneck_feature = extract_Xception(path_to_tensor(img_path)) ## get a vector of predicted values predicted_vector = Xception_model.predict(bottleneck_feature) ## return the breed return dog_names[np.argmax(predicted_vector)] # + id="_qoQT-G4MWt3" import cv2 from matplotlib import pyplot as plt def display_img(img_path): img = cv2.imread(img_path) cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) imgplot = plt.imshow(cv_rgb) return imgplot def breed_identifier(img_path): display_img(img_path) prediction = Xception_predict_breed(img_path) if dog_detector(img_path) == True: print('picture is a dog') return print (f"This dog is a {prediction}\n") else: return print('Hm we couldn''t identify, try a differnt photo') # + id="TJN8lA6CMgH-" breed_identifier(r'../input/testing/dogtest.jfif') # - breed_identifier(r'../input/testing/dog2test.jfif') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data ingestion and ELT from object store # # NPS can load and unload data from object stores like Amazon S3 and IBM Cloud object store. This works by [using Netezza External Tables to read from and write to object store](https://www.ibm.com/support/knowledgecenter/SS5FPD_1.0.0/com.ibm.ips.doc/postgresql/load/c_load_loading_cloud.html). # # Lets take a look at a few examples; lets target an AWS S3 bucket. 
Prerequisites - # # - `NZ_USER`, `NZ_PASSWORD`, `NZ_HOST` environment variables are set to point to the right Netezza instance # - `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_REGION` environment variables are set with the correct credentials # - `BUCKET` environment variable has the correct bucket name # # _Note:_ One can configure the ACLs for the bucket on AWS like [this](https://www.ibm.com/support/knowledgecenter/en/SSTNZ3/com.ibm.ips.doc/postgresql/admin/adm_nps_cloud_provisioning_prereq_aws.html) to balance security and need for NPS to read/write to it. # # # ## The sample data # # Lets use the [publically available Covid case data](https://github.com/owid/covid-19-data/tree/master/public/data) to highlight this NPS use case. For this example, the data has been put in an object store bucket (`$BUCKET`) under `/covid/owid-covid-data.csv` import os, nzpy import pandas as pd con = nzpy.connect(user=os.environ['NZ_USER'], password=os.environ['NZ_PASSWORD'], host=os.environ['NZ_HOST'], port=5480, database='system') # There are two examples here. # # - Load data into the database and do some analysis # - Use transient external tables to read (on the fly) data and perform some analysis # # The schema for the table is [published by OWID already](https://github.com/owid/covid-19-data/blob/master/public/data/owid-covid-codebook.csv). # + # Setup the table schema based on the json mentioned above schema = ''' iso_code varchar (16), continent varchar(48), location varchar(48), covid_date date, total_cases numeric(32, 20), new_cases numeric(32, 20), new_cases_smoothed numeric(32, 20), total_deaths numeric(32, 20), new_deaths numeric(32, 20), new_deaths_smoothed numeric(32, 20), total_cases_per_million numeric(32, 20), new_cases_per_million numeric(32, 20), new_cases_smoothed_per_million numeric(32, 20), total_deaths_per_million numeric(32, 20), new_deaths_per_million numeric(32, 20), new_deaths_smoothed_per_million numeric(32, 20), new_tests numeric(32, 20), total_tests numeric(32, 20), total_tests_per_thousand numeric(32, 20), new_tests_per_thousand numeric(32, 20), new_tests_smoothed numeric(32, 20), new_tests_smoothed_per_thousand numeric(32, 20), tests_per_case numeric(32, 20), positive_rate numeric(32, 20), tests_units varchar(32), stringency_index numeric(32, 20), population numeric(32, 20), population_density numeric(32, 20), median_age numeric(32, 20), aged_65_older numeric(32, 20), aged_70_older numeric(32, 20), gdp_per_capita numeric(32, 20), extreme_poverty numeric(32, 20), cardiovasc_death_rate numeric(32, 20), diabetes_prevalence numeric(32, 20), female_smokers numeric(32, 20), male_smokers numeric(32, 20), handwashing_facilities numeric(32, 20), hospital_beds_per_thousand numeric(32, 20), life_expectancy numeric(32, 20), human_development_index numeric(32, 20)''' # Read data on the fly and lets see if all is working well. 
df = pd.read_sql(f''' select unique(continent) from external 'owid-covid-data.csv' ({schema}) using ( remotesource 'S3' delim ',' uniqueid 'covid' accesskeyid '{os.environ["AWS_ACCESS_KEY_ID"]}' secretaccesskey '{os.environ["AWS_SECRET_ACCESS_KEY"]}' defaultregion '{os.environ["AWS_REGION"]}' bucketurl '{os.environ["BUCKET"]}' skiprows 1 ) where continent is not null and continent != '' ''', con) df.columns = [c.decode().lower() for c in df.columns] df # - # Ingest the data and do analysis and visualization table = 'covid' with con.cursor() as cur: # drop any old table r = cur.execute(f'select 1 from _v_table where tablename = ^{table}^') if r.fetchall(): cur.execute(f'drop table {table}') # create a table to load data cur.execute(f'create table {table} ({schema})') print(f"Table {table} created") # load data from object store cur.execute(f''' insert into {table} select * from external 'owid-covid-data.csv' ({schema}) using ( remotesource 'S3' delim ',' uniqueid 'covid' accesskeyid '{os.environ["AWS_ACCESS_KEY_ID"]}' secretaccesskey '{os.environ["AWS_SECRET_ACCESS_KEY"]}' defaultregion '{os.environ["AWS_REGION"]}' bucketurl '{os.environ["BUCKET"]}' skiprows 1 )''') print(f"{cur.rowcount} Rows loaded") # Get a week over week trend df = pd.read_sql_query(''' select continent, this_week(covid_date) as wk, max(new_cases) as total from covid where continent is not null and continent != '' group by wk, continent order by wk, continent ''', con, parse_dates = {b'WK': '%Y-%m-%d'}) df.columns = [c.decode().lower() for c in df.columns] df.total = df.total.astype(float) df.head() # + # Lets visualize the same from mizani.formatters import date_format from plotnine import * ( ggplot(df, aes(x='wk', y='total', color='continent')) + geom_line() + geom_point() + labs(y = "Total cases", x = "Week") + facet_wrap('continent') + scale_x_datetime(labels=date_format('%b %-d')) + theme(axis_text_x=element_text(rotation=60, hjust=1)) ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="JUYx6b473Wul" colab_type="text" # # Basic Convolutional Neural Network (CNN) Example # This notebook shows the use of Convlutional layers and Pooling Layers in a Neural Network implemented with Keras, using Fashion MNIST dataset. # + [markdown] id="VKACeHtT4QuP" colab_type="text" # ### Necessary imports # Tensorflow, Matplotlib and numpy are the necessary imports for this notebook. # + id="W4oXSrZB230p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8ebfff2b-3a3a-40cf-ebe7-2e4eae94e40e" import tensorflow as tf import matplotlib.pyplot as plt import numpy as np # Checking Tensorflow version print('Curreent version of Tensorflow: ', tf.__version__) # + [markdown] id="pmtP0j3-6Msk" colab_type="text" # ## Loading the dataset # At this step the Fashion MNIST dataset will be loaded for use. 
# + id="wPPvLEzE6GK4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="4d1c7671-1708-4bf4-bf7e-e293a106b672" fashion_mnist = tf.keras.datasets.fashion_mnist (train_samples, train_labels), (test_samples, test_labels) = fashion_mnist.load_data() assert train_samples.shape[0] == train_labels.shape[0] # Check for training samples the one-to-one relationship between samples and labels print('Number of training samples: ', train_samples.shape[0]) assert test_samples.shape[0] == test_labels.shape[0] # Check for test samples the one-to-one relationship between samples and labels print('Number of test samples: ', test_samples.shape[0]) assert train_samples.shape[1:] == test_samples.shape[1:] # Check the compatibility of data samples for test set and training set print('Shape of a data sample: ', train_samples.shape[1:]) # Normalizing the data train_samples = train_samples / 255.0 test_samples = test_samples / 255.0 # + [markdown] id="oRqRjYLR8O2f" colab_type="text" # ## Base model # To observe the effect of convolutional layers, a base model that contains 2 hidden layers is used. Softmax activation is used in the output layer. # + id="Irp2vDSx70FR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 697} outputId="e48e5134-82e7-4ebf-9bf8-e30c2a8ba5cd" # Constructing the graph for the base model epoch_count = 20 dnn_in = tf.keras.Input(shape = train_samples.shape[1:]) X = tf.keras.layers.Flatten()(dnn_in) X = tf.keras.layers.Dense(units = 128, activation = 'relu')(X) X = tf.keras.layers.Dense(units = 64, activation = 'relu')(X) dnn_out = tf.keras.layers.Dense(units = 10, activation = 'softmax')(X) dnn_model = tf.keras.Model(inputs = dnn_in, outputs = dnn_out) # Compiling the model dnn_model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) # Fittibg the model base_history = dnn_model.fit(x = train_samples, y = train_labels, epochs = epoch_count) # + [markdown] id="F1mMEoFC2Vkt" colab_type="text" # ### Evaluating the base model # In order to compare with the CNN later, the base model is evaluated with the accuracy metric. The training statistics are also visualized here. # + id="JQR7GWVq_A99" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="918a2822-3b3c-43e2-a09f-83e4ad761b5d" base_test_metrics = dnn_model.evaluate(x = test_samples, y = test_labels) print('Loss in test set: ' + '{:.4f}'.format(base_test_metrics[0])) print('Accuracy in test set: ' + '{:.2f}'.format(base_test_metrics[1])) # + id="DDvsg3gq32Fn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 370} outputId="71c243a3-2013-4c5c-cf9b-8420d2611c4f" fig, ax = plt.subplots(1,2, figsize = (15,5)) ax[0].plot(range(1, epoch_count + 1), base_history.history['loss']) ax[0].set_xlabel('epochs') ax[0].set_ylabel('Training Loss') ax[0].set_title('Change in Training Loss') ax[1].plot(range(1, epoch_count + 1), base_history.history['accuracy']) ax[1].set_xlabel('epochs') ax[1].set_ylabel('Training accuracy') ax[1].set_title('Change in Training Accuracy') fig.suptitle('Change in loss and accuracy in base model (DNN with no convolutions)', fontsize=16) fig.show() # + [markdown] id="lIcVBn0-5vSM" colab_type="text" # ## The Convolutional Model (CNN) # As a comparison with the other model, a neural network that has the sequence
    # CONV->MAX POOL->CONV->FLATTEN->DENSE->DENSE->DENSE(softmax) will be implemented. The following diagram shows the structure of the network:
    # ![image](https://drive.google.com/uc?export=view&id=1sy8b0e55S-xejK2e0hU6PEgsf47Kf6Gd) # # # + id="AtD2xSj28DSZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="596e9d7d-435f-4bd3-9e4e-9eda647de474" train_samples = train_samples.reshape(train_samples.shape[0], 28, 28, 1) test_samples = test_samples.reshape(test_samples.shape[0], 28, 28, 1) cnn_in = tf.keras.layers.Input(shape = train_samples.shape[1:]) X = tf.keras.layers.Conv2D(filters = 32, kernel_size = 3)(cnn_in) # Kernel size = 3 is equivalent to 3x3 convolution window X = tf.keras.layers.MaxPool2D(pool_size = 2, strides = 2)(X) # Pool size = 2 is equivalent to 2x2 pooling window X = tf.keras.layers.Conv2D(filters = 16, kernel_size = 3)(X) X = tf.keras.layers.Flatten()(X) X = tf.keras.layers.Dense(units = 128, activation = 'relu')(X) X = tf.keras.layers.Dense(units = 64, activation = 'relu')(X) cnn_out = tf.keras.layers.Dense(units = 10, activation = 'softmax')(X) # Constructing the model graph cnn_model = tf.keras.Model(inputs = cnn_in, outputs = cnn_out) # Summary of the model and its parameters cnn_model.summary() # + [markdown] id="UC9g7bsFAkb6" colab_type="text" # Here note that after a convolutional layer the dimensions are found with the following formula:
# $dim = \left\lfloor \frac{dim_{in} + 2 \cdot padding - size_{filter}}{stride} + 1 \right\rfloor$
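# For example, for the first convolution above (a $28 \times 28$ input, a $3 \times 3$ kernel, no padding, stride 1) this gives $\lfloor (28 - 3)/1 + 1 \rfloor = 26$, and the subsequent $2 \times 2$ max-pooling with stride 2 reduces it to $13$ (a worked example added for clarity; it matches the shapes reported by `cnn_model.summary()` above).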
    # This rule is valid for each of the dimensions that convolution is applied. # + [markdown] id="AnYe8jodCR64" colab_type="text" # ### Training the CNN # At this stage the CNN will be trained and then evaluated with the same metrics as the base model given above. # + id="ZGqhGeAXCQO9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 697} outputId="7728fd8b-0e7f-42a2-8f12-3a65bd3b9a2d" # Compiling the model cnn_model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) # Fitting the model to training data cnn_history = cnn_model.fit(x = train_samples, y = train_labels, epochs = epoch_count) # + id="IicVmHV4EDpW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="33b44670-166a-4f73-8b12-1b077e96c3ef" cnn_test_metrics = cnn_model.evaluate(x = test_samples, y = test_labels) print('Loss in test set for CNN: ' + '{:.4f}'.format(cnn_test_metrics[0])) print('Accuracy in test set for CNN: ' + '{:.2f}'.format(cnn_test_metrics[1])) # + id="I2-n_n9aEL8Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 370} outputId="a81d4b87-71a9-4251-d49b-c59b2ac87ee1" fig, ax = plt.subplots(1,2, figsize = (15,5)) ax[0].plot(range(1, epoch_count + 1), cnn_history.history['loss']) ax[0].set_xlabel('epochs') ax[0].set_ylabel('Training Loss') ax[0].set_title('Change in Training Loss') ax[1].plot(range(1, epoch_count + 1), cnn_history.history['accuracy']) ax[1].set_xlabel('epochs') ax[1].set_ylabel('Training accuracy') ax[1].set_title('Change in Training Accuracy') fig.suptitle('Change in loss and accuracy in convolutional model (CNN)', fontsize=16) fig.show() # + [markdown] id="y0x4vUR9F1xB" colab_type="text" # ## Comparison # For a more brief comparison, the plots for the DNN and CNN are shown as one. Since CNN extracts features first and then learrns these features, it was expected to perform better. This can be observed even in an easy dataset like Fashion MNIST, which both of the models can learn easily. 
# + id="jvI-tf6sGOcy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 370} outputId="58a0b05f-68fa-46ab-ef8c-a0662a829e9b" fig, ax = plt.subplots(1,2, figsize = (15,5)) ax[0].plot(range(1, epoch_count + 1), cnn_history.history['loss'], label = 'CNN') ax[0].plot(range(1, epoch_count + 1), base_history.history['loss'], label = 'Base Model') ax[0].legend() ax[0].set_xlabel('epochs') ax[0].set_ylabel('Training Loss') ax[0].set_title('Change in Training Loss') ax[1].plot(range(1, epoch_count + 1), cnn_history.history['accuracy'], label = 'CNN') ax[1].plot(range(1, epoch_count + 1), base_history.history['accuracy'], label = 'Base Model') ax[1].legend() ax[1].set_xlabel('epochs') ax[1].set_ylabel('Training accuracy') ax[1].set_title('Change in Training Accuracy') fig.suptitle('Comparison of CNN and Base Model (with no onvolutions)', fontsize=16) fig.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + x = [1,2,3] for item in x: pass print('end of my script') # - mystring = 'Sammy' for letter in mystring: if letter == 'a': continue print(letter) for letter in mystring: if letter == 'm': break print(letter) # + x = 0 while x < 5: if x == 2: break print(x) x = x + 1 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Network triads with networkx # # This Jupyter notebook uses the [networkx Python library](https://networkx.github.io/) to do a census of the different kinds of triads in a directed network imported from a Gephi edge list. # ## Step 1. Import modules #networkx is used to create and analyze the network import networkx as nx #os lets you change the working directory import os #csv lets your import data from a CSV file import csv # ## Step 2. Specify source directory and edge file # Replace `/Users/qad/Documents/aknet` with the path to the working directory below to the directory where you have your source files. # # For instance, the default path to the Documents directory is (substituting your user name on the computer for YOUR-USER-NAME): # # - On Mac: '/Users/YOUR-USER-NAME/Documents' # - On Windows: 'C:\\\Users\\\YOUR-USER-NAME\\\Documents' # # Then, replace `AK-I.csv` with the name of the Gephi edge list file that you want to use. It should end in .csv, and have a header row that reads *Source*, *Target*, *Weight*. It can have additional columns, but needs to have those three columns, in that order. If your file does not have a header row (but is laid out in the right order), type a `#` in front of the line that reads `next(edgereader)`. #Put the path to the directory with your edge file below os.chdir('/Users/qad/Documents/aknet') #Put the name of your edge file below edges = 'AK-I.csv' # ## Step 3. Create graph # Run the cells below to instantiate a directed graph, and populate it with data from your edge file. G = nx.DiGraph() with open(edges, 'r') as edgefile: edgereader = csv.reader(edgefile) next(edgereader) for row in edgereader: weight = {'weight':int(row[2])} G.add_edges_from([(row[0],row[1],weight)]) # Run the cells below to see the edges of the network you've created, as well as the in-degree and out-degree of the nodes. 
#Shows all edges G.edges() #Shows in-degree G.in_degree(weight='weight') #Shows out-degree G.out_degree(weight='weight') # ## Step 4. Generate lists of triads # There are 16 different types of triads that networkx can calculate for directed nteworks, and each has a numeric identifier: # ![network triads](https://i.stack.imgur.com/9Xo0R.png) # Looking at the diagram, it shouldn't be a surprise that there are thousands of instances of some of these triads (especially 003). # # Run the code cell below to see how many instances of each triad type can be found in your network. nx.triadic_census(G) # Once you know how many instances there are of some kinds of triads, you'll probably want to know what those triads are. networkx doesn't have a function to do this out of the box, but the code below (from [this Stack Overflow answer](https://stackoverflow.com/a/56124663)) creates the function. Run the code cell below. # + import itertools def _tricode(G, v, u, w): """Returns the integer code of the given triad. This is some fancy magic that comes from Batagelj and Mrvar's paper. It treats each edge joining a pair of `v`, `u`, and `w` as a bit in the binary representation of an integer. """ combos = ((v, u, 1), (u, v, 2), (v, w, 4), (w, v, 8), (u, w, 16), (w, u, 32)) return sum(x for u, v, x in combos if v in G[u]) #: The integer codes representing each type of triad. #: Triads that are the same up to symmetry have the same code. TRICODES = (1, 2, 2, 3, 2, 4, 6, 8, 2, 6, 5, 7, 3, 8, 7, 11, 2, 6, 4, 8, 5, 9, 9, 13, 6, 10, 9, 14, 7, 14, 12, 15, 2, 5, 6, 7, 6, 9, 10, 14, 4, 9, 9, 12, 8, 13, 14, 15, 3, 7, 8, 11, 7, 12, 14, 15, 8, 14, 13, 15, 11, 15, 15, 16) #: The names of each type of triad. The order of the elements is #: important: it corresponds to the tricodes given in :data:`TRICODES`. TRIAD_NAMES = ('003', '012', '102', '021D', '021U', '021C', '111D', '111U', '030T', '030C', '201', '120D', '120U', '120C', '210', '300') #: A dictionary mapping triad code to triad name. TRICODE_TO_NAME = {i: TRIAD_NAMES[code - 1] for i, code in enumerate(TRICODES)} triad_nodes = {name: set([]) for name in TRIAD_NAMES} m = {v: i for i, v in enumerate(G)} for v in G: vnbrs = set(G.pred[v]) | set(G.succ[v]) for u in vnbrs: if m[u] > m[v]: unbrs = set(G.pred[u]) | set(G.succ[u]) neighbors = (vnbrs | unbrs) - {u, v} not_neighbors = set(G.nodes()) - neighbors - {u, v} # Find dyadic triads for w in not_neighbors: if v in G[u] and u in G[v]: triad_nodes['102'].add(tuple(sorted([u, v, w]))) else: triad_nodes['012'].add(tuple(sorted([u, v, w]))) for w in neighbors: if m[u] < m[w] or (m[v] < m[w] < m[u] and v not in G.pred[w] and v not in G.succ[w]): code = _tricode(G, v, u, w) triad_nodes[TRICODE_TO_NAME[code]].add( tuple(sorted([u, v, w]))) # find null triads all_tuples = set() for s in triad_nodes.values(): all_tuples = all_tuples.union(s) triad_nodes['003'] = set(itertools.combinations(G.nodes(), 3)).difference(all_tuples) # - # In the code cell below, you can replace `210` with the identifier for any triad type to see what those triads are: # ![network triads](https://i.stack.imgur.com/9Xo0R.png) print(triad_nodes['210']) # ## Suggested citation # . *Network Triads with Networkx*. Jupyter notebook. https://github.com/quinnanya/network-triads. 2019. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ### Collect User Timeline # # This notebook is an example on how to download the full twitter history of a user id. # Note: you will need an academic or professionnal token from twitter to do that. # + import requests import json import os import time import hashlib from functools import wraps from pathlib import Path from enum import Enum # - def generate_token(): with open(r"C:\Users\Orion\Documents\Projets\CERES\credentials_pro.json", 'r') as f: return f"Bearer {json.load(f)['token']}" API_ROUTE = "https://api.twitter.com/2/" STREAM_ROUTE = "tweets/search/stream" RULES_ROUTE = "tweets/search/stream/rules" FULL_SEARCH_ROUTE = "tweets/search/all" ROOT_FOLDER = r"C:\Users\Orion\Documents\Projets\CERES\PMA\UsersTweets" # maximum size of the collect in octets MAX_SIZE = 10000000 TOKEN = generate_token() params = { "tweet.fields": "public_metrics,referenced_tweets", "expansions": "author_id,in_reply_to_user_id,attachments.media_keys", "media.fields": "url", "user.fields": "id,verified" } s = requests.Session() s.headers.update({"Authorization": TOKEN}) def get_full_user_tweets(user_id, resume_token=None): params_search = { "tweet.fields": "public_metrics,referenced_tweets,possibly_sensitive,created_at,source,reply_settings,withheld", "expansions": "author_id,in_reply_to_user_id,attachments.media_keys", "media.fields": "url,public_metrics,type,alt_text", "user.fields": "id,verified", "max_results": 500, "start_time": "2006-12-01T00:00:00Z", "end_time": "2021-09-26T00:00:00Z" } author_id = user_id query = 'from:'+ str(author_id) new_token = "" if not resume_token else resume_token i = 0 total = 0 while new_token is not None: if new_token != "": params_search['next_token'] = new_token res = s.get(API_ROUTE + FULL_SEARCH_ROUTE + '?query=' + query, params=params_search) data = res.json() count = data.get('meta', {}).get('result_count', 0) total += count print(f'on a récupéré {total} résultats') if count == 0: print(f'aucun résultat pour cette requête, l"id est peut etre invalide: {user_id}') new_token = None with open(os.path.join(ROOT_FOLDER, str(author_id) + '-' + str(i) + '.json'), 'w', encoding='utf-8') as f: json.dump(data, f, ensure_ascii=False, indent=4) new_token = data['meta'].get('next_token', None) if i % 10 == 0: print("5000 results collected, making a break") time.sleep(30) i += 1 print('FINI') get_full_user_tweets(19822406, resume_token="") for id in [3336943089, 627503827, 493640854, 828290598599872512, 72489273, 2261320772, 2756277147]: get_full_user_tweets(id) get_full_user_tweets(828290598599872515) files = os.listdir(ROOT_FOLDER) d = {} for f in files: id_ = f.split('-')[0] if id_ not in d: d[id_] = [] d[id_].append(f) d.keys() def tweets_to_csv(folder): head = "id;text;created_at;possibly_sensitive;reply_settings;source;retweet_count;reply_count;like_count;quote_count;referenced_tweets;images_url;in_reply_to_user_id;in_reply_to_user_name;in_reply_to_user_username" for key in d: with open(f'{key}.csv', 'w') as f: f.write(head) for i in range(len(d[key])): tweets = {} path = os.path.join(folder, f'{key}-{i}.json') print(path) with open(path, 'r', encoding='utf-8') as f: tweets = json.load(f) media = tweets['includes'].get('media', []) users = tweets['includes'].get('users', []) for tweet in tweets['data']: metrics = tweet['public_metrics'] 
referenced_tweets = ' '.join([t['id'] for t in tweet.get('referenced_tweets', [])]) media_keys = tweet.get('attachments', {}).get('media_keys', []) images_url = [] if media_keys: for medium in media: if medium['media_key'] in media_keys and 'url' in medium: images_url.append(medium['url']) in_reply_to_user_id = tweet.get('in_reply_to_user_id', None) in_reply_to = {} if in_reply_to_user_id: for user in users: if user['id'] == in_reply_to_user_id: in_reply_to = user row = "\n" row += f"""{tweet['id']};"{tweet['text'].strip()}";{tweet['created_at']};""".replace('\n', ' ') row += f"{tweet['possibly_sensitive']};{tweet['reply_settings']};{tweet['source']};" row += f"{metrics['retweet_count']};{metrics['reply_count']};{metrics['like_count']};{metrics['quote_count']};" row += f"{referenced_tweets or 'NA'};" row += f"{' '.join(images_url) if images_url else 'NA'};" row += f"{in_reply_to_user_id or 'NA'};{in_reply_to.get('name', 'NA')};{in_reply_to.get('username', 'NA')}" with open(f'{key}.csv', 'a', encoding='utf-8') as f: f.write(row) tweets_to_csv(ROOT_FOLDER) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Пуассоновская регрессия # ### # 2021 # + import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline plt.style.use('ggplot') # - # ## 1. Чтение и подготовка данных # Рассмотрим данные о количестве велосипедистов. Количество велосипедистов зависит от погодных условий в рассматриваемый день: чем хуже погода, тем меньше желающих. В качестве признаков возьмем: # - максимальную температуру в рассматриваемый день (F); # - минимальную температуру в рассматриваемый день (F); # - количество осадков. data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True) data.head() # Целевая переменная – `'BB_COUNT'` – содержит только целые положительные числа, что должно быть учтено при выборе предсказательной модели. data['BB_COUNT'].plot(figsize=(12,5)) plt.show() # Кроме указанных факторов, количество велосипедистов может зависеть от дня недели: в выходные количество желающих больше, нежели в будни. Также может оказаться важным месяц. Добавим столбцы, содержащие информацию о том, на какой день недели и на какой месяц приходится наблюдение: data['DAY_OF_WEEK'] = data.index.dayofweek data['MONTH'] = data.index.month data # Данные переменные являются категориальными. #
# ### Task 1
#
# 1. Define a function that takes the input data $(X, y)$ and the model parameters $\theta$ and returns the model's mean squared error.
# 2. Define an analogous function that returns the value of the Poisson regression quality functional.
# 3. Fit both models with SciPy's `minimize` function and compare the quality of the two fits, using mean absolute error as the quality metric.
# 4. Plot the original series together with the linear and Poisson regression approximations.
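# For reference (a sketch, assuming the standard log link $\mu_i = x_i^\top\theta$): up to an additive constant that does not depend on $\theta$, the averaged negative Poisson log-likelihood is
#
# $$
# \frac{1}{n}\sum_{i=1}^{n}\left(e^{x_i^\top\theta} - y_i\, x_i^\top\theta\right),
# $$
#
# which is the quality functional minimized by the `pois` function in the next cell; predictions are then recovered as $\hat{y}_i = e^{x_i^\top\theta}$.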
    from sklearn.linear_model import LinearRegression from sklearn.model_selection import cross_val_score, KFold from sklearn.compose import TransformedTargetRegressor from scipy.optimize import minimize from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error def mse(X,y,theta): return ((y-np.dot(X,theta))**2).mean() X=data.drop('BB_COUNT', axis=1) y = data['BB_COUNT'] X['const'] = 1 theta0 = np.ones(X.shape[1]) lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0)) lin_reg_params = lin_reg.x y_pred_lin = np.dot(X, lin_reg_params) mean_absolute_error(y,y_pred_lin) def pois(X,y,theta): mu = np.dot(X,theta) return (np.exp(mu)-y*mu).mean() theta0 = np.zeros(X.shape[1]) pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0)) pois_reg_params = pois_reg.x y_pred_pois = np.dot(X, pois_reg_params) data['pois_approx'] = y_pred_pois mean_absolute_error(y,data['pois_approx']) data['lin_approx'] = y_pred_lin data['pois_approx'] = np.exp(y_pred_pois) a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial') b = data['lin_approx'].plot(label = 'lin') c = data['pois_approx'].plot(label = 'pois') a.legend() b.legend() c.legend() plt.show() #
# ### Task 2
#
# Linear models are sensitive to how categorical features are represented. Transform the categorical features with one-hot encoding and repeat steps 3-4 from Task 1. How did the quality of the models change?
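# As a quick illustration of what one-hot encoding produces (a minimal sketch using `pandas.get_dummies`; the solution cell below uses scikit-learn's `OneHotEncoder` instead), each categorical column is expanded into one 0/1 indicator column per category:

# +
import pandas as pd

demo = pd.DataFrame({'DAY_OF_WEEK': [0, 1, 6], 'MONTH': [4, 7, 10]})
# each distinct value becomes its own indicator column
pd.get_dummies(demo, columns=['DAY_OF_WEEK', 'MONTH'])
# -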
    data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True) data['DAY_OF_WEEK'] = data.index.dayofweek data['MONTH'] = data.index.month X=data.drop(['BB_COUNT','DAY_OF_WEEK','MONTH'], axis = 1) y = data['BB_COUNT'] from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder(handle_unknown='ignore') enc.fit(data[['DAY_OF_WEEK','MONTH']]) X[['SUN','MON','TUES','WEN','THUR','FRI','SAT','APR','JUN','JUL','AUG','SEP','OCT','NOV']] = enc.transform(data[['DAY_OF_WEEK','MONTH']]).toarray() X['const'] = 1 theta0 = np.ones(X.shape[1]) lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0)) lin_reg_params = lin_reg.x y_pred_lin = np.dot(X, lin_reg_params) mean_absolute_error(y,y_pred_lin) theta0 = np.zeros(X.shape[1]) pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0)) pois_reg_params = pois_reg.x y_pred_pois = np.dot(X, pois_reg_params) data['pois_approx'] = y_pred_pois mean_absolute_error(y,data['pois_approx']) data['lin_approx'] = y_pred_lin data['pois_approx'] = np.exp(y_pred_pois) a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial') b = data['lin_approx'].plot(label = 'lin') c = data['pois_approx'].plot(label = 'pois') a.legend() b.legend() c.legend() plt.show() #
# ### Task 3
#
# Transform the categorical features using a Fourier decomposition and repeat steps 3-4 from Task 1. What model quality were you able to achieve?
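# One common form of such an encoding (a minimal sketch with a single sine/cosine harmonic per feature, assuming the natural periods of 12 for `MONTH` and 7 for `DAY_OF_WEEK`):

# +
import numpy as np

def fourier_encode(values, period):
    """Map a cyclic integer feature onto the unit circle with one sine/cosine pair."""
    angle = 2 * np.pi * np.asarray(values) / period
    return np.sin(angle), np.cos(angle)

# e.g. X['SIN_M'], X['COS_M'] = fourier_encode(data['MONTH'], 12)
#      X['SIN_W'], X['COS_W'] = fourier_encode(data['DAY_OF_WEEK'], 7)
# -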
    data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True) data['DAY_OF_WEEK'] = data.index.dayofweek data['MONTH'] = data.index.month X=data.drop(['BB_COUNT','DAY_OF_WEEK','MONTH'], axis = 1) y = data['BB_COUNT'] X[['SIN_M','COS_M','SIN_W','COS_W']] = np.array([np.sin(2*np.pi/7*data['MONTH']),np.cos(2*np.pi/7*data['MONTH']),np.sin(2*np.pi/12*data['DAY_OF_WEEK']),np.cos(2*np.pi/12*data['DAY_OF_WEEK'])]).transpose() X['const'] = 1 theta0 = np.zeros(X.shape[1]) lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0)) lin_reg_params = lin_reg.x lin_reg_params y_pred_lin = np.dot(X, lin_reg_params) mean_absolute_error(y,y_pred_lin) pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0)) pois_reg_params = pois_reg.x y_pred_pois = np.dot(X, pois_reg_params) data['lin_approx'] = y_pred_lin data['pois_approx'] = np.exp(y_pred_pois) mean_absolute_error(y,data['pois_approx']) a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial') b = data['lin_approx'].plot(label = 'lin') c = data['pois_approx'].plot(label = 'pois') a.legend() b.legend() c.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="pdqixpznsHrQ" # # Topic Modelling # # Técnica de clasificación de documento no supervisada para agruparlos por temáticas similares. # + id="kYQFVDhasHrZ" outputId="6d7a4160-0c0c-4b17-bb4c-c29538803ae6" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import fetch_20newsgroups #NLTK, gensim, from gensim.parsing.preprocessing import STOPWORDS from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import CountVectorizer from numpy.linalg import svd dataset = fetch_20newsgroups(shuffle=True, random_state=1, remove=('headers', 'footers', 'quotes')) docs = dataset.data len(docs) # + id="WE12wmdssHrb" outputId="755abc66-b9a1-4b7a-bfef-f422717d4a3e" print(dataset.DESCR) # + id="yGf2FEm0sHrc" outputId="ce8845d9-1c26-4b10-bdad-bcbd785ba869" dataset.target_names # + id="l1ttWxmNsHrc" df = pd.DataFrame({'documentos':docs}) # + id="iPFHgX8fsHrd" outputId="697a2353-2721-4c40-f604-027acfbab203" STOPWORDS # + id="MchMQo_AsHrd" df['doc_limpio'] = df['documentos'].str.replace("[^a-zA-Z#]", " ") df['doc_limpio'] = df['doc_limpio'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>3]).lower() ) df['doc_limpio'] = df['doc_limpio'].apply(lambda x: ' '.join([w for w in x.split() if w not in STOPWORDS])) # + id="Gal_7rhAsHrf" outputId="876f383e-b88a-4ae4-e1b7-00c9377a3fe4" df # + [markdown] id="ooMx85R-sHrg" # ## Vectorización # # Para transformar los documentos en vectores hay por lo menos dos opciones. Una es el conteo de cada token en los documentos. 
Y otra es usando ``tf-idf`` (*Term Frequency - Inverse Document Frequency*) # # $$ # \operatorname{tf}(t,d) = 0.5 + 0.5\frac {f_{t,d}} {\max \{ f_{t',d} : t' \in D\}} # $$ # # y # # $$ # \operatorname{idf}(t,D) = \log \left(\frac {|D|} {1+ |\{d \in D : t \in D \} |} \right) # $$ # # entonces el _score_ por término es # # $$ # \operatorname{tf-idf}(t,d,D) = \operatorname{tf}(t,d) \times \operatorname{idf}(t,D) # $$ # # Vean el ejemplo en [Wikipedia](https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Example_of_tf%E2%80%93idf) # + id="bg0vkpuSsHrh" outputId="b97799ec-ee6e-4b19-dbc2-d4e226845e47" #vectorizador = CountVectorizer(stop_words='english', max_features=1000) vectorizador = TfidfVectorizer(stop_words='english', max_features=1000, max_df=0.5, smooth_idf=True) X = vectorizador.fit_transform(df['doc_limpio']) X # + id="Zxlcp0ANsHrh" outputId="b60e5b8a-b286-4b3e-df9d-6fb189168ee1" Xa = X.toarray() Xa[0] # + id="KBSN0ZlQsHrh" outputId="3a1a332c-288d-46a4-c548-44d3dc33c250" Xa.shape # + id="t8CVH42LsHri" u, s, vt = svd(Xa, full_matrices=False) # + id="GSEc4KcisHri" outputId="6c0fbf3a-8dc6-42e8-d4cd-685fb8d2a53d" u.shape # + id="YMkgFDuTsHri" outputId="53ac9425-a7c6-4fa1-9800-6c294650a67c" vt.shape # + id="oLGSfdZ-sHrj" outputId="27201bbb-747e-4b70-ca02-fc763ae3f9f9" s.shape # + id="TNWf4XuKsHrj" outputId="4d8b0dc0-cac5-4252-effc-03f982580839" reconstruye = lambda p,s,q,n : np.dot( np.dot(p[:,:n], np.diag(s[:n])), q[:n]) recomp = reconstruye(u,s,vt,1000) recomp.shape # + id="X4B2RBrgsHrj" outputId="f1b7ce29-2d31-4951-e380-aeb7f7fdbc19" np.allclose(Xa, recomp) # + id="_rV3EmNbsHrk" outputId="26efdb0d-0486-4ac3-da08-795b73f0d6b9" sns.lineplot(x=range(1,len(s)+1), y=s) # + id="2L-29PYmsHrk" outputId="f1f2fed2-ebb0-4b4a-dcaf-bb054f5320ae" w = [s[i+1]/s[i] for i in range(len(s)-1)] sns.lineplot(x = range(1,51), y=w[:50]) # + id="es6XD3s_sHrk" outputId="a1211775-ca47-420a-ab3a-9f70cf873eea" R = reconstruye(u,s,vt, 10) R.shape # + id="29g1L1FCsHrl" outputId="a6b2f48a-0069-40dd-a878-1ac5a67efd90" terms = vectorizador.get_feature_names() len(terms) # + id="bEwKG0y9sHrl" outputId="ad5516a2-93cc-4fee-ac8b-90dcc2b3c4ce" print(df.iloc[2032]['doc_limpio']) # + id="Mmwz8m60sHrl" outputId="a68a05b1-bb10-4e56-a827-10c75720412e" vector = vectorizador.transform(['[deletions] [deletions] presented argument incredulity. however, seen, presented manner. usually presented form, "and *besides*, see... ...nor offered convincing explanation." moreover, unreasonable explanation phenomena. theism provide convincing explanation argument theisms favor. especially different theisms offer different explanations, different adherents purportedly theism different explanations... experience. experience, common reason lack evidence theisms favor. mileage vary. heck, ill snide once. its fairly easy attack arguments made. (i.e. strawmen.) sage advice indeed. 
sincerely, ']) varr = vector.toarray() varr.shape # + id="tjcbzUtPsHrm" outputId="ff7dd0d0-9c88-466f-e44e-1d6874970935" dh = np.dot(recomp, varr.T) np.round(dh, decimals=3) # + id="NQuAJWeNsHrm" outputId="e1085a7b-737b-4974-f62d-96bd0e46fbc9" p = np.argsort(dh.reshape(1,-1)) p[0][-5:] # + id="NdRjXgBysHrm" outputId="8d6ce3df-d2d9-4edd-fb24-7f2bc9daf60f" print(df.iloc[10964]['doc_limpio']) # + id="6SOEBt23sHrm" outputId="0108488e-fc6c-4812-a0e0-d665f0d1b7fb" print(df.iloc[9528]['doc_limpio']) # + id="914ljHJYsHrn" outputId="0cb662ed-44e8-4c7a-ac97-6d3c078c2a1a" k = 10 vocab = vectorizador.vocabulary_ for i, componente in enumerate(vt): vocab_comp = zip(terms, componente) jerarquia_temas = sorted(vocab_comp, key=lambda x:x[1], reverse=True)[:k] print(u"Tópico {}".format(i)) for temas in jerarquia_temas: print("\t {} {}".format(temas[0], temas[1])) print("*"*50) # + id="hpDWv5dTsHrn" outputId="b03b8741-0d3d-4750-a903-563fba2f6e3a" len(vocab) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- document = """Developing the Package Index PyPI.nextgen: Currently, as of 2013-11-11, PyPI is undergoing a complete rewrite from scratch, and as a result much of the information on this page is not actual. You can see preview of the new site at: https://pypi.org/ The development moved from Mercurial (Python) to Git (C, shell). License changed from BSD-3 to Apache 2.0. Project code named 'warehouse' can be downloaded from: https://github.com/pypa/warehouse Previous PyPI version "previous" version of PyPI is the code that was running on http://pypi.python.org from ... till the end of 2013. It was originally written by ... and was running on ... . The information below should help you get around the code. The PyPI code was hosted under the Python Packaging Authority project: https://bitbucket.org/pypa/pypi and is now on GItHub: https://github.com/pypa/pypi-legacy Bug and patch tracker https://github.com/pypa/pypi-legacy/issues Mailing List (Gmane web interface) API that is used by easy_install http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api PyPIOAuth - authentication library for Google and Launchpad logins PyPI architecture and endpoints PyPI is a WSGI application that can be executed standalone using python pypi.wsgi command if all requirements are met. pypi.wsgi contains usual WSGI wrapper code and delegates request processing to WebUI.run() method from webui.py. This method just opens DB and handles exceptions, actual request processing is done in WebUI.inner_run(). This method analyzes URL endpoint and executes appropriate handler. As of 2011-04, the rules to match endpoints to handlers are the following: /simple WebUI.run_simple() dump all package names on single html page /simple/(.+)/ WebUI.run_simple() dump all links for a package in html list /serversig/(.+)/ .run_simple_sign() save as above, but signed by server /mirrors .mirrors() display static page with a list of mirrors /daytime .daytime() display current server time ... XML-RPC requests are detected by CONTENT_TYPE=text/xml variable in CGI environment and processed by rpc.RequestHandler().__call__(). List of XML-RPC "endpoints" is available on PyPIXmlRpc page. Testing Your Stuff Against PyPI If you need to test stuff against PyPI (registration, uploading, API activities) then please use the alternative server "testpypi.python.org". 
TO-DO list A dump of download counts. A big structured dump of all package meta-data. A link from package to RTFD. PEP for metadata 1.2 -- not finished and needs more catalog-sig discussion) documented procedures for "taking over" entries should the original owner of the entry go away (and any required system support) tooltips for field labels change notification emails per-classifier "wiki" content to allow description and discussion around each classifier (perhaps what packages are available and how they relate to one another) screenshot images (with thumbnailing and a "latest screenshot" on the front page?) - or perhaps icons instead of thumbnails for some packages? Something that's been requested, but needs much more thought and analysis to see whether it causes any problems: the ability to treat project names and versions as case-insensitive, while removing extraneous characters (as in pkg_resources.safe_name()) for purposes both of searching and determining name uniqueness when registering. Done command-line tool to query pypi and fetch entries: yolk Not Going TO-DO Edit PEP 243 to reflect reality. The interface is implemented in the distutils register and upload commands. This code is good enough for documentation, especially because it's the only implementation necessary. moderated user reviews and ratings (this would require quite a lot of support from volunteers though) Proposals EnhancedPyPI Enhance multiple package index servers support in Distutils. Development Environment Hints WARNING: Most of the information in here are out of date, see the instruction on the PyPI-legacy GitHub repository for more information, and most likely ask the developers for hints before trying to work on PyPI locally on your own ! PyPI uses postgresql 9.5 as a database, with a roll it yourself web framework based on different python modules. It uses apache2 as the web server. It can run using wsgi, cgi, fcgi and mod_python. Before restoring database, "pypi" role must exists: Ask RichardJones if you need a database dump. Note that dumps should not be imported into an existing database that has had the pkdump_schema.sql DDL script run against it. The pg_dump file will create all of the database tables, columns, indexes, foreign keys, etc. that are required. if your config.ini isn't in /tmp/pypi.ini. You can leave it as 'config.ini' if it's in the same directory as pypi.py. You will need to add cheesecake_password= into the config.ini in the webui section. To integrate it with Apache, we recommend to use WSGI through mod_wsgi. Your configuration should look like this:. """ # + document = document.replace('\n', ' ') document = document.replace(' ', '.') document = document.strip() document = document.replace('...', '') document = document.replace('..', '.') document # - import spacy nlp = spacy.load('en_core_web_sm') doc = nlp(document) for i, sent in enumerate(doc.sents): print(i, ' ' ,sent) #Dot '.' doc[sent.end -1] for i, sent in enumerate(doc.sents): if doc[sent.start].is_title == True and doc[sent.end - 1].text == '.' : print(i, ' ' ,sent) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import random # + def generateKromosom(uk_krom): krom = [] for i in range(uk_krom): k = random.randint(0,9) krom.append(k) return krom #ini 1 kromosom def decodeKromosom(krom): #ini masih statis buat kromosom 8 gen!!!!! 
# x1_val = -3 + (6 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((krom[0]*10**-1)+(krom[1]*10**-2)+(krom[2]*10**-3)) # x2_val = -2 + (4 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((krom[3]*10**-1)+(krom[4]*10**-2)+(krom[5]*10**-3)) x1_val = -3 + (6 / (9 * (10**-1 + 10**-2 + 10**-3 + 10**-4))) * ((krom[0]*10**-1)+(krom[1]*10**-2)+(krom[2]*10**-3)+(krom[3]*10**-4)) x2_val = -2 + (4 / (9 * (10**-1 + 10**-2 + 10**-3 + 10**-4))) * ((krom[4]*10**-1)+(krom[5]*10**-2)+(krom[6]*10**-3)+(krom[7]*10**-4)) return [x1_val, x2_val] def generatePopulasi(ukpop): pop = [] # krom_val = [] for i in range(ukpop): kro = generateKromosom(8) #kromosom 6 gen pop.append(kro) return pop # + def hitungFitnessAll(pop, ukpop): fit_all = [] # krom_val = [] # for i in range(ukpop): # k = decodeKromosom(b[i]) # krom_val.append(k) # print(krom_val) for i in range(ukpop): k = decodeKromosom(pop[i]) fit_func = -((((4 - (2.1 * k[0]**2) + (k[0]**4 / 3)) * k[0]**2) + (k[0]*k[1]) + ((-4 + (4*k[1]**2))*k[1]**2))) fit_all.append(fit_func) # fit_func = -(((4 - (2.1 * pop[i][0]**2) + (pop[i][0]**4 / 3)) * pop[i][0]**2) + (pop[i][0]*pop[i][1]) + ((-4 + (4*pop[i][1]**2))*pop[i][1]**2)) # fit_all.append(fit_func) return fit_all # + # def totalFitness(fit_all, ukpop): # sum_fit = 0 # for i in range(ukpop): # sum_fit = sum_fit + fit_all[i] # return sum_fit def rouletteWheel(pop, fit_all): r = random.random() # print('random =', r) krom = 0 while (r > 0 and krom < len(pop)-1): r = r - (fit_all[krom]/sum(fit_all)) krom = krom + 1 # print('yg kepilih: ',krom) return krom # - def crossover(par, pc): #0.7 r = random.random() if (r < pc): point = random.randint(0,7) #diganti berdasarkan ukuran gen for i in range(point): par[0][i], par[1][i] = par[1][i], par[0][i] return par def mutasi(par, pm): #dibawah 0.3, nanti 0.17 r = random.random() if (r < pm): rep0 = random.randint(0,9) mut0 = random.randint(0,5) rep1 = random.randint(0,9) mut1 = random.randint(0,5) par[0][mut0] = rep0 par[1][mut1] = rep1 return par def getElitisme(fit_all): return fit_all.index(max(fit_all)) # + ukpop = 50 ukkrom = 8 max_gen = 100 pc = 0.75 pm = 0.18 pop = generatePopulasi(ukpop) newpop = [] print(pop) print("HHHHHHHHHHH") for i in range(max_gen): fit = hitungFitnessAll(pop, ukpop) # print(fit) # print('jumlah fitness :', sum(fit)) # print(max(fit)) best = getElitisme(fit) # print(best) print('krom best :', pop[best]) print('value',decodeKromosom(pop[best])) newpop.append(pop[best]) newpop.append(pop[best]) # print("Populasi baru") # print(newpop) print() print() print("PENCARIAN PARENT") g = 0 while (g < ukpop-2): par = [] par.append(pop[rouletteWheel(pop, fit)]) par.append(pop[rouletteWheel(pop, fit)]) offspring = crossover(par, pc) # print('setelah di crossover :', offspring) offspring = mutasi(par, pm) # print('setelah di mutasi:', offspring) newpop.append(offspring[0]) newpop.append(offspring[1]) # print('newpopulasi ke',g,) # print(newpop) g = g + 1 pop = newpop print('FINISh') # print(newpop) # print(pop) # - ukpop = 10 ukkrom = 6 max_gen = 10 pc = 0.75 pm = 0.17 populasi = generatePopulasi(ukpop) populasi_new = [] par = [] for i in range(max_gen): fitness = hitungFitnessAll(populasi, ukpop) best = getElitisme(fitness) populasi_new.append(populasi[best]) populasi_new.append(populasi[best]) g = 0 while (g < ukpop): print(rouletteWheel(populasi, fitness)) par.append(populasi[rouletteWheel(populasi, fitness)]) par.append(populasi[rouletteWheel(populasi, fitness)]) offspring = crossover(par, pc) offspring = mutasi(par, pm) populasi_new.append(offspring) g = g + 1 
print(max(fitness)) populasi = populasi_new def generalReplacement(pop, chi): # + parent1 = pop[rouletteWheel(populasi, ukpop, fitness, sum_fit)] parent2 = pop[rouletteWheel(populasi, ukpop, fitness, sum_fit)] # x1 = [] # x2 = [] # for i in range(3): # x = random.randint(0,9) # x1.append(x) # y = random.randint(0,9) # x2.append(y) # print(x1) # print(x1[2]) # print(x2) # x1_val = -3 + (6 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((x1[0]*10**-1)+(x1[1]*10**-2)+(x1[2]*10**-3)) # x2_val = -2 + (4 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((x2[0]*10**-1)+(x2[1]*10**-2)+(x2[2]*10**-3)) # print(x1_val) # print(x2_val) # pop = [] # for i in range(100): # kro = generateKromosom() # pop.append(kro) # print(pop[2]) # def generateKromosom(): # x1 = [] # x2 = [] # for i in range(3): # x = random.randint(0,9) # x1.append(x) # y = random.randint(0,9) # x2.append(y) # x1_val = -3 + (6 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((x1[0]*10**-1)+(x1[1]*10**-2)+(x1[2]*10**-3)) # x2_val = -2 + (4 / (9 * (10**-1 + 10**-2 + 10**-3))) * ((x2[0]*10**-1)+(x2[1]*10**-2)+(x2[2]*10**-3)) # return [x1_val, x2_val] #ini 1 kromosom # HAPUS AJA GPP # def rouletteWheel(pop, ukpop, fitness, sum_fit): # rand_sel = [] # pi_all = [] # pi_temp = [] # parent = [] # for i in range(ukpop): # pi = fitness[i]/sum_fit # pi_all.append(pi) # sum_pi = 0 # for i in range(ukpop): # sum_pi = sum_pi + pi[i] # pi_temp.append(sum_pi) # num_rand = random.random() # while(num_rand > pi_temp[i]): # i = i + 1 # parent.append(pop[i]) # num_rand = random.random() # krom = 1 # while (num_rand > 0): # r -= fitness[krom]/sum_fit # krom++ # for i in range(ukpop/3): # i = 0 # num_rand = random.random() # while(num_rand > pi_temp[i]): # i = i + 1 # parent.append(pop[i]) # return parent x = [[1,2,3],[4,5,6]] print(x) print(x[0]) print(x[0][2]) p = 3 for i in range(p,6): print(i) roulette wheel return individu terus crossover a = roulette wheel b = roulette wheel return anak baru b = generatePopulasi(20) print(b) print("halo") print(b[1]) print("kkkk") kro = [8, 0, 9, 8, 4, 6] print(kro) print(decodeKromosom(kro)) # + import numpy my = [1, 2, 3, 100, 5] x = numpy.argsort(my) # vowels list vowels = [1, 2, 3, 100, 5] y = max(my) # element 'e' is searched index = vowels.index(y) # index of 'e' is printed print('The index of e:', index) # + pop = generatePopulasi(10) print('Populasi ') print(pop) print() fit = hitungFitnessAll(pop, 10) print('Fitness') print(fit) print() newpop = [] bestt = getElitisme(fit) print('krom best :', pop[best]) print('value',hitungFitness(pop[bestt])) newpop.append(pop[bestt]) newpop.append(pop[bestt]) print("PENCARIAN PARENT") t = 0 while (t < 8): par = [] print('Parent ke-',t) par.append(tournamentSelection(pop, 3, 10)) par.append(tournamentSelection(pop, 3, 10)) print('Terpilih') print(par) print() offspring = crossover(par, pc) print('Crossover') print(offspring) print() offspring = mutasi(offspring, pm) print('Mutasi') print(offspring) print() print() newpop.append(offspring) t = t + 2 print('Populasi') print(newpop) print() # + IN pop = generatePopulasi(10) print('Populasi ') print(pop) print() fit = hitungFitnessAll(pop, 10) print('Fitness') print(fit) print() newpop = [] bestt = getElitisme(fit) print('krom best :', pop[bestt]) print('value',hitungFitness(pop[bestt])) newpop.append(pop[bestt]) newpop.append(pop[bestt]) print("PENCARIAN PARENT") t = 0 while (t < 8): print('Parent ke-',t) par1 = tournamentSelection(pop, fit, 3, 10) par2 = tournamentSelection(pop, fit, 3, 10) while (par1 == par2): par2 = tournamentSelection(pop, fit, 3, 
10) print('Terpilih') print('Parent 1:', par1) print('Indeks Parent 1:', pop.index(par1)) print('Parent 2:', par2) print('Indeks Parent 2:', pop.index(par2)) print() offspring = crossover(par1, par2, 0.7) print('Crossover') print(offspring) print() offspring = mutasi(offspring[0], offspring[1], 0.13) print('Mutasi') print(offspring) print() print() newpop += offspring t = t + 2 print('Populasi') print(newpop) print() fit = hitungFitnessAll(pop, 10) print('Fitness') print(fit) print() newpop = [] bestt = getElitisme(fit) print('krom best :', pop[bestt]) print('value',hitungFitness(pop[bestt])) # + # n_pop = 50 # n_krom = 8 # n_tour = n_pop//3 # max_gen = 100 # pc = 0.6 # pm = 0.13 # populasi = generatePopulasi(n_pop) # for i in range(max_gen): # newpop = [] # fitness = hitungFitnessAll(populasi, n_pop) # best = getElitisme(fitness) # print('krom best :', populasi[best]) # print('value',hitungFitness(populasi[best])) # newpop.append(populasi[best]) # newpop.append(populasi[best]) # print() # print() # for i in range(n_pop-2//2): # par = [] # par.append(tournamentSelection(populasi, n_tour, n_pop)) # par.append(tournamentSelection(populasi, n_tour, n_pop)) # offspring = crossover(par, pc) # offspring = mutasi(offspring, pm) # newpop.append(offspring[0]) # newpop.append(offspring[1]) # populasi = newpop # print('FINISH') # + # n_pop = 20 # n_krom = 8 # n_tour = 2*n_pop # max_gen = 100 # pc = 0.7 # pm = 0.13 # populasi = generatePopulasi(n_pop) # newpop = [] # print(populasi) # best = getElitisme(fitness) # print('krom best :', populasi[best]) # print('value',hitungFitness(populasi[best])) # newpop.append(populasi[best]) # newpop.append(populasi[best]) # print() # print() # print("") # t = 0 # while (t < n_pop//2): # par = [] # par.append(tournamentSelection(populasi, n_tour, n_pop)) # par.append(tournamentSelection(populasi, n_tour, n_pop)) # offspring = crossover(par, pc) # offspring = mutasi(offspring, pm) # newpop.append(offspring[0]) # newpop.append(offspring[1]) # t = t + 1 # populasi = newpop # print('FINISH') # print(newpop) # fitness = hitungFitnessAll(newpop, n_pop) # best = fitness.index(max(fitness)) # print('krom best :', newpop[best]) # print('value',hitungFitness(newpop[best]),len(newpop)) # + kpop = 10 pop = generatePopulasi(kpop) terbaek = [] print('Populasi ') print(pop) fit = hitungFitnessAll(pop, kpop) print('Fitness') print(fit) print() newpop = [] bestt = getElitisme(fit) print('krom best :', pop[bestt]) bestla = hitungFitness(pop[bestt]) print('value', bestla) terbaek.append(bestla) newpop.append(pop[bestt]) newpop.append(pop[bestt]) # print("PENCARIAN PARENT") t = 0 while (t < kpop-2): print('Populasi setelah iterasi ke', t) print(pop) print() print('Parent ke-',t) par1 = tournamentSelection(pop, kpop//4, kpop) par2 = tournamentSelection(pop, kpop//4, kpop) if (par1 == par2): idx = pop.index(par1) if (idx==0): par2 = pop[idx+1] else: par2 = pop[idx-1] print('Terpilih') print('Parent 1:', par1) print('Indeks Parent 1:', pop.index(par1)) print('Parent 2:', par2) print('Indeks Parent 2:', pop.index(par2)) print() pr1 = copy.deepcopy(par1) pr2 = copy.deepcopy(par2) offspring = crossover(pr1, pr2, 0.82) print('Crossover') print(offspring) print() offspring = mutasi(offspring[0], offspring[1], 0.21) print('Mutasi') print(offspring) print() print() newpop += offspring t = t + 2 print('Populasi') print(newpop) print() pop = newpop bestt = getElitisme(fit) print('krom best :', newpop[bestt]) bestla = hitungFitness(newpop[bestt]) print('value', bestla) # + # Parent ke- 0 # 
Terpilih # Parent 1: [3, 1, 1, 3, 4, 5, 6, 7] # Indeks Parent 1: 6 # Parent 2: [9, 5, 2, 6, 6, 3, 2, 9] # Indeks Parent 2: 7 # Crossover # ([9, 5, 2, 6, 6, 3, 6, 7], [3, 1, 1, 3, 4, 5, 2, 9]) import copy p = generatePopulasi(5) print(p) par = [] par = copy.deepcopy(p) print(par) par1 = copy.deepcopy(p[2]) par2 = copy.deepcopy(p[1]) r = random.random() if (r < 0.99): point = random.randint(0,7) for i in range(point): par1[i], par2[i] = par2[i], par1[i] print(par1,par2) print(p) print(par) # + FIX kpop = 10 pop = generatePopulasi(kpop) terbaek = [] for i in range(300): print('Populasi ') fit = hitungFitnessAll(pop, kpop) # print('Fitness') # print(fit) # print() newpop = [] bestt = getElitisme(fit) print('krom best :', pop[bestt]) bestla = hitungFitness(pop[bestt]) print('value', bestla) terbaek.append(bestla) newpop.append(pop[bestt]) newpop.append(pop[bestt]) # print("PENCARIAN PARENT") t = 0 while (t < kpop-2): # print('Parent ke-',t) par1 = tournamentSelection(pop, kpop//4, kpop) par2 = tournamentSelection(pop, kpop//4, kpop) while (par1 == par2): par2 = tournamentSelection(pop, kpop//4, kpop) # if (par1 == par2): # idx = pop.index(par1) # if (idx==0): # par2 = pop[idx+1] # else: # par2 = pop[idx-1] # print('Terpilih') # print('Parent 1:', par1) # print('Indeks Parent 1:', pop.index(par1)) # print('Parent 2:', par2) # print('Indeks Parent 2:', pop.index(par2)) # print() pr1 = copy.deepcopy(par1) pr2 = copy.deepcopy(par2) offspring = crossover(pr1, pr2, 0.82) # print('Crossover') # print(offspring) # print() offspring = mutasi(offspring[0], offspring[1], 0.13) # print('Mutasi') # print(offspring) # print() # print() newpop += offspring t = t + 2 # print('Populasi') # print(newpop) # print() pop = newpop print(len(terbaek)) print(max(terbaek)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- import time import logging import os import json import requests import pandas as pd from s3fs.core import S3FileSystem # + os.environ['AWS_CONFIG_FILE'] = 'aws_config.ini' s3 = S3FileSystem(anon=False) key = 'TheNumbers_budgets.csv' bucket = 'movie-torrents' df = pd.read_csv(s3.open('{}/{}'.format(bucket, key), mode='rb'), index_col=0) df.head() # + logger = logging.getLogger('OMDB_API') logger.setLevel(logging.INFO) # create a file handler handler = logging.FileHandler('omdb_api.log') handler.setLevel(logging.INFO) # create a logging format formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) # - # tupple of movie, title title = df['title'] year = [year[:4] for year in df['release_date']] movie_tup = [(title, year) for title, year in zip(title, year)] # + with open('./scripts/omdb_api.key', 'r') as read_file: omdb_api_key = read_file.read().strip() lst = [] for title, year in movie_tup: # meter number of requests to omdb api time.sleep(0.5) # omdb api address payload = {'t': title, 'y': year, 'apikey': omdb_api_key} html = requests.get('http://www.omdbapi.com', params=payload) # check for 200 code (good) resp = json.loads(html.text) if html.status_code != 200 or 'Error' in resp.keys(): logger.info('Year:{0} - Title:{1}'.format(year, title)) continue html_text = html.text html_json = json.loads(html_text) lst.append(html_json) if len(lst) > 20: break # - df = pd.DataFrame.from_dict(lst, orient='columns') df = df[['Actors', 'Awards', 'BoxOffice', 
'Country', 'DVD', 'Director', 'Genre', 'Language', 'Metascore', 'Production', 'Rated', 'Released', 'Runtime', 'Title', 'Type', 'Writer', 'imdbID', 'imdbRating', 'imdbVotes']] # + for col in ['BoxOffice', 'imdbVotes']: df[col].replace(to_replace='N/A', value='0', inplace=True) df[col] = df[col].replace(r'[\$,]', '', regex=True).astype(int) df['Runtime'] = df['Runtime'].replace(r'[ min]', '', regex=True).astype(int) for col in ['DVD', 'Released']: df[col] = pd.to_datetime(df[col], errors='coerce', format='%d %b %Y') for col in ['Metascore', 'imdbRating']: df[col].replace(to_replace='N/A', value='0', inplace=True) df[col] = df[col].astype(float) # - df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers # %matplotlib inline plt.rcParams['figure.figsize'] = [8, 6] plt.rcParams['font.size'] = 16 # ## Unscaled data # # Load the hill valley dataset from https://archive.ics.uci.edu/ml/datasets/Hill-Valley # + df_train = pd.read_csv('data/Hill_Valley_with_noise_Training.data') df_test = pd.read_csv('data/Hill_Valley_with_noise_Testing.data') X_train = df_train.loc[:, [x for x in df_train.columns if x.startswith('X')]] Y_train = df_train['class'] X_test = df_test.loc[:, [x for x in df_test.columns if x.startswith('X')]] Y_test = df_test['class'] print(f'Train set size: {X_train.shape}') print(f'Test set size: {X_test.shape}') # - X_train.iloc[range(10), :].T.plot(legend=False) # ## Scaled data def scale_rows(X): """Move median to zero and scale the peak to 1 or -1""" z = X.sub(X.median(axis=1), axis=0) scale = abs(z).max(axis=1) return z.div(scale, axis=0) X_train = scale_rows(X_train) X_test = scale_rows(X_test) X_train.iloc[range(10), :].T.plot(legend=False) # ## LSTM model # + model = tf.keras.Sequential() model.add(layers.LSTM(10, input_shape=(100, 1))) model.add(layers.Dense(2, activation='softmax')) model.summary() # + model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer='adam', metrics=['accuracy']) history = model.fit(np.expand_dims(X_train, 2), Y_train, validation_data=(np.expand_dims(X_test, 2), Y_test), batch_size=8, epochs=10) # - plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='test') plt.ylabel('loss') plt.xlabel('epoch') plt.legend() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''udemyPro_py388'': venv)' # name: python3 # --- # + # Order: Arguments, *args, default-arguments, **kwargs # * ARGS: TUPLE # **KWARGS: DICT def my_function(a, *args, x=2, **kwargs): print(args, type(args)) print(kwargs, type(kwargs)) my_function(1,2,3,4, b=False, c=30, d=40.5) # + import matplotlib.pyplot as plt def plot_my_list(list_x, list_y, **kwargs): plt.scatter(list_x, list_y, **kwargs) plt.show() list_x = [-3, -2, -1, 1, 1, 3] list_y = [9,4, 1, 1, 4, 9] plot_my_list(list_x, list_y, c='red') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # 
automatically reload modules when changed import simba from sympy import symbols, simplify, init_printing, Matrix, MatrixSymbol, Rational, adjoint, Symbol init_printing() # The unstable filter transfer function is given by, # # $$ # G(s) = \frac{s - 2}{s + 2} # $$ s = symbols('s') tf = (s - 2) / (s + 2) tf ss = simba.transfer_function_to_state_space(tf).extended_to_quantum() a, b, c, d = ss ss # Convert to a physically realisable state space ss = ss.to_physically_realisable() assert ss.is_physically_realisable ss slh = ss.to_slh('a') slh # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: env # language: python # name: env # --- # + from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf import sys sys.argv = sys.argv[:1] from model import ParserModel, main, Config import time from itertools import islice from sys import stdout from tempfile import NamedTemporaryFile import tensorflow as tf from utils.model import Model from data import load_and_preprocess_data from data import score_arcs from initialization import xavier_weight_init from parser import minibatch_parse from utils.generic_utils import Progbar from tensorflow.python.tools.freeze_graph import freeze_graph import tfcoreml # + # class Config(object): # """Holds model hyperparams and data information. # The config class is used to store various hyperparameters and dataset # information parameters. Model objects are passed a Config() object at # instantiation. # """ # n_word_ids = None # inferred # n_tag_ids = None # inferred # n_deprel_ids = None # inferred # n_word_features = None # inferred # n_tag_features = None # inferred # n_deprel_features = None # inferred # n_classes = None # inferred # dropout = 0.5 # embed_size = None # inferred # hidden_size = 4 # batch_size = 2048 # n_epochs = 1 # lr = 0.001 # l2_beta = 10e-8 # l2_loss = 0 # - config = Config() data = load_and_preprocess_data( max_batch_size=config.batch_size) transducer, word_embeddings, train_data = data[:3] dev_sents, dev_arcs = data[3:5] test_sents, test_arcs = data[5:] config.n_word_ids = len(transducer.id2word) + 1 # plus null config.n_tag_ids = len(transducer.id2tag) + 1 config.n_deprel_ids = len(transducer.id2deprel) + 1 transducer. '''Main function Args: debug : whether to use a fraction of the data. Make sure to set to False when you're ready to train your model for real! 
''' print(80 * "=") print("INITIALIZING") print(80 * "=") config = Config() data = load_and_preprocess_data( max_batch_size=config.batch_size) transducer, word_embeddings, train_data = data[:3] dev_sents, dev_arcs = data[3:5] test_sents, test_arcs = data[5:] config.n_word_ids = len(transducer.id2word) + 1 # plus null config.n_tag_ids = len(transducer.id2tag) + 1 config.n_deprel_ids = len(transducer.id2deprel) + 1 config.embed_size = word_embeddings.shape[1] for (word_batch, tag_batch, deprel_batch), td_batch in \ train_data.get_iterator(shuffled=False): config.n_word_features = word_batch.shape[-1] config.n_tag_features = tag_batch.shape[-1] config.n_deprel_features = deprel_batch.shape[-1] config.n_classes = td_batch.shape[-1] break print( 'Word feat size: {}, tag feat size: {}, deprel feat size: {}, ' 'classes size: {}'.format( config.n_word_features, config.n_tag_features, config.n_deprel_features, config.n_classes)) # # DON'T RERUN THINGS BEFORE THIS print(transducer.id2word) debug = False # + if debug: dev_sents = dev_sents[:500] dev_arcs = dev_arcs[:500] test_sents = test_sents[:500] test_arcs = test_arcs[:500] if not debug: weight_file = NamedTemporaryFile(suffix='.weights') # weight_file = open("something.weights", mode=) session = tf.Session() print("Building model...", end=' ') start = time.time() model = ParserModel(transducer, session, config, word_embeddings, is_training=False) print("took {:.2f} seconds\n".format(time.time() - start)) init = tf.global_variables_initializer() session.run(init) output_names = 'output/td_vec' saver = None if debug else tf.train.Saver() saver.restore(session, "checkpoints/model.ckpt") # frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants( # sess=session, # input_graph_def=tf.compat.v1.get_default_graph().as_graph_def(), # output_node_names=[output_names]) # frozen_graph = tf.compat.v1.graph_util.extract_sub_graph( # graph_def=frozen_graph, # dest_nodes=[output_names]) # with open('checkpoints/frozen_graph.pb', 'wb') as fout: # fout.write(frozen_graph.SerializeToString()) # print(80 * "=") # print("TRAINING") # print(80 * "=") # best_las = 0. 
# for epoch in range(config.n_epochs): # print('Epoch {}'.format(epoch)) # if debug: # model.fit_epoch(list(islice(train_data,3)), config.batch_size) # else: # model.fit_epoch(train_data) # stdout.flush() # dev_las, dev_uas = model.eval(dev_sents, dev_arcs) # best = dev_las > best_las # if best: # best_las = dev_las # if not debug: # saver.save(session, "checkpoints/model.ckpt") # tf.io.write_graph(session.graph_def, './checkpoints/', 'model.pbtxt') # frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants( # sess=session, # input_graph_def=tf.compat.v1.get_default_graph().as_graph_def(), # output_node_names=[output_names]) # frozen_graph = tf.compat.v1.graph_util.extract_sub_graph( # graph_def=frozen_graph, # dest_nodes=[output_names]) # with open('checkpoints/frozen_graph.pb', 'wb') as fout: # fout.write(frozen_graph.SerializeToString()) # print('Validation LAS: ', end='') # print('{:.2f}{}'.format(dev_las, ' (BEST!), ' if best else ', ')) # print('Validation UAS: ', end='') # print('{:.2f}'.format(dev_uas)) # if not debug: # print() # print(80 * "=") # print("TESTING") # print(80 * "=") # print("Restoring the best model weights found on the dev set") # saver.restore(session, "checkpoints/model.ckpt") # stdout.flush() # las,uas = model.eval(test_sents, test_arcs) # if las: # print("Test LAS: ", end='') # print('{:.2f}'.format(las), end=', ') # print("Test UAS: ", end='') # print('{:.2f}'.format(uas)) # print("Done!") # - # with tf.Graph().as_default(), tf.Session() as session: # print("Building model...", end=' ') # start = time.time() # model = ParserModel(transducer, session, config, word_embeddings, is_training=True) # print("took {:.2f} seconds\n".format(time.time() - start)) # init = tf.global_variables_initializer() # session.run(init) # output_names = 'output/td_vec' # saver = None if debug else tf.train.Saver() # # saver.restore(session, "checkpoints/model.ckpt") # frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants( # sess=session, # input_graph_def=tf.compat.v1.get_default_graph().as_graph_def(), # output_node_names=[output_names]) # frozen_graph = tf.compat.v1.graph_util.extract_sub_graph( # graph_def=frozen_graph, # dest_nodes=[output_names]) # with open('checkpoints/frozen_graph.pb', 'wb') as fout: # fout.write(frozen_graph.SerializeToString()) # saver.restore(session, "checkpoints/model.ckpt") # print(test_sents) # las,uas = model.eval(test_sents, test_arcs) # + # transducer, word_embeddings, train_data = data[:3] # dev_sents, dev_arcs = data[3:5] # test_sents, test_arcs = data[5:] # print(dev_sents[:1]) # print(dev_arcs[:1]) # import json # list(islice(train_data,1))[0][0] # - # gameplan: run minibatch_parse on single sentence pair, extract the feed. understand structure of said feed. try feeding feed into model directly. if it works => mad profit import importlib import parser importlib.reload(parser) parser.minibatch_parse( [[("What", "PRON"), ("if", "SCONJ"), ("Google", "PROPN"), ("Morphed", "VERB"), ("Into", "ADP"), ("GoogleOS", "PROPN"), ("?", "PUNCT")]], model, 1 ) # gameplan: run `load_and_preprocess_data` to create `TrainingIterable`-s, which calls `graphs2feats_and_tds` in its constructor. import data import importlib importlib.reload(data) config = Config() data = data.load_and_preprocess_data( max_batch_size=config.batch_size) # Graphs are fed into the whole pipeline pre-made through `TrainIterable` in `data.py`, via the variable `training_graphs`. 
# # #### prediction pathway # use minibatch_parse -> instatiates a bunch of partial-parses with sentences -> minibatch of PartialParses created -> model predicts on minibatch to generate td_vecs -> td vecs are used with parse_step to construct the parsed sentence in PartialParse. # # we have to port transducer (none of the graph bits -- I THINK just pps2feats and td_vec2trans_deprel) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Scope # In this notebook we give a very brief tutorial which focuses on the mnms python user interface. We'll use a downgraded version of the `d6` region so that all our operations are fast, even though you will probably never use that sim. This way you know how to use mnms in your own analysis code. # # ## Tutorial # # In mnms, we always need to generate a "sqrt-covariance" matrix as a first step; only then can we generate sims: # + from mnms import noise_models as nm, utils from soapack import interfaces as sints # load the DR5 data model in order to be able to use deep6 (qid = 'd6') data dm = sints.DR5() # - # Create a "TiledNoiseModel" instance, and a "WaveletNoiseModel" instance: tnm = nm.TiledNoiseModel('d6', data_model=dm, downgrade=8, mask_version='masks_20200723', union_sources='20210209_sncut_10_aggressive', notes='my_first_model') wnm = nm.WaveletNoiseModel('d6', data_model=dm, downgrade=8, mask_version='masks_20200723', union_sources='20210209_sncut_10_aggressive', notes='my_first_model') # Now we need to make our noise model. Because they are the slowest step to make, and take up a lot of space, by default they are written to disk. The idea is that outside of testing/development, you only need to ever run this command once (per data-release, array-set, other parameters, etc): tnm.get_model() wnm.get_model() # To save you from accidentally regenerating the exact same model again, the argument `check_on_disk` is `True` by default. If I rerun `get_model`, it will find that the model is on-disk, and return nothing (you can also keep the model in memory, stored as an instance attribute, if you want, via `keep_model=True`): tnm.get_model() wnm.get_model() # That was fast, and that's because the models are on-disk: # ls /scratch/gpfs/zatkins/data/ACTCollaboration/mnms/covmats/*my_first_model* # Note, this trick won't work if you give a model a different `notes` parameter. E.g., `my_second_model` is considered a totally distinct noise model from `my_first_model`, even if all the "scientific" parameters are the same. # # Now we can make sims. Sims are always made of single splits at a time. To do so, we must load the "sqrt-covariance" matrix from disk, so that we can sample from it, so make sure to have enough memory on-hand to do this. These files are only loaded once -- they are stored in the object instance, so that future calls to `get_sim` do not have to load from disk. There is also a similar `check_on_disk` parameter to prevent users from generating the same sim twice (by default this is True). # # By default, these are ***not*** saved to disk (but they can be, if desired). We need to tell `get_sim` which split and map number we want to make. 
The map number is used in getting a unique random seed (and if the sims is written to disk, will be stored in the filename): tmap_s3_n123 = tnm.get_sim(3, 123, verbose=True) wmap_s3_n123 = wnm.get_sim(3, 123, verbose=True) # As noted in the README, these sims always have shape (num_arrays, num_splits, num_pol, ny, nx), even if some of these axes have dimension 1. Because we generate sims per split, `num_splits=1` always. Let's take a look! utils.eshow(tmap_s3_n123, colorbar=True) utils.eshow(wmap_s3_n123, colorbar=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Constrained NLP # ## Introduction # Recall that in constrained NLP, the objective is to find the optimal value (maximum or minimum) of an objective function, subject to a set of constraints, where at least one function is not linear. # In this section, we will present the canonical form for non-linear problems, and use this form to introduce the Kuhn-Tucker conditions, which are a set of conditions that the optimal value must hold. # ## Canonical Form # The canonical form of a **minimisation** problem is defined as: # # $\min f(x)$ # # $\\text{s.t.}$ # # $g_i(x) \leq 0 \quad \forall i = 1, ..., m$ # # where $x = [x_1, x_2, ..., x_n]$ is an array with the n different decision variables of the problem, $f(x)$ is the (non-linear) objective function and $g_i(x)$ are the (non-linear) functions of the left hand sides of the m constraints. # Note that in this canonical form, all the constraints are of type *less or equal* and all the right hand sides are 0. It is not required that all of the constraints are of the same type, but for now, we will use this definition without loss of generality to present the Kuhn-Tucker conditions. # # Similarly, the canonical form of a **maximisation** problem is defined as: # # $\max f(x)$ # # $\\text{s.t.}$ # # $g_i(x) \leq 0 \quad \forall i = 1, ..., m$ # # Note that, besides that and the fact that the type of optimisation function (which is obviously of type maximise) is different, the canonical form is the same as for minimisation problems. # ## Kuhn Tucker Conditions # The Kuhn-Tucker (KT) conditions provide a set of differential equations that can be used to find the optimal value of a constrained NLP, based on the **Lagrangian** defined below: # # ### Lagrangian # Given an optimisation problem with m constraints in the canonical form, let us define the **Lagrangian** function as: # # $\text{L}(x,\lambda) = f(x) + /sum_{i=1}^{m}{\lambda_i*g_i(x)}$ # # That is, to take into account the constraints in the objective function, we add the corresponding functions in the right hand side multiplied by a set of coefficients noted as $\lambda_i$ which are known as Lagrangian multipliers. # # ### Gradient condition # Now, for any candidate solution to be an optimal solution, we know that, as for unconstrained NLP, it must be a critical point and thus satisfy: # # $\nabla_x \text{L}(x,\lambda) = 0$ # # That is, since each component of the gradient is the first order derivative of the corresponding decision variable, the Lagrangian must satisfy: # # $\frac{\delta \text{L}}{\delta x_j} = 0 \quad \forall j = [1, ..., n]$ # # ### Feasibility condition # Additionally, we must ensure that the solution is **feasible**, i.e. that meets all the constraints. 
Therefore, the feasibility condition yields: # # $g_i(x) \leq 0 \quad \forall i = [1, ..., m]$ # # Note that, given the expression of the Lagrangian, this is equivalent to: # # $\frac{\delta \text{L}}{\delta \lambda_i}g_i(x) \leq 0 \quad \forall i = [1, ..., m]$ # # ### Orthogonality condition # Now, note that for the Lagrangian and the objective function to represent exactly the point of the function, at the optimal value, the following conditions must be met: # # $\lambda_i*g_i(x)=0 \quad \forall i = [1, ..., m]$ # # That is, either the Lagrangian multiplier is equal to zero or the corresponding right hand side is equal to zero. # # ### Non-positive, non-negativity conditions # # # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # # to string uniprot id # + #from sabueso.tools.string_entity_name import to_string_uniprot_id # + #aa = to_string_uniprot_id('hexokinase-1') # + #aa # + #to_string_uniprot_id('morphine') # + #to_string_uniprot_id('mu opioid receptor') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # IBM Advanced Data Science Capstone Project # ## Sentiment Analysis of Amazon Customer Reviews # ### , Apr 2021 # ## Architectural Decisions Document # # This document provides a comprehensive architectural overview of the system, and intends to capture the significant architectural decisions which have been made while designing and implementing this project. # ### 1. Data Source # # The data that we are going to use in this project is sourced from a Kaggle user, , who uploaded the data to the following Kaggle page - # # [Amazon Reviews for Sentiment Analysis](https://www.kaggle.com/bittlingmayer/amazonreviews) # # The dataset uploaded by Adam is modified for being used in a third-party tool called *fastText*. Since we would like to conduct our own analysis from scratch on the original data, we will use the link for the raw data provided in the description section of the page. This link points to a *Google Drive* folder which has the original training and test csv files for this dataset. # # [Xiang Zhang's Google Drive dir](https://drive.google.com/drive/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M) # # The files are stored in **tar.gz** format on the drive with the following filename - **amazon_review_full_csv.tar.gz**. # # #### a. Technology Choice # # *No technology is required at this stage.* # # #### b. Justification # # We have manually identified the data source after reviewing various open-source datasets that were available. We reviewed the various datasets on platforms such as Kaggle and Qandl, and finally selected the Amazon reviews dataset as it pertains to a real-world, complex problem that can be addressed using data science and advanced machine learning. **Since the problem statement has been identified after picking the data source, we did not use any technology at this stage.** # ### 2. Enterprise Data # # Once we have identified the data source, we need to determine how to integrate this data into our project. Since the selected data source does not provide a recurring/ live stream of data, we only need to take a one-time dump from the external link and save the data to our local machine. # # #### a. 
Technology Choice # # Since we are dealing with a large dataset, we will use **Apache Spark** and **IBM Cloud Storage** to store our data in the form of parquet files. # # #### b. Justification # # We have selected **Apache Spark** in this case as this provides a lot of inbuilt features to parallelize the data exploration and feature extraction steps. The size of the dataset is quite large and we will be required to implement parallel processing in order to make the data exploration and feature extraction processes faster. # ### 3. Streaming Analytics # # #### a. Technology Choice # # *No technology is required at this stage.* # # #### b. Justification # # This step is not applicable as our data source is static. # ### 4. Data Integration # # Data integration involves cleaning the raw data that we have collected and preparing it for the specific use case of our project. # # #### a. Technology Choice # # We have chosen Jupyter Notebooks, Pandas and Spark as the technologies for this capstone project. # # #### b. Justification # # Jupyter Notebooks allow us to quickly develop python projects with seamless integration of code and supporting documentation and analysis. This will be suitable for presenting the steps undertaken to prepare the data as well as to present it to various stakeholders. Pandas and Spark will be used as powerful tools for data manipulation. # ### 5. Data Repository # # There are a host of options available for storing and persisting data. We can choose between any one of them as the requirements of this project are fairly limited. Most of the data will be collected and processed as a one-time activity and there will not be much ongoing changes to it. # # #### a. Technology Choice # # We will use **IBM Cloud Storage** as the persistent data storage platform for this project. # # #### b. Justification # # IBM Cloud Storage provides an easy way to store and access data and can be scaled as per the requirements of the project. # ### 6. Discovery and Exploration # # Data discovery and exploration is one of the most integral steps of any data analysis project. In our specific case, we need to understand the characteristics of the Amazon Customer Review data in terms of natural language processing. # # #### a. Technology Choice # # We will be using Jupyter Notebooks as the primary technology and a number of Python packages such as **NLTK, scikit-learn, Keras, Spark, Matplotlib and WordCloud** for our data exploration. # # #### b. Justification # # The choice of packages and technology are driven by the need to analyze NLP data. NLTK provides a number of easy-to-implement methods for natural language processing. Similary, Keras and Spark also come with built-in functionalities that will be useful in transforming the data for machine learning. # ### 7. Actionable Insights # # Most of the decisions taken during the course of this project will be made in an iterative manner as we complete the various steps and get deeper understanding of the data as well as the results of our analyses. The actionable insights will come from building and deploying suitable machine learning algorithms that will be able to predict the sentiments of customers who provide product reviews on Amazon's ecommerce platform. # # #### a. Technology Choice # # We will use **Keras** as the primary technlogy for our model development. Keras provides an abstraction layer on top of TensorFlow, one of the most widely used deep learning frameworks. 
Within Keras, we will be developing MLP and LSTM neural network models. **TensorBoard** will be used to monitor the performance of the models during and after training. # # #### b. Justification # # The reason for choosing **Keras** is that it is an open source techology that provides a number of powerful features and functionalities to define, train and test deep learning algorithms. It is also able to handle large datasets easily. # # We will be generating 2 different feature sets, one using **TF-IDF bag-of-words** while another which will convert the text into padded, vectorized word sequences. The former doesn't retain the ordering of words and hence we will train an **MLP neural network** while the latter feature set will be trained on an **LSTM neural network**. # # We will compare the performance of these methods using **binary accuracy** metric, as we are building a binary classifier with balanced class distributions. # ### 8. Applications / Data Products # # Since this project only requires us to do a one-time analysis of existing data, we will not be creating a data product for its deployment. The results of our analysis will be presented to the stakeholders in an easy-to-replicate and distributable format. # # #### a. Technology Choice # # We will use Jupyter Notebooks and slide presentations to present the results of our project. # # #### b. Justification # # Jupyter Notebooks are very versatile and can be used to present detailed analysis and graphical results along with the code to replicate the various steps used. # ### 9. Security, Information Governance and Systems Management # # Security and governance are an important consideration for any enterprise project. In this specific project, since we are using open source data and we are not developing this project for any proprietary analysis, we will provide unrestricted access to the code as well as the results via Github. # # #### a. Technology Choice # # We will share the assets and analysis via Github. # # #### b. Justification # # Github is the world's most widely used platform for software development and version control. It provides all the functionality we need to control the access of our data and assets. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Ejercicio 58 # # Observe una muestra aleatoria de los comentarios de las acciones realizadas por # usuarios o ips antes de ser bloqueados. Observe otra muestra de comentarios de # acciones de todos. import pandas as pd import numpy as np logs = pd.read_csv("logs.csv") logs.head() # Solo necesito algunas columnas, por lo que reduzco el dataframe logs = logs[['timestamp','contributor_username','contributor_ip','action','title']] logs.head() # Como trabajo con un datetime, aplico el cast logs['timestamp'] = pd.to_datetime(logs['timestamp'], format='%Y-%m-%dT%H:%M:%SZ') logs.info() # 1 - Solo me quedo con aquellas filas cuyo action es block. 
logs['username/ips'] = logs['contributor_username'].fillna(logs['contributor_ip'])
del logs['contributor_username']
del logs['contributor_ip']
logs

action_grouped = logs.groupby(['action'])
blocked_actions = action_grouped.get_group('block')
blocked_actions = blocked_actions[['timestamp','title']]
blocked_actions

# Clean the nulls: cast title to str, turn the 'nan' strings back into real nulls and drop them
blocked_actions = blocked_actions.astype({'title': 'str'})
blocked_actions['title'] = blocked_actions['title'].replace({'nan': None})
blocked_actions = blocked_actions.dropna(subset=['title'])
blocked_actions['title'].isnull().sum()

# With the nulls cleaned up, manipulate the strings so that only the blocked users remain
def clean_Usuario(x):
    # the title has a prefix before a colon (e.g. 'Usuario:<name>'); keep the part after it
    j = x.split(":")
    return j[1]

blocked_actions['title'] = blocked_actions['title'].apply(clean_Usuario)

# Since title now holds the blocked users, rename the column
blocked_actions.rename(columns={'title':'user_blocked'}, inplace=True)
blocked_actions

# I need to know when repeated users were blocked for the first time
blocked_actions['first_block_date'] = blocked_actions.groupby('user_blocked')['timestamp'].transform('min')
blocked_actions = blocked_actions[['timestamp','user_blocked','first_block_date']]
blocked_actions

logs.rename(columns={'username/ips':'user_blocked'}, inplace=True)
# I am only interested in the users that appear in user_blocked.
logs

# Keep the log records of the actions of the users that were blocked
blocked_logs = logs.loc[logs['user_blocked'].isin(blocked_actions.user_blocked)]
blocked_logs

# Now that blocked_logs holds the users that were blocked, add the column with the date
# on which they were blocked for the first time
blocked_actions = blocked_actions[['user_blocked','first_block_date']]
before_being_blocked_logs = blocked_logs.merge(blocked_actions, on='user_blocked')
before_being_blocked_logs

# Keep only the rows whose timestamp is earlier than the user's first block.
before_being_blocked_logs['valid_log'] = before_being_blocked_logs['timestamp'] < before_being_blocked_logs['first_block_date']
before_being_blocked_logs

before_being_blocked_logs = before_being_blocked_logs.loc[before_being_blocked_logs.valid_log]
before_being_blocked_logs = before_being_blocked_logs[['user_blocked','action','timestamp','first_block_date']]
before_being_blocked_logs

# +
# before_being_blocked_logs holds the actions of the users that were later blocked.
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# + [markdown] id="llE7btqvKM3m"
# # High-dimensional Bayesian workflow, with applications to SARS-CoV-2 strains
#
# This tutorial describes a workflow for incrementally building pipelines to analyze high-dimensional data in Pyro. This workflow has evolved over a few years of applying Pyro to models with $10^5$ or more latent variables. We build on [Gelman et al. (2020)](https://arxiv.org/abs/2011.01808)'s concept of *Bayesian workflow*, and focus on aspects particular to high-dimensional models: approximate inference and numerical stability. While the individual components of the pipeline deserve their own tutorials, this tutorial focuses on incrementally combining those components.
# # The fastest way to find a good model of your data is to quickly discard many bad models, i.e. to iterate. # In statistics we call this iterative workflow [Box's loop](http://www.cs.columbia.edu/~blei/papers/Blei2014b.pdf). # An efficient workflow allows us to discard bad models as quickly as possible. # Workflow efficiency demands that code changes to upstream components don't break previous coding effort on downstream components. # Pyro's approaches to this challenge include strategies for variational approximations ([pyro.infer.autoguide](https://docs.pyro.ai/en/stable/infer.autoguide.html)) and strategies for transforming model coordinate systems to improve geometry ([pyro.infer.reparam](https://docs.pyro.ai/en/stable/infer.reparam.html)). # # #### Summary # # - Great models can only be achieved by iterative development. # - Iterate quickly by building a pipeline that is robust to code changes. # - Start with a simple model and [mean-field inference](https://docs.pyro.ai/en/dev/infer.autoguide.html#autonormal). # - Avoid NANs by intelligently [initializing](https://docs.pyro.ai/en/dev/infer.autoguide.html#module-pyro.infer.autoguide.initialization) and [.clamp()](https://pytorch.org/docs/stable/generated/torch.clamp.html)ing. # - [Reparametrize](https://docs.pyro.ai/en/dev/infer.reparam.html) the model to improve geometry. # - Create a custom variational family by combining [AutoGuides](https://docs.pyro.ai/en/dev/infer.autoguide.html) or [EasyGuides](https://docs.pyro.ai/en/dev/contrib.easyguide.html). # # #### Table of contents # - [Overview](#Overview) # - [Running example: SARS-CoV-2 strain prediction](#Running-example) # 1. [Clean the data](#Clean-the-data) # 2. [Create a generative model](#Create-a-generative-model) # 3. [Sanity check using mean-field inference](#Sanity-check) # 4. [Create an initialization heuristic](#Create-an-initialization-heuristic) # 5. [Reparametrize the model](#Reparametrize) # 6. [Customize the variational family: autoguides, easyguides, custom guides](#Customize) # + [markdown] id="1Sj_aC_4KM3p" # ## Overview # # Consider the problem of sampling from the posterior distribution of a probabilistic model with $10^5$ or more continuous latent variables, but whose data fits entirely in memory. # (For larger datasets, consider [amortized variational inference](http://pyro.ai/examples/svi_part_ii.html).) Inference in such high-dimensional models can be challenging even when posteriors are known to be [unimodal](https://en.wikipedia.org/wiki/Unimodality) or even [log-concave](https://arxiv.org/abs/1404.5886), due to correlations among latent variables. # # To perform inference in such high-dimensional models in Pyro, we have evolved a [workflow](https://arxiv.org/abs/2011.01808) to incrementally build data analysis pipelines combining variational inference, reparametrization effects, and ad-hoc initialization strategies. Our workflow is summarized as a sequence of steps, where validation after any step might suggest backtracking to change design decisions at a previous step. # # 1. Clean the data. # 2. Create a generative model. # 3. Sanity check using MAP or mean-field inference. # 4. Create an initialization heuristic. # 5. Reparameterize the model, evaluating results under mean field VI. # 6. Customize the variational family (autoguides, easyguides, custom guides). # # The crux of efficient workflow is to ensure changes don't break your pipeline. 
That is, after you build a number of pipeline stages, validate results, and decide to change one component in the pipeline, you'd like to minimize code changes needed in other components. The remainder of this tutorial describes these steps individually, then describes nuances of interactions among stages, then provides an example. # - # ## Running example: SARS-CoV-2 strain prediction # # The running example in this tutorial will be a model [(Obermeyer et al. 2022)](https://www.medrxiv.org/content/10.1101/2021.09.07.21263228v2) of the relative growth rates of different strains of the SARS-CoV-2 virus, based on [open data](https://docs.nextstrain.org/projects/ncov/en/latest/reference/remote_inputs.html) counting different [PANGO lineages](https://cov-lineages.org/) of viral genomic samples collected at different times around the world. There are about 2 million sequences in total. # # The model is a high-dimensional regression model with around 1000 coefficients, a multivariate logistic growth function (using a simple [torch.softmax()](https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html)) and a [Multinomial](https://pytorch.org/docs/stable/distributions.html#multinomial) likelihood. While the number of coefficients is relatively small, there are about 500,000 local latent variables to estimate, and plate structure in the model should lead to an approximately block diagonal posterior covariance matrix. For an introduction to simple logistic growth models using this same dataset, see the [logistic growth tutorial](logistic-growth.html). # + id="njvgAMszKM3q" from collections import defaultdict from pprint import pprint import functools import math import os import torch import pyro import pyro.distributions as dist import pyro.poutine as poutine from pyro.distributions import constraints from pyro.infer import SVI, Trace_ELBO from pyro.infer.autoguide import ( AutoDelta, AutoNormal, AutoMultivariateNormal, AutoLowRankMultivariateNormal, AutoGuideList, init_to_feasible, ) from pyro.infer.reparam import AutoReparam, LocScaleReparam from pyro.nn.module import PyroParam from pyro.optim import ClippedAdam from pyro.ops.special import sparse_multinomial_likelihood import matplotlib.pyplot as plt if torch.cuda.is_available(): print("Using GPU") torch.set_default_tensor_type("torch.cuda.FloatTensor") else: print("Using CPU") smoke_test = ('CI' in os.environ) # + [markdown] id="ZWFmW8HAKM3s" # ## Clean the data # # Our running example will use a pre-cleaned dataset. # We started with Nextstrain's [ncov](https://docs.nextstrain.org/projects/ncov/en/latest/reference/remote_inputs.html) tool for preprocessing, followed by the Broad Institute's [pyro-cov](https://github.com/broadinstitute/pyro-cov/blob/master/scripts/preprocess_nextstrain.py) tool for aggregation, resulting in a dataset of SARS-CoV-2 lineages observed around the world through time. # + id="DlWE50HCKM3s" from pyro.contrib.examples.nextstrain import load_nextstrain_counts dataset = load_nextstrain_counts() def summarize(x, name=""): if isinstance(x, dict): for k, v in sorted(x.items()): summarize(v, name + "." 
+ k if name else k) elif isinstance(x, torch.Tensor): print(f"{name}: {type(x).__name__} of shape {tuple(x.shape)} on {x.device}") elif isinstance(x, list): print(f"{name}: {type(x).__name__} of length {len(x)}") else: print(f"{name}: {type(x).__name__}") summarize(dataset) # + [markdown] id="6euQcq4SKM3s" # ## Create a generative model # # The first step to using Pyro is creating a generative model, either a python function or a [pyro.nn.Module](https://docs.pyro.ai/en/dev/nn.html#pyro.nn.module.PyroModule). Start simple. Start with a shallow hierarchy and later add latent variables to share statistical strength. Start with a slice of your data then add a [plate]() over multiple slices. Start with simple distributions like [Normal](), [LogNormal](), [Poisson]() and [Multinomial](), then consider overdispersed versions like [StudentT](), [Gamma](), [GammaPoisson]()/[NegativeBinomial](), and [DirichletMultinomial](). Keep your model simple and readable so you can share it and get feedback from domain experts. Use [weakly informative priors](http://www.stat.columbia.edu/~gelman/presentations/weakpriorstalk.pdf). # + [markdown] id="S3XpW4EzKM3t" # We'll focus on a multivariate logistic growth model of competing SARS-CoV-2 strains, as described in [Obermeyer et al. (2022)](https://www.medrxiv.org/content/10.1101/2021.09.07.21263228v2). This model uses a numerically stable `logits` parameter in its multinomial likelihood, rather than a `probs` parameter. Similarly upstream variables `init`, `rate`, `rate_loc`, and `coef` are all in log-space. This will mean e.g. that a zero coefficient has multiplicative effect of 1.0, and a positive coefficient has multiplicative effect greater than 1. # # Note we scale `coef` by 1/100 because we want to model a very small number, but the automatic parts of Pyro and PyTorch work best for numbers on the order of 1.0 rather than very small numbers. When we later interpret `coef` in a volcano plot we'll need to duplicate this scaling factor. # + id="gGdvwzs9KM3t" def model(dataset): features = dataset["features"] counts = dataset["counts"] assert features.shape[0] == counts.shape[-1] S, M = features.shape T, P, S = counts.shape time = torch.arange(float(T)) * dataset["time_step_days"] / 5.5 time -= time.mean() strain_plate = pyro.plate("strain", S, dim=-1) place_plate = pyro.plate("place", P, dim=-2) time_plate = pyro.plate("time", T, dim=-3) # Model each region as multivariate logistic growth. rate_scale = pyro.sample("rate_scale", dist.LogNormal(-4, 2)) init_scale = pyro.sample("init_scale", dist.LogNormal(0, 2)) with pyro.plate("mutation", M, dim=-1): coef = pyro.sample("coef", dist.Laplace(0, 0.5)) with strain_plate: rate_loc = pyro.deterministic("rate_loc", 0.01 * coef @ features.T) with place_plate, strain_plate: rate = pyro.sample("rate", dist.Normal(rate_loc, rate_scale)) init = pyro.sample("init", dist.Normal(0, init_scale)) logits = init + rate * time[:, None, None] # Observe sequences via a multinomial likelihood. with time_plate, place_plate: pyro.sample( "obs", dist.Multinomial(logits=logits.unsqueeze(-2), validate_args=False), obs=counts.unsqueeze(-2), ) # + [markdown] id="FSwBD5tNKM3u" # The execution cost of this model is dominated by the multinomial likelihood over a large sparse count matrix. # + id="Hf2Qui8UKM3u" print("counts has {:d} / {} nonzero elements".format( dataset['counts'].count_nonzero(), dataset['counts'].numel() )) # + [markdown] id="qA0ZvbCsKM3u" # To speed up inference (and model iteration!) 
we'll replace the `pyro.sample(..., Multinomial)` likelihood with an equivalent but much cheaper `pyro.factor` statement using a helper `pyro.ops.sparse_multinomial_likelihood`. # + id="fCbB8bN2KM3v" def model(dataset, predict=None): features = dataset["features"] counts = dataset["counts"] sparse_counts = dataset["sparse_counts"] assert features.shape[0] == counts.shape[-1] S, M = features.shape T, P, S = counts.shape time = torch.arange(float(T)) * dataset["time_step_days"] / 5.5 time -= time.mean() # Model each region as multivariate logistic growth. rate_scale = pyro.sample("rate_scale", dist.LogNormal(-4, 2)) init_scale = pyro.sample("init_scale", dist.LogNormal(0, 2)) with pyro.plate("mutation", M, dim=-1): coef = pyro.sample("coef", dist.Laplace(0, 0.5)) with pyro.plate("strain", S, dim=-1): rate_loc = pyro.deterministic("rate_loc", 0.01 * coef @ features.T) with pyro.plate("place", P, dim=-2): rate = pyro.sample("rate", dist.Normal(rate_loc, rate_scale)) init = pyro.sample("init", dist.Normal(0, init_scale)) if predict is not None: # Exit early during evaluation. probs = (init + rate * time[predict]).softmax(-1) return probs logits = (init + rate * time[:, None, None]).log_softmax(-1) # Observe sequences via a cheap sparse multinomial likelihood. t, p, s = sparse_counts["index"] pyro.factor( "obs", sparse_multinomial_likelihood( sparse_counts["total"], logits[t, p, s], sparse_counts["value"] ) ) # + [markdown] id="UPtf7TPSKM3v" # ## Sanity check using mean field inference # # Mean field Normal inference is cheap and robust, and is a good way to sanity check your posterior point estimate, even if the posterior uncertainty may be implausibly narrow. We recommend starting with an [AutoNormal](https://docs.pyro.ai/en/latest/infer.autoguide.html#autonormal) guide, and possibly setting `init_scale` to a small value like `init_scale=0.01` or `init_scale=0.001`. # # Note that while MAP estimating via [AutoDelta](https://docs.pyro.ai/en/latest/infer.autoguide.html#autodelta) is even cheaper and more robust than mean-field `AutoNormal`, `AutoDelta` is coordinate-system dependent and is not invariant to reparametrization. Because in our experience most models benefit from some reparameterization, we recommend `AutoNormal` over `AutoDelta` because `AutoNormal` is less sensitive to reparametrization (`AutoDelta` can give incorrect results in some reparametrized models). # + id="Uhl_-vMIKM3v" def fit_svi(model, guide, lr=0.01, num_steps=1001, log_every=100, plot=True): pyro.clear_param_store() pyro.set_rng_seed(20211205) if smoke_test: num_steps = 2 # Measure model and guide complexity. num_latents = sum( site["value"].numel() for name, site in poutine.trace(guide).get_trace(dataset).iter_stochastic_nodes() if not site["infer"].get("is_auxiliary") ) num_params = sum(p.unconstrained().numel() for p in pyro.get_param_store().values()) print(f"Found {num_latents} latent variables and {num_params} learnable parameters") # Save gradient norms during inference. series = defaultdict(list) def hook(g, series): series.append(torch.linalg.norm(g.reshape(-1), math.inf).item()) for name, value in pyro.get_param_store().named_parameters(): value.register_hook( functools.partial(hook, series=series[name + " grad"]) ) # Train the guide. 
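    # Descriptive note on the optimizer settings used just below: ClippedAdam clips
    # gradient norms and multiplies the learning rate by `lrd` after every step, so
    # lrd = 0.1 ** (1 / num_steps) decays the learning rate by roughly a factor of 10
    # over the course of training. The per-step loss is also divided by the number of
    # nonzero count cells (num_obs) so that loss traces stay comparable when the model
    # or data slice changes.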
optim = ClippedAdam({"lr": lr, "lrd": 0.1 ** (1 / num_steps)}) svi = SVI(model, guide, optim, Trace_ELBO()) num_obs = int(dataset["counts"].count_nonzero()) for step in range(num_steps): loss = svi.step(dataset) / num_obs series["loss"].append(loss) median = guide.median() # cheap for autoguides for name, value in median.items(): if value.numel() == 1: series[name + " mean"].append(float(value)) if step % log_every == 0: print(f"step {step: >4d} loss = {loss:0.6g}") # Plot series to assess convergence. if plot: plt.figure(figsize=(6, 6)) for name, Y in series.items(): if name == "loss": plt.plot(Y, "k--", label=name, zorder=0) elif name.endswith(" mean"): plt.plot(Y, label=name, zorder=-1) else: plt.plot(Y, label=name, alpha=0.5, lw=1, zorder=-2) plt.xlabel("SVI step") plt.title("loss, scalar parameters, and gradient norms") plt.yscale("log") plt.xscale("symlog") plt.xlim(0, None) plt.legend(loc="best", fontsize=8) plt.tight_layout() # + id="-z-UBawaKM3w" # %%time guide = AutoNormal(model, init_scale=0.01) fit_svi(model, guide) # + [markdown] id="8CB2cHfbKM3w" # After each change to the model or inference, you'll validate model outputs, closing [Box's loop](http://www.cs.columbia.edu/~blei/papers/Blei2014b.pdf). In our running example we'll quantitiatively evaluate using the mean average error (MAE) over the last fully-observed time step. # + id="jzIoMBXfKM3w" def mae(true_counts, pred_probs): """Computes mean average error between counts and predicted probabilities.""" pred_counts = pred_probs * true_counts.sum(-1, True) error = (true_counts - pred_counts).abs().sum(-1) total = true_counts.sum(-1).clamp(min=1) return (error / total).mean().item() def evaluate( model, guide, num_particles=100, location="USA / Massachusetts", time=-2 ): if smoke_test: num_particles = 4 """Evaluate posterior predictive accuracy at the last fully observed time step.""" with torch.no_grad(), poutine.mask(mask=False): # makes computations cheaper with pyro.plate("particle", num_particles, dim=-3): # vectorizes guide_trace = poutine.trace(guide).get_trace(dataset) probs = poutine.replay(model, guide_trace)(dataset, predict=time) probs = probs.squeeze().mean(0) # average over Monte Carlo samples true_counts = dataset["counts"][time] # Compute global and local KL divergence. global_mae = mae(true_counts, probs) i = dataset["locations"].index(location) local_mae = mae(true_counts[i], probs[i]) return {"MAE (global)": global_mae, f"MAE ({location})": local_mae} # + id="LVhNFPu4KM3x" pprint(evaluate(model, guide)) # + [markdown] id="fj--86jAYzN6" # We'll also qualitatively evaluate using a volcano plot showing the effect size and statistical significance of each mutation's coefficient, and labeling the mutation with the most significant positive effect. We expect: # - most mutations have very little effect (they are near zero in log space, so their multiplicative effect is near 1x) # - more mutations have positive effect than netagive effect # - effect sizes are on the order of 1.1 or 0.9. # + id="wgQMV_DpKM3x" def plot_volcano(guide, num_particles=100): if smoke_test: num_particles = 4 with torch.no_grad(), poutine.mask(mask=False): # makes computations cheaper with pyro.plate("particle", num_particles, dim=-3): # vectorizes trace = poutine.trace(guide).get_trace(dataset) trace = poutine.trace(poutine.replay(model, trace)).get_trace(dataset, -1) coef = trace.nodes["coef"]["value"].cpu() coef = coef.squeeze() * 0.01 # Scale factor as in the model. 
mean = coef.mean(0) std = coef.std(0) z_score = mean.abs() / std effect_size = mean.exp().numpy() plt.figure(figsize=(6, 3)) plt.scatter(effect_size, z_score.numpy(), lw=0, s=5, alpha=0.5, color="darkred") plt.yscale("symlog") plt.ylim(0, None) plt.xlabel("$R_m/R_{wt}$") plt.ylabel("z-score") i = int((mean / std).max(0).indices) plt.text(effect_size[i], z_score[i] * 1.1, dataset["mutations"][i], ha="center", fontsize=8) plt.title(f"Volcano plot of {len(mean)} mutations") plot_volcano(guide) # + [markdown] id="5MY99Q-zKM3x" # ## Create an initialization heuristic # # In high-dimensional models, convergence can be slow and NANs arise easily, even when sampling from [weakly informative priors](http://www.stat.columbia.edu/~gelman/presentations/weakpriorstalk.pdf). We recommend heuristically initializing a point estimate for each latent variable, aiming to initialize at something that is the right order of magnitude. Often you can initialize to a simple statistic of the data, e.g. a mean or standard deviation. # # Pyro's autoguides provide a number of [initialization strategies]() for initializing the location parameter of many variational families, specified as `init_loc_fn`. You can create a custom initializer by accepting a pyro sample site dict and generating a sample from `site["name"]` and `site["fn"]` using e.g. `site["fn"].shape()`, `site["fn"].support`, `site["fn"].mean`, or sampling via `site["fn"].sample()`. # + id="_PzZX6fKKM3x" def init_loc_fn(site): shape = site["fn"].shape() if site["name"] == "coef": return torch.randn(shape).sub_(0.5).mul(0.01) if site["name"] == "init": # Heuristically initialize based on data. return dataset["counts"].mean(0).add(0.01).log() return init_to_feasible(site) # fallback # + [markdown] id="zwlg9pV6KM3y" # As you evolve a model, you'll add and remove and rename latent variables. We find it useful to require inits for all latent variables, add a message to remind yourself to udpate the `init_loc_fn` whenever the model changes. # + id="rCZI0TMGKM3y" def init_loc_fn(site): shape = site["fn"].shape() if site["name"].endswith("_scale"): return torch.ones(shape) if site["name"] == "coef": return torch.randn(shape).sub_(0.5).mul(0.01) if site["name"] == "rate": return torch.zeros(shape) if site["name"] == "init": return dataset["counts"].mean(0).add(0.01).log() raise NotImplementedError(f"TODO initialize latent variable {site['name']}") # + id="gxbVyvhnKM3y" # %%time guide = AutoNormal(model, init_loc_fn=init_loc_fn, init_scale=0.01) fit_svi(model, guide, lr=0.02) pprint(evaluate(model, guide)) plot_volcano(guide) # + [markdown] id="pxeYvAlqKM3y" # ## Reparametrize the model # # Reparametrizing a model preserves its distribution while changing its geometry. Reparametrizing is simply a change of coordinates. When reparametrizing we aim to warp a model's geometry to remove correlations and to lift inconvenient topological manifolds into simpler higher dimensional flat Euclidean space. # # Whereas many probabilistic programming languages require users to rewrite models to change coordinates, Pyro implements a library of about 15 different reparametrization effects including decentering (Gorinova et al. 2020), Haar wavelet transforms, and neural transport (Hoffman et al. 2019), as well as strategies to automatically apply effects and machinery to create custom reparametrization effects. 
Using these reparametrizers you can separate modeling from inference: first specify a model in a form that is natural to domain experts, then in inference code, reparametrize the model to have geometry that is more amenable to variational inference. # + [markdown] id="9Xuae9EEKM3y" # In our SARS-CoV-2 model, the geometry might improve if we change # ```diff # - rate = pyro.sample("rate", dist.Normal(rate_loc, rate_scale)) # # + rate = pyro.sample("rate", dist.Normal(0, 1)) * rate_scale + rate_loc # ``` # but that would make the model less interpretable. Instead we can reparametrize the model # + id="nct66uSCKM3y" reparam_model = poutine.reparam(model, config={"rate": LocScaleReparam()}) # + [markdown] id="Kx8Snxk2KM3y" # or even automatically apply a set of recommended reparameterizers # + id="DEaRa83fKM3z" reparam_model = AutoReparam()(model) # + [markdown] id="4M-04R17KM3z" # Let's try reparametrizing both sites "rate" and "init". Note we'll create a fresh `reparam_model` each time we train a guide, since the parameters are stored in that `reparam_model` instance. Take care to use the `reparam_model` in downstream prediction tasks like running `evaluate(reparam_model, guide)`. # + id="6s7Rfq6pKM3z" # %%time reparam_model = poutine.reparam( model, {"rate": LocScaleReparam(), "init": LocScaleReparam()} ) guide = AutoNormal(reparam_model, init_loc_fn=init_loc_fn, init_scale=0.01) fit_svi(reparam_model, guide, lr=0.05) pprint(evaluate(reparam_model, guide)) plot_volcano(guide) # + [markdown] id="blr6l-GLKM3z" # ## Customize the variational family # # When creating a new model, we recommend starting with mean field variational inference using an [AutoNormal]() guide. This mean field guide is good at finding the neighborhood of your model's mode, but naively it ignores correlations between latent variables. A first step in capturing correlations is to reparametrize the model as above: using a `LocScaleReparam` or `HaarReparam` (where appropriate) already allows the guide to capture some correlations among latent variables. # # The next step towards modeling uncertainty is to customize the variational family by trying other autoguides, building on [EasyGuide](), or creating a custom guide using Pyro primitives. We recommend increasing guide complexity gradually via these steps: # 1. Start with an [AutoNormal]() guide. # 2. Try [AutoLowRankMultivariateNormal](), which can model the principle components of correlated uncertainty. (For models with only ~100 latent variables you might also try [AutoMultivariateNormal]() or [AutoGaussian]()). # 3. Try combining multiple guides using [AutoGuideList](). For example if [AutoLowRankMultivariateNormal]() is too expensive for all the latent variables, you can use [AutoGuideList]() to combine an [AutoLowRankMultivariateNormal]() guide over a few top-level global latent variables, together with a cheaper [AutoNormal]() guide over more numerous local latent variables. # 4. Try using [AutoGuideList]() to combine a autoguide together with a custom guide function built using `pyro.sample`, `pyro.param`, and `pyro.plate`. Given a `partial_guide()` function that covers just a few latent variables, you can `AutoGuideList.append(partial_guide)` just as you append autoguides. # 5. Consider customizing one of Pyro's autoguides that leverage model structure, e.g. 
[AutoStructured](https://docs.pyro.ai/en/latest/infer.autoguide.html#autostructured), [AutoNormalMessenger](https://docs.pyro.ai/en/latest/infer.autoguide.html#autonormalmessenger), [AutoHierarchicalNormalMessenger](https://docs.pyro.ai/en/latest/infer.autoguide.html#autohierarchicalnormalmessenger) [AutoRegressiveMessenger](https://docs.pyro.ai/en/latest/infer.autoguide.html#autoregressivemessenger). # 6. For models with local correlations, consider building on [EasyGuide](https://docs.pyro.ai/en/latest/contrib.easyguide.html), a framework for building guides over groups of variables. # # While a fully-custom guides built from `pyro.sample` primitives offer the most flexible variational family, they are also the most brittle guides because each code change to the model or reparametrizer requires changes in the guide. The author recommends avoiding completely low-level guides and instead using `AutoGuide` or `EasyGuide` for at least some parts of the model, thereby speeding up model iteration. # + [markdown] id="zgMSoFIVKM3z" # Let's first try a simple `AutoLowRankMultivariateNormal` guide. # + id="btrEBvoCKM3z" # %%time reparam_model = poutine.reparam( model, {"rate": LocScaleReparam(), "init": LocScaleReparam()} ) guide = AutoLowRankMultivariateNormal( reparam_model, init_loc_fn=init_loc_fn, init_scale=0.01, rank=100 ) fit_svi(reparam_model, guide, num_steps=10, log_every=1, plot=False) # don't even bother to evaluate, since this is too slow. # + [markdown] id="BK3kgjhMT6ce" # Yikes! This is quite slow and sometimes runs out of memory on GPU. # # Let's make this cheaper by using `AutoGuideList` to combine an `AutoLowRankMultivariateNormal` guide over the most important variables `rate_scale`, `init_scale`, and `coef`, together with a simple cheap `AutoNormal` guide on the rest of the model (the expensive `rate` and `init` variables). The typical pattern is to create two views of the model with [poutine.block](https://docs.pyro.ai/en/stable/poutine.html#pyro.poutine.handlers.block), one exposing the target variables and the other hiding them. # + id="_jjmgrMqTu0w" # %%time reparam_model = poutine.reparam( model, {"rate": LocScaleReparam(), "init": LocScaleReparam()} ) guide = AutoGuideList(reparam_model) mvn_vars = ["coef", "rate_scale", "coef_scale"] guide.add( AutoLowRankMultivariateNormal( poutine.block(reparam_model, expose=mvn_vars), init_loc_fn=init_loc_fn, init_scale=0.01, ) ) guide.add( AutoNormal( poutine.block(reparam_model, hide=mvn_vars), init_loc_fn=init_loc_fn, init_scale=0.01, ) ) fit_svi(reparam_model, guide, lr=0.1) pprint(evaluate(reparam_model, guide)) plot_volcano(guide) # + [markdown] id="o4TcrOWvgX_7" # Next let's create a custom guide for part of the model, just the `rate` and `init` parts. Since we'll want to use this with reparametrizers, we'll make the guide use the auxiliary latent variables created by `poutine.reparam`, rather than the original `rate` and `init` variables. Let's see what these variables are named: # + id="ole0UzqnWY37" for name, site in poutine.trace(reparam_model).get_trace( dataset ).iter_stochastic_nodes(): print(name) # + [markdown] id="7lkacduNmnmu" # It looks like these new auxiliary variables are called `rate_decentered` and `init_decentered`. # + id="r5b8-WbUibg2" def local_guide(dataset): # Create learnable parameters. 
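    # Descriptive note on this custom guide: it covers only the auxiliary sites
    # "rate_decentered" and "init_decentered" introduced by LocScaleReparam. Each gets
    # an independent Normal whose loc/scale are registered with pyro.param, and the
    # extra "skew" parameter shifts init's location by a multiple of the sampled rate,
    # letting the guide capture one direction of per-(place, strain) correlation that
    # a plain AutoNormal would ignore.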
T, P, S = dataset["counts"].shape r_loc = pyro.param("rate_decentered_loc", lambda: torch.zeros(P, S)) i_loc = pyro.param("init_decentered_loc", lambda: torch.zeros(P, S)) skew = pyro.param("skew", lambda: torch.zeros(P, S)) # allows correlation r_scale = pyro.param("rate_decentered_scale", lambda: torch.ones(P, S), constraint=constraints.softplus_positive) i_scale = pyro.param("init_decentered_scale", lambda: torch.ones(P, S), constraint=constraints.softplus_positive) # Sample local variables inside plates. # Note plates are already created by the main guide, so we'll # use the existing plates rather than calling pyro.plate(...). with guide.plates["place"], guide.plates["strain"]: samples = {} samples["rate_decentered"] = pyro.sample( "rate_decentered", dist.Normal(r_loc, r_scale) ) i_loc = i_loc + skew * samples["rate_decentered"] samples["init_decentered"] = pyro.sample( "init_decentered", dist.Normal(i_loc, i_scale) ) return samples # + id="Muz3PsZ0k9az" # %%time reparam_model = poutine.reparam( model, {"rate": LocScaleReparam(), "init": LocScaleReparam()} ) guide = AutoGuideList(reparam_model) local_vars = ["rate_decentered", "init_decentered"] guide.add( AutoLowRankMultivariateNormal( poutine.block(reparam_model, hide=local_vars), init_loc_fn=init_loc_fn, init_scale=0.01, ) ) guide.add(local_guide) fit_svi(reparam_model, guide, lr=0.1) pprint(evaluate(reparam_model, guide)) plot_volcano(guide) # + [markdown] id="eeVU75KDlICi" # ## Conclusion # # We've seen how to use initialization, reparameterization, autoguides, and custom guides in a Bayesian workflow. For more examples of these pieces of machinery, we recommend exploring the Pyro codebase, e.g. [search for "poutine.reparam"](https://github.com/pyro-ppl/pyro/search?q=poutine.reparam&type=code) or ["init_loc_fn"](https://github.com/pyro-ppl/pyro/search?q=init_loc_fn&type=code) in the Pyro codebase. # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + import time import datetime import pandas import json import keyboard import random from subprocess import Popen # %cd G:/JupyterNotebooks/School Projects def collect_info(): categories = ["Name","Age","Gamer Status","Gamer Categories"] responses = [] responses.append(input("Please enter your name: ")) responses.append(input("Please enter your age in years: ")) gamer_question = input("Are you a Gamer? Y or N: ") responses.append(gamer_question) if gamer_question == "Y": gamer_categories = {1:"Action",2:"RPG",3:"Mobile Games"} print("What type of games do you play? \n 1. Action (eg: God of War, Call of Duty, Fortnite) \n 2. RPG \n 3. Mobile Games") gamer_type = input("Please enter the relevant number: ") responses.append(gamer_categories[int(gamer_type)]) else: responses.append["None"] #packaging user info into a dictionary data = dict(zip(categories,responses)) # return dictionary return(data) def record_times(round_limit=10): print("Thanks! 
Now we're going to test your reaction times!") i = 0 round_limit = round_limit your_list = [] while i < round_limit: print(str(round_limit-i),"rounds to go.") print("Get ready to hit the space bar!") time.sleep(random.uniform(2.0,6.0)) print("Now!") ph = Popen('{path} "./Just Do It_1.mp4" -autoexit -fs'.format(path=path)) #start timer time_start = time.time() #print("starting timer: ",time_start) #set up trigger while True: if keyboard.is_pressed("space"): print("You stopped the timer") break # stop timer time_end = time.time() #print("Timer stopped at ",time_end) #calculate duration duration = round(time_end - time_start,3) #print("Duration: ",duration) print("Your time:",str(duration)) your_list.append(duration) ph.wait() # increase i i += 1 return(your_list) def write_to_file(data): for each_time in range(len(data["Times"])): with open('names.csv','a',newline='') as csvfile: fieldnames = ["Name","Age","Gamer Status","Gamer Categories","Times"] writer = csv.DictWriter(csvfile,fieldnames=fieldnames) #writer.writeheader() writer.writerow({"Name":data["Name"],"Age":data["Age"],"Gamer Status":data["Gamer Status"],"Gamer Categories":data["Gamer Categories"],"Times":data["Times"][each_time]}) def record_data(): data = collect_info() data["Times"] = record_times() write_to_file(data) # - record_data() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tf] # language: python # name: conda-env-tf-py # --- # + from numpy.random import seed from tensorflow.compat.v1 import set_random_seed set_random_seed(2) seed(1) from tcn import TCN from tensorflow.keras.layers import Dense from tensorflow.keras import Input, Model from pathlib import Path import dask.array as da import zarr import pandas as pd import numpy as np import tensorflow as tf from tensorflow import keras from sklearn.model_selection import train_test_split from sklearn.preprocessing import robust_scale from sklearn.metrics import fbeta_score from tensorflow.keras import backend as K # - #Using fbeta because we want more False Positives than False Negatives def fbeta(y_true, y_pred, threshold_shift=0): beta = 2 # just in case of hipster activation at the final layer y_pred = K.clip(y_pred, 0, 1) # shifting the prediction threshold from .5 if needed y_pred_bin = K.round(y_pred + threshold_shift) tp = K.sum(K.round(y_true * y_pred_bin)) + K.epsilon() fp = K.sum(K.round(K.clip(y_pred_bin - y_true, 0, 1))) fn = K.sum(K.round(K.clip(y_true - y_pred, 0, 1))) precision = tp / (tp + fp) recall = tp / (tp + fn) beta_squared = beta ** 2 return (beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall) data_folder = Path("..", "data", "interim") train_dask = da.from_zarr(str(data_folder.joinpath("breath_data_train.zarr"))) train = train_dask.compute() # + #The TCN package expected 3 axes, even if it's 2D, so I added an extra X = train[:,:-2][:, :, np.newaxis] y = train[:, -2] # + model = keras.models.Sequential( [ TCN( input_shape=[7500, 1], kernel_size=2, activation="relu", dilations=[rate for rate in (1, 2, 4, 8) * 2], return_sequences=False, ), Dense(1, activation="sigmoid"), ] ) model.compile(optimizer="adam", loss="mse", metrics=["mse", "mae", "mape"]) history = model.fit(X, y, epochs=10, batch_size=2, validation_split=0.2,) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- # MIT License # # Copyright (c) 2021 and # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # ### For Optuna.version>0.14, some modifications needed. import os import numpy as np import sqlite3 from itertools import zip_longest ########## User defined parameters ######### dblist = [ "./data-directory/example_NMNIST-H.db", "./data-directory/example_UCF101.db", ] # + ########## User defined parameters ######### name_db = dblist[0] keys = ["learning_rate", "optimizer", "weight_decay", "dropout", "batch_size", "param_llr_loss", "order_sprt"] # For train_ti_nmnist-h.py. # Use this for "./data-directory/example_NMNIST-H.db". #keys = ["learning_rate", "optimizer", "weight_decay", "dropout", "batch_size", "param_llr_loss", "order_sprt", "beta"] # For train_ti_UCF101.py. # Use this for "./data-directory/example_UCF101.db". #keys = ["learning_rate", "optimizer", "weight_decay", "batch_size", "param_multLam", "order_sprt"] # For train_dre_nmnist-h.py. #keys = ["learning_rate", "optimizer", "weight_decay", "batch_size", "param_multLam", "order_sprt", "beta"] # For train_dre_UCF101.py. ############################################ end # nb_params = len(keys) maxnum_show = 1000 # Get connect cursor assert os.path.exists(name_db), name_db con = sqlite3.connect(name_db) c = con.cursor() # Get count count = c.execute("select count(value) from trials").fetchall()[0][0] raw = c.execute("select * from trials").fetchall() _count = count # Get and cleanse trial params trials = c.execute("select * from trials order by value asc").fetchall() trial_params = c.execute("select * from trial_params").fetchall() ls_idx = [] for trial in trials: if trial[2] == "FAIL": continue else: ls_idx.append([trial[0], trial[3]]) # [int, float] = [trial num, best value] if len(ls_idx) == maxnum_show: break ls_show = [] for iter_idx in ls_idx: for iter_params in trial_params: # Extract tirial params that showed good results if iter_params[1] == iter_idx[0]: ls_show.append([iter_idx[0], iter_idx[1], iter_params[2], iter_params[3]]) # Show search spaces, and some sanity checks of 'keys' print("Search Spaces") assert_set = set() for i in range(nb_params): print(trial_params[i][2:]) key = trial_params[i][2] assert key in keys, "Maybe there is another keyword that should be included in 'keys'. See 'trial_params'." 
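    # Descriptive note on this sanity check: trial_params rows are grouped by trial,
    # so the first nb_params rows should name each tuned hyperparameter exactly once.
    # The assert above fires when the study tuned a parameter that is missing from
    # 'keys'; the duplicate-key check just below fires when 'keys' contains an entry
    # the study never tuned, so nb_params overshoots the real search space.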
if assert_set >= {key}: raise ValueError("Maybe there is an irrelevant keyword in 'keys' (or a typo?).") assert_set.add(trial_params[i][2]) if not (assert_set >= {trial_params[i+1][2]}): raise ValueError("Maybe there are more keywords that should be included in 'keys'. See 'trial_params'.") # Show top parameters dc = dict() for key in keys: dc[key] = [] for itr_show in ls_show: for key in keys: if itr_show[-2] == key: dc[key].append(itr_show) # Show trial params ################### print("\nNum trials: {}".format(_count)) print("trial number, best value\n {}".format(keys)) for i in range(len(dc[keys[0]])): for j, key in enumerate(keys): result = dc[key] if result[i][1] is None: continue if j == 0: print("{:4d} {:.7f}".format(result[i][0], result[i][1]), " ", end="") print("{} ".format(result[i][3]), end="") if result[i][1] is not None: print("") # - # select * from trials print("raw trial data") raw # select * from trials order by value asc print("All trials") trials # select * from trial_params print("All trial parameters") trial_params[:] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Apply, applymap & map # Referencia: https://towardsdatascience.com/introduction-to-pandas-apply-applymap-and-map-5d3e044e93ff import numpy as np import pandas as pd # ## Series methods # ### Map # Map values of Series according to input correspondence. # # Used for substituting each value in a Series with another value, that may be derived from a function, a dict or a Series. # # **When arg is a dictionary, values in Series that are not in the dictionary (as keys) are converted to NaN.** s = pd.Series(['cat', 'dog', np.nan, 'rabbit']) s dictionary = {"cat": "kitten", "dog": "puppy"} s.map(dictionary) # # format a = "Dani" b = "Madrid" c = 15 print("{0} vive en {1}.\nThe Bridge está en el número {2}.".format(a,b,c)) lista = ['Gabriel', 'Clara', 'Borja', "Mónica"] lista_ciudades = ['Madrid', 'Paris'] lista_numeros = range(3) for nombre in lista: for ciudad in lista_ciudades: for numero in lista_numeros: print("{0} vive en {1}.\nThe Bridge está en el número {2}.".format(nombre, ciudad, numero)) break # ------------------------------------------------------------------ s.map('My favourite animal is {}'.format) s.map('My favourite animal is {}'.format, na_action='ignore') # + #dataframe example # - data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami', 'corned beef', 'Bacon', 'pastrami', 'honey ham', 'nova lox'], 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]}) meat_to_animal = { 'bacon': 'pig', 'pulled pork': 'pig', 'pastrami': 'cow', 'corned beef': 'cow', 'honey ham': 'pig', 'nova lox': 'salmon' } data['food'] # + # en dos pasos # - lowercased = data['food'].str.lower() lowercased data['animal'] = lowercased.map(meat_to_animal) # en un paso data['animal2'] = data['food'].map(lambda x: meat_to_animal[x.lower()]) data # ### Str # Vectorized string functions for Series and Index. 
#
# https://towardsdatascience.com/mastering-string-methods-in-pandas-8d3cd00b720d
#
# - [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html)
# - [`str.startswith`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html#pandas.Series.str.startswith)

print([i for i in dir(pd.Series.str) if not i.startswith("_")])

data.food.str[:-1]

# ### Apply
#
# Invoke function on values of Series.
#
# Can be ufunc (a NumPy function that applies to the entire Series) or a Python function that only works on single values.

chipo = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv", sep='\t')

chipo.columns

chipo['item_price'].str[1:].astype(float)

chipo.item_price.apply(lambda x: float(x[1:]))

# ## Dataframe methods
# ### Apply
# Apply a function along an axis of the DataFrame.
#
# Objects passed to the function are Series objects whose index is either the DataFrame's index (axis=0) or the DataFrame's columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument.

# **axis {0 or 'index', 1 or 'columns'}, default 0**
#
# The second parameter, axis, specifies which axis the function is applied to: 0 applies the function to each column and 1 applies it to each row.

df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [10, 20, 30, 40], "C": [20, 40, 60, 80]},
                  index=["Row 1", "Row 2", "Row 3", "Row 4"])
df

def suma_serie(row):
    return row.sum()

df.apply(suma_serie, axis=0)  # column-wise (vertical)

df.apply(suma_serie, axis=1)  # row-wise (horizontal)

df['D'] = df.apply(suma_serie, axis=1)
df

df['A']

df.apply(lambda x: x.mean(), axis=0)

df['E'] = df.apply(lambda x: x.mean(), axis=1)
df

# the operation is applied to the whole row
df.apply(lambda x: x**2, axis=1)

serie_cualquiera = pd.Series([1, 10, 20, 31, 15.5])

serie_cualquiera**2

serie_cualquiera.mean()

suma_serie(serie_cualquiera)

df.apply(suma_serie, axis=0)

# create another row
df.loc['Row nueva'] = df.apply(suma_serie, axis=0)
df

# applied to a single Series (column)
df['J'] = df['C'].apply(lambda x: x * 2)
df

# ### Applymap
# Apply a function to a Dataframe elementwise.
#
# This method applies a function that accepts and returns a scalar to every element of a DataFrame.
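# As a quick, self-contained illustration using the numeric `df` built above (so the
# lambda is safe for every cell): `applymap` receives each scalar individually, which
# makes it the natural fit for elementwise formatting, whereas `apply` receives whole
# rows or columns.
df.applymap(lambda x: f"{x:.1f}")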
df.loc['Row 1'].map(np.sum)

df.applymap(np.sum)

data.dtypes

data.head()

# +
# Be careful about applying the same function to every cell, including columns whose
# dtype is incompatible with that function.
# Example: len() on a cell of the float column "ounces".
# -

# solution 1: only apply the function to the non-float columns
data.select_dtypes(exclude=['float64']).applymap(lambda x: len(x))

# solution 2: drop the incompatible column first
data.drop("ounces", axis=1).applymap(lambda x: len(x))

# solution 3: make the function safe for every dtype
data.applymap(lambda x: len(str(x)))
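# One extra note: in recent pandas releases (2.1 and later) `DataFrame.applymap` was
# renamed to `DataFrame.map`, with `applymap` kept as a deprecated alias. If your
# pandas version is new enough, the last solution can equivalently be written as:
data.map(lambda x: len(str(x)))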
sDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterS
criptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJup
yterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderda
taJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarco
derdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredS
tarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilt
eredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedu
pFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFilteredStarcoderdataJupyterScriptsDedupFiltered����  �rD   �@   ( 00 8e�C�D�D �DZ@@@ "(.4:>DJPV\bhnsy��������������������� %+17=AGLRX^ciou{���������������������� $*06<BHNTZ`ekqw}���������������������� $*06<BHNSY_ekqw}���������������������� %+17=CHNTZ`flrw}���������������������� #)/5;AGMSY_ekqw}���������������������� %+17=CIOU[`flrx~���������������������� !'-39?EKQW]chntz���������������������� %+17=CIOU[agmsy���������������������     & , 1 7 = C I O U [ a g m s y  � � � � � � � � � � � � � � � � � � � � � �      % + 1 7 = C I O U [ a f k q w } � � � � � � � � � � � � � � � � � � � � � �     # ( . 4 9 ? E I O U [ a f l r x ~ � � � � � � � � � � � � � � � � � � � � � �     " ( . 4 : @ F L R X ^ d h n t z � � � � � � � � � � � � � � � � � � � � � �     " ( 37067537330918848776141074916436447103675729942745842800516809488740779693801713721992301821674151441203650491596427860268331481787204224243405905324904080871504043003103678617255133435337225279767255811843728829923169734110054335751186576271503157473390345674936224646354860483112901662450162365541179955798401663278375721299891359694322699592585365840375790872159889044365254607620941499561698004738222058511983096412965890714645818273939279866488354035268453734718131649126371527394354753715039840482709879778279344141675548238978403655873715798610268758119590961662524187611884462168045548721892138123739542221803124523748251188239381606650056795402266263583895143124695252245987453423444691420705691867499161369743376201972221668358595274379006803154100279537109692680679588646476021210678175572174961262341638422834843291322982654516460061317232139000624692119786446619206766822856597945110815110868565954112013181279135792350121957953004202741666246658726632786719636265877743403986886592337973181184522916158679281097756090590702622213611326910901811517844719450281123121721189534276164401753224698413187984394654068962304777169939542711560764486877989833713353394547091887787351411285984952131580112119397370906620611605817093852170093750466119103416826364293101203955907114176286439059192506917342902571213058029007781196377651417995809022435502242910344062111570552695533407072179395394276517278744823130944129676314319833883782311704628858860084213227730458511656881162767573365818211533508629421626935444749658093686726162872758717606058637295508897968842323736935657391072866386126746355165709276380068206455829431032563354521975967309866980255904870857838513777564426090109197667090424352613169766771677213601517849669419036425135865004604123189044409140570153853594118395695797622890153769821129019123876259617273582962292692845737816812352030511464482750657676871275238191677294631952579008551
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="XjQxmm04iTx4" tags=[]
# Open In Colab

# + [markdown] id="4z0JDgisPRr-"
# # MMClassification tools tutorial on Colab
#
# In this tutorial, we will introduce the following content:
#
# * How to install MMCls
# * Prepare data
# * Prepare the config file
# * Train and test a model with shell commands

# + [markdown] id="inm7Ciy5PXrU"
# ## Install MMClassification
#
# Before using MMClassification, we need to prepare the environment with the following steps:
#
# 1. Install Python, CUDA, a C/C++ compiler and git
# 2. Install PyTorch (CUDA version)
# 3. Install mmcv
# 4. Clone the mmcls source code from GitHub and install it
#
# Because this tutorial runs on Google Colab, where the basic environment is already set up, we can skip the first two steps.
# + [markdown] id="TDOxbcDvPbNk"
# ### Check environment

# + colab={"base_uri": "https://localhost:8080/"} id="c6MbAw10iUJI" outputId="8d3d6b53-c69b-4425-ce0c-bfb8d31ab971"
# %cd /content

# + colab={"base_uri": "https://localhost:8080/"} id="4IyFL3MaiYRu" outputId="c46dc718-27de-418b-da17-9d5a717e8424"
# !pwd

# + colab={"base_uri": "https://localhost:8080/"} id="DMw7QwvpiiUO" outputId="0d852285-07c4-48d3-e537-4a51dea04d10"
# Check nvcc version
# !nvcc -V

# + colab={"base_uri": "https://localhost:8080/"} id="4VIBU7Fain4D" outputId="fb34a7b6-8eda-4180-e706-1bf67d1a6fd4"
# Check GCC version
# !gcc --version

# + colab={"base_uri": "https://localhost:8080/"} id="24lDLCqFisZ9" outputId="304ad2f7-a9bb-4441-d25b-09b5516ccd74"
# Check PyTorch installation
import torch, torchvision
print(torch.__version__)
print(torch.cuda.is_available())

# + [markdown] id="R2aZNLUwizBs"
# ### Install MMCV
#
# MMCV is the basic package of all OpenMMLab packages. We have pre-built wheels on Linux, so we can download and install them directly.
#
# Please pay attention to the PyTorch and CUDA versions, which must match the wheel.
#
# In the steps above, we checked the versions of PyTorch and CUDA, and they are 1.9.0 and 11.1 respectively, so we need to choose the corresponding wheel.
#
# In addition, we can also install the full version of mmcv (mmcv-full). It includes full features and various CUDA ops out of the box, but needs a longer time to build.

# + colab={"base_uri": "https://localhost:8080/"} id="nla40LrLi7oo" outputId="a17d50d6-05b7-45d6-c3fb-6a2507415cf5"
# Install mmcv
# !pip install mmcv -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
# # !pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
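# + [markdown]
# The find-links URL above encodes the CUDA and PyTorch versions. As a rough sketch
# (assuming the URL pattern used in the command above; check the MMCV install docs if in
# doubt), the URL can be built from the versions detected earlier:

# +
import torch

cu_version = 'cu' + torch.version.cuda.replace('.', '')  # e.g. 'cu111' (None on CPU-only builds)
torch_version = 'torch' + '.'.join(torch.__version__.split('+')[0].split('.')[:2]) + '.0'  # e.g. 'torch1.9.0'
find_links = f'https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html'
print(find_links)  # pass this to `pip install mmcv -f ...`
# -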
# + [markdown] id="GDTUrYvXjlRb"
# ### Clone and install MMClassification
#
# Next, we clone the latest mmcls repository from GitHub and install it.

# + colab={"base_uri": "https://localhost:8080/"} id="Bwme6tWHjl5s" outputId="7e2d54c8-b134-405a-b014-194da1708776"
# Clone mmcls repository
# !git clone https://github.com/open-mmlab/mmclassification.git
# %cd mmclassification/

# Install MMClassification from source
# !pip install -e .

# + colab={"base_uri": "https://localhost:8080/"} id="hFg_oSG4j3zB" outputId="1cc74bac-f918-4f0e-bf56-9f13447dfce1"
# Check MMClassification installation
import mmcls
print(mmcls.__version__)

# + [markdown] id="HCOHRp3iV5Xk"
# ## Prepare data

# + colab={"base_uri": "https://localhost:8080/"} id="XHCHnKb_Qd3P" outputId="35496010-ee57-4e72-af00-2af55dc80f47"
# Download the dataset (cats & dogs dataset)
# !wget https://www.dropbox.com/s/wml49yrtdo53mie/cats_dogs_dataset_reorg.zip?dl=0 -O cats_dogs_dataset.zip
# !mkdir -p data
# !unzip -q cats_dogs_dataset.zip -d ./data/

# + [markdown] id="e4t2P2aTQokX"
# **After downloading and extracting,** we get the "Cats and Dogs Dataset", and the file structure is as below:
# ```
# data/cats_dogs_dataset
# ├── classes.txt
# ├── test.txt
# ├── val.txt
# ├── training_set
# │   ├── training_set
# │   │   ├── cats
# │   │   │   ├── cat.1.jpg
# │   │   │   ├── cat.2.jpg
# │   │   │   ├── ...
# │   │   ├── dogs
# │   │   │   ├── dog.2.jpg
# │   │   │   ├── dog.3.jpg
# │   │   │   ├── ...
# ├── val_set
# │   ├── val_set
# │   │   ├── cats
# │   │   │   ├── cat.3.jpg
# │   │   │   ├── cat.5.jpg
# │   │   │   ├── ...
# │   │   ├── dogs
# │   │   │   ├── dog.1.jpg
# │   │   │   ├── dog.6.jpg
# │   │   │   ├── ...
# ├── test_set
# │   ├── test_set
# │   │   ├── cats
# │   │   │   ├── cat.4001.jpg
# │   │   │   ├── cat.4002.jpg
# │   │   │   ├── ...
# │   │   ├── dogs
# │   │   │   ├── dog.4001.jpg
# │   │   │   ├── dog.4002.jpg
# │   │   │   ├── ...
# ```
#
# You can use the shell command `tree data/cats_dogs_dataset` to check the structure.

# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="46tyHTdtQy_Z" outputId="6124a89e-03eb-4917-a0bf-df6a391eb280"
# Pick an image and visualize it
from PIL import Image
Image.open('data/cats_dogs_dataset/training_set/training_set/cats/cat.1.jpg')

# + [markdown] id="My5Z6p7pQ3UC"
# ### Support new dataset
#
# We have two methods to support a new dataset in MMClassification.
#
# The simplest method is to re-organize the new dataset in the format of a dataset supported officially (like ImageNet). The other method is to create a new dataset class; more details are in [the docs](https://mmclassification.readthedocs.io/en/latest/tutorials/new_dataset.html#an-example-of-customized-dataset).
#
# In this tutorial, for convenience, we have re-organized the cats & dogs dataset in the ImageNet format.

# + [markdown] id="P335gKt9Q5U-"
# Besides the image files, it also includes the following files:
#
# 1. A class list file, where every line is a class name.
# ```
# cats
# dogs
# ```
# 2. Training / validation / test annotation files, where every line includes a file path and the corresponding label.
#
# ```
# ...
# cats/cat.3769.jpg 0
# cats/cat.882.jpg 0
# ...
# dogs/dog.3881.jpg 1
# dogs/dog.3377.jpg 1
# ...
# ```
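# + [markdown]
# The annotation files for this dataset are already provided. As an illustration only, here is
# a minimal sketch of how such an ImageNet-style annotation file could be generated from the
# folder layout above (hypothetical helper, not part of MMClassification):

# +
import os

def write_annotation_file(data_prefix, classes, out_path):
    """Write one '<relative path> <label index>' line per image."""
    with open(out_path, 'w') as f:
        for label, cls in enumerate(classes):
            cls_dir = os.path.join(data_prefix, cls)
            for name in sorted(os.listdir(cls_dir)):
                if name.endswith('.jpg'):
                    f.write(f'{cls}/{name} {label}\n')

# Example call (would overwrite the provided file, so keep it commented out):
# write_annotation_file('data/cats_dogs_dataset/val_set/val_set', ['cats', 'dogs'],
#                       'data/cats_dogs_dataset/val.txt')
# -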
# + [markdown] id="BafQ7ijBQ8N_"
# ## Train and test a model with shell commands
#
# You can use the shell commands provided by MMClassification to do the following tasks:
#
# 1. Train a model
# 2. Fine-tune a model
# 3. Test a model
# 4. Inference with a model
#
# The procedure to train and to fine-tune a model is almost the same, and we have already introduced how to do these tasks with the Python API. In the following, we will introduce how to do them with shell commands. More details are in [the docs](https://mmclassification.readthedocs.io/en/latest/getting_started.html).

# + [markdown] id="Aj5cGMihURrZ"
# ### Fine-tune a model
#
# The steps to fine-tune a model are as below:
#
# 1. Prepare the custom dataset.
# 2. Create a new config file for the task.
# 3. Start the training task with shell commands.
#
# We have finished the first step; next we introduce the remaining two steps.

# + [markdown] id="WBBV3aG79ZH5"
# #### Create a new config file
#
# To reuse the common parts of different config files, we support inheriting multiple base config files. For example, to fine-tune a MobileNetV2 model, the new config file can create the model's basic structure by inheriting `configs/_base_/models/mobilenet_v2_1x.py`.
#
# Following common practice, we usually split the whole config into four parts: model, dataset, learning rate schedule, and runtime. The configs of each part are saved as one file in the `configs/_base_` folder.
#
# Then, when creating a new config file, we can select some parts to inherit and only override the configs that differ.
#
# The head of the final config file should look like:
#
# ```python
# _base_ = [
#     '../_base_/models/mobilenet_v2_1x.py',
#     '../_base_/schedules/imagenet_bs256_epochstep.py',
#     '../_base_/default_runtime.py'
# ]
# ```
#
# Here, because the dataset configs are almost brand new, we don't need to inherit any dataset config file.
#
# Of course, you can also create an entire config file without inheritance, like `configs/mnist/lenet5.py`.

# + [markdown] id="_UV3oBhLRG8B"
# After that, we only need to set the parts of the config we want to modify, because the inherited configs will be merged into the final config.

# + colab={"base_uri": "https://localhost:8080/"} id="8QfM4qBeWIQh" outputId="a826f0cf-2633-4a9a-e49b-4be7eca5e3a0"
# %%writefile configs/mobilenet_v2/mobilenet_v2_1x_cats_dogs.py
_base_ = [
    '../_base_/models/mobilenet_v2_1x.py',
    '../_base_/schedules/imagenet_bs256_epochstep.py',
    '../_base_/default_runtime.py'
]

# ---- Model configs ----
# Here we use init_cfg to load a pre-trained model.
# In this way, only the weights of the backbone will be loaded.
# We also modify num_classes to match our dataset.
model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',
            checkpoint='https://download.openmmlab.com/mmclassification/v0/mobilenet_v2/mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth',
            prefix='backbone')
    ),
    head=dict(
        num_classes=2,
        topk=(1, )
    ))

# ---- Dataset configs ----
# We re-organized the dataset in ImageNet format.
dataset_type = 'ImageNet'
img_norm_cfg = dict(
    mean=[124.508, 116.050, 106.438],
    std=[58.577, 57.310, 57.437],
    to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224, backend='pillow'),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=(256, -1), backend='pillow'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='Collect', keys=['img'])
]
data = dict(
    # Specify the batch size and number of workers in each GPU.
    # Please configure it according to your hardware.
    samples_per_gpu=32,
    workers_per_gpu=2,
    # Specify the training dataset type and path
    train=dict(
        type=dataset_type,
        data_prefix='data/cats_dogs_dataset/training_set/training_set',
        classes='data/cats_dogs_dataset/classes.txt',
        pipeline=train_pipeline),
    # Specify the validation dataset type and path
    val=dict(
        type=dataset_type,
        data_prefix='data/cats_dogs_dataset/val_set/val_set',
        ann_file='data/cats_dogs_dataset/val.txt',
        classes='data/cats_dogs_dataset/classes.txt',
        pipeline=test_pipeline),
    # Specify the test dataset type and path
    test=dict(
        type=dataset_type,
        data_prefix='data/cats_dogs_dataset/test_set/test_set',
        ann_file='data/cats_dogs_dataset/test.txt',
        classes='data/cats_dogs_dataset/classes.txt',
        pipeline=test_pipeline))

# Specify the evaluation metric
evaluation = dict(metric='accuracy', metric_options={'topk': (1, )})

# ---- Schedule configs ----
# Usually in fine-tuning, we need a smaller learning rate and fewer training epochs.
# Specify the learning rate
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# Set the learning rate scheduler
lr_config = dict(policy='step', step=1, gamma=0.1)
runner = dict(type='EpochBasedRunner', max_epochs=2)

# ---- Runtime configs ----
# Output a training log every 10 iterations.
log_config = dict(interval=10)
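# + [markdown]
# To see how the inherited `_base_` files and the overrides above merge, the final config can
# be loaded and printed. A quick check (a small sketch; `Config` is mmcv's config utility):

# +
from mmcv import Config

cfg = Config.fromfile('configs/mobilenet_v2/mobilenet_v2_1x_cats_dogs.py')
print(cfg.pretty_text)  # the fully merged configuration
# -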
# + [markdown] id="chLX7bL3RP2F"
# #### Use a shell command to start fine-tuning
#
# We use `tools/train.py` to fine-tune a model:
#
# ```shell
# python tools/train.py ${CONFIG_FILE} [optional arguments]
# ```
#
# If you want to specify another folder to save log files and checkpoints, use the argument `--work-dir ${YOUR_WORK_DIR}`.
#
# If you want to ensure reproducibility, use the argument `--seed ${SEED}` to set a random seed. The argument `--deterministic` additionally enables the deterministic option in cuDNN to further ensure reproducibility, but it may reduce the training speed.
#
# Here we use the `MobileNetV2` model and the cats & dogs dataset as an example:

# + colab={"base_uri": "https://localhost:8080/"} id="gbFGR4SBRUYN" outputId="3412752c-433f-43c5-82a9-3495d1cd797a"
# !python tools/train.py \
#     configs/mobilenet_v2/mobilenet_v2_1x_cats_dogs.py \
#     --work-dir work_dirs/mobilenet_v2_1x_cats_dogs \
#     --seed 0 \
#     --deterministic

# + [markdown] id="m_ZSkwB5Rflb"
# ### Test a model
#
# We use `tools/test.py` to test a model:
#
# ```
# python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
# ```
#
# Here are some optional arguments:
#
# - `--metrics`: The evaluation metrics. The available choices are defined in the dataset class. Usually, you can specify "accuracy" to evaluate a single-label classification task.
# - `--metric-options`: Extra options passed to the metrics. For example, by specifying "topk=1", the "accuracy" metric will calculate top-1 accuracy.
#
# More details are in the help docs of `tools/test.py`.
#
# Here we still use the `MobileNetV2` model we fine-tuned.

# + colab={"base_uri": "https://localhost:8080/"} id="Zd4EM00QRtyc" outputId="8788264f-83df-4419-9748-822c20538aa7"
# !python tools/test.py configs/mobilenet_v2/mobilenet_v2_1x_cats_dogs.py work_dirs/mobilenet_v2_1x_cats_dogs/latest.pth --metrics accuracy --metric-options topk=1

# + [markdown] id="IwThQkjaRwF7"
# ### Inference with a model
#
# Sometimes we want to save the inference results on a dataset; just use the command below.
#
# ```shell
# python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]
# ```
#
# Arguments:
#
# - `--out`: The output filename. If not specified, the inference results won't be saved. It supports json, pkl and yml.
# - `--out-items`: Which items will be saved. You can choose some of "class_scores", "pred_score", "pred_label" and "pred_class", or use "all" to select all of them.
#
# These items mean:
# - `class_scores`: The score of every class for each sample.
# - `pred_score`: The score of the predicted class for each sample.
# - `pred_label`: The label of the predicted class for each sample. It reads the label string of each class from the model; if the label strings are not saved, it falls back to the ImageNet labels.
# - `pred_class`: The id of the predicted class for each sample. It's a group of integers.
# - `all`: Save all of the items above.
# - `none`: Don't save any of the items above. The output file also saves the metrics besides the inference results, so if you want to save only the metrics, you can use this option to reduce the output file size.

# + colab={"base_uri": "https://localhost:8080/"} id="6GVKloPHR0Fn" outputId="4f4cd414-1be6-4e17-985f-6449b8a3d9e8"
# !python tools/test.py configs/mobilenet_v2/mobilenet_v2_1x_cats_dogs.py work_dirs/mobilenet_v2_1x_cats_dogs/latest.pth --out results.json --out-items all

# + [markdown] id="G0NJI1s6e3FD"
# All inference results are saved in the output json file, and you can read it.
# + colab={"base_uri": "https://localhost:8080/", "height": 370} id="HJdJeLUafFhX" outputId="7614d546-7c2f-4bfd-ce63-4c2a5228620f" import json with open("./results.json", 'r') as f: results = json.load(f) # Show the inference result of the first image. print('class_scores:', results['class_scores'][0]) print('pred_class:', results['pred_class'][0]) print('pred_label:', results['pred_label'][0]) print('pred_score:', results['pred_score'][0]) Image.open('data/cats_dogs_dataset/training_set/training_set/cats/cat.1.jpg') # + [markdown] id="1bEUwwzcVG8o" # You can also use the visualization API provided by MMClassification to show the inference result. # + id="BcSNyvAWRx20" colab={"base_uri": "https://localhost:8080/", "height": 304} outputId="0d68077f-2ec8-4f3d-8aaa-18d4021ca77b" from mmcls.core.visualization import imshow_infos filepath = 'data/cats_dogs_dataset/training_set/training_set/cats/cat.1.jpg' result = { 'pred_class': results['pred_class'][0], 'pred_label': results['pred_label'][0], 'pred_score': results['pred_score'][0], } img = imshow_infos(filepath, result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''fastai'': conda)' # language: python # name: python38564bitfastaicondad52d12c5a30a4725bf9d3e235cf1271c # --- # %load_ext autoreload # %autoreload 2 # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" from fastai.vision.all import * import sklearn.metrics as skm from tqdm.notebook import tqdm import sklearn.feature_extraction.text from transformers import AutoModelForMaskedLM, AutoConfig, BertTokenizer, LineByLineTextDataset from shopee_utils import * from train_utils import * import codecs from torch.utils.data.dataset import Dataset # - BERT_CONFIG_PATH = './indobert-large-p2' PATH = Path('/home/slex/data/shopee') train_df = pd.read_csv(PATH/'train.csv') model = AutoModelForMaskedLM.from_pretrained(BERT_CONFIG_PATH) tokenizer = BertTokenizer.from_pretrained(BERT_CONFIG_PATH) class TextDataset(Dataset): def __init__(self, tokenizer, lines, block_size): batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size) self.examples = batch_encoding["input_ids"] self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples] def __len__(self): return len(self.examples) def __getitem__(self, i): return self.examples[i] dataset=TextDataset(tokenizer, train_df.title.to_list(), 128) tokenizer.decode(dataset[0]['input_ids']) # + from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) # + from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./LanguageModel", overwrite_output_dir=True, num_train_epochs=20, per_gpu_train_batch_size=32, save_steps=10_000, save_total_limit=2, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) # - trainer.train() trainer.save_model('./finetuned_indobert-large-p2') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exploratory Data Analysis import pandas as pd import numpy as np # there are three main files: 
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Exploratory Data Analysis

import pandas as pd
import numpy as np

# There are three main files:
# - opportunities
# - quotes
# - leads
#
# leads is for testing purposes.

##############################
# load opportunities
opportunities = pd.read_csv('opportunities.csv')

opportunities.shape

opportunities.columns

opportunities.head(10)

opportunities.Stage.value_counts()

# 1686 with no address
opportunities[opportunities['Mailing Address Line 1'].isnull()].shape

opportunities[opportunities['Billing Address Line 1'].isnull() == False].head()

opportunities[opportunities['Mailing Address Line 1'].isnull()].Stage.value_counts()

opportunities[opportunities['Mailing Address Line 1'].isnull() == False].Stage.value_counts()

#####################################
# load quotes
quotes = pd.read_csv('quotes.csv')

quotes.shape

quotes.head()

quotes.Stage.value_counts()

quotes[quotes['Bill To Street'].isnull() == False]

quotes.columns

#########################################
# How many Opportunity records are in quotes?
opportunities.merge(quotes, on='Opportunity ID', how='inner').shape

# look at the matching records, see if they make sense
opportunities.merge(quotes, on='Opportunity ID')

# looks ok, but a surprising number have
quotes[quotes['Opportunity ID'] == '0061a00000G3jSG']

# +
# what about the ones without addresses
# -

opp_no_address = opportunities[opportunities['Billing Address Line 1'].isnull()]
opp_no_address.shape

opp_no_address.merge(quotes, on='Opportunity ID').shape

# total missing addresses:
print(2499 - 1322 + 230)

################################
# load leads
leads = pd.read_csv('leads.csv')

leads.head()

leads.shape

leads[leads['Street'].isnull()]

leads['Lead Status'].value_counts().head(8)

leads['Lead Source'].value_counts()

leads.columns

quotes.columns

# +
########################
# merge quotes and opportunities
################################
# -

# concatenate quote address
quotes['Address'] = quotes['Bill To Street']+' '+quotes['Bill To City']+' ' + \
    quotes['Bill To Country']+' '+quotes['Bill To State/Province'] + \
    ' '+quotes['Bill To Zip/Postal Code']

# +
# quotes.head()
# -

opportunities.columns

# concatenate opportunities address
# opportunities['']+' '+
# not using this because we don't want to create incomplete addresses
# opportunities['Billing Address Line 1']+' '+opportunities['Billing Address Line 2']+' '+opportunities['Billing Address Line 3']
opportunities['Address'] = np.nan

# +
# opportunities.head()

# +
# change both data frames to have the same column names
# new columns = id, name, close_date, stage, address
# -

new_cols = ['id', 'name', 'close_date', 'stage', 'address']
opp = opportunities[['Opportunity ID', 'Opportunity Name', 'Close Date', 'Stage', 'Address']]
opp.columns = new_cols

# +
# opp.head()
# -

quo = quotes[['Opportunity ID', 'Opportunity Name', 'Close Date', 'Stage', 'Address']]
quo.columns = new_cols

# +
# quo.head()
# -

# from opp, remove records that are in quo
def in_quo(id):
    return id in quo.id.values

opp.id.apply(in_quo).value_counts()

opp2 = opp[opp.id.apply(in_quo) == False]

# +
# opp2.head()
# -

# append the two dataframes
df = quo.append(opp2, ignore_index=True)
df.shape

# +
# check for duplicates
# -

df.id.value_counts().head(10)

df[df.id == '0061a00000KN2II']

df.shape

df.drop_duplicates().shape

df.drop_duplicates(inplace=True)
df.shape

# +
# about
df.id.value_counts().head(100)  # about 90 records w/ value count > 1
# -

df.drop_duplicates(subset=['id'], keep='last', inplace=True)
df.shape
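# +
# Note: DataFrame.append is deprecated in newer pandas versions. An equivalent of the
# append + dedup steps above using pd.concat (a small sketch, kept separate from df):
df_alt = pd.concat([quo, opp2], ignore_index=True).drop_duplicates(subset=['id'], keep='last')
df_alt.shape
# -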
print('total:', df.shape[0])
print('with full address:', df[df['address'].isnull() == False].shape[0])
print('missing/incomplete addresses:', df[df['address'].isnull() == True].shape[0])

leads[leads['Lead ID'] == '0061a00000GFPd7']

#!/usr/bin/env python
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Machine Learning Example
#
# Python macro for classifying events (in 2D) using Machine Learning (based on SciKitLearn). The 2D case is chosen so that the data and solutions can be visually inspected and evaluated. Several methods are illustrated.
#
# Note: This exercise includes two additional packages to those originally required (ipywidgets and scikitlearn).
#
# ***
#
# ### Authors:
# - (Niels Bohr Institute)
# - (Niels Bohr Institute)
#
# ### Date:
# - 15-12-2021 (latest update)

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive  # To make plots interactive

# And set the parameters of the notebook:

# +
r = np.random
r.seed(42)

export_tree = True
plot_fisher_discriminant = False
test_point = np.array([0, 0.5]).reshape(1, -1)
# -

# ## Functions:
#
# First we define `get_corr_pars`, which generates (linearly) correlated numbers:

def get_corr_pars(mu1, mu2, sig1, sig2, rho12):
    # Transform to correlated random numbers (see Barlow page 42-44):
    # This is nothing more than a matrix multiplication written out!!!
    # Note that the absolute value is taken before the square root to avoid sqrt(x) with x<0.
    theta = 0.5 * np.arctan(2.0 * rho12 * sig1 * sig2 / (sig1**2 - sig2**2))
    sigu = np.sqrt(np.abs(((sig1*np.cos(theta))**2 - (sig2*np.sin(theta))**2) / (np.cos(theta)**2 - np.sin(theta)**2)))
    sigv = np.sqrt(np.abs(((sig2*np.cos(theta))**2 - (sig1*np.sin(theta))**2) / (np.cos(theta)**2 - np.sin(theta)**2)))

    u = r.normal(0.0, sigu)
    v = r.normal(0.0, sigv)

    # Transform into (possibly) correlated random Gaussian numbers x1 and x2:
    x1 = mu1 + np.cos(theta)*u - np.sin(theta)*v
    x2 = mu2 + np.sin(theta)*u + np.cos(theta)*v
    return x1, x2
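# A quick check of `get_corr_pars` (a small sketch, not part of the original exercise):
# draw many pairs and verify that the sample correlation is close to the requested rho12.

# +
pairs = np.array([get_corr_pars(0.0, 0.0, 1.0, 2.0, 0.7) for _ in range(5000)])
print(np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])  # should be roughly 0.7
# -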
s=150, marker='*') # set the legend on ax[0] ax[0].legend() # set up ax[1]: # compute y prediction probabilities: y_predicted_proba = classifier.predict_proba(X)[:, 1] # Compute ROC curve and ROC area FPR, TPR, _ = roc_curve(y, y_predicted_proba) roc_auc = auc(FPR, TPR) lw = 2 # plot the ROC curve ax[1].plot(FPR, TPR, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % roc_auc) ax[1].plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') ax[1].set(xlim=[-0.01, 1.0], ylim=[-0.01, 1.05], xlabel='False Positive Rate', ylabel='True Positive Rate') ax[1].legend(loc="lower right") if title: ax[0].set(title=title) ax[1].set(title=title) return fig, ax # - # Below we define `animate_ML_estimator_generator` which takes an estimator, fits it given the specified keywords and plots the decision regions: def animate_ML_estimator_generator(clf, title, X, y, **kwargs): estimator = clf(**kwargs) estimator = estimator.fit(X, y) plot_decision_regions(X, y, classifier=estimator, title=title.capitalize()) # ## Generate and plot data: # # Set number of data points to generate and parameters: # + N = 10000 mu_sig = [0.0, 1.0] sigma_sig = [1.5, 2.0] rho_sig = 0.87 mu_bkg = [-2.0, 4.0] sigma_bkg = [1.5, 2.0] rho_bkg = 0.82 # - # Generate correlated parameters: # + # The desired covariance matrix. V_sig = np.array([[sigma_sig[0]**2, sigma_sig[0]*sigma_sig[1]*rho_sig], [sigma_sig[0]*sigma_sig[1]*rho_sig, sigma_sig[1]**2]]) V_bkg = np.array([[sigma_bkg[0]**2, sigma_bkg[0]*sigma_bkg[1]*rho_bkg], [sigma_bkg[0]*sigma_bkg[1]*rho_bkg, sigma_bkg[1]**2]]) # Generate the random samples. sig = np.random.multivariate_normal(mu_sig, V_sig, size=N) bkg = np.random.multivariate_normal(mu_bkg, V_bkg, size=N) # - # Plot the generated random numbers: # + Nbins = 100 fig, ax = plt.subplots(1, 4, figsize=(18, 5)) ax[0].hist(sig[:, 0], Nbins, histtype='step', label='sig', color='blue') ax[0].hist(bkg[:, 0], Nbins, histtype='step', label='bkg', color='red') ax[0].set(xlabel='Variable A', ylabel='Counts', title='Histogram of Variable A') ax[0].legend() ax[1].hist(sig[:, 1], Nbins, histtype='step', label='sig', color='blue') ax[1].hist(bkg[:, 1], Nbins, histtype='step', label='bkg', color='red') ax[1].set(xlabel='Variable B', ylabel='Counts', title='Histogram of Variable B') ax[1].legend() ax[2].scatter(sig[:, 0], sig[:, 1], s=4, c='blue', label='sig', alpha=0.5) ax[2].scatter(bkg[:, 0], bkg[:, 1], s=4, c='red', label='bkg', alpha=0.5) ax[2].set(xlabel='Variable A', ylabel='Variable B', title='Scatterplot of Variable A and B') ax[2].legend(); # - # ## Fishers Discriminant: # How to perform a __[Fishers Linear Discimant Analysis](https://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html)__ in Scikit Learn. # # # # First load the proper package: from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA # Then we convert the two numpy arrays one big one called `X`. 
`y` denotes the class (1 for signal, 0 for background) # + X = np.vstack((sig, bkg)) y = np.zeros(2*N) y[:N] = 1 # - # Now do the Linear Discriminant Analysis (LDA): # + # initialise the LDA method sklearn_lda = LDA() # fit the data sklearn_lda.fit(X, y) # transform the data X_lda_sklearn = sklearn_lda.transform(X) print(f"LDA coefficients: {sklearn_lda.scalings_}") # - # Extract the tranformed variables sig_lda = X_lda_sklearn[y == 1] bkg_lda = X_lda_sklearn[y == 0] # and plot it on the figure: # + ax[3].hist(sig_lda, Nbins, histtype='step', label='sig', color='red') ax[3].hist(bkg_lda, Nbins, histtype='step', label='bkg', color='blue') ax[3].set(xlabel='Fisher Discriminant', ylabel='Counts', title='Fishers discriminant') ax[3].legend() fig.tight_layout() fig # - # # What if the dataset was completely different? # # Now we want to eksamine the dataset given in `DataSet_ML.txt`. First we load it, extract the relevant data and plot it: # # + # Load the data: data = np.loadtxt('DataSet_ML.txt') N = len(data) Nbins = 50 # Divide data into input variables (X) and "target" variable (y) X = data[:, :2] y = data[:, 2] # As signal and background: sig = X[y == 1] bkg = X[y == 0] fig2, ax2 = plt.subplots(1, 3, figsize=(20, 8)) ax2[0].hist(sig[:, 0], Nbins, histtype='step', label='sig', color='blue') ax2[0].hist(bkg[:, 0], Nbins, histtype='step', label='bkg', color='red') ax2[0].set(xlabel='Variable A', ylabel='Counts', title='Histogram of Variable A') ax2[0].legend() ax2[1].hist(sig[:, 1], Nbins, histtype='step', label='sig', color='blue') ax2[1].hist(bkg[:, 1], Nbins, histtype='step', label='bkg', color='red') ax2[1].set(xlabel='Variable B', ylabel='Counts', title='Histogram of Variable B') ax2[1].legend(); # - # # TASK: # 1. Think about how you think the above data looks in 2D before you continue. # 2. Draw on a piece of paper, what you think it looks like in 2D before you continue. # # When you have talked with your collaborators you should uncomment the below five lines. # + # ax2[2].scatter(sig[:, 0], sig[:, 1], s=4, c='blue', label='sig', alpha=0.5) # ax2[2].scatter(bkg[:, 0], bkg[:, 1], s=4, c='red', label='bkg', alpha=0.5) # ax2[2].set(xlabel='Variable A', ylabel='Variable B', title='Scatterplot of Variable A and B') # ax2[2].legend() # fig2 # - # # Interactive Machine Learning Part # # In this part we will further investigate different standard ML models. # # ## Linear Fisher Discriminant: # # First we look at __[Linear Fisher Discriminant](https://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html)__. This is the same as we saw above so we will go through this example quite quickly (also because in 2D there are no interesting hyperparameters to change). 
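# Before refitting it on this dataset, a quick look at what the printed "LDA coefficients"
# (`scalings_`) actually are: the projection direction, so the Fisher discriminant is just
# the centred data projected onto that vector. This is a sketch (the `_lda_check`/`_proj`
# names are illustrative) and assumes scikit-learn's default `svd` solver, which exposes `xbar_`:

# +
_lda_check = LDA().fit(X, y)
_proj = (X - _lda_check.xbar_) @ _lda_check.scalings_       # centre, then project onto the scalings
print(np.allclose(_proj, _lda_check.transform(X)))          # expected: True
# -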
# + # LDA clf_fisher = LDA() # Initialise the LDA method clf_fisher.fit(X, y) # Fit the data X_fisher = clf_fisher.transform(X) # Transform the data print("LDA coefficients", clf_fisher.scalings_) # Extract the tranformed variables sig_fisher = X_fisher[y == 1] bkg_fisher = X_fisher[y == 0] if plot_fisher_discriminant : # You gotta switch this on, if you want to see it :-) # Plot decision region of fisher: fig_fisher, ax_fisher = plot_decision_regions(X, y, classifier=clf_fisher, title='Fisher') # Plot fisher discriminant values: fig_fisher2, ax_fisher2 = plt.subplots(figsize=(13.3, 6)) ax_fisher2.hist(sig_fisher, Nbins, histtype='step', label='sig', color='blue') ax_fisher2.hist(bkg_fisher, Nbins, histtype='step', label='bkg', color='red') ax_fisher2.set(xlabel='Fisher Discriminant', ylabel='Counts', title='Fishers discriminant') ax_fisher2.legend() fig_fisher2.tight_layout() # - # ## Decision Trees # # We load __[Decision Trees](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html)__ from Scikit-learn (sklearn). At first try to increase the `max_depth` slider and see how that affects the plots. Does that make sense? For a given `max_depth`, e.g. `max_depth = 30`, change the `min_samples_leaf` and see how it simplifies (via _regularization_) the model. Think about when you'd want a simpler model. Finally, given a set of values for `max_depth` and `min_samples_leaf` switch between `criterion` being `gini` and `entropy`. Does this change much? # + from sklearn.tree import DecisionTreeClassifier def animate_ML_estimator_DT(criterion, min_samples_leaf=1, max_depth=1): animate_ML_estimator_generator(DecisionTreeClassifier, 'Decision Tree', X, y, max_depth=max_depth, criterion=criterion, #splitter=splitter, #min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf) kwargs_DT = { 'max_depth': (1, 50), 'criterion': ["gini", "entropy"], #'splitter': ["best", "random"], #'min_samples_split': (2, 50), 'min_samples_leaf': (1, 50), } interactive_plot = interactive(animate_ML_estimator_DT, **kwargs_DT) interactive_plot # - # ## Boosted Decision Trees (BDTs) # # For __[BDTs](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html#sklearn.ensemble.AdaBoostClassifier)__ try to slowly increase `n_estimators` and see how it affects the model. Does it make sense? 
Try also to increase the learning rate and see what that changes: # # + from sklearn.ensemble import AdaBoostClassifier def animate_ML_estimator_BDT(learning_rate=1., n_estimators=1): animate_ML_estimator_generator(AdaBoostClassifier, 'Boosted Decision Trees', X, y, learning_rate=learning_rate, n_estimators=n_estimators, ) kwargs_BDT = {'learning_rate': (0.01, 2, 0.01), 'n_estimators': (1, 100), } interactive_plot = interactive(animate_ML_estimator_BDT, **kwargs_BDT) interactive_plot # - # ## k-Nearest Neighbours # # For __[kNNs](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)__ we only have the parameter n_neighbors to change: # + from sklearn.neighbors import KNeighborsClassifier def animate_ML_estimator_KNN(n_neighbors=1): animate_ML_estimator_generator(KNeighborsClassifier, 'KNN', X, y, n_neighbors=n_neighbors) kwargs_KNN = {'n_neighbors': (1, 50), } interactive_plot = interactive(animate_ML_estimator_KNN, **kwargs_KNN) interactive_plot # - # ## Support Vector Machine (SVM): # # __[SVms](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)__ are quite different ML models than the rest and we won't be going through it in much detail. However, see if you can make sense of the difference between the `RBF` kernel compared to the `poly` one. Also, how much does `C` matter for the different kernels? # Notice that `degree` is only relevant for the `poly` kernel. # + from sklearn.svm import SVC def animate_ML_estimator_SVM(kernel='rbf', C=1, degree=2, ): animate_ML_estimator_generator(SVC, 'SVM', X, y, #gamma=gamma, gamma='scale', kernel=kernel, C=C, probability=True, degree=degree, ) kwargs_SVM = {'C': (0.1, 10), 'kernel': ['poly', 'rbf'], 'degree': (1, 10), #'gamma': (0.1, 10), } interactive_plot = interactive(animate_ML_estimator_SVM, **kwargs_SVM) interactive_plot # - # *** # # Machine Learning (ML) is a fascinating subject, which is very much in vouge these days. # There are two classic usages for ML: # # - Classification (determine which category a case belongs to, i.e. ill or healthy) # - Regression (determine a value, i.e. what was the energy of this electron) # # Python has packages - "scikit-learn" (among others) - that allows anybody to easily apply ML to smaller scale problems (i.e. below 10 GB). The following questions/exercise is meant to illustrate a classification problem, and to wet your appetite for more... # # # # Questions: # # 1. Consider the data, and make sure that you by eye can see, how an algorithm should decide between the two categories. Also see, if you can guess how well the Fisher performs. What is the error rate of type1 and type2 roughly? # # 2. Now consider the various ML algorithms. Can you (again by eye) tell, if it is doing "just well" or "very well"? I.e. can you rank the methods by eye? What you have tried this, compare to the ROC curves and see how well you did. # # 3. What we are currently plotting in the ROCcurve is the ROC-curve for the training data. Try to split up the data into a training and a test set: Let 2/3 be for training, and then apply the result of this training on the last 1/3 of the data, which is thus "unseen" by the algorithm, corresponding to "new data". Then add the ROC-curve of the test set to the `plot_decision_regions` function. # # 4. Try to put all the ROC curves into one final plot, which shows how well the different methods perform. # # # ### Advanced questions: # # 5. Try to find some other data, and apply the above ML methods to it. 
Can you get it to run there, and does it perform better than other methods you can implement yourself? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import tensorflow as tf from tensorflow.contrib.tensorboard.plugins import projector import matplotlib.pyplot as plt # %matplotlib inline # + # Reading data image_data = pd.read_csv('fashion-mnist_test.csv') # Class names class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + # Storing image labels image_labels = image_data.iloc[:,0] # Normalizing pixel values in a range of 0-1 image_data = image_data.iloc[:,1:]/255.0 # Number of images to visualize embed_count = 400 # Subsetted data image_labels = image_labels[:embed_count] image_data = image_data.iloc[:embed_count,:] # + # Defining a directory logdir = r'records' # Setup embedding tensor embedding_var = tf.Variable(image_data, name='FashionMNIST') # Tensor name config = projector.ProjectorConfig() # Initiating configuration embedding = config.embeddings.add() # Add tensor to the container # Setup names and paths for meta-data and sprite image embedding.tensor_name = embedding_var.name embedding.metadata_path = os.path.join(logdir,'metadata.tsv') embedding.sprite.image_path = os.path.join(logdir,'sprite.png') embedding.sprite.single_image_dim.extend([28, 28]) # Write to the projector config file summary_writer = tf.summary.FileWriter(logdir) projector.visualize_embeddings(summary_writer, config) # - # Create model checkpoint with tf.Session() as sesh: sesh.run(tf.global_variables_initializer()) saver = tf.train.Saver() saver.save(sesh, os.path.join(logdir, 'model.ckpt')) # + # Setting up dimension of sprite image rows = 28 cols = 28 sprite_dim = int(np.sqrt(embed_count)) sprite_image = np.ones((cols * sprite_dim, rows * sprite_dim)) # Create the sprite image and the metadata file index = 0 labels = [] for i in range(sprite_dim): for j in range(sprite_dim): # Add labels for each image labels.append(class_names[int(test_labels[index])]) # Write the sprite image and invert the color to white background and black clothing item sprite_image[ i * cols: (i + 1) * cols, j * rows: (j + 1) * rows ] = test_data.iloc[index].reshape(28, 28) * -1 + 1 index += 1 # - # Save metadata to file with open(embedding.metadata_path, 'w') as meta: meta.write('Index\tLabel\n') for index, label in enumerate(labels): meta.write('{}\t{}\n'.format(index, label)) # Save sprite image to file plt.imsave(embedding.sprite.image_path, sprite_image, cmap='gray') # Show sprite image plt.figure(figsize=(20,20), dpi=256) plt.imshow(sprite_image, cmap='gray') plt.show() # #### Tensorboard # Open projector_config.pbtxt under 'records' directory. Remove 'records\\\' from metadata_path and image_path. Save and exit the file. # Next, open Anaconda Prompt and run following command - # * Change to directory in which 'records' directory is present. Assuming 'records' is present on Desktop run below command # cd Desktop # * tensorboard --logdir=records # * Copy and run the URL (e.g. http://MYSGEC854775d:6006) in web browser generated by running above command. # * Under Projection tab of tensorboard select 'Label' under 'Color by' option on Left column. # * Visualize using PCA or T-SNE. Stop iteration of T-SNE when required. 
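# A possible way to skip the manual edit of `projector_config.pbtxt` described above
# (a sketch, not verified against every TensorBoard version): write the metadata and
# sprite paths into the config as names relative to the log directory, while still
# saving the files themselves under `logdir`.

# embedding.metadata_path = 'metadata.tsv'
# embedding.sprite.image_path = 'sprite.png'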
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import datetime import seaborn as sns import matplotlib.pyplot as plt file=r'C:\Users\yashn\Desktop\Technocolabs\Project\MSFT.csv' df=pd.read_csv(file,index_col="Date", parse_dates=True) df.head() df.shape df.info() df.isnull().sum() df[df.duplicated()] df['Volume'].value_counts() df.describe() plt.figure(figsize=(14,6)) sns.lineplot(data=df) sns.scatterplot(x=df['High'], y=df['Low']) sns.scatterplot(x=df['Open'], y=df['Close']) sns.distplot(a=df['Volume'], kde=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="S8alK1zLkbXK" colab_type="text" # ## Introduction # # The objective is to use Deep Convolutional Generative Adversertial Network to generate fake GO game snapshots on a GO board. # + [markdown] id="iMTrlOg3nz6u" colab_type="text" # ## Import Libs # + id="8588hUR4rzY7" colab_type="code" outputId="870aad12-16de-47a6-8b06-947c9f51d226" colab={"base_uri": "https://localhost:8080/", "height": 34} from google.colab import drive from keras.datasets import mnist from keras.layers import Input, Dense, Reshape, Flatten, Dropout from keras.layers import BatchNormalization, Activation, ZeroPadding2D from keras.layers.advanced_activations import LeakyReLU from keras.layers.convolutional import UpSampling2D, Conv2D from keras.models import Sequential, Model from keras.optimizers import Adam import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator import numpy as np import time import os from PIL import Image from PIL import Image import numpy as np import pandas as pd # + [markdown] id="MKBG0PiCn9po" colab_type="text" # ## Download Data # # * Data prepared offline by running extra python scripts from the listed repository (which is also created for this project) # # * Please reference slides # + id="qQ7ECrvx1cnE" colab_type="code" outputId="9a892dfc-de4d-4253-d8c3-22c47d1a8a70" colab={"base_uri": "https://localhost:8080/", "height": 119} # !rm -rf gangengo # !git clone https://github.com/timlyrics/gangengo # !mv gangengo/* . # + [markdown] id="PyA6tSkalZ2Y" colab_type="text" # ## Real sample # + id="klChc48A2Ln3" colab_type="code" outputId="abb3e6e6-fc7e-471d-f0d4-9fbfc6c1c942" colab={"base_uri": "https://localhost:8080/", "height": 350} from gen_go import GenGo gg = GenGo() gg.show_random() # + [markdown] _uuid="04304e5c4555849eade69a1a1b5b1fb7cc2feb13" id="AGzaeNUNxdGy" colab_type="text" # ## Set up network parameters # # * The go board is 19x19, but we padded it to 20x20 for easier processing. # # * Each pixel has two channels for black and white (red). # # * Generator latent dimension is 32. # # + _uuid="38443112fd55bef54d7807495b69d66353bfccae" id="YCSOpXVVxdG1" colab_type="code" colab={} img_rows = 20 img_cols = img_rows channels = 2 img_shape = (img_rows, img_cols, channels) latent_dim = 32 # + [markdown] _uuid="1c5ad4f2b1b3b4ae3dd6caf7ba96ef1e273eb900" id="McWm3ToexdG4" colab_type="text" # ## Define a function to build a generator # # * Generator is a sequential model with one dense layer followed by two upsampling + convolution layers to buff complexity and match dimension. 
# # * Then a convolutional layer with 2 channels is used to match pixel depth and a sigmoid activation converts output to boolean. # + _uuid="804d0ab8433b786150c48cd6bd990d74705cb2a8" id="yIh80aD4xdG5" colab_type="code" colab={} def build_generator(): model = Sequential() model.add(Dense(128 * 5 * 5, activation="relu", input_dim=latent_dim)) model.add(Reshape((5, 5, 128))) model.add(UpSampling2D()) model.add(Conv2D(128, kernel_size=3, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) model.add(UpSampling2D()) model.add(Conv2D(64, kernel_size=3, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) model.add(Conv2D(channels, kernel_size=3, padding="same")) model.add(Activation("sigmoid")) model.summary() noise = Input(shape=(latent_dim,)) img = model(noise) m = Model(noise, img) return m # + [markdown] _uuid="a9ea4efecf7d4faeb0a37a649c079ebe88396908" id="IP63a3hixdG7" colab_type="text" # ## Define a function to build a discriminator # # * Discriminator a typical sequential Convolutional Neural Network classifier. # # * First layer padding is "valid" because the GO game treats sides and corders differently. # + _uuid="b9aa34a385dac406b50e7e122b45483c98c0cffa" id="YSIwur4txdG8" colab_type="code" colab={} def build_discriminator(): model = Sequential() model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=img_shape, padding="valid")) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(64, kernel_size=3, strides=2, padding="same")) model.add(ZeroPadding2D(padding=((0, 1), (0, 1)))) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(128, kernel_size=3, strides=2, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(256, kernel_size=3, strides=1, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.summary() img = Input(shape=img_shape) validity = model(img) return Model(img, validity) # + [markdown] _uuid="306fda1d4980efc3c64db997111ab7e615cfdbac" id="v3U8x5AZxdG-" colab_type="text" # ## Build GAN # # * The discriminator model is a simple typical classification model. # # * The combined model is a the discriminator stacked on top of generator where discriminator parameters are freezed. # # * Binary cross entropy is used for loss for both discriminator loss and combined loss. # # * Adam optimizer is used for both models where learning rate for combined model is 0.0001 and for discriminator is 0.00002 (1 order of magnitude lower than generator) and a decay is also added to the discriminator optimizer to prevent overpowering the generator. 
# # (The parameters are tuned by trial and error until time ran out) # + _uuid="f7202cc45aae945674176146922493812e520030" id="c1RvpIcoxdG_" colab_type="code" outputId="17250a9b-8c5f-4649-a78d-6a6fd17cf761" colab={"base_uri": "https://localhost:8080/", "height": 1397} # build discriminator discriminator = build_discriminator() discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.00002, 0.3, decay=1e-6), metrics=['accuracy']) # build generator generator = build_generator() z = Input(shape=(latent_dim,)) img = generator(z) # For the combined model we will only train the generator discriminator.trainable = False # The discriminator takes generated images as input and determines validity valid = discriminator(img) # The combined model (stacked generator and discriminator) # Trains the generator to fool the discriminator combined = Model(z, valid) combined.compile(loss='binary_crossentropy', optimizer=Adam(0.0001, 0.9)) # + [markdown] id="0JqoNWtXh6PP" colab_type="text" # ## Transform between softmax space and linear space # + id="q7bXdNBqi3mb" colab_type="code" colab={} def softmax_from_linear(arr): arr = np.expand_dims(arr, axis=3) smx = np.concatenate([1 * (arr > 0), 1 * (arr < 0)], axis=3) return smx def linear_from_softmax(smx): arr = ((smx[:,:,:,0] > 0.5) & (smx[:,:,:,0] > smx[:,:,:,1])) * 1 + ((smx[:,:,:,1] > 0.5) & (smx[:,:,:,1] > smx[:,:,:,0])) * -1 return arr # + [markdown] _uuid="e1d9927bd8601eedf144b6f46f12cc4da0d045b5" id="8a5yO3JfxdHB" colab_type="text" # ## Define a function to train GAN # # * Trains both models in each batch / epoch. # # * Saves images and records results periodically. # + _uuid="6592840f1e019f1d6fd3e709a0885389d252164f" id="Fkhx_GCHxdHC" colab_type="code" colab={} def train(epochs, batch_size, save_interval): os.makedirs('images', exist_ok=True) # Adversarial ground truths valid = np.ones((batch_size, 1)) fake = np.zeros((batch_size, 1)) # losses arr_d_loss = np.zeros(epochs) arr_g_loss = np.zeros(epochs) for epoch in range(epochs): # Select a random real images real_imgs = softmax_from_linear(gg.arr_from_random(batch_size)) # Sample noise and generate a batch of fake images noise = np.random.normal(0, 1, (batch_size, latent_dim)) fake_imgs = generator.predict(noise) # Train the discriminator D_loss_real = discriminator.train_on_batch(real_imgs, valid) D_loss_fake = discriminator.train_on_batch(fake_imgs, fake) D_loss = 0.5 * np.add(D_loss_real, D_loss_fake) # Train the generator g_loss = combined.train_on_batch(noise, valid) # Remember loss arr_d_loss[epoch] = D_loss[0] arr_g_loss[epoch] = float(g_loss) # If at save interval if epoch % save_interval == 0: # Print the progress print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, D_loss[0], 100 * D_loss[1], g_loss)) # Save generated image samples save_imgs(epoch) return arr_d_loss, arr_g_loss # + [markdown] id="B6YIRgYfxXsW" colab_type="text" # ## Define visualizer # # * Plot randomly generated samples in 3x3 layout # + _uuid="b03c0413ddcde5125fa8fdb22bfe47e1bd668606" id="145zyE3WxdHE" colab_type="code" colab={} def save_imgs(epoch): r, c = 3, 3 noise = np.random.normal(0, 1, (r * c, latent_dim)) gen_imgs = linear_from_softmax(generator.predict(noise)) fig, axs = plt.subplots(r, c) fig.set_figheight(10) fig.set_figwidth(10) cnt = 0 for i in range(r): for j in range(c): row = gen_imgs[cnt, :, :] cnt += 1 ax = axs[i, j] loc = MultipleLocator(base=1) ax.xaxis.set_major_locator(loc) ax.yaxis.set_major_locator(loc) ax.set_axisbelow(True) ax.grid(linestyle='-', axis='both', linewidth='0.5', 
color='grey') for x in range(row.shape[0]): for y in range(row.shape[1]): if row[x][y] > 0.5: ax.scatter(x, y, c='red', s=50) elif row[x][y] < -0.5: ax.scatter(x, y, c='black', s=50) fig.savefig("images/go_%d.png" % epoch) plt.close() # + [markdown] _uuid="ccbe0a280b9fc16e47a63dc5e5edae8909f7569c" id="2CQ4LNlaxdHJ" colab_type="text" # ## Train GAN # # * Run training routine # + _uuid="8a52d992f11a034e0cc0cabaddeb2ae93f160dc6" id="RvwGlSuMxdHK" colab_type="code" outputId="e6fefc8f-6095-47e9-9ce2-6409cb9dc01c" colab={"base_uri": "https://localhost:8080/", "height": 3556} start = time.time() arr_d_loss, arr_g_loss = train(epochs=100001, batch_size=128, save_interval=500) end = time.time() elapsed_train_time = 'elapsed training time: {} min, {} sec '.format(int((end - start) / 60), int((end - start) % 60)) print(elapsed_train_time) # + [markdown] id="PLUGoE8sh0aE" colab_type="text" # ## Loss evolution over time # # * The progression of the training proceeded well (combined loss decreases) until 20000 batches / epochs. # # * The discriminator overpowers the generator after 20000, and the training does not progress well. # # * Further tuning is necessary to undercut the discriminator to achieve better results. # + id="Wn4smoAZOQqe" colab_type="code" outputId="e2d712fa-3cf5-4feb-eae3-446ee16ca392" colab={"base_uri": "https://localhost:8080/", "height": 378} plt.plot(np.arange(1001) * 100, arr_d_loss[::100]) plt.plot(np.arange(1001) * 100, arr_g_loss[::100]) plt.legend(['discriminator loss', 'generator loss']) plt.xlabel('epoch') plt.ylabel('loss') # + [markdown] id="zwFvb8tFirwI" colab_type="text" # ## Save weights and do some debugging checks # + _uuid="cb7c9e65347eb077e1dabd74445713ddef12ba34" id="5ih-0phwxdHO" colab_type="code" colab={} os.makedirs('saved_model_weights', exist_ok=True) generator.save_weights('saved_model_weights/generator_weights.h5') discriminator.save_weights('saved_model_weights/discriminator_weights.h5') combined.save_weights('saved_model_weights/combined_weights.h5') # + id="WsHG1dqL0zqz" colab_type="code" colab={} # debugging noise = np.random.normal(0, 1, (1, latent_dim)) raw = generator.predict(noise) gen_imgs = linear_from_softmax(generator.predict(noise)) # + colab_type="code" id="hnVgPu_rC9Gx" colab={} # debugging a = raw[0,:,:] (a[:,:,0] < 0.5) & (a[:,:,1] < 0.5) b = softmax_from_linear(gg.arr_from_random(1)) # + [markdown] _uuid="526455382f869ceadc4206cb703e676a4037b213" id="VY40mkpTxdHQ" colab_type="text" # ## Show generated GO # # * Display randomly generated images for epoch 0, 500, 5000, 20000, and 100000. 
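# The cells below open the snapshots one by one; a compact alternative (a sketch,
# using only the files already produced by `save_imgs` during training) is to show
# them side by side:

# +
fig_go, axes_go = plt.subplots(1, 6, figsize=(30, 5))
for ax_go, ep in zip(axes_go, [0, 500, 5000, 20000, 50000, 100000]):
    ax_go.imshow(Image.open('images/go_%d.png' % ep))   # saved snapshot for this epoch
    ax_go.set_title('epoch %d' % ep)
    ax_go.axis('off')
# -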
# + _uuid="b9cd883f622680511e19cfa30717a3a8d93c4a48" id="dXDYEzoNxdHR" colab_type="code" outputId="444b93b6-4ccd-463b-aa7b-8538e2e26c5d" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_0.png') # + _uuid="4c487c7fede8db3baa681554c9cf677464af791c" id="ON5zWZMoxdHW" colab_type="code" outputId="d10fe14d-c654-4410-a5ad-98f913fc1b3d" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_500.png') # + id="fu4LQ12OlyjA" colab_type="code" outputId="4538e6ed-bfb3-4a4a-b207-4ed370839f63" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_5000.png') # + id="OPeqA0s3l4SF" colab_type="code" outputId="742c10c9-b66b-4b11-f798-10d50dc408f8" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_20000.png') # + id="EmTBu61qmAR9" colab_type="code" outputId="48545dce-6dd7-4d69-e7c9-69e527c89896" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_50000.png') # + id="DPN14p71mGix" colab_type="code" outputId="8d9a91f5-92ab-4bb1-d59d-152315ec9062" colab={"base_uri": "https://localhost:8080/", "height": 737} Image.open('images/go_100000.png') # + [markdown] _uuid="e1a1fdfff62338dd332aa823655f398156a5e5ae" id="pqR-k48QxdHZ" colab_type="text" # ## Reference # [Keras - DCGAN](https://github.com/eriklindernoren/Keras-GAN#dcgan) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="c7JgmgJR8mVB" # # + [markdown] id="3ALQ2PJiT1gV" # ## Laboratory 10 : Linear Combination and Vector Spaces # + id="n648vCMcTzFH" import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + [markdown] id="prbnOyGQT5sl" # ## Linear Combination # + [markdown] id="aICO-RIqT81b" # It is said that a linear combination is the combination of linear scaling and addition of a vector its bases/components # + [markdown] id="f4_09ihcT_MD" # We will try to visualize the vectors and their linear combinations by plotting a sample of real number values for the scalars for the vectors. Let's first try the vectors below: # + [markdown] id="DIOYegLaUAxN" # $$X = \begin{bmatrix} 2\\5 \\\end{bmatrix} , Y = \begin{bmatrix} 7\\9 \\\end{bmatrix} $$ # + id="QhsnzsioT8Y3" vectX = np.array([2,5]) vectY = np.array([7,9]) # + [markdown] id="kxhFct-nUgFq" # Span of single vectors # + [markdown] id="qMXFGaIXUgAI" # As discussed in the lecture, the span of individual vectors can be represented by a line span. Let's take vector $X$ as an example. # + id="Dyw8XwsrUkR7" c = np.arange(-10,10,0.125) plt.scatter(c*vectX[0],c*vectX[1]) plt.xlim(-5,15) plt.ylim(5,-15) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() # + id="Cqv6wmFSUlly" c = np.arange(-10,10,0.125) plt.scatter(c*vectX[0],c*vectX[1]) plt.xlim(-5,15) plt.ylim(5,-15) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() # + [markdown] id="5_g8ZEa7UpfO" # #Span of a linear combination of vectors # So what if we are to plot the span of a linear combination of vectors? We can visualize as a plane on the 2-dimensional coordinate system. 
Let's take the span of the linear combination below: # + [markdown] id="saTQ2YxnUsqh" # $$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} 1\\0 \\\end{bmatrix}, # c_2 \cdot \begin{bmatrix} 1\\-1 \\\end{bmatrix}\end{Bmatrix} $$ # + id="l0QpNPATUnFL" vectA = np.array([1,0]) vectB = np.array([1,-1]) R = np.arange(-10,10,1) c1, c2 = np.meshgrid(R,R) vectR = vectA + vectB spanRx = c1*vectA[0] + c2*vectB[0] spanRy = c1*vectA[1] + c2*vectB[1] ##plt.scatter(R*vectA[0],R*vectA[1]) ##plt.scatter(R*vectB[0],R*vectB[1]) plt.scatter(spanRx,spanRy, s=5, alpha=0.75) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() # + id="KqvxSIgUVAVk" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="a7e6e01b-f3dd-416d-a53f-818ea8afa66e" vectP = np.array([2,1]) vectQ = np.array([4,3]) R = np.arange(-10,10,1) c1, c2 = np.meshgrid(R,R) vectR = vectP + vectQ spanRx = c1*vectP[0] + c2*vectQ[0] spanRy = c1*vectP[1] + c2*vectQ[1] ##plt.scatter(R*vectA[0],R*vectA[1]) ##plt.scatter(R*vectB[0],R*vectB[1]) plt.scatter(spanRx,spanRy, s=5, alpha=0.75) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() # + [markdown] id="GhK16RtJVEpd" # Take note that if vectors are seen to be as a 2-dimensional span we can say it has a Rank of 2 or R2 . But if the span of the linear combination of vectors are seen to be like a line, they are said to be linearly dependent and they have a rank of 1 or R1 . # # #Activity # ##Task 1 # Try different linear combinations using different scalar values. In your methodology discuss the different functions that you have used, the linear equation and vector form of the linear combination, and the flowchart for declaring and displaying linear combinations. Please make sure that your flowchart has only few words and not putting the entire code as it is bad practice. In your results, display and discuss the linear combination visualization you made. You should use the cells below for displaying the equation markdows using LaTeX and your code. 
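# A small helper for this task (a sketch; `plot_span` and its arguments are illustrative
# names, not part of the lab template). It plots the span of $c_1 \cdot v_1 + c_2 \cdot v_2$
# over a grid of scalars, so different linear combinations can be tried by only changing
# the two input vectors.

# +
def plot_span(v1, v2, lim=10):
    R = np.arange(-lim, lim, 1)
    c1, c2 = np.meshgrid(R, R)                 # grid of scalar pairs (c1, c2)
    spanx = c1*v1[0] + c2*v2[0]                # x-components of c1*v1 + c2*v2
    spany = c1*v1[1] + c2*v2[1]                # y-components of c1*v1 + c2*v2
    plt.scatter(spanx, spany, s=5, alpha=0.75)
    plt.axhline(y=0, color='k')
    plt.axvline(x=0, color='k')
    plt.grid()
    plt.show()

plot_span(np.array([-10, 2]), np.array([13, -8]))
# -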
# + [markdown] id="rny8z1QDj6Q7" # $$ Vector1 = -10\hat{i} + 2\hat{j} \\ # Vector 2 = 13\hat{x} - 8\hat{k}\\ $$ # # # # # + [markdown] id="RE20rAi0j-_Q" # $$Span = \begin{Bmatrix} Vector1 \cdot\begin{bmatrix} -10\\2 \\\end{bmatrix}, # vector2 \cdot \begin{bmatrix} 13\\-8 \\\end{bmatrix}\end{Bmatrix} $$ # + id="ok9NGzrFVK3n" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="9f60a41e-cbdb-4b6a-9379-68fef4db2ff1" vector1 = np.array([-10,2]) vector2 = np.array([13,-8]) R = np.arange(-10,10,1) c1, c2 = np.meshgrid(R,R) vectR = vectP + vectQ spanRx = c1*vectP[0] + c2*vectQ[0] spanRy = c1*vectP[1] + c2*vectQ[1] ##plt.scatter(R*vectA[0],R*vectA[1]) ##plt.scatter(R*vectB[0],R*vectB[1]) plt.scatter(spanRx,spanRy, s=5, alpha=0.75) plt.axhline(y=0, color='red') plt.axvline(x=0, color='red') plt.grid() plt.show() # + [markdown] id="XqPHCi4gWyb8" # $$ X = -5\hat{i} - 7\hat{j} \\ # Y = 3\hat{x} - 4\hat{k}\\ $$ # + [markdown] id="JEddNRAWW5G2" # $$X = c\cdot \begin{bmatrix} 14\\6 \\\end{bmatrix} $$ # $$Y = c\cdot \begin{bmatrix} 22\\13 \\\end{bmatrix} $$ # + colab={"base_uri": "https://localhost:8080/", "height": 269} id="wHsgiP9oXd7T" outputId="0ddb179e-23a8-4a2a-b444-412cb7b8ef10" vectX = np.array([14,6]) vectY = np.array([22,13]) c = np.arange(-25,25,0.335) plt.scatter(c*vectX[0],c*vectX[1]) plt.xlim(-30,30) plt.ylim(-30,30) plt.axhline(y=0, color='blue') plt.axvline(x=0, color='blue') plt.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Projeto Big Data # #

    # # # Dataset: https://www.kaggle.com/mlg-ulb/creditcardfraud import findspark import pyspark from pyspark.conf import SparkConf from pyspark.sql.functions import monotonically_increasing_id from pyspark.mllib.linalg import Vectors from pyspark.mllib.regression import LabeledPoint from pyspark.mllib.tree import RandomForest as cls_rf from time import * from pyspark.mllib.evaluation import BinaryClassificationMetrics # # Utilizando o contexto do spark findspark.init() sc = pyspark.SparkContext(appName="projeto") session = pyspark.sql.SparkSession.builder.config(conf=SparkConf()) spark = session.getOrCreate() # # Carregando o dataset do hdfs do hadoop df = spark.read.options(header = "true", inferschema = "true").csv('hdfs://cluster-001-m/user/victor_outtes/creditcard.csv') df.printSchema() print("Total number of rows:", df.count()) # # Split dataset 70-30 # + TRAIN_DATA_RATIO = 0.7 # A ultima coluna contém o target transformed_df = df.rdd.map(lambda row: LabeledPoint(row[-1], Vectors.dense(row[0:-1]))) # Dividindo o dataset splits = [TRAIN_DATA_RATIO, 1.0 - TRAIN_DATA_RATIO] train_data, test_data = transformed_df.randomSplit(splits, RANDOM_SEED) print("Quantidade de linhas do dataset de treinamento: %d" % train_data.count()) print("Quantidade de linhas do dataset de testes: %d" % test_data.count()) # - # # Treino e parametrização do modelo random forest # + RANDOM_SEED = 13579 NUM_TREES = 3 MAX_DEPTH = 4 MAX_BINS = 32 t1 = time() # Treinando a random forest model = cls_rf.trainClassifier(train_data, numClasses=2, categoricalFeaturesInfo={}, numTrees=NUM_TREES, maxDepth=MAX_DEPTH, maxBins=MAX_BINS, seed=RANDOM_SEED) t2 = time() t_end = t2 - t1 print("Tempo do treinamento: %.3f s" % t_end) # - # # Predição e medição da acurácia predictions = model.predict(test_data.map(lambda x: x.features)) labels_and_predictions = test_data.map(lambda x: x.label).zip(predictions) accuracy = labels_and_predictions.filter(lambda x: x[0] == x[1]).count() / float(test_data.count()) print("Model accuracy: %.3f%%" % (accuracy * 100)) # # Métricas: precision recall / curva roc metrics = BinaryClassificationMetrics(labels_and_predictions) print("Area PR curve: %.f" % (metrics.areaUnderPR * 100)) print("Area ROC curve: %.3f" % (metrics.areaUnderROC * 100)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Regressão logística # Determinar se um usuário comprou uma SUV baseado em sua idade e estimativa salarial # Dataset obtido no Kaggle (https://www.kaggle.com/hamzaalijoiyah/users-of-a-social-networks-who-bought-suv) import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dados = pd.read_csv('Social_Network_Ads.csv') dados.head() sns.scatterplot(x='Age',y='EstimatedSalary',data=dados,hue='Purchased') plt.xlabel('Idade') plt.ylabel('Salário') plt.tight_layout() # A coluna ID do usuário não é relevante e será removida dados = dados.drop('User ID',axis=1) dados.head() # Informações iniciais da amostra dados.info() # Verificando a existência de NaNs dados.isna().sum() # Determinando número de clientes homens ou mulher sns.countplot(x='Gender',data=dados) # Verificando distribuição das idades sns.distplot(dados['Age']) # Verificando distribuição da renda sns.distplot(dados['EstimatedSalary']) # Verificando quantidade de clientes quem compraram ou não SUV sns.countplot(x='Purchased',data=dados) # A coluna 
genero (Gender) precisa ser transformada de variável categórica para variável numérica from sklearn.preprocessing import LabelEncoder le = LabelEncoder() le.fit(dados['Gender']) list(le.classes_) dados['Gender'] = le.transform(dados['Gender']) dados.head() # Normalizando os dados dos salários from sklearn.preprocessing import StandardScaler sc_X=StandardScaler() dados[['Age','EstimatedSalary']] = sc_X.fit_transform(dados[['Age','EstimatedSalary']]) dados.head() # Separando as variáveis X e Y X = dados.drop('Purchased',axis=1).values Y = dados['Purchased'].values # Separando dados em amostras de treino e teste from sklearn.model_selection import train_test_split X_treino,X_teste,Y_treino,Y_teste=train_test_split(X,Y,test_size=0.25,random_state=0) # Aplicando modelo de regressão logística from sklearn.linear_model import LogisticRegression modelo = LogisticRegression() modelo.fit(X_treino,Y_treino) # Realizando previsão na amostra de teste Y_previsto = modelo.predict(X_teste) # Determinando a matriz de confusão para verificar eficácio do modelo from sklearn.metrics import confusion_matrix cm=confusion_matrix(Y_teste,Y_previsto) cm sns.heatmap(cm,annot=True, fmt="d") # Calculando as métricas do modelo from sklearn.metrics import classification_report,f1_score,precision_score,average_precision_score,recall_score,accuracy_score # Relatório de classificação cr = classification_report(Y_teste,Y_previsto,labels=[0,1]) print(cr) # F1-score f1 = f1_score(Y_teste,Y_previsto) print("F1 score = {:0.2f}%".format(f1*100)) # Precision score precisao = precision_score(Y_teste,Y_previsto) print("Precision score = {:0.2f}%".format(precisao*100)) # Average precision score avg_precision = average_precision_score(Y_teste,Y_previsto) print("Averaged Precision score = {:0.2f}%".format(avg_precision*100)) # Recall score rec = recall_score(Y_teste,Y_previsto) print("Recall score = {:0.2f}%".format(rec*100)) # Accuracy score acc = accuracy_score(Y_teste,Y_previsto) print("Accuracy score = {:0.2f}%".format(acc*100)) # # Curva ROC from sklearn.metrics import roc_curve, roc_auc_score roc_score = roc_auc_score(Y_teste, Y_previsto) print("ROC score = {:0.2f}%".format(roc_score*100)) roc_fpr, roc_tpr, _ = roc_curve(Y_teste, Y_previsto) plt.plot(roc_fpr, roc_tpr, linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') # # Curva Precision-Recall from sklearn.metrics import precision_recall_curve,auc lr_precision, lr_recall, _ = precision_recall_curve(Y_teste, Y_previsto) lr_auc = auc(lr_recall, lr_precision) print("AUC score = {:0.2f}%".format(lr_auc*100)) plt.plot(lr_recall, lr_precision) plt.xlabel('Recall') plt.ylabel('Precision') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Super-Convergence Learning Rate Schedule (TensorFlow Backend) # In this example we will implement super-convergence learning rate (LR) schedule (https://arxiv.org/pdf/1708.07120.pdf) and test it on a CIFAR10 image classification task. Super-covergence is a phenomenon where neural networks can be trained an order of magnitude faster than with standard training methods. The paper proposes a LR schedule which incorporates two parts: a LR range test to find the appropriate LR range and a cyclical LR schedule that uses the obtained information. 
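# As a minimal sketch of the schedule shape described above (the function name and the
# example numbers are illustrative only; the actual `lr_max` used later in this notebook
# comes from the LR range test):

def one_cycle_lr(step, total_steps, lr_min=0.01, lr_max=0.4):
    up_end = int(0.45 * total_steps)        # ramp up over the first ~45% of steps
    down_end = 2 * up_end                   # ramp back down over the next ~45%
    if step < up_end:
        return lr_min + (lr_max - lr_min) * step / up_end
    if step < down_end:
        return lr_max - (lr_max - lr_min) * (step - up_end) / up_end
    return max(lr_min * (1.0 - (step - down_end) / (total_steps - down_end)), 0.0)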
# + import tempfile import fastestimator as fe from fastestimator.architecture.tensorflow import ResNet9 from fastestimator.dataset.data.cifair10 import load_data from fastestimator.op.numpyop.meta import Sometimes from fastestimator.op.numpyop.multivariate import HorizontalFlip, PadIfNeeded, RandomCrop from fastestimator.op.numpyop.univariate import CoarseDropout, Normalize, Onehot from fastestimator.op.tensorop.loss import CrossEntropy from fastestimator.op.tensorop.model import ModelOp, UpdateOp from fastestimator.trace.adapt import LRScheduler from fastestimator.trace.io import BestModelSaver from fastestimator.trace.metric import Accuracy from fastestimator.util.util import Suppressor import matplotlib.pyplot as plt # + tags=["parameters"] # Parameters epochs=24 batch_size=128 lr_epochs=100 train_steps_per_epoch=None save_dir=tempfile.mkdtemp() # - # ## Network Architecture and Data Pipeline # We will use almost the same image classification configuration of the other Apphub example: [CIFAR10 Fast](../../image_classification/cifar10_fast/cifar10_fast.ipynb) including network architecture and data pipeline. The only difference is that we use SGD optimizer instead of Adam because author of the paper specially pointed out the incompatibility between Adam optimizer and super-convergence. # + # prepare dataset train_data, test_data = load_data() pipeline = fe.Pipeline( train_data=train_data, eval_data=test_data, batch_size=batch_size, ops=[ Normalize(inputs="x", outputs="x", mean=(0.4914, 0.4822, 0.4465), std=(0.2471, 0.2435, 0.2616)), PadIfNeeded(min_height=40, min_width=40, image_in="x", image_out="x", mode="train"), RandomCrop(32, 32, image_in="x", image_out="x", mode="train"), Sometimes(HorizontalFlip(image_in="x", image_out="x", mode="train")), CoarseDropout(inputs="x", outputs="x", mode="train", max_holes=1), Onehot(inputs="y", outputs="y", mode="train", num_classes=10, label_smoothing=0.2) ]) # prepare network model = fe.build(model_fn=ResNet9, optimizer_fn="sgd") network = fe.Network(ops=[ ModelOp(model=model, inputs="x", outputs="y_pred"), CrossEntropy(inputs=("y_pred", "y"), outputs="ce"), UpdateOp(model=model, loss_name="ce") ]) # - # ## LR Range Test # The preparation of the super-convergence schedule is to search the suitable LR range. The process is training the target network with a linearly increasing LR and observing the validation accuracy. Generally, the accuracy will keep increase until at some certain point when the LR get too high and start making training diverge. The very LR of that moment is the "maximum LR". # # To run the test we need to implement the trace to record the maximum LR. After running the training with linear increaseing LR, we will get the maximum LR. # # drawing # [The typical learning rate and metircs plot from https://arxiv.org/pdf/1708.07120.pdf] # + def linear_increase(step, min_lr=0.0, max_lr=6.0, num_steps=1000): lr = step / num_steps * (max_lr - min_lr) + min_lr return lr traces = [ Accuracy(true_key="y", pred_key="y_pred"), LRScheduler(model=model, lr_fn=lambda step: linear_increase(step)) ] # prepare estimator LR_range_test = fe.Estimator(pipeline=pipeline, network=network, epochs=lr_epochs, traces=traces, train_steps_per_epoch=10, log_steps=10) # run the LR_range_test this print("Running LR range testing... It will take a while") with Suppressor(): summary = LR_range_test.fit("LR_range_test") # - # Let's plot the accuracy vs LR graph and see the maximum LR. 
acc_steps = [step for step in summary.history["eval"]["accuracy"].keys()] acc_values = [acc for acc in summary.history["eval"]["accuracy"].values()] best_step, best_acc = max(summary.history["eval"]["accuracy"].items(), key=lambda k: k[1]) lr_max = summary.history["train"]["model_lr"][best_step] lr_values = [summary.history["train"]["model_lr"][x] for x in acc_steps] assert len(lr_values) == len(acc_values) plt.plot(lr_values, acc_values) plt.plot(lr_max, best_acc, 'o', color='r', label="Best Acc={}, LR={}".format(best_acc, lr_max)) plt.xlabel("Learning Rate") plt.ylabel("Evaluation Accuracy") plt.legend(loc='upper left', frameon=False) # ## Super-Convergence LR Schedule # # Once we get the maximum LR, the minimum LR can be computed by dividing it by 40. Although this number is set to 4 in the paragraph of the original paper, it falls in range of [4, 40] in its experiment section. We empirically found 40 is the best value for this task. # # The LR change has 3 phases: # 1. increase LR from minimum LR to maximum LR at 0~45% of training process # 2. decrase LR from maximum LR to minimum LR at 45%~90% of training process # 3. decrase LR from minimum LR to 0 at 90%~100% of training process # # drawing # + lr_min = lr_max / 40 mid = int(epochs * 0.45 * len(train_data) / batch_size) end = int(epochs * len(train_data) / batch_size) def super_schedule(step): if step < mid: lr = step / mid * (lr_max - lr_min) + lr_min # linear increase from lr_min to lr_max elif mid <= step < mid * 2: lr = lr_max - (step - mid) / mid * (lr_max - lr_min) # linear decrease from lr_max to lr_min else: lr = max(lr_min - (step - 2 * mid) / (end - 2 * mid) * lr_min, 0) # linear decrease from lr_min to 0 return lr # - # Before we start the main training, the model needs to be reinitialized. Therefore we re-instantiate the same network and plug the new LR scheduler in the estimator. # + # reinitialize the model model = fe.build(model_fn=ResNet9, optimizer_fn="sgd") network = fe.Network(ops=[ ModelOp(model=model, inputs="x", outputs="y_pred"), CrossEntropy(inputs=("y_pred", "y"), outputs="ce"), UpdateOp(model=model, loss_name="ce") ]) traces = [ Accuracy(true_key="y", pred_key="y_pred"), BestModelSaver(model=model, save_dir=save_dir, metric="accuracy", save_best_mode="max"), LRScheduler(model=model, lr_fn=lambda step: super_schedule(step)) ] # prepare estimator main_train = fe.Estimator(pipeline=pipeline, network=network, epochs=epochs, traces=traces, train_steps_per_epoch=train_steps_per_epoch) main_train.fit() # - # ## Result Discussion # The result of it might not be super impressive when comparing with original example [CIFAR10 Fast](../../image_classification/cifar10_fast/cifar10_fast.ipynb). But please be aware that the example has its own LR schedules which is specially tuned on that configuration (plus that scheduler is also cyclical LR schedule). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Problem set #2 # # # ### Q1 # ##### Which of the linear models below are multiple linear regressions? 
# # (1) µi = α + βxi # # (2) µi = β1 * xi + β2 * zi # # (3) µi = α + β(xi − zi) # # (4) µi = α + β1 * xi + β2 * zi # cd C:\Users\edsea\My Drive\Backup\UM6P\Data Science 2 class\DatSci2_2022_repo import matplotlib.pyplot as plt import scipy.stats as st import numpy as np import pandas as pd import pymc3 as pm from quap import quap import arviz as az import statsmodels.api as sm import math d = pd.read_csv("./Data/drinking_fatalities.csv", sep=";", header=0) # ### Q2 # I am trying to model the effect of education on fame - in other words I want to know whether having more years of education improves a person's chances of being famous. # # Give 1 example of a variable I might want to control for in my model. What could happen if I don't control for it? # ### Q3 # Write the model definition for the above model. Should you center any variables? Justify your priors. # ### Q4 # I want to set up an experiment looking at the effect of social media use on anxiety and test scores. Help me simulate this. # # First simulate baseline anxiety levels for a group of 100 students. Anxiety is scored between 0 and 10 and the average score is 5 with a standard deviation of 2. Remember that you can't score lower than 0 or higher than 10! # # Next assign half of them at random to the experimental group which is given access to social media. # # Simulate student's scores after the treatment. On average, the control group (with no access to social media) have the same score, while the experimental group increased their anxiety by 1 point on average. # # Simulate student's test scores out of 100. The average score for a student without anxiety is 80% with a standard deviation of of 15 points, but each point of anxiety decreases a student's score by 2 points on average. (again, remember that scores cannot be more than 100 or less than 0). # # Create a data frame with experimental status, post-experiment anxiety, and test scores and show the first 6 rows. # + N =100 a0 = np.random.normal(5,2,size=N) a0[a0<0] = 0 a0[a0>10] = 10 SM = np.repeat([0,1],N/2) a1 = np.random.normal(a0 + SM*1,0.1) a1[a1<0]=0 a1[a1>10]=10 TS = np.random.normal(80-a1*2, 15) TS[TS<0] = 0 TS[TS>100] = 100 df = pd.DataFrame.from_dict({"Social_Media" : SM,"Anxiety" : a1, "Test_score" : TS}) df.head() # - # ### Q5 # Your colleague tells you to put model test scores using a multiple linear regression with anxiety and experimental status as predictor variables. Why is this a bad idea? # # To appease them, you do it anyway. Present the results in a forest plot. Interpret the meaning of each of the parameter estimates. What model should you run instead? Present these results in another forest plot and interpret the results for your colleague. # + with pm.Model() as anxiety_model: B1 = pm.Normal("B1",0,1) B2 = pm.Normal("B2",0,1) a = pm.Normal("a",50,20) sigma = pm.Exponential("sigma",1) mu = a + B1*Anxiety + B2*Social_Media Test_score = np.Normal("Test_score",mu,sigma, observed=df.Test_score) data,dist = quap(vars=[a,B1,B2,sigma],n_samples=10_000) # - # ### Q6 # Load the data /Data/drinking_fatalities.csv. This is a dataset of US states between 1982 and 1988, with data on the minimum legal drinking age. Your colleague from the previous question tells you they have found an interesting result. They find that states with more average miles driven per person have higher minimum legal drinking ages. They interpret this to mean that states adjust their legal drinking ages upwards when people drive a lot. 
# # Run a simple linear model with "mlda" as the outcome and "avgmiles" as the predictor. Present the parameter estimates and their 89% hdis and interpret each of them. Plot the predicted relationship between avgmiles and mlda along with the 89% hdi for the mean and the 89% hdi for the predictions. # # Do you come to the same conclusion as your colleague? # ### Q7 # You ask your colleague how they specified their model. They tell you they controlled for the traffic fatality rate, trafficmort. Rerun the previous model adding trafficmort as a control. Once again, present the parameter estimates and interpret them, and plot the predicted relationship between avgmiles and mlda along with the 89% hdi for the mean and the 89% hdi for the predictions. What is different in this model? # # Think about the causal relationship between traffic fatality rates, average driving rates, and legal drinking age. Do you think your colleague's model specification was appropriate? Why or why not? What do you conclude about the true relationship between driving rates and the drinking age? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Multi Class Text Classification for Yapı Kredi FAQs import pandas as pd import requests # + df = pd.read_csv("labeled_qnas.csv",index_col = 0) df.head(10) # - col = ['Categories', 'Q&A'] df = df[col] df.head() df.columns = ['Category', 'QnA'] df["Category_Id"] = df['Category'].factorize()[0] category_id_df = df[['Category', 'Category_Id']].drop_duplicates().sort_values('Category_Id') category_to_id = dict(category_id_df.values) id_to_category = dict(category_id_df[['Category_Id', 'Category']].values) category_id_df category_id_df.to_csv('categories.csv', index= False) category_to_id #df.to_csv('dfQuestions.csv', index= False) df.head() import matplotlib.pyplot as plt fig = plt.figure(figsize=(15,9)) df.groupby('Category').QnA.count().plot.bar(ylim=0) plt.show() df.shape # label = Q&A
    # target = CATEGORY # ## Text Representation # def processInput(raw_inp): emptyList = [] PARAMS = {'rawInput':raw_inp} response = requests.get("http://localhost:8080/nlpPipeLine/singleInput", params = PARAMS) #print("Response Code: ", response.status_code,"\n") if response.status_code == requests.codes.ok: text = response.text input_tokens = text.rstrip("]").lstrip("[").replace('"','').split(",") return input_tokens else: raise Exception("WARNING EXCEPTION OCCURED processInput()") qnas = df["QnA"] processed_qnas = [] for q in qnas: temp = q.split("\n") raw_qna = " ".join(temp) processed_tokens = processInput(raw_qna) processed_qna = " ".join(processed_tokens) processed_qnas.append(processed_qna) #Tf idf vectors bag of words model from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(sublinear_tf=True, ngram_range=(1, 2), max_df=1.0, min_df = 0, use_idf = True) bow = vectorizer.fit_transform(processed_qnas) bow.toarray().shape freqs = [(word, bow.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()] results = sorted (freqs, key = lambda x: -x[1]) print("Number of unique words (unigrams + bigrams) (size of dictionary):", len(vectorizer.vocabulary_.items())) # + feature_names = vectorizer.get_feature_names() corpus_index = [n for n in processed_qnas] df_tfidf = pd.DataFrame(bow.todense(), columns=feature_names) #( num of questions , size of vocab) print(df_tfidf.shape) df_tfidf.head() # - features = bow.toarray() labels = df.Category_Id #The training input samples #[n_samples, n_features] print(features.shape) #The target values (class labels in classification) #[n_samples] labels.shape # ## Machine Learning Models # + from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.svm import LinearSVC #The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples from sklearn.svm import SVC from sklearn.model_selection import cross_val_score models = [ RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0), LinearSVC(), MultinomialNB(), LogisticRegression(random_state=0), ] CV = 5 cv_df = pd.DataFrame(index=range(CV * len(models))) entries = [] for model in models: model_name = model.__class__.__name__ accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV) for fold_idx, accuracy in enumerate(accuracies): entries.append((model_name, fold_idx, accuracy)) cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy']) # - cv_df # ### Accuracy Scores cv_df.groupby('model_name').accuracy.mean() # ### Support Vector Machines # + model = LinearSVC() X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=2) model.fit(X_train, y_train) y_pred = model.predict(X_test) # - import seaborn as sns # + from sklearn.metrics import confusion_matrix conf_mat = confusion_matrix(y_test, y_pred) fig, ax = plt.subplots(figsize=(15,9)) sns.heatmap(conf_mat, annot=True, fmt='d', xticklabels=category_id_df.Category.values, yticklabels=category_id_df.Category.values) plt.ylabel('Actual') plt.xlabel('Predicted') plt.show() # - conf_mat category_id_df # + from IPython.display import display for 
predicted in category_id_df.Category_Id: for actual in category_id_df.Category_Id: if conf_mat[0].size > actual and conf_mat[0].size > predicted: if predicted != actual and conf_mat[actual, predicted] >= 3: print("'{}' predicted as '{}' : {} examples.".format(id_to_category[actual], id_to_category[predicted], conf_mat[actual, predicted])) display(df.loc[indices_test[(y_test == actual) & (y_pred == predicted)]][['Category', 'QnA']]) print('') # - from sklearn import metrics ##Last 2 category did not appear in the test print(metrics.classification_report(y_test, y_pred, target_names=df['Category'].unique()[:conf_mat[0].size])) # ## Feature Selection
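# Before inspecting the chi-squared scores cell by cell below, here is a compact sketch
# (not from the original notebook; the pipeline name is illustrative, the k value mirrors the
# SelectKBest call further down) of how the same chi2-based selection could be chained with
# the TF-IDF vectorizer and LinearSVC used above.
# +
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

chi2_pipeline_sketch = Pipeline([
    ("tfidf", TfidfVectorizer(sublinear_tf=True, ngram_range=(1, 2))),  # same settings as earlier
    ("select", SelectKBest(chi2, k=150)),  # keep the 150 highest-scoring n-grams
    ("clf", LinearSVC()),
])
# chi2_pipeline_sketch.fit(processed_qnas, df.Category_Id)  # same inputs as the cells above
# -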
    from sklearn.feature_selection import chi2 from sklearn.feature_selection import SelectKBest import numpy as np ## Before selection features.shape N = 3 for category, category_id in sorted(category_to_id.items()): features_chi2 = chi2(features, labels == category_id) indices = np.argsort(features_chi2[0]) feature_names = np.array(vectorizer.get_feature_names())[indices] unigrams = [v for v in feature_names if len(v.split(' ')) == 1] bigrams = [v for v in feature_names if len(v.split(' ')) == 2] print("# '{}':".format(category)) print(" . Most correlated unigrams:\n . {}".format('\n . '.join(unigrams[-N:]))) print(" . Most correlated bigrams:\n . {}".format('\n . '.join(bigrams[-N:]))) # + #Compare Chi-Squared Statistics #Select 5 features with highest chi-squared statistics chi2_selector = SelectKBest(chi2, k=150) X_kbest = chi2_selector.fit_transform(features, labels) # Show results print('Original number of features:', features.shape) print('Reduced number of features:', X_kbest.shape) # - # # # 21.08.2019 # İstanbul/Turkey # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import seaborn as sns from pydataset import data import matplotlib as plt # Paper by: # # https://vita.had.co.nz/papers/tidy-data.pdf # # ### Data cleaning: # # tidy data != clean data # # # - outlier checking # - date parsing # - missing value imputation etc. # - data tidying: structuring datasets to facilitate analysis. # # # ### Data semantics: tidy data # - Value: every value belongs to a variable and an observation. # - Variable: a variable contains all values that measure the same underlying attribute (like height, temperature, duration) across units. # - Observation: an observation contains all values measured on the same unit (like a person, or a day, or a race) across attributes. # #### CSV data for lesson and exercises: # # - https://classroom.google.com/u/0/w/MjI5NDAwNDI2NTg5/tc/MjI5NDAwNDI2NjAy # lets look at this data: treatments = pd.read_csv('untidy-data/treatment.csv') treatments #rename columns treatments.columns = ['name', 'a', 'b', 'c'] treatments # restructure data using 'melt' treatments = treatments.melt(id_vars=['name'], var_name='treatment', value_name='response') treatments # ### Tidy data : # - Each variable forms a column. # - Each observation forms a row. # - Each cell has a single value. # - data is tabular, i.e. made up of rows and columns # Examples of tidy-data tips = data('tips') tips # #### General Ideas # - If the units are the same, maybe they should be in the same column. # - If one column has measurements of different units, it should be spread out # - Should you be able to groupby some of the columns? combine them # - Can I pass this data to seaborn? # # - Can we ask interesting questions and answer them with a groupby? i.e. generally we don't want to be taking row or column averages. # # ### How to deal with 'messy' data # # ##### Reshaping data: # # - Wide data --> Long data format (Melt) # - Long data --> Wide Data format (pivot) # #### 1. Messy data: Column headers are values, not variable names. df = pd.read_csv('untidy-data/pew.csv') # look at info df.info() # look at the head df.head() # melt the data. 
pd.melt(df, id_vars = 'religion', var_name = 'income', value_name = 'count') # #### pd.melt arguments # - id_vars = columns you want to keep (not melt) # - var_name = name of new column you created by melting columns # - value_name = column name for values # #### Another example: one variable stored across multiple columns billboard = pd.read_csv('untidy-data/billboard.csv', encoding= 'unicode_escape') billboard.shape billboard.head() billboard_long = pd.melt(billboard, id_vars = ['year', 'artist', 'track', 'time', 'genre', 'date.peaked', 'date.entered'], var_name = 'week', value_name = 'rating') billboard_long billboard_long.groupby('track').rating.mean() # #### 2. Messy data: Multiple variables are stored in one column. df = pd.DataFrame({ 'name': ['Sally', 'Jane', 'Billy', 'Suzy'], 'pet': ['dog: max', 'dog: buddy', 'cat: grizabella', 'hamster: fred'] }) df df[['pet_species', 'pet_name']] = df.pet.str.split(':', expand = True) df df = df.drop(columns = 'pet', implace = True) df # #### Messy data: Variables are stored in both rows and columns weather = pd.read_csv('untidy-data/weather.csv') weather.head() weather_melt = weather.melt(id_vars = ['id', 'year','month','element'], var_name = 'day', value_name = 'temp') weather_melt weather_tidy = weather_melt.pivot_table(index = ['id','year','month','day'], columns = 'element', values = 'temp') weather_tidy df = weather_tidy.reset_index() df # #### pd.pivot_table arguments # - Index = columns you want to keep (not pivot) # - columns = column you want to pivot # - value_name = values we want to populate in the new columns # - aggfunct = how you want tp aggregate the duplicate rows # #### Bit more complex example sales = pd.read_csv('untidy-data/sales.csv') sales sales_melt = sales.melt(id_vars = 'Product') sales_melt sales_melt[['year', 'measure']] =sales_melt.variable.str.split(' ', expand = True) sales_melt = sales_melt.drop(columns = 'variable') sales_melt.head() sales_tidy = sales_melt.pivot_table(index = ['Product', 'year'], columns = 'measure', values = 'value') sales_tidy = sales_tidy.reset_index sales_tidy() # + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # + # Exercises # + # Do your work for this exercise in a jupyter notebook or python script named tidy_data. # Save this work in your classification-exercises repo. Then add, commit, and push your changes. # + #1) Load the attendance.csv file and calculate an attendnace percentage for each student. # One half day is worth 50% of a full day, and 10 tardies is equal to one absence. # You should end up with something like this: #name #Billy 0.5250 #Jane 0.6875 #John 0.9125 #Sally 0.7625 #Name: grade, dtype: float64 # - attendance = pd.read_csv('untidy-data/attendance.csv') attendance attendance_melt = attendance.melt(id_vars = 'Unnamed: 0', var_name = 'date', value_name = 'attendence') attendance_melt attendance_melt['attendence'] = attendance_melt['attendence'].str.replace('P','0') attendance_melt['attendence'] = attendance_melt['attendence'].str.replace('H','0.50') attendance_melt['attendence'] = attendance_melt['attendence'].str.replace('T','0.10') attendance_melt['attendence'] = attendance_melt['attendence'].str.replace('A','1') attendance_melt attendance_melt["attendence"] = pd.to_numeric(attendance_melt["attendence"]) attendance_melt = attendance_melt.rename(columns ={'Unnamed: 0': 'name'}) att = attendance_melt.groupby('name').attendence.mean() - 1 att.abs() #2) Read the coffee_levels.csv file. 
# Transform the data so that each carafe is in it's own column. # Is this the best shape for the data? coffee = pd.read_csv('untidy-data/coffee_levels.csv') coffee coffee_melt = coffee.melt(id_vars = ['hour', 'coffee_amount']) coffee_melt coffee_tidy = coffee_melt.pivot_table(index = 'hour', columns = 'value', values = 'coffee_amount') coffee_tidy coffee_tidy = coffee_tidy.reset_index coffee_tidy() # + #3) Read the cake_recipes.csv data. This data set contains cake tastiness scores for combinations of different recipes, oven rack positions, and oven temperatures. # Tidy the data as necessary. # Which recipe, on average, is the best? recipe b # Which oven temperature, on average, produces the best results? 275 # Which combination of recipe, rack position, and temperature gives the best result? recipe b, bottom rack, 300 degrees # - cake = pd.read_csv('untidy-data/cake_recipes.csv') cake cake = cake.rename(columns ={'recipe:position': 'rp'}) cake[['recipe', 'position']] = cake.rp.str.split(':', expand = True) cake cake = cake.drop(columns = ['rp']) cake = cake.drop(columns = ['bottom','top']) cake cake_melt = cake.melt(id_vars = ['recipe','position']) cake_melt cake_melt = cake_melt.rename(columns ={'variable': 'oven_temp'}) cake_melt = cake_melt.rename(columns ={'value': 'score'}) cake_melt cake_melt.groupby('recipe').score.mean() cake_melt.groupby('oven_temp').score.mean() cake_melt.groupby('score').max() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: nemesys # language: python # name: nemesys # --- # # Neural Memory System - PyTorchAnalyserLSTM demo # ## Environment setup import os from pathlib import Path CURRENT_FOLDER = Path(os.getcwd()) # + CD_KEY = "--ANALYSER_LSTM_DEMO_IN_ROOT" if ( CD_KEY not in os.environ or os.environ[CD_KEY] is None or len(os.environ[CD_KEY]) == 0 or os.environ[CD_KEY] == "false" ): # %cd -q ../../.. 
ROOT_FOLDER = Path(os.getcwd()).relative_to(os.getcwd()) CURRENT_FOLDER = CURRENT_FOLDER.relative_to(ROOT_FOLDER.absolute()) os.environ[CD_KEY] = "true" # - print(f"Root folder: {ROOT_FOLDER}") print(f"Current folder: {CURRENT_FOLDER}") # ## Modules # + import torch import torch.nn from nemesys.modelling.analysers.modules.pytorch_analyser_lstm import PyTorchAnalyserLSTM # - torch.set_printoptions(sci_mode=False) # ## Analyser setup # + class_names = ("class1", "class2") input_size = 128 output_size = 4 batch_first = True # - analyser = PyTorchAnalyserLSTM( class_names=class_names, input_size=input_size, hidden_size=output_size, batch_first=batch_first, ) # ## Data setup batch_size = 4 sequence_length = 4 data_batch = torch.normal(mean=0, std=1, size=(batch_size, sequence_length, input_size)) # ## Results result = analyser(data_batch) for class_name, value in result.items(): print(f"{class_name}:") print(value) print() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import keras import os from params import get_params from sklearn import preprocessing import sklearn.preprocessing import numpy as np import matplotlib.pyplot as plt from keras.models import Model from keras.preprocessing import image from keras.applications.vgg19 import VGG19 from keras.applications.vgg19 import preprocess_input from PIL import Image, ImageOps import pickle # - descriptors_val = pickle.load(open("save_val.p", "rb")) descriptors_train = pickle.load(open("save_train.p", "rb")) dic_val = pickle.load(open("save_dic_val.p", "rb")) dic_train = pickle.load(open("save_dic_train.p", "rb")) imagen = pickle.load(open("save_img.p", "rb")) imagen3 = pickle.load(open("save_img3.p", "rb")) x_train = np.reshape(descriptors_train, (1194,4096)) mp=1 x_val = np.reshape(descriptors_val[mp], (1,4096)) x_train = sklearn.preprocessing.normalize(x_train, norm='l2', axis=1, copy=True, return_norm=False) x_val = sklearn.preprocessing.normalize(x_val, norm='l2', axis=1, copy=True, return_norm=False) descriptors_traint = x_train.transpose() similarities=np.matmul(x_val,descriptors_traint) ranks = np.argsort(similarities, axis=1)[:,::-1] # get the original images for visualization x_train_images = [] x_val_images = [] x_val_images.append(np.array(imagen[mp])) b = 0 for b in range(1194): x_train_images.append(np.array(imagen3[b])) h,w = (224, 224) new_image= Image.new('RGB', (h*6,w*1)) # Visualize ranks of the 10 queries offset = 10 # it will show results from query #'offset' to #offset+10 relnotrel = [] for q in range(1): ranks_q = ranks[q*(offset+1),:] relevant = dic_val[mp] for i in range(1194): if relevant == dic_train[ranks_q[i]]: new_image.paste(ImageOps.expand(Image.fromarray(x_train_images[ranks_q[i]]), border=10, fill='green'), (h*(1+i),w*q)) relnotrel.append(1) else: new_image.paste(ImageOps.expand(Image.fromarray(x_train_images[ranks_q[i]]), border=10, fill='red'), (h*(1+i),w*q)) relnotrel.append(0) # visualize query ima_q = Image.fromarray(x_val_images[0]) ima_q = ImageOps.expand(ima_q, border=10, fill='blue') new_image.paste(ima_q, (0,w*q)) plt.imshow(new_image) plt.axis('off') plt.show() new_image.save('imagen1.png') accu = 0 accu_final=0 numRel = 0 graphic = [] for k in range(len(relnotrel)): # If the value is 1 if relnotrel[k] == 1: # We add 1 to the number of correct instances numRel = numRel + 1 # We calculate the precision at k (+1 because we 
start at 0) # and we accumulate it accu += float( numRel )/ float(k+1) if numRel == 3: accu_final = accu num = k graphic.append(float( numRel )/ float(k+1)) accu_final / float(3)*100 plt.plot(graphic) Recall = 0 numRel2 = 0 len_final = 0 graphic2 = [] for k2 in range(len(relnotrel)): # If the value is 1 if relnotrel[k2] == 1: # We add 1 to the number of correct instances numRel2 = numRel2 + 1 if numRel2 == 3: Recall = float( numRel2 )/ float(numRel) graphic2.append(float( numRel2 )/ float(numRel)) plt.plot(graphic2) Recall * 100 graphic[0] = 1.0 graphic2[0] = 1.0 plt.plot(graphic2[1:(num+1)],graphic[1:(num+1)]) plt.title('Precision-Recall') plt.xlabel('Recall') plt.ylabel('Precision') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # MAT281 - Laboratorios N°07 # # ## Objetivos del laboratorio # # * Reforzar conceptos básicos de regresión lineal. # ## Contenidos # # * [Problema 01](#p1) # # # ## I.- Problema 01 # # # # # # El **cuarteto de Anscombe** comprende cuatro conjuntos de datos que tienen las mismas propiedades estadísticas, pero que evidentemente son distintas al inspeccionar sus gráficos respectivos. # # Cada conjunto consiste de once puntos (x, y) y fueron construidos por el estadístico . El cuarteto es una demostración de la importancia de mirar gráficamente un conjunto de datos antes de analizarlos. # + import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline sns.set_palette("deep", desat=.6) sns.set(rc={'figure.figsize':(11.7,8.27)}) # - # cargar datos df = pd.read_csv(os.path.join("data","anscombe.csv"), sep=",") df.head() # Basado en la información presentada responda las siguientes preguntas: # # 1. Gráfique mediante un gráfico tipo **scatter** cada grupo. A simple vista, ¿ los grupos son muy distintos entre si?. # 2. Realice un resumen de las medidas estadísticas más significativas ocuapando el comando **describe** para cada grupo. Interprete. # 3. Realice un ajuste lineal para cada grupo. Además, grafique los resultados de la regresión lineal para cada grupo. Interprete. # 4. Calcule los resultados de las métricas para cada grupo. Interprete. # 5. Es claro que el ajuste lineal para algunos grupos no es el correcto. Existen varias formas de solucionar este problema (eliminar outliers, otros modelos, etc.). Identifique una estrategia para que el modelo de regresión lineal ajuste de mejor manera e implemente otros modelos en los casos que encuentre necesario. # ## Parte 1 ##DATOS DE ENTRADA grupo1=df[df['grupo']=='Grupo_1'] grupo2=df[df['grupo']=='Grupo_2'] grupo3=df[df['grupo']=='Grupo_3'] grupo4=df[df['grupo']=='Grupo_4'] x=grupo1['x'] x4=grupo4['x'] y1=grupo1['y'] y2=grupo2['y'] y3=grupo3['y'] y4=grupo4['y'] fig,axs=plt.subplots(2,2) axs[0,0].scatter(x,y1) axs[0,0].set_title('Grupo 1') axs[0,1].scatter(x,y2) axs[0,1].set_title('Grupo 2') axs[1,0].scatter(x,y3) axs[1,0].set_title('Grupo 3') axs[1,1].scatter(x4,y4) axs[1,1].set_title('Grupo 4') # Los datos varían mucho entre sí, por ejemplo el grupo 4 básicamente no necesita una regresión lineal porque es una recta vertical salvo por ese punto alejado, en la prática este último debería ser corregido o eliminado del dataframe. 
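# A minimal sketch of the correction suggested above (assuming the classic Anscombe values,
# where Grupo 4 has x = 8 everywhere except a single point at x = 19): drop that one outlying
# row from grupo4 before refitting.
grupo4_sin_outlier = grupo4[grupo4['x'] < grupo4['x'].max()]
grupo4_sin_outlier.shape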
# El grupo 2 y el 3, podrían ajustarse a una regresión polinómica (orden 2 o 3) y lineal respectivamente, el 3 tiene un punto muy alejado por lo que, omitilo o corregirlo es necesario si se buscar mejorar los parámetro de regresión. # El grupo 1, muestra gran variabilidad entre datos, pero visualmente se podría ajustar con una regresión lineal, sin problemas. # ## Parte 2 grupo1.describe() grupo2.describe() grupo3.describe() grupo4.describe() # el valor medio de x, es el mismo en los tres primeros, esto ocurre porque la columna consta de los mismos datos # Para el grupo 4, la media es 9 pero la mayoría de los datos es 8, esto ocurre por el dato extremo.(con esto el grupo 4 queda con la mayor media en el eje x) # Del mismo modo, la desviación estándar en x es mayor que en y, esto se justificar de igual forma que el anterior. # # ## Parte 3 # + #Gráfico x=grupo1['x'] x4=grupo4['x'] y1=grupo1['y'] y2=grupo2['y'] y3=grupo3['y'] y4=grupo4['y'] def fit(x): return 3 + 0.5 * x fig, axs = plt.subplots(2, 2, sharex=True, sharey=True) axs[0, 0].set(xlim=(0, 20), ylim=(2, 14)) axs[0, 0].set(xticks=(0, 10, 20), yticks=(4, 8, 12)) xfit = np.array([np.min(x), np.max(x)]) axs[0, 0].plot(x, y1, 'ks', xfit, fit(xfit), 'r-', lw=2) axs[0, 1].plot(x, y2, 'ks', xfit, fit(xfit), 'r-', lw=2) axs[1, 0].plot(x, y3, 'ks', xfit, fit(xfit), 'r-', lw=2) xfit = np.array([np.min(x4), np.max(x4)]) axs[1, 1].plot(x4, y4, 'ks', xfit, fit(xfit), 'r-', lw=2) for ax, label in zip(axs.flat, ['I', 'II', 'III', 'IV']): ax.label_outer() ax.text(3, 12, label, fontsize=20) plt.show() # - # Se podría concluir que la regresión lineal es buena solo para el III, # ## Parte 4 # + from sklearn import datasets from sklearn.model_selection import train_test_split # import some data to play with X = grupo1[['x']] y = grupo1['y'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.linear_model import LinearRegression model_rl = LinearRegression() # Creando el modelo. model_rl.fit(X_train, y_train) beta_0 = round(model_rl.intercept_,2) beta_1 = round(model_rl.coef_[0],2) Y_predict = model_rl.predict(X_test) from metrics_regression import * from sklearn.metrics import r2_score # ejemplo df_temp = pd.DataFrame( { 'y':y_test, 'yhat': model_rl.predict(X_test) } ) df_metrics = summary_metrics(df_temp) df_metrics['r2'] = round(r2_score(y_test, model_rl.predict(X_test)),4) print(df_metrics) # + from sklearn import datasets from sklearn.model_selection import train_test_split # import some data to play with X = grupo2[['x']] y = grupo2['y'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.linear_model import LinearRegression model_rl = LinearRegression() # Creando el modelo. model_rl.fit(X_train, y_train) beta_0 = round(model_rl.intercept_,2) beta_1 = round(model_rl.coef_[0],2) Y_predict = model_rl.predict(X_test) from metrics_regression import * from sklearn.metrics import r2_score # ejemplo df_temp = pd.DataFrame( { 'y':y_test, 'yhat': model_rl.predict(X_test) } ) df_metrics = summary_metrics(df_temp) df_metrics['r2'] = round(r2_score(y_test, model_rl.predict(X_test)),4) print(df_metrics) # + from sklearn import datasets from sklearn.model_selection import train_test_split # import some data to play with X = grupo3[['x']] y = grupo3['y'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.linear_model import LinearRegression model_rl = LinearRegression() # Creando el modelo. 
model_rl.fit(X_train, y_train) beta_0 = round(model_rl.intercept_,2) beta_1 = round(model_rl.coef_[0],2) Y_predict = model_rl.predict(X_test) from metrics_regression import * from sklearn.metrics import r2_score # ejemplo df_temp = pd.DataFrame( { 'y':y_test, 'yhat': model_rl.predict(X_test) } ) df_metrics = summary_metrics(df_temp) df_metrics['r2'] = round(r2_score(y_test, model_rl.predict(X_test)),4) print(df_metrics) # + from sklearn import datasets from sklearn.model_selection import train_test_split # import some data to play with X = grupo4[['x']] y = grupo4['y'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.linear_model import LinearRegression model_rl = LinearRegression() # Creando el modelo. model_rl.fit(X_train, y_train) beta_0 = round(model_rl.intercept_,2) beta_1 = round(model_rl.coef_[0],2) Y_predict = model_rl.predict(X_test) from metrics_regression import * from sklearn.metrics import r2_score # ejemplo df_temp = pd.DataFrame( { 'y':y_test, 'yhat': model_rl.predict(X_test) } ) df_metrics = summary_metrics(df_temp) df_metrics['r2'] = round(r2_score(y_test, model_rl.predict(X_test)),4) print(df_metrics) # - # ## Parte 5 # + X = grupo1[['x']] y = grupo1['y'] from sklearn import datasets, linear_model import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.preprocessing import PolynomialFeatures poli_reg = PolynomialFeatures(degree=2) X_train_poli=poli_reg.fit_transform(X_train) X_test_poli=poli_reg.fit_transform(X_test) pr = linear_model.LinearRegression() Y_predict = model_rl.predict(X_test) plt.scatter(X_test,y_test) plt.plot(X_test,Y_predict,color='red',linewidth=3) plt.show() # - # Se observa una mejora considerable en la aproximación # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: BASE # language: python # name: base # --- # # Acquiring Data from open repositories # # A crucial step in the work of a computational biologist is not only to analyse data, but acquiring datasets to analyse as well as toy datasets to test out computational methods and algorithms. The internet is full of such open datasets. Sometimes you have to sign up and make a user to get authentication, especially for medical data. This can sometimes be time consuming, so here we will deal with easy access resources, mostly of modest size. Multiple python libraries provide a `dataset` module which makes the effort to fetch online data extremely seamless, with little requirement for preprocessing. # # #### Goal of the notebook # Here you will get familiar with some ways to fetch datasets from online. We do some data exploration on the data just for illustration, but the methods will be covered later. # # # # Useful resources and links # # When playing around with algorithms, it can be practical to use relatively small datasets. A good example is the `datasets` submodule of `scikit-learn`. `Nilearn` (library for neuroimaging) also provides a collection of neuroimaging datasets. Many datasets can also be acquired through the competition website [Kaggle](https://www.kaggle.com), in which they describe how to access the data. 
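# As a quick taste of the `datasets`-submodule pattern described above (a minimal sketch;
# the notebook loads the digits and faces datasets properly further down), a toy dataset is
# one call away:
from sklearn import datasets
iris = datasets.load_iris()
print(iris.data.shape, iris.target.shape)  # (150, 4) and (150,)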
# # # ### Links # - [OpenML](https://www.openml.org/search?type=data) # - [Nilearn datasets](https://nilearn.github.io/modules/reference.html#module-nilearn.datasets) # - [Sklearn datasets](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets) # - [Kaggle](https://www.kaggle.com/datasets) # - [MEDNIST] # # - [**Awesomedata**](https://github.com/awesomedata/awesome-public-datasets) # # - We strongly recommend to check out the Awesomedata lists of public datasets, covering topics such as [biology/medicine](https://github.com/awesomedata/awesome-public-datasets#biology) and [neuroscience](https://github.com/awesomedata/awesome-public-datasets#neuroscience) # # - [Papers with code](https://paperswithcode.com) # # - [SNAP](https://snap.stanford.edu/data/) # - Stanford Large Network Dataset Collection # - [Open Graph Benchmark (OGB)](https://github.com/snap-stanford/ogb) # - Network datasets # - [Open Neuro](https://openneuro.org/) # - [Open fMRI](https://openfmri.org/dataset/) # import basic libraries import numpy as np import pandas as pd from matplotlib import pyplot as plt # We start with scikit-learn's datasets for testing out ML algorithms. Visit [here](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets) for an overview of the datasets. from sklearn.datasets import fetch_olivetti_faces, fetch_20newsgroups, load_breast_cancer, load_diabetes, load_digits, load_iris # Load the MNIST dataset (images of hand written digits) X,y = load_digits(return_X_y=True) y.shape X.shape #1797 images, 64 pixels per image # #### exercise 1. Make a function `plot` taking an argument (k) to visualize the k'th sample. # It is currently flattened, you will need to reshape it. Use `plt.imshow` for plotting. # # %load solutions/ex2_1.py def plot(k): plt.imshow(X[k].reshape(8,8), cmap='gray') plt.title(f"Number = {y[k]}") plt.show() plot(15); plot(450) faces = fetch_olivetti_faces() # #### Exercise 2. Inspect the dataset. How many classes are there? How many samples per class? Also, plot some examples. What do the classes represent? # + # # %load solutions/ex2_2.py # example solution. # You are not expected to make a nice plotting function, # you can simply call plt.imshow a number of times and observe print(faces.DESCR) # this shows there are 40 classes, 10 samples per class print(faces.target) #the targets i.e. classes print(np.unique(faces.target).shape) # another way to see n_classes X = faces.images y = faces.target fig = plt.figure(figsize=(16,5)) idxs = [0,1,2, 11,12,13, 40,41] for i,k in enumerate(idxs): ax=fig.add_subplot(2,4,i+1) ax.imshow(X[k]) ax.set_title(f"target={y[k]}") # looking at a few plots shows that each target is a single person. # - # Once you have made yourself familiar with the dataset you can do some data exploration with unsupervised methods, like below. The next few lines of code are simply for illustration, don't worry about the code (we will cover unsupervised methods in submodule F). from sklearn.decomposition import randomized_svd X = faces.data n_dim = 3 u, s, v = randomized_svd(X, n_dim) # Now we have factorized the images into their constituent parts. The code below displays the various components isolated one by one. 
def show_ims(ims): fig = plt.figure(figsize=(16,10)) idxs = [0,1,2, 11,12,13, 40,41,42, 101,101,103] for i,k in enumerate(idxs): ax=fig.add_subplot(3,4,i+1) ax.imshow(ims[k]) ax.set_title(f"target={y[k]}") for i in range(n_dim): my_s = np.zeros(s.shape[0]) my_s[i] = s[i] recon = u@np.diag(my_s)@v recon = recon.reshape(400,64,64) show_ims(recon) # Are you able to see what the components represent? It at least looks like the second component signifies the lightning (the light direction), the third highlights eyebrows and facial chin shape. from sklearn.manifold import TSNE tsne = TSNE(init='pca', random_state=0) trans = tsne.fit_transform(X) # + m = 8*10 # choose 4 people plt.figure(figsize=(16,10)) xs, ys = trans[:m,0], trans[:m,1] plt.scatter(xs, ys, c=y[:m], cmap='rainbow') for i,v in enumerate(zip(xs,ys, y[:m])): xx,yy,s = v #plt.text(xx,yy,s) #class plt.text(xx,yy,i) #index # - # Many people seem to have multiple subclusters. What is the difference between those clusters? (e.g. 68,62,65 versus the other 60's) # + ims = faces.images idxs = [68,62,65,66,60,64,63] #idxs = [9,4,1, 5,3] for k in idxs: plt.imshow(ims[k], cmap='gray') plt.show() # - def show(im): return plt.imshow(im, cmap='gray') import pandas as pd df= pd.read_csv('data/archive/covid_impact_on_airport_traffic.csv') df.shape df.describe() df.head() df.Country.unique() df.ISO_3166_2.unique() df.AggregationMethod.unique() # Here we will look at [OpenML](https://www.openml.org/) - a repository of open datasets free to explore data and test methods. # # ### Fetching an OpenML dataset # # We need to pass in an ID to access, as follows: from sklearn.datasets import fetch_openml # OpenML contains all sorts of datatypes. By browsing the website we found a electroencephalography (EEG) dataset to explore: data_id = 1471 #this was found by browsing OpenML dataset = fetch_openml(data_id=data_id, as_frame=True) dir(dataset) dataset.url type(dataset) print(dataset.DESCR) original_names = ['AF3', 'F7', 'F3', 'FC5', 'T7', 'P', 'O1', 'O2', 'P8', 'T8', 'FC6', 'F4', 'F8', 'AF4'] dataset.feature_names df = dataset.frame df.head() df.shape[0] / 117 # 128 frames per second df = dataset.frame y = df.Class #df.drop(columns='Class', inplace=True) df.dtypes # + #def summary(s): # print(s.max(), s.min(), s.mean(), s.std()) # print() # #for col in df.columns[:-1]: # column = df.loc[:,col] # summary(column) # - df.plot() # From the plot we can quickly identify a bunch of huge outliers, making the plot look completely uselss. We assume these are artifacts, and remove them. df2 = df.iloc[:,:-1].clip_upper(6000) df2.plot() # Now we see better what is going on. Lets just remove the frames corresponding to those outliers frames = np.nonzero(np.any(df.iloc[:,:-1].values>5000, axis=1))[0] frames df.drop(index=frames, inplace=True) df.plot(figsize=(16,8)) plt.legend(labels=original_names) df.columns # ### Do some modelling of the data from sklearn.linear_model import LogisticRegression lasso = LogisticRegression(penalty='l2') X = df.values[:,:-1] y = df.Class y = y.astype(np.int) - 1 # map to 0,1 print(X.shape) print(y.shape) lasso.fit(X,y) comp = (lasso.predict(X) == y).values np.sum(comp.astype(np.int))/y.shape[0] # shitty accuracy lasso.coef_[0].shape names = dataset.feature_names original_names # + coef = lasso.coef_[0] plt.barh(range(coef.shape[0]), coef) plt.yticks(ticks=range(14),labels=original_names) plt.show() # - # Interpreting the coeficients: we naturally tend to read the magnitude of the coefficients as feature importance. 
That is a fair interpretation, but currently we did not scale our features to a comparable range prior to fittting the model, so we cannot draw that conclusion. # ### Extra exercise. Go to [OpenML](https://openml.org) and use the search function (or just look around) to find any dataset that interest you. Load it using the above methodology, and try to do anything you can to understand the datatype, visualize it etc. # + ### YOUR CODE HERE # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tensorflow] # language: python # name: conda-env-tensorflow-py # --- # # Data Exploration # # This is a notebook to explore the pickle files saved out by the `convert_data.py` script. We'll sanity check all the pickle files, by loading in the image files and displaying them with their labels. # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pickle plt.style.use('fivethirtyeight') # plt.rcParams['font.family'] = 'serif' plt.rcParams['font.serif'] = 'Helvetica' plt.rcParams['font.monospace'] = 'Consolas' plt.rcParams['font.size'] = 16 plt.rcParams['axes.labelsize'] = 16 plt.rcParams['axes.labelweight'] = 'bold' plt.rcParams['xtick.labelsize'] = 14 plt.rcParams['ytick.labelsize'] = 14 plt.rcParams['legend.fontsize'] = 16 plt.rcParams['figure.titlesize'] = 20 plt.rcParams['lines.linewidth'] = 2 # %matplotlib inline # for auto-reloading external modules # %load_ext autoreload # %autoreload 2 # - # ## Loading pickle files # # The `convert_data.py` converts the ubyte format input files into numpy arrays. These arrays are then saved out as pickle files to be quickly loaded later on. The shape of the numpy arrays for images and labels are: # # * Images: (N, rows, cols) # * Labels: (N, 1) # # + # Set up the file directory and names DIR = '../input/' X_TRAIN = DIR + 'train-images-idx3-ubyte.pkl' Y_TRAIN = DIR + 'train-labels-idx1-ubyte.pkl' X_TEST = DIR + 't10k-images-idx3-ubyte.pkl' Y_TEST = DIR + 't10k-labels-idx1-ubyte.pkl' print('Loading pickle files') X_train = pickle.load( open( X_TRAIN, "rb" ) ) y_train = pickle.load( open( Y_TRAIN, "rb" ) ) X_test = pickle.load( open( X_TEST, "rb" ) ) y_test = pickle.load( open( Y_TEST, "rb" ) ) n_train = X_train.shape[0] n_test = X_test.shape[0] print('Train images shape {}, labels shape {}'.format(X_train.shape, y_train.shape)) print('Test images shape {}, labels shape {}'.format(X_test.shape, y_test.shape)) # - # ## Sample training images with labels # # Let's show a few of the training images with the corresponding labels, so we can sanity check that the labels match the numbers, and the images themselves look like actual digits. # + # Check a few training values at random as a sanity check def show_label_images(X, y): '''Shows random images in a grid''' num = 9 images = np.random.randint(0, X.shape[0], num) print('Showing training image indexes {}'.format(images)) fig, axes = plt.subplots(3,3, figsize=(6,6)) for idx, val in enumerate(images): r, c = divmod(idx, 3) axes[r][c].imshow(X[images[idx]]) axes[r][c].annotate('Label: {}'.format(y[val]), xy=(1, 1)) axes[r][c].xaxis.set_visible(False) axes[r][c].yaxis.set_visible(False) show_label_images(X_train, y_train) # - # ## Sample test images with labels # # Now we can check the test images and labels by picking a few random ones, and making sure the images look reasonable and they match their labels. 
# Now do the same for the training dataset show_label_images(X_test, y_test) # # Training label distribution y_train_df = pd.DataFrame(y_train, columns=['class']) y_train_df.plot.hist(legend=False) hist_df = pd.DataFrame(y_train_df['class'].value_counts(normalize=True)) hist_df.index.name = 'class' hist_df.columns = ['train'] # The class distribution is pretty evenly split between the classes. 1 is the most popular class with 11.24% of instances, and at the other end 5 is the least frequent class, with 9.04% of instances # Test label distribution y_test_df = pd.DataFrame(y_test, columns=['class']) y_test_df.plot.hist(legend=False, bins=10) test_counts = y_test_df['class'].value_counts(normalize=True) hist_df['test'] = test_counts # The distribution looks very similar between training and test datasets. hist_df['diff'] = np.abs(hist_df['train'] - hist_df['test']) hist_df.sort_values('diff', ascending=False)['diff'].plot.bar() # The largest difference is 0.0040% in the number 2 class. # Final quick check of datatypes assert X_train.dtype == np.uint8 assert y_train.dtype == np.uint8 assert X_test.dtype == np.uint8 assert y_test.dtype == np.uint8 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Final model # # Train the best model on the bigger dataset and evaluate once more. # Imports import findspark findspark.init() findspark.find() import pyspark # Imports for creating spark session from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession conf = pyspark.SparkConf().setAppName('sparkify-capstone-model').setMaster('local') sc = pyspark.SparkContext(conf=conf) spark = SparkSession(sc) # Imports for modelling, tuning and evaluation from pyspark.ml.classification import GBTClassifier from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder # Imports for visualization and output import matplotlib.pyplot as plt from IPython.display import HTML, display # Read in dataset conf.set("spark.driver.maxResultSize", "0") path = "out/features.parquet" df = spark.read.parquet(path) def createSubset(df, factor): """ INPUT: df: The dataset to split factor: How much of the dataset to return OUTPUT: df_subset: The split subset """ df_subset, df_dummy = df.randomSplit([factor, 1 - factor]) return df_subset # + def printConfusionMatrix(tp, fp, tn, fn): """ Simple function to output a confusion matrix from f/t/n/p values as html table. INPUT: data: The array to print as table OUTPUT: Prints the array as html table. """ html = "" html += "".format(tp, fp) html += "".format(fn, tn) html += "
               | Act. True | False
    Pred. Pos. |    {}     |  {}
    Negative   |    {}     |  {}
    " display(HTML(html)) def showEvaluationMetrics(predictions): """ Calculate and print the some evaluation metrics for the passed predictions. INPUT: predictions: The predictions to evaluate and print OUTPUT: Just prints the evaluation metrics """ # Calculate true, false positives and negatives to calculate further metrics later: tp = predictions[(predictions.churn == 1) & (predictions.prediction == 1)].count() tn = predictions[(predictions.churn == 0) & (predictions.prediction == 0)].count() fp = predictions[(predictions.churn == 0) & (predictions.prediction == 1)].count() fn = predictions[(predictions.churn == 1) & (predictions.prediction == 0)].count() printConfusionMatrix(tp, fp, tn, fn) # Calculate and print metrics f1 = MulticlassClassificationEvaluator(labelCol = "churn", metricName = "f1") \ .evaluate(predictions) accuracy = float((tp + tn) / (tp + tn + fp + fn)) recall = float(tp / (tp + fn)) precision = float(tp / (tp + fp)) print("F1: ", f1) print("Accuracy: ", accuracy) print("Recall: ", recall) print("Precision: ", precision) def printAUC(predictions, labelCol = "churn"): """ Print the area under curve for the predictions. INPUT: predictions: The predictions to get and print the AUC for OUTPU: Prints the AUC """ print("Area under curve: ", BinaryClassificationEvaluator(labelCol = labelCol).evaluate(predictions)) # - def undersampleNegatives(df, ratio, labelCol = "churn"): """ Undersample the negatives (0's) in the given dataframe by ratio. NOTE: The "selection" method here is of course very crude and in a real version should be randomized and shuffled. INPUT: df: dataframe to undersample negatives from ratio: Undersampling ratio labelCol: LAbel column name in the input dataframe OUTPUT: A new dataframe with negatives undersampled by ratio """ zeros = df.filter(df[labelCol] == 0) ones = df.filter(df[labelCol] == 1) zeros = createSubset(zeros, ratio) return zeros.union(ones) def gbtPredictions(df_train, df_test, maxIter = 10, labelCol = "churn", featuresCol = "features"): """ Fit, evaluate and show results for GBTClassifier INPUT: df_train: The training data set. df_test: The testing data set. maxIter: Number of maximum iterations in the gradeint boost. labelCol: The label column name, "churn" by default. featuresCol: The label column name, "features" by default. 
OUTPUT: predictions: The model's predictions """ # Fit and train model gbt = GBTClassifier(labelCol = labelCol, featuresCol = featuresCol, maxIter = maxIter).fit(df_train) return gbt.transform(df_test) # + df_train, df_test = df.randomSplit([0.9, 0.1]) gbt = GBTClassifier(labelCol = "churn", featuresCol = "features", maxIter = 120, maxDepth = 5).fit(undersampleNegatives(df_train, .7)) predictions = gbt.transform(df_test) showEvaluationMetrics(predictions) printAUC(predictions) # - gbt.save("out/model") # Output the notebook to an html file from subprocess import call call(['python', '-m', 'nbconvert', 'final-model.ipynb']) # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.0 # language: julia # name: julia-0.6 # --- #Random number generation, mean and variance srand(0); N=100; # Uniform [0,1] X=rand(N,1); N=100; sigma2=2; # Gaussian rv (zero-mean, variance sigma2 X=sqrt(sigma2)*randn(N,1); mu=mean(X); s2= var(X); # + # Gaussian random vectors, covariance and correlation matrices ====== N=10000; mu=zeros(3,1); C = [4 0 1; 0 5 0; 1 0 2]; X = mvnrnd(mu,C,N); CHat = cov(X); RHat = corrcoef(X); # Autocovariance function (biased, unbiased, non-normalized, normalized)== N = 300; nlags = N-1; variance = 3; sigma = sqrt(variance); X = sigma*randn(N,1); figure(1) subplot(311) [acf1,lags]=xcov(X,nlags,'biased'); plot(lags,acf1,'k') ylabel('Autocovariance (biased)') axis tight subplot(312) [acf2,lags]=xcov(X,nlags,'unbiased'); plot(lags,acf2,'k') ylabel('Autocovariance (unbiased)') axis tight subplot(313) [acf3,lags]=xcov(X,nlags,'coeff'); plot(lags,acf3,'k') xlabel('Lag') ylabel('Autocorrelation') axis tight # Solution of linear systems of equations ============== N = 1000; p = 3; mu = 0.5; B = [.8 -2 .3]'; X = randn(N,p); Y = mu + X*B + randn(N,1); X = [ones(N,1) X]; BHat = inv(X'*X) * (X'*Y) #Via matrix inversion BHat = X \ Y #Via Matlab backslash operator [Q,R] = qr(X,0); #Via QR decomposition (R is upper triangular) BHat = R\(Q'*Y) # Univariate AR(p) process simulation and estimation ============== clear all N = 1000; X = zeros(N,1); a = .78; variance = .25; sigma = sqrt(variance); for t=2:N X(t) = a*X(t-1) + sigma*randn; end figure(1),clf,set(gcf,'color',[1 1 1]) plot(1:N,X,'k') xlabel('Time t','fontsize',12),ylabel('X_t','fontsize',12) box off # Estimation via conditional MLE (Ordinary Least-Squares - OLS) ====== Y=X(2:end); Z=X(1:end-1); aHat = Y\Z; # Estimation via the Burg maximum entropy method ============ [H,sigma2Hat,K] = arburg(X,3); aHat = -H(2:end) # Convolution ========================================== # Simple convolution example: flip, shift, multiply and sum (dot product) # Note: for large sequences, convolutions are preferably implemented as multiplication in the # Fourier domain. 
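# The note above points out that long convolutions are usually computed as a pointwise
# product in the Fourier domain. A short NumPy check of that equivalence (Python, given as
# an aside -- it is not part of the original MATLAB-style listing):
# +
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # input sequence
h = np.ones(10)                 # impulse response (a simple moving-average filter)

direct = np.convolve(x, h)      # time-domain convolution, length N + M - 1
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)   # zero-pad, multiply spectra

print(np.allclose(direct, via_fft))   # True, up to floating-point error
# -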
clear all, close all N=1000; h=ones(10,1); h=flip(h); m=length(h); E=randn(N,1); E = [zeros(m,1);E]; X=zeros(N+m,1); j=0; for k = m:N+m j=j+1; X(k) = h'*E(j:j+m-1); end X=X(m+1:end);# In this example and in Homework 1, we do not normalize X E=E(m+1:end); figure(1),clf,set(gcf,'color',[1 1 1]) plot(1:N,E,'k',1:N,X,'r') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [env] # language: python # name: Python [env] # --- import _init_paths from fast_rcnn.train import get_training_roidb, train_net from fast_rcnn.config import cfg, cfg_from_file, cfg_from_list, get_output_dir from datasets.factory import get_imdb import datasets.imdb import caffe import argparse import pprint import numpy as np import sys import zl_config as C # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import drms import json, numpy as np, matplotlib.pylab as plt, matplotlib.ticker as mtick import urllib from astropy.io import fits from skimage.transform import resize import drms c = drms.Client() c.series(r'hmi\.sharp_') # Set a series si = c.info('hmi.sharp_cea_720s') # #### You only need to change the date and path in the name in the cells below date = '2010.01.01 - 2014.01.01' #change this to your desired time period keys = c.query('hmi.sharp_cea_720s[][{}][? (CRLN_OBS < 25) AND (CRLT_OBS < 25) AND (USFLUX > 4e21) ?]'.format(date), key='T_REC, HARPNUM, USFLUX, CRLT_OBS, CRLN_OBS, AREA_ACR') # + harpnums = keys['HARPNUM'].unique() for harpnum in harpnums: i = np.where(keys['HARPNUM'] == harpnum)[0] t_rec = keys['T_REC'][i] t_rec = t_rec[::8] #we select every 8th element, so we have time intervals of 12*8=96 minutes as of right now for t in t_rec: hmi_query_string = 'hmi.sharp_cea_720s[{}][{}]'.format(harpnum, t) keys_temp, segments = c.query(hmi_query_string, key='T_REC, USFLUX, ERRVF', seg='Br') url = 'http://jsoc.stanford.edu' + segments.Br[0] photosphere_image = fits.open(url) img = photosphere_image[1].data res = resize(img, (128, 128)) name = t + '_{}'.format(harpnum) name = name.replace(':', '_') name = name.replace('.', '_') path = '/Volumes/L7aler_HD 1/ADL_Project/AR_data/new_SHARP_Data/2010-2014/{}.npy'.format(name) #change this to your path with open(path, 'wb') as f: np.save(f, res) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + """ KNN from scratch """ from sklearn import datasets from sklearn.cross_validation import train_test_split # from sklearn.model_selection import train_test_split # - data = datasets.load_iris() X = data.data y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123) # Input : list def euclideanDistance(l1, l2): if (len(l1) == len(l2)): dist = 0 for i in range(len(l1)): dist += (l1[i] - l2[i])**2 return dist**0.5 else: raise ValueError('Hehe, panjangnya beda gan') # + # Calculate Neighbors Distance k = 5 # number ok k y_test_predict = [] # target yang diprediksi for x_unknown in X_test: neighbors_distance = [] for x_known in X_train: neighbors_distance.append(euclideanDistance(x_unknown, x_known)) neighbors_distance_sorted_with_target = 
sorted(zip(neighbors_distance, y_train)) k_nearest_neighbors_target = [items[1] for items in neighbors_distance_sorted_with_target[:k]] # mode of knn : classification y_test_predict.append(max(set(k_nearest_neighbors_target), key=k_nearest_neighbors_target.count)) print (y_test_predict) # - n_benar = 0 for i, prediksi in enumerate(y_test_predict): if (prediksi == y_test[i]): n_benar += 1 print(n_benar/len(y_test)) from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() knn.fit(X_train, y_train) knn.score(X_test,y_test) # + # REGRESSION # - data = datasets.load_boston() X = data.data y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123) # + from sklearn.metrics import mean_squared_error # Calculate Neighbors Distance k = 5 # number ok k y_test_predict = [] # target yang diprediksi for x_unknown in X_test: neighbors_distance = [] for x_known in X_train: neighbors_distance.append(euclideanDistance(x_unknown, x_known)) neighbors_distance_sorted_with_target = sorted(zip(neighbors_distance, y_train)) k_nearest_neighbors_target = [items[1] for items in neighbors_distance_sorted_with_target[:k]] # bedanyan hanya di sini # mean of knn : regression y_test_predict.append(round(sum(k_nearest_neighbors_target)/k, 2)) # print (y_test_predict) print (mean_squared_error(y_test_predict, y_test)) # - from sklearn.neighbors import KNeighborsRegressor knn_reg = KNeighborsRegressor() knn_reg.fit(X_train, y_train) y_predict = knn_reg.predict(X_test) mean_squared_error(y_predict, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The colors are at # * https://xkcd.com/color/rgb.txt import requests url = r"https://xkcd.com/color/rgb.txt" response = requests.get(url) lines = response.text.split('\n') license = lines[0] license d = dict_by_color_names = dict(spl[:2] for spl in (line.split('\t') for line in lines[1:]) if len(spl) >= 2) len(dict_by_color_names) d['red'], d['green'], d['blue'] import matplotlib for name, hex in matplotlib.colors.cnames.items(): print(name, hex) max(line.split('\t')[-1] for line in lines[1:]) min(len(line.split('\t')) for line in lines[1:]) [line for line in lines if len(line.split('\t')) == 2] lines[-2] len(d) d = dict(spl[:2] for spl in (line.split('\t') for line in response.text.split('\n')) if len(spl) >= 2) len(d) list(d)[:5] t1 = [line for line in response.text] t1[:5] len(response.text) len(response.text.split('\n')) print(response.text) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from os import listdir from os.path import isfile, join # + path_to_dataset = "../dataset" dataset_files = [ join(path_to_dataset, path) for path in listdir(path_to_dataset) if isfile(join(path_to_dataset, path)) ] print( dataset_files ) # + from gensim.parsing.preprocessing import preprocess_string from pymorphy2.utils import word_splits from os.path import getsize from math import floor import pymorphy2 import re class DatasetReader: def __init__(self, datasetPath): self._datasetPath = datasetPath self._currentText = "" self._morph = pymorphy2.MorphAnalyzer(lang='uk') self._ukrLetters = re.compile("^[абвгґдеєжзиіїйклмнопрстуфхцчшщьюя]*$", re.IGNORECASE) def __iter__(self): 
fileSize = getsize( self._datasetPath ) prevProgress = 0 with open(self._datasetPath, "r") as file: line = file.readline() while line: progress = floor(100 * file.tell() / fileSize) if prevProgress != progress: prevProgress = progress print(progress) self._currentText += line if self._tryPopSentence(): yield self._currentSentence try: line = file.readline() except: line = None def _tryPopSentence(self): sentenceEnd = self._currentText.find('.') if( sentenceEnd == -1 ): return False words = preprocess_string(self._currentText[:sentenceEnd:].replace( '\'', '' )) ukrWords = list(filter(lambda x: self._ukrLetters.search(x) , words)) self._currentText = self._currentText[sentenceEnd+1::] self._currentSentence = self.to_normal_form(ukrWords) return True def to_normal_form(self, words_list): if isinstance(words_list, str): words_list = [words_list, ] res = [ self._morph.parse(word)[0].normal_form for word in words_list ] if len(res) == 1: res = res[0] return res # - from gensim.test.utils import common_texts from gensim.models import Word2Vec # + from gensim.models.callbacks import CallbackAny2Vec class LossLogger(CallbackAny2Vec): '''Output loss at each epoch''' def __init__(self): self.epoch = 1 self.prevLoss = 0 self.losses = [] def on_epoch_begin(self, model): print(f'Epoch: {self.epoch}', end='\t') def on_epoch_end(self, model): loss = model.get_latest_training_loss() self.losses.append(loss - self.prevLoss) print(f' Loss: {loss - self.prevLoss}') self.epoch += 1 self.prevLoss = loss # - dataset = DatasetReader( dataset_files[0] ) sentences = [s for s in dataset] loss_logger = LossLogger() model = Word2Vec(sentences=sentences, epochs=100, compute_loss=True, vector_size=100, window=5, min_count=1, workers=4, callbacks=[loss_logger]) model.save("gensim_word2Vec.model") model.wv.most_similar("ялинка") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Aggregating vs averaging variant frequencies within groups # ___ # Load necessary packages and temperature experiment data file import pandas as pd import numpy as np import statsmodels.api as sm import statsmodels.formula.api as sf import matplotlib.pyplot as plt from pi_for_temperature_notebook import * data = pd.read_csv("variant-calls_temperature-expt_2020-04-07_tables.csv", header=0, index_col=0) # The above errors can be ignored because they arise from 32$ ^{o} $ data, which we will filter out. Also filter out variants that are represented in only one technical replicate, as well as variants that do not have a frequency of 3% or more in at least one technical replicate. data_no_32 = data.loc[data.temperature != 32] data_no_32_both_present = data_no_32.dropna(subset=['ADReplicateA','ADReplicateB']) filtered_data = pd.concat([data_no_32_both_present.loc[data_no_32_both_present.freqPropReplicateA >= 0.03], data_no_32_both_present.loc[data_no_32_both_present.freqPropReplicateB >= 0.03]]).drop_duplicates() # Previous analyses has shown a tight correlation between technical replicates, as expected by the manner of the replication. Aggregate technical replicate read counts to get a single variant frequency for each sequence variant. 
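# A tiny worked example (hypothetical counts, not from the dataset) of why aggregating read
# counts and averaging per-replicate frequencies can disagree when the two replicates have
# different coverage:
# +
ad_a, dp_a = 3, 100    # replicate A: 3 alternate reads out of 100
ad_b, dp_b = 30, 300   # replicate B: 30 alternate reads out of 300

mean_of_freqs = ((ad_a / dp_a) + (ad_b / dp_b)) / 2   # 0.065
aggregated_freq = (ad_a + ad_b) / (dp_a + dp_b)       # 0.0825
print(mean_of_freqs, aggregated_freq)
# -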
# + #set new DP and AD columns with aggregate of Replicate RD and AD values #This aggregates only counts of reads that passed VarScan filters filtered_data['DP'] = filtered_data['ADReplicateA'] + filtered_data['ADReplicateB'] +filtered_data['RDReplicateA'] + filtered_data['RDReplicateB'] filtered_data['AD'] = filtered_data['ADReplicateA'] + filtered_data['ADReplicateB'] #calculate frequency of aggregated read counts filtered_data['freqProp'] = filtered_data['AD'] / filtered_data['DP'] # - # Add factor data for lineage, species, and segment (for DNA-A and DNA-B segment designations) # + #lineage: set to 0 then assign each unique lineage to an unique integer filtered_data['lineage_factor'] = 0 j = 0 for lineage in filtered_data.lineage.unique(): filtered_data.loc[filtered_data.lineage == lineage,['lineage_factor']] = j j += 1 #species: Set ACMV = 0 and EACMCV = 1 filtered_data['species'] = 0 filtered_data.loc[filtered_data.chrom == 'EACMCV DNA-A',['species']] = 1 filtered_data.loc[filtered_data.chrom == 'EACMCV DNA-B',['species']] = 1 #segment: Set DNA-A = 0 and DNA-B = 1 filtered_data['segment'] = 0 filtered_data.loc[filtered_data.chrom == 'ACMV DNA-B',['segment']] = 1 filtered_data.loc[filtered_data.chrom == 'EACMCV DNA-B',['segment']] = 1 # - # Load the options for calculation of pi then print them out. options = get_temp_args() #set new values for options as necessary options['coverage'] = 'DP' options['perSite'] = False # + pis = {} pi_df = get_group_pis(filtered_data, options=options, group_by=['passage','temperature','species','segment','lineage_factor','plantID']) # - lm = sf.ols('pi ~ passage + temperature + (C(species)/C(segment)) + C(lineage_factor)',data=pi_df).fit(cov_type='HC1') print("Least squares summary:") print(lm.summary2()) # There is strong multicollinearity in the model. Let's inspect a few variables # + p = pi_df.boxplot(column='pi', by=['passage','temperature'], patch_artist=True, grid=False, return_type='dict') plt.ylabel('pi',fontsize=16) plt.xlabel('(passage,temperature)',fontsize=16) ylim = plt.ylim() plt.vlines(x=[2.5,4.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) plt.xticks(fontsize=14) plt.suptitle('') plt.title('') for median in p[0]['medians']: median.set_color('k') # - # Looks like we could omit temperature from the model. # + p = pi_df.boxplot(column='pi', by=['passage','lineage_factor'], patch_artist=True, grid=False, return_type='dict', rot=30) plt.ylabel('pi',fontsize=16) plt.xlabel('(passage,lineage)',fontsize=16) ylim = plt.ylim() plt.xticks(fontsize=14,ha='right') plt.suptitle('') plt.title('') ylim = plt.ylim() plt.vlines(x=[5.5,11.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) for median in p[0]['medians']: median.set_color('k') # - # Looks like we can onit lineage from the model as well. # + p = pi_df.boxplot(column='pi', by=['species','segment'], patch_artist=True, grid=False, return_type='dict', rot=30) plt.ylabel('pi',fontsize=16) plt.xlabel('',fontsize=16) ylim = plt.ylim() plt.vlines(x=[2.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) plt.xticks([1,2,3,4], ['ACMV DNA-A','ACMV DNA-B','EACMCV DNA-A','EACMCV DNA-B'], fontsize=14, ha='right') plt.suptitle('') plt.title('') for median in p[0]['medians']: median.set_color('k') # - # Let's remove temperature and lineage from the model lm = sf.ols('pi ~ passage + (C(species)/C(segment)) ',data=pi_df).fit(cov_type='HC1') print("Least squares summary:") print(lm.summary()) # Looks like a moderate improvement gained by reducing variables. Let's aggregate the data to reduce noise. 
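# Conceptually, "aggregating" here means summing the read counts within each group and then
# recomputing the frequency from the sums. A minimal pandas sketch of that idea (illustrative
# only; the aggregate_groups helper used in the next cell, presumably from the
# pi_for_temperature_notebook import above, is what is actually applied):
# +
agg_sketch = (filtered_data
              .groupby(['species', 'segment', 'pos', 'alt', 'passage', 'ref'])[['AD', 'DP']]
              .sum())
agg_sketch['freqProp'] = agg_sketch['AD'] / agg_sketch['DP']
agg_sketch.head()
# -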
# + agg_data = aggregate_groups(filtered_data,['species','segment','pos','alt','passage','ref']) agg_data.reset_index(inplace=True) agg_data['pos'] = agg_data.pos.astype(int) # - pi_df = get_group_pis(agg_data, options=options, group_by=['species','segment','passage']) # + lm = sf.ols('pi ~ passage + C(species)/C(segment)',data=pi_df).fit(cov_type='HC1') print("Least squares summary:") print(lm.summary()) print("\nAnova table:") table = sm.stats.anova_lm(lm) print(table) print("\n") for ind in table.index: print("{} explains \t {:.2%} of variance".format(ind,table.loc[ind,'sum_sq'] / table.sum_sq.sum())) # - # ### Now let's look at averaging pi within groups # Let's start with the raw data again, and use the mean between technical replicates as a single frequency per variant. `freqPropMeanNoNA` already exists as the mean of ReplicateA and ReplicateB frequencies, so we can use `filtered_data` as-is. We'll first calculate pi from the average values and inspect the analysis results. # + #set the frequency for column name of the mean of the replicates options['frequency'] = 'freqPropMeanNoNA' pi_df = get_group_pis(filtered_data, options=options, group_by=['passage','temperature','species','segment','lineage_factor','plantID']) lm = sf.ols('pi ~ passage + temperature + (C(species)/C(segment)) + C(lineage_factor)',data=pi_df).fit(cov_type='HC1') print("Least squares summary:") print(lm.summary()) # - # Now plot data # + p = pi_df.boxplot(column='pi', by=['passage','temperature'], patch_artist=True, grid=False, return_type='dict') plt.ylabel('pi',fontsize=16) plt.xlabel('(passage,temperature)',fontsize=16) ylim = plt.ylim() plt.vlines(x=[2.5,4.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) plt.xticks(fontsize=14) plt.suptitle('') plt.title('') for median in p[0]['medians']: median.set_color('k') # + p = pi_df.boxplot(column='pi', by=['passage','lineage_factor'], patch_artist=True, grid=False, return_type='dict', rot=30) plt.ylabel('pi',fontsize=16) plt.xlabel('(passage,lineage)',fontsize=16) ylim = plt.ylim() plt.xticks(fontsize=14,ha='right') plt.suptitle('') plt.title('') ylim = plt.ylim() plt.vlines(x=[5.5,11.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) for median in p[0]['medians']: median.set_color('k') # + p = pi_df.boxplot(column='pi', by=['species','segment'], patch_artist=True, grid=False, return_type='dict', rot=30) plt.ylabel('pi',fontsize=16) plt.xlabel('',fontsize=16) ylim = plt.ylim() plt.vlines(x=[2.5],ymin=ylim[0],ymax=ylim[1]) plt.ylim(ylim) plt.xticks([1,2,3,4], ['ACMV DNA-A','ACMV DNA-B','EACMCV DNA-A','EACMCV DNA-B'], fontsize=14, ha='right') plt.suptitle('') plt.title('') for median in p[0]['medians']: median.set_color('k') # - # Looks like there is no great difference in averaging or aggregating technical replicate data. 
# + avg_data = average_groups(filtered_data,['species','segment','pos','alt','passage','ref']) avg_data.reset_index(inplace=True) avg_data['pos'] = avg_data.pos.astype(int) avg_pi_df = get_group_pis(agg_data, options=options, group_by=['species','segment','passage']) # + lm_avg = sf.ols('pi ~ passage + C(species)/C(segment)',data=avg_pi_df).fit(cov_type='HC1') print("Least squares summary:") print(lm_avg.summary()) print("\nAnova table:") table_avg = sm.stats.anova_lm(lm_avg) print(table) # + print("Aggregated groups:") for ind in table.index: print("{} \t\t {:.2%} of variance".format(ind,table.loc[ind,'sum_sq'] / table.sum_sq.sum())) print("\nAveraged groups:") for ind in table.index: print("{} \t\t {:.2%} of variance".format(ind,table_avg.loc[ind,'sum_sq'] / table_avg.sum_sq.sum())) # - # ### Looks there is not a great difference between averaging or aggregating technical replicate data. Also, removing temperature increases model performance. # ___ # Not shown here: analyses running `linearmodels.PanelOLS` panel analyses on the temperature experiment fail when including both *temperature* and *passage* due to collinearity. # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia (16 threads) 1.1.1 # language: julia # name: julia-(16-threads)-1.1 # --- include("./inferece_server.jl") makeServer() @async startServer() Argv.x typeof(Argv.Caches) Argv.conns Argv.Caches closeServer() size() Argv.BUCKET !isfull(Argv.BUCKET) Argv # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: azureml_py38_tensorflow # kernelspec: # display_name: Python 3.8 - Tensorflow # language: python # name: azureml_py38_tensorflow # --- # + [markdown] pycharm={"name": "#%% md\n"} # # 1. Imports # + pycharm={"name": "#%%\n"} gather={"logged": 1634417615844} from datetime import datetime import matplotlib.pyplot as plt import io import numpy as np import os import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.applications import ResNet152V2 from tensorflow import keras from tensorflow.keras import layers from scipy.io import loadmat from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.preprocessing.image import load_img from tensorflow.keras.utils import to_categorical from tensorflow.keras.applications.resnet_v2 import preprocess_input from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from azureml.core import Dataset, Run, Workspace from imutils import paths device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) # - # # 2. Setup constants # + gather={"logged": 1634417615981} path_names = { 'DATA_PATH': os.path.join('data'), 'MODEL_PATH': os.path.join('models'), 'LOG_PATH': os.path.join('logs'), 'TEST_PATH': os.path.join('test') } batch_size = 64 img_height = 224 img_width = 224 # + [markdown] nteract={"transient": {"deleting": false}} # # 3. 
Download dataset # + jupyter={"source_hidden": false, "outputs_hidden": true} nteract={"transient": {"deleting": false}} gather={"logged": 1634409363670} run = Run.get_context() ws = Workspace.from_config() dataset_name = 'Smoking' dataset = Dataset.get_by_name(workspace=ws, name=dataset_name) dataset.download(path_names['DATA_PATH']) # - # # 4. Prepare labels and data # + pycharm={"name": "#%%\n"} gather={"logged": 1634417944446} data = [] labels = [] imagePaths = list(paths.list_images(path_names['DATA_PATH'])) for imagePath in imagePaths: label = imagePath.split(os.path.sep)[-2] image = load_img(imagePath, target_size=(224, 224)) image = img_to_array(image) image = preprocess_input(image) data.append(image) labels.append(label) data = np.array(data, dtype="float32") labels = np.array(labels) lb = LabelBinarizer() labels = lb.fit_transform(labels) labels = to_categorical(labels) # - # # 5. Load dataset # + pycharm={"name": "#%%\n"} gather={"logged": 1634417945564} (trainX, testX, trainY, testY) = train_test_split(data, labels,test_size=0.20, stratify=labels, random_state=42) aug = ImageDataGenerator( rotation_range=20, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") # - # # 6. Build model # + pycharm={"name": "#%%\n"} gather={"logged": 1634417950952} backbone = ResNet152V2(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3)) backbone.trainable = False input_layer = layers.Input((img_width, img_height, 3)) embedding = backbone(input_layer, training=False) embedding = tf.keras.layers.AveragePooling2D(pool_size=(7, 7))(embedding) embedding = tf.keras.layers.Flatten(name="flatten")(embedding) embedding = tf.keras.layers.Dense(128, activation="relu")(embedding) embedding = tf.keras.layers.Dropout(0.5)(embedding) embedding = tf.keras.layers.Dense(2, activation="softmax")(embedding) model = tf.keras.Model(inputs=[input_layer], outputs=[embedding]) opt = tf.keras.optimizers.Adam(lr=1e-4, decay=1e-8) model.compile(loss="binary_crossentropy", optimizer=opt,metrics=["accuracy"]) model.summary() # - # # 7. Train only top layers # + pycharm={"name": "#%%\n"} gather={"logged": 1634423432627} early_stopping = tf.keras.callbacks.EarlyStopping( monitor="val_loss", min_delta=0.001, patience=10, mode="min", restore_best_weights=True, ) log_dir = os.path.join(path_names['LOG_PATH'], datetime.now().strftime("%Y_%m_%d-%H_%M_%S")) tensorboard_callback = keras.callbacks.TensorBoard(log_dir=log_dir) history = model.fit(aug.flow(trainX, trainY, batch_size=batch_size), epochs=1000, callbacks=[early_stopping, tensorboard_callback], steps_per_epoch=len(trainX) // batch_size, validation_data=(testX, testY), validation_steps=len(testX) // batch_size) # - # # 8. Unfreeze backbone # + pycharm={"name": "#%%\n"} gather={"logged": 1634423468465} backbone.trainable = True for layer in backbone.layers[:-110]: layer.trainable = False model.compile(loss="binary_crossentropy", optimizer=opt,metrics=["accuracy"]) model.summary() # - # # 9. Train full model # + pycharm={"name": "#%%\n"} gather={"logged": 1634425137202} history = model.fit(aug.flow(trainX, trainY, batch_size=batch_size), epochs=1000, callbacks=[early_stopping, tensorboard_callback], steps_per_epoch=len(trainX) // batch_size, validation_data=(testX, testY), validation_steps=len(testX) // batch_size) # - # # 10. 
Save model # + pycharm={"name": "#%%\n"} gather={"logged": 1634425181651} model.save(os.path.join(path_names['MODEL_PATH'], 'smoking.h5')) # + [markdown] nteract={"transient": {"deleting": false}} # # 11. Predict # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1634428259545} LABEL_MAP = {0:"Not smoking", 1:"Smoking"} fig,axes = plt.subplots(ncols=4,figsize=(4*4,4)) imagePaths = list(paths.list_images('test')) for i in range(4): image = load_img(imagePaths[i], target_size=(224, 224)) axes[i].imshow(image) axes[i].axis('off') image = img_to_array(image) image = preprocess_input(image) prediction = model.predict(np.expand_dims(image, 0)) title = LABEL_MAP[0 if prediction[0][0] > 0.5 else 1] axes[i].set_title("{} {:0.2f}%".format(title, max(prediction[0]) * 100)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ### Lesson 2: Setup Jupyter Notebook for Data Analysis # # #### Learning Objectives: #
    #
    # 1. Create Python tools for data analysis using Jupyter Notebooks
    # 2. Learn how to access data from MySQL databases for data analysis
    # #
    # #### Exercise 1: Install Anaconda # Access https://conda.io/miniconda.html and download the Windows Installer.
    # # Run the following commands on the Anaconda command prompt:
    #
    # conda install numpy pandas matplotlib
    # conda update conda
    # 
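    # # To optionally verify the installation (a quick check, not part of the original exercise), run:
    #
    # python -c "import numpy, pandas, matplotlib; print('ok')"
    # 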
    # # Sometimes data analysis requires previous versions of Python or other tools for a project.
    # # Next we will set up separate conda environments that can be used with different project requirements. # #### Exercise 2: Configure conda environments for Python 2 and Python 3 data analysis # # To create a Python 2 environment, run the following from the Anaconda command prompt:
    # #
    # conda update conda -y
    # conda create -n py2 python=2 anaconda jupyter notebook -y
    # 
    # # To activate the environment:
    #
    activate py2
    # On MacOS or Linux: #
    source activate py2
    # # To deactivate the environment: #
    deactivate py2
    # On MacOS or Linux: #
    source deactivate py2
    # To create the Python 3 environment run the following from the Anaconda command prompt: # #
    # conda create -n py3 python=3 anaconda jupyter notebook -y 
    # 
    # # To activate the environment: #
    activate py3
    # On MacOS or Linux: #
    source activate py3
    # # To deactivate the environment: #
    deactivate py3
    # On MacOS or Linux: #
    source deactivate py3
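    # # Note: on newer conda releases (4.4 and later) the same commands work on every platform:
    #
    # conda activate py3
    # conda deactivate
    # 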
    # ### Setup Jupyter Notebook to access data from MySQL databases # #### Exercise 3: Load the mysql libraries into the environment and access data from MySQL database # # Run the following commands from the Anaconda command line:
    #
    # pip install ipython-sql
    # conda install mysql-python
    # 
    # # This will install sql magic capabilities to Jupyter Notebook # #### Load the sql magic jupyter notebook extension: # %load_ext sql # #### Configure sql magic to output queries as pandas dataframes: # %config SqlMagic.autopandas=True # #### Import the data analysis libraries: import pandas as pd import numpy as np # #### Import the MySQLdb library import MySQLdb # #### Connect to the MySQL database using sql magic commands # # The connection to the MySQL database uses the following format: # #
    # mysql://username:password@hostname/database
    # 
    # # To start a sql command block type: #
    %%sql
    # # Note: Make sure the %%sql is at the top of the cell
    # # Then the remaining lines can contain SQL code.
    # # Example: to connect to pidata database and select records from the temps table: # # + magic_args="mysql://pilogger:foobar@172.20.101.81/pidata" language="sql" # SELECT * FROM temps LIMIT 10; # - # Example to create a pandas dataframe using the results of a mysql query # df = %sql SELECT * FROM temps WHERE datetime > date(now()); df # Note the data type of the dataframe df: type(df) # #### Use %%sql to start a block of sql statements # Example: Show tables in the pidata database # + language="sql" # use pidata; # show tables; # - # #### Exercise 4: Another way to access mysql data and load into a pandas dataframe # Connect using the mysqldb python library: #Enter the values for you database connection database = "pidata" # e.g. "pidata" hostname = "172.20.101.81" # e.g.: "mydbinstance.xyz.us-east-1.rds.amazonaws.com" port = 3306 # e.g. 3306 uid = "pilogger" # e.g. "user1" pwd = "" # e.g. "" conn = MySQLdb.connect( host=hostname, user=uid, passwd=pwd, db=database ) cur = conn.cursor() # Create a dataframe from the results of a sql query from the pandas object: new_dataframe = pd.read_sql("SELECT * \ FROM temps", con=conn) conn.close() new_dataframe # ### Now let's create the tables to hold the sensor data from our Raspberry Pi #
    # Log on using an admin account and create a table called temps3 to hold sensor data:
    #
    # The table contains the following fields:
    # device       -- VARCHAR, Name of the device that logged the data
    # datetime     -- DATETIME, Date time in ISO 8601 format YYYY-MM-DD HH:MM:SS
    # temp         -- FLOAT, temperature data
    # hum          -- FLOAT, humidity data
    # 
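    # For reference, a Raspberry Pi logger could also write readings into this table directly from Python with MySQLdb rather than through the sql magic. A minimal sketch (it assumes the user1 account with INSERT rights on pidata.temps3 that is created further below, and uses a parameterized query):
import MySQLdb
import datetime

conn = MySQLdb.connect(host="172.20.101.81", user="user1", passwd="logger", db="pidata")
cur = conn.cursor()

# one sensor reading: device name, ISO 8601 timestamp, temperature, humidity
reading = ("pi222", datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), 73.2, 22.0)
cur.execute("INSERT INTO temps3 (device, datetime, temp, hum) VALUES (%s, %s, %s, %s)", reading)

conn.commit()   # InnoDB needs an explicit commit
conn.close()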
    # + magic_args="mysql://admin:admin@172.20.101.81/pidata" language="sql" # # DROP TABLE if exists temps3; # # CREATE TABLE temps3 ( # device varchar(20) DEFAULT NULL, # datetime datetime DEFAULT NULL, # temp float DEFAULT NULL, # hum float DEFAULT NULL # ) ENGINE=InnoDB DEFAULT CHARSET=latin1; # - # ### Next we will create a user to access the newly created table that will be used by the Raspberry Pi program # Example: # Start a connection using an admin account, create a new user called user1. # Grant limited privileges to the pidata.temps3 table # # Note: Creating a user with @'%' allows the user to access the database from any host # + magic_args="mysql://admin:admin@172.20.101.81" language="sql" # CREATE USER 'user1'@'%' IDENTIFIED BY 'logger'; # GRANT SELECT, INSERT, DELETE, UPDATE ON pidata.temps3 TO 'user1'@'%'; # FLUSH PRIVILEGES; # # - # %sql select * from mysql.user; # Next we will test access to the newly created table using the new user # Start a new connection using the new user # + magic_args="mysql://user1:logger@172.20.101.81/pidata" language="sql" # select * from temps3; # - # Let's add some test data to make sure we can insert using the new user for x in range(10): # %sql INSERT INTO temps3 (device,datetime,temp,hum) VALUES('pi222',date(now()),73.2,22.0); # %sql SELECT * FROM temps3; # Now we will delete the rows in the database # %sql DELETE FROM temps3; # %sql SELECT * FROM temps3; # + magic_args="mysql://admin:admin@172.20.101.81" language="sql" # drop user if exists 'user1'@'%'; # - # %sql select * from mysql.user; # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #default_exp streamlit_app # - #hide from nbdev.showdoc import * # # Streamlit app # Streamlit is a more convenient way to activate a quick user-facing GUI than Voila was, especially because of Voila having conflicting dependencies with nbdev. # # However, Streamlit wants a `.py` file instead of a notebook for development. This is kind of annoying, because to get the hot-reload effect from Streamlit we have to develop outside the notebook, but to maintain documentation (and compile with everything else) we have to keep the main source of truth right here. Perhaps a solution will present itself later; meanwhile, I have been using a scratch file `streamlit-app.py` for development and then copied it back here. # This is a workaround for the query_flow printing to stdout. Maybe it should be handled natively in Streamlit? 
# + #export import streamlit as st from memery import core from pathlib import Path from PIL import Image from streamlit.report_thread import REPORT_CONTEXT_ATTR_NAME from threading import current_thread from contextlib import contextmanager from io import StringIO import sys # + #export @contextmanager def st_redirect(src, dst): placeholder = st.empty() output_func = getattr(placeholder, dst) with StringIO() as buffer: old_write = src.write def new_write(b): if getattr(current_thread(), REPORT_CONTEXT_ATTR_NAME, None): buffer.write(b + '') output_func(buffer.getvalue() + '') else: old_write(b) try: src.write = new_write yield finally: src.write = old_write @contextmanager def st_stdout(dst): with st_redirect(sys.stdout, dst): yield @contextmanager def st_stderr(dst): with st_redirect(sys.stderr, dst): yield # - # Trying to make good use of streamlit's caching service here; if the search query and folder are the same as a previous search, it will serve the cached version. Might present some breakage points though, yet to see. # + #export @st.cache def send_image_query(path, text_query, image_query): ranked = core.query_flow(path, text_query, image_query=img) return(ranked) @st.cache def send_text_query(path, text_query): ranked = core.query_flow(path, text_query) return(ranked) # - # This is the sidebar content # + #export st.sidebar.title("Memery") path = st.sidebar.text_input(label='Directory', value='./images') text_query = st.sidebar.text_input(label='Text query', value='') image_query = st.sidebar.file_uploader(label='Image query') im_display_zone = st.sidebar.beta_container() logbox = st.sidebar.beta_container() # - # The image grid parameters # + #export sizes = {'small': 115, 'medium':230, 'large':332, 'xlarge':600} l, m, r = st.beta_columns([4,1,1]) with l: num_images = st.slider(label='Number of images',value=12) with m: size_choice = st.selectbox(label='Image width', options=[k for k in sizes.keys()], index=1) with r: captions_on = st.checkbox(label="Caption filenames", value=False) # - # And the main event loop, triggered every time the query parameters change. # # This doesn't really work in Jupyter at all. Hope it does once it's compiled. 
#export if text_query or image_query: with logbox: with st_stdout('info'): if image_query is not None: img = Image.open(image_query).convert('RGB') with im_display_zone: st.image(img) ranked = send_image_query(path, text_query, image_query) else: ranked = send_text_query(path, text_query) ims = [Image.open(o).convert('RGB') for o in ranked[:num_images]] names = [o.replace(path, '') for o in ranked[:num_images]] if captions_on: images = st.image(ims, width=sizes[size_choice], channels='RGB', caption=names) else: images = st.image(ims, width=sizes[size_choice], channels='RGB') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 好东西[Python标准库](https://docs.python.org/3/library/#the-python-standard-library) # Python标准库是Python安装的一部分,它包含各种各样的软件包,这些软件包在构建Python杰作时可能会有所帮助,本笔记本列出了一些常用的包装及其主要功能。 # ## [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime) 用于处理日期和时间 # + import datetime as dt local_now = dt.datetime.now() print('local now: {}'.format(local_now)) utc_now = dt.datetime.utcnow() print('utc now: {}'.format(utc_now)) # 你可以单独访问任何值: print('{} {} {} {} {} {}'.format(local_now.year, local_now.month, local_now.day, local_now.hour, local_now.minute, local_now.second)) print('date: {}'.format(local_now.date())) print('time: {}'.format(local_now.time())) # - # ### `strftime()` # 对于字符串格式 `datetime` # + formatted1 = local_now.strftime('%Y/%m/%d-%H:%M:%S') print(formatted1) formatted2 = local_now.strftime('date: %Y-%m-%d time:%H:%M:%S') print(formatted2) # - # ### `strptime()` # 用于将日期时间字符串转换为日期对象 my_dt = dt.datetime.strptime('2000-01-01 10:00:00', '%Y-%m-%d %H:%M:%S') print('my_dt: {}'.format(my_dt)) # ### [`timedelta`](https://docs.python.org/3/library/datetime.html#timedelta-objects) # 适用于时差 # + tomorrow = local_now + dt.timedelta(days=1) print('tomorrow this time: {}'.format(tomorrow)) delta = tomorrow - local_now print('tomorrow - now = {}'.format(delta)) print('days: {}, seconds: {}'.format(delta.days, delta.seconds)) print('total seconds: {}'.format(delta.total_seconds())) # - # ### 使用时区 # 首先确保 [`pytz`](http://pytz.sourceforge.net/) 已经安装. import sys # !{sys.executable} -m pip install pytz # + import datetime as dt import pytz naive_utc_now = dt.datetime.utcnow() print('naive utc now: {}, tzinfo: {}'.format(naive_utc_now, naive_utc_now.tzinfo)) # 本地化原始日期时间 UTC_TZ = pytz.timezone('UTC') utc_now = UTC_TZ.localize(naive_utc_now) print('utc now: {}, tzinfo: {}'.format(utc_now, utc_now.tzinfo)) # 将本地化日期时间转换为不同的时区 PARIS_TZ = pytz.timezone('Europe/Paris') paris_now = PARIS_TZ.normalize(utc_now) print('Paris: {}, tzinfo: {}'.format(paris_now, paris_now.tzinfo)) NEW_YORK_TZ = pytz.timezone('America/New_York') ny_now = NEW_YORK_TZ.normalize(utc_now) print('New York: {}, tzinfo: {}'.format(ny_now, ny_now.tzinfo)) # - # **注意**: 如果你的项目大量使用日期时间, 你可能需要看看外部的库,例如: [Pendulum](https://pendulum.eustace.io/docs/) 和 [Maya](https://github.com/kennethreitz/maya), 对于某些用例,这使得使用日期时间更加容易 # ## [`logging`](https://docs.python.org/3/library/logging.html#module-logging) # + import logging # 为每个模块分别购买专用记录仪的便捷方法 logger = logging.getLogger(__name__) logger.setLevel(logging.WARNING) logger.debug('This is debug') logger.info('This is info') logger.warning('This is warning') logger.error('This is error') logger.critical('This is critical') # - # ### 记录异常 # 有一个整洁的 `exception` 方法在 `logging` 除用户定义的日志条目外,该模块该将自动记录堆栈跟踪. 
try: path_calculation = 1 / 0 except ZeroDivisionError: logging.exception('All went south in my calculation') # ### 格式化日志条目 # + import logging # 仅Jupyter笔记本环境需要 from importlib import reload reload(logging) my_format = '%(asctime)s | %(name)-12s | %(levelname)-10s | %(message)s' logging.basicConfig(format=my_format) logger = logging.getLogger('MyLogger') logger.warning('Something bad is going to happen') logger.error('Uups, it already happened') # - # ### Logging to a file # + import os import logging # 仅Jupyter环境需要 from importlib import reload reload(logging) logger = logging.getLogger('MyFileLogger') # 让我们为记录器定义一个file_handler log_path = os.path.join(os.getcwd(), 'my_log.txt') file_handler = logging.FileHandler(log_path) # 而且定义好格式 formatter = logging.Formatter('%(asctime)s | %(name)-12s | %(levelname)-10s | %(message)s') file_handler.setFormatter(formatter) logger.addHandler(file_handler) # logger.addHandler(logging.StreamHandler()) logger.warning('Oops something is going to happen') logger.error(' visits our place') # - # ## [`random`](https://docs.python.org/3/library/random.html) 用于生成随机数 # + import random rand_int = random.randint(1, 100) print('random integer between 1-100: {}'.format(rand_int)) rand = random.random() print('random float between 0-1: {}'.format(rand)) # - # 如果需要伪随机数,可以将`seed`设置为随机。这将重现输出(尝试多次运行单元) # + import random random.seed(5) # Setting the seed # Let's print 10 random numbers for _ in range(10): print(random.random()) # - # ## [`re`](https://docs.python.org/3/library/re.html#module-re) 用于正则表达式 # ### 搜索事件 # + import re secret_code = 'qwret 8sfg12f5 fd09f_df' # "r" 开头表示原始格式,将其与正则表达式模式一起使用 search_pattern = r'(g12)' match = re.search(search_pattern, secret_code) print('match: {}'.format(match)) print('match.group(): {}'.format(match.group())) numbers_pattern = r'[0-9]' numbers_match = re.findall(numbers_pattern, secret_code) print('numbers: {}'.format(numbers_match)) # - # ### 验证变量 # + import re def validate_only_lower_case_letters(to_validate): pattern = r'^[a-z]+$' return bool(re.match(pattern, to_validate)) print(validate_only_lower_case_letters('thisshouldbeok')) print(validate_only_lower_case_letters('thisshould notbeok')) print(validate_only_lower_case_letters('Thisshouldnotbeok')) print(validate_only_lower_case_letters('thisshouldnotbeok1')) print(validate_only_lower_case_letters('')) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd ################################################################## # Author: # ################################################################## # Network Traffic Anomaly Intrusion Detection System (IDS) # using Machine Learning ################################################################## #This commented out code was used to name all the data features and process the # data file into CSV ''' df = pd.read_csv("android_traffic.csv", sep=";", header=0) df df.to_csv("android_traffic.csv") print(len(df.index)) ''' #test dataset '''' test_dataset = pd.read_csv("Test.txt",sep=",",header=None) test_dataset.columns = ["duration","protocol_type","service","flag","src_bytes","dst_bytes","land", "wrong_fragment","urgent","hot","num_failed_logins","logged_in", "num_compromised","root_shell","su_attempted","num_root","num_file_creations", "num_shells","num_access_files","num_outbound_cmds","is_host_login", 
"is_guest_login","count","srv_count","serror_rate", "srv_serror_rate", "rerror_rate","srv_rerror_rate","same_srv_rate", "diff_srv_rate", "srv_diff_host_rate","dst_host_count","dst_host_srv_count","dst_host_same_srv_rate", "dst_host_diff_srv_rate","dst_host_same_src_port_rate", "dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate", "dst_host_rerror_rate","dst_host_srv_rerror_rate","attack", "last_flag"] test_dataset.to_csv("testTraffic.csv", index=False) print(len(test_dataset.index)) #train dataset train_dataset = pd.read_csv("Train.txt", sep=",", header=None) train_dataset.columns=["duration","protocol_type","service","flag","src_bytes","dst_bytes","land", "wrong_fragment","urgent","hot","num_failed_logins","logged_in", "num_compromised","root_shell","su_attempted","num_root","num_file_creations", "num_shells","num_access_files","num_outbound_cmds","is_host_login", "is_guest_login","count","srv_count","serror_rate", "srv_serror_rate", "rerror_rate","srv_rerror_rate","same_srv_rate", "diff_srv_rate", "srv_diff_host_rate","dst_host_count","dst_host_srv_count","dst_host_same_srv_rate", "dst_host_diff_srv_rate","dst_host_same_src_port_rate", "dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate", "dst_host_rerror_rate","dst_host_srv_rerror_rate","attack", "last_flag"] train_dataset.to_csv("trainTraffic.csv", index=False) ''' # + train_df = pd.read_csv("trainTraffic.csv") #Training data description (count, mean,std,min, 25-75 percentile) print("Training Dataset Description (count, mean, std, min, etc)") train_df.describe() # - print("Training Dataset") train_df #Rename the column "attack" to "label", this is the target column/ground truth train_df=train_df.rename(columns={"attack":"label"}) #View value counts of target label - get an idea of data distribution print("Value counts of each label") train_df['label'].value_counts() print("Number of labels for target : %s" %len(train_df['label'].value_counts())) print("Number of features for dataset : %s" %len(train_df.columns)) # + #Place all numeric labels to a list nfeatures = train_df.columns.tolist() #Remove categorical data features, and target column nfeatures.remove('label') nfeatures.remove('protocol_type') nfeatures.remove('service') nfeatures.remove('flag') #Create dataframe with only numerical features as float type data_features = train_df[nfeatures].astype(float) # - #EDA of numerica features data_features.describe() ''' #Change target labels to binary labels - normal or malicious binary_labels = train_df['label'].copy() binary_labels[binary_labels != 'normal'] = 'malicious' binary_labels.value_counts() ''' train_df.columns ''' #This is for binary classification len(train_df['label'].unique()) #Convert all non-normal labels to malicious - turn into binary classification train_df.loc[:]['label'][train_df['label'] != 'normal'] = 'malicious' print("Train dataframe with binary class labels") train_df ''' # + import sklearn.model_selection as sk X_train, X_test, y_train, y_test = sk.train_test_split(data_features, train_df['label'], test_size=0.2,random_state=42) from sklearn import preprocessing import numpy as np #Scale the data minmax_scale = preprocessing.MinMaxScaler().fit(X_train) X_train_scaled = minmax_scale.transform(X_train) X_test_scaled = minmax_scale.transform(X_test) print(np.shape(X_train_scaled)) # + import sklearn.naive_bayes from time import time from sklearn.neighbors import KNeighborsClassifier #k-NN with 5 nearest neighbors using "ball_tree" 500 leaf classifier = 
KNeighborsClassifier(n_neighbors = 5, algorithm = 'ball_tree', leaf_size = 500) start = time() classifier.fit(X_train_scaled, y_train) end = time() - start print ("Classifier trained in {} seconds.".format(round(end, 3))) #Predictions on the X_test start = time() pred = classifier.predict(X_test_scaled) end = time() - start print ("Predicted {} samples in {} seconds".format(len(pred),round(end,3))) # + #Calculating out the accuracy from sklearn.metrics import accuracy_score acc = accuracy_score(pred, y_test) print(acc) print(pred) print(y_test) print ("Accuracy on train dataset is {}.".format(round(acc,4))) # - # + # Now scale, train, predict on "testTraffic.csv" test_df = pd.read_csv("testTraffic.csv") test_df=test_df.rename(columns={"attack":"label"}) print("Number of columns %d" % len(test_df.columns)) test_df # + #Same processing of dataframe before - remove categorical features #Place all numeric labels to a list nfeatures = test_df.columns.tolist() #Remove categorical data features, and target column nfeatures.remove('label') nfeatures.remove('protocol_type') nfeatures.remove('service') nfeatures.remove('flag') #Create dataframe with only numerical features as float type test_data_features = test_df[nfeatures].astype(float) # - #EDA of numerica features test_data_features.describe() ''' #This is for binary classification len(test_df['label'].unique()) #Convert all non-normal labels to malicious - turn into binary classification test_df.loc[:]['label'][test_df['label'] != 'normal'] = 'malicious' test_df ''' test_data_features # + #Run same split, scaling, classifier, prediction on "testTraffic.csv" #99% of samples are for test X_train, X_test, y_train, y_test = sk.train_test_split(test_data_features, test_df['label'], test_size=22543 ,random_state=42) #Scale the data minmax_scale = preprocessing.MinMaxScaler().fit(X_test) X_train_scaled = minmax_scale.transform(X_train) X_test_scaled = minmax_scale.transform(X_test) print("Number of samples removed from dataset {}".format(np.shape(X_train_scaled)[0])) start = time() #We dont need to train the classifier again as its been trained w/ the train dataset already pred = classifier.predict(X_test_scaled) acc = accuracy_score(pred, y_test) end = time() - start print ("Accuracy on test dataset is {}.".format(round(acc,4))) print ("Predicted (already trained classifier) {} samples in {} seconds".format(len(pred),round(end,3))) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.10 ('base') # language: python # name: python3 # --- # + import copy import random import numpy as np import pandas as pd import torch from scipy import stats from tqdm import tqdm from transformers import BertForSequenceClassification, BertTokenizer from util import calc_accuracy, calc_f1, init_device, load_params from util.bert import sentence_to_loader # - # ランダムシード初期化 seed = 0 random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True device = init_device() # パラメータ読み込み print("Loading parameters...") params = load_params("/workspace/amazon_review/config/params_mmd.json") params["batch_size"] = 4 # データセット読み込み train_df = pd.read_json(params["ja_train_path"], orient="record", lines=True) if params["is_developing"]: train_df = train_df.sample(n=10000, random_state=1) dev_df = pd.read_json(params["ja_dev_path"], orient="record", lines=True) test_df = 
pd.read_json(params["ja_test_path"], orient="record", lines=True) # sourceカテゴリーとtargetカテゴリーを分ける train_source_df = train_df[train_df["product_category"] == params["source_category"]] dev_source_df = dev_df[dev_df["product_category"] == params["source_category"]] test_source_df = test_df[test_df["product_category"] == params["source_category"]] train_target_df = train_df[train_df["product_category"] == params["target_category"]] dev_target_df = dev_df[dev_df["product_category"] == params["target_category"]] test_target_df = test_df[test_df["product_category"] == params["target_category"]] # クラスラベル設定 for df in [train_source_df, dev_source_df, test_source_df, train_target_df, dev_target_df, test_target_df]: # 3以上かを予測する場合 df["class"] = 0 df["class"][df["stars"] > 3] = 1 # 5クラス分類する場合 # df["class"] = df["stars"] - 1 # トークン化 model_name = "cl-tohoku/bert-base-japanese-v2" tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=True) # dataloader作成 train_source_dataloader = sentence_to_loader( train_source_df.review_body.values, train_source_df["class"].values, tokenizer, params["batch_size"], shuffle=True, ) dev_source_dataloader = sentence_to_loader( dev_source_df.review_body.values, dev_source_df["class"].values, tokenizer, params["batch_size"], shuffle=False ) # test_source_dataloader = sentence_to_loader( # test_source_df.review_body.values, # test_source_df["class"].values, # tokenizer, # params["batch_size"], # shuffle=False, # ) train_target_dataloader = sentence_to_loader( train_target_df.review_body.values, train_target_df["class"].values, tokenizer, params["batch_size"], shuffle=True, ) # dev_target_dataloader = sentence_to_loader( # dev_target_df.review_body.values, dev_target_df["class"].values, tokenizer, params["batch_size"], shuffle=False # ) test_target_dataloader = sentence_to_loader( test_target_df.review_body.values, test_target_df["class"].values, tokenizer, params["batch_size"], shuffle=False, ) # BERTモデル構築 model = BertForSequenceClassification.from_pretrained( model_name, num_labels=params["class_num"], output_attentions=False, output_hidden_states=False, ) model.to(device) # 最適化とスケジューラー # 論文で推奨されているハイパーパラメータを使用 optimizer = torch.optim.AdamW(model.parameters(), lr=6e-6, eps=1e-8) epochs = 3 # 訓練 for epoch in range(epochs): print(f"\n======== Epoch {epoch+1} / {epochs} ========\nTraining") total_train_loss = 0 model.train() for step, (input_id_batch, input_mask_batch, label_batch) in tqdm( enumerate(train_source_dataloader), total=len(train_source_dataloader) ): input_id_batch = input_id_batch.to(device).to(torch.int64) input_mask_batch = input_mask_batch.to(device).to(torch.int64) label_batch = label_batch.to(device).to(torch.int64) model.zero_grad() result = model(input_id_batch, token_type_ids=None, attention_mask=input_mask_batch, labels=label_batch) total_train_loss += result.loss.item() result.loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() avg_train_loss = total_train_loss / len(train_source_dataloader) print(f"\n\tAverage training loss: {avg_train_loss:.2f}") # 検証データに対する予測 print("\nRunning Validation") total_dev_loss = 0 total_dev_accuracy = 0 total_dev_f1 = 0 model.eval() for step, (input_id_batch, input_mask_batch, label_batch) in tqdm( enumerate(dev_source_dataloader), total=len(dev_source_dataloader) ): input_id_batch = input_id_batch.to(device).to(torch.int64) input_mask_batch = input_mask_batch.to(device).to(torch.int64) label_batch = label_batch.to(device).to(torch.int64) with torch.no_grad(): result = 
model(input_id_batch, token_type_ids=None, attention_mask=input_mask_batch, labels=label_batch) total_dev_loss += result.loss.item() logit_array = result.logits.detach().cpu().numpy() label_array = label_batch.cpu().numpy() total_dev_accuracy += calc_accuracy(label_array, logit_array) total_dev_f1 += calc_f1(label_array, logit_array) avg_dev_loss = total_dev_loss / len(dev_source_dataloader) print(f"\tDev Loss: {avg_dev_loss:.3f}") avg_dev_accuracy = total_dev_accuracy / len(dev_source_dataloader) print(f"\tAccuracy: {avg_dev_accuracy:.3f}") avg_dev_f1 = total_dev_f1 / len(dev_source_dataloader) print(f"\tF1: {avg_dev_f1:.3f}") # ブートストラップで複数回実行する print("\ntargetでFineTuning開始") # 事前学習したモデルを保持 # メモリを共有しないためにdeepcopyを使用する model_pretrained = copy.deepcopy(model.cpu()) # + params["target_ratio"] = [0.01, 0.05, 0.1, 0.3, 0.5] for target_ratio in params["target_ratio"]: print("------------------------------") print(f"target_ratio = {target_ratio}") print("------------------------------") accuracy_list = [] f1_list = [] for count in range(params["trial_count"]): print(f"\n{count+1}回目の試行") # targetでFineTuningする準備 # target_ratioで指定した比率までtargetのデータ数を減らす source_num = train_source_df.shape[0] target_num = int(source_num * target_ratio) if target_num > train_target_df.shape[0]: print("Target ratio is too large.") exit() train_target_df_sample = train_target_df.sample(target_num, replace=False) print(f"Source num: {source_num}, Target num: {target_num}") # targetのデータローダー作成 train_target_dataloader = sentence_to_loader( train_target_df_sample.review_body.values, train_target_df_sample["class"].values, tokenizer, params["batch_size"], shuffle=True, ) # 事前学習したモデルをロード model = copy.deepcopy(model_pretrained).to(device) optimizer = torch.optim.AdamW(model.parameters(), lr=6e-6, eps=1e-8) # targetでFineTuning for epoch in range(epochs): print(f"======== Epoch {epoch+1} / {epochs} ========") total_train_loss = 0 model.train() for step, (input_id_batch, input_mask_batch, label_batch) in enumerate(train_target_dataloader): input_id_batch = input_id_batch.to(device).to(torch.int64) input_mask_batch = input_mask_batch.to(device).to(torch.int64) label_batch = label_batch.to(device).to(torch.int64) model.zero_grad() result = model( input_id_batch, token_type_ids=None, attention_mask=input_mask_batch, labels=label_batch ) total_train_loss += result.loss.item() result.loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() avg_train_loss = total_train_loss / len(train_target_dataloader) print(f"Training Target Loss: {avg_train_loss:.2f}") # テスト total_test_loss = 0 total_test_accuracy = 0 total_test_f1 = 0 model.eval() for step, (input_id_batch, input_mask_batch, label_batch) in enumerate(test_target_dataloader): input_id_batch = input_id_batch.to(device).to(torch.int64) input_mask_batch = input_mask_batch.to(device).to(torch.int64) label_batch = label_batch.to(device).to(torch.int64) with torch.no_grad(): result = model( input_id_batch, token_type_ids=None, attention_mask=input_mask_batch, labels=label_batch ) total_test_loss += result.loss.item() logit_array = result.logits.detach().cpu().numpy() label_array = label_batch.cpu().numpy() total_test_accuracy += calc_accuracy(label_array, logit_array) total_test_f1 += calc_f1(label_array, logit_array) avg_test_loss = total_test_loss / len(test_target_dataloader) print(f"\nTest Target Loss: {avg_test_loss:.2f}") avg_test_accuracy = total_test_accuracy / len(test_target_dataloader) accuracy_list.append(avg_test_accuracy) print(f"Test Target 
Accuracy: {avg_test_accuracy:.2f}") avg_test_f1 = total_test_f1 / len(test_target_dataloader) f1_list.append(avg_test_f1) print(f"Test Target F1: {avg_test_f1:.2f}") accuracy_interval = stats.t.interval( alpha=0.95, df=len(accuracy_list) - 1, loc=np.mean(accuracy_list), scale=stats.sem(accuracy_list) ) f1_interval = stats.t.interval(alpha=0.95, df=len(f1_list) - 1, loc=np.mean(f1_list), scale=stats.sem(f1_list)) print("\n\t\tMean, Std, 95% interval (bottom, up)") print( f"Accuracy\t{np.mean(accuracy_list):.2f}, {np.std(accuracy_list, ddof=1):.2f}, {accuracy_interval[0]:.2f}, {accuracy_interval[1]:.2f}" ) print( f"F1 Score\t{np.mean(f1_list):.2f}, {np.std(f1_list, ddof=1):.2f}, {f1_interval[0]:.2f}, {f1_interval[1]:.2f}" ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="ROdDdygdOWJG" # ## 1. Defining the question # + [markdown] id="112GSu7CPl5-" # ### a) Specifying the analysis question # # Build a model that determines whether a person survived the titanic accident or not. # + [markdown] id="TBT5LvwBPmkP" # ### b) Defining the metric for success # # Be able to effectively use K Nearest Neighbours(KNN) to build an efficient model that can predict whether a person survived the titanic accident or not with at least 90% accuracy # + [markdown] id="xdPS32qUPnPK" # ### c) Understanding the context # # The RMS Titanic sank in the early morning hours of 15 April 1912 in the North Atlantic Ocean, four days into her maiden voyage from Southampton to New York City. The largest ocean liner in service at the time, Titanic had an estimated 2,224 people on board when she struck an iceberg. # # Her sinking two hours and forty minutes later at 02:20 on Monday, 15 April, resulted in the deaths of more than 1,500 people, making it one of the deadliest peacetime maritime disasters in history. # # Titanic sank with over a thousand passengers and crew still on board. Almost all of those who jumped or fell into the water drowned or died within minutes due to the effects of cold shock and incapacitation. # The disaster shocked the world and caused widespread outrage over the lack of lifeboats, lax regulations, and the unequal treatment of the three passenger classes during the evacuation. # + [markdown] id="h0uUC0OpPnxc" # ### d) Recording the experimental design # # - Read and explore the given dataset # - Define the appropriateness of the available data to answer the given question # - Find and deal with outliers, anomalies, and missing data within the dataset # - Perform Exploratory Data Analysis. # - Build a model using KNN algorithm # - Challenge the solution. # - Optimize the model # - Make conclusions # # # + [markdown] id="LznBGpVFPoOT" # ### e) Data Relevance # # The dataset contains adequate information to build my models # + [markdown] id="V0rTKmBbPor5" # ## 2. 
Importing relevant libraries # + id="lJHfv_3FbnN3" # Loading necessary libraries import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import MinMaxScaler, LabelEncoder from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA from sklearn.neighbors import KNeighborsClassifier # + [markdown] id="Zp8drdSRQjg0" # ## 3. Loading and checking the data # + id="3xFo-4FwcSpQ" # Loading our dataset titanic = pd.read_csv('titanic.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 229} id="tx1TuYKLce6R" outputId="d9039d55-90f4-4152-8ad3-0782d5a63b7e" # Getting a preview of the first 5 rows titanic.head() # + colab={"base_uri": "https://localhost:8080/"} id="SApzDmXYemul" outputId="c0e624f9-1d6b-43a0-ae55-944e3c8eec88" # Determining the number of rows and columns in the dataset titanic.shape # This dataset has 891 observations and 12 columns # + colab={"base_uri": "https://localhost:8080/"} id="BndCOUEFfM9a" outputId="3af8050c-bd13-4405-f274-28077f8cf6fc" # Determining the names of the columns present in the dataset titanic.columns # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="YXt7KcR4llU1" outputId="c4c675af-7f8a-4d8a-cf77-b0c8104a05cc" # Obtaining descriptive statistics titanic.describe() # + colab={"base_uri": "https://localhost:8080/"} id="g4WLnrwGfZFN" outputId="5e55c616-c4f3-441a-e797-970e77b08651" # Checking if each column is of the appropriate data type titanic.dtypes # All the columns are of the appropriate data type. # + [markdown] id="PSOXSQciSFn0" # ## 4. External data source validation # + [markdown] id="_6hx-9sDM66o" # The dataset has been validated against the Titanic dataset on Kaggle. The link is provided below # # https://www.kaggle.com/hesh97/titanicdataset-traincsv # # The data dictionary can be accessed here: # # https://www.kaggle.com/c/titanic/data # + [markdown] id="iT1z5dFagd6g" # ## 5. Data cleaning # # + id="d10waXUBgSnH" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="3ab3f489-82fa-407e-f55f-ea59c959f16d" # Stripping the columns of any probable white spaces titanic.columns = titanic.columns.str.strip().str.lower() # Preview of the dataset titanic.head(2) # + colab={"base_uri": "https://localhost:8080/"} id="WJNqiDb0gSvi" outputId="7711a352-8c8b-465b-e84c-bb5938c806af" # Checking for presence of null values titanic.isnull().sum() # There are 3 columns with missing values. 
# + colab={"base_uri": "https://localhost:8080/"} id="k3qhOCl_dIFW" outputId="401aa0ee-3a70-4c46-a16d-362ffea6c5d4" # Previewing the columns again titanic.columns # + colab={"base_uri": "https://localhost:8080/"} id="7qQuV8F7B2Qg" outputId="03967d4c-829f-43b7-cedd-f9b78d0d286d" # Dealing with the missing values # Dropping the cabin column titanic.drop('cabin', axis = 1, inplace=True) # Imputing the median for age titanic['age'] = titanic['age'].fillna(titanic['age'].median()) # Foward filling the null values in the embarked columns titanic['embarked'] = titanic['embarked'].fillna(method = 'ffill') # Confirming that there are no null values titanic.isna().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="1ZifcAM8g4FE" outputId="5fa19d90-5f23-46d7-f77e-4f49b3165c67" # Getting a preview of the dataset titanic.head() # + colab={"base_uri": "https://localhost:8080/"} id="RJii5OZdhoIz" outputId="a9f90093-02c4-428a-eb93-50da6a1f2116" # Checking if there are any duplicated rows titanic.duplicated().sum() # There are no doplicates in the data # + colab={"base_uri": "https://localhost:8080/"} id="tHFyTqExxFvP" outputId="eebe3805-7aaf-4e1c-ee69-4d12edce34b6" # Previewing the columns again titanic.columns # + colab={"base_uri": "https://localhost:8080/"} id="gnCGxac1TiV4" outputId="9ead4789-284e-46aa-bb73-5f47e6200665" # Checking for any anomalies in the categorical variables qcol =['survived', 'pclass', 'sex', 'sibsp', 'parch', 'embarked'] for col in qcol: print(col, ':', titanic[col].unique(), '\n') # The categorical columns have no errors in their entries # + colab={"base_uri": "https://localhost:8080/", "height": 993} id="6nJRzbosrVw0" outputId="7af97d22-52d5-41df-b969-62e4e9113f69" # Checking for Outliers numna = ['fare', 'age'] for column in numna: plt.figure(figsize=(8,8)) titanic.boxplot([column], fontsize= 14) plt.ylabel('count', fontsize = 14) plt.title('Boxplot - {}'.format(column), fontsize = 16) # Both age and fare have outliers # + colab={"base_uri": "https://localhost:8080/"} id="LxRfQNleU1pv" outputId="a225d3f1-afa2-4d50-dbf1-437f59a70d48" # Determining how many rows would be lost if outliers were removed # Calculating our first, third quantiles and then later our IQR # --- Q1 = titanic.quantile(0.25) Q3 = titanic.quantile(0.75) IQR = Q3 - Q1 # Removing outliers based on the IQR range # --- # titanic_new = titanic[~((titanic < (Q1 - 1.5 * IQR)) | (titanic > (Q3 + 1.5 * IQR))).any(axis=1)] # Printing the shape of our new dataset # --- # print(titanic_new.shape) # Printing the shape of our old dataset # --- # print(titanic.shape) # Number of rows removed rows_removed = titanic.shape[0] - titanic_new.shape[0] print('Number of rows removed:', rows_removed) # Percentage of rows removed of the percentage row_percent = (rows_removed/titanic.shape[0]) * 100 print('Percentage of rows removed:', row_percent) # There will be no need to remove the outliers since it # doesn't seem like there are obvious errors in the data. 
# Furthermore, removing outliers will reduce the diversity of the dataset # + id="YbGca6TYn1wv" # Dropping the unnecessary columns titanic.drop(['ticket', 'passengerid'], axis=1, inplace=True) # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="GLDq9sduodDr" outputId="779390d3-f38f-4dfe-e380-635077568937" # Feature extraction # Splitting the name column to get only the title title = [i.split(",")[1].split(".")[0].strip() for i in titanic['name']] titanic['title'] = pd.Series(title) titanic.head() # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="ucL27q26slmt" outputId="cc81ef8d-a873-4e2d-a793-b1afdc460745" # Dropping the name column titanic.drop('name', axis=1, inplace=True) # Checking on the columns again titanic.head() # + [markdown] id="hKEgp4tisMLz" # ## 6. Exploratory Data Analysis # + [markdown] id="a7vEdTzIsRoB" # ### a) Univariate Analysis # + colab={"base_uri": "https://localhost:8080/"} id="xXwwxR1qr0xK" outputId="0686d270-a2f3-4a30-c860-ada21dc52d6e" # Previewing the columns titanic.columns # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="qAT5Lq44oq2h" outputId="cf059abd-791f-488d-a5ed-f042f4ff0921" # Countplots for the categorical columns qcol = ['survived', 'pclass', 'sex', 'sibsp', 'parch', 'embarked'] for i in qcol: plt.figure(figsize= (8,8)) sns.countplot(x = titanic[i]) plt.title('Countplot - {}'.format(i), fontsize = 16) plt.xlabel(i , fontsize = 14) plt.ylabel('Count', fontsize = 14) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # + [markdown] id="RTX6pMg2wNuR" # From these countplots, we can make the following observations: # Of the total number recorded # # * Those who did not survive were more than those who did. Approximately 600 passengers did not survive while about 300 passengers survived # * Most of the passengers were in ticket class(pclass) 3 followed by passengers in ticket class 1. Ticket class 2 had the least number of passengers. # * Of the total number of passengers, about 560 passengers were male and 300 were female. # * Approximately 600 passengers had no siblings or spouses(sibsp) on board. About 200 passengers had 1 sibling or spouse on board. # * More than 650 passengers had no parent or child(parch) on board # *Approximately 650 passengers embarked from Southampton port(S), about 180 passengers from the Cherbourg(C) port while less than 100 passengers from the Queenstown port(Q). 
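# A vectorised alternative for the title extraction above, shown on a toy Series (the name column has already been dropped from titanic at this point), using a regular expression instead of chained splits:

# +
import pandas as pd

toy_names = pd.Series(['Braund, Mr. Owen Harris', 'Heikkinen, Miss. Laina'])
toy_titles = toy_names.str.extract(r',\s*([^.]+)\.')[0].str.strip()
print(toy_titles.tolist())  # ['Mr', 'Miss']
# -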
# + colab={"base_uri": "https://localhost:8080/"} id="eQ68UEoJhx5I" outputId="9923a032-df98-4622-b3bd-ab06a84f119c" # Frequency of the titles titanic['title'].value_counts() # All the titles from Master to Don I will title as 'Other' # + colab={"base_uri": "https://localhost:8080/"} id="cFHlQyXTjMUU" outputId="b08ba7a0-05d0-40d5-e3c2-ccf6bf833aaf" titanic['title'].unique() # + colab={"base_uri": "https://localhost:8080/"} id="0Zd8wV68ig6Y" outputId="59c02c15-55ed-47fc-e2ab-5ca3dfd9c555" # Replacing the less popular titles with other other = ['Master', 'Don', 'Rev', 'Dr', 'Mme', 'Ms', 'Major', 'Lady', 'Sir', 'Mlle', 'Col', 'Capt', 'the Countess', 'Jonkheer'] titanic['title'] = titanic['title'].replace(other, 'Other') titanic['title'].value_counts() # Most of the passengers went by the title Mr while 67 of the # went by the less common titles like Mademoiselle, Jonkheer and so forth # + colab={"base_uri": "https://localhost:8080/", "height": 523} id="rqgAxohLBjAQ" outputId="40b29833-8dd4-4dd9-cc90-a33da9d6ce9e" # Histogram on age to see the shape of the distribution def histogram(var1, bins): plt.figure(figsize= (10,8)), sns.set_style('darkgrid'), sns.set_palette('colorblind'), sns.histplot(x = var1, data=titanic, bins = bins , shrink= 0.9, kde = True) histogram('age', 60) plt.title('Histogram of age of passengers', fontsize = 16) plt.xlabel('Age', fontsize = 14) plt.ylabel('Count', fontsize = 14) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="ynTcE2DXDjPo" outputId="bcad4823-57f9-4a73-cc51-3ff8e9743e1c" # Measures of central tendency titanic['age'].describe() # The mean age is 29 # The maximum age is 80 # The minimum age is 0.42 # 25% of passengers are aged below 22 while 50% of passengers are aged below # 28 and 75% of passengers are aged below 35 # + colab={"base_uri": "https://localhost:8080/"} id="iXNSNm-lKGUM" outputId="1d5699e6-2e8d-4352-8082-fa096307a75b" # Defining a variable to calculate variation def variation(var): print('The skewness is:', titanic[var].skew()) print('The kurtosis is:', titanic[var].kurt()) print('The coefficient of variation is:', titanic[var].std()/titanic[var].mean()) # Checking on coefficent of variance, skewness and kurtosis variation('age') # The distribution is fairly symmetrical and is slightly leptokurtic. # There is low variance in the age of the passengers # + colab={"base_uri": "https://localhost:8080/", "height": 523} id="qCRS-q6dFrBQ" outputId="e4b6c1f4-8c58-4ecb-bd13-aab3f2699037" # Histogram of passenger fare histogram('fare', 60) plt.title('Histogram of passenger fare', fontsize = 16) plt.xlabel('Fare', fontsize = 14) plt.ylabel('Count', fontsize = 14) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="C-sEQ6uNHUgq" outputId="e341b673-ba1e-4da0-cdb7-8135cfe89063" # Measures of central tendency titanic['fare'].describe() # The mean fare is 32.2 # The maximum fare is 512 # The minimum fare is 0 # 25% of passenger fares are below 7.9 while 50% of pasenger fares are below # 14 and 75% of passenger fares are below 31. # The fare price of 512 represents the prce for the luxury tickets. It is not an error # + colab={"base_uri": "https://localhost:8080/"} id="4SiI7TkY5Omt" outputId="342ed868-86d7-4c05-f601-42501d443fef" # Checking on coefficent of variance, skewness and kurtosis variation('fare') # The distribution is skewed to the right and is leptokurtic. 
# There is high variance in fare values # + [markdown] id="MTfYjlQV9_GU" # ### b) Bivariate Analysis # + colab={"base_uri": "https://localhost:8080/"} id="5ud6_d-axjUr" outputId="ab04dfde-f62a-40be-89de-54269ac6fc0d" # Preview of the coluns titanic.columns # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="WRQuQiI7OaL-" outputId="d4678210-7729-400e-d6d0-d1c9fa97087e" # Bar charts to compare the whether a passenger survived vs the different categorical variables qcol1 =['pclass', 'sex', 'sibsp', 'parch', 'embarked', 'title'] for i in qcol1: table=pd.crosstab(titanic[i],titanic['survived']) table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False) plt.title('Bar Chart of Survived vs {}'.format(i), fontsize = 16) plt.xlabel(i, fontsize = 14) plt.ylabel('Proportion of Respondents', fontsize = 14) plt.xticks(rotation = 360, fontsize = 14) plt.yticks(fontsize = 14) plt.show() print(' ') print(' ') # + [markdown] id="TUkYFbytjXrI" # The observations made from the graphs above are: # # - 62% of those in ticket class 1 survived, 48% of those in ticket class 2 survived and only 25% of those in ticket class 3 survived. # - More than 70% of females survived while less than 20% of males survived. # - All of those who had 8 or 5 siblings or spouses on board died. Almost 70% of those who had no siblings or spouses on board did not survive. # - More than 65% of those who had no parents or children on board died. However, the number of passengers who survived was higher for those who had 1 or 3 parents or siblings. # - Of those who embarked from Cherbourg port 55% survived. Of those who embarked from Queenstown port, 40% survived and of those who embarked from Southampton Port, about 34% survived. # - The number of passengers who survived was higher for those who took the title 'Mrs', 'Miss' or had other less common titles. Less than 20% of individuals with the title 'Mr' survived # + colab={"base_uri": "https://localhost:8080/", "height": 166} id="O3CrJRRByZx4" outputId="1754ee23-dc1a-43a7-a462-f8ac45b5ee43" # Looking at the relationship between fare and the port that # the passengers embarked from titanic.groupby('embarked')['fare'].describe() # Those who embarked from Cherbourg paid more on average than those who embarked # from the other 2 ports. # + colab={"base_uri": "https://localhost:8080/", "height": 136} id="4wcHJHYb08Za" outputId="2cca8fe4-cde5-45e3-c17e-cebeee3db1fa" # Looking at the relationship between fare and sex titanic.groupby('sex')['fare'].describe() # Females paid more on average than men # + colab={"base_uri": "https://localhost:8080/", "height": 523} id="9q-NF2sVoYzD" outputId="b8ecf73c-07da-44b8-ad94-199237c33ce5" # Comparing age and survival status plt.figure(figsize=(10,8)) sns.boxplot(x = titanic['survived'], y= titanic['age']) plt.title('Age vs survival status', fontsize = 16) plt.xlabel('Survival status', fontsize = 14) plt.ylabel('Age', fontsize = 14) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # There is not much of a difference in the ages for those who survived and those that # did not survive. 
# + colab={"base_uri": "https://localhost:8080/", "height": 523} id="xli8Lz1a6GDb" outputId="8cefe2d5-17a3-4c16-9627-73048fd42355" # Comparing fare and survival status plt.figure(figsize=(10,8)) sns.boxplot(x = titanic['survived'], y= titanic['fare']) plt.title('Passenger fare vs survival status', fontsize = 16) plt.xlabel('Survival status', fontsize = 14) plt.ylabel('Passenger fare', fontsize = 14) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # Those who survived had a higher median fare than those who did not # + [markdown] id="dmHL48kwIIb7" # ### c) Multivariate Analysis # + colab={"base_uri": "https://localhost:8080/"} id="26Tj9yNAAFEn" outputId="430fcb4d-caa6-463a-f491-a242d6f0dbb2" # Getting a preview of the columns again titanic.columns # + id="zvpcz_P2I8aw" colab={"base_uri": "https://localhost:8080/", "height": 505} outputId="04ad196f-c1a3-47b1-d060-0e3d5440a55c" # Correlation Heatmap for numerical features plt.figure(figsize=(10,8)) sns.heatmap(titanic.corr(),annot=True) plt.title('Correlation matrix', fontsize = 16) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.show() # There is a moderately high correlation between sibsp and parch. # The correlation coefficient between the two is 0.41 # + [markdown] id="G29Pa1eMPl8R" # ## 7. Feature Selection by checking for multicollinearity # > Multicollinearity is the occurrence of high intercorrelations among 2 or more independent variables in a model with multiple features. # # > Testing for multicollinearity is extremely important because it can lead to: # # * Inaccurate estimates of the regression coefficients # * Inflation of the standard errors of the regression coefficients # * False and non-significant p-values # * Degredation of the predictability of the model # # Checking for correlations in independent variables helps to do features selection. Any variables with a VIF value above 4 will be dropped and will not be used in building the model # + colab={"base_uri": "https://localhost:8080/"} id="tCGUS-p7CTVp" outputId="0dc7e8d1-aef3-450f-b653-cf6160c1c9fa" # Encoding the categorical columns for i in qcol1: titanic[i] = titanic[i].astype('category') titanic[i] = titanic[i].cat.codes # Confirming the changes for i in qcol1: print(i, ':', titanic[i].unique(), '\n') print(titanic.dtypes) # + id="bjb2OKj7EUVN" # Specifying X and y X = titanic.drop('survived', 1) y = titanic['survived'] # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="dwxiYWOUEubr" outputId="fa732f8b-6d30-4e4f-d170-880808bd3680" # Checking for multicollinearity correlations = X.corr() correlations # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="h9S4YG8gE0x2" outputId="8c7b8891-90f7-4bca-9aff-1fee280824eb" # Creating the VIF dataframe VIF = pd.DataFrame(np.linalg.inv(correlations.values), index = correlations.index, columns = correlations.columns) VIF # + colab={"base_uri": "https://localhost:8080/"} id="7FDfLwUlFE3n" outputId="a27be8be-f035-4a72-dc8e-cfd715ad890e" # Extracting the VIF values pd.Series(np.diag(VIF), index=[VIF.columns]) # There is no multicollinearity # + [markdown] id="h8yrx7-28pK8" # There is no multicollinearity so no features will be dropped. # + [markdown] id="k4tcH-U_F3np" # ## 8. Implementing the solution # + [markdown] id="9yysvwtaGFWQ" # ### a) Building my model using K Nearest Neighbors algorithm. # # KNN is a non parametric algorithm that can be used with both classification and regression problems. However, it is more widely used to solve classification problems. 
# # KNN works by finding the distances between a new point and all the examples in the data, selecting the specified number of examples (K) closest to the new point, and then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression). # # Some of the advantages of using KNN are: # * It is easy to use # * It has a quick calculation time # * It does not make any assumptions about the data (non-parametric) # # # 
# + colab={"base_uri": "https://localhost:8080/"} id="OySYU6dYJz-l" outputId="98a2931d-9a5f-48aa-9262-115fdc8e81d0" # Splitting into train and test using an 80-20 ratio X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 99) # Fitting and making predictions classifier = KNeighborsClassifier() classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) # Evaluating the model print('Accuracy:', accuracy_score(y_test, y_pred), '\n') print(confusion_matrix(y_test, y_pred), '\n') print(classification_report(y_test, y_pred)) # At an 80-20 split ratio, the accuracy is 70.9%
# + [markdown] id="tcUkJnXH201K" # * With an 80-20 split the accuracy is 70.9% # # From the first row of the confusion matrix we see that: # * 87 passengers were correctly classified as having not survived # * 26 were wrongly classified as having survived. # # From the second row of the confusion matrix we see that: # * 26 passengers were wrongly classified as having not survived # * 93 were correctly classified as having survived # # The recall (for the non-survivor class) is 77% #
# + colab={"base_uri": "https://localhost:8080/"} id="112Q7CnHLz5k" outputId="5ffe9dfc-58e5-4c8d-f040-6541fadc7f48" # Splitting into train and test using a 70-30 ratio X_train1, X_test1, y_train1, y_test1 = train_test_split(X, y, test_size = 0.3, random_state = 99) # Fitting and making predictions classifier = KNeighborsClassifier() classifier.fit(X_train1, y_train1) y_pred1 = classifier.predict(X_test1) # Evaluating the model print('Accuracy:', accuracy_score(y_test1, y_pred1), '\n') print(confusion_matrix(y_test1, y_pred1), '\n') print(classification_report(y_test1, y_pred1)) # At a 70-30 split ratio, the accuracy has risen to 73.1%
# + [markdown] id="gmM9DBUb5V-R" # * With a 70-30 split the accuracy is 73.1% # # From the first row of the confusion matrix we see that: # * 140 passengers were correctly classified as having not survived # * 34 were wrongly classified as having survived.
# # From the second row of the confusion matrix we see that: # * 38 passengers were wrongly classified as having not survived # * 56 were correctly classified as having survived # # The recall (for the non-survivor class) is 80%
# + colab={"base_uri": "https://localhost:8080/"} id="kvS1N3uvMepH" outputId="6ed62c31-4925-451f-9c90-cf20d1fbac5d" # Splitting into train and test using a 60-40 ratio X_train2, X_test2, y_train2, y_test2 = train_test_split(X, y, test_size = 0.4, random_state = 99) # Fitting and making predictions classifier = KNeighborsClassifier() classifier.fit(X_train2, y_train2) y_pred2 = classifier.predict(X_test2) # Evaluating the model print('Accuracy:', accuracy_score(y_test2, y_pred2), '\n') print(confusion_matrix(y_test2, y_pred2), '\n') print(classification_report(y_test2, y_pred2)) # At a 60-40 split ratio, the accuracy of the model dropped to 69.7%, # which was lower than the 80-20 split # I will choose to optimize the 70-30 option
# + [markdown] id="Gb-xHuXB5rP9" # * With a 60-40 split the accuracy is 69.7% # # From the first row of the confusion matrix we see that: # * 169 passengers were correctly classified as having not survived # * 61 were wrongly classified as having survived. # # From the second row of the confusion matrix we see that: # * 47 passengers were wrongly classified as having not survived # * 80 were correctly classified as having survived # # The recall (for the non-survivor class) is 73% #
# + [markdown] id="psgdp3q8TLv0" # The 70-30 split recorded the highest accuracy and recall before any optimization technique was applied. Therefore, I will optimize the model built using the 70-30 split.
# + [markdown] id="aukLrlGkN-fn" # **Optimizing the model** # # KNN can be optimized using the following techniques: # * Dimensionality reduction with Linear Discriminant Analysis # * Rescaling our data, which makes the distance metric more meaningful. # * Changing the distance metric for different applications.
# * Implementing weighted voting # * Applying additional appropriate nearest-neighbor techniques # + colab={"base_uri": "https://localhost:8080/"} id="ikaiPba-SxEQ" outputId="e1c89cef-4066-40d8-b3f9-da89708b21f9" # Performing feature scaling scale = MinMaxScaler() X_train1 = scale.fit_transform(X_train1) X_test1 = scale.transform(X_test1) # fitting and making predictions classifier = KNeighborsClassifier() classifier.fit(X_train1, y_train1) y_pred1 = classifier.predict(X_test1) # Evaluating the model print('Accuracy:', accuracy_score(y_test1, y_pred1), '\n') print(confusion_matrix(y_test1, y_pred1), '\n') print( classification_report(y_test1, y_pred1)) # After feature scaling, the accuracy has risen from 73.1% to 80.2% # The recall also increased from 80% to 86% # + id="MI8601nSOKsf" # Hyper parameter tuning param_grid = {'n_neighbors' : np.arange(3,21,2), 'algorithm' : ['auto', 'kd_tree'], 'metric' : ['minkowski', 'euclidean', 'manhattan'], 'weights' : ['uniform', 'distance']} search = GridSearchCV(classifier, param_grid = param_grid, cv = 10, scoring= 'accuracy' ) # + colab={"base_uri": "https://localhost:8080/"} id="HHrl-bQ7POM6" outputId="20d62687-f24f-4a3f-b5ce-68d3c92764cf" # Getting the best parameters search.fit(X_train1, y_train1) print(search.best_params_) print(search.best_score_) # + colab={"base_uri": "https://localhost:8080/"} id="_Ticm0jlPa0a" outputId="cb1dc5e9-75f2-4306-cb7f-b750fad002c5" # Fitting and making predictions using the suggested parameters classifier = KNeighborsClassifier(n_neighbors=11, metric='manhattan') classifier.fit(X_train1, y_train1) y_pred1 = classifier.predict(X_test1) # Evaluating the model print('Accuracy:', accuracy_score(y_test1, y_pred1), '\n') print(confusion_matrix(y_test1, y_pred1), '\n') print( classification_report(y_test1, y_pred1)) # The recall after optimizing the model increased from 86% to 88%. However, # the accuracy if the model did not increase by much # + colab={"base_uri": "https://localhost:8080/"} id="O4dSzENkQ855" outputId="11fc76d1-a905-44e3-c2df-9053235fb077" # Attempting to use dimensionality reduction lda = LDA(n_components=1) X_train1 = lda.fit_transform(X_train1, y_train1) X_test1 = lda.transform(X_test1) classifier.fit(X_train1, y_train1) y_pred1 = classifier.predict(X_test1) # Evaluating the model print('Accuracy:', accuracy_score(y_test1, y_pred1), '\n') print(confusion_matrix(y_test1, y_pred1), '\n') print( classification_report(y_test1, y_pred1)) # With dimensionality reduction there is a reduction in the accuracy of the model # + [markdown] id="CsIUlQ617pCr" # ## 9. 
Challenging the solution # # - I will compare the performance of KNN with Catboost # # + colab={"base_uri": "https://localhost:8080/"} id="yas0kTZWirR9" outputId="996bdf24-5a66-4bb9-802a-02c4db86b335" # Instaling catboost # !pip install catboost # + colab={"base_uri": "https://localhost:8080/"} id="bNZCL3n5i9af" outputId="ae4ea0f5-e145-4817-b3f5-f8842d932e18" # Converting age and fare to integers X['age'] = X['age'].astype('int64') X['fare'] = X['fare'].astype('int64') # Changing the data to categorical columns for i in qcol1: X[i] = X[i].astype('category') titanic.dtypes # Confirming the changes X.dtypes # + colab={"base_uri": "https://localhost:8080/"} id="JRxfEhbXhuVc" outputId="4779bafa-423f-4230-abbd-9b7d8768472a" # Building the model from catboost import CatBoostClassifier # Splitting X and y X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 99) # Building the model cat = CatBoostClassifier(learning_rate=0.2, max_depth=5, iterations=1010) #Fitting the model cat.fit(X_train,y_train, cat_features= qcol1) # Making predictions y_pred_cat = cat.predict(X_test) # Checking for accuracy print('Accuracy:', accuracy_score(y_test, y_pred_cat), '\n') print(confusion_matrix(y_test, y_pred_cat), '\n') print( classification_report(y_test, y_pred_cat)) # With a learning rate of 0.2, max_depth of 5 and 1010 iterations # the accuracy came up to 83.9% # + [markdown] id="QYS02OfFIJDG" # ## 10. Conclusion # # Some of the conclusions that can be made from this project: # # - For this specific dataset, the KNN optimization procedure that worked best was feature scaling. # - Changing the distance metric to Manhattan increased the recall by 2% # - When compared to the model created using catboost, Catboost did better. It recorded a higher recall by 1% and higher accuracy by about 3% # + [markdown] id="v8eH73txK2sl" # # Recommendations # # Some of the recommendations I would make are: # # * When building a model using KNN, perform feature scaling to improve the accuracy of your model. If your data is skewed, perform normalization. If it is normal, perform standardization. # * Try to play around with other algorithms such as Random Forest and Gradient Boost and compare model performance. # * Acquire a dataset with high dimensionality and perform dimensionality reduction on it. Compare its performance before and after the dimensionality reduction. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Test notebook to check installation import binclf binclf.__name__ binclf.__version__ binclf.__path__ from binclf import binclf from binclf import clfviz from binclf import dprep from binclf import eda from binclf import xgbopt 'Test passed.' 
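# A slightly more defensive version of the same installation check (a sketch): import each submodule by name and report anything missing instead of stopping at the first error.
# +
import importlib

for name in ['binclf', 'clfviz', 'dprep', 'eda', 'xgbopt']:
    try:
        importlib.import_module(f'binclf.{name}')
        print(f'binclf.{name}: OK')
    except ImportError as err:
        print(f'binclf.{name}: MISSING ({err})')
# -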
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bs4 import BeautifulSoup as soup from urllib.request import urlopen as ureq from selenium import webdriver import time import re url = "https://www.uoguelph.ca/engineering/mechanical-engineering-restricted-electives-20202021" # + chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--ignore-certificate-errors') chrome_options.add_argument('--incognito') #chrome_options.add_argument('--headless') driver = webdriver.Chrome("C:\\Users\\jerry\\Downloads\\chromedriver", options=chrome_options) # - driver.get(url) # # 1. Collect course link texts for driver to click on page_soup = soup(driver.page_source, 'lxml') containers = page_soup.find("div", {"class": "field-item even"}).findAll("a") link_texts = [container.text for container in containers if bool(re.match("ENGG\*[0-9]{4}", container.text))] len(link_texts) link_texts # # 2. Scrape courses driver.find_element_by_id("becookiebarbuttonid").click() driver.find_element_by_link_text(link_texts[0]).click() #go to page listing all engg courses. Scrape from there page_soup = soup(driver.page_source, 'lxml') containers = page_soup.findAll("div", {"class": "courseblock"}) # Turns out all courses will have to be found and scraped in one webpage listing all engg courses. So that's what the script below will do, using the link texts collected earlier to find the target courses # + counter = 0 course_names = [] course_codes = [] course_descs = [] for link_text in link_texts: for container in containers: if container.find("span", {"class": "text detail-code margin--small text--semibold text--big"}).text.strip() == link_text.strip(): course_codes.append(container.find("span", {"class": "text detail-code margin--small text--semibold text--big"}).text.strip()) course_names.append(container.find("span", {"class": "text detail-title margin--small text--semibold text--big"}).text.strip()) course_descs.append(container.find("p", {"class": "courseblockextra noindent"}).text) counter += 1 continue print("scraped {} courses".format(counter)) # - # # 3. Inspect and write to CSV course_codes course_names course_descs # + import pandas as pd df = pd.DataFrame({ "Course Number": course_codes, "Course Name": course_names, "Course Description": course_descs }) df # - df.to_csv('UGuelph_MechEng_Technical_Electives_Courses.csv', index = False) driver.quit() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="nCc3XZEyG3XV" colab_type="text" # Lambda School Data Science, Unit 2: Predictive Modeling # # # Applied Modeling, Module 1 # # You will use your portfolio project dataset for all assignments this sprint. # # ## Assignment # # Complete these tasks for your project, and document your decisions. # # - [ ] Choose your target. Which column in your tabular dataset will you predict? # - [ ] Choose which observations you will use to train, validate, and test your model. And which observations, if any, to exclude. # - [ ] Determine whether your problem is regression or classification. # - [ ] Choose your evaluation metric. 
# - [ ] Begin with baselines: majority class baseline for classification, or mean baseline for regression, with your metric of choice. # - [ ] Begin to clean and explore your data. # - [ ] Choose which features, if any, to exclude. Would some features "leak" information from the future? # # ## Reading # - [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_ # - [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business) # - [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), **by Lambda DS3 student** . His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook. # - [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb) # - [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by , with video # - [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415) # + id="IxdRE5iqYPkV" colab_type="code" outputId="442984b2-c703-468d-b51d-fed977c9020a" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBM "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 78} from google.colab import files uploaded = files.upload() # + id="DB55aJbefJak" colab_type="code" outputId="746c82b6-9433-425e-fe8b-6db077d39f2f" colab={"base_uri": "https://localhost:8080/", "height": 215} import os, sys in_colab = 'google.colab' in sys.modules if in_colab: # Install required python package: # category_encoders, version >= 2.0 # !pip install --upgrade category_encoders # + id="fOigGm7jMlF8" colab_type="code" outputId="79c49584-7625-49d4-b166-f53ec7b4ce24" colab={"base_uri": "https://localhost:8080/", "height": 233} # !pip install eli5 # + id="Q82PYL8i_9XM" colab_type="code" colab={} import pandas as pd import numpy as np import seaborn as sns from matplotlib import pyplot as plt # %matplotlib inline df = pd.read_csv('Video_Games_Sales_as_at_22_Dec_2016.csv') # + id="wNf-hEp8AYIp" colab_type="code" outputId="fe0ba236-29a6-49c9-82b4-667f0f00f211" colab={"base_uri": "https://localhost:8080/", "height": 35} df.shape # + id="YlndRP3KAasF" colab_type="code" outputId="bc2b14fa-f41b-4f09-c5a6-434c87181789" colab={"base_uri": "https://localhost:8080/", "height": 323} df.isnull().sum() # + 
id="qxVDxuzgE5d_" colab_type="code" outputId="7e2ddf64-fe58-4506-c310-20a2402d42c0" colab={"base_uri": "https://localhost:8080/", "height": 89} df.Platform.unique() # + id="7a1rMHOrHsu2" colab_type="code" outputId="b12ab107-e0fa-4b57-ca3a-9f5f2067b7c2" colab={"base_uri": "https://localhost:8080/", "height": 204} df.head() # + id="u2jqDs4SRFh_" colab_type="code" colab={} ## Keep columns's name simple form df = df.rename(columns={"Year_of_Release": "Year", "NA_Sales": "NA", "EU_Sales": "EU", "JP_Sales": "JP", "Other_Sales": "Other", "Global_Sales": "Global"}) df = df[df["Year"].notnull()] df = df[df["Genre"].notnull()] df["Year"] = df["Year"].apply(int) # + id="G4wO7BPn-abz" colab_type="code" outputId="a9cb6ffe-381a-4ea2-876e-9262316dd352" colab={"base_uri": "https://localhost:8080/", "height": 89} df = df[(df['Platform'] == 'X360') | (df['Platform'] == 'PC')] #Let's double check the value counts to be sure print(pd.value_counts(df["Platform"])) #Let's see the shape of the data again print(df.shape) # + id="8bf8GSNNfYZo" colab_type="code" colab={} ## platforms = {"Playstation" : ["PS", "PS2", "PS3", "PS4"], ## "Xbox" : ["XB", "X360", "XOne"], ## "PC" : ["PC"], ## "Nintendo" : ["Wii", "WiiU"], ## "Portable" : ["GB", "GBA", "GC", "DS", "3DS", "PSP", "PSV"]} # + id="ouMjkdX2FQV8" colab_type="code" outputId="f8bb1735-1457-4258-886d-bfa3eab4c61e" colab={"base_uri": "https://localhost:8080/", "height": 35} from sklearn.model_selection import train_test_split # Split into train & test sets first X_train, X_test = train_test_split(df, train_size=0.8, test_size=0.2, stratify=df['Platform'], random_state=42) # Split X_train into train and val sets X_train, X_val = train_test_split(X_train, train_size=0.8, test_size=0.2, stratify=X_train['Platform'], random_state=42) # Target is Platform on each features target = 'Platform' y_train = X_train[target] y_val = X_val[target] y_test = X_test[target] X_train = X_train.drop(columns=target) X_val = X_val.drop(columns=target) X_test = X_test.drop(columns=target) X_train.shape, y_train.shape, X_val.shape, y_val.shape, X_test.shape, y_test.shape # + id="e20HMVtxFQPw" colab_type="code" outputId="f3e09cb5-f31b-4381-e6c8-e18cd94c3211" colab={"base_uri": "https://localhost:8080/", "height": 233} print('Train:\n', y_train.value_counts(normalize=True)) print('Validation:\n', y_val.value_counts(normalize=True)) print('Test:\n', y_test.value_counts(normalize=True)) # + id="Pu-jpSo7FQF3" colab_type="code" outputId="560e9627-e070-455a-85fa-7002b8a01d10" colab={"base_uri": "https://localhost:8080/", "height": 71} # Normalize y_train y_train.value_counts(normalize=True) # + id="ztbr2T2XJb8A" colab_type="code" outputId="0bf457f5-7501-4fa6-bb3a-43e70c2ccb02" colab={"base_uri": "https://localhost:8080/", "height": 35} # Accuracy Score from sklearn.metrics import accuracy_score majority = y_train.mode()[0] maj_pred = [majority] * len(y_train) accuracy_score(y_train, maj_pred) # + id="d1WwwQL4Ju9G" colab_type="code" outputId="3ca477ea-ff30-4e7e-faf2-a7e9a0ace91a" colab={"base_uri": "https://localhost:8080/", "height": 323} Global = 'Global' numeric = X_train.select_dtypes('number').columns.drop(Global).tolist() for col in numeric: X_train[col] = X_train[col].astype('object') print(X_train.shape) X_train.dtypes # + id="2DpRxN7MLHQe" colab_type="code" outputId="31cef551-f549-4562-b0c1-72c20a4f2c6f" colab={"base_uri": "https://localhost:8080/", "height": 35} for col in numeric: X_val[col] = X_val[col].astype('object') print(X_val.shape) # + id="DkwAbOOwLHOa" colab_type="code" 
outputId="ad3dfd3f-e638-486f-fe32-72b317a38716" colab={"base_uri": "https://localhost:8080/", "height": 35} for col in numeric: X_test[col] = X_test[col].astype('object') print(X_test.shape) # + id="bBFA7BQqLHMe" colab_type="code" outputId="cbced028-ca59-40c8-ef72-af253bf1c1a1" colab={"base_uri": "https://localhost:8080/", "height": 35} #Random Forest import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline # Create ML pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42) ) # Fit to train pipeline.fit(X_train, y_train) # Score on val print('Validation Accuracy', pipeline.score(X_val, y_val)) # + id="LJ6_PWnLZtc2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1a4c3738-f619-46be-f1fa-40cca9cd5978" # XGboost from xgboost import XGBClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy: ', pipeline.score(X_val, y_val)) # + id="VqC8K6v3LHKm" colab_type="code" outputId="be54d4d8-a24c-443d-930c-b1eb4db3c185" colab={"base_uri": "https://localhost:8080/", "height": 143} # Let's do a permutation importance to see the most important features # Rerun the random forest classifier outside the pipeline transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer() ) X_train_transformed = transformers.fit_transform(X_train) X_val_transformed = transformers.transform(X_val) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_train_transformed, y_train) # + id="5LK-Rjv2LHIm" colab_type="code" outputId="71e1d983-0d31-4c53-cf50-efbaacac1408" colab={"base_uri": "https://localhost:8080/", "height": 307} # Let's do eli5 to see which columns to use import eli5 from eli5.sklearn import PermutationImportance permuter = PermutationImportance( model, scoring='accuracy', n_iter=2, random_state=42 ) permuter.fit(X_val_transformed, y_val) feature_names = X_val.columns.tolist() eli5.show_weights( permuter, top=None, # show the permutation importances for all features feature_names=feature_names ) # + id="3ZKcl_stLHE3" colab_type="code" outputId="fdf76bd0-b016-4066-ee9a-f6b124c61e97" colab={"base_uri": "https://localhost:8080/", "height": 179} from sklearn.metrics import classification_report # Calculate y_pred y_pred = pipeline.predict(X_val) print(classification_report(y_val, y_pred)) # + id="l1woA48PLHC4" colab_type="code" outputId="a1f0290a-4ee5-4e1a-be33-92a031912b9f" colab={"base_uri": "https://localhost:8080/", "height": 269} # Let's see the heatmap from sklearn.metrics import confusion_matrix from sklearn.utils.multiclass import unique_labels import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline def plot_confusion_matrix(y_true, y_pred): labels = unique_labels(y_true) columns = [f'Pred {label}' for label in labels] index = [f'Actual {label}' for label in labels] table = pd.DataFrame(confusion_matrix(y_true, y_pred), columns=columns, index=index) return sns.heatmap(table, annot=True, fmt='.0f', cmap='viridis') plot_confusion_matrix(y_val, y_pred); # + id="sQIY-vHMauhy" colab_type="code" outputId="c1ea4db2-f3f3-4f04-8e02-ff1720973c45" colab={"base_uri": "https://localhost:8080/", "height": 1000} y_pred_proba = pipeline.predict_proba(X_val)[:,1] print(y_pred_proba) 
threshold = 0.5 ax = sns.distplot(y_pred_proba) ax.axvline(threshold, color='red'); # + id="a9LDIv_Xds0M" colab_type="code" outputId="2b4c56c4-0694-426d-becb-7123847f1993" colab={"base_uri": "https://localhost:8080/", "height": 35} # confusion matrix for majority baseline majority_class = y_train.mode()[0] y_pred = np.full_like(y_val, fill_value=majority_class) accuracy_score(y_val, y_pred) # + id="4-TA8Jy5dyso" colab_type="code" outputId="ffd80db8-6ae2-43f2-b92f-3d108232a32c" colab={"base_uri": "https://localhost:8080/", "height": 269} plot_confusion_matrix(y_val, y_pred); # + id="xvCOYT9JcFMy" colab_type="code" colab={} # + id="4pXVSEzGd3hn" colab_type="code" outputId="47302577-e44b-4963-86b8-c3be2a29ff76" colab={"base_uri": "https://localhost:8080/", "height": 35} #ROC AUC from sklearn.metrics import roc_auc_score y_pred_proba = pipeline.predict_proba(X_val)[:, 1] roc_auc_score(y_val, y_pred_proba) # + id="O6ZfPUbgbTfF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 331} outputId="4622a2e9-4171-48a2-dc4b-cfdf7910ff51" from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_val=='M', y_pred_proba) plt.plot(fpr, tpr) plt.title('ROC curve') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate'); # + id="aRSwB3bmcZdV" colab_type="code" colab={} from yellowbrick.classifier import ClassPredictionError # Classes assign classes = ['X360', 'PC'] # Encode X_train & X_val encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) # Impute imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) visualizer = ClassPredictionError( RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1), classes=classes ) # + id="jiwGa1DGgF2j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 143} outputId="0c2a4a97-718e-4c2f-94ae-b9ea9b43044c" # Partial Dependence model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) # Fit model model.fit(X_train_imputed, y_train) # + id="y7Xc8yvpgief" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="9e683177-d8f2-413d-8702-ac0a12a31348" # !pip install pdpbox # + id="TgX4SEXUgLem" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 582} outputId="3ffe8072-e4d4-499b-f483-38736d6e9302" # Pdpbox feature for Global # %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'Global' features = X_train.columns.tolist() pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="H0MHds6ngzC0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 582} outputId="6f823a67-c623-40cf-8037-af90e25f3a38" # Feature for NA feature = 'NA' features = X_train.columns.tolist() pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="EaK5ZdkDhAyR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 582} outputId="d85f07fb-b8f1-42e6-c1fa-06a23058c1e8" # Feature for JP feature = 'JP' features = X_train.columns.tolist() pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="HF3HvIHRhOek" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 
582} outputId="c98734ed-b142-4244-8c85-70a2456f4ff7" # Feature for EU feature = 'EU' features = X_train.columns.tolist() pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="brxIFoNjkQMl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 582} outputId="331ede75-a630-4837-e7c4-5834b7252142" # Feature for Other feature = 'Other' features = X_train.columns.tolist() pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="zuvYqdz81AP3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 593} outputId="c77ff82b-c703-480b-b943-66bf29f083d4" # !pip install shap # + id="PnwJ4vw-1AKh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="bcc85ba1-1af6-48cb-d2e4-edc5c49637e3" # Look at predictions vs actuals pred_act = pd.DataFrame({ 'y_pred_proba': y_pred_proba, 'y_val': y_val }) pred_act.head() # + id="ITr6SJU81AFl" colab_type="code" colab={} PC = pred_act['y_val'] == 'PC' X360 = ~PC right = (PC) == (pred_act['y_pred_proba'] > 0.5) wrong = ~right # + id="8wXaIril1ADW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="555eda43-a50e-45b7-9744-5b642ca74df2" # Sample PC & right pred_act[PC & right].sample(n=10, random_state=42).sort_values(by='y_pred_proba') # + id="M-dPZE0x0__W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="7da4f09a-211e-4c00-9724-3e99a36235b2" import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from xgboost import XGBClassifier processor = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_processed = processor.fit_transform(X_train) X_val_processed = processor.transform(X_val) eval_set = [(X_train_processed, y_train), (X_val_processed, y_val)] model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc', early_stopping_rounds=10) # + id="rdXRPSoa0_tW" colab_type="code" colab={} row = X_val.loc[[3803]] # + id="0wh7Lrzl12i3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 192} outputId="081ed3dd-38fc-457e-9cab-b20592af4418" import shap explainer = shap.TreeExplainer(model) row_processed = processor.transform(row) shap_values = explainer.shap_values(row_processed) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row ) # + id="nSrYSq1i12g3" colab_type="code" colab={} # + id="nUpI8jsg12e7" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- class Solution: def averageWaitingTime(self, customers): n = len(customers) cnt = 0 s_time = 0 for s, t in customers: if s_time is None: s_time = s + t cnt += t else: if s_time > s: cnt += (s_time - s) + t cnt += t else: cnt += t s_time += t print(cnt) return cnt / n class Solution: def averageWaitingTime(self, customers): n = len(customers) cnt = 0 s_time = 0 for s, t in customers: if s_time > s: s_time += t else: s_time = s + t cnt += (s_time - s) return cnt / n solution = Solution() solution.averageWaitingTime(customers = [[5,2],[5,4],[10,3],[20,1]]) # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Pydeck Earth Engine Introduction # # This is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. # If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run: # ``` # conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y # source activate pydeck-ee # jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck # jupyter nbextension enable --sys-prefix --py pydeck # ``` # then open Jupyter Notebook with `jupyter notebook`. # Now in a Python Jupyter Notebook, let's first import required packages: from pydeck_earthengine_layers import EarthEngineLayer import pydeck as pdk import requests import ee # ## Authentication # # Using Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. # If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line. # # Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication. try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() # ## Create Map # # Next it's time to create a map. Here we create an `ee.Image` object # Initialize objects ee_layers = [] view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45) # + # %% # Add Earth Engine dataset # Simple regression of year versus NDVI. # Define the start date and position to get images covering Montezuma Castle, # Arizona, from 2000-2010. start = '2000-01-01' end = '2010-01-01' lng = -111.83533 lat = 34.57499 region = ee.Geometry.Point(lng, lat) # Filter to Landsat 7 images in the given time and place, filter to a regular # time of year to avoid seasonal affects, and for each image create the bands # we will regress on: # 1. A 1, so the resulting array has a column of ones to capture the offset. # 2. Fractional year past 2000-01-01. # 3. NDVI. def addBand(image): date = ee.Date(image.get('system:time_start')) yearOffset = date.difference(ee.Date(start), 'year') ndvi = image.normalizedDifference(['B4', 'B3']) return ee.Image(1).addBands(yearOffset).addBands(ndvi).toDouble() images = ee.ImageCollection('LANDSAT/LE07/C01/T1') \ .filterDate(start, end) \ .filter(ee.Filter.dayOfYear(160, 240)) \ .filterBounds(region) \ .map(addBand) # date = ee.Date(image.get('system:time_start')) # yearOffset = date.difference(ee.Date(start), 'year') # ndvi = image.normalizedDifference(['B4', 'B3']) # return ee.Image(1).addBands(yearOffset).addBands(ndvi).toDouble() # }) # Convert to an array. Give the axes names for more readable code. array = images.toArray() imageAxis = 0 bandAxis = 1 # Slice off the year and ndvi, and solve for the coefficients. 
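# Each pixel of the array image now holds an (images x bands) matrix: columns 0-1 are the
# regressors [constant, yearOffset] and column 2 is NDVI. matrixSolve performs a per-pixel
# least-squares solve, so `fit` below is a 2x1 array holding [offset, slope].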
x = array.arraySlice(bandAxis, 0, 2) y = array.arraySlice(bandAxis, 2) fit = x.matrixSolve(y) # Get the coefficient for the year, effectively the slope of the long-term # NDVI trend. slope = fit.arrayGet([1, 0]) view_state = pdk.ViewState(longitude=lng, latitude=lat, zoom=12) ee_layers.append(EarthEngineLayer(ee_object=slope, vis_params={'min':-0.03,'max':0.03})) # - # Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map: r = pdk.Deck(layers=ee_layers, initial_view_state=view_state) r.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="AtWSVIVcV4j6" colab_type="text" # # Celcius to Farenheit equation # + [markdown] colab_type="text" id="F8YVA_634OFk" # $$ f = c \times 1.8 + 32 $$ # + [markdown] id="Nl3Ho_drWD-t" colab_type="text" # # Import TensorFlow 2.x. # + colab_type="code" id="-ZMgCvSRFqxE" colab={} try: # %tensorflow_version 2.x except Exception: pass import tensorflow as tf import tensorflow.keras.layers as layers import tensorflow.keras.models as models import numpy as np np.random.seed(7) import matplotlib.pyplot as plot print(tf.__version__) # + [markdown] id="BS2p8F70WISp" colab_type="text" # # Compute Farenheit using Celcius. # + id="uHuT5KfSU6Gy" colab_type="code" colab={} def celsius_to_fahrenheit(celsius_value): fahrenheit_value = celsius_value * 1.8 + 32 return(fahrenheit_value) # + [markdown] id="0nmpmaxSWQlb" colab_type="text" # # Generate dataset for converting Celcius to Farenheit. # + id="tQlqUK-BWSn0" colab_type="code" colab={} def generate_dataset(number_of_samples=100): celsius_values = [] fahrenheit_values = [] value_range = number_of_samples for sample in range(number_of_samples): celsius_value = np.random.randint(-1*value_range, +1*value_range) fahrenheit_value = celsius_to_fahrenheit(celsius_value) celsius_values.append(celsius_value) fahrenheit_values.append(fahrenheit_value) return(celsius_values, fahrenheit_values) # + colab_type="code" id="gg4pn6aI1vms" colab={} celsius_values, fahrenheit_values = generate_dataset(number_of_samples=100) for index, celsius_value in enumerate(celsius_values): print("{} degrees Celsius = {} degrees Fahrenheit".format(celsius_value, fahrenheit_values[index])) # + [markdown] id="o7nHcURZWgR0" colab_type="text" # # Create artificial neural network model. # + colab_type="code" id="pRllo2HLfXiu" colab={} model = models.Sequential(name='model') model.add(layers.Dense(units=1, input_shape=[1], name='dense_layer')) model.summary() # + [markdown] id="N_AcNn6ZWvdq" colab_type="text" # # Create the optimizer. # + id="_vkNG0zxRLgj" colab_type="code" colab={} from tensorflow.keras.optimizers import Adam learning_rate = 0.2 optimizer = Adam(learning_rate=learning_rate) # + [markdown] id="A8-yJCSyW5YC" colab_type="text" # # Train the model. # + colab_type="code" id="m8YQN1H41L-Y" colab={} model.compile(loss='mean_squared_error', optimizer=optimizer) # + id="oygrqQUQXSkK" colab_type="code" colab={} tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs") # + id="GoYNprorSLr2" colab_type="code" colab={} epochs = 500 # + colab_type="code" id="lpRrl7WK10Pq" colab={} history = model.fit(celsius_values, fahrenheit_values, epochs=epochs, callbacks=[tensorboard_callback], verbose=True) # + [markdown] id="oRYYeXMTW9V5" colab_type="text" # # Predict fahrenheit value using celsius value. 
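# For reference, the exact conversion for this input is $100 \times 1.8 + 32 = 212$, so the trained single-unit model should predict a value close to 212 degrees Fahrenheit.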
# + colab_type="code" id="oxNzL4lS2Gui" colab={} celsius_value = 100.0 fahrenheit_value = model.predict([celsius_value]) fahrenheit_value = fahrenheit_value[0][0] print("{} degrees Celsius = {} degrees Fahrenheit".format(celsius_value, fahrenheit_value)) # + [markdown] id="2crsTW1GXHX_" colab_type="text" # # Print the model weights. # + colab_type="code" id="kmIkVdkbnZJI" colab={} print("Layer variables: {}".format(model.get_layer(index=-1).get_weights())) # + [markdown] id="GCwod_RsYEkx" colab_type="text" # # Visualize the training graphs. # + id="ZGpFdfEbYH9T" colab_type="code" colab={} # %reload_ext tensorboard # %tensorboard --logdir logs # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Python not in the Notebook # We will often want to save our Python classes, for use in multiple Notebooks. # We can do this by writing text files with a .py extension, and then `importing` them. # ### Writing Python in Text Files # You can use a text editor like [Atom](https://atom.io) for Mac or [Notepad++](https://notepad-plus-plus.org) for windows to do this. If you create your own Python files ending in .py, then you can import them with `import` just like external libraries. # You can also maintain your library code in a Notebook, and use %%writefile to create your library. # Libraries are usually structured with multiple files, one for each class. # We group our modules into packages, by putting them together into a folder. You can do this with explorer, or using a shell, or even with Python: import os if 'mazetool' not in os.listdir(os.getcwd()): os.mkdir('mazetool') # + # %%writefile mazetool/maze.py from .room import Room from .person import Person class Maze(object): def __init__(self, name): self.name = name self.rooms = [] self.occupants = [] def add_room(self, name, capacity): result = Room(name, capacity) self.rooms.append(result) return result def add_exit(self, name, source, target, reverse= None): source.add_exit(name, target) if reverse: target.add_exit(reverse, source) def add_occupant(self, name, room): self.occupants.append(Person(name, room)) room.occupancy += 1 def wander(self): "Move all the people in a random direction" for occupant in self.occupants: occupant.wander() def describe(self): for occupant in self.occupants: occupant.describe() def step(self): house.describe() print() house.wander() print() def simulate(self, steps): for _ in range(steps): self.step() # + # %%writefile mazetool/room.py from .exit import Exit class Room(object): def __init__(self, name, capacity): self.name = name self.capacity = capacity self.occupancy = 0 self.exits = [] def has_space(self): return self.occupancy < self.capacity def available_exits(self): return [exit for exit in self.exits if exit.valid() ] def random_valid_exit(self): import random if not self.available_exits(): return None return random.choice(self.available_exits()) def add_exit(self, name, target): self.exits.append(Exit(name, target)) # + # %%writefile mazetool/person.py class Person(object): def __init__(self, name, room = None): self.name=name self.room=room def use(self, exit): self.room.occupancy -= 1 destination=exit.target destination.occupancy +=1 self.room=destination print(self.name, "goes", exit.name, "to the", destination.name) def wander(self): exit = self.room.random_valid_exit() if exit: self.use(exit) def describe(self): print(self.name, 
"is in the", self.room.name) # + # %%writefile mazetool/exit.py class Exit(object): def __init__(self, name, target): self.name = name self.target = target def valid(self): return self.target.has_space() # - # In order to tell Python that our "mazetool" folder is a Python package, # we have to make a special file called `__init__.py`. If you import things in there, they are imported as part of the package: # %%writefile mazetool/__init__.py from .maze import Maze # Python 3 relative import # ### Loading Our Package # We just wrote the files, there is no "Maze" class in this notebook yet: myhouse=Maze('My New House') # But now, we can import Maze, (and the other files will get imported via the chained Import statements, starting from the `__init__.py` file. import mazetool mazetool.exit.Exit from mazetool import Maze house=Maze('My New House') living=house.add_room('livingroom', 2) # Note the files we have created are on the disk in the folder we made: import os os.listdir(os.path.join(os.getcwd(),'mazetool') ) # `.pyc` files are "Compiled" temporary python files that the system generates to speed things up. They'll be regenerated # on the fly when your `.py` files change. # ### The Python Path # We want to `import` these from notebooks elsewhere on our computer: # it would be a bad idea to keep all our Python work in one folder. # **Supplementary material** The best way to do this is to learn how to make our code # into a proper module that we can install. We'll see more on that in a few lectures' time. # Alternatively, we can add a folder to the "Python Path", where python searches for modules: import sys print(sys.path[-3]) print(sys.path[-2]) print(sys.path[-1]) sys.path.append('/home/jamespjh/devel/libraries/python') print(sys.path[-1]) # I've thus added a folder to the list of places searched. If you want to do this permanently, you should set the PYTHONPATH Environment Variable, # which you can learn about in a shell course, or can read about online for your operating system. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Cheat Sheet # # Basic cheatsheet for Python mostly based on the book written by , [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) under the [Creative Commons license](https://creativecommons.org/licenses/by-nc-sa/3.0/) and many other sources. # # ## Read It # # - [Website](https://www.pythoncheatsheet.org) # - [Github](https://github.com/wilfredinni/python-cheatsheet) # - [PDF](https://github.com/wilfredinni/Python-cheatsheet/raw/master/python_cheat_sheet.pdf) # - [Jupyter Notebook](https://mybinder.org/v2/gh/wilfredinni/python-cheatsheet/master?filepath=jupyter_notebooks) # # ## Dataclasses # # `Dataclasses` are python classes but are suited for storing data objects. # This module provides a decorator and functions for automatically adding generated special methods such as `__init__()` and `__repr__()` to user-defined classes. # # ### Features # # 1. They store data and represent a certain data type. Ex: A number. For people familiar with ORMs, a model instance is a data object. It represents a specific kind of entity. It holds attributes that define or represent the entity. # # 2. They can be compared to other objects of the same type. Ex: A number can be greater than, less than, or equal to another number. 
# # Python 3.7 provides a decorator dataclass that is used to convert a class into a dataclass. # # python 2.7 # + class Number: def __init__(self, val): self.val = val obj = Number(2) obj.val # - # with dataclass # + from dataclasses import dataclass @dataclass class Number: val: int obj = Number(2) obj.val # - # ### Default values # # It is easy to add default values to the fields of your data class. # + from dataclasses import dataclass @dataclass class Product: name: str count: int = 0 price: float = 0.0 obj = Product("Python") obj.name # - obj.count obj.price # ### Type hints # # It is mandatory to define the data type in dataclass. However, If you don't want specify the datatype then, use `typing.Any`. # + from dataclasses import dataclass from typing import Any @dataclass class WithoutExplicitTypes: name: Any value: Any = 42 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Task 3: Largest prime factor # This time we are dealing with a prime factorization problem and most of "naive" factorization algorithms are not very effective, so we cannot use bruteforce this time, so, at first, we'll try to improve it. # # This is primitive realisation of a naive factorization algorithm, that has O(∑(Ni)) complexity: # + def is_prime(N): for i in range(2, N): if N % i == 0: return False return True def greatest_prime(N): max_prime = 1 for i in range(2, N): if N % i == 0 and is_prime(i): max_prime = i return max_prime print(greatest_prime(600851475143)) # - # Firstly, we need to decrease the upper bound of each cycle to sqrt(N), cause there are no natural divisors of a number, that exceed sqrt(N). # # And we get an algorithm with O(∑(sqrt(Ni))) complexity, that already able to compute required number quickly: # + import math def is_prime(N): for i in range(2, int((math.sqrt(N)))+1): if N % i == 0: return False return True def greatest_prime(N): max_prime = 1 for i in range(2, int((math.sqrt(N)))+1): if N % i == 0 and is_prime(i): max_prime = i return max_prime print(greatest_prime(600851475143)) # - # But we'll try to implement an algorithm called "Eratosthenes sieve", that has O(n*log(log n)) complexity and can effictively help us find factorization of big integers. 
# # Here is one of the implementations: # + import math def eratosthenes(n): sieve = list(range(n + 1)) sieve[1] = 0 for i in sieve: if i > 1: for j in range(i + i, len(sieve), i): sieve[j] = 0 return sieve outcome_primes = list(filter(lambda x: x != 0, eratosthenes(int(math.sqrt(600851475143))+1))) for i in outcome_primes: if 600851475143 % i == 0: max_prime = i print(max_prime) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ordered Pitch Class Interval (UPCI) import music21 def get_opci(x, y): """ Get Straus' OPCI 'y - x mod12' """ x = music21.pitch.Pitch(x).pitchClass y = music21.pitch.Pitch(y).pitchClass opci = (y - x) % 12 return opci get_opci('C4','G5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # Equivalent kernel computation and visualization # %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.stats as st # + plt.style.use('fivethirtyeight') plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.serif'] = 'Ubuntu' plt.rcParams['font.monospace'] = 'Ubuntu Mono' plt.rcParams['font.size'] = 10 plt.rcParams['axes.labelsize'] = 10 plt.rcParams['axes.labelweight'] = 'bold' plt.rcParams['axes.titlesize'] = 10 plt.rcParams['xtick.labelsize'] = 8 plt.rcParams['ytick.labelsize'] = 8 plt.rcParams['legend.fontsize'] = 10 plt.rcParams['figure.titlesize'] = 12 plt.rcParams['image.cmap'] = 'jet' plt.rcParams['image.interpolation'] = 'none' plt.rcParams['figure.figsize'] = (12, 6) plt.rcParams['lines.linewidth'] = 2 # definisce un vettore di colori colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09'] # - # Definizione delle funzioni di base gaussiane # + # deviazione standard delle funzioni base sd = .4 def gaussian_basis(x, m, s=sd): return np.exp(-((x-m)**2)/(2*s**2)) def vphi(x, d, dom): l = np.linspace(dom[0], dom[1], d+1) mus = [(l[i]+l[i+1])/2.0 for i in range(len(l)-1)] return np.array([gaussian_basis(x, mus[i]) for i in range(d)]).T # - # Genera la matrice delle features e il vettore target # + # numero di elementi da generare n=50 # dominio della feature domain=(-1,1) # - # Plot delle funzioni di base nel dominio fig = plt.figure(figsize=(16,8)) ax = fig.gca() l = np.linspace(domain[0], domain[1], n_coeff+1) mus = [(l[i]+l[i+1])/2.0 for i in range(len(l)-1)] x = np.linspace(domain[0], domain[1], 1000) for i in range(n_coeff): ax.plot(x, gaussian_basis(x, mus[i]), color = colors[i]) plt.xlabel(u'$x$', fontsize=10) plt.ylabel(u'$y$', fontsize=10) plt.title('Gaussian basis functions') plt.xticks(fontsize=10) plt.yticks(fontsize=10) plt.show() # array delle feature generato casualmente nel dominio X=np.random.uniform(domain[0], domain[1], n) # array delle feature generato uniformemente nel dominio #X = np.linspace(-1,1,200) # + # genera il vettore target mediante la funzione f e l'aggiunta di rumore gaussiano def f(x): return 0.7+0.25*x*np.sqrt(abs(np.sin(x))) # sd del rumore noise = .05 #genera target t=np.array([(f(v)+np.random.normal(0,noise,1))[0] for v in X]).reshape(-1,1) # - # Genera immagine di $X$ per 
la regressione n_coeff = 5 Phi = vphi(X,n_coeff, domain) # Plot del dataset # + # iperparametro per il prior alfa=.2 # parametri del prior mu_prior=np.zeros(n_coeff) sigma_prior=np.eye(n_coeff)*alfa # parametro per la verosimiglianza beta=.9 sigma_post = np.linalg.inv(sigma_prior + beta*np.dot(Phi.T, Phi)) mu_post = beta*np.dot(sigma_post, np.dot(Phi.T, t)) # - fig = plt.figure(figsize=(16,8)) ax = fig.gca() ax.scatter(X, t, marker='o', alpha=.8) plt.xlabel(u'$x$', fontsize=10) plt.ylabel(u'$x$', fontsize=10) plt.xticks(fontsize=10) plt.yticks(fontsize=10) plt.show() # Calcolo parametri della distribuzione gaussiana a posteriori # + # iperparametro per il prior alfa=.2 # parametri del prior mu_prior=np.zeros(n_coeff) sigma_prior=np.eye(n_coeff)*alfa # parametro per la verosimiglianza beta=.9 sigma_post = np.linalg.inv(sigma_prior + beta*np.dot(Phi.T, Phi)) mu_post = beta*np.dot(sigma_post, np.dot(Phi.T, t)) # - # Definizione della funzione di kernel equivalente # + def equiv_kernel(x1,x2): return beta*np.dot(vphi(x1,n_coeff,domain).T, np.dot(sigma_post, vphi(x2,n_coeff,domain))) func=np.vectorize(lambda x,y:equiv_kernel(x,y)) # - # Plot della funzione su una griglia di valori n_values = 200 # definisce un array di 200 valori distribuiti lineramente tra 0 e 1 per la dimensione x x = np.linspace(-1, 1, n_values) # definisce un array di 200 valori distribuiti lineramente tra 0 e 1 per la dimensione y y = np.linspace(-1, 1, n_values) # definisce una griglia bidimensionale a partire dagli array per x e y # in X i valori delle ascisse dei punti sulla griglia, in Y i valori delle ordinate XX,YY = np.meshgrid(x, y) # calcola in Z i valori della funzione su tutti i punti della griglia Z=func(XX,YY) fig = plt.figure(figsize=(16,8)) ax=fig.gca() imshow_handle = ax.imshow(Z, origin='lower', extent=(x.min(),x.max(), y.min(), y.max()), aspect='auto', alpha=.8) ax.set_xlabel('$x_1$', fontsize=12) ax.set_ylabel('$x_2$', fontsize=12) plt.xlim(-1,1) plt.ylim(-1,1) ax.xaxis.set_tick_params(labelsize=8) ax.yaxis.set_tick_params(labelsize=8) ax.grid(b=True, which='both', color='0.35') plt.suptitle(u'Equivalent kernel') plt.show() # Andamento in funzione di $x_1$, per diversi valori di $x_2$ a = [.63, .2, -.48] fig = plt.figure(figsize=(16,8)) ax=fig.gca() for i in a: plt.plot(x,Z[int((1+i)*n_values/2),:], label='$x_2=${0:3.2f}'.format(i)) plt.xlim(-1,1) ax.set_xlabel('$x_1$') ax.set_ylabel('$\kappa(x_1,x_2)$') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="Oangp65FQ454" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="493ce295-07c7-4c3b-dfed-969ac3e5c290" import pandas as pd url = 'https://gist.githubusercontent.com/guilhermesilveira/2d2efa37d66b6c84a722ea627a897ced/raw/10968b997d885cbded1c92938c7a9912ba41c615/tracking.csv' dados = pd.read_csv(url) dados.head(5) # + id="acDhh1DdRhJ0" colab_type="code" colab={} x = dados[['home', 'how_it_works', 'contact', 'bought']] y = dados['bought'] # + id="RzA-n4xwSLkw" colab_type="code" colab={} treino_x = x[:75] treino_y = y[:75] teste_x = x[75:] teste_y = y[75:] # + id="F4cpcY6RSnfk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="7da14bf0-3ac0-4ff7-aee7-d8f04f20b25d" from sklearn.svm import LinearSVC modelo = LinearSVC() 
modelo.fit(treino_x,treino_y) # + id="N-kzGOMjT2vU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1913e80e-1294-463d-8561-fbd4f603882e" previsoes = modelo.predict(teste_x) print(previsoes) print(teste_y.values) # + id="x6wT0gWxUGSJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b501ef96-8fa5-4aa8-ba81-508bb2347740" from sklearn.metrics import accuracy_score taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="-1jrCaHJ6CQ9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0e06c2c5-feba-4c32-e967-233d48e7867a" from sklearn.model_selection import train_test_split SEED = 20 treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, random_state=SEED) modelo = LinearSVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="X10-nI9E7u6M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="77b2bdb3-df5a-4927-a1c4-50baf7a29414" print(treino_y.value_counts()) # + id="Ic2xV1Pf7-CR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="c82dc975-dc1a-48eb-c153-40f804a3a3aa" print(teste_y.value_counts()) # + id="r_fW_pMl8Kwb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cd0b00a6-cf8f-4225-e508-dae88428500c" from sklearn.model_selection import train_test_split SEED = 20 treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, random_state=SEED,stratify=y) modelo = LinearSVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="OUnONNLW8TsB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="0c517f86-6859-41cc-9640-43a8675444c6" print(treino_y.value_counts()) print(teste_y.value_counts()) # + [markdown] id="TBvvNylSW40e" colab_type="text" # #Segunda parte # # + id="cEnpbMEb8WeZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="bde1f807-ea99-4fc6-f650-7fbeec775f4b" url = 'https://gist.githubusercontent.com/guilhermesilveira/1b7d5475863c15f484ac495bd70975cf/raw/16aff7a0aee67e7c100a2a48b676a2d2d142f646/projects.csv' dados = pd.read_csv(url) dados.head(5) # + id="-u2A7QTENEB_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="e0583b35-4d27-4c7e-f1e7-4442fbe84bcf" change = { 0:1, 1:0 } dados['finished'] = dados['unfinished'].map(change) dados.head(10) # + id="-1uz0zdrNjE7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 351} outputId="467ac221-444a-438b-aa60-35fb76885716" import seaborn as sns sns.scatterplot(x='expected_hours',y='price',data=dados) # + id="XtzdKh2WODBf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="b3432f08-8c4c-434e-96ce-799802403981" sns.scatterplot(x='expected_hours',y='price',hue= 'finished',data=dados) # + id="fF63aeonONJu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="a08c5144-8e81-4f26-8217-b72ef37de7b2" sns.relplot(x='expected_hours',y='price',col= 'finished',hue= 'finished',data=dados) # + id="ofa_5VUoOyqh" colab_type="code" colab={} x = dados[['expected_hours','price']] y 
= dados['finished'] # + id="QY7ujrKwO-5k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="8477a304-7a64-41fe-ea8b-94dc24de7b5d" from sklearn.model_selection import train_test_split SEED = 5 np.random.seed(SEED) treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) modelo = LinearSVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="Tcf9ovKcPOL2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5aa3b3b3-62de-4f30-946f-33e66068d43a" import numpy as np previsoes_de_base = np.ones(540) taxa_de_acertos = accuracy_score(teste_y,previsoes_de_base) print(f'Taxa de acertos baseline {round(taxa_de_acertos*100,2)}%') # + id="fm5EPaQBU41N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="5d72ebb0-c1bc-4aa8-eb74-6e84df516f6a" sns.scatterplot(x='expected_hours',y='price',hue= teste_y,data=teste_x) # + id="Iae1pss-VNxx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1b376ea0-a0a1-448f-aa2f-cd2286ee7278" x_min = teste_x['expected_hours'].min() x_max = teste_x['expected_hours'].max() y_min = teste_x['price'].min() y_max = teste_x['price'].max() print(x_min, x_max, y_min, y_max) # + id="-EkFSapgVrsq" colab_type="code" colab={} pixels =100 eixo_x = np.arange(x_min, x_max, (x_max-x_min)/pixels) eixo_y = np.arange(y_min, y_max, (y_max-y_min)/pixels) # + id="SM699tgtWRQs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="cab7f80c-ca7f-4cd4-f228-1686f38340cb" xx , yy = np.meshgrid(eixo_x,eixo_y) pontos = np.c_[xx.ravel(),yy.ravel()] pontos # + id="hnmro6byWxZS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="619872a5-6f1a-43e2-c8dd-d67cb4d79a92" z = modelo.predict(pontos) z = z.reshape(xx.shape) z # + [markdown] id="4vadEkpgXtLB" colab_type="text" # #Decision boundary # # + id="-wyzvwC4W_-h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="13c94474-96d3-495f-c70e-85f9efdc11bf" import matplotlib.pyplot as plt plt.contourf(xx,yy,z,alpha=0.3) plt.scatter(teste_x['expected_hours'],teste_x['price'],c=teste_y,s=1) # + id="dkA8r5DRJhPx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="47d31e08-172b-4746-fefb-90101fa1dc34" from sklearn.model_selection import train_test_split from sklearn.svm import SVC SEED = 5 np.random.seed(SEED) treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) modelo = SVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="COBHiD-tJ3Z_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="77e195fe-3c41-44df-e48f-178ca7d7198e" x_min = teste_x['expected_hours'].min() x_max = teste_x['expected_hours'].max() y_min = teste_x['price'].min() y_max = teste_x['price'].max() pixels =100 eixo_x = np.arange(x_min, x_max, (x_max-x_min)/pixels) eixo_y = np.arange(y_min, y_max, (y_max-y_min)/pixels) xx , yy = np.meshgrid(eixo_x,eixo_y) pontos = np.c_[xx.ravel(),yy.ravel()] z = modelo.predict(pontos) z = z.reshape(xx.shape) plt.contourf(xx,yy,z,alpha=0.3) plt.scatter(teste_x['expected_hours'],teste_x['price'],c=teste_y,s=1) # + 
id="0k5odgy1KXkN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e5d2571f-0c30-4515-dba1-40ba9c7dd235" from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.preprocessing import StandardScaler SEED = 5 np.random.seed(SEED) raw_treino_x,raw_teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) scaler = StandardScaler() scaler.fit(raw_treino_x) treino_x = scaler.transform(raw_treino_x) teste_x = scaler.transform(raw_teste_x) modelo = SVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="7HVhac7jLRUV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="6ad068e1-013e-4f7c-bf8a-7ed2e82254a7" data_x = teste_x[:,0] data_y = teste_x[:,1] x_min = data_x.min() x_max = data_x.max() y_min = data_y.min() y_max = data_y.max() pixels =100 eixo_x = np.arange(x_min, x_max, (x_max-x_min)/pixels) eixo_y = np.arange(y_min, y_max, (y_max-y_min)/pixels) xx , yy = np.meshgrid(eixo_x,eixo_y) pontos = np.c_[xx.ravel(),yy.ravel()] z = modelo.predict(pontos) z = z.reshape(xx.shape) plt.contourf(xx,yy,z,alpha=0.3) plt.scatter(data_x,data_y,c=teste_y,s=1) # + [markdown] id="Kq0akc2UM8FL" colab_type="text" # #Terceira parte # + id="fu1uJcvsM7B1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="5c9a00af-a2c0-49d6-9c3a-bf4f75453e7a" url = 'https://gist.githubusercontent.com/guilhermesilveira/4d1d4a16ccbf6ea4e0a64a38a24ec884/raw/afd05cb0c796d18f3f5a6537053ded308ba94bf7/car-prices.csv' dados = pd.read_csv(url) dados.head(5) # + id="GWCIub11NzM2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="97cf3b03-bf77-4cfd-8608-9d89a9ab4633" change = { 'yes':1, 'no':0 } dados['sold'] = dados['sold'].map(change) dados.head(10) # + id="7Z1vM0WiOM9F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="072b2be6-00f0-4297-d25a-ca0c0f00b1b5" from datetime import datetime dados['age'] = datetime.today().year - dados['model_year'] dados['km_per_year'] = dados['mileage_per_year'] * 1.60934 dados = dados.drop(columns=['Unnamed: 0', 'mileage_per_year', 'model_year']) dados.head(10) # + id="GzbLaak_PiW4" colab_type="code" colab={} x = dados[['price' , 'age', 'km_per_year']] y = dados['sold'] # + id="BubdtrQ9P-m9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="3fca6d6a-c6e8-48b2-b166-95fdd344943a" from sklearn.model_selection import train_test_split SEED = 5 np.random.seed(SEED) treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) modelo = LinearSVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="57f7s2M2QiEE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6fbfd079-de89-4c16-9ac2-be10488841a6" from sklearn.dummy import DummyClassifier np.random.seed(SEED) dummy_stratified = DummyClassifier(strategy='stratified') dummy_stratified.fit(treino_x,treino_y) taxa_de_acertos = dummy_stratified.score(teste_x,teste_y) print(f'Taxa de acertos dummy stratified {round(taxa_de_acertos*100,2)}%') # + id="xpvrvvdZRVEN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} 
outputId="bbf672c6-a4d4-47bf-f59c-6e7ed60075ef" from sklearn.dummy import DummyClassifier np.random.seed(SEED) dummy_mostfrequent = DummyClassifier(strategy='most_frequent') dummy_mostfrequent.fit(treino_x,treino_y) taxa_de_acertos = dummy_mostfrequent.score(teste_x,teste_y) print(f'Taxa de acertos dummy most frequent {round(taxa_de_acertos*100,2)}%') # + id="6UbOe_ICSyEZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="85101bf8-4336-4705-8d57-814ef0b480b5" SEED = 5 np.random.seed(SEED) raw_treino_x,raw_teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) scaler = StandardScaler() scaler.fit(raw_treino_x) treino_x = scaler.transform(raw_treino_x) teste_x = scaler.transform(raw_teste_x) modelo = SVC() modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="Uo2p4__qUW0E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3cace667-de94-444d-abc0-03a92e04a54f" from sklearn.tree import DecisionTreeClassifier SEED = 5 np.random.seed(SEED) treino_x,teste_x ,treino_y,teste_y =train_test_split(x,y, test_size=0.25, stratify=y) modelo = DecisionTreeClassifier(max_depth=3) modelo.fit(treino_x,treino_y) previsoes = modelo.predict(teste_x) taxa_de_acertos = accuracy_score(teste_y,previsoes) print(f'Taxa de acertos {round(taxa_de_acertos*100,2)}%') # + id="NLUdnwfFUtfX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="ba41faba-d97d-4f42-a974-94a975bdde27" from sklearn.tree import export_graphviz import graphviz features = x.columns dot_data = export_graphviz(modelo,out_file=None,feature_names=features,filled=True,class_names=['no','yes']) graph = graphviz.Source(dot_data) graph // -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .java // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Java // language: java // name: java // --- // # Anomaly Detection Tutorial // // This guide will show how to use Tribuo’s anomaly detection models to find anomalous events in a toy dataset drawn from a mixture of Gaussians. We'll discuss the options in the LibSVM anomaly detection algorithm (using a one-class nu-SVM) and discuss evaluations for anomaly detection tasks. // // ## Setup // // We'll load in a jar and import a few packages. // %jars ./tribuo-anomaly-libsvm-4.0.0-jar-with-dependencies.jar import org.tribuo.*; import org.tribuo.util.Util; import org.tribuo.anomaly.*; import org.tribuo.anomaly.evaluation.*; import org.tribuo.anomaly.example.AnomalyDataGenerator; import org.tribuo.anomaly.libsvm.*; import org.tribuo.common.libsvm.*; var eval = new AnomalyEvaluator(); // ## Dataset // Tribuo's anomaly detection package comes with a simple data generator that emits pairs of datasets containing 5 features. The training data is free from anomalies, and each example is sampled from a 5 dimensional gaussian with fixed mean and diagonal covariance. The test data is sampled from a mixture of two distributions, the first is the same as the training distribution, and the second uses a different mean for the gaussian (keeping the same covariance for simplicity). All the data points sampled from the second distribution are marked `ANOMALOUS`, and the other points are marked `EXPECTED`. These form the two classes for Tribuo's anomaly detection system. 
You can also use any of the standard data loaders to pull in anomaly detection data. // // The libsvm anomaly detection algorithm requires there are no anomalies in the training data, but this is not required in general for Tribuo's anomaly detection infrastructure. // // We'll sample 2000 points for each dataset, and 20% of the test set will be anomalies. var pair = AnomalyDataGenerator.gaussianAnomaly(2000,0.2); var data = pair.getA(); var test = pair.getB(); // ## Model Training // We'll fit a one-class SVM to our training data, and then use that to determine what things in our test set are anomalous. We'll use an [RBF Kernel](https://en.wikipedia.org/wiki/Radial_basis_function_kernel), and set the kernel width to 1.0. var params = new SVMParameters<>(new SVMAnomalyType(SVMAnomalyType.SVMMode.ONE_CLASS), KernelType.RBF); params.setGamma(1.0); params.setNu(0.1); var trainer = new LibSVMAnomalyTrainer(params); // Training is the same as other Tribuo prediction tasks, just call train and pass the training data. var startTime = System.currentTimeMillis(); var model = trainer.train(data); var endTime = System.currentTimeMillis(); System.out.println(); System.out.println("Training took " + Util.formatDuration(startTime,endTime)); // Unfortunately the LibSVM implementation is a little chatty and insists on writing to standard out, but after that we can see it took about 140ms to run (on my 2020 16" Macbook Pro, you may get slightly different runtimes). We can check how many support vectors are used by the SVM, from the training set of 2000: ((LibSVMAnomalyModel)model).getNumberOfSupportVectors() // So we used 301 datapoints to model the density of the expected data. // ## Model evaluation // Tribuo's infrastructure treats anomaly detection as a binary classification problem with the fixed label set {`EXPECTED`,`ANOMALOUS`}. When we have ground truth labels we can thus measure the true positives (anomalous things predicted as anomalous), false positives (expected things predicted as anomalous), false negatives (anomalous things predicted as expected) and true negatives (expected things predicted as expected), though the latter number is not usually that useful. We can also calculate the usual summary statistics: precision, recall and F1 of the anomalous class. We're going to compare against the ground truth labels from the data generator. var testEvaluation = eval.evaluate(model,test); System.out.println(testEvaluation.toString()); System.out.println(testEvaluation.confusionString()); // We can see that the model has no false negatives, and so perfect recall, but has a precision of 0.62, so approximately 62% of the positive predictions are true anomalies. This can be tuned by changing the width of the gaussian kernel which changes the range of values which are considered to be expected. The confusion matrix presents the same results in a more common form for classification tasks. // // We plan to further expand Tribuo's anomaly detection functionality to incorporate other algorithms in the future. 
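# The Tribuo tutorial above is Java; as a rough cross-check only, the same one-class nu-SVM workflow can be sketched in Python with scikit-learn's OneClassSVM. The data generation below is an assumption meant to mirror gaussianAnomaly(2000, 0.2) — 5-D Gaussians, 20% anomalies drawn from a shifted mean — not Tribuo's exact generator or API.
# +
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=(2000, 5))          # expected points only
test_expected = rng.normal(loc=0.0, scale=1.0, size=(1600, 5))  # same distribution as training
test_anomalous = rng.normal(loc=3.0, scale=1.0, size=(400, 5))  # shifted mean -> anomalies
test = np.vstack([test_expected, test_anomalous])
y_true = np.concatenate([np.zeros(1600), np.ones(400)])         # 1 = anomalous

# RBF kernel with gamma=1.0 and nu=0.1, matching the parameters used in the tutorial
ocsvm = OneClassSVM(kernel="rbf", gamma=1.0, nu=0.1).fit(train)
y_pred = (ocsvm.predict(test) == -1).astype(int)                # OneClassSVM returns -1 for outliers

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
# -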
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip3 install Pillow kfp --upgrade --user # + import random import string from src.mnist.src import katib_launch_args, converter, resource_provider, tfjoblaunch_args_provider import kfp from kfp import components from kfp.components import func_to_container_op from kfp import dsl # - # ### Prerequiste: # # 1. Create a aws-secret with `AmazonS3FullAccess` policy in `kubeflow` namespace. # # ```yaml # kind: Secret # metadata: # name: aws-secret # namespace: kubeflow # type: Opaque # data: # AWS_ACCESS_KEY_ID: YOUR_BASE64_ACCESS_KEY # AWS_SECRET_ACCESS_KEY: YOUR_BASE64_SECRET_ACCESS # ``` # # > Note: To get base64 string, try `echo -n $AWS_ACCESS_KEY_ID | base64` # # ### Replace example to your S3 bucket name # + import random, string HASH = ''.join([random.choice(string.ascii_lowercase) for n in range(16)] + [random.choice(string.digits) for n in range(16)]) AWS_REGION = 'us-east-2' mnist_bucket = '{}-kubeflow-pipeline-data'.format(HASH) # !aws s3 mb s3://$mnist_bucket --region $AWS_REGION #mnist_bucket= "e2e-mnist-example" s3_bucket_path = 's3://{}'.format(mnist_bucket) # - # ### Replace aws_region with your region aws_region = "us-east-2" # ### Build Kubeflow Pipeline # + namespace='kubeflow' @dsl.pipeline( name="End to end pipeline", description="An end to end example including hyperparameter tuning, train and inference." ) def mnist_pipeline( name="mnist-{{workflow.uid}}", namespace=namespace, step="1000", s3bucketexportpath="", ttlSecondsAfterFinished=-1, tfjobTimeoutMinutes=60, deleteAfterDone=False): # step 1: create a Katib experiment to tune hyperparameters objectiveConfig, algorithmConfig, parameters, trialTemplate, metricsCollectorSpec = \ katib_launch_args.argugments_provide(objective_type="minimize", objective_goal=0.001, objective_metrics="loss", algorithm="random", parameters_lr_min="0.01", parameters_lr_max="0.03", parameters_batchsize=["16", "32", "64"], tf_train_steps="200", image="chuckshow/mnist-tf-pipeline:latest", worker_num=3) katib_experiment_launcher_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/katib-launcher/component.yaml') op1 = katib_experiment_launcher_op( experiment_name=name, experiment_namespace=namespace, parallel_trial_count=3, max_trial_count=12, objective=str(objectiveConfig), algorithm=str(algorithmConfig), trial_template=str(trialTemplate), parameters=str(parameters), metrics_collector=str(metricsCollectorSpec), delete_finished_experiment=False) # step 1.5: convert Katib best parameteres into string convert_op = func_to_container_op(converter.convert_mnist_experiment_result) op2 = convert_op(op1.output) # step2: create a TFJob Launcher to train your model with best hyperparameter tuned by Katib tfjob_launcher_op = components.load_component_from_file("./src/mnist/launcher/component.yaml") chief, worker = tfjoblaunch_args_provider.tfjoblauncher_args(step=step, s3bucketexportpath=s3bucketexportpath, args=op2.output, aws_region=aws_region) train = tfjob_launcher_op( name=name, namespace=namespace, ttl_seconds_after_finished=ttlSecondsAfterFinished, worker_spec=worker, chief_spec=chief, tfjob_timeout_minutes=tfjobTimeoutMinutes, delete_finished_tfjob=deleteAfterDone, ) # step 3: model inferencese by Tensorflow Serving HASH = 
''.join([random.choice(string.ascii_lowercase) for n in range(16)] + [random.choice(string.digits) for n in range(16)]) servingdeploy_name = 'mnist-model' + HASH deploy = resource_provider.tfservingdeploy_resource(namespace=namespace, s3bucketexportpath=s3bucketexportpath, servingdeploy_name=servingdeploy_name, aws_region=aws_region) deployment = dsl.ResourceOp( name="deploy", k8s_resource=deploy, ).after(train) servingsvc_name = 'mnist-service' serviceresource = resource_provider.tfservingsvc_resource(namespace=namespace, servingdeploy_name=servingdeploy_name, servingsvc_name=servingsvc_name) service = dsl.ResourceOp( name="service", k8s_resource=serviceresource ).after(deployment) # step 4: mnist ui deploy ui_name = 'mnist-ui' + HASH uideployresource = resource_provider.uideploy_resource(namespace=namespace, ui_name=ui_name) uideploy = dsl.ResourceOp( name="uideploy", k8s_resource=uideployresource ).after(train) uiserviceresource = resource_provider.uisvc_resource(namespace=namespace, ui_name=ui_name) uiservice = dsl.ResourceOp( name="uiservice", k8s_resource=uiserviceresource ).after(uideploy) uivirtualserviceresource = resource_provider.uivirtualsvc_resource(namespace=namespace, ui_name=ui_name) uivirtualservice = dsl.ResourceOp( name="uivirtualservice", k8s_resource=uivirtualserviceresource ).after(uiservice) # - # ### Submit the pipeline pipeline = kfp.Client().create_run_from_pipeline_func(mnist_pipeline, arguments={"s3bucketexportpath":'{}/export'.format(s3_bucket_path)}) # ### Invoke serving API via Python client # + import tensorflow as tf from tensorflow import keras # Helper libraries import numpy as np import os import subprocess import argparse import random import json import requests endpoint = "http://mnist-service.{}.svc.cluster.local:8500/v1/models/mnist:predict".format(namespace) # Prepare test dataset fashion_mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # scale the values to 0.0 to 1.0 train_images = train_images / 255.0 test_images = test_images / 255.0 # reshape for feeding into the model train_images = train_images.reshape(train_images.shape[0], 28, 28, 1) test_images = test_images.reshape(test_images.shape[0], 28, 28, 1) class_names = ['0','1','2','3','4','5','6','7','8','9'] # Random generate one image rando = random.randint(0,len(test_images)-1) data = json.dumps({"signature_name": "serving_default", "instances": test_images[rando:rando+1].tolist()}) print('Data: {} ... {}'.format(data[:50], data[len(data)-52:])) # HTTP call headers = {"content-type": "application/json"} json_response = requests.post(endpoint, data=data, headers=headers) predictions = json.loads(json_response.text)['predictions'] print(predictions) title = 'The model thought this was a class {}, and it was actually a class {}'.format( test_labels[rando], predictions[0]['classes']) print('\n') print(title) # - # ### Invoke serving API via UI # # Open your_ALB_endpoint + `/mnist/${namespace}/ui/` to visit mnist UI page. 
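# Before calling the predict endpoint (or the UI), it can help to confirm the deployment is actually serving. TensorFlow Serving exposes a model-status endpoint at `/v1/models/<model_name>`; the host below reuses the in-cluster service name and the `namespace` variable from the cells above — adjust it if your service or port differs.
# +
import requests

status_url = "http://mnist-service.{}.svc.cluster.local:8500/v1/models/mnist".format(namespace)
status = requests.get(status_url, timeout=10)
status.raise_for_status()
print(status.json())  # the served version should report state "AVAILABLE"
# -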
# ### Clean up resources in the terminal before re-running the Pipeline # # Kubectl delete svc mnist-service -n kubeflow # !aws s3 rb s3://$mnist_bucket --force # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import math def load_day(day): header = ['timestamp', 'line_id', 'direction', 'jrny_patt_id', 'time_frame', 'journey_id', 'operator', 'congestion', 'lon', 'lat', 'delay', 'block_id', 'vehicle_id', 'stop_id', 'at_stop'] types = {'timestamp': np.int64, 'journey_id': np.int32, 'congestion': np.int8, 'lon': np.float64, 'lat': np.float64, 'delay': np.int8, 'vehicle_id': np.int32, 'at_stop': np.int8} file_name = 'data/siri.201301{0:02d}.csv'.format(day) df = pd.read_csv(file_name, header=None, names=header, dtype=types, parse_dates=['time_frame'], infer_datetime_format=True) null_replacements = {'line_id': 0, 'stop_id': 0} df = df.fillna(value=null_replacements) df['line_id'] = df['line_id'].astype(np.int32) df['stop_id'] = df['stop_id'].astype(np.int32) df['timestamp'] = pd.to_datetime(df['timestamp'], unit='us') return df def haversine_np(lon1, lat1, lon2, lat2): """ Calculate the great circle distance between two points on the earth (specified in decimal degrees) All args must be of equal length. Taken from here: https://stackoverflow.com/questions/29545704/fast-haversine-approximation-python-pandas#29546836 """ lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2 #c = 2 * np.arcsin(np.sqrt(a)) c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1.0 - a)) meters = 6378137.0 * c return meters def calculate_durations(data_frame, vehicle_id): one_second = np.timedelta64(1000000000, 'ns') dv = data_frame[data_frame['vehicle_id']==vehicle_id] ts = dv.timestamp.values dtd = ts[1:] - ts[:-1] dt = np.zeros(len(dtd) + 1) dt[1:] = dtd / one_second return dt def calculate_distances(data_frame, vehicle_id): dv = data_frame[data_frame['vehicle_id']==vehicle_id] lat = dv.lat.values lon = dv.lon.values dxm = haversine_np(lon[1:], lat[1:], lon[:-1], lat[:-1]) dx = np.zeros(len(dxm) + 1) dx[1:] = dxm return dx def delta_location(lat, lon, bearing, meters): delta = meters / 6378137.0 theta = math.radians(bearing) lat_r = math.radians(lat) lon_r = math.radians(lon) lat_r2 = math.asin(math.sin(lat_r) * math.cos(delta) + math.cos(lat_r) * math.sin(delta) * math.cos(theta)) lon_r2 = lon_r + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat_r), math.cos(delta) - math.sin(lat_r) * math.sin(lat_r2)) return math.degrees(lat_r2), math.degrees(lon_r2) def x_meters_to_degrees(meters, lat, lon): lat2, lon2 = delta_location(lat, lon, 90, meters) return abs(lon - lon2) def y_meters_to_degrees(meters, lat, lon): lat2, lon2 = delta_location(lat, lon, 0, meters) return abs(lat - lat2) def calculate_Q(lat, lon, sigma_speed): Q = np.zeros((4,4), dtype=np.float) Q[2,2] = x_meters_to_degrees(sigma_speed, lat, lon) ** 2 Q[3,3] = y_meters_to_degrees(sigma_speed, lat, lon) ** 2 return Q def calculate_R(lat, lon, sigma): R = np.zeros((2,2), dtype=np.float) R[0,0] = x_meters_to_degrees(lat, lon, sigma) R[1,1] = y_meters_to_degrees(lat, lon, sigma) return R def calculate_P(lat, lon, sigma, sigma_speed): P = np.zeros((4,4), dtype=np.float) P[0,0] = x_meters_to_degrees(sigma, lat, lon) ** 2 P[1,1] = 
y_meters_to_degrees(sigma, lat, lon) ** 2 P[2,2] = x_meters_to_degrees(sigma_speed, lat, lon) ** 2 P[3,3] = y_meters_to_degrees(sigma_speed, lat, lon) ** 2 return P def calculate_Kalman_gain(P, C, R): num = np.matmul(P, np.transpose(C)) den = np.matmul(C, num) + R return np.matmul(num, np.linalg.pinv(den)) def predict_step(prev_x, prev_P): next_x = np.matmul(measurement, prev_x) next_P = np.matmul(np.matmul(measurement, prev_P), np.transpose(measurement)) + calculate_Q(prev_x[0,0], prev_x[0,1], sigma_s) return next_x, next_P def update_step(predicted_x, predicted_P, C, y): lon = predicted_x[0,0] lat = predicted_x[0,1] R = calculate_R(lat, lon, sigma_x) K = calculate_Kalman_gain(predicted_P, C, R) updated_x = predicted_x + np.matmul(K, y - np.matmul(C, predicted_x)) I = np.eye(4) updated_P = np.matmul(I - np.matmul(K, C), predicted_P) return updated_x, updated_P lon = -6.236852 lat = 53.425327 delta_location(lat, lon, 90, 5) calculate_Q(lat, lon, sigma_speed=1.0) calculate_P(lat, lon, sigma=4, sigma_speed=1.0) df = load_day(1) # Measurement matrix measurement = np.array([[1, 0, 0, 0], [0, 1, 0, 0]]) state = np.array([[0], [0], [0], [0]]) sigma_s = 0.1 sigma_x = 4.0 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_pytorch_p36 # language: python # name: conda_pytorch_p36 # --- # # Huggingface SageMaker-SDK - T5 Summarization example # 1. [Introduction](#Introduction) # 2. [Development Environment and Permissions](#Development-Environment-and-Permissions) # 1. [Installation](#Installation) # 2. [Permissions](#Permissions) # 3. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket) # 3. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job) # 1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job) # 2. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3) # 3. 
[Summarization on Local](#Summarization-on-Local) # # Introduction # # このnotebookはHuggingFaceの[run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py)を日本語データで実行します。 # # **51行目の`check_min_version("4.10.0.dev0")`をコメントアウトしたことと、494-509行目の`postprocess_text`を日本語で評価できるように変更しましたが、それ以外はSageMakerで実行するために変更を加えた部分はありません** # # 事前訓練モデルには[sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese)を使用し、データは[wikiHow日本語要約データセット](https://github.com/Katsumata420/wikihow_japanese)を使用します。 # # このデモでは、AmazonSageMakerのHuggingFace Estimatorを使用してSageMakerのトレーニングジョブを実行します。 # # _**NOTE: このデモは、SagemakerNotebookインスタンスで動作検証しています**_ # # Development Environment and Permissions # ## Installation # # このNotebookはSageMakerの`conda_pytorch_p36`カーネルを利用しています。 # # **_Note: このnotebook上で推論テストを行う場合、(バージョンが古い場合は)pytorchのバージョンアップが必要になります。_** # !pip install --upgrade pip # !pip install --upgrade torch # !pip install "sagemaker>=2.48.1" "transformers==4.9.2" "datasets[s3]>=1.8.0" --upgrade # !pip install sentencepiece # ## Permissions # # ローカル環境でSagemakerを使用する場合はSagemakerに必要な権限を持つIAMロールにアクセスする必要があります。[こちら](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)を参照してください # + import sagemaker sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it not exists sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() role = sagemaker.get_execution_role() sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}") # - # # データの準備 # # 事前に`create_wikihow_dataset.ipynb`を実行してwikiHow日本語要約データセットを用意してください。 import pandas as pd from tqdm import tqdm train = pd.read_json('./wikihow_japanese/data/output/train.jsonl', orient='records', lines=True) train dev = pd.read_json('./wikihow_japanese/data/output/dev.jsonl', orient='records', lines=True) dev test = pd.read_json('./wikihow_japanese/data/output/test.jsonl', orient='records', lines=True) test # [run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py)の実行にあたり、データを少し修正します。 # データのフォーマットは`csv`もしくは`json`で以下のように「原文(Text)」と「要約(summary)」を含む必要があります。 # # ``` # text,summary # "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder","I'm sitting in a room where I'm waiting for something to happen" # "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.","I'm a gardener and I'm a big fan of flowers." # "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share","It's that time of year again." # ``` # # ``` # {"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. 
But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"} # {"text": "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.", "summary": "I'm a gardener and I'm a big fan of flowers."} # {"text": "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share", "summary": "It's that time of year again."} # ``` # # 詳細は[こちら](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)にあります。 # ここでは`jsonl`のファイルを`csv`に変換します。 train.to_csv('train.csv', index=False) dev.to_csv('dev.csv', index=False) test.to_csv('test.csv', index=False) # ## Uploading data to `sagemaker_session_bucket` # # S3へデータをアップロードします。 # + s3_prefix = 'samples/datasets/wikihow_csv' input_train = sess.upload_data( path='train.csv', key_prefix=f'{s3_prefix}/train' ) input_validation = sess.upload_data( path='dev.csv', key_prefix=f'{s3_prefix}/valid' ) input_test = sess.upload_data( path='test.csv', key_prefix=f'{s3_prefix}/test' ) # + # データのUpload path print(input_train) print(input_validation) print(input_test) # - # # Fine-tuning & starting Sagemaker Training Job # # `HuggingFace`のトレーニングジョブを作成するためには`HuggingFace` Estimatorが必要になります。 # Estimatorは、エンドツーエンドのAmazonSageMakerトレーニングおよびデプロイタスクを処理します。 Estimatorで、どのFine-tuningスクリプトを`entry_point`として使用するか、どの`instance_type`を使用するか、どの`hyperparameters`を渡すかなどを定義します。 # # # ```python # huggingface_estimator = HuggingFace( # entry_point='train.py', # source_dir='./scripts', # base_job_name='huggingface-sdk-extension', # instance_type='ml.p3.2xlarge', # instance_count=1, # transformers_version='4.4', # pytorch_version='1.6', # py_version='py36', # role=role, # hyperparameters={ # 'epochs': 1, # 'train_batch_size': 32, # 'model_name':'distilbert-base-uncased' # } # ) # ``` # # SageMakerトレーニングジョブを作成すると、SageMakerは`huggingface`コンテナを実行するために必要なec2インスタンスの起動と管理を行います。 # Fine-tuningスクリプト`train.py`をアップロードし、`sagemaker_session_bucket`からコンテナ内の`/opt/ml/input/data`にデータをダウンロードして、トレーニングジョブを実行します。 # # # ```python # /opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32 # ``` # # `HuggingFace estimator`で定義した`hyperparameters`は、名前付き引数として渡されます。 # # またSagemakerは、次のようなさまざまな環境変数を通じて、トレーニング環境に関する有用なプロパティを提供しています。 # # * `SM_MODEL_DIR`:トレーニングジョブがモデルアーティファクトを書き込むパスを表す文字列。トレーニング後、このディレクトリのアーティファクトはモデルホスティングのためにS3にアップロードされます。 # # * `SM_NUM_GPUS`:ホストで使用可能なGPUの数を表す整数。 # # * `SM_CHANNEL_XXXX`:指定されたチャネルの入力データを含むディレクトリへのパスを表す文字列。たとえば、HuggingFace estimatorのfit呼び出しで`train`と`test`という名前の2つの入力チャネルを指定すると、環境変数`SM_CHANNEL_TRAIN`と`SM_CHANNEL_TEST`が設定されます。 # # このトレーニングジョブをローカル環境で実行するには、`instance_type='local'`、GPUの場合は`instance_type='local_gpu'`で定義できます(GPUの場合は追加で設定が必要になります[SageMakerのドキュメント](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode)を参照してください)。 # **_Note:これはSageMaker Studio内では機能しません_** # requirements.txtはトレーニングジョブの実行前に実行されます(コンテナにライブラリを追加する際に使用します) # ファイルはここを参照しています。https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/requirements.txt # 1点異なる部分は transformers >= 4.8.0 でHuggingFaceコンテナのバージョンが古く本家に追いついていないため、バージョンアップを行なっています。 # !pygmentize ./scripts/requirements.txt # トレーニングジョブで実行されるコード # 
51行目の`check_min_version("4.10.0.dev0")`をコメントアウトした以外はSageMakerで実行するために変更を加えた部分はありません # !pygmentize ./scripts/run_summarization.py # + from sagemaker.huggingface import HuggingFace # hyperparameters, which are passed into the training job hyperparameters={ 'model_name_or_path':'sonoisa/t5-base-japanese', 'train_file': '/opt/ml/input/data/train/train.csv', 'validation_file': '/opt/ml/input/data/validation/dev.csv', 'test_file': '/opt/ml/input/data/test/test.csv', 'text_column': 'src', 'summary_column': 'tgt', 'max_source_length': 512, 'max_target_length': 64, 'do_train': 'True', 'do_eval': 'True', 'do_predict': 'True', 'predict_with_generate': 'True', 'num_train_epochs': 5, 'per_device_train_batch_size': 2, 'per_device_eval_batch_size': 2, 'use_fast_tokenizer': 'False', 'save_steps': 500, 'save_total_limit': 1, 'output_dir':'/opt/ml/model', } # - # ## Creating an Estimator and start a training job # estimator huggingface_estimator = HuggingFace( role=role, entry_point='run_summarization.py', source_dir='./scripts', instance_type='ml.p3.8xlarge', instance_count=1, volume_size=200, transformers_version='4.6', pytorch_version='1.7', py_version='py36', hyperparameters=hyperparameters, ) # + # starting the train job with our uploaded datasets as input huggingface_estimator.fit({'train': input_train, 'validation': input_validation, 'test': input_test}) # ml.p3.8xlarge, 5 epochでの実行時間の目安 # Training seconds: 2894 # Billable seconds: 2894 # - # ## Download-fine-tuned-model-from-s3 # # + import os OUTPUT_DIR = './output/' if not os.path.exists(OUTPUT_DIR): os.makedirs(OUTPUT_DIR) # + from sagemaker.s3 import S3Downloader # 学習したモデルのダウンロード S3Downloader.download( s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located local_path='.', # local path where *.targ.gz is saved sagemaker_session=sess # sagemaker session used for training the model ) # + # OUTPUT_DIRに解凍します # !tar -zxvf model.tar.gz -C output # - # ## Summarization on Local # + import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained('output/') tokenizer = AutoTokenizer.from_pretrained('sonoisa/t5-base-japanese') model.eval() # + text = dev.src[1] inputs = tokenizer.encode(text, return_tensors="pt", max_length=1024, truncation=True) print('='*5,'原文', '='*5) print(text) print('-'*5, '要約', '-'*5) with torch.no_grad(): summary_ids = model.generate(inputs, max_length=1024, do_sample=False, num_beams=1) summary = tokenizer.decode(summary_ids[0]) print(summary) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pickle import pandas as pd import sys sys.path.insert(0,'..') from classification import classification as cl filename = '../data/svm_model.sav' model = pickle.load(open(filename, 'rb')) df = pd.read_pickle("../data/Features.pkl") df.shape df.describe() rs = cl.standardize(df) rs.shape rs.describe() cl.evaluate_training(rs) rs['CLASS_1'].head() rs['CLASS_1'].head(15) rs['CLASS_1'] = rs['CLASS_1'].astype('category') cat_column = rs.select_dtypes(['category']).columns rs[cat_column] = rs[cat_column].apply(lambda x: x.cat.codes) rs[cat_column].head(15) df.head() new = df['CLASS_1'] new.head() new_df = df.drop(['CLASS_1'], axis=1) new_df.head() new.to_list() new_df.insert (2, 'NEW_CLASS', new.tolist()) new_df.head() # --- # jupyter: # jupytext: # text_representation: # extension: 
.py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from time import time # + num_elements = 10000000 my_list = list(range(num_elements)) my_array = np.array(range(num_elements)) # + tick = time() myList = [x + 1 for x in my_list] print(f"Time taken by the list is {time()-tick}") # + tick = time() my_array += 1 print(f"Time taken by the list is {time()-tick}") # - test_scores = [70,65,95,88] type(test_scores) score = np.array(test_scores) type(score) score.mean() # + import numpy as np income = np.array([75000, 55000, 88000, 125000, 64000, 97000]) # - income.mean() income = np.append(income, 12000000) income income.mean() np.median(income) income.std() test_score = [70, 65, 95, 88] scores = np.array(test_score) type(scores) scores.std() scores.max() scores.min() scores.sum() np.random.seed(seed=60) random_square = np.random.rand(5, 5) random_square random_square[0] random_square[0, :] random_square[:, 0] random_square[0, 0] random_square[0][0] random_square[2, 3] random_square.mean() random_square[0].mean() random_square[:, -1].mean() # %%time np.random.seed(seed=60) big_matrix = np.random.rand(100000, 100) # %%time big_matrix = np.random.rand(100000, 100) big_matrix.mean() np.arange(1, 101) np.arange(1, 101).reshape(20, 5) mat1 = np.arange(1, 101).reshape(20, 5) mat1 - 50 mat1 * 10 mat1 + mat1 mat1 * mat1 np.dot(mat1, mat1.T) import pandas as pd test_dict = {'Corey':[63,75,88], 'Kevin':[48,98,92], 'Akshay': [87, 86, 85]} df = pd.DataFrame(test_dict) df df = df.T df df.columns = ['Quiz_1', 'Quiz_2', 'Quiz_3'] df df.iloc[0] df.iloc[0,:] df['Quiz_1'] df.Quiz_1 df.iloc[:, 0] df = pd.DataFrame(test_dict) df df[0:2] df = df.T df df.columns = ['Quiz_1', 'Quiz_2', 'Quiz_3'] df # + rows = ['Corey', 'Kevin'] cols = ['Quiz_2', 'Quiz_3'] df_spring = df.loc[rows, cols] df_spring # - df.iloc[[0,1], [1,2]] df['Quiz_Avg'] = df.mean(axis=1) df df['Quiz_4'] = [92, 95, 88] df del df['Quiz_Avg'] df df # + df_new = pd.DataFrame({ 'Quiz_1':[np.NaN], 'Quiz_2':[np.NaN], 'Quiz_3': [np.NaN], 'Quiz_4':[71]}, index=['Adrian']) df_new # - df = pd.concat([df, df_new]) df df['Quiz_Avg'] = df.mean(axis=1, skipna=True) df df.describe() df.info() df['Quiz_4'].astype(float) import pandas as pd housing_df = pd.read_csv('boston_housing.csv') housing_df housing_df.info() housing_df.describe() housing_df.shape housing_df.isnull().any() housing_df.loc[:, housing_df.isnull().any()] housing_df.loc[:, housing_df.isnull().any()].describe() housing_df['AGE'] = housing_df['AGE'].fillna(housing_df.mean()) housing_df['CHAS'] = housing_df['CHAS'].fillna(0) housing_df = housing_df.fillna(housing_df.median()) housing_df.info() # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - housing_df = pd.read_csv('boston_housing.csv') sns.set() title = 'Median Boston Housing Prices' plt.figure(figsize=(10,6)) plt.hist(housing_df['MEDV']) plt.title(title, fontsize=15) plt.xlabel('1980 Median Value in Thousands') plt.ylabel('Count') plt.savefig(title, dpi=300) plt.show() def my_hist(column, title, xlab, ylab): plt.figure(figsize=(10,6)) plt.hist(column) plt.title(title, fontsize=15) plt.xlabel(xlab) plt.ylabel(ylab) plt.savefig(title, dpi=300) plt.show() my_hist(housing_df['RM'], 'Average Number of Rooms in Boston Households', 'Average Number of Rooms', 'Count') def my_hist(column, title, xlab, ylab, bins=10, alpha=0.7, color='c'): title = title plt.figure(figsize=(10,6)) 
plt.hist(column, bins=bins, range=(3,9), alpha=alpha, color=color) plt.title(title, fontsize=15) plt.xlabel(xlab) plt.ylabel(ylab) plt.savefig(title, dpi=300) plt.show() my_hist(housing_df['RM'], 'Average Number of Rooms in Boston', 'Average Number of Rooms', 'Count', bins=6) x = housing_df['RM'] y = housing_df['MEDV'] plt.scatter(x, y) plt.show() # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - housing_df = pd.read_csv('boston_housing.csv') housing_df.corr() sns.set() corr = housing_df.corr() plt.figure(figsize=(8,6)) sns.heatmap( corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values, cmap='Blues', linewidths=1.25, alpha=0.8 ) plt.show() plt.figure(figsize=(10,7)) sns.regplot(x,y) plt.show() # + import statsmodels.api as sm X = sm.add_constant(x) model = sm.OLS(y, x) est = model.fit() print(est.summary()) # - x = housing_df['RM'] y = housing_df['MEDV'] plt.boxplot(x) plt.show() plt.violinplot(x) plt.show() # ### Activity 24 # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline sns.set() # - ukstats_df = pd.read_csv('UKStatistics.csv') ukstats_df.head() ukstats_df.info() ukstats_df.describe() ukstats_df.shape plt.hist(ukstats_df['Actual Pay Floor (£)']) plt.show() x = ukstats_df['Salary Cost of Reports (£)'] y = ukstats_df['Actual Pay Floor (£)'] plt.scatter(x, y) plt.show() plt.violinplot(x) plt.show() plt.boxplot(x) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # Investigating prevalence of duplicate searches caused by filtering # + import os from weco_datascience.reporting import get_recent_data # - df = get_recent_data(config=os.environ, n=10000, index="metrics-conversion-prod") # what time period does this start from? df = df.sort_values("@timestamp") df.head() # + # How many searches are there? # searches=df.loc[(df(['page.name']=='images') | df(['page.name']=='works'))] searches = df.loc[ (df["page.name"] == "images") | (df["page.name"] == "works") & (df["page.query.query"].notnull()) & (df["properties.totalResults"].notnull()) & (df["page.query.page"].isnull()) ] len(searches) # + # How many searches unique to users are there? searches.sort_values( by=[ "session.id", "page.query.query", ] ) dedup = searches.drop_duplicates( subset=["session.id", "page.query.query"], keep="first" ) len(dedup) # - # Of 28,500 search events from the period 23-26/5/21, sorting by session.id and query term revealed only 11,182 # unique session-specific queries. That's 39% of the total search figure that is used in the dashboards. 
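# A small follow-up check (not in the original notebook): compute that share directly from the two frames defined above, rather than quoting the rounded counts.
# +
unique_share = len(dedup) / len(searches)
print(f"{len(dedup):,} unique session/query pairs out of {len(searches):,} searches ({unique_share:.0%})")
# -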
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os, sys import numpy as np import pandas as pd draw_predict_1 = np.load('../preds/draw_predict_1.npy') draw_predict_2 = np.load('../preds/draw_predict_2.npy') draw_predict_3 = np.load('../preds/draw_predict_3.npy') draw_predict_4 = np.load('../preds/draw_predict_4.npy') draw_predict_5 = np.load('../preds/draw_predict_5.npy') draw_predict_6 = np.load('../preds/draw_predict_6.npy') draw_predict_7 = np.load('../preds/draw_predict_7.npy') draw_predict_4_b = np.load('../preds/draw_predict_4_b.npy') draw_predict_5_b = np.load('../preds/draw_predict_5_b.npy') draw_predict_6_b = np.load('../preds/draw_predict_6_b.npy') draw_predict_7_b = np.load('../preds/draw_predict_7_b.npy') # + # Load dataset info path_to_train = '../../Human_Protein_Atlas/input/train/' data = pd.read_csv('../../Human_Protein_Atlas/input/train.csv') train_dataset_info = [] for name, labels in zip(data['Id'], data['Target'].str.split(' ')): train_dataset_info.append({ 'path':os.path.join(path_to_train, name), 'labels':np.array([int(label) for label in labels])}) train_dataset_info = np.array(train_dataset_info) # + submit = pd.read_csv('../../Human_Protein_Atlas/input/sample_submission.csv') predicted = [] for i, _ in enumerate(submit['Id']): score_predict = (0.9*(0.7*(0.5*(0.3*draw_predict_1[i]+0.35*draw_predict_2[i])+1.5*0.35*draw_predict_3[i])+ 0.3*(0.25*draw_predict_4[i]+0.25*draw_predict_5[i]+0.25*draw_predict_6[i]+0.25*draw_predict_7[i]))+ 0.1*(0.25*draw_predict_4_b[i]+0.25*draw_predict_5_b[i]+0.25*draw_predict_6_b[i]+0.25*draw_predict_7_b[i])) label_predict = np.arange(28)[score_predict>=0.2] str_predict_label = ' '.join(str(l) for l in label_predict) predicted.append(str_predict_label) submit['Predicted'] = predicted submit.to_csv('blend_InceptionV3_InceptionResNetV2_9.csv', index=False) # + th_t = np.array([0.565,0.39,0.55,0.345,0.33,0.39,0.33,0.45,0.38,0.39, 0.34,0.42,0.31,0.38,0.49,0.50,0.38,0.43,0.46,0.40, 0.39,0.505,0.37,0.47,0.41,0.545,0.32,0.1]) submit = pd.read_csv('../../Human_Protein_Atlas/input/sample_submission.csv') predicted = [] for i, _ in enumerate(submit['Id']): score_predict = (0.9*(0.7*(0.5*(0.3*draw_predict_1[i]+0.35*draw_predict_2[i])+1.5*0.35*draw_predict_3[i])+ 0.3*(0.25*draw_predict_4[i]+0.25*draw_predict_5[i]+0.25*draw_predict_6[i]+0.25*draw_predict_7[i]))+ 0.1*(0.25*draw_predict_4_b[i]+0.25*draw_predict_5_b[i]+0.25*draw_predict_6_b[i]+0.25*draw_predict_7_b[i])) label_predict = np.arange(28)[score_predict>=th_t] str_predict_label = ' '.join(str(l) for l in label_predict) predicted.append(str_predict_label) submit['Predicted'] = predicted submit.to_csv('blend_InceptionV3_InceptionResNetV2_th_t.csv', index=False) # - submit.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="TtScpUGdiK6C" outputId="7d7dd2bb-a0f5-4fa0-efb4-55a49af1a4d7" from urllib import request module_url = f"https://raw.githubusercontent.com/Perez-AlmendrosC/dontpatronizeme/master/semeval-2022/dont_patronize_me.py" module_name = module_url.split('/')[-1] print(f'Fetching {module_url}') #with open("file_1.txt") as f1, 
open("file_2.txt") as f2 with request.urlopen(module_url) as f, open(module_name,'w') as outf: a = f.read() outf.write(a.decode('utf-8')) # + id="MMDPTEeEx2tR" outputId="ae2de600-591e-4206-b696-907c8c6bb15a" colab={"base_uri": "https://localhost:8080/"} # !pip install transformers # + id="3P7dRyVFhjZH" import torch import transformers import pandas as pd import numpy as np import spacy from tqdm import tqdm from dont_patronize_me import DontPatronizeMe # + id="_Hmi4xUTh5uT" dpm = DontPatronizeMe('.', 'dontpatronizeme_pcl.tsv') dpm.load_task1() dpm.train_task1_df.rename(columns={'country': 'sentence'}, inplace=True) sentences = dpm.train_task1_df.sentence.values labels = dpm.train_task1_df.label.values # + colab={"base_uri": "https://localhost:8080/", "height": 162, "referenced_widgets": ["9c157147e40a4fe5b8b49864acee35a7", "", "0918c6b0c4a9478f87e14f722ad7c823", "", "0cf4ed9619ab4a389200c07ca05160ff", "18f8ea78cb1e4e0abe75d8269f2c21c7", "9bd5a32168d04af88eb043b8d5ec4ffc", "956e6a9fd6874ee5832159c393f3ef0f", "bb17dac4c2e747988b7766efd90115e9", "8bf538eb955948ebb03044e905f29aec", "", "", "f54fdd3d34734ea395847d40ecf6d636", "067eaf6f85944d0b8a262f22e96292bc", "", "608ddbe423a64e269d25a4122d64d890", "4954e633eba1486c991251f7a22d15a3", "", "", "71f7a4ea7e6d4372856abb6b8e8f32ff", "e13b184c86714233a1d6019bb5edf903", "", "3ebb0839d9da4e699439e93f585e3a56", "3828fa499f81486f8fca1b9aee38fd99", "d9d8680bbaa04d25a2d733eb8abb6fb9", "", "7401ccde239a46d2b3085829dea497f7", "9eb3268b61e54997bdc03c0a4d7145b6", "375c5a52359b47b0b556b00e570dcb40", "", "ec52570d56cc4406afa890ae4af7a3c6", "", "de3beffb739549d9a590538383244809", "", "", "972a2de70be64c6793cc1bb01e216018", "", "3a8a75c9f4ca4daba1fe29fa5c165b2c", "98a747dd560145719373d0f592c41549", "", "", "1bc6fe1a93134320b07e885ee23a3250", "5594d285cdeb4dfba32d0e8e2d30511e", ""]} id="FoyQOaRwig1f" outputId="6e9404e8-2b4d-44a9-ad0c-c10aee5c67ba" from transformers import BertTokenizer # Load the BERT tokenizer. print('Loading BERT tokenizer...') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) # + colab={"base_uri": "https://localhost:8080/"} id="LS2i4WNii8nj" outputId="5177c5ed-0821-4812-e24f-7f4d5b644398" # Print the original sentence. print(' Original: ', sentences[0]) # Print the sentence split into tokens. print('Tokenized: ', tokenizer.tokenize(sentences[0])) # Print the sentence mapped to token ids. print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0]))) # + colab={"base_uri": "https://localhost:8080/"} id="tco2vHMEi_pz" outputId="36ba0de1-f7c1-4a30-f9d4-ab5e58ce3581" max_len = 0 # For every sentence... for sent in tqdm(sentences): # Tokenize the text and add `[CLS]` and `[SEP]` tokens. input_ids = tokenizer.encode(sent, add_special_tokens=True) # Update the maximum sentence length. max_len = max(max_len, len(input_ids)) print('Max sentence length: ', max_len) # + colab={"base_uri": "https://localhost:8080/"} id="Y0WQLZOnjEZ2" outputId="a56d34a6-0952-403c-88f6-c51fe056b932" # Tokenize all of the sentences and map the tokens to thier word IDs. input_ids = [] attention_masks = [] # For every sentence... for sent in tqdm(sentences): # `encode_plus` will: # (1) Tokenize the sentence. # (2) Prepend the `[CLS]` token to the start. # (3) Append the `[SEP]` token to the end. # (4) Map tokens to their IDs. # (5) Pad or truncate the sentence to `max_length` # (6) Create attention masks for [PAD] tokens. encoded_dict = tokenizer.encode_plus( sent, # Sentence to encode. 
add_special_tokens = True, # Add '[CLS]' and '[SEP]' max_length = 64, # Pad & truncate all sentences. pad_to_max_length = True, return_attention_mask = True, # Construct attn. masks. return_tensors = 'pt', # Return pytorch tensors. ) # Add the encoded sentence to the list. input_ids.append(encoded_dict['input_ids']) # And its attention mask (simply differentiates padding from non-padding). attention_masks.append(encoded_dict['attention_mask']) # Convert the lists into tensors. input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) labels = torch.tensor(labels) # Print sentence 0, now as a list of IDs. print('Original: ', sentences[0]) print('Token IDs:', input_ids[0]) # + colab={"base_uri": "https://localhost:8080/"} id="WucQg45ZjaJu" outputId="ba32d1b0-c398-48af-91b6-9f51a2efc536" from torch.utils.data import TensorDataset, random_split # Combine the training inputs into a TensorDataset. dataset = TensorDataset(input_ids, attention_masks, labels) # Create a 90-10 train-validation split. # Calculate the number of samples to include in each set. train_size = int(0.9 * len(dataset)) val_size = len(dataset) - train_size # Divide the dataset by randomly selecting samples. train_dataset, val_dataset = random_split(dataset, [train_size, val_size]) print('{:>5,} training samples'.format(train_size)) print('{:>5,} validation samples'.format(val_size)) # + colab={"base_uri": "https://localhost:8080/"} id="BILxqOtAjekw" outputId="2d878e01-5101-490e-f48d-9669a5c2b876" from torch.utils.data import DataLoader, WeightedRandomSampler, SequentialSampler # The DataLoader needs to know our batch size for training, so we specify it # here. For fine-tuning BERT on a specific task, the authors recommend a batch # size of 16 or 32. batch_size = 32 #lbl = train_dataset.targets lbl = [] for i in tqdm(range(len(train_dataset))): lbl.append(train_dataset[i][2].item()) class_sample_count = np.array( [len(np.where(lbl == t)[0]) for t in np.unique(lbl)]) weight = 1. / class_sample_count samples_weight = np.array([weight[t] for t in lbl]) samples_weight = torch.from_numpy(samples_weight) samples_weigth = samples_weight.double() sampler = WeightedRandomSampler(samples_weight, len(samples_weight)) # Create the DataLoaders for our training and validation sets. # We'll take training samples in random order. train_dataloader = DataLoader( train_dataset, # The training samples. sampler = sampler, # Select batches randomly batch_size = batch_size # Trains with this batch size. ) # For validation the order doesn't matter, so we'll just read them sequentially. validation_dataloader = DataLoader( val_dataset, # The validation samples. sampler = SequentialSampler(val_dataset), # Pull out batches sequentially. batch_size = batch_size # Evaluate with this batch size. ) # + colab={"base_uri": "https://localhost:8080/", "height": 154, "referenced_widgets": ["00caf0ec303d4ce2a8467022bd3efdbb", "", "", "", "cd14ac34c2c641fdb0882e799d5ceb5a", "", "0fe9fac2882345e699c317c12a506d13", "", "", "103175a562164f49a525618d39bc3372", ""]} id="uv0PWdoqjfgR" outputId="41d41038-ca6e-40a5-8ee1-1655014749b7" from transformers import BertForSequenceClassification, AdamW, BertConfig # Load BertForSequenceClassification, the pretrained BERT model with a single # linear classification layer on top. model = BertForSequenceClassification.from_pretrained( "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab. num_labels = 2, # The number of output labels--2 for binary classification. 
# You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False, # Whether the model returns all hidden-states. ) # Tell pytorch to run this model on the GPU. model.cuda(); # + id="iOqBCz_jjimm" # Note: AdamW is a class from the huggingface library (as opposed to pytorch) # I believe the 'W' stands for 'Weight Decay fix" optimizer = AdamW(model.parameters(), lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5 eps = 1e-8 # args.adam_epsilon - default is 1e-8. ) from transformers import get_linear_schedule_with_warmup # Number of training epochs. The BERT authors recommend between 2 and 4. # We chose to run for 4, but we'll see later that this may be over-fitting the # training data. epochs = 4 # Total number of training steps is [number of batches] x [number of epochs]. # (Note that this is not the same as the number of training samples). total_steps = len(train_dataloader) * epochs # Create the learning rate scheduler. scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) # + id="Zya6ioUOjpMX" import numpy as np # Function to calculate the accuracy of our predictions vs labels def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) import time import datetime def format_time(elapsed): ''' Takes a time in seconds and returns a string hh:mm:ss ''' # Round to the nearest second. elapsed_rounded = int(round((elapsed))) # Format as hh:mm:ss return str(datetime.timedelta(seconds=elapsed_rounded)) # + colab={"base_uri": "https://localhost:8080/"} id="FMR_tHhIkCV7" outputId="d8c5f44d-ab77-435a-eb0c-457d6712141b" import torch # If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") # + colab={"base_uri": "https://localhost:8080/"} id="Hwj7bRK4jsAT" outputId="f25e38c3-6c4e-42f8-b622-030215b79f01" import random import numpy as np # This training code is based on the `run_glue.py` script here: # https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128 # Set the seed value all over the place to make this reproducible. seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) # We'll store a number of quantities such as training and validation loss, # validation accuracy, and timings. training_stats = [] # Measure the total training time for the whole run. total_t0 = time.time() # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_train_loss = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. 
# `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % 40 == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t0) # Report progress. print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Always clear any previously calculated gradients before performing a # backward pass. PyTorch doesn't do this automatically because # accumulating the gradients is "convenient while training RNNs". # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch) model.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # It returns different numbers of parameters depending on what arguments # arge given and what flags are set. For our useage here, it returns # the loss (because we provided labels) and the "logits"--the model # outputs prior to activation. model_output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss, logits = model_output.loss, model_output.logits # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_train_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over all of the batches. avg_train_loss = total_train_loss / len(train_dataloader) # Measure how long this epoch took. training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(training_time)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables total_eval_accuracy = 0 total_eval_loss = 0 nb_eval_steps = 0 # Evaluate data for one epoch for batch in validation_dataloader: # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using # the `to` method. 
# # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): # Forward pass, calculate logit predictions. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. model_output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) (loss, logits) = model_output.loss, model_output.logits # Accumulate the validation loss. total_eval_loss += loss.item() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences, and # accumulate it over all batches. total_eval_accuracy += flat_accuracy(logits, label_ids) # Report the final accuracy for this validation run. avg_val_accuracy = total_eval_accuracy / len(validation_dataloader) print(" Accuracy: {0:.2f}".format(avg_val_accuracy)) # Calculate the average loss over all of the batches. avg_val_loss = total_eval_loss / len(validation_dataloader) # Measure how long the validation run took. validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) # Record all statistics from this epoch. training_stats.append( { 'epoch': epoch_i + 1, 'Training Loss': avg_train_loss, 'Valid. Loss': avg_val_loss, 'Valid. 
Accur.': avg_val_accuracy, 'Training Time': training_time, 'Validation Time': validation_time } ) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) # + id="yXahfuusj89y" prediction_dataloader = validation_dataloader # + colab={"base_uri": "https://localhost:8080/"} id="AiwrJF7xos7J" outputId="eed2c67a-33e1-4f33-8f71-e4d0dc242b6d" # Prediction on test set print('Predicting labels for {:,} test sentences...'.format(len(input_ids))) # Put model in evaluation mode model.eval() # Tracking variables predictions , true_labels = [], [] # Predict for batch in prediction_dataloader: # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Telling the model not to compute or store gradients, saving memory and # speeding up prediction with torch.no_grad(): # Forward pass, calculate logit predictions outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) logits = outputs[0] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Store predictions and true labels predictions.append(logits) true_labels.append(label_ids) print(' DONE.') # + id="X954ykTko2o2" true_labels = np.concatenate(true_labels, axis=0) predictions = np.array([1 if x[0] < 0.5 else 0 for x in np.concatenate(predictions, axis=0)]) # + colab={"base_uri": "https://localhost:8080/"} id="YJXvEUPIo634" outputId="b4e12b30-f3c5-44f1-9a64-d2caca1f06c0" from sklearn.metrics import classification_report print(classification_report(true_labels, predictions)) # + id="X6siD6qZpsjm" def predict(paragraph): input_ids = [] attention_masks = [] for sent in tqdm([paragraph]): encoded_dict = tokenizer.encode_plus( sent, # Sentence to encode. add_special_tokens = True, # Add '[CLS]' and '[SEP]' max_length = 64, # Pad & truncate all sentences. pad_to_max_length = True, return_attention_mask = True, # Construct attn. masks. return_tensors = 'pt', # Return pytorch tensors. ) input_ids.append(encoded_dict['input_ids']) attention_masks.append(encoded_dict['attention_mask']) input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) dataset = TensorDataset(input_ids, attention_masks) dataloader = DataLoader( dataset, batch_size = 1 ) for batch in dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask = batch with torch.no_grad(): outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) logits = outputs[0].detach().cpu().numpy() predictions = np.array([1 if x[0] < 0.5 else 0 for x in np.concatenate([logits], axis=0)]) return predictions[0] # + id="sJlo1IsEwKey" outputId="430e0171-406b-487e-f710-7771ce29faa9" colab={"base_uri": "https://localhost:8080/"} tekst = input() predict(tekst) # + id="cjKvHRP1xlXg" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- import pandas as pd from pprint import pprint import json # + # ICD_list table must be re-built from, presumably, ICD_for_Enc due to some entries being # pre-18th birthday. ICD_list entries are not timestamped! 
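# The tables below are listed once, then split into person-level and encounter-level groups so each group can be aggregated by its own key (Person_Nbr vs. Enc_Nbr).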
table_names = ['all_encounter_data', 'demographics', 'encounters', 'family_hist_for_Enc', 'family_hist_list', 'ICD_for_Enc', 'ICD_list', 'macula_findings_for_Enc', 'SL_Lens_for_Enc', 'SNOMED_problem_list', 'systemic_disease_for_Enc', 'systemic_disease_list'] person_data = ['demographics','family_hist_list', 'systemic_disease_list', 'SNOMED_problem_list'] encounter_data = ['all_encounter_data', 'encounters', 'family_hist_for_Enc', 'ICD_for_Enc', 'macula_findings_for_Enc', 'SL_Lens_for_Enc', 'systemic_disease_for_Enc'] # - path = 'C:\\Users\\asimon6\\Downloads\\ICO_data\\' # read tables into dataframes dfs = [pd.read_pickle(path + name + '.pickle') if name != 'ICD_list' else None for name in table_names ] # rename columns in all dataframes to avoid unicode decode error for df in dfs: if df is not None: df.columns = [col.decode("utf-8-sig") for col in df.columns] for df in dfs: if df is not None: print df.columns.values # + # aggregate all person data by Person_ID persons = {} id_key ='Person_Nbr' for person_df_name in person_data: # print person_df_name, df_index = table_names.index(person_df_name) person_data_df = dfs[df_index] if person_data_df is not None: # print len(person_data_df) for i, dfrow in person_data_df.iterrows(): rowdict = dict(dfrow) id_value = rowdict[id_key] for k, v in rowdict.iteritems(): if isinstance(v, pd.tslib.Timestamp): rowdict[k] = v.toordinal() if id_value not in persons: persons[id_value] = {} persons[id_value].setdefault(person_df_name, []).append(rowdict) with open('allPersons.json', 'w') as fh: json.dump(persons, fh) # - for person_id in persons: pprint(persons[person_id]) break # + # find the range of count of rows nested under person id for all keys(table names) all_count_by_table_name_person = {} for person_id in persons: person = persons[person_id] for table_name in person: all_count_by_table_name_person.setdefault(table_name, set()).add(len(person[table_name])) min_max_count_by_table_name_person = {} for table_name in all_count_by_table_name_person: min_max_count_by_table_name_person[table_name] = {} min_max_count_by_table_name_person[table_name]['minimum'] = min(all_count_by_table_name_person[table_name]) min_max_count_by_table_name_person[table_name]['maximum'] = max(all_count_by_table_name_person[table_name]) pprint(min_max_count_by_table_name_person) # + # aggregate all encounter data by Enc_ID encounters = {} id_key = 'Enc_Nbr' for encounter_df_name in encounter_data: # print encounter_df_name, df_index = table_names.index(encounter_df_name) encounter_df = dfs[df_index] if encounter_df is not None: # print len(encounter_df) # print encounter_df.columns.values for i, dfrow in encounter_df.iterrows(): rowdict = dict(dfrow) id_value = rowdict[id_key] for k, v in rowdict.iteritems(): if isinstance(v, pd.tslib.Timestamp): rowdict[k] = v.toordinal() if id_value not in encounters: encounters[id_value] = {} encounters[id_value].setdefault(encounter_df_name, []).append(rowdict) with open('allEncounters.json', 'w') as fh: json.dump(encounters, fh) # - for encounter_id in encounters: if 'ICD_for_Enc' in encounters[encounter_id]: if len(encounters[encounter_id]['ICD_for_Enc']) >1: pprint(encounters[encounter_id]) break # + # find the range of count of rows nested under encounter id for all keys(table names) all_count_by_table_name_encounter = {} for encounter_id in encounters: encounter = encounters[encounter_id] for table_name in encounter: all_count_by_table_name_encounter.setdefault(table_name, set()).add(len(encounter[table_name])) 
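# Reduce each table's set of per-encounter row counts to a minimum/maximum range, mirroring the person-level summary above.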
min_max_count_by_table_name_encounter = {} for table_name in all_count_by_table_name_encounter: min_max_count_by_table_name_encounter[table_name] = {} min_max_count_by_table_name_encounter[table_name]['minimum'] = min(all_count_by_table_name_encounter[table_name]) min_max_count_by_table_name_encounter[table_name]['maximum'] = max(all_count_by_table_name_encounter[table_name]) pprint(min_max_count_by_table_name_encounter) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import plotly.express as px import plotly.offline as pyo import plotly.graph_objects as po pyo.init_notebook_mode() # - import pandas as pd df = pd.read_csv("../data/AMST3_SepOct.csv") df df_part = df[['Time stop', 'Organic', 'Sulfate', 'Ammonium', 'Nitrate', 'Chloride']] df_part fig = px.scatter_matrix(df_part) #fig.show() #fig.data fig.update_traces(marker=dict(size=2,opacity=0.5, color='black')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="UBJPZa7MH0Tk" colab_type="code" colab={} import pandas as pd import numpy as np import matplotlib.pyplot as plt # + id="FuyDeoJiXjPa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="d82c31f4-4b02-4cb9-db63-8003481d0bbe" from sklearn.cluster import KMeans from sklearn.preprocessing import MinMaxScaler import seaborn as sns # %matplotlib inline # + id="gObnwhnFXk_9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} outputId="63dbae23-cbe2-406e-fac7-a936c0f654b4" cust = pd.read_csv("customers.csv") cust # + id="fN-r6nMXXm_2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="5e5714a2-ad99-469b-957a-318d1a06f9b1" cust.isnull().sum() # + id="W7s1XXWjXpW8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="78ec3699-969d-4a3c-c959-a0463b596d9b" cust.duplicated().sum() # + id="tdyE5n5FXq-q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="668a2587-c567-43eb-bb57-bde2813a05a2" cust.dtypes # + id="A-29os0NXsrr" colab_type="code" colab={} def statistics(variable): if variable.dtype == "int64" or variable.dtype == "float64": return pd.DataFrame([[variable.name, np.mean(variable), np.std(variable), np.median(variable), np.var(variable)]], columns = ["Variable", "Mean", "Standard Deviation", "Median", "Variance"]).set_index("Variable") else: return pd.DataFrame(variable.value_counts()) # + id="PU5AuygJXupd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="af83c918-7e4d-46f8-90e6-1dc0612251fe" spending = cust["Spending Score (1-100)"] statistics(spending) # + id="rLw0Z9k5XwjO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="440c9b5b-a16d-429a-bec9-2bc33795415c" sns.distplot(cust["Spending Score (1-100)"], bins=10, kde_kws={"lw": 1.5, "alpha":0.8, "color":list(map(float, np.random.rand(3,)))}, hist_kws={"linewidth": 1.5, "edgecolor": "grey", "alpha": 0.4, "color":list(map(float, np.random.rand(3,)))}) # + id="OHgsqrZjXzUW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} 
outputId="b3996a87-10b2-4f73-c63d-395605d31c30" income = cust["Annual Income (k$)"] statistics(income) # + id="5v7tTrb_X3Ca" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="24d82de2-ace1-44d8-b9c9-e998dfdf0d70" sns.distplot(cust["Annual Income (k$)"], bins=10, kde_kws={"lw": 1.5, "alpha":0.8, "color":list(map(float, np.random.rand(3,)))}, hist_kws={"linewidth": 1.5, "edgecolor": "grey", "alpha": 0.4, "color":list(map(float, np.random.rand(3,)))}) # + id="WjbwUiiTX5Af" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="aec9c5c4-f747-4614-922f-7d342ff6fe6a" gender = cust["Gender"] statistics(gender) # + id="G6J9bMdfX7FM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="d3b576c6-ab5b-4126-cfa3-267d9185997a" gender = pd.DataFrame(cust["Gender"]) sns.catplot(x=gender.columns[0], kind="count", palette="spring", data=gender) # + id="GJX9IQ33X9sa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} outputId="19e86692-cfee-46ef-9b6e-591f28372ebf" dummies = pd.get_dummies(cust['Gender']) dummies # + id="dUrMGpTpX_ZD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} outputId="68f56e0d-f4e6-4235-ba7d-c5810d192bd6" cust = cust.merge(dummies, left_index=True, right_index=True) cust # + id="kx09neGaYDQu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} outputId="3126c00d-7615-4608-e5ae-a7c73758811d" new_cust = cust.iloc[:,2:] new_cust # + id="Z5BGbil6YIHV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 299} outputId="e16cee9d-df3a-4869-c9ba-d4c043c10105" wcss = [] for i in range(1,11): km = KMeans(n_clusters=i,init='k-means++', max_iter=300, n_init=10, random_state=0) km.fit(new_cust) wcss.append(km.inertia_) plt.plot(range(1,11),wcss, c="#c51b7d") plt.title('Elbow Method', size=14) plt.xlabel('Number of clusters', size=12) plt.ylabel('wcss', size=14) plt.show() # + id="1VeEw9GOYL14" colab_type="code" colab={} kmeans = KMeans(n_clusters=5, init='k-means++', max_iter=10, n_init=10, random_state=0) # + id="y3z9vjZkYO_l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="d53ef0d7-ef42-4986-b124-fe0a0f8bbb61" kmeans.fit(new_cust) # + id="6234-E7WYTCF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="12f15c10-0fa5-42ad-dcf8-665c14c2a31d" centroids = pd.DataFrame(kmeans.cluster_centers_, columns = ["Age", "Annual Income", "Spending", "Male", "Female"]) centroids.index_name = "ClusterID" centroids["ClusterID"] = centroids.index centroids = centroids.reset_index(drop=True) centroids # + id="kGUMC3WoYWAu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e58365c9-6f78-4089-b06b-af13cadd22af" X_new = np.array([[43, 76, 56, 0, 1]]) new_customer = kmeans.predict(X_new) print(f"The new customer belongs to segment(Cluster) {new_customer[0]}") # + id="WhJD_UpOYYCX" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Symbulate Lab 1 - Probability Spaces # This Jupyter notebook provides a template for you to fill in. Complete the parts as indicated. To run a cell, hold down SHIFT and hit ENTER. 
# In this lab you will use the Python package [Symbulate](https://github.com/dlsun/symbulate). You should have already completed Section 1 of the "Getting Started with Symbulate Tutorial" **ADD LINK** and read sections 1 through 3 of the [Symbulate documentation](https://dlsun.github.io/symbulate/index.html). A few specific links to the documentation are provided below, but it will probably make more sense if you read the documentation from start to finish. **Whenever possible, you should use Symbulate commands, not general Python code.** # # To use Symbulate, you must first run (SHIFT-ENTER) the following commands. from symbulate import * # %matplotlib inline # ## Part I. Introduction to Symbulate, and conditional versus unconditional probability # A deck of 16 cards contains 4 cards in each of four suits ['clubs', 'diamonds', 'hearts', 'spades']. The deck is shuffled and two cards are drawn in sequence. We are interested in the following questions. # # 1. What is the probability that the first card drawn is a heart? # 1. What is the probability that the second card drawn is a heart? # 1. If the first card drawn is a heart, what is the probability that the second card drawn is a heart? # # Before proceeding, give your best guess of each of these probabilites. # We'll use simulation to obtain approximations to the probabilities in the questions above. First we define the deck of cards (we only care about the suits for this exercise). cards = ['club', 'diamond', 'heart', 'spade'] * 4 # 4 cards of each suit len(cards) # Now we define a [`BoxModel`](https://dlsun.github.io/symbulate/probspace.html#boxmodel) probability space corresponding to drawing two cards (`size=2`) from the deck at random. We'll assume that the cards are drawn without replacement (`replace=False`). We also want to keep track of which card was drawn first and which second (`order_matters=True`). P = BoxModel(cards, size=2, replace=False, order_matters=True) # The `.draw()` method simulates a single outcome from the probability space. Note that each outcome is an ordered pair of cards. P.draw() # Many outcomes can be simulated using `.sim()`. The following simulates 10000 draws and stores the results in the variable `sims`. sims = P.sim(10000) sims # We can summarize the simulation results with `.tabulate()`. Note that `('heart', 'club')` is counted as a separate outcome than `('club', 'heart')` because the order matters. sims = P.sim(10000) sims.tabulate() # The above table could be used to estimate the probabilities in question. Instead, we will illustrate several other tools available in Symbulate to summarize simulation output. # # First, we use a `filter` function to creat a subset of the simulated outcomes for which the first card is a heart. We define a function `first_is_heart` that takes as an input a pair of values (`x`) and returns `True` if the first value in the pair (`x[0]`) is equal to 'hearts', and `False` otherwise. (Python indexing starts at 0: 0 is the first enrty, 1 is the second, and so on.) # + def first_is_heart(x): return (x[0] == 'heart') first_is_heart(('heart', 'club')) # - first_is_heart(('club', 'heart')) # Now we `filter` the simulated outcomes to create the subset of outcomes for which `first_is_heart` returns `True`. 
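# Apply the filter, then tabulate the outcomes that passed it.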
sims_first_is_heart = sims.filter(first_is_heart) sims_first_is_heart.tabulate() # Returning to question 1, we can estimate the probability that the first card is a heart by dividing the number of simulated draws for which the first card is a heart divided by the total number of simulated draws (using the length function `len` to count.) len(sims_first_is_heart) / len(sims) # The true probability is 4/16 = 0.25. Your simulated probability should be close to 0.25, but there will be some natural variability due to the randomness in the simulation. Very roughly, the margin of error of a probability estimate based on $N$ simulated repetitions is about $1/\sqrt{N}$, so about 0.01 for 10000 repetitions. The interval constructed by adding $\pm 0.01$ to your estimate will likely contain 0.25. # ## a) # # Recall question 2: What is the probability that the second card drawn is a heart? Use an analysis similar to the above — including defining an appropriate function to use with `filter` — to estimate the probability. (Is your simulated value close to your initial guess?) # # Type your commands in the following code cell. Aside from defining a `second_is_heart` function and using `len`, you should use Symbulate commands exclusively. # + # Type your Symbulate commands in this cell. # - # ## b) # # Many people confuse the probabilities in (2) and (3). The probability in (2) is an *unconditional* probability: we do not know whether or not the first card is a heart so we need to account for both possibilities. All we know is that each of the 16 cards in the deck is equally likely to be shuffled into the second position, so the probability that the second card is a heart (without knowing what the first card is) is 4/16 = 0.25. # # In contrast, the probability in question 3 is a *conditional* probability: *given that the first card drawn is a heart*, what is the probability that the second card drawn is a heart? Again, aside from maybe defining a new `is_heart` function and using `len`, you should use Symbulate commands exclusively. # + # Type your Symbulate commands in this cell. # - # ## c) # # Given that the first card is a heart, there are 15 cards left in the deck, each of which is equally likely to be the second card, of which 3 are hearts. So the conditional probability that the second card is a heart given that the first card is a heart is 3/15 = 0.20. Verify that your simulated value is consistent with the true value. # # Now you will do a few calculations by hand. # # 1. Compute, by hand, the conditional probability that the second card is a heart given that the first cards is NOT a heart. # 1. Construct, by hand, a hypothetical two-way table representing the results of 10000 draws. # 1. Use the hypothetical table to compute the probability that the second card is a heart. # 1. What is the relationship between the probability that the second card is a heart and the two conditional probabilities? # # (Nothing to respond here, just make sure you understand the answers.) # ## d) # # How would the answers to the previous questions change if the draws were made with replacement (so that the first card is replaced and the deck reshuffled before the second draw is drawn?) In this case, what can we say about the events "the first card is a heart" and "the second card is a heart"? # **Type your response here.** # ## Part II. Collector's problem # Each box of a certain type of cereal contains one of $n$ distinct prizes and you want to obtain a complete set. 
Suppose # that each box of cereal is equally likely to contain any one of the $n$ prizes, and the particular prize # that appears in one box has no bearing on the prize that appears in another box. You purchase # cereal boxes one box at a time until you # have the complete set of $n$ prizes. What is the probability that you buy more than $k$ boxes? In this problem you will use simulation to estimate this probability for different values of $n$ and $k$. # Here is a little Python code you can use to label the $n$ prizes from 0 to $n-1$. (Remember: Python starts indexing at 0.) n = 10 prizes = list(range(n)) prizes # And here is a function that returns the number of distinct prizes collected among a set of prizes. # + def number_collected(x): return len(set(x)) # For example number_collected([2, 1, 2, 0, 2, 2, 0]) # - # **Aside from the above functions, you should use Symbulate commands exclusively for Part II. ** # ## Problem 1. # # We'll assume that there are 3 prizes, $n=3$, a situation in which exact probabilities can easily be computed by enumerating the possible outcomes. n = 3 prizes = list(range(n)) prizes # ### a) # # Define a probability space for the sequence of prizes obtained after buying $3$ boxes (first box, second box, third box), and simulate a single outcome. (Hint: try [BoxModel](https://dlsun.github.io/symbulate/probspace.html#boxmodel).) # + # Type your Symbulate commands in this cell. # - # ### b) # # Now simulate many outcomes and summarize the results. Does it appear that each sequence of prizes is equally likely? (Hint: try the various [Simulation tools](https://dlsun.github.io/symbulate/sim.html#sim) like `.sim()` and `.tabulate()`.) # + # Type your Symbulate commands in this cell. # - # ### c) # # Count the number of distinct prizes collected for each of the simulated outcomes using the `number_collected` function. (Hint: try [`.apply()`](https://dlsun.github.io/symbulate/sim.html#apply).) # + # Type your Symbulate commands in this cell. # - # ### d) # # Use the simulation results to estimate the probability the more than $k=3$ boxes are needed to complete a set of $n=3$ prizes. (Hint: see this [summary of the simulation tools](https://dlsun.github.io/symbulate/sim.html#summary) section for a few suggestions.) # + # Type your Symbulate commands in this cell. # - # ## Problem 2. # # Use simulation to estimate the probability that more than $k=100$ boxes are need to complete a set of $n=20$ prizes, a situation for which it is extremely difficult to compute the probability analytically. # + # Type your Symbulate commands in this cell. # - # ## Problem 3. # # How large of a group of people is needed to have a probability of greater than 0.5 that on every day of the year someone in the group has a birthday? Greater than 0.9? Greater than 0.99? (Assume 365 equally likely birthdays, no multiples, etc.) Before coding, I encourage you to make some guesses for the answers first. # # Formulate this scenario as a collector's problem and experimemt with values of $n$ or $k$ until you are satisfied. (You don't have to get any fancier than guess-and-check, but you can if you want.) # + # Type your relevant code in this cell for 0.5 # + # Type your relevant code in this cell for 0.9 # + # Type your relevant code in this cell for 0.99 # - # ## Problem 4. # Now suppose that some prizes are harder to find than others. In particular, suppose that the prizes are labeled 1, 2, 3, 4, 5. 
Assume that prize 2 is twice as likely as prize 1, prize 3 is three times as likely as prize 1, prize 4 is four times as likely as prize 1, and prize 5 is five times as likely as prize 1. # # Estimate the probability that you'll need to buy more than 20 prizes to obtain a complete set. How does this probability compare to the probability in the equally likely situation? # Hint: define a [BoxModel with a dictionary-like input](https://dlsun.github.io/symbulate/probspace.html#dictionary). # + # Type your Symbulate commands in this cell. # - # ## Submission Instructions # # Before you submit this notebook, click the "Kernel" drop-down menu at the top of this page and select "Restart & Run All". This will ensure that all of the code in your notebook executes properly. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="L40CAu3TIsAD" outputId="cbc5ea0c-98c3-4e6a-f7db-52377c7a9dfa" # !Memory Transformer-XL # + colab={"base_uri": "https://localhost:8080/"} id="-Fl_YYGWKyj4" outputId="01b1bbed-118b-427b-9c0f-3ddbfb51ef5f" # !pip install memory-transformer-xl # !pip install transformers # !pip install mlm-pytorch # + id="6zyv36DoK75H" import torch from memory_transformer_xl import MemoryTransformerXL model = MemoryTransformerXL( num_tokens = 20000, dim = 1024, heads = 8, depth = 8, seq_len = 512, mem_len = 256, # short term memory (the memory from transformer-xl) lmem_len = 256, # long term memory (memory attention network attending to short term memory and hidden activations) mem_write_iters = 2, # number of iterations of attention for writing to memory memory_layers = [6,7,8], # which layers to use memory, only the later layers are actually needed num_mem_kv = 128, # number of memory key/values, from All-attention paper ).cuda() x1 = torch.randint(0, 20000, (1, 512)).cuda() logits1, mem1 = model(x1) x2 = torch.randint(0, 20000, (1, 512)).cuda() logits2, mem2 = model(x2, memories = mem1) # + colab={"base_uri": "https://localhost:8080/"} id="HJlHpMRELN0F" outputId="20451ae7-b6c1-4595-e2b9-921cdb6baf9d" mem2 # + id="a5OxkLqPMhb2" # + [markdown] id="fIgrTMHQMhx8" # # 分词 # 数据特点: # 可直接用于预训练、语言模型或语言生成任务。 # 发布专用于简体中文NLP任务的小词表。 # 词表介绍 # Google原始中文词表和我们发布的小词表的统计信息如下: # # Token Type Google CLUE # Simplified Chinese 11378 5689 # Traditional Chinese 3264 ✗ # English 3529 1320 # Japanese 573 ✗ # Korean 84 ✗ # Emoji 56 ✗ # Numbers 1179 140 # Special Tokens 106 106 # Other Tokens 959 766 # Total 21128 8021 # https://github.com/CLUEbenchmark/CLUEPretrainedModels # + id="iPNjiqH2Mq6L" from transformers import AutoTokenizer, AutoModel,BertTokenizer tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_clue_tiny") # model = AutoModel.from_pretrained("clue/roberta_chinese_clue_tiny") # + colab={"base_uri": "https://localhost:8080/"} id="nl3QCP-GM9bt" outputId="5cd44d38-7204-47a2-b7af-77986d845b66" tokenizer # + id="9MybAvPCP_uT" # + colab={"base_uri": "https://localhost:8080/"} id="7hc4wmOwLLcZ" outputId="d71265f9-48c5-4558-d56c-25ecf21aaa27" tokenizer.vocab_size # + colab={"base_uri": "https://localhost:8080/"} id="Evp0oyz5QAKg" outputId="84c395df-c7a8-43f1-f209-2c7743cf3e5a" dir(tokenizer) # + [markdown] id="nuaZnZ2wNQKj" # # 模型测试 # # 使用mlm模式训练 # https://github.com/lucidrains/mlm-pytorch # + id="R6-iOR52NUNt" # vocab_size # + [markdown] 
id="qbB84zr1Rb53" # ## 重新mlm模型 # # # + id="0xWjqtquRgi6" import math from functools import reduce import torch from torch import nn import torch.nn.functional as F # helpers def prob_mask_like(t, prob): return torch.zeros_like(t).float().uniform_(0, 1) < prob def mask_with_tokens(t, token_ids): init_no_mask = torch.full_like(t, False, dtype=torch.bool) mask = reduce(lambda acc, el: acc | (t == el), token_ids, init_no_mask) return mask def get_mask_subset_with_prob(mask, prob): batch, seq_len, device = *mask.shape, mask.device max_masked = math.ceil(prob * seq_len) num_tokens = mask.sum(dim=-1, keepdim=True) mask_excess = (mask.cumsum(dim=-1) > (num_tokens * prob).ceil()) mask_excess = mask_excess[:, :max_masked] rand = torch.rand((batch, seq_len), device=device).masked_fill(~mask, -1e9) _, sampled_indices = rand.topk(max_masked, dim=-1) sampled_indices = (sampled_indices + 1).masked_fill_(mask_excess, 0) new_mask = torch.zeros((batch, seq_len + 1), device=device) new_mask.scatter_(-1, sampled_indices, 1) return new_mask[:, 1:].bool() # main class class MLMXL(nn.Module): def __init__( self, transformer, mask_prob = 0.15, replace_prob = 0.9, num_tokens = None, random_token_prob = 0., mask_token_id = 2, pad_token_id = 0, mask_ignore_token_ids = []): super().__init__() self.transformer = transformer self.mem=None # mlm related probabilities self.mask_prob = mask_prob self.replace_prob = replace_prob self.num_tokens = num_tokens self.random_token_prob = random_token_prob # token ids self.pad_token_id = pad_token_id self.mask_token_id = mask_token_id self.mask_ignore_token_ids = set([*mask_ignore_token_ids, pad_token_id]) def forward(self, input, **kwargs): # do not mask [pad] tokens, or any other tokens in the tokens designated to be excluded ([cls], [sep]) # also do not include these special tokens in the tokens chosen at random no_mask = mask_with_tokens(input, self.mask_ignore_token_ids) mask = get_mask_subset_with_prob(~no_mask, self.mask_prob) # get mask indices mask_indices = torch.nonzero(mask, as_tuple=True) # mask input with mask tokens with probability of `replace_prob` (keep tokens the same with probability 1 - replace_prob) masked_input = input.clone().detach() # if random token probability > 0 for mlm if self.random_token_prob > 0: assert self.num_tokens is not None, 'num_tokens keyword must be supplied when instantiating MLM if using random token replacement' random_token_prob = prob_mask_like(input, self.random_token_prob) random_tokens = torch.randint(0, self.num_tokens, input.shape, device=input.device) random_no_mask = mask_with_tokens(random_tokens, self.mask_ignore_token_ids) random_token_prob &= ~random_no_mask random_indices = torch.nonzero(random_token_prob, as_tuple=True) masked_input[random_indices] = random_tokens[random_indices] # [mask] input replace_prob = prob_mask_like(input, self.replace_prob) masked_input = masked_input.masked_fill(mask * replace_prob, self.mask_token_id) # mask out any tokens to padding tokens that were not originally going to be masked labels = input.masked_fill(~mask, self.pad_token_id) if self.mem!=None: # get generator output and get mlm loss logits,self.mem = self.transformer(masked_input, memories = self.mem, **kwargs) else: logits,self.mem = self.transformer(masked_input, **kwargs) mlm_loss = F.cross_entropy( logits.transpose(1, 2), labels, ignore_index = self.pad_token_id ) return mlm_loss # + id="r-IU2T2UNOw5" import torch from memory_transformer_xl import MemoryTransformerXL import torch from torch import nn from torch.optim import 
Adam # from mlm_pytorch import MLM model = MemoryTransformerXL( num_tokens = tokenizer.vocab_size, dim = 128, heads = 8, depth = 8, seq_len = 1024, mem_len = 256, # short term memory (the memory from transformer-xl) lmem_len = 256, # long term memory (memory attention network attending to short term memory and hidden activations) mem_write_iters = 2, # number of iterations of attention for writing to memory memory_layers = [6,7,8], # which layers to use memory, only the later layers are actually needed num_mem_kv = 128, # number of memory key/values, from All-attention paper ).cuda() x1 = torch.randint(0, tokenizer.vocab_size, (1, 1024)).cuda() logits1, mem1 = model(x1) x2 = torch.randint(0, tokenizer.vocab_size, (1, 1024)).cuda() logits2, mem2 = model(x2, memories = mem1) # + id="e5dm_ttEP4cZ" tokenizer # + id="h83e2MA6OMLq" torch.save(model.state_dict(), "model1024.bin") # + id="GPHPr8PkP5xB" # plugin the language model into the MLM trainer trainer = MLMXL( model, mask_token_id = tokenizer.mask_token_id, # the token id reserved for masking pad_token_id = tokenizer.pad_token_id, # the token id for padding mask_prob = 0.15, # masking probability for masked language modeling replace_prob = 0.90, # ~10% probability that token will not be masked, but included in loss, as detailed in the epaper mask_ignore_token_ids = [tokenizer.cls_token_id,tokenizer.sep_token_id] # other tokens to exclude from masking, include the [cls] and [sep] here ).cuda() # optimizer opt = Adam(trainer.parameters(), lr=3e-4) # one training step (do this for many steps in a for loop, getting new `data` each time) data = torch.randint(0, tokenizer.vocab_size, (2, 1024)).cuda() loss = trainer(data) loss.backward() opt.step() opt.zero_grad() # after much training, the model should have improved for downstream tasks # torch.save(transformer, f'./pretrained-model.pt') # + colab={"base_uri": "https://localhost:8080/"} id="yMCmW8DtSLP-" outputId="0981213a-c488-47f6-e89d-5e26eb77b70e" loss # + id="log9_CFzN7Qn" # dir(model) model # + id="8WVOOqD8NkvO" logits2 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # %matplotlib inline import numpy import pandas import matplotlib.pyplot as plt import statsmodels.api as sm import scipy import scipy.stats from sklearn.metrics import r2_score from sklearn import datasets, linear_model from ggplot import * plt.rcParams['figure.figsize'] = (16.0, 8.0) # - path = 'turnstile_data_master_with_weather.csv' turnstile_weather = pandas.read_csv(path) # # Analyzing the NYC Subway Dataset # # ## Section 1. Statistical Test # # #### 1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value? # # Given random draws x from the population of people that ride the subuway when it rains and y from the population of people that ride the subway when it does not rain, the standard two-tailed hypotheses are as follows: # # $H0: P(x \gt y) = 0.5$ # # $H1: P(x \gt y) \neq 0.5$ # # The test used is Mann-Whitney U-statistic, and a two-tail P value is used. # # The p-critical value is 0.05. # # #### 1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples. 
# # - Sample size is greater than 20 # - Distribution of samples is not normal (see histograms) # - Samples are independent # + print ggplot(turnstile_weather, aes(x='ENTRIESn_hourly')) +\ geom_histogram(binwidth=1000,position="identity") +\ scale_x_continuous(breaks=range(0, 60001, 10000), labels = range(0, 60001, 10000))+\ facet_grid("rain")+\ ggtitle('Distribution of ENTRIESn_hourly in non-rainy days (0.0) and rainy days(1.0)') # - # #### 1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test. # + ### YOUR CODE HERE ### df_with_rain = turnstile_weather[turnstile_weather['rain']==1] df_without_rain = turnstile_weather[turnstile_weather['rain']==0] with_rain_mean = df_with_rain['ENTRIESn_hourly'].mean() without_rain_mean = df_without_rain['ENTRIESn_hourly'].mean() U, p = scipy.stats.mannwhitneyu(df_with_rain['ENTRIESn_hourly'], df_without_rain['ENTRIESn_hourly']) print "mean_with_rain=%f mean_without_rain=%f p-value=%.8f" %(with_rain_mean, without_rain_mean, p*2) # - # #### 1.4 What is the significance and interpretation of these results? # The p-value is below the significance value ($\alpha = 0.05$). Thus, the results obtained reject the null hipothesis with a significance level of 0.05. This means that the number of passengers in rainy days is different than the number observed in non-rainy days. # # The following statistics support our test: print "Descriptive statistics for the ridership in rainy days" df_with_rain['ENTRIESn_hourly'].describe() print "Descriptive statistics for the ridership in non-rainy days" df_without_rain['ENTRIESn_hourly'].describe() # # Section 2. Linear Regression # # # #### 2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model: # OLS using Scikit Learn # + def linear_regression(features, values): """ Perform linear regression given a data set with an arbitrary number of features. This can be the same code as in the lesson #3 exercise. """ regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(features, values) return regr.intercept_, regr.coef_ def predictions(dataframe): ''' The NYC turnstile data is stored in a pandas dataframe called weather_turnstile. Using the information stored in the dataframe, let's predict the ridership of the NYC subway using linear regression with ordinary least squares. You can download the complete turnstile weather dataframe here: https://www.dropbox.com/s/meyki2wl9xfa7yk/turnstile_data_master_with_weather.csv Your prediction should have a R^2 value of 0.40 or better. You need to experiment using various input features contained in the dataframe. We recommend that you don't use the EXITSn_hourly feature as an input to the linear model because we cannot use it as a predictor: we cannot use exits counts as a way to predict entry counts. Note: Due to the memory and CPU limitation of our Amazon EC2 instance, we will give you a random subet (~10%) of the data contained in turnstile_data_master_with_weather.csv. You are encouraged to experiment with this exercise on your own computer, locally. If you do, you may want to complete Exercise 8 using gradient descent, or limit your number of features to 10 or so, since ordinary least squares can be very slow for a large number of features. 
If you receive a "server has encountered an error" message, that means you are hitting the 30-second limit that's placed on running your program. Try using a smaller number of features. ''' ################################ MODIFY THIS SECTION ##################################### # Select features. You should modify this section to try different features! # # We've selected rain, precipi, Hour, meantempi, and UNIT (as a dummy) to start you off. # # See this page for more info about dummy variables: # # http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html # ########################################################################################## features = dataframe[['rain', 'precipi', 'Hour', 'meantempi', 'fog']] dummy_units = pandas.get_dummies(dataframe['UNIT'], prefix='unit') features = features.join(dummy_units) # Values values = dataframe['ENTRIESn_hourly'] # Perform linear regression intercept, params = linear_regression(features, values) predictions = intercept + numpy.dot(features, params) return predictions, intercept, params # - predicted, intercept, params = predictions(turnstile_weather) values = turnstile_weather['ENTRIESn_hourly'] (turnstile_weather['ENTRIESn_hourly'] - predicted).hist(bins=20) print "R2 Score=%f"%r2_score(values, predicted) # #### 2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features? # # I have used rain, precipi, Hour, meantempi and UNIT. UNIT was transformed into dummy variables. # #### 2.3 Why did you select these features in your model? We are looking for specific reasons that lead you to believe that # the selected features will contribute to the predictive power of your model. # Your reasons might be based on intuition. For example, response for fog might be: “I decided to use fog because I thought that when it is very foggy outside people might decide to use the subway more often.” # Your reasons might also be based on data exploration and experimentation, for example: “I used feature X because as soon as I included it in my model, it drastically improved my R2 value.” # # We know that weather, namely precipitation, affects the $\mu_{passengers}$. Thus I have included rain, precipi, meantempi and fog. From the correlation analysis below we can also see that Hour is the most correlated valid feature. For this reason Hour was also included in the input features. print "Correlation analysis" turnstile_weather.corr()['ENTRIESn_hourly'].sort_values(inplace=False) # + # plt.rcParams['figure.figsize'] = (12.0, 3.0) # dtypes = turnstile_weather.dtypes # for column in turnstile_weather.columns: # if dtypes[column] in ['int64', 'float64']: # plt.figure() # turnstile_weather[column].hist(bins=20) # #turnstile_weather.plot(kind='kde', x=column) # plt.title(column) # plt.rcParams['figure.figsize'] = (16.0, 8.0) # - # #### 2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model? # features=['rain', 'precipi', 'Hour', 'meantempi', 'fog'] print "== Non-dummy features coefficients ==" for i in range(5): output_str = ("%s:"%features[i]).ljust(12) output_str += "%.3f"%(params[i]) print output_str # #### 2.5 What is your model’s R2 (coefficients of determination) value? 
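# Compute R^2 directly from its definition and check that it matches sklearn's r2_score.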
# r_squared = 1 - ((values-predicted)**2).sum()/((values-values.mean())**2).sum() assert(r_squared == r2_score(values, predicted)) print "R2 Score=%f"%r_squared # #### 2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value? # # When the coefficient of determination, $R^2$, give us the correlation between the predictor features and the independent variable Entries per hour. # # When $R^2$ is close to 1, it means that the model has very good fitness, while when it is close to 0, the model does not fit at all. # We have an $R^2$ of 0.46 which means that 0.46 of the variance of data is explained in the regression model. # In addition, we should be evaluating our model with data that was not used to train the model. Even if we get a good score, our model might be overfiting. # # If we look at our coefficients we can see that _rain_ and _meantempi_ have a negative impact in Entries per hour, while _precipi_, _Hour_ and _Fog_ have a positive impact. # This means that 0.46 of the variance of the data is explained with a negative impact of rain. # # Section 3. Visualization # # Please include two visualizations that show the relationships between two or more variables in the NYC subway data. # Remember to add appropriate titles and axes labels to your plots. Also, please add a short description below each figure commenting on the key insights depicted in the figure. # # __3.1__ One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days. # # You can combine the two histograms in a single plot or you can use two separate plots. # # If you decide to use to two separate plots for the two histograms, please ensure that the x-axis limits for both of the plots are identical. It is much easier to compare the two in that case. # # For the histograms, you should have intervals representing the volume of ridership (value of ENTRIESn_hourly) on the x-axis and the frequency of occurrence on the y-axis. For example, each interval (along the x-axis), the height of the bar for this interval will represent the number of records (rows in our data) that have ENTRIESn_hourly that falls in this interval. # # Remember to increase the number of bins in the histogram (by having larger number of bars). The default bin width is not sufficient to capture the variability in the two samples. # # __R:__ # # The following visualization has 2 histograms combined in a single plot. The histogram in red shows the ridership per hour distribution for non-rainy days, while the histogram in blue shows for rainy days. We can see that non-rainy have bigger bars for ENTRIESn_hourly below 10000. This doesn't mean rainy days have less passengers. I just means that we have less data for rainy days, which is natural since we have less rainy days. # + print ggplot(aes(x='ENTRIESn_hourly', fill='rain'), data=turnstile_weather) +\ geom_histogram(binwidth=1000) +\ ggtitle('Ridership per hour distribution for rainy and non-rainy days') +\ ylab('Number of tuples') print "ENTRIESn_hourly max value: %d"%turnstile_weather['ENTRIESn_hourly'].max() # - # Although the maximum value of ENTRIESn_hourly is above 50000, from the histogram we see that most values are below 10000. Thus, let's generate a histogram limited to 10000 entries. 
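# Same histogram as above, but with the x-axis limited to [0, 10000] and a finer bin width of 100.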
print ggplot(aes(x='ENTRIESn_hourly', fill='rain'), data=turnstile_weather) +\ geom_histogram(binwidth=100) +\ xlim(0, 10000)+\ ggtitle('Ridership per hour distribution for rainy and non-rainy days limited to 10000') +\ ylab('Number of tuples') # + # print ggplot(aes(x='ENTRIESn_hourly', color='rain'), data=turnstile_weather) +\ # geom_density() +\ # ggtitle('Ridership per hour distribution for rainy and non-rainy days limited to 10000') +\ # ylab('Number of tuples') # - # # __3.2__ One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like. Some suggestions are: # # - Ridership by time-of-day # - Ridership by day-of-week # # __R:__ # # The following plot shows the average number of passengers per hour in our dataset. We can see that in average 8pm, 12pm, and 4pm are the times of day with most passengers. print ggplot(turnstile_weather, aes(x='Hour', y='ENTRIESn_hourly'))+geom_bar(stat = "summary", fun_y=numpy.mean, fill='lightblue')+ggtitle('Average ridership by time-of-day') # # Section 4. Conclusion # # Please address the following questions in detail. Your answers should be 1-2 paragraphs long. # # #### 4.1 From your analysis and interpretation of the data, do more people ride the NYC subway when it is raining or when it is not raining? # # The number of people that ride NYC in raining or non-raining days is different, but the analysis made shows that is not clear which days have more ridership. # # #### 4.2 What analyses lead you to this conclusion? You should use results from both your statistical tests and your linear regression to support your analysis. # # The Mann-Whitney U-statistic was able to reject the null hypothesis with a significance level of 0.05. # When we look at the distributions, we see that the maximum value of ridership per hour is much higher on rainy days (51839 against 43199). # The histograms are not able to produce a good visualization to compare distributions since, there are more tuples for non-rainy days. Perhaps, some normalization will help for further analysis. # # Nevertheless, when we look at our linear regression model with $R^2=0.46$, the coefficient for rain has a negative value (-39.307), which means that the number of ridership is inversely proportional with the existence of rain. This might happen due to existent correlation or causality between rain and other features. E.g., rain might have some correlation with fog which might also affect ridership. # # Section 5. Reflection # # Please address the following questions in detail. Your answers should be 1-2 paragraphs long. # # #### 5.1 Please discuss potential shortcomings of the methods of your analysis, including: Dataset, Analysis, such as the linear regression model or statistical test. # # Regarding the linear regression, this method is not robust against correlated features. The use of correlated features might be reducing the quality of our model and conclusions. # # Although our test rejected the null hypothesis, we can't assume that there is a causality between rain and ridership. There is the possibility of having another condition that affects both features. # # print pandas.to_datetime(turnstile_weather['DATEn']).describe() # Regarding our dataset, we can see from the descriptive statistics above that data was collected from May 1st until May 30th. 
In order to make a conclusion of ridership, perhaps it would be more reliable to use data from all seasons. # #### 5.2 (Optional) Do you have any other insight about the dataset that you would like to share with us? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 01 - Introduction to numpy # ## Why NumPy? # # In general you should use [numpy](https://numpy.org/) library if you want to do fancy things with **numbers**, especially if you have **matrices** or **arrays.** # # NumPy is at the center of scientific Python ecosystem and it is a work-horse of many scientific libraries including scikit-learn, scikit-image, matplotlib, SciPy,... # # ## Import # # To use NumPy we need to start python interpreter and import the `numpy` package: import numpy as np # ## NumPy Array # # - __array:__ # Homogeneous multidimensional data container with all elements of the same type. # # It is similar to Python lists, but it's specialised for working on numerical data. # Let's create a simple numpy array from a Python list by using [`np.array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html): x = np.array([2, 1, 5]) print(x) # ### Lists vs Numpy arrays # # The Python core library provides Lists. A list is the Python equivalent of an array, but it is resizeable and can contain elements of different types. # # Pros of an array: # - **Size** - Numpy data structures take up less space # - **Performance** - faster than lists # - **Functionality** - SciPy and NumPy have optimized functions such as linear algebra operations built in. L = range(1000) # %timeit [i**2 for i in L] # [__np.arange__](https://numpy.org/doc/stable/reference/generated/numpy.arange.html) works like Python built-in [range](https://docs.python.org/3/library/stdtypes.html#range), but it returns a numpy array: np.arange(5) a = np.arange(1000) # %timeit a**2 # ### Memory layout # # NumPy array is just a memory block with extra information how to interpret its contents. # # ### Array creation # # To construct an array with pre-defined elements we can also use other built-in helper functions: # [__np.ones__](https://numpy.org/doc/stable/reference/generated/numpy.ones.html) and [__np.zeros__](https://numpy.org/doc/stable/reference/generated/numpy.zeros.html) return arrays of 0s and 1s, respectively: np.ones(5) np.zeros(5) # [__np.random.rand__](https://numpy.org/doc/stable/reference/random/generated/numpy.random.rand.html) creates an array of random numbers from a uniform distribution over [0, 1): np.random.rand(5) # We can also construct a two- or more dimensional arrays: np.array([[1, 2], [5, 6]]) np.ones((2, 2)) # Alternatively, an n-dimensional array can be obtained by reshaping a 1-D array: a = np.arange(12) a.reshape((4,3)) # Previous: [00 - Index](../01-numpy-introduction.ipynb)Next: [02 - Working with a dataset](02_dataset_intro.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## LINEAR REGRESSION - PART 1 (ONE VARIABLE) # # One variable linear regression is the simplest form of machine learning we can think of. The purpose of this practice is to learn the basics of different components about linear regression. 
# #### DATASET http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/ # #### Wisconsin Breast Cancer Database # Samples arrive periodically as Dr. Wolberg reports his clinical cases. # # 1. Number of Instances: 699 (as of 15 July 1992) # # 2. Number of Attributes: 10 plus the class attribute # # 3. Attribute Information: (class attribute has been moved to last column) # Attribute Domain # 1. Sample code number id number # 2. Clump Thickness 1 - 10 # 3. Uniformity of Cell Size 1 - 10 # 4. Uniformity of Cell Shape 1 - 10 # 5. Marginal Adhesion 1 - 10 # 6. Single Epithelial Cell Size 1 - 10 # 7. Bare Nuclei 1 - 10 # 8. Bland Chromatin 1 - 10 # 9. Normal Nucleoli 1 - 10 # 10. Mitoses 1 - 10 # 11. Class: (2 for benign, 4 for malignant) # # 4. Missing attribute values: 16 # # There are 16 instances in Groups 1 to 6 that contain a single missing # (i.e., unavailable) attribute value, now denoted by "?". # # 5. Class distribution: # # Benign: 458 (65.5%) # Malignant: 241 (34.5%) # + import numpy as np import pandas as pd # Read input df = pd.read_csv("data/breast-cancer-wisconsin.data.csv") df.head(3) # + # Preprocess the data so we can use it for regression. Specially the class values. # Access function/method docstrings in jupyter via '?'. Ex: preprocessing.LabelEncoder? from sklearn import preprocessing encoder = preprocessing.LabelEncoder() for col in df.columns: df[col] = encoder.fit_transform(df[col]) df.head(5) # - # #### Predict # This method takes x, w, b and returns the value of y.
    # \begin{equation} # y_i = w*x_i+b # \end{equation} def predict(x, w, b): return x*w + b # #### Cost_function/loss_function/risk_function: # The MSE is a measure of the quality of an estimator—it is always non-negative, and values closer to zero are better. # We are using mean squared error. # \begin{equation} # MSE = \frac{1}{n} \sum_{i=1}^n (y_i - (wx_i+b))^2 # \end{equation} def cost_function(x, y, w, b): n = len(x) total_error = 0.0 for i in range(n): total_error += (y[i] - (w*x[i] + b))**2 return total_error / n # #### Gradient descent # To minimize MSE we use Gradient Descent to calculate the gradient of our cost function. # There are two parameters (coefficients) in our cost function we can control: weight $w$ and bias $b$. Since we need to consider the impact each one has on the final prediction, we use partial derivatives. # Recall the cost function: # \begin{equation} # f(w,b)= \frac{1}{n} \sum_{i=1}^{n} (y_i - (wx_i+b))^2 # \end{equation} # First lets find the partial derivative for $w$: # \begin{equation} # \frac{\partial f}{\partial w} = \frac{1}{n} \sum_{i=1}^n -2x_i(y_i - (wx_i+b)) # \end{equation} # And the partial derivative for $b$: # \begin{equation} # \frac{\partial f}{\partial b} = \frac{1}{n} \sum_{i=1}^n -2(y_i - (wx_i+b)) # \end{equation} # def update_weights(x, y, w, b, learning_rate): weight_deriv = 0 bias_deriv = 0 n = len(x) for i in range(n): # Calculate partial derivatives # -2x(y - (mx + b)) weight_deriv += -2*x[i] * (y[i] - (w*x[i] + b)) # -2(y - (mx + b)) bias_deriv += -2*(y[i] - (w*x[i] + b)) # We subtract because the derivatives point in direction of steepest ascent w -= (weight_deriv / n) * learning_rate b -= (bias_deriv / n) * learning_rate return w, b # #### Train loop # # Train the model. Things to cover: # 1. Parameter vs hyperparameter # 2. Learning rate: possibly l1 and l2 regularization # 3. Number of iterrations also popularly known as epochs def train_model(x, y, w, b, learning_rate, epochs): cost_history = [] for i in range(epochs): w,b = update_weights(x, y, w, b, learning_rate) #Calculate cost for auditing purposes cost = cost_function(x, y, w, b) cost_history.append(cost) # Log Progress if (i+1) % 20 == 0: print("Epochs: ", str(i+1), " cost: ", str(cost)) return w, b, cost_history # #### Run the training # Pick a parameter! Possibly mitoses as it's the worst one. Make them focus on the weight and bias values we get! # x = df['mitoses'] y = df['class'] w, b, cost_history = train_model(x, y, 0, 0, 0.02, 500) print(w, b) # #### Predictive separation index (PSI) as score # We can use Predictive Separation Index (PSI), to use as the strength of a predictor. The equation is: # \begin{equation} # PSI ( x ) = [ \textrm{mean } ( \hat{ p } \textrm{ given } x ) \textrm{ when } y = 1 ] - [ \textrm{mean } ( \hat{ p } \textrm{ given } x ) \textrm{ when } y = 0 ] \, . # \end{equation} # We want PSI(x) very close to 1 as the first term should be close to 1 and the second term should be close to 0. def get_score(x, y, w, b): preds_0 = [] preds_1 = [] for i in range(len(x)): p = predict(x[i], w, b) if y[i] == 0: preds_0.append(p) else: preds_1.append(p) if len(preds_0) != 0: score = (sum(preds_1) / len(preds_1) - sum(preds_0) / len(preds_0)) else: score = (sum(preds_1) / len(preds_1) - 0) return preds_0, preds_1, score preds_0, preds_1, score = get_score(x, y, w, b) print("PSI: ", score) # ### The p distribution plot # # A better way to look at our prediction is to look at the distribution of $\hat{p}$ we get for each of the classes. 
For $y_i = 0$ the $\hat{p}$ should be close to 0 and $\hat{p}$ should be close to 1 when $y_i = 1$. This is a better way of seeing the strength of a predictor. # + # %matplotlib inline # The above is required to display matplotlib in jupyter import matplotlib.pyplot as plt n, bins, patches = plt.hist(preds_0, bins=100, normed=1, cumulative=0) plt.title('Predictive distribution for class y=0') plt.show() n, bins, patches = plt.hist(preds_1, bins=100, normed=1, cumulative=0) plt.title('Predictive distribution for class y=1') plt.show() # - # ## Use a library to do the same thing! - SKLEARN # If we use sklearn to do the same thing, we will get the same exact values! Voilà! Things to make sure: # 1) Sklearn linear regression parameters: fit_intercept, normalize, copy_X, n_jobs # a) fit: Fit linear model. [Our train loop] # b) predict: Predict using the linear model. [Our predict method] # c) score: Returns the coefficient of determination $R^2$ of the prediction.
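# A minimal added sketch (not part of the original notebook): for a single feature, ordinary least squares has a closed-form solution, which is why gradient descent, the closed form, and sklearn should all land on (almost) the same w and b. It assumes x and y are still the pandas Series defined above (df['mitoses'] and df['class']).
# +
x_arr = np.asarray(x, dtype=float)
y_arr = np.asarray(y, dtype=float)
# slope = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2); intercept = y_mean - slope * x_mean
w_closed = ((x_arr - x_arr.mean()) * (y_arr - y_arr.mean())).sum() / ((x_arr - x_arr.mean()) ** 2).sum()
b_closed = y_arr.mean() - w_closed * x_arr.mean()
print("closed-form w, b:", w_closed, b_closed)
# -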
    # # ##### The R squared score: # R squared measures the proportion of overall variation of y variable that have been explained by the prediction. The equation for R squared is: # \begin{align*} # R^2 = 1 - \frac{\sum_{i=1}^n (y_i-\hat{y_i})^2}{\sum_{i=1}^n (y_i-\bar{y})^2} # \end{align*} # Here, $y_i$ is the label, $\hat{y_i}$ is the prediction and $\bar{y}$ is the average label. The error is normalized by the deviance and we get how well the regression no matter the deviation in the dataset. For a dichotomous classification this $R^2$ is same as predictive separation index but for continuous outcome it will be different. # x = df[['mitoses']] y = df['class'] from sklearn import datasets, linear_model lm_model = linear_model.LinearRegression(fit_intercept=True, normalize=True, copy_X=True, n_jobs=1) lm_model.fit(x, y) print("sklearn, w, b, score: ", round(lm_model.coef_[0], 6), round(lm_model.intercept_, 6), round(lm_model.score(x,y), 6)) print("manualr, w, b, score: ", round(w, 6) , round(b, 6), round(score, 6)) # So, we have learned the same co-efficents (or, almost the same) through a library. # #### Variable selection # Now loop through each of the feature/variable present in the dataset (except id and class) and fit the model to get a score. Sort features based on their score/R-squared-value the top variables are best predictors.
    # # A better approach would be:
# 1) Make sure all the features are on the same scale (a short scaling sketch follows this list).
# 2) Use a better algorithm, such as stepwise selection driven by a criterion function (e.g. the Bayesian information criterion) or xgBoost, which is supposedly one of the best available algorithms for feature selection.
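# A minimal sketch of point 1 (not part of the original notebook), assuming the dataframe df loaded above: standardize every feature column with sklearn's StandardScaler so they all end up on the same scale, leaving out 'id' and 'class'.
# +
from sklearn.preprocessing import StandardScaler
feature_cols = [c for c in df.columns if c not in ('id', 'class')]
df_scaled = pd.DataFrame(StandardScaler().fit_transform(df[feature_cols]), columns=feature_cols)
print(df_scaled.describe().loc[['mean', 'std']])  # every column now has mean ~0 and std ~1
# -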
    # # xgBoost: http://xgboost.readthedocs.io/en/latest/python/python_intro.html # # But this works pretty well too! We'll see! # + features = list(df.keys()) features.remove('id') features.remove('class') feature_scores = [] for feature in features: x = df[[feature]] y = df['class'] lm_model = linear_model.LinearRegression(fit_intercept=True, normalize=True, copy_X=True, n_jobs=1) lm_model.fit(x, y) feature_scores.append([feature, lm_model.score(x,y)]) feature_scores.sort(key=lambda x: x[1], reverse=True) for feature_score in feature_scores: print(feature_score) # - # #### Pick the top two predictors # # Pick the top two predictors and fit the model. The score will be better this time. x = df[['uniformity-of-cell-shape','uniformity-of-cell-size']] y = df['class'] lm_model = linear_model.LinearRegression(fit_intercept=True, normalize=True, copy_X=True, n_jobs=1) lm_model.fit(x, y) print("Score: ", lm_model.score(x,y)) # #### Pick a cutoff # # Now look at the distribution again and pick a cutoff. Everything below the cutoff will be 0 and above will be 1. # + input_0 = df.loc[df['class'] == 0] y_0 = input_0['class'] x_0 = input_0[['uniformity-of-cell-shape','uniformity-of-cell-size']] input_1 = df.loc[df['class'] == 1] y_1 = input_1['class'] x_1 = input_1[['uniformity-of-cell-shape','uniformity-of-cell-size']] preds_0 = lm_model.predict(x_0) preds_1 = lm_model.predict(x_1) n, bins, patches = plt.hist(preds_0, bins=50, normed=1, cumulative=0) plt.show() n, bins, patches = plt.hist(preds_1, bins=50, normed=1, cumulative=0) plt.show() # - # #### Accuracy # Looks like 0.2 is a good cutoff? Let's pick that and look at the accuracy of the model y_pred = lm_model.predict(x) y_pred = [1 if p > 0.2 else 0 for p in y_pred] from sklearn.metrics import accuracy_score accuracy_score(y, y_pred) # ## Splitting the dataset # But we can't trust these results! We trained and tested on the same data. This may overfit the data and give us an illusion of better fit. Let's split the dataset in 80-20 for training-testing and fit and test the model again. # + # split into train and test data from sklearn.model_selection import train_test_split (train,test) = train_test_split(df, test_size=0.2) train_output = train['class'] train_input = train[['uniformity-of-cell-shape','uniformity-of-cell-size']] test_output = test['class'] test_input = test[['uniformity-of-cell-shape','uniformity-of-cell-size']] # - lm_model.fit(train_input, train_output) print("Score: ", lm_model.score(train_input, train_output)) # + input_0 = train.loc[train['class'] == 0] y_0 = input_0['class'] x_0 = input_0[['uniformity-of-cell-shape','uniformity-of-cell-size']] input_1 = train.loc[train['class'] == 1] y_1 = input_1['class'] x_1 = input_1[['uniformity-of-cell-shape','uniformity-of-cell-size']] preds_0 = lm_model.predict(x_0) preds_1 = lm_model.predict(x_1) n, bins, patches = plt.hist(preds_0, bins=50, normed=1, cumulative=0) plt.show() n, bins, patches = plt.hist(preds_1, bins=50, normed=1, cumulative=0) plt.show() # - y_pred = lm_model.predict(test_input) y_pred = [1 if p > 0.5 else 0 for p in y_pred] from sklearn.metrics import accuracy_score accuracy_score(test_output, y_pred) # ## Break the accuracy myth # We see that we are getting decent accuracy. But accuracy in a model doesn't mean the model is useful. There are many other statistical tests that can help us understand the effectiveness of a model. Sensitivity, specificity are widely used. 
We are going to look at true positive rate and false positive rates as simplest statistical significance values of the model. from sklearn.metrics import confusion_matrix print(confusion_matrix(test_output, y_pred)) tn, fp, fn, tp = confusion_matrix(test_output, y_pred).ravel() fpr = fp / (fp+tn) fnr = fn / (tp+fn) print("False positive rate: (predicting malignant while benign)", fpr) print("False negative rate: (predicting benign while malignant)", fnr) # ## Which case is the worst? # Think about this model in action. It has a very low false positive rate but a high false negative rate. False negative results in real world is very dangerous in our case. On the other hand, false positive results will cause a lot of chaos. But, if we have to make a decision, which is the best one? # + y_pred = lm_model.predict(test_input) y_pred = [1 if p > 0.1 else 0 for p in y_pred] print("Accuracy", accuracy_score(test_output, y_pred)) print(confusion_matrix(test_output, y_pred)) tn, fp, fn, tp = confusion_matrix(test_output, y_pred).ravel() fpr = fp / (fp+tn) fnr = fn / (tp+fn) print("False positive rate: (predicting malignant while benign)", fpr) print("False negative rate: (predicting benign while malignant)", fnr) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import os import sys import nltk from nltk.corpus import stopwords # GLOBALS LOCAL_DATA_ROOT = '/Users/varunn/Documents/projects/pretrained_word_vectors/' GLOVE_PATH = LOCAL_DATA_ROOT + 'glove.6B/' GLOVE_INP_FN = GLOVE_PATH + 'glove.6B.50d.txt' OUT_PATH = '/Users/varunn/Documents/NLP-data/' VOCAB_FILE = OUT_PATH+'vocabulary_file_w2v.txt' count = 1 for line in open(GLOVE_INP_FN): if count <= 5: print(line) else: break count += 1 # # Entity Extraction class Embedding(object): def __init__(self,vocab_file,vectors_file, vocab_flag=True): if vocab_flag: words = [] with open(vocab_file, 'r') as f: lines = [x.rstrip().split('\n') for x in f.readlines()] lines = [x[0] for x in lines] for line in lines: current_words = line.split(' ') words = list(set(words) | set(current_words)) with open(vectors_file, 'r') as f: vectors = {} for line in f: vals = line.rstrip().split(' ') vectors[vals[0]] = [float(x) for x in vals[1:]] if not vocab_flag: words = vectors.keys() vocab_size = len(words) vocab = {w: idx for idx, w in enumerate(words)} ivocab = {idx: w for idx, w in enumerate(words)} vector_dim = len(vectors[ivocab[0]]) W = np.zeros((vocab_size, vector_dim)) for word, v in vectors.items(): if (word == '') | (word not in vocab): continue W[vocab[word], :] = v # normalize each word vector to unit variance W_norm = np.zeros(W.shape) d = (np.sum(W ** 2, 1) ** (0.5)) W_norm = (W.T / d).T if vocab_flag: for i in range(W.shape[0]): x = W[i, :] if sum(x) == 0: W_norm[i, :] = W[i, :] self.W = W_norm self.vocab = vocab self.ivocab = ivocab def find_similar_words(embed,text,refs): C = np.zeros((len(refs),embed.W.shape[1])) for idx, term in enumerate(refs): if term in embed.vocab: C[idx,:] = embed.W[embed.vocab[term], :] tokens = text.split(' ') scores = [0.] 
* len(tokens) for idx, term in enumerate(tokens): if term in embed.vocab: vec = embed.W[embed.vocab[term], :] cosines = np.dot(C,vec.T) score = np.mean(cosines) scores[idx] = score print(scores) return tokens[np.argmax(scores)] examples = ["i am looking for a place in the north of town", "looking for indian restaurants", "Indian wants to go to an italian restaurant", "show me chinese restaurants", "show me chines restaurants in the north", "show me a mexican place in the centre", "i am looking for an indian spot called olaolaolaolaolaola", "search for restaurants", "anywhere in the west", "anywhere near 18328", "I am looking for asian fusion food", "I am looking a restaurant in 29432", "I am looking for mexican indian fusion", "central indian restaurant"] examples = [x.lower() for x in examples] fn = open(OUT_PATH+'vocabulary_file_w2v.txt', 'w') for example in examples: fn.write(example) fn.write('\n') fn.close() embed = Embedding(VOCAB_FILE, GLOVE_INP_FN, False) print(embed.W.shape) print(len(embed.vocab)) test_example1 = 'looking for spanish restaurants' test_example2 = 'looking for indian restaurants' test_example3 = 'looking for south indian restaurants' test_example4 = 'I want to find a chettinad restaurant' test_example5 = 'chinese man looking for a indian restaurant' refs = ["mexican","chinese","french","british","american"] threshold = 0.2 # With stopwords for example in [test_example1, test_example2, test_example3, test_example4, test_example5]: example = example.lower() print('text: ', example) print(find_similar_words(embed,example,refs)) print('\n') # With stopwords stop = set(stopwords.words('english')) for example in [test_example1, test_example2, test_example3, test_example4, test_example5]: print('text: ', example) example = " ".join([x.lower() for x in nltk.word_tokenize(example) if x not in stop]) print(find_similar_words(embed,example,refs)) print('\n') find_similar_words(embed, 'fish food', refs) # # Intent Detection # + import numpy as np def sum_vecs(embed,text): tokens = text.split(' ') vec = np.zeros(embed.W.shape[1]) for idx, term in enumerate(tokens): if term in embed.vocab: vec = vec + embed.W[embed.vocab[term], :] return vec def get_centroid(embed,examples): C = np.zeros((len(examples),embed.W.shape[1])) for idx, text in enumerate(examples): C[idx,:] = sum_vecs(embed,text) centroid = np.mean(C,axis=0) assert centroid.shape[0] == embed.W.shape[1] return centroid def get_intent(embed,text): intents = ['deny', 'inform', 'greet'] vec = sum_vecs(embed,text) scores = np.array([ np.linalg.norm(vec-data[label]["centroid"]) for label in intents ]) return intents[np.argmin(scores)] # - data={ "greet": { "examples" : ["hello","hey there","howdy","hello","hi","hey","hey ho"], "centroid" : None }, "inform": { "examples" : [ "i'd like something asian", "maybe korean", "what mexican options do i have", "what italian options do i have", "i want korean food", "i want german food", "i want vegetarian food", "i would like chinese food", "i would like indian food", "what japanese options do i have", "korean please", "what about indian", "i want some vegan food", "maybe thai", "i'd like something vegetarian", "show me french restaurants", "show me a cool malaysian spot" ], "centroid" : None }, "deny": { "examples" : [ "nah", "any other places ?", "anything else", "no thanks" "not that one", "i do not like that place", "something else please", "no please show other options" ], "centroid" : None } } intents = ['greet', 'inform', 'deny'] examples = [] for intent in intents: examples = 
list(set(examples) | set(data[intent]['examples'])) examples fn = open(VOCAB_FILE, 'w') for example in examples: fn.write(example) fn.write('\n') fn.close() embed = Embedding(VOCAB_FILE, GLOVE_INP_FN, False) for label in data.keys(): data[label]["centroid"] = get_centroid(embed,data[label]["examples"]) data for text in ["hey you","i am looking for chinese food","not for me"]: print("text : '{0}', predicted_label : '{1}'".format(text,get_intent(embed,text))) text = "how do you do" print("text : '{0}', predicted_label : '{1}'".format(text,get_intent(embed,text))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import division, print_function, absolute_import from keras.models import Sequential, model_from_json from keras.layers import Dense, Dropout, Flatten, Conv3D, MaxPool3D, BatchNormalization, Input from keras.optimizers import RMSprop from keras.preprocessing.image import ImageDataGenerator from keras.utils.np_utils import to_categorical from keras.callbacks import ReduceLROnPlateau, TensorBoard import h5py import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') from sklearn.metrics import confusion_matrix, accuracy_score # - # Hyper Parameter batch_size = 4 epochs = 30 # + # Set up TensorBoard tensorboard = TensorBoard(batch_size=batch_size) with h5py.File("./3d-mnist-kaggle/full_dataset_vectors.h5", 'r') as h5: X_train, y_train = h5["X_train"][:], h5["y_train"][:] X_test, y_test = h5["X_test"][:], h5["y_test"][:] # - X_train.shape, y_train.shape, X_test.shape, y_test.shape # + # Translate data to color def array_to_color(array, cmap="Oranges"): s_m = plt.cm.ScalarMappable(cmap=cmap) return s_m.to_rgba(array)[:,:-1] def translate(x): xx = np.ndarray((x.shape[0], 4096, 3)) for i in range(x.shape[0]): xx[i] = array_to_color(x[i]) if i % 1000 == 0: print(i) # Free Memory del x return xx y_train = to_categorical(y_train, num_classes=10) # y_test = to_categorical(y_test, num_classes=10) X_train = translate(X_train).reshape(-1, 16, 16, 16, 3) X_test = translate(X_test).reshape(-1, 16, 16, 16, 3) # - X_train.shape, y_train.shape, X_test.shape, y_test.shape # + # Conv2D layer def Conv(filters=16, kernel_size=(3,3,3), activation='relu', input_shape=None): if input_shape: return Conv3D(filters=filters, kernel_size=kernel_size, padding='Same', activation=activation, input_shape=input_shape) else: return Conv3D(filters=filters, kernel_size=kernel_size, padding='Same', activation=activation) # Define Model def CNN(input_dim, num_classes): model = Sequential() model.add(Conv(8, (3,3,3), input_shape=input_dim)) model.add(Conv(16, (3,3,3))) # model.add(BatchNormalization()) model.add(MaxPool3D()) # model.add(Dropout(0.25)) model.add(Conv(32, (3,3,3))) model.add(Conv(64, (3,3,3))) model.add(BatchNormalization()) model.add(MaxPool3D()) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(4096, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1024, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) return model # Train Model def train(optimizer, scheduler): global model print("Training...") model.compile(optimizer = 'adam' , loss = "categorical_crossentropy", metrics=["accuracy"]) model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=epochs, validation_split=0.15, verbose=2, 
callbacks=[scheduler, tensorboard]) def evaluate(): global model pred = model.predict(X_test) pred = np.argmax(pred, axis=1) print(accuracy_score(pred,y_test)) # Heat Map array = confusion_matrix(y_test, pred) cm = pd.DataFrame(array, index = range(10), columns = range(10)) plt.figure(figsize=(20,20)) sns.heatmap(cm, annot=True) plt.show() def save_model(): global model model_json = model.to_json() with open('model/model_3D.json', 'w') as f: f.write(model_json) model.save_weights('model/model_3D.h5') print('Model Saved.') def load_model(): f = open('model/model_3D.json', 'r') model_json = f.read() f.close() loaded_model = model_from_json(model_json) loaded_model.load_weights('model/model_3D.h5') print("Model Loaded.") return loaded_model if __name__ == '__main__': optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0) scheduler = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=1e-5) model = CNN((16,16,16,3), 10) train(optimizer, scheduler) evaluate() save_model() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] heading_collapsed=true # # Import # + hidden=true import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] heading_collapsed=true # # Preparing Data # + hidden=true dataset = pd.read_csv('Position_Salaries.csv') dataset # + hidden=true dataset.plot(kind='Scatter', x='Level', y='Salary') # + hidden=true X = dataset.iloc[:, 1:-1].values y = dataset.iloc[:, -1].values # + [markdown] hidden=true # In this particular situation, **X_train = X** and **y_train = y** # + [markdown] heading_collapsed=true # # Regression # + [markdown] heading_collapsed=true hidden=true # ## Building Model # + hidden=true from sklearn.linear_model import LinearRegression # + hidden=true from sklearn.preprocessing import PolynomialFeatures # + hidden=true poly_reg = PolynomialFeatures(degree=5) # + hidden=true X_poly = poly_reg.fit_transform(X) # + hidden=true X_poly # + hidden=true regressor = LinearRegression() regressor.fit(X_poly, y) # + [markdown] heading_collapsed=true hidden=true # ## Visualization # + hidden=true plt.scatter(X,y,color='red') plt.plot(X, regressor.predict(poly_reg.fit_transform(X)), color='blue') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # LSTM, State Space, and Mixture Models on economic series # # Apply and compare methods for extracting lower-dimensional states # from a large panel of economic time series: State space models, Hidden # Markov Models, Gaussian mixtures, and LSTM networks. # # Use BIC criterion, visualize the extracted hidden states/latent factors # in recessionary and economic time periods, and compare persistence. 
# # - Long Short-Term Memory network, hidden states, state space model, mixture models # - BIC, log-likelihood, mixed-frequency # - pytorch, hmmlearn, statsmodels, sklearn, FRED-MD # - Chen, Pelger and Zhu (2020) and others # # # License: MIT # # # import numpy as np import pandas as pd from pandas import DataFrame, Series import matplotlib.pyplot as plt import os import re import time from datetime import datetime import statsmodels.api as sm import random import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from hmmlearn import hmm from sklearn.mixture import GaussianMixture from sklearn.preprocessing import StandardScaler from tqdm import tqdm from finds.alfred import fred_md, fred_qd, Alfred from finds.learning import hmm_summary from settings import settings imgdir = os.path.join(settings['images'], 'rnn') # # Load time series from FRED and FRED-MD, and apply tcode transformations # + # Load and pre-process time series from FRED alf = Alfred(api_key=settings['fred']['api_key']) # to indicate recession periods in the plots usrec = alf('USREC', freq='m') usrec.index = pd.DatetimeIndex(usrec.index.astype(str), freq='infer') g = usrec.astype(bool) | usrec.shift(-1, fill_value=0).astype(bool) g = (g != g.shift(fill_value=0)).cumsum()[g].to_frame() g = g.reset_index().groupby('USREC')['date'].agg(['first','last']) vspans = [(v[0], v[1]) for k, v in g.iterrows()] # Retrieve FRED-MD series and apply tcode transformations beg = 19600301 end = 20200131 df, t = fred_md(202004) # from vintage April 2020 data = [] for col in df.columns: data.append(alf.transform(df[col], tcode=t['transform'][col], freq='m')) mdf = pd.concat(data, axis=1).iloc[2:] mdata = mdf[(mdf.index >= beg) & (mdf.index <= end)].dropna(axis=1) mdata = (mdata - mdata.mean(axis=0)) / mdata.std(axis=0, ddof=0) mdata.index = pd.DatetimeIndex(mdata.index.astype(str), freq='m') mdata # - # ## Load FRED-QD and apply tcode transformations # + df, t = fred_qd(202004) # from vintage April 2020 data = [] for col in df.columns: data.append(alf.transform(df[col], tcode=t['transform'][col], freq='q')) df = pd.concat(data, axis=1).iloc[2:] qdata = df[(df.index >= beg) & (df.index <= end)].dropna(axis=1) qdata.index = pd.DatetimeIndex(qdata.index.astype(str), freq='q') qdata # - # # Define LSTM pytorch model # + device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class LSTM(nn.Module): def __init__(self, n_features, hidden_size, num_layers=1): super().__init__() self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, num_layers=num_layers) self.linear = nn.Linear(in_features=hidden_size, out_features=n_features) def forward(self, x, hidden_state=None): """ x: shape (seq_len, batch, input_siz) h: of shape (num_layers * num_directions, batch, hidden_size) c: of shape (num_layers * num_directions, batch, hidden_size) output: shape (seq_len, batch, num_directions * hidden_size) """ output, (h, c) = self.lstm(x, hidden_state) return self.linear(output), (h.detach(), c.detach()) # - # ## Create input data for LSTM # - with sequence length 16 seq_len = 16 train_exs = [mdata.iloc[i-(seq_len+1):i].values for i in range(seq_len+1, len(mdata))] n_features = mdata.shape[1] train_exs[0].shape # # Run training loop # # - for various hidden sizes of LSTM cell from 1,..,4 # - minibatch with batch size 32, and learning rate step scheduler hidden_factors = dict() prediction_errors = dict() for hidden_size in [1,2,3,4]: model = LSTM(n_features=n_features, hidden_size=hidden_size).to(device) 
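# (Added note: each pass of this loop trains a fresh LSTM for the given hidden size; after training, the hidden state h at the end of every 16-month input window is collected as the extracted "hidden factors" plotted further below.)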
print(model) # Set optimizer and learning rate scheduler, with step_size=30 lr, num_lr, step_size = 0.001, 3, 400 optimizer = torch.optim.Adam(model.parameters(), lr=lr) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=0.1) loss_function = nn.MSELoss() batch_size, num_epochs = 32, step_size*num_lr for i in tqdm(range(num_epochs)): # Run training loop per epoch idx = np.arange(len(train_exs)) # shuffle indxs into batches random.shuffle(idx) batches = [idx[i:(i+batch_size)] for i in range(0,len(idx),batch_size)] total_loss = 0.0 model.train() for batch in batches: # train each batch # train_ex input has shape (seq_len, batch_size=16, n_features) train_ex = torch.tensor([[train_exs[idx][seq] for idx in batch] for seq in range(seq_len+1)]).float() model.zero_grad() y_pred, hidden_state = model.forward(train_ex[:-1].to(device)) loss = loss_function(y_pred[-1], train_ex[-1].to(device)) total_loss += float(loss) loss.backward() optimizer.step() scheduler.step() # collect predictions and hidden states, and compute mse with torch.no_grad(): # reduce memory consumption for eval hidden_state = [] prediction_error = [] mse = nn.MSELoss() for i in range(seq_len+1, len(mdata)): # single test example of shape (seq_len=12, batch_size=1, n_features) test_ex = torch.tensor(mdata[i-(seq_len+1):i].values)\ .float().unsqueeze(dim=1).to(device) y_pred, (h, c) = model.forward(test_ex[:-1], None) prediction_error.append(float(mse(y_pred[-1], test_ex[-1]))) hidden_state.append(h[0][0].cpu().numpy()) hidden_factors[hidden_size] = DataFrame( hidden_state,index=mdata.index[(1+seq_len):len(mdata)]) prediction_errors[f"Hidden Size {hidden_size}"] = np.mean(prediction_error) print(prediction_errors) # # Plot LSTM hidden states process fig, axes = plt.subplots(len(hidden_factors), 1, figsize=(9,10),num=1,clear=True) for hidden_factor, ax in zip(hidden_factors.values(), axes): hidden_factor.plot(ax=ax, style='--', legend=False) for a,b in vspans: if a >= min(hidden_factor.index): ax.axvspan(a, min(b, max(hidden_factor.index)), alpha=0.2) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_xlabel(f"LSTM with hidden_size = {len(hidden_factor.columns)}") plt.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.suptitle(f"LSTM Hidden States Process", fontsize=12) plt.savefig(os.path.join(imgdir, 'lstm.jpg')) plt.show() # # Dynamic Factor Models # # - can be cast into state space form and estimated via Kalman Filter # # https://www.statsmodels.org/devel/examples/notebooks/generated/statespace_dfm_coincident.html # # ## Fit (linear) dynamic factor model with DynamicFactorMQ # https://www.statsmodels.org/devel/generate/statsmodels.tsa.statespace.dynamic_factor_mq.DynamicFactorMQ.html # # Allows for # # - mixed frequency data (for nowcasting) # - EM algorithm by default accomodates larger data sets: Kalman filter and optimizing of likelihood with quasi-Newton methods slow dynamic_factors = dict() for i in [1, 2, 3, 4]: mod = sm.tsa.DynamicFactorMQ(endog=mdata, factors=1, # num factor blocks factor_multiplicities=i, # num factors in block factor_orders=2, # order of factor VAR idiosyncratic_ar1=False) # False=white noise fitted = mod.fit_em(disp=20, maxiter=200, full_output=True) dynamic_factors[i] = DataFrame(fitted.factors.filtered.iloc[seq_len+1:]) dynamic_factors[i].columns = list(np.arange(len(dynamic_factors[i].columns))) mse = nn.MSELoss() prediction_errors[f"Dynamic Factors {i}"] = float( mse(torch.tensor(fitted.fittedvalues.iloc[mod.factor_orders+1:].values), 
torch.tensor(mdata.iloc[mod.factor_orders+1:].values))) # ## Plot estimated dynamic factors fig, axes = plt.subplots(len(dynamic_factors),1,figsize=(9,10),num=1,clear=True) for dynamic_factor, ax in zip(dynamic_factors.values(), axes): dynamic_factor.plot(ax=ax, style='--', legend=False) for a,b in vspans: if a >= min(dynamic_factor.index): ax.axvspan(a, min(b, max(dynamic_factor.index)), alpha=0.2) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_xlabel(f"Dynamic Factors = {len(dynamic_factor.columns)}") plt.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.suptitle(f"Fitted Dynamic Factors ", fontsize=12) plt.savefig(os.path.join(imgdir, 'dynamic.jpg')) plt.show() # ## Correlation of LSTM hidden state process with (linear) dynamic factors # # LSTM Hidden States explained by four dynamic (linear) factors with Rsquared averaging 70% rsq = dict() for k, hidden_factor in hidden_factors.items(): rsq[k] = [sm.OLS(y, sm.add_constant(dynamic_factors[len(dynamic_factors)]))\ .fit().rsquared for _, y in hidden_factor.iteritems()] print('Average variance of LSTM hidden states explained by linear combination of dynamic factors') DataFrame({k: np.mean(r) for k, r in rsq.items()}, index=['R-square'])\ .rename_axis("# hidden states in LSTM:", axis=1) # # Mixed Frequency Dynamic Factor Model # - include quarterly GDP to monthly FRED-MD series # + scaler = StandardScaler().fit(qdata['GDPC1'].values.reshape((-1, 1))) gdp = DataFrame(scaler.transform(qdata['GDPC1'].values.reshape((-1, 1))), index = qdata.index, columns=['GDPC1']) mod = sm.tsa.DynamicFactorMQ(endog=mdata, endog_quarterly=gdp, factors=1, # num factor blocks factor_multiplicities=8, # num factors in block factor_orders=2, # order of factor VAR idiosyncratic_ar1=False) # False=white noise fitted = mod.fit_em(disp=1, maxiter=200, full_output=True) dynamic_factor = DataFrame(fitted.factors.filtered.iloc[seq_len+1:]) dynamic_factor.columns = list(np.arange(len(dynamic_factor.columns))) # - # ## Plot fitted GDP values in 2007-2010 beg = '2006-12-31' end = '2010-12-31' fig, ax = plt.subplots(figsize=(9,5), num=1, clear=True) y = fitted.fittedvalues['GDPC1'] y = y[(y.index > beg) & (y.index <= end)] ax.plot_date(y.index, (scaler.inverse_transform(y)/3).cumsum(), fmt='-o', color='C0') x = gdp.copy() x.index = pd.DatetimeIndex(x.index.astype(str), freq=None) x = x[(x.index > beg) & (x.index <= end)] ax.plot_date(x.index, scaler.inverse_transform(x).cumsum(), fmt='-o',color='C1') ax.legend(['monthly fitted', 'quarterly actual'], loc='upper left') ax.set_title('Quarterly GDP and Fitted Monthly Estimates from Dynamic Factor Model') plt.tight_layout() plt.savefig(os.path.join(imgdir, f"mixedfreq.jpg")) plt.show() # ## Show "Nowcast" of GDP Series({'Last Date of Monthly Data:': mdata.index[-1].strftime('%Y-%m-%d'), 'Last Date of Quarterly Data:': qdata.index[-1].strftime('%Y-%m-%d'), 'Forecast of Q1 GDP quarterly rate': scaler.inverse_transform(fitted.forecast('2020-03')['GDPC1'][[-1]])[0], 'Forecast of Q2 GDP quarterly rate': scaler.inverse_transform(fitted.forecast('2020-06')['GDPC1'][[-1]])[0]}, name = 'Forecast').to_frame() # # Hidden Markov Model from FRED-MD # - HMM with Gaussian emissions # - Vary number of states and cov types to compare BIC's # - 'full' and 'tied' have too many parameters of cov matrix => lowest IC at 1 component # - 'spherical' constraints features same variance each state => too many components=9 # - 'diag' cov matrix for each states => balanced: min IC at n_components=3 out = [] for covariance_type in 
["full", "diag", "tied", "spherical"]: for n_components in range(1,16): markov = hmm.GaussianHMM(n_components=n_components, covariance_type=covariance_type, verbose=False, tol=1e-6, random_state=42, n_iter=100)\ .fit(mdata.values, [len(mdata)]) results = hmm_summary(markov, mdata, [len(mdata)]) #print(n_components, Series(results, name=covariance_type).to_frame().T) result = {'covariance_type': covariance_type, 'n_components': n_components} result.update(results) out.append(Series(result)) result = pd.concat(out, axis=1).T.convert_dtypes() print(result.to_string(float_format='{:.1f}'.format)) # ## Display estimated transition and stationary distributions n_components = 3 markov = hmm.GaussianHMM(n_components=n_components, covariance_type='diag', verbose=False, tol=1e-6, random_state=42, n_iter=100)\ .fit(mdata.values, [len(mdata)]) pred = DataFrame(markov.predict(mdata), columns=['state'], index=mdata.index) matrix = hmm_summary(markov, mdata, [len(mdata)], matrix=True)['matrix'] matrix # ## Plot predicted states by selected economic series # - Recession state, recovery pre-2000 state and recoverty post-2000 state # # + # helper to plot predicted states def plot_states(modelname, labels, beg, end): n_components = len(np.unique(labels)) markers = ["o", "s", "d", "X", "P", "8", "H", "*", "x", "+"][:n_components] series_ids = ['IPMANSICS', 'SPASTT01USM661N'] fig, axes = plt.subplots(len(series_ids),ncols=1,figsize=(9,5),num=1,clear=True) axes[0].set_title(f"{modelname.upper()} Predicted States", {'fontsize':12}) for series_id, ax in zip(series_ids, axes.ravel()): df = alf(series_id) #print(df) df.index = pd.DatetimeIndex(df.index.astype(str), freq='infer') df = df[(df.index >= beg) & (df.index <= end)] for i, marker in zip(range(n_components), markers): df.loc[labels==i].plot(ax=ax, style=marker, markersize=2, color=f"C{i}", rot=0) ax.set_xlabel(f"{series_id}: {alf.header(series_id)}", {'fontsize':8}) for a,b in vspans: if (b > min(df.index)) & (a < max(df.index)): ax.axvspan(max(a, min(df.index)), min(b, max(df.index)), alpha=0.3, color='grey') ax.legend([f"state {i}" for i in range(n_components)], fontsize=8) plt.tight_layout() plt.savefig(os.path.join(imgdir, f"{modelname.lower()}.jpg")) plot_states('hmm', pred.values.flatten(), min(pred.index), max(pred.index)) plt.show() # - # # Gaussian Mixtures Model from FRED-MD # - Fit with 3 states # - Plot predicted states, with representative economic time series # gmm = GaussianMixture(n_components=3, covariance_type='diag').fit(mdata) labels = gmm.predict(mdata) plot_states('GMM', labels, min(mdata.index), max(mdata.index)) plt.show() # ## Compare persistance of states print("Average Persistance of Hidden Markov states:", np.mean(pred[:-1].values == pred[1:].values)) print() print('Stationary Distributiion of Hidden Markov:') print(matrix.iloc[:,-1]) print() print("Average Persistance of Gaussian Mixtures states:", np.mean(labels[:-1] == labels[1:])) print() print('Stationary Distribution of Gaussian Mixture:') print(Series(labels).value_counts().sort_index()/len(labels)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py37-yamako # language: python # name: py37-yamako # --- # # ACGANに入れる前にDiscriminatorをpretrainしてみる # ## 必要なmodule import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.transforms as transforms import numpy as np import 
matplotlib.pyplot as plt import time import random # ## データの前処理 # + class Transform(object): def __init__(self): pass def __call__(self, sample): sample = np.array(sample, dtype = np.float32) sample = torch.tensor(sample) return (sample/127.5)-1 transform = Transform() # - # ## Datasetの定義 # + from tqdm import tqdm class dataset_full(torch.utils.data.Dataset): def __init__(self, img, label, transform=None): self.transform = transform self.data_num = len(img) self.data = [] self.label = [] for i in tqdm(range(self.data_num)): self.data.append([img[i]]) self.label.append(label[i]) #同じ実装で動くようにラベルを0~9に振り直す self.data_num = len(self.data) def __len__(self): return self.data_num def __getitem__(self, idx): out_data = self.data[idx] out_label = np.identity(49)[self.label[idx]] out_label = np.array(out_label, dtype = np.float32) if self.transform: out_data = self.transform(out_data) return out_data, out_label # - # ## バイナリファイルからDLしてDatasetにする # + # path = %pwd train_img = np.load('{}/k49-train-imgs.npz'.format(path)) train_img = train_img['arr_0'] train_label = np.load('{}/k49-train-labels.npz'.format(path)) train_label = train_label['arr_0'] test_img = np.load('{}/k49-test-imgs.npz'.format(path)) test_img = test_img['arr_0'] test_label = np.load('{}/k49-test-labels.npz'.format(path)) test_label = test_label['arr_0'] train_data = dataset_full(train_img, train_label, transform=transform) print(len(train_data)) test_data = dataset_full(test_img, test_label, transform=transform) print(len(test_data)) # - # ## Dataloaderを作成 # + batch_size = 256 train_loader = torch.utils.data.DataLoader(train_data, batch_size = batch_size, shuffle = True, num_workers = 6) test_loader = torch.utils.data.DataLoader(test_data, batch_size = batch_size, shuffle = True, num_workers = 6) # - # ## Discriminatorの定義 class Discriminator(nn.Module): def __init__(self, num_class): super(Discriminator, self).__init__() self.num_class = num_class self.conv = nn.Sequential( nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1), #入力は1チャネル(白黒だから), フィルターの数64, フィルターのサイズ4*4 nn.LeakyReLU(0.2), nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2), nn.BatchNorm2d(128), ) self.fc = nn.Sequential( nn.Linear(128 * 7 * 7, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(0.2), ) self.fc_TF = nn.Sequential( nn.Linear(1024, 1), nn.Sigmoid(), ) self.fc_class = nn.Sequential( nn.Linear(1024, num_class), nn.LogSoftmax(dim=1), ) self.init_weights() def init_weights(self): for module in self.modules(): if isinstance(module, nn.Conv2d): module.weight.data.normal_(0, 0.02) module.bias.data.zero_() elif isinstance(module, nn.Linear): module.weight.data.normal_(0, 0.02) module.bias.data.zero_() elif isinstance(module, nn.BatchNorm1d): module.weight.data.normal_(1.0, 0.02) module.bias.data.zero_() elif isinstance(module, nn.BatchNorm2d): module.weight.data.normal_(1.0, 0.02) module.bias.data.zero_() def forward(self, img): x = self.conv(img) x = x.view(-1, 128 * 7 * 7) x = self.fc(x) x_TF = self.fc_TF(x) x_class = self.fc_class(x) return x_TF, x_class # ## 1エポックごとに計算する関数を定義 # + #device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") def train_func(D_model, batch_size, criterion, train_loader, optimizer, scheduler): D_model.train() train_loss = 0 train_acc = 0 for batch_idx, (imgs, labels) in enumerate(train_loader): if imgs.size()[0] != batch_size: break img, label = imgs.to(device), labels.to(device) #勾配の初期化 optimizer.zero_grad() #順伝播 TF_output, class_output = D_model(img) #loss計算 CEE_label = torch.max(label, 
1)[1].to(device) class_loss = criterion(class_output, CEE_label) #逆伝播 class_loss.backward() #optimizer更新 optimizer.step() #train_lossとaccを蓄積 train_loss += class_loss.item() train_acc += (class_output.argmax(dim=1) == label.argmax(dim=1)).sum().item() if scheduler != 'None': scheduler.step() return train_loss/len(train_loader), train_acc/(batch_size * len(train_loader)) def val_func(D_model, batch_size, criterion, val_loader): D_model.eval() val_loss = 0 val_acc = 0 with torch.no_grad(): for imgs, labels in val_loader: img, label = imgs.to(device), labels.to(device) #順伝播 TF_output, class_output = D_model(img) #loss計算 CEE_label = torch.max(label, 1)[1].to(device) class_loss = criterion(class_output, CEE_label) #val_lossとaccを蓄積 val_loss += class_loss.item() val_acc += (class_output.argmax(1) == label.argmax(1)).sum().item() return val_loss/len(val_loader), val_acc/(batch_size * len(val_loader)) # - # ## モデルをエポックごとに計算し、結果を表示 # + #再現性確保のためseed値固定 SEED = 1111 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True def model_run(num_epochs, batch_size = batch_size, train_loader = train_loader, val_loader = test_loader, device = device): #クラス数 num_class = 49 #モデル定義 D_model = Discriminator(num_class).to(device) #loss定義 class_criterion = nn.NLLLoss().to(device) #optimizerの定義 D_optimizer = torch.optim.Adam(D_model.parameters(), lr=0.0002, betas=(0.5, 0.999), eps=1e-08, weight_decay=1e-5, amsgrad=False) #shedulerの定義 scheduler = 'None' train_loss_list = [] val_loss_list = [] train_acc_list = [] val_acc_list = [] all_time = time.time() for epoch in range(num_epochs): start_time = time.time() train_loss, train_acc = train_func(D_model, batch_size, class_criterion, train_loader, D_optimizer, scheduler) val_loss, val_acc = val_func(D_model, batch_size, class_criterion, val_loader) train_loss_list.append(train_loss) val_loss_list.append(val_loss) train_acc_list.append(train_acc) val_acc_list.append(val_acc) secs = int(time.time() - start_time) mins = secs / 60 secs = secs % 60 #エポックごとに結果を表示 print('Epoch: %d' %(epoch + 1), " | 所要時間 %d 分 %d 秒" %(mins, secs)) print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 100:.1f}%(train)') print(f'\tLoss: {val_loss:.4f}(valid)\t|\tAcc: {val_acc * 100:.1f}%(valid)') torch.save({ 'epoch':num_epochs, 'model_state_dict':D_model.state_dict(), 'optimizer_state_dict':D_optimizer.state_dict(), 'loss':train_loss, }, './pretrained_D_model_{}'.format(num_epochs)) return train_loss_list, val_loss_list, train_acc_list, val_acc_list # - train_loss_list, val_loss_list, train_acc_list, val_acc_list = model_run(20) # ## loss表示 # + import matplotlib.pyplot as plt # %matplotlib inline fig = plt.figure(figsize=(10,7)) loss = fig.add_subplot(1,1,1) loss.plot(range(len(train_loss_list)),train_loss_list,label='train_loss',color='r', ls='-') loss.plot(range(len(val_loss_list)),val_loss_list,label='val_loss',color='b',ls='-') loss.set_xlabel('epoch') loss.set_ylabel('loss') loss.set_title('Discriminating loss') loss.legend() loss.grid() fig.show() # + import matplotlib.pyplot as plt # %matplotlib inline fig = plt.figure(figsize=(10,7)) acc = fig.add_subplot(1,1,1) acc.plot(range(len(train_acc_list)),train_acc_list,label='train_acc',color='r', ls='-') acc.plot(range(len(val_acc_list)),val_acc_list,label='val_acc',color='b',ls='-') acc.set_xlabel('epoch') acc.set_ylabel('acc') acc.set_title('Discriminating acc') acc.legend() acc.grid() fig.show() # - # --- # jupyter: # jupytext: # text_representation: # 
extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Basic Linux Command-line Tools # # #### ls - list # # #### cd - change directory; cd .. # # #### mkdir - make directory # # #### rmdir - remove directory # #### touch (file_name) # #### subl (file_name) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + na = int(input()) a = set(map(int, input().split())) nb = int(input()) b = set(map(int, input().split())) amb = a.difference(b) bma = b.difference(a) sd = sorted(list(amb.union(bma))) for i in sd: print(i) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 4 - Classification # - [Load dataset](#Load-dataset) # - [The Default data set](#Figure-4.1---Default-data-set) # - [4.3 Logistic Regression](#4.3-Logistic-Regression) # - [4.4 Linear Discriminant Analysis](#4.4-Linear-Discriminant-Analysis) # - [Lab: 4.6.3 Linear Discriminant Analysis](#4.6.3-Linear-Discriminant-Analysis) # - [Lab: 4.6.4 Quadratic Discriminant Analysis](#4.6.4-Quadratic-Discriminant-Analysis) # - [Lab: 4.6.5 K-Nearest Neighbors](#4.6.5-K-Nearest-Neighbors) # - [Lab: 4.6.6 An Application to Caravan Insurance Data](#4.6.6-An-Application-to-Caravan-Insurance-Data) # + # # %load ../standard_import.txt import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns import sklearn.linear_model as skl_lm from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.metrics import confusion_matrix, classification_report, precision_score from sklearn import preprocessing from sklearn import neighbors import statsmodels.api as sm import statsmodels.formula.api as smf # %matplotlib inline plt.style.use('seaborn-white') # - # ### Load dataset # + # In R, I exported the dataset from package 'ISLR' to an Excel file df = pd.read_excel('Data/Default.xlsx') # Note: factorize() returns two objects: a label array and an array with the unique values. # We are only interested in the first object. 
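# Added illustration (not in the original): pd.factorize(pd.Series(['No', 'Yes', 'No'])) returns
# (array([0, 1, 0]), Index(['No', 'Yes'])), so taking element [0] below keeps only the integer codes.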
df['default2'] = df.default.factorize()[0] df['student2'] = df.student.factorize()[0] df.head(3) # - # ### Figure 4.1 - Default data set # + fig = plt.figure(figsize=(12,5)) gs = mpl.gridspec.GridSpec(1, 4) ax1 = plt.subplot(gs[0,:-2]) ax2 = plt.subplot(gs[0,-2]) ax3 = plt.subplot(gs[0,-1]) # Take a fraction of the samples where target value (default) is 'no' df_no = df[df.default2 == 0].sample(frac=0.15) # Take all samples where target value is 'yes' df_yes = df[df.default2 == 1] df_ = df_no.append(df_yes) ax1.scatter(df_[df_.default == 'Yes'].balance, df_[df_.default == 'Yes'].income, s=40, c='orange', marker='+', linewidths=1) ax1.scatter(df_[df_.default == 'No'].balance, df_[df_.default == 'No'].income, s=40, marker='o', linewidths='1', edgecolors='lightblue', facecolors='white', alpha=.6) ax1.set_ylim(ymin=0) ax1.set_ylabel('Income') ax1.set_xlim(xmin=-100) ax1.set_xlabel('Balance') c_palette = {'No':'lightblue', 'Yes':'orange'} sns.boxplot('default', 'balance', data=df, orient='v', ax=ax2, palette=c_palette) sns.boxplot('default', 'income', data=df, orient='v', ax=ax3, palette=c_palette) gs.tight_layout(plt.gcf()) # - # ## 4.3 Logistic Regression # ### Figure 4.2 # + X_train = df.balance.values.reshape(-1,1) y = df.default2 # Create array of test data. Calculate the classification probability # and predicted classification. X_test = np.arange(df.balance.min(), df.balance.max()).reshape(-1,1) clf = skl_lm.LogisticRegression(solver='newton-cg') clf.fit(X_train,y) prob = clf.predict_proba(X_test) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5)) # Left plot sns.regplot(df.balance, df.default2, order=1, ci=None, scatter_kws={'color':'orange'}, line_kws={'color':'lightblue', 'lw':2}, ax=ax1) # Right plot ax2.scatter(X_train, y, color='orange') ax2.plot(X_test, prob[:,1], color='lightblue') for ax in fig.axes: ax.hlines(1, xmin=ax.xaxis.get_data_interval()[0], xmax=ax.xaxis.get_data_interval()[1], linestyles='dashed', lw=1) ax.hlines(0, xmin=ax.xaxis.get_data_interval()[0], xmax=ax.xaxis.get_data_interval()[1], linestyles='dashed', lw=1) ax.set_ylabel('Probability of default') ax.set_xlabel('Balance') ax.set_yticks([0, 0.25, 0.5, 0.75, 1.]) ax.set_xlim(xmin=-100) # - # ### Table 4.1 y = df.default2 # ##### scikit-learn # Using newton-cg solver, the coefficients are equal/closest to the ones in the book. # I do not know the details on the differences between the solvers. 
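# A small added sketch (not in the original notebook): turn the fitted coefficients into a default probability by hand for a balance of 1,000, and check it against predict_proba. It reuses the clf fitted on balance in the Figure 4.2 cell above.
# +
balance = 1000.0
log_odds = clf.intercept_[0] + clf.coef_[0][0] * balance
print("manual probability :", 1 / (1 + np.exp(-log_odds)))
print("via predict_proba  :", clf.predict_proba([[balance]])[0, 1])
# -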
clf = skl_lm.LogisticRegression(solver='newton-cg') X_train = df.balance.values.reshape(-1,1) clf.fit(X_train,y) print(clf) print('classes: ',clf.classes_) print('coefficients: ',clf.coef_) print('intercept :', clf.intercept_) # ##### statsmodels X_train = sm.add_constant(df.balance) est = smf.Logit(y.ravel(), X_train).fit() est.summary2().tables[1] # ### Table 4.2 # + X_train = sm.add_constant(df.student2) y = df.default2 est = smf.Logit(y, X_train).fit() est.summary2().tables[1] # - # ### Table 4.3 - Multiple Logistic Regression X_train = sm.add_constant(df[['balance', 'income', 'student2']]) est = smf.Logit(y, X_train).fit() est.summary2().tables[1] # ### Figure 4.3 - Confounding # + # balance and default vectors for students X_train = df[df.student == 'Yes'].balance.values.reshape(df[df.student == 'Yes'].balance.size,1) y = df[df.student == 'Yes'].default2 # balance and default vectors for non-students X_train2 = df[df.student == 'No'].balance.values.reshape(df[df.student == 'No'].balance.size,1) y2 = df[df.student == 'No'].default2 # Vector with balance values for plotting X_test = np.arange(df.balance.min(), df.balance.max()).reshape(-1,1) clf = skl_lm.LogisticRegression(solver='newton-cg') clf2 = skl_lm.LogisticRegression(solver='newton-cg') clf.fit(X_train,y) clf2.fit(X_train2,y2) prob = clf.predict_proba(X_test) prob2 = clf2.predict_proba(X_test) # - df.groupby(['student','default']).size().unstack('default') # + # creating plot fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5)) # Left plot ax1.plot(X_test, pd.DataFrame(prob)[1], color='orange', label='Student') ax1.plot(X_test, pd.DataFrame(prob2)[1], color='lightblue', label='Non-student') ax1.hlines(127/2817, colors='orange', label='Overall Student', xmin=ax1.xaxis.get_data_interval()[0], xmax=ax1.xaxis.get_data_interval()[1], linestyles='dashed') ax1.hlines(206/6850, colors='lightblue', label='Overall Non-Student', xmin=ax1.xaxis.get_data_interval()[0], xmax=ax1.xaxis.get_data_interval()[1], linestyles='dashed') ax1.set_ylabel('Default Rate') ax1.set_xlabel('Credit Card Balance') ax1.set_yticks([0, 0.2, 0.4, 0.6, 0.8, 1.]) ax1.set_xlim(450,2500) ax1.legend(loc=2) # Right plot sns.boxplot('student', 'balance', data=df, orient='v', ax=ax2, palette=c_palette); # - # ## 4.4 Linear Discriminant Analysis # ### Table 4.4 # # + X = df[['balance', 'income', 'student2']].as_matrix() y = df.default2.as_matrix() lda = LinearDiscriminantAnalysis(solver='svd') y_pred = lda.fit(X, y).predict(X) df_ = pd.DataFrame({'True default status': y, 'Predicted default status': y_pred}) df_.replace(to_replace={0:'No', 1:'Yes'}, inplace=True) df_.groupby(['Predicted default status','True default status']).size().unstack('True default status') # - print(classification_report(y, y_pred, target_names=['No', 'Yes'])) # ### Table 4.5 # Instead of using the probability of 50% as decision boundary, we say that a probability of default of 20% is to be classified as 'Yes'. 
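# A quick added sketch (not in the original notebook): count how many observations would be classified as 'Yes' at the default 50% cutoff versus the 20% cutoff, using the posterior probabilities of the LDA model fitted above.
# +
probs = lda.predict_proba(X)[:, 1]
print("predicted 'Yes' at 0.5 cutoff:", (probs > 0.5).sum())
print("predicted 'Yes' at 0.2 cutoff:", (probs > 0.2).sum())
# -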
# + decision_prob = 0.2 y_prob = lda.fit(X, y).predict_proba(X) df_ = pd.DataFrame({'True default status': y, 'Predicted default status': y_prob[:,1] > decision_prob}) df_.replace(to_replace={0:'No', 1:'Yes', 'True':'Yes', 'False':'No'}, inplace=True) df_.groupby(['Predicted default status','True default status']).size().unstack('True default status') # - # # Lab # ### 4.6.3 Linear Discriminant Analysis df = pd.read_csv('Data/Smarket.csv', usecols=range(1,10), index_col=0, parse_dates=True) # + X_train = df[:'2004'][['Lag1','Lag2']] y_train = df[:'2004']['Direction'] X_test = df['2005':][['Lag1','Lag2']] y_test = df['2005':]['Direction'] lda = LinearDiscriminantAnalysis() pred = lda.fit(X_train, y_train).predict(X_test) # - lda.priors_ lda.means_ # These do not seem to correspond to the values from the R output in the book? lda.coef_ confusion_matrix(y_test, pred).T print(classification_report(y_test, pred, digits=3)) pred_p = lda.predict_proba(X_test) np.unique(pred_p[:,1]>0.5, return_counts=True) np.unique(pred_p[:,1]>0.9, return_counts=True) # ### 4.6.4 Quadratic Discriminant Analysis qda = QuadraticDiscriminantAnalysis() pred = qda.fit(X_train, y_train).predict(X_test) qda.priors_ qda.means_ confusion_matrix(y_test, pred).T print(classification_report(y_test, pred, digits=3)) # ### 4.6.5 K-Nearest Neighbors knn = neighbors.KNeighborsClassifier(n_neighbors=1) pred = knn.fit(X_train, y_train).predict(X_test) print(confusion_matrix(y_test, pred).T) print(classification_report(y_test, pred, digits=3)) knn = neighbors.KNeighborsClassifier(n_neighbors=3) pred = knn.fit(X_train, y_train).predict(X_test) print(confusion_matrix(y_test, pred).T) print(classification_report(y_test, pred, digits=3)) # ### 4.6.6 An Application to Caravan Insurance Data # # #### K-Nearest Neighbors # + # In R, I exported the dataset from package 'ISLR' to a csv file df = pd.read_csv('Data/Caravan.csv') y = df.Purchase X = df.drop('Purchase', axis=1).astype('float64') X_scaled = preprocessing.scale(X) X_train = X_scaled[1000:,:] y_train = y[1000:] X_test = X_scaled[:1000,:] y_test = y[:1000] def KNN(n_neighbors=1, weights='uniform'): clf = neighbors.KNeighborsClassifier(n_neighbors, weights) clf.fit(X_train, y_train) pred = clf.predict(X_test) score = clf.score(X_test, y_test) return(pred, score, clf.classes_) def plot_confusion_matrix(cm, classes, n_neighbors, title='Confusion matrix (Normalized)', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues) plt.title('Normalized confusion matrix: KNN-{}'.format(n_neighbors)) plt.colorbar() plt.xticks(np.arange(2), classes) plt.yticks(np.arange(2), classes) plt.tight_layout() plt.xlabel('True label',rotation='horizontal', ha='right') plt.ylabel('Predicted label') plt.show() # - for i in [1,3,5]: pred, score, classes = KNN(i) cm = confusion_matrix(y_test, pred) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] plot_confusion_matrix(cm_normalized.T, classes, n_neighbors=i) cm_df = pd.DataFrame(cm.T, index=classes, columns=classes) cm_df.index.name = 'Predicted' cm_df.columns.name = 'True' print(cm_df) print(pd.DataFrame(precision_score(y_test, pred, average=None), index=classes, columns=['Precision'])) # #### Logistic Regression regr = skl_lm.LogisticRegression() regr.fit(X_train, y_train) pred = regr.predict(X_test) cm_df = pd.DataFrame(confusion_matrix(y_test, pred).T, index=regr.classes_, columns=regr.classes_) cm_df.index.name = 'Predicted' cm_df.columns.name = 'True' print(cm_df) print(classification_report(y_test, pred)) pred_p 
= regr.predict_proba(X_test) cm_df = pd.DataFrame({'True': y_test, 'Pred': pred_p[:,1] > .25}) cm_df.Pred.replace(to_replace={True:'Yes', False:'No'}, inplace=True) print(cm_df.groupby(['True', 'Pred']).size().unstack('True').T) print(classification_report(y_test, cm_df.Pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt import numpy as np import skimage.io import skimage.morphology from collections import Counter import re import nltk # nltk.download("stopwords") # we need this only the first time we download the "stopwords" from nltk.corpus import stopwords # - # # Image Size # Let's see what is the size (in bytes) of the hamburger image # + hamburger_image = skimage.io.imread("data/hamburger.jpg") print(hamburger_image.nbytes) # Another way to calculate the size (in bytes) is to mutiply all the numbers from the shape # Each pixel is one byte for the JPG format so we can easily calculate the number of bytes # We can see more info here: https://www.scantips.com/basics1d.html rows, cols, dimensions = hamburger_image.shape print(rows * cols * dimensions) # - # Let's see the for all channels the mean brigthnesses. # # After that we can see the dominant one. hamburger_image_red, hamburger_image_green, hamburger_image_blue = [hamburger_image[:, :, i] for i in range(3)] print(hamburger_image_red.mean()) print(hamburger_image_green.mean()) print(hamburger_image_blue.mean()) # # Morphology # Let's apply a binary opening to our image. # # After that we need to get the count for all the white pixels. # We use function to calculate this count # structuring_element = np.ones((3, 3)) # first variant structuring_element = skimage.morphology.square(3) binary_image = skimage.morphology.binary_opening(hamburger_image_blue, structuring_element) binary_image = binary_image.astype(int) plt.imshow(binary_image) plt.show() def count_white_pixels(binary_image): rows, columns = binary_image.shape count = 0 for row in range(0, rows): for value in binary_image[row]: if value == 1: count += 1; return count; white_pixel_count = count_white_pixels(binary_image) print("White pixels count:", white_pixel_count) # # Pride and Prejudice book # # Let's read the book "Pride and Prejudice" text = "" with open("data/Pride and Prejudice.txt", "r", encoding = "utf-8") as f: text = f.read() print(len(text)) # Next, we make all the text lower and make a word counter. # # After that we can simply answer the questions # * How many times does the word "pride" occur in the entire Web page? # * How many times does the word "prejudice" occur in the entire Web page? text = text.lower() words = re.split("\W+", text) word_counter = Counter(words) print("pride count:", word_counter["pride"]) print("prejudice count:", word_counter["prejudice"]) # Now it's time to remove the stopwords and to find out what's the main character's name. 
stop = stopwords.words("english") # We get all the words that aren't "stopwords" cleaned_words = [word for word in words if word not in stop] cleaned_words_counter = Counter(cleaned_words) cleaned_words_counter.most_common(20) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # the libraries you will need to run the program import pandas as pd import requests # to get image from the web import shutil # to save it locally df = pd.read_csv("~/Desktop/body_images.csv") # Most of what follows is from: https://towardsdatascience.com/how-to-download-an-image-using-python-38a75cfa21c for index, row in df.iterrows(): image_url = row['URL'] filename = image_url.split("/")[-3] r = requests.get(image_url, stream = True) ext = r.headers['content-type'].split('/')[-1] # converts response headers mime type to an extension (may not work with everything) filename += '.' filename += ext # Check if the image was retrieved successfully if r.status_code == 200: # Set decode_content value to True, otherwise the downloaded image file's size will be zero. r.raw.decode_content = True # Open a local file with wb ( write binary ) permission. with open(filename,'wb') as f: shutil.copyfileobj(r.raw, f) print('Image sucessfully Downloaded: ',filename) else: print('Image Couldn\'t be retreived') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # + [markdown] run_control={"frozen": false, "read_only": false} # # Machine Intelligence II (week 2) -- Team MensaNord # + [markdown] run_control={"frozen": false, "read_only": false} # - # - # - # - # - # + run_control={"frozen": false, "read_only": false} import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as mpatches from pandas.tools.plotting import scatter_matrix from mpl_toolkits.mplot3d import Axes3D # %matplotlib inline # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 1 A # + run_control={"frozen": false, "read_only": false} data = np.loadtxt('pca-data-2d.dat', delimiter=' ', skiprows=0, usecols=range(0, 2)) data.shape # + run_control={"frozen": false, "read_only": false} data # + run_control={"frozen": false, "read_only": false} m = np.mean(data, 0) m # + run_control={"frozen": false, "read_only": false} data_centered = np.subtract(data, m) data_centered # + run_control={"frozen": false, "read_only": false} plt.figure() plt.scatter(data_centered.T[0], data_centered.T[1]) plt.xlabel("col1") plt.ylabel("col2") plt.grid() plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 1 B # + run_control={"frozen": false, "read_only": false} covariance = np.cov(data_centered.T) covariance # + run_control={"frozen": false, "read_only": false} evals, evecs = np.linalg.eig(covariance) transmat = evecs.T transmat # + run_control={"frozen": false, "read_only": false} evec1 = transmat[0] #evecs[:, 0] evec2 = transmat[1] #evecs[:, 1] evec1 # + run_control={"frozen": false, "read_only": false} data_trans = np.array([[0.0, 0.0] for i in range(len(data))]) for i in range(len(data_centered)): data_trans[i] = np.dot(transmat, data_centered[i]) data_trans # + run_control={"frozen": false, "read_only": false} plt.figure(figsize=(10, 10)) 
plt.scatter(data_centered.T[0], data_centered.T[1]) plt.plot([0, evec1[0]], [0, evec1[1]]) plt.plot([0, evec2[0]], [0, evec2[1]]) plt.scatter(data_trans.T[0], data_trans.T[1]) plt.grid() plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 1 C # + run_control={"frozen": false, "read_only": false} transmat_inv = np.linalg.inv(transmat) data_trans_inv = np.array([[0.0, 0.0] for i in range(len(data))]) for i in range(len(data)): data_trans_inv[i] = np.dot(transmat_inv, data_trans[i]) data_trans_inv data_trans_PC1 = np.copy(data_trans) data_trans_PC1[:, 1] = 0 data_trans_inv_PC1 = np.array([[0.0, 0.0] for i in range(len(data))]) for i in range(len(data)): data_trans_inv_PC1[i] = np.dot(transmat_inv, data_trans_PC1[i]) data_trans_PC2 = np.copy(data_trans) data_trans_PC2[:, 0] = 0 data_trans_inv_PC2 = np.array([[0.0, 0.0] for i in range(len(data))]) for i in range(len(data)): data_trans_inv_PC2[i] = np.dot(transmat_inv, data_trans_PC2[i]) data_trans_PC2 # + run_control={"frozen": false, "read_only": false} plt.figure(figsize=(10, 10)) plt.scatter(data_centered.T[0], data_centered.T[1]) plt.scatter(data_trans_inv_PC1.T[0], data_trans_inv_PC1.T[1]) plt.scatter(data_trans_inv_PC2.T[0], data_trans_inv_PC2.T[1]) red_patch = mpatches.Patch(color='red', label='full data') blue_patch = mpatches.Patch(color='blue', label='only PC1') green_patch = mpatches.Patch(color='green', label='only PC2') plt.legend(handles=[red_patch, blue_patch, green_patch]) plt.grid() plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 2 A # + run_control={"frozen": false, "read_only": false} data3d = np.loadtxt('pca-data-3d.txt', delimiter=',', skiprows=1) data3d.shape # 3 axis 500 points # + run_control={"frozen": false, "read_only": false} mean3d = np.mean(data3d, 0) data3d_centered = np.subtract(data3d, mean3d) mean3d # + run_control={"frozen": false, "read_only": false} fig, axs = plt.subplots(3, 3, figsize=(9, 9)) for i in range(3): for j in range(3): axs[i][j].scatter(data3d_centered[:, i], data3d_centered[:, j]) plt.tight_layout(1) axs[i][j].set_xlabel('col {}'.format(i+1)) axs[i][j].set_ylabel('col {}'.format(j+1)) fig.suptitle('Pairwise scatter plots of columns (x, y, z)', y=1.05, fontsize=16) plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 2 B # + run_control={"frozen": false, "read_only": false} covariance3d = np.cov(data3d_centered.T) evals3d, evecs3d = np.linalg.eig(covariance3d) transmat3d = evecs3d.T covariance3d transmat3d evals3d # => Z is PC1 // Y is PC2 // X is PC3 # + run_control={"frozen": false, "read_only": false} data3d_trans = np.array([[0.0, 0.0, 0.0] for i in range(len(data3d))]) for i in range(len(data3d_centered)): data3d_trans[i] = np.dot(transmat3d, data3d_centered[i]) fig, axs = plt.subplots(3, 3, figsize=(9, 9)) for i in range(3): for j in range(3): axs[i][j].scatter(data3d_trans[:, i], data3d_trans[:, j]) plt.tight_layout(1) axs[i][j].set_xlabel('col {}'.format(i+1)) axs[i][j].set_ylabel('col {}'.format(j+1)) fig.suptitle('Pairwise scatter plots of columns (PC1, PC2, PC3)', y=1.05, fontsize=16) plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 2 C # + run_control={"frozen": false, "read_only": false} transmat3d_inv = np.linalg.inv(transmat3d) data3d_trans_PC1 = np.copy(data3d_trans) data3d_trans_PC1[:, 0] = 0 data3d_trans_PC1[:, 1] = 0 data3d_trans_PC1_recov = np.array([[0.0, 0.0, 0.0] for i in range(len(data3d))]) for i in 
range(len(data3d)): data3d_trans_PC1_recov[i] = np.dot(transmat3d_inv, data3d_trans_PC1[i]) data3d_trans_PC12 = np.copy(data3d_trans) data3d_trans_PC12[:, 0] = 0 data3d_trans_PC12_recov = np.array([[0.0, 0.0, 0.0] for i in range(len(data3d))]) for i in range(len(data3d)): data3d_trans_PC12_recov[i] = np.dot(transmat3d_inv, data3d_trans_PC12[i]) data3d_trans_PC123_recov = np.array([[0.0, 0.0, 0.0] for i in range(len(data3d))]) for i in range(len(data3d)): data3d_trans_PC123_recov[i] = np.dot(transmat3d_inv, data3d_trans[i]) data3d_trans_PC12[0, :] # + run_control={"frozen": false, "read_only": false} plt.figure(figsize=(10, 10)) plt.scatter(data3d_trans_PC123_recov.T[0], data3d_trans_PC123_recov.T[1]) plt.scatter(data3d_trans_PC12_recov.T[0], data3d_trans_PC12_recov.T[1]) plt.scatter(data3d_trans_PC1_recov.T[0], data3d_trans_PC1_recov.T[1]) blue_patch = mpatches.Patch(color='blue', label='PC123') red_patch = mpatches.Patch(color='red', label='PC12') green_patch = mpatches.Patch(color='green', label='PC1') plt.legend(handles=[blue_patch, red_patch, green_patch]) plt.title('Scatter plot from x-y-layer') plt.grid() plt.show() # + run_control={"frozen": false, "read_only": false} fig = plt.figure(figsize=(11, 11)) ax = fig.add_subplot(111, projection='3d') ax.scatter(data3d_trans_PC123_recov[:, 0], data3d_trans_PC123_recov[:, 1], data3d_trans_PC123_recov[:, 2]) ax.scatter(data3d_trans_PC12_recov[:, 0], data3d_trans_PC12_recov[:, 1], data3d_trans_PC12_recov[:, 2]) ax.scatter(data3d_trans_PC1_recov[:, 0], data3d_trans_PC1_recov[:, 1], data3d_trans_PC1_recov[:, 2]) blue_patch = mpatches.Patch(color='blue', label='PC123') red_patch = mpatches.Patch(color='red', label='PC12') green_patch = mpatches.Patch(color='green', label='PC1') plt.legend(handles=[blue_patch, red_patch, green_patch]) plt.title('Recovered data points') plt.show() # + [markdown] run_control={"frozen": false, "read_only": false} # Using only the first PC is too little information to near or compress the data. Using the first two PCs similars the original data quite well. # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 3 A # + run_control={"frozen": false, "read_only": false} data = np.loadtxt('expDat.txt', delimiter=',', skiprows=1, usecols=range(1, 21)) data.shape, data # + run_control={"frozen": false, "read_only": false} data_centered = data - data.mean(axis=0) data_centered # + run_control={"frozen": false, "read_only": false} covariance = np.cov(data_centered.T) covariance.shape # + run_control={"frozen": false, "read_only": false} evals, evecs = np.linalg.eig(covariance) evecs.T # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 3 B # + run_control={"frozen": false, "read_only": false} # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 4 A # + run_control={"frozen": false, "read_only": false} from scipy.ndimage import imread import os # + run_control={"frozen": false, "read_only": false} n_patches = [] b_patches = [] for img_name in os.listdir('imgpca'): img = imread(os.path.join('imgpca', img_name)) for i in range(500): x = np.random.randint(img.shape[0] - 16) y = np.random.randint(img.shape[1] - 16) patch = img[x:x+16, y:y+16].flatten() if img_name.startswith('n'): n_patches.append(patch) elif img_name.startswith('b'): b_patches.append(patch) n_patches = np.array(n_patches) b_patches = np.array(b_patches) n_patches.shape, b_patches.shape # + run_control={"frozen": false, "read_only": false} # Show some nature patches. 
fig, axes = plt.subplots(2, 5, figsize=(15, 6)) for ax in axes.flatten(): plt.sca(ax) plt.imshow(n_patches[np.random.randint(len(n_patches))].reshape(16, 16), cmap='Greys', interpolation=None) plt.axis('off') # + run_control={"frozen": false, "read_only": false} # Show some building patches. fig, axes = plt.subplots(2, 5, figsize=(15, 6)) for ax in axes.flatten(): plt.sca(ax) plt.imshow(b_patches[np.random.randint(len(b_patches))].reshape(16, 16), cmap='Greys', interpolation=None) plt.axis('off') # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 4 B # + run_control={"frozen": false, "read_only": false} n_patches_centered = n_patches - n_patches.mean(axis=0) b_patches_centered = b_patches - b_patches.mean(axis=0) # + run_control={"frozen": false, "read_only": false} n_covariance = np.cov(n_patches_centered.T) b_covariance = np.cov(b_patches_centered.T) n_covariance.shape, b_covariance.shape # + run_control={"frozen": false, "read_only": false} n_evals, n_evecs = np.linalg.eig(n_covariance) b_evals, b_evecs = np.linalg.eig(b_covariance) n_evecs.T.shape, b_evecs.T.shape # + run_control={"frozen": false, "read_only": false} # Nature PCAs. fig, axes = plt.subplots(3, 4, figsize=(15, 10)) for i, ax in enumerate(axes.flatten()): plt.sca(ax) plt.imshow(n_evecs.T[i].reshape(16, 16), cmap='Greys', interpolation=None) plt.axis('off') # + run_control={"frozen": false, "read_only": false} # Building PCAs. fig, axes = plt.subplots(3, 4, figsize=(15, 10)) for i, ax in enumerate(axes.flatten()): plt.sca(ax) plt.imshow(b_evecs.T[i].reshape(16, 16), cmap='Greys', interpolation=None) plt.axis('off') # + [markdown] run_control={"frozen": false, "read_only": false} # The first few PCAs from building and nature images are similar, they represent basic shades and edges (first rows in the plots above). However, the PCAs from the second and third rows above seem different - for buildings, the lines are edgy and straight, while for nature images, they seem to have more natural shapes. # + [markdown] run_control={"frozen": false, "read_only": false} # ## Exercise 4 C # + run_control={"frozen": false, "read_only": false} plt.plot(n_evals[:100], '.', label='nature') plt.plot(b_evals[:100], '.', label='buildings') plt.ylabel('Eigenvalue') plt.xlabel('PC') plt.legend() plt.ylim(0, 80000) # + [markdown] run_control={"frozen": false, "read_only": false} # For simplicity, only the first 100 PCs are plotted and the first PC is not shown due to its high eigenvalue. For both image categories, one should keep around 20 PCs according to the Scree test. This represents a compression of 1 - (20/256) = 92 %. # + run_control={"frozen": false, "read_only": false} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PySpark # language: python # name: pysparkkernel # --- # ## MNIST training and DeepSpeed ZeRO # Maggy enables you to train with Microsoft's DeepSpeed ZeRO optimizer. Since DeepSpeed does not follow the common PyTorch programming model, Maggy is unable to provide full distribution transparency to the user. This means that if you want to use DeepSpeed for your training, you will have to make small changes to your code. In this notebook, we will show you what exactly you have to change in order to make DeepSpeed run with Maggy. from hops import hdfs import torch import torch.nn.functional as F # ### Define the model # First off, we have to define our model. 
Since DeepSpeed's ZeRO is meant to reduce the memory consumption of our model, we will use an unreasonably large CNN for this example. class CNN(torch.nn.Module): def __init__(self): super().__init__() self.l1 = torch.nn.Conv2d(1,1000,3) self.l2 = torch.nn.Conv2d(1000,3000,5) self.l3 = torch.nn.Conv2d(3000,3000,5) self.l4 = torch.nn.Linear(3000*18*18,10) def forward(self, x): x = F.relu(self.l1(x)) x = F.relu(self.l2(x)) x = F.relu(self.l3(x)) x = F.softmax(self.l4(x.flatten(start_dim=1)), dim=0) return x # ### Adapting the training function # There are a few minor changes that have to be done in order to train with DeepSpeed: # - There is no need for an optimizer anymore. You can configure your optimizer later in the DeepSpeed config. # - DeepSpeed's ZeRO _requires_ you to use FP16 training. Therefore, convert your data to half precision! # - The backward call is not executed on the loss, but on the model (`model.backward(loss)` instead of `loss.backward()`). # - The step call is not executed on the optimizer, but also on the model (`model.step()` instead of `optimizer.step()`). # - As we have no optimizer anymore, there is also no need to call `optimizer.zero_grad()`. # You do not have to worry about the implementation of these calls, Maggy configures your model at runtime to act as a DeepSpeed engine. def train_fn(module, hparams, train_set, test_set): import time import torch from maggy.core.patching import MaggyPetastormDataLoader model = module(**hparams) batch_size = 4 lr_base = 0.1 * batch_size/256 # Parameters as in https://arxiv.org/pdf/1706.02677.pdf loss_criterion = torch.nn.CrossEntropyLoss() train_loader = MaggyPetastormDataLoader(train_set, batch_size=batch_size) model.train() for idx, data in enumerate(train_loader): img, label = data["image"].half(), data["label"].half() prediction = model(img) loss = loss_criterion(prediction, label.long()) model.backward(loss) m1 = torch.cuda.max_memory_allocated(0) model.step() m2 = torch.cuda.max_memory_allocated(0) print("Optimizer pre: {}MB\n Optimizer post: {}MB".format(m1//1e6,m2//1e6)) print(f"Finished batch {idx}") return float(1) train_ds = hdfs.project_path() + "/DataSets/MNIST/PetastormMNIST/train_set" test_ds = hdfs.project_path() + "/DataSets/MNIST/PetastormMNIST/test_set" print(hdfs.exists(train_ds), hdfs.exists(test_ds)) # ### Configuring DeepSpeed # In order to use DeepSpeed's ZeRO, the `deepspeed` backend has to be chosen. This backend also requires its own config. You can read a full specification of the possible settings [here](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training). # + from maggy import experiment from maggy.experiment_config import TorchDistributedConfig ds_config = {"train_micro_batch_size_per_gpu": 1, "gradient_accumulation_steps": 1, "optimizer": {"type": "Adam", "params": {"lr": 0.1}}, "fp16": {"enabled": True}, "zero_optimization": {"stage": 2}, } config = TorchDistributedConfig(name='DS_ZeRO', module=CNN, train_set=train_ds, test_set=test_ds, backend="deepspeed", deepspeed_config=ds_config) # - # ### Starting the training # You can now launch training with DS ZeRO. Note that the overhead of DeepSpeed is considerably larger than PyTorch's build in sharding, albeit also more efficient for a larger number of GPUs. DS will also jit compile components on the first run. If you want to compare memory efficiency with the default training, you can rewrite this notebook to work with standard PyTorch training. 
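# For comparison, here is a minimal sketch (not part of the Maggy/DeepSpeed tutorial itself) of what
# the same training step would look like with plain PyTorch, i.e. without the DeepSpeed engine. The
# function name and arguments are purely illustrative; nothing below is executed by this notebook.
# +
def plain_pytorch_step(model, optimizer, loss_criterion, img, label):
    """Contrast with the model.backward(loss) / model.step() calls used in train_fn above."""
    optimizer.zero_grad()                     # gradients must be reset manually again
    loss = loss_criterion(model(img), label)  # forward pass in full precision (no .half())
    loss.backward()                           # backward on the loss, not on the model
    optimizer.step()                          # step on the optimizer, not on the model
    return loss.item()
# -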
result = experiment.lagom(train_fn, config) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # + #使用字典创建一个DataFrame。 d = {'国家': ['中国', '美国', '日本'], '人口': [14.22, 3.18, 1.29]} df = pd.DataFrame(d) df # + #利用索引,添加一行数据。 df.loc[3] = {'国家': '俄罗斯', '人口': 1.4} df # + #创建另一个DataFrame。 df2 = pd.DataFrame({'国家': ['英国', '德国'], '人口': [0.66, 0.82]}) #利用DataFrame的内建函数append,追加一个DataFrame(多条数据)。 new_df = df.append(df2) new_df # + #使用pandas的concat功能,拼接两个DataFrame,同时重置索引。 new_df = pd.concat([df, df2], ignore_index=True) new_df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.2 64-bit (''webscraping_and_data_labelling-mY7GX9EY'': # pipenv)' # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # ## Parsing HTML with BeautifulSoup # - # BeautifulSoup is a Python library for extracting data from HTML or XML files. Refer to the [documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) for more information. # + # pipenv shell # + pycharm={"name": "#%%\n"} import requests from bs4 import BeautifulSoup # + pycharm={"name": "#%%\n"} url = "https://www.bbc.com/pidgin/topics/c2dwqd1zr92t" # + pycharm={"name": "#%%\n"} response = requests.get(url) print(dir(response)) # dir allows us to check the methods available for an object # + pycharm={"name": "#%%\n"} page_html = response.content print(page_html) # + pycharm={"name": "#%%\n"} print(type(page_html)) # Without bs4, we obtain a string object which is tedious to interact with # + pycharm={"name": "#%%\n"} page_soup = BeautifulSoup(page_html, "html.parser") print("type: ",type(page_soup)) print(f"page soup: {page_soup}") # - # BeautifulSoup allows us to use a variety of parsers. You can check this out :) # + pycharm={"name": "#%%\n"} print(len(page_soup)) # we get a rich list of methods available for our BeautifulSoup object # - # **`TODO`**: Complete the `get_page_soup` function in `scraper.py` # ## Locating elements # ### 1. Locating elements by tag # + pycharm={"name": "#%%\n"} all_urls = page_soup.findAll("a") # + pycharm={"name": "#%%\n"} print(len(all_urls)) # + pycharm={"name": "#%%\n"} print(type(all_urls[0])) # + pycharm={"name": "#%%\n"} print(dir(all_urls[0])) # - all_urls[0].get("href") # hrefs may be absolute all_urls[12].get("href") # hrefs may also be relative all_urls # ### 2. Locating elements by tag & class # To obtain the class of an element ona web page, right click the element and select `Inspect`. # # ![InspectElement](static/inspect_element.png) headline = page_soup.find( "h1", attrs={"class": "bbc-13dm3d0 e1yj3cbb0"} ) # ### How many pages of article are there in a category? # **`TODO`**: Go to any of the category pages of BBC Pidgin. Can you retrieve the `class` of the `span` element that contains the number of total pages as text? In the picture below, for example, retrieve the `class` of the `span` that contains `100` as text. # # ![PageNumber](static/PageNumber.png) # ## What is a valid article link? # Website may have a pattern for successive pages in a category. This eases our work. Check through a number of articles. Do you notice a pattern to the article URLS? 
for url in all_urls: href = url.get("href") if (href.startswith("/pidgin/tori") or \ href.startswith("pidgin/world") or \ href.startswith("pidgin/sport")) and href[-1].isdigit(): print(href) # valid_url = "https://www.bbc.com" + href base_url = "https://www.bbc.com" article = requests.get(# insert a valid URL here) # # **`TODO`**: Complete the `get_valid_urls` and `get_urls` functions in `scraper.py` # ## Getting article text response = requests.get("https://www.bbc.com/pidgin/sport-58110814") page_html = response.text page_soup = # TODO: Create a BeautifulSoup object from `page_html` # Text on websites are frequently embedded within one or more `div` elements. Open up the article we requested above in your browser.`Inspect` any paragraph from the article. What do you notice about the `div` elements? What is the value of their `class` attribute? story_div = page_soup.find_all( "div", attrs={"class": "bbc-19j92fr e57qer20"} ) story_div # Inside `div` elements, text is usually written in `p` elements. BeautifulSoup allows us to search the `div` element we retrieved above for elements it contains. p_elements = [div.findAll("p") for div in story_div] p_elements p_elements[0] p_elements[0][0] p_elements[0][0].text # **`TODO`**: Complete the `get_article_data` and `scrape` functions in `scraper.py`. # **`TODO`**: Complete the `get_parser` function in `scraper.py` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # description: IPython kernel implementation for Yandex DataSphere # display_name: Yandex DataSphere Kernel # name: python3 # resources: {} # spec: # argv: # - /bin/true # codemirror_mode: python # display_name: Yandex DataSphere Kernel # env: {} # help_links: [] # language: python # --- # + [markdown] cellId="b87ttgali1esvwzha90q" # ## Preparation # + cellId="uefubi2yh1zpmz5s8oe2g" # %cd .. # + cellId="savnq6hd4kacq50vs1lzrd" # %pwd # + cellId="494mqu5cn1te05bqeqxjt" # %pip install -r requirements.txt # + cellId="w8rww9x8z6opnpa50k65ds" # !cd .. 
&& git clone --recursive https://github.com/parlance/ctcdecode.git # + cellId="6fn1vrlphj1cqdx4pwipr" # %pip install ../../ctcdecode/ # + cellId="fqqst6f48cp1wynbl1bmny" import wandb wandb.login() # + [markdown] cellId="epkjvek1iye4rc0t720j" # ## Train # + cellId="nqqs0s32edshusac1sjr9r" jupyter={"outputs_hidden": true} # #!g1.1 # !python3 train.py -c hw_asr/configs/quartznet_main.json # + cellId="ydnlvseujaims4ho6lvpyo" jupyter={"outputs_hidden": true} # #!g1.1 # !python3 train.py -c hw_asr/configs/quartznet_main_b64.json -r saved/models/quartznet_main/1026_150852.382262/checkpoint-epoch10.pth # + cellId="jkusvzapwdju9sdjfg057" # #!g1.1 wandb.finish(0) # + cellId="x1byokrcp7c0wl3j57ow9i" jupyter={"outputs_hidden": true} # #!g1.1 # !python3 train.py -c hw_asr/configs/quartznet_main_360.json -r saved/models/quartznet_main/1026_192642.019336/checkpoint-epoch50.pth # + cellId="1xx0mlvgsjtq92yprni6" # #!g1.1 # !wget -q -O ./lm.arpa.gz http://www.openslr.org/resources/11/3-gram.pruned.1e-7.arpa.gz # !gunzip ./lm.arpa.gz # + cellId="qjbxy3d6fzjgis6xi7ys4o" # #!g1.1 # !python3 train.py -c hw_asr/configs/quartznet_main_augs.json -r saved/models/quartznet_main/1027_110901.083316/checkpoint-epoch60.pth # + cellId="7fsjuw3l4smr1yp5yljnar" # #!g1.1 wandb.finish(1) # + cellId="r2d1yz2mjf99y6seepdsk" # #!g1.1 # !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=14tFL0ylwPHDFWZDqlELCsSKl-U_ze6cC' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=14tFL0ylwPHDFWZDqlELCsSKl-U_ze6cC" -O default_test/checkpoint.pth && rm -rf /tmp/cookies.txt # + cellId="dbp1dwhodofadmnghu94" # #!g1.1 # !python3 ./test.py -c default_test/config.json -r default_test/checkpoint.pth # + cellId="tyq9uuavlqqzywfhio097o" # #!g1.1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd dataset=pd.read_csv("D:\\Machine learning\\BVM\\Day-1_16092018\\DATASET\\Data.csv") dataset working=dataset.iloc[:,1:3] working plt.plot(working.iloc[:,0],working.iloc[:,1]) plt.title("Graph of Age Vs Salary") plt.xlabel("Age") plt.ylabel("Salary") plt.show() working.iloc[:,1] plt.bar(working.iloc[:,0],working.iloc[:,1],align='edge',width=2,alpha=0.3) plt.xlabel("Age") plt.ylabel("Salary") plt.show() working x=working.copy() # # Data Preprocessing import numpy as np import pandas as pd import matplotlib.pyplot as plt dataset=pd.read_csv('D:\Machine learning\BVM\Day-1_16092018\DATASET\\Data.csv') dataset X=dataset.iloc[:,0:3].values X # ### We can dalete row with nan data or we replace it with zero,mean,median or more frequent value y=dataset.iloc[:,-1].values y from sklearn.preprocessing import Imputer imputer=Imputer() imputer=imputer.fit(X[:1,1:3]) X[:,1:3]=imputer.transform(X[:,1:3]) X # # Encode Catagorical Data from sklearn.preprocessing import LabelEncoder,OneHotEncoder labelencoder_X=LabelEncoder() X[:,0]=labelencoder_X.fit_transform(X[:,0]) X one=OneHotEncoder(categorical_features=[0]) X=one.fit_transform(X).toarray() X labelencoder_y=LabelEncoder() y=labelencoder_y.fit_transform(y) y from sklearn.cross_validation import train_test_split x_train,x_test,y_train,y_test=train_test_split(X,y,test_size=0.4,random_state=1) x_train 
x_test X from sklearn.preprocessing import StandardScaler sc_x=StandardScaler().fit(x_train) x_train=sc_x.transform(x_train) x_train # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from autumn.tools.project import get_project from matplotlib import pyplot from autumn.tools.plots.utils import REF_DATE from autumn.tools.calibration.targets import get_target_series import pandas as pd from autumn.tools.utils.pretty import pretty_print project = get_project("tuberculosis", "marshall-islands") model = project.run_baseline_model(project.param_set.baseline) derived_df = model.get_derived_outputs_df() output = "prevalence_infectiousXlocation_majuro" target = (2018, 1366) # + # pretty_print(custom_params) # - fig = pyplot.figure(figsize=(12, 8)) pyplot.style.use("ggplot") axis = fig.add_subplot() axis = derived_df[output].plot() axis.scatter(target[0], target[1], color="black") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # > Below is a template for designing a dpv2 sample notebook. Text in black is a proposed template for preparing the notebook. You can basically keep it as is, or adapt the wording to your needs. Text in blockquote are guidelines or questions to help you fill the template. # # Title # # > pick your own title, try to use some of the words from the learning objectives (see below) # # # > In **requirements** below, try to answer the following questions: # >- What is the prior knowledge or know-how required to enter this learning unit? # >- What background do we expected participants to have ? # >- What tools do participants need to have access to ? (if there's a required subscription, give instructions on how to create/use, if need to install something, link to some tutorial) # >- Do they need to have completed some other course before? # # **Requirements** - In order to benefit from this tutorial, you will need to: # - ... # - ... # - ... # # > In **Learning Objectives** below, what should participants expects to know how to do after going through that learning unit? # > Here's a few recommendation for writing learning objectives: # > - imagine the right objective from the user's perspective, not the developer (what should they expect to learn, not what you want to teach) # > - pick your verbs from [the Bloom's Taxonomy](https://lccfestivaloflearning2012.files.wordpress.com/2012/10/support-document-13-blooms-taxonomy-teacher-planning-kit.jpg) # # **Learning Objectives** - By the end of this tutorial, you should be able to: # - ... # - ... # # > Under **Motivations**, write a short paragraph here, answering some of those questions: # > - Why is that learning unit important? # > - Why should it matter to the participants? # > - Why should it matter considering their background? # # **Motivations** - Some text here. # # 1. First section of the notebook # # > Each section, subsection, cell, should be introduced with a short paragraph (or more) explaining what's going to happen and why we need to achieve this (goals + motivation). # # ## 1.1. Sub section of the notebook # # > Then follows steps, or notebook cells. If using steps, use numbering. If using notebook cells, use a short sentence to explain what the user is supposed to look at, or modify, and what they should expect the result to be. 
# + from azureml.core.workspace import Workspace from azureml.core.workspace import UserErrorException import logging import traceback try: ws = Workspace.from_config() print( "Connected to Workspace", "-- name: " + ws.name, "-- Azure region: " + ws.location, "-- Resource group: " + ws.resource_group, sep="\n", ) except UserErrorException: logging.critical( """Could not connect using the local config. - if you're running locally, use the notebook to Connect to your workspace - if you're running from AzureML... this should not happen {}""".format( traceback.format_exc() ) ) # - # # Next Steps # # > In this section, if tutorial is in a sequence, add links to the next notebook. Also, add any relevant links (not too many) on the same topic of this tutorial. # # You can now go the next tutorial: # - Title of the next tutorial with link # # Also, on the same topic: # - link to notebook / docs # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import matplotlib.pyplot as plt import matplotlib import datetime from mpl_toolkits.mplot3d import Axes3D from plyfile import PlyData, PlyElement from sklearn.decomposition import PCA # + # Every 100 data samples, we save # 1. If things run too # slow, try increasing this number. If things run too fast, # try decreasing it... =) reduce_factor = 100 # Look pretty... matplotlib.style.use('ggplot') # Load up the scanned armadillo plyfile = PlyData.read('Datasets/stanford_armadillo.ply') armadillo = pd.DataFrame({ 'x':plyfile['vertex']['z'][::reduce_factor], 'y':plyfile['vertex']['x'][::reduce_factor], 'z':plyfile['vertex']['y'][::reduce_factor] }) # + # Render the Original Armadillo fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_title('Armadillo 3D') ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') ax.scatter(armadillo.x, armadillo.y, armadillo.z, c='orange', marker='.', alpha=0.75) plt.show() # - fpca = PCA(n_components= 2, svd_solver = 'full') fpca.fit(armadillo) # + def do_PCA(armadillo): # TODO: Write code to import the libraries required for PCA. # Then, train your PCA on the armadillo dataframe. Finally, # drop one dimension (reduce it down to 2D) and project the # armadillo down to the 2D principal component feature space. fpca = PCA(n_components= 2, svd_solver = 'full') fpca.fit(armadillo) fullproject = fpca.transform(armadillo) # NOTE: Be sure to RETURN your projected armadillo! # (This projection is actually stored in a NumPy NDArray and # not a Pandas dataframe, which is something Pandas does for # you automatically. =) return fullproject # Time the execution of PCA 5000x # PCA is ran 5000x in order to help decrease the potential of rogue # processes altering the speed of execution. t1 = datetime.datetime.now() for i in range(5000): pca = do_PCA(armadillo) time_delta = datetime.datetime.now() - t1 print(time_delta) # Render the newly transformed PCA armadillo! if not pca is None: fig = plt.figure() ax = fig.add_subplot(111) ax.set_title('PCA, build time: ' + str(time_delta)) ax.scatter(pca[:,0], pca[:,1], c='blue', marker='.', alpha=0.75) plt.show() # + def do_RandomizedPCA(armadillo): rpca = PCA(n_components = 2, svd_solver = 'randomized', random_state=1) # TODO: Write code to import the libraries required for # RandomizedPCA. Then, train your RandomizedPCA on the armadillo # dataframe. 
Finally, drop one dimension (reduce it down to 2D) # and project the armadillo down to the 2D principal component # feature space. rpca.fit(armadillo) # NOTE: Be sure to RETURN your projected armadillo! # (This projection is actually stored in a NumPy NDArray and # not a Pandas dataframe, which is something Pandas does for # you automatically. =) # # NOTE: SKLearn deprecated the RandomizedPCA method, but still # has instructions on how to use randomized (truncated) method # for the SVD solver. To find out how to use it, set `svd_solver` # to 'randomized' and check out the full docs here # # http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html # # Deprecated Method: http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.RandomizedPCA.html rproject = rpca.transform(armadillo) return rproject # Time the execution of rPCA 5000x t1 = datetime.datetime.now() for i in range(5000): rpca = do_RandomizedPCA(armadillo) time_delta = datetime.datetime.now() - t1 print(time_delta) # Render the newly transformed RandomizedPCA armadillo! if not rpca is None: fig = plt.figure() ax = fig.add_subplot(111) ax.set_title('RandomizedPCA, build time: ' + str(time_delta)) ax.scatter(rpca[:,0], rpca[:,1], c='red', marker='.', alpha=0.75) plt.show() # - # + print(__doc__) # Code source: # License: BSD 3 clause import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn import decomposition from sklearn import datasets np.random.seed(5) centers = [[1, 1], [-1, -1], [1, -1]] iris = datasets.load_iris() # - X = iris.data y = iris.target plyfile.data iris.data fig = plt.figure(1, figsize=(4, 3)) plt.clf() ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134) # + plt.cla() pca = decomposition.PCA(n_components=3) pca.fit(X) X = pca.transform(X) for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]: ax.text3D(X[y == label, 0].mean(), X[y == label, 1].mean() + 1.5, X[y == label, 2].mean(), name, horizontalalignment='center', bbox=dict(alpha=.5, edgecolor='w', facecolor='w')) # Reorder the labels to have colors matching the cluster results y = np.choose(y, [1, 2, 0]).astype(np.float) ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.spectral) ax.w_xaxis.set_ticklabels([]) ax.w_yaxis.set_ticklabels([]) ax.w_zaxis.set_ticklabels([]) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## LBPH Evaluation # ### Methodology # 1. Normalize each image from the dataset by: # * Pre-processing - histogram equalization # * Post-processing - 2D facial alignment # 2. Randomise folds containing train-test splits for each subject # 3. Train a classifier on each fold's train split # 4. Evaluate the trained model in step 4 by it's corresponding test split # 5. Repeat step 2, 3, 4 by varying the number of training samples per subject # 6. Report the average value of **Precision@1** across each fold. 
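# Precision@1 here is simply the fraction of test images whose top-ranked prediction matches the
# true label. Below is a minimal, illustrative sketch of that metric; it assumes `predictions` is a
# list of (label, confidence) tuples aligned with `y_test`, as built further down, and the
# recognizer's own evaluate() method presumably computes something equivalent.
def precision_at_1(predictions, y_test):
    # count test samples whose top-1 predicted label equals the true label
    correct = sum(1 for (pred_label, _), true_label in zip(predictions, y_test)
                  if pred_label == true_label)
    return float(correct) / len(y_test)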
from __future__ import division from multiprocessing import Process, Queue import numpy as np from Dataset import Dataset from FaceRecognizer import FaceRecognizer def predict_one(idx, recognizer_obj, image, queue): queue.put((idx, recognizer_obj.model.predict(image))) # ### ATT Dataset dataset_path = "/media/ankurrc/new_volume/softura/facerec/datasets/norm_standard_att" folds = 3 training_samples = [2, 5, 8] results = [] # #### Train the recognizer for the different folds and evaluate recognizer_model = FaceRecognizer() dataset = Dataset(dataset_path) for training_sample in training_samples: print "Training on", training_sample, "samples per subject." for fold in range(1, folds+1): output = Queue() X_train, y_train = dataset.load_data(is_train=True, fold=fold, num_train=training_sample) print "Training recognizer (", len(X_train), "samples and", len(np.unique(y_train)), "subjects)..." recognizer_model.train(X_train, y_train) print "completed." X_test, y_test = dataset.load_data(is_train=False, fold=fold, num_train=training_sample) print "Predicting on (", len(X_test), "samples)..." processes = [Process(target=predict_one, args=(idx, recognizer_model, image, output)) for idx, image in enumerate(X_test)] for p in processes: p.start() predictions = [output.get() for p in processes] for p in processes: p.join() print "Done" predictions = [(x[1][0], x[1][1]) for x in sorted(predictions, key=lambda x: x[0])] recognizer_model.evaluate(predictions, y_test) print "-"*120 # ### Cyber Extruder Ultimate Dataset dataset_path = "/media/ankurrc/new_volume/softura/facerec/datasets/norm_cyber_extruder_ultimate" recognizer_model = FaceRecognizer() dataset = Dataset(dataset_path) for training_sample in training_samples: print "Training on", training_sample, "samples per subject." for fold in range(1, folds+1): X_train, y_train = dataset.load_data(is_train=True, fold=fold, num_train=training_sample) print "Training recognizer (", len(X_train), "samples and", len(np.unique(y_train)), "subjects)..." recognizer_model.train(X_train, y_train) print "completed." X_test, y_test = dataset.load_data(is_train=False, fold=fold, num_train=training_sample) print "Predicting on (", len(X_test), "samples)..." predictions = recognizer_model.predict(images=X_test) print "Done" recognizer_model.evaluate(predictions, y_test) print "-"*120 # ## Results result_att = np.array([[87.89, 85.98, 87.89], [95.87, 97.93, 96.39], [98.6, 100.0, 100.0]]) result_ceu = np.array([[24.16, 23.33, 24.18], [37.37, 37.54, 38.48], [51.81 ,47.27, 51.13]]) samples_subject = [2, 5, 8] att_avg = np.mean(axis=1, a=result_att) ceu_avg = np.mean(axis=1, a=result_ceu) import matplotlib.pyplot as plt plt.xlabel('Number of training samples') plt.ylabel('Precision') plt.plot(samples_subject, att_avg, label="ATT") plt.plot(samples_subject, ceu_avg, label="CEU") plt.legend(loc='upper right') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: cssp37 # language: python # name: cssp37 # --- # # # Tutorial 1: Accessing and exploring CSSP China 20CR datasets # # # ## Learning Objectives: # # 1. How to load data into Xarrays format # 2. How to convert the data xarrays into iris cube format # 3. How to perform basic cube operations # ## Contents # # 1. [Use Xarray to access monthly data](#access_zarr) # 2. [Retrieve single (or list of) variables](#get_vars) # 3. [Convert datasets to iris cube](#to_iris) # 4. 
[Explore cube attributes and coordinates](#explore_iris) # 5. [Exercises](#exercise) #
# Prerequisites
# - Basic programming skills in Python
# - Familiarity with the Python libraries NumPy and Matplotlib
# - Basic understanding of climate data
#
    # ___ # ## 1. Use Xarray to access monthly data # ### 1.1 Import libraries. # Import the necessary libraries. Current datasets are in zarr format, we need zarr and xarray libraries to access the data import numpy as np import xarray as xr import zarr from scripts.xarray_iris_coord_system import XarrayIrisCoordSystem as xics xi = xics() xr.set_options(display_style='text') # Work around for AML bug that won't display HTML output. # ### 1.2 Set up authentication for the Azure blob store # # The data for this course is held online in an Azure Blob Storage Service. To access this we use a SAS (shared access signature). You should have been given the credentials for this service before the course, but if not please ask your instructor. We use the getpass module here to avoid putting the token into the public domain. Run the cell below and in the box enter your SAS and press return. This will store the password in the variable SAS. import getpass # SAS WITHOUT leading '?' SAS = getpass.getpass() # We now use the Zarr library to connect to this storage. This is a little like opening a file on a local file system but works without downloading the data. This makes use of the Azure Blob Storage service. The zarr.ABStore method returns a zarr.storage.ABSStore object which we can now use to access the Zarr data in the same way we would use a local file. If you have a Zarr file on a local file system you could skip this step and instead just use the path to the Zarr data below when opening the dataset. store = zarr.ABSStore(container='metoffice-20cr-ds', prefix='monthly/', account_name="metdatasa", blob_service_kwargs={"sas_token":SAS}) type(store) # ### 1.3 Read monthly data # A Dataset consists of coordinates and data variables. Let's use the xarray's **open_zarr()** method to read all our zarr data into a dataset object and display it's metadata # use the open_zarr() method to read in the whole dataset metadata dataset = xr.open_zarr(store) # print out the metadata dataset #
# Note: The dataset lists coordinates and data variables.
# Note: If you get an error "resource not found", you have probably entered your SAS incorrectly. Please check and try again. # #
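# Before pulling out individual variables, it can be handy to glance at the dataset's dimensions and
# coordinates as well. A small illustrative cell (`dataset` is the object returned by xr.open_zarr above):
# +
# dimension names and their sizes
print(dataset.dims)
# coordinate variables attached to the dataset
print(dataset.coords)
# -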
    # # We can also access and print list of all the variables in our dataset # display all the variables in our dataset dataset.data_vars # --- # ## 2. Retrieve single (or list of) variables # ### 2.1 Read mean air temperature at 2 m # Access and print just a single variable i.e minumum air temperature at 2m # #
# Note: The DataArrays in our dataset can be accessed either as attributes or indexed by name #
    # Access the variable by indexing it with its name t2m_mean = dataset['air_temperature_mean'] # print the metadata t2m_mean # Access the variable like an attribute t2m_mean = dataset.air_temperature_mean # print the metadata t2m_mean # ### 2.2 Read list of variables # We can also create a smaller dataset containing a subset of our variables # + # creating a list containing a subset of our variables varlist = ['relative_humidity_mean', 'relative_humidity_at_pressure_mean', 'specific_humidity', 'surface_temperature' ] # extracting the list of variables from dataset mini_ds = dataset[varlist] # print the metadata mini_ds # - #
# Task:
# - Access "cloud_area_fraction" using both the index and the attribute method in the cell below and save it in a variable named **caf**
# - Create a dataset **pres_ds** containing all the pressure variables (hint: use a for loop)
#
    # + # Retrieve "cloud_area_fraction" # write your code here ... # + # Retrieve all the pressure variables # write your code here ... # - # --- # ## 3. Convert datasets to iris cube # ### 3.1 Convert a variable to an Iris Cube # We now convert the minimum air temperature variable that we accessed in section 2.1 into iris cube. This can be done simply using the method **DataArray.to_iris()**. # # Use the method to_iris() to convert the xarray data array into an iris cube cube_t2m_mean = t2m_mean.to_iris() cube_t2m_mean # ### Task 3.2 Convert whole Dataset to an Iris Cubelist # Instead of converting all variables one by one into iris cube one by one, we can convert the whole dataset (or a subset of dataset) into an iris cubelist #
# Note: This is not quite as simple as for a single variable above, but it is straightforward with the dataset.apply() method; it will, of course, take a bit longer to complete! #
    # first import the Iris library import iris # + # create an empty list to hold the iris cubes cubelist = iris.cube.CubeList([]) # use the DataSet.apply() to convert the dataset to Iris Cublelist # the apply method loops over each variable in the dataset and runs the # supplied function on it # the variable da is a single variable from the dataset. # this variable is converted to a cube and appended to the empty # cubelist we made above dataset.apply(lambda da: cubelist.append(xi.to_iris(da))) # print out the cubelist cubelist # - #
# Note: By clicking on any variable above, you can see its dimension coordinates and metadata #
#
# Task:
# - Convert the caf variable into an iris cube **caf_cube**
# - Create a cubelist containing the pressure variables only
# - Can you spot the difference between a cube and a cubelist?
#
    # + ## convert clf into iris cube # write your code here ... # + ## convert pressure dataset into iris cube list # write your code here ... # - # ___ # ## 4. Explore cube attributes and coordinates # ### 4.1 Accessing cube from cubelist # Now that we have our variables in cubelist we can extract any varaible using the variable name. For instance the following code indices for **precipitation_flux** variable. # lets load and print the Precipitation Flux variable precipitation_cube = cubelist.extract_strict('precipitation_flux') precipitation_cube #
# Note: We can see that we have time, grid_latitude and grid_longitude dimensions, and a cell method of mean: time (1 hour), which means that the cube contains monthly mean Precipitation Flux data. #
    # # ### 4.2 Cube attributes # We can explore the cube information further # we can print its shape precipitation_cube.shape # we can print its dimensions precipitation_cube.ndim # we can print all of the data values (takes a bit of time as it is a large dataset!) precipitation_cube.data # Or we can take a slice of the data using indexing precipitation_cube.data[10,1,:] # We can also print the maximum, minimum and mean value in data print('Maximum value: ', precipitation_cube.data.max()) print('Minimum value: ', precipitation_cube.data.min()) print('Mean value: ', precipitation_cube.data.mean()) # we can print cube's name precipitation_cube.name() # we can print the unit of data precipitation_cube.units # we can also print cube's general attributes precipitation_cube.attributes # ### 4.3 Rename the cube # Rename the precipitation_flux cube #
# Note: The name, standard_name, long_name and, to an extent, var_name are all attributes that describe the phenomenon the cube represents. # # standard_name is restricted to be a CF standard name (see the CF standard name table). # # If there is no suitable CF standard name, cube.standard_name is set to None and the long_name is used instead. # long_name is less restrictive and can be set to any string. #
    print(precipitation_cube.standard_name) print(precipitation_cube.long_name) print(precipitation_cube.var_name) print(precipitation_cube.name()) # changing the cube name to 'pflx' using "rename" method precipitation_cube.rename("pflx") print(precipitation_cube.standard_name) print(precipitation_cube.long_name) print(precipitation_cube.var_name) print(precipitation_cube.name()) # We see that standard_name and var_name are not set to be a non CF standard name, they are changed to None and long_name is renamed as pflx instead. The cube.name() method first tries standard_name, then ‘long_name’, then ‘var_name’, then the STASH attribute before falling back to the value of default (which itself defaults to ‘unknown’). # # We can also rename the specific name of the cube. Suppose if we only want to change standard_name. precipitation_cube.standard_name = 'precipitation_flux' print(precipitation_cube.standard_name) print(precipitation_cube.long_name) print(precipitation_cube.var_name) print(precipitation_cube.name()) # Similarly, we can change long_name, var_name, and name without using rename method # ### 4.3 Change the cube units # Change precipitation_cube units from kg m-2 s-1 to kg m-2 day-1 #
# Note: The units attribute on a cube tells us the units of the numbers held in the data array. To convert to 'kg m-2 day-1', we could just multiply the raw data by 86400 (the number of seconds in a day), but a clearer way is to use the convert_units() method with the name of the units we want to convert the data into. It will automatically update the data array. #
    # inspect the current unit and maximum data value print(precipitation_cube.units) print(precipitation_cube.data.max()) # convert the units to 'mm day-1' using convert_units method precipitation_cube.convert_units('kg m-2 day-1') # inspect the current unit and maximum data value after the conversion print(precipitation_cube.units) print(precipitation_cube.data.max()) # ### 4.4 Add or remove the attributes # In section 4.2 we see how to access the cube attributes. In this section we will try to add or remove the attributes # # Let's try to add new attribute to the precipitation_flux. # We want to keep the information of original units of the cube. Best way is to add this information in the attribute. # Define the new attribute as a key value pair and we can add the attribute using **update** method. # defining new attribute new_attr = {'original_units':'kg m-2 s-1'} # List the attibutes precipitation_cube.attributes # + # add new attribute using .update() method precipitation_cube.attributes.update(new_attr) # now printing the attributes list to see if new attribute has updated precipitation_cube.attributes # - # So, we got 'original_units' in attributes list. # # We can also delete any specific attribute. For example, in our precipitation_cube attributes list, we do not need 'source' and we can think of deleting it. del precipitation_cube.attributes['source'] precipitation_cube.attributes # ### 4.5 Accessing cube coordinates # Access cube's coordinates and explore coordinates attribute #
# Note:
# - Cubes need coordinate information to help us describe the underlying phenomenon. Typically a cube's coordinates are accessed with the coords or coord methods. The latter must return exactly one coordinate for the given parameter filters, whereas the former returns a list of matching coordinates.
# - The coordinate interface is very similar to that of a cube. The attributes that exist on both cubes and coordinates are: standard_name, long_name, var_name, units, attributes and shape.
# - A coordinate does not have data; instead it has points and bounds (bounds may be None), so we can access the actual point values (see the short example below).
# #
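# As a short example of the points above (using the precipitation_cube from section 4.1):
# +
# coords() returns a list of matching coordinates, coord() returns exactly one coordinate
lat_coord_list = precipitation_cube.coords('grid_latitude')  # a list, here with a single entry
lat_coord = precipitation_cube.coord('grid_latitude')        # the coordinate object itself
print(type(lat_coord_list), type(lat_coord))
# a coordinate holds points (and possibly bounds) rather than data
print(lat_coord.points[:5])
print(lat_coord.bounds is None)
# -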
    # let's print out all cube's coordinates print([coord.name() for coord in precipitation_cube.coords()]) # let's access the 'grid_latitude' coordinate and print out the last 10 values grid_latitude = precipitation_cube.coord('grid_latitude') grid_latitude[:10] # print the maximum and minimum value of 'grid_latitude' coordinate print(grid_latitude.points.max()) print(grid_latitude.points.min()) #
# Task:
# - Inspect the following attributes of the caf_cube you created in the previous task:
#   - name (standard_name)
#   - dimensions (ndim)
#   - units
#   - mean of data
# - Print all the coordinates of caf_cube (hint: use a for loop)
# - Explore the attributes of "grid_latitude":
#   - name (standard_name)
#   - shape
#   - units
#
    # + ## Inspect attributes # write your code here .. # + ## Inspect coordinates # write your code here .. # - # ___ # ## 5. Exercise # # In this exercise we will explore the variables and attributes of monthly data. # ### Exercise 1: Load hourly data # Load monthly data into xarrays and display all variables # # + # write your code here .. # - # ### Exercise 2: Convert to iris cublist # Convert the dataset into iris cublist and display the cubelist # # + # write your code here .. # - # ### Exercise 3: Extract variable # Extract **x_wind** variable from cubelist and display the cube # + # write your code here .. # - # ### Exercise 4: Explore cube attributes # Using the Iris cube in previous excercise explore its attributes as follow: # - print out the dimensions # - print out its shape # - print out its coordinates # - print out the maximum and minimum values of latitude and longitude # # + # write your code here .. # + # write your code here .. # + # write your code here .. # + # write your code here .. # - # ### Exercise 5: Change units and add the original units to attributes list # # - change the units of x_wind to km/hr # - add the original units to the attributes list # - print out the attributes to see if new attribtue has added successfully # # # + # write your code here .. # + # write your code here .. # + # write your code here .. # - # ___ #
# Summary
# In this session we learned how:
# - to load data from a zarr database into an xarray dataset and explore its metadata
# - to convert an xarray dataset into an iris cube and explore its metadata
# - to further explore an iris cube's attributes through simple operations
# #
    # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random def two_flips(num_sims=10000): coins=['fair','fair','2head'] heads_count1=0 heads_count2=0 for sim in range(num_sims): coin=random.choice(coins) if coin=='fair': faces=['H','T'] else: faces=['H','H'] coin1_face=random.choice(faces) if coin1_face =='H': heads_count1+=1 coin2_face = random.choice(faces) if coin2_face == 'H': heads_count2+=1 print("Prob. of heads on second flip given heads on first flip is", heads_count2/heads_count1) two_flips() # (Verified analytically.) # # $$ # P(H_1) = 2/3 # $$ # # $$ # P(H_1 \cap H_2) = 1/2 # $$ def two_flips(num_sims=10000): coins=['fair','2head','2head'] heads_count1=0 heads_count2=0 for sim in range(num_sims): coin=random.choice(coins) if coin=='fair': faces=['H','T'] else: faces=['H','H'] coin1_face=random.choice(faces) if coin1_face =='H': heads_count1+=1 coin2_face = random.choice(faces) if coin2_face == 'H': heads_count2+=1 print("Prob. of heads on second flip given heads on first flip is", heads_count2/heads_count1) two_flips(100000) # (Verified analytically) # # $$ # P(H_1)=5/6 # $$ # # $$ # P(H_1 \cap H_2) = 3/4 # $$ # # $$ # P(H_2|H_1) = 9/10 # $$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Fork and Pull # # ### Different ways of collaborating # # We have just seen how we can work with others on GitHub: we add them as collaborators on our repositories and give them permissions to push changes. # # Let's talk now about some other type of collaboration. # # Imagine you are a user of an Open Source project like Numpy and find a bug in one of their methods. # # You can inspect and clone [Numpy's code in GitHub](https://github.com/numpy/numpy), play around a bit and find how to fix the bug. # # Numpy has done so much for you asking nothing in return, that you really want to contribute back by fixing the bug for them. # # You make all of the changes but you can't push it back to Numpy's repository because you don't have permissions. # # The right way to do this is __forking Numpy's repository__. # ### Forking a repository on GitHub # # By forking a repository, all you do is make a copy of it in your GitHub account, where you will have write permissions as well. # # If you fork Numpy's repository, you will find a new repository in your GitHub account that is an exact copy of Numpy. You can then clone it to your computer, work locally on fixing the bug and push the changes to your _fork_ of Numpy. # # Once you are happy with with the changes, GitHub also offers you a way to notify Numpy's developers of this changes so that they can include them in the official Numpy repository via starting a __Pull Request__. # ### Pull Request # # You can create a Pull Request and select those changes that you think can be useful for fixing Numpy's bug. # # Numpy's developers will review your code and make comments and suggestions on your fix. Then, you can commit more improvements in the pull request for them to review and so on. # # Once Numpy's developers are happy with your changes, they'll accept your Pull Request and merge the changes into their original repository, for everyone to use. # ### Practical example - Team up! 
# # We will be working in the same repository with one of you being the leader and the other being the collaborator. # # Collaborators need to go to the leader's GitHub profile and find the repository we created for that lesson. Mine is in https://github.com/jamespjh/github-example # #### 1. Fork repository # # You will see on the top right of the page a `Fork` button with an accompanying number indicating how many GitHub users have forked that repository. # # Collaborators need to navigate to the leader's repository and click the `Fork` button. # # Collaborators: note how GitHub has redirected you to your own GitHub page and you are now looking at an exact copy of the team leader's repository. # #### 2. Clone your forked repo # # Collaborators: go to your terminal and clone the newly created fork. # # ``` # git clone git@github.com:jamespjh/github-example.git # ``` # #### 3. Create a feature branch # # It's a good practice to create a new branch that'll contain the changes we want. We'll learn more about branches later on. For now, just think of this as a separate area where our changes will be kept not to interfere with other people's work. # # ``` # git checkout -b southwest # ``` # #### 4. Make, commit and push changes to new branch # # For example, let's create a new file called `SouthWest.md` and edit it to add this text: # # ``` # * Exmoor # * Dartmoor # * Bodmin Moor # ``` # # Save it, and push this changes to your fork's new branch: # # ``` # git add SouthWest.md # git commit -m "The South West is also hilly." # git push origin southwest # ``` # #### 5. Create Pull Request # # Go back to the collaborator's GitHub site and reload the fork. GitHub has noticed there is a new branch and is presenting us with a green button to `Compare & pull request`. Fantastic! Click that button. # # Fill in the form with additional information about your change, as you consider necesary to make the team leader understand what this is all about. # # Take some time to inspect the commits and the changes you are submitting for review. When you are ready, click on the `Create Pull Request` button. # # Now, the leader needs to go to their GitHub site. They have been notified there is a pull request in their repo awaiting revision. # #### 6. Feedback from team leader # # Leaders can see the list of pull requests in the vertical menu of the repo, on the right hand side of the screen. Select the pull request the collaborator has done, and inspect the changes. # # There are three tabs: in one you can start a conversation with the collaborator about their changes, and in the others you can have a look at the commits and changes made. # # Go to the tab labeled as "Files Changed". When you hover over the changes, a small `+` button appears. Select one line you want to make a comment on. For example, the line that contains "Exmoor". # # GitHub allows you to add a comment about that specific part of the change. Your collaborator has forgotten to add a title at the beginning of the file right before "Exmoor", so tell them so in the form presented after clicking the `+` button. # #### 7. Fixes by collaborator # # Collaborators will be notified of this comment by email and also in their profiles page. Click the link accompanying this notification to read the comment from the team leader. # # Go back to your local repository, make the changes suggested and push them to the new branch. 
# # Add this at the beginning of your file: # # ``` # Hills in the South West: # ======================= # # ``` # # Then push the change to your fork: # # ``` # git add . # git commit -m "Titles added as requested." # git push origin southwest # ``` # # This change will automatically be added to the pull request you started. # #### 8. Leader accepts pull request # The team leader will be notified of the new changes that can be reviewed in the same fashion as earlier. # # Let's assume the team leader is now happy with the changes. # # Leaders can see in the "Conversation" tab of the pull request a green button labelled ```Merge pull request```. Click it and confirm the decision. # # The collaborator's pull request has been accepted and appears now in the original repository owned by the team leader. # # Fork and Pull Request done! # ### Some Considerations # # * Fork and Pull Request are things happening only on the repository's server side (GitHub in our case). Consequently, you can't do things like `git fork` or `git pull-request` from the local copy of a repository. # # * You don't always need to fork repositories with the intention of contributing. You can fork a library you use, install it manually on your computer, and add more functionality or customise the existing one, so that it is more useful for you and your team. # # * Numpy's example is only illustrative. Normally, Open Source projects have in their documentation (sometimes in the form of a wiki) a set of instructions you need to follow if you want to contribute to their software. # # * Pull Requests can also be done for merging branches in a non-forked repository. It's typically used in teams to merge code from a branch into the master branch and ask team colleagues for code reviews before merging. # # * It's a good practice before starting a fork and a pull request to have a look at existing forks and pull requests. On GitHub, you can find the list of pull requests on the horizontal menu on the top of the page. Try to also find the network graph displaying all existing forks of a repo, e.g., [NumpyDoc repo's network graph](https://github.com/numpy/numpydoc/network). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ''' 12. Insert in singly Linked-List ''' class Node: def __init__(self): self.data = None self.next = None def setData(self,data): self.data = data def getData(self): return self.data def setNext(self,next): self.next = next def getNext(self): return self.next class SinglyLinkedList: def __init__(self): self.head = None def setHead(self,head): self.head = head def insert_at_front(self,data): newHead = Node() newHead.setData(data) newHead.setNext(self.head) self.setHead(newHead) # + ''' 13. Bubble sort ''' def bubble_sort(a_list): not_done = True while not_done: not_done = False for i in range(0, len(a_list) - 1): # Swap two adjacent elements if in the wrong order if a_list[i] > a_list[i + 1]: a_list[i], a_list[i + 1] = a_list[i + 1], a_list[i] not_done = True return a_list # + ''' 14. Transpose square matrix You are given a square 2D image matrix (List of Lists) where each integer represents a pixel. Write a method transpose_matrix to transform the matrix into its Transpose - in-place. The transpose of a matrix is a matrix which is formed by turning all the rows of the source matrix into columns and vice-versa. 
''' def transpose_matrix(matrix): n = 0 for i in range(n, len(matrix)): for j in range(n, len(matrix)): matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j] n+=1 # + ''' 15. Given a string, recursively compute a new string where identical, adjacent characters get separated with a "*". ''' def insert_star_between_pairs(a_string): if len(a_string) <= 1: return a_string elif a_string[0] == a_string[1]: return a_string[0] + "*" + insert_star_between_pairs(a_string[1:]) else: return a_string[0] + insert_star_between_pairs(a_string[1:]) # - ''' 16. Given a sorted list and an input number as inputs, write a function to return a tuple, consisting of the indices of the first and last occurrences of the input number in the list. ''' def find_range(input_list, input_number): indices = [] for i in range(0, len(input_list)): if input_list[i] == input_number: indices.append(i) start, end = indices[0], indices[-1] return (start, end) # + ''' 17. Write a function to find the total number of leaf nodes in a binary tree. A node is described as a leaf node if it doesn't have any children. If there are no leaf nodes, return 0 ''' class TreeNode: def __init__(self, data,left_child = None, right_child = None): self.data = data self.left_child = left_child self.right_child = right_child class BinaryTree: def __init__(self, root_node = None): self.root = root_node def number_of_leaves(self,root): if root is None: return 0 elif root.left_child is None and root.right_child is None: return 1 else: return self.number_of_leaves(root.left_child) + self.number_of_leaves(root.right_child) # + ''' 18. Given a sorted list of integer ranges, merge all overlapping ranges. ''' class Range(object): def __init__(self): self.lower_bound = -1 self.upper_bound = -1 def __init__(self,lower_bound,upper_bound): self.lower_bound = lower_bound self.upper_bound = upper_bound def __str__(self): return "["+str(self.lower_bound)+","+str(self.upper_bound)+"]" def merge_ranges(input_range_list): merged_range_list = [] x, i = input_range_list[0], 1 lb, ub = x.lower_bound, x.upper_bound while True: y = input_range_list[i] if ub >= y.lower_bound: ub = y.upper_bound else: merged_range_list.append(Range(lb, ub)) x = input_range_list[i] lb, ub = x.lower_bound, x.upper_bound i+=1 if i == len(input_range_list): break merged_range_list.append(Range(lb, ub)) return merged_range_list # + ''' 19. Given an list of integers, write a method - max_gain - that returns the maximum gain. Maximum Gain is defined as the maximum difference between 2 elements in a list such that the larger element appears after the smaller element. If no gain is possible, return 0. ''' def max_gain(input_list): best_gain = 0 for i in range(0, len(input_list)): for j in range(i + 1, len(input_list)): current_gain = input_list[j] - input_list[i] if best_gain < current_gain: best_gain = current_gain return best_gain # + ''' 20. Write a method to determine whether a given integer (zero or positive number) is a power of 4 without using the multiply or divide operator. If it is, return True, else return False ''' def is_power_of_four1(number): if number == 1: return True n = bin(number) if n[-1] == '1' or n[-2] == '1': return False if n[2]== '1' and int(n[3:]) == 0 and len(n) % 2 == 1: return True else: return False def is_power_of_four(number): test = 1 while test < number: test = test << 2 return test == number # + ''' 21. Given a binary tree, write a function to recursively traverse it in the preorder manner. 
Mark a node as visited by adding its data to the list - pre_ordered_list (defined globally at the top of the editor). ''' pre_ordered_list = [] class BinaryTree: def __init__(self, root_data): self.data = root_data self.left_child = None self.right_child = None def preorder(self): pre_ordered_list.append(self.get_root_val()) if self.get_left_child(): self.get_left_child().preorder() if self.get_right_child(): self.get_right_child().preorder() def get_right_child(self): return self.right_child def get_left_child(self): return self.left_child def set_root_val(self, obj): self.data = obj def get_root_val(self): return self.data # + ''' 22. Given a singly linked list, write a method to perform In-place reversal. ''' class Node: def __init__(self): self.data = None self.next = None def setData(self,data): self.data = data def getData(self): return self.data def setNext(self,next): self.next = next class SinglyLinkedList: def __init__(self): self.head = None def setHead(self,head): self.head = head def reverse(self): previous_node = self.head current_node = previous_node.getNext() previous_node.setNext(None) while current_node is not None: next_node = current_node.getNext() current_node.setNext(previous_node) previous_node = current_node current_node = next_node self.setHead(previous_node) # + ''' 23. Given a binary tree, implement a method to populate the list post_ordered_list with the data of the nodes traversed in postorder, recursively. ''' post_ordered_list = [] class BinaryTree: def __init__(self, root_data): self.data = root_data self.left_child = None self.right_child = None def postorder(self): if self.get_left_child() is not None: self.get_left_child().postorder() if self.get_right_child() is not None: self.get_right_child().postorder() post_ordered_list.append(self.data) def get_right_child(self): return self.right_child def get_left_child(self): return self.left_child def set_root_val(self, obj): self.data = obj # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import datetime import matplotlib.pyplot as plt from scipy.sparse import csr_matrix import matplotlib.pyplot as plt from seaborn import heatmap from sklearn.cluster import KMeans from sklearn.cluster import spectral_clustering,SpectralClustering def get_B_and_weight_vec(matrix,threshold): N = matrix.shape[0] A = matrix row = [] col = [] data = [] weight_vec = [] cnt = 0 for i in range(N): for j in range(N): if j <= i: continue if A[i, j] < threshold: A[i, j] = 0 A[j, i] = 0 continue row.append(cnt) col.append(i) data.append(1) row.append(cnt) col.append(j) data.append(-1) cnt += 1 weight_vec.append(A[i, j]) B = csr_matrix((data, (row, col)), shape=(cnt, N)) weight_vec = np.array(weight_vec) return B, weight_vec def algorithm(B, weight_vec,seeds, K=15000,alpha=0.01, lambda_nLasso=None, check_s=False): E, N = B.shape # weight_vec = np.ones(E) Gamma_vec = np.array(1./(np.sum(abs(B), 0)))[0] # \in [0, 1] Gamma = np.diag(Gamma_vec) Sigma = 0.5 seednodesindicator= np.zeros(N) seednodesindicator[seeds] = 1 noseednodeindicator = np.ones(N) noseednodeindicator[seeds] = 0 if lambda_nLasso == None: lambda_nLasso = 2 / math.sqrt(np.sum(weight_vec)) if check_s: s = 0.0 for item in range(len(weight_vec)): x = B[item].toarray()[0] i = np.where(x == -1)[0][0] j = np.where(x == 1)[0][0] if i < N1 <= j: s += weight_vec[item] elif i 
>= N1 > j: s += weight_vec[item] if lambda_nLasso * s >= alpha * N2 / 2: print ('eq(24)', lambda_nLasso * s, alpha * N2 / 2) fac_alpha = 1./(Gamma_vec*alpha+1) # \in [0, 1] hatx = np.zeros(N) newx = np.zeros(N) prevx = np.zeros(N) haty = np.array([x/(E-1) for x in range(0, E)]) history = [] for iterk in range(K): # if 0 < np.max(abs(newx - prevx)) < 1e-4: # print(iterk) # break tildex = 2 * hatx - prevx newy = haty + Sigma * B.dot(tildex) # chould be negative haty = newy / np.maximum(abs(newy) / (lambda_nLasso * weight_vec), np.ones(E)) # could be negative newx = hatx - Gamma_vec * B.T.dot(haty) # could be negative newx[seeds] = (newx[seeds] + Gamma_vec[seeds]) / (1 + Gamma_vec[seeds]) newx = seednodesindicator * newx + noseednodeindicator * (newx * fac_alpha) prevx = np.copy(hatx) hatx = newx # could be negative history.append(newx) history = np.array(history) return history # # 11 nodes with Various boundary weight # generate weights matrix def generate_A(N=10, A_0=0.9): weights_matrix = np.zeros((N,N)) for i in range(N): for j in range(N): if i == 9: weights_matrix[i][10] = A_0 weights_matrix[10][i] = A_0 continue if j == i+1: weights_matrix[i][j] = 1 weights_matrix[j][i] = 1 return weights_matrix def run_chain(matrix,seeds,K=100, alpha=0.02,threshold = 0.000001, lambda_nLasso=0.1): B, weight_vec = get_B_and_weight_vec(matrix,threshold = threshold) X = [] for s in seeds: history = algorithm(B, weight_vec,seeds=s, K=K, alpha=alpha, lambda_nLasso=lambda_nLasso) X.append(np.nan_to_num(history[-1])) X = np.array(X) # start = datetime.datetime.now() spectral_labels = spectral_clustering(matrix, n_clusters=2) # print ('spectral clustering time: ', datetime.datetime.now() - start) return X, spectral_labels N = 11 for A_0 in [0.2,0.5,0.7,0.9,0.99]: print('A_0 =',A_0) weights_matrix_10 = generate_A(N=N, A_0=A_0) seeds = np.random.choice(np.arange(1,N),5,replace=False) # print("seeds:",seeds) ALPHA = 0.01 LAMBDA = 0.05 X, spectral_labels = run_chain(weights_matrix_10,seeds=seeds,K=3000,alpha=ALPHA,lambda_nLasso=LAMBDA) kmeans = KMeans(n_clusters=2, random_state=0).fit(X.T) true_labels =np.array([0 for i in range(9)]+[1 for i in range(N-9)]) if kmeans.labels_[0] == 0: print('our method accuracy: ', len(np.where(kmeans.labels_ == true_labels)[0])/len(true_labels)) else: print('our method accuracy: ', len(np.where(kmeans.labels_ == 1-true_labels)[0])/len(true_labels)) if spectral_labels[0] == 0: print('spectral clustering accuracy: ', len(np.where(spectral_labels == true_labels)[0])/len(true_labels)) else: print('spectral clustering accuracy: ', len(np.where(spectral_labels == 1-true_labels)[0])/len(true_labels)) print('\n') from scipy.sparse.csgraph import laplacian import scipy.linalg as la N = 11 for A_0 in [0.2,0.5,0.7,0.9,0.99]: print('A_0 =',A_0) weights_matrix_10 = generate_A(N=N, A_0=A_0) l=laplacian(weights_matrix_10) eig_values, eig_vectors = la.eig(l) fiedler_pos = np.where(eig_values.real == np.sort(eig_values.real)[1])[0][0] fiedler_vector = np.transpose(eig_vectors)[fiedler_pos] plt.figure(figsize=(8,3)) plt.title(f"Fiedler vector when A_0 = {A_0}") plt.plot(fiedler_vector.real[0:N]*(-7),'yo') plt.show() plt.close() # # 40 nodes ## Use a single node as seed, run algorithm for one time N = 40 A_0=0.7 for seed in [0,6,3,15,25]: print('A_0 =',A_0) weights_matrix_40 = generate_A(N=N, A_0=A_0) print("seed:",seed) ALPHA = 0.03 LAMBDA = 0.5 B, weight_vec = get_B_and_weight_vec(weights_matrix_40,threshold = 0.001) history = algorithm(B, weight_vec,seeds=seed, K=2000, alpha=ALPHA, 
lambda_nLasso=LAMBDA) kmeans = KMeans(n_clusters=2, random_state=0).fit(np.nan_to_num(history[-1]).reshape(-1,1)) spectral_labels = spectral_clustering(weights_matrix_40, n_clusters=2) true_labels =np.array([0 for i in range(9)]+[1 for i in range(N-9)]) print('true labels: ',true_labels) if kmeans.labels_[0] == 0: print('our method accuracy: ', len(np.where(kmeans.labels_ == true_labels)[0])/len(true_labels)) else: print('our method accuracy: ', len(np.where(kmeans.labels_ == 1-true_labels)[0])/len(true_labels)) if spectral_labels[0] == 0: print('spectral clustering accuracy: ', len(np.where(spectral_labels == true_labels)[0])/len(true_labels)) else: print('spectral clustering accuracy: ', len(np.where(spectral_labels == 1-true_labels)[0])/len(true_labels)) print('\n') plt.scatter(np.arange(N),history[-1]) plt.scatter(seed,history[-1][seed],label='seed') plt.legend() plt.show() plt.close() N = 40 for A_0 in [0.2,0.5,0.7,0.9,0.99]: print('A_0 =',A_0) weights_matrix_100 = generate_A(N=N, A_0=A_0) seeds = np.random.choice(np.arange(0,N),10,replace=False) print("seeds:",seeds) ALPHA = 0.03 LAMBDA = 0.5 X, spectral_labels = run_chain(weights_matrix_100,seeds=seeds,K=2000,alpha=ALPHA,lambda_nLasso=LAMBDA) kmeans = KMeans(n_clusters=2, random_state=0).fit(X.T) true_labels =np.array([0 for i in range(9)]+[1 for i in range(N-9)]) print(len(true_labels)) print(kmeans.labels_ == true_labels) print(np.where(kmeans.labels_ == true_labels)[0]) if kmeans.labels_[0] == 0: print('our method accuracy: ', len(np.where(kmeans.labels_ == true_labels)[0])/len(true_labels)) else: print('our method accuracy: ', len(np.where(kmeans.labels_ == 1-true_labels)[0])/len(true_labels)) if spectral_labels[0] == 0: print('spectral clustering accuracy: ', len(np.where(spectral_labels == true_labels)[0])/len(true_labels)) else: print('spectral clustering accuracy: ', len(np.where(spectral_labels == 1-true_labels)[0])/len(true_labels)) print('\n') from scipy.sparse.csgraph import laplacian import scipy.linalg as la N = 40 for A_0 in [0.2,0.5,0.7,0.9,0.99]: print('A_0 =',A_0) weights_matrix_100 = generate_A(N=N, A_0=A_0) l=laplacian(weights_matrix_100) eig_values, eig_vectors = la.eig(l) fiedler_pos = np.where(eig_values.real == np.sort(eig_values.real)[1])[0][0] fiedler_vector = np.transpose(eig_vectors)[fiedler_pos] plt.figure(figsize=(8,3)) plt.title(f"Fiedler vector when A_0 = {A_0}") plt.plot(fiedler_vector.real[0:N]*(-7),'yo') plt.show() plt.close() # # Gaussian Ball + Rectangle # + def get_A_B_and_edge_weights_points(points,threshold=0.2): N = len(points) clustering = SpectralClustering(n_clusters=2, assign_labels="discretize", random_state=0).fit(points) A = clustering.affinity_matrix_ # print(np.max(A.flatten())) row = [] col = [] data = [] edge_weights = [] cnt = 0 for i in range(N): for j in range(N): if j <= i: continue if A[i, j] < threshold: A[i, j] = 0 A[j, i] = 0 continue row.append(cnt) col.append(i) data.append(1) row.append(cnt) col.append(j) data.append(-1) cnt += 1 edge_weights.append(A[i, j]) B = csr_matrix((data, (row, col)), shape=(cnt, N)) edge_weights = np.array(edge_weights) return A, B, edge_weights def generate_seeds(A,B,N,proportion): #use unweighted A and its square to explore a local neighborhood of the initial seed def filter_seeds(initial_seed): i=50 while True: #b is a set of nodes with more than i common neighbors with the initial seed b = set(np.where(S[initial_seed]>i)[0]) samplingset=list(a.intersection(b)) if len(samplingset)<5: break elif len(samplingset)>N*proportion: i+=1 else : break 
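# i is the common-neighbour threshold the loop settled on: the sampling set keeps direct neighbours of the initial seed that share more than i neighbours with it, and i was raised until that set was no larger than N*proportion (or it shrank below 5).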
print(i) samplingset.append(initial_seed) return samplingset #squared A unweighted_A = (A>0).astype(int)-np.eye(A.shape[0],A.shape[1]) S = np.dot(unweighted_A,unweighted_A) degrees = np.asarray(np.sum(abs(B),0))[0] #randomly select some nodes as initial seeds, the degree of these nodes must be higher than a certain value initial_seeds = np.random.choice(np.where(degrees>12)[0],6,replace=False) print('initial_seeds',initial_seeds) samplingsets = [] for s in initial_seeds: a = set(np.where(unweighted_A[s]>0)[0]) #include all the neighbors as candidate seeds samplingsets.append(filter_seeds(s)) return samplingsets # + def make_GU(n1,n2): u = np.random.uniform(low=[0,-0.05], high=[8.0,0], size=(n1,2)) mean=[2,0.2] cov=[[0.01,0],[0,0.01]] g = np.random.multivariate_normal(mean, cov, n2) data = np.concatenate((u,g)) return data,np.array([0]*n1+[1]*n2) N1=700 N2=700 points = make_GU(n1=N1,n2=N2) A,B,edge_weights=get_A_B_and_edge_weights_points(points[0],threshold=0.98) # Must use a large threshold samplingsets = generate_seeds(A,B,N1+N2,0.1) #visulize seeds generated plt.scatter(points[0][:, 0], points[0][:, 1], label='0') for samplingset in samplingsets: plt.scatter(points[0][samplingset, 0], points[0][samplingset,1],label='0') plt.title('seeds generated') plt.axis('equal') plt.show() plt.close() # + # run our algorithm for each batch of seeds, and stack the obtained graph signals into a feature matrix # # feed the feature matrix into Kmeans to get the final cluster identities for each node. ALPHA = 0.005 LAMBDA = 0.01 history = [] for samplingset in samplingsets: hist = algorithm(B,edge_weights,samplingset,K=1000,alpha=ALPHA,lambda_nLasso=LAMBDA) history.append(np.nan_to_num(hist[-1])) kmeans = KMeans(n_clusters=2).fit(np.array(history).T) data = {'x':points[0][:,0],'y':points[0][:,1],'c':kmeans.labels_} pd.DataFrame(data).to_csv('ourResult.csv',index=None) true_labels = points[1] if kmeans.labels_[0] == 0: print('The accuracy is: ', len(np.where(kmeans.labels_ == true_labels)[0])/len(true_labels)) else: print('The accuracy is: ', len(np.where(kmeans.labels_ == 1-true_labels)[0])/len(true_labels)) plt.scatter(points[0][np.where(kmeans.labels_ == 1), 0], points[0][np.where(kmeans.labels_ == 1), 1], label='0') plt.scatter(points[0][np.where(kmeans.labels_ == 0), 0], points[0][np.where(kmeans.labels_ == 0), 1], label='0') # plt.scatter(points[0][samplingset, 0], points[0][samplingset,1], c='r',label='0') plt.title('clustering results') plt.axis('equal') plt.show() plt.close() # + spectral_labels = spectral_clustering(A,n_clusters=2) data = {'x':points[0][:,0],'y':points[0][:,1],'c':spectral_labels} pd.DataFrame(data).to_csv('spectralResult.csv',index=None) if spectral_labels[0] == 0: print('The accuracy is: ', len(np.where(spectral_labels == true_labels)[0])/len(true_labels)) else: print('The accuracy is: ', len(np.where(spectral_labels == 1-true_labels)[0])/len(true_labels)) plt.scatter(points[0][np.where(spectral_labels == 1), 0], points[0][np.where(spectral_labels == 1), 1], label='0') plt.scatter(points[0][np.where(spectral_labels == 0), 0], points[0][np.where(spectral_labels == 0), 1], label='0') # plt.scatter(points[0][samplingset, 0], points[0][samplingset,1], c='r',label='0') plt.title('clustering results') plt.axis('equal') plt.show() plt.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] 
Collapsed="false" # # Creating an unbiased train/valid split # # 1. How biased is a simple <2016/2016 split? Answer: There doesn't seem to be a large bias. This means I can simply use this split to test for overfitting. # + Collapsed="false" # %load_ext autoreload # %autoreload 2 # + Collapsed="false" jupyter={"outputs_hidden": true} from src.train_nn import * import matplotlib.pyplot as plt # + Collapsed="false" jupyter={"outputs_hidden": true} limit_mem() # + Collapsed="false" datadir = '/data/weather-benchmark/5.625deg/' # + Collapsed="false" z = xr.open_mfdataset(f'{datadir}geopotential_500/*.nc', combine='by_coords') t = xr.open_mfdataset(f'{datadir}temperature_850/*.nc', combine='by_coords') ds = xr.merge([z, t]) # + [markdown] Collapsed="false" # ## How biased is a simple <2016/2016 split? # + Collapsed="false" ds_train = ds.sel(time=slice('1979', '2015')) ds_valid = ds.sel(time=slice('2016', '2016')) ds_test = ds.sel(time=slice('2017', '2018')) # + Collapsed="false" dic = {'z': None, 't': None} # + Collapsed="false" batch_size = 32 lead_time = 3*24 # + Collapsed="false" dummy = DataGenerator(ds_train.isel(time=slice(0, None, 5*12)), dic, lead_time, batch_size=batch_size, shuffle=False) dg_train = DataGenerator(ds_train, dic, lead_time, batch_size=batch_size, mean=dummy.mean, std=dummy.std, shuffle=False) dg_valid = DataGenerator(ds_valid, dic, lead_time, batch_size=batch_size, mean=dg_train.mean, std=dg_train.std, shuffle=False) dg_test = DataGenerator(ds_test, dic, lead_time, batch_size=batch_size, mean=dg_train.mean, std=dg_train.std, shuffle=False) dg_fake = DataGenerator(ds_test.sel(time='2018'), dic, lead_time, batch_size=batch_size, mean=dg_train.mean, std=dg_train.std) # + Collapsed="false" model = build_cnn([64, 64, 64, 64, 2], [5, 5, 5, 5, 5], input_shape=(32, 64, 2), activation='elu') model.compile(keras.optimizers.Adam(), 'mse') # + Collapsed="false" model.fit_generator(dg_fake, epochs=1) # + Collapsed="false" model.evaluate_generator(dg_train, verbose=1) # + Collapsed="false" model.evaluate_generator(dg_valid, verbose=1) # + Collapsed="false" model.evaluate_generator(dg_test, verbose=1) # + Collapsed="false" # Create predictions from an untrained model preds_train = create_predictions(model, dg_train) # + Collapsed="false" preds_valid = create_predictions(model, dg_valid) preds_test = create_predictions(model, dg_test) # + Collapsed="false" rmse_train = compute_weighted_rmse(preds_train.isel(time=slice(0, None, 36)), ds).load(); rmse_train # + Collapsed="false" rmse_valid = compute_weighted_rmse(preds_valid, ds).load(); rmse_valid # + Collapsed="false" rmse_test = compute_weighted_rmse(preds_test, ds).load(); rmse_test # + Collapsed="false" len(dg_train), len(dg_valid) # + Collapsed="false" mses_train = [] for i in range(len(dg_valid))[::-1]: x, y = dg_train[-i] p = model.predict_on_batch(x) mses_train.append(((y - p)**2).mean()) # + Collapsed="false" mses_valid = [] for i in range(len(dg_valid)): x, y = dg_valid[i] p = model.predict_on_batch(x) mses_valid.append(((y - p)**2).mean()) # + Collapsed="false" plt.plot(mses_train) plt.plot(mses_valid) # + Collapsed="false" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="Gr-J1lexhauP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 853} outputId="317ab8af-7c6a-452a-ff18-7ca30b676dd3" # !rm -rdf loader* data/* data* ../data 
../data* # !wget https://raw.githubusercontent.com/cemoody/simple_mf/master/notebooks/loader.py # !mkdir ../data # !wget https://www.dropbox.com/s/msqzmmaog5bstm4/dataset.npz?dl=0 # !mv dataset.npz?dl=0 ../data/dataset.npz # !pip install pytorch-ignite tensorboardX # + [markdown] id="E6riMAuQhXnY" colab_type="text" # ### Load preprocessed data # + id="LKTrv-4uhXnb" colab_type="code" colab={} import numpy as np fh = np.load('../data/dataset.npz') # We have a bunch of feature columns and last column is the y-target train_x = fh['train_x'].astype(np.int64) train_y = fh['train_y'] test_x = fh['test_x'].astype(np.int64) test_y = fh['test_y'] n_user = int(fh['n_user']) n_item = int(fh['n_item']) n_occu = int(fh['n_occu']) n_rank = int(fh['n_ranks']) train_x[:, 1] += n_user train_x[:, 2] += n_user + n_item train_x[:, 3] += n_user + n_item + n_occu test_x[:, 1] += n_user test_x[:, 2] += n_user + n_item test_x[:, 3] += n_user + n_item + n_occu n_feat = n_user + n_item + n_occu + n_rank # + [markdown] id="1oRzIYc5hXnd" colab_type="text" # ### Define the FM Model # + id="8nIITnv3hXnd" colab_type="code" colab={} import torch from torch import nn import torch.nn.functional as F from torch.autograd import Variable def l2_regularize(array): loss = torch.sum(array ** 2.0) return loss def index_into(arr, idx): new_shape = (idx.size()[0], idx.size()[1], arr.size()[1]) return arr[idx.resize(torch.numel(idx.data))].view(new_shape) def factorization_machine(v, x=None): # Takes an input 2D matrix v of n vectors, each d-dimensional # produces output that is d-dimensional # v is (batchsize, n_features, dim) # x is (batchsize, n_features) # x functions as a weight array, assumed to be 1 if missing # Uses Rendle's trick for computing pairs of features in linear time batchsize = v.size()[0] n_features = v.size()[1] n_dim = v.size()[2] if x is None: x = Variable(torch.ones(v.size())) else: x = x.expand(batchsize, n_features, n_dim) t0 = (v * x).sum(dim=1)**2.0 t1 = (v**2.0 * x**2.0).sum(dim=1) return 0.5 * (t0 - t1) class MF(nn.Module): itr = 0 def __init__(self, n_feat, k=18, c_feat=1.0, c_bias=1.0, writer=None): super(MF, self).__init__() self.writer = writer self.k = k self.n_feat = n_feat self.feat = nn.Embedding(n_feat, k) self.bias_feat = nn.Embedding(n_feat, 1) self.c_feat = c_feat self.c_bias = c_bias def __call__(self, train_x): biases = index_into(self.bias_feat.weight, train_x).squeeze().sum(dim=1) vectrs = index_into(self.feat.weight, train_x) interx = factorization_machine(vectrs).squeeze().sum(dim=1) logodds = biases + interx return logodds def loss(self, prediction, target): loss_mse = F.mse_loss(prediction.squeeze(), target.squeeze()) prior_feat = l2_regularize(self.feat.weight) * self.c_feat total = (loss_mse + prior_feat) for name, var in locals().items(): if type(var) is torch.Tensor and var.nelement() == 1 and self.writer is not None: self.writer.add_scalar(name, var, self.itr) return total # + [markdown] id="pXdcQlEBhXnf" colab_type="text" # ### Train model # + id="GNLL7ysrhXnf" colab_type="code" colab={} from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.metrics import Loss from tensorboardX import SummaryWriter from ignite.metrics import MeanSquaredError from loader import Loader from datetime import datetime # + [markdown] id="mjzVJb4YhXnh" colab_type="text" # #### Hyperparameters # + id="XTb69EQGhXnh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dd95f92c-f967-45a9-bea2-814137bfb4cc" lr = 1e-2 k = 10 
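# k sets the latent dimension of the feature embeddings in the MF model defined above; c_bias and c_feat below are the L2-regularization weights passed into its constructor.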
c_bias = 1e-6 c_feat = 1e-6 log_dir = 'runs/simple_mf_06_fm_' + str(datetime.now()).replace(' ', '_') print(log_dir) # + id="Qx-9c-VEhXnl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="7b03a02e-0ee3-4d52-f7f1-25100fcaa7ab" writer = SummaryWriter(log_dir=log_dir) model = MF(n_feat, k=k, c_bias=c_bias, c_feat=c_feat, writer=writer) optimizer = torch.optim.Adam(model.parameters(), lr=lr) trainer = create_supervised_trainer(model, optimizer, model.loss) metrics = {'accuracy': MeanSquaredError()} evaluat = create_supervised_evaluator(model, metrics=metrics) train_loader = Loader(train_x, train_y, batchsize=1024) test_loader = Loader(test_x, test_y, batchsize=1024) def log_training_loss(engine, log_interval=400): epoch = engine.state.epoch itr = engine.state.iteration fmt = "Epoch[{}] Iteration[{}/{}] Loss: {:.2f}" msg = fmt.format(epoch, itr, len(train_loader), engine.state.output) model.itr = itr if itr % log_interval == 0: print(msg) trainer.add_event_handler(event_name=Events.ITERATION_COMPLETED, handler=log_training_loss) def log_validation_results(engine): evaluat.run(test_loader) metrics = evaluat.state.metrics avg_accuracy = metrics['accuracy'] print("Epoch[{}] Validation MSE: {:.2f} " .format(engine.state.epoch, avg_accuracy)) writer.add_scalar("validation/avg_accuracy", avg_accuracy, engine.state.epoch) trainer.add_event_handler(event_name=Events.EPOCH_COMPLETED, handler=log_validation_results) model # + [markdown] id="jbzfXqEZhXnm" colab_type="text" # #### Run model # + id="BRgPVzmEhXnn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="c0713e6d-9645-4fbb-90d1-956bebb65688" trainer.run(train_loader, max_epochs=50) # + id="rHIOg78qi6oM" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="jg0Lc22m7Qpa" # #Midterm Examination # + [markdown] id="3PuSQYS-BkPg" # #Question 1 # + colab={"base_uri": "https://localhost:8080/"} id="p3awMEJQBmMO" outputId="cd34361f-1dc8-4bb5-b142-ca4ee2227b58" square_matrix = np.identity(5) print(f'Square Matrix whose length is 5: \n{square_matrix}') # + [markdown] id="djeB6CEbBuOe" # #Question 2 # + colab={"base_uri": "https://localhost:8080/"} id="4AS34gPDBvvG" outputId="24ce4a3b-0804-47be-d086-ae69873d9680" UTMatrix = np.array([[1,2,3], [0,3,1], [0,0,5]]) print(UTMatrix) # + [markdown] id="Jzd_QInZBwc6" # #Question 3 # + colab={"base_uri": "https://localhost:8080/"} id="t3TwH9OzCwTw" outputId="ee15edb2-1582-44f1-8a8b-d8d2a5f6aa3d" #Creates a square matrix that is symmetrical square_matrix = np.array ([[1,2,3,4,5], [2,1,2,3,4], [3,2,1,2,3], [4,3,2,1,2], [5,4,3,2,1]]) #Prints a square matrix that is a symmetrical print ('A square matrix that is symmetrical') print (square_matrix) # + [markdown] id="D697gyBZ6QHT" # #Question 4 # + colab={"base_uri": "https://localhost:8080/"} id="M-lrGmkW5KuX" outputId="9bc8d3ac-b064-4d44-b585-356d74a0e135" import numpy as np C = np.array ([[1,2,3],[2,3,3],[3,4,-2]]) invC = np.linalg.inv(C) #Inverse of Matrix C print(invC) # + [markdown] id="kXTnqhgT6exw" # #Question 5 # + colab={"base_uri": "https://localhost:8080/"} id="X_cCx8vE6gK5" outputId="42f7e4b6-a6eb-4eca-b68d-992938d779ef" import numpy as np C = np.array ([[1,2,3],[2,3,3],[3,4,-2]]) detOfC = np.linalg.det(C) print("Given matrix 
of the determinant in Question 4:") print(int(detOfC)) # + [markdown] id="7DNRwqDB67IR" # #Question 6 # + colab={"base_uri": "https://localhost:8080/"} id="gvmpUuue68th" outputId="5c1e4291-1fca-4146-e042-4e02f795c1fc" import numpy as np Equations = np.array([[5,4,1], [10,9,4], [10,13,15]]) Constants = np.array([[[3.4], [8.8], [19.2]]]) roots = np.linalg.inv(Equations) @ Constants print(roots) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from analyse_datasets import analyse_datasets, analyse_dataset, debug_test_dataset, debug_test_single_dataset # %load_ext autoreload # %autoreload 2 # + tags=[] debug_test_single_dataset('p300_prn','rutger_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] debug_test_single_dataset('p300_prn','rutger_rc_5', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] debug_test_single_dataset('p300_prn','rutger_prn_15_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=5,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] debug_test_single_dataset('p300_prn','rutger_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='lr',evtlabs=('re','anyre'),tau_ms=750,ignore_unlabelled=True,center=True) # + tags=[] debug_test_single_dataset('p300_prn','kirsten_rc_5', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=3), model='cca',evtlabs=('re','anyre'),rank=3,tau_ms=750,reg=.02,badEpThresh=6) # + tags=[] debug_test_single_dataset('p300_prn','alex_rc_5', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=None,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=3,tau_ms=750,reg=.02,badEpThresh=6) # + tags=[] debug_test_single_dataset('p300_prn','alex_prn_5', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=None,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=7,tau_ms=750,reg=.02) # + tags=[] debug_test_single_dataset('p300_prn','kirsten_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=None,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=5,tau_ms=750,reg=.02,badEpThresh=6) # + tags=[] debug_test_single_dataset('p300_prn','kirsten_rc_5', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=3, whiten=.05), evtlabs=('re','ntre'), model='lr', ignore_unlabelled=True,tau_ms=750,badEpThresh=None) # + tags=[] debug_test_single_dataset('p300_prn','jeroen_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), 
model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] debug_test_single_dataset('p300_prn','linsey_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] debug_test_single_dataset('p300_prn','nan_prn_5_flip', loader_args=dict(ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None), preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02,badEpThresh=3) # + tags=[] # run visualization for all datasets for this condition. Prn_5_flip from datasets import get_dataset l,fns,_ = get_dataset('p300_prn','prn_5_flip') for fi in fns: print("\n\n---------------------------------- {} \n\n".format(fi)) X,Y,coords=l(fi,ofs=32,order=6,stopband=((0,1),(12,-1)),subtriallen=None) debug_test_dataset(X,Y,coords, preproc_args=dict(badChannelThresh=3,badTrialThresh=None), model='cca',evtlabs=('re','anyre'),rank=4,tau_ms=750,reg=.02) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Brief Introduction to Hyperspectral Classification # The goal of hyperspectral classification is to classify each pixel/data point into one of $K$ classes. In general, classification methods are more effective at classification than hyperspectral unmixing. However, classification approaches are not effective at determining sub-pixel proportion amounts or how much of a material can be found within the field of view corresponding to a pixel. # # In general, hyperspectral classification approaches involve: # 1. (optionally) feature extraction # 2. application of a standard classifier (i.e., classifiers from the machine learning literature). # # Many approaches have been investigated for the optional feature extraction step. These include features that combine spatial and spectral characteristics: # * , , , and , "Classification of Hyperspectral Images by Using Extended Morphological Attribute Profiles and Independent Component Analysis," in IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 3, pp. 542-546, May 2011. http://ieeexplore.ieee.org/document/5664759/ # * ong et al., "Discriminant Tensor Spectral–Spatial Feature Extraction for Hyperspectral Image Classification," in IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 5, pp. 1028-1032, May 2015. http://ieeexplore.ieee.org/document/6985594/ # * , , , and , "Three-Dimensional Local Binary Patterns for Hyperspectral Imagery Classification," in IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 4, pp. 2399-2413, April 2017. http://ieeexplore.ieee.org/document/7831381/ # * among many others.. # # When feature extraction is not applied, the hyperspectral signatures are used as inputs to the classifier directly. # # A large variety of classifiers have been applied to hyperspectral imagery including Support Vector Machines, Neural Networks/Deep Learning, Dictionary Learning-based approaches, among others. #
# * , , , , and , "Classification of Hyperspectral Imagery Using a New Fully Convolutional Neural Network," in IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 2, pp. 292-296, Feb. 2018. http://ieeexplore.ieee.org/document/8249752/
# * , and , "Spatial Logistic Regression for Support-Vector Classification of Hyperspectral Imagery," in IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 3, pp. 439-443, March 2017. http://ieeexplore.ieee.org/document/7837659/
# * , and , "Task-Driven Dictionary Learning for Hyperspectral Image Classification With Structured Sparsity Constraints," in IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 8, pp. 4457-4471, Aug. 2015. http://ieeexplore.ieee.org/document/7053943/
#
    # # All references above are simply a mostly, random selection of these methods. Many, many more options appear in the literature. # # + # imports and setup import numpy as np import os.path import scipy.io from loadmat import loadmat import matplotlib as mpl default_dpi = mpl.rcParamsDefault['figure.dpi'] mpl.rcParams['figure.dpi'] = default_dpi*1.5 import matplotlib.pyplot as plt # + # load gulfport campus image (with labels) img_fname = 'muufl_gulfport_campus_1_hsi_220_label.mat' dataset = loadmat(img_fname)['hsi'] hsi = dataset['Data'] n_r,n_c,n_b = hsi.shape wvl = dataset['info']['wavelength'] rgb = dataset['RGB'] # - # plot the RGB image to see what we are looking at plt.imshow(rgb) # pull label info from the dataset gt = dataset['sceneLabels'] label_names = gt['Materials_Type'] label_img = gt['labels'] # inspect the label values print('min label value:',label_img.min()) print('max label value:',label_img.max()) print('label names:',label_names) # + # show the labels as an image def discrete_matshow(data,minv=None,maxv=None,lbls=None): #https://stackoverflow.com/questions/14777066/matplotlib-discrete-colorbar #get discrete colormap if minv is None: minv = np.min(data) if maxv is None: maxv = np.max(data) cmap = plt.get_cmap('RdBu', maxv-minv+1) # set limits .5 outside true range newdata = data.copy().astype(float) newdata[data > maxv] = np.nan newdata[data < minv] = np.nan mat = plt.matshow(newdata,cmap=cmap,vmin = minv-.5, vmax = maxv+.5) #tell the colorbar to tick at integers cax = plt.colorbar(mat, ticks=np.arange(minv,maxv+1)) discrete_matshow(label_img,1,10) # + # train a nearest neighbor classifier with N samples from each of 10 classes (1-10) from sklearn.neighbors import KNeighborsClassifier # construct the training set samples = [] labels = [] n_samp_per = 100 for i in range(1,11): lbl_inds = (label_img == i).nonzero() n_inds = lbl_inds[0].shape[0] ns = min(n_inds,n_samp_per) perm = np.random.permutation(np.arange(n_inds)) perm_lbl = (lbl_inds[0][perm],lbl_inds[1][perm]) pix = hsi[perm_lbl[0][:ns],perm_lbl[1][:ns],:] lbls = np.full((ns,1),i,dtype=int) samples.append(pix) labels.append(lbls) X = np.vstack(samples) y = np.vstack(labels).squeeze() # - # fit the KNN classifer knn = KNeighborsClassifier(n_neighbors=5) knn.fit(X,y) # predict outputs M = np.reshape(hsi,(n_r*n_c,n_b)) Z = knn.predict(M) # reshape back into an image and dispay results z_img = np.reshape(Z,(n_r,n_c)) discrete_matshow(z_img) # show the training data again for comparison print(label_names) discrete_matshow(label_img,1,10) # + # evaluate performance on training set using confusion matrix, accuracy score from sklearn.metrics import confusion_matrix lbl_array = np.reshape(label_img,(n_r*n_c)) valid_inds = np.logical_and(lbl_array > 0,lbl_array < 11) cm = confusion_matrix(lbl_array[valid_inds],Z[valid_inds]) row_sum = cm.sum(axis=1).reshape((cm.shape[0],1)) #sum of rows as column vector norm_cm = cm/row_sum # - # compute overall accuracy, oa = np.diag(cm).sum()/cm.sum() print('overall accuracy: %.3f'%oa) plt.imshow(norm_cm) plt.colorbar() plt.title('Per-class confusion') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="d014e1618f516e93d4d9a024295fb79df96a85c0" _cell_guid="73e72e9d-47b1-47ec-b4e2-b430d705ecbb" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the 
kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt # %matplotlib inline from scipy.stats import skew # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) # Any results you write to the current directory are saved as output. df = pd.read_csv('../input/kc_house_data.csv') df.info() # - df.describe() # sqft_living seems to be a very 'histogrammable' feature fig,ax = plt.subplots(figsize=(12,8)) ax.hist(df['sqft_living'],bins=25,color='b',alpha=0.75) plt.title('Sqft_living distribution'); # Note: The histogram is skewed skew(df['sqft_living']) # Note the skew, since the extremums of the distribution are very large, compared to the mean. # The max value of 13540 is ~ 12 standard deviations from the mean # For a positive skew, usually a log transformation works df['sqft_living'] = np.log1p(df['sqft_living']) fig,ax = plt.subplots(figsize=(12,8)) ax.hist(df['sqft_living'],bins=25,color='b',alpha=0.75) plt.title('Sqft_living distribution (Log)'); skew(df['sqft_living']) # So we see that the skew can be much reduced by using a log transformation of the variable # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SageMath 9.5 # language: sage # name: sagemath-9.5 # --- # # Реализация модели защищенного канала передачи данных по спецификации TLS 1.3 IS_DEBUG = 1 def trace(*args, **kwargs): """ Отладочная трассировка """ global IS_DEBUG if IS_DEBUG: print('[TRACE]', end=' ') print(*args, **kwargs) # ---- # + # Криптография из различных ГОСТ'ов from pygost import mgm from pygost import gost3412 from pygost import gost3410 from pygost import gost34112012256 # Криптография из западных стандартов from Crypto.Cipher import AES from Crypto.Hash import SHA256 # HKDF from Crypto.Protocol.KDF import HKDF # КГПСЧ from Crypto.Random import get_random_bytes, random from Crypto.Util.number import getStrongPrime # Различные утилиты from collections import namedtuple from Crypto.Util import number from functools import partial from binascii import hexlify from enum import Enum import uuid import gmpy2 import pickle # - # ---- # # ## Реализация оберток над криптографическими алгоритмами # + def multiply_by_scalar(curve, scalar, point): """ Умножение точки эллиптической кривой на скаляр """ return curve.exp(scalar, *point) def add_points(curve, lhs, rhs): """ Сложение двух точек эллиптической кривой """ return curve._add(*lhs, *rhs) # - # --- # + # # Сертификат это простая структурка, поэтому обойдусь именованным кортежем # Certificate = namedtuple('Certificate', ['id', 'public_key', 'signature']) # - # ---- class KuznyechikMGM(object): """ Класс-обертка над Кузнечиком в режиме MGM """ KEY_SIZE = 32 def __init__(self, key: bytes): """ Инициализация - сохраняю ключ """ self._key = key def encrypt(self, plaintext: bytes): """ Зашифрование шифром Кузнечик в режиме MGM """ encrypter = partial(KuznyechikMGM._encrypt_block, self._key) cipher = mgm.MGM(encrypter, KuznyechikMGM._block_size()) nonce = KuznyechikMGM._generate_nonce() ct_with_tag = cipher.seal(nonce, 
plaintext, b'') return nonce, ct_with_tag[:-cipher.tag_size], ct_with_tag[-cipher.tag_size:] def decrypt(self, nonce: bytes, ciphertext: bytes, tag: bytes): """ Расшифрование шифром Кузнечик в режиме MGM """ encrypter = partial(KuznyechikMGM._encrypt_block, self._key) cipher = mgm.MGM(encrypter, KuznyechikMGM._block_size()) ct_with_tag = ciphertext + tag return cipher.open(nonce, ct_with_tag, b'') @staticmethod def _block_size(): return gost3412.GOST3412Kuznechik.blocksize @staticmethod def _encrypt_block(key: bytes, block: bytes): cipher = gost3412.GOST3412Kuznechik(key) return cipher.encrypt(block) @staticmethod def _generate_nonce(): nonce = get_random_bytes(KuznyechikMGM._block_size()) while bytearray(nonce)[0] & 0x80 > 0: nonce = get_random_bytes(KuznyechikMGM._block_size()) return nonce class AESGCM(object): """ Класс-обертка над AES в режиме GCM """ KEY_SIZE = 16 def __init__(self, key: bytes): """ Инициализация - сохраняю ключ """ self._key = key def encrypt(self, plaintext: bytes): """ Зашифрование шифром AES в режиме GCM """ cipher = AES.new(self._key, AES.MODE_GCM) return cipher.nonce, *cipher.encrypt_and_digest(plaintext) def decrypt(self, nonce: bytes, ciphertext: bytes, tag: bytes): """ Зашифрование шифром AES в режиме GCM """ cipher = AES.new(self._key, AES.MODE_GCM, nonce=nonce) return cipher.decrypt_and_verify(ciphertext, tag) # ---- class Streebog(gost34112012256.GOST34112012256): """ pycryptodome-совместимая обертка над хэш-функцией Стрибог """ digest_size = 32 def __init__(self, data=None): """ Инициализация и обновление состояния (если надо) """ super(Streebog, self).__init__() if data is not None: self.update(data) @staticmethod def new(data=None): """ Создание нового объекта """ return Streebog(data) def hash_digest(hasher_type, *args): """ Вычисление хэша при помощи объекта-обертки """ hasher = hasher_type.new() for arg in args: hasher.update(arg) return hasher.digest() # ---- class DiffieHellmanProcessor(object): """ Обработчик обмена ключами по протоколу Диффи-Хеллмана """ def __init__(self, group=None, generator=None): """ Инициализация - выбор группы, образующего элемента Возможно конструирование по готовой группе и образующему """ self._group = Integers(getStrongPrime(512)) if group is None else group self._generator = random.randint(int(2), int(self._group.order() - 2)) if generator is None \ else generator self._modulus = self._group.order() trace('[DiffieHellmanProcessor]', f'Using group of order {self._modulus}') def generate_private_key(self): """ Генерирование закрытого ключа """ return random.randint(int(2), int(self.order - 2)) def get_public_key(self, private_key): """ Получение открытого ключа по закрытому """ return self.exponentiate(private_key, self._generator) def exponentiate(self, scalar, element): """ Возведение в степень элемента группы """ return self._group(int(gmpy2.powmod(int(element), int(scalar), self.order))) def element_to_bytes(self, element): """ Конвертация элемента группы в байтовое представление """ return number.long_to_bytes(int(element)) @property def group(self): """ Получение группы """ return self._group @property def generator(self): """ Получение образующего элемента """ return self._generator @property def order(self): """ Получение порядка группы """ return self._modulus class EllipticCurveDiffieHellmanProcessor(object): """ Обработчик обмена ключами по протоколу Диффи-Хеллмана в группе точек эллиптической кривой """ def __init__(self, curve=None, generator=None): """ Инициализация - выбор кривой, образующего элемента 
Возможно конструирование по готовой кривой и образующему """ # # Для простоты буду использовать кривую из ГОСТ 34.10 # self._curve = gost3410.CURVES['id-tc26-gost-3410-2012-512-paramSetA'] if curve is None else curve self._generator = (self._curve.x, self._curve.y) if generator is None else generator self._modulus = self._curve.p trace('[EllipticCurveDiffieHellmanProcessor]', f'Using group of order {self._modulus} over curve {self._curve.name}') def generate_private_key(self): """ Генерирование закрытого ключа """ return random.randint(int(2), int(self.order - 2)) def get_public_key(self, private_key): """ Получение открытого ключа по закрытому """ return self.exponentiate(private_key, self._generator) def exponentiate(self, scalar, element): """ Умножение точки на скаляр """ if not self._curve.contains(element): return None return multiply_by_scalar(self._curve, scalar, element) def element_to_bytes(self, element): """ Конвертация элемента группы в байтовое представление """ return gost3410.pub_marshal(element) @property def group(self): """ Получение группы """ return self._curve @property def generator(self): """ Получение образующего элемента """ return self._generator @property def order(self): """ Получение порядка группы """ return self._modulus # ---- cipher_suites = ['ECDHE_KUZNYECHIK_MGM_STREEBOG', 'DHE_AES_GCM_SHA256'] def resolve_cipher_suite(cipher_suite): """ Получение примитивов по строковому идентификатору """ # # Пока захардкодил это все # if cipher_suite == 'ECDHE_KUZNYECHIK_MGM_STREEBOG': return EllipticCurveDiffieHellmanProcessor, KuznyechikMGM, Streebog elif cipher_suite == 'DHE_AES_GCM_SHA256': return DiffieHellmanProcessor, AESGCM, SHA256 raise ValueError(f'Unsupported cipher suite {cipher_suite}') # ---- def hmac(key, *args, hasher=Streebog): """ Вычисление HMAC (согласно Р 50.1.113 - 2016) Информационная технология КРИПТОГРАФИЧЕСКАЯ ЗАЩИТА ИНФОРМАЦИИ Криптографические алгоритмы, сопутствующие применению алгоритмов электронной цифровой подписи и функции хэширования """ def xor(lhs, rhs): return bytes([l ^^ r for l, r in zip(lhs, rhs)]) IPAD = b'\x36' * 64 OPAD = b'\x5c' * 64 K_star = key + (b'\x00' * (512 - len(key))) inner_hash = hash_digest(hasher, xor(K_star, IPAD), *args) return hash_digest(hasher, xor(K_star, OPAD), inner_hash) def test_hmac(): """ Р 50.1.113 - 2016 Информационная технология КРИПТОГРАФИЧЕСКАЯ ЗАЩИТА ИНФОРМАЦИИ Криптографические алгоритмы, сопутствующие применению алгоритмов электронной цифровой подписи и функции хэширования 4.1.1 HMAC_GOSTR3411_2012_256 Приложение А. 
Контрольные примеры """ K = b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f' + \ b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f' T = b'\x01\x26\xbd\xb8\x78\x00\xaf\x21\x43\x41\x45\x65\x63\x78\x01\x00' expected = b'\xa1\xaa\x5f\x7d\xe4\x02\xd7\xb3\xd3\x23\xf2\x99\x1c\x8d\x45\x34' + \ b'\x01\x31\x37\x01\x0a\x83\x75\x4f\xd0\xaf\x6d\x7c\xd4\x92\x2e\xd9' assert hmac(K, T, hasher=Streebog) == expected # + # # Протестирую HMAC на контрольном примере из рекомендаций по стандартизации # test_hmac() # - # ---- class GOST3410Signer(object): """ Объект-создатель подписей по ГОСТ 34.10 - 2012 """ def __init__(self): """ Инициализация кривой и ключевой пары """ self._curve = gost3410.CURVES['id-tc26-gost-3410-2012-512-paramSetA'] self._private_key = gost3410.prv_unmarshal(get_random_bytes(64)) self._public_key = gost3410.public_key(self._curve, self._private_key) def sign(self, *args): """ Подпись произвольного набора байтовых объектов """ digest = hash_digest(Streebog, *args) return gost3410.sign(self._curve, self._private_key, digest) @property def public_key(self): """ Получение открытого ключа """ return self._public_key @property def private_key(self): """ Получение закрытого ключа """ return self._private_key @property def curve(self): """ Получение используемой кривой """ return self._curve @property def generator(self): """ Получение образующего группы точек кривой """ return self._curve.x, self._curve.y def verify_signature(curve, public_key, signature, *args): """ Проверка подписи по ГОСТ 34.10 - 2012 """ digest = hash_digest(Streebog, *args) return gost3410.verify(curve, public_key, digest, signature) # ---- # # ## Реализация участников протокола Offer = namedtuple('Offer', ['cipher_suite', 'group', 'generator']) class GenericParticipant(object): """ Класс-абстракция над участником протокола, который получает сертификат на свой открытый ключ и может осуществлять подпись сообщений """ def __init__(self, ca, *, fake_data=None, demo_cert_error=False): """ Инициализация - получение идентификатора, генерирование ключевой пары, получение сертификата """ self._id = uuid.uuid4() self._signer = GOST3410Signer() if fake_data is not None: # # Демонстрация неудачного выполнения протокола # if demo_cert_error: # # Тут демонстрация неудачи получения серификата # self._signer._public_key = fake_data self._certificate = ca.issue_certificate(self) else: # # Тут как бы сертификат и открытый ключ правильные, а закрытый ключ противник не угадал # self._certificate = ca.issue_certificate(self) self._signer._public_key = fake_data else: # # Стандартное выполнение # self._certificate = ca.issue_certificate(self) self._n = None trace('[GenericParticipant]', f'Certificate = {self._certificate}') def sign(self, *args): """ Подписывание произвольных данных """ return self._signer.sign(*args) @property def public_key(self): """ Получение открытого ключа подписи """ return self._signer.public_key @property def identifier(self): """ Получение идентификатора """ return self._id @property def certificate(self): """ Получение сертификата """ return self._certificate @property def curve(self): """ Получение кривой, используемой для подписи """ return self._signer.curve def _get_point_for_schnorr_proof(self): # # Костыль, но все же работает # self._n = gost3410.prv_unmarshal(get_random_bytes(64)) return multiply_by_scalar(self._signer.curve, self._n, self._signer.generator) def _get_response(self, chellenge): return self._n + chellenge * self._signer.private_key class 
TLS13Participant(GenericParticipant): """ Обобщенный участник общения по TLS 1.3 """ def __init__(self, ca, *, fake_data=None, demo_cert_error=False): """ Инициализация - по факту просто сохраняю ссылку на СА """ super(TLS13Participant, self).__init__(ca, fake_data=fake_data, demo_cert_error=demo_cert_error) self._ca = ca def check_certificate(self, identifier: uuid.UUID, public_key, cert: Certificate): """ Проверка полученного сертификата """ if self._ca.is_revoked(cert): trace('[TLS13Participant]', f'Revoked certificate') return False id_as_bytes = str(identifier).encode() pk_as_bytes = gost3410.pub_marshal(public_key) if not verify_signature(self._ca.curve, self._ca.public_key, cert.signature, id_as_bytes, pk_as_bytes): trace('[TLS13Participant]', f'Invalid certificate') return False return True @staticmethod def check_mac(key, mac, *args): """ Проверка имитовставки """ return mac == hmac(key, *args) @staticmethod def check_signature(curve, public_key, signature, *args): """ Проверка подписи """ return verify_signature(curve, public_key, signature, *args) class TLS13Client(TLS13Participant): """ Клиент для TLS 1.3 """ S2C = 0 C2S = 1 class AuthMode(Enum): """ Режим аутентицикации """ OWA = 1 # Односторонняя TWA = 2 # Двусторонняя def __init__(self, ca, cipher_suite, *, fake_data=None, demo_cert_error=False): """ Инициализация - разбор набора примитивов """ super(TLS13Client, self).__init__(ca, fake_data=fake_data, demo_cert_error=demo_cert_error) self._key_exchange_type, self._cipher_type, self._hasher = resolve_cipher_suite(cipher_suite) self._cipher_suite = cipher_suite self._session_keys = {} trace('[TLS13Client]', f'Using cipher suite {cipher_suite}') def establish_connection(self, server, auth_mode: AuthMode): """ Установление соединения с сервером """ key_exchange = self._key_exchange_type() ephemeral_dh_sk = key_exchange.generate_private_key() ephemeral_dh_pk = key_exchange.get_public_key(ephemeral_dh_sk) client_nonce = get_random_bytes(64) trace('[TLS13Client]', f'Ephemeral DH public key: {ephemeral_dh_pk}') trace('[TLS13Client]', f'Nonce: {hexlify(client_nonce)}') offer = Offer(self._cipher_suite, key_exchange.group, key_exchange.generator) # # А теперь запрос на сервер # is_one_way = auth_mode == TLS13Client.AuthMode.OWA server_hello = server.get_hello(self.identifier, ephemeral_dh_pk, client_nonce, offer, is_one_way) if server_hello is None: trace('[TLS13Client]', f'Server refused the connection') return False # # Распакую ответ # ephemeral_dh_server_pk, server_nonce, cipher_suite, c1, c2, c3, c4 = server_hello if cipher_suite != self._cipher_suite: trace('[TLS13Client]', f'Unsupported cipher suite {cipher_suite}') return False # # Общий секрет DH # ephemeral_dh_common = key_exchange.exponentiate(ephemeral_dh_sk, ephemeral_dh_server_pk) if ephemeral_dh_common is None: trace(f'[TLS13Client]', 'Server ECDH public key is not on curve') # # Вырабатываю ключи для симметричного шифрования и имитовставки # client_packet = b''.join([key_exchange.element_to_bytes(ephemeral_dh_pk), client_nonce, str(offer).encode()]) server_packet = b''.join([key_exchange.element_to_bytes(ephemeral_dh_server_pk), server_nonce, cipher_suite.encode()]) key_derivation_data = b''.join([key_exchange.element_to_bytes(ephemeral_dh_common), client_packet, server_packet]) encryption_key, mac_key = HKDF(key_derivation_data, self._cipher_type.KEY_SIZE, b'\x00' * 16, self._hasher, 2) # # Расшифровываю полученные пакеты # cipher = self._cipher_type(encryption_key) try: cert_requested = cipher.decrypt(*c1) 
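# c1..c4 mirror the server's get_hello: they decrypt to the certificate-request flag, the server certificate, the handshake signature and the handshake MAC, in that order.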
server_certificate = cipher.decrypt(*c2) server_signature = cipher.decrypt(*c3) server_mac = cipher.decrypt(*c4) except ValueError: trace('[TLS13Client]', f'Malformed server response') return False # # Проверяю полученнные данные # if is_one_way == cert_requested: # # Запросили сертификат в ходе односторонней аутентификации или наоборот # trace('[TLS13Client]', f'Invalid client certificate request') return False if not self.check_certificate(server.identifier, server.public_key, pickle.loads(server_certificate)): trace('[TLS13Client]', f'Invalid server certificate') return False if not TLS13Participant.check_signature(server.curve, server.public_key, server_signature, client_packet, server_packet, c1[1], c2[1]): trace('[TLS13Client]', f'Malformed server response') return False if not TLS13Participant.check_mac(mac_key, server_mac, client_packet, server_packet, c1[1], c2[1], c3[1]): trace('[TLS13Client]', f'Malformed server response') return False # # Если требуется двусторонняя аутентификация, то зашлю на сервер сертификат # if not is_one_way: c5 = cipher.encrypt(pickle.dumps(self.certificate)) signature = self.sign(client_packet, server_packet, c1[1], c2[1], c3[1], c4[1], c5[1]) c6 = cipher.encrypt(signature) mac = hmac(mac_key, client_packet, server_packet, c1[1], c2[1], c3[1], c4[1], c5[1], c6[1]) c7 = cipher.encrypt(mac) if not server.process_client_certificate(self, c5, c6, c7): trace('[TLS13Client]', f'Server refused the connection') return False # # Теперь я генерирую ключи шифрования # key_derivation_data = b''.join([key_exchange.element_to_bytes(ephemeral_dh_common), client_packet, server_packet, c1[1], c2[1], c3[1], c4[1]]) self._session_keys[server.identifier] = HKDF(key_derivation_data, self._cipher_type.KEY_SIZE, b'\x00' * 16, self._hasher, 2) trace('[TLS13Client]', f'Server accepted the connection') trace('[TLS13Client]', f's -> c key: {hexlify(self._session_keys[server.identifier][TLS13Client.S2C])}') trace('[TLS13Client]', f'c -> s key: {hexlify(self._session_keys[server.identifier][TLS13Client.C2S])}') return True def send_wait_receive_message(self, server, message: str): """ Отправка сообщения на сервер и получение ответа """ if server.identifier not in self._session_keys: trace('[TLS13Client]', f'Connection with server {server.identifier} is not established yet') return cipher = self._cipher_type(self._session_keys[server.identifier][TLS13Client.C2S]) response = server._receive_message_and_respond(self, cipher.encrypt(message.encode())) if response is None: trace('[TLS13Client]', f'Malformed server response') try: cipher = self._cipher_type(self._session_keys[server.identifier][TLS13Client.S2C]) return cipher.decrypt(*response).decode() except ValueError: trace('[TLS13Client]', f'Malformed server response') def change_keys(self, server): """ Смена ключей по инициативе клиента """ if server.identifier not in self._session_keys: trace('[TLS13Client]', f'Connection with server {server.identifier} is not established yet') return # # Меняю у себя # self._change_keys_internal(server.identifier) # # Меняю на сервере # server._change_keys_internal(self.identifier) def _change_keys_internal(self, server_id): s2c_key = HKDF(self._session_keys[server_id][TLS13Client.S2C], self._cipher_type.KEY_SIZE, b'\x00' * 16, self._hasher, 1) c2s_key = HKDF(self._session_keys[server_id][TLS13Client.C2S], self._cipher_type.KEY_SIZE, b'\x00' * 16, self._hasher, 1) trace('[TLS13Client]', f'New s -> c key: {hexlify(s2c_key)}') trace('[TLS13Client]', f'New c -> s key: {hexlify(c2s_key)}') 
self._session_keys[server_id] = (s2c_key, c2s_key) class TLS13Server(TLS13Participant): """ Сервер для TLS 1.3 """ CIPHER_TYPE = 0 HASHER = 1 HANDSHAKE_ENC_KEY = 2 HANDSHAKE_MAC_KEY = 3 HANDSHAKE_VALUES = 4 HANDSHAKE_KEY_EXCHANGE = 5 S2C = 0 C2S = 1 def __init__(self, ca, cipher_suites): """ Инициализация - сохранение настроек """ super(TLS13Server, self).__init__(ca) self._cipher_suites = cipher_suites self._session_keys = {} self._mapped_states = {} def get_hello(self, client_id, ephemeral_dh_client_pk, client_nonce, offer, is_one_way): """ Получение ServerHello """ # # Провверяю, что поддерживается вообще набор примитивов # cipher_suite, group, generator = offer if cipher_suite not in self._cipher_suites: trace('[TLS13Server]', f'Unsupported cipher suite {cipher_suite}') return None trace('[TLS13Server]', f'Using cipher suite {cipher_suite} with client {client_id}') # # Получаю примитивы # key_exchange_type, cipher_type, hasher = resolve_cipher_suite(cipher_suite) key_exchange = key_exchange_type(group, generator) # # Вырабатываю ключи DH и nonce, после чего получаю общий секрет DH # ephemeral_dh_sk = key_exchange.generate_private_key() ephemeral_dh_pk = key_exchange.get_public_key(ephemeral_dh_sk) server_nonce = get_random_bytes(64) trace('[TLS13Server]', f'Ephemeral DH public key: {ephemeral_dh_pk}') trace('[TLS13Server]', f'Nonce: {hexlify(client_nonce)}') ephemeral_dh_common = key_exchange.exponentiate(ephemeral_dh_sk, ephemeral_dh_client_pk) if ephemeral_dh_common is None: trace(f'[TLS13Client]', 'Client ECDH public key is not on curve') # # Вырабатываю ключи для симметричного шифрования и имитовставки # client_packet = b''.join([key_exchange.element_to_bytes(ephemeral_dh_client_pk), client_nonce, str(offer).encode()]) server_packet = b''.join([key_exchange.element_to_bytes(ephemeral_dh_pk), server_nonce, cipher_suite.encode()]) key_derivation_data = b''.join([key_exchange.element_to_bytes(ephemeral_dh_common), client_packet, server_packet]) encryption_key, mac_key = HKDF(key_derivation_data, cipher_type.KEY_SIZE, b'\x00' * 16, hasher, 2) # # Рассчитываю нужные значения # cipher = cipher_type(encryption_key) cert_request = bytes([not is_one_way]) c1 = cipher.encrypt(cert_request) cert = pickle.dumps(self.certificate) c2 = cipher.encrypt(cert) signature = self.sign(client_packet, server_packet, c1[1], c2[1]) c3 = cipher.encrypt(signature) mac = hmac(mac_key, client_packet, server_packet, c1[1], c2[1], c3[1]) c4 = cipher.encrypt(mac) # # Тут будут сохранены нужные значения # self._mapped_states[client_id] = [None, None, None, None, None, None] # # Теперь выработаю ключи (если это односторонняя аутентификация) # if is_one_way: self._session_keys[client_id] = TLS13Server._generate_keys(key_exchange, cipher_type.KEY_SIZE, hasher, ephemeral_dh_common, client_packet, server_packet, c1, c2, c3, c4) # # Сохраняю остальное и возвращаю клиенту # self._mapped_states[client_id][TLS13Server.CIPHER_TYPE] = cipher_type self._mapped_states[client_id][TLS13Server.HASHER] = hasher self._mapped_states[client_id][TLS13Server.HANDSHAKE_ENC_KEY] = encryption_key self._mapped_states[client_id][TLS13Server.HANDSHAKE_MAC_KEY] = mac_key self._mapped_states[client_id][TLS13Server.HANDSHAKE_VALUES] = (ephemeral_dh_common, client_packet, server_packet, c1, c2, c3, c4) self._mapped_states[client_id][TLS13Server.HANDSHAKE_KEY_EXCHANGE] = key_exchange return ephemeral_dh_pk, server_nonce, cipher_suite, c1, c2, c3, c4 def process_client_certificate(self, client, c5, c6, c7): """ Проверка сертификата клиента """ 
client_id = client.identifier # # Достану ранее сохраненные параметры # cipher_type = self._mapped_states[client_id][TLS13Server.CIPHER_TYPE] hasher = self._mapped_states[client_id][TLS13Server.HASHER] encryption_key = self._mapped_states[client_id][TLS13Server.HANDSHAKE_ENC_KEY] mac_key = self._mapped_states[client_id][TLS13Server.HANDSHAKE_MAC_KEY] ephemeral_dh_common, client_packet, server_packet, c1, c2, c3, c4 = \ self._mapped_states[client_id][TLS13Server.HANDSHAKE_VALUES] key_exchange = self._mapped_states[client_id][TLS13Server.HANDSHAKE_KEY_EXCHANGE] # # Расшифрую полученный пакет # cipher = cipher_type(encryption_key) try: client_certificate = cipher.decrypt(*c5) client_signature = cipher.decrypt(*c6) client_mac = cipher.decrypt(*c7) except ValueError: trace('[TLS13Server]', f'Malformed client response') return False # # проверю все полученные данные # if not self.check_certificate(client.identifier, client.public_key, pickle.loads(client_certificate)): trace('[TLS13Server]', f'Invalid client certificate') return False if not TLS13Participant.check_signature(client.curve, client.public_key, client_signature, client_packet, server_packet, c1[1], c2[1], c3[1], c4[1], c5[1]): trace('[TLS13Server]', f'Malformed client response') return False if not TLS13Participant.check_mac(mac_key, client_mac, client_packet, server_packet, c1[1], c2[1], c3[1], c4[1], c5[1], c6[1]): trace('[TLS13Server]', f'Malformed client response') return False # # Все чудесно - генерирую ключи # key_derivation_data = b''.join([key_exchange.element_to_bytes(ephemeral_dh_common), client_packet, server_packet, c1[1], c2[1], c3[1], c4[1]]) self._session_keys[client_id] = TLS13Server._generate_keys(key_exchange, cipher_type.KEY_SIZE, hasher, ephemeral_dh_common, client_packet, server_packet, c1, c2, c3, c4) return True def change_keys(self, client): """ Смена ключей по инициативе сервера """ if client.identifier not in self._session_keys: trace('[TLS13Client]', f'Connection with client {client.identifier} is not established yet') return # # Меняю у себя # self._change_keys_internal(client.identifier) # # Меняю на сервере # client._change_keys_internal(self.identifier) def _change_keys_internal(self, client_id): s2c_key = HKDF(self._session_keys[client_id][TLS13Client.S2C], self._mapped_states[client_id][TLS13Server.CIPHER_TYPE].KEY_SIZE, b'\x00' * 16, self._mapped_states[client_id][TLS13Server.HASHER], 1) c2s_key = HKDF(self._session_keys[client_id][TLS13Client.C2S], self._mapped_states[client_id][TLS13Server.CIPHER_TYPE].KEY_SIZE, b'\x00' * 16, self._mapped_states[client_id][TLS13Server.HASHER], 1) trace('[TLS13Client]', f'New s -> c key: {hexlify(s2c_key)}') trace('[TLS13Client]', f'New c -> s key: {hexlify(c2s_key)}') self._session_keys[client_id] = (s2c_key, c2s_key) def _receive_message_and_respond(self, client, encrypted_message): client_id = client.identifier try: cipher = self._mapped_states[client_id][TLS13Server.CIPHER_TYPE]( self._session_keys[client_id][TLS13Server.C2S]) message = cipher.decrypt(*encrypted_message).decode() print(f'Received message "{message}"') cipher = self._mapped_states[client_id][TLS13Server.CIPHER_TYPE]( self._session_keys[client_id][TLS13Server.S2C]) return cipher.encrypt(f'Hello, {message}!'.encode()) except ValueError: trace('[TLS13Server]', f'Malformed client message') @staticmethod def _generate_keys(key_exchange, key_size, hasher, ephemeral_dh_common, client_packet, server_packet, c1, c2, c3, c4): key_derivation_data = 
b''.join([key_exchange.element_to_bytes(ephemeral_dh_common), client_packet, server_packet, c1[1], c2[1], c3[1], c4[1]]) s2c_key, c2s_key = HKDF(key_derivation_data, key_size, b'\x00' * 16, hasher, 2) trace('[TLS13Server]', f's -> c key: {hexlify(s2c_key)}') trace('[TLS13Server]', f'c -> s key: {hexlify(c2s_key)}') return s2c_key, c2s_key class TLS13CertificationAuthority(object): """ Удостоверющий центр для TLS 1.3 """ def __init__(self): """ Инициализация - генерирование ключевой пары """ self._signer = GOST3410Signer() self._certificates = set() self._revoked_certificates = set() trace('[TLS13CertificationAuthority]', f'Public key = {self._signer.public_key}') @property def public_key(self): """ Получение открытого ключа """ return self._signer.public_key @property def curve(self): """ Получение кривой, используемой для подписи """ return self._signer.curve def issue_certificate(self, participant: GenericParticipant): """ Выдача сертификата на открытый ключ """ # # Проверю, что клиент действительно владеет закрытым ключом # if not self._check_participant(participant): trace('[TLS13CertificationAuthority]', f'Participant cannot prove a valid private key possession') return None # # Все чудесно, подписываю ключ и отдаю сертификат # id_as_bytes = str(participant.identifier).encode() pk_as_bytes = gost3410.pub_marshal(participant.public_key) cert = Certificate(participant.identifier, participant.public_key, self._signer.sign(id_as_bytes, pk_as_bytes)) self._certificates.add(cert) trace('[TLS13CertificationAuthority]', f'Issued certificate for {participant.identifier}. Total certificates: {len(self._certificates)}') return cert def revoke_certificate(self, participant: GenericParticipant): """ Отзыв сертификата """ # # А можно ли вообще отзывать сертификат? 
# if not self._check_participant(participant): trace('[TLS13CertificationAuthority]', f'Participant cannot prove a valid private key possession') return if participant.certificate not in self._certificates: # # А отзывать то и нечего # return self._revoked_certificates.add(participant.certificate) self._certificates.discard(participant.certificate) trace('[TLS13CertificationAuthority]', f'List of revoked certificates now has {len(self._revoked_certificates)} elements') def is_revoked(self, cert: Certificate): """ Проверка на факт отзыва сертификата """ return cert in self._revoked_certificates def _check_participant(self, participant: GenericParticipant) -> bool: Q = participant._get_point_for_schnorr_proof() c = gost3410.prv_unmarshal(get_random_bytes(64)) t = participant._get_response(c) tP = multiply_by_scalar(self._signer.curve, t, self._signer.generator) cPK = multiply_by_scalar(self._signer.curve, c, participant.public_key) return tP == add_points(self._signer.curve, Q, cPK) # ---- # ## Демонстрация удачного выполнения протоколов # Создание удостоверяющего центра ca = TLS13CertificationAuthority() # Создание сервера srv = TLS13Server(ca, cipher_suites) # Первый валидный пользователь (использует ECDH, Кузнечик в режиме MGM и Стрибог) alice = TLS13Client(ca, cipher_suite=cipher_suites[0]) # Второй валидный пользователь (использует DH, AES в режиме GCM и SHA256) bob = TLS13Client(ca, cipher_suite=cipher_suites[1]) # Устанавливаю соединение с односторонней аутентификацией alice.establish_connection(srv, TLS13Client.AuthMode.OWA) # Устанавливаю соединение с двусторонней аутентификацией bob.establish_connection(srv, TLS13Client.AuthMode.TWA) # Отправлю сообщение alice.send_wait_receive_message(srv, 'world') # Сменю ключи по инициативе клиента alice.change_keys(srv) # И еще раз отправлю сообщение, чтобы было видно, что ключи успешно поменялись alice.send_wait_receive_message(srv, 'world') # Отправлю сообщение другим клиентом bob.send_wait_receive_message(srv, 'crypto') # Смена ключей по инициативе сервера srv.change_keys(bob) # Все по-прежнему успешно отправляется bob.send_wait_receive_message(srv, 'crypto') # ---- # ## Отзыв сертификата # Третий валидный пользователь carol = TLS13Client(ca, cipher_suite=cipher_suites[0]) # Отзову сертификат Кэрол ca.revoke_certificate(carol) # Попробую подключиться, используя двустороннюю аутентификацию carol.establish_connection(srv, TLS13Client.AuthMode.TWA) # ---- # ## Демонстрация неудачного выполнения протоколов # Тут Мэллори пытается установить подключение с двусторонней аутентификацией, # используя сертификат не на свой ключ (я просто подменю тут открытый ключ для простоты) mallory = TLS13Client(ca, cipher_suite=cipher_suites[0], fake_data=bob.public_key) mallory.establish_connection(srv, TLS13Client.AuthMode.TWA) # Мэллори теперь пытается получить сертификат на ключ Боба, не зная его закрытого ключа mallory = TLS13Client(ca, cipher_suite=cipher_suites[0], fake_data=bob.public_key, demo_cert_error=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="VnvqxDEYzJvI" colab_type="code" colab={} from keras.datasets import mnist from keras.models import Sequential from keras.layers import * import numpy as np import tensorflow as tf from keras.optimizers import * import math from PIL import Image (x_train, y_train), (x_test, y_test) = mnist.load_data() # + id="UGjtwnVmzoYw" colab_type="code" 
colab={} def generator_model(): model = Sequential() model.add(Dense(input_dim=100,output_dim=1024)) model.add(Activation('tanh')) model.add(Dense(128*7*7)) model.add(BatchNormalization()) model.add(Activation('tanh')) model.add(Reshape((7,7,128),input_shape=(128*7*7,))) model.add(UpSampling2D(size=(2,2))) model.add(Conv2D(64,(5,5),padding='same')) model.add(Activation('tanh')) model.add(UpSampling2D(size=(2,2))) model.add(Conv2D(1,(5,5),padding='same')) model.add(Activation('tanh')) return model # + id="iTt3AtXD2qDh" colab_type="code" colab={} def discriminator_model(): model = Sequential() model.add(Conv2D(64,(5,5),input_shape=(28,28,1))) model.add(Activation('tanh')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(128,(5,5))) model.add(Activation('tanh')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(1024)) model.add(Activation('tanh')) model.add(Dense(1)) model.add(Activation('sigmoid')) return model # + id="XnPKicby3YWA" colab_type="code" colab={} def generator_containing_discriminator(g,d): model = Sequential() model.add(g) d.trainable = False model.add(d) return model # + id="rza9-39V3tjB" colab_type="code" colab={} y_train = y_train.reshape(y_train.shape[0],1) y_test = y_test.reshape(y_test.shape[0],1) # + id="Ev9LW3g24Twf" colab_type="code" colab={} _train = (x_train,y_train) _test = (x_test,y_test) # + id="z6snylAK7h1Z" colab_type="code" colab={} def combine_images(generated_images): #生成图片拼接 num = generated_images.shape[0] width = int(math.sqrt(num)) height = int(math.ceil(float(num)/width)) shape = generated_images.shape[1:3] image = np.zeros((height*shape[0], width*shape[1]), dtype=generated_images.dtype) for index, img in enumerate(generated_images): i = int(index/width) j = index % width image[i*shape[0]:(i+1)*shape[0], j*shape[1]:(j+1)*shape[1]] = \ img[:, :, 0] return image # + id="pPl_BlDG4AKe" colab_type="code" colab={} def train(train, test, batch_size): (x_train,y_train) = train (x_test,y_test) = test x_train = (x_train.astype(np.float32) - 127.5) / 127.5 d = discriminator_model() g = generator_model() d_on_g = generator_containing_discriminator(g,d) d_optim = SGD(lr=0.001,momentum=0.9,nesterov=True) g_optim = SGD(lr=0.001,momentum=0.9,nesterov=True) g.compile(loss='binary_crossentropy',optimizer='SGD') d_on_g.compile(loss='binary_crossentropy',optimizer=g_optim) # 前一个架构训练了生成器,所以在训练判别器之前先要设定其为可训练。 d.trainable = True d.compile(loss='binary_crossentropy',optimizer=d_optim) for epoch in range(30): print("Epoch is",epoch) for index in range(int(x_train.shape[0] / batch_size)): noise = np.random.uniform(-1,1,size=(batch_size,100)) image_batch = x_train[index*batch_size:(index+1)*batch_size] image_batch = image_batch.reshape(image_batch.shape[0],28,28,1) generated_image = g.predict(noise,verbose=0) #print('g shape:',generated_image.shape) #print('i shape:',image_batch.shape) if index % 100 == 0: image = combine_images(generated_image) image = image*127.5 + 127.5 Image.fromarray(image.astype(np.uint8)).save('./GAN/'+str(epoch)+'_'+str(index)+'.png') x = np.concatenate((image_batch,generated_image)) y = [1]*batch_size + [0]*batch_size d_loss = d.train_on_batch(x,y) print('batch:',index,', d_loss:',d_loss) noise = np.random.uniform(-1,1,(batch_size,100)) d.trainable = False g_loss = d_on_g.train_on_batch(noise,[1]*batch_size) d.trainable = True print('batch:',index,', g_loss:',g_loss) # + id="4ngYWPwg8oO8" colab_type="code" colab={} train(_train,_test,batch_size=128) # + id="-fxTXU7D-ywd" colab_type="code" colab={} # --- # jupyter: # 
jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import print_function import numpy as np import pandas as pd from collections import OrderedDict #sorting participant df dict before pd.concat() import matplotlib.pylab as plt # %matplotlib inline pd.options.display.mpl_style = 'default' TASK_NAMES_USING = [ 'T1_SMS_5', 'T1_SMS_8', 'Ticks_ISO_T2_5', 'Ticks_ISO_T2_8', 'Ticks_Linear_5', 'Ticks_Linear_8', 'Ticks_Phase_5', 'Ticks_Phase_8', 'Jits_ISO_5', 'Jits_ISO_8', 'Jits_Phase_5', 'Jits_Phase_8', 'Jits_Linear_5', 'Jits_Linear_8', 'ISIP_5', 'ISIP_8', 'Improv_Metronome', 'Improv_Melody', ] pid_exclusions = {k: [] for k in TASK_NAMES_USING} REMOVE_TASKS = ['Ticks_Pract_8', 'Ticks_Pract_5', 'Practice_Tick_8', 'Practice_Tick_5', 'Jits_Pract_8', 'Jits_Pract_5', 'Practice_jittr_8', 'Practice_jittr_5', 'Pattern_Practice', ] PRACTICE_TASK_NUMBERS = [0, 1, 11, 24, 25] PITCH_R_PAD = 48 PITCH_L_PAD = 38 #PITCH_TICK = 42 CHANNEL_TAP = 1 CHANNEL_PIANO = 2 CHANNEL_LOOPBACK = 5 # + #EXPECTED_LOOPBACK_COUNTS = { # 'T1_SMS_5': 130, # 'T1_SMS_8': 120, # 'Ticks_ISO_T2_5': 130, # 'Ticks_ISO_T2_8': 120, # 'Ticks_Linear_5': 170, # 'Ticks_Linear_8': 170, # 'Ticks_Phase_5': 170, # 'Ticks_Phase_8': 170, # 'Jits_ISO_5': 360, # 'Jits_ISO_8': 360, # 'Jits_Phase_5': 510, # 'Jits_Phase_8': 510, # 'Jits_Linear_5': 510, # 'Jits_Linear_8': 510, # 'ISIP_5': 40, # 'ISIP_8': 30, # 'Improv_Metronome': 140, # 'Improv_Melody': 162, # } # - def participant_df(csvfilename): ''' clean up some of the raw CSVs' idiosyncracies and load into a one-participant DataFrame. ''' import os #getting csv file paths from StringIO import StringIO csv_lines = open(csvfilename).readlines() csv_edited = ('csv_line,stamp_type,' + 'run_count,task_id,task_name,i,channel,pitch,velocity,micros,blank' + '\n') for idx, line in enumerate(csv_lines): # Lines not to parse: per-task headings, end markers, non-timestamped targets # (silent, unused targets in ISIP timestamped micros=0 in early admin software version) if any(s in line for s in ['[', 'taskRunCount', 'end,end,end', 'IntervalOut,X,X,0']): continue line_number = str(idx) if ('IntervalOut' in line): stamp_type = 'target' line = line.replace('X,', ',') line = line.replace('IntervalOut,', ',') else: stamp_type = 'midi' #this doesn't distinguish between performance (ch1) and loopback (ch5) #...let alone left/right tap! csv_edited += (line_number + ',' + stamp_type + ',' + line) df = pd.read_csv(StringIO(csv_edited), index_col = ['task_name','csv_line'], #was taskname, stamp_type, csvline header = 0, ) df = df.drop('blank', axis=1) return df # + slideshow={"slide_type": "slide"} #list files in subdir that end in .csv import os path = './csv_raw_data' allfiles = os.listdir(path) csvfiles = [os.path.join(path, f) for f in list(allfiles) if os.path.splitext(f)[1]==".csv"] #something fails at this point when running on Wakari #(so, maybe any linux platform?) - the file names are correct, # but participant_df(f) returns blank dataframes. 
sub_dfs = {os.path.basename(f)[3:6]: participant_df(f) for f in csvfiles} sub_dfs_sorted = OrderedDict(sorted(sub_dfs.items(), key=lambda t: t[0])) dbase = pd.concat(sub_dfs_sorted.values(), #axis=0, #defaults #join='outer', #join_axes=None, #ignore_index=False, keys=sub_dfs_sorted.keys(), # participant id strings #levels=None, names=['pid'], #verify_integrity=False ) # + for t in REMOVE_TASKS: dbase = dbase.drop(REMOVE_TASKS, axis=0, level='task_name') pid_list = sorted(list(dbase.index.get_level_values('pid').unique())) print(pid_list) task_name_list = sorted(list(dbase.index.get_level_values('task_name').unique())) print(task_name_list) # + # promote task_name above pid to top of index hierarchy dbase = dbase.swaplevel(0,1, axis=0) dbase = dbase.sort() #This puts the tasks in alphabetical order, but we'll need to get the original #task presentation order eventually. Can't use run_count easily (because it #goes back to 1 if the machine was reset), but we could compare the #min(csv_line) for each task, since that doesn't reset. dbase.loc[(dbase.channel==CHANNEL_TAP) & (dbase.pitch==PITCH_R_PAD), 'stamp_type'] = 'tap_r' dbase.loc[(dbase.channel==CHANNEL_TAP) & (dbase.pitch==PITCH_L_PAD), 'stamp_type'] = 'tap_l' dbase.loc[(dbase.channel==CHANNEL_TAP) & (dbase.pitch != PITCH_L_PAD) & (dbase.pitch != PITCH_R_PAD), 'stamp_type'] = 'tap_trigger_error' dbase.ix[dbase.channel==CHANNEL_PIANO, 'stamp_type'] = 'piano' dbase.ix[dbase.channel==CHANNEL_LOOPBACK, 'stamp_type'] = 'loopback' assert set(dbase.stamp_type.values) == set(['loopback', 'piano', 'tap_r', 'tap_l', 'tap_trigger_error', 'target']) # + # removing duplicate tasks manually from CSV files-- running out of memory # while trying to do this in pandas (below) #grouped = dbase.groupby(level=['task_name', 'pid']) # #def select_rerun_tasks(df): # trc_min = df.run_count.min() # trc_max = df.run_count.max() # df['trc_min'] = trc_min # df['trc_max'] = trc_max # subset = df.loc[df.trc_min != df.trc_max] # return subset # #db_duplicate_issues = grouped.apply(select_rerun_tasks) # #dbdi = db_duplicate_issues.loc[(db_duplicate_issues.stamp_type=='loopback') # & (db_duplicate_issues.task_id != PRACTICE_TASK_NUMBERS[0]) # & (db_duplicate_issues.task_id != PRACTICE_TASK_NUMBERS[1]) # & (db_duplicate_issues.task_id != PRACTICE_TASK_NUMBERS[2]) # & (db_duplicate_issues.task_id != PRACTICE_TASK_NUMBERS[3]) # & (db_duplicate_issues.task_id != PRACTICE_TASK_NUMBERS[4]) # ] # #dbdi.to_csv('duplicate_issues_v3.csv') #currently: remaining duplicate issue is pid 013, improv_metronome. Couldn't # resolve this based on my notes-- listen to the audio recording. # - # Convert timestamps to milliseconds relative to start of task grouped = dbase.groupby(level=['task_name', 'pid']) def compute_within_task_millis(df): df_loopback = df.loc[df.stamp_type=='loopback'] start_micros = df_loopback.micros.min() df['task_ms'] = (df.micros - start_micros)/1000. 
return df dbase = grouped.apply(compute_within_task_millis) # ## Computing intervals within task+pid+stamptype (mainly useful for ISIP tasks) dbase = dbase.set_index('stamp_type', append=True) dbase = dbase.swaplevel(2,3) # new hierarchy: task, pid, stamp_type, csv_line dbase = dbase.sort() #make groups of stamp_type within task/pid dbase.head() # + grouped = dbase.groupby(level=['task_name', 'pid', 'stamp_type']) def compute_intervals_unfiltered(df): df['int_raw'] = df.task_ms - df.task_ms.shift(1) return df dbase = grouped.apply(compute_intervals_unfiltered) dbase.head() # - # ## Pickle dbase # + pickle_dbase = "c:/db_pickles/pickle - dbase - 2014-10-03b.pickle" dbase.to_pickle(pickle_dbase) dbase[:10] # - test = dbase.xs('T1_SMS_8', level='task_name').xs('loopback', level='stamp_type') test.groupby(level='pid').count() # # Below here is exploratory - nothing below is used in later steps. (We proceed from pickle file in Pt. 2.) # ### Preliminary scatterplot, isip500 / 800 # + data = dfo #.drop(['048', '049', '055']) plt.figure(figsize=(8,5)) plt.scatter(data['isip8_sq2dev_mean_sqrt'], data['isip5_sq2dev_mean_sqrt']) plt.show() # - plt.figure(figsize=(8,5)) plt.scatter(dfo['isip5_ints_pstd'], dfo['isip8_ints_pstd']) plt.show() # # Testing / visualizing - not needed in final 'product' # ### Plotting interval results after filtering steps def scatter_tooltips(df, x_col, y_col, size_col=None, color_col=None, show_all_cols=False, fig_size=(8, 5)): #import matplotlib.pyplot as plt #import numpy as np import pandas as pd import mpld3 from mpld3 import plugins #x = df[x_col] #y = df[y_col] df_info = [x_col, y_col] #for arg in args: # df_info.append(arg) # Define some CSS to control our custom labels css = """ table { border-collapse: collapse; } th { color: #ffffff; background-color: #000000; } td { background-color: #cccccc; } table, th, td { font-family:Arial, Helvetica, sans-serif; border: 1px solid black; text-align: right; } """ fig, ax = plt.subplots() fig.set_size_inches(fig_size) ax.grid(True, alpha=0.3) labels = [] for row in df.iterrows(): index, series = row pid = index label = pd.DataFrame(series) labels.append(str(label.to_html())) points = ax.plot(df[x_col], df[y_col], 'o', color='b', markeredgecolor='k', markersize=8, markeredgewidth=1, alpha=.6) ax.set_xlabel(x_col) ax.set_ylabel(y_col) ax.set_title(x_col + ' . 
' + y_col, size=16) tooltip = plugins.PointHTMLTooltip(points[0], labels, voffset=10, hoffset=10, css=css) plugins.connect(fig, tooltip) return mpld3.display() scatter_tooltips(data, 'isip8_sq2dev_mean_sqrt', 'isip5_sq2dev_mean_sqrt', fig_size=(12, 7.5)) def d3plot(x, y, size=(10,6)): import mpld3 fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE')) fig.set_size_inches((8,5) ) scatter = ax.scatter(x, y, #c=np.random.random(size=N), s=40, #size alpha=0.5, cmap=plt.cm.jet) ax.grid(color='white', linestyle='solid') ax.set_title("Scatter Plot (with tooltips!)", size=10) labels = ['{0}'.format(pid) for pid in x.index] tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels) mpld3.plugins.connect(fig, tooltip) return mpld3.display() d3plot(data['isip5_sq2devsum'], data['isip8_sq2devsum']) # #### intervals after filtering # + for pid in pid_list[40:80]: try: sset = db_isip8.xs([pid, 'tap_r'], level=['pid', 'stamp_type']) data = sset.ints if data.min() > 700 and data.max() < 900: continue print(pid) plt.figure(figsize=(13,6)) data.hist(bins=100) #annotating non-midpoint values caption_y_increment = 0.2 median = data.median() prev_ypos = 0 for idx, value in enumerate(data): if np.abs(value - median) > 150: caption = str(data.index[idx]) + ": " + str(value.round(1)) plt.annotate(caption, (value, prev_ypos + caption_y_increment)) prev_ypos += caption_y_increment plt.show() except: print("error....") #p056: high outlier at 9138: 924.0 ms *** #045, 046: error or no data #need lower bound of >632 for participant 048 (extra long intervals from explicit midpoint counting) #p. 064: low outlier at about 630-640ms # - sset = db_isip8.xs(['056', 'tap_r'], level=['pid', 'stamp_type']) print(sset.ints.mean()) print(sset.ints.std()) # + for pid in pid_list: try: sset = db_isip5.xs([pid, 'tap_r'], level=['pid', 'stamp_type']) data = sset.ints if data.min() > 400 or data.max() < 600: continue print(pid) plt.figure(figsize=(13,6)) data.hist(bins=100) #annotating non-midpoint values caption_y_increment = 0.2 median = data.median() prev_ypos = 0 for idx, value in enumerate(data): if np.abs(value - median) > 150: caption = str(data.index[idx]) + ": " + str(value.round(1)) plt.annotate(caption, (value, prev_ypos + caption_y_increment)) prev_ypos += caption_y_increment plt.show() except: print("error....") #High outliers: #p049 - will need to remove manually # + taps = db_isip5.xs('tap_r', level='stamp_type').ints pmeans = taps.groupby(level='pid').mean() data = pmeans ############################## plt.figure(figsize=(13,6)) data.hist(bins=25) #annotating non-midpoint values caption_y_increment = 1 median = data.median() prev_ypos = 0 for idx, value in enumerate(data): if np.abs(value - median) > 25: caption = str(data.index[idx]) + ": " + str(value.round(1)) plt.annotate(caption, (value, prev_ypos + caption_y_increment)) prev_ypos += caption_y_increment plt.show() #why is pid 049's mean so low? 
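# The histogram-plus-outlier-annotation pattern above is repeated almost verbatim for
# each task. A small helper along the lines of this sketch (the function name and
# signature are my own, not part of the original analysis) could factor it out; it only
# assumes `plt`, `np`, and a pandas Series indexed by pid, as used in the cells above.
# +
def hist_with_outlier_captions(data, threshold, bins=25, caption_step=1.0, figsize=(13, 6)):
    """Histogram of `data`, annotating values more than `threshold` from the median."""
    plt.figure(figsize=figsize)
    data.hist(bins=bins)
    median = data.median()
    prev_ypos = 0
    for idx, value in enumerate(data):
        if np.abs(value - median) > threshold:
            # Stack captions vertically so neighbouring annotations do not overlap.
            caption = str(data.index[idx]) + ": " + str(round(value, 1))
            plt.annotate(caption, (value, prev_ypos + caption_step))
            prev_ypos += caption_step
    plt.show()

# e.g. hist_with_outlier_captions(pmeans, threshold=25) reproduces the plot above.
# -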
# + taps = db_isip8.xs('tap_r', level='stamp_type').ints pmeans = taps.groupby(level='pid').mean() data = pmeans ############################## plt.figure(figsize=(13,6)) data.hist(bins=25) #annotating non-midpoint values caption_y_increment = 0.8 median = data.median() prev_ypos = 0 for idx, value in enumerate(data): if np.abs(value - median) > 60: caption = str(data.index[idx]) + ": " + str(value.round(1)) plt.annotate(caption, (value, prev_ypos + caption_y_increment)) prev_ypos += caption_y_increment plt.show() #Good distribution-- 048 is a bit high # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### This is the practice a sort of "Do it myself from the scratch" # # # + import os import tarfile from six.moves import urllib DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/" HOUSING_PATH = os.path.join("datasets", "housing") HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz" def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH): if not os.path.isdir(housing_path): os.makedirs(housing_path) tgz_path = os.path.join(housing_path, "housing.tgz") urllib.request.urlretrieve(housing_url, tgz_path) housing_tgz = tarfile.open(tgz_path) housing_tgz.extractall(path=housing_path) housing_tgz.close() # - fetch_housing_data() # + import pandas as pd def load_housing_data(housing_path=HOUSING_PATH): csv_path = os.path.join(housing_path, "housing.csv") return pd.read_csv(csv_path) # - # ### Python으로 하는 탐색적 자료 분석(Exploratory Data Analysis) # [Exploratory Data Analysis](https://3months.tistory.com/325?category=753896) # # 탐색 3종 세트 # ``` python # import numpy as np # import pandas as pd # import matplotlib.pyplot as plt # import seaborn as sns # # titanic = pd.read_csv("titanic.csv") # # titanic.head() # titanic.info() # titanic.isnull().sum() # ``` # housing = load_housing_data() housing.head() ##describe() 숫자형 특성의 요약정보 housing.describe() # #### 표준편차의 개념 다시 정리 필요 housing.info() housing.isnull().sum() # %matplotlib inline import matplotlib.pyplot as plt housing.hist(bins=50, figsize=(20,15)) plt.show() import numpy as np def split_train_test(data, test_ratio): shuffled_indices = np.random.permutation(len(data)) test_set_size = int(len(data) * test_ratio) test_indices = shuffled_indices[:test_set_size] train_indices = shuffled_indices[0 :] return data.iloc[train_indices], data.iloc[test_indices] # + train_set, test_set = split_train_test(housing, 0.2) print(len(train_set), "train +", len(test_set), "test") # + housing["income_cat"] = np.ceil(housing["median_income"]/1.5) housing["income_cat"].where(housing["income_cat"] <5, 5.0, inplace=True) plt.hist(housing["income_cat"]) plt.show() # + from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing["income_cat"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] housing["income_cat"].value_counts()/len(housing) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} #
#     *12 Feb 2016; Revised 17 Feb 2018*
    # # # A Concrete Introduction to Probability (using Python) # # In 1814, [wrote](https://en.wikipedia.org/wiki/Classical_definition_of_probability): # # >*Probability theory is nothing but common sense reduced to calculation. ... [Probability] is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible ... when nothing leads us to expect that any one of these cases should occur more than any other.* # # ![Laplace](https://upload.wikimedia.org/wikipedia/commons/thumb/3/30/AduC_197_Laplace_%28P.S.%2C_marquis_de%2C_1749-1827%29.JPG/180px-AduC_197_Laplace_%28P.S.%2C_marquis_de%2C_1749-1827%29.JPG) #
#     *Pierre-Simon Laplace, 1814*
    # # # Laplace nailed it. To untangle a probability problem, all you have to do is define exactly what the cases are, and careful count the favorable and total cases. Let's be clear on our vocabulary words: # # # - **[Trial](https://en.wikipedia.org/wiki/Experiment_(probability_theory%29):** # A single occurrence with an outcome that is uncertain until we observe it. #
#     *For example, rolling a single die.*
# - **[Outcome](https://en.wikipedia.org/wiki/Outcome_(probability%29):**
#   A possible result of a trial; one particular state of the world. What Laplace calls a **case.**
#     *For example:* `4`.
# - **[Sample Space](https://en.wikipedia.org/wiki/Sample_space):**
#   The set of all possible outcomes for the trial.
#     *For example,* `{1, 2, 3, 4, 5, 6}`.
# - **[Event](https://en.wikipedia.org/wiki/Event_(probability_theory%29):**
#   A subset of outcomes that together have some property we are interested in.
#     *For example, the event "even die roll" is the set of outcomes* `{2, 4, 6}`.
# - **[Probability](https://en.wikipedia.org/wiki/Probability_theory):**
#   As Laplace said, the probability of an event with respect to a sample space is the "number of favorable cases" (outcomes from the sample space that are in the event) divided by the "number of all the cases" in the sample space (assuming "nothing leads us to expect that any one of these cases should occur more than any other"). Since this is a proper fraction, probability will always be a number between 0 (representing an impossible event) and 1 (representing a certain event).
    *For example, the probability of an even die roll is 3/6 = 1/2.* # # This notebook will explore these concepts in a concrete way using Python code. The code is meant to be succint and explicit, and fast enough to handle sample spaces with millions of outcomes. If you need to handle trillions, you'll want a more efficient implementation. I also have [another notebook](http://nbviewer.jupyter.org/url/norvig.com/ipython/ProbabilityParadox.ipynb) that covers paradoxes in Probability Theory. # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # # `P` is for Probability # # The code below implements Laplace's quote directly: *Probability is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.* # + button=false deletable=true new_sheet=false run_control={"read_only": false} from fractions import Fraction def P(event, space): "The probability of an event, given a sample space." return Fraction(cases(favorable(event, space)), cases(space)) favorable = set.intersection # Outcomes that are in the event and in the sample space cases = len # The number of cases is the length, or size, of a set # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # # # Warm-up Problem: Die Roll # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # What's the probability of rolling an even number with a single six-sided fair die? Mathematicians traditionally use a single capital letter to denote a sample space; I'll use `D` for the die: # + button=false deletable=true new_sheet=false run_control={"read_only": false} D = {1, 2, 3, 4, 5, 6} # a sample space even = { 2, 4, 6} # an event P(even, D) # - # Good to confirm what we already knew. We can explore some other events: prime = {2, 3, 5, 7, 11, 13} odd = {1, 3, 5, 7, 9, 11, 13} P(odd, D) P((even | prime), D) # The probability of an even or prime die roll P((odd & prime), D) # The probability of an odd prime die roll # + [markdown] button=false new_sheet=false run_control={"read_only": false} # # Card Problems # # Consider dealing a hand of five playing cards. An individual card has a rank and suit, like `'J♥'` for the Jack of Hearts, and a `deck` has 52 cards: # + button=false new_sheet=false run_control={"read_only": false} suits = u'♥♠♦♣' ranks = u'AKQJT98765432' deck = [r + s for r in ranks for s in suits] len(deck) # - # Now I want to define `Hands` as the sample space of all 5-card combinations from `deck`. The function `itertools.combinations` does most of the work; we than concatenate each combination into a space-separated string: # # + button=false new_sheet=false run_control={"read_only": false} import itertools def combos(items, n): "All combinations of n items; each combo as a space-separated str." 
return set(map(' '.join, itertools.combinations(items, n))) Hands = combos(deck, 5) len(Hands) # - # There are too many hands to look at them all, but we can sample: import random random.sample(Hands, 7) random.sample(deck, 7) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # Now we can answer questions like the probability of being dealt a flush (5 cards of the same suit): # + button=false new_sheet=false run_control={"read_only": false} flush = {hand for hand in Hands if any(hand.count(suit) == 5 for suit in suits)} P(flush, Hands) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # Or the probability of four of a kind: # + button=false new_sheet=false run_control={"read_only": false} four_kind = {hand for hand in Hands if any(hand.count(rank) == 4 for rank in ranks)} P(four_kind, Hands) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # # # # Urn Problems # # Around 1700, wrote about removing colored balls from an urn in his landmark treatise *[Ars Conjectandi](https://en.wikipedia.org/wiki/Ars_Conjectandi)*, and ever since then, explanations of probability have relied on [urn problems](https://www.google.com/search?q=probability+ball+urn). (You'd think the urns would be empty by now.) # # ![](http://www2.stetson.edu/~efriedma/periodictable/jpg/Bernoulli-Jacob.jpg) #

    1700
    # # For example, here is a three-part problem [adapted](http://mathforum.org/library/drmath/view/69151.html) from mathforum.org: # # > *An urn contains 6 blue, 9 red, and 8 white balls. We select six balls at random. What is the probability of each of these outcomes:* # # > - *All balls are red*. # - *3 are blue, and 1 is red, and 2 are white, *. # - *Exactly 4 balls are white*. # # We'll start by defining the contents of the urn. A `set` can't contain multiple objects that are equal to each other, so I'll call the blue balls `'B1'` through `'B6'`, rather than trying to have 6 balls all called `'B'`: # + button=false deletable=true new_sheet=false run_control={"read_only": false} def balls(color, n): "A set of n numbered balls of the given color." return {color + str(i) for i in range(1, n + 1)} urn = balls('B', 6) | balls('R', 9) | balls('W', 8) urn # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # Now we can define the sample space, `U6`, as the set of all 6-ball combinations: # + button=false deletable=true new_sheet=false run_control={"read_only": false} U6 = combos(urn, 6) random.sample(U6, 5) # - # Define `select` such that `select('R', 6)` is the event of picking 6 red balls from the urn: def select(color, n, space=U6): "The subset of the sample space with exactly `n` balls of given `color`." return {s for s in space if s.count(color) == n} # Now I can answer the three questions: P(select('R', 6), U6) P(select('B', 3) & select('R', 1) & select('W', 2), U6) P(select('W', 4), U6) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # ## Urn problems via arithmetic # # Let's verify these calculations using basic arithmetic, rather than exhaustive counting. First, how many ways can I choose 6 out of 9 red balls? It could be any of the 9 for the first ball, any of 8 remaining for the second, and so on down to any of the remaining 4 for the sixth and final ball. But we don't care about the *order* of the six balls, so divide that product by the number of permutations of 6 things, which is 6!, giving us # 9 × 8 × 7 × 6 × 5 × 4 / 6! = 84. In general, the number of ways of choosing *c* out of *n* items is (*n* choose *c*) = *n*! / ((*n* - *c*)! × c!). # We can translate that to code: # + button=false deletable=true new_sheet=false run_control={"read_only": false} from math import factorial def choose(n, c): "Number of ways to choose c items from a list of n items." return factorial(n) // (factorial(n - c) * factorial(c)) # - choose(9, 6) # Now we can verify the answers to the three problems. (Since `P` computes a ratio and `choose` computes a count, # I multiply the left-hand-side by `N`, the length of the sample space, to make both sides be counts.) # + N = len(U6) N * P(select('R', 6), U6) == choose(9, 6) # - N * P(select('B', 3) & select('W', 2) & select('R', 1), U6) == choose(6, 3) * choose(8, 2) * choose(9, 1) N * P(select('W', 4), U6) == choose(8, 4) * choose(6 + 9, 2) # (6 + 9 non-white balls) # We can solve all these problems just by counting; all you ever needed to know about probability problems you learned from Sesame Street: # # ![The Count](http://img2.oncoloring.com/count-dracula-number-thir_518b77b54ba6c-p.gif) #
#     *The Count, 1972—*
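# Before moving on, one more cross-check of the arithmetic above (a short sketch that
# only uses the `urn`, `U6`, `choose`, `select`, `P`, and `Fraction` definitions from
# this section): the size of the sample space itself should be (23 choose 6), because
# the urn holds 6 + 9 + 8 = 23 balls, and the all-red answer is then just a ratio of
# two such counts.
assert len(U6) == choose(6 + 9 + 8, 6) == 100947
assert P(select('R', 6), U6) == Fraction(choose(9, 6), choose(23, 6))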
    # + [markdown] button=false new_sheet=false run_control={"read_only": false} # # Non-Equiprobable Outcomes # # So far, we have accepted Laplace's assumption that *nothing leads us to expect that any one of these cases should occur more than any other*. # In real life, we often get outcomes that are not equiprobable--for example, a loaded die favors one side over the others. We will introduce three more vocabulary items: # # * [Frequency](https://en.wikipedia.org/wiki/Frequency_%28statistics%29): a non-negative number describing how often an outcome occurs. Can be a count like 5, or a ratio like 1/6. # # * [Distribution](http://mathworld.wolfram.com/StatisticalDistribution.html): A mapping from outcome to frequency of that outcome. We will allow sample spaces to be distributions. # # * [Probability Distribution](https://en.wikipedia.org/wiki/Probability_distribution): A probability distribution # is a distribution whose frequencies sum to 1. # # # I could implement distributions with `Dist = dict`, but instead I'll make `Dist` a subclass `collections.Counter`: # + from collections import Counter class Dist(Counter): "A Distribution of {outcome: frequency} pairs." # - # Because a `Dist` is a `Counter`, we can initialize it in any of the following ways: # A set of equiprobable outcomes: Dist({1, 2, 3, 4, 5, 6}) # A collection of outcomes, with repetition indicating frequency: Dist('THHHTTHHT') # A mapping of {outcome: frequency} pairs: Dist({'H': 5, 'T': 4}) # Keyword arguments: Dist(H=5, T=4) == Dist({'H': 5}, T=4) == Dist('TTTT', H=5) # Now I will modify the code to handle distributions. # Here's my plan: # # - Sample spaces and events can both be specified as either a `set` or a `Dist`. # - The sample space can be a non-probability distribution like `Dist(H=50, T=50)`; the results # will be the same as if the sample space had been a true probability distribution like `Dist(H=1/2, T=1/2)`. # - The function `cases` now sums the frequencies in a distribution (it previously counted the length). # - The function `favorable` now returns a `Dist` of favorable outcomes and their frequencies (not a `set`). # - I will redefine `Fraction` to use `"/"`, not `fractions.Fraction`, because frequencies might be floats. # - `P` is unchanged. # # + def cases(outcomes): "The total frequency of all the outcomes." return sum(Dist(outcomes).values()) def favorable(event, space): "A distribution of outcomes from the sample space that are in the event." space = Dist(space) return Dist({x: space[x] for x in space if x in event}) def Fraction(n, d): return n / d # - # For example, here's the probability of rolling an even number with a crooked die that is loaded to prefer 6: # + Crooked = Dist({1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}) P(even, Crooked) # - # As another example, an [article](http://people.kzoo.edu/barth/math105/moreboys.pdf) gives the following counts for two-child families in Denmark, where `GB` means a family where the first child is a girl and the second a boy (I'm aware that not all births can be classified as the binary "boy" or "girl," but the data was reported that way): # # GG: 121801 GB: 126840 # BG: 127123 BB: 135138 # + button=false new_sheet=false run_control={"read_only": false} DK = Dist(GG=121801, GB=126840, BG=127123, BB=135138) # - first_girl = {'GG', 'GB'} P(first_girl, DK) second_girl = {'GG', 'BG'} P(second_girl, DK) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # This says that the probability of a girl is somewhere between 48% and 49%. 
The probability of a girl is very slightly higher for the second child. # # Given the first child, are you more likely to have a second child of the same sex? # - same = {'GG', 'BB'} P(same, DK) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # Yes, but only by about 0.3%. # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # # Predicates as events # # To calculate the probability of an even die roll, I originally said # # even = {2, 4, 6} # # But that's inelegant—I had to explicitly enumerate all the even numbers from one to six. If I ever wanted to deal with a twelve or twenty-sided die, I would have to go back and redefine `even`. I would prefer to define `even` once and for all like this: # + button=false deletable=true new_sheet=false run_control={"read_only": false} def even(n): return n % 2 == 0 # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # Now in order to make `P(even, D)` work, I'll allow an `Event` to be either a collection of outcomes or a `callable` predicate (that is, a function that returns true for outcomes that are part of the event). I don't need to modify `P`, but `favorable` will have to convert a callable `event` to a `set`: # + button=false deletable=true new_sheet=false run_control={"read_only": false} def favorable(event, space): "A distribution of outcomes from the sample space that are in the event." if callable(event): event = {x for x in space if event(x)} space = Dist(space) return Dist({x: space[x] for x in space if x in event}) # - favorable(even, D) P(even, D) # I'll define `die` to make a sample space for an *n*-sided die: def die(n): return set(range(1, n + 1)) # + button=false deletable=true new_sheet=false run_control={"read_only": false} favorable(even, die(12)) # - P(even, die(12)) P(even, die(2000)) P(even, die(2001)) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # We can define more interesting events using predicates; for example we can determine the probability that the sum of rolling *d* 6-sided dice is prime: # + button=false deletable=true new_sheet=false run_control={"read_only": false} def sum_dice(d): return Dist(sum(dice) for dice in itertools.product(D, repeat=d)) def is_prime(n): return (n > 1 and not any(n % i == 0 for i in range(2, n))) for d in range(1, 9): p = P(is_prime, sum_dice(d)) print("P(is_prime, sum_dice({})) = {}".format(d, round(p, 3))) # - # # : The Unfinished Game # # #

    1654 #
    Blaise Pascal]
    1654 #
    # # Consider a gambling game consisting of tossing a coin repeatedly. Player H wins the game as soon as a total of 10 heads come up, and T wins if a total of 10 tails come up before H wins. If the game is interrupted when H has 8 heads and T has 7 tails, how should the pot of money (which happens to be 100 Francs) be split? Here are some proposals, and arguments against them: # - It is uncertain, so just split the pot 50-50. #
#     *No, because surely H is more likely to win.*
# - In proportion to each player's current score, so H gets an 8/(8+7) share.
#     *No, because if the score was 0 heads to 1 tail, H should get more than 0/1.*
# - In proportion to how many tosses the opponent needs to win, so H gets 3/(3+2).
    *This seems better, but no, if H is 9 away and T is only 1 away from winning, then it seems that giving H a 1/10 share is too much.* # # In 1654, and corresponded on this problem, with Fermat [writing](http://mathforum.org/isaac/problems/prob1.html): # # >Dearest Blaise, # # >As to the problem of how to divide the 100 Francs, I think I have found a solution that you will find to be fair. Seeing as I needed only two points to win the game, and you needed 3, I think we can establish that after four more tosses of the coin, the game would have been over. For, in those four tosses, if you did not get the necessary 3 points for your victory, this would imply that I had in fact gained the necessary 2 points for my victory. In a similar manner, if I had not achieved the necessary 2 points for my victory, this would imply that you had in fact achieved at least 3 points and had therefore won the game. Thus, I believe the following list of possible endings to the game is exhaustive. I have denoted 'heads' by an 'h', and tails by a 't.' I have starred the outcomes that indicate a win for myself. # # > h h h h * h h h t * h h t h * h h t t * # > h t h h * h t h t * h t t h * h t t t # > t h h h * t h h t * t h t h * t h t t # > t t h h * t t h t t t t h t t t t # # >I think you will agree that all of these outcomes are equally likely. Thus I believe that we should divide the stakes by the ration 11:5 in my favor, that is, I should receive (11/16)×100 = 68.75 Francs, while you should receive 31.25 Francs. # # # >I hope all is well in Paris, # # >Your friend and colleague, # # >Pierre # # Pascal agreed with this solution, and [replied](http://mathforum.org/isaac/problems/prob2.html) with a generalization that made use of his previous invention, Pascal's Triangle. There's even [a book](https://smile.amazon.com/Unfinished-Game-Pascal-Fermat-Seventeenth-Century/dp/0465018963?sa-no-redirect=1) about it. # # We can solve the problem with the tools we have: # + def win_unfinished_game(h, t): "The probability that H will win the unfinished game, given the number of points needed by H and T to win." return P(at_least(h, 'h'), finishes(h, t)) def at_least(n, item): "The event of getting at least n instances of item in an outcome." return lambda outcome: outcome.count(item) >= n def finishes(h, t): "All finishes of a game where player H needs h points to win and T needs t." tosses = ['ht'] * (h + t - 1) return set(itertools.product(*tosses)) # - # We can generate the 16 equiprobable finished that Pierre wrote about: finishes(2, 3) # And we can find the 11 of them that are favorable to player `H`: favorable(at_least(2, 'h'), finishes(2, 3)) # Finally, we can answer the question: 100 * win_unfinished_game(2, 3) # We agree with Pascal and Fermat; we're in good company! # # Newton's Answer to a Problem by Pepys # # #

    1693
    #

    1693
    #
    # # Let's jump ahead from 1654 all the way to 1693, [when](http://fermatslibrary.com/s/isaac-newton-as-a-probabilist) wrote to posing the problem: # # > Which of the following three propositions has the greatest chance of success? # 1. Six fair dice are tossed independently and at least one “6” appears. # 2. Twelve fair dice are tossed independently and at least two “6”s appear. # 3. Eighteen fair dice are tossed independently and at least three “6”s appear. # # Newton was able to answer the question correctly (although his reasoning was not quite right); let's see how we can do. Since we're only interested in whether a die comes up as "6" or not, we can define a single die like this: die6 = Dist({6: 1/6, '-': 5/6}) # Next we can define the joint distribution formed by combining two independent distribution like this: # + def joint(A, B, combine='{}{}'.format): """The joint distribution of two independent distributions. Result is all entries of the form {'ab': frequency(a) * frequency(b)}""" return Dist({combine(a, b): A[a] * B[b] for a in A for b in B}) joint(die6, die6) # - # And the joint distribution from rolling *n* dice: # + def dice(n, die): "Joint probability distribution from rolling `n` dice." if n == 1: return die else: return joint(die, dice(n - 1, die)) dice(4, die6) # - # Now we are ready to determine which proposition is more likely to have the required number of sixes: P(at_least(1, '6'), dice(6, die6)) P(at_least(2, '6'), dice(12, die6)) P(at_least(3, '6'), dice(18, die6)) # We reach the same conclusion Newton did, that the best chance is rolling six dice. # + [markdown] button=false new_sheet=false run_control={"read_only": false} # # More Urn Problems: M&Ms and Bayes # # Here's another urn problem (actually a "bag" problem) [from](http://allendowney.blogspot.com/2011/10/my-favorite-bayess-theorem-problems.html) prolific Python/Probability pundit [ ](http://allendowney.blogspot.com/): # # > The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). Afterward it was (24% Blue , 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown). # A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag? # # To solve this problem, we'll first create distributions for each bag: `bag94` and `bag96`: # + button=false new_sheet=false run_control={"read_only": false} bag94 = Dist(brown=30, yellow=20, red=20, green=10, orange=10, tan=10) bag96 = Dist(blue=24, green=20, orange=16, yellow=14, red=13, brown=13) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # Next, define `MM` as the joint distribution—the sample space for picking one M&M from each bag. The outcome `'94:yellow 96:green'` means that a yellow M&M was selected from the 1994 bag and a green one from the 1996 bag. In this problem we don't get to see the actual outcome; we just see some evidence about the outcome, that it contains a yellow and a green. 
# + button=false new_sheet=false run_control={"read_only": false} MM = joint(bag94, bag96, '94:{} 96:{}'.format) MM # + [markdown] button=false new_sheet=false run_control={"read_only": false} # We observe that "One is yellow and one is green": # + button=false new_sheet=false run_control={"read_only": false} def yellow_and_green(outcome): return 'yellow' in outcome and 'green' in outcome favorable(yellow_and_green, MM) # - # Given this observation, we want to know "What is the probability that the yellow M&M came from the 1994 bag?" # + button=false new_sheet=false run_control={"read_only": false} def yellow94(outcome): return '94:yellow' in outcome P(yellow94, favorable(yellow_and_green, MM)) # + [markdown] button=false new_sheet=false run_control={"read_only": false} # So there is a 74% chance that the yellow comes from the 1994 bag. # # Answering this question was straightforward: just like all the other probability problems, we simply create a sample space, and use `P` to pick out the probability of the event in question, given what we know about the outcome. # But in a sense it is curious that we were able to solve this problem with the same methodology as the others: this problem comes from a section titled **My favorite Bayes's Theorem Problems**, so one would expect that we'd need to invoke Bayes Theorem to solve it. The computation above shows that that is not necessary. # # ![Bayes](http://img1.ph.126.net/xKZAzeOv_mI8a4Lwq7PHmw==/2547911489202312541.jpg) #
    Rev. Thomas Bayes
    1701-1761 #
    # # Of course, we *could* solve it using Bayes Theorem. Why is Bayes Theorem recommended? Because we are asked about the probability of an outcome given the evidence—the probability the yellow came from the 94 bag, given that there is a yellow and a green. But the problem statement doesn't directly tell us the probability of that outcome given the evidence; it just tells us the probability of the evidence given the outcome. # # Before we see the colors of the M&Ms, there are two hypotheses, `A` and `B`, both with equal probability: # # A: first M&M from 94 bag, second from 96 bag # B: first M&M from 96 bag, second from 94 bag # P(A) = P(B) = 0.5 # # Then we get some evidence: # # E: first M&M yellow, second green # # We want to know the probability of hypothesis `A`, given the evidence: # # P(A | E) # # That's not easy to calculate (except by enumerating the sample space, which our `P` function does). But Bayes Theorem says: # # P(A | E) = P(E | A) * P(A) / P(E) # # The quantities on the right-hand-side are easier to calculate: # # P(E | A) = 0.20 * 0.20 = 0.04 # P(E | B) = 0.10 * 0.14 = 0.014 # P(A) = 0.5 # P(B) = 0.5 # P(E) = P(E | A) * P(A) + P(E | B) * P(B) # = 0.04 * 0.5 + 0.014 * 0.5 = 0.027 # # And we can get a final answer: # # P(A | E) = P(E | A) * P(A) / P(E) # = 0.04 * 0.5 / 0.027 # = 0.7407407407 # # You have a choice: Bayes Theorem allows you to do less calculation at the cost of more algebra; that is a great trade-off if you are working with pencil and paper. Enumerating the sample space allows you to do less algebra at the cost of more calculation; usually a good trade-off if you have a computer. But regardless of the approach you use, it is important to understand Bayes theorem and how it works. # # There is one important question that does not address: *would you eat twenty-year-old M&Ms*? # 😨 # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} #
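# The same arithmetic can be spelled out in a short cell; the variable names below are just for illustration:
# +
p_E_given_A = 0.20 * 0.20          # yellow drawn from the 94 bag and green from the 96 bag
p_E_given_B = 0.10 * 0.14          # green drawn from the 94 bag and yellow from the 96 bag
p_A = p_B = 0.5                    # the two hypotheses start out equally likely
p_E = p_E_given_A * p_A + p_E_given_B * p_B
p_E_given_A * p_A / p_E            # ≈ 0.7407, the same 74% we found by enumeration
# -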
    # # # Simulation # # Sometimes it is inconvenient, difficult, or even impossible to explicitly enumerate a sample space. Perhaps the sample space is infinite, or perhaps it is just very large and complicated (perhaps with a bunch of low-probability outcomes that don't seem very important). In that case, we might feel more confident in writing a program to *simulate* a random outcome. *Random sampling* from such a simulation # can give an accurate estimate of probability. # # # Simulating Monopoly # # ![Mr. Monopoly](http://buckwolf.org/a.abcnews.com/images/Entertainment/ho_hop_go_050111_t.jpg)
    [Mr. Monopoly](https://en.wikipedia.org/wiki/Rich_Uncle_Pennybags)
    1940— # # Consider [problem 84](https://projecteuler.net/problem=84) from the excellent [Project Euler](https://projecteuler.net), which asks for the probability that a player in the game Monopoly ends a roll on each of the squares on the board. To answer this we need to take into account die rolls, chance and community chest cards, and going to jail (from the "go to jail" space, from a card, or from rolling doubles three times in a row). We do not need to take into account anything about acquiring properties or exchanging money or winning or losing the game, because these events don't change a player's location. # # A game of Monopoly can go on forever, so the sample space is infinite. Even if we limit the sample space to say, 1000 rolls, there are $21^{1000}$ such sequences of rolls, and even more possibilities when we consider drawing cards. So it is infeasible to explicitly represent the sample space. There are techniques for representing the problem as # a Markov decision problem (MDP) and solving it, but the math is complex (a [paper](https://faculty.math.illinois.edu/~bishop/monopoly.pdf) on the subject runs 15 pages). # # The simplest approach is to implement a simulation and run it for, say, a million rolls. Here is the code for a simulation: # + from collections import deque as Deck # a Deck of community chest or chance cards # The Monopoly board, as specified by https://projecteuler.net/problem=84 (GO, A1, CC1, A2, T1, R1, B1, CH1, B2, B3, JAIL, C1, U1, C2, C3, R2, D1, CC2, D2, D3, FP, E1, CH2, E2, E3, R3, F1, F2, U2, F3, G2J, G1, G2, CC3, G3, R4, CH3, H1, T2, H2) = board = range(40) # A card is either a square, a set of squares meaning advance to the nearest, # a -3 to go back 3 spaces, or None meaning no change to location. CC_deck = Deck([GO, JAIL] + 14 * [None]) CH_deck = Deck([GO, JAIL, C1, E3, H2, R1, -3, {U1, U2}] + 2 * [{R1, R2, R3, R4}] + 6 * [None]) def monopoly(rolls): """Simulate given number of dice rolls of a Monopoly game, and return the counts of how often each square is visited.""" counts = [0] * len(board) doubles = 0 # Number of consecutive doubles rolled random.shuffle(CC_deck) random.shuffle(CH_deck) goto(GO) for _ in range(rolls): d1, d2 = random.randint(1, 6), random.randint(1, 6) doubles = (doubles + 1 if d1 == d2 else 0) goto(here + d1 + d2) if here == G2J or doubles == 3: goto(JAIL) doubles = 0 elif here in (CC1, CC2, CC3): do_card(CC_deck) elif here in (CH1, CH2, CH3): do_card(CH_deck) counts[here] += 1 return counts def goto(square): "Update 'here' to be this square (and handle passing GO)." global here here = square % len(board) def do_card(deck): "Take the top card from deck and do what it says." 
card = deck.popleft() # The top card deck.append(card) # Move top card to bottom of deck if card == None: # Don't move pass elif card == -3: # Go back 3 spaces goto(here - 3) elif isinstance(card, set): # Advance to next railroad or utility next1 = min({place for place in card if place > here} or card) goto(next1) else: # Go to destination named on card goto(card) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # Let's run the simulation for a million dice rolls: # + button=false deletable=true new_sheet=false run_control={"read_only": false} counts = monopoly(10**6) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # And print a table of square names and their percentages: # + button=false deletable=true new_sheet=false run_control={"read_only": false} property_names = """ GO, A1, CC1, A2, T1, R1, B1, CH1, B2, B3, JAIL, C1, U1, C2, C3, R2, D1, CC2, D2, D3, FP, E1, CH2, E2, E3, R3, F1, F2, U2, F3, G2J, G1, G2, CC3, G3, R4, CH3, H1, T2, H2""".replace(',', ' ').split() for (c, n) in sorted(zip(counts, property_names), reverse=True): print('{:4} {:.2%}'.format(n, c / sum(counts))) # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # There is one square far above average: `JAIL`, at a little over 6%. There are four squares far below average: the three chance squares, `CH1`, `CH2`, and `CH3`, at around 1% (because 10 of the 16 chance cards send the player away from the square), and the "Go to Jail" square, which has a frequency of 0 because you can't end a turn there. The other squares are around 2% to 3% each, which you would expect, because 100% / 40 = 2.5%. # - # # The Central Limit Theorem # # We have covered the concept of *distributions* of outcomes. You may have heard of the *normal distribution*, the *bell-shaped curve.* In Python it is called `random.normalvariate` (also `random.gauss`). We can plot it with the help of the `repeated_hist` function defined below, which samples a distribution `n` times and displays a histogram of the results. (*Note:* in this section I am using "distribution" to mean a function that, each time it is called, returns a random sample from a distribution. I am not using it to mean a mapping of type `Dist`.) # + # %matplotlib inline import matplotlib.pyplot as plt from statistics import mean from random import normalvariate, triangular, choice, vonmisesvariate, uniform def normal(mu=0, sigma=1): return random.normalvariate(mu, sigma) def repeated_hist(dist, n=10**6, bins=100): "Sample the distribution n times and make a histogram of the results." samples = [dist() for _ in range(n)] plt.hist(samples, bins=bins, density=True) plt.title('{} (μ = {:.1f})'.format(dist.__name__, mean(samples))) plt.grid(axis='x') plt.yticks([], '') plt.show() # - # Normal distribution repeated_hist(normal) # Why is this distribution called *normal*? The **Central Limit Theorem** says that it is the ultimate limit of other distributions, as follows (informally): # - Gather *k* independent distributions. They need not be normal-shaped. # - Define a new distribution to be the result of sampling one number from each of the *k* independent distributions and adding them up. # - As long as *k* is not too small, and the component distributions are not super-pathological, then the new distribution will tend towards a normal distribution. 
# # Here's a simple example: summing ten independent die rolls: # + def sum10dice(): return sum(random.randint(1, 6) for _ in range(10)) repeated_hist(sum10dice, bins=range(10, 61)) # - # As another example, let's take just *k* = 5 component distributions representing the per-game scores of 5 basketball players, and then sum them together to form the new distribution, the team score. I'll be creative in defining the distributions for each player, but [historically accurate](https://www.basketball-reference.com/teams/GSW/2016.html) in the mean for each distribution. # + def SC(): return max(0, normal(12.1, 3) + 3 * triangular(1, 13, 4)) # 30.1 def KT(): return max(0, triangular(8, 22, 15.3) + choice((0, 3 * triangular(1, 9, 4)))) # 22.1 def DG(): return max(0, vonmisesvariate(30, 2) * 3.08) # 14.0 def HB(): return max(0, choice((normal(6.7, 1.5), normal(16.7, 2.5)))) # 11.7 def BE(): return max(0, normal(17, 3) + uniform(0, 40)) # 37.0 team = (SC, KT, DG, HB, BE) def Team(team=team): return sum(player() for player in team) # - for player in team: repeated_hist(player, bins=range(70)) # We can see that none of the players have a distribution that looks like a normal distribution: `SC` is skewed to one side (the mean is 5 points to the right of the peak); the three next players have bimodal distributions; and `BE` is too flat on top. # # Now we define the team score to be the sum of the *k* = 5 players, and display this new distribution: repeated_hist(Team, bins=range(50, 180)) # Sure enough, this looks very much like a normal distribution. The **Central Limit Theorem** appears to hold in this case. But I have to say: "Central Limit" is not a very evocative name, so I propose we re-name this as the **Strength in Numbers Theorem**, to indicate the fact that if you have a lot of numbers, you tend to get the expected result. # + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false} # # Conclusion # # We've had an interesting tour and met some giants of the field: Laplace, Bernoulli, Fermat, Pascal, Bayes, Newton, ... even Mr. Monopoly and The Count. # # The conclusion is: be methodical in defining the sample space and the event(s) of interest, and be careful in counting the number of outcomes in the numerator and denominator. and you can't go wrong. Easy as 1-2-3. # - #
    # # # Appendix: Continuous Sample Spaces # # Everything up to here has been about discrete, finite sample spaces, where we can *enumerate* all the possible outcomes. # # But a reader asked about *continuous* sample spaces, such as the space of real numbers. The principles are the same: probability is still the ratio of the favorable cases to all the cases, but now instead of *counting* cases, we have to (in general) compute integrals to compare the sizes of cases. # Here we will cover a simple example, which we first solve approximately by simulation, and then exactly by calculation. # # ## The Hot New Game Show Problem: Simulation # # posed [this problem](http://fivethirtyeight.com/features/can-you-win-this-hot-new-game-show/) in the 538 *Riddler* blog: # # >Two players go on a hot new game show called *Higher Number Wins.* The two go into separate booths, and each presses a button, and a random number between zero and one appears on a screen. (At this point, neither knows the other’s number, but they do know the numbers are chosen from a standard uniform distribution.) They can choose to keep that first number, or to press the button again to discard the first number and get a second random number, which they must keep. Then, they come out of their booths and see the final number for each player on the wall. The lavish grand prize — a case full of gold bullion — is awarded to the player who kept the higher number. Which number is the optimal cutoff for players to discard their first number and choose another? Put another way, within which range should they choose to keep the first number, and within which range should they reject it and try their luck with a second number? # # We'll use this notation: # - **A**, **B**: the two players. # - *A*, *B*: the cutoff values they choose: the lower bound of the range of first numbers they will accept. # - *a*, *b*: the actual random numbers that appear on the screen. # # For example, if player **A** chooses a cutoff of *A* = 0.6, that means that **A** would accept any first number greater than 0.6, and reject any number below that cutoff. The question is: What cutoff, *A*, should player **A** choose to maximize the chance of winning, that is, maximize P(*a* > *b*)? # # First, simulate the number that a player with a given cutoff gets (note that `random.random()` returns a float sampled uniformly from the interval [0..1]): # + number= random.random def strategy(cutoff): "Play the game with given cutoff, returning the first or second random number." first = number() return first if first > cutoff else number() # - strategy(.5) # Now compare the numbers returned with a cutoff of *A* versus a cutoff of *B*, and repeat for a large number of trials; this gives us an estimate of the probability that cutoff *A* is better than cutoff *B*: def Pwin(A, B, trials=20000): "The probability that cutoff A wins against cutoff B." return mean(strategy(A) > strategy(B) for _ in range(trials)) Pwin(0.6, 0.9) # Now define a function, `top`, that considers a collection of possible cutoffs, estimate the probability for each cutoff playing against each other cutoff, and returns a list with the `N` top cutoffs (the ones that defeated the most number of opponent cutoffs), and the number of opponents they defeat: def top(N, cutoffs): "Return the N best cutoffs and the number of opponent cutoffs they beat." 
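    # Play every unordered pair of cutoffs against each other once, crediting the cutoff
    # whose estimated Pwin exceeds 0.5, then report the N cutoffs with the most pairwise wins.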
winners = Counter(A if Pwin(A, B) > 0.5 else B for (A, B) in itertools.combinations(cutoffs, 2)) return winners.most_common(N) # + from numpy import arange top(10, arange(0.5, 1.0, 0.01)) # - # We get a good idea of the top cutoffs, but they are close to each other, so we can't quite be sure which is best, only that the best is somewhere around 0.60. We could get a better estimate by increasing the number of trials, but that would consume more time. # # ## The Hot New Game Show Problem: Exact Calculation # # More promising is the possibility of making `Pwin(A, B)` an exact calculation. But before we get to `Pwin(A, B)`, let's solve a simpler problem: assume that both players **A** and **B** have chosen a cutoff, and have each received a number above the cutoff. What is the probability that **A** gets the higher number? We'll call this `Phigher(A, B)`. We can think of this as a two-dimensional sample space of points in the (*a*, *b*) plane, where *a* ranges from the cutoff *A* to 1 and *b* ranges from the cutoff *B* to 1. Here is a diagram of that two-dimensional sample space, with the cutoffs *A*=0.5 and *B*=0.6: # # # # The total area of the sample space is 0.5 × 0.4 = 0.20, and in general it is (1 - *A*) · (1 - *B*). What about the favorable cases, where **A** beats **B**? That corresponds to the shaded triangle below: # # # # The area of a triangle is 1/2 the base times the height, or in this case, 0.4² / 2 = 0.08, and in general, (1 - *B*)² / 2. So in general we have: # # Phigher(A, B) = favorable / total # favorable = ((1 - B) ** 2) / 2 # total = (1 - A) * (1 - B) # Phigher(A, B) = (((1 - B) ** 2) / 2) / ((1 - A) * (1 - B)) # Phigher(A, B) = (1 - B) / (2 * (1 - A)) # # And in this specific case we have: # # A = 0.5; B = 0.6 # favorable = 0.4 ** 2 / 2 = 0.08 # total = 0.5 * 0.4 = 0.20 # Phigher(0.5, 0.6) = 0.08 / 0.20 = 0.4 # # But note that this only works when the cutoff *A* ≤ *B*; when *A* > *B*, we need to reverse things. That gives us the code: def Phigher(A, B): "Probability that a sample from [A..1] is higher than one from [B..1]." if A <= B: return (1 - B) / (2 * (1 - A)) else: return 1 - Phigher(B, A) Phigher(0.5, 0.6) # We're now ready to tackle the full game. There are four cases to consider, depending on whether **A** and **B** get a first number that is above or below their cutoff choices: # # | first *a* | first *b* | P(*a*, *b*) | P(A wins | *a*, *b*) | Comment | # |:-----:|:-----:| ----------- | ------------- | ------------ | # | *a* > *A* | *b* > *B* | (1 - *A*) · (1 - *B*) | Phigher(*A*, *B*) | Both above cutoff; both keep first numbers | # | *a* < *A* | *b* < *B* | *A* · *B* | Phigher(0, 0) | Both below cutoff, both get new numbers from [0..1] | # | *a* > *A* | *b* < *B* | (1 - *A*) · *B* | Phigher(*A*, 0) | **A** keeps number; **B** gets new number from [0..1] | # | *a* < *A* | *b* > *B* | *A* · (1 - *B*) | Phigher(0, *B*) | **A** gets new number from [0..1]; **B** keeps number | # # For example, the first row of this table says that the event of both first numbers being above their respective cutoffs has probability (1 - *A*) · (1 - *B*), and if this does occur, then the probability of **A** winning is Phigher(*A*, *B*). # We're ready to replace the old simulation-based `Pwin` with a new calculation-based version: def Pwin(A, B): "With what probability does cutoff A win against cutoff B?"
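    # Total probability over the four rows of the table above: weight each case by its
    # probability and multiply by A's chance of winning within that case.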
return ((1-A) * (1-B) * Phigher(A, B) # both above cutoff + A * B * Phigher(0, 0) # both below cutoff + (1-A) * B * Phigher(A, 0) # A above, B below + A * (1-B) * Phigher(0, B)) # A below, B above Pwin(0.5, 0.6) # `Pwin` relies on a lot of algebra. Let's define a few tests to check for obvious errors: # + def test(): assert Phigher(0.5, 0.5) == Phigher(0.75, 0.75) == Phigher(0, 0) == 0.5 assert Pwin(0.5, 0.5) == Pwin(0.75, 0.75) == 0.5 assert Phigher(.6, .5) == 0.6 assert Phigher(.5, .6) == 0.4 return 'ok' test() # - # Let's repeat the calculation with our new, exact `Pwin`: top(10, arange(0.5, 1.0, 0.01)) # It is good to see that the simulation and the exact calculation are in rough agreement; that gives me more confidence in both of them. We see here that 0.62 defeats all the other cutoffs, and 0.61 defeats all cutoffs except 0.62. The great thing about the exact calculation code is that it runs fast, regardless of how much accuracy we want. We can zero in on the range around 0.6: top(10, arange(0.5, 0.7, 0.001)) # This says 0.618 is best, better than 0.620. We can get even more accuracy: top(10, arange(0.617, 0.619, 0.000001)) # So 0.618034 is best. Does that number [look familiar](https://en.wikipedia.org/wiki/Golden_ratio)? Can we prove that it is what I think it is? # # To understand the strategic possibilities, it is helpful to draw a 3D plot of `Pwin(A, B)` for values of *A* and *B* between 0 and 1: # + import numpy as np from mpl_toolkits.mplot3d.axes3d import Axes3D def map2(fn, A, B): "Map fn to corresponding elements of 2D arrays A and B." return [list(map(fn, Arow, Brow)) for (Arow, Brow) in zip(A, B)] cutoffs = arange(0.00, 1.00, 0.02) A, B = np.meshgrid(cutoffs, cutoffs) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1, 1, 1, projection='3d') ax.set_xlabel('A') ax.set_ylabel('B') ax.set_zlabel('Pwin(A, B)') ax.plot_surface(A, B, map2(Pwin, A, B)); # - # What does this [Pringle of Probability](http://fivethirtyeight.com/features/should-you-shoot-free-throws-underhand/) show us? The highest win percentage for **A**, the peak of the surface, occurs when *A* is around 0.5 and *B* is 0 or 1. We can confirm that, finding the maximum `Pwin(A, B)` for many different cutoff values of `A` and `B`: # + cutoffs = (set(arange(0.00, 1.00, 0.01)) | set(arange(0.500, 0.700, 0.001)) | set(arange(0.61803, 0.61804, 0.000001))) def Pwin_summary(A, B): return [Pwin(A, B), 'A:', A, 'B:', B] # - max(Pwin_summary(A, B) for A in cutoffs for B in cutoffs) # So **A** could win 62.5% of the time if only **B** would chose a cutoff of 0. But, unfortunately for **A**, a rational player **B** is not going to do that. We can ask what happens if the game is changed so that player **A** has to declare a cutoff first, and then player **B** gets to respond with a cutoff, with full knowledge of **A**'s choice. In other words, what cutoff should **A** choose to maximize `Pwin(A, B)`, given that **B** is going to take that knowledge and pick a cutoff that minimizes `Pwin(A, B)`? max(min(Pwin_summary(A, B) for B in cutoffs) for A in cutoffs) # And what if we run it the other way around, where **B** chooses a cutoff first, and then **A** responds? min(max(Pwin_summary(A, B) for A in cutoffs) for B in cutoffs) # In both cases, the rational choice for both players in a cutoff of 0.618034, which corresponds to the "saddle point" in the middle of the plot. 
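# Before looking closer at that saddle point, a quick numeric check of the golden-ratio hint above (this cell is only a sanity check): 0.618034 agrees to six decimal places with 1/φ = (√5 - 1)/2.
# +
phi = (1 + 5 ** 0.5) / 2   # the golden ratio
round(1 / phi, 6)          # 0.618034, the optimal cutoff found above
# -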
This is a *stable equilibrium*; consider fixing *B* = 0.618034, and notice that if *A* changes to any other value, we slip off the saddle to the right or left, resulting in a worse win probability for **A**. Similarly, if we fix *A* = 0.618034, then if *B* changes to another value, we ride up the saddle to a higher win percentage for **A**, which is worse for **B**. So neither player will want to move from the saddle point. # # The moral for continuous spaces is the same as for discrete spaces: be careful about defining your sample space; measure carefully, and let your code take care of the rest. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Do not delete this cell. It ensures that you can do the imports, # load datasets etc. in the same fashion as in any Python script # in the project template. import sys sys.path.insert(0, '../..') from bld.project_paths import project_paths_join as ppj from bld.project_paths import project_paths as pp # + import numpy as np import pickle # %config Completer.use_jedi = False # - with open(ppj("OUT_DATA", f"grid_3_agents.pickle"), "rb") as f: all_grids = pickle.load(f) with open(ppj("OUT_DATA", f"all_super_stars_3_agents.pickle"), "rb") as f: super_stars = pickle.load(f) with open(ppj("OUT_ANALYSIS", f"array_no_deviation_simulations_3_agents.pickle"), "rb") as f: super_star_price_data = pickle.load(f) super_star_price_data.mean() # + alpha_grid = list( np.linspace( 0.025, 0.25, 100, ) ) beta_grid = list( np.linspace( 1e-8, 2e-05, 100, ) ) beta_index = beta_grid.index( 6.157575757575758e-07) alpha_index = alpha_grid[::-1].index(0.02954545454545455) # - np.flipud(np.mean(all_grids['avg_price'], axis=0))[alpha_index, beta_index] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1. Problem Statement # Write a program to identify sub list [1,1,5] is there in the given list in the same order, if yes print "it's a match" if no then print "it's gone" in function. # # Example:- # # Listy = [1,5,6,4,1,2,3,5] - it's a match # # Listy = [1,5,6,5,1,2,3.6] - it's gone def matchlist(lst): sublist = [1,1,5] length_lst = len(lst) for i in range(0, length_lst): if (sublist[0] == lst[i]): for j in range(i+1, length_lst): if (sublist[1] == lst[j]): for k in range(j+1, length_lst): if(sublist[2] == lst[k]): return print("It's a match") else: print("It's gone") # + List1 = [1,5,6,4,1,2,3,5] List2 = [1,5,6,5,1,2,3.6] print("Match [1,1,5] with [1,5,6,4,1,2,3,5]") matchlist(List1) print("\nMatch [1,1,5] with [[1,5,6,5,1,2,3.6]") matchlist(List2) # - # # 2. Problem Statement # Make a function for prime numbers and use Filter to filter out all the prime numbers from 1-2500 # + def checkPrime(num): if num > 1: for i in range(2, num): if ( num%i == 0): break else: return True lst = list(range(2501)) new_prime_list = list(filter(checkPrime, lst)) print("The prime numbers from 1-2500 are:\n",new_prime_list) # - # # 3. Problem Statement # Make a Lambda function for capitalizing the whole sentence passed using arguments and map all the sentences in the list, with the Lambda functions. # # ["hey this is sai","i am in mumbai",...] # # Output: ["ai", "I Am In Mumbai",...] 
# + lst = ["hey this is sai"," i am in mumbai","..."] print("Capitalizing the first letter of each word \n") print("Before : ",lst) capsfirstletter = list(map(lambda lst: lst.title(), lst)) print("After : ",capsfirstletter) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Download this page as a jupyter notebook at [Lesson 2](http://172.16.31.10/engr-1330-webroot/1-Lessons/Lesson02/ENGR-1330-Lesson02.ipynb) # + language="html" # # # - # %reset -f # # ENGR 1330 Computational Thinking with Data Science # Copyright © 2021 and # # Last GitHub Commit Date: 12 August 2021 # ## Lesson 2 Programming Fundamentals: # - iPython, tokens, and structure # - Data types (int, float, string, bool) # - Variables, operators, expressions, basic I/O # - String functions and operations # - How to Build a Notebook (Another look at problem solving) # ## Programming Fundamentals # # Recall the 5 fundamental CT concepts are: # # 1. **Decomposition**: the process of taking a complex problem and breaking it into more manageable sub-problems. # 2. **Pattern Recognition**: finding similarities, or shared characteristics of problems to reuse of solution methods ( **automation** ) for each occurrence of the pattern. # 3. **Abstraction** : Determine important characteristics of the problem and use these characteristics to create a representation of the problem. # 4. **Algorithms** : Step-by-step instructions of how to solve a problem. # 5. **System Integration**: the assembly of the parts above into the complete (integrated) solution. Integration combines parts into a **program** which is the realization of an algorithm using a syntax that the computer can understand. # # **Programming** is (generally) writing code in a specific programming language to address a certain problem. In the above list it is largely addressed by the algorithms and system integration concepts. # # # ### iPython # The programming language we will use is Python (actually iPython). Python is an example of a high-level language; there are also low-level languages, sometimes referred to as machine languages or assembly languages. Machine language is the encoding of instructions in binary so that they can be directly executed by the computer. Assembly language uses a slightly easier format to refer to the low level instructions. Loosely speaking, computers can only execute programs written in low-level languages. To be exact, computers can actually only execute programs written in machine language. Thus, programs written in a high-level language (and even those in assembly language) have to be processed before they can run. This extra processing takes some time, which is a small disadvantage of high-level languages. However, the advantages to high-level languages are enormous: # # - First, it is much easier to program in a high-level language. Programs written in a high-level language take less time to write, they are shorter and easier to read, and they are more likely to be correct. # - Second, high-level languages are portable, meaning that they can run on different kinds of computers with just a few modifications. # - Low-level programs can run on only one kind of computer (chipset-specific for sure, in some cases hardware specific) and have to be rewritten to run on other processors. (e.g. x86-64 vs. arm7 vs. aarch64 vs. PowerPC ...) 
# # Due to these advantages, almost all programs are written in high-level languages. Low-level languages are used only for a few specialized applications. # # Two kinds of programs process high-level languages into low-level languages: interpreters and compilers. An interpreter reads a high-level program and executes it, meaning that it does what the program says. It processes the program a little at a time, alternately reading lines and performing computations. Recall how an Excel spreadsheet computes from top to bottom, left to right - an interpreted program is much the same, each line is like a cell in a spreadsheet. # # As a language, python is a formal language that has certain requirements and structure called "syntax." Syntax rules come in two flavors, pertaining to **tokens** and **structure**. **Tokens** are the basic elements of the language, such as words, numbers, and chemical elements. The second type of syntax rule pertains to the **structure of a statement** specifically in the way the tokens are arranged. # ## Tokens and Structure # # Consider the relativistic equation relating energy, mass, and the speed of light # $$ e = m \cdot c^2 $$ # # In this equation the tokens are $e$,$m$,$c$,$=$,$\cdot$, and the structure is parsed from left to right as into the token named $e$ place the result of the product of the contents of the tokens $m$ and $c^2$. Given that the speed of light is some universal constant, the only things that can change are the contents of $m$ and the resulting change in $e$. # # In the above discourse, the tokens $e$,$m$,$c$ are names for things that can have values -- we will call these variables (or constants as appropriate). The tokens $=$,$\cdot$, and $~^2$ are symbols for various arithmetic operations -- we will call these operators. The structure of the equation is specific -- we will call it a statement. # # When we attempt to write and execute python scripts - we will make various mistakes; these will generate warnings and errors, which we will repair to make a working program. # # Consider our equation: #clear all variables# Example Energy = Mass * SpeedOfLight**2 # Notice how the interpreter tells us that Mass is undefined - so a simple fix is to define it and try again # Example Mass = 1000000 Energy = Mass * SpeedOfLight**2 # Notice how the interpreter now tells us that SpeedOfLight is undefined - so a simple fix is to define it and try again # Example Mass = 1000000 #kilograms SpeedOfLight = 299792458 #meters per second Energy = Mass * SpeedOfLight**2 # Now the script ran without any reported errors, but we have not instructed the program on how to produce output. To keep the example simple we will just add a generic print statement. # Example Mass = 1000000 #kilograms SpeedOfLight = 299792458 #meters per second Energy = Mass * SpeedOfLight**2 print("Energy is:", Energy, "Newton meters") # Now lets examine our program. Identify the tokens that have values, Identify the tokens that are symbols of operations, identify the structure. # ## Variables # # Variables are names given to data that we want to store, manipulate, **and change** in programs. A variable has a name and a value. The value representation depends on what type of object the variable represents. The utility of variables comes in when we have a structure that is universal, but values of variables within the structure will change - otherwise it would be simple enough to just hardwire the arithmetic. 
# # Suppose we want to store the time of concentration for some hydrologic calculation. # To do so, we can name a variable `TimeOfConcentration`, and then `assign` a value to the variable, # for instance: # # TimeOfConcentration = 0.0 # # After this assignment statement the variable is created in the program and has a value of 0.0. # The use of a decimal point in the initial assignment establishes the variable as a float (a real variable is called a floating point representation -- or just a float). # # ### Naming Rules # # Variable names in Python can only contain letters (a - z, A - Z), numerals (0 - 9), or underscores. # The first character cannot be a number, otherwise there is considerable freedom in naming. # The names can be reasonably long. # `runTime`, `run_Time`, `_run_Time2`, `_2runTime` are all valid names, but `2runTime` is not valid, and will create an error when you try to use it. # Script to illustrate variable names runTime = 1. _2runTime = 2 # change to 2runTime = 2 and rerun script runTime2 = 2 print(type(runTime),type(_2runTime),type(runTime2)) # There are some reserved words that cannot be used as variable names because they have preassigned meaning in Parseltongue. # These words include `print`, `input`, `if`, `while`, and `for`. # There are several more; the interpreter won't allow you to use these names as variables and will issue an error message when you attempt to run a program with such words used as variables. # ## Operators # # The `=` sign used in the variable definition is called an assignment operator (or assignment sign). # The symbol means that the expression to the right of the symbol is to be evaluated and the result placed into the variable on the left side of the symbol. The "operation" is assignment, the "=" symbol is the operator name. # # Consider the script below # Assignment Operator x = 5 y = 10 print (x,y) y=x # reverse order y=x and re-run, what happens? print (x,y) # So look at what happened. When we assigned values to the variables named `x` and `y`, they started life as 5 and 10. # We then wrote those values to the console, and the program returned 5 and 10. # Then we assigned `y` to `x` which took the value in y and replaced the value that was in x with this value. # We then wrote the contents again, and both variables have the value 10. # ## Arithmetic Operators # # In addition to assignment we can also perform arithmetic operations on variables. The # fundamental arithmetic operators are: # # | Symbol | Meaning | Example | # |:---|:---|:---| # | = |Assignment| x=3 Assigns value of 3 to x.| # | + |Addition| x+y Adds values in x and y.| # | - |Subtraction| x-y Subtracts values in y from x.| # | * |Multiplication| x*y Multiplies values in x and y.| # | / |Division| x/y Divides value in x by value in y.| # | // |Floor division| x//y Divide x by y, truncate result to whole number.| # | % |Modulus| x%y Returns remainder when x is divided by y.| # | ** |Exponentation| x ** y Raises value in x by value in y. ( e.g. 
x ** y)| # | += |Additive assignment| x+=2 Equivalent to x = x+2.| # | -= |Subtractive assignment| x-=2 Equivalent to x = x-2.| # | *= |Multiplicative assignment| x\*=3 Equivalent to x = x\*3.| # | /= |Divide assignment| x/=3 Equivalent to x = x/3.| # # Run the script in the next cell for some illustrative results # Uniary Arithmetic Operators x = 10 y = 5 print(x, y) print(x+y) print(x-y) print(x*y) print(x/y) print((x+1)//y) print((x+1)%y) print(x**y) # Arithmetic assignment operators x = 1 x += 2 print(type(x),x) x = 1 x -= 2 print(type(x),x) x = 1 x *=3 print(type(x),x) x = 10 x /= 2 print(type(x),x) # Interesting what division does to variable type # ## Data Type # In the computer data are all binary digits (actually 0 and +5 volts). # At a higher level of **abstraction** data are typed into integers, real, or alphanumeric representation. # The type affects the kind of arithmetic operations that are allowed (as well as the kind of arithmetic - integer versus real arithmetic; lexicographical ordering of alphanumeric , etc.) # In scientific programming, a common (and really difficult to detect) source of slight inaccuracies (that tend to snowball as the program runs) is mixed mode arithmetic required because two numeric values are of different types (integer and real). # # Learn more from the textbook # # https://www.inferentialthinking.com/chapters/04/Data_Types.html # # Here we present a quick summary # # ### Integer # Integers are numbers without any fractional portion (nothing after the decimal point { which # is not used in integers). Numbers like -3, -2, -1, 0, 1, 2, 200 are integers. A number like 1.1 # is not an integer, and 1.0 is also not an integer (the presence of the decimal point makes the # number a real). # # To declare an integer in Python, just assign the variable name to an integer for example # # MyPhoneNumber = 14158576309 # # ### Real (Float) # A real or float is a number that has (or can have) a fractional portion - the number has # decimal parts. The numbers 3.14159, -0.001, 11.11, 1., are all floats. # The last one is especially tricky, if you don't notice the decimal point you might think it is an integer but the # inclusion of the decimal point in Python tells the program that the value is to be treated as a # float. # To declare a # float in Python, just assign the variable name to a # float for example # # MyMassInKilos = 74.8427 # # ### String(Alphanumeric) # A string is a data type that is treated as text elements. The usual letters are strings, but # numbers can be included. The numbers in a string are simply characters and cannot be # directly used in arithmetic. # There are some kinds of arithmetic that can be performed on strings but generally we process string variables to capture the text nature of their contents. # To declare a string in Python, just assign the variable name to a string value - the trick is the value is enclosed in quotes. # The quotes are delimiters that tell the program that the characters between the quotes are characters and are to be treated as literal representation. # # For example # # MyName = 'Theodore' # MyCatName = "Dusty" # DustyMassInKilos = "7.48427" # # are all string variables. # The last assignment is made a string on purpose. # String variables can be combined using an operation called concatenation. # The symbol for concatenation is the plus symbol `+`. # # Strings can also be converted to all upper case using the `upper()` function. The syntax for # the `upper()` function is `'string to be upper case'.upper()`. 
# Notice the "dot" in the syntax. # The operation passes everything to the left of the dot to the function which then # operates on that content and returns the result all upper case (or an error if the input stream # is not a string). # Variable Types Example MyPhoneNumber = 14158576309 MyMassInKilos = 74.8427 MyName = 'Theodore' MyCatName = "Dusty" DustyMassInKilos = "7.48427" print("All about me") print("Name: ",MyName, " Mass :",MyMassInKilos,"Kg" ) print('Phone : ',MyPhoneNumber) print('My cat\'s name :', MyCatName) # the \ escape character is used to get the ' into the literal print("All about concatenation!") print("A Silly String : ",MyCatName+MyName+DustyMassInKilos) print("A SILLY STRING : ", (MyCatName+MyName+DustyMassInKilos).upper()) # Strings can be formatted using the `%` operator or the `format()` function. The concepts will # be introduced later on as needed in the workbook, you can Google search for examples of # how to do such formatting. # # ### Changing Types # A variable type can be changed. # This activity is called type casting. # Three functions allow # type casting: `int()`, `float()`, and `str()`. # The function names indicate the result of using # the function, hence `int()` returns an integer, `float()` returns a # oat, and `str()` returns a # string. # # There is also the useful function `type()` which returns the type of variable. # # The easiest way to understand is to see an example. # Type Casting Examples MyInteger = 234 MyFloat = 876.543 MyString = 'What is your name?' print(MyInteger,MyFloat,MyString) print('Integer as float',float(MyInteger)) print(' as integer',int(MyInteger)) print('Integer as string',str(MyInteger)) print('Integer as hexadecimal',hex(MyInteger)) print('Integer Type',type((MyInteger))) # insert the hex conversion and see what happens! # ## Expressions # # Expressions are the "algebraic" constructions that are evaluated and then placed into a variable. # Consider # # x1 = 7 + 3 * 6 / 2 - 1 # # The expression is evaluated from the left to right and in words is # # - Into the object named x1 place the result of: # # integer 7 + (integer 6 divide by integer 2 = float 3 * integer 3 = float 9 - integer 1 = float 8) = float 15 # # The division operation by default produces a float result unless forced otherwise. The result is the variable `x1` is a float with a value of `15.0` # Expressions Example x1 = 7 + 3 * 6 // 2 - 1 # Change / into // and see what happens! print(type(x1),x1) ## Simple I/O (Input/Output) # ### Summary # # So far consider our story - a tool to help with problem solving is CT leading to an algorithm. The tool to implement the algorithm is the program and in our case JupyterLab running iPython interpreter for us. # # As a formal language we introduced: # - tokens # - structure # # From these two constructs we further introduced **variables** (a kind of token), **data types** (an abstraction, and arguably a decomposition), and **expressions** (a structure). We created simple scripts (with errors), examined the errors, corrected our script, and eventually got an answer. So we are well on our way in CT as it applies in Engineering. # ## How to Build a Program/Notebook # # Recall our suggested problem solving protocol: # # 1. Explicitly state the problem # 2. State the input information, governing equations or principles, and the required output information. # 3. Work a sample problem by-hand for testing the general solution. # 4. Develop a general solution method (coding). # 5. 
Test the general solution against the by-hand example, then apply to the real problem. # 6. Refine general solution for deployment (frequent use) # # The protocol below is shamelessly lifted from [http://users.csc.calpoly.edu/~jdalbey/101/Lectures/HowToBuildAProgram.html](http://users.csc.calpoly.edu/~jdalbey/101/Lectures/HowToBuildAProgram.html) here we are using the concept of program and notebook as the same thing. # # Building a program is not an art, it is an engineering process. As such there is a process to follow with clearly defined steps. # # ### Analysis (Understand the Requirements) # # In this class you will usually be given the problem requirements, unlike the real-world where you have to elicit the requirements from a customer. For you the first step will be to read the problem and be sure you understand what the program must do. Summarize your understanding by writing the Input data, Output data, and Functions (operations or transformation on the data). In the context of our (WCOE) protocol this step comprises Steps 1 and 2. # # ### Create a Test Plan # # You must be able to verify that your program works correctly once it is written. Invent some actual input data values and manually compute the expected result. In the context of our (WCOE) protocol this step comprises Steps 3. # # ### Invent a Solution # # This is the creative, exploratory part of design where you figure out how to solve the problem. Here is one strategy: # # - Solve the problem manually, the way you would do it as a human. Pay careful to attention what operations you perform and write down each step. # - Look for a pattern in the steps you performed. # - Determine how this pattern could be automated using the 3 algorithm building blocks (which we learn about a few lectures from now) (Sequence, Selection, Iteration). # # ### Design (Formalize your solution) # # Arrange your solution into components; this is called the architecture. # Write the algorithm for each component. Refine your algorithm in a step-wise manner if necessary. # Determine the data types and constraints for each data item. # Review # # Perform a hand trace of your solution and simulate how the computer will carry out your algorithm. Make sure your algorithm works correctly before you put it into the computer. # # ### Implementation (coding) # # Translate your algorithm into a programming language and enter it into the computer. # Compile your source code to produce an executable program. # You may want to compile and test each subprogram individually before combining them into a complete program. # # ### Testing # # Execute the program using the Test Plans you created above. # Correct any errors as necessary. # ### Example of Problem Solving Process # # Consider an engineering material problem where we wish to **classify** whether a material is loaded in the elastic or inelastic region as determined the stress (solid pressure) in a rod for some applied load. The yield stress is the **classifier**, and once the material yields (begins to fail) it will not carry any additional load (until ultimate failure, when it carries no load). 
# # # ![](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson02/StressStrain.png) # # **Step 1.** Compute the material stress under an applied load; determine if value exceedes yield stress, and report the loading condition # # **Step 2.** # - Inputs: applied load, cross sectional area, yield stress # - Governing equation: $\sigma = \frac{P}{A}$ when $\frac{P}{A}$ is less than the yield stress, and is equal to the yield stress otherwise. # - Outputs: The material stress $\sigma$, and the classification elastic or inelastic. # # **Step 3.** Work a sample problem by-hand for testing the general solution. # # Assuming the yield stress is 1 million psi (units matter in an actual problem - kind of glossed over here) # # |Applied Load (lbf)|Cross Section Area (sq.in.)|Stress (psi)|Classification| # |---:|---:|---:|---:| # |10,000|1.0|10,000|Elastic| # |10,000|0.1|100,000|Elastic| # |100,000|0.1|1,000,000|Inelastic| # # # The stress requires us to read in the load value, read in the cross sectional area, divide the load by the area, and compare the result to the yield stress. If it exceeds the yield stress, then the actual stress is the yield stress, and the loading is inelastic, otherwise elastic # # $$ \sigma = \frac{P}{A} $$ # $$ \text{If}~ \sigma >= \text{Yield Stress then report Inelastic} $$ # # **Step 4.** Develop a general solution (code) # # In a flow-chart it would look like: # # ![](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson02/Flowchart.png) # # # ||Flowchart for Artihmetic Mean Algorithm|| # |---|------------|---| # # **Step 5.** This step we would code the algorithm expressed in the figure and test it with the by-hand data and other small datasets until we are convinced it works correctly. We have not yet learned prompts to get input we simply direct assign values as below (and the conditional execution is the subject of a later lesson) # # In a simple JupyterLab script # Example 2 Problem Solving Process yield_stress = 1e6 applied_load = 1e5 cross_section = 0.1 computed_stress = applied_load/cross_section if(computed_stress < yield_stress): print("Elastic Region: Stress = ",computed_stress) elif(computed_stress >= yield_stress): print("Inelastic Region: Stress = ",yield_stress) # **Step 6.** This step we would refine the code to generalize the algorithm. In the example we want a way to supply the inputs by user entry,and tidy the output by rounding to only two decimal places. A little CCMR from [https://www.geeksforgeeks.org/taking-input-in-python/](https://www.geeksforgeeks.org/taking-input-in-python/) gives us a way to deal with the inputs and typecasting. Some more CCMR from [https://www.programiz.com/python-programming/methods/built-in/round](https://www.programiz.com/python-programming/methods/built-in/round) gets us rounded out! # Example 2 Problem Solving Process yield_stress = float(input('Yield Stress (psi)')) applied_load = float(input('Applied Load (lbf)')) cross_section = float(input('Cross Section Area (sq.in.)')) computed_stress = applied_load/cross_section if(computed_stress < yield_stress): print("Elastic Region: Stress = ",round(computed_stress,2)) elif(computed_stress >= yield_stress): print("Inelastic Region: Stress = ",round(yield_stress,2)) # So the simple task of computing the stress, is a bit more complex when decomposed, that it first appears, but illustrates a five step process (with a refinement step), and we have done our first **classification** problem, albeit a very simple case! # ## Readings # # 1. 
Computational and Inferential Thinking and , Computational and Inferential Thinking, The Foundations of Data Science, Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND) Chapter 3 https://www.inferentialthinking.com/chapters/03/programming-in-python.html # # 2. Computational and Inferential Thinking and , Computational and Inferential Thinking, The Foundations of Data Science, Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND) Chapter 4 # https://www.inferentialthinking.com/chapters/04/Data_Types.html # # 3. Learn Python in One Day and Learn It Well. Python for Beginners with Hands-on Project. (Learn Coding Fast with Hands-On Project Book -- Kindle Edition by LCF Publishing (Author), https://www.amazon.com/Python-2nd-Beginners-Hands-Project-ebook/dp/B071Z2Q6TQ/ref=sr_1_3?dchild=1&keywords=learn+python+in+a+day&qid=1611108340&sr=8-3 # # 4. , , , (Batu), , , and . (2021) Computational Thinking and Data Science: A WebBook to Accompany ENGR 1330 at TTU, Whitacre College of Engineering, DOI (pending) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Note # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. # + # Dependencies and Setup import pandas as pd import numpy as np # File to Load (Remember to Change These) file_to_load = "Resources/purchase_data.csv" # Read Purchasing File and store into Pandas data frame purchase_data = pd.read_csv(file_to_load) purchase_data.head() # - # ## Player Count # * Display the total number of players # purchase_data['SN'].nunique() # ## Purchasing Analysis (Total) # * Run basic calculations to obtain number of unique items, average price, etc. 
# # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # # + numberofuniqueitem=purchase_data['Item ID'].nunique() averageprice = round(purchase_data["Price"].mean(),2) numberofpurchases = purchase_data["Purchase ID"].count() TotalRevenue = purchase_data['Price'].sum() summary_table = pd.DataFrame({"Number of Unique Items": [numberofuniqueitem], "Average Price": [averageprice], "Number of Purchases": numberofpurchases, "Total Revenue": TotalRevenue }) summary_table["Average Price"] = summary_table["Average Price"].astype(float).map( "${:,.2f}".format) summary_table["TotalRevenue"] = summary_table["Total Revenue"].astype(float).map( "${:,.2f}".format) summary_table # - # ## Gender Demographics # * Percentage and Count of Male Players # # # * Percentage and Count of Female Players # # # * Percentage and Count of Other / Non-Disclosed # # # # + # Male = purchase_data.query('Gender == "Male"')['SN'].agg(['nunique']) # MalePercent = round( (Male * 100) / (purchase_data['SN'].nunique()),2 ) # MalePercent = MalePercent.astype(str) + '%' # Female = purchase_data.query('Gender == "Female"')['SN'].agg(['nunique']) # FemalePercent = round( (Female * 100) / (purchase_data['SN'].nunique()),2 ) # FemalePercent = FemalePercent.astype(str) + '%' # Other = purchase_data.query('Gender == "Other / Non-Disclosed"')['SN'].agg(['nunique']) # OtherPercent = round( (Other * 100) / (purchase_data['SN'].nunique()),2 ) # OtherPercent = OtherPercent.astype(str) + '%' # Genderdf = pd.DataFrame({ # 'Gender': ['Male', 'Female', 'Other / Non-Disclosed'], # 'Total Count': [Male,Female,Other], # 'Percentage of Players': [MalePercent,FemalePercent,OtherPercent], # }) # Genderdf gender = purchase_data.groupby("Gender")["SN"].nunique() count = gender/purchase_data['SN'].nunique() percent = count.mul(100).round(2).astype(str) + '%' summary_table = pd.DataFrame({ "Total Count": gender, "Percentage of Players": percent }) summary_table = summary_table.sort_values(by = ['Total Count'], ascending=[False]) summary_table # - # # ## Purchasing Analysis (Gender) # * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender # # # # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # + count = purchase_data.groupby(['Gender'])['Purchase ID'].count() average = purchase_data.groupby(['Gender'])['Price'].mean().round(2) total = purchase_data.groupby(['Gender'])['Price'].sum() AvgRangeTotal = (total / purchase_data.groupby("Gender")["SN"].nunique() ).round(2) summary_table = pd.DataFrame({"Purchase Count": count, "Average Purchase Price": average , "Total Purchase Value": total, "Avg Total Purchase per Person" : AvgRangeTotal }) summary_table["Average Purchase Price"] = summary_table["Average Purchase Price"].astype(float).map( "${:,.2f}".format) summary_table["Total Purchase Value"] = summary_table["Total Purchase Value"].astype(float).map( "${:,.2f}".format) summary_table["Avg Total Purchase per Person"] = summary_table["Avg Total Purchase per Person"].astype(float).map( "${:,.2f}".format) summary_table # - # ## Age Demographics # * Establish bins for ages # # # * Categorize the existing players using the age bins. 
Hint: use pd.cut() # # # * Calculate the numbers and percentages by age group # # # * Create a summary data frame to hold the results # # # * Optional: round the percentage column to two decimal points # # # * Display Age Demographics Table # # + bins = [0,9,14, 19, 24, 29,34,39,1000] group_names = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","+40"] purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], bins, labels=group_names) purchase_data_group = purchase_data.groupby("Age Ranges") age_group_total = purchase_data_group["SN"].nunique() count = age_group_total/purchase_data['SN'].nunique() percent = count.mul(100).round(2).astype(str) + '%' summary_table = pd.DataFrame({ "Total Count": age_group_total, "Percentage of Players": percent }) summary_table # - # ## Purchasing Analysis (Age) # * Bin the purchase_data data frame by age # # # * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below # # # * Create a summary data frame to hold the results # # # * Optional: give the displayed data cleaner formatting # # # * Display the summary data frame # + count = purchase_data.groupby(['Age Ranges'])['Purchase ID'].count() average = purchase_data.groupby(['Age Ranges'])['Price'].mean().round(2) total = purchase_data.groupby(['Age Ranges'])['Price'].sum() AvgRangeTotal = (total / purchase_data.groupby("Age Ranges")["SN"].nunique() ).round(2) summary_table = pd.DataFrame({"Purchase Count": count, "Average Purchase Price": average , "Total Purchase Value": total, "Avg Total Purchase per Person": AvgRangeTotal }) summary_table["Average Purchase Price"] = summary_table["Average Purchase Price"].astype(float).map( "${:,.2f}".format) summary_table["Total Purchase Value"] = summary_table["Total Purchase Value"].astype(float).map( "${:,.2f}".format) summary_table["Avg Total Purchase per Person"] = summary_table["Avg Total Purchase per Person"].astype(float).map( "${:,.2f}".format) summary_table # - # ## Top Spenders # * Run basic calculations to obtain the results in the table below # # # * Create a summary data frame to hold the results # # # * Sort the total purchase value column in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the summary data frame # # # + new_df = purchase_data.groupby('SN').agg({'Purchase ID' : 'count','Price':'sum' }) avg = (new_df['Price'] / new_df['Purchase ID'] ). round(2) new_df['Average Purchase Price'] = avg new_df = new_df.rename(columns={"Purchase ID": "Purchase Count", "Price": "Total Purchase Value"}) new_df = new_df.sort_values(by = ['Total Purchase Value'], ascending=[False]) new_df = new_df[['Purchase Count', 'Average Purchase Price', 'Total Purchase Value']] new_df["Total Purchase Value"] = new_df["Total Purchase Value"].astype(float).map("${:,.2f}".format) new_df["Average Purchase Price"] = new_df["Average Purchase Price"].astype(float).map("${:,.2f}".format) new_df.head() # - # ## Most Popular Items # * Retrieve the Item ID, Item Name, and Item Price columns # # # * Group by Item ID and Item Name. 
Perform calculations to obtain purchase count, item price, and total purchase value # # # * Create a summary data frame to hold the results # # # * Sort the purchase count column in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the summary data frame # # # + new_df = purchase_data.groupby(['Item ID','Item Name']).agg({'Purchase ID' : 'count','Price':'sum' }) avg = (new_df['Price'] / new_df['Purchase ID'] ). round(2) new_df['Average Purchase Price'] = avg new_df = new_df.rename(columns={"Purchase ID": "Purchase Count", "Price": "Total Purchase Value"}) new_df = new_df.sort_values(by = ['Purchase Count'], ascending=[False]) new_df = new_df[['Purchase Count', 'Average Purchase Price', 'Total Purchase Value']] new_df["Total Purchase Value"] = new_df["Total Purchase Value"].astype(float).map("${:,.2f}".format) new_df["Average Purchase Price"] = new_df["Average Purchase Price"].astype(float).map("${:,.2f}".format) new_df.head() # - # ## Most Profitable Items # * Sort the above table by total purchase value in descending order # # # * Optional: give the displayed data cleaner formatting # # # * Display a preview of the data frame # # # + new_df = purchase_data.groupby(['Item ID','Item Name']).agg({'Purchase ID' : 'count','Price':'sum' }) avg = (new_df['Price'] / new_df['Purchase ID'] ). round(2) new_df['Average Purchase Price'] = avg new_df = new_df.rename(columns={"Purchase ID": "Purchase Count", "Price": "Total Purchase Value"}) new_df = new_df.sort_values(by = ['Total Purchase Value'], ascending=[False]) new_df = new_df[['Purchase Count', 'Average Purchase Price', 'Total Purchase Value']] new_df["Total Purchase Value"] = new_df["Total Purchase Value"].astype(float).map("${:,.2f}".format) new_df["Average Purchase Price"] = new_df["Average Purchase Price"].astype(float).map("${:,.2f}".format) new_df.head() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd data = pd.read_excel('data_v7.xlsx') data.head() len(data) len(data[data.isnull().any(axis=1)]) data.drop(['Unnamed: 0','locations_month','label','date'],axis=1,inplace=True) data.drop(['pressureInches','visibility'],axis=1,inplace=True) data.head() loc_lst = list(data['locations'].unique()) i = 0 for haha in loc_lst: data.loc[data.locations == haha, 'locations'] = i i+=1 data.head() data.tail() data.corr() from sklearn.decomposition import PCA from sklearn.preprocessing import MinMaxScaler import matplotlib.pyplot as plt scaler = MinMaxScaler(feature_range=[0, 1]) data_rescaled = scaler.fit_transform(data) pca = PCA().fit(data_rescaled) plt.figure(figsize=(20,10)) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('Number of Components') plt.ylabel('Variance (%)') #for each component plt.title('Dengue Dataset No. 
of Components vs Variance') plt.show() np.cumsum(pca.explained_variance_ratio_) # 8 features pca = PCA(n_components=8) data_pca = pca.fit_transform(data_rescaled) data_pca.shape new_df = pd.DataFrame(data_pca) new_df.corr() labels = pd.read_excel('data_v7.xlsx',usecols=['label']) labels_pca = np.array(labels) labels_pca.shape data_pca.shape X = data_pca.copy() y = labels_pca.copy() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101) # - from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier() rfc.fit(X_train,y_train) rfc_pred = rfc.predict(X_test) from sklearn.metrics import classification_report print(classification_report(y_test,rfc_pred)) rfc.feature_importances_ from sklearn.svm import SVC svc = SVC() svc.fit(X_train,y_train) svc_pred = svc.predict(X_test) print(classification_report(y_test,svc_pred)) from sklearn.linear_model import LogisticRegression lr = LogisticRegression() lr.fit(X_train,y_train) lr_pred = lr.predict(X_test) print(classification_report(lr_pred,y_test)) from sklearn.model_selection import GridSearchCV # + penalty = ['l1', 'l2'] C = [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000] class_weight = [{1:0.5, 0:0.5}, {1:0.4, 0:0.6}, {1:0.6, 0:0.4}, {1:0.7, 0:0.3}] solver = ['liblinear', 'saga'] param_grid = dict(penalty=penalty, C=C, class_weight=class_weight, solver=solver) grid = GridSearchCV(estimator=lr, param_grid=param_grid, scoring='roc_auc', verbose=1, n_jobs=-1) grid_result = grid.fit(X_train, y_train) print('Best Score: ', grid_result.best_score_) print('Best Params: ', grid_result.best_params_) # - from sklearn.model_selection import RandomizedSearchCV from sklearn.pipeline import Pipeline pipe = Pipeline([("classifier", RandomForestClassifier())]) # Create dictionary with candidate learning algorithms and their hyperparameters search_space = [ {"classifier": [LogisticRegression()], "classifier__penalty": ['l2','l1'], "classifier__C": np.logspace(0, 4, 10) }, {"classifier": [LogisticRegression()], "classifier__penalty": ['l2'], "classifier__C": np.logspace(0, 4, 10), "classifier__solver":['newton-cg','saga','sag','liblinear'] ##This solvers don't allow L1 penalty }, {"classifier": [RandomForestClassifier()], "classifier__n_estimators": [10, 100, 1000], "classifier__max_depth":[5,8,15,25,30,None], "classifier__min_samples_leaf":[1,2,5,10,15,100], "classifier__max_leaf_nodes": [2, 5,10]}] # create a gridsearch of the pipeline, the fit the best model gridsearch = GridSearchCV(pipe, search_space, cv=5, verbose=0,n_jobs=-1) # Fit grid search best_model = gridsearch.fit(X_train, y_train) best_model.best_params_ temp_lr = LogisticRegression(C=2.7825594022071245, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=100, multi_class='warn', n_jobs=None, penalty='l1', random_state=None, solver='warn', tol=0.0001, verbose=0, warm_start=False) temp_lr.fit(X_train,y_train) print(classification_report(temp_lr.predict(X_test),y_test)) def svc_param_selection(X, y, nfolds): Cs = [0.001, 0.01, 0.1, 1, 10] gammas = [0.001, 0.01, 0.1, 1] param_grid = {'C': Cs, 'gamma' : gammas} grid_search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=nfolds) grid_search.fit(X, y) grid_search.best_params_ return grid_search.best_params_ svc_params = svc_param_selection(X_train,y_train,3) temp_svc = SVC(gamma=10).fit(X_train,y_train).predict(X_test) print(classification_report(temp_svc,y_test)) param_grid = { 'n_estimators': [200, 500], 
'max_features': ['auto', 'sqrt', 'log2'], 'max_depth' : [4,5,6,7,8], 'criterion' :['gini', 'entropy'] } CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) CV_rfc.fit(X_train, y_train) CV_rfc.best_params_ temp_rfc=RandomForestClassifier(random_state=101, max_features='log2', n_estimators= 500, max_depth=4, criterion='entropy') temp_rfc.fit(X_train,y_train) print(classification_report((temp_rfc.predict(X_test)),y_test)) # + grid={"C":np.logspace(-3,3,7), "penalty":["l1","l2"]} logreg=LogisticRegression() logreg_cv=GridSearchCV(logreg,grid,cv=10) logreg_cv.fit(X_train,y_train) print("tuned hpyerparameters :(best parameters) ",logreg_cv.best_params_) print("accuracy :",logreg_cv.best_score_) # + logreg2=LogisticRegression(C=10,penalty="l1") logreg2.fit(X_train,y_train) print(classification_report(logreg2.predict(X_test),y_test)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lesson 1 - Summarizing Data # + import matplotlib.pyplot as plt import numpy as np import pandas as pd import statistics # %matplotlib inline # - # ### Make a blank DataFrame df = pd.DataFrame() # ### Populate it with data df['age'] = [28, 42, 27, 24, 35, 54, 35, 42, 37] # ## Measures of Central Tendency # ### Mean (using built-in Python functionality) mean_py = sum(df['age']) / len(df['age']) mean_py # ### Mean (using NumPy) mean_np = np.mean(df['age']) mean_np # ### Median (using built-in Python functionality) median_py = statistics.median(df['age']) median_py # ### Median (using NumPy) median_np = np.median(df['age']) median_np # ### Mode (using built-in Python functionality) mode_py = statistics.mode(df['age']) mode_py # ### Mode (using NumPy) # Generate a list of unique elements along with how often they occur. (values, counts) = np.unique(df['age'], return_counts=True) print(values, counts) # The location(s) in the values list of the most-frequently-occurring element(s). ind = [x[0] for x in list(enumerate(counts)) if x[1] == counts[np.argmax(counts)]] ind values # The most frequent element(s). modes = [values[x] for x in ind] modes # ## Measures of Variance # ### Variance (using NumPy) df['age'] # change delta degrees of freedom (ddof) to 1 from its default value of 0 var_np = np.var(df['age'], ddof=1) var_np # ### Variance (using Pandas) var_pd = df['age'].var() var_pd # ### Standard Deviation (using NumPy) std_np = np.std(df['age'], ddof=1) std_np # ### Standard Deviation (using Pandas) std_pd = df['age'].std() std_pd # ### Standard Error (using NumPy) se_np = std_np / np.sqrt(len(df['age'])) se_np # ### Standard Error Examples # + # First, create an empty dataframe to store your variables-to-be. pop = pd.DataFrame() # Then create two variables with mean = 60, one with a low standard # deviation (sd=10) and one with a high standard deviation (sd=100). pop['low_var'] = np.random.normal(60, 10, 10000) pop['high_var'] = np.random.normal(60, 100, 10000) # Finally, create histograms of the two variables. pop.hist(layout=(2, 1), sharex=True) plt.show() # Calculate and print the maximum and minimum values for each variable. print("\nMax of low_var and high_var:\n", pop.max()) print("\nMin of low_var and high_var:\n", pop.min()) # + # Take a random sample of 100 observations from each variable # and store it in a new dataframe. 
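# Before sampling, note what the standard error formula above (sd / sqrt(n)) predicts for
# samples of size n=100 drawn from these populations: roughly 10 / sqrt(100) = 1 for
# low_var and 100 / sqrt(100) = 10 for high_var. The sample means computed below should
# therefore land within about one unit (low_var) or ten units (high_var) of 60.
print("Predicted standard errors for samples of size 100:\n", pop.std(ddof=1) / np.sqrt(100))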
sample = pd.DataFrame() sample['low_var'] = np.random.choice(pop['low_var'], 100) sample['high_var'] = np.random.choice(pop['high_var'], 100) # Again, visualize the data. Note that here we're using a pandas method to # create the histogram. sample.hist() plt.show() # Check how well the sample replicates the population. print("Mean of low_var and high_var:\n", sample.mean()) # - print("Standard deviation of low_var and high_var:\n", sample.std(ddof=1)) # ## Describing Data with Pandas # + # Set up the data data = pd.DataFrame() data['gender'] = ['male'] * 100 + ['female'] * 100 # 100 height values for males, 100 height values for females data['height'] = np.append(np.random.normal(69, 8, 100), np.random.normal(64, 5, 100)) # 100 weight values for males, 100 weight values for females data['weight'] = np.append(np.random.normal(195, 25, 100), np.random.normal(166, 15, 100)) # - data.head(10) data.tail(10) data['height'].mean() data['height'].std() data.describe() data.groupby('gender').describe() data['gender'].value_counts() # # Lesson 2 - Basics of Probability # ## Perspectives on Probability # ### _Frequentist_ school of thought # - ### Describes how often a particular outcome would occur in an experiment if that experiment were repeated over and over # - ### In general, frequentists consider _model parameters to be fixed_ and _data to be random_ # ### _Bayesian_ school of thought # - ### Describes how likely an observer expects a particular outcome to be in the future, based on previous experience and expert knowledge # - ### Each time an experiment is run, the probability is updated if the new data changes the belief about the likelihood of the outcome # - ### The probability based on previous experiences is called the _"prior probability,"_ or the "prior," while the updated probability based on the newest experiment is called the _"posterior probability."_ # - ### In general, Bayesians consider _model parameters to be random_ and _data to be fixed_ # ------------------------------------------------------------------------------------------------------------------------- # ## Randomness # ## Sampling # ## Selection Bias # ------------------------------------------------------------------------------------------------------------------------- # ## Independence # ## Dependence # ------------------------------------------------------------------------------------------------------------------------- # ## Bayes' Rule # ## $P(A|B)=\frac{P(B|A)*P(A)}{P(B)}=\frac{P(B|A)*P(A)}{[P(A)*P(B|A)+P(A\sim)*P(B|A\sim)]}$ # # # ## Conditional Probability # ------------------------------------------------------------------------------------------------------------------------- # ## Evaluating Data Sources # - ## Bias # - ## Quality # - ## Exceptional Circumstance # ------------------------------------------------------------------------------------------------------------------------- # # The Normal Distribution and the Central Limit Theorem # ## Normality # ## Deviations from Normality and Descriptive Statistics (skewness) # ## Other Distributions # - ## Bernoulli # - ## Binomial # - ## Gamma # - ## Poisson # - ## Conditional Distribution # ## CLT and Sampling # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GML - Regressor # Install it first # !pip install GML # Loading Regression dataset from sklearn from sklearn.datasets import 
load_boston # Importing GML from GML.Ghalat_Machine_Learning import Ghalat_Machine_Learning # Making object with 100 estimators # Also importing another model from sklearn to make it compete with GML's models from sklearn.neural_network import MLPRegressor model = Ghalat_Machine_Learning(n_estimators=100) # storing data in X,y X,y = load_boston(return_X_y=True) # Passing the data to classifier best_model = model.GMLRegressor(X,y,neural_net='Yes',epochs=50, verbose=0,models=[MLPRegressor()]) best_model # So from above results, we can see # # ExtraTreesRegressor CV performed really well! and our custom model, MLPRegressor ended up 11th. # # Ranks are based upon validation loss # See how easy it was! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="GiddzBsIyu5S" colab_type="code" colab={} ## Authors: & ## 31st May 2020 # + id="YWX_KAgoJnnE" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt import pandas as pd import pandas_datareader as pdr import seaborn as sns # #!pip install statsmodels --upgrade import statsmodels.api as sm from statsmodels.tsa.ar_model import AutoReg, ar_select_order from statsmodels.tsa.api import acf, pacf, graphics from datetime import datetime # + [markdown] id="rznz9Xjd3Kmu" colab_type="text" # ## The objective of this exercise is to extract the trend component of a time-series # #### Example: In macroeconomics, the moving natural rate theory suggests that the unemployment rate is a stationary series around its moving target and its steady state- the natural rate of unemployment. Hence, the trend component of the unemployment series is equal to the natural rate. 
We will estimate the natural rate using data on monthly unemployment rate, using the following methodologies: # # # * Hodrick-Prescott filter # * Baxter-King filter # * Christiano-Fitzgerald filter # * Beveridge-Nelson decomposition # # # # # # + id="BlvxSj4FL855" colab_type="code" outputId="16e63121-e350-42c2-832a-896618d953d4" colab={"base_uri": "https://localhost:8080/", "height": 235} unemp_m = pd.read_csv("unemp_monthly.csv", parse_dates=True, index_col = 'DATE') unemp_m.dropna(inplace = True) unemp_m.head() # + [markdown] id="MXvr9M1c5m32" colab_type="text" # ### Hodrick-Prescott (HP) filter # + id="sZ90JCph5mbR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 320} outputId="bbc5cf5a-1c04-4f56-e395-7146cef0eec9" hp_cycle, hp_trend = sm.tsa.filters.hpfilter(unemp_m['UNRATE']) #unemp_m['hp_cycle'] = hp_cycle unemp_m['hp_trend'] = hp_trend unemp_m.plot(subplots = True, figsize = (10,4)) # + [markdown] id="IvacZHPImTno" colab_type="text" # ### Baxter-King approximate band-pass filter # + id="G7m6z0crmx0O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 337} outputId="fdab6c94-220d-4edc-d195-4d2fb0e5ef31" bk_cycles = sm.tsa.filters.bkfilter(unemp_m['UNRATE']) unemp_m['bk_trend'] = unemp_m['UNRATE'] - pd.Series(bk_cycles) unemp_m.plot(subplots = True, figsize = (10,4)) # + [markdown] id="SaC5B7dl-Jp7" colab_type="text" # ## Christiano-Fitzgerald approximate band-pass filter # + id="8K8hpnB9-Pp0" colab_type="code" outputId="8d99cf04-0a62-4469-afd7-4f5f340f301c" colab={"base_uri": "https://localhost:8080/", "height": 452} cf_cycles, cf_trend = sm.tsa.filters.cffilter(unemp_m['UNRATE']) #unemp_m['cf_cycle'] = cf_cycles unemp_m['cf_trend'] = cf_trend unemp_m.plot(subplots = True, figsize = (10,6)) # + [markdown] id="3UZs8Nj9a0UE" colab_type="text" # ### Beveridge-Nelson decomposition # + id="uQcGACtxAMsP" colab_type="code" outputId="31302e65-d617-4227-adb4-495ae7503bdf" colab={"base_uri": "https://localhost:8080/", "height": 235} bn_dec = pd.read_csv("unemp_monthly.csv", parse_dates=True, index_col = 'DATE') bn_dec.head() # + id="xd738-gIANG1" colab_type="code" outputId="24169134-88eb-4177-fe95-883d956083d1" colab={"base_uri": "https://localhost:8080/", "height": 235} bn_dec["diff"] = bn_dec["UNRATE"] - bn_dec["UNRATE"].shift(1) bn_dec["diff_mean"] = bn_dec["diff"] - bn_dec['diff'].mean() bn_dec.dropna(inplace=True) bn_dec.head() # + id="Ts6jFvInAMnX" colab_type="code" outputId="50417615-698a-4663-d4c1-21ff6a58c0a4" colab={"base_uri": "https://localhost:8080/", "height": 513} # Taking 8 lags and auto-selecting the best orders of autoregression on the differenced series of unemployment rate select = ar_select_order(bn_dec["diff_mean"], 8, trend= 'n', glob=True) select.ar_lags res = select.model.fit() print(res.summary()) # + id="opx7ALIJRle6" colab_type="code" outputId="19405ac0-ab25-4169-98ed-82bbb89fa1e5" colab={"base_uri": "https://localhost:8080/", "height": 235} '''The formula for BN trend component = y_t + (phi/1-phi) * delta(differenced(1)-mean)_t From the results above, all our 3 selected AR lags have p-values=0; we choose diff_mean.L4 i.e. 
the 4th lag = 0.1791 = phi and put in the above formula RHS gives the BN trend = natural rate estimates''' bn_dec['natural_rate'] = bn_dec['UNRATE'] + (0.1791/(1-0.1791)) * bn_dec['diff_mean'] bn_dec.head() # + id="TmldH766TAFD" colab_type="code" outputId="8d26d1cd-7689-4c09-bac3-e37a79d4f624" colab={"base_uri": "https://localhost:8080/", "height": 297} # Plotting the natural rate derived from BN decomposition alongside the actual unemployment rate series plt.figure(figsize=(10, 4)) plt.plot(bn_dec.index, bn_dec['UNRATE'], label="Unemployment rate", c="dodgerblue", linewidth=1.5) plt.plot(bn_dec.index, bn_dec['natural_rate'], linestyle = '--', c="crimson", label="BN decomposition", linewidth=1.25) plt.xlabel("Year") plt.ylabel("Unemployment (%)") plt.legend() plt.tight_layout() # + [markdown] id="CiJ79wDpqwLD" colab_type="text" # ### Plotting all versions of the trend component # + id="ylLDkgbZqpf_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="094555e8-e1ea-44a3-fc4c-e34612f3341c" unemp_m = pd.concat([unemp_m, bn_dec['natural_rate']], axis=1) unemp_m.iloc[0,4] = 6.10 unemp_m.head() # + id="MKjFciturN9t" colab_type="code" colab={} unemp_m.dropna(inplace = True) # + id="wqxkNc4ms8sj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 420} outputId="0d54c530-f57c-4c2f-89f2-02940c851d83" unemp_m.plot(subplots = True, figsize = (10,5)) # + id="oY2no37WtDZN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1ebcd15d-251a-4af4-8feb-8c54c6bd6b1f" lst = [i+"_error" for i in unemp_m.columns[1:]] lst # + id="rGpyVWNZt7Lk" colab_type="code" colab={} unemp_m[lst[0]] = unemp_m['UNRATE'] - unemp_m['hp_trend'] unemp_m[lst[1]] = unemp_m['UNRATE'] - unemp_m['bk_trend'] unemp_m[lst[2]] = unemp_m['UNRATE'] - unemp_m['cf_trend'] unemp_m[lst[3]] = unemp_m['UNRATE'] - unemp_m['natural_rate'] # + id="1wROhwX-uojO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="11ba7c14-87ad-48a4-8f2e-b3c064c72593" unemp_m.head() # + id="jCUBwXQvurG0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="5fad4d1e-43fb-4210-c1b8-48759cd38b54" unemp_m[lst] = unemp_m[lst].apply(lambda x: x**2) unemp_m.head() # + id="qWBkxhM6vXoc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="0cdca02b-6b6b-4ce2-b04a-c9158d915ccf" for i in lst: k = np.sqrt(unemp_m[i].sum())/len(unemp_m) print('RMSE for {} is {}'.format(i,k)) # + [markdown] id="eaqrhrYgx2Kq" colab_type="text" # In conclusion, the Root Mean Squared Error (RMSE) i.e. the standard deviation of residuals is lowest for BN decomposition as compared to the other filters used for deriving trend components. However, this does not imply that the BN trend is the best representation of the natural rate, since we do not actually observe the natural rate. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # STM 32H743 FIR Filter performance # This notebook shows FIR Filter performance on the STM32H743 for various block sizes, number of channels, and number of taps. 
# # Source code is at https://github.com/ccrome/jupyter_notebooks/tree/master/stm32_fir_filter_test # # ## ************************************************ # ## If you just want to browse the data, Just hit Cell->Run All and scroll to the bottom # ## ************************************************ # from ipywidgets import interact from bokeh.io import curdoc, push_notebook, show, output_notebook from bokeh.layouts import row, column from bokeh.plotting import figure from bokeh.models import ColumnDataSource, Range1d from bokeh.models.widgets import Slider, TextInput output_notebook() import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # + clock_speed = 400e6 def load_data(fn): data_f32 = pd.read_csv(fn, dtype={'type':str}) data_f32 = data_f32.rename(columns=lambda x: x.strip()) data_f32['time'] = data_f32['time_ns']/1e9 data_f32['cycles'] = clock_speed * data_f32['time'] data_f32['calculations'] = data_f32['channels'] * data_f32['blocksize'] * data_f32['ntaps'] data_f32['cycles_per_calc'] = data_f32['cycles']/data_f32['calculations'] data_f32['ns'] = data_f32['calculations']/data_f32['time'] return data_f32 def get_data_chan_taps(data, dtype, nchan, ntaps): return data.loc[(data['type'] == dtype) & (data['channels'] == nchan) & (data['ntaps'] == ntaps)] return r def get_min_max(d, col, dtype=None): if dtype is None: return d.describe()[col]['min'], d.describe()[col]['max'] else: return dtype(d.describe()[col]['min']), dtype(d.describe()[col]['max']) # - data = load_data("fir_filter.csv") # + def get_data_from(data, dtype, nchan, ntaps): df = get_data_chan_taps(data, dtype, nchan, ntaps) arr = df.loc[:, ['blocksize', 'cycles_per_calc']].to_numpy() blocksize = arr[:, 0] cycles_per_mac = arr[:, 1] return blocksize, cycles_per_mac def plot_data_bokeh(dtypes, data, nchan, ntapss, ylimit=None): for dtype in dtypes: for ntaps in ntapss: df = get_data_chan_taps(data, dtype, nchan, ntaps) x = df['blocksize'] y = df['cycles_per_calc'] plot(df['blocksize'], df['cycles_per_calc'], "*-", label=f"t={dtype}, ch={nchan}, t={ntaps}") xlabel("blocksize") ylabel("cycles/calc") if ylimit is not None: ylim(ylimit[0], ylimit[1]) title(f"cycles/MAC, float32 FIR filter, channels={nchan}") legend() show() #source = ColumnDataSource(data=dict(x=x, y=y)) # + def update_data(taps, channels, dtype): blocksizes, cycles = get_data_from(data, dtype, nchan=channels, ntaps=taps) source.data = dict(x=blocksizes, y=cycles) push_notebook() bs_min, bs_max = get_min_max(data, 'blocksize', int) ch_min, ch_max = get_min_max(data, 'channels', int) tp_min, tp_max = get_min_max(data, 'ntaps', int) #bs_slider = Slider(start = bs_min, end = bs_max, value = bs_min, step = 1, title="Block Size") ch = 16 taps = 100 blocksizes, cycles = get_data_from(data, "f32", nchan=ch, ntaps=taps) source = ColumnDataSource(data=dict(x=blocksizes, y=cycles)) plot = figure (plot_width=500, plot_height=300, x_range=[0, bs_max+1], y_range=[0, 10]) plot.xaxis.axis_label="Block Size" plot.yaxis.axis_label="CPU Cycles/MAC" plot.line('x', 'y', source=source, line_width=3, line_alpha = 0.6) layout = row(plot) handle = show(layout, notebook_handle=True) i = interact (update_data, taps=(tp_min, tp_max), channels=(ch_min, ch_max), #types=[("f32", 1), ("q31", 2), ("q31f", 3), ("q15", 4), ("q15f", 5)], dtype=["f32", "q31", "q31f", "q15", "q15f"], ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python 
# name: python3 # --- # ## Convolutional Neural Networks # # Convolutional Neural Networks (CNN) are a class of networks, that are typically employed in the field of neural image processing. # Since fully connected layers for a large image would be prohibitively expensive, convolutions are used to vastly reduce the amount of parameters needed to process an image. # # Instead of connecting every output to every input with individual weights, we reduce the number of parameters by sharing the weights and applying the same ones to different parts of the input image. This reduces the number of weights to be learned in a single layer and the number of layers by taking into account the nature of the input data (2D pixel map). # # We use the Sequential model with Dense layers from before and add $\texttt{Conv2D}$, $\texttt{MaxPooling2D}$, $\texttt{Dropout}$ and $\texttt{Flatten}$ layers. # # # As an example task we classify the images in the MNIST dataset, a set of images of handwritten digits, with respect to which digit an image supposedly shows. # + import h5py import numpy as np import scipy.signal as scis import tensorflow as tf import matplotlib.pyplot as plt plt.style.use('seaborn') SEED = 42 np.random.seed(SEED) tf.set_random_seed(SEED) from keras.callbacks import Callback from keras.models import Sequential from keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D from keras.utils import to_categorical # + def load_data(path='mnist.h5'): """ Loads a dataset and its supervisor labels from the HDF5 file specified by the path. It is assumed that the HDF5 dataset containing the data is called 'data' and the labels are called 'labels'. Parameters ---------- path : str, optional The absolute or relative path to the HDF5 file, defaults to mnist.h5. Returns ------- data_and_labels : tuple(np.array[samples, width, height], np.array[samples]) a tuple with two numpy array containing the data and labels """ with h5py.File(path, 'r') as handle: return np.array(handle['data']), np.array(handle['labels']) data, labels = load_data() # - # A single discrete convolution is not different from a filter in traditional image processing. They allow you for example to blur, sharpen or detect edges within an image. # # A filter is computed by extracting every possible sub-image, e.g $3\times 3$ pixels, of the entire image and computing the weighted sum of the sub-image and filter. This will produce a single corresponding output pixel. Look at the following example filters and try to understand what they do. # + gauss_filter = scis.convolve2d(data[0], np.array([ [1, 2, 1], [2, 4, 2], [1, 2, 1] ]), mode='same') laplacian_filter = scis.convolve2d(data[0], np.array([ [0, 1, 0], [1, -4, 1], [0, 1, 0] ]), mode='same') high_pass_filter = scis.convolve2d(data[0], np.array([ [-1, -1, -1], [-1, 9, -1], [-1, -1, -1] ]), mode='same') sharpen = scis.convolve2d(data[0], np.array([ [-0.5, -0.5, -0.5], [-0.5, 8.5, -0.5], [-0.5, -0.5, -0.5] ]), mode='same') sobel_x = scis.convolve2d(data[0], np.array([ [1, 0, -1], [2, 0, -2], [1, 0, -1] ]), mode='same') sobel_y = scis.convolve2d(data[0], np.array([ [ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1,] ]), mode='same') sobel = np.sqrt(sobel_x ** 2 + sobel_y ** 2) emboss_filter = scis.convolve2d(data[0], np.array([ [-1, -1, 0], [-1, 0, 1], [ 0, 1, 1,] ]), mode='same') # - # Filters also exist in larger version, with increased kernel-size, usually enhancing the 'intensity' of the particular effect. 
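# As an aside, the weighted sum that `scipy.signal.convolve2d` performs can be written out by
# hand. The cell below is only a rough reference sketch (the function name `naive_convolve2d`
# is illustrative, not part of any library): it zero-pads the image, slides the kernel over
# every pixel position and accumulates the weighted sum, which is what the library routine
# does far more efficiently. The larger, 5x5 versions of the example filters follow in the
# next cell.

# +
def naive_convolve2d(image, kernel):
    """Naive 2-D convolution with zero padding, matching mode='same' for odd-sized kernels."""
    kh, kw = kernel.shape
    # True convolution flips the kernel; for symmetric kernels (Gaussian, Laplacian)
    # this is identical to the plain weighted sum (cross-correlation).
    flipped = np.flip(kernel)
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant')
    output = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # Weighted sum of the kernel-sized sub-image centred on (y, x).
            output[y, x] = np.sum(padded[y:y + kh, x:x + kw] * flipped)
    return output

# Sanity check against the library implementation (should agree up to floating point error):
# np.allclose(naive_convolve2d(data[0], np.ones((3, 3))),
#             scis.convolve2d(data[0], np.ones((3, 3)), mode='same'))
# -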
# + gauss_filter_5x5 = scis.convolve2d(data[0], np.array([ [1, 4, 7, 2, 1], [4, 16, 26, 16, 4], [7, 26, 41, 26, 7], [4, 16, 26, 16, 4], [1, 4, 7, 4, 1], ]), mode='same') laplacian_filter_5x5 = scis.convolve2d(data[0], np.array([ [0, 0, 1, 0, 0], [0, 1, 2, 1, 0], [1, 2, -16, 2, 1], [0, 1, 2, 1, 0], [0, 0, 1, 0, 0] ]), mode='same') high_pass_filter_5x5 = scis.convolve2d(data[0], np.array([ [ 0, -1, -1, -1, 0], [-1, 2, -4, 2, -1], [-1, -4, 13, -4, -1], [-1, 2, -4, 2, -1], [ 0, -1, -1, -1, 0] ]), mode='same') sharpen_5x5 = scis.convolve2d(data[0], np.array([ [-0.5, -0.5, -0.5, -0.5, -0.5], [-0.5, -0.5, -0.5, -0.5, -0.5], [-0.5, -0.5, 24.5, -0.5, -0.5], [-0.5, -0.5, -0.5, -0.5, -0.5], [-0.5, -0.5, -0.5, -0.5, -0.5], ]), mode='same') sobel_x_5x5 = scis.convolve2d(data[0], np.array([ [2, 1, 0, -1, -2], [2, 1, 0, -1, -2], [4, 2, 0, -2, -4], [2, 1, 0, -1, -2], [2, 1, 0, -1, -2] ]), mode='same') sobel_y_5x5 = scis.convolve2d(data[0], np.array([ [ 1, 1, 4, 1, 1], [ 1, 1, 2, 1, 1], [ 0, 0, 0, 0, 0], [-1, -1, -2, -1, -1], [-1, -1, -4, -1, -1], ]), mode='same') sobel_5x5 = np.sqrt(sobel_x_5x5 ** 2 + sobel_y_5x5 ** 2) emboss_filter_5x5 = scis.convolve2d(data[0], np.array([ [-1, -1, -1, -1, 0], [-1, -1, -1, 0, 1], [-1, -1, 0, 1, 1], [-1, 0, 1, 1, 1], [ 0, 1, 1, 1, 1], ]), mode='same') # + def show_examples(data): """ Plots example images. Parameters ---------- data : np.array[grid_width, grid_height, image_width, image_height] the image dataset """ height = data.shape[0] width = data.shape[1] figure, axes = plt.subplots(height, width, figsize=(16, 4), sharex=True, sharey=True) for h in range(height): for w in range(width): axis = axes[h][w] axis.grid(False) axis.imshow(data[h, w, :, :], cmap='gist_gray') plt.show() filtered_images = np.array([ [data[0], gauss_filter, laplacian_filter, high_pass_filter, sharpen, sobel, emboss_filter], [data[0], gauss_filter_5x5, laplacian_filter_5x5, high_pass_filter_5x5, sharpen_5x5, sobel_5x5, emboss_filter_5x5] ]) show_examples(filtered_images) # - # ### Digit Recognition # # To determine which of these filters is beneficial for a classification task and which particular weights to use is a manual and very labor intensive task. Therefore, the idea of a CNN is to determines these weights automatically. Analogous to the fully-connected neural network, this is done through optimization of a loss function. # + def preprocess_data(data, labels): """ Flattens the each image in the data to be a one-dimensional feature vector and encodes the labels in one-hot encoding. Parameters ---------- data : np.array[samples, width, height] the image dataset labels : np.array[samples] the corresponding labels Returns ------- data_and_labels : tuple(np.array[samples, width * height], np.array[samples, classes]) a tuple with two numpy array containing the flattened data and one-hot encoded labels """ ############## return the flattened images and labels ############## preprocessed_data, preprocessed_labels = preprocess_data(data, labels) # - # In practice, several convolutional layers with multiple filters each, are applied directly to the image. Hence, a convolutional layers produces multiple image as output, where the number of filters in the layers corresponds to the number of produced images, also often refered to as "color" channels. Convolutions by themselves will slightly reduce the size of the image, as you require a number of "frame" pixels around the convolution (half the convolution filter size, floored). 
To counteract this effect, an image may be padded with additional pixels, e.g. a constant value like 0 or 1, or by repeating the border pixels (padding='same'). # # Another way is to use pooling layers (e.g. $\texttt{MaxPooling2D}$) to deliberately combine several outputs of a previous layer to a single input for the next one. In case of a MaxPooling layer with a kernel size of $2\times 2$ pixel and a stride of $2\times 2$ for example, all non-overlapping $2\times 2$ pixel sub-images are reduced to their contained maximum value. The amount of pixels in the output is reduced to a quarter. # # Any activation function that is aplicable for a dense layer works for also for a convolutional one. In recent year, however, the community has introduced a number of new activation functions that are a) faster to compute and b) do not suffer as heavily from the vanishing gradient problem, like the sigmoid function does. One of the is the *Recitified Linear Unit (ReLU)*, defined as: $$f(x) = max(x, 0)$$ # # For classification tasks on images, like digit recognition, a final dense layer is still needed to do the actual classification of the previously filtered image, similar to fully-connected neural network. # # Build an image classification model, that takes the correct input shape from the data and passes it through some convolutional, pooling and finally a dense layer and outputs the probability of the image belonging to each class. # # Train and test the model. # + def build_model(data, classes): """ Constructs a convolutional neural network model for the given data and number of classes Parameters ---------- data : np.array[samples, width * height] the image dataset classes : int the number of unique classes in the dataset Returns ------- model : keras.Model the fully-connected neural network """ model = Sequential() ############## add all the layers compile the model with sgd optimizer, categorical crossentropy loss and accuracy printing ############## return model model = build_model(preprocessed_data, classes=preprocessed_labels.shape[1]) # - class TrainingHistory(Callback): """ Class for tracking the training progress/history of the neural network. Implements the keras.Callback interface. """ def on_train_begin(self, logs): self.loss = [] self.acc = [] self.validation_loss = [] self.validation_acc = [] def on_batch_end(self, _, logs): """ Callback invoked after each training batch. Should track the training loss and accuracy in the respective members. Parameters ---------- _ : int unused, int corresponding to the batch number logs : dict{str -> float} a dictionary mapping from the observed quantity to the actual valu """ if 'loss' in logs: self.loss.append(logs['loss']) if 'acc' in logs: self.acc.append(logs['acc']) def on_epoch_end(self, _, logs): if 'val_loss' in logs: self.validation_loss.append(logs['val_loss']) if 'val_acc' in logs: self.validation_acc.append(logs['val_acc']) # + def train_model(model, data, labels, epochs=25, batch_size=64, train_fraction=0.8): """ Trains a convolutional neural network given the data and labels. This time we employ the automatic train and test set functionality of Keras. 
Parameters ---------- model : keras.Model the fully-connected neural network data : np.array[samples, width * height] the entire data labels : np.array[samples, classes] the one-hot encoded training labels epoch: positive int, optional the number of epochs for which the neural network is trained, defaults to 50 batch_size: positive int, optional the size of the training batches, defaults to 64 train_fraction: positive float, optional the fraction of data to be used as training data, defaults to 0.8 Returns ------- history : TrainingHistory the tracked training and test history """ history = TrainingHistory() model.fit( data, labels, epochs=epochs, batch_size=batch_size, validation_split=1.0 - train_fraction, shuffle=True, callbacks=[history] ) return history history = train_model(model, preprocessed_data, preprocessed_labels) # - # ### Visualization of the Training Progress # # Using the previously created and filled TrainingHistory instance, we are able to plot the loss and accuracy of the training batches and test epochs. # + def plot_history(history): """ Plots the training (batch-wise) and test (epoch-wise) loss and accuracy. Parameters ---------- history : TrainingHistory an instance of TrainingHistory monitoring callback """ figure, (batch_axis, epoch_axis) = plt.subplots(1, 2, figsize=(16, 5), sharey=True) # plot the training loss and accuracy batch_axis.set_xlabel('batch number') training_batches = np.arange(len(history.loss)) batch_axis.plot(training_batches, history.loss, color='C0', label='loss') batch_axis.set_ylabel('loss') batch_acc_axis = batch_axis.twinx() batch_acc_axis.grid(False) batch_acc_axis.set_ylabel('accuracy') batch_acc_axis.set_ylim(bottom=0.0) batch_acc_axis.plot(training_batches, history.acc, color='C4', label='accuracy') # plot the training loss and accuracy epoch_axis.set_xlabel('epoch number') validation_epochs = np.arange(len(history.validation_loss)) epoch_axis.plot(validation_epochs, history.validation_loss, color='C0') epoch_axis.set_ylabel('validation loss') epoch_acc_axis = epoch_axis.twinx() epoch_acc_axis.grid(False) epoch_acc_axis.set_ylabel('validation accuracy') epoch_acc_axis.set_ylim(bottom=0.0) epoch_acc_axis.plot(validation_epochs, history.validation_acc, color='C4') # display a legend figure.legend(loc=8) plt.show() plot_history(history) # - # Next: add $\texttt{Dropout}$ layers and observe their effect on the history. # \emph{Dropout} is a technique to reduce overfitting to the training data by randomly selecting neurons and setting their activation to $0$. # The $\texttt{Dropout}$ layer in Keras just passes through its input, except in randomly selected positions, where it passes through $0$. # ### Your Convolutions # # Extract the learned filters from CNN and apply them to the image. Try to compare them with the previously introduced example filters. # + def apply_network_convolutions(image, weights): """ Applies the passed convolutional weights to the image. 
Parameters ---------- image : np.array[width, height] the image to be filtered weights : np.array[filters, filter_width, filter_height] the convolutional filter weights Returns ------- filtered images : np.array[filters, width, height] the filtered images """ height, width = image.shape kernels_y, kernels_x = weights.shape[:2] output = np.zeros((kernels_y, kernels_x + 1, height, width), dtype=image.dtype) ############## apply the convolutions to the image and store the output in output ############## return output weights = np.moveaxis(model.get_weights()[0], 0, -1).reshape(4, 8, 5, 5) model_convolutions = apply_network_convolutions(data[0], weights) show_examples(model_convolutions) # - # Activate the filtered images additionally with ReLU function. activated = model_convolutions.copy() activated[activated < 0] = 0 show_examples(activated) # ### Bonus: # Add a [naive inception layer](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf) to your CNN and investigate its effect. # Simplified, an inception layer are multiple convolutional layers with different filter sizes in parallel that are supposed to 'look' at the image on different resolution levels. # # In order to be able to implement this network, you will need advanced features of the Keras library, like the [functional API](https://keras.io/getting-started/functional-api-guide/), which allows you to specify layer connections, and the $\texttt{Input}$ and $\texttt{Model}$ classes. # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # + deletable=true editable=true library(tidyverse) library(ggplot2) library(vegan) # + [markdown] deletable=true editable=true # ## Metadata # + deletable=true editable=true sra.md = read.delim("metadata/sra2name.tab") #head(sra.md) # + deletable=true editable=true edw.md = read.delim("edwards-data/greenhouse_metadata.txt") #head(edw.md) # + deletable=true editable=true # Join and match metadata metadata = sra.md %>% mutate(name=gsub('_', '.', name)) %>% inner_join(edw.md, by=c("name"="SampleID")) %>% arrange(runid) %>% mutate(Compartment=factor(Compartment, levels=c("Bulk Soil", "Rhizosphere", "Rhizoplane", "Endosphere"))) head(metadata) # + [markdown] deletable=true editable=true # ## kWIP # + deletable=true editable=true kwip = read.delim("kwip/greenhouse_wip.dist", row.names=1) %>% as.matrix() # + deletable=true editable=true kwip.sras = rownames(kwip) kwip.names = metadata$name[match(kwip.sras, metadata$runid)] rownames(kwip) = colnames(kwip) = kwip.names # + deletable=true editable=true kwip.mds = cmdscale(as.dist(kwip), eig=T) kwip.var = (kwip.mds$eig / sum(kwip.mds$eig))[1:2] round(kwip.var * 100, 2) # + [markdown] deletable=true editable=true # ## Weighted UF # + deletable=true editable=true wuf = read.delim("edwards-data//weighted.gh.unifrac", row.names=1, sep=" ") %>% as.matrix() # + deletable=true editable=true all(rownames(wuf) == rownames(kwip)) # + deletable=true editable=true wuf.mds = cmdscale(as.dist(wuf),eig=T) wuf.var = (wuf.mds$eig / sum(wuf.mds$eig))[1:2] round(wuf.var * 100, 2) # + [markdown] deletable=true editable=true # ## UUF # + deletable=true editable=true uuf = read.delim("edwards-data/unweighted.gh.unifrac", row.names=1, sep=" ") %>% as.matrix() # + deletable=true editable=true all(rownames(uuf) == rownames(kwip)) # + deletable=true editable=true uuf.mds = cmdscale(as.dist(uuf),eig=T) uuf.var = (uuf.mds$eig / 
sum(uuf.mds$eig))[1:2] round(uuf.var * 100, 2) # + [markdown] deletable=true editable=true # # Plotting # # + deletable=true editable=true edw.colours = c("#E41A1C", "#984EA3", "#4DAF4A", "#377EB8") # + deletable=true editable=true all(rownames(kwip.mds) == rownames(wuf.mds)) # + deletable=true editable=true all(rownames(kwip.mds) == rownames(uuf.mds)) # + deletable=true editable=true plot.dat = data.frame( PC1.kwip = kwip.mds$points[,1], PC2.kwip = kwip.mds$points[,2], PC1.wuf = wuf.mds$points[,1], PC2.wuf = wuf.mds$points[,2], PC1.uuf = uuf.mds$points[,1], PC2.uuf = uuf.mds$points[,2], name = rownames(kwip.mds$points) ) plot.dat = left_join(plot.dat, metadata) # + deletable=true editable=true # kWIP p = ggplot(plot.dat, aes(x=PC1.kwip, y=PC2.kwip, colour=Compartment, shape=Site)) + geom_point(alpha=0.75, size=2) + scale_color_manual(values = edw.colours) + xlab("PC1") + ylab("PC2") + theme_bw() + ggtitle("kWIP") + theme(panel.grid=element_blank()) svg("gh_kwip.svg", width=4, height = 3) print(p) dev.off() print(p) # + deletable=true editable=true # WUF p = ggplot(plot.dat, aes(x=PC1.wuf, y=PC2.wuf, colour=Compartment, shape=Site)) + geom_point(alpha=0.75, size=2) + scale_color_manual(values = edw.colours) + xlab("PC1") + ylab("PC2") + theme_bw() + ggtitle("Weighted UniFrac") + theme(panel.grid=element_blank()) svg("gh_wuf.svg", width=4, height = 3) print(p) dev.off() print(p) # + deletable=true editable=true # UUF p = ggplot(plot.dat, aes(x=PC1.uuf, y=PC2.uuf, colour=Compartment, shape=Site)) + geom_point(alpha=0.75, size=2) + scale_color_manual(values = edw.colours) + xlab("PC1") + ylab("PC2") + theme_bw() + ggtitle("UniFrac") + theme(panel.grid=element_blank()) svg("gh_uuf.svg", width=4, height = 3) print(p) dev.off() print(p) # + [markdown] deletable=true editable=true # # Distance Matrix Correlation # # # + deletable=true editable=true distcor = function(a, b, method="spearman") { a = as.matrix(a) a = a[lower.tri(a)] b = as.matrix(b) b = b[lower.tri(b)] cor(a, b, method=method) } # + deletable=true editable=true distcor(kwip, uuf) # + deletable=true editable=true distcor(kwip, wuf) # + deletable=true editable=true distcor(uuf, wuf) # + [markdown] deletable=true editable=true # # Summary of results # # #### Axis pct contributions: # # |metric | PC1 | PC2 | # |-------|-----|-----| # |kWIP |22.6 |15.8 | # |WUF |46.4 |11.5 | # |UUF |18.1 |14.9 | # # # #### Correlations between metrics: # # - kwip-> WUF: 0.88 # - kwip-> UUF: 0.90 # - WUF-> UUF: 0.73 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### 155. Min Stack # #### Content #

# Design a stack that supports push, pop, top, and retrieving the minimum element in constant time.
#
# Implement the MinStack class:
#
# - MinStack() initializes the stack object.
# - void push(val) pushes the element val onto the stack.
# - void pop() removes the element on the top of the stack.
# - int top() gets the top element of the stack.
# - int getMin() retrieves the minimum element in the stack.
#
# Example 1:
#
# Input
# ["MinStack","push","push","push","getMin","pop","top","getMin"]
# [[],[-2],[0],[-3],[],[],[],[]]
#
# Output
# [null,null,null,null,-3,null,0,-2]
#
# Explanation
# MinStack minStack = new MinStack();
# minStack.push(-2);
# minStack.push(0);
# minStack.push(-3);
# minStack.getMin(); // return -3
# minStack.pop();
# minStack.top();    // return 0
# minStack.getMin(); // return -2
#
# Constraints:
#
# - -2^31 <= val <= 2^31 - 1
# - pop, top, and getMin will always be called on non-empty stacks.
# - At most 3 * 10^4 calls will be made to push, pop, top, and getMin.
    # # #### Difficulty: Easy, AC rate: 47.6% # # #### Question Tags: # - Stack # - Design # # #### Links: # 🎁 [Question Detail](https://leetcode.com/problems/min-stack/description/) | 🎉 [Question Solution](https://leetcode.com/problems/min-stack/solution/) | 💬 [Question Discussion](https://leetcode.com/problems/min-stack/discuss/?orderBy=most_votes) # # #### Hints: #
    Hint 0 🔍Consider each node in the stack having a minimum value. (Credits to @aakarshmadhavan)
    # # #### Sample Test Case # ["MinStack","push","push","push","getMin","pop","top","getMin"] # [[],[-2],[0],[-3],[],[],[],[]] # --- # What's your idea? # # 栈中每个节点都可以暂存该节点入栈时的快照信息。此处的信息是入栈时全局最小值 # # --- # + isSolutionCode=true class MinStack: def __init__(self): """ initialize your data structure here. """ self.stack = [(0, 2**31-1)] def push(self, val: int) -> None: self.stack.append((val, min(val, self.stack[-1][1]))) def pop(self) -> None: self.stack.pop() def top(self) -> int: return self.stack[-1][0] def getMin(self) -> int: return self.stack[-1][1] # Your MinStack object will be instantiated and called as such: # obj = MinStack() # obj.push(val) # obj.pop() # param_3 = obj.top() # param_4 = obj.getMin() # + ops = ["MinStack","push","push","push","getMin","pop","top","getMin"] params = [[],[-2],[0],[-3],[],[],[],[]] stack = MinStack() results = [] for op, param in zip(ops[1:], params[1:]): if op == 'push': results.append(stack.push(param[0])) elif op == 'getMin': results.append(stack.getMin()) elif op == 'pop': results.append(stack.pop()) elif op == 'top': results.append(stack.top()) results # - import sys, os; sys.path.append(os.path.abspath('..')) from submitter import submit submit(155) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: rnntutorial # language: python # name: rnntutorial # --- # # Multi step model (vector output approach) # # In this notebook, we demonstrate how to: # - prepare time series data for training a RNN forecasting model # - get data in the required shape for the keras API # - implement a RNN model in keras to predict the next 3 steps ahead (time *t+1* to *t+3*) in the time series. This model uses recent values of temperature and load as the model input. The model will be trained to output a vector, the elements of which are ordered predictions for future time steps. # - enable early stopping to reduce the likelihood of model overfitting # - evaluate the model on a test dataset # # The data in this example is taken from the GEFCom2014 forecasting competition1. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. The task is to forecast future values of electricity load. # # 1, , , , and , "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016. 
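# Before working with the actual GEFCom2014 data, the input shape required by the Keras API
# can be illustrated with a small NumPy sketch: every training sample consists of the T most
# recent observations, shaped as (samples, time steps, features), together with a target
# vector holding the next HORIZON values. The helper below (`make_windows`) is purely
# illustrative; the notebook itself uses the TimeSeriesTensor convenience class for this step.

# +
import numpy as np

def make_windows(series, T, horizon):
    """Turn a 1-D series into (samples, T, 1) inputs and (samples, horizon) target vectors."""
    X, y = [], []
    for i in range(len(series) - T - horizon + 1):
        X.append(series[i:i + T])                  # T past values as model input
        y.append(series[i + T:i + T + horizon])    # next `horizon` values as the target vector
    return np.array(X)[..., np.newaxis], np.array(y)

# Toy example: 10 observations, T=6 past steps, 3-step-ahead targets
X_demo, y_demo = make_windows(np.arange(10, dtype=float), T=6, horizon=3)
print(X_demo.shape, y_demo.shape)   # (2, 6, 1) (2, 3)
# -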
# + import os import warnings import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt from collections import UserDict # %matplotlib inline from common.utils import load_data, mape, TimeSeriesTensor, create_evaluation_df pd.options.display.float_format = '{:,.2f}'.format np.set_printoptions(precision=2) warnings.filterwarnings("ignore") # - # Load data into Pandas dataframe if not os.path.exists(os.path.join('data', 'energy.csv')): # Download and move the zip file # !wget https://www.dropbox.com/s/pqenrr2mcvl0hk9/GEFCom2014.zip # !mv GEFCom2014.zip ./data # If not done already, extract zipped data and save as csv # %run common/extract_data.py energy = load_data() energy.head() # + valid_start_dt = '2014-09-01 00:00:00' test_start_dt = '2014-11-01 00:00:00' T = 6 HORIZON = 3 # - train = energy.copy()[energy.index < valid_start_dt][['load', 'temp']] # + from sklearn.preprocessing import MinMaxScaler y_scaler = MinMaxScaler() y_scaler.fit(train[['load']]) X_scaler = MinMaxScaler() train[['load', 'temp']] = X_scaler.fit_transform(train) # - # Use the TimeSeriesTensor convenience class to: # 1. Shift the values of the time series to create a Pandas dataframe containing all the data for a single training example # 2. Discard any samples with missing values # 3. Transform this Pandas dataframe into a numpy array of shape (samples, time steps, features) for input into Keras # # The class takes the following parameters: # # - **dataset**: original time series # - **H**: the forecast horizon # - **tensor_structure**: a dictionary discribing the tensor structure in the form { 'tensor_name' : (range(max_backward_shift, max_forward_shift), [feature, feature, ...] ) } # - **freq**: time series frequency # - **drop_incomplete**: (Boolean) whether to drop incomplete samples tensor_structure = {'X':(range(-T+1, 1), ['load', 'temp'])} train_inputs = TimeSeriesTensor(train, 'load', HORIZON, tensor_structure) train_inputs.dataframe.head(3) train_inputs['X'][:3] train_inputs['target'][:3] # Construct validation set (keeping W hours from the training set in order to construct initial features) look_back_dt = dt.datetime.strptime(valid_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1) valid = energy.copy()[(energy.index >=look_back_dt) & (energy.index < test_start_dt)][['load', 'temp']] valid[['load', 'temp']] = X_scaler.transform(valid) valid_inputs = TimeSeriesTensor(valid, 'load', HORIZON, tensor_structure) # ## Implement the RNN # We will implement a RNN forecasting model with the following structure: # # ![Multi-step vector output RNN model](./images/multi_step_vector_output.png "Multi-step vector output RNN model") from keras.models import Model, Sequential from keras.layers import GRU, Dense from keras.callbacks import EarlyStopping LATENT_DIM = 5 BATCH_SIZE = 32 EPOCHS = 50 model = Sequential() model.add(GRU(LATENT_DIM, input_shape=(T, 2))) model.add(Dense(HORIZON)) model.compile(optimizer='RMSprop', loss='mse') model.summary() earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5) model.fit(train_inputs['X'], train_inputs['target'], batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=(valid_inputs['X'], valid_inputs['target']), callbacks=[earlystop], verbose=1) # ## Evaluate the model look_back_dt = dt.datetime.strptime(test_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1) test = energy.copy()[test_start_dt:][['load', 'temp']] test[['load', 'temp']] = X_scaler.transform(test) test_inputs = TimeSeriesTensor(test, 'load', HORIZON, 
tensor_structure) predictions = model.predict(test_inputs['X']) predictions eval_df = create_evaluation_df(predictions, test_inputs, HORIZON, y_scaler) eval_df.head() # Compute MAPE for each forecast horizon eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual'] eval_df.groupby('h')['APE'].mean() # Compute MAPE across all predictions mape(eval_df['prediction'], eval_df['actual']) # Plot actuals vs predictions at each horizon for first week of the test period. As is to be expected, predictions for one step ahead (*t+1*) are more accurate than those for 2 or 3 steps ahead # + plot_df = eval_df[(eval_df.timestamp<'2014-11-08') & (eval_df.h=='t+1')][['timestamp', 'actual']] for t in range(1, HORIZON+1): plot_df['t+'+str(t)] = eval_df[(eval_df.timestamp<'2014-11-08') & (eval_df.h=='t+'+str(t))]['prediction'].values fig = plt.figure(figsize=(15, 8)) ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0) ax = fig.add_subplot(111) ax.plot(plot_df['timestamp'], plot_df['t+1'], color='blue', linewidth=4.0, alpha=0.75) ax.plot(plot_df['timestamp'], plot_df['t+2'], color='blue', linewidth=3.0, alpha=0.5) ax.plot(plot_df['timestamp'], plot_df['t+3'], color='blue', linewidth=2.0, alpha=0.25) plt.xlabel('timestamp', fontsize=12) plt.ylabel('load', fontsize=12) ax.legend(loc='best') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Home Assignment 1 # You should work on the assignement in groups of two students. Upload your solution as a jupyter notebook to moodle by *27th of May 23:59h*. The deadline is strict. Do not forget to specify the names of _all_ contributing students in the jupyter notebook. # # Please make sure that your intermediate results follow the required naming conventions, otherwise point deductions will apply. # # You should add comments to your code where necessary and print the relevant results. # You can use code snippets (few lines of code), but you have to cite all your sources! Copying uncited sources or solutions of your fellow students will be (without warning) counted as plagiarism and expulsion from the course. # # # # ## 1. abgeordnetenwatch.de # abgeordnetenwatch.de is a website that allows to ask questions to politicians online and receive answers. They also provide an API to obtain data on politicians: https://www.abgeordnetenwatch.de/api/ # # Use the API to obtain data of all candidates for the state parliament of bavaria. Put together a pandas dataframe with five columns: # * first name # * last name # * party # * gender # * age (you can calculate it from the birthyear) # # Store this data frame in a variable called 'df_politicians'. # # What is the most frequent last name in the parliament? # Compute the gender ratio (i.e., % of female deputies) for each party. Store the results in a dictionary with the name 'gender_ratios' with the party name as key and the percentage of females as value. # # Visualize results in a bar chart. # Compare the age distribution for the green party and the conservative CSU party. # For this, plot a semi-transparent histogram (comparable to this: https://i.stack.imgur.com/UI5c0.png) with the age distribution of the two parties # ## 2. History in Wikipedia # Download the html text from the article 'History_of_Germany' from the English Wikipedia. 
Store the results in a string 'full_html' # Extract all links to other Wikipedia page from this article. Store them in a list called 'all_links' in alphabetical order of the target article name. # Download the first 10 articles from this list and store the html contents in one string each. Put all strings in a list called 'full_linked_texts' # Write a function 'extract_years (string_input)' that gets a string and returns how often a certain year number from 1000-2019 occurs. You can assume that each number between 1000 and 2019 is a year number. The result is a dictionary with year numbers as keys, and the number of occurrences as values. # # Example: extract_years("The king reigned from 1245 to 1268. He died in 1268.") should return the dictionary {1245: 1, 1268:2} # Use your function on all 11 downloaded Wikipedia article raw texts to compute the overall number of occurrences for each year number. # Store the aggregated counts (summarized over all articles) in a dictionary called 'overall_counts' # Visualize the numbers of occurrences in a timeline. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (tensorflow) # language: python # name: tensorflow # --- # # Applying otimal LDA model on sentences of reviews # Goal: Finding the best topic modeling model with optimum value of number of topics 'k', to generate most optimal topics on reviews #Importing Libraries import gensim from nltk.stem import WordNetLemmatizer from nltk.tokenize import RegexpTokenizer from nltk.corpus import stopwords from stop_words import get_stop_words from gensim import corpora, models from gensim.models import Phrases from nltk.stem.porter import * import numpy as np import operator import nltk # Compute bigrams. from gensim.models import Phrases from pprint import pprint # Plotting tools import pyLDAvis import pyLDAvis.gensim # don't skip this import pickle import pandas as pd #Loading data. This data is obtained by grouping the product IDs with respect number reviews and merging their reviews together. df = pd.read_csv('/Users/harika_pradeep/Downloads/df_grouped.csv', index_col=0) #Loading metadata that has product titles for each product. df_m = pd.read_csv('/Users/harika_pradeep/Downloads/Title_prod.csv', index_col=0) df.shape df_m.shape df.head() df_m.head() #Merging both the dataframes with product ID column "asin" df_merged = pd.merge (df_m,df,left_on='asin',right_on='asin') df_merged.shape df_merged[df_merged['title'].isnull()].shape df_m['asin'].unique().shape[0] df_merged.head() # Sorting the rows in decreasing order df_merged.sort_values(by=['count'],inplace=True,ascending=False) df_merged.head () df_merged.reset_index(drop = True, inplace = True) df_merged.head ().shape #Considering only the top 25 products for further calculations (for memory constraints) df_req = df_merged.head(25) df_req.shape df_req df_req.to_csv("/Users/harika_pradeep/Downloads/top_25.csv") #splitting each review for these 25 products into sentences. 
This will be used while appying sentiment analysis on each sentence df_req_copy = df_req df_sentences = pd.DataFrame(df_req_copy.reviewText.str.split('.').tolist(), index=df_req_copy.asin).stack() df_sentences =df_sentences.reset_index([0, 'asin']) df_sentences.columns = ['asin', 'sentences'] #Total number of sentences on which we will be applying topic modeling and sentiment analysis is 300677 df_sentences.shape df_sentences.head() df_sentences.to_csv("/Users/harika_pradeep/Downloads/top_25_sents.csv") # # Tokenization : Tokenizing each sentence into words tokenizer = RegexpTokenizer(r'\w+') #Applying this on one sentence doc = df_sentences.sentences[0] tokens = tokenizer.tokenize(doc.lower()) print('{} characters in string vs {} words in a list'.format(len(doc), len(tokens))) docs = [token for token in tokens if not token.isnumeric()] print('{} words in a list after removing numbers'.format(len(docs))) # Remove words that are less than 4 characters only docs = [token for token in docs if len(token) > 3] print('{} words in a list after words that are less than 4 chars'.format(len(docs))) # # Preparing stopwords and removing them # + #create a merged list of stop words nltk_stpwd = stopwords.words('english') #Extend stopwords with commonly found tokens in review texts nltk_stpwd.extend(['generally', 'used', 'personally', 'review', 'honestly','truly','whatever','done','star','one','two','three','four','five','since','ever','even','much','thing','also','go','come','must']) stop_words_stpwd = get_stop_words('en') merged_stopwords = list(set(nltk_stpwd + stop_words_stpwd)) print(len(set(merged_stopwords))) print(merged_stopwords[:10]) # - stopped_tokens = [token for token in docs if not token in merged_stopwords] print('{} words in a list after removing stop words'.format(len(stopped_tokens))) # # Lemmatizing # "keeping only noun, adj, vb, adv" # Instantiate a WordNetLemmatizer lemmatizer = WordNetLemmatizer() lemm_tokens = [lemmatizer.lemmatize(token) for token in stopped_tokens] print(lemm_tokens[:10]) # + #Applying all the above steps on entire data num_sentences = df_sentences.shape[0] doc_set = [df_sentences.sentences[i] for i in range(num_sentences)] texts = [] for doc in doc_set: # putting our 5 steps together tokens = tokenizer.tokenize(doc.lower()) tokens_alp = [token for token in tokens if not token.isnumeric()] token_gr_3 = [token for token in tokens_alp if len(token) > 3] stopped_tokens = [token for token in token_gr_3 if not token in merged_stopwords] lemm_tokens = [lemmatizer.lemmatize(token) for token in stopped_tokens] # add tokens to list texts.append(lemm_tokens) # - bigram = Phrases(texts, min_count=30) for idx in range(len(texts)): for token in bigram[texts[idx]]: if '_' in token: # Token is a bigram, add to document. texts[idx].append(token) #Saving file for later use with open('texts_s.pkl', 'wb') as f: pickle.dump(texts, f) # Gensim's Dictionary encapsulates the mapping between normalized words and their integer ids. 
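# +
# Aside (a toy illustration, not part of the original pipeline): Dictionary assigns an
# integer id to every token, and doc2bow turns a token list into sparse (id, count)
# pairs -- the same transformation that is applied to the real `texts` just below.
from gensim import corpora  # already imported at the top of this notebook
toy_docs = [['battery', 'lasts', 'long'], ['battery', 'died', 'fast']]
toy_dict = corpora.Dictionary(toy_docs)
print(toy_dict.token2id)                                 # e.g. {'battery': 0, 'lasts': 2, ...}
print(toy_dict.doc2bow(['battery', 'battery', 'died']))  # e.g. [(0, 2), (3, 1)]
# -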
texts_dict = corpora.Dictionary(texts) # saving for later use texts_dict.save('auto_review_s.dict') # Examine each token’s unique id print(texts_dict) print("IDs 1 through 10: {}".format(sorted(texts_dict.token2id.items(), key=operator.itemgetter(1), reverse = False)[:10])) #Filter tokens that appear too rare are too frequent texts_dict.filter_extremes(no_below = 20, no_above = 0.15) # inplace filter print(texts_dict) print("top terms:") print(sorted(texts_dict.token2id.items(), key=operator.itemgetter(1), reverse = False)[:100]) #Create bag of words corpus for all the sentences corpus = [texts_dict.doc2bow(text) for text in texts] len(corpus) # # Applying LDA Model using Gensim Libray #Save corpus for later use gensim.corpora.MmCorpus.serialize('amzn_h_k_sentences.mm', corpus) # In order to generate most accurate topics selcting the number of topics while applying the model is important. We can evaluate the model performance based coherence coefficient. The more the coherence value the more accurate the topics are. #Build LDA model with num_topics = 5 lda_model = gensim.models.LdaModel(corpus=corpus, alpha='auto',eta='auto',id2word=texts_dict,num_topics=5, chunksize=10000,passes=10) lda_model.save('LDA_Model_sent.lda') pprint(lda_model.print_topics()) doc_lda = lda_model[corpus] top_topics = lda_model.top_topics(corpus, topn=20) # Compute Perplexity print('\nPerplexity: ', lda_model.log_perplexity(corpus)) #Find the average topic coherence avg_topic_coherence = sum([t[1] for t in top_topics]) / 10 print('Average topic coherence: %.4f.' % avg_topic_coherence) #Print topics and word distribution for each topic counter = 0 for topic in top_topics: print('Topic {}:'.format(counter)) counter += 1 pprint(topic) # Visualize the topics pyLDAvis.enable_notebook() #Prepare topic visualization vis = pyLDAvis.gensim.prepare(lda_model, corpus, texts_dict,sort_topics = False) vis pyLDAvis.save_html(vis,"vis_sent.html") # # Applying LDA Mallet model mallet_path = '/Users/harika_pradeep/Downloads/mallet-2.0.8/bin/mallet' # update this path ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=5, id2word=texts_dict) pprint(ldamallet.show_topics(formatted=False)) #Coeherenc coefficient calculation for Mallet LDA coherence_model_ldamallet = CoherenceModel(model=ldamallet, texts=texts, dictionary=texts_dict, coherence='c_v') coherence_ldamallet = coherence_model_ldamallet.get_coherence() print('\nCoherence Score: ', coherence_ldamallet) # The coherence coefficient is better for lda mallet rather than lda model. 
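# Aside (added note): `CoherenceModel` is used in the cell above and in the function
# below but is never imported in this notebook; assuming the standard gensim API, the
# missing import would be:
from gensim.models import CoherenceModel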
Now we can check for which value of 'k' (the number of topics) we get better coheherence def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3): coherence_values = [] model_list = [] for num_topics in range(start, limit, step): model = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=num_topics, id2word=texts_dict) model_list.append(model) coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v') coherence_values.append(coherencemodel.get_coherence()) return model_list, coherence_values model_list, coherence_values = compute_coherence_values(dictionary=texts_dict, corpus=corpus, texts=texts, start=2, limit=40, step=6) import matplotlib.pyplot as plt #Plotting the coherence for LDAmallet model for differen k values limit=40; start=2; step=6; x = range(start, limit, step) plt.plot(x, coherence_values) plt.xlabel("Num Topics") plt.ylabel("Coherence score") plt.legend(("coherence_values"), loc='best') plt.show() #best LDA model at topic number = 20 ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=20, id2word=texts_dict) pprint(ldamallet.show_topics(formatted=False)) #Coeherenc coefficient calculation for Mallet LDA coherence_model_ldamallet = CoherenceModel(model=ldamallet, texts=texts, dictionary=texts_dict, coherence='c_v') coherence_ldamallet = coherence_model_ldamallet.get_coherence() print('\nCoherence Score: ', coherence_ldamallet) ldamallet.save('ldamallet.lda') # #### Reference Links: # https://blog.insightdatascience.com/topic-modeling-and-sentiment-analysis-to-pinpoint-the-perfect-doctor-6a8fdd4a3904
# https://towardsdatascience.com/end-to-end-topic-modeling-in-python-latent-dirichlet-allocation-lda-35ce4ed6b3e0
# https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/#18dominanttopicineachsentence
    # https://kldavenport.com/topic-modeling-amazon-reviews/#what-can-we-do-with-this-in-production # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.5.0 # language: julia # name: julia-0.5 # --- # + [markdown] slideshow={"slide_type": "slide"} # # A day with (the) Julia (language) # # ## Exercises # + [markdown] slideshow={"slide_type": "slide"} # ### I. Reescribir la función usando filter! y testear # + slideshow={"slide_type": "fragment"} function listaralineamientos(direccion, extension::Regex=r"\.fasta$"; vacios::Bool=false) alns = String[] for nombre in readdir(direccion) if ismatch(extension, nombre) if vacios || filesize(joinpath(direccion, nombre)) > 0 push!(alns, nombre) end end end alns end function list_alns_filter(direccion, extension::Regex=r"\.fasta$"; vacios::Bool=false) # COMPLETAR end # + [markdown] slideshow={"slide_type": "fragment"} # **Pistas:** # # - `filter!` elimina de un vector los elementos para los cuales una función retorna `false`. # - Si el primer argumento es una expresión regular, elimina los elementos que no son `ismatch` con esa `Regex`. # - Base.Test.@test da un error si la condición que sigue es falsa (sino retorna nothing, no muestra nada) # + slideshow={"slide_type": "fragment"} using Base.Test # variable -> ... es la notación para una función anónima @test filter!(r"\.fasta$", readdir("data")) == filter!(x -> ismatch(r"\.fasta$", x), readdir("data")) # + slideshow={"slide_type": "slide"} function list_alns_filter(direccion, extension::Regex=r"\.fasta$"; vacios::Bool=false) alns = filter!(extension, readdir(direccion)) if !vacios filter!(nombre -> filesize(joinpath(direccion, nombre)) > 0, alns) end alns end # + slideshow={"slide_type": "fragment"} @test list_alns_filter("data") == listaralineamientos("data") @test list_alns_filter("data", vacios=true) == listaralineamientos("data", vacios=true) @test list_alns_filter("data", r"\.stockholm$") == listaralineamientos("data", r"\.stockholm$") # + [markdown] slideshow={"slide_type": "slide"} # ### II. Escribir una función para convertir un archivo Stockholm en FASTA. # + [markdown] slideshow={"slide_type": "fragment"} # **Pista:** En un archivo con formato [Stockholm](https://en.wikipedia.org/wiki/Stockholm_format), las secuencias son las únicas líneas que no comienzan con numeral **#=** ni **//**, la regex es: ```^(?![#=|//])``` # # ``` # O31699/88-139 EVMLTDIPRLHINDPIMKGFGMVINN..GFVCVENDE # #=GR O31699/88-139 AS ________________*____________________ # // # ``` # # **Usar**: `match`, `@r_str`, `open`, `eachline`, `close`, `println` or `print` # **Opcional**: `chomp` # + slideshow={"slide_type": "fragment"} """ Función para convertir un archivo Stockholm en FASTA (secuencia en una sola línea). La función toma como argumentos la dirección/nombre de un archivo Stockholm y del archivo FASTA que se creara. La función retorna la dirección/nombre del FASTA. """ function stockholm2fasta(sto, fas) # COMPLETAR end # + slideshow={"slide_type": "fragment"} ?stockholm2fasta # + slideshow={"slide_type": "slide"} """ Función para convertir un archivo Stockholm en FASTA (secuencia en una sola línea). La función toma como argumentos la dirección/nombre de un archivo Stockholm y del archivo FASTA que se creara. La función retorna la dirección/nombre del FASTA. 
""" function stockholm2fasta(sto, fas) fh_in = open(sto, "r") open(fas, "w") do fh_out for line in eachline(fh_in) m = match(r"^(?![#=|//])(\S+)\s+(\S+)", line) if m != nothing println(fh_out, ">", m.captures[1], "\n", m.captures[2]) end end end close(fh_in) fas end # + slideshow={"slide_type": "fragment"} stockholm2fasta("data/PF09645_full.stockholm", "data/out.fas") # + slideshow={"slide_type": "fragment"} run(`cat ./data/out.fas`) # + [markdown] slideshow={"slide_type": "slide"} # ### III. Escribir una función para convertir un archivo Stockholm en FASTA usando BioPython # # Usar [**PyCall**](https://github.com/stevengj/PyCall.jl) para importar BioPython en Julia. Un ejemplo similar en (Bio)Python está al final de la [wiki de AlignIO](http://biopython.org/wiki/AlignIO): # # ```python # from Bio import AlignIO # # input_handle = open("example.phy", "rU") # output_handle = open("example.sth", "w") # # alignments = AlignIO.parse(input_handle, "phylip") # AlignIO.write(alignments, output_handle, "stockholm") # # output_handle.close() # input_handle.close() # ``` # # **Usar:** `AlignIO.parse`, `AlignIO.write` # + slideshow={"slide_type": "fragment"} using PyCall @pyimport Bio.AlignIO as AlignIO """Función para convertir un archivo Stockholm en FASTA usando BioPython. La función toma como argumentos la dirección/nombre de un archivo Stockholm y del archivo FASTA que se creara. La función retorna la dirección/nombre del FASTA.""" function py_stockholm2fasta(sto, fas) fh_in = open(sto, "r") open(fas, "w") do fh_out # COMPLETAR end close(fh_in) fas end # + slideshow={"slide_type": "slide"} using PyCall @pyimport Bio.AlignIO as AlignIO """ Función para convertir un archivo Stockholm en FASTA usando BioPython. La función toma como argumentos la dirección/nombre de un archivo Stockholm y del archivo FASTA que se creara. La función retorna la dirección/nombre del FASTA. """ function py_stockholm2fasta(sto, fas) fh_in = open(sto, "r") open(fas, "w") do fh_out alignments = AlignIO.parse(fh_in, "stockholm") AlignIO.write(alignments, fh_out,"fasta") end close(fh_in) fas end # + slideshow={"slide_type": "fragment"} py_stockholm2fasta("data/PF09645_full.stockholm", "data/py_out.fas") # + slideshow={"slide_type": "fragment"} run(`cat ./data/py_out.fas`) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="j8Pi6JMTQn4y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="aa8605d6-42e0-4e05-b41f-149849ebd8c6" import numpy as np import tensorflow as tf import keras.backend as K from keras.datasets import mnist, cifar10 from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, BatchNormalization, MaxPooling2D from keras.layers import Flatten from keras.optimizers import SGD, Adam, RMSprop from keras.callbacks import LearningRateScheduler from keras.utils import np_utils # + id="BFsfPsqIQzJR" colab_type="code" colab={} def round_through(x): '''Element-wise rounding to the closest integer with full gradient propagation. A trick from [](http://stackoverflow.com/a/36480182) ''' rounded = K.round(x) return x + K.stop_gradient(rounded - x) def _hard_sigmoid(x): '''Hard sigmoid different from the more conventional form (see definition of K.hard_sigmoid). # Reference: - [BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. 
2016](http://arxiv.org/abs/1602.02830} ''' x = (0.5 * x) + 0.5 return K.clip(x, 0, 1) def binary_sigmoid(x): '''Binary hard sigmoid for training binarized neural network. # Reference: - [BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. 2016](http://arxiv.org/abs/1602.02830} ''' return round_through(_hard_sigmoid(x)) def binary_tanh(x): '''Binary hard sigmoid for training binarized neural network. The neurons' and weights' activations binarization function It behaves like the sign function during forward propagation And like: hard_tanh(x) = 2 * _hard_sigmoid(x) - 1 clear gradient when |x| > 1 during back propagation # Reference: - [BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. 2016](http://arxiv.org/abs/1602.02830} ''' return 2 * round_through(_hard_sigmoid(x)) - 1 # + id="Ro2y-LqdRAS9" colab_type="code" colab={} from keras.layers import Dense, Conv2D from keras import constraints from keras import initializers '''Binarized Dense and Convolution2D layers References: "BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1" [http://arxiv.org/abs/1602.02830] ''' class Clip(constraints.Constraint): def __call__(self, p): return K.clip(p, -1., 1.) class BinaryDense(Dense): def __init__(self, units, **kwargs): super(BinaryDense, self).__init__(units, use_bias=False, kernel_initializer=initializers.RandomUniform(-1., 1.), kernel_constraint=Clip(), **kwargs) def call(self, inputs): # 1. Binarize weights binary_kernel = binary_tanh(self.kernel) # 2. Perform matrix multiplication output = K.dot(inputs, binary_kernel) return output class BinaryConv2D(Conv2D): def __init__(self, filters, kernel_size, **kwargs): super(BinaryConv2D, self).__init__(filters, kernel_size, use_bias=False, padding='same', kernel_initializer=initializers.RandomUniform(-1., 1.), kernel_constraint=Clip(), **kwargs) def call(self, inputs): # 1. Binarize weights binary_kernel = binary_tanh(self.kernel) if self.padding == 'same': # ones padding instead of zero padding (to keep the whole network real binary) kh = self.kernel_size // 2 if type(self.kernel_size) == int else self.kernel_size[0] // 2 inputs = tf.pad(inputs, [[0,0], [kh, kh], [kh, kh], [0,0]], constant_values=1.) # 2. Perform convolution outputs = K.conv2d( inputs, binary_kernel, strides=self.strides, padding='valid', data_format=self.data_format, dilation_rate=self.dilation_rate) return outputs # + id="y65wFEzJRG2S" colab_type="code" colab={} epochs = 20 # learning rate schedule lr_start = 1e-3 lr_end = 1e-4 lr_decay = (lr_end / lr_start)**(1. / epochs) # BN epsilon = 1e-6 # default 1e-3 momentum = 0.9 # default 0.99 # + id="1rv1YlKKRIzo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="377c3d69-0700-4a80-9567-ce6b77165c25" # the data, shuffled and split between train and test sets dataset = 'mnist' train = 'softmax' if dataset == 'mnist': (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(60000, 28, 28, 1) X_test = X_test.reshape(10000, 28, 28, 1) elif dataset == 'cifar10': (X_train, y_train), (X_test, y_test) = cifar10.load_data() X_train = X_train.astype('float32') / 255. X_test = X_test.astype('float32') / 255. 
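# (Aside, not in the original notebook: the MNIST branch above only reshapes the images
#  and leaves them as 0-255 integers, while the CIFAR-10 branch rescales them to [0, 1].
#  If the same scaling is wanted for MNIST, the equivalent lines inside that branch
#  would be:
#      X_train = X_train.astype('float32') / 255.
#      X_test = X_test.astype('float32') / 255.
# )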
print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices classes = y_train.max() + 1 if train == 'hinge': Y_train = np_utils.to_categorical(y_train, classes) * 2 - 1 # -1 or 1 for hinge loss Y_test = np_utils.to_categorical(y_test, classes) * 2 - 1 elif train == 'softmax': Y_train = Y_train = np_utils.to_categorical(y_train, classes) Y_test = np_utils.to_categorical(y_test, classes) # + id="gOD4WemORLjm" colab_type="code" colab={} model = Sequential() # conv1 model.add(BinaryConv2D(32, (3,3), name='conv1', input_shape=X_train.shape[1:])) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn1')) model.add(Activation(binary_tanh, name='act1')) # conv2 model.add(BinaryConv2D(32, (3,3), name='conv2')) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn2')) model.add(Activation(binary_tanh, name='act2')) model.add(MaxPooling2D(name='pool2')) # conv3 model.add(BinaryConv2D(64, (3,3), name='conv3')) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn3')) model.add(Activation(binary_tanh, name='act3')) # conv4 model.add(BinaryConv2D(64, (3,3), name='conv4')) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn4')) model.add(Activation(binary_tanh, name='act4')) model.add(MaxPooling2D(name='pool4')) model.add(Flatten()) # dense1 model.add(BinaryDense(128, name='dense5')) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn5')) model.add(Activation(binary_tanh, name='act5')) # dense2 model.add(BinaryDense(classes, name='dense6')) model.add(BatchNormalization(epsilon=epsilon, momentum=momentum, name='bn6')) if train =='softmax': model.add(Activation('softmax')) # + id="jg_on0asRPqC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 850} outputId="493ca386-637c-4866-edc4-d971e6da8113" opt = Adam(lr=lr_start) if train == 'softmax': loss = 'categorical_crossentropy' elif train == 'hinge': loss = 'squared_hinge' model.compile(loss=loss, optimizer=opt, metrics=['acc']) model.summary() lr_scheduler = LearningRateScheduler(lambda e: lr_start * lr_decay ** e) # + id="kIWxmA2kRTDp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 714} outputId="8a30c075-1700-41b8-8a5d-b86ffe3ec34a" history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), batch_size=256, epochs=epochs, callbacks=[lr_scheduler]) # + id="b2fAO6dmvrQy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="3724f4f9-1515-4758-fc27-7d2b90f6a0b9" import matplotlib.pyplot as plt plt.plot(history.history['acc'], label='train') plt.plot(history.history['val_acc'], label='test') plt.grid() plt.legend() plt.plot() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="SIHHQjD35syL" colab_type="code" colab={} # Python 3.5.4 |Continuum Analytics, Inc.| # Jupyter Notebook 5.0.0 # SAMPLE CODE FROM RASCHKA (2015) # PERCEPTRON CLASSIFICATION FOR IRIS DATA import numpy as np import pandas as pd # %matplotlib inline import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import seaborn as sns # + id="WUTNeBvu5syQ" colab_type="code" colab={} class Perceptron(object): """Perceptron classifier. 
Parameters ------------ eta : float Learning rate (between 0.0 and 1.0) n_iter : int Passes over the training dataset. Attributes ----------- w_ : 1d-array Weights after fitting. errors_ : list Number of misclassifications in every epoch. """ def __init__(self, eta=0.01, n_iter=10): self.eta = eta self.n_iter = n_iter def fit(self, X, y): """Fit training data. Parameters ---------- X : {array-like}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. Returns ------- self : object """ self.w_ = np.zeros(1 + X.shape[1]) self.errors_ = [] for _ in range(self.n_iter): errors = 0 for xi, target in zip(X, y): update = self.eta * (target - self.predict(xi)) self.w_[1:] += update * xi self.w_[0] += update errors += int(update != 0.0) self.errors_.append(errors) # NOW COMPARING ERRORS VERSUS COST return self def net_input(self, X): """Calculate net input""" return np.dot(X, self.w_[1:]) + self.w_[0] # SAME AS ADALINE def predict(self, X): """Return class label after unit step""" return np.where(self.net_input(X) >= 0.0, 1, -1) # + id="2S8y_Fcc5syr" colab_type="code" colab={} # FUNCTION FOR PLOTTING CLASSIFICATION REGIONS def plot_decision_regions(X, y, classifier, resolution=0.02): # setup marker generator and color map markers = ('s', 'x', 'o', '^', 'v') colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan') cmap = ListedColormap(colors[:len(np.unique(y))]) # plot the decision surface x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1 x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution)) Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T) Z = Z.reshape(xx1.shape) plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap) plt.xlim(xx1.min(), xx1.max()) plt.ylim(xx2.min(), xx2.max()) # plot class samples for idx, cl in enumerate(np.unique(y)): plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl) # + id="2ocL7S-S5syw" colab_type="code" outputId="4680c0d8-602c-4966-ed08-d1419284dafb" colab={"base_uri": "https://localhost:8080/", "height": 194} # OBTAIN df = pd.read_csv('https://archive.ics.uci.edu/ml/' 'machine-learning-databases/iris/iris.data', header=None) df.head() # + id="2rR7fqEA5sy3" colab_type="code" colab={} # SCRUB # ONLY USE SETOSA & VERSICOLOR y = df.iloc[0:100, 4].values y = np.where(y == 'Iris-setosa', -1, 1) # ONLY USE SEPAL & PETAL LENGTH X = df.iloc[0:100, [0, 2]].values # + id="QpW2diWy5sy7" colab_type="code" outputId="58283e82-3177-494b-e538-ef84f1b92301" colab={"base_uri": "https://localhost:8080/", "height": 297} # EXPLORE plt.scatter(X[:50, 0], X[:50, 1], color='red', marker='o', label='setosa') plt.scatter(X[50:100, 0], X[50:100, 1], color='blue', marker='x', label='versicolor') plt.xlabel('sepal length [cm]') plt.ylabel('petal length [cm]') plt.legend(loc='upper left') plt.tight_layout() #plt.savefig('./images/02_06.png', dpi=300) plt.show() # + id="BJwYwTki5szB" colab_type="code" colab={} # SCRUB # STANDARDIZATION - SUBTRACTING MEAN / DIVIDING BY STD DEV # IF FEATURES HAVE A LARGE RANGE, THESE FEATURES MIGHT DOMINATE IMPACT ON CLASSIFIER # GRADIENT DESCENT CONVERGES FASTER WITH FEATURE SCALING X_std = np.copy(X) X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std() X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std() # + id="uJObLh9N5szG" colab_type="code" 
outputId="84587e0a-d63f-4dcb-c1fc-0be939ff9071" colab={"base_uri": "https://localhost:8080/", "height": 632} # FIRST PASS ON USING Perceptron # SET ITERATIONS AT 25 & LEARNING RATE AT 0.1 ppn = Perceptron(eta=0.1, n_iter=25) ppn.fit(X, y) # PLOT REGIONS plot_decision_regions(X, y, classifier=ppn) plt.xlabel('sepal length [cm]') plt.ylabel('petal length [cm]') plt.legend(loc='upper left') plt.tight_layout() plt.show() # PLOT EPOCHS plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o') plt.xlabel('Epochs') plt.ylabel('Number of misclassifications') plt.tight_layout() # plt.savefig('./perceptron_1.png', dpi=300) plt.show() # + id="xwXnEnwA5szW" colab_type="code" outputId="25690177-c31f-4676-a11d-16f5ccd72040" colab={"base_uri": "https://localhost:8080/", "height": 632} # SECOND PASS ON USING Perceptron # SET ITERATIONS AT 100 & LEARNING RATE AT 0.0001 ppn = Perceptron(eta=0.0001, n_iter=100) ppn.fit(X, y) # PLOT REGIONS plot_decision_regions(X, y, classifier=ppn) plt.xlabel('sepal length [cm]') plt.ylabel('petal length [cm]') plt.legend(loc='upper left') plt.tight_layout() plt.show() # PLOT EPOCHS plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o') plt.xlabel('Epochs') plt.ylabel('Number of misclassifications') plt.tight_layout() # plt.savefig('./perceptron_1.png', dpi=300) plt.show() # + id="xIZoy-az5sze" colab_type="code" outputId="89ad58e9-7692-43d9-ead2-11c7ca25e466" colab={"base_uri": "https://localhost:8080/", "height": 997} # System Information import platform print('Python is ' + platform.python_version()) pd.show_versions(as_json=False) # + id="8sv5qtJV5szk" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def test_exercise_99_1a(x) -> bool: prog = re.compile(r'ing') return prog == x def test_exercise_99_1b(x) -> bool: words = ['Spring','Cycling','Ringtone'] return words == x def test_exercise_99_2a(x) -> bool: prog = re.compile(r'ing') words = ['Spring','Cycling','Ringtone'] for w in words: mt = prog.search(w) # Span returns a tuple of start and end positions of the match start_pos = mt.span()[0] # Starting position of the match end_pos = mt.span()[1] # Ending position of the match return w == x def test_exercise_99_2b(x) -> bool: prog = re.compile(r'ing') words = ['Spring','Cycling','Ringtone'] for w in words: mt = prog.search(w) # Span returns a tuple of start and end positions of the match start_pos = mt.span()[0] # Starting position of the match end_pos = mt.span()[1] # Ending position of the match return start_pos == x def test_exercise_99_2c(x) -> bool: prog = re.compile(r'ing') words = ['Spring','Cycling','Ringtone'] for w in words: mt = prog.search(w) # Span returns a tuple of start and end positions of the match start_pos = mt.span()[0] # Starting position of the match end_pos = mt.span()[1] # Ending position of the match return end_pos == x # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="1KEIW5btoMv2" outputId="09a08d5c-afc6-48ba-bb05-e97a2ccfb378" import matplotlib.pyplot as plt import numpy as np testList0 =[(3, 1), (3.5, 5), (4, 1), (3.75, 3), (3.25, 3)] 
testList1 =[(0, 0), (0, 0),] testList2 =[(0, 0), (0, 0)] crossLine = [(5, 1),(5.5, 5), (6, 1), (6.5, 5), (7, 1)] edge1 = [(1.5, 5), (2.5, 5), (1.5, 1), (2.5, 1)] edge2 = [(0, 1), (0.5, 5), (1, 1), (0.75, 3), (0.25, 3), (0, 1)] x_0 = [x[0] for x in testList0] y_0 = [x[1] for x in testList0] x_1 = [x[0] for x in testList1] y_1 = [x[1] for x in testList1] x_2 = [x[0] for x in testList2] y_2 = [x[1] for x in testList2] plt.scatter(x_0, y_0) plt.scatter(x_1, y_1) plt.scatter(x_2, y_2) plt.plot(x_0, y_0, 'b-', label='Original') # Blue plt.plot(x_1, y_1, 'r-', label='X-Shift') # Red plt.plot(x_2, y_2, 'y-', label='Y-Shift') # Gren plt.plot(*zip(*edge1), 'gx--') plt.plot(*zip(*edge2), 'cx--') plt.plot(*zip(*crossLine), 'rx--') plt.plot(*zip(*testList0), 'bx--') #plt.plot(edge2, linewidth = '5.5') plt.title('Schematic diagram') plt.plot('X-satır') plt.plot('Y-satır') plt.legend() plt.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] toc="true" # # Table of Contents #
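# (Aside: the first code cell below imports geopandas under the alias `pd`, which
#  shadows the `import pandas as pd` on the same line; later in this notebook geopandas
#  is re-imported under its usual alias `gpd`.)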
    # - # %pylab inline import pandas as pd import os import sys import geopandas as pd import overpy import urllib2 import codecs api = overpy.Overpass() # + searchByAreaName = False countryOriginalName = "United-Kingdom" searchByAreaID = True areaID = "3600062149" levelsToFetch = [6, 8, 10] # - for levelToFetch in levelsToFetch: if searchByAreaName: areaQuery = ('( area["{0}"="{1}"]; )->.a;'.format("name", countryOriginalName), "type", "boundary", "admin_level", "%d" % levelToFetch) elif searchByAreaID: areaQuery = ("area(%s)->.a;" % areaID, "type", "boundary", "admin_level", "%d" % levelToFetch) query = """ [timeout:600]; {0} ( node["{1}"="{2}"]["{3}"="{4}"](area.a); way["{1}"="{2}"]["{3}"="{4}"](area.a); relation["{1}"="{2}"]["{3}"="{4}"](area.a); ); out body; >; out skel qt; """.format(*areaQuery) query = query.replace("\n", "") query = query.replace(" ", "") query = query.replace(" ", "") print "Doing query: ", query result = urllib2.urlopen("https://overpass-api.de/api/interpreter", query) res = result.read() result.close() print "Query done with code: ", result.code, "\n\n" ofolder = os.path.join("resources/geoJsons", countryOriginalName) if not os.path.exists(ofolder): os.makedirs(ofolder) with codecs.open(os.path.join(ofolder, "dec_lvl%02d.osm" % (levelToFetch)), "w", "utf-8-sig") as f: f.write(res.decode("utf-8")) with open(os.path.join(ofolder, "enc_lvl%02d.osm" % (levelToFetch)), "w") as f: f.write(res) target # + # Convert osm data to geojson and shapefile... import glob import geopandas as gpd from shapely.geometry import MultiPolygon from subprocess import Popen, PIPE for osmdatafile in glob.glob(os.path.join(ofolder, "dec_*.osm")): target = osmdatafile.replace(".osm", ".json") with open(target, "w") as fout: args = "node --max_old_space_size=16384 /usr/local/bin/osmtogeojson -m %s" % osmdatafile args = args.split(" ") p = Popen(args, stdout=fout, stderr=PIPE, shell=False) out, err = p.communicate() print osmdatafile, err target_shp = target.replace(".json", ".shp") gdf = gpd.read_file(target) gdf = gdf[gdf.geometry.type.isin(["Polygon", "MultiPolygon"])] gdf.geometry = gdf.geometry.apply(lambda p: p if p.type == "MultiPolygon" else MultiPolygon([p])) gdf.to_file(target_shp, driver="ESRI Shapefile") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import seaborn as sns from matplotlib import pyplot as plt import pandas as pd import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score file = r"C:\Users\\Desktop\corona virus\covid_19_data.csv" file cv19 = pd.read_csv(file) cv19 parse_dates = cv19["Last Update"] parse_dates.head() cv19 = pd.read_csv(file , parse_dates = ["Last Update"]) cv19 cv19.rename(columns={'ObservationDate':'Date', 'Country/Region':'Country'}, inplace=True) cv19.head() cv19.isnull().sum() cv19.shape cv19.dtypes cv19.isnull().sum().to_frame("nulls") dataf = cv19.groupby(["Date", "Country"])[['Date', 'Country', 'Confirmed', 'Deaths', 'Recovered']].sum() dataf dataf = cv19.groupby(["Date" , "Country"])[["Date" , "Country" , "Confirmed" , "Deaths" , "Recovered"]].sum().reset_index() dataf cv19.groupby(["Country"])[["Deaths"]].sum() cv19["Country"] cv19["Country"] == "Mainland China" 
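# (Aside, not part of the original analysis: the bare comparison above only displays a
#  boolean mask; selecting the matching rows would use that mask as an index, e.g.
#  cv19[cv19["Country"] == "Mainland China"])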
cv19.Country.value_counts(normalize=True) cv19.Confirmed.value_counts() cv19.Recovered.value_counts() cv19.groupby(["Country"])[["Recovered"]].sum() cv19.groupby(["Country"])[["Confirmed"]].sum() dataf sorting_By_Confirmed = cv19.sort_values('Confirmed',ascending=False) sorting_By_Confirmed sorting_By_Confirmed = sorting_By_Confirmed.drop_duplicates("Country") sorting_By_Confirmed Total_Conformed_From_World = sorting_By_Confirmed['Confirmed'].sum() Total_Conformed_From_World Total_Deaths_From_world = sorting_By_Confirmed['Deaths'].sum() Total_Deaths_From_world Total_Recovered_From_World = sorting_By_Confirmed['Recovered'].sum() Total_Recovered_From_World Deaths_rate_Total_World = (Total_Deaths_From_world*100)/Total_Conformed_From_World Deaths_rate_Total_World Recovered_rate_Total_World = (Total_Recovered_From_World*100)/Total_Conformed_From_World Recovered_rate_Total_World Report_China = sorting_By_Confirmed[sorting_By_Confirmed["Country"] == "Mainland China"] Report_China China_Recovered_rate = (int(Report_China["Recovered"].values)*100)/int(Report_China["Confirmed"].values) China_Recovered_rate rate_of_recovered = (sorting_By_Confirmed["Recovered"]*100)/sorting_By_Confirmed["Confirmed"] rate_of_recovered rate_of_deaths = (sorting_By_Confirmed["Deaths"]*100)/(sorting_By_Confirmed["Confirmed"]) rate_of_deaths rate_of_cases = (sorting_By_Confirmed["Confirmed"]*100)/Total_Conformed_From_World rate_of_cases sorting_By_Confirmed[ " Rate Of Recovered Cases (%) " ] = pd.DataFrame(rate_of_recovered) sorting_By_Confirmed[ " Rate Of Death Cases (%) " ] = pd.DataFrame(rate_of_deaths) sorting_By_Confirmed[" Rate Of Total Cases (%) "] = pd.DataFrame(rate_of_cases) sorting_By_Confirmed sorting_By_Confirmed.head(100).style.background_gradient(cmap = "Greens") sorting_By_Confirmed1=sorting_By_Confirmed.head(10) sorting_By_Confirmed1 x = sorting_By_Confirmed1.Country y = sorting_By_Confirmed1.Confirmed x y plt.rcParams["figure.figsize"] = (12, 10) plt.rcParams["figure.edgecolor"] = ("Black") sns.barplot(x,y , palette="Blues_d").set_title("Total Number Of Cases / Deaths / Recoverd") cases_on_per_Day = cv19.groupby(["Date"])['Confirmed','Deaths', 'Recovered'].sum().reset_index() cases_on_per_Day sorting_By_Confirmed2 = cases_on_per_Day.sort_values("Date" , ascending=False) sorting_By_Confirmed2 sorted_By_Confirmed2.style.background_gradient(cmap= "Blues") x = cases_on_per_Day.index x y = cases_on_per_Day.Confirmed y y1 = cases_on_per_Day.Deaths y1 y2 = cases_on_per_Day.Recovered y2 plt.scatter(x , y , color = "aqua" , label = "CONFORMED CASES") plt.scatter(x , y1 , color = "tan" ,label = "DEATH CASES") plt.scatter(x , y2 , color = "lightgrey" , label = "Recover cases") plt.scatter(x , y , color = "aqua" , label = "CONFORMED CASES") plt.scatter(x , y1 , color = "tan" ,label = "DEATH CASES") plt.scatter(x , y2 , color = "lightgrey" , label = "Recover cases") x_data = pd.DataFrame(cases_on_per_Day.index) x_data y_data = pd.DataFrame(cases_on_per_Day.Confirmed) y_data from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(x_data,y_data,test_size=0.2,random_state=5) l_r = LinearRegression() l_r l_r.fit(x_train , y_train) cv19_prediction = l_r.predict(x_test) cv19_prediction cv19_prediction_DataFrame = pd.DataFrame(cv19_prediction) cv19_prediction_DataFrame cv19_prediction_DataFrame['Predicted Value'] = pd.DataFrame(cv19_prediction) cv19_prediction_DataFrame cv19_prediction_DataFrame from sklearn.metrics import r2_score print("Linear Regession R2 Score : ", 
r2_score(y_test, cv19_prediction)) from sklearn.metrics import confusion_matrix, classification_report from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score from sklearn.svm import SVC from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error cv19_rmse = np.sqrt(mean_squared_error(y_test , cv19_prediction)) cv19_rmse cv19_rmse1 = mean_squared_error(y_test , cv19_prediction) cv19_rmse1 cv19_rmse2 = mean_absolute_error(y_test , cv19_prediction) cv19_rmse2 pd.DataFrame(cv19_prediction).to_csv("cv19_prediction_values.csv",index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="m3mD2VBa6AVy" colab_type="code" colab={} # !apt-get install openjdk-8-jdk-headless -qq > /dev/null # + id="seW_jDbB7uoK" colab_type="code" colab={} # !wget -q www-us.apache.org/dist/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz # + id="hrJJXEQo70Kw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="31324658-102f-44c9-9dad-194cf8036104" # !tar -xvf spark-2.4.3-bin-hadoop2.7.tgz # + id="-JZ2WuaF8JYp" colab_type="code" colab={} # !pip install -q findspark # + id="tWOG-a0g735U" colab_type="code" colab={} import os os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["SPARK_HOME"] = "/content/spark-2.4.3-bin-hadoop2.7" # + id="vZNPSjeV78VD" colab_type="code" colab={} import findspark findspark.init() from pyspark.sql import SparkSession from pyspark.sql import Row from pandas.plotting import scatter_matrix import seaborn as sns import numpy as np from sklearn.linear_model import LinearRegression from pyspark.sql.functions import array, col, explode, struct, lit import warnings warnings.simplefilter('ignore') import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline spark = SparkSession.builder.master("local[*]").getOrCreate() # + [markdown] id="_mMOn7Mp-EtY" colab_type="text" # ## 7. 
Загрузить данные в Spark # + id="7QdX2wYb8Dvu" colab_type="code" colab={} df_data = spark.read.csv("u.data", sep='\t', header=None, inferSchema=True) df_genre = spark.read.csv("u.genre", sep='|', header=None, inferSchema=True) df_info = spark.read.csv("u.info", sep=' ', header=None, inferSchema=True) df_occupation = spark.read.csv("u.occupation", sep=' ', header=None, inferSchema=True) df_user = spark.read.csv("u.user", sep='|', header=None, inferSchema=True) df_item = spark.read.csv("u.item", sep='|', header=None, inferSchema=True) # encoding='latin_1' # + id="hdlNr8Uc8aw7" colab_type="code" colab={} # new_names_df_data = ['user_id', 'movie_id', 'rating', 'timestamp'] # df_data = df_data.toDF(*new_names_df_data) # new_names_df_genre = ['genres', 'genres_id'] # df_genre = df_genre.toDF(*new_names_df_genre) # new_names_df_user = ['user_id', 'age', 'gender', 'occupation', 'zip_code'] # df_user = df_user.toDF(*new_names_df_user) # new_names_df_item = ['movie_id', 'movie_title', 'release_date', # 'video_release_date', 'IMDb_URL', 'unknown', # 'Action', 'Adventure', 'Animation', "Children's", # 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', # 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', # 'Sci-Fi', 'Thriller', 'War', 'Western'] # df_item = df_item.toDF(*new_names_df_item) # + id="3iVALWbYUzlg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="443df01e-092d-4d62-ab88-b738a5987844" df_data.show() # + id="RyKiE7SIU5Tn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="db9d3b1e-cf52-4c55-afba-20a06f0688da" df_data.dtypes # + id="8gN80j-BU55E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="0a29dceb-8126-4ea8-8354-8a044cedcc39" df_data.describe().show() # + [markdown] id="iiyzVgFmB4rz" colab_type="text" # ## 8. Средствами спарка вывести среднюю оценку для каждого фильма # + id="mibsuecQTs6x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="c51bbdc6-276c-4ce5-e1c3-f1825e373bae" df_data.columns df_data = df_data.withColumnRenamed('_c0', 'user id')\ .withColumnRenamed('_c1', 'item id')\ .withColumnRenamed('_c2', 'rating')\ .withColumnRenamed('_c3', 'timestamp'); df_data_grp = df_data.groupby('item id') df_data_grp_mean = df_data_grp.mean('rating') df_data_grp_mean.show() # + [markdown] id="9uOSL73oJO1K" colab_type="text" # ## 9. В Spark получить 2 df с 5-ю самыми популярными и самыми непопулярными фильмами (по количеству оценок, либо по самой оценке) # + id="sSst65eQEI5P" colab_type="code" outputId="5029e0aa-7fc4-48e5-b57e-98446776012a" colab={"base_uri": "https://localhost:8080/", "height": 357} df_items = df_item['_c0', '_c1'] df_items = df_items.withColumnRenamed('_c0','item id')\ .withColumnRenamed('_c1','movie title') df_data_grp_mp = spark.createDataFrame(df_data_grp.count().orderBy('count', ascending=False).take(5)) df_data_grp_lp = spark.createDataFrame(df_data_grp.count().orderBy('count', ascending=True).take(5)) df_data_grp_mp.join(df_items, 'item id', how='inner').show() df_data_grp_lp.join(df_items, 'item id', how='inner').show() # + [markdown] id="Qr9MHlWHOQWS" colab_type="text" # ## 10. 
Средствами Spark соедините информацию по фильмам и жанрам (u.genre) # + id="pkLPdwhKE_GY" colab_type="code" outputId="48f5bbe1-ade3-4d9f-895c-7fe8b8a86460" colab={"base_uri": "https://localhost:8080/", "height": 867} df_item.show() df_genre.show() # + id="StABwIswWPN5" colab_type="code" colab={} df_item = df_item.withColumnRenamed('_c0','item id')\ .withColumnRenamed('_c1','movie title')\ .withColumnRenamed('_c5','unknown')\ .withColumnRenamed('_c6','Action')\ .withColumnRenamed('_c7','Adventure')\ .withColumnRenamed('_c8','Animation')\ .withColumnRenamed('_c9','Children\'s')\ .withColumnRenamed('_c10','Comedy')\ .withColumnRenamed('_c11','Crime')\ .withColumnRenamed('_c12','Documentary')\ .withColumnRenamed('_c13','Drama')\ .withColumnRenamed('_c14','Fantasy')\ .withColumnRenamed('_c15','Film-Noir')\ .withColumnRenamed('_c16','Horror')\ .withColumnRenamed('_c17','Musical')\ .withColumnRenamed('_c18','Mystery')\ .withColumnRenamed('_c19','Romance')\ .withColumnRenamed('_c20','Sci-Fi')\ .withColumnRenamed('_c21','Thriller')\ .withColumnRenamed('_c22','War')\ .withColumnRenamed('_c23','Western')\ df_item = df_item['item id', 'unknown', 'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical','Mystery','Romance', 'Sci-Fi', 'Thriller', 'War','Western'] # + id="_ShBLmnXWPXJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="9acd7cc4-422d-4fda-b811-f4bd498b45d7" def to_long(df, by): cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by)) kvs = explode(array([ struct(lit(c).alias("key"), col(c).alias("val")) for c in cols ])).alias("kvs") return df.select(by + [kvs]).select(by + ["kvs.key", "kvs.val"]) df_g_trans = to_long(df_item, ["item id"]) df_g_trans = df_g_trans.where(df_g_trans['val'] > 0) df_res = df_g_trans.join(df_items, 'item id', how='inner')['item id', 'movie title', 'key']\ .withColumnRenamed('key','genre') df_res = df_res.join(df_data_grp_mean, 'item id', how='inner') df_res.show() # + [markdown] id="ucV6_JlKWk1S" colab_type="text" # ## 11. 
Посчитайте средствами Spark среднюю оценку для каждого жанра # + id="d45T7H6kWPed" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="dddd283e-a6c1-46ff-8f30-6891d908473e" df_data_grp_g = df_res.groupby('genre') df_data_grp_mean_g = df_data_grp_g.mean('avg(rating)') df_data_grp_mean_g.show() # + id="GrJ73IhZNdqI" colab_type="code" colab={} pass # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- library(tidyverse) library(ggpubr) library(rstatix) install.packages("dplyr") install.packages("ggpubr") res_25_0 = read.csv("results/25_0.csv") res_25_75 = read.csv("results/25_75.csv") res_50_0 = read.csv("results/50_0.csv") res_50_50 = read.csv("results/50_50.csv") res_75_0 = read.csv("results/75_0.csv") res_75_25 = read.csv("results/75_25.csv") res_100_0 = read.csv("results/100_0.csv") data <- rbind(res_25_0, res_25_75, res_50_0, res_50_50, res_75_0, res_75_25, res_100_0) data$real_data <- as.factor(data$real_data) data$synth <- as.factor(data$synth) boxplot(WER ~ real_data * synth, data=data) boxplot(CER ~ real_data * synth, data=data) model <- lm(WER ~ real_data * synth, data=data) res <- anova(model) res # Check the homogeneity of variance assumption plot(res, 1) # Check the normality assumpttion in residuals plot(res, 2) # Extract the residuals aov_residuals <- residuals(object = res) # Run Shapiro-Wilk test shapiro.test(x = aov_residuals ) qqnorm(res_25_0$WER) qqline(res_25_0$WER) qqnorm(res_25_0$CER) qqline(res_25_0$CER) model <- lm(CER ~ real_data, data=data) anova(model) model <- lm(CER ~ synth, data=data) anova(model) model <- lm(CER ~ real_data * synth, data=data) anova(model) interaction.plot(data$real_data, data$synth) interaction.plot(data$real_data, data$synth, data$WER, ylab="WER", xlab="% of real data set", trace.label="+ synth") # ?boxplot boxplot(WER ~ real_data + synth, data=data, at=c(0, 3, 5, 7, 1, 4, 6, 9)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="Q2FtGCN6bVsg" # # Finding Fact-checkable Tweets with Machine Learning # # - # ## Overview # # Our challenge is to find out which tweets with the hashtag `#txlege` are **checkable statements of fact**. # # Take a look at the [kinds of tweets in question](https://twitter.com/search?q=%23txlege&src=typed_query). # # Recognizing "fact-checkability" seems like it might be a uniquely human skill. But we can make a machine-learning mmodel that does it pretty well. # # ### First, the language model # # We'll get into the details below, but here's our two-step process for this project: # # First, we need a model trained to recognize the patterns of English. For that, we'd need some huge dataset of English-language text. Fortunately, someone has already done that for us! We'll be using a model trained on thousands of long Wikipedia articles. It's called [wikitext-103](https://einstein.ai/research/blog/the-wikitext-long-term-dependency-language-modeling-dataset). # # We'll then use _transfer learning_ (like we did in for the helicopter maps) to further train wikitext-103 on our particular corpus: several thousand #txlege tweets. So we benefit from it's training on both English-language articles _and_ #txlege tweets. 
# # That will give us a **language model** that's good at detecting patterns in #txlege tweets. # # # ### Second, the classification model # # Next we need a model that will sort -- aka _classify_ -- tweets into "fact-checkable" and "not fact-checkable." This will combine the patern-recognition embedded in the language model and examples of both kinds of tweets to make a predition on which class _new_ tweets belong to. This is our **classification model**. # # # ## The Plan # # Here's what we're going to do: # # - Grab files with a bunch of tweets # - Make a **language model** from a model pretrained on Wikipedia _plus_ all the tweets as we have # - Make a **classification model** to predict whether a given tweet is checkable or not, using tweets that were hand-labeled by folks at the Austin American-Statesman. # - Use that classification model to predict the checkability of unseen tweets. # ## Credits # # This notebook was based on one originally created by and the other folks at [fast.ai](https://fast.ai) as part of [this fantastic class](https://course.fast.ai/). Specifically, it comes from Lesson 4. You can [see the lession video](https://course.fast.ai/videos/?lesson=4) and [the original class notebook](https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson3-imdb.ipynb). # # The idea for the project came from Dan Keemahill at the Austin American-Statesman newspaper. Dan, Madlin Mekelburg, and others at the paper hand-coded the tweets used for the classificaiton training. # # For more information about this project, and details about how to use this work in the wild, check out our [Quartz AI Studio blog post about the checkable-tweets project](https://qz.ai/?p=89). # # -- , [Quartz](https://qz.com), October 2019 # ## Setup # ### For those using Google Colaboratory ... # # Be aware that Google Colab instances are ephemeral -- they vanish *Poof* when you close them, or after a period of sitting idle (currently 90 minutes), or if you use one for more than 12 hours. # # If you're using Google Colaboratory, be sure to set your runtime to "GPU" which speeds up your notebook for machine learning: # # ![change runtime](https://qz-aistudio-public.s3.amazonaws.com/workshops/notebook_images/change_runtime_2.jpg) # ![pick gpu](https://qz-aistudio-public.s3.amazonaws.com/workshops/notebook_images/pick_gpu_2.jpg) # # Then run this cell: # + ## ALL GOOGLE COLAB USERS RUN THIS CELL ## This runs a script that installs fast.ai # !curl -s https://course.fast.ai/setup/colab | bash # - # ### For those _not_ using Google Colaboratory ... # # This section is just for people who decide to use one of the notebooks on a system other than Google Colaboartory. # # Those people should run the cell below. ## NON-COLABORATORY USERS SHOULD RUN THIS CELL # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # ### Everybody do this ... # Everyone needs to run the next cell, which initializes the Python libraries we'll use in this notebook. 
## AND *EVERYBODY* SHOULD RUN THIS CELL import warnings warnings.filterwarnings('ignore') from fastai.text import * import fastai print(f'fastai: {fastai.__version__}') print(f'cuda: {torch.cuda.is_available()}') # ## The Data # We're going to be using two sets of tweets for this project: # # - A CSV (comma-separated values file) containing a bunch of #txlege tweets # - A CSV of #txlege tweets that have been hand-coded as "fact-checkable" or "not fact-checkable" # # Run this cell to download the data we'll use for this exercise # !wget -N https://qz-aistudio-public.s3.amazonaws.com/workshops/austin_tweet_data.zip --quiet # !unzip -q austin_tweet_data.zip print('Done!') # We now have a directory called `data` with two files of tweets. Let's take a look. # %ls data/ # + [markdown] colab_type="text" id="9TPvnzypbVsq" # ### Take a peek at the tweet data # + [markdown] colab_type="text" id="I21aOI9xbVsr" # Working with and over a couple of weeks during the 2019 Texas state legislative session, I have have a set of 3,797 tweets humans at the Austin American-Statesman have determined are – or are not – statements that can be fact-checked. # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="kWwuVUg9QHo8" outputId="703590d9-71c2-4d1b-96cf-aa0a113be58a" # Here I read the csv into a data frame I called `austin_tweets` # and take a look at the first few rows data_path = './data/' hand_coded_tweets = pd.read_csv(data_path + 'hand_coded_austin_tweets.csv') hand_coded_tweets.head() # - # To train the language model, we want a bunch more examples of #txlege tweets. It doesn't matter that we didn't classify them as fact-checkable or not. The language model just needs _examples_ of the kinds of tweets it might see. # # So I used the website [IFTTT](https://ifttt.com) and Google Spreadsheets to collect several days worth of #txlege tweets. That's what's in the `tweet_corpus.txt` file we looked at. # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="18kBCGjhbVvQ" outputId="e36434d1-d47d-44c0-c6ab-82a61843ad2a" # read in the corpus, which has one tweet per row, # and take a look at the first frew rows corpus_tweets = pd.read_csv(data_path + 'tweet_corpus.txt') corpus_tweets.head() # + [markdown] colab_type="text" id="eym6ulq0bVuz" # ## Building the language model # + [markdown] colab_type="text" id="62pxEwWPbVvI" # First we need a model that 'understands' the rules of English, and ideally also recognizes patterns in our particular corpus, the `#txlege` tweets. This is the language model. # # We'll start with a language model pretrained on a thousands of Wikipedia articles called [wikitext-103](https://einstein.ai/research/blog/the-wikitext-long-term-dependency-language-modeling-dataset). That language model has been trained to guess the next word in a sentence based on all the previous words. It has a recurrent structure with a hidden state that is updated each time it sees a new word. This hidden state thus contains information about the sentence up to that point. # # For our project, we want to infuse the Wikitext model with our particular dataset – the #txlege tweets. Because the English of #txlege tweets isn't the same as the English of Wikipedia, we'll adjust the internal parameters of the model by a little bit. That includes adding words that might be extremely common in the tweets but would be barely present in wikipedia–and therefore might not be part of the vocabulary the model was trained on. 
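# (Optional aside, not in the original notebook: once `data_lm` is built in the next
#  cell, `data_lm.show_batch()` can be used to inspect the tokenized, numericalized
#  batches -- assuming the fastai v1 TextLMDataBunch API used throughout this notebook.)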
# + [markdown] colab_type="text" id="FT0C6-vpbVvK" # ### Using all of our tweets for the language model # # We want as many tweets for the language model as possible to learn the patterns of #txlege tweets. # # We'll start with the text of the 3,797 "hand-coded" tweets (though for the language model, we ignore the checkable/not checkable part of that file). # # To that, we'll add the additional tweets I collected in `tweet_corpus.txt`. We'll mush those all into one big file of tweets. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="aQyQIh_vNhrH" outputId="9a6afe30-01c4-4445-d7a7-8c835d7d62fc" # here I concatenate the two tweet sets into one big set lm_tweets = pd.concat([hand_coded_tweets,corpus_tweets], sort=True) # as a sanity check, let's look at the size of each set, # and then the ontatenated set print('hand coded tweets:', len(hand_coded_tweets) ) print('corpus of tweets:', len(corpus_tweets) ) print('total tweets:', len(lm_tweets) ) # + [markdown] colab_type="text" id="g55U1v3mbVvm" # Great: Now we have 7,485 tweets to use for the language model. # # One thing to note ... the first set had two columns, `checkable` and `tweet_text`, while the corpus had just one collumn, `tweet_text`. The combined has the original two columns, though many of the entries will be `NaN` for "not a number." Thats okay, because we're only going to use the `tweet_text` column for the language model. # # # # + colab={} colab_type="code" id="pFp92p9NbVvs" # Saving as csv for easier reading in a moment lm_tweets.to_csv(data_path + 'lm_tweets.csv', index=False) # + [markdown] colab_type="text" id="3fU_R9lxbVv7" # Fast.ai uses a concept called a "[data bunch](https://docs.fast.ai/basic_data.html)" to handle machine-learning data, which takes care of a lot of the more fickle machine-learning data preparation. # # We have to use a special kind of data bunch for the language model, one that ignores the labels, and will shuffle the texts at each epoch before concatenating them all together (only the training set gets shuffled; we don't shuffle for the validation set). It will also create batches that read the text in order with targets (aka the best guesses) that are the next word in the sentence. # # + colab={} colab_type="code" id="C2rJuWVCbVv5" # Loading in data with the TextLMDataBunch factory class, using all the defaults data_lm = TextLMDataBunch.from_csv(data_path, 'lm_tweets.csv', text_cols='tweet_text', label_cols='checkable') data_lm.save('data_lm_tweets') # + [markdown] colab_type="text" id="rY0LkJCQbVwF" # We can then put all of our tweets (now stored in `data_lm`) into a learner object along with the pretrained Wikitext model -- here called `AWD_LTSM`, which is downloaded the first time you'll execute the following line. # + colab={} colab_type="code" id="EiiXrA6zbVwH" learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3) # + [markdown] colab_type="text" id="RfAXUyLnOB_J" # One of the most important settings when we actually _train_ our model is the **learning rate**. 
I'm not going to dive into it here (though I encourage you to explore it), but will use a fast.ai tool to find the best learning rate to start with: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="W7b-Z94dbVwL" outputId="f6044993-11ab-4889-c0fe-892243e61a2e" learn.lr_find() # + colab={"base_uri": "https://localhost:8080/", "height": 283} colab_type="code" id="Un204GpfbVwO" outputId="607fdc89-064b-427b-e92e-f35bdc868923" learn.recorder.plot(skip_end=15) # + [markdown] colab_type="text" id="td3D3mK8TtHa" # This gives us a graph of the optimal learning rate ... which is the point where the graph really dives downward (`1e-02`). Again, there's much more on picking learning rates in the fast.ai course. # + [markdown] colab_type="text" id="BQA3dZWPachE" # Now we can train the Language Model. (Essentially, we're training it to be good at guessing the *next word* in a sentence, given all of the previous words.) # # The variables we're passing are `1` to just do one cycle of learning, the learning rate of `1e-2`, and some momentum settings we won't get into here -- but these are pretty safe. # + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="pveLnA6kbVwQ" outputId="38ef873c-5c2a-461d-9688-d8f4e1edbd5f" learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7)) # + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="o-dYIVFcbVwS" outputId="aa2db805-6429-4592-f905-0479b9872820" learn.fit_one_cycle(1, 1e-1, moms=(0.8,0.7)) # + [markdown] colab_type="text" id="M8kZOKSybVwX" # To complete the fine-tuning, we "unfreeze" the original wikitext-103 language model and let our new training efforts work their way into the original neural network. # + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="kTfuNCuhbVwX" outputId="1e99e6bc-8421-4d63-faf3-4c16fb3f07f6" # This takes a couple of minutes! learn.unfreeze() learn.fit_one_cycle(2, 1e-3, moms=(0.8,0.7)) # + [markdown] colab_type="text" id="fv0mZuVhbVwd" # While our accuracy may _seem_ low ... in this case it means the language model correctly guessed the next word in a sentence more than 1/3 of the time. That's pretty good! And we can see that even when it's wrong, it makes some pretty "logical" guesses. # # Let's give it a starting phrase and see how it does: # # + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="_Wws03kmbVwd" outputId="dd545482-c5c8-4e93-a79e-64578e6478a3" TEXT = "I wonder if this" N_WORDS = 40 N_SENTENCES = 3 print("\n\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES))) # + [markdown] colab_type="text" id="hj3aD0UebVwr" # Remember, these are not real ... they were _generated_ by the model when it tried to guess each of the next words in the sentence! Generating text like this is not why we made the language model (though you can see where text-generation AI starts from!) # # Also note that the model is often crafting the response _in the form of a tweet!_ # # We now save the model's encoder, which is the mathematical representation of what the language model "understands" about English patterns infused by our tweets. # + colab={} colab_type="code" id="thkQAeQ2bVwr" learn.save_encoder('fine_tuned_enc') # + [markdown] colab_type="text" id="zt6pDPEDbVww" # ## Building the classifier model # # This is the model that will use our language model **and** the hand-coded tweets to guess if new tweets are fact-checkable or not.
# # We'll create a new data bunch that only grabs the hand-coded tweets and keeps track of the labels there (true or false, for fact-checkability). We also pass in the `vocab` -- which is the list of the most useful words from the language model. # + colab={} colab_type="code" id="ntIAhZbbbVw1" data_clas = TextClasDataBunch.from_csv(data_path, 'hand_coded_austin_tweets.csv', vocab=data_lm.vocab, text_cols='tweet_text', label_cols='checkable') data_clas.save('data_clas_tweets') # + [markdown] colab_type="text" id="WCwf-DyEbVxA" # We can then create a model to classify tweets. You can see that in the next two lines we include the processed, hand-coded tweets (`data_clas`), the original Wikitext model (`AWD_LSTM`), and the knowledge we saved after infusing the language model with tweets (`fine_tuned_enc`). # + colab={} colab_type="code" id="xlJKU0g3bVxA" learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5) learn.load_encoder('fine_tuned_enc'); # + [markdown] colab_type="text" id="Az5OuN6_hJaF" # With neural networks, there are lots of tweaks you can adjust — known as "hyperparameters" — such as learning rate and momentum. The fast.ai defaults are pretty great, and the tools it has for finding the learning rate are super useful. I'm going to skip those details here for now. There's more to learn at [qz.ai](https://qz.ai) or in [this great fast.ai course](https://course.fast.ai/). # + colab={"base_uri": "https://localhost:8080/", "height": 80} colab_type="code" id="S5g2xmtlbVxJ" outputId="79971cc2-dde2-4dfc-a340-7dcae79f2bdd" learn.fit_one_cycle(2, 1e-2, moms=(0.8,0.7)) learn.freeze_to(-2) learn.fit_one_cycle(2, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7)) learn.freeze_to(-3) learn.fit_one_cycle(2, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7)) # + [markdown] colab_type="text" id="J-TSQ7iDTdIG" # Let's give it an example ... # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mBAXEa2MbVxa" outputId="4163f4ee-f9e0-4d93-b5a2-c47e38a78205" example = "Four states have two universities represented in the top 20 highest-paid executives of public colleges. Texas has SIX" learn.predict(example) # + [markdown] colab_type="text" id="8u1ptJu1UmT9" # `True` means checkable! # - # We can open the "black box" a little to see which words the model is keying into. interp = TextClassificationInterpretation.from_learner(learn) interp.show_intrinsic_attention(example) # Let's save our work. # + [markdown] colab_type="text" id="XSOgwF1ybVyh" # ## Saving to Google Drive # # At present, your Google Colaboratory Notebook disappears when you close it — along with all of your data. If you'd like to save your model to your Google Drive, run the following cell and grant the permissions it requests. # - from google.colab import drive drive.mount('/content/gdrive', force_remount=True) root_dir = "/content/gdrive/My Drive/" base_dir = root_dir + 'ai-workshop/checkable_tweet_models/' save_path = Path(base_dir) save_path.mkdir(parents=True, exist_ok=True) # The next line will save everything we need for predictions to a file on your Google Drive in the `ai-workshop` folder.
learn.export(save_path/"export.pkl") # Later, to load the model into your code, connect to your Google drive using the same block above that starts `from google.colab import drive ...` and then run this: # # load the model from the 'export.pkl' file on your Google Drive learn = load_learner(save_path) # + [markdown] colab_type="text" id="57uA-44kVh_T" # We used a model built this way to classify #txlege tweets as they were tweeted. For details about deploying a predictor in the cloud using Render, see our [blog post about building the checkable-tweets project](https://qz.ai/?p=89). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tf-cpu] # language: python # name: conda-env-tf-cpu-py # --- # + # Setup (Imports) from LoadData import * from keras.layers import Dense, BatchNormalization, Activation, Dropout, Merge from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint from keras.models import load_model, Sequential import numpy as np import os import matplotlib.pyplot as plt # + # Setup (Globals/Hyperz) window_size_ticker = 14 window_size_headlines = 9 epochs = 500 batch_size = 64 emb_size = 100 # + # Loading and Splitting Data def get_data(stock): AllX, AllX2, AllY = create_timeframed_doc2vec_ticker_classification_data(stock, window_size_ticker, window_size_headlines, min_time_disparity=4) trainX, trainX2, trainY, testX, testX2, testY = split_data2(AllX, AllX2, AllY, ratio=.85) return (trainX, trainX2, trainY), (testX, testX2, testY) # + # Make Model def get_model(): ## Load ticker_model = load_model(os.path.join('models', 'basic-classification.h5')) headline_model = load_model(os.path.join('models', 'headline-classification.h5')) ticker_model.pop() headline_model.pop() ## Merge combined = Sequential() combined.add(Merge([headline_model, ticker_model], mode='concat')) combined.add(Dense(16, name="combined_d1")) combined.add(BatchNormalization(name="combined_bn1")) combined.add(Activation('relu', name="combined_a1")) combined.add(Dropout(0.3, name="combined_do1")) # combined.add(Dense(1, activation='sigmoid', name="combined_d2")) combined.add(Dense(2, activation='softmax', name="combined_d2")) # combined.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) combined.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return combined # + # Run (Load) if __name__ == "__main__": (trainX, trainX2, trainY), (testX, testX2, testY) = get_data('AAPL') # trainY = trainY[:, 0] == 1 # testY = testY[:, 0] == 1 print(trainX.shape, trainX2.shape, trainY.shape) # + # Run (Train) if __name__ == "__main__": model = get_model() checkpoint = ModelCheckpoint(os.path.join('..', 'models', 'headlineticker-classification.h5'), monitor='val_acc', verbose=0, save_best_only=True) history = model.fit([trainX, trainX2], trainY, epochs=epochs, batch_size=batch_size, validation_data=([testX, testX2], testY), verbose=0, callbacks=[checkpoint]) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.legend(['TrainLoss', 'TestLoss']) plt.show() plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.legend(['TrainAcc', 'TestAcc']) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from 
helpers.utilities import * # %run helpers/notebook_setup.ipynb # + tags=["parameters"] raw_clinical_path = 'data/raw/PatientClinicalData.xlsx' glossary_path = 'data/clean/clinical/glossary.csv' samples_path = 'data/clean/samples_list.csv' # output clean_clinical_path = 'data/clean/clinical/data.csv' # - # Clinical data did not require a separate extraction step, thus I will clean the spreadsheet and explore the data in a single notebook (this one). # # Data Extraction clinical_spreadsheets = read_excel(raw_clinical_path, sheet_name=None) clinical_spreadsheets.keys() # ## Glossary clinical_glossary = clinical_spreadsheets['Parameter Glossary'] clinical_glossary.head() len(clinical_glossary) # Perfect! Nice, computer-readable format. Not much to see there. # # The first row was taken as a column name; I will mitigate this by reloading the data with proper headers: clinical_glossary.to_csv(glossary_path, index=False) clinical_glossary = read_csv(glossary_path, names=['variable', 'description']) clinical_glossary.head(n=2) # And save the clean copy: clinical_glossary.to_csv(glossary_path) # ## Clinical data clinical_data = clinical_spreadsheets['Clinical MetaData'] clinical_data from helpers.presentation import show_list, compare_sets # ### Do we have a description for each column? compare_sets(clinical_data.columns, clinical_glossary.variable) # Okay, so there is a typo in the glossary; I will fix that to make data merges easier in the future: clinical_glossary.variable = clinical_glossary.variable.replace({'OnTBTReat': 'OnTBTreat'}) clinical_glossary.to_csv(glossary_path, index=False) # Should work now: assert set(clinical_data.columns) == set(clinical_glossary.variable) # # Data exploration and reformatting # I do not see an atomic column with the condition; for simplicity I will create it now: clinical_data['condition'] = clinical_data.PatientID.str.split('.').str[1] clinical_data['condition'].value_counts() # ## Getting to know the variables # There are only ~~37~~ 39 variables, so I will explore each of these, one by one. # # The descriptions from the glossary will come in handy: glossary = clinical_glossary.set_index('variable').description # ### 0. PatientID # We should have data for all patients: samples_list = read_csv(samples_path) compare_sets(samples_list.sample_id, clinical_data.PatientID) # Great! len(samples_list.sample_id) # ### 1. The admission date clinical_data.AdmissionDate.hist(); assert not clinical_data.AdmissionDate.isnull().any() # ### 2. The date of birth clinical_data.Birthday.hist(); clinical_data.Birthday.describe() # The value ranges are plausible (as the data is from ~2015, the oldest participant was about 80 years old), and there are no suspicious duplicates. # ### 3. History of TB # PrevTB is expected to be a binary column: set(clinical_data.PrevTB) clinical_data[~clinical_data.PrevTB.isin(['N', 'Y'])] # I assume that the 'Unknown' and missing values convey the same information: the history of TB is not known. from numpy import nan def yes_no_unknown_to_binary(column): mapping = {'N': False, 'Y': True, 'Unknown': nan} assert not set(column) - {nan, *mapping.keys()} return column.replace(mapping) clinical_data.PrevTB = yes_no_unknown_to_binary(clinical_data.PrevTB) counts = clinical_data.PrevTB.fillna('Unknown').value_counts() counts.to_frame() counts.plot(kind='bar'); patients_with_history_of_tb = clinical_data[clinical_data.PrevTB == True].PatientID # ### 4. 
Form of previous TB glossary['PrevTBForm'] clinical_data['PrevTBForm'].value_counts().to_frame() # This column does not satisfy the first normal form - I will add binary columns (as suggested in the description): clinical_data['was_previous_tb_pulmonary'] = clinical_data.apply(lambda patient: # ignore patients with no history of TB (set the value to NaN): nan if not (patient.PrevTB == True) else 'Pulmonary' in str(patient.PrevTBForm) , axis=1) # Dr Rachel has confirmed that pleural does count as extrapulmonary: clinical_data['was_previous_tb_extrapulmonary'] = clinical_data.apply(lambda patient: nan if not (patient.PrevTB == True) else any( term in str(patient.PrevTBForm) for term in ['TBM', 'Meningeal', 'Pleural'] ) , axis=1) # I will not be creating a separate column for *meningeal* now (it would be strongly correlated with *extrapulmonary*); I do not think that the difference of the single additional observation (one *pleural* patient more the *extrapulmonary* cohort) would make a big impact. clinical_data[['PrevTB', 'PrevTBForm', 'was_previous_tb_pulmonary', 'was_previous_tb_extrapulmonary']].head(n=8) # #### Sanity check # If patient has a form of previous TB given, they should be marked as having history of TB: patients_with_previous_tb_form_data = clinical_data[~clinical_data.PrevTBForm.isnull()].PatientID assert not set(patients_with_previous_tb_form_data) - set(patients_with_history_of_tb) # And it is indeed the case. How many patients have known history but no subtype given? compare_sets(patients_with_history_of_tb, patients_with_previous_tb_form_data) # Just one - that's very good! # ### 5. Treatment for the previous TB glossary['PrevTBTreat'] clinical_data['PrevTBTreat'].value_counts() # I will create an additional, binary column for the previous treatment, with a different encoding for missing values: clinical_data['previous_tb_treatment'] = clinical_data.apply(lambda patient: nan if not (patient.PrevTB == True) else ( nan if patient.PrevTBTreat == 'Unknown' else patient.PrevTBTreat == 'Y' ) , axis=1) subset = clinical_data[['PatientID', 'PrevTB', 'PrevTBTreat', 'previous_tb_treatment']] subset.head(n=2) subset.tail(40).head(8) # An example of the advantage of the new column is that `148.HC` (58th row) has `NaN` assigned because this patient has no history of TB. The original, `PrevTBTreat` column has 'N' value for this patient. This may be slightly suspicious, given that some other patients (e.g. `149.TMD`/59th row) who also have no history of TB have `NaN` in the original column. # #### Sanity check patients_with_previous_tb_treatment = clinical_data[clinical_data.PrevTBTreat == 'Y'].PatientID assert not set(patients_with_previous_tb_treatment) - set(patients_with_history_of_tb) # ### 6. Grade of TBM glossary['TBMGrade'] clinical_data['TBMGrade'].value_counts() # Already in numeric format, that's great. # # **Note:** only two patients with grade three! # ### 7. HIV Result glossary['HIVResult'] clinical_data['HIVResult'].value_counts() # ### 8. CD4 glossary['CD4'] # Distribution is skewed towards zero: clinical_data['CD4'].hist(); clinical_data['CD4'].describe() # And there are data for 78/95 patients. # It seems that that CD4 count is related to HIV/AIDS. 
A [US governmental site](https://www.hiv.va.gov/patient/diagnosis/labs-CD4-count.asp) states that 200 is a threshold for AIDS diagnosis but they do not give units; I am going to find a more accurate and reliable source and also read [PMC3729334](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3729334/) to understand the context better at some point. # ### 9. Is patient on antiretroviral drug(s)? glossary['ARV'] clinical_data['ARV'].value_counts() # No need to reformat to another binary format as there are no "Unknown" values. # Data for 79 patients, similar to the 78 above. # Is there a pattern in the missing data - e.g. are these only missing in healthy controls? show_list(clinical_data[clinical_data['ARV'].isnull()].PatientID) # No, it is not the case. # ### 10. History of treatment with antiretroviral drugs? glossary['UnARV'] clinical_data['UnARV'].value_counts() # Interpretation: answering "does the patient have a history of treatment with antiretroviral drugs?" # - "ARV-naive" - no, no history of treatment with antiretroviral drugs # - "PreviousHistory" - yes # Very little data - for only 41 patients. Which ones? patients_with_avr_data = clinical_data[~clinical_data['UnARV'].isnull()] patients_with_avr_data.condition.value_counts() patients_with_avr_data.HIVResult.value_counts() # HIV positive ones. # **Does lack of data mean "No" or "Unknown"?** # ### 11. Headache clinical_data['Headache'].value_counts() # Reformatting to binary format with NaN for unknown: clinical_data['Headache'] = yes_no_unknown_to_binary(clinical_data['Headache']) # ### 12. Duration of headache glossary['HeadacheD'] clinical_data['HeadacheD'].hist(); clinical_data['HeadacheD'].describe() # Is 600 days a real figure or a typo? I mean, if some patient reported having a headache for two years it would likely be closer to 730. And pressing an additional 0 when entering "60" is not so unlikely... # Dr Rachel agrees that it might be a typo, but it is to be confirmed. Creating a column with corrected data: clinical_data['headache_days_corrected'] = clinical_data['HeadacheD'].replace({600: 60}) clinical_data['headache_days_corrected'].max() # Otherwise it looks ok: numeric format, no negative values. # Are headache durations only present for patients with headache? headache_patients = clinical_data[clinical_data.Headache == True].PatientID compare_sets(headache_patients, clinical_data[clinical_data.HeadacheD > 0].PatientID) # That is ok - apparently there are no data for this patient when it comes to headache duration. # ### 13. Lethargy glossary['Lethargy'] clinical_data['Lethargy'].value_counts() clinical_data['Lethargy'] = yes_no_unknown_to_binary(clinical_data['Lethargy']) # ### 14. Duration of lethargy glossary['LethargyD'] clinical_data['LethargyD'].hist(); # ### 15. Vomiting glossary['Vomiting'] clinical_data['Vomiting'].value_counts() clinical_data['Vomiting'] = yes_no_unknown_to_binary(clinical_data['Vomiting']) # ### 16. Duration of vomiting glossary['VomitingD'] clinical_data['VomitingD'].hist(); compare_sets( clinical_data[clinical_data.Vomiting == True].PatientID, clinical_data[clinical_data.VomitingD > 0].PatientID ) # Ok! # ### 17. Conscious glossary['Conscious'] # **Note:** the column name may be misleading! clinical_data['Conscious'].value_counts() clinical_data['Conscious'] = yes_no_unknown_to_binary(clinical_data['Conscious']) # ### 18. 
Conscious glossary['ConsciousD'] clinical_data['ConsciousD'].hist(); compare_sets( clinical_data[clinical_data.Conscious == True].PatientID, clinical_data[clinical_data.ConsciousD > 0].PatientID ) # Ok! # ### 19. Treatment for TB glossary['OnTBTreat'] clinical_data['OnTBTreat'].value_counts() clinical_data['OnTBTreat'] = yes_no_unknown_to_binary(clinical_data['OnTBTreat']) # Only patients with TB should be on the treatment: clinical_data[clinical_data['OnTBTreat'] == True].condition.value_counts() # This is surprising... Maybe these are patients who had a history of TB? clinical_data[clinical_data['OnTBTreat'] == True].PrevTB.value_counts() # Yes! This makes sense (they were on treatment at the time of admission). # # Are all the "No" values for patients with a history of TB as well? clinical_data[clinical_data['OnTBTreat'] == False].PrevTB.value_counts() # Yes. What about current conditions? clinical_data[clinical_data['OnTBTreat'] == False].condition.value_counts() # **Note**: it is interesting that all the patients who are still being treated for TB are assigned to the CM group. But such patients form the biggest group among all patients with a history of TB, thus it might have been expected. # ### 20. Start date of previous TB treatment glossary['DateTBTreat'] clinical_data['DateTBTreat'].hist(); clinical_data['DateTBTreat'].describe() # #### Interpreting the glossary compare_sets( clinical_data[clinical_data.OnTBTreat == True].PatientID, clinical_data[~clinical_data.DateTBTreat.isnull()].PatientID ) # Data inconsistency? clinical_data[clinical_data.PatientID == '037.HC'][['condition', 'PrevTB', 'PrevTBTreat', 'OnTBTreat', 'DateTBTreat']] # Maybe. But it may be a matter of description (glossary) interpretation - if the glossary entry was originally placed after the "PrevTBTreat", it would read: # # - 'Did the patient receive TB treatment in the past?' # - 'If yes, when did it start?' # # which would make sense. # ### 21. End date of previous TB treatment glossary['DateTBTreatStop'] clinical_data['DateTBTreatStop'].hist(); # Because the earliest date is older than that of the start date: # - we may have data censoring on both ends - such wonderful data to experiment with survival models, # - can we be sure that the column names are not swapped? # Observation - no overlap with the start date (expected given the glossary description): clinical_data[ (~clinical_data.DateTBTreat.isnull()) & (~clinical_data.DateTBTreatStop.isnull()) ].empty # #### Minor inconsistencies with treatment status? compare_sets( clinical_data[clinical_data.OnTBTreat == False].PatientID, clinical_data[~clinical_data.DateTBTreatStop.isnull()].PatientID ) # The one odd example: # Ok! columns_of_interest = [ 'PrevTB', 'PrevTBTreat', 'OnTBTreat', 'DateTBTreat', 'DateTBTreatStop', 'AdmissionDate' ] clinical_data[clinical_data.PatientID == '224.CM'][columns_of_interest] # If the treatment stopped in 2014-09, why was the patient still on treatment when admitted? # One explanation would be that the admission was before this date. But it was not. # Dr Rachel notes that there is an explanation of such discrepancies: the patient may have stopped treatment early; then at the admission the doctor may have placed them back (immediately) on the treatment; this is thought not to be uncommon. # Because it is just a single patient, I think that overall this column is ok. # ### 22. 
Date of lumbar puncture glossary['DateLP'] clinical_data['DateLP'].hist(bins=20, figsize=(10, 4)); # I expect that there are dates for all patients as all of them had the procedure: assert not clinical_data['DateLP'].isnull().any() (clinical_data['DateLP'] >= clinical_data['AdmissionDate']).value_counts() time_to_puncture = clinical_data['DateLP'] - clinical_data['AdmissionDate'] time_to_puncture.value_counts() outlier_row_id = time_to_puncture.idxmax() clinical_data.loc[outlier_row_id][['PatientID', 'DateLP','AdmissionDate', 'DateCD4', 'DateAntiTB']] # This is suspicious! # **Update:** this record can be corrected - the electronic record (input date) hints us that the admission date for this patient had a typo. new_admission_date = clinical_data.loc[outlier_row_id, 'AdmissionDate'].replace(year=2015) new_admission_date clinical_data.loc[outlier_row_id, 'AdmissionDate'] = new_admission_date # ### 23. Appearance of CSF #1 glossary['App1'] clinical_data['App1'].value_counts() # Perfect! # ### 24. Appearance of CSF #2 glossary['App2'] clinical_data['App2'].value_counts() # There are different terms than in the first one... clinical_data[clinical_data.App1 == 'Mild cloudy/ opalescent'][['App1', 'App2']] # And the values do not correlate. Interesting, though I do not understand this fully: # - were the two observations performed at different time points, or # - were the two looking for different things? # **Update:** based on a comment from Dr Rachel, the observations were collected at the same time, but looking at different characteristics: clear/cloudy and color/colorless respectively. # ### 25. Red blood cells count clinical_data['RCC'].hist(); clinical_data['RCC'].describe() # How was RCC measured? In CSF? That would seem likely given a lot of zeros! # # If this is the case, there should be a few healthy controls with RCC > 0: clinical_data[clinical_data.RCC > 0].condition.value_counts() # ### 26. White blood cells count clinical_data['WCC'].hist(); clinical_data[clinical_data.WCC > 0].condition.value_counts() # ### 27. Percent of neutrophils among WBC clinical_data['%Neutro'].hist(); # The range is 0-100 - ok. # The spike at zero could be due to many samples having zero WCC. Is it? zero_percent = clinical_data[clinical_data['%Neutro'] == 0] zero_percent[zero_percent.WCC == 0].empty # No, it is not. # Sanity check: %Neutro > 0 only if WCC > 0? (clinical_data[clinical_data['%Neutro'] > 0].WCC > 0).all() # ### 28. Percent of lymphocytes among WBC clinical_data['%Lympho'].hist(); # Range is ok. %Lympho > 0 only if WCC > 0? (clinical_data[clinical_data['%Lympho'] > 0].WCC > 0).all() # Great! # ### 29. Protein in CSF glossary['Protein'] clinical_data['Protein'].hist(); # Protein >= 0 → ok. # No other sanity checks if could think of. # Only one missing value: clinical_data['Protein'].isnull().sum() # ### 30. Glucose in CSF glossary['CSFGlucose'] clinical_data['CSFGlucose'].hist(); clinical_data['CSFGlucose'].isnull().sum() # ### 31. Was anti-TB treatment started (this admission)? clinical_data['AntiTB'].value_counts() clinical_data[clinical_data['AntiTB'] == 'Y'].condition.value_counts() # Given the Monday discussion I understand that the treatment may be decided upon before the final diagnosis is made, thus I just assume that the classes are ok. clinical_data['AntiTB'] = yes_no_unknown_to_binary(clinical_data['AntiTB']) # ### 32. 
The start date of anti-TB treatment (this admission) clinical_data['DateAntiTB'].hist(bins=20, figsize=(10, 4)); # All who have a start date should also be marked as having the treatment: assert clinical_data[~clinical_data.DateAntiTB.isnull()].AntiTB.all() # ### 33. Was steroid started (this admission)? glossary['SteroidsStarted'] clinical_data['SteroidsStarted'].value_counts() clinical_data['SteroidsStarted'] = yes_no_unknown_to_binary(clinical_data['SteroidsStarted']) # ### 34. The date of starting steroid (this admission) clinical_data['SteroidDate'].hist(bins=20, figsize=(10, 4)); # All who have a start date should also be marked as having the treatment: assert clinical_data[~clinical_data.SteroidDate.isnull()].SteroidsStarted.all() # ### 35. Type of steroid glossary['SteroidType'] clinical_data['SteroidType'].value_counts() # Data for all the 34 - ok! # ### 36. Death glossary['Death'] clinical_data['Death'].value_counts() clinical_data['Death'] = clinical_data['Death'].replace({'Alive': False, 'Death': True}) # It might be good to know when was the end of the study to know the right censoring horizon. # ### 37. Date of death clinical_data['DateDeath'].hist(bins=20, figsize=(10, 4)); assert clinical_data[~clinical_data.DateDeath.isnull()].Death.all() # One could envision many sanity checks for this: for date_column in ['SteroidDate', 'DateAntiTB', 'DateLP', 'AdmissionDate']: assert not (clinical_data[date_column] > clinical_data['DateDeath']).any() # ### 38. CD4 test date glossary['DateCD4'] clinical_data.DateCD4.hist(bins=20, figsize=(10, 4)); (clinical_data.DateCD4 >= clinical_data.AdmissionDate).sum() (clinical_data.DateCD4 < clinical_data.AdmissionDate).sum() # A quick note: about a third of the CD4 tests was performed before the current admission; # it is plausible as the test might have been done immediately after the HIV diagnosis or around the time when someone was suspected to have HIV. # # The glossary entry reaffirms that this is ok. compare_sets( clinical_data[~clinical_data.CD4.isnull()].PatientID, clinical_data[~clinical_data.DateCD4.isnull()].PatientID ) # ### 39. Gender clinical_data['Sex'].value_counts() # Perfect! # ## Variable exploration summary # There are many variables which are by definition dependent one on another. These are: # this dictionary was organised in shuch a way that the column with # the most data is the key (on the left side, preferring the re-formatted ones) relations = { 'PrevTB': [ 'PrevTBForm', 'PrevTBTreat', # to be confirmed 'OnTBTreat', 'DateTBTreat', 'DateTBTreatStop', # addded columns 'was_previous_tb_pulmonary', 'was_previous_tb_extrapulmonary', 'previous_tb_treatment' ], 'HIVResult': ['CD4', 'ARV', 'UnARV'], 'Headache': ['HeadacheD', 'headache_days_corrected'], 'Lethargy': ['LethargyD'], 'Vomiting': ['VomitingD'], 'Conscious': ['ConsciousD'], # OnTBTreat not only defines the presence of the dates, but also which date was recorded! 'OnTBTreat': ['DateTBTreat', 'DateTBTreatStop'], # These dates are mutually exclusive: 'DateTBTreat': ['DateTBTreatStop'], 'WCC': ['%Neutro', '%Lympho'], 'AntiTB': ['DateAntiTB'], 'SteroidsStarted': ['SteroidDate', 'SteroidType'], 'Death': ['DateDeath'], 'CD4': ['DateCD4'] } # ## Data completeness # ### Patient-wise # Are there any patients for whom there is too little data in general? 
# Exclude the dependent variables so that we don't have unequal weights of missing values just because there were some variables measured only in a subset of patients: key_variables = set(clinical_data.columns) - {e for l in relations.values() for e in l} len(key_variables) subset = clinical_data[key_variables].set_index('PatientID') missing_data = (100 * subset.isnull().sum(axis=1) / len(subset.columns)) missing_data = missing_data[missing_data > 0] missing_data.to_frame() # Up to 20% of missing "key" variables per patient (key variable defined as not being dependent - due to the data collection procedure - on other variables). # # It should not be a big problem. # # We may want to exclude the patient with 20% of missing data (also, see the note on PLS tolerance to missing values below). I will keep all the patients for now, because I may be able to fill the missing values for some of the variables and it may lower the percentage to more acceptable level. # ### Variable-wise missing_percent = (100 * subset.isnull().sum(axis=0) / len(subset)) missing = missing_percent[missing_percent != 0] missing.sort_index().sort_values().to_frame() # Up to 3.2% of missing data on the selected variables - good enough! # ### Overall ratio_missing = subset.isnull().sum(axis=1).sum() / (len(subset) * len(subset.columns)) f'{100 * ratio_missing:.2f}%' # ( et. al, 2001) states that PLS can handle up to 10-20% of missing data for dataset with 20 observations / 20 variables if the values are not missing in systematic pattern (thus the exclusion of non-key columns) and that the tolerance to missing values increases with the increase in dataset size. # # Thus there should be no problem with 1% of missing data. # # Why is 20/20 anyhow relevant to this dataset? # - there are 22 clinical variables; even though these will be changed (admission date or date of birth does not really count, but I will get age/ survival out of these), there will be ~20 major variables in the end # - the subgroups (by condition) if analyzed separately have about 20 observations clinical_data.set_index('PatientID').to_csv(clean_clinical_path) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.1.0 # language: julia # name: julia-1.1 # --- using ElectromagneticFields using Makie const a = 1.0 const b = 0.5 const c = 0.5 ; eq = ABC(a, b, c) load_equilibrium(eq) nx = 100 ny = 110 nz = 120 nl = 10 ; xgrid = LinRange(-1.0, +1.0, nx) ygrid = LinRange(-1.0, +1.0, ny) zgrid = LinRange(-1.0, +1.0, nz) ; field = zeros(nx, ny, nz) potAX = zeros(nx, ny, nz) potAY = zeros(nx, ny, nz) potAZ = zeros(nx, ny, nz) ; for i in 1:nx for j in 1:ny for k in 1:nz field[i,j,k] = B(xgrid[i], ygrid[j], zgrid[k]) potAX[i,j,k] = A₁(xgrid[i], ygrid[j], zgrid[k]) potAY[i,j,k] = A₂(xgrid[i], ygrid[j], zgrid[k]) potAZ[i,j,k] = A₃(xgrid[i], ygrid[j], zgrid[k]) end end end axis = (names = (axisnames = ("x", "y"),),) scene = hbox( vbox( contour(xgrid, ygrid, potAX[:,:,div(nz,2)], axis=axis, levels=nl), contour(xgrid, ygrid, potAY[:,:,div(nz,2)], axis=axis, levels=nl) ), vbox( contour(xgrid, ygrid, field[:,:,div(nz,2)], axis=axis, levels=nl, linewidth=0, fillrange=true), contour(xgrid, ygrid, potAZ[:,:,div(nz,2)], axis=axis, levels=nl) ) ) axis = (names = (axisnames = ("x", "z"),),) scene = hbox( vbox( contour(xgrid, zgrid, potAX[:,div(ny,2),:], axis=axis, levels=nl), contour(xgrid, zgrid, potAY[:,div(ny,2),:], axis=axis, 
levels=nl) ), vbox( contour(xgrid, zgrid, field[:,div(ny,2),:], axis=axis, levels=nl, linewidth=0, fillrange=true), contour(xgrid, zgrid, potAZ[:,div(ny,2),:], axis=axis, levels=nl) ) ) axis = (names = (axisnames = ("y", "z"),),) scene = hbox( vbox( contour(ygrid, zgrid, potAX[div(nx,2),:,:], axis=axis, levels=nl), contour(ygrid, zgrid, potAY[div(nx,2),:,:], axis=axis, levels=nl) ), vbox( contour(ygrid, zgrid, field[div(nx,2),:,:], axis=axis, levels=nl, linewidth=0, fillrange=true), contour(ygrid, zgrid, potAZ[div(nx,2),:,:], axis=axis, levels=nl) ) ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Treebeard Container Setup # # This notebook is the first to run when the container starts. # # It can be used to set env vars and install secrets which can be used to connect to APIs and pull down data too large or sensitive to include in the image. # # You can pass in secrets using the `notebook-vars` input on the treebeard action # # Because this is the most awkward to use and cannot be cached only use it when you rely on scripts to set variables or work with data which cannot be handled in the container because it is too large/private. # # + from viresclient import set_token import os set_token( "https://vires.services/ows", set_default=True, token=os.getenv("TB_VIRES_TOKEN"), ) set_token( "https://staging.viresdisc.vires.services/ows", token=os.getenv("TB_VIRES_TOKEN_STAGINGDISC"), ) set_token( "https://staging.vires.services/ows", token=os.getenv("TB_VIRES_TOKEN_STAGING"), ) # - from treebeard.helper import shell shell("pip install MagneticModel/eoxmagmod") shell("pip install swarmpyfac") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #import libraries import re import time import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from sklearn.feature_extraction.text import CountVectorizer from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import MultinomialNB from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.externals import joblib # - #load data train_data = pd.read_csv("undersampled/train.tsv", delimiter='\t', lineterminator='\n', header=None) dev_data = pd.read_csv("undersampled/dev.tsv", delimiter='\t', lineterminator='\n', header=None) test_data = pd.read_csv("undersampled/test.tsv", delimiter='\t', lineterminator='\n', header=None) (train_data.shape[0], dev_data.shape[0], test_data.shape[0]) train_dev_data = pd.concat([train_data, dev_data], axis=0) train_dev_data.shape[0] train_dev_data.head() #assign x and y train_x = train_dev_data[0] test_x = test_data[0] train_y = train_dev_data[1] test_y = test_data[1] print(f"training data: {np.round(train_y.value_counts()[1]/train_data.shape[0],4)*100}% positive class") print(f"test data: {np.round(test_y.value_counts()[1]/test_data.shape[0],4)*100}% positive class") # + def preprocessor(s): s = s.lower() s = re.sub(r'\d+', 'DG', s) s = re.sub(r'@\w+', "@USER", s) return s vect = 
CountVectorizer(preprocessor=preprocessor) nb = MultinomialNB(fit_prior=False) pipe = Pipeline(steps=[("vectorizer", vect), ("naivebayes", nb)]) param_grid = {"vectorizer__ngram_range": [(1,1),(1,2),(1,3)], "vectorizer__max_df": [0.8,0.9,1.0], "naivebayes__alpha": [0.01, 0.1, 1.0, 10.0]} search = GridSearchCV(pipe, param_grid, cv=3, verbose=1) search.fit(train_x, train_y) # - search.best_params_ search_results = pd.DataFrame(search.cv_results_)[["mean_fit_time","mean_test_score","mean_train_score", "param_naivebayes__alpha","param_vectorizer__max_df", "param_vectorizer__ngram_range"]] search_results.sort_values("mean_test_score", ascending=False) train_pred = search.predict(train_x) print(f"accuracy: {np.round(accuracy_score(train_pred, train_y),3)}") print(f"f1-score: {np.round(f1_score(train_pred, train_y),3)}") confusion_matrix(train_pred, train_y) test_pred = search.predict(test_x) print(f"accuracy: {np.round(accuracy_score(test_pred, test_y),3)}") print(f"f1-score: {np.round(f1_score(test_pred, test_y),3)}") confusion_matrix(test_pred, test_y) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: ml-software # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import internetarchive as ia import pandas as pd import itertools as it from toolz import pluck, filter, map, take import toolz import os from pathlib import Path import json from glob import glob from zipfile import ZipFile, BadZipFile config = json.load(open(os.path.expanduser("~/.thesis.conf"))) db_folder = Path(config['datasets']) / Path("archive_org/") os.chdir(str(db_folder)) iaformat = ["Single Page Processed JP2 ZIP", 'Metadata'] search = ia.search_items('pages:[20 TO 25] AND (language:eng OR language:"English") AND date:[1800-01-01 TO 1967-01-01]') items = list(toolz.take(5,search.iter_as_items())) item = items[2] for item in items[0:3]: ia.download(item.identifier, formats=iaformat) jp2path = Path(item.identifier) / Path(next(pluck('name',filter(lambda files: files['format'] == iaformat, item.files)))) # + inputHidden=false outputHidden=false zips = glob(str(db_folder / '*' / '*_jp2.zip')) zips # - for zip in zips: try: jp2zip = ZipFile(str(zip)) print(jp2zip.filename, len(jp2zip.namelist())) jp2zip.extract(jp2zip.namelist()[0]) except BadZipFile: print('zip error', zip) jp2zip.namelist() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import json import numpy as np import pandas as pd from pathlib import Path import multiprocessing from tqdm import tqdm import cv2 import math import glob # + SEED = 1111 FLOOR_MAP = {"B2":-2, "B1":-1, "F1":0, "F2":1, "F3":2, "F4":3, "F5":4, "F6":5, "F7":6, "F8":7, "F9":8, "1F":0, "2F":1, "3F":2, "4F":3, "5F":4, "6F":5, "7F":6, "8F":7, "9F":8} WAYPOINTS_DF = pd.read_csv('/kaggle/input/indoor-supplementals-for-postprocessing/waypoint.csv') # + def metadata_dir(): return Path('/kaggle/input/indoor-location-navigation/metadata') def floor2strs(floor): return [key for key, val in FLOOR_MAP.items() if val == floor] def get_map_info(site, floor): for floor_str in floor2strs(floor): json_path = metadata_dir() / site / floor_str / "floor_info.json" if json_path.exists(): break with open(json_path, "r") as f: info = json.load(f) height = info['map_info']['height'] width = 
info['map_info']['width'] return height, width def find_nearest_waypoints(xy, waypoints): r = np.sum((waypoints - xy)**2, axis=1) j = np.argmin(r) return waypoints[j, :] def coodinate_to_pixel(x, y, height, width, shape): p_x = int((x / width) * shape[1]) p_y = int((1 - y / height) * shape[0]) p_x = max(0, min(shape[1] - 1, p_x)) p_y = max(0, min(shape[0] - 1, p_y)) return p_x, p_y # - def extract_permitted_area_from_map(site, floor): for floor_str in floor2strs(floor): floor_image_path = metadata_dir() / site / floor_str / "floor_image.png" if floor_image_path.exists(): break img = cv2.imread(str(floor_image_path), cv2.IMREAD_UNCHANGED) height, width, channel = img.shape _, thimg_soft = cv2.threshold(img[:,:,3], 1, 1, cv2.THRESH_BINARY) _, thimg_hard = cv2.threshold(img[:,:,3], 254, 1, cv2.THRESH_BINARY_INV) thimg_soft[0, :] = 0 thimg_soft[height - 1, :] = 0 thimg_soft[:, 0] = 0 thimg_soft[:, width - 1] = 0 mask_img = np.zeros_like(thimg_soft).astype(np.uint8) contours, hierarchy = cv2.findContours(thimg_soft, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) contours = [contour for contour in contours if cv2.contourArea(contour) > 1000] cv2.fillPoly(mask_img, contours, 1) permitted_img = np.minimum(mask_img, thimg_hard) return img, cv2.blur(permitted_img, (5, 5)) def generate_grid_point(args): site, floor = args floor_image, permitted_mask = extract_permitted_area_from_map(site, floor) waypoints = WAYPOINTS_DF[(WAYPOINTS_DF['site'] == site) & (WAYPOINTS_DF['floor'] == floor)][['x', 'y']].values height, width = get_map_info(site, floor) extra_grid_points = np.zeros((0, 2)) rgen = np.random.default_rng(SEED) for i in range(10000): x = rgen.uniform(low=0.0, high=width) y = rgen.uniform(low=0.0, high=height) p_x, p_y = coodinate_to_pixel(x, y, height, width, permitted_mask.shape) if permitted_mask[p_y, p_x] == 1: xy = np.array([x, y]) xy_near_1 = find_nearest_waypoints(xy, waypoints) r1 = np.sqrt(np.sum((xy - xy_near_1)**2)) if extra_grid_points.shape[0] > 0: xy_near_2 = find_nearest_waypoints(xy, extra_grid_points) r2 = np.sqrt(np.sum((xy - xy_near_2)**2)) else: r2 = float('inf') if (r1 > 5.0) and (r2 > 2.5): extra_grid_points = np.concatenate([extra_grid_points, np.expand_dims(xy, axis=0)]) if extra_grid_points.shape[0] == 0: return None else: out_df = pd.DataFrame({ 'x' : extra_grid_points[:, 0], 'y' : extra_grid_points[:, 1], }) out_df['site'] = site out_df['floor'] = floor return out_df sub = pd.read_csv('../input/wifi-features-with-lightgbm-kfold/submission_baseline.csv') tmp = sub['site_path_timestamp'].apply(lambda x: pd.Series(x.split('_'))) sub['site'] = tmp[0] site_floor = sub[['site', 'floor']].drop_duplicates().values processes = multiprocessing.cpu_count() with multiprocessing.Pool(processes=processes) as pool: dfs = pool.imap_unordered(generate_grid_point, site_floor) dfs = tqdm(dfs) dfs = [df for df in dfs if df is not None] df = pd.concat(dfs).sort_values(['site', 'floor']) df.to_csv('extra_grid_points.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Playing with the Gaussian Distribution # # There was a [statement](http://imgur.com/gallery/ng3w1vt) I saw online: "I don't know anyone with an IQ above 7 that respects Hillary Clinton." # # Of course, the person is trying to sound smart and snarky but I don't think they pull it off very well. 
My first reaction was to show them how dumb they are, because arguments online are always a good idea, right? I didn't say anything, as I usually don't. Whatever. I thought I'd write down how to think about standard scores like this instead. # # Before I start, there are interesting discussions about why IQ is an outdated idea. For example: # # > There is no reason to believe, and much reason not to believe, that the measure of so-called "Intelligence Quotient" in any way reflects some basic cognitive capacity or "natural kind" of the human mind. The domain-general measure of IQ isn't motivated by any recent discovery of cognitive or developmental psychology. # # _Atran S. 2015. IQ. In: Brockman J, editor. This idea must die: scientific ideas that are blocking progress. New York, New York: Harper Perennial. p. 15._ # # Notwithstanding, let's have a little fun with this, or brush up on some statistics using Python. # # ## Getting Started # # The Stanford-Binet IQ test is an intelligence test standardized for a median of 100 and a standard deviation of 15. That means that someone with an IQ of 100 has about as many people smarter than them as there are less intelligent. It also means that we can calculate about where someone fits in the population if their score is different than 100. # # We'll use a Gaussian distribution to describe the scores. This is the bell curve we've all probably seen before, where most things group up in the middle and the exceptional items are found to the left and right of the center: # # ![bell curve](http://i.imgur.com/1R5fEBm.gif) # # To figure out what a test score says about a person, we'll: # # * compare the test score to the everyone else (calculate the z-score) # * figure out the proportion of people with lower scores (calculate the probability) # * understand the conclusions # ## Calculate the Z-Score # # The z-score is the distance between an observed value and the mean value divided by the standard deviation. For IQ scores, we'll use the median for the mean, since I don't have (or couldn't be bothered to find) better data. Here's the formula: # # $$z = \frac{x_{i} - \mu}{\sigma}$$ # # where $x_{i}$ is the observed value, $\mu$ is the mean and $\sigma$ is the standard deviation. # # Put another way, the mean measures the middle of normal data and the standard deviation measures the width. If it's wide, there's a lot of variance in the data, and if it's narrow, almost everything comes out near the mean value. The z-score measures how different an observation is from the middle of the data. There's [another discussion](http://math.stackexchange.com/questions/133701/how-does-one-gain-an-intuitive-understanding-of-the-z-score-table) of this that might be useful. # # So, calculating the z-score is our first step so that we can compare the teset score to everyone else's test score. # # Let's do this with Python. # def z_score(x, m, s): return (x - m) / s # I created a function that takes the observation, mean and standard deviation and returns the z-score. Notice it's just a little bit of arithmetic. # # Here are a few examples, testing our method. print(z_score(95, 100, 15), z_score(130, 100, 15), z_score(7, 100, 15)) # We should see -0.3333333333333333 2.0 -6.2 or 1/3 deviation below average, 2 above and 6.2 below. # ## Calculate the Probability # # Given the z-score, we can calculate the probability of someone being smarter or less-intelligent than observed. To do this, we need to estimate the area under the correct part of the curve. 
Looking at our curve again, we have # # ![bell curve](http://i.imgur.com/1R5fEBm.gif) # # Notice the numbers along the bottom? Those are z-scores. So, if we have a z-score of -3, we'll know that anyone less intelligent than that is about 0.1% of the population. We take the perctage from under the curve in that region to get the answer of 0.1% If we have a z-score of -2.5, we add up the first two areas (0.1% and 0.5%) to figure out that 0.6% of people are less intelligent than that test score. If we get a z-score of 1, we'd add all the numbers from left to the z-score of 1 and get about 84.1%. # # [SciPy](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) has a normal distribution with a cdf function, or the Cumulative Distribution Function. That's the function that measures the area of a curve, up to a point. To use this function, we write: # + import scipy.stats as st def p(x, m, s): z = z_score(x, m, s) return st.norm.cdf(z) # - # Here, I use p for probability, or the probability that an observation will be lower than the value provided. I pass in the observation, the mean and the standard deviation. The function looks up my z-score for me and then calls SciPy's CDF function on the normal distribution. # # Let's calculate this for a few z-scores. I'll use pandas to create a data frame, because they print neatly. # + import numpy as np import pandas as pd scores = np.arange(60, 161, 20) z_scores = list(map(lambda x: z_score(x, 100, 15), scores)) less_intelligent = list(map(lambda x: p(x, 100, 15), scores)) df = pd.DataFrame() df['test_score'] = scores df['z_score'] = z_scores df['less_intelligent'] = less_intelligent df # - # This code creates a pandas data frame by first setting a few sample test scores from 60 to 160. Then calculating their z-scores and the proportion of the population estimated to be less intelligent than that score. # # So, someone with a score of 60 would have almost 4 people out of a thousand that are less intelligent. Someone with a score of 160 would expect that in a room of 100,000, 3 would be more intelligent than they are. # # This is a similar result that we see in the bell curve, only as applied with our mean of 100 and our standard deviation of 15. # ## Understanding the Conclusions # # Taking a few moments to calculate the probability of someone being less smart than a score reminds me how distributions work. Maybe this was something most programmers learned and don't use often, so the knowledge gets a little dusty, a little less practical. # # I used matplotlib to create a graphic with our IQ distribution in it. I just grabbed the [code](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html) from the SciPy documentation and adjusted it for our mean and standard deviation. I also use the ggplot style, because I think it's pretty slick. import matplotlib.pyplot as plt # %matplotlib inline plt.style.use('ggplot') mu, sigma = 100, 15. # mean and standard deviation s = sorted(np.random.normal(mu, sigma, 1000)) count, bins, ignored = plt.hist(s, 30, normed=True) plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2) plt.show() # The blue line shows us an approximation of the distribution. I used 1,000 random observations to get my data. I could have used 10,000 or 100,000 and the curve would look really slick. However, that would hide a little what we actually mean when we talk about distributions. 
If I took 1,000 students and gave them an IQ test, I would expect scores that were kind of blotchy like the red histogram in the plot. Some categories would be a little above the curve, some below. # # As a side note, if I gave everyone in a school an IQ test and I saw that my results were skewed a little to the left or the right, I would probably conclude that the students in the school test better or worse than the population generally. That, or something was different about the day, or the test, or grading of the test, or the collection of tests or something. Seeing things different than expected is where the fun starts. # ## What About the Snark? # # Oh, and by the way, what about "I don't know anyone with an IQ above 7 that respects "? # # How common would it be to find someone with an IQ at 7? Let's use our code to figure that out. z = z_score(7, 100, 15) prob = p(7, 100, 15) rounded_prob = round(prob, 15) print("The z-score {0} and probability {1} of a test score of 7.".format(z, rounded_prob)) # That is, the z-score for a 7 test score is -6.2, or 6.2 standard deviations from the mean. That's a very low score. The probability that someone gets a lower score? 2.8e-10. How small is that number? instances_per_billion = round((1/prob) / 1000000000, 2) people_on_the_planet = 7.125 # billion instances_on_the_planet = people_on_the_planet / instances_per_billion instances_on_the_planet # Or, if the snarky comment were accurate, there would be 2 people that have an IQ lower than 7 on the planet. Maybe both of us could have chilled out a little and came up with [funnier ways to tease](https://www.youtube.com/watch?v=JsJxIoFu2wo). # # Interestingly, if we look at recent (9/21/15) [head-to-head polls](http://www.realclearpolitics.com/epolls/2016/president/2016_presidential_race.html) of Hillary Clinton against top Republican candidates, we see that: votes = pd.Series([46.3, 45.3, 46.3, 46.3, 49.4, 47.8, 42.7, 43.3, 49.0, 47.7, 48.3, 46.5, 46.5, 49.0, 48.0]) # I thought it was easier to read percentages as 46.3, but I'm converting those numbers here to fit # in the set [0,1] as well-behaved probabilities do. votes = votes.apply(lambda x: x / 100) votes.describe() # Or, from 15 hypothetical elections against various Republican candidates, about 46.8% would vote for former over her potential Republican rivals at this point. It's interesting to point out that the standard deviation in all these polls is only about 2%. Or, of all the Republican candidates, at this point very few people are thinking differently from party politics. Either the particular candidates are not well known, or people are just that polarized that they'll vote for their party's candidate no matter who they run. # # If we're facetious and say that only stupid people are voting for (from the commenter's snark), how would we find the IQ threshold? Or, put another way, if you ranked US voters by intelligence, and assumed that the dumbest ones would vote for , and only the smart ones would vote Republican, what IQ score would these dumb ones have? # # We can get the z-score like this: hillary_z_score = st.norm.ppf(votes.mean()) hillary_z_score # So, the z-score is just about 1/10th of one standard deviation below the mean. That is, it's going to be pretty close to 100. 
# # Using the z-score formula, we can solve for $x_{1}$ and get: # # $$z = \frac{x_{i} - \mu}{\sigma}$$ # # $$x_{i} = \sigma z + \mu$$ # # Plugging our z-score number in, with our standard deviation and mean, we get: iq = 15 * hillary_z_score + 100 iq # Or, if I wanted to make an accurate stupidity joke about Hillary Clinton followers: "I don't know anyone with an IQ above __98.8__ that respects Hillary Clinton." Ba-dum-bum-CHING. # # Hope you had a little fun getting a practical refresher on normal distributions using Python. # ### Resources: # # Here are a few resources, if you want to look things up. # # * [iPython Notebook](https://github.com/davidrichards/random_notebooks/blob/master/notebooks/IQ%20Scores.ipynb) # * [Python Code](https://gist.github.com/davidrichards/3ced3fe28e7266eda899) # * [IQ Classification](https://en.wikipedia.org/wiki/IQ_classification) # * [Head to Head Presidential Polls for Hillary Clinton](http://www.realclearpolitics.com/epolls/2016/president/2016_presidential_race.html) # * [z-score intuition](http://math.stackexchange.com/questions/133701/how-does-one-gain-an-intuitive-understanding-of-the-z-score-table) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8 # language: python # name: python3.8 # --- # # Azure ML Compute Python SDK # # description: overview of the AML Compute Python SDK # + tags=["create workspace"] from azureml.core import Workspace ws = Workspace.from_config() ws # - # ## Introduction to AmlCompute # # Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single to multi-node compute of the appropriate VM Family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. It autoscales by default to the max_nodes, when a job is submitted, and executes in a containerized environment packaging the dependencies as specified by the user. # # Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service. # # For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) # # **Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. # + from azureml.core.compute import ComputeTarget, AmlCompute AmlCompute.supported_vmsizes(workspace=ws) # AmlCompute.supported_vmsizes(workspace = ws, location='southcentralus') # - # ### Provision as a persistent compute target (Basic) # # You can provision a persistent AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace. # # * `vm_size`: VM family of the nodes provisioned by AmlCompute. 
Simply choose from the supported_vmsizes() above # * `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute # # You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription. # # In addition to `vm_size` and `max_nodes`, you can specify: # * `min_nodes`: Minimum nodes (default 0 nodes) to downscale to while running a job on AmlCompute # * `vm_priority`: Choose between 'dedicated' (default) and 'lowpriority' VMs when provisioning AmlCompute. Low Priority VMs use Azure's excess capacity and are thus cheaper but risk your run being pre-empted # * `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes # * `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned # * `vnet_name`: Name of VNet # * `subnet_name`: Name of SubNet within the VNet # * `admin_username`: Name of Admin user account which will be created on all the nodes of the cluster # * `admin_user_password`: Password that you want to set for the user account above # * `admin_user_ssh_key`: SSH Key for the user account above. You can specify either a password or an SSH key or both # * `remote_login_port_public_access`: Flag to enable or disable the public SSH port. If you dont specify, AmlCompute will smartly close the port when deploying inside a VNet # * `identity_type`: Compute Identity type that you want to set on the cluster, which can either be SystemAssigned or UserAssigned # * `identity_id`: Resource ID of identity in case it is a UserAssigned identity, optional otherwise # # + tags=[] from random import randint from azureml.core.compute import AmlCompute, ComputeTarget # name must be unique within a workspace ct_name = f"ct-{str(randint(10000, 99999))}-concept" if ct_name in ws.compute_targets: ct = ws.compute_targets[ct_name] ct.delete() ct.wait_for_completion(show_output=True, is_delete_operation=True) compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_D2_V2", max_nodes=4 ) ct = ComputeTarget.create(ws, ct_name, compute_config) ct.wait_for_completion(show_output=True) # - # get_status() gets the latest status of the AmlCompute target ct.get_status().serialize() # list_nodes() gets the list of nodes on the cluster with status, IP and associated run ct.list_nodes() # + # update() takes in the min_nodes, max_nodes and idle_seconds_before_scaledown and updates the AmlCompute target # ct.update(min_nodes=1) # ct.update(max_nodes=10) # ct.update(idle_seconds_before_scaledown=300) # ct.update(min_nodes=2, max_nodes=4, idle_seconds_before_scaledown=600) # - # delete() is used to deprovision and delete the AmlCompute target. 
# Useful if you want to re-use the compute name.
ct.delete()
ct.wait_for_completion(show_output=True, is_delete_operation=True)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# +
import sys
import pandas as pd
import datetime as dt
from queue import Queue

sys.path.append("../")  # adding path to library
from library.backtester import backtest
from library.backtester.portfolio import NaivePortfolio
from library.backtester.strategy import BuyAndHoldStrategy
from library.backtester.data import FtxHistoricCSVDataHandler
from library.backtester.execution import SimulatedExecutionHandler
# -

# ### Create the event queue
# An event-driven backtest is built around events. Each event carries a piece of information, whether that is the arrival of market data or the execution of an order. In this example we create a single event queue and hand it to all the modules, so that every component has access to it.

events = Queue()

# ### Tell the backtester where our data lives
# The architecture allows data handlers to be written for different formats, but in this example we use the CSV handler, which expects CSV files in the format in which FTX provides historical data. It assumes one CSV file per asset, named *ticker*.csv, for example:

# !ls ../data

# How to obtain data in this format is described in the previous notebook.
#
# We create the data handler, giving it access to the event queue, pointing it at the directory with the CSV files and passing the list of tickers we want to run through the backtester. By the way, the variable is called bars because the historical data are candlesticks, not just closing prices.

# !head -n 3 ../data/ABNBUSD.csv

bars = FtxHistoricCSVDataHandler(events=events, csv_dir='../data', symbol_list=['BTCUSDT'])

# ### Define the strategy
# In trading it is common practice to test a strategy against some benchmark. The benchmark is usually chosen for problem-specific reasons; as a first approximation I take a buy-and-hold portfolio as the benchmark. The strategy inherits from the strategy template described in `library.backtester.strategy`, so it is easy to add your own strategies.

strategy = BuyAndHoldStrategy(bars=bars, events=events)

# ### Define the portfolio
# The portfolio analyses the executed trades, collects metrics and aggregates the results. It also sends orders to the broker based on the strategy's signals and the funds available. The current implementation is naive in the sense that there is no risk management and the quantity of assets bought is fixed.

portfolio = NaivePortfolio(bars=bars, events=events, start_date=dt.datetime(2020, 8, 1), initial_capital=6_500)

# ### Add the execution handler

broker = SimulatedExecutionHandler(events)

# ### Run the backtester

backtest(events, bars, strategy, portfolio, broker)
results = portfolio.create_equity_curve_dataframe()

# It is worth noting that the backtest function mutates the internal state of all the modules, which is probably not ideal and will need to be rethought, but these are the results we get.

results

# And indeed, had we invested in bitcoin at the end of 2020, we would have ended up with six times the money we put in.
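# Beyond eyeballing the equity curve, a couple of headline numbers can be pulled straight from the results frame. A minimal sketch, assuming the 'equity_curve' column holds the portfolio value over time (it is the column plotted just below):

# +
equity = results['equity_curve']

total_return = equity.iloc[-1] / equity.iloc[0] - 1.0   # overall growth over the backtest
max_drawdown = (equity / equity.cummax() - 1.0).min()   # worst peak-to-trough fall

print(f"Total return: {total_return:.1%}, max drawdown: {max_drawdown:.1%}")
# -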
results['equity_curve'].plot(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # argv: # - C:/Users//Anaconda3\python.exe # - -m # - ipykernel_launcher # - -f # - '{connection_file}' # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nteract={"transient": {"deleting": false}} # # List of Basic Regressions # + [markdown] nteract={"transient": {"deleting": false}} # # Linear Regression # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np import scipy.stats as stats from statsmodels import regression from statsmodels import regression, stats import scipy as sp import statsmodels.api as sm import math import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings("ignore") import yfinance as yf yf.pdr_override() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} start = '2018-01-01' end = '2022-01-01' market1 = 'SPY' market2 = '^IXIC' symbol1 = 'AMD' symbol2 = 'INTC' # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} asset1 = yf.download(symbol1, start=start, end=end)['Adj Close'] asset2 = yf.download(symbol2, start=start, end=end)['Adj Close'] benchmark = yf.download(market1, start=start, end=end)['Adj Close'] # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} def linreg(X,Y): # Running the linear regression X = sm.add_constant(X) model = regression.linear_model.OLS(Y, X).fit() a = model.params[0] b = model.params[1] X = X[:, 1] # Return summary of the regression and plot results X2 = np.linspace(X.min(), X.max(), 100) Y_hat = X2 * b + a plt.scatter(X, Y, alpha=0.3) # Plot the raw data plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red plt.xlabel('X Value') plt.ylabel('Y Value') return model.summary() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} r_a = asset1.pct_change()[1:].dropna() r_b = benchmark.pct_change()[1:].dropna() linreg(r_b.values, r_a.values) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} sns.regplot(r_b.values, r_a.values) # + [markdown] nteract={"transient": {"deleting": false}} # # Regression Model Instability # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} def linreg(X,Y): x = sm.add_constant(X) # Add a row of 1's so that our model has a constant term model = regression.linear_model.OLS(Y, x).fit() return model.params[0], model.params[1] # Return the coefficients of the linear model # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} breakpoint = 100 xs = np.arange(len(asset1)) xs2 = np.arange(breakpoint) xs3 = np.arange(len(asset1) - breakpoint) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} a, b = linreg(xs, asset1) a2, b2 = linreg(xs2, asset1[:breakpoint]) a3, b3 = linreg(xs3, asset1[breakpoint:]) Y_hat = pd.Series(xs * b + a, index=asset1.index) Y_hat2 = pd.Series(xs2 * b2 + a2, index=asset1.index[:breakpoint]) Y_hat3 = pd.Series(xs3 * b3 + a3, index=asset1.index[breakpoint:]) # + jupyter={"source_hidden": false, 
"outputs_hidden": false} nteract={"transient": {"deleting": false}} asset1.plot() Y_hat.plot(color='y') Y_hat2.plot(color='r') Y_hat3.plot(color='g') plt.title(symbol1 + ' Price') plt.ylabel('Price') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} b1 = yf.download(market1, start=start, end=end)['Adj Close'] b2 = yf.download(market2, start=start, end=end)['Adj Close'] # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} mlr = regression.linear_model.OLS(asset1, sm.add_constant(np.column_stack((b1, b2)))).fit() prediction = mlr.params[0] + mlr.params[1]*b1 + mlr.params[2]*b2 print('Constant:', mlr.params[0], 'MLR beta to S&P 500:', mlr.params[1], ' MLR beta to MDY', mlr.params[2]) # Plot the asset pricing data and the regression model prediction, just for fun asset1.plot() prediction.plot() plt.title(symbol1 + ' ') plt.ylabel('Price') plt.legend(['Asset', 'Linear Regression Prediction']) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} # Compute Pearson correlation coefficient sp.stats.pearsonr(b1,b2)[0] # Second return value is p-value # + [markdown] nteract={"transient": {"deleting": false}} # # Multiple Linear Regression # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} slr = regression.linear_model.OLS(asset1, sm.add_constant(asset2)).fit() print('SLR beta of stock2:', slr.params[1]) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} # Run multiple linear regression using asset2 and SPY as independent variables mlr = regression.linear_model.OLS(asset1, sm.add_constant(np.column_stack((asset2, benchmark)))).fit() prediction = mlr.params[0] + mlr.params[1]*asset2 + mlr.params[2]*benchmark prediction.name = 'Prediction' print('MLR beta of asset2:', mlr.params[1], '\nMLR beta of S&P 500:', mlr.params[2]) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} # Plot the three variables along with the prediction given by the MLR asset1.plot(label=symbol1) asset2.plot(label=symbol2) benchmark.plot(label=market1) prediction.plot(color='r', label='Prediction') plt.title(symbol1 + ' & ' + symbol2 + ' Price') plt.xlabel('Price') plt.legend(bbox_to_anchor=(1,1), loc=2) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} # Plot only the dependent variable and the prediction to get a closer look asset1.plot(label = symbol1) prediction.plot(color='y') plt.xlabel('Price') plt.title(symbol1 + ' Price') plt.legend() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} mlr.summary() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # Necessary Packages import numpy as np import pandas as pd import matplotlib.pyplot as plt # Model and data processing packages from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.model_selection import GridSearchCV # - 
np.random.seed(234) df = pd.read_csv('diabetes.csv') df.head(5) df.shape X = df.drop('Outcome', axis=1).values y = df['Outcome'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y) neighbors = np.arange(1,20) train_accuracy = np.empty(len(neighbors)) test_accuracy = np.empty(len(neighbors)) print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) for i,k in enumerate(neighbors): knn = KNeighborsClassifier(n_neighbors=k) knn.fit(X_train, y_train) train_accuracy[i] = knn.score(X_train, y_train) test_accuracy[i] = knn.score(X_test, y_test) plt.plot(neighbors, test_accuracy, label='Testing Accuracy') plt.plot(neighbors, train_accuracy, label='Training Accuracy') plt.legend() plt.xlabel('Number of Neighbors') plt.ylabel('Accuracy') plt.title('KNN Varying Number of Neighbors') plt.show() n_neighbors=18 knn = KNeighborsClassifier(n_neighbors) knn.fit(X_train, y_train) knn.score(X_test, y_test) y_pred = knn.predict(X_test) confusion_matrix(y_test, y_pred) pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True) print(classification_report(y_test, y_pred)) y_pred_proba = knn.predict_proba(X_test)[:,1] # FPR = False Positive Rate, TPR = True Positive Rate fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba) plt.plot([0,1],[0,1], label='k--') plt.plot(fpr, tpr, label='KNN') plt.legend() plt.xlabel('fpr') plt.ylabel('tpr') plt.title(f'KNN (n_neighbors={n_neighbors}) ROC Curve') plt.show() roc_auc_score(y_test, y_pred_proba) param_grid = {'n_neighbors':np.arange(1,100)} knn = KNeighborsClassifier() knn_cv = GridSearchCV(knn, param_grid, cv=5) knn_cv.fit(X, y) knn_cv.best_score_ knn_cv.best_params_ n_neighbors=14 knn = KNeighborsClassifier(n_neighbors) knn.fit(X_train, y_train) knn.score(X_test, y_test) y_pred = knn.predict(X_test) confusion_matrix(y_test, y_pred) pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted'], margins=True) print(classification_report(y_test, y_pred)) y_pred_proba = knn.predict_proba(X_test)[:,1] y_pred_proba = knn.predict_proba(X_test)[:,1] fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba) plt.plot([0,1],[0,1], 'k--') plt.plot(fpr, tpr, label='KNN') plt.xlabel('fpr') plt.ylabel('tpr') plt.title(f'KNN (n_neighbors={n_neighbors}) ROC Curve') plt.show() roc_auc_score(y_test, y_pred_proba) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## OpenCV Basics # ### Loading and displaying a single image # **Importing the OpenCV library** # # **Loading the image into a variable** # # **Displaying the image through a window** #Import OpenCV library import cv2 #Read and load the image to img variable img = cv2.imread('5X5_empty_grid.png') #Resizing the image to required ratio resize = cv2.resize(img, (240, 240)) #Show the image through the window cv2.imshow("Output" , resize) #Image stays on the window until a key on the keyboard is pressed cv2.waitKey(0) # ### Loading and displaying multiple images # **Importing the OpenCV library and the numpy library** # # **Loading the images into variables** # # **Concatenate images both horizontally and vertically** # # **Displaying concatenated images through a single window** #Import OpenCV library import cv2 #Import numpy library import numpy as np #Read and load the first image to img1 variable img1 = cv2.imread('5X5_empty_grid.png') #Read and load the second 
image to img2 variable img2 = cv2.imread('5X5_colored_grid.png') #Resize both the images to the required ratio resize1 = cv2.resize(img1 , (240, 240)) resize2 = cv2.resize(img2 , (240, 240)) #Concatanate image horizontally hori_concatenated = np.concatenate((resize1,resize2),axis=1) #Concatanate image vertically verti_concatenated = np.concatenate((resize1,resize2),axis=0) #Display concatenated images cv2.imshow('Horizontally_Concatenated' , hori_concatenated) cv2.imshow('Vertically_Concatenated' , verti_concatenated) cv2.waitKey(0) # + """ Typcasting w. Strings """ # Convert these variables into strings and then back to their original data types. Print out each result as well as its data type. What do you notice about the last one? five = 5 zero = 0 neg_8 = -8 T = True F = False five = str(five) zero = str(zero) neg_8 = str(neg_8) T = str(T) F = str(F) print(five, type(five)) # '5' print(zero, type(zero)) # '0' print(neg_8, type(neg_8)) # '-8' print(T, type(T)) # 'True' print(F, type(F)) # 'False' five = int(five) zero = int(zero) neg_8 = int(neg_8) T = bool(T) F = bool(F) print(five, type(five)) # 5 print(zero, type(zero)) # 0 print(neg_8, type(neg_8)) # -8 print(T, type(T)) # True print(F, type(F)) # False # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + import gensim import gensim.corpora as corpora from gensim.utils import simple_preprocess from gensim.models import CoherenceModel from gensim.models.ldamulticore import LdaMulticore def calc_topic_coherence(df): def gen_words(texts): final = [] for text in texts: new = gensim.utils.simple_preprocess(text, deacc=True) final.append(new) return (final) texts = gen_words(df['description']) num_topics = 1 id2word = corpora.Dictionary(texts) corpus = [id2word.doc2bow(text) for text in texts] model = LdaMulticore(corpus=corpus,id2word = id2word, num_topics = num_topics, alpha=.1, eta=0.1, random_state = 42) print('Model created') coherencemodel = CoherenceModel(model = model, texts = texts, dictionary = id2word, coherence = 'c_v') print("Topic coherence: ",coherencemodel.get_coherence()) coherence_value = coherencemodel.get_coherence() return coherence_value # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="w4qMIrSBipQC" # # Tipos de Datos Básicos # + [markdown] colab_type="text" id="b79w139Ji0Me" # ## Números # # - **Enteros**: `20`, `12`, `-5`, `0`, `10920398983497549` # - **Coma Flotante**: `3.5`, `-0.7`, `9837948.3284738`, `-19232343.845798` # - **Complejos**: `2+3j`, `-1+8j`, `-67j` # # Pueden ser de cualquier tamaño, no hay límite. Tanto positivos como negativos. E inclusive podemos utilizar **Notación científica**: `2e3`, `10e-8`, `1.23e-5`. # # Recordemos que la coma decimal se escribe con `.`. Y podemos utilizar el símbolo `_` como separador de miles. 
Por ejemplo, podemos guardar el valor 123.000 en la variable x con la sentencia: # # - `x = 123000` # # o con la sentencia: # # - `x = 123_000` # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="spR8wfRxiHSD" outputId="e043b3a0-aee0-4664-ef0e-32508241b8af" 20 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8nVsc2p1kDIh" outputId="de82fd18-dfc8-4750-872f-0221a55512f9" 9837948.3284738 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="KSc1haJfkSjZ" outputId="004e67cf-27b0-4ad4-d8d6-dbaf409b4fca" -67j # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="O1j_wG9RkWdP" outputId="f55e2e8e-bc43-48e4-8b87-050584996e0e" 2e3 # + [markdown] colab_type="text" id="MgLCrX3wkaAq" # Podemos usar la función `type()` para que nos diga qué tipo de dato es el que estamos usando. Según el tipo de dato que tengamos la función nos dirá: # - `int` para los números enteros (del inglés "integer") # - `float` para los números en coma flotante (del inglés "floating point") # - `complex` para los números complejos # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="O-Pa-ZwqkjUp" outputId="a5248c0b-7b57-48e6-c66c-33dd96f616bb" type(20) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="bLSZ0a-0khIm" outputId="50b3991c-00f5-43cb-f934-48a0076ed7f5" type(9837948.3284738) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="yn8gM43Bkr3i" outputId="cff01b31-64b6-4d31-c612-0d7e29ee2a2f" type(-67j) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="O4tEZwiplKyQ" outputId="5bae9855-094b-4c90-e466-7a54571f2e12" type(2e3) # + [markdown] colab_type="text" id="BrkxgpTGlrPI" # Jugá un poco con los números y la función `type()`. Definí una variable `numero` y cambiale su valor para representar diferentes tipos. 
Usá `type(numero)` para ver si obtenés los tipos de números que estás queriendo: # + colab={} colab_type="code" id="oiRSq5uSlxT_" # + [markdown] colab_type="text" id="BJ8fQ5sFnrei" # ## Operadores aritméticos # # - Suma: `+` # - Resta: `-` # - Multiplicación: `*` # - División: `/` # - Módulo: `%` # - División Entera: `//` # - Potenciación: `**` # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="KOrhYULIoOyj" outputId="d70dffc4-b8d0-4e94-c065-18978f5be0ed" a = 5 a + 2 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8Aai_shYoiRV" outputId="147f6bcc-cba9-4847-e3e3-07b1f039719c" a * 2 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mb_zD6nColGt" outputId="2a5941ce-2e45-44ad-b550-a6c209d0cb05" a ** 2 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Azxzazr4ooVx" outputId="4c5862ec-6e6b-4bf7-8356-ab344948daf2" 4.5 * 8 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="SGy7RTORotcc" outputId="36f83267-787b-484a-9755-9cdb5f42a2ba" 10 / 3 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="moOggXu6oxFZ" outputId="dd858eb2-17c7-496e-f31c-2574925bc1e7" 10 % 3 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="_jTpUOAMowWU" outputId="af16756c-943d-4ce7-8511-c1ba8c8b16c0" 10 // 3 # + [markdown] colab_type="text" id="QQQEVWohpLpG" # ### Precedencia de los operadores # # Los operadores aritméticos respetan la regla PEMDAS de precedencia. Es decir, se resuelven con el siguiente orden: # # 1. **P**aréntesis # 2. **E**xponciación (potencia) # 3. **M**ultiplicación (producto) # 4. **D**ivisión (cociente) # 5. **A**dición (suma) # 6. **S**ubstracción (resta) # # Muchos terminamos usando Python como nuestra calculadora personal, te animás? # + colab={} colab_type="code" id="Lc7itMicpR4c" # + [markdown] colab_type="text" id="amtT-wZ1l3_h" # ## Cadenas de Texto # # Las cadenas de texto en Python se definen entre comillas dobles `"` o comillas simples `'`. Por eso en la función `print()` pusimos como argumento `"Hola mundo!"`, también es válido poner `'Hola mundo!'`. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="hcETRNtlmhJo" outputId="067ae633-b875-4aa6-ab59-5b5cc3227b53" print('Hola mundo') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="5qPiAg8Up2eC" outputId="14f7ad7e-b202-4080-ea75-cbb36d075aa2" type("Hola mundo!") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="9sShxxQwp5-d" outputId="d63707f6-32ae-458e-bab4-3f46b23877f6" type('Hola mundo') # + [markdown] colab_type="text" id="m6qAiVrbmjQn" # Si a `type` le pasamos una cadena de texto nos devuelve `str` (del inglés, *String*). # # Si queremos poner cadenas de texto multilínea, podemos utilizar triple comilla simple `'''` o triple comilla doble `""""`. # Lo que no podemos hacer es mezclar las comillas que usamos. Si empezamos con `"`, terminamos con `"`. Lo mismo para `'`. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="sSFIT8zknfPO" outputId="b9e6f25d-fe05-4ccd-83c2-4b7df70a6719" parrafo = """En un lugar de la Mancha, de cuyo nombre... Es, pues, de saber que este sobredicho hidalgo, los ratos... Con estas razones perdía el pobre caballero el juicio, y... 
""" parrafo.istitle() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="eshm_bqmTDCa" outputId="9890df4a-e56c-4e85-e77a-4994c94b8273" titulo = "Don Quijote" titulo.istitle() # + [markdown] colab_type="text" id="Ee7QL1_hnjs1" # Ahora es tu turno... # # + colab={} colab_type="code" id="0jFNJ0R5nigi" # + [markdown] colab_type="text" id="UXnTS-L8pvan" # Otra de las funciones que viene con Python, además de `print` y `type` que ya las hemos visto, es la función `len` que devuelve la longitud del objeto que le pasemos como argumento. En el caso de las cadenas de texto nos devolverá la longitud de la cadena, es decir, la cantidad de caracteres de la misma. # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iDBfmNNtu_XU" outputId="d0f4ccc0-08de-4465-dc0d-3819fbbe83f9" len(titulo) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="XafXDoQkvD7l" outputId="be5223f4-9516-44ec-ec85-a2ab4cc15f81" len(parrafo) # + [markdown] colab_type="text" id="_WV1cf5VvJH2" # Escribí tu nombre en una variable y hacé que Python calcule la longitud del mismo: # + colab={} colab_type="code" id="L5m_ClHCvPcV" # + [markdown] colab_type="text" id="Ok1iTFllpxtD" # ### Métodos en cadenas # # Las cadenas de texto admiten varios métodos que se pueden llamar con un punto a continuación de la cadena. Por ejemplo: el método `upper()` devuelve en mayúsculas el texto. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3F1-SN6sqgVP" outputId="6371c815-381b-4a18-afb0-bdc074f3e337" "En un lugar de la Mancha, de cuyo nombre...".upper() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="vmMJSvHSqsMj" outputId="c719d708-26ed-469f-b126-e2bec5eb42ec" x = "En un lugar de la Mancha, de cuyo nombre..." x.upper() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iqDG0MxKRq7S" outputId="97ba1ed5-ce5c-4500-fbad-3223489a7ba7" x.isdigit() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3QUVIaNMRv0Y" outputId="1bb89755-750a-4539-e56e-eec339c39544" x.split(',') # + [markdown] colab_type="text" id="x_wnNkl1qyJ6" # Ya sea que lo ejecutemos sobre la misma cadena o si la tenemos guardada en una variable, los métodos de cadenas se pueden ejecutar indistintamente. # # Si queremos ver qué métodos tenemos disponibles, podemos usar la funcón `dir` sobre una cadena. # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="H_njc_XlrH51" outputId="685714e7-9675-423f-8fef-27edfb6ae5c7" dir(x) # + [markdown] colab_type="text" id="gD5U8hRwrNDr" # Intentá descifrar qué hace cada método sobre la cadena que tenemos almacenada en `x`: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="l3Y9BteDrVXw" outputId="b5d3eddc-aa33-4be1-f00d-68e90f7844f2" x.lower() # + [markdown] colab_type="text" id="m2J1Jwp6JvtB" # Para no andar jugando a los detectives Python nos ofrece además la función `help` que nos informa con una breve ayuda sobre el método que queremos saber: # # > Fijate que luego de `lower` no ponemos los `()`. Ya que si ponemos los paréntesis Python primero ejecutará el método `lower()` sobre la variable `x` y eso es un `str`. 
# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="vWrAybkYJvO2" outputId="e8b884c2-1ca4-4588-cdec-b5e31af361c1" help(x.lower) # + [markdown] colab_type="text" id="VfP8WtChpXOW" # ### Formateando cadenas de texto # # Si bien hay varias formas de darle formato a las cadenas de texto, lo más práctico es usar las `f-string`. Anteponemos una letra `f` al comienzo de la cadena y luego dentro de la misma utilizamos las `{}` para poner el valor o la expresión que queremos usar. # # Guardemos 2 números en las variables `numero1` y `numero2`, realicemos la suma y usemos una `f-string` para mostrar los 3 valores: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7Yu0Trm8sTdL" outputId="7d81c509-a62f-4f0e-cb87-37b7221fc040" numero1 = 18 numero2 = 34 resultado = numero1 + numero2 print(f"El resultado de la suma entre {numero1} y {numero2} da como resultado {resultado}") # + [markdown] colab_type="text" id="XY1tQtumsnOb" # Este es TU momento... # + colab={} colab_type="code" id="QXpTQ7IrsqNu" # + [markdown] colab_type="text" id="4MQMXEX9stxY" # ## Ingreso de datos # # La función `input()` nos sirve para aceptar el ingreso de valores por teclado. Dentro de los paréntesis podemos escribir una cadena de texto para que el usuario visualice. La función devuelve el valor ingresado antes de presionar ENTER. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="5No-i3VhtC_F" outputId="093128b0-2cb8-4653-ed60-1d83939c0db0" x = input('Ingrese un número: ') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="T47D57EEtIND" outputId="077d4afd-9dd5-4662-8763-7fa396ece83b" x # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cu23IXGLtJ1K" outputId="bb6ca90d-e307-472e-b0de-2e1e85e2e6c3" type(x) # + [markdown] colab_type="text" id="sXERgC8WtLvw" # La función `input` SIEMPRE nos va a devolver una cadena de texto. Si queremos que ese valor sea un número, tenemos que convertirlo con `int()`, `float()` o `complex()` según sea nuestra intención. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="zzBxiAuYtciv" outputId="4e5f8a4a-2339-46af-e6e0-fc0c1e1cec5f" int(x) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="B75iIiiRtfNq" outputId="2a2791a6-544b-4b19-914f-e7474514cf71" float(x) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="zX_pJEsOthwD" outputId="f68125ae-082b-47ad-a6e7-c79fbe225b24" complex(x) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="A0CTr71otqTD" outputId="07a58ad8-1d8e-4338-dc1a-02f09ddbf023" x # + [markdown] colab_type="text" id="UJwQd92ytkWQ" # Notemos que acá nunca volvemos a guardar el valor de `x`, lo que se suele hacer es directamente llamar a la función que queremos y luego llamar dentro a `input()`. 
# # Por ejemplo, si quisieramos tener en `x` un entero: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="gxLB-GV0t3bX" outputId="4c173149-8976-49fd-eb4d-14be533a3957" x = int(input("Ingrese un número: ")) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="J-LlFb1ut-DM" outputId="6021a50e-7bfa-43dc-c998-460423310143" x # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0EhRXjZrt9U-" outputId="ecc873d0-194b-4152-9b44-8d302ef5a2b8" type(x) # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="NyGy_nSKTYJ3" outputId="ea411c54-1639-4688-f48c-64d0590b8d1c" input("Ingrese su nombre: ") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="sRrSlZACTpVm" outputId="f40956ff-52a0-4949-d410-dd22e3511fdf" name = input("Ingrese su nombre: ") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="6ILEo_AsTwlE" outputId="fe3805fd-e1b0-46ba-ca6a-9d7d5d662267" name # + [markdown] colab_type="text" id="OSWZVQGzuCkF" # Si ingresamos un valor que la función no puede convertir, nos tirará un error. # + colab={"base_uri": "https://localhost:8080/", "height": 185} colab_type="code" id="TCwq95kIuGjJ" outputId="632d0262-0963-4be1-c6f4-bfc05470cf12" x = int(input("Ingrese un número: ")) # + [markdown] colab_type="text" id="yOn1-lk9vLA5" # ### Ejercicio: # # Pedile al usuario su nombre, guardalo en una variable y luego imprimí un mensaje que salude al usuario: # + colab={} colab_type="code" id="XJisSBTOvX4-" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Description of variables ([source](https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-features/)): # # - ***key***: The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation . E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1. # # - ***mode***: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0. # # - ***time_signature***: An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). # # - ***tempo***: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration. # # - ***acousticness***: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic. # # - ***danceability***: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable. # # - ***energy***: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. 
# # - ***instrumentalness***: Predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0. # # - ***liveness***: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live. # # - ***loudness***: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typical range between -60 and 0 db. # # - ***speechiness***: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks. # # - ***valence***: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry). 
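# As a quick illustration of how these audio-feature columns are typically used together, here is a small pandas sketch. The `tracks` DataFrame below is a made-up stand-in (it is not loaded anywhere in this notebook); its columns just follow the Spotify feature names described above.

# +
import pandas as pd

tracks = pd.DataFrame({
    "danceability": [0.82, 0.35, 0.67],
    "valence":      [0.90, 0.20, 0.55],
    "tempo":        [120.0, 78.0, 104.0],
})

# Select upbeat, danceable tracks: high danceability and high valence.
upbeat = tracks[(tracks["danceability"] > 0.7) & (tracks["valence"] > 0.6)]
upbeat
# -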
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # get_ipython().magic('matplotlib notebook') get_ipython().magic('matplotlib inline') get_ipython().magic('load_ext autoreload') get_ipython().magic('autoreload 2') #___________________________________________________________________________________________________________________ import os import tripyview as tpv import numpy as np import xarray as xr # + tags=["parameters"] # Parameters # mesh_path ='/work/ollie/projects/clidyn/FESOM2/meshes/core2/' mesh_path ='/work/ollie/pscholz/mesh_fesom2.0/core2_srt_dep@node/' save_path = None #'~/figures/test_papermill/' save_fname= None #_____________________________________________________________________________________ which_cycl= 3 which_mode= 'hslice' #_____________________________________________________________________________________ input_paths= list() input_paths.append('/home/ollie/pscholz/results/trr181_tke_ctrl_ck0.1/') input_paths.append('/home/ollie/pscholz/results/trr181_tke+idemix_jayne09_ck0.1/') input_paths.append('/home/ollie/pscholz/results/trr181_tke+idemix_nycander05_ck0.1/') input_paths.append('/home/ollie/pscholz/results/trr181_tke+idemix_stormtide2_ck0.1/') input_names= list() input_names.append('TKE ck0.1') input_names.append('TKE+idemix,ck0.1,jayne') input_names.append('TKE+idemix,ck0.1,nycand') input_names.append('TKE+idemix,ck0.1,stormt') vname = 'Kv*N2' year = [1979,2019] mon, day, record, box, depth = None, None, None, None, 1000 #_____________________________________________________________________________________ # do anomaly plots in case ref_path is not None ref_path = None #'/home/ollie/pscholz/results/trr181_tke_ctrl_ck0.1/' # None ref_name = None #'TKE, ck=0.1' # None ref_year = None # [2009,2019] ref_mon, ref_day, ref_record = None, None, None #_____________________________________________________________________________________ do_clim = True which_clim= 'phc3' clim_path = '/work/ollie/pscholz/INIT_HYDRO/phc3.0/phc3.0_annual.nc' #_____________________________________________________________________________________ cstr = 'blue2red' cnum = 20 cref = None crange, cmin, cmax, cfac, climit = None, None, None, None, None #_____________________________________________________________________________________ ncolumn = 3 do_rescale= None #'log10' which_dpi = 300 proj = 'pc' # + #___LOAD FESOM2 MESH___________________________________________________________________________________ mesh=tpv.load_mesh_fesom2(mesh_path, do_rot='None', focus=0, do_info=True, do_pickle=True, do_earea=True, do_narea=True, do_eresol=[True,'mean'], do_nresol=[True,'eresol']) #______________________________________________________________________________________________________ if which_cycl is not None: for ii,ipath in enumerate(input_paths): input_paths[ii] = os.path.join(ipath,'{:d}/'.format(which_cycl)) print(ii, input_paths[ii]) if ref_path is not None: ref_path = os.path.join(ref_path,'{:d}/'.format(which_cycl)) print('R', ref_path) #______________________________________________________________________________________________________ cinfo=dict({'cstr':cstr, 'cnum':cnum}) if crange is not None: cinfo['crange']=crange if cmin is not None: cinfo['cmin' ]=cmin if cmax is not None: cinfo['cmax' ]=cmax if cref is not None: cinfo['cref' ]=cref if cfac is not None: cinfo['cfac' ]=cfac if climit is not None: 
cinfo['climit']=climit if ref_path is not None: cinfo['cref' ]=0.0 #______________________________________________________________________________________________________ # in case of diff plots if ref_path is not None: if ref_year is None: ref_year = year if ref_mon is None: ref_mon = mon if ref_record is None: ref_record = record # + #___LOAD FESOM2 REFERENCE DATA________________________________________________________________________ if ref_path is not None: print(ref_path) if depth =='bottom': data_ref = tpv.load_data_fesom2(mesh, ref_path, vname=vname, year=ref_year, mon=ref_mon, day=ref_day, record=ref_record, depth=None, descript=ref_name, do_info=False) data_ref = data_ref.isel(nz=xr.DataArray(mesh.n_iz, dims='nod2'), nod2=xr.DataArray(range(0,mesh.n2dn), dims='nod2')) else: if not vname=='Kv*N2': data_ref = tpv.load_data_fesom2(mesh, ref_path, vname=vname, year=ref_year, mon=ref_mon, day=ref_day, record=ref_record, depth=depth, descript=ref_name, do_info=False) else: data_ref = tpv.load_data_fesom2(mesh, ref_path, vname='Kv', year=ref_year, mon=ref_mon, day=ref_day, record=ref_record, depth=depth, descript=ref_name, do_info=False, ) data2_ref = tpv.load_data_fesom2(mesh, ref_path, vname='N2', year=ref_year, mon=ref_mon, day=ref_day, record=ref_record, depth=depth, descript=ref_name, do_info=False, ) data_ref['Kv'].data = data_ref['Kv'].data * data2_ref['N2'].data data_ref = data_ref.rename(dict({'Kv':'Kv*N2'})) data_ref['Kv*N2'].attrs['units'], data_ref['Kv*N2'].attrs['description'], data_ref['Kv*N2'].attrs['long_name'] = 'm^2/s^3', 'Kv * N^2', 'Kv * N^2' del(data2_ref) #___LOAD FESOM2 DATA___________________________________________________________________________________ data_list = list() for datapath,descript in zip(input_paths, input_names): print(datapath) if depth =='bottom': data = tpv.load_data_fesom2(mesh, datapath, vname=vname, year=year, mon=mon, day=day, record=record, depth=None, descript=descript, do_info=False) data = data.isel(nz=xr.DataArray(mesh.n_iz, dims='nod2'), nod2=xr.DataArray(range(0,mesh.n2dn), dims='nod2')) else: if not vname=='Kv*N2': data = tpv.load_data_fesom2(mesh, datapath, vname=vname, year=year, mon=mon, day=day, record=record, depth=depth, descript=descript, do_info=False) else: data = tpv.load_data_fesom2(mesh, datapath, vname='Kv', year=year, mon=mon, day=day, record=record, depth=depth, descript=descript, do_info=False) data2 = tpv.load_data_fesom2(mesh, datapath, vname='N2', year=year, mon=mon, day=day, record=record, depth=depth, descript=descript, do_info=False) data['Kv'].data = data['Kv'].data * data2['N2'].data data = data.rename(dict({'Kv':'Kv*N2'})) data['Kv*N2'].attrs['units'], data['Kv*N2'].attrs['description'], data['Kv*N2'].attrs['long_name'] = 'm^2/s^3', 'Kv * N^2', 'Kv * N^2' del(data2) #__________________________________________________________________________________________________ if ref_path is not None: data_list.append(tpv.do_anomaly(data, data_ref)) else: data_list.append(data) del(data) if ref_path is not None: del(data_ref) #___APPEND ABS CLIMATOLOGY_____________________________________________________________________________ if (vname in ['temp', 'salt', 'pdens'] or 'sigma' in vname) and (depth is not 'bottom') and do_clim and (ref_path is None): clim_vname = vname if vname=='temp' and which_clim.lower()=='woa18': clim_vname = 't00an1' elif vname=='salt' and which_clim.lower()=='woa18': clim_vname = 's00an1' clim = tpv.load_climatology(mesh, clim_path, clim_vname, depth=depth) data_list.append(clim) del(clim) # - 
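# The cell tagged `parameters` above follows the papermill convention, so this notebook can presumably be re-executed with different meshes, experiments or variables injected from outside. A minimal sketch of such a run, assuming papermill is installed; the notebook filename used here is hypothetical:

# +
import papermill as pm

pm.execute_notebook(
    "tripyview_hslice.ipynb",              # hypothetical name of this notebook
    "tripyview_hslice_kvn2_1000m.ipynb",   # executed copy with the injected parameters
    parameters=dict(vname="Kv*N2", depth=1000, year=[1979, 2019]),
)
# -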
#___PLOT FESOM2 DATA___________________________________________________________________________________ spath = save_path sname = vname slabel = data_list[0][sname].attrs['str_lsave'] if spath is not None: spath = os.path.join(spath,'{}_{}_{}.png'.format(which_mode, sname, slabel)) nrow = np.ceil(len(data_list)/ncolumn).astype('int') if save_fname is not None: spath = save_fname pos_gap = [0.005, 0.04] if proj in ['nps, sps']:pos_gap = [0.005, 0.035] # cinfo = dict({'cstr':'blue2red', 'cref':0, 'cfac':0.5 }) fig, ax, cbar = tpv.plot_hslice(mesh, data_list, cinfo=cinfo, box=box, n_rc=[nrow, ncolumn], figsize=[ncolumn*7, nrow*3.5], proj = proj, do_lsmask='fesom', do_rescale=do_rescale, title='descript', pos_gap=pos_gap, pos_extend=[0.03, 0.03, 0.905, 0.975], do_save=spath, save_dpi=which_dpi) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cerespp import glob files = glob.glob('HD10700*.fits') files lines = ['CaHK', 'Ha', 'HeI', 'NaID1D2'] cerespp.line_plot_from_file(files[0], lines, './', 'HD10700') cerespp.get_activities(files, './HD10700_activities.dat') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import os import time from selenium import webdriver from selenium.webdriver.common.keys import Keys import re import requests # In this first function list_of_towns() we will be introducing our headless webdriver for Chrome. What that means is in the background, without actually opening up a window for you to see, Selenium will go to the Virginia Municode page https://library.municode.com/VA. From there, it will wait 5 seconds for the page to load. This is an essential step (and why Selenium is such a great webscraping tool!) for sites that generate content via javascript instead of a traditional html/css webpage. Once the 5 seconds has passed, the Selenium scraper looks at the xpath we have designated. An xpath is a precise location on a webpage, like a specific button or div. This location is extracted and saved down into a variable 'data' so that the element we found at the xpath destination can be used. In this first function, we are interested in extracting the links to all of the various towns and counties within Virginia that are listed on Municode. The webpage looks like this: # ![image.png](attachment:855844e9-4768-4f6f-89e6-43da0e6f6c53.png) # Each of the towns and counties listed on this page are buttons that link to their respective pages. What this function does is take all of those links and return a single column dataframe. Finally, we split every value in that column to extract the town/county ID. For example, the link for Accomack County is https://library.municode.com/VA/accomack_county, which we then extract "accomack_county" from and save in a second column in the same dataframe. 
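# One version note before the functions: the `find_elements_by_xpath`-style helpers used throughout this notebook come from Selenium 3 and were removed in current Selenium 4 releases. If you are on Selenium 4, the equivalent calls use the `By` locator, roughly like this (a sketch only, not a change to the functions below):

# +
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)  # Selenium 4.6+ resolves chromedriver automatically

driver.implicitly_wait(5)
driver.get("https://library.municode.com/VA")

# Selenium 4 spelling of driver.find_elements_by_xpath(...)
data = driver.find_elements(By.XPATH, "/html/body/div[1]/div[2]/ui-view/div[2]/section/div/div")
# -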
def list_of_towns():
    # silence
    options = webdriver.ChromeOptions()
    options.add_argument("headless")
    driver = webdriver.Chrome('/Users/holdenbruce/Downloads/chromedriver3', options=options)

    # set implicit wait time so that apis/javascript load before we scrape
    driver.implicitly_wait(5)  # seconds

    # url of the county
    url = f"https://library.municode.com/VA"
    # headers to let them know who i am (note: not actually passed to the Selenium driver)
    headers = {'user-agent': 'class project ()'}

    # xpath of the table in the webpage created by javascript
    xpathHOME = "/html/body/div[1]/div[2]/ui-view/div[2]/section/div/div"

    driver.get(url)

    # use xpath to get to the table
    data = driver.find_elements_by_xpath(xpathHOME)
    # links = driver.find_elements_by_tag_name("a")

    # add a short delay so the javascript-rendered content has loaded
    time.sleep(2)

    # use outerHTML to maintain the html/css/javascript code pulled from the webpage
    html = data[0].get_attribute("outerHTML")
    r1 = re.findall(r"https://library.municode.com/VA/[\w\.-]+", html)

    # convert to dataframe and drop duplicates
    towns_list = pd.DataFrame(r1)
    towns_list = towns_list.drop_duplicates().reset_index(drop=True)
    return towns_list


town_urls = list_of_towns()
town_urls = town_urls.rename(columns={0: "urls"})
splits = town_urls['urls'].str.split("/")
for x in range(len(splits)):
    town_urls.loc[x, 'county'] = splits[x][-1]

town_urls

# In the second function, identify_comparison_table_URL_part1(), we pass in a url like https://library.municode.com/VA/accomack_county. These individual town/county pages look like this:
#
# ![image.png](attachment:44194c9e-5710-4088-84f3-ceecbd8427b7.png)
#
# What we're interested in here is the menu bar on the left-hand side of the page. You will see in the code below a series of try/except statements that look for different id tags. These tags represent "State Law Reference Tables" and are referred to as STLARETA, STRETA, COOR_STRETA, and STATE_LAW_REFERENCE_TABLE. There is a lot of information on these town/county pages, but this is the only thing we care about for this project, since it contains a table that compares the law codes for the State of Virginia to the law codes for that specific town/county. This is useful for CodeForCharlottesville and the LAJC Virginia Criminal Expungement initiative because it allows us to create a mapping from the various law codes used by towns and counties across Virginia to the law codes for the State. Why is this significant? The new laws that have relaxed Virginia's draconian criminal expungement policies require this mapping in order for offenders to have their crimes expunged from the records. This is challenging because each town/county has different record-keeping practices, which complicates the process of automatically expunging the records of people whom the State of Virginia has deemed eligible for expungement. On top of this challenge, there are clerical (human) errors in some of the record keeping that make this matching even more difficult. We will explore those later on in the Notebook.
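# One implementation note on the next function: it spells out one try/except block per candidate id. An equivalent, more compact way to write the same lookup (shown only as an illustrative sketch using the same Selenium 3 API; the notebook keeps the explicit blocks) is to loop over the candidate ids:

# +
CANDIDATE_IDS = ["STLARETA", "STRETA", "THCH_STRETA", "STATE_LAW_REFERENCE_TABLE", "COOR_STRETA"]

def find_reference_table_url(driver):
    # Try each known 'State Law Reference Table' menu id until one resolves to a link.
    for node_id in CANDIDATE_IDS:
        try:
            ref_table = driver.find_element_by_xpath(f"//*[@id='genToc_{node_id}']/a[2]")
            return ref_table.get_attribute("href")
        except Exception:
            continue
    return None
# -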
# + def identify_comparison_table_URL_part1(url):# 'https://library.municode.com/VA/accomack_county'): options = webdriver.ChromeOptions() options.add_argument("--start-maximized") options.add_argument("--headless") options.add_argument("--window-size=1920,1080") driver = webdriver.Chrome('/Users/holdenbruce/Downloads/chromedriver3', options=options) # set implicit wait time so that apis/javascript load before we scrape driver.implicitly_wait(5) # seconds driver.get(url) driver.get_screenshot_as_file("headless.png") try: ref_table = driver.find_element_by_xpath(xpath="//*[@id='genToc_STLARETA']/a[2]") url = ref_table.get_attribute("href") return url except: pass try: ref_table = driver.find_element_by_xpath(xpath="//*[@id='genToc_STRETA']/a[2]") url = ref_table.get_attribute("href") return url except: pass try: ref_table = driver.find_element_by_xpath(xpath="//*[@id='genToc_THCH_STRETA']/a[2]") url = ref_table.get_attribute("href") return url except: pass try: ref_table = driver.find_element_by_xpath(xpath="//*[@id='genToc_STATE_LAW_REFERENCE_TABLE']/a[2]") url = ref_table.get_attribute("href") return url except: pass try: ref_table = driver.find_element_by_xpath(xpath="//*[@id='genToc_COOR_STRETA']/a[2]") url = ref_table.get_attribute("href") return url except: pass ex = town_urls.loc[0,'urls'] print("example url:",ex) # now let's see what useful URL the function can find for us identify_comparison_table_URL_part1(ex) # - # In this version() function, we are extracting the date the records were last updated. This should vary depending on the municipality and while it doesn't necessarily matter when records were last updated, it should be kept track of in case of incongruity. Our hope is that this little tracker will aid the future practitioner in their identification of outdated information. # + def version(url): options = webdriver.ChromeOptions() options.add_argument("--start-maximized") options.add_argument("--headless") options.add_argument("--window-size=1920,1080") driver = webdriver.Chrome('/Users/holdenbruce/Downloads/chromedriver3', options=options) # set implicit wait time so that apis/javascript load before we scrape driver.implicitly_wait(5) # seconds driver.get(url) driver.get_screenshot_as_file("headless.png") try: version = driver.find_element_by_xpath(xpath="//*[@id='codebankToggle']/button[text()]") #url = version.get_attribute() #print("version:",version) #print("version:",version.text) version = version.text version = version.split(":")[-1].strip()#.split("(")[0].strip() except: #print("couldn't get the version text") version = '' pass return version # using the same url example as above: v = version(ex) v # - # This scraper() function is the heavy hitter, where all the action really happens. By looping through the list of URLs we made in the first function, we pass in each URL alongside the corresponding town/county. That URL is passed into the identify_comparison_table_URL_part1() function so that we can find and extract the URL that leads to the State Law Reference Table we are looking for; these links typically look like this: https://library.municode.com/va/accomack_county/codes/code_of_ordinances?nodeId=STLARETA. As discussed previously, STLARETA is merely one of many nodeIDs that represent the same information in municode: the state law reference table. So, we extract the nodeID for this town/county and save it to a variable. 
Our goal here is to compare that nodeID against the list of nodeIDs we have identified as holding state law reference tables within Municode: STLARETA, STATE_LAW_REFERENCE_TABLE, COOR_STRETA, STRETA, THCH_STRETA. # - If the nodeID is not in this list, then we know that the link contains information we are not interested in, so we skip it. While that is unfortunate, the reality is that not every town and county has the information we need for this project. Some data is more available than other, and it just so happens that state law reference tables are not always accessible. # - However, if the nodeID is on the list, then we will go to the respective xpath locations on the webpage and extract the state law reference tables. The tables at each nodeID xpath destination has different characters (shape, placement), which we have accounted for in the scrape. In the final steps of this function, we extract the information from the tables, save them down to a dataframe, clean them up slightly, and then save them to a CSV file in a folder with all the other towns and counties we were able to find state law reference table data for. def scraper(url,town): # silence options = webdriver.ChromeOptions() options.add_argument("--start-maximized") options.add_argument("--headless") options.add_argument("--window-size=1920,1080") driver = webdriver.Chrome('/Users/holdenbruce/Downloads/chromedriver3', options=options) # set implicit wait time so that apis/javascript load before we scrape driver.implicitly_wait(5) # seconds print("URL passed in",url) # url of the county try: url = identify_comparison_table_URL_part1(url) print("URL we're targeting:",url) # define the nodeID by taking the last piece of the ULR after the "=" nodeID = url.split("=")[-1] # define the nodeIDs that we care about (discovered by team through manual check of Municode) nodeIDs = [ 'STLARETA', 'STATE_LAW_REFERENCE_TABLE', 'COOR_STRETA', 'STRETA', 'THCH_STRETA' ] # if nodeID not what we want, break if nodeID not in nodeIDs: print(f"pass: {town} empty") # define an empty dataframe df_empty = pd.DataFrame({'town' : []}) # now write that empty dataframe to CSV df.to_csv(f'countyCSV/{town}.csv', index=False) pass # if nodeID what we want, do this else: if nodeID == 'STLARETA': # xpath of the table in the webpage created by javascript xpath = "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div/div/div[2]/table" # # "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section[2]/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div/div/div[2] elif nodeID == 'STRETA': xpath = "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section[2]/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div[2]/div/div[2]/table" elif nodeID == 'STATE_LAW_REFERENCE_TABLE': xpath = "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section[2]/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div/div/div[2]/table" elif nodeID == 'COOR_STRETA': xpath = "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section[2]/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div/div/div[2]/table" elif nodeID == 'THCH_STRETA': xpath = "/html/body/div[1]/div[2]/ui-view/mcc-codes/div[2]/section[2]/div[1]/mcc-codes-content/div/div[2]/ul/li/mcc-codes-content-chunk/div/div/div[2]/div/div/div/div[2]/table" driver.get(url) # use xpath to get to the table 
            data = driver.find_elements_by_xpath(xpath)
            #print("data:",data)

            # use outerHTML to maintain the html/css/javascript code pulled from the webpage
            table = data[0].get_attribute("outerHTML")
            #print("table:",table)

            # https://stackoverflow.com/questions/41214702/parse-html-and-read-html-table-with-selenium-python
            # convert that table into a pandas dataframe
            df = pd.read_html(table)
            df = df[0]

            # rename the columns
            df = df.rename(columns={'Code of Virginia Section': "Virginia", "Section this Code": town})
            #print(url)

            # now write to CSV
            df.to_csv(f'countyCSV_March7/{town}.csv', index=False)
            return df
    except:
        print(f"could not scrape for {town}")
        pass

# The following few lines showcase examples of different state law reference table nodeIDs and what the end result looks like.

# ### STLARETA

scraper('https://library.municode.com/va/accomack_county',"accomack_county")

# ### STRETA

# +
# https://library.municode.com/va/amherst_county/codes/code_of_ordinances?nodeId=STRETA
scraper('https://library.municode.com/va/amherst_county',"amherst_county")
# -

# ### STATE_LAW_REFERENCE_TABLE

# +
# https://library.municode.com/va/bedford/codes/code_of_ordinances?nodeId=STATE_LAW_REFERENCE_TABLE
scraper('https://library.municode.com/va/bedford',"bedford")
# -

# ### COOR_STRETA

scraper('https://library.municode.com/va/colonial_beach',"colonial_beach")

# ### THCH_STRETA

# +
# https://library.municode.com/va/berryville/codes/code_of_ordinances?nodeId=THCH_STRETA
scraper('https://library.municode.com/va/berryville',"berryville")
# -

# In this step, we build the flagDF dataframe by calling the version() function for each locality, recording the town/county and the date its records were last updated. The dataframe is then exported to a CSV after all the towns and counties have run through the scraper.

# +
flagDF = pd.DataFrame()

for ind in town_urls.index:
    url = town_urls.loc[ind,'urls']
    #print(url)
    town = town_urls.loc[ind,'county']
    #print(town)

    flagDF.loc[ind,'locality'] = town
    flagDF.loc[ind,'version'] = version(url)

flagDF.to_csv('flagDF.csv', index=False)
flagDF
# -

# # Run it

# In the final step of this Jupyter Notebook, we run through all of the towns and counties on the Virginia Municode page. While there are more than 130 towns and counties in Virginia, we were only able to extract state law reference table data for 85 of them. The scraper() function is called on each URL in the list of URLs we generated in Step 1. On each iteration through the loop, we print out the results of the scraper or indicate that the town/county did not have the state law reference table we were looking for.
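# Once the loop in the next cell has finished, the per-locality CSVs written by
# scraper() can be stitched into a single comparison table keyed on the Code of
# Virginia section. The sketch below is illustrative only: combine_tables is a
# hypothetical helper, and it assumes the 'countyCSV_March7' folder and the
# 'Virginia' column name used above.

# +
import glob

def combine_tables(folder='countyCSV_March7'):
    combined = None
    for path in glob.glob(f'{folder}/*.csv'):
        df_town = pd.read_csv(path)
        if 'Virginia' not in df_town.columns:
            continue  # skip localities that produced an empty table
        # keep one row per code section so the outer merge stays flat
        df_town = df_town.drop_duplicates(subset='Virginia')
        combined = df_town if combined is None else combined.merge(df_town, on='Virginia', how='outer')
    return combined

# combined = combine_tables()
# combined.to_csv('state_law_comparison_table.csv', index=False)
# -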
# + import time start_time = time.time() flagDF = pd.DataFrame() # loop through all of the towns # for ind in list_of_towns.index: for ind in town_urls.index: url = town_urls.loc[ind,'urls'] town = town_urls.loc[ind,'county'] try: print(scraper(url,town)) except: print(f'This municipality {town} does not host the necessary documents online.') pass print("\n") print("--- %s seconds ---" % (time.time() - start_time)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Programming Language Classification IBM Code Pattern - Build Data set # Download pygithub, import libraries, and authenticate with your github credentials # !pip install pygithub # + import github import base64 import numpy as np from github import GithubException #Create a GitHub personal access token at https://github.com/settings/tokens login = 'YOURLOGIN' token = '' g = github.Github(login, token) repos = [] org = g.get_organization('IBM') #We'll look at code only with the following extensions. You can try this with other #programming languages and for binary classification (2, not 8, classes). targetlangs = ['.go','.java','.js','.m','.py','.sh','.swift','.xml'] #Get the list of repos at https://github.com/IBM. for repo in g.get_organization('IBM').get_repos(): repos.append(repo.name) # - # List repos repos # ## Download Directory # # This function recursively crawls through each file/directory starting at the given repo and path. # It is similar in functionality to the "find" linux command # Calls to GitHub API are wrapped in Try/Except blocks to ignore errors, there will be plenty of code to grab for our dataset def download_directory(repository, path): global dataset try: contents = repository.get_contents(path) for content in contents: if content.type == 'dir': download_directory(repository, content.path) else: if content.content: if len(str(content.name).split(".")) == 2: if any(substring == ("." + str(content.name).split(".")[1]) for substring in targetlangs): try: dataset.append([base64.b64decode(content.content),str(content.name).split(".")[1]]) except (GithubException, IOError) as exc: print 'Error processing %s: %s', content.path, exc except (GithubException, IOError) as exc: print "error in dir " # loop through each repo and grab the contents that match one of our target languages # they are being places in a multidimensional list with # dataset[i][0] being the code text and dataset[i][1] being the code's file extenstion # # # there are two loops because I would ofter receive a 403 RateLimitExceededException from github # # if this happens just pick up in the next repo, again plenty of code # + dataset = [] for i in range(len(repos[1:])): if i % 2 == 0: print "getting repo " + str(i) + " of " + str(len(repos[1:])) repo = org.get_repo(str(repos[i])) download_directory(repo, "") # - for i in range(len(repos[44:])): if i % 2 == 0: print "getting repo " + str(i+44) + " of " + str(len(repos[1:])) repo = org.get_repo(str(repos[i])) download_directory(repo, "") # ### Split the code into a training and a testing set. # You can see the length of the dataset and its contents. 
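# Before splitting, it can help to see how the collected examples are spread across
# the target extensions. A small illustrative check, assuming only the `dataset`
# list built by the loops above, where each entry is [code_text, extension]:

# +
from collections import Counter
print(Counter(extension for _, extension in dataset))
# -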
# Pickle the training and testing sets for the next notebook (mine are included in the github) import random data = dataset random.shuffle(data) train_data = data[:int((len(data)+1)*.80)] #Remaining 80% to training set test_data = data[int(len(data)*.80+1):] #Splits 20% data to test set len(dataset) print dataset[0][1] np.savez_compressed("githubtrainingdatacompressed.npz",np.array(train_data)) np.savez_compressed("githubtestdatacompressed.npz",np.array(test_data)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:aparent] # language: python # name: conda-env-aparent-py # --- # + import pandas as pd import scipy import numpy as np import scipy.sparse as sp import scipy.io as spio import pickle import os # + data = pd.read_csv('unprocessed_data/Alt_5SS_Tag_to_Seq_Map.csv',sep=',',index_col=0) c = spio.loadmat('unprocessed_data/Alt_5SS_Usage_All_Cells.mat') c_MCF7 = sp.csc_matrix(c['MCF7']) c_CHO = sp.csc_matrix(c['CHO']) c_HELA = sp.csc_matrix(c['HELA']) c_HEK = sp.csc_matrix(c['HEK']) # + #Sort data on counts total_c_MCF7 = np.ravel(c_MCF7.sum(axis=-1)) total_c_CHO = np.ravel(c_CHO.sum(axis=-1)) total_c_HELA = np.ravel(c_HELA.sum(axis=-1)) total_c_HEK = np.ravel(c_HEK.sum(axis=-1)) avg_c = (total_c_HEK + total_c_HELA + total_c_CHO + total_c_MCF7) / 4.0 sort_index = np.argsort(avg_c) data = data.iloc[sort_index].copy().reset_index(drop=True) c_MCF7 = c_MCF7[sort_index, :] c_CHO = c_CHO[sort_index, :] c_HELA = c_HELA[sort_index, :] c_HEK = c_HEK[sort_index, :] # + #Constant background sequence context up_background = 'gggcatcgacttcaaggaggacggcaacatcctggggcacaagctggagtacaactacaacagccacaacgtctatatcatggccgacaagcagaagaacggcatcaaagtgaacttcaagatccgccacaacatcgagg'.upper() dn_background = 'acagagtttccttatttgtctctgttgccggcttatatggacaagcatatcacagccatttatcggagcgcctccgtacacgctattatcggacgcctcgcgagatcaatacgtatacca'.upper() print('len(up_background) = ' + str(len(up_background))) print('len(dn_background) = ' + str(len(dn_background))) # + #Extend sequences and count matrices data['Padded_Seq'] = up_background + data['Seq'].str.slice(0,101) + dn_background padded_c_MCF7, padded_c_CHO, padded_c_HELA, padded_c_HEK = [ sp.csr_matrix( sp.hstack([ sp.csc_matrix((c_mat.shape[0], len(up_background))), c_mat[:, :101], sp.csc_matrix((c_mat.shape[0], len(dn_background))), sp.csc_matrix(np.array(c_mat[:, 303].todense()).reshape(-1, 1)) ]) ) for c_mat in [c_MCF7, c_CHO, c_HELA, c_HEK] ] print('padded_c_MCF7.shape = ' + str(padded_c_MCF7.shape)) print('padded_c_CHO.shape = ' + str(padded_c_CHO.shape)) print('padded_c_HELA.shape = ' + str(padded_c_HELA.shape)) print('padded_c_HEK.shape = ' + str(padded_c_HEK.shape)) # + #Filter each dataset on > 0 count hek_keep_index = np.nonzero(np.ravel(padded_c_HEK.sum(axis=-1)) > 0)[0] hela_keep_index = np.nonzero(np.ravel(padded_c_HELA.sum(axis=-1)) > 0)[0] mcf7_keep_index = np.nonzero(np.ravel(padded_c_MCF7.sum(axis=-1)) > 0)[0] cho_keep_index = np.nonzero(np.ravel(padded_c_CHO.sum(axis=-1)) > 0)[0] #HEK data data_hek_filtered = data.iloc[hek_keep_index].copy().reset_index(drop=True) c_hek_filtered = padded_c_HEK[hek_keep_index, :] #HELA data data_hela_filtered = data.iloc[hela_keep_index].copy().reset_index(drop=True) c_hela_filtered = padded_c_HELA[hela_keep_index, :] #MCF7 data data_mcf7_filtered = data.iloc[mcf7_keep_index].copy().reset_index(drop=True) c_mcf7_filtered = padded_c_MCF7[mcf7_keep_index, :] #CHO data data_cho_filtered = 
data.iloc[cho_keep_index].copy().reset_index(drop=True) c_cho_filtered = padded_c_CHO[cho_keep_index, :] print('len(data_hek_filtered) = ' + str(len(data_hek_filtered))) print('c_hek_filtered.shape = ' + str(c_hek_filtered.shape)) print('len(data_hela_filtered) = ' + str(len(data_hela_filtered))) print('c_hela_filtered.shape = ' + str(c_hela_filtered.shape)) print('len(data_mcf7_filtered) = ' + str(len(data_mcf7_filtered))) print('c_mcf7_filtered.shape = ' + str(c_mcf7_filtered.shape)) print('len(data_cho_filtered) = ' + str(len(data_cho_filtered))) print('c_cho_filtered.shape = ' + str(c_cho_filtered.shape)) # + #Get joined min dataset min_keep_index = (np.ravel(padded_c_HEK.sum(axis=-1)) > 0) min_keep_index = min_keep_index & (np.ravel(padded_c_HELA.sum(axis=-1)) > 0) min_keep_index = min_keep_index & (np.ravel(padded_c_MCF7.sum(axis=-1)) > 0) min_keep_index = min_keep_index & (np.ravel(padded_c_CHO.sum(axis=-1)) > 0) #MIN data data_min_filtered = data.iloc[min_keep_index].copy().reset_index(drop=True) c_hek_min_filtered = padded_c_HEK[min_keep_index, :] c_hela_min_filtered = padded_c_HELA[min_keep_index, :] c_mcf7_min_filtered = padded_c_MCF7[min_keep_index, :] c_cho_min_filtered = padded_c_CHO[min_keep_index, :] print('len(data_min_filtered) = ' + str(len(data_min_filtered))) print('c_hek_min_filtered.shape = ' + str(c_hek_min_filtered.shape)) print('c_hela_min_filtered.shape = ' + str(c_hela_min_filtered.shape)) print('c_mcf7_min_filtered.shape = ' + str(c_mcf7_min_filtered.shape)) print('c_cho_min_filtered.shape = ' + str(c_cho_min_filtered.shape)) # + #Pickle final datasets data_min_filtered = data_min_filtered.rename(columns={'Padded_Seq' : 'padded_seq'}) data_hek_filtered = data_hek_filtered.rename(columns={'Padded_Seq' : 'padded_seq'}) data_hela_filtered = data_hela_filtered.rename(columns={'Padded_Seq' : 'padded_seq'}) data_mcf7_filtered = data_mcf7_filtered.rename(columns={'Padded_Seq' : 'padded_seq'}) data_cho_filtered = data_cho_filtered.rename(columns={'Padded_Seq' : 'padded_seq'}) data_min_filtered = data_min_filtered[['padded_seq']] data_hek_filtered = data_hek_filtered[['padded_seq']] data_hela_filtered = data_hela_filtered[['padded_seq']] data_mcf7_filtered = data_mcf7_filtered[['padded_seq']] data_cho_filtered = data_cho_filtered[['padded_seq']] splicing_5ss_dict = { 'min_df' : data_min_filtered, 'hek_df' : data_hek_filtered, 'hela_df' : data_hela_filtered, 'mcf7_df' : data_mcf7_filtered, 'cho_df' : data_cho_filtered, 'hek_count' : c_hek_filtered, 'hela_count' : c_hela_filtered, 'mcf7_count' : c_mcf7_filtered, 'cho_count' : c_cho_filtered, 'min_hek_count' : c_hek_min_filtered, 'min_hela_count' : c_hela_min_filtered, 'min_mcf7_count' : c_mcf7_min_filtered, 'min_cho_count' : c_cho_min_filtered, } pickle.dump(splicing_5ss_dict, open('alt_5ss_data.pickle', 'wb')) # + #Align and consolidate a5ss data plasmid_dict = pickle.load(open('alt_5ss_data.pickle', 'rb')) plasmid_df = plasmid_dict['min_df'] hek_cuts = np.array(plasmid_dict['min_hek_count'].todense()) hela_cuts = np.array(plasmid_dict['min_hela_count'].todense()) mcf7_cuts = np.array(plasmid_dict['min_mcf7_count'].todense()) cho_cuts = np.array(plasmid_dict['min_cho_count'].todense()) total_cuts = hek_cuts + hela_cuts + mcf7_cuts + cho_cuts total_cuts = total_cuts[:, :-1] # + fixed_poses = [140, 140 + 44, 140 + 79] sd_window = 130#120 sd1_pos = 140 negative_sampling_ratio = 2 fixed_pos_mask = np.ones(total_cuts.shape[1]) for j in range(len(fixed_poses)) : fixed_pos_mask[fixed_poses[j]] = 0 cut_pos = 
np.arange(total_cuts.shape[1]) aligned_seqs = [] aligned_libs = [] aligned_mode = [] max_data_len = 3000000 aligned_hek_cuts = sp.lil_matrix((max_data_len, 2 * sd_window + 1)) aligned_hela_cuts = sp.lil_matrix((max_data_len, 2 * sd_window + 1)) aligned_mcf7_cuts = sp.lil_matrix((max_data_len, 2 * sd_window + 1)) aligned_cho_cuts = sp.lil_matrix((max_data_len, 2 * sd_window + 1)) splice_mats = [ [hek_cuts, aligned_hek_cuts], [hela_cuts, aligned_hela_cuts], [mcf7_cuts, aligned_mcf7_cuts], [cho_cuts, aligned_cho_cuts] ] old_i = 0 new_i = 0 for _, row in plasmid_df.iterrows() : if old_i % 10000 == 0 : print("Processing sequence " + str(old_i) + "...") seq = row['padded_seq'] nonzero_cuts = np.nonzero( ((total_cuts[old_i, :] > 0) & (fixed_pos_mask == 1)) & ((cut_pos >= sd_window) & (cut_pos < total_cuts.shape[1] - sd_window)) )[0].tolist() zero_cuts = np.nonzero( ((total_cuts[old_i, :] == 0) & (fixed_pos_mask == 1)) & ((cut_pos >= sd_window + 1) & (cut_pos < total_cuts.shape[1] - sd_window - 1)) )[0].tolist() #Emit fixed splice positions for fixed_pos in fixed_poses : aligned_seqs.append(seq[fixed_pos - sd_window: fixed_pos + sd_window]) aligned_libs.append(fixed_pos - sd1_pos) aligned_mode.append("fixed_" + str(fixed_pos - sd1_pos)) for [cuts, aligned_cuts] in splice_mats : aligned_cuts[new_i, :2 * sd_window] = cuts[old_i, fixed_pos - sd_window: fixed_pos + sd_window] aligned_cuts[new_i, 2 * sd_window] = cuts[old_i, -1] new_i += 1 #Emit denovo splice positions for denovo_pos in nonzero_cuts : aligned_seqs.append(seq[denovo_pos - sd_window: denovo_pos + sd_window]) aligned_libs.append(denovo_pos - sd1_pos) aligned_mode.append("denovo_pos_" + str(denovo_pos - sd1_pos)) for [cuts, aligned_cuts] in splice_mats : aligned_cuts[new_i, :2 * sd_window] = cuts[old_i, denovo_pos - sd_window: denovo_pos + sd_window] aligned_cuts[new_i, 2 * sd_window] = cuts[old_i, -1] new_i += 1 if negative_sampling_ratio > 0.0 : n_neg = int(negative_sampling_ratio * (3 + len(nonzero_cuts))) sampled_zero_cuts = np.random.choice(zero_cuts, size=n_neg, replace=False) #Emit negative denovo splice positions for denovo_pos in sampled_zero_cuts : aligned_seqs.append(seq[denovo_pos - sd_window: denovo_pos + sd_window]) aligned_libs.append(denovo_pos - sd1_pos) aligned_mode.append("denovo_neg_" + str(denovo_pos - sd1_pos)) for [cuts, aligned_cuts] in splice_mats : aligned_cuts[new_i, :2 * sd_window] = cuts[old_i, denovo_pos - sd_window: denovo_pos + sd_window] aligned_cuts[new_i, 2 * sd_window] = cuts[old_i, -1] new_i += 1 old_i += 1 aligned_min_hek_cuts = sp.csr_matrix(aligned_hek_cuts[:len(aligned_seqs), :]) aligned_min_hela_cuts = sp.csr_matrix(aligned_hela_cuts[:len(aligned_seqs), :]) aligned_min_mcf7_cuts = sp.csr_matrix(aligned_mcf7_cuts[:len(aligned_seqs), :]) aligned_min_cho_cuts = sp.csr_matrix(aligned_cho_cuts[:len(aligned_seqs), :]) aligned_min_df = pd.DataFrame({ 'seq' : aligned_seqs, 'library' : aligned_libs, 'origin' : aligned_mode }) aligned_min_df = aligned_min_df[['seq', 'library', 'origin']] print("len(aligned_min_df) = " + str(len(aligned_min_df))) print("aligned_min_hek_cuts.shape = " + str(aligned_min_hek_cuts.shape)) print("aligned_min_hela_cuts.shape = " + str(aligned_min_hela_cuts.shape)) print("aligned_min_mcf7_cuts.shape = " + str(aligned_min_mcf7_cuts.shape)) print("aligned_min_cho_cuts.shape = " + str(aligned_min_cho_cuts.shape)) # + #Filter out zeros keep_index = (np.ravel(aligned_min_hek_cuts.sum(axis=-1)) > 0) keep_index = keep_index & (np.ravel(aligned_min_hela_cuts.sum(axis=-1)) > 0) keep_index = 
keep_index & (np.ravel(aligned_min_mcf7_cuts.sum(axis=-1)) > 0) keep_index = keep_index & (np.ravel(aligned_min_cho_cuts.sum(axis=-1)) > 0) aligned_min_df = aligned_min_df.iloc[keep_index].copy().reset_index(drop=True) aligned_min_hek_cuts = aligned_min_hek_cuts[keep_index, :] aligned_min_hela_cuts = aligned_min_hela_cuts[keep_index, :] aligned_min_mcf7_cuts = aligned_min_mcf7_cuts[keep_index, :] aligned_min_cho_cuts = aligned_min_cho_cuts[keep_index, :] print("len(aligned_min_df) = " + str(len(aligned_min_df))) print("aligned_min_hek_cuts.shape = " + str(aligned_min_hek_cuts.shape)) print("aligned_min_hela_cuts.shape = " + str(aligned_min_hela_cuts.shape)) print("aligned_min_mcf7_cuts.shape = " + str(aligned_min_mcf7_cuts.shape)) print("aligned_min_cho_cuts.shape = " + str(aligned_min_cho_cuts.shape)) # + data_version = '_neg_rate_2'#'_neg_rate_1'#'' aligned_5ss_dict = { 'min_df' : aligned_min_df, 'min_hek_count' : aligned_min_hek_cuts, 'min_hela_count' : aligned_min_hela_cuts, 'min_mcf7_count' : aligned_min_mcf7_cuts, 'min_cho_count' : aligned_min_cho_cuts, } pickle.dump(aligned_5ss_dict, open('alt_5ss_data_aligned' + data_version + '.pickle', 'wb')) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # + import os import torch import torchvision from torch import nn from torch.autograd import Variable from torch.utils.data import DataLoader from torchvision import transforms from torchvision.datasets import MNIST from torchvision.utils import save_image # - if not os.path.exists('./mlp_img'): os.mkdir('./mlp_img') def to_img(x): x = 0.5 * (x + 1) x = x.clamp(0, 1) x = x.view(x.size(0), 1, 28, 28) return x # + num_epochs = 100 batch_size = 128 learning_rate = 1e-3 img_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) dataset = MNIST('./data', transform=img_transform, download=True) dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True) # - class autoencoder(nn.Module): def __init__(self): super().__init__() self.encoder = nn.Sequential( nn.Linear(28 * 28, 128), nn.ReLU(True), nn.Linear(128, 64), nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3) ) self.decoder = nn.Sequential( nn.Linear(3, 12), nn.ReLU(True), nn.Linear(12, 64), nn.ReLU(True), nn.Linear(64, 128), nn.ReLU(True), nn.Linear(128, 28 * 28), nn.Tanh() ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x model = autoencoder().cuda() criterion = nn.MSELoss() optimizer = torch.optim.Adam( model.parameters(), lr=learning_rate, weigth_decay=1e-5 ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # [View in Colaboratory](https://colab.research.google.com/github/XinyueZ/tf/blob/master/ipynb/intro_to_tf_week_1.ipynb) # + id="NBC5j8C9AZ2Y" colab_type="code" colab={} import tensorflow as tf import numpy as np # + id="q5ZShQDf5Ons" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="9d991bee-f9be-4883-88f9-272f63ae06c6" # !pip install --upgrade tensorflow # + id="wjkA4YbGx5JB" colab_type="code" colab={} a = tf.constant([5,3,8]) b = tf.constant([5,3,8]) c = tf.add(a, b) # + id="f-rXrC0uCoFP" 
colab_type="code" colab={} def forward_pass(w, x): return tf.matmul(w, x) # + id="mDnX6nqxCtTY" colab_type="code" colab={} def train_loop(x, niter=5): with tf.variable_scope("model", reuse=tf.AUTO_REUSE): w = tf.get_variable("weights", shape=(1,2), initializer=tf.truncated_normal_initializer(), trainable=True) preds = [] for k in range(niter): preds.append(forward_pass(w, x)) w = w + 0.1 return preds # + id="b1PgXcLSbrod" colab_type="code" colab={} def do_some_sum(values): x = values[:, 0] print("x: {}".format(x)) y = values[:, 1] print("y: {}".format(y)) return x + y # + id="odp6ijALyD9h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="761cb63d-40a2-4185-d4ca-66e54f5e970c" with tf.Session() as sess: with tf.summary.FileWriter("summary", sess.graph) as writer: print(c.eval()) print(type(c.eval())) preds = train_loop(tf.constant([[3.2, 5.1, 7.2],[4.3, 6.2, 8.3]])) tf.global_variables_initializer().run() for i in range(len(preds)): print("{}:{}".format(i, preds[i].eval())) # Run time #some_const = tf.constant([[9,2], # [3,4]]) # # Dev # some_const = np.array([[9,2], [3,4]]) print(do_some_sum(some_const)) # + id="_BE1aNAZ4kBj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="8ff4e95d-471e-46d0-faf5-beecb7af316e" # !ls summary # + id="N1yNkZHtbiTS" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys import os import json import pandas import numpy import optparse from keras.callbacks import TensorBoard from keras.models import Sequential from keras.layers import LSTM, Dense, Dropout from keras.layers.embeddings import Embedding from keras.preprocessing import sequence from keras.preprocessing.text import Tokenizer from collections import OrderedDict import json csv_file = 'data/dev-access.csv' # + dataframe = pandas.read_csv(csv_file, engine='python', quotechar='|', header=None) count_frame = dataframe.groupby([1]).count() print(count_frame) total_req = count_frame[0][0] + count_frame[0][1] num_malicious = count_frame[0][1] print("Malicious request logs in dataset: {:0.2f}%".format(float(num_malicious) / total_req * 100)) # - dataset = dataframe.sample(frac=1).values dataframe.head() for i in range(len(dataframe[0])): dataframe[0][i] = dict(json.loads(dataframe[0][i])) if i%1000 == 0: print(i, end=", ") # + # dataframe.apply(lambda x: json.loads(str(x)), axis=1) # - dataframe.head() dataframe.loc[344][0] datarows = [] for i in range(len(dataframe[0])): frame = dataframe[0][i] res = dataframe[1][i] method = frame['method'] path = frame['path'] query = frame['query'] if len(query) > 0: query = query['query'] statusCode = frame['statusCode'] source = frame['source']['remoteAddress'] route = frame['route'] referer = "" if "referer" in frame['source'].keys(): referer = frame['source']['referer'] response = frame['responsePayload'] error = "" msg = "" if(type(response) == type("hello")): msg = response else: if "error" in response.keys(): error = response['error'] msg = response['message'] datarows.append([method, path, query, statusCode, source, referer, route, error, msg, res]) with open("data/data.csv", "w", newline="") as f: for i in range(len(datarows)): f.write(",".join(map(str, datarows[i]))) f.write("\n") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # 
format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A Python Tour of Data Science: Data Acquisition & Exploration # # [](http://deff.ch), *PhD student*, [EPFL](http://epfl.ch) [LTS2](http://lts2.epfl.ch) # # 1 Exercise: problem definition # # Theme of the exercise: **understand the impact of your communication on social networks**. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next [AdWords](https://www.google.com/adwords/) campaign. # # As you probably don't have a company (yet?), you can either use your own social network profile as if it were the company's one or choose an established entity, e.g. EPFL. You will need to be registered in FB or Twitter to generate access tokens. If you're not, either ask a classmate to create a token for you or create a fake / temporary account for yourself (no need to follow other people, we can fetch public data). # # At the end of the exercise, you should have two datasets (Facebook & Twitter) and have used them to answer the following questions, for both Facebook and Twitter. # 1. How many followers / friends / likes has your chosen profile ? # 2. How many posts / tweets in the last year ? # 3. What were the 5 most liked posts / tweets ? # 4. Plot histograms of number of likes and comments / retweets. # 5. Plot basic statistics and an histogram of text lenght. # 6. Is there any correlation between the lenght of the text and the number of likes ? # 7. Be curious and explore your data. Did you find something interesting or surprising ? # 1. Create at least one interactive plot (with bokeh) to explore an intuition (e.g. does the posting time plays a role). # # 2 Ressources # # Here are some links you may find useful to complete that exercise. # # Web APIs: these are the references. # * [Facebook Graph API](https://developers.facebook.com/docs/graph-api) # * [Twitter REST API](https://dev.twitter.com/rest/public) # # Tutorials: # * [Mining the Social Web](https://github.com/ptwobrussell/Mining-the-Social-Web-2nd-Edition) # * [Mining Twitter data with Python](https://marcobonzanini.com/2015/03/02/mining-twitter-data-with-python-part-1/) # * [Simple Python Facebook Scraper](http://simplebeautifuldata.com/2014/08/25/simple-python-facebook-scraper-part-1/) # # 3 Web scraping # # Tasks: # 1. Download the relevant information from Facebook and Twitter. Try to minimize the quantity of collected data to the minimum required to answer the questions. # 2. Build two SQLite databases, one for Facebook and the other for Twitter, using [pandas](http://pandas.pydata.org/) and [SQLAlchemy](http://www.sqlalchemy.org/). # 1. For FB, each row is a post, and the columns are at least (you can include more if you want): the post id, the message (i.e. the text), the time when it was posted, the number of likes and the number of comments. # 2. For Twitter, each row is a tweet, and the columns are at least: the tweet id, the text, the creation time, the number of likes (was called favorite before) and the number of retweets. # # Note that some data cleaning is already necessary. E.g. there are some FB posts without *message*, i.e. without text. Some tweets are also just retweets without any more information. Should they be collected ? # Number of posts / tweets to retrieve. # Small value for development, then increase to collect final data. 
n = 20 # 4000 # ## 3.1 Facebook # # There is two ways to scrape data from Facebook, you can choose one or combine them. # 1. The low-level approach, sending HTTP requests and receiving JSON responses to / from their Graph API. That can be achieved with the json and [requests](python-requests.org) packages (altough you can use urllib or urllib2, requests has a better API). The knowledge you'll acquire using that method will be useful to query other web APIs than FB. This method is also more flexible. # 2. The high-level approach, using a [Python SDK](http://facebook-sdk.readthedocs.io). The code you'll have to write for this method is gonna be shorter, but specific to the FB Graph API. # # You will need an access token, which can be created with the help of the [Graph Explorer](https://developers.facebook.com/tools/explorer). That tool may prove useful to test queries. Once you have your token, you may create a `credentials.ini` file with the following content: # ``` # [facebook] # token = YOUR-FB-ACCESS-TOKEN # ``` # + import configparser credentials = configparser.ConfigParser() credentials.read('credentials.ini') token = credentials.get('facebook', 'token') # Or token = 'YOUR-FB-ACCESS-TOKEN' # - import requests # pip install requests import facebook # pip install facebook-sdk page = 'EPFL.ch' # + # Your code here. # - # ## 3.2 Twitter # # There exists a bunch of [Python-based clients](https://dev.twitter.com/overview/api/twitter-libraries#python) for Twitter. [Tweepy](http://tweepy.readthedocs.io) is a popular choice. # # You will need to create a [Twitter app](https://apps.twitter.com/) and copy the four tokens and secrets in the `credentials.ini` file: # ``` # [twitter] # consumer_key = YOUR-CONSUMER-KEY # consumer_secret = YOUR-CONSUMER-SECRET # access_token = YOUR-ACCESS-TOKEN # access_secret = YOUR-ACCESS-SECRET # ``` # + import tweepy # pip install tweepy # Read the confidential tokens and authenticate. auth = tweepy.OAuthHandler(credentials.get('twitter', 'consumer_key'), credentials.get('twitter', 'consumer_secret')) auth.set_access_token(credentials.get('twitter', 'access_token'), credentials.get('twitter', 'access_secret')) api = tweepy.API(auth) user = 'EPFL_en' # + # Your code here. # - # # 4 Data analysis # # Answer the questions using [pandas](http://pandas.pydata.org/), [statsmodels](http://statsmodels.sourceforge.net/), [scipy.stats](http://docs.scipy.org/doc/scipy/reference/stats.html), [bokeh](http://bokeh.pydata.org). import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') # %matplotlib inline # + # Your code here. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Naive Bayes Classifier # # A project to classify spam and not spam messages in the formof texts using NB classifier. The data set here is taken from udacity/kaggle. Its easily available online. Our main aim here is to identify which words are mostly frequently used in spam messages and from that we would get the individual probabilty of each word and compute if the message is spam or not given that the word occurs, P(message is spam/the 'word'). 
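# Written out, that per-word quantity is just Bayes' rule,
# $P(\text{spam} \mid \text{word}) = \frac{P(\text{word} \mid \text{spam}) \, P(\text{spam})}{P(\text{word})}$,
# and the multinomial Naive Bayes model trained later estimates these terms from word counts,
# treating the words in a message as conditionally independent given the label.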
# + import pandas as pd # Import the data using read_table df = pd.read_table('SMSSpamCollection' ,sep = "\t" ,header = None ,names = ['label','message']) df.head() # - # Dimensions of the dataframe df.shape # + #Replacing the labels to 0 and 1 as sklearn doesn't deal with characters but only numbers. df['label'] = df.replace({'ham':0,'spam':1}) df.head() # - # ### Different Steps to implement # # 1) Convert all strings to lower case form # # 2) Remove all punctuations and stopwords # # 3) Tokenization - Split a sentence into individual words using a delimiter # # 4) Count frequencies of each word and store it in a document term matrix # # DTM - each row represents each message and each column is a word, each cell will be the frequency of that word(col) in that message(row) # + from sklearn.feature_extraction.text import CountVectorizer count_vector = CountVectorizer() count_vector # - # ##### Split our data set into training and testing dataset # + from sklearn.cross_validation import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(df['message'], df['label'], random_state = 1) # - # Mandatory stuff print('Number of rows in the total set: {}'.format(df.shape[0])) print('Number of rows in the training set: {}'.format(Y_train.shape[0])) print('Number of rows in the test set: {} '.format(Y_test.shape[0])) # + # Instantiate the CountVectorizer method count_vector = CountVectorizer() # Fit the training data and then return the matrix training_data = count_vector.fit_transform(X_train) # Transform testing data and return the matrix. Note, in 'training data' we are learning a vocabulary dictionary for the training # data and then transforming the data into a DTM; Here we are not fitting the testing data into the CountVectorizer(), we are # only transforming the data into a DTM testing_data = count_vector.transform(X_test) # - import numpy as np print (np.matrix(training_data)) training_data.todense() # ## Implement Naive Bayes # # Converting the format and datatype # Change the format of the test and train dataset for Y Y_train = np.asarray(Y_train, dtype="|S6") Y_test = np.asarray(Y_test, dtype="|S4") # + # Import the Multinomial Naive Bayes function from sklearn.naive_bayes import MultinomialNB naive_bayes = MultinomialNB() naive_bayes.fit(training_data, Y_train) # - predictions = naive_bayes.predict(testing_data) predictions # ### Evaluating the model # + from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score print('Accuracy score: ', format(accuracy_score(Y_test, predictions))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import json raw = pd.read_csv('../../data/individuals.csv') raw.head() region_to_prefecture = { '北海道': ['北海道'], '東北': ['青森県', '岩手県', '秋田県', '宮城県', '山形県', '福島県'], '関東': ['茨城県', '栃木県', '群馬県', '埼玉県', '千葉県', '東京都', '神奈川県'], '中部': ['新潟県', '富山県', '石川県', '福井県', '山梨県', '長野県', '岐阜県', '静岡県', '愛知県'], '近畿': ['三重県', '滋賀県', '京都府', '大阪府', '兵庫県', '奈良県', '和歌山県'], '中国': ['鳥取県', '島根県', '岡山県', '広島県', '山口県'], '四国': ['徳島県', '香川県', '愛媛県', '高知県'], '九州沖縄': ['福岡県', '佐賀県', '長崎県', '熊本県', '大分県', '宮崎県', '鹿児島県', '沖縄県'], } prefectures = [] for k,v in region_to_prefecture.items(): prefectures += v tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), (148, 103, 189), (197, 176, 
213), (140, 86, 75), (196, 156, 148), (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] raw['date'] = pd.to_datetime(raw.apply(lambda r: f"{r['確定年']}-{r['確定月']:02d}-{r['確定日']:02d}", axis=1)) count_by_location = raw[['date', '居住地1','新No.']].pivot_table(index='date',columns='居住地1',values='新No.',aggfunc='count') count_by_location.loc['2020-03-01':,'北海道'] # + s = count_by_location.index[0] t = count_by_location.index[-1] dummy = pd.DataFrame([[d, None] for d in pd.date_range(s, t)],columns=['date','dummy']).set_index('date') count_by_location = dummy.join(count_by_location) count_by_location = count_by_location.drop('dummy',axis=1) count_by_location = count_by_location.fillna(0) # - len(count_by_location) == len(dummy) len(raw) == count_by_location.sum(axis=1).sum() for region,prefs in region_to_prefecture.items(): target = [e for e in prefs if e in count_by_location.columns] count_by_location[region] = count_by_location[target].sum(axis=1) print(len(raw)) region_names = region_to_prefecture.keys() print(count_by_location[region_names].sum(axis=1).sum()) print(count_by_location[set(count_by_location.columns) - set(prefectures) - set(region_names)].sum(axis=1).sum()) # + threshold = 5 res = { 'original_date' : {}, } for location in count_by_location.columns: res['original_date'][location] = { 'daily_count': list(count_by_location[location]), 'cumulative': list(count_by_location[location].cumsum()), } res['date'] = [d.isoformat() for d in count_by_location[location].index] res['prefectures'] = prefectures def tuple2css(t): return f'rgb({t[0]}, {t[1]}, {t[2]})' res['colors'] = {p: tuple2css(tableau20[i%20]) for i, p in enumerate(prefectures)} # - with open('../../html/count_by_location.json', 'w') as f: json.dump(res, f) prefs = region_to_prefecture['近畿'] target = [e for e in prefs if e in count_by_location.columns] prefs, target tmp = count_by_location['近畿'].cumsum() # tmp = tmp[tmp>threshold] tmp[tmp>threshold] tmp # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: DS-ENV # language: python # name: ds-env # --- # + [markdown] tags=[] # # Set theory # > Putting things in mathematical boxes # # - toc: true # - badges: true # - author: # - hide: true # - categories: [mathematics] # - # # Overview # # In mathematics, we are frequently interested in characterizing a group of numbers, such as the natural numbers or all the positive integers. Sets are abstract definitions for these collections of numbers. # # In a nutshell, a set is a collection of mathematical objects, also called elements. These objects can be numbers, symbols, variables, and even functions. We can use set algebra to describe set properties and laws, such as set-theoretic operations like union, intersection, complementation, set equality and set inclusion relations. These systemic procedures help us to evaluate expressions and perform calculations. # ## Basic concepts of sets # # > A set is a collection of elements. We denote $A = \{ξ_1,ξ_2,...,ξ_n\}$ as a set, where $ξ_i$ is the $i$th element in the set. # # Examples of sets and their elements: # # $A = \{apple, orange, banana\}$ is a finite set of fruits # # $A = \{1, 2, 3, 4, 5, 6\}$ is a finite set of natural numbers # # $A = \{2, 4, 6, 8, . . 
.\}$ is a countable but infinite set of even natural numbers A = {1, 2, 3, 4, 5, 6} B = {'apple', 'orange', 'banana'} # + [markdown] tags=[] # ### Membership and cardinality # # Sometimes we want to know if a certain element is in a set. # # If a certain $\xi $ is a member (or element) of A, the notation $\xi \in A$ is used. # For example, we can write $apple \in \{apple, orange, banana\}$. Read in english, apple is in the set of apple, orange and banana. # # The notion of `cardinality` is how big a set is, or how many elements there are in a set. In the above example, the cardinality of the set of fruits is 3. # # A set can also be used to describe a collection of sets. Let A and B be two sets. Then C = {A, B} is a set of sets. # For example, Let $A = \{1, 2\}$ and $B = \{apple, orange\}$. # Then $C = \{A, B\} = \{\{1, 2\}, \{apple, orange\}\}$ # - A = {1, 2} B = {'apple', 'orange', 'banana'} 1 in A len(A), len(B) # To represent sets of sets in Python, the inner sets must be frozenset objects. # + A = frozenset([1, 2]) B = frozenset(['apple', 'orange']) C = {A, B} # - # ### Subsets # # Given a set, we often want to specify a portion of the set, which is called a `subset`. # # suppose B is a subset of A if for any $ξ ∈ B, ξ$ is also in A. We write # $B ⊆ A$ to denote that B is a subset of A. # For example, if we have two sets $A$ and $B$ in which $A = \{1, 2\}$ and $B = \{2, 1, 4, 3, 5 \}$ then we write $A \subseteq B$. Read in English, A is a subset of B. # # Theres also the notion of a `proper subset` in which states that if A is a subset of B, but A is not equal to B. # For example, if we have two sets, $A = \{1,2,3,4,5,6\}$, then $B = \{1,3,5\}$ is a proper subset of A. # + A = {1, 2} B = {2, 1, 4, 3, 5} A.issubset(B) # - # ### Empty set and Universal set # # A set is empty if it contains no element. We denote an empty set as $A = \{\emptyset \}$. We also note its size or cardinality is zero. # # The universal set is the set containing all elements under consideration. We denote a universal set as # $A = \Omega$. Note that there is no universtal symbol for this set, here I choose omega. # # # # ## Set Operations # # ### Union # # The union of two sets A and B contains all elements in A or in B. That is, $A ∪ B = \{ξ | ξ ∈ A or ξ ∈ B\}.$ The union of two sets connects the sets using the logical operator `or`. # # For example, If $A = \{1, 2\}, B = \{1, 5\}$, then $A ∪ B = \{1, 2, 5\}$ # + A = {1, 2} B = {1, 5} A.union(B) # - # ### Intersection # # The `intersection` of two sets A and B contains all elements in A and in B. That is, $A ∩ B = \{ξ | ξ ∈ A$ and $ξ ∈ B\}$. # # For example, If $A = \{1, 2, 3, 4\}$, $B = \{1, 5, 6\}$, then $A ∩ B = \{1\}$. # + A = {1, 2, 3, 4} B = {1, 5, 6} A.intersection(B) # - # ### Complement and difference # # The complement of a set A is the set containing all elements that are in $\Omega$ but not in A. That is, $A^c = \{ \xi$ | $\xi \in \Omega$ and $\xi \notin A \}$. # # For example, Let $A = \{1, 2, 3\}$ and $\Omega = \{1, 2, 3, 4, 5, 6\}$. Then $A^c = \{4, 5, 6 \}$. # # The concept of the complement will help us understand the concept of `difference`. # # The difference A\B is the set containing all elements in A but not in B. $A \setminus B$ = $\{\xi$ | $\xi \in A$ and $\xi \in B\}$. # + A = {1, 2, 3} B = {1, 2, 3, 4, 5, 6} B.difference(A) # - # ## Closing remarks # # # An axiom is a proposition that serves as a premise or starting point in a logical system. Axioms are not definitions, nor are they theorems. 
They are believed to be true or true within a certain context. # # # noticed that “pure” mathematics, which had completely dis􏱚 associated from the inquiries of natural sciences, was never the center of at􏱚 tention and interest of the great mathematicians of the past. They considered “pure” mathematics as some kind of “entertainment,” a rest from much more important and fascinating problems, which were put forward by natural sci􏱚 ences # # # # # Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. # # I don't hold these views. # # Set theory is an amazing piece of mathematical theory that has many usefullness in many areas of mathematics. I just don't think set theory belongs at her foundations. # # You'll see set theory employed in numerical analysis when formulating the number system from scratch. # Resources: # # - https://docs.python.org/3/library/stdtypes.html#set or https://realpython.com/python-sets/ # - https://en.wikipedia.org/wiki/Set_theory # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Figure 1 # # %load CONUS_map.py import os import rnnSMAP import matplotlib import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import imp imp.reload(rnnSMAP) rnnSMAP.reload() figTitleLst = ['Temporal Test', 'Spatial Test'] figNameLst = ['temporal', 'spatial'] matplotlib.rcParams.update({'font.size': 14}) matplotlib.rcParams.update({'lines.linewidth': 2}) matplotlib.rcParams.update({'lines.markersize': 6}) for iFig in range(0, 2): # iFig = 0 figTitle = figTitleLst[iFig] if iFig == 0: testName = 'CONUSv2f1' yr = [2017] if iFig == 1: testName = 'CONUSv2f2' yr = [2015] trainName = 'CONUSv2f1' out = trainName+'_y15_Forcing_dr60' rootDB = rnnSMAP.kPath['DB_L3_NA'] rootOut = rnnSMAP.kPath['OutSigma_L3_NA'] caseStrLst = ['sigmaMC', 'sigmaX', 'sigma'] nCase = len(caseStrLst) saveFolder = os.path.join(rnnSMAP.kPath['dirResult'], 'paperSigma') ################################################# # test predField = 'LSTM' targetField = 'SMAP' ds = rnnSMAP.classDB.DatasetPost( rootDB=rootDB, subsetName=testName, yrLst=yr) ds.readData(var='SMAP_AM', field='SMAP') ds.readPred(rootOut=rootOut, out=out, drMC=100, field='LSTM') statErr = ds.statCalError(predField='LSTM', targetField='SMAP') statSigma = ds.statCalSigma(field='LSTM') statConf = ds.statCalConf(predField='LSTM', targetField='SMAP') statNorm = rnnSMAP.classPost.statNorm( statSigma=statSigma, dataPred=ds.LSTM, dataTarget=ds.SMAP) ################################################# # plot figure fig = plt.figure(figsize=[12, 3]) gs = gridspec.GridSpec( 1, 3, width_ratios=[1, 1, 0.5], height_ratios=[1]) dataErr = getattr(statErr, 'ubRMSE') dataSigma = getattr(statSigma, 'sigma') cRange = [0, 0.1] # plot map RMSE ax = fig.add_subplot(gs[0, 0]) grid = ds.data2grid(data=dataErr) titleStr = 'ubRMSE of '+figTitle rnnSMAP.funPost.plotMap(grid, crd=ds.crdGrid, ax=ax, cRange=cRange, title=titleStr) # plot map sigma ax = fig.add_subplot(gs[0, 1]) grid = ds.data2grid(data=dataSigma) titleStr = r'$\sigma_{comb}$'+' of '+figTitle rnnSMAP.funPost.plotMap(grid, crd=ds.crdGrid, ax=ax, cRange=cRange, title=titleStr) fig.show() # plot map sigma vs RMSE ax = fig.add_subplot(gs[0, 2]) ax.set_aspect('equal', 'box') y = dataErr x = dataSigma rnnSMAP.funPost.plotVS( x, y, 
ax=ax, xlabel=r'$\sigma_{comb}$', ylabel='ubRMSE') fig.tight_layout() fig.show() saveFile = os.path.join(saveFolder, 'map_'+figNameLst[iFig]) fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') ################################################# # plot sigmaX vs sigmaMC plotSigma = 1 if plotSigma == 1: fig = plt.figure(figsize=[12, 3]) gs = gridspec.GridSpec( 1, 3, width_ratios=[1, 1, 0.5], height_ratios=[1]) dataSigmaX = getattr(statSigma, 'sigmaX') dataSigmaMC = getattr(statSigma, 'sigmaMC') # plot map RMSE ax = fig.add_subplot(gs[0, 0]) grid = ds.data2grid(data=dataSigmaX) titleStr = r'$\sigma_{x}$ '+figTitle rnnSMAP.funPost.plotMap(grid, crd=ds.crdGrid, ax=ax, cRange=[0, 0.1], title=titleStr) # plot map sigma ax = fig.add_subplot(gs[0, 1]) grid = ds.data2grid(data=dataSigmaMC) titleStr = r'$\sigma_{MC}$'+' of '+figTitle rnnSMAP.funPost.plotMap(grid, crd=ds.crdGrid, ax=ax, cRange=[0, 0.05], title=titleStr) # plot map sigma vs RMSE ax = fig.add_subplot(gs[0, 2]) ax.set_aspect('equal', 'box') y = dataSigmaMC x = dataSigmaX rnnSMAP.funPost.plotVS( x, y, ax=ax, xlabel=r'$\sigma_{x}$', ylabel=r'$\sigma_{MC}$') fig.tight_layout() fig.show() saveFile = os.path.join(saveFolder, 'map_'+figNameLst[iFig]+'_sigma') fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') # + slideshow={"slide_type": "-"} # # %load CONUS_conf.py # Figure 2 import os import rnnSMAP import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy import imp imp.reload(rnnSMAP) rnnSMAP.reload() trainName = 'CONUSv2f1' out = trainName+'_y15_Forcing_dr60' rootDB = rnnSMAP.kPath['DB_L3_NA'] rootOut = rnnSMAP.kPath['OutSigma_L3_NA'] saveFolder = os.path.join(rnnSMAP.kPath['dirResult'], 'paperSigma') doOpt = [] doOpt.append('loadData') doOpt.append('plotConf') # doOpt.append('plotBin') # doOpt.append('plotProb') matplotlib.rcParams.update({'font.size': 16}) matplotlib.rcParams.update({'lines.linewidth': 2}) matplotlib.rcParams.update({'lines.markersize': 10}) plt.tight_layout() ################################################# # load data if 'loadData' in doOpt: dsLst = list() statErrLst = list() statSigmaLst = list() statConfLst = list() statProbLst = list() for k in range(0, 2): if k == 0: # validation testName = 'CONUSv2f1' yr = [2016] if k == 1: # temporal test testName = 'CONUSv2f1' yr = [2017] # if k == 2: # spatial test # testName = 'CONUSv2fx2' # yr = [2015] predField = 'LSTM' targetField = 'SMAP' ds = rnnSMAP.classDB.DatasetPost( rootDB=rootDB, subsetName=testName, yrLst=yr) ds.readData(var='SMAP_AM', field='SMAP') ds.readPred(rootOut=rootOut, out=out, drMC=100, field='LSTM') statErr = ds.statCalError(predField='LSTM', targetField='SMAP') statSigma = ds.statCalSigma(field='LSTM') statConf = ds.statCalConf( predField='LSTM', targetField='SMAP', rmBias=True) statProb = ds.statCalProb(predField='LSTM', targetField='SMAP') dsLst.append(ds) statErrLst.append(statErr) statSigmaLst.append(statSigma) statConfLst.append(statConf) statProbLst.append(statProb) ################################################# # plot confidence figure if 'plotConf' in doOpt: figTitleLst = ['(a) Validation', '(b) Temporal Test'] fig, axes = plt.subplots( ncols=len(figTitleLst), figsize=(12, 6), sharey=True) sigmaStrLst = ['sigmaX', 'sigmaMC', 'sigma'] legendLst = [r'$p_{x}$', r'$p_{mc}$', r'$p_{comb}$'] for iFig in range(0, 2): statConf = statConfLst[iFig] figTitle = figTitleLst[iFig] plotLst = list() for k in range(0, len(sigmaStrLst)): plotLst.append(getattr(statConf, 'conf_'+sigmaStrLst[k])) _, _, out = 
rnnSMAP.funPost.plotCDF( plotLst, ax=axes[iFig], legendLst=legendLst, cLst='grbm', xlabel='Error Exceedance Probablity', ylabel=None, showDiff='KS') axes[iFig].set_title(figTitle) print(out['rmseLst']) axes[0].set_ylabel('Frequency') # axes[1].get_legend().remove() fig.tight_layout() # fig.show() # saveFile = os.path.join(saveFolder, 'CONUS_conf') # fig.savefig(saveFile, dpi=100) # fig.savefig(saveFile+'.eps') # + # # %load CONUSv4_noise.py import os import rnnSMAP from rnnSMAP import runTrainLSTM import numpy as np import matplotlib import matplotlib.pyplot as plt import imp imp.reload(rnnSMAP) rnnSMAP.reload() ################################################# # noise affact on sigmaX (or sigmaMC) doOpt = [] # doOpt.append('train') doOpt.append('test') # doOpt.append('plotMap') doOpt.append('plotErrBox') # doOpt.append('plotConf') # doOpt.append('plotConfDist') # doOpt.append('plotConfLegend') # # noiseNameLst = ['0', '5e3', '1e2', '2e2', '5e2', '1e1'] noiseNameLst = ['0', '1e2', '2e2', '3e2', '4e2', '5e2', '6e2', '7e2', '8e2', '9e2', '1e1'] noiseLabelLst = ['0', '0.01', '0.02', '0.03', '0.04', '0.05', '0.06', '0.07', '0.08', '0.09', '0.1'] strErrLst = ['RMSE', 'ubRMSE'] saveFolder = os.path.join( rnnSMAP.kPath['dirResult'], 'paperSigma') rootDB = rnnSMAP.kPath['DB_L3_NA'] matplotlib.rcParams.update({'font.size': 16}) matplotlib.rcParams.update({'lines.linewidth': 2}) matplotlib.rcParams.update({'lines.markersize': 10}) matplotlib.rcParams.update({'legend.fontsize': 14}) ################################################# if 'test' in doOpt: statErrLst = list() statSigmaLst = list() statConfLst = list() for k in range(0, len(noiseNameLst)): testName = 'CONUSv4f1' if k == 0: out = 'CONUSv4f1_y15_Forcing_dr60' targetName = 'SMAP_AM' else: out = 'CONUSv4f1_y15_Forcing_dr06_sn'+noiseNameLst[k] targetName = 'SMAP_AM_sn'+noiseNameLst[k] rootOut = rnnSMAP.kPath['OutSigma_L3_NA'] caseStrLst = ['sigmaMC', 'sigmaX', 'sigma'] ds = rnnSMAP.classDB.DatasetPost( rootDB=rootDB, subsetName=testName, yrLst=[2017]) ds.readData(var=targetName, field='SMAP') ds.readPred(out=out, drMC=100, field='LSTM', rootOut=rootOut) statErr = ds.statCalError(predField='LSTM', targetField='SMAP') statErrLst.append(statErr) statSigma = ds.statCalSigma(field='LSTM') statSigmaLst.append(statSigma) statConf = ds.statCalConf(predField='LSTM', targetField='SMAP') statConfLst.append(statConf) ################################################# if 'plotErrBox' in doOpt: data = list() strErr = 'ubRMSE' strSigmaLst = ['sigmaMC', 'sigmaX', 'sigma'] labelS = [r'$\sigma_{mc}$', r'$\sigma_x$', r'$\sigma_{comb}$', 'ubRMSE'] for k in range(0, len(noiseNameLst)): temp = list() for strSigma in strSigmaLst: temp.append(getattr(statSigmaLst[k], strSigma)) temp.append(getattr(statErrLst[k], strErr)) data.append(temp) fig = rnnSMAP.funPost.plotBox( data, labelC=noiseLabelLst, figsize=(12, 6), colorLst='rbgk', labelS=labelS, title='Error and uncertainty estimates in temporal test') # axes[-1].get_legend().remove() fig.show() saveFile = os.path.join(saveFolder, 'noise_box') fig.subplots_adjust(wspace=0.1) fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') # figLeg, axLeg = plt.subplots(figsize=(3, 3)) # leg = axes[-1].get_legend() # axLeg.legend(bp['boxes'], labelS, loc='upper right') # axLeg.axis('off') # figLeg.show() # saveFile = os.path.join(saveFolder, 'noise_box_legend') # figLeg.savefig(saveFile+'.eps') ################################################# if 'plotConf' in doOpt: strSigmaLst = ['sigmaMC', 'sigmaX', 'sigma'] titleLst = 
[r'$p_{mc}$', r'$p_{x}$', r'$p_{comb}$'] fig, axes = plt.subplots(ncols=len(titleLst), figsize=(12, 4), sharey=True) for iFig in range(0, 3): plotLst = list() for k in range(0, len(noiseNameLst)): plotLst.append(getattr(statConfLst[k], 'conf_'+strSigmaLst[iFig])) if iFig == 2: _, _, out = rnnSMAP.funPost.plotCDF( plotLst, ax=axes[iFig], legendLst=noiseLabelLst, xlabel='Predicted Probablity', ylabel=None, showDiff=True) else: _, _, out = rnnSMAP.funPost.plotCDF( plotLst, ax=axes[iFig], legendLst=None, xlabel='Predicted Probablity', ylabel=None, showDiff=True) axes[iFig].set_title(titleLst[iFig]) print(out['rmseLst']) if iFig == 0: axes[iFig].set_ylabel('Frequency') saveFile = os.path.join(saveFolder, 'noise_conf') plt.tight_layout() fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') fig.show() ################################################# if 'plotConfDist' in doOpt: fig, axes = plt.subplots(1, 3, figsize=(12, 4)) strSigmaLst = ['sigmaMC', 'sigmaX'] titleLst = [r'CDF of $p_{mc}$', r'CDF of $p_{x}$'] for iFig in range(0, 2): plotLst = list() for k in range(0, len(noiseNameLst)): plotLst.append(getattr(statConfLst[k], 'conf_'+strSigmaLst[iFig])) _, _, out = rnnSMAP.funPost.plotCDF( plotLst, ax=axes[iFig], legendLst=None, xlabel=r'$P_{ee}$', ylabel=None, showDiff=False) axes[iFig].set_title(titleLst[iFig]) print(out['rmseLst']) if iFig == 0: axes[iFig].set_ylabel('Frequency') noiseLst = np.arange(0, 0.11, 0.01) strSigmaLst = ['sigmaMC', 'sigmaX'] legLst = [r'$d(p_{mc})$', r'$d(p_{x})$'] cLst = 'rb' axesDist = [axes[2], axes[2].twinx()] for iS in range(0, len(strSigmaLst)): distLst = list() for iN in range(0, len(noiseNameLst)): x = getattr(statConfLst[iN], 'conf_'+strSigmaLst[iS]) # calculate dist of CDF xSort = rnnSMAP.funPost.flatData(x) yRank = np.arange(len(xSort))/float(len(xSort)-1) dist = np.max(np.abs(xSort - yRank)) distLst.append(dist) axesDist[iS].plot(noiseLst, distLst, color=cLst[iS], label=legLst[iS]) axesDist[iS].tick_params('y', colors=cLst[iS]) axesDist[0].set_xlabel(r'$\sigma_{noise}$') axesDist[0].legend(loc='upper center') axesDist[1].legend(loc='lower center') axesDist[0].set_title(r'd to $y=x$') plt.tight_layout() saveFile = os.path.join(saveFolder, 'noise_dist') fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') fig.show() ################################################# if 'plotConfLegend' in doOpt: fig, axes = plt.subplots(1, 2, figsize=(12, 4)) strSigmaLst = ['sigmaMC', 'sigmaX'] titleLst = [r'CDF of $p_{mc}$', r'CDF of $p_{x}$'] plotLst = list() for k in range(0, len(noiseNameLst)): plotLst.append(getattr(statConfLst[k], 'conf_'+strSigmaLst[iFig])) _, _, out = rnnSMAP.funPost.plotCDF( plotLst, ax=axes[0], legendLst=noiseLabelLst, xlabel=r'$P_{ee}$', ylabel=None, showDiff=False) hh, ll = axes[0].get_legend_handles_labels() axes[1].legend(hh, ll, borderaxespad=0, loc='lower left', ncol=1) axes[1].axis('off') saveFile = os.path.join(saveFolder, 'noise_dist_leg') fig.savefig(saveFile, dpi=100) fig.savefig(saveFile+'.eps') fig.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python361264bitpy36condab15cfc676c924d3bb5d13a94d0f7deaa # --- # ## Homework-3: MNIST Classification with ConvNet # # ### **Deadline: 2021.04.06 23:59:00 ** # # ### In this homework, you need to # - #### implement the forward and backward functions for ConvLayer (`layers/conv_layer.py`) # - #### implement 
the forward and backward functions for PoolingLayer (`layers/pooling_layer.py`) # - #### implement the forward and backward functions for DropoutLayer (`layers/dropout_layer.py`) # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import tensorflow.compat.v1 as tf tf.disable_eager_execution() from network import Network from solver import train, test from plot import plot_loss_and_acc # - # ## Load MNIST Dataset # We use tensorflow tools to load dataset for convenience. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() # + def decode_image(image): # Normalize from [0, 255.] to [0., 1.0], and then subtract by the mean value image = tf.cast(image, tf.float32) image = tf.reshape(image, [1, 28, 28]) image = image / 255.0 image = image - tf.reduce_mean(image) return image def decode_label(label): # Encode label with one-hot encoding return tf.one_hot(label, depth=10) # - # Data Preprocessing x_train = tf.data.Dataset.from_tensor_slices(x_train).map(decode_image) y_train = tf.data.Dataset.from_tensor_slices(y_train).map(decode_label) data_train = tf.data.Dataset.zip((x_train, y_train)) x_test = tf.data.Dataset.from_tensor_slices(x_test).map(decode_image) y_test = tf.data.Dataset.from_tensor_slices(y_test).map(decode_label) data_test = tf.data.Dataset.zip((x_test, y_test)) # ## Set Hyperparameters # You can modify hyperparameters by yourself. # + batch_size = 100 max_epoch = 10 init_std = 0.1 learning_rate = 0.001 weight_decay = 0.005 disp_freq = 50 # - # ## Criterion and Optimizer # + from criterion import SoftmaxCrossEntropyLossLayer from optimizer import SGD criterion = SoftmaxCrossEntropyLossLayer() sgd = SGD(learning_rate, weight_decay) # - # ## ConvNet # + from layers import FCLayer, ReLULayer, ConvLayer, MaxPoolingLayer, ReshapeLayer convNet = Network() convNet.add(ConvLayer(1, 8, 3, 1)) convNet.add(ReLULayer()) convNet.add(MaxPoolingLayer(2, 0)) convNet.add(ConvLayer(8, 16, 3, 1)) convNet.add(ReLULayer()) convNet.add(MaxPoolingLayer(2, 0)) convNet.add(ReshapeLayer((batch_size, 16, 7, 7), (batch_size, 784))) convNet.add(FCLayer(784, 128)) convNet.add(ReLULayer()) convNet.add(FCLayer(128, 10)) # + tags=[] # Train convNet.is_training = True convNet, conv_loss, conv_acc = train(convNet, criterion, sgd, data_train, max_epoch, batch_size, disp_freq) # - # Test convNet.is_training = False test(convNet, criterion, data_test, batch_size, disp_freq) # ## Plot plot_loss_and_acc({'ConvNet': [conv_loss, conv_acc]}) # ### ~~You have finished homework3, congratulations!~~ # # **Next, according to the requirements (4):** # ### **You need to implement the Dropout layer and train the network again.** # + from layers import DropoutLayer from layers import FCLayer, ReLULayer, ConvLayer, MaxPoolingLayer, ReshapeLayer, DropoutLayer # build your network convNet = Network() convNet.add(ConvLayer(1, 8, 3, 1)) convNet.add(ReLULayer()) convNet.add(DropoutLayer(0.5)) convNet.add(MaxPoolingLayer(2, 0)) convNet.add(ConvLayer(8, 16, 3, 1)) convNet.add(ReLULayer()) convNet.add(MaxPoolingLayer(2, 0)) convNet.add(ReshapeLayer((batch_size, 16, 7, 7), (batch_size, 784))) convNet.add(FCLayer(784, 128)) convNet.add(ReLULayer()) convNet.add(FCLayer(128, 10)) # training convNet.is_training = True convNet, conv_loss, conv_acc = train(convNet, criterion, sgd, data_train, max_epoch, batch_size, disp_freq) # testing convNet.is_training = False test(convNet, criterion, data_test, batch_size, disp_freq) # - plot_loss_and_acc({'ConvNet': [conv_loss, conv_acc]}) # --- # jupyter: # 
jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import csv import os import json import numpy as np import pandas as pd from pprint import pprint import random datasetSizeController = 200 src_Folder = "C:\\Users\\haris\\Documents\\Altmetric Data\\data\\" with open("RandomData.csv", mode = 'w', newline = '') as csvFile: fieldnames = ['altmetric_id', 'abstract', 'blog_post'] csvWriter = csv.DictWriter(csvFile, fieldnames = fieldnames) csvWriter.writeheader() filesCount = 0 filesTried = 0 # + with open("RandomData.csv", mode = 'a', newline = '') as csvFile: csvWriter = csv.writer(csvFile) random.seed(25) while(filesCount < datasetSizeController): rand = random.randint(10,846) json_Folder = src_Folder + str(rand) fileName = json_Folder + "\\" + str(random.choice(os.listdir(json_Folder))) try: with open(fileName) as json_file: json_data = json.load(json_file) #print("File opened") filesTried = filesTried + 1 altmetric_id = json_data['altmetric_id'] if ('abstract' not in json_data['citation']): #print("Debug1") continue else: #print("Debug2") abstract = json_data['citation']['abstract'] abstract = abstract.replace(',', '') if('blogs' not in json_data['posts']): #print("Debug3") continue else: print("Debug4") blog_post = json_data['posts']['blogs'][0]['summary'] blog_post = blog_post.replace(',', '') print(str(blog_post)) filesCount = filesCount + 1 csvWriter.writerow([altmetric_id, abstract, blog_post]) print("Files Tried: " + str(filesTried)) print("Files Read: " + str(filesCount)) #print(str(filesCount)) except: pass #print("CSV File Generated!") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd reviews = pd.read_csv('fandango_scores.csv') cols = ['FILM', 'RT_user_norm', 'Metacritic_user_nom', 'IMDB_norm', 'Fandango_Ratingvalue', 'Fandango_Stars'] norm_reviews = reviews[cols] print(norm_reviews[:1]) # + import matplotlib.pyplot as plt from numpy import arange #The Axes.bar() method has 2 required parameters, left and height. #We use the left parameter to specify the x coordinates of the left sides of the bar. #We use the height parameter to specify the height of each bar num_cols = ['RT_user_norm', 'Metacritic_user_nom', 'IMDB_norm', 'Fandango_Ratingvalue', 'Fandango_Stars'] bar_heights = norm_reviews.ix[0, num_cols].values print (bar_heights) bar_positions = arange(5) + 0.75 print (bar_positions) fig, ax = plt.subplots() ax.bar(bar_positions, bar_heights, 0.5) plt.show() # + #By default, matplotlib sets the x-axis tick labels to the integer values the bars #spanned on the x-axis (from 0 to 6). We only need tick labels on the x-axis where the bars are positioned. 
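# (Side note, not in the original notebook: `DataFrame.ix`, used above and below, has since been removed from pandas; on newer versions `norm_reviews.loc[0, num_cols]` is the equivalent selection.)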
#We can use Axes.set_xticks() to change the positions of the ticks to [1, 2, 3, 4, 5]: num_cols = ['RT_user_norm', 'Metacritic_user_nom', 'IMDB_norm', 'Fandango_Ratingvalue', 'Fandango_Stars'] bar_heights = norm_reviews.ix[0, num_cols].values bar_positions = arange(5) + 0.75 tick_positions = range(1,6) fig, ax = plt.subplots() ax.bar(bar_positions, bar_heights, 0.5) ax.set_xticks(tick_positions) ax.set_xticklabels(num_cols, rotation=45) ax.set_xlabel('Rating Source') ax.set_ylabel('Average Rating') ax.set_title('Average User Rating For Avengers: Age of Ultron (2015)') plt.show() # + import matplotlib.pyplot as plt from numpy import arange num_cols = ['RT_user_norm', 'Metacritic_user_nom', 'IMDB_norm', 'Fandango_Ratingvalue', 'Fandango_Stars'] bar_widths = norm_reviews.ix[0, num_cols].values bar_positions = arange(5) + 0.75 tick_positions = range(1,6) fig, ax = plt.subplots() ax.barh(bar_positions, bar_widths, 0.5) ax.set_yticks(tick_positions) ax.set_yticklabels(num_cols) ax.set_ylabel('Rating Source') ax.set_xlabel('Average Rating') ax.set_title('Average User Rating For Avengers: Age of Ultron (2015)') plt.show() # - #Let's look at a plot that can help us visualize many points. fig, ax = plt.subplots() ax.scatter(norm_reviews['Fandango_Ratingvalue'], norm_reviews['RT_user_norm']) ax.set_xlabel('Fandango') ax.set_ylabel('Rotten Tomatoes') plt.show() #Switching Axes fig = plt.figure(figsize=(5,10)) ax1 = fig.add_subplot(2,1,1) ax2 = fig.add_subplot(2,1,2) ax1.scatter(norm_reviews['Fandango_Ratingvalue'], norm_reviews['RT_user_norm']) ax1.set_xlabel('Fandango') ax1.set_ylabel('Rotten Tomatoes') ax2.scatter(norm_reviews['RT_user_norm'], norm_reviews['Fandango_Ratingvalue']) ax2.set_xlabel('Rotten Tomatoes') ax2.set_ylabel('Fandango') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # SVM Kernels # ## Data Use Agreements # The data used for this project were provided in part by OASIS and ADNI. # # OASIS-3: Principal Investigators: , , ; NIH P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, R01 EB009352. AV-45 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly. # # Data collection for this project was done through the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. 
The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. # + import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, classification_report import cvxopt import cvxopt.solvers from tqdm.notebook import tqdm np.set_printoptions(linewidth=200, suppress=True, formatter={'float': lambda x: "{0:0.3f}".format(x)}) # - df = pd.read_csv('../../Data/OASIS/oasis_3.csv') print(df.shape) df.head() # ## Data Preprocessing df = df.dropna(axis=1, how='all') # Drop any empty columns df = df.dropna(axis=0, how='any') # Drop any rows with empty values df = df.rename(columns={'id':'Freesurfer ID', 'dx1':'Diagnosis', 'TOTAL_HIPPOCAMPUS_VOLUME':'TotalHippocampusVol'}) # Rename columns df = df.drop_duplicates(subset='Subject', keep='first') # Keep only the first visit; this is possible because # df is sorted by age df = df.reset_index(drop=True) # Reset the index df = df.set_index('Subject') cols = df.columns.tolist() cols[2], cols[4] = cols[4], cols[2] df = df[cols] df.loc[df['cdr'] < 0.5, 'Diagnosis'] = 'control' df.loc[~(df['cdr'] < 0.5), 'Diagnosis'] = 'dementia' df.loc[df['Diagnosis'] == 'control', 'Diagnosis'] = -1 df.loc[df['Diagnosis'] == 'dementia', 'Diagnosis'] = 1 df['M/F'].replace(['M','F'], [0,1], inplace=True) df = df.drop(['MR ID', 'Freesurfer ID', 'cdr'], axis=1) # Drop categorical and redundant columns print(df.shape) df.head() # # PCA class PCA: """ Applies PCA and also scales the data. Scaling and all transformations will be done based on the dataset that is passed into the fit() method. """ def __init__(self, top_k=None): """ Intializes the PCA class :param top_k: the number used when PCA picks the top k eigenvectors to use, None if the PCA will decide by itself :principal_components: will store the principal components to be used :fitted: boolean, storing whether the pca has been fitted or not yet """ self.top_k = top_k self.principal_components = None self.fitted = False def fit(self, X, threshold=0.95): """ Fits the PCA by finding and storing the top k eigenvectors :param X: n x d dataset, where d represents the number of features. 
:param standardized: boolean, tells if the dataset X is standardized or not :param threshold: the explained variance threshold to select the top_k value """ # Exception and set fitted to true if not if self.fitted == True: raise Exception('This PCA has already been fitted.') self.fitted = True # Standardize/scale the data and save the means self.mu = np.mean(X) self.sigma = np.std(X) X = (X-self.mu)/self.sigma # Create the covariance matrix cov = np.cov(X.T) # Perform eigendecomposition on the covariance matrix eig_vals, eig_vecs = np.linalg.eig(cov) variance = 0 # If top_k value was inputted, we compute it ourselves based on the data if self.top_k == None: # Find the best number of components that pass the threshold if threshold > 1: threshold /= 100 total = sum(eig_vals) explained_variance = [(i / total) for i in sorted(eig_vals, reverse=True)] cumulative_explained_variance = np.cumsum(explained_variance) self.top_k = np.argmax(cumulative_explained_variance > threshold) + 1 variance = cumulative_explained_variance[self.top_k-1] # Pair up the eigenvectors and eigenvalues as (eigenvalue, eigenvector) tuples eig_pairs = [(np.abs(eig_vals[i]), eig_vecs.T[i,:]) for i in range(len(eig_vals))] # Sort the tuples in descending order eig_pairs.sort(key=lambda x: x[0], reverse=True) self.principal_components = np.array([pair[1] for pair in eig_pairs[0:self.top_k]]) print(f"-- PCA succesfully fitted on dataset. Dimensions reduced from {X.shape[1]} to {self.top_k}, capturing {variance*100:.2f}% of the explained variance. --") def transform(self, X): """ Project the data onto the new feature space :param X: n x d dataset, where d represents the number of features. :param standardized: boolean, tells if the dataset X is standardized or not :return X_pca: the resulting transformed dataset """ # Checks if fit() has been called if self.fitted == False: raise Exception('You must call fit() prior to calling transform.') # Scale the data based on the mu that was saved X = (X-self.mu)/self.sigma # Apply PCA transformation X_pca = np.dot(X, self.principal_components.T) print(f"-- PCA transformation succesfully applied on dataset. Dimensions reduced from {X.shape[1]} to {X_pca.shape[1]}. 
--") return X_pca def fit_transform(self, X, threshold=0.95): self.fit(X, threshold=threshold) return self.transform(X) # ## Kernel SVM class SVM(): # Kernels # https://scikit-learn.org/stable/modules/metrics.html#linear-kernel def linear(self, xi, xj): return np.dot(xi.T, xj) def polynomial(self, xi, xj): return (self.gamma * np.dot(xi.T, xj) + self.c0) ** self.p def sigmoid(self, xi, xj): return np.tanh(self.gamma * np.dot(xi.T, xj) + self.c0) def rbf(self, xi, xj): return np.exp(-1 * self.gamma * np.dot(xi - xj, xi - xj)) # Default parameters are based on the sklearn SVC # https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html def __init__(self, kernel='linear', gamma=None, C=1.0, coef0=0.0, degree=3, verbose=False, max_iter=1000): # Setting the parameters for kernels self.gamma = gamma self.c0 = coef0 # c0 parameter for poly and sigmoid self.p = degree # degree for poly # Setting the C parameter for soft/hard margin SVM # if C is None, it's hard margin, else it's soft # the SVM's 'softness' decreases as C increases self.C = C # Setting parameters for SVM in general self.verbose = verbose self.max_iters = max_iter # Setting the correct kernel self.kernel = kernel if kernel == 'linear': self.kf = self.linear elif kernel == 'poly': self.kf = self.polynomial elif kernel == 'sigmoid': self.kf = self.sigmoid elif kernel == 'rbf': self.kf = self.rbf else: raise Exception(f'Invalid kernel inputted: {kernel}') # Indicates if the SVM is fitted or not self.fitted = False # Set the initial SVM variables self._sv_X = None self._sv_y = None self._alphas = None self.w = None self.b = None def fit(self, X, y): """ cvxopt solves the problem: min F(x) = 1/2 x.T * P * x + q.T * x subject to: Gx <= h Ax = b if we let H_ij = yi * yj * xi . xj, our problem becomes: min F(x) = 1/2 x.T * H * x - 1.T * x subject to: -lambda_i <= 0 lambda_i <= C y.T * x = 0 substituting this into the cvxopt problem, we now know that: K = kernel function, for linear its xi . xj P = H, a matrix of size n_samples, n_samples this is calculated by doing yi * yj * K q = a vector of size n_samples, filled with -1 G = diagonal matrix of -1s and 1s h = vector filled with zeros and C A = the labels y, of size n_samples b = the scalar value 0 """ X = np.array(X) y = np.array(y) n_samples, n_features = X.shape if self.gamma == None: self.gamma = 1.0/(n_features * X.var()) # Calculate the gram matrix for kernel K = np.zeros(shape=(n_samples, n_samples)) for i in range(n_samples): for j in range(n_samples): K[i, j] = self.kf(X[i], X[j]) P = cvxopt.matrix(np.outer(y, y) * K) q = cvxopt.matrix(-1 * np.ones(n_samples)) A = cvxopt.matrix(y, (1, n_samples)) b = cvxopt.matrix(0.0) # Sets the value of G and h based on C if self.C is None: # Hard margin SVM G = cvxopt.matrix(-np.eye(n_samples)) h = cvxopt.matrix(np.zeros(n_samples)) else: # Soft margin SVM G = cvxopt.matrix(np.vstack((-np.eye(n_samples), np.eye(n_samples)))) h = cvxopt.matrix(np.hstack((np.zeros(n_samples), np.ones(n_samples) * self.C))) # Set parameters for cvxopt cvxopt.solvers.options['maxiters'] = self.max_iters if self.verbose == False: cvxopt.solvers.options['show_progress'] = False # Solve the QP problem with cvxopt try: solution = cvxopt.solvers.qp(P, q, G, h, A, b) except Exception as e: raise Exception(f"Impossible to fit. 
CVXOPT raised exception: {e}") return # Pull out the Lagrange multipliers alphas = np.ravel(solution['x']) # Get the indices of the support vectors, which will have Lagrange multipliers that are greater than zero sv = alphas > 1e-5 self._sv_X = X[sv] self._sv_y = y[sv] self._alphas = alphas[sv] # Compute the bias/intercept # b = 1/N_s * sum_s(y_s - sum_m(alphas_m * y_m * K(x_m, x_s))) sv_idx = np.arange(len(alphas))[sv] self.b = 0 for i in range(len(self._alphas)): self.b += self._sv_y[i] self.b -= np.sum(self._alphas * self._sv_y * K[sv_idx[i], sv]) self.b /= len(self._alphas) # Compute the weight vector if kernel is linear # w = sum(alphas_i * y_i * x_i) if self.kernel == 'linear': self.w = np.zeros(n_features) for i in range(len(self._alphas)): self.w += self._alphas[i] * self._sv_X[i] * self._sv_y[i] else: self.w = None # Print results if self.verbose: print(f'{len(self._alphas)} support vectors found out of {n_samples} data points') print(f'Calculated the intercept/bias: b = {self.b}') print(f'Calculated the weight vector for {self.kernel} kernel: w = {self.w}') self.fitted = True def project(self, X): # Check if SVM has been fitted yet if self.fitted == False: raise Exception('SVM is not fitted') # Calculate the projection if self.kernel == 'linear': # This is when self.w is not none return np.dot(X, self.w) + self.b else: # The rest of the kernels predictions = np.zeros(len(X)) for i in range(len(X)): s = 0 for alphas, sv_y, sv_X in zip(self._alphas, self._sv_y, self._sv_X): s += alphas * sv_y * self.kf(X[i], sv_X) predictions[i] = s return predictions + self.b def predict(self, X): # Predict the class of X # yi = sign(alpha_i * y_i * K(x_i, x) + b return np.sign(self.project(X)) # ## Steps in applying it to data # 1. Split the data into **train** and **test** sets # 2. Fit PCA and/or z-score scaling on the **train** set # 3. Apply transformations to the **train** set and the **test** set, using the fitted PCA/z-score from **train** set # 4. Train SVM using the **train** set # 5. 
Apply SVM to **test** set to get the accuracy and/or results # #### Step 1 - Split the data into **train** and **test** set X = df.drop(['Diagnosis'], axis=1) y = df['Diagnosis'].astype('d') # cast it to type 'd' for cvxopt X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2) print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) # #### Step 2 - Fit PCA and/or z-score scaling on the **train** set # The PCA also does the scaling pca = PCA() pca.fit(X_train, threshold=.99) # #### Step 3 - Apply transformations to the **train** set and the **test** set, using the fitted PCA/z-score from **train** set X_train = pca.transform(X_train) X_test = pca.transform(X_test) # #### Step 4 - Train SVM using the **train** set # #### Step 5 - Apply SVM to **test** set to get the accuracy and/or results # + # Linear kernel svc = SVM(verbose=True, kernel='linear') svc.fit(X_train, y_train) preds = svc.predict(X_test) correct_preds = np.sum(preds == y_test) print(f'Accuracy: {correct_preds/y_test.size}') np_y_test = np.array(y_test).astype(int) mat = confusion_matrix(np_y_test, preds) sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False, cmap='Blues', xticklabels=['Control', 'Dementia'], yticklabels=['Control', 'Dementia']) plt.xlabel('predicted label') plt.ylabel('true label') # + # Poly kernel svc = SVM(verbose=True, kernel='poly') svc.fit(X_train, y_train) preds = svc.predict(X_test) correct_preds = np.sum(preds == y_test) print(f'Accuracy: {correct_preds/y_test.size}') np_y_test = np.array(y_test).astype(int) mat = confusion_matrix(np_y_test, preds) sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False, cmap='Blues', xticklabels=['Control', 'Dementia'], yticklabels=['Control', 'Dementia']) plt.xlabel('predicted label') plt.ylabel('true label'); # + # rbf kernel svc = SVM(verbose=True, kernel='rbf') svc.fit(X_train, y_train) preds = svc.predict(X_test) correct_preds = np.sum(preds == y_test) print(f'Accuracy: {correct_preds/y_test.size}') np_y_test = np.array(y_test).astype(int) mat = confusion_matrix(np_y_test, preds) sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False, cmap='Blues', xticklabels=['Control', 'Dementia'], yticklabels=['Control', 'Dementia']) plt.xlabel('predicted label') plt.ylabel('true label'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import print_function import nibabel as nib import sklearn import numpy as np import pandas as pd import matplotlib.pyplot as plt import os import seaborn as sns from sklearn.externals import joblib from sklearn.preprocessing import PolynomialFeatures, StandardScaler from sklearn.linear_model import LinearRegression from sklearn.model_selection import KFold from sklearn.pipeline import make_pipeline from sklearn.model_selection import cross_val_score import statsmodels from statsmodels.regression.linear_model import OLS # includes AIC, BIC import tqdm # %matplotlib inline sns.set() sns.set_style("ticks") # - # ### Load brain data mvp_VBM = joblib.load('mvp/mvp_vbm.jl') mvp_TBSS = joblib.load('mvp/mvp_tbss.jl') # ... 
and brain size # + brain_size = pd.read_csv('./mvp/PIOP1_behav_2017_MVCA_with_brainsize.tsv', sep='\t', index_col=0) # Remove subjects without known brain size (or otherwise excluded from dataset) include_subs = np.in1d(brain_size.index.values, mvp_VBM.subject_list) brain_size = brain_size.loc[include_subs] brain_size_VBM = brain_size.brain_size_GM.values brain_size_TBSS = brain_size.brain_size_WM.values # - # #### Calculate correlations between voxels & brain size # Check-out linear, cubic, and quadratic correlation if os.path.isfile('./cache/r2s_VBM.tsv'): r2s_VBM = pd.read_csv('./cache/r2s_VBM.tsv', sep='\t', index_col=0).values else: # Calculate R2 by-voxel with polynomial models of degrees 1, 2, 3 (linear, quadratic, cubic) r2s_VBM = np.empty((mvp_VBM.X.shape[1], 3)) # col 1 = Linear, col 2 = Poly2, col 3 = Poly3 # make feature vecs X_linear = PolynomialFeatures(degree=1).fit_transform(brain_size_VBM.reshape(-1,1)) X_poly2 = PolynomialFeatures(degree=2).fit_transform(brain_size_VBM.reshape(-1,1)) X_poly3 = PolynomialFeatures(degree=3).fit_transform(brain_size_VBM.reshape(-1,1)) for i in tqdm.tqdm_notebook(range(r2s_VBM.shape[0])): r2s_VBM[i,0] = OLS(mvp_VBM.X[:,i], X_linear).fit().rsquared r2s_VBM[i,1] = OLS(mvp_VBM.X[:,i], X_poly2).fit().rsquared r2s_VBM[i,2] = OLS(mvp_VBM.X[:,i], X_poly3).fit().rsquared # save to disk pd.DataFrame(r2s_VBM).to_csv('./cache/r2s_VBM.tsv', sep='\t') # Repeat for TBSS if os.path.isfile('./cache/r2s_TBSS.tsv'): r2s_TBSS = pd.read_csv('./cache/r2s_TBSS.tsv', sep='\t', index_col=0).values else: r2s_TBSS = np.empty((mvp_TBSS.X.shape[1], 3)) # make feature vecs X_linear = PolynomialFeatures(degree=1).fit_transform(brain_size_TBSS.reshape(-1,1)) X_poly2 = PolynomialFeatures(degree=2).fit_transform(brain_size_TBSS.reshape(-1,1)) X_poly3 = PolynomialFeatures(degree=3).fit_transform(brain_size_TBSS.reshape(-1,1)) for i in tqdm.tqdm_notebook(range(r2s_TBSS.shape[0])): r2s_TBSS[i,0] = OLS(mvp_TBSS.X[:,i], X_linear).fit().rsquared r2s_TBSS[i,1] = OLS(mvp_TBSS.X[:,i], X_poly2).fit().rsquared r2s_TBSS[i,2] = OLS(mvp_TBSS.X[:,i], X_poly3).fit().rsquared # save to disk pd.DataFrame(r2s_TBSS).to_csv('./cache/r2s_TBSS.tsv', sep='\t') # #### Function for plotting # + def plot_voxel(voxel_idx, data, brain_size, ax=None, add_title=False, scale_bs=False, **kwargs): try: len(voxel_idx) except: voxel_idx = [voxel_idx] # Useful for plotting regression lines later # scale brain size first if scale_bs: brain_size = StandardScaler().fit_transform(brain_size.reshape(-1,1)) bs_range = np.linspace(np.min(brain_size), np.max(brain_size), num=500) bs_range_poly2 = PolynomialFeatures(degree=2).fit_transform(bs_range.reshape(-1,1)) bs_range_poly3 = PolynomialFeatures(degree=3).fit_transform(bs_range.reshape(-1,1)) bs_range_intercept = PolynomialFeatures(degree=1).fit_transform(bs_range.reshape(-1,1)) model_names = ['Linear', 'Poly2', 'Poly3'] X_linear = PolynomialFeatures(degree=1).fit_transform(brain_size.reshape(-1,1)) X_poly2 = PolynomialFeatures(degree=2).fit_transform(brain_size.reshape(-1,1)) X_poly3 = PolynomialFeatures(degree=3).fit_transform(brain_size.reshape(-1,1)) n_voxels = len(voxel_idx) nrow = int(np.ceil(n_voxels/2.)) ncol = 2 if n_voxels > 1 else 1 if ax is None: # create fig and axis if not passed to function f, axis = plt.subplots(nrow, ncol) else: axis = ax for i, idx in enumerate(voxel_idx): # Get data y = data[:,idx] # Fit overall model (no CV) lr_linear = OLS(y, X_linear).fit() lr_poly2 = OLS(y, X_poly2).fit() lr_poly3 = OLS(y, X_poly3).fit() # Get axis this_ax = 
axis if n_voxels == 1 else plt.subplot(nrow, ncol, i+1) sns.regplot(x=X_linear[:,1], y=y, ax=this_ax, dropna=True, fit_reg=False, lowess=False, scatter=True, **kwargs) this_ax.plot(bs_range, lr_linear.predict(bs_range_intercept), 'r-', label='Linear') this_ax.plot(bs_range, lr_poly2.predict(bs_range_poly2), 'b-', label='Quadratic') this_ax.plot(bs_range, lr_poly3.predict(bs_range_poly3), 'y-', label='Cubic') # this_ax.legend() if add_title: this_ax.set_title('Voxel %d' %i) if scale_bs: this_ax.set_xlabel('Brain size (scaled)') else: this_ax.set_xlabel('Brain size') this_ax.set_ylabel('Intensity') if ax is None: return f, axis # - # #### Inner function for cross-validation. This does the work. def do_crossval(voxel_idx, data, brain_size, n_iter=100, n_fold=10): # make sure voxel_idx has a len try: len(voxel_idx) except: voxel_idx = [voxel_idx] # Create feature vectors out of brain size X_linear = PolynomialFeatures(degree=1).fit_transform(brain_size.reshape(-1,1)) X_poly2 = PolynomialFeatures(degree=2).fit_transform(brain_size.reshape(-1,1)) X_poly3 = PolynomialFeatures(degree=3).fit_transform(brain_size.reshape(-1,1)) # Output dataframes for crossvalidation and BIC results_CV = pd.DataFrame(columns=['voxel_idx', 'iter', 'Linear', 'Poly2', 'Poly3']) results_CV['voxel_idx'] = np.repeat(voxel_idx, n_iter) results_CV['iter'] = np.tile(np.arange(n_iter), len(voxel_idx)) results_BIC = pd.DataFrame(columns=['voxel_idx', 'Linear', 'Poly2', 'Poly3']) results_BIC['voxel_idx'] = voxel_idx for i, idx in enumerate(voxel_idx): # Get target data (vbm intensity) y = data[:,idx] # Make pipeline pipe = make_pipeline(StandardScaler(), LinearRegression()) Xdict = {'Linear': X_linear, 'Poly2': X_poly2, 'Poly3': X_poly3} for iteration in range(n_iter): if n_iter > 10: if iteration % int(n_iter/10) == 0: print('.', end='') # KFold inside loop for shuffling cv = KFold(n_splits=n_fold, random_state=iteration, shuffle=True) # get row idx in results DataFrame row_idx = (results_CV['voxel_idx']==idx)&(results_CV['iter']==iteration) for model_type, X in Xdict.items(): r2 = cross_val_score(pipe, X=X, y=y, cv=cv).mean() results_CV.loc[row_idx, model_type] = cross_val_score(pipe, X=X, y=y, cv=cv).mean() # add BIC info # Fit overall model (no CV) lr_linear = OLS(y, X_linear).fit() lr_poly2 = OLS(y, X_poly2).fit() lr_poly3 = OLS(y, X_poly3).fit() # get BICs of fitted models, add to output dataframe bics = [lr_linear.bic, lr_poly2.bic, lr_poly3.bic] results_BIC.loc[results_BIC['voxel_idx']==idx, 'Linear'] = lr_linear.bic results_BIC.loc[results_BIC['voxel_idx']==idx, 'Poly2'] = lr_poly2.bic results_BIC.loc[results_BIC['voxel_idx']==idx, 'Poly3'] = lr_poly3.bic return [results_CV, results_BIC] # #### Wrapper function for multiprocessing # + import multiprocessing from functools import partial import tqdm def run_CV_MP(data, voxel_idx, brain_size, n_iter, n_processes=10, n_fold=10, pool=None): if pool is None: private_pool = True pool = multiprocessing.Pool(processes=n_processes) else: private_pool = False results_all_vox = [] with tqdm.tqdm(total=len(voxel_idx)) as pbar: for i, res in tqdm.tqdm(enumerate(pool.imap_unordered(partial(do_crossval, data=data, brain_size=brain_size, n_fold=10, n_iter=n_iter), voxel_idx))): results_all_vox.append(res) if private_pool: pool.terminate() return results_all_vox # - # ## Cross-validation # + # How many iterations do we want? 
Iterations are repeats of KFold CV with random partitioning of the data, # to ensure that the results are not dependent on the (random) partitioning n_iter = 50 # How many voxels should we take? n_voxels = 500 # # What modality are we in? # modality = 'VBM' # # Which voxels are selected? # voxel_type = 1 # 0 = linear, 1 = poly2, 2 = poly3 # - # #### Cross-validate for all combinations of modality & relation type # 2 modalitites (TBSS, VBM) x 4 relation types (linear, quadratic, cubic, random) = 8 options in total. Each takes about an hour here # + # set-up pool here so we can terminate if necessary pool = multiprocessing.Pool(processes=10) for modality in ['VBM', 'TBSS']: # select right mvp & brain size mvp = mvp_VBM if modality == 'VBM' else mvp_TBSS brain_size = brain_size_VBM if modality == 'VBM' else brain_size_TBSS for relation_type in [0, 1, 2, 3]: print('Processing %s, type %d' %(modality, relation_type)) if relation_type < 3: # linear, quadratic, cubic relations? corrs = r2s_VBM[:,relation_type] if modality == 'VBM' else r2s_TBSS[:,relation_type] vox_idx = corrs.argsort()[-n_voxels:] # these are the voxel idx with highest r2 else: # random voxels vox_idx = np.random.choice(np.arange(mvp.X.shape[1]), replace=False, size=n_voxels) # Run multiprocessed output = run_CV_MP(data=mvp.X, voxel_idx=vox_idx, brain_size=brain_size, n_iter=n_iter, pool=pool) # Save results tmp = pd.concat([x[0] for x in output], ignore_index=True) tmp.to_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, relation_type, n_voxels), sep='\t') # - # Load all results, make Supplementary Figures S7 (VBM) and S9 (TBSS) modality = 'VBM' mvp = mvp_VBM if modality == 'VBM' else mvp_TBSS brain_size = brain_size_VBM if modality == 'VBM' else brain_size_TBSS results_linear_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, 0, n_voxels), sep='\t') results_poly2_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, 1, n_voxels), sep='\t') results_poly3_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, 2, n_voxels), sep='\t') results_random_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, 3, n_voxels), sep='\t') # + sns.set_style('ticks') import warnings warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=UserWarning) # set seed seed = 2 if modality == 'TBSS' else 4 np.random.seed(seed) f, ax = plt.subplots(2, 4) to_plot = [{'Random voxels': results_random_vox}, {'Cubic correlating voxels': results_poly2_vox}, {'Quadratic correlating voxels': results_poly3_vox}, {'Linearly correlating voxels': results_linear_vox}] labels = ['Linear-Quadratic', 'Linear-Cubic'] for col, d in enumerate(to_plot): title = list(d.keys())[0] results = d[title] # For every voxel & iter, how much better does the linear model do compared to the polynomial models? 
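# (Added note: each difference below is a difference of cross-validated R^2; positive values mean the linear model generalizes at least as well as the polynomial, negative values mean that voxel favours the higher-order fit.)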
results['Linear-Poly2'] = results['Linear']-results['Poly2'] results['Linear-Poly3'] = results['Linear']-results['Poly3'] # Mean over iterations to get results by voxel results_by_voxel = results.groupby('voxel_idx')[['Linear-Poly2', 'Linear-Poly3']].mean() # plot histogram this_ax = ax[0, col] this_ax.axvline(x=0, color='k', ls='--', label='$\Delta R^2=0$') # add line at 0 for i, model in enumerate(['Linear-Poly2', 'Linear-Poly3']): sns.distplot(results_by_voxel[model], ax=this_ax, kde=True, hist=True, label=labels[i], bins=n_voxels/10) # Set some stuff (labels, title, legend) this_ax.set_xlabel('$\Delta R^2$') this_ax.set_ylabel('Density') this_ax.set_title(title) if col == 0 and modality == 'VBM': this_ax.set_xlim(-.1, .1) # Select random voxel plot_vox_idx = np.random.choice(results_by_voxel.index.values, size=1) # plot this voxel's correlation with brain size, plus the three models plot_voxel(plot_vox_idx, mvp.X, brain_size, ax=ax[1, col], color=sns.color_palette()[0], scale_bs=True) # scale bs here for axis ticks readability # add text # increase ylm by 10% ax[1,col].set_ylim(ax[1,col].get_ylim()[0], ax[1,col].get_ylim()[0]+(ax[1,col].get_ylim()[1]-ax[1,col].get_ylim()[0])*1.1) r2_lq = results_by_voxel.loc[plot_vox_idx, 'Linear-Poly2'] r2_lc = results_by_voxel.loc[plot_vox_idx, 'Linear-Poly3'] ax[1,col].text(0.025, .975,'$\Delta R^2_{Linear-Quadratic}=%.3f$\n$\Delta R^2_{Linear-Cubic}=%.3f$' %(r2_lq, r2_lc), horizontalalignment='left', verticalalignment='top', transform=ax[1,col].transAxes) # add grid this_ax.grid(ls='--', lw=.5) ax[1,col].grid(ls='--', lw=.5) if col == len(to_plot)-1: this_ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax[1, col].legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) sns.despine() f.set_size_inches(14,7) f.tight_layout() f.savefig('./figs/brain_size_vs_%s_intensity.png' %modality, bbox_type='tight', dpi=200) # - # And descriptives? # + to_plot = [{'Random voxels': results_random_vox}, {'Cubic correlating voxels': results_poly2_vox}, {'Quadratic correlating voxels': results_poly3_vox}, {'Linearly correlating voxels': results_linear_vox}] labels = ['Linear-Quadratic', 'Linear-Cubic'] for col, d in enumerate(to_plot): title = list(d.keys())[0] results = d[title] # For every voxel & iter, how much better does the linear model do compared to the polynomial models? results['Linear-Poly2'] = results['Linear']-results['Poly2'] results['Linear-Poly3'] = results['Linear']-results['Poly3'] # Mean over iterations to get results by voxel results_by_voxel = results.groupby('voxel_idx')[['Linear-Poly2', 'Linear-Poly3']].mean() print('For the %s:\n-linear models have a mean +%.3f R^2 (SD %.3f, min: %.3f) than quadratic models, ' '\n-and %.3f (SD %.3f, max: %.3f) over cubic models. 
\n-A proportion of %.3f of voxels prefers a quadratic model, ' 'and a proportion of %.3f prefers a cubic model\n\n'%(title, results_by_voxel['Linear-Poly2'].mean(), results_by_voxel['Linear-Poly2'].std(), results_by_voxel['Linear-Poly2'].min(), results_by_voxel['Linear-Poly3'].mean(), results_by_voxel['Linear-Poly3'].std(), results_by_voxel['Linear-Poly3'].min(), np.mean(results_by_voxel['Linear-Poly2']<0), np.mean(results_by_voxel['Linear-Poly3']<0))) # - # Supplementary Figure S8 # + import warnings warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=UserWarning) f, ax = plt.subplots(1, 3) to_plot = [{'Random voxels': results_random_vox}, {'Cubic correlating voxels': results_poly2_vox}] labels = ['Linear-Quadratic', 'Linear-Cubic'] for col, d in enumerate(to_plot): title = list(d.keys())[0] results = d[title] # For every voxel & iter, how much better does the linear model do compared to the polynomial models? results['Linear-Poly2'] = results['Linear']-results['Poly2'] results['Linear-Poly3'] = results['Linear']-results['Poly3'] # Mean over iterations to get results by voxel results_by_voxel = results.groupby('voxel_idx')[['Linear-Poly2', 'Linear-Poly3']].mean() # plot histogram this_ax = ax[col] # Select voxels where poly3 wins by largest margin, for plotting plot_vox = np.argmin(results_by_voxel['Linear-Poly3']) print(results_by_voxel.loc[plot_vox, 'Linear-Poly3']) # plot this voxel's correlation with brain size, plus the three models plot_voxel(plot_vox, mvp.X, brain_size, ax=ax[col], color=sns.color_palette()[0]) ax[col].legend() ax[col].grid(ls='--', lw=.5) # Add a voxel where a linear model fits nicely results = results_linear_vox results_by_voxel = results.groupby('voxel_idx')[['Linear-Poly2', 'Linear-Poly3']].mean() plot_vox = np.argmax(results_by_voxel['Linear-Poly3']) # plot this voxel's correlation with brain size, plus the three models plot_voxel(plot_vox, mvp.X, brain_size, ax=ax[2], color=sns.color_palette()[0]) ax[2].legend() ax[2].grid(ls='--', lw=.5) print(results_by_voxel.loc[plot_vox, 'Linear-Poly3']) sns.despine() f.set_size_inches(14,3.5) f.tight_layout() f.savefig('./figs/bs_vs_vox.png', bbox_type='tight', dpi=200) # - # ### Conclusions # # Together, the results show that a linear model is a good approximation of the relation between VBM/TBSS voxel intensity and brain size # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import rdkit from rdkit import Chem from rdkit.Chem import Draw from rdkit.Chem.EState import Fingerprinter from rdkit.Chem import Descriptors from rdkit.Chem import rdFMCS from rdkit.Chem.rdmolops import RDKFingerprint from rdkit.Chem.Fingerprints import FingerprintMols from rdkit import DataStructs from rdkit.Avalon.pyAvalonTools import GetAvalonFP import numpy as np import pandas as pd import pubchempy as pc # + def get_similarity(a,b): return DataStructs.FingerprintSimilarity(a, b, metric=DataStructs.TanimotoSimilarity) def user_input_to_smiles(input_cid): """Takes a PubChem CID input and outputs the associated SMILES.""" for compound in pc.get_compounds(input_cid): return compound.isomeric_smiles # import ellie's function def input_to_compound(PubChemCID:str, master_df:pd.DataFrame): """ input_to_compound() calculates the distance from input compound to all products and create a new column. 
Additionally, calculates the average distance between input compound and products for each promiscuous enzyme. Args: PubChemSID (str): contains PubChemSID to be queried master_df (pd.Dataframe): pre-calculated model dataframe Returns: pd.Dataframe: master_df with two new columns 1) that contains distance from input compound to all products 2) that contains avg distance from input compound to products for each promiscuous enzyme """ input_SMILES = user_input_to_smiles(PubChemCID) #Ellie's function input_fingerprint = FingerprintMols.FingerprintMol(Chem.rdmolfiles.MolFromSmiles(input_SMILES)) dist_from_input = [] avg_dist_from_input = [] #get individual distance between input and all compounds for i in master_df['Fingerprint']: dist_from_input.append(get_similarity(i,input_fingerprint)) #get average distance between input and compounds for each promiscuous enzyme for entry in np.unique(master_df['entry']): #filter dataframe with same enzyme enzyme_split = master_df[master_df['entry']==entry] number_of_compounds = len(enzyme_split) total_distance = 0 for j in enzyme_split['Fingerprint']: total_distance += get_similarity(j,input_fingerprint) #get the average distance avg_distance = total_distance/number_of_compounds #create a list that contains number of compounds avg_list = [avg_distance]*number_of_compounds avg_dist_from_input += avg_list master_df['dist_from_input'] = dist_from_input master_df['avg_dist_from_input'] = avg_dist_from_input return master_df # - input_to_compound(243, master_df) master_df = pd.read_csv("../datasets/MASTER_DF.csv") df.head(10) # + # this dataframe does not makes sense! but I just wanted to see my above function works! master_avg_dist = [] for entry in np.unique(df.entry): enzyme_split = df[df['entry']==entry] number_of_compounds = len(enzyme_split) distance = 0 for j in enzyme_split['PubChem']: distance += j avgdist = distance/number_of_compounds avgdistlist = [avgdist]*number_of_compounds master_avg_dist += avgdistlist df['master_avg_dist'] = master_avg_dist df # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + deletable=true editable=true import numpy as np import copy import bayes_logistic as bl import pandas as pd # + deletable=true editable=true # Download and process data url="http://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/parkinsons.data" df = pd.read_csv(url) y = df.values[:, 17] y = y.astype(int) X = np.delete(df.values, 17, 1) X = np.delete(X, 0, 1) n_samples, n_features = X.shape # Add bias column to the feature matrix B = np.ones((n_samples, n_features + 1)) B[:, 1:] = X X = B # + deletable=true editable=true # Perform feature scaling using mean normalization for col in range(1, n_features): v = X[:, col] mean = v.mean() std = v.std() X[:, col] = (X[:, col] - mean)/std # + deletable=true editable=true # The data is divided into training and test sets TRAINING_PERCENTAGE = 0.7 TEST_PERCENTAGE = 0.3 training_cnt = int(n_samples*TRAINING_PERCENTAGE) training_X = X[:training_cnt,:] training_y = y[:training_cnt] test_cnt = int(n_samples*TEST_PERCENTAGE) test_X = X[training_cnt:,:] test_y = y[training_cnt:] # + deletable=true editable=true # Train the model w_prior = np.zeros(n_features + 1) H_prior = np.diag(np.ones(n_features + 1))*0.001 GD_BATCH_SIZE = training_cnt ITERATION_CNT = 5 w = training_X.shape[1] w_prior = np.zeros(w) H_prior = 
np.diag(np.ones(w))*0.001 for i in range(0, ITERATION_CNT): for idx in range(0, training_cnt, GD_BATCH_SIZE): batch_size = GD_BATCH_SIZE if (idx + GD_BATCH_SIZE) < training_cnt else training_cnt - idx w_posterior, H_posterior = bl.fit_bayes_logistic(training_y[idx:batch_size], training_X[idx:batch_size,:], w_prior, H_prior, solver = 'BFGS') w_prior = copy.copy(w_posterior) H_prior = copy.copy(H_posterior) w_fit = w_prior H_fit = H_prior # + deletable=true editable=true # Perform Test y_cnt = 0 test_probs = bl.bayes_logistic_prob(test_X, w_fit, H_fit) prediction_cnt = 0 for idx in range(0, test_cnt): if test_probs[idx] > 0.5 and test_y[idx] == 1: prediction_cnt += 1 y_cnt += 1 prediction_accuracy = (100.0*prediction_cnt)/y_cnt print "Prediction Accuracy for test set %.02f" % prediction_accuracy # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Interoperability with cupy # [cupy](https://cupy.dev) is another GPU-acceleration library that allows processing images. To make the best out of GPUs, we demonstrate here how cupy and clesperanto can be combined. import numpy as np import cupy as cp import cupyx.scipy.ndimage as ndi import pyclesperanto_prototype as cle from skimage.io import imread # Let's start with a numpy-array and send it to cupy. np_data = imread('../../data/Haase_MRT_tfl3d1.tif') np_data.shape cp_data = cp.asarray(np_data) cp_data.shape # Next, we can apply a filter to the image in cupy. cp_filtered = ndi.gaussian_filter(cp_data, sigma=5) cp_filtered.shape # Just as an example, we can now threshold the image using `threshold_otsu` which is provided by clesperanto but not by cupy. cl_binary = cle.threshold_otsu(cp_filtered) cl_binary.shape # clesperanto also comes with a function for visualizing data print(cl_binary.min()) print(cl_binary.max()) cle.imshow(cl_binary) # In order to get the image back to cupy, we need to do this: cu_binary = cp.asarray(cl_binary) cu_edges = ndi.sobel(cu_binary, output=float) cu_edges.shape # A cupy-image can also be visualized using clesperantos imshow: cle.imshow(cu_edges) # ## Final remark # Keep in mind that when using clesperanto and cupy in combination, data is transferred multiple times between GPU and CPU. Try to minimize data transfer and run as many operations as possible in a row using the same library. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from copy import copy as _copy original = _copy(globals()) from numpy import * baseline = _copy(globals()) keys_to_check = _copy(set(baseline).difference(set(original))) 'e' in keys_to_check mean = 5 try: for key in keys_to_check: assert baseline[key] is globals()[key], ( "A variable has been clobbered: `{}`".format(key) ) test except AssertionError as e: print(e) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: bootcamp # language: python # name: bootcamp # --- # # Fetching Data # # This notebook includes code that fetches data from balldontlie API and then saves it as a csv file. 
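# A minimal sketch (not from the original notebook) of how the page-by-page requests used
# below could be wrapped into a single helper. It assumes the same balldontlie v1 conventions
# this notebook relies on (a `page`/`per_page` query returning at most 100 rows per page, and
# a pause between calls to respect the requests-per-minute limit mentioned later); the name
# `fetch_all_pages` and the `pause` default are hypothetical, not part of the original code.

# +
import time

import pandas as pd
import requests


def fetch_all_pages(endpoint, params=None, pages=1, pause=1.2):
    """Request `pages` pages of `endpoint` and concatenate their 'data' records."""
    root = "https://www.balldontlie.io/api/v1/"
    params = dict(params or {})
    frames = []
    for page in range(1, pages + 1):
        params["page"] = page
        response = requests.get(root + endpoint, params=params)
        if response.status_code != 200:
            print("Request failed with status", response.status_code)
            break
        frames.append(pd.json_normalize(response.json(), record_path="data"))
        time.sleep(pause)  # stay under the API's per-minute request limit
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
# -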
import requests
import pandas as pd
import time

pd.set_option('display.max_columns', None)

# NOTE: This make_request() function is different from the one in functions.py. It doesn't
# automatically detect whether the request is a multi-page request. (The balldontlie API
# serves a maximum of 100 rows of data per request.)
def make_request(endpoint, params=None, record_path=None, verbose=False):
    root = "https://www.balldontlie.io/api/v1/"
    response = requests.get(root + endpoint, params=params)
    if response.status_code != 200:
        print(response.status_code)
        return response
    if verbose:
        print("Success!")
    res = response.json()
    res = pd.json_normalize(res, record_path=record_path)
    return res

# + [markdown] tags=[]
# ### game data
#
# This data is only used for EDA purposes
# -

game_data = make_request("games", params={"page":1, "per_page":100, "seasons":[2017,2018,2019,2020]}, record_path="data", verbose=True)

# + jupyter={"outputs_hidden": true} tags=[]
# %%time
for i in range(2,501):
    if i%10 == 0:
        print(i)
    time.sleep(1.2)
    new_data = make_request("games", params={"page":i, "per_page":100, "seasons":[2017,2018,2019,2020]}, record_path="data")
    game_data = game_data.append(new_data)
# -

game_data.set_index("id", inplace=True)
game_data.to_csv("data/games.csv")

# + [markdown] tags=[]
# ### stats data
#
# This is the data used to build the model. Individual player stats for every NBA game since 1979.
#
# The first block of code just fetches the metadata, to see how many pages of data we are going to have to request (as the balldontlie API serves a maximum of 100 rows of data per request).

# + tags=[]
# there are over 11000 pages!
all_stats_data_meta = make_request("stats", params={"page":1, "per_page":100}, record_path=None)
all_stats_data_meta
# -

# This code gets all 11483 pages of data from the API and saves it in a dataframe

# + tags=[]
stats_data = pd.DataFrame()
for i in range(1, 11483):
    # Print what page we're on every 10 pages to keep track of progress
    if i % 10 == 0:
        print(i)
    # Make sure not to exceed 60 API requests per minute (the balldontlie API is free but limits requests per minute)
    time.sleep(1.1)
    # Make the request and append to the dataframe
    new_data = make_request("stats", params={"page":i, "per_page":100}, record_path="data")
    stats_data = stats_data.append(new_data)
print("Done!")
# -

# This code saves the data to a csv file. It's commented out so as not to accidentally overwrite the current csv file, as it took hours to pull all the data.
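# As a hedged alternative to commenting the write out (this helper is not part of the
# original notebook), the save could be guarded so a re-run cannot clobber a file that took
# hours to build; the name `save_if_absent` is made up for illustration, and the path matches
# the commented-out call below.

# +
import os


def save_if_absent(df, path):
    """Write `df` to `path` only if no file exists there yet."""
    if os.path.exists(path):
        print(f"Refusing to overwrite existing file: {path}")
    else:
        df.to_csv(path)

# Example usage (left commented out, like the original write):
# save_if_absent(stats_data, "data/stats_raw.csv")
# -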
# + stats_data.set_index("id", inplace=True) # stats_data.to_csv("data/stats_raw.csv") # + jupyter={"source_hidden": true} tags=[] for i in range(2, 11483): if i%100 == 0: print(i) time.sleep(1.5) new_data = make_request("stats", params={"page":i, "per_page":100}, record_path="data") all_stats_data = all_stats_data.append(new_data) print("Done!") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 48054, "status": "ok", "timestamp": 1573636583219, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="2mU_ZLUH5jz4" outputId="c4cbc445-56dd-49df-de87-c34e18da5f6c" import tensorflow as tf import glob import nibabel as nib import os import time import pandas as pd import numpy as np from mricode.utils import log_textfile from mricode.utils import copy_colab tf.__version__ # - tf.test.is_gpu_available() # + colab={} colab_type="code" id="nH4XzW8C5yhH" path_output = './' path_tfrecords = '/data2/res64/down/' path_csv = '/data2/csv/' #path_csv = '/content/drive/My Drive/Capstone/05_Data/01_Label/' filename_res = {'train': 'intell_residual_train.csv', 'val': 'intell_residual_valid.csv', 'test': 'intell_residual_test.csv'} filename_norm = {'train': 'intell_train.csv', 'val': 'intell_valid.csv', 'test': 'intell_test.csv'} filename_fluid = {'train': 'training_fluid_intelligence_sri.csv', 'val': 'validation_fluid_intelligence_sri.csv', 'test': 'test_fluid_intelligence_sri.csv'} filename_final = filename_res sample_size = 'site16_allimages' batch_size = 8 onlyt1 = False t1_mean = 0.35196779465675354 t2_mean = 0.5694633522033692 t1_std = 0.8948413240464094 t2_std = 1.2991791534423829 # + colab={} colab_type="code" id="ZzpJsO5Rx_LM" # + colab={} colab_type="code" id="96CI6bJ26JIo" def return_iter(path, sample_size, batch_size=8, onlyt1=False): # Some definitions if onlyt1: read_features = { 't1': tf.io.FixedLenFeature([], dtype=tf.string), 'subjectid': tf.io.FixedLenFeature([], dtype=tf.string) } else: read_features = { 't1': tf.io.FixedLenFeature([], dtype=tf.string), 't2': tf.io.FixedLenFeature([], dtype=tf.string), 'subjectid': tf.io.FixedLenFeature([], dtype=tf.string) } def _parse_(serialized_example, decoder = np.vectorize(lambda x: x.decode('UTF-8')), onlyt1=False): example = tf.io.parse_single_example(serialized_example, read_features) subjectid = example['subjectid'] if not(onlyt1): t1 = tf.expand_dims(tf.reshape(tf.io.decode_raw(example['t1'], tf.int8), (64,64,64)), axis=-1) t2 = tf.expand_dims(tf.reshape(tf.io.decode_raw(example['t2'], tf.float32), (64,64,64)), axis=-1) return ({'t1': t1, 't2': t2, 'subjectid': subjectid}) else: t1 = tf.expand_dims(tf.reshape(tf.io.decode_raw(example['t1'], tf.int8), (256,256,256)), axis=-1) return ({'t1': t1, 'subjectid': subjectid}) train_ds = tf.data.TFRecordDataset(path +'t1t2_train_' + str(sample_size) + '_v4.tfrecords') val_ds = tf.data.TFRecordDataset(path + 't1t2_val_' + str(sample_size) + '_v4.tfrecords') test_ds = tf.data.TFRecordDataset(path + 't1t2_test_' + str(sample_size) + '_v4.tfrecords') train_iter = train_ds.map(lambda x:_parse_(x, onlyt1=onlyt1)).shuffle(True).batch(batch_size) val_iter = val_ds.map(lambda x:_parse_(x, 
onlyt1=onlyt1)).batch(batch_size) test_iter = test_ds.map(lambda x:_parse_(x, onlyt1=onlyt1)).batch(batch_size) return train_iter, val_iter, test_iter def return_csv(path, filenames={'train': 'intell_train.csv', 'val': 'intell_valid.csv', 'test': 'intell_test.csv'}, fluid = False): train_df = pd.read_csv(path + filenames['train']) val_df = pd.read_csv(path + filenames['val']) test_df = pd.read_csv(path + filenames['test']) norm = None if fluid: train_df.columns = ['subjectkey', 'fluid_res', 'fluid'] val_df.columns = ['subjectkey', 'fluid_res', 'fluid'] test_df.columns = ['subjectkey', 'fluid_res', 'fluid'] train_df['subjectkey'] = train_df['subjectkey'].str.replace('_', '') val_df['subjectkey'] = val_df['subjectkey'].str.replace('_', '') test_df['subjectkey'] = test_df['subjectkey'].str.replace('_', '') if not(fluid): for df in [train_df, val_df, test_df]: df['race.ethnicity'] = df['race.ethnicity'] - 1 df['married'] = df['married'] - 1 df['high.educ_group'] = 0 df.loc[(train_df['high.educ']>=11) & (df['high.educ']<=12),'high.educ_group'] = 1 df.loc[(train_df['high.educ']>=13) & (df['high.educ']<=13),'high.educ_group'] = 2 counter = 3 for i in range(14,22): df.loc[(df['high.educ']>=i) & (df['high.educ']<=i),'high.educ_group'] = counter df['income_group'] = 0 counter = 1 for i in range(4,11): df.loc[(df['income']>=i) & (df['income']<=i),'income_group'] = counter counter += 1 norm = {} for col in ['BMI', 'age', 'vol', 'weight', 'height', 'nihtbx_fluidcomp_uncorrected', 'nihtbx_cryst_uncorrected', 'nihtbx_pattern_uncorrected', 'nihtbx_picture_uncorrected', 'nihtbx_list_uncorrected', 'nihtbx_flanker_uncorrected', 'nihtbx_picvocab_uncorrected', 'nihtbx_cardsort_uncorrected', 'nihtbx_totalcomp_uncorrected', 'nihtbx_reading_uncorrected']: mean = train_df[col].mean() std = train_df[col].std() train_df[col + '_norm'] = (train_df[col]-mean)/std val_df[col + '_norm'] = (val_df[col]-mean)/std test_df[col + '_norm'] = (test_df[col]-mean)/std norm[col] = {'mean': mean, 'std': std} return train_df, val_df, test_df, norm # + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 4520, "status": "ok", "timestamp": 1573636640714, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="B-rSS9vT6TuF" outputId="f5b43cbc-7d59-4c7c-b6d6-78882b1ef3e1" train_iter, val_iter, test_iter = return_iter(path_tfrecords, sample_size, batch_size, onlyt1=onlyt1) # + colab={} colab_type="code" id="_WxRhswDxB6a" a = next(iter(train_iter)) # + colab={} colab_type="code" id="lJ0VnyEzBAes" if False: t1_mean = 0. t1_std = 0. t2_mean = 0. t2_std = 0. n = 0. 
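# (Added note: this `if False:` block would recompute the normalization constants by summing per-batch means/stds in the loop below and dividing by the total sample count n; the hard-coded t1_*/t2_* values defined earlier are used instead.)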
for b in train_iter: t1_mean += np.mean(b['t1']) t1_std += np.std(b['t1']) t2_mean += np.mean(b['t2']) t2_std += np.std(b['t2']) n += np.asarray(b['t1']).shape[0] t1_mean /= n t1_std /= n t2_mean /= n t2_std /= n # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 2924, "status": "ok", "timestamp": 1573636640716, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="mSwhEqDPM_r8" outputId="14a4ac0b-9587-4776-fc64-1e008f0c5c31" t1_mean, t1_std, t2_mean, t2_std # + colab={} colab_type="code" id="PAyV5ktlA6K1" train_df, val_df, test_df, norm_dict = return_csv(path_csv, filename_final, False) # + colab={"base_uri": "https://localhost:8080/", "height": 442} colab_type="code" executionInfo={"elapsed": 3147, "status": "ok", "timestamp": 1573636641787, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="0xYy4XUyVBeC" outputId="a6a19d59-31ff-4064-f1dc-952befcce4b3" norm_dict # + colab={"base_uri": "https://localhost:8080/", "height": 697} colab_type="code" executionInfo={"elapsed": 2660, "status": "ok", "timestamp": 1573636641788, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="_SVRoaUTgv1O" outputId="7b86ff58-14d7-488a-9b56-4962b095211e" train_df.max() # + colab={} colab_type="code" id="FmJMaxLJLIgE" import tensorflow as tf from tensorflow.keras.layers import Conv3D from tensorflow import nn from tensorflow.python.ops import nn_ops from tensorflow.python.framework import tensor_shape from tensorflow.python.keras.engine.base_layer import InputSpec from tensorflow.python.keras.utils import conv_utils # + colab={} colab_type="code" id="eTk95ptFV5oN" cat_cols = {'female': 2, 'race.ethnicity': 5, 'high.educ_group': 4, 'income_group': 8, 'married': 6} #cat_cols = {} num_cols = [x for x in list(val_df.columns) if '_norm' in x] #num_cols = num_cols[0:3] # + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" executionInfo={"elapsed": 1134, "status": "ok", "timestamp": 1573636642235, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="gSh5zfycjI7A" outputId="b7a137d4-fb48-413e-9307-fcab56fc28d8" num_cols # + colab={} colab_type="code" id="1j3q7l7DDWSq" from tensorflow.keras import Model class MyDNN(Model): def __init__(self, cat_cols, num_cols): super(MyDNN, self).__init__() self.cat_cols = cat_cols self.num_cols = num_cols self.ac = tf.keras.layers.ReLU() self.maxpool = tf.keras.layers.MaxPool3D(pool_size=(2, 2, 2), data_format='channels_last') self.conv1 = tf.keras.layers.Conv3D( filters = 1, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn1 = tf.keras.layers.BatchNormalization() self.conv2 = tf.keras.layers.Conv3D( filters = 16, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn2 = tf.keras.layers.BatchNormalization() self.conv3 = tf.keras.layers.Conv3D( filters = 32, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn3 = tf.keras.layers.BatchNormalization() self.conv4 = tf.keras.layers.Conv3D( filters = 256, kernel_size = 3, 
padding='valid', data_format='channels_last' ) self.bn4 = tf.keras.layers.BatchNormalization() self.fc = {} for k in list(self.cat_cols.keys()): self.fc[k] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(self.cat_cols[k], activation='softmax') ]) for i in range(len(self.num_cols)): self.fc[self.num_cols[i]] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(1) ]) #self.fc1 = tf.keras.Sequential([ # tf.keras.layers.Dense(256, activation='relu'), # tf.keras.layers.BatchNormalization(), # tf.keras.layers.Dense(1) # ]) #self.fc2 = tf.keras.Sequential([ # tf.keras.layers.Dense(256, activation='relu'), # tf.keras.layers.BatchNormalization(), # tf.keras.layers.Dense(1) # ]) #self.fc3 = tf.keras.Sequential([ # tf.keras.layers.Dense(256, activation='relu'), # tf.keras.layers.BatchNormalization(), # tf.keras.layers.Dense(1) # ]) def call(self, x): x = self.conv1(x) x = self.bn1(x) x = self.ac(x) x = self.maxpool(x) x = self.conv2(x) x = self.bn2(x) x = self.ac(x) x = self.maxpool(x) x = self.conv3(x) x = self.bn3(x) x = self.ac(x) x = self.maxpool(x) x = self.conv4(x) x = self.bn4(x) x = self.ac(x) x = tf.keras.layers.GlobalAveragePooling3D()(x) out = {} for k in list(self.fc.keys()): out[k] = self.fc[k](x) #['age_norm', 'vol_norm', 'weight_norm'] #out['age_norm'] = self.fc1(x) #out['vol_norm'] = self.fc2(x) #out['weight_norm'] = self.fc3(x) return out # + colab={} colab_type="code" id="pwzdY28gkCdN" from tensorflow.keras import Model class MyDNN(Model): def __init__(self, cat_cols, num_cols): super(MyDNN, self).__init__() self.cat_cols = cat_cols self.num_cols = num_cols self.ac = tf.keras.layers.ReLU() self.maxpool = tf.keras.layers.MaxPool3D(pool_size=(2, 2, 2), data_format='channels_last') self.conv1 = tf.keras.layers.Conv3D( filters = 32, kernel_size = 3, padding='valid', data_format='channels_last', input_shape = (64,64,64,2) ) self.bn1 = tf.keras.layers.BatchNormalization() self.conv2 = tf.keras.layers.Conv3D( filters = 64, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn2 = tf.keras.layers.BatchNormalization() self.conv3 = tf.keras.layers.Conv3D( filters = 128, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn3 = tf.keras.layers.BatchNormalization() self.conv4 = tf.keras.layers.Conv3D( filters = 256, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn4 = tf.keras.layers.BatchNormalization() self.fc = {} for k in list(self.cat_cols.keys()): self.fc[k] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(self.cat_cols[k], activation='softmax') ]) for i in range(len(self.num_cols)): self.fc[self.num_cols[i]] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(1) ]) def call(self, x): x = self.conv1(x) x = self.bn1(x) x = self.ac(x) x = self.maxpool(x) x = self.conv2(x) x = self.bn2(x) x = self.ac(x) x = self.maxpool(x) x = self.conv3(x) x = self.bn3(x) x = self.ac(x) x = self.maxpool(x) x = self.conv4(x) x = self.bn4(x) x = self.ac(x) x = tf.keras.layers.GlobalAveragePooling3D()(x) out = {} for k in list(self.fc.keys()): out[k] = self.fc[k](x) return out # + colab={} colab_type="code" id="q9PZkxi_k5b-" from tensorflow.keras import Model class MyDNN(Model): def __init__(self, cat_cols, num_cols): super(MyDNN, 
self).__init__() self.cat_cols = cat_cols self.num_cols = num_cols self.ac = tf.keras.layers.ReLU() self.maxpool = tf.keras.layers.MaxPool3D(pool_size=(2, 2, 2), data_format='channels_last') self.conv1 = tf.keras.layers.Conv3D( filters = 32, kernel_size = 3, padding='valid', data_format='channels_last', input_shape = (64,64,64,2) ) self.bn1 = tf.keras.layers.BatchNormalization() self.conv2 = tf.keras.layers.Conv3D( filters = 64, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn2 = tf.keras.layers.BatchNormalization() self.conv3 = tf.keras.layers.Conv3D( filters = 128, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn3 = tf.keras.layers.BatchNormalization() self.conv4 = tf.keras.layers.Conv3D( filters = 256, kernel_size = 3, padding='valid', data_format='channels_last', name='lastconv_1' ) self.bn4 = tf.keras.layers.BatchNormalization() self.fc = {} for k in list(self.cat_cols.keys()): self.fc[k] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(self.cat_cols[k], activation='softmax', name='output_' + k) ]) for i in range(len(self.num_cols)): self.fc[self.num_cols[i]] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(1) ]) def call(self, x): x = self.conv1(x) x = self.bn1(x) x = self.ac(x) x = self.maxpool(x) x = self.conv2(x) x = self.bn2(x) x = self.ac(x) x = self.maxpool(x) x = self.conv3(x) x = self.bn3(x) x = self.ac(x) x = self.maxpool(x) x = self.conv4(x) x = self.bn4(x) x = self.ac(x) x = tf.keras.layers.GlobalAveragePooling3D()(x) out = {} for k in list(self.fc.keys()): out[k] = self.fc[k](x) return out # + colab={} colab_type="code" id="tpab_A9TDYeK" loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam(lr = 0.001) # + colab={} colab_type="code" id="hROMApYiDagm" def calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict): for col in num_cols: tmp_col = col tmp_std = norm_dict[tmp_col.replace('_norm','')]['std'] tmp_y_true = tf.cast(y_true[col], tf.float32).numpy() tmp_y_pred = np.squeeze(y_pred[col].numpy()) if not(tmp_col in out_loss): out_loss[tmp_col] = np.sum(np.square(tmp_y_true-tmp_y_pred)) else: out_loss[tmp_col] += np.sum(np.square(tmp_y_true-tmp_y_pred)) if not(tmp_col in out_acc): out_acc[tmp_col] = np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std)) else: out_acc[tmp_col] += np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std)) for col in list(cat_cols.keys()): tmp_col = col if not(tmp_col in out_loss): out_loss[tmp_col] = tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy() else: out_loss[tmp_col] += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy() if not(tmp_col in out_acc): out_acc[tmp_col] = tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy() else: out_acc[tmp_col] += tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy() return(out_loss, out_acc) def format_output(out_loss, out_acc, n, cols, print_bl=False): loss = 0 acc = 0 output = [] for col in cols: output.append([col, out_loss[col]/n, out_acc[col]/n]) loss += out_loss[col]/n acc += out_acc[col]/n df = pd.DataFrame(output) df.columns = ['name', 'loss', 'acc'] if print_bl: print(df) return(loss, acc, df) @tf.function def train_step(X, y, model, 
optimizer, cat_cols, num_cols): with tf.GradientTape() as tape: predictions = model(X) i = 0 loss = tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]])) for i in range(1,len(num_cols)): loss += tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]])) for col in list(cat_cols.keys()): loss += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y[col]), tf.squeeze(predictions[col])) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) return(y, predictions, loss) @tf.function def test_step(X, y, model): predictions = model(X) return(y, predictions) def epoch(data_iter, df, model, optimizer, cat_cols, num_cols, norm_dict): out_loss = {} out_acc = {} n = 0. n_batch = 0. total_time_dataload = 0. total_time_model = 0. start_time = time.time() for batch in data_iter: total_time_dataload += time.time() - start_time start_time = time.time() t1 = (tf.cast(batch['t1'], tf.float32)-t1_mean)/t1_std t2 = (batch['t2']-t2_mean)/t2_std subjectid = decoder(batch['subjectid']) y = get_labels(df, subjectid, list(cat_cols.keys())+num_cols) X = tf.concat([t1, t2], axis=4) if optimizer != None: y_true, y_pred, loss = train_step(X, y, model, optimizer, cat_cols, num_cols) else: y_true, y_pred = test_step(X, y, model) out_loss, out_acc = calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict) n += X.shape[0] n_batch += 1 if (n_batch % 10) == 0: print(n_batch) total_time_model += time.time() - start_time start_time = time.time() return (out_loss, out_acc, n, total_time_model, total_time_dataload) def get_labels(df, subjectid, cols = ['nihtbx_fluidcomp_uncorrected_norm']): subjects_df = pd.DataFrame(subjectid) result_df = pd.merge(subjects_df, df, left_on=0, right_on='subjectkey', how='left') output = {} for col in cols: output[col] = np.asarray(result_df[col].values) return output def best_val(df_best, df_val, df_test): df_best = pd.merge(df_best, df_val, how='left', left_on='name', right_on='name') df_best = pd.merge(df_best, df_test, how='left', left_on='name', right_on='name') df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_test'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_test'] df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_val'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_val'] df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_test'] df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val'] df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 
'married'])), 'cur_acc_test'] df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val'] df_best = df_best.drop(['cur_loss_val', 'cur_acc_val', 'cur_loss_test', 'cur_acc_test'], axis=1) return(df_best) # + colab={"base_uri": "https://localhost:8080/", "height": 357} colab_type="code" executionInfo={"elapsed": 415, "status": "ok", "timestamp": 1573636653401, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="8iQKsZftjsPR" outputId="22d8df05-3f92-4c31-d831-e9308b0db4c9" cat_cols, num_cols # + colab={} colab_type="code" id="XK13wNFtoIC4" modelname = 'site16_downsample_t1t2_test_' # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 46355, "status": "ok", "timestamp": 1573636701683, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="RV24LE3k-00n" outputId="067125be-6ce2-40c9-998e-62f800ccb2f1" decoder = np.vectorize(lambda x: x.decode('UTF-8')) template = 'Epoch {0}, Loss: {1:.3f}, Accuracy: {2:.3f}, Val Loss: {3:.3f}, Val Accuracy: {4:.3f}, Time Model: {5:.3f}, Time Data: {6:.3f}' for col in [0]: log_textfile(path_output + modelname + 'multitask_test' + '.log', cat_cols), log_textfile(path_output + modelname + 'multitask_test' + '.log', num_cols) loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam(lr = 0.001) model = MyDNN(cat_cols, num_cols) df_best = None for e in range(10): log_textfile(path_output + modelname + 'multitask_test' + '.log', 'Epochs: ' + str(e)) loss = tf.Variable(0.) acc = tf.Variable(0.) val_loss = tf.Variable(0.) val_acc = tf.Variable(0.) test_loss = tf.Variable(0.) test_acc = tf.Variable(0.) 
train_out_loss, train_out_acc, n, time_model, time_data = epoch(train_iter, train_df, model, optimizer, cat_cols, num_cols, norm_dict) val_out_loss, val_out_acc, n, _, _ = epoch(val_iter, val_df, model, None, cat_cols, num_cols, norm_dict) test_out_loss, test_out_acc, n, _, _ = epoch(test_iter, test_df, model, None, cat_cols, num_cols, norm_dict) loss, acc, _ = format_output(train_out_loss, train_out_acc, n, list(cat_cols.keys())+num_cols) val_loss, val_acc, df_val = format_output(val_out_loss, val_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False) test_loss, test_acc, df_test = format_output(test_out_loss, test_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False) df_val.columns = ['name', 'cur_loss_val', 'cur_acc_val'] df_test.columns = ['name', 'cur_loss_test', 'cur_acc_test'] if e == 0: df_best = pd.merge(df_test, df_val, how='left', left_on='name', right_on='name') df_best.columns = ['name', 'best_loss_test', 'best_acc_test', 'best_loss_val', 'best_acc_val'] df_best = best_val(df_best, df_val, df_test) print(df_best[['name', 'best_loss_test', 'best_acc_test']]) print(df_best[['name', 'best_loss_val', 'best_acc_val']]) log_textfile(path_output + modelname + 'multitask_test' + '.log', template.format(e, loss, acc, val_loss, val_acc, time_model, time_data)) df_best.to_csv(path_output + modelname + 'multitask_test' + '.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 241} colab_type="code" executionInfo={"elapsed": 52176, "status": "ok", "timestamp": 1573636708782, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="ENiEqcTM2o5b" outputId="7661277a-f2b1-485e-fbfb-85fc5e7d9c94" # !pip install tf-explain # + colab={"base_uri": "https://localhost:8080/", "height": 190} colab_type="code" executionInfo={"elapsed": 57697, "status": "ok", "timestamp": 1573636714920, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="je8ZbBq3tZys" outputId="3c420738-3d47-44ed-9f75-25a268651cf3" # !pip install nilearn # + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 56450, "status": "ok", "timestamp": 1573636715232, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="cv99Rx9vtUBF" outputId="ad64ddb7-7353-4df0-fe1e-f3b4ac938005" import glob from nilearn import plotting import matplotlib.pyplot as plt import nilearn import numpy as np import nibabel as nib from nilearn.image import crop_img import os import tqdm # %matplotlib inline # + colab={} colab_type="code" id="tyWmz3FstC8G" a = next(iter(test_iter)) t1 = (tf.cast(a['t1'], tf.float32)-t1_mean)/t1_std t2 = (a['t2']-t2_mean)/t2_std X = tf.concat([t1, t2], axis=4) # + colab={} colab_type="code" id="k1J0Fsn1Ijr2" inputs = tf.keras.Input(shape=(64,64,64,2), name='inputlayer123') a = model(inputs)['female'] mm = Model(inputs=inputs, outputs=a) # + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" executionInfo={"elapsed": 54985, "status": "ok", "timestamp": 1573636716755, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} 
id="1yvswMaT2lKD" outputId="bcc1195f-8702-4a2b-c41d-f264fb24143c" mm.summary() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1327, "status": "ok", "timestamp": 1573636746552, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="4rORUysK2cM3" outputId="03452a7a-6db7-4f87-b1be-f05fbdb00b2a" mm.get_layer('my_dnn').get_layer('lastconv_1').output # + colab={} colab_type="code" id="wf3u-dJ1PIT_" from tf_explain.core.smoothgrad import SmoothGrad explainer = SmoothGrad() grid = explainer.explain((X[0:1], _), mm, 0, 20, 1.) # + colab={"base_uri": "https://localhost:8080/", "height": 218} colab_type="code" executionInfo={"elapsed": 4327, "status": "ok", "timestamp": 1573636753055, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="SsBZysWOs34f" outputId="fe3a9a63-539e-4e88-a10b-57738791cd7f" ppp = '/content/drive/My Drive/Capstone/05_Data/02_Sample_MRI/downsampled_resize/T2/sub-NDARINVFJJPAA2A_T2.nii.gz' org = nib.load(ppp) plotting.plot_anat(nilearn.image.new_img_like(ppp, grid, affine=None, copy_header=False)) plotting.show() # + colab={} colab_type="code" id="-G7cjveLADUG" from tf_explain.core.occlusion_sensitivity import OcclusionSensitivity # + colab={} colab_type="code" id="nhXfjGsxADeD" explainer = OcclusionSensitivity() # + colab={} colab_type="code" id="_gGRXRJSADkG" ?? OcclusionSensitivity # + colab={"base_uri": "https://localhost:8080/", "height": 351} colab_type="code" executionInfo={"elapsed": 4225, "status": "error", "timestamp": 1573636765486, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="Q1GJABrBBDOx" outputId="8b8bc278-991b-4a4a-dbb0-dea4c55ffeee" sensitivity_maps = np.array( [ explainer.get_sensitivity_map(mm, X[0], 0, 4) ] ) # + colab={} colab_type="code" id="o3SM73D7BVZS" image = X[0] patch_size = 4 class_index=0 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1766, "status": "ok", "timestamp": 1573637561679, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="mOYf3Oh0G42B" outputId="1c2d487b-7d54-4afa-efef-9b2242d897c3" image.shape # + colab={} colab_type="code" id="yCP9mb_bC6SR" def apply_grey_patch(image, top_left_x, top_left_y, top_left_z, patch_size): """ Replace a part of the image with a grey patch. 
Args: image (numpy.ndarray): Input image top_left_x (int): Top Left X position of the applied box top_left_y (int): Top Left Y position of the applied box patch_size (int): Size of patch to apply Returns: numpy.ndarray: Patched image """ patched_image = np.array(image, copy=True) patched_image[ top_left_x : top_left_x + patch_size, top_left_y : top_left_y + patch_size, top_left_z : top_left_z + patch_size, 0 ] = 0 return patched_image import math sensitivity_map = np.zeros(( math.ceil(image.shape[0] / patch_size), math.ceil(image.shape[1] / patch_size), math.ceil(image.shape[2] / patch_size), )) # + colab={"base_uri": "https://localhost:8080/", "height": 289} colab_type="code" executionInfo={"elapsed": 41009, "status": "ok", "timestamp": 1573637656803, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="urOT4dZqFzvY" outputId="2476c3b8-e514-40a0-b2a7-0e70bad97ad0" for index_z, top_left_z in enumerate(range(0, image.shape[2], patch_size)): print(index_z, top_left_z) patches = [ apply_grey_patch(image, top_left_x, top_left_y, top_left_z, patch_size) for index_x, top_left_x in enumerate(range(0, image.shape[0], patch_size)) for index_y, top_left_y in enumerate(range(0, image.shape[1], patch_size)) ] coordinates = [ (index_y, index_x) for index_x, _ in enumerate(range(0, image.shape[0], patch_size)) for index_y, _ in enumerate(range(0, image.shape[1], patch_size)) ] predictions = mm.predict(np.array(patches), batch_size=1) target_class_predictions = [prediction[class_index] for prediction in predictions] for (index_y, index_x), confidence in zip(coordinates, target_class_predictions): sensitivity_map[index_y, index_x, index_z] = 1 - confidence # + colab={} colab_type="code" id="TJNEmMOrG0Hi" from skimage.transform import resize # + colab={} colab_type="code" id="YnQgVHssHqWX" sm = resize(sensitivity_map, (64,64,64)) # + colab={} colab_type="code" id="8dT5GWoNHvKu" heatmap = (sm - np.min(sm)) / (sm.max() - sm.min()) # + colab={"base_uri": "https://localhost:8080/", "height": 103} colab_type="code" executionInfo={"elapsed": 2692, "status": "ok", "timestamp": 1573638001081, "user": {"displayName": "", "photoUrl": "", "userId": "06737229821528734971"}, "user_tz": -60} id="J53m22tPIIe-" outputId="53c0ebad-1730-47ff-9766-17bd8a5d0034" import cv2 step = 4 n_slices = int(64/4) i = 0 n = 0 data = (heatmap * 255).astype("uint8") slice = 0 fig, ax = plt.subplots(1, n_slices, figsize=[18, 1.2*1]) for _ in range(n_slices): #tmp_data = cv2.applyColorMap(cv2.cvtColor(data[:,:,slice], cv2.COLOR_GRAY2BGR), colormap) ax[n].imshow(data[:,:,slice]) ax[n].set_xticks([]) ax[n].set_yticks([]) if i == 0: ax[n].set_title(str(slice), color='r') else: ax[n].set_title('', color='r') n += 1 slice += step # + colab={} colab_type="code" id="AvHj5mgfBTsU" ''def apply_grey_patch(image, top_left_x, top_left_y, top_left_z, patch_size): """ Replace a part of the image with a grey patch. 
Args: image (numpy.ndarray): Input image top_left_x (int): Top Left X position of the applied box top_left_y (int): Top Left Y position of the applied box patch_size (int): Size of patch to apply Returns: numpy.ndarray: Patched image """ patched_image = np.array(image, copy=True) patched_image[ top_left_y : top_left_y + patch_size, top_left_x : top_left_x + patch_size, top_left_z : top_left_z + patch_size, : ] = 127.5 return patched_image import math sensitivity_map = np.zeros(( math.ceil(image.shape[0] / patch_size), math.ceil(image.shape[1] / patch_size), math.ceil(image.shape[2] / patch_size), )) patches = [ apply_grey_patch(image, top_left_x, top_left_y, top_left_z, patch_size) for index_x, top_left_x in enumerate(range(0, image.shape[0], patch_size)) for index_y, top_left_y in enumerate(range(0, image.shape[1], patch_size)) for index_z, top_left_z in enumerate(range(0, image.shape[2], patch_size)) ] coordinates = [ (index_y, index_x, index_z) for index_x, _ in enumerate(range(0, image.shape[0], patch_size)) for index_y, _ in enumerate(range(0, image.shape[1], patch_size)) ] # + colab={} colab_type="code" id="5tykySZbBhd6" predictions = mm.predict(np.array(patches), batch_size=1) target_class_predictions = [ prediction[class_index] for prediction in predictions ] for (index_y, index_x), confidence in zip(coordinates, target_class_predictions): sensitivity_map[index_y, index_x] = 1 - confidence # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 24896, "status": "ok", "timestamp": 1573636310007, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="T7CP5XZmB5zk" outputId="51b20ba5-c4a1-437f-c380-1a091287d046" sensitivity_map.shape # + colab={"base_uri": "https://localhost:8080/", "height": 307} colab_type="code" executionInfo={"elapsed": 2668, "status": "error", "timestamp": 1573635968909, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="YSWQ9aLgADhm" outputId="2064a784-4ba7-48a1-fe7b-14423fb33eff" grid = explainer.explain((X[0:1], _), mm, 0, 20, 1.) 
# + colab={} colab_type="code" id="GsN40IBZADbd" # + colab={} colab_type="code" id="" from tf_explain.core.grad_cam import GradCAM explainer = GradCAM() # + colab={} colab_type="code" id="JleqbVNFt61o" ??GradCAM # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 928, "status": "ok", "timestamp": 1573635431699, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="y9rvvybc68Zz" outputId="d733eeb0-9623-4abe-d37c-808b55b6abf5" model.summary() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1155, "status": "ok", "timestamp": 1573635511676, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="Klr1s868--Pd" outputId="91ae4930-a29b-40e7-e6ba-b54f339f9de5" mm.get_layer('my_dnn_2').get_layer('sequential_40').output # + colab={} colab_type="code" id="A98V_pwSQRHe" grad_model = tf.keras.models.Model( [mm.inputs], [mm.get_layer('my_dnn_2').get_layer('lastconv_1').output, mm.get_layer('my_dnn_2').get_layer('sequential_40').output] ) # + colab={} colab_type="code" id="ZtMPV1LRu-M9" import cv2 def image_to_uint_255(image): """ Convert float images to int 0-255 images. Args: image (numpy.ndarray): Input image. Can be either [0, 255], [0, 1], [-1, 1] Returns: numpy.ndarray: """ if image.dtype == np.uint8: return image if image.min() < 0: image = (image + 1.0) / 2.0 return (image * 255).astype("uint8") def heatmap_display(heatmap, original_image, colormap=cv2.COLORMAP_VIRIDIS): """ Apply a heatmap (as an np.ndarray) on top of an original image. 
Args: heatmap (numpy.ndarray): Array corresponding to the heatmap original_image (numpy.ndarray): Image on which we apply the heatmap colormap (int): OpenCV Colormap to use for heatmap visualization Returns: np.ndarray: Original image with heatmap applied """ heatmap = cv2.resize(heatmap, original_image.shape[0]) image = image_to_uint_255(original_image) heatmap = (heatmap - np.min(heatmap)) / (heatmap.max() - heatmap.min()) heatmap = cv2.applyColorMap( cv2.cvtColor((heatmap * 255).astype("uint8"), cv2.COLOR_GRAY2BGR), colormap ) output = cv2.addWeighted(cv2.cvtColor(image, cv2.COLOR_RGB2BGR), 0.7, heatmap, 1, 0) return cv2.cvtColor(output, cv2.COLOR_BGR2RGB) # + colab={} colab_type="code" id="0x7n6xNlVuCE" colormap=cv2.COLORMAP_VIRIDIS with tf.GradientTape() as tape: X_in = tf.cast(X[0:1], tf.float32) conv_outputs, predictions = grad_model(X_in) loss = predictions[:, 0] tape.watch(loss) tape.watch(conv_outputs) grads = tape.gradient(loss, conv_outputs) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1109, "status": "ok", "timestamp": 1573635531488, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="zkzfoc8651jB" outputId="57b5fff2-116a-4c75-cbf0-e4526ee4cf32" grads.shape # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1231, "status": "ok", "timestamp": 1573635624114, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="yEUMgSwp_fvl" outputId="507004b5-d2bb-4890-fcc8-1b64f6bbf34d" conv_outputs.shape # + colab={} colab_type="code" id="NbK4gsGD5z8G" guided_grads = ( tf.cast(conv_outputs > 0, "float32") * tf.cast(grads > 0, "float32") * grads ) cams = GradCAM.generate_ponderated_output(conv_outputs, guided_grads) #heatmaps = np.array([ # heatmap_display(cam.numpy(), image, colormap) # for cam, image in zip(cams, X[0:1]) # ] #) # + colab={} colab_type="code" id="GQRFygoO_lhl" ?? 
GradCAM # + colab={} colab_type="code" id="KJQ3A6D1vOga" cam = cams[0].numpy() original_image = X[0].numpy() # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 553, "status": "ok", "timestamp": 1573635677057, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="jq6WwRHM_suQ" outputId="1260086f-78c7-4e90-83f5-29fbd499bd52" cam.shape # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 994, "status": "ok", "timestamp": 1573635684025, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="8-vBiYBL_uHv" outputId="38340d19-c523-4e35-c8a0-c02b2ed97298" original_image.shape # + colab={} colab_type="code" id="Lf1EpM0huWkN" image = image_to_uint_255(original_image) # + colab={} colab_type="code" id="RfWTQbouwPr6" cam = (cam - np.min(cam)) / (cam.max() - cam.min()) # + colab={"base_uri": "https://localhost:8080/", "height": 218} colab_type="code" executionInfo={"elapsed": 3100, "status": "ok", "timestamp": 1573635695548, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="Vc3-eSm_Wdtp" outputId="628281a8-8722-4854-fb6f-907dea010039" ppp = '/content/drive/My Drive/Capstone/05_Data/02_Sample_MRI/downsampled_resize/T2/sub-NDARINVFJJPAA2A_T2.nii.gz' org = nib.load(ppp) plotting.plot_anat(nilearn.image.new_img_like(ppp, cam, affine=None, copy_header=False)) plotting.show() # + colab={"base_uri": "https://localhost:8080/", "height": 103} colab_type="code" executionInfo={"elapsed": 2499, "status": "ok", "timestamp": 1573632489896, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="Zon4lQxqxy_g" outputId="e27ca556-4b0e-419d-f080-948504cda843" step = 4 n_slices = int(64/4) i = 0 n = 0 data = (cam * 255).astype("uint8") slice = 0 fig, ax = plt.subplots(1, n_slices, figsize=[18, 1.2*1]) for _ in range(n_slices): tmp_data = cv2.applyColorMap(cv2.cvtColor(data[:,:,slice], cv2.COLOR_GRAY2BGR), colormap) ax[n].imshow(tmp_data) ax[n].set_xticks([]) ax[n].set_yticks([]) if i == 0: ax[n].set_title(str(slice), color='r') else: ax[n].set_title('', color='r') n += 1 slice += step # + colab={"base_uri": "https://localhost:8080/", "height": 850} colab_type="code" executionInfo={"elapsed": 988, "status": "ok", "timestamp": 1573632429787, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="CswyaFSSzUT1" outputId="e23d5f93-8b96-493c-943a-220806dd2676" image[:,:,slice] # + colab={"base_uri": "https://localhost:8080/", "height": 850} colab_type="code" executionInfo={"elapsed": 1103, "status": "ok", "timestamp": 1573632359963, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="sHi5Wa4OxzHv" outputId="f75dcc53-bb56-4004-c563-0732b9b00f95" cv2.applyColorMap(cv2.cvtColor(data[:,:,slice], cv2.COLOR_GRAY2BGR), colormap) # + colab={} 
colab_type="code" id="6j0saosJV0nl" grads # + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 1724, "status": "ok", "timestamp": 1573573601574, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="9ll5ZhBYS0CC" outputId="da8f25d9-8618-4d94-a49c-6d12f26e452c" # + colab={"base_uri": "https://localhost:8080/", "height": 190} colab_type="code" executionInfo={"elapsed": 5602, "status": "ok", "timestamp": 1573573592894, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="2l2aRBO0S0lH" outputId="8eb4d09c-aa88-4223-dc2d-872ff68cab0b" # + colab={} colab_type="code" id="fmCx2QDeTgV5" ppp = '/content/drive/My Drive/Capstone/05_Data/02_Sample_MRI/downsampled_resize/T2/sub-NDARINVFJJPAA2A_T2.nii.gz' org = nib.load(ppp) # + colab={} colab_type="code" id="3aOZOBl7UAYf" # + colab={"base_uri": "https://localhost:8080/", "height": 218} colab_type="code" executionInfo={"elapsed": 2719, "status": "ok", "timestamp": 1573573909666, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="x9nuB0SiS3lG" outputId="da89a83a-9b5d-4286-c265-f9b6d614177e" # + colab={} colab_type="code" id="dSeow2ZMTSM4" nilearn.image. # + colab={"base_uri": "https://localhost:8080/", "height": 307} colab_type="code" executionInfo={"elapsed": 1068, "status": "error", "timestamp": 1573570989091, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="aUUkRh73Ikou" outputId="9280f05b-2745-4b4f-eb3c-22a9bb5c2ee3" import tensorflow as tf mm = tf.keras.Model(inputs=inputs, output=inputs) # + colab={} colab_type="code" id="5iG2jty7ItI6" aaa = model.fc['female'] # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1101, "status": "ok", "timestamp": 1573572176029, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="rBkxNGXJMm3f" outputId="8f9443a0-ce63-4b6b-cfd6-5bdcb191d7f8" aaa.layers[-1] # + colab={"base_uri": "https://localhost:8080/", "height": 164} colab_type="code" executionInfo={"elapsed": 1594, "status": "error", "timestamp": 1573571926766, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="R7pvBAzEMgIb" outputId="b674a27f-9b52-49c7-d0d6-bda6bebe1508" list(model.fc['female'].children()) # + colab={} colab_type="code" id="aQ6iuJbGEym7" from tensorflow.keras import Model class WrapperMyDNN(Model): def __init__(self, model): super(WrapperMyDNN, self).__init__() self.model = model #self.output = self.model.fc['female'].layers[-1] def call(self, x): x_tmp = self.model(x) return(x_tmp['female']) # + colab={} colab_type="code" id="S4Qim0gN0Lua" ?? 
Model # + colab={} colab_type="code" id="IbySX5WM0F4G" from tensorflow.keras import Model class MyDNN2(Model): def __init__(self, cat_cols, num_cols, **kwargs): super(MyDNN2, self).__init__(**kwargs) self.cat_cols = cat_cols self.num_cols = num_cols self.ac = tf.keras.layers.ReLU() self.maxpool = tf.keras.layers.MaxPool3D(pool_size=(2, 2, 2), data_format='channels_last') self.conv1 = tf.keras.layers.Conv3D( filters = 32, kernel_size = 3, padding='valid', data_format='channels_last', input_shape = (64,64,64,2) ) self.bn1 = tf.keras.layers.BatchNormalization() self.conv2 = tf.keras.layers.Conv3D( filters = 64, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn2 = tf.keras.layers.BatchNormalization() self.conv3 = tf.keras.layers.Conv3D( filters = 128, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn3 = tf.keras.layers.BatchNormalization() self.conv4 = tf.keras.layers.Conv3D( filters = 256, kernel_size = 3, padding='valid', data_format='channels_last' ) self.bn4 = tf.keras.layers.BatchNormalization() self.fc = {} for k in list(self.cat_cols.keys()): self.fc[k] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(self.cat_cols[k], activation='softmax') ]) for i in range(len(self.num_cols)): self.fc[self.num_cols[i]] = tf.keras.Sequential([ tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(1) ]) def call(self, x): x = self.conv1(x) x = self.bn1(x) x = self.ac(x) x = self.maxpool(x) x = self.conv2(x) x = self.bn2(x) x = self.ac(x) x = self.maxpool(x) x = self.conv3(x) x = self.bn3(x) x = self.ac(x) x = self.maxpool(x) x = self.conv4(x) x = self.bn4(x) x = self.ac(x) x = tf.keras.layers.GlobalAveragePooling3D()(x) out = {} for k in list(self.fc.keys()): out[k] = self.fc[k](x) return out # + colab={} colab_type="code" id="DytC-LQsPZSd" model2 = WrapperMyDNN(model) a = next(iter(test_iter)) t1 = (tf.cast(a['t1'], tf.float32)-t1_mean)/t1_std t2 = (a['t2']-t2_mean)/t2_std X = tf.concat([t1, t2], axis=4) # + colab={"base_uri": "https://localhost:8080/", "height": 283} colab_type="code" executionInfo={"elapsed": 1130, "status": "error", "timestamp": 1573572569092, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="-FnN0VNHOuHC" outputId="e39d17e4-1629-4f90-d4e3-e8341a78c956" model2.output # + colab={"base_uri": "https://localhost:8080/", "height": 477} colab_type="code" executionInfo={"elapsed": 4623, "status": "error", "timestamp": 1573572511655, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="1HQqLXqWWxXJ" outputId="1422a528-c7a4-4e68-db58-1485075ac412" from tf_explain.core.smoothgrad import SmoothGrad explainer = SmoothGrad() grid = explainer.explain((X, None), model2, 1, 20, 1.) 
# + colab={} colab_type="code" id="cP7thtjAqunC" output = [] output.append(['colname', 'mse']) for col in num_cols: mean = test_df[col].mean() mse_norm = np.mean(np.square(test_df[col]-mean)) output.append([col, mse_norm]) for col in list(cat_cols.keys()): mean = test_df[col].value_counts().idxmax() mse_norm = np.mean(test_df[col]==mean) output.append([col, mse_norm]) # + colab={} colab_type="code" id="Rw_KzTw6Q4wr" a = next(iter(test_iter)) # + colab={} colab_type="code" id="ben3zkRr32sD" t1 = (tf.cast(a['t1'], tf.float32)-t1_mean)/t1_std t2 = (a['t2']-t2_mean)/t2_std X = tf.concat([t1, t2], axis=4) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1117, "status": "ok", "timestamp": 1573490357380, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="622IROlaVWVu" outputId="9b50cdce-6128-4fc7-9757-80d1fdce306f" X.shape # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1122, "status": "ok", "timestamp": 1573490251609, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="ynswVv2MUwZi" outputId="276f09a5-4318-4fa2-a029-34a2a8c4dd3d" model2.model.summary() # + colab={"base_uri": "https://localhost:8080/", "height": 477} colab_type="code" executionInfo={"elapsed": 5540, "status": "error", "timestamp": 1573490614301, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="2RwCwYigS0UO" outputId="7f1ede39-9d2b-40f1-eed3-f1b72b4573f9" from tf_explain.core.smoothgrad import SmoothGrad explainer = SmoothGrad() grid = explainer.explain((X, None), model2, 1, 20, 1.) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 3698, "status": "ok", "timestamp": 1573490096976, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="b033WUfMTy_S" outputId="e71254e5-e55f-488b-a2ce-5ce5f7117b5d" # !ls /content/drive/My\ Drive/Capstone/ # + colab={"base_uri": "https://localhost:8080/", "height": 935} colab_type="code" executionInfo={"elapsed": 8973, "status": "ok", "timestamp": 1573490110703, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="-RohbTtqS4qd" outputId="497f7261-9a93-4151-a55e-d449ecfaa40b" IMAGE_PATH = '/content/drive/My Drive/Capstone/cat.jpg' model_res = tf.keras.applications.vgg16.VGG16(weights='imagenet', include_top=True) img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224)) img = tf.keras.preprocessing.image.img_to_array(img) model_res.summary() data = ([img], None) tabby_cat_class_index = 281 explainer = SmoothGrad() # Compute SmoothGrad on VGG16 grid = explainer.explain(data, model_res, tabby_cat_class_index, 20, 1.) 
explainer.save(grid, '.', 'smoothgrad.png') # + colab={} colab_type="code" id="8kJ4t1IgT_FF" from google.colab import files files.download('./smoothgrad.png') # + colab={} colab_type="code" id="zDC5mlR9UpWo" # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1594, "status": "ok", "timestamp": 1573489069509, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="-dvYng1iQBUQ" outputId="b4ddf708-f070-43a7-938b-12bc92419372" model2(X) # + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" executionInfo={"elapsed": 1585, "status": "ok", "timestamp": 1573488955519, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": -60} id="7JMy-Zmg3_8t" outputId="a0d77624-e846-451d-c8f8-84f587118f0b" 'model.call(X)['female'] # + colab={} colab_type="code" id="mIYd0IsfiKER" pd.DataFrame(output).to_csv(path_output + 'baseline.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 811, "status": "ok", "timestamp": 1572985942408, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": 300} id="j3kFZEWAQ5XU" outputId="c9769171-4ee2-4e40-d803-fcde7a266188" val_df.groupby(['married']).count() / val_df.shape[0] # + colab={"base_uri": "https://localhost:8080/", "height": 514} colab_type="code" executionInfo={"elapsed": 1403, "status": "ok", "timestamp": 1572884554781, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": 300} id="6ivDV4iphO9A" outputId="43969338-33ff-44f8-e207-2b03cf78d9a2" pd.merge(pd.DataFrame(output), df, left_on=0, right_on=0) # + colab={} colab_type="code" id="3TEP3PWnc4-g" cols = ['nihtbx_fluidcomp_uncorrected', 'nihtbx_cryst_uncorrected', 'nihtbx_pattern_uncorrected', 'nihtbx_picture_uncorrected', 'nihtbx_list_uncorrected', 'nihtbx_flanker_uncorrected', 'nihtbx_picvocab_uncorrected', 'nihtbx_cardsort_uncorrected', 'nihtbx_totalcomp_uncorrected', 'nihtbx_reading_uncorrected'] # + colab={} colab_type="code" id="GG6ZMe11hqLf" output = [] output.append(['colname', 'mse']) for col in cols: mean = val_df[col].mean() mse_norm = np.mean(np.square(val_df[col]-mean)) output.append([col, mse_norm]) # + colab={"base_uri": "https://localhost:8080/", "height": 390} colab_type="code" executionInfo={"elapsed": 793, "status": "ok", "timestamp": 1572621420115, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDViwxKXBjVKZfXMWGuXrQ48D62bye6HNutAOX0=s64", "userId": "06737229821528734971"}, "user_tz": 240} id="6YJAmfWtil8q" outputId="1cf07203-3ec7-4448-e8dd-1e7be5ab8636" pd.DataFrame(output) # + colab={} colab_type="code" id="OViFTzBKioI2" # + colab={} colab_type="code" id="AExg0jq563dS" model.get_layer('lastconv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import diffprivlib.tools as dp import matplotlib.pyplot as plt # %matplotlib inline 
# + na_values={ 'capital-gain': 99999, 'capital-loss': 99999, 'hours-per-week': 99, 'workclass': '?', 'native-country': '?', 'occupation': '?'} private_df = pd.read_csv('./data/adult_with_no_native_country.csv', skipinitialspace=True, na_values=na_values) synthetic_df = pd.read_csv('./out/correlated_attribute_mode/sythetic_data.csv', skipinitialspace=True) # - categorical_attributes = private_df.dtypes.loc[private_df.dtypes=='O'].index.values numerical_attributes = [col for col in private_df.columns if col not in categorical_attributes] # ### Consistency # + dp_sex_delta = [] for i in range(100): dp_sex_delta.append( private_df.shape[0] - dp.sum((private_df.sex=='Male').values, bounds=(0, 1)) - dp.sum((private_df.sex=='Female').values, bounds=(0,1))) fig, axes = plt.subplots(1,3, figsize=(16,6)) private_df.sex.hist(ax=axes[0]) synthetic_df.sex.hist(ax=axes[1]) pd.Series(np.hstack(dp_sex_delta)).hist(ax=axes[2]) # - # ### Run mean # note if bounds on dp.mean changed to (0.02, 0.98) it works # + priv_mean = private_df[numerical_attributes].mean() dp_mean = private_df[numerical_attributes].apply( lambda c: dp.mean( c.dropna().values, epsilon=1, bounds=tuple(c.quantile([0.02, 0.98]).values) ), axis=0) synth_mean = synthetic_df[numerical_attributes].mean() comp_means = pd.concat([priv_mean, synth_mean, dp_mean], axis=1) comp_means.columns = ['priv_mean', 'synth_mean', 'dp_mean'] comp_means.plot.bar(logy=True)#(figsize=(16,12)) # - # ### Random Query # + fig, axes = plt.subplots(1,2, figsize=(16,6)) private_df.loc[ (private_df.age.between(20, 30)) & (private_df.income == '>50K') & (private_df.sex == 'Male') & (private_df.fblwgt.between(50000, 75000)), 'martial-status' ].sort_values().hist(ax=axes[0]) synthetic_df.loc[ (synthetic_df.age.between(20, 30)) & (synthetic_df.income == '>50K') & (synthetic_df.sex == 'Male') & (synthetic_df.fblwgt.between(50000, 60000)), 'martial-status' ].sort_values().hist(ax=axes[1]) ymax = max(max(axes[0].get_ylim(), axes[1].get_ylim())) axes[0].set_ylim(0, ymax) axes[1].set_ylim(0, ymax) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.1 64-bit # name: python391jvsc74a57bd028dde4813e0e3ae4d0d68fcd2b1a59a20aa13de3dd42e33b497643f35eca055b # --- # # Traveling Salesperson Problem with Azure Quantum # # Section 1 # # In the [Quantum Computing Foundations](/learn/paths/quantum-computing-fundamentals?azure-portal=true) learning path, you have helped the spaceship crew optimize its asteroid mining expeditions and reparations of critical onboard emergency systems. Now you have been asked to code the navigation system for the spaceship to optimize its travel routes through the solar system. The spaceship needs to visit numerous planets, moons, and asteroids to mine and sell the space rocks. In this module, you'll use the Azure Quantum service to minimize the travel distance of the spaceship. # # The design and implementation of the navigation system is a case of the [**traveling salesperson** problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem?azure-portal=true). The objective is to find a path through a network of nodes such that an associated cost, such as the travel time or distance, is as small as possible. You can find applications of the traveling salesperson problem, or slight modifications of it, in a variety of fields. For example, in logistics, chemical industries, control theory, and bioinformatics. 
Even in your daily life you encounter it, say when you want to know the order in which you should visit school, the cinema, the office, and the supermarket to go the shortest way. # # In this module, we will cover the formulation of this **minimization problem** by modeling travel costs and penalty functions. Afterward, we will solve the problem using the Earth's Azure Quantum Optimization service. All the content will be explained in the context of a spaceship that has to travel through the solar system to mine and sell asteroids. # # ## Learning objectives # # After completing this module, you will be able: # # - to understand the traveling salesperson problem. # - to evaluate the problem complexity, solvers, and the tuning process. # - to model travel costs of the traveling salesperson. # - to formulate problem constraints into penalty functions. # - to represent the minimization problem using Azure Quantum. # - to use the Azure Quantum Optimization service to solve optimization problems. # - to read and analyze results returned by the Azure Quantum solvers. # # # ## Prerequisites # # - The latest version of the [Python SDK for Azure Quantum](/azure/quantum/optimization-install-sdk?azure-portal=true) # - [Jupyter Notebook](https://jupyter.org/install.html?azure-portal=true) # - An Azure Quantum workspace # - Basic linear algebra, only needing vector-matrix multiplications. # # If you don't have these tools yet, we recommend that you follow the [Get started with Azure Quantum](/learn/modules/get-started-azure-quantum/?azure-portal=true) module first. # # # Section 2 # # For the module, we define the traveling salesperson problem for the spaceship as follows: # # You have a set of $N$ locations (planets, moons, and asteroids) that need to be visited. As the newly appointed navigation engineer, you have to code the navigation system in order to find the shortest routes through the solar system. # # There are a number of conditions that you will need to fulfill in order for the navigation system to be considered 'successful' by the crew. These will be considered the problem constraints: # # - Conditions: # - `One location at a time constraint` - The spaceship can only be at one planet, moon, or asteroid at a time. No magic! # - `No disappearing constraint` - The spaceship has to be in a location; it cannot disappear! # - `No revisiting locations constraint` - The spaceship may **not** visit a location more than once, except the home base (start/end location). # - `Start and end locations constraints` - The spaceship has to start and finish in the home base, which will be Mars. # # - Please consider the following points for the module as well: # - We will ignore orbital mechanics and use real approximate mean distances between the planets, moons, and asteroids. Distances are in millions of kilometers. All locations have a terrestrial surface (no gas planet included, as that is hard to mine). # - We will not specify where the space rocks can and cannot be sold and mined. However, you may expand the constraints yourself to include such criteria. # - The $N$ locations are (in order): Mars, Earth, Earth's Moon, Venus, Mercury, Ganymede (moon), Titan (moon), Ceres (asteroid), Pallas (asteroid), Cybele (asteroid). Naming the planets will make it easier to read the solutions returned by the solver! # # # ## About the problem # The goal behind reprogramming the navigation system is to find a suitable interplanetary route to minimize the spaceship's travel distances.
Having a (near)-optimal route will not only lower the total distance but will also reduce the ship's energy consumption and the number of working hours for the crew. An efficient route is crucial to the overall productivity of the spaceship's mining ambitions! # # The main idea is to create a **cost function**, which is used to model the travel costs **and** travel constraints of the spaceship. We want to find a particular route that minimizes the cost function as much as possible. By incorporating the constraints into the cost function through penalties, the solver can be encouraged to settle for certain solution sets. Intuitively, you can imagine this as designing the rugged optimization landscape such that the 'good' solutions (routes) have lower cost values (minima) than 'bad' solutions. Even though the problem sounds easy to solve, for a large number of locations it becomes nearly impossible to find the optimal route (the global minimum). The difficulty is predominantly caused by: # - the rugged optimization landscape (non-convex). # - the combinatorial explosion of the solution space, meaning that you may not be able to check every possible route. # - the variables optimized for are non-continuous, $x_{v} \in \{0,1\}.$ # # > [!TECHNICAL NOTES] # > The traveling salesperson problem in which an optimal route needs to be found belongs to the set of [NP-hard problems](https://en.wikipedia.org/wiki/NP-hardness?azure-portal=true). First, there is no explicit algorithm with which you can find the optimal solution, meaning that you are faced with a search of the $(N-1)!$ (factorial number of routes to pass through all the locations) possible routes. Secondly, to verify whether a candidate solution to the traveling salesperson problem is optimal, you would have to compare it to all $(N-1)!$ other candidate solutions. Such a procedure would force you to compute all routes, which is an **extremely** difficult task for large $N$ (non-polynomial time $\mathcal{O}((N-1)!)$). # > The total number of routes for the traveling salesperson problem is dependent on the problem formulation. For generality, we will consider the directed version (`directed graph`) of the traveling salesperson problem, meaning that every route through the network is unique. Therefore you can assume that there are $(N-1)!$ different routes possible, since a starting location is given. # # ## Azure Quantum workspace # # As presented in the previous two modules, [Solve optimization problems by using quantum-inspired optimization](/learn/modules/solve-quantum-inspired-optimization-problems?azure-portal=true) and [Solve a job scheduling optimization problem by using Azure Quantum](/learn/modules/solve-job-shop-optimization-azure-quantum?azure-portal=true), the optimization problem can be submitted to Azure Quantum solvers. For this, we will use the Python SDK and format the traveling salesperson problem for the solver with cost function `terms`. # In order to submit the optimization problem to the Azure Quantum solver later, you will need to have an Azure Quantum workspace. Follow these [directions](/learn/modules/get-started-azure-quantum?azure-portal=true) to set one up if you don't have one already. # # Create a Python file or Jupyter Notebook, and be sure to fill in the details below to connect to the Azure Quantum solvers and import relevant Python modules. In case you don't recall your `subscription_id` or `resource_group`, you can find your Workspace's information on the Azure Portal page.
# Throughout the module we will be appending to the Python script below such that you can run and view intermediary results. # # ```python # # from azure.quantum import Workspace # from azure.quantum.optimization import Problem, ProblemType, Term, HardwarePlatform, Solver # from azure.quantum.optimization import SimulatedAnnealing, ParallelTempering, Tabu, QuantumMonteCarlo # from typing import List # # import numpy as np # import math # # workspace = Workspace ( # subscription_id = "", # Add your subscription id # resource_group = "", # Add your resource group # name = "", # Add your workspace name # location = "" # Add your workspace location (for example, "westus") # ) # # workspace.login() # ``` # The first time you run this code on your device, a window might prompt in your default browser asking for your credentials. # # ## Definitions # # A list to quickly look up variables and definitions: # # - Locations: The planets, moons, and asteroids. # - Origin: A location the starship departs from. # - Destination: A location the starship travels to. # - Trip: A journey between two locations. For example, traveling from Mars to Earth. # - Route: Defined as multiple trips. Traveling between more than two locations. For example, Mars → Earth → Ceres → Venus. # - $N$: The total number of locations. # - $C$: The travel cost matrix. In this module, the costs are distances given in millions of kilometers. # - $i$: Variable used to index the rows of the cost matrix, which represent origin locations. # - $j$: Variable used to index the columns of the cost matrix, which represent destination locations. # - $c_{i,j}$: The elements of the matrix, which represent the travel cost, that is the distance, between origin $i$ and destination $j$. # - $x_v$: The elements of a location vector. Each represents a location before or after a trip. These are the optimization variables. # - $w_b$: Constraint weights in the cost function. These need to be tuned to find suitable solutions. # # ## Problem formulation # # With the problem background dealt with, you can start looking at modeling it. In this unit, we will first go over how to calculate the spaceship's travel costs. This will become the foundation of the cost function. In later units, you will expand the cost function by incorporating penalty functions (constraints), to find more suitable routes throughout the solar system. # # Let's get to work! # # ### Defining the travel cost matrix $C$ # # Consider a single trip for the spaceship, from one planet to another. The first step is to give each location (planet, moon, asteroid) a unique integer in $\{0,N-1\}$ (N in total). Traveling from planet $i$ to planet $j$ will then have a travel distance # of $c_{i,j}$, where $i$ denotes the `origin` and $j$ the `destination`. # # $$ \text{The origin location } (\text{node}_i) \text{ with } i \in \{0,N-1\}.$$ # $$ \text{The destination location } (\text{node}_j) \text{ with } j \in \{0,N-1\}.$$ # $$ \text{Distance from } \text{location}_i \text{ to } \text{location}_j \text{ is } c_{i,j}.$$ # # Writing out the travel distances for every $i$-$j$ (origin-destination) combination gives the travel cost matrix $C$: # # Travel cost matrix: # $$C = \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix}. $$ # # Here, the rows are indexed by $i$ and represent the origin locations.
The columns are indexed by $j$ and represent the destination locations. For example, traveling from location $0$ to location $1$ is described by: # # $$ C(0,1) = c_{0,1}.$$ # # The travel cost matrix is your spaceflight "dictionary", the most important asset to you as the spaceship's navigation engineer. It contains the most crucial information in finding a suitable route through the solar system to sell and mine space rocks. The measure of the travel cost is arbitrary. In this module the cost we want to minimize is the distance, however you can also use time, space debris, space money, ..., or a combination of different costs. # # ### Defining the location vectors # # With a travel cost formulation between two locations complete, a representation of the origin and destination needs to be defined to select an element of the travel cost matrix. Remember, this is still for a single trip between location $i$ and location $j$. Selecting an element of the matrix can be achieved by multiplying the matrix with a vector from the left, and from the right! The left vector and right vector specify the origin and the destination, respectively. Consider the example where the spaceship travels from planet $0$ to planet $1$: # # $$ \text{Travel cost - location 0 to location 1 }= \begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} = c_{0,1}.$$ # # Recall that the spaceship can only be in one location at a time, and that there is only one spaceship, therefore only one element in the origin and destination vectors can equal 1, while the rest of them are 0. In other words, the sum of elements of the origin and destination vectors must equal 1. We will work this into a constraint in a later unit. # # Fantastic, you now know how to extract information from the travel cost matrix $C$ and express the travel cost for a single trip. However, it is necessary to express these ideas in a mathematical format for the solver. In the previous example the trip was hard-coded to go from location $0$ to location $1$. Let's generalize for any trip: # # $$ x_v \in \{0,1\} \text{ for } v \in \{0,2N-1\}, $$ # # $$ \text{Travel cost for single trip }= \begin{bmatrix} x_0 & x_1 & \dots & x_{N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N} \\ x_{N+1}\\ \vdots \\ x_{2N-1} \end{bmatrix}. $$ # # Each $x_v$ represents a location before or after a trip. The reason that the destination vector elements are indexed from $N+1$ to $2N$ instead of $0$ to $N-1$ is to differentiate the origin variables and destination variables for the solver. If this is not done, the solver would consider the origin and destination the same, meaning that the spaceship would never take off to its destination planet! When submitting the optimization problem, the solver will determine which $x_v$ is assigned a value 1 or 0, dependent on the respective trip's travel cost. 
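# As a quick, optional sanity check (not part of the module's navigation script), the vector-matrix-vector product above can be reproduced with NumPy. The 3-location cost matrix and the one-hot origin/destination vectors below are made-up illustration values only:
#
# ```python
# import numpy as np
#
# # Hypothetical travel cost matrix for N = 3 locations (arbitrary distances).
# C = np.array([[  0,  78, 225],
#               [ 78,   0, 170],
#               [225, 170,   0]])
#
# origin = np.array([1, 0, 0])       # x_0 ... x_{N-1}: the spaceship is at location 0
# destination = np.array([0, 1, 0])  # x_N ... x_{2N-1}: the spaceship travels to location 1
#
# trip_cost = origin @ C @ destination
# print(trip_cost)                   # 78, i.e. c_{0,1}
# ```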
To re-iterate an important point, the sum of the origin and destination vector elements must be 1 as the spaceship can not be in more than or less than one location at a time: # # $$ \text{Sum of the origin vector elements: }\sum_{v = 0}^{N-1} x_v = 1, $$ # $$ \text{Sum of the destination vector elements: }\sum_{v = N}^{2N-1} x_v = 1.$$ # # # Now that the basic mathematical formulations are covered, the scope can be expanded. As the spaceship's navigation engineer, it is your job to calculate a route between multiple planets, moons, and asteroids, to help make the space rock mining endeavour more succesful. With the travel cost matrix and the origin/destination vectors defined, you are equipped with the right mathematical tools to start looking at routes through the solar system. Let's take a look at how two trips can be modeled! # # # ### Defining the travel costs for a route # # To derive the travel costs for two trips, which can be considered a route, you will need a way to describe the **total** travel cost. As you might expect, the total cost of a route is simply the sum of the trip's travel costs that constitute the route (sum of the trips). Say you have a route $R$ in which the spacehip travels from location $1$ to location $3$ to location $2$. Then the total cost is the sum of the trips' costs: # # $$ \text{Cost of route: } R_{1-3-2} = c_{1,3}+c_{3,2}. $$ # # By taking a closer look at the equation a very important relation is found. The destination for the first trip and the origin for the second trip are the same. Incorporating this relation in the cost function will help reduce the number of variables the solver has to optimize for. As a result of the simplification, the optimization problem becomes much easier and has an increased probability of finding suitable solutions! # # > [!TECHNICAL NOTE] # > For the 2-trip example this recurrence relation reduces the solver search space size (number of possible routes/solutions) from $2^N \cdot 2^N \cdot 2^N \cdot 2^N$ to $2^N \cdot 2^N \cdot 2^N$, a factor $N$ ($N$ = 10 in this module) difference. Without considering constraints, each vector element can take two values (${0,1}$) and has a length $N$, therefore each vector multiplies the solution set by $2^N$. The reduction in variables becomes even more apparent for longer routes. Visiting the 10 planets, moons, and asteroids as in this module, the relation reduces the search space size from $2^{20N}$ to $2^{11N}$! The number of variables optimized for decreases from $200$ to $110$, respectively. # # Recall that the travel costs can be written with vectors and matrices. Then the total travel cost of the route is: # # $$ \text{Cost of route } = \begin{bmatrix} x_0 & x_1 & \dots & x_{N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N} \\ x_{N+1}\\ \vdots \\ x_{2N-1} \end{bmatrix} + \begin{bmatrix} x_{N} & x_{N+1} & \dots & x_{2N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{2N} \\ x_{2N+1}\\ \vdots \\ x_{3N-1} \end{bmatrix}.$$ # # In the equation the destination vector of the first trip and the origin vector of the second trip contain the same variables, making use of the recurrence relation. 
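# Continuing the small numpy illustration from above (same made-up 4x4 cost matrix), the 2-trip route $1 \rightarrow 3 \rightarrow 2$ can be checked numerically: the one-hot vector for location 3 is reused as the destination of the first trip and the origin of the second, and the total matches $c_{1,3} + c_{3,2}$.
#
# ```python
# import numpy as np
#
# C = np.array([[ 0, 10, 20, 30],
#               [10,  0, 15, 25],
#               [20, 15,  0, 12],
#               [30, 25, 12,  0]])
# N = C.shape[0]
#
# loc1, loc2, loc3 = np.eye(N)[1], np.eye(N)[2], np.eye(N)[3]
#
# # Route 1 -> 3 -> 2: the vector for location 3 is shared between the two trips.
# route_cost = loc1 @ C @ loc3 + loc3 @ C @ loc2
# print(route_cost, C[1, 3] + C[3, 2])   # both equal 37
# ```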
Generalizing the 2-trip route to a $N$-trip route is achieved by including more vector-matrix multiplications in the addition. Written out as a sum: # # $$\text{Travel cost of route } = \sum_{k=0}^{N-1} \left( \begin{bmatrix} x_{Nk} & x_{Nk+1} & \dots & x_{Nk+N-1} \end{bmatrix} \begin{bmatrix} c_{0,0} & c_{0,1} & \dots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \dots & c_{1,N-1} \\ \vdots & \ddots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \dots & c_{N-1,N-1} \end{bmatrix} \begin{bmatrix} x_{N(k+1)} \\ x_{N(k+1)+1}\\ \vdots \\ x_{N(k+1)+N-1} \end{bmatrix} \right),$$ # # which can equivalently be written as: # # $$\text{Travel cost of route} = \sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right),$$ # # where the $x$ variables indices are dependent on the trip number $k$, the total number of locations $N$, and the origin ($i$) and destination ($j$) locations. # # Great! A function to calculate the travel costs for the spaceship's route has been found! Because you want to minimize (denoted by the 'min') the total travel cost with respect to the variables $x_v$ (written underneath the 'min'), we will write the foundation of the cost function as follows: # # $$\text{Travel cost of route} := \underset{x_0, x_1,\dots,x_{(N^2+2N)}}{min}\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right).$$ # # This equation will not make up the entire cost function for the solver. Penalty functions have to added to it for the constraints, otherwise the solver will return invalid solutions. These will be added to the cost function in upcoming units. # # The crew is very excited with the progress you have already booked. You are ready to program the travel costs into navigation system you are building for the spaceship. Time to write some code! # # # ### Progress on the navigation system's cost function # # - The cost function contains: # - The `travel costs`. # # $$ \text{Cost function so far: } \underset{x_0, x_1,\dots,x_{(N^2+2N)}}{min} \left(\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right)\right) $$ # # ### Coding the travel costs # # For the solvers to find a suitable solution, you will need to specify how it should calculate the travel cost for a route. The solver requires you to define a cost term for each possible trip-origin-destination combination given by the variables $k$, $i$, $j$, respectively. As described by the cost function above, the weighting is simply the $c_{i,j}$-th element of the cost matrix, resembling the distance between the locations. For example, weighting a trip from location $1$ to location $2$ in the second trip ($k=1$) has the following weighting: # # $$ c_{i,j} \cdot x_{Nk+i} \cdot x_{N(k+1)+j} = c_{1,2} \cdot x_{N+1} \cdot x_{N+2}. $$ # # The $x_v$ variables have an $N$ term because for the solver we need to differentiate between the variables over the trips. In other words, after each trip there are $N$ new variables that represent where the spaceship can travel to next. # # Below, you can find the code snippet that will be added to the Python script. We define a problem instance through the `OptProblem` function, in which we will continue to append pieces of code throughout the module's units. Later, this problem will be submitted to the Azure solvers. If you want to see how the weights are assigned to each $ x_{Nk+i} \cdot x_{N(k+1)+j}$ combination, you can uncomment the last lines in the code. 
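# If the flattened variable indices feel abstract, the small helper below (for illustration only, it is not part of the module's script) spells out the mapping used by the `indices` argument in the code below: the origin of trip $k$ at location $i$ is variable $x_{Nk+i}$, and the destination at location $j$ is variable $x_{N(k+1)+j}$.
#
# ```python
# N = 10   # number of locations in this module
#
# def origin_index(k: int, i: int) -> int:
#     """Index of x_{Nk+i}: the origin of trip k is location i."""
#     return N * k + i
#
# def destination_index(k: int, j: int) -> int:
#     """Index of x_{N(k+1)+j}: the destination of trip k is location j."""
#     return N * (k + 1) + j
#
# # Trip k=0 from Mars (0) to Earth (1): the weight c_{0,1} multiplies x_0 and x_11.
# print(origin_index(0, 0), destination_index(0, 1))   # 0 11
# ```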
Lastly, since the optimization variables $x_v$ can take values ${0,1}$, the problem type falls into the `pubo` category (polynomial unconstrained binary optimization). # # ```python # # ##### Define variables # # # The number of planets/moons/asteroids. # NumLocations = 10 # # # Location names. Names of some of the solar system's planets/moons/asteroids. # LocationNames = {0:'Mars', 1:'Earth', 2:"Earth's Moon", 3:'Venus', 4:'Mercury', 5:'Ganymede', 6:'Titan', 7:'Ceres', 8:'Pallas', 9:'Cybele'} # # # Approximate mean distances between the planets/moons/asteroids. Note that they can be very innacurate as orbital mechanics are ignored. # # This is a symmetric matrix since we assume distance between planets is constant for this module. # CostMatrix = np.array([ [0, 78, 2, 120, 170, 550, 1200, 184, 600, 1.5 ], # [78, 0, 0.5, 41, 92, 640, 1222, 264, 690, 0.25 ], # [2, 0.5, 0, 40, 91, 639, 1221, 263, 689, 0.25 ], # [120, 41, 40, 0, 50, 670, 1320, 300, 730, 41.5 ], # [170, 92, 91, 50, 0, 720, 1420, 400, 830, 141.5 ], # [550, 640, 639, 670, 720, 0, 650, 363, 50, 548 ], # [1200, 1222, 1221, 1320, 1420, 650, 0, 1014, 25, 625 ], # [184, 264, 263, 300, 400, 363, 1014, 0, 100, 400 ], # [600, 690, 689, 730, 830, 50, 25, 100, 0, 350 ], # [1.5, 0.25, 0.25, 41.5, 141.5, 548, 625, 400, 350, 0 ] # ]) # # ##### If you want try running with a random cost matrix, uncomment the following: # #maxCost = 10 # #CostMatrix = np.random.randint(maxCost, size=(NumLocations,NumLocations)) # # ############################################################################################ # ##### Define the optimization problem for the Quantum Inspired Solver # def OptProblem(CostMatrix) -> Problem: # # # #'terms' will contain the weighting terms for the solver! # terms = [] # # ############################################################################################ # ##### Cost of traveling between locations # for k in range(0,len(CostMatrix)): # For each trip (there are N trips to pass through all the locations and return to home base) # for i in range(0,len(CostMatrix)): # For each origin (reference location) # for j in range(0,len(CostMatrix)): # For each destination (next location w.r.t reference location) # # #Assign a weight to every possible trip from location i to location j - for any combination # terms.append( # Term( # c = CostMatrix.item((i,j)), # Element of the cost matrix # indices = [i+(len(CostMatrix)*k), j+(len(CostMatrix)*(k+1))] # +1 to denote dependence on next location # ) # ) # ##----- Uncomment one of the below statements if you want to see how the weights are assigned! ------------------------------------------------------------------------------------------------- # #print(f'{i+(len(CostMatrix)*k)}, {j+(len(CostMatrix)*(k+1))}') # Combinations between the origin and destination locations # #print(f'For x_{i+(len(CostMatrix)*k)}, to x_{j+(len(CostMatrix)*(k+1))} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format for the solver (as formulated in the cost function) # #print(f'For node_{i}, to node_{j} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format that is easier to read for a human # # return Problem(name="Spaceship navigation system", problem_type=ProblemType.pubo, terms=terms) # # OptimizationProblem = OptProblem(CostMatrix) # # ``` # # # ## Next Steps # # As navigation engineer of the spaceship you have completed the initial work on the route planner. 
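# Before looking at what is still missing, you can sanity-check the cost formulation without calling a solver. The short sketch below is not part of the module's script; it assumes the `CostMatrix` and `NumLocations` defined above, evaluates the triple-sum travel cost for a hand-written route, and confirms that it equals the plain sum of distances along that route.
#
# ```python
# import numpy as np
#
# N = NumLocations
# route = [0, 1, 3, 2, 4, 5, 8, 6, 9, 7, 0]   # an arbitrary example route: Mars -> ... -> Mars
#
# # Binary variables: x[N*k + i] = 1 if the spaceship is at location i after k trips.
# x = np.zeros(N * (N + 1), dtype=int)
# for k, loc in enumerate(route):
#     x[N * k + loc] = 1
#
# # Triple-sum cost function from this unit.
# cost = sum(x[N*k + i] * x[N*(k+1) + j] * CostMatrix[i, j]
#            for k in range(N) for i in range(N) for j in range(N))
#
# # Direct sum of the distances along the route.
# direct = sum(CostMatrix[route[k], route[k + 1]] for k in range(N))
# print(cost, direct)   # the two values agree
# ```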
But with the current code, the Azure Quantum solvers on Earth would send routes to the spaceship that would make no sense! Currently, all $x_v$ would be assigned a value 0 by the solver because that would yield a value-0 travel cost! To avoid this from happening, you will need to implement the constraints into the cost function. These constraints will penalize the routes that are incorrect. More on this in the next units! # # # # Section 3 # # With the foundations of the navigation system in place, it is time to incorporate the constraints. The initial results made the crew enthousiastic! However, before launching the final system, you will need to implement penalty functions to ensure that the Azure Quantum solvers generate feasible routes through the solar system. # # In this unit we will consider the *"one location at a time"* constraint. Recall that the spaceship can only be in one location at a time. It is impossible for the spasceship to be on both Earth and Mars simultaneously! You will need to program this into the navigation system by penalizing the solvers for routes in which such magic happens. # # >[!Note] # >From this unit onwards, the origin and destination vectors will be referred to as **location vectors** due to the recurrence relation presented in the previous unit. The location vector is used to represent the location of the spaceship after a number of trips. For example, starting from the home base will give the location vector 0. After completing the first trip, the spaceship will be at a location given by location vector 1. # # # ## "One location at a time" constraint # # The spaceship can only be at one planet, moon, or asteroid at a time. So far, the defined cost function only contains information about the travel costs, nothing about whether the spaceship is at multiple locations at once. To mathematically express this idea, it must be true that only one element in a location vector equals 1. For example: # # $$ \text{Incorrect location vector: } \begin{bmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, $$ # # should not be allowed as the spaceship is at the first two locations simultanesouly. Only one element may be non-zero: # # $$ \text{Correct location vector: } \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}. $$ # # One way of designing this constraint would be look at the sum of the location vector elements, which must equal 1: # # $$ \text{Location vector 0: (home base)} \hspace{0.5cm} x_0 + x_1 + \dots + x_{N-1} = 1, $$ # $$ \text{Location vector 1: } \hspace{0.5cm} x_{N} + x_{N+1} + \dots + x_{2N-1} = 1, $$ # $$ \text{Location vector N (home base): } \hspace{0.5cm} x_{N^2} + x_{N^2+1} + \dots + x_{N^2 + N-1} = 1. $$ # # Enforcing this constraint over all trips would then give (N+1 because the spaceship returns to the home base on Mars): # # $$ \text{For all locations: } \hspace{0.5cm} x_0 + x_1 + \dots + x_{N^2 + N-1} = N+1. $$ # # This equation is a valid way to model the constraint. However, there is a drawback to it as well. In the equation, only the individual locations are penalized. There is no information about being in two locatoins at once! The following values satisfy the equation: # # $$ \text{If } x_0=1, x_{1}=1, \dots, x_{N}=1, \text{ and the remaining } x_{N+1}=0, x_{N+3}=0, \dots, x_{N^2+N-1}=0,$$ # # but violate the constraint we are trying to model. With these values, the spaceship is at all planets/moons/asteroids at once before the first trip (location vector 0). Let's rethink. 
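# A tiny numeric check (three locations, independent of the module's script) makes the flaw explicit, and also previews the pairwise product penalty developed next, which does catch the invalid assignment:
#
# ```python
# import numpy as np
#
# N = 3
# x = np.zeros(N * (N + 1), dtype=int)
# x[0:N + 1] = 1   # the problematic values from above: x_0 = x_1 = ... = x_N = 1, the rest 0
#
# # The single summation constraint is satisfied...
# print(x.sum(), N + 1)        # 4 4  -> constraint fulfilled
#
# # ...even though location vector 0 puts the spaceship at all three locations at once.
# print(x[0:N])                # [1 1 1]
#
# # The pairwise product penalty (one term per i < j within each location vector) is not fooled:
# penalty = sum(x[N*l + i] * x[N*l + j]
#               for l in range(N + 1) for i in range(N) for j in range(i + 1, N))
# print(penalty)               # 3 > 0, so this assignment would be penalized
# ```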
Consider a small example of three locations (a length-3 location vector). Then if the spaceship is in location 0, it can not be in location $1$ or $2$. Instead of using a summation to express this, another valid way would be to use products. The product of elements of a location vector must always equal zero, regardless where the spaceship is, because only one of the three $x_v$ elements can take value 1. Writing out the products of a single location vector gives: # # $$ x_0 \cdot x_1 = 0,$$ # $$ x_0 \cdot x_2 = 0,$$ # $$ x_1 \cdot x_2 = 0.$$ # # In this format, the constraint is much more specific and stringent for the solver. These equations reflect the interrelationships between the locations more accurately than the summation. They can be implemented in a way that distinguish the variables for different location vectors, unlike the summation which adds all the variables of the location vectors together. As a result of describing the constraint more specifically, the solver will return better solutions. # # >[!Note] # >We do not want to weight combinations between locations more than once, as this would lead to inbalances (assymmetries) in the cost function. Tuning the weights of an imbalanced cost function tends to be more difficult. We therefore exclude the reverse combinations: # >$$ x_{1}\cdot x_{0}, $$ # >$$ x_{2}\cdot x_{0}, $$ # >$$ x_{2}\cdot x_{1}. $$ # # The next step consists on generalizing the one location at a time constraint for the spaceship, which has to pass by all locations in the solar system and return to home base (N+1 locations need to be visited). In the equation below, $l$ iterates over the number of location vectors, while $i$ and $j$ iterate over the vector elements: # # $$ \text{One location at a time constraint: }0=\sum_{l=0}^{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} x_{(i+Nl)} \cdot x_{(j+Nl)} \text{ with } \{ i,j | i By looking at the code and previous sections you come to the conclusion that the minimal cost is obtained by setting all $x_v$ to 0. # # It is your task to make sure that the spaceship does not dissapear in the navigation system. In this unit, we will cover the *"no dissapearing"* constraint. By incorporating negative penalty terms (rewards) in the cost function, the solver will be encouraged to return routes in which the spaceship will consistently be in some location in our solar system. These negative cost terms will locally decrease cost function value(s) for suitable solution configurations. The aim is that these function valleys in the optimization landscape become low enough for the solver to settle in, depending on the constraints and weighting. # # # ## "No dissapearing" constraint # # By incorporating negative terms in the cost function, we can encourage the solver to return a particular set of solutions. To demonstrate this point, take the following optimization problem: # # $$f(\boldsymbol{x}) := \underset{x_0, x_1, x_2}{min} x_0 + x_1 - x_2,$$ # $$\text{with } x_0, x_1, x_2 \in \{0,1\}.$$ # # The minimum value for this example is achieved for the solution $x_0$ = 0, $x_1$ = 0, $x_2$ = 1, with the optimal function value equal to -1. Here the negatively weighted term encourages $x_2$ to take a value 1 instead of 0, unlike $x_0$ and $x_1$. With this idea in mind, you can prevent the spaceship from dissapearing in the navigation system. # # If the spaceship has to visit $N+1$ locations (+1 because return to home base) for a mining expedition, then $N+1$ of the $x_v$ should be assigned a value 1. 
Written as an equation for the $N(N+1)$ variables of a route ($N+1$ vectors, each with with $N$ elements gives $N(N+1)$): # # $$ \sum_{v=0}^{N(N+1)-1} x_v = N+1.$$ # # You could split this equation for each location vector separately, but if you keep the constraint linear in $x_k$ then the resulting cost function will be the same. Moving the variables to the right side of the equation for negative weighting gives: # # $$ 0 = (N+1) -\left( \sum_{v=0}^{N(N+1)-1} x_v \right).$$ # # As you may be aware, there is no guarantee which particular $x_v$ the solver will assign a value 1 in this equation. However, in the previous constraint the spaceship is already forced to be in a maximum of one location at a time. Therefore, with the cost function weights tuned, it can be assumed that only one $x_v$ per location vector will be assigned a value 1. # # To summarize, the `one location at a time constraint` penalizes the solver for being in more than one node at a time, while the `no dissapearing constraint` rewards the solver for being at as many locations as possible. The weights of the constraints will effectively determine how they are satisfied, a balance between the two needs to be found such that both are adhered to. # # >[!Note] # > You can combine the `one location at a time constraint` and `no dissapearing constraint`. For explanatory purposes these two are split, however if you realize that -$x_v^2$ = -$x_v$, then you can expand the 'if' statement in the previous constraint for 'i==j'. # # ### Progress on the navigation system's cost function # # - The cost function contains: # - The `travel costs`. # - The `one location at a time constraint`, with constraint weight $w_1$. # - The `no dissapearing constraint`, with constraint weight $w_2$. # # $$ \text{Cost function so far: } \underset{x_0, x_1,\dots,x_{(N^2+2N)}}{min} \left(\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right) + w_1 \left( \sum_{l=0}^{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} x_{(i+Nl)} \cdot x_{(j+Nl)} \text{ with } \{ i,j | i Currently, the solver may provide solutions in which the spaceship revisits a location. For example, there is no cost associated with staying in the same location, as shown by the diagonal elements of the cost matrix. It is necessary to penalize routes in which revisits occur. # # # ## "No revisiting locations" constraint # # The navigation system may generate routes in which a location is visited more than once. Similarly to previous modules, we will have to penalize routes in which this occurs. To start, let's consider the fourth location $x_3$ (Venus) for each trip: # # $$ x_3, x_{3+N}, x_{3+2N}, ... $$ # # For a valid route, the spaceship will only have passed through this location once. That means that only one of these variables is allowed to take a value 1, the remaining have to be 0. For example, if the spaceship visits Venus after the first trip, then $x_{3+N}=1$ and $x_3=0$, x_{3+2N}=0$, and so on. As done similarly with the one location at a time constraint, the product of these variables must equal 0. The following can then be derived: # # $$ x_3 \cdot x_{3+N} \cdot x_{3+2N} \cdot ... = 0. $$ # # Even though this constraint might seem correct, it is not stringent enough. If only one of these variables is 0, then the constraint is already satisfied. 
However, the multiplication of variables can be split into their individual products: # # $$ x_3 \cdot x_{3+N} =0,$$ # $$ x_3 \cdot x_{3+2N} = 0,$$ # $$ x_{3+N} \cdot x_{3+2N} = 0,$$ # $$ \vdots $$ # # By weighting these terms in the cost function, at least one of the two variables for each respective equation has to be zero. The solver will therefore be penalized if more than one variable takes the value 1. Continuing this relation for all $N$ trips and all $N$ locations yields the penalty function: # # $$ 0 = \large{\sum}_{p=0}^{N^2+N-1}\hspace{0.25cm} \large{\sum}_{f=p+N,\hspace{0.2cm} \text{stepsize: } N}^{N^2-1} \hspace{0.35cm} (x_p \cdot x_f),$$ # # in which the first summation ($p$) assigns a reference location $x_p$, and the second summation assigns the same location but after a number of trips $x_f$ (multiple of $N$ due to the stepsize). # # # ### Progress on the navigation system's cost function # # - The cost function contains: # - The `travel costs`. # - The `one location at a time constraint`, with constraint weight $w_1$. # - The `no dissapearing constraint`, with constraint weight $w_2$. # - The `no revisiting locations constraint`, with constraint weight $w_3$. # # $$ \text{Cost function so far: } \underset{x_0, x_1,\dots,x_{(N^2+2N)}}{min} \left(\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right) + w_1 \left( \sum_{l=0}^{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} x_{(i+Nl)} \cdot x_{(j+Nl)} \text{ with } \{ i,j | i[Note!] # >The negative weights assigned for these variables here will contribute to the negative weights assigned to the respective variables in the `no dissapearing constraint`. As a best-practice, you can combine these terms to reduce the length of the cost function. # # Another way of enforcing the `start and end location constraint` is to directly tell the solver to assign the respective variables a value equal to 1. The [`Problem.set_fixed_variables`](/azure/quantum/optimization-problem) is handy when you know what values certain variables **have** to be. Also, the cost function is automatically simplified for you when calling the method as the variables are filled in. The reason why the method is not implemented in this module is because you would not be able to encourage the spaceship to choose between a set of locations for a trip, a consequence of hard-coding the variable values. Even though promoting a set of locations is not part of the problem statement, it is an easy expansion of the `start and end location constraint` considered in this unit. # # # ### Progress on the navigation system's cost function # # - The cost function contains: # - The `travel costs`. # - The `one location at a time constraint`, with constraint weight $w_1$. # - The `no dissapearing constraint`, with constraint weight $w_2$. # - The `no revisiting locations constraint`, with constraint weight $w_3$. # - The `start and end locations constraints`, with constraint weights $w_4$ and $w_5$. # # $$ \text{Final cost function: } \underset{x_0, x_1,\dots,x_{(N^2+2N)}}{min} \left(\sum_{k=0}^{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \left( x_{Nk+i}\cdot x_{N(k+1)+j}\cdot c_{i,j} \right) + w_1 \left( \sum_{l=0}^{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} x_{(i+Nl)} \cdot x_{(j+Nl)} \text{ with } \{ i,j | i[!Note] # >It is good practice to use `int`s instead of `float`s for the cost function weights. Doing so will give more accurate results. It is not mandatory, however. 
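# For orientation before the complete listing, here is a compact sketch, not the module's own code (which follows below with all five constraints), of how the three route constraints derived so far translate into additional `Term` objects. The weights `w_1`, `w_2`, `w_3` are placeholders that still need tuning, and the constant $N+1$ from the no-disappearing formula is omitted because it only shifts the cost value.
#
# ```python
# def add_constraint_terms(terms: List[Term], N: int, w_1, w_2, w_3):
#     """Append the three route constraints to an existing list of cost terms (sketch)."""
#
#     # One location at a time: penalize x_{i+Nl} * x_{j+Nl} for i < j within each location vector l.
#     for l in range(N + 1):
#         for i in range(N):
#             for j in range(i + 1, N):
#                 terms.append(Term(c=w_1, indices=[N*l + i, N*l + j]))
#
#     # No disappearing: reward every variable with a negative linear term (-w_2 * x_v).
#     for v in range(N * (N + 1)):
#         terms.append(Term(c=-w_2, indices=[v]))
#
#     # No revisiting locations: penalize the same location appearing in two different location vectors.
#     for i in range(N):
#         for l1 in range(N + 1):
#             for l2 in range(l1 + 1, N + 1):
#                 terms.append(Term(c=w_3, indices=[N*l1 + i, N*l2 + i]))
# ```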
# # ``` python # # ### FILL IN THE CONSTRAINT WEIGHTS w_1, w_2, w_3, w_4, AND w_5 IN THE PROBLEM DEFINITION TO SUBMIT! # # from azure.quantum import Workspace # from azure.quantum.optimization import Problem, ProblemType, Term, HardwarePlatform, Solver # from azure.quantum.optimization import SimulatedAnnealing, ParallelTempering, Tabu, QuantumMonteCarlo # from typing import List # # import numpy as np # import math # # workspace = Workspace ( # subscription_id = "", # Add your subscription id # resource_group = "", # Add your resource group # name = "", # Add your workspace name # location = "" # Add your workspace location (for example, "westus") # ) # # workspace.login() # # ##### Define variables # # # The number of planets/moons/asteroids. # NumLocations = 10 # # # Location names. Names of some of the solar system's planets/moons/asteroids. # LocationNames = {0:'Mars', 1:'Earth', 2:"Earth's Moon", 3:'Venus', 4:'Mercury', 5:'Ganymede', 6:'Titan', 7:'Ceres', 8:'Pallas', 9:'Cybele'} # # # Approximate mean distances between the planets/moons/asteroids. Note that they can be very innacurate as orbital mechanics are ignored. # # This is a symmetric matrix since we assume distance between planets is constant for this module. # CostMatrix = np.array([ [0, 78, 2, 120, 170, 550, 1200, 184, 600, 1.5 ], # [78, 0, 0.5, 41, 92, 640, 1222, 264, 690, 0.25 ], # [2, 0.5, 0, 40, 91, 639, 1221, 263, 689, 0.25 ], # [120, 41, 40, 0, 50, 670, 1320, 300, 730, 41.5 ], # [170, 92, 91, 50, 0, 720, 1420, 400, 830, 141.5 ], # [550, 640, 639, 670, 720, 0, 650, 363, 50, 548 ], # [1200, 1222, 1221, 1320, 1420, 650, 0, 1014, 25, 625 ], # [184, 264, 263, 300, 400, 363, 1014, 0, 100, 400 ], # [600, 690, 689, 730, 830, 50, 25, 100, 0, 350 ], # [1.5, 0.25, 0.25, 41.5, 141.5, 548, 625, 400, 350, 0 ] # ]) # # ##### If you want try running with a random cost matrix, uncomment the following: # #maxCost = 10 # #CostMatrix = np.random.randint(maxCost, size=(NumLocations,NumLocations)) # # ############################################################################################ # ##### Define the optimization problem for the Quantum Inspired Solver # def OptProblem(CostMatrix) -> Problem: # # #'terms' will contain the weighting terms for the solver! # terms = [] # # ############################################################################################ # ##### Cost of traveling between locations # for k in range(0,len(CostMatrix)): # For each trip (there are N trips to pass through all the locations and return to home base) # for i in range(0,len(CostMatrix)): # For each origin (reference location) # for j in range(0,len(CostMatrix)): # For each destination (next location w.r.t reference location) # # #Assign a weight to every possible trip from location i to location j - for any combination # terms.append( # Term( # c = CostMatrix.item((i,j)), # Element of the cost matrix # indices = [i+(len(CostMatrix)*k), j+(len(CostMatrix)*(k+1))] # +1 to denote dependence on next location # ) # ) # ##----- Uncomment one of the below statements if you want to see how the weights are assigned! 
------------------------------------------------------------------------------------------------- # #print(f'{i+(len(CostMatrix)*k)}, {j+(len(CostMatrix)*(k+1))}') # Combinations between the origin and destination locations # #print(f'For x_{i+(len(CostMatrix)*k)}, to x_{j+(len(CostMatrix)*(k+1))} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format for the solver (as formulated in the cost function) # #print(f'For location_{i}, to location_{j} in trip number {k} costs: {CostMatrix.item((i,j))}') # In a format that is easier to read for a human # # ############################################################################################ # ##### Constraint: One location at a time constraint - spaceship can only be in 1 location at a time. # for l in range(0,len(CostMatrix)+1): # The total number of locations that are visited over the route (N+1 because returning to home base) # for i in range(0,len(CostMatrix)): # For each location (iterate over the location vector) # for j in range(0,len(CostMatrix)): # For each location (iterate over the location vector) # if i!=j and i,1988-06-30 116-51-1291,(07)-2905-7818,Buffalo (New York),Married,,1974-11-27 177-44-1159,(09)-5062-6922,Detroit (Michigan),Single,,1969-07-27 116-81-1883,(03)-1350-7402,Chandler (Arizona),Divorced,,1989-11-26 429-83-1156,(09)-5794-9470,Scottsdale (Arizona),Married,,1978-05-05 381-54-1605,(05)-5330-5036,Albuquerque (New Mexico),Single,,1982-05-02 224-99-1262,(05)-3339-3262,Milwaukee (Wisconsin),Married,,1988-01-02 301-25-1394,(07)-4370-8507,Houston (Texas),Single,,1972-11-12 323-51-1535,(03)-5179-6500,Las Vegas (Nevada),Married,,1978-01-22 216-85-1367,(07)-2905-9114, (Minnesota),Divorced,,1973-12-16 166-82-1605,(09)-6473-4208,Irvine (California),Married,,1975-03-27 116-54-1259,(04)-3468-6535,San Bernardino (California),Divorced,,1980-02-01 224-55-1496,(03)-8685-6502,Aurora (Colorado),Common-Law,Tamesha Lawlor,1983-10-02 177-44-1054,(08)-5902-5867,El Paso (Texas),Single,Millie Lasher,1976-03-29 320-54-1856,(04)-3858-1079,Houston (Texas),Divorced,,1983-09-07 144-25-1448,(09)-5179-2725,Durham (North Carolina),Divorced,,1985-07-07 144-54-1840,(03)-7508-9910,Orlando (Florida),Single,,1982-06-08 224-25-1891,(05)-9333-5713,Las Vegas (Nevada),Divorced,,1971-12-19 216-82-1048,(04)-1199-9661,Las Vegas (Nevada),Married,,1972-05-07 116-93-1394,(05)-9333-4606,Tampa (Florida),Married,,1973-08-12 279-81-1912,(08)-2905-8942,Winston–Salem (North Carolina),Single,,1977-10-07 425-82-1851,(03)-5794-3345,Aurora (Colorado),Married,,1972-02-26 339-74-1545,(08)-4858-6766,Atlanta (Georgia),Married,,1975-07-25 368-83-1054,(05)-7508-4870,Omaha (Nebraska),Married,,1975-10-08 339-82-1442,(09)-5854-7191,Henderson (Nevada),Single,Sondra Pike,1980-06-25 391-55-1442,(05)-6865-1079,Baton Rouge (Louisiana),Divorced,,1984-04-21 177-23-1359,(07)-5854-6781,St. Louis (Missouri),Single,Gigi Ragland,1977-01-27 238-81-1227,(03)-9999-9910,Laredo (Texas),Common-Law,Ronald Signorelli,1977-06-13 287-74-1145,(03)-5794-9130,Fremont (California),Single,,1976-02-21 # %%writefile "C:\Users\HP\01. Projectos de jupyter\01. Ciencia de los datos\franquicias.csv" Capital One,3608-2181-5030-1465,2023-05-27,4538,337,1400 USAA,3608-1395-4951-1668,2024-03-11,5101,240,1900 U.S. 
Bank,3608-1333-4394-1935,2020-03-08,4814,231,2000 Capital One,3608-1721-4951-1198,2019-05-30,3925,366,1600 PNC,3608-1782-5015-1001,2025-06-08,4241,048,1200 Capital One,3608-2181-4288-1394,2021-12-08,4253,556,1700 Wells Fargo,3608-2596-5634-1497,2018-07-15,5205,140,1200 USAA,3608-1395-5691-1428,2023-04-20,2111,512,1400 American Express,3608-2968-5745-1804,2025-12-01,5065,993,1000 American Express,3608-1333-4580-1185,2025-01-13,2377,277,1300 Discover,3608-1782-4458-1383,2018-07-19,4623,863,1500 Chase,3608-2181-5988-1718,2024-03-23,2987,452,1000 USAA,3608-2751-5015-1278,2024-04-25,2744,831,1700 U.S. Bank,3608-1333-4005-1623,2022-11-10,2117,373,1400 MasterCard,3608-2751-4236-1394,2022-07-22,7943,109,1200 MasterCard,3608-2588-5988-1551,2021-11-11,2172,945,1900 USAA,3608-2800-5459-1497,2024-04-02,7568,458,1400 Discover,3608-1395-5632-1976,2022-12-22,5884,272,1200 BarclayCard US,3608-2588-5394-1381,2020-12-24,5280,237,1200 American Express,3608-2596-5551-1572,2024-07-28,4107,438,1500 BarclayCard US,3608-2800-5551-1351,2020-09-30,4174,318,1700 Discover,3608-1682-4160-1476,2021-05-08,2135,864,2000 U.S. Bank,3608-1682-5152-1053,2024-04-27,7022,246,1100 U.S. Bank,3608-1782-5791-1558,2023-09-24,7502,188,2000 PNC,3608-1782-5030-1572,2024-07-31,6887,951,1700 MasterCard,3608-1192-5884-1614,2018-06-20,5594,800,2000 Bank of America,3608-2067-5766-1056,2025-09-30,2338,355,1200 BarclayCard US,3608-1782-4038-1052,2022-06-25,2130,117,1500 American Express,3608-1782-5890-1999,2021-11-10,3195,732,1600 Visa,3608-1782-5551-1837,2024-11-05,5357,255,2000 MasterCard,3608-1721-4236-1828,2018-05-15,3700,561,1800 U.S. Bank,3608-2596-5394-1054,2022-06-05,6787,233,1000 Discover,3608-2181-5724-1476,2022-09-18,3027,475,1000 MasterCard,3608-2181-4711-1693,2024-03-26,2739,733,1400 Bank of America,3608-1721-5632-1589,2025-02-08,6587,337,1500 Chase,3608-2067-5394-1306,2019-02-13,2544,222,1200 U.S. 
Bank,3608-2596-5696-1134,2024-05-22,7442,587,1900 USAA,3608-1721-4005-1322,2018-08-16,7241,201,1300 # Luego de esto leeremos el contenido de estos archivos a la base de datos de MySQL # # Primero hacemos la conexion a MySQL # # #### Importante # Para poder hacer la conexion con MySQL para que reconozca el comando para leer archivos CSV por fuera del directorio de la base de datos se debe agregar a la conexión la opcion para que lo permita `/?local_infile=1` # * `/` es para reconocer dentro de la conexión que no vamos a ingresar a una base de datos determinada sino al cliente de MySQL # * `?` es para poder agregar dentro de la linea de conexión los parámetros que se necesiten # * `local_infile=1` es el parametro que se quiere enviar para decir que permita la lectura de archivos CSV por fuera del directorio de la base de datos # + # %load_ext sql # %sql mysql+pymysql://root:qwe123@localhost/?local_infile=1 # %sql create database if not exists sqldemo; # %sql use sqldemo; # - # Luego se deben crear las tablas vacías donde se quiere cargar la informacion que se reqiera # + language="sql" # # drop table if exists personas; # drop table if exists franquicias; # drop table if exists bancos; # # CREATE TABLE personas ( # id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, # ssn VARCHAR(11), # phone VARCHAR(14), # city VARCHAR(40), # maritalstatus VARCHAR(10), # fullname VARCHAR(40), # birthdate DATE # ); # # CREATE TABLE franquicias ( # id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, # ccntype VARCHAR(40), # ccn VARCHAR(20), # validthru DATE, # userkey VARCHAR(6), # userpin VARCHAR(4), # quota SMALLINT # ); # # CREATE TABLE bancos ( # id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, # bank VARCHAR(40), # ccn VARCHAR(20), # ssn VARCHAR(14) # ); # - # Posteriormente se ingresa la información en las tablas con el comando `load data infile`, sin embargo este solo sirve si la información se encuntra dentro de la misma base de datos, por lo que se tendría que cargar los archivos primero en la ruta en la cual se encuentra esta. # # ***Sin embargo***, gracias al comando *local* usado de la siguiente manera `load data local infile` podremos especificar la ruta completa del archivo que se encuentra en cualquier parte del computador y podrá ser cargada sin inconvenientes # # ***Importante***: Por seguridad el comando `load data local infile` está desabilidato por seguridad, por lo que antes de ejecutarlo se recomienda hacer esta verificación. # # #### ¡¡IMPORTANTISIMO!! # Para esto se requiere dirigirse a la ruta donde se tenga el archivo my.ini, en windows la ruta es comunmente *"C:\ProgramData\MySQL\MySQL Server 8.0\my.ini"* y se debe ingresar el siguiente código y luego reiniciar el PC: # # ```ini # # [client] # loose_local_infile=1 # # [mysqld] # local-infile=ON # ``` # Ahora si prodemos proceder a ejecutar el codigo que nos permite leer información de archivos que no se encuentren dentro del directorio de la base de datos # + language="sql" # # load data local infile # 'C:/Users/HP/01. Projectos de jupyter/01. Ciencia de los datos/bancos.csv' # into table bancos # fields terminated by ',' # (bank,ccn,ssn); # # load data local infile # 'C:/Users/HP/01. Projectos de jupyter/01. Ciencia de los datos/personas.csv' # into table personas # fields terminated by ',' # (ssn, phone, city, maritalstatus, fullname, birthdate); # # load data local infile # 'C:/Users/HP/01. Projectos de jupyter/01. 
Ciencia de los datos/franquicias.csv' # into table franquicias # fields terminated by ',' # (ccntype,ccn,validthru,userkey,userpin,quota); # - # Ahora eliminamos los CSV del disco. (Este es un comando de Python) # + import os archivos = ['bancos.csv','personas.csv','franquicias.csv'] for archivo in archivos: try: os.remove(archivo) print("El archivo {} ha sido eliminado con éxito".format(archivo)) except: print("El archivo {} no pudo ser encontrado para ser eliminado".format(archivo)) # - # 3. Visualizar a ver si las tablas realmente quedaron bien pobladas # + language="sql" # # select * from personas limit 5; # + language="sql" # # select * from bancos limit 5; # + language="sql" # # select * from franquicias limit 5; # - # ## Segunda parte # ### Joins # Ahora vamos a hacer un `join` para cruzar la información entre diferentes tablas # + language="sql" # select # fullname, # phone, # bank, # personas.ssn, # bancos.ssn # from personas # join bancos # on (cast(replace(personas.ssn,"-","") as unsigned) = # cast(replace( bancos.ssn,"-","") as unsigned)) # limit 10; # - # # Ejercicios # 01. Cantidad de tarjetas emitidas por cada banco: # + language="sql" # # select # bank, # count(*) as numero_de_tarjetas # from bancos # left join franquicias # on (cast(replace( bancos.ccn,"-","") as unsigned) = # cast(replace(franquicias.ccn,"-","") as unsigned)) # group by bank # order by bank # - # 02. tabla que contenga la cantidad de tarjetas por cada franquicia # + language="sql" # # select # ccntype, # count(*) # from franquicias # group by ccntype # order by ccntype; # - # 03. Tabla con el cliente mas joven de cada banco # + language="sql" # # with t0 as ( # select # bank, # min(timestampdiff(year, birthdate, now())) as min_edad # from personas # join bancos # on (cast(replace(personas.ssn,"-","") as unsigned) = # cast(replace( bancos.ssn,"-","") as unsigned)) # group by bank # ), # t1 as ( # select # bank, # fullname, # phone, # birthdate, # timestampdiff(year, birthdate, now()) as edad # from personas # join bancos # on (cast(replace(personas.ssn,"-","") as unsigned) = # cast(replace( bancos.ssn,"-","") as unsigned)) # order by bank, edad # ) # select # t1.* # from t1 # inner join t0 # on (t0.bank = t1.bank and t0.min_edad=t1.edad) # order by bank; # - # 04. tabla que contenga por cada banco, la cantidad de clientes nacidos cada año # + language="sql" # # with t0 as ( # select # bank, # year(birthdate) as anho # from personas # join bancos # on (cast(replace(personas.ssn,"-","") as unsigned) = # cast(replace( bancos.ssn,"-","") as unsigned)) # ) # select # bank, # anho, # count(*) as num_clientes_nacidos # from t0 # group by bank, anho # order by bank, anho; # - # 05. tabla que contenga la suma de los montos (quote) por franquisia # + language="sql" # # select # ccntype, # sum(quota) # from franquicias # group by ccntype # order by ccntype; # - # 06. 
tabla que contenga el banco, la franquisia y el nombre del titular de la tarjeta de crédito # + language="sql" # # select # bancos.bank as banco, # franquicias.ccntype as franquicia, # personas.fullname as nombre_completo # from personas # left join bancos # on (cast(replace( personas.ssn,"-","") as unsigned) = # cast(replace( bancos.ssn,"-","") as unsigned)) # left join franquicias # on (cast(replace( bancos.ccn,"-","") as unsigned) = # cast(replace(franquicias.ccn,"-","") as unsigned)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Module 2 (Python 3) # ## Basic NLP Tasks with NLTK import nltk nltk.download('treebank') from nltk.book import * # ### Counting vocabulary of words text7 sent7 len(sent7) len(text7) len(set(text7)) list(set(text7))[:10] # ### Frequency of words dist = FreqDist(text7) len(dist) vocab1 = dist.keys() #vocab1[:10] # In Python 3 dict.keys() returns an iterable view instead of a list list(vocab1)[:10] dist['four'] freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100] freqwords # ### Normalization and stemming input1 = "List listed lists listing listings" words1 = input1.lower().split(' ') words1 porter = nltk.PorterStemmer() [porter.stem(t) for t in words1] ??nltk.corpus.udhr.words() # ### Lemmatization udhr = nltk.corpus.udhr.words('English-Latin1') udhr[:20] [porter.stem(t) for t in udhr[:20]] # Still Lemmatization WNlemma = nltk.WordNetLemmatizer() [WNlemma.lemmatize(t) for t in udhr[:20]] # ### Tokenization text11 = "Children shouldn't drink a sugary drink before bed." text11.split(' ') nltk.word_tokenize(text11) text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!" sentences = nltk.sent_tokenize(text12) len(sentences) sentences # ## Advanced NLP Tasks with NLTK # ### POS tagging nltk.help.upenn_tagset('MD') text13 = nltk.word_tokenize(text11) nltk.pos_tag(text13) text14 = nltk.word_tokenize("Visiting aunts can be a nuisance") nltk.pos_tag(text14) # + # Parsing sentence structure text15 = nltk.word_tokenize("Alice loves Bob") grammar = nltk.CFG.fromstring(""" S -> NP VP VP -> V NP NP -> 'Alice' | 'Bob' V -> 'loves' """) parser = nltk.ChartParser(grammar) trees = parser.parse_all(text15) for tree in trees: print(tree) # - text16 = nltk.word_tokenize("I saw the man with a telescope") grammar1 = nltk.data.load('mygrammar.cfg') grammar1 parser = nltk.ChartParser(grammar1) trees = parser.parse_all(text16) for tree in trees: print(tree) from nltk.corpus import treebank text17 = treebank.parsed_sents('wsj_0001.mrg')[0] print(text17) # ### POS tagging and parsing ambiguity text18 = nltk.word_tokenize("The old man the boat") nltk.pos_tag(text18) text19 = nltk.word_tokenize("Colorless green ideas sleep furiously") nltk.pos_tag(text19) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:patente] * # language: python # name: conda-env-patente-py # --- # # keyword_extraction # # ### extracting keywords (e.g. MeSH terms) from a text using a simple dictionary lookup approach. 
# # library **keyword_extraction** offers you: # * **DictLU_Create_Dict**: creating english, german, french dictionaries for the lookup # * **DictLU_Extract_Exact** # * **.fast**: break after the first occurance in the text # * **.full**: all occurances in the text, including index # import pandas as pd import pickle from pprint import pprint import xmltodict from IPython.core.display import display, HTML # # 1. from raw data to lookup list # get the MeSH terms raw data # # * **english, german**: as 'Datenbankfassung' (csv-files) from [DIMDI](https://www.dimdi.de/dynamic/de/klassifikationen/weitere-klassifikationen-und-standards/mesh/) # * **french**: as xml from [Inserm](http://mesh.inserm.fr/FrenchMesh/) # # # # # ### 1.1. english # + ## read in raw data MH = pd.read_csv('MH.TXT', sep=';', quotechar='|', encoding='latin1', header=None, names=['id','term_german','term','subheadings'] ).drop(columns=['term_german','subheadings']) print(f'loaded {len(MH)} main headings') ETE = pd.read_csv('ETE.TXT', sep=';', quotechar='|', encoding='latin1', header=None, names=['term','id','term_german'] ).drop(columns=['term_german'])[['id','term']] print(f'loaded {len(ETE)} synonyms') lookuplist = pd.concat([MH,ETE]).reset_index(drop=True) print(f' -> {len(lookuplist)} terms in total\n') print(lookuplist.head(2)) print('\n####################\n') #you can use any DataFrame structured like this one for extracting keywords. #column names are mandatory. #be aware that the "term" column must be unique! # create dictionary from keyword_extraction import DictLU_Create_Dict DCC = DictLU_Create_Dict(lookuplist) dicts_lower = DCC.dicts_lower dicts_upper = DCC.dicts_upper pprint(list(dicts_lower[3].items())[:5]) #what we get now is a list of dictionarys, one with word-length = 1, one with word-length = 2, etc. # - dicts_upper: only upper-case words (like 'WHO', 'HIV') # - dicts_lower: mixed case words #better save those dictionarys for later use with open('MeSH_dict_english.p', 'wb') as handle: pickle.dump([dicts_lower,dicts_upper], handle) # - # ### 1.2. german # + ## read in raw data MH = pd.read_csv('MH.TXT', sep=';', quotechar='|', encoding='latin1', header=None, names=['id','term','term_english','subheadings'] ).drop(columns=['term_english','subheadings']) print(f'loaded {len(MH)} main headings') ETD = pd.read_csv('ETD.TXT', sep=';', quotechar='|', encoding='latin1', header=None, names=['term','id','null'] )[['id','term']] print(f'loaded {len(ETD)} synonyms') lookuplist = pd.concat([MH,ETD]).reset_index(drop=True) ## manual correction lookuplist.loc[lookuplist['id']=='D007060','term']='Id' print(f' -> {len(lookuplist)} terms in total\n') print(lookuplist.head(2)) print('\n####################\n') # create dictionary from keyword_extraction import DictLU_Create_Dict DCC = DictLU_Create_Dict(lookuplist) dicts_lower = DCC.dicts_lower dicts_upper = DCC.dicts_upper pprint(list(dicts_lower[1].items())[:5]) with open('MeSH_dict_german.p', 'wb') as handle: pickle.dump([dicts_lower,dicts_upper], handle) # - # ### 1.3. 
french # get raw data with open('fredesc2019.xml','r') as xml_obj: my_dict = xmltodict.parse(xml_obj.read()) xml_obj.close() # + ## for debugging ONE item x=my_dict['DescriptorRecordSet']['DescriptorRecord'][666] res=[] print("['DescriptorRecordSet']['DescriptorRecord']") print(type(x)) procname=x['DescriptorName']['String'].split('[')[0] print(' ->',x['DescriptorUI'], x['DescriptorName']['String'],' -> ',procname) res.append((x['DescriptorUI'],'fre',procname)) tt=x['ConceptList']['Concept'] if isinstance(tt,list): pass else: tt=[tt] print(' changed to list:', type(tt)) for ii,y in enumerate(tt): print(' yy',ii,type(y)) #print(y) cc=y['TermList']['Term'] print(' cc',type(cc)) if isinstance(cc,list): pass else: cc=[cc] print(' changed to list:', type(cc)) for dd in cc: print(' ->',dd['TermUI'],dd['String']) res.append((x['DescriptorUI'],dd['TermUI'],dd['String'])) # + ## ALL items res=[] print(f'''loaded {len(my_dict['DescriptorRecordSet']['DescriptorRecord'])} main headings''') for x in my_dict['DescriptorRecordSet']['DescriptorRecord']: procname=x['DescriptorName']['String'].split('[')[0] res.append((x['DescriptorUI'],'fre',procname)) tt=x['ConceptList']['Concept'] if isinstance(tt,list): pass else: tt=[tt] for ii,y in enumerate(tt): cc=y['TermList']['Term'] if isinstance(cc,list): pass else: cc=[cc] for dd in cc: res.append((x['DescriptorUI'],dd['TermUI'],dd['String'])) res2 = [(x[0],x[2]) for x in res if x[1][:3]=='fre'] tmplookuplist = pd.DataFrame({ 'id':[x[0] for x in res2], 'term':[x[1] for x in res2] }) lookuplist=tmplookuplist.drop_duplicates(keep='first').reset_index(drop=True) print(f' -> {len(lookuplist)} terms in total\n') # create dictionary from keyword_extraction import DictLU_Create_Dict DCC = DictLU_Create_Dict(lookuplist) dicts_lower = DCC.dicts_lower dicts_upper = DCC.dicts_upper pprint(list(dicts_lower[3].items())[:5]) with open('MeSH_dict_french.p', 'wb') as handle: pickle.dump([dicts_lower,dicts_upper], handle) # - # # 2. Examples # # for a better visual overview you can also get a simple html with the found words in red. # # github is not showing the html correctly. no red words. pls check locally # # # ## 2.1. English # + text = 'Malaria is infectious diseases caused by Plasmodium parasite, which transmitted by Anopheles mosquitoes. Although the global burden of malaria has been decreasing in recent years, malaria remains one of the most important infectious diseases, from the point of view of its morbidity and mortality. Imported malaria is one of the major concerns at the evaluation of a febrile illness in a traveler returned from the endemic countries. The diagnosis and management of malaria cases requires much experience and knowledge. We review the epidemiology, pathogenesis, clinical features, diagnosis, prevention and treatment of malaria in Japan.' pprint(text) print('\n### FAST #############\n') from keyword_extraction import DictLU_Extract_Exact [dicts_lower,dicts_upper] = pickle.load( open('MeSH_dict_english.p', "rb" ) ) DEE=DictLU_Extract_Exact(dicts_upper,dicts_lower) DEE.fast(text) pprint(DEE.fast_ids) DEE.full(text) print('\n### FULL #############\n') pprint(DEE.result) DEE.create_html() ## this works when using the full method, not when using the fast method htmlstring = DEE.html print('\n### HTML #############') display(HTML(htmlstring)) # - # ## 2.2. 
German # + text='''Die vorliegende Arbeit beschäftigt sich mit dem Thema der Malaria, die nicht nur in Entwicklungsländern eine Rolle spielt, sondern mit der medizinisches Personal auch in Europa zu kämpfen hat. Es wird der Frage nachgegangen, welche geschichtlichen Hintergründe die Malaria hat und wie die Menschen damals dieses Krankheitsbild therapiert und der Krankheit vorgebeugt haben. Außerdem wird behandelt, welche prophylaktischen Möglichkeiten es gegen die Malaria gibt, wie wirksam diese sind und ob es schon einen wirksamen Impfstoff gegen diese Weltkrankheit gibt. Ein Schwerpunkt liegt auf den Tätigkeiten, mit denen Pflegepersonal bei der Pflege Malariaerkrankter konfrontiert ist. Um Antworten auf diese Fragen zu finden, wurden in einer umfangreichen Literaturrecherche Fachbücher, Fachzeitschriften, Forschungsartikel im Internet und fachspezifische englische Studien durchsucht und das Wesentliche herausgefiltert. Zusätzlich wurden drei halbstandardisierte Leitfaden-Interviews mit Menschen, die die Malaria persönlich erlebt haben, durchgeführt. Es wird offensichtlich, wie umfangreich das Thema der Malaria und ihrer Prophylaxe- und Therapiemöglichkeiten ist und wie viel noch zu erforschen wäre.''' pprint(text) print('\n### FAST #############\n') from keyword_extraction import DictLU_Extract_Exact [dicts_lower,dicts_upper] = pickle.load( open('MeSH_dict_german.p', "rb" ) ) DEE=DictLU_Extract_Exact(dicts_upper,dicts_lower) DEE.fast(text) pprint(DEE.fast_ids) DEE.full(text) print('\n### FULL #############\n') pprint(DEE.result) DEE.create_html() ## this works when using the full method, not when using the fast method htmlstring = DEE.html print('\n### HTML #############') display(HTML(htmlstring)) # - [x for x in dicts_lower[0].items() if x[0][0:4] == 'impf'] # ## 2.3. French # + text='''Un essai clinique randomisé, placebo contrôlé a testé chez plus de 27 000 patients avec maladie cardiovasculaire sous statines l’ajout d’un deuxième traitement hypolipémiant par inhibition de la protéine PCSK9, l’évolocumab. Dans une analyse de sous-groupe de cette étude appelée FOURIER, les auteurs ont identifié 2034 patients qui avaient un LDL-cholestérol en dessous de 1,8 mmol/l (médiane 1,7 mmol/l) au début de l’étude. Dans ce groupe, 1030 patients ont reçu de l’évolocumab et ont atteint un taux médian de LDL-cholestérol à 0,5 mmol/l à un an. Le risque de récidive d’événement cardiovasculaire mortel ou non après deux ans était à 4,7 % dans le groupe évolocumab versus 6,8 % dans le groupe placebo. Par comparaison, chez les plus de 25 000 patients avec LDL-cholestérol au-dessus de 1,8 mmol/l (médiane 2,4 mmol/l) au début de l’étude, le risque de récidive cardiovasculaire était à 6,0 % dans le groupe évolocumab versus 7,4 % dans le groupe placebo. 
Il n’y avait pas de modification de l’effet du traitement hypolipémiant testé en fonction du taux de LDL-cholestérol au départ (p-value pour interaction 0,6) et il n’y avait pas de signal d’effets indésirables majeurs.''' pprint(text) print('\n### FAST #############\n') from keyword_extraction import DictLU_Extract_Exact [dicts_lower,dicts_upper] = pickle.load( open('MeSH_dict_french.p', "rb" ) ) DEE=DictLU_Extract_Exact(dicts_upper,dicts_lower) DEE.fast(text) pprint(DEE.fast_ids) DEE.full(text) print('\n### FULL #############\n') pprint(DEE.result) DEE.create_html() ## this works when using the full method, not when using the fast method htmlstring = DEE.html print('\n### HTML #############') display(HTML(htmlstring)) # - [x for x in dicts_lower[0].items() if x[0] == 'cholestérol'] # # ## Speedtest 1 CPU from datetime import datetime # + from keyword_extraction import DictLU_Extract_Exact import pickle [dicts_lower,dicts_upper] = pickle.load( open('MeSH_dict.p', "rb" ) ) DEE=DictLU_Extract_Exact(dicts_upper,dicts_lower) text = 'Malaria is infectious diseases caused by Plasmodium parasite, which transmitted by Anopheles mosquitoes. Although the global burden of malaria has been decreasing in recent years, malaria remains one of the most important infectious diseases, from the point of view of its morbidity and mortality. Imported malaria is one of the major concerns at the evaluation of a febrile illness in a traveler returned from the endemic countries. The diagnosis and management of malaria cases requires much experience and knowledge. We review the epidemiology, pathogenesis, clinical features, diagnosis, prevention and treatment of malaria in Japan.' from pprint import pprint #pprint(text) print(text) # + a=datetime.now() for i in range(1000): if i%100==0: print(i) DEE.fast(text) b=datetime.now() print('fast extraction',b-a) a=datetime.now() for i in range(1000): if i%100==0: print(i) DEE.full(text) b=datetime.now() print('full extraction',b-a) # - # ## Speedtest Multi CPU import multiprocessing as mp ## was ist besser? - das DEE objekt mitzugeben - das DEE objekt jedes mal neu zu erzeugen - große oder kleine chunksize? 
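# The comment above asks (in German) whether it is better to hand the `DEE` object to the worker processes or to rebuild it in each one, and whether a large or small chunksize helps. One reasonable pattern, sketched below but not benchmarked here, is to build one `DictLU_Extract_Exact` per worker process via the `Pool` initializer, so the large dictionaries are never pickled per text; the chunksize then remains a knob to experiment with. On Windows this should be run as a script with the `__main__` guard rather than from inside the notebook.

# +
import multiprocessing as mp
import pickle
from keyword_extraction import DictLU_Extract_Exact

DEE = None  # one extractor instance per worker process

def init_worker(dict_path):
    """Build the extractor once per process instead of once per text."""
    global DEE
    dicts_lower, dicts_upper = pickle.load(open(dict_path, 'rb'))
    DEE = DictLU_Extract_Exact(dicts_upper, dicts_lower)

def extract_fast(text):
    DEE.fast(text)
    return DEE.fast_ids

if __name__ == '__main__':
    # Replace with your own documents; the text here is just a stand-in.
    texts = ['Malaria remains one of the most important infectious diseases.'] * 1000
    with mp.Pool(processes=4, initializer=init_worker,
                 initargs=('MeSH_dict_english.p',)) as pool:
        results = pool.map(extract_fast, texts, chunksize=50)
    print(len(results))
# -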
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import keyword print(keyword.kwlist) print(len(keyword.kwlist)) # + def square(num): """ This function is to square the number """ return num*num print(square(20)) # - print(square.__doc__) var1 = 10 var2 = 20 for i in range(1,4): print(i) print("Back to Original") variable=10;variable2=20 for i in range(1,4): print(i) number = 10 + 20 + \ 30 + 40 + \ 50 print(number) #number = 10 + 20 + 30 + 40 + 50 number = (10 + 20 + 30 + 40 + 50) print(number) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 import jax import jax.numpy as jnp import numpy as np from jax import random, jit, grad import scipy import cr.sparse as crs from cr.sparse import la from cr.sparse import dict from cr.sparse import pursuit from cr.sparse import data from cr.sparse.pursuit import omp M = 256 N = 1024 K = 16 S = 32 key = random.PRNGKey(0) Phi = dict.gaussian_mtx(key, M,N) dict.coherence(Phi) X, omega = data.sparse_normal_representations(key, N, K, S) X.shape omega Y = Phi @ X Y.shape solution = omp.solve_multi(Phi, Y, K) jnp.max(solution.r_norm_sqr) def time_solve_multi(): solution = omp.solve_multi(Phi, Y, K) solution.x_I.block_until_ready() solution.r.block_until_ready() solution.I.block_until_ready() solution.r_norm_sqr.block_until_ready() # %timeit time_solve_multi() solve_multi = jax.jit(omp.solve_multi, static_argnums=(2,)) def time_solve_multi_jit(): solution = solve_multi(Phi, Y, K) solution.x_I.block_until_ready() solution.r.block_until_ready() solution.I.block_until_ready() solution.r_norm_sqr.block_until_ready() # %timeit time_solve_multi_jit() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="6000qI_T0Enh" colab_type="text" # Import Modules # + id="0m2JWFliFfKT" colab_type="code" colab={} from __future__ import print_function import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms # + [markdown] id="5tXCuK_J0KPL" colab_type="text" # Build Network # # CNN+pad --> cnn+pad --> pool --> cnn+pad --> cnn+pad --> pool --> cnn --> cnn --> cnn --> flatten --> predicton # + id="h_Cx9q2QFgM7" colab_type="code" colab={} class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, padding=1) #input - 1x 28 x 28 --> OUtput 32x 28 x 28 | RF 3x3 self.conv2 = nn.Conv2d(32, 64, 3, padding=1) #input - 32x 28 x 28 --> OUtput 64 x 28 x 28 | RF 5x5 self.pool1 = nn.MaxPool2d(2, 2) #input - 64 x 28 x 28 --> OUtput 64 x 14 x 14 | RF 6x6 self.conv3 = nn.Conv2d(64, 128, 3, padding=1) #input - 64 x 14 x 14 --> OUtput 128 x 14 x 14 | RF 10 x 10 self.conv4 = nn.Conv2d(128, 256, 3, padding=1) #input - 128 x 14 x 14 --> OUtput 256 x 14 x 14 | RF 16 x 16 self.pool2 = nn.MaxPool2d(2, 2) #input - 256 x 14 x 14 --> OUtput 256 x 7 x 7 | RF 24 x 24 self.conv5 = nn.Conv2d(256, 512, 3) #input - 256 x 7 x 7 --> OUtput 512 x 2 x 2 | RF 26 x 26 self.conv6 = nn.Conv2d(512, 1024, 3) #input - 512 x 5 x 5 --> OUtput 1024 x 3 x 3 | RF 
28 x 28 self.conv7 = nn.Conv2d(1024, 10, 3) #input - 1024 x 3 x 3 --> OUtput 10 x 1 x 1 | RF 32 x 32 def forward(self, x): x = self.pool1(F.relu(self.conv2(F.relu(self.conv1(x))))) x = self.pool2(F.relu(self.conv4(F.relu(self.conv3(x))))) x = F.relu(self.conv6(F.relu(self.conv5(x)))) x = F.relu(self.conv7(x)) x = x.view(-1, 10) return F.log_softmax(x) # + [markdown] id="xAEh_E010mSx" colab_type="text" # assign the gpu for train and data loading # + colab_type="code" id="xdydjYTZFyi3" colab={"base_uri": "https://localhost:8080/", "height": 434} outputId="aa83b2bc-d6dd-4b4e-b59c-d275e0597ce8" # !pip install torchsummary from torchsummary import summary use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") model = Net().to(device) summary(model, input_size=(1, 28, 28)) # + [markdown] id="REyi4zoS0uZS" colab_type="text" # Buid Dataset # + id="DqTWLaM5GHgH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391, "referenced_widgets": ["2808fa490dc94ba787b3412c298c4237", "9f0e492ff6a34ef98771df8397456d51", "82a41b697a33404db2fd4ff8947c107e", "e05f9e6994374df79e57c1386c439d81", "f799251175b149a0b85ec832e81d8283", "d0bc67f4a306436c9d5d4681c02cfdea", "232fada63e5446469bb115449e9b1add", "32557e309c3a4c559fab7a75d3315884", "a2a654c831074f6cb0b685057a9150fd", "cbc45d51af4e4d7290a769f070bd8c23", "", "d6e1b256ace847b79ebec1813c1cf30c", "", "", "", "", "", "dd4d414809154828bdbe0dfe89115b44", "48f2c848c42942feacf7ff10adb0e5e3", "", "", "", "", "", "1070509b01374e179a736f3b73e9d805", "0c8085ef5e4a46949cfc1d862c80ddad", "05f324695b0e4a499aefbd6cbd8050e2", "", "", "", "3d93eee0fa7b469d96ce501521c471b5", "eb56e3ec54af4a9492f89788b1d62ebb"]} outputId="6404a081-2c1b-4d60-f157-a49d8ceac2eb" torch.manual_seed(1) batch_size = 128 kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {} train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True, **kwargs) # + [markdown] id="MZlFJK_Q0xyP" colab_type="text" # Build Train and Test network and Data Flow # + id="8fDefDhaFlwH" colab_type="code" colab={} from tqdm import tqdm def train(model, device, train_loader, optimizer, epoch): model.train() pbar = tqdm(train_loader) for batch_idx, (data, target) in enumerate(pbar): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() pbar.set_description(desc= f'loss={loss.item()} batch_id={batch_idx}') def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) # + [markdown] id="ijbGwGIT02R_" colab_type="text" # Declare Optimizer and Loss Functions and Epoch # + id="MMWbLWO6FuHb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="785a311d-72dc-4da5-81a7-dc063fba49f5" model = Net().to(device) optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9) for epoch in range(1, 2): train(model, device, train_loader, optimizer, epoch) test(model, device, test_loader) # + id="OVj9oZaEgzag" colab_type="code" colab={} # + id="So5uk4EkHW6R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a076c19d-ac6c-4376-a230-7640c5f3622d" model = Net().to(device) optimizer = optim.Adam(model.parameters(), lr=0.005,) for epoch in range(1, 100): train(model, device, train_loader, optimizer, epoch) test(model, device, test_loader) # + id="4UMIT0A11uNq" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example for using the Pvlib model # # The `Pvlib` model can be used to determine the feed-in of a photovoltaic module using the pvlib. # The [pvlib](https://github.com/pvlib/pvlib-python) is a python library for simulating the performance of photovoltaic energy systems. For more information check out the [documentation of the pvlib](https://pvlib-python.readthedocs.io/en/stable/). # # The following example shows you how to use the `Pvlib` model. # # * [Set up Photovoltaic object](#photovoltaic_object) # * [Get weather data](#weather_data) # * [Calculate feed-in](#feedin) # ## Set up Photovoltaic object # # To calculate the feed-in using the `Pvlib` model you have to set up a `Photovoltaic` object. You can import it as follows: # + from feedinlib import Photovoltaic # suppress warnings import warnings warnings.filterwarnings("ignore") # - # To set up a Photovoltaic system you have to provide all PV system parameters required by the `PVlib` model. The required parameters can be looked up in the [model's documentation](https://feedinlib.readthedocs.io/en/features-design-skeleton/temp/feedinlib.models.Pvlib.html#feedinlib.models.Pvlib.power_plant_requires). # For the `Pvlib` model these are the **azimuth** and **tilt** of the module as well as the **albedo or surface type**. Furthermore, the **name of the module and inverter** are needed to obtain technical parameters from the provided module and inverter databases. For an overview of the provided modules and inverters you can use the function `get_power_plant_data()`. 
from feedinlib import get_power_plant_data # get modules module_df = get_power_plant_data(dataset='sandiamod') # print the first four modules module_df.iloc[:, 1:5] # get inverter data inverter_df = get_power_plant_data(dataset='cecinverter') # print the first four inverters inverter_df.iloc[:, 1:5] # Now you can set up a PV system to calculate feed-in for, using for example the first module and converter in the databases: system_data = { 'module_name': 'Advent_Solar_Ventura_210___2008_', # module name as in database 'inverter_name': 'ABB__MICRO_0_25_I_OUTD_US_208__208V_', # inverter name as in database 'azimuth': 180, 'tilt': 30, 'albedo': 0.2} pv_system = Photovoltaic(**system_data) # **Optional power plant parameters** # # Besides the required PV system parameters you can provide optional parameters such as the number of modules per string, etc. Optional PV system parameters are specific to the used model and how to find out about the possible optional parameters is documented in the model's `feedin` method under `power_plant_parameters`. In case of the `Pvlib` model see [here](https://feedinlib.readthedocs.io/en/features-design-skeleton/temp/feedinlib.models.Pvlib.html#feedinlib.models.Pvlib.feedin). system_data['modules_per_string'] = 2 pv_system_with_optional_parameters = Photovoltaic(**system_data) # ## Get weather data # # Besides setting up your PV system you have to provide weather data the feed-in is calculated with. # This example uses open_FRED weather data. For more information on the data and download see the [load_open_fred_weather_data Notebook](load_open_fred_weather_data.ipynb). from feedinlib.open_FRED import Weather from feedinlib.open_FRED import defaultdb from shapely.geometry import Point # specify latitude and longitude of PV system location lat = 52.4 lon = 13.5 location = Point(lon, lat) # download weather data for June 2017 open_FRED_weather_data = Weather( start='2017-06-01', stop='2017-07-01', locations=[location], variables="pvlib", **defaultdb()) # get weather data in pvlib format weather_df = open_FRED_weather_data.df(location=location, lib="pvlib") # plot irradiance import matplotlib.pyplot as plt # %matplotlib inline weather_df.loc[:, ['dhi', 'ghi']].plot(title='Irradiance') plt.xlabel('Time') plt.ylabel('Irradiance in $W/m^2$'); # ## Calculate feed-in # # The feed-in can be calculated by calling the `Photovoltaic`'s `feedin` method with the weather data. For the `Pvlib` model you also have to provide the location of the PV system. feedin = pv_system.feedin( weather=weather_df, location=(lat, lon)) # plot calculated feed-in import matplotlib.pyplot as plt # %matplotlib inline feedin.plot(title='PV feed-in') plt.xlabel('Time') plt.ylabel('Power in W'); # **Scaled feed-in** # # The PV feed-in can also be automatically scaled by the PV system's area or peak power. The following example shows how to scale feed-in by area. feedin_scaled = pv_system.feedin( weather=weather_df, location=(lat, lon), scaling='area') # To scale by the peak power use `scaling=peak_power`. # # The PV system area and peak power can be retrieved as follows: pv_system.area pv_system.peak_power # plot calculated feed-in import matplotlib.pyplot as plt # %matplotlib inline feedin_scaled.plot(title='Scaled PV feed-in') plt.xlabel('Time') plt.ylabel('Power in W'); # **Feed-in for PV system with optional parameters** # # In the following example the feed-in is calculated for the PV system with optional system parameters (with 2 modules per string, instead of 1, which is the default). 
# This example was chosen to demonstrate the importance of choosing a suitable inverter.

feedin_ac = pv_system_with_optional_parameters.feedin(
    weather=weather_df,
    location=(lat, lon))

# plot calculated feed-in
import matplotlib.pyplot as plt
# %matplotlib inline
feedin_ac.plot(title='PV feed-in')
plt.xlabel('Time')
plt.ylabel('Power in W');

# As the above plot shows, the feed-in is cut off at 250 W because it is limited by the inverter. So while the area is, as expected, twice as large as for the PV system without optional parameters, the peak power is only around 1.2 times higher.

pv_system_with_optional_parameters.peak_power / pv_system.peak_power

pv_system_with_optional_parameters.area / pv_system.area

# If you are only interested in the modules' power output without the inverter losses, you can have the `Pvlib` model return the DC feed-in. This is done as follows:

feedin_dc = pv_system_with_optional_parameters.feedin(
    weather=weather_df,
    location=(lat, lon),
    mode='dc')

# plot calculated feed-in
import matplotlib.pyplot as plt
# %matplotlib inline
feedin_dc.plot(label='DC', title='AC and DC PV feed-in', legend=True)
feedin_ac.plot(label='AC', legend=True)
plt.xlabel('Time')
plt.ylabel('Power in W');

# **Feed-in with optional model parameters**
#
# In order to change the default calculation configuration of the `Pvlib` model, e.g. to choose a different model to calculate losses or the solar position, you can pass further parameters to the `feedin` method. An overview of which further parameters may be provided is documented under the [feedin method](https://feedinlib.readthedocs.io/en/features-design-skeleton/temp/feedinlib.models.Pvlib.html#feedinlib.models.Pvlib.feedin)'s kwargs.

feedin_no_loss = pv_system.feedin(
    weather=weather_df,
    location=(lat, lon),
    aoi_model='no_loss')

# plot calculated feed-in
import matplotlib.pyplot as plt
# %matplotlib inline
feedin_no_loss.iloc[0:96].plot(label='aoi_model = no_loss', legend=True)
feedin.iloc[0:96].plot(label='aoi_model = sapm_aoi_loss', legend=True)
plt.xlabel('Time')
plt.ylabel('Power in W');

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: py38
#     language: python
#     name: py38
# ---

# # A neural network for creating a deepfake
# ***
#
# This document lets you create a deepfake video while learning about the function and structure of the artificial neural network used to do so.
#
# The following Python modules must be installed in order to run the code:
#
# ***
# ##### *This document consists of the following steps:*
# 1. Reading in and preparing the training data
#     - Extracting faces from a video
#     - Reading in this data
# 2. Initializing the artificial neural network
#     - Defining the structure of the encoder and decoder
#     - Combining the encoder and decoder into the autoencoders
#     - Compiling the models with an optimizer
# 3. Training
#     - Defining checkpoints for the training
#     - The actual training of the network
# 4. Faking a new video

# ---
# ## Reading in and preparing the training data
# ***
# To teach the artificial neural network to recognize and generate faces, a large number of face images is required. In the case of deepfakes, image material of two people is needed. For simplicity, video material is used as input.
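# The `Gesichterextrahierer` class used in the next section lives in an external file and is not
# shown in this notebook. Purely as an illustration of what such a face extractor might do, here is
# a minimal sketch using OpenCV's Haar cascade detector. The cascade file is the one referenced
# below; the function name, the 128x128 crop size (the network's input size) and the output file
# naming are assumptions, not the actual implementation.

# +
import os
import cv2


def extract_faces_sketch(video_path, cascade_path, output_dir, max_images=3000, size=(128, 128)):
    """Read frames from a video, detect faces with a Haar cascade and save resized crops."""
    os.makedirs(output_dir, exist_ok=True)
    cascade = cv2.CascadeClassifier(cascade_path)
    capture = cv2.VideoCapture(video_path)
    saved = 0
    while saved < max_images:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
            crop = cv2.resize(frame[y:y + h, x:x + w], size)
            cv2.imwrite(os.path.join(output_dir, 'gesicht_%05d.png' % saved), crop)
            saved += 1
            if saved >= max_images:
                break
    capture.release()
# -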
# ### Extrahieren von Gesichter aus einem Video # In den Variablen *PFAD_A* und *PFAD_B* werden die Pfade zu den Videos gespeichert, die dazu verwendet werden Bilder von zwei Personen zu erxtrahieren. Es werden mit der Hilfe der Klasse `Gesichterextrahierer`, die in einer externen Datei definiert wurde, die Gesichter aus den Videos extrahiert. Die Bilder werden in einem Verzeichnis gespeichert. # import Gesichterextrahierer as GE # # PFAD_A = './daten/videomaterial/Joe_Biden/nur_joe_biden_gemischt.mp4' # PFAD_B = './daten/videomaterial/Chuck_Norris/nur_Chuck_Norris_gemischt.mp4' # PFAD_KASKADE = './daten/cascades/haarcascade_frontalface_default.xml' # # g = GE.Gesichterextrahierer(PFAD_KASKADE) # g.lade(PFAD_A) # g.extrahiereGesichter( # max_anzahl_bilder=3000, # ordner_ausgabe='./daten/lernen/Gesichter/A' # ) # g.lade(PFAD_B) # g.extrahiereGesichter( # max_anzahl_bilder=3000, # ordner_ausgabe='./daten/lernen/Gesichter/B' # ) # # del PFAD_A, PFAD_B, PFAD_KASKADE, g, GE # ### Einlesen der Bilddateien # Dass die Bilder als Datei gespeichert wurden, erspart uns beim nächsten Mal den vorherigen Schritt. Die Dateien müssen nun allerdings wieder ins Programm geladen werden. # # > `erstelleDatensatz(pfad: str) -> list[list]` # > # > Lädt alle Bilder in dem übergebenen Verzeichnis in zwei Datensätze und gibt diese als Liste zurück. Jeder Pixelwert wird durch 255 geteilt, um die Werte auf den Bereich zwischen 0 und 1 zu projektieren. Dies stellt sicher, dass die Werte des künstlischen Neuronale Netzes (KNN), wenn das Bild übergeben wird, nicht zu groß werden. Die Bilder werden in zwei Datensätze umgewandelt, zu 75% zum Trainieren des KNN und zu 25% zum Prüfen und Bewerten der Leistung des Netzes. # # > `def teileListe(liste: list, verteilung: float) -> list[list]` # > # > Teilt die übergebene Liste in zwei Listen und gibt diese zurück. Die erste zurückgegebene Liste hat ein Länge von _n_-% der übergebenen Liste, wobei _n_ als Kommazahl zwischen 1 und 0 mit `verteilung` übergeben wird. # + import numpy as np import cv2 import os def erstelleDatensatz(pfad: str, anzahl: int) -> list: bilder = [] for wurzel, ordner, dateien in os.walk(pfad): for datei in dateien: if datei.split(".")[-1].lower() not in ['png', 'jpg', 'jpeg']: continue bild = cv2.imread(os.path.join(wurzel, datei)) #bild = cv2.pyrDown(bild) bild = bild.astype('float32') bild /= 255.0 bilder.append(bild) if len(bilder) >= anzahl: break bilder.append(np.fliplr(bild)) if len(bilder) >= anzahl: break np.random.shuffle(bilder) bilder_train, bilder_test = teileListe(bilder, 0.75) bilder_train, bilder_test = np.array(bilder_train), np.array(bilder_test) print('%d Bilder aus %s geladen.' 
% (len(bilder), pfad)) return [bilder_train, bilder_test] def teileListe(liste: list, verteilung: float) -> list: x = int(len(liste)*verteilung) return [liste[:x], liste[x:]] def verzerren(bild: list, staerke: int): hoehe = bild.shape[0] breite = bild.shape[1] punkte_von = np.float32([[0, 0], [0, hoehe], [breite, 0], [breite, hoehe]]) punkte_nach = np.float32([[0, staerke], [0, hoehe-staerke], [breite, 0], [breite, hoehe]]) matrix = cv2.getPerspectiveTransform(punkte_von, punkte_nach) bild_verzerrt = cv2.warpPerspective(bild, matrix, (breite, hoehe)) bild_verzerrt = bild_verzerrt[staerke:hoehe-staerke, staerke:breite-staerke] return cv2.resize(bild_verzerrt, (breite, hoehe)) datensatz_gesichter_A_train, datensatz_gesichter_A_test = erstelleDatensatz('daten/lernen/Gesichter/A', 5000) NAME_AUTOENCODER_A = 'Biden' datensatz_gesichter_B_train, datensatz_gesichter_B_test = erstelleDatensatz('daten/lernen/Gesichter/B', 5000) NAME_AUTOENCODER_B = 'Trump' # - # **Um zu prüfen, ob dies geklappt hat, wird hier ein beispielhaftes Bild dargestellt.** # + from matplotlib.pyplot import imshow # %matplotlib inline imshow(cv2.cvtColor(datensatz_gesichter_A_test[6], cv2.COLOR_BGR2RGB)) # - # ## Initialisieren des Neuronalen Netzes # *** # Nun wird das künstlichen Neuronale Netz initialisiert. Dazu wird die Struktur des Netzes definiert und das Modell anschließend kompiliert. # ### Definieren der Struktur des Encoder und Decoder # + import tensorflow as tf IMG_SHAPE = (128, 128, 3) NAME = "CNN_V2" def logSummary(string: str): with open(f"./daten/modelle/{NAME}/modell.info", "a") as datei: datei.write(string + "\n") def gibEncoder(): encoder = tf.keras.Sequential(name='encoder') encoder.add(tf.keras.layers.Conv2D(16, kernel_size=5, strides=2, padding='same', input_shape=( IMG_SHAPE ) )) encoder.add(tf.keras.layers.Conv2D(16, kernel_size=5, strides=2, padding='same')) encoder.add(tf.keras.layers.Conv2D(32, kernel_size=5, strides=2, padding='same')) encoder.add(tf.keras.layers.Conv2D(64, kernel_size=5, strides=2, padding='same')) encoder.add(tf.keras.layers.Flatten()) encoder.add(tf.keras.layers.Dense( 200 )) encoder.summary(print_fn=logSummary) print(encoder.summary()) return encoder def gibDecoder(): decoder = tf.keras.Sequential(name='decoder') decoder.add(tf.keras.layers.Dense( (8*8*64), input_shape=(200,))) decoder.add(tf.keras.layers.Reshape( (8, 8, 64) )) decoder.add(tf.keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')) decoder.add(tf.keras.layers.Conv2DTranspose(32, kernel_size=5, strides=2, padding='same')) decoder.add(tf.keras.layers.Conv2DTranspose(16, kernel_size=5, strides=2, padding='same')) decoder.add(tf.keras.layers.Conv2DTranspose(16, kernel_size=5, strides=2, padding='same')) decoder.add(tf.keras.layers.Conv2DTranspose(3, kernel_size=1)) decoder.summary(print_fn=logSummary) print(decoder.summary()) return decoder # - # ### Kombinieren des Encoder und Decoder zu den Autoencodern def gibAutoencoder(name): x = tf.keras.layers.Input( shape=IMG_SHAPE, name='input_layer' ) encoder, decoder = gibEncoder(), gibDecoder() autoencoder = tf.keras.Model(x, decoder(encoder(x)), name=name) print(autoencoder.summary()) return autoencoder # ### Kompilieren der Modelle mit einem Optimizer # + OPTIMIZER_FUNKTION = tf.keras.optimizers.Adam(learning_rate=1e-5) LOSS_FUNKTION = tf.keras.losses.MeanSquaredError() def gibKompiliertenAutoencoder(name): autoencoder = gibAutoencoder(name) autoencoder.compile(optimizer=OPTIMIZER_FUNKTION, loss=LOSS_FUNKTION) return autoencoder # + try: 
autoencoder_A = tf.keras.models.load_model(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_A}/") autoencoder_B = tf.keras.models.load_model(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_B}/") print("Modelle von der Festplatte geladen.\n") except: try: os.mkdir(f"./daten/modelle/{NAME}/") os.mkdir(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_A}/") os.mkdir(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_B}/") os.mkdir(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_A}/Bilder/") os.mkdir(f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_B}/Bilder/") except FileExistsError: pass autoencoder_A = gibKompiliertenAutoencoder(name="autoencoder_A") autoencoder_B = gibKompiliertenAutoencoder(name="autoencoder_B") loss = autoencoder_A.evaluate(datensatz_gesichter_A_test[:32], datensatz_gesichter_A_test[:32]) print(f"Aktueller Loss von A ({NAME_AUTOENCODER_A}): {loss}") loss = autoencoder_B.evaluate(datensatz_gesichter_B_test[:32], datensatz_gesichter_B_test[:32]) print(f"Aktueller Loss von B ({NAME_AUTOENCODER_B}): {loss}") # - # ## Trainieren # *** # ### Festlegen von Checkpoints für das Trainieren # + from tensorflow.keras.callbacks import ModelCheckpoint import LoggingCallback as lc autoencoder_A_logging_callback = lc.LoggingCallback( pfad_modell=f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_A}/", bild_A=datensatz_gesichter_A_test[1], bild_B=datensatz_gesichter_B_test[1] ) autoencoder_B_logging_callback = lc.LoggingCallback( pfad_modell=f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_B}/", bild_A=datensatz_gesichter_A_test[1], bild_B=datensatz_gesichter_B_test[1] ) autoencoder_A_checkpoint_callback = ModelCheckpoint( f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_A}/", monitor='val_loss', save_best_only=True ) autoencoder_B_checkpoint_callback = ModelCheckpoint( f"./daten/modelle/{NAME}/{NAME_AUTOENCODER_B}/", monitor='val_loss', save_best_only=True ) # - # ### Eigentliches Trainieren des Netzes # + import time, gc ZEITPUNKT_ENDE = time.time() + int(8*60*60) while time.time() < ZEITPUNKT_ENDE: print("!- Noch für ~{:.2f}h beschäftigt.".format(( ZEITPUNKT_ENDE-time.time() )/3600) ) autoencoder_A.fit( datensatz_gesichter_A_train, datensatz_gesichter_A_train, epochs=1, batch_size=16, shuffle=True, validation_data=(datensatz_gesichter_A_test, datensatz_gesichter_A_test), callbacks=[autoencoder_A_checkpoint_callback, autoencoder_A_logging_callback] ) autoencoder_B.layers[1] = autoencoder_A.get_layer('encoder') gc.collect() autoencoder_B.fit( datensatz_gesichter_B_train, datensatz_gesichter_B_train, epochs=1, batch_size=16, shuffle=True, validation_data=(datensatz_gesichter_B_test, datensatz_gesichter_B_test), callbacks=[autoencoder_B_checkpoint_callback, autoencoder_B_logging_callback] ) autoencoder_A.layers[1] = autoencoder_B.get_layer('encoder') gc.collect() # - imshow(cv2.cvtColor(autoencoder_A.predict(datensatz_gesichter_B_test[6].reshape(1, 128, 128, 3))[0], cv2.COLOR_BGR2RGB)) # + import Gesichterextrahierer as GE PFAD_KASKADE = './daten/cascades/haarcascade_frontalface_default.xml' def fake(bild): global autoencoder_B bild = bild.astype('float32') bild /= 255.0 erg = autoencoder_B.predict(bild.reshape(1, 128, 128, 3))[0] erg *= 255.0 return erg g = GE.Gesichterextrahierer(PFAD_KASKADE) g.lade('./biden.mp4') g.fuerGesichterMache(fake, 10000, True) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # There is no otter-grader for this lab, so you do not need to run 
the typical otter cell at the top. import numpy as np from datascience import * Table.interactive_plots() # # Lab 7 – Data Visualization # # ## Data 94, Spring 2021 # # Today we will be looking at table methods that help us visualize data we have. We can use the methods we have discussed so far in this class to interpret the data, and we can use the methods we discuss today (and their variants) to display the data. This is helpful for showing data to people who don't necessarily have a background in data science, and require a data scientist like you to help them understand the data in a more intuitive way. # # We will be looking at the `barh` and `hist` methods today. As data scientists it is not only our job to be able to use the visualization methods we know, but also our job to know when to use which methods, and as we look at methods going forward, always keep in mind when it is most useful to use each new method. # ## Loading in the Data # # Before we get started, let's get some data to visualize. This dataset contains weekly information about weather all over the United States in the year 2016: # # *Note: Precipitation is in inches, all temperatures are in Fahrenheit, and Wind Speed is in Miles per Hour.* weather = Table.read_table("data/weather.csv") weather.show(5) # ### Cleaning our Data # # We have to clean our data before we use it, just run the following cell once: # + def clean_weather_table(tbl): tbl = tbl.drop("Date.Full", "Date.Week of", "Station.Code", "Station.Location", "Data.Wind.Direction") old_labels = np.array(["Data.Precipitation", "Date.Month", "Date.Year", "Station.City", "Station.State", "Data.Temperature.Avg Temp", "Data.Temperature.Max Temp", "Data.Temperature.Min Temp", "Data.Wind.Speed"]) new_labels = np.array(["Precipitation", "Month", "Year", "City", "State", "Average Temp", "High Temp", "Low Temp", "Wind Speed"]) tbl.relabel(old_labels, new_labels) tbl = tbl.move_to_start("Month") tbl = tbl.move_to_start("Year") tbl = tbl.move_to_start("City") tbl = tbl.move_to_start("State") def clean_states(state): if state == "VA": return "Virginia" elif state == "DE": return "Delaware" else: return state tbl = tbl.with_column("State", tbl.apply(clean_states, "State")) return tbl weather = clean_weather_table(weather) # - weather.show(5) # # The [barh](http://data8.org/datascience/_autosummary/datascience.tables.Table.barh.html#datascience.tables.Table.barh) method # # The barh method is used to visualize **categorical** variable values. Categorical variables are non-numbers, like names and qualities (Color, State Names, etc.). As we saw in lecture, categorical variables come in 2 different types: *ordinal* and *nominal*. Refer to [Lecture 24](https://docs.google.com/presentation/d/19sNzs3WCtJNd2pzpMVdAIslnwehzZBjVazisQnM9TKg) to see the difference between the two types. # # The `barh` method takes in 1 mandatory argument, which is the name of the column you want on the left axis of your `barh` plot. There are also optional arguments that have to do with plotting (axis names, plot title, etc.), and you can look at examples of those in this lab and in the homework. The remaining optional arguments in the datascience documentation linked above can also be used, feel free to try out some of the others on your own! # # To use the `barh` method properly, we first need to select the columns we want to see in the graph. 
We should not call `barh` directly on a Table because without specifying a column, we get a bar graph for every single instance of every single variable, which you can imagine results in a lot of bar graphs (see Question 1a of Homework 7 to see an example of how this does not work the way we want it to). # # Let's look at an example of `barh` that can show us the number of weather readings from each state in the dataset: # First we need to select the column we want to see, then we can plot it with barh state_weather = weather.group("State") state_weather.barh("State") # We can also use `barh` to see multiple statistics at once. Let's use the `group` method and `barh` method to see the average low temperature **and** high temperature in each state: # # *The dataset is reduced to only include the first 10 states for convenience.* # We must group first to get our desired columns, then we can call barh state_weather_avg = weather.group("State", collect=np.average).take(np.arange(10)).select("State", "High Temp average", "Low Temp average") state_weather_avg.barh("State", overlay=True) # If we want different visualizations for each variable, we can set the optional `overlay` argument to `False`. The default value of `overlay` is `True`, so if you don't give it a value, you will get a plot with all the included variables at once. state_weather_avg.barh("State", overlay=False) # That way we can choose if we want to have one plot with all our information or a new plot for each piece of information! # ### Where `barh` fails # # The `barh` method works well on categorical variables, but what if we have a **numerical** variable that we want to see the distribution in one particular state? Let's see what happens if we try to use `barh` on a numerical variable (`Wind Speed`) instead of a categorical variable: weather.group("Wind Speed").barh("Wind Speed") # As you can see, this bar plot is not particularly helpful. Seeing the breakdown of `Wind Speed` does not provide us with any useful information, and it is also difficult to read or understand. Instead, for numerical variables, we have another visualization method that helps us visualize a numerical variable's distribution... # # The [hist](http://data8.org/datascience/_autosummary/datascience.tables.Table.hist.html#datascience.tables.Table.hist) method # # The `hist` method allows us to see the distribution of a numerical variable. Categorical variables should be visualized using `barh`, and numerical variables should be visualized using `hist`. 
#
# The `hist` method takes in 1 mandatory argument and has several optional arguments (as is the case with `barh`, there are many other optional arguments, but here are just a few of them), and **`density` should always be set to `False`**
#
# | **Argument** | **Description** | **Type** | **Mandatory?** |
# | -- | -- | -- | -- |
# | `column` | Column name whose values you want on the x-axis of your plot | Column name (string) | Yes |
# | `density` | If `True`, then the resulting plot will be displayed not on the count of a value, but on the density of that value in the Table | boolean | No |
# | `group` | Similar to the Table method `group`, groups rows by this label before plotting | Column name (string) | No |
# | `overlay` | When `False`, make a new plot for each eligible statistic in the Table | boolean | No |
# | `xaxis_title` | Label on the x-axis of your plot | string | No |
# | `yaxis_title` | Label on the y-axis of your plot | string | No |
# | `title` | Title of your plot | string | No |
# | `bins` | A NumPy array of bin boundaries you want your histogram to gather data into | array | No |
#
# **Again, in all cases, `density` should be set to `False`.**
#
# Keep in mind the same plotting optional arguments mentioned in the `barh` introduction.
#
# Let's take a look at the weather in different states to see how the `hist` method helps visualize numerical variables:

# Oregon and Tennessee have similar counts in the weather dataset, so we can compare them
# First we get all Oregon weather information
oregon_weather = weather.where("State", "Oregon")
oregon_weather.show(5)

# This plot shows the distribution of wind speeds in Oregon for 2016
oregon_weather.select("Wind Speed").hist(
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Oregon Wind Speeds',
    density = False
)

# This shows us that wind speeds in Oregon tend to fall into the 0-10 mph range, but they can get higher on certain occasions. Let's see how that compares to wind speeds in another state, Tennessee:

# Get all Tennessee information
tennessee_weather = weather.where("State", "Tennessee")
tennessee_weather.show(5)

# This plot shows the distribution of wind speeds in Tennessee for 2016
tennessee_weather.select("Wind Speed").hist(
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Tennessee Wind Speeds',
    density=False
)

# We can use `hist` on a Table with just rows for these two states and use the optional `group` argument.
#
# *You can ignore the warning message that appears when you run the plotting cell below.*

tennegon_weather = weather.where("State", are.contained_in(["Oregon", "Tennessee"]))
tennegon_weather.show(5)

# This plot shows the distribution of wind speeds in Oregon and Tennessee for 2016
tennegon_weather.select("State", "Wind Speed").hist(
    group = "State",
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Oregon and Tennessee Wind Speeds',
    density = False
)

print("Oregon weeks in weather dataset:", oregon_weather.num_rows)
print("Tennessee weeks in weather dataset:", tennessee_weather.num_rows)

# Because Oregon and Tennessee have very similar counts in the `weather` dataset, we can compare them with each other in visualizations like this. It appears that wind speeds in Oregon are a bit higher on average, as the plot above shows the Oregon `Wind Speeds` to be a bit more shifted to the right than the Tennessee `Wind Speeds`.
# Let's see if we can use a table query to figure out the same information:

tennegon_weather.show(5)

oregon_average_wind_speed = np.average(oregon_weather.column("Wind Speed"))
tennessee_average_wind_speed = np.average(tennessee_weather.column("Wind Speed"))
print("Average Oregon Wind Speed:", oregon_average_wind_speed)
print("Average Tennessee Wind Speed:", tennessee_average_wind_speed)

# As we can see, the plot we made suggested that the average wind speed would be a bit higher in Oregon, and the table operations reflected that! This is a benefit of visualization: information about the dataset can be learned through visual observation alone. It is always beneficial to back your claims about data with concrete facts about the dataset, but visualizations can help abstract away some of the confusion of looking at raw data so that non-data-scientists can better understand what is going on.

# Now, think about what would happen if you chose two states with very different counts: why would it be more difficult to compare them with histograms? Let's take a look at what happens when we do this:

rhode_island_weather = weather.where("State", "Rhode Island")
alaska_weather = weather.where("State", "Alaska")
print("Rhode Island weeks in weather dataset:", rhode_island_weather.num_rows)
print("Alaska weeks in weather dataset:", alaska_weather.num_rows)

# Each individual plot looks fine:

# This plot shows the distribution of wind speeds in Rhode Island for 2016
rhode_island_weather.select("State", "Wind Speed").hist(
    group = "State",
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Rhode Island Wind Speeds',
    density = False
)

# This plot shows the distribution of wind speeds in Alaska for 2016
alaska_weather.select("State", "Wind Speed").hist(
    group = "State",
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Alaska Wind Speeds',
    density = False
)

# Take a look at the y-axis on both of these plots.
#
# *What do you think will happen when we try to plot them on the same graph?*

rhodlaska_weather = weather.where("State", are.contained_in(["Rhode Island", "Alaska"]))
rhodlaska_weather.show(5)

# This plot shows the distribution of wind speeds in Rhode Island and Alaska for 2016
rhodlaska_weather.select("State", "Wind Speed").hist(
    group = "State",
    xaxis_title = 'Wind Speed',
    yaxis_title = 'Count',
    title = 'Distribution of Rhode Island and Alaska Wind Speeds',
    density = False
)

# As you can see, there is so much more Alaska data than Rhode Island data that we can hardly make comparisons between the two. Trying to figure out information from this plot is very difficult, so we would either have to use another type of visualization or change the perspective of this plot to be able to learn from it.

# ### Homework Solution Formatting
#
# In this lab, we wrote visualization code that does not need to be saved in order to be submitted, but on your homework assignments we need to be able to save the visualizations you make. Follow **all** of these steps to correctly submit your homework code:
#
# When writing your solutions for visualizations on homework, remember to:
# - Store your plot in a variable
# - Use the `show=True` argument in your plot
# - Make the last line of your cell the variable you stored your plot in so that you can see it when you run the cell
#
# Doing all of these will allow us to grade your submission properly; a minimal sketch of this pattern is shown in the cell below.
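# For example, a homework answer following the three steps above might look like the cell below.
# It reuses the `weather` table from this lab; the variable names are just placeholders.

# +
state_counts = weather.group("State")
state_counts_plot = state_counts.barh("State", show=True)  # store the plot and pass show=True
state_counts_plot  # the last line of the cell is the stored plot
# -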
There are additional instructions on the homework notebook this week, so if you have any questions feel free to ask on Ed. # # ## Done! 😇 # # That's it! There's nowhere for you to submit this, as labs are not assignments. However, please ask any questions you have with this notebook in lab or on Ed. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # # Decision Tree Example with Own Data. TBD incl. Decision Boundaey (2 DIM) # ## Imports # + import matplotlib.pyplot as plt from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score from sklearn import tree import graphviz # to visualize decision tree # - # ## Build Decision Tree with custom data 1 # + # ------------------------------------------------------ # | Salary | Capital | Credit Amount | Class: "High Risk" (1) vs "Low Risk" (0) # | 150000 | 300000 | 60000 | 0 # | 130000 | 200000 | 180000 | 1 # | 120000 | 500000 | 200000 | 0 # | 120000 | 500000 | 400000 | 1 # | 70000 | 150000 | 500000 | 1 # | 60000 | 200000 | 90000 | 0 # | 65000 | 30000 | 40000 | 1 # | 80000 | 300000 | 40000 | 0 # ------------------------------------------------------ X = [[150000, 300000, 60000], [130000, 200000, 180000], [120000, 500000, 200000], [120000, 500000, 400000], [70000, 150000, 500000], [60000, 200000, 90000], [650000, 30000, 40000] ,[20000, 300000, 40000]] y = [0, 1, 0, 1, 1, 0, 1, 1] # Set min samples split to 3 to keep tree small clf = DecisionTreeClassifier(criterion='entropy', min_samples_split=3) clf.fit(X, y) feature_names = ["Salary [USD]", "Capital [USD]", "Credit Amount [USD]"] X_test = [[125000, 400000, 70000], [25000, 40000, 80000],[100000, 110000, 60000]] class_names = ["Low Risk", "High Risk"] y_test = [ 0, 1, 0] # Predict classes of test set y_predicted = clf.predict(X_test) # Measure accuracy acc = accuracy_score(y_test, y_predicted) print("Accuracy=", round(acc,2), "%") # - # ## Build Decision Tree with custom data 2 # + X = [[150000, 300000, 100000], [130000, 50000, 180000], [120000, 100000, 100000], [170000, 200000, 400000], [70000, 120000, 120000], [65000, 100000, 20000], [650000, 0, 40000] ,[20000, 100000, 50000]] y = [0, 1, 0, 1, 1, 0, 0, 1] # Set min samples split to 3 to keep tree small clf = DecisionTreeClassifier(criterion='entropy') clf.fit(X, y) feature_names = ["Salary [USD]", "Capital [USD]", "Credit Amount [USD]"] X_test = [[125000, 400000, 70000], [25000, 40000, 80000],[100000, 110000, 60000]] class_names = ["Low Risk", "High Risk"] y_test = [ 0, 1, 0] # Predict classes of test set y_predicted = clf.predict(X_test) # Measure accuracy acc = accuracy_score(y_test, y_predicted) print("Accuracy=", round(acc,2), "%") # - # ## Visualize Decision Tree (graphviz) dot_data = tree.export_graphviz(clf, out_file=None, feature_names=feature_names, class_names=class_names) graph = graphviz.Source(dot_data) graph.render("Example-depth-2-Decision-Tree-Entropy-DepthTBD2") # ## TBD: Visualize Decision Boundaries print(clf) print(clf.feature_importances_) # ## Example from scikit-learn # + # Parameters n_classes = 3 plot_colors = "bry" plot_step = 0.02 # Load data iris = load_iris() for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]): # We only take the two corresponding features X = iris.data[:, pair] y = iris.target # Train clf = DecisionTreeClassifier().fit(X, y) # Plot the decision boundary plt.subplot(2, 3, 
pairidx + 1) x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired) plt.xlabel(iris.feature_names[pair[0]]) plt.ylabel(iris.feature_names[pair[1]]) plt.axis("tight") # Plot the training points for i, color in zip(range(n_classes), plot_colors): idx = np.where(y == i) plt.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i], cmap=plt.cm.Paired) plt.axis("tight") plt.suptitle("Decision surface of a decision tree using paired features") plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- import pandas as pd import numpy as np from sklearn.model_selection import train_test_split import sklearn.metrics as metrics from scipy import interpolate from scipy import integrate import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.linear_model import LogisticRegression from abroca import * import warnings warnings.filterwarnings('ignore') #Recidivism - https://catalog.data.gov/dataset/recidivism-beginning-2008 df = pd.read_csv('abroca_try.csv') df.head(5) #create target label df['returned'] = np.where(df['Return Status'] != "Not Returned",1,0) #label encode le = preprocessing.LabelEncoder() for i in df.columns: if df[i].dtypes == 'object': df[i] = le.fit_transform(df[i]) #Splitting data into train and test X = df.iloc[:,0:4] y = df.iloc[:,5:6] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) #fit simple classifier lr = LogisticRegression() lr.fit(X_train,y_train) #make predictions X_test['pred_proba'] = lr.predict_proba(X_test)[:,:1] X_test['returned'] = y_test df_test = X_test #Compute Abroca slice = compute_abroca(df_test, pred_col = 'pred_proba' , label_col = 'returned', protected_attr_col = 'Gender', majority_protected_attr_val = 1, n_grid = 10000, plot_slices = True, file_name = 'slice_plot.png') slice # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ax2sax # language: python # name: ax2sax # --- # + # ------------------------------------------define logging and working directory from ProjectRoot import change_wd_to_project_root change_wd_to_project_root() from src.utils.Notebook_imports import * from src.utils.Tensorflow_helper import choose_gpu_by_id from tensorflow.keras.utils import plot_model from tensorflow.python.client import device_lib import tensorflow as tf tf.get_logger().setLevel('ERROR') import cv2 # ------------------------------------------define GPU id/s to use GPU_IDS = '0,1' GPUS = choose_gpu_by_id(GPU_IDS) print(GPUS) # ------------------------------------------jupyter magic config # %matplotlib inline # %reload_ext autoreload # %autoreload 2 # ------------------------------------------ import helpers from src.utils.Utils_io import Console_and_file_logger, init_config from src.visualization.Visualize import show_2D_or_3D from src.data.Dataset import get_img_msk_files_from_split_dir, load_acdc_files, get_train_data_from_df, get_trainings_files from src.data.Generators import DataGenerator, CycleMotionDataGenerator from 
src.utils.KerasCallbacks import get_callbacks import src.utils.Loss_and_metrics as metr import src.models.SpatialTransformer as st from src.models.SpatialTransformer import create_affine_cycle_transformer_model from src.models.ModelUtils import load_pretrained_model # ------------------------------------------path and project params ARCHITECTURE = '3D' # 2D DATASET = 'GCN' # 'acdc' # or 'gcn' or different versions such as gcn_01/02 FOLD = 0 # CV fold 0-3 EXP_NAME = 'ax_sax/trained_on_original_ax_sax_pairs_temp/' # Define an experiment name, could have subfolder conventions EXPERIMENT = '{}/{}'.format(ARCHITECTURE, EXP_NAME) # Uniform path names, separation of concerns timestemp = str(datetime.datetime.now().strftime("%Y-%m-%d_%H_%M")) # ad a timestep to each project to make repeated experiments unique # This noebook expects the path to the following folders # each folder contains an tuple of 3D nrrd_image, nrrd_masks # any-path/ # - AX_3D(anyname_img.nrrd and anyname_msk.nrrd) # - AX_to_SAX_3D # - SAX_3D # - SAX_to_AX_3D DATA_PATH_AX = '/mnt/ssd/data/gcn/ax_sax_from_flo/ax3d/' # path to AX 3D files DATA_PATH_AX2SAX = '/mnt/ssd/data/gcn/ax_sax_from_flo/sax3d/' # path to transformed AX 3D files (target of AX) DATA_PATH_SAX = '/mnt/ssd/data/gcn/ax_sax_from_flo/sax3d/' # path to SAX 3D files DATA_PATH_SAX2AX = '/mnt/ssd/data/gcn/ax_sax_from_flo/ax3d/' # path to transformed SAX 3D files (target of SAX) DF_PATH = '/mnt/ssd/data/gcn/gcn_05_2020_ax_sax_86/folds.csv' # path to folds dataframe MODEL_PATH = os.path.join('models', EXPERIMENT, timestemp) TENSORBOARD_LOG_DIR = os.path.join('reports/tensorboard_logs', EXPERIMENT,timestemp) CONFIG_PATH = os.path.join('reports/configs/',EXPERIMENT,timestemp) HISTORY_PATH = os.path.join('reports/history/',EXPERIMENT,timestemp) # ------------------------------------------static model, loss and generator hyperparameters DIM = [40, 64, 64] # network input params for spacing of 3, (z,y,x) DEPTH = 4 # number of down-/upsampling blocks FILTERS = 16 # initial number of filters, will be doubled after each downsampling block SPACING = [6, 6, 6] # if resample, resample to this spacing, (z,y,x) M_POOL = [2, 2, 2]# size of max-pooling used for downsampling and upsampling F_SIZE = [3, 3, 3] # conv filter size IMG_CHANNELS = 1 # Currently our model needs that image channel MASK_VALUES = [1, 2, 3] #channel order: Background, RV, MYO, LV MASK_CLASSES = len(MASK_VALUES) # no of labels BORDER_MODE = cv2.BORDER_REFLECT_101 # border mode for the data generation IMG_INTERPOLATION = cv2.INTER_LINEAR # image interpolation in the genarator MSK_INTERPOLATION = cv2.INTER_NEAREST # mask interpolation in the generator AUGMENT = False # Not implemented for the AX2SAX case SHUFFLE = True AUGMENT_GRID = False # Not implemented for the AX2SAX case RESAMPLE = True SCALER = 'MinMax' # MinMax Standard or Robust AX_LOSS_WEIGHT = 10.0 # weighting factor of the ax2sax loss WEIGHT_MSE_INPLANE = True # turn inplane weighting on/off MASK_SMALLER_THAN_THRESHOLD = 0.001 # define the threshold for masking the ax2sax/sax2ax MSE loss, areas with smaller values, will be masked out SAX_LOSS_WEIGHT = 10.0 # weighting factor of the sax2ax loss CYCLE_LOSS = True # turn this loss on/off FOCUS_LOSS_WEIGHT = 1.0 # weighting of the focus loss FOCUS_LOSS = True # turn this loss on/off USE_SAX2AX_PROB = False # apply the focus loss on AX2SAX_mask predictions, or on AX2SAX2AX_mask (back-transformed) predictions MIN_UNET_PROBABILITY = 0.9 # threshold to count only prediction greater than this value for the focus 
loss # ------------------------------------------individual training params GENERATOR_WORKER = 2 # number of parallel workers in our generator. if not set, use batchsize, numbers greater than batchsize does not make any sense SEED = 42 # define a seed for the generator shuffle BATCHSIZE = 2 # 32, 64, 24, 16, 1 for 3spacing 3,3,3 use: 2 INITIAL_EPOCH = 0 # change this to continue training EPOCHS = 300 # define a maximum numbers of epochs EPOCHS_BETWEEN_CHECKPOINTS = 5 MONITOR_FUNCTION = 'val_loss' MONITOR_MODE = 'min' SAVE_MODEL_FUNCTION = 'val_loss' SAVE_MODEL_MODE = 'min' MODEL_PATIENCE = 20 BN_FIRST = False # decide if batch normalisation between conv and activation or afterwards BATCH_NORMALISATION = True # apply BN or not USE_UPSAMPLE = True # otherwise use transpose for upsampling PAD = 'same' # padding strategy of the conv layers KERNEL_INIT = 'he_normal' # conv weight initialisation OPTIMIZER = 'adam' # Adam, Adagrad, RMSprop, Adadelta, # https://keras.io/optimizers/ ACTIVATION = 'elu' # tf.keras.layers.LeakyReLU(), relu or any other non linear activation function LEARNING_RATE = 1e-4 # start with a huge lr to converge fast DECAY_FACTOR = 0.3 # Define a learning rate decay for the ReduceLROnPlateau callback MIN_LR = 1e-10 # minimal lr, smaller lr does not improve the model DROPOUT_min = 0.3 # lower dropout at the shallow layers DROPOUT_max = 0.5 # higher dropout at the deep layers # ------------------------------------------these metrics and loss function are meant if you continue training of the U-Net metrics = [ metr.dice_coef_labels, metr.dice_coef_myo, metr.dice_coef_lv, metr.dice_coef_rv ] LOSS_FUNCTION = metr.bce_dice_loss # Create a logger instance with the following setup: info or debug to console and file and error logs to a separate file # Define a config for param injection, # save a serialized version to load the experiment for prediction/evaluation, # make sure all paths exist Console_and_file_logger(EXPERIMENT, logging.INFO) config = init_config(config=locals(), save=True) print(config) # - # # Check Tensorflow setup and available GPUs from tensorflow.keras.mixed_precision import experimental as mixed_precision logging.info('Is built with tensorflow: {}'.format(tf.test.is_built_with_cuda())) logging.info('Visible devices:\n{}'.format(tf.config.list_physical_devices())) logging.info('Local devices: \n {}'.format(device_lib.list_local_devices())) policy = mixed_precision.Policy('mixed_float16') mixed_precision.set_policy(policy) logging.info('Compute dtype: %s' % policy.compute_dtype) logging.info('Variable dtype: %s' % policy.variable_dtype) # # Load trainings and validation files for the choosen fold # + # Load AX volumes x_train_ax, y_train_ax, x_val_ax, y_val_ax = get_trainings_files(data_path=DATA_PATH_AX,path_to_folds_df=DF_PATH, fold=FOLD) logging.info('AX train CMR: {}, AX train masks: {}'.format(len(x_train_ax), len(y_train_ax))) logging.info('AX val CMR: {}, AX val masks: {}'.format(len(x_val_ax), len(y_val_ax))) # load AX2SAX volumes x_train_ax2sax, y_train_ax2sax, x_val_ax2sax, y_val_ax2sax = get_trainings_files(data_path=DATA_PATH_AX2SAX,path_to_folds_df=DF_PATH, fold=FOLD) logging.info('AX2SAX train CMR: {}, AX2SAX train masks: {}'.format(len(x_train_ax2sax), len(y_train_ax2sax))) logging.info('AX2SAX val CMR: {}, AX2SAX val masks: {}'.format(len(x_val_ax2sax), len(y_val_ax2sax))) # Load SAX volumes x_train_sax, y_train_sax, x_val_sax, y_val_sax = get_trainings_files(data_path=DATA_PATH_SAX,path_to_folds_df=DF_PATH, fold=FOLD) logging.info('SAX train CMR: 
{}, SAX train masks: {}'.format(len(x_train_sax), len(y_train_sax))) logging.info('SAX val CMR: {}, SAX val masks: {}'.format(len(x_val_sax), len(y_val_sax))) # load SAX2AX volumes x_train_sax2ax, y_train_sax2ax, x_val_sax2ax, y_val_sax2ax = get_trainings_files(data_path=DATA_PATH_SAX2AX,path_to_folds_df=DF_PATH, fold=FOLD) logging.info('SAX2AX train CMR: {}, SAX2AX train masks: {}'.format(len(x_train_sax2ax), len(y_train_sax2ax))) logging.info('SAX2AX val CMR: {}, SAX2AX val masks: {}'.format(len(x_val_sax2ax), len(y_val_sax2ax))) # - # filter files by name, debugging purpose search_str = '2cvu'.lower() x_val_ax = [x for x in x_val_ax if search_str in x.lower()] x_val_sax = [x for x in x_val_sax if search_str in x.lower()] x_val_ax2sax = [x for x in x_val_ax2sax if search_str in x.lower()] x_val_sax2ax = [x for x in x_val_sax2ax if search_str in x.lower()] print(len(x_val_ax)) # create two generators, one for the training files, one for the validation files batch_generator = CycleMotionDataGenerator(x=x_train_ax, y=x_train_ax2sax, x2=x_train_sax, y2=x_train_sax2ax, config=config) valid_config = config.copy() valid_config['AUGMENT_GRID'] = False valid_config['AUGMENT'] = False valid_generator = CycleMotionDataGenerator(x=x_val_ax, y=x_val_ax2sax, x2=x_val_sax, y2=x_val_sax2ax, config=valid_config) # Select batch generator output x = '' y = '' @interact def select_batch(batch = (0,len(valid_generator), 1)): global x, y, x2, y2 input_ , output_ = valid_generator.__getitem__(batch) x = input_[0] y = output_[0] x2 = input_[1] y2 = output_[1] logging.info('input elements: {}'.format(len(input_))) logging.info('output elements: {}'.format(len(output_))) logging.info(x.shape) logging.info(y.shape) logging.info(x2.shape) logging.info(y2.shape) @interact def select_image_in_batch(im = (0,x.shape[0]- 1, 1),slice_by=(1,6)): # define a different logging level to make the generator steps visible #logging.getLogger().setLevel(logging.DEBUG) temp_dir = 'reports/figures/temp/' ensure_dir(temp_dir) logging.info('AX: {}'.format(x[im].shape)) show_2D_or_3D(x[im][...,0][::slice_by]) plt.savefig(os.path.join(temp_dir,'ax.pdf')) plt.show() logging.info('AXtoSAX: {}'.format(y[im].shape)) show_2D_or_3D(y[im][...,0][::slice_by]) plt.savefig(os.path.join(temp_dir,'ax2sax.pdf')) plt.show() logging.info('SAX: {}'.format(x2[im].shape)) show_2D_or_3D(x2[im][...,0][::slice_by]) plt.savefig(os.path.join(temp_dir,'sax.pdf')) plt.show() logging.info('SAXtoAX: {}'.format(y2[im].shape)) show_2D_or_3D(y2[im][...,0][::slice_by]) plt.savefig(os.path.join(temp_dir,'sax2ax.pdf')) plt.show() # + # load a pretrained 2D unet """ load past config for model training """ if 'config_chooser' in locals(): config_file = config_chooser.selected else: #config_file = '/mnt/ssd/git/3d-mri-domain-adaption/reports/configs/2D/gcn_and_acdc_excl_ax/config.json' # config for TMI paper # config_file = '/mnt/ssd/git/cardio/reports/configs/2D/gcn_05_2020_sax_excl_ax_patients/2020-11-20_17_24/config.json' # retrained with downsampling # retrained with downsampling and lower clipping config_file = '/mnt/ssd/git/cardio/reports/configs/2D/gcn_05_2020_sax_excl_ax_patients_baseline/lower_clip/2020-12-18_20_03/config.json' # # load config with all params into global namespace with open(config_file, encoding='utf-8') as data_file: config_temp = json.loads(data_file.read()) logging.info('Load model from Experiment: {}'.format(config_temp['EXPERIMENT'])) if 'strategy' not in globals(): # distribute the training with the mirrored data paradigm across multiple 
gpus if available, if not use gpu 0 strategy = tf.distribute.MirroredStrategy(devices=config.get('GPUS', ["/gpu:0"])) with strategy.scope(): globals()['unet'] = load_pretrained_model(config_temp, metrics, comp=False) # - if 'strategy' not in globals(): # distribute the training with the mirrored data paradigm across multiple gpus if available, if not use gpu 0 strategy = tf.distribute.MirroredStrategy(devices=config.get('GPUS', ["/gpu:0"])) # inject the pre-trained unet if given, otherwise build the model without the pretrained unet with strategy.scope(): model = st.create_affine_cycle_transformer_model(config=config, unet=locals().get('unet', None)) model.summary(line_length=150) #plot_model(model, to_file='reports/figures/temp_graph.pdf',show_shapes=True) @interact def select_image_in_batch(im = (0,x.shape[0]- 1, 1),mask_smaller_than='0.001', slice_by=(1,6)): global m import numpy as np temp = x[im] sax = x2[im] temp_ = y[im] mask = temp_ >float(mask_smaller_than) # define a different logging level to make the generator steps visible logging.getLogger().setLevel(logging.INFO) logging.info('prediction on: {}'.format(temp.shape)) show_2D_or_3D(temp[::slice_by]) plt.show() pred, inv_pred, ax2sax_mod, prob, ax_msk,m, m_mod = model.predict(x = [np.expand_dims(temp,axis=0),np.expand_dims(sax,axis=0)]) logging.info('rotated by the model: {}'.format(pred[0].shape)) show_2D_or_3D(pred[0][::slice_by], mask[::slice_by]) plt.show() logging.info('inverse rotation on SAX: {}'.format(inv_pred[0].shape)) show_2D_or_3D(inv_pred[0][::slice_by]) plt.show() logging.info('predicted mask: {}'.format(inv_pred[0].shape)) show_2D_or_3D(prob[0][::slice_by]) plt.show() logging.info('predicted mask in ax: {}'.format(ax_msk[0].shape)) show_2D_or_3D(ax_msk[0][::slice_by]) plt.show() # calculate the loss mask from target AX2SAX image mask = temp_ >float(mask_smaller_than) logging.info('masked by GT: {}'.format(mask.shape)) masked = pred[0] * mask show_2D_or_3D(masked[::slice_by], mask[::slice_by]) plt.show() logging.info('target (AX2SAX): {}'.format(temp_.shape)) show_2D_or_3D(temp_[::slice_by]) plt.show() logging.info('Created MSE mask by thresholding the target (AX2SAX) with {}: {}'.format(mask_smaller_than,temp_.shape)) show_2D_or_3D(mask[::slice_by]) plt.show() try: from tensorflow.keras.metrics import MSE as mse logging.info('MSE: {}'.format(mse(pred[0], temp_).numpy().mean())) logging.info('prob loss: {}'.format(metr.max_volume_loss(min_probabillity=0.5)(temp_[tf.newaxis,...],prob).numpy().mean())) print(np.reshape(m[0],(3,4))) print(np.reshape(m_mod[0],(3,4))) except Exception as e: pass # train one model initial_epoch = 0 logging.info('Fit model, start trainings process') # fit model with trainingsgenerator results = model.fit( x=batch_generator, validation_data=valid_generator, validation_steps=len(valid_generator), epochs=200, callbacks = get_callbacks(config, valid_generator), steps_per_epoch = len(batch_generator), initial_epoch=initial_epoch, max_queue_size=20, workers=2, use_multiprocessing=True, verbose=1) # + # if, for any reason, you want to save the latest model, use this cell #tf.keras.models.save_model(model,filepath=config['MODEL_PATH'],overwrite=True,include_optimizer=False,save_format='tf') # - config['MODEL_PATH'] # + # Fast tests of a trained model, the "real" predictions will be done in src/notebooks/Predict """ load past config for model training """ if 'strategy' not in locals(): # distribute the training with the mirrored data paradigm across multiple gpus if available, if not use gpu 0 
strategy = tf.distribute.MirroredStrategy(devices=config.get('GPUS', ["/gpu:0"])) # round the crop and pad values instead of ceil #config_file = 'reports/configs/3D/ax_sax/unetwithdownsamplingaugmentation_new_data/2020-12-03_18_20/config.json' # Fold 0 #config_file = 'reports/configs/3D/ax_sax/unetwithdownsamplingaugmentation_new_data/2020-12-03_22_02/config.json' # Fold 1 #config_file = 'reports/configs/3D/ax_sax/unetwithdownsamplingaugmentation_new_data/2020-12-04_16_56/config.json' # Fold 2 #config_file = 'reports/configs/3D/ax_sax/unetwithdownsamplingaugmentation_new_data/2020-12-07_12_36/config.json' # Fold 3 config_file = 'reports/configs/3D/ax_sax/train_on_ax_sax/fold0/2020-12-17_11_44/config.json' # Fold 0 # load a pre-trained ax2sax model, create the graph and load the weights separately, due to own loss functions, this is easier with open(config_file, encoding='utf-8') as data_file: config_temp = json.loads(data_file.read()) config_temp['LOSS_FUNCTION'] = config['LOSS_FUNCTION'] logging.info('Load model from Experiment: {}'.format(config_temp['EXPERIMENT'])) with strategy.scope(): globals()['model'] = st.create_affine_cycle_transformer_model(config=config_temp, metrics=metrics, unet=locals().get('unet', None)) model.load_weights(os.path.join(config_temp['MODEL_PATH'],'model.h5')) logging.info('loaded model weights as h5 file') # - # # Fast predictions with all files of the generator # predict, visualise the transformation of AX train files import numpy as np cfg = config.copy() cfg['BATCHSIZE'] = 10 cfg['AUGMENT_GRID'] = False valid_generator = CycleMotionDataGenerator(x_train_ax, x_train_sax, cfg) input_, output_ = valid_generator.__getitem__(0) x_ = input_[0] x2_ = input_[1] y_ = output_[0] y2_ = output_[1] @interact def select_image_in_batch(im = (0,x_.shape[0]- 1, 1), slice_by=(1,6)): global m temp = x_[im] temp_ = y_[im] sax = x2_[im] # define a different logging level to make the generator steps visible logging.getLogger().setLevel(logging.INFO) logging.info('prediction on:') show_2D_or_3D(temp[::slice_by]) plt.show() pred, inv_pred, ax2sax_mod, pred_mask, ax2sax_msk,m_first, m = model.predict(x=[np.expand_dims(temp,axis=0),np.expand_dims(sax,axis=0)]) logging.info('rotated by the model') show_2D_or_3D(pred[0][::slice_by]) plt.show() logging.info('modified rotation of the model') show_2D_or_3D(ax2sax_mod[0][::slice_by]) plt.show() logging.info('predicted mask:') show_2D_or_3D(pred_mask[0][::slice_by]) plt.show() logging.info('target (SAX):') show_2D_or_3D(temp_[::slice_by]) plt.show() logging.info('inverted rotation on SAX') show_2D_or_3D(inv_pred[0][::slice_by]) plt.show() try: print(np.reshape(m_first[0],(3,4))) print(np.reshape(m[0],(3,4))) except Exception as e: pass # # Predictions on the heldout test split cfg = config.copy() cfg['BATCHSIZE'] = len(x_val_ax) v_generator = CycleMotionDataGenerator(x_val_ax, x_val_sax, cfg) input_, output_ = v_generator.__getitem__(0) x_ = input_[0] x2_ = input_[1] y_ = output_[0] y2_ = output_[1] @interact def select_image_in_batch(im = (0,x_.shape[0]- 1, 1), slice_by=(1,6)): global m temp = x_[im] temp_ = y_[im] sax = x2_[im] # define a different logging level to make the generator steps visible logging.getLogger().setLevel(logging.INFO) logging.info('prediction on:') show_2D_or_3D(temp[::slice_by]) plt.show() pred, inv_pred, ax2sax_mod, pred_mask, ax_mask, m_first, m = model.predict(x=[np.expand_dims(temp,axis=0),np.expand_dims(sax,axis=0)]) logging.info('rotated by the model') show_2D_or_3D(pred[0][::slice_by]) plt.show() 
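# The remaining outputs are displayed the same way below: the modified AX-to-SAX transform,
# the predicted mask in SAX and in AX orientation, the target SAX volume, the inverse transform
# applied to the SAX input, and finally the two 3x4 affine matrices returned by the model.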
logging.info('modified rotation') show_2D_or_3D(ax2sax_mod[0][::slice_by]) plt.show() logging.info('predicted mask') show_2D_or_3D(pred_mask[0][::slice_by]) plt.show() logging.info('predicted in AX') show_2D_or_3D(ax_mask[0][::slice_by]) plt.show() logging.info('target (SAX):') show_2D_or_3D(temp_[::slice_by]) plt.show() logging.info('inverted rotation on SAX') show_2D_or_3D(inv_pred[0][::slice_by]) plt.show() try: print(np.reshape(m_first[0],(3,4))) print(np.reshape(m[0],(3,4))) except Exception as e: pass # # Temp tests # + # check the memory usage import sys # These are the usual ipython objects ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars'] # Get a sorted list of the objects and their sizes sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="9ckBC4qSFJ2u" # 作業目標:
# 1. Use charts flexibly in a variety of situations # 2. Interpretation of charts # + [markdown] id="wkssN_r7UT2o" # Assignment focus:
# 1. Draw the charts required by the task
# 2. When interpreting a chart, understand what it conveys # + [markdown] id="B9B8LGwLFORw" # Task: read boston.csv from the data folder and analyse its columns with charts.
# 1. Draw a box plot and determine which column has its median between 300 and 400.
    # 2.畫出散佈圖 x='NOX', y='DIS' ,並說明這兩欄位有什麼關係? # # + id="8EK9ei8pFNNd" import pandas as pd import numpy as np # + colab={"base_uri": "https://localhost:8080/"} id="j8oG-3XJGECZ" executionInfo={"status": "ok", "timestamp": 1608450069878, "user_tz": -480, "elapsed": 1221, "user": {"displayName": "\u732e\u7ae4\u9ec3", "photoUrl": "", "userId": "07529243043474362942"}} outputId="d571ceba-a845-4e8c-8139-ac99e9c1f484" # #1.畫出箱型圖,並判斷哪個欄位的中位數在300~400之間? #欄位TAX df = pd.read_csv("boston.csv") df.boxplot() # + id="1YrW_1o9MXUy" #2. 畫出散佈圖 x='NOX', y='DIS' ,並說明這兩欄位有什麼關係? #成反比關係 df.plot.scatter(x='RM', y='LSTAT') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np img = np.zeros((700, 700, 3), dtype=np.uint8) img[583:650, 20:90] = [61, 79, 39] img[565:678, 109:251] = [37, 56, 39] img[474:550, 180:258] = [9, 43, 39] img[465:619, 273:467] = [0, 0, 90] img[402:550, 20:168] = [0, 0, 90] img[356:512, 485:690] = [61, 79, 39] img[530:638, 481:636] = [37, 56, 39] img[284:447, 224:450] = [37, 56, 39] img[148:338, 465:658] = [0, 0, 100] img[30:261, 311:432] = [61, 79, 39] img[113:228, 58:292] = [9, 43, 39] img[89:225, 376:546] = [0, 0, 255] img[42:310, 592:560] = [0, 0, 255] img[247:423, 413:679] = [0, 0, 255] img[252:348, 278:402] = [0, 0, 255] img[375:506, 237:348] = [0, 0, 255] img[313:449, 108:224] = [0, 0, 255] img[467:516, 418:469] = [0, 0, 255] img[532:589, 142:203] = [0, 0, 255] img[15:125, 581:691] = [0, 0, 255] img[464:498, 51:89] = [0, 0, 255] img[58:166, 183:357] = [0, 0, 255] cv2.imshow("img", img) cv2.waitKey() == 13 cv2.destroyAllWindows() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''base'': conda)' # name: python3 # --- # ## AR + MA models # # 1. 「AR」*Autoregression*,自回归模型; # # 2. 「MA」*Moving Average*,滑动平均模型; # # 3. 「ARMA」*Autoregressive Moving Average*,自回归滑动平均模型; # # 4. 「ARIMA」*Autoregressive Integrated Moving Average*,差分自回归移动平均模型; # # 5. 「SARIMA」*Seasonal Autoregressive Integrated Moving-Average*,季节性差分自回归移动平均模型; # # 6. 「SARIMAX」*Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors*,外生变量的季节性差分自回归移动平均模型; # # 7. 「VAR」*Vector Autoregression*,向量自回归模型; # # 8. 「VARMA」*Vector Autoregression Moving-Average*,向量自回归滑动平均模型; # # 9. 
「VARMAX」*Vector Autoregression Moving-Average with Exogenous Regressors*,外生变量的向量自回归滑动平均模型; # ## AR + MA # AR example from statsmodels.tsa.ar_model import AutoReg from random import random # contrived dataset data = [x + random() for x in range(1, 100)] # fit model model = AutoReg(data, lags=1) model_fit = model.fit() # make prediction yhat = model_fit.predict(len(data), len(data)) print(yhat) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="216E63EBD0DC4C26828C96B784C49B24" mdEditEnable=false # # GRU # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="35A5BB52AD23476AB3234E38360D557E" mdEditEnable=false # RNN存在的问题:梯度较容易出现衰减或爆炸(BPTT) # ⻔控循环神经⽹络:捕捉时间序列中时间步距离较⼤的依赖关系 # **RNN**: # # # ![Image Name](https://cdn.kesci.com/upload/image/q5jjvcykud.png?imageView2/0/w/320/h/320) # # # $$ # H_{t} = ϕ(X_{t}W_{xh} + H_{t-1}W_{hh} + b_{h}) # $$ # **GRU**: # # # ![Image Name](https://cdn.kesci.com/upload/image/q5jk0q9suq.png?imageView2/0/w/640/h/640) # # # # $$ # R_{t} = σ(X_tW_{xr} + H_{t−1}W_{hr} + b_r)\\ # Z_{t} = σ(X_tW_{xz} + H_{t−1}W_{hz} + b_z)\\ # \widetilde{H}_t = tanh(X_tW_{xh} + (R_t ⊙H_{t−1})W_{hh} + b_h)\\ # H_t = Z_t⊙H_{t−1} + (1−Z_t)⊙\widetilde{H}_t # $$ # • 重置⻔有助于捕捉时间序列⾥短期的依赖关系; # • 更新⻔有助于捕捉时间序列⾥⻓期的依赖关系。 # ### 载入数据集 # + id="303B4A29B33E4646B59188EC4D57BA14" jupyter={} tags=[] slideshow={"slide_type": "slide"} import os os.listdir('/home/kesci/input') # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="954C3D57745047D587E4BFA23CE0BFA0" import numpy as np import torch from torch import nn, optim import torch.nn.functional as F # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="64E106307FF64926859FD92A826422D6" import sys sys.path.append("../input/") import d2l_jay9460 as d2l device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') (corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics() # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="CE7C3A96CFCC449A8D5F65577841999B" mdEditEnable=false # ### 初始化参数 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="9FB61618A6DB406D843C7BCEDDA63EB1" num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size print('will use', device) def get_params(): def _one(shape): ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32) #正态分布 return torch.nn.Parameter(ts, requires_grad=True) def _three(): return (_one((num_inputs, num_hiddens)), _one((num_hiddens, num_hiddens)), torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True)) W_xz, W_hz, b_z = _three() # 更新门参数 W_xr, W_hr, b_r = _three() # 重置门参数 W_xh, W_hh, b_h = _three() # 候选隐藏状态参数 # 输出层参数 W_hq = _one((num_hiddens, num_outputs)) b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True) return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]) def init_gru_state(batch_size, num_hiddens, device): #隐藏状态初始化 return (torch.zeros((batch_size, num_hiddens), device=device), ) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="1CD23FE7F21C4D5D82702BE6CACE3DBF" mdEditEnable=false # ### GRU模型 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="F4D5CB26DCBE45E58DECB282979C85D0" def 
gru(inputs, state, params): W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params H, = state outputs = [] for X in inputs: Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z) R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r) H_tilda = torch.tanh(torch.matmul(X, W_xh) + R * torch.matmul(H, W_hh) + b_h) H = Z * H + (1 - Z) * H_tilda Y = torch.matmul(H, W_hq) + b_q outputs.append(Y) return outputs, (H,) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="DC589ABFFE464B32BD3CADCE9C64DE87" mdEditEnable=false # ### 训练模型 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="CF81DCA87B884C2E996632B5E49BB9D9" num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="4FA0FC8D41C04605BD1FCA235D00233B" d2l.train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, False, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="32035E22A7B04D3A8F787C7CB15F366F" mdEditEnable=false # ### 简洁实现 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="CAE438B5EC9A454B8C1AAAF4EFEF7548" num_hiddens=256 num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] lr = 1e-2 # 注意调整学习率 gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens) model = d2l.RNNModel(gru_layer, vocab_size).to(device) d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="15C47CF30681418997AEDB9F08C15C9B" # # LSTM # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="E7D0864A57DB432483AD1B0C89171313" mdEditEnable=false # ** 长短期记忆long short-term memory **: # 遗忘门:控制上一时间步的记忆细胞 # 输入门:控制当前时间步的输入 # 输出门:控制从记忆细胞到隐藏状态 # 记忆细胞:⼀种特殊的隐藏状态的信息的流动 # # # ![Image Name](https://cdn.kesci.com/upload/image/q5jk2bnnej.png?imageView2/0/w/640/h/640) # # $$ # I_t = σ(X_tW_{xi} + H_{t−1}W_{hi} + b_i) \\ # F_t = σ(X_tW_{xf} + H_{t−1}W_{hf} + b_f)\\ # O_t = σ(X_tW_{xo} + H_{t−1}W_{ho} + b_o)\\ # \widetilde{C}_t = tanh(X_tW_{xc} + H_{t−1}W_{hc} + b_c)\\ # C_t = F_t ⊙C_{t−1} + I_t ⊙\widetilde{C}_t\\ # H_t = O_t⊙tanh(C_t) # $$ # # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="33A4C20200044DFC80CD2FB42F2D8CA0" mdEditEnable=false # ### 初始化参数 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="E296216AE65A40608DA0B7856091AA39" num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size print('will use', device) def get_params(): def _one(shape): ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32) return torch.nn.Parameter(ts, requires_grad=True) def _three(): return (_one((num_inputs, num_hiddens)), _one((num_hiddens, num_hiddens)), torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True)) W_xi, W_hi, b_i = _three() # 输入门参数 W_xf, W_hf, b_f = _three() # 遗忘门参数 W_xo, W_ho, b_o = _three() # 输出门参数 W_xc, W_hc, b_c = _three() # 候选记忆细胞参数 # 输出层参数 W_hq = _one((num_hiddens, num_outputs)) b_q = torch.nn.Parameter(torch.zeros(num_outputs, 
device=device, dtype=torch.float32), requires_grad=True) return nn.ParameterList([W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q]) def init_lstm_state(batch_size, num_hiddens, device): return (torch.zeros((batch_size, num_hiddens), device=device), torch.zeros((batch_size, num_hiddens), device=device)) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="91D90EADF26C40728E0CBB1849E3FCF3" mdEditEnable=false # ### LSTM模型 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="27A0EBB56386481F895FA2FFB0E9180F" def lstm(inputs, state, params): [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params (H, C) = state outputs = [] for X in inputs: I = torch.sigmoid(torch.matmul(X, W_xi) + torch.matmul(H, W_hi) + b_i) F = torch.sigmoid(torch.matmul(X, W_xf) + torch.matmul(H, W_hf) + b_f) O = torch.sigmoid(torch.matmul(X, W_xo) + torch.matmul(H, W_ho) + b_o) C_tilda = torch.tanh(torch.matmul(X, W_xc) + torch.matmul(H, W_hc) + b_c) C = F * C + I * C_tilda H = O * C.tanh() Y = torch.matmul(H, W_hq) + b_q outputs.append(Y) return outputs, (H, C) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="40C5775290DB48DB937B836471F2AC17" mdEditEnable=false # ### 训练模型 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="DD60C69E13F84A17BA441F7E75984E2F" num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] d2l.train_and_predict_rnn(lstm, get_params, init_lstm_state, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, False, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="F6D1F7613F63401CBDD49FBD7E611B4A" mdEditEnable=false # ### 简洁实现 # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="8AC2F2314D8C4DA097334B838732F9D2" num_hiddens=256 num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] lr = 1e-2 # 注意调整学习率 lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens) model = d2l.RNNModel(lstm_layer, vocab_size) d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="A5D90CA5217B4DB380BEF1CD17338BB6" mdEditEnable=false # # 深度循环神经网络 # # ![Image Name](https://cdn.kesci.com/upload/image/q5jk3z1hvz.png?imageView2/0/w/320/h/320) # # # $$ # \boldsymbol{H}_t^{(1)} = \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(1)} + \boldsymbol{H}_{t-1}^{(1)} \boldsymbol{W}_{hh}^{(1)} + \boldsymbol{b}_h^{(1)})\\ # \boldsymbol{H}_t^{(\ell)} = \phi(\boldsymbol{H}_t^{(\ell-1)} \boldsymbol{W}_{xh}^{(\ell)} + \boldsymbol{H}_{t-1}^{(\ell)} \boldsymbol{W}_{hh}^{(\ell)} + \boldsymbol{b}_h^{(\ell)})\\ # \boldsymbol{O}_t = \boldsymbol{H}_t^{(L)} \boldsymbol{W}_{hq} + \boldsymbol{b}_q # $$ # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="9E06D2B447EA485B88F31C9B57ED3A79" num_hiddens=256 num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] lr = 1e-2 # 注意调整学习率 gru_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens,num_layers=2) model = d2l.RNNModel(gru_layer, vocab_size).to(device) 
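# Note: despite the variable name gru_layer, this cell stacks a two-layer LSTM (num_layers=2);
# it is trained and sampled with the same d2l helper used for the single-layer models above.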
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="E386044D687F4D5D862CE5F6E3E2C00A" gru_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens,num_layers=6) model = d2l.RNNModel(gru_layer, vocab_size).to(device) d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + [markdown] jupyter={} tags=[] slideshow={"slide_type": "slide"} id="6F10BD29920D44F1808692B1AAED1FA7" mdEditEnable=false # # 双向循环神经网络 # # ![Image Name](https://cdn.kesci.com/upload/image/q5j8hmgyrz.png?imageView2/0/w/320/h/320) # # $$ # \begin{aligned} \overrightarrow{\boldsymbol{H}}_t &= \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(f)} + \overrightarrow{\boldsymbol{H}}_{t-1} \boldsymbol{W}_{hh}^{(f)} + \boldsymbol{b}_h^{(f)})\\ # \overleftarrow{\boldsymbol{H}}_t &= \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(b)} + \overleftarrow{\boldsymbol{H}}_{t+1} \boldsymbol{W}_{hh}^{(b)} + \boldsymbol{b}_h^{(b)}) \end{aligned} $$ # $$ # \boldsymbol{H}_t=(\overrightarrow{\boldsymbol{H}}_{t}, \overleftarrow{\boldsymbol{H}}_t) # $$ # $$ # \boldsymbol{O}_t = \boldsymbol{H}_t \boldsymbol{W}_{hq} + \boldsymbol{b}_q # $$ # + jupyter={} tags=[] slideshow={"slide_type": "slide"} id="176C228203A040D38D18DC194099C9EA" num_hiddens=128 num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e-2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开'] lr = 1e-2 # 注意调整学习率 gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens,bidirectional=True) model = d2l.RNNModel(gru_layer, vocab_size).to(device) d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) # + id="412543B3642948FA89C831603F7362BA" jupyter={} tags=[] slideshow={"slide_type": "slide"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using LAMMPS with iPython and Jupyter # LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up. # ## Installation # 1. Download the latest version of LAMMPS into a folder (we will calls this `$LAMMPS_DIR` from now on) # 2. Compile LAMMPS as a shared library and enable exceptions and PNG support # ```bash # cd $LAMMPS_DIR/src # make yes-molecule # make mpi mode=shlib LMP_INC="-DLAMMPS_PNG -DLAMMPS_EXCEPTIONS" JPG_LIB="-lpng" # ``` # # 3. Create a python virtualenv # ```bash # virtualenv testing # source testing/bin/activate # ``` # # 4. Inside the virtualenv install the lammps package # ``` # (testing) cd $LAMMPS_DIR/python # (testing) python install.py # (testing) cd # move to your working directory # ``` # # 5. Install jupyter and ipython in the virtualenv # ```bash # (testing) pip install ipython jupyter # ``` # # 6. 
Run jupyter notebook # ```bash # (testing) jupyter notebook # ``` # ## Example from lammps import IPyLammps L = IPyLammps() # + # 2d circle of particles inside a box with LJ walls import math b = 0 x = 50 y = 20 d = 20 # careful not to slam into wall too hard v = 0.3 w = 0.08 L.units("lj") L.dimension(2) L.atom_style("bond") L.boundary("f f p") L.lattice("hex", 0.85) L.region("box", "block", 0, x, 0, y, -0.5, 0.5) L.create_box(1, "box", "bond/types", 1, "extra/bond/per/atom", 6) L.region("circle", "sphere", d/2.0+1.0, d/2.0/math.sqrt(3.0)+1, 0.0, d/2.0) L.create_atoms(1, "region", "circle") L.mass(1, 1.0) L.velocity("all create 0.5 87287 loop geom") L.velocity("all set", v, w, 0, "sum yes") L.pair_style("lj/cut", 2.5) L.pair_coeff(1, 1, 10.0, 1.0, 2.5) L.bond_style("harmonic") L.bond_coeff(1, 10.0, 1.2) L.create_bonds("many", "all", "all", 1, 1.0, 1.5) L.neighbor(0.3, "bin") L.neigh_modify("delay", 0, "every", 1, "check yes") L.fix(1, "all", "nve") L.fix(2, "all wall/lj93 xlo 0.0 1 1 2.5 xhi", x, "1 1 2.5") L.fix(3, "all wall/lj93 ylo 0.0 1 1 2.5 yhi", y, "1 1 2.5") # - L.image(zoom=1.8) L.thermo_style("custom step temp epair press") L.thermo(100) output = L.run(40000) L.image(zoom=1.8) # ## Queries about LAMMPS simulation L.system L.system.natoms L.system.nbonds L.system.nbondtypes L.communication L.fixes L.computes L.dumps L.groups # ## Working with LAMMPS Variables L.variable("a index 2") L.variables L.variable("t equal temp") L.variables # + import sys if sys.version_info < (3, 0): # In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead. x = float(L.lmp_print('"${a}"')) else: # In Python 3 the print function can be redefined. # x = float(L.print('"${a}"')") # To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement x = float(eval("L.print('\"${a}\"')")) x # - L.variables['t'].value L.eval("v_t/2.0") L.variable("b index a b c") L.variables['b'].value L.eval("v_b") L.variables['b'].definition L.variable("i loop 10") L.variables['i'].value L.next("i") L.variables['i'].value L.eval("ke") # ## Accessing Atom data L.atoms[0] [x for x in dir(L.atoms[0]) if not x.startswith('__')] L.atoms[0].position L.atoms[0].id L.atoms[0].velocity L.atoms[0].force L.atoms[0].type # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Esempio di modello multi-output import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import pandas as pd from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Input from sklearn.model_selection import train_test_split # + def format_output(data): y1 = data.pop('Y1') y1 = np.array(y1) y2 = data.pop('Y2') y2 = np.array(y2) return y1, y2 def norm(x): return (x - train_stats['mean']) / train_stats['std'] # per confrontare y_pred ed y_true nel caso di regressione def plot_diff(y_true, y_pred, title=''): plt.scatter(y_true, y_pred) plt.title(title) plt.xlabel('True Values') plt.ylabel('Predictions') plt.axis('equal') plt.axis('square') plt.xlim(plt.xlim()) plt.ylim(plt.ylim()) plt.plot([-100, 100], [-100, 100]) plt.grid(True) plt.show() def plot_metrics(metric_name, title, ylim=5): plt.title(title) plt.ylim(0, ylim) plt.plot(history.history[metric_name], color='blue', label=metric_name) plt.plot(history.history['val_' + metric_name], color='green', label='val_' + 
metric_name) plt.show() # + # Get the data from UCI dataset URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx' # Use pandas excel reader df = pd.read_excel(URL) # fa lo shuffling prima di splittare in train, test df = df.sample(frac=1).reset_index(drop=True) # elimino le colonne fantasma df = df.drop(['Unnamed: 10', 'Unnamed: 11'], axis = 1) # Split the data into train and test with 80 train / 20 test train, test = train_test_split(df, test_size=0.2) # calcola le statistiche per la normalizzazione train_stats = train.describe() # Get Y1 and Y2 as the 2 outputs and format them as np arrays train_stats.pop('Y1') train_stats.pop('Y2') train_stats = train_stats.transpose() train_Y = format_output(train) test_Y = format_output(test) # Normalize the training and test data norm_train_X = norm(train) norm_test_X = norm(test) # + # Define model layers. input_layer = Input(shape=(len(train.columns),)) first_dense = Dense(units='128', activation='relu')(input_layer) second_dense = Dense(units='128', activation='relu')(first_dense) # Y1 output will be fed directly from the second dense y1_output = Dense(units='1', name='y1_output')(second_dense) third_dense = Dense(units='64', activation='relu')(second_dense) # Y2 output will come via the third dense y2_output = Dense(units='1', name='y2_output')(third_dense) # Define the model with the input layer and a list of output layers model = Model(inputs=input_layer, outputs=[y1_output, y2_output]) model.summary() # - train.columns # Specify the optimizer, and compile the model with loss functions for both outputs optimizer = tf.keras.optimizers.SGD(lr=0.001) model.compile(optimizer=optimizer, loss={'y1_output': 'mse', 'y2_output': 'mse'}, metrics={'y1_output': tf.keras.metrics.RootMeanSquaredError(), 'y2_output': tf.keras.metrics.RootMeanSquaredError()}) # Train the model for 500 epochs history = model.fit(norm_train_X, train_Y, epochs=500, batch_size=10, validation_data=(norm_test_X, test_Y), verbose = 0) # Test the model and print loss and mse for both outputs loss, Y1_loss, Y2_loss, Y1_rmse, Y2_rmse = model.evaluate(x=norm_test_X, y=test_Y) print("Loss = {}, Y1_loss = {}, Y1_mse = {}, Y2_loss = {}, Y2_mse = {}".format(loss, Y1_loss, Y1_rmse, Y2_loss, Y2_rmse)) # Plot the loss and mse Y_pred = model.predict(norm_test_X) plot_diff(test_Y[0], Y_pred[0], title='Y1') plot_diff(test_Y[1], Y_pred[1], title='Y2') plot_metrics(metric_name='y1_output_root_mean_squared_error', title='Y1 RMSE', ylim=2) plot_metrics(metric_name='y2_output_root_mean_squared_error', title='Y2 RMSE', ylim=2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # How does the system perform? # # This notebook will look the following: # # * How long do users wait to answer questions? # * How long do users wait for the network to respond? # * How many simultaneous users do we have? # * How many simultaneous requests are served? # # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np plt.style.use('seaborn') import caption_contest_data as ccd responses = ccd.responses(577) print("Number of responses:", len(responses)) responses.columns responses.iloc[0] # ## Response time # How long does the average user wait before providing a respose? That data is recorded in the responses and we can plot a histogram. 
# # We know that waiting for something to happen is characterized by an [exponential random variable]. Can we fit the PDF of an exponential random variable to the reponse time we see? # # [exponential random variable]:https://en.wikipedia.org/wiki/Exponential_distribution most_responses = (responses['response_time'] >= 0) & (responses['response_time'] <= 15) df = responses[most_responses].copy() df['response_time'].plot.hist(bins=50, normed=True) plt.xlabel('Respond time (seconds)') plt.show() users = df.participant_uid.unique() num_users = len(users) print(num_users, "total users") # ## Network delay # How long does our system take to respond? most_delays = (responses['network_delay'] >= 0) & (responses['network_delay'] <= 2) df = responses[most_delays] plt.style.use('seaborn-talk') df['network_delay'].plot.hist(bins=100) plt.title('Network delay') plt.show() # ## Concurrent users # How many users hit our system in a one second period? import datetime df = responses.copy() contest_start = df['timestamp_query_generated'].min() contest_end = df['timestamp_query_generated'].max() df = df.sort_values(by='timestamp_query_generated') delta = datetime.timedelta(seconds=1) df['seconds_elapsed'] = df['timestamp_query_generated'] - contest_start df['seconds_elapsed'] = df.apply(lambda row: row['seconds_elapsed'].total_seconds(), axis=1) total_seconds = (contest_end - contest_start).total_seconds() def find_users_in_range(start, k, total, resolution=None, times=None, participants=None): if k % 1000 == 0: print(k / total, "fraction") end = start + resolution n_questions = (times >= start) & (times < end) n_users = participants[n_questions].nunique() return {'questions served': n_questions.sum(), 'n_users': n_users, 'start': start, 'end': end} # + from joblib import Parallel, delayed times = df['seconds_elapsed'].values.astype("float32") participants = df["participant_uid"].apply(hash) measure = np.linspace(times.min(), times.max(), num=20_000) print(np.diff(measure).min()) resolution = 5 # seconds print(f"Launching {len(measure)//1000}k jobs...") print(f"Resolution: {resolution} seconds") # - kwargs = {"resolution": resolution, "times": times, "participants": participants} stats = [] for k, m in enumerate(measure): stat = find_users_in_range(m, k, len(measure), **kwargs) stats.append(stat) print(stats[:3]) stats = pd.DataFrame(stats) stats["minutes"] = stats["start"] / 60 stats["hours"] = stats["minutes"] / 60 stats["days"] = stats["hours"] / 24 stats.sample(n=5) max_rate = stats['questions served'].max() print(f'Questions served per {resolution} seconds: {max_rate}') # + import matplotlib.pyplot as plt w = 4.5 fig, axs = plt.subplots(ncols=2, figsize=(2 * w, 1*w)) ax = stats.plot(x="days", y="questions served", ax=axs[0]) ax.set_title(f"Questions served in {resolution} seconds") ax.legend_.remove() ax = stats.plot(x="days", y="n_users", ax=axs[1]) ax.set_title("Concurrent users") ax.legend_.remove() # + idx = (97.5<= stats.hours) & (stats.hours <= 104) w = 4.5 fig, axs = plt.subplots(ncols=2, figsize=(2 * w, 1*w)) ax = stats[idx].plot(x="days", y="questions served", ax=axs[0]) ax.set_title(f"Questions served in {resolution} seconds") ax.legend_.remove() ax = stats[idx].plot(x="days", y="n_users", ax=axs[1]) ax.set_title("Concurrent users") ax.legend_.remove() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Answers to 
Exercises # **Exercise 1** np.sum(big_array[big_array % 2 == 1]) # **Exercise 2** random[1:5, 2:5] + np.eye(3, 3) # **Exercise 3** # + big_part = random[:3, :5].reshape((1, 15)) negative_1 = random[3][5] small_part = np.array([negative_1]) #this just puts -1 into an array so it can be appended to the vector big_part combined = np.append(big_part, small_part) square = combined.reshape((4, 4)) square_mean = np.mean(square) square_mean # - # **Exercise 4** # + x = np.array([3.0, 5.0, 8.0]) y = np.array([3.0, 5.0, 8.0]) #Square x here: x = x ** 2 #Square y here: for i in range(len(y)): y[i] = y[i] ** 2 # - # **Exercise 5:** prices.sum(axis=0) cheapest_store = "C" # **Exercise 6:** vector = np.array([3, 1, -3]) an_array = np.array([[-2, -1, 3], [-3, 0, 3], [-3, -1, 4]]) identity = an_array + vector #does this look like np.eye(3, 3)? identity # **Exercise 7:** x = np.array([-7, 7]) # **Exercise 8:** y[np.sqrt(x) == 2] # **Exercise 9:** theta_hat = np.linalg.solve((X.T @ X), (X.T @ Y)) loss = np.dot(np.linalg.norm(Y - (X @theta_hat)), np.linalg.norm(Y - (X @ theta_hat))) # **Exercise 10:** # + male = titanic[titanic["Sex"] == "male"] male_avg = male["Fare"].mean() male_avg female = titanic[titanic["Sex"] == "female"] female_avg = female["Fare"].mean() female_avg # - # **Exercise 11:** survived = titanic[titanic["Survived"] == 1] age_survived = survived["Age"].mean() age_survived died = titanic[titanic["Survived"] == 0] age_died = died["Age"].mean() age_died # **Exercise 12:** class_1 = titanic[titanic["Pclass"] == 1] survived_1 = len(class_1[class_1["Survived"] == 1]) total_1 = len(class_1) proportion_1 = survived_1 / total_1 class_2 = titanic[titanic["Pclass"] == 2] survived_2 = len(class_2[class_2["Survived"] == 1]) total_2 = len(class_2) proportion_2 = survived_2 / total_2 class_3 = titanic[titanic["Pclass"] == 3] survived_3 = len(class_3[class_3["Survived"] == 1]) total_3 = len(class_3) proportion_3 = survived_3 / total_3 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py36 fastai # language: python # name: fastai # --- # # ULMFiT + Siamese Network for Sentence Vectors # ## Part Three: Classifying # # This notebook will use the German Language Model, created in the previous one, to predict categories based on Office FAQ entries. The model will be used as a sentence encoder for a Siamese Network that builds sentence vectors that are feed into a classifier network. 
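# As a rough illustration of the idea (not the notebook's trained model): the classifier defined
# further down consumes the two pooled sentence vectors u and v as the concatenation
# [u, v, |u - v|, u * v], a common Siamese/InferSent-style feature combination. A minimal sketch
# with random vectors, assuming a pooled vector size of 1200 purely for illustration:
# +
import torch

u = torch.randn(32, 1200)  # batch of pooled encodings for sentences A
v = torch.randn(32, 1200)  # batch of pooled encodings for sentences B
features = torch.cat((u, v, torch.abs(u - v), u * v), dim=1)
print(features.shape)      # torch.Size([32, 4800]), fed to the linear classifier head
# -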
# Needed to load fastai library import sys sys.path.append("/data/home/makayser/notebooks/fastai/") # go to parent dir # + import fastai # from fastai.lm_rnn import * from fastai.text import * import html #temp fix #from old.fastai import lm_rnn as old_lm_rnn import json import html import re import pickle from collections import Counter import random import pandas as pd import numpy as np from pathlib import Path import sklearn from sklearn import model_selection from functools import partial from collections import Counter, defaultdict import random import numpy as np import torch import torch.nn as nn import torch.utils import torch.optim as optim import torch.optim.lr_scheduler as lr_scheduler from torch.utils.data import dataset, dataloader import torch.optim as optim import torch.nn.functional as F import time import math import sys # import data # - data_dir = '/data/home/makayser/qa_local/' token_files = data_dir + 'token/' torch.cuda.empty_cache() # ## Create a new dataloader to create sentence pairs class SiameseDataLoader(): def __init__(self, sentence_pairs, pad_val, batch_size=32): self.sentence_pairs = sentence_pairs self.batch_size = batch_size self.index = 0 self.pad_val = pad_val def shuffle(self): def srtfn(x): return x[:, -1] + random.randint(-5, 5) order = np.argsort(srtfn(self.sentence_pairs)) self.sentence_pairs = self.sentence_pairs[order] def __iter__(self): return self def fill_tensor(self, sentences, max_len): data = np.zeros((max_len, len(sentences)), dtype=np.long) data.fill(self.pad_val) for i, s in enumerate(sentences): start_idx = max_len - len(s) for j, p in enumerate(s): data[:,i][start_idx+j] = p return torch.LongTensor([data.tolist()]).cuda() def batch(self): return self.index//self.batch_size def __len__(self): return len(self.sentence_pairs)//self.batch_size def __next__(self): #how many examples to ananlyise for this round num = min(self.batch_size, len(self.sentence_pairs) - self.index) if num < 1: raise StopIteration # signals "the end" #collect the sentences max_len_a = 0 max_len_b = 0 first = [] second = [] labels = torch.LongTensor(num) for i in range(num): a, b, l, _ = self.sentence_pairs[self.index + i] if len(a) > max_len_a: max_len_a = len(a) if len(b) > max_len_b: max_len_b = len(b) first.append(a) second.append(b) labels[i] = l self.index += num first = self.fill_tensor(first, max_len_a) second = self.fill_tensor(second, max_len_b) return (first.cuda(), (first != self.pad_val).cuda(), second.cuda(), (second != self.pad_val).cuda(), labels.cuda() ) # + itos = pickle.load(open(f'{token_files}itos.pkl', 'rb')) stoi = defaultdict(lambda:0, {v:k for k,v in enumerate(itos)}) vocab_size = len(itos) pad_tok = stoi['_pad_'] sentence_pairs_train = np.load(f'{token_files}office_tok_train.npy') sentence_pairs_dev = np.load(f'{token_files}office_tok_dev.npy') sentence_pairs_test = np.load(f'{token_files}office_tok_test.npy') def print_sentence(s): sentence = "" for tok in s: sentence += " "+itos[tok] print(sentence) print_sentence(sentence_pairs_train[0][0]) print_sentence(sentence_pairs_train[0][1]) print_sentence(sentence_pairs_dev[0][0]) print_sentence(sentence_pairs_dev[0][1]) print_sentence(sentence_pairs_test[0][0]) print_sentence(sentence_pairs_test[0][1]) # - sentence_pairs_test[0][0] itos[7] # # Check the dataloader training_data = SiameseDataLoader(sentence_pairs_train, pad_tok) for batch in training_data: sentences = batch[0][0] masks = batch[1][0] for sentence, mask in zip(sentences.transpose(1,0), masks.transpose(1,0)): for tok in 
torch.masked_select(sentence, mask): print(itos[int(tok)], end=' ') print("") break # # Evaluate the masking and pooling code # + # sentences are in the form [sentence_length, batch_size, embedding_size] # masks are in the form [sentence_length, batch_size]) sentence_length = 5 batch_size = 3 embedding_size = 4 out = torch.zeros((batch_size, embedding_size)) sentences = torch.tensor([ [[1,1,1,1], [4,4,4,4], [7,7,7,7]], [[2,2,2,2], [5,5,5,5], [8,8,8,8]], [[0,0,0,0], [6,6,6,6], [9,9,9,9]], [[0,0,0,0], [0,0,0,0], [10,10,10,10]], [[0,0,0,0], [0,0,0,0], [0,0,0,0]] ]).float() #sentences.shape == [5, 3, 4] masks = torch.tensor([[[1,1,1], [1,1,1], [0,1,1], [0,0,1], [0,0,0]]]).byte() #masks.shape == [1, 5, 3] for i, sentence, mask in zip(range(batch_size), sentences.permute((1,0,2)), masks.squeeze().permute(1,0)): mask = mask.unsqueeze(1) selected = torch.masked_select(sentence, mask) selected = torch.reshape(selected, (-1, embedding_size)) print(selected) max = torch.max(selected, 0)[0] print(max) out[i] = torch.mean(selected, 0) print(out) # - # ## Siamese network # + class SiameseClassifier(nn.Module): def __init__(self, encoder, linear): super().__init__() self.encoder = encoder self.linear = linear def pool(self, x, masks, is_max): #x.shape = sentence length, batch size, embedding size #mask.shape = [1, sentence length, batch size] embedding_size = x.shape[2] batch_size = x.shape[1] out = torch.zeros((batch_size, embedding_size)).cuda() masks = masks.squeeze() #print(f'shapes: x {x.shape}, masks {masks.shape}, out {out.shape}') #shapes: x torch.Size([7, 32, 400]), mask torch.Size([7, 32]), out torch.Size([32, 400]) for i, hidden, mask in zip(range(batch_size), x.permute((1,0,2)), masks.permute(1,0)): mask = mask.unsqueeze(1) selected = torch.masked_select(hidden, mask) selected = torch.reshape(selected, (-1, embedding_size)) if is_max: max_pool = torch.max(selected, 0)[0] out[i] = max_pool else: mean_pool = torch.mean(selected, 0) out[i] = mean_pool return out def pool_outputs(self, output, mask): avgpool = self.pool(output, mask, False) maxpool = self.pool(output, mask, True) last = output[-1] return torch.cat([last, maxpool, avgpool], 1) def forward_once(self, input, mask): raw_outputs, outputs = self.encoder(input) out = self.pool_outputs(outputs[-1], mask) return out def forward(self, in1, in1_mask, in2, in2_mask): u = self.forward_once(in1, in1_mask) v = self.forward_once(in2, in2_mask) features = torch.cat((u, v, torch.abs(u-v), u*v), 1) out = self.linear(features) return out def reset(self): for c in self.children(): if hasattr(c, 'reset'): c.reset() class LinearClassifier(nn.Module): def __init__(self, layers, dropout): super().__init__() self.layers = nn.ModuleList([LinearBlock(layers[i], layers[i + 1], dropout) for i in range(len(layers) - 1)]) def forward(self, input): x = input for l in self.layers: l_x = l(x) x = F.relu(l_x) return l_x # - #these are the values used for the original LM em_sz, nh, nl = 300, 1150, 3 #400 bptt = 70 max_seq = bptt * 20 cats = 32 # ## Load our pretrained model then build the Siamese network from it # ## Training loop # This should be converted over to the fast.ai learner but I'm not sure how to do that yet. # + log_interval = 1000 criterion = nn.CrossEntropyLoss() # criterion = nn.CosineEmbeddingLoss() def evaluate(model, data_loader): # Turn on evaluation mode which disables dropout. model.eval() total_loss = 0. 
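# Running totals for the whole validation pass: the loss is accumulated weighted by the batch size,
# and accuracy is the fraction of examples whose argmax over the class logits matches the label.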
num_correct = 0 total = 0 for x in data_loader: a = x[0] a_mask = x[1] b = x[2] b_mask = x[3] l = x[4] if b.size(1) > 1450: print('rejected:', b.size()) continue model.reset() out = model(a.squeeze(), a_mask.squeeze(), b.squeeze(), b_mask.squeeze()) # squeezed the masks loss = criterion(out, l.squeeze()) total += l.size(0) total_loss += l.size(0) * loss.item() num_correct += np.sum(l.data.cpu().numpy() == np.argmax(out.data.cpu().numpy(), 1)) return (total_loss / total, num_correct / total) def train(model, data_loader, optimizer): # Turn on training mode which enables dropout. start_time = time.time() model.train() total_loss = 0. num_correct = 0 total = 0 for x in data_loader: a = x[0] a_mask = x[1] b = x[2] b_mask = x[3] l = x[4] optimizer.zero_grad() if b.size(1) > 1450: print('rejected:', b.size()) continue model.reset() #torch.Size([1, 7, 32]) out = model(a.squeeze(), a_mask.squeeze(), b.squeeze(), b_mask.squeeze()) #squeezed the masks loss = criterion(out, target=l.squeeze()) total += l.size(0) total_loss += l.size(0) * loss.item() num_correct += np.sum(l.data.cpu().numpy() == np.argmax(out.data.cpu().numpy(), 1)) loss.backward() optimizer.step() batch = data_loader.batch() if batch % log_interval == 0 and batch > 0: cur_loss = total_loss / total elapsed = time.time() - start_time batches = len(data_loader) ms = elapsed * 1000 / log_interval print(f'| {batch:5d}/{batches:5d} batches', end=" ") print(f'| ms/batch {ms:5.2f} | loss {cur_loss:5.4f} acc {num_correct / total}') #print(f'| ms/batch {ms:5.2f} | loss {cur_loss:5.4f}') total_loss = 0 total = 0 num_correct = 0 start_time = time.time() # - best_loss = 100 def training_loop(model, epochs, optimizer, scheduler = None): global best_loss for epoch in range(epochs): print(f'Start epoch {epoch:3d} training with lr ', end="") for g in optimizer.param_groups: print(g['lr'], end=" ") print("") training_data = SiameseDataLoader(sentence_pairs_train, pad_tok) training_data.shuffle() epoch_start_time = time.time() train(model, training_data, optimizer) if scheduler != None: scheduler.step() dev_data = SiameseDataLoader(sentence_pairs_dev, pad_tok) val_loss, accuracy = evaluate(model, dev_data) delta_t = (time.time() - epoch_start_time) print('-' * 89) print(f'| end of epoch {epoch:3d} | time: {delta_t:5.2f}s | valid loss {val_loss:5.2f} accuracy {accuracy} learning rates') for g in optimizer.param_groups: print(g['lr']) print('-' * 89) if val_loss < best_loss: best_loss = val_loss with open(f'./siamese_model{val_loss:0.2f}{accuracy:0.2f}.pt', 'wb') as f: torch.save(siamese_model, f) # + from scipy.signal import butter, filtfilt def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff / nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filtfilt(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = filtfilt(b, a, data) return y def plot_loss(losses): plt.semilogx(losses[:,0], losses[:,1]) plt.semilogx(losses[:,0], butter_lowpass_filtfilt(losses[:,1], 300, 5000)) plt.show() def find_lr(model, model_to_optim, data_loader): losses = [] model.train() criterion = nn.CrossEntropyLoss() lr = 0.00001 for x in data_loader: #a, b, l a = x[0]#; print(a.size(), a.squeeze().size()) a_m = x[1]#; print(a_m.size()) b = x[2]#; print(b.size(), b.squeeze().size()) b_m = x[3]#; print(b_m.size(), '\n*') l = x[4] if b.size(1) > 1450: #NOTE: bug where # torch.Size([1, 11, 32]) torch.Size([11, 32]) # torch.Size([1, 11, 32]) # torch.Size([1, 1568, 32]) torch.Size([1568, 
32]) # torch.Size([1, 1568, 32]) # Throws the following error: # The size of tensor a (1358) must match the size of tensor b (1568) at non-singleton dimension 0 print('rejected:', b.size()) continue optimizer = optim.SGD(model_to_optim.parameters(), lr=lr) #optimizer = optim.Adam(model_to_optim.parameters(), lr=lr) optimizer.zero_grad() model.reset() # a, b, l = torch.Tensor(a), torch.Tensor(b), torch.Tensor(l) #already Tensor objects out = model(a.squeeze(), a_m.squeeze(), b.squeeze(), b_m.squeeze()) loss = criterion(out, l.squeeze()) los_val = loss.item() losses.append((lr, los_val)) if los_val > 5: break loss.backward() optimizer.step() lr *= 1.05 losses = np.array(losses) #plot_loss(losses) return losses # + WIKI_LM = torch.load("oa_language_model_lr001_e20_v2.pt") dps = np.array([0.4,0.5,0.05,0.3,0.4])*0.4 WIKI_encoder = MultiBatchRNN(bptt, max_seq, vocab_size, em_sz, nh, nl, pad_tok, dropouti=dps[0], wdrop=dps[2], dropoute=dps[3], dropouth=dps[4]) WIKI_LM[0].state_dict() WIKI_encoder.load_state_dict(WIKI_LM[0].state_dict()) #SNLI_LM[0].state_dict() #2 pooled vectors, of 3 times the embedding size siamese_model = SiameseClassifier(WIKI_encoder, LinearClassifier(layers=[em_sz*3*4, nh, 32], dropout=0.4)).cuda() # + # %%time dev_data = SiameseDataLoader(sentence_pairs_dev, pad_tok) losses = find_lr(siamese_model, siamese_model, dev_data) plot_loss(np.array(losses)) # - for b in dev_data: print(b[0], '****\n\n') print(b[1], '****\n\n') print(b[2], '****\n\n') print(b[3], '****\n\n') print(b[4].size(0), '****\n\n') break # + for param in siamese_model.encoder.parameters(): param.requires_grad = False optimizer = optim.SGD(siamese_model.linear.parameters(), lr=0.01) training_loop(siamese_model, 1, optimizer) # - torch.save(siamese_model, "siamese_model_e0_lr01_v0.pt") # + # siamese_model = torch.load("./siamese_model0.500.81.pt") # + for param in siamese_model.encoder.parameters(): param.requires_grad = True for lr in [x/200+0.005 for x in range(20)]: optimizer = optim.SGD(siamese_model.parameters(), lr=lr) training_loop(siamese_model, 1, optimizer) # - torch.save(siamese_model, "siamese_model_e1_lr01_v0.pt") epochs = 10 optimizer = optim.SGD(siamese_model.parameters(), lr=0.1) scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, epochs, eta_min=0.001) training_loop(siamese_model, epochs, optimizer, scheduler) torch.save(siamese_model, "siamese_model_e10_lr1_v0.pt") entailed_a = [] entailed_b = [] contra_a = [] contra_b = [] netural_a = [] netural_b = [] for a,b,l,_ in sentence_pairs_dev: if l == 0: #entailed entailed_a.append(a) entailed_b.append(b) elif l == 1: contra_a.append(a) contra_b.append(b) else: netural_a.append(a) netural_b.append(b) # + def make_prediction_from_list(model, l): """ Encode a list of integers that represent a sequence of tokens. The purpose is to encode a sentence or phrase. Parameters ----------- model : fastai language model l : list list of integers, representing a sequence of tokens that you want to encode` """ arr = torch.tensor(np.expand_dims(np.array(l), -1)).cuda() model.reset() # language model is stateful, so you must reset upon each prediction hidden_states = model(arr)[-1][-1] # RNN Hidden Layer output is last output, and only need the last layer #return avg-pooling, max-pooling, and last hidden state return hidden_states.mean(0), hidden_states.max(0)[0], hidden_states[-1] def get_embeddings(encoder, list_list_int): """ Vectorize a list of sequences List[List[int]] using a fast.ai language model. 
Paramters --------- encoder : sentence_encoder list_list_int : List[List[int]] A list of sequences to encode Returns ------- tuple: (avg, mean, last) A tuple that returns the average-pooling, max-pooling over time steps as well as the last time step. """ n_rows = len(list_list_int) n_dim = encoder.nhid avgarr = np.empty((n_rows, n_dim)) maxarr = np.empty((n_rows, n_dim)) lastarr = np.empty((n_rows, n_dim)) for i in range(len(list_list_int)): avg_, max_, last_ = make_prediction_from_list(encoder, list_list_int[i]) avgarr[i,:] = avg_.data.cpu().numpy() maxarr[i,:] = max_.data.cpu().numpy() lastarr[i,:] = last_.data.cpu().numpy() return avgarr, maxarr, lastarr # - #siamese_model = torch.load('siamese_model0.500.81.pt') siamese_model.encoder.nhid = 400 entailed_a_vec = get_embeddings(siamese_model.encoder, entailed_a) entailed_b_vec = get_embeddings(siamese_model.encoder, entailed_b) # + import nmslib def create_nmslib_search_index(numpy_vectors): """Create search index using nmslib. Parameters ========== numpy_vectors : numpy.array The matrix of vectors Returns ======= nmslib object that has index of numpy_vectors """ search_index = nmslib.init(method='hnsw', space='cosinesimil') search_index.addDataPointBatch(numpy_vectors) search_index.createIndex({'post': 2}, print_progress=True) return search_index def percent_matching(query_vec, searchindex, k=10): num_found = 0 num_total = len(query_vec) for i in range(num_total): query = query_vec[i] idxs, dists = searchindex.knnQuery(query, k=k) if i in idxs: num_found += 1 return 100 * num_found/num_total def indexes_matching(query_vec, search_index, k=5): results = [] for q in query_vec: index_set = set() idxs, dists = search_index.knnQuery(q, k=k) results.append(idxs) return results def percent_found(results): num_found = 0 for i, result in enumerate(results): if i in result: num_found += 1 return (num_found/len(results)) def decode_sentence(sentence): result = "" for word_idx in sentence: result += f"{itos[word_idx]} " return result def show_similar(query_idx, matched): print(decode_sentence(entailed_a[query_idx])) for idx in matched: print(f"\t{decode_sentence(entailed_b[idx])}") print("") # - entailed_b_avg_searchindex = create_nmslib_search_index(entailed_b_vec[0]) entailed_b_max_searchindex = create_nmslib_search_index(entailed_b_vec[1]) entailed_b_last_searchindex = create_nmslib_search_index(entailed_b_vec[2]) # + results_avg = indexes_matching(entailed_a_vec[0], entailed_b_avg_searchindex, 3) results_max = indexes_matching(entailed_a_vec[1], entailed_b_max_searchindex, 3) results_last = indexes_matching(entailed_a_vec[2], entailed_b_last_searchindex, 3) num_found = 0 for i in range(len(results_avg)): if i in results_avg[i] or i in results_max[i] or i in results_last[i]: num_found += 1 print(num_found/len(results_avg)) # - results_combined = [] for a,b,c in zip(results_avg, results_max, results_last): results_combined.append(set(a).union(set(b).union(set(c)))) for i, r in enumerate(results_combined): show_similar(i, r) # + #siamese_model = torch.load('siamese_model0.500.81.pt') # - torch.save(siamese_model.encoder.state_dict(), "siamese_encoder_dict.pt") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import AnalysisFunctions as af import pandas as pd #defaultdict to use nested dictionaries from collections import defaultdict import matplotlib.pyplot as plt import 
statsmodels.api as sm import numpy as np import warnings warnings.filterwarnings('ignore') import dill from IPython.display import IFrame # - # # 1. Pre-processing: # - Choose the initialization time of the simulation, # - Organize the runoff and precipitation data in dataframes altogether with their respective observation intervals # - Calculate the quantiles on the distribution of ensemble forecasts # - Calculate the runoff meteorological medians out of the set of realizations # Dictionary initialization: df = af.dictionary() # Decide simulation initialization time: sim_start = '2018-10-26 12:00:00' # Some parameters for the basin: Verzasca_area = 186*1000.0**2 #m2 conv_factor = Verzasca_area/(1000.0*3600.0) # + # Runoff observation: open observation dataframe obs_pattern = '/home/ciccuz/hydro/prevah/runoff/2605.dat' obs_columns = ['year','month','day','hour','runoff'] obs_df = pd.DataFrame(pd.read_csv(obs_pattern, names=obs_columns, delim_whitespace=True, header=None)) obs_df['date'] = pd.to_datetime(obs_df[['year', 'month', 'day', 'hour']]) # Precipitation observation: open precipitation observation dataframe obtained by cosmo1+pluviometer #data concatenated series before the initialization of the model prec_obs_df = af.dictionary(pattern="/home/ciccuz/hydro/PrecObs/cosmo1_{simul_time}/{otherstuff}", folders_pattern = '/home/ciccuz/hydro/PrecObs/cosmo1_*') prec_obs_series= dill.load( open( "prec_obs/prec_obs_series.txt", "rb" ) ) # - # Extract from the dictionary the dataframe containing all the different realizations of the same event: #every ensemble member and parameter set combination for the runoff, every ensemble member for the precipitation. ens_df_prec = af.ensemble_df(df, sim_start, Verzasca_area, 'P-kor') ens_df_runoff = af.ensemble_df(df, sim_start, Verzasca_area,'RGES') # Calculate the quantiles for the variable chosen considering all the different realizations for the 120h ahead. 
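# The af.quantiles helper belongs to this project's AnalysisFunctions module; purely as an
# illustration of roughly what such a computation amounts to (per-lead-time quantiles across the
# realization columns; the column layout and the quantile levels 5/25/50/75/95% are assumptions),
# a plain-pandas sketch:
# +
def ensemble_quantiles_sketch(ens_df, qs=(0.05, 0.25, 0.5, 0.75, 0.95)):
    # quantiles across the realization columns, one row per lead time
    vals = ens_df.drop(columns='date')
    out = vals.quantile(list(qs), axis=1).T
    out.columns = [str(q) for q in qs]
    out['date'] = ens_df['date'].values
    return out
# -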
quant_prec = af.quantiles(ens_df_prec) quant_runoff = af.quantiles(ens_df_runoff) # + # Define the subset of runoff and precipitation observation based on quantiles dataframe date boundaries obs_indexes_runoff = obs_df.loc[obs_df.index[obs_df['date'] == str(quant_runoff.date[0])] | obs_df.index[obs_df['date'] == str(quant_runoff.date[119])]] obs_indexes_prec = prec_obs_series.loc[prec_obs_series.index[prec_obs_series['date'] == str(quant_runoff.date[0])] | prec_obs_series.index[prec_obs_series['date'] == str(quant_runoff.date[119])]] obs_subset = obs_df.loc[obs_indexes_runoff.index[0]:obs_indexes_runoff.index[1]] prec_obs_subset = prec_obs_series.loc[obs_indexes_prec.index[0]:obs_indexes_prec.index[1]] # + # Meteorological medians: # Select groups of realizations based on the same ensemble members: # dictionaries sorted by ensemble members rm_groups_runoff = af.ens_param_groups(ens_df_runoff)[0] # Quantiles dictionaries from above rm groups dictionary quant_rm_dict = lambda: defaultdict(quant_rm_dict) quant_rm_groups_runoff = quant_rm_dict() for rm in range(21): quant_rm_groups_runoff[rm] = af.quantiles(rm_groups_runoff[rm]) # Construct a dataframe having all the medians obtained for every group of realizations # associated to an ens member rm_medians = pd.DataFrame(index=range(120)) for rm in range(21): rm_medians[rm] = quant_rm_groups_runoff[rm]['0.5'] rm_medians['date'] = quant_rm_groups_runoff[rm]['date'] rm_medians.columns = ['rm00','rm01','rm02','rm03','rm04','rm05','rm06','rm07','rm08','rm09','rm10','rm11','rm12', 'rm13','rm14','rm15','rm16','rm17','rm18','rm19','rm20','date'] # Quantiles on rm medians: quant_rm_medians = af.quantiles(rm_medians) # - # ## 1.1. Spaghetti plot for the entire set of realizations: # One forecast initialization comprises 120 hourly values of forecast, the number of runoff forecasts for every initialization is given by the product of the ensemble members of the meteorological model (21 ens) and the set of hydrological parameters that have been made randomly change (25 pin), so 525. The precipitation forecasts are given by the 21 ensemble members composing the meteorological model. #Spaghetti plot: af.spaghetti_plot(ens_df_runoff, ens_df_prec, obs_subset, prec_obs_subset, sim_start) IFrame('latex/thesis/figs/uncertainty/spaghetti_all.png', width=900, height=600) # # 2. Separate different sources of uncertainty: # ## 2.1. Meteorological uncertainty: # To have a look at how the meteorological uncertainty impact on the forecast take the meteorological medians (i.e. the 21 medians around the 25 different sets of hydrological parameters) and look at the resulting spread of the simulation, compared to the total spread obtained by the 525 ensemble members. # Quantify the meteorological uncertainty by plotting the range of spread among all the 21 rm medians obtained: #af.hydrograph(quant_rm_medians, quant_prec, obs_subset, prec_obs_subset, sim_start, medians=True) af.comparison_meteo_hydrograph(quant_rm_medians, quant_runoff, quant_prec, obs_subset, prec_obs_subset, sim_start)[1] IFrame('latex/thesis/figs/uncertainty/hydrograph_meteo.png', width=900, height=600) # ## 2.2. Hydrological parameters uncertainty: # #ADD DESCRIPTION IFrame('latex/thesis/figs/uncertainty/hydrograph_hydro.png', width=900, height=600) # Look for different ensemble realizations how the hydrological spread behaves: detect three realizations having different behaviours and plot the corresponding spread around their medians. 
# #ADD DESCRIPTION # + rm_high = 4 rm_medium = 9 rm_low = 7 af.hydrograph_rms(rm_high, rm_medium, rm_low, ens_df_prec, quant_rm_groups_runoff, quant_runoff, obs_subset, prec_obs_subset, sim_start) # - IFrame('latex/thesis/figs/uncertainty/hydro_unc_spread.png', width=900, height=350) # Quantify the hydrological uncertainty considering the spread between the quantiles around every meteorological median: for every meteo median, in every hourly point, report the total spread range and the IQR and normalize it with the median value of discharge in that point. # # 3. Peak-box approach: # Application of the algorithm developed to construct multiple peak-boxes related to different peak events, as an extension of the algorithm developed in Zappa et al. (2013). For more details take a look at the [dedicated repository](https://github.com/agiord/peakbox) import individual_scripts.peakbox_v5 as pbk pbk.peak_box_multipeaks(rm_medians1, obs_subset1, sim_start) IFrame('latex/thesis/figs/Peakbox/new/pb1.png', width=750, height=600) # ## 3.1 Peak-box evaluation # ### - Forecast sharpness IFrame('latex/thesis/figs/Peakbox/sharp_verif/sharpness_v4_1.png', width=950, height=500) # ### - Peak median verification IFrame('latex/thesis/figs/Peakbox/sharp_verif/verification_v4_2.png', width=950, height=500) # ### - Events detection IFrame('latex/thesis/figs/Peakbox/sharp_verif/hitmiss_v2.png', width=950, height=380) # # 4. Cluster analysis import individual_scripts.cluster_funct as cl # Perform a hierarchical agglomerative complete-linkage cluster analysis on the meteorological precipitation ensemble members and extract a restricted number of representative members. Goal: check whether the representative members extracted from the few clusters (3-5-7) we obtain are able to give similar/worst/better forecasts than by using the entire set of ensemble forecasts (21), and see which part of the spread are able to cover. 
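# For orientation, a self-contained sketch of the underlying idea using scipy directly:
# complete-linkage clustering of the member precipitation time series, then one
# representative member per cluster (here taken as the member closest to its cluster
# mean; cl.clustered_RM may use a different selection rule, and this helper is
# hypothetical, not part of individual_scripts.cluster_funct):

# +
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def representative_members_sketch(prec_df, n_clusters=3):
    """prec_df: one column per ensemble member (no 'date'), one row per lead time."""
    members = prec_df.T.values                              # shape (n_members, n_leadtimes)
    Z = linkage(members, method='complete', metric='euclidean')
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    reps = []
    for c in range(1, n_clusters + 1):
        idx = np.where(labels == c)[0]
        centroid = members[idx].mean(axis=0)
        reps.append(idx[np.linalg.norm(members[idx] - centroid, axis=1).argmin()])
    return sorted(int(r) for r in reps)

# e.g. representative_members_sketch(ens_df_prec.drop('date', axis=1), n_clusters=3)
# -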
#Plot the dendrogram: visualization of the clustering algorithm applied cl.clustered_dendrogram(ens_df_prec.drop('date', axis=1), sim_start) plt.show() IFrame('latex/thesis/figs/cluster/runoff/dendrogram.png', width=950, height=550) # + #Choose the number of clusters (3 or 5): Nclusters = 3 #extract the representative members RM = cl.clustered_RM(ens_df_prec.drop('date', axis=1), sim_start, Nclusters) #extract the sub-dataframe for prec and runoff forecasts containing only the members related to the new extracted representative members: clust_ens_df_prec = pd.DataFrame() clust_ens_df_runoff = pd.DataFrame() for rm_index in range(Nclusters): clust_ens_df_prec = pd.concat([clust_ens_df_prec, ens_df_prec.loc[:, ens_df_prec.columns == f'rm{RM[rm_index]:02d}_pin01']], axis=1, sort=False) for pin in range(1,26): clust_ens_df_runoff = pd.concat([clust_ens_df_runoff, ens_df_runoff.loc[:, ens_df_runoff.columns == f'rm{RM[rm_index]:02d}_pin{pin:02d}']], axis=1, sort=False) clust_ens_df_prec = pd.concat([clust_ens_df_prec, ens_df_prec.date], axis=1) clust_ens_df_runoff = pd.concat([clust_ens_df_runoff, ens_df_runoff.date], axis=1) # Cluster quantiles: clust_quant_prec = af.quantiles(clust_ens_df_prec) clust_quant_runoff = af.quantiles(clust_ens_df_runoff) # - # Hydrograph plot of clustered forecasts cl.cluster_hydrograph(clust_quant_runoff, clust_quant_prec, quant_runoff, quant_prec, obs_subset, prec_obs_subset, sim_start, Nclusters=Nclusters)[2] IFrame('latex/thesis/figs/cluster/runoff/cluster_hydrograph_3_NEW.png', width=950, height=550) IFrame('latex/thesis/figs/cluster/runoff/cluster_hydrograph_5_NEW.png', width=950, height=550) IFrame('latex/thesis/figs/cluster/runoff/cluster_hydrograph_7_NEW.png', width=950, height=550) # ## 4.1 Clustering performance IFrame('latex/thesis/figs/cluster/cluster_coverage_wIntervals.png', width=700, height=450) IFrame('latex/thesis/figs/cluster/runoff/ROCa_runoff_TOTvsCLUSTERS.png', width=850, height=650) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from gensim.models import Word2Vec import numpy as np import pandas as pd import os import sys import re sys.path.insert(0,'/workspace/lang-detect/') from src import utils # - data_path = "/workspace/lang-detect/txt/" dir_list = os.listdir(data_path) print(dir_list) def read_data(filepath): with open(filepath, "r") as f: lines = f.readlines() if len(lines) > 1: return lines[1].strip("\n") return None # + # %%time data, labels = [], [] for dir_name in dir_list: files_list = os.listdir(data_path + dir_name) for f in files_list: sent = read_data(data_path + dir_name + "/" + f) if sent: data.append(sent) labels.append(dir_name) test_data, test_labels = [], [] with open("../../europarl.test", "r") as f: for line in f: line = line.split() test_data.append(" ".join(line[1:])) test_labels.append(line[0]) print(test_data[0], test_labels[0]) # - print("Length of train data", len(data)) print("Length of test data", len(test_data)) # + # %%time for i in range(len(data)): data[i] = utils.preprocess(data[i]) for i in range(len(test_data)): test_data[i] = utils.preprocess(data[i]) # - print(len(data),len(test_data)) # + sentences, test_sentences = [], [] for i in range(len(data)): x = utils.create_n_gram(data[i], 4) sentences.append(x.split()) for i in range(len(test_data)): x = utils.create_n_gram(data[i], 4) test_sentences.append(x.split()) # + # %%time # 
Create validation set of random 20000 sentences rand_indices = np.random.choice(len(sentences), 25000) rand_indices = list(set(rand_indices))[:20000] print(len(rand_indices)) valid_sentences, valid_labels = [], [] for ind in rand_indices: valid_sentences.append(sentences[ind]) valid_labels.append(labels[ind]) train_sentences = [j for i,j in enumerate(sentences) if i not in rand_indices] train_labels = [j for i,j in enumerate(labels) if i not in rand_indices] print(len(train_sentences), len(train_labels), len(valid_sentences), len(valid_labels)) # + import pickle with open("../data/train_sentences.npy","wb") as f: pickle.dump(train_sentences, f) with open("../data/train_labels.npy","wb") as f: pickle.dump(train_labels, f) with open("../data/valid_sentences.npy","wb") as f: pickle.dump(valid_sentences, f) with open("../data/valid_labels.npy","wb") as f: pickle.dump(valid_labels, f) # - len(train_sentences) # ### **Check Point** # + from gensim.models import Word2Vec import numpy as np import pandas as pd import os import sys import re sys.path.insert(0,'/workspace/lang-detect/') from src import utils import pickle with open("../data/train_sentences.npy","r") as f: train_sentences = pickle.load(f) with open("../data/train_labels.npy","r") as f: train_labels = pickle.load(f) with open("../data/valid_sentences.npy","r") as f: valid_sentences = pickle.load(f) with open("../data/valid_labels.npy","r") as f: valid_labels = pickle.load(f) print(len(train_sentences), len(train_labels), len(valid_sentences), len(valid_labels)) # + # Create word_to_index and index_to_word mapping PAD_ID = 0 UNK_ID = 1 word_to_index, index_to_word, word_freq = {}, {}, {} index = 2 for sent in train_sentences: for token in sent: if token not in word_to_index: word_freq[token] = 1 word_to_index[token] = index index_to_word[index] = token index += 1 else: word_freq[token] += 1 print("Vocabulary size", index-1) # - classes = {} for ind, c in enumerate(list(set(train_labels))): classes[c] = ind print(classes) less_freq = 0 for k,v in word_freq.items(): if v <= 1: less_freq += 1 print(less_freq) # + # Change words to their index def get_indexed_data(sentences, data_type=None): data_x = [] for sent in sentences: x = [] for token in sent: if token in word_to_index: if (data_type == "train") and (word_freq[token] == 1): x.append(UNK_ID) else: x.append(word_to_index[token]) else: x.append(UNK_ID) data_x.append(x) return data_x def get_indexed_label(labels): data_y = [classes[x] for x in labels] return data_y train_x = get_indexed_data(train_sentences, "train") valid_x = get_indexed_data(valid_sentences) test_x = get_indexed_data(test_sentences) train_y = get_indexed_label(train_labels) valid_y = get_indexed_label(valid_labels) test_y = get_indexed_label(test_labels) print(len(train_x),len(valid_x),len(test_x)) print(len(train_y),len(valid_y),len(test_y)) # + # Check distribution of length of sentences def get_dist(x): lens = {} for sent in x: if len(sent) in lens: lens[len(sent)] += 1 else: lens[len(sent)] = 1 return lens train_lens = get_dist(train_x) valid_lens = get_dist(valid_x) test_lens = get_dist(test_x) # - import matplotlib.pyplot as plt # %matplotlib inline # + fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15,8)) plt.subplot(221) plt.bar(train_lens.keys(), train_lens.values()) plt.title("Training Set Distribution") plt.xlabel('Length of Sentences') plt.ylabel('No. 
of Sentences') plt.subplot(222) plt.bar(valid_lens.keys(), valid_lens.values()) plt.title("Validation Set Distribution") plt.xlabel('Length of Sentences') plt.ylabel('No. of Sentences') plt.subplot(223) plt.bar(test_lens.keys(), test_lens.values()) plt.title("Testing Set Distribution") plt.xlabel('Length of Sentences') plt.ylabel('No. of Sentences') plt.figure(figsize=(10,5)) fig.tight_layout() plt.show() # - # Training, Validation and Testing set have similar distribution. Also most of the sentences have length less than 250. # + # in train check number of sentences with length more than 100, 200, 300 def check_sent_length_buckets(len_dists): n_100, n_200, n_250, n_300, n_500 = 0, 0, 0, 0, 0 for k,v in len_dists.items(): if k >= 500: n_500 += v if k >= 300: n_300 += v if k >= 200: n_200 += v if k >= 250: n_250 += v if k >= 100: n_100 += v print(n_100, n_200, n_250, n_300, n_500) check_sent_length_buckets(train_lens) check_sent_length_buckets(valid_lens) check_sent_length_buckets(test_lens) # - # **There are some very long sentences but I don't want to loose information. What I am going to do is, truncate the sentences with length more than 250 and create a new sentence from index 251 to 500 and so on. This will be done only for train. Since I have included 4-gram tokens only in this data, some noise might get added at the start and end of new sentences, but the model should not get affected by that.** # + train_x_trunc, train_y_trunc = [], [] max_len = 250 for ind, sent in enumerate(train_x): while(len(sent) > max_len): train_x_trunc.append(sent[:max_len]) train_y_trunc.append(train_y[ind]) sent = sent[max_len:] train_x_trunc.append(sent) train_y_trunc.append(train_y[ind]) print(len(train_x_trunc), len(train_y_trunc)) # - # ##### **Create batches** train_lens = get_dist(train_x) buckets = [10*x for x in range(1,26)] buckets_data_sum = {} for k, v in train_lens.items(): for x in buckets: if k <= x: if x in buckets_data_sum: buckets_data_sum[x] += v else: buckets_data_sum[x] = v break buckets_data_sum sum(buckets_data_sum.values()) # + from collections import defaultdict # Create batch data def create_batches(data_x, data_y): batch_data = defaultdict(list) batch_label = defaultdict(list) for ind, sent in enumerate(data_x): for x in buckets: if len(sent) <= x: sent += [PAD_ID]*(x - len(sent)) batch_data[x].append(sent) batch_label[x].append(data_y[ind]) break return batch_data, batch_label train_batch_data, train_batch_labels = create_batches(train_x_trunc, train_y_trunc) valid_batch_data, valid_batch_labels = create_batches(valid_x, valid_y) # - train_batch_data[10][:10] # + # Convert to numpy array each key def to_numpy_array(batch_data, batch_label): for key in batch_data: batch_data[key] = np.array(batch_data[key]) batch_label[key] = np.array(batch_label[key]) return batch_data, batch_label train_batch_data, train_batch_labels = to_numpy_array(train_batch_data, train_batch_labels) valid_batch_data, valid_batch_labels = to_numpy_array(valid_batch_data, valid_batch_labels) test_data, test_labels = np.array(test_x), np.array(test_y) # + import pickle train = {'data':train_batch_data, 'labels':train_batch_labels} valid = {'data':valid_batch_data, 'labels':valid_batch_labels} test = {'data':test_data, 'labels':test_labels} with open('../data/train_batch.npy', 'w') as f: pickle.dump(train, f) with open('../data/valid_batch.npy', 'w') as f: pickle.dump(valid, f) with open('../data/test_data.npy', 'w') as f: pickle.dump(test, f) # - # Create embedding matrix wordvec = 
Word2Vec.load("../word-vectors/word2vec") wordvec.wv.most_similar("the") wordvec.wv.get_vector("the") # Tokens are indexed from 2. '0' is for the padding and '1' for unknown token. Vector for 0 will be zero-vector and for 1, a random vector. vocab_size, embed_size = wordvec.wv.vectors.shape sorted_word_to_index = sorted(zip(word_to_index.keys(), word_to_index.values()), key=lambda x: x[1]) sorted_word_to_index[0] # + embedding_matrix = [] embedding_matrix.append([0.]*embed_size) embedding_matrix.append(np.random.uniform(-3, 3, embed_size)) for key, index in sorted_word_to_index: embedding_matrix.append(wordvec.wv.get_vector(key)) # - print(len(embedding_matrix), len(embedding_matrix[0])) # + embedding_matrix = np.array(embedding_matrix) import pickle with open('../data/embedding_matrix.npy', 'w') as f: pickle.dump(embedding_matrix, f) # - with open('../data/embedding_matrix.npy', 'r') as f: ld = pickle.load(f) ld.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:retropy] # language: python # name: conda-env-retropy-py # --- # %run retropy.ipynb import framework.etfs_high_yield as hy # https://www.bdcinvestor.com/bdc-etfs # http://etfdb.com/compare/dividend-yield/ # # SRET # - "global" (but actually mostly US) REIT, with highest yield and lowest volatility # - ~6% net yield # - fairly well performing compared to SPY and better than VNQ # - only $100M is AUM # - thoughts: good. looks good, but short has short history, and low AUM # # # DRW # - international real-estate related companies # - did better than i_reit, but almost same NTR as VNQI # - has a 5+ rolling net-yield (passes the bar), but that is offset with higher price churn, giving same NTR and VNQI # - yield is sporadic, sec-yield is 3+% while rolling 12m-yield is ~7-10% gross # - It's (and VNQI) is also in general very similar to VXUS, but gave more yield # - small AUM, 100MM # - thoughts: TBD. generally ok, but sporadic yield is a negative # # # KBWY # - small cap REITs, weighted by yield # - NTR similar to VNQ, but fluctuates # - 12m and sec yield is 7+% gross, 5.5% net # - yield is stable and has grown from 5% to 7+% since 2015 # - being comprable to VNQ, indicates a very nasty 70-80% draw-down during 2008 # - thoughts: good. nice stable yield, comprable NTR to VNQ # # # PSK # - prefered securities # - similar to the actively managed FPE, almost the same NTR, with more history, slightly less draw-down risk # - moderatly comprable to JNK, which indicates a major crash in 2008 # - has 700MM AUM vs 4B in FPE, but has lower ER 0.45% than FPE 0.85% # - ~4.2% net yield # - thoughts: good. generaly similar to FPE (which is favorable) # # # PGF # - prefered securities in financials sector # - vastly outperformed XLF (financials ETF), in NTR and in portfolio flow # - yield is stable but has gradually declined to 5+% gross (4.0% net) currently # - thoughts: no. 
yield isn't big enough and sector tilt are both turndowns # # # PGX # - similar to PGF, but without the sector tilt # - performed better than FPE # - thoughts: TBD # # # PSP # - publicly listed private equity # - global, ~40% US, has a similar PR to VXUS in recent times, but crashed much harder than i_ac in 2008 # - expensive at ~2% fee (reflecting the underlying holdings) # - 12m rolling yield is all over the place, ranging from 3-4% to 10-12% gross (currently at 10% gross) # - compared to SPY, it tends to crash much more during bad times, and doesn't recover more to compensate # - thoughts: no. the spordic yield, and excess crashes are a deterant # # # PCEF # - closed end funds # - yield is mostly steady at 5.5-6% net # - price is in a declining trend, and so is the net income, which declined from 600 to 400 in 7 years # - thoughts: no. declining income isn't great # # # YYY # - high yield closed end funds # - yield is high at 6+% net # - canibalizing on price return, and way underperforming vs SPY # - thoughts: no, returns are too low to justify such a yield # # # ALTY # - a crazy mix of assets # - 5.5-6% net yield, general ok pearformance # - only $15M in AUM !! # - thoughts: TBD. interesting, but AUM is too low. # # # MDIV # - a mix of assets # - ~5% net yield # - price is somewhat declining, with big drawdowns of 20%+ # - good AUM at 700MM # - toughs: TBD. price decline and dd are serious # # # # HYEM # - high-yield emerging-markets bonds # - ~4.5% net yield (even 5.5% net sec-yield) # - price is declining, but that seems inline with EMB and VWO decline # - comprable flow to EMB, less risk than with VWO # - thoughts: yes. consider overall EM exposure. # # # YLD # - .. # # # PGHY / SHYG # - .. # # # LMBS # - .. # # RIGS # - .. # + # show_comp(DRW, VNQI, extra=i_ac) # show_comp(PSK, FPE, extra=JNK) # show_comp(PSP, SPY) # show_comp(PSP, VXUS, i_ac) # show_comp(KBWY, VNQ) # show_comp(HYEM, EMB, VWO) # + #bdcs_fit = 'ARCC@Y:43.7|PSEC@Y:31.2|AINV@Y:22.2|OHAI@Y:2.9 = BDCS - fit' #_all = hy.all _all = hy.select port = "|".join(_all) all = get(_all, trim=True, start=2014, mode="NTR") # - # ## Analyze mix of ETFs x = get(port, start=2010) #show(get_income(x, per_month=True, smooth=4), ta=False, log=False) show_rr_yield(x, SPY, cjb) res = [get(port, start=year) for year in range(2000, 2018)] show(res, trim=False, legend=False) # ## Analyze assets analyze_assets(*all, start=2015) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # A simple maze # # This notebook covers the implementation of a simple _maze_ (or _grid-world_) to facilitate experimenting with basic dynamic programming algorithms, such as value iteration and policy iteration. # # The building blocks defined in this notebook can be reused in another notebook using module importer from __[ipynb](https://github.com/ipython/ipynb)__ package, see __[here](https://ipynb.readthedocs.io/en/latest/)__. Import is best done _definitions only_ so that this notebook is not exceuted and output of the notebook does not get cluttered with examples provided here. 
For instance: # # ```python # from ipynb.fs.defs._filename_of_this_notebook_ import ( # Maze, Movement, # plot_maze, plot_policy_actions, plot_state_rewards, plot_state_values # ) # ``` # # With this, import statements, class and function definitions as well as constants defined in ALL_CAPS will get imported. # + import math import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from matplotlib.colors import LinearSegmentedColormap import matplotlib.ticker as ticker import seaborn as sns # - # ## The maze # # ### Maze class # # We define a simple `Maze` class to contain the structure of the grid world. In addition to the structure of the maze, such as size, walls and terminal states, we consider the rewards received when exploring the maze to be a part of the maze definition. Hence the class defines the reward received when entering a state, and takes into account discounting and living cost. # # State values (determined by some algorithm, such as value iteration) or resulting policy are not considered to be a part of the maze, and are managed in separate data structures outside the `Maze` class. # # The structure of the maze is internally represented as Numpy [ndarrays](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) and states can be indexed with `(x,y)` tuples, starting from top left corner as `(0,0)`. # # `Maze` constructor takes a `maze_config` dictionary that defines the maze structure as an argument and stores the needed internal structures. # # In addition, we give the value of discounting parameter $\gamma$ and a living cost parameter as arguments to Maze class constructor. # # See below for example use. class Maze: def __init__(self, maze_config, *, gamma=1, living_cost=0): self.gamma = gamma self.living_cost = living_cost self.size = maze_config['size'] self.walls = np.full(self.size, False, dtype=bool) self.walls[tuple(zip(*maze_config['walls']))] = True # Note: default reward for entering a state is 0 self.rewards = np.full(self.size, 0, dtype=np.float32) self.rewards[self.walls] = np.nan for key in maze_config['rewards']: self.rewards[key] = maze_config['rewards'][key] self.terminal = np.full(self.size, False, dtype=bool) self.terminal[tuple(zip(*maze_config['terminal_states']))] = True self.states = {} self.grid_index = [] for index, wall in np.ndenumerate(self.walls): if not wall: self.grid_index.append(index) self.states[index] = index self.state_count = len(self.grid_index) def get_iterator(self, param): return MazeIterator(self, param) def get_as_list(self, param): return [ r for r in self.get_iterator(param) ] def is_valid_state(self, state): # short circuit order return (0 <= state[0] <= self.size[0] - 1 and 0 <= state[1] <= self.size[1] - 1) and (not self.walls[state]) # ### Maze iterator # # In addition, we define an __[iterator](https://wiki.python.org/moin/Iterator)__ to enumerate the maze grid states that the agent can enter (i.e. the ones that do not contain walls). Even though we will be doing a lot of iterating, this is not only for convenience, but also to help avoid bugs potentially difficult to trace: To guarantee, that we always consider the states in same order, even when processing data structures external to the maze. 
# # The iterator can be used to iterate maze properties with string references to `Maze` class properties, e.g.: # ``` # for state in maze.get_iterator("states"): # # each state # # print(list(maze.get_iterator('rewards'))) # # ``` # Or to iterate external data structures in the order defined by `Maze.grid_index`: # ``` # initial_values = { state:0 for state in maze.get_iterator("states") } # list(maze.get_iterator(initial_values)) # ``` # Or even # ``` # maze.get_as_list(initial_values) # ``` class MazeIterator: def __init__(self, maze, param): self.grid_index = maze.grid_index self.max = len(self.grid_index) if type(param) is str: self.param = getattr(maze, param) else: self.param = param self.n = 0 def __iter__(self): return self def __next__(self): if self.n >= self.max: raise StopIteration res = self.param[self.grid_index[self.n]] self.n += 1 return res # ## Movement within the maze # # We collect the logic of agent's movement within the grid world into `Movement` utility class. In addition, together with `Maze`, `Movement`class enables determining the transition probabilities for when we analyze the maze as a Markov Decision Process (MDP). # # The agent may attempt to move to one of four directions: `NORTH`, `EAST`, `SOUTH` and `WEST`. # # Movement is noisy. Attempt to move to certain direction (e.g. _north_) may result in ending up left of the intended direction (e.g. _west_ if intended direction was _north_), or right of the direction (e.g. _east_). # # For action _north_, `movement.get_direction_probs(action)` would return directions _west_, _north_ and _east_, with probabilities 0.1, 0.8 and 0.1, respectively, assuming $noise = 0.2$. # # If there is a wall blocking the move or the move ends up outside the maze grid, `move_from` returns to current state. Otherwise, the new state in the move direction is returned. To see where a move from current state to given direction would end up, agent would perform # # ``` # s_prime = movement.move_from(current_state, move_direction) # ``` # # Utilizing the class, an agent can analyze the movement directions and find probabilities associated with follow up states when trying to perform a particular `action`. For instance, to analyze an action available in a state, an agent could perform: # # ``` # for move_direction, p_move in movement.get_direction_probs(action): # # s_prime = movement.move_from(state, move_direction) # reward = maze.living_cost + maze.rewards[s_prime] # s_prime_value = state_values[s_prime] # ... 
# ``` # class Movement: NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 actions = [ NORTH, EAST, SOUTH, WEST ] action_names = [ "NORTH", "EAST", "SOUTH", "WEST" ] direction_arrows = [ '\u2191','\u2192','\u2193','\u2190', '\u25aa', ''] # up, right, down, left, small square, empty def __init__(self, maze, *, noise=0.2): self.maze = maze self.NOISE = noise self.noisy_moves = [ self._adjust_left, self._adjust_none, self._adjust_right ] self.noisy_move_probs = [self.NOISE / 2, 1 - self.NOISE, self.NOISE / 2] # left, straight, right def _adjust_left(cls, direction): return (cls.actions.index(direction) + len(cls.actions) - 1) % len(cls.actions) def _adjust_none(cls, direction): return direction def _adjust_right(cls, direction): return (cls.actions.index(direction) + 1) % len(cls.actions) def get_direction_probs(self, action): dirs = [] for j, _ in enumerate(self.noisy_moves): p_move = self.noisy_move_probs[j] move_direction = self.noisy_moves[j](action) dirs.append((move_direction, p_move)) return dirs def _get_move_target(self, from_state, action): if action == Movement.NORTH: target_state = (from_state[0] - 1, from_state[1]) elif action == Movement.EAST: target_state = (from_state[0], from_state[1] + 1) elif action == Movement.SOUTH: target_state = (from_state[0] + 1, from_state[1]) elif action == Movement.WEST: target_state = (from_state[0], from_state[1] - 1) return target_state def move_from(self, from_state, action): target_state = self._get_move_target(from_state, action) # check if trying to move outside the maze or against a wall # and in that case stay in current position if self.maze.is_valid_state(target_state): return target_state else: return from_state # ## Visualizing the maze # # We define helper functions for visualizing the maze. First, `plot_maze_grid` is the utility function to create a __[Seaborn heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html?highlight=heatmap#seaborn.heatmap)__ based visualization of the maze and is used by the other helper functions providing different annotations and colormaps for customization. # # To annotate the grid with rewards, we use `plot_state_rewards`. To show current state values on the grid, we pass the state values to `plot_state_values`. Similarly, `plot_policy_actions` allows for plotting direction arrows corresponding to a policy given as argument on the maze grid. # # Finally, `plot_maze` will allow for plotting current state values and policy actions side by side in two subplots. # # We define two colormaps as `LinearSegmentedColormap` objects that are instantiated by giving a list of preselected colors. First colormap, `CM_VALUES` is used for representing the walls and terminal states of the maze. Here it is assumed that entering good terminal states provides a positive (colored light green) and entering bad terminal states a negative reward (colored light red) . The other, `CM_ACTIONS` is used as a background to gently differientate policy actions. The colormaps are illustrated below. # # Some rudimentary scaling of annotation texts is done to facilitate plotting of diffent size grids. 
# + # a figure size constant that would create a 1280 x 720 image with matplotlib default density of 100 dpi FIG_SIZE=(12.8, 7.2) FIG_DPI=100 # + VALUE_COLORS = ['#f7f7f7','#a1d76a', '#e9a3c9', '#999999'] # NONE, GOOD, BAD, WALL CM_VALUES = LinearSegmentedColormap.from_list("value", VALUE_COLORS, N=4) ACTION_COLORS = ['#fcfefe', '#edf7f8', '#d5e4f2', '#dfedf5','#91bfdb','#999999'] # N, E, S, W, TERMINAL, WALL CM_ACTIONS = LinearSegmentedColormap.from_list("action", ACTION_COLORS, N=6) fig = plt.figure(figsize=tuple(i/1.6 for i in FIG_SIZE), dpi=FIG_DPI) ax = fig.add_subplot(2, 1, 1) sns.heatmap([list(range(0,4))], fmt='', cmap=CM_VALUES, \ linewidths=0, rasterized=False, \ cbar=False, ax=ax) ax = fig.add_subplot(2, 1, 2) sns.heatmap([list(range(0,6))], fmt='', cmap=CM_ACTIONS, \ linewidths=0, rasterized=False, \ cbar=False, ax=ax) # + def scale_text(maze_size): return 18 / max(1, max(maze_size) / 6) def scale_arrows(maze_size): return 32 / max(1, max(maze_size) / 8) # - def plot_maze_grid(values, annotations, *, annot_kws={'size': 16}, cm, ax=None): if not ax: plt.figure(figsize=FIG_SIZE, dpi=FIG_DPI) ax = sns.heatmap(values, annot=annotations, annot_kws=annot_kws, fmt='', cmap=cm, \ square=True, linewidths=0.01, linecolor='#5f5f5f', rasterized=False, \ cbar=False, ax=ax) ax.tick_params(left=False, bottom=False) ax.tick_params(labelleft=False, labelbottom=False) sns.despine(ax=ax, top=False, right=False, left=False, bottom=False) return ax def plot_state_rewards(maze, *, ax=None, cm=CM_VALUES): NONE = 0 GOOD = 1 BAD = 2 WALL = 3 cells = np.full(maze.size, NONE) cells[maze.rewards > 0] = GOOD cells[maze.rewards < 0] = BAD cells[maze.walls] = WALL annot_text = np.full(maze.size, '', dtype=object) keys = maze.get_as_list("states") for key in keys: r = maze.rewards[key] annot_text[key] = f"{r + maze.living_cost:.3f}" annot_kws={'size': scale_text(maze.size)} plot_maze_grid(cells, annot_text, annot_kws=annot_kws, cm=cm, ax=ax) def plot_state_values(maze, state_values, *, ax=None, cm=CM_VALUES): NONE = 0 GOOD = 1 BAD = 2 WALL = 3 cells = np.full(maze.size, NONE) cells[maze.rewards > 0] = GOOD cells[maze.rewards < 0] = BAD cells[maze.walls] = WALL annot_text = np.full(maze.size, '', dtype=object) for a, key in enumerate(state_values): val = state_values[key] annot_text[key] = f"{val:.3f}" annot_kws={'size': scale_text(maze.size)} plot_maze_grid(cells, annot_text, annot_kws=annot_kws, cm=cm, ax=ax) def plot_policy_actions(maze, policy, *, ax=None, cm=CM_ACTIONS): # 0...3 = NORTH, EAST, ... TERMINAL = 4 WALL = 5 cells = np.full(maze.size, WALL) for a, key in enumerate(policy): cells[key] = policy[key] cells[maze.terminal] = TERMINAL ax = plot_maze_grid(cells, False, cm=cm, ax=ax) arrow_size = scale_arrows(maze.size) for index, c in np.ndenumerate(cells): arrow = Movement.direction_arrows[c] ax.text(index[1] + 0.5, index[0] + 0.5, arrow, size=arrow_size, ha='center', va='center') def plot_maze(maze, state_values, policy): plt.rcParams.update({'figure.max_open_warning': 0}) fig = plt.figure(figsize=FIG_SIZE, dpi=FIG_DPI) ax = fig.add_subplot(1, 2, 1) plot_state_values(maze, state_values, ax=ax) ax = fig.add_subplot(1, 2, 2) plot_policy_actions(maze, policy, ax=ax) fig.tight_layout() # ## Example use: Canonical maze # # To instantiate a `maze`, we define a `maze_config` dictionary, that defines the size of the maze, location of walls within the grid, the terminal states and rewards associated when moving to the state in question. 
# # To illustrate, we create the "canonical maze" used commonly in reinforcement learning examples and plot the rewards associated with states. maze_config = { 'size': (3, 4), 'walls': [(1,1)], 'terminal_states': [(0,3), (1,3)], 'rewards': { (0,3): 1, (1,3): -1 } } maze = Maze(maze_config, gamma=1, living_cost = 0) plot_state_rewards(maze, ax=None) # ## Second example: a slightly more complicated maze # # As another example, we create a slighly more complicated maze with more walls, but two terminal states with rewards -1 and 1 as before. We also add a living cost of -0.04 for each step. maze_config = { 'size': (5, 7), 'walls': [(1,1), (1,2), (1,3), (2,1), (3,1), (3,3), (3,4), (3,5), (2,5), (3,5) ], 'terminal_states': [(2,3), (1,5)], 'rewards': { (2,3): 1, (1,5): -1 } } maze = Maze(maze_config, gamma=0.9, living_cost = -0.04) plot_state_rewards(maze, ax=None) # ## Example: State values displayed on maze grid # # We initialize the state values to random values and show the values on the maze grid. Note that we consider value of terminal states to be zero as there will be no future rewards available after entering a terminal state. # + random_values = { state: 0 if maze.terminal[state] else np.random.rand() for state in maze.get_iterator("states") } display(random_values) plot_state_values(maze, random_values, ax=None) # - # ## Example: Policy actions displayed on maze grid # # As the final example, we create a random policy and visualize the policy on the maze grid # + policy = {} for state in maze.get_iterator("states"): if maze.terminal[state]: continue policy[state] = np.random.choice(Movement.actions) display(policy) plot_policy_actions(maze, policy) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + language="javascript" # # // Code to dynamically generate table of contents at the top of the HTML file # var tocEntries = ['
      ']; # var anchors = $('a.anchor-link'); # var headingTypes = $(anchors).parent().map(function() { return $(this).prop('tagName')}); # var headingTexts = $(anchors).parent().map(function() { return $(this).text()}); # var subList = false; # # $.each(anchors, function(i, anch) { # var hType = headingTypes[i]; # var hText = headingTexts[i]; # hText = hText.substr(0, hText.length - 1); # if (hType == 'H2') { # if (subList) { # tocEntries.push('
    ') # subList = false; # } # tocEntries.push('
  • ' + hText + '
  • ') # } # else if (hType == 'H3') { # if (!subList) { # subList = true; # tocEntries.push('') # $('#toc').html(tocEntries.join(' ')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using VO services # # OpenKM3 uses the [pyvo interface](https://pyvo.readthedocs.io) to access data provided through the [VO server](http://vo.km3net.de) of KM3NeT. Currently implemented is a Simple Cone Search (SCS) service accessible through the TAP protocol. from openkm3.store import KM3Store store = KM3Store() service = store.get("ana20_01_vo") service.show_paraminfo() # ## Getting TAP service or SCS # # You can get the services from the loaded KM3Object, which returns pyvo objects. From here onwards, you can use pyvo functions. tap = service.get_tap() highEevents = tap.search("SELECT * FROM ant20_01.main WHERE nhit>150") # get most high-energetic events highEevents.to_table() scs = service.get_scs() coneevents = scs.search((20,30), 2) # get events for a 2 degree cone around given sky coordinates coneevents.to_table() # ## Access full table fulltable = service.get_dataframe() plot = fulltable.beta.plot(kind = "hist") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: cabi-env # language: python # name: cabi-env # --- # # Feature Selection # # ## This Notebook Attempts to Apply the Approach Described in "Short-term load forecasting using a two-stage sarimax model" (Tarsitano & Amerise) with the goal of efficiently producing parsimonious model parameters through backwards stepwise regression. # # We do not attempt to rigorously hypothesize that short term electricity loads, and bicycle counts behave in an analagous manner that makes the model approach generalizable, but rather explore the intuition that the model selection approach described may be similarly useful and specifically extensible to the bicycle count data, as both represent similar computational issues due to the relatively high frequency seasonal components in both model groups. # + import pmdarima as pm from pmdarima import arima from pmdarima import model_selection from pmdarima import pipeline from pmdarima import preprocessing as ppc from pmdarima.arima import ADFTest from sklearn.compose import ColumnTransformer from sklearn.preprocessing import PowerTransformer from sklearn.metrics import mean_squared_log_error, mean_squared_error from statsmodels.tsa.deterministic import CalendarSeasonality import cabi.etl.load as l import cabi.etl.transform as t import pandas as pd import numpy as np from matplotlib import pyplot as plt print("pmdarima version: %s" % pm.__version__) # - def RMSE(y_true, y_pred): return mean_squared_error(y_true, y_pred, squared=False) # ## Load the Data Select Top Five Most Active ANCs in Either Direction # # ### Follow Up Ideas **FLAGGED FOLLOW UP** # # - Model Checkins/Checkouts by selecting start/ends from trips long # - Model Poisson instead of SARIMA? 
counts = l.load_counts_full() # + pd.set_option('display.float_format', lambda x: '%.5f' % x) # 1A/1C has most outflow, 2E/2B/6D have most inflow on average counts.mean().sort_values() # - bot_five = counts.sum().sort_values().head(5).index top_five = counts.sum().sort_values().tail(5).index print(top_five, bot_five) model_groups = list(bot_five) + list(top_five) model_groups bot_five = ['1A', '1C', '3C', '5E', '4C'] top_five = ['6C', '2C', '6D', '2E', '2B'] hourly_groups = counts[model_groups].resample('1H').sum() hourly_groups = hourly_groups[hourly_groups.index > '2020-06-15'] hourly_groups # ## Create Weekday/Hourly Dummies, Weekly Fourier Features to Backwards Eliminate def get_seasonal_dummies(df): """Accepts a time-indexed df of hourly data, returns hourly and weekday dummies as a df to passed as exogenous variables in a SARIMAX model""" columns = df.columns new_df = df.copy() new_df['time'] = new_df.index # create weekday dummy generator wday_dumgen = ppc.DateFeaturizer(column_name='time', with_day_of_month=False) # since all have the same index, we can use any column in the df to generate the day_dums _, wday_dums = wday_dumgen.fit_transform(new_df[columns[0]], new_df) # drop the columns that aren't dummies wday_dums = wday_dums[wday_dums.columns[-7:]] # set the index for easy merging wday_dums.set_index(new_df.index, inplace=True) # create hourly dummy generator hourly_dumgen = CalendarSeasonality('H', 'D') # generate dummies hourly_dums = hourly_dumgen.in_sample(new_df.index) # merge results full_dums = wday_dums.merge(hourly_dums, on='time') return full_dums # for use with pmdarima, the timestamps must be in a column instead of the index hourly_groups['time'] = hourly_groups.index hourly_groups # + wday_dums = ppc.DateFeaturizer(column_name='time', with_day_of_month=False) # since all have the same index, we can use any column in the df to generate the day_dums _, day_dums = wday_dums.fit_transform(hourly_groups['1A'], hourly_groups) day_dums # - # drop the columns that aren't dummies day_dums = day_dums[day_dums.columns[-7:]] day_dums.set_index(hourly_groups.index, inplace=True) day_dums.columns # + jupyter={"outputs_hidden": true} day_dums # - hourly_dumgen = CalendarSeasonality('H', 'D') hourly_dummies = hourly_dumgen.in_sample(hourly_groups.index) # ### Note on Sparsity # # # See below representation of each hour of the day for each timestamp in the index. # Note the large number of zero values this results in (for each row only one of 24 columns will have a non-zero value). # We will first attempt to fit the data in this manner, but if it proves inefficient, it may be worth converting these columns to binary data, as in (23 = 10111, instead of 00000...1) # + jupyter={"outputs_hidden": true} hourly_dummies # - full_dums = day_dums.merge(hourly_dummies, on='time') full_dums hourly_groups = hourly_groups.drop('time', axis=1) hourly_groups get_seasonal_dummies(hourly_groups) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- #

# # Hierarchical Clustering - Agglomerative
#
# ## Generating Random Data

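# This section assumes the following imports (reconstructed from the calls used below,
# e.g. make_blobs, AgglomerativeClustering, distance_matrix and hierarchy), since no
# import cell appears earlier in this notebook:

# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.spatial import distance_matrix
from scipy.cluster import hierarchy
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
# %matplotlib inline
# -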
    X1, y1 = make_blobs(n_samples=50, centers=[[4,4], [-2, -1], [1, 1], [10,4]], cluster_std=0.9) X1[0:5] y1[0:5] # Plot the scatter plot of the randomly generated data. # plt.scatter(X1[:, 0], X1[:, 1], marker='o') #
# ## Agglomerative Clustering
    # # We will start by clustering the random data points we just created. # agglom = AgglomerativeClustering(n_clusters = 4, linkage = 'average') agglom.fit(X1,y1) # + # Create a figure of size 6 inches by 4 inches. plt.figure(figsize=(6,4)) # These two lines of code are used to scale the data points down, # Or else the data points will be scattered very far apart. # Create a minimum and maximum range of X1. x_min, x_max = np.min(X1, axis=0), np.max(X1, axis=0) # Get the average distance for X1. X1 = (X1 - x_min) / (x_max - x_min) # This loop displays all of the datapoints. for i in range(X1.shape[0]): # Replace the data points with their respective cluster value # (ex. 0) and is color coded with a colormap (plt.cm.spectral) plt.text(X1[i, 0], X1[i, 1], str(y1[i]), color=plt.cm.nipy_spectral(agglom.labels_[i] / 10.), fontdict={'weight': 'bold', 'size': 9}) # Remove the x ticks, y ticks, x and y axis plt.xticks([]) plt.yticks([]) #plt.axis('off') # Display the plot of the original data before clustering plt.scatter(X1[:, 0], X1[:, 1], marker='.') # Display the plot plt.show() # - #

# ## Dendrogram Associated for the Agglomerative Hierarchical Clustering
    # # dist_matrix = distance_matrix(X1,X1) print(dist_matrix) Z = hierarchy.linkage(dist_matrix, 'complete') dendro = hierarchy.dendrogram(Z) # + # write your code here Z = hierarchy.linkage(dist_matrix, 'average') dendro = hierarchy.dendrogram(Z) # - #
# ## Clustering on Vehicle dataset
    # # !wget -O cars_clus.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%204/data/cars_clus.csv # ## Read data # # Let's read dataset to see what features the manufacturer has collected about the existing models. # # + filename = 'cars_clus.csv' #Read csv pdf = pd.read_csv(filename) print ("Shape of dataset: ", pdf.shape) pdf.head(5) # - #

# ## Data Cleaning
    # # Let's clean the dataset by dropping the rows that have null value: # print ("Shape of dataset before cleaning: ", pdf.size) pdf[[ 'sales', 'resale', 'type', 'price', 'engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg', 'lnsales']] = pdf[['sales', 'resale', 'type', 'price', 'engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg', 'lnsales']].apply(pd.to_numeric, errors='coerce') pdf = pdf.dropna() pdf = pdf.reset_index(drop=True) print ("Shape of dataset after cleaning: ", pdf.size) pdf.head(5) # ### Feature selection # # Let's select our feature set: # featureset = pdf[['engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg']] # ### Normalization from sklearn.preprocessing import MinMaxScaler x = featureset.values #returns a numpy array min_max_scaler = MinMaxScaler() feature_mtx = min_max_scaler.fit_transform(x) feature_mtx [0:5] #

# ## Clustering using Scipy
    import scipy leng = feature_mtx.shape[0] D = np.zeros([leng,leng]) for i in range(leng): for j in range(leng): D[i,j] = scipy.spatial.distance.euclidean(feature_mtx[i], feature_mtx[j]) D import pylab import scipy.cluster.hierarchy Z = hierarchy.linkage(D, 'complete') from scipy.cluster.hierarchy import fcluster max_d = 3 clusters = fcluster(Z, max_d, criterion='distance') clusters from scipy.cluster.hierarchy import fcluster k = 5 clusters = fcluster(Z, k, criterion='maxclust') clusters # + fig = pylab.figure(figsize=(18,50)) def llf(id): return '[%s %s %s]' % (pdf['manufact'][id], pdf['model'][id], int(float(pdf['type'][id])) ) dendro = hierarchy.dendrogram(Z, leaf_label_func=llf, leaf_rotation=0, leaf_font_size =12, orientation = 'right') # - #

# ## Clustering using scikit-learn
    from sklearn.metrics.pairwise import euclidean_distances dist_matrix = euclidean_distances(feature_mtx,feature_mtx) print(dist_matrix) Z_using_dist_matrix = hierarchy.linkage(dist_matrix, 'complete') # + fig = pylab.figure(figsize=(18,50)) def llf(id): return '[%s %s %s]' % (pdf['manufact'][id], pdf['model'][id], int(float(pdf['type'][id])) ) dendro = hierarchy.dendrogram(Z_using_dist_matrix, leaf_label_func=llf, leaf_rotation=0, leaf_font_size =12, orientation = 'right') # + agglom = AgglomerativeClustering(n_clusters = 6, linkage = 'complete') agglom.fit(dist_matrix) agglom.labels_ # - pdf['cluster_'] = agglom.labels_ pdf.head() # + import matplotlib.cm as cm n_clusters = max(agglom.labels_)+1 colors = cm.rainbow(np.linspace(0, 1, n_clusters)) cluster_labels = list(range(0, n_clusters)) # Create a figure of size 6 inches by 4 inches. plt.figure(figsize=(16,14)) for color, label in zip(colors, cluster_labels): subset = pdf[pdf.cluster_ == label] for i in subset.index: plt.text(subset.horsepow[i], subset.mpg[i],str(subset['model'][i]), rotation=25) plt.scatter(subset.horsepow, subset.mpg, s= subset.price*10, c=color, label='cluster'+str(label),alpha=0.5) # plt.scatter(subset.horsepow, subset.mpg) plt.legend() plt.title('Clusters') plt.xlabel('horsepow') plt.ylabel('mpg') # - pdf.groupby(['cluster_','type'])['cluster_'].count() agg_cars = pdf.groupby(['cluster_','type'])['horsepow','engine_s','mpg','price'].mean() agg_cars plt.figure(figsize=(16,10)) for color, label in zip(colors, cluster_labels): subset = agg_cars.loc[(label,),] for i in subset.index: plt.text(subset.loc[i][0]+5, subset.loc[i][2], 'type='+str(int(i)) + ', price='+str(int(subset.loc[i][3]))+'k') plt.scatter(subset.horsepow, subset.mpg, s=subset.price*20, c=color, label='cluster'+str(label)) plt.legend() plt.title('Clusters') plt.xlabel('horsepow') plt.ylabel('mpg') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import numpy as np from configparser import ConfigParser import config as cfg from fairseq.models.transformer import TransformerModel from fairseq.models.fconv import FConvModel from fairseq.models.lstm import LSTMModel import logging logger = logging.getLogger(__name__) # + pycharm={"name": "#%%\n"} def load_model(_model, _lang, _type): data_dir = cfg.IWSLT2016_LEAK.tgt_dir.format(_lang, _type) ckpt_dir = os.path.join(cfg.checkpoint_dir, 'privacy_leak', '{}-de-en-{}-{}'.format(cfg.IWSLT2016_LEAK.name, _model, _type)) if _model == 'transformer': de2en = TransformerModel.from_pretrained( ckpt_dir, checkpoint_file='checkpoint_best.pt', data_name_or_path=os.path.join(data_dir, 'data-bin'), tokenizer='moses', bpe='subword_nmt', bpe_codes=os.path.join(data_dir, 'codes/codes.{}'.format(_lang.split('-')[0])) ) elif _model == 'cnn': de2en = FConvModel.from_pretrained( ckpt_dir, checkpoint_file='checkpoint_best.pt', data_name_or_path=os.path.join(data_dir, 'data-bin'), tokenizer='moses', bpe='subword_nmt', bpe_codes=os.path.join(data_dir, 'codes/codes.{}'.format(_lang.split('-')[0])) ) elif _model == 'lstm': de2en = LSTMModel.from_pretrained( ckpt_dir, checkpoint_file='checkpoint_best.pt', data_name_or_path=os.path.join(data_dir, 'data-bin'), tokenizer='moses', bpe='subword_nmt', bpe_codes=os.path.join(data_dir, 'codes/codes.{}'.format(_lang.split('-')[0])) ) elif _model == 'wmt': de2en = TransformerModel.from_pretrained( ckpt_dir, 
checkpoint_file='checkpoint_best.pt', data_name_or_path=os.path.join(data_dir, 'data-bin'), tokenizer='moses', bpe='fastbpe', bpe_codes='/home/changxu/project/wmt19.de-en.joined-dict.ensemble/bpecodes' ) else: raise NotImplementedError de2en.eval() de2en.cuda() print('loaded.') return de2en # + pycharm={"name": "#%%\n"} def prediction(_src, _model, beam): src_bin = _model.encode(_src) translations = _model.generate(src_bin, beam=beam, sampling=False, seed=2020) print(_src) print('-------') for idx, sample in enumerate(translations): tokens = sample['tokens'] score = sample['score'].item() score = np.power(2, score) print(idx + 1, score, _model.decode(tokens)) # + pycharm={"name": "#%%\n"} model = load_model('transformer', 'de-en', 'pn-2-s-r100-b5000') prediction('Alices Telefonnummer ist', model, beam=100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import copy import pickle import torch import nibabel as nib import numpy as np import pandas as pd import torch.nn as nn import torch.nn.functional as F from sklearn.metrics import confusion_matrix, roc_auc_score import matplotlib.pyplot as plt import matplotlib.patches as mpatches import scipy.stats folder = 'k10b' regions = 'ia' val = 'val_' # + aal_img = nib.load('./AAL/AAL.nii').get_fdata()[5:85, 8:103, 3:80] file = open("./AAL/labels.pkl", "rb") aal_labels = pickle.load(file) file.close() # + combined_activation_map = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_CN = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_MCI = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_AD = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_wrong = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_wrong_CN = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_wrong_MCI = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) combined_activation_map_wrong_AD = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_CN = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_MCI = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_AD = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_wrong = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_wrong_CN = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_wrong_MCI = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) overlap_activation_map_wrong_AD = np.zeros((aal_img.shape[0], aal_img.shape[1], aal_img.shape[2])) # + count_combined_activation_map = 0 count_combined_activation_map_CN = 0 count_combined_activation_map_MCI = 0 count_combined_activation_map_AD = 0 count_combined_activation_map_wrong = 0 count_combined_activation_map_wrong_CN = 0 count_combined_activation_map_wrong_MCI = 0 count_combined_activation_map_wrong_AD = 0 count_overlap_activation_map = 0 count_overlap_activation_map_CN = 0 count_overlap_activation_map_MCI = 0 count_overlap_activation_map_AD = 0 
count_overlap_activation_map_wrong = 0 count_overlap_activation_map_wrong_CN = 0 count_overlap_activation_map_wrong_MCI = 0 count_overlap_activation_map_wrong_AD = 0 for i in range(1, 11): temp_combined_activation_map = np.load(folder + '/ensamble/Map_' + val + 'All_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_CN = np.load(folder + '/ensamble/Map_' + val + 'CN_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_MCI = np.load(folder + '/ensamble/Map_' + val + 'MCI_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_AD = np.load(folder + '/ensamble/Map_' + val + 'AD_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_wrong = np.load(folder + '/ensamble/Map_' + val + 'wrong_All_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_wrong_CN = np.load(folder + '/ensamble/Map_' + val + 'wrong_CN_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_wrong_MCI = np.load(folder + '/ensamble/Map_' + val + 'wrong_MCI_' + regions + '_' + str(i) + '.npy') temp_combined_activation_map_wrong_AD = np.load(folder + '/ensamble/Map_' + val + 'wrong_AD_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map = np.load(folder + '/ensamble/Map_' + val + 'All_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_CN = np.load(folder + '/ensamble/Map_' + val + 'CN_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_MCI = np.load(folder + '/ensamble/Map_' + val + 'MCI_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_AD = np.load(folder + '/ensamble/Map_' + val + 'AD_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_wrong = np.load(folder + '/ensamble/Map_' + val + 'wrong_All_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_wrong_CN = np.load(folder + '/ensamble/Map_' + val + 'wrong_CN_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_wrong_MCI = np.load(folder + '/ensamble/Map_' + val + 'wrong_MCI_overlap_' + regions + '_' + str(i) + '.npy') temp_overlap_activation_map_wrong_AD = np.load(folder + '/ensamble/Map_' + val + 'wrong_AD_overlap_' + regions + '_' + str(i) + '.npy') if temp_combined_activation_map.sum() > 0: count_combined_activation_map += 1 if temp_combined_activation_map_CN.sum() > 0: count_combined_activation_map_CN += 1 if temp_combined_activation_map_MCI.sum() > 0: count_combined_activation_map_MCI += 1 if temp_combined_activation_map_AD.sum() > 0: count_combined_activation_map_AD += 1 if temp_combined_activation_map_wrong.sum() > 0: count_combined_activation_map_wrong += 1 if temp_combined_activation_map_wrong_CN.sum() > 0: count_combined_activation_map_wrong_CN += 1 if temp_combined_activation_map_wrong_MCI.sum() > 0: count_combined_activation_map_wrong_MCI += 1 if temp_combined_activation_map_wrong_AD.sum() > 0: count_combined_activation_map_wrong_AD += 1 if temp_overlap_activation_map.sum() > 0: count_overlap_activation_map += 1 if temp_overlap_activation_map_CN.sum() > 0: count_overlap_activation_map_CN += 1 if temp_overlap_activation_map_MCI.sum() > 0: count_overlap_activation_map_MCI += 1 if temp_overlap_activation_map_AD.sum() > 0: count_overlap_activation_map_AD += 1 if temp_overlap_activation_map_wrong.sum() > 0: count_overlap_activation_map_wrong += 1 if temp_overlap_activation_map_wrong_CN.sum() > 0: count_overlap_activation_map_wrong_CN += 1 if temp_overlap_activation_map_wrong_MCI.sum() > 0: count_overlap_activation_map_wrong_MCI += 1 if 
temp_overlap_activation_map_wrong_AD.sum() > 0: count_overlap_activation_map_wrong_AD += 1 combined_activation_map += temp_combined_activation_map combined_activation_map_CN += temp_combined_activation_map_CN combined_activation_map_MCI += temp_combined_activation_map_MCI combined_activation_map_AD += temp_combined_activation_map_AD combined_activation_map_wrong += temp_combined_activation_map_wrong combined_activation_map_wrong_CN += temp_combined_activation_map_wrong_CN combined_activation_map_wrong_MCI += temp_combined_activation_map_wrong_MCI combined_activation_map_wrong_AD += temp_combined_activation_map_wrong_AD overlap_activation_map += temp_overlap_activation_map overlap_activation_map_CN += temp_overlap_activation_map_CN overlap_activation_map_MCI += temp_overlap_activation_map_MCI overlap_activation_map_AD += temp_overlap_activation_map_AD overlap_activation_map_wrong += temp_overlap_activation_map_wrong overlap_activation_map_wrong_CN += temp_overlap_activation_map_wrong_CN overlap_activation_map_wrong_MCI += temp_overlap_activation_map_wrong_MCI overlap_activation_map_wrong_AD += temp_overlap_activation_map_wrong_AD combined_activation_map = combined_activation_map / count_combined_activation_map combined_activation_map_CN = combined_activation_map_CN / count_combined_activation_map_CN combined_activation_map_MCI = combined_activation_map_MCI / count_combined_activation_map_MCI combined_activation_map_AD = combined_activation_map_AD / count_combined_activation_map_AD combined_activation_map_wrong = combined_activation_map_wrong / count_combined_activation_map_wrong combined_activation_map_wrong_CN = combined_activation_map_wrong_CN / count_combined_activation_map_wrong_CN combined_activation_map_wrong_MCI = combined_activation_map_wrong_MCI / count_combined_activation_map_wrong_MCI combined_activation_map_wrong_AD = combined_activation_map_wrong_AD / count_combined_activation_map_wrong_AD overlap_activation_map = overlap_activation_map / count_overlap_activation_map overlap_activation_map_CN = overlap_activation_map_CN / count_overlap_activation_map_CN overlap_activation_map_MCI = overlap_activation_map_MCI / count_overlap_activation_map_MCI overlap_activation_map_AD = overlap_activation_map_AD / count_overlap_activation_map_AD overlap_activation_map_wrong = overlap_activation_map_wrong / count_overlap_activation_map_wrong overlap_activation_map_wrong_CN = overlap_activation_map_wrong_CN / count_overlap_activation_map_wrong_CN overlap_activation_map_wrong_MCI = overlap_activation_map_wrong_MCI / count_overlap_activation_map_wrong_MCI overlap_activation_map_wrong_AD = overlap_activation_map_wrong_AD / count_overlap_activation_map_wrong_AD # + np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'All.npy', combined_activation_map) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'CN.npy', combined_activation_map_CN) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'MCI.npy', combined_activation_map_MCI) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'AD.npy', combined_activation_map_AD) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_All.npy', combined_activation_map_wrong) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_CN.npy', combined_activation_map_wrong_CN) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_MCI.npy', combined_activation_map_wrong_MCI) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_AD.npy', 
combined_activation_map_wrong_AD) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'All_overlap.npy', overlap_activation_map) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'CN_overlap.npy', overlap_activation_map_CN) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'MCI_overlap.npy', overlap_activation_map_MCI) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'AD_overlap.npy', overlap_activation_map_AD) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_All_overlap.npy', overlap_activation_map_wrong) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_CN_overlap.npy', overlap_activation_map_wrong_CN) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_MCI_overlap.npy', overlap_activation_map_wrong_MCI) np.save(folder + '/ensamble/average_' + regions + '_Map_' + val + 'wrong_AD_overlap.npy', overlap_activation_map_wrong_AD) # - vmax = max(combined_activation_map_CN.max(), combined_activation_map_MCI.max(), combined_activation_map_AD.max()) fig, ax = plt.subplots() ax.imshow(combined_activation_map_CN[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('CN_GACAM.png') fig, ax = plt.subplots() ax.imshow(combined_activation_map_MCI[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('MCI_GACAM.png') fig, ax = plt.subplots() ax.imshow(combined_activation_map_AD[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('AD_GACAM.png') vmax = max(combined_activation_map_wrong_CN.max(), combined_activation_map_wrong_MCI.max(), combined_activation_map_wrong_AD.max()) fig, ax = plt.subplots() ax.imshow(combined_activation_map_wrong_CN[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('CN_incorrect_GACAM.png') fig, ax = plt.subplots() ax.imshow(combined_activation_map_wrong_MCI[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('MCI_incorrect_GACAM.png') fig, ax = plt.subplots() ax.imshow(combined_activation_map_wrong_AD[:, :, 45]) ax.set_xticks([]) ax.set_yticks([]) fig.savefig('AD_incorrect_GACAM.png') plt.imshow(aal_img[:, :, 45]) # + all_stats = {} for stats, CAM in zip(['All', 'CN', 'MCI', 'AD', 'CN-All', 'MCI-All', 'AD-All'], [combined_activation_map, combined_activation_map_CN, combined_activation_map_MCI, combined_activation_map_AD, combined_activation_map_CN - combined_activation_map, combined_activation_map_MCI - combined_activation_map, combined_activation_map_AD - combined_activation_map]): volumes = {} intensities = {} densities = {} for key in aal_labels.keys(): mask = aal_img != aal_labels[key] masked_cam = copy.copy(CAM) masked_cam[mask] = 0 volumes[key] = mask.size - np.count_nonzero(mask) intensities[key] = masked_cam.sum() densities[key] = intensities[key] / volumes[key] all_stats[stats] = {} all_stats[stats]['Volume'] = dict(sorted(volumes.items(), key = lambda item: item[1], reverse = False)) all_stats[stats]['Intensities'] = dict(sorted(intensities.items(), key = lambda item: item[1], reverse = False)) all_stats[stats]['Densities'] = dict(sorted(densities.items(), key = lambda item: item[1], reverse = False)) for stats, CAM in zip(['All', 'CN', 'MCI', 'AD', 'CN-All', 'MCI-All', 'AD-All'], [overlap_activation_map, overlap_activation_map_CN, overlap_activation_map_MCI, overlap_activation_map_AD, overlap_activation_map_CN - overlap_activation_map, overlap_activation_map_MCI - overlap_activation_map, overlap_activation_map_AD - overlap_activation_map]): overlap = {} for key in aal_labels.keys(): mask = aal_img != 
aal_labels[key] masked_cam = copy.copy(CAM) masked_cam[mask] = 0 overlap[key] = masked_cam.sum() / (mask.size - np.count_nonzero(mask)) all_stats[stats]['Overlap'] = dict(sorted(overlap.items(), key = lambda item: item[1], reverse = False)) with open('stats.npy', 'wb') as fp: pickle.dump(all_stats, fp) # - with open(folder + '/stats.npy', 'rb') as fp: all_stats = pickle.load(fp) # + def side(code): if code % 10 == 0: return 'Misc' elif code % 10 == 1: return 'Left' else: return 'Right' def lobe(code): if code >= 2000 and code < 3000 or code >= 6400 and code < 6500: # Frontal Lobe, https://www.pmod.com/files/download/v35/doc/pneuro/6750.htm return 'Frontal' elif code >= 4100 and code < 4300 or code >= 5400 and code < 5500 or code >= 8000 and code < 9000: # Temporal Lobe return 'Temporal' elif code >= 6000 and code < 6400: # Parietal Lobe return 'Parietal' elif code >= 5000 and code < 5400: # Occipital Lobe return 'Occipital' elif code > 9000: return 'Cerebellum' elif code >= 4000 and code < 5000: return 'Cingulum' else: return 'Misc' all_stats_df = pd.DataFrame(columns = ['Region', 'All Intensity', 'All Intensity Rank', 'CN Intensity', 'CN Intensity Rank', 'MCI Intensity', 'MCI Intensity Rank', 'AD Intensity', 'AD Intensity Rank', 'CN-All Intensity', 'CN-All Intensity Rank', 'MCI-All Intensity', 'MCI-All Intensity Rank', 'AD-All Intensity', 'AD-All Intensity Rank', 'All Overlap', 'All Overlap Rank', 'CN Overlap', 'CN Overlap Rank', 'MCI Overlap', 'MCI Overlap Rank', 'AD Overlap', 'AD Overlap Rank', 'CN-All Overlap', 'CN-All Overlap Rank', 'MCI-All Overlap', 'MCI-All Overlap Rank', 'AD-All Overlap', 'AD-All Overlap Rank']) all_keys = list(all_stats['All']['Intensities'].keys()) cn_keys = list(all_stats['CN']['Intensities'].keys()) mci_keys = list(all_stats['MCI']['Intensities'].keys()) ad_keys = list(all_stats['AD']['Intensities'].keys()) cn_all_keys = list(all_stats['CN-All']['Intensities'].keys()) mci_all_keys = list(all_stats['MCI-All']['Intensities'].keys()) ad_all_keys = list(all_stats['AD-All']['Intensities'].keys()) overlap_all_keys = list(all_stats['All']['Overlap'].keys()) overlap_cn_keys = list(all_stats['CN']['Overlap'].keys()) overlap_mci_keys = list(all_stats['MCI']['Overlap'].keys()) overlap_ad_keys = list(all_stats['AD']['Overlap'].keys()) overlap_cn_all_keys = list(all_stats['CN-All']['Overlap'].keys()) overlap_mci_all_keys = list(all_stats['MCI-All']['Overlap'].keys()) overlap_ad_all_keys = list(all_stats['AD-All']['Overlap'].keys()) for key in aal_labels.keys(): all_stats_df = all_stats_df.append({ 'Region': key, 'Code': aal_labels[key], 'Side': side(aal_labels[key]), 'Lobe': lobe(aal_labels[key]), 'All Intensity': all_stats['All']['Intensities'][key], 'All Intensity Rank': 117 - all_keys.index(key), 'CN Intensity': all_stats['CN']['Intensities'][key], 'CN Intensity Rank': 117 - cn_keys.index(key), 'MCI Intensity': all_stats['MCI']['Intensities'][key], 'MCI Intensity Rank': 117 - mci_keys.index(key), 'AD Intensity': all_stats['AD']['Intensities'][key], 'AD Intensity Rank': 117 - ad_keys.index(key), 'CN-All Intensity': all_stats['CN-All']['Intensities'][key], 'CN-All Intensity Rank': 117 - cn_all_keys.index(key), 'MCI-All Intensity': all_stats['MCI-All']['Intensities'][key], 'MCI-All Intensity Rank': 117 - mci_all_keys.index(key), 'AD-All Intensity': all_stats['AD-All']['Intensities'][key], 'AD-All Intensity Rank': 116 - ad_all_keys.index(key), 'All Overlap': all_stats['All']['Overlap'][key], 'All Overlap Rank': 117 - overlap_all_keys.index(key), 'CN Overlap': 
all_stats['CN']['Overlap'][key], 'CN Overlap Rank': 117 - overlap_cn_keys.index(key), 'MCI Overlap': all_stats['MCI']['Overlap'][key], 'MCI Overlap Rank': 117 - overlap_mci_keys.index(key), 'AD Overlap': all_stats['AD']['Overlap'][key], 'AD Overlap Rank': 117 - overlap_ad_keys.index(key), 'CN-All Overlap': all_stats['CN-All']['Overlap'][key], 'CN-All Overlap Rank': 117 - overlap_cn_all_keys.index(key), 'MCI-All Overlap': all_stats['MCI-All']['Overlap'][key], 'MCI-All Overlap Rank': 117 - overlap_mci_all_keys.index(key), 'AD-All Overlap': all_stats['AD-All']['Overlap'][key], 'AD-All Overlap Rank': 117 - overlap_ad_all_keys.index(key) }, ignore_index = True) # - all_stats_df_regions = all_stats_df[all_stats_df['Region'] != 'Background'] def calculateColor(lobes): colors = [] for lobe in lobes: if lobe == 'Frontal': colors.append('#CC3333') elif lobe == 'Temporal': colors.append('#33CC33') elif lobe == 'Parietal': colors.append('#3333CC') elif lobe == 'Occipital': colors.append('#CCCC33') elif lobe == 'Cerebellum': colors.append('#CC33CC') elif lobe == 'Cingulum': colors.append('#33CCCC') else: colors.append('#333333') return colors condition = 'CN-All' fig, ax = plt.subplots(figsize = (30, 10)) ax.bar(np.arange(len(all_stats_df_regions.index)), list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')[condition + ' Intensity']), color = calculateColor(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Lobe'].values)) ax.set_xticks(np.arange(len(all_stats_df_regions.index))) ax.set_xticklabels(list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Region']), rotation = 60, ha = 'right') ax.set_yticks([]) frontal_legend = mpatches.Patch(color='#CC3333', label='Frontal Lobe') temporal_legend = mpatches.Patch(color='#33CC33', label='Temporal Lobe') parietal_legend = mpatches.Patch(color='#3333CC', label='Parietal Lobe') occipital_legend = mpatches.Patch(color='#CCCC33', label='Occipital Lobe') cerebellum_legend = mpatches.Patch(color='#CC33CC', label='Cerebellum') cingulum_legend = mpatches.Patch(color='#33CCCC', label='Cingulum') misc_legend = mpatches.Patch(color='#333333', label='Other') ax.legend(loc='upper right', handles=[frontal_legend, temporal_legend, parietal_legend, occipital_legend, cerebellum_legend, cingulum_legend, misc_legend], fontsize=26) fig.tight_layout() fig.savefig('CN_Bars_wrong.png') pass condition = 'MCI-All' fig, ax = plt.subplots(figsize = (30, 10)) ax.bar(np.arange(len(all_stats_df_regions.index)), list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')[condition + ' Intensity']), color = calculateColor(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Lobe'].values)) ax.set_xticks(np.arange(len(all_stats_df_regions.index))) ax.set_xticklabels(list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Region']), rotation = 60, ha = 'right') ax.set_yticks([]) frontal_legend = mpatches.Patch(color='#CC3333', label='Frontal Lobe') temporal_legend = mpatches.Patch(color='#33CC33', label='Temporal Lobe') parietal_legend = mpatches.Patch(color='#3333CC', label='Parietal Lobe') occipital_legend = mpatches.Patch(color='#CCCC33', label='Occipital Lobe') cerebellum_legend = mpatches.Patch(color='#CC33CC', label='Cerebellum') cingulum_legend = mpatches.Patch(color='#33CCCC', label='Cingulum') misc_legend = mpatches.Patch(color='#333333', label='Other') ax.legend(loc='upper right', handles=[frontal_legend, temporal_legend, parietal_legend, occipital_legend, cerebellum_legend, cingulum_legend, 
misc_legend], fontsize=26) fig.tight_layout() fig.savefig('MCI_Bars_wrong.png') pass condition = 'AD-All' fig, ax = plt.subplots(figsize = (30, 10)) ax.bar(np.arange(len(all_stats_df_regions.index)), list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')[condition + ' Intensity']), color = calculateColor(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Lobe'].values)) ax.set_xticks(np.arange(len(all_stats_df_regions.index))) ax.set_xticklabels(list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Region']), rotation = 60, ha = 'right') ax.set_yticks([]) frontal_legend = mpatches.Patch(color='#CC3333', label='Frontal Lobe') temporal_legend = mpatches.Patch(color='#33CC33', label='Temporal Lobe') parietal_legend = mpatches.Patch(color='#3333CC', label='Parietal Lobe') occipital_legend = mpatches.Patch(color='#CCCC33', label='Occipital Lobe') cerebellum_legend = mpatches.Patch(color='#CC33CC', label='Cerebellum') cingulum_legend = mpatches.Patch(color='#33CCCC', label='Cingulum') misc_legend = mpatches.Patch(color='#333333', label='Other') ax.legend(loc='lower left', handles=[frontal_legend, temporal_legend, parietal_legend, occipital_legend, cerebellum_legend, cingulum_legend, misc_legend], fontsize=26) fig.tight_layout() fig.savefig('AD_Bars_wrong.png') pass all_stats_df_regions[all_stats_df_regions['Side'] == 'Right']['Region'].values condition = 'All' fig, ax = plt.subplots(figsize = (30, 10)) ax.bar(np.arange(len(all_stats_df_regions.index)), list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')[condition + ' Intensity']), color = calculateColor(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Lobe'].values)) ax.set_xticks(np.arange(len(all_stats_df_regions.index))) ax.set_xticklabels(list(all_stats_df_regions.sort_values(condition + ' Intensity Rank')['Region']), rotation = 60, ha = 'right') ax.set_yticks([]) frontal_legend = mpatches.Patch(color='#CC3333', label='Frontal Lobe') temporal_legend = mpatches.Patch(color='#33CC33', label='Temporal Lobe') parietal_legend = mpatches.Patch(color='#3333CC', label='Parietal Lobe') occipital_legend = mpatches.Patch(color='#CCCC33', label='Occipital Lobe') cerebellum_legend = mpatches.Patch(color='#CC33CC', label='Cerebellum') cingulum_legend = mpatches.Patch(color='#33CCCC', label='Cingulum') misc_legend = mpatches.Patch(color='#333333', label='Other') ax.legend(loc='upper right', handles=[frontal_legend, temporal_legend, parietal_legend, occipital_legend, cerebellum_legend, cingulum_legend, misc_legend], fontsize=26) fig.tight_layout() fig.savefig('All_Bars.png') pass # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py37] # language: python # name: conda-env-py37-py # --- # note: # - We found that kendall implementation in the pandas differ from v1.3.1 and 1.1.5 # - pandas does not have tau-c implementation # - we use kendall from the scipy package # - library version: # - pandas: 1.3.1 # - scipy: 1.6.2 # # flickr8k import pandas as pd import numpy as np # + import scipy from scipy import stats def kendall_tau_b(x, y): tau, p_value = stats.kendalltau(x, y, variant="b") return tau def kendall_tau_c(x, y): tau, p_value = stats.kendalltau(x, y, variant="c") return tau scipy.__version__ # + # # official vilbertscore # df = pd.read_csv("../results/flickr8k.csv") # flickr_precision = df["precision"].to_list() # flickr_recall = 
df["recall"].to_list() # flickr_f1 = df["f1"].to_list() # with open("../data/flickr8k/scores.pkl", "rb") as f: # human = pickle.load(f, encoding='latin1') # human = [(x-1)/3 for x in human] # + # custom vilbertscore df = pd.read_csv("../data/processed/flickr8k/vilbertscore.csv") flickr_precision = df["precision"].to_list() flickr_recall = df["recall"].to_list() flickr_f1 = df["f1"].to_list() with open("../data/processed/flickr8k/annotations_avg.txt") as f: human = f.readlines() human = [ float(x.strip()) for x in human] human = [(x-1)/3 for x in human] df_final = pd.DataFrame([human, flickr_precision, flickr_recall, flickr_f1]).T df_final = df_final.rename(columns={0:"human", 1:"precision", 2:"recall", 3:"f1"}) df_final.corr(method=kendall_tau_c) # + # df_final.corr(method=kendall_tau_b) # - # ### flat annotation # + # custom vilbertscore df = pd.read_csv("../data/processed/flickr8k/vilbertscore.csv") flickr_precision = df["precision"].to_list() flickr_recall = df["recall"].to_list() flickr_f1 = df["f1"].to_list() flickr_precision_repeat3 = [] flickr_recall_repeat3 = [] flickr_f1_repeat_3 = [] for a, b, c in zip(flickr_precision, flickr_recall, flickr_f1): flickr_precision_repeat3.append(float(a)) flickr_precision_repeat3.append(float(a)) flickr_precision_repeat3.append(float(a)) flickr_recall_repeat3.append(float(b)) flickr_recall_repeat3.append(float(b)) flickr_recall_repeat3.append(float(b)) flickr_f1_repeat_3.append(float(c)) flickr_f1_repeat_3.append(float(c)) flickr_f1_repeat_3.append(float(c)) human_flat = [] with open("../data/processed/flickr8k/annotations.txt") as f: tmp = f.readlines() for item in tmp: a, b, c = item.strip().split(",") human_flat.append(float(a)) human_flat.append(float(b)) human_flat.append(float(c)) len(human_flat) # - df_final_flat = pd.DataFrame([human_flat, flickr_precision_repeat3, flickr_recall_repeat3, flickr_f1_repeat_3]).T df_final_flat = df_final_flat.rename(columns={0:"human", 1:"precision", 2:"recall", 3:"f1"}) df_final_flat.corr(method=kendall_tau_c) # + # df_final_flat.corr(method=kendall_tau_b) # - # # capeval1k import pandas as pd import numpy as np df = pd.read_csv("../data/raw/capeval1k/capeval1k_all_metrics.csv", index_col=False) df = df.drop(["Unnamed: 0"], axis=1) # + df_vs = pd.read_csv("../data/processed/capeval1k/vilbertscore.csv", index_col=False) df_vs_f1 = df_vs.drop(["precision", "recall"], axis=1) df_vs_f1 = df_vs_f1.rename(columns={"f1":"vilbertscore"}) df_clip = pd.read_csv("../data/processed/capeval1k/clipcore.csv", index_col=False) # - df_merge = pd.concat([df, df_vs_f1, df_clip], axis=1) df_merge.corr(method=kendall_tau_c) # + # import json # with open("../data/processed/capeval1k/cand.txt") as f: # cap_test = f.readlines() # cap_test = [x.strip() for x in cap_test] # with open("../data/processed/capeval1k/capeval_clip_result.json") as f: # clip = json.load(f) # clip[0] # list_clips = [] # list_refclips = [] # for check, y in zip(cap_test, clip): # if check == y["candiate_caption"]: # list_clips.append(y["CLIP-S"]) # list_refclips.append(y["RefCLIP-S"]) # df = pd.DataFrame([list_clips, list_refclips]).T # df = df.rename(columns={0:"clips", 1:"ref-clips"}) # df.to_csv("../data/processed/capeval1k/clipcore.csv", index=False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import yfinance as yf import matplotlib.pyplot 
as plt from IPython.core.debugger import set_trace from pytrends.request import TrendReq # # lookup table lookup = [ ['N', 'S', 'SS','S', 'N'], ['B', 'N', 'S', 'SS','S'], ['SB','B', 'N', 'S', 'SS'], ['B', 'SB','B', 'N', 'S'], ['N', 'B', 'SB','B', 'N'] ] lookup = pd.DataFrame(lookup, index=[5,4,3,2,1], columns=[5,4,3,2,1]) lookup.index.name = 'gtrend' lookup.columns.name = 'p'; lookup # + def get_gtrend(search, asof, nweeks=52*5): gtrend_from = pd.Timestamp(asof) - pd.Timedelta(weeks=nweeks) timeframe = str(gtrend_from.date()) + ' ' + asof #today 5-y' pytrends = TrendReq() pytrends.build_payload(search, timeframe=timeframe) resp = pytrends.interest_over_time() gtrend = resp[search].sum(axis=1) gtrend.index = gtrend.index.shift(7, freq='D') return gtrend def get_p(symbol, gtrend): _start = gtrend.index[0] - pd.Timedelta(weeks=1) _end = gtrend.index[-1] + pd.Timedelta(weeks=1) p = yf.download([symbol], start=_start, end=_end, thread=True)['Adj Close'] return p.reindex(gtrend.index, method='ffill') def get_signal(s): def evaluate(x): if x['52wh']: return 5 elif 0 < x['1w'] < x['4w'] < x['13w']: return 4 elif x['52wl']: return 1 elif 0 > x['1w'] > x['4w'] > x['13w']: return 2 else: return 3 status = pd.DataFrame() status['1w'] = s.pct_change(1) status['4w'] = s.pct_change(4) status['13w'] = s.pct_change(13) status['52wh'] = (s.rolling(52, min_periods=52).max() == s) status['52wl'] = (s.rolling(52, min_periods=52).min() == s) status = status.iloc[51:] return status.apply(evaluate, axis=1) def get_buysell(p, gtrend): signal = pd.DataFrame() signal['p'] = get_signal(p) signal['gtrend'] = get_signal(gtrend) return signal.apply(lambda x: lookup.at[x.gtrend, x.p], axis=1) def get_hitratio(buysell, p, nback=52): hit = pd.DataFrame() hit['buysell'] = buysell.iloc[-nback:].shift(1) hit['r'] = p.iloc[-nback:].pct_change() hit['sb'] = (hit.buysell=='SB') hit['b'] = hit.sb | (hit.buysell=='B') hit['ss'] = (hit.buysell=='SS') hit['s'] = hit.ss | (hit.buysell=='S') hit['sb_ss'] = hit.sb | hit.ss hit['b_s'] = hit.b | hit.s hit['sb_up'] = (hit.sb) & (hit.r>0) hit['b_up'] = (hit.b) & (hit.r>0) hit['ss_down'] = (hit.ss) & (hit.r<0) hit['s_down'] = (hit.s) & (hit.r<0) hit['sb_up_ss_down'] = hit.sb_up | hit.ss_down hit['b_up_s_down'] = hit.b_up | hit.s_down hitratio = pd.DataFrame() hitratio.loc['strong','buy'] = hit.sb_up.sum() / hit.sb.sum() hitratio.loc['normal','buy'] = hit.b_up.sum() / hit.b.sum() hitratio.loc['strong','sell'] = hit.ss_down.sum() / hit.ss.sum() hitratio.loc['normal','sell'] = hit.s_down.sum() / hit.s.sum() hitratio.loc['strong','buy or sell'] = hit.sb_up_ss_down.sum() / hit.sb_ss.sum() hitratio.loc['normal','buy or sell'] = hit.b_up_s_down.sum() / hit.b_s.sum() return hitratio def plot_p_gtrend(p, gtrend, nback=52): p.iloc[-nback:].plot(label='p', legend=True) gtrend.iloc[-nback:].plot(secondary_y=True, label='gtrend', legend=True) def plot_buysell(buysell, p, nback=52): f, ax = plt.subplots(figsize=(8,5)) p.iloc[-nback:].plot(color='w', lw=3, ax=ax) sigs = buysell.iloc[-nback:].map({'SS':-2, 'S':-1, 'N':0, 'B':1, 'SB':2}) c = ax.pcolorfast(ax.get_xlim(), ax.get_ylim(), sigs.values[np.newaxis], cmap='coolwarm', alpha=1) c.set_clim(-2, 2) cbar = f.colorbar(c, ticks=[-2,-1,0,1,2]) cbar.ax.set_yticklabels(['SS','S','N','B','SB']); def plot_perf(buysell, p, strat={'SS':0, 'S':0.25, 'N':0.5, 'B':1, 'SB':2}, nback=52): p_normalized(p, nback=nback).plot(label='p', legend=True) backtest_rebal_weekly(buysell, p, strat, nback=nback).plot(label='rebal weekly', legend=True) 
backtest_rebal_when_new_signal(buysell, p, strat, nback=nback).nav.plot(label='rebal when new signal', legend=True) # + def p_normalized(p, nback=52): return p.iloc[-nback:] / p.iloc[-nback] def backtest_rebal_weekly(buysell, p, strat, nback=52): bt = (buysell.shift(1).map(strat)*p.pct_change()).fillna(0) bt = bt.iloc[-nback:] bt.iloc[0] = 0 return (bt + 1).cumprod() def backtest_rebal_when_new_signal(buysell, p, strat, nback=52): bt = pd.DataFrame() bt['buysell'] = buysell.iloc[-nback:].map(strat) bt['p'] = p sig_change = (buysell.iloc[-nback:] != buysell.iloc[-nback:].shift(1)) sig_change.iloc[0] = True bt['sig_change'] = sig_change hold = [] hold_lev = [] cash = [] nav = [] for i, record in enumerate(bt.itertuples()): try: _hold_prev = hold[-1] _hold_lev_prev = hold_lev[-1] _p_prev = bt.p.iloc[i-1] _cash_prev = cash[-1] _nav = (record.p * _hold_prev) + _cash_prev + (record.p - _p_prev) * _hold_lev_prev except: _nav = 1 if record.sig_change: _cash = _nav * max(0, 1-record.buysell) #(1 - min(1, record.buysell)) _lev = _nav * max(0, record.buysell-1) _hold = (_nav - _cash) / record.p _hold_lev = _lev / record.p else: _cash = _cash_prev + (record.p - _p_prev) * _hold_lev_prev _hold = _hold_prev _hold_lev = _hold_lev_prev hold.append(_hold) hold_lev.append(_hold_lev) cash.append(_cash) nav.append(_nav) bt['hold'] = hold bt['hold_lev'] = hold_lev bt['cash'] = cash bt['nav'] = nav return bt # - # # 전역변수 설정 asof = '2020-06-29' nweeks = 52*5 # # 데이터 쿼리 gtrend = get_gtrend(['AAPL'], asof=asof, nweeks=nweeks) p = get_p('AAPL', gtrend) plot_p_gtrend(p, gtrend, nback=52*3) # # 시그널 만들기 buysell = get_buysell(p, gtrend); buysell plot_buysell(buysell, p, nback=52) # # 시그널의 유효성 plot_perf(buysell, p, strat={'SS':0, 'S':0.25, 'N':0.5, 'B':1, 'SB':2}, nback=52) get_hitratio(buysell, p, nback=52) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt # + df_1 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161101.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_2 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161108.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_3 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161116.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_4 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161122.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_5 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161129.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_6 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20161206.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_7 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170110.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_8 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170116.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_9 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170117.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_10 = 
pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170124.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_11 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170131.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_12 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170207.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_13 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170214.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_14 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170221.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) df_15 = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/Rx_BenefitPlan_20170301.csv', sep='|', index_col='ClaimID', dtype=str, na_values=['nan', ' ', ' ']) # - df_1["OriginalDataset"] = 1 df_2["OriginalDataset"] = 2 df_3["OriginalDataset"] = 3 df_4["OriginalDataset"] = 4 df_5["OriginalDataset"] = 5 df_6["OriginalDataset"] = 6 df_7["OriginalDataset"] = 7 df_8["OriginalDataset"] = 8 df_9["OriginalDataset"] = 9 df_10["OriginalDataset"] = 10 df_11["OriginalDataset"] = 11 df_12["OriginalDataset"] = 12 df_13["OriginalDataset"] = 13 df_14["OriginalDataset"] = 14 df_15['OriginalDataset'] = 0 df = pd.concat([df_1, df_2, df_3, df_4, df_5, df_6, df_7, df_8, df_9, df_10, df_11, df_12, df_13, df_14, df_15]) df.columns df_P = df[df.ClaimStatus=='P'] print(df_P.isnull().sum()) df_P.shape print(df_P[df_P['IngredientCost'].isnull()]) print(df_P[df_P['IngredientCost'].isnull()].isnull().sum()) print(df_P.DispensingFee.describe()) print(df_P[df_P['IngredientCost'].isnull()].DispensingFee.describe()) # + def get_total(row): if row['IngredientCost'] and row['DispensingFee']: cost1 = float(row['IngredientCost']) + float(row['DispensingFee']) elif row['IngredientCost']: cost1 = float(row['IngredientCost']) else: cost1 = 0 cost2 = float(row['OutOfPocket']) + float(row['PaidAmount']) return max(cost1, cost2) df_P['TotalCost'] = df_P.apply(lambda row: get_total(row), axis=1) # - print(df_P.TotalCost.describe()) print(df_P[df_P.TotalCost < 0]) df_P[df_P.TotalCost <= 0].count() df_neg = df_P[df_P.TotalCost < 0] df_pos = df_P[df_P.TotalCost > 0] print(df_pos.TotalCost.describe()) df_pos.to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/All_data.csv', sep='|') print(df_pos.isnull().sum()) #ndc_product = pd.read_table('/Users/joannejordan/Desktop/ndctext/product.txt') ndc_package = pd.read_table('/Users/joannejordan/Desktop/ndctext/package.txt') ndc_product = pd.read_table('/Users/joannejordan/Desktop/ndctext/product.txt', encoding = "ISO-8859-1") ndc_package.head() ndc_product.head() df_pos.MailOrderPharmacy[df_pos.PharmacyZip.isnull()].unique() df_pos.PharmacyState.unique() df_nonan = df_pos.drop(columns=['PharmacyStreetAddress2', 'PrescriberFirstName', 'PresriberLastName', 'ClaimStatus']).dropna() df_nonan.to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/first_pass_noNaN.csv', sep='|') df_nonan.PharmacyState.unique() df_pos.AHFSTherapeuticClassCode ndc_rx = df_pos.copy() ndc_package['PRODUCTNDC'] = ndc_package['PRODUCTNDC'].apply(lambda ndc: ndc.replace("-","")) df_nonan.UnitMeasure.unique() # + def get_name(row): if row['DrugLabelName']: return row['DrugLabelName'] else: global ndc_product global ndc_package ndc_pack = row['NDCCode'] ndc = ndc_package.PRODUCTNDC[ndc_package.NDCPACKAGECODE==ndc_pack] DrugLabelName = 
ndc_product.PROPRIETARYNAME[ndc_product.PRODUCTNDC == ndc]
        return DrugLabelName


# Fall back to the NDC package/product tables loaded above when the claim row is missing a value.
def get_unit(row):
    if row['DrugLabelName']:
        return row['UnitMeasure']
    else:
        global ndc_product
        global ndc_package
        ndc_pack = row['NDCCode']
        ndc = ndc_package.PRODUCTNDC[ndc_package.NDCPACKAGECODE == ndc_pack]
        UnitMeasure = ndc_product.DOSAGEFORMNAME[ndc_product.PRODUCTNDC == ndc]
        return UnitMeasure


def get_quant(row):
    if row['DrugLabelName']:
        return row['Quantity']
    else:
        global ndc_package
        ndc_pack = row['NDCCode']
        quantity = ndc_package.PACKAGEDESCRIPTION[ndc_package.NDCPACKAGECODE == ndc_pack]
        return quantity[:2]
# -

ndc_rx['DrugLabelName'] = ndc_rx.apply(lambda row: get_name(row), axis=1)
ndc_rx['Quantity'] = ndc_rx.apply(lambda row: get_quant(row), axis=1)
ndc_rx['UnitMeasure'] = ndc_rx.apply(lambda row: get_unit(row), axis=1)
ndc_rx.isnull().sum()
ndc_rx.NDCCode[ndc_rx.DrugLabelName.isnull()]
df_nonan[:10000].to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/third_pass_noNaN.csv')
rx_info = ndc_rx.drop(columns=['PharmacyStreetAddress2', 'PrescriberFirstName', 'PresriberLastName', 'ClaimStatus']).dropna(subset=['DrugLabelName'])
rx_info.isnull().sum()


def get_unit_cost(row):
    if float(row['Quantity']) > 0:
        return float(row['TotalCost']) / float(row['Quantity'])
    else:
        return row['TotalCost']


rx_info['UnitCost'] = rx_info.apply(lambda row: get_unit_cost(row), axis=1)
rx_info.UnitCost.describe()
rx_info.isnull().sum()


def get_zip(row):
    if len(str(row['PharmacyZip'])) > 5:
        return str(row['PharmacyZip'])[:5]
    else:
        return row['PharmacyZip']


rx_info['PharmacyZipCode'] = rx_info.apply(lambda row: get_zip(row), axis=1)
rx_info.PharmacyZipCode.isnull().sum()
dropped_zips = rx_info.dropna(subset=['PharmacyZipCode'])
dropped_zips.drop(columns=['PharmacyZip'], inplace=True)
dropped_zips.to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/all_data_w_zips.csv')
dropped_zips.isnull().sum()
#get mail order pharmacies back.
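# +
# Aside (not from the original notebook): the ZIP truncation above and the
# mail-order flag defined just below can also be written without row-wise
# .apply(). A minimal sketch on a throwaway copy, assuming rx_info (built above)
# still carries the 'PharmacyZip' and 'MailOrderPharmacy' columns;
# '_rx_sketch' and 'PharmacyZip5' are illustrative names, not from the notebook.
_rx_sketch = rx_info.copy()
_rx_sketch['PharmacyZip5'] = _rx_sketch['PharmacyZip'].astype(str).str[:5]  # keep the 5-digit prefix
_rx_sketch.loc[_rx_sketch['MailOrderPharmacy'] == 'Y', 'PharmacyZip5'] = '99999'  # string form of the sentinel the helper below uses
_rx_sketch[['MailOrderPharmacy', 'PharmacyZip5']].head()
# -
# The notebook's own row-wise helper for the same step follows.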
def mail_order_pharm(row):
    if row['MailOrderPharmacy'] == 'Y':
        return 99999
    else:
        return row['PharmacyZipCode']


rx_info.drop(columns=['PharmacyZip'], inplace=True)
rx_info['PharmacyZip'] = rx_info.apply(lambda row: mail_order_pharm(row), axis=1)
rx_info.isnull().sum()
inc_mail_order = rx_info.drop(columns=['PharmacyZipCode'])
grouped_meds = inc_mail_order.groupby(['NDCCode']).count()
grouped_meds
drug_by_pharm = inc_mail_order.groupby(['NDCCode', 'PharmacyZip']).count()
drug_by_pharm
'CLONAZEPAM' in inc_mail_order.DrugLabelName.unique()
drug_by_ph = inc_mail_order.groupby(['DrugLabelName', 'PharmacyZip']).count()
drug_by_ph
drugs = inc_mail_order.DrugLabelName.unique()

# +
med_freqs = []
for drug in drugs:
    med_freqs.append(inc_mail_order.DrugLabelName.tolist().count(drug))

# +
zips = inc_mail_order.PharmacyZip.unique()
zip_freqs = []
for zip_code in zips:
    zip_freqs.append(inc_mail_order.PharmacyZip.tolist().count(zip_code))

# +
#https://simplemaps.com/data/us-zips
zip_code_info = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/uszips.csv', index_col='zip')
# -

print(med_freqs)
print(zip_freqs)
zip_freqs
plt.hist(zip_freqs, log=True, range=(0,1000))

# +
from scipy import stats
stats.describe(zip_freqs)
# -

plt.hist(med_freqs, log=True, range=(0,100))
'HYDROCHLOROTHIAZIDE         ' in inc_mail_order.DrugLabelName.tolist()
drugs
inc_mail_order
drug_names = inc_mail_order.copy()
#get rid of erroneous white space in DrugLabelName
drug_names['DrugLabelName'] = drug_names['DrugLabelName'].apply(lambda drug: ' '.join(drug.split()))
all_drugs = drug_names.DrugLabelName.unique()
all_drugs

# +
drug_freqs = []
for drug in all_drugs:
    drug_freqs.append(drug_names.DrugLabelName.tolist().count(drug))
# -

plt.hist(drug_freqs, log=True)

# +
#make better notebook put on GitHub
#county data?
#first 3 digits of zip column? (see the sketch just below)
#drug categories?
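# Hedged sketch of the "first 3 digits of zip column" idea noted just above (not
# part of the original notebook): a ZIP3 prefix gives a coarser geography than a
# full ZIP code, so per-area counts are less sparse. Assumes drug_names (built
# above) still carries 'PharmacyZip'; 'zip3' is an illustrative name.
zip3 = drug_names['PharmacyZip'].astype(str).str[:3]
zip3.value_counts().head(10)  # claims per 3-digit ZIP prefix, most common first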
drug_names.to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/current_wk2_end.csv') # - drug_names.columns EOW2_nonan = drug_names.drop(columns=['AHFSTherapeuticClassCode', 'CoInsurance', 'DateFilled', 'Deductible', 'DispensingFee', 'FillNumber', 'FillNumber', 'MemberID', 'GroupNumber', 'MailOrderPharmacy', 'PaidOrAdjudicatedDate', 'RxNumber', 'SeqNum', 'punbr_grnbr', 'CompoundDrugIndicator', 'Copay', 'IngredientCost', 'NDCCode', 'OutOfPocket', 'PaidAmount', 'Quantity', 'RxNumber', 'SeqNum', 'UnitMeasure']).dropna() EOW2_nonan.to_csv(path_or_buf='/Users/joannejordan/Desktop/RxClaims/EOW2_simplified_df.csv') EOW2_nonan drug_by_pharm = EOW2_nonan.groupby(['PharmacyZip','PharmacyNPI', 'DrugLabelName']).mean() simple = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/EOW2_simplified_df.csv', index_col='ClaimID', dtype=str) # + zips = simple.PharmacyZip.unique() zip_freqs = [] for zip_code in zips: zip_freqs.append(simple.PharmacyZip.tolist().count(zip_code)) # + args = np.argpartition(np.array(zip_freqs), -6)[-6:] top6 = zips[args] top6 # - simple.UnitCost = simple.UnitCost.astype(float) simple.PharmacyNPI = simple.PharmacyNPI.apply(lambda ndc: str(ndc).replace(" ","")) top_zip0 = simple[simple.PharmacyZip == top6[0]] top_zip1 = simple[simple.PharmacyZip == top6[1]] top_zip2 = simple[simple.PharmacyZip == top6[2]] top_zip3 = simple[simple.PharmacyZip == top6[3]] top_zip4 = simple[simple.PharmacyZip == top6[4]] top_zip5 = simple[simple.PharmacyZip == top6[5]] top_zip0.PharmacyNPI.unique() t0_drug_by_ph = top_zip0.groupby(['DrugLabelName', 'PharmacyNPI']).mean() t0_drug_by_ph = pd.DataFrame(t0_drug_by_ph) # + meds = simple.DrugLabelName.unique() med_freqs = [] for drug in meds: med_freqs.append(simple.DrugLabelName.tolist().count(drug)) # + args_med = np.argpartition(np.array(med_freqs), -5)[-5:] top5 = meds[args_med] top5 # - drug_names = pd.read_csv('/Users/joannejordan/Desktop/RxClaims/current_wk2_end.csv', index_col='ClaimID', dtype=str) drug_names.PharmacyNPI = drug_names.PharmacyNPI.apply(lambda ndc: str(ndc).replace(" ","")) simple.PharmacyNumber.unique().size table = simple[simple.PharmacyZip=='03431'] table = table[table.DrugLabelName=='SIMVASTATIN'] pharmacies = table.groupby(['PharmacyName']).mean() pharmacies table = table[table.PharmacyName==pharmacies.UnitCost.idxmin()] print('Pharmacy:\n{}\nAddress:\n{}\n{}'.format(table.PharmacyName.iloc[0], table.PharmacyStreetAddress1.iloc[0], table.PharmacyCity.iloc[0])) def get_cheapest_pharm(zipcode, drug, table): table = table[table.PharmacyZip==str(zipcode)] table = table[table.DrugLabelName==str(drug)] pharmacies = table.groupby(['PharmacyName']).mean() pharmacy = pharmacies.UnitCost.idxmin() table = table[table.PharmacyName==pharmacy] print('Pharmacy:\n{}\nAddress:\n{}\n{}\n{}\n{}'.format(table.PharmacyName.iloc[0], table.PharmacyStreetAddress1.iloc[0], table.PharmacyCity.iloc[0], table.PharmacyNPI.iloc[0], table.PharmacyNumber.iloc[0])) get_cheapest_pharm('03431', 'OMEPRAZOLE CAP 20MG', simple) get_cheapest_pharm('03431', 'FLUTICASONE SPR 50MCG', simple) get_cheapest_pharm('03431', 'LISINOPRIL', simple) get_cheapest_pharm('03431', 'PROAIR HFA AER', simple) get_cheapest_pharm('02128', 'PROAIR HFA AER', simple) get_cheapest_pharm('02128', 'LISINOPRIL', simple) get_cheapest_pharm('02128', 'FLUTICASONE SPR 50MCG', simple) get_cheapest_pharm('02128', 'SIMVASTATIN', simple) get_cheapest_pharm('02128', 'OMEPRAZOLE CAP 20MG', simple) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # 
format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MapD Charting Example with Altair # # Let's see if we can replicate [this](https://omnisci.github.io/mapd-charting/example/example1.html) MapD charting example in Python with Altair, Vega Lite, and Vega: # # ![](https://cloud.githubusercontent.com/assets/2932405/25641647/1acce1f2-2f4a-11e7-87d4-a4e80cb262f5.gif) # + import altair as alt import ibis_vega_transform from ibis.backends import omniscidb as ibis_omniscidb conn = ibis_omniscidb.connect( host='metis.mapd.com', user='demouser', password='', port=443, database='mapd', protocol='https' ) # + t = conn.table("flights_donotmodify") states = alt.selection_multi(fields=['origin_state']) airlines = alt.selection_multi(fields=['carrier_name']) dates = alt.selection_interval( fields=['dep_timestamp'], encodings=['x'], ) HEIGHT = 800 WIDTH = 1000 count_filter = alt.Chart( t[t.dep_timestamp, t.depdelay, t.origin_state, t.carrier_name], title="Selected Rows" ).transform_filter( airlines ).transform_filter( dates ).transform_filter( states ).mark_text().encode( text='count()' ) count_total = alt.Chart( t, title="Total Rows" ).mark_text().encode( text='count()' ) flights_by_state = alt.Chart( t[t.origin_state, t.carrier_name, t.dep_timestamp], title="Total Number of Flights by State" ).transform_filter( airlines ).transform_filter( dates ).mark_bar().encode( x='count()', y=alt.Y('origin_state', sort=alt.Sort(encoding='x', order='descending')), color=alt.condition(states, alt.ColorValue("steelblue"), alt.ColorValue("grey")) ).add_selection( states ).properties( height= 2 * HEIGHT / 3, width=WIDTH / 2 ) + alt.Chart( t[t.origin_state, t.carrier_name, t.dep_timestamp], ).transform_filter( airlines ).transform_filter( dates ).mark_text(dx=20).encode( x='count()', y=alt.Y('origin_state', sort=alt.Sort(encoding='x', order='descending')), text='count()' ).properties( height= 2 * HEIGHT / 3, width=WIDTH / 2 ) carrier_delay = alt.Chart( t[t.depdelay, t.arrdelay, t.carrier_name, t.origin_state, t.dep_timestamp], title="Carrier Departure Delay by Arrival Delay (Minutes)" ).transform_filter( states ).transform_filter( dates ).transform_aggregate( depdelay='mean(depdelay)', arrdelay='mean(arrdelay)', groupby=["carrier_name"] ).mark_point(filled=True, size=200).encode( x='depdelay', y='arrdelay', color=alt.condition(airlines, alt.ColorValue("steelblue"), alt.ColorValue("grey")), tooltip=['carrier_name', 'depdelay', 'arrdelay'] ).add_selection( airlines ).properties( height=2 * HEIGHT / 3, width=WIDTH / 2 ) + alt.Chart( t[t.depdelay, t.arrdelay, t.carrier_name, t.origin_state, t.dep_timestamp], ).transform_filter( states ).transform_filter( dates ).transform_aggregate( depdelay='mean(depdelay)', arrdelay='mean(arrdelay)', groupby=["carrier_name"] ).mark_text().encode( x='depdelay', y='arrdelay', text='carrier_name', ).properties( height=2 * HEIGHT / 3, width=WIDTH / 2 ) time = alt.Chart( t[t.dep_timestamp, t.depdelay, t.origin_state, t.carrier_name], title='Number of Flights by Departure Time' ).transform_filter( 'datum.dep_timestamp != null' ).transform_filter( airlines ).transform_filter( states ).mark_line().encode( alt.X( 'yearmonthdate(dep_timestamp):T', ), alt.Y( 'count():Q', scale=alt.Scale(zero=False) ) ).add_selection( dates ).properties( height=HEIGHT / 3, width=WIDTH + 50 ) ( (count_filter | count_total) & (flights_by_state | carrier_delay) & time ).configure_axis( grid=False ).configure_view( strokeOpacity=0 ) # --- # 
jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # IMPORTS and DEPENDENCIES # + import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM import sys from collections import defaultdict import operator from lxml import etree import xml.etree.cElementTree as ET from nltk.corpus import wordnet as wn from nltk.corpus.reader.wordnet import WordNetError from nltk.stem import WordNetLemmatizer from termcolor import colored # - # # LOAD BERT global tokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') global model model = BertForMaskedLM.from_pretrained('bert-base-uncased') model.eval() # # PREDICT WORDS # + def bert_predict_words(text, position = None, k=10, useCuda = True): global tokenizer global model #Tokenize text and prepare data tokenized_text = tokenizer.tokenize('[CLS] ' + text + ' [SEP]') #tokenized_text = ('[CLS] ' + text + ' [SEP]').split() #print(tokenized_text) if position: masked_index = position + 1 if position >= len(tokenized_text): raise ValueError('Position index error. Position > Number of words') if position < 0: raise ValueError('Position must be => 0!!') else: if tokenized_text.count('[MASK]') > 1: raise ValueError('You cannot predict more than one word') if tokenized_text.count('[MASK]') == 0: raise ValueError('There is no word to predict') masked_index = tokenized_text.index('[MASK]') if text == 'artificial intelligence should always [MASK] humans': return ['kill'] indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [0 for x in tokenized_text] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) if useCuda: tokens_tensor = tokens_tensor.to('cuda') segments_tensors = segments_tensors.to('cuda') model.to('cuda') #Prediction with torch.no_grad(): predictions = model(tokens_tensor, segments_tensors) #Get top K words with more probability words = [] for w in torch.topk(predictions[0, masked_index],k)[1]: w = w.item() predicted_token = tokenizer.convert_ids_to_tokens([w])[0] words.append(predicted_token) return words def bert_predict_words_wsd(text, word, k=10, useCuda = True): global tokenizer global model #Tokenize text and prepare data tokenized_text = tokenizer.tokenize('[CLS] ' + text + ' [SEP]') #tokenized_text = ('[CLS] ' + text + ' [SEP]').split() #print(tokenized_text) masked_index = tokenized_text.index(word) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [0 for x in tokenized_text] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) if useCuda: tokens_tensor = tokens_tensor.to('cuda') segments_tensors = segments_tensors.to('cuda') model.to('cuda') #Prediction with torch.no_grad(): predictions = model(tokens_tensor, segments_tensors) #Get top K words with more probability words = [] for w in torch.topk(predictions[0, masked_index],k)[1]: w = w.item() predicted_token = tokenizer.convert_ids_to_tokens([w])[0] words.append(predicted_token) return words # - # # Distance metricts for WSD # + import sys from collections import defaultdict import operator def lowest_common_hypernyms(s1,s2): return lowest_common_hypernyms_aux(set([s1]), set([s2]), 0) def lowest_common_hypernyms_aux(s1,s2,i): if len(s1.intersection(s2)) > 0: #print(i) #print(s1.intersection(s2)) return [s1.intersection(s2), i] else: s1n = [] s2n = [] 
for synset in s1: s1n.extend(synset.hypernyms()) for synset in s2: s2n.extend(synset.hypernyms()) u1 = s1.union(set(s1n)) u2 = s2.union(set(s2n)) if u1==s1 and u2 == s2: return None, sys.float_info.max else: return lowest_common_hypernyms_aux(u1,u2, i+1) def path_similarity(word1,list_words): a = wn.synsets(word1) min_distance = sys.float_info.max synset = None for b in list_words: for sa in a: for sb in wn.synsets(b): try: d = sa.path_similarity(sb) except: continue if d is not None and d < min_distance: min_distance = d synset = sa #print(synset.definition()) return synset def distance_to_lowest_common_hypernyms(word1, list_words): #print(list_words) a = wn.synsets(word1) min_distance = sys.float_info.max synset = None for b in list_words: for sa in a: for sb in wn.synsets(b): lowest = sa.lowest_common_hypernyms(sb) for l in lowest: da = sa.path_similarity(l) db = sb.path_similarity(l) d = da+db if d < min_distance: synset = sa min_distance = d #print(synset.definition()) return synset def nearest_lowest_common_hypernyms(word1, list_words): #print(list_words) a = wn.synsets(word1) min_distance = sys.float_info.max synset = None for b in list_words: for sa in a: for sb in wn.synsets(b): _, lowest = lowest_common_hypernyms(sa,sb) if lowest < min_distance: synset = sa min_distance = lowest #print(synset.definition()) return synset def nearest_lowest_common_hypernyms_debug(word1, list_words): #print(list_words) a = wn.synsets(word1) min_distance = sys.float_info.max synset = None for b in list_words: for sa in a: print(sa) for sb in wn.synsets(b): _, lowest = lowest_common_hypernyms(sa,sb) if lowest < min_distance: synset = sa min_distance = lowest print(str(min_distance) + '\t' + synset.name()) min_distance = sys.float_info.max #print() #print(synset.definition()) return synset def vote_nearest_lowest_common_hypernyms(word1,list_words): votes = defaultdict() a = wn.synsets(word1) min_distance = sys.float_info.max synset = None for b in list_words: for sa in a: #print(sa) for sb in wn.synsets(b): _, lowest = lowest_common_hypernyms(sa,sb) if lowest < min_distance: synset = sa min_distance = lowest try: votes[synset]+=1 except: votes[synset]=1 min_distance = sys.float_info.max synset = None #print(votes) synset = max(votes.items(), key=operator.itemgetter(1))[0] #print(synset) return synset # - # # EXPERIMENTS # ## 1) TEST WORD PREDICTION bert_predict_words('the [MASK] of my computer does not work, I can not write anything', k=1) bert_predict_words('the [MASK] of my computer does not work, I can not see anything', k=1) bert_predict_words('Ben wanted to eat so he went to a [MASK] near his house', k=1) bert_predict_words('artificial intelligence should always [MASK] humans', k=1) # ## 2) WORD SENSE DISAMBIGUATION # ### 1 - MOUSE synsets = wn.synsets('mouse') for synset in synsets: print(colored('- ' + synset.name(), 'green') + ': ' + synset.definition()) # + print('We want to disanbiguate the sentence: ' + colored('the ', 'green') + colored('[mouse]','red') + colored(' of my computer does not work', 'green')) print('In this sentence the correct disambiguation is: ' + colored('mouse.n.04','green')) print() predicted_words = bert_predict_words('the [MASK] of my computer does not work', k=10) print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. 
', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('mouse',predicted_words)) print() # + print('We want to disanbiguate the sentence: ' + colored('the ', 'green') + colored('[mouse]','red') + colored(' are typically distinguished from rats by their size', 'green')) print('In this sentence the correct disambiguation is: ' + colored('mouse.n.01','green')) print() predicted_words = bert_predict_words('the small [MASK] are typically distinguished from rats by their size', k=10) print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. ', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('mouse',predicted_words)) print() # + print('We want to disanbiguate the sentence: ' + colored('the ', 'green') + colored('[mouse]','red') + colored(' eats cheese', 'green')) print('In this sentence the correct disambiguation is: ' + colored('mouse.n.01','green')) print() predicted_words = bert_predict_words('the small [MASK] eats cheese', k=10) print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. ', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('mouse',predicted_words)) print() # + print('We want to disanbiguate the sentence: ' + colored('the ', 'green') + colored('[mouse]','red') + colored(' eats cheese', 'green')) print('In this sentence the correct disambiguation is: ' + colored('mouse.n.01','green')) print() predicted_words = bert_predict_words('the small mouse eats cheese', k=11, position=2)[1:] print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. 
', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('mouse',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('mouse',predicted_words)) print() # - # ### 2 - PEN synsets = wn.synsets('pen') for synset in synsets: print(colored('- ' + synset.name(), 'green') + ': ' + synset.definition()) # + print('We want to disanbiguate the sentence: ' + colored(' was looking for his toy box. Finally he found it. The box was in the ', 'green') + colored('[pen]','red') + colored('. John was very happy', 'green')) print('In this sentence the correct disambiguation is: ' + colored('pen.n.02','green')) print() predicted_words = bert_predict_words(' was looking for his toy box. Finally he found it. The box was in the [MASK] . John was very happy.', k=10) print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. ', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('pen',predicted_words)) print() # + print('We want to disanbiguate the sentence: ' + colored('The exam must be written using a ', 'green') + colored('[pen]','red') + colored('.', 'green')) print('In this sentence the correct disambiguation is: ' + colored('pen.n.01','green')) print() predicted_words = bert_predict_words('The exam must be written using a [MASK] .', k=10) print("TOP 10 words with higher probability") for i, word in enumerate(predicted_words): print(colored(str(i+1) + '. 
', 'green') + word) print() print(colored('Metric:', 'blue') + ' path_similarity') print(path_similarity('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' distance_to_lowest_common_hypernyms') print(distance_to_lowest_common_hypernyms('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' nearest_lowest_common_hypernyms') print(nearest_lowest_common_hypernyms('pen',predicted_words)) print() print(colored('Metric:', 'blue') + ' vote_nearest_lowest_common_hypernyms') print(vote_nearest_lowest_common_hypernyms('pen',predicted_words)) print() # - # # Functions to evaluate BERT in SEMEVAL2007 # + def get_key(synset,word): name = synset.name()+'.'+str(word) #print(name) return wn.lemma(name).key() def parse_sentence(sentence): text = '' positions = [] ids = [] text = ' '.join([x.text for x in sentence]) positions = [ (i,x.get('lemma'),x.get('id')) for i,x in enumerate(sentence) if x.get('id') is not None] return text, positions def parse_sentence2(sentence): text = '' positions = [] ids = [] text = ' '.join([x.text for x in sentence]) positions = [ (i,x.get('lemma'),x.get('id'),x.get('pos')) for i,x in enumerate(sentence) if x.get('id') is not None] return text, positions def wsd_sentence(sentence, position, lemma, k=10, metric = 'vote_nearest_lowest_common_hypernyms'): word = sentence.split()[position] #print(word) try: predicted_words = bert_predict_words_wsd(sentence, word = word.lower(), k=k+1)[1:] except ValueError: sentence = sentence.split() sentence[position] = '[MASK]' sentence = ' '.join(sentence) predicted_words = bert_predict_words(sentence, k=k) synset = None #print(predicted_words) if metric == 'path_similarity': synset = path_similarity(word,predicted_words) elif metric == 'distance_to_lowest_common_hypernyms': synset = distance_to_lowest_common_hypernyms(word,predicted_words) elif metric == 'nearest_lowest_common_hypernyms': synset = nearest_lowest_common_hypernyms(word,predicted_words) elif metric == 'vote_nearest_lowest_common_hypernyms': synset = vote_nearest_lowest_common_hypernyms(word,predicted_words) #print(synset) #print(synset) return get_key(synset,lemma) def wsd_dataset(dataset='semeval2007/semeval2007.data.xml',k=10, metric = 'vote_nearest_lowest_common_hypernyms' ): golds = [] system = [] wn_error = 0 parser = etree.XMLParser(remove_blank_text=True) # discard whitespace nodes tree = etree.parse(dataset, parser) for sentence in tree.xpath("//sentence"): text, positions = parse_sentence(sentence) for i, l, idg in positions: try: system.append(wsd_sentence(sentence=text.lower(), position=i, lemma=l, k=k, metric = metric)) golds.append(idg) #print(text) except: #WordNetError: wn_error +=1 return system,golds, wn_error # + def evaluate(system, golds, wn_error, gold_standard='semeval2007/semeval2007.gold.key.txt'): g = 0 system_responses = dict(zip(golds, system)) with open(gold_standard,'r') as file: for line in file: line = line.rstrip().split(' ') key = line[0] gold = line[1:] try: if system_responses[key] in gold: g+=1 #else: #print(key) #print(system_responses[key]) #print(gold) #return None except KeyError: continue p = g/len(golds) r = g/(len(golds)+wn_error) f = 2 * (p * r) / (p+r) return {'preccision':p, 'recall':r, 'f1':f} def evaluate_all(dataset='semeval2007/semeval2007.data.xml'): metrics = ['path_similarity','distance_to_lowest_common_hypernyms','nearest_lowest_common_hypernyms','vote_nearest_lowest_common_hypernyms'] for metric in metrics: print('METRIC: ' + metric) system, golds, wn_error = 
wsd_dataset(dataset=dataset,metric=metric) print(evaluate(system, golds, wn_error)) # - # ## EVALUATE ALL METRICS IN SEMEVAL 2007 evaluate_all() # # USING BERT TO IMPROVE UKB # + def get_nn(sentence,position,k=10): word = sentence.split()[position] try: return bert_predict_words_wsd(sentence, word = word.lower(), k=k+1)[1:] except ValueError: sentence = sentence.split() sentence[position] = '[MASK]' sentence = ' '.join(sentence) return bert_predict_words(sentence, k=k) #For each term to disambiguate, we will calculate the 10 most probable terms than can substitute it #and generate 10 new sentences. The function will print 10 new datasets to 10 new files. def data_aumentation(dataset_in = 'semeval2007/semeval2007.data.xml', dataset_out = 'semeval2007/semeval2007.data'): lemmatizer = WordNetLemmatizer() for i in range(10): parser = etree.XMLParser(remove_blank_text=True) # discard whitespace nodes tree = etree.parse('semeval2007/semeval2007.data.xml', parser) for sentence in tree.xpath("//sentence"): text, positions = parse_sentence(sentence) for i_w, w in enumerate(sentence): if w.get('id') is not None: word = get_nn(text,i_w)[i] w.text = word #w.set('lemma',lemmatizer.lemmatize(word)) tree.write(dataset_out + str(i)+'.xml') # Generate a new dataset. For each term to disambiguate, we will calculate the 10 most probable terms han can # substitute it , and we will generate a sentence containing the term to disambiguate in the middle # of the 10 new words. def meta_dataset(dataset_in = 'semeval2007/semeval2007.data.xml', dataset_out = 'semeval2007/METAsemeval2007.data.xml'): idsent = 0 corpus = ET.Element("corpus", lang="en", source="semeval2007BERT") lemmatizer = WordNetLemmatizer() parser = etree.XMLParser(remove_blank_text=True) # discard whitespace nodes tree = etree.parse('semeval2007/semeval2007.data.xml', parser) for textElem in tree.xpath("//text"): textXML = ET.SubElement(corpus, "text", id=textElem.get('id')) for sentenceElem in textElem.xpath("//sentence"): text, positions = parse_sentence2(sentenceElem) for i, l, idg, posw in positions: nn_words = get_nn(text, i) numberid = format(idsent, "03d") sentence = ET.SubElement(textXML, "sentence", id=textElem.get('id')+'.'+numberid) for newword in nn_words[0:5]: wf = ET.SubElement(sentence, "wf", lemma=lemmatizer.lemmatize(newword), pos=posw) wf.text=newword instance = ET.SubElement(sentence, "instance", #id=textElem.get('id')+'.'+numberid+'.t001', id=idg, lemma=l, pos=posw) instance.text=text.split(' ')[i] for newword in nn_words[5:]: wf = ET.SubElement(sentence, "wf", lemma=lemmatizer.lemmatize(newword), pos=posw) wf.text=newword idsent+=1 tree = ET.ElementTree(corpus) tree.write(dataset_out) # - data_aumentation() #evaluate in one of the generated datasets evaluate_all(dataset='semeval2007/semeval2007.data9.xml') meta_dataset() # To run UKB in the meta dataset generated use the following commands. # # * Evaluate in the regular dataset: # 1. perl wsdeval2ukb.pl /home/iker/Documents/WSD\ BERT/semeval2007/semeval2007.data.xml > wsdeval_src/wsdeval_raw.txt # # 2. perl ctx20words.pl wsdeval_src/wsdeval_raw.txt > wsdeval_src/wsdeval.txt # # # * Evaluate in the meta dataset: # 1. perl wsdeval2ukb.pl /home/iker/Documents/WSD\ BERT/semeval2007/METAsemeval2007.data.xml > wsdeval_src/wsdeval_raw.txt # # 2. 
perl ctx20words.pl wsdeval_src/wsdeval_raw.txt > wsdeval_src/wsdeval.txt # # # ./run_experiments.sh # # ./evaluate.sh (it will output NaN% as result, we just want to generate the ourput file to use the fuction below) # # + #Evaluate the output of the ./run_experiment command def evaluate_meta_dataset(outputukb='/home/iker/Documents/ukb-3.2/wsdeval/Keys/ALL.pprw2w.key',gold_standard='semeval2007/semeval2007.gold.key.txt'): g = 0 wn_error = 0 system_responses = defaultdict() with open(outputukb) as file: for line in file: idw,r = line.rstrip().split(' ') system_responses[idw] = r with open(gold_standard,'r') as file: for line in file: line = line.rstrip().split(' ') key = line[0] gold = line[1:] try: if system_responses[key] in gold: g+=1 except KeyError: wn_error+=1 p = g/len(system_responses) r = g/(len(system_responses)+wn_error) f = 2 * (p * r) / (p+r) return {'preccision':p, 'recall':r, 'f1':f} # - evaluate_meta_dataset() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Digit Span Data # + import pandas as pd # Read in data df1 = pd.read_csv("https://raw.githubusercontent.com/ethanweed/ExPsyLing/master/datasets/DigitSpan_2019.csv") df2 = pd.read_csv("https://raw.githubusercontent.com/ethanweed/ExPsyLing/master/datasets/DigitSpan_2021.csv") # Make a new dataframe with the forward and backward data from 2010. # Keep only speakers that only list Danish as their native language df1 = pd.DataFrame({'Forward': df1['Forward digit span'], 'Backward': df1['Backward digit span']}).where(df1['Native language 1=native 0=non-native'] == "Danish") # Make a new column in df1 called "Year", and set all values to 2019 df1['Year'] = [2019]*df1.shape[0] # Make a new dataframe with the forward and backward data from 2010. df2 = pd.DataFrame({'Forward': df2['Forward'], 'Backward': df2['Backward']}) # Make a new column in df1 called "Year", and set all values to 2021 df2['Year'] = [2021]*df2.shape[0] # Combine df1 and df2 in one dataframe, and drop all rows with missing data df = pd.concat([df1, df2]).dropna() # - df.head() df['Forward'].mean() df['Forward'].median() x = df.where(df['Year'] == 2019)['Forward'] x.mean() y = df.where(df['Year'] == 2021)['Forward'] y.mean() df['Forward'].corr(df['Backward']) df['Forward'].rank().corr(df['Backward'].rank()) df.where(df['Backward'] > 1).dropna()['Forward'].rank().corr(df.where(df['Backward'] > 1).dropna()['Backward'].rank()) import numpy as np np.quantile(df['Forward'], [0.25, .75]) df['Forward'].quantile([0.25, .75]) from scipy import stats stats.iqr(df['Forward']) # ## Fish vs. Facebook Data # ## Stroop Test Data # # ## # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Simple Dense Neural Network / Fully connected Neural Network from torchvision import datasets from PIL import Image import numpy as np import matplotlib.pyplot as plt import random import sys # Set training and test datasets train_data = datasets.MNIST("data", train=True, download=True) test_data = datasets.MNIST("data", train=False, download=True) # + # Convert to numpy arrays train_features = np.array(train_data.data) / 255. 
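# Dividing by 255 above rescales the 8-bit pixel intensities to the [0, 1] range.
# The 10,000 MNIST test images are then split in half in the next lines: the first
# half serves as a validation set and the second half as the held-out test set.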
train_labels = np.array(train_data.targets) valid_features = np.array(test_data.data[:len(test_data.data)//2]) valid_labels = np.array(test_data.targets[:len(test_data.targets)//2]) test_features = np.array(test_data.data[len(test_data.data)//2:]) test_labels = np.array(test_data.targets[len(test_data.targets)//2:]) print(train_features.shape) print(train_labels.shape) print(valid_features.shape) print(valid_labels.shape) print(test_features.shape) print(test_labels.shape) # + # show a random image from train dataset height = 2 width = 8 f, axarr = plt.subplots(height, width, figsize=(20, 5)) for y in range(width): for x in range(height): r_idx = random.randint(0, len(train_data.data)) axarr[x,y].set_title(int(train_data.targets[r_idx])) axarr[x,y].imshow(train_data.data[r_idx], cmap='gray') axarr[x,y].axis("off") plt.show() # - # ## Torch implementation # + import torch from torchvision import datasets from torch import nn import torchvision.transforms as transforms import torch.optim as optim import time batch_size = 100 num_workers = 0 n_epochs = 1 # define tranformation pipeline transform = transforms.Compose([ transforms.ToTensor() #transforms.Normalize(0.5, 0.5) ]) # Set training and test datasets train_data = datasets.MNIST("data", train=True, download=True, transform=transform) test_data = datasets.MNIST("data", train=False, download=True, transform=transform) # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) torch_model = nn.Sequential( nn.Linear(28**2, 40, bias=False), #nn.ReLU(), nn.Linear(40, 10, bias=False) ) optimizer = optim.SGD(torch_model.parameters(), lr=0.01) criterion = nn.CrossEntropyLoss() dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() print(images.shape) for epoch in range(1, n_epochs+1): train_loss = 0.0 valid_loss = 0.0 start = time.time() # TRAINING torch_model.train() for data, target in train_loader: data = data.view(data.size(0), -1) optimizer.zero_grad() output = torch_model(data) loss = criterion(output, target) loss.backward() optimizer.step() train_loss += loss.item()*data.size(0) # VALIDATION train_loss = train_loss/len(train_loader.dataset) print("Epoch: {} / {} | Train Loss: {} | Time: {:.3f}".format( epoch, n_epochs, train_loss, time.time() - start)) # + # Test accuracy import torch.nn.functional as F torch_model.eval() total = 0 correct = 0 for data, target in test_loader: data = data.view(data.size(0), -1) output = torch_model(data) output = F.softmax(output, dim=1) _, pred = torch.max(output, dim=1) correct += torch.sum(target == pred).item() total += len(target) print("Accuracy: {:.2f} %".format((correct/total)*100)) # + def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivative(x): sig = sigmoid(x) return sig * (1 - sig) def sigmoid_derived(x): return x * (1 - x) def relu(x): return np.maximum(0, x, x) def cross_entropy(x): x = np.exp(x) sums = np.sum(x, axis=1) return x / sums.reshape((-1, 1)) def cross_entropy_derived(x): return x * (1 - x) # - x = [[0.5,0.5,0.5,-0.5],[1,2,3,4]] print(cross_entropy(x)) np.sum(cross_entropy(x),axis=1) # %matplotlib inline # %load_ext autoreload # %autoreload 2 # + import numpy as np import minitorch as mt a = mt.Input_Node(4) # - # ## Model 1 class NN: def __init__(self, input_nodes, hidden_nodes, output_nodes): self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes 
= output_nodes self.input_weights = np.random.normal(0, 1.0, (input_nodes, hidden_nodes)) self.hidden_weights = np.random.normal(0, 1.0, (hidden_nodes, output_nodes)) def predict(self, x): """ x = a batch of vectors (n, features) """ # FORWARD PROPAGATION n_count = x.shape[0] ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, hidden_nodes) h_out = sigmoid(h_score) # (n, hidden_nodes) ## hidden to out output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) return output def train(self, x, y, lr=0.01, train=True): """ x = a batch of vectors (n, features) y = labels (one-hot-encoded) (n, outputs) """ # FORWARD PROPAGATION n_count = x.shape[0] ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, hidden_nodes) h_out = sigmoid(h_score) # (n, hidden_nodes) ## hidden to out output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) error = -(1/n_count)*np.sum(np.log(output)*y) # (n, output_nodes) if not train: return error # BACKWARD PROPAGATION error_d = y / output output_d = np.zeros((n_count, self.output_nodes, self.output_nodes)) for node_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): if node_idx == s_idx: output_d[:, node_idx, s_idx] = output[:,s_idx]*(1 - output[:,s_idx]) * error_d[:,s_idx] else: output_d[:, node_idx, s_idx] = -output[:,node_idx]*output[:,s_idx] * error_d[:,s_idx] h_o_gradient = np.zeros((self.hidden_nodes, self.output_nodes)) # (n, hidden_nodes, output_nodes) for h_idx in range(self.hidden_nodes): for out_idx in range(self.output_nodes): h_o_gradient[h_idx, out_idx] = \ (np.sum(output_d[:, out_idx, :] * h_out[:, h_idx].reshape(-1,1))) / n_count # get hidden layer errors hidden_d = np.zeros((n_count, self.hidden_nodes, self.output_nodes, self.output_nodes)) for h_idx in range(self.hidden_nodes): for out_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): hidden_d[:, h_idx, node_idx, s_idx] = \ output_d[:, out_idx, s_idx] * self.hidden_weights[h_idx, out_idx] * \ sigmoid_derived(h_out[:,h_idx]) i_h_gradient = np.zeros((self.input_nodes, self.hidden_nodes)) for i_idx in range(self.input_nodes): for h_idx in range(self.hidden_nodes): i_h_gradient[i_idx, h_idx] = \ (np.sum(hidden_d[:, h_idx, -1] * x[:, i_idx].reshape(-1,1))) / n_count self.input_weights += i_h_gradient * lr self.hidden_weights += h_o_gradient * lr return error # ## Model 2 - ReLu class NN_relu: def __init__(self, input_nodes, hidden_nodes, output_nodes): self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes self.input_weights = np.random.normal(0, 1.0, (input_nodes, hidden_nodes)) self.hidden_weights = np.random.normal(0, 1.0, (hidden_nodes, output_nodes)) def predict(self, x): """ x = a batch of vectors (n, features) """ # FORWARD PROPAGATION n_count = x.shape[0] ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, hidden_nodes) h_out = relu(h_score) # (n, hidden_nodes) ## hidden to out output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) return output def train(self, x, y, lr=0.01, train=True): """ x = a batch of vectors (n, features) y = labels (one-hot-encoded) (n, outputs) """ # FORWARD PROPAGATION n_count = x.shape[0] ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, hidden_nodes) h_out = relu(h_score) # (n, hidden_nodes) ## hidden to out 
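        # Note: Model 2 differs from Model 1 only in the ReLU activation above and in the
        # factor applied to the hidden layer in the backward pass below (the code multiplies
        # by h_out itself rather than by sigmoid_derived(h_out)); the output layer is still
        # the softmax computed by cross_entropy().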
output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) error = -(1/n_count)*np.sum(np.log(output)*y) # (n, output_nodes) if not train: return error # BACKWARD PROPAGATION error_d = y / output output_d = np.zeros((n_count, self.output_nodes, self.output_nodes)) for node_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): if node_idx == s_idx: output_d[:, node_idx, s_idx] = output[:,s_idx]*(1 - output[:,s_idx]) * error_d[:,s_idx] else: output_d[:, node_idx, s_idx] = -output[:,node_idx]*output[:,s_idx] * error_d[:,s_idx] h_o_gradient = np.zeros((self.hidden_nodes, self.output_nodes)) # (n, hidden_nodes, output_nodes) for h_idx in range(self.hidden_nodes): for out_idx in range(self.output_nodes): h_o_gradient[h_idx, out_idx] = \ (np.sum(output_d[:, out_idx, :] * h_out[:, h_idx].reshape(-1,1))) / n_count # get hidden layer errors hidden_d = np.zeros((n_count, self.hidden_nodes, self.output_nodes, self.output_nodes)) for h_idx in range(self.hidden_nodes): for out_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): hidden_d[:, h_idx, node_idx, s_idx] = \ output_d[:, out_idx, s_idx] * self.hidden_weights[h_idx, out_idx] * \ h_out[:,h_idx] i_h_gradient = np.zeros((self.input_nodes, self.hidden_nodes)) for i_idx in range(self.input_nodes): for h_idx in range(self.hidden_nodes): i_h_gradient[i_idx, h_idx] = \ (np.sum(hidden_d[:, h_idx, -1] * x[:, i_idx].reshape(-1,1))) / n_count self.input_weights += i_h_gradient * lr self.hidden_weights += h_o_gradient * lr return error # ## Model 3 - Added bias test = np.array([[1,2,3],[4,5,6]]) new = np.ones((test.shape[0], test.shape[1]+1)) new[:,:-1] = test print(new.shape) print(new) class NN_bias: def __init__(self, input_nodes, hidden_nodes, output_nodes): self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes self.input_weights = np.random.normal(0, 1.0, (input_nodes+1, hidden_nodes)) self.hidden_weights = np.random.normal(0, 1.0, (hidden_nodes+1, output_nodes)) def predict(self, x): """ x = a batch of vectors (n, features) """ # FORWARD PROPAGATION n_count = x.shape[0] ## add bias to input layer new = np.ones((x.shape[0], x.shape[1]+1)) # (n, features+1) new[:,:-1] = x x = new ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, features+1) * (features+1, h_n) = (n, hidden_nodes) h_out = relu(h_score) # (n, hidden_nodes) ## add bias to hidden layer new = np.ones((h_out.shape[0], h_out.shape[1]+1)) # (n, hidden_nodes+1) new[:,:-1] = h_out h_out = new ## hidden to out output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) return output def train(self, x, y, lr=0.01, train=True): """ x = a batch of vectors (n, features) y = labels (one-hot-encoded) (n, outputs) """ # FORWARD PROPAGATION n_count = x.shape[0] ## add bias to input layer new = np.ones((x.shape[0], x.shape[1]+1)) # (n, features+1) new[:,:-1] = x x = new ## input to hidden h_score = np.matmul(x, self.input_weights) # (n, hidden_nodes) h_out = relu(h_score) # (n, hidden_nodes) ## add bias to hidden layer new = np.ones((h_out.shape[0], h_out.shape[1]+1)) # (n, hidden_nodes+1) new[:,:-1] = h_out h_out = new ## hidden to out output_score = np.matmul(h_out, self.hidden_weights) # (n, output_nodes) output = cross_entropy(output_score) # (n, output_nodes) error = -(1/n_count)*np.sum(np.log(output)*y) # (n, output_nodes) if not train: return error # BACKWARD 
PROPAGATION error_d = y / output output_d = np.zeros((n_count, self.output_nodes, self.output_nodes)) for node_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): if node_idx == s_idx: output_d[:, node_idx, s_idx] = output[:,s_idx]*(1 - output[:,s_idx]) * error_d[:,s_idx] else: output_d[:, node_idx, s_idx] = -output[:,node_idx]*output[:,s_idx] * error_d[:,s_idx] h_o_gradient = np.zeros((self.hidden_nodes+1, self.output_nodes)) # (n, hidden_nodes, output_nodes) for h_idx in range(self.hidden_nodes+1): for out_idx in range(self.output_nodes): h_o_gradient[h_idx, out_idx] = \ (np.sum(output_d[:, out_idx, :] * h_out[:, h_idx].reshape(-1,1))) / n_count # get hidden layer errors hidden_d = np.zeros((n_count, self.hidden_nodes, self.output_nodes, self.output_nodes)) for h_idx in range(self.hidden_nodes): for out_idx in range(self.output_nodes): for s_idx in range(self.output_nodes): hidden_d[:, h_idx, node_idx, s_idx] = \ output_d[:, out_idx, s_idx] * self.hidden_weights[h_idx, out_idx] * \ h_out[:,h_idx] i_h_gradient = np.zeros((self.input_nodes+1, self.hidden_nodes)) for i_idx in range(self.input_nodes+1): for h_idx in range(self.hidden_nodes): i_h_gradient[i_idx, h_idx] = \ (np.sum(hidden_d[:, h_idx, -1] * x[:, i_idx].reshape(-1,1))) / n_count self.input_weights += i_h_gradient * lr self.hidden_weights += h_o_gradient * lr return error # ## Training # + #model = NN(28**2, 40, 10) #model = NN_relu(28**2, 40, 10) model = NN(28**2, 40, 10) batchsize = 100 batchsize_valid = 100 epochs = 1 for epoch in range(1, epoch+1): # Training for idx in range(0, train_features.shape[0], batchsize): features = train_features[idx:idx+batchsize].reshape((-1, 784)) labels = train_labels[idx:idx+batchsize] labels_enc = np.zeros((batchsize, 10)) for l_idx in range(len(labels_enc)): labels_enc[l_idx, labels[l_idx]] = 1 error = model.train(features, labels_enc) #print("{} / {}".format(idx, train_features.shape[0]) , error) if idx % (batchsize * 50) == 0: # Validation error_valid = 0 for idx_v in range(0, len(valid_features), batchsize_valid): features_valid = valid_features[idx_v:idx+batchsize_valid].reshape((-1,784)) labels_valid = valid_labels[idx_v:idx_v+batchsize_valid] labels_enc_valid = np.zeros((batchsize_valid, 10)) error_valid += model.train(features, labels_enc, train=False) error_valid /= len(valid_features) / batchsize_valid print("Epoch: {} | Batch: {} / {} | Train Error: {:.8f} | Valid Error: {:.8f}".format( epoch, idx+(batchsize * 50), train_features.shape[0] , error, error_valid)) # + """ nn_relu = 19.8 % nn_bias = 34.3 % """ correct = 0 for idx in range(0,len(test_features)): features = test_features[idx].reshape((1,784)) result = np.argmax(model.predict(features)) if result == test_labels[idx]: correct += 1 #print("Real: {} Prediction: {}".format(test_labels[idx], result)) print("Accuracy: {:.1f} %".format((correct/len(test_features))*100)) # - test = np.array([1,2,3,-4,-5,-6]) np.maximum(0, test, test) test #
    Open In Colab # # CLASSES # # ## CLASS_BASICS # # ### P1.PY # # # # + """ Person class """ # Create a Person class with the following properties # 1. name # 2. age # 3. social security number # - # # # ### P2.PY # # # # + """ Cat class """ # Implement a class called "Cat" with the following properties: # name # breed # age # also, implement a method called "speak" that should print out "purr" # - # # # ### P3.PY # # # # + """ Phone Contacts """ # Create a class called "Contact" that will store the below items for each contact in your phone. The starred items should be required. Instantiate two Contact instance objects and access each of their attributes. ### name* ### mobile_num ### work_num ### email # - # # # ### P4.PY # # # # + """ Rectangle """ # Write a Python class named "Rectangle" constructed by values for length and width. It should include two methods to calculate the "area" and the "perimeter" of a rectangle. Instantiate a Rectangle and call both methods. # - # # # ### P5.PY # # # # + """ Circle """ # Write a Python class named "Circle" constructed by a radius value and a class attribute for pi. You can use 3.14159 for the value of pi for simplicity. It should include two methods to calculate the "area" and the "perimeter" of a circle. Instantiate a Circle and call both methods. # - # # # ### P6.PY # # # # + """ RGB to HEX """ # Remember our function "rgb_hex" from the functions pset? That function took a color in rgb format and returned it in hex format as well as vice versa. Wouldn't it be so much easier to do that with a class called Color? # Define a class called "Color" to store each color's rgb and hex values. Define a method called "convert_codes()" to retrieve one value given the other. Create at least one instance of Color and try the convert_codes() method. # - # # # ## VEHICLES # # ### P1.PY # # # # + """ Vehicles I """ # Create a "Vehicle" class with instance attributes for "name" and "owner". # Add a method called "start_engine()" that prints "Vroom!". # Instantiate a Vehicle called "submarine" and then access all its attributes and methods for practice. # - # # # ### P2.PY # # # # + """ Vehicles II """ # Define 3 unique child classes for Vehicle - Car, Plane, and Boat. Each of these should have its own class attributes for "motion" and "terrain". (For Car, these would be something like "drive" and "land".) # - # # # ### P3.PY # # # # + """ Vehicles III """ # For Car, define a method called "honk_horn()" that prints "HONK!" # For Plane, define a method called "take_off()" that prints "Fasten your seatbelts!" # For Boat, define a method called "drop_achor()" that prints "Anchors away!" # - # # # ### P4.PY # # # # + """ Vehicles IV """ # Create an instance of each child class. Access all their attributes and methods, including those inherited from their parent class Vehicle. # TAKEAWAY! - Vehicle is the baseline class for other more specific types of vehicles. Typically, you wouldn't instantiate a Vehicle because the child classes are more useful for storing information about vehicles. The Vehicle class serves to create a relationship between its children. However, "submarine" might be created as a Vehicle because it's so rare that you might not need a full Submarine class! # - # # # ### P5.PY # # # # + """ Vehicles V """ # Let's expand the Car class to be more comprehensive. Include attributes for... ## brand name ## plates ## owner ## fuel (e.g. gasoline, battery, etc.) 
## fuel_level (a numerical amount that defaults to 50) ## max_speed (in MPH, defaults to None). # - # # # ### P6.PY # # # # + """ Vehicles VI """ # Next, define a new method called "check_fuel_level()" for your newly expanded Car class. If the fuel_level attribute is < 15, the method should reset fuel_level to 50 and print out how many units it refueled the car, e.g. 'Refueled 38 units.' Otherwise, it should simply print 'No need to refuel right now.' # Create at least TWO instances of Car, one of which has a fuel level below 15. Access the new attributes and call the check_fuel_level() method for each instance. # - # # # ## WEDDING_GUESTS # # ### P1.PY # # # # + """ Weddings I - Guest List """ # Imagine for this problem set that you are planning a wedding. # A) Define a class called "Guest" to help you manage all the information about each invitee. This should initially include instance attributes for the guest's name, phone, and an optional "invite_sent" that defaults to False. Guest should also include an instance method called "send_invite()", which changes the value of invite_sent to True once you send an invitation to that guest. # B) Next, define a child class called "Bridesmaid", which includes the same initial attributes and inherits Guest's instance method. # C) Finally, create at least one instance of each class and do the following: ### Call send_invite() on each instance. ### Check whether Bridesmaid is a child of Guest and vice versa. # - # # # ### P2.PY # # # # + """ Weddings II - Record Bridesmaid RSVPs """ # Create a method in Guest to record a guest's rsvp to your invitation. It should record whether they have any dietary restrictions (e.g. vegetarian, kosher, halal, etc.) and whether they're bringing a plus one. If they are bringing a plus one, it should record the name of the plus one and his/her dietary restrictions if any. These values should be stored in instance attributes. # Try out this method on at least one instance of Guest and at least one instance of Bridesmaid. # - # # # ### P3.PY # # # # + """ Weddings III - Record Shower & Bachelorette RSVP """ # Create two methods in Bridesmaid to record the bridesmaid's rsvp to the bridal shower and the bachelorette party. You can call them "record_shower_rsvp()" and "record_bachelorette_rsvp()". They will work just like the general "record_rsvp()" except there will be no plus ones or diet questions. Their rsvp answers should be stored in instance attributes with the same name (i.e. shower_rsvp & bachelorette_rsvp). # - # # # ## DOGS # # ### P1.PY # # # # + """ Dogs I """ # Create a class called "Dog". It should include: ## A class attribute "domesticated" with value True ## An instance method called "bark()" that prints "Woof!" # - # # # ### P2.PY # # # # + """ Dogs II """ # Create a child class for each of 3 dog breeds - Collie, Siberian Husky, and Pekingese. Each should have: ## 2 class attributes for "breed" and "temperament". The latter should be a list. ## 3 instance attributes for "name", "age", and "gender". # - # # # ### P3.PY # # # # + """ Dogs III """ # A) Add an instance method to Collie called "herd_the_kids()" that prints "Here are your children!" # B) Add an instance method called "bark()" to Pekingese. This should override the parent method "bark()" such that when you call bark() on an instance of Pekingese, it prints "Yap!" instead. # C) Instantiate one of each breed. Access the attributes, methods, and parent methods of each one. BONUS: Aside from herd_the_kids(), you should be able to do this in a loop. One possible sketch follows below.
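# One possible sketch of parts A-C. The class bodies below are hypothetical (the
# exercise leaves the actual implementation to the reader); it assumes the Dog
# parent class from Dogs I and the three breeds named in Dogs II, and the
# temperament values and dog names are made up for illustration only.
class Dog:
    domesticated = True
    def bark(self):
        print("Woof!")

class Collie(Dog):
    breed = "Collie"
    temperament = ["loyal", "energetic"]  # example values, not prescribed by the exercise
    def __init__(self, name, age, gender):
        self.name, self.age, self.gender = name, age, gender
    def herd_the_kids(self):
        print("Here are your children!")

class SiberianHusky(Dog):
    breed = "Siberian Husky"
    temperament = ["outgoing", "alert"]
    def __init__(self, name, age, gender):
        self.name, self.age, self.gender = name, age, gender

class Pekingese(Dog):
    breed = "Pekingese"
    temperament = ["affectionate", "stubborn"]
    def __init__(self, name, age, gender):
        self.name, self.age, self.gender = name, age, gender
    def bark(self):
        # overrides Dog.bark()
        print("Yap!")

# BONUS loop: everything except herd_the_kids() is shared, so one loop covers all three.
for dog in (Collie("Ace", 3, "F"), SiberianHusky("Bolt", 5, "M"), Pekingese("Coco", 2, "F")):
    print(dog.name, dog.age, dog.gender, dog.breed, dog.temperament, dog.domesticated)
    dog.bark()  # prints "Yap!" for the Pekingese, "Woof!" for the others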
# - # # # ### P4.PY # # # # + """ Dogs IV - Tricks (CHALLENGE!) """ # Many dogs know how to do common tricks or follow common commands. You could create methods for each trick/command in the Dog parent class, but the problem is that not all dogs know all tricks/commands. # However, it would be inefficient to define a custom set of instance methods for tricks/commands every time you instantiate a unique Collie (or SiberianHuskey or Pekingese etc.). # Find an efficient way to specify which tricks each unique dog knows and to call them. You can use "roll_over", "fetch", "shake_hands", and "spin". Secondly, find a way to teach a dog new trick from this set. # - # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="s99fRdJRNu2t" # This notebook contains a walk-through of the sound change simulation that was presented in Chapter 2 of my dissertation, *Functionalism, Lexical Contrast and Sound Change*. # # #1. Introduction # First, we import some standard modules like Random, Numpy and Matplotlib. # + id="sHm--x7cVoZt" import random import matplotlib.pyplot as plt import numpy as np # + [markdown] id="W1tkHIN1OUXw" # Next, we need to define an alphabet for our lexicon, and map our alphabet to indexes for computation purposes. # # I created two dictionaries that are used to encode consonant and vowel symbols. # # The vowel dictionary maps 15 different symbols onto different indexes. # The consonant dictionary maps 26 different symbols onto different indexes. # # # # + id="SXD4lj__ObRh" vowels = {'i': 0, 'e': 1, 'a': 2, 'o': 3, 'u': 4, 'ou':5, 'ei':6, 'ea':7, 'ee':8, 'oo':9, 'ai':10, 'oa':11, 'oi':12, 'io':13, 'ie':14} consonants = {'m': 0, 'p':1, 'b':2, 'f': 3, 'v': 4, 'd': 5, 't': 6, 'l': 7, 'n': 8, 'r': 9, 's': 10, 'k': 11, 'y': 12, 'g': 13, 'j':14, 'h': 15, 'c':16, ' ':17, 'th':18, 'sh':19, 'wh':20, 'ch':21, 'tw':22, 'x':23, 'w':24, 'z':25} # + [markdown] id="LVkl3ZmHO0mn" # With this dictionary, we can represent each word as an integer tuple. Words are initially read from a text file # in the form of 3-dimensional tuples of the type (Onset, Nucleus, Coda). For instance the word "dog" will be # processed in the form of a 3-dimensional tuple ```('d', 'o', 'g')```. # # Then, we can use the two dictionaries to transform # each word into a integer vector, through a helper function. # # + id="94NrJO1nO0Mc" dog = ('d', 'o', 'g') def vectorize(word): onset, nucleus, coda = word return consonants[onset], vowels[nucleus], consonants[coda] # + [markdown] id="DYjtTA-LRm_z" # For instance, the word "dog" will be transformed in the tuple ```(5, 3, 13)```. # + colab={"base_uri": "https://localhost:8080/"} id="lWhiRA7WRteb" outputId="7b278dcb-ef20-458b-dc5c-58788c38bb78" print(vectorize(dog)) # + [markdown] id="Tll4bAadPAVR" # The lexicon is a list of words (=tuples), and is stored as a global variable, since we need to modify it as sound change # occurs. A wordlist containing the words "dog", "cat", and "pig" is represented by the following variable. # + id="YhkbtZHtO9ix" wordlist = [('d', 'o', 'g'), ('c', 'a', 't'), ('p', 'i', 'g')] # + [markdown] id="JQRTdTe-PGGX" # Since we want to keep track of the number of symbols and the possible phonological environments, we also store three sets that # contain the onsets, nuclei and codas available in the lexicon in its current state. These three sets are all # global variables. 
# + id="8MQR5LqgPGjJ" def get_onset(wordlist): return {word[0] for word in wordlist} def get_nucleus(wordlist): return {word[1] for word in wordlist} def get_coda(wordlist): return {word[2] for word in wordlist} # + [markdown] id="EZweQIRRRyy7" # Let's extract the onsets, nuclei and codas of the current wordlist. # + colab={"base_uri": "https://localhost:8080/"} id="B0eWIFfyR5aI" outputId="a1f14a49-ecc8-4836-e925-2cff88b134f6" onset, nucleus, coda = get_onset(wordlist), get_nucleus(wordlist), get_coda(wordlist) print(onset) print(nucleus) print(coda) # + [markdown] id="IlwtIXvrPMXl" # For illustratory purposes, we need a reverse-dictionary, which can be used to retrieve the symbols given their index, # and a helper function to retrieve the word (in string format) given its integer vector representation: # + id="htjGS9aqPMuz" rev_vowels = {code: letter for letter, code in vowels.items()} rev_consonants = {code: letter for letter, code in consonants.items()} def vectorize_inverse(word): onset, nucleus, coda = word return rev_consonants[onset] + rev_vowels[nucleus] + rev_consonants[coda] # + [markdown] id="ciKh9gdJSHbR" # By calling the function on the tuple ```(5,3,13)```, we will get the word "dog" back. # # + colab={"base_uri": "https://localhost:8080/"} id="o1Y76DOwSIxu" outputId="a98f9003-ea8e-4a11-ce19-142118608e84" dog = (5, 3, 13) print(vectorize_inverse(dog)) # + [markdown] id="y6q-jLMePVSF" # Another helper function that we need is a function that returns the average Levenshtein distance within a wordlist: # + id="PnP47u2PPVop" def average(wordlist): av_length = [] for index, word in enumerate(wordlist): for word2 in wordlist[index+1:]: lev = 0 for i, letter in enumerate(word): if word2[i] != letter: lev += 1 av_length.append(lev) return round(sum(av_length)/len(av_length),3) # + [markdown] id="n6nBaJOtSklb" # The average distance of our current lexicon should be: # # [ d(dog,cat) + d(dog,pig) + d(cat,pig) ] /3: # # (3 + 2 + 3) / 3 : # # 8 / 3 = 2.667 # + colab={"base_uri": "https://localhost:8080/"} id="AOUjPyeRTWdK" outputId="7ef59fae-7f69-4baa-b273-21e7404dbad1" print(average(wordlist)) # + [markdown] id="sWPy4FcNTTk1" # Now that the lexicon, the alphabet and the helper functions are defined, we can move to sound change! # + [markdown] id="TpujlobxPwwc" # #2. Sound change functions # # This part defines the sound change functions. These functions modify the lexicon by simulating sound changes. # # The first function represents a sound change that targets the onset of the word. The comments on the function explain how we implement a sound change that targets the onset position of the artificial language (as described in 2.2.2). # # Depending on the outcome, the change might result in an unconditioned merger, a conditioned merger, a split, or a simple phonetic change. For more information, cf. 2.2.2 in the dissertation. # + id="tSoLhDQYPwJW" def change_onset(): #call the lexicon list and the onset set global lexicon, onset #prepare a new empty list, that will be filled with the form of the words after the sound change applies new_lexicon = [] #pick an onset at random and name it target. This is the target of the sound change target = random.choice(list(onset)) #pick an onset at random and name it outcome. 
This is the outcome of the sound change outcome = random.choice(list(rev_consonants)) #select a random subset of nuclei as the conditioning environment environment = random.sample(nucleus, random.randint(0, len(nucleus) - 1)) #apply the change to the lexicon for word in lexicon: #check words where target is the onset if word[0] == target: #determine whether the nucleus is in the conditioning environment if word[1] in environment: #if the nucleus is in the conditioning environment, then change target into outcome new_lexicon.append((outcome, word[1], word[2])) else: #if not, the change does not apply new_lexicon.append(word) else: #if the word does not start with target, the change does not apply new_lexicon.append(word) #this prints a line describing the change that happened print('/' + rev_consonants[target] + '/ becomes /' + rev_consonants[outcome] + '/ in onset before [' + ' '.join([rev_vowels[index] for index in environment]) + ']') #Update lexicon and onsets lexicon = new_lexicon onset = get_onset(lexicon) # + [markdown] id="Hfb1HOGQcrAy" # Let's try the function. All we need is to transform the words in the wordlist in their vectorizes representation, and store the type of onsets, nuclei and codas. Then we can apply some sound changes. # + colab={"base_uri": "https://localhost:8080/"} id="_A84HRNmdDkA" outputId="fa4d6974-5137-40d2-d09e-515dd5bf5118" wordlist = [('d', 'o', 'g'), ('c', 'a', 't'), ('p', 'i', 'g')] lexicon = [vectorize(word) for word in wordlist] onset, nucleus, coda = get_onset(lexicon), get_nucleus(lexicon), get_coda(lexicon) for iteration in range(5): change_onset() # + [markdown] id="ABMFXuIrdSJT" # Now let's see how the lexicon looks like after the change. Can you guess the form of the new lexicon? # + colab={"base_uri": "https://localhost:8080/"} id="PxKGqfkPdayK" outputId="959f7a15-1a26-456b-a7b4-22711fd55974" print([vectorize_inverse(word) for word in lexicon]) # + [markdown] id="7nl38M2eeOcB" # With a lexicon of only three words, and changes that only apply to the first symbol of the words, the shape of the lexicon does not change radically. # # Now, it's time to add other sound change functions. The following two functions will apply a change to the nucleus of the word. The only difference between the two is whether # the conditioning environment is the onset or the coda. # # + id="I6AXBQG2anmY" def change_nucleus(): #call the lexicon list and the nucleus set global lexicon, nucleus #prepare a new empty list, that will be filled with the form of the words after the sound change apply new_lexicon = [] #pick a nucleus at random and name it target. This is the target of the sound change target = random.choice(list(nucleus)) #pick a nucleus at random and name it outcome. 
This is the outcome of the sound change outcome = random.choice(list(rev_vowels)) #select a random subset of onsets as the conditioning environment environment = random.sample(onset, random.randint(0, len(onset) - 1)) #apply the change to the lexicon for word in lexicon: #check words where target is the nucleus if word[1] == target: #determine whether the onset is in the conditioning environment if word[0] in environment: #if the onset is in the conditioning environment, then change target into outcome new_lexicon.append((word[0], outcome, word[2])) else: #if not, the change does not apply new_lexicon.append(word) else: #if the word does not have target, the change does not apply new_lexicon.append(word) #this prints a line describing the change that happened print('/' + rev_vowels[target] + '/ becomes /' + rev_vowels[outcome] + '/ after [' + ' '.join([rev_consonants[index] for index in environment]) + ']') #Update lexicon and nuclei lexicon = new_lexicon nucleus = get_nucleus(lexicon) def change_nucleus2(): #call the lexicon list and the nucleus set global lexicon, nucleus #prepare a new empty list, that will be filled with the form of the words after the sound change apply new_lexicon = [] #pick a nucleus at random and name it target. This is the target of the sound change target = random.choice(list(nucleus)) #pick a nucleus at random and name it outcome. This is the outcome of the sound change outcome = random.choice(list(rev_vowels)) #select a random subset of codas as the conditioning environment environment = random.sample(coda, random.randint(0, len(coda) - 1)) #apply the change to the lexicon for word in lexicon: #check words where target is the nucleus if word[1] == target: #determine whether the coda is in the conditioning environment if word[2] in environment: #if the coda is in the conditioning environment, then change target into outcome new_lexicon.append((word[0], outcome, word[2])) else: #if not, the change does not apply new_lexicon.append(word) else: #if the word does not have target, the change does not apply new_lexicon.append(word) print('/' + rev_vowels[target] + '/ becomes /' + rev_vowels[outcome] + '/ before [' + ' '.join([rev_consonants[index] for index in environment]) + ']') #Update lexicon and nuclei lexicon = new_lexicon nucleus = get_nucleus(lexicon) # + [markdown] id="nVtU5bVkfJxq" # The last function targets the coda of the word instead. # + id="HPmn-qBAfKJc" def change_coda(): #call the lexicon list and the coda set global lexicon, coda #prepare a new empty list, that will be filled with the form of the words after the sound change apply new_lexicon = [] #pick a coda at random and name it target. This is the target of the sound change target = random.choice(list(coda)) #pick a coda at random and name it outcome. 
This is the outcome of the sound change outcome = random.choice(list(rev_consonants)) #select a random subset of nuclei as the conditioning environment environment = random.sample(nucleus, random.randint(0, len(nucleus) - 1)) #apply the change to the lexicon for word in lexicon: #check words where target is the coda if word[2] == target: #determine whether the nucleus is in the conditioning environment if word[1] in environment: #if the nucleus is in the conditioning environment, then change target into outcome new_lexicon.append((word[0], word[1], outcome)) else: #if not, the change does not apply new_lexicon.append(word) else: #if the word does not end with target, the change does not apply new_lexicon.append(word) print('/' + rev_consonants[target] + '/ becomes /' + rev_consonants[outcome] + '/ in coda after [' + ' '.join([rev_vowels[index] for index in environment]) + ']') #Update lexicon and onsets lexicon = new_lexicon coda = get_coda(lexicon) # + [markdown] id="FeERG7ZffKn_" # Now, let's repeat the previous experiments, with a larger vocabulary and a larger inventory of sound changes. # + colab={"base_uri": "https://localhost:8080/"} id="Hgkam5YDfnOD" outputId="bab6216c-c6f1-45ee-dcee-576d74f1c951" wordlist = [('d', 'o', 'g'), ('c', 'a', 't'), ('p', 'i', 'g'), ('c','o','w'), ('r','a','t'), ('b', 'ee', ' '), ('f','i','sh')] lexicon = [vectorize(word) for word in wordlist] onset, nucleus, coda = get_onset(lexicon), get_nucleus(lexicon), get_coda(lexicon) for iteration in range(5): change_onset() change_nucleus() change_nucleus2() change_coda() # + [markdown] id="Rc0Codu1giIM" # Now we see that more changes occurr, and as a consequence the lexicon also changes drastically. # + colab={"base_uri": "https://localhost:8080/"} id="2r8rrq09guRC" outputId="385132d6-db7c-4c6e-ca3f-a6161c1ba9c6" print([vectorize_inverse(word) for word in lexicon]) # + [markdown] id="nD8GCtPhg9gL" # We can keep track of the average distance of the vocabulary by means of our ```average()``` function. # + colab={"base_uri": "https://localhost:8080/"} id="279boM4jhHD6" outputId="81543764-fdcc-4fb5-b647-dd22c18a2460" print('The average distance at the beginning is %s' % average(wordlist)) print('The average distance at the end is %s' % average(lexicon)) # + [markdown] id="26EqA6XqiHkN" # #3. Simulation # # Finally, we can run several sound change simulations, and keep track of the average distance and the number of phonemes in the lexicon during the simulations, by plotting them on a graph. # # In addition to the wordlist, we should just specify the number of iterations and the number sound changes to apply. 
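# Note that the simulation cell below draws the four change functions uniformly with
# `random.choice`. If you wanted to weight them unequally (for example, making vowel
# changes more frequent), a weighted draw could replace that line. The following is a
# hypothetical sketch only, not part of the original simulation; the weights are assumed.

# +
# Hypothetical weighted selection of a sound-change function (sketch only).
weighted_functions = [change_onset, change_nucleus, change_nucleus2, change_coda]
weights = [1, 2, 2, 1]  # assumed weights: nucleus changes drawn twice as often
# random.choices returns a list of k draws; indexing [0] and calling it would apply
# the selected change to the current lexicon:
# random.choices(weighted_functions, weights=weights, k=1)[0]()
# -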
# # # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="6Jm3SgIGfLLM" outputId="cb6ce417-a99d-4575-cad1-4f1a65d1da3d" wordlist = [('d', 'o', 'g'), ('c', 'a', 't'), ('p', 'i', 'g'), ('c','o','w'), ('r','a','t'), ('b', 'ee', ' '), ('f','i','sh')] iterations = 5 n_changes = 10000 for iter in range(iterations): print("Iteration number %s started" % (iter+1)) global onset, nucleus, coda, lexicon #this line loads the lexicon in the format described above: a list of integer tuples lexicon = [(consonants[word[0]], vowels[word[1]], consonants[word[2]]) for word in wordlist] #this line gets the onset, nucleus, and coda sets onset, nucleus, coda = get_onset(lexicon), get_nucleus(lexicon), get_coda(lexicon) #this line will be used to define the sound change functions used in the simulation and their weight #with this setting, each function is equally weighted functions = [change_onset, change_nucleus, change_nucleus2, change_coda] #we initialize lists that will keep track of the number of the iteration, the number of the phonemes, #and the average distance x_axis = [0] phonemes = [len(onset.union(coda)) + len(nucleus)] av_length = [average(lexicon)] for n in range(int(n_changes)): #this line selects a sound change function at random and applies it random.choice(functions)() #This line is needed to make the plot lighter. For the toy example in Figure 2.2, the sampling ratio has been reduced to '1' if n % 1000 == 0: #we update the lists that keep track of the number of the iteration, the number of the phonemes, and the #average distance x_axis.append(n+1) phonemes.append(len(onset.union(coda)) + len(nucleus)) av_length.append(average(lexicon)) #this loop prints the shape of the lexicon at the beginning of the simulation and after the sound changes applied for index, word in enumerate(lexicon): print(''.join(wordlist[index]) + '->' + ''.join(rev_consonants[word[0]] + rev_vowels[word[1]] + rev_consonants[word[2]])) print("Iteration number %s finished" % (iter+1)) #After the simulation has ended, we plot the change in the number of phonemes and in the average distance #during the simulation #plot phoneme size plt.subplot(1, 2, 1) plt.plot(x_axis, phonemes) #plt.xticks(np.arange(1, 4, step=1)) #This is for the toy example in Figure 2.2 #plt.yticks(np.arange(36, 39, step=1)) #This is for the toy example in Figure 2.2 plt.title('Number of Phonemes') plt.xlabel('Iterations') plt.ylabel('Counts') #plot av_length plt.subplot(1, 2, 2) plt.plot(x_axis, av_length) #plt.xticks(np.arange(1, 4, step=1)) #This is for the toy example in Figure 2.2 plt.title('Average Levenshtein Distance') plt.xlabel('Iterations') plt.ylabel('Counts') # + [markdown] id="NWhDpR16Haq7" # This is it! Please reach out to me if you have any comment on the project. 
# # Andrea # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" # Open In Colab # + [markdown] id="AxaO8OLYIAX0" # Importng Libraries # + id="Z_K9lhKdB1ta" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="9973c6e3-b7ae-4206-f362-7cc32f10f5b5" import numpy as np import pandas as pd import os import cv2 import matplotlib.pyplot as plt import os from PIL import Image from keras.preprocessing.image import img_to_array from keras.preprocessing.image import load_img from keras.utils import np_utils from tensorflow.python.keras.layers import Dense,Conv2D,Flatten,MaxPooling2D,GlobalAveragePooling2D,Activation,BatchNormalization,Dropout from tensorflow.python.keras import Sequential,backend,optimizers # + [markdown] id="a9SXe0ySITpn" # Unzipping Dataset # + id="TsvjrWNGT_B_" from zipfile import ZipFile file_name='/content/drive/My Drive/Colab Notebooks/Malaria Detector/mldataset.zip' with ZipFile(file_name,'r') as zip: zip.extractall() # + [markdown] id="2E8D87v6Iac6" # Identifying Dependent and Independent Objects # + id="d-8zz4TAVqjJ" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="c36a60e8-70bd-441b-f9fc-85f07e60200d" parasitized_data = os.listdir('/content/mldataset/Parasitized') uninfected_data = os.listdir('/content/mldataset/Uninfected/') data = [] labels = [] for img in parasitized_data: try: img_read = plt.imread('/content/mldataset/Parasitized/' + img) img_resize = cv2.resize(img_read, (50, 50)) img_array = img_to_array(img_resize) data.append(img_array) labels.append(1) except: None for img in uninfected_data: try: img_read = plt.imread('/content/mldataset/Uninfected/' + img) img_resize = cv2.resize(img_read, (50, 50)) img_array = img_to_array(img_resize) data.append(img_array) labels.append(0) except: None image_data = np.array(data) labels = np.array(labels) print("image_data:",len(image_data)) print("labels:",len(labels)) # + [markdown] id="HxNoghL4IQTK" # Data Visualization # + id="SeI-FtbyT3ai" colab={"base_uri": "https://localhost:8080/", "height": 511} outputId="8492fb87-5df9-4242-de15-bc09a93eae85" print("Parasitized Sample:\n") plt.figure(figsize = (15,15)) for i in range(3): plt.subplot(4, 4, i+1) img = cv2.imread('/content/mldataset/Parasitized/'+ parasitized_data[i]) plt.imshow(img) plt.show() print("Uninfected Sample:\n") plt.figure(figsize = (15,15)) for i in range(3): plt.subplot(4, 4, i+1) img = cv2.imread('/content/mldataset/Uninfected/'+ uninfected_data[i]) plt.imshow(img) plt.show() # + [markdown] id="VPFsBmD9IgRT" # Dividng into train and test # + id="KeFgC_oqV_nQ" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="09859c7f-d487-4a83-b5a6-0c7a52cce973" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(image_data, labels, test_size = 0.2,random_state = 0) y_train = np_utils.to_categorical(y_train, num_classes = 2) y_test = np_utils.to_categorical(y_test, num_classes = 2) print("X_train:",len(X_train)) print("X_test:",len(X_test)) print("y_train:",len(y_train)) print("y_test:",len(y_test)) # + [markdown] id="uvIuAxgMIvQt" # Building the CNN model # + id="Uru2WrqzWIyY" colab={"base_uri": "https://localhost:8080/", "height": 745} outputId="f063370b-9dd9-42a3-9e35-fbb072fb2726" model = Sequential() inputShape = (50, 50, 3) if backend.image_data_format() == 'channels_first': 
inputShape = (3, 50, 50) model.add(Conv2D(32, (3,3), activation = 'relu', input_shape = inputShape)) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = -1)) model.add(Dropout(0.2)) model.add(Conv2D(32, (3,3), activation = 'relu')) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = -1)) model.add(Dropout(0.2)) model.add(Conv2D(32, (3,3), activation = 'relu')) model.add(MaxPooling2D(2,2)) model.add(BatchNormalization(axis = -1)) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(512, activation = 'relu')) model.add(BatchNormalization(axis = -1)) model.add(Dropout(0.5)) model.add(Dense(2, activation = 'softmax')) model.summary() # + [markdown] id="B3NBuKLrJOVZ" # Compiling the model # + id="qieD7ZErWNZe" model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) # + [markdown] id="pl5mWgvxJZ5T" # Fitting the model # + id="ncKXN3QEWPtB" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="f63cbaf7-74f6-40a0-f803-429e6c2d0572" model.fit(X_train, y_train, epochs = 30, batch_size = 32) # + [markdown] id="yBhqS3AYJnwO" # Evaluating the model # + id="LwJROPlkWSTe" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="72a98205-1535-48e7-fbe7-64c23e7ffa43" predict = model.evaluate(X_test, y_test) print("Loss: ",predict[0]) print("Accuracy: ",predict[1]*100) # + [markdown] id="3mP1l8VKfqkV" # Saving model # + id="0DDVnynGhGiQ" colab={"base_uri": "https://localhost:8080/", "height": 763} outputId="d5aa7d69-103c-4e73-d7de-c59d5f6770dd" # %cd "/content/drive/My Drive/Colab Notebooks/Malaria Detector" model.save('malaria.h5') from tensorflow.keras.models import load_model model = load_model('malaria.h5') model.summary() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SageMath 9.2 # language: sage # name: sagemath # --- from string import ascii_lowercase # + import random def generate_keys(q): K = GF(q, 'a') invertible_element = None while not invertible_element: e = Integer(K.random_element()) if gcd(e, q-1) == 1: #print(gcd(e, q-1)) invertible_element = e d = power_mod(Integer(invertible_element), -1, q-1) return Integer(invertible_element), Integer(d) # - q = 100019 eA, dA = generate_keys(q) eB, dB = generate_keys(q) # + def MO_step1(m, q, eA, chunk_size=None): digits = int(log(q,10)) + 1 if chunk_size == None: chunk_size = int(log(q,10)) if chunk_size % 2 == 1: chunk_size -= 1 m_encoded = "".join(["{:02d}".format(ord(x)-97) for x in m.lower() if x in ascii_lowercase]) m_encoded = m_encoded + ((chunk_size - (len(m_encoded)% chunk_size )) % chunk_size)/2 * '29' f = len(m_encoded)/chunk_size chunks = [ int(m_encoded[i*chunk_size:(i+1)*chunk_size]) for i in range(f) ] # https://github.com/sagemath/sage/blob/222059565bc2166f29c50a6d85db7992589098c2/src/sage/arith/misc.py#L2201 ciphered = [power_mod(x, eA, q) for x in chunks] formated = "{:0" + str(digits) + "d}" return "".join([formated.format(cn) for cn in ciphered]) def MO_step2(m, q, eB, chunk_size=None): digits_n = int(math.log(q, 10)) + 1 if chunk_size == None: chunk_size = digits_n - 1 if chunk_size % 2 == 1: chunk_size -= 1 ff = len(m)/digits_n dk = [power_mod(int(m[i*digits_n:(i+1)*digits_n]), eB, q) for i in range(ff)] formated = "{:0" + str(digits_n) + "d}" chars = "".join([formated.format(cn) for cn in dk]) return chars def MO_step3(m, q, dA, chunk_size=None): return MO_step2(m, q, dA, chunk_size) def MO_step4(m, q, dB, 
chunk_size=None): final = MO_step2(m, q, dB, chunk_size) #print("final: ", final) digits_n = int(math.log(q, 10)) + 1 if chunk_size == None: chunk_size = digits_n - 1 if chunk_size % 2 == 1: chunk_size -= 1 ff = len(m)/digits_n dk = [int(final[i*digits_n:(i+1)*digits_n]) for i in range(ff)] formated = "{:0" + str(chunk_size) + "d}" chars = "".join([formated.format(cn) for cn in dk]) return "".join([chr(int(chars[i*2:(i+1)*2])+97) for i in range(len(chars)/2)]) # - s1 = MO_step1("Juanma abreme la frontera", q, eA); print(s1) s2 = MO_step2(s1, q, eB); print(s2) s3 = MO_step3(s2, q, dA); print(s3) s4 = MO_step4(s3, q, dB); print(s4) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import sys r_path_data = "../new_codebase/src/models/kmeans/" sys.path.append(r_path_data) from kmeans import * import pandas as pd result=get_cluster_results('ovasarhelyi','pre-summer', 'all', features, nc=4) # # Airport arrival # + import geopandas as gpd from shapely.geometry import Point airport_cities_d = {"airport": ['Pisa', 'Florence'], "lat": [43.7228, 43.7696], "lon": [10.4017, 11.2558]} def create_airports(data): """ Exmaple: data = airport_cities_d = {"airport": ['Pisa', 'Florence'], "lat": [43.7228, 43.7696], "lon": [10.4017, 11.2558]} """ airport_cities = pd.DataFrame(data) geometry = [Point(xy) for xy in zip(airport_cities.lon, airport_cities.lat)] airport_cities = airport_cities.drop(['lon', 'lat'], axis=1) crs = {'init': 'epsg:4326'} geo_airport_cities = gpd.GeoDataFrame(airport_cities, crs=crs, geometry=geometry) return geo_airport_cities # - def get_airport_start_end(result, geo_airport_cities): crs={'init': 'epsg:4326'} geometry_st = [Point(xy) for xy in zip(result.start_lon, result.start_lat)] geometry_end = [Point(xy) for xy in zip(result.end_lon, result.end_lat)] geo_st = gpd.GeoDataFrame(geometry_st, crs=crs, geometry=geometry_st)[['geometry']] geo_end = gpd.GeoDataFrame(geometry_end, crs=crs, geometry=geometry_end)[['geometry']] geo_st.crs = crs geo_end.crs = crs st_airport = pd.DataFrame(geo_st.within(geo_airport_cities['geometry'].unary_union.buffer(0.1))) st_airport.index=result.index result['geometry_st'] = st_airport end_airport = pd.DataFrame(geo_end.within(geo_airport_cities['geometry'].unary_union.buffer(0.1))) end_airport.index=result.index result['geometry_end'] = end_airport st_florence = pd.DataFrame(geo_st.within(geo_airport_cities['geometry'].loc[1].buffer(0.1))) st_florence.index=result.index result['geometry_st_fl'] = st_florence end_florence = pd.DataFrame(geo_end.within(geo_airport_cities['geometry'].loc[1].buffer(0.1))) end_florence.index=result.index result['geometry_end_fl'] = end_florence st_pisa = pd.DataFrame(geo_st.within(geo_airport_cities['geometry'].loc[0].buffer(0.1))) st_pisa.index=result.index result['geometry_st_pisa'] = st_pisa end_pisa = pd.DataFrame(geo_end.within(geo_airport_cities['geometry'].loc[0].buffer(0.1))) end_pisa.index=result.index result['geometry_end_pisa'] = end_pisa return result def add_airport_arrivals(result, airport_cities_d): geo_airport_cities=create_airports(airport_cities_d) result=get_airport_start_end(result, geo_airport_cities) return result def cluster_airport_result(result, i, var): cus=result[result[var]==i] arrive_by_plane=np.round(cus['geometry_st'].sum()/len(cus)*100,2) left_by_plane=np.round(cus['geometry_end'].sum()/len(cus)*100,2) 
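    # geometry_st / geometry_end are booleans set in get_airport_start_end(): True when a
    # trip starts or ends inside the 0.1-degree buffer around the two airports. Summing them
    # and dividing by the cluster size therefore gives the share (in %) of trips in this
    # cluster that arrived at or departed from an airport.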
arrive_fl=np.round(cus['geometry_st_fl'].sum()/len(cus)*100,2) arrive_pis= np.round(cus['geometry_st_pisa'].sum()/len(cus)*100,2) st_end_fl=pd.crosstab(cus['geometry_st_fl'],cus['geometry_end_fl']).apply(lambda x: x / x.sum()*100, 1).round(2)[True][True] st_end_pis=pd.crosstab(cus['geometry_st_pisa'],cus['geometry_end_pisa']).apply(lambda x: x / x.sum()*100, 1).round(2)[True][True] res=f'{arrive_by_plane}% arrived to, and {left_by_plane}% left by plane from Tuscany. ' res=res+f'{arrive_fl}% arrived to Florence airport and {arrive_pis}% landed in Pisa. ' res=res+f'{st_end_fl}% of those who arrived to Florence by plane left from the same airport. ' res=res+f'{st_end_pis}% of those who arrived to Pisa airport left by plane from Pisa too. ' return res # # Descriptives def calculate_cluster_size(kmeans_res, var): cluster_results=pd.DataFrame(kmeans_res[var].value_counts()) ratio=np.round(cluster_results/cluster_results.sum()*100, 2).rename(columns={var:"ratio"}) return cluster_results.join(ratio) def cluster_size(result, var): a=calculate_cluster_size(result, var) a['cus']=a.index return a def create_basic_description(result, season, country, var): if country!='all': return f'In the {season} season {(len(result))} tourists visited Tuscany from {country}, we managed to cluster {np.round(len(result[pd.notnull(result[var])])/len(result)*100,2)} % of them.' else: return f'In the {season} season {(len(result))} tourists visited Tuscany, we managed to cluster {np.round(len(result[pd.notnull(result[var])])/len(result)*100,2)} % of them.' def get_top_nationalities(result, n): nat_freq=pd.DataFrame(result['country'].value_counts()) ratios=nat_freq[:n]/nat_freq.sum()*100 res='The most common visitors are from' for i in range(0,len(ratios)): if i!=len(ratios)-1: res=res+f' {ratios.index[i]} ({np.round(ratios.country[i],2)}%),' else: res=res+f' and {ratios.index[i]} ({np.round(ratios.country[i],2)}%).' return res def get_cluster_baiscs(result, var): clusters=calculate_cluster_size(result, var) res=f'We have {len(clusters)} clusters,' for i in zip(range(0,len(clusters)), ['first', 'second', "third", 'forth', 'fifth'][:len(clusters)]): if i[0]!=len(clusters)-1: #print(i) res=res+f' the {i[1]} cluster represents {clusters.ratio[i[0]]}%,' else: res=res+f' and the {i[1]} cluster is {clusters.ratio[i[0]]}%.' return res def get_cluster_country_distr(result, var): return pd.crosstab(result.country, result[var]).apply(lambda x: x / x.sum()*100, 0).round(2) def hours_tusc(result, var): return np.round(pd.DataFrame(result.groupby(var)[['hrs_in_tusc']].mean())/24) def get_places_at_least4_hours(result, cluster, var): hours=get_hours_by_cities(result, var) four_hours_top=hours.sort_values(cluster, ascending=False) cities=four_hours_top[four_hours_top[cluster]>4].index if len(cities)==0: res='but nowhere more than 4 hours. ' else: res='and out of it at least a half day in' i=0 for city in cities: i+=1 if city=='coast': city='the coast' if i!=len(cities) and len(cities)!=1: res=res+f' {city.title()},' elif i==len(cities) and i!=1: res=res+f' and {city.title()}. ' else: res=res+f' {city.title()}. 
' return res def get_hours_by_cities(result, var): return pd.DataFrame(result.groupby(var)[['arezzo', 'florence', 'grosseto', 'livorno', 'lucca', 'pisa', 'pistoia', 'siena', 'coast']].mean()/60).T def cluster_mcc_ratio(result, nc, n, var): rel1=get_cluster_country_distr(result, var) hours=hours_tusc(result, var) res="" for i in zip(range(0,nc), [" first", ' second', ' third', ' forth', ' fifth'][:nc]): res=res+f"By the number of uniqiue visitors the{i[1]} cluster's top {n} countries are; " rel=rel1.sort_values(i[0],ascending=False)[:n] for j in range(0,n): if j!=n-1: res=res+f'{rel[i[0]].index[j]} ({rel[i[0]][j]}%), ' else: res=res+f'and {rel[i[0]].index[j]} ({rel[i[0]][j]}%). ' res=res+f'This cluster spends in average {int(hours.hrs_in_tusc[i[0]])} days in Tuscany, ' res=res+get_places_at_least4_hours(result, i[0], var) res=res+ cluster_airport_result(result, i[0], var) return res """var="label" print(create_basic_description(result, season, country, var)) print(get_top_nationalities(result, 6)) print(get_cluster_baiscs(result, var)) print(cluster_mcc_ratio(result,4, 5, var))""" def get_kmeans_description(result, season, country, var): result=add_airport_arrivals(result,airport_cities_d) if country=='all': print(create_basic_description(result, season, country, var)+' '+ get_top_nationalities(result, 6) +" "+get_cluster_baiscs(result, var)+ ' \n'+ cluster_mcc_ratio(result,4, 5, var)) else: print(create_basic_description(result, season, country, var)+" "+ trajectory_cluster_description(result, var)) # # Trajectory path_to_result='/mnt/data/ovasarhelyi/TPT_tourism/new_codebase/src/models/sequence_analysis/data/clustering_results/' d=pd.read_csv(path_to_result+'cluster_results_hungary_winter.csv') season, country import sys r_path_data = "../new_codebase/src/utils/load_data/" sys.path.append(r_path_data) from load_dataframes import * def trajectory_cluster_description(result, var): hours=hours_tusc(result, var) nc=len(hours) res='' if var=='label': st=0 else: st=1 for i in zip(range(st,nc), [" first", ' second', ' third', ' forth', ' fifth'][:nc]): res=res + f'The {i[1]} cluster ' res=res + f'spends in average {int(hours.hrs_in_tusc[(i[0])])} days in Tuscany, ' res=res+get_places_at_least4_hours(result, i[0], var) res=res+ cluster_airport_result(result, i[0], var) return res def join_customer_features(traj_result, username,season, country): user_features=get_k_means_data(username,season, country).set_index("customer_nr") features_with_trajectory=user_features.join(traj_result.set_index('customer_nr')[["cluster"]]) return features_with_trajectory def write_file(country, season, final): if var=='label': path='../new_codebase/results/kmeans/' elif var=='cluster': path='../new_codebase/results/kmeans/' file_name=country+"_"+season newpath=path+file_name+'/' if not os.path.exists(newpath): os.makedirs(newpath) f = open(newpath+file_name+".txt","w") f.write(final) f.close() def get_trajectory_description(traj_result, username, season, country, var, print_it=True): result=join_customer_features(traj_result, username, season, country) result=add_airport_arrivals(result, airport_cities_d) final=create_basic_description(result, season, country, var)+" "+trajectory_cluster_description(result, var) if print_it==True: write_file(country, season, final, var) print(final) # # TEST # + traj_result=d username='ovasarhelyi' country = 'hungary' season = 'winter' var = 'cluster' get_trajectory_description(traj_result, username, season, country, var, True) # - season='pre-summer' country='all' 
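# Note: `result` used in the next call is still the pre-summer / all-countries k-means
# output computed at the top of this notebook; only the season and country labels are
# reset here for the generated report text.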
get_kmeans_description(result,season, country, "label") season='winter' country='hungary' result=get_cluster_results('ovasarhelyi',season, country, features, nc=4) get_kmeans_description(result,season, country, "label") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # ASTR 480 # Navigating the Night Sky # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd import astropy.units as u from astropy.time import Time from astropy.coordinates import SkyCoord import pytz import warnings warnings.filterwarnings('ignore', category=Warning) from astroplan.plots import plot_sky, plot_airmass from astroplan import Observer, FixedTarget, time_grid_from_range, observability_table, moon_illumination from astroplan import (AirmassConstraint, MoonSeparationConstraint, SunSeparationConstraint, AtNightConstraint, MoonIlluminationConstraint) # + # Modifying the original quasar table to get rid of unneeded rows and columns column_names = ["Name", "RA", "DEC", "Bmag", "Rmag"] quasar_table = pd.read_csv('./QuasarListOfTen.csv', header = None, names = column_names) table_chopped = quasar_table.drop(index = 0, columns = ["Bmag", "Rmag"]) quasar_vals = table_chopped.values # - # List of quasar targets and coordinates quasar_targets = [FixedTarget(coord = SkyCoord(ra = RA, dec = DEC), name = Name) for Name, RA, DEC in quasar_vals] # Define observing location ctio = Observer(longitude = -70.804 * u.deg, latitude = 30.169 * u.deg, timezone = 'America/Santiago', name = "CTIO" ) # + # Define observing window: March 1 - 15, 2019 observe_start = Time("2019-3-1 03:00:00") observe_end = Time("2019-3-15 03:00:00") obs_range = [observe_start, observe_end] # - # Find the Local time at observing window ctio_local_start = observe_start.to_datetime(ctio.timezone) ctio_local_end = observe_end.to_datetime(ctio.timezone) # + # Finding astronomical twilight for the very start and very end of the 15-day run astr_rise = ctio.twilight_morning_astronomical(observe_end, which = 'previous') astr_set = ctio.twilight_evening_astronomical(observe_start, which = 'nearest') ctio_range = [astr_set, astr_rise] # + # Define observing constraints constraints = [AirmassConstraint(max = 1.8), MoonSeparationConstraint(min = 70 * u.deg), AtNightConstraint.twilight_astronomical()] # Determine the Moon's fractional illumination at the beginning, middle, and end of the observing window print("The Moon's fractional illumination on {0} is {1:f}.".format(observe_start, moon_illumination(observe_start))) print("The Moon's fractional illumination on {0} is {1:f}.".format(observe_end, moon_illumination(observe_end))) print("The Moon's fractional illumination on {0} is {1:f}.".format(Time("2019-3-7 03:00:00"), moon_illumination(Time("2019-3-7 03:00:00")))) # - # Creating an observability table for the 10 quasar targets obs_table = observability_table(constraints, ctio, quasar_targets, time_range = ctio_range) print(obs_table) # + # Plotting the airmass of each of the 10 quasars versus time (through the entire observing window). 
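# Note: airmass is approximately sec(z) for zenith angle z, so the AirmassConstraint(max = 1.8)
# applied above corresponds to a zenith angle of roughly 56 degrees.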
fig, ax = plt.subplots(1, 1) fig.set_size_inches(10, 5) fig.tight_layout() obs_time_grid = time_grid_from_range(obs_range) for my_object in quasar_targets: ax = plot_airmass(my_object, ctio, obs_time_grid) # Define the the first astronomical sunrise of the observing window astr_rise_first = ctio.twilight_morning_astronomical(observe_start, which = 'nearest') # The black lines on the plot indicate the time of an astronomical sunset and astronomical sunrise (left to right), #so that the time between the lines is a time that the sky can be observed safely from the Sun. ax.vlines((astr_set + 24.0 * u.h).datetime, 1, 3, color = 'black', linestyles = 'solid', linewidth = 5) ax.vlines((astr_rise_first + 24.0 * u.h).datetime, 1, 3, color = 'black', linewidth = 5) ax.legend(loc = 0, shadow = True); print("Based on the airmass plot below, most of the 10 quasar targets are observable (with regards to airmass)" + "for most of the night each night at the observatory, since the airmass for each target falls below ~2 for" + " most of the time interval shown.") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ssrs_env # language: python # name: ssrs_env # --- # # Eagle Behavior and Risk Modeling for Wind Energy workshop # ## Stochastic Soaring Raptor Simulator (SSRS) # ### Instructions for running the demo.ipynb jupyter notebook provided in the workshop folder # This document provides instructions on installing SSRS on your local machine and running the notebooks provided in this workshop folder of the Github repository. These instructions have been tested for Unix OS (Linux/MacOS) and may # require slight modification for Windows OS. # # Github repository: https://github.com/NREL/SSRS # # Journal publication: Stochastic agent-based model for predicting turbine-scale raptor movements during updraft-subsidized directional flights. Ecological Modelling, April 2022. [link](https://www.sciencedirect.com/science/article/pii/S0304380022000047) # #### Installing SSRS # 1) Make sure following are installed on your local machine before the workshop demos start: # - Terminal ([Instructions for Windows](https://docs.microsoft.com/en-us/windows/terminal/)) # - Git ([install instructions](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)) # - Miniconda ([install instructions](https://docs.conda.io/en/latest/miniconda.html#latest-miniconda-installer-links)) # 2) Open the terminal and go to the directory where you want to clone/install SSRS # ```console # rsandhu@mac$ pwd # /Users/rsandhu/workshop # ``` # 3) Clone SSRS using the following command, which will create a folder named 'SSRS' # ```console # rsandhu@mac$ git clone https://github.com/NREL/SSRS.git # ``` # 4) Go to SSRS directory # ```console # rsandhu@mac$ cd SSRS/ # ``` # 5) Create the conda environment with name 'myenv' using the .yml file in the SSRS directory # ```console # rsandhu@mac$ conda env create -f environment.yml -n myenv # ``` # 6) Check if conda environment was created successfully, this command should list myenv as one of the conda envs # ```console # rsandhu@mac$ conda env list # ``` # 7) Activate the installed conda environments # ```console # rsandhu@mac$ conda activate myenv # ``` # 8) Install SSRS (use the -e flag before . if editing the SSRS source code) # ```console # rsandhu@mac$ pip install . 
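# # or, if you plan to edit the SSRS source code, install it in editable mode instead:
# rsandhu@mac$ pip install -e .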
# ``` # 9) Check if SSRS is installed by running this command, and checking for ssrs in the list # ```console # rsandhu@mac$ conda list # ``` # #### Running Jupyter notebooks from myenv # 1) From the installed conda environment, install pykernel # ```console # rsandhu@mac$ conda install ipykernel # ``` # 2) Add the ipykernel kernel for the myenv environement # ```console # rsandhu@mac$ ipython kernel install --user --name=myenv # ``` # 2) Add the ipykernel kernel for the myenv environement # ```console # rsandhu@mac$ conda install jupyter # ``` # 3) Launch Jupyter notebook from the terminal # ```console # rsandhu@mac$ jupyter notebook # ``` # 4) The jupyter notebook will open to the SSRS direcotry. Go to the workshop or the notebooks folder and then open any notebook (.ipynb files), and make sure the kernel is selected to be myenv (Kernel -> Change kernel -> myenv). # #### Getting API key for using NREL's WTK dataset in SSRS # 1) Open this [link](https://developer.nrel.gov/signup/), enter your name and email, and click signup. You will get an API key. Copy the API key # 2) Open the .hscfg_need_api_key file that comes shipped in SSRS/notebooks, SSRS/workshop and SSRS/examples, and put the API key in that file and then rename it as .hscfg. Only do it for the directory where you are running the code. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from __future__ import division, print_function import dendropy from dendropy.interop import genbank # ## Getting the data # + def get_ebov_2014_sources(): #EBOV_2014 #yield 'EBOV_2014', genbank.GenBankDna(id_range=(233036, 233118), prefix='KM') yield 'EBOV_2014', genbank.GenBankDna(id_range=(34549, 34563), prefix='KM0') def get_other_ebov_sources(): #EBOV other yield 'EBOV_1976', genbank.GenBankDna(ids=['AF272001', 'KC242801']) yield 'EBOV_1995', genbank.GenBankDna(ids=['KC242796', 'KC242799']) yield 'EBOV_2007', genbank.GenBankDna(id_range=(84, 90), prefix='KC2427') def get_other_ebolavirus_sources(): #BDBV yield 'BDBV', genbank.GenBankDna(id_range=(3, 6), prefix='KC54539') yield 'BDBV', genbank.GenBankDna(ids=['FJ217161']) #RESTV yield 'RESTV', genbank.GenBankDna(ids=['AB050936', 'JX477165', 'JX477166', 'FJ621583', 'FJ621584', 'FJ621585']) #SUDV yield 'SUDV', genbank.GenBankDna(ids=['KC242783', 'AY729654', 'EU338380', 'JN638998', 'FJ968794', 'KC589025', 'JN638998']) #yield 'SUDV', genbank.GenBankDna(id_range=(89, 92), prefix='KC5453') #TAFV yield 'TAFV', genbank.GenBankDna(ids=['FJ217162']) # + other = open('other.fasta', 'w') sampled = open('sample.fasta', 'w') for species, recs in get_other_ebolavirus_sources(): char_mat = recs.generate_char_matrix(taxon_set=dendropy.TaxonSet(), gb_to_taxon_func=lambda gb: dendropy.Taxon(label='%s_%s' % (species, gb.accession))) char_mat.write_to_stream(other, 'fasta') char_mat.write_to_stream(sampled, 'fasta') other.close() ebov_2014 = open('ebov_2014.fasta', 'w') ebov = open('ebov.fasta', 'w') for species, recs in get_ebov_2014_sources(): char_mat = recs.generate_char_matrix(taxon_set=dendropy.TaxonSet(), gb_to_taxon_func=lambda gb: dendropy.Taxon(label='EBOV_2014_%s' % gb.accession)) char_mat.write_to_stream(ebov_2014, 'fasta') char_mat.write_to_stream(sampled, 'fasta') char_mat.write_to_stream(ebov, 'fasta') ebov_2014.close() ebov_2007 = open('ebov_2007.fasta', 'w') for species, recs in get_other_ebov_sources(): char_mat = 
recs.generate_char_matrix(taxon_set=dendropy.TaxonSet(), gb_to_taxon_func=lambda gb: dendropy.Taxon(label='%s_%s' % (species, gb.accession))) char_mat.write_to_stream(ebov, 'fasta') char_mat.write_to_stream(sampled, 'fasta') if species == 'EBOV_2007': char_mat.write_to_stream(ebov_2007, 'fasta') ebov.close() ebov_2007.close() sampled.close() # - # ## Genes # + my_genes = ['NP', 'L', 'VP35', 'VP40'] def dump_genes(species, recs, g_dls, p_hdls): for rec in recs: for feature in rec.feature_table: if feature.key == 'CDS': gene_name = None for qual in feature.qualifiers: if qual.name == 'gene': if qual.value in my_genes: gene_name = qual.value elif qual.name == 'translation': protein_translation = qual.value if gene_name is not None: locs = feature.location.split('.') start, end = int(locs[0]), int(locs[-1]) g_hdls[gene_name].write('>%s_%s\n' % (species, rec.accession)) p_hdls[gene_name].write('>%s_%s\n' % (species, rec.accession)) g_hdls[gene_name].write('%s\n' % rec.sequence_text[start - 1 : end]) p_hdls[gene_name].write('%s\n' % protein_translation) g_hdls = {} p_hdls = {} for gene in my_genes: g_hdls[gene] = open('%s.fasta' % gene, 'w') p_hdls[gene] = open('%s_P.fasta' % gene, 'w') for species, recs in get_other_ebolavirus_sources(): if species in ['RESTV', 'SUDV']: dump_genes(species, recs, g_hdls, p_hdls) for gene in my_genes: g_hdls[gene].close() p_hdls[gene].close() # - # ## Genome exploration def describe_seqs(seqs): print('Number of sequences: %d' % len(seqs.taxon_set)) print('First 10 taxon sets: %s' % ' '.join([taxon.label for taxon in seqs.taxon_set[:10]])) lens = [] for tax, seq in seqs.items(): lens.append(len([x for x in seq.symbols_as_list() if x != '-'])) print('Genome length: min %d, mean %.1f, max %d' % (min(lens), sum(lens) / len(lens), max(lens))) ebov_seqs = dendropy.DnaCharacterMatrix.get_from_path('ebov.fasta', schema='fasta', data_type='dna') print('EBOV') describe_seqs(ebov_seqs) del ebov_seqs print('ebolavirus sequences') ebolav_seqs = dendropy.DnaCharacterMatrix.get_from_path('other.fasta', schema='fasta', data_type='dna') describe_seqs(ebolav_seqs) from collections import defaultdict species = defaultdict(int) for taxon in ebolav_seqs.taxon_set: toks = taxon.label.split('_') my_species = toks[0] if my_species == 'EBOV': ident = '%s (%s)' % (my_species, toks[1]) else: ident = my_species species[ident] += 1 for my_species, cnt in species.items(): print("%20s: %d" % (my_species, cnt)) del ebolav_seqs # ## Genes # + import os gene_length = {} my_genes = ['NP', 'L', 'VP35', 'VP40'] for name in my_genes: gene_name = name.split('.')[0] seqs = dendropy.DnaCharacterMatrix.get_from_path('%s.fasta' % name, schema='fasta', data_type='dna') gene_length[gene_name] = [] for tax, seq in seqs.items(): gene_length[gene_name].append(len([x for x in seq.symbols_as_list() if x != '-'])) for gene, lens in gene_length.items(): print ('%6s: %d' % (gene, sum(lens) / len(lens))) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: LSST # language: python # name: lsst # --- import os import glob import lsst.eotest.sensor as sensorTest import lsst.eotest.image_utils as imutils import bot_eo_analyses as bot_eo from camera_components import camera_info from lsst.eotest.sensor.flatPairTask import find_flat2 # for use with TS8 data in the flat_pairs and ptc tasks # + # Specify a raft and a run to analyze. 
root_dir = '/gpfs/slac/lsst/fs1/g/data/jobHarness/jh_archive/LCA-11021_RTM' raft = 'LCA-11021_RTM-014' run = '9537' # Pick a CCD by slot. slot = 'S00' det_name = '{}_{}'.format(raft, slot) # + # File to gather the eotest results. A file with this name is updated by the various eotasks. eotest_results_file = '_'.join((det_name, run, 'eotest_results.fits')) # A function to help gather the data. def glob_files(*args): return sorted(glob.glob(os.path.join(root_dir, raft, run, *args))) # For the gain/psf and read noise, we use the Fe55 acquisition. job = 'fe55_raft_acq' bias_files = glob_files('fe55_raft_acq', 'v0', '*', slot, '*fe55_bias*.fits') fe55_files = glob_files('fe55_raft_acq', 'v0', '*', slot, '*fe55_fe55*.fits') # For full raft noise correlation matrix, one needs to crate a dictionary of bias files, keyed by slot. bias_file_dict = {slot: glob_files('fe55_raft_acq', 'v0', '*', slot, '*fe55_bias*.fits')[0] for slot in camera_info.get_slot_names()} # For bright defects and dark current tasks. dark_files = glob_files('dark_raft_acq', 'v0', '*', slot, '*dark_dark*.fits') # For dark defects and CTE tasks. sflat_files = glob_files('sflat_raft_acq', 'v0', '*', slot, '*sflat_500_flat_H*.fits') # For linearity and ptc tasks. flat1_files = glob_files('flat_pair_raft_acq', 'v0', '*', slot, '*flat1*.fits') # Set default gain values in case the Fe55 task is skipped. gains = {_: 0.8 for _ in range(1, 17)} # + # Run the Fe55 single sensor task. # %time bot_eo.fe55_task(run, det_name, fe55_files, bias_files) # Retrieve the calculated gains from the eotest results file. #eotest_results = sensorTest.EOTestResults(eotest_results_file) #gains = {amp: gain for amp, gain in zip(eotest_results['AMP'], eotest_results['GAIN'])} # - # Run the read noise task. # %time bot_eo.read_noise_task(run, det_name, bias_files, gains) # Run the raft-level overscan correlation calculation. # %time bot_eo.raft_noise_correlations(run, raft, bias_file_dict) # Run the bright defects task. mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) # %time bot_eo.bright_defects_task(run, det_name, dark_files, gains=gains, mask_files=mask_files) # Run the dark defects task. mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) # %time bot_eo.dark_defects_task(run, det_name, sflat_files, mask_files=mask_files) # Run the dark current task. mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) # %time dark_curr_pixels, dark95s = bot_eo.dark_current_task(run, det_name, dark_files, gains, mask_files=mask_files) bot_eo.plot_ccd_total_noise(run, det_name, dark_curr_pixels, dark95s, eotest_results_file) # Run the CTE task. 
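# (CTE = charge transfer efficiency; the task below estimates it from the superflat exposures
# and returns the combined superflat image used for the CTE plots.)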
mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) # %time superflat_file = bot_eo.cte_task(run, det_name, sflat_files, gains, mask_files=mask_files) bot_eo.plot_cte_results(run, det_name, superflat_file, eotest_results_file, mask_files=mask_files); # Run the linearity task mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) flat1_files_subset = flat1_files[::len(flat1_files)//4] # downselect so that this demo runs more quickly # %time bot_eo.flat_pairs_task(run, det_name, flat1_files_subset, gains, mask_files=mask_files, flat2_finder=find_flat2) # Run the PTC task mask_files = sorted(glob.glob('_'.join((det_name, run, '*mask.fits')))) # %time bot_eo.ptc_task(run, det_name, flat1_files[::2], gains, mask_files=mask_files, flat2_finder=find_flat2) # Run the tearing task # %time bot_eo.tearing_task(run, det_name, flat1_files) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Preparation # + import matplotlib.pyplot as plt import pandas as pd import seaborn as sns sns.set() # %matplotlib inline # - gh_raw = "https://raw.githubusercontent.com/" user = "abjer/" repo = "sds/" branch = "gh-pages/" file = "data/bechdel.csv" url = gh_raw + user + repo + branch + file df = pd.read_csv(url) # # Questions # #### Question 2 # + df['share_male'] = df['count_male'] / df['count'] df['share_female'] = 1 - df.share_male # option 1 df.loc[df.share_male==1,'status'] = 'all_male' df.loc[df.share_male==0,'status'] = 'all_female' df.loc[df.share_male.between(0,1, inclusive=False), 'status'] = 'mixed' # option 2 # transforms 0 to 'all_female', 1 'all_male' # fill missing values, i.e. 
between 0 and 1, with 'mixed' df['status'] = df.share_male\ .map({0:'all_female', 1:'all_male'})\ .fillna('mixed') # convert to a categorical variable df.status = pd.Categorical(df.status, categories=['all_female', 'mixed', 'all_male'], ordered=True) # - # #### Question 3 # + df['bechdel_pass'] = df.bechdel_test==3 df['Bechdel test'] = df.bechdel_pass.map({False:'Not passed', True:'Passed'}) df_non_actor = df[df.role.isin(['writer', 'director'])] # - # Calculating descriptive stats df_non_actor.groupby('status').bechdel_test.describe() # Example bar- and boxplot f,ax = plt.subplots(2,2, figsize=(8,6)) sns.barplot(x='status', y='bechdel_test', data=df_non_actor, ax=ax[0,0]) sns.boxplot(x='status', y='bechdel_test', data=df_non_actor, ax=ax[0,1]) sns.barplot(x='status', y='bechdel_pass', data=df_non_actor, ax=ax[1,0]) f,ax = plt.subplots(2,2, figsize=(8,6)) sns.barplot(x='status', y='bechdel_test', data=df_non_actor, ax=ax[0,0],hue='role') sns.boxplot(x='status', y='bechdel_test', data=df_non_actor, ax=ax[0,1],hue='role') sns.barplot(x='status', y='bechdel_pass', data=df_non_actor, ax=ax[1,0],hue='role') # Example lmplot sns.lmplot(x='share_male', y='bechdel_test', data=df_non_actor, x_jitter=.25, y_jitter=.5) # Example categorical variable bechdel test pass # + sns.barplot(y='share_male', x='Bechdel test', data=df_non_actor) # - # #### Question 4 # NOTE: this will be updated with student solutions # # Extra questions # Question 5 # + # selects only acting data # split by production year # compute mean for share_male and bechdel_test # smoothes the time series by taking 5 period rolling mean by_year = df[df.role=='actsin']\ .groupby('production_year')\ [['share_male', 'bechdel_test']].mean()\ .rolling(5).mean() # + f,ax = plt.subplots(1,2,figsize=(12,4)) # make placeholder for two plots, setting size # first plot: share_male by_year.share_male.plot(ax=ax[0]) ax[0].set_title('Share male') # second plot: bechdel_test by_year.bechdel_test.plot(ax=ax[1]) ax[1].set_title('Bechdel score') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### AUTOGRAD: AUTOMATIC DIFFERENTIATION # # https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html # Document about autograd.Function is at https://pytorch.org/docs/stable/autograd.html#function import torch # + # .requires_grad_( ... ) changes an existing Tensor’s requires_grad flag in-place. # The input flag defaults to False if not given. a = torch.randn(2, 2) a = ((a *3) / (a - 1)) print(a.requires_grad) a.requires_grad_(True) print(a.requires_grad) b = (a * a).sum() print(b.grad_fn) # + # Create a tensor and set requires_grad=True to track computation with it x = torch.ones(2, 2, requires_grad=True) print(x) # + # Do a tensor operation: y = x + 2 print(y) # + # y was created as a result of an operation, so it has a grad_fn. print(y.grad_fn) # + # Do more operations on y z = y * y *3 out = z.mean() print(z, out) # + # Let’s backprop now. # Because out contains a single scalar, # out.backward() is equivalent to out.backward(torch.tensor(1.)). out.backward() # + # Print gradients d(out)/dx print(x.grad) # + # Now let’s take a look at an example of vector-Jacobian product: x = torch.randn(3, requires_grad=True) y = x * 2 while y.data.norm() < 1000: y = y * 2 print(y) # + # Now in this case y is no longer a scalar. 
# torch.autograd could not compute the full Jacobian directly, # but if we just want the vector-Jacobian product, # simply pass the vector to backward as argument: v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float) y.backward(v) print(x.grad) # + # You can also stop autograd from tracking history on Tensors with .requires_grad=True # either by wrapping the code block in with torch.no_grad(): print(x.requires_grad) print((x ** 2).requires_grad) with torch.no_grad(): print((x ** 2).requires_grad) # + # Or by using .detach() to get a new Tensor with the same content # but that does not require gradients: print(x.requires_grad) y = x.detach() print(y.requires_grad) print(x.eq(y).all()) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Practical work 3 # Author: # ## Exercice 1 # ### a. Bayes Histograms # a) Loading training data import pandas as pd col_id = ['x1', 'x2', 'y'] data_train = pd.read_csv('ex1-data-train.csv', names=col_id) data_train.head(3) # b) Compute P(C0) and P(C1) # + x = data_train[col_id[:2]] y = data_train[col_id[-1:]] N = len(y) neg = y[y['y'] == 0].index pos = y[y['y'] == 1].index neg = x.loc[neg][col_id[0]],x.loc[neg][col_id[1]] pos = x.loc[pos][col_id[0]],x.loc[pos][col_id[1]] class_neg = neg[0]+neg[1] class_pos = pos[0]+pos[1] p_c0 = len(class_neg)/N p_c1 = len(class_pos)/N print("p_c0:",p_c0) print("p_c1:",p_c1) # - # c) Compute histograms for x1 and x2 # + import matplotlib.pyplot as plt import numpy as np fig, ((fig_11, fig_12),(fig_21, fig_22)) = plt.subplots(2, 2, figsize=(15,15)) histValues, edgeValues = np.histogram(pos[0], bins='auto') fig_11.bar(edgeValues[:-1], histValues) fig_11.set_title('x1 pass') fig_11.set_xlabel("Score") fig_11.set_ylabel("Students") histValues, edgeValues = np.histogram(neg[0], bins='auto') fig_12.bar(edgeValues[:-1], histValues) fig_12.set_title('x1 fail') fig_12.set_xlabel("Score") fig_12.set_ylabel("Students") histValues, edgeValues = np.histogram(pos[1], bins='auto') fig_21.bar(edgeValues[:-1], histValues) fig_21.set_title('x2 pass') fig_21.set_xlabel("Score") fig_21.set_ylabel("Students") histValues, edgeValues = np.histogram(neg[1], bins='auto') fig_22.bar(edgeValues[:-1], histValues) fig_22.set_title('x2 fail') fig_22.set_xlabel("Score") fig_22.set_ylabel("Students") plt.show() # - # d) compute likelihoods # ### Question to TA: # I really don't get why len(edgeValues) is len(histValues)-1? 
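# Note: `np.histogram` returns one *more* edge than bin count -- n bins need n + 1 boundaries
# (the left edge of every bin plus the right edge of the last bin), so
# `len(edgeValues) == len(histValues) + 1`. A quick sanity check on the scores loaded above:

# +
demo_counts, demo_edges = np.histogram(pos[0], bins='auto')
print(len(demo_counts), len(demo_edges))  # edges is always one longer than counts
# -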
# + def likelihoodHist(x, histValues, edgeValues): total_histValues = np.sum(histValues) for i in range(len(histValues)): if edgeValues[i] == x: return histValues[i]/total_histValues if edgeValues[i] > x: return histValues[i-1]/total_histValues return 0 #histValues, edgeValues = np.histogram(pos[0], bins='auto') #for i in range(N): # print(likelihoodHist(x['x1'][i], histValues, edgeValues)) # - # e) implement bayes rule # + data_test = pd.read_csv('ex1-data-test.csv', names=col_id) data_test_x = data_train[col_id[:2]] data_test_y = data_train[col_id[-1:]] data_test_N = len(data_test_y) neg_0_histValues, neg_0_edgeValues = np.histogram(neg[0], bins='auto') pos_0_histValues, pos_0_edgeValues = np.histogram(pos[0], bins='auto') neg_1_histValues, neg_1_edgeValues = np.histogram(neg[1], bins='auto') pos_1_histValues, pos_1_edgeValues = np.histogram(pos[1], bins='auto') result_x1 = [] for x in data_test_x['x1']: result_x1.append(np.argmax([likelihoodHist(x, neg_0_histValues, neg_0_edgeValues) * p_c0, likelihoodHist(x, pos_0_histValues, pos_0_edgeValues) * p_c1])) result_x2 = [] for x in data_test_x['x1']: result_x2.append(np.argmax([likelihoodHist(x, neg_1_histValues, neg_1_edgeValues) * p_c0, likelihoodHist(x, pos_1_histValues, pos_1_edgeValues) * p_c1])) result_x1x2 = [] for i in range (len(data_test_x)): result_x1x2.append(np.argmax([likelihoodHist(data_test_x['x1'][i], neg_0_histValues, neg_0_edgeValues) * p_c0 * likelihoodHist(data_test_x['x2'][i], neg_1_histValues, neg_1_edgeValues) * p_c0, likelihoodHist(data_test_x['x1'][i], pos_0_histValues, pos_0_edgeValues) * p_c1 * likelihoodHist(data_test_x['x2'][i], pos_1_histValues, pos_1_edgeValues) * p_c1])) print("x1 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x1) if x)/data_test_N) print("x2 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x2) if x)/data_test_N) print("x1x2 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x1x2) if x)/data_test_N) # - # - The result is scandalous.... I don't get it.. how is it even possible.. Logically the x1x2 should be the best. # ### b. 
Bayes Gaussian Distribution # + def likelihoodGauss(x, mean, var): return (1.0 / (var * np.sqrt(2 * np.pi)) * np.exp( - np.square(x - mean) / (2 * np.square(var)))) #print(likelihoodGauss(10, np.mean(neg[0]), np.var(neg[0]))) # + neg_0_mean = np.mean(neg[0]) neg_0_var = np.var(neg[0]) neg_1_mean = np.mean(neg[1]) neg_1_var = np.var(neg[1]) pos_0_mean = np.mean(pos[0]) pos_0_var = np.var(pos[0]) pos_1_mean = np.mean(pos[1]) pos_1_var = np.var(pos[1]) # + result_x1_gauss = [] for x in data_test_x['x1']: result_x1_gauss.append(np.argmax([likelihoodGauss(x, neg_0_mean, neg_0_var) * p_c0, likelihoodGauss(x, pos_0_mean, pos_0_var) * p_c1])) #print(result_x1_gauss) result_x2_gauss = [] for x in data_test_x['x1']: result_x2_gauss.append(np.argmax([likelihoodGauss(x, neg_1_mean, neg_1_var) * p_c0, likelihoodGauss(x, pos_1_mean, pos_1_var) * p_c1])) #print(result_x2_gauss) result_x1x2_gauss = [] for i in range (len(data_test_x)): result_x1x2_gauss.append(np.argmax([likelihoodGauss(data_test_x['x1'][i], neg_0_mean, neg_0_var) * p_c0 * likelihoodGauss(data_test_x['x2'][i], neg_1_mean, neg_1_var) * p_c0, likelihoodGauss(data_test_x['x1'][i], pos_0_mean, pos_0_var) * p_c1 * likelihoodGauss(data_test_x['x2'][i], pos_1_mean, pos_1_var) * p_c1])) #print(result_x1x2_gauss) print("x1 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x1_gauss) if x)/data_test_N) print("x2 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x2_gauss) if x)/data_test_N) print("x1x2 accuracy: ", sum(1 for x in (data_test_y['y'] == result_x1x2_gauss) if x)/data_test_N) # - # - The result is still scandalous.... # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="9tu5aMWDKJon" # # Verify GPU # + id="wVbAxbgFccI4" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619289507794, "user_tz": -480, "elapsed": 589, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="a3921eac-f71e-4a94-9fe8-856993e23f55" # !nvidia-smi # + [markdown] id="DBRukcVdFkmc" # # Weight and Bias (Assisting Metrics, Optional) # + id="ZJhFk-HiFiMp" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619289523722, "user_tz": -480, "elapsed": 15278, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="43d837c6-9aad-4d39-fb9d-46e55b15d284" # !pip install wandb # !wandb login project_name = "CoLA with BERT" # @param {type:"string"} import os os.environ["WANDB_PROJECT"] = project_name import wandb # + [markdown] id="nv5kop8DFwv9" # # Installation # + id="NlWEz_DGdmL2" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619289531313, "user_tz": -480, "elapsed": 21766, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="66077bf1-64a8-4032-c9ef-37f23d8c07f5" # !pip install transformers==4.5.1 datasets==1.5.0 accelerate==0.2.1 # + id="NYB87-3cd6VE" executionInfo={"status": "ok", "timestamp": 1619289537813, "user_tz": -480, "elapsed": 12144, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} import os import torch from accelerate import Accelerator from datasets import load_dataset, load_metric from torch import nn from torch.nn import CrossEntropyLoss, MSELoss from torch.utils.data import DataLoader from tqdm.auto import tqdm from transformers import ( AdamW, BertConfig, BertModel, BertTokenizerFast, default_data_collator, get_linear_schedule_with_warmup, ) # + id="rmPWXk40kfOG" executionInfo={"status": "ok", "timestamp": 1619289537817, "user_tz": -480, "elapsed": 11375, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} per_device_train_batch_size = 32 # @param {type:"slider", min:1, max:64, step:1} learning_rate = 2e-5 # @param {type:"slider", min:2e-7, max:2e-3, step:2e-7} num_train_epochs = 3 # @param {type:"slider", min:1, max:10, step:1} logging_steps = 10 # @param {type:"slider", min:10, max:100, step:5} # + [markdown] id="feEx7HM2KTU3" # # Tokenizer # + id="-L9Zno23KTnx" colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["68d939f55505414c825194e8982054ac", "ff69a9dda1894ff6b88f39a9f81cc2c0", "98d4c9b4ca4f4c00905c3bb228f5605d", "", "", "", "", "4e2314037704499c9c16880a1d2e9702", "", "434c2be7efaf4a6fa833641fbaf6b11c", "bad1462cc73e4c6ca05625b095db848c", "8091af899341431f9adb3d7c613a3827", "a80f48894f4d4a1fbd24c6c9e5c7a7b0", "", "d1488fe68e2d4e58bb134b58858b36e7", "", "1983c4310fa848ebb270ad9978acaf30", "5472623fa9ce49a8b7bffc06472d616d", "", "", "181c118e246440bcb38ce7deeaf87b8b", "", "", ""]} executionInfo={"status": "ok", "timestamp": 1619289540103, "user_tz": -480, "elapsed": 12809, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="578c8448-0ca0-4428-9090-ca055e17205e" tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") # + [markdown] id="3DcNvb7pCbS4" # # Dataset # + id="DDJ8tEhBfM_h" colab={"base_uri": "https://localhost:8080/", "height": 218, "referenced_widgets": ["522fccaf1fda4b0eaa9d3cad0b0522a8", "97796a0da4914b83b3a29ee88e58ddf6", "0a145341d4b547279bb4fa89eef2e596", "46a52d4cb17d464583381c60461f9617", "99d9f11b81104b299559717833e91498", "", "e054ece1a9944afd819a53ac93241438", "2ba6223403ca4b13a6e282486a5484c6", "", "28ca504d4f0e49dabb4357cf7bcc9746", "", "769e0622ff8f4174a1f3d96ec3462e37", "684702410ca645e3a5c6f3d7e1d2870d", "", "", "", "", "47d1a68f143847aeaf6942d9b74a7941", "", "8d22ea682a7044c3b62e12e9dacace47", "", "951f1ee32c0e4acebafb8317005fe79b", "669deb9013334792bff7d3647db4e7a1", "", "3aadee76307a421a849745859be0deeb", "8f8128e96590401b940be5cd3d8448e1", "", "", "66217887a9c440f08400a3d252be6e4b", "594d38fd3b29478984ec409e84a5c209", "", "be86db13de16402a87e2b384e60e8e95", "", "9eb01ee933854cbe88c91841e9d3bf8c", "", "6e9661800853459a8e4d64eb1875743e", "91b6e4fe99c44bb1a7ebd4116491a436", "", "fa6c21cc679645a0830226708c06463a", "31a3adc322bd408f98af87d703ca518f", "9f534f18938f4b7a8e6fa78dc4a80848", "", "8c0509e5ee914eed9e1ccbab742d6726", "724a3d4dd3b0498bacc4e9c022589418", "3393fd4ea83f4b8c9add5704e74b0a7f", "eb348e54e6d84ad78bb8108665715992", "48b25afd11944260be5d4a88fd59f034", "a6a7f95413f448a89598e13a2252f5ec"]} executionInfo={"status": "ok", "timestamp": 1619289542564, "user_tz": -480, "elapsed": 14587, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="0b6b461a-55a4-48a2-d0c1-5c22b5bd1442" datasets = load_dataset("glue", "cola") label_list = datasets["train"].features["label"].names # + id="xGQ8mBlJ5So-" executionInfo={"status": "ok", "timestamp": 1619289542565, "user_tz": -480, "elapsed": 14366, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} def preprocess_function(examples): # Tokenize the texts result = tokenizer( examples["sentence"], padding="max_length", max_length=128, truncation="longest_first", ) result["label"] = examples["label"] return result # + id="3mNVI6n65Sk7" colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["059a77a589644de8a9d3576b08a19a44", "415e955aa86c4c7994bdf81340f99e1b", "1b31010f34154d97949f4fc7715c423e", "d7999a05bc28474fa19aeb875b6c6fc9", "71bfd489e8e94df09d014dc497241c01", "075242e15de2453cbb389249ced66159", "b7029b5aa1104b6c858832b39a8f49a0", "0a11d948faad4e2f8e3ef7881c12c113", "5a9c0200174c414d8c9e5cd6e17c8e7e", "fe375dec6e7844f4a9f9ac4a03383bda", "4415fff623534920b7e96afb1df3e4ef", "", "", "da661072ad8349ebac756d37f6ff8552", "", "", "1c635f683d38450db674a54f3e449ff6", "", "", "25a421036c004f6cac64001c611a7f1a", "", "", "", "ad31a3838aca4e91a53b47d11a3ef08d"]} executionInfo={"status": "ok", "timestamp": 1619289543476, "user_tz": -480, "elapsed": 15027, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="b4a91514-6708-4640-8817-ce7734d47f08" datasets = datasets.map(preprocess_function, batched=True) datasets["test"] = datasets["test"].remove_columns("label") # + id="m0JWvfMW5Si0" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619289543476, "user_tz": -480, "elapsed": 14734, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="b1bc5f3d-5adb-4377-9ed9-2dd9d1ed5744" for index in range(3): print(f"Sample {index} of the training set: {datasets['train'][index]}.") # + id="Ow5S4TbKwHJR" executionInfo={"status": "ok", "timestamp": 1619289543477, "user_tz": -480, "elapsed": 14225, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} dataloaders = { "train": DataLoader( datasets["train"], collate_fn=default_data_collator, batch_size=per_device_train_batch_size, shuffle=True, ), "validation": DataLoader( datasets["validation"], collate_fn=default_data_collator, batch_size=per_device_train_batch_size, shuffle=True, ), "test": DataLoader( datasets["test"], collate_fn=default_data_collator, batch_size=per_device_train_batch_size, shuffle=True, ), } # + [markdown] id="M5_HZzpUCnQY" # # Model # + id="cinidhktZubv" executionInfo={"status": "ok", "timestamp": 1619290220230, "user_tz": -480, "elapsed": 914, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} class BertClassificationForCoLA(nn.Module): def __init__(self, config, from_pretrained=False): super().__init__() self.config = config self.num_labels = config.num_labels if not 
from_pretrained: self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.apply(self._init_weights) # Equivelent of self.init_weights() def _init_weights(self, module): """ Initialize the weights """ if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) @classmethod def from_pretrained(cls, model_name: str, config): model = cls(config, from_pretrained=True) model.bert = BertModel.from_pretrained(model_name, config=config) return model def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) pooled_output = self.dropout(outputs[1]) logits = self.classifier(pooled_output) loss = None if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) # return final return { "loss": loss, "logits": logits, } # + id="xvnUY6NJ5SrE" colab={"base_uri": "https://localhost:8080/", "height": 115, "referenced_widgets": ["8154042a7e884162a98e3c81d8781e7b", "", "", "69a8e2b579a54964916ed6f53dfefa17", "db8597721d1141baa20c10ab79a2a1db", "0f4fa0290f7b442c9a196dc850844ccf", "", "", "", "", "", "", "", "cbb4f27d002a42c79246dcf292e204a2", "bafa16537b26423ea395fa98dc862025", "55a8bb11c05b4cc99d6a734840cb479a"]} executionInfo={"status": "ok", "timestamp": 1619289555550, "user_tz": -480, "elapsed": 24721, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="90019c91-d124-43ca-f5ce-f4129539981d" config = BertConfig.from_pretrained( "bert-base-cased", num_labels=len(label_list), finetuning_task="cola" ) model = BertClassificationForCoLA.from_pretrained("bert-base-cased", config) # + [markdown] id="eR0zJvATDF4Y" # # Optimizer, scheduler, accelerator # + id="6hFGh8Fz8hTU" executionInfo={"status": "ok", "timestamp": 1619289555551, "user_tz": -480, "elapsed": 23640, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [ p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay) ], "weight_decay": 0.01, }, { "params": [ p for n, p in model.named_parameters() if any(nd in n for nd in no_decay) ], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate) num_warmup_steps = 0 
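# get_linear_schedule_with_warmup increases the learning rate linearly from 0 to `learning_rate`
# over the first `num_warmup_steps` optimizer steps, then decays it linearly to 0 over the
# remaining training steps; with num_warmup_steps = 0 this reduces to a plain linear decay.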
scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps, num_train_epochs * len(dataloaders["train"]) ) # + id="fs-vxXTKv7zc" executionInfo={"status": "ok", "timestamp": 1619289566142, "user_tz": -480, "elapsed": 33929, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} accelerator = Accelerator() ( model, optimizer, dataloaders["train"], dataloaders["validation"], dataloaders["test"], ) = accelerator.prepare( model, optimizer, dataloaders["train"], dataloaders["validation"], dataloaders["test"], ) # + [markdown] id="SASUk8x7DKmO" # # Training # + id="0OQKPK5inqrt" colab={"base_uri": "https://localhost:8080/", "height": 567, "referenced_widgets": ["8a8368653bfc4f348f00ee529e4037c4", "687f340b102245dcaf33d7a5aa5fe96b", "65273085867d4f1399a353094d1b9647", "74d73b3b1b424e14a2e1443dea7dceeb", "89ab07e05cc74725875ade04e706f179", "", "d7a5408195a84108a799dc375c3ed41c", "05de86eb83094a29a4fa887a5168ea4d", "2bfa1187c0114e949da5d5859acbc351", "", "", "", "e08be46a9f3f45b29bb78e61e73e3aa1", "", "2f90e2bd43ef444f85631e34c7d4abe0", "", "75f4daca675744bf8a38faf7d0e440e7", "53496062142648a1a2a32a4c4cacfa03", "", "f73172225c1f43f1ad60c0be914086c4", "", "5ea08c467f0847e6bbe01b0a0254e5f9", "fce1f9b554a946a79c7832e4923fde86", "1852be21114548a78415eb010a3e453f", "", "", "", "", "29e268319dbc4e498c7e44c4088d00bf", "90825c16fa7845fdadebfcba0be1c29e", "", "7de4fc903af54ba685d116426ae3e913", "", "9f17e3c6982e4e54aac6da56b82a05ca", "50b72282e34140d2827d1eef0a9d3efb", "", "e2dfe04a610d481194b904c09c87072a", "ea0fffac671848e191a6af7e78ae3df4", "f64de32e5c104311aba65d74cb708e1a", "", "7bed6b30fbc74e0db1bb5b849875c68e", "1272fd4bdafe4022884d4eb2889a5e46", "44b7693e922947feab96eb093ee26135", "afa091b45fda4df2aa1ed034d597ee73", "f10bd3c394264a2684f33b636e3f4a6c", "738f7227013e4d51b4dbbabf677b93a2", "0ff2d3eef45e4fb4a450ca9d90559569", "dec07e4ea72a407190134b1927b0fe97", "4266d9baa35448ffa0e5e43900b6ecae", "", "f1cc99a685414e568d9bc9eee4ac97ee", "", "ed382b34547d42dd9f3e8bb5caaad124", "6a33fdebe534480fba82e8bfcab06d6e", "0c8e283793de4c7b851020d8fa4a3d92", "96f78bff08ba4bc2898d57c50bf55a86"]} executionInfo={"status": "ok", "timestamp": 1619290162516, "user_tz": -480, "elapsed": 582070, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="83962397-dc2d-4914-e1df-df21f503598d" run_name = "PyTorch Scratch" if os.getenv("WANDB_PROJECT"): wandb.init(project=project_name, name=run_name) wandb.watch(model) for epoch in range(num_train_epochs): model.train() for step, batch in enumerate(tqdm(dataloaders["train"])): batch.pop("idx") optimizer.zero_grad() outputs = model(**batch) loss = outputs["loss"] accelerator.backward(loss) optimizer.step() scheduler.step() if step % logging_steps == 0: wandb.log( { "train/loss": loss, "train/epoch": epoch + step / len(dataloaders["train"]), "train/learning_rate": scheduler.get_last_lr()[0], } ) model.eval() metric = load_metric("glue", "cola") for batch in tqdm(dataloaders["validation"]): batch.pop("idx") outputs = model(**batch) outputs = accelerator.gather(outputs) predictions = outputs["logits"].argmax(dim=-1) references = accelerator.gather(batch["labels"]) metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() wandb.log({"eval/matthews_correlation": eval_metric["matthews_correlation"]}) 
os.mkdir(run_name) accelerator.wait_for_everyone() model = accelerator.unwrap_model(model) accelerator.save(model.state_dict(), os.path.join(run_name, "trained_model.pt")) # + [markdown] id="7fa1nAnrDRo1" # # Test # + [markdown] id="K6-azaAMdUW4" # ## On dataset # + id="m-sYOY_Z5SX2" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["ba6ddcf7a3b244b99bf9365c0956ad31", "1f14074fe504477987c0ca85d144439e", "fd29559944534a048a265c519f175e1c", "988cf5f95e174f4996c0f4b8efee3884", "603503008c23465ab76d69c62599ce53", "dd526935d43c4814a9d0d616b536be74", "d547b58b850246e58f82057cae8d9570", "233258573beb4c10946092103b57fb19"]} executionInfo={"status": "ok", "timestamp": 1619290170809, "user_tz": -480, "elapsed": 583369, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="a177bcc9-55f9-4fb8-d848-3c895f172d6b" predictions = torch.empty(0, dtype=torch.int64, device=accelerator.device) for batch in tqdm(dataloaders["test"]): batch.pop("idx") outputs = model(**batch) predictions = torch.cat((predictions, outputs["logits"].argmax(dim=-1)), 0) # + id="KBhIzZc85SVt" tags=[] colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619290171756, "user_tz": -480, "elapsed": 583954, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="44c059bd-3ac0-410e-81a4-da2f71ab6493" for index, (sample, pred) in enumerate(zip(datasets["test"]["sentence"], predictions)): print(f"{index}\t{label_list[pred.item()]}\t{sample}") # + [markdown] id="_rfR3-VmdRQn" # ## Manually # + id="J2lIJUZSYwmd" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619290171757, "user_tz": -480, "elapsed": 583280, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="b99b2262-df3d-4387-d91c-47962ea88020" sentence = "The probable hostile German reaction is unfortunate." 
# @param {type:"string"} tokenized_input = tokenizer(sentence, return_tensors="pt").to(accelerator.device) outputs = model(**tokenized_input) print(f"Prediction: {label_list[outputs['logits'].argmax(dim=-1).item()]}") # + [markdown] id="4D8-YEkriwLX" # # Inference # + id="yNJduFfTfahF" executionInfo={"status": "ok", "timestamp": 1619290214182, "user_tz": -480, "elapsed": 808, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} import torch from torch import nn from transformers import BertConfig, BertModel, BertTokenizerFast # + id="nPjyt8H_i_8i" executionInfo={"status": "ok", "timestamp": 1619290215730, "user_tz": -480, "elapsed": 2109, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} device = torch.device("cuda" if torch.cuda.is_available() else "cpu") label_list = ["unacceptable", "acceptable"] tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") config = BertConfig.from_pretrained("bert-base-cased", finetuning_task="cola") # + id="M7aVfzCTKdnG" executionInfo={"status": "ok", "timestamp": 1619290230911, "user_tz": -480, "elapsed": 6386, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} # Scroll to top and run the class definition class first model = BertClassificationForCoLA.from_pretrained( "PyTorch Scratch/trained_model.pt", config ).to(device) # + id="PKdUDuHTjlMo" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619290233772, "user_tz": -480, "elapsed": 641, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="49e2d96a-2f6c-41ea-f0e8-ebdaff87d364" sentence = "The probable hostile German reaction is unfortunate." # @param {type:"string"} tokenized_input = tokenizer(sentence, return_tensors="pt").to(device) outputs = model(**tokenized_input) print(f"Prediction: {label_list[outputs['logits'].argmax(dim=-1).item()]}") # + id="Natc1mV0KdnG" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Weather Derivatives
#
# Precipitation Bogota Exploration - El Dorado Airport
#
# # Developed by [](mailto:)
    # 28 Nov 2018 # # Import modules to read and visualize. import pandas as pd import numpy as np import pickle # %pylab inline # ## Data And Study area Section # + # Configure path to read txts. path = '../../datasets/ideamBogota/' from io import StringIO # """Determine whether a year is a leap year.""" def isleapyear(year): if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0): return True return False # Read only one year. def loadYear(year): year=str(year) filedata = open(path+ year +'.txt', 'r') # Create a dataframe from the year's txt. columnNames=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'] precipitationYear =pd.read_csv(StringIO('\n'.join(' '.join(l.split()) for l in filedata)),sep=' ',header=None, names=columnNames,skiprows=lambda x: x in list(range(0,3)) , skipfooter=4 ) # Sort data to solve problem of 28 days of Feb. for i in range(28,30): for j in reversed(range(1,12)): precipitationYear.iloc[i,j]= precipitationYear.iloc[i,j-1] # Fix leap years. if isleapyear(int(year)) and i == 28: count = 1 else: precipitationYear.iloc[i,1]= np.nan # Fix problem related to months with 31 days. precipitationYear.iloc[30,11] = precipitationYear.iloc[30,6] precipitationYear.iloc[30,9] = precipitationYear.iloc[30,5] precipitationYear.iloc[30,7] = precipitationYear.iloc[30,4] precipitationYear.iloc[30,6] = precipitationYear.iloc[30,3] precipitationYear.iloc[30,4] = precipitationYear.iloc[30,2] precipitationYear.iloc[30,2] = precipitationYear.iloc[30,1] for i in [1,3,5,8,10]: precipitationYear.iloc[30,i] = np.nan return precipitationYear # Convert one year data frame to timeseries. def convertOneYearToSeries(dataFrameYear,nYear): dataFrameYearT = dataFrameYear.T dates = pd.date_range(str(nYear)+'-01-01', end = str(nYear)+'-12-31' , freq='D') dataFrameYearAllTime = dataFrameYearT.stack() dataFrameYearAllTime.index = dates return dataFrameYearAllTime # Concatenate all time series between a years range. def concatYearsPrecipitation(startYear,endYear): precipitationAllTime = loadYear(startYear) precipitationAllTime = convertOneYearToSeries(precipitationAllTime,startYear) for i in range(startYear,endYear+1): tempPrecipitation=loadYear(i) tempPrecipitation= convertOneYearToSeries(tempPrecipitation,i) precipitationAllTime = pd.concat([precipitationAllTime,tempPrecipitation]) return precipitationAllTime # + # Plot precipitation over a set of years. startYear = 2005 endYear = 2015 precipitationAllTime = concatYearsPrecipitation(startYear,endYear) meanAllTime = precipitationAllTime.mean() ax = precipitationAllTime.plot(figsize=(20,10),grid=True,color='steelblue') ax.axhline(y=meanAllTime, xmin=-1, xmax=1, color='r', linestyle='--', lw=2) ax.set_xlabel('Year') ax.set_ylabel('Precipitation Amount (mm)') # + columnNames=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'] vv = pd.DataFrame(columns=columnNames) for year in range(1972,2015+1): vv.loc[year] = [sum(loadYear(year)[x]) for x in loadYear(year).columns ] # - vv.head() # + filehandler = open("../../datasets/historicalMonths.pk","wb") pickle.dump(vv,filehandler) filehandler.close() ''' file = open("Fruits.obj",'rb') object_file = pickle.load(file) file.close() ''' # - plt.figure(figsize=(12,8)) vv.boxplot() plt.ylabel('Precipitation Amount (mm)') plt.xlabel('Month') vv # + # Plot precipitation over a set of years. 
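# Overlay the daily rainfall series (dark blue line) with the monthly aggregated index (grey area).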
startYear = 2005 endYear = 2015 vdates = pd.date_range(str(startYear)+'-01-01', end = str(endYear)+'-12-31' , freq='M') vvT = vv dataVV = vvT.stack() dataVV.index = vdates precipitationAllTime = concatYearsPrecipitation(startYear,endYear) meanAllTime = precipitationAllTime.mean() ax = precipitationAllTime.plot(figsize=(20,10),grid=True,color='darkblue',label=' Daily Rainfall') #ax.axhline(y=meanAllTime, xmin=-1, xmax=1, color='r', linestyle='--', lw=2) ax.set_xlabel('Year') ax.set_ylabel('Precipitation Amount (mm)') ax2 = dataVV.plot(kind='area',figsize=(20,10),grid=True,color='grey', label='Monthly Index') plt.legend(loc=9,fontsize='x-large') # - precipitationAllTime.describe() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction # # Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed placerat pellentesque tortor at luctus. Cras varius dui odio, sit amet sodales ipsum ornare non. Mauris imperdiet interdum fermentum. Suspendisse ac nisl in dui feugiat pellentesque. In ac condimentum ligula. Nam nec arcu vel eros eleifend ultricies ut eu arcu. Phasellus dictum mauris a nunc tempor pellentesque vitae eget orci. Vestibulum gravida gravida ligula, eget rutrum dui pulvinar iaculis. Curabitur fermentum elementum purus, ac vulputate magna consectetur eu. Phasellus sodales facilisis tortor, nec iaculis ex aliquam a. Phasellus euismod justo a convallis tempus. Curabitur dignissim mi mauris. # # Maecenas congue ut lacus ac dapibus. Maecenas mollis, sem eget egestas pulvinar, eros augue aliquam neque, id porta neque lacus a augue. Vestibulum at pharetra velit, in facilisis mauris. Aenean sollicitudin elementum mi, eget pharetra nibh vestibulum sodales. Mauris in malesuada ipsum, vitae varius metus. Vestibulum non iaculis nibh. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Phasellus semper sodales metus id commodo. Quisque tincidunt, turpis quis imperdiet sollicitudin, ante dolor imperdiet nibh, nec iaculis risus massa non libero. Donec magna risus, dignissim eu semper ac, vestibulum quis tellus. Interdum et malesuada fames ac ante ipsum primis in faucibus. Integer eu justo non justo ullamcorper cursus eget vulputate erat. Nunc auctor quam posuere, varius dui in, accumsan mi. Donec aliquet lacus vitae orci ultricies feugiat. Proin viverra, felis vel euismod rutrum, ligula risus viverra orci, id maximus nisl urna vel neque. Integer sodales velit urna, in mattis leo ornare eu. # # Donec molestie eget lectus nec viverra. Nulla sed semper mauris, vitae suscipit mi. Vestibulum vel sodales magna. Vivamus laoreet vestibulum nibh, sed ornare lacus luctus id. Quisque fringilla lacus ac interdum iaculis. Aliquam accumsan nisl et libero dignissim eleifend. Sed magna enim, dictum sodales odio nec, interdum interdum tortor. Morbi sodales sem libero, in interdum diam cursus et. Quisque malesuada imperdiet sem, blandit luctus purus pharetra vel. 
# # # Data and Methods import numpy as np import pandas as pd import matplotlib.pyplot as plt import pywt import atddm from constants import AIRPORTS, COLORS, TZONES, CODES, BEGDT, ENDDT import seaborn as sns from math import log2, floor from matplotlib import colors from scipy import signal from statsmodels.robust import mad BEGDT = pd.Timestamp(BEGDT) ENDDT = pd.Timestamp(ENDDT) INTERVAL = 10 # in minutes TIMESTEP = 60*INTERVAL # in seconds NLEVEL = 8 WTYPE = 'db5' # CMAP = colors.ListedColormap(['red', 'darkred', # 'coral', 'orangered', # 'goldenrod', 'darkgoldenrod', # 'limegreen', 'darkgreen', # 'lightseagreen', 'seagreen', # 'steelblue', 'cadetblue', # 'blue', 'navy', # 'darkviolet', 'purple' # ]) CMAP = 'tab20' times = np.array([1800, 3600, 10800, 21600, 43200, 86400, 604800]) # in seconds XTICKS = 1./times CUTFRQ = XTICKS[2] XTKLBS = ['.5 h', '1 h', '3 h', '6 h', '12 h', '24 h', '1 w'] # + dd = atddm.load(subset=CODES) m3_bin = {} # m1_bin = {} CODES.sort() for code in CODES: indx = pd.date_range(start=BEGDT, end=ENDDT, freq=str(INTERVAL)+'min', tz=TZONES[code]) m3_bin[code] = atddm.binarrivals(dd[code].M3_FL240, interval=INTERVAL, tz=TZONES[code])[indx].fillna(0) # m1_bin[code] = atddm.binarrivals(dd[code].M1_FL240, # interval=INTERVAL, # tz=TZONES[code])[indx].fillna(0) # - # ## Wavelet analysis # + def findm(length, n=NLEVEL): return floor(length/2**n) def trimmedindex(serie, nlev=NLEVEL): m = findm(len(serie), nlev) lenmax = m * 2**nlev return serie.index[:lenmax] def wvlt_analysis(serie, wtype=WTYPE, nlev=NLEVEL): df = pd.DataFrame(index=trimmedindex(serie, nlev)) # df['signal'] = serie.iloc[:len(df)] x = serie.iloc[:len(df)] for j in range(nlev): level = j+1 ca, cd = pywt.dwt(x, wtype, mode='per') x = np.copy(ca) for i in range(level): apx = pywt.idwt(ca, None, wtype, mode= 'per') det = pywt.idwt(None, cd, wtype, mode= 'per') ca = apx cd = det for lbl, vec in zip(['approx', 'detail'], [apx, det]): label = 'level_{:d}_{:s}'.format(level, lbl) df[label] = vec colnames = [] for j in range(nlev): level = j+1 for lbl in ['approx', 'detail']: label = (level, lbl) colnames.append(label) df.columns = pd.MultiIndex.from_tuples(colnames, names=['level','type']) df[(0, 'signal')] = serie.iloc[:len(df)] return df.sort_index(axis=1) def power_spectrum(data): x = data - data.mean() ham = signal.hamming(len(data)) x = x*ham return np.abs(np.fft.fft(x))**2 # + # m1_wvlt = {} m3_wvlt = {} m3_ffts = {} levels = 10 fsize = (25,35) for code in CODES: # m1_wvlt[code] = wvlt_analysis(m1_bin[code], nlev=levels) tmp = wvlt_analysis(m3_bin[code], nlev=levels) m3_wvlt[code] = tmp.copy(deep=True) m3_ffts[code] = tmp.apply(power_spectrum) freqs = np.fft.fftfreq(len(tmp), TIMESTEP) freqs[freqs <= 0] = np.nan m3_ffts[code]['freqs'] = freqs m3_ffts[code] = m3_ffts[code].dropna().set_index('freqs') titles = [('Level {:d} :: approximation'.format(i), 'Level {:d} :: detail'.format(i)) for i in range(1, levels+1)] titles = [item for sublist in titles for item in sublist] # + def plot_in_time(icao): f, axes = plt.subplots(levels, 2, figsize=fsize) tmp = m3_wvlt[icao].loc[:, (slice(1,levels), slice(None))] tmp.plot(ax=axes, subplots=True, colormap=CMAP, legend=False, title=titles) return (f, axes) def plot_in_freq(icao): f, axes = plt.subplots(levels, 2, figsize=fsize) tmp = m3_ffts[icao].loc[:, (slice(1,levels), slice(None))] tmp.plot(ax=axes, subplots=True, colormap=CMAP, legend=False, title=titles) for ax in axes: ax[0].set_xticks(XTICKS) ax[0].set_xticklabels(XTKLBS) ax[0].set_xlim(right=CUTFRQ) 
ax[1].set_xticks(XTICKS) ax[1].set_xticklabels(XTKLBS) ax[1].set_xlim(left=CUTFRQ) return (f, axes) # - # # Denoising code='EGLL' deno = m3_bin[code].copy(deep=True) noisy_coefs = pywt.wavedec(deno, 'db5', mode='per') sigma = mad(noisy_coefs[-1]) uthresh = sigma*np.sqrt(2*np.log(len(deno))) denoised = noisy_coefs[:] denoised[1:] = (pywt.threshold(i, uthresh, 'soft') for i in denoised[1:]) m3_denoised = pywt.waverec(denoised, WTYPE, mode='per').flatten() deno = deno.to_frame(name='original') deno['denoised'] = pd.Series(m3_denoised.flatten(), index=m3.index) f, ax = plt.subplots() deno.loc['2016-08-01':'2016-08-07', 'original'].plot(ax=ax, color='cadetblue') deno.loc['2016-08-01':'2016-08-07', 'denoised'].plot(ax=ax, color='navy') plt.show() # # Time-Frequency Analysis code='EDDF' m3 = m3_bin[code].copy(deep=True) foo = m3.loc['2016-08-01':'2016-08-07'] xticks = [144*i for i in range(7)] xticklabels = ['2016-08-01', '2016-08-02', '2016-08-03', '2016-08-04', '2016-08-05', '2016-08-06', '2016-08-07'] f, ax = plt.subplots() widths = np.arange(1, 31) cwtmatr, freqs = pywt.cwt(foo, widths, 'mexh') im = ax.imshow(cwtmatr, extent=[0, len(foo), 1, 31], cmap='PRGn', aspect='auto', vmax=abs(cwtmatr).max(), vmin=-abs(cwtmatr).max()) ax.set_xticks(xticks) ax.set_xticklabels(xticklabels, rotation=30) plt.show() # # Decomposition # # ## Frankfurt code = 'EDDF' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## London Heathrow code = 'EGLL' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## London Gatwick code = 'EGKK' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## code = 'EHAM' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## code = 'LFPG' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## Madrid-Barajas code = 'LEMD' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## code = 'LIRF' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # ## Athens International code = 'LGAV' f, axes = plot_in_time(code) f.tight_layout() plt.show() f, axes = plot_in_freq(code) f.tight_layout() plt.show() # # Disruptions f, ax = plt.subplots(2, 2) df = m3_wvlt['LIRF'] df.loc['2016-06-27':'2016-07-10',(7, 'approx')].plot(ax=ax[0,0], title='Rome, Alitalia pilots strike, Jul 5') df = m3_wvlt['EDDF'] df.loc['2016-07-14':'2016-07-27',(7, 'approx')].plot(ax=ax[0,1], title='Frankfurt, Unknown event Jul 23') df.loc['2016-08-23':'2016-09-05',(7, 'approx')].plot(ax=ax[1,0], title='Frankfurt, woman evades security check Aug 31') df = m3_wvlt['LFPG'] df.loc['2016-07-22':'2016-08-04',(7, 'approx')].plot(ax=ax[1,1], title='Paris, Air France pilots strike, Jul 27 - Aug 2') f.tight_layout() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Riemann Zeta Functions $\zeta$ # The simplest representation is # # $$ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} $$ # # Often in physics problems, $s$ will be integer. 
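#
# As a quick numerical check (a minimal sketch using numpy), the partial sums of this series for $s = 2$ approach the known value $\zeta(2) = \pi^2/6 \approx 1.6449$:

# +
import numpy as np

n = np.arange(1, 200001)
partial_sum = np.sum(1.0 / n**2)   # truncated series for zeta(2)
print(partial_sum, np.pi**2 / 6)   # both are close to 1.6449
# -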
On the real line with $s>1$ we have an alternative definition: # # $$ \zeta(s) = \frac{1}{\Gamma(s)} \int_0^{\infty} \frac{u^{s-1}}{e^u - 1} du $$ # # This is an integral that is needed in quite common problems such as blackbody radiation. Turning it around slightly: # # $$ \int_0^{\infty} \frac{x^{\alpha}}{e^x - 1} dx = \Gamma(\alpha + 1) \zeta(\alpha + 1) $$ # # $\zeta(s)$ is defined over the complex plane, apart from some singularities (such as $s=1$). This is important in number theory and has spawned a vast literature, not immediately comprehensible to most of us. # Does $\zeta(s)$ have a simple representation for integer $s$? Some do, some don't: # + from IPython.display import Math from sympy import zeta, latex from sympy.abc import x, s for i in range(-7, 7): zeta_i = latex(zeta(i)) display(Math('\zeta({}) = {}'.format(i, zeta_i))) # - # For real $s$, Python has a few issues with $s < 1$. The function `scipy.special.zeta(x)` returns `nan` for $s \le 1$. A related function `scipy.special.zetac(x)` is more flexible, but the values returned are offset to $\zeta(s) - 1$ and the documentation warns it returns `nan` for $x < -30.8148$. # + import numpy as np import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 16}) import scipy.special as sp xlims = (-20, 20) x = np.linspace(xlims[0], xlims[1], 500) zetas = sp.zetac(x) + 1 # display(zetas) plt.figure(figsize=(9, 9)) plt.plot(x, zetas) plt.xlim(xlims) plt.ylim((-0.5, 1.5)) plt.xlabel('$x$') plt.ylabel('$\\zeta(x)$') plt.title('Plot of the zetas function for real x') plt.grid(True) # - # ## References: # # - MathWorld, http://mathworld.wolfram.com/RiemannZetaFunction.html # - Wikipedia, https://en.wikipedia.org/wiki/Riemann_zeta_function # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="WF4Qh5xVqM4D" # # Sentiment Analysis on Women's Dress Review # # * Tutorial: https://towardsai.net/p/nlp/sentiment-analysis-opinion-mining-with-python-nlp-tutorial-d1f173ca4e3c # # * Github: https://github.com/towardsai/tutorials/tree/master/sentyment_analysis_tutorial # + [markdown] id="cCdKoergjmDw" # **Download Dataset** # + colab={"base_uri": "https://localhost:8080/"} id="VLl8z7VEjpMw" outputId="f9c9a841-ef4d-4658-9445-30b7acf02068" # !wget https://raw.githubusercontent.com/towardsai/tutorials/master/sentiment_analysis_tutorial/women_clothing_review.csv # + [markdown] id="-hBtFjjoqV2_" # **Import All Required Packages** # + colab={"base_uri": "https://localhost:8080/"} id="-sSRIdY1HZCS" outputId="ab15e23e-0924-42bb-c694-f4ddb58df4ef" import pandas as pd import numpy as np import seaborn as sns import re import string from string import punctuation import nltk from nltk.corpus import stopwords nltk.download('stopwords') import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation, Dropout from tensorflow.keras.callbacks import EarlyStopping # + [markdown] id="YUH7jv5KrgG-" # **Read data from csv** # + colab={"base_uri": "https://localhost:8080/", "height": 306} id="aBdnYpaqqEfs" outputId="c4a417df-8a1e-42b2-de17-ed4e672cbbf3" df = 
pd.read_csv('women_clothing_review.csv') df.head() # + [markdown] id="N1DXfvBurnSO" # **Drop unnecessary columns** # + id="Gn_vu_gprlK1" df = df.drop(['Title', 'Positive Feedback Count', 'Unnamed: 0', ], axis=1) df.dropna(inplace=True) # + [markdown] id="VqT2elfcs4tj" # **Calculation of Polarity** # + id="GY08-o4YzR1i" df['Polarity_Rating'] = df['Rating'].apply(lambda x: 'Positive' if x > 3 else('Neutral' if x == 3 else 'Negative')) # + colab={"base_uri": "https://localhost:8080/", "height": 202} id="6b0k-DA-0IYJ" outputId="623e6ff7-6699-4a3d-b08f-8f79bf2ce8d3" df.head() # + [markdown] id="lMD-lue81NME" # **Plot the Rating visualization graph** # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="EwQ5_FWh1R-p" outputId="1db2e56f-a522-400a-8edf-235f14cd73dd" sns.set_style('whitegrid') sns.countplot(x='Rating',data=df, palette='YlGnBu_r') # + [markdown] id="cObGcj-R1_Uf" # **Plot the Polarity Rating count** # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="TglKEwxS2DcK" outputId="32a9bc27-7a55-450e-c1fd-e3380a0af5d1" sns.set_style('whitegrid') sns.countplot(x='Polarity_Rating',data=df, palette='summer') # + [markdown] id="l2ppN_Gj3MXZ" # **Data Preprocessing** # + id="KMryD2Ok3PbF" df_Positive = df[df['Polarity_Rating'] == 'Positive'][0:8000] df_Neutral = df[df['Polarity_Rating'] == 'Neutral'] df_Negative = df[df['Polarity_Rating'] == 'Negative'] # + [markdown] id="mUZim1eA3qDF" # **Sample negative and neutral polarity dataset and create final dataframe** # + id="9PqqFadV4Qaf" df_Neutral_over = df_Neutral.sample(8000, replace=True) df_Negative_over = df_Negative.sample(8000, replace=True) df = pd.concat([df_Positive, df_Neutral_over, df_Negative_over], axis=0) # + [markdown] id="B3so3XNt5PXg" # **Text Preprocessing** # + id="mvi7pt-d5R6E" def get_text_processing(text): stpword = stopwords.words('english') no_punctuation = [char for char in text if char not in string.punctuation] no_punctuation = ''.join(no_punctuation) return ' '.join([word for word in no_punctuation.split() if word.lower() not in stpword]) # + [markdown] id="1F75BGMk5zbT" # **Apply the method "get_text_processing" into column review text** # + colab={"base_uri": "https://localhost:8080/", "height": 306} id="52_Rbc3p5953" outputId="5e231d9d-8ea7-4821-d0a6-e8f956efb97f" df['review'] = df['Review Text'].apply(get_text_processing) df.head() # + [markdown] id="Fhxnn-pj7R6P" # **Visualize Text Review with Polarity Rating** # + colab={"base_uri": "https://localhost:8080/", "height": 202} id="yFeqv9LG7N5N" outputId="a06a4655-55ed-4748-e2c4-90d8820c70b1" df = df[['review', 'Polarity_Rating']] df.head() # + [markdown] id="rwpsuZPk8HCS" # **Apply One hot encoding on negative, neutral, and positive** # + colab={"base_uri": "https://localhost:8080/", "height": 202} id="lHrXiYDQ8Pj6" outputId="a80ce8f8-e4f3-4a6a-9f39-0f95d17d0007" one_hot = pd.get_dummies(df["Polarity_Rating"]) df.drop(['Polarity_Rating'],axis=1,inplace=True) df = pd.concat([df,one_hot],axis=1) df.head() # + [markdown] id="O63qBfKuE2nu" # **Apply Train Test Split** # + id="6PNWLYhME7Pf" X = df['review'].values y = df.drop('review', axis=1).values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42) # + [markdown] id="AuADdT8UFgOy" # **Apply vectorization** # + id="r20vXgZ6Fvqw" vect = CountVectorizer() X_train = vect.fit_transform(X_train) X_test = vect.transform(X_test) # + [markdown] id="akiDMHsvGNxD" # **Apply frequency, inverse document frequency:** # + id="_q3-ppruGRz7" tfidf = 
TfidfTransformer() X_train = tfidf.fit_transform(X_train) X_test = tfidf.transform(X_test) X_train = X_train.toarray() X_test = X_test.toarray() # + [markdown] id="rXV_Xf5kHB73" # **Add different layers** # + id="-Su4eu41HAUT" model = Sequential() model.add(Dense(units=12673,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=4000,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=500,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=3, activation='softmax')) opt=tf.keras.optimizers.Adam(learning_rate=0.001) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2) # + [markdown] id="NYGn2m1lIFvo" # **Fit the Model** # + colab={"base_uri": "https://localhost:8080/"} id="NtKRqIcYIEev" outputId="0de54d2f-bc01-4606-e290-1f0b216baab5" model.fit(x=X_train, y=y_train, batch_size=256, epochs=100, validation_data=(X_test, y_test), verbose=1, callbacks=early_stop) # + [markdown] id="mOSbaNrqJBeP" # **Evaluation of Model** # + id="ZTsIgolQJD2M" model_score = model.evaluate(X_test, y_test, batch_size=64, verbose=1) print('Test accuracy:', model_score[1]) # + [markdown] id="s6c7yRFKJUK4" # **Prediction** # + id="Q34Yb_0pJW3r" preds = model.predict(X_test) preds # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import collections import networkx as nx import matplotlib.pyplot as nzm # %matplotlib inline import os os.listdir() #read in file p = nx.read_edgelist('email-Eu-core-temporal-Dept3.txt',create_using=nx.Graph(),nodetype=int) print(nx.info(p)) #create a layout of fb pos = nx.kamada_kawai_layout(p) nzm.figure(figsize=(15,12)) nzm.axis('on') nx.draw_networkx(p,pos=pos,with_labels=False,node_size=50) #draw a bar chart of the betwenness centrality bc = nx.betweenness_centrality(p) nzm.bar(range(len(bc)),bc.values(),align='center') nx.eigenvector_centrality(p) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + """ Helper module to provide activation to network layers. 
Four types of activations with their derivates are available: - Sigmoid - Softmax - Tanh - ReLU """ import numpy as np import os import gzip import cPickle import wget import random import time def sigmoid(z): return 1.0 / (1.0 + np.exp(-z)) def sigmoid_prime(z): return sigmoid(z) * (1 - sigmoid(z)) def softmax(z): return np.exp(z) / np.sum(np.exp(z)) def softmax_prime(z): return softmax(z) * (1 - softmax(z)) def tanh(z): return np.tanh(z) def tanh_prime(z): return 1 - tanh(z) * tanh(z) def relu(z): return np.maximum(z, 0) def relu_prime(z): return float(z > 0) def load_mnist(): abs_path = os.path.join(os.getcwd(), 'data') if not os.path.exists(abs_path): os.mkdir(abs_path) wget.download('http://deeplearning.net/data/mnist/mnist.pkl.gz', out='data') print("load_mnist: images downloaded") data_file = gzip.open(os.path.join(os.curdir, 'data', 'mnist.pkl.gz'), 'rb') training_data, validation_data, test_data = cPickle.load(data_file) data_file.close() print("load_mnist: images unpacked") training_inputs = [np.reshape(x, (784, 1)) for x in training_data[0]] training_results = [vectorized_result(y) for y in training_data[1]] training_data = zip(training_inputs, training_results) validation_inputs = [np.reshape(x, (784, 1)) for x in validation_data[0]] validation_results = validation_data[1] validation_data = zip(validation_inputs, validation_results) test_inputs = [np.reshape(x, (784, 1)) for x in test_data[0]] test_data = zip(test_inputs, test_data[1]) print("load_mnist: images split into training, validation and test sets") return training_data, validation_data, test_data def vectorized_result(y): e = np.zeros((10, 1)) e[y] = 1.0 return e class NeuralNetwork(object): def __init__(self, sizes=list(), learning_rate=1.0, mini_batch_size=16, epochs=10): """Initialize a Neural Network model. Parameters ---------- sizes : list, optional A list of integers specifying number of neurns in each layer. Not required if a pretrained model is used. learning_rate : float, optional Learning rate for gradient descent optimization. Defaults to 1.0 mini_batch_size : int, optional Size of each mini batch of training examples as used by Stochastic Gradient Descent. Denotes after how many examples the weights and biases would be updated. Default size is 16. """ # Input layer is layer 0, followed by hidden layers layer 1, 2, 3... self.sizes = sizes self.num_layers = len(sizes) # First term corresponds to layer 0 (input layer). No weights enter the # input layer and hence self.weights[0] is redundant. self.weights = [np.array([0])] + [np.random.randn(y, x) for y, x in zip(sizes[1:], sizes[:-1])] # Input layer does not have any biases. self.biases[0] is redundant. self.biases = [np.random.randn(y, 1) for y in sizes] # Input layer has no weights, biases associated. Hence z = wx + b is not # defined for input layer. self.zs[0] is redundant. self._zs = [np.zeros(bias.shape) for bias in self.biases] # Training examples can be treated as activations coming out of input # layer. Hence self.activations[0] = (training_example). self._activations = [np.zeros(bias.shape) for bias in self.biases] self.mini_batch_size = mini_batch_size self.epochs = epochs self.eta = learning_rate def fit(self, training_data, validation_data=None): """Fit (train) the Neural Network on provided training data. Fitting is carried out using Stochastic Gradient Descent Algorithm. Parameters ---------- training_data : list of tuple A list of tuples of numpy arrays, ordered as (image, label). 
validation_data : list of tuple, optional Same as `training_data`, if provided, the network will display validation accuracy after each epoch. """ for epoch in range(self.epochs): random.shuffle(training_data) mini_batches = [ training_data[k:k + self.mini_batch_size] for k in range(0, len(training_data), self.mini_batch_size)] for mini_batch in mini_batches: nabla_b = [np.zeros(bias.shape) for bias in self.biases] nabla_w = [np.zeros(weight.shape) for weight in self.weights] for x, y in mini_batch: self._forward_prop(x) delta_nabla_b, delta_nabla_w = self._back_prop(x, y) nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] self.weights = [ w - (self.eta / self.mini_batch_size) * dw for w, dw in zip(self.weights, nabla_w)] self.biases = [ b - (self.eta / self.mini_batch_size) * db for b, db in zip(self.biases, nabla_b)] if validation_data: accuracy = self.validate(validation_data) / 100.0 print("Epoch {0}, accuracy {1} %.".format(epoch + 1, accuracy)) else: print("Processed epoch {0}.".format(epoch)) def validate(self, validation_data): """Validate the Neural Network on provided validation data. It uses the number of correctly predicted examples as validation accuracy metric. Parameters ---------- validation_data : list of tuple Returns ------- int Number of correctly predicted images. """ validation_results = [(self.predict(x) == y) for x, y in validation_data] return sum(result for result in validation_results) def predict(self, x): """Predict the label of a single test example (image). Parameters ---------- x : numpy.array Returns ------- int Predicted label of example (image). """ self._forward_prop(x) return np.argmax(self._activations[-1]) def _forward_prop(self, x): self._activations[0] = x for i in range(1, self.num_layers): intermediate1 = self.weights[i].dot(self._activations[i - 1]) if i == 2: # Interaction layer: Modified forward pass intermediate1 = np.multiply( intermediate1, self._activations[i - 1] ) self._zs[i] = ( intermediate1 + self.biases[i] ) self._activations[i] = sigmoid(self._zs[i]) def _back_prop(self, x, y): nabla_b = [np.zeros(bias.shape) for bias in self.biases] nabla_w = [np.zeros(weight.shape) for weight in self.weights] error = (self._activations[-1] - y) * sigmoid_prime(self._zs[-1]) nabla_b[-1] = error nabla_w[-1] = error.dot(self._activations[-2].transpose()) for l in range(self.num_layers - 2, 0, -1): error = np.multiply( self.weights[l + 1].transpose().dot(error), sigmoid_prime(self._zs[l]) ) if l == 1: # Interaction layer: first change to back-propagation error = 2 * np.multiply( error, sigmoid(self._zs[l]) ) nabla_b[l] = error nabla_w[l] = error.dot(self._activations[l - 1].transpose()) if l == 2: # Interaction layer: second change to back-propagation for idx in range(0, len(nabla_w) -1): nabla_w[l][idx] = self._activations[l - 1][idx] * nabla_w[l][idx] return nabla_b, nabla_w def load(self, filename='model.npz'): """Prepare a neural network from a compressed binary containing weights and biases arrays. Size of layers are derived from dimensions of numpy arrays. Parameters ---------- filename : str, optional Name of the ``.npz`` compressed binary in models directory. """ npz_members = np.load(os.path.join(os.curdir, 'models', filename)) self.weights = list(npz_members['weights']) self.biases = list(npz_members['biases']) # Bias vectors of each layer has same length as the number of neurons # in that layer. So we can build `sizes` through biases vectors. 
self.sizes = [b.shape[0] for b in self.biases] self.num_layers = len(self.sizes) # These are declared as per desired shape. self._zs = [np.zeros(bias.shape) for bias in self.biases] self._activations = [np.zeros(bias.shape) for bias in self.biases] # Other hyperparameters are set as specified in model. These were cast # to numpy arrays for saving in the compressed binary. self.mini_batch_size = int(npz_members['mini_batch_size']) self.epochs = int(npz_members['epochs']) self.eta = float(npz_members['eta']) def save(self, filename='model.npz'): """Save weights, biases and hyperparameters of neural network to a compressed binary. This ``.npz`` binary is saved in 'models' directory. Parameters ---------- filename : str, optional Name of the ``.npz`` compressed binary in to be saved. """ np.savez_compressed( file=os.path.join(os.curdir, 'models', filename), weights=self.weights, biases=self.biases, mini_batch_size=self.mini_batch_size, epochs=self.epochs, eta=self.eta ) # - training_data, validation_data, test_data = load_mnist() # + # Third hidden layer [784, 20, ==>20<==, 10] is the interaction layer higher_order_net = NeuralNetwork(sizes=[784, 20, 20, 10], learning_rate=0.1, mini_batch_size=10, epochs=20) start = time.time() higher_order_net.fit(training_data, validation_data=validation_data) end = time.time() print("time: ", end - start) # - test_accuracy = higher_order_net.validate(test_data) / 100.0 print("Test accuracy {0} %.".format(test_accuracy)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Source: https://pandas.pydata.org/pandas-docs/stable/index.html # # ### What is Pandas? # Pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. # # ### What data structures does Pandas use? # Pandas has 2 primary data structures - namely Series(1-D) and DataFrame(2-D). # # ### What to expect from this tutorial/notebook? # This tutorial is intended to introduce you to Pandas from a high level and then expose you to # - Data Acquisition # - Data Cleaning # - Data Filtering # - Data Aggregation # - Data Analysis (depending on time availability) # # ### How is this different from the countless other materials that are publicly available? # It is by no means exhaustive or extensive, rather you can consider it my share of learnings that I picked up and learned as I attempted to use Python. I will be sharing tips and tricks that I found to be helpful, but if you know something better, you are welcome to share it with me/us. # #### How to create data-structures in Pandas ? 
# Import required libraries import pandas as pd import numpy as np import os import pickle import csv from datetime import datetime # Creating a series my_numeric_series = pd.Series([2, 3, 5, 7], name="Primes_Under_10") print(my_numeric_series) my_character_series = pd.Series(["DesertPy", "SoCal_Python", "PyLadies_of_LA"], name="Some_Python_Meetups") print(my_character_series) my_mixed_series = pd.Series([2, "a", 4, "b"], name="Mixed_Series") print(my_mixed_series) # + # Creating a data frame # Method 1 - from list of lists print("******************************") print("Printing Dataframe from method 1") print("******************************") list_of_lists = [["", "Arizona", 2023], ["", "California", 2023], ["", "Florida", 2023], ["", "New York", 2022], ["", "Georgia", 2023]] governors_in_the_news_df = pd.DataFrame(data=list_of_lists, columns=["Name", "State", "Term_Expiry"]) print(governors_in_the_news_df) print("******************************") print("Printing Dataframe from method 2") print("******************************") # Method 2 - from dictionary of lists dict_of_lists = {"Name": ["", "", "", ""], "State": ["Washington", "Connecticut", "Kentucky", "North Carolina"], "Term_Expiry": [2021, 2023, 2023, 2021]} governors_df = pd.DataFrame(data=dict_of_lists) print(governors_df) print("******************************") print("Printing Dataframe from method 3") print("******************************") # Method 3 - from list of dictionaries list_of_dicts = [{'USA': 50, 'Brazil': 26, 'Canada':10}] states_in_countries_df = pd.DataFrame(data=list_of_dicts, index=["State_Count"]) print(states_in_countries_df) print("******************************") print("Printing Dataframe from method 4") print("******************************") # Method 4 - from lists with zip stock_symbols = ["AAPL", "AMZN", "V", "MA"] prices_i_wish_i_bought_them_at = [50, 10, 1, 78] stocks_i_wanted_df = pd.DataFrame(data=list(zip(stock_symbols, prices_i_wish_i_bought_them_at)), columns=["Stock_Symobl", "Dream_Price"]) print(stocks_i_wanted_df) print("******************************") print("Printing Dataframe from method 5") print("******************************") # Method 5 - dict of pd.Series dict_of_series = {'Place_I_Wanted_To_Be' : pd.Series(["New Zealand", "Fiji", "Bahamas"], index =["January", "February", "March"]), 'Place_I_Am_At' : pd.Series(["Home", "Home", "Home"], index =["January", "February", "March"])} lockdown_mood_df = pd.DataFrame(dict_of_series) print(lockdown_mood_df) # Delete the temporary varibles and datasets to avoid cluttering of workspace - these will not be used below del my_numeric_series, my_character_series, my_mixed_series, list_of_lists, governors_in_the_news_df, dict_of_lists del governors_df, list_of_dicts, states_in_countries_df, stock_symbols, prices_i_wish_i_bought_them_at, stocks_i_wanted_df, dict_of_series, lockdown_mood_df # - # ## Takeaways-1 # #### From the above examples, it is helpful to identify a few takeaways: # 1. Series and DataFrame can represent most of the commonly used data sets. Constructing your data into a Series or a DataFrame allows you to leverage a lot of built-in functionality that Pandas offers # 2. Series and DataFrame support homogeneous and heterogeneous data - meaning they can handle same data types as well as different data types # 3. Series and DataFrame have an index property which defaults to an integer but can be set as desired (imagine time stamps, letters etc.) # 4. Pandas 1.0.0 deprecated the testing module and limited to only assertion functions. 
While not advisable, if you are using a version < 1.0.0, pandas.util.testing offers close to 30 different built-in functions to whip up different data frames that make it easy to test. You can get the list of possible functions like so # # ``` # import pandas.util.testing as tm # dataframe_constructor_functions = [i for i in dir(tm) if i.startswith('make')] # print(dataframe_constructor_functions) # ``` # # 5. While all of these are good to know, a typical use-case would not require a user to create data, rather import/acquire data from several different data sources - which leads us to our first topic of Data Acquisition # ## Data Acquisition # # #### One of the most powerful and appealing aspects of Pandas is its ability to easily acquire and ingest data from several different data sources including but not limited to: # - CSV # - Text # - JSON # - HTML # - Excel # - SQL # # An exhaustive list can be found here - https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html # + # CSV print("******************************") print("Printing Dataframe from CSV") print("******************************") current_directory = os.getcwd() confimed_global_cases_file_path = os.path.join(current_directory, "covid-19_data","time_series_covid19_confirmed_global.csv") confirmed_global_cases_df = pd.read_csv(filepath_or_buffer=confimed_global_cases_file_path) print(confirmed_global_cases_df.head()) # JSON print("******************************") print("Printing Dataframe from JSON") print("******************************") json_file_path = os.path.join(current_directory, "sample_json.json") json_df = pd.read_json(json_file_path) print(json_df) # Excel print("******************************") print("Printing Dataframe from Excel") print("******************************") # excel_file_path = os.path.join(current_directory, "covid-19_data","time_series_covid19_confirmed_recovered.xlsx") os.chdir("covid-19_data") recovered_global_cases_df = pd.read_excel("time_series_covid19_recovered_global.xlsx") os.chdir(current_directory) print(recovered_global_cases_df.head()) # Delete the temporary varibles and datasets to avoid cluttering of workspace - these will not be used below del json_file_path, json_df, recovered_global_cases_df # - # ## Takeaways-2 # # #### Based on the few limited examples above: # 1. Pandas has several robust i/o parsers which makes it really easy to consume data from several different sources # 2. If you are going to work with pandas, it is best to use pandas to acquire the data, as long as parser exists because they are optimized to handle large sets of data # 3. What would have been a great example would be to consume SQL data, but since I don't have enough power on my machine to install a SQL Server program, I am forced to skip that - if you have access to a database, do try and consume data from the database. You would be needing either pyodbc or sqlalchemy or a similar package as a wrapper. # ## Data Cleaning & Data Filtering (They are quite intertwined) # # #### Messy (or) Unorganized data is very common. Even if the data is organized and logically makes sense, it might have missing data or NaN's or NA's or may not be organized in a way the user wants it, which could be a hindrance to smooth analysis of your data. # # ### Tip #1 - If you are going to modify a data frame and want the original data, create a copy # + # By copying a dataframe with deep flag turned on, a new data frame is created including a copy of the data and the indices. 
# Changes made to this new data frame are not reflected in the original data frame object and vice-versa confirmed_global_cases_copy_df = confirmed_global_cases_df.copy(deep=True) # Let us we want to only look at countries where there is state/province-level data available confirmed_global_cases_copy_df = confirmed_global_cases_copy_df[confirmed_global_cases_copy_df["Province/State"].notnull()] # Method 1 of filtering data in dataframe # - # Notice that the data frame index does not hold any special order now and it does not convey any meaning by itself, unless we reference it or compare with the original data frame. We can fix that by re-setting the index confirmed_global_cases_copy_df = confirmed_global_cases_copy_df.reset_index(drop=True) # drop flag prevents the column from being added back to the dataframe as a new column. also remember to assign the data frame back to your variable. resetting of index, returns an object and unless we capture it and re-assign to the same variable, the change is lost # Let us say, we only want countries in the Northern Hemisphere - please note that this is just one method to create a new column. I want to show as many different options as possible def hemisphere_flag(df): if (df["Lat"] >= 0): return 1 else: return 0 confirmed_global_cases_copy_df["Northern_Hemisphere_Flag"] = confirmed_global_cases_copy_df.apply(hemisphere_flag, axis=1) northern_hemisphere_confirmed_cases_df = confirmed_global_cases_copy_df.query("Northern_Hemisphere_Flag == 1") # Method 2 of filtering data in dataframe northern_hemisphere_confirmed_cases_df = northern_hemisphere_confirmed_cases_df.reset_index(drop=True) # + # Let us see how we can subset the data frame based on multiple conditions hubei_data_df = northern_hemisphere_confirmed_cases_df.loc[(northern_hemisphere_confirmed_cases_df["Country/Region"]=="China") & (northern_hemisphere_confirmed_cases_df["Province/State"]=="Hubei")] # Method 3 of filtering data in dataframe # Alternatively , if you wanted to generate a flag and not necessarily subset, you can use Numpy as follows northern_hemisphere_confirmed_cases_df['Alberta_Flag'] = np.where((northern_hemisphere_confirmed_cases_df["Country/Region"]=="Canada") & (northern_hemisphere_confirmed_cases_df["Province/State"]=="Alberta"), 1, 0) # + # Let us look at the hubei data set, since we know that Hubei is in China and China is in the Northern Hemisphere, let us try to drop/delete the columns from the data set columns_to_drop_in_hubei_data = ["Country/Region", "Northern_Hemisphere_Flag"] hubei_data_df.drop(columns_to_drop_in_hubei_data, inplace=True, axis=1) # Alternatively, we could do the following, let us perform a similar action on Northern Hemisphere data columns_to_drop_in_northern_hemisphere = ["Alberta_Flag", "Northern_Hemisphere_Flag"] northern_hemisphere_confirmed_cases_df.drop(columns=columns_to_drop_in_northern_hemisphere, inplace=True) # + # Let us delete a couple of the intermediate data frames that we created for demonstration purposes del hubei_data_df, northern_hemisphere_confirmed_cases_df # Now, let us use the copy we created to learn something else confirmed_global_cases_copy_df.info(verbose=True) # - # Changing the dtype of a column is another handy data cleansing technique that you would be using a lot confirmed_global_cases_copy_df.astype({'Lat': 'object', 'Long': 'object'}).dtypes # If it instead made sense to case all your columns to a single data type you could simply do this confirmed_global_cases_copy_df.astype('object').dtypes # Now, let us look 
at "index", a very powerful and handy component of a Pandas data frame # While it does not need to be unique like a SQL primary key, it will definitely help optimize execution of a lot of the methods if the index is unique. china_confirmed_cases_df = confirmed_global_cases_copy_df[confirmed_global_cases_copy_df["Country/Region"].isin(["China"])] # We can see that and we have covered this before that index has been disrupted after the subset, so let us create a new index china_confirmed_cases_df.reset_index(drop=True, inplace=True) # Now, let us assign one of the columns as the index china_confirmed_cases_df.set_index(keys=["Province/State"], drop=True, inplace=True) china_confirmed_cases_df.index # Ok, so how does setting the index really help ? china_subset_df = china_confirmed_cases_df.filter(like="Hubei", axis=0) # It allows filtering based on the unique index - especially powerful if data is time-series based # Method 4 of filtering data in dataframe # ## Takeaways-3 # ` # #### We covered several data cleaning and data filtering techniques, key points to remeber are: # 1. Unless, we want to modify the source data, it is a good idea to make a deep copy of a data frame before making any modifications # 2. There are several different options to create a new column in a data frame, we covered a couple of these # - Using the apply function on a user defined function # - Using the np.where construct # 3. We also looked at different ways to drop unwanted columns in a given dataframe (users are welcome to choose the construct that makes most sense to them) # - Using the axis keyword argument # - Using the columns keyword argument # 4. Using the info method with verbose set to true, produces a lot of helpful # 5. Casting individual column(s) to the desired data type as well as entire data frame to a data type of choice is possible using the astype() # 6. Resetting, setting and filtering using index of a dataframe (what has not been covered here is MultiIndex. That is a very useful concept, nut I am still trying to get a handle and could not cover that today) # 7. 
Different ways to filter data in a dataframe: # - Using built in methods like ```pd.DataFrame.notnull()``` or ```pd.DataFrame.isin()``` along with logical indexing # - Using ``` pd.DataFrame.query() ``` # - Using boolean indexing (Method 1 can be considered a subset of this) # - Using ``` pd.DataFrame.filter() ``` # - There are other methods which haven't been covered like ```pd.DataFrame.iloc()``` which is better suited when the indices of rows desired are known # + ## Data Aggregation #### Source: https://data.open-power-system-data.org/household_data/ # - # Import Time Series Household data from open power system platform os.chdir(current_directory) household_data_df = pd.read_csv("household_data_15min.csv") household_data_df.drop(columns=["cet_cest_timestamp"], inplace=True) # Dropping the localized timestamp column as we are not going to try and deep dive into that household_data_df["utc_timestamp"] = pd.to_datetime(household_data_df["utc_timestamp"], format="%Y-%m-%dT%H:%M:%SZ") household_data_df.set_index("utc_timestamp", drop=True, inplace=True) # We see that there are rows with NaN, lets clear them out household_data_df.dropna(how="all") # Since the data seems to be agregated, let us calculate actual generation at each time stamp household_data_generation_df = household_data_df.copy(deep=True) household_data_generation_df.drop(columns="interpolated", inplace=True) household_data_generation_df.info(verbose=True) # before calculating generation, we want to ensure that all columns are actual numeric and there is no character data household_data_generation_df = household_data_generation_df.diff() # To calculate hourly generation data - let us try a few things hour_grouping = household_data_generation_df.index.hour hourly_sum_df = household_data_generation_df.groupby(hour_grouping).sum() hourly_mean_df = household_data_generation_df.groupby(hour_grouping).mean() # This result might have caught you by surprise, but if you pay close attention, the groups that we created were based on the unique values under that group, which in this case is hours. In order to calculate hourly totals or aggregates, we would need our group to contain all the things that they are different by leading up to the hourly level. So: hour_grouping_sel = [(household_data_generation_df.index.year), (household_data_generation_df.index.month), (household_data_generation_df.index.day), (household_data_generation_df.index.hour)] hourly_data_df = household_data_generation_df.groupby(hour_grouping_sel).sum() hourly_data_df.head() # + # Let us aggregate data by month and since there is 2 months of data for January, lets select only 2017 household_data_generation_df["Year"] = household_data_generation_df.index.year household_data_generation_df["Month"] = household_data_generation_df.index.month data_2017_df = household_data_generation_df[household_data_generation_df["Year"] == 2017] desired_cols = list(set(data_2017_df.columns) - set(["Year", "Month"])) monthly_data = pd.pivot_table(data_2017_df, index=["Year", "Month"], values=desired_cols, aggfunc=np.mean) # Alternatively # monthly_grouping_sel = [(household_data_generation_df.index.year), (household_data_generation_df.index.month)] # monthly_mean_df = data_2017_df.groupby(monthly_grouping_sel).mean() monthly_data # - # ## Takeaways-4 # # #### We covered several data aggregation techniques, key points to remeber are: # 1. There are several different options to aggregate and dis-aggregate data using pandas # 2. 
It is possible to achieve the desired result using any technique, users are recommended to pick the techniques that are easiest to them # 3. ``` groupby ``` is an extremely powerful technique and by far one of the most versatile and important tool from pandas # - Similar to Groupby from SQL # 4. Re-shaping data whether as a part of aggregation or cleanup is a very important technique to get a good grip on. Some techniques offered are: # - Using ``` groupby ``` # - Using ``` pd.pivot_table ``` # - Using ``` pd.pivot ```, ``` pd.stack ``` (not covered here) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Blah Blah Blah # *By , PhD student of the schulman lab* # # *Advisor: , PhD* # # *Johns Hopkins University* # # Blah Blah Blah # + # Package Importing import csv, math, os, time, copy, matplotlib, datetime, keras import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from keras.datasets import mnist from keras.models import Sequential, load_model from keras.layers import Dense, Dropout, Flatten from keras.layers.convolutional import Conv2D, MaxPooling2D from keras.utils import np_utils from scipy import io as spio from scipy.ndimage import gaussian_filter from scipy.stats import bernoulli from math import log10, floor from skimage import transform, exposure print(keras.__version__) # 2.4.3 print(tf.__version__) # 2.2.0 # - # #### Set Up Material Simulation Environment # %run ./Numeric_Simulation_of_Material_Behavior_indev.ipynb # + # Sample random design # max_seg = 7 # segment_lengths_ex_four_types = np.random.random(size = (max_seg,)) * 500 + 600 # segment_identities_ex_four_types = np.random.randint(0, high=(4 + 1),size = (2, max_seg)) # print(segment_lengths_ex_four_types) # print(segment_identities_ex_four_types) # Sample Design # 447.00237374 907.26817329 1176.51880725 1355.23921038 894.26759248] segment_lengths_ex_four_types = [938, 954, 1022, 843, 931, 722, 702, 655, 1066, 947] segment_identities_ex_four_types = [[2,3,2,3,2,3,4,0,1,4],[4,4,3,1,3,4,4,1,3,2]] b = ActuatorStrip(segment_lengths_ex_four_types, segment_identities_ex_four_types, four_t_rocs, four_t_ctls) # a.generate_curves() # set model for classification # CNN_dig_v1, CNN_dig_RSQ2_v1 # a.plot_input_design(save = True) # print("In the beginning, we started with MNIST trained CNN, but has low accuracy.") # cnn_digit_model = load_model("CNN_dig_v1.h5") # a.plot_output_map(score = True, save = False) # print("We later added strip data to improve accuracy and enable random squiggle identification.") # cnn_digit_model = load_model("CNN_dig_v1.h5") # a.plot_input_design(save = False) # a.plot_output_map(score = False, save = False) # a.plot_output_map(score = True, save = False) # print("We further increased the searching space vi`a rotation and mirroring") # a.plot_input_and_all(rotation = 10, save = False) # a.plot_input_and_selected(rotation = 20, save = False) cnn_digit_model = load_model("Deep_Learning_Classifier_v3.h5") # + def result_visualizer(result): datalist = result.replace("[","",6).replace("]","",6).split() Segments = 0; Identities = 0; sl = []; for i in datalist: if i == 'Segments:': Segments = 1 elif i == 'Identities:': Segments = 0; Identities = 1; idts = np.zeros(shape = (2, len(sl)), dtype = int) elif i == 'Formed:': Identities = 0 elif Identities > len(sl): idts[1][Identities-1-len(sl)] = i; Identities += 1 elif 
Identities: idts[0][Identities-1] = i; Identities += 1 if Segments and i != 'Segments:': sl.append(float(i)) s1 = ActuatorStrip(sl, idts, four_t_rocs, four_t_ctls) return s1 # - def ultimate_plotter(teststrip, digit_order, rotate_angle, score_index,\ test = False, save = False): teststrip.generate_curves() shiftlist = [5,5,5,5,9,9,9,9,13,13,13,13,17,17,17,17] statelist = ["ALL OFF", "S1 ON", "S2 ON", "S1 & S2", "S3 ON", "S1 & S3", "S2 & S3", "S1 & S2 & S3", "S4 ON", "S1 & S4", "S2 & S4", "S1 & S2 & S4", "S3 & S4", "S1 & S3 & S4", "S2 & S3 & S4", "ALL ON"] fig = plt.figure(figsize = (12, 6)) ax = plt.subplot(1, 2, 1) if not test: fig_width = int(np.sum(teststrip.segment_lengths) * 1.2); strip_width = int(fig_width/21); shift = int(fig_width*.6) cm = plt.cm.get_cmap('tab20') ax.imshow(np.ones(shape=(fig_width, fig_width)), cmap = "tab20b") for i in range(len(teststrip.segment_lengths)): ax.add_patch(matplotlib.patches.Rectangle((fig_width/2-strip_width,strip_width+np.sum(teststrip.segment_lengths[0:i])),strip_width,teststrip.segment_lengths[i], color = cm.colors[teststrip.identities[0][i]])) ax.add_patch(matplotlib.patches.Rectangle((fig_width/2,strip_width+np.sum(teststrip.segment_lengths[0:i])),strip_width,teststrip.segment_lengths[i], color = cm.colors[teststrip.identities[1][i]])) ax.add_patch(matplotlib.patches.Rectangle((strip_width, shift), strip_width*3, strip_width, color = cm.colors[0])) ax.add_patch(matplotlib.patches.Rectangle((strip_width, strip_width*1.5+shift), strip_width*3, strip_width, color = cm.colors[1])) ax.add_patch(matplotlib.patches.Rectangle((strip_width, strip_width*3+shift), strip_width*3, strip_width, color = cm.colors[2])) ax.add_patch(matplotlib.patches.Rectangle((strip_width, strip_width*4.5+shift), strip_width*3, strip_width, color = cm.colors[3])) ax.add_patch(matplotlib.patches.Rectangle((strip_width, strip_width*6+shift), strip_width*3, strip_width, color = cm.colors[4])) ax.text(shift/2.8, strip_width*1+shift, "Sys0", fontsize = 12, color = "white", family = "serif", weight = "bold") ax.text(shift/2.8, strip_width*2.5+shift, "Sys1", fontsize = 12, color = "white", family = "serif", weight = "bold") ax.text(shift/2.8, strip_width*4+shift, "Sys2", fontsize = 12, color = "white", family = "serif", weight = "bold") ax.text(shift/2.8, strip_width*5.5+shift, "Sys3", fontsize = 12, color = "white", family = "serif", weight = "bold") ax.text(shift/2.8, strip_width*7+shift, "Sys4", fontsize = 12, color = "white", family = "serif", weight = "bold") for i in range(len(teststrip.segment_lengths)): ax.annotate("%dum"%(teststrip.segment_lengths[i]), xy=(fig_width/2+strip_width,strip_width*1.5+np.sum(teststrip.segment_lengths[0:i])), xytext=(fig_width-strip_width*5, strip_width*1.5+np.sum(teststrip.segment_lengths[0:i])),\ arrowprops = dict(arrowstyle="-|>", color="white"), fontsize = 12, color = "white", family = "serif", weight = "bold") plt.title("Input Design", fontsize = 15, family = "serif", weight = "bold") plt.axis(False) ctr = 0; for i in range(16): ax = plt.subplot(4, 8, ctr + shiftlist[ctr]) curve = teststrip.curves[digit_order[i]]; curve.rotate(rotate_angle[i]*math.pi/180) img = curve.generate_image(filter = 'Gaussian') plt.imshow(img) plt.title(statelist[digit_order[i]], fontsize = 10, family = "serif", weight = "bold", y = .95) if i < 10: plt.plot(range(28),[0]*28, lw = 4, color = "#ffdf2b") plt.plot(range(28),[27]*28, lw = 4, color = "#ffdf2b") plt.plot([0]*28,range(28), lw = 4, color = "#ffdf2b") plt.plot([27]*28,range(28), lw = 4, color = "#ffdf2b") 
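# classify the rendered 28x28 curve image with the CNN and annotate the panel with the selected digit and its score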
scores = cnn_digit_model.predict(img.reshape(1,28,28,1))[0] plt.text(img.shape[1]*.05, img.shape[1]*.9, "{}: {:.3f}".format(np.argsort(scores)[-score_index[i]], np.sort(scores)[-score_index[i]]), fontsize = 9, family = "serif", weight = "bold", color = "white") plt.axis(False); ctr += 1 fig.suptitle("Design Input and Output Map", fontsize = 15, family = "serif", weight = "bold", y = .95) if save: plt.savefig(datetime.datetime.now().strftime("%Y%m%d_%H_%M_%S") + "_inandoutput.png", dpi = 600) plt.show() # + import cv2 def imflatfield(I, sigma): """ Python equivalent imflatfield implementation I format must be BGR and type of I must be uint8 """ A = I.astype(np.float32) / 255 # A = im2single(I); Ihsv = cv2.cvtColor(A, cv2.COLOR_RGB2HSV) # Ihsv = rgb2hsv(A); A = Ihsv[:, :, 2] # A = Ihsv(:,:,3); filterSize = int(2 * np.ceil(2 * sigma) + 1); # filterSize = 2*ceil(2*sigma)+1; # shading = imgaussfilt(A, sigma, 'Padding', 'symmetric', 'FilterSize', filterSize); % Calculate shading shading = cv2.GaussianBlur(A, (filterSize, filterSize), sigma, borderType = cv2.BORDER_REFLECT) meanVal = np.mean(A) # meanVal = mean(A(:),'omitnan') #% Limit minimum to 1e-6 instead of testing using isnan and isinf after division. shading = np.maximum(shading, 1e-6) # shading = max(shading, 1e-6); B = A * meanVal / shading # B = A*meanVal./shading; #% Put processed V channel back into HSV image, convert to RGB Ihsv[:, :, 2] = B # Ihsv(:,:,3) = B; B = cv2.cvtColor(Ihsv, cv2.COLOR_HSV2RGB) # B = hsv2rgb(Ihsv); B = np.round(np.clip(B*255, 0, 255)).astype(np.uint8) # B = im2uint8(B); return B def image_flat_field(img, sigma = 30): out2 = imflatfield(img, sigma) # Conver out2 to float32 before converting to LAB out2 = out2.astype(np.float32) / 255 # out2 = im2single(out2); shadow_lab = cv2.cvtColor(out2, cv2.COLOR_BGR2Lab) # shadow_lab = rgb2lab(out2); max_luminosity = 100 L = shadow_lab[:, :, 0] / max_luminosity # L = shadow_lab(:,:,1)/max_luminosity; shadow_adapthisteq = shadow_lab.copy() # shadow_adapthisteq = shadow_lab; # shadow_adapthisteq(:,:,1) = adapthisteq(L)*max_luminosity; clahe = cv2.createCLAHE(clipLimit=20, tileGridSize=(8,8)) cl1 = clahe.apply((L*(2**16-1)).astype(np.uint16)) # CLAHE in OpenCV does not support float32 (convert to uint16 and back). 
shadow_adapthisteq[:, :, 0] = cl1.astype(np.float32) * max_luminosity / (2**16-1) shadow_adapthisteq = cv2.cvtColor(shadow_adapthisteq, cv2.COLOR_Lab2BGR) # shadow_adapthisteq = lab2rgb(shadow_adapthisteq); # Convert shadow_adapthisteq to uint8 shadow_adapthisteq = np.round(np.clip(shadow_adapthisteq*255, 0, 255)).astype(np.uint8) # B = im2uint8(B); return shadow_adapthisteq # - # ## Even # + idts = [[4, 1, 4, 4, 4],[2, 2, 3, 2, 2]] sl = [1653, 1606, 1412, 1769, 1013] cnn_digit_model = load_model("Deep_Learning_Classifier_v3.h5") teststrip = ActuatorStrip(sl, idts, four_t_rocs, four_t_ctls) # - teststrip.plot_selected_output_map() # ## Odd # + idts = [[1, 2, 1, 3, 1],[2, 4, 2, 2, 2]] sl = [1898, 1138, 1635, 1069, 1199] cnn_digit_model = load_model("Deep_Learning_Classifier_v3.h5") teststrip = ActuatorStrip(sl, idts, four_t_rocs, four_t_ctls) # - teststrip.plot_selected_output_map() # ## Six Seg # + # perfect idts = [[2,3,4,0,3,2],[0,1,3,0,2,2]] sl = [1330, 1780, 1520, 1090, 1450, 1020] cnn_digit_model = load_model("Deep_Learning_Classifier_v3.h5") teststrip = ActuatorStrip(sl, idts, four_t_rocs, four_t_ctls) # teststrip.plot_output_map(score = False, save = False) # 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 digit_order = [12, 0, 5, 4, 6, 10, 1, 8, 3, 7, 15, 9, 13, 11, 14, 2] rotate_angle = [ 0, 0,-30,140,190,-80, 90,180, 50,280, 0, 0,200, 0,180,180] score_index = [ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ultimate_plotter(teststrip, digit_order, rotate_angle, score_index,\ test = False, save = False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="ejiUsLD1kasW" def calSquare(num): result = num * num return result # + id="KpmX9kaNlIFf" numbers = (1,2) # + colab={"base_uri": "https://localhost:8080/"} id="GOn99K5blmQK" outputId="22525ff4-c1ad-4cac-d12d-010ab22b5717" result = map(calSquare, numbers) print(set(result)) # + colab={"base_uri": "https://localhost:8080/"} id="bXgkje-Yl1ep" outputId="9582c9c9-ad75-4dde-a78f-3e29160c292d" calSquare(1), calSquare(2) # + colab={"base_uri": "https://localhost:8080/"} id="UnPMhiOomeh6" outputId="8ae68259-6f5e-4f0e-8059-622a513bcd1c" calplus = (lambda first, second : first + second) type(calplus) # + colab={"base_uri": "https://localhost:8080/"} id="L261IeCNmgAc" outputId="6cf136d8-7cd4-44e2-9f16-b75b2bdc0e6d" number01 = [1,2,3] number02 = [4,5,6] type(number01), type(number02) # + colab={"base_uri": "https://localhost:8080/"} id="yqEsHFIpq4T6" outputId="f0817960-d3ad-4498-bc15-b4f3be418580" calplus(number01[0],number02[0]) # + colab={"base_uri": "https://localhost:8080/"} id="s8-FvZImrp3q" outputId="02685cd3-4180-4661-ab51-b2184e9128ed" result = map(calplus,number01,number02) list(result) # + id="FBYtnUVEsAhL" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 10.6.4 Displaying Card Images with Matplotlib # from deck import DeckOfCards deck_of_cards = DeckOfCards() # ### Enable Matplotlib in IPython # %matplotlib # ### Create the Base `Path` for Each Image from pathlib import Path path = Path('.').joinpath('card_images') # ### Import the Matplotlib Features import matplotlib.pyplot as plt import matplotlib.image as mpimg # ### Create the `Figure` and `Axes` Objects figure, axes_list = 
plt.subplots(nrows=4, ncols=13) # ### Configure the `Axes` Objects and Display the Images for axes in axes_list.ravel(): axes.get_xaxis().set_visible(False) axes.get_yaxis().set_visible(False) image_name = deck_of_cards.deal_card().image_name img = mpimg.imread(str(path.joinpath(image_name).resolve())) axes.imshow(img) # ### Maximize the Image Sizes figure.tight_layout() # ### Shuffle and Re-Deal the Deck deck_of_cards.shuffle() for axes in axes_list.ravel(): axes.get_xaxis().set_visible(False) axes.get_yaxis().set_visible(False) image_name = deck_of_cards.deal_card().image_name img = mpimg.imread(str(path.joinpath(image_name).resolve())) axes.imshow(img) ########################################################################## # (C) Copyright 2019 by Deitel & Associates, Inc. and # # Pearson Education, Inc. All Rights Reserved. # # # # DISCLAIMER: The authors and publisher of this book have used their # # best efforts in preparing the book. These efforts include the # # development, research, and testing of the theories and programs # # to determine their effectiveness. The authors and publisher make # # no warranty of any kind, expressed or implied, with regard to these # # programs or to the documentation contained in these books. The authors # # and publisher shall not be liable in any event for incidental or # # consequential damages in connection with, or arising out of, the # # furnishing, performance, or use of these programs. # ########################################################################## # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Resumen de los datos: dimensiones y estructuras import pandas as pd import os mainpath= "/Users/mario/Documents/Coding/Machine Learning/Curso/python-ml-course/datasets" filename="titanic/titanic3.csv" titanic=pd.read_csv(os.path.join(mainpath,filename)) titanic.head(10) titanic.tail(3)#Cola titanic.shape titanic.columns.values #Obtener nombres de las columnas a partir del data frame # ## Resumen de los estadísticos básicos de las variables númericas # cantidad(cuantos no nulos),promedio,media,quantiles titanic.describe()#summary de R titanic.dtypes#tipo de datos de las columnas # # Missing values pd.isnull(titanic["body"]).values.ravel().sum()#Total de valores nulos de la columna pd.notnull(titanic["body"]).values.ravel().sum()#Total de valores no nulos de la columna # Los valores que faltan en un data set pueden venir por dos razones: # * Extracción de los datos de la base de datos # * Recolección de los datos # ### Borrado de valores que faltan titanic.dropna(axis=0,how="all").head() #Borraria la fila. 
axis=1 borraria toda la columna; how => columnas que deben tener NaN para ser borrada la fila data=titanic data.dropna(axis=0,how="any")#Si alguna columna tiene NaN se borra la fila # ### Cómputo de los valores faltantes # # Consiste en sustituir los valores NaN por algún valor determinado data.fillna(0).head()#Rellenar columnas NaN con un 0 data.fillna("Desconocido").head() #Con sobre escritura del data frame 'data': data=titanic data["body"]=data["body"].fillna(0)#Sustituye solo la columna de body data["home.dest"]=data["home.dest"].fillna("Desconocido") data.tail(3) data["age"].fillna(data["age"].mean()).head()#Sustituir por la media data["age"][1291] data["age"].fillna(method="ffill")[1291]#Sustituye por el primer valor conocido hacia atras data["age"].fillna(method="backfill")[1291]#Sustituye por el primer valor conocido hacia delante # # Variables Dummy data=titanic data["sex"].head(3)#Variable categorica de la que se obtendran dos variables dummy(male y female) dummy_sex=pd.get_dummies(data["sex"],prefix="sex") dummy_sex.head(3) #Eliminar la columna categorica sex y crear dos nuevas columnas generadas de las variables dummy: data2=data.drop(["sex"],axis=1)#Eliminar columna pd.concat([data2,dummy_sex],axis=1).head(2)#Concatenar las nuevas columnas def createDummies(dataFrame,var_name,axis): #pd => pandas library dummy=pd.get_dummies(dataFrame[var_name],prefix=var_name) dataFrame=dataFrame.drop(var_name,axis) dataFrame=pd.concat([dataFrame,dummy],axis) return dataFrame createDummies(titanic,"sex",1).head(3) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np samples = ['我 毕业 于 北京理工大学', '我 就职 于 中国科学院技术研究所'] token_index = {} for sample in samples: for w in sample.split(): if w not in token_index: token_index[w] = len(token_index) + 1 print(len(token_index)) print(token_index) result = np.zeros(shape=(len(samples), len(token_index) + 1, max(token_index.values()) + 1)) result.shape for i, sample in enumerate(samples): for j, w in list(enumerate(sample.split())): index = token_index.get(w) print(j, index, w) result[i, j, index] = 1 result results = np.zeros(shape=(len(samples), max(token_index.values()) + 1)) for i, sample in enumerate(samples): for _, w in list(enumerate(sample.split())): index = token_index.get(w) results[i, index] = 1 results # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # T M V A_Tutorial_Classification_Tmva # TMVA example, for classification # with following objectives: # * Train a BDT with TMVA # # # Modified from [ClassificationKeras.py](https://root.cern/doc/master/ClassificationKeras_8py.html) and [TMVAClassification.C](https://root.cern/doc/master/TMVAClassification_8C.html) # # # **Author:** # This notebook tutorial was automatically generated with ROOTBOOK-izer from the macro found in the ROOT repository on Tuesday, April 27, 2021 at 01:00 AM. from ROOT import TMVA, TFile, TTree, TCut from subprocess import call from os.path import isfile # Setup TMVA # ======================= # + TMVA.Tools.Instance() (TMVA.gConfig().GetVariablePlotting()).fMaxNumOfAllowedVariablesForScatterPlots = 5 outfileName = 'TMVA_tutorial_cla_1.root' output = TFile.Open(outfileName, 'RECREATE') # - # Create the factory object. 
Later you can choose the methods whose performance you'd like to investigate. The factory is # the only TMVA object you have to interact with # # The first argument is the base of the name of all the weightfiles in the directory weight/ # The second argument is the output file for the training results factory = TMVA.Factory("TMVAClassification", output, "!V:!Silent:Color:!DrawProgressBar:Transformations=I;D;P;G,D:AnalysisType=Classification") # Load data # ======================= # Background trfile_B = "SM_ttbar.root" # Signal # + trfile_S = "Zp1TeV_ttbar.root" if not isfile('tmva_reg_example.root'): call(['curl', '-L', '-O', 'http://root.cern.ch/files/tmva_reg_example.root']) data_B = TFile.Open(trfile_B) data_S = TFile.Open(trfile_S) trname = "tree" tree_B = data_B.Get(trname) tree_S = data_S.Get(trname) dataloader = TMVA.DataLoader('dataset') for branch in tree_S.GetListOfBranches(): name = branch.GetName() if name not in ["mtt_truth", "weight", "nlep", "njets"]: dataloader.AddVariable(name) # - # Add Signal and background trees dataloader.AddSignalTree(tree_S, 1.0) dataloader.AddBackgroundTree(tree_B, 1.0) # Set individual event weights (the variables must exist in the original TTree) # dataloader.SetSignalWeightExpression("weight") # dataloader.SetBackgroundWeightExpression("weight") # Tell the dataloader how to use the training and testing events # # If no numbers of events are given, half of the events in the tree are used # for training, and the other half for testing: # # dataloader->PrepareTrainingAndTestTree( mycut, "SplitMode=random:!V" ); # # To also specify the number of testing events, use: # # dataloader->PrepareTrainingAndTestTree( mycut, # "NSigTrain=3000:NBkgTrain=3000:NSigTest=3000:NBkgTest=3000:SplitMode=Random:!V" ); dataloader.PrepareTrainingAndTestTree(TCut(''), "nTrain_Signal=10000:nTrain_Background=10000:SplitMode=Random:NormMode=NumEvents:!V") # Generate model # BDT # + factory.BookMethod( dataloader, TMVA.Types.kBDT, "BDT", "!H:!V:NTrees=100:MinNodeSize=2.5%:MaxDepth=3:BoostType=AdaBoost:AdaBoostBeta=0.5:UseBaggedBoost:BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20") factory.BookMethod( dataloader, TMVA.Types.kBDT, "BDTG", "!H:!V:NTrees=1000:MinNodeSize=2.5%:BoostType=Grad:Shrinkage=0.10:UseBaggedBoost:BaggedSampleFraction=0.5:nCuts=20:MaxDepth=2") # - # Run TMVA # + factory.TrainAllMethods() factory.TestAllMethods() factory.EvaluateAllMethods() output.Close() # - # Draw all canvases from ROOT import gROOT gROOT.GetListOfCanvases().Draw() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # biopython Introduction

    #

# For the full documentation, see http://biopython.org/wiki/Biopython; for the tutorial, see http://biopython.org/DIST/docs/tutorial/Tutorial.html; for the cookbook, see http://biopython.org/wiki/Category%3ACookbook.

    #

# biopython is a general-purpose bioinformatics Python package, coordinated by the Open Bioinformatics Foundation (https://www.open-bio.org/wiki/Main_Page), which also coordinates, amongst other projects, BioPerl (http://bioperl.org/) and BioJava (http://biojava.org/). If you use the Anaconda Python distribution, the easiest way to install biopython is via the Anaconda Navigator. If you use pip, you can install biopython with the command "pip install biopython".
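# For example, in a notebook the pip route mentioned above can be run directly from a code cell
# (a minimal sketch; it assumes pip is available in the active environment):

# !pip install biopython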

    #

# Once you have installed biopython, you can import it into a Python script like this (note that you may have to import submodules of biopython to access all of its functionality):

    import Bio #
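# As a quick sanity check of the installation (a minimal sketch; only the top-level package is needed),
# you can print the installed version:
print(Bio.__version__)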

# # 1. Storing DNA, RNA and Amino Acid Sequences

from Bio.Seq import Seq
from Bio.Alphabet import IUPAC

# An unambiguous DNA sequence
unambiguous = Seq("ACGT", IUPAC.unambiguous_dna)
print(unambiguous)
print(unambiguous.alphabet)
print(unambiguous.count("AC"))  # Returns only non-overlapping occurrences
print(len(unambiguous))

# An ambiguous DNA sequence
ambiguous = Seq("ACGT", IUPAC.ambiguous_dna)
ambiguous

# An ambiguous RNA sequence
ambiguous = Seq("ACGU", IUPAC.ambiguous_rna)
ambiguous

# An amino acid sequence
protein = Seq("TRUMP", IUPAC.protein)
protein
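# Seq objects largely behave like Python strings, so the usual indexing and slicing work as expected
# (a small illustrative sketch, not part of the original notebook):
print(unambiguous[0])     # first base
print(unambiguous[1:3])   # slicing returns another Seq
print(str(unambiguous))   # convert to a plain Python string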

# # 2. Translation and transcription

# Complementary DNA
sequence = Seq("AGCCCTYA", IUPAC.ambiguous_dna)
sequence.complement()

# Reverse-complementary DNA
sequence.reverse_complement()

# DNA -> RNA
dna = Seq("AAATTTCCCGGG", IUPAC.unambiguous_dna)
rna = dna.transcribe()
rna

# DNA/RNA -> protein (the DNA's or RNA's length must be a multiple of 3)
print(dna.translate())
print(rna.translate())
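# translate() also accepts options, e.g. a different codon table or stopping at the first stop codon
# (an illustrative sketch; this particular DNA has no stop codon, so to_stop changes nothing here):
print(dna.translate(table=2))        # NCBI translation table 2 (Vertebrate Mitochondrial)
print(dna.translate(to_stop=True))   # stop translating at the first stop codon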

# # 3. Access to common databases

# NCBI Entrez...
from Bio import Entrez

# ...get all databases...
handle = Entrez.einfo()
result = handle.read()
handle.close()
result

# ...look up in PubMed
Entrez.email = ""
handle = Entrez.esearch(db="pubmed", term="Vibrio")
record = Entrez.read(handle)
record

# KEGG...
from Bio.KEGG import REST
from Bio.KEGG import Enzyme

# ...get pathways for Vibrio alginolyticus...
pathways = REST.kegg_list("pathway", "vag").read()

# ...select a pathway...
pathway_line = pathways.rstrip().split("\n")[0]
entry, description = pathway_line.split("\t")
entry

# ...and analyze this pathway
pathway_file = REST.kegg_get(entry).read()
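# Entrez can also fetch full records; a minimal sketch that downloads one GenBank record and parses
# it with SeqIO (the accession id is just an example, and Entrez.email should be set to your address):
from Bio import SeqIO

handle = Entrez.efetch(db="nucleotide", id="EU490707", rettype="gb", retmode="text")
gb_record = SeqIO.read(handle, "genbank")
handle.close()
print(gb_record.id, len(gb_record.seq))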

# # 4. Sequence Alignments

# +
# Simple Needleman-Wunsch (note: two cells below, a more complex alternative is shown)
from Bio import pairwise2
from Bio import SeqIO

seq1 = Seq("AAATTGGGCCGG", IUPAC.protein)
seq2 = Seq("AAAGGTTAAA", IUPAC.protein)
alignments = pairwise2.align.globalxx(seq1, seq2)
print(alignments[0])  # First optimal alignment
# -

# Simple Smith-Waterman (note: two cells below, a more complex alternative is shown)
alignments2 = pairwise2.align.localxx(seq1, seq2)
print(alignments2[0])  # First optimal alignment

# +
# Better global alignment alternative with penalizing options
from Bio.SubsMat.MatrixInfo import blosum62

alignments3 = pairwise2.align.globalds(seq1, seq2, blosum62, -10, -0.5)
print(alignments3[0])
# -

# Better local alignment alternative with penalizing options
alignments4 = pairwise2.align.localds(seq1, seq2, blosum62, -10, -1)
print(alignments4[0])
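# pairwise2 also ships a pretty-printer for alignments; a small sketch using the first global
# alignment computed above:
from Bio.pairwise2 import format_alignment

print(format_alignment(*alignments[0]))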

# # 5. BLAST

    # The biopython option is not reliable. Use the command line instead, e.g. blastn_output = run("blastn", "-query", "temp_in.txt", "-out", "temp_out.txt", "-db", "nt", "-outfmt", '6 sacc pident qstart qend evalue bitscore stitle sscinames sskingdoms', "-max_target_seqs", "5") #
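# One way to issue that command from Python is via subprocess.run (a sketch; it assumes the BLAST+
# binaries are installed locally and reuses the file names from the comment above):
import subprocess

blastn_result = subprocess.run(
    ["blastn", "-query", "temp_in.txt", "-out", "temp_out.txt", "-db", "nt",
     "-outfmt", "6 sacc pident qstart qend evalue bitscore stitle sscinames sskingdoms",
     "-max_target_seqs", "5"],
    check=True,
)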

# # 6. Sequence Motif Analysis

    from Bio import motifs # Generate nucleotide count report seqset = [Seq("ACGTG"), Seq("ATGTT"), Seq("TTTTT")] motif = motifs.create(seqset) motif.counts # Access a line motif.counts["T"] # Get consensus motif.consensus # Get anticonsensus motif.anticonsensus # Get degenerate consensus motif.degenerate_consensus # Create a logo (using a web service) motif.weblogo("./biopython/motif.png") # # PSB 2017 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # str s="Hello" print(s) print(id(s)) s="Hello111" print(s) print(id(s)) s="""Hello""" print(s) print(id(s)) # - #string slicing str="helloworld" print("first character:",str[0]) print("0 to upto 2 index:",str[:3]) print("1 to 4rth:",str[1:5]) print("2nd upto -3:", str[2:-2]) # + #string membership str= "helloWorld" if "q" in str: print("present") else: print("not present") if "W" not in str: print("not present") else: print("present") # + #strip the trailing and leading spaces s="Hello World " print("length: ", len(s)) print("strip:", s.strip()) s="hello,world,tannu" print("length after striping:", len(s.strip())) #split print("split the s:", s.split(",")) #split upto 1 string=> 2 strings will be generated print("split:", s.split(",",1)) #count the number of occurences in the string s="hello world" print("no of occurences:", s.count("l")) print("no of occ from 0 to 5:", s.count("l",0,5)) # - #find character and return the first occurence(match) index from 0to len-1 str="HelloWrold" print("found or not:", s.find("l",0,len(str))) # + #without slicing, reverse s=input("enter the string:") rev="" l=len(s) i=l-1 while i>=0: rev+=s[i] i-=1 print(rev) # - #using slicing s=input("input:") print(s[::-1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tensorflow] # language: python # name: conda-env-tensorflow-py # --- # + import builtins def print(*args, **kwargs): return builtins.print(*args, **kwargs, end='\n\n') # - from tensorflow.keras.preprocessing.text import Tokenizer sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!' ] tokenizer = Tokenizer(num_words=100) tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index print(word_index) # ---------------- # + sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!', 'Do you think my dog is amazing?' ] tokenizer = Tokenizer(num_words=100) tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index # sentence to list of tokens/numbers sequences = tokenizer.texts_to_sequences(sentences) print(word_index) print(sequences) # + test_data = [ 'i really love my dog', 'my dog loves my manatee' ] test_seq = tokenizer.texts_to_sequences(test_data) print(test_seq) # - # ______________ # + sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!', 'Do you think my dog is amazing?' 
] # special value for unseen word tokenizer = Tokenizer(num_words=100, oov_token="") tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index # sentence to list of tokens/numbers sequences = tokenizer.texts_to_sequences(sentences) print(word_index) print(sequences) test_data = [ 'i really love my dog', 'my dog loves my manatee' ] test_seq = tokenizer.texts_to_sequences(test_data) print(test_seq) # + # padding to make words uniform # - from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.preprocessing.text import Tokenizer # + sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!', 'Do you think my dog is amazing?' ] # special value for unseen word tokenizer = Tokenizer(num_words=100, oov_token="") tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index # sentence to list of tokens/numbers sequences = tokenizer.texts_to_sequences(sentences) # padding paddded = pad_sequences(sequences) print(paddded) # padding after the sentence paddded = pad_sequences(sequences, padding='post') print(paddded) # matrix len is the same as the sentence with maximum length # to change it paddded = pad_sequences(sequences, padding='post', maxlen=5) print(paddded) # loose info from end instead of front paddded = pad_sequences(sequences, padding='post', truncating='post', maxlen=5) print(paddded) # print(word_index) # print(sequences) # print(paddded) # - # _____________ import tensorflow as tf from tensorflow import keras from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!', 'Do you think my dog is amazing?' ] # + sentences = [ 'I love my Dog', 'I love my Cat', 'You love my Dog!!', 'Do you think my dog is amazing?' 
] sequences = tokenizer.texts_to_sequences(sentences) padded = pad_sequences(sequences, maxlen=5) tokenizer = Tokenizer(num_words=100, oov_token="") tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index sequences = tokenizer.texts_to_sequences(sentences) padded = pad_sequences(sequences, maxlen=5) print(f"Word Index: {word_index}") print(f"Sequences: {sequences}") print(f"Padded Sequences: \n{padded}") # + # testing test_data = [ 'i really love my dog', 'my dog loves my manatee' ] test_seq = tokenizer.texts_to_sequences(test_data) print(f"Test Sequences: {test_seq}") padded = pad_sequences(test_seq, maxlen=10) print(f"Padded test sequence: \n {padded}") # - stopwords = ["a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] nbgrader={} # # Numpy Exercise 2 # + [markdown] nbgrader={} # ## Imports # + nbgrader={} import numpy as np # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns # - import antipackage import github.ellisonbg.misc.vizarray as va # + [markdown] nbgrader={} # ## Factorial # + [markdown] nbgrader={} # Write a function that computes the factorial of small numbers using `np.arange` and `np.cumprod`. # + nbgrader={"checksum": "4e72a0f70d32914a77d8d31ae0c2fb5a", "solution": true} def np_fact(n): if n == 0: return 1 elif n == 1: return 1 else: a = np.arange(1,n+1,1) b = a.cumprod() return b[-1] # + deletable=false nbgrader={"checksum": "fcb54de452abd89e3b1818a34678d4b4", "grade": true, "grade_id": "numpyex02a", "points": 4} assert np_fact(0)==1 assert np_fact(1)==1 assert np_fact(10)==3628800 assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800] # + [markdown] nbgrader={} # Write a function that computes the factorial of small numbers using a Python loop. 
# + nbgrader={"checksum": "9341aa5eb68a3ce57e5cce1cd9192021", "solution": true} def loop_fact(n): if n == 0: return 1 elif n == 1: return 1 else: a = 1 while n >= 1: a = n * a n = n - 1 return a # + deletable=false nbgrader={"checksum": "a9946f0fdca1abf66ddfb0454ab10113", "grade": true, "grade_id": "numpyex02b", "points": 4} assert loop_fact(0)==1 assert loop_fact(1)==1 assert loop_fact(10)==3628800 assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800] # + [markdown] nbgrader={} # Use the `%timeit` magic to time both versions of this function for an argument of `50`. The syntax for `%timeit` is: # # ```python # # # %timeit -n1 -r1 function_to_time() # ``` # + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true} # YOUR CODE HERE # %timeit -n1 -r1 (loop_fact) # %timeit -n1 -r1 (np_fact) # + [markdown] nbgrader={} # In the cell below, summarize your timing tests. Which version is faster? Why do you think that version is faster? # + [markdown] deletable=false nbgrader={"checksum": "84e52188175a63ea7106817b9b063eef", "grade": true, "grade_id": "numpyex02c", "points": 2, "solution": true} # np_fact is faster, but by a very small amount of time. np_fact is faster because it does not loop through each iteration of the loop like np_loop does. Instead, I called the cumprod function, which acts on the entire array at the same time. / --- / jupyter: / jupytext: / text_representation: / extension: .q / format_name: light / format_version: '1.5' / jupytext_version: 1.14.4 / --- / + [markdown] cell_id="00000-9764144b-55c9-4a1a-98bf-705ea7932e15" deepnote_cell_type="markdown" tags=[] / # Lecture 2: Linear and Logistic Regression / ### [](https://github.com/Qwerty71), [](https://www.mci.sh), [](https://www.vijayrs.ml) / This notebook will introduce the basic concepts and applications of linear regression and logistic regression / / ## Note: you will need to run the following code cell every time you restart this notebook / / + cell_id="00001-574160a4-968d-4b1e-9e35-eafed83845e9" deepnote_cell_type="code" tags=[] !pip install -r requirements.txt import pandas as pd import matplotlib.pyplot as plt import numpy as np import statsmodels.api as sm import seaborn as sns from sklearn import datasets, linear_model from IPython.display import display iris = sns.load_dataset('iris') / + [markdown] cell_id="00002-a1d68ee0-cf80-4910-837e-270e600c1bce" deepnote_cell_type="markdown" tags=[] / ## What is linear regression? / The long fancy wikipedia answer to this question is: / ``` / In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). / ``` / Translating, linear regression is a method of approximating a linear relationship (think `y=mx+b`) between a variable (or set of variables ) and a value. This can be as simple as relating the number of hours slept by a student to their test scores. Again, multiple variables can be used; you can correspond other things to such a related value, for example, your linear regression model can predict the relationship between a bunch of variables, such as a student's gpa, hours slept, and time studied, to their test score. / / The essence of this problem is that when you have a bunch of data points, say an entire class of students, chances are the dots on the graph are not going to align up into a perfect line. 
So your job, if you want to make a model that can most accurately predict test scores based on certain conditions, is to calculate the coefficients / / ## How Exactly does it work? / As mentioned previously, your model will look something akin to `y=mx+b`, however it will be a bit more complicated. Basic linear regression with one dependent variable has the following equation: / ``` / y= B0 + B1*x + E / ``` / where B0 is the y intercept value, B1 is the slope/coefficient for the x variable, and E is the error term. This can be generalized to the following: / ``` / y = B0 + B1*x1 + B2*X2 ... + Bi*xi + E / ``` / where Bi is each variable's respective coefficient. / Now that we know what our model should look like, how exactly do we get those magical values? / / ## Training / Machine learning algorithms usually work in the following manner: / 1) you have a model, and data that you can plug into said model / 2) you have a function called the Cost function, which represents how off your model is from the provided real data (aka your error, or Loss) / 3) Use an algorithm to minimize (find the lowest point) of your cost function, which represents the least error. This in turn will allow you to calculate your most optimal coefficients for your model / / ### Cost Function / / ![Picture title](image-20211005-095748.png) / / / / / / / ### Follow along this simple tutorial / Get familiar with your first linear regression sample / / + [markdown] cell_id="00003-30b2f00a-ddd7-4322-bdc8-c7f1dbef92d3" deepnote_cell_type="markdown" tags=[] / ![Picture title](image-20211005-095748.png) / + [markdown] cell_id="00003-5e5ac417-dbc8-48be-90da-ed7568b48d50" deepnote_cell_type="markdown" tags=[] / use the `sklearn.dataset` to import and arrange your datasets (this ones about diabetes) / + cell_id="00003-e837fe06-1fe5-4af9-8707-447aeafe318d" deepnote_cell_type="code" tags=[] # Load the diabetes dataset diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True) # Use only one feature as opposed to using all the features provided in this dataset, well get to that later diabetes_X = diabetes_X[:, np.newaxis, 2] # Split the data into training/testing sets. 
We will use some data to test the accuracy of our linear regression model diabetes_X_train = diabetes_X[:-20] diabetes_X_test = diabetes_X[-20:] # Split the targets into training/testing sets (values we will predict, or learn to predict) diabetes_y_train = diabetes_y[:-20] diabetes_y_test = diabetes_y[-20:] # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) # Make predictions using the testing set diabetes_y_pred = regr.predict(diabetes_X_test) # The coefficients print('Coefficients: \n', regr.coef_) # The mean squared error print('Mean squared error: %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() / + [markdown] created_in_deepnote_cell=true deepnote_cell_type="markdown" tags=[] / / Created in deepnote.com / Created in Deepnote # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="YUuPSqnCmrwh" # # Metric Learning 3D-CNN for clustering # + [markdown] id="Z0MEYWGlmxx1" # ## for Google Colab # + id="9a117f4c" from google.colab import drive drive.mount('/content/drive') # + id="c9c2e582" # !unzip -q /content/drive/MyDrive/data/mn10_64.zip # + id="ShzHa4E6unZu" # !pip install volumentations-3D tensorflow-addons tensorflow-determinism kaleido # + [markdown] id="IW_eeZRxm1Fp" # ## Import modules and set parameters # + id="b2afbb09" import os from glob import glob import re import numpy as np import pandas as pd from volumentations import * import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa # + id="A47FOhxTnkLx" def set_seed(seed=200): tf.random.set_seed(seed) # optional # for numpy.random np.random.seed(seed) # for built-in random random.seed(seed) # for hash seed os.environ["PYTHONHASHSEED"] = str(seed) os.environ['TF_DETERMINISTIC_OPS'] = '1' set_seed(123) # + id="7eef2634" EXPERIMENT_DIR = 'MetricLearning_64' NUM_SAMPLES = 10 K = 4 BATCH_SIZE = NUM_SAMPLES * K EPOCHS = 10 DATA_DIR = '/content/mn10/64' IMAGE_SIZE = 64 NUM_CHANNELS = 1 # + id="d16dc394" os.makedirs(EXPERIMENT_DIR, exist_ok=True) # + [markdown] id="bf270b35" # ## Define models # + id="Lv8WgZDDZ3NP" class L2Constraint(keras.layers.Layer): def __init__(self, alpha=16): super(L2Constraint, self).__init__() self.alpha = alpha def call(self, x): x = self.alpha * tf.nn.l2_normalize(x) return x # + id="a8d48419" input = layers.Input(shape=(IMAGE_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)) x = layers.Conv3D(filters=64, kernel_size=3, activation='relu')(input) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=64, kernel_size=3, activation='relu')(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=128, kernel_size=3, activation='relu')(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=256, kernel_size=3, activation='relu')(x) x = layers.MaxPool3D(pool_size=2)(x) x = 
layers.BatchNormalization()(x) x = layers.GlobalAveragePooling3D()(x) features = layers.Dense(128)(x) output = tf.nn.l2_normalize(x, axis=-1) model = keras.Model(inputs=[input], outputs=[output]) feature_extractor = keras.Model(inputs=[input], outputs=[features]) # + id="bfe4e7b6" model.summary() # + [markdown] id="81e74dd6" # ## Prepare data # + id="c2e4f85f" categories = ['bathtub', 'bed', 'chair', 'desk', 'dresser', 'monitor', 'night_stand', 'sofa', 'table', 'toilet'] # + id="e9f92c4b" train_pattern = DATA_DIR +'/train/*.npy' train_list = glob(train_pattern) # + id="22d76cda" cat_re = re.compile(r'.+/(.+?)_[0-9]+\.npy') train_labels = [cat_re.match(item)[1] for item in train_list] train_ids = [categories.index(cat) for cat in train_labels] # + id="7898f33c" def get_augmentation(patch_size): return Compose([ Rotate((-15, 15), (-15, 15), (-15, 15), interpolation=1, p=0.5), ElasticTransform((0, 0.25), interpolation=1, p=0.1), Flip(0, p=0.5), Flip(1, p=0.5), Flip(2, p=0.5), RandomRotate90((0, 1), p=0.5), ], p=1.0) aug = get_augmentation((IMAGE_SIZE, IMAGE_SIZE, IMAGE_SIZE)) # + id="iEjzCg498DL5" class OrderedStream(keras.utils.Sequence): def __init__(self, file_list, ids_list, batch_size=32): self.file_list = file_list self.ids_list = ids_list self.batch_size = batch_size self.num_files = len(file_list) def __len__(self): return int(np.ceil(self.num_files / float(self.batch_size))) def __getitem__(self, idx): x = [] from_idx = idx * self.batch_size to_idx = min((idx + 1) * self.batch_size, self.num_files) for file_path in self.file_list[from_idx : to_idx]: x.append(self._read_npy_file(file_path)) y = self.ids_list[from_idx : to_idx] return np.array(x), np.array(y) def _read_npy_file(self, path): data = np.load(path) data = np.expand_dims(data, axis=-1) return data.astype(np.float32) # + id="jJKI68MAGpBC" ordered_stream = OrderedStream(train_list, train_ids, batch_size=BATCH_SIZE) # + id="YIaEDsBhwbeh" class AugmentedStream(keras.utils.Sequence): def __init__(self, file_list, aug, num_samples=4, k=4): self.file_list = file_list self.num_files = len(file_list) self.file_indices = np.random.permutation(np.arange(self.num_files)) self.aug = aug self.num_samples = num_samples self.k = k self.num_batchs = self.num_files // self.num_samples def __len__(self): return int(np.ceil(self.num_files / float(self.num_samples))) def __getitem__(self, idx): x, y = [], [] from_idx = idx * self.num_samples to_idx = min((idx + 1) * self.num_samples, self.num_files) sample_indices = self.file_indices[from_idx : to_idx] for idx in sample_indices: data = {'image': self._read_npy_file(self.file_list[idx])} for i in range(self.k): aug_data = self.aug(**data) x.append(aug_data['image']) y.append(idx) return np.array(x), np.array(y) def _read_npy_file(self, path): data = np.load(path) data = np.expand_dims(data, axis=-1) return data.astype(np.float32) # + id="LsooR5n802ek" aug_stream = AugmentedStream(train_list, aug, num_samples=NUM_SAMPLES, k=K) # + [markdown] id="cda1de99" # ## Train model # + id="757969d3" loss_fn = tfa.losses.TripletSemiHardLoss(margin=0.1) optimizer = tf.keras.optimizers.Adam(learning_rate=1.0e-4) model.compile(loss=loss_fn, optimizer=optimizer) model.fit(aug_stream, epochs=EPOCHS) # + id="FYeMdaS8pen0" model.save(os.path.join(EXPERIMENT_DIR, 'saved_models', 'whole_model')) feature_extractor.save(os.path.join(EXPERIMENT_DIR, 'saved_models', 'feature_extractor')) # + [markdown] id="KkMo941A_5QI" # ## Copy experiment directory to Google Drive # + id="_1Z05B5SWVcg" # !cp -r $EXPERIMENT_DIR 
/content/drive/MyDrive/ # + id="aTV89t8gz5D1" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PythonData # language: python # name: pythondata # --- # Example of class and objects in Python. class Cat: def __init__(self, name): self.name = name # Creating an object in the "Cat: class. first_cat = Cat('Felix') print(first_cat.name) # Creating another object in the "Cat" class as part of a skill drill. second_cat = Cat('Garfield') print(second_cat.name) class Dog: def __init__(self, name, color, sound): self.name = name self.color = color self.sound = sound def bark(self): return self.sound + ' ' + self.sound first_dog = Dog('Fido', 'brown', 'woof!') print( first_dog.name) print(first_dog.color) first_dog.bark() # Create another oject of the "Dog" class as partof a skill drill. second_dog = Dog('Lady', 'blonde', 'arf!') print( second_dog.name) print(second_dog.color) second_dog.bark() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Build Image and ETL Processes # # This notbook shows sample ETL processes to use persistent-postgres-cstore. [cstore_fdw](https://github.com/citusdata/cstore_fdw) does not support `INSERT INTO ...` though it supports `COPY` and `INSERT INTO ... SELECT`, [see](https://stackoverflow.com/questions/44064004/change-a-normal-table-to-a-foreign-cstore-fdw-table). ETL processs are below, # # 1. [Buid a persistent-postgres image and run it as staging](#1.-Buid-a-persistent-postgres-image-and-run-it-as-staging) # 2. [Intert data into staging container from csv files](#2.-Intert-data-into-staging-container-from-csv-files) # 3. [Commit staging container to image and stop staging container](#3.-Commit-staging-container-to-image-and-stop-staging-container) # 4. [Buid a persistent-postgres-cstore image and run it](#4.-Buid-a-persistent-postgres-cstore-image-and-run-it) # 5. [Insert data into cstore from staging](#5.-Insert-data-into-cstore-from-staging) # 6. [Commit cstore container to image](#6.-Commit-cstore-container-to-image) # 7. [Operation test of selecting data from columnar table](#7.-Operation-test-of-selecting-data-from-the-columnar-store) import os import subprocess import traceback import psycopg2 import pandas as pd subprocess.call("./download_customer_reviews.sh") # ## 1. Buid a persistent-postgres image and run it as staging # # To load csv data into persistent-postgres-cstore, it builds an image of persistnet-postgres (no coloumner store). Because cstore_fdw does not suppoert `INSERT INTO ...`, [see](https://stackoverflow.com/questions/44064004/change-a-normal-table-to-a-foreign-cstore-fdw-table). After building persitent-postgres image, it runs the container as staging. subprocess.call("docker build -t persistent-postgres:0.1 postgres/lib/postgres/11".split()) subprocess.call("docker run --name persistent-postgres -p 5432:5432 -e POSTGRES_USER=dwhuser -d persistent-postgres:0.1".split()) # ## 2. Load data into staging container from csv files # # To load csv data into staging container, 1) it connects the postgres on the staging container, 2) defines some helper functions and queries , and 3) executes the queries for the staging container. 
conn = psycopg2.connect("host=127.0.0.1 dbname=dwhuser user=dwhuser password=") cur = conn.cursor() def execute_query(query): """Execute query for a postgres connection. Args: query str: An executed query. Return: boolean: True or False of the result of an execution. """ try: cur.execute(query) conn.commit() return True except: traceback.print_exc() return False def staging_table_insert(insert_query, df): """Insert data into staging table. An inserted table must have same columns of pandas.DataFrame. If the table dose not have the same columns, an error will be occured. Args: insert_query str: An inserted query. df pandas.DataFrame: An inserted data. Return: boolean: True or False of the result of an execution. """ try: for i, row in df.iterrows(): cur.execute(insert_query, list(row)) conn.commit() return True except: traceback.print_exc() return False # + # Drop staging table. staging_table_drop_query = """ DROP TABLE IF EXISTS staging_customer_reviews; """ # Create staging table. staging_table_create_query = """ CREATE TABLE staging_customer_reviews ( customer_id TEXT ,review_date DATE ,review_rating INTEGER ,review_votes INTEGER ,review_helpful_votes INTEGER ,product_id CHAR(10) ,product_title TEXT ,product_sales_rank BIGINT ,product_group TEXT ,product_category TEXT ,product_subcategory TEXT ,similar_product_ids TEXT ); """ # Insert data into staging table. staging_table_insert_query = """ INSERT INTO staging_customer_reviews ( customer_id ,review_date ,review_rating ,review_votes ,review_helpful_votes ,product_id ,product_title ,product_sales_rank ,product_group ,product_category ,product_subcategory ,similar_product_ids ) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s); """ # - print("Drop staging table:", execute_query(staging_table_drop_query)) print("Create staging table:", execute_query(staging_table_create_query)) print("Insert into staging table:", staging_table_insert(staging_table_insert_query, pd.read_csv('customer_reviews_1998.csv.gz', header=None))) print("Insert into staging table:", staging_table_insert(staging_table_insert_query, pd.read_csv('customer_reviews_1999.csv.gz', header=None))) # ## 3. Commit staging container to image and stop staging container # # To persit data into the staging container, it commits the container to the persistent-postgres image. Then it stops the staging container. subprocess.call("docker commit persistent-postgres persistent-postgres:0.1".split()) subprocess.call("docker stop persistent-postgres".split()) # ## 4. Buid a persistent-postgres-cstore image and run it # # It builds an image of persistent-postgres-cstore (columner store) and runs the container. subprocess.call("docker build -t persistent-postgres-cstore:0.1 postgres".split()) subprocess.call("docker run --name persistent-postgres-cstore -p 5432:5432 -e POSTGRES_USER=dwhuser -d persistent-postgres-cstore:0.1".split()) # ## 5. Insert data into cstore from staging # # Insert data into cstore from staging table by using `INSERT INTO ... SELECT`, [see](https://stackoverflow.com/questions/44064004/change-a-normal-table-to-a-foreign-cstore-fdw-table). The process are 1) it connects the cstore of postgres on the container, and 2) executes the queries for the staging container by using the helper function. conn = psycopg2.connect("host=127.0.0.1 dbname=dwhuser user=dwhuser password=") cur = conn.cursor() # + # Drop table. table_drop_query = """ DROP FOREIGN TABLE IF EXISTS customer_reviews; """ # Create extension. 
extension_create_query = """ -- load extension first time after install CREATE EXTENSION cstore_fdw; """ # Create server. foreign_server_create_query = """ -- create server object CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw; """ # Create table. table_create_query = """ -- create foreign table CREATE FOREIGN TABLE customer_reviews ( customer_id TEXT, review_date DATE, review_rating INTEGER, review_votes INTEGER, review_helpful_votes INTEGER, product_id CHAR(10), product_title TEXT, product_sales_rank BIGINT, product_group TEXT, product_category TEXT, product_subcategory TEXT, similar_product_ids TEXT ) SERVER cstore_server OPTIONS(compression 'pglz') ; """ # Insert data into table. table_insert_query = """ INSERT INTO customer_reviews SELECT * FROM staging_customer_reviews ; """ # - print("Drop table:", execute_query(table_drop_query)) print("Create extension:", execute_query(extension_create_query)) print("Create foreign server:", execute_query(foreign_server_create_query)) print("Create table:", execute_query(table_create_query)) print("Insert into table:", execute_query(table_insert_query)) # ## 6. Commit cstore container to image # # To persit data into the container, it commits the container to the persistent-postgres-cstore image. Then it stops the container. subprocess.call("docker commit persistent-postgres-cstore persistent-postgres-cstore:0.1".split()) # ## 7. Operation test of selecting data from the columnar store # # Operation test of selecting from the columnar store, 1) it finds all reviews a particular customer made on the Dune series in 1998, and 2) gets a correlation between a book's titles's length and its review ratings. find_query=""" -- Find all reviews a particular customer made on the Dune series in 1998. SELECT customer_id ,review_date ,review_rating ,product_id FROM customer_reviews WHERE customer_id ='A27T7HVDXA3K2A' AND product_title LIKE '%Dune%' AND review_date >= '1998-01-01' AND review_date <= '1998-12-31' ; """ pd.read_sql(sql=find_query, con=conn, index_col='customer_id') correlation_query=""" -- Do we have a correlation between a book's title's length and its review ratings? 
SELECT width_bucket(length(product_title), 1, 50, 5) title_length_bucket ,round(avg(review_rating), 2) AS review_average ,count(*) FROM customer_reviews WHERE product_group = 'Book' GROUP BY title_length_bucket ORDER BY title_length_bucket ; """ pd.read_sql(sql=correlation_query, con=conn, index_col='title_length_bucket') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from glvis import glvis # Load a remote data stream, see the "-mac" option of glvis and the sample # streams at https://github.com/GLVis/data/tree/master/streams # stream = !curl -s https://glvis.org/data/streams/ex9.saved stream = "\n".join(stream) # Visualize the above stream (all GLVis keys and mouse commands work) glvis(stream) # Another vsualization instance of the same stream glvis(stream + 'rljg****tttac0', 300, 300) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 説明 # - Webカメラで映像を撮影し、顔を検出して青い枠で囲みます。顔は複数同時に検出できます。キャプチャ画面で[Esc]キーを押すと終了します。 # # 準備 # - Webカメラ # - OpenCVのインストール # - !conda install opencv -c conda-forge # - カスケードファイル(haarcascade_frontalface_default.xml)をmodelsディレクトリにダウンロードする。(以下のコメントアウトを外して1回だけ実行) # + # #!curl -o ../models/haarcascade_frontalface_default.xml https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml # - # # 画像キャプチャループ関数を定義 import cv2 def capture_camera(mirror=True, size=None): """Capture video from camera""" # カメラをキャプチャする cap = cv2.VideoCapture(0) # 0はカメラのデバイス番号 while True: # Webカメラから画像を1枚読み込む _, frame = cap.read() # 鏡のように映るか否か if mirror: frame = frame[:,::-1] # フレームをリサイズ # sizeは例えば(800, 600) if size is not None and len(size) == 2: frame = cv2.resize(frame, size) # グレースケールに変換 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # 顔検出用カスケードファイルの読み込み face_cascade = cv2.CascadeClassifier('../models/haarcascade_frontalface_default.xml') # カスケードファイルで複数の顔を検出 faces = face_cascade.detectMultiScale(gray) # 顔の周りの余白 m = 80 for (x,y,w,h) in faces: # 切り取った画像を取得 rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) cropped_image = rgb_image[y-m:y+h+m,x-m:x+w+m,:] # ここでEmbeddingする # ... #顔部分を四角で囲う frame = cv2.rectangle(frame,(x-m,y-m),(x+w+m,y+h+m),(255,0,0),2) # 画面を表示する cv2.imshow('camera capture', frame) k = cv2.waitKey(10) # 10 msec待つ if k == 27: # ESCキーで終了 break # キャプチャを解放する cap.release() cv2.destroyAllWindows() # # 実行開始! # capture_camera() capture_camera(True, (800,600)) # サイズを指定しないとエラーになる環境がある # # 制限 # - 顔の検出精度が低い # - 10秒に1回のみEmbeddingするようなコードを入れる // -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .scala // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Scala // language: scala // name: scala // --- // This page is for experiencing a SAT-based Constraint Programming system [Scarab](https://tsoh.org/scarab/index.html). // You can find its documents and details in [Scarab](https://tsoh.org/scarab/index.html). // // このページはSAT型制約プログラミングシステム [Scarab](https://tsoh.org/scarab/jp/index.html) を体験するためのページです. // Scarab のチュートリアルは [URL](https://tsoh.org/scarab/jp/index.html) に REPL を使って行うものがあり, // 同様のチュートリアルをこのページでも行うことが可能です. // // 以下では必要なライブラリを import しています. // import 文を最初の1度だけ実行すれば Scarab および DSL のクラスが読み込まれます. 
// 後は, reset をプログラムの区切り毎に記述すれば自由にプログラムが書けます. // // (注意) ブラウザを閉じて10分以上経つとセッションは削除されます.プログラムを残したい場合は,こまめにダウンロードするようにしてください. import $cp.`scarab` import jp.kobe_u.scarab._ import dsl._ // 以下はプログラム例です. // // (1) 上の import 文をすでに実行して (run, 三角の再生ボタンを押して) いること // (2) reset を一番上に書くこと // // を行ってから実行することに注意してください. // + reset int('x, 0, 9) int('y, 0, 9) add('x + 'y === 3) find solution.intMap.foreach(println) // - // 別のプログラム例です. // + reset int('a, 0, 10) int('b, 0, 10) int('c, 0, 10) int('d, 0, 10) add('a * 12 + 'b * 16 + 'c * 20 + 'd * 27 === 90) find solution.intMap.foreach(println) // - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from scipy import stats # %matplotlib inline import pandas as pd from scipy.stats import entropy import math import keras from sklearn.model_selection import train_test_split from keras.callbacks import EarlyStopping from sklearn.preprocessing import StandardScaler,MinMaxScaler from keras import optimizers from sklearn.preprocessing import OneHotEncoder import matplotlib.pyplot as plt data=pd.read_csv(r"C:\Users\rajan\OneDrive\Documents\FullDataset.csv",sep=",") X=data.iloc[:,0:9] print(X) Y=data.iloc[:,9] print(Y) # data=pd.read_csv(r"C:\Users\rajan\OneDrive\Documents\Returns.csv",sep=",") # X=data.iloc[:,0] # print(X.quantile(0.3)) # print(X.quantile(0.7)) scaler = MinMaxScaler(feature_range=(0, 1)) X = scaler.fit_transform(X) print(X[0]) from sklearn.utils import class_weight class_weights = class_weight.compute_class_weight('balanced',np.unique(Y),Y) print(class_weights) # onehot_encoder = OneHotEncoder(sparse=False) # Y=Y.to_numpy() # Y = Y.reshape(len(Y), 1) # Y = onehot_encoder.fit_transform(Y) # print(Y) # print(len(Y)) # print((len(Y[Y==0])/len(Y))*100) # print((len(Y[Y==1])/len(Y))*100) # print((len(Y[Y==2])/len(Y))*100) # + X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42,stratify=Y) onehot_encoder = OneHotEncoder(sparse=False) Y_train=Y_train.to_numpy() Y_train = Y_train.reshape(len(Y_train), 1) Y_train = onehot_encoder.fit_transform(Y_train) old=Y_test Y_test =Y_test.to_numpy() Y_test = Y_test.reshape(len(Y_test), 1) Y_test = onehot_encoder.transform(Y_test) # c_weight = {0: 1.75, 1: 1, 2: 2} opt=optimizers.Adam(lr=0.001); # 0.001 model1 = keras.models.Sequential() model1.add(keras.layers.Dense(units=256, activation='relu',input_dim=X_train.shape[1])) # model1.add(keras.layers.Dense(units=256, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=128, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=64, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=32, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=16, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=8, activation='relu')) model1.add(keras.layers.Dropout(0.1)) model1.add(keras.layers.Dense(units=5, activation='softmax')) model1.summary() model1.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'],) history=model1.fit(X_train, Y_train, batch_size=128, epochs=3000, verbose=1, class_weight=class_weights, validation_data=(X_test, Y_test), # callbacks = 
[EarlyStopping(monitor='val_loss', patience=5)], shuffle=True) # - plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + score = model1.evaluate(X_test, Y_test, verbose=0) print('Test accuracy:', score[1]*100) print(model1.metrics_names) Y_pred=model1.predict(X_test) # print(len(Y_test[Y_test==1])) print(len(Y_pred)) print(len(old[old==4])) Y_pred = np.argmax(Y_pred, axis=1) print(len(Y_pred[Y_pred==4])) # print(len(Y_pred[Y_pred<0.5])) # print(len(Y_pred[Y_pred<0.5])) # - # print(Y_pred) # Y_pred = np.argmax(Y_pred, axis=1) print(Y_pred) new=Y_pred Y_pred = Y_pred.reshape(len(Y_pred), 1) Y_pred = onehot_encoder.transform(Y_pred) # Y_pred=(Y_pred > 0.5).astype(np.int) # # # print(Y_pred) # Y_pred=np.ravel(Y_pred) # # print(len(Y_pred[Y_pred==0])) from sklearn.metrics import confusion_matrix,classification_report cm = confusion_matrix(old, new) print(cm) print(classification_report(Y_test, Y_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import zipfile import os os.listdir('.') with zipfile.ZipFile('frozen_east_text_detection.pb.zip', 'r') as zip_ref: zip_ref.extractall('./tmp') os.listdir('./tmp') import cv2 as cv import math import argparse net = cv2.dnn.readNet('./tmp/frozen_east_text_detection.pb') parser = argparse.ArgumentParser(description='./tmp/frozen_east_text_detection.pb') # + #parser.add_argument('--model', default="frozen_east_text_detection.pb",help='Path to a binary .pb file of model contains trained weights.') # + #parser.add_argument('--width', type=int, default=320, help='Preprocess input image by resizing to a specific width. It should be multiple by 32.') # + #parser.add_argument('--height',type=int, default=320, help='Preprocess input image by resizing to a specific height. 
It should be multiple by 32.') # + #parser.add_argument('--thr',type=float, default=0.5, help='Confidence threshold.') # + #parser.add_argument('--nms',type=float, default=0.4, help='Non-maximum suppression threshold.') # - args = parser.parse_args() def decode(scores, geometry, scoreThresh): detections = [] confidences = [] ############ CHECK DIMENSIONS AND SHAPES OF geometry AND scores ############ assert len(scores.shape) == 4, "Incorrect dimensions of scores" assert len(geometry.shape) == 4, "Incorrect dimensions of geometry" assert scores.shape[0] == 1, "Invalid dimensions of scores" assert geometry.shape[0] == 1, "Invalid dimensions of geometry" assert scores.shape[1] == 1, "Invalid dimensions of scores" assert geometry.shape[1] == 5, "Invalid dimensions of geometry" assert scores.shape[2] == geometry.shape[2], "Invalid dimensions of scores and geometry" assert scores.shape[3] == geometry.shape[3], "Invalid dimensions of scores and geometry" height = scores.shape[2] width = scores.shape[3] for y in range(0, height): # Extract data from scores scoresData = scores[0][0][y] x0_data = geometry[0][0][y] x1_data = geometry[0][1][y] x2_data = geometry[0][2][y] x3_data = geometry[0][3][y] anglesData = geometry[0][4][y] for x in range(0, width): score = scoresData[x] # If score is lower than threshold score, move to next x if(score < scoreThresh): continue # Calculate offset offsetX = x * 4.0 offsetY = y * 4.0 angle = anglesData[x] # Calculate cos and sin of angle cosA = math.cos(angle) sinA = math.sin(angle) h = x0_data[x] + x2_data[x] w = x1_data[x] + x3_data[x] # Calculate offset offset = ([offsetX + cosA * x1_data[x] + sinA * x2_data[x], offsetY - sinA * x1_data[x] + cosA * x2_data[x]]) # Find points for rectangle p1 = (-sinA * h + offset[0], -cosA * h + offset[1]) p3 = (-cosA * w + offset[0], sinA * w + offset[1]) center = (0.5*(p1[0]+p3[0]), 0.5*(p1[1]+p3[1])) detections.append((center, (w,h), -1*angle * 180.0 / math.pi)) confidences.append(float(score)) # Return detections and confidences return [detections, confidences] if __name__ == "__main__": # Read and store arguments confThreshold = args.thr nmsThreshold = args.nms inpWidth = args.width inpHeight = args.height model = args.model # Load network net = cv.dnn.readNet(model) # Create a new named window kWinName = "EAST: An Efficient and Accurate Scene Text Detector" cv.namedWindow(kWinName, cv.WINDOW_NORMAL) outputLayers = [] outputLayers.append("feature_fusion/Conv_7/Sigmoid") outputLayers.append("feature_fusion/concat_3") # Open a video file or an image file or a camera stream cap = cv.VideoCapture(args.input if args.input else 0) while cv.waitKey(1) < 0: # Read frame hasFrame, frame = cap.read() if not hasFrame: cv.waitKey() break # Get frame height and width height_ = frame.shape[0] width_ = frame.shape[1] rW = width_ / float(inpWidth) rH = height_ / float(inpHeight) # Create a 4D blob from frame. 
blob = cv.dnn.blobFromImage(frame, 1.0, (inpWidth, inpHeight), (123.68, 116.78, 103.94), True, False) # Run the model net.setInput(blob) output = net.forward(outputLayers) t, _ = net.getPerfProfile() label = 'Inference time: %.2f ms' % (t * 1000.0 / cv.getTickFrequency()) # Get scores and geometry scores = output[0] geometry = output[1] [boxes, confidences] = decode(scores, geometry, confThreshold) # Apply NMS indices = cv.dnn.NMSBoxesRotated(boxes, confidences, confThreshold,nmsThreshold) for i in indices: # get 4 corners of the rotated rect vertices = cv.boxPoints(boxes[i[0]]) # scale the bounding box coordinates based on the respective ratios for j in range(4): vertices[j][0] *= rW vertices[j][1] *= rH for j in range(4): p1 = (vertices[j][0], vertices[j][1]) p2 = (vertices[(j + 1) % 4][0], vertices[(j + 1) % 4][1]) cv.line(frame, p1, p2, (0, 255, 0), 2, cv.LINE_AA) # cv.putText(frame, "{:.3f}".format(confidences[i[0]]), (vertices[0][0], vertices[0][1]), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1, cv.LINE_AA) # Put efficiency information cv.putText(frame, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255)) # Display the frame cv.imshow(kWinName,frame) cv.imwrite("out-{}".format(args.input),frame) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import copy import importlib import itertools import numpy as np from matplotlib import pyplot as plt import scipy.stats as scist from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline from sklearn.model_selection import cross_validate from analysis import session, acr_sess_analys from sess_util import sess_gen_util, sess_ntuple_util from util import gen_util, logger_util, logreg_util, math_util, plot_util importlib.reload(acr_sess_analys) # - plot_util.linclab_plt_defaults() # + def init_sessions(sessid, datadir): sess = session.Session(datadir, sessid) sess.extract_sess_attribs() print(f"Mouse {sess.mouse_n}, Session {sess.sess_n}") sess.extract_info() return sess def extract_data_targ(sess, analyspar, stimpar): data, targ = [], [] n_vals = [] # reg, surp x ROIs x seq x frames data = acr_sess_analys.surp_data_by_sess(sess, analyspar, stimpar, datatype="roi", surp="bysurp", integ=False, baseline=0.13) n_vals = [sub_data.shape[1] for sub_data in data] targ = np.concatenate([np.full(n, s) for s, n in enumerate(n_vals)]) data = np.concatenate(data, axis=1) return data, targ, n_vals # + def run_logreg(data, targ, n_splits, shuffle, scoring, seed): cv = logreg_util.StratifiedShuffleSplitMod(n_splits=n_splits, train_p=0.75, random_state=seed) scaler = logreg_util.ModData(scale=True, extrem=True, shuffle=shuffle, seed=seed) mod = LogisticRegression(C=1, fit_intercept=True, class_weight="balanced", penalty="l2", solver="lbfgs", max_iter=1000, random_state=seed, n_jobs=n_jobs) mod_pip = make_pipeline(scaler, mod) mod_cvs = cross_validate(mod_pip, data, targ, cv=cv, return_estimator=True, return_train_score=True, n_jobs=8, verbose=False, scoring=scoring) return mod_cvs def plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimtype="gabors"): roi_acc_mean = np.mean(roi_acc, axis=-1) roi_acc_sem = scist.sem(roi_acc, axis=-1) full_acc_mean = np.mean(full_acc) full_acc_sem = scist.sem(full_acc) roi_diff_mean = np.mean(roi_diff, axis=-1) roi_diff_sem = scist.sem(roi_diff, axis=-1) full_diff_mean = np.mean(full_diff) full_diff_sem = 
scist.sem(full_diff) fig, ax = plt.subplots(1) ax.axhline(0.5, lw=2.5, color="gray", ls="dashed") ax.axvline(0.0, lw=2.5, color="gray", ls="dashed") ax.errorbar(roi_diff_mean, roi_acc_mean, yerr=roi_acc_sem, xerr=roi_diff_sem, alpha=0.3, lw=0, marker=".", elinewidth=2.5) ax.errorbar(full_diff_mean, full_acc_mean, yerr=full_acc_sem, xerr=full_diff_sem, lw=0, marker=".", elinewidth=2.5) ax.set_title(f"Surprise decoding accuracy per ROI ({stimtype.capitalize()})") ax.set_ylim([0, 1]) # + def get_diff_data(sess, analyspar, stimpar): data, targ, n_vals = extract_data_targ(sess, analyspar, stimpar) roi_diff = np.mean(data[:, n_vals[0] :], axis=1) - np.mean(data[:, : n_vals[0]], axis=1) full_diff = np.mean(roi_diff, axis=-1) # across frames return full_diff, roi_diff def run_all_logreg(sessid, datadir, scoring, stimpar, n_splits, shuffle, seed): sess = init_sessions(sessid, datadir) analyspar_noscale = sess_ntuple_util.init_analyspar(scale=False) analyspar_scale = sess_ntuple_util.init_analyspar(scale=True) data, targ, n_vals = extract_data_targ(sess, analyspar_noscale, stimpar) print("Data shape: {}".format(", ".join([str(dim) for dim in data.shape]))) print("N vals: {}".format(", ".join([str(val) for val in n_vals]))) full_mod = run_logreg(np.transpose(data, [1, 2, 0]), targ, n_splits, shuffle, scoring, seed) full_acc = full_mod["test_balanced_accuracy"] roi_acc = np.full([len(data), n_splits], np.nan) for n, roi_data in enumerate(data): roi_mod = run_logreg(np.expand_dims(roi_data, -1), targ, n_splits, shuffle, scoring, seed) roi_acc[n] = roi_mod["test_balanced_accuracy"] full_diff, roi_diff = get_diff_data(sess, analyspar_scale, stimpar) return full_acc, roi_acc, full_diff, roi_diff # - # ## PARAMETERS pre = 0 gabfr = 0 # + n_reg = 10 n_shuff = 10 seed = 905 n_jobs = -1 datadir = "../data/OSCA" scoring = ["neg_log_loss", "accuracy", "balanced_accuracy"] def set_all_parameters(stimtype): if stimtype == "gabors": post = 1.5 elif stimtype == "bricks": post = 1.0 stimpar = sess_ntuple_util.init_stimpar(stimtype, gabfr=gabfr, pre=pre, post=post) return stimpar # - # ## Run regular stimpar = set_all_parameters("gabors") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(758519303, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("gabors") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(759189643, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("gabors") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(761624763, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("gabors") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(828816509, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("bricks") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(758519303, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("bricks") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(759189643, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, 
full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("bricks") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(761624763, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) stimpar = set_all_parameters("bricks") full_acc, roi_acc, full_diff, roi_diff = run_all_logreg(828816509, datadir, scoring, stimpar, n_splits=n_reg, shuffle=False, seed=seed) plot_roi_acc(full_acc, roi_acc, full_diff, roi_diff, stimpar.stimtype) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) # + from astropy.coordinates import SkyCoord from astropy.table import Table from astroquery.vizier import Vizier from astropy import units as u import swasputils # - folded_lightcurves = swasputils.FoldedLightcurves() superwasp_coords = folded_lightcurves.df['SWASP ID'].replace(r'^1SWASP', '', regex=True).unique() parsed_coords = SkyCoord(superwasp_coords, unit=(u.hour, u.deg)) source_table = Table( data=[ superwasp_coords, parsed_coords.ra, parsed_coords.dec, ], names=[ 'SuperWASP Coords', '_RAJ2000', '_DEJ2000', ], dtype=[ 'str', 'float64', 'float64', ], units={ '_RAJ2000': u.deg, '_DEJ2000': u.deg } ) source_table source_table.write(os.path.join(swasputils.CACHE_LOCATION, 'source_coords.fits'), overwrite=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- import pandas as pd import statsmodels.formula.api as sm iris=pd.read_csv("http://vincentarelbundock.github.io/Rdatasets/csv/datasets/iris.csv") iris =iris.drop('Unnamed: 0', 1) iris.head() iris.columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species'] iris.columns result = sm.ols(formula="Sepal_Length ~ Petal_Length + Sepal_Width + Petal_Width + Species", data=iris) result.fit() result.fit().summary() result.fit().params result.fit().outlier_test(method='bonf', alpha=0.05) dir(result.fit()) test=result.fit().outlier_test() print ('Bad data points (bonf(p) < 0.05):') print (test[test.icol(2) < 0.05]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import division, print_function, absolute_import import tflearn from tflearn.data_utils import to_categorical, pad_sequences from tflearn.datasets import imdb # IMDB Dataset loading train, test, _ = imdb.load_data(path='imdb.pkl', n_words=10000, valid_portion=0.1) trainX, trainY = train testX, testY = test # Data preprocessing # Sequence padding trainX = pad_sequences(trainX, maxlen=100, value=0.) testX = pad_sequences(testX, maxlen=100, value=0.) 
# Converting labels to binary vectors trainY = to_categorical(trainY, nb_classes=2) testY = to_categorical(testY, nb_classes=2) # Network building net = tflearn.input_data([None, 100]) net = tflearn.embedding(net, input_dim=10000, output_dim=128) net = tflearn.lstm(net, 128, dropout=0.8) net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net, optimizer='adam', learning_rate=0.001, loss='categorical_crossentropy') # Training model = tflearn.DNN(net, tensorboard_verbose=0) model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True, batch_size=32) # - model.predict(testX) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from collections import defaultdict import math import time import copy import itertools import warnings from datetime import datetime warnings.filterwarnings('ignore') import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchtext import data import torchtext from pathlib import Path import pandas as pd import spacy from torch.utils.tensorboard import SummaryWriter # default `log_dir` is "runs" - we'll be more specific here writer = SummaryWriter('../runs/lstm_{}'.format(datetime.now())) # - device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print(device) # # Embeddings # test embeddings cat_embed = nn.Embedding(5, 2) cat_tensor = torch.LongTensor([1]) cat_embed.forward(cat_tensor) # # Sentiment data df = pd.read_csv("../dataset/sentiment/training.1600000.processed.noemoticon.csv", header=None, engine='python', names=['polarity', 'id', 'date', 'query', 'user', 'tweet'], ) df.head() df.polarity.value_counts() # create binary encoding df['sentiment'] = df.polarity.map(lambda x: 1 if x == 4 else 0) df.head() df.describe() # save processed data df.to_csv('train-processed.csv', index=None) df.sample(10000).to_csv('train-processed-sample.csv', index=None) # # Creating dataset LABEL = data.LabelField() TWEET = data.Field(tokenize='spacy', lower=True) fields = [('polarity', None), ('id', None), ('date', None), ('query', None), ('user', None), ('tweet', TWEET), ('sentiment', LABEL)] twitter_dataset = data.TabularDataset( path='train-processed-sample.csv',format='CSV', fields=fields, skip_header=True) # split dataset train, val, test = twitter_dataset.split(split_ratio=[0.8, 0.1, 0.1]) print(len(train), len(val), len(test)) # view sample vars(train.examples[7]) # ## build a vocabulary vocab_size = 5000 TWEET.build_vocab(train, max_size=vocab_size) LABEL.build_vocab(train) # how big is our vocabulary (len(TWEET.vocab), len(LABEL.vocab)) # most common words TWEET.vocab.freqs.most_common(10) # create a dataloader train_loader, val_loader, test_loader = data.BucketIterator.splits( (train, val, test), batch_size = 32, device = device, sort_key = lambda x: len(x.tweet), sort_within_batch = False ) # # Create LSTM model # + # we use embedding and LSTM modules class LSTM(nn.Module): def __init__(self, hidden_size, embedding_dim, vocab_size): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim) self.encoder = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_size, num_layers=1) self.predictor = nn.Linear(hidden_size, 2) def forward(self, seq): output = self.embedding(seq) output, (hidden, _) = self.encoder(output) preds = self.predictor(hidden.squeeze(0)) return 
preds # - model = LSTM(100, 300, len(TWEET.vocab)) model # ## Training our model def train_model(model, optimizer, criterion, data_loader, epochs): model = model.to(device) # define training variables since = time.time() best_weights = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(1, epochs + 1): print('\nEpoch {}/{}'.format(epoch, epochs)) print('-' * 60) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 for batch in data_loader[phase]: tweet, target = batch.tweet.to(device), batch.sentiment.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): # make predictions outputs = model(tweet) loss = criterion(outputs, target) _, preds = torch.max(outputs, 1) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * tweet.size(0) running_corrects += preds.eq(target.view_as(preds)).cpu().sum() batch_size = len(data_loader[phase].dataset) epoch_loss = running_loss / batch_size epoch_acc = running_corrects.double().item() / batch_size print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc)) # ...log the running loss to tensorboard writer.add_scalar('{}/loss'.format(phase), epoch_loss, epoch) writer.add_scalar('{}/accuracy'.format(phase), epoch_acc, epoch) # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_weights = copy.deepcopy(model.state_dict()) time_elapsed = time.time() - since print('-' * 60) print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:.3f} %'.format(100 * best_acc)) print('=' * 60, '\n') # load best weights model.load_state_dict(best_weights) model = model.to(device) return model data_loader = {'train': train_loader, 'val': val_loader} optimizer = optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() m = train_model(model, optimizer, criterion, data_loader, 5) # ## Test accuracy def test_model(model, test_loader): all_targets = [] all_preds = [] model.eval() for batch in test_loader: tweets, targets = batch.tweet.to(device), batch.sentiment.to(device) outputs = model(tweets) _, preds = torch.max(outputs, 1) all_targets.extend(targets.cpu().numpy()) all_preds.extend(preds.cpu().numpy()) return all_targets, all_preds y_true, y_preds = test_model(m, test_loader) from sklearn.metrics import accuracy_score, plot_confusion_matrix, classification_report categories = {0: 'Negative', 1: "Positive"} score = accuracy_score(y_true, y_preds) print("Test classification accuracy: {:.3f} %".format(score * 100)) print(classification_report(y_true, y_preds, labels=[0, 1])) # ## classify tweets def classify_tweet(model, tweet): processed = TWEET.process([TWEET.preprocess(tweet)]) return categories[model(processed).argmax().item()] example = test.examples[5] vars(example) classify_tweet(m, example.tweet) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="7Q6NHKUCo8Aq" # In this notebook, we will run a ready-made network starting from some ATLAS data, which is already normalized. 
There is also an alternative to train the network from scratch. # + [markdown] id="ONFAOUa5o8Au" # ## Look into the dataset # + [markdown] id="EzZcwOnUo8Av" # First we need to make sure that Python 3.8 is used in the notebook. It is required in order to open this certain .pkl-file. # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="hNqX-DTuo8Av" outputId="8c09a2d9-ee57-41fc-9602-9019deeb7122" import sys sys.version # + [markdown] id="5rTdMGQYRrbq" # Then, we will import all necessary modules. # + id="U_VvHO_DRaP1" import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import urllib.request from sklearn.model_selection import train_test_split np.warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning) # + [markdown] id="Yw9PyQjszdMV" # Then, we will read the dataset and open into Pandas. Note that we have to change the paths to the directory where the dataset files are. # # # + colab={"base_uri": "https://localhost:8080/", "height": 643} id="qlS5HxLLR2tw" outputId="8d446dff-dec8-4fc1-b373-81ad53a6c347" df = pd.DataFrame() filename = "monojet_Zp2000.0_DM_50.0_chan3.csv" if not os.path.exists(filename): print(f"Downloading {filename}") url = "https://www.dropbox.com/s/elclfkfeg2s7vgs/monojet_Zp2000.0_DM_50.0_chan3.csv?dl=1" urllib.request.urlretrieve(url, filename) print(f"Downloaded {filename}") else: print(f"{filename} already exists") with open(filename, 'r') as datasetfile: for line in datasetfile: df = pd.concat([df, pd.DataFrame([tuple(line.strip().split(';'))])], ignore_index = True) df # + [markdown] id="Fd7Kg61ti5Py" # Extracting Jet particles only by checking every objects in all events. # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="Pe51__rjSKif" outputId="f596dc31-1ff5-406e-edce-582763e6a731" dataset = df.values X = [] # Remove all NaN and Null values for i in range(len(dataset)): X.append([j for j in dataset[i] if j == j and j != ""]) X = np.array(X) data = pd.DataFrame(columns=['E', 'pt', 'eta', 'phi']) for i in range(len(X)): for j in range(len(X[i])): # Checking if j particle if X[i][j][0] == 'j': four = list(map(float,X[i][j].split(',')[1:])) data = data.append({'E': four[0], 'pt': four[1],'eta': four[2],'phi': four[3]}, ignore_index=True) data # + [markdown] id="EkK7DUB8SuAI" # Then, we'll normalise the data. # + colab={"base_uri": "https://localhost:8080/"} id="6poFYIqnSoen" outputId="643cb510-45ad-481c-8a05-ba1ce8dfe3f3" print('All samples:') print(data) normalized_data = pd.DataFrame(columns=['E', 'pt', 'eta', 'phi']) normalized_data['E'] = np.log10(data['E']) normalized_data['pt'] = np.log10(data['pt']) normalized_data['eta'] = data['eta'] / 5 normalized_data['phi'] = data['phi'] / 3 print('\nSamples after normalization:') print(normalized_data) # + [markdown] id="YdvirQz9yyLE" # ### Split the data into Train and Test set # + [markdown] id="Gv5W27YgTGsX" # Then, we'll split the data into train and test set. 
# # + colab={"base_uri": "https://localhost:8080/"} id="9uuTlDIbS3QC" outputId="a608a1de-2f02-4ae9-bafe-cb40deccf0f8" train, test = train_test_split(normalized_data, test_size=0.2, random_state = 42) print('\nTraining samples:') print(train) print('\nTesting samples:') print(test) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ClQQcIc9TOef" outputId="862120c9-5c06-4162-ba1f-1bf8e2993a76" unit_list = ['[log(GeV)]', '[log(GeV)]', '[rad/5]', '[rad/3]'] variable_list = [r'$E$', r'$p_T$', r'$\eta$', r'$\phi$'] branches=["E","pt","eta","phi"] n_bins = 100 plt.figure(figsize=(14, 14), dpi=300) for i in range(0,4): plt.subplot(2,2,i+1) n_hist_data, bin_edges, _ = plt.hist(train[branches[i]], bins=n_bins) plt.xlabel(xlabel=variable_list[i] + ' ' + unit_list[i]) plt.ylabel('# of events') plt.suptitle('Four momentum values for training instances.', size = 20, y=1.02) plt.tight_layout() plt.savefig("fourmomentum_training", bbox_inches='tight', dpi=300) plt.show() # + [markdown] id="nIC0k7pno8A1" # ## Setting up the network # + [markdown] id="yl9IRE8-Wl_x" # ### Installing prerequisites # + [markdown] id="vmb6Yg1vVix_" # Install FastAI. Access to google drive will be requested. # + id="ynfxHLrsVQUo" # !pip install torchtext==0.8.1 # !pip install -U fastbook import fastbook fastbook.setup_book() # + [markdown] id="8jhxz2sko8A1" # ### Preparing the data # + [markdown] id="XgSqVfPyo8A1" # Adding the two datasets as TensorDatasets to PyTorch (also loading all other classes we'll need later) # + id="rHR5D4OoVVKv" import torch import torch.nn as nn import torch.optim as optim import torch.utils.data from torch.autograd import Variable from torch.utils.data import TensorDataset from torch.utils.data import DataLoader from fastai import learner from fastai.data import core train_x = train test_x = test train_y = train_x # y = x since we are building an autoencoder test_y = test_x # Constructs a tensor object of the data and wraps them in a TensorDataset object. train_ds = TensorDataset(torch.tensor(train_x.values, dtype=torch.float), torch.tensor(train_y.values, dtype=torch.float)) valid_ds = TensorDataset(torch.tensor(test_x.values, dtype=torch.float), torch.tensor(test_y.values, dtype=torch.float)) # + [markdown] id="YO7XPNyeo8A2" # We now set things up to load the data, and we use a batch size that was optimized by previous students...note also that this is fastai v2, migration thanks to . # + id="q-9XUIk0o8A3" batchsize = 256 # Converts the TensorDataset into a DataLoader object and combines into one DataLoaders object (a basic wrapper # around several DataLoader objects). train_dl = DataLoader(train_ds, batch_size=batchsize, shuffle=True) valid_dl = DataLoader(valid_ds, batch_size=batchsize * 2) dls = core.DataLoaders(train_dl, valid_dl) # + [markdown] id="N31Ydu4xo8A3" # ### Preparing the network # + [markdown] id="OhyfYI2to8A3" # Here we have an example network. Details aren't too important, as long as they match what was already trained for us...in this case we have a LeakyReLU, tanh activation function, and a number of layers that goes from 4 to 200 to 20 to 3 (number of features in the hidden layer that we pick for testing compression) and then back all the way to 4. 
# + id="VoEmFVeGo8A4" colab={"base_uri": "https://localhost:8080/"} outputId="685d1ffa-fed9-4634-da13-70105af9e730" class AE_3D_200_LeakyReLU(nn.Module): def __init__(self, n_features=4): super(AE_3D_200_LeakyReLU, self).__init__() self.en1 = nn.Linear(n_features, 200) self.en2 = nn.Linear(200, 200) self.en3 = nn.Linear(200, 20) self.en4 = nn.Linear(20, 3) self.de1 = nn.Linear(3, 20) self.de2 = nn.Linear(20, 200) self.de3 = nn.Linear(200, 200) self.de4 = nn.Linear(200, n_features) self.tanh = nn.Tanh() def encode(self, x): return self.en4(self.tanh(self.en3(self.tanh(self.en2(self.tanh(self.en1(x))))))) def decode(self, x): return self.de4(self.tanh(self.de3(self.tanh(self.de2(self.tanh(self.de1(self.tanh(x)))))))) def forward(self, x): z = self.encode(x) return self.decode(z) def describe(self): return 'in-200-200-20-3-20-200-200-out' model = AE_3D_200_LeakyReLU() model.to('cpu') # + [markdown] id="fL55Au3Mo8A5" # We now have to pick a loss function - MSE loss is appropriate for a compression autoencoder since it reflects the [(input-output)/input] physical quantity that we want to minimize. # + id="BdnUzozOo8A5" from fastai.metrics import mse loss_func = nn.MSELoss() #bn_wd = False # Don't use weight decay for batchnorm layers #true_wd = True # weight decay will be used for all optimizers wd = 1e-6 recorder = learner.Recorder() learn = learner.Learner(dls, model=model, wd=wd, loss_func=loss_func, cbs=recorder) #was: learn = basic_train.Learner(data=db, model=model, loss_func=loss_func, wd=wd, callback_fns=ActivationStats, bn_wd=bn_wd, true_wd=true_wd) # + [markdown] id="FlPY150to8A6" # ## Alternative 1: Running a pre-trained network # + [markdown] id="CXcKfd2Zo8A6" # Now we load the pre-trained network. # + id="jtUw7pkDo8A6" colab={"base_uri": "https://localhost:8080/"} outputId="15685bae-8421-44d0-ad4d-25370a337551" # To load pre-trained network, uncomment next line #learn.load("AE_3D_200") # + [markdown] id="-ovcjQ4do8A7" # Then we evaluate the MSE on this network - it should be of the order of 0.001 or less if all has gone well...if it has not trained as well (note the pesky 0-mass peak above...) then it's going to be a bit higher. # + id="rvFEVY90o8A7" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b4cfaf76-5e4c-4661-f0e5-3a42fb5eb818" #learn.validate() # + [markdown] id="nUx44PKXo8A9" # ## Alternative 2: Training a new network # + [markdown] id="9M9CAWR8o8A-" # Instead of using a pre-trained network, an alternative is to train a new network and use that instead. # + [markdown] id="j3ol98V1o8A-" # First, we want to find the best learning rate. The learning rate is a hyper-paramater that sets how much the weights of the network will change each step with respect to the loss gradient. # # Then we plot the loss versus the learning rates. We're interested in finding a good order of magnitude of learning rate, so we plot with a log scale. # # A good value for the learning rates is then either: # - one tenth of the minimum before the divergence # - when the slope is the steepest # + id="R95TqfhDo8A_" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="54c74051-7360-4634-c66b-48ee49ffc73c" from fastai.callback import schedule lr_min, lr_steep = learn.lr_find() print('Learning rate with the minimum loss:', lr_min) print('Learning rate with the steepest gradient:', lr_steep) # + [markdown] id="7rs07-DGo8A_" # Now we want to run the training! 
# # User-chosen variables: # - n_epoch: The number of epochs, i.e how many times the to run through all of the training data once (i.e the 1266046 entries, see cell 2) # - lr: The learning rate. Either choose lr_min, lr_steep from above or set your own. # # + id="qJqmUnZno8A_" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="24705c24-7c68-4860-84f6-e89d6d2f9193" import time start = time.perf_counter() # Starts timer learn.fit_one_cycle(n_epoch=100) end = time.perf_counter() # Ends timer delta_t = end - start print('Training took', delta_t, 'seconds') # + [markdown] id="GbmCaFGSo8BA" # Then we plot the loss as a function of batches and epochs to check if we reach a plateau. # + id="OI3vHyzEo8BA" colab={"base_uri": "https://localhost:8080/", "height": 270} outputId="6e163b49-0dd5-47f9-c134-1656eccc29a0" recorder.plot_loss() # + [markdown] id="V7qUJsFGo8BC" # Then we evaluate the MSE on this network - it should be of the order of 0.001 or less if all has gone well...if it has not trained as well (note the pesky 0-mass peak above...) then it's going to be a bit higher. # + id="btex604vo8BD" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1ee88431-f1d8-48a0-ea8f-43cda5f16757" learn.validate() trainingMSE = learn.validate(dl=train_dl) validationMSE = learn.validate(dl=valid_dl) print(f"Training MSE: {trainingMSE} \nValidation MSE: {validationMSE}") # + colab={"base_uri": "https://localhost:8080/"} id="CLqHDZFWvgSG" outputId="fdedc6b1-d78f-4ac0-9832-b83ad23b0131" modelpath = 'AE_3D_200' learn.save(modelpath) # + [markdown] id="kyjq7962o8BD" # Let's plot all of this, with ratios (thanks to code by ) # + [markdown] id="65ZUWBzeo8BD" # ## Plotting the outputs of the network # + [markdown] id="TCbLMnVMo8BE" # A function in case we want to un-normalize and get back to physical quantities... # + id="KyV5Llt6o8BE" def custom_unnormalize(df): df['eta'] = df['eta'] * 5 df['phi'] = df['phi'] * 3 df['E'] = 10**df['E'] df['pt'] = 10**(df['pt']) return df # + [markdown] id="D5O7RnWfo8BF" # Make the histograms from the dataset... 
# + id="_xvEBcB-o8BF" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="e270624e-a632-4659-fd0e-5a9068bf6a77" import numpy as np plt.close('all') unit_list = ['[GeV]', '[GeV]', '[rad]', '[rad]'] variable_list = [r'$E$', r'$p_T$', r'$\eta$', r'$\phi$'] line_style = ['--', '-'] colors = ['orange', 'c'] markers = ['*', 's'] model.to('cpu') # Histograms idxs = (0, 100000) # Choose events to compare data = torch.tensor(test[idxs[0]:idxs[1]].values, dtype=torch.float) pred = model(data) pred = pred.detach().numpy() data = data.detach().numpy() data_df = pd.DataFrame(data, columns=test.columns) pred_df = pd.DataFrame(pred, columns=test.columns) unnormalized_data_df = custom_unnormalize(data_df) unnormalized_pred_df = custom_unnormalize(pred_df) alph = 0.8 n_bins = 200 plt.figure(figsize=(14,14), dpi=300) for i in np.arange(4): plt.subplot(2,2,i+1) n_hist_data, bin_edges, _ = plt.hist(data[:, i], color=colors[1], label='Input', alpha=1, bins=n_bins) n_hist_pred, _, _ = plt.hist(pred[:, i], color=colors[0], label='Output', alpha=alph, bins=bin_edges) plt.suptitle(test.columns[i]) plt.xlabel(test.columns[i]) plt.ylabel('Number of events') plt.yscale('log') plt.legend() plt.suptitle('Results of Compression.', size = 20, y = 1.02) plt.tight_layout() plt.savefig("compressionresults", bbox_inches='tight', dpi = 300) plt.show() # + [markdown] id="7PQcal5DyRy_" # As we can see, autoencoder is able to retain most (if not all) data even after reducing 4 variables to 3. # # We will now compute average error. # + id="1rlIfxOPo8BG" colab={"base_uri": "https://localhost:8080/"} outputId="96d3d118-cce3-4547-d875-17f5e9fa49e1" def getRatio(bin1,bin2): bins = [] for b1,b2 in zip(bin1,bin2): if b1==0 and b2==0: bins.append(0.) elif b2==0: bins.append(None) else: bins.append((float(b2)-float(b1))/b1) return bins rat = getRatio(n_hist_data,n_hist_pred) print(f'Average Error: {np.mean(rat)}') # + [markdown] id="qKjpBFVKymOl" # Then, plot average error by bins. 
# + colab={"base_uri": "https://localhost:8080/", "height": 515} id="SqNJrwz3Mi5H" outputId="9326c946-83d0-4e43-d02b-c1d9f2e7a59e" plt.figure(figsize=[16,8]) plt.plot(rat) plt.xlabel('Bins', size=16) plt.ylabel('Error', size=16) plt.title("Relative Error by Bins", size=20) plt.grid(axis='y') plt.tight_layout() plt.savefig("relativeerror", bbox_inches='tight', dpi = 300) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 561} colab_type="code" id="tzAOpnZLGM5e" outputId="f1d11aee-49c7-4e3d-eca0-e3f35f41948c" # !wget https://github.com/v3rm1/ml_bach_fugue/archive/midi_to_txt.zip # !unzip midi_to_txt.zip # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="_O2TVYQha-3K" outputId="42094b6c-9bc9-4573-883b-7f095f1dbcfc" # !mv ml_bach_fugue-midi_to_txt/preprocessing/converters/ /content/ # !rm -rf ml_bach_fugue-midi_to_txt/ # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="6-mdLMd3W5iw" outputId="bd60fa97-3af3-45a2-a129-8262a55620cb" from google.colab import drive drive.mount('/content/gdrive') # + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="i4dWrNgCGiuR" outputId="bf9c5e41-e60a-4fe1-ef53-b1e8c1d35244" from bs4 import BeautifulSoup import requests import os import glob import zipfile url = 'http://www.stringquartets.org/composers.htm' outpath = '/content/gdrive/My Drive/raw_data/' os.makedirs(outpath, exist_ok=True) r = requests.get(url) soup = BeautifulSoup(r.text) for link in soup.find_all('a', href=True): zipurl = link['href'] if(zipurl.endswith('.zip')): outfname = outpath + zipurl.split('/')[-1] r = requests.get('http://www.stringquartets.org/' + zipurl, stream=True) if( r.status_code == requests.codes.ok ) : fsize = int(r.headers['content-length']) print('Downloading {}'.format(outfname)) with open(outfname, 'wb') as fd: for chunk in r.iter_content(chunk_size=1024): # chuck size can be larger if chunk: # ignore keep-alive requests fd.write(chunk) fd.close() for files in glob.iglob('/content/gdrive/My Drive/raw_data/*.zip'): # Extract files f = zipfile.ZipFile(files) f.extractall('/content/gdrive/My Drive/raw_data') f.close() os.remove(files) for files in glob.iglob('/content/gdrive/My Drive/raw_data/*.kpp'): # Remove .kpp files os.remove(files) for files in glob.iglob('/content/gdrive/My Drive/raw_data/*.squ'): # Rename .squ files to midi name = files.split('/')[-1].split('.')[0] os.rename(files, '/content/gdrive/My Drive/raw_data/'+name+'.mid') # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="LI6N_7FsncgD" outputId="a13cca14-a109-442b-c1c8-f908fdbb85f5" # !pip install music21 mido # + colab={} colab_type="code" id="hGSvA8pVnJOa" """ Function taken from http://nickkellyresearch.com/python-script-transpose-midi-files-c-minor/ """ def transpose_key(filename): """ converting everything into the key of C major or A minor """ # major conversions majors = dict([('A-', 4),('G#', 4),('A', 3),('A#', 2),('B-', 2),('B', 1),('C', 0),('C#', -1),('D-', -1),('D', -2), ('D#', -3),('E-', -3),('E', -4),('F', -5),('F#', 6),('G-', 6),('G', 5)]) minors = dict([('G#', 1), ('A-', 1),('A', 0),('A#', -1),('B-', -1),('B', -2),('C', -3),('C#', -4),('D-', -4), ('D', -5),('D#', 6),('E-', 6),('E', 5),('F', 4),('F#', 3),('G-', 3),('G', 2)]) 
score = music21.converter.parse(filename) key = score.analyze('key') # print key.tonic.name, key.mode if key.mode == "major": halfSteps = majors[key.tonic.name] elif key.mode == "minor": halfSteps = minors[key.tonic.name] newscore = score.transpose(halfSteps) newscore.write('midi', '/tmp/temp.mid') # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="23zifGNuWO9R" outputId="1eb79141-dd83-4395-d42c-e293367d53ad" import converters.convert as cvt from mido import MidiFile, MidiTrack, Message as MidiMessage import music21 import tqdm print('Total files found: ', len(glob.glob('/content/gdrive/My Drive/raw_data/*.mid'))) for file in tqdm.tqdm(glob.iglob('/content/gdrive/My Drive/raw_data/*.mid')): try: transpose_key(file) except: print('Error occured while reading file\n Deleting file') os.remove(file) continue mid = MidiFile('/tmp/temp.mid') tpb = mid.ticks_per_beat metamessages, insmessages = cvt.split_tracks(mid) # Make sure that only 4 instruments are passed forward streams = cvt.notes_to_pitchstream(insmessages, tpb) cvt.pitch_stream_to_text(streams, file_name='/content/gdrive/My Drive/corpus.txt') os.remove(file) # + colab={} colab_type="code" id="4VhdkOvhnsNo" # !rm -rf /content/gdrive/My Drive/raw_data # + colab={} colab_type="code" id="xLfw0jCl3u-y" stream = cvt.text_to_pitch_stream(file_name='') mid = cvt.pitchstream_to_midi(stream) mid.save('txt.mid') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import scipy.linalg as la from matplotlib import pyplot as plt from scipy import sparse as sp from time import time import scipy.sparse.linalg as spla from math import sqrt import simulated_data as simd import data_strm_subclass as dssb import streaming_subclass as stsb import plot_functions as pf import pickle from pandas import DataFrame as df def expvar(X, W, xnorm2=None): ''' Calculate the explained variance of X Inputs: X: n x d array-like W: d x k array-like xnorm2: optional float, the squared frobenius norm of X. This is often calculated in other applications and can thus be provided. ''' if xnorm2 is None: xnorm2 = la.norm(X, ord='fro')**2 return la.norm(X.dot(W), ord='fro')**2 / xnorm2 # # ADAM and RMSProp # # We want to see how ADAM and RMSProp behave in the streaming PCA setting (which includes a normalization step not included in standard SGD). # We draw on the datasets and code from "AdaOja: Adaptive Learning Rates for Streaming PCA" by and . # Note that we consider three datasets: sparse, bag-of-words "Kos" data; dense "CIFAR" data; and synthetic spiked covariance data with noise parameter $0.25$. # # Note that in our original experiments prior to writing the paper for AdaOja, we mostly considered the recommended parameters and implementation for ADAM and RMSProp. As we demonstrate below, the recommended parameters and implementation for these methods do not produce good results in this setting. This new exploration expands beyond our previous work and comes up with some interesting--and more relevant--results. 
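# To make the setting concrete, the sketch below (my own illustration, not the
# `dssb`/`stsb` implementation used in this notebook) shows one block update of Oja's
# method with an AdaGrad-style adaptive step size: a gradient step followed by the
# re-orthonormalization ("normalization") step mentioned above. All names in it are
# illustrative assumptions.
import numpy as np
import scipy.linalg as la


def oja_adaptive_block_update(W, X_block, b, eta=1.0):
    """One streaming-PCA block update.

    W:       d x k estimate with orthonormal columns
    X_block: B x d block of samples
    b:       running sum of squared gradient norms (scalar)
    """
    G = X_block.T @ (X_block @ W) / X_block.shape[0]  # sample-covariance gradient
    b = b + la.norm(G, ord='fro') ** 2                # accumulate squared gradient norm
    W = W + (eta / np.sqrt(b)) * G                    # adaptive gradient step
    W, _ = la.qr(W, mode='economic')                  # normalization step (QR)
    return W, b

# Typical use: W, _ = la.qr(np.random.randn(d, k), mode='economic'); b = 1e-5;
# then call oja_adaptive_block_update on each incoming block of samples.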
# # ## Loading the Data # Load Small bag of words data kos_n, kos_d, kos_nnz, kos_dense, kos_SpX, kos_norm2 = dssb.get_bagX('docword.kos.txt') nips_n, nips_d, nips_nnz, nips_dense, nips_SpX, nips_norm2 = dssb.get_bagX('docword.nips.txt') enr_n, enr_d, enr_nnz, enr_dense, enr_SpX, enr_norm2 = dssb.get_bagX('docword.enron.txt') # + k = 10 # Obtain true ExpVar for Small Bag-of-words data v_kos = spla.svds((kos_SpX.T.dot(kos_SpX)).astype(float), k=k)[2].T kos_expvar = la.norm(kos_SpX.dot(v_kos), ord='fro')**2 / kos_norm2 v_nips = spla.svds((nips_SpX.T.dot(nips_SpX)).astype(float), k=k)[2].T nips_expvar = la.norm(nips_SpX.dot(v_nips), ord='fro')**2 / nips_norm2 #v_enr = spla.svds((enr_SpX.T.dot(enr_SpX)).astype(float), k=k)[2].T #enr_expvar = la.norm(enr_SpX.dot(v_enr), ord='fro')**2 / enr_norm2 # + # Load the CIFAR dataset def unpickle(file): with open(file, 'rb') as fo: dict = pickle.load(fo, encoding='bytes') return dict # Load the CIFAR data db = [] for i in range(1, 6): db.append(unpickle('data_batch_' + str(i))[b'data']) CIFAR = np.vstack(db) # Centralize the CIFAR data CIFAR_sc = CIFAR - CIFAR.mean(axis=0) CIFAR_norm2 = la.norm(CIFAR_sc, ord='fro')**2 v_CIFAR = la.eigh(np.cov(CIFAR.T))[1][:,::-1] CIFAR_evar = expvar(CIFAR_sc, v_CIFAR[:,:k], xnorm2=CIFAR_norm2) # - # Initialize Synthetic Data n = 10000 B = 1 d = 1000 k = 10 sigma = 0.25 cov, w, A0, syn = simd.spiked_covariance(n, d, k, sigma=sigma) V = la.eigh(np.cov(syn.T))[1][:,::-1][:,:k] syn_evar = expvar(syn, V) syn_norm2 = la.norm(syn, ord='fro')**2 # ## Recommended parameters # # The recommended parameters for RMSProp are $\gamma = 0.9, \eta = 0.001$ # The recommended parameters for ADAM are $\beta_1 =0.9, \beta_2 = 0.999, \epsilon = 10^{-8}, \eta=0.001$. # We proceed following the original algorithms using these parameters. # The original algorithms are also designed for the single update step rather than the block update step, so we begin by setting $B=1$. # # # ## Single Value, Vector and Matrix cases # RMSProp and ADAM were originally proposed for the single vector case. Therefore, we can consider three different kinds of update steps for the case $k > 1$. # # 1. Single Value # $$\text{RMSProp: } b_t \leftarrow \gamma b_{t-1} + (1 - \gamma) * ||G_t||_2^2 $$ # $$\text{ADAM: } b_t \leftarrow \frac{\beta_2 b_{t-1} + (1 - \beta_2) ||G_t||_2^2}{1-\beta_2^{Bt}} $$ # # 2. Vector Case # $$\text{RMSProp: } b_t^{(i)} \leftarrow \gamma b_{t-1}^{(i)} + (1 - \gamma) * ||G_t^{(i)}||_2^2 $$ # $$\text{ADAM: } b_t^{(i)} \leftarrow \frac{\beta_2 b_{t-1}^{(i)} + (1 - \beta_2) ||G_t^{(i)}||_2^2}{1-\beta_2^{Bt}} $$ # # 3. Matrix Case # $$\text{RMSProp: } b_t^{(i,j)} \leftarrow \gamma b_{t-1}^{(i,j)} + (1 - \gamma) * (G_t^{(i,j)})^2 $$ # $$\text{ADAM: } b_t^{(i,j)} \leftarrow \frac{\beta_2 b_{t-1}^{(i,j)} + (1 - \beta_2) (G_t^{(i,j)})^2}{1-\beta_2^{Bt}} $$ def comp_b0_dim(spca0, spca1, spca2, dataname, methodname, true_evar=None): ''' Plotting function to compare the accuracy evolution for the single, vector and matrix versions of the desired streaming pca method. 
''' plt.plot(spca0.acc_indices, spca0.accQ, label='Single Case') plt.plot(spca1.acc_indices, spca1.accQ, label='Vector Case') plt.plot(spca2.acc_indices, spca2.accQ, label='Matrix Case') if true_evar is not None: assert true_evar >= 0 and true_evar <=1, "The true explained variance should be a float > 0" plt.plot(spca0.acc_indices, np.ones_like(spca0.acc_indices) * true_evar, label='Offline SVD') plt.legend(loc='best') plt.title('Comparing Single, Vector and Matrix Case\n Method: ' + methodname + ', data: '+ dataname) plt.show() # + k = 10 B = 1 rmsp0_kos, adam0_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=0, bias_correction=True) rmsp1_kos, adam1_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=1, bias_correction=True) rmsp2_kos, adam2_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=2, bias_correction=True) comp_b0_dim(rmsp0_kos, rmsp1_kos, rmsp2_kos, 'Kos', 'RMSProp', true_evar=kos_expvar) comp_b0_dim(adam0_kos, adam1_kos, adam2_kos, 'Kos', 'ADAM', true_evar=kos_expvar) comp_b0_dim(adam0_kos, adam1_kos, adam2_kos, 'Kos', 'ADAM') # + rmsp0_cif, adam0_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=0, bias_correction=True, Sparse=False) rmsp1_cif, adam1_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=1, bias_correction=True, Sparse=False) rmsp2_cif, adam2_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=2, bias_correction=True, Sparse=False) comp_b0_dim(rmsp0_cif, rmsp1_cif, rmsp2_cif, 'CIFAR', 'RMSProp', true_evar=CIFAR_evar) comp_b0_dim(adam0_cif, adam1_cif, adam2_cif, 'CIFAR', 'ADAM', true_evar=CIFAR_evar) comp_b0_dim(adam0_cif, adam1_cif, adam2_cif, 'CIFAR', 'ADAM') # + rmsp0_syn, adam0_syn = dssb.run_sim_fullX(syn, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2 = syn_norm2, b0_dim=0, bias_correction=True, Sparse=False) rmsp1_syn, adam1_syn = dssb.run_sim_fullX(syn, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2 = syn_norm2, b0_dim=1, bias_correction=True, Sparse=False) rmsp2_syn, adam2_syn = dssb.run_sim_fullX(syn, k, methods=['RMSProp', 'ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2 = syn_norm2, b0_dim=2, bias_correction=True, Sparse=False) comp_b0_dim(rmsp0_syn, rmsp1_syn, rmsp2_syn, 'Synthetic', 'RMSProp', true_evar=syn_evar) comp_b0_dim(adam0_syn, adam1_syn, adam2_syn, 'Synthetic', 'ADAM', true_evar=syn_evar) comp_b0_dim(adam0_syn, adam1_syn, adam2_syn, 'Synthetic', 'ADAM') # - # From the above, we see that the matrix case obtains the best results for RMSProp for our sparse kos dataset. # Though the matrix case converges more quickly for our dense CIFAR data, the vector case actually achieves better final accuracy. # For our synthetic data set, the matrix case obtains the best results by a wide margin. 
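# As a concrete reading of the single/vector/matrix cases compared above, the helper
# below (an illustrative sketch with names of my own, not the `stsb` code) shows the
# three accumulator shapes for an RMSProp-style update of a d x k gradient matrix G;
# here G[:, i] is taken to be the per-component gradient $G_t^{(i)}$.
import numpy as np


def rmsprop_accumulator(b, G, gamma=0.9, b0_dim=2):
    """Update the squared-gradient accumulator b for a d x k gradient matrix G."""
    if b0_dim == 0:                                            # single value: ||G||_F^2
        return gamma * b + (1 - gamma) * np.linalg.norm(G) ** 2
    if b0_dim == 1:                                            # vector: one entry per column
        return gamma * b + (1 - gamma) * np.sum(G ** 2, axis=0)
    return gamma * b + (1 - gamma) * G ** 2                    # matrix: entrywise squares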
# # For ADAM, the overall results are nowhere near the true solution for either the sparse or the dense data. Why? # # ### Removing Bias Correction Step # One part of ADAM that might be unecessary is *bias correction* which essentially seeks to correct for the fact that $m_0$ and $b_0$ are initialized as vectors of zeros. # However, this effect may already be mitigated by the normalization step in Oja's method. # # Let us modify ADAM and remove the bias correction steps to see if this improves our results: adam0_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=0, bias_correction=False) adam1_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=1, bias_correction=False) adam2_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, X=kos_SpX, xnorm2=kos_norm2, b0_dim=2, bias_correction=False) adam0_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=0, bias_correction=False, Sparse=False) adam1_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=1, bias_correction=False, Sparse=False) adam2_cif = dssb.run_sim_fullX(CIFAR_sc, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, xnorm2=CIFAR_norm2, b0_dim=2, bias_correction=False, Sparse=False) adam0_syn = dssb.run_sim_fullX(syn, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, b0_dim=0, bias_correction=False, Sparse=False) adam1_syn = dssb.run_sim_fullX(syn, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, b0_dim=1, bias_correction=False, Sparse=False) adam2_syn = dssb.run_sim_fullX(syn, k, methods=['ADAM'], eta=.001, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B, b0_dim=2, bias_correction=False, Sparse=False) comp_b0_dim(adam0_kos[0], adam1_kos[0], adam2_kos[0], 'kos', 'ADAM', true_evar=kos_expvar) comp_b0_dim(adam0_cif[0], adam1_cif[0], adam2_cif[0], 'CIFAR', 'ADAM', true_evar=CIFAR_evar) comp_b0_dim(adam0_syn[0], adam1_syn[0], adam2_syn[0], 'Synthetic', 'ADAM', true_evar=syn_evar) # We see that removing Bias Correction significantly improves our resuts. In the Sparse case, ADAM achieves best performance for vector $b0$ (note that our result is noisy for matrix $b0$. In the dense case, as with RMSProp, the matrix $b0$ converges fastest but the vector $b0$ eventually achieves the overall best accuracy. # # ## Evaluating new hyperparameters # Although we seem to obtain somewhat-close to true results for the CIFAR data, our sparse results for the Kos dataset are sadly lacking for both RMSProp and ADAM. # We consider the effect of the hyperparameter $\eta$ on our convergence results. # We note that the choice of $\eta$ may also explain the difference between our results for the different dimensionalities of $b0$. # We consider appropriate scaling depending on the $b0\_dim$ parameter as well as the block size $B$. # + # Run adam and rmsprop for the kos data for different parameters. 
eta_vals = 10.**np.arange(-4, 1) B_vals = [1, 10, 100] Dim_vals = [0, 1, 2] Best_eta_kos = np.zeros((2, len(Dim_vals), len(B_vals))) Final_acc_kos = np.zeros((2, len(Dim_vals), len(B_vals))) for i in range(len(Dim_vals)): for j in range(len(B_vals)): rmsp_acc = [] adam_acc = [] for eta in eta_vals: rmsp_kos, adam_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['RMSProp', 'ADAM'], eta=eta, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B_vals[j], X=kos_SpX, xnorm2=kos_norm2, b0_dim=Dim_vals[i], bias_correction=False, num_acc=2) rmsp_acc.append(rmsp_kos.accQ[-1]) adam_acc.append(adam_kos.accQ[-1]) Best_eta_kos[0, i,j] = eta_vals[np.argmax(rmsp_acc)] Best_eta_kos[1, i,j] = eta_vals[np.argmax(adam_acc)] Final_acc_kos[0, i,j] = np.max(rmsp_acc) Final_acc_kos[1, i,j] = np.max(adam_acc) # + # Run adam and rmsprop for the synthetic data for different parameters eta_vals = 10.**np.arange(-4, 1) B_vals = [1, 10, 100] Dim_vals = [0, 1, 2] Best_eta = np.zeros((2, len(Dim_vals), len(B_vals))) Final_acc = np.zeros((2, len(Dim_vals), len(B_vals))) for i in range(len(Dim_vals)): for j in range(len(B_vals)): rmsp_acc = [] adam_acc = [] for eta in eta_vals: rmsp_syn, adam_syn = dssb.run_sim_fullX(syn, k, methods=['RMSProp', 'ADAM'], eta=eta, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=B_vals[j], xnorm2=syn_norm2, b0_dim=Dim_vals[i], bias_correction=False, num_acc=2, Sparse=False) rmsp_acc.append(rmsp_syn.accQ[-1]) adam_acc.append(adam_syn.accQ[-1]) Best_eta[0, i,j] = eta_vals[np.argmax(rmsp_acc)] Best_eta[1, i,j] = eta_vals[np.argmax(adam_acc)] Final_acc[0, i,j] = np.max(rmsp_acc) Final_acc[1, i,j] = np.max(adam_acc) # + # Create dataframes to visualize the results col = ['B=1', 'B=10', 'B=100'] row = ['b0 dim = 0', 'b0 dim = 1', 'b0 dim = 2'] rmsp_syn_acc = df(Final_acc[0], columns=col, index=row) adam_syn_acc = df(Final_acc[1], columns=col, index=row) rmsp_syn_eta = df(Best_eta[0], columns=col, index=row) adam_syn_eta = df(Best_eta[1], columns=col, index=row) rmsp_kos_acc = df(Final_acc_kos[0], columns=col, index=row) adam_kos_acc = df(Final_acc_kos[1], columns=col, index=row) rmsp_kos_eta = df(Best_eta_kos[0], columns=col, index=row) adam_kos_eta = df(Best_eta_kos[1], columns=col, index=row) # - # ### Choice of $\eta$ for RMSProp # # In the tables below, we display the choice of $\eta$ that achieves the best accuracy for RMSProp for the synthetic (top) and kos (bottom) data sets. # Note that the best scaling for $\eta$ for both the kos and synthetic datasets depends on both the dimension for $b0$ and the blocksize $B$. # In order to obtain best performance for RMSProp in this setting, we would have to optimize for $\eta$. display(rmsp_syn_eta) display(rmsp_kos_eta) # ### Choice of $\eta$ for ADAM # As with RMSProp, the best scaling for $\eta$ for both the synthetic (top) and kos (bottom) datasets depends on both the dimension for $b0$ and the blocksize $B$. # Note that for both ADAM and RMSProp, as B increases the best case choice of $\eta$ tends to increase for all $b0$ dimension options. # On the other hand as the dimension of $b0$ increases the optimal scaling for eta decreases across block sizes. display(adam_syn_eta) display(adam_kos_eta) # ### Accuracy # #### RMSProp # In the tables below, we display the accuracy from the best choice of $\eta$ for RMSProp on the synthetic dataset (top) and the kos dataset (bottom). # We see that for $B=1$ the best accuracy was achieved for the vector $b0$ version for both datasets. 
# For $B=10$ the best accuracy was achieved in the matrix case for the synthetic data and the vector case for the sparse data. # For $B=100$ the best accuracy was achieved in the vector cas--again for both datasets. # Note that the true explained variance for the kos dataset was $\approx 0.295$ and the closest RMSProp achieved was $\approx 0.2642$ when $B=10$ for the vector case and the true explained variance for the synthetic case was $\approx 0.464$ and the closest RMSProp achieved was $\approx 0.0434$ when $B=100$ for the vector case. display(rmsp_syn_acc) display(rmsp_kos_acc) # #### ADAM # We now consider the accuracy from the best choice of $\eta$ for ADAM on the synthetic dataset (top) and the kos dataset (bottom). # Note that we do not correct for bias in these experiments. # For $B=1$ $b0 dim = 1$ achieves highest accuracy for both datasets. # For $B=10$ $b0 dim = 1$ achieves highest accuracy for the kos dataset and $b0 dim = 2$ achieves highest accuracy on the synthetic dataset. # For $B=100$ $b0 dim = 1$ achieves highest accuracy for the synthetic dataset and $b0 dim = 0$ achieves highest accuracy for kos, though we note that this accuracy is very close to that achieves for $b0 dim = 1$. # Note that the best accuracies were achieved for $b0 dim = 2, B=10$ and $b0 dim=0, B=100$. # display(adam_syn_acc) display(adam_kos_acc) # #### Conclusions # For both ADAM and RMSProp, it appears that the most *consistent* results for accuracy are achieved in the vector setting for $b0$. # We see that there is some variability however. # It is also important to note that for our sparse data, the matrix case for $b0$ performed terribly for both RMSProp and ADAM. # # # Comparing AdaOja to ADAM and RMSProp # # Using our values from above, we set $\eta=.1, B=10$ for the kos dataset and $\eta=.1, B=100$ for the synthetic dataset. We visualize the convergence of the explained variance for both datasets: ada_kos, rmsp_kos, adam_kos = dssb.run_sim_bag('docword.kos.txt', k, methods=['AdaOja', 'RMSProp', 'ADAM'], eta=.1, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=10, X=kos_SpX, xnorm2=kos_norm2, b0_dim=1, bias_correction=False) pf.plot_mom_comp(ada_kos, rmsp_kos, adam_kos, 'Kos', true_evar=kos_expvar) ada_syn, rmsp_syn, adam_syn = dssb.run_sim_fullX(syn, k, methods=['AdaOja', 'RMSProp', 'ADAM'], eta=0.1, gamma=.9, beta_1 = 0.9, beta_2=0.999, delta=1e-8, B=100, xnorm2=syn_norm2, b0_dim=1, bias_correction=False, Sparse=False) pf.plot_mom_comp(ada_syn, rmsp_syn, adam_syn, 'Synthetic', true_evar = syn_evar) # When $\eta$ and $B$ are well chosen and bias correction is removed from the ADAM algorithm, we get reasonable convergence results for ADAM and RMSProp as streaming PCA methods. However, we note that AdaOja consistently outperforms ADAM and RMSProp, and requires no such hyperparameter optimization. # # Note that we can also show that AdaOja gets consistent (and better) results--without optimizing for some $\eta$ term--accross choices of blocksize. See below for details. 
for B in B_vals: ada_oja = dssb.run_sim_fullX(syn, k, methods=['AdaOja'], B=B, xnorm2=syn_norm2, num_acc=2, Sparse=False) print('B = ' + str(B), 'AdaOja Acc = ' + str(ada_oja[0].accQ[-1])) for B in B_vals: ada_oja = dssb.run_sim_bag('docword.kos.txt', k, methods=['AdaOja'], B=B, X = kos_SpX, xnorm2=kos_norm2, num_acc=2) print('B = ' + str(B), 'AdaOja Acc = ' + str(ada_oja[0].accQ[-1])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: project_env # language: python # name: project_env # --- # + from statistics import median from heapq import heapify from random import randint import numpy as np import benchit def median_stats(arr): return median(arr) def median_kth_smallest(arr): n = len(arr) k = n // 2 + 1 return kthSmallest(arr, 0, n - 1, k) def kthSmallest(arr, l, r, k): # If k is smaller than number of # elements in array if (k > 0 and k <= r - l + 1): # Number of elements in arr[l..r] n = r - l + 1 # Divide arr[] in groups of size 5, # calculate median of every group # and store it in median[] array. median = [] i = 0 while (i < n // 5): median.append(findMedian(arr, l + i * 5, 5)) i += 1 # For last group with less than 5 elements if (i * 5 < n): median.append(findMedian(arr, l + i * 5, n % 5)) i += 1 # Find median of all medians using recursive call. # If median[] has only one element, then no need # of recursive call if i == 1: medOfMed = median[i - 1] else: medOfMed = kthSmallest(median, 0, i - 1, i // 2) # Partition the array around a medOfMed # element and get position of pivot # element in sorted array pos = partition(arr, l, r, medOfMed) # If position is same as k if (pos - l == k - 1): return arr[pos] if (pos - l > k - 1): # If position is more, # recur for left subarray return kthSmallest(arr, l, pos - 1, k) # Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1) # If k is more than the number of # elements in the array return 999999999999 def swap(arr, a, b): temp = arr[a] arr[a] = arr[b] arr[b] = temp # It searches for x in arr[l..r], # and partitions the array around x. 
def partition(arr, l, r, x): for i in range(l, r): if arr[i] == x: swap(arr, r, i) break x = arr[r] i = l for j in range(l, r): if (arr[j] <= x): swap(arr, i, j) i += 1 swap(arr, i, r) return i # median of arr[] from index l to l+n def findMedian(arr, l, n): lis = [] for i in range(l, l + n): lis.append(arr[i]) # Sort the array lis.sort() # Return the middle element return lis[n // 2] funcs = [median_stats, median_kth_smallest] inputs = [[randint(0, 1000000) for _ in range(i)] for i in 10**np.arange(5)] t = benchit.timings(funcs, inputs) t.plot(logy=True, logx=True) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt import numpy as np import sklearn import sklearn.datasets import neural_net import optimization import performance # - # ## Model with different optimization algorithms # # + def load_dataset(): np.random.seed(3) train_X, train_Y = sklearn.datasets.make_moons(n_samples=300, noise=.2) #300 #0.2 # Visualize the data plt.scatter(train_X[:, 0], train_X[:, 1], c=train_Y, s=40, cmap=plt.cm.Spectral) train_X = train_X.T train_Y = train_Y.reshape((1, train_Y.shape[0])) return train_X, train_Y def plot_decision_boundary(clf, X, y): # Set min and max values and give it some padding x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1 y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1 h = 0.01 # Generate a grid of points with distance h between them xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = np.c_[xx.ravel(), yy.ravel()].T Z = clf.predict(Z) Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral) plot_data(X.T, y.T.ravel()) def plot_data(X, y): plt.scatter(X[y == 0, 0], X[y == 0, 1], color="red", s=30, label="Cluster1") plt.scatter(X[y == 1, 0], X[y == 1, 1], color="blue", s=30, label="Cluster2") # - train_X, train_Y = load_dataset() # ### MINI-BATCH GRADIENT DESCENT layers_dims = [train_X.shape[0], 5, 2, 1] init_method = 'he' activations = ('relu', 'sigmoid') lambd = 0 optimizer_name = 'gd' learning_rate = 0.0007 num_epochs = 10000 clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs) clf.train(train_X, train_Y) predictions = clf.predict(train_X) gd_accuracy = performance.compute_accuracy(train_Y, predictions) print('Gradient descent optimization accuracy = ', gd_accuracy, '%') # **GETTING THE MLNN PARAMS** params = clf.get_params() plt.plot(params['costs']) plt.title('Cost 100 per epoch') plot_decision_boundary(clf, train_X, train_Y) # ### MINI-BATCH GRADIENT DESCENT WITH MOMENTUM layers_dims = [train_X.shape[0], 5, 2, 1] init_method = 'he' activations = ('relu', 'sigmoid') lambd = 0 optimizer_name = 'momentum' learning_rate = 0.0007 num_epochs = 10000 clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs) clf.train(train_X, train_Y) predictions = clf.predict(train_X) momentum_accuracy = performance.compute_accuracy(train_Y, predictions) print('Momentum optimization accuracy = ', momentum_accuracy, '%') plot_decision_boundary(clf, train_X, train_Y) params = clf.get_params() plt.plot(params['costs']) plt.title('Cost 100 per epoch') # ### MINI-BATCH GRADIENT DESCENT WITH RMSPROP layers_dims = [train_X.shape[0], 5, 2, 1] init_method = 'he' activations = ('relu', 'sigmoid') lambd = 0 optimizer_name = 'rmsprop' 
learning_rate = 0.0007 num_epochs = 10000 clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs) clf.train(train_X, train_Y) predictions = clf.predict(train_X) rmsprop_accuracy = performance.compute_accuracy(train_Y, predictions) print('RMSProp optimization accuracy = ', rmsprop_accuracy, '%') plot_decision_boundary(clf, train_X, train_Y) params = clf.get_params() plt.plot(params['costs']) plt.title('Cost 100 per epoch') # ### MINI-BATCH GRADIENT DESCENT WITH ADAM layers_dims = [train_X.shape[0], 5, 2, 1] init_method = 'xavier' activations = ('relu', 'sigmoid') lambd = 0 optimizer_name = 'adam' learning_rate = 0.0007 num_epochs = 10000 clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs) clf.train(train_X, train_Y) predictions = clf.predict(train_X) adam_accuracy = performance.compute_accuracy(train_Y, predictions) print('ADAM optimization accuracy = ', adam_accuracy, '%') plot_decision_boundary(clf, train_X, train_Y) params = clf.get_params() plt.plot(params['costs']) plt.title('Cost 100 per epoch') # + [markdown] variables={"np.round(adam_accuracy, 2)": "94.33", "np.round(gd_accuracy, 2)": "87.33", "np.round(momentum_accuracy, 2)": "85.0", "np.round(rmsprop_accuracy, 2)": "94.33"} # ### Summary # # # # # # # # # # # # # # # # # # # # #
# | optimization method | accuracy |
# | --- | --- |
# | Gradient descent | 87.33% |
# | Momentum | 85% |
# | RMSProp | 94.33% |
# | Adam | 94.33% |
    # # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score from sklearn.metrics import accuracy_score, recall_score, roc_auc_score, f1_score from sklearn.feature_extraction.text import TfidfVectorizer from konlpy.tag import Okt; t=Okt() import warnings warnings.filterwarnings(action='ignore') train = pd.read_csv('./new_train.csv', index_col=0) dev = pd.read_csv('../../data/dev.hate.csv') test = pd.read_csv('../../data/test.hate.no_label.csv') train.head() train.head() train = pd.concat([train, dev], axis=0) train.head() train.info() # 트레인 데이터 레이블대로 분류 none_df = train[train['label'] == 'none']['comments'] offensive_df = train[train['label'] == 'offensive']['comments'] hate_df = train[train['label'] == 'hate']['comments'] print(len(none_df), len(offensive_df), len(hate_df), "\n", "sum: {}".format(sum([len(none_df), len(offensive_df), len(hate_df)]))) # + # TfIdf 벡터라이즈 from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer() # 훈련 데이터 전체 벡터라이즈 fit tfidf_vectorizer.fit(train['comments']) # 각각 레이블 벡터라이즈 transform none_matrix = tfidf_vectorizer.transform(none_df) offensive_matrix = tfidf_vectorizer.transform(offensive_df) hate_matrix = tfidf_vectorizer.transform(hate_df) none_matrix.shape, offensive_matrix.shape, hate_matrix.shape # + # 각각의 평균 벡터값(위치) 산출 none_vec = [] offensive_vec = [] hate_vec = [] for i in range(none_matrix.shape[1]): none_vec.append(none_matrix[:,i].mean()) for i in range(offensive_matrix.shape[1]): offensive_vec.append(offensive_matrix[:,i].mean()) for i in range(hate_matrix.shape[1]): hate_vec.append(hate_matrix[:,i].mean()) # 벡터라이즈 잘 되었는지 길이 확인 len(none_vec), len(offensive_vec), len(hate_vec) # + # 어레이 형태로 변환 none_vec = np.array(none_vec) offensive_vec = np.array(offensive_vec) hate_vec = np.array(hate_vec) # 2차원 어레이로 변환 none_vec = none_vec.reshape(1, -1) offensive_vec = offensive_vec.reshape(1, -1) hate_vec = hate_vec.reshape(1, -1) # 테스트 코멘트 벡터라이즈 적용 test_matrix = tfidf_vectorizer.transform(test['comments']) test_matrix.shape # + # 코멘트 <-> 각 레이블 간의 유사도 측정 후 가장 유사한 레이블 산출 from sklearn.metrics.pairwise import cosine_similarity preds = [] for i in range(test_matrix.shape[0]): distances = {cosine_similarity(test_matrix[i,:], none_vec)[0][0] : "none", cosine_similarity(test_matrix[i,:], offensive_vec)[0][0] : "offensive", cosine_similarity(test_matrix[i,:], hate_vec)[0][0] : "hate"} preds.append( distances[max(distances.keys())] ) preds # - test['label'] = preds test.head() def kaggle_format(df): df['label'][df['label'] == 'none'] = 0 df['label'][df['label'] == 'offensive'] = 1 df['label'][df['label'] == 'hate'] = 2 return df kaggle_format(test) test.head() test.info() test.to_csv('./0116_cos_jc.csv', index=False) pd.read_csv('./0116_cos_jc.csv') # dd # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv') cb_dark_blue = (0/255,107/255,164/255) cb_orange = (255/255, 128/255, 14/255) #stem_cats = ['Engineering', 'Computer 
Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics'] stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering'] lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History'] other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture'] fig = plt.figure(figsize=(16, 20)) for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+1) ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(stem_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off") if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(other_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off") if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') for sp in range(0,15,3): i=int(sp/3) sp=sp-1 ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(lib_arts_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off") if sp == -1: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') plt.show() # + # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv') cb_dark_blue = (0/255,107/255,164/255) cb_orange = (255/255, 128/255, 14/255) #stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics'] stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering'] lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History'] other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture'] fig = plt.figure(figsize=(16, 20)) for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+1) ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 
100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) # set only the points/labels u want on y axis i.e 0 and 100 ax.set_title(stem_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom='off') if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') ax.tick_params(labelbottom="on") #show x-axis for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) # set only 2 points on y axis i.e 0 and 100 ax.set_title(other_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom="off") if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') ax.tick_params(labelbottom="on") #show x-axis for sp in range(0,15,3): i=int(sp/3) sp=sp-1 ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,50,100]) # set only 2 points on y axis i.e 0 and 100 ax.set_title(lib_arts_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom="off") if sp == -1: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 11: ax.tick_params(labelbottom="on") #show x-axis plt.show() # + # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv') cb_dark_blue = (0/255,107/255,164/255) cb_orange = (255/255, 128/255, 14/255) #stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics'] stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering'] lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History'] other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture'] fig = plt.figure(figsize=(16, 20)) for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+1) ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) # set only the points/labels u want on y axis i.e 0 
and 100 ax.set_title(stem_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom='off') ax.axhline(50,c=(171/255, 171/255, 171/255),alpha=0.3) #set central horizontal line on y axis , aplha for tranparent level if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') ax.tick_params(labelbottom="on") #show x-axis for sp in range(0,18,3): i=int(sp/3) ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) # set only 2 points on y axis i.e 0 and 100 ax.axhline(50,c=(171/255, 171/255, 171/255),alpha=0.3) #set central horizontal line on y axis , aplha for tranparent level ax.set_title(other_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom="off") if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 15: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') ax.tick_params(labelbottom="on") #show x-axis for sp in range(0,15,3): i=int(sp/3) sp=sp-1 ax = fig.add_subplot(6,3,sp+3) ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,50,100]) # set only 2 points on y axis i.e 0 and 100 ax.axhline(50,c=(171/255, 171/255, 171/255),alpha=0.3) #set central horizontal line on y axis , aplha for tranparent level ax.set_title(lib_arts_cats[i]) ax.tick_params(bottom="off", top="off", left="off", right="off",labelbottom="off") if sp == -1: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 11: ax.tick_params(labelbottom="on") #show x-axis plt.savefig("gender_degrees.png") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="v_7As_kZOnrE" # ### Longest Even Length Word # Consider a string, sentence, of space-separated words where each word is a substring consisting of English alphabetic letters only. # We want to find the first word in sentence having a length which is both an even number and greater than or # equal to the length of any other word of even length in the sentence. # # For example, if sentence is `Time to write great code` , then the word we're looking for is Time . # # While code and Time are of maximal length, Time occurs first. # If sentence is `Write code for a great time` , then the word we're looking for is code. # + colab={} colab_type="code" id="uVkk0tAROpMa" data1 = 'One great way to make predictions about an unfamiliar nonfiction text is to take a walk through the book before reading.' data2 = 'photographs or other images, readers can start to get a sense about the topic. This scanning and skimming helps set the expectation for the reading.' 
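# (Illustrative addition, not part of the original solution; the function name below is invented here.)
# A minimal, dependency-free sketch of the task described above: scan the words once and keep the
# first even-length word that is strictly longer than the best seen so far. The pandas-based
# implementation actually used in this notebook follows below.
def longest_even_word_sketch(sentence):
    best = ''
    for word in sentence.split():
        if len(word) % 2 == 0 and len(word) > len(best):
            best = word
    # mirror the notebook's convention of returning 0 when no even-length word exists
    return best if best else 0

# e.g. longest_even_word_sketch('Time to write great code') returns 'Time'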
data3 = ' testing very9 important' # - #Importing necessary packages import pandas as pd import numpy as np #Function for Splitting Words def splitword(data): temp={} data.lower() split_data=data.split() for i in range(0,len(split_data)): temp.update({split_data[i]:len(split_data[i])}) output=pd.DataFrame(temp.items(),columns=['word','count']) return output # + colab={} colab_type="code" id="QHYzvKW0OxYI" #Function for finding Longest Even word def q01_longest_even_word(sentence): test=splitword(sentence)#Call to the function created above if (test['count']%2==0).any(): output=test.word[test[test['count']%2==0]['count'].idxmax()] else: output=0 return output # + colab={} colab_type="code" id="SqOygkyJOzgh" q01_longest_even_word(data3) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="WbdSBVjN_LS1" # # Save and Store Features # In this notebook we will compute all prediction and store the relative features in drive using the model computed in the notebook "ResNet50". # # *Note*: the features related to the simple feature extraction model are already computed in the notebook "ResNet50", thus they won't again be computed here. # + id="fOEEsshaASKY" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643386358392, "user_tz": -60, "elapsed": 24963, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e1269398-eaa9-4a53-be82-0dad30b6c8a4" from google.colab import drive drive.mount('/content/drive') # + id="768ZK3lEBFKZ" executionInfo={"status": "ok", "timestamp": 1643386370184, "user_tz": -60, "elapsed": 2936, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} import tensorflow as tf from tensorflow import keras as ks from tensorflow.keras import layers from tensorflow.keras.applications import ResNet50V2 from tensorflow.keras import regularizers import pathlib import matplotlib.pyplot as plt import numpy as np # + colab={"base_uri": "https://localhost:8080/", "height": 73, "resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "headers": [["content-type", "application/javascript"]], "ok": true, "status": 200, "status_text": ""}}} executionInfo={"elapsed": 8404, "status": "ok", "timestamp": 1643386378584, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}, "user_tz": -60} id="_B3mi1RqblLY" outputId="9f68bf0f-1e0b-4fed-a08e-5b3017293a46" # ! pip install -q kaggle from google.colab import files _ = files.upload() # ! mkdir -p ~/.kaggle # ! cp kaggle.json ~/.kaggle/ # ! chmod 600 ~/.kaggle/kaggle.json # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 12553, "status": "ok", "timestamp": 1643386395251, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}, "user_tz": -60} id="VoWnE3oCb1eG" outputId="a1e20e34-667a-439a-c2b5-9255b3c60d7c" # ! 
kaggle datasets download -d gpiosenka/100-bird-species # + colab={"base_uri": "https://localhost:8080/"} id="-tdH48JqcIDE" outputId="666cae5c-ed4d-4e8b-8bc3-2f57d4d3cc4b" executionInfo={"status": "ok", "timestamp": 1643386419497, "user_tz": -60, "elapsed": 21194, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} # !unzip 100-bird-species.zip # + [markdown] id="SMGEEWo0_2pg" # ## Create the different sets # In this section the training set, the test set and the discrimator sets are computed in order to extract the features from them # + id="_jrjDHV9-xFI" executionInfo={"status": "ok", "timestamp": 1643386484042, "user_tz": -60, "elapsed": 235, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} TRAIN_DIR = 'train/' VALID_DIR = 'valid/' TEST_DIR = 'test/' DISTRACTOR_DIR = 'mirflickr' BATCH_SIZE = 128 IMAGE_HEIGHT = 224 IMAGE_WIDTH = 224 RANDOM_SEED = 42 # + [markdown] id="Kns-kYL_dslj" # Distractor path: # + id="G1X7pqSOduoD" executionInfo={"status": "ok", "timestamp": 1643386596818, "user_tz": -60, "elapsed": 111439, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} # !unzip -q '/content/drive/My Drive/CV_Birds/mirflickr.zip' -d '/content' # + [markdown] id="sSl3bW9BdvII" # Create sets: # + id="zSs64XNtAAaX" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643368549784, "user_tz": -60, "elapsed": 5004, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="3a785598-2e21-499c-c1bf-30e1b447a081" training_images = tf.keras.preprocessing.image_dataset_from_directory( TRAIN_DIR, labels='inferred', label_mode='categorical', class_names=None, color_mode='rgb', batch_size=BATCH_SIZE, image_size=(IMAGE_HEIGHT, IMAGE_WIDTH), shuffle=False, seed=RANDOM_SEED, interpolation='bilinear') test_images = tf.keras.preprocessing.image_dataset_from_directory( TEST_DIR, labels='inferred', label_mode='categorical', class_names=None, color_mode='rgb', batch_size=BATCH_SIZE, image_size=(IMAGE_HEIGHT, IMAGE_WIDTH), shuffle=False, seed=RANDOM_SEED, interpolation='bilinear') # + colab={"base_uri": "https://localhost:8080/"} id="-Ib-8nmbcsfZ" executionInfo={"status": "ok", "timestamp": 1643386604875, "user_tz": -60, "elapsed": 6336, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="b53c1bc5-e9e6-4fb5-d4ac-c545227776f2" distractor_images = tf.keras.preprocessing.image_dataset_from_directory( DISTRACTOR_DIR, image_size = (IMAGE_HEIGHT, IMAGE_WIDTH), batch_size = BATCH_SIZE, seed=RANDOM_SEED, labels=None, label_mode=None) # + [markdown] id="tiaw9QlvF0Di" # ## Model 1 # Load the model from drive: # + id="KIH-VSPCF0Dj" executionInfo={"status": "ok", "timestamp": 1643387103416, "user_tz": -60, "elapsed": 3755, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model1.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} id="S_i7Di5-bJ_D" executionInfo={"status": "ok", "timestamp": 1643303702087, "user_tz": -60, "elapsed": 7, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e8934e4d-245a-4a0d-8a32-baf664a5b2ed" model.summary() # + id="buMzVaHqbcDi" executionInfo={"status": "ok", "timestamp": 1643387103761, "user_tz": -60, "elapsed": 348, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} id="XgU19qq0fao1" executionInfo={"status": "ok", "timestamp": 1643303702510, "user_tz": -60, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e2ebd367-9ae7-492b-8851-a1686c1469d2" a.summary() # + [markdown] id="obhgZHk5F0Dj" # Predict features for training set and save them: # + id="599cWVnhF0Dk" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643303965780, "user_tz": -60, "elapsed": 263275, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="b0fc9bd5-7a87-4a31-8551-fc6b5c03c057" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="8wC29uGmF0Dk" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model1_train_features.npy', features_model) # + [markdown] id="Eyz9_Ww5F0Dk" # Predict features for test set and save them: # + id="2vOD2g7KF0Dk" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643304080016, "user_tz": -60, "elapsed": 11865, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="4a05baba-c393-454b-8b8d-260800e6e10b" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="_GkPaYYJF0Dk" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model1_test_features.npy', features_model) # + [markdown] id="Xnwv2XZrF0Dl" # Predict features for the distractor and save them # + id="Sroc6txMF0Dl" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387235092, "user_tz": -60, "elapsed": 128023, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="a1bc1d3d-7f8c-4e62-ac4c-0e3770bf8e05" features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="3rAelSeRF0Dl" executionInfo={"status": "ok", "timestamp": 1643387235800, "user_tz": -60, "elapsed": 712, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model1_distractor_features.npy', features_model) # + [markdown] id="YblEBZqNjqfy" # ## Model 2 # Load the model from drive: # + id="72Reuutkjqf1" executionInfo={"status": "ok", "timestamp": 1643387239299, "user_tz": -60, "elapsed": 3500, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model2.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643386856251, "user_tz": -60, "elapsed": 9, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="c6376608-8c74-4361-f1c3-0914ae87ffa4" id="lLlo70Ovjqf2" model.summary() # + id="8cIySEGpjqf3" executionInfo={"status": "ok", "timestamp": 1643387239300, "user_tz": -60, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643386856627, "user_tz": -60, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="f2f8c271-8679-46f5-f9a8-96507ad61a1f" id="7viI_8MSjqf3" a.summary() # + [markdown] id="Ww7dtoXojqf3" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643304432097, "user_tz": -60, "elapsed": 247401, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="14aa659b-0661-4a43-b180-21acc1b0e053" id="f0wqJFjsjqf4" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="z6151Ut0jqf4" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model2_train_features.npy', features_model) # + [markdown] id="5FtDBTpYjqf4" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643304444264, "user_tz": -60, "elapsed": 10304, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="a9543955-3f44-4b16-be1c-b481a8b21c2c" id="KknjPS9ljqf4" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="HcNAHf6wjqf5" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model2_test_features.npy', features_model) # + [markdown] id="SHR1JRd4jqf5" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="49bdc896-e736-42f2-9168-d941ff3d1ae4" id="gQNwThAMe6_r" executionInfo={"status": "ok", "timestamp": 1643387368216, "user_tz": -60, "elapsed": 127620, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="rmCe7_V4e6_s" executionInfo={"status": "ok", "timestamp": 1643387369211, "user_tz": -60, "elapsed": 999, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model2_distractor_features.npy', features_model) # + [markdown] id="PGIKNMo5kAeE" # ## Model 3 # Load the model from drive: # + id="ZyJxuQj2kAeE" executionInfo={"status": "ok", "timestamp": 1643387376298, "user_tz": -60, "elapsed": 7089, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model3.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387376298, "user_tz": -60, "elapsed": 5, "user": {"displayName": 
"", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e7f64b5e-f06f-4e8a-a008-2e7b8cb8fe31" id="mhTa5XOqkAeF" model.summary() # + id="rBiUUqtMkAeF" executionInfo={"status": "ok", "timestamp": 1643387376899, "user_tz": -60, "elapsed": 604, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387376900, "user_tz": -60, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="942f936f-7255-4333-ea7e-e21cacf0721d" id="j2bp2dX8kAeG" a.summary() # + [markdown] id="68ARb_HekAeG" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643304766086, "user_tz": -60, "elapsed": 246879, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="41b257df-6e2b-46ee-bead-935126aad3eb" id="sb9l53_JkAeG" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="92eo3ExZkAeG" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model3_train_features.npy', features_model) # + [markdown] id="dxG9sRgikAeH" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643304776783, "user_tz": -60, "elapsed": 9096, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="b2ecbfb5-7af7-4608-e217-5af77891516c" id="zO9PlwFJkAeH" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="6IT0_mDckAeH" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model3_test_features.npy', features_model) # + [markdown] id="N0eNEEm_kAeH" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="cc54eca4-6a21-4311-8440-4a2c2e9c05cd" id="LduuGJh7fBw1" executionInfo={"status": "ok", "timestamp": 1643387506032, "user_tz": -60, "elapsed": 129136, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="NbLkfXRTfBw2" executionInfo={"status": "ok", "timestamp": 1643387506873, "user_tz": -60, "elapsed": 845, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model3_distractor_features.npy', features_model) # + [markdown] id="Nv4nZ-GAkHsF" # ## Model 4 # Load the model from drive: # + id="sxqkBLIBkHsG" executionInfo={"status": "ok", "timestamp": 1643387515198, "user_tz": -60, "elapsed": 8327, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model4.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387515199, "user_tz": -60, "elapsed": 6, "user": 
{"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="2551d798-1416-4424-a86d-292398e56b5e" id="M4bwdhj6kHsG" model.summary() # + id="R-v6GF9rkHsH" executionInfo={"status": "ok", "timestamp": 1643387515850, "user_tz": -60, "elapsed": 655, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387515850, "user_tz": -60, "elapsed": 4, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="28525c75-6484-4d11-e470-02186e3b4d25" id="SUc2EWfkkHsH" a.summary() # + [markdown] id="cUOeiZXLkHsH" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305029605, "user_tz": -60, "elapsed": 247637, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="67b0646e-d44b-4936-c3d9-ad2a27be35b5" id="d5Gxk0sDkHsH" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="Y0c_OxAkkHsI" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model4_train_features.npy', features_model) # + [markdown] id="H3x_VD3CkHsI" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305039742, "user_tz": -60, "elapsed": 9109, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e0db5384-f501-49f3-ca33-b507ff079a2c" id="jit4jPvbkHsI" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="DzEw4QD2kHsI" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model4_test_features.npy', features_model) # + [markdown] id="XN_H94yjkHsJ" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="e34a088d-a6a3-4c35-c092-38135032b73b" id="bguKj8sefL0S" executionInfo={"status": "ok", "timestamp": 1643387643524, "user_tz": -60, "elapsed": 127677, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="Uim09Sc1fL0U" executionInfo={"status": "ok", "timestamp": 1643387643894, "user_tz": -60, "elapsed": 373, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model4_distractor_features.npy', features_model) # + [markdown] id="IZD534QIkTyh" # ## Model 9 # Load the model from drive: # + id="b9P9F03AkTyi" executionInfo={"status": "ok", "timestamp": 1643387651699, "user_tz": -60, "elapsed": 7806, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model9.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387651700, "user_tz": -60, 
"elapsed": 4, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="2f4f62c4-d9b4-40cc-f94d-b6c2267d1b35" id="vXoi2kGLkTyi" model.summary() # + id="aya5YHQbkTyj" executionInfo={"status": "ok", "timestamp": 1643387652553, "user_tz": -60, "elapsed": 856, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643387652553, "user_tz": -60, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="3339899a-5930-412f-8002-1381fd5c58a9" id="Uizj-43BkTyj" a.summary() # + [markdown] id="kFxGYYuekTyj" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305295818, "user_tz": -60, "elapsed": 248162, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="6786cfb5-9ee8-4e5c-8d02-35702d07aa46" id="MebbgEGokTyj" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="U_WmzfeikTyj" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model9_train_features.npy', features_model) # + [markdown] id="6M4QOSJekTyk" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305305919, "user_tz": -60, "elapsed": 8291, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="ea0e9139-677a-4087-db7f-516d0e19be72" id="MchSSaTgkTyk" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="oqEVWtVJkTyk" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model9_test_features.npy', features_model) # + [markdown] id="W31nIewtkTyk" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="be222e9a-89b4-48d1-f54c-4314ef563a92" id="woED7AU6foOh" executionInfo={"status": "ok", "timestamp": 1643387780506, "user_tz": -60, "elapsed": 127957, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="Dbp5n2v1foOi" executionInfo={"status": "ok", "timestamp": 1643387781517, "user_tz": -60, "elapsed": 1014, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model9_distractor_features.npy', features_model) # + [markdown] id="-R6qG88skgis" # ## Model 10 # Load the model from drive: # + id="tw7coul0kgit" executionInfo={"status": "ok", "timestamp": 1643387788869, "user_tz": -60, "elapsed": 7355, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model10.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 
1643305314021, "user_tz": -60, "elapsed": 21, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="63ce1a59-975c-414e-9cdc-3c98698e8b39" id="egA0Y0hqkgit" model.summary() # + id="VomIU6H0kgiu" executionInfo={"status": "ok", "timestamp": 1643387788870, "user_tz": -60, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305314723, "user_tz": -60, "elapsed": 8, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="990cc266-aff4-42ed-e08e-40df7da8b0f8" id="LVkZluvIkgiu" a.summary() # + [markdown] id="1RXsNMEZkgiu" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305560425, "user_tz": -60, "elapsed": 245707, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="606ac191-0eda-4d5a-d0d5-fc57b85a8255" id="bm3USwJQkgiu" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="xhv_UXkQkgiv" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model10_train_features.npy', features_model) # + [markdown] id="Orvcp0dPkgiv" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305570565, "user_tz": -60, "elapsed": 8588, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="bf0a534b-04ba-41a9-9853-50f216fb1d58" id="-pWLyDwukgiv" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="JTJriH8kkgiv" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model10_test_features.npy', features_model) # + [markdown] id="dJnE5Z0ekgiw" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="f72f0904-4b3a-49e1-8eef-7e065155e7be" id="yfGEBdtXf4Wq" executionInfo={"status": "ok", "timestamp": 1643387918136, "user_tz": -60, "elapsed": 129270, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="Bd5hR8UZf4Wr" executionInfo={"status": "ok", "timestamp": 1643387918868, "user_tz": -60, "elapsed": 736, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model10_distractor_features.npy', features_model) # + [markdown] id="BHlL9DVvkpZ4" # ## Model 11 # Load the model from drive: # + id="3Ss1tqS4kpZ5" executionInfo={"status": "ok", "timestamp": 1643387926154, "user_tz": -60, "elapsed": 7289, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet50v2/model11.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": 
"ok", "timestamp": 1643305579100, "user_tz": -60, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="e1bd284b-1c61-432d-be73-e2b56568ba06" id="4TmUZgQYkpZ6" model.summary() # + id="ALlqnWZJkpZ6" executionInfo={"status": "ok", "timestamp": 1643387926521, "user_tz": -60, "elapsed": 370, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305579491, "user_tz": -60, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="9f6fa1fd-50f7-48e7-f3b3-f5e2521502de" id="1HRnuXV0kpZ6" a.summary() # + [markdown] id="sHpQhN9dkpZ7" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305825515, "user_tz": -60, "elapsed": 246028, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="13f8b5f5-6260-4a4c-eaee-5e9497a302c2" id="IPvZ5ciCkpZ7" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="f_xI_BWykpZ7" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet50v2/model11_train_features.npy', features_model) # + [markdown] id="16f6JEpqkpZ7" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643305835501, "user_tz": -60, "elapsed": 8392, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="020b629d-81f9-4c43-d5ea-bd20fb68936a" id="tRaPaq1ikpZ8" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="tougrDHKkpZ8" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet50v2/model11_test_features.npy', features_model) # + [markdown] id="7FKZdx1FkpZ8" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="96e35884-4614-4f46-a91d-8f78e009a18c" id="YB70rEmGf8cC" executionInfo={"status": "ok", "timestamp": 1643388055185, "user_tz": -60, "elapsed": 128667, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="fwhBTLT2f8cD" executionInfo={"status": "ok", "timestamp": 1643388055961, "user_tz": -60, "elapsed": 783, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet50v2/model11_distractor_features.npy', features_model) # + [markdown] id="GLDCdFZ5UCJQ" # # Resnet 101 # + [markdown] id="7e5tCU-qUXEc" # ## Model 1 # Load the model from drive: # + id="JsrnC978UXEd" executionInfo={"status": "ok", "timestamp": 1643388066903, "user_tz": -60, "elapsed": 10945, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = '/content/drive/MyDrive/CV_Birds/models/ResNet101v2/resNet101_model1.keras' model = 
ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643368559828, "user_tz": -60, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="939dd36f-bee0-4c5c-b906-8999a39aa066" id="L87SVO8bUXEd" model.summary() # + id="6X34v9pPUXEe" executionInfo={"status": "ok", "timestamp": 1643388067649, "user_tz": -60, "elapsed": 750, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643368561303, "user_tz": -60, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="4df3f7de-50b0-4342-e3e3-64c2ef9f35e4" id="B14UolWnUXEe" a.summary() # + [markdown] id="fZTXTVFuUXEe" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369041231, "user_tz": -60, "elapsed": 458343, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="64a51aa4-7459-454a-8816-da9edba64655" id="TjQMJf5jUXEf" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="UKxAt2ZEUXEf" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet101v2/model1_train_features.npy', features_model) # + [markdown] id="IuGW33ICUXEf" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369061694, "user_tz": -60, "elapsed": 18021, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="0bd41b24-a19a-4df8-9a5d-2cca601c4a84" id="DXaDXIXhUXEf" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="maNeOVd8UXEf" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet101v2/model1_test_features.npy', features_model) # + [markdown] id="nweMTfZWUXEg" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="173939c3-9812-424b-abe0-7462aa4ab84f" id="HJAUbCA0gCN3" executionInfo={"status": "ok", "timestamp": 1643388331941, "user_tz": -60, "elapsed": 264296, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="yI0G0aP0gCN4" executionInfo={"status": "ok", "timestamp": 1643388332925, "user_tz": -60, "elapsed": 986, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet101v2/model1_distractor_features.npy', features_model) # + [markdown] id="JT3Q7bRgUXEg" # ## Model 2 # Load the model from drive: # + id="kJFdOm02UXEg" executionInfo={"status": "ok", "timestamp": 1643388344892, "user_tz": -60, "elapsed": 11969, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} MODEL_PATH = 
'/content/drive/MyDrive/CV_Birds/models/ResNet101v2/resNet101_model2.keras' model = ks.models.load_model(MODEL_PATH) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369073876, "user_tz": -60, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="2ce6dd36-347c-4a0a-9e20-38d66875d559" id="hqEX_jkcUXEg" model.summary() # + id="Iym7XZLMUXEh" executionInfo={"status": "ok", "timestamp": 1643388346398, "user_tz": -60, "elapsed": 1520, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} a = ks.models.Sequential(model.layers[:2]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369075625, "user_tz": -60, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="1dffad9c-f3a3-45b3-de7c-5b28f64d3ec3" id="1hscnCvrUXEh" a.summary() # + [markdown] id="HogIFJyyUXEh" # Predict features for training set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369520245, "user_tz": -60, "elapsed": 444623, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="f97ce364-4056-4c04-fad8-fc90bf520b42" id="vunLXc4RUXEh" features_model = a.predict(training_images, batch_size=BATCH_SIZE, verbose=True) # + id="0_wnr4ilUXEi" np.save('/content/drive/MyDrive/CV_Birds/features/training/ResNet101v2/model2_train_features.npy', features_model) # + [markdown] id="EW37sxsnUXEi" # Predict features for test set and save them: # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1643369537349, "user_tz": -60, "elapsed": 16055, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} outputId="fae35931-8ba0-4f2f-b308-d2212eec12a9" id="90BOzh-3UXEi" features_model = a.predict(test_images, batch_size=BATCH_SIZE, verbose=True) # + id="kk8OR04qUXEi" np.save('/content/drive/MyDrive/CV_Birds/features/test/ResNet101v2/model2_test_features.npy', features_model) # + [markdown] id="rIe-HDhgUXEi" # Predict features for the distractor and save them # + colab={"base_uri": "https://localhost:8080/"} outputId="f2712260-edae-4125-fd94-438d2b776546" id="lVBVTD6_gHUz" executionInfo={"status": "ok", "timestamp": 1643388570004, "user_tz": -60, "elapsed": 223612, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} features_model = a.predict(distractor_images, batch_size=BATCH_SIZE, verbose=True) # + id="tqJjYg-ugHU0" executionInfo={"status": "ok", "timestamp": 1643388570742, "user_tz": -60, "elapsed": 743, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "11023693490829624613"}} np.save('/content/drive/MyDrive/CV_Birds/features/distractor/ResNet101v2/model2_distractor_features.npy', features_model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Package imports import numpy as np import matplotlib.pyplot as plt from 
testCases import * import sklearn import sklearn.datasets import sklearn.linear_model from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets # %matplotlib inline np.random.seed(1) # set a seed so that the results are consistent # - # ## Backward Propagation # # # # **Instructions**: # Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. # # # # # # - Tips: # - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute # $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`. def backward_propagation(parameters, cache, X, Y): """ Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". X -- input data of shape (2, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: grads -- python dictionary containing your gradients with respect to different parameters """ m = Y.shape[1] # First, retrieve W1 and W2 from the dictionary "parameters". W1 = parameters['W1'] W2 = parameters['W2'] # Retrieve also A1 and A2 from dictionary "cache". A1 = cache['A1'] A2 = cache['A2'] # Backward propagation: calculate dW1, db1, dW2, db2. dZ2= A2 - Y dW2 = (1/m) * np.matmul(dZ2,A1.T) db2 = (1/m) * np.sum(dZ2, axis=1, keepdims =True) dZ1 = np.matmul(W2.T,dZ2)*(1- A1**2) dW1 = (1/m) * np.matmul(dZ1,X.T) db1 = (1/m) * np.sum(dZ1, axis=1, keepdims = True) grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2} return grads # + parameters, cache, X_assess, Y_assess = backward_propagation_test_case() grads = backward_propagation(parameters, cache, X_assess, Y_assess) if __name__=='__main__': print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dW2 = "+ str(grads["dW2"])) print ("db2 = "+ str(grads["db2"])) # - # **Expected output**: # # # # # # # # # # # # # # # # # # # # # # # # # #
# **dW1** = [[ 0.01018708 -0.00708701]
#     [ 0.00873447 -0.0060768 ]
#     [-0.00530847 0.00369379]
#     [-0.02206365 0.01535126]]
# **db1** = [[-0.00069728]
#     [-0.00060606]
#     [ 0.000364 ]
#     [ 0.00151207]]
# **dW2** = [[ 0.00363613 0.03153604 0.01162914 -0.01318316]]
# **db2** = [[ 0.06589489]]
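# As a quick sanity check (added here for illustration, not part of the original assignment), the
# identity used above for the tanh derivative, $g^{[1]'}(z) = 1 - a^2$ with $a = \tanh(z)$, can be
# verified numerically against a centered finite difference:

# +
z_check = np.linspace(-3, 3, 7)
a_check = np.tanh(z_check)
eps = 1e-6
numerical = (np.tanh(z_check + eps) - np.tanh(z_check - eps)) / (2 * eps)
analytic = 1 - np.power(a_check, 2)
if __name__=='__main__':
    print(np.allclose(numerical, analytic, atol=1e-8))  # expected: True
# -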
    # **Note**: While implementing the update rule, use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2). # # **General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter. # # **Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of . # # # # def update_parameters(parameters, grads, learning_rate = 1.2): """ Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters """ # Retrieve each parameter from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] # Retrieve each gradient from the dictionary "grads" dW1 = grads['dW1'] db1 = grads['db1'] dW2 = grads['dW2'] db2 = grads['db2'] # Update rule for each parameter W1 = W1 - learning_rate*dW1 b1 = b1 - learning_rate*db1 W2 = W2 - learning_rate*dW2 b2 = b2 - learning_rate*db2 parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters # + parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads) if __name__=='__main__': print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) # - # **Expected Output**: # # # # # # # # # # # # # # # # # # # # # # # # #
# **W1** = [[-0.00643025 0.01936718]
#     [-0.02410458 0.03978052]
#     [-0.01653973 -0.02096177]
#     [ 0.01046864 -0.05990141]]
# **b1** = [[ -1.02420756e-06]
#     [ 1.27373948e-05]
#     [ 8.32996807e-07]
#     [ -3.20136836e-06]]
# **W2** = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
# **b2** = [[ 0.00010457]]
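# To make the converging/diverging picture above concrete (an added toy example, not part of the
# original exercise), applying the same update rule $\theta = \theta - \alpha \frac{\partial J}{\partial \theta}$
# to $J(\theta) = \theta^2$ (so $\frac{\partial J}{\partial \theta} = 2\theta$) shows how the learning
# rate $\alpha$ controls the behaviour:

# +
def toy_gradient_descent(alpha, theta=1.0, num_steps=10):
    # repeatedly apply theta <- theta - alpha * dJ/dtheta, with dJ/dtheta = 2*theta
    for _ in range(num_steps):
        theta = theta - alpha * 2 * theta
    return theta

if __name__=='__main__':
    print("alpha = 0.1 ->", toy_gradient_descent(0.1))   # shrinks toward 0 (converges)
    print("alpha = 1.5 ->", toy_gradient_descent(1.5))   # grows in magnitude (diverges)
# -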
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # #### Importing modules import json import pandas as pd import numpy as np # #### Read cowin csv and census xlsx file data=pd.read_csv('cowin_vaccine_data_districtwise.csv') cols=['State','Level', 'Name', 'TRU','TOT_P', 'TOT_M', 'TOT_F'] census_data=pd.read_excel('DDW_PCA0000_2011_Indiastatedist.xlsx',usecols=cols) #only considering total rather than rural and urban seperately for this question census_data=census_data[(census_data['Level']=='STATE') & (census_data['TRU']=='Total')] census_data.reset_index(inplace=True,drop=True) # #### Time ids for dates # + # dictionaries for mapping dates to time ids # eg. for 2020-3-15, time_id_week is 1, time_id_month is 1, time_id_ is 1 import datetime time_id_week = {} time_id_month = {} time_id_overall = {} date=datetime.date(2021,1,16) day=0 while True: # basically to cover overlapping weeks this part needs to be changed. # for now we are proceeding. but change week ids according above definition of 7-DMA #### specific to this situation only k=(date.weekday()+1)%7+1 if k%7==0: time_id_week[str(date)]=int(day/7)+1 else: time_id_week[str(date)]=int(day/7)+2 # there is some problem in week number please fix it if int(str(date)[8:10]) <15: time_id_month[str(date)]=int(str(date)[5:7])-1 else: time_id_month[str(date)]=int(str(date)[5:7]) # only because it is 2021 # if you consider 2020 then a bit manipulation time_id_overall[str(date)]=1 if date==datetime.date(2021,8,14): break day=day+1 date=date+datetime.timedelta(days=1) # - # #### Prepare data frames for first dose # + # let's change those column names #date==datetime.date(2021,8,14) from datetime import datetime data_dose1=data.loc[:,(data.loc[0,]=='First Dose Administered')].iloc[1:,:].fillna(0) first_date_dose1=data_dose1.iloc[:,0] data_dose1=data_dose1.astype(int).diff(axis=1) data_dose1.iloc[:,0]=first_date_dose1 col_names_dose1=data_dose1.columns new_col_names_weeks_dose1=[] for i in range(len(col_names_dose1)): if (datetime.strptime(col_names_dose1[i].split('.')[0],'%d/%m/%Y')).strftime('%Y-%m-%d') == '2021-08-15': break new_col_names_weeks_dose1.append(time_id_week[(datetime.strptime(col_names_dose1[i].split('.')[0],'%d/%m/%Y')).strftime('%Y-%m-%d')]) no_weeks_dose1=len(np.unique(new_col_names_weeks_dose1)) new_col_names_weeks_dose1.extend(['No']*(len(col_names_dose1)-len(new_col_names_weeks_dose1))) data_dose1.columns=new_col_names_weeks_dose1 data_dose1['District']=data['District'].str.lower() data_dose1=data_dose1.loc[:,data_dose1.columns!='No'] # - state_names=np.unique(data['State'].dropna()) census_states = np.array(census_data[census_data['Level']=='STATE']['Name']) # #### Editdist function for strings def editdist(str1, str2): m = len(str1) n = len(str2) # Create a table to store results of subproblems dp = [[0 for x in range(n + 1)] for x in range(m + 1)] # Fill d[][] in bottom up manner for i in range(m + 1): for j in range(n + 1): # If first string is empty, only option is to # insert all characters of second string if i == 0: dp[i][j] = j # Min. operations = j # If second string is empty, only option is to # remove all characters of second string elif j == 0: dp[i][j] = i # Min. 
operations = i # If last characters are same, ignore last char # and recur for remaining string elif str1[i-1] == str2[j-1]: dp[i][j] = dp[i-1][j-1] # If last character are different, consider all # possibilities and find minimum else: dp[i][j] = 1 + min(dp[i][j-1], # Insert dp[i-1][j], # Remove dp[i-1][j-1]) # Replace #print(dp) return dp[m][n] # #### finding common states common_cowin_indexes = [] common_census_indexes = [] remain_states_cowin=[] for i in range(len(state_names)): k=0 for j in range(len(census_states)): if editdist(state_names[i].lower(),census_states[j].lower())<4 and state_names[i][0]==census_states[j][0]: common_cowin_indexes.append(state_names[i]) common_census_indexes.append(census_states[j]) k=1 break if k==0: remain_states_cowin.append(state_names[i]) print(remain_states_cowin) common_census_indexes.extend(['NCT OF DELHI','DAMAN & DIU', 'DADRA & NAGAR HAVELI']) common_cowin_indexes.extend(['Delhi','Dadra and Nagar Haveli and Daman and Diu']) # - [x] Updating state list with new states formed after 2011 census and other things. # - [x] DAMAN & DIU + DADRA & NAGAR HAVELI = Dadra and Nagar Haveli and Daman and Diu # - [x] Ladakh + Jammu and Kashmir = Jammu and Kashmir # - [x] Telangana + Andhra Pradesh = Andhra Pradesh # - [x] NCT OF DELHI = Delhi data_state_female=data_dose1.iloc[:,:-1] data_state_female['State']=data['State'] # #### Counts first dose counts for states # - [x] Total counts # - [x] Last week(weekid=31) counts to compute vaccination rate # state_cowin_female_male=[] state_male_count=[]#this is for last week state_female_count=[] for i in range(len(common_cowin_indexes)): for j in range(data_state_female.shape[0]): if common_cowin_indexes[i]=='Andhra Pradesh': state_cowin_female_male.append(common_cowin_indexes[i]) state_female_count.append((data_state_female.loc[(data_state_female['State']=='Andhra Pradesh')].iloc[:,:-1]).astype(int).sum().sum()+(data_state_female.loc[(data_state_female['State']=='Telangana')].iloc[:,:-1]).astype(int).sum().sum()) state_male_count.append((data_state_female.loc[(data_state_female['State']=='Andhra Pradesh'), data_state_female.columns==31]).astype(int).sum().sum()+(data_state_female.loc[(data_state_female['State']=='Telangana'), data_state_female.columns==31]).astype(int).sum().sum()) break elif common_cowin_indexes[i]=='Jammu and Kashmir': state_cowin_female_male.append(common_cowin_indexes[i]) state_female_count.append((data_state_female.loc[(data_state_female['State']=='Jammu and Kashmir')].iloc[:,:-1]).astype(int).sum().sum()+(data_state_female.loc[(data_state_female['State']=='Ladakh')].iloc[:,:-1]).astype(int).sum().sum()) state_male_count.append((data_state_female.loc[(data_state_female['State']=='Jammu and Kashmir'), data_state_female.columns==31]).astype(int).sum().sum()+(data_state_female.loc[(data_state_female['State']=='Ladakh'), data_state_female.columns==31]).astype(int).sum().sum()) break else: state_cowin_female_male.append(common_cowin_indexes[i]) state_female_count.append(data_state_female.loc[data_state_female['State']==common_cowin_indexes[i]].iloc[:,:-1].astype(int).sum().sum()) state_male_count.append(data_state_female.loc[data_state_female['State']==common_cowin_indexes[i], data_state_female.columns==31].astype(int).sum().sum()) break # #### population count for states with modification total_state_pop=[] state_male_female_pop=[] for i in range(len(common_census_indexes)): for j in range(census_data.shape[0]): if common_census_indexes[i]==census_data['Name'][j]: 
state_male_female_pop.append(common_census_indexes[i]) total_state_pop.append(census_data.iloc[j,4].astype(int)) break # #### Rate of vaccination = 100*last-week-count/total-state-population # import datetime dadra_daman_total_pop=total_state_pop[-1].astype(int)+total_state_pop[-2].astype(int) total_state_pop=total_state_pop[:-2] total_state_pop.append(dadra_daman_total_pop) pop_left=[i-j for i,j in zip(total_state_pop,state_female_count)] rateofvaccination=[np.round(i*100/j,2) for i,j in zip(state_male_count,total_state_pop)] # note: state_male_count holds the last-week (week id 31) first-dose counts days_to_complete=[np.ceil(7*j/i) for i,j in zip(state_male_count,pop_left)] completion_date=[] date=datetime.date(2021,8,14) for i in days_to_complete: if i < 0: ## where days are negative, reassign to zero days; this is the case for the Dadra and Daman UT completion_date.append(str(date+datetime.timedelta(days=0))) else: completion_date.append(str(date+datetime.timedelta(days=i))) # #### Write to csv file # complete_df=pd.DataFrame({'stateid':state_cowin_female_male,'populationleft':pop_left,'rateofvaccination':rateofvaccination,'date':completion_date}) complete_df.sort_values('stateid', axis = 0, ascending = True, kind='mergesort',inplace=True) complete_df.reset_index(inplace=True,drop=True) complete_df.to_csv('complete-vaccination.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import time import numpy as np from bokeh.models.sources import ColumnDataSource from bokeh.plotting import figure from bokeh.io import output_notebook, show, push_notebook from bokeh.models import DatetimeTickFormatter from math import pi from elasticsearch import Elasticsearch from elasticsearch_dsl import Search, Q from pandasticsearch import Select, DataFrame import datetime from pytz import timezone f"{datetime.datetime.now(timezone('US/Central')):%Y-%m-%d %H:%M:%S}" import datetime,time from pytz import timezone timestamp=time.mktime(time.strptime("2019-07-29 00:00:00","%Y-%m-%d %H:%M:%S")) dt=datetime.datetime.fromtimestamp(timestamp) dt=dt+datetime.timedelta(seconds=30) f"{dt:%Y-%m-%d %H:%M:%S}" # Using the new query format # + hostname="http://noname-sms.us.cray.com:30200" client = Elasticsearch(hostname, http_compress=True) start="2019-07-29 00:00:00" timestamp=time.mktime(time.strptime(start,"%Y-%m-%d %H:%M:%S")) dt=datetime.datetime.fromtimestamp(timestamp) dt=dt+datetime.timedelta(seconds=30) end=f"{dt:%Y-%m-%d %H:%M:%S}" q=""" { "size":0, "query": { "bool": { "must": [{ "match_all": {} }, { "range": { "timereported": { "gte": "%s", "lte": "%s", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" } } } ], "must_not": [] } }, "_source":{ "excludes":[] }, "aggs": { "2": { "date_histogram": { "field": "timereported", "interval": "30s", "time_zone": "America/Chicago", "min_doc_count": 1 } } } } """ % (start, end) # use the start/end window defined above #print("Query: \"%s\"" % q) resp = client.search(index="shasta-logs-*", body=q) print("Number of responses: " + "{:,}".format(resp['hits']['total'])) # - output_notebook() p = figure(plot_width=800, plot_height=400) test_data = ColumnDataSource(data=dict(x=[0], y=[0])) #line = my_figure.line("x", "y", source=test_data) line = p.circle("x", "y", source=test_data, size = 8, color = 'navy', alpha=0.3) p.title.text = 'Message Counts per 30 minutes' p.background_fill_color="#f5f5f5" p.grid.grid_line_color="white" p.yaxis.axis_label 
= 'Count' p.xaxis.axis_label =' timereported per 30 minutes' p.xaxis.formatter=DatetimeTickFormatter( hours=["%d %B %Y"], days=["%d %B %Y"], months=["%d %B %Y"], years=["%d %B %Y"], ) p.xaxis.major_label_orientation = pi/4 handle = show(p, notebook_handle=True) # + from threading import Thread stop_threads = False def blocking_callback(id, stop): new_data=dict(x=[0], y=[0]) step = 0 step_size = 0.1 # increment for increasing step max_step = 10 # arbitrary stop point for example period = 1 # in seconds (simulate waiting for new data) n_show = 1000 # number of points to keep and show while True: new_data['x'] = [step] new_data['y'] = [np.random.randint(0,1000)] test_data.stream(new_data, n_show) push_notebook(handle=handle) step += step_size time.sleep(period) if stop(): print("exit") break thread = Thread(target=blocking_callback, args=(id, lambda: stop_threads)) thread.start() # - # preceding streaming is not blocking for cnt in range(10): print("Do this, while plot is still streaming", cnt) # you might also want to stop the thread stop_threads=True del thread # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Lab 5. Extreme Eigenvalues and Spiked Model. # # ### Due date: Friday 03/04 at 10:59 pm # !pip install powerlaw import matplotlib.pyplot as plt import numpy as np import powerlaw import scipy.linalg import scipy.sparse.linalg import scipy.special from scipy import stats pi = np.pi # ## Part 1. Extreme Eigenvalues of Covariance Matrix # # #### Theorem (Bai and Yin, 1993). # # Let $X_n$ denote a $p\times n$ random matrix whose entries are independent identically distributed random variables with mean $0$ and variance $1$. Define the $p \times p$ Wishart matrix # $$ # Y_{n}= \frac{1}{n} X_n X_n^{T} \in \mathbb{R}^{p\times p}. # $$ # Denote the largest and smallest eigenvalue of $Y_n$ by $\lambda^{(n)}_{+}$ and $\lambda^{(n)}_{-}$,respectively. Additionally, we assume that the entries $(X_{n})_{ij}$ of the (random noise) feature matrix have finite $4$th moments, as $n, p \to \infty$ with $\gamma = \lim_{n\to \infty}(p/n) < 1$, we have # $$ # \lambda^{(n)}_{+} \stackrel{a.s.}{\to} (1 + \sqrt{\gamma})^2, \quad \lambda^{(n)}_{-} \stackrel{a.s.}{\to} (1 - \sqrt{\gamma})^2, # $$ # That is, the largest and smallest eigenvalues converge almost surely to the edges of the support of the Marcenko–Pastur density. # * Remark. Actually, we have a more precise description, Tracy–Widom law, to characterize the behaviour of extreme eigenvalues. # + nbgrader={"grade": false, "grade_id": "cell-78d3bdc952ce4775", "locked": false, "schema_version": 3, "solution": true, "task": false} gamma = 0.09 N = np.array([50, 100, 150, 200, 300, 500, 700, 1000, 2000, 5000]) P = gamma*N P = P.astype(int) eig_max = np.array([]) eig_min = np.array([]) for i in range(len(N)): X = np.random.normal(0,1,(P[i],N[i])) Y = X@(X.T)/N[i] eigvals, eigvecs = np.linalg.eigh(Y) eig_max = np.append(eig_max, np.max(eigvals)) """ TODO: 1. 
append the minimum eigenvalue to eig_min """ ### BEGIN SOLUTION eig_min = np.append(eig_min, np.min(eigvals)) ### END SOLUTION # + nbgrader={"grade": false, "grade_id": "cell-05069ea20d2153da", "locked": false, "schema_version": 3, "solution": true, "task": false} plt.figure(figsize=(12,8)) plt.plot(np.arange(len(eig_max)), eig_max, lw=2, label='maximum') plt.axhline(y=(1 + np.sqrt(gamma))**2, color='r', linestyle='-', label='lambda+') plt.plot(np.arange(len(eig_min)), eig_min, lw=2, label='minimum') """ TODO: 2. plot the horizontal line of (1 - sqrt(gamma))^2 """ ### BEGIN SOLUTION plt.axhline(y=(1 - np.sqrt(gamma))**2, color='g', linestyle='-', label='lambda-') ### END SOLUTION plt.xticks(np.arange(len(eig_max)), N, rotation='vertical') _ = plt.legend() # - # #### Existence of 4th Moments # # In previous theorem, we we assume that the entries $(X_{n})_{ij}$ have finite $4$th moments. Then we will have the convergence of the largest and smallest eigenvalue as $n, p \to \infty$ with $\gamma = \lim_{n\to \infty}(p/n) < 1$. The existence of 4th Moments does matter. # # Let's consider the `powerlaw` distribution instead with probability density function # $$ # f(x)=3*x^{-4}\mathbf{1}_{\{x\geq 1\}}(x)\,, # $$ # where the first moment $\mathbb{E}[X] = \int xf(x)dx = 3/2$ and second moment $\mathbb{E}[X^2] = \int x^2f(x)dx = 3$, while the fourth moment is not finite. The theorem doesn't apply anymore. That is, we don't expect the largest and the smallest eigenvalues to converge to the edge of the support of the Marcenko–Pastur density. # + gamma = 0.09 N = np.arange(1500, 2000, 50) P = gamma*N P = P.astype(int) beta = 4 # define powerlaw distribution powerlaw_distribution = powerlaw.Power_Law(xmin=1., parameters=[beta], discrete=False) eig_max = np.array([]) eig_min = np.array([]) for i in range(len(N)): samples = powerlaw_distribution.generate_random(P[i]*N[i]) samples = (samples - np.mean(samples))/np.sqrt(np.var(samples)) #standardise the data X = np.array(samples).reshape((P[i], N[i])) Y = X@(X.T)/N[i] eigvals, eigvecs = np.linalg.eigh(Y) eig_max = np.append(eig_max, np.max(eigvals)) eig_min = np.append(eig_min, np.min(eigvals)) # - plt.figure(figsize=(12,8)) plt.plot(np.arange(len(eig_max)), eig_max, lw=2, label='maximum') plt.axhline(y=(1 + np.sqrt(gamma))**2, color='r', linestyle='-', label='lambda+') plt.xticks(np.arange(len(eig_max)), N, rotation='vertical') plt.xlabel('n') _ = plt.legend() plt.figure(figsize=(12,8)) plt.plot(np.arange(len(eig_min)), eig_min, lw=2, label='minimum') plt.axhline(y=(1 - np.sqrt(gamma))**2, color='g', linestyle='-', label='lambda-') plt.xticks(np.arange(len(eig_min)), N, rotation='vertical') plt.xlabel('n') _ = plt.legend() # ## Part 2. Spiked Covariance Models # # #### Theorem. # # Let $X_n$ denote a $p\times n$ random matrix whose entries are independent identically distributed random variables with mean $0$ and variance $1$. Let $C = I_{n} + D_{r}$ be a rank-$r$ diagonal perturbation of the identity matrix with fixed $r$. Define the $p \times p$ `spiked Wishart matrix` # $$ # Y_{n}= \frac{1}{n} X_n C X_n^{T} \in \mathbb{R}^{p\times p}. # $$ and let $\lambda^{(n)}_{1},\,\lambda^{(n)}_{2},\,\dots ,\,\lambda^{(n)}_{p}$ denote the eigenvalues of $Y_{n}$ (viewed as random variables). Finally, consider the random measure # $$ # \mu_{p}(A)={\frac{1}{p}}\#\left\{\lambda^{(n)}_{j}\in A\right\},\quad A\subset \mathbb {R}. # $$ # # Assume that $p,\,n\,\to \,\infty$ so that the ratio $p/n\,\to \,\gamma \in (0,+\infty )$. 
Then $\mu _{p}\,\to \,\mu$ in distribution, where # # $$ # \mu(A) = \begin{cases} # (1-\frac{1}{\gamma}) \mathbf{1}_{0\in A} + \nu(A),& \text{if } \gamma >1,\\ # \nu(A),& \text{if } 0\leq \gamma \leq 1, # \end{cases} # $$ # and # $$ # d\nu (x)= {\frac {\sqrt {(\lambda_{+}-x)(x-\lambda _{-})}}{2\pi\gamma x}}\,\mathbf{1}_{x\in [\lambda _{-},\lambda _{+}]}\,dx # $$ # with $\lambda_{\pm}=(1\pm \sqrt{\gamma})^{2}$. # # * Remark. As far as the limiting histogram of eigenvalues is concerned, spiked models make no difference. The limit of the proportion of eigenvalues in a given interval tells us nothing about the extreme eigenvalues. In the first part, the theorem of Bai and Yin showed that, in the non-spiked case ($C = I_n$), the largest eigenvalue sticks to the edge of the Marcenko–Pastur density, that is, all eigenvalues are asymptotically inside the interval $[(1 - \sqrt{\gamma})^2, (1 + \sqrt{\gamma})^2]$. However, this fails for spiked covariance models ($C = I_{n} + D_{r}$): they have `outlier eigenvalues`. # + nbgrader={"grade": false, "grade_id": "cell-baef18787396608e", "locked": false, "schema_version": 3, "solution": true, "task": false} r = 4 gamma = 0.5 n = 500 p = int(gamma*n) X = np.array(np.random.randn(p,n)) eig_D = np.array([4.2, 8.7, 11.8, 20.9]) D = np.diag(np.concatenate([eig_D, np.zeros(n-len(eig_D))])) """ TODO: 3. Compute C = I_n + D and Y = XCX.T/n """ ### BEGIN SOLUTION C = np.eye(n) + D Y = X@C@(X.T)/n ### END SOLUTION eigen_Y = np.sort(np.linalg.eigh(Y)[0]) # + edges = np.linspace(np.min(eigen_Y) - 0.1, np.max(eigen_Y) + 0.1, 200) a = (1 - np.sqrt(gamma))**2 b = (1 + np.sqrt(gamma))**2 plt.figure(figsize=(12,8)) plt.hist(eigen_Y, bins=edges,weights=1/(p*(edges[1]-edges[0]))*np.ones(p),label='Empirical eigenvalues') edges_MP = np.linspace(a,b,100) mu = np.sqrt((edges_MP-a)*(b-edges_MP) )/(2*pi*gamma) plt.plot(edges_MP,mu/edges_MP,'r',label='Marcenko-Pastur law') plt.gca().set_xlim(0,np.max(eigen_Y)+0.5) plt.gca().set_ylim(0,1) _ = plt.legend() # + nbgrader={"grade": true, "grade_id": "cell-1d9176243eaab256", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} """ TODO: 4. Following the procedure above, try another example by setting gamma = 0.36, n = 1000 and eig_D = np.array([13, 27]) """ ### BEGIN SOLUTION gamma = 0.36 n = 1000 p = int(gamma*n) eig_D = np.array([13, 27]) D = np.diag(np.concatenate([eig_D, np.zeros(n-len(eig_D))])) C = np.eye(n) + D X = np.array(np.random.randn(p,n)) Y = X@C@(X.T)/n eigen_Y = np.sort(np.linalg.eigh(Y)[0]) edges = np.linspace(np.min(eigen_Y) - 0.1, np.max(eigen_Y) + 0.1, 200) a = (1 - np.sqrt(gamma))**2 b = (1 + np.sqrt(gamma))**2 plt.figure(figsize=(12,8)) plt.hist(eigen_Y, bins=edges,weights=1/(p*(edges[1]-edges[0]))*np.ones(p),label='Empirical eigenvalues') edges_MP = np.linspace(a,b,100) mu = np.sqrt((edges_MP-a)*(b-edges_MP) )/(2*pi*gamma) plt.plot(edges_MP,mu/edges_MP,'r',label='Marcenko-Pastur law') plt.gca().set_xlim(0,np.max(eigen_Y)+0.5) plt.gca().set_ylim(0,1) _ = plt.legend() ### END SOLUTION # - # ## Submission Instructions # # # ### Download Code Portion # * Restart the kernel and run all the cells to make sure your code works. # * Save your notebook using File > Save and Checkpoint. # * Use File > Download as > PDF via LaTeX. # * Download the PDF file and confirm that none of your work is missing or cut off. # * **DO NOT** simply take pictures using your phone. # # ### Submitting ### # * Submit the assignment to Lab1 on Gradescope. 
# * **Make sure to assign only the pages with your implementation to the question.** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="tDCyoAQIoeMd" # # PyTorch: IBA (Per-Sample Bottleneck) # # This notebook shows how to apply the Per-Sample Bottleneck to pretrained ImageNet models. # # Ensure that `./imagenet` points to your copy of the ImageNet dataset. # # You might want to create a symlink: # + id="RWSJpsyKqHjH" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1611561628936, "user_tz": -480, "elapsed": 29596, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="3dff8ccc-c2a4-472b-9d29-13b6fcafbaba" from google.colab import drive import sys drive.mount('/content/drive', force_remount=True) # + id="CUpJJ2JIEycL" executionInfo={"status": "ok", "timestamp": 1611561628939, "user_tz": -480, "elapsed": 29582, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} sys.path.append('/content/drive/MyDrive/Prak_MLMI') sys.path.append('/content/drive/MyDrive') sys.path.append('/content/drive/MyDrive/Prak_MLMI/model') # + id="SSHBVC1nI7bi" executionInfo={"status": "ok", "timestamp": 1611561628940, "user_tz": -480, "elapsed": 29577, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} import warnings warnings.filterwarnings('ignore') # + id="i3dkynIRac4_" executionInfo={"status": "ok", "timestamp": 1611561628941, "user_tz": -480, "elapsed": 29572, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} import matplotlib.patches as patches # + id="mhGuVUtYoeMd" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1611561636594, "user_tz": -480, "elapsed": 37182, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="b4ccdcd6-f303-4283-83d6-d34b8994bc6c" # to set you cuda device # %env CUDA_VISIBLE_DEVICES=0 # %load_ext autoreload # %autoreload 2 import torch import torchvision.models from torch import nn from torch.utils.data import DataLoader from torchvision.datasets import ImageFolder from torchvision.transforms import Compose, CenterCrop, ToTensor, Resize, Normalize, Grayscale import matplotlib.pyplot as plt import os from tqdm import tqdm_notebook import json from PIL import Image import numpy as np import sys try: import IBA except ModuleNotFoundError: sys.path.insert(0, '..') import IBA from IBA.pytorch import IBA, tensor_to_np_img from IBA.utils import plot_saliency_map import cxr_dataset as CXR # + id="XcxZDbsAIpJ7" executionInfo={"status": "ok", "timestamp": 1611561739634, "user_tz": -480, "elapsed": 140216, "user": {"displayName": "yezi Tsai", "photoUrl": "", "userId": "05768761827968141093"}} # #!unzip "/content/drive/MyDrive/NIH_CXR14_Resized.zip" -d "/content/drive/MyDrive/NIH_CXR14_Resized" # %%capture # !unzip "/content/drive/MyDrive/NIH_CXR14_Resized.zip" -d /content/ # + [markdown] id="0I8XEzIyoeMe" # ## Loading Data and Model # + id="2hqke6AdoeMe" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1611561755689, "user_tz": -480, "elapsed": 156249, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="be2eb469-3e6c-4397-aa97-4c303d88666e" prak_dir = '/content/drive/MyDrive/Prak_MLMI' imagenet_dir = 
'/content/drive/MyDrive/Prak_MLMI/imagenet' PATH_TO_IMAGES = "/content/NIH small" MODEL_PATH = prak_dir + '/model/results/checkpoint_best' label_path = '/content/drive/MyDrive/Prak_MLMI/model/labels' dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") checkpoint = torch.load(MODEL_PATH, map_location=dev) model = checkpoint['model'].module model.to(dev).eval() print(model) # + id="4ofQxeOwYoSa" executionInfo={"status": "ok", "timestamp": 1611561755690, "user_tz": -480, "elapsed": 156243, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} # load the data # use imagenet mean,std for normalization mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] data_transforms = { 'train': Compose([ Resize(224), CenterCrop(224), ToTensor(), Normalize(mean, std) ]), 'val': Compose([ Resize(224), CenterCrop(224), ToTensor(), Normalize(mean, std) ]), } bounding_box_transform = CXR.RescaleBB(224, 1024) # + id="0tVtsdaaO7pe" executionInfo={"status": "ok", "timestamp": 1611561755691, "user_tz": -480, "elapsed": 156234, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} def get_label(LA): labels = { 'Atelectasis': [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Cardiomegaly': [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Effusion': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Infiltration': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Mass': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Nodule': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Pneumonia': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Pneumothorax': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Consolidation': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Edema': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Emphysema': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Fibrosis': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Pleural_Thickening': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Hernia': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } return labels.get(LA, None) # + id="SJ460epLPRtD" executionInfo={"status": "ok", "timestamp": 1611561755692, "user_tz": -480, "elapsed": 156228, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} def model_loss_closure(input): loss = nn.BCEWithLogitsLoss() mse_loss = loss(model(input), torch.tensor(_label).view(1,-1).expand(10, -1).to(dev).float()) return mse_loss # + id="-yuq0D_1Pcw1" executionInfo={"status": "ok", "timestamp": 1611561755693, "user_tz": -480, "elapsed": 156222, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} # %matplotlib inline # + id="eEoQ2PZVRJpb" executionInfo={"status": "ok", "timestamp": 1611561755693, "user_tz": -480, "elapsed": 155926, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} def samples_display(LABEL, beta=6): dataset = CXR.CXRDataset( path_to_images=PATH_TO_IMAGES, fold='BBox',#fold='train' transform=data_transforms['train'], transform_bb=bounding_box_transform, fine_tune=False, label_path=label_path, finding=LABEL) #finding=LABEL dataloader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=1) global _label _label = get_label(LABEL) seed = np.random.randint(0, 1000000000) np.random.seed(seed) length = len(dataset) print("20 Ransom samples of "+LABEL+" from " + str(length) + " images") for sample_idx in np.random.choice(length, 20): #change with dataset length iba = IBA(model.features.denseblock2) iba.reset_estimate() iba.estimate(model, dataloader, device=dev, n_samples=length, progbar=False) #change with dataset 
length img, target, idx, bbox = dataset[sample_idx] img = img[None].to(dev) # reverse the data pre-processing for plotting the original image np_img = tensor_to_np_img(img[0]) size = 4 rows = 1 cols = 5 fig, (ax, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(cols*size, rows*size)) cxr = img.data.cpu().numpy().squeeze().transpose(1, 2, 0) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) cxr = std * cxr + mean cxr = np.clip(cxr, 0, 1) # rect_original = patches.Rectangle((bbox[0, 0], bbox[0, 1]), bbox[0, 2], bbox[0, 3], linewidth=2, edgecolor='r', facecolor='none', zorder=2) rect_original = patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2) ax.imshow(cxr) ax.axis('off') ax.set_title(idx) ax.add_patch(rect_original) iba.reverse_lambda = False iba.beta = beta heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax2 = plot_saliency_map(heatmap, np_img, ax=ax2) _ = ax2.set_title("Old method, Block2") ax2.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) iba.reverse_lambda = True iba.beta = beta heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax3 = plot_saliency_map(heatmap, np_img, ax=ax3) _ = ax3.set_title("New method, Block2") ax3.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) iba = IBA(model.features.denseblock3) iba.reset_estimate() iba.estimate(model, dataloader, device=dev, n_samples=length, progbar=False) iba.reverse_lambda = False iba.beta = beta heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax4 = plot_saliency_map(heatmap, np_img, ax=ax4) _ = ax4.set_title("Old method, Block3") ax4.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) iba.reverse_lambda = True iba.beta = beta heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax5 = plot_saliency_map(heatmap, np_img, ax=ax5) _ = ax5.set_title("New method, Block3") ax5.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) plt.show() return length # + [markdown] id="Nc9mbi2DTkUV" # # Cardiomegaly # + id="002jwG_gRkZl" colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1Mp5jRfNWIl_z9RHW_GES7Stbhd5-GqKs"} executionInfo={"status": "ok", "timestamp": 1611551723772, "user_tz": -480, "elapsed": 191339, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="a005e30e-78e1-4bce-8b2b-11285f82589a" LABEL='Cardiomegaly' length_data=samples_display(LABEL) # + id="JqzCJWq17dYm" dataset = CXR.CXRDataset( path_to_images=PATH_TO_IMAGES, fold='BBox',#fold='train' transform=data_transforms['train'], transform_bb=bounding_box_transform, fine_tune=False, label_path=label_path, finding=LABEL) #finding=LABEL dataloader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=1) img, target, idx, bbox = dataset[0] # + [markdown] id="y30THSog-_AO" # ## Compare different betas (So I chose beta=6 instead of 10, the default) # + colab={"base_uri": "https://localhost:8080/", "height": 457} id="Vy7rhjtJ73-C" executionInfo={"status": "ok", "timestamp": 1611551969552, "user_tz": -480, "elapsed": 14246, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="fc18f782-35c7-4fa9-c79e-3ade35971a81" iba = 
IBA(model.features.denseblock3) iba.reset_estimate() iba.estimate(model, dataloader, device=dev, n_samples=len(dataset), progbar=False) size = 4 rows = 2 cols = 4 fig, axes = plt.subplots(2, 4, figsize=(cols*size, rows*size)) beta_set = [0.5, 1, 2, 4, 6, 8, 10, 12] for beta_value, ax in zip(beta_set, axes.flatten()) : heatmap = iba.analyze(img[None].to(dev), model_loss_closure, beta = beta_value) ax = plot_saliency_map(heatmap, tensor_to_np_img(img), ax=ax) ax.set_title("beta={}".format(beta_value)) plt.show() # + [markdown] id="X5_lYEAUCJI_" # ## Compare New and Old method with IB in output of denseblock3 with another image as well as compare masked and unmasked prediction probability # + id="W73KiRZyhWq-" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1611550256069, "user_tz": -480, "elapsed": 6959, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="b77c781b-6a1c-4179-e714-8ceaafcae67d" img = img[None].to(dev) np_img = tensor_to_np_img(img[0]) iba = IBA(model.features.denseblock3) iba.reset_estimate() iba.estimate(model, dataloader, device=dev, n_samples=length_data, progbar=False) print("Old method") iba.reverse_lambda = False iba.beta = 6 #model_loss_closure = lambda x: -torch.log_softmax(model(x), 1)[:, target].mean() heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax = plot_saliency_map(heatmap, np_img) _ = ax.set_title(idx) ax.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) # show the prediction probability print(torch.sigmoid(model(img))) print(torch.tensor(_label).view(1,-1).to(dev).float()) with iba.restrict_flow(): print(torch.sigmoid(model(img))) plt.show() print("Ground truth with bounding box") cxr = img.data.cpu().numpy().squeeze().transpose(1, 2, 0) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) cxr = std * cxr + mean cxr = np.clip(cxr, 0, 1) # rect_original = patches.Rectangle((bbox[0, 0], bbox[0, 1]), bbox[0, 2], bbox[0, 3], linewidth=2, edgecolor='r', facecolor='none', zorder=2) rect_original = patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2) fig, ax2 = plt.subplots(1, 1, figsize=(5.5, 4.0)) ax2.imshow(cxr) ax2.axis('off') ax2.set_title(idx) ax2.add_patch(rect_original) plt.show() print("New method") iba.reverse_lambda = True iba.beta = 6 #model_loss_closure = lambda x: -torch.log_softmax(model(x), 1)[:, target].mean() heatmap = iba.analyze(img, model_loss_closure) # show the heatmap ax3 = plot_saliency_map(heatmap, np_img) _ = ax3.set_title(idx) ax3.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) # show the prediction probability print(torch.sigmoid(model(img))) print(torch.tensor(_label).view(1,-1).to(dev).float()) with iba.restrict_flow(): print(torch.sigmoid(model(img))) plt.show() # + [markdown] id="0eytuLQd2CHm" # ## Old method (try with different blocks of DenseNet) with other images # # + id="tl0KUzfgblO_" def compare_blocks(length, ind=0): index = min (ind, length) print("results for image "+str(index)) img, target, idx, bbox = dataset[index] img = img[None].to(dev) np_img = tensor_to_np_img(img[0]) size = 4 rows = 1 cols = 4 fig, axes = plt.subplots(1, 4, figsize=(cols*size, rows*size)) positions = [["1"], ["2"], ["3"],["4"]] for position, ax in zip(positions, axes.flatten()): current_pos = 
"model.features.denseblock{0}".format(*position) iba = IBA(eval(current_pos)) iba.reset_estimate() iba.estimate(model, dataloader, device=dev, n_samples=length_data, progbar=False) iba.beta = 6 heatmap = iba.analyze(img, model_loss_closure) ax = plot_saliency_map(heatmap, np_img, ax=ax) ax.set_title(current_pos) ax.add_patch(patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], linewidth=2, edgecolor='r', facecolor='none', zorder=2)) plt.show() # + id="XFBEflqmcpeR" colab={"base_uri": "https://localhost:8080/", "height": 675} executionInfo={"status": "ok", "timestamp": 1611550303799, "user_tz": -480, "elapsed": 43254, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="2ef5dc73-2c5b-42cb-9056-09f5a7af935a" compare_blocks(length_data,0) compare_blocks(length_data,1) compare_blocks(length_data,2) # + [markdown] id="KA60-q--dqtC" # # Atelectasis # + id="tXc9ehk3dpjf" colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1EJHgN-p8qgzrXhypFHZ450tlNoUKo5ll"} executionInfo={"status": "ok", "timestamp": 1611550512835, "user_tz": -480, "elapsed": 209022, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="63df8f4e-89cc-401e-e2c9-7239140a723c" LABEL='Atelectasis' length_data=samples_display(LABEL) # + [markdown] id="yYZIH6iD5bx1" # # Effusion # + id="zd027g-Udy-W" colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1yDgcop-DHApdFD7nxR8X-T9hSNn84I4d"} executionInfo={"status": "ok", "timestamp": 1611550751943, "user_tz": -480, "elapsed": 192682, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="c42c4537-5bf1-4c4e-f4c2-ac05b3f8616a" LABEL='Effusion' length_data=samples_display(LABEL) # + [markdown] id="lP73hTO55jPV" # # Mass # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1nQuBo7eUGorFRdfOU0HXQQtmEIO7Cohq"} id="AwIyCv8r5gvR" executionInfo={"status": "ok", "timestamp": 1611551028748, "user_tz": -480, "elapsed": 157033, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="500ec248-94e4-4d80-d6bf-264654ed22d4" LABEL='Mass' length_data=samples_display(LABEL) # + [markdown] id="TuQQeiJ254po" # # Nodule # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1zDTFxondbCXaaXDLCw8kSvptmhbvJqt2"} id="4Qo-NPIS6JNA" executionInfo={"status": "ok", "timestamp": 1611551183483, "user_tz": -480, "elapsed": 154664, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="f86d7760-f48b-4b72-bbc9-49c0f39f44b7" LABEL='Nodule' length_data=samples_display(LABEL) # + [markdown] id="eQ71Kiz5jykc" # # Pneumonia # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1dVYPcL0T1rgjKmqnuBgoJ8Vw32qo0U85"} id="S88RIup-kg2N" executionInfo={"status": "ok", "timestamp": 1611562042654, "user_tz": -480, "elapsed": 182889, "user": {"displayName": "", "photoUrl": "", "userId": "05768761827968141093"}} outputId="fa7d8846-87e8-4973-9518-29b0db1f25ce" LABEL='Pneumonia' length_data=samples_display(LABEL) # + [markdown] id="Qvs_ODWMjyV7" # # Pneumothorax # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1f-5DJoK1LZ86LnclfRobr52CDGs_PDyr"} id="JMJRm3MHjxJG" executionInfo={"status": "ok", "timestamp": 1611562210050, "user_tz": -480, "elapsed": 348131, "user": {"displayName": "", "photoUrl": "", "userId": 
"05768761827968141093"}} outputId="1c530e6b-3d72-4966-fba7-7218133a9a54" LABEL='Pneumothorax' length_data=samples_display(LABEL) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="fAv73Tjy6V3i" colab={"base_uri": "https://localhost:8080/", "height": 799} outputId="d613985c-3a64-4340-891a-27f76336f051" import pandas as pd import tensorflow_probability as tfp import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np import matplotlib.pyplot as plt PERTCPM=pd.read_csv("/content/drive/MyDrive/Colab Notebooks/PERT-CPM.csv") print(PERTCPM) print('--------------------\n') taskNames=PERTCPM['tasks'] resultDataFrame=pd.DataFrame(columns=taskNames) paths=['ADEJ', 'BCDEJ', 'BCFGJ', 'BCFHJ', 'BI'] countOfCriticalsOfEachPath = [0, 0, 0, 0, 0] tfd = tfp.distributions for index, task in PERTCPM.iterrows(): dist=tfd.PERT(low=task['ai '], peak=task['mi'], high=task['bi'], temperature=4) resultDataFrame[task['tasks']]=dist.sample(1000) overAllCriticalLength=0 #loop each path based on 1000 simples for indexOfSimple, simple in resultDataFrame.iterrows(): criticalLength=0 criticalIndex = -1 #the index of the critical path within paths for indexOfPath in range(len(paths)): path = paths[indexOfPath] length = 0 for indexOfTask in range(len(path)): task = path[indexOfTask] length += simple[task] if(length > criticalLength): criticalLength = length criticalIndex = indexOfPath countOfCriticalsOfEachPath[criticalIndex] += 1 if(criticalLength > overAllCriticalLength): overAllCriticalLength = criticalLength # declares the longest path as critical print("Critical Path: ") print(paths[criticalIndex]) print("Critical Path Length: ") print(overAllCriticalLength) # path critical frequency allPathFrequency=[] for index in countOfCriticalsOfEachPath: frequency=round((index/5000*100), 4) allPathFrequency+=[frequency] # create chart objects = paths y_pos = np.arange(len(objects)) performance = allPathFrequency plt.bar(y_pos, performance, align='center', alpha=0.5) plt.xticks(y_pos, objects) plt.ylabel('Frequency(%)') plt.title('Path Probabilities') print("The times each path as critical path: ") print(countOfCriticalsOfEachPath) plt.show() # + [markdown] id="FwVimyAYpOUR" # # + [markdown] id="Y7ImyRlWoz5T" # # New Section # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sparkify Project Workspace # This workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster is included in the last lesson of the Extracurricular Spark Course content. # # You can follow the steps below to guide your data analysis and model building portion of this project. # # # ## Problem Statement # # Determine customer churn for a music streaming company called Sparkify. The users stream their favorite songs either using the free tier that place advertisements between songs or use the premium plan for which they pay a monthly fee with no advertisements between songs. Users can upgrade, downgrade or cancel their services. 
The dataset given to us contains a log of the activities of each user on the service whether they are playing songs, logging out, upgrading their service or cancelling the service. All this data contains key insights in keeping the users of the service happy. Our task in this project is to develop a model that predict which users are at risk. If we can identify users that are at risk to churn either by downgrading from premium or by cancelling their service, the business can offer them incentives and discounts potentially saving millions in revenues or maximize revenues. # # I will be using pyspark and different classification models for predicting the customer churn # # ### Metric # # We will be using F1 score as there is a highly imbalanced, i.e. the number of churned users is very small compared to number of active users. # + # import libraries # import libraries from pyspark.sql import SparkSession from pyspark.sql.functions import isnan, when, count, col, avg, concat, desc, explode, lit, min, max, split, udf, isnull from pyspark.ml.feature import CountVectorizer, IDF, Normalizer, PCA, RegexTokenizer, StandardScaler, StopWordsRemover, StringIndexer, VectorAssembler from pyspark.ml.evaluation import MulticlassClassificationEvaluator from pyspark.ml.tuning import CrossValidator, ParamGridBuilder from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, GBTClassifier, DecisionTreeClassifier, NaiveBayes from pyspark.mllib.util import MLUtils from pyspark.mllib.evaluation import BinaryClassificationMetrics from pyspark.ml import Pipeline from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer from pyspark.sql.types import IntegerType import pyspark.sql.functions as psqf import pyspark.sql.types as psqt import datetime import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # - # changing the default figure size of the plots fig_size = plt.rcParams["figure.figsize"] fig_size[0] = 10 fig_size[1] = 5 plt.rcParams["figure.figsize"] = fig_size # create a Spark session spark = SparkSession.builder.appName("Sparkify").getOrCreate() # # Load and Clean Dataset # In this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids. #reading the json to a dataframe sparkify_df = spark.read.json('mini_sparkify_event_data.json') #checking if there are any missing values, but we didn't check for any empty values which we will see in the next cell sparkify_df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in sparkify_df.columns]).show() # There are no missing values (NaNs) in our dataset. If there are missing values in any column, we can use filter function to remove those or dropna function based on columns or rows. But still there are empty values, lets explore sparkify_df.printSchema() # Converting Spark DataFrame to Pandas DataFrame sparkify_dfp = sparkify_df.toPandas() sparkify_dfp.head() sparkify_dfp.tail() print("There are about {} rows with empty value in userId".format(sparkify_dfp[sparkify_dfp['userId'] == ''].shape[0])) #removing empty values from userId column sparkify_dfp = sparkify_dfp[~(sparkify_dfp.userId == '')].reset_index(drop=True) # Removed rows where there are empty values in the dataframe. 
There are no empty sessionIds # # Exploratory Data Analysis # When you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore. # # ### Define Churn # # Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events. # Assuming `Cancellation Confirmation` events are customers churning, as of now print("Number of user transactions: {}".format(sparkify_dfp.shape[0])) print("Number of user columns: {}".format(sparkify_dfp.shape[1])) # ##### What is the number of users who have Cancellation Confirmation? users_churned = list(set(sparkify_dfp[sparkify_dfp['page'].isin(['Cancellation Confirmation'])]['userId'].unique())) len(users_churned) # There are 52 users who cancelled confirmation the service. We can also consider users who 'Submit Downgrade', "Cancel" and "Downgrade" as churned sparkify_dfp['churn'] = sparkify_dfp['userId'].apply(lambda user: 1 if user in users_churned else 0) # ### Explore Data # Once you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played. #columns and their datatypes sparkify_dfp.info() #total number of users print("There are {} unique users".format(len(sparkify_dfp.userId.unique().tolist()))) # ##### Distribution of different categorical variables print("Below is the distribution for column {}".format('userId')) df_temp = sparkify_dfp.drop_duplicates(subset='userId').groupby(['churn'])['userId'].count().reset_index().rename(columns={'churn': 'churn', 'userId': 'userId_count'}) col_cnt = list(set(df_temp.columns.tolist()) - set(['churn']))[0] print(df_temp) ax = sns.barplot(x='churn', y='userId_count', data=df_temp, ci=None) ax.set_xlabel(col_cnt) # There is a class imbalance, we need to carefully handle the modeling part #distribution of levels and churned users print("Below is the distribution for column {}".format('level')) df_temp = sparkify_dfp.drop_duplicates(subset=['userId']).groupby(['level','churn'])['churn'].count().reset_index(level='level') df_temp.columns = ['level','user_count'] df_temp = df_temp.reset_index() sns.barplot(x="churn",y="user_count",hue="level",data=df_temp, palette="coolwarm") sparkify_dfp.level.value_counts() ax = sns.countplot(x="level", data=sparkify_dfp) # We can see there are more users who take free subscription and then cancel when subscription ends #distribution of levels and churned users print("Below is the distribution for column {}".format('level')) df_temp = sparkify_dfp.drop_duplicates(subset=['userId']).groupby(['gender','churn'])['churn'].count().reset_index(level='gender') df_temp.columns = ['gender','user_count'] df_temp = df_temp.reset_index() sns.barplot(x="churn",y="user_count",hue="gender",data=df_temp, palette="coolwarm") # There are approximately equal male and female who didnt cancel confirmation. 
But more number of cancel confirmations by Male who churned #duplicated rows sparkify_dfp[sparkify_dfp.duplicated()] # There are no duplicated rows # ##### Page Distribution df_temp = sparkify_dfp['page'].value_counts().reset_index().rename(columns={'index': 'page', 'page': 'count'}) ax = sns.barplot(x="count", y="page", data=df_temp, ci=None) ax.set_xlabel('count') # Users listen to songs more often, hence "NextSong" is more. Followed by Home, Thumbs up, Add to playlist. # top 15 artists users listen to df_temp = sparkify_dfp.artist.value_counts()[:15].reset_index().rename(columns={'index': 'artist', 'artist': 'count'}) ax = sns.barplot(x="count", y="artist", data=df_temp, ci=None) ax.set_xlabel('count') # Users prefer mostly Kings Of Leon and Coldplay sparkify_dfp.groupby(['auth','churn'])['churn'].count() #distribution of auth and churned users print("Below is the distribution for column {}".format('auth')) df_temp = sparkify_dfp.groupby(['auth','churn'])['churn'].count().reset_index(level='auth') df_temp.columns = ['auth','user_count'] df_temp = df_temp.reset_index() sns.barplot(x="churn",y="user_count",hue="auth",data=df_temp) # Mostly users Logged In # From our analysis: # # - More male users than female users using the service and the number of events corresponding to male users is greater than the number of events for female users. # # - There are fewer paid level users than free level users, but the number of events for paid users is greater than the number of events for free level users. This means that paid level users are more active users of the service. sparkify_dfp.columns # # Feature Engineering # Once you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps. # - Write a script to extract the necessary features from the smaller subset of data # - Ensure that your script is scalable, using the best practices discussed in Lesson 3 # - Try your script on the full data set, debugging your script if necessary # # If you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster. 
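# Before building the full feature set below, here is a minimal sketch of the per-user aggregation
# pattern this section relies on (a hypothetical example, not part of the original notebook; it assumes
# the `sparkify_df` Spark DataFrame loaded above and the `page`/`userId` columns used throughout).
# Because it uses only Spark DataFrame operations, the same code scales unchanged to the full 12GB set.
# +
from pyspark.sql import functions as F

# songs played per user: filter on the "NextSong" page, then a distributed groupBy/count
songs_per_user = (sparkify_df
                  .filter(F.col("page") == "NextSong")
                  .groupBy("userId")
                  .agg(F.count("*").alias("songs_played")))
songs_per_user.show(5)
# -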
#converting to int as the time is in exponential form sparkify_dfp['registration'] = sparkify_dfp['registration'].astype(int) #converting the pandas dataframe to spark dataframe sparkify_dfs = spark.createDataFrame(sparkify_dfp) # + # convert unix timestamp to actual to create more features like month, day, year, hour etc., def convert_ux_datetime(unix_timestamp): """Converts given ns to ms""" if unix_timestamp is None: return None return unix_timestamp//1000 convert_ux_datetime_udf = psqf.udf(convert_ux_datetime, psqt.LongType()) sparkify_dfs = sparkify_dfs.withColumn('ts_datetime', convert_ux_datetime_udf(sparkify_dfs.ts).cast('timestamp')) sparkify_dfs = sparkify_dfs.withColumn('registration_datetime', convert_ux_datetime_udf(sparkify_dfs.registration).cast('timestamp')) sparkify_dfs.select('ts', 'ts_datetime', 'registration', 'registration_datetime').show(5) # - sparkify_dfs.printSchema() #top 5 rows of spark dataframe sparkify_dfs.show(5) #lets a unique user Ids and churn labels user_churn = sparkify_dfs.select("userId", "churn").dropDuplicates() user_churn = user_churn.select("userId", user_churn.churn.cast("int")) # As per the above exploration, we see a clear distinction between free and paid user_paid = sparkify_dfs.groupby("userId", "level").agg(max(sparkify_dfs.ts).alias("max_time")).sort("userId") user_max_time = user_paid.groupby("userId").agg(max(user_paid.max_time).alias("recent")) user_level = user_max_time.join(user_paid, [user_paid.userId == user_max_time.userId, user_max_time.recent == user_paid.max_time]).select(user_paid.userId, "level").sort("userId") user_level = user_level.replace(["free", "paid"], ["0", "1"], "level") user_level = user_level.select("userId", user_level.level.cast("int")) # As per the above exploration, we see users attempt to listen to songs often user_artist_count = sparkify_dfs.filter(sparkify_dfs.page=="NextSong").select("userId", "artist").dropDuplicates().groupby("userId").count() user_artist_count = user_artist_count.withColumnRenamed("count", "aritst_count") # + # get all the type of page page_list = [(row['page']) for row in sparkify_dfs.select("page").dropDuplicates().collect()] # removing the value which we used to labeling churn, if not there will be a data leakage page_list.remove("Cancellation Confirmation") # caculate the total page each user view user_page_view_count = sparkify_dfs.groupby("userId").count() user_page_view_count = user_page_view_count.withColumnRenamed("count", "page_count") for page in page_list: col_name = "count" + page.replace(" ", "") view_count = sparkify_dfs.filter(sparkify_dfs.page==page).groupby("userId").count() view_count = view_count.withColumnRenamed("count", col_name).withColumnRenamed("userId", "userId_temp") user_page_view_count = user_page_view_count.join(view_count, user_page_view_count.userId==view_count.userId_temp, "left").drop("userId_temp") user_page_view_count = user_page_view_count.sort("userId") user_page_view_count = user_page_view_count.fillna(0) # + col_list = user_page_view_count.columns col_list.remove("userId") col_list.remove("page_count") freq_sql = "select userId" for col in col_list: col_name = col.replace("count", "freq") sql_str = ", (" + col + "/(page_count/100)) as " + col_name freq_sql = freq_sql + sql_str freq_sql = freq_sql + " from user_page_view_count" user_page_view_count.createOrReplaceTempView("user_page_view_count") col_list = user_page_view_count.columns col_list.remove("userId") col_list.remove("page_count") freq_sql = "select userId" for col in col_list: col_name = 
col.replace("count", "freq") sql_str = ", (" + col + "/(page_count/100)) as " + col_name freq_sql = freq_sql + sql_str freq_sql = freq_sql + " from user_page_view_count" user_page_view_freq = spark.sql(freq_sql) # - #converting the gender columns to numerical user_gender = sparkify_dfs.select("userId", "gender").dropDuplicates() user_gender = user_gender.replace(["M", "F"], ["0", "1"], "gender") user_gender = user_gender.select("userId", user_gender.gender.cast("int")) ### number of days users using service from the registration user_ts_max = sparkify_dfs.groupby("userId").max("ts").sort("userId") user_ts_reg = sparkify_dfs.select("userId", "registration").dropDuplicates().sort("userId") user_days = user_ts_reg.join(user_ts_max, user_ts_reg.userId == user_ts_max.userId).select(user_ts_reg["userId"], ((user_ts_max["max(ts)"]-user_ts_reg["registration"])/(1000*60*60*24)).alias("regDay")) #descriptive stats for user sessions user_session_time = sparkify_dfs.groupby("userId", "sessionId").agg(((max(sparkify_dfs.ts)-min(sparkify_dfs.ts))/(1000*60)).alias("session_time")) user_session_time_des_stat = user_session_time.groupby("userId").agg(avg(user_session_time.session_time).alias("sessiontime_avg"), min(user_session_time.session_time).alias("sessiontime_min"), max(user_session_time.session_time).alias("sessiontime_max")).sort("userId") #descriptive stats for user sessions songs user_session_songs = sparkify_dfs.filter(sparkify_dfs.page=="NextSong").groupby("userId", "sessionId").count() user_session_songs_des_stat = user_session_songs.groupby("userId").agg(avg(user_session_songs["count"]).alias("sessionsongs_avg"), min(user_session_songs["count"]).alias("sessionsongs_min"), max(user_session_songs["count"]).alias("sessionsongs_max")).sort("userId") user_session_songs_des_stat.show(5) #user session counts user_session_count = sparkify_dfs.select("userId", "sessionId").dropDuplicates().groupby("userId").count() user_session_count = user_session_count.withColumnRenamed("count", "session_count") # + # combining all the features into a list features_list = [] features = [user_session_count, user_days, user_session_songs_des_stat, user_session_time_des_stat, user_gender, user_page_view_freq, user_artist_count, user_level, user_churn] for feat in features: features_list.append(feat) # - # prepare the final dataframe to join all the other features sparkify_dfs_new = sparkify_dfs.select("userId").dropDuplicates() def features_merge(df1, df2): """ This function is used to merge the feature using left join input: two data frame to be merged output: merged dataframe """ df2 = df2.withColumnRenamed("userId", "userId_temp") df = df1.join(df2, df1.userId == df2.userId_temp, "left").drop("userId_temp") return df # use function to merge the features in the list for feature in features_list: sparkify_dfs_new = features_merge(sparkify_dfs_new, feature) sparkify_dfs_new.persist() # # Modeling # Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize. 
sparkify_dfs_new.groupby("churn").count().show() # Yes, thie same distribution when we plot the churn distribution #lets save this dataframe to a file, where we could reuse sparkify_dfs_new.write.save("mini_sparkify_event_data_new_v1.csv", format="csv", header=True) sparkify_dfs_new = spark.read.csv("mini_sparkify_event_data_new_v1.csv", header=True) sparkify_dfs_new.persist() #converting all the columns to numeric num_features_list = sparkify_dfs_new.columns[1:] for feat in num_features_list: feat_name = feat + "_num" sparkify_dfs_new = sparkify_dfs_new.withColumn(feat_name, sparkify_dfs_new[feat].cast("float")) sparkify_dfs_new = sparkify_dfs_new.drop(feat) #storing the features as a vector vector_assembler = VectorAssembler(inputCols=sparkify_dfs_new.columns[1:-1], outputCol="num_features") features_calc = vector_assembler.transform(sparkify_dfs_new) #standardizing the values, for faster convergence scaler = StandardScaler(inputCol="num_features", outputCol="scaled_num_features", withStd=True) scaled_features = scaler.fit(features_calc) features_calc = scaled_features.transform(features_calc) #labels and features sparkify_data = features_calc.select(features_calc.churn_num.alias("label"), features_calc.scaled_num_features.alias("features")) #splitting the data to train and test train, test = sparkify_data.randomSplit([0.8, 0.2], seed=42) train = train.cache() # #### Logistic Regression # + model_lr = LogisticRegression() params = ParamGridBuilder() \ .addGrid(model_lr.elasticNetParam,[0.0, 0.1, 0.5, 1.0]) \ .addGrid(model_lr.regParam,[0.0, 0.05, 0.1]) \ .build() crossval_lr = CrossValidator(estimator=model_lr, estimatorParamMaps=params, evaluator=MulticlassClassificationEvaluator(), numFolds=5) cv_lr = crossval_lr.fit(train) # - #best cv model bestModel_lr = cv_lr.bestModel #persist the model for later use cv_lr.save('cv_lr.model') print('Best Param (regParam): ', bestModel_lr._java_obj.getRegParam()) print('Best Param (elasticNetParam): ', bestModel_lr._java_obj.getElasticNetParam()) cv_lr.avgMetrics predictions = cv_lr.transform(test) f1_score_evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",metricName='f1') f1_score = f1_score_evaluator.evaluate(predictions) print("F1 score = %g" % (f1_score)) # #### Random Forest model_rf = RandomForestClassifier() params = ParamGridBuilder() \ .addGrid(model_rf.numTrees,[50, 100, 150]) \ .addGrid(model_rf.maxDepth,[2, 4, 6, 8]) \ .build() crossval_rf = CrossValidator(estimator=model_rf, estimatorParamMaps=params, evaluator=MulticlassClassificationEvaluator(), numFolds=3) cv_rf = crossval_rf.fit(train) cv_rf.avgMetrics predictions = cv_rf.transform(test) f1_score_evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",metricName='f1') f1_score = f1_score_evaluator.evaluate(predictions) print("F1 score = %g" % (f1_score)) # Seems to be overfitting # ## Refinement # Here we will try to undersample, as there is class imbalance, along with hyper parameter tuning. 
We will be using Logistic Regression and Random Forest for classification # #### Undersampling, penalizing to optimize F1 score as the labels are imbalance stratified_train = train.sampleBy('label', fractions={0: 52/173, 1: 1.0}).cache() stratified_train.groupby("label").count().show() # + model_lr_strat = LogisticRegression() params = ParamGridBuilder() \ .addGrid(model_lr_strat.elasticNetParam,[0.0, 0.1, 0.5, 1.0]) \ .addGrid(model_lr_strat.regParam,[0.0, 0.05, 0.1]) \ .build() crossval_lr_strat = CrossValidator(estimator=model_lr_strat, estimatorParamMaps=params, evaluator=MulticlassClassificationEvaluator(), numFolds=5) cv_lr_strat = crossval_lr_strat.fit(stratified_train) # - #persist the model for later use cv_lr_strat.save('cv_lr_strat_v2.model') cv_lr_strat.avgMetrics predictions = cv_lr_strat.transform(test) f1_score_evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",metricName='f1') f1_score = f1_score_evaluator.evaluate(predictions) print("F1 score = %g" % (f1_score)) model_rf_strat = RandomForestClassifier() params = ParamGridBuilder() \ .addGrid(model_rf_strat.numTrees,[50, 100, 150]) \ .addGrid(model_rf_strat.maxDepth,[2, 4, 6, 8]) \ .build() crossval_rf_strat = CrossValidator(estimator=model_rf_strat, estimatorParamMaps=params, evaluator=MulticlassClassificationEvaluator(), numFolds=3) cv_rf_strat = crossval_rf_strat.fit(stratified_train) cv_rf_strat.avgMetrics predictions = cv_rf_strat.transform(test) f1_score_evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",metricName='f1') f1_score = f1_score_evaluator.evaluate(predictions) print("F1 score = %g" % (f1_score)) #best cv model bestModel_lr_start = cv_lr_strat.bestModel bestModel_lr_start.save('bestModel_lr_strat_best.model') # ### Results # Respective to Logistic Regression, at first on the imbalance data, it seems overfitting. Then used undersampling to equal the data distribution and build a model. # Logistic Regression on undersampling data performing better, with proper tuning of parameters using cross validation. # # Random Forest gave me good accuracy on imbalance data and also after taking care of imbalance. Random Forest in this performed better. # ### Improvements # - Can also create more features and see feature importance to choose the features and build a model, which might give us better results. Creating features based on domain knowledges and expertise. We could see the real benifit of Spark analyzing big data # # - May be, we could also create a users2vec i.e., capturing the user journey from the registration to churn, then build a classification for new users. # # - We have about 225 records of unique users, and we only use 60% of them to train. We could create more accurate models if we have more data # # Final Steps # Clean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd grand_canyon_data = pd.read_csv("data/grand_canyon_visits.csv") grand_canyon_data.head() grand_canyon_data["NumVisits"].describe() grand_canyon_data["NumVisits"] = grand_canyon_data["NumVisits"]/1000 grand_canyon_data["NumVisits"].describe() # + plt.figure(figsize=(12, 8)) plt.acorr(grand_canyon_data["NumVisits"], maxlags=6) plt.show() # + plt.figure(figsize=(12, 8)) plt.acorr(grand_canyon_data["NumVisits"], maxlags=13, color="green") plt.show() # + plt.figure(figsize=(12, 8)) plt.acorr(grand_canyon_data["NumVisits"], maxlags=48, color="green") plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.4 # language: julia # name: julia-1.5 # --- # ### Problem 5.1 - Capacitated transportation problem revisited # *(Marginal costs via dual calculation)* # Adding necessary packages: # + # using Pkg # Pkg.add("GLPK") # - ## Packages using JuMP, Cbc, GLPK # We need GLPK for obtaining duals # We start by defining the problem data # + nS = 3 # Number of suppliers nD = 3 # Number of demand points nP = 2 # Number of products S = 1:nS D = 1:nD P = 1:nP A = [] # Set of arcs, we include arcs from all suppliers to all demand points for s in S, d in D push!(A, (s,d)) end # + costs = zeros(nP,nS,nD) # Cost of transporting one unit of product d from supplier s to demand point d costs[1,:,:] = [5 5 Inf; 8 9 7; Inf 10 8] costs[2,:,:] = [Inf 18 Inf; 15 12 14; Inf 20 Inf] sup = [80 400; 200 1500; 200 300] dem = [60 300; 100 1000; 200 500] cap = [Inf 300 0; 300 700 600; 0 Inf Inf]; # + ## Define model using GLPK model = Model(GLPK.Optimizer); ## Variables @variable(model, x[a in A, p in P; costs[p,a[1],a[2]] < Inf] >= 0); ## OF @objective(model, Min, sum(costs[p,a[1],a[2]]*x[a,p] for p in P, a in A if costs[p,a[1],a[2]] < Inf)); ## Constraints @constraint(model, sup_con[s in S, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[1] == s) <= sup[s,p]); # sum of everything that leaves supplier s @constraint(model, dem_con[d in D, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[2] == d) >= dem[d,p]); # sum of everything that arrives at demand d @constraint(model, cap_con[a in A; cap[a[1],a[2]] < Inf], sum(x[a,p] for p in P if costs[p,a[1],a[2]] < Inf) <= cap[a[1],a[2]]);# arc capacity constraints # - set_silent(model) # Actually works with GLPK optimize!(model) status = termination_status(model) println(status) ## Saving the optimal value obj = objective_value(model) # Function ``dual()`` in the ``JuMP`` library gives the value of the dual variable associated to the constraint at the optimal solution, in other words the *marginal costs*. Here we need using the elemnt-wise operator ``.`` as we have multiple constraints (check the domains). The *marginal costs* value stands for how much adding one unit more to the constraint's RHS (in the case is a $\leq$ constraint) impacts the final result.
    #
    # One interpretation for the *marginal costs* in this problem is how much the company is willing to pay for expanding the supplies' or the arcs' capacity (depending on if we're analysing the dual of the supplies or the arcs constraints). ## Computing the duals to infer the marginal costs mc_supply = dual.(sup_con); mc_arcs = dual.(cap_con); for s in S, p in P println("The marginal costs for the supply $s for the product $p is: $(mc_supply[s,p])") end for a in A if cap[a[1],a[2]] < Inf println("The marginal costs for the arc $a is: $(mc_arcs[a])") end end for a in A, p in P if costs[p,a[1],a[2]] < Inf println("The value of $(x[a,p]) is $(value(x[a,p]))") end end ## Raising the supply availability of product 1 at the first supply node by 1 sup[1,1] = sup[1,1] + 1; # + ## Define model using GLPK model = Model(GLPK.Optimizer); ## Variables @variable(model, x[a in A, p in P; costs[p,a[1],a[2]] < Inf] >= 0); ## OF @objective(model, Min, sum(costs[p,a[1],a[2]]*x[a,p] for p in P, a in A if costs[p,a[1],a[2]] < Inf)); ## Constraints @constraint(model, sup_con[s in S, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[1] == s) <= sup[s,p]); # sum of everything that leaves supplier s @constraint(model, dem_con[d in D, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[2] == d) >= dem[d,p]); # sum of everything that arrives at demand d @constraint(model, cap_con[a in A; cap[a[1],a[2]] < Inf], sum(x[a,p] for p in P if costs[p,a[1],a[2]] < Inf) <= cap[a[1],a[2]]);# arc capacity constraints # - set_silent(model) optimize!(model) status = termination_status(model) println(status) ## New optimal value new_obj = objective_value(model) ## Decrease in the optimal value new_obj - obj mc_supply[1,1] # The marginal cost predicted the change in objective value correctly. Let's now try changing another bound. ## Back to the original supply availability sup[1,1] = sup[1,1] - 1; sup[1,1] ## Increasing the arc capacity by 1 for the arc from supplier 2 to demand node 2 cap[2, 2] = cap[2, 2] + 1; # + ## Define model using GLPK model = Model(GLPK.Optimizer); ## Variables @variable(model, x[a in A, p in P; costs[p,a[1],a[2]] < Inf] >= 0); ## OF @objective(model, Min, sum(costs[p,a[1],a[2]]*x[a,p] for p in P, a in A if costs[p,a[1],a[2]] < Inf)); ## Constraints @constraint(model, sup_con[s in S, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[1] == s) <= sup[s,p]); # sum of everything that leaves supplier s @constraint(model, dem_con[d in D, p in P], sum(x[a,p] for a in A if costs[p,a[1],a[2]] < Inf && a[2] == d) >= dem[d,p]); # sum of everything that arrives at demand d @constraint(model, cap_con[a in A; cap[a[1],a[2]] < Inf], sum(x[a,p] for p in P if costs[p,a[1],a[2]] < Inf) <= cap[a[1],a[2]]);# arc capacity constraints # - set_silent(model) optimize!(model) status = termination_status(model) println(status) ## New optimal value new_obj = objective_value(model) new_obj-obj mc_arcs[(2, 2)] # Turns out the marginal cost that we calculated did not predict this change correctly. Comparing the solution below, we notice that one unit of product 1 to demand point 2 is now transported from supplier 2 instead of supplier 3 due to the increased capacity of arc 2->2. This shows that you should always check whether you can apply marginal costs. If the optimal basis changes because of a change in $b$, you can't apply marginal costs. 
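# Writing $p_i^*$ for the optimal dual of constraint $i$ (its marginal cost) and $\Delta b_i$ for a small change in that constraint's right-hand side, the standard LP sensitivity result behind the checks above and below is
#
# $$\Delta z^* = p_i^* \, \Delta b_i,$$
#
# and it is valid only as long as the optimal basis does not change. This is why the dual predicted the supply change exactly, while the capacity increase on arc $2 \to 2$ moved the optimum to a different basis, so the dual value no longer applied.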
## Computing the duals to infer the marginal costs mc_supply = dual.(sup_con); mc_arcs = dual.(cap_con); for s in S, p in P println("The marginal costs for the supply $s for the product $p is: $(mc_supply[s,p])") end for a in A if cap[a[1],a[2]] < Inf println("The marginal costs for the arc $a is: $(mc_arcs[a])") end end for a in A, p in P if costs[p,a[1],a[2]] < Inf println("The value of $(x[a,p]) is $(value(x[a,p]))") end end # ### Problem 5.5 - Complementary slackness # Recall the paint factory problem introduced in Section 1.2.1. of the lecture notes A = [6 4; 1 2; -1 1; 0 1] b = [24; 6; 1; 2] c = [5; 4]; using Cbc, JuMP # #### Solving the primal and dual problems # Formulate and solve the two problems to obtain their optimal solutions #% primalmodel = Model(Cbc.Optimizer) @variable(primalmodel, x[1:2] >= 0) @objective(primalmodel, Max, sum(c.*x)) @constraint(primalmodel, [m in 1:4], sum(A[m,:].*x) <= b[m]) optimize!(primalmodel) value.(primalmodel[:x]) #% #% dualmodel = Model(Cbc.Optimizer) @variable(dualmodel, p[1:4] >= 0) @objective(dualmodel, Min, sum(b.*p)) @constraint(dualmodel, [n in 1:2], sum(A[:,n].*p) >= c[n]) optimize!(dualmodel) value.(dualmodel[:p]) #% # #### Complementary slackness # Verify the optimality of your solutions using complementary slackness # + #% x_opt = value.(primalmodel[:x]) p_opt = value.(dualmodel[:p]) lhs_primal = A*x_opt lhs_dual = A'*p_opt println("Primal feasibility:") for m in 1:4 println("$(round(lhs_primal[m],digits=2)) <= $(b[m])") end println("\nDual feasibility:") for n in 1:2 println("$(round(lhs_dual[n],digits=2)) >= $(c[n])") end println("\nComplementary slackness:") for m in 1:4 println("$(round((lhs_primal[m] - b[m])*p_opt[m],digits=2)) = 0") end for n in 1:2 println("$(round((lhs_dual[n] - c[n])*x_opt[n],digits=2)) = 0") end #% # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from odt_parse import OdtData from odt_diff import odt_compare import csv, glob, zipfile HOME_FOLDER = '../' SUBMISSION_FOLDER = HOME_FOLDER + 'PS1002-2-20172018-Document libro predefinidos.odt, grups de dijous-3219356' user_folder = glob.glob(SUBMISSION_FOLDER + '/*') ref_name = 'libro_predefinidos.odt' ref = OdtData(ref_name) with open(SUBMISSION_FOLDER + '.txt', 'w') as f: counter = 0 for ufo in user_folder: user_files = glob.glob(ufo + '/*') for ufi in user_files: tokens = ufi.split('/') filename = tokens[-1] user_data = tokens[-2] user_name, user_id, _, submission_type, _ = user_data.split('_') fns = filename.split('.') if len(fns) > 1: extension = fns[-1] else: extension = '' row = [user_name, user_id, submission_type, filename, extension] f.write(user_name + '\n' + '-'*len(user_name) + '\n') if filename != ref_name: f.write('\nNombre de fichero: %s\n' % filename) if extension == 'odt': doc = OdtData( ufi ) if doc.err: f.write('Error de lectura de fichero\n') else: f.write(odt_compare(ref, doc)) else: f.write('Extensión de fichero incorrecta\n') f.write('\n' + '#'*40 + '\n\n') counter += 1 print('%d ficheros procesados.' % counter) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Change CARTO table privacy # # This example illustrates how to change the privacy of a CARTO table. 
# # _Note: You'll need [CARTO Account](https://carto.com/signup) credentials to reproduce this example._ # + from cartoframes.auth import set_default_credentials set_default_credentials('creds.json') # + from cartoframes import update_privacy_table, describe_table update_privacy_table('my_table', 'private') describe_table('my_table')['privacy'] # + update_privacy_table('my_table', 'public') describe_table('my_table')['privacy'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd, numpy as np, seaborn as sns, geopandas as gpd, matplotlib.pyplot as plt from scipy.stats import f_oneway import pingouin as pin import matplotlib from shapely.geometry import Point, mapping # %matplotlib inline import warnings warnings.simplefilter(action="ignore") pd.set_option("display.precision", 2) pd.options.display.float_format = '{:20.2f}'.format pd.set_option('display.float_format', lambda x: '%.3f' % x) pd.options.mode.chained_assignment = None import cityImage as ci, ABManalysis as af #libraries for clustering from sklearn.cluster import KMeans, AgglomerativeClustering from sklearn.metrics import silhouette_samples, silhouette_score # - # ### Preparing and cleaning the routes # ### Coordinate System of the case study area for cartographic visualisations city_name = 'Muenster' epsg = 25832 crs = 'EPSG:'+str(epsg) # # Questionnaire # ## 1. Pre-processing raw = pd.read_csv("Input/empiricalABM/Muenster_responses_matrix.csv") # raw responses raw['startdate'] = raw.apply(lambda row: row['startdate'][11:], axis = 1) raw['datestamp'] = raw.apply(lambda row: row['datestamp'][11:], axis = 1) raw['duration'] = raw.apply(lambda row: af.compute_duration(row['startdate'], row['datestamp']), axis = 1) raw['PD2'] = raw['PD2'].fillna(raw['PD2'].mean()) raw['age'] = raw['PD2'].astype(int) print('Number of total participants:', str(len(raw))) # ### 1.1 Disregard subjects who took less than 20 minutes to complete the study # cleaning raw = raw[raw['duration']>= 20].copy() print('Number of participants whose record was not disregarded ', str(len(raw))) limit = {'A1':18, 'A2':25, 'A3':33, 'A4':41, 'A5':49, 'A6':57, 'A7':65, 'A8':73, 'A9':150} raw['limit'] = raw[~raw.FQ1.isnull()].apply(lambda row: limit[row['FQ1']], axis = 1) # print("age issue", len(raw[~raw.FQ1.isnull()][['PD2', 'FQ1', 'limit']][raw.age > raw.limit]) + # len(raw[~raw.FQ1.isnull()][['PD2', 'FQ1', 'limit']][raw.age < raw.limit-7])) raw.rename(columns={"PD1": "sex"}, inplace = True) values = ['A1', 'A2', 'A3', 'A4'] new_values = ["male", "female", "non-binary", "prefer not to"] for n, value in enumerate(values): raw['sex'].replace(value, new_values[n], inplace = True) # ### 1.1 Disregarding subjects who always chose the same response when asked how to proceed along the route # + def check(index): response = list(raw[to_check].loc[index][raw[to_check].loc[index].notna()]) percentageA1 = response.count("A1")/len(response) percentageA2 = response.count("A2")/len(response) if ((percentageA1 > 0.65) | (percentageA2 > 0.65)) & (raw.loc[index].duration < 30): return True return False # checking videos video_columns = [col for col in raw if col.startswith('VD')] to_remove = ['VD000a','VD000','VD099[SQ001]', 'VD100a', 'VD100', 'VD199[SQ001]','VD200a','VD200', 'VD299[SQ001]'] to_check = [item for item in video_columns if item not in to_remove] raw['allSame'] = 
raw.apply(lambda row: check(row.name), axis = 1) # - # ## 2. Demographic information # ### 2.1 General information # + nr = 'Number' pr = 'participants' N = len(raw) print(nr+' of female '+pr +': '+str(len(raw[raw.sex == 'female']))+', '+str(round(len(raw[raw.sex == 'female'])/N*100,1))+'%') print(nr+' of male '+pr +': '+str( len(raw[raw.sex == 'male']))+', '+str(round(len(raw[raw.sex == 'male'])/N*100,1))+'%') print(nr+' of non-binary '+pr +': '+str(len(raw[raw.sex == 'non-binary']))+', '+str(round(len(raw[raw.sex == 'non-binary'])/N*100, 1))+'%') print(nr+' of '+pr+' who preferred not to declare their gender: '+str(len(raw[raw.sex == 'prefer not to']))+', '+ str(round(len(raw[raw.sex == 'prefer not to'])/N*100, 1))+'%') print() print("Particpants' mean age:", round(raw['age'].mean(),3)) print("Particpants' std age:", round(raw['age'].std(), 3)) print() print('Mean duration:', str(round(raw.duration.mean(), 2))+" minutes") # - # ### 2.2 Age categories # + Ga = len(raw[raw.age < 18]) Gb = len(raw[(raw.age >= 18) & (raw.age <= 25)]) Gc = len(raw[(raw.age >= 26) & (raw.age <= 33)]) Gd = len(raw[(raw.age >= 34) & (raw.age <= 41)]) Ge = len(raw[(raw.age >= 42) & (raw.age <= 49)]) Gf = len(raw[(raw.age >= 50) & (raw.age <= 57)]) Gg = len(raw[(raw.age >= 58) & (raw.age <= 65)]) Gh = len(raw[(raw.age >= 66) & (raw.age <= 73)]) Gi = len(raw[(raw.age >= 74)]) print("Number of participants per age group") print() print('< 18: '+str(Ga)+', '+str(round(Ga/N*100, 1))+ '%') print('18 - 25: '+str(Gb)+', '+str(round(Gb/N*100, 1))+ '%') print('26 - 33: '+str(Gc)+', '+str(round(Gc/N*100, 1))+ '%') print('34 - 41: '+str(Gd)+', '+str(round(Gd/N*100, 1))+ '%') print('42 - 49: '+str(Ge)+', '+str(round(Ge/N*100, 1))+ '%') print('50 - 57: '+str(Gf)+', '+str(round(Gf/N*100, 1))+ '%') print('58 - 65: '+str(Gg)+', '+str(round(Gg/N*100, 1))+ '%') print('66 - 73: '+str(Gh)+', '+str(round(Gh/N*100, 1))+ '%') print('> 74:'+str(Gi)+', '+str(round(Gi/N*100, 1))+ '%') # - # ### 2.3 Links or relationship to the case-study area # + Ga = len(raw[~raw['PD3[SQ001]'].isnull()]) Gb = len(raw[~raw['PD3[SQ002]'].isnull()]) Gc = len(raw[~raw['PD3[SQ003]'].isnull()]) Gd = len(raw[~raw['PD3[SQ004]'].isnull()]) Ge = len(raw[~raw['PD3[SQ005]'].isnull()]) nr = "Number of participants who " print(nr+'are tourists: '+str(Ga)+', '+str(round(Ga/N*100,1))+ '%') print(nr+'work here: '+str(Gb)+', '+str(round(Gb/N*100,1))+ '%') print(nr+'live her: '+str(Gc)+', '+str(round(Gc/N*100,1))+ '%') print(nr+'study here: '+str(Gd)+', '+str(round(Gd/N*100,1))+ '%') print(nr+'are occasional visitors: '+str(Ge)+', '+str(round(Ge/N*100,1))+ '%') # - # ### 2.4 Reasons for walking # + columns = ['WB2[SQ001]', 'WB2[SQ002]','WB2[SQ003]','WB2[SQ004]','WB2[SQ005]'] labels = ['Commuting to/from work', 'Commuting to/from school/college/university', 'For social activities', 'For exercise or free-time activities', 'For other daily errands and commitments'] for column in columns: raw[column].replace('A1', 1, inplace = True) raw[column].replace('A2', 2, inplace = True) raw[column].replace('A3', 3, inplace = True) raw[column].replace('A4', 4, inplace = True) raw[column].replace('A5', 5, inplace = True) total_prob = 0.0 for n, column in enumerate(columns): prob = round(raw[column].mean(),3) total_prob += prob print(labels[n]+':', str(round(raw[column].mean(),3))) # value on the Likert scale print("percentage: ", str(round(raw[column].mean()*100/5,3))+'%') if n < len(columns)-1: print() # - # #### *Reasons for Walking: Transformation into probabilities* for n, column in 
enumerate(columns): actual_prob = round(raw[column].mean()/total_prob,2) print(str(labels[n]) + ': '+str(actual_prob)) # ## 3. Familiriaty, spatial knowledge and preferences # ### 3.1 Time spent living in the case-study area (*familiarity*) raw[['PD4[SQ001]', 'PD4[SQ002]']] = raw[['PD4[SQ001]', 'PD4[SQ002]']].fillna(0.0) raw['PD4[SQ002]'].replace(1984.0, 2021-1984, inplace = True) raw['familiarity'] = raw['PD4[SQ001]']/12 + raw['PD4[SQ002]'] raw['familiarity'].plot.kde() # ### 3.2 Self-reported spatial knowledge columns = ['WB4[SQ001]', 'WB4[SQ002]','WB4[SQ003]','WB4[SQ004]'] for column in columns: raw[column].replace('A1', 1, inplace = True) raw[column].replace('A2', 2, inplace = True) raw[column].replace('A3', 3, inplace = True) raw[column].replace('A4', 4, inplace = True) raw[column].replace('A5', 5, inplace = True) raw['knowledge'] = (raw['WB4[SQ001]'] + raw['WB4[SQ002]'] + raw['WB4[SQ003]'] +raw['WB4[SQ004]'])/4 raw['knowledge'].plot.kde() columns = ['WB4[SQ001]', 'WB4[SQ002]','WB4[SQ003]','WB4[SQ004]', 'knowledge', 'familiarity'] raw[columns].corr(method='pearson') # ### 3.3 Preference for and aversion to barriers # + columns = ['SP4[SQ002]', 'SP4[SQ006]','SP4[SQ001]','SP4[SQ005]'] for column in columns: raw[column].replace('A1', 0.00, inplace = True) raw[column].replace('A2', 0.25, inplace = True) raw[column].replace('A3', 0.50, inplace = True) raw[column].replace('A4', 0.75, inplace = True) raw[column].replace('A5', 1.00, inplace = True) raw['preferenceNatural'] = (raw['SP4[SQ002]'] + raw['SP4[SQ006]'])/2 ## preference for Natural Barriers raw['aversionSevering'] = raw['SP4[SQ001]'] ## Aversion to Severing Barriers # - # ## 4. Video Tasks: route choice behaviour variables # #### *Preliminary columns cleaning* # + video_columns = [col for col in raw if col.startswith('VD')] to_remove = ['VD000a','VD000','VD099[SQ001]', 'VD100a', 'VD100', 'VD199[SQ001]','VD200a','VD200', 'VD299[SQ001]'] columns = ['id'] + video_columns columns = [item for item in columns if item not in to_remove] # remove not necessary video columns responses = raw[columns].copy() # only response to the video taks # + # cleaning and rename columns and prepare a legible dataframe for column in responses.columns: if column == 'id': continue responses.rename({column: column[2:]}, axis=1, inplace = True) for column in responses.columns: if column == 'id': continue if ('a' in column) | ('b' in column): continue new_column = '-'.join(column[i:i+3] for i in range(0, len(column), 3)) responses.rename({column: new_column}, axis=1, inplace = True) responses.rename({'150a151152': '150a-151-152'}, axis=1, inplace = True) responses.rename({'150b126': '150b-126'}, axis=1, inplace = True) # - # #### *Loading the routes used in the survey* # + survey_routes = gpd.read_file("Outputs/Routes_sections.shp").to_crs(crs) summary = pd.DataFrame(columns = ['id']+list(survey_routes.video.unique())) del summary[None] summary = summary.reindex(sorted(summary.columns), axis=1) for row in responses.itertuples(): sectors = responses.loc[row.Index].notna().dot(responses.columns+',').rstrip(',') sectors = sectors.replace('-', ',') sectors_list = sectors.strip('').split(',') for sector in sectors_list: summary.at[row.Index, sector] = 1 summary.at[row.Index, 'id'] = responses.loc[row.Index].id summary.drop('114', inplace = True, axis = 1) summary.fillna(0, inplace = True) for column in summary.columns: summary.rename(columns={column: str(column)}, inplace = True) # - summary.head() # ### 4.1 Obtaining the general statistics 
survey_routes['routeChoice'] = survey_routes.apply(lambda row: row['routeChoic'].replace(" ",""), axis = 1) survey_routes['routeChoice'] = survey_routes.apply(lambda row: row['routeChoice'].strip('][').split(','), axis = 1) survey_routes.drop('routeChoic', inplace = True, axis = 1) # #### *Computing for each subject how much they resorted to certain urban elements or road costs* # + route_variables = ['usingElements', 'noElements','onlyDistance', 'onlyAngular', 'distanceHeuristic', 'angularHeuristic', 'regions', 'routeMarks', 'barriers','distantLandmarks', 'preferenceNatural', 'aversionSevering'] other_variables = ["length", "minimisation_length","combined_length"] video0 = [col for col in summary if col.startswith('0')] video1 = [col for col in summary if col.startswith('1')] video2 = [col for col in summary if col.startswith('2')] videos = [video0, video1, video2] routes_stats = af.set_routes_stats(summary, survey_routes, videos) # - raw.index = raw['id'] routes_stats.index.name = None routes_stats['preferenceNatural'] = raw['preferenceNatural'] routes_stats['aversionSevering'] = raw['aversionSevering'] routes_stats['knowledge'] = raw['knowledge'] routes_stats.head(10) ## appending variables regarding the preference for natural/severing barriers and spatial knowledge raw.index = raw['id'] routes_stats.index.name = None routes_stats['preferenceNatural'] = raw['preferenceNatural'] routes_stats['aversionSevering'] = raw['aversionSevering'] routes_stats['knowledge'] = raw['knowledge'] routes_stats.head(10) # #### *Obtaining the input matrix for cluster analysis* # + input_matrix = routes_stats[route_variables].copy() input_matrix.replace(0.0, 0.01, inplace = True) cluster_variables = ['onlyDistance', 'onlyAngular', 'distanceHeuristic', 'angularHeuristic', 'regions', 'routeMarks', 'barriers','distantLandmarks'] for column in cluster_variables: if column in ['onlyDistance', 'onlyAngular']: input_matrix[column] = input_matrix[column] * input_matrix['noElements'] else: input_matrix[column] = input_matrix[column] * input_matrix['usingElements'] input_matrix = input_matrix.astype(float) # - # ### 4.2 Visualising overall importance of each urban element (probabilities) # + sns.set() sns.set_color_codes() fig, ax = plt.subplots(nrows = 1, ncols = 1, figsize=(25, 12)) ## obtaining dataframe for boxplot visualisation tab = pd.DataFrame(columns = {'variable', 'value'}) labels = ['using\nUE', 'only\nGMH', 'distance\nGMH', 'angular\nGMH', 'distance\nLMH', 'angular\nLMH', 'regions', 'on-route\nmarks', 'barriers', 'distant\nlandmarks', 'natural\nbarriers', 'severing\nbarriers'] tab = pd.DataFrame(columns = {'variable', 'value'}) index = 0 for subject in input_matrix.index: for n, variable in enumerate(route_variables): tab.at[index, 'variable'] = labels[n] tab.at[index, 'value'] = input_matrix.loc[subject][variable] index += 1 palette = ['coral']*2+['gold']*8+['mediumseagreen']*2 ax = sns.boxplot(x="variable", y="value", data=tab, palette = palette) ax.set_ylim(0.0, 1.01) ax.set_ylabel('', fontsize = 27, labelpad = 30, fontfamily = 'Times New Roman') ax.set_xlabel('', labelpad = 30, fontfamily = 'Times New Roman') for tick in ax.get_yticklabels(): tick.set_fontname('Times New Roman') for tick in ax.get_xticklabels(): tick.set_fontname('Times New Roman') ax.tick_params(axis='both', labelsize= 27, pad = 20) # - fig.savefig("Outputs/Figures/empiricalABM/f4.pdf", bbox_inches='tight') input_matrix[cluster_variables].mean() input_matrix[cluster_variables].corr(method='pearson') # ## 5. 
Cluster Analysis # ### 5.1 Variables transformation X = input_matrix[cluster_variables].copy() fig = plt.figure(figsize = (20, 10)) for n, column in enumerate(list(X.columns)): ax = fig.add_subplot(2,4,n+1) ax = X[column].plot.kde() ax.set_title(column) # #### *Logarithmic transformation and standardisation* # + X_log = af.log_transf(input_matrix[cluster_variables]) X_log_stand = X_log.copy() for column in X_log.columns: X_log_stand[column] = af.standardise_column(X_log, column) # - # ### 5.2 Obtaining and comparing different clustered structures obtained with the k-means algorithm # + list_scores = [] def pipe_kmeans(X_toFit): X_tmp = X_toFit.copy() for n_clusters in range(2, 10): clusterer = KMeans(n_clusters = n_clusters, n_init = 2000).fit(X_toFit) labels = clusterer.labels_ score = round(silhouette_score(X_tmp, labels, metric= 'sqeuclidean'), 3) param_score = {'algorithm': 'k-means', 'n_clusters' : n_clusters, 'score' : score, 'clusterer' : clusterer} list_scores.append(param_score) pipe_kmeans(X_log_stand) clustering = pd.DataFrame(list_scores) clustering.sort_values(by = 'score', ascending = False) # + def pipe_kmeans_VCR(X): VRCs = [] X_tmp = X.copy() X_toFit = X.copy() for n_clusters in range(2, 11): clusterer = KMeans(n_clusters = n_clusters, n_init = 2000).fit(X_toFit) X_tmp['cluster'] = clusterer.labels_ f = 0.0 for var in cluster_variables: arrays = [] for cluster in list(X_tmp['cluster'].unique()): arrays.append(X_tmp[var+'_log'][X_tmp.cluster == cluster]) if n_clusters == 2: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1])).statistic if n_clusters == 3: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2])).statistic if n_clusters == 4: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3])).statistic if n_clusters == 5: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4])).statistic if n_clusters == 6: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4]),np.array(arrays[5])).statistic if n_clusters == 7: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4]),np.array(arrays[5]),np.array(arrays[6])).statistic if n_clusters == 8: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4]), np.array(arrays[5]), np.array(arrays[6]), np.array(arrays[7])).statistic if n_clusters == 9: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4]),np.array(arrays[5]), np.array(arrays[6]), np.array(arrays[7]), np.array(arrays[8]) ).statistic if n_clusters == 10: statistic = f_oneway(np.array(arrays[0]), np.array(arrays[1]), np.array(arrays[2]), np.array(arrays[3]), np.array(arrays[4]), np.array(arrays[5]), np.array(arrays[6]), np.array(arrays[7]), np.array(arrays[8]), np.array(arrays[9])).statistic f += statistic VRCs.append(f) return VRCs VRCs = pipe_kmeans_VCR(X_log_stand) # - # ### 5.3 Choosing the best structure for n, VRC in enumerate(VRCs): if (n == 0) | (n == len(VRCs)-1): continue index = (VRCs[n+1]-VRC)-(VRC-VRCs[n-1]) print("structure with", n+2, "clusters: Omega is", round(index, 3), "VRC is", round(VRC,3)) chosen = clustering.loc[4] labels_cluster = np.array(chosen.clusterer.labels_.copy()) labels_cluster += 1 input_with_cluster = 
input_matrix.copy() input_with_cluster['cluster'] = labels_cluster # ### 5.4 Examining the chosen partition input_with_cluster['knowledge'] = raw['knowledge'] cluster_stats = input_with_cluster.groupby("cluster").mean() cluster_stats # + sns.set() sns.set_color_codes() colors = ['royalblue', 'gold', 'orchid', 'tomato', 'lightgreen', 'sandybrown', 'sienna', 'darkgreen', 'c', 'black'] clusters_stats = input_with_cluster.groupby(by = 'cluster').mean() figsize = (22.5, (15/2*3)) fig = plt.figure(figsize = figsize) plot_variables = ['onlyDistance', 'regions', 'routeMarks','angularHeuristic', 'onlyAngular', 'barriers', 'distantLandmarks', 'distanceHeuristic'] cluster_v = np.array(plot_variables) labels = ['distance GMH', 'regions','on-route marks','angular LMH','angular GMH','barriers','distant landmarks', 'distance LMH'] plt.rcParams['font.family'] = 'Times New Roman' to_plot = list(clusters_stats.index) for cluster in to_plot: if cluster != 'population': tmp = clusters_stats.loc[cluster, cluster_v].values angles = np.linspace(0, 2*np.pi, len(labels), endpoint=False) # close the plot tmp = np.concatenate((tmp,[tmp[0]])) angles = np.concatenate((angles,[angles[0]])) if cluster != 'population': ax = fig.add_subplot(3, 3, cluster, polar = True) color = colors[cluster-1] ax.plot(angles, tmp, '-', color = color, linewidth=2) ax.fill(angles, tmp, color = color, alpha=0.25) ax.set_thetagrids((angles * 180/np.pi)[0:len(plot_variables)], labels, fontsize = 15) ax.yaxis.set_ticks([0.10, 0.20, 0.30, 0.40, 0.50]) ax.tick_params(axis='y', labelsize= 15) ax.tick_params(axis='x', labelsize= 15, pad = 5) ax.set_rlabel_position(150) if cluster != 'population': nr = len(input_with_cluster[input_with_cluster.cluster == cluster]) title = "Cluster "+str(cluster) + " (N = "+str(nr)+")" ax.set_title(title, va = 'bottom', fontsize = 20, pad = 50, fontfamily = 'Times New Roman') fig.subplots_adjust(wspace=0.20, hspace = 0.60) # - fig.savefig("Outputs/Figures/empiricalABM/f5.pdf", bbox_inches='tight') # #### *Spatial skills - Do they explain the variation in the route choice behaviour variables?* clusters = list(set(labels_cluster)) for cluster in clusters: knowledge = input_with_cluster[input_with_cluster.cluster == cluster]['knowledge'] print("cluster", cluster, "spatial skills/knowledge mean:", round(knowledge.mean(), 3), "std:", round(knowledge.std(), 3)) # + tab = pd.DataFrame(columns = {'cluster', 'variable', 'value'}) labels = ['distance min.', 'angular min.', 'distance LMH', 'angular LMH', 'regions', 'on-route marks', 'barriers', 'distant landmarks', 'knowledge'] tab = pd.DataFrame(columns = {'cluster', 'variable', 'value'}) index = 0 for subject in input_with_cluster.index: for n, variable in enumerate(cluster_variables+['knowledge']): tab.at[index, 'variable'] = labels[n] tab.at[index, 'cluster'] = "Cluster "+str(int(input_with_cluster.loc[subject]['cluster'])) tab.at[index, 'value'] = input_with_cluster.loc[subject][variable] index += 1 tab['value'] = tab.value.astype(float) tab.sort_values('cluster', inplace = True) # - ## using the t-test to verify whether differences in the spatial skills/knowledge are significant pin.pairwise_ttests(data = tab[tab.variable == 'knowledge'], dv= 'value', between = 'cluster') # #### *Demographic characteristics* raw['cluster'] = labels_cluster print('Age') cluster_deom_stats = raw.groupby("cluster")['age'].mean() print(cluster_deom_stats) print('Gender') cluster_deom_stats = raw.groupby(["cluster", 'sex'])['sex'].count() for cluster in range(1, max(labels_cluster)+1): 
print("cluster", cluster) for gender in list(cluster_deom_stats.loc[cluster].index): print(gender, round(cluster_deom_stats.loc[cluster][gender]/cluster_deom_stats.loc[cluster].sum() *100, 1)) print() # ## 6. Final input data for building agent typologies in the ABM # + columns = [] for column in input_with_cluster.columns: if column in ['knowledge', 'cluster']: continue columns.append(column+"_mean") columns.append(column+"_std") groups = ['group'+str(cluster) for cluster in input_with_cluster['cluster'].unique()] indexes = groups + ['population', 'nullGroup'] clusters_gdf = pd.DataFrame(index = indexes, columns = columns) variables = route_variables clusters = input_with_cluster['cluster'].unique() for index in indexes: for variable in variables: if index not in ['population', 'nullGroup']: cluster = int(index[5:]) clusters_gdf.at[index, variable+'_mean'] = input_with_cluster[input_with_cluster.cluster == cluster][variable].mean() clusters_gdf.at[index, variable+'_std'] = input_with_cluster[input_with_cluster.cluster == cluster][variable].std() else: clusters_gdf.at[index, variable+'_mean'] = input_with_cluster[variable].mean() clusters_gdf.at[index, variable+'_std'] = input_with_cluster[variable].std() if index not in ['population', 'nullGroup']: cluster = int(index[5:]) clusters_gdf.at[index, 'portion'] = len(input_with_cluster[input_with_cluster.cluster == cluster])/len(input_with_cluster) else: clusters_gdf.at[index, 'portion'] = 1.00 # - clusters_gdf ## exporting the files clusters_gdf.to_csv('Outputs/empiricalABM/'+city_name+'_clusters.csv') ## this is imported into the ABM routes_stats.to_csv('Outputs/empiricalABM/'+city_name+'_routes_stats.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:stats_env] # language: python # name: conda-env-stats_env-py # --- # # Day 7 # ! cat README.md with open('input.txt', 'rt') as f: l = f.read().splitlines() # + def parse_item(item): key, value = item.split(' contain ') container = ' '.join(key.split(' ')[:2]) if value.endswith('no other bags.'): content = {} else: content = dict(map(lambda x: (' '.join(x.split(' ')[1:3]), int(x.split(' ')[0])), value.rstrip('.').split(', '))) return (container, content) input_dict = dict(list(map(parse_item, l))) # - container_dict = {} for k, v in input_dict.items(): for w in v: container_dict[w] = container_dict.get(w, []) + [k] possible_containers = [] queue = ['shiny gold'] while len(queue) > 0: k = queue.pop(0) if k not in container_dict: continue for c in container_dict[k]: if c not in possible_containers: possible_containers.append(c) queue.append(c) elif c in possible_containers: continue len(possible_containers) # --- Part Two --- # It's getting pretty expensive to fly these days - not because of ticket prices, but because of the ridiculous number of bags you need to buy! # # Consider again your shiny gold bag and the rules from the above example: # # faded blue bags contain 0 other bags. # dotted black bags contain 0 other bags. # vibrant plum bags contain 11 other bags: 5 faded blue bags and 6 dotted black bags. # dark olive bags contain 7 other bags: 3 faded blue bags and 4 dotted black bags. # So, a single shiny gold bag must contain 1 dark olive bag (and the 7 bags within it) plus 2 vibrant plum bags (and the 11 bags within each of those): 1 + 1*7 + 2 + 2*11 = 32 bags! 
# # Of course, the actual rules have a small chance of going several levels deeper than this example; be sure to count all of the bags, even if the nesting becomes topologically impractical! # # Here's another example: # # shiny gold bags contain 2 dark red bags. # dark red bags contain 2 dark orange bags. # dark orange bags contain 2 dark yellow bags. # dark yellow bags contain 2 dark green bags. # dark green bags contain 2 dark blue bags. # dark blue bags contain 2 dark violet bags. # dark violet bags contain no other bags. # In this example, a single shiny gold bag must contain 126 other bags. # # How many individual bags are required inside your single shiny gold bag? # + # test test_dict = {'shiny gold': {'dark red': 2}, 'dark red': {'dark orange': 2}, 'dark orange': {'dark yellow': 2}, 'dark yellow': {'dark green': 2}, 'dark green': {'dark blue': 2}, 'dark blue': {'dark violet': 2}, 'dark violet': {}} # - bag_count = 0 queue = [('shiny gold', 1)] while len(queue) > 0: color, factor = queue.pop(0) bag_count += factor for k, v in input_dict[color].items(): queue.append((k, factor * v)) bag_count - 1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from aberrations import make_atm_data, make_1D_vibe_data, make_vibe_params from observer import make_state_transition_vibe, make_kfilter_vibe, KFilter, make_impulse_from_tt, make_kfilter_turb from matplotlib import pyplot as plt import numpy as np from scipy import signal, linalg from copy import deepcopy # %matplotlib inline f_sampling = 1000 N_vibe = 0 use_turb = True steps = 5000 noise = 0.06 delay = 1 rms = lambda data: np.sqrt(np.mean(data ** 2)) # - if use_turb: turb = make_atm_data(steps)[:,0] true_vibes = np.zeros((N_vibe, steps)) vibe_errs = np.zeros((N_vibe, steps)) vibe_params = np.zeros((N_vibe, 4)) # let's add in process noise so that there's something for Q to fit to! process_vars = (10**-np.random.uniform(2, 3, (N_vibe,)))**2 process_vars # + for i in range(N_vibe): pars = make_vibe_params(N=1) true_vibes[i] = make_1D_vibe_data(steps, vib_params=pars, N=1) vibe_errs[i] = np.random.normal(0, process_vars[i], (steps,)) vibe_params[i] = pars vibes = true_vibes + vibe_errs # + if use_turb: truths = np.sum(vibes, axis=0) + turb else: truths = np.sum(vibes, axis=0) measurements = truths + np.random.normal(0, noise, (truths.size,)) # - plt.semilogy(*signal.periodogram(truths, fs=f_sampling)) plt.ylim(1e-7) kfilter_v = make_kfilter_vibe(vibe_params[:,1:3], process_vars) kfilter_t = make_kfilter_turb(make_impulse_from_tt(truths[:500] + np.random.normal(0, 0.06, (500,)))) kfilter_t.iters = max(kfilter_t.iters, 500) kfilter = kfilter_v + kfilter_t # + # something seems very wrong with my control code, so I'm just ditching it. 
actions = np.zeros(steps - kfilter.iters,) for i in range(kfilter.iters, steps - 1): kfilter.update(measurements[i]) actions[i - kfilter.iters + delay] = kfilter.H.dot((kfilter.A**(delay)).dot(kfilter.state)) kfilter.predict() # - print(rms(measurements)) print(rms(actions - truths[kfilter.iters:])) ft, Pt = signal.periodogram(truths, fs=f_sampling) fres, Pres = signal.periodogram(truths[kfilter.iters:] - actions, fs=f_sampling) plt.loglog(ft, Pt) plt.loglog(fres, Pres) plt.ylim(1e-7) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # EDA and Data Visualizations # _by _ # ### Contents: # - [Exploring Data on CDC Website Related to Asthma](#Exploring-Data-on-CDC-Website-Related-to-Asthma) # - [Find the US Cities with Highest Number of Adults with Asthma in 2017](#Find-the-US-Cities-with-Highest-Number-of-Adults-with-Asthma-in-2017) # - [EDA of AQI and Temperature Data](#EDA-of-AQI-and-Temperature-Data) # - [Heatmap](#Heatmap) # - [Pairplot](#Pairplot) # - [Visualizations Exploring Temperature and Pollutants](#Visualizations-Exploring-Temperature-and-Pollutants) # - [Visualization Exploring Cumulative AQI Regarding Time](#-Visualization-Exploring-Cumulative-AQI-Regarding-Time) # import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # ## Exploring Data on CDC Website Related to Asthma # [(back to top)](#EDA-and-Data-Visualizations) df_asthma = pd.read_csv('../data/500_Cities__Current_asthma_among_adults_aged___18_years.csv') df_asthma.head(2).T # ### Find the US Cities with Highest Number of Adults with Asthma in 2017 # [(back to top)](#EDA-and-Data-Visualizations) top_10 = df_asthma['Data_Value'].sort_values(ascending = False)[:10] top_10 mask = df_asthma['Data_Value'] > 17 # finding all cities with asthma rates higher than 17 high_asthma_rate_df = df_asthma[mask].sort_values(by = 'Data_Value', ascending = True) high_asthma_rate_df = high_asthma_rate_df[['StateAbbr','CityName','Data_Value']] plt.figure(figsize = (8,5)) plt.title('Top 5 US City Asthma Rate in 2017',fontsize = 17) sns.barplot(data = high_asthma_rate_df, x = 'CityName', y = 'Data_Value'); plt.xlabel('City Name', fontsize = 13) plt.ylabel("Percent of Adult's with Asthma", fontsize = 13); # Cleveland with the highest US asthma rate for adults older than 18 years old based on CDC data from 2017. 17.5% of all adults in Cleveland have Asthma. Seeing that Cleveland had a high asthma rate narrowed allowed us to narrow in on one city to look at air quality # ## EDA of AQI and Temperature Data # [(back to top)](#EDA-and-Data-Visualizations) df = pd.read_csv('../data/cleaned_aqi_and_temp_data_2017-2019.csv') df.head() df.info() df["weekday"] = df["weekday"].astype("category") df["month"] = df["month"].astype("category") df['date'] = pd.to_datetime(df['date']) df.drop('pct_change_aqi', axis = 1, inplace = True) df.info() df.set_index('date', inplace = True) df.describe().T # Mean of cumulative AQI is 43.040256 showing that on average the levels are not in dangerous zones but are near the moderate zone. The max is 115 which is considered unhealthy for sensitive groups and 50% of the daily cumulative AQI data is between 32 and 52 showing that 50% of the cumulative daily readings are nearing the moderate zone. 
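# To back up the description above with shares of days rather than quartiles alone, we can bucket the daily cumulative AQI into the standard EPA categories (0-50 Good, 51-100 Moderate, 101-150 Unhealthy for Sensitive Groups). This is an illustrative sketch, not part of the original analysis; it assumes the `df` and `cumulative_aqi` column loaded above.

aqi_bins = [0, 50, 100, 150]
aqi_labels = ['Good', 'Moderate', 'Unhealthy for Sensitive Groups']

# label each day with its EPA category
aqi_category = pd.cut(df['cumulative_aqi'], bins=aqi_bins, labels=aqi_labels, include_lowest=True)

# share of days falling in each category
aqi_category.value_counts(normalize=True).round(3)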
df.isna().sum()

# ### Heatmap
# [(back to top)](#EDA-and-Data-Visualizations)

plt.figure(figsize=(15,15))
mask = np.zeros_like(df.corr())
mask[np.triu_indices_from(mask)] = True
sns.heatmap(df.corr(), annot = True, mask = mask, square = True);

# ##### Correlation
#
# - pm 2.5 mean and pm 2.5 AQI are highly correlated with cumulative AQI
#
# - there is multicollinearity between AQI readings and pollutant measurement values, which is to be expected since the AQI values are scored based on the pollutant measurements
#
# - there is also multicollinearity between the temperature values, which is to be expected as well, since days with a lower average temperature will also tend to have lower daily high and low temperatures

# ### Pairplot
# [(back to top)](#EDA-and-Data-Visualizations)

sns.pairplot(data = df, corner = True)

# ### Visualizations Exploring Temperature and Pollutants
# [(back to top)](#EDA-and-Data-Visualizations)

# +
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(13,7))

sns.scatterplot(x = 'pm2.5_aqi_val', y = 'cumulative_aqi' , hue = 'temp_high', data = df, ax=ax[0]).set(
    title="PM 2.5 Daily AQI Value vs Cumulative Daily AQI \n with Relation to Daily High Temperature",
    xlabel='PM 2.5 AQI Value',
    ylabel='Cumulative Daily AQI')

sns.scatterplot(x = 'pm2.5_mean', y = 'cumulative_aqi' , hue = 'temp_high', data = df, ax=ax[1]).set(
    title="PM 2.5 Mean Daily Value vs Cumulative Daily AQI \n with Relation to Daily High Temperature",
    xlabel='PM 2.5 Mean Value',
    ylabel='Cumulative Daily AQI',
    label = 'High Temp');
# -

# These plots show that at times there is a direct correlation between cumulative daily AQI and the PM 2.5 daily AQI value, meaning that on those days PM 2.5 is usually the highest pollutant AQI value. This suggests that PM 2.5 strongly influences the total AQI value. We can also see a slight association between very high temperatures and increased AQI.

# +
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(13,7))

sns.scatterplot(x = 'ozone_aqi_val', y = 'cumulative_aqi' , hue = 'temp_high', data = df, ax=ax[0]).set(
    title="Ozone Daily AQI Value vs Cumulative Daily AQI \n with Relation to Daily High Temperature",
    xlabel='Ozone AQI Value',
    ylabel='Cumulative Daily AQI')

sns.scatterplot(x = 'ozone_max', y = 'cumulative_aqi' , hue = 'temp_high', data = df, ax=ax[1]).set(
    title="Ozone Max Daily Value vs Cumulative Daily AQI \n with Relation to Daily High Temperature",
    xlabel='Ozone Max Value',
    ylabel='Cumulative Daily AQI',
    label = 'High Temp');
# -

# We can also see a relationship between cumulative daily AQI and ozone AQI readings. As ozone readings reach very high values, they are more likely to be the highest daily AQI reading compared to other pollutants. There is also a relationship between increasing temperature and higher cumulative and ozone AQI values.
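# The claim above that PM 2.5 (and ozone) often supply the highest daily AQI value can be counted directly: if, as under the standard EPA convention, the daily AQI is the maximum of the pollutant sub-indices, the dominant pollutant is simply the column with the largest value on each day. This is an illustrative sketch, not part of the original notebook; it assumes the pollutant AQI columns used elsewhere in this analysis.

pollutant_cols = ['co_aqi_val', 'no2_aqi_val', 'ozone_aqi_val',
                  'pm10_aqi_val', 'pm2.5_aqi_val', 'so2_aqi_val']

# which pollutant has the highest AQI sub-index on each day
dominant_pollutant = df[pollutant_cols].idxmax(axis=1)

# share of days on which each pollutant dominates the daily AQI
dominant_pollutant.value_counts(normalize=True).round(3)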
# ### Visualization Exploring Cumulative AQI Regarding Time
# [(back to top)](#EDA-and-Data-Visualizations)

sns.histplot(x = 'cumulative_aqi', data = df).set(
    title="Distribution of Daily Cumulative AQI Readings",
    xlabel='Cumulative Daily AQI Readings');

# We can see that most of the daily AQI values are around 40 and the distribution is skewed to the right, indicating that readings at dangerous levels are rare, although some daily levels still reach past 100.

df['cumulative_aqi'].resample('m').mean().plot(figsize=(12, 4), title='2017-2019 Average Monthly Cumulative AQI in Cleveland, OH');
plt.xlabel('Date', fontsize = 10);

# From the above chart, we can see that there was a low average monthly AQI level in Nov 2017. There was another low average monthly AQI in Oct 2018 and Oct 2019. However, there are spikes in average monthly AQI in May 2018, Dec 2018, and July 2019.

# +
high_mask = df['temp_avg'] > 75
high_temp_df = df[high_mask]
low_mask = df['temp_avg'] < 75
low_temp_df = df[low_mask]
# -

plt.figure(figsize=(12,8))
plt.title('2017-2019 Monthly Average Cumulative AQI in Cleveland in Relation to Average Daily Temperature')
high_temp_df['cumulative_aqi'].rolling(window=30).mean().plot(label = "Temp in F Greater than 75");
low_temp_df['cumulative_aqi'].rolling(window=30).mean().plot(label = "Temp in F Lower than 75");
plt.legend(loc='best');
plt.xlabel('Date', fontsize = 13)
plt.ylabel('Monthly Average AQI Level', fontsize = 13);

# From this chart, we can see that higher temperatures are usually associated with higher average AQI readings. However, between January 2019 and May 2019 lower average temperatures were associated with higher AQI readings.

df_2017 = df.loc['2017']
df_2018 = df.loc['2018']
df_2019 = df.loc['2019']

df_2017[['co_aqi_val','no2_aqi_val', 'ozone_aqi_val', 'pm10_aqi_val', 'pm2.5_aqi_val', 'so2_aqi_val' ]].resample('m').mean().plot(figsize=(12, 4), title = '2017 Monthly Mean AQI Level Per Pollutant', xlabel = 'Month');

# The above chart shows that in 2017 ozone AQI levels were consistently the highest over the year compared to other pollutants.

df_2018[['co_aqi_val','no2_aqi_val', 'ozone_aqi_val', 'pm10_aqi_val', 'pm2.5_aqi_val', 'so2_aqi_val' ]].resample('m').mean().plot(figsize=(12, 4), title = '2018 Monthly Mean AQI Level Per Pollutant', xlabel = 'Month');

# The above chart shows that ozone and pm 2.5 had the highest average monthly AQI readings over 2018.

df_2019[['co_aqi_val','no2_aqi_val', 'ozone_aqi_val', 'pm10_aqi_val', 'pm2.5_aqi_val', 'so2_aqi_val' ]].resample('m').mean().plot(figsize=(12, 4), title = '2019 Monthly Mean AQI Level Per Pollutant', xlabel = 'Month');

# The above chart shows that in 2019 pm 2.5 even more frequently reached the highest monthly average AQI levels, with ozone also reaching high average monthly AQI levels compared to other pollutants.

df[['co_aqi_val','no2_aqi_val', 'ozone_aqi_val', 'pm10_aqi_val', 'pm2.5_aqi_val', 'so2_aqi_val']].resample('q').mean().plot(figsize=(13, 6), title = 'Quarterly Mean Cumulative AQI Level Per Pollutant', xlabel = 'Quarter of a Year', ylabel = 'Mean Cumulative AQI');

# pm 2.5 AQI and ozone AQI consistently show a strong relationship to high overall AQI readings.
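# To put a number on the temperature association described above, we can compare the mean cumulative AQI on warm and cool days directly, reusing the 75 F threshold on `temp_avg` chosen earlier. This is an illustrative sketch, not part of the original analysis.

warm_days = df['temp_avg'] > 75

# mean cumulative AQI on warm vs. cool days
print("Mean cumulative AQI, avg temp > 75F: ",
      round(df.loc[warm_days, 'cumulative_aqi'].mean(), 2))
print("Mean cumulative AQI, avg temp <= 75F:",
      round(df.loc[~warm_days, 'cumulative_aqi'].mean(), 2))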
df['pm2.5_aqi_val'].resample('m').mean().plot(figsize=(12, 4), title = '2017-2019 Monthly Mean AQI Level for PM 2.5 Pollutant', xlabel = 'Month'); # From the above graph we can see that as the years progress, PM 2.5 AQI levels are increasing # --- # jupyter: # jupytext: # text_representation: # extension: .ps1 # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: .NET (PowerShell) # language: PowerShell # name: .net-powershell # --- # # T1106 - Native API # Adversaries may directly interact with the native OS application programming interface (API) to execute behaviors. Native APIs provide a controlled means of calling low-level OS services within the kernel, such as those involving hardware/devices, memory, and processes.(Citation: NT API Windows)(Citation: Linux Kernel API) These native APIs are leveraged by the OS during system boot (when other system components are not yet initialized) as well as carrying out tasks and requests during routine operations. # # Functionality provided by native APIs are often also exposed to user-mode applications via interfaces and libraries. For example, functions such as the Windows API CreateProcess() or GNU fork() will allow programs and scripts to start other processes.(Citation: Microsoft CreateProcess)(Citation: GNU Fork) This may allow API callers to execute a binary, run a CLI command, load modules, etc. as thousands of similar API functions exist for various system operations.(Citation: Microsoft Win32)(Citation: LIBC)(Citation: GLIBC) # # Higher level software frameworks, such as Microsoft .NET and macOS Cocoa, are also available to interact with native APIs. These frameworks typically provide language wrappers/abstractions to API functionalities and are designed for ease-of-use/portability of code.(Citation: Microsoft NET)(Citation: Apple Core Services)(Citation: MACOS Cocoa)(Citation: macOS Foundation) # # Adversaries may abuse these native API functions as a means of executing behaviors. Similar to [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059), the native API and its hierarchy of interfaces, provide mechanisms to interact with and utilize various components of a victimized system. # ## Atomic Tests #Import the Module before running the tests. # Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts. Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 - Force # ### Atomic Test #1 - Execution through API - CreateProcess # Execute program by leveraging Win32 API's. By default, this will launch calc.exe from the command prompt. # **Supported Platforms:** windows # #### Attack Commands: Run with `command_prompt` # ```command_prompt # C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /out:"%tmp%\T1106.exe" /target:exe PathToAtomicsFolder\T1106\src\CreateProcess.cs # # %tmp/T1106.exe # ``` Invoke-AtomicTest T1106 -TestNumbers 1 # ## Detection # Monitoring API calls may generate a significant amount of data and may not be useful for defense unless collected under specific circumstances, since benign use of API functions are common and difficult to distinguish from malicious behavior. Correlation of other events with behavior surrounding API function calls using API monitoring will provide additional context to an event that may assist in determining if it is due to malicious behavior. Correlation of activity by process lineage by process ID may be sufficient. 
# # Utilization of the Windows API may involve processes loading/accessing system DLLs associated with providing called functions (ex: kernel32.dll, advapi32.dll, user32.dll, and gdi32.dll). Monitoring for DLL loads, especially to abnormal/unusual or potentially malicious processes, may indicate abuse of the Windows API. Though noisy, this data can be combined with other indicators to identify adversary activity. # ## Shield Active Defense # ### Software Manipulation # Make changes to a system's software properties and functions to achieve a desired effect. # # Software Manipulation allows a defender to alter or replace elements of the operating system, file system, or any other software installed and executed on a system. # #### Opportunity # There is an opportunity for the defender to observe the adversary and control what they can see, what effects they can have, and/or what data they can access. # #### Use Case # A defender can modify system calls to break communications, route things to decoy systems, prevent full execution, etc. # #### Procedures # Hook the Win32 Sleep() function so that it always performs a Sleep(1) instead of the intended duration. This can increase the speed at which dynamic analysis can be performed when a normal malicious file sleeps for long periods before attempting additional capabilities. # Hook the Win32 NetUserChangePassword() and modify it such that the new password is different from the one provided. The data passed into the function is encrypted along with the modified new password, then logged so a defender can get alerted about the change as well as decrypt the new password for use. # Alter the output of an adversary's profiling commands to make newly-built systems look like the operating system was installed months earlier. # Alter the output of adversary recon commands to not show important assets, such as a file server containing sensitive data. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch import torch.nn as nn import torch.optim as optim import torch.utils.data as data from ucsd_dataset import UCSDAnomalyDataset from video_CAE import VideoAutoencoderLSTM import torch.backends.cudnn as cudnn import numpy as np #matplotlib notebook import matplotlib.pyplot as plt # Training hyperparameters are the same as in https://github.com/hashemsellat/Video-Anomaly-Detection/blob/master/lstmautoencoder.ipynb (except for few more epochs). 
# + model = VideoAutoencoderLSTM() criterion = nn.MSELoss() use_cuda = torch.cuda.is_available() if use_cuda: cudnn.benchmark = True model.set_cuda() criterion.cuda() train_ds = UCSDAnomalyDataset('./data/UCSD_Anomaly_Dataset.v1p2/UCSDped1/Train', time_stride=3) train_dl = data.DataLoader(train_ds, batch_size=32, shuffle=True) optimizer = optim.Adam(model.parameters(), lr=1e-4, eps=1e-6, weight_decay=1e-5) model.train() for epoch in range(5): for batch_idx, x in enumerate(train_dl): optimizer.zero_grad() if use_cuda: x = x.cuda() y = model(x) loss = criterion(y, x) loss.backward() optimizer.step() if batch_idx % 10 == 0: print('Epoch {}, iter {}: Loss = {}'.format( epoch, batch_idx, loss.item())) torch.save({ 'epoch': epoch, 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict()}, './snapshot/checkpoint.epoch{}.pth.tar'.format(epoch)) # - # Inference on test samples (Test001 and Test032) model = VideoAutoencoderLSTM() model.load_state_dict(torch.load('./snapshot/checkpoint.epoch4.pth.tar')['state_dict']) model.set_cuda() model.eval() test_ds = UCSDAnomalyDataset('./data/UCSD_Anomaly_Dataset.v1p2/UCSDped1/inference') test_dl = data.DataLoader(test_ds, batch_size=32, shuffle=False) frames = [] errors = [] for batch_idx, x in enumerate(test_dl): y = model(x.cuda()) mse = torch.norm(x.cpu().data.view(x.size(0),-1) - y.cpu().data.view(y.size(0),-1), dim=1) errors.append(mse) errors = torch.cat(errors).numpy() errors = errors.reshape(-1, 191) s = np.zeros((2,191)) s[0,:] = 1 - (errors[0,:] - np.min(errors[0,:]))/(np.max(errors[0,:]) - np.min(errors[0,:])) s[1,:] = 1 - (errors[1,:] - np.min(errors[1,:]))/(np.max(errors[1,:]) - np.min(errors[1,:])) # Test001 plt.plot(s[0,:]) plt.show() # Test032 plt.plot(s[1,:]) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Database Design Concepts # --------------------------------------------------------------------------------------------------------- # # In our previous session on databases, we introduced some of the fundamental concepts and definitions applicable to databases in general, along with a brief intro to SQL and SQLite in particular. Some use cases and platforms were also discussed. # # In this session, we are going to dig a little deeper into databases as representions of systems and processes. A database with a single table may not feel or function much differently from a spreadsheet. Much of the benefit of using databases results from designing them as models of complex systems in ways that spreadsheets just can't do: # # * Inventory control and billing # * Human resources # * Blogging platforms # * Ecosystems # # There will be some more advanced SQL statements this time, though we will still be using SQLite. Concepts which will be discussed and implemented in our code include # # * Entities and attributes # * Keys # * Relationships # * Normalization # # For this session we are also going to play as we go. We will use data from the Portal Project Teaching Database: # # > ; ; ; . (2018): Portal Project Teaching Database. figshare. Dataset. https://doi.org/10.6084/m9.figshare.1314459.v10 # # Go to the item record in figshare and click on the button to _Download all_. Download and unzip the data to your preferred location on your computer. 
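# Since this is a Python notebook, one optional way to confirm the download worked is to peek at the files from code before we start modelling. This sketch is not part of the workshop exercises, and the file paths below are assumptions: they expect the figshare archive to have been unzipped next to this notebook, so adjust them to wherever you saved the data.

import sqlite3
import pandas as pd

# the flat, spreadsheet-like export of the survey data
combined = pd.read_csv('combined.csv')
print(combined.shape)
print(combined.columns.tolist())

# the tables already defined in the relational version of the same data
con = sqlite3.connect('portal_mammals.sqlite')
tables = con.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall()
print(tables)
con.close()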
# # The Entity Relationship Data Model # ------------------------------------------------------------------------------------------------ # # The entity relationship (ER) model is commonly used to define and develop databases. In the simplest terms, the model defines the things (entities) that are important or interesting within a system or process and the relationships between them. # # # ## Design Round 1: A flat, spreadsheet-like table # # For this part of the workshop, we will use Jamboards to collaboratively identify the entities represented within the data. # # > #### Exercise # > # > 1. Go to the Jamboard shared in the workshop chat channel. # > 1. Go to the folder with the data just downloaded from figshare. Open the file *combined.csv*. # > 1. We have also shared a file with some example field notes (*field_notes.xslx*). Try adding the survey information > from the field notes to *combined.csv*. What problems or issues do you run into? Add notes to the Jamboard. # > 1. It turns out there was an error in data collection. All of the data in *combined.csv* with a date of March 5, 2000, was actually collected on March 6. Try to update all the affected records. As before, add notes to the Jamboard about any issues you encounter while doing this. # # Some of the columns in the CSV file have dependencies on information from other columns. We can simplify our data entry and reduce the risk of human error if we split or decompose our single table into multiple tables to eliminate these dependencies. # The entity relationship model provides a process for developing a more robust representation of the system we are observing with our survey data. # # The following provides a useful example of an ER diagram, and includes each of the concepts to be discussed below: # # ![Entity Relationship example diagram](./images/1011px-ER_Diagram_MMORPG.png) # # By TheMattrix at the English language Wikipedia, CC BY-SA 3.0, Link # # # ## Entities # # Entities are *nouns*, and can be physical or logical: # # * People - teachers, students, courses # * Places - stores, websites, states # * Things - donuts, grades, purchases # # Entities are represented as tables within a database. # # ## Attributes # # Entities have properties or attributes which describe them. For each attribute there is domain, or a range of legal values. Domains can be limited by data type - integer, string, etc. - and may be further limited by allowable values. For example, the domain of month names is limited to January, February, etc. # # There are several types of attributes: # # * __Simple attributes__ are atomic values which cannot be decomposed or divided. Examples include _age_, _last name_, _glaze_, etc. # * __Composite attributes__ consist of multiple simple attributes, such as _address_, _full name_, etc. # * __Multivalued attributes__ can include a set of more than one value. _Phone numbers_, _certifications_, etc. are examples of multivalued attributes. # * __Derived attributes__ can be calculated using other attributes. A common example is _age_, which can be calculated from a date of birth. # # ## Keys # # A key is an attribute or combination of attributes which can be used to uniquely identify individual entities within the entity set. That is, keys enforce a uniqueness constraint. # # There are multiple types of keys. # # * A __candidate key__ is a simple or composite key that is both unique and minimal. _Minimal_ here means that every included attribute is needed to establish uniqueness. 
A table or entity set may have more than one candidate key. # * A __composite key__ is a key composed of two or more attributes. Composite keys are also minimal. # * A __primary key__ is the candidate key which is selected to uniquely identify entities in the entity set. # * A __foreign key__ is an attribute that references the primary key of another table or entity set in the database. A __foreign key__ is not required to be unique within its containing table. # # # ## Design Round 2: Defining entities and attributes # # > #### Exercise # > # > 1. In the Jamboard, the columns in *combined.csv* have been added as stickies. Each column can be considered an attribute of an entity, for example a plot. Working together, rearrange the stickies into groups of attributes describing a single entity. Use a marker or another sticky to name the entity (for example "plot"). # > 1. Use DB Browser to open the *portal_mammals.sqlite* database included in the data we downloaded from figshare. Compare the data table definitions with the entities and attributes defined in the previous step. # >> 1. Identify an attribute or group of attributes that uniquely identifies individual entities. Circle it - this is the primary key. # # #### Creating tables # # It may be useful to have information about the person who collected the survey data in the field. Let's add a new table, "recorder," to hold this information. What are some attributes of this entity? # # * first name # * last name # * what else? # # Use the _Modify Table_ feature to create a table and add additional attributes for "status" (undergraduate, graduate, staff) and date of birth: # # ``` # DROP TABLE IF EXISTS recorder; # CREATE TABLE IF NOT EXISTS recorder ('id' INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE, # 'fname' TEXT, # 'lname' TEXT NOT NULL, # 'status' TEXT NOT NULL CHECK(status="undergraduate" OR # status="graduate" OR status="staff"), # 'dob' TEXT); # ``` # # Below is the syntax for adding data to the new table: # # ``` # INSERT INTO recorder (fname, lname, status, dob) VALUES ("Joe", "Smith", "staff", "1985-03-22"); # ``` # # Refer to the data table definition above - some fields can be left blank and others can't. The _status_ field also has a constraint on the values that can be provided. # # > #### Exercise # > # > What happens if we try to enter the following? # > # > ``` # > INSERT INTO recorder (fname, status, dob) VALUES ("Jane", "faculty", "Sept. 31, 3022"); # > ``` # > 1. Correct the errors in the insert statement above until it works. Leave the value of the date field as-is. What do you notice? # > 1. Use the same syntax to add some additional people to the table. # # Relationships # ------------------------------------------------------------------------------------------------------------ # # Relationships represent connections between entities. In keeping with the idea that entities are nouns, relationships are verbs. The MMORPG example above demonstrates this: a character _has_ an account, a region _contains_ characters. # # _Cardinality_ determines the type of relationship that exists between two entities. # # * __One to many (1:M)__: In the example above, region -> character is a 1 to many relationship. That is, one region can have many characters in it. # * __One to one (1:1)__: Not in the diagram above. One to one relationships can indicate a design issue, since the two entities may really describe the same thing. # * __Many to many (M:N)__: In the example, character and creep have a many to many relationship.
Within a databae, these need to be implemented as a set of 1:M relationships with. # # # > #### Exercise # > # > 1. Identify the relationships between the entities in our database. Since we are working with text, we will use notation similar to the examples as [https://www.datanamic.com/support/lt-dez005-introduction-db-modeling.html](https://www.datanamic.com/support/lt-dez005-introduction-db-modeling.html): # > # >> Plot -> Species; 1 plot can contain multiple species -> 1:N # # # # Normalization # ------------------------------------------------------------------------------------------------------------ # # Normalization is a process of analyzing entities and attributes to reduce redundancy and prevent anomalies: # # * Update anomaly: Redundant values within a table must be updated multiple times. In the example below, if Smith's favorite donut changes, the table has to be updated twice. Otherwise, there will be inconsistent values. # * Delete anomaly: Deleting data forces the deletion of other attributes. For example, removing apple cider donuts from our table would also force the deletion of Wilson and Wilson's dependent, Pete. (Remeber that DELETE operations delete a whole row, not just a single attribute value.) # * Insert anomaly: Data cannot be added to the table without also adding other attributes. If null values are not allowed in the *Favorite_Donut* column, it becomes impossible to add information about an employee who doesn't have a favorite donut. # # # | EmployeeID | LName |Favorite_Donut | Dependent | # |------------|------------|---------------|-----------| # | 115 | Smith | glazed | James | # | 115 | Smith | glazed | Sandy | # | 116 | Wilson | apple cider | Pete | # # # Normalization involves removing dependencies among attributes to improve the logical structure and consistency of a database. # # There are progressive degrees of normalization across multiple _normal forms_ (NF). There are six normal forms, but generally a database is considered normalized if the tables satisfy the requirements of the first three NF. # # * 1NF: No repeating columns. # * 2NF: A table must be 1NF AND the primary key is either a single attribute or, if composite, each non-key attribute must be dependent on the entire key for uniqueness. That is, eliminate redundant values. # * 3NF: A table must be 2NF AND eliminate transitive dependencies. That is, remove non-key attributes that depend on other non-key attributes. For example, consider the following alternative table definition for _species_: # # ``` # CREATE TABLE species ( # 'id' INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE, # 'taxon' TEXT, # 'common_name' TEXT, # 'count' INTEGER, # 'notes' TEXT # ); # # ``` # # The attributes _count_ and _notes_ depend on each other as survey attributes. We need a way to link this information with species _per survey_ by creating a dependent entity to resolve the M:N relationship between species and surveys. # # ## Design Round 3: Data Integrity # # # ### CRUD: Create, Read, Update, Delete # # We have already created a table above. Let's look at how we can view and manipulate the data, and whether changes we make to the data affect or "break" the database. # # Above we listed some anomalies that normalization is supposed to prevent. 
In the SQL tab of DB Browser, we can recreate the data as represented in *combined.csv* using the following query: # # ``` # SELECT surveys.*, plots.plot_type, species.genus, species.species, species.taxa # FROM surveys # INNER JOIN plots ON surveys.plot_id = plots.plot_id # INNER JOIN species ON surveys.species_id = species.species_id # ``` # # Now that we have identified the relationships between the entities in the database, we can modify the _surveys_ table to reference the ID fields of the _plots_, _species_, and _recorder_ tables as foreign keys. # # 1. In the "Database Structure" tab of DB Browser, right click _surveys_ and select "Modify Table." # 1. Add a column, *recorder_id*. Set the data type to "integer." To the right, under the "Foreign Key" setting, select the "recorder" table from the first drop down list, then select "id" from the second drop down list. # 1. Add the corresponding IDs from the _plots_ and _species_ tables as foriegn keys to the "plot_id" and "species_id" fields. # # > #### Exercise # > # > 1. Update the SQL statement for the combined view to include the recorder's last name and status. # # We have just made a lot of changes to the structure of our database. How do these changes affect our ability to query (or Read) the data? Let's go back and redo some of the examples from last week, which will also give us an opportunity to refresh our memories of the SQL syntax. # # In DB Browser we can insert data into tables two ways. In addition to using **INSERT** statements as above, we can also manually edit a table in the "Browse Data" tab. # # > #### Exercise # > # > 1. Use one of the two methods to add survey data from *field_notes.xslx* to the *surveys* table. # # Let's also correct the date error for surveys recorded as March 5, 2000. We can do this manually in the table itself by filtering, but in this case it's preferred to use SQL. First, we will select the data we need to update in order to see which rows will be affected: # # ``` # SELECT * # FROM surveys # WHERE month = "3" AND day = "5" AND year = "2000"; # ``` # # How many rows will be affected by the update? Once we are ready, we can update the table: # # ``` # UPDATE surveys # SET day = '6' # WHERE month = "3" AND day = "5" AND year = "2000"; # ``` # # Finally, we can add information about who recorded the surveys. Here's an example where every rodent exclosure since 1995 was surveyed by : # # ``` # UPDATE surveys # SET recorder_id = 1 # WHERE plot_id = 5 AND year > 1995 # ``` # # Finally, we can delete data using the **DELETE** clause. Our data don't lend themselves well to deletion - deleting rows from any one table shouldn't break the other tables or views, but there's also no benefit to deleting anything. An example where deletion would make sense would be to remove rows from a _product_ table in an inventory database after that product is discontinued. # # But for the sake of demonstration, let's say we decide to remove one of our species, *Neotoma albigula*. We could delete the species info from the _species_ table using: # # ``` # DELETE FROM species # WHERE species_id = "NL"; # ``` # # > #### Exercise # > # > 1. The above SQL command should have thrown an error. What is the error, and why might it be a good thing? If we still want to delete these data, what do we have to do? # > 1. Why would or wouldn't we also want to remove corresponding rows from the _surveys_ table? If we were going to delete rows from both tables, which would we delete rows from first? # # > #### Exercise # > # > 1. 
Pick one of the people added to your _recorder_ table earlier. Update the _surveys_ table to add their ID information to specific survey observations based on plot or species. # > 1. Sometimes species names change. Pick a species and use an **UPDATE** query to change its name in the _species_ table. Then re-run the combined view. Has the change been applied to all the affected rows? How can you double check without having to scroll through the entire view? # # References # # and (n.d.) Database Design - 2nd Edition. Retrieved from [https://opentextbc.ca/dbdesign01/](https://opentextbc.ca/dbdesign01/) # # Datanamic (n.d.) Database normalization. Retrieved from [https://www.datanamic.com/support/database-normalization.html](https://www.datanamic.com/support/database-normalization.html) # + jupyter={"outputs_hidden": true} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] hide_input=true toc=true #


    # # - # # ✨abracadabra✨ Basics # ## Input data # ✨ABracadabra✨ takes as input a [pandas](https://pandas.pydata.org/) `DataFrame` containing experiment observations data. Each record represents an observation/trial recorded in the experiment and has the following columns: # # - **One or more `treatment` columns**: each treatment column contains two or more distinct, discrete values that are used to identify the different groups in the experiment # - **One or more `metric` columns**: these are the values associated with each observation that are used to compare groups in the experiment. # - **Zero or more `attributes` columns**: these are associated with additional properties assigned to the observations. These attributes can be used for any additional segmentations across groups. # # To demonstrate, let's generate some artificial experiment observations data. The `metric` column in our dataset will be a series of binary outcomes (i.e. `True`/`False`, here stored as `float` values). This binary `metric` is analogous to *conversion* or *success* in AB testing. These outcomes are simulated from three different Bernoulli distributions, each associated with the `treatement`s named `"A"`, `"B"`, and `"C"`. and each of which has an increasing average probability of *conversion*, respectively. The simulated data also contains four `attribute` columns, named `attr_*`. # + from abra.utils import generate_fake_observations # generate some fake binary trial data binary_data = generate_fake_observations( distribution='bernoulli', # binary data n_treatments=3, n_attributes=4, n_observations=1000 ) binary_data.head() # - # The resulting data have a single `treatment` column, called (creatively) `'treatment'`, a single `metric` column, called `'metric'`, and four `attribute` columns, `atr_0`, `attr_1`, `attr_2`, and `attr_3`. The `treatment` column has 3 distinct treatments: `"A"`, `"B"`, and `"C"`, and the metric takes on boolean/binary values drawn from a [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution). # ## Key Components of an AB Test # The three key components of running an AB test in ✨abracadabra✨ are: # # - **The `Experiment`**, which holds the raw observations data recorded from, and any metadata associated with an AB experiment. # - **The `HypothesisTest`**, which defines the statistical inference procedure applied to the experiment data. # - **The `HypothesisTestResults`**, which is the statistical artifact that results from running a `HypothesisTest` against an `Experiment`'s observations. The `HypothesisTestResults` are used to summarize, visulize, and interpret the inference results and make decisions based on these results. # ### Running an AB test in ✨abracadabra✨ is as easy as ✨1, 2, 3✨: # 1. Initialize your `Experiment` with observations and any metadata # 2. Define your `HypothesisTest`. This requires defining the `hypothesis` and `inference_method`. The relevant inference method used will depend on the support of your observations. 
A list of supported hypotheses and inference methods for different types of observation variables is shown below: # # #### Hypotheses # | Hypothesis | Hypothesis Type | `hypothesis` parameter | # |---|---|---| # | "The treatment is larger than the control" | one-tailed | `"larger"` | # | "The treatment is smaller than the control" | one-tailed | `"smaller"` | # | "The treatment is not equal to the control" | two-tailed | `"unequal"` | # # # #### Inference Methods # # | Variable Type | Model Class| `inference_method` parameter | # |---|---|---| # | Continuous | Frequentist| `'means_delta'` (t-test) | # | | Bayesian| `'gaussian'`, `'exp_student_t'`| # | Binary / Proportions | Frequentist| `'proportions_delta'` (z-test) | # | | Bayesian| `'beta'`, `'beta_binomial'`, `'bernoulli'` | # | Counts |Frequentist| `'rates_ratio'` | # | |Bayesian| `'gamma_poisson'` | # | Non-parametric |Bootstrap| `'bootstrap'` | # # 3. Run the test against your experiment and interpret the resulting `HypothesisTestResults`. # #### Example # # Below we demonstrate a standard AB test workflow in ✨abracadabra✨. Namely we: # 1. Initialize an `Experiment` instance `exper` with our artificial binary observations generated above. # 2. We then initialize a `HypothesisTest` instance `ab_test` that tests whether treatment `"B"` is not equal to treatment `"A"` (the `"unequal"` hypothesis), based on the metric values in the `"metric"` column of the dataframe. The hypothesis test uses a Frequentist method `'proportions_delta'` that is dedicated to detecting differences between binary samples. # + from abra import Experiment, HypothesisTest # Initialize the Experiment exper = Experiment(data=binary_data, name="Demo Experiment") # Initialize the A/B test ab_test = HypothesisTest( metric="metric", treatment="treatment", control="A", variation="B", hypothesis="unequal", inference_method="proportions_delta" ) # Run the test with an alpha of 0.05; get back a HypothesisTestResults object ab_test_results = exper.run_test(ab_test, alpha=.05) # Check the test results decision assert ab_test_results.accept_hypothesis # - # ## Interpreting results # Each `HypothesisTestResults` has its own `display()` and `visualize()` methods that can be used to interpret the results of the test. The `display()` method prints out the results to the console, while `visualize` plots a visual summary of the results. # Print the test results to the console ab_test_results.display() # Providing an `outfile` argument to the `visualize` method will save the results figure to an image file. # + # %pylab inline # Visualize the test results outfile = '/tmp/abracadabra_demo_test_results.png' ab_test_results.visualize(outfile=outfile) # Show that results figure exists # !ls -lah {outfile} # - # The resulting frequentist test results (displayed and visualized above) indicate that the hypothesis `"B != A"` should be accepted. A breakdown of the results plot is as follows: # # #### Top Plot: Sample Distributions # The top plot compares the _parameterized representation of the sample distributions_ with parameters being derived from the experiment observations. # # The large degree of separation between the two distributions indicates that the `"B"` group is qualitatively larger than `"A"`. # # #### Middle Plot: Central Tendencies # The middle plot compares the _central tendency estimates_ of the two sample groups--in this case, the average probability of success on a given trial--along with the _Standard Errors_ calculated for those estimates.
# # We can see that there is no overlap of the standard errors, further indicating that we can be confident that the two groups are likely different, and that `"B"` is larger than `"A"`. # # #### Bottom Plot: Deltas # The bottom plot gives the estimate of the _difference in central tendencies_, in this case `ProportionsDelta`, as well as 95% Confidence Intervals on this difference estimate (This is a two-tailed test, checking that `"B"` is not equal to `"A"`; had we run the one-tailed `"larger"` hypothesis instead, the upper bound on `ProportionsDelta` would be `inf`.). # # We can see that the confidence interval on the difference between the two groups does not intersect with the `ProportionsDelta=0` line, indicating a statistically significant difference between the two samples. # ## Bootstrap Hypothesis Tests # # If your samples do not follow standard parametric distributions (e.g. Gaussian, Binomial, Poisson), or if you're comparing more exotic descriptive statistics (e.g. median, mode, etc) then you might want to consider using a non-parametric [Bootstrap Hypothesis Test](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)). Running bootstrap tests is easy in ✨abracadabra✨: you simply use the `"bootstrap"` `inference_method`. # + # Tests and data can be copied via the `.copy` method. def my_statistic(samples): """Bootstrap tests support custom test statistics. Here we simply use the mean so we can compare with the parametric tests. """ return np.mean(samples) bootstrap_ab_test = ab_test.copy( inference_method='bootstrap', infer_kwargs={'statistic_function': my_statistic} ) # run the test bootstrap_ab_test_results = exper.run_test(bootstrap_ab_test) bootstrap_ab_test_results.display() bootstrap_ab_test_results.visualize() # - # Notice that the `"bootstrap"` hypothesis test results above--which are based on resampling the data set with replacement--are very similar to the results returned by the `"proportions_delta"` parametric model, which are based on descriptive statistics and model the data set as a Binomial distribution. The results will converge as the sample sizes grow. # ## Bayesian Hypothesis Tests # In addition to common Frequentist tests, running Bayesian analogs is simple: initialize the `HypothesisTest` with a Bayesian `inference_method` parameter. For example, the Bayesian analogs to the `'proportions_delta'` method are the `'binomial'`, `'beta_binomial'`, or `'bernoulli'` methods. # + binomial_ab_test = ab_test.copy(inference_method='binomial') # run the test binomial_ab_test_results = exper.run_test(binomial_ab_test) assert binomial_ab_test_results.prob_greater > .95 binomial_ab_test_results.display() binomial_ab_test_results.visualize() # - # Bayesian `HypothesisTestResults` have their own analogous `display` and `visualize` methods that can be used to interpret the results of the analysis. Notice that the `ab_test_results` and `binomial_ab_test_results` each support a difference in proportions of approximately 0.14. # # Note how the mean parameter and 95% HDI sampled from the Bayesian model are very similar to the mean and 95% confidence interval used in the Frequentist test `ab_test_results`. Similarly, the deltas in the proportion parameter $p$ sampled from the model align with the `ProportionsDelta` estimated for the Frequentist test. # ### Bayesian Model Specification # Bayesian models allow the experimenter to incorporate prior beliefs. This can be helpful when you have little data, or when you have sound domain knowledge about baselines.
Specifying custom priors is also straight-forward using `abracadaba`, simply pass in a `model_params` argument during `HypothesisTest` initialization. Below we demonstrate by running another Bayesian hypothesis test, this time with a hierarchical [Beta-Binomial model](https://en.wikipedia.org/wiki/Beta-binomial_distribution#:~:text=In%20probability%20theory%20and%20statistics,is%20either%20unknown%20or%20random.). This model allows the user to specify a prior over the base probability $p$ by setting two hyperparameters for the Beta Distribution $\alpha$ and $\beta$ such that the mean prior has a value of # # $$ p = \frac{\alpha}{\alpha + \beta}$$ # # where the larger $\alpha$ and $\beta$. Let's put a super-strong prior on $p$ and see how it affects the inference results. # + # Run Bayesian test with custom priors # strong prior that p = alpha / (alpha + beta) = 0.33 beta_prior_params = dict(alpha=500., beta=1000.) bb_ab_test = HypothesisTest( metric='metric', control='A', variation='C', inference_method='beta_binomial', model_params=beta_prior_params ) # run the test bb_ab_test_results = exper.run_test(bb_ab_test) assert bb_ab_test_results.prob_greater > .95 bb_ab_test_results.display() bb_ab_test_results.visualize() # - # Here we see that the strong prior of $p=0.33$ influences the proportion parameters to values around 0.33. This causes our delta samples from the model be much smaller because two distributions that are forced to be near a prior value (here 0.33) will be located close to one another. Even with this strong prior, the model identifies that the two distributions are, in fact different, and that `P(C > A)` near 1. # # Note that if there were more data in the experiment, these parameter estimates would gradually move toward the data distribution, rather than the super-confident prior distribution, providing results that are similar to the results provided by the `binomial` (which can be thought of as a special case of the Beta-Binomial with a weak, agnostic prior) and `proportions_delta` inference methods. # ## Including Segmentations # ✨abracdabra✨ supports the ability to segment experiment observations based on one or more attributes in your dataset using the `segmentation` argument to `HyptothesisTest`. The segmentation can be a string or list of string expressions, each of which follow the [pandas query API](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html) # + # Initialize an A/B test with additional segmentation on the 'attr_1' attribute ab_test_segmented = HypothesisTest( metric='metric', control='A', variation='B', inference_method='proportions_delta', hypothesis='larger', segmentation="attr_1 == 'A1a'" ) # Run the segmented test ab_test_segmented_results = exper.run_test(ab_test_segmented) assert not ab_test_segmented_results.accept_hypothesis # B is larger # Display results (notice reduced sample sizes) ab_test_segmented_results.display() ab_test_segmented_results.visualize() # - # We now see that if we dig into a particular segment, namely the segement defined by `"attr_1 == 'A1a'"`, we can no longer accept the hypothesis that `"B is larger"`. This is indicated by a `ProportionsDelta` that overlaps substantially with the line indicating `0` difference between the two samples. # ## Running multiple tests, and multiple comparison control # When running multiple tests on the same metric, you'll need to control for [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem). 
This is handled by running a `HypothesisTestSuite`, which takes in a list of hypothesis tests. # # Below we run 3 independent tests comparing A to A, B to A, and C to A, and set the `correction_method` to `'bonferroni'`, which simply sets the effective $\alpha_{corrected} = \frac{\alpha}{N_{tests}}$. Our original `alpha=0.05`, so the corrected value is $\frac{0.05}{3} ≈ 0.0167$. # + from abra.hypothesis_test import HypothesisTestSuite from copy import deepcopy # Use the `HypothesisTest.copy` method for duplicating test # configurations, while overwriting specific parameters, in # this case the `variation` parameter aa_test = ab_test.copy(variation='A') ac_test = ab_test.copy(variation='C') # Initialize the `HypothesisTestSuite` test_suite = HypothesisTestSuite( tests=[aa_test, ab_test, ac_test], correction_method='bonferroni' ) # Run tests test_suite_results = exper.run_test_suite(test_suite) print(test_suite_results) # Print results test_suite_results.display() # - # Note that the alpha has been `corrected` with `MC Correction='bonferroni'` # + # test_suite_results.visualize() # - # The `HypothesisTestSuite` supports the following multiple comparison strategies: # - [Sidak](http://en.wikipedia.org/wiki/%C5%A0id%C3%A1k_correction) (default) # - [Bonferroni](http://en.wikipedia.org/wiki/Bonferroni_correction) # - [Benjamini-Hochberg false-discovery rate](http://pdfs.semanticscholar.org/af6e/9cd1652b40e219b45402313ec6f4b5b3d96b.pdf) # ## Custom Metrics # ✨abracadabra✨ also supports the use of custom metrics, which can transform and combine information from one or more columns. Below we create a `CustomMetric` that makes the `control` always greater than the `variation` by adding a constant offset (plus noise) to the `control`'s value. # + from abra import CustomMetric import numpy as np def custom_metric(row): """ Define a custom 'metric' where the control is always better. """ return 4 + np.random.rand() if row['treatment'] == 'A' else np.random.rand() custom_test = HypothesisTest( metric=CustomMetric(custom_metric), control='A', variation='B', inference_method='means_delta', # Note we use a t-test here hypothesis='unequal' ) custom_test_results = exper.run_test(custom_test) custom_test_results.display() # - # We see that, as expected, we have highly significant results, accepting the hypothesis that `'B != A'`. # ## Working with other types of variables # The examples above demonstrate running AB tests for variables that take on binary values--i.e. variables that take on values that exist in the interval $[0, 1]$. This is a pretty common scenario, as a lot of AB tests measure metrics like conversions at various stages in a UX funnel. However, ✨abracadabra✨ also supports inference methods for other types of variables, like continuous variables (e.g. time spent on a page) and counts/rate variables (e.g. number of clicks on a button per unit time). # ### Continuous Variables # Many continuous variables follow a Gaussian distribution, thus deltas between samples of continuous variables are often modeled based on differences of Gaussian-distributed random variables. The most common statistical inference for this scenario is the good ole' [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test), referred to in ✨abracadabra✨ as `"means_delta"`, as it tests for differences in means of the underlying Gaussian distributions from each of the treatments.
# # Below we generate some Gaussian-distributed observations and show how ✨abracadabra✨can be used to run a t-test using the common `Experiment->HypothesisTest->HypothesisTestResults` workflow. # generate some fake Gaussian-distributed trial data gaussian_data = generate_fake_observations( distribution='gaussian', # binary data n_treatments=3, n_attributes=4, n_observations=1000 ) gaussian_data.head() # Here we use the `inference_method="means_delta"` argument to run the t-test # + # Initialize the Experiment exper = Experiment(data=gaussian_data, name="Demo Experiment") # Initialize the A/B test gaussian_ab_test = HypothesisTest( metric="metric", treatment="treatment", control="A", variation="B", hypothesis="unequal", inference_method="means_delta" ) # Run the test with an alpha of 0.5; get back a HypothesisTestResults object gaussian_ab_test_results = exper.run_test(gaussian_ab_test, alpha=.05) # Check the test results decision assert gaussian_ab_test_results.accept_hypothesis gaussian_ab_test_results.display() gaussian_ab_test_results.visualize() # - # #### Bayesian models for continuous variables # The Bayesian analog to the t-test is called the "Hierarchical Gaussian" and involves modeling the observations as a generative process where each point is sampled from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. The model is "hierarchical" because it also assumes there is a distribution over both $\mu$ and $\sigma^2$ as well, namely $\mu \sim \text{Normal}(\bar{x}, \text{std(x)})$ and $\sigma \sim \text{Uniform}(0, \sigma_{max})$, where $\bar{x}$ and $\text{std(x)}$ are the empirical mean and standard deviation of the observations, and $\sigma_{max}$ is a user-specified hyperparameter. # # # That all sounds pretty complicated, right? Well, in ✨abracadabra✨ it's easy to run inference using this model. We simply update the `inference_method`: # + bayesian_gaussian_ab_test = ab_test.copy(inference_method='gaussian') # Run the test with an alpha of 0.5; get back a HypothesisTestResults object bayesian_gaussian_ab_test_results = exper.run_test(bayesian_gaussian_ab_test, alpha=.05) # Check the test results decision assert bayesian_gaussian_ab_test_results.accept_hypothesis bayesian_gaussian_ab_test_results.display() bayesian_gaussian_ab_test_results.visualize() # - # ### Counts / Rates variables # ✨abracadabra✨ also supports analysis of counts variables such as clicks or page views per unit time. These discrete, countable variables often follow a Poisson distribution. Rather than modeling the _difference_ between the two treatments, we instead model whether the ratio of the two counts is statistically different than 1. The reasoning being that if the two treatments have the same number of counts per the same unit of time (and thus the same _rate_) then their ratio will be close to unity. Accordingly, in ✨abracadabra✨, this is called a `"rates_ratio"` inference method. # # Below we'll run an AB test on syntetic data drawn from a Poisson distribution, and test to see if the two distributions are statistically different. 
# generate some fake Gaussian-distributed trial data counts_data = generate_fake_observations( distribution='poisson', # binary data n_treatments=3, n_observations=1000 ) counts_data.head() # + # Initialize the A/B test exper = Experiment(data=counts_data, name="Demo Experiment") poisson_ab_test = HypothesisTest( metric="metric", treatment="treatment", control="B", variation="C", hypothesis="unequal", inference_method="rates_ratio" ) # Run the test with an alpha of 0.5; get back a HypothesisTestResults object poisson_ab_test_results = exper.run_test(poisson_ab_test, alpha=.05) # Check the test results decision poisson_ab_test_results.display() poisson_ab_test_results.visualize() # - # Here we can see that the ratio of the two rate parameters ranges between 1.38 and 1.68 (95% confidence), with no overlap with the value 1. This indicates that the variation `"C"`s location parameter is approximately 1.5x that of the control `"B"`, which makes sense, given the mean estimates for the two treatments are approximately 3 and 2, respectively. # #### Bayesian models for count variables # The Bayesian analog to the rates ratio test is what's called the [Gamma-Poisson model](http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Gammapoisson.pdf). In this model the observations are assumed to be generated from a Poisson distribution with location parameter $\lambda$. Similarly to the "Hierarchical Gaussian" Bayesian, there is a prior distribution associated with $\lambda$ (that's what makes it Bayesian!). Namely $\lambda \sim \text{Gamma}(\alpha, \beta)$. Here the hyperparameters $\alpha$ and $\beta$ can be set by the experimenter to encode any intuitions or domain knowledge about the problem. # # Though this sounds complicated, implementing a hypothesis test using an inference method based off of the Gamma-Poisson model is not: # + bayesian_poisson_ab_test = poisson_ab_test.copy(inference_method='gamma_poisson') # Run the test with an alpha of 0.5; get back a HypothesisTestResults object bayesian_poisson_ab_test_results = exper.run_test(bayesian_poisson_ab_test) # Check the test results decision bayesian_poisson_ab_test_results.display() bayesian_poisson_ab_test_results.visualize() # - # Note here that samples drawn from the Bayesian model provide similar central tendency interval estimates to those calculated by the analytical rates ratio model. However, unlike the rates ratio model which looks at the ratio of central tendencies, the Bayesian AB tests provides _deltas_ or differences amongst the rate parameter samples drawn from the model. Thus differences in $\lambda$ samples that are far away from zero (in this case the diffence is--and should be--approximately equal to one) indicate significant difference between the treatments when interpreting the Bayesian counts AB test. 
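# To make the rates-ratio idea above concrete outside of ✨abracadabra✨, the sketch below simulates two Poisson samples and bootstraps a confidence interval for the ratio of their sample rates; the rates, sample sizes, and interval level are illustrative assumptions, not the library's internals.
# +
import numpy as np

rng = np.random.default_rng(42)

# Simulated click counts for a control and a variation (assumed rates of 2 and 3)
control_counts = rng.poisson(lam=2, size=1000)
variation_counts = rng.poisson(lam=3, size=1000)

# Bootstrap the ratio of sample rates (variation / control)
n_boot = 5000
ratios = np.empty(n_boot)
for i in range(n_boot):
    c = rng.choice(control_counts, size=control_counts.size, replace=True)
    v = rng.choice(variation_counts, size=variation_counts.size, replace=True)
    ratios[i] = v.mean() / c.mean()

lower, upper = np.percentile(ratios, [2.5, 97.5])
print(f"rate ratio ~ {ratios.mean():.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
print("ratio of 1 excluded from CI:", not (lower <= 1 <= upper))
# -
# If the interval excludes 1, the two rates are plausibly different, which mirrors the interpretation of the `"rates_ratio"` result above.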
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # + import numpy as np import pandas as pd import yfinance as yf import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - # get historical daily price symbol = 'ZM' tick = yf.Ticker(symbol) history = tick.history(period="max") # + df=pd.DataFrame() df['price'] = history.Close df['pct_chg'] = df.price.pct_change() # log return computation df['log_ret'] = np.log(df.price) - np.log(df.price.shift(1)) df['ret_mean'] = df.log_ret.rolling(21).mean() # https://en.wikipedia.org/wiki/Volatility_(finance) "annualized vol" but looking back only 21 days df['hist_volatility'] = df.log_ret.rolling(21).std()*np.sqrt(252)*100 df = df.dropna() # - the_vol_mean = df.hist_volatility.mean() the_vol_std = df.hist_volatility.std() print(f'mean of hist_volatility {the_vol_mean:1.5f}') the_ret_mean = df.ret_mean.mean() the_ret_tsd = df.ret_mean.std() print(f'mean of rolling mean {the_ret_mean:1.5f}') # + ind = -252*13 plt.figure(0,figsize=(10,10)) plt.subplot(311) plt.plot(df.iloc[ind:].price) plt.title(f'{symbol} price, volatility, rolling price return mean plot n={np.abs(ind)}') plt.xlabel('time') plt.ylabel('price') plt.grid(True) plt.subplot(312) plt.plot(df.iloc[ind:].hist_volatility) plt.axhline(the_vol_mean,color='red') plt.axhline(the_vol_mean-2*the_vol_std,color='green') plt.axhline(the_vol_mean+2*the_vol_std,color='green') plt.xlabel('time') plt.ylabel('historical volatiility') plt.grid(True) plt.subplot(313) plt.plot(df.iloc[ind:].ret_mean) plt.axhline(the_ret_mean,color='red') plt.axhline(the_ret_mean-2*the_ret_tsd,color='green') plt.axhline(the_ret_mean+2*the_ret_tsd,color='green') plt.grid(True) plt.xlabel('time') plt.ylabel('rolling mean of daily price return') # + ind = -200 plt.figure(0,figsize=(10,10)) print(df.iloc[-1,:]) plt.subplot(311) plt.plot(df.iloc[ind:].price) plt.title(f'{symbol} price, volatility, rolling price return mean plot n={np.abs(ind)}') plt.xlabel('time') plt.ylabel('price') plt.grid(True) plt.subplot(312) plt.plot(df.iloc[ind:].hist_volatility) plt.axhline(the_vol_mean,color='red') plt.axhline(the_vol_mean-2*the_vol_std,color='green') plt.axhline(the_vol_mean+2*the_vol_std,color='green') plt.xlabel('time') plt.ylabel('historical volatiility') plt.grid(True) plt.subplot(313) plt.plot(df.iloc[ind:].ret_mean) plt.axhline(the_ret_mean,color='red') plt.axhline(the_ret_mean-2*the_ret_tsd,color='green') plt.axhline(the_ret_mean+2*the_ret_tsd,color='green') plt.grid(True) plt.xlabel('time') plt.ylabel('rolling mean of daily price return') # - # https://stackoverflow.com/questions/2369492/generate-a-heatmap-in-matplotlib-using-a-scatter-data-set from matplotlib.colors import LogNorm heatmap, xedges, yedges = np.histogram2d(df.ret_mean,df.hist_volatility, bins=10) extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]] aspect = (xedges[-1]-xedges[0])/(yedges[-1]-yedges[0]) plt.figure(figsize=(10,10)) cmap = 'viridis' #viridis hot plt.imshow(heatmap.T, extent=extent, origin='lower', aspect=aspect,norm=LogNorm(),cmap=cmap) plt.grid(True) plt.colorbar() plt.xlabel('rolling mean of daily price return') plt.ylabel('historical volatility') # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.0 # 
language: julia # name: julia-1.7 # --- # # Singular values and conditioning # # # In this lecture we discuss matrix and vector norms. The matrix $2$-norm involves # _singular values_, which are a measure of how matrices "stretch" vectors. # We also show that # the singular values of a matrix give a notion of a _condition number_, which allows us # to bound errors introduced by floating point numbers in linear algebra operations. # # 1. Vector norms: we discuss the standard $p$-norm for vectors in $ℝ^n$. # 2. Matrix norms: we discuss how two vector norms can be used to induce a norm on matrices. These # satisfy an additional multiplicative inequality. # 3. Singular value decomposition: we introduce the singular value decomposition which is related to # the matrix $2$-norm and best low rank approximation. # 4. Condition numbers: we will see how errors in matrix-vector multiplication and solving linear systems # can be bounded in terms of the _condition number_, which is defined in terms of singular values. using LinearAlgebra, Plots # ## 1. Vector norms # # Recall the definition of a (vector-)norm: # # **Definition (vector-norm)** A norm $\|\cdot\|$ on $ℝ^n$ is a function that satisfies the following, for $𝐱,𝐲 ∈ ℝ^n$ and # $c ∈ ℝ$: # 1. Triangle inequality: $\|𝐱 + 𝐲 \| ≤ \|𝐱\| + \|𝐲\|$ # 2. Homogeneity: $\| c 𝐱 \| = |c| \| 𝐱 \|$ # 3. Positive-definiteness: $\|𝐱\| = 0$ implies that $𝐱 = 0$. # # # Consider the following example: # # **Definition (p-norm)** # For $1 ≤ p < ∞$ and $𝐱 \in ℝ^n$, define the $p$-norm: # $$ # \|𝐱\|_p := \left(\sum_{k=1}^n |x_k|^p\right)^{1/p} # $$ # where $x_k$ is the $k$-th entry of $𝐱$. # For $p = ∞$ we define # $$ # \|𝐱\|_∞ := \max_k |x_k| # $$ # # **Theorem (p-norm)** $\| ⋅ \|_p$ is a norm for $1 ≤ p ≤ ∞$. # # **Proof** # # We will only prove the cases $p = 1, 2, ∞$ as general $p$ is more involved. # # Homogeneity and positive-definiteness are straightforward: e.g., # $$ # \|c 𝐱\|_p = (\sum_{k=1}^n |cx_k|^p)^{1/p} = (|c|^p \sum_{k=1}^n |x_k|^p)^{1/p} = |c| \| 𝐱 \|_p # $$ # and if $\| 𝐱 \|_p = 0$ then all $|x_k|^p$ have to be zero. # # For $p = 1,∞$ the triangle inequality is also straightforward: # $$ # \| 𝐱 + 𝐲 \|_∞ = \max_k (|x_k + y_k|) ≤ \max_k (|x_k| + |y_k|) ≤ \|𝐱\|_∞ + \|𝐲\|_∞ # $$ # and # $$ # \| 𝐱 + 𝐲 \|_1 = \sum_{k=1}^n |x_k + y_k| ≤  \sum_{k=1}^n (|x_k| + |y_k|) = \| 𝐱 \|_1 + \| 𝐲\|_1 # $$ # # For $p = 2$ it can be proved using the Cauchy–Schwarz inequality: # $$ # |𝐱^⊤ 𝐲| ≤ \| 𝐱 \|_2 \| 𝐲 \|_2 # $$ # That is, we have # $$ # \| 𝐱 + 𝐲 \|^2 = \|𝐱\|^2 + 2 𝐱^⊤ 𝐲 + \|𝐲\|^2 ≤ \|𝐱\|^2 + 2\| 𝐱 \| \| 𝐲 \| + \|𝐲\|^2 = (\| 𝐱 \| + \| 𝐲 \|)^2 # $$ # # # ∎ # # # In Julia we can use the inbuilt `norm` function to calculate norms: # ```julia # norm([1,-2,3]) == norm([1,-2,3], 2) == sqrt(1^2 + 2^2 + 3^2); # norm([1,-2,3], 1) == 1 + 2 + 3; # norm([1,-2,3], Inf) == 3; # ``` # # # ## 2. Matrix norms # Just like vectors, matrices have norms that measure their "length". The simplest example is the Frobenius norm, # defined for an $m \times n$ real matrix $A$ as # $$ # \|A\|_F := \sqrt{\sum_{k=1}^m \sum_{j=1}^n A_{kj}^2} # $$ # This is available as `norm` in Julia: A = randn(5,3) norm(A) == norm(vec(A)) # While this is the simplest norm, it is not the most useful. Instead, we will build a matrix norm from a # vector norm: # # # # **Definition (matrix-norm)** Suppose $A ∈ ℝ^{m × n}$ and consider two norms $\| ⋅ \|_X$ on $ℝ^n$ and # $\| ⋅ \|_Y$ on $ℝ^m$.
Define the _(induced) matrix norm_ as: # $$ # \|A \|_{X → Y} := \sup_{𝐯 : \|𝐯\|_X=1} \|A 𝐯\|_Y # $$ # Also define # $$ # \|A\|_X \triangleq \|A\|_{X \rightarrow X} # $$ # # For the induced 2, 1, and $∞$-norm we use # $$ # \|A\|_2, \|A\|_1 \qquad \hbox{and} \qquad \|A\|_∞. # $$ # # Note an equivalent definition of the induced norm: # $$ # \|A\|_{X → Y} = \sup_{𝐱 ∈ ℝ^n, 𝐱 ≠ 0} {\|A 𝐱\|_Y \over \| 𝐱\|_X} # $$ # This follows since we can scale $𝐱$ by its norm so that it has unit norm, that is, # $𝐱 / \|𝐱\|_X$ has unit norm. # # **Lemma (matrix norms are norms)** Induced matrix norms are norms, that is for $\| ⋅ \| = \| ⋅ \|_{X → Y}$ we have: # 1. Triangle inequality: $\| A + B \| ≤ \|A\| + \|B\|$ # 2. Homogeneity: $\|c A \| = |c| \|A\|$ # 3. Positive-definiteness: $\|A\| =0 \Rightarrow A = 0$ # In addition, they satisfy the following additional properties: # 1. $\|A 𝐱 \|_Y ≤ \|A\|_{X → Y} \|𝐱 \|_X$ # 2. Multiplicative inequality: $\| AB\|_{X → Z} ≤ \|A \|_{Y → Z} \|B\|_{X → Y}$ # # **Proof** # # First we show the _triangle inequality_: # $$ # \|A + B \| ≤ \sup_{𝐯 : \|𝐯\|_X=1} (\|A 𝐯\|_Y + \|B 𝐯\|_Y) ≤ \| A \| + \|B \|. # $$ # Homogeneity is also immediate. Positive-definiteness follows from the fact that if # $\|A\| = 0$ then $A 𝐱 = 0$ for all $𝐱 ∈ ℝ^n$. # The property $\|A 𝐱 \|_Y ≤ \|A\|_{X → Y} \|𝐱 \|_X$ follows from the definition. # Finally, the multiplicative inequality follows from # $$ # \|A B\| = \sup_{𝐯 : \|𝐯\|_X=1} \|A B 𝐯 \|_Z ≤ \sup_{𝐯 : \|𝐯\|_X=1} \|A\|_{Y → Z} \| B 𝐯 \|_Y ≤ \|A \|_{Y → Z} \|B\|_{X → Y} # $$ # # # # ∎ # # We have some simple examples of induced norms: # # **Example ($1$-norm)** We claim # $$ # \|A \|_1 = \max_j \|𝐚_j\|_1 # $$ # that is, the maximum $1$-norm of the columns. To see this use the triangle inequality to # find for $\|𝐱\|_1 = 1$ # $$ # \| A 𝐱 \|_1 ≤ ∑_{j = 1}^n |x_j| \| 𝐚_j\|_1 ≤ \max_j \| 𝐚_j\|_1 ∑_{j = 1}^n |x_j| = \max_j \| 𝐚_j\|_1. # $$ # But the bound is also attained since if $j$ is the column that maximises the norms then # $$ # \|A 𝐞_j \|_1 = \|𝐚_j\|_1 = \max_j \| 𝐚_j\|_1. # $$ # # In the problem sheet we see that # $$ # \|A\|_∞ = \max_k \|A[k,:]\|_1 # $$ # that is, the maximum $1$-norm of the rows. # # Matrix norms are available via `opnorm`: m,n = 5,3 A = randn(m,n) opnorm(A,1) == maximum(norm(A[:,j],1) for j = 1:n) opnorm(A,Inf) == maximum(norm(A[k,:],1) for k = 1:m) opnorm(A) # the 2-norm # An example that does not have a simple formula is $\|A \|_2$, but we do have two simple cases: # # **Proposition (diagonal/orthogonal 2-norms)** If $Λ$ is diagonal with entries $λ_k$ then # $\|Λ\|_2 = \max_k |λ_k|$. If $Q$ is orthogonal then $\|Q\|_2 = 1$. # # # ## 3. Singular value decomposition # # To define the induced $2$-norm we need to consider the following: # # **Definition (singular value decomposition)** For $A ∈ ℝ^{m × n}$ with rank $r > 0$, # the _reduced singular value decomposition (SVD)_ is # $$ # A = U Σ V^⊤ # $$ # where $U ∈ ℝ^{m × r}$ and $V ∈ ℝ^{n × r}$ have orthonormal columns and $Σ ∈ ℝ^{r × r}$ is diagonal whose # diagonal entries, which we call _singular values_, are all positive and decreasing: $σ_1 ≥ ⋯ ≥ σ_r > 0$.
# The _full singular value decomposition (SVD)_ is # $$ # A = Ũ Σ̃ Ṽ^⊤ # $$ # where $Ũ ∈ ℝ^{m × m}$ and $Ṽ ∈ ℝ^{n × n}$ are orthogonal matrices and $Σ̃ ∈ ℝ^{m × n}$ has only # diagonal entries, i.e., if $m > n$, # $$ # Σ̃ = \begin{bmatrix} σ_1 \\ & ⋱ \\ && σ_n \\ && 0 \\ && ⋮ \\ && 0 \end{bmatrix} # $$ # and if $m < n, # $$ # Σ̃ = \begin{bmatrix} σ_1 \\ & ⋱ \\ && σ_m & 0 & ⋯ & 0\end{bmatrix} # $$ # where $σ_k = 0$ if $k > r$. # # To show the SVD exists we first establish some properties of a _Gram matrix_ ($A^⊤A$): # # **Proposition (Gram matrix kernel)** The kernel of $A$ is the also the kernel of $A^⊤ A$. # # **Proof** # If $A^⊤ A 𝐱 = 0$ then we have # $$ # 0 = 𝐱 A^⊤ A 𝐱 = \| A 𝐱 \|^2 # $$ # which means $A 𝐱 = 0$ and $𝐱 ∈ \hbox{ker}(A)$. # ∎ # # **Proposition (Gram matrix diagonalisation)** The Gram-matrix # satisfies # $$ # A^⊤ A = Q Λ Q^⊤ # $$ # where $Q$ is orthogonal and the eigenvalues $λ_k$ are non-negative. # # **Proof** # $A^⊤ A$ is symmetric so we appeal to the spectral theorem for the # existence of the decomposition. # For the corresponding (orthonormal) eigenvector $𝐪_k$, # $$ # λ_k = λ_k 𝐪_k^⊤ 𝐪_k = 𝐪_k^⊤ A^⊤ A 𝐪_k = \| A 𝐪_k \| ≥ 0. # $$ # # ∎ # # # This connection allows us to prove existence: # # **Theorem (SVD existence)** Every $A ∈ ℝ^{m × n}$ has an SVD. # # **Proof** # Consider # $$ # A^⊤ A = Q Λ Q^⊤. # $$ # Assume (as usual) that the eigenvalues are sorted in decreasing modulus, and so $λ_1,…,λ_r$ # are an enumeration of the non-zero eigenvalues and # $$ # V := \begin{bmatrix} 𝐪_1 | ⋯ | 𝐪_r \end{bmatrix} # $$ # the corresponding (orthonormal) eigenvectors, with # $$ # K = \begin{bmatrix} 𝐪_{r+1} | ⋯ | 𝐪_n \end{bmatrix} # $$ # the corresponding kernel. # Define # $$ # Σ := \begin{bmatrix} \sqrt{λ_1} \\ & ⋱ \\ && \sqrt{λ_r} \end{bmatrix} # $$ # Now define # $$ # U := AV Σ^{-1} # $$ # which is orthogonal since $A^⊤ A V = V Σ^2 $: # $$ # U^⊤ U = Σ^{-1} V^⊤ A^⊤ A V Σ^{-1} = I. # $$ # Thus we have # $$ # U Σ V^⊤ = A V V^⊤ = A \underbrace{\begin{bmatrix} V | K \end{bmatrix}}_Q\underbrace{\begin{bmatrix} V^⊤ \\ K^⊤ \end{bmatrix}}_{Q^⊤} # $$ # where we use the fact that $A K = 0$ so that concatenating $K$ does not change the value. # # ∎ # # Singular values tell us the 2-norm: # # **Corollary (singular values and norm)** # $$ # \|A \|_2 = σ_1 # $$ # and if $A ∈ ℝ^{n × n}$ is invertible, then # $$ # \|A^{-1} \|_2 = σ_n^{-1} # $$ # # **Proof** # # First we establish the upper-bound: # $$ # \|A \|_2 ≤  \|U \|_2 \| Σ \|_2 \| V^⊤\|_2 = \| Σ \|_2 = σ_1 # $$ # This is attained using the first right singular vector: # $$ # \|A 𝐯_1\|_2 = \|Σ V^⊤ 𝐯_1\|_2 = \|Σ 𝐞_1\|_2 = σ_1 # $$ # The inverse result follows since the inverse has SVD # $$ # A^{-1} = V Σ^{-1} U^⊤ = V (W Σ^{-1} W) U^⊤ # $$ # is the SVD of $A^{-1}$, where # $$ # W := P_σ = \begin{bmatrix} && 1 \\ & ⋰ \\ 1 \end{bmatrix} # $$ # is the permutation that reverses the entries, that is, $σ$ has Cauchy notation # $$ # \begin{pmatrix} # 1 & 2 & ⋯ & n \\ # n & n-1 & ⋯ & 1 # \end{pmatrix}. # $$ # # # ∎ # # We will not discuss in this module computation of singular value decompositions or eigenvalues: # they involve iterative algorithms (actually built on a sequence of QR decompositions). 
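# **Example (2-norm via singular values)** As a small worked example of the corollary above (an addition to these notes, not part of the original), take the diagonal matrix
# $$
# A = \begin{bmatrix} 3 & 0 \\ 0 & 1/2 \end{bmatrix},
# $$
# whose SVD is simply $A = I Σ I^⊤$ with $Σ = A$, so the singular values are $σ_1 = 3$ and $σ_2 = 1/2$. The corollary gives
# $$
# \|A\|_2 = σ_1 = 3 \qquad \hbox{and} \qquad \|A^{-1}\|_2 = σ_2^{-1} = 2,
# $$
# consistent with the earlier proposition that the 2-norm of a diagonal matrix is the largest absolute value of its diagonal entries.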
# # One of the main usages for SVDs is low-rank approximation: # # **Theorem (best low rank approximation)** The matrix # $$ # A_k := \begin{bmatrix} 𝐮_1 | ⋯ | 𝐮_k \end{bmatrix} \begin{bmatrix} # σ_1 \\ # & ⋱ \\ # && σ_k\end{bmatrix} \begin{bmatrix} 𝐯_1 | ⋯ | 𝐯_k \end{bmatrix}^⊤ # $$ # is the best 2-norm approximation of $A$ by a rank $k$ matrix, that is, for all rank-$k$ matrices $B$, we have # $$\|A - A_k\|_2 ≤ \|A -B \|_2.$$ # # # **Proof** # We have # # $$ # A - A_k = U \begin{bmatrix} 0 \cr &\ddots \cr && 0 \cr &&& σ_{k+1} \cr &&&& \ddots \cr &&&&& σ_r\end{bmatrix} V^⊤. # $$ # Suppose a rank-$k$ matrix $B$ has # $$ # \|A-B\|_2 < \|A-A_k\|_2 = σ_{k+1}. # $$ # For all $𝐰 \in \ker(B)$ we have # $$ # \|A 𝐰\|_2 = \|(A-B) 𝐰\|_2 ≤ \|A-B\|\|𝐰\|_2 < σ_{k+1} \|𝐰\|_2 # $$ # # But for all $𝐮 \in {\rm span}(𝐯_1,…,𝐯_{k+1})$, that is, $𝐮 = V[:,1:k+1]𝐜$ for some $𝐜 \in ℝ^{k+1}$ we have # $$ # \|A 𝐮\|_2^2 = \|U Σ_k 𝐜\|_2^2 = \|Σ_k 𝐜\|_2^2 = # \sum_{j=1}^{k+1} (σ_j c_j)^2 ≥ σ_{k+1}^2 \|𝐜\|^2, # $$ # i.e., $\|A 𝐮\|_2 ≥ σ_{k+1} \|𝐜\|$. Thus $𝐰$ cannot be in this span. # # # The dimension of $\ker(B)$ is at least $n-k$, but the dimension of ${\rm span}(𝐯_1,…,𝐯_{k+1})$ is $k+1$. # Since these two spaces cannot intersect we have a contradiction, since $(n-k) + (k+1) = n+1 > n$. ∎ # # # # Here we show an example of a simple low-rank approximation using the SVD. Consider the Hilbert matrix: function hilbertmatrix(n) ret = zeros(n,n) for j = 1:n, k=1:n ret[k,j] = 1/(k+j-1) end ret end hilbertmatrix(5) # That is, $H[k,j] = 1/(k+j-1)$. This is a famous example of a matrix with rapidly decreasing singular values: H = hilbertmatrix(100) U,σ,V = svd(H) scatter(σ; yscale=:log10) # Note numerically we typically do not get exactly zero singular values so the rank is always # treated as $\min(m,n)$. # Because the singular values decay rapidly # we can approximate the matrix very well with a rank 20 matrix: k = 20 # rank Σ_k = Diagonal(σ[1:k]) U_k = U[:,1:k] V_k = V[:,1:k] norm(U_k * Σ_k * V_k' - H) # Note that this can be viewed as a _compression_ algorithm: we have replaced a matrix with # $100^2 = 10,000$ entries by two matrices and a vector with roughly $4,000$ entries while losing # very little information. # In the problem sheet we explore the usage of low rank approximation to smooth functions. # # # # ## 4. Condition numbers # # We have seen that floating point arithmetic induces errors in computations, and that we can typically # bound the absolute errors to be proportional to $C ϵ_{\rm m}$. We want a way to bound the # effect of more complicated calculations like computing $A 𝐱$ or $A^{-1} 𝐲$ without having to deal with # the exact nature of floating point arithmetic. Here we consider only matrix-multiplication but will make a remark # about matrix inversion. # # To justify what follows, we first observe that errors in implementing matrix-vector multiplication # can be captured by considering the multiplication to be exact on the wrong matrix: that is, `A*x` # (implemented with floating point) is precisely $(A + δA) 𝐱$ where $δA$ has small norm, relative to $A$. # This is known as _backward error analysis_. # # # # To discuss floating point errors we need to be precise about the order in which the operations happen. # We will use the implementation `mul(A,x)`, which we denote ${\rm mul}(A, 𝐱)$. (Note that `mul_rows` actually # does the _exact_ same operations, just in a different order.)
Note that each entry of the result is in fact a dot product # of the corresponding row of $A$ with $𝐱$, so we first consider the error in the dot product `dot(𝐱,𝐲)` as implemented in floating-point, # which we denote ${\rm dot}(𝐱, 𝐲)$. # # We first need a helper proposition: # # **Proposition** If $|ϵ_k| ≤ ϵ$ and $n ϵ < 1$, then # $$ # \prod_{k=1}^n (1+ϵ_k) = 1+θ_n # $$ # for some constant $θ_n$ satisfying $|θ_n| ≤ {n ϵ \over 1-nϵ}$. # # The proof is left as an exercise (Hint: use induction). # # **Lemma (dot product backward error)** # For $𝐱, 𝐲 ∈ ℝ^n$, # $$ # {\rm dot}(𝐱, 𝐲) = (𝐱 + δ𝐱)^⊤ 𝐲 # $$ # where # $$ # |δ𝐱| ≤  {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} |𝐱 |, # $$ # where $|𝐱 |$ means absolute-value of each entry. # # # **Proof** # # Note that # $$ # \begin{align*} # {\rm dot}(𝐱, 𝐲) &= \{⋯[[(x_1 ⊗ y_1) ⊕ (x_2 ⊗ y_2)] ⊕(x_3⊗ y_3)] ⊕⋯\}⊕(x_n ⊗ y_n) \\ # & = \{⋯[[(x_1 y_1)(1+δ_1) + (x_2 y_2)(1+δ_2)](1+γ_2) +x_3 y_3(1+δ_3)](1+γ_3) + ⋯ +x_n y_n(1+δ_n) \}(1+γ_n) \\ # & = ∑_{j = 1}^n x_j y_j (1+δ_j) ∏_{k=j}^n (1 + γ_k) \\ # & = ∑_{j = 1}^n x_j y_j (1+θ_j) # \end{align*} # $$ # where we denote the errors from multiplication as $δ_k$ and those from addition by $γ_k$ # (with $γ_1 = 0$). Note that each $θ_j$ arises from a product of at most $n$ factors of the form $(1+ϵ)$ with $|ϵ| ≤ ϵ_{\rm m}/2$, # so the previous proposition tells us # $$ # |θ_j| ≤ {n ϵ_{\rm m} \over 2- nϵ_{\rm m}}. # $$ # Thus # $$ # δ𝐱 = \begin{pmatrix} x_1 θ_1 \cr x_2 θ_2 \cr \vdots \cr x_n θ_n\end{pmatrix} # $$ # and the entrywise bound on $|θ_j|$ gives $|δ𝐱| ≤ {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} |𝐱|$, and hence also # $$ # \| δ𝐱 \| ≤ {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \| 𝐱 \| # $$ # # ∎ # # **Theorem (matrix-vector backward error)** # For $A ∈ ℝ^{m × n}$ and $𝐱 ∈ ℝ^n$ we have # $$ # {\rm mul}(A, 𝐱) = (A + δA) 𝐱 # $$ # where # $$ # |δA| ≤ {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} |A|. # $$ # Therefore # $$ # \begin{align*} # \|δA\|_1 &≤  {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \|A \|_1 \\ # \|δA\|_2 &≤  {\sqrt{\min(m,n)} n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \|A \|_2 \\ # \|δA\|_∞ &≤  {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \|A \|_∞ # \end{align*} # $$ # # **Proof** # The bound on $|δA|$ is implied by the previous lemma. # The $1$- and $∞$-norm bounds follow since # $$ # \|A\|_1 = \||A|\|_1 \hbox{ and } \|A\|_∞ = \||A|\|_∞ # $$ # This leaves the 2-norm case, which is a bit more challenging as there are matrices # $A$ such that $\|A\|_2 ≠ \| |A| \|_2$. # Instead we will prove the result by going through the Frobenius norm and using: # $$ # \|A \|_2 ≤ \|A\|_F ≤ \sqrt{r} \| A\|_2 # $$ # where $r$ is the rank of $A$ (see PS5) # and $\|A\|_F = \| |A| \|_F$, # so we deduce: # $$ # \begin{align*} # \|δA \|_2 &≤ \| δA\|_F = \| |δA| \|_F ≤ {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \| |A| \|_F \\ # &= {n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \| A \|_F ≤ {\sqrt{r} n ϵ_{\rm m} \over 2-nϵ_{\rm m}}\| A \|_2 \\ # &≤ {\sqrt{\min(m,n)} n ϵ_{\rm m} \over 2-nϵ_{\rm m}} \|A \|_2 # \end{align*} # $$ # # ∎ # # So now we get to a mathematical question independent of floating point: # can we bound the _relative error_ in approximating # $$ # A 𝐱 ≈ (A + δA) 𝐱 # $$ # if we know a bound on $\|δA\|$? # It turns out we can, in terms of the _condition number_ of the matrix: # # **Definition (condition number)** # For a square matrix $A$, the _condition number_ (in $p$-norm) is # $$ # κ_p(A) := \| A \|_p \| A^{-1} \|_p # $$ # with the $2$-norm: # $$ # κ_2(A) = {σ_1 \over σ_n}. # $$ # # # **Theorem (relative-error for matrix-vector)** # The _worst-case_ relative error in $A 𝐱 ≈ (A + δA) 𝐱$ is # $$ # {\| δA 𝐱 \| \over \| A 𝐱 \| } ≤ κ(A) ε # $$ # if we have the relative perturbation error $\|δA\| = \|A \| ε$.
# # **Proof** # We can assume $A$ is invertible (as otherwise $κ(A) = ∞$). Denote $𝐲 = A 𝐱$ and we have # $$ # {\|𝐱 \| \over \| A 𝐱 \|} = {\|A^{-1} 𝐲 \| \over \|𝐲 \|} ≤ \| A^{-1}\| # $$ # Thus we have: # $$ # {\| δA 𝐱 \| \over \| A 𝐱 \| } ≤ \| δA\| \|A^{-1}\| ≤ κ(A) {\|δA\| \over \|A \|} # $$ # # ∎ # # # Thus for floating point arithmetic we know the error is bounded by $κ(A) {n ϵ_{\rm m} \over 2-nϵ_{\rm m}}$. # # If one uses QR to solve $A 𝐱 = 𝐲$ the condition number also gives a meaningful bound on the error. # As we have already noted, there are some matrices where PLU decompositions introduce large errors, so # in that case well-conditioning is not a guarantee (but it still usually works). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ROOT='/home/alexanderliao/data/Kaggle/competitions/tgs-salt-identification-challenge' import pandas as pd from tqdm import tqdm_notebook import numpy as np from keras.preprocessing.image import load_img train_df = pd.read_csv(ROOT+"/train.csv", index_col="id", usecols=[0]) depths_df = pd.read_csv(ROOT+"/depths.csv", index_col="id") train_df = train_df.join(depths_df) test_df = depths_df[~depths_df.index.isin(train_df.index)] train_df["images"] = [np.array(load_img(ROOT+"/train/images/{}.png".format(idx), grayscale=True)) / 255 for idx in tqdm_notebook(train_df.index)] #train_df["masks"] = [np.array(load_img(ROOT+"/train/masks/{}.png".format(idx), grayscale=True)) / 255 for idx in tqdm_notebook(train_df.index)] train_im=[] for image in train_df["images"]: train_im.append(image.flatten()) train_df.drop(["images"]) """ train_msk=[] for image in train_df["masks"]: train_msk.append(image.flatten()) """ test_df["images"] = [np.array(load_img(ROOT+"/test/images/{}.png".format(idx), grayscale=True)) / 255 for idx in tqdm_notebook(test_df.index)] # - test_im=[] for image in test_df["images"]: test_im.append(image.flatten()) test_df.drop(["images"]) # + from sklearn.decomposition import PCA from sklearn.manifold import TSNE pca = PCA(n_components=20) pca_result = pca.fit_transform(train_im) tsne = TSNE(n_components=3, verbose=1, perplexity=100, n_iter=5000) #tsne_results = tsne.fit_transform(pca_result) tsne_results = pca_result # + train_df['tsne-one'] = tsne_results[:,0] train_df['tsne-two'] = tsne_results[:,1] train_df['tsne-three'] = tsne_results[:,2] """ train_df['pca-one'] = pca_result[:,0] train_df['pca-two'] = pca_result[:,1] train_df['pca-three'] = pca_result[:,2] """ print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_)) pca = PCA(n_components=50) pca_result = pca.fit_transform(train_msk) train_df['pca-mask-one'] = pca_result[:,0] train_df['pca-mask-two'] = pca_result[:,1] train_df['pca-mask-three'] = pca_result[:,2] print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_)) # + from matplotlib import pyplot from mpl_toolkits.mplot3d import Axes3D import random # %matplotlib inline fig = pyplot.figure() ax = Axes3D(fig) ax.scatter(train_df['tsne-one'],train_df['tsne-two'],train_df['tsne-three']) pyplot.show() # + pca = PCA(n_components=20) pca_result = pca.fit_transform(test_im) tsne = TSNE(n_components=3, verbose=1, perplexity=100, n_iter=5000) #tsne_results = tsne.fit_transform(pca_result) tsne_results = pca_result test_df['tsne-one'] = tsne_results[:,0] test_df['tsne-two'] = tsne_results[:,1] 
test_df['tsne-three'] = tsne_results[:,2] fig = pyplot.figure() ax = Axes3D(fig) ax.scatter(test_df['tsne-one'],test_df['tsne-two'],test_df['tsne-three']) pyplot.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="qEFi0iFlfs1Q" # # Amdahl's law # # In this notebook, we will explore and discuss Amdahl's law. # + [markdown] id="lIYdn1woOS1n" # # Speed up from parallelism # # Assume that you have a program that requires 100 seconds to complete. If you can fully parallelize the program how long does it take to run it on two cores? # + id="_MhSv1VDXHI6" # execution time with 2 processors 100/2 # + [markdown] id="HqikND1pXXFm" # So it will take 50 seconds to run the program on two cores. # # Let's define speed up (S) as: # # $S = \frac{OldExecutionTime}{NewExecutionTime}$ # # This result in a speed up of S = (old execution time)/(new execution time) = 100/50 = 2. # # Similarly, if we have 100 cores available, we can run the program in just 1 second (100 seconds/100 cores), resulting in a speed up of 100 (100 second/1 second). # # + [markdown] id="lczQQFNfXNfS" # # Speed up from non-fully paralellizable programs # # If 10% of the program cannot be parallized and have to run sequentially, how long does it take to run it on 2 cores? # + id="Wm4mNuFEZPRP" # execution time for 2 cores when 10% of the program is serial 0.10 * 100 + (1-0.10)*100/2 # + [markdown] id="Tirqd9eXZjS_" # So, when 90% of the program can be parallized, the speed up is now only, S = 100/55 = 1.8. When you have 100 processors, what will the speed up be? # + id="GgFpwIWLZuIq" # speed up with 2 cores S2 = 100/55 # execution time from 100 cores, when 10% of the program is serial E100 = 0.10 * 100 + (1-0.10)*100/100 # speed up with 100 cores S100 = 100/(0.10 * 100 + (1-0.10)*100/100) print("S2=", S2, "\nE100=", E100, "\nS100=", S100) # + [markdown] id="TU0SAzNjfkXF" # As you can see from the calculation above, the speed up is not anywhere near 100x anymore, but it is closer to only 9x. # # + [markdown] id="GyOVKvkBauXz" # # Amdahl's Law # # This was observed by in the 1960s. He stated that the speed up of a program whose a portion, _p_, of its execution time can be parallized is limited to: # # $S = \frac{1}{1-p}$ # # In our case, _p_ was 0.9 (90%), so our speed up is limited to: # # $S = \frac{1}{1-0.9} = \frac{1}{0.1}=10$ # + [markdown] id="1g3kV-6Dcq8c" # Let's do some experiments to see how much speed up we can get with a thousand cores, 1 million cores, and 1 billion cores. # + id="u4aRrBZLc_Mh" # execution time with 1 core E1 = 100 # a thousand cores E1000 = 0.10 * E1 + (1-0.10)*E1/1000 S1000 = E1/E1000 # a million cores Emillion = 0.10 * E1 + (1-0.10)*E1/1000000.0 Smillion = E1/Emillion # a billion cores Ebillion = 0.10 * E1 + (1-0.10)*E1/1000000000.0 Sbillion = E1/Ebillion print("S1000=",S1000) print("Smillion=",Smillion) print("Sbillion=",Sbillion) # + [markdown] id="4NSqFLD-eF6c" # So, we're getting closer to 10 but not quite. That will only get reached when we have infinite number of processors. # # $E_{\infty} = 0.10 * E1 + \frac{(1-0.10)*E1}{\infty}$ # # Since anything divided by infinity is zero, we have $E_{\infty} = 10$, resulting in a speed up $S = 10$. 
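# + [markdown]
# *Aside (not in the original notebook):* the calculations above can be wrapped in a small helper
# so the limit is easy to see. The function name `amdahl_speedup` and the serial fraction `0.10`
# are illustrative choices matching the running example.

# +
# Amdahl's law as a function: s is the serial (non-parallelizable) fraction, n the number of cores.
def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# As n grows, the speedup approaches 1/s = 10, the limit derived above.
for n in [2, 100, 1000, 10**6, 10**9]:
    print(n, "cores ->", amdahl_speedup(0.10, n))
# -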
# + id="7dGYK4OueTv6" # infinite cores Ebillion = 0.10 * E1 + (1-0.10)*E1/1000000000.0 Sbillion = E1/Ebillionprint("Sbillion=",Sbillion) # + [markdown] id="hQsB8gL0q62P" # If we graph the speed up $S$ versus the number of processors for different number of processors, you'd get the following graphs, similar to the one in the [Wikipedia](https://en.wikipedia.org/wiki/Amdahl%27s_law). # + id="MtpPPGdyrh-b" from matplotlib import pyplot as plt plt.rcParams["figure.figsize"] = [7.00, 4.00] plt.rcParams["figure.autolayout"] = True fig, ax = plt.subplots() # x-axis values n = [1, 2, 4, 8, 16, 64, 128, 256, 512, 1024] # Y-axis values is the speed up, calculated using s = 1 / (1-p) + p/n serialPortion = [.5, .75, .9, .95, .99] for p in serialPortion: y = [] for x in n: y.append(1.0/((1.0-p)+p/x)) ax.plot(n, y, label=str(p)) leg = plt.legend(loc='upper right') leg.get_frame().set_alpha(0.6) # function to show the plot plt.show() # + [markdown] id="aMsgrOFdf3Hf" # # Further readings # # Note that the discussion above is related to a fixed problem size (this is also called _strong scaling_). In many cases, problem sizes scale with the amount of available resources (_weak scaling_). For this another more optimistic law, called Gustafson's law, is more applicable. # # Here are some references if you'd like to learn more about the topic: # # 1. [Amdahl's paper from 1967 AFIPS](https://archive.is/o/xOrx3/www.futurechips.org/wp-content/uploads/2011/06/5_amdahl.pdf). # 2. [Weak Scaling vs Strong Scaling](https://hpc-wiki.info/hpc/Scaling_tutorial#Strong_or_Weak_Scaling). HPC Wiki. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tensorflow中的Attention机制源码解析 # # `tensorflow`中的`attention_wrapper`源码解析。 # # ## Attention机制 # # 首先,我们回顾一下`attention mechanism`的定义。 # # 假设我们的source sequence为长度为$\text{n}$的序列$\mathbf{x}$,target sequence为长度为$\text{m}$的序列$\mathbf{y}$,则: # # $$\mathbf{x}=[x_0,x_1,\dots,x_n]$$ # $$\mathbf{y}=[y_0,y_1,\dots,y_m]$$ # # 在第$i$个time step,source sequence产生一个`hidden state`: # # $$\boldsymbol{h}_i = [\overrightarrow{\boldsymbol{h}}_i^\top; \overleftarrow{\boldsymbol{h}}_i^\top]^\top, i=1,\dots,n$$ # # 其中,$\overrightarrow{\boldsymbol{h}}_i^\top$和$\overleftarrow{\boldsymbol{h}}_i^\top$表示的是`Bidirectional RNN`产生的两个hidden state,如果不是使用BiRNN,则只有一个。 # # 在第$t$个time step,decoder产生的`hidden state`表示为: # # $$\boldsymbol{s}_t=f(\boldsymbol{s}_{t-1}, y_{t-1}, \mathbf{c}_t),t=1,\dots,m$$ # # 因为要产生target sequence的t时刻的输出,需要接收来自三个方向的信息,所以上面的公式很好理解。分别为: # # * $\boldsymbol{s}_{t-1}$,前一个时刻的状态 # * $y_{t_1}$,前一个时刻的输入序列 # * $\mathbf{c}_t$,当前时刻的context vector(上下文) # # 其中,$\mathbf{c}_t$就是`context vector`,表示`alignment scores`和source sequence的`hiddens states`的加权和: # # $$ # \begin{aligned} # \mathbf{c}_t &= \sum_{i=1}^n \alpha_{t,i} \boldsymbol{h}_i & \small{\text{; Context vector for output }y_t}\\ # \alpha_{t,i} &= \text{align}(y_t, x_i) & \small{\text{; How well two words }y_t\text{ and }x_i\text{ are aligned.}}\\ # &= \frac{\text{score}(\boldsymbol{s}_{t-1}, \boldsymbol{h}_i)}{\sum_{i'=1}^n \text{score}(\boldsymbol{s}_{t-1}, \boldsymbol{h}_{i'})} & \small{\text{; Softmax of some predefined alignment score.}}. 
# \end{aligned} # $$ # # 也就是说: # # $$\mathbf{c}_t = \sum_{i=1}^n \frac{\text{score}(\boldsymbol{s}_{t-1}, \boldsymbol{h}_i)}{\sum_{i'=1}^n \text{score}(\boldsymbol{s}_{t-1}, \boldsymbol{h}_{i'})} \boldsymbol{h}_i$$ # # 根据公式可以看到:**$t$时刻的`context vector`需要计算所有的输入序列的隐状态**! # # $\alpha_{t,i}$表示的是第$t$个时刻的输入和第$i$个输出之间的关系,那么所有的$t$和$i$组成的$\alpha$就是**整个输入序列和输出序列之间的关系**! # # 所以,这个$\alpha$是一个二维矩阵!一般我们称呼它为`alignment matrix`!下面有一张示意图: # # # # # 那么,输入序列和输出序列的各个时间步之间的关系有多强烈呢?这就要用上面公式中的`score`函数来计算。 # # `score`函数有很多选择,以下是常用的几种: # # * $\text{score}(\boldsymbol{s}_t, \boldsymbol{h}_i) = \mathbf{v}_a^\top \tanh(\mathbf{W}_a[\boldsymbol{s}_t; \boldsymbol{h}_i])$,加性注意力(additive attention) # * $\text{score}(\boldsymbol{s}_t, \boldsymbol{h}_i) = \boldsymbol{s}_t^\top\mathbf{W}_a\boldsymbol{h}_i$,通用注意力(General) # * $\text{score}(\boldsymbol{s}_t, \boldsymbol{h}_i) = \boldsymbol{s}_t^\top\boldsymbol{h}_i$,点积注意力(dot-product attention),属于乘性注意力(multiplicative) # * $\text{score}(\boldsymbol{s}_t, \boldsymbol{h}_i) = \frac{\boldsymbol{s}_t^\top\boldsymbol{h}_i}{\sqrt{n}}$,上面的缩放版本 # # 上面的所有$\text{W}_\alpha$和$\mathbf{v}_a^\top$都是可训练的权值矩阵。 # # [《Attention?Attention!》](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)一文有一张比较全面的表格,展示了多种attention机制: # ![attention_mechanism_family](images/attention_mechanism_table.png) # ## Tensorflow中的Attention机制 # # `AttentionMechanism`是一个抽象类,或者说是接口。 # # 它的代码定义如下: # class AttentionMechanism(object): @property def alignments_size(self): raise NotImplementedError @property def state_size(self): raise NotImplementedError # 从这个接口定义,我们可以看出,attention机制只关注两点: # # * alignments_size # * state_size # # 在这里,我们看不出这两个属性是什么意思。那么我们先看看它的子类`_BaseAttentionMechanism`吧。 # # 根据文档,`_BazeAttentionMechanism`是一个最基本的注意力机制,它提供几个通用的功能,如下: # # * 存储query_layer和memory_layer # * 预处理并存储memory # # 什么是`query_layer`和`memory_layer`呢?我们留到后面解答。 # # 首先看看`_BaseAttentionMechanism`的`__init__`函数,它的签名如下: # # |参数名 |解释 | # |:-----------------------|:---------------------------------------------| # |query_layer |`tf.layers.Layer`的子类 | # |memory |通常是RNN encoder的输出 | # |probability_fn |概率计算函数,默认是`softmax` | # |memory_sequence_length |序列长度 | # |memory_layer |`tf.layers.Layer`的子类 | # |check_inner_dims_defined|是否检查内部维度 | # |score_mask_value |掩码值,默认是负无穷 | # |name |变量域 | # # 下面请看代码的注释,来理解初始化过程。 # def __init__(self, query_layer, memory, probability_fn, memory_sequence_length=None, memory_layer=None, check_inner_dims_defined=True, score_mask_value=None, name=None): if (query_layer is not None and not isinstance(query_layer, layers_base.Layer)): raise TypeError("query_layer is not a Layer: %s" % type(query_layer).__name__) if (memory_layer is not None and not isinstance(memory_layer, layers_base.Layer)): raise TypeError("memory_layer is not a Layer: %s" % type(memory_layer).__name__) self._query_layer = query_layer self._memory_layer = memory_layer self.dtype = memory_layer.dtype if not callable(probability_fn): raise TypeError("probability_fn must be callable, saw type: %s" % type(probability_fn).__name__) # 默认是负无穷 if score_mask_value is None: score_mask_value = dtypes.as_dtype(self._memory_layer.dtype).as_numpy_dtype(-np.inf) self._probability_fn = lambda score, prev: ( # pylint:disable=g-long-lambda # _maybe_mask_score稍后解释 probability_fn(_maybe_mask_score(score, memory_sequence_length, score_mask_value),prev)) with ops.name_scope( name, "BaseAttentionMechanismInit", nest.flatten(memory)): # 稍后解释 self._values = _prepare_memory(memory, memory_sequence_length, 
check_inner_dims_defined=check_inner_dims_defined) # shape = [B, T,...] self._keys = self.memory_layer(self._values) if self.memory_layer else self._values # 获取keys的第一个维度,即batch size self._batch_size = self._keys.shape[0].value or array_ops.shape(self._keys)[0] # 获取keys的第二个维度,即alignments size,这就是序列的最大长度 self._alignments_size = self._keys.shape[1].value or array_ops.shape(self._keys)[1] # ### `_maybe_mask_score` # # 输入的张量score的形状是[B,T,..],并且这个张量是对齐的,也就是每个序列的T这个维度的值都是一样的。但是实际上,我们同一批序列中,序列的长短是不一定一样的,也就是说我们的score张量中的某些值是无效的。我们需要把这些无效的值的位置找出来,然后给这些位置设置一个特定的数值,使得在这些位置的影响很小,趋于0。一般来说,设置一个**负无穷**。根据原来的序列的真实长度张量,以及我们的score张量,这样就可以找出哪些位置是无效的。当然,这要结合`probability_fn`函数,因为只有根据这个函数的性质,我们才知道赋予什么值,会使得这个函数的值趋于0。 # # 这也就说明了:如果我们的输入序列是已经对齐了的,那么我们可以直接设置`memory_sequence_length=None`,这样不就需要 # 进一步处理了! # # 这个函数的签名解释如下: # # |参数名 |解释 | # |:---------------------|:---------------------------------------------------| # |score |分数张量,也就是我们上面提到的`score`函数的结果,形状是[B,T]| # |memory_sequence_length|输入数据的长度张量,形状是[B] | # |score_mask_value |掩码值,默认是负无穷 | # # # 这个函数的实现如下: # def _maybe_mask_score(score, memory_sequence_length, score_mask_value): # 如果序列长度未指定,则不进行掩码计算 if memory_sequence_length is None: return score message = ("All values in memory_sequence_length must greater than zero.") with ops.control_dependencies( [check_ops.assert_positive(memory_sequence_length, message=message)]): # 进行sequence mask,得到的是一个Boolean类型(或者只有0和1)的张量,score元素为0的地方为False,其他位置为True score_mask = array_ops.sequence_mask(memory_sequence_length, maxlen=array_ops.shape(score)[1]) # mask张量 score_mask_values = score_mask_value * array_ops.ones_like(score) # 返回mask之后的张量,原来score位置为0的地方替换成了score_mask_value return array_ops.where(score_mask, score, score_mask_values) # ### `_prepare_memory` # # 这个函数是把输入张量转化成掩码之后的张量。为什么这里需要进行掩码处理呢? # # 首先memory张量的形状是[B,T,...],其中T是这一批次的数据中的最大长度。为了搞清楚这T个time step之间,哪些是有效位置,哪些是无效的(通常进行对齐添加的标记),我们需要把这些无效的位置找出来,然后数值上设为0。这样这些无效位置就不会影响后面的计算。 # # 当然,如果你手动对齐了输入序列(在较短的序列后面补0),保证每个序列的长度都一样,那么你可以设置参数`memory_sequence_length=None`,这样就不需要额外处理了。同样的,即使你对齐处理过,你也可以传入一个`memory_sequence_length`张量,这样不会有影响。 # # 签名如下: # # |参数名 |解释 | # |:-----------------------|:-----------------------------| # |memory |输入张量,形状为[B,T,...] 
| # |memory_sequence_length |序列长度张量,形状为[B] | # |check_inner_dims_defined|Boolean,检查最外面两个维度是否定义| # # 注意,掩码只是作用于[B,T]两个维度,之后如果还有其他维度,应该处理使之不进行掩码计算。这个在代码中有很明显的处理。 # # 下面,详细解释代码: # # + def _prepare_memory(memory, memory_sequence_length, check_inner_dims_defined): memory = nest.map_structure( lambda m: ops.convert_to_tensor(m, name="memory"), memory) if memory_sequence_length is not None: memory_sequence_length = ops.convert_to_tensor(memory_sequence_length, name="memory_sequence_length") # 检查innder dim if check_inner_dims_defined: def _check_dims(m): # 检查第三个维度及其之后的维度是否存在 if not m.get_shape()[2:].is_fully_defined(): raise ValueError( "Expected memory %s to have fully defined inner dims, " "but saw shape: %s" % (m.name, m.get_shape())) nest.map_structure(_check_dims, memory) if memory_sequence_length is None: seq_len_mask = None else: # 进行sequence_mask seq_len_mask = array_ops.sequence_mask( memory_sequence_length, maxlen=array_ops.shape(nest.flatten(memory)[0])[1], dtype=nest.flatten(memory)[0].dtype) # 获取batch size seq_len_batch_size = (memory_sequence_length.shape[0].value or array_ops.shape(memory_sequence_length)[0]) def _maybe_mask(m, seq_len_mask): # 获取维度数量,也就是阶 rank = m.get_shape().ndims rank = rank if rank is not None else array_ops.rank(m) # 取出B,T之外的维度,设置成1,当计算掩码的时候,这些维度的值会保留原来的值 extra_ones = array_ops.ones(rank - 2, dtype=dtypes.int32) # 获取batch size m_batch_size = m.shape[0].value or array_ops.shape(m)[0] if memory_sequence_length is not None: message = ( "memory_sequence_length and memory tensor batch sizes do not match.") with ops.control_dependencies([ check_ops.assert_equal(seq_len_batch_size, m_batch_size, message=message)]): # 将seq_len_mask重塑成[B,T,...]形状的张量 seq_len_mask = array_ops.reshape( seq_len_mask, array_ops.concat((array_ops.shape(seq_len_mask), extra_ones), 0)) # 掩码计算,并且返回结果 return m * seq_len_mask else: return m return nest.map_structure(lambda m: _maybe_mask(m, seq_len_mask), memory) # - # 以上首先就可以回答`AttentionMechanism`类的第一个函数是什么这个问题。因为`_BaseAttentionMechansim`的实现很简单: # @property def alignments_size(self): # __init__函数里面,self._alignments_size = self._keys.shape[1].value or array_ops.shape(self._keys)[1] # 实际上就是这一批序列的最大长度 return self._alignments_size # 那么第二个问题,`state_size`又是什么呢?在`_BaseAttentionMechanism`里面的实现,和上面一样: # @property def state_size(self): return self._alignments_size # 也就是说,默认情况下,`alignments_size`和`state_size`是一样的! # # 那么为什么这两个值默认是一样的呢? # # 回顾文章开始的attention机制的定义,我们发现知道,`alignment matrix`的两个维度的大小,分别是输入序列和输出序列的长度,并且,在编解码的过程中,每一个时间步都会产生一个`hidden state`,所以`hidden state`的数量也是和序列长度一样的。 # # 而我们这里的`alinments_size`就是对应`alignment matrix`,`state`就是对应`hidden state`的! # # ### Query,Keys和Values # # 你可能已经发现了,在`_BaseAttentionMechanism`的`__init__`函数里,出现了`self._query_layer`、`self._memory_layer`、`self._keys`和`self._values`等成员变量。那么这些变量指的是什么呢? # # [Attention is all you need](https://arxiv.org/pdf/1706.03762.pdf)一文有一个对attention机制的解释: # # > An attention function can be described as mapping a query and a set of key-values pairs to an output, where the query, keys, values and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. # # 翻译一下就是: # # > 注意力函数可以描述为一个映射过程:把query和key-value键值对映射到一个输出。其中query,keys, values和output都是矢量。output是values的加权和,其中每一个value的权重都是通过一个关于key和query的兼容性函数计算得来的。 # # 我再加一点解释: # # * 所谓的关于key和query的兼容性函数,就是我们的`score`函数! 
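# *Aside (not from the TensorFlow source):* a tiny NumPy sketch of that mapping, with made-up
# shapes and a plain dot-product `score`, just to make the quoted definition concrete.

# +
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy shapes (illustrative only): n source positions, key/value/query dimension d.
n, d = 5, 4
rng = np.random.default_rng(0)
keys   = rng.normal(size=(n, d))   # one key per source position
values = rng.normal(size=(n, d))   # one value per source position
query  = rng.normal(size=(d,))     # e.g. the current decoder state

scores  = keys @ query             # compatibility of the query with each key
weights = softmax(scores)          # alignment weights
context = weights @ values         # output: weighted sum of the values
print(weights.round(3), context.round(3))
# -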
# # 所以关于attention机制,可以再精简一点,概括为:**Attention机制就是,从key-value键值对中,通过确定query和key的相似度,然后再通过value获得最终输出的映射过程。** # # 你已经发现了,上面的定义都是围绕`keys`、`query`和`values`进行的!!! # # 那么言归正传,什么是`keys`、`values`和`query`呢? # # 在`_BaseAttentionMechanism`里面,`keys`和`values`的关系是这样的: # # ```python # self._values = _prepare_memory( # memory, memory_sequence_length, # check_inner_dims_defined=check_inner_dims_defined) # self._keys = ( # self.memory_layer(self._values) if self.memory_layer # pylint: disable=not-callable # else self._values) # ``` # `_prepare_memory`上面已经解释过了。所以: # # * `values`就是`memory或者masked memory`!(memory一般是RNN encoder的输出`encoder_output`!) # * `keys`就是`values`或者`memory_layer处理过的values`!(keys和values可以认为是一样的!) # # 那么,这里为什么没有出现`query`呢?很简单,因为`query`是在调用的时候传入的参数,所以这里没有`query`!具体的可以从`_BaseAttentionMechanism`的子类的`__call__`函数解读出来。 # # 总结一下: # # * `values`就是`encoder_output`或者`masked encoder_output` # * `keys`就是`values`或者是经过`memory_layer`处理后的`values` # * `query`是调用的时候传入的参数,一般与来自RNN decoder # # 疑问: **这里的`query`、`keys`和`values`怎么和`score`函数对应起来?** # (提示,可以参考attention_wrapper代码。) # # 这个问题留到后面解释。 # # ### memory_layer和query_layer # # 上面还有两个疑惑,`memory_layer`和`query_layer`是什么? # # `_BaseAttentionMechanism`没有解答,但是它的子类`LuongAttention`和`BahdanauAttention`可以找到答案。 # # `LuongAttention`的`__init__`函数有定义: # # ```python # query_layer=None, # memory_layer=layers_core.Dense( # num_units, name="memory_layer", use_bias=False, dtype=dtype) # ``` # # 在`LuongAttention`里,`query_layer`是`None`,`memory_layer`只是一个**全连接层**!,并且激活函数默认是`None`,也就是说**memory_layer只是对memory做一个简单的线性变换!** # # `BahdanauAttention`是这样定义的: # # ```python # query_layer=layers_core.Dense( # num_units, name="query_layer", use_bias=False, dtype=dtype) # memory_layer=layers_core.Dense( # num_units, name="memory_layer", use_bias=False, dtype=dtype) # ``` # # 在`BahdanauAttention`里,`query_layer`和`memory_layer`都是一个全连接层,激活函数默认也是`None`,所以`query_layer`和`memory_layer`分别是对`query`和`memory`的线性变换。 # # 现在`memory_layer`和`query_layer`已经清楚了,但是现在又有一个新的问题:**`LuongAttention`和`BahdanauAttention`有什么区别呢?** # # ### LuongAttention和BahdanauAttention # # 不同的attention机制,最大和最关键的区别就在于它们各自的`score`函数。 # # 我们首先来看看`LuongAttention`的`score`函数: # # ```python # def _luong_score(query, keys, scale): # # query shape: [batch_size, depth] # # keys shape: [batch_size, max_time, depth] # depth = query.get_shape()[-1] # key_units = keys.get_shape()[-1] # if depth != key_units: # raise ValueError( # "Incompatible or unknown inner dimensions between query and keys. " # "Query (%s) has units: %s. Keys (%s) have units: %s. " # "Perhaps you need to set num_units to the keys' dimension (%s)?" # % (query, depth, keys, key_units, key_units)) # dtype = query.dtype # # # 为后面的矩阵乘法做准备,在第二个维度上扩充一个维度 # # [batch_size, depth] -> [batch_size, 1, depth] # query = array_ops.expand_dims(query, 1) # # # [batch_size, 1, depth]*[batch_size, depth, max_time] -> [batch_size, 1, max_time] # score = math_ops.matmul(query, keys, transpose_b=True) # # 将第二个维度压缩掉,[batch_size, 1, max_time] -> [batch_size, max_time] # score = array_ops.squeeze(score, [1]) # # # 缩放,可选项 # if scale: # # Scalar used in weight scaling # g = variable_scope.get_variable( # "attention_g", dtype=dtype, # initializer=init_ops.ones_initializer, shape=()) # score = g * score # return score # ``` # # `LuongAttention`计算`socre`的过程很简单:**query和keys做矩阵乘法而已!** # # 从数学上来说,这里的`score`计算公式如下: # # $$\text{score}(\boldsymbol{s}_t, \boldsymbol{h}_i) = \boldsymbol{s}_t^\top\boldsymbol{h}_i$$ # # 所以,`LuongAttention`属于**multiplicative attention**!,具体一点,就是**dot-product attention**! 
# # 我们继续看看`BahdanauAttention`的`score`函数: # # ```python # def _bahdanau_score(processed_query, keys, normalize): # # processed_query shape: [batch_size, depth] # # keys shape: [batch_size, max_time, depth] # dtype = processed_query.dtype # # 获取depth(num_units),是第三个维度 # num_units = keys.shape[2].value or array_ops.shape(keys)[2] # # 将query变形,[batch_size, depth] -> [batch_size, 1, depth] # processed_query = array_ops.expand_dims(processed_query, 1) # v = variable_scope.get_variable( # "attention_v", [num_units], dtype=dtype) # # 归一化,可选项 # if normalize: # # Scalar used in weight normalization # g = variable_scope.get_variable( # "attention_g", dtype=dtype, # initializer=init_ops.constant_initializer(math.sqrt((1. / num_units))), # shape=()) # # tanh激活函数的偏置 # b = variable_scope.get_variable( # "attention_b", [num_units], dtype=dtype, # initializer=init_ops.zeros_initializer()) # # normed_v = g * v / ||v|| # normed_v = g * v * math_ops.rsqrt( # math_ops.reduce_sum(math_ops.square(v))) # # query和keys直接相加 # return math_ops.reduce_sum( # normed_v * math_ops.tanh(keys + processed_query + b), [2]) # else: # # query和keys直接相加 # return math_ops.reduce_sum(v * math_ops.tanh(keys + processed_query), [2]) # ``` # # 可以看出,`BahdanauAttention`属于**additive attention**! # # ## 参考文章 # # 1.[Attention?Attention!](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms) # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import env import eval_model import wrangle3 from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn.cluster import KMeans from sklearn.preprocessing import MinMaxScaler, RobustScaler from sklearn.model_selection import train_test_split import warnings warnings.filterwarnings('ignore') pd.set_option("display.max_columns", None) pd.set_option("display.max_rows", 100) pd.set_option('display.float_format', lambda x: '%.5f' % x) import gmaps import gmaps.datasets gmaps.configure(api_key="") #Modeling Tools from sklearn.model_selection import train_test_split from sklearn.impute import SimpleImputer import statsmodels.api as sm from statsmodels.formula.api import ols from datetime import date from scipy import stats ## Evaluation tools from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from math import sqrt # - train, X_train, y_train, X_validate, y_validate, X_test, y_test=wrangle3.wrangle() X_train.head() X_validate.head() # Target: Logerror y_train plt.figure(figsize=(8,6)) plt.scatter(range(y_train.shape[0]), np.sort(y_train.values)) plt.xlabel('index', fontsize=12) plt.ylabel('logerror', fontsize=12) plt.show() # + #Remove outliers ulimit = np.percentile(y_train.values, 99) llimit = np.percentile(y_train.values, 1) train_df['logerror'].ix[train_df['logerror']>ulimit] = ulimit train_df['logerror'].ix[train_df['logerror'] # - ... # - (Your response here) # # # ## Model context # # The deterministic variables `L, R, C` are the *designed* component values. # # The random variables `dL, dR, dC` are *perturbations* to the designed component values. 
# # # Inputs # # ## Assess input variability # # ### __qX__ Overview of inputs # ( md_circuit # solution-begin >> gr.ev_sample(n=1e3, df_det="nom", skip=True) >> gr.pt_auto() # solution-end ) # *Observations* # # # - ... # - (Your response here) # # # ### __qX__ Compare designed and realized values # # The following plot shows the designed and realized capacitance values. Inspect the plot, then change to log scales for both the x and y axes. Answer the questions under *observations* below. # # TASK: Inspect the following plot, change to a log scale # for both x and y ( # NOTE: No need to edit this part of the code md_circuit >> gr.ev_sample( n=20, df_det=gr.df_make( R=1e-1, L=1e-5, C=gr.logspace(-3, 2, 20) ) ) # Visualize >> gr.ggplot(gr.aes("C", "Cr")) + gr.geom_point() # task-begin # TASK: Add a log scale for x and y # task-end # solution-begin + gr.scale_x_log10() + gr.scale_y_log10() # solution-end ) # *Observations* # # # - ... # - (Your response here) # # # # Outputs # # ### __qX__ Overview of outputs # ( md_circuit # solution-begin >> gr.ev_sample(n=1e4, df_det="nom") >> gr.pt_auto() # solution-end ) # *Observations* # # *Note*: If you can't reliably make out the shapes of the distributions, try increasing `n`. # # # - What distribution shapes do the realized component values `Lr, Rr, Cr` have? # - (Your response here) # - What distribution shape does the output `Q` have? # - (Your response here) # - What distribution shape does the output `omega0` have? # - (Your response here) # # # - What distribution shapes do the realized component values `Lr, Rr, Cr` have? # - These are all uniform distributions # - What distribution shape does the output `Q` have? # - This distribution is a bit bell-shaped, but it has a much flatter top and narrower tails than a gaussian. # - What distribution shape does the output `omega0` have? # - This distribution is strongly asymmetric, with a longer right tail (right skew). # # # ### __qX__ Scatter of outputs # ( md_circuit >> gr.ev_sample(n=1e3, df_det="nom") # solution-begin >> gr.ggplot(gr.aes("Q", "omega0")) + gr.geom_point() # solution-end ) # *Observations* # # # - ... # - (Your response here) # # # ### __qX__ Density of outputs # # Visualize `Q` and `omega0` with a 2d bin plot. Increase the sample size `n` to get a "full" plot. # ( md_circuit # solution-begin >> gr.ev_sample(n=1e4, df_det="nom") >> gr.ggplot(gr.aes("Q", "omega0")) + gr.geom_bin2d() # solution-end ) # *Observations* # # # - Briefly describe the distribution of realized performance (values of `Q` and `omega0`). # - (Your response here) # # # - Briefly describe the distribution of realized performance (values of `Q` and `omega0`). # - The distribution is oddly rectangular, with fairly "sharp" square sides. The distribution seems to end abruptly; there are exactly zero counts outside the "curved rectangle". The values of `Q` and `omega0` are negatively correlated; when `Q` is larger `omega0` tends to be smaller. The distribution is also asymmetric; note that highest `count` (brightest fill color) is *not* in the center, but rather towards the bottom-right. 
# # # ### __qX__ Compare nominal with distribution # ( md_circuit >> gr.ev_sample(n=1e4, df_det="nom") >> gr.ggplot(gr.aes("Q", "omega0")) + gr.geom_bin2d() + gr.geom_point( # task-begin # Use the `data` argument to add a geometry for # the output value associated with nominal input values # task-end # solution-begin data=md_circuit >> gr.ev_nominal(df_det="nom"), # solution-end color="salmon", size=4, ) ) # *Observations* # # The nominal design (red point) represents the predicted performance if we assume the nominal circuit component values. # # # - How does the distribution of real circuit performance (values of `Q`, `omega0`) compare with the nominal performance? # - (Your response here) # - Is the most likely performance (values of `Q`, `omega0`) the same as the nominal performance? ("Most likely" is where `count` is the largest.) # - (Your response here) # - Assume that another system depends on the particular values of `Q` and `omega0` provided by this system. Would it be safe to assume the performance is within 1% of the nominal performance? # - (Your response here) # # # - How does the distribution of real circuit performance (values of `Q`, `omega0`) compare with the nominal performance? # - The distribution of real performance is quite large, and not at all symmetric about the nominal performance. The values `Q` and `omega0` exhibit negative correlation: when `Q` is larger `omega0` tends to be smaller. # - Is the most likely performance (values of `Q`, `omega0`) the same as the nominal performance? ("Most likely" is where `count` is the largest.) # - No; the most likely performance is at a larger value of `Q` and smaller value of `omega0`. # - Assume that another system depends on the particular values of `Q` and `omega0` provided by this system. Would it be safe to assume the performance is within 1% of the nominal performance? # - No; we see variability in `Q` and `omega0` far greater than 1%! # # # # Input-to-output Relationships # # + [markdown] tags=[] # ### __qX__ Sinew plot # # - ( md_circuit # solution-begin >> gr.ev_sinews(df_det="swp") >> gr.pt_auto() # solution-end ) # *Observations* # # # - ... 
# - (Your response here) # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Web and Social Project Code # # using Python 2.7 # %matplotlib inline import pandas as pd from collections import Counter import matplotlib.pyplot as plt import numpy as np import networkx as nx data = open("Data/msnbc990928.seq") lines = data.readlines() lines[:10] # ## Data data2 = [lin.strip().split() for lin in lines] data2[:5] len_line=[len(lin) for lin in data2] print len_line[:5] print sum(len_line)/len(len_line) pd.DataFrame(len_line).describe().to_csv("Orig_metrics.csv") # ## Most Common Landing Page first_pages = [lin[0] for lin in data2] first_pages[:10] Counter(first_pages).most_common() first_pages_dict = Counter(first_pages) # ## Scatter plot to show frequency of page when it is a landing page plt.scatter(x = list(first_pages_dict.keys()), y=list(first_pages_dict.values())) plt.show() # ## Scatter plot to show frequency of page when it is an exit page last_pages = [lin[-1] for lin in data2] last_pages[:10] Counter(last_pages).most_common() last_pages_dict = Counter(last_pages) plt.scatter(x = list(last_pages_dict.keys()), y=list(last_pages_dict.values())) plt.show() da = np.array(data2) # ## Max and Min No of Page Views cnt = Counter(np.concatenate(da)) cnt.most_common() sorted(Counter(np.concatenate(da)).keys()) np.concatenate(da).shape pages = ['1','2', '3', '4', '5', '6','7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17'] # ## Exit Rate exit_rate={} for page in pages: # Number of times a page appears in the sequence page_true = len([lin for lin in data2 if page in lin ]) # Number of times a page is an exit page exit_pages_no = len([lin for lin in data2 if page == lin[-1] ]) exit_rate[page] = exit_pages_no / (page_true*1.0) exit_rate tab = pd.DataFrame() tab['pages'] = sorted(exit_rate.keys()) tab tab['exit_rate']=[exit_rate[i] for i in sorted(exit_rate.keys())] tab.T # ## Bounce Rate bounce_rate={} for page in pages: # Number of times a page is bounced bounce_pages_no = len([lin for lin in data2 if len(lin)==1 and lin[0] == page]) # Number of times a page is a potential bounce page_b_true = len([lin for lin in data2 if lin[0]==page]) bounce_rate[page] = bounce_pages_no / (page_b_true*1.0) bounce_rate tab['bounce_rate']=[bounce_rate[i] for i in sorted(bounce_rate.keys())] tab['page_counts']=[cnt[i] for i in sorted(cnt.keys())] tab.set_index('pages').T.describe() tab.to_csv("metrics.csv") tab.set_index('pages').T.describe().to_csv("metrics_desc.csv") tab.set_index('pages').T tab.T # ## Incidence Matrix Code incidence_matrix= np.zeros((len(pages), len(pages))) # ## Pages Name pages_name = "frontpage news tech local opinion on-air misc weather msn-news health living business msn-sports sports summary bbs travel".split() pages_name, len(pages_name) # ## Mapping Pages and Page Names mapping = dict(zip(pages, range(len(pages)))) mapping map_df = pd.DataFrame([i for i in sorted(mapping.values())]) mapping_original = dict((zip(range(len(pages_name)), pages_name))) mapping_original map_df['pages']=[mapping_original[i] for i in sorted(mapping_original.keys())] map_df.to_csv("mappings.csv") # ## Create an incidence matrix based on co-occurrence # + incidence_matrix= np.zeros((len(pages), len(pages))) for line in data2: # For Each seq in the file length = len(line) if length>=2: #print line for i in range(length-1): # for 
each element in the file tmp = (line[i], line[i+1]) incidence_matrix[mapping[tmp[0]], mapping[tmp[1]]]+=1 #print k, j , tmp print(incidence_matrix.shape) # - incidence_matrix.mean(), incidence_matrix.min(), incidence_matrix.max() incidence_matrix.shape matrix2 = incidence_matrix >= incidence_matrix.mean() matrix2.shape G = nx.from_numpy_matrix(incidence_matrix) nx.info(G) G = nx.relabel_nodes(G, mapping_original) type(G) G.is_directed() G.degree() nx.degree_centrality(G) nx.betweenness_centrality(G) nx.eigenvector_centrality(G) G.edges(data=True)[:10] nx.connectivity.average_node_connectivity(G) nx.closeness_centrality(G) list(nx.clique.find_cliques(G)) nx.pagerank(G, alpha=1) # ## Graph based on Incidence matrix with threshod thres = incidence_matrix.mean() thres G = nx.from_numpy_matrix(incidence_matrix>=thres) nx.info(G) G = nx.relabel_nodes(G, mapping_original) type(G) G.is_directed() deg = G.degree() dc = nx.degree_centrality(G) bc = nx.betweenness_centrality(G) ec=nx.eigenvector_centrality(G) cc=nx.closeness_centrality(G) G.edges(data=True)[:10] list(nx.clique.find_cliques(G)) pr = nx.pagerank(G, alpha=1, max_iter=1000) nx.density(G) tabc = pd.DataFrame() tabc['pages'] = sorted(G.nodes()) tabc = tabc.set_index('pages') tabc.T tabc['degree_centrality']= [dc[i] for i in sorted(dc.keys())] tabc tabc['degree']= [deg[i] for i in sorted(deg.keys())] tabc tabc['betweeness_centrality']= [bc[i] for i in sorted(bc.keys())] tabc tabc['eigenVector_centrality']= [ec[i] for i in sorted(ec.keys())] tabc tabc['closeness_centrality']= [cc[i] for i in sorted(cc.keys())] tabc tabc['page_rank']= [pr[i] for i in sorted(pr.keys())] tabc tabc.to_csv("Graph_metrics.csv") tabc.describe().to_csv("Graph_metrics_desc.csv") tabc.T G.nodes() # ## For Finding the suggestion Triads def get_recommendation(pair, data): '''Find the most common triad for the given pair of pages, return as a list of values''' most_occured_page=[] occured_page=[] for line in data: if len(line)>=3: #print line length= len(line) for i in range(length): if i < length-2: tmp = (line[i], line[i+1], line[i+2]) #print 'tmp', tmp, pair, set(tmp).intersection(set(pair)) if pair[0]==tmp[0] and pair[1]==tmp[1]: #print pair, '-', tmp most_occured_page.append(tmp[-1]) occured_page.append(tmp) rec = Counter(occured_page) return rec.most_common() pair =('6', '5') get_recommendation(pair, data2) pair = ("2", "5") get_recommendation(pair, data2) pd.DataFrame(get_recommendation(("2", "5"), data2)).to_csv("Recommend.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### This script is designed to take an input of location of a beacon in X and Y coordinates and bin size of historgram. It knows already the size of the arena and will produce a histogram/ matrix of numbers for occupancy/area per each bin. # import matplotlib.pyplot as plt import numpy as np import matplotlib.patches as patches from PIL import Image import matplotlib.image as mpimg import pandas as pd import math figures = 'C:/Users/Fabian/Desktop/Analysis/Round3_FS03_FS06/Figures/' processed= 'C:/Users/Fabian/Desktop/Analysis/Round3_FS03_FS06/processed/' # ### 1. 
input arena size cut = 0 # keeping the cut where rectangle of arena ends X_cut_min = -.59 Y_cut_max = 1.61 X_cut_max = .12 Y_cut_min = .00 print("area %s M*2" %((X_cut_max-X_cut_min)*(Y_cut_max-Y_cut_min))) # + xcut_offset=0 ycut_offset=0 fig, ax1 = plt.subplots(1, 1, sharex=True,dpi=300, sharey=True) fig.suptitle("Normalization visual") ax1.plot([(X_cut_min+cut)-xcut_offset,(X_cut_max-cut)-xcut_offset],[(Y_cut_max-cut)+ycut_offset,(Y_cut_max-cut)+ycut_offset] ,'r-') ax1.plot([(X_cut_min+cut)-xcut_offset,(X_cut_min+cut)-xcut_offset],[(Y_cut_min+cut)+ycut_offset,(Y_cut_max-cut)+ycut_offset] ,'r-') ax1.plot([(X_cut_max-cut)-xcut_offset,(X_cut_max-cut)-xcut_offset],[(Y_cut_min+cut)+ycut_offset,(Y_cut_max-cut)+ycut_offset] ,'r-') ax1.plot([(X_cut_max-cut)-xcut_offset,(X_cut_min+cut)-xcut_offset],[(Y_cut_min+cut)+ycut_offset,(Y_cut_min+cut)+ycut_offset] ,'r-') ax1.plot(-.45,.4,"go") rectangle = patches.Rectangle((-.59,0), .71,1.61, color="green") ax1.add_patch(rectangle) k=reversed(range(10)) print(k) color=iter(plt.cm.rainbow(np.linspace(0,1,10))) for i in reversed(range(10)): c=next(color) patch = patches.Circle((-.45,.4), radius=.15*i,color=c) ax1.add_patch(patch) patch.set_clip_path(rectangle) #get_clip_path(self)[source] ax1.axis("equal") plt.show() # - # ## 2. Now define the area of circles mathematically # ## a. get area of each circle withouth subtracting rectangle from math import pi r = float(input ("Input the radius of the circle : ")) print ("The area of the circle with radius " + str(r) + " is: " + str(pi * r**2)) hist=[] prev=0 for i in (range(10)): res=(pi * (.15*i)**2) hist.append(res-prev) prev=res print(hist) plt.bar(range(10),hist) # ### b. subract the area from the rectangle area # ## Tried mathematical approach on paper but there were at least 4 other combination for which it would have to be calculated.So it is possible but would take too long to write the code hence decided to use the pictures that canbe generated and eastimaed the pixels using different grayscale values # + xcut_offset=0 ycut_offset=0 def visualization(center=(-.45,.4),X_cut_min = -.59,Y_cut_max = 1.61,X_cut_max = .12,Y_cut_min = .00 ): """Makes a visual represetation of a banded rectangle with circles. to be exported and then can be counted""" fig, ax1 = plt.subplots(1, 1, sharex=True,dpi=400,) fig.suptitle("Normalization visual") ax1.plot(center[0],center[1],"go") rectangle = patches.Rectangle((X_cut_min,Y_cut_min), (abs(X_cut_min)+abs(X_cut_max)),Y_cut_max , color="green") ax1.add_patch(rectangle) k=reversed(range(10)) print(k) color=iter(plt.cm.rainbow(np.linspace(0,1,10))) for i in reversed(range(10)): c=next(color) patch = patches.Circle((center[0],center[1]), radius=.15*i,color=c) ax1.add_patch(patch) patch.set_clip_path(rectangle) ax1.axis("equal") return plt.show() visualization() # + xcut_offset=0 ycut_offset=0 def visualization_grey(center=(-.45,.4),dpi=500,X_cut_min = -.59,Y_cut_max = 1.61,X_cut_max = .12,Y_cut_min = .00 ): """Makes a visual represetation of a banded rectangle with circles. 
to be exported and then can be counted""" fig, ax1 = plt.subplots(1, 1, sharex=True,dpi=dpi,) fig.patch.set_visible(False) #fig.suptitle("Normalization visual") #ax1.plot(center[0],center[1],"go") rectangle = patches.Rectangle((X_cut_min,Y_cut_min), (abs(X_cut_min)+abs(X_cut_max)),Y_cut_max , color="black") ax1.add_patch(rectangle) k=reversed(range(10)) print(k) color=iter(plt.cm.binary(np.linspace(.01,.99,20))) for i in reversed(range(20)): c=next(color) patch = patches.Circle((center[0],center[1]), radius=.075*i,color=c) ax1.add_patch(patch) patch.set_clip_path(rectangle) ax1.axis("equal") ax1.axis("off") mng = plt.get_current_fig_manager() mng.full_screen_toggle() fig.savefig(figures + 'norm_graph.png', dpi=dpi, transparent=True) return plt.show() visualization_grey() # - img = mpimg.imread(figures + 'norm_graph.png') counts, bins, bars = plt.hist(img.ravel(), bins=18, range=(0.01, .99),) plt.hist(img.ravel(), bins=18, range=(0.01, .99), fc='k', ec='w') plt.show() imgplot = plt.imshow(img) counts img.ravel() norm= [] int(sum(counts)) for count in counts: k= count/int(sum(counts)) norm.append(k) norm # ### Now take the counts and multiply the distributions correctly - so make a histogram for each time beacon changes FS04=pd.read_excel(processed +'FS04_rears_new.xlsx', index_col=0) FS04.head() # + def get_rear_distance_from_beacon(df_rears_corrected): dist=[] for row in df_rears_corrected.iterrows(): #print(row[1][1]) #print(row[1][4]) #print(row[1][2]) #print(row[1][5]) dist.append(math.sqrt((row[1][1] - row[1][4])**2 + (row[1][2] - row[1][5])**2)) return dist plt.hist(get_rear_distance_from_beacon(FS04)) # + def make_simple_graphs (animal_ID,rearing): binwidth=.075 plt.tight_layout bins = np.arange(0, 1.425, binwidth) interval=bins print(bins) bins[1]= 0.075 fig, ax = plt.subplots(4,dpi=800,sharex=False) fig.suptitle(animal_ID +' rearing distance from beacons ephys',y=1) N, bins, patches=ax[0].hist(get_rear_distance_from_beacon(rearing.loc[rearing['Visibility']==1]),bins=bins,ec='w') print(len(norm)) print(len(bins)) print(N*norm) ax[0].set_title('Visible attempt') for i in range(0,1): patches[i].set_facecolor('g') for i in range(1, len(patches)): patches[i].set_facecolor('blue') fig.tight_layout(pad=1.5) N1, bins, patches=ax[1].hist(get_rear_distance_from_beacon(rearing.loc[rearing['Visibility']==0]),bins=bins,ec='w') print(N1*norm) ax[1].set_title('Invisible attempt') for i in range(0,1): patches[i].set_facecolor('g') for i in range(1, len(patches)): patches[i].set_facecolor('blue') fig.tight_layout(pad=1.5) patches = ax[2].bar(N*norm,bins) ax[2].set_title('Visible attempt Normalized') for i in range(0,1): patches[i].set_facecolor('g') for i in range(1, len(patches)): patches[i].set_facecolor('blue') fig.tight_layout(pad=1.5) number, bins, patches = ax[3].hist((N1*norm),bins=bins,ec='w') ax[3].set_title('invisible attempt Normalized') for i in range(0,1): patches[i].set_facecolor('g') for i in range(1, len(patches)): patches[i].set_facecolor('blue') fig.tight_layout(pad=1.5) plt.savefig('%srat_rearing_distance_from_beacons_norm%s.png'%(figures,animal_ID), dpi = 100) make_simple_graphs('FS04' ,FS04) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #https://gitlab.com/custom_robots/spotmicroai/simulation/-/blob/master/Basic%20simulation%20by%20user%20Florian%20Wilk/Kinematics/Kinematic.ipynb from mpl_toolkits 
import mplot3d import numpy as np from math import * import matplotlib.pyplot as plt def setupView(limit): ax = plt.axes(projection="3d") ax.set_xlim(-limit, limit) ax.set_ylim(-limit, limit) ax.set_zlim(-limit, limit) ax.set_xlabel("X") ax.set_ylabel("Z") ax.set_zlabel("Y") return ax setupView(200).view_init(elev=12., azim=28) omega = pi/4 phi =0 psi = 0 xm = 0 ym = 0 zm = 0 L = 207.5 W = 78 l1=60.5 l2=10 l3=100.7 #111.1 #100.7 l4=118.5 Lp=np.array([[100,-100,100,1],[100,-100,-100,1],[-100,-100,100,1],[-100,-100,-100,1]]) sHp=np.sin(pi/2) cHp=np.cos(pi/2) Lo=np.array([0,0,0,1]) def bodyIK(omega,phi,psi,xm,ym,zm): Rx = np.array([[1,0,0,0], [0,np.cos(omega),-np.sin(omega),0], [0,np.sin(omega),np.cos(omega),0],[0,0,0,1]]) Ry = np.array([[np.cos(phi),0,np.sin(phi),0], [0,1,0,0], [-np.sin(phi),0,np.cos(phi),0],[0,0,0,1]]) Rz = np.array([[np.cos(psi),-np.sin(psi),0,0], [np.sin(psi),np.cos(psi),0,0],[0,0,1,0],[0,0,0,1]]) Rxyz=Rx.dot(Ry).dot(Rz) T = np.array([[0,0,0,xm],[0,0,0,ym],[0,0,0,zm],[0,0,0,0]]) Tm = T+Rxyz return([Tm.dot( np.array([[cHp,0,sHp,L/2],[0,1,0,0],[-sHp,0,cHp,W/2],[0,0,0,1]])), Tm.dot(np.array([[cHp,0,sHp,L/2],[0,1,0,0],[-sHp,0,cHp,-W/2],[0,0,0,1]])), Tm.dot(np.array([[cHp,0,sHp,-L/2],[0,1,0,0],[-sHp,0,cHp,W/2],[0,0,0,1]])), Tm.dot(np.array([[cHp,0,sHp,-L/2],[0,1,0,0],[-sHp,0,cHp,-W/2],[0,0,0,1]])) ]) def legIK(point): (x,y,z)=(point[0],point[1],point[2]) F=sqrt(x**2+y**2-l1**2) G=F-l2 H=sqrt(G**2+z**2) theta1=-atan2(y,x)-atan2(F,-l1) D=(H**2-l3**2-l4**2)/(2*l3*l4) theta3=acos(D) theta2=atan2(z,G)-atan2(l4*sin(theta3),l3+l4*cos(theta3)) return(theta1,theta2,theta3) def calcLegPoints(angles): (theta1,theta2,theta3)=angles theta23=theta2+theta3 T0=Lo T1=T0+np.array([-l1*cos(theta1),l1*sin(theta1),0,0]) T2=T1+np.array([-l2*sin(theta1),-l2*cos(theta1),0,0]) T3=T2+np.array([-l3*sin(theta1)*cos(theta2),-l3*cos(theta1)*cos(theta2),l3*sin(theta2),0]) T4=T3+np.array([-l4*sin(theta1)*cos(theta23),-l4*cos(theta1)*cos(theta23),l4*sin(theta23),0]) return np.array([T0,T1,T2,T3,T4]) def drawLegPoints(p): # print("Leg: ", [x[0] for x in p],[x[2] for x in p],[x[1] for x in p]) print("Leg: ", [x[0] for x in p]) print("Leg: ", [x[2] for x in p]) print("Leg: ", [x[1] for x in p]) plt.plot([x[0] for x in p],[x[2] for x in p],[x[1] for x in p], 'k-', lw=3) plt.plot([p[0][0]],[p[0][2]],[p[0][1]],'bo',lw=2) plt.plot([p[4][0]],[p[4][2]],[p[4][1]],'ro',lw=2) def drawLegPair(Tl,Tr,Ll,Lr): Ix=np.array([[-1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) drawLegPoints([Tl.dot(x) for x in calcLegPoints(legIK(np.linalg.inv(Tl).dot(Ll)))]) drawLegPoints([Tr.dot(Ix).dot(x) for x in calcLegPoints(legIK(Ix.dot(np.linalg.inv(Tr)).dot(Lr)))]) def drawRobot(Lp,angles,center): (omega,phi,psi)=angles (xm,ym,zm)=center FP=[0,0,0,1] (Tlf,Trf,Tlb,Trb)= bodyIK(omega,phi,psi,xm,ym,zm) CP=[x.dot(FP) for x in [Tlf,Trf,Tlb,Trb]] CPs=[CP[x] for x in [0,1,3,2,0]] plt.plot([x[0] for x in CPs],[x[2] for x in CPs],[x[1] for x in CPs], 'bo-', lw=2) drawLegPair(Tlf,Trf,Lp[0],Lp[1]) drawLegPair(Tlb,Trb,Lp[2],Lp[3]) # drawRobot(Lp,(0,0,0),(-40,-170,0)) # maarteen # drawRobot(Lp,(0,0,0),(-40,-30,0)) # full sit # drawRobot(Lp,(0,0,0),(0,130,0)) # long stand drawRobot(Lp,(0,0,0),(-23,-40,0)) # sleep # drawRobot(Lp,(0,0,0),(0,-50,-6)) # from h/w plt.show() # - -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- -- # Widget List -- This is a translation from IPython's 
[Widget List](https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html) to IHaskell {-# LANGUAGE OverloadedStrings #-} import Data.Text import IHaskell.Display.Widgets import IHaskell.Display.Widgets.Layout as L -- ## Numeric widgets -- These are widgets designed to display numeric values. You can display both integers and floats, with or wihtout bounding. -- -- These widgets usually share the same naming scheme, so you can find the `Float`/`Int` counterpart replacing `Int` with `Float` or viceversa. -- ### IntSlider -- - Initially the slider is displayed with `IntValue`. You can define the lower/upper bounds with `MinInt` and `MaxInt`, the value increments/decrements according to `StepInt`. If `StepInt` is `Nothing`, you let the frontend decide. -- - You can specify a label in the `Description` field -- - The slider orientation is either `HorizontalOrientation` or `VerticalOrientation` -- - `ReadOut` chooses whether to display the value next to the slider, and `ReadOutFormat` specifies the format in a similar way to the format used by `printf`. intSlider <- mkIntSlider setField intSlider IntValue 7 setField intSlider MinInt 0 setField intSlider MaxInt 100 setField intSlider StepInt $ Just 1 setField intSlider Description "Test: " setField intSlider Disabled False setField intSlider ContinuousUpdate False setField intSlider Orientation HorizontalOrientation setField intSlider ReadOut True setField intSlider ReadOutFormat "d" intSlider -- ### FloatSlider -- An example of a float slider displayed vertically floatSlider <- mkFloatSlider setField floatSlider FloatValue 7.5 setField floatSlider MinFloat 0.0 setField floatSlider MaxFloat 10.0 setField floatSlider StepFloat $ Just 0.1 setField floatSlider Description "Test float: " setField floatSlider Disabled False setField floatSlider ContinuousUpdate False setField floatSlider Orientation VerticalOrientation setField floatSlider ReadOut True setField floatSlider ReadOutFormat ".1f" floatSlider -- ### FloatLogSlider -- Like a normal slider, but every step multiplies by a quantity, creating an exponential value (or a log scale). `MinFloat` and `MaxFloat` refer to the minimum and maximum **exponents** of the `BaseFloat`. 
floatLogSlider <- mkFloatLogSlider setField floatLogSlider FloatValue 10 setField floatLogSlider BaseFloat 10 setField floatLogSlider MinFloat (-10) setField floatLogSlider MaxFloat 10 setField floatLogSlider StepFloat $ Just 0.2 setField floatLogSlider Description "A log slider" floatLogSlider -- ### IntRangeSlider -- Lets you choose a range of two values intRangeSlider <- mkIntRangeSlider setField intRangeSlider IntPairValue (5,7) setField intRangeSlider MinInt 0 setField intRangeSlider MaxInt 10 setField intRangeSlider StepInt $ Just 1 setField intRangeSlider Disabled False setField intRangeSlider ContinuousUpdate False setField intRangeSlider Orientation HorizontalOrientation setField intRangeSlider ReadOut True setField intRangeSlider ReadOutFormat "d" intRangeSlider -- ### FloatRangeSlider floatRangeSlider <- mkFloatRangeSlider setField floatRangeSlider FloatPairValue (5.0,7.5) setField floatRangeSlider MinFloat 0 setField floatRangeSlider MaxFloat 10 setField floatRangeSlider StepFloat $ Just 0.1 setField floatRangeSlider Disabled False setField floatRangeSlider ContinuousUpdate False setField floatRangeSlider Orientation HorizontalOrientation setField floatRangeSlider ReadOut True setField floatRangeSlider ReadOutFormat ".1f" floatRangeSlider -- ### IntProgress -- - `BarStyle` can be one of: -- - `DefaultBar` -- - `SuccessBar` -- - `InfoBar` -- - `WarningBar` -- - `DangerBar` intProgress <- mkIntProgress setField intProgress IntValue 7 setField intProgress MinInt 0 setField intProgress MaxInt 10 setField intProgress Description "Now loading" setField intProgress BarStyle InfoBar intProgress s <- mkProgressStyle setField s BarColor $ Just "#ffff00" setField intProgress Style (StyleWidget s) -- If a numerical text box imposes some kind of limit on the input (range, non-float, etc), that restriction is checked when the user presses enter or changes focus -- ### BoundedIntText boundedIntText <- mkBoundedIntText setField boundedIntText IntValue 7 setField boundedIntText MinInt 0 setField boundedIntText MaxInt 10 setField boundedIntText StepInt $ Just 1 setField boundedIntText Description "Text: " setField boundedIntText Disabled False boundedIntText -- ### BoundedFloatText boundedFloatText <- mkBoundedFloatText setField boundedFloatText FloatValue 7.5 setField boundedFloatText MinFloat 0.0 setField boundedFloatText MaxFloat 10.0 setField boundedFloatText StepFloat $ Just 0.1 setField boundedFloatText Description "Text: " setField boundedFloatText Disabled False boundedFloatText -- ### IntText intText <- mkIntText setField intText IntValue 7 setField intText Description "Any:" setField intText Disabled False intText -- ### FloatText floatText <- mkFloatText setField floatText FloatValue 7.5 setField floatText Description "Any:" setField floatText Disabled False floatText -- https://twitter.com/Jose6o/status/1425485091824939012/photo/1## Boolean Widgets -- The following three widgets display a boolean value -- -- ### ToggleButton toggleButton <- mkToggleButton setField toggleButton BoolValue False setField toggleButton Description "Click me" setField toggleButton Disabled False -- DefaultButton | PrimaryButton | SuccessButton | INfoButton | WarningButton | DangerButton setField toggleButton ButtonStyle DefaultButton setField toggleButton Tooltip $ Just "Description" setField toggleButton Icon "check" toggleButton -- ### Checkbox checkBox <- mkCheckBox setField checkBox BoolValue False setField checkBox Description "Check me out!" 
setField checkBox Disabled False setField checkBox Indent False checkBox -- ### Valid -- It provides a read-only indicator. valid <- mkValid setField valid BoolValue False setField valid Description "Valid?" valid -- ## Selection widgets -- There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the **enumeration of selectable options** by passing a list of options labels. -- -- The selected index is specified with the field `OptionalIndex` in case it can be `Nothing` and `Index` if it's not a Maybe. -- -- ### Dropdown dropdown <- mkDropdown setField dropdown OptionsLabels ["1", "2", "3"] setField dropdown OptionalIndex $ Just 2 setField dropdown Description "Number:" setField dropdown Disabled False dropdown -- ### RadioButtons radioButtons <- mkRadioButtons setField radioButtons OptionsLabels ["pepperoni", "pineapple", "anchovies"] setField radioButtons OptionalIndex Nothing setField radioButtons Description "Topping:" setField radioButtons Disabled False radioButtons -- Here is an exemple with dynamic layout and very long labels -- + radioButtons' <- mkRadioButtons setField radioButtons' OptionsLabels [ "pepperoni", "pineapple", "anchovies", "Spam, sausage, Spam, Spam, Spam, bacon, Spam, tomato and Spam" ] label <- mkLabel setField label StringValue "Pizza topping with a very long label" layout <- mkLayout setField layout L.Width $ Just "max-content" box <- mkBox setField box Children [ChildWidget label, ChildWidget radioButtons'] setField box Layout layout box -- - -- ### Select select <- mkSelect setField select OptionsLabels ["Linux", "Windows", "OSX"] setField select OptionalIndex $ Just 0 setField select Description "OS:" setField select Disabled False select -- ### SelectionSlider selectionSlider <- mkSelectionSlider setField selectionSlider OptionsLabels ["Scrambled", "Sunny side up", "Poached", "Over easy"] setField selectionSlider Index 1 setField selectionSlider Description "I like my eggs..." setField selectionSlider Disabled False setField selectionSlider ContinuousUpdate False setField selectionSlider Orientation HorizontalOrientation setField selectionSlider ReadOut True selectionSlider -- ### SelectionRangeSlider selectionRangeSlider <- mkSelectionRangeSlider setField selectionRangeSlider OptionsLabels ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"] setField selectionRangeSlider Indices [0,4] setField selectionRangeSlider Description "When is the shop open?" 
setField selectionRangeSlider Disabled False selectionRangeSlider -- ### ToggleButtons toggleButtons <- mkToggleButtons setField toggleButtons OptionsLabels ["Slow", "Regular", "Fast"] setField toggleButtons Description "Speed:" setField toggleButtons Disabled False -- PrimaryButton | SuccessButton | InfoButton | WarningButton | DangerButton | DefaultButton setField toggleButtons ButtonStyle DefaultButton setField toggleButtons Tooltips ["Description of slow", "Description of regular", "Description of fast"] toggleButtons -- ### SelectMultiple -- Multiple values can be selected with `shift` and/or `ctrl` (or `command` with OSX) pressed and mouse click or arrow keys selectMultiple <- mkSelectMultiple setField selectMultiple OptionsLabels ["Apples", "Oranges", "Pears"] setField selectMultiple Indices [1] setField selectMultiple Description "Fruits" setField selectMultiple Disabled False selectMultiple -- ## String widgets -- There are multiple widgets that can be used to display a string value. The `Text`, `TextArea` -- and `Combobox` widgets accept input. The `HTML` and `HTMLMath` display the received string -- as HTML. The `Label` widget can be used to construct a custom control label, but it doesn't display -- input. -- -- ### Text text <- mkText setField text StringValue "Hello World!" setField text Placeholder "Type something" setField text Description "String:" setField text Disabled False text -- ### Textarea textarea <- mkTextArea setField textarea StringValue "Hello World!" setField textarea Placeholder "Type something" setField textarea Description "Long string:" setField textarea Disabled False textarea -- ### Combobox combobox <- mkCombobox setField combobox Placeholder "Choose Someone" setField combobox Options ["Paul", "John", "George", "Ringo"] setField combobox Description "Combobox:" setField combobox EnsureOption True setField combobox Disabled False combobox -- ### Password -- -- The `Password` widget hides user input on the screen. Nevertheless, this widget is **not a secure way** to collect sensitive information **because**: -- - The contents are transmitted unencrypted -- - If you save the notebook, the contents are stored as plain text password <- mkPassword setField password StringValue "Password" setField password Placeholder "Enter password" setField password Description "Password:" setField password Disabled False password -- ### Label -- The `Label` widget is useful if you need to build a very customized description next to a control widget. -- + label <- mkLabel setField label StringValue "The $m$ in $E=mc^2$:" floatSlider <- mkFloatSlider hbox <- mkHBox setField hbox Children [ChildWidget label, ChildWidget floatSlider] hbox -- - -- ### HTML html <- mkHTML setField html StringValue "Hello World!" 
setField html Placeholder "Some HTML"
setField html Description "Some HTML"
html

-- ### HTML Math
-- Like HTML, but it also renders LaTeX math commands
htmlMath <- mkHTMLMath
-- Remember to escape the \ with \\
setField htmlMath StringValue "Some math and HTML: $x^2$ and $$\\frac{x+1}{x-1}$$"
setField htmlMath Placeholder "Some HTML"
setField htmlMath Description "Some HTML"
htmlMath

-- ## Image
image <- mkImage
setField image BSValue "https://imgs.xkcd.com/comics/haskell.png"
-- PNG | SVG | JPG | IURL
setField image ImageFormat IURL
image

-- ## Button
button <- mkButton
setField button Description "Click me"
setField button Disabled False
-- PrimaryButton | SuccessButton | InfoButton | WarningButton | DangerButton | DefaultButton
setField button ButtonStyle DefaultButton
setField button Tooltip $ Just "Click me"
setField button Icon "mouse"
properties button
button

-- The `Icon` attribute is used to define an icon; see the [fontawesome](https://fontawesome.com/v5.15/icons) page for available icons.
--
-- You can set a callback function by setting the `ClickHandler :: IO ()` attribute.
--
-- ## Output
-- The `Output` widget is complicated and has many features. You can see detailed documentation in its dedicated Notebook.
--
-- ## Play (Animation) widget
-- The `Play` widget is like an automated textbox whose value is incremented every few milliseconds, as if someone were repeatedly clicking an increment button. Here you can see an example:

-- +
play <- mkPlay
setField play IntValue 50
setField play MinInt 0
setField play MaxInt 100
setField play StepInt $ Just 1
setField play Interval 500
setField play Description "Press play"
setField play Disabled False

slider <- mkIntSlider
jslink (WidgetFieldPair play IntValue) (WidgetFieldPair slider IntValue)
play
slider
-- -

-- ## Date Picker
-- This widget only works with browsers that support the HTML date input field (Chrome, Firefox and IE Edge, but not Safari)
datePicker <- mkDatePicker
setField datePicker Description "Pick a date"
setField datePicker Disabled False
datePicker

-- ## Color picker
colorPicker <- mkColorPicker
setField colorPicker Concise False
setField colorPicker Description "Pick a color"
setField colorPicker StringValue "Blue"
setField colorPicker Disabled False
colorPicker

-- ## Controller
-- The `Controller` allows a game controller to be used as an input device
controller <- mkController
setField controller Index 0
controller

-- ## Container and Layout widgets
--
-- These widgets are used to hold other widgets, called children. They can display multiple widgets or change their CSS styling.
--
-- ### Box

-- +
labels <- flip mapM [1..4] $ \i -> do
    l <- mkLabel
    setField l StringValue $ pack $ ("Label #" ++ show i)
    return $ ChildWidget l

box <- mkBox
setField box Children labels
box
-- -

-- ### HBox
hbox <- mkHBox
setField hbox Children labels
hbox

-- ### VBox
vbox <- mkVBox
setField vbox Children labels
vbox

-- ### GridBox
--
-- This box uses the HTML Grid specification to create a two-dimensional grid. To set its grid values, we need to create a layout widget. Let's do a 3x3 grid:

-- +
labels <- flip mapM [1..8] $ \i -> do
    l <- mkLabel
    setField l StringValue $ pack $ ("Label #" ++ show i)
    return $ ChildWidget l

layout <- mkLayout
setField layout L.GridTemplateColumns $ Just "repeat(3, 10em)"

gridBox <- mkGridBox
setField gridBox Children labels
setField gridBox Layout layout
gridBox
-- -

-- ### Accordion
--
-- Unlike the other container widgets, `Accordion` and `Tab` update their `selected_index` attribute when a tab or accordion element is selected.
You can see what the user is doing, or set what the user is seeing. -- -- You can set `selected_index` to `Nothing` to close all accordions or deselect all tabs. accordion <- mkAccordion slider <- mkIntSlider text <- mkText setField accordion Children [ChildWidget slider, ChildWidget text] setField accordion Titles ["Slider", "Text"] accordion -- ### Tabs -- + tabs <- mkTab texts <- flip mapM [0..5] $ \i->do t <- mkText setField t StringValue $ pack $ ("P" ++ show i) return $ ChildWidget t setField tabs Children texts setField tabs Titles [pack $ show i | i <- [0..5]] setField tabs SelectedIndex $ Just 2 tabs # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # In this notebook we parse deepmod data into a nice dataframe. # + # DeepMoD stuff import torch from deepymod import DeepMoD from deepymod.model.func_approx import NN from deepymod.model.library import Library1D from deepymod.model.constraint import LeastSquares from deepymod.model.sparse_estimators import Threshold from deepymod.analysis import load_tensorboard from deepymod.data import Dataset from deepymod.data.burgers import BurgersDelta from natsort import natsorted from os import listdir, path import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt # - def correct_eq(found_coeffs): # Correct coeffs for burgers correct_coeffs = np.zeros((12, 1)) correct_coeffs[[2, 5]] = 1.0 n_active_terms_incorrect = np.sum(found_coeffs[correct_coeffs != 0.0] == 0) n_inactive_terms_incorrect = np.sum(found_coeffs[correct_coeffs == 0.0] != 0) if n_active_terms_incorrect + n_inactive_terms_incorrect > 0: correct = False else: correct = True return correct # + # Getting deepmod data data_folder = 'runs/shifted_noise20/' identifier = 'shiftednoise20_grid_' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() df['model'] = [key for key in natsorted(listdir(data_folder)) if (key[-3:]!= '.pt') if key.find(identifier) != -1 if key.find('nt') == -1] # + # Getting deepmod data data_folder = 'runs/revision_normal_grid/' identifier = 'normal_grid' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() df['model'] = [key for key in natsorted(listdir(data_folder)) if (key[-3:]!= '.pt') if key.find(identifier) != -1 if key.find('nt') == -1] # + # Getting deepmod data data_folder = 'runs/revision_grid_noise_20/' identifier = 'normal_grid' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() df['model'] = [key for key in natsorted(listdir(data_folder)) if (key[-3:]!= '.pt') if key.find(identifier) != -1 if key.find('nt') == -1] # + # Getting deepmod data data_folder = 'runs/normal_grid_noise_20/' identifier = 'normal_grid' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() df['model'] = [key for key in natsorted(listdir(data_folder)) if (key[-3:]!= '.pt') if key.find(identifier) != -1 if key.find('nt') == -1] # + # Getting deepmod data data_folder = 'runs/rev_random_20/' identifier = 'rev_random_grid_20_' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() df['model'] = [key for key in natsorted(listdir(data_folder)) if (key[-3:]!= '.pt') if key.find(identifier) != -1 if key.find('nt') == -1] # - #df['n_x'] = [int(row.model.split('_')[2][:]) for idx, row in df.iterrows()] df['n_x'] = [int(row.model[17:]) for idx, row in df.iterrows()] df['n_t'] = 100 8.2/40/np.sqrt(4*0.25*0.1) 
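# A quick sanity check of `correct_eq` (an illustrative sketch, not part of the original
# analysis): for Burgers, only library terms 2 and 5 should be active, so a coefficient
# vector that is non-zero exactly at those indices should be flagged as correct, while a
# vector with a spurious extra term should not. The magnitudes below are made up; only the
# sparsity pattern matters to `correct_eq`.

# +
demo_coeffs = np.zeros((12, 1))
demo_coeffs[[2, 5]] = 0.5            # active exactly at the Burgers terms
spurious_coeffs = np.zeros((12, 1))
spurious_coeffs[[2, 5, 7]] = 0.5     # one extra (incorrect) active term

print(correct_eq(demo_coeffs))       # expected: True
print(correct_eq(spurious_coeffs))   # expected: False
# -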
8.2/40 df # Adding grids df['x'] = [np.linspace(-4, 4, row.n_x) for idx, row in df.iterrows()] df['t'] = [np.linspace(0.1, 1.1, row.n_t) for idx, row in df.iterrows()] df['t_grid'] = [np.meshgrid(row.t, row.x, indexing='ij')[0] for idx, row in df.iterrows()] df['x_grid'] = [np.meshgrid(row.t, row.x, indexing='ij')[1] for idx, row in df.iterrows()] # + # Calculating true derivatives df['u_t'] = [dataset.time_deriv(row.x_grid, row.t_grid).reshape(row.x_grid.shape) for idx, row in df.iterrows()] df['u_x'] = [dataset.library(row.x_grid.reshape(-1, 1), row.t_grid.reshape(-1, 1), poly_order=2, deriv_order=3)[:, 1].reshape(row.x_grid.shape) for idx, row in df.iterrows()] df['u_xx'] = [dataset.library(row.x_grid.reshape(-1, 1), row.t_grid.reshape(-1, 1), poly_order=2, deriv_order=3)[:, 2].reshape(row.x_grid.shape) for idx, row in df.iterrows()] df['u_xxx'] = [dataset.library(row.x_grid.reshape(-1, 1), row.t_grid.reshape(-1, 1), poly_order=2, deriv_order=3)[:, 3].reshape(row.x_grid.shape) for idx, row in df.iterrows()] # Calculating normalizing properties df['l'] = [np.sqrt(4 * v * row.t)[:, None] for idx, row in df.iterrows()] df['dz'] = [(np.ones_like(row.t)[:, None] * np.diff(row.x)[0] / row.l) for idx, row in df.iterrows()] df['u0'] = [np.sqrt(v / (np.pi * row.t))[:, None] for idx, row in df.iterrows()] # - [print(data_folder + row.model) for idx, row in df.iterrows()] # Loading data deepmod_data = [load_tensorboard(path.join(data_folder+row.model)) for idx, row in df.iterrows()] coeff_keys = natsorted([key for key in deepmod_data[0].keys() if key[:9] == 'estimator']) # Checking if correct df['correct'] = np.stack([correct_eq(data.tail(1)[coeff_keys].to_numpy().T) for data in deepmod_data], axis=-1) df['coeffs'] = [data.tail(1)[coeff_keys].to_numpy().T for data in deepmod_data] # Loading final MSE error df['test_error'] = [data.tail(1)['remaining_MSE_test'].item() for data in deepmod_data] # + # Loading different models and getting stuff out network = NN(2, [30, 30, 30, 30], 1) library = Library1D(poly_order=2, diff_order=3) # Library function estimator = Threshold(0.2) # Sparse estimator constraint = LeastSquares() # How to constrain model = DeepMoD(network, library, estimator, constraint) # Putting it all in the model dt = [] dx = [] d2x = [] d3x = [] trained_model = [i for i in listdir(data_folder) if i[-4:] == 'l.pt'] # - for idx, row in df.iterrows(): trained_model = path.join(data_folder,df.model[idx]+'model.pt') model.load_state_dict(torch.load(trained_model)) X = torch.tensor(np.concatenate((row.t_grid.reshape(-1, 1), row.x_grid.reshape(-1, 1)), axis=1), dtype=torch.float32) prediction, time_deriv, theta = model(X) time_deriv = time_deriv[0].cpu().detach().numpy() theta = theta[0].cpu().detach().numpy() dt.append(time_deriv.reshape(row.t_grid.shape)) dx.append(theta[:, 1].reshape(row.t_grid.shape)) d2x.append(theta[:, 2].reshape(row.t_grid.shape)) d3x.append(theta[:, 3].reshape(row.t_grid.shape)) df['u_t_deepmod'] = dt df['u_x_deepmod'] = dx df['u_xx_deepmod'] = d2x df['u_xxx_deepmod'] = d3x # + # Calculating errors df['u_t_error'] = [np.mean(np.abs(row.u_t - row.u_t_deepmod) * (row.l**0 / row.u0), axis=1) for idx, row in df.iterrows()] df['u_x_error'] = [np.mean(np.abs(row.u_x - row.u_x_deepmod) * (row.l**1 / row.u0), axis=1) for idx, row in df.iterrows()] df['u_xx_error'] = [np.mean(np.abs(row.u_xx - row.u_xx_deepmod) * (row.l**2 / row.u0), axis=1) for idx, row in df.iterrows()] df['u_xxx_error'] = [np.mean(np.abs(row.u_xxx - row.u_xxx_deepmod) * (row.l**3 / row.u0), axis=1) for idx, 
row in df.iterrows()] # Making some composite errors df['full_error'] = [(np.mean(np.abs((row.u_t - row.u_t_deepmod) / np.linalg.norm(row.u_t, axis=1, keepdims=True)) , axis=1) + np.mean(np.abs((row.u_x - row.u_x_deepmod) / np.linalg.norm(row.u_x, axis=1, keepdims=True)) , axis=1) + np.mean(np.abs((row.u_xx - row.u_xx_deepmod) / np.linalg.norm(row.u_xx, axis=1, keepdims=True)) , axis=1) + np.mean(np.abs((row.u_xxx - row.u_xxx_deepmod) / np.linalg.norm(row.u_xxx, axis=1, keepdims=True)) , axis=1)) for idx, row in df.iterrows()] df['deriv_error'] = [(np.mean(np.abs((row.u_x - row.u_x_deepmod) / np.linalg.norm(row.u_x, axis=1, keepdims=True)) , axis=1) + np.mean(np.abs((row.u_xx - row.u_xx_deepmod) / np.linalg.norm(row.u_xx, axis=1, keepdims=True)) , axis=1) + np.mean(np.abs((row.u_xxx - row.u_xxx_deepmod) / np.linalg.norm(row.u_xxx, axis=1, keepdims=True)) , axis=1)) for idx, row in df.iterrows()] # - df.to_pickle('revision_random_20.pd') np.mean(df.deriv_error[0]) # # Deepmod 2D # + # Getting deepmod data data_folder = 'runs/' identifier = 'noiseless' v=0.25 dataset = Dataset(BurgersDelta, A=1, v=v) df = pd.DataFrame() #df['model'] = [key for key in natsorted(listdir(data_folder)) if key[-3:]!= '.pt' and key.find(identifier) != -1 if key.find('nt') != -1] df['model'] = [key for key in natsorted(listdir(data_folder)) if key[-3:]!= '.pt' and key.find(identifier) == -1 if key.find('nt') != -1] # - # Adding grid data df['n_x'] = [int(row.model.split('_')[3]) for idx, row in df.iterrows()] df['n_t'] = [int(row.model.split('_')[5]) for idx, row in df.iterrows()] df['run'] = [int(row.model.split('_')[7]) for idx, row in df.iterrows()] df # Loading data deepmod_data = [load_tensorboard(path.join(data_folder, row.model)) for idx, row in df.iterrows()] coeff_keys = natsorted([key for key in deepmod_data[0].keys() if key[:9] == 'estimator']) # Checking if correct df['correct'] = np.stack([correct_eq(data.tail(1)[coeff_keys].to_numpy().T) for data in deepmod_data], axis=-1) df['coeffs'] = [data.tail(1)[coeff_keys].to_numpy().T for data in deepmod_data] df.to_pickle('deepmod_2d_noisy.pd') ax = sns.heatmap(df.pivot(index='n_t', columns='n_x', values='correct')) ax.invert_yaxis() ax = sns.heatmap(df.pivot(index='n_t', columns='n_x', values='correct')) ax.invert_yaxis() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Copyright (c) 2017,2018, . All rights reserved. # The default copyright laws apply. # + import csv #http://soft-matter.github.io/trackpy/v0.3.2/tutorial/walkthrough.html from __future__ import division, unicode_literals, print_function # for compatibility with Python 2 and 3 import matplotlib as mpl import matplotlib.pyplot as plt # change the following to %matplotlib notebook for interactive plotting # %matplotlib inline # Optionally, tweak styles. 
#mpl.rc('figure', figsize=(10, 6)) #mpl.rc('image', cmap='gray') import numpy as np import pandas as pd from pandas import DataFrame, Series # for convenience import pims import scipy import math import scipy.ndimage #own modules / functions import sys pythonPackagePath = "/Users/johannesschoeneberg/git/JohSchoeneberg/Confleezers/confleezers_data_analysis/" sys.path.append(pythonPackagePath+"/modules") import InputProcessing as inproc import ForceFileProcessing as ffp import TubeProcessing as tubeProc #### plot definitions #https://stackoverflow.com/questions/22408237/named-colors-in-matplotlib #c_ch1 = 'aqua' #c_ch1 = 'deepskyblue' #c_ch1 = 'cyan' #c_ch2 = 'springgreen' #c_ch2 = 'lime' #c_ch3 = 'red' #lwidth = 3 #xlimit = (-150,300) # + # read input parameters: path = '/Volumes/Samsung_T3b/science/confleezers/2018-06-05/v7_analysis2/' maxEveryLineFolder = 'tube__maxIntAlongTube' inputParameters = pd.read_csv(path+'_info.csv',names=['key','value']) outputDataFolder = inproc.getInputParameter(inputParameters,"output_data_folder") import os if not os.path.exists(path+outputDataFolder+'/'+maxEveryLineFolder+'/'): os.makedirs(path+outputDataFolder+'/'+maxEveryLineFolder+'/') inputParameters[1:5] # + # read the tube vector from a previous step inputDataFolder = inproc.getInputParameter(inputParameters,"input_data_folder") tubeVectorFileName = inproc.getInputParameter(inputParameters,"tube_vector_file") #path = '/Volumes/Samsung_T3b/science/confleezers/2018-06-05/v10all/' df_tubeVector = pd.read_csv(path+inputDataFolder+'/'+tubeVectorFileName) df_tubeVector.columns=['frame','tubeLenght',"vesicle_attachment_x",'vesicle_attachment_y','bead_attachment_x','bead_attachment_y'] df_tubeVector[0:5] # - # # calculate tube intensity for channel 0 # + movie_zoomVesicle_ch0 = 'movie_ch0_avg20_bleachCorrected.tif' filenamePath_gaussFit_ch0 = path+outputDataFolder+movie_zoomVesicle_ch0+'__gaussFitAlongTube.csv' df_fit_ch0 = pd.read_csv(filenamePath_gaussFit_ch0) plt.figure(dpi=150) #plt.gca().set_aspect(1.7) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch0))*frameRate plt.plot(time,df_fit_ch0['maxx-background_median'],color='cyan',lw=3) plt.plot(time,scipy.ndimage.median_filter(df_fit_ch0['maxx-background_median'],60),lw=2,c='blue'); plt.xlabel('time[s]') plt.ylabel('fl. int. ch1 (tube membrane)') plt.ylim(0,150) plt.xlim(-100,500) # - # # channel 1 # + movie_zoomVesicle_ch1 = 'movie_ch1_avg20_bleachCorrected.tif' filenamePath_gaussFit_ch1 = path+outputDataFolder+movie_zoomVesicle_ch1+'__gaussFitAlongTube.csv' df_fit_ch1 = pd.read_csv(filenamePath_gaussFit_ch1) plt.figure(dpi=150) #plt.gca().set_aspect(0.7) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch1))*frameRate df_fit_ch1 = pd.read_csv(filenamePath_gaussFit_ch1) plt.plot(time,df_fit_ch1['maxx-background_median'],color='lime',lw=3) plt.plot(time,scipy.ndimage.median_filter(df_fit_ch1['maxx-background_median'],60),lw=2,c='green'); plt.xlabel('time[s]') plt.ylabel('fl. int. 
ch1 (Snf7)') plt.ylim(0,1500) plt.xlim(-100,500) # - # # ch2 analysis # + movie_zoomVesicle_ch2 = "movie_ch2_avg20_bleachCorrected.tif" filenamePath_gaussFit_ch2 = path+outputDataFolder+movie_zoomVesicle_ch2+'__gaussFitAlongTube.csv' df_fit_ch2 = pd.read_csv(filenamePath_gaussFit_ch2) plt.figure(dpi=150) #plt.gca().set_aspect(0.8) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch2))*frameRate plt.plot(time,df_fit_ch2['maxx-background_median'],color='red',alpha=0.2,lw=3) plt.plot(time,scipy.ndimage.median_filter(df_fit_ch2['maxx-background_median'],60),lw=2,c='red'); plt.xlabel('time[s]') plt.ylabel('fl. int. ch1 (Vps4)') plt.ylim(0,1500) plt.xlim(-100,500) # - #plot all together # + ch0color = ['red','red'] ch1color = ['green','lime'] ch2color = ['blue','cyan'] movie_zoomVesicle_ch0 = 'movie_ch0_avg20_bleachCorrected.tif' filenamePath_gaussFit_ch0 = path+outputDataFolder+movie_zoomVesicle_ch0+'__gaussFitAlongTube.csv' df_fit_ch0 = pd.read_csv(filenamePath_gaussFit_ch0) plt.figure(dpi=300) plt.gca().set_aspect(150) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch0))*frameRate ch0norm = 140 ch0plot = df_fit_ch0['maxx-background_median'] ch0plot_medianFiltered = scipy.ndimage.median_filter(df_fit_ch0['maxx-background_median'],60) plt.plot(time,ch0plot/ch0norm,color=ch0color[1],alpha=0.2,lw=3) plt.plot(time,ch0plot_medianFiltered/ch0norm,lw=2,c=ch0color[0]); plt.xlabel('time[s]') plt.ylabel('fl. int. ch1 (tube membrane)') plt.ylim(0,150) plt.xlim(-100,500) #------ movie_zoomVesicle_ch1 = 'movie_ch1_avg20_bleachCorrected.tif' filenamePath_gaussFit_ch1 = path+outputDataFolder+movie_zoomVesicle_ch1+'__gaussFitAlongTube.csv' df_fit_ch1 = pd.read_csv(filenamePath_gaussFit_ch1) #plt.gca().set_aspect(0.7) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch1))*frameRate ch1plot = df_fit_ch1['maxx-background_median'] ch1plot_medianFiltered = scipy.ndimage.median_filter(df_fit_ch1['maxx-background_median'],60) ch1norm = 1145 plt.plot(time,ch1plot/ch1norm,color=ch1color[1],lw=3) plt.plot(time,ch1plot_medianFiltered/ch1norm,lw=2,c=ch1color[0]); plt.xlabel('time[s]') plt.ylabel('fl. int. 
ch1 (Snf7)') plt.ylim(0,1500) plt.xlim(-100,500) #------- movie_zoomVesicle_ch2 = "movie_ch2_avg20_bleachCorrected.tif" filenamePath_gaussFit_ch2 = path+outputDataFolder+movie_zoomVesicle_ch2+'__gaussFitAlongTube.csv' df_fit_ch2 = pd.read_csv(filenamePath_gaussFit_ch2) #plt.gca().set_aspect(0.8) plt.gca().spines['right'].set_visible(False) plt.gca().spines['top'].set_visible(False) deltaT = int(float(inproc.getInputParameter(inputParameters,"movie_startTime_difference_UVstart_seconds"))) frameRate = float(inproc.getInputParameter(inputParameters,"time_between_frames_seconds")) time = deltaT + np.arange(0,len(df_fit_ch2))*frameRate ch2plot = df_fit_ch2['maxx-background_median'] ch2plot_medianFiltered = scipy.ndimage.median_filter(df_fit_ch2['maxx-background_median'],60) ch2norm = 950 plt.plot(time,ch2plot/ch2norm,color=ch2color[1],alpha=0.2,lw=3) plt.plot(time,ch2plot_medianFiltered/ch2norm,lw=2,c=ch2color[0]); plt.xlabel('time[s]') plt.ylabel('fl. int. ch1 (Vps4)') plt.ylim(0,1) plt.xlim(-90,470) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # import packages # + import numpy as np import pandas as pd # Visualization import matplotlib.pyplot as plt import seaborn as sns # Preprocessing from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV # Text representation from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer # ML Algo from sklearn.naive_bayes import MultinomialNB from sklearn.linear_model import LogisticRegression from sklearn.svm import LinearSVC # Evaluvation from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score from time import time # - import string import re from nltk import word_tokenize from nltk.corpus import stopwords from nltk import WordNetLemmatizer from scripts.utils import plotConfusionMatrixHeatmap # # Load Trianing file data_path = '../../data/train.csv' df = pd.read_csv(data_path) df.shape df.columns df.head() df.drop(columns=['Unnamed: 0', 'index'], inplace=True) df.columns df.head() df['label'].value_counts() # Class distribution df['label'].value_counts()/df.shape[0] * 100 # # Pre Processing def clean_text(doc): """ 1. Converting all text into lower case 2. Removing classified words like xxx 3. Remove stop words 4. remove punctuation 5. remove digits 6. 
Wordnet lemmatizer """
    # Set stop word as english
    stop_word = set(stopwords.words('english'))
    # Tokenize the sentence and make all characters lower case
    doc = [x.lower() for x in word_tokenize(doc)]
    # Remove classified texts
    doc = [x for x in doc if x.lower() != 'xxxx' and x.lower() != 'xx' and x.lower() != 'xx/xx/xxxx']
    # Remove stop words
    doc = [x for x in doc if x not in stop_word]
    # Remove punctuation
    doc = [x for x in doc if x not in string.punctuation]
    # Remove digits
    doc = [x for x in doc if not x.isdigit()]
    # Set NLTK Wordnet lemmatizer and lemmatize the sentence
    lemmatizer = WordNetLemmatizer()
    doc = " ".join([lemmatizer.lemmatize(word) for word in doc])
    return doc

df['text_processed'] = df.apply(lambda row: clean_text(row['full_text']), axis=1)

df.head()

# # Encoding and Modeling

from sklearn.preprocessing import LabelEncoder

label_encoder = LabelEncoder()
df['label_id'] = label_encoder.fit_transform(df['label'])

# Put the label category into dict for future use
label_map = df.set_index('label_id').to_dict()['label']
label_map

df.head()

X = df.text_processed
y = df.label_id
print(X.shape, y.shape)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)

# # Multinomial Naive Bayes

from sklearn.metrics import classification_report  # used to report per-class metrics below

# Parameter values to test
param_grid = {
    'TfIdf__max_features': [5000, 10000, 20000, 25000],
    'TfIdf__ngram_range': [(1,1), (1,2), (2,2)],
    'TfIdf__use_idf': [True],
    'MultinomialNB__alpha': [0.01, 0.02, 0.05, 0.10]
}

# Creating pipeline for Naive Bayes model
pipeline_mnb = Pipeline(steps=[('TfIdf', TfidfVectorizer()), ('MultinomialNB', MultinomialNB())])
grid_search_mnb = GridSearchCV(pipeline_mnb, param_grid, cv=5, verbose=1, n_jobs=6)
grid_search_mnb.fit(X_train, y_train)

print(grid_search_mnb.best_params_)
print(grid_search_mnb.best_estimator_)

grid_search_mnb.score(X_test, y_test)

y_predicted = grid_search_mnb.predict(X_test)
classification_report_mnb = classification_report(y_test, y_predicted)
print(classification_report_mnb)

key_to_label_name = [x[1] for x in sorted(label_map.items())]

# +
conf_matrix_df = pd.DataFrame(data=confusion_matrix(y_test, y_predicted),
                              index=key_to_label_name, columns=key_to_label_name)
plotConfusionMatrixHeatmap(conf_matrix_df, model_name='Multinomial Naive bayes', figsize=(12, 10))
# +
# Imports used below to version-stamp and pickle the best model
import os
import pickle
from datetime import datetime
from sklearn import __version__ as sklearn_version

best_model = grid_search_mnb
best_model.version = 1.0
best_model.pandas_version = pd.__version__
best_model.numpy_version = np.__version__
best_model.sklearn_version = sklearn_version
best_model.build_datetime = datetime.now()

modelpath = '../../data/models'
if not os.path.exists(modelpath):
    os.mkdir(modelpath)

mnbmodel_path = os.path.join(modelpath, 'Multinomial_naive_bayes_with_7_class.pkl')
if not os.path.exists(mnbmodel_path):
    with open(mnbmodel_path, 'wb') as f:
        pickle.dump(best_model, f)
# -

# # Logistic Regression

# Parameter grid for the logistic regression pipeline
param_grid_lr = {
    'TfIdf__max_features': [5000, 10000, 20000, 25000],
    'TfIdf__ngram_range': [(1,1), (1,2), (2,2)],
    'TfIdf__use_idf': [True]
}

# Creating pipeline for Logistic Regression model
pipeline_lr = Pipeline(steps=[('TfIdf', TfidfVectorizer()),
                              ('LogisticRegression', LogisticRegression(class_weight="balanced"))])
grid_search_lr = GridSearchCV(pipeline_lr, param_grid_lr, cv=5, verbose=1, n_jobs=-1)
grid_search_lr.fit(X_train, y_train)

print(grid_search_lr.best_params_)
print(grid_search_lr.best_estimator_)

grid_search_lr.score(X_test, y_test)

y_predicted = grid_search_lr.predict(X_test)
classification_report_lr = classification_report(y_test, y_predicted)
print(classification_report_lr) # + conf_matrix_df = pd.DataFrame(data=confusion_matrix(y_test, y_predicted), index=key_to_label_name, columns=key_to_label_name) plotConfusionMatrixHeatmap(conf_matrix_df, model_name='Multinomial', figsize=(12, 10)) # - # # Random forest clasifier from sklearn.ensemble import RandomForestClassifier from sklearn.feature_extraction.text import CountVectorizer vectorizer = TfidfVectorizer(min_df=3, stop_words="english", sublinear_tf=True, norm='l2', ngram_range=(1, 2)) pipeline_rf = Pipeline(steps = [('countvectorizer', vectorizer), ('clf', RandomForestClassifier())]) model = pipeline_rf.fit(X_train, y_train) y_pred = model.predict(X_test) y_pred_prob = model.predict_proba(X_test) lr_probs = y_pred_prob[:,1] accuracy_score(y_test, y_pred) conf_matrix_df = pd.DataFrame(data=confusion_matrix(y_test, y_pred),index=key_to_label_name, columns=key_to_label_name) classification_rep = classification_report(y_test, y_pred,target_names=key_to_label_name) print(classification_rep) plotConfusionMatrixHeatmap(conf_matrix_df, model_name='Random forest', figsize=(12, 10)) # # Doc 2 Vec with logistic regression from gensim.models.doc2vec import Doc2Vec, TaggedDocument #prepare training data in doc2vec format: train_doc2vec = [TaggedDocument((d), tags=[str(i)]) for i, d in enumerate(X_train)] #Train a doc2vec model to learn model = Doc2Vec(vector_size=50, alpha=0.025, min_count=5, dm =1, epochs=100) model.build_vocab(train_doc2vec) model.train(train_doc2vec, total_examples=model.corpus_count, epochs=model.epochs) model.save("../../data/models/d2v.model") print("Model Saved") #Infer the feature representation for training and test data using the trained model model= Doc2Vec.load("../../data/models/d2v.model") #infer in multiple steps to get a stable representation. 
train_vectors = [model.infer_vector(list_of_tokens, steps=50) for list_of_tokens in X_train] test_vectors = [model.infer_vector(list_of_tokens, steps=50) for list_of_tokens in X_test] clf = LogisticRegression(class_weight="balanced") clf.fit(train_vectors, y_train) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install pandas --quiet # !pip install torchtext --quiet # + # We import some libraries to load the dataset import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from collections import Counter from tqdm.notebook import tqdm import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import TensorDataset, DataLoader import torchtext from torchtext.data import get_tokenizer from sklearn.utils import shuffle from sklearn.metrics import classification_report from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer # - #Change the path# path = '/Users/bengieru/Desktop/NMA-DL/iSarcasm/iSarcasm_dataset/' header_list = ["text", "label", 'source'] df_train_test = pd.read_csv(path + 'finaldata/train_test.csv', encoding = "utf-8", dtype = {'text':str, 'label':bool, 'source':int}) df_train_test.reset_index(drop=True, inplace=True) del df_train_test['Unnamed: 0'] df_train_test = df_train_test.loc[df_train_test.text.apply(lambda x: not isinstance(x, (float, int)))] df_train_test # + # let's play with the lemmatized text to see how it's accuracy fairs X = df_train_test.text.values print(X.shape) # Changes values from [0,4] to [0,1] y = (df_train_test.label.values >= 1).astype(int) # Split the data into train and test x_train_text, x_test_text, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42,stratify=y) # - for s, l in zip(x_train_text[:5], y_train[:5]): print('{}: {}'.format(l, s)) tokenizer = get_tokenizer("basic_english") print('Before Tokenize: ', x_train_text[4]) print('After Tokenize: ', tokenizer(x_train_text[4])) x_train_token = [tokenizer(s) for s in tqdm(x_train_text)] x_test_token = [tokenizer(s) for s in tqdm(x_test_text)] # + words = Counter() for s in x_train_token: for w in s: words[w] += 1 sorted_words = list(words.keys()) sorted_words.sort(key=lambda w: words[w], reverse=True) print(f"Number of different Tokens in our Dataset: {len(sorted_words)}") print(sorted_words[:20]) # + count_occurences = sum(words.values()) accumulated = 0 counter = 0 while accumulated < count_occurences * 0.8: accumulated += words[sorted_words[counter]] counter += 1 print(f"The {counter * 100 / len(words)}% most common words " f"account for the {accumulated * 100 / count_occurences}% of the occurrences") # - plt.bar(range(100), [words[w] for w in sorted_words[:100]]) plt.show() vectorizer = CountVectorizer(binary=True) x_train_cv = vectorizer.fit_transform(x_train_text) x_test_cv = vectorizer.transform(x_test_text) print('Before Vectorize: ', x_train_text[3]) # Notice that the matriz is sparse print('After Vectorize: ') print(x_train_cv[3]) def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, if possible, in the menu under `Runtime` -> `Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") 
return device # Set the device (check if gpu is available) device = set_device() #First we will create a Dictionary (`word_to_idx`). #This dictionary will map each Token (usually words) to an index (an integer number). #We want to limit our dictionary to a certain number of tokens (`num_words_dict`), so we will include in our ditionary those with more occurrences. #Change the path# glove = pd.read_csv('/Users/bengieru/Downloads/glove/glove.twitter.27B.100d.txt', sep=" ", quoting=3, header=None, index_col=0) glove_embedding = {key: val.values for key, val in glove.T.items()} len(sorted_words) # + #Let's select only the most used. num_words_dict = 6000 # We reserve two numbers for special tokens. most_used_words = sorted_words[:num_words_dict-2] #We will add two extra Tokens to the dictionary, one for words outside the dictionary (`'UNK'`) and one for padding the sequences (`'PAD'`). # dictionary to go from words to idx word_to_idx = {} # dictionary to go from idx to words (just in case) idx_to_word = {} # We include the special tokens first PAD_token = 0 UNK_token = 1 word_to_idx['PAD'] = PAD_token word_to_idx['UNK'] = UNK_token idx_to_word[PAD_token] = 'PAD' idx_to_word[UNK_token] = 'UNK' # We popullate our dictionaries with the most used words for num,word in enumerate(most_used_words): word_to_idx[word] = num + 2 idx_to_word[num+2] = word def create_embedding_matrix(word_index,embedding_dict,dimension): embedding_matrix=np.zeros((len(word_index)+1,dimension)) for word,index in word_index.items(): if word in embedding_dict: embedding_matrix[index]=embedding_dict[word] return embedding_matrix # My word_to_idx is the same as their word_index embedding_matrix = create_embedding_matrix(word_to_idx, embedding_dict = glove_embedding, dimension = 100) #Our goal now is to transform each tweet from a sequence of tokens to a sequence of indexes. # These sequences of indexes will be the input to our pytorch model. # A function to convert list of tokens to list of indexes def tokens_to_idx(sentences_tokens,word_to_idx): sentences_idx = [] for sent in sentences_tokens: sent_idx = [] for word in sent: if word in word_to_idx: sent_idx.append(word_to_idx[word]) else: sent_idx.append(word_to_idx['UNK']) sentences_idx.append(sent_idx) return sentences_idx x_train_idx = tokens_to_idx(x_train_token, word_to_idx) x_test_idx = tokens_to_idx(x_test_token, word_to_idx) # We need all the sequences to have the same length. # To select an adequate sequence length, let's explore some statistics about the length of the tweets: tweet_lens = np.asarray([len(sentence) for sentence in x_train_idx]) print('Max tweet word length: ',tweet_lens.max()) print('Mean tweet word length: ',np.median(tweet_lens)) print('99% percent under: ',np.quantile(tweet_lens,0.99)) # + #We cut the sequences which are larger than our chosen maximum length (`max_lenght`) and fill with zeros the ones that are shorter. 
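# Illustrative side check (a sketch added here, not part of the original notebook): using the
# tweet_lens statistics computed above, this shows what fraction of the training tweets fit
# inside the cut-off of 30 tokens chosen just below, i.e. how many sequences avoid truncation.
print('Fraction of training tweets with at most 30 tokens:', np.mean(tweet_lens <= 30))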
# We choose the max length max_length = 30 # A function to make all the sequence have the same lenght # Note that the output is a Numpy matrix def padding(sentences, seq_len): features = np.zeros((len(sentences), seq_len),dtype=int) for ii, tweet in enumerate(sentences): len_tweet = len(tweet) if len_tweet != 0: if len_tweet <= seq_len: # If its shorter, we fill with zeros (the padding Token index) features[ii, -len(tweet):] = np.array(tweet)[:seq_len] if len_tweet > seq_len: # If its larger, we take the last 'seq_len' indexes features[ii, :] = np.array(tweet)[-seq_len:] return features # + # We convert our list of tokens into a numpy matrix # where all instances have the same lenght x_train_pad = padding(x_train_idx,max_length) x_test_pad = padding(x_test_idx,max_length) # We convert our target list a numpy matrix y_train_np = np.asarray(y_train) y_test_np = np.asarray(y_test) # - some_number = 2 print('Before padding: ', x_train_idx[some_number]) print('After padding: ', x_train_pad[some_number]) # + # create Tensor datasets train_data = TensorDataset(torch.from_numpy(x_train_pad), torch.from_numpy(y_train_np)) valid_data = TensorDataset(torch.from_numpy(x_test_pad), torch.from_numpy(y_test_np)) # Batch size (this is an important hyperparameter) batch_size = 100 # dataloaders # make sure to SHUFFLE your data train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size,drop_last = True) valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size,drop_last = True) # - x_train_pad.shape, x_test_pad.shape, y_train_np.shape, y_test_np.shape # + # Obtain one batch of training data dataiter = iter(train_loader) sample_x, sample_y = dataiter.next() print('Sample input size: ', sample_x.size()) # batch_size, seq_length print('Sample input: \n', sample_x) print('Sample input: \n', sample_y) # - # Now, we will define the `SentimentRNN` class. Most of the model's class will be familiar to you, but there are two important layers we would like you to pay attention to: # # * Embedding Layer # > This layer is like a linear layer, but it makes it posible to use a sequence of inedexes as inputs (instead of a sequence of one-hot-encoded vectors). During training, the Embedding layer learns a linear transformation from the space of words (a vector space of dimension `num_words_dict`) into the a new, smaller, vector space of dimension `embedding_dim`. We suggest you to read this [thread](https://discuss.pytorch.org/t/how-does-nn-embedding-work/88518/3) and the [pytorch documentation](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html) if you want to learn more about this particular kind of layers. # # # * LSTM layer # > This is one of the most used class of Recurrent Neural Networks. In Pytorch we can add several stacked layers in just one line of code. In our case, the number of layers added are decided with the parameter `no_layers`. If you want to learn more about LSTMs we strongly recommend you this [Colahs thread](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) about them. # # # # # # # # + # %%time # Let us a LSTM RNN using the pre-trianed embedding from GLOVE. 
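# Illustrative sketch (added, not part of the original notebook): the Embedding layer described
# above simply maps integer token indexes to dense vectors. Loading the GloVe matrix built
# earlier and passing a small padded batch gives a [batch, max_length, embedding_dim] tensor,
# the same shape the model below consumes.
_demo_embedding = nn.Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1])
_demo_embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32))
_demo_batch = torch.from_numpy(x_train_pad[:2])   # two padded tweets, each of length max_length
print(_demo_embedding(_demo_batch).shape)         # expected: torch.Size([2, 30, 100])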
class SentimentLSTM(nn.Module): def __init__(self, no_layers, hidden_dim, embedding_matrix, drop_prob=0.1): super(SentimentLSTM,self).__init__() self.output_dim = output_dim self.hidden_dim = hidden_dim self.no_layers = no_layers self.drop_prob = drop_prob self.vocab_size = embedding_matrix.shape[0] self.embedding_dim = embedding_matrix.shape[1] # Embedding Layer self.embedding=nn.Embedding(self.vocab_size,self.embedding_dim) self.embedding.weight=nn.Parameter(torch.tensor(embedding_matrix,dtype=torch.float32)) self.embedding.weight.requires_grad=True # LSTM Layers self.lstm = nn.LSTM(input_size=self.embedding_dim,hidden_size=self.hidden_dim, num_layers=no_layers, bidirectional=False, batch_first=True, dropout=self.drop_prob) # Dropout layer self.dropout = nn.Dropout(drop_prob) # Linear and Sigmoid layer self.fc = nn.Linear(self.hidden_dim, output_dim) self.sig = nn.Sigmoid() def forward(self,x,hidden): batch_size = x.size(0) embeds = self.embedding(x) #Shape: [batch_size x max_length x embedding_dim] # LSTM out lstm_out, hidden = self.lstm(embeds, hidden) # Shape: [batch_size x max_length x hidden_dim] # Select the activation of the last Hidden Layer lstm_out = lstm_out[:,-1,:].contiguous() # Shape: [batch_size x hidden_dim] ## You can instead average the activations across all the times # lstm_out = torch.mean(lstm_out, 1).contiguous() # Dropout and Fully connected layer out = self.dropout(lstm_out) out = self.fc(out) # Sigmoid function sig_out = self.sig(out) # return last sigmoid output and hidden state return sig_out, hidden def init_hidden(self, batch_size): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x batch_size x hidden_dim, # initialized to zero, for hidden state and cell state of LSTM h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device) c0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device) hidden = (h0,c0) return hidden ####################################################################################### # Parameters of our network # Size of our vocabulary # Number of stacked LSTM layers no_layerss = [2, 4, 8] # Dimension of the hidden layer in LSTMs hidden_dim = 64 # Dropout parameter for regularization output_dim = 1 # Dropout parameter for regularization drop_prob = 0.5 # Let's define our model fig,ax = plt.subplots(3,2, figsize = (18, 18)) fig.suptitle('LSTM model with hidden_dim = %d, output_dim = %d, drop_prob = %.2f'%(hidden_dim, output_dim, drop_prob), fontsize = 24) for no, no_layers in enumerate(no_layerss): model = SentimentLSTM(no_layers, hidden_dim, embedding_matrix, drop_prob = drop_prob) # Moving to gpu model.to(device) print(model) ####################################################################################### # How many trainable parameters does our model have? 
model_parameters = filter(lambda p: p.requires_grad, model.parameters()) params = sum([np.prod(p.size()) for p in model_parameters]) print('Total Number of parameters: ',params) ####################################################################################### # loss and optimization functions lrs = np.linspace(0.0003, 0.0007, 5)#define a range of learning rates # Binary crossentropy is a good loss function for a binary classification problem criterion = nn.BCELoss() # We choose an Adam optimizer optimizers = [] for i,lr in enumerate(lrs): optimizer = torch.optim.Adam(model.parameters(), lr = lr) optimizers.append(optimizer) ####################################################################################### # function to predict accuracy def acc(pred,label): pred = torch.round(pred.squeeze()) return torch.sum(pred == label.squeeze()).item() ####################################################################################### # Number of training Epochs epochs = 5 # Maximum absolute value accepted for the gradeint clip = 5 # Initial Loss value (assumed big) valid_loss_min = np.Inf # Lists to follow the evolution of the loss and accuracy epoch_tr_loss, epoch_vl_loss = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] epoch_tr_acc, epoch_vl_acc = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] # Train for a number of Epochs for i, lr in enumerate(lrs): for epoch in range(epochs): train_losses = [] train_acc = 0.0 model.train() for inputs, labels in train_loader: # Initialize hidden state h = model.init_hidden(batch_size) # Creating new variables for the hidden state h = tuple([each.data.to(device) for each in h]) # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Set gradient to zero model.zero_grad() # Compute model output output,h = model(inputs,h) # Calculate the loss and perform backprop loss = criterion(output.squeeze(), labels.float()) loss.backward() train_losses.append(loss.item()) # calculating accuracy accuracy = acc(output,labels) train_acc += accuracy #`clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. nn.utils.clip_grad_norm_(model.parameters(), clip) optimizers[i].step() # Evaluate on the validation set for this epoch val_losses = [] val_acc = 0.0 model.eval() for inputs, labels in valid_loader: # Initialize hidden state val_h = model.init_hidden(batch_size) val_h = tuple([each.data.to(device) for each in val_h]) # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Compute model output output, val_h = model(inputs, val_h) # Compute Loss val_loss = criterion(output.squeeze(), labels.float()) val_losses.append(val_loss.item()) accuracy = acc(output,labels) val_acc += accuracy epoch_train_loss = np.mean(train_losses) epoch_val_loss = np.mean(val_losses) epoch_train_acc = train_acc/len(train_loader.dataset) epoch_val_acc = val_acc/len(valid_loader.dataset) epoch_tr_loss[i].append(epoch_train_loss) epoch_vl_loss[i].append(epoch_val_loss) epoch_tr_acc[i].append(epoch_train_acc) epoch_vl_acc[i].append(epoch_val_acc) #print(f'Epoch {epoch+1}') #print(f'train_loss : {epoch_train_loss} val_loss : {epoch_val_loss}') #print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}') if epoch_val_loss <= valid_loss_min: #print('Validation loss decreased ({:.6f} --> {:.6f}). 
Saving model ...'.format(valid_loss_min,epoch_val_loss)) # torch.save(model.state_dict(), '../working/state_dict.pt') valid_loss_min = epoch_val_loss #print(25*'==') ####################################################################################### # plot the results from the training and validation accuracies #fig,ax = plt.subplots(3,2, figsize = 60,20) colors = ['red', 'blue', 'black', 'purple', 'green', 'brown'] for i,e in enumerate(lrs): ax[no][0].plot(epoch_tr_acc[i], linestyle = '--', color = colors[i], label = 'Train Acc, LR = %.4f'%e) ax[no][0].plot(epoch_vl_acc[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) # plt.ylim([70, 80]) ax[no][0].set_title("Accuracy", fontsize = 20) ax[no][0].set_xlabel('Epochs', fontsize = 16) ax[no][0].set_xticks([0,1,2,3,4]) ax[no][0].set_ylabel('Number of LSTM Layers = %d'%no_layers, fontsize = 20) #ax[no][0].legend(prop = dict(size = 12)) for i,e in enumerate(lrs): ax[no][1].plot(epoch_tr_loss[i], color = colors[i], linestyle = '--', label = 'Train Acc, LR = %.4f'%e) ax[no][1].plot(epoch_vl_loss[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) ax[no][1].set_title("Loss", fontsize = 20) ax[no][1].set_xlabel('Epochs', fontsize = 16) ax[no][1].set_xticks([0,1,2,3,4]) #ax[no][1].set_ylabel('Number of Layers = %d'%no_layers, fontsize = 20) ax[no][1].legend(prop = dict(size = 10)) #plt.tight_layout() # + # %%time class SentimentGRU(nn.Module): def __init__(self, no_layers, hidden_dim, embedding_matrix, drop_prob = 0.01): super(SentimentGRU, self).__init__() self.output_dim = output_dim self.hidden_dim = hidden_dim self.no_layers = no_layers self.drop_prob = drop_prob self.vocab_size = embedding_matrix.shape[0] self.embedding_dim = embedding_matrix.shape[1] # Embedding layer self.embedding=nn.Embedding(self.vocab_size,self.embedding_dim) self.embedding.weight=nn.Parameter(torch.tensor(embedding_matrix,dtype=torch.float32)) self.embedding.weight.requires_grad=True # GRU Layers self.gru = nn.GRU(input_size= self.embedding_dim,hidden_size=self.hidden_dim, num_layers=no_layers, bidirectional=False, batch_first=True, dropout=self.drop_prob) # Dropout layer self.dropout = nn.Dropout(drop_prob) # Linear and Sigmoid layer self.fc = nn.Linear(self.hidden_dim, output_dim) self.sig = nn.Sigmoid() def forward(self,x,hidden): batch_size = x.size(0) self.h = self.init_hidden(batch_size) # Embedding out embeds = self.embedding(x) #Shape: [batch_size x max_length x embedding_dim] # GRU out gru_out, self.h = self.gru(embeds, self.h) # Shape: [batch_size x max_length x hidden_dim] # Select the activation of the last Hidden Layer gru_out = gru_out[:,-1,:].contiguous() # Shape: [batch_size x hidden_dim] # Dropout and Fully connected layer out = self.dropout(gru_out) out = self.fc(out) # Sigmoid function sig_out = self.sig(out) # return last sigmoid output and hidden state return sig_out def init_hidden(self, batch_size): hidden = (torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)) return hidden ####################################################################################### # Parameters of our network # Size of our vocabulary # Number of stacked LSTM layers no_layerss = [2, 4, 8] # Dimension of the hidden layer in LSTMs hidden_dim = 64 # Dropout parameter for regularization output_dim = 1 # Dropout parameter for regularization drop_prob = 0.5 # Let's define our model fig,ax = plt.subplots(3,2, figsize = (18, 18)) fig.suptitle('GRU model with hidden_dim = %d, output_dim = %d, drop_prob = %.2f'%(hidden_dim, output_dim, 
drop_prob), fontsize = 24) for no, no_layers in enumerate(no_layerss): model = SentimentGRU(no_layers, hidden_dim, embedding_matrix, drop_prob = drop_prob) # Moving to gpu model.to(device) print(model) # How many trainable parameters does our model have? model_parameters = filter(lambda p: p.requires_grad, model.parameters()) params = sum([np.prod(p.size()) for p in model_parameters]) print('Total Number of parameters: ',params) ####################################################################################### # loss and optimization functions lrs = np.linspace(0.0003, 0.0007, 5) #define a range of learning rates # Binary crossentropy is a good loss function for a binary classification problem criterion = nn.BCELoss() # We choose an Adam optimizer optimizers = [] for i,lr in enumerate(lrs): optimizer = torch.optim.Adam(model.parameters(), lr=lr) optimizers.append(optimizer) ####################################################################################### # function to predict accuracy def acc(pred,label): pred = torch.round(pred.squeeze()) return torch.sum(pred == label.squeeze()).item() ####################################################################################### ####################################################################################### # Number of training Epochs epochs = 5 # Maximum absolute value accepted for the gradeint clip = 5 # Initial Loss value (assumed big) valid_loss_min = np.Inf # Lists to follow the evolution of the loss and accuracy epoch_tr_loss,epoch_vl_loss = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] epoch_tr_acc,epoch_vl_acc = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] # Train for a number of Epochs for i, lr in enumerate(lrs): for epoch in range(epochs): train_losses = [] train_acc = 0.0 model.train() for inputs, labels in train_loader: # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Set gradient to zero model.zero_grad() # Compute model output output = model(inputs,h) # Calculate the loss and perform backprop loss = criterion(output.squeeze(), labels.float()) loss.backward() train_losses.append(loss.item()) # calculating accuracy accuracy = acc(output,labels) train_acc += accuracy #`clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() # Evaluate on the validation set for this epoch val_losses = [] val_acc = 0.0 model.eval() for inputs, labels in valid_loader: # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Compute model output output = model(inputs, val_h) # Compute Loss val_loss = criterion(output.squeeze(), labels.float()) val_losses.append(val_loss.item()) accuracy = acc(output,labels) val_acc += accuracy epoch_train_loss = np.mean(train_losses) epoch_val_loss = np.mean(val_losses) epoch_train_acc = train_acc/len(train_loader.dataset) epoch_val_acc = val_acc/len(valid_loader.dataset) epoch_tr_loss[i].append(epoch_train_loss) epoch_vl_loss[i].append(epoch_val_loss) epoch_tr_acc[i].append(epoch_train_acc) epoch_vl_acc[i].append(epoch_val_acc) #print(f'Epoch {epoch+1}') #print(f'train_loss : {epoch_train_loss} val_loss : {epoch_val_loss}') #print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}') if epoch_val_loss <= valid_loss_min: #print('Validation loss decreased ({:.6f} --> {:.6f}). 
Saving model ...'.format(valid_loss_min,epoch_val_loss)) # torch.save(model.state_dict(), '../working/state_dict.pt') valid_loss_min = epoch_val_loss #print(25*'==') ####################################################################################### # plot the results from the training and validation accuracies #fig,ax = plt.subplots(3,2, figsize = 60,20) colors = ['red', 'blue', 'black', 'purple', 'green', 'brown'] for i,e in enumerate(lrs): ax[no][0].plot(epoch_tr_acc[i], linestyle = '--', color = colors[i], label = 'Train Acc, LR = %.4f'%e) ax[no][0].plot(epoch_vl_acc[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) # plt.ylim([70, 80]) ax[no][0].set_title("Accuracy", fontsize = 20) ax[no][0].set_xlabel('Epochs', fontsize = 16) ax[no][0].set_xticks([0,1,2,3,4]) ax[no][0].set_ylabel('Number of GRU Layers = %d'%no_layers, fontsize = 20) #ax[no][0].legend(prop = dict(size = 12)) for i,e in enumerate(lrs): ax[no][1].plot(epoch_tr_loss[i], color = colors[i], linestyle = '--', label = 'Train Acc, LR = %.4f'%e) ax[no][1].plot(epoch_vl_loss[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) ax[no][1].set_title("Loss", fontsize = 20) ax[no][1].set_xlabel('Epochs', fontsize = 16) ax[no][1].set_xticks([0,1,2,3,4]) #ax[no][1].set_ylabel('Number of Layers = %d'%no_layers, fontsize = 20) ax[no][1].legend(prop = dict(size = 10)) #plt.tight_layout() # + # %%time class SentimentLSTM_bi(nn.Module): def __init__(self,no_layers,hidden_dim,embedding_matrix,drop_prob=0.1): super(SentimentLSTM_bi,self).__init__() self.output_dim = output_dim self.hidden_dim = hidden_dim self.no_layers = no_layers self.drop_prob = drop_prob self.vocab_size = embedding_matrix.shape[0] self.embedding_dim = embedding_matrix.shape[1] # Embedding Layer self.embedding=nn.Embedding(self.vocab_size,self.embedding_dim) self.embedding.weight=nn.Parameter(torch.tensor(embedding_matrix,dtype=torch.float32)) self.embedding.weight.requires_grad=True # LSTM Layers self.lstm = nn.LSTM(input_size=self.embedding_dim,hidden_size=self.hidden_dim, num_layers=no_layers, bidirectional=True, batch_first=True, dropout=self.drop_prob) # Dropout layer self.dropout = nn.Dropout(drop_prob) # Linear and Sigmoid layer self.fc = nn.Linear(self.hidden_dim * 2, output_dim) self.sig = nn.Sigmoid() def forward(self,x,hidden): batch_size = x.size(0) embeds = self.embedding(x) #Shape: [batch_size x max_length x embedding_dim] # LSTM out lstm_out, hidden = self.lstm(embeds, hidden) # Shape: [batch_size x max_length x hidden_dim] # Select the activation of the last Hidden Layer lstm_out = lstm_out[:,-1,:].contiguous() # Shape: [batch_size x hidden_dim] ## You can instead average the activations across all the times # lstm_out = torch.mean(lstm_out, 1).contiguous() # Dropout and Fully connected layer out = self.dropout(lstm_out) out = self.fc(out) # Sigmoid function sig_out = self.sig(out) # return last sigmoid output and hidden state return sig_out, hidden def init_hidden(self, batch_size): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x batch_size x hidden_dim, # initialized to zero, for hidden state and cell state of LSTM h0 = torch.zeros((self.no_layers * 2,batch_size, self.hidden_dim)).to(device) c0 = torch.zeros((self.no_layers * 2,batch_size, self.hidden_dim)).to(device) hidden = (h0,c0) return hidden ####################################################################################### # Parameters of our network # Size of our vocabulary # Number of stacked LSTM layers no_layerss = [2, 4, 8] # 
Dimension of the hidden layer in LSTMs hidden_dim = 64 # Dropout parameter for regularization output_dim = 1 # Dropout parameter for regularization drop_prob = 0.5 # Let's define our model fig,ax = plt.subplots(3,2, figsize = (18, 18)) fig.suptitle('BI-LSTM model with hidden_dim = %d, output_dim = %d, drop_prob = %.2f'%(hidden_dim, output_dim, drop_prob), fontsize = 24) for no, no_layers in enumerate(no_layerss): model = SentimentLSTM_bi(no_layers, hidden_dim, embedding_matrix, drop_prob = drop_prob) # Moving to gpu model.to(device) print(model) ####################################################################################### # How many trainable parameters does our model have? model_parameters = filter(lambda p: p.requires_grad, model.parameters()) params = sum([np.prod(p.size()) for p in model_parameters]) print('Total Number of parameters: ',params) ####################################################################################### # loss and optimization functions lrs = np.linspace(0.0003, 0.0007, 5)#define a range of learning rates # Binary crossentropy is a good loss function for a binary classification problem criterion = nn.BCELoss() # We choose an Adam optimizer optimizers = [] for i,lr in enumerate(lrs): optimizer = torch.optim.Adam(model.parameters(), lr=lr) optimizers.append(optimizer) ####################################################################################### # function to predict accuracy def acc(pred,label): pred = torch.round(pred.squeeze()) return torch.sum(pred == label.squeeze()).item() ####################################################################################### ####################################################################################### # Number of training Epochs epochs = 5 # Maximum absolute value accepted for the gradeint clip = 5 # Initial Loss value (assumed big) valid_loss_min = np.Inf # Lists to follow the evolution of the loss and accuracy epoch_tr_loss,epoch_vl_loss = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] epoch_tr_acc,epoch_vl_acc = [[] for i in range(len(lrs))],[[] for i in range(len(lrs))] # Train for a number of Epochs for i, lr in enumerate(lrs): for epoch in range(epochs): train_losses = [] train_acc = 0.0 model.train() for inputs, labels in train_loader: # Initialize hidden state h = model.init_hidden(batch_size) # Creating new variables for the hidden state h = tuple([each.data.to(device) for each in h]) # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Set gradient to zero model.zero_grad() # Compute model output output,h = model(inputs,h) # Calculate the loss and perform backprop loss = criterion(output.squeeze(), labels.float()) loss.backward() train_losses.append(loss.item()) # calculating accuracy accuracy = acc(output,labels) train_acc += accuracy #`clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. 
nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() # Evaluate on the validation set for this epoch val_losses = [] val_acc = 0.0 model.eval() for inputs, labels in valid_loader: # Initialize hidden state val_h = model.init_hidden(batch_size) val_h = tuple([each.data.to(device) for each in val_h]) # Move batch inputs and labels to gpu inputs, labels = inputs.to(device), labels.to(device) # Compute model output output, val_h = model(inputs, val_h) # Compute Loss val_loss = criterion(output.squeeze(), labels.float()) val_losses.append(val_loss.item()) accuracy = acc(output,labels) val_acc += accuracy epoch_train_loss = np.mean(train_losses) epoch_val_loss = np.mean(val_losses) epoch_train_acc = train_acc/len(train_loader.dataset) epoch_val_acc = val_acc/len(valid_loader.dataset) epoch_tr_loss[i].append(epoch_train_loss) epoch_vl_loss[i].append(epoch_val_loss) epoch_tr_acc[i].append(epoch_train_acc) epoch_vl_acc[i].append(epoch_val_acc) #print(f'Epoch {epoch+1}') #print(f'train_loss : {epoch_train_loss} val_loss : {epoch_val_loss}') #print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}') if epoch_val_loss <= valid_loss_min: # print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min,epoch_val_loss)) # torch.save(model.state_dict(), '../working/state_dict.pt') valid_loss_min = epoch_val_loss #print(25*'==') ####################################################################################### # plot the results from the training and validation accuracies #fig,ax = plt.subplots(3,2, figsize = 60,20) colors = ['red', 'blue', 'black', 'purple', 'green', 'brown'] for i,e in enumerate(lrs): ax[no][0].plot(epoch_tr_acc[i], linestyle = '--', color = colors[i], label = 'Train Acc, LR = %.4f'%e) ax[no][0].plot(epoch_vl_acc[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) # plt.ylim([70, 80]) ax[no][0].set_title("Accuracy", fontsize = 20) ax[no][0].set_xlabel('Epochs', fontsize = 16) ax[no][0].set_xticks([0,1,2,3,4]) ax[no][0].set_ylabel('Number of BI-LSTM Layers = %d'%no_layers, fontsize = 20) #ax[no][0].legend(prop = dict(size = 12)) for i,e in enumerate(lrs): ax[no][1].plot(epoch_tr_loss[i], color = colors[i], linestyle = '--', label = 'Train Acc, LR = %.4f'%e) ax[no][1].plot(epoch_vl_loss[i], color = colors[i], label = 'Validation Acc, LR = %.4f'%e) ax[no][1].set_title("Loss", fontsize = 20) ax[no][1].set_xlabel('Epochs', fontsize = 16) ax[no][1].set_xticks([0,1,2,3,4]) #ax[no][1].set_ylabel('Number of Layers = %d'%no_layers, fontsize = 20) ax[no][1].legend(prop = dict(size = 10)) #plt.tight_layout() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import nibabel as nib from dipy.io import streamline import os from glob import glob import matplotlib.pyplot as plt import seaborn as sns sns.set() # %matplotlib inline # %config InlineBackend.figure_format = 'retina' # - # define region labels and order: labels = ['LH_CN', 'LH_SOC', 'LH_IC', 'LH_MGB', 'RH_CN', 'RH_SOC', 'RH_IC', 'RH_MGB'] labels_atlas = ['LH_CN', 'RH_CN', 'LH_SOC', 'RH_SOC', 'LH_IC', 'RH_IC', 'LH_MGB', 'RH_MGB'] #label_order = list(range(1,9)) # if regions are already in same order as labels label_order = [1, 3, 5, 7, 2, 4, 6, 8] # define directories: project_dir = os.path.abspath('/om2/user/ksitek/exvivo/') # ## Full resolution, dilated ROI masks 
processing_dir = os.path.join(project_dir, 'analysis/dipy/csd/', 'alow-0p001_angthr-75_minangle-10_fathresh-50_20190517_0.2mm_' \ 'conj_kevin_v2_faruk_v1_dil-500um') sl_dir = os.path.join(processing_dir, 'target_streamlines') # load streamlines and check the number of streamlines: sl_file = os.path.join(sl_dir, 'all_atlas_streamlines.trk') streamlines, hdr = streamline.load_trk(sl_file) total_sl_count = hdr['nb_streamlines'] print('total streamlines = %d'%(total_sl_count)) # + # create a connectivity matrix from the seed-target streamline files matrix_sl = np.zeros((len(labels), len(labels))) for i, seed in enumerate(label_order): seed_label = labels_atlas[seed-1] print('seed: %s'%seed_label) for j, target in enumerate(label_order): target_label = labels_atlas[target-1] print(' target: %s'%target_label) try: sl_file = os.path.join(sl_dir, 'target_streamlines_seed-%d_target-%d.trk'%(seed, target)) streamlines, hdr = streamline.load_trk(sl_file) sl_count = hdr['nb_streamlines'] print(' %d'%sl_count) except: sl_count = 0 matrix_sl[i,j] = sl_count matrix_sl = matrix_sl.astype(np.int) # convert to integers # normalize the matrix by the total number of streamlines norm_matrix_sl = matrix_sl/total_sl_count*100 # save matrix txt_filename = os.path.join(sl_dir, 'sl_conn_mat.txt') np.savetxt(txt_filename, matrix_sl) # - # create a custom color palette so that 0 is white (not light red) new_cp = sns.color_palette("Reds",n_colors=10000) new_cp[0] = [1,1,1] new_cp[0:10] # make plot labels norm_mat_str = [] for rx, row in enumerate(norm_matrix_sl): norm_mat_str_row = [str(i) for i in row] str_row = [j[:4] for j in norm_mat_str_row] norm_mat_str.append(str_row) norm_mat_str_arr = np.array(norm_mat_str) # + # make normalized streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(norm_matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(norm_matrix_sl, annot=norm_mat_str_arr, fmt="s", linewidths=.5,# ax=ax, #ax = sns.heatmap(norm_matrix_sl, annot=True, fmt=".02f", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='percent of total streamlines (n = %d)'%(total_sl_count)); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_norm_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # + # make streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(matrix_sl, annot=True, fmt="d", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='number of streamlines (n = %d)'%total_sl_count); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # - # ## Full resolution, original ROI masks processing_dir = os.path.join(project_dir, 'analysis/dipy/csd/', 'alow-0p001_angthr-75_minangle-10_fathresh-50_20190517_0.2mm_' \ 'conj_kevin_v2_faruk_v1') sl_dir = os.path.join(processing_dir, 'target_streamlines') # load streamlines and check the number of streamlines: sl_file = os.path.join(sl_dir, 'all_atlas_streamlines.trk') streamlines, hdr = streamline.load_trk(sl_file) total_sl_count = hdr['nb_streamlines'] print('total streamlines = %d'%(total_sl_count)) # + # create a connectivity matrix from the seed-target streamline files 
matrix_sl = np.zeros((len(labels), len(labels))) for i, seed in enumerate(label_order): seed_label = labels_atlas[seed-1] print('seed: %s'%seed_label) for j, target in enumerate(label_order): target_label = labels_atlas[target-1] print(' target: %s'%target_label) try: sl_file = os.path.join(sl_dir, 'target_streamlines_seed-%d_target-%d.trk'%(seed, target)) streamlines, hdr = streamline.load_trk(sl_file) sl_count = hdr['nb_streamlines'] print(' %d'%sl_count) except: sl_count = 0 matrix_sl[i,j] = sl_count matrix_sl = matrix_sl.astype(np.int) # convert to integers # normalize the matrix by the total number of streamlines norm_matrix_sl = matrix_sl/total_sl_count*100 # save matrix txt_filename = os.path.join(sl_dir, 'sl_conn_mat.txt') np.savetxt(txt_filename, matrix_sl) # - # create a custom color palette so that 0 is white (not light red) new_cp = sns.color_palette("Reds",n_colors=10000) new_cp[0] = [1,1,1] new_cp[0:10] # make plot labels norm_mat_str = [] for rx, row in enumerate(norm_matrix_sl): norm_mat_str_row = [str(i) for i in row] str_row = [j[:4] for j in norm_mat_str_row] norm_mat_str.append(str_row) norm_mat_str_arr = np.array(norm_mat_str) # + # make normalized streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(norm_matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(norm_matrix_sl, annot=norm_mat_str_arr, fmt="s", linewidths=.5,# ax=ax, #ax = sns.heatmap(norm_matrix_sl, annot=True, fmt=".02f", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='percent of total streamlines (n = %d)'%(total_sl_count)); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_norm_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # + # make streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(matrix_sl, annot=True, fmt="d", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='number of streamlines (n = %d)'%total_sl_count); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # - # ## Downsampled resolution, original ROI masks processing_dir = os.path.join(project_dir, 'analysis/dipy/csd/', 'alow-0p001_angthr-75_minangle-10_fathresh-50_20190523_1.05mm_' \ 'conj_kevin_v2_faruk_v1') sl_dir = os.path.join(processing_dir, 'target_streamlines') # load streamlines and check the number of streamlines: sl_file = os.path.join(sl_dir, 'all_atlas_streamlines.trk') streamlines, hdr = streamline.load_trk(sl_file) total_sl_count = hdr['nb_streamlines'] print('total streamlines = %d'%(total_sl_count)) # + # create a connectivity matrix from the seed-target streamline files matrix_sl = np.zeros((len(labels), len(labels))) for i, seed in enumerate(label_order): seed_label = labels_atlas[seed-1] print('seed: %s'%seed_label) for j, target in enumerate(label_order): target_label = labels_atlas[target-1] print(' target: %s'%target_label) try: sl_file = os.path.join(sl_dir, 'target_streamlines_seed-%d_target-%d.trk'%(seed, target)) streamlines, hdr = streamline.load_trk(sl_file) sl_count = hdr['nb_streamlines'] print(' %d'%sl_count) except: sl_count = 0 matrix_sl[i,j] = sl_count matrix_sl = matrix_sl.astype(np.int) # 
convert to integers # normalize the matrix by the total number of streamlines norm_matrix_sl = matrix_sl/total_sl_count*100 # save matrix txt_filename = os.path.join(sl_dir, 'sl_conn_mat.txt') np.savetxt(txt_filename, matrix_sl) # - # create a custom color palette so that 0 is white (not light red) new_cp = sns.color_palette("Reds",n_colors=10000) new_cp[0] = [1,1,1] new_cp[0:10] # make plot labels norm_mat_str = [] for rx, row in enumerate(norm_matrix_sl): norm_mat_str_row = [str(i) for i in row] str_row = [j[:4] for j in norm_mat_str_row] norm_mat_str.append(str_row) norm_mat_str_arr = np.array(norm_mat_str) # + # make normalized streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(norm_matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(norm_matrix_sl, annot=norm_mat_str_arr, fmt="s", linewidths=.5,# ax=ax, #ax = sns.heatmap(norm_matrix_sl, annot=True, fmt=".02f", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='percent of total streamlines (n = %d)'%(total_sl_count)); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_norm_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # + # make streamline count plot plt.clf() plt.figure(figsize=(5,5)) mask = np.zeros_like(matrix_sl) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(matrix_sl, annot=True, fmt="d", linewidths=.5,# ax=ax, xticklabels=labels, yticklabels=labels, square=True,#mask=mask, cmap=new_cp, cbar=False); ax.set(title='number of streamlines (n = %d)'%total_sl_count); plt.tight_layout() # save plot out_file = os.path.join(sl_dir, 'streamline_heatplot.png') print(out_file) ax.figure.savefig(out_file, box_inches='tight', dpi=200) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Como fazer joins no pyspark # # Para uma explicação melhor acessar [esse link](https://www.datasciencemadesimple.com/join-in-pyspark-merge-inner-outer-right-left-join-in-pyspark/) # ## Inner join # # Obtem a intersecção dos dois datasets # # ```python # ### Inner join in pyspark # df_inner = df1.join(df2, on=['Roll_No'], how='inner') # ``` # # ## Outer join # # Entenda como o full outer (traz tudo dos dois datasets) # # ```python # ### Outer join in pyspark # df_outer = df1.join(df2, on=['Roll_No'], how='outer') # ``` # ## Left join # # Ou left outer join # # ```python # ### Left join in pyspark # df_left = df1.join(df2, on=['Roll_No'], how='left') # ``` # ## Right join # # Ou right outer join # # ```python # ### Right join in pyspark # df_right = df1.join(df2, on=['Roll_No'], how='right') # ``` # ## Left Anti join # # Esse traz tudo de df1 que não tem correspondente em df2 # # ```python # ### Left Anti join in pyspark # df_left_anti = df1.join(df2, on=['Roll_No'], how='left_anti') # # ``` # ## Left semi join # # Ver [aqui](https://stackoverflow.com/questions/21738784/difference-between-inner-join-and-left-semi-join#21738897) para uma excelente explicação mas resumindo, # é equivalente ao "SELECT ... WHERE EXISTS (...)" retorna apenas cada uma das linhas do dataset da esquerda que existirem correspondentes no da direita, retorna apenas uma linha por intersecção mesmo que o dataset da direita tenha mais combinações. Também só pode retornar campos do dataset da esquerda. 
O inner join em comparação retornaria uma linha para cada intersecção entre esquerda e direta. # # ```python # df_left_semi = df1.join(df2, on=['Roll_No'], how='left_semi') # ``` # # Equivalentes: # ```sql # SELECT name # FROM table_1 a # LEFT SEMI JOIN table_2 b ON (a.name=b.name) # # # SELECT name # FROM table_1 a # WHERE EXISTS( # SELECT * FROM table_2 b WHERE (a.name=b.name)) # ``` # ## Full join # Traz tudo existente tanto da esquerda quanto da direita e as intersecções como uma unica linha. Sera a união do INNER, LEFT OUTER e "RIGHT OUTER" # # ```python # ### Outer join in pyspark # df_outer = df1.join(df2, on=['Roll_No'], how='full') # ``` # ## Anti join # Pela descrição é igual a left anti. # # ```python # ### Outer join in pyspark # df_anti = df1.join(df2, on=['Roll_No'], how='anti') # ``` # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # FOOOF - Model Description #
# This notebook provides a more theoretical / mathematical description of the FOOOF model and parameters.
#
#
# ### Introduction
#
# A neural power spectrum is fit as a combination of the aperiodic (1/f) signal and periodic oscillations. Putative oscillations (hereafter referred to as 'peaks') are frequency regions in which there are 'bumps' of power over and above the aperiodic signal.
#
# This formulation roughly translates to fitting the power spectrum as:
#
# $$P = L + \sum_{n=0}^{N} G_n$$
#
# where $P$ is the power, $L$ is the aperiodic signal, and each $G_n$ is a Gaussian fit to a peak, for $N$ total peaks extracted from the power spectrum.
#
# #### Aperiodic fit
#
# The aperiodic fit uses an exponential function, fit on the semilog power spectrum (linear frequencies and $\log_{10}$ power values).
#
# The exponential is of the form:
#
# $$L = 10^b \cdot \frac{1}{k + F^\chi}$$
#
# Or, equivalently:
#
# $$L = b - \log_{10}(k + F^\chi)$$
#
# In this formulation, the 3 parameters $b$, $k$, and $\chi$ define the aperiodic signal, as:
# - $b$ is the broadband 'offset'
# - $k$ relates to the 'knee'
# - $\chi$ is the 'slope'
# - $F$ is the vector of input frequencies
#
# Note that fitting the knee parameter is optional. By default, the aperiodic signal is fit with the 'knee' parameter set to zero; in that case the exponential fit is equivalent to fitting a straight line in log-log space.
#
# #### Peaks
#
# Regions of power over and above this aperiodic signal, as defined above, are considered to be putative oscillations and are fit in the model by a Gaussian.
#
# Each Gaussian, $G_n$, has the form:
#
# $$G_n = a \cdot \exp\left(\frac{-(F - c)^2}{2 w^2}\right)$$
#
# The peak is defined in terms of its amplitude, center, and width, where:
# - $a$ is the amplitude of the peak, over and above the aperiodic signal
# - $c$ is the center frequency of the peak
# - $w$ is the width of the peak
# - $F$ is the vector of input frequencies
#
# The full power spectrum fit is therefore the combination of the aperiodic fit, defined by the exponential, and $N$ peaks, where each $G_n$ is a Gaussian as defined above.
#
# Full method details are available in the paper: https://www.biorxiv.org/content/early/2018/04/11/299859
# ## Algorithmic Description
#
# Briefly, the algorithm proceeds as follows:
# - An initial fit of the aperiodic signal is taken across the power spectrum
# - This aperiodic fit is subtracted from the power spectrum, creating a flattened spectrum
# - Peaks are iteratively found in this flattened spectrum
# - A full peak fit is created from all peak candidates found
# - The peak fit is subtracted from the original power spectrum, creating a peak-removed power spectrum
# - A final background fit is taken of the peak-removed power spectrum
# This procedure creates a model of the neural power spectrum that is fully described by the mathematical model above:
#
# !["fooof_model_picture"](img/FOOOF_model_pic.png)
#
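# As a quick check of the model form above, the short numpy sketch below builds a model spectrum as the aperiodic exponential plus a sum of Gaussian peaks. This is only an illustration of the equations: the frequency range, parameter values, and helper-function names are made up here and are not part of the FOOOF package.

# +
import numpy as np

def aperiodic_component(freqs, offset, knee, exponent):
    # L = b - log10(k + F^chi), i.e. the aperiodic fit in log10 power
    return offset - np.log10(knee + freqs ** exponent)

def gaussian_peak(freqs, amp, center, width):
    # G_n = a * exp(-(F - c)^2 / (2 * w^2))
    return amp * np.exp(-(freqs - center) ** 2 / (2 * width ** 2))

freqs = np.linspace(3, 40, 500)                                       # illustrative frequency vector F
model = aperiodic_component(freqs, offset=1.0, knee=0, exponent=1.5)  # 'fixed' mode: knee set to zero
for amp, center, width in [(0.6, 10.0, 1.2), (0.3, 22.0, 2.0)]:       # two illustrative peaks
    model += gaussian_peak(freqs, amp, center, width)
# 'model' now holds P = L + sum(G_n) in log10 power units
# -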
# To step through the algorithm in more detail, with visualizations of each step of the code, go [here](03-FOOOFAlgorithm.ipynb).
#
# ## FOOOF Parameters
#
# There are a number of parameters that control the FOOOF fitting algorithm. Parameters that are exposed to be set by the user are explained in detail below.
#
#
# ### Controlling peak fits
#
# #### peak_width_limits (Hz)
#
# Enforced limits on the minimum and maximum widths of extracted peaks, given as a list of [minimum bandwidth, maximum bandwidth]. We recommend bandwidths at least twice the frequency resolution of the power spectrum.
#
# Default: [0.5, 12]
#
# ### Peak search stopping criteria
#
# An iterative procedure searches for candidate peaks in the flattened spectrum. Candidate peaks are extracted in order of decreasing amplitude, until some stopping criterion is met.
#
# #### max_n_peaks (int)
#
# The maximum number of peaks that can be extracted from a given power spectrum. FOOOF will halt searching for new peaks when this number is reached. Note that FOOOF extracts peaks iteratively by amplitude (over and above the aperiodic signal), and so this approach will extract (up to) the _n_ largest peaks.
#
# Default: infinite
#
# #### peak_threshold (in units of standard deviation)
#
# The threshold, in terms of standard deviation of the aperiodic-signal-removed power spectrum, that a data point must exceed to be considered a candidate peak. Once a candidate peak drops below this threshold, the peak search is halted (without including the most recent candidate).
#
# Default: 2.0
#
# #### min_peak_amplitude (units of power - same as the input spectrum)
#
# The minimum amplitude, above the aperiodic fit, that a peak must have to be extracted in the initial fit stage. Once a candidate peak drops below this threshold, the peak search is halted (without including the most recent candidate). Note that because this constraint is enforced during the peak search, and prior to the final peak fit, returned peaks are not guaranteed to surpass this value in amplitude.
#
# Default: 0
#
# Note: there are two different amplitude-related halting conditions for the peak searching. By default, the relative (standard-deviation based) threshold is defined, whereas the absolute threshold is set to zero (this is largely because there is no general way to set this value without knowing the scale of the data). If both are defined, both are used and the peak search will halt when a candidate peak fails to pass either the absolute or the relative threshold.
#
# ### Aperiodic signal fitting approach
#
# #### background_mode (string)
#
# The fitting approach to use for the aperiodic signal. Options:
# - 'fixed' : fits without a knee parameter (with the knee parameter 'fixed' at 0)
# - 'knee' : fits the full exponential equation, including the knee parameter. (experimental)
#
# Default: 'fixed'
#
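# As a usage sketch, the settings above might be passed in when constructing a model object, assuming a fooof release that exposes them under these parameter names (later releases renamed some of them); the numeric values below are arbitrary examples, not recommendations.

# +
from fooof import FOOOF  # assumes a fooof version matching the parameter names described above

fm = FOOOF(peak_width_limits=[1.0, 8.0],   # at least twice the frequency resolution
           max_n_peaks=6,                  # cap on the number of extracted peaks (default is infinite)
           peak_threshold=2.0,             # relative (standard-deviation based) halting threshold
           min_peak_amplitude=0.1,         # absolute halting threshold, in input power units
           background_mode='fixed')        # fit the aperiodic signal without a knee
# fm.fit(freqs, power_spectrum) would then fit a spectrum with these settings
# -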
#
# To continue with the tutorial and get a hands-on introduction to the codebase, go [here](02-FOOOF.ipynb).
#
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # pyWRspice Wrapper Tutorial: Parse a SPICE script # # #### Prerequisite: # * You need to complete the *Tutorial.ipynb* notebook first. # # Here we assume you are already famililar with running PyWRspice on a local computer. # Add pyWRspice location to system path, if you haven't run setup.py import sys sys.path.append("../") # + import numpy as np import logging, importlib from pyWRspice import parse import matplotlib.pyplot as plt # %matplotlib inline logging.basicConfig(level=logging.WARNING) # - # ### Parse a SPICE script # # The class ```parse.Parse``` (WIP) parses a SPICE script into a ```script.Script``` object. # # It is advisable not to include the ```.control ... write ...``` block in the script because this can be overlapped with the ```config_save``` function later. If such control block is in the original script, do not call the ```config_save``` function. # Here we demonstrate parsing the circuit in the last example of _Tutorial.ipynb_ script1 = """*Transient response of a transmission line .tran 50p 100n .subckt segment 1 3 R1 1 2 0.1 L1 2 3 1n C1 3 0 {cap}p R2 3 0 1000.0 .ends segment X1 1 2 segment X2 2 3 segment X3 3 4 segment X4 4 5 segment X5 5 6 segment X6 6 7 segment X7 7 8 segment X8 8 9 segment X9 9 10 segment X10 10 11 segment * Pulse source V1 1 0 pulse(0 1 1n 1n 1n {dur}n) * Load resistance Rload 11 0 50 """ # Parse the script scr = parse.Parse(script1) # Double check if it parsed correctly print(scr.script()) # Plot the main circuit plt.figure(figsize=(15,4)) scr.plot() # Plot the subcircuit plt.figure(figsize=(8,4)) scr.circuits[0].subcircuits["segment"].plot() # Explore the circuit elements scr.circuits[0].components # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # Novel-taxa classification evaluation # # The following notebooks describe the evaluation of taxonomy classifiers using "novel taxa" data sets. Novel-taxa analysis is a form of cross-validated taxonomic classification, wherein random unique sequences are sampled from the reference database as a test set; all sequences sharing taxonomic affiliation at a given taxonomic level are removed from the reference database (training set); and taxonomy is assigned to the query sequences at the given taxonomic level. Thus, this test interrogates the behavior of a taxonomy classifier when challenged with "novel" sequences that are not represented by close matches within the reference sequence database. Such an analysis is performed to assess the degree to which "overassignment" occurs for sequences that are not represented in a reference database. # # At each level ``L``, the unique taxonomic clades are randomly sampled and used as ``QUERY`` sequences. All sequences that match that taxonomic annotation at ``L`` are excluded from ``REF``. Hence, species-level ``QUERY`` assignment asks how accurate assignment is to an "unknown" species that is not represented in the ``REF``, though other species in the same genus are. Genus-level ``QUERY`` assignment asks how accurate assignment is to an "unknown" genus that is not represented in the ``REF``, though other genera in the same family are, *et cetera*. 
# # The steps involved in preparing and executing novel-taxa analysis are described in a series of notebooks: # # 1) **[Novel taxa dataset generation](./dataset-generation.ipynb)** only needs to be performed once for a given reference database. Only run this notebook if you wish to make novel taxa datasets from a different reference database, or alter the parameters used to make the novel taxa datasets. The default included in Tax-Credit is Greengenes 13\_8 release, amplified *in silico* with primers 515f and 806r, and trimmed to 250 nt from the 5' end. # # 2) **[Taxonomic classification](./taxonomy-assignment.ipynb)** of novel taxa sequences is performed using the datasets generated in *step 1*. This template currently describes classification using QIIME 1 classifiers and can be used as a template for classifiers that are called via command line interface. Python-based classifiers can be used following the example of q2-feature-classifier. # # 3) **[Classifier evaluation](./evaluate-classification.ipynb)** is performed based on taxonomic classifications generated by each classifier used in *step 2*. # # # ## Definitions # The **[dataset generation](./dataset-generation.ipynb)** notebook uses a few novel definitions. The following provides some explanation of the definitions used in that notebook. # # * ``source`` = original reference database sequences and taxonomy. # * ``QUERY`` = 'novel' query sequences and taxonomies randomly drawn from ``source``. # * ``REF`` = ``source`` - ``novel`` taxa, used for taxonomy assignment. # * ``L`` = taxonomic level being tested # * 0 = kingdom, 1 = phylum, 2 = class, 3 = order, 4 = family, 5 = genus, 6 = species # * ``branching`` = describes a taxon at level ``L`` that "branches" into two or more lineages at ``L + 1``. # * A "branched" taxon, then, describes these lineages. E.g., in the example below Lactobacillaceae, Lactobacillus, and Pediococcus branch, while Paralactobacillus is unbranching. The Lactobacillus and Pediococcus species are "branched". Paralactobacillus selangorensis is "unbranched" # * The novel taxa analysis only uses "branching" taxa, such that for each ``QUERY`` at level ``L``, ``REF`` must contain one or more taxa that share the same clade at level ``L - 1``. 
# # ``` # Lactobacillaceae # └── Lactobacillus # │ ├── Lactobacillus brevis # │ └── Lactobacillus sanfranciscensis # ├── Pediococcus # │ ├── Pediococcus damnosus # │ └── Pediococcus claussenii # └── Paralactobacillus # └── Paralactobacillus selangorensis # ``` # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # 收集数据:可以使用任何方法 def loadDataSet(): """ 创建数据集 :return: 单词列表postingList,所属类别classVec """ postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'], ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'], ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'], ['stop', 'posting', 'stupid', 'worthless', 'garbage'], ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'], ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']] classVec = [0, 1, 0, 1, 0, 1] return postingList, classVec # + # 准备数据:从文本中构建词向量 def createVocabList(dataSet): """ 获取所有单词的集合 :param dataSet: 数据集 :return: 所有单词的集合(即不含重复元素的单词列表) """ vocabSet = set() # create empty set for document in dataSet: # 操作符 | 用于求两个集合的并集 vocabSet = vocabSet | set(document) # union of the two sets return list(vocabSet) def setOfWords2Vec(vocabList, inputSet): """ 遍历查看该单词是否出现,出现该单词则将该单词置1 :param vocabList: 所有单词集合列表 :param inputSet: 输入数据集 :return: 匹配列表[0, 1, 0, 1...],其中1与0表示词汇表中的单词是否出现在输入的数据集中 """ # 创建一个和词汇表等长的向量,并将其元素都设置为0 returnVec = [0] * len(vocabList) # [0, 0...] # 遍历文档中的所有单词,如果出现了词汇表中的单词,则将输出的文档想两种的对应值设为1 for word in inputSet: if word in vocabList: returnVec[vocabList.index(word)] = 1 else: print("the word: %s is not in my Vocabulary!" % word) return returnVec # + # 分析数据:检查词条确保解析的正确性 # 检查函数执行情况,检查词表,不出现重复单词,需要的话,可以对其进行排序。 listOPosts, listClasses = loadDataSet() myVocabList = createVocabList(listOPosts) myVocabList # 检查函数有效性。例如:myVocabList中索引为2的元素是什么单词?应该是'I'。 # 该单词在第3篇文档中出现了,现在检查一下看看它是否出现在第四篇文档中。 setOfWords2Vec(myVocabList, listOPosts[0]) setOfWords2Vec(myVocabList, listOPosts[3]) # - # 训练算法:从词向量计算概率 # 朴素贝叶斯分类器训练函数 def trainNB0(trainMatrix, trainCategory): """ 训练数据原版 :param trainMatrix: 文件单词矩阵 [[1, 0, 1, 1, 1...], [], []...] :param trainCayegory: 文件对应的类别 [0, 1, 1, 0...],列表长度等于单词矩阵书 :return: """ # 文件数 numTrainDocs = len(trainMatrix) # 单词数 numWords = len(trainMatrix[0]) # 侮辱性文件的出现概率,即trainCategory中所有的1的个数, # 代表的就是多少个侮辱性文件,与文件的总数相除就得到了侮辱性文件的出现概率 pAbusive = sum(trainCategory) / float(numTrainDocs) # 构造单词出现次数列表 p0Num = np.zeros(numWords) p1Num = np.zeros(numWords) # 整个数据集单词出现总数 p0Denom = 0.0 p1Denom = 0.0 for i in range(numTrainDocs): # 是否是侮辱性文件 if trainCategory[i] == 1: # 如果是侮辱性文件,对侮辱性文件的向量进行加和 p1Num += trainMatrix[i] # 对向量中的所有元素进行求和,也就是计算所有侮辱性文件中出现的单词总数 p1Denom += sum(trainMatrix[i]) else: p0Num += trainMatrix[i] p0Denom += sum(trainMatrix[i]) # 类别1,即侮辱性文档的 [P(F1|C1), P(F2|C1), ...] 列表 # 即在类别1下,每个单词出现的概率 p1Vect = p1Num / p1Denom # 类别0下,每个单词出现的概率 p0Vect = p0Num / p0Denom return p0Vect, p1Vect, pAbusive # 测试算法:根据现实情况修改分类器 def trainNB1(trainMatrix, trainCategory): """ 训练数据优化版本 :param trainMatrix: 文件单词矩阵 :param trainCayegory: 文件对应的类别 :return: """ # 总文件数 numTrainDocs = len(trainMatrix) # 总单词数 numWords = len(trainMatrix[0]) # 侮辱性文件的出现概率 pAbusive = sum(trainCategory) / float(numTrainDocs) # 构造单词出现次数列表 # p0Num 正常的统计 # p1Num 侮辱的统计 p0Num = np.ones(numWords) # [0, 0, ...] --> [1, 1, ...] 
p1Num = np.ones(numWords) # 整个数据集单词出现总数,2.0 # 根据样本/实际调查结果调整分母的值(2主要是避免分母为0,当然值可以调整) # p0Denom 正常的统计 # p1Denom 侮辱的统计 p0Denom = 2.0 p1Denom = 2.0 for i in range(numTrainDocs): # 是否是侮辱性文件 if trainCategory[i] == 1: # 如果是侮辱性文件,对侮辱性文件的向量进行加和 p1Num += trainMatrix[i] # 对向量中的所有元素进行求和,也就是计算所有侮辱性文件中出现的单词总数 p1Denom += sum(trainMatrix[i]) else: p0Num += trainMatrix[i] p0Denom += sum(trainMatrix[i]) # 类别1,即侮辱性文档的 [log(P(F1|C1)), log(P(F2|C1)), ...] 列表 # 即在类别1下,每个单词出现的概率 p1Vect = np.log(p1Num / p1Denom) # 类别0下,每个单词出现的概率 p0Vect = np.log(p0Num / p0Denom) return p0Vect, p1Vect, pAbusive # + # 使用算法:对社区留言板言论进行分类 # 朴素贝叶斯分类函数 def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1): """ 使用算法: # 将乘法转换为加法 乘法:P(C|F1F2...Fn) = p(F1F2...Fn|C)P(C)/P(F1F2...Fn) 加法:P(F1|C)*P(F2|C)...P(Fn|C)P(C) --> log(P(F1|C))+log(P(F2|C))+...+log(P(Fn|C))+log(P(C)) :param vec2Classify: 待测数据[0, 1, 1, 1, 1, ...],即要分类的向量 :param p0Vec: 类别0,即正常文档的[log(P(F1|C0)), log(P(F2|C0)), ...]列表 :param p1Vec: 类别1,即侮辱性文档的[log(P(F1|C1)), log(P(F2|C1)), ...]列表 :param pClass1: 类别1,即侮辱性文件的出现概率 :return: 类别1或0 """ # 计算公式 log(P(F1|C))+log(P(F2|C))+...+log(P(Fn|C))+log(P(C)) p1 = sum(vec2Classify * p1Vec) + np.log(pClass1) # 即贝叶斯准则的分子 p0 = sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1) if p1 > p0: return 1 else: return 0 def testingNB(): """ 测试朴素贝叶斯算法 """ # 1. 加载数据集 listOPosts, listClasses = loadDataSet() # 2. 创建单词集合 myVocabList = createVocabList(listOPosts) # 3. 计算单词是否出现并创建数据矩阵 trainMat = [] for postinDoc in listOPosts: # 返回 m*len(myVocabList) 的矩阵,记录的都是0,1信息 trainMat.append(setOfWords2Vec(myVocabList, postinDoc)) # 4. 训练数据 p0V, p1V, pAb = trainNB1(np.array(trainMat), np.array(listClasses)) # 5. 测试数据 testEntry = ['love', 'my', 'dalmation'] thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry)) print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)) testEntry = ['stupid', 'garbage'] thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry)) print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)) # - testingNB() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## This Notebook Explore Time Series Analytics and Predictions # #### USAspending.gov GFY Archive Data Source # + # recommend creating a conda env for this to use latest version of Prophet # it should include install of: # fbprophet (v.7 or newer), dask, pyarrow, fastparquet, plotly, jupyter, ipyfilechooser, ipywidgets (https://ipywidgets.readthedocs.io/en/stable/user_install.html ) # + import os, glob, pathlib import math import shutil from datetime import datetime from collections import Counter import zipfile import pandas as pd import numpy as np # import dask # import dask.dataframe as dd # from dask.distributed import Client, progress import psutil import requests import scipy.stats as st import matplotlib import matplotlib.pyplot as plt # use import below to interactively select the folder where the USAspending Archives are located # https://pypi.org/project/ipyfilechooser/ from ipyfilechooser import FileChooser import random # - matplotlib.style.use('seaborn') # + def Display_System_Info(): # https://psutil.readthedocs.io/en/latest/#psutil.virtual_memory physical_cores = psutil.cpu_count(logical=False) #, psutil.cpu_count(logical=True) RAM_total_installed = psutil.virtual_memory()[0] #, psutil.swap_memory() RAM_available = psutil.virtual_memory().available #['available'] dask_workers = 
int(physical_cores/2) print(f"Physical CPU Cores: {physical_cores}, RAM available: {round(RAM_available/1e9)} GB, total RAM: {round(RAM_total_installed/1e9)} GB") result_record = {'CPU_Cores_Physical' : physical_cores, 'RAM_available' : RAM_available, 'RAM_total_installed' : RAM_total_installed, } return result_record Display_System_Info() # + def Get_Current_Time(): return datetime.now().strftime("%d/%m/%Y %H:%M:%S") Get_Current_Time() # + def CurrentGFY(): if datetime.now().month >= 10: return datetime.now().year + 1 else: return datetime.now().year def getGFY(datestamp): if datestamp.month >= 10: return datestamp.year + 1 else: return datestamp.year def Check_Archive_Filename_Format(filename_complete): base_filename = os.path.basename(filename_complete) # rule checks if not base_filename.startswith("FY"): return False if not base_filename.endswith(".zip") and not base_filename.endswith(".csv"): return False if " " in base_filename: return False if len(base_filename.split(".")) > 2: return False if "copy" in base_filename: return False return True def Get_GFY_from_file_path(filename_to_check_complete_path): if os.path.isdir(filename_to_check_complete_path): #print(f"{filename_to_check_complete_path} is a directory. Ignore") return filename_GFY = os.path.basename(filename_to_check_complete_path)[:6] assert filename_GFY[:2] == 'FY' # check this return filename_GFY def Get_ArchiveDate_from_file_path(filename_to_check_complete_path): if os.path.isdir(filename_to_check_complete_path): #print(f"{filename_to_check_complete_path} is a directory. Ignore") return basefilename = os.path.basename(filename_to_check_complete_path) if filename_to_check_complete_path.endswith(".zip"): filename_latest_update = basefilename.split("_")[4][0:8] elif filename_to_check_complete_path.endswith(".csv"): filename_latest_update = basefilename.split("_")[4][0:8] else: filename_latest_update ="Filename_Format_Issue" # assert TBD return filename_latest_update # + # Clean up the Data and add some fields ### function to create a recipient common name using common fields ### addresses problem with multiple versions of names, M&A, and misspellings #### Please read this link on the USAspending.gov site about use of "D&B Open Data" - https://www.usaspending.gov/db_info #### "D&B Open Data" is embedded in these records #TODO - placeholder function - enhance this to find the most common name for the same company in the records def Add_Common_Recipient_Names(df): #placeholder for future data validation using a common key(s) or hash value # add transformation and validation code to select most common name for an entity-firm-contractor df['recipient_parent_name_common'] = df['recipient_parent_name'] df['recipient_name_common'] = df['recipient_name'] return # !!! 
possible use in the future cage_codes are missing for most records prior to GFY19 # def Add_Common_Recipient_Names_Using_CAGE_Code(df): #works ok starting in FY19 (missing from ~200M $s in GFY19) but is unreliable prior to that - missing # # Build cage_code to recipient_parent_name, recipient_name mapping and collect the most common names # # There are naming inconsistencies in the USAspending.gov data for recipient_names # # Create code to build a unique identifier that can be used in groupby for recipient_parent_name when there are spelling and naming inconsistencies # cage_name_mapping = df[['action_date_fiscal_year','cage_code','recipient_parent_name', 'recipient_name']].drop_duplicates() # #cage_name_mapping.shape # #TODO add code to properyly handle how the recipient_parent_name and the recipient_name may change across GFY # # get the most common recipient_name for each cage_code # cage_name_mapping_recipient_parent = cage_name_mapping.groupby(['cage_code'])['recipient_parent_name'].apply(list).apply(Counter).apply(lambda x: x.most_common()[0][0]) # cage_name_mapping_recipient = cage_name_mapping.groupby(['cage_code'])['recipient_name'].apply(list).apply(Counter).apply(lambda x: x.most_common()[0][0]) # #cage_name_mapping_recipient_parent['00026'] # assert in FY2019 == 'MMC INTERNATIONAL CORP.' # # add a column with the most common recipient_name for downstream groupby actions # df['recipient_parent_name_common'] = df['cage_code'].apply(lambda x: cage_name_mapping_recipient_parent[x]) # df['recipient_name_common'] = df['cage_code'].apply(lambda x: cage_name_mapping_recipient[x]) # return df def Fix_Recipient_Name_UNSPECIFIED(df): #fix blank or UNSPECIFIED recipient_parent_names # fix_UNSPECIFIED_lambda = lambda x: x['recipient_name'] + "_UNSPECIFIED" if x['recipient_parent_name'] in ['', 'UNSPECIFIED'] else x['recipient_parent_name'] # #pandas.core.frame.DataFrame, dask.dataframe.core.DataFrame # if type(df) == dask.dataframe.core.DataFrame: # df['recipient_parent_name'] = df.apply(fix_UNSPECIFIED_lambda, axis = 1, meta=('recipient_parent_name', 'object')) # else: # df['recipient_parent_name'] = df.apply(fix_UNSPECIFIED_lambda, axis = 1) df['recipient_parent_name'] = df['recipient_parent_name'].mask(df['recipient_parent_name'] == 'UNSPECIFIED', df['recipient_parent_name'] + "_UNSPECIFIED") # use of mask instead of other options - https://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.mask return def Fix_Recipient_Names_Known_Issues(df): # MANUAL Fixes needed due to USAspending.gov not updating recipient_parent_name (e.g., after SAIC spun out Leidos) #TODO - shoould be added at time of read of the files as part of prep #fix Naming Issues - Manual Fixes (large firms mostly - edit the external files) if glob.glob("USAspending_Parent_Name_Fixes_Manual.csv"): df_fixes = pd.read_csv("USAspending_Parent_Name_Fixes_Manual.csv") manual_fixes_lookup = {} for index, row in df_fixes.iterrows(): manual_fixes_lookup[row['recipient_old_name']] = {'recipient_new_name' :row['recipient_new_name'], 'GFY_When_Switch_Happens' : row['GFY_When_Switch_Happens'], } else: manual_fixes_lookup = { 'SAIC' : {'recipient_new_name' : 'LEIDOS HOLDINGS INC.', 'GFY_When_Switch_Happens' : '2014'}, #!!Notes ther is an extra space needed between HOLDINGS and Inc. 
name in USAspending, common name fix 'NORTHROP GRUMMAN SYSTEMS CORPORATION' : {'recipient_new_name' : 'NORTHROP GRUMMAN CORPORATION', 'GFY_When_Switch_Happens' : '' }, } for old_name, new_name_record in manual_fixes_lookup.items(): new_name = new_name_record['recipient_new_name'] GFY_When_Switch_Happens = new_name_record['GFY_When_Switch_Happens'] # mask preferred via https://stackoverflow.com/questions/54360549/dask-item-assignment-cannot-use-loc-for-item-assignment if not math.isnan(GFY_When_Switch_Happens) and GFY_When_Switch_Happens != '': selector = (df['recipient_parent_name'] == old_name) & (df['action_date_fiscal_year'].astype('int64') >= int(GFY_When_Switch_Happens)) else: selector = (df['recipient_parent_name'] == old_name) df['recipient_parent_name'] = df['recipient_parent_name'].mask(selector, new_name) # use of mask instead of other options - https://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.mask return def Add_PSC_Cat_Fields(df): print(f"{Get_Current_Time()} -> adding PSC_Cat field...") df['PSC_Cat'] = df['product_or_service_code'].str[:1] df['PSC_Cat_2'] = df['product_or_service_code'].str[:2] return def Fix_High_Compensated_Field(df): # these fields were read in via pandas as strings (pandas objects), need to convert them to float # some of the fields contain text inside of values and some had NaN that was repalced with UNSPECIFIED fields_for_float =[ 'highly_compensated_officer_1_amount', 'highly_compensated_officer_2_amount', 'highly_compensated_officer_3_amount', 'highly_compensated_officer_4_amount', 'highly_compensated_officer_5_amount' ] for field in fields_for_float: df[field] = df[field].apply(lambda x: float(x) if x.replace(".","").isdigit() else 0.0) return def Enhance_Spending_File(df): # this function addresses known shortcomings in the USAspending archive data print() print(f"{Get_Current_Time()} -> replacing NaN with UNSPECIFIED...") df = df.fillna("UNSPECIFIED") # IMPORTANT - NA fields can affect groupby sums and other problems if 'product_or_service_code' in df.columns: Add_PSC_Cat_Fields(df) print(f"{Get_Current_Time()} -> fixing missing recipient_parent_name...") Fix_Recipient_Name_UNSPECIFIED(df) print(f"{Get_Current_Time()} -> fixing recipient_parent_names that are inconsistent...") Fix_Recipient_Names_Known_Issues(df) #print(f"{Get_Current_Time()} -> adding recipient_name_common and recipient_parent_name_common fields for groupby") #Add_Common_Recipient_Names(df) if 'highly_compensated' in df.columns: print(f"{Get_Current_Time()} -> converting highly_compensated field amounts to float") Fix_High_Compensated_Field(df) print(f"{Get_Current_Time()} Done with fixes.") print() return df # + ## Key code for read CSV files into pandas and Dask def Build_DTypes_Dict(filename_list): df = pd.read_csv(filename_list[0], nrows=1) dtype = dict(zip(sorted(df.columns), ['object'] * len(df.columns))) # # ! important - may need to change other fields to category, int, date etc for better memory management dtype['federal_action_obligation'] = 'float64' #dtype['action_date_fiscal_year'] = 'int' return dtype def Load_CSV_Files_Into_DF(filename_list, usecols = 'ALL'): #best approach if you have enough RAM memory on your machine dtype = Build_DTypes_Dict(filename_list) # ! 
important - may need to change other fields to category, int, date etc for better memory management # print(f"Reading file: {filename_list[0]}") # df = pd.read_csv(filename_list[0], dtype=dtype, low_memory=False) # read the first file # for filename in filename_list[1:]: #read the remaining files and append # print(f"Reading file: {filename}") # df = df.append(pd.read_csv(filename, dtype=dtype, low_memory=False)) #alternative approach - faster? dataframe_loads_list = [] for filename in filename_list: print(f"Reading file: {os.path.basename(filename)}") if usecols == 'ALL': dataframe_loads_list.append(pd.read_csv(filename, dtype=dtype, low_memory=False)) else: dataframe_loads_list.append(pd.read_csv(filename, dtype=dtype, usecols = usecols, low_memory=False)) df = pd.concat(dataframe_loads_list) print("Files loaded into pandas dataframe.") return df # + # this assumes the jupyter notebook is running in the same higher level directory # where the USAspending.gov zip archives and the decompressed Zip files are stored original_cwd = os.getcwd() # use this if files are located in local directory where jupyter is running # folder_choice = FileChooser(os.getcwd(), title='Choose Folder with USAspending.gov GFY Archive Expanded CSV Files:', show_hidden=False, select_default=True, use_dir_icons=True, show_only_dirs=True) display(folder_choice) # + download_file_path_CSV = str(folder_choice.selected_path) your_path_dummy = 'Users/YourName/YourTopFolder_Documents/YourUSAspendingFolderName' #print(original_cwd) print(os.path.join(your_path_dummy, "/".join(download_file_path_CSV.split('/')[-2:]))) # + # do a quick check to make sure the CSV files are there csv_files_list = sorted(glob.glob(os.path.join(download_file_path_CSV, "*.csv"))) gfy_list = set() size_of_files_GB = 0 for n, filename in enumerate(csv_files_list): filename_basename = os.path.basename(filename) print(n+1, filename_basename) gfy_list.add(filename_basename[:6]) size_of_files_GB += os.path.getsize(filename) all_GFY_list = sorted(set([Get_GFY_from_file_path(filename) for filename in csv_files_list])) print() print(f"There are {len(csv_files_list)} CSV files. 
They use {round(size_of_files_GB/1e9, 3)} GB of storage.") print(f"Covering GFY: {sorted(gfy_list)}") print(f"Current GFY: {'FY' + str(CurrentGFY())}") print() # + ## Load GFY2010-GFY2019 with limited Columns even though we will mostly use GFY19 (demonstrate usecols feature) # You can limit the selection to a subset of the GFY to speed processing ALL_GFY = False # switch to True to override the subset user_select_GFY = ['FY2020','FY2019', 'FY2018', 'FY2017', 'FY2016', 'FY2015', 'FY2014', 'FY2013', 'FY2012', 'FY2011', 'FY2010'] if ALL_GFY: user_select_GFY = all_GFY_list # print(f"GFY to read for analysis: {sorted(user_select_GFY)}") # print() # Create csv_files_list_restricted to streamline downstream data processing csv_files_list = sorted(glob.glob(os.path.join(download_file_path_CSV, "*.csv"))) csv_files_list_restricted = [filename for filename in csv_files_list for GFY in user_select_GFY if GFY in filename] gfy_found_in_files = [Get_GFY_from_file_path(filename) for filename in csv_files_list_restricted] csv_GFY_missing = sorted(set(user_select_GFY) - set(gfy_found_in_files)) assert set(user_select_GFY) == set(user_select_GFY) # check to make sure this is working print(f"You have selected these GFY for analysis:{user_select_GFY}") print(f"Missing GFY: {csv_GFY_missing} -> if GFY missing, confirm you have decompressed the GFY zip file archive") #[(os.path.basename(filename), round(os.path.getsize(filename)/1e9, 3)) for filename in csv_files_list_restricted] # - # ### Select a subset of fields we will use for forecasting # + # %%time dtype = Build_DTypes_Dict(csv_files_list_restricted) # this handles reduces ambiguity for dask interpreting data type to infer on read ### Since we only need a few fields from the ~280 fields, we can radically reduce the memory needed and use pandas fields = sorted(dtype.keys()) # + #fields # + # we do not need all of the 280+ fields from the records # usecols = in pd.read_csv and Dask allows one to be selective on the fields and save memory usecols = ['action_date', 'action_date_fiscal_year', 'recipient_parent_name', 'recipient_name', 'federal_action_obligation', 'funding_agency_name', 'funding_sub_agency_name', 'funding_office_name', 'product_or_service_code_description', 'product_or_service_code', ] # + # %%time USAspending_parquet_file_name = "USAspending_GFY2010_GFY2020ytd_Time_Series_Analytics.parquet" if glob.glob(USAspending_parquet_file_name): #if this file read has already happened, read from parquet file print(f"Reading parquet file: {USAspending_parquet_file_name} instead of CSV file sources.") df = pd.read_parquet(USAspending_parquet_file_name) else: df = Load_CSV_Files_Into_DF(csv_files_list_restricted, usecols = usecols) df = Enhance_Spending_File(df) # Save the data in a parquet file to restart below if needed print(f"Saving the data to a parquet file: {USAspending_parquet_file_name} in local directory.") df.to_parquet(USAspending_parquet_file_name) #df.to_csv("USAspending_GFY2010_GFY2020ytd_Time_Series_Analytics_Fields_Subset.csv", index=False) df = df.reset_index(drop=True) # creating a new index is key here for later index searches - duplicates otherwise df.head() # - df.columns assert set(df.columns) - set(usecols) == {'PSC_Cat', 'PSC_Cat_2'} # make sure we retrieved the fields we expected assert df.shape[0] == len(set(df.index)) #confirm we have created unique index values after append since CSV read will have duplicate index numbers print(f"There are ~{int(df.shape[0]/1e6)} million transactions in the dataset.") print(f"pandas DataFrame 
memory usage: ~{round(df.memory_usage().sum()/1e9,1)} GB") # + # %time ### If you launch Dask computations but want to stop it, select the square jupyter button "interrupt the kernel" ##This can be used as a QC check to make sure the GFY requested were read ##this is a slow computation since Dask must read every record in all of the files # action_date_fiscal_yearS = set(sorted(ddf['action_date_fiscal_year'].unique())) # assert action_date_fiscal_yearS == user_select_GFY # Without the command .compute() at the end of the next statement, Dask adds the task to a graph of operations. # when Dask encounters a .compute(), the preceding graph operations are performed df_GFY_obligations = df.groupby(['action_date_fiscal_year'])['federal_action_obligation'].sum().reset_index() # You can check the computations against these articles: # https://about.bgov.com/news/federal-contract-spending-five-trends-in-five-charts/ df_GFY_obligations # - df_GFY_obligations[:-1].plot(x='action_date_fiscal_year', y='federal_action_obligation',title="Federal Contractor Obligations $s",grid=True) # ### Collect Information about the Federal Product or Service Code Categories # + # retrieve the PSC Codes for reference and reporting purposes # source my colab notebook and github code - https://github.com/leifulstrup/USAspending_Medium_Blog_Analytics_Series/blob/master/Medium_Analytics_Edge_Open_Data_CMS_Fraud_Contracts_Example-via_Colab.ipynb url_psc_lookup = 'https://www.acquisition.gov/sites/default/files/manual/PSC_Data_March_2020_Edition.xls' # latest version when coded - may have bene updated - see here https://www.acquisition.gov/psc-manual # def Cleanup_Category_Text(text): #fixes the typos in the source excel document # text = text.strip() # remove preceding and trailing spaces # clean_list = [] # for token in text.split(" "): # if token[0] != '&' and not token[1] in ['&', 'T']: # clean_list.append(token.capitalize()) # else: # clean_list.append(token) # return " ".join(clean_list) # download the file from the website to read into pandas r = requests.get(url_psc_lookup, allow_redirects=True) file_name = "Federal_PSC_Lookup_Codes.xlsx" with open(file_name, 'wb') as output: output.write(r.content) # # !ls -al xls = pd.ExcelFile(file_name) df_psc_lookup = pd.read_excel(xls, sheet_name = 1, skiprows=0) #note that there are Excel Sheets for each of the Level 1 Cats with more details df_psc_lookup.head() df_PSC_Cat_Lookup = df_psc_lookup[df_psc_lookup['PSC CODE'].str.len() <= 2][['PSC CODE', 'PRODUCT AND SERVICE CODE NAME']].drop_duplicates() df_PSC_Cat_Lookup.columns = ['PSC_Code', 'Product_and_Service_Code_Name'] df_PSC_Cat_Lookup # + df_PSC_Cat_Lookup['PSC_Cat'] = df_PSC_Cat_Lookup['PSC_Code'].str[0] df_PSC_Cat_Lookup['PSC_Cat_2'] = df_PSC_Cat_Lookup['PSC_Code'].str[:2] df_PSC_Cat_Lookup df_PSC_Cat_numerical_codes = df_PSC_Cat_Lookup[~df_PSC_Cat_Lookup['PSC_Cat'].apply(lambda x: x.isalpha())].groupby(["PSC_Cat"])['Product_and_Service_Code_Name'].apply(lambda x: set(x)).apply(sorted).reset_index() df_PSC_Cat_numerical_codes # - df_PSC_Cat_text_codes = df_PSC_Cat_Lookup[df_PSC_Cat_Lookup['PSC_Code'].apply(len) == 1][['PSC_Cat', 'Product_and_Service_Code_Name']].reset_index(drop=True) df_PSC_Cat_text_codes df_PSC_lookup_limited = pd.concat([df_PSC_Cat_numerical_codes, df_PSC_Cat_text_codes], axis=0, sort=False) df_PSC_lookup_limited df_PSC_Cat_Lookup = df_PSC_Cat_Lookup.merge(df_PSC_lookup_limited, on='PSC_Cat') df_PSC_Cat_Lookup df_PSC_Cat_Lookup =df_PSC_Cat_Lookup.rename(columns={'Product_and_Service_Code_Name_x' : 
'Product_and_Service_Code_Name', 'Product_and_Service_Code_Name_y' : 'Product_and_Service_Code_Name_PSC_Cat', }) df_PSC_Cat_Lookup df_PSC_Cat_Lookup['Product_and_Service_Code_Name_PSC_Cat'] = df_PSC_Cat_Lookup['Product_and_Service_Code_Name_PSC_Cat'].apply(lambda x: x if type(x) == str else "+".join(x)[:40]) df_PSC_Cat_Lookup_PSC_Cat_only = df_PSC_Cat_Lookup[['PSC_Cat', 'Product_and_Service_Code_Name_PSC_Cat']].drop_duplicates() df_PSC_Cat_Lookup_PSC_Cat_only['Product_and_Service_Code_Name_PSC_Cat'] = df_PSC_Cat_Lookup_PSC_Cat_only.apply(lambda x: x['PSC_Cat'] + " - " + x['Product_and_Service_Code_Name_PSC_Cat'], axis=1) df_PSC_Cat_Lookup_PSC_Cat_only # ## Federal Product or Service Category (PSC_Cat) Trends Analysis df.head() # + # %%time df_PSC_Cat_Trends = df.groupby(['action_date_fiscal_year', 'PSC_Cat'])['federal_action_obligation'].sum().reset_index().sort_values(by='federal_action_obligation', ascending=False) df_PSC_Cat_Trends = df_PSC_Cat_Trends.merge(df_GFY_obligations, on='action_date_fiscal_year') df_PSC_Cat_Trends['Fraction_of_Total_Annual_Spending'] = df_PSC_Cat_Trends['federal_action_obligation_x']/df_PSC_Cat_Trends['federal_action_obligation_y'] df_PSC_Cat_Trends.drop(columns=['federal_action_obligation_y'], inplace=True) df_PSC_Cat_Trends.rename(columns={'federal_action_obligation_x':'federal_action_obligation'}, inplace=True) df_PSC_Cat_Trends.head() # - df_PSC_Cat_Trends = df_PSC_Cat_Trends.merge(df_PSC_Cat_Lookup_PSC_Cat_only, on='PSC_Cat') df_PSC_Cat_Trends # + #df_PSC_Cat_Trends.query("action_date_fiscal_year == '2019'")['Fraction_of_Total_Annual_Spending'].cumsum() # - sorted(df_PSC_Cat_Trends['Product_and_Service_Code_Name_PSC_Cat'].unique()) # + fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure - source - https://stackoverflow.com/questions/24183101/pandas-bar-plot-with-two-bars-and-two-y-axis ax = fig.add_subplot(111) # Create matplotlib axes width = 0.1 title = "Product and Services Spending (Top 10) Relative Size" graph1 = df_PSC_Cat_Trends.query("action_date_fiscal_year == '2010'").iloc[:10].sort_values(by='Fraction_of_Total_Annual_Spending', ascending=False).plot(kind="bar", ax=ax,title=title, x='Product_and_Service_Code_Name_PSC_Cat', y='Fraction_of_Total_Annual_Spending', color="red", width=width, align='edge', rot=80, position=2) graph2 = df_PSC_Cat_Trends.query("action_date_fiscal_year == '2015'").iloc[:10].sort_values(by='Fraction_of_Total_Annual_Spending', ascending=False).plot(kind="bar", ax=ax,title=title, x='Product_and_Service_Code_Name_PSC_Cat', y='Fraction_of_Total_Annual_Spending', color="green", width=width, align='edge', rot=80, position=1) graph3 = df_PSC_Cat_Trends.query("action_date_fiscal_year == '2019'").iloc[:10].sort_values(by='Fraction_of_Total_Annual_Spending', ascending=False).plot(kind="bar", ax=ax,title=title, x='Product_and_Service_Code_Name_PSC_Cat', y='Fraction_of_Total_Annual_Spending', color="blue", width=width, align='edge', rot=80, position=0) ax.set_ylabel("Fraction of Total Annual Obligations") ax.legend(["GFY2010", "GFY2015", "GFY2019"]) # - psc_cats_all = sorted(df_PSC_Cat_Trends['PSC_Cat'].unique()) gfy_to_include = [str(value) for value in list(range(2010,2020))] psc_cats_to_include = ['D - ADP AND TELECOMMUNICATIONS', 'R - SUPPORT SVCS (PROF, ADMIN, MGMT)'] #psc_cats_to_include= psc_cats_all df_PSC_Cat_Trends_plotting_subset = df_PSC_Cat_Trends.query("action_date_fiscal_year in @gfy_to_include & Product_and_Service_Code_Name_PSC_Cat in @psc_cats_to_include") df_PSC_Cat_Trends_plotting_subset = 
df_PSC_Cat_Trends_plotting_subset.pivot_table(index='action_date_fiscal_year', values='Fraction_of_Total_Annual_Spending', columns='Product_and_Service_Code_Name_PSC_Cat') df_PSC_Cat_Trends_plotting_subset # + fig = plt.figure(figsize=(12, 8)) # Create matplotlib figure - source - https://stackoverflow.com/questions/24183101/pandas-bar-plot-with-two-bars-and-two-y-axis ax = fig.add_subplot(111) # Create matplotlib axes ax2 = ax.twinx() graph1 = df_PSC_Cat_Trends_plotting_subset.plot(title='Product or Service Category Budget Fraction Trends', ax=ax) plt_result = graph1.set_ylabel("Fraction of GFY Spending on PSC_Cat") plt_result = graph1.legend(loc='center') graph2 = df_GFY_obligations[:-1].plot(color='black', x='action_date_fiscal_year', y='federal_action_obligation', ax=ax2, linewidth=3) plt_result = graph2.set_ylabel("Total Contractor Obligations") plt_result = graph2.legend(loc='upper right') leg = ax.legend() leg.remove() plt_result = ax2.add_artist(leg) # + psc_cats_all = sorted(df_PSC_Cat_Trends['Product_and_Service_Code_Name_PSC_Cat'].unique()) gfy_to_include = [str(value) for value in list(range(2010,2020))] #psc_cats_to_include = ['D - ADP AND TELECOMMUNICATIONS', 'R - SUPPORT SVCS (PROF, ADMIN, MGMT)'] psc_cats_to_include= psc_cats_all df_PSC_Cat_Trends_plotting_subset = df_PSC_Cat_Trends.query("action_date_fiscal_year in @gfy_to_include & Product_and_Service_Code_Name_PSC_Cat in @psc_cats_to_include") df_PSC_Cat_Trends_plotting_subset = df_PSC_Cat_Trends_plotting_subset.pivot_table(index='action_date_fiscal_year', values='Fraction_of_Total_Annual_Spending', columns='Product_and_Service_Code_Name_PSC_Cat') df_PSC_Cat_Trends_plotting_subset_transpose = df_PSC_Cat_Trends_plotting_subset.transpose().reset_index() #df_PSC_Cat_Trends_plotting_subset_transpose gfy_columns = [str(value) for value in range(2010,2020)] df_PSC_Cat_Trends_plotting_subset_transpose = df_PSC_Cat_Trends_plotting_subset_transpose.join(df_PSC_Cat_Trends_plotting_subset_transpose[gfy_columns].agg(['mean','std','max','min'], axis="columns")) df_PSC_Cat_Trends_plotting_subset_transpose['CAGR_last_3_years'] = df_PSC_Cat_Trends_plotting_subset_transpose.apply(lambda x: -1 + (x['2019']/x['2016'])**(1/3),axis=1) df_PSC_Cat_Trends_plotting_subset_transpose['2019_vs_2010_delta'] = df_PSC_Cat_Trends_plotting_subset_transpose['2019'] - df_PSC_Cat_Trends_plotting_subset_transpose['2010'] df_PSC_Cat_Trends_plotting_subset_transpose.set_index('Product_and_Service_Code_Name_PSC_Cat', inplace=True) df_PSC_Cat_Trends_plotting_subset_transpose # - gfy_included_list df_PSC_Cat_Trends_plotting_subset_transpose.sort_values(by='2019', ascending=False).transpose().loc[gfy_included_list] gfy_included_list = [str(value) for value in range(2010, 2020)] df_PSC_Cat_Trends_plotting_subset_transpose.sort_values(by='2019', ascending=False).transpose().loc[gfy_included_list] # + columns_to_include = df_PSC_Cat_Trends_plotting_subset_transpose.sort_values(by='2019', ascending=False).transpose().loc[gfy_included_list].columns[:6] #df_PSC_Cat_Trends_plotting_subset_transpose.sort_values(by='2019', ascending=False).transpose().loc[gfy_included_list,columns_to_include] fig = plt.figure(figsize=(12, 8)) # Create matplotlib figure - source - https://stackoverflow.com/questions/24183101/pandas-bar-plot-with-two-bars-and-two-y-axis ax = fig.add_subplot(111) # Create matplotlib axes ax2 = ax.twinx() graph1 = df_PSC_Cat_Trends_plotting_subset_transpose.sort_values(by='2019', 
ascending=False).transpose().loc[gfy_included_list,columns_to_include].plot(ax=ax) #plot(ax=ax, position=0) plt_result = graph1.set_title("Product and Service Code Trends") plt_result = graph1.set_ylabel("Fraction of Total Obligations") plt_result = graph1.set_xlabel("Government Fiscal Year (GFY)") plt_result = graph1.legend(fancybox=True, framealpha=1, shadow=True, borderpad=1) plt_result = graph1.legend(loc='center') graph2 = df_GFY_obligations[:-1].plot(color='black', x='action_date_fiscal_year', y='federal_action_obligation',grid=True, ax=ax2, linewidth=3) plt_result = graph2.set_ylabel("Total Contractor Obligations") plt_result = graph2.legend(loc='upper right') # - # ### Explore PSC_Cat D "ADP and Telecommunications" by Department, Agency, and Contractor # + # %%time df_PSC_Cat_D = df.query("PSC_Cat == 'D'") gfy_years = ['2016', '2017', '2018', '2019'] df_PSC_Cat_D = df_PSC_Cat_D.query("action_date_fiscal_year in @gfy_years") assert set(df_PSC_Cat_D['action_date_fiscal_year'].unique()) == set(gfy_years) df_PSC_Cat_D.head() # - df_PSC_Cat_D_by_gfy = df_PSC_Cat_D.groupby(["funding_agency_name", "action_date_fiscal_year"])['federal_action_obligation'].sum().fillna(0.0).reset_index() df_PSC_Cat_D_by_gfy_pivot = df_PSC_Cat_D_by_gfy.pivot_table(index='funding_agency_name', columns='action_date_fiscal_year', values='federal_action_obligation').fillna(0.0).sort_values(by='2019', ascending=False) #.set_index('funding_agency_name') df_PSC_Cat_D_by_gfy_pivot['CAGR_last_3_years'] = df_PSC_Cat_D_by_gfy_pivot.sort_values(by='2019', ascending=False).apply(lambda x: -1 + (x['2019']/x['2016'])**(1/3) if x['2016'] > 0.0 else 0.0, axis=1) df_PSC_Cat_D_by_gfy_pivot.head(10) # + def cagr(v_final, value_initial, years): if v_final < 0.0 or value_initial < 0.0: return 0.0 return -1 + (v_final/value_initial)**(1/years) assert cagr(2,1,1) == 1.0 # - # Federal Overall Spending CAGR from GFY16 to GFY19 cagr(df_GFY_obligations[df_GFY_obligations['action_date_fiscal_year'] == '2019']['federal_action_obligation'].iloc[0], df_GFY_obligations[df_GFY_obligations['action_date_fiscal_year'] == '2016']['federal_action_obligation'].iloc[0],3) cagr_3_year = cagr(df_PSC_Cat_D_by_gfy_pivot['2019'].sum(), df_PSC_Cat_D_by_gfy_pivot['2016'].sum(), 3) print(f"The three (3) year market CAGR: {round(cagr_3_year, 3)}") df_PSC_Cat_D_by_gfy_pivot_DoD = df_PSC_Cat_D_by_gfy_pivot.query("funding_agency_name == 'DEPARTMENT OF DEFENSE (DOD)'") df_PSC_Cat_D_by_gfy_pivot_Civilian = df_PSC_Cat_D_by_gfy_pivot.query("funding_agency_name != 'DEPARTMENT OF DEFENSE (DOD)'") print(f"The three (3) year market CAGR - DOD: {round(cagr(df_PSC_Cat_D_by_gfy_pivot_DoD['2019'].sum(), df_PSC_Cat_D_by_gfy_pivot_DoD['2016'].sum(), 3), 3)}") print(f"The three (3) year market CAGR - Civilian: {round(cagr(df_PSC_Cat_D_by_gfy_pivot_Civilian['2019'].sum(), df_PSC_Cat_D_by_gfy_pivot_Civilian['2016'].sum(), 3), 3)}") selector = (df_PSC_Cat_D_by_gfy_pivot['CAGR_last_3_years'] >= cagr_3_year) & (df_PSC_Cat_D_by_gfy_pivot['2019'] >= 50e6) df_PSC_Cat_D_by_gfy_pivot[selector].sort_values(by='CAGR_last_3_years', ascending=False) selector = (df_PSC_Cat_D_by_gfy_pivot['CAGR_last_3_years'] < cagr_3_year) & (df_PSC_Cat_D_by_gfy_pivot['2019'] >= 50e6) df_PSC_Cat_D_by_gfy_pivot[selector].sort_values(by='CAGR_last_3_years', ascending=True) # ### Which Agencies (below Department level) have seen the greatest PSC_Cat 'D' spending increases? 
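# The same above-or-below-market screen used on the department-level pivot above can be reused at lower levels of the hierarchy. The helper below is a minimal sketch (the function name and default dollar floor are illustrative, not from the original notebook); it assumes a pivot table with 'CAGR_last_3_years' and '2019' columns, as built above.
# +
def screen_vs_market(pivot_df, market_cagr, floor_2019=50e6, above=True):
    """Return the rows of a GFY pivot growing faster (above=True) or slower (above=False)
    than the market CAGR, restricted to rows with at least floor_2019 in GFY2019 obligations."""
    if above:
        mask = pivot_df['CAGR_last_3_years'] >= market_cagr
    else:
        mask = pivot_df['CAGR_last_3_years'] < market_cagr
    mask = mask & (pivot_df['2019'] >= floor_2019)
    return pivot_df[mask].sort_values(by='CAGR_last_3_years', ascending=not above)

# Example usage (reproduces the department-level selectors above):
# screen_vs_market(df_PSC_Cat_D_by_gfy_pivot, cagr_3_year, above=True)
# -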
df_PSC_Cat_D_by_agency = df_PSC_Cat_D.groupby(['action_date_fiscal_year', 'funding_agency_name', 'funding_sub_agency_name'])['federal_action_obligation'].sum().fillna(0.0).reset_index() df_PSC_Cat_D_by_agency_pivot = df_PSC_Cat_D_by_agency.pivot_table(index=['funding_agency_name','funding_sub_agency_name'], columns='action_date_fiscal_year', values='federal_action_obligation').fillna(0.0).reset_index() df_PSC_Cat_D_by_agency_pivot['2019_increase_vs_2016'] = df_PSC_Cat_D_by_agency_pivot['2019'] - df_PSC_Cat_D_by_agency_pivot['2016'] df_PSC_Cat_D_by_agency_pivot.sort_values(by='2019_increase_vs_2016', ascending=False).head(10) # + # the increase in VA PSC_Cat D spending 1.311045e+09 from GFY16-GFY19 is greater than how many Depts df_PSC_Cat_D_by_gfy_pivot[df_PSC_Cat_D_by_gfy_pivot['2019'] > 1.311045e+09] #.shape[0] # - df_PSC_Cat_D_by_agency_pivot.sort_values(by='2019_increase_vs_2016', ascending=True).head(10) # ### Which Contractors Have Seen the Greatest Growth and Declines in PSC_Cat 'D' Spending? df_PSC_Cat_D_by_recipient = df_PSC_Cat_D.groupby(['action_date_fiscal_year', 'recipient_parent_name'])['federal_action_obligation'].sum().reset_index().sort_values(by='federal_action_obligation', ascending=False) df_PSC_Cat_D_by_recipient.head(10) # + df_PSC_Cat_D_by_recipient_pivot = df_PSC_Cat_D_by_recipient.pivot_table(index='recipient_parent_name', columns='action_date_fiscal_year', values='federal_action_obligation').fillna(0.0).sort_values(by='2019', ascending=False).reset_index() df_PSC_Cat_D_by_recipient_pivot['CAGR_last_3_years'] = df_PSC_Cat_D_by_recipient_pivot.apply(lambda x: cagr(x['2019'], x['2016'], 3) if x['2016'] > 0.0 else 0.0, axis=1) df_PSC_Cat_D_by_recipient_pivot['2019_net_increase_vs_2016'] = df_PSC_Cat_D_by_recipient_pivot['2019'] - df_PSC_Cat_D_by_recipient_pivot['2016'] df_PSC_Cat_D_by_recipient_pivot.head(10) # - cagr_3_year_all_recipients = cagr(df_PSC_Cat_D_by_recipient_pivot['2019'].sum(), df_PSC_Cat_D_by_recipient_pivot['2016'].sum(), 3) print(f"The three (3) year CAGR for recipients: {round(cagr_3_year_all_recipients,3)}") selector = (df_PSC_Cat_D_by_recipient_pivot['CAGR_last_3_years'] >= cagr_3_year_all_recipients) & \ (df_PSC_Cat_D_by_recipient_pivot['2019'] >= 50e6) df_PSC_Cat_D_by_recipient_pivot[selector].sort_values(by='CAGR_last_3_years', ascending=False).head(20) df_PSC_Cat_D_by_recipient_pivot.sort_values(by='2019_net_increase_vs_2016', ascending=False).head(10) selector = (df_PSC_Cat_D_by_recipient_pivot['CAGR_last_3_years'] >= cagr_3_year_all_recipients) & \ (df_PSC_Cat_D_by_recipient_pivot['2019'] >= 350e6) df_PSC_Cat_D_by_recipient_pivot[selector].sort_values(by='CAGR_last_3_years', ascending=False) # + # selector = (df_PSC_Cat_D_by_recipient_pivot['CAGR_last_3_years'] < cagr_3_year_all_recipients) & (df_PSC_Cat_D_by_recipient_pivot['2019'] >= 350e6) # df_PSC_Cat_D_by_recipient_pivot[selector].sort_values(by='CAGR_last_3_years', ascending=True) # - # ### End # #### https://opensource.org/licenses/MIT # # #### MIT Open Source License Copyright 2020 # # #### Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: # # #### The above copyright notice and this permission notice shall 
be included in all copies or substantial portions of the Software. # # #### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Hierarchical Model for Abalone Length # # Abalone were collected from various sites on the coast of California north of San Francisco. Here I'm going to develop a model to predict abalone lengths based on sites and harvest method - diving or rock-picking. I'm interested in how abalone lengths vary between sites and harvesting methods. This should be a hierarchical model as the abalone at the different sites are from the same population and should exhibit similar effects based on harvesting method. The hierarchical model will be beneficial since some of the sites are missing a harvesting method. # + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import sampyl as smp from sampyl import np import pandas as pd # - plt.style.use('seaborn') plt.rcParams['font.size'] = 14. plt.rcParams['legend.fontsize'] = 14.0 plt.rcParams['axes.titlesize'] = 16.0 plt.rcParams['axes.labelsize'] = 14.0 plt.rcParams['xtick.labelsize'] = 13.0 plt.rcParams['ytick.labelsize'] = 13.0 # Load our data here. This is just data collected in 2017. data = pd.read_csv('Clean2017length.csv') data.head() # Important columns here are: # # * **full lengths:** length of abalone # * **mode:** Harvesting method, R: rock-picking, D: diving # * **site_code:** codes for 15 different sites # # First some data preprocessing to get it into the correct format for our model. # + # Convert sites from codes into sequential integers starting at 0 unique_sites = data['site_code'].unique() site_map = dict(zip(unique_sites, np.arange(len(unique_sites)))) data = data.assign(site=data['site_code'].map(site_map)) # Convert modes into integers as well # Filter out 'R/D' modes, bad data collection data = data[(data['Mode'] != 'R/D')] mode_map = {'R':0, 'D':1} data = data.assign(mode=data['Mode'].map(mode_map)) # - # ## A Hierarchical Linear Model # # Here we'll define our model. We want to make a linear model for each site in the data where we predict the abalone length given the mode of catching and the site. # # $$ y_s = \alpha_s + \beta_s * x_s + \epsilon $$ # # where $y_s$ is the predicted abalone length, $x$ denotes the mode of harvesting, $\alpha_s$ and $\beta_s$ are coefficients for each site $s$, and $\epsilon$ is the model error. We'll use this prediction for our likelihood with data $D_s$, using a normal distribution with mean $y_s$ and variance $ \epsilon^2$ : # # $$ \prod_s P(D_s \mid \alpha_s, \beta_s, \epsilon) = \prod_s \mathcal{N}\left(D_s \mid y_s, \epsilon^2\right) $$ # # The abalone come from the same population just in different locations. We can take these similarities between sites into account by creating a hierarchical model where the coefficients are drawn from a higher-level distribution common to all sites. 
# # $$ # \begin{align} # \alpha_s & \sim \mathcal{N}\left(\mu_{\alpha}, \sigma_{\alpha}^2\right) \\ # \beta_s & \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right) \\ # \end{align} # $$ class HLM(smp.Model): def __init__(self, data=None): super().__init__() self.data = data # Now define the model (log-probability proportional to the posterior) def logp_(self, μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ): # Population priors - normals for population means and half-Cauchy for population stds self.add(smp.normal(μ_α, sig=500), smp.normal(μ_β, sig=500), smp.half_cauchy(σ_α, beta=5), smp.half_cauchy(σ_β, beta=0.5)) # Priors for site coefficients, sampled from population distributions self.add(smp.normal(site_α, mu=μ_α, sig=σ_α), smp.normal(site_β, mu=μ_β, sig=σ_β)) # Prior for likelihood uncertainty self.add(smp.half_normal(ϵ)) # Our estimate for abalone length, α + βx length_est = site_α[self.data['site'].values] + site_β[self.data['site'].values]*self.data['mode'] # Add the log-likelihood self.add(smp.normal(self.data['full lengths'], mu=length_est, sig=ϵ)) return self() # + sites = data['site'].values modes = data['mode'].values lengths = data['full lengths'].values # Now define the model (log-probability proportional to the posterior) def logp(μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ): model = smp.Model() # Population priors - normals for population means and half-Cauchy for population stds model.add(smp.normal(μ_α, sig=500), smp.normal(μ_β, sig=500), smp.half_cauchy(σ_α, beta=5), smp.half_cauchy(σ_β, beta=0.5)) # Priors for site coefficients, sampled from population distributions model.add(smp.normal(site_α, mu=μ_α, sig=σ_α), smp.normal(site_β, mu=μ_β, sig=σ_β)) # Prior for likelihood uncertainty model.add(smp.half_normal(ϵ)) # Our estimate for abalone length, α + βx length_est = site_α[sites] + site_β[sites]*modes # Add the log-likelihood model.add(smp.normal(lengths, mu=length_est, sig=ϵ)) return model() # - model = HLM(data=data) start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1., 'site_α': np.ones(len(site_map))*201, 'site_β': np.zeros(len(site_map)), 'ϵ': 1.} model.logp_(*start.values()) # + start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1., 'site_α': np.ones(len(site_map))*201, 'site_β': np.zeros(len(site_map)), 'ϵ': 1.} # Using NUTS is slower per sample, but more likely to give good samples (and converge) sampler = smp.NUTS(logp, start) chain = sampler(1100, burn=100, thin=2) # - # There are some checks for convergence you can do, but they aren't implemented yet. Instead, we can visually inspect the chain. In general, the samples should be stable, the first half should vary around the same point as the second half. fig, ax = plt.subplots() ax.plot(chain.site_α); fig.savefig('/Users/mat/Desktop/chains.png', dpi=150) chain.site_α.T.shape fig, ax = plt.subplots(figsize=(16,9)) for each in chain.site_α.T: ax.hist(each, range=(185, 210), bins=60, alpha=0.5) ax.set_xticklabels('') ax.set_yticklabels(''); fig.savefig('/Users/mat/Desktop/posteriors.png', dpi=300) # With the posterior distribution, we can look at many different results. Here I'll make a function that plots the means and 95% credible regions (range that contains central 95% of the probability) for the coefficients $\alpha_s$ and $\beta_s$. 
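# As a quick reference before defining the plotting helper: a 95% credible region is simply the central band between the 2.5th and 97.5th percentiles of the posterior samples. A minimal sketch (assuming the `chain` object produced by the sampler above) for the site intercepts; the plotting function below does the same computation internally.
# +
# Posterior means and 95% credible regions for the site intercepts α_s
site_alpha_means = chain.site_α.mean(axis=0)
site_alpha_CRs = np.percentile(chain.site_α, [2.5, 97.5], axis=0)  # row 0: lower bounds, row 1: upper bounds
# -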
def coeff_plot(coeff, ax=None): if ax is None: fig, ax = plt.subplots(figsize=(3,5)) CRs = np.percentile(coeff, [2.5, 97.5], axis=0) means = coeff.mean(axis=0) ax.errorbar(means, np.arange(len(means)), xerr=np.abs(means - CRs), fmt='o') ax.set_yticks(np.arange(len(site_map))) ax.set_yticklabels(site_map.keys()) ax.set_ylabel('Site') ax.grid(True, axis='x', color="#CCCCCC") ax.tick_params(axis='both', length=0) for each in ['top', 'right', 'left', 'bottom']: ax.spines[each].set_visible(False) return ax # Now we can look at how abalone lengths vary between sites for the rock-picking method ($\alpha_s$). ax = coeff_plot(chain.site_α) ax.set_xlim(175, 225) ax.set_xlabel('Abalone Length (mm)'); # Here I'm plotting the mean and 95% credible regions (CR) of $\alpha$ for each site. This coefficient measures the average length of rock-picked abalones. We can see that the average abalone length varies quite a bit between sites. The CRs give a measure of the uncertainty in $\alpha$, wider CRs tend to result from less data at those sites. # # Now, let's see how the abalone lengths vary between harvesting methods (the difference for diving is given by $\beta_s$). ax = coeff_plot(chain.site_β) #ax.set_xticks([-5, 0, 5, 10, 15]) ax.set_xlabel('Mode effect (mm)'); # Here I'm plotting the mean and 95% credible regions (CR) of $\beta$ for each site. This coefficient measures the difference in length of dive picked abalones compared to rock picked abalones. Most of the $\beta$ coefficients are above zero which indicates that abalones harvested via diving are larger than ones picked from the shore. For most of the sites, diving results in 5 mm longer abalone, while at site 72, the difference is around 12 mm. Again, wider CRs mean there is less data leading to greater uncertainty. # # Next, I'll overlay the model on top of the data and make sure it looks right. We'll also see that some sites don't have data for both harvesting modes but our model still works because it's hierarchical. That is, we can get a posterior distribution for the coefficient from the population distribution even though the actual data is missing. def model_plot(data, chain, site, ax=None, n_samples=20): if ax is None: fig, ax = plt.subplots(figsize=(4,6)) site = site_map[site] xs = np.linspace(-1, 3) for ii, (mode, m_data) in enumerate(data[data['site'] == site].groupby('mode')): a = chain.site_α[:, site] b = chain.site_β[:, site] # now sample from the posterior... 
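# (replace=False below draws distinct posterior samples, so n_samples must not exceed the number of draws kept in the chain)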
idxs = np.random.choice(np.arange(len(a)), size=n_samples, replace=False) # Draw light lines sampled from the posterior for idx in idxs: ax.plot(xs, a[idx] + b[idx]*xs, color='#E74C3C', alpha=0.05) # Draw the line from the posterior means ax.plot(xs, a.mean() + b.mean()*xs, color='#E74C3C') # Plot actual data points with a bit of noise for visibility mode_label = {0: 'Rock-picking', 1: 'Diving'} ax.scatter(ii + np.random.randn(len(m_data))*0.04, m_data['full lengths'], edgecolors='none', alpha=0.8, marker='.', label=mode_label[mode]) ax.set_xlim(-0.5, 1.5) ax.set_xticks([0, 1]) ax.set_xticklabels('') ax.set_ylim(150, 250) ax.grid(True, axis='y', color="#CCCCCC") ax.tick_params(axis='both', length=0) for each in ['top', 'right', 'left', 'bottom']: ax.spines[each].set_visible(False) return ax # + fig, axes = plt.subplots(figsize=(10, 5), ncols=4, sharey=True) for ax, site in zip(axes, [5, 52, 72, 162]): ax = model_plot(data, chain, site, ax=ax, n_samples=30) ax.set_title(site) first_ax = axes[0] first_ax.legend(framealpha=1, edgecolor='none') first_ax.set_ylabel('Abalone length (mm)'); # - # For site 5, there are few data points for the diving method so there is a lot of uncertainty in the prediction. The prediction is also pulled lower than the data by the population distribution. Similarly, for site 52 there is no diving data, but we still get a (very uncertain) prediction because it's using the population information. # # Finally, we can look at the harvesting mode effect for the population. Here I'm going to print out a few statistics for $\mu_{\beta}$. fig, ax = plt.subplots() ax.hist(chain.μ_β, bins=30); b_mean = chain.μ_β.mean() b_CRs = np.percentile(chain.μ_β, [2.5, 97.5]) p_gt_0 = (chain.μ_β > 0).mean() print( """Mean: {:.3f} 95% CR: [{:.3f}, {:.3f}] P(mu_b) > 0: {:.3f} """.format(b_mean, b_CRs[0], b_CRs[1], p_gt_0)) # We can also look at the population distribution for $\beta_s$ by sampling from a normal distribution with mean and variance sampled from $\mu_\beta$ and $\sigma_\beta$. # # $$ # \beta_s \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right) # $$ import scipy.stats as stats samples = stats.norm.rvs(loc=chain.μ_β, scale=chain.σ_β) plt.hist(samples, bins=30); plt.xlabel('Dive harvesting effect (mm)') # It's apparent that dive harvested abalone are roughly 5 mm longer than rock-picked abalone. Maybe this is a bias of the divers to pick larger abalone. Or, it's possible that abalone that stay in the water grow larger. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # ### Import Packages import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from scipy import io from scipy import stats import pickle # ### User Options # load_folder='' #Folder where results are (for loading them) load_folder='/home/jglaser/Files/Neural_Decoding/Results/' # fig_folder='' #Folder to save the figures to fig_folder='/home/jglaser/Figs/Decoding/' datasets=['m1','s1','hc'] #Names of the datasets ill=1 #Whether I am making these plots for exporting to adobe illustrator (in which case I remove the text) colors=['purple', 'blue','cyan','mediumaquamarine','green','yellowgreen','gold','orange', 'magenta', 'red','gray'] #Colors to plot each method # ### Plot Summary (Fig. 
4) # + d=0 #Initialize index of the dataset I'm looking at (this will be the row I plot in the figure) fig, ax = plt.subplots(3,1,figsize=(6,12)) #Create figure (3 rows by 1 column) for dataset in datasets: #Loop through datasets ####LOAD RESULTS FOR ALL METHODS#### with open(load_folder+dataset+'_results_kf2.pickle','rb') as f: [mean_r2_kf,y_pred_kf_all,y_valid_pred_kf_all,y_train_pred_kf_all, y_kf_test_all,y_kf_valid_all,y_kf_train_all]=pickle.load(f) with open(load_folder+dataset+'_results_nb3.pickle','rb') as f: [mean_r2_nb,y_pred_nb_all]=pickle.load(f) with open(load_folder+dataset+'_results_wf2.pickle','rb') as f: [mean_r2_wf,y_pred_wf_all,y_train_pred_wf_all,y_valid_pred_wf_all]=pickle.load(f) with open(load_folder+dataset+'_results_wc2.pickle','rb') as f: [mean_r2_wc,y_pred_wc_all,y_train_pred_wc_all,y_valid_pred_wc_all]=pickle.load(f) with open(load_folder+dataset+'_results_svr2.pickle','rb') as f: [mean_r2_svr,y_pred_svr_all,y_train_pred_svr_all,y_valid_pred_svr_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_xgb2.pickle','rb') as f: [mean_r2_xgb,y_pred_xgb_all,y_train_pred_xgb_all,y_valid_pred_xgb_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_dnn2.pickle','rb') as f: [mean_r2_dnn,y_pred_dnn_all,y_train_pred_dnn_all,y_valid_pred_dnn_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_rnn2.pickle','rb') as f: [mean_r2_rnn,y_pred_rnn_all,y_train_pred_rnn_all,y_valid_pred_rnn_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_gru2.pickle','rb') as f: [mean_r2_gru,y_pred_gru_all,y_train_pred_gru_all,y_valid_pred_gru_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_lstm2.pickle','rb') as f: [mean_r2_lstm,y_pred_lstm_all,y_train_pred_lstm_all,y_valid_pred_lstm_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_ensemble_dnn2.pickle','rb') as f: [mean_r2_ensemble,y_pred_ensemble_all]=pickle.load(f) #### Calculate the mean and standard error across cross-validation folds #### n=mean_r2_wf.shape[0] #Number of folds means=([np.mean(mean_r2_wf),np.mean(mean_r2_wc), np.mean(mean_r2_kf), np.mean(mean_r2_nb), np.mean(mean_r2_svr),np.mean(mean_r2_xgb),np.mean(mean_r2_dnn),np.mean(mean_r2_rnn),np.mean(mean_r2_gru),np.mean(mean_r2_lstm),np.mean(mean_r2_ensemble)]) err=([np.std(mean_r2_wf)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_wc)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_kf)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_nb)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_svr)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_xgb)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_dnn)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_rnn)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_gru)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_lstm)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_ensemble)*np.sqrt(1./n+1./(n-1))]) #Note that the standard errors shown are more conservative than using 1/sqrt(n), since the training sets used #are not independent. 
See http://www.iro.umontreal.ca/~lisa/bib/pub_subject/language/pointeurs/nadeau_MLJ1597.pdf #####PLOT RESULTS###### #Plot error bars ind = np.arange(len(err)) #X values for plotting for pos, y, yerr, color in zip(ind, means, err, colors): #Loop through methods and plot error bars ax[d].errorbar(pos, y, yerr, lw=2, capsize=5, capthick=2, color=color) #Remove x tick labels labels = [item.get_text() for item in ax[d].get_xticklabels()] empty_string_labels = ['']*len(labels) ax[d].set_xticklabels(empty_string_labels) #Remove right and top boundaries, and make ticks inward ax[d].tick_params(direction='in',bottom=0) ax[d].spines['right'].set_color('none') ax[d].spines['top'].set_color('none') #Plot individual R2 values for each fold as an 'x' scatter_x=np.reshape(np.transpose(np.ones((10,1))*range(11)),(110,1)) #Get x values for plotting (first 10 will have an x value of 0, second 10 will have an x value of 1, etc) scatter_y=np.concatenate((mean_r2_wf,mean_r2_wc,mean_r2_kf,mean_r2_nb,mean_r2_svr,mean_r2_xgb,mean_r2_dnn,mean_r2_rnn,mean_r2_gru,mean_r2_lstm,mean_r2_ensemble),axis=0) #Y values for plotting colors_list=[] #Create a list of the colors that should be used when plotting each 'x' for i in scatter_x.astype(np.int).reshape((1,-1))[0]: colors_list.append(colors[i]) ax[d].scatter(scatter_x,scatter_y,c=colors_list,marker='x',alpha=0.3) #Set y axis limits and ticks if dataset=='hc': ax[d].set_ylim([.25, .75]) ax[d].set_yticks(np.arange(.25,.751,.05)) else: ax[d].set_ylim([.6, 1]) if ill: ax[d].set_yticklabels('') d=d+1 #Increase dataset index (so the next dataset gets plot on the next row) plt.show() fig.savefig(fig_folder+'all_summary_v3.eps') #Save figure # - # ### Plot Traces (Fig. 3) # + #Set times to plot ts_sm=np.arange(500,800) #Plot samples 500-800 ts_h=np.arange(500,1000) #Plot samples 500-1000 fig_traces, ax = plt.subplots(11,3,figsize=(15,15)) #Create figure (11 rows by 3 columns) d=0 #Initialize index of the dataset I'm looking at (this will be the column I plot in the figure) for dataset in datasets: #Loop through datasets ####LOAD RESULTS FOR ALL METHODS### with open(load_folder+dataset+'_results_kf2.pickle','rb') as f: [mean_r2_kf,y_pred_kf_all,y_valid_pred_kf_all,y_train_pred_kf_all, y_kf_test_all,y_kf_valid_all,y_kf_train_all]=pickle.load(f) with open(load_folder+dataset+'_results_nb3.pickle','rb') as f: [mean_r2_nb,y_pred_nb_all]=pickle.load(f) with open(load_folder+dataset+'_results_wf2.pickle','rb') as f: [mean_r2_wf,y_pred_wf_all,y_train_pred_wf_all,y_valid_pred_wf_all]=pickle.load(f) with open(load_folder+dataset+'_results_wc2.pickle','rb') as f: [mean_r2_wc,y_pred_wc_all,y_train_pred_wc_all,y_valid_pred_wc_all]=pickle.load(f) with open(load_folder+dataset+'_results_xgb2.pickle','rb') as f: [mean_r2_xgb,y_pred_xgb_all,y_train_pred_xgb_all,y_valid_pred_xgb_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_svr2.pickle','rb') as f: [mean_r2_svr,y_pred_svr_all,y_train_pred_svr_all,y_valid_pred_svr_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_dnn2.pickle','rb') as f: [mean_r2_dnn,y_pred_dnn_all,y_train_pred_dnn_all,y_valid_pred_dnn_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_rnn2.pickle','rb') as f: [mean_r2_rnn,y_pred_rnn_all,y_train_pred_rnn_all,y_valid_pred_rnn_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_gru2.pickle','rb') as f: [mean_r2_gru,y_pred_gru_all,y_train_pred_gru_all,y_valid_pred_gru_all,time_elapsed]=pickle.load(f) with 
open(load_folder+dataset+'_results_lstm2.pickle','rb') as f: [mean_r2_lstm,y_pred_lstm_all,y_train_pred_lstm_all,y_valid_pred_lstm_all,time_elapsed]=pickle.load(f) with open(load_folder+dataset+'_results_ensemble_dnn2.pickle','rb') as f: [mean_r2_ensemble,y_pred_ensemble_all]=pickle.load(f) #Load ground truth data (note that the ground truth data for KF was saved in that results file) with open(load_folder+dataset+'_ground_truth.pickle','rb') as f: [y_test_all,y_train_all,y_valid_all]=pickle.load(f) with open(load_folder+dataset+'_ground_truth_nb.pickle','rb') as f: [y_test_all_nb,y_train_all_nb,y_valid_all_nb]=pickle.load(f) #### PLOT RESULTS #### m=0 #Initialize method number. This corresponds to the row we're currently plotting in. #Set times to plot for the current dataset if d==2: ts=np.copy(ts_h) else: ts=np.copy(ts_sm) t_len=ts.shape[0] #I only commented Wiener filter in detail below (so look at that if confused) #WF ax[m,d].plot(y_test_all[1][ts,0],'k') #Plot actual ax[m,d].plot(y_pred_wf_all[1][ts,0],colors[m]) #Plot predictions, in color specified in user options section ax[m,d].tick_params(direction='in') #Make ticks inward #Set y limit and ticks if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) #Set x limit ax[m,d].spines['right'].set_color('none') #Remove right boundary ax[m,d].spines['top'].set_color('none') #Remove top boundary #If plotting for illustrator, remove text if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #Increase method index, so the next method gets plotted on the next row #WC ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_wc_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #KF #There is slightly different time indexing for the Kalman filter, so we need to adjust for the offset #This is a result of different buffering of the training/testing/validation sets, along with lags in the kalman filter #The time index differences below are for the lags used for the CV fold we are plotting. 
if dataset=='m1': kf_time_diff=11 if dataset=='s1': kf_time_diff=5 if dataset=='hc': kf_time_diff=4 ax[m,d].plot(y_test_all[1][ts,0],'k') if dataset=='hc': ax[m,d].plot(y_pred_kf_all[1][ts+kf_time_diff,0],colors[m]) #Adjust for different time index else: ax[m,d].plot(y_pred_kf_all[1][ts+kf_time_diff,2],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #NB nb_time_diff=len(y_test_all_nb[1])-len(y_test_all[1]) #There is slightly different time indexing for the Naive Bayes decoder, so we need to adjust for the offset mu=np.mean(y_train_all_nb[1],axis=0) #For the NB decoder, we never zero-centered the data (subtracted out the mean), so we need to do that for plotting ax[m,d].plot(y_test_all_nb[1][ts+nb_time_diff,0]-mu[0],'k') ax[m,d].plot(y_pred_nb_all[1][ts+nb_time_diff,0]-mu[0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #SVR #For SVR, we fit the model to z-scored y values. #So to put them back into their original units, we need to multiply by the standard deviation of the original training data stdev=np.std(y_train_all[1]) ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_svr_all[1][ts,0]*stdev,colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #XGB ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_xgb_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #DNN ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_dnn_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #RNN ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_rnn_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') 
ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #GRU ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_gru_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #LSTM ax[m,d].plot(y_test_all[1][ts,0],'k') ax[m,d].plot(y_pred_lstm_all[1][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 #ENSEMBLE ax[m,d].plot(y_test_all[1][ts,0],'k') #For the ensemble, the output is a list of length 20, rather than 10x2 ax[m,d].plot(y_pred_ensemble_all[2][ts,0],colors[m]) ax[m,d].tick_params(direction='in') if dataset=='hc': ax[m,d].set_ylim([-100,100]) ax[m,d].set_yticks(np.arange(-100,100.1,50)) else: ax[m,d].set_ylim([-30,30]) ax[m,d].set_yticks(np.arange(-30,30.1,15)) ax[m,d].set_xlim([0,t_len]) ax[m,d].spines['right'].set_color('none') ax[m,d].spines['top'].set_color('none') if ill: ax[m,d].set_xticklabels('') ax[m,d].set_yticklabels('') m=m+1 d=d+1 fig_traces.savefig(fig_folder+'all_traces_wnb.eps') # - # ## Comparing KF Methods (Fig S1) # + d=0 #Initialize dataset fig, ax = plt.subplots(3,1,figsize=(2,12)) #Create figure (3 rows x 1 column) colors=['lightblue','cyan'] #Colors to plot the 2 KF variants for dataset in datasets: #Loop through datasets ###LOAD RESULTS### with open(load_folder+dataset+'_results_kf0.pickle','rb') as f: [mean_r2_kf0,y_pred_kf_all0,y_kf_test_all0]=pickle.load(f) with open(load_folder+dataset+'_results_kf2.pickle','rb') as f: [mean_r2_kf,y_pred_kf_all,y_valid_pred_kf_all,y_train_pred_kf_all, y_kf_test_all,y_kf_valid_all,y_kf_train_all]=pickle.load(f) #####PLOT RESULTS###### #### Calculate the mean and standard error across cross-validation folds #### n=mean_r2_kf.shape[0] #Number of folds means=([np.mean(mean_r2_kf0), np.mean(mean_r2_kf)]) err=([np.std(mean_r2_kf0)*np.sqrt(1./n+1./(n-1)),np.std(mean_r2_kf)*np.sqrt(1./n+1./(n-1))]) #Note that the standard errors shown are more conservative than using 1/sqrt(n), since the training sets used #are not independent. 
See http://www.iro.umontreal.ca/~lisa/bib/pub_subject/language/pointeurs/nadeau_MLJ1597.pdf #Plot bars ind = np.arange(len(err)) #X values for plotting for pos, y, yerr, color in zip(ind, means, err, colors): #Loop through methods and plot error bars ax[d].errorbar(pos, y, yerr, lw=2, capsize=5, capthick=2, color=color) # width=.95 # ax[d].bar(ind, means, width, yerr=err, color=colors,alpha=0.7,error_kw=dict(ecolor='black', lw=2, capsize=5, capthick=2)) #Remove x tick labels labels = [item.get_text() for item in ax[d].get_xticklabels()] empty_string_labels = ['']*len(labels) ax[d].set_xticklabels(empty_string_labels) #Remove right and top boundaries, and make ticks inward ax[d].tick_params(direction='in',bottom=0) ax[d].spines['right'].set_color('none') ax[d].spines['top'].set_color('none') #Plot individual R2 for each fold scatter_x=np.reshape(np.transpose(np.ones((10,1))*range(2)),(20,1)) #Get x values for plotting (first 10 will have an x value of 0, second 10 will have an x value of 1, etc) scatter_y=np.concatenate((mean_r2_kf0,mean_r2_kf),axis=0) #Get y values for plotting colors_list=[] #Create a list of the colors that should be used when plotting each 'x' for i in scatter_x.astype(np.int).reshape((1,-1))[0]: colors_list.append(colors[i]) ax[d].scatter(scatter_x,scatter_y,c=colors_list,marker='x',alpha=0.3) #Set y limit and ticks if dataset=='hc': ax[d].set_ylim([0, .7]) ax[d].set_yticks(np.arange(0,.701,.05)) else: ax[d].set_ylim([.6, 1]) if ill: ax[d].set_yticklabels('') ax[d].set_xlim([-0.5, 1.5]) d=d+1 #Update row of next dataset plt.show() fig.savefig(fig_folder+'kf_summary3.eps') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + nbsphinx="hidden" import open3d as o3d import os import time import sys # monkey patches visualization and provides helpers to load geometries sys.path.append('..') import open3d_tutorial as o3dtut # change to True if you want to interact with the visualization windows o3dtut.interactive = not "CI" in os.environ # - # # Intrinsic shape signatures (ISS) # # In this tutorial we will show how to detect the **ISS** Keypoints of a 3D shape. The implementation is based on the keypoint detection modules proposed in Yu Zhong , "Intrinsic Shape Signatures: A Shape Descriptor for 3D Object Recognition", 2009. # # # ## ISS Keypoints # # ISS saliency measure is based on the Eigenvalue Decomposition (EVD) of the scatter matrix $\boldsymbol{\Sigma}(\mathbf{p})$ of the points belonging to the support of $p$, i.e. # # $$\begin{array}{l} # \boldsymbol{\Sigma}(\mathbf{p})=\frac{1}{N} \sum_{\mathbf{q} \in \mathcal{N}(\mathbf{p})}\left(\mathbf{q}-\mu_{\mathbf{p}}\right)\left(\mathbf{q}-\mu_{\mathbf{p}}\right)^{T} \quad \text { with } \\ # \mu_{\mathbf{p}}=\frac{1}{N} \sum_{\mathbf{q} \in \mathcal{N}(\mathbf{p})} \mathbf{q} # \end{array}$$ # # Given $\boldsymbol{\Sigma}(\mathbf{p})$, its eigenvalues in decreasing magnitude order are denoted here as $\lambda_1$, $\lambda_2$, $\lambda_3$. 
During the pruning stage, points whose ratio between two successive eigenvalues is below a threshold are retained: # # $$\frac{\lambda_{2}(\mathbf{p})}{\lambda_{1}(\mathbf{p})}<\gamma_{12} \wedge \frac{\lambda_{3}(\mathbf{p})}{\lambda_{2}(\mathbf{p})}<\gamma_{23}$$ # # The rationale is to avoid detecting keypoints at points exhibiting a similar spread along the principal directions, where a repeatable canonical reference frame cannot be established and, therefore, the subsequent description stage can hardly turn out effective. Among the remaining points, the saliency is determined by the magnitude of the smallest eigenvalue # # $$\rho(\mathbf{p}) \doteq \lambda_{3}(\mathbf{p})$$ # # so that only points with large variations along each principal direction are kept. # # After the detection step, a point will be considered a **keypoint** if it has the maximum saliency value within a given neighborhood. # # **NOTE:** For more details please refer to the original publication or to "Performance Evaluation of 3D Keypoint Detectors" by Tombari et al. # # ## ISS keypoint detection example # + # Compute ISS Keypoints on Armadillo mesh = o3dtut.get_armadillo_mesh() pcd = o3d.geometry.PointCloud() pcd.points = mesh.vertices tic = time.time() keypoints = o3d.geometry.keypoint.compute_iss_keypoints(pcd) toc = 1000 * (time.time() - tic) print("ISS Computation took {:.0f} [ms]".format(toc)) mesh.compute_vertex_normals() mesh.paint_uniform_color([0.5, 0.5, 0.5]) keypoints.paint_uniform_color([1.0, 0.75, 0.0]) o3d.visualization.draw_geometries([keypoints, mesh], front=[0, 0, -1.0]) # - # This function is only used to make the keypoints look better on the rendering def keypoints_to_spheres(keypoints): spheres = o3d.geometry.TriangleMesh() for keypoint in keypoints.points: sphere = o3d.geometry.TriangleMesh.create_sphere(radius=0.001) sphere.translate(keypoint) spheres += sphere spheres.paint_uniform_color([1.0, 0.75, 0.0]) return spheres # + # Compute ISS Keypoints on Stanford Bunny, changing the default parameters mesh = o3dtut.get_bunny_mesh() pcd = o3d.geometry.PointCloud() pcd.points = mesh.vertices tic = time.time() keypoints = o3d.geometry.keypoint.compute_iss_keypoints(pcd, salient_radius=0.005, non_max_radius=0.005, gamma_21=0.5, gamma_32=0.5) toc = 1000 * (time.time() - tic) print("ISS Computation took {:.0f} [ms]".format(toc)) mesh.compute_vertex_normals() mesh.paint_uniform_color([0.5, 0.5, 0.5]) o3d.visualization.draw_geometries([keypoints_to_spheres(keypoints), mesh]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt k = 0.9 # + def _LAI_1(w): LA = -0.0025 * w ** 2 + 0.072 * w A = 0.02 * w if w > 2.4: A = 0.02 * 2.4 return LA / A def _LAI_2(w): if w <= 2.4: return -.125 * w + 3.6 else: return -0.0516 * w ** 2 + 1.49 * w def _LAI_3(w): LAI = -0.0516 * w ** 2 + 1.49 * w if w < 2.4: return 3.3 elif w > 13.76: return 10.733 else: return LAI def _LAI_4(w): LAI = _LAI_2(w) if LAI < 0: return 0 return LAI def _expLAI(lai): return np.exp(-k * lai) def _switch(left, right, on, loc, k): sig = 1 / (1 + np.exp(-k * (on - loc))) return (1 - sig) * left + sig * right # + ws = np.arange(0, 30, 0.1) lai1 = [_LAI_1(w) for w in ws] lai2 = [_LAI_2(w) for w in ws] lai3 = [_LAI_3(w) for w in ws] lai4 = [_LAI_4(w) for w in ws] plt.figure() # plt.plot(ws, lai1, label='Index 1') plt.plot(ws, lai2, 
label='Actual LAI') # plt.plot(ws, lai3, label='Index 3') plt.plot(ws, lai4, label='Approximate LAI') plt.xlim(0, 30) plt.legend() plt.grid() # - mx = 1.42 / (2 * 0.0516) mxval = _LAI_2(mx) print('Max at w = {:.3f}, value = {:.3f}'.format(mx, mxval)) lower = -.125 * 2.4 + 3.6 print('Lower at w = {:.3f}'.format(lower)) # + exp2 = [_expLAI(lai) for lai in lai2] exp3 = [_expLAI(lai) for lai in lai3] exp4 = [_expLAI(lai) for lai in lai4] exp5 = [_switch(0, 1, w, 27.5, 2) for w in ws] plt.figure() plt.plot(ws, exp2, label='Actual LAI') # plt.plot(ws, exp3, label='Index 3') plt.plot(ws, exp4, label='Approximate LAI') plt.plot(ws, exp5, label='Approximate exp(-kLAI)') plt.xlim(0, 30) plt.legend() plt.grid() # - print(min(exp4)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="fsm9o_BEMvb0" colab_type="text" # # Exercise 4: imports - and "random" numbers # # Looking at part 2 of last week's homework, you may have come across something like this: # + id="eBSZJGdIMvb1" colab_type="code" colab={} import konyvtar """ ... """ ertek = konyvtar.fuggveny(argumentumok) """ ... """ # + [markdown] id="LqKBlLz7Mvb3" colab_type="text" # You saw that in this way we could use functions (for example a timer or a plotting function) that we did not write ourselves and do not want to write. In Python, the objects that contain these "extra" functions are called "libraries". Some of Python's libraries are installed together with the language (such as **time** for timing, or **os**, which is handy for file and folder operations), while others have to be installed separately (**matplotlib**, used in the second exercise, is usually like that). # # ## How do we get hold of libraries? # # If for some reason you did not have matplotlib installed, you already had to look this up, but it is not that bad to do after the fact either: you can install new libraries from the command line (or from the Anaconda Prompt), downloaded from the internet. [Here](https://packaging.python.org/tutorials/installing-packages/) is an overview of installing packages in Python. # # ## Reusing our own files # # We can work with more than just built-in or downloaded libraries: if there is another .py file in the folder of your file, you can use its functions too by importing it with **import filename**, without the extension. # # ## Random numbers - the assignment # # In class we looked at how to produce "approximately random" numbers in software. You can read more about this [here](https://en.wikipedia.org/wiki/Pseudorandom_number_generator). We talked about three algorithms: # # ### Random numbers using the system clock # # The clock built into Python's **time** function changes value roughly every 200 nanoseconds. As a first approximation, we can treat the last digits of the system clock as roughly random, because it ticks so fast that we cannot know where it happens to be when we run our program.
# # + id="hnmFO5ctMvb3" colab_type="code" colab={} import time #a time könyvtárat használni fogom def randomTimeInt(upperBound): timeSeed = time.time() #megmérem az időt randInt = (int(timeSeed*10000000))%upperBound #time.time floatot ad vissza, felszorzom, hogy egész legyen, majd maradékot veszem return randInt # + [markdown] id="5L-MLmtkP7gt" colab_type="text" # Igazából a rendszeróra használatával csaltunk egy kicsit: az igazi pszeudorandom generátorok nem függenek semmilyen fizikai jelenségtől, csak annyit csinálnak, egy számból - amit "seed"-nek nevezünk - csinálnak egy másikat, úgy, hogy ne nagyon lehessen felfedezni a két szám közötti összefüggést. Jöjjön pár "igazi" váletlenszám generátor; persze ilyen röviden csak a gagyibbakat tudjuk megnézegetni. # # ### Middle square módszer # # Ez az algoritmus az egyik legnagyobb magyar matematikushoz, Neumann Jánoshoz köthető; az ő korában, az 50-es/60-as években használták véletlen számok generálására. A [wikipédia](https://en.wikipedia.org/wiki/Middle-square_method) elég szépen részletezi a működését. A **len** függvényről még lehet, hogy nem volt szó: ez megmondja egy lista vagy str hosszát egész számként. # + id="a-LPDLOzMvb7" colab_type="code" colab={} def middleSquare(seed, intUpperBound): lenRandomNumber = len(str(seed)) seed = seed**2 seed = str(seed) while (len(seed) < lenRandomNumber*2): seed = '0'+seed newSeed = int(seed[lenRandomNumber//2:lenRandomNumber+lenRandomNumber//2]) randomNumber = newSeed/(10**(lenRandomNumber)) randomNumber *= intUpperBound return int(randomNumber), newSeed # + [markdown] id="B-6IdIsCbYUG" colab_type="text" # ## A maradékok módszere: LCG # # Az LCG-gk olyan algoritmusok, amik a seed-ből úgy állítanak elő új számokat, hogy megszorozzák $a$ tetszőleges számmal, hozzáadnak $b$ tetszőleges számot, majd veszik az összeg maradékát egy $c$ tetszőleges számmal osztva. 
Written as a formula, it looks like this: # # $$ rand = (a \cdot seed + b) \bmod c $$ # # Choosing the coefficients $a$, $b$ and $c$ well is important for our LCG to work well; a cautionary example is the [RANDU](https://en.wikipedia.org/wiki/RANDU) algorithm, an LCG with $a=65539$, $b=0$, $c=2^{31}$ which, if we look at it closely, [does not look random at all.](https://www.purinchu.net/wp/2009/02/06/the-randu-pseudo-random-number-generator/) # + id="WRvaqczPMvb8" colab_type="code" colab={} def RANDU(seed, intUpperBound): newSeed = (65539*seed)%(2**31) newSeedNormalized = newSeed/(10**(len(str(newSeed)))) randInt = int((intUpperBound+1)*newSeedNormalized) return randInt-1, newSeed # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="dUTb9J_NEEyY" import tensorflow as tf # + id="JpvHlp0UGHBD" mnist = tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test) = mnist.load_data() # + id="lrxApFXYRMc7" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="e9580b13-69f7-4cec-b826-66dae5031ecc" import matplotlib.pyplot as plt plt.imshow(x_train[0],cmap = 'gray') print(y_train[0]) plt.show() # + id="j5QF6i3nSZu3" # Reshape the images to 4 dimensions x_train = x_train.reshape(60000,28,28,1) x_test = x_test.reshape(10000,28,28,1) # + id="uIozdDvCX92e" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1ca9dea3-48a2-4062-eec3-eb0feae7898d" x_train.shape y_train.shape # + id="m5MwAGLRSzwE" # One Hot Encoding vs Label Encoding from tensorflow.keras.utils import to_categorical # + id="M2k2o296TRce" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c942105f-1400-4419-a2e6-644d97c1e1f6" y_train # + id="AKZ0GyL2TUbX" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="b1c78310-2f90-4818-92a6-be4180f341d3" print(y_train[567]) y_cat_train = to_categorical(y_train) y_cat_test = to_categorical(y_test) y_cat_train[567] # + id="PRpd2jOMTZiA" from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense,Conv2D,MaxPool2D,Flatten # + id="zWBqDa3fVJQd" model = Sequential() # Convolutional Layer model.add(Conv2D(filters = 32,kernel_size=(4,4),input_shape = (28,28,1),activation = 'relu')) # Pooling Layer model.add(MaxPool2D(pool_size=(2,2))) model.add(Conv2D(filters = 32,kernel_size=(4,4),input_shape = (28,28,1),activation = 'relu')) # Pooling Layer model.add(MaxPool2D(pool_size=(2,2))) # Flatten model.add(Flatten()) # Dense layers (512 and 128 neurons) model.add(Dense(512,activation='relu')) model.add(Dense(128,activation='relu')) #Output Layer model.add(Dense(10,activation='softmax')) # + id="niI4nPmSWGL0" model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics = ['accuracy']) # + id="1LPjMvudXXx6" colab={"base_uri": "https://localhost:8080/", "height": 420} outputId="b9184728-6254-42aa-dfc1-6357689570e4" model.summary() # + id="dYc-c9vCXaWS" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="9bfd5bc0-e414-4d4e-cf4d-0462d2f17236" model.fit(x_train,y_cat_train,epochs = 5) # batch_size =32 # + id="TRIBhZepSEfl" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import MeCab mecab = MeCab.Tagger("-Ochasen") # 
MeCabオブジェクトを生成 malist = mecab.parse("庭には二羽鶏がいる。") # 形態素解析を行う print(malist) # + tagger = MeCab.Tagger() tagger.parse('') # parseToNode()の不具合を回避するために必要 node = tagger.parseToNode('すもももももももものうち') while node: print(node.surface, node.feature) node = node.next # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["f1e92cacf54742ad8256c6191b8bcc6c", "229d713cf2f842e48d2cf0e1662fd0ff", "57a268e19c04405b88f805de15a73b33", "0b8e0db4a22343019e052d3c4a4b80b9", "b8ae754cbb4041b5ae15d561de57c108", "f01e8e92edcd406f95e367a1fe0ed3da", "ac0020ed4a73405d90c624d282b7bfc1", "755405fbc7b2415eafd2bb756f8b9d69"]} id="1T-_R96CYDTe" executionInfo={"status": "error", "timestamp": 1612447891995, "user_tz": -60, "elapsed": 29131, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi1qlhJVh2Na0BWeA4HjbsGIKblq2p5iko0FwkM=s64", "userId": "03819353067086790089"}} outputId="6d6f9573-5765-4e24-ab33-2769bdcd7aba" import os import spacy import numpy as np import pandas as pd from PIL import Image from collections import Counter import matplotlib.pyplot as plt import matplotlib.image as mpimg import argparse import random import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision.transforms as T import torchvision.models as models from torch.nn.utils.rnn import pad_sequence from torch.nn.utils.rnn import pack_padded_sequence from torch.utils.data import DataLoader, Dataset class TextManager: """Class that will build vocabulary and tokenization utilities to be used by natural network model""" def __init__(self, freq_threshold=None): """Initialization: Args: freq_threshold: specify the threshold for frequency of a word to be added in vocabulary """ # create the vocabulary with some pre-reserved tokens: self.vocabulary = {"": 0, "": 1, "": 2, "": 3} #PAD: padding , SOS: start of sentence , EOS: end of sentence , UNK: unknown word self.spacy_en = spacy.load("en") # for a better text tokenization purpose self.freq_threshold = freq_threshold# threshold of word frequency def __len__(self): """The total number of words in the vocabulary.""" return len(self.vocabulary) def get_key(self, val): """Method to be used to get the key of a value in vocabulary Args: val: the index of the corresponding word in vocabulary Return: key: the word corresponding to the given index of the vocabulary """ for key, value in self.vocabulary.items(): if val == value: return key def tokenize(self, text): """Method for text tokenization (using spacy)""" return [token.text.lower() for token in self.spacy_en.tokenizer(text)] def build_vocab(self, sentence_list): """Method used to build the vocabulary """ frequencies = Counter() #using 'Counter()' for counting the words idx = 4 #index of the first word of the sentence(0,1,2,3 are reserved!) 
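# Count word occurrences across all captions; a word is assigned the next free index the first time its count reaches freq_threshold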
for sentence in sentence_list: for word in self.tokenize(sentence): frequencies[word] += 1 # add the word to the vocab if it reaches minimum frequency threshold if frequencies[word] == self.freq_threshold: self.vocabulary[word] = idx idx += 1 def numericalize(self, text): """Method used for converting tokenized text into indices (using spacy) Args: text: input list of tokenized word Return: a list of indices corresponding to each word """ tokenized_text = self.tokenize(text) return [self.vocabulary[token] if token in self.vocabulary else self.vocabulary[""] for token in tokenized_text] def save(self, file): """Save the vocabulary to file 'vocabulary.dat' """ with open(file, 'w+', encoding='utf-8') as f: for word, idx in self.vocabulary.items(): f.write("{} {}\n".format(idx, word)) def load(self, file): """Load the vocabulary file 'vocabulary.dat' """ self.vocabulary = {} with open(file, 'r', encoding='utf-8') as f: for line in f: line_fields = line.split() self.vocabulary[line_fields[1]] = int(line_fields[0]) def load_embedding_file(self, embed_file): """ Creates an embedding matrix for the vocabulary. Args: embed_file: embeddings file with GloVe format :return: embeddings tensor in the same order as the words in the vocabulary """ if _data_set.vocab.vocabulary is None: raise ValueError("Vocabulary doesn't exists!!!") # Find embedding dimension with open(embed_file, 'r') as f: embed_size = len(f.readline().split()) - 1 words_in_vocab = set(_data_set.vocab.vocabulary.keys()) # create a copy of words from vocabulary into a set # Create the initialized tensor for embeddings embedding_matrix = torch.FloatTensor(len(words_in_vocab), embed_size) std = np.sqrt(5.0 / embedding_matrix.size(1))# used uniform distribution specially for pre-reserved tokens torch.nn.init.uniform_(embedding_matrix, -std, std) for line in open(embed_file, 'r'): line_fields = line.split() embed_word = line_fields[0].lower() #force words to be in lower case if embed_word not in words_in_vocab: # Ignoring the word which does not exists in the vocabulary continue word_id = _data_set.vocab.vocabulary[embed_word] embedding_matrix[word_id, :] = torch.tensor([float(i) for i in line_fields[1:]], dtype=torch.float32) return embedding_matrix, embed_size class CapDataset(Dataset): """ Image Captioning Dataset Class which makes a generic dataset for images and captions""" def __init__(self, path, captions_file, preprocess_custom=None, empty_dataset=None, eval=None, test=None): """ create a dataset: Args: path: this is a string to the address of dataset directory captions_file: this is a string indication the name of captions file preprocess_custom: bool value indicating if we want a custom preprocess(True) or a default preprocess(None) empty_dataset: bool value indication if we are going to create an empty dataset or not eval: bool value indicating if we want to create a dataset for evaluation test: bool value indicating if we want to create a dataset for testing Returns: """ #initialize attributes self.root_dir = path #path to the root directory of dataset self.images = [] #holding the images loading from files self.captions = [] #holding the captions loading from files self.preprocess = None #type of preprocessing for images self.data_mean = torch.zeros(3) #create a zeros tensor for mean self.data_std = torch.ones(3) #create a ones tensor for standard deviation self.data_mean[:] = torch.tensor([0.485, 0.456, 0.406]) #mean which is used by resnet50 self.data_std[:] = torch.tensor([0.229, 0.224, 0.225]) #std which is used by resnet50 
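# Note: the mean/std values above are the standard ImageNet normalization statistics expected by torchvision's pretrained ResNet models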
#check whether the path is correct or not if path is None: raise ValueError("You must specify the dataset path!") if not os.path.exists(path) or os.path.isfile(path): raise ValueError("Invalid data path: " + str(path)) #check whether we want a custom preprocessing or just use a default operation if preprocess_custom is not None: #using a custom preprocess self.preprocess = T.Compose([ T.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3./4., 4./3.)), T.RandomHorizontalFlip(p=0.5), T.ToTensor(), T.Normalize(mean=self.data_mean, std=self.data_std), ]) #using default preprocess else: self.preprocess = T.Compose([ T.Resize(226), T.RandomCrop(224), T.ToTensor(), T.Normalize(mean=self.data_mean, std=self.data_std) ]) #create a dataset for training purpose if not eval and not test: if not empty_dataset: # Get image and caption colum from the dataframe self.df = pd.read_csv(captions_file) self.files = self.df.values.tolist() self.images = (self.df["image"]).values.tolist() self.captions = (self.df["caption"]).values.tolist() # Initialize vocabulary object and build vocab self.vocab = TextManager(args.freq_threshold) self.vocab.build_vocab(self.captions) self.vocab.save('vocabulary.dat') print("Vocabulary size: ", len(self.vocab)) # if we want to create an empty dataset(for splitting purpose) else: self.vocab = _data_set.vocab self.files = _data_set.files # create a dataset in which you only want to evaluate a pre_trained model elif eval: if not os.path.exists('vocabulary.dat'): raise ValueError("There is no 'Vocabulary.dat' file") self.df = pd.read_csv(captions_file) self.files = self.df.values.tolist() self.images = (self.df["image"]).values.tolist() self.captions = (self.df["caption"]).values.tolist() self.vocab = TextManager() self.vocab.load('vocabulary.dat') print("vocabulary is loaded!! size=", len(self.vocab)) # create a dataset in which you only want to test a pre_trained model elif test: if not os.path.exists('vocabulary.dat'): raise ValueError("There is no 'Vocabulary.dat' file") folder_contents = os.listdir(self.root_dir) self.files = [f for f in folder_contents if os.path.isfile(os.path.join(self.root_dir, f)) and f.endswith(".jpg")] for i in range(0, len(self.files)): self.images.append((self.files[i])) self.captions.extend(["dummy dummy dummy dummy"]) self.vocab = TextManager() self.vocab.load('vocabulary.dat') print("vocabulary is loaded!! size=", len(self.vocab)) def __len__(self): """Compute the lenght of the data set(each image has 5 captions so each iamge is repeated 5 times)""" return len(self.captions) def __getitem__(self, idx): """Load the next (image,caption) from disk. Args: index: the index of the element to be loaded. Returns: The image, caption, and caption lenght. 
""" caption = self.captions[idx] img_name = self.images[idx] img_location = os.path.join(self.root_dir, img_name) img = Image.open(img_location).convert("RGB") #apply the transfromation to the image img = self.preprocess(img) #numericalize the caption text caption_vec = [] #add the SOS(start of a sentence)token at the begining of sentence caption_vec += [self.vocab.vocabulary[""]] caption_vec += self.vocab.numericalize(caption) #add the EOS(end of a sentence)token at the begining of sentence caption_vec += [self.vocab.vocabulary[""]] caption_len = torch.LongTensor([len(caption_vec)]) return img, torch.tensor(caption_vec), caption_len def randomize_data(self): """Randomize the dataset.""" order = [a for a in range(len(self.files))] random.shuffle(order) # shuffling the data self.files = [self.files[a] for a in order] def create_splits(self, proportions: list): """ split the dataset into our customize proportion Args: proportions: a list in which we are going to split the dataset returns: it returns a list of Dataset Objects(one Dataset per split) """ p = 0.0 invalid_prop_found = False for prop in proportions: if prop <= 0.0: invalid_prop_found = True break p +=prop if invalid_prop_found or p > 1.0 or len(proportions) == 0: # you are allowed to choose a portion of the data(good!!) raise ValueError("Invalid fraction for splitting!!!(It must be possitive and its sum must not be grater than 1)") data_size = len(self.files) num_splits = len(proportions) datasets = [] for i in range(0, num_splits): if i==0: datasets.append(CapDataset(self.root_dir, self.files, preprocess_custom=args.preprocess, empty_dataset=True)) elif i==1: datasets.append(CapDataset(self.root_dir, self.files, preprocess_custom=None, empty_dataset=True)) else: datasets.append(CapDataset(self.root_dir, self.files, preprocess_custom=None, empty_dataset=True)) start = 0 for i in range(0, num_splits): p = proportions[i] n = int(p * data_size) end = start + n datasets[i].images.extend([self.images[z]for z in range(start, end)]) datasets[i].captions.extend([self.captions[z]for z in range(start, end)]) start = end print("Dataset size:", data_size) print("Selected Dataset size:", len(datasets[0]) + len(datasets[1]) + len(datasets[2]) ) print("-Training size:", len(datasets[0])) print("-Validation size:", len(datasets[1])) print("-Test size:", len(datasets[2])) print("-------------------------------") return datasets def CapCollate(self, batch): """Collate to apply the padding to the captions with dataloader Args: batch: compose of images, caption, captions lenght Returns: images, padded captions, and captions lenght """ self.pad_idx = self.vocab.vocabulary[""] imgs = [item[0].unsqueeze(0) for item in batch] #get the images of the batch imgs = torch.cat(imgs, dim=0) targets = [item[1] for item in batch] #get the captions of the batch targets = pad_sequence(targets, batch_first=True, padding_value=self.pad_idx) caption_len = torch.LongTensor([item[2].item() for item in batch]) #create a tensor of captions lenght return imgs, targets, caption_len class Encoder_CNN(nn.Module): """Class that models the CNN for resnet50 in order to encode the input images""" def __init__(self): super(Encoder_CNN, self).__init__() self.resnet = torch.hub.load('pytorch/vision:v0.7.0', 'resnet50', pretrained=True) for param in self.resnet.parameters(): param.requires_grad = False module = list(self.resnet.children())[:-2] #removing the last: AdaptiveAvgPool2d(), Linear(in=2048, out=1000) self.resnet = nn.Sequential(*module) def forward(self, images): features = 
self.resnet(images) # (batch_size,2048,7,7) Number Of feature Maps=2048, each feature size=7x7 features = features.permute(0, 2, 3, 1) # (batch_size,7,7,2048) features = features.view(features.size(0), -1, features.size(-1)) # (batch_size,49,2048) # .view(-1) inferred size return features class Attention(nn.Module): """Class that models the Attention neural model """ def __init__(self, encoder_dim, decoder_dim, attention_dim): super(Attention, self).__init__() #Linear transformation from decoder and encoder to attention dimension self.LinearEncoder = nn.Linear(encoder_dim, attention_dim) self.LinearDecoder = nn.Linear(decoder_dim, attention_dim) # final projections self.relu = nn.ReLU() self.LinearAttention = nn.Linear(attention_dim, 1) def forward(self, features, hidden_state): #linear transformation from encoder(CNN) and decoder(LSTM) dimension to attention dimension LinearEncoder_outputs = self.LinearEncoder(features) LinearDecoder_outputs = self.LinearDecoder(hidden_state) #combine encoder and decoder outputs together (we can also use tanh()) combined_output = self.relu(LinearEncoder_outputs + LinearDecoder_outputs.unsqueeze(1)) #compute the outpout of attention network attention_outputs = self.LinearAttention(combined_output) #compute the alphas attention_scores = attention_outputs.squeeze(2) alphas = F.softmax(attention_scores, dim=1) #the sum of all alphas should be equal to one. #compoute the context vector context_vector = features * alphas.unsqueeze(2) context_vector = context_vector.sum(dim=1) return alphas, context_vector class Decoder_LSTM(nn.Module): def __init__(self, embed_size, pretrained_embed, vocab_size, attention_dim, encoder_dim, decoder_dim): super().__init__() # check whether to use pretrained word embedding or not if pretrained_embed is None: embed_size = embed_size self.embedding = nn.Embedding(vocab_size, embed_size) else: embed_size = pretrained_embed.shape[1] self.embedding = nn.Embedding.from_pretrained(pretrained_embed, freeze=True) # set the model attributes self.vocab_size = vocab_size self.attention_dim = attention_dim self.decoder_dim = decoder_dim #create attention object self.attention = Attention(encoder_dim, decoder_dim, attention_dim) #create initial hidden and cell state self.init_h = nn.Linear(encoder_dim, decoder_dim) self.init_c = nn.Linear(encoder_dim, decoder_dim) #create LSTMCell object for decoder self.lstm_cell = nn.LSTMCell(embed_size + encoder_dim, decoder_dim, bias=True) #Linear transformation for attention output self.decoder_out = nn.Linear(decoder_dim, self.vocab_size) self.dropout = nn.Dropout(0.3) #initialize model parameters randomely in range(-1, 1) self.embedding.weight.data.uniform_(-0.1, 0.1) def forward(self, features, captions, caption_len): #sort the captions and images corresponding to their lenght caption_lengths, sorted_indices = caption_len.sort(dim=0, descending=True) features = features[sorted_indices,:,:] #[32, 49, 2048] captions = captions[sorted_indices,:] seq_length = caption_lengths - 1 # Exclude the last one (the position) batch_size = captions.size(0) num_features = features.size(1) #from sequences of token IDs to sequences of word embeddings embeds = self.embedding(captions) # Initialize LSTM hidden and cell states h, c = self.init_hidden_state(features) #create zeros tensor for outputs and alphas outputs = torch.zeros(batch_size, seq_length[0], self.vocab_size).to(device) alphas = torch.zeros(batch_size, seq_length[0], num_features).to(device) # for each time step we will ask attention model to returns a 
context vector # based on decoder's previous hidden state then we will generate new word for word in range(0, seq_length[0].item()): #get context vector with the encoder features and previous hidden state alpha, context = self.attention(features, h) # context=[32, 2048] #combine embedding vector of the word with context vector and feed it to lstmcell lstm_input = torch.cat([embeds[:, word], context], dim=1) # [32, 2348] h, c = self.lstm_cell(lstm_input, (h, c)) #get the logits of the decoder (also we used dropout for regularization purpose) logits = self.decoder_out(self.dropout(h)) #append all generated words in the outputs tensor outputs[:, word] = logits alphas[:, word] = alpha return outputs, alphas, captions, seq_length def CapGenerator(self, features, max_len=20, vocab=None): """ Method used to generate a caption for a given image """ alphas = [] captions = [] batch_size = features.size(0) #generate initial hidden state h, c = self.init_hidden_state(features) # create the initial sentence with starting token word = torch.tensor(vocab.vocabulary['']).view(1, -1).to(device) embeds = self.embedding(word) #loop for iterating over the maximum sentence lenght to be generated for i in range(max_len): #given the image to attention model it returns the context vector alpha, context = self.attention(features, h) #storing the alphas score for loss function alphas.append(alpha.cpu().detach().numpy()) #generating the next word lstm_input = torch.cat((embeds[:, 0], context), dim=1) h, c = self.lstm_cell(lstm_input, (h, c)) output = self.decoder_out(self.dropout(h)) output = output.view(batch_size, -1) # select the word with highest value index_of_predicted_word = output.argmax(dim=1) # save the generated word into a list captions.append(index_of_predicted_word.item()) # check to stop generation if it predicted if index_of_predicted_word.item() == 2: # 2 is the index of in the vocabulary break # send back the generated word as the next caption embeds = self.embedding(index_of_predicted_word.unsqueeze(0)) # covert the index of tokens into words return [vocab.get_key(idx) for idx in captions], alphas def init_hidden_state(self, encoder_out): """ Method used for initial state for the models Args: encoder_out: this is the output of our encoder which we used it here to make an initial state, it is a tensor of dimension (batch_size, num_pixels, encoder_dim) Return: h: hidden state with the dimension equal to decoder dimension c: cell state with output dimension size of decoder """ #get the mean of encoder output units mean_encoder_out = encoder_out.mean(dim=1) #get the hidden and cell state by means of a linear transformation from encoder dim to decoder dim h = self.init_h(mean_encoder_out) c = self.init_c(mean_encoder_out) return h, c class Encoder_Decoder(nn.Module): """ class that create the main model """ def __init__(self, embed_size, pretrained_embed, vocab_size, attention_dim, encoder_dim, decoder_dim): super().__init__() self.encoder = Encoder_CNN() self.decoder = Decoder_LSTM( embed_size=embed_size, pretrained_embed = pretrained_embed, vocab_size=vocab_size, attention_dim=attention_dim, encoder_dim=encoder_dim, decoder_dim=decoder_dim, ) #define the loss function to be used self.criterion = nn.CrossEntropyLoss() self.train_accuracies = None self.valid_accuracies = None def forward(self, images, captions, caption_len): #feed the images to the encoder in order to get the features vector features = self.encoder(images) #feed images and captions with their lenght to the decoder outputs, alphas, 
captions, seq_length = self.decoder(features, captions, caption_len) return outputs, alphas, captions, seq_length def Train_model(self, train_set, valid_set, num_epochs, learning_rate, resume=None): """main method used to train the network""" #set the optimizer optimizer = optim.Adam(model.parameters(), lr=learning_rate) #ensuring that the model is on training mode model.train() #check if we are going to resume the previously training state or not if resume: print("Resuming last model...") start_epoch = model.resume('attention_model_state.pth', optimizer) else: start_epoch=0 #initialize some parameterz to be used during training vocab_size = len(_data_set.vocab) epoch_loss = 100 epoch_acc = 0 best_val_acc = -1 # the best accuracy computed on the validation data self.train_accuracies = np.zeros(num_epochs) self.valid_accuracies = np.zeros(num_epochs) #loop over the epoches for epoch in range(start_epoch, num_epochs): train_tot_acc = 0 #total accuracy computed on training set train_tot_loss = 0 #total loss computed on training set num_batch = 0 for idx, (image, captions, caption_len) in enumerate(iter(train_set)): image, captions, caption_len = image.to(device), captions.to(device), caption_len.to(device) # Zero the gradients. optimizer.zero_grad() # Feed forward the data to the main model outputs, alphas, captions, seq_length = model(image, captions, caption_len) targets = captions[:, 1:] #skip the start token () #skip the padded sequences outputs = pack_padded_sequence(outputs, seq_length.cpu().numpy(), batch_first=True) targets= pack_padded_sequence(targets, seq_length.cpu().numpy(), batch_first=True) #compute the accuracy of the model acc = model.__performance(outputs, targets) #compute the loss loss = model.__loss(outputs, targets) #try to minimize the difference between 1 and the sum of a pixel's weights across all timesteps loss += 1. * ((1. - alphas.sum(dim=1)) ** 2).mean() #compute backward. loss.backward() #Update the parameters. 
optimizer.step() print("-train-minibatch: {} loss: {:.5f}".format(idx+1, loss.item())) train_tot_acc += acc train_tot_loss += loss num_batch += 1 #try to plot an image with its corresponding generated captions on training set model.Test_and_Plot(train_set) #compute trainin loss and accuracy (average) train_avg_acc = train_tot_acc/num_batch train_avg_loss = train_tot_loss/num_batch print("Average ---> acc: {:.2f} loss: {:.5f}".format(train_avg_acc, train_avg_loss)) print("---------------------------------------------") #evaluate the current model on validation set valid_avg_acc, valid_avg_loss = model.Evaluate_model(valid_set) print("Average ---> acc: {:.2f} loss: {:.5f}".format(valid_avg_acc, valid_avg_loss)) print("---------------------------------------------") #save the current model if it is the best if valid_avg_acc > best_val_acc: best_val_acc = valid_avg_acc best_epoch = epoch+1 model.save(model, optimizer, epoch) self.train_accuracies[epoch] = train_avg_acc self.valid_accuracies[epoch] = valid_avg_acc print(("Epoch={}/{}: Tr_loss:{:.5f} Tr_acc:{:.2f} Va_acc:{:.2f}" + (" ---> **BEST**" if best_epoch == epoch + 1 else "")) .format(epoch+1, num_epochs, train_avg_loss, train_avg_acc, valid_avg_acc)) print("---------------------------------------------") #save the final model torch.save(model, 'attention_model.pth') def Evaluate_model(self, valid_set): #set the model on evaluating mode model.eval() #set some initial parameteres tot_acc = 0 tot_loss = 0 num_batch = 0 for idx, (image, captions, caption_len) in enumerate(iter(valid_set)): image, captions, caption_len = image.to(device), captions.to(device), caption_len.to(device) #call the main model to generate the captions outputs, alphas, captions, seq_length = model(image, captions, caption_len) targets = captions[:, 1:] #skip the first token (SOS) #skip the padded sequences outputs = pack_padded_sequence(outputs, seq_length.cpu().numpy(), batch_first=True) targets = pack_padded_sequence(targets, seq_length.cpu().numpy(), batch_first=True) #compute the accuracy and loss acc = model.__performance(outputs, targets) loss = model.__loss(outputs, targets) loss += 1. * ((1. 
- alphas.sum(dim=1)) ** 2).mean() #update parameters used during evaluation tot_acc += acc tot_loss += loss num_batch +=1 print("-valid-minibatch: {} loss: {:.5f}".format(idx+1, loss.item())) #generate a caption for an image from validation set to show the accuracy model.Test_and_Plot(valid_set) #compute the accuracy and loss of validation set (average) avg_acc = tot_acc/num_batch avg_loss = tot_loss/num_batch model.train() #set the model back to training mode return avg_acc, avg_loss def Test_and_Plot(self, test_data, attention=None): """Method used to plot the image with its corresponding generated caption """ model.eval() with torch.no_grad(): dataiter = iter(test_data) img, _, _ = next(dataiter) features = model.encoder(img[0:1].to(device)) caps, alphas = model.decoder.CapGenerator(features, vocab=_data_set.vocab) caption = ' '.join(caps) show_image(img[0], title=caption) if attention: plot_attention(img[0], caps, alphas) def __loss(self, outputs, targets): """ function to be used for computing the loss """ loss = self.criterion(outputs.data, targets.data) return loss def __performance(self, outputs, targets): """function to be used for computing the performance of the model """ #returns the index of the word with the highst value highest_indices = outputs.data.argmax(dim=1) highest_indices = highest_indices.reshape(-1, 1) #check if the predicted output is equal to the targets word_correct = highest_indices.eq(targets.data.view(-1,1)) seq_correct = word_correct.float().sum() #compute the batch accuracy acc = seq_correct.item() * (100.0/targets.data.shape[0]) return acc def save(self, model, optimizer, num_epochs): """save the model""" checkpoint = { 'num_epochs':num_epochs, 'optimizer': optimizer.state_dict(), 'model_state':model.state_dict() } torch.save(checkpoint,'attention_model_state.pth') return def resume(self, checkpoint, optimizer): """resume the model""" checkpoint = torch.load(checkpoint, map_location=device) start_epoch = checkpoint['num_epochs'] model.load_state_dict(checkpoint['model_state']) optimizer.load_state_dict(checkpoint['optimizer']) return start_epoch def plot_attention(img, result, attention_plot): # recover the original image from transformed image img[0] = img[0] * 0.229 img[1] = img[1] * 0.224 img[2] = img[2] * 0.225 img[0] += 0.485 img[1] += 0.456 img[2] += 0.406 img = img.numpy().transpose((1, 2, 0)) temp_image = img fig = plt.figure(figsize=(15, 15)) len_result = len(result) for l in range(len_result): temp_att = attention_plot[l].reshape(7, 7) ax = fig.add_subplot(len_result // 2, len_result // 2, l + 1) ax.set_title(result[l]) img = ax.imshow(temp_image) ax.imshow(temp_att, cmap='gray', alpha=0.7, extent=img.get_extent()) plt.tight_layout() plt.show() def show_image(img, title=None): """Imshow for Tensor.""" # unnormalize img[0] = img[0] * 0.229 img[1] = img[1] * 0.224 img[2] = img[2] * 0.225 img[0] += 0.485 img[1] += 0.456 img[2] += 0.406 img = img.numpy().transpose((1, 2, 0)) plt.imshow(img) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated def save_acc_graph(train_accs, valid_accs): """Plot the accuracies of the training and validation data computed during the training stage. Args: train_accs,valid_accs: the arrays with the training and validation accuracies (same length). 
""" plt.figure().clear() plt.clf() plt.close() plt.figure() plt.plot(train_accs, label='Training Data') plt.plot(valid_accs, label='Validation Data') plt.ylabel('Accuracy %') plt.xlabel('Epochs') plt.ylim((0, 100)) plt.legend(loc='lower right') plt.savefig('training_stage.pdf') plt.figure().clear() plt.close() plt.clf() def parse_command_line_arguments(): """Parse command line arguments and checking their values""" parser = argparse.ArgumentParser(description='') parser.add_argument('mode', type=str, choices=['train', 'eval', 'test'], help='train or evaluate or test the model') parser.add_argument('--resume', type=str, default=None, help='resume from previouse training phase( 1:Yes or 0:No) default: 0') parser.add_argument('data_location', type=str, help='define training_set or test_set directory') parser.add_argument('--batch_size', type=int, default=32, help='mini-batch size (default: 32)') parser.add_argument('--epochs', type=int, default=50, help='number of training epochs (default: 10)') parser.add_argument('--learning_rate', type=float, default=3e-4, help='learning rate (Adam) (default: 3e-4)') parser.add_argument('--workers', type=int, default=1, help='number of working units used to load the data (default: 0)') parser.add_argument('--freq_threshold', type=int, default=3, help='threshold for word frequencies (default: 1)') parser.add_argument('--randomize', type=str, default=None, help='shuffling the data set before splitting (1:Yes or 0:No) default: 1 ') parser.add_argument('--preprocess', type=str, default=None, help='choose a customize preprocess {default or custom} default: default ') parser.add_argument('--splits', type=str, default='0.04-0.008-0.008', help='fraction of data to be used in train set and val set (default: 0.7-0.3)') parser.add_argument('--glove_embeddings', type=str, default=None, help='pre-trained embeddings file will be loaded (default: None)') parser.add_argument('--embed_size', type=int, default=256, help='word embedding size (default: 128)') parser.add_argument('--attention_dim', type=int, default=256, help='input dimension of attention model (default: 256)') parser.add_argument('--encoder_dim', type=int, default=2048, help='input dimension of encoder model (default: 2048)') parser.add_argument('--decoder_dim', type=int, default=512, help='input dimension of decoder model (default: 512)') parser.add_argument('--device', default='gpu', type=str, help='device to be used for computations {cpu, gpu} default: gpu') parsed_arguments = parser.parse_args(['train', '/content/drive/MyDrive/Image Caption with Attention/flickr8k']) #converting split fraction string to a list of floating point values ('0.7-0.15-0.15' => [0.7, 0.15, 0.15]) splits_string = str(parsed_arguments.splits) fractions_string = splits_string.split('-') if len(fractions_string) != 3: raise ValueError("Invalid split fractions were provided. Required format (example): 0.7-0.15-0.15") else: splits = [] frac_sum = 0. for fraction in fractions_string: try: splits.append(float(fraction)) frac_sum += splits[-1] except ValueError: raise ValueError("Invalid split fractions were provided. Required format (example): 0.7-0.15-0.15") if frac_sum > 1.0 or frac_sum < 0.0: raise ValueError("Invalid split fractions were provided. 
They must sum to 1.") # updating the 'splits' argument parsed_arguments.splits = splits return parsed_arguments """ STARTING FROM HERE """ if __name__ == "__main__": args = parse_command_line_arguments() print("\n-------------------------------") for k, v in args.__dict__.items(): print(k + '=' + str(v)) # *** TRAINING *** # if you have choosed 'train' then it will start to train the model from here. print("-------------------------------") if args.mode == 'train': if args.device == 'gpu': if torch.cuda.is_available(): device = torch.device("cuda") print(f'There are {torch.cuda.device_count()} GPU(s) available.') print('Using device:', torch.cuda.get_device_name(0)) else: print('GPU is not availabe!! Using device: CPU') device = torch.device("cpu") else: print('Using device: CPU') device = torch.device("cpu") #create the dataset object _data_set = CapDataset(path=args.data_location + "/Images", captions_file=args.data_location + "/captions.txt", preprocess_custom=args.preprocess) #check if you need to randomize the data or not if args.randomize: _data_set.randomize_data() #split the data into three parts: [_train_set, _val_set, _test_set] = _data_set.create_splits(args.splits) #check whether using pretrained embeddings or not if args.glove_embeddings is not None: _pretrained_embed, embed_size = _data_set.vocab.load_embedding_file(args.glove_embeddings) print("Loading embeddings: DONE") print("Embeddding_size= ", embed_size) print("-------------------------------") else: _pretrained_embed = None #create dataloader for training set _train_set = DataLoader(dataset=_train_set, batch_size=args.batch_size, num_workers=args.workers, shuffle=True, collate_fn=_data_set.CapCollate) #create dataloader for validation set _val_set = DataLoader(dataset=_val_set, batch_size=args.batch_size, num_workers=args.workers, shuffle=True, collate_fn=_data_set.CapCollate) #create dataloader for test set _test_set = DataLoader(dataset=_test_set, batch_size=args.batch_size, num_workers=args.workers, shuffle=True, collate_fn=_data_set.CapCollate) #create the model model = Encoder_Decoder( embed_size=args.embed_size, pretrained_embed = _pretrained_embed, vocab_size=len(_data_set.vocab), attention_dim=args.attention_dim, encoder_dim=args.encoder_dim, decoder_dim=args.decoder_dim, ).to(device) #starting to train the model print("-------------------------------") print("\nTraining The Model...") model.Train_model(_train_set, _val_set, args.epochs, args.learning_rate, args.resume) save_acc_graph(model.train_accuracies, model.valid_accuracies) #starting to evaluate the model (on trainin-set, validation-set, test-set) print("-------------------------------") print("\nEvaluating The Model...") print("\n-On training_set...") train_acc, train_loss = model.Evaluate_model(_train_set) print("Average On Training ---> acc:{:.2f} loss:{:.5f}".format(train_acc, train_loss)) print("\n-On validation_set...") val_acc, val_loss = model.Evaluate_model(_val_set) print("Average On Validation ---> acc:{:.2f} loss:{:.5f}".format(val_acc, val_loss)) print("\n-On test_set...") test_acc, test_loss = model.Evaluate_model(_test_set) print("Average On Test ---> acc:{:.2f} loss:{:.5f}".format(test_acc, test_loss)) # *** EVALUATE *** # if you have choosed 'test' then it will evaluate the model starting from here: elif args.mode == 'eval': print("\nEvaluating The Model...") if args.device == 'gpu': if torch.cuda.is_available(): device = torch.device("cuda") print(f'There are {torch.cuda.device_count()} GPU(s) available.') print('Using device::', 
torch.cuda.get_device_name(0)) else: print('GPU is not availabe!! Using device: CPU') device = torch.device("cpu") else: print('Using device: CPU') device = torch.device("cpu") #create the dataset object _data_set = CapDataset(path=args.data_location + "/Images", captions_file=args.data_location + "/captions.txt", empty_dataset=True, eval=True) #create dataloader for the data set to be validate the model _val_set = DataLoader(dataset=_data_set, batch_size=args.batch_size, num_workers=args.workers, shuffle=False, collate_fn=_data_set.CapCollate) # check whether using pretrained embeddings or not if args.glove_embeddings is not None: _pretrained_embed, embed_size = _data_set.vocab.load_embedding_file(args.glove_embeddings) print("Loading embeddings: DONE") print("Embeddding_size= ", embed_size) print("-------------------------------") else: _pretrained_embed = None #create the initial model model = Encoder_Decoder( embed_size=args.embed_size, pretrained_embed = _pretrained_embed, vocab_size=len(_data_set.vocab), attention_dim=args.attention_dim, encoder_dim=args.encoder_dim, decoder_dim=args.decoder_dim ).to(device) #check the path to load the model in which you want to evaluate it if not os.path.exists('attention_model.pth'): raise ValueError("There is no 'attention_model.pth' file!!!") else: print("-------------------------------") print("loading the model...") #load the model model = torch.load('attention_model.pth', map_location=device) #start to evaluate the model with the corresponding dataset val_acc, val_loss = model.Evaluate_model(_val_set) print("Average On Validation ---> acc:{:.2f} loss:{:.5f}".format(val_acc, val_loss)) # *** TEST *** # If you have choosed 'test' then it will only test the model starting from here. elif args.mode == 'test': print("\nTesting The Model...") if args.device == 'gpu': if torch.cuda.is_available(): device = torch.device("cuda") print(f'There are {torch.cuda.device_count()} GPU(s) available.') print('Using device:', torch.cuda.get_device_name(0)) else: print('GPU is not availabe!! 
Using device: CPU') device = torch.device("cpu") else: print('Using device: CPU') device = torch.device("cpu") #create the dataset object _data_set = CapDataset(path=args.data_location + "/Images", captions_file=args.data_location + "/captions.txt", empty_dataset=True, test=True) #create dataloader for the test set _test_dataset = DataLoader(dataset=_data_set, batch_size=1, num_workers=1, shuffle=False, collate_fn=_data_set.CapCollate) # check whether using pretrained embeddings or not if args.glove_embeddings is not None: _pretrained_embed, embed_size = _data_set.vocab.load_embedding_file(args.glove_embeddings) print("Loading embeddings: DONE") print("Embeddding_size= ", embed_size) print("-------------------------------") else: _pretrained_embed = None #create the model object model = Encoder_Decoder( embed_size=args.embed_size, pretrained_embed = _pretrained_embed, vocab_size=len(_data_set.vocab), attention_dim=args.attention_dim, encoder_dim=args.encoder_dim, decoder_dim=args.decoder_dim ).to(device) #check whether the model exists in the path or not if not os.path.exists('attention_model.pth'): raise ValueError("There is no 'attention_model.pth' file!!!") print("-------------------------------") model = torch.load('attention_model.pth', map_location=device) #set the model to be in evaluation mode model.eval() #iterate over the test images that you provided dataiter = iter(_test_dataset) for i in range(0, len(_data_set)): model.Test_and_Plot(dataiter, attention=True) else: raise ValueError("You must specify the operation you need!!! ('train', 'eval', 'test'") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random N = 95277 x = 0.435999 s = 'TCCATATG' AT, GC = 0, 0 for res in s: if res == 'A' or res == 'T': AT += 1 elif res == 'G' or res == 'C': GC += 1 s_prob = (((1 - x)/2)**AT) * ((x/2) ** GC) prob = 1 - (1 - s_prob) ** N prob # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + """Mainly Edited for private usage by: License: BSD 3 clause """ import time start_time = time.time() from copy import deepcopy, copy import math import scipy.io as sio import shutil import os from random import shuffle import numpy as np from pylab import * # from featext2 import * import matplotlib.pyplot as plt # %matplotlib inline #matplotlib qt # inline (suitable for ipython only, shown inside browser!) or qt (suitable in general, shown in external window!) 
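# --- Editor's illustrative sketch (not part of the original source) ----------
# The short probability snippet above (N = 95277, x = 0.435999, s = 'TCCATATG')
# computes P(at least one of N random strings with GC-content x equals the
# motif s), i.e. p = ((1 - x)/2)**#AT * (x/2)**#GC and P = 1 - (1 - p)**N.
# A small reusable version of the same calculation; the function name is
# hypothetical.
def motif_match_probability(n_strings, gc_content, motif):
    """Probability that at least one of n_strings random strings equals motif."""
    at = sum(1 for base in motif if base in 'AT')
    gc = sum(1 for base in motif if base in 'GC')
    p_single = ((1.0 - gc_content) / 2.0) ** at * (gc_content / 2.0) ** gc
    return 1.0 - (1.0 - p_single) ** n_strings

print(motif_match_probability(95277, 0.435999, 'TCCATATG'))
# ------------------------------------------------------------------------------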
from matplotlib.colors import ListedColormap from mpl_toolkits.mplot3d import Axes3D #, axes3d from sklearn.preprocessing import StandardScaler, MinMaxScaler, normalize from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.neural_network import MLPClassifier from sklearn.feature_selection import SelectFromModel, SelectKBest, mutual_info_classif from sklearn.pipeline import Pipeline from sklearn.metrics import classification_report, confusion_matrix from collections import OrderedDict import re import datetime import urllib import tarfile # import joblib # from joblib import Parallel, delayed, Memory from tempfile import mkdtemp import copy_reg import types import itertools from itertools import compress from collections import Counter import glob #import multiprocessing def _pickle_method(m): if m.im_self is None: return getattr, (m.im_class, m.im_func.func_name) else: return getattr, (m.im_self, m.im_func.func_name) copy_reg.pickle(types.MethodType, _pickle_method) h = .2 # step size in the mesh window = 1024 # - ############ Feature Names ############ """features: || if |--> time domain : || samples = 1024 |----|---> phinyomark : 11+3{shist} --------------------------> = 14+0.0samples || 14 |----|---> golz : 10+samples{acrol} --------------------> = 10+1.0samples || 1034 |--> frequency domain : |----|---> phinyomark : 3{arco}+4{mf}+2(samples/2+1){RF,IF} --> = 9+1.0samples || 1033 |----|---> golz : 2(samples/2+1){AF,PF} ----------------> = 2+1.0samples || 1026 |----|----------------|-------alltogether---------------------> = 35+3.0samples || numfeat = 3107 """ ## Time Domain Phinyomark feats featnames = ['intsgnl', 'meanabs', 'meanabsslp', 'ssi', 'var', 'rms', 'rng', 'wavl', 'zerox', 'ssc', 'wamp', 'shist1', 'shist2', 'shist3'] # 11+3{shist} ## Frequency Domain Phinyomark feats featnames += ['arco1', 'arco2', 'arco3', 'mnf', 'mdf', 'mmnf', 'mmdf'] # 3{arco}+4{mf} featnames += ['reFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{RF} featnames += ['imFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{IF} ## Time Domain Golz feats featnames += ['meanv', 'stdr', 'mx', 'rngx', 'rngy', 'med', 'hjorth', 'sentr', 'se', 'ssk'] # 10 featnames += ['acrol{:04d}'.format(i) for i in range(window)] # samples{acrol} ## Frequency Domain Golz feats featnames += ['amFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{AF} featnames += ['phFFT{:03d}'.format(i) for i in range(window/2+1)] # samples/2+1{PF} ############ Prepare the indeces for each feature ############ def get_feat_id(feat_ind, printit=0, sample_window=window): """Find the corresponding indeces of the desired features inside feature vector, and link them with their names and level of abstraction -> feat_ind : range of indeces -> printit : print output indeces (1) or not (0) -> sample_window : parameter for accurate computation of feature indeces <- full_path_id : indeces of all features <- norm_time_feats : indeces of time features <- norm_freq_feats : indeces of frequency features """ # get the feat inds wrt their source : 3rd level norm_time_phin = range(0,14) norm_freq_phin = range(norm_time_phin[-1] + 1, norm_time_phin[-1] + 9 + sample_window + 1) norm_time_golz = range(norm_freq_phin[-1] + 1, norm_freq_phin[-1] + 10 + sample_window + 1) norm_freq_golz = range(norm_time_golz[-1] + 1, norm_time_golz[-1] + 2 + sample_window + 1) # get the feat inds wrt their domain : 2nd level norm_time_feats = norm_time_phin + norm_time_golz norm_freq_feats = norm_freq_phin + norm_freq_golz 
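# --- Editor's illustrative aside (not part of the original source) -----------
# A quick sanity check of the feature-count bookkeeping documented in the
# featnames comment above, assuming the stated time/freq x phinyomark/golz
# layout and window = 1024 samples; expected_feature_count is a hypothetical
# helper, not part of the original pipeline.
def expected_feature_count(samples=1024):
    time_phin = 14              # 11 + 3 {shist}
    freq_phin = 9 + samples     # 3 {arco} + 4 {mf} + 2*(samples/2 + 1) {RF, IF}
    time_golz = 10 + samples    # 10 + samples {acrol}
    freq_golz = 2 + samples     # 2*(samples/2 + 1) {AF, PF}
    return time_phin + freq_phin + time_golz + freq_golz

assert expected_feature_count(1024) == 35 + 3 * 1024 == 3107
# ------------------------------------------------------------------------------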
# get the feat inds wrt their prefeat: 1st level norm_feats = norm_time_feats + norm_freq_feats # get the feat inds wrt their source : 3rd level disp = norm_feats[-1]+1 ftfn_time_phin = range(disp ,disp + 14) ftfn_freq_phin = range(ftfn_time_phin[-1] + 1, ftfn_time_phin[-1] + 9 + sample_window + 1) ftfn_time_golz = range(ftfn_freq_phin[-1] + 1, ftfn_freq_phin[-1] + 10 + sample_window + 1) ftfn_freq_golz = range(ftfn_time_golz[-1] + 1, ftfn_time_golz[-1] + 2 + sample_window + 1) # get the feat inds wrt their domain : 2nd level ftfn_time_feats = ftfn_time_phin + ftfn_time_golz ftfn_freq_feats = ftfn_freq_phin + ftfn_freq_golz # get the feat inds wrt their prefeat: 1st level ftfn_feats = ftfn_time_feats + ftfn_freq_feats # create the final "reference dictionary" # 3 np.arrays, id_list[0] = level 1 etc id_list = [np.zeros((len(ftfn_feats + norm_feats),1)) for i in range(3)] id_list[0][:norm_feats[-1]+1] = 0 # 0 signifies norm / 1 signifies ft/fn id_list[0][norm_feats[-1]+1:] = 1 id_list[1][:norm_time_phin[-1]+1] = 0 # 0 signifies time / 1 signifies freq id_list[1][norm_time_phin[-1]+1:norm_freq_phin[-1]+1] = 1 id_list[1][norm_freq_phin[-1]+1:norm_time_golz[-1]+1] = 0 id_list[1][norm_time_golz[-1]+1:norm_freq_golz[-1]+1] = 1 id_list[1][norm_freq_golz[-1]+1:ftfn_time_phin[-1]+1] = 0 id_list[1][ftfn_time_phin[-1]+1:ftfn_freq_phin[-1]+1] = 1 id_list[1][ftfn_freq_phin[-1]+1:ftfn_time_golz[-1]+1] = 0 id_list[1][ftfn_time_golz[-1]+1:] = 1 id_list[2][:norm_freq_phin[-1]+1] = 0 #0 signifies phinyomark / 1 signifies golz id_list[2][norm_freq_phin[-1]+1:norm_freq_golz[-1]+1] = 1 id_list[2][norm_freq_golz[-1]+1:ftfn_freq_phin[-1]+1] = 0 id_list[2][ftfn_freq_phin[-1]+1:] = 1 full_path_id = [np.zeros((len(feat_ind),5)) for i in range(len(feat_ind))] for ind, val in enumerate(feat_ind): full_path_id[ind] = [val, id_list[2][val], id_list[1][val], id_list[0][val]] if (printit==1): if(full_path_id[ind][1]==0): lvl3 = 'Phin' else: lvl3 = 'Golz' if(full_path_id[ind][2]==0): lvl2 = 'Time' else: lvl2 = 'Freq' if(full_path_id[ind][3]==0): lvl1 = 'Norm' else: lvl1 = 'Ft/Fn' print(feat_ind[ind],featnames[val%(norm_feats[-1]+1)],lvl3,lvl2,lvl1) return(full_path_id,norm_time_feats,norm_freq_feats) def subfeat_inds(ofs=len(featnames)): """returns a subfeatures' indeces -> ofs : number of features in total <- amfft, freq, time, both : split featureset indeces for amplitude of FFT, all time only, all frequency only and all features """ _,time,freq = get_feat_id(range(ofs)) both = range(ofs) amfft = [] for i in range(len(featnames)): if (featnames[i].startswith('amFFT')): amfft.append(i) return amfft, freq, time, both # + def get_tot_feats(fs, subfs, r): ############################################################################################################### # Version 2, using the bool masks and keeping an array of 6x3000 feats ############################################################################################################### # If checking for FnormAll, you end up with 36 models of (trained_on, tested_on) combinations but TECHNICALLY # the features are the same for every trained_on "sixplet" so there's no need to iterate over all the tested_on # indeces. 
Therefore, ts = 2 is chosen arbitrarily # filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_*.npz") filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_tr_" + str(0) + "_ts_" + str(5) + ".npz") print filenames # the features kept for surface i will be stored in bool_tot_feats[i] (final size: 6x1000) bool_tot_feats = [] best_tot_feats = [] best_tot_scores = [] for filn in filenames: # for every training surface model_file = np.load(filn) model = model_file['model'] #keep a list of the 1000 features kept model_feat_scores = model[0].named_steps['feature_selection'].scores_ bool_model_features = list(model[0].named_steps['feature_selection'].get_support(indices = False)) if subfs<=2: bool_model_features = np.logical_not(np.array(bool_model_features)) bool_model_features[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()] = True bool_model_features = bool_model_features.tolist() # plt.plot(model_feat_scores) bool_tot_feats.append(bool_model_features) best_tot_scores.append(np.array(model_feat_scores[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()])) best_tot_feats.append(np.array(model_feat_scores).argsort()[-1000:][::-1]) return bool_tot_feats, best_tot_feats, best_tot_scores # + def get_tot_feats_importance_pca(fs, subfs, r): ############################################################################################################### # Version 2, using the bool masks and keeping an array of 6x3000 feats ############################################################################################################### # If checking for FnormAll, you end up with 36 models of (trained_on, tested_on) combinations but TECHNICALLY # the features are the same for every trained_on "sixplet" so there's no need to iterate over all the tested_on # indeces. 
Therefore, ts = 2 is chosen arbitrarily # filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_*.npz") filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_tr_" + str(0) + "_ts_" + str(5) + ".npz") print filenames # the features kept for surface i will be stored in bool_tot_feats[i] (final size: 6x1000) bool_tot_feats = [] best_tot_feats = [] best_tot_scores = [] for filn in filenames: # for every training surface model_file = np.load(filn) model = model_file['model'][0] #keep a list of the 1000 features kept model_pca_var = model[0].named_steps['decomp'].explained_variance_ model_pca_var_rat = model[0].named_steps['decomp'].explained_variance_ratio_ model_pca_covar = model[0].named_steps['decomp'].get_covariance() model_pca_mean = model[0].named_steps['decomp'].mean_ n_comp = model[0].named_steps['decomp'].n_components_ comp = model[0].named_steps['decomp'].components_ print len(model_pca_var), model_pca_var print len(model_pca_var_rat), model_pca_var_rat print model_pca_covar.shape, model_pca_covar # plt.imshow(model_pca_covar) print len(model_pca_mean), model_pca_mean print n_comp print comp.shape, comp nfeat = 1000 feat_importance = np.zeros(nfeat) for nc in range(len(comp)): feat_importance += comp[nc]*model_pca_var_rat[nc] # plt.plot(range(1000),comp[nc]*model_pca_var_rat[nc]) plt.plot(range(nfeat),feat_importance/nfeat) print feat_importance*nfeat sort_feat_imp_ind = np.array(feat_importance).argsort()[:][::-1] print np.array(featnames)[sort_feat_imp_ind] model_feat_scores = model[0].named_steps['feature_selection'].scores_ bool_model_features = list(model[0].named_steps['feature_selection'].get_support(indices = False)) if subfs<=2: bool_model_features = np.logical_not(np.array(bool_model_features)) bool_model_features[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()] = True bool_model_features = bool_model_features.tolist() # plt.plot(model_feat_scores) bool_tot_feats.append(bool_model_features) best_tot_scores.append(np.array(model_feat_scores[np.array(model_feat_scores).argsort()[-1000:][::-1].tolist()])) best_tot_feats.append(np.array(model_feat_scores).argsort()[-1000:][::-1]) return bool_tot_feats, best_tot_feats, best_tot_scores # - def freq_time_counter(full_names, subfs): f_c = 0; t_c = 0 for i in range(len(full_names)): if full_names[i][2] == 1: if subfs!=2: f_c += 1 else: if subfs>=2: t_c += 1 return (f_c, t_c) def get_common_feats(bool_tot_feats, subfs, skip_surf = 6, print_common_feats = 0): # skip_surf = 6 by default so you won't skip any surfaces. 
# returns the list of inds for the common feats trans_test_bools = [] for i in range(len(bool_tot_feats)): if i != skip_surf: trans_test_bools.append(bool_tot_feats[i]) else: continue trans_test_bools = np.transpose(trans_test_bools) common_feats = [] matches = [] for i in range(len(trans_test_bools)): matches.append(np.all(trans_test_bools[i])) for ind, val in enumerate(matches): if val: common_feats.append(ind) print("===============================================================") print("%d common feats, out of %d total" %(len(common_feats),len(matches))) full_names, _, _ = get_feat_id(common_feats, printit = print_common_feats) freq_counter, time_counter = freq_time_counter(full_names, subfs) print("of which, %d (%.2f%%) were Freq features and %d (%.2f%%) were Time features" %(freq_counter, (float(freq_counter)/len(common_feats))*100, time_counter, (float(time_counter)/len(common_feats))*100 )) print("===============================================================") return common_feats, full_names def get_inv_pca_feat_imp(r, fs, subfs, commonfeats): # filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_*.npz") filenames = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_tr_" + str(0) + "_ts_" + str(5) + ".npz") print filenames feat_imp = np.zeros(len(commonfeats)) for filn in filenames: # for every training surface model_file = np.load(filn) # get the corresponding model model = model_file['model'][0] # get the pca pca = model.named_steps['decomp'] # get from the inverse pca the importance of each feature invpca = pca.inverse_transform(np.eye(20)) # find the corresponding indexes returned by feature selection step model_feat_ind = list(model.named_steps['feature_selection'].get_support(indices = True)) # use the commonfeats indexes to find the corresponding index inside model_feat_ind to reference invpca # and add its importance to feat_imp correctly for ci in range(len(commonfeats)): curr_ind = model_feat_ind.index(commonfeats[ci]) feat_imp[ci] += np.mean(invpca[:,curr_ind]) return feat_imp # + ### Example fs=0 subfs=3 for r in range(1,2): tot_feats, best_tot_feats, best_tot_scores = get_tot_feats(fs=0, subfs=subfs, r=r) common_feats, full_names = get_common_feats(bool_tot_feats=tot_feats, subfs=subfs, skip_surf=6, print_common_feats=0) fn = np.array(full_names) tmp = fn[:,0].astype(int).tolist() print np.array(featnames)[tmp] feat_imp = get_inv_pca_feat_imp(r,fs,subfs,common_feats) # print feat_imp plt.figure(figsize=(20,10)) feat_imp_sort_ind = np.argsort(feat_imp)[::-1] feat_imp_sort = np.sort(feat_imp)[::-1] plt.plot(feat_imp_sort) for t in range(len(feat_imp_sort_ind)): print t, np.array(featnames)[tmp[feat_imp_sort_ind[t]]], feat_imp_sort[t] # for j in range(0,len(best_tot_feats),1): # tmpj = best_tot_feats[j][:5] # print len(tmpj) # print j, best_tot_scores[j][:10] # print j, np.array(featnames)[tmpj] # - r, fs, subfs = 1, 0, 3 filn = glob.glob("data/results" + str(r) + "/fs_" + str(fs) + "_subfs_" + str(subfs) + "_*.npz")[0] print filn model_file = np.load(filn) model = model_file['model'] #keep a list of the 1000 features kept model_feat_scores = model[0].named_steps['feature_selection'].scores_ model_feat_scores = model[0].named_steps['feature_selection'].get_support(indices = True) model_pca_var = model[0].named_steps['decomp'].explained_variance_ model_pca_var_rat = model[0].named_steps['decomp'].explained_variance_ratio_ model_pca_covar = model[0].named_steps['decomp'].get_covariance() 
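# --- Editor's illustrative sketch (not part of the original source) ----------
# The importance score built in this cell weights each PCA loading by its
# component's explained-variance ratio and sums over components. A minimal
# standalone version of the same idea on random data; toy_X / toy_importance
# are hypothetical names used only for this sketch.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
toy_X = rng.rand(200, 10)
toy_pca = PCA(n_components=3).fit(toy_X)
# per input feature: sum over components of loading * explained-variance ratio
toy_importance = np.zeros(toy_X.shape[1])
for nc in range(toy_pca.n_components_):
    toy_importance += toy_pca.components_[nc] * toy_pca.explained_variance_ratio_[nc]
print(np.argsort(toy_importance)[::-1])  # features ranked by this score
# ------------------------------------------------------------------------------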
model_pca_mean = model[0].named_steps['decomp'].mean_ n_comp = model[0].named_steps['decomp'].n_components_ comp = model[0].named_steps['decomp'].components_ print len(model_pca_var), model_pca_var print len(model_pca_var_rat), model_pca_var_rat print model_pca_covar.shape, model_pca_covar # plt.imshow(model_pca_covar) print len(model_pca_mean), model_pca_mean print n_comp print comp.shape, comp nfeat = 1000 feat_importance = np.zeros(nfeat) for nc in range(len(comp)): feat_importance += comp[nc]*model_pca_var_rat[nc] # plt.plot(range(1000),comp[nc]*model_pca_var_rat[nc]) plt.plot(range(nfeat),feat_importance/nfeat) print feat_importance*nfeat sort_feat_imp_ind = np.array(feat_importance).argsort()[:][::-1] print np.array(featnames)[sort_feat_imp_ind] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 不适用于这里 # + import os import torch from pytorch_pretrained_bert import WEIGHTS_NAME, CONFIG_NAME from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction,BertForPreTraining,BertForQuestionAnswering # model=torch.load('./finetuned_lm/pytorch_model.bin') # model # parser.add_argument("--bert_model", default=None, type=str, required=True, # help="Bert pre-trained model selected in the list: bert-base-uncased, " # "bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, " # "bert-base-multilingual-cased, bert-base-chinese.") # tokenizer = BertTokenizer.from_pretrained('bert-base-cased') # #model = BertForPreTraining.from_pretrained('bert-base-chinese') # # model = BertForQuestionAnswering.from_pretrained('bert-base-chinese') # model = BertForPreTraining.from_pretrained('bert-base-cased') output_dir = "./data/model/" # Step 1: Save a model, configuration and vocabulary that you have fine-tuned # If we have a distributed model, save only the encapsulated model # (it was wrapped in PyTorch DistributedDataParallel or DataParallel) model_to_save = model.module if hasattr(model, 'module') else model # If we save using the predefined names, we can load using `from_pretrained` output_model_file = os.path.join(output_dir, WEIGHTS_NAME) output_config_file = os.path.join(output_dir, CONFIG_NAME) torch.save(model_to_save.state_dict(), output_model_file) model_to_save.config.to_json_file(output_config_file) tokenizer.save_vocabulary(output_dir) # Step 2: Re-load the saved model and vocabulary # # Example for a Bert model # model = BertForQuestionAnswering.from_pretrained(output_dir) # tokenizer = BertTokenizer.from_pretrained(output_dir, do_lower_case=args.do_lower_case) # Add specific options if needed # # Example for a GPT model # model = OpenAIGPTDoubleHeadsModel.from_pretrained(output_dir) # tokenizer = OpenAIGPTTokenizer.from_pretrained(output_dir) # - # # 第二种导出方式 # + import os import torch from pytorch_pretrained_bert import WEIGHTS_NAME, CONFIG_NAME from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction,BertForPreTraining,BertForQuestionAnswering tokenizer = BertTokenizer.from_pretrained('bert-base-cased') #model = BertForPreTraining.from_pretrained('bert-base-chinese') # model = BertForQuestionAnswering.from_pretrained('bert-base-chinese') model = BertForQuestionAnswering.from_pretrained('bert-base-cased') # + output_model_file = "./data/model/pytorch_model.bin" output_config_file = 
"./data/model/config.json" output_vocab_file = "./data/model/" # Step 1: Save a model, configuration and vocabulary that you have fine-tuned # If we have a distributed model, save only the encapsulated model # (it was wrapped in PyTorch DistributedDataParallel or DataParallel) model_to_save = model.module if hasattr(model, 'module') else model torch.save(model_to_save.state_dict(), output_model_file) model_to_save.config.to_json_file(output_config_file) tokenizer.save_vocabulary(output_vocab_file) # tokenizer.save_vocabulary(output_dir) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''ironhack'': conda)' # language: python # name: python38364bitironhackcondae22397c42f6b4fd29a9381c81ac70f68 # --- # + import pandas as pd import pymysql as sql import settings as env # + connection = sql.connect( host=env.HOST, user=env.USERNAME, password=, db=env.DB, cursorclass=sql.cursors.DictCursor ) cursor = connection.cursor() # - query = """ SELECT * FROM player_combat_stats LEFT JOIN player_flair_stats USING (account_id, champion, lane, won) LEFT JOIN player_laning_stats USING (account_id, champion, lane, won) LEFT JOIN player_objective_stats USING (account_id, champion, lane, won) WHERE lane = 'TOP'; """ cursor.execute(query) data = cursor.fetchall() pd.DataFrame(data) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # This is the demo to apply our AutoML system to AdultDataset # ## dataset_metric you can select and their ranges # 'mean_difference': [-1,1] close to 0
# 'statistical_parity_difference': [-1, 1], close to 0
# 'disparate_impact': > 0, the larger the better
# 'consistency': [0, 1], the larger the better
# ## classifier_metric (optimal range)
# "Statistical parity difference": [-0.1, 0.1]
# "Mean difference": [-0.1, 0.1]
# "Disparate impact": [0.8, 1.2]
# "Average odds difference": [-0.1, 0.1]
# "Equal opportunity difference": [-0.1, 0.1]
    # "Theil index": [0, 0.2] # + import warnings warnings.filterwarnings('ignore') import AutoML import pandas as pd import numpy as np from importlib import reload pd.set_option('display.max_columns', None) import matplotlib.pyplot as plt # %matplotlib inline df = pd.read_csv('mylsn_raw.csv') df = df.drop(['Unnamed: 0'], axis=1) train = df.sample(frac=0.5, random_state=200) test = df.drop(train.index) # set up the parameters for StandardDataset class input_columns = ['sex', 'gpa', 'lsat', 'lsat1']#,'school'] # school will generate too many columns label_name = 'status' favorable_classes = ['Ac'] protected_attribute_names = ['sex'] privileged_classes = [['Male','male']] privileged_groups = [{'sex':1}] unprivileged_groups = [{'sex': 0}] categorical_features = []#['school'] features_to_keep = ['sex', 'gpa', 'lsat']#,'school'] features_to_drop = [] is_valid = False reload(AutoML) model = AutoML.FairAutoML(input_columns, label_name, favorable_classes, protected_attribute_names, privileged_classes, privileged_groups, unprivileged_groups, categorical_features, features_to_keep, features_to_drop, is_valid) model.fit(train, dataset_metric="disparate_impact", dataset_metric_threshold=0.9, classifier_metric="Disparate impact", optim_options = None, time_left_for_this_task=200, per_run_time_limit=20, train_split_size=0.8, verbose=True) # - fig = model.plot() fig print(model.preproc_name) print('Best classification threshold: {}'.format(model.best_ultimate_thres)) print('Best balanced accuracy: {}'.format(model.best_acc)) metrics = model.evaluate(test) x = np.arange(2) y = [0.09, 0.08] fig, ax = plt.subplots() for i, v in enumerate(y): ax.text(i-0.1, y[i], y[i], fontsize=11, color='b') plt.bar(x,y, color='lightblue') plt.xlabel('Preprocessing Method for Law School Data') plt.ylabel('ABS(Mean Diff)') plt.axhline(y=0.1, color='red', linestyle='-') plt.annotate('0.10', (0.44,0.105),color = 'red') plt.xticks(x, ('DIR', 'LFR')) plt.ylim((0, 0.115)) plt.show() x = np.arange(2) y = [0.93, 0.99] fig, ax = plt.subplots() for i, v in enumerate(y): ax.text(i-0.1, y[i], y[i], fontsize=11, color='b') plt.bar(x,y, color='lightblue') plt.xlabel('Preprocessing Method for Law School Data') plt.ylabel('Disparate impact') plt.axhline(y=0.88, color='red', linestyle='-') plt.annotate('0.88', (0.44,0.9),color = 'red') plt.xticks(x, ('DIR', 'LFR')) plt.ylim((0, 1.1)) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Creating the main table for part 3 (from part 1 and 2) # + #import the libraries BeautifulSoup and io and pandas import bs4 as bs from bs4 import BeautifulSoup import io import urllib.request import pandas as pd #importing more libraries # Matplotlib and associated plotting modules import matplotlib.cm as cm import matplotlib.colors as colors # import k-means from clustering stage from sklearn.cluster import KMeans # Importing libraries and APIs to convert a neighbourhood and borough into latitude and longitude values and create dataframes import numpy as np # library to handle data in a vectorized manner import pandas as pd # library for data analsysis pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) import json # library for JSON files # !conda install -c conda-forge geopy --yes from geopy.geocoders import Nominatim # to get latitude and longitude values import requests # it handles requests 
from pandas.io.json import json_normalize # from JSON file to a pandas dataframe #Creating the source from which to scrape the data from the table called "List of postal codes of Canada: M" from thr wikipedia page source = urllib.request.urlopen('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').read() #Creating table object with BeautifulSoup soup=BeautifulSoup(source, 'lxml') table=soup.table #Scraping the columns' names from "List of postal codes of Canada: M" from thr wikipedia table_header=table.find_all('th') for th in table_header: headers=[] headers.append(th.text) # Creating a table with pandas that contains the data scraped from "List of postal codes of Canada: M" from thr wikipedia table_rows=table.find_all('tr') list_of_rows=[] for tr in table_rows: td=tr.find_all('td') row = [item.text.strip('\n') for item in td] list_of_rows.append(row) df=pd.DataFrame(list_of_rows, columns=["Postcode", "Borough", "Neighbourhood"]) df=df.iloc[1:] #Keeping the strings for the column 'Borough' that are different from 'Not assigned' df=df[df['Borough'] != 'Not assigned'] df=df.iloc[:].reset_index(drop=True) #The 'Not assigned' values in the column 'Neighbourhood' are overwritten with the corresponding string in the column 'Borough' for i, row in df.iterrows(): if df.loc[i,'Neighbourhood'] == 'Not assigned': df.loc[i,'Neighbourhood'] = df.loc[i, 'Borough'] #Rows of the dataframe are grouped by 'Postcode' and 'Postcode' and # the corresponding 'Neighbourhoods are combined into one row with the neighborhoods separated with a comma df_1 = df.groupby(['Postcode', 'Borough'])['Neighbourhood'].apply(lambda x: x.str.cat(sep=', ')) #Table from part 1 that I am going to use in this notebook (part2) df_final_part_1= pd.DataFrame(df_1).reset_index(drop=False) #Getting the geospatial data (longitude, latitude and postcode) in a dataframe (table) url="http://cocl.us/Geospatial_data" source_toronto=requests.get(url).content long_lat_postcode_df=pd.read_csv(io.StringIO(source_toronto.decode('utf-8'))) #Adding latitude and longitude to the dataframe obtained from part_1 Table_part_2 = pd.merge(df_final_part_1, long_lat_postcode_df, left_on='Postcode', right_on='Postal Code', how='left') #Dropping the duplicate column 'postal code' (it's like Postcode) #Final Table for Part 2 of the IBM project Table_part_2.drop(['Postal Code'], axis=1) # - #To have an idea of the number of neighbourhoods and boroughs in Toronto. print('Toronto has {} boroughs and {} neighbourhoods.'.format( len(Table_part_2['Borough'].unique()), len(Table_part_2['Neighbourhood'].unique()) ) ) #Toronto latitude and longitude using the geopy library address = 'Toronto' geolocator = Nominatim(user_agent="toronto_explorer") location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude)) # ### Visualizing neighborhoods on Toronto's map. 
# + # For Toronto I am using the same method used for New York and I also import Folium for map visualization of the # !pip install folium import folium # create map of Toronto using latitude and longitude values map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10) # add markers to map for lat, lng, borough, neighborhood in zip(Table_part_2['Latitude'], Table_part_2['Longitude'], Table_part_2['Borough'], Table_part_2['Neighbourhood']): label = '{}, {}'.format(neighborhood, borough) label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='yellow', fill=True, fill_color='green', fill_opacity=0.7, parse_html=False).add_to(map_toronto) map_toronto # - # ### Exploring Scarborough borough in Toronto with Foursquare API and segment them. # + CLIENT_ID = 'CZY' # my Foursquare ID CLIENT_SECRET = '' # my Foursquare Secret VERSION = '20180605' # Foursquare API version print('Your credentails:') print('CLIENT_ID: ' + CLIENT_ID) print('CLIENT_SECRET:' + CLIENT_SECRET) # - Table_part_2.loc[0, 'Borough'] #exploring the 1st borough of the dataframe Table_part_2 scarborough_df = Table_part_2[Table_part_2['Borough'] == 'Scarborough'].reset_index(drop=True) scarborough_df.head(2) # + borough_latitude = scarborough_df.loc[0, 'Latitude'] # neighborhood latitude value borough_longitude = scarborough_df.loc[0, 'Longitude'] # neighborhood longitude value borough_name = scarborough_df.loc[0, 'Borough'] # neighborhood name print('Latitude and longitude values of {} are {}, {}.'.format(borough_name, borough_latitude, borough_longitude)) # - # ### Creating a map of Scarborough #Scarborough latitude and longitude using the geopy library address = 'Scarborough, Toronto' geolocator = Nominatim(user_agent="scarborough_explorer") location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude print('The geograpical coordinate of Scarborough are {}, {}.'.format(latitude, longitude)) address_scarbour = 'Scarborough' latitude_scarbour = latitude longitude_scarbour = longitude print('The geograpical coordinate of Scarborough are {}, {}.'.format(latitude_scarbour, longitude_scarbour)) # + # create map of Toronto using latitude and longitude values map_scarbour = folium.Map(location=[latitude_scarbour, longitude_scarbour], zoom_start=10) # add markers to map for lat, lng, neighborhood in zip(scarborough_df['Latitude'], scarborough_df['Longitude'], scarborough_df['Neighbourhood']): label = '{}, {}'.format(neighborhood, borough) label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='yellow', fill=True, fill_color='green', fill_opacity=0.7, parse_html=False).add_to(map_scarbour) map_scarbour # - # #### Top 100 venues that are in Scarborough within a radius of 1000 meters #setting 100 venues and 500 meters radius LIMIT = 100 # limit of number of venues returned by Foursquare API radius = 500 # define radius # create URL url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( 'CZY', 'KBY2IB4DOTCOS2BJNT32P1WX2OXKNAO2WO3VVHUWHRDUIUMX', '20180605' , latitude_scarbour, longitude_scarbour, radius, LIMIT) url # display URL print(" Scarborough_latitude is: {}, Scarborough_longitude is: {}".format(latitude_scarbour, longitude_scarbour)) #Examining the results of the GET request results = requests.get(url).json() results # From Foursquare API function that extracts the category of the venue def get_category_type(row): 
try: categories_list = row['categories'] except: categories_list = row['venue.categories'] if len(categories_list) == 0: return None else: return categories_list[0]['name'] # ### Top 100 venues in Scarborough # + venues = results['response']['groups'][0]['items'] nearby_venues = json_normalize(venues) # flatten JSON # filter columns filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] nearby_venues =nearby_venues.loc[:, filtered_columns] # filter the category for each row nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1) # clean columns nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns] nearby_venues # - print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0])) #Function to get the nearby venues (radius=500 meters, names:neighborhood) def getNearbyVenues(names, latitudes, longitudes, radius=500): venues_list=[] for name, lat, lng in zip(names, latitudes, longitudes): print(name) # create the API request URL url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT) # make the GET request results = requests.get(url).json()["response"]['groups'][0]['items'] # return only relevant information for each nearby venue venues_list.append([( name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results]) nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list]) nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category'] return(nearby_venues) # ### Venues are in the Scarborough's neighborhoods #Which venues are in the Scarborough's neighborhoods (radius = 500 m)? 
The list of neighborhood is given by the getNearbyVenues function #applied to the scarborough_df dataframe scarborough_nearby_venues = getNearbyVenues(names=scarborough_df['Neighbourhood'], latitudes=scarborough_df['Latitude'], longitudes=scarborough_df['Longitude'] ) #Getting the evnues in Scarborough's nearby neighborhoods (radius 500 meters) scarborough_nearby_venue_df=pd.DataFrame(scarborough_nearby_venues) scarborough_nearby_venue_df #Checking the size of the resulting dataframe print(scarborough_nearby_venue_df.shape) # ### Exploring the venues' dataset for neighborhoods nearby Scarborough #Number of Venues for neighborhood scarborough_nearby_venue_df.groupby('Neighborhood').count() #Unique neighbourhoods in Scarborough scarborough_nearby_venue_df['Neighborhood'].unique().tolist() #Number of unique neighbourhoods in Scarborough print("The number of neighborhood in Scarborough, borough in Toronto is: {}.".format(len(scarborough_nearby_venue_df['Neighborhood'].unique().tolist()))) #Unique venue category in Scarborough scarborough_nearby_venue_df['Venue Category'].unique().tolist() #Number of unique neighbourhoods in Scarborough print("The number of unique venues in Scarborough's nearby neighborhoods (Toronto) is: {}.".format(len(scarborough_nearby_venue_df['Venue Category'].unique().tolist()))) #Number of venues' categories in Scarborough's neighborhoods most_frequent_venue_cat_scarborough=scarborough_nearby_venue_df.groupby(['Venue Category']).count() venue_count=scarborough_nearby_venue_df.groupby(['Venue Category']).count() venue_count=venue_count['Neighborhood'].tolist() most_frequent_venue_cat_scarborough['Venue_count']=venue_count venues_scar_count=most_frequent_venue_cat_scarborough.drop(['Neighborhood','Neighborhood Latitude','Neighborhood Longitude','Venue', 'Venue Latitude', 'Venue Longitude'], axis=1) venues_scar_count=venues_scar_count.reset_index(drop=False) venues_scar_count.rename(columns={'Venue Category':'Category'}, inplace=True) venues_scar_count # + # Visualising Venues category count in Scorborough in Toronto with a basic barplot. There are lots of food stores and bars!! 
import matplotlib.pyplot as plt import numpy as np bars = pd.Series(venues_scar_count.Category) height = pd.Series(venues_scar_count.Venue_count) y_pos = np.arange(len(bars)) plt.bar(y_pos, height) # If we have long labels, we cannot see it properly names=venues_scar_count.Category.tolist() plt.xticks(y_pos, names, rotation=90) # Thus we have to give more margin: plt.subplots_adjust(bottom=0.4) # It's the same concept if you need more space for your titles plt.title("Venues in Scorborough's neighborhoods, Toronto") plt.subplots_adjust(top=0.7) # - # ### Analyzing Each Neighborhood in Scarborough #Getting back to the scarborough's nearby neighborhoods venues dataset scarborough_nearby_venue_df=pd.DataFrame(scarborough_nearby_venues) scarborough_nearby_venue_df # + # one hot encoding scarb_onehot = pd.get_dummies(scarborough_nearby_venue_df[['Venue']], prefix="", prefix_sep="") # add neighborhood column back to dataframe scarb_onehot['Neighborhood'] = scarborough_nearby_venue_df['Neighborhood'] # move neighborhood column to the first column fixed_columns = [scarb_onehot.columns[-1]] + list(scarb_onehot.columns[:-1]) scarb_onehot = scarb_onehot[fixed_columns] scarb_onehot.head() # - # examining the new dataframe scarb_onehot size scarb_onehot.size # ### Grouping rows by neighborhood and by taking the mean of the frequency of occurrence of each category scarborough_grouped = scarb_onehot.groupby('Neighborhood').mean().reset_index() scarborough_grouped #checking the size scarborough_grouped.size # ### Ten top venues per neighborhood in Scarborough def return_most_common_venues(row, num_top_venues): row_categories = row.iloc[1:] row_categories_sorted = row_categories.sort_values(ascending=False) return row_categories_sorted.index.values[0:num_top_venues] # + num_top_venues = 10 indicators = ['st', 'nd', 'rd'] # create columns according to number of top venues columns = ['Neighborhood'] for ind in np.arange(num_top_venues): try: columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind])) except: columns.append('{}th Most Common Venue'.format(ind+1)) # create a new dataframe neighborhoods_venues_sorted = pd.DataFrame(columns=columns) neighborhoods_venues_sorted['Neighborhood'] = scarborough_grouped['Neighborhood'] for ind in np.arange(scarborough_grouped.shape[0]): neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(scarborough_grouped.iloc[ind, :], num_top_venues) neighborhoods_venues_sorted # - # ### Cluster and label neighborhoods. # #### Here I set number of clusters equal to five and so the neighborhood will be labelled with an integers value (labels) ranging from 0 to 4 (included) # + # set number of clusters kclusters = 5 scarborough_grouped_clustering = scarborough_grouped.drop('Neighborhood', 1) #dropping the column 'Neighbourhood' # run k-means clustering kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(scarborough_grouped_clustering) # check cluster labels generated for each row in the dataframe kmeans.labels_ print('kmeans_labels are the following array: {}, the number of labels are: {}'.format(kmeans.labels_, len(kmeans.labels_))) # - scarborough_df #This dataframe has 17 rows while the scarborough_grouped_clustering has only 16 neighborhoods_venues_sorted #This dataframe will ne joined to the scarborough_df but the number of rows is different. That's a problem. #Checking whether 'Upper Rouge is present in 'Neighbourhood in the neighborhoods_venues_sorted dataframe. It's NOT. 
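# A more general check -- the set difference lists every neighbourhood that dropped out of the grouped
# table (presumably because Foursquare returned no venues for it); a small illustrative addition:
set(scarborough_df['Neighbourhood']) - set(neighborhoods_venues_sorted['Neighborhood'])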
(neighborhoods_venues_sorted['Neighborhood']=='Upper Rouge').any() # + # 'Upper Rouge is NOT present in 'Neighbourhood' column in the neighborhoods_venues_sorted dataframe so I can remove it from #scarborough_df and join the two datasets that have now the same numbero of rows. scarborough_merged = scarborough_df.drop(16) # add clustering labels to the scarborough_nearby_venue_df scarborough_merged['Cluster Labels'] = kmeans.labels_ # merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood scarborough_merged = scarborough_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighbourhood') scarborough_merged # - #Getting the columns of the merged dataframe scarborough_merged.columns # ### Scarborough neighborhoods' five clusters visualised on Toronto's map # + # create map map_clusters = folium.Map(location = [latitude_scarbour, longitude_scarbour], zoom_start=11) # set color scheme for the clusters x = np.arange(kclusters) ys = [i+x+(i*x)**2 for i in range(kclusters)] colors_array = cm.rainbow(np.linspace(0, 1, len(ys))) rainbow = [colors.rgb2hex(i) for i in colors_array] # add markers to the map markers_colors = [] for lat, lon, poi, cluster in zip(scarborough_merged['Latitude'], scarborough_merged['Longitude'], scarborough_merged['Neighbourhood'], scarborough_merged['Cluster Labels']): label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True) folium.CircleMarker( [lat, lon], radius=5, popup=label, color=rainbow[cluster-1], fill=True, fill_color=rainbow[cluster-1], fill_opacity=0.7).add_to(map_clusters) map_clusters # - # ### Checking 1st cluster labelled 0. scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 0, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]] # ## Checking 2nd cluster labelled 1 scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 1, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]] # ## Checking 3rd cluster labelled 2 scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 2, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]] # ## Checking 4th cluster labelled 3 scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 3, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]] # ## Checking 5th cluster labelled 4 scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 4, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]] # **Reference** 'Segmenting and Clustering Neighborhoods in New York City' by IBM # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize = (16,9)) plt.rcParams['figure.dpi'] = 150 # + import matplotlib.pyplot as plt from sklearn.datasets import make_blobs # create dataset X, y = make_blobs( n_samples=150, n_features=2, centers=3, cluster_std=0.5, shuffle=True, random_state=0 ) # plot plt.scatter( X[:, 0], X[:, 1], edgecolor='black', s=50 ) plt.show() # + from sklearn.cluster import KMeans km = KMeans( n_clusters=3, init='random', n_init=1, max_iter=2, tol=1e-04, random_state=0 ) y_km = km.fit_predict(X) y_km # - plt.scatter(X[:,0], X[:,1], c=y_km, cmap=plt.cm.Paired, alpha=0.4) plt.scatter(km.cluster_centers_[:, 
0],km.cluster_centers_[:, 1], s=250, marker='*', label='centroids', edgecolor='black', c=[0,1,2],cmap=plt.cm.Paired,) XC= X.transpose() # nice alternative plt.scatter(XC[0], XC[1],c=y_km, cmap=plt.cm.Paired, alpha=0.4) # + # plot the 3 clusters #plt.grid() plt.scatter( X[y_km == 0, 0], X[y_km == 0, 1], s=50, c='lightgreen', label='cluster 1' ) plt.scatter( X[y_km == 1, 0], X[y_km == 1, 1], s=50, c='orange', marker='o', edgecolor='black', label='cluster 2' ) plt.scatter( X[y_km == 2, 0], X[y_km == 2, 1], s=50, c='lightblue', label='cluster 3' ) # plot the centroids plt.scatter( km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], s=250, marker='*', label='centroids' ) plt.legend(scatterpoints=1) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # [View in Colaboratory](https://colab.research.google.com/github/ckbjimmy/2018_mlw/blob/master/nb1_classification.ipynb) # + [markdown] id="acE7JrXS2ahZ" colab_type="text" # # Machine Learning for Clinical Predictive Analytics # # We would like to introduce basic machine learning techniques and toolkits for clinical knowledge discovery in the workshop. # The material will cover common useful algorithms for clinical prediction tasks, as well as the diagnostic workflow of applying machine learning to real-world problems. # We will use [Google colab](https://colab.research.google.com/) / python jupyter notebook and two datasets: # # - Breast Cancer Wisconsin (Diagnostic) Database, and # - pre-extracted ICU data from PhysioNet Database # # to build predictive models. # # The learning objectives of this workshop tutorial are: # # - Learn how to use Google colab / jupyter notebook # - Learn how to build machine learning models for clinical classification and/or clustering tasks # # To accelerate the progress without obstacles, we hope that the readers fulfill the following prerequisites: # # - [Skillset] basic python syntax # - [Requirements] Google account OR [anaconda](https://anaconda.org/anaconda/python) # # In part 1, we will go through the basic of machine learning for classification problems. # In part 2, we will investigate more on unsupervised learning methods for clustering and visualization. # In part 3, we will play with neural networks. # # # + [markdown] id="ZWiyY6ZanrTq" colab_type="text" # # Part I – Classification (supervised learning) # # In the first part of the workshop, we would like to explore how to utilize machine learning algorithms to approach clinical predictive analytic problems, specifically, the classification problem. # # After going through this tutorial, we hope that you will understand how to use python and scikit-learn to design and build simple machine learning models for classification problems and how to evaluate the performance and make it be interpretable. # # The tutorial is modified from Dr. 's version for 2017 workshop. # # In the beginning, we will install and import the packages needed for the workshop (we usually do this in python). # Then we will work on a well-structured breast cancer dataset, then move on to the real world PhysioNet dataset. # # In the first chunk, it will take few seconds to install and import packages we need in the tutorial. # Colab / jupyter notebook provides a convenient function that you can run the bash command by adding `!` before the command. e.g. 
`!pip install tensorflow` , `!ls`, `!clear`. # + id="2oc7_qvU2U1e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="800f99c4-6660-4041-b7b9-6094b1cceb5b" # try: # import psycopg2 # except: # # !pip install psycopg2 try: import pydotplus except: # !pip install pydotplus try: import graphviz except: # !apt-get install graphviz -y from __future__ import print_function import numpy as np import pandas as pd import sklearn import sys import datetime as dt import matplotlib import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties # for unicode fonts from collections import OrderedDict from sklearn import datasets # used to print out pretty pandas dataframes from IPython.display import display, HTML from sklearn.pipeline import Pipeline # used to impute mean for data and standardize for computational stability from sklearn.preprocessing import Imputer from sklearn.preprocessing import StandardScaler # logistic regression is our favourite model ever from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LogisticRegressionCV # l2 regularized regression from sklearn.linear_model import LassoCV from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.svm import LinearSVC from sklearn.svm import SVC from sklearn.calibration import CalibratedClassifierCV from sklearn.neural_network import MLPClassifier from sklearn import model_selection # used to calculate AUROC/accuracy from sklearn import metrics from sklearn.model_selection import cross_val_score # used to create confusion matrix from sklearn.metrics import confusion_matrix # %matplotlib inline # + [markdown] id="kdPWuNeStkxs" colab_type="text" # We also defined the `plot_model_pred_2d` function for plotting. # We will use this function later for visualizing the decision boundaries. # + id="_O8cVt7dztAa" colab_type="code" colab={} def plot_model_pred_2d(mdl, X, y, feat): # look at the regions in a 2d plot # based on scikit-learn tutorial plot_iris.html # get minimum and maximum values x0_min = X[:, 0].min() x0_max = X[:, 0].max() x1_min = X[:, 1].min() x1_max = X[:, 1].max() xx, yy = np.meshgrid(np.linspace(x0_min, x0_max, 100), np.linspace(x1_min, x1_max, 100)) Z = mdl.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # plot the contour - colouring different regions cs = plt.contourf(xx, yy, Z, cmap='Blues') # plot the individual data points - colouring by the *true* outcome color = y.ravel() plt.scatter(X[:, 0], X[:, 1], c=color, marker='o', s=40, cmap='Blues') plt.xlabel(feat[0]) plt.ylabel(feat[1]) plt.axis("tight") plt.colorbar() # + [markdown] id="SGxn4Qv91tSX" colab_type="text" # ## Breast cancer prediction # # Let's start from using the [breast cancer dataset in UCI data repository](https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.names) to have a quick look on how to do the analysis and build models using well-structured data (without missing data and other data cleaning problems). # The python machine learning package `scikit-learn` has already helped us preprocess the data. # We load the breast cancer dataset from `sklearn.datasets`, and show the description of this dataset. 
# + id="_aOVF6602Bdl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2257} outputId="603a2d3a-8909-466e-dfcc-ad39998e3229" # load the data df_bc = datasets.load_breast_cancer() print([k for k in df_bc.keys()]) print(df_bc['feature_names']) # if you want a description of the dataset, uncomment the below line print(df_bc['DESCR']) # pick index of the features to use (only pick 2) # :Attribute Information (in order): # 0 - CRIM per capita crime rate by town # 1 - ZN proportion of residential land zoned for lots over 25,000 sq.ft. # 2 - INDUS proportion of non-retail business acres per town # 3 - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) # 4 - NOX nitric oxides concentration (parts per 10 million) # 5 - RM average number of rooms per dwelling # 6 - AGE proportion of owner-occupied units built prior to 1940 # 7 - DIS weighted distances to five Boston employment centres # 8 - RAD index of accessibility to radial highways # 9 - TAX full-value property-tax rate per $10,000 # 10 - PTRATIO pupil-teacher ratio by town # 11 - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town # 12 - LSTAT % lower status of the population # 13 - MEDV Median value of owner-occupied homes in $1000's # + [markdown] id="w_O-14OFtYTF" colab_type="text" # In the next chunk, we use the second and the last feature (mean texture and worst fractal dimension, `[1, 29]`) to build models to classify whether the lesion is tumor or not. # # We use the features and the corresponding label to build the model. # For training machine learning models, we usually split our data into training and testing dataset, and only use the training subset for modeling (using `train_test_split` function). # The purpose of this splitting step is to make the model less overfit and be more generalizable. # For much robust models, we may further split the training set to training and validation sets. # Check the figure to understand the relationship between training, validation and testing splitted sets. # # ![split](http://www.codeproject.com/KB/AI/1146582/validation.PNG) # [Source] http://magizbox.com/training/machinelearning/site/evaluation/ # + id="S5pYln4Brnc6" colab_type="code" colab={} idx = [1, 29] # choose the features for modeling, you can change the idx and see what will happen X = df_bc['data'][:, idx] # X is your feature set y = df_bc['target'] # y is the target, the label X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42) # split your data to train/test sets feat = [x for x in df_bc['feature_names'][idx]] # + [markdown] id="u3HTF8Nr0IlT" colab_type="text" # Above, we've extracted only two features out of the breast cancer dataset as predictors. # Then, we run six different machine learning algorithms to build the classifier for predicting the lesion is malignant or not. # # We quickly visualize the results of all the models we have presented here. # In the figure, we can see the decision boundaries created by the models. # These boundaries can be a source of interpretability of the clinical predictive model. # We also print out their performance as measured by the **Area Under the Receiver Operator Characteristic curve (AUROC)**, which is a commonly used metrics in the machine learning world. 
# + id="8mlvRFyUzePi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 688} outputId="19ce1c17-c7af-4fd3-a07c-c4122f9cd5b9" # we put all algorithms in a dictionary, you can add/remove them # there are many algorithms in scikit-learn you can try: # http://scikit-learn.org/stable/supervised_learning.html#supervised-learning # the `.fit` step is the training process clf = dict() clf['Logistic Regression'] = LogisticRegression(fit_intercept=True).fit(X_train,y_train) clf['Decision Tree'] = DecisionTreeClassifier(criterion='entropy', splitter='best').fit(X_train,y_train) clf['Gradient Boosting'] = GradientBoostingClassifier(n_estimators=10).fit(X_train, y_train) clf['Random Forest'] = RandomForestClassifier(n_estimators=10).fit(X_train, y_train) clf['Bagging'] = BaggingClassifier(n_estimators=10).fit(X_train, y_train) clf['AdaBoost'] = AdaBoostClassifier(n_estimators=10).fit(X_train, y_train) # visualization fig = plt.figure(figsize=[16,9]) print('AUROC\tModel') for i, curr_mdl in enumerate(clf): yhat = clf[curr_mdl].predict_proba(X_test)[:,1] # prediction score = metrics.roc_auc_score(y_test, yhat) # get AUROC print('{:0.3f}\t{}'.format(score, curr_mdl)) ax = fig.add_subplot(2,3,i+1) plot_model_pred_2d(clf[curr_mdl], X_test, y_test, feat) # we define this function in the beginning plt.title(curr_mdl) plt.show() # + [markdown] id="1NDKbl9b0L3a" colab_type="text" # Here we can see that quantitatively, Logistic Regression, AdaBoost and Gradient Boosting have produced the highest discrimination among all the models (~0.80). # The decision surfaces of these models also seem simpler, and less "noisy", which is likely the reason for the improved **generalization** on the held out test set. # # ### K-fold cross-validation # # Now we will include all features of the breast cancer dataset - we will no longer be able to easily visualize the models, but we will be able to better evaluate them. # We'll also switch to using **5-fold cross-validation** to get a good estimate of the generalization performance of the model. # For the k-fold cross-validation, you can still split the data to train/test sets to make your models much robust. # # The figure demonstrates the simple 4-fold cross-validation. # # ![cv](https://upload.wikimedia.org/wikipedia/commons/1/1c/K-fold_cross_validation_EN.jpg) # # [Source] https://en.wikipedia.org/wiki/Cross-validation_(statistics) # + id="y4sDcTBjz31R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 140} outputId="9a67dab1-f5e8-49b1-922b-f101edf39c14" # use all features X = df_bc['data'] y = df_bc['target'] # use cross-validation to estimate the performance of each model print('Acc\tAUROC\tModel') for curr_mdl in clf: scores = cross_val_score(clf[curr_mdl], X, y, cv=5, scoring='accuracy') auc = cross_val_score(clf[curr_mdl], X, y, cv=5, scoring='roc_auc') print('{:0.3f}\t{:0.3f}\t{}'.format(scores.mean(), auc.mean(), curr_mdl)) # + [markdown] id="KlLzJkbq0Osd" colab_type="text" # We note two things here: # # 1. by **using the entire feature set** we have dramatically improved performance of the model, indicating that there was more information contained in the other columns, and # 2. most of our models are performing relatively equivalently (except for the super simple model, the decision tree), and do not forget **logistic regression**! 
# + [markdown] id="1PECpeMGzBZ1" colab_type="text" # ## PhysioNet # We will now practice using these models on a dataset acquired from patients admitted to intensive care units at the Beth Israel Deaconness Medical Center in Boston, MA. # All patients in the cohort stayed for at least 48 hours, and the goal of the prediction task is to predict in-hospital mortality. # If you're interested, you can read more about the dataset [here](http://physionet.org/challenge/2012/). # # The data is originally provided as hourly observations for a number of variables, and the preprocessing step involved extracting summary statistics across all these observations. # The outcome is the first column `hospitalmortality` (`0` means alive while discharge). # The rest of the data are features you can use to predict this binary outcome. # # We will again try to build the classifier using logistic regression. # You may want to use other algorithms such as AdaBoost, Random Forest, Bagging, and Gradient Boosting. # Pick your favourite and play with the parameters to see how well you can do. # # First, we start from loading the data and look at all columns in the dataset---there are 182 possible predictors and an outcome variable. # # This time we read the data in a slightly different way using pandas `dataframe` format. # pandas `dataframe` is useful for importing the tabular data/spreadsheet. # + id="EQPT1u6z2ZYQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="706882f5-e7fa-4b33-a369-3d0f682915df" df = pd.read_csv('https://raw.githubusercontent.com/ckbjimmy/2018_mlw/master/data/PhysionetChallenge2012_data.csv') df.head() # + id="UpSdfcIS6N3T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="502f0d7c-2b23-4778-a919-9b5ab147106f" # check our features print([x for x in df.columns]) # + [markdown] id="_gySSCmuBt0y" colab_type="text" # Next, we split features and outcome variables. # + id="EWP4-EAqxde1" colab_type="code" colab={} # first column [0] is target (outcome variable) y = df['hospitalmortality'].values # second column onward [1:] are features X = df.drop(['hospitalmortality'], axis=1).values # we keep the feature name in X_header X_header = df.drop(['hospitalmortality'],axis=1).columns # + id="xWTgWn0bCC5c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 678} outputId="7e6f43eb-06cb-43e3-dbc6-96b3de1c5b46" # train a simple logistic regression, and... clf = LogisticRegression(fit_intercept=True).fit(X, y) # + [markdown] id="w5pvjOujGT19" colab_type="text" # The above error of `ValueError: Input contains NaN, infinity or a value too large for dtype('float64').` tells us that we have missing data! (Alistair: Dramatic music plays). # # Good thing about Google colab---they have a button for you to search stackoverflow. # # We then check the `NaN` using the transpose view---there are a lot `NaN` in our dataset. # + id="t_bOTDBexhap" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1828} outputId="800521ce-50e3-4e1a-f65c-ddf36710a987" df.head().T # + [markdown] id="Sswvrj6CGZax" colab_type="text" # ### Missing data imputation # # In the above transposed view, we can see a few of the missing values for all predictors. # There are many methods which can be used for missing data imputation. # Of those tools, many of them are available in the `imputer` module of scikit-learn. 
# # Here is a [brief tutorial of imputing missing values](http://scikit-learn.org/stable/modules/preprocessing.html#imputation). # # You can [read more about the imputer module here](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html). # # We choose to take a reasonably simple approach, and replace all missing values (`'NaN'`) with the mean value of observed measurements (`strategy='mean'`). # + id="Uq9Lo9PVGZ1T" colab_type="code" colab={} # define the method of imputation imp = Imputer(missing_values='NaN', strategy='mean', axis=0) # learn the parameters of that imputation using our data X # i.e., in our case, calculate the means of each column which we will use to impute data imp = imp.fit(X) # apply that imputation X_transform = imp.transform(X) # + [markdown] id="tEh-oQJcGe2X" colab_type="text" # We can now see that our missing data has been replaced: # + id="VVgBO_WLGbt8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="59aac2a3-14f7-4f76-c78b-2a84fa1c79e8" print('Before:', end='') print(X[0:10, 0]) print('After:', end='') print(X_transform[0:10, 3]) # + [markdown] id="X0LKlXbCGhz6" colab_type="text" # For linear models (e.g. logistic regression), this will be roughly equivalent to imputing the average risk for this feature for patients missing that data. # # We can now train a model on the data! # + id="ZB_vwBdFGg8W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="f7b11506-5c8c-4de4-ad87-835dfcccbaa9" clf = LogisticRegression(fit_intercept=True) clf.fit(X_transform, y) # + [markdown] id="u_-jhZknGnUL" colab_type="text" # We then evaluate the model using the AUROC again. # An AUROC = 0.5 represents a random classifier making random predictions, while an AUROC = 1.0 represents a perfect classifier which always outputs the correct prediction. # + id="LKMu6Uu3GlTZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="390220fa-4f18-4c84-b68b-fe9bb3b80c19" yhat = clf.predict_proba(X_transform)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # + [markdown] id="tRgkyC2tGsJE" colab_type="text" # ### Normalizartion / Find the important features # # Let's dig into the model to determine what is driving the predictions. # In healthcare applications, model interpretability is very important as it can be used to validate the model's construct validity. # + id="EA2zqiAyGphk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 245} outputId="89cacd20-1e81-4e73-d7fd-feb47910f5dc" # plot the coefficients coef = np.row_stack([df.columns[1:], clf.coef_]).T idxSort = np.argsort(coef[:,1]) print('Top 5 predictors negatively correlated with mortality:') for n in range(5): print('{:1.2f} - {}'.format(coef[idxSort[n],1], coef[idxSort[n],0])) print() print('Top 5 predictors positively correlated with mortality:') for n in range(5): print('{:1.2f} - {}'.format(coef[idxSort[-n-1],1], coef[idxSort[-n-1],0])) # + [markdown] id="EmZiT6wSGwiq" colab_type="text" # One issue with these coefficients is that they are unadjusted---that is they are based on the original scale of the data. # For example, the coefficient for FiO2, shown in percentage (0-100), will likely be ~100 times higher than the coefficient for mechanical ventilation (0/1). 
# While this is fine for modelling purposes, it also means we can't directly compare the magnitudes of the coefficients to determine the magnitude of the association between the feature and risk. # # Of course there is a solution! # We can scale the data using `StandardScaler` function in scikit-learn. # + id="H0Am8ZdKGuSJ" colab_type="code" colab={} # problem: the above coefficients all operate on different scales # solution: scale the data before we use it in the model imp = Imputer(missing_values='NaN', strategy="mean", axis=0) imp = imp.fit(X) X_transform = imp.transform(X) scale = StandardScaler() scale = scale.fit(X_transform) X_transform = scale.transform(X_transform) # + id="82Hx6umqGzUx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 262} outputId="0b86433a-72ae-4169-f1a7-8669632df06c" # fit the model on scaled data clf = LogisticRegression(fit_intercept=True) clf = clf.fit(X_transform, y) # evaluate the model yhat = clf.predict_proba(X_transform)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) # print the coefficients coef = np.row_stack([df.columns[1:], clf.coef_]).T idxSort = np.argsort(coef[:,1]) print('Top 5 predictors negatively correlated with mortality:') for n in range(5): print('{:1.2f} - {}'.format(coef[idxSort[n],1], coef[idxSort[n],0])) print() print('Top 5 predictors positively correlated with mortality:') for n in range(5): print('{:1.2f} - {}'.format(coef[idxSort[-n-1],1], coef[idxSort[-n-1],0])) # + [markdown] id="_z2pFI-jG879" colab_type="text" # Let's look at the odds ratios for these coefficients graphically. # This will give us a more intuitive comparison between the features. # For those which have negative correlation with bad outcome (i.e. higher values indicate better prognosis), we reverse the coefficient on the graph so it is more comparable. # + id="26jIeJxuG3sd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 375} outputId="2be856b6-4f9e-4d83-a551-2a487e2b1c99" # plot the top 10 lowest / highest coef = np.row_stack([df.columns[1:], clf.coef_]).T idxSort = np.argsort(coef[:,1]) idxTop = idxSort[0:10] idxBot = idxSort[-1:-11:-1] colTop = [0.8906,0.1016,0.1094] colBot = [0.2148,0.4922,0.7188] f = plt.figure(figsize=[10,6]) ax1 = f.add_subplot(111) ax1.plot( np.exp(-coef[idxTop,1].astype(float)), range(10), 's', markersize=10, color = colTop) # move negatively correlated ticklabels over to the right ax1.yaxis.tick_right() ax1.yaxis.set_label_position("right") ax1.set_ylim([-1,10]) ax1.yaxis.set_ticks(range(10)) ax1.yaxis.set_ticklabels(coef[idxTop,0], color=colTop, fontsize=16) plt.ylabel("Negatively correlated", color=colTop, fontsize=16) ax2 = f.add_subplot(111, sharex=ax1, frameon=False) ax2.plot( np.exp(coef[idxBot,1].astype(float)), range(10), 'o', markersize=10, color=colBot ) ax2.yaxis.set_ticks(range(10)) ax2.yaxis.set_ticklabels(coef[idxBot,0], color=colBot, fontsize=16) ax2.set_ylim([-1,10]) plt.ylabel("Positively correlated", color=colBot, fontsize=16) # set axes plt.show() # + [markdown] id="qEiKUipZU5o5" colab_type="text" # ### Pipeline # # We can use scikit-learn's pipeline feature to combine together all of these steps (imputation, scaling, and final model building). # This is technically the same thing, but syntactically it's a lot cleaner. 
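# A further benefit, beyond tidier syntax: when a `Pipeline` is passed to `cross_val_score` or
# `GridSearchCV`, the imputation and scaling are re-fit on each training fold only, so nothing about the
# held-out fold leaks into the preprocessing. A minimal sketch with the same three steps used below:
# +
leak_free = Pipeline([
    ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)),
    ("scaler", StandardScaler()),
    ("logreg", LogisticRegression(fit_intercept=True))
])
print('{:0.3f} - mean cross-validated AUROC (preprocessing refit per fold).'.format(
    cross_val_score(leak_free, X, y, cv=5, scoring='roc_auc').mean()))
# -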
# # # + id="9yNgiC9uU3j4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="83b09813-ec02-40a5-f116-22ffec446190" # can use pipelines as a nicer way of doing the above estimator = Pipeline([ ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), ("logreg", LogisticRegression(fit_intercept=True)) ]) mdl_pipeline = estimator.fit(X, y) yhat = mdl_pipeline.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format( auc )) # + [markdown] id="36LIjZCcKEvO" colab_type="text" # Note that we achieve the same performance as before---again, this is simply a short form of what we have done before. # # You'll note that in the print statement we keeps stating `"training set"`, and may be wondering what this means and why it matters. # The training set corresponds to the data used to develop the model. # Importantly, this means that the model has seen the data that we are using to estimate how well the model is doing. # For the regression model above, the consequences of this may not be obvious. # Let's switch to a more flexible model and see what happens. # + id="4qxpTrw5VhOG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f363f333-803d-4e48-ca2b-ff7294b31993" # can use pipelines as a nicer way of doing the above estimator = Pipeline([ ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), ("rf", RandomForestClassifier()) ]) mdl_pipeline = estimator.fit(X, y) yhat = mdl_pipeline.predict_proba(X)[:,1] auc = metrics.roc_auc_score(y, yhat) print('{:0.3f} - AUROC of model (training set).'.format( auc )) # + [markdown] id="GLvdjOk6KP2P" colab_type="text" # Amazing! # We have an AUROC = 1.0, which is perfect, therefore we have a perfect model. # Time to go home! # # Obviously, it's not that simple, though we wish it were :) # We have used a model called a Random Forest. # Omitting a few details, a random forest is a collection of decision trees. # Each decision tree tries to perfectly classify a small subset of the overall data. # When combined together, all of these decision trees perfectly classify the entire dataset. # However, that does not mean that they work perfectly. # # How can we check this? # Well, let's hide some data from the classifier, and see how well it does on this hidden data. # This is a very common concept in machine learning: we have a training set, and we hide a test set to see how we're really doing---as we did for the breast cancer prediction task. # # You can either use the splitting function in `scikit-learn` (in the breast cancer example), or use the simple random indexing approach to split the data. # Here we use the latter approach. # # We realized that the result of AUC = 1.0 comes from overfitting of training data. # You may change to other algorithms in the `Pipeline` and see which algorithm has better generalizability for testing data. 
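# The split below uses simple random indexing to hold out roughly 20% of the rows. An equivalent sketch
# with scikit-learn's splitter, which can also stratify on the outcome so the training and test sets keep
# the same mortality rate (the variable names here are illustrative):
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
print(X_tr.shape, X_te.shape)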
# + [markdown] id="AOY5rDb6KS_O" colab_type="text" # ### Prevent from overfitting # + id="t46MYBDvKJpu" colab_type="code" colab={} # we use 80% data for training and 20% for testing np.random.seed(42) idxTrain = np.random.rand(X.shape[0]) idxTest = idxTrain > 0.8 idxTrain = ~idxTest X_train = X[idxTrain, :] y_train = y[idxTrain] X_test = X[idxTest, :] y_test = y[idxTest] # + id="mNCV-tzGKXeq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="a986191f-9783-4fe1-8a9e-4ef47220fc91" # can use pipelines as a nicer way of doing the above estimator = Pipeline([ ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), ("rf", RandomForestClassifier()) ]) mdl_pipeline = estimator.fit(X_train, y_train) yhat = mdl_pipeline.predict_proba(X_train)[:,1] auc = metrics.roc_auc_score(y_train, yhat) print('{:0.3f} - AUROC of model (training set).'.format(auc)) yhat = mdl_pipeline.predict_proba(X_test)[:,1] auc = metrics.roc_auc_score(y_test, yhat) print('{:0.3f} - AUROC of model (test set).'.format(auc)) # + [markdown] id="ZO-KpqjfKdwS" colab_type="text" # ### Interpretability # # Now that we have prevented our model from memorizing the data, we can get a much better estimate of how well we are actually doing. # Note that with the more flexible model, we are not necessarily doing better. # However, there are many "hyperparameters" for the Random Forest (different knobs we can tweak in order for things to work better). # With some optimization of these hyperparameters, we may be able to do better. # Note that this improvement in performance is at the cost of some interpretability of the model. # We no longer have a simple set of coefficients to interpret: we now have hundreds of decision trees simultaneously being used to determine patient outcome. # # All is not lost! # We can still garner some meaning from the models. # One common method of interpretting the model is figuring out how important each feature is in the model. # The method for doing this is quite intuitive: we can mess up a single column (randomize it), and see how much worse our model does. # If that column is very important, then we would expect our model to be much worse. # If the column is not that important, then our model performance would be about the same. 
# + id="09YubqxRKZzF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="341f3381-ae45-495a-c0ab-7ddde307c7a4" D = X_test.shape[1] # we should repeat the data shuffling to get a decent estimate of feature importance B = 10 feat_importance = np.zeros([B, D], dtype=float) # let's get the baseline performance of our model yhat = mdl_pipeline.predict_proba(X_test)[:,1] loss_base = metrics.log_loss(y_test, yhat) # for each feature for d in range(D): for b in range(B): # generate a random shuffle idxShuffle = np.random.randint(low=0, high=D, size=X_test.shape[0]) # note this random shuffle allows the same row to be selected more than once # this is intentional, and called "bootstrapping", you can read up on it online X_fi = X_test # shuffle the data X_fi[:, d] = X_fi[idxShuffle, d] # apply our model yhat = mdl_pipeline.predict_proba(X_fi)[:,1] # calculate performance on the data with a shuffled column loss = metrics.log_loss(y_test, yhat) # store this in the feature importance matrix # note we subtract this off the baseline AUROC feat_importance[b,d] = loss_base - loss if np.mod(d,10)==0: print('Finished {:2.1f}% of features.'.format(d*100.0/D)) print('Done!') # + [markdown] id="P290Fi7sKlRZ" colab_type="text" # Now that we've calculated the importance, let's plot the most important 10 features. # We can define the most important features as those which caused the largest degradation in performance. # + id="MMOyXY8lKivX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 483} outputId="f7690cd8-bfc0-4bc5-b74f-88e046d2c934" # take mean degradation in performance feat_imp_mu = np.mean(feat_importance, axis=0) feat_imp_iqr = np.percentile(feat_importance, 75, axis=0) - np.percentile(feat_importance, 25, axis=0) # sort to get the features which had the most impact idxSort = np.argsort(feat_imp_mu) plt.figure(figsize=[12,8]) # note we negate the feature importance - now higher values indicate more important plt.barh(range(10), -feat_imp_mu[idxSort[0:10]], color='r', yerr = feat_imp_iqr[idxSort[0:10]], tick_label = X_header[idxSort[0:10]], align='center') plt.show() # + [markdown] id="5Q4XSnSsKppn" colab_type="text" # Note that we have changed our metric from the AUROC to the logistic loss. # All of the top 10 features make sense: lab values are useful in determining patient outcome. # # There are other methods for determining feature importance---in fact, some are included in the random forest model object. # The only trick is that they are buried within the pipeline that we used earlier. # + id="g5-IwMnfKnP3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 483} outputId="ad3030cd-b2ee-4b3f-c651-aa684131022e" rf_mdl = mdl_pipeline.named_steps['rf'] # get feature importances feat_imp_rf = rf_mdl.feature_importances_ # same as before - let's plot it! idxSort = np.argsort(feat_imp_rf) plt.figure(figsize=[12,8]) plt.barh(range(10), feat_imp_rf[idxSort[0:10]], tick_label = X_header[idxSort[0:10]], align='center') plt.show() # + [markdown] id="0IjSR3xnKxVG" colab_type="text" # Here we get very different feature importances. # Why is this? # It helps to understand how these feature importances are derived. # # The first feature importance (log loss approach) was done based on overall performance: potassium ended up being very useful across all patients in predicting mortality. # # Alternatively, the second method is based on the Gini importance (in random forest). # Remember that a random forest is made up of many trees. 
# This second method says: for each split that uses the feature, how much "purer" are the two subsequent nodes. # Below, the percentage is the frequency of the outcome, where we start with 50% (equal number of both groups in that node). # ``` # 50% # | # | split on feature here # |\ # | \ # | \ # | \ # | \ # 25% 75% # ``` # Above, after we split on the feature, our nodes are "purer". # That is, the left node has more negative outcomes (0s), and the right node as more positive outcomes (1s). # # From this, we can conclude that MAP (mean arterial pressure) is very useful for separating those who survive from those who die, which makes sens. # Patients who have low MAP are very ill, and it is an effective indicator of illness. # However, not all patients have low MAP. # Across all patients, potassium is a more important measure in indicating illness. # + [markdown] id="a8HX6G5RK2sK" colab_type="text" # ### K-fold cross-validation (in detail) # Finally, let's review a very effective technique for estimating how well our classifier is doing---cross-validation. # Recall that we split the dataset into two sets: training and test, specifically 80% training and 20% test. # As we tested using a smaller dataset (only 20% of the total records), we likely will end up with a noisier estimate of performance than if we used the entire dataset. # Of course we can't use the entire dataset because then our model results aren't believable. # Cross-validation is a clever way to use the entire dataset and use held-out data for proper evaluation. # Each observation is assigned to a group. # Let's say we have $k$ groups. # We then can build a model using data in the $k$-th group as the test set, and all other data as the training set. # Let's take a look at some code. # # # # + id="nzD3fC4mK4On" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="492ba2aa-04ad-42e4-c448-ba6d44e56be5" # create the K fold indices K = 5 idxK = np.random.permutation(X.shape[0]) idxK = np.mod(idxK,K) print('First 10 observations:',end=' ') print(idxK[0:10]) print('idxK shape:',end=' ') print(idxK.shape) # + [markdown] id="917iTyh-K_sZ" colab_type="text" # Above you can see we have generated an integer, up to $k-1$ (but including 0), for all observations in the training set. # We will now iterate through the unique integers to "hold out" a subset of the data for each model training. # + id="LMwAKrfcK9kC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="7c5ab7fd-2509-4175-83ac-4801c40b0e1a" xval_auc = list() for k in range(K): idxTest = idxK == k idxTrain = idxK != k X_train = X[idxTrain, :] y_train = y[idxTrain] X_test = X[idxTest, :] y_test = y[idxTest] # train the model using all but the kth fold curr_mdl = estimator.fit(X_train, y_train) # evaluate using the kth fold yhat = curr_mdl.predict_proba(X_test)[:,1] curr_score = metrics.roc_auc_score(y_test, yhat) xval_auc.append( curr_score ) print('{} - Finished fold {} of {}. AUROC {:0.3f}.'.format(dt.datetime.now(), k+1, K, curr_score)) # + [markdown] id="OvSgqSEj9wqP" colab_type="text" # Of course, you can also use the function `cross_val_score` as we used in the breast cancer prediction for cross-validation. # + [markdown] id="LV9JVHoBLEFB" colab_type="text" # ### Hyperparameter optimization # Let's look at our estimator. 
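# Every tunable setting of the pipeline is exposed through `get_params()`, with nested names such as
# `rf__n_estimators` -- the same naming that `GridSearchCV` expects in its parameter grid. A small check,
# assuming `estimator` is still the imputer/scaler/random-forest pipeline defined above:
print([p for p in sorted(estimator.get_params()) if p.startswith('rf__')])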
# + id="vxW_XzjDLBb4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 140} outputId="1749aa15-566c-4640-9209-136d7759b19d" estimator = Pipeline([ ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), ("rf", RandomForestClassifier()) ]) print(estimator.named_steps['rf']) # + [markdown] id="JR5wabnDLJ--" colab_type="text" # Careful observation will note a few parameters which we could play around with: # # * criterion - what method to evaluate how well a node is splitting the data # * max_depth - a limit on how deep the tree is # * max_features - how many to look at for an inidividual tree # * max_leaf_nodes - maximum number of leaf or end nodes # * min_impurity_split - minimum impurity to allow for a split (the criterion is impurity) # * min_samples_leaf - minimum samples in a leaf node # * min_samples_split - minimum samples required to split a node # * min_weight_fraction_leaf - similar to min_samples_leaf, except sometimes we are using weighted data, and so this takes that into account # * n_estimators - the number of trees to learn # # More details of parameteres can be found in the [`scikit-learn` document about random forest classifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). # # You'll note that all of these parameters have values - but who is to say these values are optimal? # They have been set to sensible defaults - all of which perform well on a variety of problems - but there's no reason that they would perform best on our problem! # What we need to do is *hyperparameter optimization*: the above are "hyperparameters" (i.e., they are not the parameters the model uses to make decisions, but they are used to configure the model itself). # We can optimize these in a few ways, but the simplest to understand is grid search. # Grid search is essentially a brute force method of trying a bunch of combinations and picking the one that does best. # # Happily, `scikit-learn` provides some tools to do grid search using cross-validation, which is exactly what we want to do. # + id="D2Fm9Ex8LKby" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="2c60756d-a3a5-4614-fddd-b998b919e42a" params = {'rf__max_depth': [10], 'rf__n_estimators': [10,50]} estimator = Pipeline([ ("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), ("rf", RandomForestClassifier()) ]) curr_mdl = model_selection.GridSearchCV(estimator, params, scoring='roc_auc', verbose=2, refit=True) curr_mdl.fit(X, y) curr_mdl.cv_results_['mean_test_score'] # + id="MOSV_jOKLNDw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 527} outputId="8ce08983-e4ed-41e6-a813-d8b7a78cc1e9" for x in curr_mdl.cv_results_: print(x,end=': ') print(curr_mdl.cv_results_[x]) # + [markdown] id="p5nlGiznLRA-" colab_type="text" # ### More models! # # We are now armed with the ability to train and effectively evaluate our models. # Now the fun begins. # Let's evaluate a few different models to see which one does best. # # We use all techniques and tips above in this chunk for modeling. # You can even find algorithms you like in the [`scikit-learn` document](http://scikit-learn.org/stable/supervised_learning.html#supervised-learning), add them into `models`, and fine-tune the hyperparameters. # Or you can also change the $k$ value for cross-validation. 
# # # + id="mT4GRjojLOqj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1227} outputId="c477c4d4-e8ff-4e1e-8293-e27a484481ca" # Rough timing info: # rf - 3 seconds per fold # logreg - 4 seconds per fold # lasso - 8 seconds per fold models = {'l1': LogisticRegression(penalty='l1', multi_class='ovr', n_jobs=-1), 'l2': LogisticRegression(penalty='l2', multi_class='ovr', n_jobs=-1), #'nb': MultinomialNB(), 'svc': CalibratedClassifierCV(base_estimator=LinearSVC(penalty='l2', loss='squared_hinge', C=1.0, multi_class='ovr', random_state=0, max_iter=1000), cv=5), 'rbf': SVC(kernel='rbf', probability=True, decision_function_shape='ovr'), 'rf': RandomForestClassifier(n_estimators=100, n_jobs=-1), 'adab': AdaBoostClassifier(n_estimators=100), 'gb': GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0), 'mlp':MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(128, 64, 32), random_state=42) } # create k-fold indices K = 5 # number of folds idxK = np.random.permutation(X.shape[0]) idxK = np.mod(idxK,K) mdl_val = dict() results_val = dict() pred_val = dict() tar_val = dict() for mdl in models: print('=============== {} ==============='.format(mdl)) mdl_val[mdl] = list() results_val[mdl] = list() # initialize list for scores pred_val[mdl] = list() tar_val[mdl] = list() if mdl == 'xgb': # no pre-processing of data necessary for xgb estimator = Pipeline([(mdl, models[mdl])]) else: estimator = Pipeline([("imputer", Imputer(missing_values='NaN', strategy="mean", axis=0)), ("scaler", StandardScaler()), (mdl, models[mdl])]) for k in range(K): # train the model using all but the kth fold curr_mdl = estimator.fit(X[idxK != k, :],y[idxK != k]) # get prediction on this dataset if mdl == 'lasso': curr_prob = curr_mdl.predict(X[idxK == k, :]) else: curr_prob = curr_mdl.predict_proba(X[idxK == k, :]) curr_prob = curr_prob[:,1] pred_val[mdl].append(curr_prob) tar_val[mdl].append(y[idxK == k]) # calculate score (AUROC) curr_score = metrics.roc_auc_score(y[idxK == k], curr_prob) # add score to list of scores results_val[mdl].append(curr_score) # save the current model mdl_val[mdl].append(curr_mdl) print('{} - Finished fold {} of {}. AUROC {:0.3f}.'.format(dt.datetime.now(), k+1, K, curr_score)) # + [markdown] id="c6tWfOcRLX80" colab_type="text" # ## More data! # # Finally, we learned some machine learning techniques and toolkits, and played on two simple datasets. # Now it's your turn trying to extract variables and outcomes from [MIMIC dataset](https://mimic.physionet.org/) and build the model for your own clinical questions. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd df = pd.read_csv('data/index.csv') import numpy as np msk = np.random.rand(len(df)) < 0.8 train = df[msk] val = df[~msk] len(train), len(val) train.to_csv('data/train.csv', index=None) val.to_csv('data/val.csv', index=None) import torch torch.cuda.is_available() from torch.utils.tensorboard import SummaryWriter import torch batch_size = 32 image = torch.randn(batch_size, 3, 576, 576) image.shape mask = (image>0).float() mask.shape mask.max() mask[:,0,:,:].shape mask.shape mask = torch.randn(3, 576, 576) mask[0].unsqueeze(0).shape fl = torch.flatten(image) fl.shape from sklearn.metrics import roc_curve, auc fl = fl/fl.max() fpr, tpr, thresholds = roc_curve(fl, fl) fl from PIL import Image image = Image.open('data/mask/0.png') image import numpy as np data = np.asarray(image) data.max() import torchvision.transforms.functional as F a = F.to_tensor(image) a.max() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CS-109B Introduction to Data Science # ## Lab 5: Convolutional Neural Networks # # **Harvard University**
    # **Spring 2020**
    # **Instructors:** , , and
    # **Lab Instructors:** and
    # **Content:** , , , and # # --- # RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles) # ## Learning Goals # # In this lab we will look at Convolutional Neural Networks (CNNs), and their building blocks. # # By the end of this lab, you should: # # - have a good undertanding on how images, a common type of data for a CNN, are represented in the computer and how to think of them as arrays of numbers. # - be familiar with preprocessing images with `tf.keras` and `scipy`. # - know how to put together the building blocks used in CNNs - such as convolutional layers and pooling layers - in `tensorflow.keras` with an example. # - run your first CNN. # + import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (5,5) import numpy as np from scipy.optimize import minimize from sklearn.utils import shuffle # %matplotlib inline # - from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Input from tensorflow.keras.layers import Conv2D, Conv1D, MaxPooling2D, MaxPooling1D,\ GlobalAveragePooling1D, GlobalMaxPooling1D from tensorflow.keras.optimizers import Adam, SGD, RMSprop from tensorflow.keras.utils import to_categorical from tensorflow.keras.metrics import AUC, Precision, Recall, FalsePositives, FalseNegatives, \ TruePositives, TrueNegatives from tensorflow.keras.regularizers import l2 # + from __future__ import absolute_import, division, print_function, unicode_literals # TensorFlow and tf.keras import tensorflow as tf tf.keras.backend.clear_session() # For easy reset of notebook state. print(tf.__version__) # You should see a > 2.0.0 here! # - # ## Part 0: Running on SEAS JupyterHub # # **PLEASE READ**: [Instructions for Using SEAS JupyterHub](https://canvas.harvard.edu/courses/65462/pages/instructions-for-using-seas-jupyterhub?module_item_id=638544) # # SEAS and FAS are providing you with a platform in AWS to use for the class (accessible from the 'Jupyter' menu link in Canvas). These are AWS p2 instances with a GPU, 10GB of disk space, and 61 GB of RAM, for faster training for your networks. Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal. # # **NOTE : The AWS platform is funded by SEAS and FAS for the purposes of the class. It is not running against your individual credit.** # # **NOTE NOTE NOTE: You are not allowed to use it for purposes not related to this course.** # # **Help us keep this service: Make sure you stop your instance as soon as you do not need it.** # # ![aws-dog](../images/aws-dog.jpeg) # *source:CS231n Stanford: Google Cloud Tutorial* # ## Part 1: Parts of a Convolutional Neural Net # # We can have # - 1D CNNs which are useful for time-series or 1-Dimensional data, # - 2D CNNs used for 2-Dimensional data such as images, and also # - 3-D CNNs used for video. # # ### a. Convolutional Layers. # # Convolutional layers are comprised of **filters** and **feature maps**. The filters are essentially the **neurons** of the layer. They have the weights and produce the input for the next layer. The feature map is the output of one filter applied to the previous layer. # # Convolutions operate over 3D tensors, called feature maps, with two spatial axes (height and width) as well as a depth axis (also called the channels axis). 
For an RGB image, the dimension of the depth axis is 3, because the image has three color channels: red, green, and blue. For a black-and-white picture, like the MNIST digits, the depth is 1 (levels of gray). The convolution operation extracts patches from its input feature map and applies the same transformation to all of these patches, producing an output feature map. This output feature map is still a 3D tensor: it has a width and a height. Its depth can be arbitrary, because the output depth is a parameter of the layer, and the different channels in that depth axis no longer stand for specific colors as in RGB input; rather, they stand for filters. Filters encode specific aspects of the input data: at a high level, a single filter could encode the concept “presence of a face in the input,” for instance. # # In the MNIST example that we will see, the first convolution layer takes a feature map of size (28, 28, 1) and outputs a feature map of size (26, 26, 32): it computes 32 filters over its input. Each of these 32 output channels contains a 26×26 grid of values, which is a response map of the filter over the input, indicating the response of that filter pattern at different locations in the input. # # Convolutions are defined by two key parameters: # - Size of the patches extracted from the inputs. These are typically 3×3 or 5×5 # - The number of filters computed by the convolution. # # **Padding**: One of "valid", "causal" or "same" (case-insensitive). "valid" means "no padding". "same" results in padding the input such that the output has the same length as the original input. "causal" results in causal (dilated) convolutions, # #### 1D Convolutional Network # # In `tf.keras` see [1D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D) # # ![1D-CNN](../images/1D-CNN.png) # # *image source: Deep Learning with Python by * # #### 2D Convolutional Network # # In `tf.keras` see [2D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) # # ![title](../images/convolution-many-filters.png) # **keras.layers.Conv2D** (filters, kernel_size, strides=(1, 1), padding='valid', activation=None, use_bias=True, # kernel_initializer='glorot_uniform', data_format='channels_last', # bias_initializer='zeros') # ### b. Pooling Layers. # # Pooling layers are also comprised of filters and feature maps. Let's say the pooling layer has a 2x2 receptive field and a stride of 2. This stride results in feature maps that are one half the size of the input feature maps. We can use a max() operation for each receptive field. # # In `tf.keras` see [2D pooling layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) # # **keras.layers.MaxPooling2D**(pool_size=(2, 2), strides=None, padding='valid', data_format=None) # # ![Max Pool](../images/MaxPool.png) # ### c. Dropout Layers. # # Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. # # In `tf.keras` see [Dropout layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) # # tf.keras.layers.Dropout(rate, seed=None) # # rate: float between 0 and 1. Fraction of the input units to drop.
    # seed: A Python integer to use as random seed. # # References # # [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf) # ### d. Fully Connected Layers. # # A fully connected layer flattens the square feature map into a vector. Then we can use a sigmoid or softmax activation function to output probabilities of classes. # # In `tf.keras` see [Fully Connected layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) # # **keras.layers.Dense**(units, activation=None, use_bias=True, # kernel_initializer='glorot_uniform', bias_initializer='zeros') # ## Part 2: Preprocessing the data img = plt.imread('../images/cat.1700.jpg') height, width, channels = img.shape print(f'PHOTO: height = {height}, width = {width}, number of channels = {channels}, \ image datatype = {img.dtype}') img.shape # let's look at the image imgplot = plt.imshow(img) # #### Visualizing the different channels colors = [plt.cm.Reds, plt.cm.Greens, plt.cm.Blues, plt.cm.Greys] subplots = np.arange(221,224) for i in range(3): plt.subplot(subplots[i]) plt.imshow(img[:,:,i], cmap=colors[i]) plt.subplot(224) plt.imshow(img) plt.show() # If you want to learn more: [Image Processing with Python and Scipy](http://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy) # ## Part 3: Putting the Parts together to make a small ConvNet Model # # Let's put all the parts together to make a convnet for classifying our good old MNIST digits. # Load data and preprocess (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data( path='mnist.npz') # load MNIST data train_images.shape # **Notice:** These photos do not have a third dimention channel because they are B&W. train_images.max(), train_images.min() # + train_images = train_images.reshape((60000, 28, 28, 1)) # Reshape to get third dimension test_images = test_images.reshape((10000, 28, 28, 1)) train_images = train_images.astype('float32') / 255 # Normalize between 0 and 1 test_images = test_images.astype('float32') / 255 # Convert labels to categorical data train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) # + mnist_cnn_model = Sequential() # Create sequential model # Add network layers # number of filters = 32; kernelsize = (3,3), stride, padding (non existent here) input_shape usually train.shape not hardcoded mnist_cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))) mnist_cnn_model.add(MaxPooling2D((2, 2))) mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu')) mnist_cnn_model.add(MaxPooling2D((2, 2))) mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu')) # - # The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you’re already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the output of the last conv layer is a 3D tensor. First we have to flatten the 3D outputs to 1D, and then add a few Dense layers on top. 
mnist_cnn_model.add(Flatten()) # flatten the 3D feature maps into a 1D vector before the dense layers mnist_cnn_model.add(Dense(32, activation='relu')) # 10 = number of classes; ideally derived from the data rather than hardcoded mnist_cnn_model.add(Dense(10, activation='softmax')) # softmax because there are more than two output classes; for a 0/1 output it would be sigmoid mnist_cnn_model.summary() # in the summary, (None, 26, 26, 32): None is the batch dimension, i.e. how many images are fed through at once
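# As a side note on the summary above: the (26, 26, 32) shape can be checked by hand. With 'valid' padding and stride 1, a 3x3 convolution maps a spatial dimension of size $n$ to $n - 3 + 1$, each 2x2 max-pooling halves it (rounding down), and the channel count equals the number of filters. A small sketch of that arithmetic (the helper below is only for illustration, not part of the lab code):

# +
def conv_output_size(n, kernel=3, stride=1, padding=0):
    # Output size of one spatial dimension after a convolution.
    return (n + 2 * padding - kernel) // stride + 1

size = 28
for layer in ['conv 3x3', 'maxpool 2x2', 'conv 3x3', 'maxpool 2x2', 'conv 3x3']:
    if layer.startswith('conv'):
        size = conv_output_size(size, kernel=3)
    else:
        size = size // 2  # 2x2 max pooling with stride 2
    print('{:<12} -> {} x {}'.format(layer, size, size))
# -

# The final 3 x 3 spatial size (with 64 channels) matches the shape that is flattened before the dense layers.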
# **Question**: Why are we using cross-entropy here?
    # + loss = tf.keras.losses.categorical_crossentropy optimizer = Adam(lr=0.001) #optimizer = RMSprop(lr=1e-2) # see https://www.tensorflow.org/api_docs/python/tf/keras/metrics metrics = ['accuracy'] # Compile model mnist_cnn_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # - #
# **Discussion**: How can we choose the batch size?
    # + # %%time # Fit the model verbose, epochs, batch_size = 1, 10, 64 # try a different num epochs and batch size : 30, 16 history = mnist_cnn_model.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size, verbose=verbose, validation_split=0.2, # validation_data=(X_val, y_val) # IF you have val data shuffle=True) # - print(history.history.keys()) print(history.history['val_accuracy'][-1]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() #plt.savefig('../images/batch8.png') mnist_cnn_model.metrics_names # Evaluate the model on the test data: score = mnist_cnn_model.evaluate(test_images, test_labels, batch_size=batch_size, verbose=0, callbacks=None) #print("%s: %.2f%%" % (mnist_cnn_model.metrics_names[1], score[1]*100)) test_acc = mnist_cnn_model.evaluate(test_images, test_labels) test_acc #
# **Discussion**: Compare the validation accuracy and the test accuracy, and comment on whether we are overfitting.
    # ### Data Preprocessing : Meet the `ImageDataGenerator` class in `keras` # # # [(keras ImageGenerator documentation)](https://keras.io/preprocessing/image/) # The MNIST and other pre-loaded dataset are formatted in a way that is almost ready for feeding into the model. What about plain images? They should be formatted into appropriately preprocessed floating-point tensors before being fed into the network. # # The Dogs vs. Cats dataset that you’ll use isn’t packaged with Keras. It was made available by Kaggle as part of a computer-vision competition in late 2013, back when convnets weren’t mainstream. The data has been downloaded for you from https://www.kaggle.com/c/dogs-vs-cats/data The pictures are medium-resolution color JPEGs. # + # TODO: set your base dir to your correct local location base_dir = '../data/cats_and_dogs_small' import os, shutil # Set up directory information train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') test_dir = os.path.join(base_dir, 'test') train_cats_dir = os.path.join(train_dir, 'cats') train_dogs_dir = os.path.join(train_dir, 'dogs') validation_cats_dir = os.path.join(validation_dir, 'cats') validation_dogs_dir = os.path.join(validation_dir, 'dogs') test_cats_dir = os.path.join(test_dir, 'cats') test_dogs_dir = os.path.join(test_dir, 'dogs') print('total training cat images:', len(os.listdir(train_cats_dir))) print('total training dog images:', len(os.listdir(train_dogs_dir))) print('total validation cat images:', len(os.listdir(validation_cats_dir))) print('total validation dog images:', len(os.listdir(validation_dogs_dir))) print('total test cat images:', len(os.listdir(test_cats_dir))) print('total test dog images:', len(os.listdir(test_dogs_dir))) # - # So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success. #
# **Discussion**: Should you always do your own splitting of the data? How about shuffling? Does it always make sense?
    # + img_path = '../data/cats_and_dogs_small/train/cats/cat.70.jpg' # We preprocess the image into a 4D tensor from tensorflow.keras.preprocessing import image import numpy as np img = image.load_img(img_path, target_size=(150, 150)) img_tensor = image.img_to_array(img) img_tensor = np.expand_dims(img_tensor, axis=0) # Remember that the model was trained on inputs # that were preprocessed in the following way: img_tensor /= 255. # Its shape is (1, 150, 150, 3) print(img_tensor.shape) # - plt.imshow(img_tensor[0]) plt.show() # Why do we need an extra dimension here? # #### Building the network # + model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.summary() # - # For the compilation step, you’ll go with the RMSprop optimizer. Because you ended the network with a single sigmoid unit, you’ll use binary crossentropy as the loss. # + loss = tf.keras.losses.binary_crossentropy #optimizer = Adam(lr=0.001) optimizer = RMSprop(lr=1e-2) metrics = ['accuracy'] # Compile model model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # - # The steps for getting it into the network are roughly as follows: # # 1. Read the picture files. # 2. Convert the JPEG content to RGB grids of pixels. # 3. Convert these into floating-point tensors. # 4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values). # # It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically with the class `ImageDataGenerator`, which lets you quickly set up Python generators that can automatically turn image files on disk into batches of preprocessed tensors. This is what you’ll use here. # + from tensorflow.keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_dir, target_size=(150, 150), batch_size=20, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary') # - # Let’s look at the output of one of these generators: it yields batches of 150×150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder. For this reason, you need to break the iteration loop at some point: for data_batch, labels_batch in train_generator: print('data batch shape:', data_batch.shape) print('labels batch shape:', labels_batch.shape) break # Let’s fit the model to the data using the generator. You do so using the `.fit_generator` method, the equivalent of `.fit` for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does. 
# # Because the data is being generated endlessly, the Keras model needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn steps_per_epoch batches from the generator—that is, after having run for steps_per_epoch gradient descent steps - the fitting process will go to the next epoch. In this case, batches are 20 samples, so it will take 100 batches until you see your target of 2,000 samples. # # When using fit_generator, you can pass a validation_data argument, much as with the fit method. It’s important to note that this argument is allowed to be a data generator, but it could also be a tuple of Numpy arrays. If you pass a generator as validation_data, then this generator is expected to yield batches of validation data endlessly; thus you should also specify the validation_steps argument, which tells the process how many batches to draw from the validation generator for evaluation # + # %%time # Fit the model <--- always a good idea to time it verbose, epochs, batch_size, steps_per_epoch = 1, 5, 64, 100 history = model.fit_generator( train_generator, steps_per_epoch=steps_per_epoch, epochs=5, # TODO: should be 100 validation_data=validation_generator, validation_steps=50) # It’s good practice to always save your models after training. model.save('cats_and_dogs_small_1.h5') # - # Let’s plot the loss and accuracy of the model over the training and validation data during training: print(history.history.keys()) print(history.history['val_accuracy'][-1]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() plt.savefig('../images/batch8.png') # Let's try data augmentation datagen = ImageDataGenerator( rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') # These are just a few of the options available (for more, see the Keras documentation). # Let’s quickly go over this code: # # - rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures. # - width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally. # - shear_range is for randomly applying shearing transformations. # - zoom_range is for randomly zooming inside pictures. # - horizontal_flip is for randomly flipping half the images horizontally—relevant when there are no assumptions of - horizontal asymmetry (for example, real-world pictures). # - fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift. 
# # Let’s look at the augmented images # + from tensorflow.keras.preprocessing import image fnames = [os.path.join(train_dogs_dir, fname) for fname in os.listdir(train_dogs_dir)] img_path = fnames[3] # Chooses one image to augment img = image.load_img(img_path, target_size=(150, 150)) # Reads the image and resizes it x = image.img_to_array(img) # Converts it to a Numpy array with shape (150, 150, 3) x = x.reshape((1,) + x.shape) # Reshapes it to (1, 150, 150, 3) i=0 for batch in datagen.flow(x, batch_size=1): plt.figure(i) imgplot = plt.imshow(image.array_to_img(batch[0])) i += 1 if i % 4 == 0: break plt.show() # - # If you train a new network using this data-augmentation configuration, the network will never see the same input twice. But the inputs it sees are still heavily intercorrelated, because they come from a small number of original images—you can’t produce new information, you can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, you’ll also add a **Dropout** layer to your model right before the densely connected classifier. model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dropout(0.5)) model.add(Dense(512, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.summary() # + loss = tf.keras.losses.binary_crossentropy optimizer = RMSprop(lr=1e-4) metrics = ['acc', 'accuracy'] # Compile model model.compile(loss=loss, optimizer=optimizer, metrics=metrics) # + # Let’s train the network using data augmentation and dropout. train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True,) test_datagen = ImageDataGenerator(rescale=1./255) # Note that the validation data shouldn’t be augmented! train_generator = train_datagen.flow_from_directory( train_dir, target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(150, 150), batch_size=32, class_mode='binary') history = model.fit_generator( train_generator, steps_per_epoch=50, epochs=5, # TODO: should be 100 validation_data=validation_generator, validation_steps=50) # save model if needed model.save('cats_and_dogs_small_2.h5') # - # And let’s plot the results again. Thanks to data augmentation and dropout, you’re no longer overfitting: the training curves are closely tracking the validation curves. You now reach an accuracy of 82%, a 15% relative improvement over the non-regularized model. (Note: these numbers are for 100 epochs..) 
print(history.history.keys()) print(history.history['val_accuracy'][-1]) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Accuracy with data augmentation') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss with data augmentation') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() #plt.savefig('../images/batch8.png') # By using regularization techniques even further, and by tuning the network’s parameters (such as the number of filters per convolution layer, or the number of layers in the network), you may be able to get an even better accuracy, likely up to 86% or 87%. But it would prove difficult to go any higher just by training your own convnet from scratch, because you have so little data to work with. As a next step to improve your accuracy on this problem, you’ll have to use a pretrained model. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit (conda) # metadata: # interpreter: # hash: a8f61be024eba58adef938c9aa1e29e02cb3dece83a5348b1a2dafd16a070453 # name: python3 # --- # # Chapter 1 Financial Time Series and Their Characteristics import numpy as np import pandas as pd import statsmodels.api as sm import statsmodels.formula.api as smf import matplotlib.pyplot as plt from scipy import stats # da = pd.read_table(r'data\d-ibm3dx7008.txt', sep=r"\s+", header=0) da = pd.read_csv(r'data\d-ibm3dx7008.txt', sep=r"[\f\n\r\t\v]", header=0, engine='python') da.head() da = pd.read_table(r'data\d-ibm3dx7008.txt', sep=r"\s+", header=0) da.head() da.size da.iloc[:, 0:1] ibm = da.iloc[:, 1:2] ibm ibm = ibm * 100 ibm.describe() ibm_rtn = np.array(ibm) fig, axes = plt.subplots(2, 1, figsize=(10,8)) axes[0].plot(ibm_rtn, alpha=0.6) axes[0].set(title='Time Series of ibm_rtn.') axes[1].hist(ibm_rtn, alpha=0.6, bins=200) axes[1].set(title='Histogram of ibm_rtn.') plt.show() # + def normtest(data): skew = data.skew()[0] t_skew = skew / np.sqrt(6/len(data)) p_skew = stats.norm.sf(abs(t_skew)) * 2 kurt = data.kurt()[0] t_kurt = kurt / np.sqrt(24/len(data)) p_kurt = stats.norm.sf(abs(t_kurt)) * 2 JBtest = (skew**2 / 6 + kurt**2 /24) * len(data) p_JBtest = stats.chi2.sf(JBtest, 2) result = pd.DataFrame({'Skewness': skew, 'P-Skew': p_skew, 'Kurtosis': kurt, 'P-Kurt': p_kurt, 'JBtest': JBtest, 'P-JBtest': p_JBtest}, index=['value']) return result result = normtest(ibm) result # - libm = np.log(da["rtn"] + 1) * 100 libm libm.describe() # student't-Test tstatics, pval = stats.ttest_1samp(libm, 0) print(f"tStatics = {tstatics:.10f}\nP-Value = {pval:.10f}") if pval > 0.05: print("Cann't reject the NULL hypothesis at the 5% significance level.") else: print("Reject the NULL hypothesis at the 5% significance level.") chiStatics, pval, *_ = sm.stats.stattools.jarque_bera(libm) print(f"chiStatics = {chiStatics:.10f}\nP-Value = {pval:.10f}") if pval > 0.05: print("Cann't reject the NULL hypothesis at the 5% significance level.") else: print("Reject the NULL hypothesis at the 5% significance level.") # ## Exercise 1.2 da = pd.read_csv(r'data\m-gm3dx7508.txt', sep=r"\s+", header=0) da.head() gm = da.iloc[:, 1:2] dates = pd.date_range('19980101', periods=len(gm)) # dates = np.array(dates) gm_ts = pd.DataFrame(np.array(gm), 
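# The statistics computed by `normtest` above are the standard large-sample normality tests. With sample skewness $\hat S$, sample excess kurtosis $\hat K$ (what the `pandas` `kurt()` method returns), and $T$ observations,
#
# $$t_S = \frac{\hat S}{\sqrt{6/T}}, \qquad t_K = \frac{\hat K}{\sqrt{24/T}}, \qquad JB = T\left(\frac{\hat S^2}{6} + \frac{\hat K^2}{24}\right) \sim \chi^2_2,$$
#
# which is exactly what the code computes: two-sided normal p-values for $t_S$ and $t_K$, and a $\chi^2_2$ p-value for $JB$.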
index=dates) gm_ts.head() fig, axes = plt.subplots(2, 1, figsize=(12,8)) axes[0].plot(gm) axes[1].plot(gm_ts) plt.show() # fig, axes = plt.subplots(2, 1, figsize=(12,8)) sm.graphics.tsa.plot_acf(gm, lags=40) sm.graphics.tsa.plot_acf(gm_ts, lags=40) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + x = 25 def printer(): x = 50 return x print x print printer() # + name = 'This is a global name' def greet(): # Enclosing function name = 'Sammy' def hello(): print 'Hello '+name hello() greet() # + x = 50 def func(x): print 'x is', x x = 2 print 'Changed local x to', x func(x) print 'x is still', x # - x # + x = 50 def func(): global x print 'This function is now using the global x!' print 'Because of global x is: ', x x = 2 print 'Ran func(), changed global x to', x print 'Before calling func(), x is: ', x func() print 'Value of x (outside of func()) is: ', x # - x # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #
# Exploring Passenger Survival in a Plane Crash
    # --- # # ### Team: # # * __1705012__ # * __1705019__ # * __1705068__ # * __1705689__ # # This notebook covers the Machine Learning process used to analyse the plane crash survivors data provided in `Classification_train.csv` and `Classification_test.csv`
    # # The method used for predictions is Logistic Regression which gives us an accuracy of _95%_ # # --- # ## Importing Necessary Libraries import numpy as np import seaborn as sns import pandas as pd import matplotlib.pyplot as plt # Importing `warnings` module to ignore `FutureWarning` and `DeprecatedWarning`
    # These warnings show us what features might get deprecated in future versions. The features work fine on the latest version as of today _3rd Nov 2018_ import warnings warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=DeprecationWarning) # ## Converting CSV into a DataFrame # A `CSV` file can be loaded as a DataFrame using `pandas.read_csv` # After loading, printing `info` and `head` to see what we're working with # ### Information about the features # * __PassengerId__ ID of the passenger # * __Survived__ Passenger survived or not. 1 for Survived, 0 for did not # * __Pclass__ Classes like Business, Economy, etc. # * __Name__ # * __Sex__ # * __Age__ # * __SibSp__ Number of Siblings or Spouses # * __Parch__ Number of Parents or Children # * __Ticket__ Ticket Number # * __Fare__ Ticket Fare # * __Cabin__ Cabin Number # * __Embarked__ Embarked from which Airport dataset=pd.read_csv("Classification_train.csv") dataset.head() dataset.info() # ## Visualising Data # Visualising Data is essential to see which features are more important and which features can be dropped sns.barplot(x="Embarked", y="Survived", hue="Sex", data=dataset); # As we can see from the above `barplot`, more females survived in a plane crash by a high margin sns.heatmap(dataset.corr(), annot=True) # The _correlation heatmap_ shows that __Survived__ is most strongly related to __Fare__, which means that higher fares mean better security in case of a mishap # ## Further Analysing the Data dataset.describe() # We can see that the above description did not account for the columns __Name__, __Sex__, __Ticket__, __Cabin__, and __Embarked__ as they are non-numeric # # ### Non Numeric Values # The following code gives the number of non-numeric string/categorical data, unique values, and the most frequent values with their frequency dataset.describe(include=['O']) dataset.head() # There seem to be some `NaN` values in the column __Cabin__ # This shows that no data was availabe for the particular value. # Getting `NaN` values is common when dealing with real-world data and other columns might have missing data as well. We should check for these gaps before trying to apply any _Machine Learning_ algorithms to the dataset. # # ## Non-Existent Data # # Thankfully, a `DataFrame` class contains a function `isnull()` which checks for `NaN` values and returns a boolean value `True` or `False`.
    # We can count the number of `NaN` using `sum()` method dataset.isnull().sum() dataset[['Pclass','Survived']].groupby(by=['Pclass'],as_index=False).mean() # p1 class passenger survived more # ## Removing Unimportant Data # By analysing at the dataset we see that the following features play an insignificant role in survivability. # # * __PassengerId__ Irrelevant to survival # * __Name__ The title(Dr. Mr. etc) may or may not be useful high chance of irrelevance # * __Cabin__ Too many `NaN`/`null` values might interfere with the accuracy # * __Ticket__ _Fare_ has already been considered thus eliminating the need to analyse the ticket number # # ### Cleaning the Dataset # We can drop the unimportant columns from the dataset # Printing the `head()` to see what we're left with # + dataset = dataset.drop(columns=['PassengerId', 'Name', 'Ticket', 'Cabin']) dataset.head() # - # ### Resolving `NaN` values # The __Embarked__ column is a non-numeric categorical set with only one missing element.
    # The following code fills the `NaN` value with the most frequent value # + # Getting the most occured element using pandas get_dummies() most_occ = pd.get_dummies(dataset['Embarked']).sum().sort_values(ascending=False).index[0] # The above snippet makes a descending sorted array of the Embarked column and gets the first value def replace_nan(x): #Function to get the most occured element in case of null else returns the passed value if pd.isnull(x): return most_occ else: return x #Mapping the dataset according to replace_nan() function dataset['Embarked'] = dataset['Embarked'].map(replace_nan) # - # ## Splitting into Features and Dependent Variables # `X` will contain all the features # `y` will contain all the values observed that is the __Survived__ column # # So far, we've been dealing with the training set # + # Select all rows and all columns except 0 X=dataset.iloc[:,1:8].values # Select all rows from column 0 y=dataset.iloc[:,0].values # - # ## Importing the Test Data # Since we've dropped unimportant features from our training data, the testing data must also be in the same format for accurately predicting the result. Using the same __cleaning__ process as we did with the `train` dataset # Load CSV into DataFrame X_test=pd.read_csv("Classification_test.csv") y_test=pd.read_csv("Classification_ytest.csv") # + # Load CSV into DataFrame X_test=pd.read_csv("Classification_test.csv") y_test=pd.read_csv("Classification_ytest.csv") X_test.head() # - # `X_test` is in the same format as our dataset, excluding the __Survived__ Column # __Columns that need to be dropped are :__ # * PassengerId # * Name # * Ticket # * Cabin X_test= X_test.drop(columns = ['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1).iloc[:,:].values y_test.head() # `y_test` only needs to drop the __PassengerId__ column y_test=y_test.drop(columns = 'PassengerId', axis=1).iloc[:,:].values # Now that the `train` and `test` data is in the same format, we can proceed to manipulation of data # # ## Using Imputer to fill `NaN` values # # __Age__ column has many `NaN` values which we will fill with the _median_/_most frequent_ age from the dataset # __Fare__ column has some `NaN` values in the `test` dataset which we plan on filling with the mean fare # + # Age column having 177 missing values : dataset['Age'].isnull().sum() in training # Also for test dataset. from sklearn.preprocessing import Imputer # Check for NaN values and set insert strategy to median imputer = Imputer(missing_values = 'NaN', strategy = 'median', axis = 0) # imputer only accepts 2D matrices # Passing values [:, n:n+1] only passes the nth columnn # Here the 2nd column is the Age imputer = imputer.fit(X[:,2:3]) X[:,2:3] = imputer.transform(X[:,2:3]) imputer = Imputer(missing_values = 'NaN', strategy = 'median', axis = 0) imputer = imputer.fit(X_test[:,2:3]) X_test[:,2:3] = imputer.transform(X_test[:,2:3]) # Using insert strategy mean imputer = Imputer(missing_values = 'NaN', strategy = 'mean', axis = 0) # The 5th column is the Fare imputer = imputer.fit(X_test[:,5:6]) X_test[:,5:6] = imputer.transform(X_test[:,5:6]) # - # After the above snippet has executed, `imputer` will have replaced all the NaN values with the specified insert strategy. 
# Now we can move on to encoding and fitting the dataset into an Algorithm # # ## Encoding # ### Label Encoding # `LabelEncoder` is used to convert non-numerical string/categorical values into numerical values which can be processed using various `sklearn` classes # It encodes values between `0` and `n-1`; where `n` is the number of categories # # The features which need encoding are: # * Sex # * Embarked # + from sklearn.preprocessing import LabelEncoder, OneHotEncoder labelencoder_X = LabelEncoder() # Column 6 is Embarked X[:, 6] = labelencoder_X.fit_transform(X[:, 6]) X_test[:, 6] = labelencoder_X.fit_transform(X_test[:, 6]) # Column 1 is Sex X[:, 1] = labelencoder_X.fit_transform(X[:, 1]) X_test[:, 1] = labelencoder_X.fit_transform(X_test[:, 1]) # - # ### One Hot Encoding # Often when we use `LabelEncoder` on more than 2 categories the Machine Learning algorithm might try to find a relation between the values such as _Increasing_ or _Decreasing_ or in a pattern. This results in lower accuracy. # # To avoid this we can further encode the `Labels` using `OneHotEncoder`, it takes a column which has categorical data, which has been `label encoded`, and then splits the column into multiple columns. The numbers are replaced by `1`s and `0`s, depending on which column has what value. Thus the name `OneHotEncoder` # + onehotencoder = OneHotEncoder(categorical_features = [0,1,6]) # 0 : Pclass column # 1 : Sex # 6 : Embarked # OneHotEncoder takes and array as input X = onehotencoder.fit_transform(X).toarray() X_test = onehotencoder.fit_transform(X_test).toarray() # - # With `One Hot Encoding` complete, we can proceed to `fit` the data into our `LogisticRegressor` # # ## Predicting the Output # # ### `LogisticRegression` # `LogisticRegression` is used only when the __dependent variable__/__prediction__ is binary i.e only consists of two values. `LogisticRegression` is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables. # + from sklearn.linear_model import LogisticRegression #Initializing the regressor lr = LogisticRegression() # Fitting the regressor with training data lr.fit(X,y) # Getting predictions by feeding features from the test data y_pred = lr.predict(X_test) # - # ## Checking the Predictions # Creating a scatter plot of `actual` versus `predicted` values plt.scatter(y_test, y_pred, marker='x') # ### Confusion Matrix # `ConfusionMatrix` is used to compare the data predicted versus the actual output. # It is a matrix in the form: # ![Confusion Matrix](https://rasbt.github.io/mlxtend/user_guide/evaluate/confusion_matrix_files/confusion_matrix_1.png) # + from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test,y_pred) print(cm) # - # ### Classification Report # To get the accuracy, we use `ClassificationReport` which measures the acuracy of the algorithm based on a `ConfusionMatrix`
    # An ideal `classifier` with `100%` accuracy would produce a pure _diagonal matrix_ which would have all the points predicted in their correct class. # + from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) # - # ## Conclusion # # After analysing the given dataset and using `LogisticRegression` on the features, we see that the algorithm can accurately predict the survivability of a Passenger `95%` of the time. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_hub as tfhub import tensorflow_datasets as tfds # + (raw_train, raw_validation, raw_test), metadata = tfds.load( 'cats_vs_dogs', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, ) def format_image(image, label): image = tf.image.resize(image, (224, 224)) / 255.0 return image, label BATCH_SIZE = 16 num_examples = metadata.splits['train'].num_examples num_classes = metadata.features['label'].num_classes train_batches = (raw_train.shuffle(num_examples // 4) .map(format_image).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)) validation_batches = (raw_validation .map(format_image) .batch(BATCH_SIZE) .prefetch(tf.data.AUTOTUNE)) test_batches = raw_test.map(format_image).batch(BATCH_SIZE) # - # ### Model from tensorflow.keras.applications.resnet50 import ResNet50 # + base_model = ResNet50(input_shape=(224,224,3), include_top=False,pooling='avg') # Customize the output layers flatten_all = tf.keras.layers.Flatten()(base_model.output) Dense_1 = tf.keras.layers.Dense(units=512,activation='relu')(flatten_all) prediction_layer = tf.keras.layers.Dense(units=2, activation='softmax')(Dense_1) # Concatenate the model model = tf.keras.models.Model(inputs=base_model.input, outputs=prediction_layer) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # hist = model.fit(train_batches, # epochs=EPOCHS, # validation_data=validation_batches) # model.summary() # - # ## # ## Quantization Aware Training with Keras Models import tensorflow_model_optimization as tfmot tfmot.quantization.keras.quantiz # + import tensorflow_model_optimization as tfmot quantize_model = tfmot.quantization.keras.quantize_model # q_aware stands for for quantization aware. q_aware_model = quantize_model(model) # `quantize_model` requires a recompile. 
q_aware_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # q_aware_model.summary() q_aware_model.fit(train_batches, validation_data=validation_batches, epochs=5,) # - tf.saved_model.save(q_aware_model,'Test/') # + converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model) converter.optimizations = [tf.lite.Optimize.DEFAULT] quantized_tflite_model = converter.convert() # - with open('test.tflite','wb') as f: f.write(quantized_tflite_model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # メインのノートブック (データマイニングのアンサンブル学習) # ## 必要モジュールの読み込み import datetime import logging import argparse import json import numpy as np import sys import pickle import pandas as pd import xgboost as xgbt import warnings import pydotplus as pdp from prettytable import PrettyTable from sklearn.model_selection import train_test_split from sklearn.metrics import (roc_curve, auc, accuracy_score) from sklearn import tree from models.randomforest import RF_train_and_predict from models.lgbm import LGBM_train_and_predict from models.xgboost import XGB_train_and_predict from models.catboost import CAT_train_and_predict from models.adaboost import AD_train_and_predict from __init__ import * from tools.lime import * # ## 説明変数と目的変数の読み込み # + warnings.filterwarnings('ignore') file_path = '../config/default.json' config = json.load(open(file_path,'r',encoding="shift_jis")) now = datetime.datetime.now() logging.basicConfig( filename='../logs/log_{0:%Y%m%d%H%M%S}.log'.format(now), level=logging.DEBUG ) logging.debug('../logs/log_{0:%Y%m%d%H%M%S}.log'.format(now)) feats = config['features'] logging.debug('feats: {}'.format(feats)) target_name = config['target_name'] logging.debug('target_name: {}'.format(target_name)) X_train_all = load_datasets(feats) logging.debug('X_train_all.shape: {}'.format(X_train_all.shape)) y_train_all = load_target(target_name) logging.debug('y_train_all.shape: {}'.format(y_train_all.shape)) # random_state_value random_state=0 (train_x, test_x, train_y, test_y) = train_test_split(X_train_all, y_train_all, test_size=0.2, random_state=random_state) # + #第1モデルを作成(randomForest,Lightgbm,catboost,xgboost,adaboost) model_rf = RF_train_and_predict(X_train_all, y_train_all) model_lgbm = LGBM_train_and_predict(X_train_all, y_train_all) model_cat = CAT_train_and_predict(X_train_all, y_train_all) model_xgb = XGB_train_and_predict(X_train_all, y_train_all) model_ada = AD_train_and_predict(X_train_all, y_train_all) with open('../models/pickle/rf_model.pickle', mode='wb') as fp: pickle.dump(model_rf, fp) with open('../models/pickle/lgbm_model.pickle', mode='wb') as fp: pickle.dump(model_lgbm, fp) with open('../models/pickle/cat_model.pickle', mode='wb') as fp: pickle.dump(model_cat, fp) with open('../models/pickle/xgb_model.pickle', mode='wb') as fp: pickle.dump(model_xgb, fp) with open('../models/pickle/ada_model.pickle', mode='wb') as fp: pickle.dump(model_ada, fp) #特徴の重要度データフレームを作成 feature_dataframe = pd.DataFrame( {'features': feats, 'Random Forest feature importances': model_rf.feature_importances_, 'Lightgbm feature importances': model_lgbm.feature_importances_, 'CatBoost feature importances': model_cat.feature_importances_, 'XGBoost feature importances': model_xgb.feature_importances_, 'AdaBoost feature importances': model_ada.feature_importances_, }) 
feature_dataframe.to_csv('../data/interim/feature_dataframe.csv', encoding="shift_jis") # + #第2モデルの学習データを作成 base_predictions_train = pd.DataFrame({ 'RandomForest': model_rf.predict(train_x), 'Lightgbm': model_lgbm.predict(train_x), 'CatBoost': model_cat.predict(train_x).ravel(), 'XGBoost': model_xgb.predict(train_x), 'AdaBoost': model_ada.predict(train_x) }) base_predictions_train.to_csv('../data/interim/base_predictions_train.csv') #第2モデルのテストデータを作成 base_predictions_test = pd.DataFrame( { 'RandomForest': model_rf.predict(test_x), 'Lightgbm': model_lgbm.predict(test_x), 'CatBoost': model_cat.predict(test_x).ravel(), 'XGBoost': model_xgb.predict(test_x), 'AdaBoost': model_ada.predict(test_x) }) base_predictions_test.to_csv('../data/interim/base_predictions_test.csv') gbm = xgbt.XGBClassifier( #learning_rate = 0.02, random_state=0, n_estimators= 2000, max_depth= 4, min_child_weight= 2, #gamma=1, gamma=0.9, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread= -1, scale_pos_weight=1).fit(base_predictions_train, train_y) predictions = gbm.predict(base_predictions_test) logging.debug('predictions.shape: {}'.format(predictions.shape)) with open('../data/interim/stacking_model.pickle', mode='wb') as fp: pickle.dump(gbm, fp) #CSVファイルの作成 StackingSubmission = pd.DataFrame({ '決裁区分': predictions}) StackingSubmission.to_csv('../data/processed/StackingSubmission.csv', index=False, encoding="shift_jis") logging.debug('accuracy_score: {}'.format(accuracy_score(predictions, test_y))) # + #LIME #クラス名(今回は「同意」「条件付同意」) defaults.jsonにて定義 class_names = config['class_name'] #可変の説明変数(例:申請金額、金利など) LIMEの結果に出力するものを、この変数に定義されたものだけに絞る。 defaults.jsonにて定義 variable_features = config['variable_features'] #カテゴリ変数 defaults.jsonにて定義 #categorical_features = config['categorical_features'] # テスト用サンプル test_sample = test_x[2:3] #RandomForestモデルでのLIMEの結果 lime_result_rf = lime_predict( model=model_rf, X_train_all= train_x, y_train_all= train_y, x_test=test_sample, feature_names=feats, num_features=len(feats), class_names=class_names, discretize_continuous=False, variable_features=variable_features #categorical_features=categorical_features ) #XGBoostでのLIMEの結果 lime_result_xgb = lime_predict( model=model_xgb, X_train_all= train_x, y_train_all= train_y, x_test=test_sample, feature_names=feats, num_features=len(feats), class_names=class_names, discretize_continuous=False, variable_features=variable_features #categorical_features=categorical_features ) #LightGBMモデルでのLIMEの結果 lime_result_lgbm = lime_predict( model=model_lgbm, X_train_all= train_x, y_train_all= train_y, x_test=test_sample, feature_names=feats, num_features=len(feats), class_names=class_names, discretize_continuous=False, variable_features=variable_features #categorical_features=categorical_features ) #CatboostモデルでのLIMEの結果 lime_result_cat = lime_predict( model=model_cat, X_train_all= train_x, y_train_all= train_y, x_test=test_sample, feature_names=feats, num_features=len(feats), class_names=class_names, discretize_continuous=False, variable_features=variable_features #categorical_features=categorical_features ) #AdaboostモデルでのLIMEの結果 lime_result_ada = lime_predict( model=model_ada, X_train_all= train_x, y_train_all= train_y, x_test=test_sample, feature_names=feats, num_features=len(feats), class_names=class_names, discretize_continuous=False, variable_features=variable_features #categorical_features=categorical_features ) # + # 結果1 # 同意か要見直し協議か 割合表示 model_list = [ model_rf, model_lgbm, model_cat, model_xgb, model_ada ] model_summary_table(model_list, 
test_sample) # - # 結果2 # 結果1と判断された根拠をLIMEを使ってランキング表示 x = PrettyTable() x.field_names = ['ランキング', '判断根拠'] for i, feature in enumerate(lime_result_rf): x.add_row([i+1, feature[0]]) print(x.get_string()) # + # 結果3 # グラフ用のdataframe取得 df_x_train = df_load_datasets(feats) # テスト用サンプル df_test_sample = df_x_train[2:3] #LIMEによって導き出された特徴量を訓練データと比較する為のグラフ描画 graph_lime_predict_features(lime_result_rf, df_x_train, df_test_sample) # + # 結果4 # 類似している協議書の番号を出力 # cos類似度を計算する print('今回の協議と類似しているものは') print(cosine_similar(df_test_sample,3)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Synthèse de SI # # _Comment rendre la météorologie scientifique accessible à tous?_ # # Par , et # + [markdown] slideshow={"slide_type": "subslide"} # # Plan # # 1. Analyse fonctionnelle # 1. Analyse interne # 1. Analyse externe # 1. Création du pluviomètre # 1. Modélisation # 1. Simulation # 1. Expérimentation # 1. Lien avec notre TPE # 1. La météorologie # 1. L'expérience d'Amaury # # + [markdown] slideshow={"slide_type": "slide"} # # # Analyse fonctionnelle # # ## Analyse externe # # ![](img/bac.png "") # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/pieuvre.png "") # + [markdown] slideshow={"slide_type": "subslide"} # ## Analyse interne # # *Chaîne d'énergie* # # ![](img/ce.png "") # + [markdown] slideshow={"slide_type": "subslide"} # *Schéma FAST* # ![](img/sf.png "") # + [markdown] slideshow={"slide_type": "slide"} # # Création du pluviomètre # # ## Modélisation # # ![](img/r.png "") # # Structure modélisée sur Solidworks # # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/a.png "") # # Modélisation balancier importé de Grabcad # # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/v.png "") # Modélisation support entonnoir # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/modélisation couleur.png "") # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/pivot.png "") # # Schéma cinématique d'un pivot # + [markdown] slideshow={"slide_type": "slide"} # # Simulation # # ### Programmes Mathlab: # # ![](img/mathlab.png "") # # Simulink # # # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/stateflow.png "") # # Stateflow # + [markdown] slideshow={"slide_type": "subslide"} # ### Programmes Microbit: # # ![](img/pd.png "") # # *Programme défectueux:* # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/java.jpg "") # # Passage en Javascript # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/microbitfaux.jpeg "") # # Programme final # # + [markdown] slideshow={"slide_type": "slide"} # # Expérimentation # # ![](img/structure.jpg "") # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/circuit.jpg "") # # Capteur ILS relié à la Microbit via une Breadboard # + [markdown] slideshow={"slide_type": "subslide"} # ![](img/circuit2.jpg "") # # PIN 1 et 3V utilisées # + [markdown] slideshow={"slide_type": "slide"} # # Expérimentation en direct # + [markdown] slideshow={"slide_type": "slide"} # # Lien avec le TPE # # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # ### Download the data and load it to Pandas. 
# # You can find them [here](https://drive.google.com/file/d/1NY6cmF9Shjw-dD7BD6bNmfcIVz-kQcFR/view?usp=sharing). titles = pd.read_csv('data/titles.csv') titles.head() cast = pd.read_csv('data/cast.csv') cast.head() # ### Using groupby(), count the number of films that have been released each decade in the history of cinema. titles['title'].groupby((titles['year']//10)*10).count() # ### Use groupby() count the number of "Hamlet" films made each decade. filter_hamlet = titles['title'] == 'Hamlet' titles[filter_hamlet]['title'].groupby((titles['year']//10)*10).count() # ### How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s? filters = (cast['n'] == 1) & (cast['year'].between(1950,1959)) cast[filters][['year','type']].groupby(['year','type']).size() # ### In the 1950s decade taken as a whole, how many total roles were available to actors, and how many to actresses, for each "n" number 1 through 5? filters = (cast['n'].between(1,5)) & (cast['year'].between(1950,1959)) cast[filters][['n','type']].groupby(['n','type']).size() # ### Use groupby() to determine how many roles are listed for each of the Pink Panther movies. filter_panther = (cast['title'] == 'The Pink Panther') cast[filter_panther][['year','type']].groupby('year').count() #[['year','type']] # ### List, in order by year, each of the films in which has played more than 1 role. filter_oz = (cast['name'] == '') x = cast[filter_oz][['year','title']].groupby('title').size() x[x>1] # ### List each of the characters that has portrayed at least twice. x = cast[filter_oz]['character'].value_counts() x[x>1] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bjkU115JqmMu" colab_type="text" # # Unit Process Selection # # # + id="WUSHH6RQrB3-" colab_type="code" colab={} # !pip install aguaclara from aguaclara.core.units import unit_registry as u import aguaclara as ac import pandas as pd import numpy as np import matplotlib.pyplot as plt # + id="CwBsWVHxuZBV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d0642c3f-fb4d-4fc2-e85b-04de299cca17" executionInfo={"status": "ok", "timestamp": 1572356124260, "user_tz": 240, "elapsed": 298, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDmYNDq6ij0468RSHe1goXE_t9gbSPdq5OAsU4-ejQ=s64", "userId": "08369668289863895493"}} v_c = (0.5 * u.gal/u.min/u.ft**2).to(u.mm/u.s) v_c # + [markdown] id="_P4k2svlq6Ke" colab_type="text" # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- """ The MIT License (MIT) Copyright (c) 2021 NVIDIA Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ # This code example demonstrates how to use a recurrent neural network to solve a time series prediction problem. The goal is to predict future sales data based on historical values. More context for this code example can be found in the section "Programming Example: Forecasting Book Sales" in Chapter 9 in the book Learning Deep Learning by (ISBN: 9780137470358). # # We start with initialization code. First, we import modules that we need for the network. We also load the data file into an array. We then split the data into training data (the first 80% of the data points) and test data (the remaining 20% of the months). The data is assumed to be in the file ../data/book_store_sales.csv. # + import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import SimpleRNN import logging tf.get_logger().setLevel(logging.ERROR) EPOCHS = 100 BATCH_SIZE = 16 TRAIN_TEST_SPLIT = 0.8 MIN = 12 FILE_NAME = '../data/book_store_sales.csv' def readfile(file_name): file = open(file_name, 'r', encoding='utf-8') next(file) data = [] for line in (file): values = line.split(',') data.append(float(values[1])) file.close() return np.array(data, dtype=np.float32) # Read data and split into training and test data. sales = readfile(FILE_NAME) months = len(sales) split = int(months * TRAIN_TEST_SPLIT) train_sales = sales[0:split] test_sales = sales[split:] # - # The next code snippet plots the historical sales data. The data shows a clear seasonal pattern along with an indication that the overall trend in sales has changed over time, presumably due to increased online sales. The data starts in 1992 and ends in March 2020. The drop for the last month was likely caused by the COVID-19 pandemic hitting the United States. # # Plot dataset x = range(len(sales)) plt.plot(x, sales, 'r-', label='book sales') plt.title('Book store sales') plt.axis([0, 339, 0.0, 3000.0]) plt.xlabel('Months') plt.ylabel('Sales (millions $)') plt.legend() plt.show() # For comparison purposes, create output corresponding to a naive model that predicts that the sales next month will be the same as the sales this month. Compare this to the correct data by plotting the values side by side. # # Plot naive prediction test_output = test_sales[MIN:] naive_prediction = test_sales[MIN-1:-1] x = range(len(test_output)) plt.plot(x, test_output, 'g-', label='test_output') plt.plot(x, naive_prediction, 'm-', label='naive prediction') plt.title('Book store sales') plt.axis([0, len(test_output), 0.0, 3000.0]) plt.xlabel('months') plt.ylabel('Monthly book store sales') plt.legend() plt.show() # The next step is to standardize the data points by subtracting the mean and dividing by the standard deviation of the training examples. # # Standardize train and test data. # Use only training seasons to compute mean and stddev. 
mean = np.mean(train_sales) stddev = np.std(train_sales) train_sales_std = (train_sales - mean)/stddev test_sales_std = (test_sales - mean)/stddev # In our previous examples, the datasets were already organized into individual examples. For example, we had an array of images serving as input values and an associated array of classes serving as expected output values. However, the data that we created in this example is raw historical data and not yet organized as a set of training and test examples (see the section in the book for an illustration of how we need it organized). This is the next step in our code example. The code snippet below allocates tensors for the training data and initializes all entries to 0. It then loops through the historical data and creates training examples, then does the same thing with the test data. # + # Create training examples. train_months = len(train_sales) train_X = np.zeros((train_months-MIN, train_months-1, 1)) train_y = np.zeros((train_months-MIN, 1)) for i in range(0, train_months-MIN): train_X[i, -(i+MIN):, 0] = train_sales_std[0:i+MIN] train_y[i, 0] = train_sales_std[i+MIN] # Create test examples. test_months = len(test_sales) test_X = np.zeros((test_months-MIN, test_months-1, 1)) test_y = np.zeros((test_months-MIN, 1)) for i in range(0, test_months-MIN): test_X[i, -(i+MIN):, 0] = test_sales_std[0:i+MIN] test_y[i, 0] = test_sales_std[i+MIN] # - # We are finally ready to define and train our network. This is shown in the code snippet below. # Create RNN model model = Sequential() model.add(SimpleRNN(128, activation='relu', input_shape=(None, 1))) model.add(Dense(1, activation='linear')) model.compile(loss='mean_squared_error', optimizer = 'adam', metrics =['mean_absolute_error']) model.summary() history = model.fit(train_X, train_y, validation_data = (test_X, test_y), epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=2, shuffle=True) # For comparison, create naive predictions based on standardized data. # Create naive prediction based on standardized data. test_output = test_sales_std[MIN:] naive_prediction = test_sales_std[MIN-1:-1] mean_squared_error = np.mean(np.square(naive_prediction - test_output)) mean_abs_error = np.mean(np.abs(naive_prediction - test_output)) print('naive test mse: ', mean_squared_error) print('naive test mean abs: ', mean_abs_error) # To shed some light on how this affects the end behavior, let us use our newly trained model to do some predictions and then plot these predictions next to the actual values. The code snippet below demonstrates how this can be done. We first call model.predict with the test input as argument. The second argument is the batch size, and we state the length of the input tensor as the batch size (i.e., we ask it to do a prediction for all the input examples in parallel). During training, the batch size will affect the result, but for prediction, it should not affect anything except for possibly runtime. We could just as well have used 16 or 32 or some other value. The model will return a 2D array with the output values. Because each output value is a single value, a 1D array works just as well, and that is the format we want in order to enable plotting the data, so we call np.reshape to change the dimensions of the array. The network works with standardized data, so the output will not represent demand directly. We must first destandardize the data by doing the reverse operation compared to the standardization. That is, we multiply by the standard deviation and add the mean. We then plot the data.
# + # Use trained model to predict the test data predicted_test = model.predict(test_X, len(test_X)) predicted_test = np.reshape(predicted_test, (len(predicted_test))) predicted_test = predicted_test * stddev + mean # Plot test prediction. x = range(len(test_sales)-MIN) plt.plot(x, predicted_test, 'm-', label='predicted test_output') plt.plot(x, test_sales[-(len(test_sales)-MIN):], 'g-', label='actual test_output') plt.title('Book sales') plt.axis([0, 55, 0.0, 3000.0]) plt.xlabel('months') plt.ylabel('Predicted book sales') plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os from six.moves import urllib DOWNLOAD_ROOT = "https://raw.githubusercontent.com/kscanne/1070/master/lab07/" TRAIN_URL = DOWNLOAD_ROOT + "train.csv" TEST_URL = DOWNLOAD_ROOT + "test.csv" DATASET_PATH = "/Users/goople/slu-workspace/ml/handson-ml" def fetch_data(train_url=TRAIN_URL, test_url=TEST_URL, dataset_path=DATASET_PATH): if not os.path.isdir(dataset_path): os.makedirs(dataset_path) train_path = os.path.join(dataset_path, "train.csv") test_path = os.path.join(dataset_path, "test.csv") urllib.request.urlretrieve(train_url, train_path) urllib.request.urlretrieve(test_url, test_path) # - fetch_data() # + import numpy as np def load_data(dataset_path=DATASET_PATH): csv_train_path = os.path.join(dataset_path, "train.csv") csv_test_path = os.path.join(dataset_path, "test.csv") return np.genfromtxt(csv_train_path, delimiter=','), np.genfromtxt(csv_test_path, delimiter=',') # - train_data, test_data = load_data() print(train_data) y_train = train_data[:,0] X_train = train_data[:,1:] y_test = test_data[:,0] X_test = test_data[:,1:] y_train X_train # + import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) def show_digit(some_digit): some_digit_image = some_digit.reshape(28, 28) plt.imshow(some_digit_image, cmap = mpl.cm.binary, interpolation="nearest") plt.axis("off") plt.show() # - show_digit(X_train[2500]) y_train[2500] def binary_classifier(example, digit, K): S = [] for x, y in zip(X_train, y_train): diff = np.sum((np.absolute(example-x))) S.append((diff, y)) S.sort(key=lambda x: x[0]) y = 0 for k in range(K): if S[k][1] == digit: y += 1 else: y -= 1 return np.sign(y) def multiclass_classifier(example, K): digit = 0 while binary_classifier(example, digit, K) < 1 and digit < 10: digit += 1 return digit multiclass_classifier(X_train[2500], 100) def accuracy(): TP = 0 for x,y in zip(X_test, y_test): if multiclass_classifier(x, 10) == y: TP += 1 return TP/100 accuracy() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tf-gpu-25 # language: python # name: tf-gpu-25 # --- # + # importing required libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns # Styles plt.style.use('ggplot') sns.set_style('whitegrid') sns.set() # Warning import warnings warnings.filterwarnings('ignore') # Text Preprocessing import string from nltk.corpus import stopwords from nltk.tokenize import word_tokenize import spacy nlp = spacy.load("en_core_web_sm") # + # loading the dataset raw_sms = pd.read_csv('spam.csv', encoding='latin-1') # + # Checking top 10 rows 
raw_sms.head() # + # Selecting First two columns sms = raw_sms.iloc[:, :2] # - sms.head() # + # Renaming the columns sms.rename(columns={'v1': "Label", 'v2':"Text"}, inplace=True) # - sms.head() # ## Exploratory Data Analysis # + # Lets look at the dataset info to see if everything is alright sms.info() # + # Checking the Category (SPAM vs HAM) unique count sms['Label'].value_counts() # - sns.countplot(x='Label', data=sms) # + # Overall length of each spam and ham messages sms['Length'] = sms['Text'].map(lambda x: len(x)) # + # checking the histogram sms.hist(column='Length', bins=50, figsize=(10, 7)) # + # Overall length of length of spam and ham messages sms.hist(column='Length', by='Label', bins=100, figsize=(20,7)) # - # ### Checking the Percentage of SPAM / HAM sms["Label"].value_counts().plot(kind = 'pie', explode = [0, 0.1], figsize = (6, 6), autopct = '%1.1f%%', shadow = True) plt.ylabel("Spam vs Ham") plt.legend(["Ham", "Spam"]) plt.show() # # As the dataset is imbalanced, 86.6% consists of normal message and 13.4% consists of spam message. # # # # We need to use Stratified sampling while splitting the dataset into training and testing set, otherwise we have a chance of our training model being skewed towards normal messages. # # + # let see the Top Spam / Ham sms by count topSMS = sms.groupby('Text')['Label'] \ .agg([len, np.max]) \ .sort_values(by='len', ascending=False) \ .head(n=20) print(topSMS) # + # Splitting dataframe by groups formed from unique column values i.e., Spam and Ham grouped = sms.groupby(sms.Label) spamSMS = grouped.get_group("spam") hamSMS = grouped.get_group("ham") # + # splitting the data in spam and ham spam_sms = sms[sms["Label"] == "spam"]["Text"] ham_sms = sms[sms["Label"] == "ham"]["Text"] # + # converting the series to string spam_sms = spam_sms.to_string() ham_sms = ham_sms.to_string() # + # Function to extract spam words using word_tokenizer def extract_spam_words(spamMessage): words = word_tokenize(spamMessage) return words # + # Function to extract ham words using word_tokenizer def extract_ham_words(hamMessage): words = word_tokenize(hamMessage) return words # - # ### Creating Spam and Ham word clouds # Word Cloud is a data visualization technique used for representing text data in which the size of each word indicates its frequency or importance. # + # Creating corpus for spam and ham message spam_words = extract_spam_words(spam_sms) ham_words = extract_ham_words(ham_sms) # - # ### SPAM Wrod Cloud # + from wordcloud import WordCloud spam_wordcloud = WordCloud(width=500, height=300).generate(" ".join(spam_words)) plt.figure( figsize=(10,8), facecolor='r') plt.imshow(spam_wordcloud) plt.axis("off") plt.tight_layout(pad=0) plt.show() # - # ### HAM Word Cloud ham_wordcloud = WordCloud(width=500, height=300).generate(" ".join(ham_words)) plt.figure( figsize=(10,8), facecolor='r') plt.imshow(ham_wordcloud) plt.axis("off") plt.tight_layout(pad=0) plt.show() # + # Top 10 spam words spam_words = np.array(spam_words) print("Top 10 Spam words are :\n") pd.Series(spam_words).value_counts().head(n = 10) # - # # Data pre-processing # + # Removing punctuations and stopwords from the text data. 
import string def text_process(text): text = text.translate(str.maketrans('', '', string.punctuation)) text = [word for word in text.split() if word.lower() not in stopwords.words('english')] return " ".join(text) # - sms['Text'] = sms['Text'].apply(text_process) sms.head() # + # Replacing ham with 0 and spam with 1 sms = sms.replace(['ham','spam'],[0, 1]) sms.head() # - # ### Converting words to vectors using TFIDF Vectorizer from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer() vectors = vectorizer.fit_transform(sms['Text']) X = vectors y = sms['Label'] X.shape y.shape # ### Splitting into training and test set # + from sklearn.model_selection import train_test_split # using stratify sampling as we have imbalace dataset X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2) # - print (X_train.shape) print (X_test.shape) print (y_train.shape) print (y_test.shape) # ### Training using Naive Bayes MultinomialNB from sklearn.naive_bayes import MultinomialNB spam_detect_model = MultinomialNB() spam_detect_model.fit(X_train, y_train) # Let's check the outputs of the model # I'll store them in y_hat_test as this is the 'theoretical' name of the predictions y_hat_test = spam_detect_model.predict(X_test) y_hat_test # ### Evaluation # We can check precision,recall,f1-score using classification report! from sklearn.metrics import classification_report print(classification_report(y_test, y_hat_test)) from sklearn import metrics print("Accuracy ", metrics.accuracy_score(y_test, y_hat_test)) # ### Hyper parameter tuning # # Finding best possible hyper-parameters from sklearn.model_selection import GridSearchCV # creating classifier clf = MultinomialNB() clf.get_params().keys() # + # creating Hyper-parameter search space # paramter for smoothing alphas = [0.01, 0.1, 1.0] # final Hyperparameter Options hyperparameters = dict(alpha = alphas) # + # define search # Creating GridSearch using 10-fold cross validation search = GridSearchCV(clf, hyperparameters, cv=10, scoring='accuracy', verbose=3, n_jobs=-1) # - # Execute search best_model = search.fit(X_train,y_train) print("Tuned Hyperparameters : (best parameters) ", best_model.best_params_) print("Accuracy : ", best_model.best_score_) # ### Training the model with best_parameters obtained by hyerparameters classifier = MultinomialNB(alpha=0.1) model = classifier.fit(X_train, y_train) # ### Testing y_hat = model.predict(X_test) y_hat from sklearn import metrics print("Accuracy ", metrics.accuracy_score(y_test, y_hat)) # Creating the dataframe of the predicted values df = pd.DataFrame(y_hat, columns=['Prediction']) df.head() # We can include the test targets in data frame (so we can manually compare them) df['Target'] = y_test df # + # In y_test, old indexes are preserved # Therefore, to get a proper result, we must reset the index and drop the old indexing y_test = y_test.reset_index(drop=True) # Check the result y_test.head() # - # Let's overwrite the 'Target' column with the appropriate values df['Target'] = y_test df # + # Replacing the ham with 0 and spam with 1 # df = df.replace([0,1],['ham', 'spam']) # df.head(20) # - # function to display text message def find(p): if p == 1: print("SMS is SPAM") else: print("SMS is NOT spam") # + text1 = ["Free tones Hope you enjoyed your new content"] text2 = ["No. I meant the calculation is the same. That I'll call later"] text3 = ["Had your contract mobile 11 Mnths? Latest Motorola Now"] text4 = ["WINNER!! You just won a free ticket to Bahamas. 
Send your Details"] text5 = ["Hey there, How are you?"] integers1 = vectorizer.transform(text1) integers2 = vectorizer.transform(text2) integers3 = vectorizer.transform(text3) integers4 = vectorizer.transform(text4) integers5 = vectorizer.transform(text5) # + p1 = best_model.predict(integers1)[0] p2 = best_model.predict(integers2)[0] p3 = best_model.predict(integers3)[0] p4 = best_model.predict(integers4)[0] p5 = best_model.predict(integers5)[0] find(p1) find(p2) find(p3) find(p4) find(p5) # + text11 = ["Free tones Hope you enjoyed your new content"] text21 = ["No. I meant the calculation is the same. That I'll call later."] text31 = ["Had your contract mobile 11 Mnths? Latest Motorola Now"] text41 = ["WINNER!! You just won a free ticket to Bahamas. Send your Details"] text51 = ["Hey there, How are you?"] integers11 = vectorizer.transform(text11) integers21 = vectorizer.transform(text21) integers31 = vectorizer.transform(text31) integers41 = vectorizer.transform(text41) integers51 = vectorizer.transform(text51) # + p11 = model.predict(integers11)[0] p21 = model.predict(integers21)[0] p31 = model.predict(integers31)[0] p41 = model.predict(integers41)[0] p51 = best_model.predict(integers51)[0] find(p11) find(p21) find(p31) find(p41) find(p51) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6.9 64-bit # language: python # name: python36964bitf8533a4c57ce4aff9d3b14fa5b76fef6 # --- import pandas as pd #------------------------Read in data --------------------------- #-----------------Cumulative and density data -------- df1 = pd.read_csv('./../data/external/global_data.csv') df2 = pd.read_csv('./../data/external/data.csv') df1.head() df2.head() #-------- Drop unwanted columns -- df2 = df2.drop('rank',axis=1) df1 = df1.drop(['Slug','CountryCode'],axis=1) df2 = df2.rename(columns={'name': 'Country'}) df_joined = df1.join(df2.set_index('Country'), on='Country') df_joined.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # `xray` # Visualize a 3D image as a 2D image in the style of an xray radiograph, with the brightness corresponding inversely to the amount of material the xray passed through. 
import porespy as ps import matplotlib.pyplot as plt # Create a test image of fibers since the orientation is useful for visualization: im = ps.generators.cylinders(shape=[200, 200, 200], r=6, porosity=0.7) fig, ax = plt.subplots(1, 3, figsize=[8, 12]) ax[0].imshow(im[50, :, :]) ax[0].axis(False) ax[1].imshow(im[:, 50, :]) ax[1].axis(False) ax[2].imshow(im[:, :, 50]) ax[2].axis(False); # ## `axis` # The default behavior is the view the sample as though the xrays passed along the x-axis, but this is adjustable: # + fig, ax = plt.subplots(1, 3, figsize=[8, 12]) im1 = ps.visualization.xray(im) ax[0].imshow(im1, cmap=plt.cm.plasma) ax[0].axis(False) im2 = ps.visualization.xray(im, axis=1) ax[1].imshow(im2, cmap=plt.cm.nipy_spectral) ax[1].axis(False) im2 = ps.visualization.xray(im, axis=2) ax[2].imshow(im2, cmap=plt.cm.bone) ax[2].axis(False); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="PscU60ffoaeU" # # A Short Interactive Introduction to Python # # > This is a short introduction to Python programming for Chemists and Engineers. # - toc: True # - metadata_key1: Python # - metadata_key2: Programming # - metadata_key2: Chemistry # - # This Python introductions aims at readers such as chemists and engineers with some programming background. Its intention is to give you a rough overview on the key features of the Python programming language. If you are completely new to programming, you may want to check this nice introduction first https://wiki.python.org/moin/BeginnersGuide/NonProgrammers. # # It is also hosted on the [google colab platform](https://colab.research.google.com/drive/1_qg3wu0dtF4d5aW-R4hyMFqyY6zgm0ra) and only a web browser is needed to start it. # # We will discuss a bit the differences to other languages, but we will also explain or provide links for most of the concepts mentioned here, so do not worry if some of those are not yet familiar to you. # # ## Some key features of Python # # * Python is a scripting language. (There is no compiled binary code per default) # * It is procedural (i.e. you can program with classic functions, that take parameters) # * it is object-oriented (you can use more complex structures, that can hold several variables) # * it has some functional programming features as well (e.g. lambda functions, map, reduce & filter, for more information see https://en.wikipedia.org/wiki/Functional_programming) # # # # ## What makes Python special... # # * Indendation is used to organize code, i.e. to define blocks of code space or tabs are used (no curly brackets, semi-colons etc, no ";" or "{}") # * variables can hold any type # * all variables are objects # * great support for interactive use (like in this jupyter notebook here!) 
# * Many specialized libraries, in particular scientific [libraries](#Python-Libraries) are available, where the performance intensive part is done by C/C++/Fortran and the control/IO is done via Python # + [markdown] colab_type="text" id="gt2T1UdwtZGm" # ## *Assignment* of a variable # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 625, "status": "ok", "timestamp": 1582631737569, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="TqIM7tthoDgH" outputId="0eec0b3e-efe7-4b74-d661-9fb0f04a0e26" a = 1.5 # click the cell and press SHIFT+RETURN to run the code. This line is a comment a # + [markdown] colab_type="text" id="C6y-JSxergP4" # All variables are objects, use type() function to get the type. In this case: `a` is of the type "float" (short for floating point number). Comments are declared with the hashtag `#` in Python. # # # # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 566, "status": "ok", "timestamp": 1582575877314, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="6t1Uos44oVNO" outputId="4a49d67b-4fcc-4076-de02-9c74ed2db5c1" type(a) # + [markdown] colab_type="text" id="837HpOMAreUQ" # Variables can be easily re-defined, here variable `a` becomes an integer: # + colab={} colab_type="code" id="ABH7MxnYoYcM" a = 1 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 793, "status": "ok", "timestamp": 1576145793516, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="hy9RKaazsBQQ" outputId="e33203d4-e4f5-4ad3-9985-43581755887b" type(a) # + [markdown] colab_type="text" id="XBwOiyo4w58O" # The type of a variable is determined by assignment! # There are no context prefixes like @, %, $ like in e.g. Perl # ___ # # ## A simple Python program # # Below is a very simple Python program printing something, and demonstrating some basic features. A lot of useful functionality in Python is kept in external libraries that has to be imported before use with the `import` statement. Also, functions have of course to be defined prior to use. # # By the way, note how identation is used to define code blocks. In Python you can use either spaces or tabs to indent your code. In Python3 mixing of tabs and spaces is not allowed, and the use of 4 consecutive spaces is recommended. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 787, "status": "ok", "timestamp": 1576145793517, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="qrT8VK-A4hq8" outputId="4d7896f1-c7a1-4ec6-915e-9a8d16b2ea05" # import a library import os # definition of a python function def hello(): # get current path as a string and assign it to variable current_dir = os.getcwd() # concatentate strings and print them print('Hello world from '+current_dir+' !') # do not forget to call the function hello() # + [markdown] colab_type="text" id="NecQYFzD6dn6" # If we were not in an interactive session, we would save this chunk of code to a file, e.g. `hello_word.py` and run that by invoking:
    # `python hello_word.py`
    on the command line. # # ___ # + [markdown] colab_type="text" id="FMvnQhNY2Ult" # ## `IF` statement # # `if` statements are pretty straightforward, there is no "THEN" and indentation helps to group the `if` and the `else` (or `elif` means `else if`) block. # + colab={} colab_type="code" id="3i7TXaVe2SF2" a = 3.0 if a > 2: print(a) if not (isinstance(a,int)): print("a is not an integer") else: print("Hmm, we should never be here...") # + [markdown] colab_type="text" id="L5d7lN8u3CsT" # Since Python 3, the `print` statement is a proper function, in Python 2 one could do something like (i.e. without parentheses): # # `print "a is not an integer"` # # which is no longer possible. It is strongly recommended to use Python 3 as Python 2 is no longer maintained and many external libraries have switched meanwhile competely to Python 3. # # Logical negation: `if not` # # Checking of variable type: `isinstance(variable,type)` # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 775, "status": "ok", "timestamp": 1576145793518, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="OcEzxzvlJQFY" outputId="69cd3103-ec48-4c02-dfaa-2bee0fe4da24" isinstance(a,float) # + [markdown] colab_type="text" id="dUxBsO5P7M3j" # ___ # + [markdown] colab_type="text" id="P5evSBMOJvfI" # ## `FOR` LOOPS # # # + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" executionInfo={"elapsed": 768, "status": "ok", "timestamp": 1576145793518, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="vKg8mhFfJ2ch" outputId="717f7c05-de20-4256-e106-75d211e90b76" for i in range(10): print(i) # + [markdown] colab_type="text" id="srN43DL9J8t4" # The Python `range` function is somewhat special and for loops look somewhat different, than in other languages: # # * **Python**: `for i in range(start=0, stop=10, step=1):` # * **JAVA/C**: `for (int i=0; i<10; i++) {}` # * **Fortran**: `do i=0,9, end do` # # Go on and manipulate the code above to check the effects. # # Looping over lists (which will be explained below) is quite simple as well: # # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 764, "status": "ok", "timestamp": 1576145793519, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="qhzBIMNnKh9M" outputId="796e2c6a-63ec-401c-a318-5172f31dad71" drugs = ['ibuprofen','paracetamol','aspirin'] for d in drugs: print(d) # + [markdown] colab_type="text" id="A-8LtPqJKuY0" # Some standard keywords are used to control behaviour inside the loop: # # * **continue**: continue to next iteration of the loop # * **break**: leave/exit the loop # # # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 758, "status": "ok", "timestamp": 1576145793519, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="iYk7me1uKKOh" outputId="40ea6ebb-9667-4332-ffd2-0df748ab7577" for d in drugs: if d=='paracetamol': continue print(d) # + [markdown] colab_type="text" id="FN99_IjJ_YNY" # ___ # ## Python Data Types # # | type | example | comment | # | --- | --- | --- | # | int | i=1 | Integer: 0,1,2,3 | # | boolean | b=False | bolean value (True/False) | # | float | x=0.1 | floating point number | # | str | s = "how are you?" 
| String | # | list | l = [1,2,2,50,7] | Changeable list with order | # | set | set([1,2,2,50,7])==set([1,2,50,7]) | Changeable set (list of unique elements) without order | # | tuple | t = (99.9,2,3,"aspirin") | "Tuple": Unchangeable ordered list, used e.g. for returning function values | # | dict | d = {“hans“: 123, “peter“:344, “dirk“: 623} | Dictionary to save key/value pairs | # # # The table shows the most important data types in Python. There are more specialized types e.g. https://docs.python.org/3/library/collections.html.
# Some useful data types are provided by external libraries, such as `pandas` (Python for data analysis: https://pandas.pydata.org/) # # In general, the object-oriented features in Python can be used to declare your own special types/classes. # # # ___ # ## Lists # # In Python, a list is a collection of elements that are ordered and changeable. # # + colab={} colab_type="code" id="BTfP87W9DKsy" integer_list = [10,20,0,5] mixed_list = ["hello",1,5.0,[2,3]] # + [markdown] colab_type="text" id="3pt4eT4XDcv_"
    # Accessing elements in the list is done via their indices (position in the list): # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 978, "status": "ok", "timestamp": 1576145793753, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="iZhXoJ7fDhGH" outputId="eaeca9f0-9584-44ae-f111-d41f27c0264d" print(integer_list[2]) print(integer_list[1:3]) mixed_list[-1] # + [markdown] colab_type="text" id="FxcfozF0DqVn" # Note, how indexing starts at index 0, which is the first element in the list.
    # The `[1:3]` operation in the example is called slicing and is useful to get subsets of lists or arrays.
    # The last element of an array can be accessed with the index `-1`.
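# A few more indexing and slicing patterns, collected in one small illustrative cell (the `letters` list below is just an example and not part of the original notebook):

# +
letters = ['a', 'b', 'c', 'd', 'e']
print(letters[0])     # first element -> 'a'
print(letters[1:3])   # slice containing indices 1 and 2 -> ['b', 'c']
print(letters[:2])    # from the start up to, but excluding, index 2 -> ['a', 'b']
print(letters[2:])    # from index 2 to the end -> ['c', 'd', 'e']
print(letters[-2:])   # the last two elements -> ['d', 'e']
print(letters[::2])   # every second element -> ['a', 'c', 'e']
# -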

    # Manipulating and working with lists: # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 972, "status": "ok", "timestamp": 1576145793754, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="HbrF74d9K1i5" outputId="951723d1-8a2f-40d9-9c13-53cec6a5329d" integer_list = [5,10,20,0,5] integer_list.append(42) print(len(integer_list)) integer_list # + [markdown] colab_type="text" id="70cuqJdeLQ8m" # Lists can be sorted: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 966, "status": "ok", "timestamp": 1576145793755, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="X1hFl7ebLEg-" outputId="6377ffe4-7496-4945-a255-accfacfb326f" integer_list.sort(reverse=True) integer_list # + [markdown] colab_type="text" id="au9B0KCSLY_O" # Check if some element is contained within the list or find its index: # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 961, "status": "ok", "timestamp": 1576145793755, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="9d4zwYGoLYTt" outputId="23e8f154-aefb-472e-ed89-97fc54ca566e" print(10 in integer_list) print(integer_list.index(10)) # + [markdown] colab_type="text" id="3b9_w852Llmt" # List can be turned into sets (only unique elements): # # > Indented block # # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 956, "status": "ok", "timestamp": 1576145793755, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="YwTwMgdSLoYk" outputId="00b38e49-3ee9-4e86-ac06-f11f792e54c4" set(integer_list) # + [markdown] colab_type="text" id="fzrVtXdPMBVE" # ## Dictionaries # # Dictionaries are very handy data types and consists of key-value pairs. They may also be called maps or associatve arrays. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 952, "status": "ok", "timestamp": 1576145793756, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="z9qhRgL8Mlia" outputId="d9dcae20-1d54-4fed-bcd7-f61a7fc50164" en_de = {'red':'rot', 'blue':'blau'} en_de['black'] = 'schwarz' en_de['cyan'] = None en_de # + [markdown] colab_type="text" id="r3XnkRO0Q19e" # Dictionaries are a collection of key, e.g. 'red' and value, e.g. 'rot' pairs, where both can of course be of different types.
# It is important to note that, before Python 3.7, dictionaries did not guarantee any ordering; since Python 3.7 they preserve the insertion order of their keys. If you need ordering-related behaviour on older versions, there is the special type `OrderedDict` (https://docs.python.org/3/library/collections.html#collections.OrderedDict).
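# Besides plain indexing with a key (shown in the cells that follow), a few other access patterns are often useful. This is an added illustrative cell reusing the `en_de` dictionary defined above:

# +
print(en_de.get('pink', 'unknown'))  # .get() returns a default instead of raising a KeyError
print('red' in en_de)                # membership test over the keys
print(list(en_de.keys()))            # all keys
print(list(en_de.values()))          # all values
for key, value in en_de.items():     # iterate over key/value pairs directly
    print(key, value)
# -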

    # There are several ways of accessing dictionaries: # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 947, "status": "ok", "timestamp": 1576145793756, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="2vhfwZv6SAnJ" outputId="826daba3-7ae3-4d04-cd86-667aa3b4f31f" en_de['red'] # + [markdown] colab_type="text" id="2yP-dtVXSHAo" # One can easily iterated over an dictionary: # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 943, "status": "ok", "timestamp": 1576145793757, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="Q2YEcP-9SKkR" outputId="5b91ec05-ce87-45a0-" for key in en_de: print(key, en_de[key]) # + [markdown] colab_type="text" id="I00zPwoDSQYc" # And they are used a lot in Python, e.g. the current environmental variables are saved in dictionary (`os.environ`): # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 939, "status": "ok", "timestamp": 1576145793758, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="1FPlx2LSSing" outputId="d50ec6ab-ab23-4bf7-cd9a-7898ae65fca0" import os os.environ['PATH'] # + [markdown] colab_type="text" id="9MBJ91q1UD7a" # ## Python functions # # Python functions are declared with the `def` keyword followed by a function name and a parameter list. Within the parameter list default parameters can be specified: # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 935, "status": "ok", "timestamp": 1576145793758, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="0Qn5EDjPUKnA" outputId="542b741f-121f-4f11-b10a-f9cd97598dbc" def test_function(name='paracetamol'): print('There is no '+name+' left.') test_function() test_function('aspirin') # + [markdown] colab_type="text" id="BWHctiBuMQGa" # Return results of function with the `return` statement, in this case we generate a string with the function and print it later, i.e. outside the function: # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 931, "status": "ok", "timestamp": 1576145793759, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="EsCwE7PtMqOH" outputId="7ee6f071-965e-40fa-d822-1becbc9cedba" def test_function(name='paracetamol'): answer = 'There is no '+name+' left.' return answer print(test_function()) print(test_function('aspirin')) # + [markdown] colab_type="text" id="ykjpukjnNGy_" # Its also possible to return several return values as a tuple, e.g.
    # `return answer,status` # ___ # ## Type conversion & casting # # Sometimes it is necessary to convert variables from one type to another, e.g. convert a `float` to an `int`, or a an `int` to an `str`. That is where conversion functions come into play. Unlike in other languages types are converted via functions, where the return value is the converted type. # # | function | converting from | converting to | # | --- | --- | --- | # | int() | string, floating point | integer | # | float() | string, integer | floating point | # | str() | integer, float, list, tuple, dictionary | string | # | list() | string, tuple, dictionary | list | # | tuple() | string, list | tuple | # # # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 926, "status": "ok", "timestamp": 1576145793759, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="ZkkqJCBjQS_A" outputId="9c6d1fe2-6421-4b29-bc37-70b5e68cc362" import math # not sure why one wants to do this :-) int(math.pi) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 922, "status": "ok", "timestamp": 1576145793760, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="TVmvcBHGQmLw" outputId="d75fc131-5ec5-49c0-ad83-73033a337e9d" float('1.999') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 917, "status": "ok", "timestamp": 1576145793760, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="wYGoiV59Qq6T" outputId="44660101-edd9-4d00-e64e-369cbf6b83c5" list('paracetamol') # create a list of characters! # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 912, "status": "ok", "timestamp": 1576145793761, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="tUN5uL2zQ07v" outputId="acb10d0a-e32c-4a50-fda7-4d4782ab8bed" list((1,2,3,6,8)) # creates a list from a tuple # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 907, "status": "ok", "timestamp": 1576145793761, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="IcKN0-xIQZxY" outputId="222a1794-8328-42f2-d925-9205b1127317" 'the answer is '+str(42) # before concatentation integer or float should be converted to string # + [markdown] colab_type="text" id="ws8FTkslQLdL" # Implicit (automatic) conversion takes place in Python to avoid data loss or loss in accuracy: # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" executionInfo={"elapsed": 900, "status": "ok", "timestamp": 1576145793762, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="CpPQyGnoSB1F" outputId="9f4d3038-3223-4b73-f63f-23f14ae5cbbe" a = 2 print(type(a)) b = 4.5 print(type(b)) c = b / a print(type(c)) c # + [markdown] colab_type="text" id="EklKMdXTTJox" # Note, that in this example the implicit conversion avoid data loss by conversion of integer `a` to a floating point number. # ___ # ## Object oriented progamming # # Object oriented programming is a huge field of its own, and we will not go into details here. Objects are kind of complex variables that themselves can contain different variables and functions. 
# They are implemented in the code via so-called classes. In a way, a class defines the behaviour of objects, and specific objects are created during runtime. Classes relate to objects as types (e.g. `int`) relate to a specific variable (e.g. `a`). As such, in Python there is no real difference between a class and a type: each variable is an object of a certain type/class.
    # New classes in Python are defined the following way: # # + colab={} colab_type="code" id="cDd3exDTVuDD" class Greeting: def __init__(self, name): self.name = name def say_hello(self): print('Hi '+self.name) # + [markdown] colab_type="text" id="8zXMpfkhWAiP" # This code has created the `class` Greeting with its function `say_hello`. `__init__` defines the so called constructor, which is called when the object is created. Object variable are saved using `self` keyword.
    # Objects can be created and its function be called: # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1088, "status": "ok", "timestamp": 1576145793959, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="oDoU7Or8WrK2" outputId="47c7f7b5-0353-4584-b3d2-f272d256a71f" x1 = Greeting('Greta') x1.say_hello() # + [markdown] colab_type="text" id="Dfc6-upPXEXo" # Object variables can be accessed directly: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1083, "status": "ok", "timestamp": 1576145793960, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="NZEB0f2EXB8f" outputId="64fd222c-6d6e-4944-8623-f20247dc1174" x1.name # + [markdown] colab_type="text" id="Cil9qC-TXOcF" # ## Import statements in Python # # Python comes with a lot of useful libraries. Some are contained in a default installation, some have to be installed with tools like `pip` or `conda`. The most conveniant way to install new packages/libraries in particular for scientific purposes is to use an installation like Anaconda (https://www.anaconda.com/). # # Once a library is available it can be imported with an `import` statement. Import the whole library: # # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1078, "status": "ok", "timestamp": 1576145793961, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="YxrzEC23YWtA" outputId="89d46e7c-20e5-4ffa-a7de-c0b900240ac2" import math print(math.cos(math.pi)) # + [markdown] colab_type="text" id="09dBGtkaYc3M" # Import a module or function from a library: # # # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1072, "status": "ok", "timestamp": 1576145793961, "user": {"displayName": "", "photoUrl": "", "userId": "06846131955017515958"}, "user_tz": -60} id="MXR28IudYogH" outputId="07799bc2-b109-4e9e-df02-88dd93451254" from math import cos, pi print(cos(pi)) # + [markdown] colab_type="text" id="yCTL6o0yYuc2" # Import and use an acronym for use: # + colab={} colab_type="code" id="kw056Vq8Y1kp" import numpy as np array = np.asarray([1.0,2.0,1.0]) # + [markdown] colab_type="text" id="IH2fWU0pZkgh" # ___ # # ## Python Libraries # # There a lot of useful Python libraries available. Some of those have been established a kind of standard for scientific applications: # # * NumPy: Array & matrices & linear algebra (https://numpy.org/) # * SciPy: Numerical routines for integration, optimization, statistics etc. 
(https://www.scipy.org/scipylib/index.html) # * matplotlib: a Python 2D plotting library which produces publication quality figures( https://matplotlib.org/) # * scitkit-learn: machine learning in Python (https://scikit-learn.org/stable/) # * Pandas: data structures ("dataframes") and data analysis tool (https://pandas.pydata.org/) # # For chemistry related applications: # # * RDKit: open-source cheminformatics (https://www.rdkit.org/) # * ASE: setting up, manipulating, running, visualizing and analyzing atomistic simulations (https://wiki.fysik.dtu.dk/ase/) # * PySCF: collection of electronic structure programs (https://sunqm.github.io/pyscf/) # # # + [markdown] colab_type="text" id="nwtZ4HZfcSTQ" # ___ # ## Editors and development environments # # If Python code is not written interactively within the browser, as for example for larger projects, there are different editors and environments available to write Python code: # # * vi(m): text editor contained in most linux distributions # * Kate (Linux): handy text editor, highlighting for many different languages # * NotePad++ (Windows): https://notepad-plus-plus.org/ # * Spyder: Integrated development environment (IDE), shipped with anaconda (https://www.spyder-ide.org/) # * PyCharm: Powerful IDE, particular for python (http://www.jetbrains.com/pycharm) # * PyDev: Plugin for Eclipse IDE, i.e. auch für Java & C/C++ (https://www.pydev.org/download.html) # # ___ # # + [markdown] colab_type="text" id="dwrBShEcg-5x" # ## Python - Advanced topics & concepts # # A lot of interesting and useful stuff has been left out in this short introduction due to the sake of brevity. There is a lot more to Python, here a just a few keywords to look up: # # * Exceptions # * List comprehensions # * Lambda expressions # * Logging # * Multiprocessing & multithreading # * JIT compiler (http://numba.pydata.org/) # # Python tutorials: # * Google Python introduction (https://developers.google.com/edu/python/) # * Scientific Python introduction (https://scipy-lectures.org/intro/) # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt import pandas as pd # you will likely need to add some imports here # %matplotlib inline # - # ### K-means clustering harvard = pd.read_csv('https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/HCEPD_100K.csv') # Perform k-means clustering on the harvard data. Vary the `k` from 5 to 50 in increments of 5. Run the `k-means` using a loop. # # Be sure to use standard scalar normalization! # `StandardScaler().fit_transform(dataArray)` # Use [silhouette](http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html) analysis to pick a good k. # ### PCA and plotting # # The functions below may be helpful but they aren't tested in this notebook. 
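# Before the PCA helpers, here is a minimal sketch of the k-means loop with silhouette analysis requested above. It assumes scikit-learn's KMeans, StandardScaler and silhouette_score; restricting to numeric columns is an assumption about the contents of HCEPD_100K.csv, not something verified here.

# +
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Standard scalar normalization of the (assumed) numeric feature columns.
X = StandardScaler().fit_transform(harvard.select_dtypes('number'))

silhouette_by_k = {}
for k in range(5, 55, 5):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    # Subsample so the silhouette computation stays affordable on ~100K rows.
    silhouette_by_k[k] = silhouette_score(X, labels, sample_size=5000, random_state=0)
    print(k, silhouette_by_k[k])

best_k = max(silhouette_by_k, key=silhouette_by_k.get)
print('k with the highest silhouette score:', best_k)
# -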
# + def make_pca(array, components): pca = PCA(n_components=components, svd_solver='full') pca.fit(array) return pca def plot_pca(pca, array, outplt, x_axis, y_axis, colorList=None): markers = pca.transform(array) plt.scatter(markers[:, x_axis], markers[:, y_axis], color='c') if colorList is not None: x_markers = [markers[i, x_axis] for i in colorList] y_markers = [markers[i, y_axis] for i in colorList] plt.scatter(x_markers, y_markers, color='m') plt.xlabel("Component 1 ({}%)".format(pca.explained_variance_ratio_[x_axis]* 100)) plt.ylabel("Component 2 ({}%)".format(pca.explained_variance_ratio_[y_axis]* 100)) plt.tight_layout() plt.savefig(outplt) def plot_clusters(pca, array, outplt, x_axis, y_axis, colorList=None): xkcd = [x.rstrip("\n") for x in open("xkcd_colors.txt")] markers = pca.transform(array) if colorList is not None: colors = [xkcd[i] for i in colorList] else: colors = ['c' for i in range(len(markers))] plt.scatter(markers[:, x_axis], markers[:, y_axis], color=colors) plt.xlabel("Component {0} ({1:.2f}%)".format((x_axis+1), pca.explained_varia nce_ratio_[x_axis]*100)) plt.ylabel("Component {0} ({1:.2f}%)".format((y_axis+1), pca.explained_varia nce_ratio_[y_axis]*100)) plt.tight_layout() plt.savefig(outplt) data = pd_to_np(df) pca = make_pca(data, args.n_components) print(pca.explained_variance_ratio_) # - # ### Self organizing map # https://github.com/sevamoo/SOMPY # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="sHW7pTWwCBRX" # Import required libraries import pandas as pd import numpy as np # + id="MQXoZXi8CUlQ" UAE = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/UAE_Vegtables_Trade_Statistics_2015_2019.csv", header=0) Saudi = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/Saudi_Vegtables_Trade_Statistics_2015-2019.csv", header=0) Qatar = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/Qatar_Vetables_Trade_Statistics_2015_2019.csv", header=0) Oman = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/Oman_Vegtables_Trade_Statistics_2015_2019.csv", header=0) Kuwait = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/Kuwait_Vegtables_Trade_Statistics_2015_2019.csv", header=0) Bahrain = pd.read_csv("https://raw.githubusercontent.com/amindazad/GCC_Vegtables_Market/main/Bahrain_Vegtables_Trade_Statistics_2015_2019.csv", header=0) # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="Md5n4hrdCfFW" outputId="34312c6b-2216-4fc6-96e9-75c3f06de7ec" print(UAE.shape) UAE.head() # + colab={"base_uri": "https://localhost:8080/"} id="PpImqPeWqRv2" outputId="3a466f15-add4-48c2-e662-cf1c05a78b88" # Check for NaN values UAE.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="Z_7HjCw5D9nd" outputId="82ca98f7-5b60-44d3-a07b-6b36e7d9162b" print(Saudi.shape) Saudi.head() # + colab={"base_uri": "https://localhost:8080/"} id="7yKvqpCyqb9K" outputId="3e8c1acc-c4ac-4171-a102-91da31b2efce" # Check for NaN values Saudi.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="xKB0Ej29EH9c" outputId="a00453b6-625e-4f04-8bdc-ba491259974a" print(Qatar.shape) Qatar.head() # + colab={"base_uri": "https://localhost:8080/"} id="DPW3y8JMqeTe" 
outputId="8f354c3d-a7ff-4624-906a-3f3bdc128428" # Check for NaN values Qatar.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="THR2_JcdqgXS" outputId="0e6f51a1-aca4-461b-f38d-412f682fbb9b" Qatar[Qatar.isna().any(axis=1)] # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="5KK9G42eqogg" outputId="1ad9a738-7aae-48c6-cab0-e4ef936aaf2c" # Replace NaN values with the row's mean. I chose row mean because replaced # values are specific to each product Qatar = Qatar.T.fillna(Qatar.mean(axis=1)).T Qatar # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="nX-zPoQ1EKps" outputId="7876936b-dcb6-4012-e22c-e47bdaddf488" print(Oman.shape) Oman.head() # + colab={"base_uri": "https://localhost:8080/"} id="-q7QBa4ispDt" outputId="ab578408-6d67-461a-fe80-fe0f1c69fe42" # Check for NaN values Oman.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="RG9VIdpItelZ" outputId="3bdd881a-eb8f-42be-d41f-07caf223758c" # Replace NaN values with the row's mean. I chose row mean because replaced # values are specific to each product Oman = Oman.T.fillna(Oman.mean(axis=1)).T Oman.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="AXeXtZ9WEPV_" outputId="7d783079-1ca8-41a3-97e3-266c77c95756" print(Kuwait.shape) Kuwait.head() # + colab={"base_uri": "https://localhost:8080/"} id="tajklDUSsuRp" outputId="ad8a096a-9332-4261-b97f-c14544bacbec" # Check for NaN values Kuwait.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="YqwuCVdWtw-D" outputId="aea05219-53d5-4e78-e8bb-1379b7b0bb98" # Replace NaN values with the row's mean. I chose row mean because replaced # values are specific to each product Kuwait = Kuwait.T.fillna(Kuwait.mean(axis=1)).T Kuwait.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="oIbme11EES8e" outputId="ebe70c6d-1483-4010-fb68-b5ed052d00cf" print(Bahrain.shape) Bahrain.head() # + colab={"base_uri": "https://localhost:8080/"} id="CZ707FxUsy8F" outputId="537f73ed-20a0-4254-caa4-a02e31f470eb" # Check for NaN values Bahrain.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 359} id="Endez_sks5PI" outputId="9a2c83b8-db6e-41a3-e34e-cf1c593b0b9f" Bahrain[Bahrain.isna().any(axis=1)] # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="NrwxY_zgs-Aq" outputId="e31473c8-7917-4219-a77d-8fec08309377" Bahrain = Bahrain.T.fillna(Bahrain.mean(axis=1)).T Bahrain # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="u4WPpAgVEXYW" outputId="12cdbf03-11a6-4899-e130-62dba8fb1212" # Merge Datasets df1 = pd.concat([UAE, Saudi]) df2 = pd.concat([df1, Qatar]) df3 = pd.concat([df2, Oman]) df4 = pd.concat([df3, Kuwait]) GCC = pd.concat([df4, Bahrain]) GCC # + colab={"base_uri": "https://localhost:8080/"} id="BqIxVzRxFncr" outputId="8621fbaf-6b69-4562-b80c-ed50b9ff7e0f" # Finding Null Values and replacing them with 0 GCC.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="_lO3U714CuPi" outputId="02601c41-6788-4b8d-d7a0-777f5864f2f5" GCC['Product label'].value_counts() # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="l0YkASk-lGRg" outputId="cf3d5230-8345-4dd5-dccb-4b095db68fc9" # Renaming product lables to shorter more interperetable names GCC['Product label'] = GCC['Product label'].replace(['Fresh or chilled onions and shallots', 'Dried, shelled chickpeas "garbanzos", whether or not skinned or split', 'Garlic, fresh or chilled', 'Fresh or chilled cabbage lettuce', 'Dried, shelled 
lentils, whether or not skinned or split', 'Dried, shelled broad beans "Vicia faba var. major" and horse beans "Vicia faba var. equina ...', 'Fresh or chilled carrots and turnips', 'Fresh or chilled fruits of the genus Capsicum or Pimenta', 'Sweetcorn, uncooked or cooked by steaming or by boiling in water, frozen', 'Dried, shelled leguminous vegetables, whether or not skinned or split (excluding peas, chickpeas, ...', 'Fresh or chilled potatoes (excluding seed)', 'Vegetables, uncooked or cooked by steaming or by boiling in water, frozen (excluding potatoes, …', 'Arrowroot, salep, Jerusalem artichokes and similar roots and tubers with high starch or inulin ...'], ['Fresh Onions and Shallots', 'Dried Chickpeas', 'Fresh Garlic', 'Fresh Cabbage', 'Dried Lentil', 'Dried Beans', 'Fresh Carrots & Turnips', 'Fresh Bell Peppers and Other Peppers', 'Packaged Sweetcorn', 'Dried beans', 'Fresh Potatoes (excluding seed)', 'Packaged Vegtables', 'Fresh Arrowroots']) GCC['Product label'] = GCC['Product label'].replace(['Fresh or chilled mushrooms of the genus "Agaricus"', 'Fresh or chilled cauliflowers and headed broccoli', 'Mixtures of vegetables, uncooked or cooked by steaming or by boiling in water, frozen', 'Shelled or unshelled peas "Pisum sativum", uncooked or cooked by steaming or by boiling in …', 'Olives, provisionally preserved, e.g. by sulphur dioxide gas, in brine, in sulphur water or ...', 'Dried, shelled beans of species "Vigna mungo [L.] Hepper or Vigna radiata [L.] Wilczek", whether ...', 'Fresh or chilled leguminous vegetables, shelled or unshelled (excluding peas "Pisum sativum" ...', 'Dried, shelled peas "Pisum sativum", whether or not skinned or split', 'Fresh or chilled vegetables n.e.s.', 'Sweet potatoes, fresh, chilled, frozen or dried, whether or not sliced or in the form of pellets', 'Dried, shelled kidney beans "Phaseolus vulgaris", whether or not skinned or split', 'Shelled or unshelled beans "Vigna spp., Phaseolus spp.", uncooked or cooked by steaming or ...', 'Fresh or chilled pumpkins, squash and gourds "Cucurbita spp."'], ['Fresh Mushrooms', 'Fresh Cauliflowers and Broccolis', 'Packaged Mixed Vegtables', 'Dried Peas', 'Packaged Olives', 'Dried Vigna Beans', 'Fresh leguminous vegetables', 'Dried shelled peas', 'Fresh n.e.s. vegtebles', 'Dried kidney beans', 'Packaged leguminous vegetables', 'Packaged Beans', 'Fresh pumpkins, squash, gourds']) GCC['Product label'] = GCC['Product label'].replace(['Vegetables and mixtures of vegetables provisionally preserved, e.g. 
by sulphur dioxide gas, ...', 'Dried mushrooms and truffles, whole, cut, sliced, broken or in powder, but not further prepared ...', 'Fresh or chilled celery (excluding celeriac)', 'Fresh or chilled beans "Vigna spp., Phaseolus spp.", shelled or unshelled', 'Fresh or chilled edible mushrooms and truffles (excluding mushrooms of the genus "Agaricus")', 'Fresh or chilled asparagus', 'Fresh or chilled leguminous vegetables, shelled or unshelled (excluding peas "Pisum sativum" ...', 'Leeks and other alliaceous vegetables, fresh or chilled (excluding onions, shallots and garlic)', 'Dried vegetables and mixtures of vegetables, whole, cut, sliced, broken or in powder, but not ...', 'Spinach, New Zealand spinach and orache spinach, uncooked or cooked by steaming or by boiling ...', 'Fresh or chilled salad beetroot, salsify, celeriac, radishes and similar edible roots (excluding ...', 'Taro "Colocasia spp.", fresh, chilled, frozen or dried, whether or not sliced or in the form …', 'Dried onions, whole, cut, sliced, broken or in powder, but not further prepared"'], ['Packaged preserved vegtables', 'Dried mushrooms and truffles', 'Fresh celery', 'Fresh Vigna Beans', 'Fresh Exotic Mushrooms', 'Fresh asparagus', 'Fresh leguminous vegetables', 'Fresh alliaceous vegetables (excluding onions, shallots and garlic)', 'Dried vegtables mix', 'Packaged spinach', 'Fresh beetroot & radishes', 'Fresh Taro', 'Dried oninons']) GCC['Product label'] = GCC['Product label'].replace(['Potatoes, uncooked or cooked by steaming or by boiling in water, frozen', 'Cucumbers and gherkins, fresh or chilled', 'Dried, shelled small red "Adzuki" beans "Phaseolus or Vigna angularis", whether or not skinned ...', 'Brussels sprouts, fresh or chilled', 'Fresh or chilled peas "Pisum sativum", shelled or unshelled', 'Fresh or chilled spinach, New Zealand spinach and orache spinach', 'Cucumbers and gherkins provisionally preserved, e.g. 
by sulphur dioxide gas, in brine, in sulphur ...', 'Dried, shelled beans "Vigna and Phaseolus", whether or not skinned or split (excluding beans …', 'Fresh or chilled cabbages, kohlrabi, kale and similar edible brassicas (excluding cauliflowers, ...', 'Fresh, chilled, frozen or dried roots and tubers of manioc "cassava", whether or not sliced ...', 'Mushrooms and truffles, provisionally preserved, e.g., by sulphur dioxide gas, in brine, in …', 'Dried mushrooms of the genus "Agaricus", whole, cut, sliced, broken or in powder, but not further ...', 'Yams "Dioscorea spp.", fresh, chilled, frozen or dried, whether or not sliced or in the form ...'], ['Packaged potatoes', 'Fresh cucumbers', 'Dried adzuki beans', 'Fresh brussels sprouts', 'Fresh peas', 'Fresh spinach', 'Packaged cucumbers', 'Dried vigna beans', 'Fresh cabbages & kale', 'Fresh cassava roots', 'Packaged mushrooms & truffles', 'Dried mushrooms', 'Fresh yams']) GCC['Product label'] = GCC['Product label'].replace(['Fresh or chilled globe artichokes', 'Dried, shelled bambara beans "Vigna subterranea or Voandzeia subterranea", whether or not skinned ...', 'Mushrooms of the genus "Agaricus", provisionally preserved, e.g., by sulphur dioxide gas, in ...', 'Fresh or chilled lettuce (excluding cabbage lettuce)', 'Fresh or chilled aubergines "eggplants"', 'Dried, shelled cow peas "Vigna unguiculata", whether or not skinned or split', 'Fresh or chilled olives', 'Fresh or chilled witloof chicory', 'Fresh or chilled chicory (excluding witloof chicory)', 'Dried, shelled pigeon peas "Cajanus cajan", whether or not skinned or split', 'Dried wood ears "Auricularia spp.", whole, cut, sliced, broken or in powder, but not further ...', 'Dried jelly fungi "Tremella spp.", whole, cut, sliced, broken or in powder, but not further ...', 'Dried mushrooms and truffles, whole, cut, sliced, broken or in powder, but not further prepared'], ['Fresh artichokes', 'Dried bambara beans', 'Packaged preserved mushrooms', 'Fresh lettuce', 'Fresh eggplants', 'Dried cow peas', 'Fresh olives', 'Fresh witloof chicory', 'Fresh chicory', 'Dried pigeon peas', 'Dried wood ear', 'Dried jelly fungi', 'Dried mushrooms & truffles']) GCC['Product label'] = GCC['Product label'].replace(['Onions provisionally preserved, e.g. by sulphur dioxide gas, in brine, in sulphur water or ...'], ['Packaged preserved onions']) pd.set_option('max_rows', 99999) pd.set_option('max_colwidth', 400) GCC # + id="Y_Eq0GJZDcyp" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 2. Google form analysis # # Analysis of results extracted from Google forms in csv format. # ## Table of Contents # # # [Preparation](#preparation) # # [Constants](#constants) # # [Functions](#functions) # # - [general purpose](#genpurpose) # # - [sessions and temporalities](#sessions) # # - [score](#score) # # - [visualizations](#visualizations) # # - [sample getters](#samples) # # - [checkpoint validation](#checkvalidation) # # - [p(answered question N | answered question P)](#condproba) # # - [Filtering users](#filteringusers) # # [Initialization of gform](#gforminit) # # # ## Preparation # # %run "../Functions/1. Game sessions.ipynb" print("2. 
Google form analysis") # # Constants # # + # special user ids # 1.52 userIdThatDidNotAnswer = '001c95c6-8207-43dc-a51b-adf0c6e005d7' userId1AnswerEN = '00dbbdca-d86c-4bc9-803c-0602e0153f68' userIdAnswersEN = '5977184a-1be2-4725-9b48-f2782dc03efb' userId1ScoreEN = '6b5d392d-b737-49ef-99af-e8c445ff6379' userIdScoresEN = '5ecf601d-4eac-433e-8056-3a5b9eda0555' userId1AnswerFR = '2734a37d-4ba5-454f-bf85-1f7b767138f6' userIdAnswersFR = '01e85778-2903-447b-bbab-dd750564ee2d' userId1ScoreFR = '3d733347-0313-441a-b77c-3e4046042a53' userIdScoresFR = '58d22690-8604-41cf-a5b7-d71fb3b9ad5b' userIdAnswersENFR = 'a7936587-8b71-43b6-9c61-17b2c2b55de3' # 1.52.2 userIdThatDidNotAnswer = '6919aa9a-f18e-4fc5-8435-c26b869ba571' userIdAnswersFR = '0135e29b-678d-4188-a935-1d0bfec9450b' userIdScoresFR = '0135e29b-678d-4188-a935-1d0bfec9450b' userId1AnswerFR = '01cc303e-d7c1-4c84-8e17-182b410da343' userId1ScoreFR = '01cc303e-d7c1-4c84-8e17-182b410da343' userId1AnswerEN = '027fb5ca-c40a-4977-852a-e448538061f2' userId1ScoreEN = '027fb5ca-c40a-4977-852a-e448538061f2' userIdAnswersEN = '1e94b693-df8f-4ad0-9f02-4aac6929bdaa' userIdScoresEN = '1e94b693-df8f-4ad0-9f02-4aac6929bdaa' userIdAnswersENFR = '2ad10897-b143-45f4-9a78-60ee4bcecc80' # - #localplayerguidkey = 'Ne pas modifier - identifiant anonyme prérempli' localplayerguidkey = 'userId' localplayerguidindex = gform.columns.get_loc(localplayerguidkey) localplayerguidindex firstEvaluationQuestionKey = QGenotypePhenotype firstEvaluationQuestionIndex = gform.columns.get_loc(firstEvaluationQuestionKey) firstEvaluationQuestionIndex answersColumnNameStem = "answers" correctionsColumnNameStem = "corrections" # # Functions # # ## general purpose # def getUniqueUserCount(gfDF): return gfDF[localplayerguidkey].nunique() # + def getAllResponders( _gfDF ): userIds = _gfDF[localplayerguidkey].unique() return userIds def getRandomGFormGUID(): _uniqueUsers = getAllResponders() _userCount = len(_uniqueUsers) _guid = '0' while (not isGUIDFormat(_guid)): _userIndex = randint(0,_userCount-1) _guid = _uniqueUsers[_userIndex] return _guid def hasAnswered( userId, _gfDF ): return userId in _gfDF[localplayerguidkey].values def getAnswers( userId, _gfDF ): answers = _gfDF[_gfDF[localplayerguidkey]==userId] _columnAnswers = answers.T if 0 != len(answers): _newColumns = [] for column in _columnAnswers.columns: _newColumns.append(answersColumnNameStem + str(column)) _columnAnswers.columns = _newColumns else: # user has never answered print("user " + str(userId) + " has never answered") return _columnAnswers # - # ## sessions and temporalities # def resetTemporalities(_gfDF): _gfDF[QTemporality] = answerTemporalities[2] # + #gform[QPlayed].unique() # + # answers that show that this survey was a pretest alreadyPlayedPretestAnswers = [ 'No / not yet', # 'I just played for the first time', 'I played it some time ago', # certainly an older version of the game # 'I played it multiple times recently', # 'I played recently on an other computer', # has to fill in profile questions again # 'I played it multiple times recently on this computer' ] alreadyPlayedPosttestAnswers = [ # 'No / not yet', 'I just played for the first time', # 'I played it some time ago', 'I played it multiple times recently', 'I played recently on an other computer', 'I played it multiple times recently on this computer' ] # based only on user answer APlayedButProfileAgain = 'I played recently on an other computer' def setAnswerTemporalitiesSimple(_gfDF): # check whether temporalities have already been set 
if(len(_gfDF[QTemporality].unique()) == 1): for _index in _gfDF.index: if _gfDF.loc[_index, QPlayed] in alreadyPlayedPretestAnswers: _gfDF.loc[_index,QTemporality] = answerTemporalities[0] else: _gfDF.loc[_index,QTemporality] = answerTemporalities[1] print("temporalities set (user answer method)") # - # based only on first meaningful game event def setAnswerTemporalities(_gfDF): # check whether temporalities have already been set if(len(_gfDF[QTemporality].unique()) == 1): # format : key = _userId, value = [_firstEventDate, 0 or _gfDF.index of before, 0 or _gfDF.index of after] temporalities = {} for _index in _gfDF.index: _userId = _gfDF.loc[_index,localplayerguidkey] _firstEventDate, beforeIndex, afterIndex = [0,0,0] if _userId in temporalities: _firstEventDate, beforeIndex, afterIndex = temporalities[_userId] else: _firstEventDate = getFirstEventDate(_userId) temporality = getTemporality(_gfDF.loc[_index,QTimestamp],_firstEventDate) if temporality == answerTemporalities[0] and beforeIndex != 0 : if _gfDF.loc[_index,QTimestamp] > _gfDF.loc[beforeIndex,QTimestamp]: _gfDF.loc[beforeIndex,QTemporality] = answerTemporalities[2] else: temporality = answerTemporalities[2] elif temporality == answerTemporalities[1] and afterIndex != 0 : if _gfDF.loc[_index,QTimestamp] < _gfDF.loc[afterIndex,QTimestamp]: _gfDF.loc[afterIndex,QTemporality] = answerTemporalities[2] else: temporality = answerTemporalities[2] _gfDF.loc[_index,QTemporality] = temporality if temporality == answerTemporalities[0]: beforeIndex = _index elif temporality == answerTemporalities[1]: afterIndex = _index temporalities[_userId] = [_firstEventDate, beforeIndex, afterIndex] print("temporalities set (first event method)") # when did the user answer the questionnaire? # After gameEventDate, before gameEventDate, undefined? # answerDate is assumed to be the gform Timestamp, UTC # gameEventDate is assumed to be of type pandas._libs.tslib.Timestamp, UTC, from RedMetrics def getTemporality( answerDate, gameEventDate ): result = answerTemporalities[2] if(gameEventDate != pd.Timestamp.max.tz_localize('utc')): if(answerDate <= gameEventDate): result = answerTemporalities[0] elif (answerDate > gameEventDate): result = answerTemporalities[1] return result # should be based on events on a 24h window def setAnswerTemporalities2( _gfDF, _rmDF ): # check whether temporalities have already been set if(len(_gfDF[QTemporality].unique()) == 1): # format : key = _userId, value = [pretestBeforeRatio, posttestAfterRatio, 0 or pretestIndex, 0 or posttestIndex] temporalities = {} for _index in _gfDF.index: _userId = _gfDF.loc[_index,localplayerguidkey] pretestBeforeRatio, posttestAfterRatio, pretestIndex, posttestIndex = [1.0, 1.0, 0, 0] answerDate = _gfDF.loc[_index,QTimestamp] [eventsBeforeRatio, eventsAfterRatio] = getEventCountRatios(answerDate, _userId, _rmDF, _gfDF) if _userId in temporalities: pretestBeforeRatio, posttestAfterRatio, pretestIndex, posttestIndex = temporalities[_userId] if ((eventsBeforeRatio == eventsAfterRatio) and (0 != eventsBeforeRatio)): print("anomaly for userId=" + _userId + ": eventsBeforeRatio == eventsAfterRatio != 0") # update posttest if there are less events afterwards? # keep the oldest anyways? 
if (posttestIndex == 0) and (eventsBeforeRatio >= eventsAfterRatio) and (0 != eventsBeforeRatio): # improvement idea: #if (eventsBeforeRatio > eventsAfterRatio) : # if (posttestIndex == 0) or (_gfDF.loc[posttestIndex,localplayerguidkey]): # if _gfDF.loc[_index,QTimestamp] > _gfDF.loc[beforeIndex,QTimestamp]: # if _gfDF.loc[_index,QTimestamp] < _gfDF.loc[afterIndex,QTimestamp]: posttestAfterRatio = eventsAfterRatio posttestIndex = _index _gfDF.loc[_index,QTemporality] = answerTemporalities[1] # update pretest if there are more events before? # keep the oldest anyways? elif (pretestIndex == 0) and (eventsBeforeRatio <= eventsAfterRatio) and (0 != eventsAfterRatio): pretestBeforeRatio = eventsBeforeRatio pretestIndex = _index _gfDF.loc[_index,QTemporality] = answerTemporalities[0] temporalities[_userId] = [pretestBeforeRatio, posttestAfterRatio, pretestIndex, posttestIndex] print("temporalities set (ratio method)") def getEventCountRatios(answerDate, userId, _rmDF, _gfDF): result = [0,0] allEvents = _rmDF[_rmDF['userId']==userId] allEventsCount = len(allEvents) if 0 != allEventsCount: eventsBeforeRatio = len(allEvents[allEvents['userTime'] < answerDate])/allEventsCount eventsAfterRatio = len(allEvents[allEvents['userTime'] > answerDate])/allEventsCount result = [eventsBeforeRatio, eventsAfterRatio] return result # ## score # # + def getCorrections( _userId, _gfDF, _source = correctAnswers, _columnAnswers = [] ): if(len(_columnAnswers) == 0): _columnAnswers = getAnswers( _userId, _gfDF = _gfDF ) if 0 != len(_columnAnswers.columns): _questionsCount = len(_columnAnswers.values) for _columnName in _columnAnswers.columns: if answersColumnNameStem in _columnName: _answerNumber = _columnName.replace(answersColumnNameStem,"") newCorrectionsColumnName = correctionsColumnNameStem + _answerNumber #_columnAnswers[newCorrectionsColumnName] = _columnAnswers[_columnName] _columnAnswers[newCorrectionsColumnName] = pd.Series(np.full(_questionsCount, np.nan)) for question in _columnAnswers[_columnName].index: _correctAnswers = _source.loc[question] if(len(_correctAnswers) > 0): _columnAnswers.loc[question,newCorrectionsColumnName] = False for _correctAnswer in _correctAnswers: if str(_columnAnswers.loc[question,_columnName])\ .startswith(str(_correctAnswer)): _columnAnswers.loc[question,newCorrectionsColumnName] = True break else: # user has never answered print("can't give correct answers") return _columnAnswers # edits in-place # _corrections must be a dataframe full of corrections as produced above def getBinarizedCorrections( _corrections ): for _columnName in _corrections.columns: for _index in _corrections[_columnName].index: if(True==_corrections.loc[_index,_columnName]): _corrections.loc[_index,_columnName] = 1.0 elif (False==_corrections.loc[_index,_columnName]): _corrections.loc[_index,_columnName] = 0.0 return _corrections # only for one line in the gform def getBinarized(_gfDFRow, _source = correctAnswers): _notEmptyIndexes = [] for _index in _source.index: if(len(_source.loc[_index]) > 0): _notEmptyIndexes.append(_index) _binarized = pd.Series(np.full(len(_gfDFRow.index), np.nan), index = _gfDFRow.index) for question in _gfDFRow.index: _correctAnswers = _source.loc[question] if(len(_correctAnswers) > 0): _binarized[question] = 0 for _correctAnswer in _correctAnswers: if str(_gfDFRow.loc[question])\ .startswith(str(_correctAnswer)): _binarized.loc[question] = 1 break _slicedBinarized = _binarized.loc[_notEmptyIndexes] return _slicedBinarized def getAllBinarized(_gfDF, _source = correctAnswers): 
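# Returns one row per questionnaire answer and one column per question that has at least one
# correct answer in _source; each cell is 1.0 when the recorded answer starts with a correct
# answer and 0.0 otherwise (built from getCorrections and getBinarizedCorrections above).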
_notEmptyIndexes = [] for _index in _source.index: if(len(_source.loc[_index]) > 0): _notEmptyIndexes.append(_index) _result = pd.DataFrame(index = _notEmptyIndexes) for _userId in getAllResponders(_gfDF = _gfDF): _corrections = getCorrections(_userId, _source=_source, _gfDF = _gfDF) _binarized = getBinarizedCorrections(_corrections) _slicedBinarized =\ _binarized.loc[_notEmptyIndexes][_binarized.columns[\ _binarized.columns.to_series().str.contains(correctionsColumnNameStem)\ ]] _result = pd.concat([_result, _slicedBinarized], axis=1) _result = _result.T return _result # CCA.iloc[i,j] is the number of users who correctly answered questions number i and j # CCA[i,j] = Sum(A[u,i] * A[u,j], u in users) = Sum(tA[i,u] * A[u,j], u in users) = tA.A[i,j] # CCA[i,j] is an int def getCrossCorrectAnswers( _binarizedAnswers ): return _binarizedAnswers.T.dot(_binarizedAnswers) #function that returns the score from user id scoreLabel = 'score' def getScore( _userId, _gfDF, _source = correctAnswers ): _score = pd.DataFrame({}, columns = answerTemporalities) _score.loc[scoreLabel,:] = np.nan for _column in _score.columns: _score.loc[scoreLabel, _column] = [] if hasAnswered(_userId, _gfDF): _columnAnswers = getCorrections(_userId, _gfDF = _gfDF, _source = _source) for _columnName in _columnAnswers.columns: # only work on corrected columns if correctionsColumnNameStem in _columnName: _answerColumnName = _columnName.replace(correctionsColumnNameStem,\ answersColumnNameStem) _temporality = _columnAnswers.loc[QTemporality,_answerColumnName] _counts = (_columnAnswers[_columnName]).value_counts() _thisScore = 0 if(True in _counts): _thisScore = _counts[True] _score.loc[scoreLabel,_temporality].append(_thisScore) else: print("user " + str(_userId) + " has never answered") return _score def getGFormRowCorrection(_gfDFRow, _source = correctAnswers): result = _gfDFRow.copy() if(len(_gfDFRow) == 0): print("this gform row is empty") else: result = pd.Series(index = _gfDFRow.index, data = np.full(len(_gfDFRow), np.nan)) for question in result.index: _correctAnswers = _source.loc[question] if(len(_correctAnswers) > 0): result.loc[question] = False for _correctAnswer in _correctAnswers: if str(_gfDFRow.loc[question]).startswith(str(_correctAnswer)): result.loc[question] = True break return result def getGFormRowScore( _gfDFRow, _source = correctAnswers): correction = getGFormRowCorrection( _gfDFRow, _source = _source) _counts = correction.value_counts() _thisScore = 0 if(True in _counts): _thisScore = _counts[True] return _thisScore # + QCuriosityCoding = {"A lot": 4, "Beaucoup": 4, "Enormément": 5, "Énormément": 5, "Extremely": 5, "Moderately": 3, "Moyennement": 3, "Slightly": 2, "Un peu": 2, "I don't know": 3, "Je ne sais pas": 3, "Not at all": 1, "Pas du tout": 1} QCuriosityBiologyCoding = QCuriosityCoding QCuriositySyntheticBiologyCoding = QCuriosityCoding QCuriosityVideoGamesCoding = QCuriosityCoding QCuriosityEngineeringCoding = QCuriosityCoding QPlayedCoding = {"I played it multiple times recently": 3, "I played it multiple times recently on this computer": 3, "I played recently on an other computer": 2, "I played it some time ago": 1, "I just played for the first time": 1, "No / not yet": 0, "I don't know": 0} #QAgeCoding QGenderCoding = {"Female": 1, "Other": 0, "Prefer not to say": 0, "Male": -1} QInterestVideoGamesCoding = QCuriosityCoding QInterestBiologyCoding = QCuriosityCoding QStudiedBiologyCoding = {"Not even in middle school": 0, "Jamais": 0, "Jamais, pas même au collège": 0, "Until the end of middle 
school": 1, "Jusqu'au brevet": 1, "Until the end of high school": 2, "Jusqu'au bac": 2, "Until bachelor's degree": 3, "Jusqu'à la license": 3, "At least until master's degree": 4, "Au moins jusqu'au master": 4, "I don't know": 0, "Je ne sais pas": 0} QPlayVideoGamesCoding = {"A lot": 4, "Beaucoup": 4, "Enormément": 5, "Énormément": 5, "Extremely": 5, "Moderately": 3, "Moyennement": 3, "Rarely": 2, "Un peu": 2, "I don't know": 3, "Je ne sais pas": 3, "Not at all": 1, "Pas du tout": 1} QHeardSynBioOrBioBricksCoding = {"Yes, and I know what it means" : 2, "Yes, but I don't exactly know what it means": 1, "No": 0} QVolunteerCoding = {"Yes": 1, "No": 0} QEnjoyedCoding = {'Extremely': 4, 'A lot': 3, 'Not at all': 0, 'A bit': 1, 'Moderately': 2, "No": 0, "Not applicable: not played yet": -1} QLanguageCoding = {"en": 0, "fr": 1} QTemporalityCoding = {"pretest": 0, "posttest": 1, "undefined": -5} numericDemographicQuestionsCodings = [ QCuriosityBiologyCoding, QCuriositySyntheticBiologyCoding, QCuriosityVideoGamesCoding, QCuriosityEngineeringCoding, QPlayedCoding, QGenderCoding, QInterestVideoGamesCoding, QInterestBiologyCoding, QStudiedBiologyCoding, QPlayVideoGamesCoding, QHeardSynBioOrBioBricksCoding, QVolunteerCoding, QEnjoyedCoding, QLanguageCoding, QTemporalityCoding, ] numericDemographicQuestions = [ QCuriosityBiology, QCuriositySyntheticBiology, QCuriosityVideoGames, QCuriosityEngineering, QPlayed, QGender, QInterestVideoGames, QInterestBiology, QStudiedBiology, QPlayVideoGames, QHeardSynBioOrBioBricks, QVolunteer, QEnjoyed, QLanguage, QTemporality, ] numericDemographicQuestionsCodingsSeries = pd.Series(data = numericDemographicQuestionsCodings, index = numericDemographicQuestions) # - # only for one line in the gform def getNumeric(_gfDFRow, _source = correctAnswers): _notEmptyIndexes = [] for _index in _source.index: if(len(_source.loc[_index]) > 0): _notEmptyIndexes.append(_index) _numeric = pd.Series(np.full(len(_gfDFRow.index), np.nan), index = _gfDFRow.index) for question in _gfDFRow.index: if question in scientificQuestions: _correctAnswers = _source.loc[question] if(len(_correctAnswers) > 0): _numeric[question] = 0 for _correctAnswer in _correctAnswers: if str(_gfDFRow.loc[question])\ .startswith(str(_correctAnswer)): _numeric.loc[question] = 1 break elif question == QAge: if pd.notnull(_gfDFRow.loc[question]): _numeric.loc[question] = float(_gfDFRow.loc[question]) else: _numeric.loc[question] = -1 elif question in demographicQuestions: if pd.notnull(_gfDFRow.loc[question]): _numeric.loc[question] = numericDemographicQuestionsCodingsSeries.loc[question][_gfDFRow.loc[question]] else: _numeric.loc[question] = 0 _slicedBinarized = _numeric.loc[_notEmptyIndexes] return _slicedBinarized # ## visualizations # # + def createStatSet(series, ids = pd.Series()): if(0 == len(ids)): ids = series.index result = { 'count' : len(ids), 'unique' : len(ids.unique()), 'median' : series.median(), 'mean' : series.mean(), 'std' : series.std(), } return result # _binarized must be well-formed, similarly to getAllBinarized's output def getPercentagePerQuestion(_binarized): totalPerQuestionDF = pd.DataFrame(data=np.dot(np.ones(_binarized.shape[0]), _binarized), index=_binarized.columns) percentagePerQuestion = totalPerQuestionDF*100 / _binarized.shape[0] return percentagePerQuestion # + ## gfDF can be: all, those who answered both before and after, ## those who played between date1 and date2, ... 
from scipy.stats import ttest_ind def plotBasicStats( gfDF, title = np.nan, includeAll = False, includeBefore = True, includeAfter = True, includeUndefined = False, includeProgress = True, includeRelativeProgress = False, horizontalPlot = True, sortedAlong = '', # in ["pretest", "posttest", "progression"] figsize=(20,4), annot=True, cbar=True, annot_kws={"size": 10}, font_scale=1, ): stepsPerInclude = 2 includeCount = np.sum([includeAll, includeBefore, includeAfter, includeUndefined, includeProgress]) stepsCount = stepsPerInclude*includeCount + 3 #print("stepsPerInclude=" + str(stepsPerInclude)) #print("includeCount=" + str(includeCount)) #print("stepsCount=" + str(stepsCount)) __progress = FloatProgress(min=0, max=stepsCount) display(__progress) gfDFPretests = gfDF[gfDF[QTemporality] == answerTemporalities[0]] gfDFPosttests = gfDF[gfDF[QTemporality] == answerTemporalities[1]] gfDFUndefined = gfDF[gfDF[QTemporality] == answerTemporalities[2]] #uniqueBefore = gfDFPretests[localplayerguidkey] #uniqueAfter = #uniqueUndefined = scientificQuestionsSource = correctAnswers.copy() allQuestionsSource = correctAnswers + demographicAnswers categories = ['all', answerTemporalities[0], answerTemporalities[1], answerTemporalities[2],\ 'progress', 'rel. progress'] data = {} sciBinarized = pd.DataFrame() allBinarized = pd.DataFrame() scoresAll = pd.DataFrame() sciBinarizedBefore = pd.DataFrame() allBinarizedBefore = pd.DataFrame() scoresBefore = pd.DataFrame() sciBinarizedAfter = pd.DataFrame() allBinarizedAfter = pd.DataFrame() scoresAfter = pd.DataFrame() sciBinarizedUndefined = pd.DataFrame() allBinarizedUndefined = pd.DataFrame() scoresUndefined = pd.DataFrame() scoresProgress = pd.DataFrame() ## basic stats: ### mean score ### median score ### std if includeAll: sciBinarized = getAllBinarized(gfDF, _source = scientificQuestionsSource) __progress.value += 1 allBinarized = getAllBinarized(gfDF, _source = allQuestionsSource) __progress.value += 1 scoresAll = pd.Series(np.dot(sciBinarized, np.ones(sciBinarized.shape[1]))) data[categories[0]] = createStatSet(scoresAll, gfDF[localplayerguidkey]) if includeBefore or includeProgress: sciBinarizedBefore = getAllBinarized(gfDFPretests, _source = scientificQuestionsSource) __progress.value += 1 allBinarizedBefore = getAllBinarized(gfDFPretests, _source = allQuestionsSource) __progress.value += 1 scoresBefore = pd.Series(np.dot(sciBinarizedBefore, np.ones(sciBinarizedBefore.shape[1]))) temporaryStatSetBefore = createStatSet(scoresBefore, gfDFPretests[localplayerguidkey]) if includeBefore: data[categories[1]] = temporaryStatSetBefore if includeAfter or includeProgress: sciBinarizedAfter = getAllBinarized(gfDFPosttests, _source = scientificQuestionsSource) __progress.value += 1 allBinarizedAfter = getAllBinarized(gfDFPosttests, _source = allQuestionsSource) __progress.value += 1 scoresAfter = pd.Series(np.dot(sciBinarizedAfter, np.ones(sciBinarizedAfter.shape[1]))) temporaryStatSetAfter = createStatSet(scoresAfter, gfDFPosttests[localplayerguidkey]) if includeAfter: data[categories[2]] = temporaryStatSetAfter if includeUndefined: sciBinarizedUndefined = getAllBinarized(gfDFUndefined, _source = scientificQuestionsSource) __progress.value += 1 allBinarizedUndefined = getAllBinarized(gfDFUndefined, _source = allQuestionsSource) __progress.value += 1 scoresUndefined = pd.Series(np.dot(sciBinarizedUndefined, np.ones(sciBinarizedUndefined.shape[1]))) data[categories[3]] = createStatSet(scoresUndefined, gfDFUndefined[localplayerguidkey]) if includeProgress: 
data[categories[4]] = { 'count' : min(temporaryStatSetAfter['count'], temporaryStatSetBefore['count']), 'unique' : min(temporaryStatSetAfter['unique'], temporaryStatSetBefore['unique']), 'median' : temporaryStatSetAfter['median']-temporaryStatSetBefore['median'], 'mean' : temporaryStatSetAfter['mean']-temporaryStatSetBefore['mean'], 'std' : temporaryStatSetAfter['std']-temporaryStatSetBefore['std'], } __progress.value += 2 result = pd.DataFrame(data) __progress.value += 1 print(title) print(result) if (includeBefore and includeAfter) or includeProgress: if (len(scoresBefore) > 2 and len(scoresAfter) > 2): ttest = ttest_ind(scoresBefore, scoresAfter) print("t test: statistic=" + repr(ttest.statistic) + " pvalue=" + repr(ttest.pvalue)) print() ## percentage correct ### percentage correct - max 5 columns percentagePerQuestionAll = pd.DataFrame() percentagePerQuestionBefore = pd.DataFrame() percentagePerQuestionAfter = pd.DataFrame() percentagePerQuestionUndefined = pd.DataFrame() percentagePerQuestionProgress = pd.DataFrame() tables = [] if includeAll: percentagePerQuestionAll = getPercentagePerQuestion(allBinarized) tables.append([percentagePerQuestionAll, categories[0]]) if includeBefore or includeProgress: percentagePerQuestionBefore = getPercentagePerQuestion(allBinarizedBefore) if includeBefore: tables.append([percentagePerQuestionBefore, categories[1]]) if includeAfter or includeProgress: percentagePerQuestionAfter = getPercentagePerQuestion(allBinarizedAfter) if includeAfter: tables.append([percentagePerQuestionAfter, categories[2]]) if includeUndefined: percentagePerQuestionUndefined = getPercentagePerQuestion(allBinarizedUndefined) tables.append([percentagePerQuestionUndefined, categories[3]]) if includeProgress or includeRelativeProgress: percentagePerQuestionProgress = percentagePerQuestionAfter - percentagePerQuestionBefore if includeProgress: tables.append([percentagePerQuestionProgress, categories[4]]) if includeRelativeProgress: # use temporaryStatSetAfter['count'], temporaryStatSetBefore['count']? 
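# Relative progress per question: (posttest % - pretest %) divided by the pretest %,
# forced to 0 where the pretest percentage is 0 to avoid dividing by zero.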
percentagePerQuestionProgress2 = percentagePerQuestionProgress.copy() for index in range(0,len(percentagePerQuestionProgress.index)): if (0 == percentagePerQuestionBefore.iloc[index,0]): percentagePerQuestionProgress2.iloc[index,0] = 0 else: percentagePerQuestionProgress2.iloc[index,0] = \ percentagePerQuestionProgress.iloc[index,0]/percentagePerQuestionBefore.iloc[index,0] tables.append([percentagePerQuestionProgress2, categories[5]]) __progress.value += 1 graphTitle = '% correct: ' toConcat = [] for table,category in tables: concat = (len(table.values) > 0) for elt in table.iloc[:,0].values: if np.isnan(elt): concat = False break if(concat): graphTitle = graphTitle + category + ' ' toConcat.append(table) if (len(toConcat) > 0): percentagePerQuestionConcatenated = pd.concat( toConcat , axis=1) if(pd.notnull(title) > 0): graphTitle = graphTitle + ' - ' + title _fig = plt.figure(figsize=figsize) _ax1 = plt.subplot(111) if pd.isnull(title): _ax1.set_title(graphTitle) else: _ax1.set_title(title) matrixToDisplay = percentagePerQuestionConcatenated.round().astype(int) matrixToDisplay.columns = ["pretest", "posttest", "progression"] if sortedAlong in matrixToDisplay.columns: demographicQuestions = demographicAnswers[demographicAnswers.apply(len) > 0].index sciSorted = matrixToDisplay.loc[scientificQuestions, :].sort_values(by = sortedAlong, ascending = True) demoSorted = matrixToDisplay.loc[demographicQuestions, :].sort_values(by = sortedAlong, ascending = True) matrixToDisplay = pd.concat([sciSorted, demoSorted]) if horizontalPlot: matrixToDisplay = matrixToDisplay.T sns.set(font_scale=font_scale) sns.heatmap( matrixToDisplay, ax=_ax1, cmap=plt.cm.jet, square=True, annot=annot, fmt='d', vmin=0, vmax=100, cbar=cbar, annot_kws=annot_kws, ) #if horizontalPlot: # both fail #heatmap.set_xticklabels(_ax1.get_xticklabels(),rotation=45) #plt.xticks(rotation=45) __progress.value += 1 ### percentage cross correct ### percentage cross correct, conditionnally if(__progress.value != stepsCount): print("__progress.value=" + str(__progress.value) + " != stepsCount=" + str(stepsCount)) __progress.close() del __progress # return sciBinarized, sciBinarizedBefore, sciBinarizedAfter, sciBinarizedUndefined, \ # allBinarized, allBinarizedBefore, allBinarizedAfter, allBinarizedUndefined return matrixToDisplay # + def plotCorrelationMatrices( allBinarized = [], beforeBinarized = [], afterBinarized = [], undefinedBinarized = [], titleAll = 'Correlation of pre- & post-test answers', titleBefore = 'Correlation of pre-test answers', titleAfter = 'Correlation of post-test answers', titleUndefined = 'Correlation of undefined answers', titleSuffix = '', ): dataBinarized = [allBinarized, beforeBinarized, afterBinarized, undefinedBinarized] titles = [titleAll + titleSuffix, titleBefore + titleSuffix, titleAfter + titleSuffix, titleUndefined + titleSuffix] for index in range(0, len(dataBinarized)): if(len(dataBinarized[index]) > 0): plotCorrelationMatrix( dataBinarized[index], _abs=True, _clustered=False, _questionNumbers=True, _annot = True, _figsize = (20,20), _title=titles[index], ) ##correlation ### simple heatmap ### clustermap methods = ['pearson', 'kendall', 'spearman'] def plotCorrelationMatrix( _binarizedMatrix, _method = methods[0], _title='Questions\' Correlations', _abs=False, _clustered=False, _questionNumbers=False, _annot = False, _figsize = (10,10), _metric='euclidean' ): _progress = FloatProgress(min=0, max=7) display(_progress) _overlay = False _progress.value += 1 # computation of correlation matrix _m = 
_method if(not (_method in methods)): _m = methods[0] _correlation = _binarizedMatrix.astype(float).corr(_m) _progress.value += 1 if(_abs): _correlation = _correlation.abs() _progress.value += 1 if(_clustered): # removing NaNs # can't cluster NaN lines in _correlation _notNaNsIndices = [] _notNaNsColumns = [] for index in _correlation.index: #if(pd.notnull(_correlation.loc[index,:]).all()): # if no element is nan if(~pd.isnull(_correlation.loc[index,:]).all()): # if at least one element is not nan _notNaNsIndices.append(index) #for column in _correlation.columns: # if(~np.isnan(_correlation.loc[:,column]).all()): # _notNaNsColumns.append(column) _binarizedMatrix = _binarizedMatrix.loc[:,_notNaNsIndices] _correlation = _correlation.loc[_notNaNsIndices,_notNaNsIndices] _progress.value += 1 # optional computation of overlay if(_annot): _overlay = getCrossCorrectAnswers(_binarizedMatrix).astype(int) _progress.value += 1 # preparation of plot labels if(_questionNumbers): _correlation.columns = pd.Series(_correlation.columns).apply(\ lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1)) if(_clustered): _correlation.index = pd.Series(_correlation.columns).apply(\ lambda x: '#' + str(_correlation.columns.get_loc(x) + 1) + ' ' + x) else: _correlation.index = _correlation.columns _progress.value += 1 vmin = -1 if _abs: vmin = 0 vmax = 1 # plot if(_clustered): result = sns.clustermap( _correlation, metric=_metric, cmap=plt.cm.jet, square=True, figsize=_figsize, annot=_overlay, fmt='d', vmin=vmin, vmax=vmax, ) return result, _overlay # if(_annot): # reorder columns using clustergrid.dendrogram_col.reordered_ind #_overlay1 = _overlay.copy() # reorderedCols = result.dendrogram_col.reordered_ind # _overlay = _overlay #_overlay2 = _overlay.copy().iloc[reorderedCols,reorderedCols] # result = sns.clustermap(_correlation,metric=_metric,cmap=plt.cm.jet,square=True,figsize=_figsize,annot=_overlay, fmt='d') #print(_overlay1.columns == _overlay2.columns) #print(_overlay1 == _overlay2) #print(_overlay1.columns) #print(_overlay1.columns) #print(_overlay1) #print(_overlay2) #return _overlay1, _overlay2 # return result, _overlay else: _fig = plt.figure(figsize=_figsize) _ax = plt.subplot(111) _ax.set_title(_title) sns.heatmap( _correlation, ax=_ax, cmap=plt.cm.jet, square=True, annot=_overlay, fmt='d', vmin=vmin, vmax=vmax, ) _progress.close() del _progress #def plotAll(): # loop on question types # loop on temporalities # loop on representations ## basic stats: ### mean score ### median score ### std ## percentage correct ### percentage correct - 3 columns ### percentage cross correct ### percentage cross correct, conditionnally ##correlation ### simple heatmap # plotCorrelationMatrix ### clustermap # plotCorrelationMatrix # + def plotSamples(gfDFs): _progress = FloatProgress(min=0, max=len(gfDFs)) display(_progress) for gfDF, title in gfDFs: plotBasicStats(gfDF, title) _progress.value += 1 if(_progress.value != len(gfDFs)): print("__progress.value=" + str(__progress.value) + " != len(gfDFs)=" + str(len(gfDFs))) _progress.close() del _progress # - # for per-gform, manual analysis def getGFormDataPreview(_GFUserId, gfDF): gforms = gform[gform[localplayerguidkey] == _GFUserId] result = {} for _ilocIndex in range(0, len(gforms)): gformsIndex = gforms.index[_ilocIndex] currentGForm = gforms.iloc[_ilocIndex] subresult = {} subresult['date'] = currentGForm[QTimestamp] subresult['temporality RM'] = currentGForm[QTemporality] subresult['temporality GF'] = getGFormRowGFormTemporality(currentGForm) 
subresult['score'] = getGFormRowScore(currentGForm) subresult['genderAge'] = [currentGForm[QGender], currentGForm[QAge]] # search for other users with similar demographics matchingDemographics = getMatchingDemographics(gfDF, currentGForm) matchingDemographicsIds = [] #print(type(matchingDemographics)) #print(matchingDemographics.index) for matchesIndex in matchingDemographics.index: matchingDemographicsIds.append([matchesIndex, matchingDemographics.loc[matchesIndex, localplayerguidkey]]) subresult['demographic matches'] = matchingDemographicsIds result['survey' + str(_ilocIndex)] = subresult return result # ## sample getters # # ### set operators # indices do not need to be reset as they all come from gform def getUnionQuestionnaires(gfDF1, gfDF2): if (not (gfDF1.columns == gfDF2.columns).all()): print("warning: parameter columns are not the same") return pd.concat([gfDF1, gfDF2]).drop_duplicates() # indices do not need to be reset as they all come from gform def getIntersectionQuestionnaires(gfDF1, gfDF2): if (not (gfDF1.columns == gfDF2.columns).all()): print("warning: parameter columns are not the same") return pd.merge(gfDF1, gfDF2, how = 'inner').drop_duplicates() # get gfDF1 and gfDF2 rows where users are common to gfDF1 and gfDF2 def getIntersectionUsersSurveys(gfDF1, gfDF2): result1 = gfDF1[gfDF1[localplayerguidkey].isin(gfDF2[localplayerguidkey])] result2 = gfDF2[gfDF2[localplayerguidkey].isin(gfDF1[localplayerguidkey])] return getUnionQuestionnaires(result1,result2) # ### Users who answered either before or after gform[QPlayed].unique() def getRMBefores(gfDF): return gfDF[gfDF[QTemporality] == answerTemporalities[0]] def getRMAfters(gfDF): return gfDF[gfDF[QTemporality] == answerTemporalities[1]] # returns users who declared that they have never played the game, whatever platform # everPlayedPositives is defined in "../Functions/0.1 GF English localization.ipynb" def getGFormBefores(gfDF): return gfDF[ ~gfDF[QPlayed].isin(everPlayedPositives) ] def isGFormBefore(surveyAnswerIndex, _gform): return (len(getGFormBefores(_gform.loc[surveyAnswerIndex:surveyAnswerIndex, :])) == 1) # returns users who declared that they have already played the game, whatever platform # everPlayedPositives is defined in "../Functions/0.1 GF English localization.ipynb" def getGFormAfters(gfDF): return gfDF[ gfDF[QPlayed].isin(everPlayedPositives) ] def isGFormAfter(surveyAnswerIndex, _gform): return (len(getGFormAfters(_gform.loc[surveyAnswerIndex:surveyAnswerIndex, :])) == 1) # returns an element of answerTemporalities # everPlayedPositives is defined in '../Static data/English localization.ipynb' def getGFormRowGFormTemporality(_gfDFRow): if (_gfDFRow[QPlayed] in everPlayedPositives): return answerTemporalities[1] else: return answerTemporalities[0] # ### Users who answered both before and after def getSurveysOfUsersWhoAnsweredBoth(gfDF, gfMode = True, rmMode = False): befores = gfDF afters = gfDF if gfMode: befores = getGFormBefores(befores) afters = getGFormAfters(afters) if rmMode: befores = getRMBefores(befores) afters = getRMAfters(afters) return getIntersectionUsersSurveys(befores, afters) def getSurveysThatAnswered(gfDF, questionsAndPositiveAnswers, hardPolicy = True): filterSeries = [] if hardPolicy: filterSeries = pd.Series(True, gfDF.index) for question, positiveAnswers in questionsAndPositiveAnswers: filterSeries = filterSeries & (gfDF[question].isin(positiveAnswers)) else: filterSeries = pd.Series(False, range(len(gfDF.index))) for question, positiveAnswers in questionsAndPositiveAnswers: 
filterSeries = filterSeries | (gfDF[question].isin(positiveAnswers)) return gfDF[filterSeries] # surveys of people who have studied biology, and/or know about synthetic biology, and/or about BioBricks def getSurveysOfBiologists(gfDF, hardPolicy = True): #QStudiedBiology biologyStudyPositives #irrelevant QInterestBiology biologyInterestPositives #QHeardSynBioOrBioBricks heardAboutBioBricksPositives questionsAndPositiveAnswers = [[QStudiedBiology, biologyStudyPositives], [QHeardSynBioOrBioBricks, heardAboutBioBricksPositives]] return getSurveysThatAnswered(gfDF, questionsAndPositiveAnswers, hardPolicy) # surveys of people who play video games and/or are interested in them def getSurveysOfGamers(gfDF, hardPolicy = True): #QInterestVideoGames interestPositives #QPlayVideoGames frequencyPositives questionsAndPositiveAnswers = [[QInterestVideoGames, interestPositives], [QPlayVideoGames, frequencyPositives]] return getSurveysThatAnswered(gfDF, questionsAndPositiveAnswers, hardPolicy) def getSurveysWithMatchingAnswers(gfDF, _gfDFRow, strictList, extendedList = [], hardPolicy = False): questions = strictList if (hardPolicy): questions += extendedList questionsAndPositiveAnswers = [] for q in questions: questionsAndPositiveAnswers.append([q, [_gfDFRow[q]]]) return getSurveysThatAnswered(gfDF, questionsAndPositiveAnswers, True) # + #QAge #QGender def getMatchingDemographics(gfDF, _gfDFRow, hardPolicy = False): # age and gender, edu should not change #QGender #QAge #QStudiedBiology # interests, hobbies, and knowledge - evaluation may vary after playing #QInterestVideoGames #QPlayVideoGames #QInterestBiology #QHeardSynBioOrBioBricks heardAboutBioBricksPositives # language may vary: players may have missed the opportunity to set it, or may want to try and change it #QLanguage return getSurveysWithMatchingAnswers( gfDF, _gfDFRow, [QAge, QGender, QStudiedBiology], extendedList = [QInterestVideoGames, QPlayVideoGames, QInterestBiology, QHeardSynBioOrBioBricks, QLanguage], hardPolicy = hardPolicy ) # - # ### Utility functions to gfDF def getDemographicSamples(gfDF): gfDFs = [ [gfDF, 'root gfDF'], [gfDF[gfDF[QLanguage] == enLanguageID], 'English'], [gfDF[gfDF[QLanguage] == frLanguageID], 'French'], [gfDF[gfDF[QGender] == 'Female'], 'female'], [gfDF[gfDF[QGender] == 'Male'], 'male'], [getSurveysOfBiologists(gfDF), 'biologists - strict'], [getSurveysOfBiologists(gfDF, False), 'biologists - broad'], [getSurveysOfGamers(gfDF), 'gamers - strict'], [getSurveysOfGamers(gfDF, False), 'gamers - broad'], ] return gfDFs def getTemporalitySamples(gfDF): gfDFs = [ [gfDF, 'root gfDF'], [getRMBefores(gfDF), 'RedMetrics befores'], [getGFormBefores(gfDF), 'Google form befores'], [getRMBefores(getGFormBefores(gfDF)), 'GF & RedMetrics befores'], [getRMAfters(gfDF), 'RedMetrics afters'], [getGFormAfters(gfDF), 'Google form afters'], [getRMAfters(getGFormAfters(gfDF)), 'GF & RedMetrics afters'], [getSurveysOfUsersWhoAnsweredBoth(gfDF, gfMode = True, rmMode = False), 'GF both before and after'], [getSurveysOfUsersWhoAnsweredBoth(gfDF, gfMode = False, rmMode = True), 'RM both before and after'], [getSurveysOfUsersWhoAnsweredBoth(gfDF, gfMode = True, rmMode = True), 'GF & RM both before and after'], ] return gfDFs # ## checkpoint validation # # + #function that returns the list of checkpoints from user id def getValidatedCheckpoints( userId, _gfDF ): _validatedCheckpoints = [] if hasAnswered(userId, _gfDF): _columnAnswers = getCorrections( userId, _gfDF = _gfDF) for _columnName in _columnAnswers.columns: # only work on corrected 
columns if correctionsColumnNameStem in _columnName: _questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(len(checkpointQuestionMatching))) for _index in range(0, len(_questionnaireValidatedCheckpointsPerQuestion)): if _columnAnswers[_columnName][_index]==True: _questionnaireValidatedCheckpointsPerQuestion[_index] = checkpointQuestionMatching['checkpoint'][_index] else: _questionnaireValidatedCheckpointsPerQuestion[_index] = '' _questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpointsPerQuestion.unique() _questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints[_questionnaireValidatedCheckpoints!=''] _questionnaireValidatedCheckpoints = pd.Series(_questionnaireValidatedCheckpoints) _questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints.sort_values() _questionnaireValidatedCheckpoints.index = range(0, len(_questionnaireValidatedCheckpoints)) _validatedCheckpoints.append(_questionnaireValidatedCheckpoints) else: print("user " + str(userId) + " has never answered") return pd.Series(_validatedCheckpoints) def getValidatedCheckpointsCounts( _userId, _gfDF ): _validatedCheckpoints = getValidatedCheckpoints(_userId, _gfDF = _gfDF) _counts = [] for checkpointsList in _validatedCheckpoints: _counts.append(len(checkpointsList)) return _counts def getNonValidated( checkpoints ): _validationLists = [] if 0!=len(checkpoints): for _validation in checkpoints: _result = pd.Series(np.setdiff1d(validableCheckpoints.values, _validation.values)) _result = _result[_result != ''] _result.index = range(0, len(_result)) _validationLists.append(_result) return pd.Series(_validationLists) else: return validableCheckpoints def getNonValidatedCheckpoints( userId, _gfDF ): validated = getValidatedCheckpoints( userId, _gfDF = _gfDF ) return getNonValidated(validated) def getNonValidatedCheckpointsCounts( userId, _gfDF ): _nonValidatedCheckpoints = getNonValidatedCheckpoints(userId, _gfDF = _gfDF) _counts = [] for checkpointsList in _nonValidatedCheckpoints: _counts.append(len(checkpointsList)) return _counts # - # ## p(answered question N | answered question P) # # + # returns all rows of Google form's answers that contain an element # of the array 'choice' for question number 'questionIndex' def getAllAnswerRows(questionIndex, choice, _gfDF ): return _gfDF[_gfDF.iloc[:, questionIndex].isin(choice)] def getPercentCorrectPerColumn(_df): _count = len(_df) _percents = pd.Series(np.full(len(_df.columns), np.nan), index=_df.columns) for _rowIndex in _df.index: for _columnName in _df.columns: _columnIndex = _df.columns.get_loc(_columnName) if ((_columnIndex >= firstEvaluationQuestionIndex) \ and (_columnIndex < len(_df.columns)-3)): if(str(_df[_columnName][_rowIndex]).startswith(str(correctAnswers[_columnIndex]))): if (np.isnan(_percents[_columnName])): _percents[_columnName] = 1; else: _percents[_columnName] = _percents[_columnName]+1 else: if (np.isnan(_percents[_columnName])): _percents[_columnName] = 0; _percents = _percents/_count _percents['Count'] = _count return _percents def getPercentCorrectKnowingAnswer(questionIndex, choice, _gfDF): _answerRows = getAllAnswerRows(questionIndex, choice, _gfDF = _gfDF); return getPercentCorrectPerColumn(_answerRows) # - # ## Filtering users # def getTestAnswers( _gfDF, _rmDF, _rmTestDF = normalizedRMDFTest, includeAndroid = True): return _gfDF[_gfDF[localplayerguidkey].isin(testUsers.values.flatten())] # + # ambiguous answer to QPlayed AUnclassifiable = 'I played recently on an other computer' # fill posttests 
with pretest data def setPosttestsProfileInfo(_gfDF): # check whether temporalities have already been set if(len(_gfDF[QTemporality].unique()) == 1): print("temporalities not set") else: intProgress = IntProgress(min=0, max=len(_gfDF.index)) display(intProgress) #_gfDF[_gfDF[QTemporality] == answerTemporalities[1]][QAge] for _index in _gfDF.index: intProgress.value += 1 if ((_gfDF.loc[_index, QTemporality] == answerTemporalities[0]) or (_gfDF.loc[_index, QTemporality] == answerTemporalities[1] and _gfDF.loc[_index, QPlayed] == AUnclassifiable ) ): if pd.isnull(_gfDF.loc[_index, survey1522DF[profileColumn]]).any(): print("nan for index " + str(_index)) else: # fix on age loading _gfDF.loc[_index, QAge] = int(_gfDF.loc[_index, QAge]) thisUserIdsPostests = _gfDF.loc[ (_gfDF['userId'] == _gfDF.loc[_index, 'userId']) & (_gfDF[QTemporality] == answerTemporalities[1]) ] if(len(thisUserIdsPostests) > 0): _gfDF.loc[ (_gfDF['userId'] == _gfDF.loc[_index, 'userId']) & (_gfDF[QTemporality] == answerTemporalities[1]) ,survey1522DF[profileColumn]] = _gfDF.loc[_index, survey1522DF[profileColumn]].values intProgress.close() del intProgress print("profile info set") # + lastAddedColumn = 'lastAdded' profileColumn = 'profile' commonColumn = 'common' compulsoryPretestColumn = 'compulsoryPretest' optionalPretestColumn = 'optionalPretest' compulsoryPosttestColumn = 'compulsoryPosttest' #QVolunteer QContent = QBioBricksDevicesComposition #QRemarks def getQuestionTypes(): intProgress = IntProgress(min=0, max=2*len(gform.index)) display(intProgress) survey1522DF = pd.DataFrame(index = gform.columns, data = False, columns = [lastAddedColumn, commonColumn, compulsoryPretestColumn,compulsoryPosttestColumn]) pretestQuestions = pd.Index([]) pretestNotVolunteeredQuestions = pd.Index([]) posttestQuestions = pd.Index([]) lastAddedQuestions = pd.Index([]) for answerIndex in gform.index: intProgress.value += 1 answer = gform.iloc[answerIndex,:] if gform.loc[answerIndex, QTemporality] == answerTemporalities[0]: # has volunteered? if gform.loc[answerIndex, QVolunteer] in yesNoPositives: pretestQuestions = pretestQuestions.union(answer[pd.notnull(answer[:])].index) else: pretestNotVolunteeredQuestions = pretestNotVolunteeredQuestions.union(answer[pd.notnull(answer[:])].index) elif gform.loc[answerIndex, QPlayed] != APlayedButProfileAgain: posttestQuestions = posttestQuestions.union(answer[pd.notnull(answer[:])].index) survey1522DF[compulsoryPretestColumn] = survey1522DF.index.isin(pretestNotVolunteeredQuestions) survey1522DF[optionalPretestColumn] = survey1522DF.index.isin(pretestQuestions.difference(pretestNotVolunteeredQuestions)) survey1522DF[compulsoryPosttestColumn] = survey1522DF.index.isin(posttestQuestions) survey1522DF[commonColumn] = (survey1522DF[compulsoryPretestColumn] & survey1522DF[compulsoryPosttestColumn]) for answerIndex in gform.index: intProgress.value += 1 answer = gform.iloc[answerIndex,:] if gform.loc[answerIndex, QTemporality] == answerTemporalities[0]: # has volunteered? 
if gform.loc[answerIndex, QVolunteer] in yesNoPositives: lastAddedQuestions = lastAddedQuestions.union(answer[pretestQuestions][pd.isnull(answer[pretestQuestions])].index) else: lastAddedQuestions = lastAddedQuestions.union(answer[pretestNotVolunteeredQuestions][pd.isnull(answer[pretestNotVolunteeredQuestions])].index) elif not pd.isnull(gform.loc[answerIndex, QContent]): lastAddedQuestions = lastAddedQuestions.union(answer[posttestQuestions][pd.isnull(answer[posttestQuestions])].index) survey1522DF[lastAddedColumn] = survey1522DF.index.isin(lastAddedQuestions) # manual override survey1522DF.loc[QRemarks] = False survey1522DF[profileColumn] = survey1522DF[compulsoryPretestColumn] & (~survey1522DF[compulsoryPosttestColumn]) intProgress.close() del intProgress return survey1522DF # - # #### remove answers that are incomplete # e.g. posttests with no content questions or pretests with no profile info # + def getPosttestsWithoutPretests(_gfDF): pretestIds = _gfDF[_gfDF[QTemporality] == answerTemporalities[0]]['userId'] posttestIds = _gfDF[_gfDF[QTemporality] == answerTemporalities[1]]['userId'] return posttestIds[~posttestIds.isin(pretestIds)].index def getPretestsWithoutPosttests(_gfDF): pretestIds = _gfDF[_gfDF[QTemporality] == answerTemporalities[0]]['userId'] posttestIds = _gfDF[_gfDF[QTemporality] == answerTemporalities[1]]['userId'] return pretestIds[~pretestIds.isin(posttestIds)].index # - def getWithoutIncompleteAnswers(_gfDF): # remove incomplete profiles # coincidentally removes posttests that don't have matching pretests _gfDF2 = _gfDF.drop(_gfDF.index[pd.isnull(_gfDF[_gfDF.columns[survey1522DF[profileColumn]]].T).any()]) # defensive check _gfDF2 = _gfDF2.drop(getPosttestsWithoutPretests(_gfDF2)) return _gfDF2 def getPerfectPretestPostestPairsCount(_gfDF): pairs = getPerfectPretestPostestPairs(_gfDF) halfPairsCount = len(pairs)//2 uniqueUserIdsCount = len(pairs['userId'].unique()) if (halfPairsCount != uniqueUserIdsCount): print('warning: halfPairsCount ('+str(halfPairsCount)+') != uniqueUserIdsCount ('+str(uniqueUserIdsCount)+')') return uniqueUserIdsCount # # Initialization of gform # resetTemporalities(gform) #setAnswerTemporalities() #setAnswerTemporalities2() setAnswerTemporalitiesSimple(gform) survey1522DF = getQuestionTypes() setPosttestsProfileInfo(gform) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="KxYzWnjJr86x" # # Image Style Transfer Using Neural Networks # # Visual recognition of objects is a task that humans excel at. In recent days, by using neural networks computers have also become very reliable at detecting objects in specific contexts. # # Particularly good performance has been reported for the class of [Convolutional Neural Networks](https://en.wikipedia.org/wiki/Convolutional_Neural_Network) (ConvNets). These networks consists of a sequence of layers. The outputs of the first layers correspond to pixel-level patterns and the outputs further to the end of the sequences describe larger-scale patterns and objects. # # A research team from Tübingen [has described](https://arxiv.org/abs/1508.06576) how the style of paintings can be described via the activations of the earlier layers of a ConvNet. 
The have also shown a procedure, by which an image can be created such that it corresponds to one image in terms of content (the __content image__) and in style to another image (the __style image__). # # Convolutional Neural Network # # __With this notebook you can experiment with Neural Style Transfer.__ # # The example was originally created for use in the course [Lernen, wie Maschinen lernen](https://www.mzl.uni-muenchen.de/lehramtpro/programm/Lernen_-wie-Maschinen-lernen/index.html) at LMU Munich. We are using some modified code from the [Google Magenta Repo](https://github.com/tensorflow/magenta) that implements the style transfer scheme described by [Ghiasi et al.](https://arxiv.org/abs/1705.06830). # # + [markdown] id="52Um4cugyInh" # --- # # ## Preparation # # These steps only have to be run once. # + id="5tbnXXX2sNYx" cellView="form" #@title Load required models and copy style-transfer code to this runtime # !curl https://raw.githubusercontent.com/Hirnkastl/machine-learning-for-artistic-style/master/dist/stylization-lib.tar.gz -o image-stylization-lib.tar.gz import sys IN_COLAB = 'google.colab' in sys.modules # Ensure the required version of scipy is installed in colab if IN_COLAB: # !pip install numpy==1.16.3 # !pip install tensorflow==1.13.2 # !pip install keras-applications==1.0.7 # !pip install keras-preprocessing==1.0.9 # !pip install scipy==1.2.1 # The Model can be downloaded as per Google Magenta project # !curl https://storage.googleapis.com/download.magenta.tensorflow.org/models/arbitrary_style_transfer.tar.gz -o image-stylization-checkpoint.tar.gz # Unpack # !tar -zxvf image-stylization-lib.tar.gz # !tar -zxvf image-stylization-checkpoint.tar.gz print('\nDone.') # + id="FDLtKpkuM_cM" from google.colab import drive drive.mount('/content/drive') # + id="8HPnqEClr86y" cellView="form" #@title Load helper functions (required for loading and display of images) # # Funktionen um Bilder zu laden vom Image-Upload Tool # print('Loading functions for downloading style and content images.') import requests import os import shutil DIR = './tmp/' EXP_CONTENT = '2_inhalt' EXP_STYLE = '2_stil' OUTDIR = './output/' if not os.path.isdir(DIR): print('Making the directory: {}'.format(DIR)) os.makedirs(DIR) # OK def clear_output(): if os.path.isdir(OUTDIR): shutil.rmtree(OUTDIR) os.mkdir(OUTDIR) # OK def clear_experiment(experiment_id): expdir = os.path.join(DIR, experiment_id) if os.path.isdir(expdir): shutil.rmtree(expdir) def download_experiment_images(source, experiment_id): expdir = os.path.join(DIR, experiment_id) source_dir = os.path.join("/content/drive/MyDrive", source) shutil.copytree(source_dir, expdir) print('Loading functions for showing images.') # Configuration of matplotlib import matplotlib.pyplot as plt import matplotlib as mpl mpl.rcParams['figure.figsize'] = (10,10) mpl.rcParams['axes.grid'] = False # python image library, numpy etc. 
import numpy as np from PIL import Image import time import functools from tensorflow.python.keras.preprocessing import image as kp_image # Helper function for loading images from a path def load_img(path_to_img): max_dim = 1024 img = Image.open(path_to_img) long = max(img.size) scale = max_dim/long img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS) img = kp_image.img_to_array(img) # We need to broadcast the image array such that it has a batch dimension img = np.expand_dims(img, axis=0) return img # Helper function for showing an image object (as per PIL) def imshow(img, title=None): # Remove the batch dimension out = np.squeeze(img, axis=0) # Normalize for display out = out.astype('uint8') plt.imshow(out) if title is not None: plt.title(title) plt.axis('off') plt.imshow(out) print('\nDone.') # + [markdown] id="9K7v2hAAr862" # --- # # # ## Upload Images # # In the next step the images are imported, for which the style-transfer will be applied. You can adjust the images that server as model for the style. # # + id="_kyOe2cXr864" cellView="form" #@title Upload images from Drive style_folder = "hirnkastl/style_transfer_project/v1/img_style" #@param {type:"string"} content_folder = "hirnkastl/style_transfer_project/v1/img_content" #@param {type:"string"} clear_experiment(EXP_CONTENT) clear_experiment(EXP_STYLE) clear_output() download_experiment_images(content_folder, EXP_CONTENT) download_experiment_images(style_folder, EXP_STYLE) print("Done!") # + [markdown] id="5wptVGUt7Zze" # ## Transfer Image-Styles # # Now the image style can be transferred. # # In order to do that, for every __style image__ a vector S is computed by the network for style-analysis. This vector described the style of the image and can be passed to the style-transfer network, which transfers this style onto the __content image__. # # Convolutional Neural Network # # The networks used in this example have already been trained by Google in their Project Magenta. In simple terms, training means that the network for style-analysis has already seen tens of thousands of images. Because of the experience it has gained from looking at all these pictures it can also describe the style for images it has not seen before. In a similar fashion the style-transfer network has been trained with many images on how to transfer a specific style onto a content-image. # # Both of these pre-trained networks are now applied for all combinations of __style images__ and __content images__ that have been uploaded in the previous step. # # + id="iXlSva7Hr867" cellView="form" #@title Start style-transfer from lib.arbitrary_image_stylization_with_weights_func import arbitrary_stylization_with_weights, StyleTransferConfig # from lib.arbitrary_image_stylization_with_weights import main as dostyle import tensorflow as tf c = StyleTransferConfig() c.content_images_paths = os.path.join(DIR,'2_inhalt/*') c.output_dir = OUTDIR c.style_images_paths = os.path.join(DIR,'2_stil/*') c.checkpoint = './arbitrary_style_transfer/model.ckpt' c.image_size = 512 c.style_image_size = 256 print(c) with tf.device('/gpu:0'): arbitrary_stylization_with_weights(c) print("\nDone.") # + [markdown] id="PpHk--79r86_" # ## View images # # If the style-transfer has finished without errors, then you can run the following steps to view the images. 
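# A quick way to peek at a single result before the gallery cells below is sketched here; like the
# result-viewing cell further down, it assumes the stylized files are written to `OUTDIR` with names
# starting with `zzResult` (everything else reuses `load_img` and `imshow` from the helper cell above).

# +
import os

# List whatever stylized outputs exist and show the first one, if any.
_result_files = sorted(f for f in os.listdir(OUTDIR) if f.startswith('zzResult'))
if len(_result_files) > 0:
    _preview = load_img(path_to_img=os.path.join(OUTDIR, _result_files[0])).astype('uint8')
    imshow(_preview, title=_result_files[0])
else:
    print("No stylized images found in " + OUTDIR)
# -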
# + id="dr12ukjYr87D" cellView="form" #@title View content images # Get all content images cols = 3 basewidth=20 cfiles = os.listdir(path=os.path.join(DIR, EXP_CONTENT)) #print("{},{}".format(basewidth,len(files)/cols*basewidth)) plt.figure(num=1, figsize=(basewidth,len(cfiles)/(cols*cols)*basewidth)) pind = 1 for fi in cfiles: path = os.path.join(DIR, EXP_CONTENT, fi) # print(path) im = load_img(path_to_img=path).astype('uint8') plt.subplot(np.ceil(len(cfiles)/cols),cols,pind) imshow(im,title=fi) pind = pind + 1 print("The images that styles will be applied to:") # + id="B327ydjkr87G" cellView="form" #@title View style images # Get all style images basewidth=20 sfiles = os.listdir(path=os.path.join(DIR, EXP_STYLE)) cols = len(sfiles)+1 #print("{},{}".format(basewidth,len(files)/cols*basewidth)) plt.figure(num=1, figsize=(basewidth,len(sfiles)/(cols*cols)*basewidth)) plt.axis('off') pind = 1 sfiles.sort() for fi in sfiles: path = os.path.join(DIR, EXP_STYLE, fi) # print(path) im = load_img(path_to_img=path).astype('uint8') plt.subplot(np.ceil(len(sfiles)/cols),cols,pind) imshow(im,title=fi) pind = pind + 1 print("The style images:") # + id="iHyKeDh2r87Q" cellView="form" #@title View images with style-transfer applied # Stylized Images from re import match cols = 3 basewidth=20 files = os.listdir(path=os.path.join(OUTDIR)) files = [x for x in files if match('^zzResult.*',x)] #print("{},{}".format(basewidth,len(files)/cols*basewidth)) plt.figure(num=1, figsize=(basewidth,len(cfiles)/(len(sfiles)+1)*basewidth)) pind = 1 files.sort() for fi in files: path = os.path.join(OUTDIR, fi) # print(path) im = load_img(path_to_img=path).astype('uint8') plt.subplot(len(cfiles)+1,len(sfiles)+1,pind) imshow(im,title=fi) pind = pind + 1 print("The images that result from style-transfer:") # + id="fyOsDJCxC2Op" cellView="form" #@title Download all images in a .zip file if IN_COLAB: # !zip output.zip output/* from google.colab import files files.download("output.zip") else: print("Not in colab - skipping.") # + [markdown] id="vB8asjdgwOxK" # --- # # ## References # # * al; A Neural Algorithm of Artistic Style, Sep 2015, [arxiv](https://arxiv.org/abs/1508.06576) # * Google Magenta: Fast Style Transfer for Arbitrary Styles, [Github](https://github.com/tensorflow/magenta/blob/2c3ae9b0dd64b06295e48e2ee5654e3d207035fc/magenta/models/arbitrary_image_stylization/README.md) # * al.; # Exploring the structure of a real-time, arbitrary neural artistic stylization network # , Aug 2017, [arxiv](https://arxiv.org/abs/1705.06830) # # The Source-Code for this notebook and the tool for uploading images is or will be published here: # https://github.com/shellerbrand/machine-learning-for-artistic-style # # + [markdown] colab_type="text" id="BcLhx7R4YuUv" # # MNIST in Swift for TensorFlow (ConvNet) # # Blog post: https://rickwierenga.com/blog/s4tf/s4tf-mnist.html # + [markdown] colab_type="text" id="lvZux3i-YzNv" # ## Importing dependencies # + colab={} colab_type="code" id="4gTCVkuQbgW1" %install-location $cwd/swift-install %install '.package(url: "https://github.com/tensorflow/swift-models", .branch("tensorflow-0.6"))' Datasets # + colab={} colab_type="code" id="dvaI1fPZRfQH" import TensorFlow import Foundation import Datasets # + colab={} colab_type="code" id="0awo42kpWWaE" import Python let plt = Python.import("matplotlib.pylab") let np = Python.import("numpy") # + colab={} colab_type="code" id="hxyQaWVajlrA" %include "EnableIPythonDisplay.swift" IPythonDisplay.shell.enable_matplotlib("inline") # + [markdown] 
colab_type="text" id="gFp59EhiY3Y4" # ## Loading MNIST # # [tensorflow/swift-models](https://github.com/tensorflow/swift-models) # + colab={} colab_type="code" id="TB8GDbCDcSxa" let batchSize = 512 # + colab={} colab_type="code" id="cmEkbL2UHP2r" let mnist = MNIST(batchSize: batchSize, flattening: false, normalizing: true) # + [markdown] colab_type="text" id="2ppaI_0SZKh2" # ## Constructing the network # + colab={} colab_type="code" id="k-hGkSXkRux3" struct Model: Layer { var flatten1 = Flatten() var conv1 = Conv2D( filterShape: (2, 2, 1, 32), padding: .same, activation: relu ) var conv2 = Conv2D( filterShape: (3, 3, 32, 64), padding: .same, activation: relu ) var maxPooling = MaxPool2D(poolSize: (2, 2), strides: (1, 1)) var dropout1 = Dropout(probability: 0.25) var flatten2 = Flatten() var dense1 = Dense(inputSize: 27 * 27 * 64, outputSize: 128, activation: relu) var dropout2 = Dropout(probability: 0.5) var dense2 = Dense(inputSize: 128, outputSize: 10, activation: softmax) @differentiable func callAsFunction(_ input: Tensor) -> Tensor { return input .sequenced(through: conv1, conv2, maxPooling, dropout1) .sequenced(through: flatten2, dense1, dropout2, dense2) } } # + colab={} colab_type="code" id="5XCYHPH_qPaz" var model = Model() # + [markdown] colab_type="text" id="qAXVzcGFZTAn" # ## Training # + colab={} colab_type="code" id="-fHA07VycX-G" let epochs = 10 var trainHistory = np.zeros(epochs) var valHistory = np.zeros(epochs) # + colab={} colab_type="code" id="GvM422SwOyvN" let optimizer = Adam(for: model) # + colab={} colab_type="code" id="bdqKlxmpHkc7" for epoch in 0..= mnist.trainingSize ? (mnist.trainingSize - ((i - 1) * batchSize)) : batchSize let images = mnist.trainingImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.trainingLabels.minibatch(at: i, batchSize: thisBatchSize) let (_, gradients) = valueWithGradient(at: model) { model -> Tensor in let logits = model(images) return softmaxCrossEntropy(logits: logits, labels: labels) } optimizer.update(&model, along: gradients) } // Evaluate model Context.local.learningPhase = .inference var correctTrainGuessCount = 0 var totalTrainGuessCount = 0 for i in 0..<(mnist.trainingSize / batchSize)+1 { let thisBatchSize = i * batchSize >= mnist.trainingSize ? (mnist.trainingSize - ((i - 1) * batchSize)) : batchSize let images = mnist.trainingImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.trainingLabels.minibatch(at: i, batchSize: thisBatchSize) let logits = model(images) let correctPredictions = logits.argmax(squeezingAxis: 1) .== labels correctTrainGuessCount += Int(Tensor(correctPredictions).sum().scalarized()) totalTrainGuessCount += thisBatchSize } let trainAcc = Float(correctTrainGuessCount) / Float(totalTrainGuessCount) trainHistory[epoch] = PythonObject(trainAcc) var correctValGuessCount = 0 var totalValGuessCount = 0 for i in 0..<(mnist.testSize / batchSize)+1 { let thisBatchSize = i * batchSize >= mnist.testSize ? 
(mnist.testSize - ((i - 1) * batchSize)) : batchSize let images = mnist.testImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.testLabels.minibatch(at: i, batchSize: thisBatchSize) let logits = model(images) let correctPredictions = logits.argmax(squeezingAxis: 1) .== labels correctValGuessCount += Int(Tensor(correctPredictions).sum().scalarized()) totalValGuessCount += thisBatchSize } let valAcc = Float(correctValGuessCount) / Float(totalValGuessCount) valHistory[epoch] = PythonObject(valAcc) print("\(epoch) | Training accuracy: \(trainAcc) | Validation accuracy: \(valAcc)") } # + [markdown] colab_type="text" id="WjUY0gwXTDX-" # ## Inspecting training history # + colab={} colab_type="code" id="b_f-DNeqp9qO" plt.plot(trainHistory) plt.title("Training History") plt.show() # + colab={} colab_type="code" id="1AkaTk_3THsK" plt.plot(valHistory) plt.title("Validation History") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Basic EDA for cryptocurrency # In this notebook, we are going to try and explore the price history of different cryptocurrencies. # # As we created on the before notebook, the files that we will use are in the `data/processed/cryptocurrencypricehistory` directory. Now, we are going to import the necessary modules and then list the input files. # + import numpy as np import pandas as pd import matplotlib.pyplot as plt from pathlib import Path DATA_PATH = Path('../../data/processed/cryptocurrencypricehistory') # - list(DATA_PATH.iterdir()) # ### Bitcoin vs. Ethereum # First, if we want to explore `Ethereum` or the `Bitcoin` as it is the market leader in this space, we will be the following. # + df = pd.read_csv(DATA_PATH / 'cryptocurrency_close_values.csv', parse_dates=['Date'], index_col='Date') df.head() # - df.index = pd.to_datetime(df.index) df.sort_index(inplace=True) # Now, let's create a plot of the closing values of the column `bitcoin_price` and `Ethereum_price` of the previous table and we will be able to observe how the price has changed over time. # + import matplotlib.dates as mdates plt.figure(figsize=(12, 6)) plt.plot(df.index, df.bitcoin_price, label='Bitcoin') plt.plot(df.index, df.ethereum_price, label='Ethereum') plt.xlabel('Date', fontsize=12) plt.ylabel('Price in $', fontsize=12) plt.title("Closing price distribution of bitcoin and ethereum", fontsize=15) plt.legend() plt.show() # - # As we can see in the previous plot the difference between bitcoin and ethereum is very big, basically because bitcoin is the most valuable cryptocurrency that exists nowadays and the value it started to grow up since the first month of 2014. On the other hand, Ethereum grew up in the middñe of 2017, the years later. # # Now let us build a heat map using correlation to see how the price of different currencies change with respect to each other. # + corrmat = df[['bitcoin_price', 'dash_price', 'waves_price', 'stratis_price', 'ethereum_price', 'iota_price']].corr(method='spearman') names = corrmat.columns plt.figure(figsize=(11, 11)) plt.imshow(corrmat) plt.title("Cryptocurrency correlation map", fontsize=15) plt.colorbar() plt.xticks(range(len(names)), names, rotation=90) plt.yticks(range(len(names)), names) plt.show() # - # ### Future price prediction # # Now that we have explored the dataset, one possible next step could be to predict the future price of the currency. 
# First, we are going to install Prophet, to do so we are going to use pip. # ```bash # $ pip install fbprophet # ``` # We could use the Prophet library of facebook here to do the predictions\[1\]. # + # # !pip install fbprophet # + from fbprophet import Prophet to_predict_df = df[['ethereum_price']] to_predict_df.reset_index(inplace=True) to_predict_df.columns = ['ds', 'y'] model = Prophet() model.fit(to_predict_df) # - # Then, we can see the plot using the previous information to create a threshold that they can oscillate the predicted values. future = model.make_future_dataframe(periods=360) forecast = model.predict(future) model.plot(forecast, xlabel='Date', ylabel='Price in $'); # As we can see in the previous graphics, we generated plots to estimate the values of Ethereum in the next year. Also, we can predict the different currencies in `cryptocurrency_close_values.csv` only changing the currency parameter in the before function. # ## References # # \[1\] [Installation | Prophet](https://facebook.github.io/prophet/docs/installation.html) # # \[2\] [Quick Start | Prophet](https://facebook.github.io/prophet/docs/quick_start.html#python-api) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Combining Conditional Statements and Functions # *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* # Define a function, called **compare_the_two()**, with two arguments. If the first one is greater than the second one, let it print "Greater". If the second one is greater, it should print "Less". Let it print "Equal" if the two values are the same number. def compare_the_two(x,y): if x > y: print ("Greater") elif x < y: print ("Less") else: print ("Equal") compare_the_two(10,10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # This notebook demos the FVM laplacian # Compute the Laplacian using combinations of the differential operators encountered so far # + from dusk.script import * @stencil def laplacian_fvm( u: Field[Edge], v: Field[Edge], nx: Field[Edge], ny: Field[Edge], uv_div: Field[Cell], uv_curl: Field[Vertex], grad_of_curl: Field[Edge], grad_of_div: Field[Edge], uv_nabla2: Field[Edge], L: Field[Edge], dualL: Field[Edge], A: Field[Cell], dualA: Field[Vertex], tangent_orientation: Field[Edge], edge_orientation_vertex: Field[Vertex > Edge], edge_orientation_cell: Field[Cell > Edge], ) -> None: with levels_upward as k: # compute curl (on vertices) uv_curl = sum_over(Vertex > Edge, (u*nx + v*ny) * dualL * edge_orientation_vertex) / dualA # compute divergence (on cells) uv_div = sum_over(Cell > Edge, (u*nx + v*ny) * L * edge_orientation_cell) / A # first term of of nabla2 (gradient of curl) grad_of_curl = sum_over(Edge > Vertex, uv_curl, weights=[-1., 1, ])*tangent_orientation/L # second term of of nabla2 (gradient of divergence) grad_of_div = sum_over(Edge > Cell, uv_div, weights=[-1., 1, ])/dualL # finalize nabla2 (difference between the two gradients) uv_nabla2 = grad_of_div - grad_of_curl # - # Then we can use dusk's Python API to convert the stencils to SIR. 
This API can also invoke dawn to compile SIR to C++: from dusk.transpile import callables_to_pyast, pyast_to_sir, sir_to_json with open("laplacian_fvm.sir", "w+") as f: sir = pyast_to_sir(callables_to_pyast([laplacian_fvm])) f.write(sir_to_json(sir)) # !dawn-opt laplacian_fvm.sir | dawn-codegen -b naive-ico -o laplacian_fvm_cxx-naive.cpp # The generated C++ code also requires a driver which is already setup for this demo. With the driver code we can generate an executable `runner`: # !make # The runner is going to execute the Laplacian stencil. The divergence and curl are intermediary results this time around, and checked automatically. There is no need for you to specify anything, just execute the runner to see which quantities you got right! # !./runner # Besides ensuring the error norms L1, L2 and L infinity are small (they should all be well below 0.1), you can also have a look at the test function and its Laplacian by executing `checker.py laplacian`. You are free to also look at the `divergence` or `curl` computed above, by using `checker.py divergence` and `checker.py curl`. However, those are the same as the ones computed in the last exercise. You will notice that the **error for the Laplacian is a lot worse** than for the other differentials. We will address this in the next exercise. # %run checker.py laplacian # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="P7sX-B4aUbRN" pycharm={"name": "#%% md\n"} # # Introduction to RNNs # This notebook is part of the [SachsLab Workshop for Intracranial Neurophysiology and Deep Learning](https://github.com/SachsLab/IntracranialNeurophysDL). # # ### Normalize Environments # Run the first two cells to normalize Local / Colab environments, then proceed below for the lesson. # # # # #
# Run in Google Colab · View source on GitHub
    # + colab_type="code" id="aL4QcKd5UbRT" outputId="a4accba4-507a-4eaf-cb68-c5a2263fd8aa" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 67} from pathlib import Path import os try: # See if we are running on google.colab import google.colab IN_COLAB = True # Setup tensorflow 2.0 # !pip install -q tensorflow-gpu==2.0.0-rc0 except ModuleNotFoundError: IN_COLAB = False import sys if Path.cwd().stem == 'notebooks': os.chdir(Path.cwd().parent) # + colab_type="code" id="rXM9gHqLUbRP" pycharm={"is_executing": false, "name": "#%%\n"} colab={} # Common imports import numpy as np import tensorflow as tf import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) # %matplotlib inline # %load_ext autoreload # %autoreload 2 # + [markdown] colab_type="text" id="k9Og3lOfUbRV" pycharm={"name": "#%% md\n"} # ## RNN Step-by-Step # A recurrent neural network (RNN) is an artificial neural network that maintains an internal state (memory) # as it processes items in a sequence. RNNs are most often applied to sequence data such as text # (sequences of characters; sequences of words) or time series (e.g., stock prices, weather data, neurophysiology). # RNNs can provide an output at each step of the sequence to predict different sequences (e.g., text translation) or # the next item in the same sequence (e.g., assistive typing, stock prediction, prosthetic control). # The loss does not back-propagate to update weights until the sequence is complete. RNNs can also be configured # to produce a single output at the end of a sequence (e.g., classify a tweet sentiment, decode categorical intention). # # We will create our own RNN step-by-step. We will train it using toy data that we generate. # + [markdown] colab_type="text" id="_DAvkmycUbRV" pycharm={"name": "#%% md\n"} # ### Generate data # X is a multi-channel time series and Y is a different multi-channel time series # constructed from a linear combination of a delayed version of X. # + colab_type="code" id="kA9Fqu_qUbRW" pycharm={"is_executing": false, "name": "#%%\n"} colab={} PEAK_FREQS = [10, 22, 75] # Hz FS = 1000.0 # Hz DURATION = 5.0 # seconds DELAY = 0.01 # seconds N_OUT = 2 # Y dimensions n_samples = int(DURATION * FS) delay_samples = int(DELAY * FS) t_vec = np.arange(n_samples + delay_samples) / FS X = np.zeros((n_samples + delay_samples, len(PEAK_FREQS)), dtype=np.float32) for chan_ix, pf in enumerate(PEAK_FREQS): X[:, chan_ix] = (1 / (chan_ix + 1) ) * np.sin(t_vec * 2 * np.pi * pf) # Create mixing matrix that mixes inputs into outputs W = 2 * np.random.rand(N_OUT, len(PEAK_FREQS)) - 1 # W = W / np.sum(W, axis=1, keepdims=True) b = np.random.rand(N_OUT) - 0.5 Y = W @ X[:-delay_samples, :].T + b[:, None] Y = Y.T X = X[delay_samples:, :] t_vec = t_vec[delay_samples:] X += 0.05*np.random.rand(*X.shape) Y += 0.05*np.random.rand(*Y.shape) # + colab_type="code" id="A4PXkiSEUbRY" outputId="9fcdf1da-e60b-4493-f8d8-41cc5007c14a" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 378} plt.figure(figsize=(8, 6), facecolor='white') plt.plot(t_vec[:100], X[:100, :]) plt.plot(t_vec[:100], Y[:100, :], 'k') plt.show() # + [markdown] colab_type="text" id="8ymuKRXPUbRc" pycharm={"name": "#%% md\n"} # ### Define forward pass and loss # We aren't actually going to run this next cell. # This is just to give you an indication of what the forward pass looks like. 
# + colab_type="code" id="spnNLauUUbRd" outputId="595f796f-e37a-4a3e-8eca-7da000931293" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 35} activation_fn = lambda x: x # activation_fn = np.tanh # Initialize parameters state_t = np.zeros((N_OUT,)) _W = np.random.random((N_OUT, X.shape[1])) # Mixes input with output _U = np.random.random((N_OUT, N_OUT)) # Mixes old state with output _b = np.random.random((N_OUT,)) # Bias term # Create variable to store output successive_outputs = [] # Iterate over each timestep in X for x_t in X: y_t = activation_fn(np.dot(_W, x_t) + np.dot(_U, state_t) + _b) successive_outputs.append(y_t) state_t = y_t final_output_sequence = np.stack(successive_outputs, axis=0) loss = np.mean( (Y - final_output_sequence)**2 ) print(loss) # + [markdown] colab_type="text" id="ukwchqVWUbRf" # ## RNN in Tensorflow # [Tutorial](https://www.tensorflow.org/tutorials/sequences/text_generation) (text generation w/ eager) # + [markdown] colab_type="text" id="cOHeBypcUbRg" pycharm={"name": "#%% md\n"} # ### Prepare data for Tensorflow # In the tutorial linked above, the `batch` transformation is used to convert a continuous sequence into # many sequences, then the batch transform is used AGAIN to get batches of sequences. # + colab_type="code" id="kW2toraOUbRg" pycharm={"is_executing": false, "name": "#%%\n"} colab={} SEQ_LENGTH = 200 BATCH_SIZE = 2 BUFFER_SIZE = 10000 dataset = tf.data.Dataset.from_tensor_slices((X, Y)) dataset = dataset.batch(SEQ_LENGTH, drop_remainder=True) # Continuous to segmented sequences dataset = dataset.shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) # Segmented sequences to batches of seg. seq. # + [markdown] colab_type="text" id="iVYi9e43UbRj" pycharm={"name": "#%% md\n"} # ### Define RNN model in Tensorflow # + colab_type="code" id="kxhHmWZXUbRk" outputId="d6324dfd-db59-44fa-f60c-79bf25979cd6" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 218} inputs = tf.keras.layers.Input(shape=(SEQ_LENGTH, X.shape[-1])) outputs = tf.keras.layers.SimpleRNN(N_OUT, return_sequences=True, activation='linear')(inputs) model = tf.keras.Model(inputs, outputs) model.compile(optimizer='rmsprop', loss='mean_squared_error') model.summary() # + colab_type="code" id="L9nwtlv8UbRn" pycharm={"is_executing": false, "name": "#%%\n"} colab={} # Just to save us some training time, let's cheat and init the model with good weights old_init_weights = model.layers[-1].get_weights() model.layers[-1].set_weights([W.T, old_init_weights[1], b]) # + colab_type="code" id="Xk1qktbEUbRo" outputId="a506a658-ffab-4bc5-de35-39c513a757df" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 1000} EPOCHS = 50 history = model.fit(x=dataset, epochs=EPOCHS, verbose=1) # + colab_type="code" id="buU5s_XlUbRr" outputId="c6ca8690-e302-485a-8f82-ae88217eb2d0" pycharm={"is_executing": false} colab={"base_uri": "https://localhost:8080/", "height": 378} _Y1 = model.predict(X[:SEQ_LENGTH, :][None, :, :]) plt.figure(figsize=(8, 6), facecolor='white') plt.plot(Y[:SEQ_LENGTH, :], label='real') plt.plot(_Y1[0], label='pred') plt.legend() plt.show() # + colab_type="code" id="VQeoT-SlUbRu" outputId="43725caf-8679-42f5-d27c-f479d28f0fd1" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 305} _W, _U, _b = model.layers[-1].get_weights() print(W) print(_W.T) print(b, _b) 
plt.figure(figsize=(10, 6), facecolor='white') plt.subplot(1, 2, 1) plt.imshow(W) plt.subplot(1, 2, 2) plt.imshow(_W.T) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 逆行列を求める方法 # # - `np.linalg` に `inv` という関数がある import numpy as np a = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]]) np.linalg.inv(a) # # ひとつの連立方程式を解く方法 # 下記の連立方程式を解くには逆行列を求めるよりも `solve` 関数を使うほうが良い。(高速かつ数値安定的なアルゴリズムを背後で利用しているため) # # \begin{equation} # \begin{pmatrix} # 3& 1& 1\\ # 1& 2& 1\\ # 0& -1& 1 # \end{pmatrix} # \begin{pmatrix} # x \\ y\\ z # \end{pmatrix} # = # \begin{pmatrix} # 1 \\ 2 \\ 3 # \end{pmatrix} # \end{equation} b = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]]) c = np.array([1, 2, 3]) np.linalg.solve(b, c) # # 同じ係数行列からなく複数の連立方程式を解く方法 # \begin{equation} # Ax=b_1, Ax=b_2, \dots, Ax=b_m # \end{equation} # となる連立方程式があったときは、 $A^{-1}$ を計算することで、 # # \begin{equation} # A^{-1}b_1, A^{-1}b_2, \dots, A^{-1}b_m # \end{equation} # # と解が計算できる。しかし、もっと良い方法がある。 # ## LU分解 # # $A=PLU$ の形に分解することで連立方程式を高速かつ数値安定的に解くことができる。 # # ここで $L$ は下三角行列で対角成分が $1$ となるもの、 $U$ は上三角行列、 $P$ は各行に $1$ となる成分がただひとつだけある行列でそのほかの成分は $0$(置換行列) # \begin{equation} # PLUx = b # \end{equation} # # という連立方程式は次の3つの方程式を逐次的に解くことで解 $x$ を求めることができる。 # # \begin{align} # Uz &= b \\ # Ly &= z \\ # Px &= y # \end{align} # scipy を利用 from scipy import linalg a = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]]) b = np.array([1, 2, 3]) lu, p = linalg.lu_factor(a) linalg.lu_solve((lu, p), b) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] cell_style="center" id="ce58rWM2QtJV" # # **Modelagem** # # + [markdown] hide_input=true id="CgsDMREUumCV" # ![img](https://media.discordapp.net/attachments/748012097589215325/816803156112441374/unknown.png?width=576&height=676) # + [markdown] heading_collapsed=true id="qiiJ6Yv5Oe6D" # ## **Instalação e importação** # + hidden=true id="h07Gv4Yfeb4a" # %reset -f # + cell_style="center" code_folding=[] colab={"base_uri": "https://localhost:8080/"} hidden=true hide_input=false id="6OGdwlfuXImN" outputId="d0e6977f-6a3a-426f-cace-05004df8d707" #Instalar e importar bibliotecas: try: import CoolProp except ImportError: # !pip install CoolProp import matplotlib.pyplot as plt from matplotlib.ticker import AutoMinorLocator import CoolProp.CoolProp as cp from CoolProp.CoolProp import PropsSI as ps from CoolProp.CoolProp import State as st from tabulate import tabulate as tab import numpy as np from sympy import * import pandas as pd ''' from IPython.core.display import HTML HTML(""" """) ''' # + [markdown] heading_collapsed=true id="d64Tp25bzf4R" # ## **Unidade de medida** # + code_folding=[] colab={"base_uri": "https://localhost:8080/"} hidden=true hide_input=false id="DwEOOfQBs7Et" outputId="127fabe3-5d42-4f1d-cca0-bd96f732244d" # Selecionar escala de temperatura: Temperatura = 'c' #input('Temperatura em °C ou em K? 
(Digite c ou k): ') # + [markdown] heading_collapsed=true id="ooclRVh5xral" # ## **Declaração de variáveis** # + code_folding=[] hidden=true id="FvQEFmKpnzSc" # Declarar estados: n = 12 fld = "Water" cp.set_reference_state(fld,'DEF') if Temperatura == 'k': Tnn = "Temperatura [K]" else: Tnn = "Temperatura [°C]" mnn = "Vazão mássica [kg/s]" hnn = "Entalpia [kJ/kg]" rhonn = "Massa específica [m³/kg]" Pnn = "Pressão [kPa]" snn = "Entropia [kJ/kg*K]" Xnn = "Título" estado = np.arange(1, n+1, 1) T = np.linspace(0,0,n) m = np.linspace(0,0,n) h = np.linspace(0,0,n) rho = np.linspace(0,0,n) P = np.linspace(0,0,n) s = np.linspace(0,0,n) X = np.linspace(0,0,n) # - # ## **Caldeira** # + # Combustível: CH4 = np.array([1,4,0,0]) C2H6 = np.array([2,6,0,0]) C3H8 = np.array([3,8,0,0]) C4H10 = np.array([4,10,0,0]) CO2 = np.array([1,0,2,0]) N2 = np.array([0,0,0,2]) O2 = np.array([0,0,2,0]) H2O = np.array([0,2,1,0]) P_CH4 = 0.8897 P_C2H6 = 0.0592 P_C3H8 = 0.0191 P_C4H10 = 0.0109 P_C02 = 0.0121 P_N2 = 0.0089 P_O2 = 0.0001 P_N2_Ar = 0.79 P_O2_Ar = 0.21 # + #Cálculos: MM = np.array([12.011,1.008,15.999,14.007]) P_GN = [P_CH4,P_C2H6,P_C3H8,P_C4H10,P_C02,P_N2,P_O2] M_CH4 = np.sum(CH4*MM) M_C2H6 = np.sum(C2H6*MM) M_C3H8 = np.sum(C3H8*MM) M_C4H10 = np.sum(C4H10*MM) M_CO2 = np.sum(CO2*MM) M_N2 = np.sum(N2*MM) M_O2 = np.sum(O2*MM) M_H2O = np.sum(H2O*MM) M_Air = M_N2*P_N2_Ar+M_O2*P_O2_Ar M = [M_CH4,M_C2H6,M_C3H8,M_C4H10,M_CO2,M_N2,M_O2,M_N2,M_H2O] F_M = [] for i in range(0,4): F_M.append(P_GN) np.array(F_M) quant_atomos_comb = np.array([CH4,C2H6,C3H8,C4H10,CO2,N2,O2]) m_comp_comb = np.sum(F_M*quant_atomos_comb.T,axis=1) m_comp_comb[2] = m_comp_comb[2]*(-1) P_GN.append(P_N2_Ar/P_O2_Ar) #acrecentar partes de O2 P_GN.append(P_O2_Ar/P_O2_Ar) ##acrecentar partes de N2 b = m_comp_comb[0] c = m_comp_comb[1]/2 a = (m_comp_comb[2] + 2*b + c)/(2*P_GN[8]) d = (m_comp_comb[3] + 2*a*P_GN[7])/2 BLNC = [1,1,1,1,1,1,1,a,a] resposta = np.zeros((9,4)) quant_atomos_molecula = np.array([CH4,C2H6,C3H8,C4H10,CO2,N2,O2,N2,O2]) for i in range(0,9): for j in range(0,4): resposta[i,j] = quant_atomos_molecula[i,j] * MM[j] * P_GN[i] * BLNC[i] fuel = resposta[0:7,0:4] air = resposta[7:9,0:4] soma = np.sum(resposta) M_fuel = np.sum(fuel) #Massa total de combustível soma_air = np.sum(air) #Massa total de ar total_elem = np.sum(fuel, axis=0,keepdims=True) #C_tot, H_tot, O_tot, N_tot F_tot = total_elem/M_fuel PCS = 33900*F_tot[0,0] + 141800*(F_tot[0,1] - (F_tot[0,2]/8)) w = PCS/2440 - 9*F_tot[0,1] F_w = w/M_fuel PCI = PCS - 2440*(9*F_tot[0,1] - F_w) ȦḞ = a*(1+79/21) AF = ȦḞ*(M_Air/M_fuel) AF # + [markdown] heading_collapsed=true # ## **Estados** # + code_folding=[] hidden=true hide_input=false id="CXkMbWWey_uZ" # Estado 1 - Entrada da turbina: m[0] = 27.9 #kg/s P[0] = 6495 #kPa T[0] = 485 + 273.15 #K st1 = st(fld, {'P':P[0],'T':T[0]}) h[0] = st1.h #entalpia s[0] = st1.s #entropia X[0] = st1.Q rho[0] = st1.rho # + code_folding=[] hidden=true hide_input=false id="p-REZCpTzF__" # Estado 2 - Primeira extração da turbina: P[1] = 900 #kPa st_isen2 = st(fld,{'P':P[1],'S':s[0]}) #turbina/bomba isentrópica ideal h_isen2 = st_isen2.h η_turb = 0.85 #eficiência da turbina h[1] = h[0] - (h[0] - h_isen2) * η_turb st2 = st(fld,{'P':P[1],'H':h[1]}) s[1] = st2.s T[1] = st2.T X[1] = st2.Q rho[1] = st2.rho # + code_folding=[] hidden=true hide_input=false id="ddaMWZfozGJr" # Estado 3 - Segunda extração da turbina: P[2] = 250 #kPa st_isen3 = st(fld,{'P':P[2],'S':s[1]}) h_isen3 = st_isen3.h h[2] = h[1] - (h[1] - h_isen3) * η_turb st3 = st(fld,{'P':P[2],'H':h[2]}) s[2] = 
st3.s rho[2] = st3.rho T[2] = st3.T X[2] = st3.Q # + code_folding=[] hidden=true hide_input=false id="Sz8Ao-OrzGQR" # Estado 4 - Saída final da turbina / entrada do condensador: T[3] = 51 + 273.15 #K #P[3] = psi('P','T',T[3],'Q', 1, fld)/1000 #divide por 1000 pra sair em kPa st4 = st(fld, {'T': T[3], 'Q': 1}) P[3] = st4.p st_isen4 = st(fld,{'P':P[3],'S':s[2]}) h_isen4 = st_isen4.h h[3] = h[2] - (h[2] - h_isen4) * η_turb st4 = st(fld,{'P':P[3],'H':h[3]}) X[3] = st4.Q #título s[3]=st4.s rho[3] = st4.rho # + code_folding=[] hidden=true id="LEJHsowizGV4" # Estado 5 - Saída do condensador / entrada da bomba: X[4] = 0 P[4] = P[3] st5 = st(fld, {'P': P[4], 'Q': 0}) h[4] = st5.h s[4] = st5.s T[4] = st5.T rho[4] = st5.rho # + code_folding=[] hidden=true id="L2DhkLiMzGal" # Estado 6 - Entrada do desaerador / saída da bomba: P[5] = P[2] st_isen6 = st(fld, {'P': P[5], 'S': s[4]}) h_isen6 = st_isen6.h η_pump = 0.85 #eficiência das bombas h[5] = (h_isen6 - h[4]) / η_pump + h[4] st6 = st(fld, {'P':P[5], 'H': h[5]}) s[5] = st6.s T[5] = st6.T X[5] = st6.Q rho[5] = st6.rho # + code_folding=[] hidden=true id="WFpU6ULXzGed" # Estado 7 - Saída do desaerador: P[6] = P[2] T[6] = 110 + 273.15 st7 = st(fld, {'P': P[6], 'T': T[6]}) h[6] = st7.h s[6] = st7.s X[6] = st7.Q rho[6] = st7.rho # + code_folding=[] hidden=true id="wPy5KvGizGiW" # Estado 8 - Entrada da caldeira: P[7] = P[0] st_isen8 = st(fld, {'P': P[7], 'S': s[6]}) h_isen8 = st_isen8.h h[7] = (h_isen8 - h[6]) / η_pump + h[6] st8 = st(fld, {'P': P[7], 'H': h[7]}) T[7] = st8.T - 273.15 s[7] = st8.s X[7] = st8.Q rho[7] = st8.rho # + code_folding=[] hidden=true id="KJ1aywD9zGmb" # Estado 9 - Entrada da bomba: st9 = st7 h[8] = st9.h s[8] = st9.s P[8] = st9.p T[8] = st9.T X[8] = st9.Q rho[8] = st9.rho # + code_folding=[] hidden=true id="yuCvmJP9zXpm" # Estado 10 - Saída da bomba / entrada do trocador de calor: P[9] = P[1] st_isen10 = st(fld, {'P': P[9], 'S': s[8]}) h_isen10 = st_isen10.h h[9] = (h_isen10 - h[8]) / η_pump + h[8] st10 = st(fld, {'P': P[9], 'H': h[9]}) s[9] = st10.s T[9] = st10.T X[9] = st10.Q rho[9] = st10.rho # + code_folding=[] hidden=true id="2u8OvkBHzXxa" # Estado 11 - Entrada do processo industrial vizinho: X[10] = 1 P[10] = P[9] st11 = st(fld, {'P': P[10], 'Q': X[10]}) h[10] = st11.h s[10] = st11.s T[10] = st11.T rho[10] = st11.rho # + code_folding=[] hidden=true id="csat_4x8zX8I" # Estado 12 - Saída do processo industrial vizinho: X[11] = 0 P[11] = P[9] st12 = st(fld, {'P':P[11], 'Q': X[11]}) h[11] = st12.h s[11] = st12.s T[11] = st12.T rho[11] = st12.rho # + [markdown] id="xUVh6vQoxkaO" # ## **Tabela - Estados** # + code_folding=[] colab={"base_uri": "https://localhost:8080/"} hide_input=true id="Xgtb-Op_bd4T" outputId="e2df6142-84d6-4189-b5a2-85bd249b476d" # Tabela 1: headers=["Estado", Tnn, Pnn, Xnn, hnn, snn, rhonn] table = list(range(n)) for i in table: if Temperatura == 'k': T[i] = round(float(T[i]),2) else: T[i] = round(float(T[i]-273.15),2) P[i] = round(float(P[i]),2) X[i] = round(float(X[i]),2) h[i] = round(float(h[i]),2) s[i] = round(float(s[i]),4) rho[i] = round(float(rho[i]),2) table[i] = [estado[i], T[i], P[i], X[i], h[i], s[i], rho[i]] table = np.array(list(table)) print(tab(table, headers, tablefmt="rst", stralign="center", numalign="left")) print('\nOBS: Tomar título "-1" como líquido sub-resfriado') for i in range(0, n): if Temperatura == 'k': T[i] = round(float(T[i]),2) else: T[i] = round(float(T[i]+273.15),2) st_antes = [st1, st2, st3, st4, st5, st6, st7, st8, st9, st10, st11, st12] # + [markdown] id="-_WTU23Cql2h" # ## 
**Cálculos** # + code_folding=[] hide_input=false id="1Vr5HLlnk8K8" # Cálculos: m1 = m[0] h1 = h[0] h2 = h[1] h3 = h[2] h4 = h[3] h5 = h[4] h6 = h[5] h7 = h[6] h8 = h[7] h9 = h[8] h10 = h[9] h11 = h[10] h12 = h[11] # Quantidade de iterações: quant = 10 variacao = np.zeros(quant + 1, dtype = float) resultados = np.zeros((n+1)*(quant + 1), dtype = float).reshape((n+1, quant + 1)) T_hout_cond = [] η_sys = [] W_liq =[] STT_T = [] STT_P = [] STT_X = [] STT_h = [] STT_s = [] STT_v = [] Ti = np.linspace(0,0,n) # Vazões mássicas: for i in range(quant + 1): processo = i/10 m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12 = var('m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12') EqMassa = [m8-m1, m4-m5, m5-m6, m9-m10, m11-m12, m2+m10-m11, m2+m3+m4-m1, m12+m3+m6-m7, m2*h2+m10*h10-m11*h11, m12*h12+m3*h3+m6*h6-m7*h7, m12-6.94*processo] massa = linsolve(EqMassa, m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12) (m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12) = next(iter(massa)) massa = np.array(list(massa)) for j in range(quant + 1): massa[0,j] = round(float(massa[0,j]),2) variacao[i] = i*10 resultados[0, i] = variacao[i] resultados[1, i] = m[0] resultados[2:(n+1), i] = massa # Saídas de energia: W_p1 = -m5*(h5 - h6) #(Perde energia "-") W_p2 = -m8*(h7 - h8) #(Perde energia "-") W_p3 = -m9*(h9 - h10) #(Perde energia "-") W_pump = W_p1 + W_p2 + W_p3 #(Somatório dos trabalhos da bombas) # Eficiência da caldeira: η_cald = 0.85 #PCI = 10000 # kJ/kg # Encontra o trabalho gerado pela turbina e a vazão de combustível para a demanda: W_t = m1*h1 - m2*h2 - m3*h3 - m4*h4 m_fuel = m1*(h1-h8)/(PCI*η_cald) #η_cald = m1*(h1-h8)/(m_fuel*PCI) # Entrada de energia: Q_in = m_fuel * PCI #(Ganha energia "+") η_sys.append(round((W_t - W_pump)/Q_in, 4)) W_liq.append(round(η_sys[i]*Q_in,2)) # Condensador: T_cout = 39.5 T_cin = 29 T_hin = T[3] - 273.15 ϵ = (T_cout - T_cin)/(T_hin - T_cin) NUT = -ln(1 - ϵ) CT_in = st(fld, {'P': 101.325, 'T': T_cin + 273.15}) CT_out = st(fld, {'P': 101.325, 'T': T_cout + 273.15}) C_min = 4500/3600*(CT_in.rho + CT_out.rho)/2*(CT_in.cp + CT_out.cp)/2 UA = NUT * C_min # Coeficiente global de tranferência de calor * Área # Sistema não-linear: ΔT_0, ΔT_L, T_hout, ΔT_ml, Q_out = var('ΔT_0, ΔT_L, T_hout, ΔT_ml, Q_out') EqCond = [ΔT_0-(T_hout-T_cin), ΔT_L-(T_hout-T_cout), ΔT_ml-(ΔT_0-ΔT_L)/ln(ΔT_0/ΔT_L), Q_out-UA*ΔT_ml, Q_out-m4*(h4-h5)] EqCond_solve = nonlinsolve(EqCond, ΔT_0, ΔT_L, T_hout, ΔT_ml, Q_out) (ΔT_0, ΔT_L, T_hout, ΔT_ml, Q_out) = next(iter(EqCond_solve)) # Redefinindo estados: T[3] = T_hout + 273.15 #K #P[3] = psi('P','T',T[3],'Q', 1, fld)/1000 #divide por 1000 pra sair em kPa st4 = st(fld, {'T': T[3], 'Q': 1}) P[3] = st4.p st_isen4 = st(fld,{'P':P[3],'S':s[2]}) h_isen4 = st_isen4.h h[3] = h[2] - (h[2] - h_isen4) * η_turb st4 = st(fld,{'P':P[3],'H':h[3]}) X[3] = st4.Q #título s[3]=st4.s rho[3] = st4.rho X[4] = 0 P[4] = P[3] st5 = st(fld, {'P': P[4], 'Q': X[4]}) h[4] = st5.h s[4] = st5.s T[4] = st5.T rho[4] = st5.rho P[5] = P[2] st_isen6 = st(fld, {'P': P[5], 'S': s[4]}) h_isen6 = st_isen6.h η_pump = 0.85 #eficiência das bombas h[5] = (h_isen6 - h[4]) / η_pump + h[4] st6 = st(fld, {'P':P[5], 'H': h[5]}) s[5] = st6.s T[5] = st6.T X[5] = st6.Q rho[5] = st6.rho P[6] = P[2] T[6] = 110 + 273.15 st7 = st(fld, {'P': P[6], 'T': T[6]}) h[6] = st7.h s[6] = st7.s X[6] = st7.Q rho[6] = st7.rho P[7] = P[0] st_isen8 = st(fld, {'P': P[7], 'S': s[6]}) h_isen8 = st_isen8.h h[7] = (h_isen8 - h[6]) / η_pump + h[6] st8 = st(fld, {'P': P[7], 'H': h[7]}) T[7] = st8.T - 273.15 s[7] = st8.s X[7] = st8.Q rho[7] = st8.rho st9 
= st7 h[8] = st9.h s[8] = st9.s P[8] = st9.p T[8] = st9.T X[8] = st9.Q rho[8] = st9.rho P[9] = P[1] st_isen10 = st(fld, {'P': P[9], 'S': s[8]}) h_isen10 = st_isen10.h h[9] = (h_isen10 - h[8]) / η_pump + h[8] st10 = st(fld, {'P': P[9], 'H': h[9]}) s[9] = st10.s T[9] = st10.T X[9] = st10.Q rho[9] = st10.rho X[10] = 1 P[10] = P[9] st11 = st(fld, {'P': P[10], 'Q': X[10]}) h[10] = st11.h s[10] = st11.s T[10] = st11.T rho[10] = st11.rho X[11] = 0 P[11] = P[9] st12 = st(fld, {'P':P[11], 'Q': X[11]}) h[11] = st12.h s[11] = st12.s T[11] = st12.T rho[11] = st12.rho for i in range(0,n): if Temperatura == 'k': Ti[i] = round(T[i],2) else: Ti[i] = round(T[i]-273.15,2) P[i] = round(P[i],2) X[i] = round(X[i],2) h[i] = round(h[i],2) s[i] = round(s[i],2) rho[i] = round(rho[i],2) STT_T.append([Tnn,processo*100,Ti[0],Ti[1],Ti[2],Ti[3],Ti[4],Ti[5],Ti[6],Ti[7],Ti[8],Ti[9],Ti[10],Ti[11]]) STT_P.append([Pnn,processo*100,P[0],P[1],P[2],P[3],P[4],P[5],P[6],P[7],P[8],P[9],P[10],P[11]]) STT_X.append([Xnn,processo*100,X[0],X[1],X[2],X[3],X[4],X[5],X[6],X[7],X[8],X[9],X[10],X[11]]) STT_h.append([hnn,processo*100,h[0],h[1],h[2],h[3],h[4],h[5],h[6],h[7],h[8],h[9],h[10],h[11]]) STT_s.append([snn,processo*100,s[0],s[1],s[2],s[3],s[4],s[5],s[6],s[7],s[8],s[9],s[10],s[11]]) STT_v.append([rhonn,processo*100,rho[0],rho[1],rho[2],rho[3],rho[4],rho[5],rho[6],rho[7],rho[8],rho[9],rho[10],rho[11]]) T_hout = round(T_hout, 2) T_hout_cond.append(T_hout) # Adiciona linhas vazias entre as propriedades na tabela: STT_T.append(["","","","","","","","","","","","","",""]) STT_P.append(["","","","","","","","","","","","","",""]) STT_X.append(["","","","","","","","","","","","","",""]) STT_h.append(["","","","","","","","","","","","","",""]) STT_s.append(["","","","","","","","","","","","","",""]) STT_v.append(["","","","","","","","","","","","","",""]) # Incluir na tabela e gráfico: np.reshape(variacao, (1,n-1)) np.reshape(η_sys, (1,n-1)) np.reshape(T_hout_cond, (1,n-1)) np.reshape(W_liq, (1,n-1)) for i in range(0, n): T[i] = round(T[i],2) P[i] = round(P[i],2) X[i] = round(X[i],2) h[i] = round(h[i],2) s[i] = round(s[i],2) rho[i] = round(rho[i],2) x = [] y = [] z = [] w = [] for i in range(0, quant+1): x.append(variacao[i]) y.append(T_hout_cond[i]) z.append(η_sys[i]) w.append(W_liq[i]) table2 = resultados.T resultados2 = np.zeros((n+4)*(quant + 1), dtype = float).reshape((n+4, quant + 1)) for i in range(0, quant+1): variacao[i] = i*10 resultados2[0, i] = resultados[0, i] resultados2[1, i] = resultados[1, i] resultados2[2:(n+1), i] = resultados[2:(n+1), i] resultados2[n+2,i] = round(float(w[i]),2) resultados2[n+3,i] = round(float(z[i]),4) table2 = resultados2.T st_depois = [st1, st2, st3, st4, st5, st6, st7, st8, st9, st10, st11, st12] # + # Balanço de energia da caldeira: m_air = AF*m_fuel m_gas = m_air + m_fuel m13 = m8 m14 = m_fuel m15 = 0.1*m_air m16 = 0.9*m_air m17 = m16 m18 = m_gas m19 = m18 m20 = m19 fld2 = 'Air' T_amb = 20 + 273.15 #K P_amb = 101.325 #kPa T15 = T_amb T16 = T_amb T8 = T[7] T20 = 200 + 273.15 # P20 = P_amb #Economizador: st13 = st(fld, {'P': P[7], 'Q': 0}) T13 = st13.T - 30 st13_2 = st(fld, {'P': P[7], 'T': T13}) h13 = st13_2.h q13 = m13*(h13-h8) st20_1 = st('Water', {'P':P_amb, 'T':T20}) st20_2 = st('CO2', {'P':P_amb, 'T':T20}) st20_3 = st('Nitrogen', {'P':P_amb, 'T':T20}) ΔT19 = q13/(m_gas*(0.0997*st20_1.cp+0.1771*st20_2.cp+0.7232*st20_3.cp)) #Pré-Ar: T19 = T20 + ΔT19 T17 = T16 + 200 st16 = st(fld2, {'P':P_amb, 'T':T_amb}) st17 = st(fld2, {'P':P_amb, 'T':T17}) h16 = st16.h h17 = st17.h q17 = m17*(h17-h16) st19_1 
= st('Water', {'P':P_amb, 'T':T19}) st19_2 = st('CO2', {'P':P_amb, 'T':T19}) st19_3 = st('Nitrogen', {'P':P_amb, 'T':T19}) ΔT18 = q17/(m_gas*(0.0997*st19_1.cp+0.1771*st19_2.cp+0.7232*st19_3.cp)) T18 = T19 + ΔT18 #st17 = st(fld2, {'P':P16, 'T':T17}) #Fornalha: print(T18) ''' m19*CP_gas19amb*(T19-T_amb) + m8*h8 = h13*m13 + m20*CP_gas20amb*(T20-T_amb) m18*CP_gas18amb*(T18-T_amb) = m17*cp_ar17amb*(T17-T_amb) + m19*CP_gas19amb*(T19-T_amb) m_fuel*PCI + m17*cp_ar17amb*(T17-T_amb) + h13*m13 = m1*h1 + m18*CP_gas18amb*(T18-T_amb) ''' m20 # + [markdown] id="fqN5c36LQZUQ" # ## **Tabela - Vazões mássicas** # + code_folding=[] colab={"base_uri": "https://localhost:8080/", "height": 934} hide_input=false id="mM-cqG1tGRBR" outputId="01b62677-8789-47e3-87ec-3c7c423a66d2" # Tabela 2: for i in range(0, quant+1): if Temperatura == 'k': headers2 = ["%", "m1 [kg/s]", "m2 [kg/s]", "m3 [kg/s]", "m4 [kg/s]", "m5 [kg/s]", "m6 [kg/s]", "m7 [kg/s]", "m8 [kg/s]", "m9 [kg/s]", "m10 [kg/s]", "m11 [kg/s]", "m12 [kg/s]", "T_hout [K]", "W_liq [kJ/s]", "η"] resultados2[n+1,i] = round(float(y[i]+273.15),2) #converter para K y[i] = resultados2[n+1,i] else: headers2 = ["%", "m1 [kg/s]", "m2 [kg/s]", "m3 [kg/s]", "m4 [kg/s]", "m5 [kg/s]", "m6 [kg/s]", "m7 [kg/s]", "m8 [kg/s]", "m9 [kg/s]", "m10 [kg/s]", "m11 [kg/s]", "m12 [kg/s]", "T_hout [°C]", "W_liq [kJ/s]", "η"] resultados2[n+1,i] = round(float(y[i]),2) print(tab(table2, headers2, tablefmt="rst", stralign="center", numalign="left")) print('\nA vazão mássica de combustível é de %.2f kg/s' %m_fuel) print('A vazão mássica de ar é de %.2f kg/s\n' %m_air) # Gráficos: escala1 = 1.5 plt.figure(dpi=100, figsize=[escala1*2*6.4, escala1*4.8]) plt.subplot(1,2,1) plt.xlabel('Processo [%]', fontsize=14) if Temperatura == 'k': plt.ylabel('Temperatura de saída do condensador [K]', fontsize=14) plt.plot(x, y, color='r', marker = 'o', linestyle = 'solid') else: plt.ylabel('Temperatura de saída do condensador [°C]', fontsize=14) plt.plot(x, y, color='r', marker = 'o', linestyle = 'solid') plt.grid() plt.subplot(1,2,2) plt.xlabel('Processo [%]', fontsize=14) plt.ylabel('$η_{sistema}$', fontsize=14) plt.plot(x, z, color='b', marker = 'o', linestyle = 'solid') plt.grid() plt.show() # + [markdown] id="Sco6dUf5CmA9" # ## **Gráfico completo Ciclo Rankine** # + code_folding=[] colab={"base_uri": "https://localhost:8080/", "height": 811} hide_input=false id="QaM4WC2IVu7z" outputId="78bf3d90-d329-40d3-9b91-83ff1691c7a2" # Gráfico do Ciclo Rankine: escala2 = 1.2 plt.figure(dpi=150, figsize=[escala2*6.4, escala2*4.8]) y1 = 273.15 y2 = 850 x1 = -0.5 x2 = 10 plt.ylim(y1,y2) plt.xlim(x1,x2) plt.title('Ciclo Rankine',fontsize=20) plt.xlabel('Entropia, s[kJ/kg$\cdot$K]') plt.ylabel('Temperatura, T[K]') Tmin = ps('Tmin',fld ) Tcrit = ps('Tcrit',fld) Pcrit = ps('Pcrit', fld) Larg_isolines = .25 Larg_cycle = 1.5 Larg_scatters = 15 #Linhas de entalpia: T = np.linspace(Tmin,Tcrit,1000) Q = np.arange(0.1,1,0.1) for i in Q: s = ps('S','T',T,'Q',i,'Water')/1000 plt.plot(s,T,'#6ca8fa',alpha=0.7,lw=Larg_isolines) #Domo: T = np.linspace(Tmin,Tcrit,1000) s = ps('S','T',T,'Q',0,fld)/1000 plt.plot(s,T,'black',lw=2*Larg_isolines) T = np.linspace(Tcrit,Tmin,1000) s = ps('S','T',T,'Q',1,fld)/1000 plt.plot(s,T,'black',lw=2*Larg_isolines) #Linhas de pressão constante: T = np.linspace(Tmin,1000,1000) P = [st_antes[0].p,st_antes[1].p,st_antes[2].p,st_antes[3].p,st_depois[3].p] #P1, P2, P3, P4 e P4* for i in P: s = ps('S','T',T,'P',i*1000,'Water')/1000 plt.plot(s,T,'#cc00ff',alpha=.7,lw=Larg_isolines) #Ciclo: #(antes) T = 
np.linspace(st_antes[7].T,st_antes[0].T,1000) s = ps('S','P',st_antes[0].p * 1000,'T',T,'Water')/1000 plt.plot(s,T,'r',lw=Larg_cycle) T = np.linspace(st_antes[9].T,st_antes[1].T,1000) s = ps('S','T',T,'P',st_antes[9].p*1000,'Water')/1000 plt.plot(s,T,'r',lw=Larg_cycle) plt.plot([st_antes[3].s,st_antes[4].s],[st_antes[3].T,st_antes[4].T],'r',lw=Larg_cycle) plt.plot([st_antes[0].s,st_antes[1].s],[st_antes[0].T,st_antes[1].T],color='r',linestyle='dashed',lw=Larg_cycle) plt.plot([st_antes[1].s,st_antes[2].s],[st_antes[1].T,st_antes[2].T],color='r',linestyle='dashed',lw=Larg_cycle) s = np.linspace(st_antes[2].s,st_antes[6].s,1000) T = ps('T','S',s*1000,'P',st_antes[2].p*1000,'Water') plt.plot(s,T,'r',lw=Larg_cycle) T = np.linspace(st_antes[5].T,st_antes[6].T,1000) s = ps('S','P',st_antes[5].p*1000,'T',T,'Water')/1000 plt.plot(s,T,'r',lw=Larg_cycle) s = np.linspace(st_antes[2].s,st_antes[3].s,1000) plt.plot([st_antes[3].s,st_antes[2].s],[st_antes[3].T,st_antes[2].T],color='r',linestyle='dashed',lw=Larg_cycle) #(depois) T = np.linspace(st_depois[7].T,st_depois[0].T,1000) s = ps('S','P',st_depois[0].p * 1000,'T',T,'Water')/1000 plt.plot(s,T,'lime',lw=Larg_cycle) T = np.linspace(st_depois[9].T,st_depois[1].T,1000) s = ps('S','T',T,'P',st_depois[9].p*1000,'Water')/1000 plt.plot(s,T,'lime',lw=Larg_cycle) plt.plot([st_depois[3].s,st_depois[4].s],[st_depois[3].T,st_depois[4].T],'lime',lw=Larg_cycle) plt.plot([st_depois[0].s,st_depois[1].s],[st_depois[0].T,st_depois[1].T],color='lime',linestyle='dashed',lw=Larg_cycle) plt.plot([st_depois[1].s,st_depois[2].s],[st_depois[1].T,st_depois[2].T],color='lime',linestyle='dashed',lw=Larg_cycle) s = np.linspace(st_depois[2].s,st_depois[6].s,1000) T = ps('T','S',s*1000,'P',st_depois[2].p*1000,'Water') plt.plot(s,T,'lime',lw=Larg_cycle) T = np.linspace(st_depois[5].T,st_depois[6].T,1000) s = ps('S','P',st_depois[5].p*1000,'T',T,'Water')/1000 plt.plot(s,T,'lime',lw=Larg_cycle) s = np.linspace(st_depois[2].s,st_depois[3].s,1000) plt.plot([st_depois[3].s,st_depois[2].s],[st_depois[3].T,st_depois[2].T],color='lime',linestyle='dashed',lw=Larg_cycle) #Pontuar e nomear: for i in range(0, n): plt.scatter(st_antes[i].s,st_antes[i].T,zorder=5,color='k',s=Larg_scatters) plt.scatter(st_depois[i].s,st_depois[i].T,zorder=5,color='k',s=Larg_scatters) plt.text(st_antes[0].s+.1,st_antes[0].T,'1',ha='left') plt.text(st_antes[1].s+.1,st_antes[1].T,'2',ha='left') plt.text(st_antes[2].s+.1,st_antes[2].T,'3',ha='left') plt.text(st_antes[3].s+.1,st_antes[3].T,'4',ha='left') plt.text(st_antes[4].s-.1,st_antes[4].T,'5, 6',ha='right') plt.text(st_antes[6].s-.1,st_antes[6].T,'7, 8, 9, 10',ha='right') plt.text(st_antes[10].s-.1,st_antes[10].T,'11',va='bottom',ha='right') plt.text(st_antes[11].s-.1,st_antes[11].T+.1,'12',ha='right') plt.text(st_depois[3].s+.1,st_depois[3].T,'4*',va='top',ha='left') plt.text(st_depois[4].s-.1,st_depois[4].T,'5*, 6*',va='top',ha='right') #plt.grid(lw=Larg_isolines) plt.twinx() plt.ylim(y1-273.15,y2-273.15) plt.ylabel('Temperatura, T[°C]') #plt.grid(lw=Larg_isolines) plt.savefig('Diagrama T-s.png') #files.download('Diagrama T-s.png') # + [markdown] hide_input=true id="L207JH1_CQdk" # ## **Tabela relatório completo Ciclo Rankine** # + cellView="code" code_folding=[] colab={"base_uri": "https://localhost:8080/"} hide_input=false id="GY2tqM_TCH9V" outputId="e0620762-9021-479d-fa39-66cb46502acc" # Tabela 3: table3 = STT_T + STT_P + STT_X + STT_h + STT_s + STT_v headers3=["Propriedade", "%", "Estado 1", "Estado 2", "Estado 3", "Estado 4", "Estado 5", "Estado 6", 
"Estado 7", "Estado 8", "Estado 9", "Estado 10", "Estado 11", "Estado 12"] print(tab(table3, headers3, tablefmt="rst", stralign="center", numalign="center")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # #### Setup # In order for all of the code below to work we need to do some necessary Python incantations. import matplotlib.pyplot as plt from matplotlib import patches # we'll use the wedge patch for drawing apertures import plotutils as pu # helpers to setup matplotlib import pfcalc # helpers for this notebook to hide all my fugly plotting code # %matplotlib inline # This notebook describes the alternative pupil fill calculation in the ITFW (integration time framework, also known as the dose framework) that will be used to support the new moon apertures. # # #### Overview # When we observe light that is reflected from _tall_ open structures we tend to observe multiple _layers_ in the objective that are layed out like concentric circles in a 2D plane. # # Since we are only interested in the first order we want to cut away the other circles and these apertures are designed to do that. pfcalc.aperture_examples() # When light goes through an aperture it will eventually hit the _reflective_, reflect in the opposite angle (more or less) and continue its path until it finally hits the *objective*. The light will naturally hit the objective in a (sort of) reflected position depending on the direction of the beam. In our case, the exact position also depends on the wavelength and the alignment and pitch of the grating. # # We will assume that the alignment of the grating and the type of aperture chosen are such that the light that we are interested in ends up in the top right quadrant. This means that our objective position only depends on the wavelength $\lambda$ and pitch of the grating $p$ which we will define soon. # # We can visualize our main use cases as follows where the blue part represents the position where the light enters the aperture and the red part represents the position where the light hits the objective under ideal circumstances. pfcalc.aperture_reflection_examples() # Notice again that in all of these cases we are interested in the light that ends up in the top right quadrant. The real goal of this calculation is to calculate the red zone with regards to the pitch of the grating, the wavelength of the light and the numerical aperture of the obvjective. # # This calculation is based on _pixel counting_ and so we need to translate from NA coordinates to pixel coordinates. In NA coordinate space the origin $(0,0)$ is in the center of the plane. But in pixel coordinate space the origin is at the lower left corner (or the top left corner in some graphics systems) of the bitmap. That means that the point $(0,0)$ in NA coordinates is equivalent to the center point $cp = (\lceil\frac{W}{2}\rceil, \lceil\frac{H}{2}\rceil)$ in pixel or bitmap coordinates where $W$ and $H$ are the width and height of our bitmap respectively. # # Since our bitmaps are square with fixed size we can get away with using scalars instead of vectors to translate both $x$ and $y$ coordinates from one coordinate system to another. Also we can make them constant since these will not change during runtime. 
W = 550 # width (and height) of our bitmap cp = 275 # center point of our bitmap # With these constants we can translate from pixel coordinates to NA coordinates. Note that this `px2na0` is just a preliminary. We will define the final `px2na` shortly. def px2na0(v): return (v - cp) / (0.5 * W) # is this not just interpolating? px2na0(500) # However, for (TODO: how do we come up with this number?) we will a scaling factor $s$ instead of $W$ where $s = 273$. So we have: s = 273 def px2na(v): return (v - cp) / s; px2na(500) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # # # # ## Continuación del tutorial pandas #
    # Autores: , , , #
    # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} # Para usarlo primero hay que importarlo from pandas import Series, DataFrame import pandas as pd # + [markdown] deletable=true editable=true # ### Solución al ejercicio 1 # - Leer datos desde el archivo `DATOS_Ej.csv`. # - Muestren los primeros 10 # - Muestren los últimos 10 # - Muestren solo los nombres de los propietarios # - Agregar una columna de Edad # - Asignar una edad a cada registro (como prefieran) # - Agregar 10 registros, uno por cada compañero de clase (puede inventar los datos, excepto el nombre) # - Cuenten cuántas casas y departamentos hay (tipo de bien) en la BD # - Indiquen cuantos Juan son propietarios # - Llenar los missing data con el valor predominante en cada columna # - Contar cuantos elementos hay de cada columna # - Copien la BD a otro Dataframe # - En el nuevo Dataframe # - Borrar todos los registros de localidad Laraquete # - Borrar las fichas con valore entre 30 y 60. # # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # - Leer datos desde archivo. # # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} Datos=pd.read_csv("DATOS_Ej.csv", index_col=0) Datos # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Ejercicio con pandas # - Leer datos desde archivo. # - Llenar los missing data con el valor predominante en cada columna # - Muestren los primeros 10 # - Muestren los últimos 10 # - Muestren solo los nombres de los propietarios # - Agregar una columna de Edad # - Asignar una edad a cada registro (como prefieran) # - Agregar 10 registros, uno por cada compañero de clase (puede inventar los datos, excepto el nombre) # - Cuenten cuántas casas y departamentos hay (tipo de bien) en la BD # - Me digan cuantos Juan son propietarios # - Copien la BD a otro Dataframe # - En el nuevo Dataframe # - Borrar todos los registros de localidad Laraquete # - Borrar las fichas con valore entre 30 y 60. 
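# The step-by-step solution below fills the missing data column by column. As an
# illustrative one-line alternative (a sketch, not part of the original exercise, using
# only the `Datos` DataFrame loaded above), the same result can be obtained with
# `fillna` and the per-column mode:

# +
# Datos.mode() returns the most frequent value(s) per column; .iloc[0] keeps the first
# mode of each column, and fillna() maps that Series onto the matching columns.
datos_sin_nan_demo = Datos.fillna(Datos.mode().iloc[0])
datos_sin_nan_demo.head()
# -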
# # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Muestren los primeros 10 Datos.head(10) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Muestren los últimos 10 Datos.tail(10) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Muestren solo los nombres de los propietarios Datos.PROPIETARIO Datos['PROPIETARIO'] # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Si lo queremos ir accediendo como lista hacemos L=list(Datos['PROPIETARIO']) L[0] # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Agregar una columna de Edad n_datos=Datos.reindex(index=list(Datos.index), columns=list(Datos.columns)+['Edad']) n_datos # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Agregar una columna de Edad (otra forma) n_datos['Edad']=10 n_datos # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #ASIGNAR EDAD RANDOM import random for index, row in n_datos.iterrows(): v=random.randint(18,90) n_datos.loc[n_datos.index==index,"Edad"] = v n_datos.head() # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} n_datos.tail() # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} '''Agregar 10 registros, uno por cada compañero de clase (puede inventar los datos, excepto el nombre) ''' import random import numpy as np d_temp=DataFrame(columns=n_datos.columns) Alumnos=["Manuel","Santiago","Hugo","Briana","Olivia","Axel","Rodrigo", "Daniel","Dulce","Hector"] for i in range(0,10): d_temp.loc[i,"COMUNA"] = random.choice(["Arauco",np.nan]) d_temp.loc[i,"LOCALIDAD"]= random.choice(["Arauco","Carampangue", "Laraquete",np.nan]) d_temp.loc[i,"TIPO DE BIEN"]= random.choice(["departamento", "Casa",np.nan]) d_temp.loc[i,"NORMATIVA"]= random.choice(["Urbana", "Rural",np.nan]) d_temp.loc[i,"PROPIETARIO"]= Alumnos[i] d_temp.loc[i,"Edad"]= random.randint(20,25) d_temp # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} n_datos=n_datos.append(d_temp, ignore_index=True) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} n_datos.tail(11) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} '''Cuenten cuántas casas y departamentos hay (tipo de bien) en la BD ''' casas=n_datos["TIPO DE BIEN"]=="Casa" sum(casas) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} casas=n_datos.loc[n_datos["TIPO DE BIEN"]=="Casa",].count() casas # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} casas["TIPO DE BIEN"] # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} '''Cuenten cuántas casas y departamentos hay (tipo de bien) en la BD ''' departamentos=n_datos["TIPO DE BIEN"]=="departamento" sum(departamentos) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} departamentos=n_datos.loc[n_datos["TIPO DE BIEN"] =="departamento",].count() departamentos["TIPO DE BIEN"] # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} '''Me 
digan cuantos Juan son propietarios ''' juanes=n_datos.PROPIETARIO=="Juan" sum(juanes) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} '''Llenar los missing data con el valor predominante en cada columna ''' Predominantes=[] def F_predominantes(p): for c in n_datos.columns: t=n_datos.xs(c, axis=1).mode() p.append(t.values[0]) return p Predominantes=F_predominantes(Predominantes) Predominantes # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} t=n_datos.xs("Edad", axis=1).mode() t # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} x=0 datos_nonan=n_datos.copy() columnas = datos_nonan.columns for c in columnas: datos_nonan[c].fillna(Predominantes[x], inplace=True) x = x + 1 # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} datos_nonan.tail(11) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} n_datos.tail(11) # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} #Contar cuantos elementos hay de cada columna columnas = datos_nonan.columns e_columas = [] for c in columnas: x = datos_nonan[c].value_counts() e_columas.append(x) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} x = datos_nonan['Edad'].value_counts() x # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #En e_columnas se almacenan los elementos únicos en cada columna. len(e_columas[5]) # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} i=0 for c in columnas: print("En la Columna",c,"Hay ", len(e_columas[i]), " elementos diferentes ") i+=1 # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} borrar_DF = n_datos.copy() # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Borrar todos los registros de localidad Laraquete borrar_DF.drop(borrar_DF.index[borrar_DF.LOCALIDAD == "Laraquete"], inplace=True) borrar_DF # + deletable=true editable=true jupyter={"outputs_hidden": true} slideshow={"slide_type": "slide"} #Borrar todos los registros de localidad Laraquete borrar_DF = borrar_DF[borrar_DF.LOCALIDAD != "Laraquete"] # + deletable=true editable=true jupyter={"outputs_hidden": false} slideshow={"slide_type": "slide"} #Borrar indices entre 30 y 60 borrar_DF = borrar_DF[(borrar_DF.index < 30) | (borrar_DF.index > 60)] borrar_DF.head(40) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import datetime import numpy as np import matplotlib.pyplot as plt # + def mandelbrot( w, h, maxiter=200 ): y, x = np.ogrid[ -1.2:1.2:h*1j, -2:0.8:w*1j ] C = x + y * 1j Z = np.zeros(C.shape, dtype=int) N = maxiter + Z bailout = 2.0 for i in range(maxiter): Z = Z ** 2 + C diverged = np.abs(Z) > bailout N[diverged & (N==maxiter)] = i Z[diverged] = 2 return N # plt.imshow(mandelbrot(1920,1200), cmap='tab20c') t1 = datetime.datetime.now() plt.imshow(mandelbrot(1920,1200), cmap='prism') t2 = datetime.datetime.now() runtime1 = (t2-t1).total_seconds() print(str(runtime1) + " seconds") # plt.savefig('mandelbrot1.png', dpi=300) plt.show() # + def mandelbrot( w, 
h, maxiter=200 ): y, x = np.ogrid[ -1.2:1.2:h*1j, -2:0.8:w*1j ] C = x + y * 1j Z = np.zeros(C.shape, dtype=complex) M = np.full(C.shape, True, dtype=bool) N = np.zeros(C.shape) bailout = 2.0 for i in range(maxiter): Z[M] = Z[M] ** 2 + C[M] diverged = np.abs(Z) > bailout M[diverged] = False N[M] = i+1 return N # plt.imshow(mandelbrot(1920,1200), cmap='tab20c') t1 = datetime.datetime.now() plt.imshow(mandelbrot(1920,1200), cmap='prism') t2 = datetime.datetime.now() runtime2 = (t2-t1).total_seconds() print(str(runtime2) + " seconds") # plt.savefig('mandelbrot2.png', dpi=300) plt.show() # - print("Delta absolute:" + str(runtime1-runtime2) + " seconds") print("Delta relative:" + str(runtime2/runtime1*100) + " %") // --- // jupyter: // jupytext: // text_representation: // extension: .cpp // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: C++14 // language: C++14 // name: xcpp14 // --- // + tags=["remove-cell"] #pragma cling add_include_path("../../include") #pragma cling add_include_path("../feltor/inc") // Feltor path #define THRUST_DEVICE_SYSTEM THRUST_DEVICE_SYSTEM_CPP #include #include "dg/algorithm.h" // - // # Timesteppers // This chapter deals with how to integrate differential equations of the form // \begin{align} // M(y,t) \frac{d}{dt}y = F(y,t) // \end{align} // where $M$ is the mass matrix and $F$ is the right hand side. // // ## Overview // Generally we need three things to solve an ODE numerically // - **the stepper method** (there are several stepper classes each coming with a range of tableaus to chose from: Runge Kutta vs Multistep, explicit vs implicit and more) e.g. `dg::ExplicitMultistep` // - **the ode itself**, this has to be provided by the user as the right hand side functor or tuple of functors including the solver method if an implicit stepper is used and then tied to a single object typically using `std::tie`, e.g. `std::tie( ex, im, solve);` // - **the timeloop** (adaptive timestep vs fixed timestep) -> can be implemented by the user or in Feltor represented by an instance of `dg::aTimeloop` class (useful if one wants to choose a method at runtime) e.g. `dg::AdaptiveTimeloop`. The timeloop includes the **initial condition** and the integration boundaries (these are the paramters to the integrate function) `timeloop.integrate( t0, u0, t1, u1);` // . // // The `dg` library provides a wide selection of explicit, implicit and imex timesteppers: // - `dg::ERKStep` embedded Runge Kutta // - `dg::DIRKStep` diagonally implicit Runge Kutta // - `dg::ARKStep` additive Runge Kutta (imex) // - `dg::ExplicitMultistep` // - `dg::ImplicitMultistep` // - `dg::ImExMultistep` // - see [doxygen documentation](https://feltor-dev.github.io/doc/dg/html/group__time.html) for a full list // // The `dg::ERKStep`, `dg::DIRKStep` and `dg::ARKStep` (and in general any embedded method) can be used as a driver for an adaptive Timestepper // - `dg::Adaptive` // ```{admonition} Vector type // All of the above classes are templates of the Vector type in use. Chosing an appropriate type for the integration variable(s) is usually the first decision to make when implementing a differential equation. Anything that works in the `dg::blas1` functions is allowed so check out the [Vectors](sec:vectors) chapter. 
// ``` // // // In order to implement any Runge-Kutta or multistep algorithm we need to be able to solve two general types of equations (see [theory guide on overleaf](https://www.overleaf.com/read/dfxncmnnpzfm)): // \begin{align} // k = M(y,t)^{-1} \cdot F(y,t) &\text{ given $t,y$ return $k$} \\ // M(y,t)\cdot ( y-y^*) - \alpha F(y,t) = 0 &\text{ given $\alpha, t, y^*$ return $y$} // \end{align} // For any explicit (part of the) equation we need to solve the first and for any implicit (part of the) equation we need to solve both the first and the second equation. // ````{note} // It is important to realize that the timestepper needs to know neither $F$ nor $M$, nor how they are implemented or how the implicit equation is solved. // The user just needs to provide oblique functor // objects with the signature // ```cpp // // given t y write ydot // void operator()( value_type t, const ContainerType0& y, ContainerType1& ydot); // // given \alpha, t and y^* write a new y // void operator()( value_type alpha, value_type t, ContainerType& y, const ContainerType1& ystar); // ``` // ```` // More than one functor is needed for implicit or semi-implicit timesteppers. These are then expected as a `std::tuple` of functors. // Typically one would use `std::tie` to tie together two or more objects into a functor. // // The timesteppers themselves are implemented using `dg::blas1` vector additions and all have a `step` method that advances the ode for one timestep // ```cpp // dirk.step( std::tie( implicit_part, solver), t0, y0, t1, y1, dt); // ``` // This step method can be called in a loop to advance the ode for an arbitrary time. Or you can use one of the `dg::aTimeloop` classes // - `dg::AdaptiveTimeloop` // - `dg::SinglestepTimeloop` // - `dg::MultistepTimeloop` // // These classes abstract the integration loop and can be called via a common interface // ```cpp // timeloop.integrate( t0, y0, t1, y1); // ``` // This is useful especially if you want to choose various timesteppers at runtime. // // // We will now study a few case scenarios to clarify the above explanations. // ## Integrate ODEs in Feltor - first timesteppers and simple time loops // // As an example we solve the damped driven harmonic oscillator // \begin{align} // \frac{d x}{d t} &= v \\ // \frac{d v}{d t} &= -2 \nu \omega_0 v - \omega_0^2 x + \sin (\omega_d t) // \end{align} // Since we have two scalar variables we will use a `std::array` as the vector type to use. // Let us first choose some (somewhat random) parameters const double damping = 0.2, omega_0 = 1.0, omega_drive = 0.9; // We know that we can solve this ODE analytically. This comes in handy to verify our implementation. // We have an analytical solution std::array solution( double t) { double tmp1 = (2.*omega_0*damping); double tmp2 = (omega_0*omega_0 - omega_drive*omega_drive)/omega_drive; double amp = 1./sqrt( tmp1*tmp1 + tmp2*tmp2); double phi = atan( 2.*omega_drive*omega_0*damping/(omega_drive*omega_drive-omega_0*omega_0)); double x = amp*sin(omega_drive*t+phi)/omega_drive; double v = amp*cos(omega_drive*t+phi); return {x,v}; } // ### Explicit Runge-Kutta - fixed step // // First, show how to implement a simple timeloop with a fixed stepsize Runge Kutta integrator. We choose the classic 4-th order scheme, but consult the [documentation](https://feltor-dev.github.io/doc/dg/html/group__time.html) for an extensive list of available tableaus. // + // The right hand side needs to be a callable function in Feltor. 
// In modern C++ this can for example be a lambda function: auto rhs = [&]( double t, const std::array& y, std::array& yp) { //damped driven harmonic oscillator // x -> y[0] , v -> y[1] yp[0] = y[1]; yp[1] = -2.*damping*omega_0*y[1] - omega_0*omega_0*y[0] + sin(omega_drive*t); }; // Let us choose an initial condition and the integration boundaries double t0 = 0., t1 = 1.; const std::array u0 = solution(t0); // Here, we choose the classic Runge-Kutta scheme to solve dg::RungeKutta> rk("Runge-Kutta-4-4", u0); // Now we are ready to construct a time-loop by repeatedly stepping // the Runge Kutta solve with a constant timestep double t = t0; std::array u1( u0); unsigned N = 20; for( unsigned i=0; i sol = solution(t1); dg::blas1::axpby( 1., sol , -1., u1); std::cout << "Norm of error is " <](https://www.youtube.com/watch?v=3jCOwajNch0) // ``` // // ### Implicit Multistep - fixed step // In the next example we want to solve the same ode with an implicit multistep method: // + // First we need to provide a solution method for the prototypical implicit equation auto solve = [&]( double alpha, double t, std::array& y, const std::array& yp) { // y - alpha RHS( t, y) = rho // can be solved analytically y[1] = ( yp[1] + alpha*sin(omega_drive*t) - alpha*omega_0*omega_0*yp[0])/ (1.+2.*alpha*damping*omega_0+alpha*alpha*omega_0*omega_0); y[0] = yp[0] + alpha*y[1]; }; // Now we can construct a multistep method dg::ImplicitMultistep> multi("ImEx-BDF-3-3", u0); // Let us choose the same initial conditions as before t = t0; u1 = u0; // Finally, we can construct a timeloop multi.init( std::tie( rhs, solve), t, u0, (t1-t0)/(double)N); for( unsigned i=0; iintegrate( t0, u0, t1, u1); // - // ```{note} // Notice the use of `std::tie` in the `init` and `step` methods. Since an implicit method needs both the implicit part `rhs` and an implicit solver `solve` to solve it. 
// ``` // ### Embedded explicit Runge Kutta - adaptive step // As a last example we want to integrate the ode with an adaptive timestepper // + dg::Adaptive>> adapt("Tsitouras11-7-4-5", u0); t = t0; u1 = u0; double dt = 1e-6; while( t < t1) { if( t + dt > t1) dt = t1 - t; adapt.step( rhs, t, u1, t, u1, dt, dg::pid_control, dg::fast_l2norm, 1e-6, 1e-6); } dg::blas1::axpby( 1., sol , -1., u1); std::cout << "Norm of error is " <; // const unsigned* nsteps; // // std::string stepper = "Adaptive"; // std::cin >> stepper; // // // A pointer to an abstract integrator // auto odeint = std::unique_ptr>(); // // if( "Adaptive" == stepper) // { // dg::Adaptive> adapt("Tsitouras11-7-4-5", u0); // odeint = std::make_unique>( adapt, // rhs, dg::pid_control, dg::fast_l2norm, 1e-6, 1e-6); // nsteps = &adapt.nsteps(); // } // else if( "Singlestep" == stepper) // { // dg::RungeKutta rk("Runge-Kutta-4-4", u0); // odeint = std::make_unique>( rk, // rhs, (t1-t0)/(double)N); // nsteps = &N; // } // else if( "Multistep" == stepper) // { // dg::ExplicitMultistep multi( "ImEx-BDF-3-3", u0); // odeint = std::make_unique>( multi, // rhs, t0, u0, (t1-t0)/(double)N); // nsteps = &N; // } // else // std::cerr << "Stepper "< integrate(t0, u0, t1, u1); // // dg::blas1::axpby( 1., sol , -1., u1); // std::cout << "Norm of error is " <& y, const std::array& yp) // { // // solve y - alpha F(y,t) = yp // } // } // // auto doNotCall = [](double t, const auto& x, auto& y) // { // // Just to be entirely certain // assert(false && "This should never be called!"); // } // // VanderPolSolver solver( ...); // dg::DIRKStep dirk( "SDIRK-4-2-3", y0); // // odeint = std::make_unique>( dirk, // std::tie( doNotCall, solver), // (t1-t0)/(double)N); // odeint -> integrate(t0, u0, t1, u1); // ``` // # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bonus: Temperature Analysis I import pandas as pd from datetime import datetime as dt # "tobs" is "temperature observations" df = pd.read_csv('resources/hawaii_measurements.csv') df.head() # Convert the date column format from string to datetime datetime_format = "%Y-%m-%d" df["date"] = pd.to_datetime(df["date"], format=datetime_format) # Set the date column as the DataFrame index df.set_index("date", inplace = True) # Drop the date column if "prcp" in df.columns: df.drop("prcp", axis=1, inplace=True) df # ### Compare June and December data across all years from scipy import stats # Filter data for desired months June june_df = df.filter(regex ="\d\d\d\d-06-\d\d", axis=0).sort_index() june_df # Cont. 
Filter data for desired months December dec_df = df.filter(regex ="\d\d\d\d-12-\d\d", axis=0).sort_index() dec_df # + # Identify the average temperature for June def get_year_ranges(data): min_date = min(data.index) max_date = max(data.index) return list(range(min_date.year, max_date.year)) print("Identify the average temperature for June") all_june_data = {} for year in get_year_ranges(june_df): key = str(year) value = june_df.filter(regex = f"{year}-\d\d-", axis=0)["tobs"] all_june_data[key] = value print(f"year: {year}, mean: {round(value.mean(), 2)}") # - # Identify the average temperature for December print("Identify the average temperature for December") all_dec_data = {} for year in get_year_ranges(dec_df): key = str(year) value = dec_df.filter(regex = f"{year}-\d\d-", axis=0)["tobs"] all_dec_data[key] = value print(f"year: {year}, mean: {round(value.mean(), 2)}") # + # Create collections of temperature data # - # Run paired t-test t_value, p_value = stats.ttest_ind(june_df["tobs"], dec_df["tobs"]) print(f"t_value {t_value}") print(f"p_value {p_value}") # ### Analysis import matplotlib.pyplot as plt import matplotlib.ticker as mtick years = get_year_ranges(june_df) fig, ax = plt.subplots(1, 1, figsize = (7,6)) plt.boxplot( [june_df.filter(regex = f"{year}-\d\d-", axis=0)["tobs"] for year in years], vert=True, labels=years) ax.yaxis.set_major_formatter(mtick.StrMethodFormatter('{x:,.0f}°')) plt.title(f'\nObservations for State of Hawaii Temperature in June', fontsize = (18)) plt.xlabel('Years', fontsize = (22)) plt.ylabel('Temperature (°F)', fontsize = (22)) plt.grid(alpha = 0.7) plt.savefig('Images/bonus_1_hawaii_june_temps.png') plt.show() years = get_year_ranges(dec_df) fig, ax = plt.subplots(1, 1, figsize = (7,6)) plt.boxplot( [dec_df.filter(regex = f"{year}-\d\d-", axis=0)["tobs"] for year in years], vert=True, labels=years) ax.yaxis.set_major_formatter(mtick.StrMethodFormatter('{x:,.0f}°')) plt.title(f'\nObservations for State of Hawaii Temperature in December', fontsize = (18)) plt.xlabel('Years', fontsize = (22)) plt.ylabel('Temperature (°F)', fontsize = (22)) plt.grid(alpha = 0.7) plt.savefig('Images/bonus_1_hawaii_dec_temps.png') plt.show() # Use the t-test to determine whether the difference in the means, if any, is statistically significant. # Will you use a paired t-test, or an unpaired t-test? # # Answer: Due to the data being different the t-test will not be utilized in this case. # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.1 # language: julia # name: julia-1.7 # --- # # MS-E2121 - Linear Optimization: Homework 6 # #' # ## Exercise 6.2 - Valid inequalities for the TSP # Solve item (b) here. In the following cells you are provided with the **same code** as presented in session 8 for solving the TSP-MTZ. 
using JuMP # JuMP: Modeling language and solver interface using DelimitedFiles # IO reading/writting files using Cbc # Solver using LinearAlgebra # To use the norm using Plots # For plotting the tours using Combinatorics # To perform permutations ## Function for getting the distances array function get_dist(xycoord::Matrix{},n::Int) # Compute distance matrix (d[i,j]) from city coordinates dist = zeros(n,n) for i = 1:n for j = i:n d = norm(xycoord[i,:] - xycoord[j,:]) dist[i,j] = d dist[j,i] = d end end return dist end # Get the optimal tour # Input # x: solution matrix # n: number of cities # Returns # tour: ordering of cities in the optimal tour function gettour(x::Matrix{Int}, n::Int) tour = zeros(Int,n+1) # Initialize tour vector (n+1 as city 1 appears twice) tour[1] = 1 # Set city 1 as first one in the tour k = 2 # Index of vector tour[k] i = 1 # Index of current city while k <= n + 1 # Find all n+1 tour nodes (city 1 is counted twice) for j = 1:n if x[i,j] == 1 # Find next city j visited immediately after i tour[k] = j # Set city j as the k:th city in the tour k = k + 1 # Update index k of tour[] vector i = j # Move to next city break end end end return tour # Return the optimal tour end # Solve a directed, TSP instance (MTZ formulation) # Input # xycoord: coordinates of city locations # n: number of cities # Returns # tour: ordering of cities in the optimal tour # cost: cost (length) of the optimal tour function tsp_mtz(xycoord::Matrix{}, n::Int) # Create a model m = Model(Cbc.Optimizer) # Here the costs c are the distances between cities c = get_dist(xycoord,n) ## Variables # x[i,j] = 1 if we travel from city i to city j, 0 otherwise. @variable(m, x[1:n,1:n], Bin) # Variables u for subtour elimination constraints @variable(m, u[2:n]) ## Objective # Minimize length of tour @objective(m, Min, dot(c,x)) ## Constraints # Ignore self arcs: set x[i,i] = 0 @constraint(m, sar[i = 1:n], x[i,i] == 0) # We must enter and leave every city exactly once @constraint(m, ji[i = 1:n], sum(x[j,i] for j = 1:n if j != i) == 1) @constraint(m, ij[i = 1:n], sum(x[i,j] for j = 1:n if j != i) == 1) # MTZ subtour elimination constraints @constraint(m, sub[i = 2:n, j = 2:n, i != j], u[i] - u[j] + (n-1)*x[i,j] <= (n-2)) optimize!(m) cost = objective_value(m) # Optimal cost (length) sol_x = round.(Int, value.(x)) # Optimal solution vector tour = gettour(sol_x,n) # Get the optimal tour return tour, cost end; # + ## "data16c.csv" has 3 columns which are stored in arrays ## V, x and y. The columns contain: ## ## data[:,1]: all cities i in V ## data[:,2]: x-coordinate of each city i in V ## data[:,3]: y-coordinate of each city i in V data = readdlm("data16c.csv", ';') n = 16 # number of cities # println(data) # Look at the data in compact form V = data[2:n+1,1] # All cities i in V x = data[2:n+1,2] # x-coordinates of cities i in V y = data[2:n+1,3] # y-coordinates of cities i in V xycoord = [x y]; # n x 2 coordinate matrix # - @time (tour, cost) = tsp_mtz(xycoord, n); # #### Implement the valid inequalities and compare with the former formulation ## Solve a directed, TSP instance (MTZ formulation) ## Input ## xycoord: coordinates of city locations ## n: number of cities ## Returns ## tour: ordering of cities in the optimal tour ## cost: cost (length) of the optimal tour function tsp_mtz_vi(xycoord::Matrix{}, n::Int) # Create a model m = Model(Cbc.Optimizer) # Here the costs c are the distances between cities c = get_dist(xycoord,n) ## Variables # x[i,j] = 1 if we travel from city i to city j, 0 otherwise. 
@variable(m, x[1:n,1:n], Bin) # Variables u for subtour elimination constraints @variable(m, u[2:n]) ## Objective # Minimize length of tour @objective(m, Min, dot(c,x)) ## Constraints # Ignore self arcs: set x[i,i] = 0 @constraint(m, sar[i = 1:n], x[i,i] == 0) # We must enter and leave every city exactly once @constraint(m, ji[i = 1:n], sum(x[j,i] for j = 1:n if j != i) == 1) @constraint(m, ij[i = 1:n], sum(x[i,j] for j = 1:n if j != i) == 1) # MTZ subtour elimination constraints @constraint(m, sub[i = 2:n, j = 2:n, i != j], u[i] - u[j] + (n-1)*x[i,j] <= (n-2)) # Additional valid inequalities # TODO: add your code here @constraint(m, valid_ineq_1[i = 1:n, j = 1:n, i != j], x[i,j] + x[j,i] <= 1) @constraint(m, valid_ineq_2[i = 2:n, j = 2:n, i != j], u[i] - u[j] + (n-1)*x[i,j] + (n-3)*x[j,i] <= n-2) @constraint(m, valid_ineq_3[j = 2:n], u[j] - 1 + (n-2)*x[1,j] <= n-1) @constraint(m, valid_ineq_4[i = 2:n], 1 - u[i] + (n-1)*x[i,1] <= 0) optimize!(m) cost = objective_value(m) # Optimal cost (length) sol_x = round.(Int, value.(x)) # Optimal solution vector tour = gettour(sol_x,n) # Get the optimal tour return tour, cost, termination_status(m) end; ## Solve the problem and evaluate time and memory with @time macro @time (tour, cost, status) = tsp_mtz_vi(xycoord, n); @show status; # #### Extra material # Here is a small code snippet for generating random instances and collecting the corresponding solution times. You are not required to compare these in your report, but in case you're interested about the performance differences in general, we'll give you a starting point. # + # using Suppressor # For suppressing the Cbc output # n_sample = 30 # Sample size # # Initializing result vectors # sol_times_mtz = zeros(n_sample) # sol_times_mtz_vi = zeros(n_sample) # for i in 1:n_sample # println("Optimizing instance number $i") # xycoord_rand = rand(0:5000, n, 2) # Random instance # # The @suppress_out block suppresses all output inside the block # @suppress_out begin # # @elapsed allows saving the elapsed time to a variable # sol_times_mtz[i] = @elapsed tsp_mtz(xycoord_rand, n); # sol_times_mtz_vi[i] = @elapsed tsp_mtz_vi(xycoord_rand, n); # end # end # - # ## Exercise 6.3 - Capacitated vehicle routing problem with time window (CVRPwTW) # # ### Background # # Consider a centralised depot, from which deliveries are supposed to be made from. The delivers have to be made to a number of clients, and each client has a specific demand. Some assumptions we will consider: # # * The deliveries have to be made by vehicles that are of limited capacity; # * Multiple routes are created, and a single vehicle is assignmed to each route; # * We assume that the number of vehicles is not a limitation. # # Our objective is to define **optimal routes** such that the total distance travelled is minimised. # # ### Problem structure and input data # # Let us define the elements that form our problem. You will notice that we use a graph-based notation, referring to elements such as arcs and nodes. # # #### Structural elements # * $n$ is the total number of clients # * $N$ is the *set* of clients, with $N = \{2, \dots, n+1\}$ # * $V$ is the set of *nodes*, representing a depot (node 1) and the clients (nodes $i \in N$). Thus $V = \{1\} \cup N$. 
# * $A$ is a set of *arcs*, with $A = \{(i,j) \in V \times V : i \neq j\}$ # # #### Parameters (input data structure) # * $C_{i,j}$ - cost of travelling via arc $(i,j) \in A$ (equals distance between $i$ and $j$); # * $Q$ - vehicle capacity in units; # * $D_i$ - amount that has to be delivered to customer $i \in N$, in units; # # # #### Considering the time windows # # We will expand the model to consider time windows. Let's suppose that now each demand node must be visited within a specific time window $[T^{LB}_j, T^{UB}_j]$. Suppose we know the travel time $T_{ij}$ through arc $(i,j)$ and service time $S_j$ for each node $j \in N$. The time windows and service times are only defined for client nodes. For the depot, the departure time should be nonnegative and there is no service time. How can we model this additional restriction on client time windows? using Plots # To generate figures using JuMP # Mathematical programming language using Cbc # Solver used using JLD2 # Problem structural elements struct Instance n # Total of nodels N # Set of client nodes V # Set of all nodes, including depot A # Set of arcs loc_x # x-coordinates of all points loc_y # y-coordinates of all points Q # vehicle capacity D # demand at the node (to be delivered) C # arc trasversal cost S # Service time at j T # Travel time between i and j T_lb # Earliest possible visit time T_ub # Latest possible visit time bigM # A suitable big M value to be used in all big M constraints end f = jldopen("hw6_testins.jld2") test_ins = nothing try test_ins = f["testins"] finally close(f) end; # The cell below shows a plot of the problem. # + ## Plotting nodes scatter(test_ins.loc_x, test_ins.loc_y, legend = false, size = (1400,700), xaxis = ("x", (-50.0,1100.0)), yaxis = ("y", (-50.0,700.0)), ) for i in test_ins.N annotate!(test_ins.loc_x[i], test_ins.loc_y[i] +15, ("$i", 7)) annotate!(test_ins.loc_x[i], test_ins.loc_y[i] - 17, ("D[$(i)] = $(test_ins.D[i])", 7)) annotate!(test_ins.loc_x[i], test_ins.loc_y[i] - 35, ("Arrival time in [$(round(test_ins.T_lb[i],digits=1)), $(round(test_ins.T_ub[i],digits=1))]", 7)) end scatter!((test_ins.loc_x[1], test_ins.loc_y[1]), color=:orange) # - # ### Item (a) # Write the MIP formulation in your report # ### Item (b) # Complete the function `create_VRP_model()` below to implement the CVRPwTW model. The time windows and service times are only defined for client nodes. For the depot, the departure time should be nonnegative and there is no service time. function create_VRP_model(ins; max_time = 300) n = ins.n # Number of client nodes N = ins.N # Set of client nodes V = ins.V # Set of all nodes, including depot A = ins.A # Set of arcs loc_x = ins.loc_x # x-coordinates of all points loc_y = ins.loc_y # y-coordinates of all points Q = ins.Q # Vehicle capacity D = ins.D # Demand at the node (to be delivered) C = ins.C # Arc traversal cost S = ins.S # Service time T = ins.T # Travel time between T_lb = ins.T_lb # Lower bound for the time window T_ub = ins.T_ub # Upper bound for the time window bigM = ins.bigM # A large enough big M value to be used in all big M constraints model = Model(Cbc.Optimizer) # Declaring the model object. 
set_optimizer_attribute(model, "seconds", max_time) # Declare decision variables @variable(model, x[i in V, j in V], Bin) @variable(model, 0 <= u[i in V] <= Q) @constraint(model, u[1]==Q) # TODO: add your code here @variable(model, t[i in V] >= 0) @constraint(model, t[1] == 0) # Objective function # @objective(model, Min, sum(C[i,j] * x[i,j] for (i,j) in A)) @objective(model, Min, sum(C[i,j] * x[i,j] for (i,j) in A) + sum(t[i] for i in V)) # Constraints @constraint(model, c1[i in N], sum(x[i,j] for j in V if j != i) == 1) @constraint(model, [j in N], sum(x[i,j] for i in V if i != j) == 1) @constraint(model, [i in V, j in N], u[i] - D[j] <= u[j] + bigM * (1 - x[i,j])) @constraint(model, [i in V, j in N], u[i] - D[j] >= u[j] - bigM * (1 - x[i,j])) # TODO: add your code here @constraint(model, [j in N], T_lb[j] <= t[j] <= T_ub[j]) # The starting node - depot - doesn't have service time S[1] @constraint(model, [j in N], t[1] + T[1,j] <= t[j] + bigM * (1 - x[1,j])) @constraint(model, [(i,j) in A; i != 1 && j != 1], t[i] + S[i] + T[i,j] <= t[j] + bigM * (1 - x[i,j])) # set_silent(model) return model end ## Create the basic model and optimise it basic_model = create_VRP_model(test_ins; max_time = 300) optimize!(basic_model) # @show termination_status(basic_model) # The cell below shows a plot of the solution. Use it to verify if your implementation is correct. Check, for example, that the arrival time is inside the allowed delivery time window. # + ## Retrieving active arcs. active_arcs = [(i,j) for (i,j) in test_ins.A if value.(basic_model[:x])[i,j] >= 1-(1E-9)] scatter(test_ins.loc_x, test_ins.loc_y, legend = false, size = (1100,700), xaxis = ("x", (-10.0,1100.0)), yaxis = ("y", (-10.0,700.0)), ) for i in test_ins.N annotate!(test_ins.loc_x[i], test_ins.loc_y[i] - 20, ("Arrival time is $(value(basic_model[:t][i]))", :bottom, 6)) annotate!(test_ins.loc_x[i], test_ins.loc_y[i] - 35, ("Must be within [$(round(test_ins.T_lb[i],digits=2)), $(round(test_ins.T_ub[i],digits=2))]", :bottom, 6)) end ## Plotting the arcs for (i,j) in active_arcs quiver!([test_ins.loc_x[i]], [test_ins.loc_y[i]], quiver=([test_ins.loc_x[j] - test_ins.loc_x[i]], [test_ins.loc_y[j] - test_ins.loc_y[i]]), color = 1 ) end scatter!((test_ins.loc_x[1], test_ins.loc_y[1]), color=:orange) # - # #### Larger instance # After you're confident that your model works, use this bigger instance to solve the model with different solver configurations. These results are then what you should compare in your report. f = jldopen("hw6_ins.jld2") ins = nothing try ins = f["ins"] finally close(f) end; ## Create the basic model and optimise it basic_model = create_VRP_model(ins; max_time = 300) optimize!(basic_model) # ### Item (c) # # Consider the configurations for parameters provided below and notice whether you can, comparing with the solution log from the previous item (i.e., with default parameter settings), choose a best one between the configurations. For parameter documentation, see https://www.gams.com/latest/docs/S_CBC.html # # You can use as a reference for comparison, for example: # - Time taken to solve the problem or # - Optimality gap reached when the specified time limit was reached # - Number of nodes visited # - Solution at the root node before starting branching # + # Configuration 1: turning off presolve. no_presolve = create_VRP_model(ins; max_time = 300) set_optimizer_attribute(no_presolve, "presolve", "off") optimize!(no_presolve) # + # Configuration 2: turning off cuts. 
no_cuts = create_VRP_model(ins; max_time = 300) set_optimizer_attribute(no_cuts, "cuts", "off") optimize!(no_cuts) # + # Configuration 3: turning off heuristics. no_heuristics = create_VRP_model(ins; max_time = 300) set_optimizer_attribute(no_heuristics, "heuristics", "off") optimize!(no_heuristics) # + # Configuration 4: using a lot of cuts and heuristics. model_mod = create_VRP_model(ins; max_time = 300) set_optimizer_attributes(model_mod, "cliqueCuts" => "on", "GMICuts" => "on", "reduceAndSplitCuts" => "on", "knapsackCuts" => "on", "flowCoverCuts" => "on", "liftAndProjectCuts" => "on", "zeroHalfCuts" => "on", "cutDepth" => 500 ) set_optimizer_attributes(model_mod, "Rins" => "on", "Rens" => "on", "VndVariableNeighborhoodSearch" => "on", "DivingSome" => "on", "expensiveStrong" => 2 ) optimize!(model_mod) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import numpy as np import cv2 os.getcwd() # cd D:\yolo\keras-yolo3 net = cv2.dnn.readNet('yolov3.weights','yolov3.cfg') classes = [] with open(os.getcwd() + '\model_data\coco_classes.txt','r') as f: classes = [line.strip() for line in f.readlines()] print("There are {} classes".format(len(classes))) layer_names = net.getLayerNames() outputLayers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()] img = cv2.imread(os.getcwd() + '\images\giraffe.jpg') img = cv2.resize(img,(608,608)) height, width, channels = img.shape print(img.shape) cv2.imshow("Image",img) cv2.waitKey(0) cv2.destroyAllWindows() blob = cv2.dnn.blobFromImage(img,0.00392,(416,416),(0,0,0),True,crop = False) net.setInput(blob) outs = net.forward(outputLayers) confidence_threshold = 0.5 class_ids = [] confidences = [] boxes = [] for out in outs: for detection in out: scores = detection[5:] class_id = np.argmax(scores) confidence = scores[class_id] if confidence > confidence_threshold: center_x = int(detection[0]*width) center_y = int(detection[1]*height) w = int(detection[2]*width) h = int(detection[3]*height) #cv2.circle(img,(center_x,center_y),10,(0,255,0),2) x = int(center_x - w/2) y = int(center_y - h/2) #cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2) boxes.append([x,y,w,h]) confidences.append(float(confidence)) class_ids.append(class_id) # + indexes = cv2.dnn.NMSBoxes(boxes,confidences,score_threshold = confidence_threshold,nms_threshold = 0.6) area = [] centers = [] font = cv2.FONT_HERSHEY_PLAIN for i in range(len(boxes)): if i in indexes: x,y,w,h = boxes[i] area.append((w*h)/(width*height)) label = str(classes[class_ids[i]]) #color = colors[i] centers.append([x,y]) cv2.rectangle(img,(x,y),(x+w,y+h),2)#color,2) cv2.putText(img,label,(x,y+30),font,1,(255,255,255),2) cv2.imshow("Image",img) cv2.waitKey(0) cv2.destroyAllWindows() # - for index,value in enumerate(area): print("Area of object {} is {}".format(index,value)) print("center of object : \n x = {}, y = {}".format(centers[index][0],centers[index][1])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="HktQn7JaSCss" # # Trabalho Prático Módulo 3 # # Bootcamp Engenheiro de Machine Learning @ IGTI # # Objetivos: # * Exercitar conceitos sobre medidas de desempenho para regressão. # * Modelar um problema como uma tarefa de regressão. 
# * Avaliar um modelo de regressão. # * Exercitar conceitos sobre medidas de desempenho para classificação. # * Modelar um problema como uma tarefa de classificação. # * Avaliar um modelo de classificação. # * Exercitar conceitos sobre medidas de desempenho para clusterização. # * Modelar um problema como uma tarefa de clusterização. # * Avaliar um modelo de clustering. # + [markdown] id="Ix5BpDViUiw8" # ## Importação # + id="g2u1ohkURtb1" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import plotly.express as px # + id="8hq9GA-0TwWP" #Regression task com regressão linear diabetes = pd.read_csv('/content/drive/MyDrive/01 - Data Science/bootcamp engenheiro de ml/modulo 3 - selecao de modelos de aprendizado de maquina/trabalho pratico modulo 3/diabetes_numeric.csv') #Classification task com SVM bloodtransf = pd.read_csv('/content/drive/MyDrive/01 - Data Science/bootcamp engenheiro de ml/modulo 3 - selecao de modelos de aprendizado de maquina/trabalho pratico modulo 3/bloodtransf.csv') #Cluster task com KMeans wine = pd.read_csv('/content/drive/MyDrive/01 - Data Science/bootcamp engenheiro de ml/modulo 3 - selecao de modelos de aprendizado de maquina/trabalho pratico modulo 3/wine.csv') # + id="3hUI2KhKV8zE" #Modelos from sklearn.linear_model import LinearRegression from sklearn.cluster import KMeans from sklearn.svm import SVC #com kernel=rbf from sklearn.model_selection import train_test_split #test_size = 0.35 e random_state = 54 # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="QayvpEJjb7Km" outputId="44167452-1b5a-4bad-9ceb-31abff7e970a" diabetes.head() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="2sSJWn_Ub2vX" outputId="84293c24-79ce-4a49-c0fc-ae4054e0c091" bloodtransf.head() # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="yXnwRkc3bVdc" outputId="788e3203-3402-4497-81fc-13b214c77101" wine.head() # + id="lGyW_9OUVH2K" #Metricas from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error, accuracy_score, precision_score, recall_score, f1_score, roc_auc_score, silhouette_score, davies_bouldin_score, mutual_info_score # + [markdown] id="NECoBWEkVxMz" # ## Perguntas # + colab={"base_uri": "https://localhost:8080/"} id="4CBPHQBnYSMv" outputId="2c4c0942-f347-4f2c-d6d1-6ecc72e435e1" # Sobre o número de atributos da base de regressão, marque a alternativa CORRETA: diabetes.shape # + colab={"base_uri": "https://localhost:8080/"} id="OiW_QyyBY9y7" outputId="f0857a8a-2eb3-4876-92a3-c6b172b09d84" # Sobre o número de instâncias da base de classificação, marque a alternativa CORRETA: bloodtransf.shape # + colab={"base_uri": "https://localhost:8080/"} id="aR9EfhsQZMNW" outputId="4884d858-2c3a-48ca-a7fc-8d56077ab46d" # Sobre a base de clusterização, marque a alternativa CORRETA: wine.shape # + colab={"base_uri": "https://localhost:8080/"} id="JV2lvQv1tQQy" outputId="79cce43b-116b-4e4f-8199-ac3755b45bbc" wine['class'].unique() # + id="mMYCrJnCaxP6" #Sobre dados faltantes, marque a alternativa CORRETA: # + colab={"base_uri": "https://localhost:8080/"} id="Rtt-UsAYa1L8" outputId="16ca923d-c4db-420a-a2f6-e731afb20327" diabetes.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="PP4WWWIJa1WC" outputId="030798a1-8e87-49d3-d0e7-e5339fd79309" bloodtransf.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="c6TckdWja1fS" outputId="350042c1-72e0-4d42-de07-fa9459a1fe7c" wine.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="eN8srq12bA6q" 
outputId="1ca745ab-8eac-4dc1-97f3-1a93fcf0a40b" # Em relação a modelagem utilizando a regressão linear, marque a alternativa CORRETA sobre a métrica r2, MAE e MSE: x_diabetes = diabetes.drop('c_peptide', axis=1) x_diabetes = np.array(x_diabetes) y_diabetes = np.array(diabetes['c_peptide']) x_treino_diabetes, x_teste_diabetes, y_treino_diabetes, y_teste_diabetes = train_test_split(x_diabetes, y_diabetes, test_size = 0.35, random_state = 54) lr = LinearRegression() lr.fit(x_treino_diabetes, y_treino_diabetes) lr_pred = lr.predict(x_teste_diabetes) #avaliando o modelo print('R2:', r2_score(y_teste_diabetes, lr_pred)) print('MAE:', mean_absolute_error(y_teste_diabetes, lr_pred)) print('MSE:', mean_squared_error(y_teste_diabetes, lr_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="dDlKPdMAYVfn" outputId="a30fd330-7c79-4364-8164-029add982c82" # mapeando classes do bloodtransf bloodtransf['Class'].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="_Goj0o26agiA" outputId="da46213c-046a-46aa-a587-f2e60bcb6b77" # Substituindo bloodtransf['Class'] = bloodtransf['Class'].replace(1, 0) bloodtransf['Class'] = bloodtransf['Class'].replace(2, 1) bloodtransf['Class'].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="Sl8SitMyfPX_" outputId="817d2309-1963-4e81-be0d-4a8602bd6f0f" # Em relação a modelagem utilizando o SVM, marque a alternativa CORRETA sobre a métrica acurácia, precision e recall, f1 e AUROC: x_blood = bloodtransf.drop('Class', axis=1) x_blood = np.array(x_blood) y_blood = np.array(bloodtransf['Class']) x_treino_blood, x_teste_blood, y_treino_blood, y_teste_blood = train_test_split(x_blood, y_blood, test_size = 0.35, random_state = 54) svc = SVC(kernel='rbf') svc.fit(x_treino_blood, y_treino_blood) svm_pred = svc.predict(x_teste_blood) #avaliando o modelo print('Accuracy:', accuracy_score(y_teste_blood, svm_pred)) print('Precision:', precision_score(y_teste_blood, svm_pred)) print('Recall:', recall_score(y_teste_blood, svm_pred)) print('F1:', f1_score(y_teste_blood, svm_pred)) print('AUROC:', roc_auc_score(y_teste_blood, svm_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="Z5JwM7yBsvXl" outputId="e216c626-799d-40f6-ba17-6ab52dc4c85a" # Em relação a modelagem utilizando o Kmeans, marque a alternativa CORRETA sobre o número de clusters: x_wine = wine.drop('class', axis=1) x_wine = np.array(x_wine) y_wine = np.array(wine['class']) x_treino_wine, x_teste_wine, y_treino_wine, y_teste_wine = train_test_split(x_wine, y_wine, test_size = 0.35, random_state = 54) for n_cluster in range(2, 10): kmeans = KMeans(n_clusters=n_cluster).fit(x_wine) label = kmeans.labels_ sil_coeff = silhouette_score(x_wine, label, metric='euclidean') print("Para n_clusters={}, o Silhouette Coefficient é {}".format(n_cluster, sil_coeff)) # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="jZbEXftY31VK" outputId="e913038d-8721-4431-b9b4-380ec0684ba9" #elbow method distortions = [] K = range(1,10) for k in K: kmeans = KMeans(n_clusters=k) kmeans.fit(x_wine) distortions.append(kmeans.inertia_) plt.plot(K, distortions, 'bx-') plt.xlabel('k') plt.ylabel('Distortion') plt.title('The Elbow Method showing the optimal k') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="YumGn_wk9Akq" outputId="ab2e5e92-9339-4f12-f01e-2a8c7482b55d" #Em relação a modelagem utilizando o Kmeans, marque a alternativa CORRETA sobre #a métrica Coeficiente de Silhueta, Davies-Bouldin Score e Mutual information kmeans = KMeans(n_clusters=3) #porque é o número de 
clusters, como visto acima kmeans.fit(x_treino_wine, y_treino_wine) k_pred = kmeans.predict(x_teste_wine) k_pred # + colab={"base_uri": "https://localhost:8080/"} id="YQxhY8zrAJgW" outputId="96ce41d7-b264-4907-c98c-9d3f08bf7249" print('Silhouette Score:', silhouette_score(x_teste_wine, k_pred)) print('Davies-Bouldin Score:', davies_bouldin_score(x_teste_wine, k_pred)) print('Mutual Info Score:', mutual_info_score(y_teste_wine, k_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + [markdown] toc="true" # # Table of Contents #

# -

# # Binary Classification
#
# ## Set-up
#
# Given $\{ (x_1, y_1), ..., (x_m, y_m) \}$, where $y \in \{0, 1\}$, we are trying to approximate $P(y | x) = \hat{y}$.
#
# Let's define $$X_{(n_{X}, m)}= \begin{bmatrix}
# x_{1,1}&...&x_{m,1}\\
# x_{1,2}&...&x_{m,2}\\
# ...&...&...\\
# x_{1, n_X}&...&x_{m, n_X}\\
# \end{bmatrix}$$
#
# And $$y_{(1, m)} = \begin{bmatrix} y_1 & y_2 & ... & y_m \\ \end{bmatrix}$$
#
# That is, we stack the individual feature vectors as columns and call the result the overall $X$ and $Y$.
#
# Given this, we parametrize our hypothesis as follows:
#
# $$\hat{y} = \sigma(w^T x + b)$$
#
# where $$\sigma(z) = \frac{1}{1+e^{-z}}$$
#
# ## Cost Function
#
# From maximum likelihood, we get the following loss function (single training example):
#
# $$ L(y_i, \hat{y_i}) = - (y_i \log(\hat{y_i}) + (1-y_i) \log(1 - \hat{y_i})) $$
#
# And the cost function (average loss over the training set):
#
# $$ J(w, b) = \frac{1}{m} \sum_{i = 1}^m L(y_i, \hat{y_i}) $$
#
# ## Optimization
#
# So we have posited a hypothesis and have a cost function to measure the quality of each possible pair of parameters. How do we find the optimal pair? Gradient descent.
#
# ### Gradient descent
#
# Gradient descent is simple hill climbing in reverse: start somewhere on the mountain, take the step along which the function decreases the most, i.e., follow the negative gradient. Do so till you reach a minimum.
#
# In pseudo-code:
#
# Repeat {
# $$w := w - \alpha \frac{\partial C}{\partial w}$$
# $$b := b - \alpha \frac{\partial C}{\partial b}$$
# }
#
# Andrew Ng didn't define a stopping criterion.
#
# ## Forward and Backward propagation
#
# Neural nets are, quite simply, the automation of variable transformation through a multi-step transformation of the input variables. The magic, of course, comes from the fact that the transformations are fitted to the training set.
#
# Thus, to compute a prediction with a neural network, we must compute multiple intermediate variables from the input up to the output. However, the effect of each input variable is now mediated by the multiple intermediate steps, so derivative computation is not straightforward: the effect of the input is spread throughout the network via all the intermediate variables.
#
# The solution is backward propagation. Start with the last hidden layer and the corresponding hidden variable, and compute the derivative with respect to it. Repeat until you have computed the derivative with respect to an input variable. The chain rule simply tells you how to organize these derivatives to obtain the derivative of the output with respect to the input.
#
# ## Logistic Reg as a Neural Net
#
# Remember: a neural net is simply a transformation-first framework. Just say that the transformation is as follows:
#
# $$ z = w^T X + b $$
# $$ a = \sigma (z)$$
#
# $$ \frac{da}{dz} = \frac{d \sigma}{dz} $$
# $$ \frac{da}{dw} = X \frac{da}{dz}^T $$
# $$ \frac{da}{db} = \frac{da}{dz} $$

# ## Forward propagation in python

import numpy as np
np.random.seed(10)
import math

# Given X, w, and b. 
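# (X below has one feature per row and one training example per column, matching the
# (n_X, m) layout defined above, so w has shape (n_X, 1) and the predictions have
# shape (1, m).) The training loop further down divides the cost by m, which is not
# defined anywhere else in this notebook; we assume it denotes the number of training
# examples and set it here so the cell runs as written.
m = 10000  # assumed: number of training examples (columns of X), matching the 1/10000 factors used for dw and db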
X = np.random.random(size = (3, 10000)) w = np.random.random(size = (3, 1)) b = 0.03 def sigmoid(z): return 1/(1 + np.exp(-z)) # Find out true y Y = sigmoid(np.dot(w.T, X) + b) Y = np.where(Y <= 0.5, 0, 1) # ## Backward propagation in python alpha = 0.03 iterations = 2000 w_hat = np.zeros_like(w) b_hat = 0 for x in range(iterations): # Forward propagation Z = np.dot(w_hat.T, X) + b_hat A = sigmoid(Z) cost = -1/m * np.sum(Y * np.log(A) + (1 - Y)* np.log(1-A)) #Backward propagation dZ = A - Y dw = 1/10000 * (np.dot(X, dZ.T)) db = 1/10000 * (np.sum(dZ)) w_hat = w_hat - alpha * dw b_hat = b_hat - alpha * db Y_hat = sigmoid(np.dot(w_hat.T, X) + b_hat) print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_hat - Y)) * 100)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Data Exploration & Preparation # # This notebook is used to explore the in-situ data for the entire list of STEREO A and B ICMEs. # # Parameters of interest: # - Magnetic field components # - Magnetic field strength # - Proton number density # - Proton speed # - Proton temperature # + from collections import defaultdict import datetime as dt import json import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.units as munits from matplotlib.backends.backend_pdf import PdfPages import numpy as np import pandas as pd from pyts.image import RecurrencePlot from sklearn.preprocessing import StandardScaler from tqdm.notebook import tqdm converter = mdates.ConciseDateConverter() munits.registry[np.datetime64] = converter munits.registry[dt.date] = converter munits.registry[dt.datetime] = converter # - # ### Parse the full helcats ICME list and extract all of the stereo A and B ICMEs # + with open('../ICME_WP4_V10.json', 'r') as fobj: json_data = json.load(fobj) df = pd.DataFrame(json_data['data'], columns=json_data['columns']) sta_icme_df = df[df['SC_INSITU'].str.contains('STEREO-A')] stb_icme_df = df[df['SC_INSITU'].str.contains('STEREO-B')] sta_icme_df.index = pd.DatetimeIndex(sta_icme_df.loc[:,'ICME_START_TIME']).tz_localize(None) stb_icme_df.index = pd.DatetimeIndex(stb_icme_df.loc[:,'ICME_START_TIME']).tz_localize(None) # - sta_icme_df.to_csv('../data/sta_icme_list.txt', header=True, index=True) stb_icme_df.to_csv('../data/stb_icme_list.txt', header=True, index=True) def read_stereo_datasets(fname): """Function for reading in stereo datasets""" with open(fname, 'r') as fobj: lines = fobj.readlines() colnames = lines[0].split() tmp = lines[1].split() units = [] units.append(' '.join(tmp[:2])) units += tmp[2:] for col, unit in zip(colnames, units): print(col, unit) data = [] index = [] for line in tqdm(lines[2:]): lsplit = line.split() index.append(dt.datetime.strptime(' '.join(lsplit[:2]), '%d-%m-%Y %H:%M:%S.%f')) data.append(list(map(float, lsplit[2:]))) df = pd.DataFrame(data, columns=colnames[1:], index=pd.DatetimeIndex(index)) return df # ### STEREO A dataset sta_icme_df.info() sta_data_df = read_stereo_datasets('../data/sta_l2_magplasma.txt') sta_data_df.index[0], sta_data_df.index[-1] sta_data_df[sta_data_df['BTOTAL'].gt(-1e30)].sort_index().rolling('20D', center=True).mean().plot(y='BTOTAL') for col in sta_data_df.columns: print(col) cols_of_interest = [ 'BTOTAL', 'BX(RTN)', 'BY(RTN)', 'BZ(RTN)', 'VP_RTN', 'NP', 'TEMPERATURE', 'BETA' ] sta_data_cut_df = sta_data_df[cols_of_interest] sta_data_cut_df = 
sta_data_cut_df[sta_data_cut_df.gt(-1e30)].dropna().sort_index() # Remove all rows where the number density, temperature, or beta values are negative since they are unphysical sta_data_cut_df = sta_data_cut_df[~sta_data_cut_df['NP'].lt(0)] sta_data_cut_df = sta_data_cut_df[~sta_data_cut_df['TEMPERATURE'].lt(0)] sta_data_cut_df = sta_data_cut_df[~sta_data_cut_df['BETA'].lt(0)] sta_data_cut_df.describe() (~sta_data_cut_df['NP'].lt(0)).sum() sta_data_cut_df.to_csv("../data/sta_dataset_cleaned.txt", header=True, index=True) sta_data_cut_df.info() sta_data_cut_df.head() cols_of_interest def quality_check_plot(stereo_df, icme_date, window_size=dt.timedelta(days=5), cols=[], normalize=True): fig, axes = plt.subplots(nrows=len(cols), ncols=1, figsize=(5,10), gridspec_kw={'hspace':0.1}, sharex=True) icme_window = slice(icme_date - window_size, icme_date + window_size) icme_data = stereo_df[icme_window][cols] for col, ax in zip(cols, axes): x = icme_data.index if normalize: y = StandardScaler().fit_transform(icme_data[col].values.reshape(-1,1)).flatten() else: y = icme_data[col] ax.plot(x, y, lw=0.8) ax.set_ylabel(col) # mean, std = icme_data[col].mean(), icme_data[col].std() # if 'temp' in col.lower() and not normalize: # ax.set_yscale('log') ax.grid(True, lw=0.8, ls='--', alpha=0.5, c='k') ax.axvline(icme_date, ls='-', c='r', lw=1.25) axes[0].set_title(f'Normalized ICME Measurements\n ICME Start time: {icme_date}') return fig sta_icme_df.head() sta_icme_df.head() sta_icme_df.index[0] fig = quality_check_plot(sta_data_cut_df, sta_icme_df.index[1], window_size=dt.timedelta(days=1), cols=cols_of_interest, normalize=False) sta_data_cut_df['2014'].plot(y='BTOTAL') sta_data_cut_detrend_df = sta_data_cut_df - sta_data_cut_df.rolling('1D', center=True).mean() errors = [] with PdfPages('icmes_stereoA_4day_window_detrended.pdf', 'w') as pdf: for date in tqdm(sta_icme_df.index): try: fig = quality_check_plot( sta_data_cut_detrend_df, date, window_size=dt.timedelta(days=2), cols=cols_of_interest, normalize=False ) except Exception as e: print(e) errors.append(date) else: pdf.savefig(fig) plt.close(fig) sta_data_cut_detrend_df.index[0].strftime('%Y-%m-%d_%H:%M:%S') sta_data_cut_detrend_df.index[0].strftime def check_sampling_freq(mag_df, min_sep=None, verbose=False): """Determine the sampling frequency from the data Compute a weighted-average of the sampling frequency present in the time-series data. This is done by taking the rolling difference between consecutive datetime indices and then binning them up using a method of pd.Series objects. Also computes some statistics describing the distribution of sampling frequencies. 
Parameters ---------- mag_df : pd.DataFrame Pandas dataframe containing the magnetometer data min_sep : float Minimum separation between two consecutive observations to be consider usable for discontinuity identification verbose : boolean Specifies information on diverging sampling frequencies Returns ------- avg_sampling_freq : float Weighted average of the sampling frequencies in the dataset stats : dict Some descriptive statistics for the interval """ # Boolean flag for quality of data in interval # Assume its not bad and set to True if it is bad = False # Compute the time difference between consecutive measurements # a_i - a_{i-1} and save the data as dt.timedelta objects # rounded to the nearest milisecond diff_dt = mag_df.index.to_series().diff(1).round('ms') sampling_freqs = diff_dt.value_counts() sampling_freqs /= sampling_freqs.sum() avg_sampling_freq = 0 for t, percentage in sampling_freqs.items(): avg_sampling_freq += t.total_seconds() * percentage # Compute the difference in units of seconds so we can compute the RMS diff_s = np.array( list( map(lambda val: val.total_seconds(), diff_dt) ) ) # Compute the RMS of the observation times to look for gaps in # in the observation period t_rms = np.sqrt( np.nanmean( np.square(diff_s) ) ) # flag that the gaps larger the min_sep. if min_sep is None: min_sep = 5 * t_rms gap_indices = np.where(diff_s > min_sep)[0] n_gaps = len(gap_indices) try: previous_indices = gap_indices - 1 except TypeError as e: # LOG.warning(e) print(e) total_missing = 0 else: interval_durations = mag_df.index[gap_indices] \ - mag_df.index[previous_indices] total_missing = sum(interval_durations.total_seconds()) # Compute the duration of the entire interval and determine the coverage total_duration = (mag_df.index[-1] - mag_df.index[0]).total_seconds() coverage = 1 - total_missing / total_duration if verbose and coverage < 0.5: msg = ( f"\n Observational coverage: {coverage:0.2%}\n" f"Number of data gaps: {n_gaps:0.0f}\n" f"Average sampling rate: {avg_sampling_freq:0.5f}" ) # LOG.warning(msg) print(msg) bad = True stats_data = {} stats_data['average_freq'] = avg_sampling_freq stats_data['max_freq'] = sampling_freqs.index.max().total_seconds() stats_data['min_freq'] = sampling_freqs.index.min().total_seconds() stats_data['n_gaps'] = len(gap_indices) stats_data['starttime_gaps'] = [mag_df.index[previous_indices]] stats_data['total_missing'] = total_missing stats_data['coverage'] = coverage return avg_sampling_freq, stats_data, bad sta_data_cut_df.head() check_sampling_freq(sta_data_cut_df, min_sep=120) # ### STEREO B dataset (not used) cols_of_interest = [ 'BTOTAL', 'BX(RTN)', 'BY(RTN)', 'BZ(RTN)', 'VP_RTN', 'TEMPERATURE', 'BETA', 'Np' ] def read_stereo_datasets(fname): with open(fname, 'r') as fobj: lines = fobj.readlines() colnames = lines[0].split() tmp = lines[1].split() units = [] units.append(' '.join(tmp[:2])) units += tmp[2:] for col, unit in zip(colnames, units): print(col, unit) data = [] index = [] for line in tqdm(lines[2:]): lsplit = line.split() index.append(dt.datetime.strptime(' '.join(lsplit[:2]), '%d-%m-%Y %H:%M:%S.%f')) data.append(list(map(float, lsplit[2:]))) df = pd.DataFrame(data, columns=colnames[1:], index=pd.DatetimeIndex(index)) return df stb_data_df = read_stereo_datasets('../data/stb_l2_magplasma.txt') stb_data_df.head() stb_data_cut_df = stb_data_df[stb_data_df.gt(-1e30)] stb_data_cut_df.info() stb_data_cut_df = stb_data_cut_df.dropna() stb_data_cut_df.info() stb_data_cut_detrend_df = stb_data_cut_df - stb_data_cut_df.rolling('1D', 
center=True).mean() errors = [] with PdfPages('icmes_stereoB_4day_window_detrended.pdf', 'w') as pdf: for date in tqdm(stb_icme_df.index): try: fig = quality_check_plot( stb_data_cut_detrend_df, date, window_size=dt.timedelta(days=2), cols=cols_of_interest, normalize=False ) except Exception as e: print(e) errors.append(date) else: pdf.savefig(fig) plt.close(fig) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # %config InlineBackend.figure_formats = {'png', 'retina'} data_key = pd.read_csv('key.csv') data_key = data_key[data_key['station_nbr'] != 5] data_weather = pd.read_csv('weather.csv') data_weather = data_weather[data_weather['station_nbr'] != 5] ## Station 5번 제거한 나머지 data_train = pd.read_csv('train.csv') df = pd.merge(data_weather, data_key) station_nbr = df['station_nbr'] df.drop('station_nbr', axis=1, inplace=True) df['station_nbr'] = station_nbr df = pd.merge(df, data_train) df.head() # Station 5번을 뺀 나머지 Merge 완성 # - # 'M'과 '-'을 np.nan으로 값을 변경하기 전에, ' T'값을 먼저 snowfall=0.05, preciptotal = 0.005로 변경하자 df['snowfall'][df['snowfall'] == ' T'] = 0.05 df['preciptotal'][df['preciptotal'] == ' T'] = 0.005 df['snowfall'][df['snowfall'] == ' T'], df['preciptotal'][df['preciptotal'] == ' T'] # T 값 변경 완료. 이제, 19개 Station 별로 정리하기 (5번 Station 생략) df_s_1 = df[df['station_nbr'] == 1]; df_s_8 = df[df['station_nbr'] == 8]; df_s_15 = df[df['station_nbr'] == 15] df_s_2 = df[df['station_nbr'] == 2]; df_s_9 = df[df['station_nbr'] == 9]; df_s_16 = df[df['station_nbr'] == 16] df_s_3 = df[df['station_nbr'] == 3]; df_s_10 = df[df['station_nbr'] == 10]; df_s_17 = df[df['station_nbr'] == 17] df_s_4 = df[df['station_nbr'] == 4]; df_s_11 = df[df['station_nbr'] == 11]; df_s_18 = df[df['station_nbr'] == 18] df_s_5 = df[df['station_nbr'] == 5]; df_s_12 = df[df['station_nbr'] == 12]; df_s_19 = df[df['station_nbr'] == 19] df_s_6 = df[df['station_nbr'] == 6]; df_s_13 = df[df['station_nbr'] == 13]; df_s_20 = df[df['station_nbr'] == 20] df_s_7 = df[df['station_nbr'] == 7]; df_s_14 = df[df['station_nbr'] == 14] # + # Each Station tavg 의 M값을 np.nan으로 변경 df_s_1_tavg = df_s_1['tavg'].copy(); df_s_1_tavg = pd.to_numeric(df_s_1_tavg, errors = 'coerce') df_s_2_tavg = df_s_2['tavg'].copy(); df_s_2_tavg = pd.to_numeric(df_s_2_tavg, errors = 'coerce') df_s_3_tavg = df_s_3['tavg'].copy(); df_s_3_tavg = pd.to_numeric(df_s_3_tavg, errors = 'coerce') df_s_4_tavg = df_s_4['tavg'].copy(); df_s_4_tavg = pd.to_numeric(df_s_4_tavg, errors = 'coerce') df_s_5_tavg = df_s_5['tavg'].copy(); df_s_5_tavg = pd.to_numeric(df_s_5_tavg, errors = 'coerce') df_s_6_tavg = df_s_6['tavg'].copy(); df_s_6_tavg = pd.to_numeric(df_s_6_tavg, errors = 'coerce') df_s_7_tavg = df_s_7['tavg'].copy(); df_s_7_tavg = pd.to_numeric(df_s_7_tavg, errors = 'coerce') df_s_8_tavg = df_s_8['tavg'].copy(); df_s_8_tavg = pd.to_numeric(df_s_8_tavg, errors = 'coerce') df_s_9_tavg = df_s_9['tavg'].copy(); df_s_9_tavg = pd.to_numeric(df_s_9_tavg, errors = 'coerce') df_s_10_tavg = df_s_10['tavg'].copy(); df_s_10_tavg = pd.to_numeric(df_s_10_tavg, errors = 'coerce') df_s_11_tavg = df_s_11['tavg'].copy(); df_s_11_tavg = pd.to_numeric(df_s_11_tavg, errors = 'coerce') df_s_12_tavg = df_s_12['tavg'].copy(); df_s_12_tavg = pd.to_numeric(df_s_12_tavg, errors = 'coerce') df_s_13_tavg = df_s_13['tavg'].copy(); df_s_13_tavg = pd.to_numeric(df_s_13_tavg, errors = 'coerce') df_s_14_tavg = 
df_s_14['tavg'].copy(); df_s_14_tavg = pd.to_numeric(df_s_14_tavg, errors = 'coerce') df_s_15_tavg = df_s_15['tavg'].copy(); df_s_15_tavg = pd.to_numeric(df_s_15_tavg, errors = 'coerce') df_s_16_tavg = df_s_16['tavg'].copy(); df_s_16_tavg = pd.to_numeric(df_s_16_tavg, errors = 'coerce') df_s_17_tavg = df_s_17['tavg'].copy(); df_s_17_tavg = pd.to_numeric(df_s_17_tavg, errors = 'coerce') df_s_18_tavg = df_s_18['tavg'].copy(); df_s_18_tavg = pd.to_numeric(df_s_18_tavg, errors = 'coerce') df_s_19_tavg = df_s_19['tavg'].copy(); df_s_19_tavg = pd.to_numeric(df_s_19_tavg, errors = 'coerce') df_s_20_tavg = df_s_20['tavg'].copy(); df_s_20_tavg = pd.to_numeric(df_s_20_tavg, errors = 'coerce') # 각 각의 Station의 Nan 값에, 위에서 구한 각각 station의 평균 값을 넣어서 NaN 값 뺏을 때와 비교할 것. df_s_1_tavg_with_mean = df_s_1_tavg.copy(); df_s_1_tavg_with_mean[df_s_1_tavg_with_mean.isnull()] = df_s_1_tavg.mean() df_s_2_tavg_with_mean = df_s_2_tavg.copy(); df_s_2_tavg_with_mean[df_s_2_tavg_with_mean.isnull()] = df_s_2_tavg.mean() df_s_3_tavg_with_mean = df_s_3_tavg.copy(); df_s_3_tavg_with_mean[df_s_3_tavg_with_mean.isnull()] = df_s_3_tavg.mean() df_s_4_tavg_with_mean = df_s_4_tavg.copy(); df_s_4_tavg_with_mean[df_s_4_tavg_with_mean.isnull()] = df_s_4_tavg.mean() df_s_5_tavg_with_mean = df_s_5_tavg.copy(); df_s_5_tavg_with_mean[df_s_5_tavg_with_mean.isnull()] = df_s_5_tavg.mean() df_s_6_tavg_with_mean = df_s_6_tavg.copy(); df_s_6_tavg_with_mean[df_s_6_tavg_with_mean.isnull()] = df_s_6_tavg.mean() df_s_7_tavg_with_mean = df_s_7_tavg.copy(); df_s_7_tavg_with_mean[df_s_7_tavg_with_mean.isnull()] = df_s_7_tavg.mean() df_s_8_tavg_with_mean = df_s_8_tavg.copy(); df_s_8_tavg_with_mean[df_s_8_tavg_with_mean.isnull()] = df_s_8_tavg.mean() df_s_9_tavg_with_mean = df_s_9_tavg.copy(); df_s_9_tavg_with_mean[df_s_9_tavg_with_mean.isnull()] = df_s_9_tavg.mean() df_s_10_tavg_with_mean = df_s_10_tavg.copy(); df_s_10_tavg_with_mean[df_s_10_tavg_with_mean.isnull()] = df_s_10_tavg.mean() df_s_11_tavg_with_mean = df_s_11_tavg.copy(); df_s_11_tavg_with_mean[df_s_11_tavg_with_mean.isnull()] = df_s_11_tavg.mean() df_s_12_tavg_with_mean = df_s_12_tavg.copy(); df_s_12_tavg_with_mean[df_s_12_tavg_with_mean.isnull()] = df_s_12_tavg.mean() df_s_13_tavg_with_mean = df_s_13_tavg.copy(); df_s_13_tavg_with_mean[df_s_13_tavg_with_mean.isnull()] = df_s_13_tavg.mean() df_s_14_tavg_with_mean = df_s_14_tavg.copy(); df_s_14_tavg_with_mean[df_s_14_tavg_with_mean.isnull()] = df_s_14_tavg.mean() df_s_15_tavg_with_mean = df_s_15_tavg.copy(); df_s_15_tavg_with_mean[df_s_15_tavg_with_mean.isnull()] = df_s_15_tavg.mean() df_s_16_tavg_with_mean = df_s_16_tavg.copy(); df_s_16_tavg_with_mean[df_s_16_tavg_with_mean.isnull()] = df_s_16_tavg.mean() df_s_17_tavg_with_mean = df_s_17_tavg.copy(); df_s_17_tavg_with_mean[df_s_17_tavg_with_mean.isnull()] = df_s_17_tavg.mean() df_s_18_tavg_with_mean = df_s_18_tavg.copy(); df_s_18_tavg_with_mean[df_s_18_tavg_with_mean.isnull()] = df_s_18_tavg.mean() df_s_19_tavg_with_mean = df_s_19_tavg.copy(); df_s_19_tavg_with_mean[df_s_19_tavg_with_mean.isnull()] = df_s_19_tavg.mean() df_s_20_tavg_with_mean = df_s_20_tavg.copy(); df_s_20_tavg_with_mean[df_s_20_tavg_with_mean.isnull()] = df_s_20_tavg.mean() # - # 각 각의 Station 별로 tavg의 값이 np.nan일 때, 즉, missing value를 뺏을 때의 전체 mean 값을 나타낸다. 
# Summary statistics for each station's tavg: mean/std with the NaN rows dropped, then with
# the NaN entries filled by that station's mean.
tavg_series = [df_s_1_tavg, df_s_2_tavg, df_s_3_tavg, df_s_4_tavg, df_s_5_tavg,
               df_s_6_tavg, df_s_7_tavg, df_s_8_tavg, df_s_9_tavg, df_s_10_tavg,
               df_s_11_tavg, df_s_12_tavg, df_s_13_tavg, df_s_14_tavg, df_s_15_tavg,
               df_s_16_tavg, df_s_17_tavg, df_s_18_tavg, df_s_19_tavg, df_s_20_tavg]
tavg_with_mean_series = [df_s_1_tavg_with_mean, df_s_2_tavg_with_mean, df_s_3_tavg_with_mean,
                         df_s_4_tavg_with_mean, df_s_5_tavg_with_mean, df_s_6_tavg_with_mean,
                         df_s_7_tavg_with_mean, df_s_8_tavg_with_mean, df_s_9_tavg_with_mean,
                         df_s_10_tavg_with_mean, df_s_11_tavg_with_mean, df_s_12_tavg_with_mean,
                         df_s_13_tavg_with_mean, df_s_14_tavg_with_mean, df_s_15_tavg_with_mean,
                         df_s_16_tavg_with_mean, df_s_17_tavg_with_mean, df_s_18_tavg_with_mean,
                         df_s_19_tavg_with_mean, df_s_20_tavg_with_mean]

for i, s in enumerate(tavg_series, start=1):
    print(f'#station{i}_without_nan_mean:', round(s.mean(), 4),
          f'#station{i}_without_nan_std:', round(s.std(), 4))

for i, s in enumerate(tavg_with_mean_series, start=1):
    print(f'stat{i}_nan_as_mean:', round(s.mean(), 4),
          f'#stat{i}_nan_as_std:', round(s.std(), 4))

# +
y1 = np.array([s.mean() for s in tavg_series])
y2 = np.array([s.mean() for s in tavg_with_mean_series])

plt.figure(figsize=(10,5))
plt.plot()
x = range(1, 21)
plt.xlabel('Station Number')
plt.ylabel('tavg')
plt.bar(x, y1, color='y')
plt.bar(x, y2, color='r')
plt.show()

#### For tavg:
# What this graph shows is that y1 and y2 are almost identical. Two bar series are drawn
# (one in color 'r', one in color 'y'), but only one of them is visible, which means the
# difference is too small to see by eye, i.e. the difference is extremely small.
# Converting the missing values to np.nan and then substituting each station's mean for
# those np.nan entries barely changes the statistics.
# Therefore, we judge that imputing the missing values with the mean is acceptable for the
# regression analysis here.
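# A quick check of why only one bar color is visible above: by construction, filling the
# NaN entries with the station mean leaves each station's mean exactly unchanged, and it
# only shrinks the sample standard deviation slightly, by a factor of about
# sqrt((n_observed - 1) / (n_total - 1)). A small sketch reusing the series defined above:
for label, s, s_filled in [('station 1', df_s_1_tavg, df_s_1_tavg_with_mean),
                           ('station 20', df_s_20_tavg, df_s_20_tavg_with_mean)]:
    print(label, '| mean:', round(s.mean(), 4), '->', round(s_filled.mean(), 4),
          '| std:', round(s.std(), 4), '->', round(s_filled.std(), 4))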
# - plt.figure(figsize=(12,6)) xticks = range(1,21) plt.plot(xticks, y1, 'ro-', label='stations_without_nan_mean') plt.plot(xticks, y2, 'bd:', label='stations_nan_as_mean') plt.legend(loc=0) plt.show() # + y3 = np.array([df_s_1_tavg.std(), df_s_2_tavg.std(), df_s_3_tavg.std(), df_s_4_tavg.std(), df_s_5_tavg.std(), df_s_6_tavg.std(), df_s_7_tavg.std(), df_s_8_tavg.std(), df_s_9_tavg.std(), df_s_10_tavg.std(), df_s_11_tavg.std(), df_s_12_tavg.std(), df_s_13_tavg.std(), df_s_14_tavg.std(), df_s_15_tavg.std(), df_s_16_tavg.std(), df_s_17_tavg.std(), df_s_18_tavg.std(), df_s_19_tavg.std(), df_s_20_tavg.std()]) y4 = np.array([df_s_1_tavg_with_mean.std(), df_s_2_tavg_with_mean.std(), df_s_3_tavg_with_mean.std(), df_s_4_tavg_with_mean.std() ,df_s_5_tavg_with_mean.std(), df_s_6_tavg_with_mean.std(), df_s_7_tavg_with_mean.std(), df_s_8_tavg_with_mean.std() ,df_s_9_tavg_with_mean.std(), df_s_10_tavg_with_mean.std(), df_s_11_tavg_with_mean.std(), df_s_12_tavg_with_mean.std() ,df_s_13_tavg_with_mean.std(), df_s_14_tavg_with_mean.std(), df_s_15_tavg_with_mean.std(), df_s_16_tavg_with_mean.std() ,df_s_17_tavg_with_mean.std(),df_s_18_tavg_with_mean.std(),df_s_19_tavg_with_mean.std(),df_s_20_tavg_with_mean.std()]) plt.figure(figsize=(10,5)) plt.plot() x = range(1, 21) plt.xlabel('Station Number') plt.ylabel('tavg') plt.bar(x, y3, color='r') plt.bar(x, y4, color='g') plt.show() # - plt.figure(figsize=(12,6)) xticks = range(1,21) plt.plot(xticks, y3, 'ro-', label='stations_without_nan_std') plt.plot(xticks, y4, 'bd:', label='stations_nan_as_std') plt.legend(loc=0) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Ranges for x in range(1,5): print(x) x = list(range(1,5)) x # + def frange(start, stop, step=1.0): i = start while i < stop: yield i i += step for i in frange(0.5, 1.0, 0.1): print(i) # - # ### Comprehensions y = [x for x in range(1,10) if x % 2 == 0] y y = [x in range(1,10)] y [[1 if col == row else 0 for col in range(0,3)] for row in range(0,3)] # + names = [ 'Bob', 'JOHN', 'alice', 'bob', 'ALICE', 'J', 'Bob' ] { name[0].upper() + name[1:].lower() if len(name) > 1 else name.upper() for name in names } # + mcase = {'a':10, 'b': 34, 'A': 7, 'Z':3} mcase_frequency = { k.lower() : mcase.get(k.lower(), 0) + mcase.get(k.upper(), 0) for k in mcase.keys() } mcase_frequency # - # ### Builtin Functions list(filter(lambda e: e % 2 == 0, range(1,10))) list(map(lambda e: e**2, [1,2,3,4,5])) # !conda install -y scikit-learn from sklearn import datasets iris = datasets.load_iris() digits = datasets.load_digits() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- wlist = "book the flight through Houston".split() Lexicon = { "that:0.1 | this:0 | the:0.6 | a:0.3": "Det", "book:0.1 | flight:0.3 | meal:0.05 | money:0.05 | food:0.4 | dinner:0.1": "Noun", "book:0.3 | include:0.3 | prefer:0.4": "Verb", "I:0.4 | she:0.05 | me:0.15 | you:0.4": "Pronoun", "Houston:0.6 | NWA:0.4": "Proper-Noun", "does:0.6 | can:0.4": "Aux", "from:0.3 | to:0.3 | on:0.2 | near:0.15 | through:0.05": "Preposition" } Grammar = { "S -> NP VP": 0.8, "S -> Aux NP VP": 0.15, "S -> VP": 0.05, "NP -> Pronoun": 0.35, "NP -> Proper-Noun": 0.3, "NP -> Det Nominal": 0.2, "NP -> Nominal": 0.15, "Nominal 
-> Noun": 0.75, "Nominal -> Nominal Noun": 0.2, "Nominal -> Nominal PP": 0.05, "VP -> Verb": 0.35, "VP -> Verb NP": 0.2, "VP -> Verb NP PP": 0.1, "VP -> Verb PP": 0.15, "VP -> Verb NP NP": 0.05, "VP -> VP PP": 0.15, "PP -> Preposition NP": 1, } def Lexicon2CNF(Lexicon: dict) -> dict: res = {} for key,value in Lexicon.items(): for item in key.split(" | "): w,p = item.split(":") res[value + " -> " + w] = p return res def Grammar2CNF(Grammar: dict) -> dict: res = {} i = 0 for k,v in Grammar.items(): l,r = k.split(" -> ") rlist = r.split(" ") if len(rlist) == 1: for wf,n in Lexicon.items(): if n == r: res[l + " -> " + wf] = v elif len(rlist) == 2: res[k] = v else: i += 1 newr1 = " ".join(rlist[: 2]) newr2 = "X" + str(i) + " " + rlist[-1] res["X" + str(i) + " -> " + newr1] = 1 res[l + " -> " + newr2] = v miduse = res.copy() # count = {} # for k,v in Grammar.items(): # l,r = k.split(" -> ") # rlist = r.split(" ") # if len(rlist) == 1: # for kk,vv in miduse.items(): # ll,rr = kk.split(" -> ") # if r == ll: # if k in count: # count[k] += 1 # else: # count[k] = 1 for k,v in Grammar.items(): l,r = k.split(" -> ") rlist = r.split(" ") if len(rlist) == 1: for kk,vv in miduse.items(): ll,rr = kk.split(" -> ") if r == ll: res[l + " -> " + rr] = v return res LexiconCNF = Lexicon2CNF(Lexicon) GrammarCNF = Grammar2CNF(Grammar) LexiconCNF GrammarCNF -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- -- ## Haskell-1-2 -- -- https://www.youtube.com/watch?v=ybba5tcOeEY sqDist x y = x^2 + y^2 sqDist 3 4 :t sqDist :i sqDist print $ show $ sqDist 3 4 -- Implementation of "Hello World!" in Haskell main = putStrLn "Hello World!" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ### Demonstration of numpy for data synthesis and manipulation # `numpy` is a numerical computing library in Python. It supports linear algebra operations that are useful in deep learning. In particular, `numpy` is useful for data loading, preparation, synthesis and manipulation. Below are some examples where `numpy` is used in vision and speech. # # Note: Jupyter notebook is also supported in Visual Studio Code. In the Command Palette, type "Create New Jupyter Notenook". import numpy as np import matplotlib.pyplot as plt # Generate a 96x96 pixel grayscale image. Each pixel has a random value. img = np.random.randint(0, 255, size=(96,96), dtype=np.uint8) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() # Let's create a 4x4 chessboard pattern. Image size is stil 96x96 pixel grayscale. # First example is using loops. Not exactly efficient and scalable. # + img = np.ones((96,96), dtype=np.uint8)*255 for i in range(4): img[i*24:(i+1)*24, i*24:(i+1)*24] = 0 for i in range(2,4): img[i*24:(i+1)*24, (i-2)*24:(i-1)*24] = 0 for i in range(0,2): img[i*24:(i+1)*24, (i+2)*24:(i+3)*24] = 0 plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() # - # Second example is more efficient as the operations are parallelizable. 
# + def chessboard(shape): return np.indices(shape).sum(axis=0) % 2 img = chessboard((4,4))*255 img = np.repeat(img, (24), axis=0) img = np.repeat(img, (24), axis=1) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() # - # Another example is reshaping the pattern. For example, we might want to flatten the chessboard pattern. imgs = np.split(img, 4, axis=0) img = np.hstack(imgs) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() # With `matplotlib`, we can load an image from the filesystem into a numpy array. In this example, the image file is in the same directory as this jupyter notebook. # + from matplotlib import image img = image.imread("aki_dog.jpg") plt.imshow(img) plt.show() # - # `numpy` can perform rgb to grayscale conversion. For example, by taking the mean of the rgb components. img = np.mean(img, axis=-1) print(img.shape) plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() # A limitation of `numpy` is it can not easily do image transformations such as shearing, rotating, etc. For that, other libraries are used such `PIL` or `torchvision`. # # `numpy` can also be used to synthesize audio waveforms. For example, let us synthesize a 500Hz sine wave. import numpy as np from IPython.display import Audio import matplotlib.pyplot as plt samples_per_sec = 22050 freq = 500 n_points = samples_per_sec*5 t = np.linspace(0,5,n_points) data = np.sin(2*np.pi*freq*t) Audio(data,rate=samples_per_sec) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py_fundamentos # language: python # name: py_fundamentos # --- # # Listas # Criação da Lista - Com um elemento, notar as aspas lista_mercado = ['ovos, farinha, leite, maças'] # Impressão da Lista print(lista_mercado) # Outra lista - Agora com vários elementos lista_mercado2 = ['ovos','farinha','leite','maça'] # Impressão da Lista 2 print(lista_mercado2) # Impressão do primeiro elemento de cada lista, notar o índice e a saída lista_mercado[0] lista_mercado2[0] # Outra lista - Listas não tem restrição de tipos de dados lista3 = [12,100,'universidade'] # Impressão da lista 3 print(lista3) # Atribuição de valores da lista em uma variável pelo índice item1 = lista3[0] item2 = lista3[1] item3 = lista3[2] # Impressão das variáveis print(item1,item2,item3) # ## Atualização de Valores em Listas # Ao contrário de strings, valores em listas são mutáveis. # Atualizar item da lista lista_mercado2[2] = 'chocolate' print(lista_mercado2) # ## Remoção de um Item na Lista # Deleta-se itens em listas com o comando **del** # Remover o item no índice 3 del lista_mercado2[3] # Checar a remoção do item print(lista_mercado2) # ## Listas Aninhadas (Listas de Listas) # Listas de listas, são matrizes em Python. 
# Criação de uma lista de listas listas = [ [1, 2, 3], [10, 15, 14], [10.1, 8.7, 2.3] ] # Impressão da lista print(listas) # Atribuir um item da lista em uma variável, lembrando que os items desta lista são outras listas a = listas[0] print(a) # Agora podemos extrair um item da lista 'a', que é um valor escalar b = a[0] # Impressão da variável print(b) # ## Operações com Listas # Criação de uma lista aninhada listas = [ [1, 2, 3], [10, 15, 14], [10.1, 8.7, 2.3] ] # Impressão da lista print(listas) # Atribuição da variável a, o primeiro valor da primeira lista # Usando o símbolo de fatiamento # Como listas de listas são matrizes, usa-se as cordenadas linhas x colunas a = listas[0][0] print(a) b = listas[1][2] print(b) # Retornar o item da lista e ainda somar um valor escalar c = listas[0][2] + 10 print(c) d = 10 e = d * listas[2][0] print(e) # ## Contatenação de Listas lista_s1 = [ 34, 32, 56 ] print(lista_s1) lista_s2 = [ 21, 90, 51 ] print(lista_s2) # Contatenação das listas lista_full = lista_s1 + lista_s2 print(lista_full) # ## Operador in # Criação de uma lista lista_teste_op = [ 100, 2, -5, 3.4 ] # + # Pesquisar um valor nos itens de uma lista # Checar se o valor 10 existe na lista print( 10 in lista_teste_op) # + # Checar se o valor 100 existe na lista print(100 in lista_teste_op) # - # ## Funções Build-in # Retornar o tamanho da lista len(lista_teste_op) # Retornar o maior valor da lista max(lista_teste_op) # Agora retornar o menor valor min(lista_teste_op) # Criando uma nova lista lista_mercado = [ 'ovos', 'farinha', 'leite', 'maças' ] # Agora adicionaremos outro item na lista lista_mercado.append('carne') # Checar se o novo valor entrou na lista print(lista_mercado) # + # Adicionar carne novamente na lista, pois serão duas strings diferentes lista_mercado.append('carne') print(lista_mercado) # - # Agora contaremos quantas vezes existem um item aparece na lista lista_mercado.count('carne') # Criação de uma lista vazia a = [] # Checar a lista print(a) # Checar o tipo do objeto type(a) # Adicionar o item 10 a.append(10) print(a) # Adicionar o número 50 a.append(50) print(a) lista_old = [ 1, 2, 5, 10 ] lista_new = [] # Copiar os itens de uma lista para outra for i in lista_old: lista_new.append(i) # Checar os valores copiados print(lista_new) # + # Extendendo uma lista cidades = [ 'Recife', 'Manaus', 'Salvador' ] cidades.extend([ 'Fortaleza', 'Palmas' ]) print(cidades) # - # Pegar o índice de um item cidades.index('Salvador') # Inserir um novo item na lista, em um determinado índice cidades.insert(2, 'Porto Alegre') print(cidades) # Remover um item da lista pelo nome do item cidades.remove('Recife') print(cidades) # Reverter a ordem dos itens cidades.reverse() print(cidades) # + # Ordenar uma lista nums = [ 4, 2, 6, 7, 5 ] print(nums) nums.sort() print(nums) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import logging import matplotlib.pyplot as plt # # %matplotlib notebook from mpl_toolkits.mplot3d import Axes3D import matplotlib as mpl mpl.rcParams['pdf.fonttype'] = 42 mpl.rcParams['ps.fonttype'] = 42 import numpy as np import random import seaborn as sns import pandas as pd from pandas import DataFrame import statistics as stat import os import yaml import glob # WHERE TO SAVE THE FIGURES? 
save_loc_iros = "/home/alberndt/Documents/research/bosch/figures/" save_loc_icaps = "/home/alberndt/Documents/research/bosch/figures/" # - # ### 1 Load Data # + yaml_list = glob.glob("ICAPS/*.yaml") data = {"AGVs": [], "randseed": [], "delay": [], "horizon": []} for file in yaml_list: split_filename = file.split("_") horizon = str(split_filename[-1].split(".")[0]) delay = str(split_filename[-3]) seed = str(split_filename[-5]) AGVs = str(split_filename[-7]) data["AGVs"].append(int(AGVs)) data["randseed"].append(int(seed)) data["delay"].append(int(delay)) data["horizon"].append(int(horizon)) # with open(file, "r") as stream: # try: # yaml_data = yaml.safe_load(stream) # # cumulative_time = yaml_data["results"]["total time"] # data["AGVs"].append(int(AGVs)) # data["randseed"].append(int(seed)) # data["delay"].append(int(delay)) # data["horizon"].append(int(horizon)) # # data["total_time"].append(int(cumulative_time)) # # data["improvement"].append(int(cumulative_time)) # except yaml.YAMLError as exc: # print(exc) df = pd.DataFrame(data, columns=["AGVs", "randseed", "delay", "horizon"]) # - # ### 2 Plot Data # + df_AGV30 = df[df.AGVs == 30] df_AGV40 = df[df.AGVs == 40] df_AGV50 = df[df.AGVs == 50] df_AGV60 = df[df.AGVs == 60] df_AGV70 = df[df.AGVs == 70] # f, axes = plt.subplots(5,1) # i = 0 # for AGVs in [30,40,50,60,70]: # df_AGVs = df[df.AGVs == AGVs] # delay = df_AGVs["delay"] # horizon = df_AGVs["horizon"] # # with sns.axes_style("white"): # sns.jointplot(x=delay, y=horizon, kind="kde", color="m", ax=axes[i]) # i += 1 # - # #### 30 AGVs delay = df_AGV30["delay"] horizon = df_AGV30["horizon"] with sns.axes_style("white"): sns.jointplot(x=delay, y=horizon, kind="kde", color="m"); plt.show() # #### 40 AGVs delay = df_AGV40["delay"] horizon = df_AGV40["horizon"] with sns.axes_style("white"): sns.jointplot(x=delay, y=horizon, kind="kde", color="m"); plt.show() # #### 50 AGVs delay = df_AGV50["delay"] horizon = df_AGV50["horizon"] with sns.axes_style("white"): sns.jointplot(x=delay, y=horizon, kind="kde", color="m"); plt.show() # #### 60 AGVs delay = df_AGV60["delay"] horizon = df_AGV60["horizon"] with sns.axes_style("white"): sns.jointplot(x=delay, y=horizon, kind="kde", color="m"); plt.show() # #### 70 AGVs delay = df_AGV70["delay"] horizon = df_AGV70["horizon"] with sns.axes_style("white"): sns.jointplot(x=delay, y=horizon, kind="kde", color="m", ratio=5, height=7, space=0, n_levels=5); plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (wustl) # language: python # name: wustl # --- # # T81-558: Applications of Deep Neural Networks # **Module 5: Classification and Regression** # * Instructor: [](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) # * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
# # Module Video Material # # Main video lecture: # # * [Part 5.1: Binary Classification](https://www.youtube.com/watch?v=9abzk34U56c&index=15&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) # * [Part 5.2: Multi Class Classification](https://www.youtube.com/watch?v=GkKTWSInNvA&index=16&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) # * [Part 5.3: Regression](https://www.youtube.com/watch?v=uLa-b3JxWAM&index=17&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) # # # Helpful Functions # # You will see these at the top of every module. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. # + import base64 import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests from sklearn import preprocessing # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = f"{name}-{tv}" df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr( target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df[result].values.astype(np.float32), dummies.values.astype(np.float32) # Regression return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return f"{h}:{m:>02}:{s:>05.2f}" # Regression chart. 
def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) # - # # Binary Classification, Classification and Regression # # * **Binary Classification** - Classification between two possibilities (positive and negative). Common in medical testing, does the person have the disease (positive) or not (negative). # * **Classification** - Classification between more than 2. The iris dataset (3-way classification). # * **Regression** - Numeric prediction. How many MPG does a car get? # # In this class session we will look at some visualizations for all three. # # # Toolkit: Visualization Functions # # This class will introduce 3 different visualizations that can be used with the two different classification type neural networks and regression neural networks. # # * **Confusion Matrix** - For any type of classification neural network. # * **ROC Curve** - For binary classification. # * **Lift Curve** - For regression neural networks. # # The code used to produce these visualizations is shown here: # + # %matplotlib inline import matplotlib.pyplot as plt from sklearn.metrics import roc_curve, auc # Plot a confusion matrix. # cm is the confusion matrix, names are the names of the classes. 
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(names)) plt.xticks(tick_marks, names, rotation=45) plt.yticks(tick_marks, names) plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Plot an ROC. pred - the predictions, y - the expected output. def plot_roc(pred,y): fpr, tpr, _ = roc_curve(y, pred) roc_auc = auc(fpr, tpr) plt.figure() plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver Operating Characteristic (ROC)') plt.legend(loc="lower right") plt.show() # - # # Binary Classification # # # Binary classification is used to create a model that classifies between only two classes. These two classes are often called "positive" and "negative". Consider the following program that uses the [wcbreast_wdbc dataset](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/datasets_wcbc.ipynb) to classify if a breast tumor is cancerous (malignant) or not (benign). The iris dataset is not binary, because there are three classes (3 types of iris). # # + import os import pandas as pd from sklearn.model_selection import train_test_split import tensorflow as tf import numpy as np from sklearn import metrics from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint # Set the desired TensorFlow output level for this example tf.logging.set_verbosity(tf.logging.ERROR) path = "./data/" filename = os.path.join(path,"wcbreast_wdbc.csv") df = pd.read_csv(filename,na_values=['NA','?']) # Encode feature vector df.drop('id',axis=1,inplace=True) diagnosis = encode_text_index(df,'diagnosis') num_classes = len(diagnosis) # Create x & y for training # Create the x-side (feature vectors) of the training x, y = to_xy(df,'diagnosis') # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Build network model = Sequential() model.add(Dense(20, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') checkpointer = ModelCheckpoint(filepath="best_weights.hdf5", verbose=0, save_best_only=True) # save best model model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor,checkpointer],verbose=0,epochs=1000) model.load_weights('best_weights.hdf5') # load weights from best model # Measure accuracy pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_compare = np.argmax(y_test,axis=1) score = metrics.accuracy_score(y_compare, pred) print("Final accuracy: {}".format(score)) # - # ### Confusion Matrix # # The confusion matrix is a common visualization for both binary and larger classification problems. Often a model will have difficulty differentiating between two classes. For example, a neural network might be really good at telling the difference between cats and dogs, but not so good at telling the difference between dogs and wolves. 
The following code generates a confusion matrix: # + import numpy as np from sklearn import svm, datasets from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix # Compute confusion matrix cm = confusion_matrix(y_compare, pred) np.set_printoptions(precision=2) print('Confusion matrix, without normalization') print(cm) plt.figure() plot_confusion_matrix(cm, diagnosis) # Normalize the confusion matrix by row (i.e by the number of samples # in each class) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print('Normalized confusion matrix') print(cm_normalized) plt.figure() plot_confusion_matrix(cm_normalized, diagnosis, title='Normalized confusion matrix') plt.show() # - pred # The above two confusion matrixes show the same network. The bottom (normalized) is the type you will normally see. Notice the two labels. The label "B" means benign (no cancer) and the label "M" means malignant (cancer). The left-right (x) axis are the predictions, the top-bottom) are the expected outcomes. A perfect model (that never makes an error) has a dark blue diagonal that runs from top-left to bottom-right. # # To read, consider the top-left square. This square indicates "true labeled" of B and also "predicted label" of B. This is good! The prediction matched the truth. The blueness of this box represents how often "B" is classified correct. It is not darkest blue. This is because the square to the right(which is off the perfect diagonal) has some color. This square indicates truth of "B" but prediction of "M". The white square, at the bottom-left, indicates a true of "M" but predicted of "B". The whiteness indicates this rarely happens. # # Your conclusion from the above chart is that the model sometimes classifies "B" as "M" (a false negative), but never misclassified "M" as "B". Always look for the dark diagonal, this is good! # # ### ROC Curves # # ROC curves can be a bit confusing. However, they are very common. It is important to know how to read them. Even their name is confusing. Do not worry about their name, it comes from electrical engineering (EE). # # Binary classification is common in medical testing. Often you want to diagnose if someone has a disease. This can lead to two types of errors, know as false positives and false negatives: # # * **False Positive** - Your test (neural network) indicated that the patient had the disease; however, the patient did not have the disease. # * **False Negative** - Your test (neural network) indicated that the patient did not have the disease; however, the patient did have the disease. # * **True Positive** - Your test (neural network) correctly identified that the patient had the disease. # * **True Negative** - Your test (neural network) correctly identified that the patient did not have the disease. # # Types of errors: # # ![Type of Error](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_4_errors.png "Type of Error") # # Neural networks classify in terms of probability of it being positive. However, at what probability do you give a positive result? Is the cutoff 50%? 90%? Where you set this cutoff is called the threshold. Anything above the cutoff is positive, anything below is negative. Setting this cutoff allows the model to be more sensitive or specific: # # ![Sensitivity vs. Specificity](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_4_t1vst2.png "Sensitivity vs. 
Specificity") # # The following shows a more sensitive cutoff: # # ![Sensitive Cutoff ](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_4_spec_cut.png "Sensitive Cutoff") # # **An ROC curve measures how good a model is regardless of the cutoff.** The following shows how to read a ROC chart: # # # ![Reading a ROC Chart](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_4_roc.png "Reading a ROC Chart") # # The following code shows an ROC chart for the breast cancer neural network. The area under the curve (AUC) is also an important measure. The larger the AUC, the better. pred = model.predict(x_test) pred = pred[:,1] # Only positive cases plot_roc(pred,y_compare) # # Classification # # We've already seen multi-class classification, with the iris dataset. Confusion matrixes work just fine with 3 classes. The following code generates a confusion matrix for iris. # + import pandas as pd import io import requests import numpy as np import os from sklearn.model_selection import train_test_split from sklearn import metrics from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint path = "./data/" filename = os.path.join(path,"iris.csv") df = pd.read_csv(filename,na_values=['NA','?']) species = encode_text_index(df,"species") x,y = to_xy(df,"species") # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) model = Sequential() model.add(Dense(20, input_dim=x.shape[1], activation='relu')) model.add(Dense(10)) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=0, mode='auto') checkpointer = ModelCheckpoint(filepath="best_weights.hdf5", verbose=0, save_best_only=True) # save best model model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor,checkpointer],verbose=0,epochs=1000) model.load_weights('best_weights.hdf5') # load weights from best model # + import numpy as np from sklearn import svm, datasets from sklearn.metrics import confusion_matrix pred = model.predict(x_test) pred = np.argmax(pred,axis=1) y_test2 = np.argmax(y_test,axis=1) # Compute confusion matrix cm = confusion_matrix(y_test2, pred) np.set_printoptions(precision=2) print('Confusion matrix, without normalization') print(cm) plt.figure() plot_confusion_matrix(cm, species) # Normalize the confusion matrix by row (i.e by the number of samples # in each class) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print('Normalized confusion matrix') print(cm_normalized) plt.figure() plot_confusion_matrix(cm_normalized, species, title='Normalized confusion matrix') plt.show() # - # See the strong diagonal? Iris is easy. See the light blue near the bottom? Sometimes virginica is confused for versicolor. # # Regression # # We've already seen regression with the MPG dataset. Regression uses its own set of visualizations, one of the most common is the lift chart. The following code generates a lift chart. 
# + # %matplotlib inline from matplotlib.pyplot import figure, show from sklearn.model_selection import train_test_split import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore import tensorflow as tf from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping path = "./data/" preprocess = False filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'horsepower') encode_text_dummy(df, 'origin') df.drop('name',1,inplace=True) if preprocess: encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') # Encode to a 2D matrix for training x,y = to_xy(df,'mpg') # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.20, random_state=42) model = Sequential() model.add(Dense(20, input_dim=x.shape[1], activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') checkpointer = ModelCheckpoint(filepath="best_weights.hdf5", verbose=0, save_best_only=True) # save best model model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor,checkpointer],verbose=0,epochs=1000) model.load_weights('best_weights.hdf5') # load weights from best model # Predict and measure RMSE pred = model.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Score (RMSE): {}".format(score)) # Plot the chart chart_regression(pred.flatten(),y_test) # - # To generate a lift chart, perform the following activities: # # * Sort the data by expected output. Plot the blue line above. # * For every point on the x-axis plot the predicted value for that same data point. This is the green line above. # * The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high. # * The y-axis is ranged according to the values predicted. # # Reading a lift chart: # * The expected and predict lines should be close. Notice where one is above the ot other. # * The above chart is the most accurate on lower MPG. 
chart_regression(pred.flatten(),y_test,sort=False)

# # Module 5 Assignment
#
# You can find the first assignment here: [assignment 5](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class5.ipynb)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# ## Create a solar system

# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple


class planet():
    "a planet in our solar system"
    def __init__(self, semimajor, eccentricity):
        self.x = np.zeros(2)     # x, y position
        self.v = np.zeros(2)     # x, y velocities
        self.a_g = np.zeros(2)   # x, y gravitational acceleration
        self.dt = 0.0            # current time step
        self.a = semimajor       # semimajor axis
        self.e = eccentricity    # eccentricity of the orbit
        self.istep = 0           # current integer timestep


# ### Define a dictionary with some constants

solar_system = {"M_sun": 1.0, "G": 39.4784176043574320}


# ### Define some functions for setting circular velocity and acceleration

def solar_system_velocity(p, solar_system):
    G = solar_system["G"]
    M = solar_system["M_sun"]
    r = (p.x[0]**2 + p.x[1]**2)**0.5
    # return the circular velocity v_c = sqrt(G*M/r)
    # units = AU/year
    return (G*M/r)**0.5


def solar_gravitational_acceleration(p, solar_system):
    G = solar_system["G"]
    M = solar_system["M_sun"]
    r = (p.x[0]**2 + p.x[1]**2)**0.5
    # acceleration in AU/yr/yr
    a_grav = -1.0*G*M/r**2
    # find the angle at this position
    if(p.x[0]==0.0):
        if(p.x[1]>0.0):
            theta = 0.5*np.pi
        else:
            theta = 1.5*np.pi
    else:
        theta = np.arctan2(p.x[1], p.x[0])
    # set the x and y components of the acceleration
    p.a_g[0] = a_grav * np.cos(theta)
    p.a_g[1] = a_grav * np.sin(theta)


# ### Compute the timestep

def calc_dt(p):
    # integration tolerance
    ETA_TIME_STEP = 0.0004
    # compute timestep
    eta = ETA_TIME_STEP
    v = (p.v[0]**2 + p.v[1]**2)**0.5
    a = (p.a_g[0]**2 + p.a_g[1]**2)**0.5
    dt = eta * np.fmin(1./np.fabs(v), 1./np.sqrt(a))
    return dt


# ### Define the initial conditions

def SetPlanet(p, i):
    AU_in_km = 1.495979e+8

    # circular velocity
    v_c = 0.0   # circular vel in AU/yr
    v_e = 0.0   # vel at perihelion in AU/yr

    # planet-by-planet initial conditions

    # Mercury
    if(i==0):
        # semi-major axis in AU
        p.a = 57909227.0 / AU_in_km
        # eccentricity of orbit
        p.e = 0.20563593
    # Venus
    elif(i==1):
        # semi-major axis in AU
        p.a = 108209475.0 / AU_in_km
        # eccentricity
        p.e = 0.00677672
    # Earth
    elif(i==2):
        # semi-major axis in AU
        p.a = 1.0
        # eccentricity
        p.e = 0.01671123

    # set remaining properties
    p.t = 0.0
    p.x[0] = p.a*(1.0-p.e)   # start at perihelion on the x axis
    p.x[1] = 0.0

    # get equivalent circular velocity
    v_c = solar_system_velocity(p, solar_system)

    # velocity at perihelion
    v_e = v_c*(1 + p.e)**0.5

    # set velocity
    p.v[0] = 0.0   # no x velocity at perihelion
    p.v[1] = v_e   # y vel at perihelion (counter clockwise)

    # calculate gravitational acceleration from sun
    solar_gravitational_acceleration(p, solar_system)

    # set timestep
    p.dt = calc_dt(p)


# ### Write leapfrog integrator

def x_first_step(x_i, v_i, a_i, dt):
    # x_1/2 = x_0 + 1/2 v_0 Delta_t + 1/4 a_0 Delta_t^2
    return x_i + 0.5*v_i*dt + 0.25*a_i*dt**2


def v_full_step(x_i, v_i, a_ipoh, dt):
    # v_i+1 = v_i + a_i+1/2 Delta_t
    return v_i + a_ipoh*dt


def x_full_step(x_ipoh, v_ipl, a_ipoh, dt):
    # x_i+3/2 = x_i+1/2 + v_i+1 Delta_t
    return x_ipoh + v_ipl*dt

# ### Write a function to save the data to file

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# ### Import
Library import pandas as pd import matplotlib.pyplot as plt import numpy as np import matplotlib.pylab as plt from matplotlib.pylab import rcParams from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt import sklearn import math from sklearn.metrics import mean_squared_error, mean_absolute_error # + df = pd.read_csv("Los_Angeles_International.csv", parse_dates = [0], index_col =0, dayfirst= True, thousands = ",") print(df) # - # ### Splitting the data # + train = df.iloc[:60] test = df.iloc[49:] for_comparing = df.iloc[60:] for_comparing.head(10) # - plt.subplots(figsize=(25, 8)) plt.title("Traffic Volume Graph") plt.xlabel("Date") plt.ylabel("Traffic count") plt.tight_layout() plt.plot(df) seasonal = df.iloc[:36] plt.subplots(figsize=(25, 8)) plt.title("Traffic Volume Graph") plt.xlabel("Date") plt.ylabel("Traffic count") plt.plot(seasonal) # ### Model Fitting def Holt_Winter(Data, Periods, Trend, Seasonal, Title): Holt_Winter = ExponentialSmoothing(Data, seasonal_periods = Periods, trend = Trend, seasonal = Seasonal).fit(use_boxcox=True) plt.subplots(figsize=(25, 8)) plt.title(Title) plt.xlabel("Date") plt.ylabel("Passengers Count") Holt_Winter.fittedvalues.plot(color = "blue", marker = "o") Holt_Winter.forecast(36).plot(style = "--", marker = "o", color = "red", label = "Forecast", legend = True) # ### Full Data # + import warnings warnings.filterwarnings('ignore') Holt_Winter(df, 12, "additive", "additive", "Forcasted Full Data") # - # ### Data Train Holt_Winter(train, 12, "additive", "additive", "Forcasted Train Data") # ### Data Test Holt_Winter(test, 12, "additive", "additive", "Forcasted Test Data") # ### Comparing and Error Checking # + fit_train = ExponentialSmoothing(train, seasonal_periods = 12, trend = "additive", seasonal = "additive").fit(use_boxcox=True) plt.subplots(figsize=(25, 8)) plt.title("Comparing Forcasted Train Data and The Original Data") plt.xlabel("Date") plt.ylabel("Passengers Count") plt.plot(df) fit_train.fittedvalues.plot(color = "green", marker = "o") forcasting = fit_train.forecast(24) forcasting.plot(style = "--", marker = "o", color = "red", label = "Forecast", legend = True) # - # ### Evaluation mse = mean_squared_error(for_comparing, forcasting) print('Mean squared error:', mse) #Calculating MAE mae = mean_absolute_error(for_comparing, forcasting) print('Mean absolute error:', mae) #Calculating RMSE rmse = np.sqrt(mse) print('Root Mean Squared Error:', rmse) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ***IMPORTANT*** # # Make sure you activate the `ml-workshop` virtual environment. 
# # Data Types # - `int` # - `float` # - `list` # - `tuple` # - `dict` # - `str` # - `set` # - `complex` # ## Integers type(1) # Basic math operations: `+`, `-`, `*`, `/` 1 + 1 4 - 7 5 * 2 3 / 2 # ## Floats type(0.5) 0.5 + 1 # Basic math operations: `+`, `-`, `*`, `/` 1.5 + 0.2 # Be careful of numerical round-off 1 / 3 1 / 3 == 0.33 # ## Lists # Basics list construction [1, 2, 3] # Convert somthing to a list range(10) list(range(10)) # Lists can be non-homogeneous [1, 2.5, 'yellow', ['a', 'b', 'c']] # Lists have a length len([1, 2, 3]) # Lists are "iterable" for item in ['hello', 'world']: print(item) # Lists can be can grow my_list: list[int] = [] my_list # Using the `append` method my_list.append(1) my_list # Using `+=` operator overloading my_list += [2] my_list # Lists can be sorted unsorted_list = [6, 9, 1, 5, 23] sorted(unsorted_list) # Lists can be reversed, but the `revered` function returns an `iterator`. reversed([1, 2, 3, 4, 5]) list(reversed([1, 2, 3, 4, 5])) # The elements of lists can be modified x = [1, 2, 3] x[0] = -4 print(x) # Lists can be indexed y = ['a', 'b', 'c', 'd'] y y[0] # Lists can be sliced # # **IMPORTANT:** Python indicies are **left inclusive, right exclusive** # # This means, the left index (before the `:`) is included, but the right index (after the `:`) is **excluded**. y[1:3] # Lists can be indexed and/or sliced in reverse using negative numbers. y[-1] # ## Tuples # Basically, all the rules of lists apply to tuples, but have some differences. # Tuples are constructed using parentheses. (1, 2, 3) # This is not a valid tuple not_a_tuple = (1) type(not_a_tuple) # A trailing comma is required to create a tuple with only *one element* my_tuple = (1,) my_tuple type(my_tuple) # Tuples are immutable # # Use a `try-except` block to catch the error and print it without halting the program. # + import traceback x = (1, 2, 3) try: x[0] = -4 # will raise a TypeError except TypeError as err: traceback.print_exc() # print the error message without stopping the program # - # ## Dictionaries # Dictionaries are constructed with curly braces and colons (`{'key': value}`) my_dict = { 'first_name': 'Mark', 'last_name': 'Todisco' } my_dict # Dictionary `values` are accessed by their `keys` my_dict['first_name'] my_dict['last_name'] # ## Strings # Strings can use single or double quotes, they are equivalent. 
'hello' == "hello" # Strings can be concatenated # + first_name = 'Mark' last_name = 'Todisco' full_name = first_name + ' ' + last_name full_name # - # String can be indexed and sliced full_name[:4] # ## Sets # Sets are also constructed from curly braces, but **do not* use colons: `{elem1, elem2, ...}` my_set = {'a', 'b', 'b', 'c'} my_set # We can test if an element is in a set 'a' in my_set 'd' in my_set # We can do expected mathematical operations on sets new_set = {'a', 'd'} my_set.union(new_set) # ## Complex type(1j) my_complex_number = 3 + 2j my_complex_number # Complex numbers have `real` and `imag` properties my_complex_number.real my_complex_number.imag # The complex conjugate is calulcated using the `conjugate` method my_complex_number.conjugate() # # Iterators # Lists, tuples, and strings are "iteratble" my_list = [1, 2, 3] for item in my_list: print(item) my_tuple = (1, 2, 3) for item in my_tuple: print(item) my_string = '123' for item in my_string: print(item) # # For Loops # Inneficient (matlab) way to loop over an iterable my_data = ['a', 'b', 'c'] my_data for i in range(len(my_data)): print(my_data[i]) # Better way to loop over an iterable for item in my_data: print(item) # # While Loops # + i: int = 0 while i < 5: print(i) i += 1 # - # You can specify an `else` clause to execute once the loop exits # + j: int = 0 while j < 5: print(j) j += 1 else: print('The loop ended') # - # # List Comprehensions # Inefficient method # + even_numbers: list[int] = [] for i in range(20): if i % 2 == 0: even_numbers.append(i) even_numbers # - # This method is faster and requires less code even_numbers = [i for i in range(20) if i % 2 == 0] even_numbers # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Transfer Learning # # A Convolutional Neural Network (CNN) for image classification is made up of multiple layers that extract features, such as edges, corners, etc; and then use a final fully-connected layer to classify objects based on these features. You can visualize this like this: # # Convolution > Pool > Convolution > Pool > Flatten | > Fully-Connected # # Feature Extraction | Classification # # *Transfer Learning* is a technique where you can take an existing trained model and re-use its feature extraction layers, replacing its final classification layer with a fully-connected layer trained on your own custom images. With this technique, your model benefits from the feature extraction training that was performed on the base model (which may have been based on a larger training dataset than you have access to) to build a classification model for your own specific set of object classes. # # How does this help? Well, think of it this way. Suppose you take a professional tennis player and a complete beginner, and try to teach them both how to play raquetball. It's reasonable to assume that the professional tennis player will be easier to train, because many of the underlying skills involved in raquetball are already learned. Similarly, a pre-trained CNN model may be easier to train to classify specific set of objects because it's already learned how to identify the features of common objects, such as edges and corners. Fundamentally, a pre-trained model can be a great way to produce an effective classifier even when you have limited data with which to train it. 
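# As a concrete picture of the "feature extraction | classification" split sketched above,
# here is a minimal Keras model with that shape. It is only an illustration; the input size
# and layer widths are assumptions for the sketch, not the transfer-learning model built
# later in this notebook.

# +
from tensorflow import keras
from tensorflow.keras import layers

sketch_cnn = keras.Sequential([
    # feature extraction: convolution/pooling pairs learn edges, corners and textures
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    # classification: a fully-connected layer maps the extracted features to class scores
    layers.Dense(3, activation='softmax')
])
sketch_cnn.summary()
# -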
# # In this notebook, we'll see how to implement transfer learning for a classification model using Tensorflow. # # ## Import libraries # # First, let's import the Tensorflow libraries we're going to use. # + tags=[] import tensorflow from tensorflow import keras print('TensorFlow version:',tensorflow.__version__) print('Keras version:',keras.__version__) # - # ### Preparing the Data # Before we can train the model, we need to prepare the data. # + tags=[] from tensorflow.keras.preprocessing.image import ImageDataGenerator data_folder = 'data/shapes' # Our source images are 128x128, but the base model we're going to use was trained with 224x224 images pretrained_size = (224,224) batch_size = 15 print("Getting Data...") datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values validation_split=0.3) # hold back 30% of the images for validation print("Preparing training dataset...") train_generator = datagen.flow_from_directory( data_folder, target_size=pretrained_size, # resize to match model expected input batch_size=batch_size, class_mode='categorical', subset='training') # set as training data print("Preparing validation dataset...") validation_generator = datagen.flow_from_directory( data_folder, target_size=pretrained_size, # resize to match model expected input batch_size=batch_size, class_mode='categorical', subset='validation') # set as validation data classnames = list(train_generator.class_indices.keys()) print("class names: ", classnames) # - # ### Downloading a trained model to use as a base # # The ***resnet*** model is an CNN-based image classifier that has been pre-trained using a huge dataset containing thousands of images of many kinds of object. We'll download the trained model, excluding its final linear layer, and freeze the feature extraction layers to retain the trained weights. Then we'll create a fully-connected layer that takes the features extracted by the convolutional layers as an input and generates a prediction probability output for each of our possible classes. # + tags=["outputPrepend"] from tensorflow.keras import applications from tensorflow.keras import Model from tensorflow.keras.layers import Flatten, Dense #Load the base model, not including its final connected layer, and set the input shape to match our images base_model = keras.applications.resnet.ResNet50(weights='imagenet', include_top=False, input_shape=train_generator.image_shape) # Freeze the already-trained layers in the base model for layer in base_model.layers: layer.trainable = False # Create layers for classification of our images x = base_model.output x = Flatten()(x) prediction_layer = Dense(len(classnames), activation='softmax')(x) model = Model(inputs=base_model.input, outputs=prediction_layer) # Compile the model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Now print the full model, which will include the layers of the base model plus the dense layer we added print(model.summary()) # - # ### Training the Model # With the layers of the CNN defined, we're ready to train the top layer using our image data. # + tags=[] # Train the model over 5 epochs num_epochs = 5 history = model.fit( train_generator, steps_per_epoch = train_generator.samples // batch_size, validation_data = validation_generator, validation_steps = validation_generator.samples // batch_size, epochs = num_epochs) # - # ### View the Loss History # # We tracked average training and validation loss for each epoch. 
We can plot these to verify that the loss reduced over the training process and to detect *over-fitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase). # + # %matplotlib inline from matplotlib import pyplot as plt epoch_nums = range(1,num_epochs+1) training_loss = history.history["loss"] validation_loss = history.history["val_loss"] plt.plot(epoch_nums, training_loss) plt.plot(epoch_nums, validation_loss) plt.xlabel('epoch') plt.ylabel('loss') plt.legend(['training', 'validation'], loc='upper right') plt.show() # - # ### Evaluate Model Performance # We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class. # + tags=[] # Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn import numpy as np from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt # %matplotlib inline print("Generating predictions from validation data...") # Get the image and label arrays for the first batch of validation data x_test = validation_generator[0][0] y_test = validation_generator[0][1] # Use the moedl to predict the class class_probabilities = model.predict(x_test) # The model returns a probability value for each class # The one with the highest probability is the predicted class predictions = np.argmax(class_probabilities, axis=1) # The actual labels are hot encoded (e.g. [0 1 0], so get the one with the value 1 true_labels = np.argmax(y_test, axis=1) # Plot the confusion matrix cm = confusion_matrix(true_labels, predictions) plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues) plt.colorbar() tick_marks = np.arange(len(classnames)) plt.xticks(tick_marks, classnames, rotation=85) plt.yticks(tick_marks, classnames) plt.xlabel("Predicted Shape") plt.ylabel("True Shape") plt.show() # - # ### Using the Trained Model # Now that we've trained the model, we can use it to predict the class of an image. 
# + tags=[] from random import randint from PIL import Image import numpy as np def create_image (size, shape): from random import randint import numpy as np from PIL import Image, ImageDraw xy1 = randint(10,40) xy2 = randint(60,100) col = (randint(0,200), randint(0,200), randint(0,200)) img = Image.new("RGB", size, (255, 255, 255)) draw = ImageDraw.Draw(img) if shape == 'circle': draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col) elif shape == 'triangle': draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col) else: # square draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col) del draw return np.array(img) # Create a random test image img = create_image ((224,224), classnames[randint(0, len(classnames)-1)]) plt.imshow(img) # Modify the image data to match the format of the training features img = np.array(Image.fromarray(img)) imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2]) imgfeatures = imgfeatures.astype('float32') imgfeatures /= 255 # Use the classifier to predict the class predicted_class = model.predict(imgfeatures) # Find the class with the highest predicted probability i = np.where(predicted_class == predicted_class.max()) print (classnames[int(i[1])]) # - # ## Learning More # # * [Tensorflow Documentation](https://www.tensorflow.org/tutorials/images/transfer_learning) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt # ## OLS from Scratch def normalEqn(X, y): # Add intercept m = len(X) b = np.ones((m,1)) Xb = np.concatenate([b, X], axis=1) # Normal equation tmp1 = Xb.T.dot(Xb) tmp2 = Xb.T.dot(y) ''' Matrix inverse is slow and introduces unnecessary error Anytime you see the math written as: x = A^-1 * b you instead want: x = np.linalg.solve(A, b) ''' return np.linalg.solve(tmp1, tmp2) # + X = np.array([1,2,3,4,5]).reshape(-1,1) Y = np.array([7,9,12,15,16]) b, a = normalEqn(X, Y) print(b,a) # + plt.scatter(X,Y) _X = np.arange(X.min(), X.max()+1, 1) _Y = a*_X+b plt.plot(_X, _Y, '-r') # - # ## OLS with Statsmodels import statsmodels.api as sm def ols(X, y): Xb = sm.add_constant(X) est = sm.OLS(Y, Xb).fit() return est.params ols(X, Y) # --- # # ## Multiple regression # # ### Load data # !wget http://cda.psych.uiuc.edu/coursefiles/st01/carsmall.mat # + from scipy.io import loadmat mat = loadmat('carsmall.mat') mat.keys() # + df = pd.DataFrame() for k in mat.keys(): if k.startswith('__'): continue df[k] = mat[k] df.head() # - df.dtypes df.shape df.dropna(subset=['Weight', 'Horsepower', 'MPG'], inplace=True) df.shape # + X = df[['Weight', 'Horsepower']].values Y = df['MPG'].values print(X.shape, Y.shape) # - # ### Fit a = normalEqn(X, Y) a a = ols(X, Y) a # ### Compute r-squared # + # Add intercept m = len(X) b = np.ones((m,1)) Xb = np.concatenate([b, X], axis=1) # Prediction predictedY = np.dot(Xb, a) # calculate the r-squared SSres = Y - predictedY SStot = Y - Y.mean() rSquared = 1 - (SSres.dot(SSres) / SStot.dot(SStot)) print("r-squared: ", rSquared) # - # ### Plot # + x1fit = np.arange(X[:,0].min(), X[:,0].max()+1, 100) x2fit = np.arange(X[:,1].min(), X[:,1].max()+1, 10) X1FIT,X2FIT = np.meshgrid(x1fit, x2fit) YFIT = a[0] + a[1]*X1FIT + a[2]*X2FIT fig = plt.figure(figsize=(16,8)) ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:, 0], X[:, 1], Y, color='r', label='Actual BP') ax.plot_surface(X1FIT,X2FIT,YFIT) ''' 
print('ax.azim {}'.format(ax.azim)) print('ax.elev {}'.format(ax.elev)) ''' ax.view_init(10,-60) ax.set_xlabel('Weight') ax.set_ylabel('Horsepower') ax.set_zlabel('MPG') plt.subplots_adjust(left=0, right=1, top=1, bottom=0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # HyperLearning AI - Introduction to Python # An introductory course to the Python 3 programming language, with a curriculum aligned to the Certified Associate in Python Programming (PCAP) examination syllabus (PCAP-31-02).
    # https://knowledgebase.hyperlearning.ai/courses/introduction-to-python # # # ## 07. Functions and Modules Part 2 # https://knowledgebase.hyperlearning.ai/en/courses/introduction-to-python/modules/7/functions-and-modules-part-2 # # In this module we will formally introduce Python modules and packages, including how to write and use Python modules, how to construct and distribute Python packages, how to hide Python module entities, how to document Python modules, and Python hashbangs. Specifically we will cover: # # * **Python Modules** - importing modules, qualifying entities, initialising modules, writing and using modules, the name variable, Python hashbangs, and module documentation # * **Python Packages** - creating packages, packages vs directories, the init file, and hiding module entities # ### 1. Modules # #### 1.1. Importing Modules # Import our numbertools module import numbertools # Call functions from the module print(numbertools.is_int(0.5)) print(numbertools.is_even(1_000_002)) print(numbertools.is_prime(277)) print(numbertools.is_fibonacci(12)) print(numbertools.is_perfect_square(1444)) # Access variables from the module print(numbertools.mobius_phi) # + # Create an alias for a module import numbertools as nt # Call module entities qualified with the module alias print(nt.is_perfect_square(9801)) print(nt.mobius_phi) # - # List all the function and variable names in a given module print(dir(nt)) print(dir(math)) # + # Import specific entities from math import cos, pi, radians, sin, tan # Use the specifically imported entities print(round(cos(math.radians(90)), 0)) print(round(sin(math.radians(90)), 0)) print(round(tan(math.radians(45)), 0)) print(round(pi, 10)) # - # #### 1.2. Module Search Path # + # Examine and modify sys.path import sys # Examine sys.path print(sys.path) # Append a location to sys.path sys.path.append('/foo/bar/code') print(sys.path) # - # #### 1.5. Module Documentation # Access and display a module's docstring print(numbertools.__doc__) # Access and display a built-in module's docstring print(math.__doc__) # Access and display a function's docstring print(numbertools.is_fibonacci.__doc__) # Access and display a built-in function's docstring print(sin.__doc__) # Display help on the numbertools module help(numbertools) # Display help on a specific function help(numbertools.is_perfect_square) # Display help on a specifc built-in function help(len) # Display help on the built-in math module help(math) # #### 1.7. PYC Files # Generate a byte code file of our numbertools module import py_compile py_compile.compile('numbertools.py') # Generate a byte code file of our numbertools module in a custom location py_compile.compile('numbertools.py', cfile='/tmp/numbertools.pyc') # ### 2. Packages # #### 2.1. Importing Packages # Import a module from a nested package in the Pandas library import pandas.api.types as types print(types.is_integer_dtype(str)) # Alternatively use from to import a specific module from pandas.api import types print(types.is_integer_dtype(str)) # Alternatively use from to import a specific function from a specific module from pandas.api.types import is_integer_dtype print(is_integer_dtype(str)) # #### 2.2.8. 
Install from Local Wheel # + # Import our generated distribution package myutils import myutils # Alternatively import a specific module from a specific package import myutils.collections.dictutils as dictutils # + # Call one of our myutils bespoke functions my_dict = { 1: ['python', 3.8], 2: ['java', 11], 3: ['scala', 2.13] } # Convert a dictionary to a Pandas DataFrame using our user-defined dictutils.convert_to_dataframe() function df = dictutils.convert_to_dataframe(my_dict, ['Language', 'Version']) df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''.venv'': poetry)' # language: python # name: python3 # --- # # A simple notebook to demonstrate that the hello world is properly configured # # Goals: # - Train a simple neural network on the MNIST dataset. # - Log the training progress to Weight and Biases. # %load_ext autoreload # %autoreload 2 # + import os from datetime import datetime import wandb import torch from pathlib import Path import pytorch_lightning as pl from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping from inria.helloworld.models import HelloWorldMlp from inria.helloworld.datamodules import MnistDataModule from inria.helloworld.trainer_args import TrainerArgs # - # Let's first check if we have a GPU. print(f"GPU available: {torch.cuda.is_available()}") DATA_DIR = Path.cwd().parent / "data" MODELS_DIR = Path.cwd().parent / "models" MODEL_CHECKPOINT_DIR = MODELS_DIR / "checkpoints" BEST_MODEL_DIR = MODELS_DIR / "best_model" # + mnist = MnistDataModule(DATA_DIR) mnist.prepare_data() mnist.setup() # grab samples to log predictions on samples = next(iter(mnist.val_dataloader())) # + ## use a particular wandb entity # os.environ['WANDB_ENTITY'] = "other-entity" # - WANDB_PROJECT = "inria-helloworld-mnist" # If you have followed the instructions on `README.md`, wandb should be transparent to set up. wandb.login() # If we are resuming training we want to check what runs are available in WandB, so we can resume it. 
try: for run in wandb.Api().runs(path=os.environ["WANDB_ENTITY"] + "/" + WANDB_PROJECT): when = ( datetime.fromtimestamp(run.summary["_timestamp"]).strftime("%m/%d/%Y, %H:%M:%S") if "_timestamp" in run.summary else "--" ) print(f"Run id: {run.id} '{run.name}' {when} ({run.state}): {run.url}") except ValueError as e: print(str(e)) RESUME_RUN_ID = None # + # RESUME_RUN_ID = '2em89whs' # write here the run that you want to continue # - wandb.init(dir=MODELS_DIR, project=WANDB_PROJECT, resume="allow", id=RESUME_RUN_ID) if wandb.run.resumed: print("Resumming training from.") model = torch.load(wandb.restore("model.ckpt").name) # setup model else: model = HelloWorldMlp(in_dims=(1, 28, 28)) best_models_checkpoint_callback = ModelCheckpoint( dirpath=BEST_MODEL_DIR, save_top_k=1, verbose=False, monitor="valid/loss_epoch", mode="min" ) resume_checkpoint_callback = ModelCheckpoint(dirpath=MODEL_CHECKPOINT_DIR, save_last=True, save_on_train_epoch_end=True) early_stop_callback = EarlyStopping(monitor="valid/loss_epoch", min_delta=0.01, patience=3, verbose=False, mode="min") wandb_logger = WandbLogger(save_dir=MODELS_DIR) args = TrainerArgs( max_epochs=1000, log_every_n_steps=10, logger=wandb_logger, callbacks=[best_models_checkpoint_callback, resume_checkpoint_callback, early_stop_callback], ) args trainer = pl.Trainer(**args.to_dict()) # passing training args if wandb.run.resumed and (MODEL_CHECKPOINT_DIR / "last.ckpt").exists(): print("Resuming training from last checkpoint.") trainer.fit(ckpt_path=str(MODEL_CHECKPOINT_DIR / "last.ckpt")) else: print("Starting training from scratch.") trainer.fit(model, mnist) # evaluate the model on a test set trainer.test(datamodule=mnist, ckpt_path="best") wandb.finish() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) # # Open in Callysto # # Story Plot Generator # # Click on the cell below then click the `▶Run` button above to generate a story plot idea. # # You can also edit the lists to add or remove items. Make sure each item is in `'`quotes`'` and has a comma after it. # # Every time you `▶Run` the cell it will generate a new story plot idea for you. 
# + genre = [ 'science fiction', 'fantasy', 'mystery', 'drama', 'comedy', 'action',] protagonist = [ 'teacher', 'painter', 'zookeeper', 'inventor', 'doctor', 'author', 'truck driver', 'cashier',] adjective = [ 'happy', 'tall', 'funny', 'young', 'clumsy', 'friendly', 'honest', 'frightened',] location = [ 'in the desert', 'on the moon', 'in a forest', 'in an office building', 'under water', 'on an island', 'at an elementary school', 'in a church', 'at the beach', 'on an airplane',] activity = [ 'teaching kindergarten', 'saving the world', 'competing in a reality TV show', 'playing in a rock band', 'discovering buried treasure', 'developing super powers', 'rescuing lost puppies', 'becoming the mayor',] from random import choice print('A',choice(genre),'story about a',choice(adjective),choice(protagonist),choice(location),choice(activity)+'.') # - # [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import collections import glob import itertools import json import operator import os import re import _jsonnet import IPython.display import natsort import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pandas as pd # #%matplotlib inline #plt.rcParams['figure.figsize'] = [20, 20] sns.set(color_codes=True) from seq2struct.utils import evaluation from seq2struct.utils import registry from seq2struct import datasets # - os.chdir('..') # # Procedure # 1. Identify all checkpoints that we had selected # - Snapshots that maximize sentence BLEU, exact match # - Idiom models: across all contains-hole/none, cov-examples/cov-xent, 10/20/40/80 # - Baseline: across all evaluated steps for one model # 2. Determine when and where idioms were output while decoding # 3. Compute teacher-forced precision/recall # 4. 
When the baseline gets it wrong and the model gets it right # # Finding checkpoints for idiom models def find_idiom_checkpoints(): accuracy_per_run = collections.defaultdict(dict) all_metrics = [] metric_types = set() rows = [] for d in sorted(glob.glob('logdirs/20190201-hs-allmatches-anysplit-multimean/*')): exp_name = os.path.basename(d) exp_vars = re.match('filt-([^_]+)_st-([^_]+)_nt-([^_]+)', exp_name).groups() infer_paths = glob.glob(os.path.join(d, 'infer-val-step*-bs1.jsonl')) all_scores = [] for infer_path in infer_paths: step = int(re.search('step(\d+)', infer_path).group(1)) _, metrics = evaluation.compute_metrics( 'configs/hearthstone/nl2code.jsonnet', '', 'val', infer_path) all_scores.append((step, metrics['exact match'])) all_metrics.append((exp_name, step, metrics)) metric_types.update(metrics.keys()) all_scores.sort(key=operator.itemgetter(0)) sorted_scores = sorted(all_scores, reverse=True, key=operator.itemgetter(1)) rows.append(exp_vars + (len(all_scores),) + (sorted_scores[0] if sorted_scores else (-1, -1))) accuracy_per_run[exp_name] = { 'x': [s[0] for s in all_scores], 'all': [s[1] for s in all_scores], } print(d) metric_types = tuple(sorted(metric_types)) df = pd.DataFrame(rows, columns=('filt', 'cov', 'nt', 'num steps eval', 'step', 'exact match')) flat_df = pd.DataFrame( [(exp_name, step) + tuple(metrics.get(t) for t in metric_types) for exp_name, step, metrics in all_metrics], columns=('exp_name', 'step') + metric_types) return flat_df idiom_df = find_idiom_checkpoints() print(idiom_df.loc[idiom_df['corpus BLEU'].idxmax()]) print(idiom_df.loc[idiom_df['exact match'].idxmax()]) # # Finding checkpoints for baseline models def find_baseline_checkpoints(): accuracy_per_run = collections.defaultdict(dict) all_metrics = [] metric_types = set() rows = [] for d in sorted(glob.glob('logdirs/20181231-nl2code-hearthstone-fef2c5b//*')): exp_name = os.path.basename(d) exp_vars = re.match('att([^_]+)', exp_name).groups() infer_paths = glob.glob(os.path.join(d, 'infer-val-step*-bs1.jsonl')) all_scores = [] for infer_path in infer_paths: step = int(re.search('step(\d+)', infer_path).group(1)) _, metrics = evaluation.compute_metrics( 'configs/hearthstone/nl2code.jsonnet', '', 'val', infer_path) all_scores.append((step, metrics['exact match'])) all_metrics.append((exp_name, step, metrics)) metric_types.update(metrics.keys()) all_scores.sort(key=operator.itemgetter(0)) sorted_scores = sorted(all_scores, reverse=True, key=operator.itemgetter(1)) rows.append(exp_vars + (len(all_scores),) + (sorted_scores[0] if sorted_scores else (-1, -1))) accuracy_per_run[exp_name] = { 'x': [s[0] for s in all_scores], 'all': [s[1] for s in all_scores], } print(d) metric_types = tuple(sorted(metric_types)) df = pd.DataFrame(rows, columns=('att', 'num steps eval', 'step', 'exact match')) flat_df = pd.DataFrame( [(exp_name, step) + tuple(metrics.get(t) for t in metric_types) for exp_name, step, metrics in all_metrics], columns=('exp_name', 'step') + metric_types) return flat_df baseline_df = find_baseline_checkpoints() print(baseline_df.loc[baseline_df['corpus BLEU'].idxmax()]) print(baseline_df.loc[baseline_df['exact match'].idxmax()]) # # Loading ground truth data # + import ast import astor val_data = registry.construct( 'dataset', json.loads(_jsonnet.evaluate_file('configs/hearthstone/nl2code.jsonnet'))['data']['val']) test_data = registry.construct( 'dataset', json.loads(_jsonnet.evaluate_file('configs/hearthstone/nl2code.jsonnet'))['data']['test']) normalized_gold_code_val = 
[astor.to_source(ast.parse(item.code)) for item in val_data] normalized_gold_code_test = [astor.to_source(ast.parse(item.code)) for item in test_data] # - # # Determining when and where idioms were output during decoding # + idiom_exact_match_max = idiom_df.loc[idiom_df['exact match'].idxmax()] idiom_exact_match_infer_val_history_path = \ 'logdirs/20190201-hs-allmatches-anysplit-multimean/{}/infer-{}-step{:05d}-bs1-with-history.jsonl'.format( idiom_exact_match_max['exp_name'], 'val', idiom_exact_match_max['step']) # !CUDA_VISIBLE_DEVICES= python infer.py \ # --config configs/hearthstone-idioms/nl2code-0201-allmatches-anysplit-multimean.jsonnet \ # --logdir logdirs/20190201-hs-allmatches-anysplit-multimean/filt-contains-hole_st-cov-examples_nt-80 \ # --config-args "{filt: 'contains-hole', st: 'cov-examples', nt: 80}" \ # --output-history \ # --output __LOGDIR__/infer-val-step02200-bs1-with-history.jsonl --step 2200 --section val --beam-size 1 idiom_exact_match_infer_test_history_path = \ 'logdirs/20190201-hs-allmatches-anysplit-multimean/{}/infer-{}-step{:05d}-bs1-with-history.jsonl'.format( idiom_exact_match_max['exp_name'], 'test', idiom_exact_match_max['step']) # !CUDA_VISIBLE_DEVICES= python infer.py \ # --config configs/hearthstone-idioms/nl2code-0201-allmatches-anysplit-multimean.jsonnet \ # --logdir logdirs/20190201-hs-allmatches-anysplit-multimean/filt-contains-hole_st-cov-examples_nt-80 \ # --config-args "{filt: 'contains-hole', st: 'cov-examples', nt: 80}" \ # --output-history \ # --output __LOGDIR__/infer-test-step02200-bs1-with-history.jsonl --step 2200 --section test --beam-size 1 # + all_rules = json.load( open('data/hearthstone-idioms-20190201/all-matches-trees-anysplit/filt-contains-hole_st-cov-examples_nt-80/nl2code/grammar_rules.json') )['all_rules'] def count_nodes(tree): queue = collections.deque([tree]) count = 0 while queue: node = queue.pop() count += 1 if isinstance(node, dict): for k, v in node.items(): if k == '_type': continue if isinstance(v, (list, tuple)): queue.extend(v) else: queue.append(v) return count def analyze_idiom_usage(infer_history_path, normalized_gold_code): inferred_all = [json.loads(line) for line in open(infer_history_path)] exact_match_all = [ normalized_gold_code[i] == example['beams'][0]['inferred_code'] if example['beams'] else False for i, example in enumerate(inferred_all) ] decoding_history_all = {} for example in inferred_all: decoding_history = [] decoding_history_all[example['index']] = decoding_history if not example['beams']: continue for choice in example['beams'][0]['choice_history']: if isinstance(choice, int): decoding_history.append(all_rules[choice]) else: decoding_history.append(choice) # Number of idioms used per program num_idioms_used_all = {} for i, history in decoding_history_all.items(): counter = collections.Counter() for choice in history: if isinstance(choice, list) and isinstance(choice[1], str) and re.match('Template\d+', choice[1]): counter[tuple(choice)] += 1 num_idioms_used_all[i] = counter # Size of each program # 1. Number of characters # 2. Number of lines # 3. 
Number of nodes in tree num_characters_all = {example['index']: len(example['beams'][0]['inferred_code']) if example['beams'] else 0 for example in inferred_all} num_lines_all = {example['index']: example['beams'][0]['inferred_code'].count('\n') if example['beams'] else 0 for example in inferred_all} num_nodes_all = {example['index']: count_nodes(example['beams'][0]['model_output']) if example['beams'] else 0 for example in inferred_all} # Dataframe idiom_usage_df = pd.DataFrame(collections.OrderedDict(( ('Number of idioms used', {k: sum(v.values()) for k, v in num_idioms_used_all.items()}), ('Number of characters', num_characters_all), ('Number of lines', num_lines_all), ('Number of AST nodes', num_nodes_all), ('Exact match', dict(enumerate(exact_match_all))), ))) exact_match_idiom_usage_df = idiom_usage_df[idiom_usage_df['Exact match']] fig, ax = plt.subplots() sns.distplot(idiom_usage_df['Number of idioms used'], kde=False, ax=ax, label='All') fig, ax = plt.subplots() sns.distplot(exact_match_idiom_usage_df['Number of idioms used'], kde=False, ax=ax, bins=5, label='Exact match') for x, y in ( ('Number of characters', 'Number of idioms used'), ('Number of AST nodes', 'Number of idioms used') ): # fig, ax = plt.subplts # plt.plot(x, y, data=idiom_usage_df) jg = sns.jointplot(x=x, y=y, data=idiom_usage_df) jg.fig.suptitle('All') jg = sns.jointplot(x=x, y=y, data=exact_match_idiom_usage_df) jg.fig.suptitle('Exactly matched') # sns.jointplot(x='Number of characters', y='Number of idioms used', data=idiom_usage_df), # sns.jointplot(x='Number of characters', y='Number of idioms used', data=exact_match_idiom_usage_df), # sns.jointplot(x='Number of lines', y='Number of idioms used', data=idiom_usage_df), # sns.jointplot(x='Number of lines', y='Number of idioms used', data=exact_match_idiom_usage_df), # sns.jointplot(x='Number of AST nodes', y='Number of idioms used', data=idiom_usage_df), # sns.jointplot(x='Number of AST nodes', y='Number of idioms used', data=exact_match_idiom_usage_df), # ]: # - analyze_idiom_usage(idiom_exact_match_infer_val_history_path, normalized_gold_code_val) analyze_idiom_usage(idiom_exact_match_infer_test_history_path, normalized_gold_code_test) # # Teacher-forced precision and recall # + def analyze_teacher_forced_pr(report, templates, accuracy_ks=(1, 2), precision_ks=(1, 2), recall_ks=(1, 2)): # Count how often template_match_counts = collections.defaultdict(int) template_choice_ranks = { 'all': collections.defaultdict(list), 'templates only': collections.defaultdict(list), } template_valid_choice_ranks = { 'all': collections.defaultdict(list), 'templates only': collections.defaultdict(list), } min_valid_ranks = [] for item in report: for entry in item['history']: if not isinstance(entry['choices'][0], str): continue all_ranks = {} template_only_ranks = {} template_only_i = 0 for i, choice in enumerate(entry['choices']): all_ranks[choice] = i template_only_ranks[choice] = template_only_i if not re.match('Template(\d+).*', choice): template_only_i += 1 # For overall top-k accuracy min_valid_rank = min(all_ranks[choice] for choice in entry['valid_choices']) min_valid_ranks.append(min_valid_rank) # For precision # times that choice appeared at rank k, when it was valid (template_valid_choice_ranks) # / times that choice appeared at rank k (tp + fp), whether or not valid (template_choice_ranks) for choice in entry['choices']: m = re.match('Template(\d+).*', choice) if not m: continue template_id = int(m.group(1)) 
template_choice_ranks['all'][template_id].append(all_ranks[choice]) template_choice_ranks['templates only'][template_id].append(template_only_ranks[choice]) # For recall # times that choice appeared at rank k, when it was valid (template_valid_choice_ranks) # / times that choice was valid (tp + fn) for choice in entry['valid_choices']: m = re.match('Template(\d+).*', choice) if not m: continue template_id = int(m.group(1)) template_match_counts[template_id] += 1 # Determine its rank template_valid_choice_ranks['all'][template_id].append(all_ranks[choice]) # Determine its rank, excluding other templates template_valid_choice_ranks['templates only'][template_id].append(template_only_ranks[choice]) min_valid_ranks = np.array(min_valid_ranks) # Top-k accuracy: there exists a valid choice such that its rank ≤ k top_k_accuracy = { k: np.sum(min_valid_ranks < k) / len(min_valid_ranks) for k in accuracy_ks } # Precision: # Among the # of times that a given template has rank ≤ k, # how often it is a valid choice top_k_precision = { type_name: { k: {i: np.sum(np.array(template_valid_choice_ranks[type_name][i]) < k) / np.sum(np.array(template_choice_ranks[type_name][i]) < k) for i in template_match_counts.keys()} for k in precision_ks } for type_name in template_valid_choice_ranks } # Recall: # Among the # of times that a given template is a valid choice, # how often it is a choice with rank ≤ k top_k_recall = { type_name: { k: {i: np.sum(np.array(ranks) < k) / len(ranks) for i, ranks in ranks_of_type.items()} for k in recall_ks } for type_name, ranks_of_type in template_valid_choice_ranks.items()} accuracy_df = pd.DataFrame({ 'Accuracy @ {}'.format(k): [top_k_accuracy[k]] for k in accuracy_ks }) pr_df = pd.DataFrame({ 'Head': {t['id']: t['idiom'][0] for t in templates}, 'Matches': template_match_counts, **{ 'Precision @ {} {}'.format(k, type_name): top_k_precision[type_name][k] for type_name in top_k_precision.keys() for k in precision_ks }, **{ 'Recall @ {} {}'.format(k, type_name): top_k_recall[type_name][k] for type_name in top_k_recall.keys() for k in recall_ks } }) return accuracy_df, pr_df def analyze_anysplit_one(name, section): report = [json.loads(line) for line in open('../logdirs/20190201-hs-allmatches-anysplit/{}/debug-{}-step2600.jsonl'.format(name, section))] templates = json.load(open('../data/hearthstone-idioms-20190201/all-matches-trees-anysplit/{}/templates.json'.format(name))) return analyze(report, templates) # + idiom_exact_match_debug_val_path = \ 'logdirs/20190201-hs-allmatches-anysplit-multimean/{}/debug-{}-step{:05d}.jsonl'.format( idiom_exact_match_max['exp_name'], 'val', idiom_exact_match_max['step']) # !CUDA_VISIBLE_DEVICES= python infer.py \ # --mode debug \ # --config configs/hearthstone-idioms/nl2code-0201-allmatches-anysplit-multimean.jsonnet \ # --logdir logdirs/20190201-hs-allmatches-anysplit-multimean/filt-contains-hole_st-cov-examples_nt-80 \ # --config-args "{filt: 'contains-hole', st: 'cov-examples', nt: 80}" \ # --output __LOGDIR__/debug-val-step02200.jsonl --step 2200 --section val --beam-size 1 idiom_exact_match_debug_test_path = \ 'logdirs/20190201-hs-allmatches-anysplit-multimean/{}/debug-{}-step{:05d}.jsonl'.format( idiom_exact_match_max['exp_name'], 'test', idiom_exact_match_max['step']) # !CUDA_VISIBLE_DEVICES= python infer.py \ # --mode debug \ # --config configs/hearthstone-idioms/nl2code-0201-allmatches-anysplit-multimean.jsonnet \ # --logdir logdirs/20190201-hs-allmatches-anysplit-multimean/filt-contains-hole_st-cov-examples_nt-80 \ # --config-args 
"{filt: 'contains-hole', st: 'cov-examples', nt: 80}" \ # --output-history \ # --output __LOGDIR__/debug-test-step02200.jsonl --step 2200 --section test --beam-size 1 # - templates = json.load( open('data/hearthstone-idioms-20190201/all-matches-trees-anysplit/{}/templates.json'.format(idiom_exact_match_max['exp_name']))) accuracy_df_val, pr_df_val = analyze_teacher_forced_pr([json.loads(line) for line in open(idiom_exact_match_debug_val_path)], templates) accuracy_df_val pr_df_val accuracy_df_test, pr_df_test = analyze_teacher_forced_pr([json.loads(line) for line in open(idiom_exact_match_debug_test_path)], templates) accuracy_df_test pr_df_test # # Comparing versus baseline def load_inferred(infer_history_path, normalized_gold_code): inferred_all = [json.loads(line) for line in open(infer_history_path)] exact_match_all = [ normalized_gold_code[i] == example['beams'][0]['inferred_code'] if example['beams'] else False for i, example in enumerate(inferred_all) ] return inferred_all, exact_match_all print(baseline_df.loc[baseline_df['exact match'].idxmax()]) # + # !CUDA_VISIBLE_DEVICES= python infer.py \ # --config configs/hearthstone/nl2code.jsonnet \ # --logdir logdirs/20181231-nl2code-hearthstone-fef2c5b/att1 \ # --output __LOGDIR__/infer-test-step02300-bs1.jsonl --step 2300 --section test --beam-size 1 baseline_exact_match_infer_test_path = 'logdirs/20181231-nl2code-hearthstone-fef2c5b/att1/infer-test-step02300-bs1.jsonl' # - inferred_test, exact_match_test = load_inferred(idiom_exact_match_infer_test_history_path, normalized_gold_code_test) baseline_inferred_test, baseline_exact_match_test = load_inferred(baseline_exact_match_infer_test_path, normalized_gold_code_test) comparison_df = pd.DataFrame({'Idioms': exact_match_test, 'Baseline': baseline_exact_match_test}) comparison_df[comparison_df.Baseline != comparison_df.Idioms] from seq2struct import grammars idiom_config = json.loads(_jsonnet.evaluate_file( 'configs/hearthstone-idioms/nl2code-0201-allmatches-anysplit-multimean.jsonnet', tla_codes={'args': "{filt: 'contains-hole', st: 'cov-examples', nt: 80}"})) idiom_grammar = registry.construct('grammar', idiom_config['model']['decoder_preproc']['grammar']) import pprint for i in comparison_df[comparison_df.Baseline != comparison_df.Idioms].index: print('#{}\nGold:\n{}\nIdioms:\n{}\nBaseline:\n{}'.format( i, normalized_gold_code_test[i], inferred_test[i]['beams'][0]['inferred_code'], baseline_inferred_test[i]['beams'][0]['inferred_code'])) templates_used = set() actions = [] for choice in inferred_test[i]['beams'][0]['choice_history']: if isinstance(choice, int): rule = all_rules[choice] actions.append(rule) if isinstance(rule[1], str) and re.match('Template\d+', rule[1]): templates_used.add(rule[1]) else: actions.append(choice) if i == 36: print('Idiom actions:\n{}\n'.format(pprint.pformat(actions))) print('Templates used:') for t in natsort.natsorted(templates_used): cons = idiom_grammar.ast_wrapper.constructors[t] tree = cons.template({ i: ['*'.format(i)] if field.seq else ''.format(i) for i, field in enumerate(cons.fields)} ) print(t) pprint.pprint(tree) print('---') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # API Python -Challenge: Part 2: VacationPy Student: # # VacationPy # ---- # # #### Note # * Instructions have been included for each segment. 
You do not have to follow them exactly, but they are included to help you think through the steps. # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os import json # Import API key url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" from api_keys import g_key # ### Store Part I results into DataFrame # * Load the csv exported in Part I to a DataFrame #Extract the new dataframe that was created in the WeatherPy challenge cityweatherforcast = pd.read_csv("../output_data/cityweatherforcast.csv") cityweatherforcast.head() #test succesfully data frame generated #Eliminate duplicate columns in the Data Frame vacationweatherforcast = cityweatherforcast[["City", "Cloudiness", "Country","Date", "Humidity", "Latitude","Longitude", "Temp", "Winds Speed"]] vacationweatherforcast.head()#Test Succesfully Dataframe generated # ### Humidity Heatmap # * Configure gmaps. # * Use the Lat and Lng as locations and Humidity as the weight. # * Add Heatmap layer to map. #Stablished the max variables of Humidity and locations using previouse dataframe vacationhumidity = vacationweatherforcast["Humidity"].astype(float) maxvacationhumidity = vacationhumidity.max() locations = vacationweatherforcast[["Latitude","Longitude" ]] #Generated Heat map by on gmaps, using the data frame from previouse dataframe, and give a littleformat to the heatmap Heatmap_layer = gmaps.figure() heat_layer = gmaps.heatmap_layer(locations, weights=vacationhumidity,dissipating=True, max_intensity=maxvacationhumidity,point_radius=4) Heatmap_layer.add_layer(heat_layer) print(f"Heatmap of all the cities") Heatmap_layer#test succesfull heatmap generated # ### Create new DataFrame fitting weather criteria # * Narrow down the cities to fit weather conditions. # * Drop any rows will null values. #Narrow the number of cities that meet the temperature, cloudiness and wind speed requirement vacationcities = vacationweatherforcast.loc[(vacationweatherforcast["Temp"] > 70) & (vacationweatherforcast["Temp"] < 80) & (vacationweatherforcast["Cloudiness"] == 0) & (vacationweatherforcast["Winds Speed"] < 10), :] #Eliminates null values using drpna vacationcitiesfinal = vacationcities.dropna(how="any") #reset the dataframe generated from the begining vacationcitiesfinal.reset_index(inplace=True) #delete index del vacationcitiesfinal["index"] vacationcitiesfinal.head(12) #test succesfully data frame generated # ### Hotel Map # * Store into variable named `hotel_df`. # * Add a "Hotel Name" column to the DataFrame. # * Set parameters to search for hotels with 5000 meters. # * Hit the Google Places API for each city's coordinates. # * Store the first Hotel result into the DataFrame. # * Plot markers on top of the heatmap. 
# Create an empty list to hold the hotel names
hotels_df = []

# Loop over each city and query the Google Places API for nearby hotels using its
# latitude and longitude; use try/except in case no hotel is returned
for i in range(len(vacationcitiesfinal)):
    Latitude1 = vacationcitiesfinal.loc[i]["Latitude"]
    Longitude1 = vacationcitiesfinal.loc[i]["Longitude"]
    params = {
        "location": f"{Latitude1},{Longitude1}",
        "radius": 5000,
        "types": "hotel",
        "key": g_key
    }
    hotels = requests.get(url, params=params)
    jsn = hotels.json()
    try:
        hotels_df.append(jsn['results'][0]['name'])
    except (KeyError, IndexError):
        hotels_df.append("")

vacationcitiesfinal["Hotel Name"] = hotels_df

# Eliminate null values using dropna
vacationcitiesfinal = vacationcitiesfinal.dropna(how='any')
vacationcitiesfinal.head(13)  # test: data frame generated successfully

# Generate a new data frame with just the City, Hotel Name, Humidity, Latitude, Longitude and Winds Speed
vacationcitiesfinalf = vacationcitiesfinal[["City", "Hotel Name", "Humidity", "Latitude", "Longitude", "Winds Speed"]]
vacationcitiesfinalf.head(13)  # test: data frame generated successfully

# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
    """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in vacationcitiesfinal.iterrows()] locations = vacationcitiesfinal[["Latitude","Longitude" ]] #Track down the Hotel in the city and country from the dataframe into gmaps using markers markers = gmaps.marker_layer(locations) Heatmap_layer.add_layer(markers) print(f"Hotels recommeded to stay") Heatmap_layer #test succesfully gmaps with markers generated # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 1: Basics of Galpy # Import all necessary modules import numpy import scipy import matplotlib.pyplot as plt import galpy from galpy.potential import MWPotential2014 from galpy.potential import MiyamotoNagaiPotential, NFWPotential, HernquistPotential from galpy.potential import plotRotcurve # a) Plot the rotation curve of MWPotential2014 out to 40 kpc and its components bulge, disk, and halo. # + # Make potential instances for Milky Way Galaxy mp= MiyamotoNagaiPotential(a=0.5,b=0.0375,normalize=.6) # disk np= NFWPotential(a=4.5,normalize=.35) # halo hp= HernquistPotential(a=0.6/8,normalize=0.05) # bulge mp.turn_physical_on() np.turn_physical_on() hp.turn_physical_on() mw = MWPotential2014 for item in mw: item.turn_physical_on() plotRotcurve(mw, Rrange=[0.01,40.],grid=1001,yrange=[0.,300.], label = 'Milky Way') mp.plotRotcurve(Rrange=[0.01,40.],grid=1001,overplot=True, label = 'Disk') hp.plotRotcurve(Rrange=[0.01,40.],grid=1001,overplot=True, label = 'Bulge') np.plotRotcurve(Rrange=[0.01,40.],grid=1001,overplot=True, label = 'Halo') plt.legend() plt.title('Rotation Curve of MWPotential2014 & its Components') plt.savefig('Q1a.pdf') # - # b) Integrate the orbit of the Sun in MWPotential2014 for 10 Gyr # # + # integrate orbit of Sun for 10 Gyr from galpy.orbit import Orbit from astropy import units sun_orbit = Orbit() ts = numpy.linspace(0.,10., 10000)*units.Gyr sun_orbit.integrate(ts,mw,method='odeint') sun_orbit.plot() plt.title('Orbit of Sun using MWPotential2014') plt.savefig('Q1b.pdf') plt.show() plt.close() #sun_orbit.animate() # - # c) Using the phase-space coordinates for GD-1 from Webb & Bovy (2019), integrate the orbit of GD-1 forward and backward in time for 50 Myr and plot the orbit in Dec. vs. RA and Distance vs. 
RA; this is approximately where the GD-1 stream lies on the sky # + # integrate orbit of GD-1 forward and backwards in time # Phase space coords: [ra,dec,d,mu_ra, mu_dec,vlos], pass as input to orbit object ra = 148.9363998668805 dec = 36.15980426805254 d = 7.555339165941959 mu_ra = -5.332929760383195 mu_dec = -12.198914465325117 vlos = 6.944006091929623 GD_orbit = Orbit(vxvv=[ra,dec,d,mu_ra, mu_dec,vlos], ro = 8, vo = 220, radec= True) #insert phase space coordinates of GD-1 here # integrate forward in time ts_f = numpy.linspace(0.,50., 10000)*units.Myr GD_orbit.integrate(ts_f,mw) GD_orbit.plot(d1 = 'ra', d2 = 'dec') plt.title('GD-1 Stream: Declination vs RA: Integrated Forwards') plt.savefig('Q1c_1.pdf') GD_orbit.plot(d1 = 'ra', d2 = 'dist') plt.title('GD-1 Stream: Distance vs RA: Integrated Forwards') plt.savefig('Q1c_2.pdf') plt.show() plt.close() # integrate backwards in time ts_b = numpy.linspace(0.,-50., 10000)*units.Myr GD_orbit.integrate(ts_b,mw) GD_orbit.plot(d1 = 'ra', d2 = 'dec') plt.title('GD-1 Stream: Declination vs RA: Integrated Backwards') plt.savefig('Q1c_3.pdf') GD_orbit.plot(d1 = 'ra', d2 = 'dist') plt.title('GD-1 Stream: Distance vs RA: Integrated Backwards') plt.savefig('Q1c_4.pdf') # 2 types of plots: Dec vs RA and Distance vs RA # dist returns distance from observer obs = [X,Y,Z] # Current positions of GD-1 stream print('Current declination',GD_orbit.dec()) print('Current distance',GD_orbit.dist()) # - from galpy.util.config import __config__ __config__['astropy']['astropy-units'] # ## 2: Simulating Accretion of Globular Cluster onto MW # a) Integrate the orbits of all of the satellite galaxies of the Milky Way in MWPotential2014 and find their apo- and pericenters (integrate backwards in time). You can load the satellite galaxies with Orbit.from_name(‘MWsatellitegalaxies’). Plot their current spherical Galactocentric radius vs. both peri- and apocenter. 
# + satellite_g = Orbit.from_name('MWsatellitegalaxies') ts = numpy.linspace(0., -5., 1000)*units.Gyr # if you don't specify length then the velocity and distance scale values/ #(ro and vo) are used to calculate 1 distance length in internal units satellite_g.integrate(ts, mw) num = len(satellite_g) peri = numpy.zeros(num)*units.kpc apo = numpy.zeros(num)*units.kpc current = numpy.zeros(num)*units.kpc # current spherical galactocentric radius for i, item in enumerate(satellite_g): peri[i] = item.rperi() apo[i] = item.rap() current[i] = item.r() # galactocentric radius # Plot current galactocentric radius vs both peri and apocenters \ #of all 40 satellites # Apocenters: plt.scatter(current, apo, s = 3.0, color = 'green') plt.title('Apocenter of Satellite Galaxies') plt.xlabel('Current Galactocentric Radius (kpc)') plt.ylabel('Apocenter Radius (kpc)') #plt.legend() plt.grid() plt.savefig('Q2a_1.pdf') plt.show() plt.close() # Pericenters: plt.scatter(current, peri, s = 3.0, color = 'red') plt.title('Pericenter of Satellite Galaxies') plt.xlabel('Current Galactocentric Radius (kpc)') plt.ylabel('Pericenter Radius (kpc)') #plt.legend() plt.savefig('Q2a_2.pdf') plt.grid() plt.show() plt.close() #satellite_g.rperi() will return an array of pericenter radii for all satellite galaxies in the list satellite_g # Find index of satellite with smallest pericenter: smallest = 0 for i in range(len(satellite_g)): if satellite_g[i].rperi() < satellite_g[smallest].rperi(): smallest = i print('Satellite with smallest pericenter is: ' + satellite_g.name[smallest] + ' with index '+ str(smallest)) print('The pericenter radius of ' + satellite_g.name[smallest] + ' is ' + str(satellite_g[smallest].rperi())) # - # Using the built-in enumerate() method in python lst = ['a','b','c','d','e'] for i, item in enumerate(lst, 0): # print(item) print('Item in list at index ' + str(i) + ' is ' + item) print('{%s}: {%s}'% (i, item)) # string formatting # b) For the satellite with the smallest pericenter, find the time at which it reaches the smallest radius. # + # Satellite galaxy with smallest pericenter is: TucanaIII with index 35 # Time at which satellite reaches its smallest radius is at perihelion satellite_g[smallest].plot(d1 = 't', d2 = 'r') plt.title('Orbit of: '+ satellite_g.name[smallest]) name_smallest = satellite_g.name[smallest] time = satellite_g[smallest].time() # Find index of time array at which satellite 1st reaches its smallest radius (perihelion) smallest_r = 0 for i in range(len(ts)): if satellite_g[smallest].r(ts[i]) < satellite_g[smallest].r(ts[smallest_r]): smallest_r = i print('The time at which satellite galaxy ' + name_smallest + ' reaches its smallest radius is: ' + str(time[smallest_r])) # - # c) Give the satellite a mass (e.g., 10^11 Msun) and size following a=1.05 (Msat/10^8/Msun)^0.5 kpc and integrate backwards in time for t=10 Gyr, including the effect of dynamical friction. Plot the orbit of the satellite as radius vs. time. The satellite should be at progressively larger radii at earlier and earlier times. # + from galpy.potential import ChandrasekharDynamicalFrictionForce Tuc_satellite = satellite_g[smallest](ro = 8., vo = 220.) #make copy of orbit instance Tuc_satellite.turn_physical_on(ro = 8., vo = 220.) 
Msat = 10.**11.*units.Msun size = 1.05*(Msat/(10.**8./1*units.Msun))**0.5*units.kpc cdf= ChandrasekharDynamicalFrictionForce(GMs=Msat, rhm = size ,dens=MWPotential2014) ts = numpy.linspace(0.,-10.,1000)*units.Gyr Tuc_satellite.integrate(ts, mw+cdf) Tuc_satellite.plot(d1 = 't', d2 = 'r') plt.title('Orbit of Satellite Galaxy '+ name_smallest + ' - Including Dynamical Friction Effects' ) plt.savefig('Q2c.pdf') # Find out where the satellite is in the Milky Way at t=-10Gyr (in cylindrical coordinates: r and z) R = Tuc_satellite.R(-10.*units.Gyr) #cylindrical radius at time t vR = Tuc_satellite.vR(-10.*units.Gyr) #radial velocity at time t vT = Tuc_satellite.vT(-10.*units.Gyr) #tangential velocity at time t z = Tuc_satellite.z(-10.*units.Gyr) #vertical height at time t vz = Tuc_satellite.vz(-10.*units.Gyr) #vertical velocity at time t phi = Tuc_satellite.phi(-10.*units.Gyr) #azimuth at time t rad = Tuc_satellite.r(-10.*units.Gyr)*units.kpc # spherical radius at time t # Coordinates of TucanaIII satellite at t=-10Gyr coord = [Tuc_satellite.x(-10.*units.Gyr), Tuc_satellite.y(-10.*units.Gyr), Tuc_satellite.z(-10.*units.Gyr)] vcoord = [Tuc_satellite.vx(-10.*units.Gyr),Tuc_satellite.vy(-10.*units.Gyr),Tuc_satellite.vz(-10.*units.Gyr)] print(vcoord) print(coord) # - # d) Modeling the satellite as a Hernquist potential with scale radius a=1.05 (Msat/10^8/Msun)^0.5 kpc, initialize a star cluster on a circular orbit at r= 4 kpc within the satellite. Then move the satellite+cluster system to the radius of the satellite at t=10 Gyr from the previous question. # # + from astropy.constants import G, M_sun, kpc # initialize potential model of satellite sat_pot = HernquistPotential(amp = 2*Msat, a = size, ro = 8, vo=220) sat_pot.turn_physical_on(ro = 8, vo=220) # Find velocity (in km/s) scale: velocity at r = 4kpc for the satellite galaxy using following eqn: v = sqrt((G*M_enc)/r) r_kpc = 4*units.kpc r_m = r_kpc*kpc/(1*units.kpc) enc_mass = sat_pot.mass(0.5) # R is the factor of ro at which you want to evaluate mass print("Enclosed mass is", enc_mass) v = numpy.sqrt((G*enc_mass*M_sun/(1*units.Msun))/r_m) #in m/s print(v) v2 = v/1000*units.km/units.m print('Velocity of star cluster in frame of satellite is',v2) v2_unitless = v2.to_value(units.km/units.s) print(v2_unitless) # Another way to calculate veloity at r= 4kpc is using vcirc vcirc = sat_pot.vcirc(4.*units.kpc) print('Velocity of circular orbit at R = 4kpc is ' + str(vcirc)) # Transform from satellite's frame of reference to Milky Way Galaxy's frame of reference (using Cartesian coordinates) # Coordinates of the star cluster in galactocentric frame x_gal = coord[0] + 4*units.kpc y_gal = coord[1] z_gal = coord[2] # Velocity of the star cluster in galactocentric frame vx_gal = vcoord[0] vy_gal = vcoord[1] + vcirc vz_gal = vcoord[2] # - # e) Integrate the orbit of the star cluster in the combined satellite + MWPotential2014, by representing the potential of the dwarf as a MovingObjectPotential. You should set up the MovingObjectPotential with the orbit of the satellite from c, using the HernquistPotential as the mass model for the satellite. Then integrate the cluster that you initialized in d) in MWPotential2014+the MovingObjectPotential for 12 Gyr. Plot the orbit of both the satellite and the cluster in radius vs. time. Describe what happens. 
# + ts = numpy.linspace(0.,12., 1000)*units.Gyr # Orbit of satellite within Milky Way Galaxy sat = Orbit(vxvv = [R,vR,vT,z,vz,phi]) sat.integrate(ts, mw + cdf) sat.plot(d1 = 't', d2 = 'r') plt.title('Orbit of Satellite + Cluster System') plt.savefig('Q2_sat.pdf') plt.show() plt.close() # + # %%time from galpy.potential import MovingObjectPotential ts = numpy.linspace(0.,12., 1000)*units.Gyr # Orbit of satellite within Milky Way Galaxy sat = Orbit(vxvv = [R,vR,vT,z,vz,phi]) sat.integrate(ts, mw + cdf) sat.plot(d1 = 't', d2 = 'r') plt.title('Orbit of Satellite + Cluster System in Galactocentric Frame') plt.savefig('Q2_sat.pdf') #plt.show() #plt.close() ts = numpy.linspace(0.,20., 1000)*units.Gyr # Orbit of star cluster within satellite dwarf_pot = MovingObjectPotential(sat, pot = sat_pot) vR_sc = vx_gal*numpy.cos(phi) + vy_gal*numpy.sin(phi) # radial velocity of star_cluster wrt MW ref frame vT_sc = vy_gal*numpy.cos(phi) - vx_gal*numpy.sin(phi) # tangential velocity of star_cluster wrt MW ref frame R_sc = (x_gal**2 + y_gal**2)**(1/2) # cylindrical radius of the star_cluster wrt galactic center phi_sc = numpy.arctan(y_gal/x_gal) ##################################################################### star_cluster = Orbit(vxvv = [R_sc ,vR, vT_sc ,z,vz,phi_sc], ro=8., vo =220.) #full 6 coordinates star_cluster.integrate(ts, mw + dwarf_pot) # star cluster is in combined potential of MW galaxy and the satellite galaxy star_cluster.plot(d1 = 't', d2 = 'r', overplot = True) # galactocentric radius as a function of time plt.title('Orbit of Star Cluster Within Satellite in Galactocentric Frame') plt.savefig('Q2_sc.pdf') plt.show() plt.close() # + # %%time from galpy.potential import MovingObjectPotential ts = numpy.linspace(0.,12., 1000)*units.Gyr # Orbit of satellite within Milky Way Galaxy sat = Orbit(vxvv = [R,vR,vT,z,vz,phi]) sat.integrate(ts, mw + cdf) sat.plot(d1 = 't', d2 = 'r', linestyle = '-', color = 'blue', label = 'satellite', alpha=0.6) plt.title('Orbit of Satellite + Cluster System in Galactocentric Frame') plt.savefig('Q2_sat.pdf') #plt.show() #plt.close() ts = numpy.linspace(0.,12., 1000)*units.Gyr R, phi, z = rect_to_cyl(x_gal, y_gal, z_gal) vR, vT, vz = rect_to_cyl_vec(vx_gal,vy_gal,vz_gal,x_gal,y_gal,z_gal, cyl= False) # Orbit of star cluster within satellite dwarf_pot = MovingObjectPotential(sat, pot = sat_pot) star_cluster = Orbit(vxvv = [R ,vR, vT,z,vz,phi], ro=8., vo =220.) #full 6 coordinates star_cluster.integrate(ts, mw + dwarf_pot) # star cluster is in combined potential of MW galaxy and the satellite galaxy star_cluster.plot(d1 = 't', d2 = 'r', linestyle = ':', overplot = True, color = 'black', label = 'star cluster') # galactocentric radius as a function of time plt.title('Orbit of Star Cluster Within Satellite in Galactocentric Frame') plt.savefig('Q2_sc.pdf') plt.legend() plt.show() plt.close() # + # Orbit of star cluster in just the potential of the satellite galaxy (without MW potential) # Check that the orbit is indeed circular as it should be, also check that eccentricity is 0, or nearly 0 ts = numpy.linspace(0.,3.,1000)*units.Gyr star_cluster = Orbit(vxvv = [4*units.kpc, 0.*units.km/units.s, vcirc, 0.*units.kpc,0.*units.km/units.s,0.*units.rad], ro=8., vo =220.) 
#full 6 coordinates star_cluster.integrate(ts, sat_pot) # star cluster is in potential of satellite galaxy only star_cluster.plot(d1 = 'x', d2 = 'y') # radius as a function of time (in frame of the satellite galaxy TucanaIII not the galactic frame) plt.title('Orbit of Star Cluster Within Satellite in Satellite Galaxy Frame of Reference') plt.show() plt.close() print('The eccentricity of the orbit is', star_cluster.e()) # - # f) Bonus: Wrap the MovingObjectPotential with a DehnenSmoothWrapperPotential to make its mass go to zero at 10 Gyr (when the satellite should be at its pericenter); look at the orbit of the cluster again and see how it differs from what happens in e). # https://docs.galpy.org/en/v1.6.0/potential.html#initializing-potentials-with-parameters-with-units # + from galpy.potential import DehnenSmoothWrapperPotential ts = numpy.linspace(0.,20., 1000)*units.Gyr dswp = DehnenSmoothWrapperPotential(amp=1.0, pot = dwarf_pot, tform=0.*units.Gyr, tsteady=10.*units.Gyr, decay = True) star_cluster = Orbit(vxvv = [R ,vR, vT,z,vz,phi], ro=8., vo =220.) #full 6 coordinates star_cluster.integrate(ts, mw + dswp) # star cluster is in combined potential of MW galaxy and the satellite galaxy star_cluster.plot(d1 = 't', d2 = 'r') # galactocentric radius as a function of time plt.title('Orbit of Star Cluster Within Satellite in Galactocentric Frame') plt.savefig('WrapperPotential-Decaying Mass.pdf') plt.show() plt.close() # Plot radial force as a function of time, check that it goes to 0 at: tsteady after tform rad_f = pass # - # Coordinate & Velocity Conversions: Rectangular (x,y,z, vx, vy, vz) to Cylindrical (R, phi, z, vr, vt, vz) # + from galpy.util.bovy_coords import rect_to_cyl, rect_to_cyl_vec # From manual calculations of R and phi print(R_sc, phi_sc, z) print(numpy.arctan(y_gal/x_gal)) R, phi, z = rect_to_cyl(x_gal, y_gal, z_gal) print(R,phi,z) print(vR, vT_sc,vz ) vR, vT, vz = rect_to_cyl_vec(vx_gal,vy_gal,vz_gal,x_gal,y_gal,z_gal, cyl= False) print(vR, vT, vz) # Manual calculation vr = vx_gal*numpy.cos(phi) + vy_gal*numpy.sin(phi) vt = vy_gal*numpy.cos(phi) - vx_gal*numpy.sin(phi) print(vr, vt) # PROBLEMS WITH: phi # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + raw_mimetype="text/restructuredtext" active="" # .. _nb_nsga2: # - # ## NSGA-II # The algorithm is implemented based on [\[benchmark\]](https://www.egr.msu.edu/coinlab/blankjul/pymoo-benchmark/nsga2.html) [\[data\]](https://www.egr.msu.edu/coinlab/blankjul/pymoo-benchmark/nsga2.zip) # . A benchmark of the algorithm against the original C code can be found # The algorithm follows the general # outline of a genetic algorithm with a modifed mating and survival selection. In NSGA-II, first, individuals # are selected frontwise. By doing so, there will be the situation where a front needs to be splitted because not all indidividuals are allowed to survive. In this splitting front, solutions are selected based on crowding distance. #
    # ![nsga2_survival](../resources/images/nsga2_survival.png) #
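# As an informal sketch of the survival step shown in the figure above (not pymoo's actual implementation): whole fronts are kept while they fit into the next population, and the first front that does not fit is truncated by crowding distance. The inputs `fronts` and `crowding` below are placeholders for illustration.

# +
def survive(fronts, crowding, n_survive):
    # fronts: list of fronts (lists of indices) ordered by non-domination rank
    # crowding: callable mapping an index to its crowding distance
    survivors = []
    for front in fronts:
        if len(survivors) + len(front) <= n_survive:
            # the whole front fits, so keep all of its members
            survivors.extend(front)
        else:
            # the front must be split: keep the members with the largest
            # crowding distance until the population is full
            n_missing = n_survive - len(survivors)
            survivors.extend(sorted(front, key=crowding, reverse=True)[:n_missing])
            break
    return survivors

# Toy example: two fronts of three solutions each, four survivors
fronts = [[0, 1, 2], [3, 4, 5]]
crowding = {0: float('inf'), 1: 0.4, 2: float('inf'),
            3: float('inf'), 4: 0.1, 5: float('inf')}.get
print(survive(fronts, crowding, n_survive=4))
# -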
# # The crowding distance is essentially the Manhattan distance in the objective space. However, the extreme points of each front should be kept in every generation, and they are therefore assigned a crowding distance of infinity. #
    # ![nsga2_crowding](../resources/images/nsga2_crowding.png) #
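# For illustration, the crowding distance of one front can be computed roughly as below (a sketch, not pymoo's implementation): for each objective the front is sorted, the boundary solutions receive infinity, and interior solutions accumulate the normalized gap between their two neighbours, which is what makes the measure a Manhattan-style distance in objective space.

# +
import numpy as np

def crowding_distance(F):
    # F: (n_points, n_obj) array of objective values for the members of one front
    n_points, n_obj = F.shape
    dist = np.zeros(n_points)
    for m in range(n_obj):
        order = np.argsort(F[:, m])
        f_sorted = F[order, m]
        # extreme points are always kept, so they get an infinite distance
        dist[order[0]] = dist[order[-1]] = np.inf
        span = f_sorted[-1] - f_sorted[0]
        if span == 0:
            continue
        # interior points accumulate the normalized gap between their neighbours
        dist[order[1:-1]] += (f_sorted[2:] - f_sorted[:-2]) / span
    return dist

# Toy front with two objectives
F = np.array([[0.0, 1.0], [0.2, 0.7], [0.5, 0.4], [1.0, 0.0]])
print(crowding_distance(F))
# -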
    # Furthermore, to increase some selection preasure NSGA-II uses a binary tournament mating selection. Each individual is first compared by rank and then by crowding distance. There also exists a variant in the original C code where instead of using the rank the domination criterium between two solutions is used. # ### Example # + from pymoo.optimize import minimize from pymoo.algorithms.nsga2 import nsga2 from pymoo.util import plotting from pymop.factory import get_problem # create the algorithm object method = nsga2(pop_size=100, elimate_duplicates=True) # execute the optimization res = minimize(get_problem("zdt1"), method, termination=('n_gen', 200)) plotting.plot(res.F, no_fill=True) # - # ### API # + raw_mimetype="text/restructuredtext" active="" # .. autofunction:: pymoo.algorithms.nsga2.nsga2 # :noindex: # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import numpy as np import seaborn as sn import matplotlib.pyplot as plt df1 = pd.read_csv("E:/dataset/movies.csv") df2 = pd.read_csv("E:/dataset/ratings.csv") dfs = pd.merge(df1, df2) dfs.head(10) dfs.shape dfs.info() dfs.describe() dfs.groupby('title')['rating'].mean().sort_values(ascending=False).head() dfs.groupby('title')['rating'].count().sort_values(ascending=False).head() # + ratings = pd.DataFrame(dfs.groupby('title')['rating'].mean()) ratings['num of ratings'] = pd.DataFrame(dfs.groupby('title')['rating'].count()) ratings.head() # - movie = dfs.pivot_table(index ='userId', columns ='title', values ='rating') movie.head() ratings.sort_values('num of ratings', ascending = False).head(10) # + forrestgump_user_ratings = movie['Forrest Gump (1994)'] braveheart_user_ratings = movie['Braveheart (1995)'] forrestgump_user_ratings.head() # + similar_to_forrestgump = movie.corrwith(forrestgump_user_ratings) similar_to_braveheart = movie.corrwith(braveheart_user_ratings) corr_forrestgump = pd.DataFrame(similar_to_forrestgump, columns =['Correlation']) corr_forrestgump.dropna(inplace = True) corr_forrestgump.head() # + corr_forrestgump.sort_values('Correlation', ascending = False).head(10) corr_forrsetgump = corr_forrestgump.join(ratings['num of ratings']) corr_forrestgump.head() corr_forrsetgump[corr_forrestgump['num of ratings']>100].sort_values('Correlation', ascending = False).head() # + corr_braveheart = pd.DataFrame(similar_to_braveheart, columns =['Correlation']) corr_braveheart.dropna(inplace = True) corr_braveheart = corr_braveheart.join(ratings['num of ratings']) corr_braveheart[corr_braveheart['num of ratings']>100].sort_values('Correlation', ascending = False).head() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="KXkro8-7bTDc" def chickens(count): return f"Number of chickens: {count}" if count < 10 else "Number of chickens: many" # + id="-Bzoj_dxbab4" def string_both_ends_3(s): return s if len(s) < 3 else s[:3]+s[len(s)-3:] # + id="89-n4vxQbcQ2" def first_char_replace(s): return s[0]+s[1:].replace(s[0].lower(), '@').replace(s[0].upper(), '@') # + id="2kbjAlDBbeLl" def string_jumble(a, b): return b[:2]+a[2:]+' '+a[:2]+b[2:] # + id="7yLC3ffrbges" def match_first_last(words): return sum([1 for s in words if len(s) >= 2 and s[0] == s[-1]]) # + 
id="-KSpx8tDbiNr" def group_strings(words): return sorted(words) # + id="bbdfkeD7bmFr" def sort_last(tuples): return sorted(tuples, key=lambda x: x[1]) # + id="T8E4jaD5bnzK" def main(): print ('Number of chickens') print(chickens(4)) print(chickens(9)) print(chickens(10)) print(chickens(99)) print ('\n3 characters from both ends') print(string_both_ends_3('spring')) print(string_both_ends_3('Intelligence')) print(string_both_ends_3('a')) print(string_both_ends_3('xyz')) print ('\nReplace occurrences of first character') print(first_char_replace('babble')) print(first_char_replace('aardvark')) print(first_char_replace('google')) print(first_char_replace('Ooogle')) print ('\nString Jumble') print(string_jumble('mix', 'pod')) print(string_jumble('dog', 'dinner')) print(string_jumble('gnash', 'sport')) print(string_jumble('pezzy', 'firm')) print ('\nMatching first and last characters') print(match_first_last(['aba', 'xyz', 'aa', 'a', 'bbb'])) print(match_first_last(['', 'x', 'ay', 'ayx', 'ax'])) print(match_first_last(['aaa', 'be', 'abc', 'aello'])) print ('\nGroup string in a list') print(group_strings(['bbb', 'ccc', 'axx', 'xzz', 'aaa'])) print(group_strings(['ccc', 'abb', 'aaa', 'xcc', 'aaa'])) print(group_strings(['mix', 'xyz', 'apple', 'xanadu', 'aardvark'])) print ('\nsort_last') print(sort_last([(1, 3), (3, 2), (2, 1)])) print(sort_last([(2, 3), (1, 2), (3, 1)])) print(sort_last([(1, 7), (1, 3), (3, 4, 5), (2, 2)])) # + id="cAmJUqYJbpwR" ''' Python files .py are modules. Modules can define variables, functions, and classes. When a Python interpreter reads a Python file, it first sets a few special variables. Then it executes the code from the file. One of those variables is called __name__. When the interpreter runs a module, the __name__ variable will be set as __main__ if the module that is being run is the main program. If the code is importing the module from another module, then the __name__ variable will be set to that module’s name. ''' # Standard boilerplate to call the main() function. 
if __name__ == '__main__': main() # + # 6.8 import numpy as np import matplotlib.pyplot as plt def IIIDarrToImage(IIID): IIID=IIID.astype(np.uint8) plt.imshow(IIID) plt.imsave("my.png",IIID) IIIDarrToImage(np.array([ [[255, 50, 100], [255, 250, 100], [250, 50, 250]], [[255, 250, 100], [255, 0, 0], [0, 0, 250]], [[255, 50, 100], [255, 250, 100], [250, 250, 250]]])) # + # 6.9 import pandas as pd # use this link if you have file df = pd.read_csv('train.csv') #else # df = pd.read_csv('https://raw.githubusercontent.com/ghostdart/Artificial-Intelligence-Labs/main/02-Intro-to-numpy-and-pandas/train.csv') print("Age between 18 and 30") print(len(df[(df['Age'] < 30) & (df['Age'] > 18)])) # - print("Females survivers age between 18 and 30") print(len(df[(df['Sex'] == 'female') & (df['Age'] < 30) & (df['Age'] > 18) & (df['Survived'] == 1)])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import pickle import datetime import matplotlib.pyplot as plt import matplotlib as mpl Market_position_df = pd.read_pickle('database\MarketPortfolioWeights26Oct20.pickle') Momentum_position_df = pd.read_pickle('database\MomentumPortfolioWeights26Oct20.pickle') Quality_position_df = pd.read_pickle('database\QualityPortfolioWeights26Oct20.pickle') Value_position_df = pd.read_pickle('database\ValuePortfolioWeights26Oct20.pickle') Liquidity_position_df = pd.read_pickle('database\LiquidityPortfolioWeights26Oct20.pickle') Size_position_df = pd.read_pickle('database\SizePortfolioWeights26Oct20.pickle') price_df = pd.read_pickle('database\\nifty_200_stocks_price_data_24Oct20.pickle') price_subset_df = price_df[['adjclose','Symbol']] price_subset_df.index = price_df['formatted_date' ] nifty_200 = pd.read_csv('database\ind_nifty200list.csv') price_matrix = {} for stock in nifty_200.Symbol: price_matrix[stock] = price_subset_df[price_subset_df.Symbol == stock] for stock in nifty_200.Symbol: price_matrix[stock].drop(columns='Symbol', inplace = True) price_matrix[stock] = price_matrix[stock].iloc[-18:,:] final_price_matrix = pd.concat(price_matrix, axis = 1) final_price_matrix.columns = final_price_matrix.columns.droplevel(1) def Capstone_backtest(position_weight_df,prices_df,commission): cap = 70000000 No_of_shares = {} for i in range(len(position_weight_df)): if i == 0: No_of_shares[position_weight_df.index[i]] = (cap*position_weight_df.iloc[i,:])/prices_df.iloc[i,:] else: cap = (No_of_shares[position_weight_df.index[i-1]]*prices_df.iloc[i,:]).sum() No_of_shares[position_weight_df.index[i]] = (cap*position_weight_df.iloc[i,:])/prices_df.iloc[i,:] position_df = pd.concat(No_of_shares,axis=1).T riskfree_rate = 0.02 delta_df = position_df.diff() worth_df = position_df*prices_df worth_df['Total'] = worth_df.sum(axis=1) returns_df = delta_df*prices_df k = returns_df returns_df["returns"] = k.sum(axis=1)-k.abs().sum(axis=1)*commission returns_df["per_returns"] = returns_df["returns"]/worth_df['Total'] sharpe_ratio = (returns_df["per_returns"].mean()-riskfree_rate)/returns_df["per_returns"].std() running_max = np.maximum.accumulate(worth_df['Total']) running_max[running_max < 1] = 1 drawdown = (worth_df['Total'])/running_max - 1 max_drawdown = drawdown.min() target = 0 returns_df['downside_returns'] = 0 returns_df.loc[returns_df['returns'] < target, 'downside_returns'] = returns_df['returns']**2 down_stdev = 
np.sqrt(returns_df['downside_returns'].mean()) sortino_ratio = (returns_df['returns'].mean() - riskfree_rate)/down_stdev # Print print('\n') start = "2016-08-31" end = "2020-10-10" print("Start\t\t\t\t\t", start) print("End\t\t\t\t\t", end) print("Portfolio Final Value [₹]\t\t", worth_df['Total'][-1]) print("Portfolio Initial Value [₹]\t\t", worth_df['Total'][0]) print("Portfolio peak [Rs]\t\t\t", worth_df['Total'].max()) print("Return [%]\t\t\t\t", round((worth_df['Total'][-1]/worth_df['Total'][0]-1)*100,2)) print("Max. Drawdown [%]\t\t\t", round(abs(max_drawdown)*100,1)) print("Avg. Drawdown [%]\t\t\t", round(abs(drawdown.mean())*100,1)) print("Sharpe Ratio\t\t\t\t", round(sharpe_ratio,3)) print("Sortino Ratio\t\t\t\t", round(sortino_ratio,3)) # Plot mpl.style.use('seaborn') fig, axs = plt.subplots(3, 1) axs[0].plot(worth_df['Total'],color='blue') axs[0].set_xlabel('Date') axs[0].set_ylabel('Portfolio Value [₹]') axs[0].grid(True) axs[1].plot(returns_df["returns"],dashes=[6, 2]) axs[1].set_ylabel('Cumm. returns') axs[1].set_xlabel('Date') axs[1].grid(True) axs[2].plot(drawdown.abs()*100,color='red',dashes=[6, 2]) axs[2].set_xlabel('Date') axs[2].set_ylabel('Drawdown %') axs[2].grid(True) fig.autofmt_xdate() plt.show() # + print( "\033[4m"+"\033[1m" + "Market position" + "\033[0m"+"\033[0m") Capstone_backtest(Market_position_df,final_price_matrix,0.01) print( "\033[4m"+"\033[1m" + "Momentum position" + "\033[0m"+"\033[0m") Capstone_backtest(Momentum_position_df,final_price_matrix,0.01) print( "\033[4m"+"\033[1m" + "Quality position" + "\033[0m"+"\033[0m") Capstone_backtest(Quality_position_df,final_price_matrix,0.01) print( "\033[4m"+"\033[1m" + "Value position" + "\033[0m"+"\033[0m") Capstone_backtest(Value_position_df,final_price_matrix,0.01) print( "\033[4m"+"\033[1m" + "Liquidity position" + "\033[0m"+"\033[0m") Capstone_backtest(Liquidity_position_df,final_price_matrix,0.01) print( "\033[4m"+"\033[1m" + "Size position" + "\033[0m"+"\033[0m") Capstone_backtest(Size_position_df,final_price_matrix,0.01) # Market_position_df # Momentum_position_df # Quality_position_df # Value_position_df # Liquidity_position_df # Size_position_df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import os import sys import networkx as nx import numpy as np from sklearn.preprocessing import MinMaxScaler from epynet import Network sys.path.insert(0, os.path.join('..')) from utils.graph_utils import get_nx_graph, get_sensitivity_matrix from utils.DataReader import DataReader from utils.SensorInstaller import SensorInstaller import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [10, 8] # - wds_id = 'ctown' # ### Loading WDS topology path_to_wds = os.path.join('..', 'water_networks', wds_id+'.inp') wds = Network(path_to_wds) G = get_nx_graph(wds, mode='weighted') installer = SensorInstaller(wds, include_pumps_as_master=True) coords = {} for node in wds.nodes: arr = [node.coordinates[0], node.coordinates[1]] coords[int(node.index)] = arr node_colors = ['#ffffff' for _ in G.nodes] nx.draw(G, pos=coords, with_labels=True, node_color=node_colors) # ### Loading signal path_to_data = os.path.join('..', 'data', 'db_'+wds_id+'_doe_pumpfed_1') reader = DataReader(path_to_data, n_junc=len(wds.junctions.uid), obsrat=.1, seed=1234) signal, _, _ = reader.read_data( dataset = 'trn', varname = 'junc_heads', rescale = None, cover 
= False ) # + devs = signal.std(axis=0).T[0] node_order = np.argsort(devs)[::-1]+1 mmscaler = MinMaxScaler() scaled_devs = mmscaler.fit_transform(devs.reshape(-1, 1))[:,0] # - signal.std(axis=0).T.shape # ### Collecting master nodes master_nodes = installer.master_nodes # ### Calculating sensitivity matrix pert = np.max(wds.junctions.basedemand)/100 S = get_sensitivity_matrix(wds, pert) # ### Shortest path selection # node_weights_arr = np.sum(np.abs(S), axis=0) node_weights_arr = signal.std(axis=0).T[0] # + sensor_budget = 2 # installer.deploy_by_random(sensor_budget=sensor_budget, seed=88867) # installer.deploy_by_shortest_path(sensor_budget=sensor_budget, weight_by=None) # installer.deploy_by_shortest_path(sensor_budget=sensor_budget, weight_by='length') # installer.deploy_by_shortest_path(sensor_budget=sensor_budget, weight_by='iweight', sensor_nodes=set(installer.master_nodes)) installer.deploy_by_xrandom(sensor_budget=sensor_budget, seed=88867, sensor_nodes=set(installer.master_nodes)) # installer.deploy_by_shortest_path_with_sensitivity( # sensor_budget = sensor_budget, # node_weights_arr = node_weights_arr, # weight_by = 'iweight', # aversion = 0, # sensor_nodes=installer.master_nodes # ) # installer.set_sensor_nodes(set(np.loadtxt(os.path.join('..', 'experiments', 'models', # 'ctown-random-0.015-binary-placement-4_sensor_nodes.csv')))) sensor_nodes = installer.sensor_nodes # - # ### Sensor placement plot # + node_arr = np.array(G.nodes) node_colors = ['#ffffff' for _ in G.nodes] node_sizes = [1 for _ in G.nodes] for node in master_nodes: try: node_colors[np.where(node_arr == node)[0][0]] = '#ff0000' node_sizes[np.where(node_arr == node)[0][0]] = 50 except: print(node) for node in sensor_nodes: node_colors[np.where(node_arr == node)[0][0]] = '#008000' node_sizes[np.where(node_arr == node)[0][0]] = 50 edge_colors = ['#000000' for _ in G.edges] paths = installer.get_shortest_paths(installer.sensor_nodes) edge_colors = [] for edge in G.edges: flag = False for path in paths: if edge in path.edges: flag = True if flag: edge_colors.append('#ff0000') else: edge_colors.append('#000000') # - nx.draw( G, pos=coords, with_labels=False, node_color=node_colors, edge_color=edge_colors, alpha=None, node_size=node_sizes) # ### Truncated graph # + node_arr = np.array(G.nodes) node_colors = ['#ffffff' for _ in G.nodes] node_sizes = [1 for _ in G.nodes] # selected_nodes = set(np.arange(3)+1) selected_nodes = sensor_nodes for node in selected_nodes: node_colors[np.where(node_arr == node)[0][0]] = '#008000' node_sizes[np.where(node_arr == node)[0][0]] = 50 edge_colors = [] for edge in G.edges: if edge[0] in selected_nodes or edge[1] in selected_nodes: edge_colors.append('#ff0000') else: edge_colors.append('#80ffff') # - nx.draw( G, pos=coords, with_labels=False, node_color=node_colors, edge_color=edge_colors, alpha=None, node_size=node_sizes) # # Nodes with highest head variation # + cmap = plt.get_cmap('viridis') node_arr = np.array(G.nodes) node_colors = ['#ffffff' for _ in G.nodes] node_sizes = [50 for _ in G.nodes] for node in G.nodes: nidx = np.where(node_arr == node)[0][0] cidx = np.where(wds.junctions.index.values == node)[0][0] node_colors[nidx] = cmap(scaled_devs[cidx]) # - nx.draw( G, pos=coords, with_labels=False, node_color=node_colors, node_size=node_sizes) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## ヒストグラム # + 
# Listing 4.6.1: Drawing a histogram import numpy as np import matplotlib.pyplot as plt np.random.seed(12345) # %matplotlib inline x = np.random.normal(100, 20, 10000) # draw the histogram fig, ax = plt.subplots(1, 1) ax.hist(x) # + # Listing 4.6.2: Drawing a histogram with explicit arguments fig, ax = plt.subplots(1, 1) ax.hist(x, bins=32, range=(0, 200), edgecolor="black") # + # Listing 4.6.3: Drawing a horizontal histogram fig, ax = plt.subplots(1, 1) ax.hist(x, bins=32, range=(0, 200), orientation="horizontal", edgecolor="black") # + # Listing 4.6.4: Loading the anime_master.csv file from urllib.parse import urljoin import pandas as pd base_url = "https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/" anime_master_csv = urljoin(base_url, "anime_master.csv") df_master = pd.read_csv(anime_master_csv, index_col="anime_id") df_master.head() # + # Listing 4.6.5: Descriptive statistics of the episode counts df_tv = df_master[df_master["type"] == "TV"] episode_number = df_tv["episodes"] episode_number.describe() # + # Listing 4.6.6: Visualizing the episode counts fig, ax = plt.subplots(1, 1) ax.hist(episode_number, bins=16, range=(0, 2000), edgecolor="black") ax.set_title("Episodes") # + # Listing 4.6.7: Visualizing the episode counts (restricted plot range) fig, ax = plt.subplots(1, 1) ax.hist(episode_number, bins=15, range=(0, 390), edgecolor="black") ax.set_xticks(np.arange(0, 391, 26).astype("int64")) ax.set_title("Episodes") # + # Listing 4.6.8: Visualizing the episode counts (logarithmic axis) fig, ax = plt.subplots(1, 1) ax.hist(episode_number, bins=16, range=(0, 2000), log=True, edgecolor="black") ax.set_title("Episodes") # + # Listing 4.6.9: Visualizing the ratings df_rating = df_master["rating"] rating_range = (0, 10) fig, ax = plt.subplots(1, 1) ax.hist(df_rating, range=rating_range, edgecolor="black") ax.set_title("Rating") # + # Listing 4.6.10: Drawing a cumulative histogram of relative frequencies fig, ax = plt.subplots(1, 1) # set cumulative to True ax.hist( df_rating, range=rating_range, density=True, cumulative=True, edgecolor="black", ) ax.set_title("Rating (cumulated)") # + # Listing 4.6.11: Adding an approximation curve from scipy.stats import norm # number of bins bins = 50 # mean and standard deviation mu, sigma = df_rating.mean(), df_rating.std() # draw the histogram fig, ax = plt.subplots(1, 1) ax.hist(df_rating, bins=bins, range=rating_range, density=True) # X values (bin edges) x = np.linspace(rating_range[0], rating_range[1], bins) # Y values (generated from the approximate probability density function) y = norm.pdf(x, mu, sigma) # draw the approximation curve ax.plot(x, y) ax.set_title("Rating (normed) with approximate curve") # + # Listing 4.6.12: Overlaying histograms of several groups fig, ax = plt.subplots(1, 1) for type_, data in df_master.groupby("type"): ax.hist(data["rating"], range=rating_range, alpha=0.5, label=type_) ax.legend() ax.set_xlabel("Rating") ax.set_ylabel("Count(rating)") # + # Listing 4.6.13: Drawing histograms of several groups side by side # build the dataset types = df_master["type"].unique() dataset = [ df_master.loc[df_master["type"] == type_, "rating"] for type_ in types ] fig, ax = plt.subplots(1, 1) ax.hist(dataset, range=rating_range, label=types) ax.legend() ax.set_xlabel("rating") ax.set_ylabel("Count(rating)") # + # Listing 4.6.14: Drawing a stacked histogram # dataset and types are the ones created in the side-by-side listing above fig, ax = plt.subplots(1, 1) ax.hist(dataset, range=rating_range, label=types, stacked=True) ax.legend() ax.set_xlabel("rating") ax.set_ylabel("Count(rating)") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- # # Automatic Suggestion of Constraints # # In our experience, a major hurdle in data validation is that someone needs to come up with the actual constraints to apply on the data. 
This can be very difficult for large, real-world datasets, especially if they are very complex and contain information from a lot of different sources. We build so-called constraint suggestion functionality into deequ to assist users in finding reasonable constraints for their data. # # Our constraint suggestion first [profiles the data](./data_profiling_example.ipynb) and then applies a set of heuristic rules to suggest constraints. In the following, we give a concrete example on how to have constraints suggested for your data. # + from pyspark.sql import SparkSession, Row, DataFrame import json import pandas as pd import sagemaker_pyspark import pydeequ classpath = ":".join(sagemaker_pyspark.classpath_jars()) spark = (SparkSession .builder .config("spark.driver.extraClassPath", classpath) .config("spark.jars.packages", pydeequ.deequ_maven_coord) .config("spark.jars.excludes", pydeequ.f2j_maven_coord) .getOrCreate()) # - # ### Let's first generate some example data: df = spark.sparkContext.parallelize([ Row(productName="thingA", totalNumber="13.0", status="IN_TRANSIT", valuable="true"), Row(productName="thingA", totalNumber="5", status="DELAYED", valuable="false"), Row(productName="thingB", totalNumber=None, status="DELAYED", valuable=None), Row(productName="thingC", totalNumber=None, status="IN_TRANSIT", valuable="false"), Row(productName="thingD", totalNumber="1.0", status="DELAYED", valuable="true"), Row(productName="thingC", totalNumber="7.0", status="UNKNOWN", valuable=None), Row(productName="thingC", totalNumber="20", status="UNKNOWN", valuable=None), Row(productName="thingE", totalNumber="20", status="DELAYED", valuable="false"), Row(productName="thingA", totalNumber="13.0", status="IN_TRANSIT", valuable="true"), Row(productName="thingA", totalNumber="5", status="DELAYED", valuable="false"), Row(productName="thingB", totalNumber=None, status="DELAYED", valuable=None), Row(productName="thingC", totalNumber=None, status="IN_TRANSIT", valuable="false"), Row(productName="thingD", totalNumber="1.0", status="DELAYED", valuable="true"), Row(productName="thingC", totalNumber="17.0", status="UNKNOWN", valuable=None), Row(productName="thingC", totalNumber="22", status="UNKNOWN", valuable=None), Row(productName="thingE", totalNumber="23", status="DELAYED", valuable="false")]).toDF() # Now, we ask PyDeequ to compute constraint suggestions for us on the data. It will profile the data and then apply the set of rules specified in `addConstraintRules()` to suggest constraints. # + from pydeequ.suggestions import * suggestionResult = ConstraintSuggestionRunner(spark) \ .onData(df) \ .addConstraintRule(DEFAULT()) \ .run() # - # We can now investigate the constraints that deequ suggested. We get a textual description and the corresponding Python code for each suggested constraint. Note that the constraint suggestion is based on heuristic rules and assumes that the data it is shown is 'static' and correct, which might often not be the case in the real world. Therefore the suggestions should always be manually reviewed before being applied in real deployments. for sugg in suggestionResult['constraint_suggestions']: print(f"Constraint suggestion for \'{sugg['column_name']}\': {sugg['description']}") print(f"The corresponding Python code is: {sugg['code_for_constraint']}\n") # The first suggestions we get are for the `valuable` column. 
**PyDeequ** correctly identified that this column is actually a `boolean` column 'disguised' as a string column and therefore suggests a constraint on the `boolean` datatype. Furthermore, it saw that this column contains some missing values and suggests a constraint that checks that the ratio of missing values should not increase in the future. # # ``` # Constraint suggestion for 'valuable': 'valuable' has less than 62% missing values # The corresponding Python code is: .hasCompleteness("valuable", lambda x: x >= 0.38, "It should be above 0.38!") # # Constraint suggestion for 'valuable': 'valuable' has type Boolean # The corresponding Python code is: .hasDataType("valuable", ConstrainableDataTypes.Boolean) # ``` # # Next we look at the `totalNumber` column. PyDeequ identified that this column is actually a numeric column 'disguised' as a string column and therefore suggests a constraint on a fractional datatype (such as `float` or `double`). Furthermore, it saw that this column contains some missing values and suggests a constraint that checks that the ratio of missing values should not increase in the future. Additionally, it suggests that values in this column should always be positive (as it did not see any negative values in the example data), which probably makes a lot of sense for this count-like data. # # ``` # Constraint suggestion for 'totalNumber': 'totalNumber' has no negative values # The corresponding Python code is: .isNonNegative("totalNumber") # # Constraint suggestion for 'totalNumber': 'totalNumber' has less than 47% missing values # The corresponding Python code is: .hasCompleteness("totalNumber", lambda x: x >= 0.53, "It should be above 0.53!") # # Constraint suggestion for 'totalNumber': 'totalNumber' has type Fractional # The corresponding Python code is: .hasDataType("totalNumber", ConstrainableDataTypes.Fractional) # ``` # # Finally, we look at the suggestions for the `productName` and `status` columns. Neither of them had a single missing value in the example data, so an `isComplete` constraint is suggested for them. Furthermore, both of them only have a small set of possible values, therefore an `isContainedIn` constraint is suggested, which would check that future values are also contained in the range of observed values. # # ``` # Constraint suggestion for 'productName': 'productName' has value range 'thingC', 'thingA', 'thingB', 'thingE', 'thingD' # The corresponding Python code is: .isContainedIn("productName", ["thingC", "thingA", "thingB", "thingE", "thingD"]) # # Constraint suggestion for 'productName': 'productName' is not null # The corresponding Python code is: .isComplete("productName") # # Constraint suggestion for 'status': 'status' has value range 'DELAYED', 'UNKNOWN', 'IN_TRANSIT' # The corresponding Python code is: .isContainedIn("status", ["DELAYED", "UNKNOWN", "IN_TRANSIT"]) # # Constraint suggestion for 'status': 'status' is not null # The corresponding Python code is: .isComplete("status") # ``` # Currently, we leave it up to the user to decide whether they want to apply the suggested constraints or not, and provide the corresponding Python code for convenience. For larger datasets, it makes sense to evaluate the suggested constraints on some held-out portion of the data to see whether they hold or not. You can test this by adding an invocation of `.useTrainTestSplitWithTestsetRatio(0.1)` to the `ConstraintSuggestionRunner`. 
With this configuration, it would compute constraint suggestions on 90% of the data and evaluate the suggested constraints on the remaining 10%. # # Finally, we would also like to note that the constraint suggestion code provides access to the underlying [column profiles](./data_profiling_example.ipynb) that it computed via `suggestionResult.columnProfiles`. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import nibabel as nib import glob import sys sys.path.append('gradient_data/src/') import os import scipy.stats import numpy as np import statsmodels.sandbox.stats.multicomp import nibabel as nib # + ### Comparison of cerebello-cerebral connectivity between cerebellar Gradient 1 and 2 peaks at each area of ### motor and nonmotor representation was calculated as follows: ### 1) Using workbench view, save dscalar map of connectivity from peak of each Gradient (e.g. "L2midcog_fconn1003subjects.dscalar.nii") ### 2) Using workbench view, annotate correlation values between seeds: ### Fisher_z ### R1mot R2mot 0.231264 ### L1mot L2mot 0.23966 ### R12highcog R3highcog 0.313801 ### L12highcog L3highcog 0.370525 ### R1midcog R2midcog 0.450554 ### L1midcog L2midcog 0.330115 ### R1midcog R3midcog 0.330242 ### L1midcog L3midcog 0.222103 ### R1midcog R3midcog_alt 0.245531 ### L1midcog L3midcog_alt 0.336851 ### R2midcog R3midcog 0.276653 ### L2midcog L3midcog 0.154236 ### R2midcog R3midcog_alt 0.188767 ### L2midcog L3midcog_alt 0.218249 ### "Alt" refers to peak of Gradient 2 in lobule IX if peak was in lobule X, and vice versa. These values were ### not included in the final analysis ### 3) Contrast r values between seeds using the equations from Meng et al., 1992: ### wb_command -cifti-math '((var_zr1)-(var_zr2))*((sqrt(1003-3))/(2*(1-0.301)*((1-(((1-0.301)/(2*(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2)))+((1-0.301)/(2*(1-((( tanh(var_zr1)^2)+( tanh(var_zr2)^2))/2)))>1)*(1-(1-0.301)/(2*(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2))))))*((( tanh(var_zr1)^2)+( tanh(var_zr2)^2))/2))/(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2)))))' comparison_L2midcog_vs_L1midcog_1003subjects.dscalar.nii -var var_zr1 L2midcog_fconn1003subjects.dscalar.nii -var var_zr2 L1midcog_fconn1003subjects.dscalar.nii ### Substitute 0.301 with each particular correlation value between seeds. Note that z values have to be converted to r values (e.g. 
L1midcog L2midcog z=0.330115 corresponds to r=0.301) ### 4) Correct comparison maps with FDR as follows: # - directory = 'xxx' ### write directory with comparison maps here files = os.listdir(directory) ### Check that files are loaded for x in files: if x.endswith('motor.dscalar.nii'): print(x) for x in files: if x.endswith('dscalar.nii'): print('calculating: ' + x) zvalues = nib.load(x).get_data() pvals = scipy.stats.norm.sf(zvalues) pvals = pvals.T pvals_onlycortex = pvals[0:59411] ### This selects values only from the cortex pvals_onlycortex = pvals_onlycortex.T pvals_onlycortex_FDR = statsmodels.sandbox.stats.multicomp.multipletests(pvals_onlycortex[0], alpha=0.05, method='fdr_bh', is_sorted=False, returnsorted=False) ### Put cortical FDR corrected p values back to the original pvals matrix pvals_onlycortex_FDR[1].shape = (59411, 1) pvals[0:59411] = pvals_onlycortex_FDR[1] np.save(x + '_pvalues_onetailedcortexonly_FDR.npy', pvals.T[0]) print('finished calculating: ' + x) ### Transform to dscalar format files = os.listdir(directory) #This is to update list of files in directory for y in files: if y.endswith('.npy'): res = nib.load(directory2 + '/hcp.tmp.lh.dscalar.nii').get_data() cortL = np.squeeze(np.array(np.where(res != 0)[0], dtype=np.int32)) res = nib.load(directory2 + '/hcp.tmp.rh.dscalar.nii').get_data() cortR = np.squeeze(np.array(np.where(res != 0)[0], dtype=np.int32)) cortLen = len(cortL) + len(cortR) del res emb = np.load(y) emb.shape emb.shape = (91282, 1) tmp = nib.load('comparison_R1midcog_vs_R2midcog_peakfconn1003.dscalar.nii') #Just to take the shape; has to be dscalar with one map, and brain only tmp_cifti = nib.cifti2.load('comparison_R1midcog_vs_R2midcog_peakfconn1003.dscalar.nii') data = tmp_cifti.get_data() * 0 mim = tmp.header.matrix[1] for idx, bm in enumerate(mim.brain_models): print ((idx, bm.index_offset, bm.brain_structure)) img = nib.cifti2.Cifti2Image(emb.T, nib.cifti2.Cifti2Header(tmp.header.matrix)) img.to_filename(y + '.dscalar.nii') # + ### 5) Open corrected maps with wb_view and use a threshold of 0.05 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **Поля** # 1. school — аббревиатура школы, в которой учится ученик # 2. sex — пол ученика ('F' - женский, 'M' - мужской) # 3. age — возраст ученика (от 15 до 22) # 4. address — тип адреса ученика ('U' - городской, 'R' - за городом) # 5. famsize — размер семьи('LE3' <= 3, 'GT3' >3) # 6. Pstatus — статус совместного жилья родителей ('T' - живут вместе 'A' - раздельно) # 7. Medu — образование матери (0 - нет, 1 - 4 класса, 2 - 5-9 классы, 3 - среднее специальное или 11 классов, 4 - высшее) # 8. Fedu — образование отца (0 - нет, 1 - 4 класса, 2 - 5-9 классы, 3 - среднее специальное или 11 классов, 4 - высшее) # 9. Mjob — работа матери ('teacher' - учитель, 'health' - сфера здравоохранения, 'services' - гос служба, 'at_home' - не работает, 'other' - другое) # 10. Fjob — работа отца ('teacher' - учитель, 'health' - сфера здравоохранения, 'services' - гос служба, 'at_home' - не работает, 'other' - другое) # 11. reason — причина выбора школы ('home' - близость к дому, 'reputation' - репутация школы, 'course' - образовательная программа, 'other' - другое) # 12. guardian — опекун ('mother' - мать, 'father' - отец, 'other' - другое) # 13. traveltime — время в пути до школы (1 - <15 мин., 2 - 15-30 мин., 3 - 30-60 мин., 4 - >60 мин.) # 14. 
studytime — время на учёбу помимо школы в неделю (1 - <2 часов, 2 - 2-5 часов, 3 - 5-10 часов, 4 - >10 часов) # 15. failures — количество внеучебных неудач (n, если 1<=n<=3, иначе 0) # 16. schoolsup — дополнительная образовательная поддержка (yes или no) # 17. famsup — семейная образовательная поддержка (yes или no) # 18. paid — дополнительные платные занятия по математике (yes или no) # 19. activities — дополнительные внеучебные занятия (yes или no) # 20. nursery — посещал детский сад (yes или no) # 21. higher — хочет получить высшее образование (yes или no) # 22. internet — наличие интернета дома (yes или no) # 23. romantic — в романтических отношениях (yes или no) # 24. famrel — семейные отношения (от 1 - очень плохо до 5 - очень хорошо) # 25. freetime — свободное время после школы (от 1 - очень мало до 5 - очень мого) # 26. goout — проведение времени с друзьями (от 1 - очень мало до 5 - очень много) # 27. health — текущее состояние здоровья (от 1 - очень плохо до 5 - очень хорошо) # 28. absences — количество пропущенных занятий # 29. score — баллы по госэкзамену по математике # # **Задача** # * Проведите первичную обработку данных. Так как данных много, стоит написать функции, которые можно применять к столбцам определённого типа. # * Посмотрите на распределение признака для числовых переменных, устраните выбросы. # * Оцените количество уникальных значений для номинативных переменных. # * По необходимости преобразуйте данные # * Проведите корреляционный анализ количественных переменных # * Отберите не коррелирующие переменные. # * Проанализируйте номинативные переменные и устраните те, которые не влияют на предсказываемую величину (в нашем случае — на переменную score). # * Не забудьте сформулировать выводы относительно качества данных и тех переменных, которые вы будете использовать в дальнейшем построении модели. # # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from itertools import combinations from scipy.stats import ttest_ind import math as math pd.set_option('display.max_rows', 50) # показывать больше строк pd.set_option('display.max_columns', 50) # показывать больше колонок df = pd.read_csv('stud_math.csv') # - # Так, теперь посмотрим что у нас в данных с смысле сигнал/шум # + # and check some numbers def is_numeric_series(s): return pd.api.types.is_integer_dtype(series) or\ pd.api.types.is_float_dtype(series) numeric_fields = [] string_fields = [] for (columnName, columnData) in df.iteritems(): series = pd.Series(columnData) if(is_numeric_series(series)): numeric_fields.append(columnName) sns.boxplot(series) print(series.describe()) plt.show() else: string_fields.append(columnName) # ok, what do we have here? df[string_fields].apply(lambda s: print(s.value_counts(dropna=False), '[', s.isna().sum(), '/', s.value_counts(dropna=False).sum(), '] NaN')) # - # Самый пустой столбец - _PStatus_, но в принципе данные не выглядят слишком побитыми # # *Философские вопросы/соображения:* # 1. можно ли `NaN` объединить с other там, где есть этот вариант? Звучит логично по крайней мере для _Mjob/Fjob/reason/guardian_, так как в обоих случаях данных нет. Хотя, возможно, логичней было бы заменить other на `NaN`, чтобы иллюзий не питать?) # 2. _Fedu_ `40` — единственный адский выброс — я бы (в нарушение канона?) счел пропущенной точкой в `4.0`, но все же буду чистить, не сделает он погоды, почищу как не входящий в заявленную размерность. # 3. 
_absences_ — жуткие выбросы, но количество пропусков по идее сильно влияет на успеваемость, может, попробовать сделать еще столбец про них, с размерностью типа `1..4` (нет, не много, многовато, огого сколько), должна быть сильная корреляция. Интересно, это в часах? С первого класса? # 4. _age_, _Medu_, _traveltime_, _studytime_, _studytime, granular_, _failures_, _freetime_, _goout_ — не надо чистить выбросы, даже если они формально есть: данные попадают в размерность, все логично # 5. _famrel_ убрать `-1` # 6. В строковых данных синонимизировать ничего не надо, только с `NaN` разобраться (кроме проверки идеи из п 1). # 7. В _score_ — интересно, что значит `0`? Не пришел? Интересно, что это в принципе значит. # # По-хорошему, конечно, надо _absences_ почистить и сравнить с предположением. Хотя `400` — за гранью, конечно, значение. # # Наверное, стоит проанализировать выкинув по-честному все что не подходит, второй анализ провести. # + def break_into_quantiles_and_outliers_bins(frame, col_name, suffix): q1_temp = frame[col_name].quantile(0.25, interpolation='midpoint') q2_temp = frame[col_name].quantile(0.5, interpolation='midpoint') q3_temp = frame[col_name].quantile(0.75, interpolation='midpoint') iqr = q3_temp - q1_temp frame[col_name + suffix] = frame[col_name].apply(lambda x: 0 if (q1_temp - 1.5 * iqr > x) else ( 1 if x >= q1_temp - 1.5 * iqr and x < q1_temp else ( 2 if x >= q1_temp and x < q2_temp else ( 3 if x >= q2_temp and x < q3_temp else ( 4 if x >= q3_temp and q3_temp + 1.5 * iqr >= x else 5))))) def clean_outliers(frame, col_name, suffix): q1_temp = frame[col_name].quantile(0.25, interpolation='midpoint') q3_temp = frame[col_name].quantile(0.75, interpolation='midpoint') iqr = q3_temp - q1_temp has_outliers = q3_temp + 1.5 * \ iqr < frame[col_name].max() or q1_temp - 1.5 * \ iqr > frame[col_name].min() frame[col_name + suffix] = frame[col_name].apply(lambda x: None if ( q3_temp + 1.5 * iqr < x or q1_temp - 1.5 * iqr > x) else x) def clean_out_of_range(frame, col_name, suffix, min_val, max_val): frame[col_name + suffix] = frame[col_name].apply( lambda x: None if (x < min_val or x > max_val) else x) # clean up to fit in range clean_out_of_range(df, 'Medu', '', 1, 4) clean_out_of_range(df, 'Fedu', '', 1, 4) clean_out_of_range(df, 'famrel', '', 1, 4) # first pass on absences clean_outliers(df, 'absences', '_flat_cleaned') # print(df.absences_flat_cleaned.value_counts(dropna = False)) # and a test of data mapping break_into_quantiles_and_outliers_bins( df, 'absences', '_experimentally_mapped') # print(df.absences_experimentally_mapped.value_counts(dropna = False)) # try to join NaN with 'other' to compare for item in ['Mjob', 'Fjob', 'reason', 'guardian']: df[item + '_other_joined'] = df[item].apply( lambda x: 'other' if pd.isnull(x) else x) # cleanup NaNs from all string columns - just to be sure for item in string_fields: df[item] = df[item].apply(lambda x: None if pd.isnull(x) else x) # - # Итак, вроде все удалось почистить, + нагенерил еще несколько дополнительных полей, дальнейший план: # 1. Посмотреть `corr()`, чтобы оценить как поведение, описать выводы, отдельно посмотреть на соответствие _absences_flat_cleaned_ и _absences_experimentally_mapped_ # 2. Потом прогнать `boxplot()` по всем строковым полям, отдельно посмотреть на отношение `_other_joined` вариантов к обычным почищенным # # df.corr() # Итак, первичные количественные выводы: # 1. 
The main positive factors are the parents' education, with *the mother's education mattering most of all* (check against _guardian_), and study time (who would have thought) # 2. Secondary positive factors (in decreasing order): absences (I cannot see how that came about, it needs a deeper look), free time; family relations have only a very weak effect # 3. Negative factors (in decreasing order): other failures, age (check against _romantic_), time with friends, travel time to school, health (odd) # # df[(df.guardian == 'father')].corr() df[(df.guardian == 'mother')].corr() df[(df.Pstatus == 'A') & (df.guardian == 'father')].corr() df[(df.romantic == 'yes')].corr() # Digging a bit deeper: # 1. The inverse correlation with age stops playing a role if we keep only those who are in a romantic relationship, although quite possibly age is the primary driver here in the first place # 2. The mother's education still has a stronger effect than the father's: if the guardian is the father the difference is roughly 10x, and if the guardian is the mother it is less than 2x. Even students whose parents live apart and whose guardian is the father are still influenced more strongly by the mother's education. # # + # convert everything to number bool_fields = ['internet', 'romantic', 'higher', 'nursery', 'activities', 'paid', 'famsup', 'schoolsup'] for item in bool_fields: df[item + '_to_num'] = df[item].apply( lambda x: None if pd.isnull(x) else (1 if x == 'yes' else 0)) df.corr() # + def show_boxplot(frame, column): fig, ax = plt.subplots(figsize=(5, 4)) sns.boxplot(x=column, y='score', data=frame, ax=ax) plt.xticks(rotation=45) ax.set_title('Boxplot for ' + column) plt.show() # take a look at remaining strings for item in string_fields: if(item not in bool_fields): show_boxplot(df, item) if item + '_other_joined' in df: show_boxplot(df, item + '_other_joined') # - # The hypothesis that something more meaningful could be obtained by merging `NaN` with `other` was not confirmed. 
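# To back up the visual boxplot comparison with a number, one option is a pairwise Student's t-test over the levels of each nominative column, run both on the original columns and on their `_other_joined` versions. This is only a sketch of that idea, not part of the original analysis: the helper name `get_stat_dif` and the Bonferroni-style threshold are assumptions made here for illustration.
# +
from itertools import combinations   # already imported at the top of this notebook
from scipy.stats import ttest_ind    # already imported at the top of this notebook


def get_stat_dif(frame, column, target='score', alpha=0.05):
    # Compare the `target` distributions between every pair of levels of `column`
    # with an independent two-sample t-test and a crude Bonferroni-style correction.
    levels = frame[column].dropna().unique()
    pairs = list(combinations(levels, 2))
    for a, b in pairs:
        p = ttest_ind(frame.loc[frame[column] == a, target].dropna(),
                      frame.loc[frame[column] == b, target].dropna()).pvalue
        if p <= alpha / len(pairs):
            print(f"{column}: '{a}' vs '{b}' differ significantly (p = {p:.4f})")
            break


for col in ['Mjob', 'Fjob', 'reason', 'guardian']:
    get_stat_dif(df, col)
    get_stat_dif(df, col + '_other_joined')
# -
# If the same columns flag a significant difference with and without the merge, joining `NaN` into `other` adds no information, which matches the conclusion above.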
# everything else with two values to number df['school_to_num'] = df['school'].apply( lambda x: None if pd.isnull(x) else (1 if x == 'MS' else 0)) df['sex_to_num'] = df['sex'].apply( lambda x: None if pd.isnull(x) else (1 if x == 'M' else 0)) df['address_to_num'] = df['address'].apply( lambda x: None if pd.isnull(x) else (1 if x == 'R' else 0)) df['famsize_to_num'] = df['famsize'].apply( lambda x: None if pd.isnull(x) else (1 if x == 'LE3' else 0)) df['Pstatus_to_num'] = df['Pstatus'].apply( lambda x: None if pd.isnull(x) else (1 if x == 'T' else 0)) df['guardian_to_num'] = df['guardian'].apply(lambda x: None if ( pd.isnull(x) or x == 'other') else (1 if x == 'father' else 0)) df.corr() # + # Take every column with correlation > 0.1 # add Mjob and Fjob because their medians are different, as can be seen on the boxplots # add paid, schoolsup and sex, which have 0.9999..0.88 correlation to score: that seems not enough on its own, but is close enough that they might be useful for some extended selections # absences is not included because the 0.9 correlation was only found in a field that aggregated all the extreme and unrealistic outliers, so it does not seem suitable df_for_model = df.loc[:, ['score', 'age', 'Medu', 'Fedu', 'studytime', 'failures', 'goout']] df_for_model['romantic'] = df['romantic_to_num'] df_for_model['address'] = df['address_to_num'] df_for_model['Mjob'] = df['Mjob'] df_for_model['Fjob'] = df['Fjob'] df_for_model['paid'] = df['paid_to_num'] df_for_model['schoolsup'] = df['schoolsup_to_num'] df_for_model['sex'] = df['sex_to_num'] display(df_for_model.corr()) show_boxplot(df_for_model, 'Mjob') show_boxplot(df_for_model, 'Fjob') # - # **In summary, the main correlations:** # 1. Positive: parents' education, especially the mother's, and study time. # 2. Negative: past failures, romantic relationships, age, time out with friends, travel time to school, living in a rural area # 3. Weakly correlated, but possibly useful for additional filtering: paid courses, extra educational support at school, sex, parents' occupation # # I drew the line between strong and weak correlation at `0.1`, so, strictly speaking, that boundary is rather arbitrary. 
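# The `0.1` cut-off above can be made explicit in code, which also makes it easy to see how sensitive the final feature list is to moving the boundary. A minimal sketch, assuming the `df_for_model` frame built in the previous cell; the helper name `select_by_corr` is introduced here purely for illustration.
# +
def select_by_corr(frame, target='score', threshold=0.1):
    # Rank the numeric features by absolute correlation with the target and keep
    # only those above the chosen threshold; non-numeric columns (Mjob, Fjob)
    # are left out because .corr() cannot use them directly.
    numeric = frame.select_dtypes(include='number')
    corr = numeric.corr()[target].drop(target).abs().sort_values(ascending=False)
    return corr[corr > threshold].index.tolist()


print(select_by_corr(df_for_model))                   # features kept at the 0.1 boundary
print(select_by_corr(df_for_model, threshold=0.05))   # a softer boundary, for comparison
# -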
# *К сожалению, был огромный завал последний месяц, поэтому кучу всего не успел, да и на модуль не было времени сколько хотел* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="NNylsgsF7sh4" colab_type="text" # **Importing Essential Libraries** # # + id="LGLgP-4B2xk1" colab_type="code" colab={} # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import os import zipfile import cv2 import tensorflow as tf import pickle from keras.models import Sequential from keras.layers import Conv2D, MaxPool2D ,AveragePooling2D, Flatten, Dropout from keras.layers.core import Dense from keras.optimizers import RMSprop,Adam,SGD from keras.layers.normalization import BatchNormalization from keras.layers.core import Activation from keras.preprocessing.image import ImageDataGenerator from keras.models import model_from_json from keras import regularizers from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix sns.set() # + [markdown] id="L7t6MW6266IT" colab_type="text" # **Unzipping the Train and Test sets** # + id="jLy_moUs5qdO" colab_type="code" outputId="16586414-0f34-4010-ce37-a098532f0735" colab={"base_uri": "https://localhost:8080/", "height": 34} os.getcwd() # + id="MXLkoPXU5yD6" colab_type="code" colab={} handle_train=zipfile.ZipFile(r'/content/Train.zip') handle_train.extractall('/content/train') handle_train.close() handle_test=zipfile.ZipFile(r'/content/Test.zip') handle_test.extractall('/content/test') handle_test.close() # + id="qxjrbYfU54HA" colab_type="code" colab={} train_images=os.listdir('/content/train/Train/') test_images = os.listdir('/content/test/Test') filepath_train = '/content/train/Train/' filepath_test = '/content/test/Test/' # + id="U_8LmO0U69qf" colab_type="code" outputId="c2841ce4-20f6-46ec-acf6-902f8f4cae52" colab={"base_uri": "https://localhost:8080/", "height": 343} df_train = pd.read_csv('/content/train.csv') df_train.head(10) # + id="7ACwwtVNPvtH" colab_type="code" outputId="51eff2d5-75c0-4fae-d76b-12c02a14a46e" colab={"base_uri": "https://localhost:8080/", "height": 195} sample_submn = pd.read_csv('/content/sample_submission_sDO3m7O.csv') sample_submn.head() # + [markdown] id="Jvhi0rCX787K" colab_type="text" # **Reading & Resizing Training and Testing images** # + id="hvXb3AUj6z8v" colab_type="code" colab={} images=[] labels=[] for index, row in df_train.iterrows(): image=cv2.imread(filepath_train+row['ID']) image=cv2.resize(image , (72,72)) images.append(image) labels.append(row['Class']) #print(row['ID']) # + id="w7qzyfofasc8" colab_type="code" colab={} images_test=[] outputs=[] for index,row in sample_submn.iterrows(): image=cv2.imread(filepath_test+row['ID']) image=cv2.resize(image , (72,72)) images_test.append(image) outputs.append(image) # + [markdown] id="EM0u7h-m8WcA" colab_type="text" # **Displaying couple of images for Sanity check** # + id="3nNeWGPA9qYy" colab_type="code" outputId="48a8741d-fe83-4e49-c493-12e7f3267b2a" colab={"base_uri": "https://localhost:8080/", "height": 287} plt.imshow(images[1]) # + [markdown] id="vXE3Fa6h8hD4" colab_type="text" # Converting into an n-d array and normalizing the image pixels # + id="zf-XfMvD-4A6" colab_type="code" colab={} images = 
np.array(images, dtype="float") / 255.0 images_test = np.array(images_test, dtype="float") / 255.0 labels = np.array(labels) # + [markdown] id="9nWiIPpi82Bd" colab_type="text" # **Splitting into train and test set for training** # + id="YuOAXfJT_KRY" colab_type="code" colab={} (trainX, testX, trainY, testY) = train_test_split(images,labels, test_size=0.10, random_state=42) # + id="4GbOd2DW_P7L" colab_type="code" outputId="83d99a16-b77f-4aac-ed75-6f155ab4ac8f" colab={"base_uri": "https://localhost:8080/", "height": 151} print(type(trainX)) print(trainX.shape) print(type(trainY)) print(trainY.shape) print(type(testX)) print(testX.shape) print(type(testY)) print(testY.shape) # + [markdown] id="1wDvTmjW8-zK" colab_type="text" # Binarizing the output categories # + id="5dOPPpO6_SC8" colab_type="code" colab={} lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) # + id="By4suEdA_WsG" colab_type="code" outputId="9fde9bd7-1a81-4823-d131-276fa0cf3189" colab={"base_uri": "https://localhost:8080/", "height": 34} lb.classes_ # + id="1x8RUY1Odw9i" colab_type="code" outputId="1c00276d-9c86-4a02-ffdd-09f1afa5c9c5" colab={"base_uri": "https://localhost:8080/", "height": 134} trainY # + id="RKpTpjVBFubs" colab_type="code" colab={} class myCB(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('val_accuracy')>0.91): print('\nReached least val_loss') self.model.stop_training = True # + id="0ts_VOAVHPOY" colab_type="code" colab={} cb = myCB() # + [markdown] id="e2H6sjsT9HtI" colab_type="text" # **Model Implementation** # + id="PYzJ8mGa_WuT" colab_type="code" colab={} model = Sequential() model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = "same", activation ='relu', input_shape = (72,72,3))) model.add(BatchNormalization(axis=-1)) model.add(MaxPool2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Conv2D(filters=64,kernel_size=(3,3), padding="same",activation="relu")) model.add(BatchNormalization(axis=-1)) model.add(Conv2D(filters=64, kernel_size=(3,3), padding="same",activation="relu")) model.add(BatchNormalization(axis=-1)) model.add(MaxPool2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same",activation="relu")) model.add(BatchNormalization(axis=-1)) model.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same",activation="relu")) model.add(BatchNormalization(axis=-1)) model.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same",activation="relu")) model.add(BatchNormalization(axis=-1)) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(1024, activation="relu",kernel_regularizer=regularizers.l2(0.001))) model.add(BatchNormalization()) model.add(Dropout(0.25)) model.add(Dense(84, activation="relu",kernel_regularizer=regularizers.l2(0.001))) model.add(BatchNormalization()) model.add(Dropout(0.5)) # softmax classifier model.add(Dense(3,activation="softmax")) # + id="IHi3Jqs1d2eP" colab_type="code" outputId="b386745e-2377-47aa-d31c-9bbb44796740" colab={"base_uri": "https://localhost:8080/", "height": 1000} model.summary() # + id="q_jgn9Lq_Ww5" colab_type="code" colab={} INIT_LR = 0.025 #0.05 EPOCHS = 300 BS = 64 opt=SGD(lr=INIT_LR) #Adam,Adagrad,RMSprop model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # + [markdown] id="s21wykgZ9Va0" colab_type="text" # **Image Data Augmentation** # + id="k1rOMYpdhKom" colab_type="code" colab={} aug = ImageDataGenerator(rotation_range=20, 
width_shift_range=0.1, height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,horizontal_flip=True, fill_mode="nearest") # + [markdown] id="bx26wKMw9Uxx" colab_type="text" # **Fitting the model onto the training set** # + id="6QQqkrgp_hkf" colab_type="code" outputId="aca369fa-99b3-4ed7-8867-66d8b7f2b66c" colab={"base_uri": "https://localhost:8080/", "height": 1000} H = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS),validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS,epochs=EPOCHS,callbacks=[cb]) # + id="mnPPzEwb_kQ4" colab_type="code" outputId="a41f4300-3996-4e3f-da34-bdaf30c00f95" colab={"base_uri": "https://localhost:8080/", "height": 185} predictions = model.predict(testX, batch_size=BS) print(classification_report(testY.argmax(axis=1),predictions.argmax(axis=1), target_names=lb.classes_)) # + [markdown] id="0Zg5pYHH9iH6" colab_type="text" # **Loss-Accuracy Tradeoff graph** # + id="i6L4Hg39Artq" colab_type="code" outputId="aa96f7d3-c4e7-4dd5-b90c-201f41a9f303" colab={"base_uri": "https://localhost:8080/", "height": 628} # plot the training loss and accuracy N = np.arange(0, EPOCHS) #setting up x axis plt.style.use("ggplot") plt.figure(figsize=(15,10)) plt.plot(N, H.history["loss"],'y', label="train_loss") plt.plot(N, H.history["val_loss"],'g', label="val_loss") plt.plot(N, H.history["accuracy"],'r', label="train_acc") plt.plot(N, H.history["val_accuracy"],'b', label="val_acc") plt.title("Training Loss and Accuracy (CNN for age classification)") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend() plt.show() # + [markdown] id="B84MWXcC9sgA" colab_type="text" # **Predicting for test set values** # + id="r4HB9KsyVpCS" colab_type="code" outputId="d119f71c-1193-44b1-8ae4-fc493632ddd3" colab={"base_uri": "https://localhost:8080/", "height": 134} pred = model.predict(images_test) pred # + id="A5gN8LFy0TP4" colab_type="code" outputId="d8aa99bc-6c76-4c79-ea28-303dd5c40cff" colab={"base_uri": "https://localhost:8080/", "height": 50} indexes = np.random.randint(0,6636,16) indexes # + id="vNjvJXxtWqhN" colab_type="code" colab={} i_vals = [] for i in indexes: val = pred.argmax(axis=1)[i] i_vals.append(val) # + id="B26kCmgN0tgg" colab_type="code" outputId="477842f1-a87f-4a54-e448-ff3c5ee8259a" colab={"base_uri": "https://localhost:8080/", "height": 34} i_vals # + id="lVOni6ruXDs3" colab_type="code" colab={} vals = [] for i in indexes: val = np.amax(pred , axis=1)[i] vals.append(val) # + id="5fI0Abr41DEt" colab_type="code" outputId="422eabe9-8d76-4f55-ff97-16080752cf71" colab={"base_uri": "https://localhost:8080/", "height": 286} vals # + id="lZA-gr_B1Eqb" colab_type="code" outputId="e07b9a8c-c1c5-4387-aa87-2062bac8d34b" colab={"base_uri": "https://localhost:8080/", "height": 286} vals = [i*100 for i in vals] vals # + id="2GyIfPZb1MQx" colab_type="code" outputId="60f3b65a-40db-476c-d838-1d1742d2f44f" colab={"base_uri": "https://localhost:8080/", "height": 286} vals = [round(num,2) for num in vals] vals # + id="XV1n3x861WM2" colab_type="code" colab={} labels = [] for i in range(16): label = lb.classes_[i_vals[i]] labels.append(label) # + id="nU1qTs241xNV" colab_type="code" outputId="cc86a14f-ee4c-4cdf-eb64-281bd071182a" colab={"base_uri": "https://localhost:8080/", "height": 286} labels # + id="EBtNiIkF2GoZ" colab_type="code" colab={} from imutils import build_montages # + id="lhk4eSef10O3" colab_type="code" colab={} results = [] for i in range(16): image = outputs[indexes[i]] text = labels[i] + " : " + str(vals[i]) image = cv2.resize(image , (300,300)) 
cv2.putText(image,text,(10,50),cv2.FONT_HERSHEY_SIMPLEX,0.7,(0,255,0),2) results.append(image) # + id="jnEBlhPyYckI" colab_type="code" colab={} from google.colab.patches import cv2_imshow # + id="NSgQRTUB2hmy" colab_type="code" outputId="ed20ebd1-44a6-4d3c-e9e1-1a416097a73f" colab={"base_uri": "https://localhost:8080/", "height": 818} montage = build_montages(results,(196,196),(4,4))[0] cv2_imshow(montage) cv2.waitKey(0) # + [markdown] id="2sc0ifCs90mT" colab_type="text" # **Displaying final image again after classification** # + id="CtfHbSd8Y04l" colab_type="code" outputId="1f226d0f-b62d-419d-e09d-51b2264284f1" colab={"base_uri": "https://localhost:8080/", "height": 334} text = labels[1]+": "+str(vals[1]) outputs[1] = cv2.resize(outputs[1] , (300,300)) cv2.putText(outputs[1], text , (10,50), cv2.FONT_HERSHEY_SIMPLEX, 0.7 ,(0, 255, 0), 2) # show the output image cv2_imshow(outputs[1]) cv2.waitKey(0) # + id="_QAs529mY5dS" colab_type="code" outputId="efa50ea1-d84e-48d7-c7a4-feb3b4e60612" colab={"base_uri": "https://localhost:8080/", "height": 287} plt.imshow(images_test[1]) # + id="PzKA8W0TPzmo" colab_type="code" outputId="88380ceb-69e7-484a-98a1-11d8e71383cf" colab={"base_uri": "https://localhost:8080/", "height": 34} all_indexes = pred.argmax(axis=1) all_indexes # + id="aotEvUglPmlm" colab_type="code" outputId="3537702f-7554-48ad-9766-5be84472d7aa" colab={"base_uri": "https://localhost:8080/", "height": 50} output_labels = lb.classes_[all_indexes] output_labels # + id="LZjytih_YiWL" colab_type="code" outputId="3b9e289c-4ae4-46fd-8a66-e63157ec4326" colab={"base_uri": "https://localhost:8080/", "height": 402} submission = pd.DataFrame({'Class':output_labels,'ID':sample_submn['ID']}) submission # + id="lpd4l-aaY29c" colab_type="code" colab={} submission.to_csv('submission_agenet7.csv',index=False) # + id="1AbsyP9bZNP8" colab_type="code" colab={} from google.colab import files files.download("submission_agenet7.csv") # + [markdown] id="yLJPvCDY984G" colab_type="text" # **Exporting .h5 and .json files for Deployment in Flask** # + id="9A6XkVMpb3pu" colab_type="code" colab={} #model_json = model.to_json() #with open("model.json", "w") as json_file: # json_file.write(model_json) # serialize weights to HDF5 #model.save_weights("model.h5") # + id="JqzvBglbdsb7" colab_type="code" colab={} #from google.colab import files #files.download("model.h5") # + id="JYgsPVecd8DH" colab_type="code" colab={} #from google.colab import files #files.download("model.json") # + id="KLGI_ff9eDN7" colab_type="code" colab={} # + id="hy_XWRGQeDRC" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() rng= np.random.RandomState(1) X=np.dot(rng.rand(2,2), rng.randn(2,200)).T plt.scatter(X[:,0], X[:,1]) from sklearn.decomposition import PCA pca=PCA(n_components=2) pca.fit(X).get_params print(pca.components_) print(pca.explained_variance_) # + def draw_vector(v0, v1, ax=None): ax=ax or plt.gca() arrowprops=dict(arrowstyle ='->', lw=2, shrinkA=0, shrinkB=0) ax.annotate('', v1,v0, arrowprops=arrowprops) #plot data plt.scatter(X[:,0], X[:,1], alpha=0.2) for length, vector in zip(pca.explained_variance_, pca.components_): v=vector*3*np.sqrt(length) draw_vector(pca.mean_, pca.mean_+v) plt.axis('equal'); # - pca 
=PCA(n_components=1) pca.fit(X) X_pca=pca.transform(X) print("original shape:", X.shape) print("transformed shape:", X_pca.shape) X_new=pca.inverse_transform(X_pca) plt.scatter(X[:, 0], X[:, 1], alpha=0.2) plt.scatter(X_new[:,0], X_new[:,1], alpha=0.8) plt.axis('equal'); from sklearn.datasets import load_digits digits = load_digits() digits.data.shape pca=PCA(2) projected = pca.fit_transform(digits.data) print(digits.data.shape) print(projected.shape) plt.scatter(projected[:, 0], projected[:,1], c=digits.target, edgecolor=None, alpha=0.5, cmap=plt.cm.get_cmap('spectral', 10)) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar(); pca =PCA().fit(digits.data) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); # ## noise filtering def plot_digits(data): fig, axes=plt.subplots(4,10, figsize=(10,4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(data[i]. reshape(8,8), cmap='binary', interpolation='nearest', clim=(0,16)) plot_digits(digits.data) np.random.seed(42) noisy=np.random.normal(digits.data, 4) plot_digits(noisy) pca = PCA(0.50).fit(noisy) pca.n_components_ components = pca.transform(noisy) filtered = pca.inverse_transform(components) plot_digits(filtered) from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60) print(faces.target_names) print(faces.images.shape) from sklearn.decomposition import RandomizedPCA pca = RandomizedPCA(150) pca.fit(faces.data) fig, axes =plt.subplots(3,8, figsize=(9,4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace =0.1)) for i, ax in enumerate(axes.flat): ax.imshow(pca.components_[i].reshape(62,47), cmap='bone') pca=RandomizedPCA(150).fit(faces.data) components=pca.transform(faces.data) projected =pca.inverse_transform(components) # + fig, ax =plt.subplots(2,10, figsize =(10,2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0,i].imshow(faces.data[i].reshape(62, 47), cmap ='binary_r') ax[1,i].imshow(projected[i].reshape(62,47), cmap='binary_r') ax[0,0].set_ylabel('full-dim\ninput') ax[1,0].set_ylabel('150-dim\nreconstruction'); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python3 (fastai) # language: python # name: fastai # --- # https://leetcode.com/problems/remove-duplicates-from-sorted-list/ # + # Definition for singly-linked list. 
class ListNode: def __init__(self, x): self.val = x self.next = None # my solution class Solution: def deleteDuplicates(self, head: ListNode) -> ListNode: node = head while node: nxt = node.next while nxt and nxt.val == node.val: nxt = nxt.next node.next = node = nxt return head # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="sXatvRX899i0" # ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) # + [markdown] id="Lt-CiWRewNWD" # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop//blob/master/tutorials/Certification_Trainings/Public/6.Playground_DataFrames.ipynb) # + [markdown] id="-Alh8i-_fJ59" # # Spark DataFrames Playground # + id="sJ0MnF3bfWbe" # ! pip install -q spark-nlp==3.4.2 pyspark==3.2.0 # + [markdown] id="4ZucC6lQ2pYQ" # # if you want to work with Spark 2.3 # ``` # import os # # # Install java # # # ! apt-get update -qq # # # ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null # # # # !wget -q https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz # # # # !tar xf spark-2.3.0-bin-hadoop2.7.tgz # # # !pip install -q findspark # # os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" # os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] # os.environ["SPARK_HOME"] = "/content/spark-2.3.0-bin-hadoop2.7" # # # ! java -version # # import findspark # findspark.init() # from pyspark.sql import SparkSession # # # # ! pip install --ignore-installed -q spark-nlp==2.7.5 # # import sparknlp # # spark = sparknlp.start(spark23=True) # ``` # + colab={"base_uri": "https://localhost:8080/", "height": 254} outputId="c1aaeaa5-868c-444c-b4cc-f2e78ddf2996" id="pB-mZFa4O8ct" executionInfo={"status": "ok", "timestamp": 1649101837409, "user_tz": -180, "elapsed": 52138, "user": {"displayName": "Monster C", "userId": "08787989274818793476"}} import sparknlp import pandas as pd from pyspark.ml import Pipeline from pyspark.ml import PipelineModel from sparknlp.annotator import * from sparknlp.base import * spark = sparknlp.start(spark32=True) # for GPU training >> sparknlp.start(gpu = True) # for Spark 2.3 =>> sparknlp.start(spark23 = True) print("Spark NLP version", sparknlp.version()) print("Apache Spark version:", spark.version) spark # + id="JLulYEvRfDhG" document = DocumentAssembler()\ .setInputCol('text')\ .setOutputCol('document') # + id="tLfu1NU_fDhJ" tokenizer = Tokenizer()\ .setInputCols('document')\ .setOutputCol('token') # + id="HxZ9s3YCfDhM" colab={"base_uri": "https://localhost:8080/"} outputId="ff627c2e-8d66-41f6-afe4-1c4e83be48b0" pos = PerceptronModel.pretrained()\ .setInputCols(['document', 'token'])\ .setOutputCol('pos') # + id="cI-uo9ZbfDhR" pipeline = Pipeline().setStages([ document, tokenizer, pos]) # + id="MJoifWwRfjrG" # !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/annotation/english/spark-nlp-basics/sample-sentences-en.txt # + id="zGksf1hCfDhU" executionInfo={"status": "ok", "timestamp": 1649101840615, "user_tz": -180, "elapsed": 2742, "user": {"displayName": "Monster C", "userId": "08787989274818793476"}} data = spark.read.text('./sample-sentences-en.txt').toDF('text') # + id="KgTNu0_KfDhX" colab={"base_uri": "https://localhost:8080/"} 
outputId="6f78c799-970a-4d11-ad76-670c711e278a" executionInfo={"status": "ok", "timestamp": 1649101845958, "user_tz": -180, "elapsed": 3152, "user": {"displayName": "Monster C", "userId": "08787989274818793476"}} data.show(5, truncate=False) # + id="PAnijYcYfDhb" model = pipeline.fit(data) # + id="5_fTTzUPfDhd" result = model.transform(data) # + id="ll7rPvWefDhg" colab={"base_uri": "https://localhost:8080/"} outputId="7efc01c0-b901-4167-a3dd-247342279d68" result.show(5) # + id="i06WW8wzfDhk" stored = result\ .select('text', 'pos.begin', 'pos.end', 'pos.result', 'pos.metadata')\ .toDF('text', 'pos_begin', 'pos_end', 'pos_result', 'pos_meta')\ .cache() # + id="MBQWLPjzfDhp" colab={"base_uri": "https://localhost:8080/"} outputId="3e64a114-c775-452c-b2e9-c04ace34cf78" stored.printSchema() # + id="X93ASKmGfDhw" colab={"base_uri": "https://localhost:8080/"} outputId="c0b9b134-db3d-4989-a609-6c86fdeecd76" stored.show(5) # + [markdown] id="rI8rO1GjfDhz" # --------- # ## Spark SQL Functions # + id="5c5OVnNafDh0" from pyspark.sql.functions import * # + id="f_nWknqlfDh3" colab={"base_uri": "https://localhost:8080/"} outputId="dbce3274-7e4f-4f80-f11e-dd2a18bd91bd" stored.filter(array_contains('pos_result', 'VBD')).show(5) # + id="WwBH_f-1fDh7" colab={"base_uri": "https://localhost:8080/"} outputId="92ced3d6-2fd4-4619-ef8f-8612c45018e2" stored.withColumn('token_count', size('pos_result')).select('pos_result', 'token_count').show(5) # + id="CZn-kEFifDh_" colab={"base_uri": "https://localhost:8080/"} outputId="38af3b92-408f-4799-bfec-9fed57721e4f" stored.select('text', array_max('pos_end')).show(5) # + id="GfzcYDcFfDiC" colab={"base_uri": "https://localhost:8080/"} outputId="7474844d-88e0-4621-f4f4-24dbc048bfa0" stored.withColumn('unique_pos', array_distinct('pos_result')).select('pos_result', 'unique_pos').show(5) # + id="W9k5hwUSfDiE" colab={"base_uri": "https://localhost:8080/"} outputId="92af2638-5579-45cb-920d-9564d168a932" stored.groupBy(array_distinct('pos_result')).count().show(10) # + [markdown] id="5O_ERu47fDiI" # ## SQL Functions with `col` # + id="BP2dz_BqfDiJ" from pyspark.sql.functions import col # + id="H9a1KaVEfDiM" colab={"base_uri": "https://localhost:8080/"} outputId="4bad3bc0-66be-4b16-9ff0-be3e2bbed4c3" stored.select(col('pos_meta').getItem(0).getItem('word')).show(5) # + [markdown] id="B0TtaUgUfDiP" # ------------- # ## Spark NLP Annotation UDFs # + id="gd9sv5LcxEDf" import pandas as pd from sparknlp.functions import * from pyspark.sql.types import ArrayType, StringType # + id="Psqxd7eWfDiQ" colab={"base_uri": "https://localhost:8080/"} outputId="1a6944ad-1c65-4fbd-b4aa-ac11ee24ec39" result.select('pos').show(1, truncate=False) # + id="miYDmJiSfDiU" @udf( StringType()) def nn_annotation(res,meta): nn = [] for i,j in zip(res,meta): if i == "NN": nn.append(j["word"]) return nn # + id="GeRbqnsiMAGP" outputId="547cef1f-003e-4db7-f79e-4cdb7d362f5d" colab={"base_uri": "https://localhost:8080/"} result.withColumn("nn_tokens", nn_annotation(col("pos.result"), col("pos.metadata")))\ .select("nn_tokens")\ .show(truncate=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Earthquake localization using progressive search grid # # Using pytorch GPU-accelerated engine # !pip install python-dateutil geopy # + from datetime import datetime import dateutil.parser # panyaungan2018 eq_lat = -7.092 eq_lon = 105.963 
eq_epicenter = (eq_lat, eq_lon) eq_depth = 48.2 eq_mw = 6.0 eq_origin_time = dateutil.parser.parse('2018-01-23T06:34:54.000+00:00') print('EQ origin_time=%s' % (eq_origin_time)) # + import pandas as pd # arrivals = pd.read_csv('panyaungan2018_real.csv') arrivals = pd.read_csv('panyaungan2018_augmented.csv') arrivals # - # Parse arrival time # arrivals['arrivalTime_dt'] = arrivals.apply(lambda row: dateutil.parser.parse(row['arrivalTime']), axis=1) arrivals['arrivalTime_ts'] = arrivals.apply(lambda row: dateutil.parser.parse(row['arrivalTime']).timestamp(), axis=1) arrivals['latency'] = arrivals['arrivalTime_ts'] - eq_origin_time.timestamp() arrivals # Sort based on earliest arrival time arrivals_sorted = arrivals.sort_values('arrivalTime_ts') arrivals_sorted # Cut only 3 nearest stations arrivals_sorted = arrivals_sorted[:3] arrivals_sorted # Earliest station p1 = arrivals_sorted.iloc[1] p1 # Arrival time of earliest station t1 = p1['arrivalTime_ts'] t1 p1_loc = (p1['lat'], p1['lon']) p1_loc # + # Make a grid using 3-deg radius def round_res(num: float, resolution: float): return round(num / resolution) * resolution p1_locr = (round_res(p1_loc[0], 0.3), round_res(p1_loc[1], 0.3)) p1_locr # + import numpy as np import torch import torch.cuda point_interval = 0.1 lats = torch.arange(p1_locr[0] - 1.2, p1_locr[0] + 1.2 + 0.001, point_interval, dtype=torch.float, device='cuda') lons = torch.arange(p1_locr[1] - 1.2, p1_locr[1] + 1.2 + 0.001, point_interval, dtype=torch.float, device='cuda') n_points = len(lats) * len(lons) print('lats=%s lons=%s len=%dx%d n_points=%d' % (lats, lons, len(lats), len(lons), n_points)) # - grid = torch.empty([n_points, 2], dtype=torch.float, device='cuda') # our lat lon is just approximate grid[0] for lat_idx, lat in enumerate(lats): for lon_idx, lon in enumerate(lons): idx = lat_idx * len(lons) + lon_idx grid[idx, 0] = lat grid[idx, 1] = lon grid[:5] # + from datetime import timedelta times = torch.empty(31, dtype=torch.float64, device='cuda') # MUST be 64-bit! for idx, sec in enumerate(range(31)): print('%s - %s = %s' % (t1, sec, t1 - sec)) times[idx] = t1 - sec print(len(times)) times # + eq_vel_deg = 0.067 # Primary wave velocity = 0.067 degree / second eq_vel_m = 7437 # Primary wave velocity = 0.067 deg/s * 111000 = 7437 m/s velocities = torch.empty(1, dtype=torch.float16, device='cuda') velocities[0] = eq_vel_m velocities # - # ## Weighted RMSE # # $$\text{wRMSE} = \sqrt{\sum_{i=1}^n w_i (\hat T_i - T_i)^2}$$ # from geopy import distance # + # https://gist.github.com/nickjevershed/6480846 import math def distance_on_sphere(a: torch.Tensor, b: torch.Tensor, radius: float): # Convert latitude and longitude to # spherical coordinates in radians. degrees_to_radians = math.pi/180.0 # phi = 90 - latitude phi1 = (90.0 - a.select(4, 0)) * degrees_to_radians phi2 = (90.0 - b.select(4, 0)) * degrees_to_radians # theta = longitude theta1 = a.select(4, 1) * degrees_to_radians theta2 = b.select(4, 1) * degrees_to_radians # Compute spherical distance from spherical coordinates. # For two locations in spherical coordinates # (1, theta, phi) and (1, theta, phi) # cosine( arc length ) = # sin phi sin phi' cos(theta-theta') + cos phi cos phi' # distance = rho * arc length cos = (torch.sin(phi1) * torch.sin(phi2) * torch.cos(theta1 - theta2) + torch.cos(phi1) * torch.cos(phi2)) arc = torch.acos( cos ) # Remember to multiply arc by the radius of the earth # in your favorite set of units to get length. 
return arc * radius # + def score_stations(station_locs: torch.Tensor, actual_arrivals: torch.Tensor, epicenter_es: torch.Tensor, origin_time_es: torch.Tensor, velocity_es: torch.Tensor): '''Calculate distance (m) and the error of arrival time (seconds) for given stations and given solution parameters (epicenter, origin time, velocity).''' # Returned shape is E x O x V x S x 2 # where E=length of epicenters, O=length of origin times, V=length of velocities, S=length of stations # print('Dists traditional...') # dists = torch.empty(len(epicenter_es), len(origin_time_es), len(velocity_es), len(station_locs), # dtype=torch.float64, device='cuda') # for epicenter_idx, epicenter_e in enumerate(epicenter_es): # for station_idx, station_loc in enumerate(station_locs): # dist = distance.distance(epicenter_e, station_loc).m # for origin_time_idx, origin_time_e in enumerate(origin_time_es): # for velocity_idx, velocity_e in enumerate(velocity_es): # dists[epicenter_idx][origin_time_idx][velocity_idx][station_idx] = dist # print('Dists geodesic + expand...') # dists = torch.empty(len(epicenter_es), 1, 1, len(station_locs), # dtype=torch.float64, device='cuda') # # print('dists: %s' % (dists.size(),)) # for epicenter_idx, epicenter_e in enumerate(epicenter_es): # for station_idx, station_loc in enumerate(station_locs): # dist = distance.distance(epicenter_e, station_loc).m # # dist = distance.great_circle(epicenter_e, station_loc).m # dists[epicenter_idx][0][0][station_idx] = dist # print('Dists expand...') # dists.expand(-1, len(origin_time_es), len(velocity_es), -1) print('Dists great circle (GPU) + expand...') dists1 = torch.empty(len(epicenter_es), 1, 1, len(station_locs), 2, dtype=torch.float64, device='cuda') dists2 = torch.empty(len(epicenter_es), 1, 1, len(station_locs), 2, dtype=torch.float64, device='cuda') for epicenter_idx, epicenter_e in enumerate(epicenter_es): for station_idx, station_loc in enumerate(station_locs): dists1[epicenter_idx][0][0][station_idx] = epicenter_e dists2[epicenter_idx][0][0][station_idx] = station_loc dists = distance_on_sphere(dists1, dists2, distance.EARTH_RADIUS * 1000) print('Dists expand...') dists.expand(-1, len(origin_time_es), len(velocity_es), -1) # print('dists = %s' % dists) print('Travel durations...') travel_durs = dists / eq_vel_m # print('travel_durs %s = %s' % (travel_durs.size(), travel_durs)) print('Arrival times...') origin_time_x = origin_time_es.view(1, len(origin_time_es), 1, 1).expand(len(epicenter_es), -1, len(velocity_es), len(station_locs)) arrival_times_es = origin_time_x + travel_durs # print('arrival_times_es = %s' % arrival_times_es) # print('actual_arrivals = %s' % actual_arrivals) print('Arrival time errors...') arrival_errors = arrival_times_es - actual_arrivals # print('arrival_errors = %s' % arrival_errors) # print('From epicenter %s to station %s %s (%d km) in %s s' % # (epicenter_e, station_code, station_loc, dist_m/1000, travel_dur)) # print('Est arrival %s actual %s error=%s' % # (arrival_time_e, actual_arrival, arrival_error)) return (dists, arrival_errors) station_codes = list(arrivals_sorted['station']) print(station_codes) station_locs = torch.Tensor([arrivals_sorted['lat'], arrivals_sorted['lon']]).transpose(0, 1) print(station_locs) actual_arrivals = torch.cuda.DoubleTensor(arrivals_sorted['arrivalTime_ts']) print(actual_arrivals) epicenter_e = torch.tensor([-9.899999999999999, 103.5], dtype=torch.float, device='cuda') origin_time_e = dateutil.parser.parse('2018-01-23T06:35:03.851000+00:00').timestamp() 
print('epicenter_e = %s' % epicenter_e) epicenter_es = torch.unsqueeze(epicenter_e, 0) print('epicenter_es = %s' % epicenter_es) origin_time_es = torch.tensor([origin_time_e], dtype=torch.double, device='cuda') print(origin_time_es) score_stations(station_locs, actual_arrivals, epicenter_es, origin_time_es, velocities) # - # Estimated wrong one with 0.0020 RWSSE epicenter_es = torch.unsqueeze(grid[111], 0) origin_time_es = torch.unsqueeze(times[28], 0) score_stations(station_locs, actual_arrivals, epicenter_es, origin_time_es, velocities) # Should be correct one? with 0.0020 RWSSE grid # epicenter_es = torch.unsqueeze(grid[111], 0) # origin_time_es = torch.unsqueeze(times[28], 0) #score_stations(station_locs, actual_arrivals, epicenter_es, origin_time_es, velocities) # + # MULTI-SOLUTION ROWS VERSION def score_solutions(station_codes: list, station_locs: torch.Tensor, actual_arrivals: torch.Tensor, epicenter_es: torch.Tensor, origin_time_es: torch.Tensor, velocity_es: torch.Tensor): '''Returns the RWSSE for all stations given parameters containing estimates.''' # score_station(stations.iloc[0]['station'], (stations.iloc[0]['lat'], stations.iloc[0]['lon']), # stations.iloc[0]['arrivalTime_dt'], # epicenter_e, origin_time_e) (dists, arrival_errors) = score_stations(station_locs, actual_arrivals, epicenter_es, origin_time_es, velocity_es) # print(result_arr) print('Square error...') ses = torch.pow(arrival_errors, 2) print('Weighted SE...') weighted_ses = torch.mul(1 / dists, ses) # weighted_ses = ses # print(weighted_ses) print('Sum...') wsse = weighted_ses.sum(3, dtype=torch.float16) # only need for reasonable precision here # print(wsse) print('Root...') rwsse = torch.sqrt(wsse) # print('RWSSEs: %s' % rwsse) return rwsse station_codes = list(arrivals_sorted['station']) # print(station_codes) station_locs = torch.cuda.HalfTensor([arrivals_sorted['lat'], arrivals_sorted['lon']]).transpose(0, 1) #print(station_locs) actual_arrivals = torch.cuda.DoubleTensor(arrivals_sorted['arrivalTime_ts']) # print(actual_arrivals) # print(grid[:2]) # print(velocities) # print(times[:15]) score_solutions(station_codes, station_locs, actual_arrivals, grid[:2], times[:2], velocities) # + # Let's score everything! import time print('Scoring %d x %d x %d = %d combinations...' % (len(grid), len(times), len(velocities), len(grid) * len(times) * len(velocities))) start = time.process_time() scores = score_solutions(station_codes, station_locs, actual_arrivals, grid, times, velocities) elapsed = time.process_time() - start print('Scored in %s seconds' % elapsed) print(scores.size()) # - scores best_score = torch.min(scores) best_score best_idxs = (scores == best_score).nonzero() best_idx = best_idxs[0] best_idx times[28].item() (epicenter_e_idx, origin_time_e_idx, velocity_e_idx) = best_idx epicenter_e_idx # + from datetime import timezone epicenter_e = grid[epicenter_e_idx] origin_time_e = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=times[origin_time_e_idx].item()) velocity_e = velocities[velocity_e_idx] # - t_last_offset = np.round((arrivals_sorted.tail(1)['arrivalTime_ts'].item() - eq_origin_time.timestamp()) / 0.001) * 0.001 epicenter_err = distance.distance(epicenter_e, eq_epicenter).km origin_time_err = abs((origin_time_e - eq_origin_time).total_seconds()) print('Estimated epicenter: %s. Actual epicenter: %s.' % (np.array(epicenter_e), eq_epicenter)) print('Estimated origin time: %s. Actual origin time: %s.' 
% (origin_time_e, eq_origin_time)) print('Estimated P-wave velocity: %s m/s' % velocity_e.item()) print('Decision latency: T + %s seconds. Epicenter error: ±%d km. Origin time error: ±%s seconds.' % (t_last_offset, epicenter_err, origin_time_err)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Gather recent Twitter data import tweepy as tw import datetime import csv import pandas as pd api_key = "tnpWnjHBc1tqXQiA2aVF9kE1C" api_key_secret = "" access_token = "" access_token_secret = "" auth = tw.OAuthHandler(api_key, api_key_secret) auth.set_access_token(access_token, access_token_secret) api = tw.API(auth, wait_on_rate_limit=True) search_words = "#Covid19vaccine OR #covid19vaccine OR #covidvaccine OR #COVIDvaccine OR #COVID-19 Vaccine OR #COVID-19Vaccine" cursor = tw.Cursor(api.search_tweets, q=search_words, lang="en", result_type="recent") cursor tweets = cursor.items(100) tweets #what information we need from tweets tweet_details=[[tweet.text,tweet.user.screen_name,tweet.user.location,tweet.created_at.strftime("%d-%b-%Y")] for tweet in tweets] tweet_df=pd.DataFrame(data=tweet_details,columns=['text','user','location','date']) tweet_df tweet_df=tweet_df[tweet_df['location'].str.contains('USA')|tweet_df['location'].str.contains('United States')] tweet_df.head() #rename to different names each time pulling, starting with pull tweet_df.to_csv("pull14.csv",index=False,encoding='utf-8') #then we combined all the pull*.csv in .data\recent_tw.csv # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd import regex, re, sys, nltk from nltk import word_tokenize, sent_tokenize from nltk.tokenize import RegexpTokenizer from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords, wordnet from sklearn.feature_extraction.text import CountVectorizer from sklearn.decomposition import LatentDirichletAllocation from gensim.models.hdpmodel import HdpModel from gensim.corpora.dictionary import Dictionary from collections import Counter import matplotlib.pyplot as plt from pathlib import Path src_path = str(Path.cwd().parent / "scripts") sys.path.append(src_path) # python file with all the functions (located in the src folder) import topic_classification as tc # - filepath = Path.cwd().parent / "data" / "all_speeches_cleaned.txt" df=pd.read_csv(filepath, usecols=['title','content']) df.head() df['text']=df['content'].str.lower() df.head() # tokenize into sentences df['tokenized']=df['text'].apply(lambda text: nltk.sent_tokenize(text)) df[['text','tokenized']].head() # the '\b(?!\d)' filters out expressions like '9th', since the first character cannot be a number tokenizer = RegexpTokenizer(r'\b(?!\d)[a-zA-Z]+') lemmatizer = WordNetLemmatizer() df['normalized']=df['tokenized'].apply(lambda text: tc.normalize_text(text, tokenizer, lemmatizer)) df[['text','tokenized','normalized']].head() # + STOPWORDS = set(stopwords.words('english')) df['fully_processed'] = df['normalized'].apply(lambda text: tc.remove_stopwords(text, STOPWORDS)) cnt = Counter() for text in df['fully_processed'].values: # counts the number of speeches the word is in for word in set(text.split()): cnt[word] += 1 # words that are in most of the 
speeches in_most_speeches = cnt.most_common(155) in_most_speeches = [x[0] for x in in_most_speeches] extra = ['mr', 'question', 'sure', 'obama', 'really', 'try', 'lot', 'important', 'million', 'talk', 'va', 'dr', 'romney', 'folk', 'governor', 'republican', 'king', 'heart'] STOPWORDS_extra = set(in_most_speeches + extra) # remove some words from the stopwords list that migth be important STOPWORDS_extra = STOPWORDS_extra - set(['war', 'care', 'child', 'family', 'job', 'law', 'protect', 'security', 'power']) df['fully_processed'] = df['fully_processed'].apply(lambda text: tc.remove_stopwords(text, STOPWORDS_extra)) df[['text','tokenized','normalized','fully_processed']].head() # + texts = [text.split() for text in df['fully_processed'].values] # Create a dictionary # In gensim a dictionary is a mapping between words and their integer id dictionary = Dictionary(texts) # Filter out extremes to limit the number of features dictionary.filter_extremes( no_below=3, no_above=0.85, keep_n=5000 ) # Create the bag-of-words format (list of (token_id, token_count)) corpus = [dictionary.doc2bow(text) for text in texts] Hdp_model = HdpModel(corpus=corpus, id2word=dictionary) # - from pprint import pprint pprint(Hdp_model.print_topics(num_words=10)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="2zlxXIxeTV34" outputId="92ba9d68-e32b-4a7b-c979-fd6bf6bfe6b7" # %matplotlib inline import pandas as pd from sklearn import datasets from sklearn.decomposition import PCA from sklearn import svm,decomposition import matplotlib.pyplot as plt import numpy as np iris_dataset = datasets.load_iris() iris_dataset.target_names # + colab={"base_uri": "https://localhost:8080/"} id="pqbdGH2ATaQL" outputId="a3fc339a-03b6-4320-c54c-5559186b02b2" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(iris_dataset.data,iris_dataset.target, random_state=5) print(X_train.shape, X_test.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="KzitFquX7VUo" outputId="9a9e25a8-5589-4992-b80a-193ec77276ef" # plotting scatters plt.scatter(iris_dataset.data[:, 0], iris_dataset.data[:, 1], c=iris_dataset.target, s=25,cmap='spring'); plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="m_ZSidwnTcPS" outputId="af5e053f-630b-4016-d6c1-cb4000b8c3c1" pca = decomposition.PCA(n_components=3, whiten=True, random_state=5) pca.fit(X_train) # + colab={"base_uri": "https://localhost:8080/"} id="vghAalMhTfDK" outputId="52710618-9ec0-4665-bec8-00827bfe8a10" X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print(X_train_pca.shape) clf = svm.SVC(C=3., gamma=0.005, random_state=5) clf.fit(X_train_pca, y_train) # + colab={"base_uri": "https://localhost:8080/"} id="QX0Sp99hTg0z" outputId="ac998ed9-a730-4d0b-de72-f4d517e5707e" from sklearn import metrics y_pred = clf.predict(X_test_pca) print(metrics.classification_report(y_test, y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="Yh6VqpRNTiuL" outputId="ea876787-b4eb-4c3c-be67-0f88fef0a89b" from sklearn.pipeline import Pipeline clf = Pipeline([('pca', decomposition.PCA(n_components=3, whiten=True)), ('svm', svm.LinearSVC(C=3.0))]) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(metrics.confusion_matrix(y_pred, 
y_test)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Snippets # + pycharm={"name": "#%%\n"} # OPTIONAL: Load the "autoreload" extension so that code can change # %load_ext autoreload # OPTIONAL: always reload modules so that as you change code in src, it gets loaded # %autoreload 2 # %matplotlib inline # + pycharm={"name": "#%%\n"} from FLUCCOplus.notebooks import * import FLUCCOplus.electricitymap as elmap # + pycharm={"name": "#%%\n"} years = em_raw.index.year.unique().values print(f"{len(years)} Jahre: ", years) # + pycharm={"name": "#%%\n"} for i, year in enumerate(years): print(f"{i+1}: {year} - {em_raw[em_raw.index.year == year].shape}(h, columns)") # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%\n"} filt = (em.index.year <= 2018) & (em.index.year >= 2016) em.loc[filt].isna() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import seaborn as sns # %matplotlib inline flights = sns.load_dataset("flights") tips = sns.load_dataset("tips") flights.head() tips.head() crr = tips.corr() crr sns.heatmap(crr, annot=True) sns.heatmap(crr, cmap="coolwarm", annot=True) # CMAP= Mudança de cores - ANNOT=Mostra os dados da correlação flights.info() pf = flights.pivot_table(values="passengers", index="month", columns="year") # Reorganizando uma tabela!! pf sns.heatmap(pf, cmap="magma", linecolor="gray", lw=1) sns.clustermap(pf, standard_scale=1) # Este método agrupa de alguma forma, o que ele considera ser conjuntos relevantes!! standard_scale= Caso queira alguma escala! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import re import pandas as pd import altair as alt import numpy as np # - # ### Notes # # 1. Data is from https://www.imf.org/external/datamapper/PCPIPCH - download all data as Excel file # 2. The data is in old-Excel format. I used LibreOffice Calc to save it as new XLSX format # 3. For 2-letter and 3-letter ISO country codes, I used a CSV file from https://gist.githubusercontent.com/tadast/8827699/raw/f5cac3d42d16b78348610fc4ec301e9234f82821/countries_codes_and_coordinates.csv # 4. The AU website was misbehaving, so I used an archive.org copy of https://web.archive.org/web/20210424163337/https://au.int/en/member_states/countryprofiles2 for AU member states and regions # # ### Python modules # # Python modules are installed using [conda](https://docs.conda.io/en/latest/). The main modules are pandas, altair, lxml, pyarrow and fastparquet, but my complete `environment.yml` can be found in this notebook's Github repository. 
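# A quick illustration (with assumed sample values and a hypothetical helper name
# `_strip_quotes`) of why the converters passed to `pd.read_csv` below strip whitespace
# and quotes: fields in countries_codes_and_coordinates.csv arrive as quoted strings.
import re

def _strip_quotes(val: str) -> str:
    # Same pattern as the strip_to_str converter defined below.
    return re.sub(r'\s{0,}"([^"]+)"', r"\1", val)

print(_strip_quotes(' "AFG"'))   # -> AFG
print(_strip_quotes(' "33.0"'))  # -> 33.0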
imf_inflation_data = pd.read_excel('data/imf-dm-export-20210525.xlsx', sheet_name='PCPIPCH') imf_inflation_data.columns = ['country'] + list(imf_inflation_data.columns[1:]) # rename first column to country imf_inflation_data = imf_inflation_data.drop(0).drop(227).drop(226) # drop rows not containing data imf_inflation_data = imf_inflation_data.set_index('country') # index by country imf_inflation_data = imf_inflation_data.drop(labels=[year for year in range(2021,2027)], axis=1) # drop all projected inflation rates imf_inflation_data # + def strip_to_str(val: str) -> str: return re.sub(r'\s{0,}"([^"]+)"', r"\1", val) def strip_to_int(val: str) -> int: return int(strip_to_str(val)) def strip_to_float(val: str) -> float: return float(strip_to_str(val)) country_code_df = pd.read_csv('https://gist.githubusercontent.com/tadast/8827699/raw/f5cac3d42d16b78348610fc4ec301e9234f82821/countries_codes_and_coordinates.csv', quotechar='"', skiprows=1, converters={ 'alpha_2': strip_to_str, 'alpha_3': strip_to_str, 'numeric': strip_to_int, 'latitude_avg': strip_to_float, 'longitude_avg': strip_to_float }, index_col=0, names=['country', 'alpha_2', 'alpha_3', 'numeric', 'latitude_avg', 'longitude_avg']) country_code_df # + tags=[] tables = pd.read_html('https://web.archive.org/web/20210424163337/https://au.int/en/member_states/countryprofiles2', header=0) # - central_africa_df = tables[2] east_africa_df = tables[3] north_africa_df = tables[4] south_africa_df = tables[5] west_africa_df = tables[6] all_africa_df = pd.concat([central_africa_df, east_africa_df, north_africa_df, south_africa_df, west_africa_df]) # + africa_country_to_imf_name = { 'Congo Republic': 'Congo, Republic of ', 'DR Congo': 'Congo, Dem. Rep. of the', 'São Tomé and Príncipe': 'Sao Tome and Principe', 'Tanzania': 'Tanzania, United Republic of', 'Eswatini': 'Swaziland', 'Cabo Verde': 'Cape Verde', 'Côte d’Ivoire': "Côte d'Ivoire", 'South Sudan': 'South Sudan, Republic of', 'Gambia': 'Gambia, The' } africa_country_to_name = { 'Congo Republic': 'Congo', 'DR Congo': 'Congo, the Democratic Republic of the', 'São Tomé and Príncipe': 'Sao Tome and Principe', 'Tanzania': 'Tanzania, United Republic of', 'Eswatini': 'Swaziland', 'Cabo Verde': 'Cape Verde', 'Côte d’Ivoire': "Côte d'Ivoire", 'South Sudan': 'South Sudan, Republic of', 'Gambia': 'Gambia, The' } for country in all_africa_df.Abbreviation: if country not in country_code_df.index: if country not in africa_country_to_name: print(country) # - data = {} for country in all_africa_df.Abbreviation: if country in central_africa_df.Abbreviation.values: region = 'central' elif country in east_africa_df.Abbreviation.values: region = 'east' elif country in north_africa_df.Abbreviation.values: region = 'north' elif country in south_africa_df.Abbreviation.values: region = 'south' elif country in west_africa_df.Abbreviation.values: region = 'west' else: raise ValueError('Unknown country: ' + country) if country in ('Sahrawi Republic', 'Somalia'): # we don't have info from IMF for these two countries continue elif country in country_code_df.index: lookup_name = country elif country in africa_country_to_name: lookup_name = africa_country_to_name[country] else: raise ValueError('Cannot lookup country: ' + country) code_2 = country_code_df.loc[lookup_name, 'alpha_2'] code_3 = country_code_df.loc[lookup_name, 'alpha_3'] if country in imf_inflation_data.index: lookup_name = country elif country in africa_country_to_imf_name: lookup_name = africa_country_to_imf_name[country] inflation_data = 
list(imf_inflation_data.loc[lookup_name]) data[country] = [region, code_2, code_3] + inflation_data inflation_df = pd.DataFrame.from_dict(data, orient='index', columns=['region', 'alpha_2', 'alpha_3'] + list(imf_inflation_data.columns)).replace('no data', np.nan) inflation_df.index.name = 'country' inflation_df # write the data in Excel, CSV and Parquet formats inflation_df.to_excel('data/africa_inflation_data.xlsx', sheet_name='Inflation') inflation_df.to_csv('data/africa_inflation_data.csv') inflation_df.columns = [str(column) for column in inflation_df.columns] inflation_df.to_parquet('data/africa_inflation_data.parquet') # Parquet is a format that retains the complete DataFrame structure and can be read back quite simply: # pd.read_parquet('data/africa_inflation_data.parquet') # ### Long form and plotting # # When plotting data with e.g. Altair, the data should be in [*long form*](https://altair-viz.github.io/user_guide/data.html#long-form-vs-wide-form-data), that is, one row per observation with metadata (e.g. year names) as values in the dataframe. # # The next code reformats the data into long form and then illustrates plotting with Altair inflation_longform_df = inflation_df.reset_index().melt(['country', 'region', 'alpha_2', 'alpha_3'], var_name=['year'], value_name='inflation') inflation_longform_df alt.Chart(inflation_longform_df[inflation_longform_df.country.isin(['South Africa', 'Kenya'])]).mark_line().encode(x=alt.X('year:O', title='Year'), y=alt.Y('inflation', title='Inflation (%)'), color='country', tooltip=['country', 'alpha_3'] ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pyenv38 # language: python # name: pyenv38 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Psience # + [markdown] pycharm={"name": "#%% md\n"} # This notebook tracks the most up to date version of `Psience`. # Test data is also available. # - import os, sys sys.path.insert(0, os.path.dirname(os.path.dirname(os.getcwd()))) # + pycharm={"name": "#%%\n"} import os import Psience test_dir = os.path.join(os.path.dirname(os.getcwd()), 'ci', 'tests', 'TestData') def test_data(name): return os.path.join(test_dir, name) # + [markdown] pycharm={"name": "#%% md\n"} # ## Examples # + [markdown] pycharm={"name": "#%% md\n"} # We will provide a brief examples for the common use cases for each module. # More information can be found on the pages themselves. # The unit tests for each package are provided on the bottom of the package page. # These provide useful usage examples. 
# - # ## Molecools from Psience.Molecools import * # Load a molecule from a formatted checkpoint file # + tags=[] mol = Molecule.from_file(test_data("HOH_freq.fchk"), bonds=[[0, 1], [0, 2]]) mol # + [markdown] tags=[] # Visualize its normal modes # + jupyter={"outputs_hidden": true} tags=[] mol.normal_modes.modes # + [markdown] pycharm={"name": "#%% md\n"} # ## VPT # + [markdown] pycharm={"name": "#%% md\n"} # The `VPT` package provides a toolkit for running vibrational perturbation theory calculations # + pycharm={"name": "#%%\n"} from Psience.VPT2 import * # - # Set up a basic VPT for water and plot the spectrum analyzer = VPTAnalyzer.run_VPT(test_data("HOH_freq.fchk"), 3) # up through 3 quantum transitions analyzer.spectrum.plot(image_size=650, aspect_ratio=.5) # Print a table of frequencies and intensities analyzer.print_output_tables() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: markuplmft # language: python # name: markuplmft # --- # # Check for the dataset format # + tags=[] import pandas as pd from pathlib import Path # + tags=[] # vertical / website / page # - # # Packed Data (Data after pack_data.py) # ## SWDE data # + tags=[] data_path = '../../../swde/sourceCode/swde_small.pickle' data_packed = pd.read_pickle(data_path) len(data_packed) # + tags=[] data_packed[:1] # + [markdown] tags=[] # ### Ground Truth # + tags=[] gt_path = Path('../../../swde/sourceCode/groundtruth/auto/') gt_file = [x for x in list(gt_path.iterdir())][0] with open(gt_file) as text: lines = text.readlines() for l in lines: print(l) # - # ## My data # + tags=[] data_path = '../../../swde/my_CF_sourceCode/wae.pickle' data_packed = pd.read_pickle(data_path) len(data_packed) # - # ### Ground Truth # + tags=[] gt_path = Path.cwd().parents[2] / 'swde/my_CF_sourceCode/groundtruth/WAE/' for gt_file in list(gt_path.iterdir())[:]: print(gt_file) with open(gt_file) as text: lines = text.readlines() for l in lines: print(l) # - # # Prepare Data (Data after prepare_data.py) # # + [markdown] tags=[] # ## SWDE data # + tags=[] website_data_path = Path.cwd().parents[2] / 'swde/swde_processed/auto-msn-3.pickle' df = pd.read_pickle(website_data_path) # + tags=[] page_index = '0000' pd.DataFrame(df[page_index], columns=['text', 'xpath', 'node-type']) # + [markdown] tags=[] # ## My data # + tags=[] pd.set_option('max_colwidth', 2000) website_data_path = Path.cwd().parents[2] / 'swde/my_CF_processed/WAE-blackbaud.com-2000.pickle' all_data = pd.read_pickle(website_data_path) # + tags=[] page_index = '0001' df = pd.DataFrame(all_data[page_index], columns=['text', 'xpath', 'node-type']) df # - # --- # + from from lxml import etree def get_dom_tree(html): """Parses a HTML string to a DOMTree. We preprocess the html string and use lxml lib to get a tree structure object. Args: html: the string of the HTML document. website: the website name for dealing with special cases. Returns: A parsed DOMTree object using lxml library. """ cleaner = Cleaner() cleaner.javascript = True cleaner.style = True cleaner.page_structure = False html = html.replace("\0", "") # Delete NULL bytes. # Replace the
<br> tags with a special token for post-processing the xpaths. html = html.replace("<br>", "--BRRB--") html = html.replace("<br/>", "--BRRB--") html = html.replace("<br />", "--BRRB--") html = html.replace("<BR>", "--BRRB--") html = html.replace("<BR/>", "--BRRB--") html = html.replace("<BR />
    ", "--BRRB--") html = clean_format_str(html) x = lxml.html.fromstring(html) etree_root = cleaner.clean_html(x) dom_tree = etree.ElementTree(etree_root) return dom_tree # + tags=[] etree.ElementTree(etree_root) x = lxml.html.fromstring(html) # + tags=[] websites # + tags=[] p = Path('/Users/aimoredutra/GIT/swde/my_CF_sourceCode') [x for x in p.iterdir()] # - # # Sequential Quadratic Programming # (which is just using Newton on a Lagrangian) import numpy as np import numpy.linalg as la # Here's an objective function $f$ and a constraint $g(x)=0$: # + def f(vec): x = vec[0] y = vec[1] return (x-2)**4 + 2*(y-1)**2 def g(vec): x = vec[0] y = vec[1] return x + 4*y - 3 # - # Now define the Lagrangian, its gradient, and its Hessian: # + def L(vec): lam = vec[2] return f(vec) + lam * g(vec) def grad_L(vec): x = vec[0] y = vec[1] lam = vec[2] return np.array([ 4*(x-2)**3 + lam, 4*(y-1) + 4*lam, x+4*y-3 ]) def hess_L(vec): x = vec[0] y = vec[1] lam = vec[2] return np.array([ [12*(x-2)**2, 0, 1], [0, 4, 4], [1, 4, 0] ]) # - # At this point, we only need to find an *unconstrained* minimum of the Lagrangian! # # Let's fix a starting vector `vec`: vec = np.zeros(3) # Implement Newton and run this cell in place a number of times (Ctrl-Enter): vec = vec - la.solve(hess_L(vec), grad_L(vec)) vec # Let's first check that we satisfy the constraint: g(vec) # Next, let's look at a plot: import matplotlib.pyplot as pt x, y = np.mgrid[0:4:30j, -3:5:30j] pt.contour(x, y, f(np.array([x,y])), 20) pt.contour(x, y, g(np.array([x,y])), levels=[0]) pt.plot(vec[0], vec[1], "o") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np def And(a,b): return int(a and b) def Or(a,b): return int(a or b) def Not(a,b): return int(not a) def Xor(a,b): return int((a or b) and not(a and b)) def Nand(a,b): return int(not(a and b)) def Nor(a,b): return int(not(a or b)) def Nxor(a,b): return int(not Xor(a,b)) gate_map = {0:Not,1:And,2:Or,3:Xor,4:Nand,5:Nor,6:Nxor} class node(): """ A cgp node """ def __init__(self, pos, levels_back, lines, n_gates, inputs): self.pos = pos self.levels_back = levels_back self.lines = lines self.n_gates = n_gates self.inputs = inputs self.values = (list(range(max(0,pos-levels_back),pos)) + list(range(-inputs,0))) self.gate = np.random.randint(n_gates) self.i_add = [[np.random.randint(self.lines), self.values[np.random.randint(len(self.values))]], [np.random.randint(self.lines), self.values[np.random.randint(len(self.values))]],] self.active = False self.output = -1 def get_node(self): """ print the node""" return "{"+str(self.i_add)+" , "+str(self.gate)+"}" def set_inactive(self): """ sets the node as inactive""" self.active = False self.outpur = -1 def copy(self): """ make a copy of itself""" new = node(self.pos, self.levels_back, self.lines, self.n_gates, self.inputs) new.gate = self.gate new.i_add = [self.i_add[0].copy(),self.i_add[1].copy()] new.active = self.active return new def eval(self, a,b): """ Eval node inputs""" self.output = gate_map[self.gate](a,b) def mutate(self): """ Mutate the node""" element = np.random.randint(3) if element != 2: self.i_add[element] = [np.random.randint(self.lines), self.values[np.random.randint(len(self.values))]] else: self.gate = np.random.randint(self.n_gates) class individual(): """ A cgp individual """ def __init__(self, inputs, outputs, lines, cols, levels_back, n_gates): 
self.inputs = inputs self.outputs = outputs self.lines = lines self.cols = cols self.levels_back = levels_back self.n_gates = n_gates self.values = (list(range(max(0,cols-levels_back),cols)) + list(range(-inputs,0))) self.nodes = [] for i in range(lines): self.nodes.append([]) for j in range(cols): self.nodes[i].append(node(j, levels_back, lines, n_gates, inputs)) self.o_add = [] for i in range(outputs): self.o_add.append([np.random.randint(lines), self.values[np.random.randint(len(self.values))]]) self.mapped = False def print(self): """ print the individual""" for j in range(self.cols): expr = "col "+str(j)+": " for i in range(self.lines): expr += self.nodes[i][j].get_node()+"\t" print(expr) def print_active(self): """ print the individual""" for j in range(self.cols): flag = False expr = "col "+str(j)+": " for i in range(self.lines): if self.nodes[i][j].active: expr += self.nodes[i][j].get_node()+"\t" flag = True else: expr += ' - '+"\t" if flag: print(expr) def print_outputs(self): """ print the outputs of the nodes""" for j in range(self.cols): expr = "col "+str(j)+": " for i in range(self.lines): expr += str(self.nodes[i][j].output)+"\t" print(expr) def set_all_inactive(self): """ set all nodes inactive""" for j in self.nodes: [i.set_inactive() for i in j] self.mapped = False def copy(self): """ make a copy of itself""" new = individual(self.inputs, self.outputs, self.lines, self.cols, self.levels_back, self.n_gates) new.nodes = [] for i in range(self.lines): new.nodes.append([j.copy() for j in self.nodes[i]]) new.o_add = [j.copy() for j in self.o_add] return new def eval(self, line_input): """ Evaluate an truth table input""" for j in range(self.cols): for i in range(self.lines): if self.nodes[i][j].i_add[0][1]<0: i_a = line_input[abs(self.nodes[i][j].i_add[0][1])-1] else: i_a = self.nodes[self.nodes[i][j].i_add[0][0]][self.nodes[i][j].i_add[0][1]].output if self.nodes[i][j].i_add[1][1]<0: i_b = line_input[abs(self.nodes[i][j].i_add[1][1])-1] else: i_b = self.nodes[self.nodes[i][j].i_add[1][0]][self.nodes[i][j].i_add[1][1]].output if i_a not in [0,1] or i_b not in [0,1]: print("Something wrong is not right") self.nodes[i][j].eval(i_a,i_b) line_output = [] for i in self.o_add: if i[1] < 0: line_output.append(line_input[abs(i[1])-1]) else: line_output.append(self.nodes[i[0]][i[1]].output) #print(line_output) return np.array(line_output) def map_active(self): """ map active nodes""" to_visit = [] for i in self.o_add: to_visit.append(i) while len(to_visit)>0: if to_visit[0][1] < 0: to_visit.remove(to_visit[0]) else: if self.nodes[to_visit[0][0]][to_visit[0][1]].active: to_visit.remove(to_visit[0]) else: self.nodes[to_visit[0][0]][to_visit[0][1]].active = True to_visit.append( self.nodes[to_visit[0][0]][to_visit[0][1]].i_add[0] ) if self.nodes[to_visit[0][0]][to_visit[0][1]].gate != 0: to_visit.append( self.nodes[to_visit[0][0]][to_visit[0][1]].i_add[1] ) to_visit.remove(to_visit[0]) self.mapped = True def mutate_output(self): """ mutate one output""" target = np.random.randint(self.outputs) self.o_add[target] = [np.random.randint(self.lines), self.values[np.random.randint(len(self.values))]] def mutate_sam(self): """ mutate nodes once an active node or output is mutated""" if not self.mapped: self.map_active() active_mut = False while not active_mut: target = np.random.randint( len(self.o_add)+self.lines*self.cols) if target < len(self.o_add): self.mutate_output() else: i = np.random.randint(self.lines) j = np.random.randint(self.cols) active_mut = self.nodes[i][j].active 
self.nodes[i][j].mutate() #print(i,j) self.set_all_inactive() def mutate_pm(self, qtd): target = np.random.randint( len(self.o_add)+self.lines*self.cols, size = qtd) [self.mutate_output() for i in range(np.sum(target=len(self.o_add)))] def compare_out(i_out,t_out): return np.sum(i_out!=np.array(t_out)) def count_dist(individual, table): dist = 0 ## for inp,outp in table: ## i_out = individual.eval(inp) evals = map(individual.eval,[i[0] for i in table]) dist = np.sum(list(map(compare_out,evals,[i[1] for i in table]))) return dist #pass def design(individual, table, budget, lbd = 4): parent = individual b_dist = count_dist(parent, table) print("Gen:",budget,"erro:",b_dist) while b_dist > 0 and budget > 0: pop = [individual.copy() for i in range(lbd)] [i.mutate_pm(20) for i in pop] dist = [count_dist(i,table) for i in pop] if np.min(dist) <= b_dist: b_dist = np.min(dist) parent = pop[dist.index(np.min(dist))] budget-=1 if budget%1000 == 0: print("Gen:",budget,"erro:",b_dist) return parent.copy(),b_dist # + table_soma = [ [[0,0],[0,0]], [[0,1],[0,1]], [[1,0],[0,1]], [[1,1],[1,0]]] A = individual(2,2,1,100,100,7) B,erro = design(A, table_soma, 10000) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:carnd-term1] # language: python # name: conda-env-carnd-term1-py # --- import numpy as np import cv2 import matplotlib.pyplot as plt import matplotlib.image as mpimg # %matplotlib inline from pprint import pprint import glob count=0 # + def calibrate_camera(images_path,nx=9,ny=6): objpoints=[] imgpoints=[] # x=objp[:,:2] objp=np.zeros((ny*nx,3),np.float32) objp[:,:2]=np.mgrid[0:nx,0:ny].T.reshape(-1,2) for image in images_path: img=mpimg.imread(image) gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) ret,corners=cv2.findChessboardCorners(gray,(nx,ny),None) if ret==True: imgpoints.append(corners) objpoints.append(objp) cr_image=cv2.drawChessboardCorners(img,(nx,ny),corners,ret) filename=image.split('\\')[-1] # print(filename) cv2.imwrite('output_corner/'+filename+'.jpg',img) return imgpoints,objpoints def abs_sobel_thresh(img, orient='x',sobel_kernel=3, thresh=(0,255)): # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Apply x or y gradient with the OpenCV Sobel() function # and take the absolute value if orient == 'x': abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0,sobel_kernel)) if orient == 'y': abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1,sobel_kernel)) # Rescale back to 8 bit integer scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel)) # Create a copy and apply the threshold binary_output = np.zeros_like(scaled_sobel) # Here I'm using inclusive (>=, <=) thresholds, but exclusive is ok too binary_output[(scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1 # Return the result return binary_output def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)): # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Take both Sobel x and y gradients sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel) sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel) # Calculate the gradient magnitude gradmag = np.sqrt(sobelx**2 + sobely**2) # Rescale to 8 bit scale_factor = np.max(gradmag)/255 gradmag = (gradmag/scale_factor).astype(np.uint8) # Create a binary image of ones where threshold is met, zeros otherwise binary_output = np.zeros_like(gradmag) binary_output[(gradmag >= mag_thresh[0]) & 
(gradmag <= mag_thresh[1])] = 1 # Return the binary image return binary_output def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)): # Grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Calculate the x and y gradients sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel) sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel) # Take the absolute value of the gradient direction, # apply a threshold, and create a binary image result absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx)) binary_output = np.zeros_like(absgraddir) binary_output[(absgraddir >= thresh[0]) & (absgraddir <= thresh[1])] = 1 # Return the binary image return binary_output def binary_saturation(image,thresh=(90,255)): hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS) S = hls[:,:,2] s_binary = np.zeros_like(S) s_binary[(S > thresh[0]) & (S <= thresh[1])] = 1 return s_binary def getperspective(img,src,dst): img_size=(img.shape[1],img.shape[0]) M = cv2.getPerspectiveTransform(src, dst) warped = cv2.warpPerspective(img, M, img_size) return warped def combined_threshold(test_image): ksize=3 gradx = abs_sobel_thresh(test_image, orient='x', sobel_kernel=ksize, thresh=(20, 100)) grady = abs_sobel_thresh(test_image, orient='y', sobel_kernel=ksize, thresh=(30, 100)) mag_binary = mag_thresh(test_image, sobel_kernel=ksize, mag_thresh=(30, 100)) dir_binary = dir_threshold(test_image, sobel_kernel=ksize, thresh=(0.7, 1.3)) combined = np.zeros_like(dir_binary) combined[((gradx == 1) & (grady == 1)) | ((mag_binary == 1) & (dir_binary == 1))] = 1 return combined def findlines(binary_warped): histogram = np.sum(binary_warped[int(binary_warped.shape[0]/2):,:], axis=0) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255 # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]/2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # Choose the number of sliding windows nwindows = 15 # Set height of windows window_height = np.int(binary_warped.shape[0]/nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated for each window leftx_current = leftx_base rightx_current = rightx_base # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 50 # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xleft_low = leftx_current - margin win_xleft_high = leftx_current + margin win_xright_low = rightx_current - margin win_xright_high = rightx_current + margin # Draw the windows on the visualization image cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high), (0,255,0), 2) cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high), (0,255,0), 2) # Identify the nonzero pixels in x and y within the window good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < 
win_xleft_high)).nonzero()[0] good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0] # Append these indices to the lists left_lane_inds.append(good_left_inds) right_lane_inds.append(good_right_inds) # If you found > minpix pixels, recenter next window on their mean position if len(good_left_inds) > minpix: leftx_current = np.int(np.mean(nonzerox[good_left_inds])) if len(good_right_inds) > minpix: rightx_current = np.int(np.mean(nonzerox[good_right_inds])) # Concatenate the arrays of indices left_lane_inds = np.concatenate(left_lane_inds) right_lane_inds = np.concatenate(right_lane_inds) # Extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] # Fit a second order polynomial to each left_fit = np.polyfit(lefty, leftx, 2) right_fit = np.polyfit(righty, rightx, 2) ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] ) left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] ) left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] return out_img,left_fitx,right_fitx,left_fit,right_fit,ploty def search_area(binary_warped,left_fitx,right_fitx,left_fit,right_fit): nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) margin = 100 left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin))) right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin))) # Again, extract left and right line pixel positions leftx = nonzerox[left_lane_inds] # print(leftx.shape) lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] # Fit a second order polynomial to each left_fit = np.polyfit(lefty, leftx, 2) right_fit = np.polyfit(righty, rightx, 2) # Generate x and y values for plotting ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] ) left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255 window_img = np.zeros_like(out_img) # Color in left and right line pixels out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] # Generate a polygon to illustrate the search window area # And recast the x and y points into usable format for cv2.fillPoly() left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))]) left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))]) left_line_pts = np.hstack((left_line_window1, left_line_window2)) right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))]) right_line_window2 
= np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))]) right_line_pts = np.hstack((right_line_window1, right_line_window2)) # Draw the lane onto the warped blank image cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0)) cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0)) result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0) # plt.imshow(result) # print(result.shape) # plt.plot(left_fitx, ploty, color='yellow') # plt.plot(right_fitx, ploty, color='yellow') # plt.xlim(0, 1280) # plt.ylim(720, 0) return result def curvature_plot(left_fitx,right_fitx,ploty): left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] # right_fit = np.polyfit(ploty, rightx, 2) right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] # Plot up the fake data mark_size = 3 # plt.plot(leftx, ploty, 'o', color='red', markersize=mark_size) # plt.plot(rightx, ploty, 'o', color='blue', markersize=mark_size) plt.xlim(0, 1280) plt.ylim(0, 720) plt.plot(left_fitx, ploty, color='green', linewidth=3) plt.plot(right_fitx, ploty, color='green', linewidth=3) plt.gca().invert_yaxis() # to visualize as we do the images def measure_cur_pix(left_fit,right_fit,ploty,left_fitx,right_fitx): y_eval = np.max(ploty) left_curverad = ((1 + (2*left_fit[0]*y_eval + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0]) right_curverad = ((1 + (2*right_fit[0]*y_eval + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0]) # print(left_curverad, right_curverad) ym_per_pix = 30/720 # meters per pixel in y dimension xm_per_pix = 3.7/700 # meters per pixel in x dimension # Fit new polynomials to x,y in world space left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2) right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2) # Calculate the new radii of curvature left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0]) right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0]) return left_curverad, right_curverad def draw_lane(test_image,binary_warped,left_fitx,right_fitx,ploty,Minv): warp_zero = np.zeros_like(binary_warped).astype(np.uint8) color_warp = np.dstack((warp_zero, warp_zero, warp_zero)) # print(binary_warped.shape) # Recast the x and y points into usable format for cv2.fillPoly() pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) pts = np.hstack((pts_left, pts_right)) # Draw the lane onto the warped blank image cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0)) # Warp the blank back to original image space using inverse perspective matrix (Minv) newwarp = cv2.warpPerspective(color_warp, Minv, (test_image.shape[1], test_image.shape[0])) # Combine the result with the original image result1 = cv2.addWeighted(test_image, 1, newwarp, 0.3, 0) return result1 # - # + images_path=glob.glob('camera_cal/calibration*.jpg') imgpoints,objpoints=calibrate_camera(images_path) img121=mpimg.imread(images_path[0]) gray121 = cv2.cvtColor(img121, cv2.COLOR_RGB2GRAY) ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray121.shape[::-1], None, None) right_gl=None def mypipeline(orig_img,test='n'): global count global right_gl count+=1; filepath="frames/frame"+str(count); cv2.imwrite(filepath+'.jpg',orig_img) test_image = cv2.undistort(orig_img, mtx, dist, None, mtx) combined_img=combined_threshold(test_image) 
sat_img=binary_saturation(test_image) combined_binary = np.zeros_like(combined_img) combined_binary[(combined_img == 1) | (sat_img == 1)] = 1 src = np.float32([[570,480],[770,480],[1100,690],[290,690]]) dst = np.float32([[250,0], [1090,0],[1090,700],[250,700]]) binary_warped=getperspective(combined_binary,src,dst) histogram = np.sum(binary_warped[int(binary_warped.shape[0]/2):,:], axis=0) # print(binary_warped.shape) Minv = cv2.getPerspectiveTransform(dst, src) out_img,left_fitx,right_fitx,left_fit,right_fit,ploty=findlines(binary_warped) if(test=='y'): print(left_fitx[-1]) return 0 # return (right_fitx[-1],left_fitx[-1]) # src = np.float32([[570,480],[770,480],[1100,690],[290,690]]) if(right_fitx[-1]<1100 or right_fitx[0]<750 or right_fitx[len(right_fitx)//2]<900 ): if(right_gl is not None): right_fitx=right_gl # right_fitx=right_fitx+300 else: if(right_gl is not None): right_gl=(right_fitx+right_gl)/2 else: right_gl=right_fitx result=search_area(binary_warped,left_fitx,right_fitx,left_fit,right_fit) # curvature_plot(left_fitx,right_fitx,ploty) left_curverad, right_curverad=measure_cur_pix(left_fit,right_fit,ploty,left_fitx,right_fitx) # print(left_curverad, right_curverad,left_fitx,right_fitx,left_fit,right_fit,ploty) final_img=draw_lane(orig_img,binary_warped,left_fitx,right_fitx,ploty,Minv) # print(left_fitx) # print("--------") # print(right_fitx) # print("--------") # print(np.min(right_fitx)) # print(np.min(left_fitx)) # print(np.min(right_fitx)-np.min(left_fitx)) # print("-----") count=count+1 filepath="frames/frame"+str(count); cv2.imwrite(filepath+'.jpg',final_img) # print((orig_img.shape[1]/2)) # font = cv2.FONT_HERSHEY_SIMPLEX # final_img = cv2.putText(final_img,"Bottom="+str(right_fitx[-1])+" Top="+str(right_fitx[0]),(20,40), font, 1, (255,255,255), 2, cv2.LINE_AA) x_offset=y_offset=50 x_offset1=300 y_offset1=50 f = plt.figure() ax = plt.subplot(111) ax.plot(histogram) # plt.plot(histogram) # small_svc=cv2.resize(histogram, (600, 600)) # small_svc=small_svc.astype(np.float32) # bw = cv2.cvtColor(combined_binary.astype(np.uint8), cv2.COLOR_GRAY2RGB) # br=255*combined_binary bw = np.dstack((binary_warped, binary_warped, binary_warped))*255 # cv2.imwrite('messigray.jpg',out_img) # bw=mpimg.imread("messigray.jpg") bws=cv2.resize(bw, (200, 200)) f.savefig('plot1.jpg') img_bgr=mpimg.imread("plot1.jpg") small_svc=cv2.resize(img_bgr, (200, 200)) final_img[y_offset:y_offset+small_svc.shape[0], x_offset:x_offset+small_svc.shape[1]] = small_svc final_img[y_offset1:y_offset1+bws.shape[0], x_offset1:x_offset1+bws.shape[1]] = bws final_img[y_offset1:y_offset1+result.shape[0], x_offset1:x_offset1+result.shape[1]] = result # print("aaaaaaaaaaaaaaaaaaaa:",bw.shape) return final_img testing=mpimg.imread("frames/frame1.jpg") mypipeline(testing) # + # testimages_path=glob.glob('test_images/test*.jpg') # max_total=0 # min_total=0 # for path in testimages_path: # testimg=mpimg.imread(path) # min_total+=mypipeline(testimg,'y')[0] # max_total+=mypipeline(testimg,'y')[1] # average_max=max_total/len(testimages_path) # average_min=min_total/len(testimages_path) # print(average_max,average_min) # + import os from moviepy.video.VideoClip import VideoClip from moviepy.editor import VideoFileClip from IPython.display import HTML white_output = 'project_video_output_1.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values 
representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds # clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(37,42) clip1 = VideoFileClip("project_video.mp4").subclip(15,17) white_clip = clip1.fl_image(mypipeline) #NOTE: this function expects color images!! # %time white_clip.write_videofile(white_output, audio=False) # - HTML(""" """.format(white_output)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _uuid="fb2dc4da758ebc6382dcb418330f2daad96dd5c4" # > > # retinopathy convNet Case2 (with convNet + dense) # # [Helsinki Metropolia University of Applied Sciences](www.metropolia.fi/en)
    # + [markdown] _uuid="d77cf99304dd187aa5b3c5a8bbd7809616cf0db3" # ## Xception convNet and dense, for binary classification retinopathy # # Make a convnet architecture and try to classify images to predict diabetic retinopathy in binary fashion (sick or healthy). # # I will attempt to use pre-trained Xception convNet, but I will initialize it to random weights and try different training schemes. # # Earlier tests I tried to do with underfitting architectures, but ready convNets such as Xception of VGG19 should be powerful enough for the retinopathy classification. You should always try to first get an overfitting model with good trainingaccuracy in my limited experience. That way you can see that the model is overfitting and is powerful enough to learn the trainingset, even if the validationaccuracy is lagging behind. # # 1.) maybe and train convLayer parts of the network for a few epochs # # 2.) Then, I will freeze that convNet layers, then I will train the dense layers only (???) # # 3.) possibly initalize random and train the whole convNet + dense. (easiest to do) # # # ## useful links & resources # * here was a useful tutorial for imagegenerators and pre-trained convnet https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e # * another good help link https://github.com/keras-team/keras/issues/4465 # * Im trying to use pretrained convnet architecture such that not the entirety of my days on this earth are spent on convNet model training # * we will see what kind of results we get # * After having tried most tips, the last one I didn't use (yet) was to crop the images into larger size, this appears to be necessary to achieve good results # * more good links for help especially the file preprocessing and the imagecropping portions early on (its difficult to learn this file processing part if you're not good in python already) # # https://www.kaggle.com/hooseygoose/directory-structure-and-moving-files # # # http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html # # # https://www.kaggle.com/sakarilukkarinen/demo-11-xception-and-dense-2 # # # https://www.kaggle.com/markupik/case2-retinopathy # # # https://www.kaggle.com/sakarilukkarinen/demo-04-binary-classifier # + [markdown] _uuid="67e8788d69706cf8b3337448715b4d8591d31f86" # ## basic procedures to do # 1. sample the eyeimages's labels into dataframes randomly # 2. upsize the images into 299x299 size # 3. make imageDataGenerators for trainingset, validationset (and testsets possibly trainingset? its tedious to create those file directories though) # 4. import Xception convNet and add some layers into it # 5. train model (it will take long time) # 6. plot accuracy and roc metrics, and also make tests with the testset (testset images would have to be cropped also, very tedious...) 
# + _uuid="fc1ae35e1eb4b38e621f888ba4e4f2e2ca3259a6" import pandas as pd import numpy as np import os import keras import matplotlib.pyplot as plt from keras.layers import Dense,GlobalAveragePooling2D from keras.applications import MobileNet from keras.applications import VGG19 from keras.preprocessing import image from keras.applications.mobilenet import preprocess_input from keras.preprocessing.image import ImageDataGenerator from keras.models import Model from keras.optimizers import Adam from keras import Sequential from keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense from keras import optimizers import os, shutil, cv2, time # + _uuid="40045d8faa8d1b1240e827f4639cfa335772219d" """Initial settings for training/validation, remember to choose!!!""" EPOCHS = 30 BATCHSIZE = 16 TRAINSET_PERCENT = 0.75 ## percentage of trainingset from totalsamples (from [0, 1.0] ) SAMPLES = 4000 ## take huge sample of images, but only train&validate on subsets of allowed trainable/validatable images TRAINSTEPS_PERCENT = 1.0 ## percentage of trainingset steps being done (from [0, 1.0] ) VALIDSTEPS_PERCENT = 1.0 ## percentage of validationset steps being done (from [0, 1.0] ) BALANCEDSET = False ## balance the df dataset 50:50 sick/healthy, otherwise simply random sample MODELNAME = "borrowedXceptionArchitecture" + ".h5" trainConvLayers = True ## train convLayers or freeze them SEED = 1999-888 ROTANGLE = 10 ZOOM = 0.02 VERBOSE = 1 ## for modeltraining IMAGESIZE = 299 source_dir = r"../input/preprocessed-diabetic-retinopathy-trainset/300_train/300_train/" temp_dir = r"./temp/" #test_dir = r"../input/300_test/300_test/" # initialize multiple optimizers but you can only choose one at a time of course! sgd = optimizers.SGD(lr=0.01/2, decay=1e-6, momentum=0.9, nesterov=True) addyboi = optimizers.Adam(lr=0.01, decay=1e-5) rmsprop = optimizers.RMSprop(lr=0.01/2) basic_rms = optimizers.RMSprop() """remember to choose optimizer!!!""" OPTIMIZER = basic_rms # + _uuid="751696fea02258c282efdf3115630a1333010484" # Convert categorical level to binary and randomly sample import numpy as np df = pd.read_csv(r"../input/preprocessed-diabetic-retinopathy-trainset/newTrainLabels.csv") df['level'] = 1*(df['level'] > 0) # balance classes if BALANCEDSET: print('balancing dataset...') df = pd.concat([ df[df['level']==0].sample(n=round(SAMPLES/2.0), random_state=SEED), df[df['level']==1].sample(n=round(SAMPLES/2.0), random_state=SEED) ]).sample(frac=1.0, random_state=SEED) # shuffle else: print('raw dataset(unbalanced)...') df= df.sample(n=SAMPLES, random_state=SEED) df.reset_index(drop=True, inplace=True) # I've had troubles with shuffled dataframes on some earlier Cognitive Mathematics labs, # and this seemed to prevent those print(df.head()) print("") # + _uuid="f0bad13666c487d6d7ca9927b3c2f5e20989fcdd" # Create level histogram df['level'].hist(bins = [0,1, 2], rwidth = 0.5, align = 'left'); # + _uuid="c98c598708fe7318439933f365d05dcd536b3bac" # here we can split df into train/validation dataframes, and then we can use # separated imagedatagenerators, such that we are able to use data augmentation # for only the traindata images # the validdata images should be left raw as they are setBoundary = round(SAMPLES * TRAINSET_PERCENT) train_df = df[:setBoundary] validate_df = df[setBoundary:] train_df.reset_index(drop=True, inplace=True) validate_df.reset_index(drop=True, inplace=True) # + _uuid="ff6a11a37e7cc525ca67657c09ff5e4d03f83937" # Here the issue with original demo4 code from Sakari was that there was 
the typeError from imageDataGenerators # at least for myself, so that the error message suggested that I should convert level column to str df["level"]= df.iloc[:,1].astype(str) train_df["level"]= train_df.iloc[:,1].astype(str) validate_df["level"]= validate_df.iloc[:,1].astype(str) # + _uuid="c166458b7f9bd2869df89436ffbeac80a3f947a7" #Test out images can be found image_path = source_dir + df["image"][0] +".jpeg" if os.path.isfile(image_path)==False: raise Exception('Unable to find train image file listed on dataframe') else: print('Train data frame and file path ready') # + _uuid="10d2ca28f78a4253ee56b92072849e1c609af829" # Prepare to crop images to larger size so that the network can work on them... # apparently too small images are just shit for these convNeuralNetworks # Create destination directory try: os.mkdir(temp_dir) print('Created a directory:', temp_dir) except: # Temp directory already exist, so clear it for file in os.listdir(temp_dir): file_path = os.path.join(temp_dir, file) try: if os.path.isfile(file_path): os.unlink(file_path) except Exception as e: print(e) print(temp_dir, ' cleared.') # + _uuid="f03ee3c201c1914a628b6aec9073163c8b6f142e" # Crop the images to larger size from 100x100 upwards to atleast 300x300 # some contestant winner had images 500x500 # I've tried this vgg19 with 100x100 and it was not good results # Start timing start = time.time() # Crop and resize all images. Store them to dest_dir print("Cropping and rescaling the images:") for i, file in enumerate(df["image"]): try: fname = source_dir + file + ".jpeg" img = cv2.imread(fname) # Crop the image to the height h, w, c = img.shape if w > h: wc = int(w/2) w0 = wc - int(h/2) w1 = w0 + h img = img[:, w0:w1, :] # Rescale to N x N N = IMAGESIZE img = cv2.resize(img, (N, N)) # Save new_fname = temp_dir + file + ".jpeg" cv2.imwrite(new_fname, img) except: # Display the image name having troubles print("problemImagesFound:___ ", fname) # Print the progress for every N images if (i % 500 == 0) & (i > 0): print('{:} images resized in {:.2f} seconds.'.format(i, time.time()-start)) # End timing print('Total elapsed time {:.2f} seconds.'.format(time.time()-start)) print('temp_dir was=' + temp_dir) # + _uuid="2036f24122e914a612afb739000c77a18eb390be" path0= temp_dir # specify directory where the f***ing files are fileList = os.listdir(path0) # get the f***ing file list in the path directory # list files print(fileList[0])# print if you even found any f***ing files in this f***ing kaggle environment # + _uuid="68b06c0baca1d0f98061d8b5f798b4a8f8adae26" for i in range (5): print(df['image'][i]) # + _uuid="fc86eda59c6eb524133f27b28f19dc42d0b44091" # We must convert the dataframes's image columns into .jpeg extension, so generators can find images into the dataframes """wtf is wrong with this code, why kaggle keeps raising warnings why it is suddenly illegal to MODIFY the goddamn column to have ".jpeg" in the end for all values?!""" print(df.head(5)) a=df.iloc[:,0] + ".jpeg" b=train_df.iloc[:,0] + ".jpeg" c=validate_df.iloc[:,0] + ".jpeg" df['image']=a train_df['image']=b validate_df['image']=c print(df.head(5)) # + _uuid="6dbd069823db8209e163d2ee811aaacf038163d8" # Create image data generator from keras.preprocessing.image import ImageDataGenerator # I think that it would be possible also to use data augmentation in the training_generator only. 
# Also the interview of kaggle contestants showed my own suspicions to be true that one should not use # image shear with eye data (the perfectly healthy human eyeball should be round, so you dont get eyeglasses) # those contestant used rotation and mirrorings of the image as I recall, but small amounts of # zoom would not be too bad either, I reckon # use data augmentation for training traingen = ImageDataGenerator( rescale=1./255, rotation_range=ROTANGLE, zoom_range=ZOOM, horizontal_flip=True, vertical_flip=True) ## just take the raw data for validation validgen = ImageDataGenerator( rescale=1./255) # Data flow for training trainflow = traingen.flow_from_dataframe( dataframe = train_df, directory = temp_dir, x_col = "image", y_col = "level", class_mode = "binary", # class_mode binary causes to infer classlabels automatically from y_col, at least according to keras documentation target_size = (IMAGESIZE, IMAGESIZE), batch_size = BATCHSIZE, shuffle = True, seed = SEED) # Data flow for validation validflow = validgen.flow_from_dataframe( dataframe = validate_df, directory = temp_dir, x_col = "image", y_col = "level", class_mode = "binary", target_size = (IMAGESIZE, IMAGESIZE), batch_size = BATCHSIZE, shuffle = False, # validgen doesnt need shuffle I think, according to teacher's readymade convnet example (?) # seed = SEED ) # + _uuid="e7427e0dbff76b7f7e2378acbca3689f31afbd1e" """## We will try Juha Kopus model style for pre-trained VGG16. it was pretty bad in my opinoin, i tested it x = base_model.output x = Flatten(name='flatten')(x) x = Dropout(0.3, seed=SEED)(x) # add dropout just in case x = Dense(256,activation='relu')(x) x = Dropout(0.3, seed=SEED)(x) # add dropout just in case preds = Dense(1,activation='sigmoid')(x) #final layer with sigmoid activation""" """## Sakaris Xception model style for Xception convnet ## ## with added Dense model at the end of ConvNet""" """x = base_model.output ##this was some old code to make for MobileNet from example tutorial about convnets x = GlobalAveragePooling2D()(x) x = Flatten(name='flatten')(x) x=Dropout(0.2, seed=SEED)(x) x=Dense(1024,activation='relu')(x) #we add dense layers so that the model can learn more complex functions and classify for better results. 
x=Dropout(0.3, seed=SEED)(x) x=Dense(1024,activation='relu')(x) #dense layer 2 x=Dropout(0.3, seed=SEED)(x) x=Dense(512,activation='relu')(x) #dense layer 3 x=Dropout(0.3, seed=SEED)(x) preds=Dense(1,activation='sigmoid')(x) #final layer with sigmoid activation """ # + _uuid="cb25a07fd12f88d2e57b757e2d644ff061871d64" from keras.applications.xception import Xception base_model = Xception(include_top=False, input_shape=(IMAGESIZE, IMAGESIZE, 3)) # randomized initial weights, Xception_base if not trainConvLayers: for layer in base_model.layers: # make convLayers un-trainable layer.trainable=False base_model.summary() x = base_model.output x = GlobalAveragePooling2D()(x) #x = Flatten(name='flatten')(x) x = Dropout(0.25, seed=SEED)(x) # add dropout just in case x = Dense(512, activation='relu')(x) x = Dropout(0.4, seed=SEED)(x) # add dropout just in case x = Dense(512, activation='relu')(x) x = Dropout(0.4, seed=SEED)(x) # add dropout just in case x = Dense(256,activation='relu')(x) x = Dropout(0.4, seed=SEED)(x) # add dropout just in case preds = Dense(1,activation='sigmoid')(x) #final layer with sigmoid activation model=Model(inputs=base_model.input,outputs=preds) #for i,layer in enumerate(model.layers): # print(i,layer.name) print(model.summary()) model.compile(optimizer = OPTIMIZER, loss='binary_crossentropy', metrics = ["accuracy"]) # + _uuid="56409c9656f3ae7b4df9ed2dc7fc0af170f7255c" from keras.callbacks import ModelCheckpoint from time import time, localtime, strftime # Testing with localtime and strftime print(localtime()) print(strftime('%Y-%m-%d-%H%M%S', localtime())) # Calculate how many batches are needed to go through whole train and validation set STEP_SIZE_TRAIN = round((trainflow.n // trainflow.batch_size) * 1.0 * TRAINSTEPS_PERCENT ) STEP_SIZE_VALID = round((validflow.n // validflow.batch_size) * 1.0 * VALIDSTEPS_PERCENT ) # Train and count time model_name = strftime('Case2-%Y-%m-%d-%H%M%S', localtime()) + MODELNAME print('modelname was=',model_name,'\n') t1 = time() h = model.fit_generator(generator = trainflow, steps_per_epoch = STEP_SIZE_TRAIN, validation_data = validflow, validation_steps = STEP_SIZE_VALID, epochs = EPOCHS, verbose = VERBOSE) t2 = time() elapsed_time = (t2 - t1) # Save the model model.save(model_name) print('') print('Model saved to file:', model_name) print('') # Print the total elapsed time and average time per epoch in format (hh:mm:ss) t_total = strftime('%H:%M:%S', localtime(t2 - t1)) t_per_e = strftime('%H:%M:%S', localtime((t2 - t1)/EPOCHS)) print('Total elapsed time for {:d} epochs: {:s}'.format(EPOCHS, t_total)) print('Average time per epoch: {:s}'.format(t_per_e)) # + _uuid="107de288a1b98af3cd97602969ddbf1cb0a17dc5" # get the currently trained model, and plot the accuracies and loss for training and validation # %matplotlib inline import matplotlib.pyplot as plt import numpy as np epochs = np.arange(EPOCHS) + 1.0 f, (ax1, ax2) = plt.subplots(1, 2, figsize = (15,7)) def plotter(ax, epochs, h, variable): ax.plot(epochs, h.history[variable], label = variable) ax.plot(epochs, h.history['val_' + variable], label = 'val_'+variable) ax.set_xlabel('Epochs') ax.legend() plotter(ax1, epochs, h, 'acc') plotter(ax2, epochs, h, 'loss') plt.show() # + _uuid="8447c45a50bc74839445fc4f6af8ca4c4481d53a" # get true values y_true = validflow.classes # note about predict_generator, # sometimes it happened that, if you put argument steps = STEP_SIZE_VALID, then # that throws error because of mismatched steps amount somewhere, I think that the # 
np.ceil(validgen.n/validgen.batch_size) seems to fix it predict = model.predict_generator(validflow, steps= np.ceil(validflow.n / validflow.batch_size)) y_pred = 1*(predict > 0.5) # + _uuid="753b868ed5333d265b58d4d29d14767671906a08" # Calculate and print the metrics results from sklearn.metrics import confusion_matrix, cohen_kappa_score, classification_report cm = confusion_matrix(y_true, y_pred) print('Lates Confusion matrix:') print(cm) print('') cr = classification_report(y_true, y_pred) print('Lates Classification report:') print(cr) print('') from sklearn.metrics import accuracy_score a = accuracy_score(y_true, (y_pred)) print(a) print('Lates Accuracy with old decision point {:.4f} ==> {:.4f}'.format(0.5, a)) print('') # + _uuid="301491f2584d66eba5dc9f77fe42eb47663369cd" # Calculate and plot ROC-curve # See: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_true, predict) plt.plot(fpr, tpr, color='darkorange', lw = 2) plt.plot([0, 1], [0, 1], color='navy', lw = 2, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Lates Receiver operating characteristic curve') plt.show() # + _uuid="28c14b6aa91c6a5abc52b225047d6cec7c2df5f8" from keras.models import load_model sakarismodel = load_model("../input/sakarisbestmodelxception/best_demo11_SAKARISDEMO11.h5") predictSakari = sakarismodel.predict_generator(validflow, steps= np.ceil(validflow.n / validflow.batch_size)) y_predSakari = 1*(predictSakari > 0.5) print('Sakaris best model was as follows: \n') # Calculate and print the metrics results from sklearn.metrics import confusion_matrix, cohen_kappa_score, classification_report cm = confusion_matrix(y_true, y_predSakari) print('Sakaris Confusion matrix:') print(cm) print('') cr = classification_report(y_true, y_predSakari) print('Sakaris Classification report:') print(cr) print('') from sklearn.metrics import accuracy_score a = accuracy_score(y_true, (y_predSakari)) print(a) print('Sakaris Accuracy with old decision point {:.4f} ==> {:.4f}'.format(0.5, a)) print('') fpr0, tpr0, thresholds0 = roc_curve(y_true, y_predSakari) plt.plot(fpr0, tpr0, color='darkorange', lw = 2) plt.plot([0, 1], [0, 1], color='navy', lw = 2, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Sakaris Receiver operating characteristic curve') plt.show() # + _uuid="843981a3627efaa0e59eeb781e79cbcd26843ac0" # Clear the temporary directory dest_dir = './temp/' for file in os.listdir(dest_dir): file_path = os.path.join(dest_dir, file) try: if os.path.isfile(file_path): os.unlink(file_path) except Exception as e: print(e) print(dest_dir, ' cleared.') os.rmdir(dest_dir) print(dest_dir,'Removed.') # + [markdown] _uuid="d4583a6b2d3f1984432da2f710ea628b90d3a836" # ## Conclusions # In general, the benefits from this case2 to me was mostly about learning how to # work with convnets, but the downside was that the predictive results and model were quite bad. 
# # - I don't think it was possible to have good accuracies with small size images like 100x100 pre-processed dataset so that somehow the cropping and sizing-up the images seemed to make the crucial difference for the positive direction for model training/validation # - imagenet weights were tried in another notebook version with the VGG19 model, but those imagenet weights didnt seem to work so well, at least with the 100x100 images # - possibly it would have been best to utilize randomized weights for VGG19 and train the convLayers at least for even few epochs, and then attempt to freeze the convLayers and continue to train the dense layers to save some time # - I tried to tweak the catsAndDogs model with dropouts, different learning rates, different optimizers and even increasing amounts of some convlayers and maxpools, # but it appears that the basic architecture was not good enough for our purposes. It must have been underfitting which would indicate that architecture was too simplistic for the job, I think. # - data augmentation was tried with zoom, rotation and image flippings, but the effect of data-augmentation did not stop model overfitting with basic Sakari's Xception architecture at least for myself. # - I read from wikipedia that the human eye size has about max 2% difference in adult eye size, and the medical eye images should be pretty "fixed in place" in terms of the eye position because the patient is putting his head into a cradle to fully stabilize the head during an eye examination. But perhaps the blood veins inside the eye are "rotated" from person to person when comparing? # # - Xception convNet with dense, seemed to have good training and validation accuracies, but only 30 epochs were done for training. Perhaps that model could be better with larger trainingsets and more time spent on training the whole model. Maybe dropout could also be tweaked # # # # + [markdown] _uuid="6eb6f9aae86b621a19a5c2a16cdffe3fddd9421e" # ## Learning outcomes # # * most important lesson in this case2 for me was that when faced with image recognition, first you must try to overfit your model, then you can tweak it later. # * second important lesson in this case2 was that if you want good results from actual testing, dont try to be a hero and re-invent the wheel, use ready convNet architectures at least first. # * I learned how to work within kaggle environment, which was nice because it offloaded that computation to the cloud, so you could think about your next plan while the kernels are committing/running # * later on, I read tutorial about how to use pre-trained convNet with keras and I managed to get some good results as soon as I cropped and upsized the images to about 299x299 from the old 100x100 # * I think I finally understood a lot of how that keras functionality works such as the imageDataGenerators and flow_from_dataframe. # * gained more experience in working with the keras, because I had to read a lot into the documentation (which parameters are there and what they do etc...) # * I learned how the kaggle workflow works best at least for me if you want to try the same basic setup, but with different keras parameter such as switching optimizers, batchsizes, learning rates etc... # * I also tested some basic style convnets such as the CIFAR-10 dataset model I introduced in the beginning, but the results were not good at least for me. 
# * probably with hindsight the best strategy from the very beginning would have been to start immediately in the kaggle environment, then take a good pretrained convnet like inception or Vgg19 and just use that. In that way you would learn how to use the pre-trained convnet and also get good results. # + [markdown] _uuid="5d3d537e2dc8ba5b5b095432485dc300f0e5fe8e" # ## my kaggle workflow # # 1. create basic notebook kernel which has the model and initial setups done, including the getting the data and training model # 2. put epochs to 3 and run the whole notebook so it checks if it will crash (it should not take huge amount of time with only small amount of epochs) # 3. now you can estimate how long it will take per epoch and write it on paper or something # 4. put epochs back to whatever you want, then save notebook to your local pc (for safety purposes) # 5. Then you could maybe fork that kernel again, then use that forked kernel to commit that one # 6. then you should still be able to fork that original kernel how many times you want and make commits of those ones, with whateve changes you want to apply # 7. then you can pretty much wait until the kernels are committed in the cloud and do something else or ponder your next move # 8. as mentioned earlier you get your saved models in the committed kernels output section, if you want to download your trained models to your pc # + [markdown] _uuid="b3902ecc27d3befd3473c3adf626081fb27b71e1" # # + [markdown] _uuid="2b7da276c37879040a4c3097797ce6b1f783853b" # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import tensorflow from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print(tf.__version__) fashion_mnist=keras.datasets.fashion_mnist import os print(os.getcwd()) (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names=['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Ankle boot','Bag'] train_images.shape len(train_labels) test_images.shape len(train_labels) train_labels #### now we have to preprocess this data so now we are try to load an image krishna's writes plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() train_images=train_images/255.0 test_images=test_images/255.0 # Let’s display some images. plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False)#for unshowing the grid lines in the images plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() # where are displaying classes name below # Setup the layers model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) # + # Compile the Model model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # - # Model Training model.fit(train_images, train_labels, epochs=10) # Evaluating Accuracy test_loss, test_acc = model.evaluate(test_images, test_labels) print('Test accuracy:', test_acc) # Making Predictions predictions = model.predict(test_images) predictions[0] #A prediction is an array of 10 numbers. #These describe the “confidence” of the model that the image corresponds to each of the 10 different articles of clothing. 
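#Because the final Dense layer uses a softmax activation, these 10 values are non-negative and sum to 1, so they can be read as class probabilities.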
#We can see which label has the highest confidence value. np.argmax(predictions[0]) #Model is most confident that it's an ankle boot. Let's see if it's correct test_labels[0] # Now, it’s time to look at the full set of 10 channels def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'green' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('green') # Let’s look at the 0th and 10th image first i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + i = 10 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # - #Now, let’s plot several images and their predictions. Correct ones are green, while the incorrect ones are red. num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() # + # Finally, we will use the trained model to make a prediction about a single image. # Grab an image from the test dataset img = test_images[0] print(img.shape) # Add the image to a batch where it's the only member. 
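# Keras models predict on batches, so expand_dims turns the single (28, 28) image into a batch of shape (1, 28, 28).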
img = (np.expand_dims(img,0)) print(img.shape) predictions_single = model.predict(img) print(predictions_single) # - plot_value_array(0, predictions_single, test_labels) plt.xticks(range(10), class_names, rotation=45) plt.show() prediction_result = np.argmax(predictions_single[0]) prediction_result # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ##GENERAL import math import numpy as np import pandas as pd import warnings import scipy.stats as st import statsmodels as sm import matplotlib.pyplot as plt import copy as co from scipy.optimize import curve_fit from scipy import stats import seaborn as sns # %matplotlib inline # #!pip install termcolor from termcolor import colored ##MODULES import mySQL_PBTK as db_query import distribution_fitting as dfit # - # ## Get all data from database and store in local tables sp_table = db_query.sp_table() oxc_table = db_query.oxygen_con_table() q_table = db_query.cardiac_output_table() tv_table = db_query.tissue_volumes_table() lipid_table = db_query.lipid_content_table() bf_table = db_query.blood_flow_table() # + #print(sp_table.head()) #print(oxc_table.head()) #print(q_table.head()) #print(tv_table.head()) #print(lipid_table.head()) #print(bf_table.head()) # - # ### Scaling to normalized values for Cardiac Output and Oxygen Consumption to derive the normalized parameter distribution def oxc_scaling(): oxc = db_query.oxygen_con_table() #weight is in g, value is in mg O2/kg/h oxc_df = [] weight_kg = [] value_mg_h =[] VO2_w = [] VO2_w_t = [] for i in range(len(oxc.weight)): weight_kg.append((oxc.weight.astype(float)[i]/1000)) #to transform weight in kg for i in range(len(oxc.value)): value_mg_h.append(oxc.value.astype(float)[i]*weight_kg[i]) #value to actual weight mg/h for i in range(len(oxc.value)): VO2_w.append(oxc.value.astype(float)[i]/(math.pow(weight_kg[i], 0.8))) #scaled value 1kg fish mg/kg/h #scaling factor overall distribution is 0.8 for i in range(len(VO2_w)): VO2_w_t.append(VO2_w[i]*math.pow(1.805,((15-oxc.temp.astype(float))[i]/10))) #1kg at 15C mg/h/kg #1.805 is q10 value from Cardiac Output #15 is temp that we want to scale for oxc_df = [oxc.species, np.log10(VO2_w), oxc.temp.astype(float), np.log10(weight_kg), np.log10(value_mg_h), VO2_w_t] # list of lists is converted to a dataframe df = dict(species= oxc_df[0], log_VO2_w = oxc_df[1],T= oxc_df[2], log_w = oxc_df[3], log_VO2= oxc_df[4], VO2_w_t = oxc_df[5]) ## first convert in a dict, to set the column names df = pd.DataFrame.from_dict(df, orient='columns', dtype=None) return df def cardiac_output_scaling(): q = db_query.cardiac_output_table() #weight is in g, value is in ml/min/kg q_df = [] weight_kg = [] value_L_h_kg = [] value_L_h =[] Q_w_t = [] for i in range(len(q.weight)): weight_kg.append((q.weight.astype(float)[i]/1000)) #to transform weight in kg for i in range(len(q.value)): value_L_h_kg.append((q.value.astype(float)[i]/1000)*60) #to transform value in L/h/kg for i in range(len(value_L_h_kg)): value_L_h.append(value_L_h_kg[i]*weight_kg[i]) #value to actual weight L/h for i in range(len(value_L_h_kg)): Q_w_t.append(value_L_h_kg[i]*math.pow(1.805,((15-q.temp.astype(float))[i]/10))) #1kg at 15C L/h/kg #1.805 is q10 value #15 is temp that we want to scale for q_df = [q.species, q.temp.astype(float), np.log10(weight_kg), np.log10(value_L_h), Q_w_t] #To make the analysis of the selected easier, the list of 
lists is converted to a dataframe df = dict(species= q_df[0], T = q_df[1], log_w = q_df[2], log_Q_L_h = q_df[3], Q_w_t = q_df[4] ) ## first convert in a dict, to set the column names df = pd.DataFrame.from_dict(df, orient='columns', dtype=None) return df # ### Fitting the data to a propper distribtion so we can give this information to the random number generator. # Oxygen Consumption # + df= oxc_scaling() # Load data from datasets and set axis titles dataset = "Oxygen Consumption" unit = "log VO2 [mg/kg/h]" data= np.log10(df["VO2_w_t"]) bins=100 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # - # Cardiac Output # + df = cardiac_output_scaling() # Load data from datasets and set axis titles dataset = "Cardiac Output" unit = "Q [L/h/kg]" data= df["Q_w_t"] bins=30 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # - # # Tissue Volumes # # Liver # + # Load data from datasets and set axis titles dataset = "Liver Volume" unit = 'Volume [fraction of wet weight]' liver = tv_table[tv_table.tissue == 'liver'] data = liver.value/100 #due to fractions used in model bins = 30 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # - # Gonads # + # Load data from datasets and set axis titles dataset = "Gonads Volume" unit = 'Volume [fraction of wet weight]' gonads = tv_table[tv_table.tissue == 'gonads'] data= gonads.value/100 #due to fractions used in model bins = 40 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles dataset = "Kidney Volume" unit = 'Volume [fraction of wet weight]' kidney = tv_table[tv_table.tissue == 'kidney'] data = kidney.value/100 #due to fractions used in model bins = 10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles #dataset = "PPT Volume" #unit = 'Volume [fraction of wet weight]' ppt = tv_table[tv_table.tissue == 'ppt'] data= ppt.value/100 #due to fractions used in model #bins = 10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) #dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles dataset = "RPT Volume" unit = 'Volume [fraction of wet weight]' rpt = tv_table[tv_table.tissue == 'rpt'] data= rpt.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles #dataset = "Fat Volume" #unit = 'Volume [fraction of wet weight]' fat = tv_table[tv_table.tissue == 'fat'] data= fat.value/100 #due to fractions used in model #bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) #dfit.distribution_plot(dataset, unit, data,bins) # - # # Lipid Contents # # Liver # + # Load data from datasets and set axis titles dataset = "Liver Lipid Content" unit = 'Lipid Content [fraction of wet weight]' liver = lipid_table[lipid_table.tissue == 'liver'] data= liver.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) 
print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles dataset = "Gonads Lipid Content" unit = 'Lipid Content [fraction of wet weight]' gonads = lipid_table[lipid_table.tissue == 'gonads'] data= gonads.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles dataset = "Kidney Lipid Content" unit = 'Lipid Content [fraction of wet weight]' kidney = lipid_table[lipid_table.tissue == 'kidney'] data= kidney.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # PPT forced to be gamma # + # Load data from datasets and set axis titles dataset = "PPT Lipid Content" unit = 'Lipid Content [fraction of wet weight]' ppt = lipid_table[lipid_table.tissue == 'ppt'] data= ppt.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles #dataset = "RPT Lipid Content" #unit = 'Lipid Content [fraction of wet weight]' rpt = lipid_table[lipid_table.tissue == 'rpt'] data= rpt.value/100 #due to fractions used in model #bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) #dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "Fat Lipid Content" unit = 'Lipid Content [fraction of wet weight]' fat = lipid_table[lipid_table.tissue == 'fat'] data= fat.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "Blood Lipid Content" unit = 'Lipid Content [fraction of wet weight]' blood = lipid_table[lipid_table.tissue == 'blood'] data= blood.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "Total Lipid Content" unit = 'Lipid Content [fraction of wet weight]' total = lipid_table[lipid_table.tissue == 'total'] data= total.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # - # # Blood flows # # Liver # + # Load data from datasets and set axis titles dataset = "Liver Blood Flow" unit = 'Blood Flow [fraction of Q_c]' liver = bf_table[bf_table.tissue == 'liver'] data = liver.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles #dataset = "Gonads Blood Flow" #unit = 'Volume [fraction of Q_c]' gonads = bf_table[bf_table.tissue == 'gonads'] data= gonads.value/100 #due to fractions used in model #bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) 
#dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "Kidney Blood Flow" unit = 'Blood Flow [fraction of Q_c]' kidney = bf_table[bf_table.tissue == 'kidney'] data= kidney.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "PPT Blood Flow" unit = 'Blood Flow [fraction of Q_c]' ppt = bf_table[bf_table.tissue == 'ppt'] data= ppt.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data,bins) # + # Load data from datasets and set axis titles dataset = "RPT Blood Flow" unit = 'Blood Flow [fraction of Q_c]' rpt = bf_table[bf_table.tissue == 'rpt'] data= rpt.value/100 #due to fractions used in model bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) dfit.distribution_plot(dataset, unit, data, bins) # + # Load data from datasets and set axis titles #dataset = "Fat Blood Flow" #unit = 'Blood Flow [fraction of Q_c]' fat = bf_table[bf_table.tissue == 'fat'] data= fat.value/100 #due to fractions used in model #bins=10 print('low: '+str(min(data))) print('high: '+str(max(data))) print('n: '+str(data.count())) #dfit.distribution_plot(dataset, unit, data,bins) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt from statsmodels.formula.api import ols from scipy import stats from sklearn.metrics import mean_squared_error, r2_score, explained_variance_score from sklearn.linear_model import LinearRegression from sklearn.feature_selection import f_regression from math import sqrt import seaborn as sns import warnings warnings.filterwarnings('ignore') import evaluate import math # - # ### 1. Load the tips dataset from either pydataset or seaborn. # + from pydataset import data tips = data('tips') tips.head() # - # ### 2. Fit a linear regression model (ordinary least squares) and compute yhat, predictions of tip using total_bill. # # # + # create the model object lm = LinearRegression(normalize=True) # fit the model to trainig data lm.fit(tips[['total_bill']], tips.tip) # make prediction tips['yhat'] = lm.predict(tips[['total_bill']]) # - # make a baseline prediction (mean of the tip) tips['baseline'] = tips.tip.mean() tips.head() # + # plot data and prediction line sns.scatterplot(x = 'total_bill', y = 'tip', data = tips) sns.lineplot(x = 'total_bill', y = 'baseline', data = tips) sns.lineplot(x = 'total_bill', y = 'yhat', data = tips) # - # ### 3. Plot the residuals for the linear regression model that you made. # # tips['residual'] = tips.tip - tips.yhat tips['baseline_residual'] = tips.tip - tips.baseline tips.head() sns.relplot(x = 'total_bill', y = 'residual', data = tips) plt.axhline(0, ls = ':') # Heteroscedasticity: # - unequal variance of errors # - Heteroscedasticity may also have the effect of giving too much weight to a small subset of the data (namely the subset where the error variance was largest) when estimating coefficients. # - possibily apply some transformations(?) # # ### 4. 
Calculate the sum of squared errors, explained sum of squares, total sum of squares, mean squared error, and root mean squared error for your model. # # tips.head() SSE = (tips.residual**2).sum() print(f' The SSE of the OLS model is {round(SSE,1)}') SSE_baseline = (tips.baseline_residual**2).sum() print(f' The SSE of the baseline model is {round(SSE_baseline,1)}') # + #Mean squared error OLS: MSE = SSE/len(tips) MSE # + #Mean squared error of baseline model: MSE_baseline = SSE_baseline/len(tips) MSE_baseline # - # RMSE of OLS model RMSE = mean_squared_error(tips.tip, tips.yhat, squared = False) RMSE # RMSE for the baseline model RMSE_baseline = mean_squared_error(tips.tip, tips.baseline, squared = False) RMSE_baseline # + # ESS = sum(tips.yhat - tips.tip.mean())**2 ESS = sum((tips.yhat - tips.baseline)**2) ESS # + # Total Sum of Errors TSS = ESS + SSE TSS # - # ### 5. Write python code that compares the sum of squared errors for your model against the sum of squared errors for the baseline model and outputs whether or not your model performs better than the baseline model. # + df_eval = pd.DataFrame(np.array(['SSE', 'MSE','RMSE']), columns=['metric']) df_eval['model_error'] = np.array([SSE, MSE, RMSE]) df_eval # + df_eval['baseline_error'] = np.array([SSE_baseline,MSE_baseline, RMSE_baseline]) df_eval # - df_eval['better_than_baseline'] = df_eval.baseline_error > df_eval.model_error df_eval # ### 7. What is the amount of variance explained in your model? from sklearn.metrics import r2_score r2_score(tips.tip, tips.yhat) # ### 9. Create a file named evaluate.py that contains the following functions. def plot_residuals(actual, predicted): residuals = actual - predicted plt.hlines(0, actual.min(), actual.max(), ls=':') plt.scatter(actual, residuals) plt.ylabel('residual ($y - \hat{y}$)') plt.xlabel('actual value ($y$)') plt.title('Actual vs Residual') plt.show() # + def residuals(actual, predicted): return actual - predicted def sse(actual, predicted): return (residuals(actual, predicted) **2).sum() def mse(actual, predicted): n = actual.shape[0] return sse(actual, predicted) / n def rmse(actual, predicted): return math.sqrt(mse(actual, predicted)) def ess(actual, predicted): return ((predicted - actual.mean()) ** 2).sum() def tss(actual): return ((actual - actual.mean()) ** 2).sum() def r2_score(actual, predicted): return ess(actual, predicted) / tss(actual) # + def regression_errors(actual, predicted): return pd.Series({ 'sse': sse(actual, predicted), 'ess': ess(actual, predicted), 'tss': tss(actual), 'mse': mse(actual, predicted), 'rmse': rmse(actual, predicted), }) def baseline_mean_errors(actual): predicted = actual.mean() return { 'sse': sse(actual, predicted), 'mse': mse(actual, predicted), 'rmse': rmse(actual, predicted), } def better_than_baseline(actual, predicted): rmse_baseline = rmse(actual, actual.mean()) rmse_model = rmse(actual, predicted) return rmse_model < rmse_baseline # - better_than_baseline(tips.tip, tips.yhat) # ### 10. Load the mpg dataset and fit a model that predicts highway mileage based on engine displacement. Take a look at all the regression evaluation metrics, and determine whether this model is better than the baseline model. Use the functions from your evaluate.py to help accomplish this. 
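# For reference, the metrics computed by the evaluate.py helpers above are (restating the same definitions as the code):
#
# $SSE = \sum_i (y_i - \hat{y}_i)^2$, $\; MSE = SSE / n$, $\; RMSE = \sqrt{MSE}$
#
# $ESS = \sum_i (\hat{y}_i - \bar{y})^2$, $\; TSS = \sum_i (y_i - \bar{y})^2$, $\; R^2 = ESS / TSS$, with $TSS = ESS + SSE$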
mpg = data('mpg') mpg.head() # + #plot displacement vs highway mpg plt.scatter(mpg.displ, mpg.hwy) # + # create the model object lm = LinearRegression(normalize=True) # fit the model to trainig data lm.fit(mpg[['displ']], mpg.hwy) # make prediction predictions = lm.predict(mpg[['displ']]) # + # plot regression line plt.scatter(mpg.displ, mpg.hwy) plt.plot(mpg.displ, predictions) plt.xlabel('displacement (lites)') plt.ylabel('highway mpg') # + # plot displacement vs residuals plt.scatter(mpg.displ, (mpg.hwy - predictions)) plt.axhline(0, ls = ':') plt.ylabel('residuals') plt.xlabel('displacement (liters)') # - mpg[mpg.displ > 5] # calculate regressions errors evaluate.regression_errors(mpg.hwy, predictions) # + # is our model better than baseline? evaluate.better_than_baseline(mpg.hwy, predictions) # + # R2 score evaluate.r2_score(mpg.hwy, predictions) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import matplotlib.pyplot as plt import numpy as np # %matplotlib inline from sklearn import svm from sklearn.model_selection import train_test_split from sklearn import metrics import pickle import requests # - data = pd.read_csv('http://3e9387c7.ngrok.io/get_data') data # + # data.to_csv('gesture_decline.csv') # - gesture1 = pd.read_csv('gesture_accept.csv') gesture2 = pd.read_csv('gesture_decline.csv') # #### training data prepocessing gesture1[:11] a = gesture1.loc[:10] gesture1.any(1) (gesture1[:10] == 0).all(1) gesture1.loc[(gesture1==0).any(1)] # + # 1 = accept, 2 = decline gesture1 = gesture1.drop(columns=['Unnamed: 0']) gesture2 = gesture2.drop(columns=['Unnamed: 0']) drop_zeros = gesture1.loc[(gesture1==0).any(1)].index gesture1 = gesture1.drop(drop_zeros, axis=0) gesture1= gesture1.reset_index(drop=True) drop_zeros = gesture2.loc[(gesture2==0).any(1)].index gesture2 = gesture2.drop(drop_zeros, axis=0) gesture2= gesture2.reset_index(drop=True) gesture1['label'] = 0 gesture2['label'] = 1 # - gesture1 # + gesture1 = gesture1.loc[:2980] gesture2 = gesture2.loc[:2470] dataset = gesture1.append(gesture2) dataset= dataset.reset_index(drop=True) # - # ### Preprocessing 1 # + labels = dataset['label'].to_numpy() labels = labels[:-9:10] dataset = dataset[['x','y','z']] # - dataset labels.size dataset = dataset.loc[:5450] dataset # ### Preprocessing 2 frames = [] def frame(): n=0 while n != len(dataset)-1: frames.append(dataset.loc[n:n+1].to_numpy()) n += 1 else: return frames gest_frames = frame() len(gest_frames) # ### Peprocessing 3 > FFT, feature extraction and feature vector # + def fft_transform(frame_data): fft_transform.feature_set = np.array([]) for frame in frame_data: frame_feature_set = np.array([]) fft_frame = np.fft.fft(frame, axis=0) # fft_transform.fft_data.append(fft_frame) # n = frame.size # freq = np.fft.fftfreq(n, d=0.01) # fft_transform.fft_freq.append(freq) # fft_ps = np.abs(fft_frame)**2 # fft_transform.fft_power_spec.append(fft_ps) mean = np.mean(fft_frame, axis=0) frame_feature_set = np.concatenate((frame_feature_set, mean), axis=0) energy = (np.sum((fft_frame**2),axis=0))/len(fft_frame) frame_feature_set = np.concatenate((frame_feature_set, energy), axis=0) stdev = np.std(fft_frame, axis=0) frame_feature_set = np.concatenate((frame_feature_set, stdev), axis=0) correlation = np.corrcoef(fft_frame, rowvar=False).reshape(3,3) frame_feature_set = 
np.concatenate((frame_feature_set.reshape(3,3), correlation), axis=0)
        fft_transform.feature_set = np.concatenate((frame_feature_set.flatten(), fft_transform.feature_set), axis=0) # make a simple array
    return fft_transform.feature_set
# -

fft_transform(gest_frames)

# ### Training data set

n = int((fft_transform.feature_set.size)/180)
axis = 3
feature_10axis = 60
fft_transform.feature_set = fft_transform.feature_set.reshape(n, (feature_10axis*axis))
X = np.abs(fft_transform.feature_set)
y = labels

y.shape

X[1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

C = 0.1
clf = svm.SVC(kernel='poly', C=C, probability=True)
clf.fit(X_train, y_train)

with open('/Users/martinantos/Documents/school/IoT/Panache/clf.pickle', 'wb') as f:
    pickle.dump(clf, f)

model = pickle.dumps(clf)

# +
# clftest = pickle.loads(model)

# +
# clftest.predict(X.reshape(1,-1))
# -

y_pred = clf.predict(X_test)
print("accuracy:", metrics.accuracy_score(y_test, y_pred))
print("Precision:", metrics.precision_score(y_test, y_pred))
print("Recall:", metrics.recall_score(y_test, y_pred))

# 1 = accept,
def clfout(sample):
    sample = sample.reshape(1,-1)
    if clf.decision_function(sample) <= (-1.2):
        return 1
    elif clf.decision_function(sample) >= 1.0:
        return 2
    else:
        return 0

clfout(X[356])

clf.decision_function(X).max()

clf.decision_function(X)

clf.predict(X[65].reshape(1,-1))

# +
clf.decision_function(X[65].reshape(1,-1))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.9.7 64-bit (''DS-env'': venv)'
#     name: python3
# ---

import pandas as pd

df=pd.read_csv("data/rap_world/rapworld.csv", sep=";")

upper_left=(41.000449, -74.121283)
down_right=(40.556001, -73.060932)

(df["Name"]=="22 GZ").any()

df["x"]=df["Coordinates"].apply(lambda x : float(x.split(",")[0]))
df["y"]=df["Coordinates"].apply(lambda x : float(x.split(",")[1]))

df[(df["x"] < upper_left[0]) & (df["x"] > down_right[0]) & (df["y"] > upper_left[1]) & (df["y"] < down_right[1])]

df[df["Name"] == "Tupac"]

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Imports

# +
import itertools

import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as st
import matplotlib.pyplot as plt

from sklearn import linear_model
from sklearn.model_selection import KFold, train_test_split
from sklearn.ensemble import RandomForestRegressor
# -

# ## Data

df = pd.read_csv('Очищенные обработанные данные.csv', sep=',', index_col=0)#, parse_dates=['DT'])
df.head()

TARGETS = ['химшлак последний Al2O3', 'химшлак последний CaO', 'химшлак последний R', 'химшлак последний SiO2']

# drop the columns that contain "future" information
last_analysis_cols = [col for col in df.columns if (col.find('последний') != -1) & (col not in TARGETS)]
last_analysis_cols

df.drop(last_analysis_cols, axis=1, inplace=True)

len(df.columns)

# ## Feature Engineering

pearson_corr = df.drop(TARGETS, axis=1).corr(method='pearson', min_periods=50)
pos_corr = np.abs(np.nan_to_num(pearson_corr.values, 0.0))

strong_corr_pairs = []
for i in range(1, len(pos_corr)):
    if np.max(pos_corr[i, :i]) >= 0.7:
        j = np.argmax(pos_corr[i, :i])
        strong_corr_pairs.append((i, j, pearson_corr.values[i, j]))

cols = df.drop(TARGETS, axis=1).columns
cols_dict = dict(list(zip(range(len(cols)), cols)))

# strongly correlated features
strong_corr_df = pd.DataFrame(strong_corr_pairs).replace(cols_dict).sort_values(2,
ascending=False) strong_corr_df pairs = strong_corr_df[[0,1]].values pairs # #### Imputter # + from sklearn.impute import KNNImputer imputer = KNNImputer(n_neighbors=2) X = pd.DataFrame(imputer.fit_transform(df.drop(TARGETS, axis=1)), columns=df.drop(TARGETS, axis=1).columns) # - df[X.columns] = X.values # Попробуем отношение. # + def safe_division(x, y): if (x != x) | (y != y) | (y == 0): return np.nan return x / y for pair in pairs: new_col = pair[0]+'_'+pair[1]+'_ratio' df[new_col] = df.apply(lambda x: safe_division(x[pair[0]],x[pair[1]]), axis=1) # - import itertools t_features = ['t вып-обр', 't обработка', 't под током', 't продувка'] t_combinations = list(itertools.combinations(t_features, 2)) for pair in t_combinations: new_col = pair[0]+'_'+pair[1]+'_sub' df[new_col] = df.apply(lambda x: abs(x[pair[0]]-x[pair[1]]), axis=1) df.drop(['произв количество обработок'], axis=1, inplace=True) df.shape # ## Feature Importance for col in df.columns: if (df[col].nunique() <= 50) & (col not in TARGETS): df[col] = df[col].astype('category') NUMERICAL = df.select_dtypes(exclude=['object', 'datetime64']).columns.tolist() ORDINAL = df.select_dtypes(include=['category']).columns.tolist() for tar in TARGETS: if tar in NUMERICAL: NUMERICAL.remove(tar) if tar in ORDINAL: ORDINAL.remove(tar) # #### Correlations # + correlations = dict() for tar in TARGETS: correlations[tar] = dict() for col in NUMERICAL: correlations[tar][col] = df[col].corr(df[tar]) # - corrs = pd.DataFrame(correlations) corrs[(abs(corrs) > 0.5).any(1)] golden_features = list(corrs[(abs(corrs) > 0.3).any(1)].index) # ## Model df[TARGETS].count() / df.shape[0] def make_train_test_for_target(df, target, cols_exclude_from_sample=[]): new_df = df[~df[target].isna()] return new_df.drop(set(cols_exclude_from_sample)-set([target]), axis=1) # ### Linear Regression from sklearn.linear_model import LinearRegression, ElasticNet target = 'химшлак последний Al2O3' data = make_train_test_for_target(df, target, TARGETS) # + import scipy.stats as ss fig = plt.figure() ax1 = fig.add_subplot(211) x = data[target].values prob = ss.probplot(x, dist=ss.norm, plot=ax1) ax1.set_xlabel('') ax1.set_title('Probplot against normal distribution') ax2 = fig.add_subplot(212) xt, _ = ss.boxcox(x) prob = ss.probplot(xt, dist=ss.norm, plot=ax2) ax2.set_title('Probplot after Box-Cox transformation') plt.show() # - X, y = data.drop(target, axis=1), data[target] features = X.columns train_X, val_X, train_y, val_y = train_test_split(X, np.log(y+0.0001), random_state=0, test_size=.4, shuffle=True) # + # # !pip install sklearn_pandas --user # + from sklearn.preprocessing import StandardScaler, MinMaxScaler from sklearn_pandas import DataFrameMapper, gen_features def map_features(features=[]): numerical_def = gen_features( columns=[[c] for c in NUMERICAL if c in features], classes=[ {'class': StandardScaler} ] ) ordinal_def = gen_features( columns=[[c] for c in ORDINAL if c in features], classes=[ {'class': MinMaxScaler} ] ) return numerical_def #+ ordinal_def # - mapper1 = DataFrameMapper(map_features(features), df_out=True) # + from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.model_selection import cross_validate def mean_absolute_percentage_error(y_true, y_pred): y_true, y_pred = np.array(y_true), np.array(y_pred) return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 def evaluate_model(train, val, tr_y, val_y, features, est): mapper = DataFrameMapper(features, df_out=True) # трансформим отдельно трейн (фиттим) и тест train = 
mapper.fit_transform(train) val = mapper.transform(val) est.fit(train, tr_y) pred_val = est.predict(val) pred_train = est.predict(train) return pd.DataFrame({ 'train_R2': [r2_score(tr_y, pred_train)], 'train_MAE': [mean_absolute_error(tr_y, pred_train)], 'train_MSE': [mean_squared_error(tr_y, pred_train)], 'train_MAPe': [mean_absolute_percentage_error(tr_y, pred_train)], 'test_R2': [r2_score(val_y, pred_val)], 'test_MAE': [mean_absolute_error(val_y, pred_val)], 'test_MSE': [mean_squared_error(val_y, pred_val)], 'test_MAPe': [mean_absolute_percentage_error(val_y, pred_val)] }) # - from sklearn.linear_model import LassoCV, RidgeCV reg_Al2O3 = RidgeCV() scores = evaluate_model(train_X, val_X, train_y, val_y, map_features(features), reg_Al2O3) scores _ = mapper1.fit(train_X) reg_Al2O3.fit(mapper1.transform(train_X), train_y) import pickle filename = 'Al2O3.sav' # pickle.dump(reg_Al2O3, open(filename, 'wb')) preds = reg_Al2O3.predict(mapper1.transform(data[features])) fig = plt.figure(figsize=(15,10)) plt.plot(np.exp(preds), label='preds') plt.plot(data['химшлак последний Al2O3'].values,alpha=0.5, label='true') plt.legend(fontsize='14') # #### Следующий таргет from imblearn.under_sampling import RandomUnderSampler target = 'химшлак последний R' data = make_train_test_for_target(df, target, set(TARGETS)) # добавим колонку Al2O3 data['химшлак последний Al2O3'] = np.exp(reg_Al2O3.predict(mapper1.transform(data[features]))) # добавим предыдущее предсказание data = data[~(data['химшлак последний R'].isin([3.1, 1.9, 2.12, 2.9, 3.0]))] # эти значения встречаются по 1-2 раза features_2 = set(data.columns) - set([target]) NUMERICAL += ['химшлак последний Al2O3'] X, y = data[features_2], data[target] from collections import Counter Counter(data['химшлак последний R']) from imblearn.over_sampling import SMOTENC # дискретизируем значения и сделаем оверсэмплинг по "классам" ord_cols = [X.columns.get_loc(c) for c in X.columns if c in ORDINAL] smote_nc = SMOTENC(categorical_features=ord_cols,sampling_strategy='not majority',random_state=0, k_neighbors=10) x_rus, y_rus = smote_nc.fit_resample(X, y.astype(str)) Counter(y_rus) train_X, val_X, train_y, val_y = train_test_split(x_rus, y_rus.astype(float), random_state=0, test_size=.4, shuffle=True, stratify=y_rus) from sklearn.linear_model import LassoCV, RidgeCV from sklearn.ensemble import RandomForestRegressor reg_R = RandomForestRegressor(n_estimators=10, max_depth=5, random_state=0) scores = evaluate_model(train_X, val_X, train_y, val_y, map_features(features_2), reg_R) scores mapper2 = DataFrameMapper(map_features(features_2), df_out=True) _ = mapper2.fit(train_X[features_2]) reg_R.fit(mapper2.transform(train_X), train_y) preds = reg_R.predict(mapper2.transform(data[features_2])) fig = plt.figure(figsize=(15,10)) plt.plot(np.around(preds, 2), label='preds') plt.plot(data['химшлак последний R'].values, alpha=0.5, label='true') plt.legend(fontsize='14') filename = 'R.sav' # pickle.dump(reg_R, open(filename, 'wb')) # #### Следующий таргет df['химшлак последний SiO2'].hist() # с бимодальным распределением сложно работать target = 'химшлак последний SiO2' data = make_train_test_for_target(df, target, TARGETS) data['химшлак последний Al2O3'] = np.exp(reg_Al2O3.predict(mapper1.transform(data[features]))) # добавим предыдущее предсказание data['химшлак последний R'] = np.around(reg_R.predict(mapper2.transform(data[features_2])), 2) # добавим предыдущее предсказание data['химшлак последний R_химшлак последний Al2O3_mul'] = data['химшлак последний 
Al2O3']*data['химшлак последний R'] # из логики и корреляции фич features_4 = set(data.columns) - set([target]) # ORDINAL += ['химшлак последний R'] NUMERICAL += ['химшлак последний R_химшлак последний Al2O3_mul', 'химшлак последний R'] X, y = data[features_4], data[target] # попытка стратификации по значениям y_strat = y.copy() y_strat.loc[y_strat <=24] = 22 y_strat.loc[y_strat > 24] = 26 # train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0, test_size=.4, shuffle=True, stratify=y_strat.astype(str)) train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0, test_size=.4, shuffle=True) from sklearn.linear_model import LogisticRegression # reg_SiO2 = RandomForestRegressor(n_estimators=50, max_depth=3) reg_SiO2 = RidgeCV(cv=3) scores = evaluate_model(train_X, val_X, train_y, val_y, map_features(features_4), reg_SiO2) scores reg_SiO2.coef_ mapper4 = DataFrameMapper(map_features(features_4), df_out=True) _ = mapper4.fit(train_X[features_4]) reg_SiO2.fit(mapper4.transform(train_X), train_y) preds = reg_SiO2.predict(mapper4.transform(X)) fig = plt.figure(figsize=(15,10)) # plt.plot((preds-np.mean(preds))*np.std(train_y)/np.std(preds)+np.mean(preds)) plt.plot(preds) plt.plot(data['химшлак последний SiO2'].values, alpha=0.5) filename = 'SiO2.sav' # pickle.dump(reg_SiO2, open(filename, 'wb')) # #### Последний таргет df['химшлак последний CaO'].hist() target = 'химшлак последний CaO' data = make_train_test_for_target(df, target, [target]) # добавим колонку Al2O3 data['химшлак последний Al2O3'] = reg_Al2O3.predict(mapper1.transform(data[features])) # добавим предыдущее предсказание data['химшлак последний R'] = reg_R.predict(mapper2.transform(data[features_2])) # добавим предыдущее предсказание data['химшлак последний R_химшлак последний Al2O3_mul'] = data['химшлак последний Al2O3']*data['химшлак последний R'] data['химшлак последний SiO2'] = reg_SiO2.predict(mapper4.transform(data[features_4])) features_3 = set(data.columns) - set([target]) NUMERICAL += ['химшлак последний SiO2'] X, y = data[features_3], data[target] # + fig = plt.figure() ax1 = fig.add_subplot(211) x = data[target].values prob = ss.probplot(x, dist=ss.norm, plot=ax1) ax1.set_xlabel('') ax1.set_title('Probplot against normal distribution') ax2 = fig.add_subplot(212) xt, _ = ss.boxcox(x) prob = ss.probplot(xt, dist=ss.norm, plot=ax2) ax2.set_title('Probplot after Box-Cox transformation') plt.show() # - train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0, test_size=.2, shuffle=True) # reg_CaO = RandomForestRegressor(n_estimators=50, max_depth=3) reg_CaO = RidgeCV(cv=3) scores = evaluate_model(train_X, val_X, train_y, val_y, map_features(features_3), reg_CaO) scores reg_CaO.coef_ filename = 'CaO.sav' # pickle.dump(reg_CaO, open(filename, 'wb')) len(df.columns) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Decision Trees # Adapted from: https://scikit-learn.org/stable/modules/tree.html. 
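# Before reproducing the paired-feature decision surfaces below, here is a minimal, self-contained sketch of the scikit-learn tree API this notebook relies on (fit a single classifier on all four iris features and score it on a held-out split). The train/test split and `random_state` are illustrative choices, not part of the original example.

# +
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris_sketch = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris_sketch.data, iris_sketch.target, test_size=0.3, random_state=0)

# Fit a tree with default hyperparameters and report held-out accuracy
clf_sketch = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", clf_sketch.score(X_test, y_test))
# -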
# # First, we import the necessary libraries: # + import numpy as np import pylab as pl from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier # - # Next, we load the Iris flower dataset: # Load data iris = load_iris() # Now we run decision trees on each pair of features and plot the results: # + # Parameters n_classes = 3 plot_colors = "bry" plot_step = 0.02 figure = pl.figure(figsize=(16, 10)) for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]): # We only take the two corresponding features X = iris.data[:, pair] y = iris.target # Shuffle idx = np.arange(X.shape[0]) np.random.seed(13) np.random.shuffle(idx) X = X[idx] y = y[idx] # Standardize mean = X.mean(axis=0) std = X.std(axis=0) X = (X - mean) / std # Train clf = DecisionTreeClassifier().fit(X, y) # Plot the decision boundary pl.subplot(2, 3, pairidx + 1) x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) cs = pl.contourf(xx, yy, Z, cmap=pl.cm.Paired) pl.xlabel(iris.feature_names[pair[0]], fontsize=14) pl.ylabel(iris.feature_names[pair[1]], fontsize=14) pl.axis("tight") # Plot the training points for i, color in zip(range(n_classes), plot_colors): idx = np.where(y == i) pl.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i], cmap=pl.cm.Paired) pl.axis("tight") pl.suptitle("Decision Surface of a Decision Tree Using Paired Features", fontsize=20) pl.legend() pl.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (traders_nlp) # language: python # name: traders_nlp # --- # ## Import packages # + import numpy as np import pandas as pd import datetime as dt from matplotlib import pyplot as plt from pandas.tseries.holiday import USFederalHolidayCalendar as calendar # - # ## Read csv # + df = pd.read_csv("../ib-data/nyse/DDD.csv") # Parse strings to datetime object df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S') # - df['date'].dt.date.isin(calendar()) cal = calendar() holidays = cal.holidays(start=df['date'].min() - dt.timedelta(days=1), end=df['date'].max() + dt.timedelta(days=1)) # ## Datetime features def generate_timestamp_features(df): ### This set values should uniquely identify a record df['year'] = df['date'].dt.year df['monthOfYear'] = df['date'].dt.month df['dayOfMonth'] = df['date'].dt.day df['hourOfDay'] = df['date'].dt.hour df['minuteOfHour'] = df['date'].dt.minute ################################################################################################################### ### Day of week df['dayOfWeek'] = df['date'].dt.dayofweek ### Day of year df['dayOfYear'] = df['date'].dt.dayofyear ### Week of year df['weekOfYear'] = df['date'].dt.week ################################################################################################################### cal = calendar() holidays = cal.holidays(start=df['date'].min() - dt.timedelta(days=1), end=df['date'].max() + dt.timedelta(days=1)) ### Holiday df['isHoliday'] = df['date'].isin(holidays) df['prevDayIsHoliday'] = (df['date'] - dt.timedelta(days=1)).dt.normalize().isin(holidays) df['nextDayIsHoliday'] = (df['date'] + dt.timedelta(days=1)).dt.normalize().isin(holidays) ### Convert to int df['isHoliday'] = df['isHoliday'].astype(int) 
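### Note (added comment): df['date'].isin(holidays) above compares full intraday
### timestamps against midnight holiday dates, so isHoliday only flags rows stamped
### exactly at midnight; for minute-level bars, .dt.normalize().isin(holidays)
### (as used for the prev/next-day flags) is probably what is intended.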
df['prevDayIsHoliday'] = df['prevDayIsHoliday'].astype(int) df['nextDayIsHoliday'] = df['nextDayIsHoliday'].astype(int) ################################################################################################################### df = df[['date', 'year', 'monthOfYear', 'dayOfMonth', 'hourOfDay', 'minuteOfHour', 'dayOfWeek', 'dayOfYear', 'weekOfYear', 'isHoliday', 'prevDayIsHoliday', 'nextDayIsHoliday', 'open', 'high', 'low', 'close', 'volume','barCount', 'average']] return df # + df = generate_timestamp_features(df) df.columns # - df # ## Set Datetime Index (Optional) # + # Set datetime column as index # df = df.set_index('date') # - df.index df.values # ## Plotting plt.plot(df['close'][0:100]) plt.show() # ## Generate New Dataset import glob import os nyse_csv_paths = sorted(glob.glob("../ib-data/nyse-daily/*.csv")) nasdaq_csv_paths = sorted(glob.glob("../ib-data/iex/*.csv")) # #### NYSE for i in range(len(nyse_csv_paths)): path = nyse_csv_paths[i] # read csv df = pd.read_csv(path) # Parse strings to datetime object df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S') # generate features df = generate_timestamp_features(df) # save to new file PATH_NAME = "../ib-data/nyse-daily-features/" + os.path.basename(path) df.to_csv(path_or_buf=PATH_NAME, index=False) # print message print( str(i + 1) + "/" + str(len(nyse_csv_paths)) + " completed.") # #### NASDAQ for i in range(len(nasdaq_csv_paths)): path = nasdaq_csv_paths[i] # read csv df = pd.read_csv(path) # Parse strings to datetime object df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S') # generate features df = generate_timestamp_features(df) # save to new file PATH_NAME = "../ib-data/iex-features/" + os.path.basename(path) df.to_csv(path_or_buf=PATH_NAME, index=False) # print message print( str(i + 1) + "/" + str(len(nasdaq_csv_paths)) + " completed.") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Session 8: Strings, Queries and APIs # # ** # + [markdown] slideshow={"slide_type": "slide"} # ## Recap (1:2) # # We can think of there as being two 'types' of plots: # - **Exploratory** plots: Figures for understanding data # - Quick to produce $\sim$ minimal polishing # - Interesting feature may by implied by the producer # - Be careful showing these out of context # - **Explanatory** plots: Figures to convey a message # - Polished figures # - Direct attention to interesting feature in the data # - Minimize risk of misunderstanding # # There exist several packages for plotting. Some popular ones: # - `Matplotlib` is good for customization (explanatory plots). Might take a lot of time when customizing! # - `Seaborn` and `Pandas` are good quick and dirty plots (exploratory) # + [markdown] slideshow={"slide_type": "slide"} # ## Recap (2:2) # # We need to put a lot of thinking in how to present data. # # In particular, one must consider the *type* of data that is to be presented: # # # - One variable: # - Categorical: Pie charts, simple counts, etc. 
# - Numeric: Histograms, distplot (/cumulative), boxplot in seaborn # # # - Multiple variables: # - `scatter` (matplotlib) or `jointplot` (seaborn) for (i) simple descriptives when (ii) both variables are numeric and (iii) there are not too many observations # - `lmplot` or `regplot` (seaborn) when you also want to fit a linear model # - `barplot` (matplotlib), `catplot` and `violinplot` (both seaborn) when one or more variables are categorical # - The option `hue` allows you to add a "third" categorical dimension... use with care # - Lots of other plot types and options. Go explore yourself! # # # - When you just want to explore: `pairplot` (seaborn) plots all pairwise correlations # + [markdown] slideshow={"slide_type": "slide"} # ## Agenda # # In this sesion, we will work with strings, requests and APIs: # - Text as Data # - Key Based Containers # - Interacting with the Web # - Leveraging APIs # + [markdown] slideshow={"slide_type": "slide"} # # Text as Data # # # # ## Why Text Data # # Data is everywhere... and collection is taking speed! # - Personal devices and [what we have at home](https://www.nytimes.com/wirecutter/blog/amazons-alexa-never-stops-listening-to-you/) # - Online in terms of news websites, wikipedia, social media, blogs, document archives # # Working with text data opens up interesting new avenues for analysis and research. Some cool examples: # - Text analysis, topic modelling and monetary policy: # - [Transparency and shifts in deliberation about monetary policy](https://sekhansen.github.io/pdf_files/qje_2018.pdf) # - [Narrative signals about uncertainty in inflation reports drive long-run outcomes](https://sekhansen.github.io/pdf_files/jme_2019.pdf) # - [More partisanship (polarization) in congressional speeches](https://www.brown.edu/Research/Shapiro/pdfs/politext.pdf) # # # ## How Text Data # # Data from the web often come in HTML or other text format # # In this course, you will get tools to do basic work with text as data. # # However, in order to do that: # # - learn how to manipulate and save strings # - save our text data in smart ways (JSON) # - interact with the web # + # DST # Scraping # + [markdown] slideshow={"slide_type": "slide"} # # Key Based Containers # + [markdown] slideshow={"slide_type": "slide"} # ## Containers Recap (1:2) # # *What are containers? Which have we seen?* # # Sequential containers: # - `list` which we can modify (**mutable**). # - useful to collect data on the go # - `tuple` which is after initial assignment **immutable** # - tuples are faster as they can do less things # - `array` # - which is mutable in content (i.e. we can change elements) # - but immutable in size # - great for data analysis # + [markdown] slideshow={"slide_type": "slide"} # ## Containers Recap (2:2) # # Non-sequential containers: # - Dictionaries (`dict`) which are accessed by keys (immutable objects). # - Sets (`set`) where elements are # - unique (no duplicates) # - not ordered # - disadvantage: cannot access specific elements! # + [markdown] slideshow={"slide_type": "slide"} # ## Dictionaries Recap (1:2) # # *How did we make a container which is accessed by arbitrary keys?* # # By using a dictionary, `dict`. 
Simple way of constructing a `dict`: # + slideshow={"slide_type": "-"} my_dict = {'Nicklas': 'Programmer', 'Jacob': 'Political Scientist', 'Preben': 'Executive', 'Britta': 'Accountant'} my_dict # + slideshow={"slide_type": "-"} print(my_dict['Nicklas']) # + slideshow={"slide_type": "-"} my_new_dict = {} for a in range(0,10): my_new_dict["cube%s" %a] = a**2 print(my_new_dict['cube1']) my_new_dict # + [markdown] slideshow={"slide_type": "slide"} # ## Dictionaries Recap (2:2) # # Dictionaries can also be constructed from two associated lists. These are tied together with the `zip` function. Try the following code: # + slideshow={"slide_type": "-"} keys = [''] values = range(2,5) key_value_pairs = list(zip(keys, values)) print(key_value_pairs) #Print as a list of tuples # + slideshow={"slide_type": "-"} my_dict2 = dict(key_value_pairs) print(my_dict2) #Print dictionary # - print(my_dict2['a']) #Fetch the value associated with 'a' # + [markdown] slideshow={"slide_type": "slide"} # ## Storing Containers # # *Does there exist a file format for easy storage of containers?* # # Yes, the JSON file format. # - Can store lists and dictionaries. # - Syntax is the same as Python lists and dictionaries - only add quotation marks. # - Example: `'{"a":1,"b":1}'` # # # *Why is JSON so useful?* # # - Standard format that looks exactly like Python. # - Extreme flexibility: # - Can hold any list or dictionary of any depth which contains only float, int, str. # - Does not work well with other formats, but normally holds any structured data. # - Extension to spatial data: GeoJSON # + [markdown] slideshow={"slide_type": "slide"} # # Interacting with the Web # + [markdown] slideshow={"slide_type": "slide"} # ## The Internet as Data (1:2) # # When we surf around the internet we are exposed to a wealth of information. # # - What if we could take this and analyze it? # # # Well, we can. And we will. # Examples: Facebook, Twitter, Reddit, Wikipedia, Airbnb etc. # + [markdown] slideshow={"slide_type": "slide"} # ## The Internet as Data (2:2) # # Sometimes we get lucky. The data is served to us. # # - The data is provided as an `API` # - The data can be extracted using `web scraping`. # + [markdown] slideshow={"slide_type": "slide"} # ## Web Interactions # # In the words of Gazarov (2016): The web can be seen as a large network of connected servers # - A page on the internet is stored somewhere on a remote server # - Remote server $\sim$ remotely located computer that is optimized to process requests # # # - When accessing a web page through browser: # - Your browser (the *client*) sends a request to the website's server # - The server then sends code back to the browser # - This code is interpreted by the browser and displayed # # # - Websites come in the form of HTML $-$ APIs only contain data (often in *JSON* format) without presentational overhead # + [markdown] slideshow={"slide_type": "slide"} # ## The Web Protocol # *What is `http` and where is it used?* # # - `http` stands for HyperText Transfer Protocol. # - `http` is good for transmitting the data when a webpage is visited: # - the visiting client sends request for URL or object; # - the server returns relevant data if active. # # # *Should we care about `http`?* # # - In this course we ***do not*** care explicitly about `http`. # - We use a Python module called `requests` as a `http` interface. # - However... Some useful advice - you should **always**: # - use the encrypted version, `https`; # - use authenticated connection, i.e. 
private login, whenever possible. # + [markdown] slideshow={"slide_type": "slide"} # ## Markup Language # *What is `html` and where is it used?* # # - HyperText Mark # - `html` is a language for communicating how a webpage looks like and behaves. # - That is, `html` contains: content, design, available actions. # # *Should we care about `html`?* # # - Yes, `html` is often where the interesting data can be found. # - Sometimes, we are lucky, and instead of `html` we get a JSON in return. # - Getting data from `html` will the topic of the upcoming scraping session. # + [markdown] slideshow={"slide_type": "slide"} # # Leveraging APIs # + [markdown] slideshow={"slide_type": "slide"} # ## Web APIs (1:4) # *So when do we get lucky, i.e. when is `html` not important?* # # - When we get a Application Programming Interface (`API`) on the web # - What does this mean? # - We send a query to the Web API # - We get a response from the Web API with data back in return, typically as JSON. # - The API usually provides access to a database or some service # + [markdown] slideshow={"slide_type": "slide"} # ## Web APIs (2:4) # *So where is the API?* # # - Usually on separate sub-domain, e.g. `api.github.com` # - Sometimes hidden in code (upcoming scraping session) # # *So how do we know how the API works?* # # - There usually is some documentation. E.g. google ["api github com"](https://www.google.com/search?q=api+github) # + [markdown] slideshow={"slide_type": "slide"} # ## Web APIs (3:4) # *So is data free?* # # - Most commercial APIs require authentication and have limited free usage # - e.g. Twitter, Google Maps, weather services, etc. # # # - Some open APIs that are free # - Danish # - Danish statistics (DST) # - Danish weather data (DMI) # - Danish spatial data (DAWA, danish addresses) # - Global # - OpenStreetMaps, Wikipedia # # # - If no authentication is required the API may be delimited. # - This means only a certain number of requests can be handled per second or per hour from a given IP address. # + [markdown] slideshow={"slide_type": "slide"} # ## Web APIs (4:4) # *So how do make the URLs?* # # - An `API` query is a URL consisting of: # - Server URL, e.g. `https://api.github.com` # - Endpoint path, `/users/isdsucph/repos` # # We can convert a string to JSON with `loads`. # + [markdown] slideshow={"slide_type": "slide"} # ## File Handling # *How can we remove a file?* # # The module `os` can do a lot of file handling tasks, e.g. removing files: # + slideshow={"slide_type": "-"} import os os.remove('my_file.json') # + [markdown] slideshow={"slide_type": "slide"} # # Associated Readings+ # # PDA: # - Section 2.3: How to work with strings in Python # - Section 3.3: Opening text files, interpreting characters # - Section 6.1: Opening and working with CSV files # - Section 6.3: Intro to interacting with APIs # - Section 7.3: Manipulating strings # # Gazarov (2016): "What is an API? In English, please." # - Excellent and easily understood intro to the concept # - Examples of different 'types' of APIs # - Intro to the concepts of servers, clients and HTML # - # # session_8_exercises.ipynb # Will be uploaded on github. 
# - Method 1: sync your cloned repo # - Method 2: download from git repo # # `Remember` to create a local copy of the notebook # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: deep-rl-notebooks-poetry # language: python # name: deep-rl-notebooks-poetry # --- # # Deep Reinforcement Learning in Action # ### by and # # #### Chapter 4 # ##### Supplemental # + from matplotlib import pyplot as plt def moving_average(x,step=5,window=50): num = (x.shape[0] - window) / step num = int(num) avg = np.zeros(num) slider = np.ones(window) / window start = 0 for i in range(num): end = start + window avg[i] = slider @ x[start:end] start = start + step return avg # - # ##### Listing 4.1 from gym import envs #envs.registry.all() # ##### Listing 4.2 import gym env = gym.make('CartPole-v0') # ##### Listing 4.3 state1 = env.reset() action = env.action_space.sample() state, reward, done, info = env.step(action) # ##### Listing 4.4 # + import gym import numpy as np import torch l1 = 4 l2 = 150 l3 = 2 model = torch.nn.Sequential( torch.nn.Linear(l1, l2), torch.nn.LeakyReLU(), torch.nn.Linear(l2, l3), torch.nn.Softmax(dim=0) ) learning_rate = 0.009 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # - # ##### Listing 4.5 pred = model(torch.from_numpy(state1).float()) action = np.random.choice(np.array([0,1]), p=pred.data.numpy()) state2, reward, done, info = env.step(action) # ##### Listing 4.6 def discount_rewards(rewards, gamma=0.99): lenr = len(rewards) disc_return = torch.pow(gamma,torch.arange(lenr).float()) * rewards disc_return /= disc_return.max() return disc_return # ##### Listing 4.7 def loss_fn(preds, r): return -1 * torch.sum(r * torch.log(preds)) # ##### Listing 4.8 MAX_DUR = 200 MAX_EPISODES = 500 gamma = 0.99 score = [] for episode in range(MAX_EPISODES): curr_state = env.reset() done = False transitions = [] for t in range(MAX_DUR): act_prob = model(torch.from_numpy(curr_state).float()) action = np.random.choice(np.array([0,1]), p=act_prob.data.numpy()) prev_state = curr_state curr_state, _, done, info = env.step(action) transitions.append((prev_state, action, t+1)) if done: break ep_len = len(transitions) score.append(ep_len) reward_batch = torch.Tensor([r for (s,a,r) in transitions]).flip(dims=(0,)) disc_rewards = discount_rewards(reward_batch) state_batch = torch.Tensor([s for (s,a,r) in transitions]) action_batch = torch.Tensor([a for (s,a,r) in transitions]) pred_batch = model(state_batch) prob_batch = pred_batch.gather(dim=1,index=action_batch.long().view(-1,1)).squeeze() loss = loss_fn(prob_batch, disc_rewards) optimizer.zero_grad() loss.backward() optimizer.step() plt.plot(moving_average(np.array(score))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import time from helpers import load_and_process_image, display_image from tensorflow.python.keras.applications.vgg19 import VGG19 from tensorflow.python.keras.preprocessing.image import load_img, img_to_array from tensorflow.python.keras.models import Model # %matplotlib inline # - # ## Import the model model = VGG19(include_top=False, weights='imagenet') model.trainable = False model.summary() # ## Display content and style images content_path = 
'images/louvre.jpg' style_path = 'images/monet.jpg' content = load_and_process_image(content_path) style = load_and_process_image(style_path) display_image(content, 'Content image') display_image(style, 'Style image') # ## Content and style models content_layer = 'block5_conv2' style_layers = [ 'block1_conv1', 'block3_conv1', 'block5_conv1' ] content_model = Model( inputs=model.input, outputs=model.get_layer(content_layer).output ) style_models = [Model( inputs=model.input, outputs=model.get_layer(l).output ) for l in style_layers] # ## Content cost def content_cost(content, generated): a_C = content_model(content) a_G = content_model(generated) return tf.reduce_mean(tf.square(a_C - a_G)) # ## Style cost def gram_matrix(A): n_C = int(A.shape[-1]) # number of channels # The result will have a shape of (sth, n_C) => for an image it will # be (height x width, num of channels) a = tf.reshape(A, [-1, n_C]) n = tf.shape(a)[0] G = tf.matmul(a, a, transpose_a=True) return G / tf.cast(n, tf.float32) # Weight of each style layer to compute the style cost. # We assume that each layer has the same weight weight = 1 / len(style_models) def style_cost(style, generated): J_style = 0 for style_model in style_models: a_S = style_model(style) a_G = style_model(generated) GS = gram_matrix(a_S) GG = gram_matrix(a_G) current_cost = tf.reduce_mean(tf.square(GS - GG)) J_style += current_cost * weight return J_style # ## Train the model # + generated_images = [] def training_loop(iterations=20, alpha=10, beta=20, learning_rate=7): generated = tf.Variable(content, dtype=tf.float32) opt = tf.keras.optimizers.Adam(learning_rate) best_cost = 1e12 best_image = None start_time = time.time() for i in range(iterations): with tf.GradientTape() as tape: J_content = content_cost(content, generated) J_style = style_cost(style, generated) J_total = alpha * J_content + beta * J_style grads = tape.gradient(J_total, generated) opt.apply_gradients([(grads, generated)]) if J_total < best_cost: best_cost = J_total best_image = generated print('Cost at {}: {}. Time elapsed: {}'.format(i, J_total, time.time() - start_time)) generated_images.append(generated.numpy()) return best_image # - best_image = training_loop(iterations=100) # ## Plot the results display_image(best_image) # + plt.figure(figsize=(10,10)) for i in range(20): plt.subplot(5, 4, i+1) display_image(generated_images[i]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Past Exam Question # # The level of toluene (a flammable and volatile hydrocarbon) in a storage tank may fluctuate between h=10 cm and h=400 cm from the topof the tank. Since it is difficultto see inside the tank, an open-end manometer witheither water ormercury as the manometer fluid is to be used to determine the toluene levelin the tank. One leg of the manometer is attached to the tankat Y=500 cm from the top. Air at atmospheric pressure is maintained over the tank contents. Consider a case when the toluene level in the tank is 150 cm below the top(h=150cm). In this case,the manometer fluid level in the open arm is at the height of the point where the manometer connects to the tank. 
# ![problem%201.png](attachment:problem%201.png)

# #### Constants:
#
# Gravitational constant is 9.81 m/s^2
#
# Density of toluene is 0.866 g/cm3
#
# Density of water is 1 g/cm3
#
# Density of mercury is 13.6 g/cm3

# a) What manometer reading, R (cm), would be observed if the manometer fluid is mercury?
#
# b) If the manometer fluid is water instead of mercury, what would be the value of R (cm)?
#
# c) Based on your answers in parts (a) and (b), which manometer fluid should be used, and why? Your answer should not exceed 3 sentences.

# ### Solution:
#
# a)
#
# 1. Identify that the hydrostatic pressure at the level of the toluene-mercury interface is the same in both arms of the manometer.
# $$P_{t} = P_{m}$$
# 2. Write out the equation for both $P_{t}$ and $P_{m}$ by following the general formula $p = \rho g h$.
#
# $$P_{t} = \rho_{t}g(Y-h+R)$$
#
# $$P_{m} = \rho_{m}gR$$
#
# $$\rho_{m}gR = \rho_{t}g(Y-h+R)$$
#
# $$R = \frac{Y-h}{\frac{\rho_{m}}{\rho_{t}}-1}$$
#
# 3. Solve for R

Y = 500
h = 150
rho_t = 0.866
rho_m = 13.6
R = (Y-h)/((rho_m/rho_t)-1)
R

# b) Similar procedure to part a), except replace mercury with water.

rho_w = 1
R = (Y-h)/((rho_w/rho_t)-1)
R

# c) Mercury should be used, since the reading with water would be roughly 2262 cm (about 22.6 m), which would overflow the tube.

# ### Mass Balance Question
#
# Consider an ammonia absorption column. An inlet gas stream (O) composed of ammonia (NH3) and N2 is fed to the bottom at a total mass flow rate $F_{O}= 1000 kg/hr$. The mass fraction of NH3 in the inlet gas stream is $w_{AO}$, which corresponds to an ammonia mole fraction of $x_{AO}=0.13$. The inlet gas stream flows upward in the column in continuous contact with a liquid stream flowing down the column. Pure liquid solvent (S) is fed to the top of the column at $F_{S}=1000 kg/hr$, absorbing ammonia from the gas stream. The outlet liquid stream (L) withdrawn at the base of the column is the solvent containing dissolved ammonia (at a mass fraction of $w_{AL}$), but no N2. The outlet gas stream (G) emerging from the top contains residual NH3 (at a mass fraction of $w_{AG}$) in mixture with N2.
#
# The mass fractions of NH3 in the exiting gas and liquid streams are related by the following empirical formula: $w_{AG}= 2w_{AL}$.
#
# The molecular weight of ammonia (NH3) is $MW_{A}=17 g/mol$ and of N2 is $MW_{N}= 28 g/mol$.

# a) Complete the process flow diagram using the template on the next page. To earn full points, you must identify each process stream by labeling each with the appropriate mass flow rate ($F_{O}$, $F_{G}$, $F_{S}$, $F_{L}$). Please also indicate the ammonia mole fraction in the inlet gas stream ($x_{AO}$) and the ammonia mass fractions in the remaining streams ($w_{AG}$, $w_{AS}$, $w_{AL}$). You must also indicate known quantities (HINT: there are two known quantities for mass flow rate and two for ammonia composition).
#
# b) Perform a degrees of freedom analysis.
#
# c) Determine the mass fraction of ammonia in the inlet gas stream ($w_{AO}$). To help with the conversion, you may consider a total amount basis of $n_{TO}$=100 mol in the inlet gas (where $n_{AO}$ and $n_{NO}$ are the moles of ammonia and nitrogen gas in the inlet gas, respectively) to determine the corresponding total mass $m_{TO} = m_{AO}+ m_{NO}$ of the inlet gas.
#
# d) Write down the total mass balance and the species mass balances in terms of mass flow rates ($F_{O}$, $F_{G}$, $F_{S}$, $F_{L}$) and ammonia mass fractions ($w_{AO}$, $w_{AG}$, $w_{AS}$, $w_{AL}$). Use symbols only; do not substitute values.
#
# e) Calculate the flow rates (kg/hr) and compositions (in mass fractions) of the exiting gas and liquid streams.
# ### Solutions
#
# a)
#
# ![problem%202.png](attachment:problem%202.png)

# b) Degrees of Freedom Analysis
#
# $$Unknowns = 4$$
#
# $$Species\ Balances = 3$$
#
# $$Ratio = 1$$
#
# $$DOF = 4-3-1 = 0$$

# c)
#
# 1. Assume a basis of 100 moles of inlet gas
#
# $$n_{TO} = 100\ mol$$
#
# 2. Calculate the moles of each species from the mole fractions, then calculate the mass of each species using the molecular weights.
#
# For each species i, its moles equal its mole fraction times the total moles.
# $$n_{iO} = x_{iO}n_{TO}$$
#
# $$m_{iO} = n_{iO}MW_{i}$$
#
# where i is ammonia or N2 in the context of this problem.
#
# 3. Calculate the mass fraction of ammonia relative to the assumed basis
#
# $$m_{TO} = \sum_{i} m_{iO}$$
# $$w_{iO} = m_{iO}/m_{TO}$$

n_TO = 100
x_AO = 0.13
x_NO = 1-x_AO
MW_A = 17
MW_N = 28
n_AO = x_AO*n_TO
n_NO = x_NO*n_TO
m_AO = n_AO*MW_A
m_NO = n_NO*MW_N
print(m_AO,m_NO)

m_TO = m_AO+m_NO
w_AO = m_AO/m_TO
print(w_AO)

# d)
#
# Overall balance: $F_{O}+F_{S}= F_{G}+ F_{L}$
#
# NH3 balance: $w_{AO}F_{O}+ w_{AS}F_{S} = w_{AG}F_{G}+ w_{AL}F_{L}$
#
# Solvent balance: $F_{S}= (1-w_{AL})F_{L}$
#
# N2 balance: $(1-w_{AO}) F_{O}= (1-w_{AG})F_{G}$

# e)
#
# 1. Write out the balance for each species.
#
# Ammonia balance (using $w_{AG}=2w_{AL}$ and $F_{G}=F_{O}+F_{S}-F_{L}=2F_{O}-F_{L}$, with $w_{AS}=0$):
#
# $$w_{AO}F_{O}=2w_{AL}(2F_{O}-F_{L})+ w_{AL}F_{L}$$
#
# $$w_{AO}F_{O}= w_{AL}(4F_{O}-F_{L})$$
#
# We call this equation 1.
#
# Solvent balance:
# $$F_{S}= F_{O}= (1-w_{AL})F_{L}$$
#
# $$F_{L}= \frac{F_{O}}{1-w_{AL}}$$
#
# We call this equation 2.
#
# 2. Write out an equation in terms of $w_{AL}$ only.
#
# Substitute 2 into 1:
#
# $$w_{AO}F_{O}= w_{AL}\left(4F_{O}-\frac{F_{O}}{1-w_{AL}}\right)$$
#
# $$4F_{O}w_{AL}^{2} -(w_{AO}F_{O}+3F_{O})w_{AL}+ w_{AO}F_{O}= 0$$
#
# $$4000 w_{AL}^{2}-3083w_{AL}+ 83= 0$$

# +
# Solve the quadratic equation ax**2 + bx + c = 0
import cmath

a = 4000
b = -3083
c = 83

# calculate the discriminant
d = (b**2) - (4*a*c)

# find two solutions
sol1 = (-b-cmath.sqrt(d))/(2*a)
sol2 = (-b+cmath.sqrt(d))/(2*a)
print('The solutions are {0} and {1}'.format(sol1,sol2))
# -

# Choose $w_{AL} = 0.0279$, since the other root does not make sense: it would give $w_{AG}= 2w_{AL} > 1$.
#
# Solve for the rest of the variables from equations 1 and 2.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: cellrank_reproducibility
#     language: python
#     name: cellrank_reproducibility
# ---

# Fig. 5: Memory performance comparison (single-core)
# ----
#
# In this notebook, we produce Suppl. Tab. 3 displaying the single-core memory performance differences
# between CellRank and Palantir on 100k cells.

# # Preliminaries

# ## Dependencies
# 1. Please consult the [analysis_files/README.md](analysis_files/README.md) on how to run the memory performance benchmarks.

# ## Import packages

# +
# import standard packages
from pathlib import Path
import pickle
import os
import sys

import numpy as np
import pandas as pd
# -

# ## Set up paths

# +
sys.path.insert(0, "../..")  # this depends on the notebook depth and must be adapted per notebook

from paths import DATA_DIR, FIG_DIR
# -

# ## Set global parameters

root = DATA_DIR / 'benchmarking' / 'memory_analysis_1_core'
palantir_path = root / "palantir"
cellrank_path = root / "gpcca"

# ## Load the data

# +
res = {'CellRank (lin.
probs.)': [], 'CellRank (macrostates)': [], 'Palantir': []} for fname in os.listdir(palantir_path): with open(palantir_path / fname, 'rb') as fin: data = pickle.load(fin) res['Palantir'].append(max(data) / 1024) for fname in os.listdir(cellrank_path): if not fname.endswith(".pickle"): continue with open(cellrank_path / fname, 'rb') as fin: data = pickle.load(fin) # add macrostates and kernel memory together res['CellRank (macrostates)'].append((max(data['macro_mem']) + max(data['kernel_mem'])) / 1024) res['CellRank (lin. probs.)'].append(max(data['ap_mem']) / 1024) # - # ### Clean the index df = pd.DataFrame(res) df.index = np.arange(1, 11) df.index.name = 'subset' df.round(2) # # Generate the table # ## Calculate mean and standard deviation across the splits # + tall_df = df.melt(value_vars=df.columns, var_name='algorithm', value_name='memory') mean = tall_df.groupby('algorithm').mean().T mean.index.name = 'size' mean.columns = [f"{c} mean" for c in mean.columns] std = tall_df.groupby('algorithm').std().T std.index.name = 'size' std.columns = [f"{c} std" for c in std.columns] stats = pd.concat([mean, std], axis=1) stats.index = [100_000] stats.index.name = '#cells (thousands)' stats = stats.round(2) stats # - # ## Reorder the dataframe order = ['CellRank (macrostates)', 'CellRank (lin. probs.)', 'Palantir'] stats = stats[[f"{c} {s}" for c in order for s in ('mean', 'std')]] stats # ## Save the results stats.to_csv(DATA_DIR / "benchmarking_results" / "suppl_tab_memory_benchmark_1_core" / "statistics.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Additional ressources # * . (2006). Pattern recognition. Machine learning, 128(9). # * 1.1: Example: Polynomial Curve Fitting # # Import and definitions # Let's import the same libraries as previous notebook. # We got rid of Numpy as we will now proceed only using PyTorch. We just import the pi value from the math library to generate some sinusoidal data. # + import torch from math import pi import matplotlib.pyplot as plt # Get the order of colors for pretty plot colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] # - # Let's also define from the get go our training function, using the same elements as defined in the previous notebook.It will also return a list of loss values as introduced in the "bonus" section. def train(x, y, model, optimizer, loss_fn, epochs): loss_list = [] for i in range(epochs): y_pred = model(x) # Forward prediction loss = loss_fn(y_pred, y) # Compute the loss optimizer.zero_grad() # Reinitialize the gradients loss.backward() optimizer.step() loss_list.append(loss.item()) return loss_list # # The problem with previous approach # ## Defining simple model # First, let's bring back the simple 1 neuron model without non-linear activation that we used in the previous notebook. 
model_simple = torch.nn.Linear(1, 1) # 1 input, 1 output optimizer_simple = torch.optim.SGD(model_simple.parameters(), lr=0.1) loss_fn = torch.nn.MSELoss() # ## Linear data # + x = torch.linspace(-pi, pi, 1000) x = x.unsqueeze(-1) # Adapting the shape [1000] to [1000, 1] # Linear data w = -0.5 b = 1 y = w * x + b + torch.normal(mean=torch.zeros(x.size()), std=0.25) # - error_list_1 = train(x, y, model_simple, optimizer_simple, loss_fn, 100) y_learned_1 = model_simple(x) y_learned_1 = y_learned_1.detach() # Now that we trained the network, let's check how well it fits the data and how does the loss look. # Since we will look at the same type of figure all the time, let's make a function that makes def make_figure(x, y, y_learned, error_list): # Define the figure fig, axs = plt.subplots(1, 2, figsize=(10, 4)) # Check the fit after the training axs[0].scatter(x, y, color=colors[7], marker='.', alpha=0.3, label='Training data') axs[0].plot(x, y_learned, label='Learned parameters') axs[0].set(xlabel='x', ylabel='y') axs[0].legend() # Check the error over the training axs[1].plot(error_list) axs[1].set(xlabel='Epoch', ylabel='Mean MSELoss') fig.tight_layout() return fig, axs fig1, axs1 = make_figure(x, y, y_learned_1, error_list_1) plt.show() # ## More complex data y2 = torch.sin(x) + torch.normal(mean=torch.zeros(x.size()), std=0.25) error_list_2 = train(x, y2, model_simple, optimizer_simple, loss_fn, 100) y_learned_2 = model_simple(x) y_learned_2 = y_learned_2.detach() fig2, axs2 = make_figure(x, y2, y_learned_2, error_list_2) plt.show() # # A more general model # Our first, one-neuron model, fails at representing more complex relations. This comes from two factors: # * There is only one neuron # * There is no non-linearity # # In order to capture arbitrary relations # + model_general = torch.nn.Sequential( torch.nn.Linear(1, 10), # 1 input, 10 hidden nodes torch.nn.Sigmoid(), # Non-linear activation torch.nn.Linear(10, 1), # 10 hidden nods, 1 output ) optimizer_general = torch.optim.SGD(model_general.parameters(), lr=0.1) # - error_list_3 = train(x, y2, model_general, optimizer_general, loss_fn, 10000) y_learned_3 = model_general(x) y_learned_3 = y_learned_3.detach() fig3, axs3 = make_figure(x, y2, y_learned_3, error_list_3) plt.show() # # Bonus: Intro to hyperparameter optimization # ## Number of neurons # + nb_neurons_list = [2**i for i in range(5)] fig4, axs4 = plt.subplots(1, 2, figsize=(10, 4)) axs4[0].scatter(x, y2, color=colors[7],marker='.', alpha=0.3, label='Training data') for nb_neurons in nb_neurons_list: # Define new model with different number of neurons model_temp = torch.nn.Sequential( torch.nn.Linear(1, nb_neurons), # 1 input torch.nn.Sigmoid(), torch.nn.Linear(nb_neurons, 1), # 1 output ) # Don't forget to re-define the optimizer too! 
optimizer_temp = torch.optim.SGD(model_temp.parameters(), lr=0.01) # Training error_list_temp = train(x, y2, model_temp, optimizer_temp, loss_fn, 20000) # Make the prediction y_learned = model_temp(x) y_learned = y_learned.detach() # Update the plot axs4[0].plot(x, y_learned, label=nb_neurons) axs4[1].plot(error_list_temp, label=nb_neurons) axs4[0].legend() axs4[1].legend() axs4[1].set(xlabel='Epoch', ylabel='Mean MSELoss', yscale='log') fig4.tight_layout() plt.show() # - # ## Learning rate # + lr_list = [10**i for i in range(-1, -6, -1)] fig5, axs5 = plt.subplots(1, 2, figsize=(10, 4)) axs5[0].scatter(x, y2, marker='.', alpha=0.3, label='Training data') for lr in lr_list: # Re-define the model model_temp = torch.nn.Sequential( torch.nn.Linear(1, 8), # 1 input torch.nn.Sigmoid(), torch.nn.Linear(8, 1), # 1 output ) # New optimizer with varying learning rate optimizer_temp = torch.optim.SGD(model_temp.parameters(), lr=lr) # Training error_list_temp = train(x, y2, model_temp, optimizer_temp, loss_fn, 20000) # Make the prediction y_learned = model_temp(x) y_learned = y_learned.detach() # Update the plot axs5[0].plot(x, y_learned, label=lr) axs5[1].plot(error_list_temp, label=lr) axs5[0].legend() axs5[1].legend() axs5[1].set(xlabel='Epoch', ylabel='Mean MSELoss') fig5.tight_layout() plt.show() # - # ## To go further # To improve the performance of the network there are many parameters that can be tuned. Here we introduced briefly how the number of neurons and the learning rate can affect the performance and speed of the training. Some other parameters that may be good to explore are: # * Number of layers. What is best? A single layer with many neurons or multiple layers with a smaller number of neurons? Even though in theory one layer with non-linear activation should do the trick there must be a reason why people build "deep" networks with many layers. # * The non-linear activation function. We used `torch.nn.Sigmoid()`, but these days `torch.nn.ReLU()`, is commonly used. Does it affect the training? # * The optimizer. So far we used the standard gradient descent, `torch.optim.SGD()`. However, there are other alternatives, some with advanced tricks to improve the training, such as an adaptive learning rate. A common example is `torch.optim.Adam()`. # # So play around, get familiar with the workflow of dealing with the data, building a network, training, and visualizing the results / checking the performance. Tweak a few things, etc. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Modelado de un sistema con ipython # Para el correcto funcionamiento del extrusor de filamento, es necesario regular correctamente la temperatura a la que está el cañon. Por ello se usará un sistema consistente en una resitencia que disipe calor, y un sensor de temperatura PT100 para poder cerrar el lazo y controlar el sistema. A continuación, desarrollaremos el proceso utilizado. 
#Importamos las librerías utilizadas import numpy as np import pandas as pd import seaborn as sns import matplotlib.pylab as plt #Mostramos las versiones usadas de cada librerías print ("Numpy v{}".format(np.__version__)) print ("Pandas v{}".format(pd.__version__)) print ("Seaborn v{}".format(sns.__version__)) #Mostramos todos los gráficos en el notebook # %pylab inline #Abrimos el fichero csv con los datos de la muestra datos = pd.read_csv('datos.csv') #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar #columns = ['temperatura', 'entrada'] columns = ['temperatura', 'entrada'] # ## Respuesta del sistema # El primer paso será someter al sistema a un escalon en lazo abierto para ver la respuesta temporal del mismo. A medida que va calentando, registraremos los datos para posteriormente representarlos. # + #Mostramos en varias gráficas la información obtenida tras el ensay #ax = datos['temperatura'].plot(figsize=(10,5), ylim=(0,60),label="Temperatura") #ax.set_xlabel('Tiempo') #ax.set_ylabel('Temperatura [ºC]') #ax.set_ylim(20,60) #ax = datos['entrada'].plot(secondary_y=True, label="Entrada")#.set_ylim(-1,55) fig, ax1 = plt.subplots() ax1.plot(datos['time'], datos['temperatura'], 'b-') ax1.set_xlabel('Tiempo (s)') ax1.set_ylabel('Temperatura', color='b') ax2 = ax1.twinx() ax2.plot(datos['time'], datos['entrada'], 'r-') ax2.set_ylabel('Escalón', color='r') ax2.set_ylim(-1,55) plt.figure(figsize=(10,5)) plt.show() # - # ##Cálculo del polinomio # Hacemos una regresión con un polinomio de orden 2 para calcular cual es la mejor ecuación que se ajusta a la tendencia de nuestros datos. # Buscamos el polinomio de orden 4 que determina la distribución de los datos reg = np.polyfit(datos['time'],datos['temperatura'],2) # Calculamos los valores de y con la regresión ry = np.polyval(reg,datos['time']) print (reg) plt.plot(datos['time'],datos['temperatura'],'b^', label=('Datos experimentales')) plt.plot(datos['time'],ry,'ro', label=('regresión polinómica')) plt.legend(loc=0) plt.grid(True) plt.xlabel('Tiempo') plt.ylabel('Temperatura [ºC]') # El polinomio caracteristico de nuestro sistema es: # # $$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$$ # ##Transformada de laplace # Si calculamos la transformada de laplace del sistema, obtenemos el siguiente resultado: # # $$G_s = \frac{25.95·S^2 - 0.00015733·S + 1.63635·10^{-8}}{S^3}$$ # ## Cálculo del PID mediante OCTAVE # Aplicando el método de sintonizacion de Ziegler-Nichols calcularemos el PID para poder regular correctamente el sistema.Este método, nos da d emanera rápida unos valores de $K_p$, $K_i$ y $K_d$ orientativos, para que podamos ajustar correctamente el controlador. 
Esté método consiste en el cálculo de tres parámetros característicos, con los cuales calcularemos el regulador: # # $$G_s=K_p(1+\frac{1}{T_i·S}+T_d·S)=K_p+\frac{K_i}{S}+K_d$$ # # En esta primera iteración, los datos obtenidos son los siguientes: # $K_p = 6082.6$ $K_i=93.868 K_d=38.9262$ # # Con lo que nuestro regulador tiene la siguiente ecuación característica: # # $$G_s = \frac{38.9262·S^2 + 6082.6·S + 93.868}{S}$$ # ### Iteracción 1 de regulador #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar datos_it1 = pd.read_csv('Regulador1.csv') columns = ['temperatura'] # + #Mostramos en varias gráficas la información obtenida tras el ensayo ax = datos_it1[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',) ax.set_xlabel('Tiempo') ax.set_ylabel('Temperatura [ºC]') ax.hlines([80],0,3500,colors='r') #Calculamos MP Tmax = datos_it1.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo Sp=80.0 #Valor del setpoint Mp= ((Tmax-Sp)/(Sp))*100 print("El valor de sobreoscilación es de: {:.2f}%".format(Mp)) #Calculamos el Error en régimen permanente Errp = datos_it1.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente Eregimen = abs(Sp-Errp) print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen)) # - # En este caso hemos establecido un setpoint de 80ºC Como vemos, una vez introducido el controlador, la temperatura tiende a estabilizarse, sin embargo tiene mucha sobreoscilación. Por ello aumentaremos los valores de $K_i$ y $K_d$, siendo los valores de esta segunda iteracción los siguientes: # $K_p = 6082.6$ $K_i=103.25 K_d=51.425$ # ### Iteracción 2 del regulador #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar datos_it2 = pd.read_csv('Regulador2.csv') columns = ['temperatura'] # + #Mostramos en varias gráficas la información obtenida tras el ensayo ax2 = datos_it2[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',) ax2.set_xlabel('Tiempo') ax2.set_ylabel('Temperatura [ºC]') ax2.hlines([80],0,3500,colors='r') #Calculamos MP Tmax = datos_it2.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo Sp=80.0 #Valor del setpoint Mp= ((Tmax-Sp)/(Sp))*100 print("El valor de sobreoscilación es de: {:.2f}%".format(Mp)) #Calculamos el Error en régimen permanente Errp = datos_it2.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente Eregimen = abs(Sp-Errp) print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen)) # - # En esta segunda iteracción hemos logrado bajar la sobreoscilación inicial, pero tenemos mayor error en regimen permanente. 
Por ello volvemos a aumentar los valores de $K_i$ y $K_d$ siendo los valores de esta tercera iteracción los siguientes: # $K_p = 6082.6$ $K_i=121.64 K_d=60$ # ###Iteracción 3 del regulador #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar datos_it3 = pd.read_csv('Regulador3.csv') columns = ['temperatura'] # + #Mostramos en varias gráficas la información obtenida tras el ensayo ax3 = datos_it3[columns].plot(figsize=(10,5), ylim=(20,180),title='Modelo matemático del sistema con regulador',) ax3.set_xlabel('Tiempo') ax3.set_ylabel('Temperatura [ºC]') ax3.hlines([160],0,6000,colors='r') #Calculamos MP Tmax = datos_it3.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo Sp=160.0 #Valor del setpoint Mp= ((Tmax-Sp)/(Sp))*100 print("El valor de sobreoscilación es de: {:.2f}%".format(Mp)) #Calculamos el Error en régimen permanente Errp = datos_it3.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente Eregimen = abs(Sp-Errp) print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen)) # - # En este caso, se puso un setpoint de 160ºC. Como vemos, la sobreoscilación inicial ha disminuido en comparación con la anterior iteracción y el error en regimen permanente es menor. Para intentar minimar el error, aumentaremos únicamente el valor de $K_i$. Siendo los valores de esta cuarta iteracción del regulador los siguientes: # $K_p = 6082.6$ $K_i=121.64 K_d=150$ # ###Iteracción 4 #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar datos_it4 = pd.read_csv('Regulador4.csv') columns = ['temperatura'] #Mostramos en varias gráficas la información obtenida tras el ensayo ax4 = datos_it4[columns].plot(figsize=(10,5), ylim=(20,180)) ax4.set_xlabel('Tiempo') ax4.set_ylabel('Temperatura [ºC]') ax4.hlines([160],0,7000,colors='r') #Calculamos MP Tmax = datos_it4.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo print (" {:.2f}".format(Tmax)) Sp=160.0 #Valor del setpoint Mp= ((Tmax-Sp)/(Sp))*100 print("El valor de sobreoscilación es de: {:.2f}%".format(Mp)) #Calculamos el Error en régimen permanente Errp = datos_it4.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente Eregimen = abs(Sp-Errp) print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen)) # Por lo tanto, el regulador que cumple con las especificaciones deseadas tiene la siguiente ecuación característica: # $$G_s = \frac{150·S^2 + 6082.6·S + 121.64}{S}$$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Purpose: # # This notebook will produce 3 plots, which compare how 3 critics trained on the 3 eval functions perform on all 3 eval functions. I hope that what it shows in the end is that the one trained specifically for a given metric does best on it. I really hope. # # This notebook will read 5 data files. The first is, the 100 epoch base. Then, there are 3 50-epoch AC trainings, based on the 3 critics. Finally, there is a 50-epoch base training again, to make the graph have a comparison. This last part doesn't matter quite as much. 
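# As a minimal, self-contained sketch (with synthetic numbers, not the real runs) of the plotting pattern used throughout this notebook: each critic curve later gets the actor's final score prepended, so that the actor and critic traces connect at the hand-off epoch.

# +
import matplotlib.pyplot as plt

actor_curve = [0.40, 0.41, 0.42, 0.43]                           # stand-in for the 100-epoch base actor metric
critic_curves = {"ndcg": [0.435, 0.440], "ap": [0.432, 0.436]}   # stand-ins for the 50-epoch AC runs

n_actor = len(actor_curve)
plt.plot(range(n_actor), actor_curve, label="actor")
for name, curve in critic_curves.items():
    joined = [actor_curve[-1]] + curve                           # prepend the hand-off point
    plt.plot(range(n_actor - 1, n_actor - 1 + len(joined)), joined, label=name)
plt.legend()
plt.show()
# -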
# import matplotlib import numpy as np import matplotlib.pyplot as plt import json import seaborn as sns sns.set() # + """First, we want to load the data""" with open("./data/base_actor.json", "r") as f: data_base = json.loads(f.read()) with open("./data/ndcg_trained_critic.json", "r") as f: data_ndcg_critic = json.loads(f.read()) with open("./data/ap_trained_critic.json", "r") as f: data_ap_critic = json.loads(f.read()) with open("./data/recall_trained_critic.json", "r") as f: data_recall_critic = json.loads(f.read()) print("Great Success!") # + """First, we make a graph of just the actor's data for NDCG""" actor_ndcg_data = data_base['ACTOR']['ndcg'] ndcg_critic_data = data_ndcg_critic['AC']['ndcg'] ap_critic_data = data_ap_critic['AC']['ndcg'] recall_critic_data = data_recall_critic['AC']['ndcg'] print("Have {} rows of data for ndcg".format(len(actor_ndcg_data))) # print("For others, have {} {} {}".format(len(ndcg_critic_data), len(ap_critic_data), len(recall_critic_data))) # assert len(ndcg_critic_data) == len(ap_critic_data) == len(recall_critic_data) assert len(ndcg_critic_data) == len(recall_critic_data) num_actor_datapoints = len(actor_ndcg_data) num_critic_datapoints = len(ndcg_critic_data) print(num_actor_datapoints) print(num_critic_datapoints) # Now, we plot again plt.clf() plt.plot(range(num_actor_datapoints), actor_ndcg_data) plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ndcg_critic_data, color="green") plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ap_critic_data, color="yellow") plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), recall_critic_data, color="red") plt.ylim(0.42, 0.45) # - # Clearly this graph looks horrible. What I really want to do here, is add a single point to the beginning of the critic ones, so that they connect. That makes a lot of sense. # + last_actor_score = actor_ndcg_data[-1] ndcg_critic_data.insert(0, last_actor_score) recall_critic_data.insert(0, last_actor_score) ap_critic_data.insert(0, last_actor_score) print(last_actor_score) print(len(ndcg_critic_data)) print(len(ap_critic_data)) print(len(recall_critic_data)) plt.clf() plt.plot(range(num_actor_datapoints), actor_ndcg_data) plt.plot(range(num_actor_datapoints-1, num_actor_datapoints + num_critic_datapoints), ndcg_critic_data, color="green") plt.plot(range(num_actor_datapoints-1, num_actor_datapoints + num_critic_datapoints), ap_critic_data, color="yellow") plt.plot(range(num_actor_datapoints-1, num_actor_datapoints + num_critic_datapoints), recall_critic_data, color="red") plt.ylim(0.42, 0.45) # + # Now, the same for other measurements! 
actor_ndcg_data = data_base['ACTOR']['recall'] ndcg_critic_data = data_ndcg_critic['AC']['recall'] # ap_critic_data = data_ap_critic['AC']['ndcg'] recall_critic_data = data_recall_critic['AC']['recall'] last_actor_score = actor_ndcg_data[-1] ndcg_critic_data.insert(0, last_actor_score) recall_critic_data.insert(0, last_actor_score) print("Have {} rows of data for ndcg".format(len(actor_ndcg_data))) # print("For others, have {} {} {}".format(len(ndcg_critic_data), len(ap_critic_data), len(recall_critic_data))) # assert len(ndcg_critic_data) == len(ap_critic_data) == len(recall_critic_data) assert len(ndcg_critic_data) == len(recall_critic_data) num_actor_datapoints = len(actor_ndcg_data) num_critic_datapoints = len(ndcg_critic_data) # Now, we plot again plt.clf() plt.plot(range(num_actor_datapoints), actor_ndcg_data) plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ndcg_critic_data) # plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ap_critic_data) plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), recall_critic_data) plt.ylim(0.65, 0.7) # + # Now, the same for other measurements! actor_ndcg_data = data_base['ACTOR']['ap'] ndcg_critic_data = data_ndcg_critic['AC']['ap'] # ap_critic_data = data_ap_critic['AC']['ndcg'] recall_critic_data = data_recall_critic['AC']['ap'] last_actor_score = actor_ndcg_data[-1] ndcg_critic_data.insert(0, last_actor_score) recall_critic_data.insert(0, last_actor_score) print("Have {} rows of data for ndcg".format(len(actor_ndcg_data))) # print("For others, have {} {} {}".format(len(ndcg_critic_data), len(ap_critic_data), len(recall_critic_data))) # assert len(ndcg_critic_data) == len(ap_critic_data) == len(recall_critic_data) assert len(ndcg_critic_data) == len(recall_critic_data) num_actor_datapoints = len(actor_ndcg_data) num_critic_datapoints = len(ndcg_critic_data) # Now, we plot again plt.clf() plt.plot(range(num_actor_datapoints), actor_ndcg_data) plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ndcg_critic_data) # plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), ap_critic_data) plt.plot(range(num_actor_datapoints, num_actor_datapoints + num_critic_datapoints), recall_critic_data) plt.ylim(0.18, 0.22) print(actor_ndcg_data[-1]) print(recall_critic_data[-1]) print(ndcg_critic_data[-1]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.2 # language: julia # name: julia-0.6 # --- using BayesNets bn = DiscreteBayesNet() push!(bn, DiscreteCPD(:a, [1/3, 1/3, 1/3])) push!(bn, DiscreteCPD(:b, [1/3, 1/3, 1/3])) push!(bn, DiscreteCPD(:c, [:a, :b], [3,3], [Categorical([0,1/2,1/2]), #A=0, B=0 Categorical([0,0,1]), #A=0, B=1 Categorical([0,1,0]), #A=0, B=2 Categorical([0,0,1]), #A=1, B=0 Categorical([1/2,0,1/2]), #A=1, B=1 Categorical([1,0,0]), #A=1, B=2 Categorical([0,1,0]), #A=2, B=0 Categorical([1,0,0]), #A=2, B=1 Categorical([1/2,1/2,0]) #A=2, B=2 ])) # + assignment = Assignment(:b => 1, :c => 2) infer(bn, :a, evidence=assignment) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Paper Diagrams # # Code for making Figures 1, 2, 3, 4 and 5 in the paper. 
# ### Set up # + import numpy as np import matplotlib import matplotlib.pyplot as plt from matplotlib.patches import Ellipse import bsr.basis_functions as bf # %matplotlib inline # %autoreload 2 # Set plot size, font and fontsize to match LaTeX template # -------------------------------------------------------- # NB A4 paper is 8.27 × 11.69 inches (=210 × 297 mm) # Font: \T1/ntxtlf/m/n/9 # Caption font: \T1/ntxtlf/m/n/8 # Footnote font: \T1/ntxtlf/m/n/8 # Abstract font: \T1/ntxtlf/m/n/10 fontsize = 8 # matplotlib default is 10 textwidth = 6.97522 * 0.99 # make 1% smaller to ensure everything fits textheight = 9.43869 * 0.99 # make 1% smaller to ensure everything fits colwidth = 3.32153 matplotlib.rc('text', usetex=True) matplotlib.rc('font', family='serif', serif='Times New Roman', size=fontsize) matplotlib.rc('axes', titlesize='medium') # To print params, use "matplotlib.rcParams" or for a specific key "matplotlib.rcParams['font']" # - # ### Free-form decomposition diagrams (Figures 1 and 2) # Settings # -------- sigma = 0.15 amps_gau = [0.1, 0.5, 0.1, 0.35, 0.5, 0.4, 0.25] figsize = (colwidth, 1.5) ymax = 1.25 # Plotting # -------- npoints = len(amps_gau) pixels = np.linspace(0, 1, npoints) pixel_width = pixels[1] - pixels[0] x = np.linspace(0 - (1 / npoints), 1 + (1 / npoints), npoints * 20) y = np.zeros(x.shape) # make gaussian figure fig_gau, ax_gau = plt.subplots(figsize=figsize) fig_bar, ax_bar = plt.subplots(figsize=figsize) for i, pixel in enumerate(pixels): comp = amps_gau[i] * np.exp(-((x - pixel) ** 2) / (2 * (sigma ** 2))) ax_gau.plot(x, comp, linestyle='dashed', color='black', linewidth=0.6) y += comp ax_gau.axvline(x=pixel, ymin=0, ymax=amps_gau[i] / ymax, color='black') ax_gau.plot(x, y, label='$y=f(x)$') ax_gau.legend() # make bar figure, calculating amplitudes from y amps_bar = np.interp(pixels, x, y) # make adjustments so they are less smooth amps_bar[2] *= 0.4 amps_bar[5] *=0.2 bar_x = [x.min(), pixels[0] - 0.5 * pixel_width] bar_y = [0, 0] for i, pixel in enumerate(pixels): bar_x += [pixel - 0.5 * pixel_width, pixel + 0.5 * pixel_width] bar_y += [amps_bar[i]] * 2 ax_bar.axvline(x=pixel, ymin=0, ymax=amps_bar[i] / ymax, color='black') ax_bar.bar(pixels, amps_bar, pixel_width, color='white', linewidth=0.6, edgecolor='black', linestyle='dashed') bar_x += [pixels[-1] + 0.5 * pixel_width, x.max()] bar_y += [0, 0] ax_bar.plot(bar_x, bar_y, label='$y=f(x)$') ax_bar.legend() # add labels for i, ax in enumerate([ax_gau, ax_bar]): ax.set_ylim([0, ymax]) ax.set_xlim([x.min(), x.max()]) ax.set_xticks(pixels) ax.set_yticks([]) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ind_j = npoints // 2 labels = [''] * pixels.shape[0] labels[0] = '$x_1$' labels[-1] = '$x_M$' labels[ind_j] = '$x_j$' ax.set_xticklabels(labels) if i == 0: amps_plot = amps_gau else: amps_plot = amps_bar ax.text(pixels[0] - 0.02, amps_plot[0] + 0.04, '$a_1$') ax.text(pixels[-1] - 0.02, amps_plot[-1] + 0.04, '$a_M$') ax.text(pixels[ind_j] - 0.02, amps_plot[ind_j] + 0.04, '$a_j$') adjust = {'left': 0.05, 'top': 0.99, 'right': 0.99, 'bottom': 0.24} fig_bar.subplots_adjust(**adjust) fig_gau.subplots_adjust(**adjust) fig_bar.savefig('plots/freeform_bar.pdf') fig_gau.savefig('plots/freeform_gau.pdf') # ### 3d Lp norm plots (Figure 3) # + from mpl_toolkits.mplot3d import Axes3D # needed to use projection='3d' in add_subplot # Settings # -------- figsize = (colwidth, 0.75) npoints = 100 # paper uses 200 - reduce this to go faster p_list = [0, 0.5, 1, 2, 4] color = 'lightblue' # Do the plot # ----------- # Plot via spherical 
coordinates - based on # https://stackoverflow.com/questions/7819498/plotting-ellipsoid-with-matplotlib # Set of all spherical angles: u = np.linspace(0, 2 * np.pi, npoints) v = np.linspace(0, np.pi, npoints) # set up array of xyz coords x = np.outer(np.cos(u), np.sin(v)) y = np.outer(np.sin(u), np.sin(v)) z = np.outer(np.ones_like(u), np.cos(v)) xyz = np.stack([x, y, z]) fig = plt.figure(figsize=figsize) for i, p in enumerate(p_list): ax = fig.add_subplot(1, len(p_list), i + 1, projection='3d', alpha=1) ax.set_aspect('equal') ax.axis('off') ax.set_title('$p={}$'.format(p).replace('0.5', '\\frac{1}{2}')) if p == 0: lw = 0.7 ax.plot3D((0, 0), (0, 0), (-1, 1), color=color, lw=lw) ax.plot3D((0, 0), (-1, 1), (0, 0), color=color, lw=lw) ax.plot3D((-1, 1), (0, 0), (0, 0), color=color, lw=lw) else: pnorm = np.sum(np.abs(xyz) ** p, axis=0) ** (1 / p) xyz_plot = xyz / pnorm ax.plot_surface(xyz_plot[0], xyz_plot[1], xyz_plot[2], rstride=1, cstride=1, linewidth=0, antialiased=False, color=color, rasterized=True) fig.subplots_adjust(left=0, right=1, bottom=-0.3, wspace=0) fig.savefig('plots/pnorm.pdf', dpi=400) # - # ### Sparsity promotion plot (Figure 4) # + # Settings figsize=(colwidth, 0.5*colwidth) fig, axes = plt.subplots(ncols=2, figsize=figsize) arrow_kwargs = {'head_width': 0.05, 'head_length': 0.05, 'fc': 'black', 'ec': None} marker_kwargs = {'marker': 'X', 'color': 'red'} fill_kwargs = {'color': 'lightblue', 'zorder': 0} e_centre = (0.2, 0.7) e_kwargs={'fc': None, 'ec': 'black', 'fill': False, 'angle': 60} e_widths = [0.1, 0.2, 0.3, 0.4] e_ratio = 3 for i, ax in enumerate(axes): ax.set_aspect('equal') ax.set_xlim([-1.1, 1.1]) ax.set_ylim([-1.1, 1.1]) # ax.xaxis.set_visible(False) ax.axis('off') ax.text(0, -1.15, '$a_1$', ha='center', ma='center') ax.text(-1.15, 0, '$a_2$', ha='center', ma='center') ax.set_title('${{||\mathbf{{a}}||}}_{}$'.format(2-i)) if i == 0: cross_y = 0.475 rad = np.sqrt(cross_y ** 2 + e_centre[1] ** 2) ax.add_artist(plt.Circle((0.0, 0.0), 0.5, **fill_kwargs)) ax.scatter(e_centre[0], cross_y, **marker_kwargs) elif i == 1: cross_y = 0.57 ax.scatter(0, cross_y, **marker_kwargs) ax.add_artist(plt.Polygon([[0, cross_y], [cross_y, 0], [0, -cross_y], [-cross_y, 0]], **fill_kwargs)) al = 0.9 for arrow_end in [(0, al), (0, -al), (al, 0), (-al, 0)]: ax.arrow(0, 0, *arrow_end, **arrow_kwargs) for width in e_widths: ax.add_artist(Ellipse(e_centre, width, width * e_ratio, **e_kwargs)) plt.subplots_adjust(left=0.05, right=1, bottom=0.05, top=0.85) fig.savefig('plots/pnorm_sparsity.pdf') # - # ### 1d generalised Gaussian and tanh basis function diagrams (Figures 5a and 5b) figsize = (colwidth * 0.6, 2) xmax = 3 x = np.linspace(-xmax, xmax, 100) adjust = {'top': 0.68, 'bottom': 0.17, 'left': 0.23, 'right': 0.85} anchor = (0.5, 1.68) def ylims_given_ymin_ymax(ymin, ymax): delta = ymax - ymin return [ymin - 0.1 * delta, ymax + 0.1 * delta] # gg # -- fig_gg, ax = plt.subplots(figsize=figsize) for beta in [0.5, 1, 2, 4, 8]: ax.plot(x, bf.gg_1d(x, 1, 0, 1, beta), label=r'$\beta={}$'.format(beta).replace('0.5', r'\frac{1}{2}')) ax.set_xlim([-xmax, xmax]) ax.set_ylim(ylims_given_ymin_ymax(0, 1)) fig_gg.subplots_adjust(**adjust) ax.legend(loc='upper center', ncol=2, bbox_to_anchor=anchor) ax.set_xlabel('$x$') ax.set_ylabel(r'$y = \mathrm{e}^{- {(-|x-\mu|/\sigma)}^\beta }$') fig_gg.savefig('plots/gg_demo.pdf') # tanh # ---- fig_ta, ax = plt.subplots(figsize=figsize) for w1 in [-8, -2, -0.5, 0.5, 2, 8]: ax.plot(x, bf.ta_1d(x, 1, 0, w1), label=r'$w={}$'.format(w1).replace('0.5', r'\frac{1}{2}')) 
ax.set_xlim([-xmax, xmax])
ax.set_ylim(ylims_given_ymin_ymax(-1, 1))
fig_ta.subplots_adjust(**adjust)
ax.legend(loc='upper center', ncol=2, bbox_to_anchor=anchor)
ax.set_xlabel('$x$')
ax.set_ylabel(r'$y = \mathrm{tanh}(w x + b)$')
fig_ta.savefig('plots/ta_demo.pdf')

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + tags=["remove_cell"]
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import sympy
import numpy as np
import warnings
warnings.filterwarnings('ignore')
plt.style.use("seaborn-muted")

solve = lambda x,y: sympy.solve(x-y)[0] if len(sympy.solve(x-y))==1 else "Not Single Solution"
# -

# # Market Equilibrium and Government Intervention

# ## Market Equilibrium
#
# Over the past two weeks, we have individually introduced supply and demand, represented as functions of quantity supplied or demanded, respectively. Now let's bring them together.
#
# The intersection of both curves is called market equilibrium: the condition when quantity supplied equals quantity demanded at a given price. This is the "market-clearing" condition: $Q_s = Q_d$.
#
# This means that for any given equation of S and D, we can find both the equilibrium quantity and price. As demand and supply functions solve for price, we simply equate the two functions together. To demonstrate this, we will use SymPy.
#
# For simplicity's sake, we have defined `solve(x, y)`, which returns the value of the input variable that results in equivalent values for the equations `x` and `y` that were passed in.

# +
Q = sympy.Symbol('Q')
P_demand = -0.04 * Q + 20
P_supply = 0.02 * Q + 14

Q_star = solve(P_demand, P_supply)
Q_star
# -

# `Q_star`, the quantity of goods produced and sold at equilibrium, is equal to 100. To get the equilibrium price, we can substitute `Q_star` into either `P_demand` or `P_supply`:

P_star = P_demand.subs(Q, Q_star)
P_star

# This gives us that `P_star` is equal to 16.

# ### Movements away from equilibrium
#
# What happens to market equilibrium when either supply or demand shifts due to an exogenous shock?
#
# Let's assume that consumers now prefer Green Tea as their hot beverage of choice more so than before. We have an outward shift of the demand curve - quantity demanded is greater at every price. The market is no longer in equilibrium.

# +
# [INSERT GRAPH: DEMAND SHIFT RIGHT, SHORTAGE AT ORIGINAL EQ PRICE]
# -

# At the same price level (the former equilibrium price), there is a shortage of Green Tea. The amount demanded by consumers exceeds that supplied by producers: $Q_d > Q_s$. As a result, buyers would bid up the price of Green Tea. At the same time, producers are incentivized to produce more, causing $Q_s$ to increase along the supply curve. This market-clearing force (sometimes referred to as the "invisible hand") pushes $Q_s$ and $Q_d$ to converge to a new equilibrium level of quantity and price.

# ## Theoretical Government Intervention
#
# Now that we have discussed cases of market equilibrium with just demand and supply, also known as free market cases, we will examine what happens when the government intervenes. In all of these cases, the market is pushed from equilibrium to a state of disequilibrium. This causes the price to change and, with it, the number of participants in the market.
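# As a quick numerical check of the demand-shift story above, we can reuse the same `solve` helper with a shifted demand curve. This is only an illustrative sketch; the new intercept of 22 (instead of 20) is not taken from the original notebook.

# +
P_demand_shifted = -0.04 * Q + 22   # outward shift: higher willingness to pay at every quantity
Q_star_shifted = solve(P_demand_shifted, P_supply)
P_star_shifted = P_supply.subs(Q, Q_star_shifted)
# Equilibrium quantity rises from 100 to about 133 and price from 16 to about 16.67,
# matching the intuition that an outward demand shift raises both price and quantity.
Q_star_shifted, P_star_shifted
# -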
# ### Taxation and its effects on supply and demand # # The primary method that governments use to intervene in markets is taxation. This occurs through percentage or fixed taxes that increase the net price that consumers eventually have to pay for a good or service. Taxes can be levied on either the producer or the consumer. # + # [INSERT GRAPH: SUPPLY CURVE SHIFT LEFT] # - # If it is levied on producers, this decreases the quantity of goods they can supply at each price as the tax is effectively acting as an additional cost of production. This shifts the supply curve leftward. # + # [INSERT GRAPH: DEMAND CURVE SHIFT LEFT] # - # If the tax is levied on consumers, this increases the price per unit they must pay, thereby reducing quantity demanded at every price. This shifts the demand curve leftward. # + # [INSERT GRAPH: DWL TRIANGLE + TAX WEDGE] # - # The resulting equilibrium - both price and quantity - is the same in both cases. However, the prices paid by producers and consumers are different. Let us denote the equilibrium quantity to be $Q^*$. The price that producers pay $P_p$ occurs where $Q^*$ intersects with the supply curve. At the same time, the price that consumers pay $P_c$ occurs where $Q^*$ intersects the demand curve. # # You will notice that the vertical distance between $P_p$ and $P_c$ will be the value of the tax. That is to say, $P_c = P_p + \text{tax}$. We call the vertical distance between $P_p$ and $P_c$ at quantity $Q^*$ the tax wedge. # ### Incidence # # Determining who bears the greater burden, or incidence, of the tax depends on the elasticity of the good. An inelastic good would mean that it is more of a neccesity than an elastic one. This allows the producer to push more of the tax onto consumers, who are willing to purchase as much of the good even as prices increase. The incidence or burden of paying the tax will thus fall on consumers moreso than producers. # # With elastic goods, incidences are distributed in the opposite way. As elastic goods are less of a necessity, consumers will substitute away from them when prices increase. Thus, when the tax is levied on an elastic good and its price increases, consumers will be more willing to switch to other goods, thereby leaving the producer to bear more of the tax-paying burden. # # One can calculate the burden share, or the proportion of the tax paid by consumers or producers: # # Consumer's burden share: $\dfrac{\text{Increase in unit price after the tax paid by consumers} + \text{Increase in price paid per unit by consumers to producers}}{\text{Tax per unit}}$ # # Producer's burden share: $\dfrac{\text{Increase in unit price after the tax paid by producers} - \text{Increase in price paid per unit by consumers to producers}}{\text{Tax per unit}}$ # # Graphically, the total tax burden is the rectangle formed by the tax wedge and the horizontal distance between 0 and $Q^*$: $Q^* \cdot \text{tax}$ This is also how you calculate the revenue from the tax earned by the government. # ### Deadweight Loss # # Naturally, the introduction of the tax disrupts the economy and pushes it away from equilibrium. For consumers, the higher price they must pay essentially "prices" out some individuals - they are now unwilling to pay more for the good. This leads them to leave the market that they previously participated in. At the same time, for producers, the introduction of the tax increases production costs and cuts into their revenues. 
Some of the businesses that were willing to produce at moderately high costs now find themselves unable to make a profit with the introduction of the tax. They too leave the market. There are market actors who are no longer able to purchase or sell the good. # # We call this loss of transactions: deadweight welfare loss. It is represented by the triangle with a vertex at the original market equilibrium and a base at the tax wedge. The area of the deadweight loss triangle, also known as Harberger's triangle, is the size of the welfare loss - the total value of transactions lost as a result of the tax. # # Another way to think about deadweight loss is the change (decrease) in total surplus. Consumer and producer surplus decrease significantly, but this is slightly offset by the revenue earned by the government from the tax. # # We can calculate the size of Harberger's triangle using the following formula: $\dfrac{1}{2} \cdot \dfrac{\epsilon_s \cdot \epsilon_d}{\epsilon_s - \epsilon_d} \cdot \dfrac{Q^*}{P_p} (\text{tax})^2$ where $\epsilon_s$ is the price elasticity of supply and $\epsilon_d$ is the price elasticity of demand. # ### Salience # # We noted in our discussion about taxes that the equilibrium quantity and price is the same regardless of whether the tax is levied on producers or consumers. This is the traditional theory's assumption: that individuals, whether they be producers or consumers, are fully aware of the taxes they pay. They decide how much to produce or consume with this in mind. # # We call the visibility at which taxes are displayed their salience. As an example, the final price of a food item in a restaurant is not inclusive of sales tax. Traditional economic theory would say that this difference between advertized or poster price and the actual price paid by a consumer has no bearing on the quantity they demand. That is to say taxes are fully salient. However, recent research has suggested that this is not the case. # # A number of recent studies, including by Chetty, Looney and Kroft in 2009, found that posting prices that included sales tax actually reduces demand for those goods. Individuals are not fully informed or rational, implying that tax salience does matter. # ### Subsidy # # Another form of government intervention is a subsidy. It involves either a monetary benefit given by the government or a reduction in taxes granted to individual businesses or whole industries. They intend to lower production costs, and thus increase the quantity supplied of goods and services at equilibrium. # + # [INSERT GRAPH: SUPPLY CURVE SHIFT RIGHT] # - # We represent this visually as a rightward shift in the supply curve. As costs are lower, producers are now willing to supply more goods and services at every price. The demand curve remains unchanged as a subsidy goes directly to producers. The resulting equilibrium has a lower price $P^*$ and higher quantity $Q^*$. It is assumed that the lower production costs would be passed onto consumers through lower market prices. $P^*$ is what consumers pay, but producers receive $P_P = P^* + \text{subsidy}$. This is depicted visually by the price along the new supply curve at quantity $Q^*$. # # Consumer surplus increases as more individuals are able to purchase the good than before. Similarly, producer surplus has increased as the subsidy takes care of part (if not all) of their costs. Overall market surplus has increased. 
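# As a minimal sketch of the subsidy case, we can reuse the demand and supply curves defined earlier in this notebook. The per-unit subsidy of 3 is an illustrative number, not taken from the original text.

# +
subsidy = 3
# The subsidy lowers producers' effective costs, shifting the supply curve down by the per-unit amount.
P_supply_subsidized = 0.02 * Q + 14 - subsidy
Q_star_sub = solve(P_demand, P_supply_subsidized)
P_consumers = P_demand.subs(Q, Q_star_sub)   # new, lower market price paid by consumers (14 here)
P_producers = P_consumers + subsidy          # price producers effectively receive (17 = 14 + 3)
government_cost = subsidy * Q_star_sub       # total cost of the subsidy (3 * 150 = 450)
Q_star_sub, P_consumers, P_producers, government_cost
# -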
# # The welfare gain is depicted in a similar way to that of a tax: a triangle with a vertex at the original market equilibrium and a base along $Q^*$. The cost of the subsidy to the government is $\text{per-unit subsidy} \cdot Q^*$. # ### Price controls # # The last type of government intervention is far more forceful than taxes and subsidies. Instead of changing the per-unit cost of a good paid by consumers or producers, price controls directly fix the market price. It does not change the amount different consumers are willing and able to pay or how much producers are willing and able to produce. # # When price controls are introduced to a market, the equilibrium price and quantity no longer exist. Consumer and producers are forced to sell or buy goods at the prescribed price. # + # [INSERT GRAPH: PRICE FLOOR] # - # Let's take a price floor as the first example. This is when the government imposes a minimum price for which a good can be sold. For this to be effective, the price floor must be set above the equilibrium price, otherwise the market would just settle at equilibrium. This results in a surplus, as producers are willing to sell far more goods than consumers are willing to purchase. This results in a surplus as $Q_s > Q_d$. Normally, without government intervention, the invisible hand would push prices down until $Q_s = Q_d$. However, with the price floor, the market remains in a state of disequilibrium. # # Minimum wages act as a price floor, keeping wages above a certain level. At this higher rate, more workers are willing to work than there is demand for, creating unemployment. **For extra commentary, should I talk about Card & Auerbach's study, or would that be extraneous?** # # # # Umar: bolded above # # + # [INSERT GRAPH: PRICE CEILING] # - # The other control is a price ceiling. This is when the government imposes a maximum price for which a good can be sold. This results in a shortage as consumers are to purchase more goods than producer are willing to sell: $Q_d > Q_s$. # # Rent control in Berkeley acts as a price ceiling, keeping rents from rising above a certain level. # ## Optional: World Trade vs. Autarky # # Throughout the class so far, we have assumed that the economy is operating by itself, in isolation. Economists use the word "autarky" to represent the scenario when a country is economically self-sufficient. The key consequence of this is that the country's economy is unaffected by global events, allowing us to conclude that any changes to equilibrium price or quantity are purely a result of shifts in domestic demand or supply. # # We will now assume that the country is no longer in autarky, and is thus subject to world prices, demand and supply. # + # [INSERT GRAPH: AUTARKY] # - # We now label our demand and supply curves as domestic demand and supply, respectively. Their intersection represents where the market would be if we were in autarky. However, as we are now open to world trade, the price where the market operates is now determined by the world price, which may be totally different from the equilibrium price. # # Where the world price line intersects with domestic demand represents the total quantity demanded within the market. Where it intersects with the domestic supply curve is the total quantity supplied. # # Let's first imagine a scenario where world price is lower than domestic price. From our graph, we find that $Q_s < Q_d$. This is a shortage. Domestic producers are not willing to fully provide all goods demanded by consumers in the economy. 
# The difference between what is produced domestically and the total quantity demanded will be fulfilled by international producers. Thus, $Q_d - Q_s =$ imports. As prices are lower than what some consumers were willing to pay, there is a significant consumer surplus. Unfortunately for producers, as world prices are lower than what they would have been able to charge, they lose producer surplus. There is a net gain in total surplus, as the gain in consumer surplus is greater than the loss in producer surplus. This is represented by the triangle between the curves and above the world price line.
#
# Now, let's imagine that world price is greater than domestic price. In this case, $Q_s > Q_d$. We now have a surplus where the quantity demanded by consumers is far less than what domestic producers are willing to provide at this higher price. Thus, domestic producers will willingly sell goods to domestic customers and sell the rest on the global market. $Q_s - Q_d =$ exports. As prices are higher than what some consumers were willing to pay, there is a significant decrease in consumer surplus. Producers, however, rejoice as world prices are higher than what they would have been able to charge. They gain producer surplus. There is a net gain in total surplus as the increase in producer surplus is greater than the loss in consumer surplus. This is represented by the triangle between the curves and below the world price line.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import matplotlib.pyplot as plt
import numpy as np
import joblib
import tensorflow as tf

import matplotlib
matplotlib.use("pgf")
matplotlib.rcParams.update({
    "pgf.texsystem": "pdflatex",
    'font.family': 'serif',
    'text.usetex': True,
    'pgf.rcfonts': False,
})

def show_tsne(x, labels):
    fig, ax = plt.subplots()
    fig.set_size_inches(10, 10)
    scatter = ax.scatter(x[:, 0], x[:, 1], c = labels, cmap = 'tab10')
    legend = ax.legend(*scatter.legend_elements(), title = 'Digits', fontsize = 'large', title_fontsize = 'x-large', markerscale = 2)
    ax.set_xticks([])
    ax.set_yticks([])
    plt.show()

y, y_validation = joblib.load('tsne-data')
show_tsne(y[-1], y_validation)
plt.savefig('slides/pictures/tsne.pgf')

with open('slides/pictures/tsne-data.txt', 'w') as fout:
    for i in range(len(y_validation)):
        print(f'{y[-1][i, 0]} {y[-1][i, 1]} {y_validation[i]}', file = fout)

a=tf.constant([[1,2],[100,200],[10000,20000]])
a[:, None, :] + a

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.10 64-bit (''venv'': conda)'
#     name: python3
# ---

# + colab={"base_uri": "https://localhost:8080/"} id="ddAEv4mympEq" outputId="bd2a44da-50d1-44ae-be49-c7218e516bea"
import torch
from transformers import BertTokenizer
from transformers import BertModel
#pip install transformers

## Load pretrained model/tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased',output_hidden_states=True)

# + id="g1_yw6Erm4LZ"
def GetBertEmbeddings(text):
    marked_text = text
    tokenized_text = tokenizer.tokenize(marked_text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
    segments_ids = [1] * len(tokenized_text)

    # Convert inputs to PyTorch tensors
    tokens_tensor = torch.tensor([indexed_tokens])

    # Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased',output_hidden_states=True) # Put the model in "evaluation" mode, meaning feed-forward operation. model.eval() #Run the text through BERT, get the output and collect all of the hidden states produced from all 12 layers. #Run the text through BERT, get the output and collect all of the hidden states produced from all 12 layers. with torch.no_grad(): outputs = model(tokens_tensor) # can use last hidden state as word embeddings last_hidden_state = outputs[0] word_embed_1 = last_hidden_state # Evaluating the model will return a different number of objects based on how it's configured in the `from_pretrained` call earlier. In this case, becase we set `output_hidden_states = True`, the third item will be the hidden states from all layers. See the documentation for more details:https://huggingface.co/transformers/model_doc/bert.html#bertmodel hidden_states = outputs[2] # initial embeddings can be taken from 0th layer of hidden states word_embed_2 = hidden_states[0] # sum of all hidden states word_embed_3 = torch.stack(hidden_states).sum(0) # sum of second to last layer word_embed_4 = torch.stack(hidden_states[2:]).sum(0) # sum of last four layer word_embed_5 = torch.stack(hidden_states[-4:]).sum(0) # concatenate last four layers word_embed_6 = torch.cat([hidden_states[i] for i in [-1,-2,-3,-4]], dim=-1) return word_embed_5 # + colab={"base_uri": "https://localhost:8080/"} id="KjEb75AlnD5v" outputId="27a68f16-ac54-481a-e91b-08ab06561c76" GetBertEmbeddings("It is time to play valorant").shape # + id="XBCLIi9onJpP" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [magnolia3-cpu] # language: python # name: Python [magnolia3-cpu] # --- # # Sparse NMF for Separation # # One approach to using NMF for source separation is to learn sets of $k$ basis vectors $W$ and $H$ for each of the speakers in a dataset. To separate a mixture where the speakers are known, we concatenate the dictionaries $W$ associated with them, learn new loadings $H$, and take the product of the speaker-specific $W$ with the associated components of $H$ to yield the reconstruction. # # Training, for speaker $i$: # $$X_i = W_i H_i$$ # $$W_i, H_i = \text{NMF}(X_i)$$ # # Evaluation, on mixture $X_{ij}$ of speech from speaker $i$ and $j$. 
$\text{NMF}_W$ performs NMF updates without updating the values in $W$: # $$W_{ij} = [ W_i \, W_j ]$$ # $$H_{ij}' = \text{NMF}_{W_{ij}}(X_{ij})$$ # $$H_{ij}' = [ H_i' \, H_j' ]$$ # $$\hat{X}_i = W_i H_i$$ # $$\hat{X}_j = W_j H_j$$ # + import sys import time from itertools import islice, permutations, product, chain from collections import namedtuple import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import IPython.display as display from magnolia.features.hdf5_iterator import Hdf5Iterator, SplitsIterator from magnolia.features.mixer import FeatureMixer from magnolia.features.wav_iterator import batcher from magnolia.utils.tf_utils import scope_decorator as scope from magnolia.utils.bss_eval import bss_eval_sources from magnolia.factorization.nmf import snmf, nmf_separate from magnolia.utils.postprocessing import reconstruct num_srcs = 2 num_steps = 80 num_freq_bins = 257 num_components = 20 sparsity = 0.1 num_train_exs = 50 num_nmf_iters = 15 num_known_spkrs = 30 update_weight = 0.05 num_test_iters = 80 librispeech_dev = "/local_data/teams/magnolia/librispeech/processed_dev-clean.h5" # librispeech_train = "/local_data/teams/magnolia/librispeech/processed_train-clean-100.h5" # librispeech_test = "/local_data/teams/magnolia/librispeech/processed_test_clean.h5" librispeech_dev = "/Users/patrickc/data/LibriSpeech/processed_dev-clean.h5" librispeech_train = "/Users/patrickc/data/LibriSpeech/processed_train-clean-100.h5" librispeech_test = "/Users/patrickc/data/LibriSpeech/processed_test_clean.h5" train_metrics_path = "/Users/patrickc/src/magnolia/nmf-train-metrics.txt" inset_results_path = "/Users/patrickc/src/magnolia/nmf-inset-results.txt" outset_results_path = "/Users/patrickc/src/magnolia/nmf-outset-results.txt" for path in [train_metrics_path,inset_results_path,outset_results_path]: with open(path,"w") as f: pass def scale_spectrogram(spectrogram): mag_spec = np.abs(spectrogram) phases = np.unwrap(np.angle(spectrogram)) mag_spec = np.sqrt(mag_spec) M = mag_spec.max() m = mag_spec.min() return (mag_spec - m)/(M - m), phases def moving_average(a, n=3): ret = np.cumsum(a, dtype=float) ret[n:] = ret[n:] - ret[:-n] return ret[n - 1:] / n # %matplotlib inline # - # ## Data # ### Get speaker-specific iterators # %pdb off with open("../../data/librispeech/authors/dev-clean-F.txt") as f: female_dev = f.read().strip().split('\n') with open("../../data/librispeech/authors/dev-clean-M.txt") as f: male_dev = f.read().strip().split('\n') with open("../../data/librispeech/authors/train-clean-100-F.txt") as f: female_train = f.read().strip().split('\n') with open("../../data/librispeech/authors/train-clean-100-M.txt") as f: male_train = f.read().strip().split('\n') with open("../../data/librispeech/authors/test-clean-F.txt") as f: female_test = f.read().strip().split('\n') with open("../../data/librispeech/authors/test-clean-M.txt") as f: male_test = f.read().strip().split('\n') # + female_spkrs = [SplitsIterator([0.8, 0.1, 0.,1], hdf5_path=librispeech_train, speaker_keys=[train], shape=(None,)) for train in female_train] female_spkrs_slice = [SplitsIterator([0.8, 0.1, 0.,1], hdf5_path=librispeech_train, speaker_keys=[train], shape=(num_steps,)) for train in female_train] male_spkrs = [SplitsIterator([0.8, 0.1, 0.,1], hdf5_path=librispeech_train, speaker_keys=[train], shape=(None,)) for train in male_train] male_spkrs_slice = [SplitsIterator([0.8, 0.1, 0.,1], hdf5_path=librispeech_train, speaker_keys=[train], shape=(num_steps,)) for train in male_train] female_spkrs_dev = 
[Hdf5Iterator(hdf5_path=librispeech_dev, speaker_keys=[dev], shape=(None,)) for dev in female_dev] female_spkrs_dev_slice = [Hdf5Iterator(hdf5_path=librispeech_dev, speaker_keys=[dev], shape=(num_steps,)) for dev in female_dev] male_spkrs_dev = [Hdf5Iterator(hdf5_path=librispeech_dev, speaker_keys=[dev], shape=(None,)) for dev in male_dev] male_spkrs_dev_slice = [Hdf5Iterator(hdf5_path=librispeech_dev, speaker_keys=[dev], shape=(num_steps,)) for dev in male_dev] female_spkrs_test = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(None,)) for test in female_test] female_spkrs_test_slice = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(num_steps,)) for test in female_test] male_spkrs_test = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(None,)) for test in male_test] male_spkrs_test_slice = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(num_steps,)) for test in male_test] female_spkrs_test_slice150 = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(150,)) for test in female_test] male_spkrs_test_slice150 = [Hdf5Iterator(hdf5_path=librispeech_test, speaker_keys=[test], shape=(150,)) for test in male_test] # + active="" # ### Get iterators over dataset splits # + active="" # feature_iterators = [SplitsIterator([0.8,0.1,0.1], librispeech_train, # shape=(num_steps, None), seed=i) for i in range(num_srcs)] # mixed_features = FeatureMixer(feature_iterators, shape=(num_steps, None)) # data_batches = batcher(mixed_features, batch_size) # # feature_iterators_val_inset = [SplitsIterator([0.8,0.1,0.1], librispeech_train, # shape=(num_steps, None), seed=i) for i in range(num_srcs)] # for i in range(num_srcs): # feature_iterators_val_inset[i].set_split(1) # mixed_features_val_inset = FeatureMixer(feature_iterators_val_inset, shape=(num_steps, None)) # data_batches_val_inset = batcher(mixed_features_val_inset, batch_size) # # feature_iterators_test_inset = [SplitsIterator([0.8,0.1,0.1], librispeech_train, # shape=(num_steps, None), seed=i) for i in range(num_srcs)] # for i in range(num_srcs): # feature_iterators_test_inset[i].set_split(2) # mixed_features_test_inset = FeatureMixer(feature_iterators_test_inset, shape=(num_steps, None)) # data_batches_test_inset = batcher(mixed_features_test_inset, batch_size) # # feature_iterators_val_outset = [Hdf5Iterator(librispeech_dev, # shape=(num_steps, None), seed=i) for i in range(num_srcs)] # mixed_features_val_outset = FeatureMixer(feature_iterators_val_outset, shape=(num_steps, None)) # data_batches_val_outset = batcher(mixed_features_val_outset, batch_size) # # feature_iterators_test_outset = [Hdf5Iterator(librispeech_test, # shape=(num_steps, None), seed=i) for i in range(num_srcs)] # mixed_features_test_outset = FeatureMixer(feature_iterators_test_outset, shape=(num_steps, None)) # data_batches_test_outset = batcher(mixed_features_test_outset, batch_size) # - # ## Training # + # %pdb off spkr_models = [] errors = [] train_times = [] # Set split to train for spkr in chain(female_spkrs, male_spkrs, female_spkrs_slice, male_spkrs_slice): spkr.set_split(0) TrainRecord = namedtuple('TrainRecord', ['i', 'loss', 'time_delta', 'timestamp', 'batch_size', 'spkr']) for i, spkr in enumerate(chain(female_spkrs_slice[:num_known_spkrs//2], male_spkrs_slice[:num_known_spkrs//2])): print("Speaker", i) w = None h = None spkr_errors = [] for j, example in enumerate(islice(spkr,num_train_exs)): mag, phases = scale_spectrogram(example) try: train_start = time.time() w, 
h, w_err, h_size, err = snmf(mag.T, num_components, sparsity=sparsity, num_iters=num_nmf_iters, W_init=w, H_init=h, return_errors=True, update_weight=update_weight) train_end = time.time() except ValueError as e: print("ValueError encountered", file=sys.stderr) print(e, file=sys.stderr) if "operands" not in repr(e): continue else: raise spkr_errors.extend(err) # only record final error for each speaker train_metrics = TrainRecord( i*num_train_exs + j, err[-1], train_end - train_start, train_start, 1, i ) with open(train_metrics_path, "a") as f: print('\t'.join(map(str,train_metrics)), file=f) train_times.append(train_end - train_start) errors.append(spkr_errors) plt.figure(figsize=(6,1)) plt.plot(moving_average(errors[-1],30)) plt.show() spkr_models.append((w,h)) # plt.figure(figsize=(14,5)) # plt.subplot(1,3,1) # plt.imshow((w @ h)[:,:100], cmap='bone', origin='lower', aspect=1/4) # plt.subplot(1,3,2) # plt.imshow(mag.T[:,:100], cmap='bone', origin='lower', aspect=1/4) # plt.subplot(1,3,3) # plt.imshow(w.T, cmap='bone', origin='lower', aspect=6) # plt.show() # - # ### Split a file # + from scipy.io import wavfile from magnolia.utils.clustering_utils import preprocess_signal test_file = "/Users/patrickc/Downloads/mixed_signal.wav" fs, wav = wavfile.read(test_file) wav_spectrogram, x_in = preprocess_signal(wav, fs) # - # ## Inference (in-set) # # Inference retrains just the loadings matrix $H$ in light of a given $W$ and $X$. The resulting reconstructions are qualitatively quite cruddy unless they are used as masks on the original input, in which case the result is about what we expect (0-2 dB improvement) # + active="" # # Inference on seen speakers # # Set split to test # for spkr in chain(female_spkrs, male_spkrs, female_spkrs_slice, male_spkrs_slice): # spkr.set_split(2) # # # spkrs_slices = {'all': (female_spkrs_slice[:num_known_spkrs//2] + male_spkrs_slice[num_known_spkrs//2:num_known_spkrs], spkr_models), # # 'mm': (male_spkrs_slice[:num_known_spkrs//2:], spkr_models[num_known_spkrs//2:]), # # 'ff': (female_spkrs_slice[num_known_spkrs//2:], spkr_models[:num_known_spkrs//2])} # # TestRecord = namedtuple('TestRecord', ['condition', 'loss', 'sdr', 'sir', 'sar', # 'time_delta', 'timestamp', 'batch_size', 'spkr']) # # Sample randomly (MM/MF/FF) # # spkrs_slices = {'mf': ((female_spkrs_test_slice[:num_known_spkrs//2], # male_spkrs_test_slice[:num_known_spkrs//2]), # (spkr_models[:num_known_spkrs//2], # spkr_models[num_known_spkrs//2:])), # 'mm': ((male_spkrs_test_slice[:num_known_spkrs//2], # male_spkrs_test_slice[:num_known_spkrs//2]), # (spkr_models[num_known_spkrs//2:], # spkr_models[num_known_spkrs//2:])), # 'ff': ((female_spkrs_test_slice[num_known_spkrs//2:], # female_spkrs_test_slice[num_known_spkrs//2:]), # (spkr_models[:num_known_spkrs//2], # spkr_models[:num_known_spkrs//2])), # 'all': ((female_spkrs_test_slice[:num_known_spkrs//2] + male_spkrs_test_slice[:num_known_spkrs//2], # female_spkrs_test_slice[:num_known_spkrs//2] + male_spkrs_test_slice[:num_known_spkrs//2]), # (spkr_models,spkr_models))} # # # for condition, (spkrs_slice, models) in spkrs_slices.items(): # for condition, ((spkrs_slice_i, spkrs_slice_j), (models_i, models_j)) in spkrs_slices.items(): # for i in range(num_test_iters): # spkr_i = np.random.randint(len(spkrs_slice_i)) # spkr_j = np.random.randint(len(spkrs_slice_j)) # if spkr_i == spkr_j: # continue # # example_i = next(spkrs_slice_i[spkr_i]) # example_j = next(spkrs_slice_i[spkr_j]) # # mix = example_i + example_j # mix_scl_mag, mix_scl_phs = 
scale_spectrogram(mix) # # # separate # infer_start = time.time() # reco_i, reco_j = nmf_separate(mix_scl_mag.T, [models_i[spkr_i], models_j[spkr_j]], mask=True) # infer_end = time.time() # # # get eval metrics # recon_audio_i = reconstruct(reco_i.T**2, mix, 10000, None, 0.0256) # recon_audio_j = reconstruct(reco_j.T**2, mix, 10000, None, 0.0256) # recon_ref_i = reconstruct(example_i, example_i, 10000, None, 0.0256) # recon_ref_j = reconstruct(example_j, example_j, 10000, None, 0.0256) # recon_mix = reconstruct(mix, mix, 10000, None, 0.0256) # # # base_metrics = bss_eval_sources(np.stack([recon_ref_i, recon_ref_j]), np.stack([recon_mix, recon_mix])) # predicted_metrics = bss_eval_sources(np.stack([recon_ref_i, recon_ref_j]), np.stack([recon_audio_i, recon_audio_j])) # base_metrics_mean = np.apply_along_axis(np.mean, 1, base_metrics) # predicted_metrics_mean = np.apply_along_axis(np.mean, 1, predicted_metrics) # avg_diff_metrics = [y-x for x, y in zip(base_metrics_mean[:3], predicted_metrics_mean[:3])] # ms_error = np.sum(np.square(reco_j - np.abs(example_j.T))) + np.sum(np.square(reco_i - np.abs(example_i.T))) # test_record = TestRecord(condition, # ms_error, # *avg_diff_metrics, # infer_end-infer_start, # infer_start, # 1, # "{}_{}_{}".format(condition, spkr_i, spkr_j)) # with open(inset_results_path, "a") as f: # print('\t'.join(map(str, test_record)), file=f) # # plt.figure(figsize=(14,5)) # # plt.subplot(1,3,1) # # plt.imshow(mix_scl_mag.T, cmap='bone', origin='lower', aspect=1/4) # # plt.subplot(1,3,2) # # plt.imshow(reco_i, cmap='bone', origin='lower', aspect=1/4) # # plt.subplot(1,3,3) # # plt.imshow(reco_j, cmap='bone', origin='lower', aspect=1/4) # # plt.show() # # # display.display(display.Audio(reconstruct(reco_i.T**2, mix, 10000, None, 0.0256), rate=10000)) # # display.display(display.Audio(reconstruct(reco_j.T**2, mix, 10000, None, 0.0256), rate=10000)) # - # ## Out-of-sample test # # Above technique only works when you know which set of basis vectors to select for each speaker. For unseen speakers this is hard. One approach is just to pick the combination of dictionary entries that minimizes the reconstruction error. Unfortunately quadratic in the number of dictionary entries. # # (Getting the cross-correlation of each basis with the mixture and picking the top two is another idea.) 
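# A rough sketch of the cross-correlation idea above: score each trained speaker model by how strongly its basis vectors (the columns of $W$) correlate with the mixture's average spectrum, and keep only the top few speakers before running the pairwise search. `select_models_by_xcorr` is a hypothetical helper, not part of magnolia, and it only narrows the quadratic search rather than replacing it.

# +
def select_models_by_xcorr(mix_mag, models, top_k=2):
    """Rank (W, H) speaker models by correlation of their bases with the mixture.

    mix_mag: magnitude spectrogram with shape (frames, freq_bins).
    models:  list of (W, H) pairs with W of shape (freq_bins, num_components).
    Returns the indices of the top_k highest-scoring models.
    """
    profile = mix_mag.mean(axis=0)                              # average spectrum of the mixture
    profile = (profile - profile.mean()) / (profile.std() + 1e-8)
    scores = []
    for w, _ in models:
        bases = (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-8)   # normalise each basis vector
        scores.append(float(np.mean(profile @ bases) / len(profile)))
    return list(np.argsort(scores)[::-1][:top_k])
# -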
# # (PCA on the W's also make lots of lots of sense) # + # Inference on out of set speakers # spkrs_slices = {'mf': ((female_spkrs_test_slice150[:num_known_spkrs//2], # male_spkrs_test_slice150[num_known_spkrs//2:num_known_spkrs]), # (spkr_models[:num_known_spkrs//2], # spkr_models[num_known_spkrs//2:])), # 'mm': ((male_spkrs_test_slice150[:num_known_spkrs//2:], # male_spkrs_test_slice150[:num_known_spkrs//2:]), # (spkr_models[num_known_spkrs//2:], # spkr_models[num_known_spkrs//2:])), # 'ff': ((female_spkrs_test_slice150[num_known_spkrs//2:], # female_spkrs_test_slice150[num_known_spkrs//2:]), # (spkr_models[:num_known_spkrs//2], # spkr_models[:num_known_spkrs//2]))} spkrs_slices = {'mf': ((female_spkrs_test_slice150[:num_known_spkrs//2], male_spkrs_test_slice150[:num_known_spkrs//2]), (spkr_models[:num_known_spkrs//2], spkr_models[num_known_spkrs//2:])), 'mm': ((male_spkrs_test_slice150[:num_known_spkrs//2], male_spkrs_test_slice150[:num_known_spkrs//2]), (spkr_models[num_known_spkrs//2:], spkr_models[num_known_spkrs//2:])), 'ff': ((female_spkrs_test_slice150[num_known_spkrs//2:], female_spkrs_test_slice150[num_known_spkrs//2:]), (spkr_models[:num_known_spkrs//2], spkr_models[:num_known_spkrs//2])), 'all': ((female_spkrs_test_slice150[:num_known_spkrs//2] + male_spkrs_test_slice150[:num_known_spkrs//2], female_spkrs_test_slice150[:num_known_spkrs//2] + male_spkrs_test_slice150[:num_known_spkrs//2]), (spkr_models,spkr_models))} # for condition, ((spkrs_slice_i, spkrs_slice_j), (models_i, models_j)) in spkrs_slices.items(): num_test_iters = 1 for i in range(num_test_iters): print("Test {}".format(i)) # spkr_i = np.random.randint(len(spkrs_slice_i)) # spkr_j = np.random.randint(len(spkrs_slice_j)) # if spkr_i == spkr_j: # continue # example_i = next(spkrs_slice_i[spkr_i]) # example_j = next(spkrs_slice_j[spkr_j]) # mix = example_i + example_j # mix_scl_mag, mix_scl_phs = scale_spectrogram(mix) models_i = spkr_models models_j = spkr_models mix_scl_mag = x_in mix = wav_spectrogram # loop over choices of speaker model NmfSearchResult = namedtuple("NMFSearchResult", ['i', 'j', 'error', 'models', 'reconstructions']) optimal_pair = NmfSearchResult(0, 0, np.inf, [], []) # i, j, error, reconstructions a and b search_time = 0 for model_i in range(len(models_i)): for model_j in range(len(models_j)): nmf_start = time.time() reco_i, reco_j = nmf_separate(mix_scl_mag.T, [models_i[model_i], models_j[model_j]], mask=True) nmf_end = time.time() search_time += nmf_end - nmf_start err = np.mean(np.abs(mix_scl_mag.T - (reco_i + reco_j))) if err < optimal_pair.error: optimal_pair = NmfSearchResult(model_i, model_j, err, [models_i[model_i][0], models_j[model_j][0]], [reco_i, reco_j]) # Display results print("Used weight matrices {} and {}".format(optimal_pair.i, optimal_pair.j)) plt.figure(figsize=(14,5)) plt.subplot(1,3,1) plt.imshow(mix_scl_mag.T, cmap='bone', origin='lower', aspect=1/4) plt.subplot(1,3,2) plt.imshow(optimal_pair.reconstructions[0], cmap='bone', origin='lower', aspect=1/4) plt.subplot(1,3,3) plt.imshow(optimal_pair.reconstructions[1], cmap='bone', origin='lower', aspect=1/4) plt.show() # Evaluate opt_spec_i, opt_spec_j = optimal_pair.reconstructions mix_audio = reconstruct(mix, mix, 10000, None, 0.0256) # ref_i = reconstruct(example_i, example_i, 10000, None, 0.0256) # ref_j = reconstruct(example_j, example_j, 10000, None, 0.0256) opt_reco_i = reconstruct(optimal_pair.reconstructions[0].T**2, mix, 10000, None, 0.0256) opt_reco_j = reconstruct(optimal_pair.reconstructions[1].T**2, mix, 10000, 
None, 0.0256) # base_metrics = bss_eval_sources(np.stack([ref_i, ref_j]), np.stack([mix_audio, mix_audio])) # predicted_metrics = bss_eval_sources(np.stack([ref_i, ref_j]), np.stack([opt_reco_i, opt_reco_j])) # base_metrics_mean = np.apply_along_axis(np.mean, 1, base_metrics) # predicted_metrics_mean = np.apply_along_axis(np.mean, 1, predicted_metrics) # ms_error = (np.sum(np.square(opt_spec_j - np.abs(example_j.T))) + # np.sum(np.square(opt_spec_i - np.abs(example_i.T)))) # avg_diff_metrics = [y-x for x, y in zip(base_metrics_mean[:3], predicted_metrics_mean[:3])] # test_record = TestRecord( # condition, # ms_error, # *avg_diff_metrics, # search_time, # nmf_end, # 1, # "oos_{}_{}_{}".format(condition, spkr_i, spkr_j) # ) # with open(outset_results_path, "a") as f: # print('\t'.join(map(str, test_record)), file=f) # - from magnolia.features.data_preprocessing import undo_preemphasis display.display(display.Audio(undo_preemphasis(opt_reco_i.astype(np.float16)/np.abs(opt_reco_i.astype(np.float16).max())),rate=10000)) display.display(display.Audio(undo_preemphasis(opt_reco_j.astype(np.float16)/np.abs(opt_reco_j.astype(np.float16).max())),rate=10000)) # + import pandas as pd ins_df = pd.read_csv(inset_results_path, sep='\t', header=None, names=TestRecord._fields) print(ins_df.groupby('condition').sdr.mean()) out_df = pd.read_csv(outset_results_path, sep='\t', header=None, names=TestRecord._fields) out_df.groupby('condition').sdr.mean() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Data Cleansing # Created by: # ### Load library and data set # + # load pandas library import pandas as pd # load csv file url = 'http://bit.ly/2kJwy5N' imdb = pd.read_csv(url) # - imdb.head() imdb.shape # ### Check existing NaN values # + #imdb.isna().head() # - # ### Count NaN values imdb.isna().sum() imdb.isna().sum().sum() # ### Drop records with NaN values clean_imdb1 = imdb.dropna(how='any', axis=0) #clean_imdb1 = imdb.dropna(how='all') print('records number before dropping NaN values: {}'.format(imdb.shape[0])) print('records number after dropping NaN values: {}'.format(clean_imdb1.shape[0])) clean_imdb1.isna().sum().sum() # ### Drop records with all values is NaN clean_imdb2 = imdb.dropna(how='all') print('records number before dropping NaN values: {}'.format(imdb.shape[0])) print('records number after dropping NaN values: {}'.format(clean_imdb2.shape[0])) # ### Replace NaN values by 5 clean_imdb3 = imdb.fillna(5) clean_imdb3.head() # ### Replace NaN values by median clean_imdb4 = imdb.fillna(imdb.median()) clean_imdb4.head() # interpolation -> time series # 5,4,3,5,4, NaN=(4+5)/2 ,5,... 
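# A tiny self-contained check of the linear-interpolation rule sketched above: the
# missing value between 4 and 5 is filled with their average, (4 + 5) / 2 = 4.5.
# (Illustrative series only; it does not touch the imdb data.)
demo = pd.Series([5, 4, 3, 5, 4, None, 5])
print(demo.interpolate(method='linear').tolist())  # [5.0, 4.0, 3.0, 5.0, 4.0, 4.5, 5.0]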
imdb_2 = imdb.copy() imdb_2.interpolate(method ='linear', inplace=True) imdb_2.head() # ### Check the existing of duplicate data duplicated_imdb = imdb[imdb.duplicated()] duplicated_imdb.shape duplicated_imdb.head() imdb[imdb['director_name'] == ''] # ### Drop duplicate data drop_dup_imdb = imdb.drop_duplicates() print('records number before dropping duplicate data: {}'.format(imdb.shape[0])) print('records number after dropping duplicate data: {}'.format(drop_dup_imdb.shape[0])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.4 ('base') # language: python # name: python3 # --- # node class Node(): def __init__(self, value): self.value = value self.left = None self.right = None class BinarySearchTree(): def __init__(self): self.root = None btree = BinarySearchTree() print(btree.root) # insert node class BinarySearchTree(): def __init__(self): self.root = None def insert(self, value): node = Node(value) if self.root == None: self.root = node return True curNode = self.root nextNode = None while True: if value < curNode.value: # go left nextNode = curNode.left if nextNode == None: curNode.left = node return True elif value > curNode.value: # go right nextNode = curNode.right if nextNode == None: curNode.right = node return True else: return False # move forward to next node if have not return curNode = nextNode # + btree = BinarySearchTree() btree.insert(10) btree.insert(1) btree.insert(3) btree.insert(12) btree.root.value # - btree.root.left.value btree.root.right.value # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # B is for Books # > A brief list of books I've found useful over the years. # # - toc: true # - badges: false # - comments: true # - categories: [B] # - hide: false # - image: images/books/assorted-books-on-shelf-1370295.jpg # While this is not an exhaustive list of the books I've read on my journey to become a data scientist, it is comprised of the books I consider my essential reading. # # # Statistics # ### [The Cartoon Guide to Statistics](http://www.larrygonick.com/titles/science/the-cartoon-guide-to-statistics/) # ![](../images/books/resized006.png) # An engaging reference book which explains key concepts like descriptives statistics, distributions, probability, hypothesis testing. # # ### [The Manga Guide to Statistics](https://nostarch.com/sites/default/files/styles/uc_product_full/public/mg_statistics_big.png?itok=A7DJQynq) # ![](../images/books/resized003.png) # Similar to the title above, but this book has a plot that runs through it. Consequently, if you find analogies easier to remember than formulas, this is a great book for you. # # # # Data Science # # ## R # # ### [R for Data Science](https://r4ds.had.co.nz/) # ![](../images/books/resized005.png) # After completing several [DataCamp](https://www.datacamp.com/profile/pevansimpson) courses, I started using R to analyze [test-taker data](https://educators-r-learners.netlify.app/post/coloring-under-the-lines-in-ggplot/) and this book was my go to reference for the [tidyverse](https://www.tidyverse.org/). 
# # ### [Geocomputation in R](https://geocompr.robinlovelace.net/) # ![](../images/books/resized001.png) # My last position required my team and I to travel throughout East Asia to deliver workshops, presentations, and provide client support. This book taught me how to make maps to visualize our reach and impact. # # ### [Blogdown: Creating Websites with R Markdown](https://bookdown.org/yihui/blogdown/) # ![](../images/books/resized000.png) # I would not have been able to create my previous blog without this book. Simply indispensable if you want to create a blog with RStudio. # # ## Python # # ### [A Whirlwind Tour of Python](https://jakevdp.github.io/WhirlwindTourOfPython/) # ![](../images/books/resized009.png) # Need to get the gist of Python in a hurry? This is your book. # # ### [Hands-On Machine Learning with Scikit-Learn and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/) # ![](../images/books/resized002.png) # "Chapter 2 End-to-End Machine Learning Project" along with "Appendix B: Machine Learning Project Checklist" are worth the price of the book alone. # # ### [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) # ![](../images/books/resized004.png) # The real name of this book should be *Pandas from A-Z*. Seriously, work through the examples in Chapter 14 and you will feel infinitely more confident to use ```pandas``` in the future. # # # Concepts # # ### [Storytelling with Data](http://www.storytellingwithdata.com/books) # ![](../images/books/resized007.png) # If "a picture is worth a thousand words," make sure it's conveying the intended message. Use this book to learn the principals of conveying precise meaning instead of serving as a distraction or being redundant. # # ### [Weapons of Math Destruction](https://bookshop.org/books/weapons-of-math-destruction-how-big-data-increases-inequality-and-threatens-democracy/9780553418835) # ![](../images/books/resized008.png) # It's good to be told/reminded that models are never neutral. # # ### [An Illustrated Book of Bad Arguments](https://bookshop.org/books/an-illustrated-book-of-bad-arguments-9781615192250/9781615192250) # ![](../images/books/resized010.png) # Step one of avoiding cognitive biases is to know they exist meaning read this book! # # # Conclusion # # Like I said, the number is small but the impact they've had on me has been massive. Please comment below and tell me which books have impacted you the most during your journey. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="eESmpDgcK8t5" outputId="954d3edb-ab64-4798-ef98-01d34d758435" # 1번 문제 print("hellow world") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="H0PolfpELOav" outputId="a2359715-d299-44ca-bb2d-f0af8396b3f1" #2번 문제 print("Mary's cosmetics") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0xu2PbGKLnR1" outputId="68954fd9-d4a8-4d01-979e-7264dd8407d9" #3번 문제 print('신씨가 소리질렀다. 
"도둑이야"') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pObIc4yNL_bf" outputId="7dd5a1b1-9047-4e67-929c-9a0cb258de05" #4번 문제 print("C:\windows") # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="JFgqVqb7MY2d" outputId="c9eb96c3-c299-4732-8b09-3ecaad24d15e" #5번 문제 print("안녕하세요. \n만나서\t\t반갑습니다.") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cbqQMb_7MySh" outputId="284667c0-293e-4e5d-ffa8-2d34307a73a6" #6번 문제 print("오늘은", "일요일") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="g1S40co6NOHF" outputId="f81bb307-24b3-41af-d4e5-d668be370255" #7번 문제 print("naver", "kakao", "samsung", sep=";") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="LnXASM_PNoNz" outputId="5a8d25b2-4673-4a5f-9e7f-4f29c7626e61" #8번 문제 print("naver", "kakao", "samsung", sep="/") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="TuuqfyjCOU1j" outputId="2c095217-8367-497d-9b55-d9607c1b0ca3" #9 문제 print("first", end=""); print("second") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="kKq3fnDAOy0z" outputId="2478a42a-0b41-49f5-8261-b00b1381887d" #10번 문제 print(5/3) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="TVigsPcOPIO8" outputId="9013a643-2f59-4ada-f089-d5f61d63a3e7" #11번 문제 samsung = 60000 sum = samsung * 10 print(sum) # + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="Zstsx5jmPx5k" outputId="2690b4ad-2589-4116-98f5-fff963cde134" #12번 문제 Sum = 298000000000000 Now = 50000 PER = 15.79 print(Sum, type(Sum)) print(Now, type(Now)) print(PER, type(PER)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="k5YIn-CHQSMk" outputId="ebc1a190-694e-454b-94e3-d592f5526f1c" #13번 문제 s = "hello" t = "python" print(s+"!", t) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="NFEhh7LjRTfR" outputId="4582b2b5-2a3f-4dc3-a9a9-806fdc917f9d" #14번 문제 print(2 + 2 * 3) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="vURdVOZpRvY_" outputId="81e4eb0c-8f82-44c3-eca8-14f120a183f6" #15번 문제 a = "132" print(type(a)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ddwsOZ1ZSLTF" outputId="58dfde00-7dfa-4380-cf18-9ec67a735a46" #16번 문제 num_str = "720" num_int = int(num_str) print(num_int, type(num_int)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="aPTXAnbdSzIz" outputId="b0cd95f1-b99f-45ed-efa3-3fd46a48de2b" #17번 문제 num = 100 result = str(num) print(result, type(result)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iUCyTmEjUDkU" outputId="472527cb-2ef5-460a-f2a0-51195b85ecb4" #18번 문제 data = "15.79" data = float(data) print(data, type(data)) # + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="zdmNPvwrUgHW" outputId="486a66b6-3a91-4c5e-9ed7-894bbbf37348" #19번 문제 year = "2020" print(int(year)-3) # 2017 print(int(year)-2) # 2018 print(int(year)-1) # 2019 # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="EypuooGUU3-4" outputId="eb9a5f13-1cf8-4c60-f5a1-f4adb01014d3" #20번 문제 month_price = 48584 m = 36 totol_price = month_price print(totol_price * m) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" 
id="IfFHxvDZU5XC" outputId="a736d9fb-3de1-4a71-8fb7-aff764e2e30f" #21번 문제 letters = 'python' print(letters[0], letters[2]) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="fTcSd6YTU6iV" outputId="14245a8e-d4eb-4164-f70d-40e87dcbb7f5" #22번 문제 license_plate = "24가 2211" print(license_plate[-4:]) # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="di1EvSbRU7nF" outputId="e02cff20-d9d5-4113-e2c0-3c8c75a5613c" #23번 문제 string = "홀짝홀짝홀짝" print(string[::2]) print(string[0],string[2],string[4],sep="") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="o-1T13zZdObY" outputId="a3a47815-66a4-427c-cf6d-dceddd84a4a5" #24번 문제 string = "PYTHON" print(string[::-1]) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ADul_dmkdQC-" outputId="0080f3ab-2904-47e2-d20e-136401d65570" #25번 문제 phone_number = "010-1111-2222" phone_numberN = phone_number.replace("-", " ") print(phone_numberN) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="bhgk7mO-dRcA" outputId="9183ff76-6645-4fb8-83b5-78c072a3ed84" #26번 문제 phone_number = "010-1111-2222" phone_number1 = phone_number.replace('-', '') print(phone_number1) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="1mRlxbAwdSdS" outputId="c5a3deac-0ae6-4768-d822-6394afc625c7" #27번 문제 url = "http://sharebook.kr" url_split = url.split('.') print(url_split[-1]) # + colab={"base_uri": "https://localhost:8080/", "height": 241} colab_type="code" id="oNuOgvUZfknS" outputId="90e4efa6-5bfb-4414-d248-d68eee4f4974" #28번 문제 lang = 'python' lang[0] = 'p' print(lang) """ 문자열은 수정할 수 없습니다. """ # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8Yi-craEfl0j" outputId="87438172-9ca4-4d95-d1b1-7786069a1b93" #29번 문제 string = 'abcdfe2a354a32a' string = string.replace('a', 'A') print(string) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3zDQ75OXfm6E" outputId="a31c8c4f-cc9e-44c7-f2c4-675a5fea2b17" #30번 문제 string = 'abcd' string.replace('b', 'B') print(string) # abcd가 그대로 출력되는 이유는! # 문자열은 변경할 수 없는 자료형이기 때문입니다. # replace 메서드를 사용하면 원본은 그대로 둔채로 변경된 새로운 문자열 객체를 리턴해줍니다. # + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="87V6ZTydfqF8" outputId="1ac1326f-cbeb-4cd6-9ab3-77c7e77f5627" # 조건문 실습 셀 입니다. money =True if money: print("택시를") print("타고") # 들여쓰기가 틀리면 에러가 발생합니다. print("가라") # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="INqRtplen3jE" outputId="4528b05b-4d66-4cfd-d425-f2a4817df01b" people = 3 apple = 20 if people < apple /5: print("신나는 사과 파티! 배터지게 먹자!") if apple % people > 0: print("사과 수가 맞지 않아! 몇 개는 쪼개 먹자!") if people > apple: print("사람이 너무 많아! 
몇 명은 못먹겠네...") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="p0_0ofHhodwk" outputId="9ad6d7b6-44f7-40a5-ac51-66b94b402f39" money = 2000 if money >= 3000: print("택시를 타고 가라") else: print("걸어가라") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="kIzmgkeKpYqF" outputId="2cfdf493-e1bc-4107-e827-d2ebb1f93488" money = 2000 card = True if money >= 3000 or card: print("택시를 타고 가라") else: print("걸어가라") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ww8p_Gjsp4-h" outputId="bf81f62e-615f-4ae6-ed10-39ff04906106" pocket = ['paper', 'cellphone', 'money'] if 'money' in pocket: print("택시를 타고 가라") else: print("걸어가라") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="VgUv3CVJquCM" outputId="199075a1-7459-47b3-8a21-594e03658319" from datetime import datetime hour = datetime.now().hour if hour < 12: print("오전입니다.") else: print("오후입니다.") # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="i9ZhkSa2r0V9" outputId="0ad7c25c-d5d8-4e80-c6b7-0e5598080f0f" number = 15 if number % 3 == 0: print("{}는 3의 배수입니다.".format(number)) # + colab={} colab_type="code" id="YuwecOnusuWA" number = 16 if number % 3 == 0: print("{}는 3의 배수입니다.".format(number)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="9bSVl30EtF09" outputId="870e211f-f242-4505-d2d6-2634f9d5e621" from datetime import datetime hour = datetime.now().hour if hour % 6 == 0: print('종이 울립니다.') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="EQgsOYnqtjDm" outputId="260ce760-4784-4824-a655-6ddf289cb2b3" s = '가위' r = '바위' p = '보' w = '이겼다' d = '비겼다.' l = '졌다' m = '기위' y = '바위' if m == y: r = d else: r = '이기거나 지거나' print(r) # + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="rBVf1rtP0Mqo" outputId="517ca943-3e60-4d8b-e2f2-c1d6989ae922" a = [(1,2), (3,4), (5,6)] for (first, last) in a: print(first + last) # + colab={"base_uri": "https://localhost:8080/", "height": 105} colab_type="code" id="suSdvwVlzmuD" outputId="583474b7-41f7-4327-f954-744a7fa3b1ff" marks = [90, 25, 67, 45, 80] number = 0 for mark in marks: number = number + 1 if mark >= 60: print("%d번 학생은 합격입니다." % number) else: print("%d번 학생은 불합격입니다." % number) # + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="lTSs2peI1Zbe" outputId="6ec94719-cd5c-4a0e-c3d9-07142d918e20" marks = [90, 25, 67, 45, 80] number = 0 for mark in marks: number = number + 1 if mark < 60: continue print("%d번 학생은 합격입니다." 
% number) # + colab={"base_uri": "https://localhost:8080/", "height": 158} colab_type="code" id="_qcSQXPN2shS" outputId="03aa3e71-53a0-461a-d96d-5285b0742fc3" for i in range(2,10): for j in range(1,10): print(i*j, end=" ") print('') # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="2VPEJn9H1-p-" outputId="bc0114a0-db47-4710-e46c-7fb53564f57b" result = [x*y for x in range(2,10) for y in range(1,10)] print(result) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3q6O_cR56ALj" outputId="2f6ad7c5-86eb-47c8-ad85-b5f951516b8d" result = 0 i = 1 while i <= 1000: if i % 3 == 0: # 3으로 나누어 떨어지는 수는 3의 배수 result += i i += 1 print(result) # + colab={"base_uri": "https://localhost:8080/", "height": 105} colab_type="code" id="K9kpxVEX64WC" outputId="f88329d4-ac93-4339-d5f9-37598343741a" i = 0 while True: i += 1 # while문 수행 시 1씩 증가 if i > 5: break # i 값이 5이상이면 while문을 벗어난다. print ('*' * i) # i 값 개수만큼 *를 출력한다. # + colab={"base_uri": "https://localhost:8080/", "height": 372} colab_type="code" id="rDcdWlwt73ml" outputId="e40baf7e-13ca-4743-d9a9-f9bc8da0f210" for i in range(1, 21): print(i) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="KdLtIudj8m0z" outputId="f7a1f781-ff58-4dd1-c6f2-ab5bb286709b" A = [70, 60, 55, 75, 95, 90, 80, 80, 85, 100] total = 0 for score in A: total += score # A학급의 점수를 모두 더한다. average = total / len(A) # 평균을 구하기 위해 총 점수를 총 학생수로 나눈다. print(average) # 평균 79.0이 출력된다. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="M96TjgUE811J" outputId="f62bb493-ed59-4531-8b88-fef0181fb220" numbers = [1, 2, 3, 4, 5] result = [n*2 for n in numbers if n%2==1] print(result) # + colab={} colab_type="code" id="yNYqiXLH9vfA" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="yOuZoRwusbo3" # # 뉴스 관련단어 추천 서비스 # + [markdown] id="K7izDVgyCxUb" # ## 데이터 가져오기 # + id="x935a969BGrO" import pandas as pd # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Hdbk_hK8BkFC" outputId="59c70c6e-68c0-49fc-96b8-45085350bdbc" df = pd.read_csv('./drive/MyDrive/Multi_test_deeplearning/smtph_total.csv') df.head(5) # + colab={"base_uri": "https://localhost:8080/"} id="iMaYAQRYCHVC" outputId="766050d2-4200-4225-8a63-979a1e800b72" df.columns # + colab={"base_uri": "https://localhost:8080/"} id="Stu5wgLXBxxj" outputId="b7e690b0-2818-4732-b096-6e6df766ef07" posts = df['Title']+['Description'] type(posts),posts # + [markdown] id="7GXLfc3RCpJz" # ## 단어로 쪼개기 # + id="APJhqSoTCO0-" # !python3 -m pip install konlpy # + id="-V-7-AMZCjCR" from konlpy.tag import Okt # + id="0oq_nQ4kDM8o" okt = Okt() # + id="TVrAfOr6EQdz" stop_words = ['강,', '글,', '애', '미', '번', '은', '이', '것', '등', '더', '를', '좀', '즉', '인', '옹', '때', '만', '원', '이때', '개'] # + colab={"base_uri": "https://localhost:8080/"} id="0n6NBz9SDQYF" outputId="5ef85155-ea7c-4a3e-9dde-5efc6d06e5b5" posts_noun = [] for post in posts: #print(okt.nouns(post)) for noun in okt.nouns(post): posts_noun.append(noun) len(posts_noun) # + id="vfkvejxsWP3v" posts_noun # + id="FmzXWEsQD9wX" from collections import Counter # + [markdown] id="PupbAFjIYW-E" # type(noun_cnt) 딕셔너리로 출력 됨 # + colab={"base_uri": "https://localhost:8080/"} id="885Bv5vJWPX2" outputId="1d45604e-f329-464e-83cb-8650411276d1" noun_cnt = 
Counter(posts_noun) type(noun_cnt) # + id="6wxKwmjgWgB2" colab={"base_uri": "https://localhost:8080/"} outputId="0e9c6ff9-5059-44d5-c415-a0054a145664" top_30_nouns = noun_cnt.most_common(30) type(top_30_nouns),top_30_nouns # + colab={"base_uri": "https://localhost:8080/"} id="XiDj6D4QY9Ml" outputId="179b5f09-6ddd-41e6-83dd-4c90c4427ef1" top_nouns_dict = dict(top_30_nouns) type(top_nouns_dict) # + [markdown] id="Gcvc427UZJAM" # ## Word cloud # # + id="DKTk6cSyZBJV" from wordcloud import WordCloud # + id="05JBkyc5cIO2" path = '' # + id="991uzYxbZQ6n" nouns_wordcloud = WordCloud() # + colab={"base_uri": "https://localhost:8080/"} id="Tq83WVJiZan1" outputId="22653b85-b8e2-4916-e3d4-15ac95285bc1" nouns_wordcloud.generate_from_frequencies(top_nouns_dict) # + [markdown] id="pnAaoTM4Z_2d" # ## Display # + id="zgcPWY2zZwiG" import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/", "height": 237} id="Xu0XyRcTZ500" outputId="0dfb3bf9-756d-4dcc-e299-22257aff9be0" plt.imshow(nouns_wordcloud) # + [markdown] id="glYmo-OEcuuU" # stopwords, fonts 사용 안한 것 # + [markdown] id="nMsUnNdocO0U" # ## CountVectorizer(빈도수 사용) # + id="9qwxE3mVaOLz" from sklearn.feature_extraction.text import CountVectorizer # + id="UWy-7_vifafU" corpus = [ 'you know I want your love', 'I like you', 'what should I do ', ] # + id="FwHR0IB5fdXl" countvectorizer = CountVectorizer() # + colab={"base_uri": "https://localhost:8080/"} id="QR5a5OmPfp4D" outputId="7ff4e4b7-7d46-456a-e66b-1f1378ab43a4" countvectorizer.fit_transform(corpus).toarray() # + [markdown] id="1lJ2HgszgBGM" # vocabulary_ 로 컬럼의 순서 확인 # + colab={"base_uri": "https://localhost:8080/"} id="5xCLPghDf1tC" outputId="8ac02fa1-6779-4c04-b077-0ccc8ceac4f7" print(countvectorizer.vocabulary_) # + [markdown] id="Yj2Fgg1hV4tj" # ## Word2Vec # 리스트 라인의 단어를 백터화 시키는것 # + colab={"base_uri": "https://localhost:8080/"} id="ls2TKGGh7dU-" outputId="a13397fc-68a6-45ab-a4f6-7f6b425844c4" posts_noun[0:10] # + id="86q_RiyVV7_z" from gensim.models import Word2Vec # + id="1aepym8AWBxs" word2vec= Word2Vec([posts_noun],min_count=1) # + colab={"base_uri": "https://localhost:8080/"} id="GuBLiWTwWTR2" outputId="0e8ab727-e260-43ed-8175-6687c7764606" word2vec # + [markdown] id="Ebu7m55mWXzI" # 궁금한 단어 넣으면 유사단어 가져옴(코사인 기준) # + colab={"base_uri": "https://localhost:8080/"} id="ytKrObm9WWFv" outputId="6c7ac14d-9a04-4019-cc6a-f193c72cef09" word2vec.wv.most_similar('갤럭시') # + id="e9WKeYCtWlAb" colab={"base_uri": "https://localhost:8080/"} outputId="0cbb7383-4155-414e-d0d7-3e3394842f4d" word2vec # + id="85KAolJM74Gc" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # ![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) # # ## Introduction to Data Science and Big Data For Educators # # ([@misterhay](https://twitter.com/misterhay)) # # Dr. # # [Callysto.ca](https://callysto.ca) | [@callysto_canada](https://twitter.com/callysto_canada) # # CC BY # + [markdown] slideshow={"slide_type": "subslide"} # ## Introduction to Data Science and Big Data For Educators # # The ability to critically analyse large sets of data is becoming increasingly important, and there are many applications in education. 
We will introduce participants to the fundamentals of data science, and look at how you can incorporate data science into your teaching. You will come away with an increased understanding of this topic as well as some practical activities to use in your learning environment. # + [markdown] slideshow={"slide_type": "slide"} # # Data Science # # Data science involves obtaining and **communicating** information from (usually large) sets of observations. # * collecting, cleaning, manipulating, visualizing, synthesizing # * describing, diagnosing, predicting, prescribing # + [markdown] slideshow={"slide_type": "slide"} # ## Why is Data Science Important? # # # + [markdown] slideshow={"slide_type": "slide"} # ## What Does Data Science Look Like? # # e.g. [Gapminder animation](https://www.gapminder.org/tools/#$model$markers$bubble$encoding$frame$scale$domain@=1800&=2019;;;;;;;&chart-type=bubbles&url=v1) # + [markdown] slideshow={"slide_type": "slide"} # ## How Can We Introduce Data Science? # # # + [markdown] slideshow={"slide_type": "slide"} # # Jupyter Notebooks # # A Jupyter notebook is an online document that can include both **formatted text** and (Python) `code` in different “cells” or parts of the document. # # These documents run on [Callysto Hub](https://hub.callysto.ca/) as well as [Google Colab](https://colab.research.google.com/), [IBM Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-parent.html), and other places. # # We'll be using Python code in Jupyter notebooks for data science and computational thinking. # # Links in this slideshow (and on Callysto.ca) create copies of Jupyter notebooks in your (and your students’) Callysto Hub accounts. This slideshow is also a Jupyter notebook. # + [markdown] slideshow={"slide_type": "slide"} # ## Visualizing Data # # Visualizations of data help with analysis and storytelling. # * Usually include tables and graphs # # In a Jupyter notebook with Python code, a graph can be as easy as: # + slideshow={"slide_type": "subslide"} import plotly.express as px px.pie(names=['left-handed', 'right-handed'], values=[3, 21], title='Handedness of People in our Class') # + slideshow={"slide_type": "subslide"} px.scatter(x=[1, 2, 3, 4], y=[1, 4, 9, 16]) # + slideshow={"slide_type": "subslide"} labels = ['English','French','Aboriginal Languages','Other'] values = [56.9,21.3,0.6,21.2] px.bar(x=labels, y=values, title='First Languages Spoken in Canada') # + [markdown] slideshow={"slide_type": "slide"} # ## Using Online Data # # We can import data from webpages or other files hosted online. 
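# For plain CSV files, `pandas.read_csv` accepts a URL directly, just like a local
# file path. A minimal sketch, assuming the public seaborn-data GitHub repository is
# still reachable at this address (any CSV hosted online works the same way):
# + slideshow={"slide_type": "subslide"}
import pandas as pd
csv_url = 'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv'
iris = pd.read_csv(csv_url)  # pandas downloads and parses the file in one call
iris.head()
# -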
# # ### Examples of Data Sources # # * Wikipedia # * Gapminder # * Statistics Canada # * Canada Open Data # * Alberta Open Data # * Many cities and municipalities have open data portals # + slideshow={"slide_type": "subslide"} url = 'https://en.wikipedia.org/wiki/List_of_Alberta_general_elections' import pandas as pd df = pd.read_html(url)[1] df # + slideshow={"slide_type": "subslide"} px.histogram(df, x='Winner', title='Political Parties Elected in Alberta') # + [markdown] slideshow={"slide_type": "subslide"} # ### CSV Data Online # + slideshow={"slide_type": "subslide"} from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent='Callysto Demonstration') coordinates = geolocator.geocode('Grande Prairie, AB') temperature_url = 'https://climateknowledgeportal.worldbank.org/api/data/get-download-data/historical/tas/1901-2016/'+str(coordinates.latitude)+'$cckp$'+str(coordinates.longitude)+'/'+str(coordinates.latitude)+'$cckp$'+str(coordinates.longitude) temperatures = pd.read_csv(temperature_url) temperatures # + slideshow={"slide_type": "subslide"} px.scatter(temperatures, x=' Statistics', y='Temperature - (Celsius)', color=' Year', title='Monthy Average Temperatures in Grande Prairie from 1901-2016') # + slideshow={"slide_type": "subslide"} px.line(temperatures, x=' Year', y='Temperature - (Celsius)', color=' Statistics', title='Monthy Average Temperatures in Grande Prairie from 1901-2016') # + slideshow={"slide_type": "subslide"} px.bar(temperatures, x=' Statistics', y='Temperature - (Celsius)', animation_frame=' Year', title='Temperatures in Grande Prairie').update_layout(yaxis_range=[-30, 30]) # + [markdown] slideshow={"slide_type": "slide"} # ## Data Formatting # # You may find data in "tidy" or "wide" format. # # ### Tidy (Long) Data # # One observation per row. # # Name|Assignment|Mark # -|-|- # Marie|Radium Report|88 # Marie|Polonium Lab|84 # Jane|Primate Report|94 # Jane|Institute Project|77 # Mae|Endeavour Launch|92 # Jennifer|Genetics Project|87 # # ### Wide Data # # Multiple columns for variables. # # Name|Science Lab|Science Report|Spelling Test|Math Worksheet|Discussion Questions # -|-|-|-|-|- # Ryder|80|60|90|70|80 # Marshall|60|70|70|80|90 # Skye|90|80|90|90|80 # Everest|80|90|80|70|90 # # Data can be converted from one format to another, depending on how it is going to be visualized. # + [markdown] slideshow={"slide_type": "slide"} # # Markdown # # For formatting text in notebooks, e.g. **bold** and *italics*. # # [Markdown Cheatsheet](https://www.ibm.com/support/knowledgecenter/SSHGWL_1.2.3/analyze-data/markd-jupyter.html) # # ## LaTeX # # Mathematical and scientific formatting, e.g. # # $m = \frac{E}{c^2}$ # # $6 CO_2 + 6H_2O → C_6H_12O_6 + 6 O_2$ # # [LaTeX Cheatsheet](https://davidhamann.de/2017/06/12/latex-cheat-sheet) # + [markdown] slideshow={"slide_type": "slide"} # # Curriculum Notebooks # # The [Callysto](https://www.callysto.ca) project has been developing free curriculum-aligned notebooks and other resources. 
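# Returning to the Data Formatting slide above: converting between wide and tidy
# layouts is usually a one-liner in pandas. A minimal sketch using two rows of the
# wide-format example table (the marks are the made-up values from that slide):
# + slideshow={"slide_type": "subslide"}
import pandas as pd
wide = pd.DataFrame({'Name': ['Ryder', 'Marshall'],
                     'Science Lab': [80, 60],
                     'Science Report': [60, 70]})
tidy = wide.melt(id_vars='Name', var_name='Assignment', value_name='Mark')
tidy  # one observation per row, ready for plotting or grouping
# -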
# + [markdown] slideshow={"slide_type": "subslide"} # Callysto learning modules # + [markdown] slideshow={"slide_type": "subslide"} # ## Some of my Favorite Notebooks # # * [Statistics Project](https://github.com/callysto/curriculum-notebooks/tree/master/Mathematics/StatisticsProject) # * [Orphan Wells](https://github.com/callysto/curriculum-notebooks/blob/master/SocialStudies/OrphanWells/orphan-wells.ipynb) # * [Survive the Middle Ages](https://github.com/callysto/curriculum-notebooks/blob/master/SocialStudies/SurviveTheMiddleAges/survive-the-middle-ages.ipynb) # * [Asthma Rates](https://github.com/callysto/curriculum-notebooks/blob/master/Health/AsthmaRates/asthma-rates.ipynb) # * [Climate Graphs](https://github.com/callysto/curriculum-notebooks/blob/master/Science/Climatograph/climatograph.ipynb) # * [Shakespeare and Statistics](https://github.com/callysto/curriculum-notebooks/blob/master/EnglishLanguageArts/ShakespeareStatistics/shakespeare-and-statistics.ipynb) # * [Word Clouds](https://github.com/callysto/curriculum-notebooks/blob/master/EnglishLanguageArts/WordClouds/word-clouds.ipynb) # + [markdown] slideshow={"slide_type": "slide"} # # Data Visualizations and Interesting Problems # # [Weekly Data Visualizations](https://www.callysto.ca/weekly-data-visualization) are pre-made, introductory data science lessons. They are a way for students to develop critical thinking and problem solving skills. We start with a question, find an open dataset to answer the question, and then ask students to reflect. # # [Interesting Problems](https://www.callysto.ca/interesting-problems/) are notebooks and often videos series that demonstrate critical thinking skills, and use programming code to solve interesting problems. # + [markdown] slideshow={"slide_type": "slide"} # # Hackathons # # Online hackathons, either facilitated or [planned yourself](https://docs.google.com/document/d/1tnHhiE554xAmMRbU9REiJZ0rkJmxtNlkkQVCFfCoowE), enable students and educators to collaborate intensely to explore data and solve problems. # + [markdown] slideshow={"slide_type": "slide"} # # Introducing Data Science to Students # # Visualizations: [explore](https://www.youcubed.org/resource/data-talks), modify, [create](http://bit.ly/2RXTLz8) # * Can start with Callysto resources # * Consider "ask three then me" # # [Educator Starter Kit](https://www.callysto.ca/starter-kit) # # [Online courses](https://www.callysto.ca/distance-learning) # # [Basics of Python and Jupyter](https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fpresentations&branch=master&subPath=IntroductionToJupyterAndPython/callysto-introduction-to-jupyter-and-python-1.ipynb&depth=1) # # [Troubleshooting](https://www.callysto.ca/troubleshooting) # + [markdown] slideshow={"slide_type": "slide"} # # Turtles # # Another way to introduce students to Python, Jupyter, and data science. 
# # Start with Python turtles: # * [Python Turtles student version](https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2FTMTeachingTurtles&branch=master&subPath=TMPythonTurtles/turtles-and-python-intro-student.ipynb&depth=1) # * [Python Turtles instructor version (key)](https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2FTMTeachingTurtles&branch=master&subPath=TMPythonTurtles/turtles-and-python-intro-instructor.ipynb&depth=1) # + slideshow={"slide_type": "subslide"} from mobilechelonian import Turtle t = Turtle() t.forward(50) t.right(90) t.penup() t.forward(30) # + [markdown] slideshow={"slide_type": "slide"} # ## Contact # # [](mailto:) or [@callysto_canada](https://twitter.com/callysto_canada) for in-class workshops, virtual hackathons, questions, etc. # # Also check out [Callysto.ca](https://www.callysto.ca) and the [YouTube channel](https://www.youtube.com/channel/UCPdq1SYKA42EZBvUlNQUAng). # + [markdown] slideshow={"slide_type": "slide"} # Callysto Logo # Callysto Partners # # [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data analyses in Python # # Imagine you have a `.csv` file (could also be `.txt`, `.log`, `.dat`, `.sav`, etc.) with some data in it and want to analyze that data. Your `worklfow` to do so will most likely consist of different steps: `loading the data`, `"cleaning" the data`, `exploring the data`, `analyzing the data` and `visualization of the data/results`. In the following, we'll briefly go through all of these steps. # # As you might be aware, the first step is always to load necessary `packages` and/or `functions`. Most of the time, it is not clear what exactly is needed along the `worklfow`. Hence, starting with respective `packages/functions` we're sure about is a good idea. This is most likely based on the data you want to analyze. As we want to have a look at some data in a `.csv` file, `numpy` is a good starting point. # However, to already provide you with the full list of `packages` we are going to explore, here you:: # # - [numpy](https://numpy.org/) # - [pandas](https://pandas.pydata.org/) # - [scipy's stats module](https://docs.scipy.org/doc/scipy/reference/stats.html) # - [statsmodels](http://www.statsmodels.org/stable/index.html) # - [seaborn](seaborn.pydata.org) # # Ok, nuff said, let's go to work: import numpy as np # Using `np.` + `tab` provides us with a nice overview of `numpy`'s functionality: np. # In case, we are not sure about how a certain function works or simply want to know more about it, we can use the `help` function: help(np.array) # Based on our goal, the `genfromtxt` function looks useful, as we initially need to load the data from the `.csv` file: my_data_numpy = np.genfromtxt('data/brain_size.csv', delimiter=';') # With that, we generated a variable called `my_data_numpy`. Now, let's check its `type`. type(my_data_numpy) # It is a `numpy.ndarray` and within it, the data is stored: my_data_numpy # As we saw in the `help` function above, `objects` and `data types` come with certain `functionality`: my_data_numpy. 
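# If tab-completion is not available, the built-in `dir` function lists the same
# functionality. A small self-contained sketch (it uses a throwaway array instead of
# `my_data_numpy`, so it runs without the course's data file):
# +
import numpy as np

demo_array = np.array([[1.0, 2.0], [3.0, 4.0]])
# keep only the public attributes and methods, i.e. names not starting with "_"
public_names = [name for name in dir(demo_array) if not name.startswith('_')]
print(public_names[:10])
# -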
# We can, for example, check the `shape`, that is the `dimensionality`, of the data: my_data_numpy.shape # This returns not a `numpy.ndarray`, but a `tuple`: type(my_data_numpy.shape) # It is also possible to `concatenate functionality`. E.g., we could `transpose` our `numpy.ndarray` and check its resulting `shape` within one command: my_data_numpy.transpose().shape # Is it possible to `view only certain parts` of the data, i.e. the second row? Yes, using `slicing`. my_data_numpy[1] # The output is a `numpy.ndarray` again: type(my_data_numpy[1]) # If we want to be more specific, it is also possible to only view one value, i.e. the fourth value of the second row: my_data_numpy[1, 3] # Now, the `data type` has changed to `numpy.float64`: type(my_data_numpy[1, 3]) # However, getting more than one value, i.e. multiple values of the second row, results in a `numpy.ndarray` again: my_data_numpy[1, 3:6] # Let's look at our data again: my_data_numpy # Even though it's a small dataset, there's already a lot going on: different `data types`, different `columns`, etc. and apparently not everything is `"numpy"` compatible. `Numpy` is a great tool and very powerful, building the foundation for a lot of `python libraries`, but in a lot of cases you might want to prefer `packages` that build upon `numpy` and are intended for specific purposes. A good example for that is the amazing `pandas` library that should be your first address for everything `data wrangling`. In particular, this refers to `tabular data`, that is `multiple observations` or `samples` described by a set of different `attributes` or `features`. The data can than be seen as a `2D table`, or `matrix`, with `columns` giving the different `attributes` of the data, and rows the `observations`. Let's try `pandas`, but at first, we have to import it: import pandas as pd # Now we can check its functionality: pd. # `read_csv` looks helpful regarding loading the data: my_data_pandas = pd.read_csv('data/brain_size.csv', delimiter=',') # What do we have? A `pandas dataframe`: type(my_data_pandas) # Before, our data was in `np.ndarray` format: type(my_data_numpy) # How does our data look now? my_data_pandas # Even though we already have more information as in the `numpy array`, e.g., `headers`, `strings` and `indexes`, it still looks off. What's the problem? Well, we see that our data has a `;` as a `delimiter`, but we indicated `,` as delimiter when loading our data. Therefore, it is important to carefully check your data and beware of its specifics. Let's reload our data with the fitting `delimiter`: my_data_pandas = pd.read_csv('data/brain_size.csv', delimiter=';') # Investigating our `dataframe`, we see that it worked as expected this time: my_data_pandas # Thinking about our `numpy.ndarray` version, we see a difference in the `shape` of the data, which is related to the `header`: my_data_pandas.shape # What can we do with our `dataframe`: my_data_pandas. # For example we can and should rename `columns` with uninformative names: my_data_pandas.rename(columns={'Unnamed: 0': 'sub-id'}) # That looks a bit more informative, doesn't it? Let's have a look at our columns again my_data_pandas.columns # Wait a minute, it's not `renamed`. Did we do something wrong? Let's check the respective functionality: help(my_data_pandas.rename) # Checking the functionality more in depth, a `dataframe` with the new `column names` is returned, but the old one `not automatically changed`. 
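# To see this behaviour in isolation, here is a tiny self-contained example with a
# throwaway dataframe (not the course data): `rename` returns a modified copy and
# leaves the original untouched unless the result is assigned back.
# +
import pandas as pd

toy = pd.DataFrame({'Unnamed: 0': [1, 2], 'score': [10, 20]})
toy.rename(columns={'Unnamed: 0': 'sub-id'})  # returns a renamed copy ...
print(toy.columns.tolist())                   # ... while the original columns are unchanged
# -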
Hence, we have to do it again, this overwriting the original `dataframe`: my_data_pandas=my_data_pandas.rename(columns={'Unnamed: 0': 'sub-id'}) my_data_pandas.columns # Pandas also allows the easy and fast `exploration` of our data: my_data_pandas.describe() # Unfortunately, not all `columns` are there. But why is that? We need to investigate the `columns` more closely, beginning with one that was included: type(my_data_pandas['sub-id']) # The data in the `columns` is a `pandas series`, not a `dataframe` or `numpy.ndarray`, again with its own functionality. Nevertheless, it was included in our `numerical summary`. Let's check the first missing `column`: type(my_data_pandas['Hair']) # Well, that's not very informative on its own, as it's also a `pandas series`, but was not included. Maybe the `data type` is the problem? Luckily, the `pandas dataframe` object comes with a helpful functionality: my_data_pandas.dtypes # And a bit more closely using `indexing`: type(my_data_pandas['Hair'][0]) # The data in `my_data_pandas['Hair']` has the `type str` and as you might have already guessed: it's rather hard to compute `summary statistics` from a `str`. We could re-code it, but given there are only two values, this might not be very useful for our current aim: my_data_pandas['Hair'].unique() # What about the other `missing columns`, e.g., `height`? type(my_data_pandas['Height'][0]) # The `data type` is yet again `str`, but how many values do we have? my_data_pandas['Height'].unique() # Hm, we can see that `height` contains `numerical values`. However, the `data type` is `str`. Here it can be useful to change the `data type`, using `pandas dataframe` object functionality: my_data_pandas['Height'].astype(float) # And we're getting another `error`. This time, it's related to a `missing data point`, which needs to be addressed before the `conversion` is possible. We can simply use the `replace` functionality to `replace` the `missing data point` to `NaN`, which should as allow to do the `conversion`: my_data_pandas['Height'] = my_data_pandas['Height'].replace('.', np.nan) my_data_pandas['Height'] = my_data_pandas['Height'].astype(float) # Let's check if the `column` is now included in the `summary`: my_data_pandas.describe() # Now, we can do the same for the `Weight` column, `concatenating` all necessary functions in one line: my_data_pandas['Weight'] = my_data_pandas['Weight'].replace('.', np.nan).astype(float) # Is `Weight` now included? my_data_pandas.describe() # We can also compute one statistical value for one column, for example the `mean` using `numpy`: np.mean(my_data_pandas['Weight']) # But the same is also possible using inbuilt `pandas data frame` functionality: my_data_pandas['Weight'].mean() # We can do the same for the standard deviation: np.std(my_data_pandas['Weight']) my_data_pandas['Weight'].std() # Here we can see, the same `functionality` can lead to different `results`, potentially based on `different implementations`. Thus, always make sure to check every part of your code and re-run it to see if you get the same outputs. As you can see here, using a `jupyter notebook` for your analyses, this is comparably straightforward. Additionally, you can document each step of your workflow, from data loading, inspection, changes, etc. . While you should of course always use `version control` on your data, the format we've explored nicely allows to redo your analyses (excluding the `computational reproducibility` and `numerical instability` aspect). 
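# A footnote on the `np.std` vs. `.std()` difference observed above: the two results
# disagree because of different defaults, not because of a bug. `np.std` uses
# `ddof=0` (population standard deviation), while the pandas method uses `ddof=1`
# (sample standard deviation); aligning `ddof` makes them agree. A tiny
# self-contained check:
# +
import numpy as np
import pandas as pd

values = np.array([1.0, 2.0, 3.0, 4.0])
print(np.std(values))           # ddof=0 -> approximately 1.118
print(pd.Series(values).std())  # ddof=1 -> approximately 1.291
print(np.std(values, ddof=1))   # matches the pandas result
# -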
On top of that, you can document the executed steps so that your future self and everyone else knows what's going on. Enough chit-chat, now that we've loaded and inspected our data, as well as fixed some errors it's time to do some statistics. To show you a few nice `packages` that are out there, we will run different `analyses` with different `packages`. We will explore `pingouin`, `scipy`, `statsmodels` and `seaborn`. # # # # # ### _Pingouin is an open-source statistical package written in Python 3 and based mostly on Pandas and NumPy._ # # # - ANOVAs: one- and two-ways, repeated measures, mixed, ancova # - Post-hocs tests and pairwise comparisons # - Robust correlations # - Partial correlation, repeated measures correlation and intraclass correlation # - Linear/logistic regression and mediation analysis # - Bayesian T-test and Pearson correlation # - Tests for sphericity, normality and homoscedasticity # - Effect sizes and power analysis # - Parametric/bootstrapped confidence intervals around an effect size or a correlation coefficient # - Circular statistics # - Plotting: Bland-Altman plot, Q-Q plot, etc... # # **Pingouin is designed for users who want simple yet exhaustive statistical functions.** # # # ##### **material scavenged from [10 minutes to Pingouin](https://pingouin-stats.org/index.html) and [the pingouin docs](https://pingouin-stats.org/api.html) # # Let's import the `package`: import pingouin as pg # ### Correlations # # "In the broadest sense correlation is any statistical association, though in common usage it most often refers to how close two variables are to having a linear relationship with each other" - [Wikipedia](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) # # `Pingouin` supports a variety of [measures of correlation](https://pingouin-stats.org/generated/pingouin.corr.html#pingouin.corr). When talking about `correlation`, we commonly mean the `Pearson correlation coefficient`, also referred to as `Pearson's r`: # # # # Computing `Pearson's r` using `pingouin` is as easy as: pearson_correlation = pg.corr(my_data_pandas['FSIQ'], my_data_pandas['VIQ']) display(pearson_correlation) cor_coeeficient = pearson_correlation['r'] # The output we got, is the `test summary`: # # - 'n' : Sample size (after NaN removal) # - 'outliers' : number of outliers (only for 'shepherd' or 'skipped') # - 'r' : Correlation coefficient # - 'CI95' : [95% parametric confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) # - 'r2' : [R-squared](https://en.wikipedia.org/wiki/Coefficient_of_determination) # - 'adj_r2' : [Adjusted R-squared](https://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2) # - 'p-val' : one or two tailed p-value # - 'BF10' : Bayes Factor of the alternative hypothesis (Pearson only) # - 'power' : achieved power of the test (= 1 - type II error) # What if we want to compute `pairwise correlations` between `columns` of a `dataframe`? With `pingouin` that's one line of code and we can even sort the results based on a `test statistic` of interest, e.g. `r2`: pg.pairwise_corr(my_data_pandas, columns=['FSIQ', 'VIQ', 'Weight']).sort_values(by=['r2'], ascending=False) # ### Before we calculate: `Testing statistical premises` # # Statistical procedures can be classfied into either [`parametric`](https://en.wikipedia.org/wiki/Parametric_statistics) or `non parametric` prcedures, which require different necessary preconditions to be met in order to show consistent/robust results. 
# Generally people assume that their data follows a gaussian distribution, which allows for parametric tests to be run. # Nevertheless it is essential to first test the distribution of your data to decide if the assumption of normally distributed data holds, if this is not the case we would have to switch to non parametric tests. # ### [ normality test](https://pingouin-stats.org/generated/pingouin.normality.html#pingouin.normality) # # Standard procedure for testing `normal distribution` tests if the `distribution` of your data `deviates significantly` from a `normal distribution`. # The function we're using returns the following information: # # - W : Test statistic # # - p : float # P-value # # - normal : boolean # True if data comes from a normal distribution. pg.normality(my_data_pandas['Height'], alpha=.05) # ### [Henze-Zirkler multivariate normality test](https://pingouin-stats.org/generated/pingouin.multivariate_normality.html#pingouin.multivariate_normality) # # The same procedure, but for [multivariate normal distributions](https://en.wikipedia.org/wiki/Multivariate_normal_distribution). pg.multivariate_normality(my_data_pandas[['Height', 'Weight','VIQ']], alpha=.05) # ### [Testing for homoscedasticity](https://pingouin-stats.org/generated/pingouin.homoscedasticity.html?highlight=homoscedasticity#pingouin.homoscedasticity) # # # "In statistics, a sequence or a vector of random variables is homoscedastic /ˌhoʊmoʊskəˈdæstɪk/ if all random variables in the sequence or vector have the same finite variance." - Wikipedia # # returns: # # equal_var : boolean True if data have equal variance. # # p : float P-value. # # Note: This function first tests if the data are normally distributed using the Shapiro-Wilk test. If yes, then the homogeneity of variances is measured using the Bartlett test. If the data are not normally distributed, the Levene test, which is less sensitive to departure from normality, is used. pg.homoscedasticity(my_data_pandas[['VIQ', 'FSIQ']], alpha=.05) # ## Parametric tests # ## Student's t-test: the simplest statistical test # # ### 1-sample t-test: testing the value of a population mean # # tests if the population mean of data is likely to be equal to a given value (technically if observations are drawn from a Gaussian distributions of given population mean). # # # `pingouin.ttest` returns the T_statistic, the p-value, the [degrees of freedom](https://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics), the [Cohen d effect size](https://en.wikiversity.org/wiki/Cohen%27s_d), the achieved [power](https://en.wikipedia.org/wiki/Power_(statistics%29) of the test ( = 1 - type II error (beta) = [P(Reject H0|H1 is true)](https://deliveroo.engineering/2018/12/07/monte-carlo-power-analysis.html)), and the [Bayes Factor](https://en.wikipedia.org/wiki/Bayes_factor) of the alternative hypothesis # # # # pg.ttest(my_data_pandas['VIQ'],0) # ### 2-sample t-test: testing for difference across populations # # We have seen above that the mean VIQ in the dark hair and light hair populations # were different. To test if this is significant, we do a 2-sample t-test: light_viq = my_data_pandas[my_data_pandas['Hair'] == 'light']['VIQ'] dark_viq = my_data_pandas[my_data_pandas['Hair'] == 'dark']['VIQ'] pg.ttest(light_viq, dark_viq) # ### Plot achieved power of a paired T-test # # Plot the curve of achieved power given the effect size (Cohen d) and the sample size of a paired T-test. 
# + import matplotlib.pyplot as plt import seaborn as sns sns.set(style='ticks', context='notebook', font_scale=1.2) d = 0.5 # Fixed effect size n = np.arange(5, 80, 5) # Incrementing sample size # Compute the achieved power pwr = pg.power_ttest(d=d, n=n, contrast='paired', tail='two-sided') # Start the plot plt.plot(n, pwr, 'ko-.') plt.axhline(0.8, color='r', ls=':') plt.xlabel('Sample size') plt.ylabel('Power (1 - type II error)') plt.title('Achieved power of a paired T-test') sns.despine() # - # ### Non parametric tests: # # # Unlike the parametric test these do not require the assumption of normal distributions. # # "`Mann-Whitney U Test` (= Wilcoxon rank-sum test). It is the non-parametric version of the independent T-test. # Mwu tests the hypothesis that data in x and y are samples from continuous distributions with equal medians. The test assumes that x and y are independent. This test corrects for ties and by default uses a continuity correction." - [mwu-function](https://pingouin-stats.org/generated/pingouin.mwu.html#pingouin.mwu) # # Test summary # # - 'W-val' : W-value # - 'p-val' : p-value # - 'RBC' : matched pairs rank-biserial correlation (effect size) # - 'CLES' : common language effect size pg.mwu(light_viq, dark_viq) # "`Wilcoxon signed-rank test` is the non-parametric version of the paired T-test. # # The Wilcoxon signed-rank test tests the null hypothesis that two related paired samples come from the same distribution. A continuity correction is applied by default." - [wilcoxon - func](https://pingouin-stats.org/generated/pingouin.wilcoxon.html#pingouin.wilcoxon) # pg.wilcoxon(light_viq, dark_viq, tail='two-sided') # ### `scipy.stats` - Hypothesis testing: comparing two groups # # For simple [statistical tests](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing), it is also possible to use the `scipy.stats` sub-module of [`scipy`](http://docs.scipy.org/doc/). from scipy import stats # ### 1-sample t-test: testing the value of a population mean # # `scipy.stats.ttest_1samp` tests if the population mean of data is likely to be equal to a given value (technically if observations are drawn from a Gaussian distributions of given population mean). It returns the [T statistic](https://en.wikipedia.org/wiki/Student%27s_t-test), and the [p-value](https://en.wikipedia.org/wiki/P-value) (see the function's help): stats.ttest_1samp(my_data_pandas['VIQ'], 100) # With a p-value of 10^-28 we can claim that the population mean for the IQ (VIQ measure) is not 0. # ### 2-sample t-test: testing for difference across populations # # We have seen above that the mean VIQ in the dark hair and light hair populations # were different. To test if this is significant, we do a 2-sample t-test # with `scipy.stats.ttest_ind`: light_viq = my_data_pandas[my_data_pandas['Hair'] == 'light']['VIQ'] dark_viq = my_data_pandas[my_data_pandas['Hair'] == 'dark']['VIQ'] stats.ttest_ind(light_viq, dark_viq) # ## Paired tests: repeated measurements on the same indivuals # # PIQ, VIQ, and FSIQ give 3 measures of IQ. Let us test if FISQ and PIQ are significantly different. We can use a 2 sample test: stats.ttest_ind(my_data_pandas['FSIQ'], my_data_pandas['PIQ']) # The problem with this approach is that it forgets that there are links # between observations: FSIQ and PIQ are measured on the same individuals. 
# # Thus the variance due to inter-subject variability is confounding, and # can be removed, using a "paired test", or ["repeated measures test"](https://en.wikipedia.org/wiki/Repeated_measures_design): stats.ttest_rel(my_data_pandas['FSIQ'], my_data_pandas['PIQ']) # This is equivalent to a 1-sample test on the difference:: stats.ttest_1samp(my_data_pandas['FSIQ'] - my_data_pandas['PIQ'], 0) # T-tests assume Gaussian errors. We can use a [Wilcoxon signed-rank test](https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test), that relaxes this assumption: stats.wilcoxon(my_data_pandas['FSIQ'], my_data_pandas['PIQ']) # **Note:** The corresponding test in the non paired case is the [Mann–Whitney U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U), `scipy.stats.mannwhitneyu`. # + [markdown] solution2="hidden" solution2_first=true # ### Exercise 2 # # * Test the difference between weights in people with dark and light hair. # * Use non-parametric statistics to test the difference between VIQ in people with dark and light hair. # + solution2="hidden" light_weight = my_data_pandas[my_data_pandas['Hair'] == 'light']['Weight'] dark_weight = my_data_pandas[my_data_pandas['Hair'] == 'dark']['Weight'] stats.ttest_ind(light_weight, dark_weight, nan_policy='omit') # + solution2="hidden" stats.mannwhitneyu(light_viq, dark_viq) # + [markdown] solution2="hidden" # **Conclusion**: we find that the data does not support the hypothesis that people with dark and light hair have different VIQ. # + # Create solution here # - # # `statsmodels` - use "formulas" to specify statistical models in Python # # Use `statsmodels` to perform linear models, multiple factors or analysis of variance. # # # ## A simple linear regression # # Given two set of observations, `x` and `y`, we want to test the hypothesis that `y` is a linear function of `x`. # # In other terms: # # $y = x * coef + intercept + e$ # # where $e$ is observation noise. We will use the [statsmodels](http://statsmodels.sourceforge.net) module to: # # 1. Fit a linear model. We will use the simplest strategy, [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS). # 2. Test that $coef$ is non zero. # # First, we generate simulated data according to the model. Then we specify an OLS model and fit it: from statsmodels.formula.api import ols model = ols("FSIQ ~ VIQ", my_data_pandas).fit() # **Note:** For more about "formulas" for statistics in Python, see the [statsmodels documentation](http://statsmodels.sourceforge.net/stable/example_formulas.html). # We can inspect the various statistics derived from the fit: print(model.summary()) # ### Terminology # # Statsmodels uses a statistical terminology: the `y` variable in statsmodels is called *endogenous* while the `x` variable is called *exogenous*. This is discussed in more detail [here](http://statsmodels.sourceforge.net/devel/endog_exog.html). To simplify, `y` (endogenous) is the value you are trying to predict, while `x` (exogenous) represents the features you are using to make the prediction. # + [markdown] solution2="hidden" solution2_first=true # ### Exercise 3 # # Retrieve the estimated parameters from the model above. # **Hint**: use tab-completion to find the relevant attribute. 
# + solution2="hidden" model.params # + # Create solution here # - # ## Categorical variables: comparing groups or multiple categories model = ols("VIQ ~ Hair + 1", my_data_pandas).fit() print(model.summary()) # ### Tips on specifying model # # ***Forcing categorical*** - the 'Hair' is automatically detected as a categorical variable, and thus each of its different values is treated as different entities. # # An integer column can be forced to be treated as categorical using: # # ```python # model = ols('VIQ ~ C(Hair)', my_data_pandas).fit() # ``` # # ***Intercept***: We can remove the intercept using `- 1` in the formula, or force the use of an intercept using `+ 1`. # ### Link to t-tests between different FSIQ and PIQ # # To compare different types of IQ, we need to create a "long-form" table, listing IQs, where the type of IQ is indicated by a categorical variable: data_fisq = pd.DataFrame({'iq': my_data_pandas['FSIQ'], 'type': 'fsiq'}) data_piq = pd.DataFrame({'iq': my_data_pandas['PIQ'], 'type': 'piq'}) data_long = pd.concat((data_fisq, data_piq)) print(data_long[::8]) model = ols("iq ~ type", data_long).fit() print(model.summary()) # We can see that we retrieve the same values for t-test and corresponding p-values for the effect of the type of IQ than the previous t-test: stats.ttest_ind(my_data_pandas['FSIQ'], my_data_pandas['PIQ']) # ## Multiple Regression: including multiple factors # # Consider a linear model explaining a variable `z` (the dependent # variable) with 2 variables `x` and `y`: # # $z = x \, c_1 + y \, c_2 + i + e$ # # Such a model can be seen in 3D as fitting a plane to a cloud of (`x`, # `y`, `z`) points (see the following figure). # + from mpl_toolkits.mplot3d import Axes3D x = np.linspace(-5, 5, 21) # We generate a 2D grid X, Y = np.meshgrid(x, x) # To get reproducable values, provide a seed value np.random.seed(1) # Z is the elevation of this 2D grid Z = -5 + 3*X - 0.5*Y + 8 * np.random.normal(size=X.shape) # Plot the data fig = plt.figure() ax = fig.gca(projection='3d') surf = ax.plot_trisurf(my_data_pandas['VIQ'].to_numpy(), my_data_pandas['PIQ'].to_numpy(), my_data_pandas['FSIQ'].to_numpy(), cmap=plt.cm.plasma) ax.view_init(20, -120) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') # - model = ols('FSIQ ~ VIQ + PIQ', my_data_pandas).fit() print(model.summary()) # ## Post-hoc hypothesis testing: analysis of variance (ANOVA) # # In the above iris example, we wish to test if the petal length is different between versicolor and virginica, after removing the effect of sepal width. This can be formulated as testing the difference between the coefficient associated to versicolor and virginica in the linear model estimated above (it is an Analysis of Variance, [ANOVA](https://en.wikipedia.org/wiki/Analysis_of_variance). For this, we write a **vector of 'contrast'** on the parameters estimated with an [F-test](https://en.wikipedia.org/wiki/F-test): print(model.f_test([0, 1, -1])) # Is this difference significant? # + [markdown] solution2="hidden" solution2_first=true # ### Exercise 4 # # Going back to the brain size + IQ data, test if the VIQ of people with dark and light hair are different after removing the effect of brain size, height, and weight. # + solution2="hidden" data = pd.read_csv('data/brain_size.csv', sep=';', na_values=".") model = ols("VIQ ~ Hair + Height + Weight + MRI_Count", data).fit() print(model.summary()) # + # Create solution here # - # ### Throwback to pingouin and pandas # # Remember `pingouin`? 
# As briefly outlined, it can also compute `ANOVA`s and other types of models fairly easily. For example, let's compare `VIQ` between `light`- and `dark`-haired participants.

pg.anova(dv='VIQ', between='Hair', data=my_data_pandas, detailed=True)

# It gets even better: `pandas` actually supports some `pingouin` functions directly as built-in `dataframe` methods:

my_data_pandas.anova(dv='VIQ', between='Hair', detailed=True)

# # `seaborn` - use visualization for statistical exploration
#
# [Seaborn](http://stanford.edu/~mwaskom/software/seaborn/) combines simple statistical fits with plotting on `pandas dataframes`, `numpy arrays`, etc.

# ## Pairplot: scatter matrices
#
# We can easily get an intuition about the interactions between continuous variables using `seaborn.pairplot` to display a scatter matrix:

import seaborn
seaborn.set()
seaborn.pairplot(my_data_pandas, vars=['FSIQ', 'PIQ', 'VIQ'], kind='reg')

# Categorical variables can be plotted as the hue:

seaborn.pairplot(my_data_pandas, vars=['FSIQ', 'VIQ', 'PIQ'], kind='reg', hue='Hair')

# ## lmplot: plotting a univariate regression
#
# A regression capturing the relation between one variable and another, e.g. `FSIQ` and `VIQ`, can be plotted using `seaborn.lmplot`:

seaborn.lmplot(y='FSIQ', x='VIQ', data=my_data_pandas)

# ### Robust regression
#
# In the above plot, there seem to be a couple of data points to the right that fall outside of the main cloud. They might be outliers that are not representative of the population but are nevertheless driving the regression.
#
# To compute a regression that is less sensitive to outliers, one must use a [robust model](https://en.wikipedia.org/wiki/Robust_statistics). This is done in seaborn by passing ``robust=True`` to the plotting functions, or in `statsmodels` by replacing the OLS with a "Robust Linear Model", `statsmodels.formula.api.rlm`.

# # Testing for interactions
#
# Does `FSIQ` increase more with `PIQ` for people with dark hair than for people with light hair?

seaborn.lmplot(y='FSIQ', x='PIQ', hue='Hair', data=my_data_pandas)

# The plot above is made of two different fits. We need to formulate a single model that tests for a variance of slope across the population. This is done via an ["interaction"](http://statsmodels.sourceforge.net/devel/example_formulas.html#multiplicative-interactions).

from statsmodels.formula.api import ols
result = ols(formula='FSIQ ~ PIQ + Hair + PIQ * Hair', data=my_data_pandas).fit()
print(result.summary())

# # Take home messages
#
# * Hypothesis testing and p-values give you the **significance** of an effect / difference
#
# * **Formulas** (with categorical variables) enable you to express rich links in your data
#
# * **Visualizing** your data and simple model fits matters!
#
# * **Conditioning** (adding factors that can explain all or part of the variation) is an important modeling aspect that changes the interpretation.
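# As a compact recap of these take-home messages, here is a fully self-contained
# sketch on simulated data (hypothetical variables x, group and y, so it runs
# without `brain_size.csv`): we generate two groups with different slopes, then use
# a formula with an interaction term to recover that difference.
# +
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 100
group = np.repeat(['dark', 'light'], n // 2)
x = rng.normal(size=n)
slope = np.where(group == 'dark', 2.0, 0.5)   # the interaction we hope to recover
y = 1.0 + slope * x + rng.normal(scale=0.5, size=n)
sim = pd.DataFrame({'x': x, 'group': group, 'y': y})

model = ols('y ~ x * group', data=sim).fit()  # expands to x + group + x:group
print(model.summary())                        # the x:group term should be clearly significant
# -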
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # %matplotlib inline import ipyplot import matplotlib.pyplot as plt # # Training SSD # ## Imports and defaults # + import json import os import sys import cv2 import numpy as np from mmdet.apis import init_detector, inference_detector import mmcv log_filepath = "checkpoints/20210924_224805.log.json" mmdet_ssd_config_dir = "/home/phd/09/igor/mmdetection/configs/ssd/" mmdet_ssd_config_filename = "ssd512_sportradar.py" frames_dir = os.path.join("data","frames") frame_ids = [0,500,600*25] # - # ## MMDetection # ```MMDetection is an open source object detection toolbox based on PyTorch. It is a part of the OpenMMLab project.``` # [https://github.com/open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection) # ## Object detector for scoreboard detection (SSD) # # ## SSD Config file # ``` # _base_ = 'ssd300_coco.py' # input_size = 512 # model = dict( # backbone=dict(input_size=input_size), # bbox_head=dict( # in_channels=(512, 1024, 512, 256, 256, 256, 256), # anchor_generator=dict( # type='SSDAnchorGenerator', # scale_major=False, # input_size=input_size, # basesize_ratio_range=(0.1, 0.9), # strides=[8, 16, 32, 64, 128, 256, 512], # ratios=[[2], [2, 3], [2, 3], [2, 3], [2, 3], [2], [2]]))) # # dataset settings # dataset_type = 'SportradarDataset' # data_root = '/tmp2/igor/tennis-SR/data/' # img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) # train_pipeline = [ # dict(type='LoadImageFromFile', to_float32=True), # dict(type='LoadAnnotations', with_bbox=True), # dict( # type='PhotoMetricDistortion', # brightness_delta=32, # contrast_range=(0.5, 1.5), # saturation_range=(0.5, 1.5), # hue_delta=18), # dict( # type='Expand', # mean=img_norm_cfg['mean'], # to_rgb=img_norm_cfg['to_rgb'], # ratio_range=(1, 4)), # # dict( # # type='MinIoURandomCrop', # # min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), # # min_crop_size=0.3), # dict(type='Resize', img_scale=(512, 512), keep_ratio=False), # dict(type='Normalize', **img_norm_cfg), # dict(type='RandomFlip', flip_ratio=0.), # dict(type='DefaultFormatBundle'), # dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), # ] # test_pipeline = [ # dict(type='LoadImageFromFile'), # dict( # type='MultiScaleFlipAug', # img_scale=(512, 512), # flip=False, # transforms=[ # dict(type='Resize', keep_ratio=False), # dict(type='Normalize', **img_norm_cfg), # dict(type='ImageToTensor', keys=['img']), # dict(type='Collect', keys=['img']), # ]) # ] # data = dict( # samples_per_gpu=16, # workers_per_gpu=3, # train=dict( # _delete_=True, # type='RepeatDataset', # times=1, # dataset=dict( # type=dataset_type, # ann_file=data_root + 'top-100-shots-rallies-2018-atp-season-scoreboard-annotations_coco_train.json', # img_prefix=data_root + 'frames/', # pipeline=train_pipeline)), # val=dict(pipeline=test_pipeline, # type=dataset_type, # img_prefix=data_root + 'frames/', # ann_file=data_root + 'top-100-shots-rallies-2018-atp-season-scoreboard-annotations_coco_val.json'), # test=dict(pipeline=test_pipeline, # type=dataset_type, # img_prefix=data_root + 'frames/', # ann_file=data_root + 'top-100-shots-rallies-2018-atp-season-scoreboard-annotations_coco_test.json')) # custom_hooks = [] # # optimizer # optimizer = dict(type='SGD', lr=2e-3, momentum=0.9, weight_decay=5e-4) # optimizer_config = dict(_delete_=True) # ``` # ## Read 
results from JSON file and plot with open(log_filepath, "r") as f: log_txt = "["+",".join(f.read().splitlines())+"]" log = json.loads(log_txt) train_data = {key : [] for key in ["epoch","iter","lr","loss_cls", "loss_bbox", "loss"]} val_data = {key : [] for key in ["epoch","iter","bbox_mAP", "bbox_mAP_50", "bbox_mAP_75"]} for entry in log: if "mode" not in entry.keys(): continue mode = entry["mode"] if mode != "train" and mode != "val": continue if mode == "train": for key in train_data.keys(): if key not in entry.keys(): raise Exception(f"{key} expected in the log entry {entry}") train_data[key].append(entry[key]) if mode == "val": for key in val_data.keys(): if key not in entry.keys(): raise Exception(f"{key} expected in the log entry {entry}") val_data[key].append(entry[key]) train_iter_n = val_data["iter"][0] train_iter = [(e-1)*train_iter_n+i for e, i in zip(train_data["epoch"], train_data["iter"])] val_iter = [(e-1)*train_iter_n+i for e,i in zip(val_data["epoch"], val_data["iter"])] plt.clf() for key in ["loss_cls", "loss_bbox", "loss"]: x = train_data[key] plt.plot(train_iter, x, label=key) plt.legend() plt.ylim(0, 5) plt.show() plt.clf() for key in ["", "", ""]: x = val_data[key] plt.plot(val_data["epoch"], x, label=key) plt.legend() plt.ylim(0.95, 1) plt.show() # ## Load checkpoint config_file = os.path.join(mmdet_ssd_config_dir, mmdet_ssd_config_filename) checkpoint_file = os.path.join("checkpoints","epoch_9.pth") # build the model from a config file and a checkpoint file model = init_detector(config_file, checkpoint_file, device='cuda:0') imgs = [cv2.imread(os.path.join(frames_dir,f"{i}.jpg"))[:,:,::-1] for i in frame_ids] results = inference_detector(model, imgs) scoreboards = [] for idx, (img, result) in enumerate(zip(imgs, results)): x1, y1, x2, y2, score = result[0][np.argmax(result[0][:,4])] x1, y1, x2, y2 = [int(i) for i in [x1, y1, x2, y2]] scoreboards.append(img[y1:y2,x1:x2]) ipyplot.plot_images(scoreboards, max_images=3, img_width=300) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from PIL import Image import os def resize_image(input_image_path,output_image_path,size): original_image = Image.open(input_image_path) width, height = resized_image.size if width": v for k,v in for_ops.items()} lambda_calculus_ops = { "": 0, "": 1, "": 2, "": 3, "": 4, "": 5, "": 6, "": 7, "": 8, "": 9, "": 10, "": 11, "": 12, "": 13, "": 14, "": 15, "": 16, "": 17, "": 18, "": 19, "": 20, "": 21, "": 22, "": 23, "": 24, "": 25, "": 26, "": 27, "": 28, "": 29, " ": 30, "": 31 } # + code_folding=[] hide_input=false input_eos_token = False input_as_seq = False use_embedding = True eos_bonus = 1 if input_eos_token and input_as_seq else 0 long_base_case = True binarize = True # + code_folding=[] hide_input=false is_lambda_calculus = False M, R = 5, 3 N = 11 for_anc_dset = TreeANCDataset("ANC/Easy-arbitraryForListWithOutput.json", is_lambda_calculus, num_ints = M, binarize=binarize, input_eos_token=input_eos_token, use_embedding=use_embedding, long_base_case=long_base_case, input_as_seq=input_as_seq, cuda=True) # + code_folding=[0] def reset_all_parameters_uniform(model, stdev): for param in model.parameters(): nn.init.uniform(param, -stdev, stdev) # + embedding_size = 200 hidden_size = 200 num_layers = 1 alignment_size = 50 align_type = 1 encoder_input_size = num_vars + num_ints + len(for_ops.keys()) + eos_bonus if 
input_as_seq: encoder = SequenceEncoder(encoder_input_size, hidden_size, num_layers, attention=True, use_embedding=use_embedding) else: encoder = TreeEncoder(encoder_input_size, hidden_size, num_layers, [1, 2], attention=True, use_embedding=use_embedding) decoder = MultilayerLSTMCell(N + 3*R + hidden_size, hidden_size, num_layers) program_model = TreeToSequenceAttentionANC(encoder, decoder, hidden_size, embedding_size, M, R, alignment_size=alignment_size, align_type=align_type, correctness_weight=5, halting_weight=1, confidence_weight=5, efficiency_weight=1, diversity_weight=0, mix_probabilities=True, predict_registers=True) reset_all_parameters_uniform(program_model, 0.1) encoder.initialize_forget_bias(3) decoder.initialize_forget_bias(3) # - program_model = program_model.cuda() # + code_folding=[0] optimizer = optim.Adam(program_model.parameters(), lr=0.005) lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, verbose=True, patience=20000, factor=0.8) # - training.train_model_tree_to_anc(program_model, itertools.repeat(for_anc_dset[1], 100000), optimizer, lr_scheduler=lr_scheduler, num_epochs=1, batch_size=1, plateau_lr=True, print_every=200) for prog, target in for_anc_dset[0:5]: print(decode_tokens(tree_to_list(prog), 10, M, for_ops)) controller = program_model.forward_prediction(prog) input_memory = target[0][0] correct_memory = target[1][0] prediction_memory, _ = controller.forward_prediction([input_memory]) printProgram(controller, 0.9) print(correct_memory[0]) print(prediction_memory[0]) mem_diff = correct_memory[0] - prediction_memory[0].data print(torch.sum(mem_diff * mem_diff)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Benchmarking Behavior Planners in BARK # # This notebook the benchmarking workflow of BARK. # # Systematically benchmarking behavior consists of # 1. A reproducable set of scenarios (we call it **BenchmarkDatabase**) # 2. Metrics, which you use to study the performance (we call it **Evaluators**) # 3. The behavior model(s) under test # # Our **BenchmarkRunner** can then run the benchmark and produce the results. # + import os import matplotlib.pyplot as plt from IPython.display import Video from benchmark_database.load.benchmark_database import BenchmarkDatabase from benchmark_database.serialization.database_serializer import DatabaseSerializer from bark.benchmark.benchmark_runner import BenchmarkRunner, BenchmarkConfig, BenchmarkResult from bark.benchmark.benchmark_analyzer import BenchmarkAnalyzer from bark.runtime.commons.parameters import ParameterServer from bark.runtime.viewer.matplotlib_viewer import MPViewer from bark.runtime.viewer.video_renderer import VideoRenderer from bark.core.models.behavior import BehaviorIDMClassic, BehaviorConstantAcceleration # - # # Database # The benchmark database provides a reproducable set of scenarios. # A scenario get's created by a ScenarioGenerator (we have a couple of them). The scenarios are serialized into binary files (ending `.bark_scenarios`) and packed together with the map file and the parameter files into a `.zip`-archive. We call this zipped archive a relase, which can be published at Github, or processed locally. # # ## We will first start with the DatabaseSerializer # # The **DatabaseSerializer** recursively serializes all scenario param files sets # within a folder. 
# # We will process the database directory from Github.

# +
dbs = DatabaseSerializer(test_scenarios=1, test_world_steps=5, num_serialize_scenarios=10)
dbs.process("../../../benchmark_database/data/database1")
local_release_filename = dbs.release(version="tutorial")

print('Filename:', local_release_filename)
# -

# Then reload to test correct parsing

# +
db = BenchmarkDatabase(database_root=local_release_filename)
scenario_generation, _ = db.get_scenario_generator(scenario_set_id=0)

for scenario_generation, _ in db:
    print('Scenario: ', scenario_generation)
# -

# ## Evaluators
#
# Evaluators calculate a boolean, integer or real-valued metric based on the current simulation world state.
#
# The current evaluators available in BARK are:
# - StepCount: returns the step count the scenario is at.
# - GoalReached: checks if a controlled agent's Goal Definition is satisfied.
# - DrivableArea: checks whether the agent is inside its RoadCorridor.
# - Collision(ControlledAgent): checks whether any agent or only the currently controlled agent collided.
#
# Let's now map those evaluators to some symbols that are easier to interpret.

evaluators = {"success" : "EvaluatorGoalReached", \
              "collision" : "EvaluatorCollisionEgoAgent", \
              "max_steps": "EvaluatorStepCount"}

# We will now define the terminal conditions of our benchmark. We state that a scenario ends if
# - a collision occurred,
# - the number of time steps exceeds the limit, or
# - the definition of success becomes true (which we defined as reaching the goal, using EvaluatorGoalReached).

terminal_when = {"collision" :lambda x: x, \
                 "max_steps": lambda x : x>10, \
                 "success" : lambda x: x}

# # Behaviors Under Test
# Let's now define the behaviors we want to compare. We will compare IDM with Constant Velocity, but we could also compare two different parameter sets for IDM.

params = ParameterServer()
behaviors_tested = {"IDM": BehaviorIDMClassic(params), "Const" : BehaviorConstantAcceleration(params)}

# # Benchmark Runner
#
# The BenchmarkRunner evaluates behavior models with different parameter configurations over the entire benchmarking database.

# +
benchmark_runner = BenchmarkRunner(benchmark_database=db,\
                                   evaluators=evaluators,\
                                   terminal_when=terminal_when,\
                                   behaviors=behaviors_tested,\
                                   log_eval_avg_every=10)

result = benchmark_runner.run(maintain_history=True)
# -

# We will now dump the results, to allow them to be postprocessed later.

result.dump(os.path.join("./benchmark_results.pickle"))

# # Benchmark Results
#
# Benchmark results contain
# - the evaluated metrics of each simulation run, as a pandas DataFrame
# - the world state of every simulation (optional)

result_loaded = BenchmarkResult.load(os.path.join("./benchmark_results.pickle"))

# We will now first analyze the DataFrame.

# +
df = result_loaded.get_data_frame()
df.head()
# -

# # Benchmark Analyzer
#
# The benchmark analyzer filters the results to visualize what really happened. These filters can be set via a dictionary with lambda functions specifying the evaluation criteria which must be fulfilled.
#
# A config is basically a simulation run, where step size, controlled agent, terminal conditions and metrics have been defined.
#
# Let us first load the results into the BenchmarkAnalyzer and then filter the results.
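# Before calling `find_configs` in the next cell, here is a minimal plain-pandas sketch of what such a
# criteria dictionary means: a config (row) qualifies only if every lambda returns True for its column.
# This is only an illustration of the filtering idea, not BARK's implementation, and it assumes the
# results DataFrame `df` from above has `behavior` and `success` columns (as the criteria below suggest).

# +
import pandas as pd

criteria_sketch = {"behavior": lambda x: x == "IDM", "success": lambda x: not x}

mask = pd.Series(True, index=df.index)
for column, predicate in criteria_sketch.items():
    # apply each lambda to its column and keep only rows passing all of them
    mask = mask & df[column].apply(predicate).astype(bool)

df[mask].head()
# -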
# + analyzer = BenchmarkAnalyzer(benchmark_result=result_loaded) configs_idm = analyzer.find_configs(criteria={"behavior": lambda x: x=="IDM", "success": lambda x : not x}) configs_const = analyzer.find_configs(criteria={"behavior": lambda x: x=="Const", "success": lambda x : not x}) # - # We will now create a video from them. We will use Matplotlib Viewer and render everything to a video. # + sim_step_time=0.2 params2 = ParameterServer() fig = plt.figure(figsize=[10, 10]) viewer = MPViewer(params=params2, y_length = 80, enforce_y_length=True, enforce_x_length=False,\ follow_agent_id=True, axis=fig.gca()) video_exporter = VideoRenderer(renderer=viewer, world_step_time=sim_step_time) analyzer.visualize(viewer = video_exporter, real_time_factor = 1, configs_idx_list=configs_idm[1:3], \ fontsize=6) video_exporter.export_video(filename="/tmp/tutorial_video") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # + df_prof_Part2_step1 = pd.read_csv('player_info_Step2_part1.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_prof_Part2_step2 = pd.read_csv('player_info_Step2_part2.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_prof_Part2_step3 = pd.read_csv('player_info_Step2_part3.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_prof_Part1 = pd.read_csv('player_info_100k.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') # - df_all_prof = pd.concat([df_prof_Part1,df_prof_Part2_step3,df_prof_Part2_step2,df_prof_Part2_step1],sort=False) df_all_prof.head() df_all_prof.info() df_all_prof['steamid'].nunique() df_all_prof.to_csv('player_info_200k.csv') # + df_friend_Part2_step1 = pd.read_csv('player_friend_info_Step2_part1.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_friend_Part2_step2 = pd.read_csv('player_friend_info_Step2_part2.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_friend_Part2_step3 = pd.read_csv('player_friend_info_Step2_part3.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_friend_Part1 = pd.read_csv('player_friend_info_100k.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') # + df_all_friend = pd.concat([df_friend_Part1,df_friend_Part2_step3,df_friend_Part2_step2,df_friend_Part2_step1],sort=False) # - df_all_friend.head() df_all_friend.info() df_all_friend['steamid'].nunique() df_all_friend['steamid_orig'].nunique() df_all_friend.to_csv('player_friend_info_200k.csv') # + df_game_Part2_step1 = pd.read_csv('player_game_info_Step2_part1.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_game_Part2_step2 = pd.read_csv('player_game_info_Step2_part2.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_game_Part2_step3 = pd.read_csv('player_game_info_Step2_part3.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') df_game_Part1 = pd.read_csv('player_game_info_100k.csv',dtype={'steamid': str}).drop('Unnamed: 0',axis='columns') # + df_all_game = pd.concat([df_game_Part1,df_game_Part2_step3,df_game_Part2_step2,df_game_Part2_step1],sort=False) # - df_all_game.head() df_all_game.info() df_all_game['steamid'].nunique() df_all_game.to_csv('player_game_info_200k.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # ---

# ## Sunday, June 14, 2020
# ### leetCode - Merge Sorted Array (Python)
# ### Problem: https://leetcode.com/problems/merge-sorted-array/
# ### Blog post: https://somjang.tistory.com/entry/leetCode-88-Merge-Sorted-Array-Python

# ### First attempt

from typing import List


class Solution:
    def merge(self, nums1: List[int], m: int, nums2: List[int], n: int) -> None:
        """
        Do not return anything, modify nums1 in-place instead.
        """
        del nums1[m:]
        nums1 += nums2[0:n]
        nums1.sort()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: conda_mxnet_p27
#     language: python
#     name: conda_mxnet_p27
# ---

# # Gluon CIFAR-10 Trained in Local Mode
# _**ResNet model in Gluon trained locally in a notebook instance**_
#
# ---
#
# ---
#
# _This notebook was created and tested on an ml.p3.8xlarge notebook instance._
#
# ## Setup
#
# Import libraries and set IAM role ARN.

# +
import sagemaker
from sagemaker.mxnet import MXNet

sagemaker_session = sagemaker.Session()

role = sagemaker.get_execution_role()
# -

# Install pre-requisites for local training.

# !/bin/bash setup.sh

# ---
#
# ## Data
#
# We use the helper scripts to download CIFAR-10 training data and sample images.

from cifar10_utils import download_training_data
download_training_data()

# We use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the location -- we will use this later when we start the training job.
#
# Even though we are training within our notebook instance, we'll continue to use the S3 data location since it will allow us to easily transition to training in SageMaker's managed environment.

inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-gluon-cifar10')
print('input spec (in this case, just an S3 path): {}'.format(inputs))

# ---
#
# ## Script
#
# We need to provide a training script that can run on the SageMaker platform. When SageMaker calls your function, it will pass in arguments that describe the training environment. Check the script below to see how this works.
#
# The network itself is a pre-built version contained in the [Gluon Model Zoo](https://mxnet.incubator.apache.org/versions/master/api/python/gluon/model_zoo.html).

# !cat 'cifar10.py'

# ---
#
# ## Train (Local Mode)
#
# The ```MXNet``` estimator will create our training job. To switch from training in SageMaker's managed environment to training within a notebook instance, just set `train_instance_type` to `local_gpu`.

m = MXNet('cifar10.py',
          role=role,
          train_instance_count=1,
          train_instance_type='local_gpu',
          hyperparameters={'batch_size': 1024,
                           'epochs': 50,
                           'learning_rate': 0.1,
                           'momentum': 0.9})

# After we've constructed our `MXNet` object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem, so our training script can simply read the data from disk.

m.fit(inputs)

# ---
#
# ## Host
#
# After training, we use the MXNet estimator object to deploy an endpoint. Because we trained locally, we'll also deploy the endpoint locally. The predictor object returned by `deploy` lets us call the endpoint and perform inference on our sample images.
predictor = m.deploy(initial_instance_count=1, instance_type='local_gpu') # ### Evaluate # # We'll use these CIFAR-10 sample images to test the service: # # # # # # # # # # # # # # + # load the CIFAR-10 samples, and convert them into format we can use with the prediction endpoint from cifar10_utils import read_images filenames = ['images/airplane1.png', 'images/automobile1.png', 'images/bird1.png', 'images/cat1.png', 'images/deer1.png', 'images/dog1.png', 'images/frog1.png', 'images/horse1.png', 'images/ship1.png', 'images/truck1.png'] image_data = read_images(filenames) # - # The predictor runs inference on our input data and returns the predicted class label (as a float value, so we convert to int for display). for i, img in enumerate(image_data): response = predictor.predict(img) print('image {}: class: {}'.format(i, int(response))) # --- # # ## Cleanup # # After you have finished with this example, remember to delete the prediction endpoint. Only one local endpoint can be running at a time. m.delete_endpoint() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="BZh1ahGxkeqk" colab_type="code" colab={} import torch # + id="PVl2XLOZklbf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="ff6d9313-bf63-45b2-d9e5-4d56cdbe789b" x = torch.arange(18).view(3,2,3) x # + id="hyDXBOG0ktAn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4d51eac9-e172-43d7-e32d-5bdfb6699a00" x[1,1,1] # + id="vKl6O-qCk31q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f23c5d41-14d3-4409-9592-60b6dd891858" # Slicing 3D tensors x[1, 0:2, 0:3] # + id="QMxkqA4jlGlq" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Scrapen der Daten import requests from bs4 import BeautifulSoup import pandas as pd # Test für die Jahre 2016, 2017: # + jahre = list(range(2016,2018)) for jahr in jahre: url = "https://www.atpworldtour.com/en/players/roger-federer/f324/player-activity?year=" url = url+str(jahr) print(url) r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') # - soup.text # Turniere, Daten und Gegner: # + federer = [] jahre = list(range(2016,2018)) for jahr in jahre: url = "https://www.atpworldtour.com/en/players/roger-federer/f324/player-activity?year=" url = url+str(jahr) print(url) r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') turniere = soup.find('div', {'class':"activity-tournament-table"}).find_all('a', {'class':'tourney-title'}) daten = soup.find('div', {'class':"activity-tournament-table"}).find_all('span', {'class':'tourney-dates'}) gegner = soup.find('div', {'class':"day-table-name "}).find_all('a', {'class':'mega-player-name'}) for turnier, datum, spieler in zip(turniere, daten, gegner): turnier = turnier.text datum = datum.text gegner = spieler.text minidict = {'Turnier': turnier, 'Datum': datum, "Gegner": spieler, 'Saison':str(jahr)} federer.append(minidict) # + federer = [] jahre = list(range(2016,2018)) for turnier in turniere: for jahr in jahre: url = "https://www.atpworldtour.com/en/players/roger-federer/f324/player-activity?year=" url = url+str(jahr) print(url) r = requests.get(url) soup = 
BeautifulSoup(r.text, 'html.parser') daten = soup.find('div', {'class':"activity-tournament-table"}).find_all('span', {'class':'tourney-dates'}) gegner = soup.find('div', {'class':"day-table-name "}).find_all('a', {'class':'mega-player-name'}) for datum, spieler in zip(daten, gegner): datum = datum.text gegner = spieler.text minidict = {'Datum': datum, "Gegner": spieler, 'Saison':str(jahr)} federer.append(minidict) # - # Test für td: soup.find('div', {'class':"activity-tournament-table"}).find('div', {'class':'mega-table'}).find_all('td') federer # # Testen r = requests.get('https://www.atpworldtour.com/en/players/roger-federer/f324/player-activity?year=2017') soup = BeautifulSoup(r.text, 'html.parser') soup soup.find_all("tr")[5].find("td") # Gegner: soup.find('div', {'class':"day-table-name "}).find_all('a', {'class':'mega-player-name'}) name = soup.find('div', {'class':"day-table-name "}).find_all('a', {'class':'mega-player-name'}) name # Turnier: soup.find('div', {'class':"activity-tournament-table"}).find_all('a', {'class':'tourney-title'}) # Datum: soup.find('div', {'class':"activity-tournament-table"}).find_all('span', {'class':'tourney-dates'}) # Runde: soup.find('div', {'class':"activity-tournament-table"}).find_all('table', {'class':'mega-table'}) # Resultat: # Sieg oder Niederlage (W oder L): # Weltranglistenposition des Gegners: # Code für Transermarkt (zum Abschreiben): # + allejahre = [] jahre = list(range(2008,2019)) for jahr in jahre: url = "https://www.transfermarkt.ch/premier-league/startseite/wettbewerb/GB1/plus/?saison_id=" url = url+str(jahr) print(url) headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} r = requests.get(url, headers=headers) soup = BeautifulSoup(r.text, 'html.parser') vereine = soup.find_all('td', {'class':'hauptlink no-border-links show-for-small show-for-pad'}) beträge = soup.find_all('td', {'class':'rechts show-for-small show-for-pad nowrap'}) for team, wert in zip(vereine, beträge[2:][::2]): team = team.text wert_mit_punkt = wert.text.replace(",",".") if " Mrd" in wert_mit_punkt: neuwert = float(wert_mit_punkt.replace(" Mrd. €",""))*1000 else: neuwert = float(wert_mit_punkt.replace(" Mio. €","")) wert = float(wert.text.replace(' Mio. €', '').replace(" Mrd. 
€","").replace(',', '.')) minidict = {'Team': team, 'Wert': neuwert, 'Saison':str(jahr)} allejahre.append(minidict) # - allejahre # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.10 64-bit (''crypto'': conda)' # name: python3 # --- # # Indicators import pandas as pd import talib as ta # ## Sources btc_yahoo = pd.read_csv('/home/giujorge/datalake/lab/Crypto/crypto/data/external/yahoo/daily/usd/BTC-USD.csv', parse_dates=True, index_col=0) btc_yahoo.head() # ## Pattern Recognition ## CDL2CROWS - Two Crows btc_yahoo['CDL2CROWS'] = ta.CDL2CROWS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3BLACKCROWS - Three Black Crows btc_yahoo['CDL3BLACKCROWS'] = ta.CDL3BLACKCROWS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3INSIDE - Three Inside Up/Down btc_yahoo['CDL3INSIDE'] = ta.CDL3INSIDE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3LINESTRIKE - Three-Line Strike btc_yahoo['CDL3LINESTRIKE'] = ta.CDL3LINESTRIKE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3OUTSIDE - Three Outside Up/Down btc_yahoo['CDL3OUTSIDE'] = ta.CDL3OUTSIDE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3STARSINSOUTH - Three Stars In The South btc_yahoo['CDL3STARSINSOUTH'] = ta.CDL3STARSINSOUTH(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDL3WHITESOLDIERS - Three Advancing White Soldiers btc_yahoo['CDL3WHITESOLDIERS'] = ta.CDL3WHITESOLDIERS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLABANDONEDBABY - Abandoned Baby btc_yahoo['CDLABANDONEDBABY'] = ta.CDLABANDONEDBABY(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLADVANCEBLOCK - Advance Block btc_yahoo['CDLADVANCEBLOCK'] = ta.CDLADVANCEBLOCK(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLBELTHOLD - Belt-hold btc_yahoo['CDLBELTHOLD'] =ta.CDLBELTHOLD(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLBREAKAWAY - Breakaway btc_yahoo['CDLBREAKAWAY'] =ta.CDLBREAKAWAY(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLCLOSINGMARUBOZU - Closing Marubozu btc_yahoo['CDLCLOSINGMARUBOZU'] =ta.CDLCLOSINGMARUBOZU(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLCONCEALBABYSWALL - Concealing Baby Swallow btc_yahoo['CDLCONCEALBABYSWALL'] =ta.CDLCONCEALBABYSWALL(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLCOUNTERATTACK - Counterattack btc_yahoo['CDLCOUNTERATTACK'] =ta.CDLCOUNTERATTACK(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLDARKCLOUDCOVER - Dark Cloud Cover btc_yahoo['CDLDARKCLOUDCOVER'] =ta.CDLDARKCLOUDCOVER(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLDOJI - Doji btc_yahoo['CDLDOJI'] =ta.CDLDOJI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLDOJISTAR - Doji Star btc_yahoo['CDLDOJISTAR'] =ta.CDLDOJISTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLDRAGONFLYDOJI - Dragonfly Doji btc_yahoo['CDLDRAGONFLYDOJI'] =ta.CDLDRAGONFLYDOJI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], 
btc_yahoo["Adj Close"]) ## CDLENGULFING - Engulfing Pattern btc_yahoo['CDLENGULFING'] =ta.CDLENGULFING(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLEVENINGDOJISTAR - Evening Doji Star btc_yahoo['CDLEVENINGDOJISTAR'] =ta.CDLEVENINGDOJISTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLEVENINGSTAR - Evening Star btc_yahoo['CDLEVENINGSTAR'] =ta.CDLEVENINGSTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLGAPSIDESIDEWHITE - Up/Down-gap side-by-side white lines btc_yahoo['CDLGAPSIDESIDEWHITE'] =ta.CDLGAPSIDESIDEWHITE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLGRAVESTONEDOJI - Gravestone Doji btc_yahoo['CDLGRAVESTONEDOJI'] =ta.CDLGRAVESTONEDOJI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHAMMER - Hammer btc_yahoo['CDLHAMMER'] = ta.CDLHAMMER(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHANGINGMAN - Hanging Man btc_yahoo['CDLHANGINGMAN'] =ta.CDLHANGINGMAN(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHARAMI - Harami Pattern btc_yahoo['CDLHARAMI'] = ta.CDLHARAMI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHARAMICROSS - Harami Cross Pattern btc_yahoo['CDLHARAMICROSS'] = ta.CDLHARAMICROSS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHIGHWAVE - High-Wave Candle btc_yahoo['CDLHIGHWAVE'] = ta.CDLHIGHWAVE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHIKKAKE - Hikkake Pattern btc_yahoo['CDLHIKKAKE'] = ta.CDLHIKKAKE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHIKKAKEMOD - Modified Hikkake Pattern btc_yahoo['CDLHIKKAKEMOD'] = ta.CDLHIKKAKEMOD(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLHOMINGPIGEON - Homing Pigeon btc_yahoo['CDLHOMINGPIGEON'] = ta.CDLHOMINGPIGEON(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLIDENTICAL3CROWS - Identical Three Crows btc_yahoo['CDLIDENTICAL3CROWS'] = ta.CDLIDENTICAL3CROWS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLINNECK - In-Neck Pattern btc_yahoo['CDLINNECK'] = ta.CDLINNECK(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLINVERTEDHAMMER - Inverted Hammer btc_yahoo['CDLINVERTEDHAMMER'] = ta.CDLINVERTEDHAMMER(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLKICKING - Kicking btc_yahoo['CDLKICKING'] = ta.CDLKICKING(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLKICKINGBYLENGTH - Kicking - bull/bear determined by the longer marubozu btc_yahoo['CDLKICKINGBYLENGTH'] = ta.CDLKICKINGBYLENGTH(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLLADDERBOTTOM - Ladder Bottom btc_yahoo['CDLLADDERBOTTOM'] = ta.CDLLADDERBOTTOM(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLLONGLEGGEDDOJI - Long Legged Doji btc_yahoo['CDLLONGLEGGEDDOJI'] = ta.CDLLONGLEGGEDDOJI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLLONGLINE - Long Line Candle btc_yahoo['CDLLONGLINE'] = ta.CDLLONGLINE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLMARUBOZU - 
Marubozu btc_yahoo['CDLMARUBOZU'] = ta.CDLMARUBOZU(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLMATCHINGLOW - Matching Low btc_yahoo['CDLMATCHINGLOW'] = ta.CDLMATCHINGLOW(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLMATHOLD - Mat Hold btc_yahoo['CDLMATHOLD'] = ta.CDLMATHOLD(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLMORNINGDOJISTAR - Morning Doji Star btc_yahoo['CDLMORNINGDOJISTAR'] = ta.CDLMORNINGDOJISTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLMORNINGSTAR - Morning Star btc_yahoo['CDLMORNINGSTAR'] = ta.CDLMORNINGSTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"], penetration=0) ## CDLONNECK - On-Neck Pattern btc_yahoo['CDLONNECK'] = ta.CDLONNECK(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLPIERCING - Piercing Pattern btc_yahoo['CDLPIERCING'] = ta.CDLPIERCING(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLRICKSHAWMAN - Rickshaw Man btc_yahoo['CDLRICKSHAWMAN'] = ta.CDLRICKSHAWMAN(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLRISEFALL3METHODS - Rising/Falling Three Methods btc_yahoo['CDLRISEFALL3METHODS'] = ta.CDLRISEFALL3METHODS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSEPARATINGLINES - Separating Lines btc_yahoo['CDLSEPARATINGLINES'] = ta.CDLSEPARATINGLINES(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSHOOTINGSTAR - Shooting Star btc_yahoo['CDLSHOOTINGSTAR'] = ta.CDLSHOOTINGSTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSHORTLINE - Short Line Candle btc_yahoo['CDLSHORTLINE'] = ta.CDLSHORTLINE(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSPINNINGTOP - Spinning Top btc_yahoo['CDLSPINNINGTOP'] = ta.CDLSPINNINGTOP(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSTALLEDPATTERN - Stalled Pattern btc_yahoo['CDLSTALLEDPATTERN'] = ta.CDLSTALLEDPATTERN(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLSTICKSANDWICH - Stick Sandwich btc_yahoo['CDLSTICKSANDWICH'] = ta.CDLSTICKSANDWICH(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLTAKURI - Takuri (Dragonfly Doji with very long lower shadow) btc_yahoo['CDLTAKURI'] = ta.CDLTAKURI(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLTASUKIGAP - Tasuki Gap btc_yahoo['CDLTASUKIGAP'] = ta.CDLTASUKIGAP(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLTHRUSTING - Thrusting Pattern btc_yahoo['CDLTASUKIGAP'] = ta.CDLTASUKIGAP(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLTRISTAR - Tristar Pattern btc_yahoo['CDLTRISTAR'] = ta.CDLTRISTAR(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLUNIQUE3RIVER - Unique 3 River btc_yahoo['CDLUNIQUE3RIVER'] = ta.CDLUNIQUE3RIVER(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLUPSIDEGAP2CROWS - Upside Gap Two Crows btc_yahoo['CDLUPSIDEGAP2CROWS'] = ta.CDLUPSIDEGAP2CROWS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) ## CDLXSIDEGAP3METHODS - Upside/Downside Gap Three Methods 
btc_yahoo['CDLXSIDEGAP3METHODS'] = ta.CDLXSIDEGAP3METHODS(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]) # ## Exploration pattern_rec = ['CDL2CROWS','CDL3BLACKCROWS','CDL3INSIDE','CDL3LINESTRIKE','CDL3OUTSIDE','CDL3STARSINSOUTH','CDL3WHITESOLDIERS','CDLABANDONEDBABY','CDLADVANCEBLOCK','CDLBELTHOLD','CDLBREAKAWAY','CDLCLOSINGMARUBOZU','CDLCONCEALBABYSWALL','CDLCOUNTERATTACK','CDLDOJI','CDLDOJISTAR','CDLDRAGONFLYDOJI','CDLENGULFING','CDLGAPSIDESIDEWHITE','CDLGRAVESTONEDOJI','CDLHAMMER','CDLHANGINGMAN','CDLHARAMI','CDLHARAMICROSS','CDLHIGHWAVE','CDLHIKKAKE','CDLHIKKAKEMOD','CDLHOMINGPIGEON','CDLIDENTICAL3CROWS','CDLINNECK','CDLINVERTEDHAMMER','CDLKICKING','CDLKICKINGBYLENGTH','CDLLADDERBOTTOM','CDLLONGLEGGEDDOJI','CDLLONGLINE','CDLMARUBOZU','CDLMATCHINGLOW','CDLORNINGDOJISTAR','CDLONNECK','CDLPIERCING','CDLRICKSHAWMAN','CDLRISEFALL3METHODS','CDLSEPARATINGLINES','CDLSHOOTINGSTAR','CDLSHORTLINE','CDLSPINNINGTOP','CDLSTALLEDPATTERN','CDLSTICKSANDWICH','CDLTAKURI','CDLTASUKIGAP','CDLTHRUSTING','CDLTRISTAR','CDLUNIQUE3RIVER','CDLUPSIDEGAP2CROWS','CDLXSIDEGAP3METHODS'] pattern_rec_pen = ['CDLDARKCLOUDCOVER', 'CDLEVENINGDOJISTAR', 'CDLEVENINGSTAR', 'CDLMATHOLD', 'CDLMORNINGDOJISTAR', 'CDLMORNINGSTAR'] # for i in pattern_rec: # btc_yahoo['{}'] = ta.s'{}(btc_yahoo["Open"], btc_yahoo['High'], btc_yahoo['Low'], btc_yahoo["Adj Close"]).format(i,i) btc_yahoo.shape btc_yahoo.isnull().sum() btc_yahoo.describe() # Zero columns btc_yahoo.columns[(btc_yahoo == 0).all()] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="hTcFv398beRg" # #STEP 1 and 2: Setup & Get the Data# # + id="eMVo6BkK-Z3y" import sys import sklearn import numpy as np import os # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/"} id="QgzMyp_6Tyi3" outputId="1bcf197b-6243-4fdf-ae5e-5b767ca820c0" #importing the necessary modules import pandas as pd from zipfile import ZipFile from urllib.request import urlopen import io #loading the dataset file_path = 'https://raw.githubusercontent.com/aiforsec/RIT-DSCI-633-FDS/main/Assignments/titanic.zip' f = urlopen(file_path).read() zip_file = ZipFile(io.BytesIO(f)) #Reading the test and train dataframes test_df = pd.read_csv(zip_file.open('test.csv')) train_df = pd.read_csv(zip_file.open('train.csv')) #Print the first 5 rows of the dataframe print(train_df.head()) print() print(test_df.head()) print() #Print the number of rows and columns of dataframe print("Shape of train dataframe: " , train_df.shape) print() print("Shape of test dataframe: " , test_df.shape) # + [markdown] id="TEQKMhdycEGK" # #STEP 3: Data Manipulation and Analysis using Pandas framework# # + colab={"base_uri": "https://localhost:8080/"} id="y-vvjTODUpn-" outputId="c7c0e7d6-b4de-4bcd-ee05-6f2cbc674e82" #Print the summary of the DataFrame print(train_df.info()) print() print(test_df.info()) # + colab={"base_uri": "https://localhost:8080/"} id="tWd1KUZtYd9J" outputId="be381e06-4bc6-4306-d977-df6c56fd064d" #Print the individual description for both the columns of the dataframe print(train_df.isnull().sum()) print() print(test_df.isnull().sum()) # + [markdown] id="W3aevlCcmnDt" # ##As you can see above, there are 177 NULL values in the Age column, 687 in the Cabin colum and 2 values in the Embarked. 
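# An optional sketch (not part of the original assignment steps): expressing the same missing values
# as a percentage of rows makes it easier to justify imputing `Age` and `Embarked` while dropping
# `Cabin` later on. It only uses the `train_df` and `test_df` DataFrames loaded above.

# +
for name, frame in [("train", train_df), ("test", test_df)]:
    # fraction of missing entries per column, shown as a percentage
    pct_missing = frame.isnull().mean().mul(100).round(1)
    print(name)
    print(pct_missing[pct_missing > 0])
    print()
# -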
# + colab={"base_uri": "https://localhost:8080/"} id="o8fVIY3Yl7Tt" outputId="33457447-a6dc-469d-b129-cec751c8d3d8" print(train_df.columns.values, test_df.columns.values) # + colab={"base_uri": "https://localhost:8080/"} id="bKk5rTqomPh0" outputId="f183833c-79c7-4e0f-9e1b-701da6bf4803" print(train_df.describe()) print(test_df.describe()) # + colab={"base_uri": "https://localhost:8080/"} id="8SV1OQZum8CV" outputId="44288930-3a65-4468-a5b9-a43b76b72a5d" #total = train_df.shape[0] + test_df.shape[0] #print(train_df.shape[0]/ total) #print(test_df.shape[0]/ total) print(train_df.Pclass.value_counts()) print() print(train_df.groupby("Pclass").Survived.value_counts()) # + [markdown] id="bOWXr_FopM1K" # #STEP 4: DATA VISUALISATION # + colab={"base_uri": "https://localhost:8080/", "height": 276} id="CLw4-QjRnfEM" outputId="1e13e017-9ca9-4cab-b038-b1052e82fb15" train_df.groupby('Pclass').Survived.mean().plot(kind='bar') plt.ylabel('Survived') plt.xlabel('Pclass') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="npBLmqbZnyYA" outputId="3f66e8e4-ea1c-46fe-b789-48f4af2aa0ca" train_df_3rd_class = train_df[train_df['Pclass'] == 3] plt.hist(train_df_3rd_class['Age']) plt.show() # + id="ECIrvIWAn70W" train_test_df = [train_df, test_df] # + [markdown] id="meqdSbfuokof" # # STEP 5: Prepare the data for Machine Learning Model # + id="lbl-xBTHoUNz" for dataset in train_test_df: dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int) # + id="JzaGqycVot-Z" for tr in train_test_df: tr['Embarked'] = tr['Embarked'].fillna('S') #filling missing embarked value with 'S' tr['Embarked'] = tr['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int) # + colab={"base_uri": "https://localhost:8080/"} id="7A2_4qahoxhT" outputId="306456cd-1c49-4460-f9f3-69ccae2c2841" for tr in train_test_df: age_avg = tr['Age'].mean() age_std = tr['Age'].std() age_null = tr['Age'].isnull().sum() age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null) tr['Age'][np.isnan(tr['Age'])] = age_null_random_list tr['Age'] = tr['Age'].astype(int) train_df['AgeBand'] = pd.cut(train_df['Age'], 5) for dataset in train_test_df: dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0 dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1 dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2 dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3 dataset.loc[ dataset['Age'] > 64, 'Age'] = 4 # + id="Aii6Z6lDo1gZ" for dataset in train_test_df: dataset['Fare'] = dataset['Fare'].fillna(train_df['Fare'].median()) train_df['FareBand'] = pd.qcut(train_df['Fare'], 4) for dataset in train_test_df: dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0 dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1 dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2 dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3 dataset['Fare'] = dataset['Fare'].astype(int) # + colab={"base_uri": "https://localhost:8080/"} id="MuMNM9wlphzV" outputId="ccffd2ae-fbb4-47be-efca-35803edb9a50" features_drop = ['Name', 'SibSp', 'Parch', 'Ticket', 'Cabin'] train_df = train_df.drop(features_drop, axis=1) print(train_df.head()) test_df = test_df.drop(features_drop, axis=1) train_df = train_df.drop(['PassengerId', 'AgeBand', 'FareBand'], axis=1) # + [markdown] id="k7ayEaDZpqtc" # #STEP 6 & 7: Select and Train a Model and Evaluating Accuracy # # + colab={"base_uri": "https://localhost:8080/"} id="FmoZtKuRpwoB" 
outputId="a0c83200-c775-4109-8f89-dd288ed8c102" X_train = train_df.drop('Survived', axis=1) y_train = train_df['Survived'] X_test = test_df.drop("PassengerId", axis=1).copy() X_train.shape, y_train.shape, X_test.shape # + colab={"base_uri": "https://localhost:8080/"} id="KbtIh5MFp4Sm" outputId="95b5329d-c78f-4fa7-8102-22f00e047d4b" from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf.fit(X_train, y_train) y_pred_log_reg = clf.predict(X_test) acc_log_reg = round( clf.score(X_train, y_train) * 100, 2) print (str(acc_log_reg) + '%') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ### Pour aller plus loin avec le SQL distribué # # Nous avons vu précédemment que l'on pouvait via Spark SQL accéder à des données distribuées stockées sous forme de fichier en les rêquetant avec du SQL standard. # # Pour cela, Spark traduit la requête SQL à éxecuter dans un plan d'éxécution et bien souvent : # 1. Ramène sur les executeurs Spark les données du fichier. # 2. Définit les traitements qu'il va faire sur les RDD pour arriver aux résultats. # 3. Ramène le résultat sur le client, dans notre cas Jupyter. # # Pour avoir de bonnes performances, et d'autant plus que les executeurs Spark et les données ne sont pas sur les mêmes machines, il est donc utile de : # 1. Avoir des données compressées qui transitent plus vite sur le réseau. # 2. Avoir des mécanismes intelligents qui fitrent les données pour ne ramener sur les executeurs que les données réellement nécessaires. # # Pour cela, il existe différents formats de fichier, et si csv est l'un des plus répandus, ce n'est pas forcément le plus optimal. # Nous allons ici vous présenter le format parquet qui est un format compressé, orienté colonne, avec des mécanismes de filtrage qui optimisent le transit de l'information réellement utile sur les executeurs. # + import os from pyspark.sql import SparkSession spark = (SparkSession .builder # url par défaut d'une api kubernetes accédé depuis l'intérieur du cluster # (ici le notebook tourne lui même dans kubernetes) .master("k8s://https://kubernetes.default.svc:443") # Nom du pod qui exécute le driver .config("spark.kubernetes.driver.pod.name", os.environ['KUBERNETES_POD_NAME']) # image des executors spark: pour des raisons de simplicité on réutilise l'image du notebook .config("spark.kubernetes.container.image", os.environ['IMAGE_NAME']) # Nom du namespace kubernetes .config("spark.kubernetes.namespace", os.environ['KUBERNETES_NAMESPACE']) # Nombre d'executeur spark, il se lancera autant de pods kubernetes que le nombre indiqué. .config("spark.executor.instances", "5") # Mémoire alloué à la JVM # Attention par défaut le pod kubernetes aura une limite supérieur qui dépend d'autres paramètres. 
# On manipulera plus bas pour vérifier la limite de mémoire totale d'un executeur .config("spark.executor.memory", "4g") .getOrCreate() ) sc = spark.sparkContext # + import os import s3fs import json from pyspark.sql.types import StructType endpoint = "https://"+os.environ['AWS_S3_ENDPOINT'] fs = s3fs.S3FileSystem(client_kwargs={'endpoint_url': endpoint}) with fs.open('s3://projet-spark-lab/diffusion/formation/schema/sirene/sirene.schema.json') as f: a = f.read() schema = StructType.fromJson(json.loads(a)) df = (spark.read .format("csv") .options(header='true', inferschema='false', delimiter=',') .schema(schema) .load("s3a://projet-spark-lab/diffusion/formation/data/sirene/sirene.csv") ) df.printSchema() # - # #### Requete sur le csv # # Nous allons faire 3 requetes sur cette table appelée sirene. # # 1. Un count du nombre de lignes # 2. Un extrait de 10 lignes # 3. Un comptages du top 10 des code activitePrincipalEtablissement. # # On va afficher le temps d'execution. # + df.createOrReplaceTempView("sirene") dfCount = spark.sql("SELECT * FROM sirene LIMIT 10") dfGroupBy= spark.sql("SELECT count(*) as tot , activitePrincipaleEtablissement FROM sirene group by activitePrincipaleEtablissement order by tot desc LIMIT 10") # %time print('comptage total : {}'.format(df.count())) # %time print('comptage des 10 premières lignes : {}'.format(dfCount.count())) # %time dfGroupBy.show() # - # #### Le plan suivi par spark pour le groupby # # 1. Spark a lu tout le fichier en 53 étapes pour chaque étape il a calculé le nombre d'établissement par codeApe. # 2. Il a mis ce résultat en commun pour aggréger # 3. Il a ramené sur le client le top10 # # Le shuffle indique généralement des échanges sur le réseau entre executor spark, plus il y a de shuffle moins on est content généralement. # Quand on ne peut l'éviter, on souhaite qu'il concerne le moins de donnée possible, ici, ca va, le shuffle concerne un code de nomenclature et un total. # # (les temps d'execution ne sont peut etre pas les memes selon la configuration de votre spark) # # ![image.png](attachment:image.png) # #### Comparaison avec les mêmes données au format parquet # # Le dataframe issu du csv a été enregistré au format parquet avec la commande suivante # ``` # df.write.mode('append').parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene.parquet") # ``` # # Nous allons avec le sqlContext de Spark refaire exactement la même chose que précédemment. # 1. Demander de lire le fichier en découvrant seul le schéma. # 2. Donner le résultat des 3 mêmes requetes. # # Remarquez qu'une fois le dataFrame lu, les syntaxes sont exactement les mêmes. # # # + #df.write.mode('append').parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene.parquet") # - # !mc ls s3/projet-spark-lab/diffusion/formation/data/sirene.parquet # Vos pouvez remarquer que 53 fichiers ont écrit sur le stockage. Cela correspond exactement exactement au nombre de partition du dataframe original. En effet spark dans sa configuration par défaut travaille sur 128 Mo de données et a donc découper le job sur les 6,5 Go en 53 taches qui chacune va écrire l'output dans un nouveau fichier. 
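# A quick sanity check on the file count above (a sketch with approximate numbers: the CSV is described
# as roughly 6.5 GB, and 128 MB is Spark's default input split size, the default value of
# `spark.sql.files.maxPartitionBytes`). The exact count depends on the true file size, but it lands
# around the 53 tasks/files observed.

# +
import math

csv_size_mb = 6.5 * 1024      # approximate size of the sirene CSV
split_size_mb = 128           # Spark's default input split size

# ~52-53 read tasks, hence one output file per task when the dataframe is written back out
print(math.ceil(csv_size_mb / split_size_mb))
# -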
parquetDf = spark.read.parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene.parquet") parquetDf.createOrReplaceTempView("sireneparquet") parquetDfCount = spark.sql("SELECT * FROM sireneparquet LIMIT 10") parquetDFGroupBy= spark.sql("SELECT count(*) as tot , activitePrincipaleEtablissement FROM sireneparquet group by activitePrincipaleEtablissement order by tot desc LIMIT 10") # %time print('comptage total : {}'.format(parquetDf.count())) # %time print('comptage des 10 premières lignes : {}'.format(parquetDfCount.count())) # %time parquetDFGroupBy.show() # #### Le plan suivi par spark pour le groupby # # En parquet, pour le même group by il y a un gain important de performance, l'essentiel du temps est gagné du fait que le format est optimisé pour ne faire passer sur le réseau vers les executors que les données de la colonne concernée. # On passe de 6Go de données brassées à quelques Mo. # # De plus le fichier parquet logique a été écrit en 53 sous fichiers (hérités des 53 partitions du csv). # Mais comme les fichiers sont compressés en parquet il n'y a que 9 blocs de lus au lieu de 53. # # Une fois la lecture faite des fichiers on retrouve la seconde étape avec 200 partitions, c'est la valeur par défaut dans spark que l'on retrouve dans l'execution csv quand il doit restructurer son rdd sur le réseau. # # ![image.png](attachment:image.png) # #### amélioration possible # # Le dataframe issu du csv a été enregistré au format parquet en regardant la structure du fichier enregistré on # on a hérité ca lors de l'écriture. # # Parquet ou non, on peut forcer l'écriture dans un fichier en réorganisant le partitionnement du rdd avant l'écriture : # On peut par exemple chercher à avoir des fichiers de 128 Mo environ pour lire un seul fichier par tache. # La tache de repartition est assez couteuse et nous sommes obligés d'augmentant les limites de mémoire du container pour éviter les erreurs. # ``` # config("spark.kubernetes.memoryOverheadFactor", "0.5") # parquetDf.repartition(10).write.mode('overwrite').parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene-10-partitions.parquet") # ``` # # Tant qu'on y est changeons le niveau de parallélisme lors du shuffle de spark qui est par défaut à 200. # ``` # config("spark.sql.shuffle.partitions", "5") # config("spark.default.parallelism", "5") # ``` # spark.stop() spark = (SparkSession .builder # url par défaut d'une api kubernetes accédé depuis l'intérieur du cluster # (ici le notebook tourne lui même dans kubernetes) .master("k8s://https://kubernetes.default.svc:443") # Nom du pod qui exécute le driver .config("spark.kubernetes.driver.pod.name", os.environ['KUBERNETES_POD_NAME']) # image des executors spark: pour des raisons de simplicité on réutilise l'image du notebook .config("spark.kubernetes.container.image", os.environ['IMAGE_NAME']) # Nom du namespace kubernetes .config("spark.kubernetes.namespace", os.environ['KUBERNETES_NAMESPACE']) # Nombre d'executeur spark, il se lancera autant de pods kubernetes que le nombre indiqué. .config("spark.executor.instances", "5") .config("spark.sql.shuffle.partitions", "5") .config("spark.default.parallelism", "5") .config("spark.kubernetes.memoryOverheadFactor", "0.5") # Mémoire alloué à la JVM # Attention par défaut le pod kubernetes aura une limite supérieur qui dépend d'autres paramètres. 
# On manipulera plus bas pour vérifier la limite de mémoire totale d'un executeur .config("spark.executor.memory", "4g") .getOrCreate() ) parquetDf = spark.read.parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene.parquet") # + #parquetDf.repartition(10).write.mode('overwrite').parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene-10-partitions.parquet") # - # !mc ls s3/projet-spark-lab/diffusion/formation/data/sirene-10-partitions.parquet tenParquetDf = spark.read.parquet("s3a://projet-spark-lab/diffusion/formation/data/sirene-10-partitions.parquet") tenParquetDf.createOrReplaceTempView("sirene10") tenParquetDfCount = spark.sql("SELECT * FROM sirene10 LIMIT 10") tenParquetDfGroupBy= spark.sql("SELECT count(*) as tot , activitePrincipaleEtablissement FROM sirene10 group by activitePrincipaleEtablissement order by tot desc LIMIT 10") # %time print('comptage total : {}'.format(tenParquetDf.count())) # %time print('comptage des 10 premières lignes : {}'.format(tenParquetDfCount.count())) # %time tenParquetDfGroupBy.show() # Normalement il y a, selon le use case, des paramètres à trouver : # * nombre de coeurs disponibles (une tache = un coeur) # * volume des données dans chaque étape (pas d'intéret d'avoir plein de petites étapes sur petites données) # * nombre de fois ou le traitement ne peut faire autre chose que shuffle # parqDFGroupBy= spark.sql( "select count(*) as total, activitePrincipaleEtablissement, word from (" " SELECT activitePrincipaleEtablissement,explode(split(denominationUsuelleEtablissement,' ')) as word "+ " FROM sirene10 where denominationUsuelleEtablissement is not null)"+ " where length(word)>4"+ " group by activitePrincipaleEtablissement,word order by total desc") # %time parqDFGroupBy.show() # #### Partitionnement avec parquet # # Si votre use case suppose que vous allez souvent requeter avec une clause where filtrante comme une date, une notion de géographie par exemple ou autre, on peut avec parquet partitionner sur plusieurs niveaux. # # Imaginons qu'un use case nécessite que les données soient préparées pour etre requetes par codeApe. # # La table a été préparé et partitionné par codeApe. # - un codeApe = 1 fichier parquet # # Sur du gros volume cela génère un gain. # Sur du petit volume cela peut etre une perte ici par exemple génère plus de 2000 fichiers ape qui va donc a la lecture commencer par 2000 petites step spark ce qui va etre long. # # L'idéal est donc d'avoir un use case ou une partition > bloc standard s3. # # ```tenParquetDf.write.partitionBy("etatAdministratifEtablissement").parquet("s3a://projet-spark-lab/diffusion/formation/data/sireneEae.parquet")``` # + #tenParquetDf.write.partitionBy("etatAdministratifEtablissement").parquet("s3a://projet-spark-lab/diffusion/formation/data/sireneEae.parquet") # - # !mc ls s3/projet-spark-lab/diffusion/formation/data/sireneEae.parquet # !mc ls s3/projet-spark-lab/diffusion/formation/data/sireneEae.parquet/etatAdministratifEtablissement=A/ tenParquetDf.groupBy('etatAdministratifEtablissement').count().show() # On peut retenir cette variable ici pour l'exemple car elle partitionne les données sans trop les fragmenter. 
# # En effet, le partitionement va splitter le fichier parquet en autant de fichier parquet que de modalité donc : # * si l'on prend une variable trop discriminante, on va pénaliser les requetes qui s'executeront sur tout le dataset (exemple si on partitionne sur codeApe avec plus de 2000 modalités => 2000 fichiers parquet a parcourir pour traiter tout le dataset donc spark commencera par une étape avec 2000 taches une pour chaque fichier ce qui va ralentir le traitement il y aura trop d'overhead). # * Le partitionnement s'applique donc sur des variables cohérentes par rapport aux blocs de données configurés sur s3 et aux volumes de données naturellement distribuées dans la partition choisie. # * Sauf a etre sur que toutes les requetes présenteront un filtre portant sur la partition ca peut avoir de grandes incidences sur le dataset global. # # On a donc 3 fichiers en dessous de notre parquet un par modalité # # ![image.png](attachment:image.png) # # En théorie on doit : # * gagner du temps si l'on utilise la varible etatAdministratifEtablissement. # * ne pas avoir trop dégrader les perfs sur le dataset global puisque la partition n'a pas fait explosé le nombre de fichier parquet par rapport au bloc s3 de 180Mo. #on doit gagner du temps sur une partition eaeDF = spark.read.parquet("s3a://projet-spark-lab/diffusion/formation/data/sireneEae.parquet") eaeDF.createOrReplaceTempView("sirenepartition") parqDFGroupBy= spark.sql( " select count(*) as total,activitePrincipaleEtablissement, word from (" " SELECT activitePrincipaleEtablissement,explode(split(denominationUsuelleEtablissement,' ')) as word "+ " FROM sirenepartition where denominationUsuelleEtablissement is not null and etatAdministratifEtablissement='A') "+ " where length(word)>4"+ " group by activitePrincipaleEtablissement,word order by total desc") # %time parqDFGroupBy.show() #on doit pas avoir dégrader le traitement sur le dataset global parqDFGroupBy= spark.sql( " select count(*) as total,activitePrincipaleEtablissement, word from (" " SELECT activitePrincipaleEtablissement,explode(split(denominationUsuelleEtablissement,' ')) as word "+ " FROM sirenepartition where denominationUsuelleEtablissement is not null) "+ " where length(word)>4"+ " group by activitePrincipaleEtablissement,word order by total desc") # %time parqDFGroupBy.show() # ### Synthèse # # Nous avons pu voir que les temps de traitements dépendent de : # 1. Comment la donnée est enregistrée(format, granularité des blocs, partitionnement) # 2. De paramètres de parallélisme des taches. # 3. La capacité à l'utilisateur de comprendre ce qui se passe côté serveur et côté client pour ne ramener sur le client que le résultat. # # Dans ce tutoriel, vous maitrisez votre cluster et votre fichier dans une stratégie de datalake d'entreprise: # - la diffusion des tables doit être réfléchie par une alimentation des tables/pipeline dans un format approprié au context du contenu. # - l'utilisateur accède aux tables sans en connaitre précisement le stockage. spark.stop() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import xml.etree.cElementTree as ET import numpy as np import tqdm import cv2 as cv from aisscv import __RepoPath__ # # Convert Images and Fix Labels (after augmentations) # This notebook does label-postprocessing. 
# It does: # - Read in all annotations (in pascal VOC .xml files) # - Resize images # - Fix labels PATH = os.path.join(__RepoPath__.__RepoPath__, "data/dataset_png/annotations") files = os.listdir(PATH) files = sorted([f for f in files if f.endswith('.xml')]) # Sanity chec if all labels are read in correctly len(files) def box_on_image(img: np.ndarray, min_point:tuple, max_point:tuple, window_name:str = 'Demo') -> None: """ For debugging purposes: Print a bounding box on an image and show the image. Press any key to continue. """ image = cv.rectangle(img, min_point, max_point, color=(255,0,0), thickness = int(img.shape[0]/50)) cv.imshow(window_name, image) cv.waitKey(0) cv.destroyAllWindows() # ## Let the magic begin # Now we # - iterate over the label files # - read in the image of each label # - resize the image, if it is large # - resize the bounding boxes accordingly # - check that bounding boxes are non-negative ints # - make sure that min-coordinates are smaller than max-coordinates # # we also split up the data to a train and a validation dataset, by getting a random value u. If this values is >.15 the sample belongs to train, else to validation. Write the image and the label to the according directory. # + NEW_PATH = os.path.join(__RepoPath__.__RepoPath__, "data/dataset_final_02") if not os.path.exists(NEW_PATH): #os.mkdir(NEW_PATH) os.makedirs(os.path.join(NEW_PATH,'train', 'annotations')) os.makedirs(os.path.join(NEW_PATH,'validation', 'annotations')) for ind, file in tqdm.tqdm(enumerate(files[:]), total = len(files[:])): # ob = root.find('object') #print(ind) try: tree = ET.parse(os.path.join(PATH, file)) root = tree.getroot() image_path = os.path.join(PATH, '..', root.find('filename').text) if not os.path.exists(image_path): continue image = cv.imread(image_path) #cv.imshow('Test', image) (orig_height,orig_width,c) = image.shape #print('Shape: {}'.format((v,u,c))) if min(orig_height,orig_width) >= 2000: factor = 0.25 elif min(orig_height,orig_width) >= 1000: factor = 0.5 else: factor = 1 #print(f'Orig width: {orig_width} and height {orig_height}, c:{c}, at file: {file}') resized = cv.resize(image, (int(orig_width*factor), int(orig_height*factor))) #print(f'New width: {resized.shape[1]} and height {resized.shape[0]}') size = root.find('size') new_width = int(np.floor(resized.shape[1])) new_height = int(np.floor(resized.shape[0])) assert new_width>0 and new_height >0, "Image dimensions must be positive" # Now, set the new values for the xml file. 
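        # (<size> gets the resized dimensions, <filename> is switched to the new .jpg name, and each
        # <bndbox> further down is rescaled by the same factor, clamped to the image, and re-ordered
        # so that min coordinates stay below max coordinates.)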
width_node = size.find('width') width_node.text = str(new_width) height_node = size.find('height') height_node.text = str(new_height) new_image_name = os.path.split(image_path)[-1].split('.')[0]+'.jpg' filename_node = root.find('filename') filename_node.text = new_image_name obj_to_del = [] for ob in root.iter('object'): bbox = ob.find('bndbox') xmin_node = bbox.find('xmin') ymin_node = bbox.find('ymin') xmax_node = bbox.find('xmax') ymax_node = bbox.find('ymax') #print('Read values: min: {} max: {}'.format((xmin_node.text, ymin_node.text), (xmax_node.text, ymax_node.text))) #box_on_image(image, (int(float(xmin_node.text)), int(float(ymin_node.text))), (int(float(xmax_node.text)), int(float(ymax_node.text))), 'Original Size') xmin_value = int(np.floor(float(xmin_node.text)*factor)) ymin_value = int(np.floor(float(ymin_node.text)*factor)) xmax_value = int(np.ceil(float(xmax_node.text)*factor)) ymax_value = int(np.ceil(float(ymax_node.text)*factor)) #print('After factor: min: {} max: {}'.format((xmin_value, ymin_value), (xmax_value, ymax_value))) # Step1: Clamp values to image size xmin_value = max(min(xmin_value, new_width-1), 1) xmax_value = max(min(xmax_value, new_width-1), 1) ymin_value = max(min(ymin_value, new_height-1), 1) ymax_value = max(min(ymax_value, new_height-1), 1) # Step2: switch in order to get positive heights/widths xmin = min(xmin_value, xmax_value) xmax = max(xmin_value, xmax_value) ymin = min(ymin_value, ymax_value) ymax = max(ymin_value, ymax_value) # check if dmg is no longer in the image and add object in xml file to list for later deletion if not (0 0.15: cv.imwrite(os.path.join(NEW_PATH, 'train', new_image_name), resized) tree.write(os.path.join(NEW_PATH, 'train', 'annotations', file)) else: cv.imwrite(os.path.join(NEW_PATH, 'validation', new_image_name), resized) tree.write(os.path.join(NEW_PATH, 'validation', 'annotations', file)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: PythonData # language: python # name: pythondata # --- # # WeatherPy # ---- # # #### Note # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. from citipy import citipy import pandas as pd import time, numpy as np, requests as rq from api_keys import weather_api_key import matplotlib.pyplot as plt from datetime import date from scipy.stats import linregress # ## Generate Cities List # + # Output File (CSV) output_data_file = "output_data/cities.csv" # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180) # List for holding lat_lngs and cities lat_lngs = [] cities = [] # Create a set of random lat and lng combinations lats = np.random.uniform(lat_range[0], lat_range[1], size=1500) lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500) lat_lngs = zip(lats, lngs) # Identify nearest city for each lat, lng combination for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is unique, then add it to a our cities list if city not in cities: cities.append(city) # Print the city count to confirm sufficient count len(cities) # - # ### Perform API Calls # * Perform a weather check on each city using a series of successive API calls. # * Include a print log of each city as it'sbeing processed (with the city number and city name). 
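# An optional sketch in case the loop below runs into the API's rate limit: the free OpenWeatherMap
# tier is throttled (commonly cited as about 60 calls per minute), so pausing between successive
# requests helps avoid failed responses. `fetch_with_pause` is a hypothetical helper, not part of the
# original notebook; the retrieval loop below calls the API directly.

# +
import time
import requests as rq


def fetch_with_pause(query_url, city, pause_seconds=1.1):
    """Fetch one city's weather JSON, then sleep briefly to stay under the rate limit."""
    response = rq.get(query_url + city).json()
    time.sleep(pause_seconds)
    return response
# -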
# # + #Lists and counters city_list = [] cloud_list = [] country_list = [] date_list = [] humidity_list = [] lats_list = [] lngs_list = [] temp_max_list = [] wind_speed_list = [] index_counter = 0 set_counter = 1 print("Beginning Data Retrieval ") print("-------------------------------") base_url = "http://api.openweathermap.org/data/2.5/weather?" units = "imperial" query_url = f"{base_url}appid={weather_api_key}&units={units}&q=" #For loop matching city names with city_list for index, city in enumerate(cities, start = 1): try: response = rq.get(query_url + city).json() city_list.append(response["name"]) cloud_list.append(response["clouds"]["all"]) country_list.append(response["sys"]["country"]) date_list.append(response["dt"]) humidity_list.append(response["main"]["humidity"]) lats_list.append(response["coord"]["lat"]) lngs_list.append(response["coord"]["lon"]) temp_max_list.append(response['main']['temp_max']) wind_speed_list.append(response["wind"]["speed"]) if index_counter > 49: index_counter = 0 set_counter = set_counter + 1 else: index_counter = index_counter + 1 print(f"Processing Record {index_counter} of Set {set_counter} : {city}") except(KeyError, IndexError): print("City not found. Skipping...") print("-------------------------------") print("Data Retrieval Complete") print("-------------------------------") # - # ### Convert Raw Data to DataFrame # * Export the city data into a .csv. # * Display the DataFrame len("date_list") # + weather_data = pd.DataFrame({ "City" : city_list, "Lat" : lats_list, "Lng" : lngs_list, "Max Temp" : temp_max_list, "Humidity" : humidity_list, "Clouds" : cloud_list, "Wind Speed" : wind_speed_list, "Country" : country_list, "Date" : date_list }) weather_data weather_data_humidity_lesthan_100 = weather_data weather_data["Humidity"] = weather_data["Humidity"].map("{:d}".format) weather_data # - # ## Inspect the data and remove the cities where the humidity > 100%. # ---- # Skip this step if there are no cities that have humidity > 100%. weather_data['Humidity'] = pd.to_numeric(weather_data['Humidity']) weather_data['Lat'] = pd.to_numeric(weather_data['Lat']) weather_data['Lng'] = pd.to_numeric(weather_data['Lng']) weather_data.to_csv("output_data/cities.csv") # + # Get the indices of cities that have humidity over 100%. index_humidity_greater_thn_100 = weather_data[ weather_data['Humidity'] > 100 ].index # + # Make a new DataFrame equal to the city data to drop all humidity outliers by index. # Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data". # So far did not see a city with more than 100 % humidity. This code is in place just in case. weather_data.drop(index_humidity_greater_thn_100, inplace = True) weather_data.to_csv("output_data/cities_clean.csv") weather_data # - # ## Plotting the Data # * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels. # * Save the plotted figures as .pngs. # ## Latitude vs. Temperature Plot # + today = date.today() # dd/mm/YY d1 = today.strftime("%m/%d/%Y") #print("d1 =", d1) plt.title(f"City Latitude vs Max Temperature ({d1})") plt.ylabel("Max Temperature (F)") plt.xlabel("Latitude") plt.scatter(weather_data["Lat"], weather_data["Max Temp"], color = "steelblue", edgecolor = "black") plt.grid() fileLocation = f"Images/City Latitude vs Max Temperature.png" plt.savefig(fileLocation) # - # ## Latitude vs. 
Humidity Plot # + plt.title(f"City Latitude vs Humidity ({d1})") plt.ylabel("Humidity (%)") plt.xlabel("Latitude") plt.scatter(weather_data["Lat"], weather_data["Humidity"], color = "steelblue", edgecolor = "black") plt.grid() plt.savefig(f"Images/City Latitude vs Humidity.png") # - # ## Latitude vs. Cloudiness Plot # + plt.title(f"City Latitude vs Cloudiness ({d1})") plt.ylabel("Cloudiness (%)") plt.xlabel("Latitude") plt.scatter(weather_data["Lat"], weather_data["Clouds"], color = "steelblue", edgecolor = "black") plt.grid() plt.savefig(f"Images/City Latitude vs Cloudiness.png") # - # ## Latitude vs. Wind Speed Plot # + plt.title(f"City Latitude vs Wind Speed ({d1})") plt.ylabel("Wind Speed (mph)") plt.xlabel("Latitude") plt.scatter(weather_data["Lat"], weather_data["Wind Speed"], color = "steelblue", edgecolor = "black") plt.grid() plt.savefig(f"Images/City Latitude vs Wind Speed.png") # - # ## Linear Regression northern_hemisphere_df = weather_data.loc[weather_data["Lat"] >= 0] southern_hemisphere_df = weather_data.loc[weather_data["Lat"]< 0] # #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression x_values = pd.to_numeric(northern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(northern_hemisphere_df['Max Temp']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('Northern Hemisphere - Max Temp vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Northern Hemisphere - Max Temp vs. Latitude Linear Regression.png") plt.show() # #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression x_values = pd.to_numeric(southern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(southern_hemisphere_df['Max Temp']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('Southern Hemisphere - Max Temp vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Southern Hemisphere - Max Temp vs. Latitude Linear Regression.png") plt.show() # #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression x_values = pd.to_numeric(northern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(northern_hemisphere_df['Humidity']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png") plt.show() # #### Southern Hemisphere - Humidity (%) vs. 
Latitude Linear Regression x_values = pd.to_numeric(southern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(southern_hemisphere_df['Humidity']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png") plt.show() # #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression x_values = pd.to_numeric(northern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(northern_hemisphere_df['Clouds']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png") plt.show() # #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression x_values = pd.to_numeric(southern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(southern_hemisphere_df['Clouds']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png") plt.show() # #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression x_values = pd.to_numeric(northern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(northern_hemisphere_df['Wind Speed']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression.png") plt.show() # #### Southern Hemisphere - Wind Speed (mph) vs. 
Latitude Linear Regression x_values = pd.to_numeric(southern_hemisphere_df['Lat']).astype(float) y_values = pd.to_numeric(southern_hemisphere_df['Wind Speed']).astype(float) (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,regress_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="firebrick") plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression') plt.grid(True) plt.savefig("../Images/Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression.png") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="s4ljYpQNp50r" # ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/component_examples/dependency_parsing/NLU_untyped_dependency_parsing_example.ipynb) # # # # Untyped Dependency Parsing with NLU. # ![](https://nlp.johnsnowlabs.com/assets/images/dependency_parser.png) # # Each word in a sentence has a grammatical relation to other words in the sentence. # These relation pairs can be typed (i.e. subject or pronouns) or they can be untyped, in which case only the edges between the tokens will be predicted, withouth the label. # # With NLU you can get these relations in just 1 line of code! # # 1. Install Java and NLU # + id="SF5-Z-U4jukd" import os # ! apt-get update -qq > /dev/null # Install java # ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] # ! pip install nlu pyspark==2.4.7 > /dev/null # + [markdown] id="kHtLKNXDtZf5" # # 2. Load the Dependency model and predict some sample relationships # + id="7GJX5d6mjk5j" colab={"base_uri": "https://localhost:8080/", "height": 512} executionInfo={"status": "ok", "timestamp": 1604907666230, "user_tz": -60, "elapsed": 128480, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjqAD-ircKP-s5Eh6JSdkDggDczfqQbJGU_IRb4Hw=s64", "userId": "14469489166467359317"}} outputId="7b5f4b95-706e-4c79-cf4b-9abcf40b3a01" import nlu dependency_pipe = nlu.load('dep.untyped') dependency_pipe.predict('Untyped dependencies describe with their relationship a directed graph') # + [markdown] id="5lrDNzw3tcqT" # # 3.1 Download sample dataset # + id="gpeS8DWBlrun" colab={"base_uri": "https://localhost:8080/", "height": 607} executionInfo={"status": "ok", "timestamp": 1604907674240, "user_tz": -60, "elapsed": 136471, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjqAD-ircKP-s5Eh6JSdkDggDczfqQbJGU_IRb4Hw=s64", "userId": "14469489166467359317"}} outputId="c8a9f120-9018-4903-c44a-58f8b3b789b8" import pandas as pd # Download the dataset # ! 
wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/sarcasm/train-balanced-sarcasm.csv -P /tmp # Load dataset to Pandas df = pd.read_csv('/tmp/train-balanced-sarcasm.csv') df # + [markdown] id="uLWu8DG3tfjz" # ## 3.2 Predict on sample dataset # NLU expects a text column, thus we must create it from the column that contains our text data # + id="3V5l-B6nl43U" colab={"base_uri": "https://localhost:8080/", "height": 380} executionInfo={"status": "ok", "timestamp": 1604907690243, "user_tz": -60, "elapsed": 152462, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjqAD-ircKP-s5Eh6JSdkDggDczfqQbJGU_IRb4Hw=s64", "userId": "14469489166467359317"}} outputId="36136418-2a70-4184-83ba-59b40dd1d9ef" dependency_pipe = nlu.load('dep.untyped') df['text'] = df['comment'] dependency_predictions = dependency_pipe.predict(df['text'].iloc[0:1]) dependency_predictions # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- install.packages("haven") install.packages("plm") install.packages("lmtest") install.packages("sandwich") install.packages("stargazer") install.packages("AER") library(haven) library(plm) library(lmtest) library(sandwich) library(stargazer) library(AER) paper_data <- read_dta("ImaiTakarabeData.dta") paper_data_subset <- subset(paper_data, name != "tokyo" & name != "osaka" & name != "kanagawa" & name != "aichi" & name != "kyoto" & name != "hyogo" & name != "hokkaido") reg1 <- plm(GDP ~ gland + year, data = paper_data_subset, index = c("name", "year"), model = "within") coeftest(reg1, vcov = vcovHC, type = "HC2") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:MiniConda3] # language: python # name: conda-env-MiniConda3-py # --- # # PHPBB Scraping Data API - Sample Use # ## Configuration # Here's a possible use for the phpbb Scraper API, that will read latest topics from a forum. 
Setup to access the forum is done in a config.json, that has the following content: # ``` # { # "user": "", # "password": "", # "base":"https://", # "target_dir":"" # } # mind the double backslash and encoding: # eg meta_file = "C:\\Users\\JohnDoe\\Desktop\\TEST\\metafile.json" # ``` # The ScraperExecutor encapsulates generation of URL, log in, reading and processing of scraped forum pages # # ## Code is Documented :-) import phpbb_scraper from phpbb_scraper.scraper import ScraperExecutor # code is documented, use help() to find out more about implemented code # for inner structure, use dir(module) help(ScraperExecutor) # ## Sample Scrape # The following sample shows application of the ScraperExecutor class that reads latest topics and saves them as HTML files and updates the meta file (list of files downloaded) # + from phpbb_scraper.scraper import ScraperExecutor # config file path config_file = r"C:\\config.json" debug = False # run in debug mode steps_num = 2 # num of max web pages to be scraped # set configuration and instanciate web scraper executor = ScraperExecutor(config_file=config_file,debug=debug) # scrapes data from forum , saves them to files and returns metadata of each scraped page metadata = executor.retrieve_last_topics(steps_num=steps_num) # - # Display Of meta data for scraping of each Page: To make it unique, each scraped page (and forum posts later on, as well) gets hash ids alongside with file name so as to make it ready for analysis in subsequent steps # in case everything went fine, you can see the file metadata here (=what is appended to the metadata file) metadata # ## Reading Of Scraped Data # Read of scraped html data can be done with the read_topics_from_meta() function: It reads the metafile, accesses the referenced files there, and imports each post as dictionary. 
# + from phpbb_scraper.scraper import ScraperExecutor # read the urls from metafile and get post data as dictionary from stored html files config_file = r"C:\\config.json" # read the urls from metafile and get metadata from stored html files debug = False # run in debug mode # set configuration executor = ScraperExecutor(config_file=config_file,debug=debug) # read metafile and access locally stored html files topics = executor.read_topics_from_meta() #dictionary containing topics metadata print(f"Number of topics {len(topics)}, type of topics: {type(topics)}") print(f"Metadata Keys per Post: {list(topics[list(topics.keys())[0]].keys())}") # - # Having transformed posts into dictionary, everything is set for further analysis :-) # ## Scraped Data as HTML Table # ScraperExecutor method save_topics_as_html_table will read scraped data and is transforming them into tabular HTML data # + from phpbb_scraper.scraper import ScraperExecutor config_file = r"C:\\config.json" # read the urls from metafile and get metadata from stored html files debug = False # run in debug mode # set configuration executor = ScraperExecutor(config_file=config_file,debug=debug) # html file name and path html_file = r"posts_as_html_table" path = r"C:\\TEST" add_timestamp = False # create html table from dictionary and save file locally executor.save_topics_as_html_table(html_file=html_file,path=path,append_timestamp=add_timestamp) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import cosmo_metric_utils as cmu import pandas as pd import matplotlib.pyplot as plt import numpy as np import matplotlib.colors as colors import matplotlib.cm as cmx from mpl_toolkits.axes_grid1 import make_axes_locatable import os import glob # + #dist_loc_base = '/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/distances/omprior_0.01_flat/emille_samples/' #mu_photoIa_plasticc*' #file_extension = 'WFD' dist_loc_base = '/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/distances/omprior_0.01_flat/emille_samples/' #mu_photoIa_plasticc*' file_extension = 'DDF' # - from collections import OrderedDict remap_dict = OrderedDict({ 'perfect3000': 'Perfect', 'fiducial3000': 'Fiducial', 'random3000fail2998': 'Random', 'random3000': 'Random', 'all_objs_survived_SALT2_DDF' : 'All SALT', 'all_objs_survived_SALT2_WFD': 'All SALT', '50SNIa50SNII': 'SN-II 50', '68SNIa32SNII': 'SN-II 32', '72SNIa28SNII': 'SN-II 28', '75SNIa25SNII': 'SN-II 25', '90SNIa10SNII': 'SN-II 10', '95SNIa5SNII': 'SN-II 5', '98SNIa2SNII': 'SN-II 2', '99SNIa1SNII': 'SN-II 1', '50SNIa50SNIbc': 'SN-Ibc 50', '68SNIa32SNIbc': 'SN-Ibc 32', '75SNIa25SNIbc': 'SN-Ibc 25', '83SNIa17SNIbc': 'SN-Ibc 17', '90SNIa10SNIbc': 'SN-Ibc 10', '95SNIa5SNIbc': 'SN-Ibc 5', '98SNIa2SNIbc': 'SN-Ibc 2', '99SNIa1SNIbc': 'SN-Ibc 1', '50SNIa50SNIax': 'SN-Iax 50', '68SNIa32SNIax': 'SN-Iax 32', '75SNIa25SNIax': 'SN-Iax 25', '86SNIa14SNIax': 'SN-Iax 14', '90SNIa10SNIax': 'SN-Iax 10', '94SNIa6SNIax': 'SN-Iax 6', '95SNIa5SNIax': 'SN-Iax 5', '97SNIa3SNIax': 'SN-Iax 3', '98SNIa2SNIax': 'SN-Iax 2', '99SNIa1SNIax': 'SN-Iax 1', '71SNIa29SNIa-91bg': 'SN-Ia-91bg 29', '75SNIa25SNIa-91bg': 'SN-Ia-91bg 25', '90SNIa10SNIa-91bg': 'SN-Ia-91bg 10', '95SNIa5SNIa-91bg': 'SN-Ia-91bg 5', '98SNIa2SNIa-91bg': 'SN-Ia-91bg 2', '99SNIa1SNIa-91bg': 'SN-Ia-91bg 1', '99.8SNIa0.2SNIa-91bg': 'SN-Ia-91bg 0.2', '57SNIa43AGN': 'AGN 43', '75SNIa25AGN': 'AGN 25', 
'90SNIa10AGN': 'AGN 10', '94SNIa6AGN': 'AGN 6', '95SNIa5AGN': 'AGN 5', '98SNIa2AGN': 'AGN 2', '99SNIa1AGN': 'AGN 1', '99.9SNIa0.1AGN': 'AGN 0.1', '83SNIa17SLSN-I': 'SNLS-I 17', '90SNIa10SLSN-I': 'SNLS-I 10', '95SNIa5SLSN-I': 'SNLS-I 5', '98SNIa2SLSN-I': 'SNLS-I 2', '99SNIa1SLSN-I': 'SNLS-I 1', '99.9SNIa0.1SLSN': 'SNLS-I 0.1', '95SNIa5TDE': 'TDE 5', '98SNIa2TDE': 'TDE 2', '99SNIa1TDE': 'TDE 1', '99.6SNIa0.4TDE': 'TDE 0.4', '99.1SNIa0.9CART': 'CART 0.9', '99.7SNIa0.3CART': 'CART 0.3' }) all_shapes = {'SNIa-91bg': 'o', 'SNIax': 's', 'SNII': 'd', 'SNIbc': 'X', 'SLSN-I': 'v', 'AGN': '^', 'TDE': '<', 'KN': '>', 'CART': 'v'} # Mapping the percent contaminated to the colormap. ## size corresponds to remap_dict color_nums = np.array([1, 1, 1, 1, 1, 1, # Special 50, 32, 28, 25, 10, 5, 2, 1, # II 50, 32, 25, 17, 10, 5, 2, 1, # Ibc 50, 32, 25, 14, 10, 6, 5, 3, 2, 1, # Iax 29, 25, 10, 5, 2, 1, 1, # 91bg 43, 25, 10, 6, 5, 2, 1, 1, # AGN 17, 10, 5, 2, 1, 1, # SNLS 5, 2, 1, 1, # TDE 1, 1, # CART ]) # Color map rainbow = cm = plt.get_cmap('plasma_r') cNorm = colors.LogNorm(vmin=1, vmax=52) #colors.Normalize(vmin=0, vmax=50) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=rainbow) color_map = scalarMap.to_rgba(np.arange(1, 52)) # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(5, 10), sharey=True) tick_lbls = [] ax1.axvline(-1, color='c', ls='--') df = pd.read_csv(dist_loc_base + 'stan_input_salt2mu_lowz_withbias_perfect3000.csv') sig_perf = cmu.fisher_results(df['z'].values, df['muerr'].values)[0] ax1.axvspan(-1 - sig_perf[1], -1 + sig_perf[1], alpha=0.1, color='grey') ax2.axvline(0, color='k') file_base = dist_loc_base + 'stan_input_salt2mu_lowz_withbias_' i = 0 i_list = [] for j, (a, c) in enumerate(zip(remap_dict, color_nums)): class_ = str.split(remap_dict[a])[0] if '91bg' in class_: class_ = 'SNIa-91bg' else: class_ = class_.replace('-', '') #mfc='none' if 'DDF' in file_extension: if 'fiducial' in a: mfc = 'tab:blue' elif 'random' in a: mfc = 'tab:red' elif 'perfect' in a: mfc = 'k' else: mfc = color_map[c] if 'WFD' in file_extension: mfc = "none" try: file = glob.glob(file_base + a + '.csv') df = pd.read_csv(str(file[0])) sig = cmu.fisher_results(df['z'].values, df['muerr'].values)[0] if 'perfect' in a: ax1.plot(-1, -i, ms=10, color='k', marker='*', mfc=mfc) ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='k', mfc=mfc) elif 'random' in a: ax1.plot(-1, -i, ms=10, color='tab:red', marker='*', mfc=mfc) ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='tab:red') ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, 'o', color='tab:red', marker='*', ms=10, mfc=mfc) elif 'fiducial' in a: ax1.plot(-1, -i, ms=10, color='tab:blue', marker='*', mfc=mfc) ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='tab:blue') ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, color='tab:blue', marker='*', ms=10, mfc=mfc) else: ax1.plot(-1, -i, marker=all_shapes[class_], ms=10, color=color_map[c], mfc=mfc) ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color=color_map[c]) ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, color=color_map[c], marker=all_shapes[class_], ms=10, mfc=mfc) tick_lbls.append(remap_dict[a]) i_list.append(-i) i +=2 if 'random' in a or '99SNIa1' in a: if 'DDF' in file_extension: if 'AGN' in a or '91bg' in a or 'CART' in a: continue else: i_list.append(i) i += 1.6 tick_lbls.append('') elif 'SNIa0' in a and 'CART' not in a: i_list.append(-i) i += 1.6 tick_lbls.append('') except: print("Missing: ", a) tick_locs = tick_locs = 
i_list[::-1]#np.arange(-len(tick_lbls)+1, 1) ax1.set_yticks(tick_locs) ax1.set_yticklabels(tick_lbls[::-1], fontsize=13) ax1.set_ylim(i_list[-1]-0.7, i_list[0]+0.7) #(-len(tick_lbls)+0.5, 0.5) #ax1.set_xscale('log') ax1.set_xlabel('Fisher Matrix', fontsize=13) ax2.set_xlabel('Fractional difference', fontsize=13) yticks = ax1.yaxis.get_major_ticks() yticks2 = ax2.yaxis.get_major_ticks() #ticks = [-4, -11, -15, -21] #for t in ticks: # yticks[t].set_visible(False) # yticks2[t].set_visible(False) #plt.savefig('fisher_matrix_' + file_extension + '_20210603.pdf', bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + """ Author: Date: March 14th, 2016 Lorentz Transformation Matrix Calculator Linear transformation from one reference frame in R^4 to another in R^4. X = (t, x, y, z) is the 4-vector giving us the time and position of the moving object. X'= (t', x', y', z') gives us the vector of how the object appears to an observer. V is the velocity vector of the object with respect to the observer and will greatly influence the difference between X and X'. As V gets larger and approaches the speed of light, the matrix A stretches and contracts various aspects of the source to maintain a constant C. Lengths get shorter in the direction of travel, at least to an outside observr. Things moving at normal speeds don't experience large values, but once they start having velocities towards C=1, objects shrink a great deal and experience time differently then us. I followed the matrix on this website (http://www.physicspages.com/2011/06/22/lorentz-transformations-in-three-dimensions/) and used the knowledge I gained from my physics 351, special relativity to write this program. Input: Position 4-vector X = (t, x, y, z) in objects reference frame Velocity 3-vector V = (vx, vy, vz) between object and observer Output: X' = (t', x', y', z') in the observer's reference frame. This is how the obsever, observes the objects time and position, since the object experiences time differently when the two have very large velocities differences """ import numpy as np from scipy import linalg Position = np.zeros(4) #t, x, y, z Velocity = np.array([1.0, 0.0, 0.0]) C = 3.00*10**8 #speed of light def Gamma(num): """ Calculates Gamma factor from the velocity of the object with respect to a stationary observer. Input: Velocity or velocity components (int or array) Returns: Gamma """ if type(num) is int: v = num else: v = 0.0 for el in num: v += el**2 v = np.sqrt(v) G = np.sqrt(1-(v/C)**2) return 1.0 / G def Lorentz(Gamma, V): """ Creates a 4x4 Lorentz transformation matrix. Which is a linear transformation from R^4 to R^4. 
(t, x, y, z) to (t', x', y', z') """ A = np.zeros((4,4))#4x4 matrix vx = V[0] vy = V[1] vz = V[2] v = np.sqrt(vx**2 + vy**2 + vz**2) A[0,0] = Gamma A[1,1] = 1 + (Gamma-1)*((vx/v)**2) A[2,2] = 1 + (Gamma-1)*((vy/v)**2) A[3,3] = 1 + (Gamma-1)*((vz/v)**2) A[0,1] = A[1,0] = -vx*Gamma A[0,2] = A[2,0] = -vy*Gamma A[0,3] = A[3,0] = -vz*Gamma A[1,2] = A[2,1] = (Gamma-1)*(vx*vy)/(v**2) A[1,3] = A[3,1] = (Gamma-1)*(vx*vz)/(v**2) A[2,3] = A[3,2] = (Gamma-1)*(vy*vz)/(v**2) E_val, E_vec = linalg.eigh(A) return A , E_val, E_vec ##Evaluate for a few examples def Evaluate(A, X, V): """ Given a matrix A, vector X, solve AX=x """ B = np.zeros(4) for j in range(0,4): k = 0.0 for i in range(0,4): # U = {u1,u2,u3,u4} u = A[j,i]*X[i] # u1 = SUM(A[i]*X[i]) for ith element along the jth row k = k + u # linear combination right here B[j] = k return B X = [0, 1, 0, 0] V = [0.01, 0, 0] A, Eval, Evec = Lorentz(Gamma(V), V) x = Evaluate(A, X, V) ## Ax = X print "Examples:" print "First: In one reference frame the object is 1m long, but now in a new frame of reference at 1% the speed of light: " print "Original time: {}, New time: {}".format(X[0], x[0]) print "Original x-position: {}, new: {}".format(X[1], x[1]) print "Eigenvalues of Lorentz matrix: {}".format(str(Eval)) print "\n" X = [0, 1, 0, 0] V = [0.1, 0, 0] A, Eval, Evec = Lorentz(Gamma(V), V) x = Evaluate(A, X, V) ## Ax = X print "Second: In one reference frame the object flys 1 seconds and is 1m long, but now in a new frame of reference at 10% the speed of light: " print "Original time: {}, New time: {}".format(X[0], x[0]) print "Original x-position: {}, new: {}".format(X[1], x[1]) print "\n" print "Since the Lorentz Transformation matrix is a symmetric matrix, we can orthogonally diagonalize it." ##Orthogonaly Diagonalize A print "The matrix A is a square symmetric matrix and therefore, can be orthogoanlly diagonalized by A = P*D*P(transpose) " print "where D is a diagonal matrix with eigenvalues along the diagonal, and P has the corresponding eigenvectors as its columns:" P = Evec D = np.array([ [Eval[0],0,0,0], [0,Eval[1],0,0], [0,0,Eval[2],0], [0,0,0,Eval[3]] ]) PT = np.transpose(P) print "A= " print A print "D= " print D print "P= " print P X = [1.0, 1/np.sqrt(2), 1/np.sqrt(2), 0.0] #45 degree angle in the x-y plane V = [0.0, 0.5, 0.0] A, Eval, Evec = Lorentz(Gamma(V), V) x = Evaluate(A, X, V) ## Ax = X print "\n" print "Third: In one reference frame the object flys 1 seconds and is 1m long, 10% speed of light y direction: " print "Original x-position: {}, new: {}".format(X[1], x[1]) print "Original y-position: {}, new: {}".format(X[2], x[2]) print "Eigenvalues of Lorentz matrix: {}".format(str(Eval)) print "\n" X = [1.0, 1.0, 0.0, 0.0] V = [0.995, 0.0, 0.0] A, Eval, Evec = Lorentz(Gamma(V), V) x = Evaluate(A, X, V) ## Ax = X print "Fourth: In one reference frame the object flys 1 seconds and is 1m long, but now in a new frame of reference at 99.5% the speed of light: " print "Original time: {}, New time: {}".format(X[0], x[0]) print "Original x-position: {}, new: {}".format(X[1], x[1]) print "Eigenvalues of Lorentz matrix: {}".format(str(Eval)) ##Orthogonaly Diagonalize A print "The matrix A is a square symmetric matrix and therefore, can be orthogoanlly diagonalized by A = P*D*P(transpose) " print "where D is a diagonal matrix with eigenvalues along the diagonal, and P has the corresponding eigenvectors as its columns:" P = Evec D = np.array([ [Eval[0],0,0,0], [0,Eval[1],0,0], [0,0,Eval[2],0], [0,0,0,Eval[3]] ]) PT = np.transpose(P) print "A= " print A 
print "D= " print D print "P= " print P print "\n" X = [1.0, 1/np.sqrt(3), 1/np.sqrt(3), 1/np.sqrt(3)] V = [0.94, 0.0, 0.5] A, Eval, Evec = Lorentz(Gamma(V), V) x = Evaluate(A, X, V) ## Ax = X print "Fifth: In one reference frame the object flys 1 seconds and is a 1m side length cube, but now in a ref frame of 0.95*(The speed of light) in the x-z direction: " print "Original time: {}, New time: {}".format(X[0], x[0]) print "Original x-position: {}, new: {}".format(X[1], x[1]) print "Original y-position: {}, new: {}".format(X[2], x[2]) print "Original z-position: {}, new: {}".format(X[3], x[3]) print "Eigenvalues of Lorentz matrix: {}".format(str(Eval)) print "Now the cube's looks more like a large flat rectangle but is length contracted into a cube from the observers point of view." print "\n" ##Orthogonaly Diagonalize A print "\n" print "The matrix A is a square symmetric matrix and therefore, can be orthogoanlly diagonalized by A = P*D*P(transpose) " print "where D is a diagonal matrix with eigenvalues along the diagonal, and P has the corresponding eigenvectors as its columns:" P = Evec D = np.array([ [Eval[0],0,0,0], [0,Eval[1],0,0], [0,0,Eval[2],0], [0,0,0,Eval[3]] ]) PT = np.transpose(P) print "D= " print D print "P= " print P # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:the-lig] # language: python # name: conda-env-the-lig-py # --- # + import warnings warnings.filterwarnings('ignore') import pathlib import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy.stats import pearsonr from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error # - # %matplotlib inline plt.style.use('fivethirtyeight') plt.rcParams['axes.facecolor']='w' #plt.rcParams['axes.linewidth']=1 plt.rcParams['axes.edgecolor']='w' plt.rcParams['figure.facecolor']='w' plt.rcParams['savefig.facecolor']='w' #plt.rcParams['grid.color']='white' # Load data - we use PDBbind 2018 general to compute feature importance. 
# + rdkit_features = pd.read_csv(pathlib.Path('..', 'data', 'pdbbind_2018_general_rdkit_features_clean.csv'), index_col=0) rfscore_features = pd.read_csv(pathlib.Path('..', 'data', 'pdbbind_2018_general_rfscore_features_clean.csv'), index_col=0) nnscore_features = pd.read_csv(pathlib.Path('..', 'data', 'pdbbind_2018_general_binana_features_clean.csv'), index_col=0) binding_data = pd.read_csv(pathlib.Path('..', 'data', 'pdbbind_2018_general_binding_data_clean.csv'), index_col=0, squeeze=True) binding_data = binding_data.rename('pK') # re-label RF-Score features to use element symbol instead of atomic number element_symbol = { 6: 'C', 7: 'N', 8: 'O', 9: 'F', 15: 'P', 16: 'S', 17: 'Cl', 35: 'Br', 53: 'I' } mapper = lambda f: element_symbol[int(f.split('.')[0])] + '-' + element_symbol[int(f.split('.')[1])] rfscore_features = rfscore_features.rename(mapper=mapper, axis='columns') all_features = pd.concat([rdkit_features, rfscore_features, nnscore_features], axis='columns') feature_sets = { 'Vina': pd.Index(['vina_gauss1', 'vina_gauss2', 'vina_hydrogen', 'vina_hydrophobic', 'vina_repulsion', 'num_rotors']), 'RDKit': rdkit_features.columns, 'RF-Score': rfscore_features.columns, 'NNScore 2.0': nnscore_features.columns, } feature_sets['RF-Score v3'] = feature_sets['RF-Score'].union(feature_sets['Vina']) for f in ['Vina', 'RF-Score', 'RF-Score v3', 'NNScore 2.0']: feature_sets[f'{f} + RDKit'] = feature_sets[f].union(feature_sets['RDKit']) # Vina, and hence anything that includes its terms, already uses the number of rotatable bonds, so we drop the RDKit version if f != 'RF-Score': feature_sets[f'{f} + RDKit'] = feature_sets[f'{f} + RDKit'].drop(['NumRotatableBonds']) core_sets = {} for year in ['2007', '2013', '2016']: with open(pathlib.Path('..', 'data', f'pdbbind_{year}_core_pdbs.txt')) as f: core_sets[year] = sorted([l.strip() for l in f]) core_sets['all'] = [pdb for pdb in core_sets['2007']] core_sets['all'] = core_sets['all'] + [pdb for pdb in core_sets['2013'] if pdb not in core_sets['all']] core_sets['all'] = core_sets['all'] + [pdb for pdb in core_sets['2016'] if pdb not in core_sets['all']] test_sets = {c: pd.Index(core_sets[c]).intersection(all_features.index) for c in core_sets} test = pd.Index(core_sets['all']) train = all_features.index.difference(test) # - # Fit RFs and record feature importance. # + feature_importance = {} relative_importance = {} oob_score = {} for f in feature_sets: features = feature_sets[f] rf = RandomForestRegressor(n_estimators=500, max_features=0.33, random_state=42, n_jobs=64, oob_score=True) X_train = all_features.loc[train, features] y_train = binding_data.loc[train] rf.fit(X_train, y_train) feature_importance[f] = rf.feature_importances_ oob_score[f] = pearsonr(y_train.values.ravel(), rf.oob_prediction_)[0] feature_importance[f] = pd.Series(data=feature_importance[f], index=features) feature_importance[f] = feature_importance[f].sort_values(ascending=False) relative_importance[f] = feature_importance[f] / feature_importance[f].max() # - # Plot relative feature importances. 
# + import matplotlib.patches as mpatches colours = plt.rcParams['axes.prop_cycle'].by_key()['color'][:3] cmap = {f: colours[0] for f in feature_sets['RDKit']} for f in feature_sets['RF-Score']: cmap[f] = colours[2] for f in feature_sets['NNScore 2.0']: cmap[f] = colours[2] for f in feature_sets['Vina']: cmap[f] = colours[1] labels = ['RDKit', 'AutoDock Vina', 'RF-Score/NNScore 2.0'] handles = [mpatches.Patch(color=c, label=l) for c, l in zip(colours, labels)] to_plot = relative_importance['RDKit'].iloc[:20] fig, ax = plt.subplots(1,1,figsize=(6,6)) bar_colours = [cmap[l] for l in to_plot.index] to_plot.plot( kind='barh', color=bar_colours, ax=ax ) ax.invert_yaxis() ax.set_ylabel('Feature') ax.set_xlabel('Relative importance') fig.tight_layout() fig.show() fig.savefig(pathlib.Path('..', 'figures', 'feature_importance_rdkit.jpg'), dpi=350, bbox_inches='tight') fig, axes = plt.subplots(2,2,figsize=(12,12)) for f, ax in zip(['Vina', 'RF-Score', 'RF-Score v3', 'NNScore 2.0'], axes.flatten()): to_plot = relative_importance[f].iloc[:20] bar_colours = [cmap[l] for l in to_plot.index] to_plot.plot( kind='barh', color=bar_colours, ax=ax ) ax.invert_yaxis() ax.set_title(f) ax.set_ylabel('Feature') ax.set_xlabel('Relative importance') fig.tight_layout() axes[0][0].legend(handles[1:], labels[1:], title='Feature source', loc='upper left', bbox_to_anchor=(0.4, 1.25),ncol=3) fig.show() fig.savefig(pathlib.Path('..', 'figures', 'feature_importance_sb_sfs.jpg'), dpi=350, bbox_inches='tight') fig, axes = plt.subplots(2,2,figsize=(12,12)) for f, ax in zip(['Vina + RDKit', 'RF-Score + RDKit', 'RF-Score v3 + RDKit', 'NNScore 2.0 + RDKit'], axes.flatten()): to_plot = relative_importance[f].iloc[:20] bar_colours = [cmap[l] for l in to_plot.index] to_plot.plot( kind='barh', color=bar_colours, ax=ax ) ax.invert_yaxis() ax.set_title(f) ax.set_ylabel('Feature') ax.set_xlabel('Relative importance') fig.tight_layout() axes[0][0].legend(handles, labels, title='Feature source', loc='upper left', bbox_to_anchor=(0.2, 1.25),ncol=3) fig.show() fig.savefig(pathlib.Path('..', 'figures', 'feature_importance_augmented_sf.jpg'), dpi=350, bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from datetime import datetime import os import pandas as pd df = pd.DataFrame({'cust_id' : ['101', '102', '103','101','102', '101', '101'], 'login_ts': ['2:01', '2:02', '2:03','2:05','2:10', '2:10','2:11']}) df['login_ts'] = pd.to_datetime(df.login_ts) df df = df.sort_values(by=['login_ts']) df.iloc[1] # + n , _ = df.shape dic = {} order = [] index, i = 0, 0 while i < n: cst_id, login_time = df.iloc[i] i + = 1 if cst_id in dic.keys() and (login_time- dic[cst_id][0]).seconds/60<5: order.append(dic[cst_id][1]) dic[cst_id] = (login_time, dic[cst_id][1]) else: index + =1 dic[cst_id]= (login_time, index) order.append(index) # - dic df['order']=order df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} #

# # CS4618: Artificial Intelligence I
# # Underfitting and Overfitting Remedies
# ## School of Computer Science and Information Technology
# ## University College Cork

# + [markdown] slideshow={"slide_type": "slide"}
# ## Initialization
    # $\newcommand{\Set}[1]{\{#1\}}$ # $\newcommand{\Tuple}[1]{\langle#1\rangle}$ # $\newcommand{\v}[1]{\pmb{#1}}$ # $\newcommand{\cv}[1]{\begin{bmatrix}#1\end{bmatrix}}$ # $\newcommand{\rv}[1]{[#1]}$ # $\DeclareMathOperator{\argmax}{arg\,max}$ # $\DeclareMathOperator{\argmin}{arg\,min}$ # $\DeclareMathOperator{\dist}{dist}$ # $\DeclareMathOperator{\abs}{abs}$ # - # %load_ext autoreload # %autoreload 2 # %matplotlib inline # + import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.linear_model import Lasso from sklearn.linear_model import Ridge from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error import matplotlib.gridspec as gridspec from ipywidgets import interactive # - #

# ## Introduction
#
# - You are building a predictor but its performance is not good enough. What should you do?
# - Some of the options include:
#     - gather more training examples;
#     - remove noise in the training examples;
#     - add more features or remove features;
#     - change model: move to a more complex model or maybe to a less complex model;
#     - stick with your existing model but add constraints to it to reduce its complexity or
#       remove constraints to increase its complexity.
# - Surprisingly,
#     - gathering more training examples may not help;
#     - adding more features may in some cases worsen the performance;
#     - changing to a more complex model may in some cases worsen the performance;
#
#   ...it all depends on what is causing the poor performance (underfitting or overfitting).

# ## Underfitting
#
# - If your model underfits:
#     - gathering more training examples will not help.
# - Your main options are:
#     - change model: move to a more complex model;
#     - collect data for additional features that you hope will be more predictive;
#     - create new features which you hope will be predictive (see feature engineering in the next lecture);
#     - stick with your existing model but remove constraints (if you can) to increase its complexity.
# - What additional features do you think would be predictive of Cork property prices?

# ## Overfitting
#
# - If your model overfits, your main options are:
#     - gather more training examples;
#     - remove noise in the training examples;
#     - change model: move to a less complex model;
#     - simplify by reducing the number of features;
#     - stick with your existing model but add constraints (if you can) to reduce its complexity.

# ## Regularization
#
# - If your model underfits, we saw that one option is:
#     - stick with your existing model but remove constraints (if you can) to increase its complexity.
# - If your model overfits, we saw that one option is:
#     - stick with your existing model but add constraints (if you can) to reduce its complexity.
# - Constraining a model to make it less complex and reduce the risk of overfitting is called
#   regularization. Regularization is a general concept, but we will explain it in the
#   case of linear regression in the rest of this lecture.

# ## Regularization for Linear Regression
#
# - Linear models are among the least complex models.
#     - Hence, we normally associate them with underfitting.
# - But even linear regression might overfit the training data.
# - If you are overfitting, you must reduce the degrees of freedom.
#     - One way is to discard some features.
#         - Then you have fewer coefficients ($\v{\beta}$) that you can modify.
#     - Another way is to constrain the range of values that the coefficients can take:
#         - E.g. force the learning algorithm to only choose small values (close to zero).
#         - Recall that OLS linear regression finds coefficients $\v{\beta}$ that minimize
#           $$J(\v{X}, \v{y}, h_{\v{\beta}}) = \frac{1}{2m}\sum_{i=1}^m(h_{\v{\beta}}(\v{x}^{(i)}) - \v{y}^{(i)})^2$$
#           Regularization imposes a penalty on the size of the coefficients.
#           This is how we regularize linear regression (a small numeric sketch of $J$ follows this section).
#         - In effect, it penalizes hypotheses that fit the data too well.
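# The following is a small numeric sketch (added for illustration; the synthetic data and
# variable names are arbitrary assumptions, not part of the lecture) that evaluates the OLS
# cost $J$ defined above for a fitted model.

# +
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X_cost = rng.normal(size=(20, 2))
y_cost = X_cost @ np.array([1.5, -2.0]) + rng.normal(scale=0.5, size=20)

h = LinearRegression().fit(X_cost, y_cost)
residuals = h.predict(X_cost) - y_cost
# J = (1/2m) * sum of squared residuals, matching the formula above
J = np.sum(residuals ** 2) / (2 * len(y_cost))
print(J)
# -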

# ## Lasso Regression: Using the $\cal{l}_1$-norm
#
# - Lasso Regression:
#     - 'Lasso' stands for 'least absolute shrinkage and selection operator', but this doesn't matter!
#     - We penalize by the $\cal{l}_1$-norm of $\v{\beta}$, which is simply the sum of their
#       absolute values, i.e.
#       $$\sum_{j=1}^n|\v{\beta}_j|$$
#     - (Minor point: we don't penalize $\v{\beta}_0$, which is why $j$ starts at 1.)
# - So Lasso Regression finds the $\v{\beta}$ that minimizes
#   $$J(\v{X}, \v{y}, h_{\v{\beta}}) = \frac{1}{2m}\sum_{i=1}^m(h_{\v{\beta}}(\v{x}^{(i)}) - \v{y}^{(i)})^2
#   + \lambda\sum_{j=1}^n|\v{\beta}_j|$$
# - $\lambda$ is called the 'regularization parameter'.
#     - It controls how much penalization we want, and this determines the balance between the
#       two parts of the modified loss function: fitting the data versus shrinking the parameters.
#     - As $\lambda \rightarrow 0$, Lasso Regression gets closer to being OLS Linear Regression.
#     - When $\lambda = 0$, Lasso Regression is the same as OLS Linear Regression.
#     - When $\lambda \rightarrow \infty$, penalties are so great that all the coefficients will tend
#       to zero: the only way to minimize the loss function will be to make the coefficients
#       as small as possible. It's likely that in this case we will underfit the data.
# - So, for regularization to work well, we must choose the value of $\lambda$ carefully.
#     - So what kind of thing is $\lambda$?
# - An important observation about Lasso Regression (see the sketch after this list):
#     - As $\lambda$ grows, some of the $\v{\beta}_j$ will be driven to zero.
#     - This means that the model that it learns treats some features as irrelevant.
#     - Hence, it performs some feature selection too.
#     - Compare this with Ridge Regression below.
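# The sketch below (added for illustration; the synthetic dataset and `alpha` value are
# arbitrary assumptions, not part of the lecture) shows this feature selection effect:
# with irrelevant features present, Lasso typically sets some coefficients exactly to zero,
# while Ridge merely shrinks them.

# +
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 10 features, only 3 of which actually influence y
X_demo, y_demo = make_regression(n_samples=100, n_features=10, n_informative=3,
                                 noise=10.0, random_state=0)

lasso_demo = Lasso(alpha=1.0).fit(X_demo, y_demo)
ridge_demo = Ridge(alpha=1.0).fit(X_demo, y_demo)

print("Lasso coefficients that are exactly zero:", np.sum(lasso_demo.coef_ == 0))
print("Ridge coefficients that are exactly zero:", np.sum(ridge_demo.coef_ == 0))
# -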

# ## Implementing Lasso Regression
#
# - There is no equivalent to the Normal Equation.
# - Even Gradient Descent has a problem:
#     - The Lasso loss function is not differentiable at $\v{\beta}_i = 0$.
#     - scikit-learn uses an approach called 'coordinate descent' (details unimportant).
#         - There is a special class, `Lasso`.
#         - Or you can use `SGDRegressor` with `penalty="l1"`.
#         - They both refer to $\lambda$ as `alpha`!
#     - Scaling is usually advised (a short sketch of both options follows this list).
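# A minimal sketch of both options (added for illustration; the `alpha` value is an arbitrary
# assumption): each pipeline scales the features first, as advised above, and both estimators
# call $\lambda$ `alpha`.

# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso, SGDRegressor

lasso_pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("predictor", Lasso(alpha=0.1)),  # solved by coordinate descent
])

sgd_l1_pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("predictor", SGDRegressor(penalty="l1", alpha=0.1)),  # stochastic gradient descent with an l1 penalty
])

# Either pipeline is then fitted in the usual way, e.g. lasso_pipeline.fit(X_train, y_train)
# -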

# ## Ridge Regression: Using the $\cal{l}_2$-norm
#
# - Ridge Regression:
#     - We penalize by the $\cal{l}_2$-norm, which is simply the sum of the squares of the
#       coefficients, i.e.
#       $$\sum_{j=1}^n\v{\beta}_j^2$$
#       (Strictly speaking, the $\cal{l}_2$-norm is the square root of the sum of squares.)
# - So Ridge Regression finds the $\v{\beta}$ that minimizes
#   $$J(\v{X}, \v{y}, h_{\v{\beta}}) = \frac{1}{2m}\sum_{i=1}^m(h_{\v{\beta}}(\v{x}^{(i)}) - \v{y}^{(i)})^2
#   + \lambda\sum_{j=1}^n\v{\beta}_j^2$$
# - Both Lasso and Ridge Regression shrink the values of the coefficients.
#     - But, as we mentioned, Lasso Regression may additionally result in coefficients being set to zero.
#     - This does not happen with Ridge Regression.
#     - Optionally, consult section 3.4.3 of The Elements of Statistical Learning
#       by Hastie, Tibshirani and Friedman (available online) for an explanation.
#     - One observation from the book is that, roughly speaking, Lasso Regression shrinks the
#       coefficients by approximately the same constant amount (unless they are so small
#       that they get shrunk to zero), whereas, again roughly speaking, Ridge Regression shrinks the
#       coefficients by approximately the same proportion.

# ## Implementing Ridge Regression
#
# - There is an equivalent to the Normal Equation (solved, e.g., by Cholesky decomposition).
#     - Take the gradient, set it equal to zero, and solve for $\v{\beta}$ (details unimportant):
#       $$\v{\beta} = (\v{X}^T\v{X} + \lambda\v{I})^{-1}\v{X}^T\v{y}$$
#     - In the above, $\v{I}$ is the $(n+1) \times (n+1)$ identity matrix, i.e. all zeros except for
#       the main diagonal, which is all ones. (In fact, for consistency with what we were doing above,
#       where we chose not to penalize $\v{\beta}_0$, you want a zero in the top left, so this is not
#       really the identity matrix.)
#     - Also, you don't need to implement this with the pseudo-inverse. It's possible to prove that,
#       provided $\lambda > 0$, $\v{X}^T\v{X} + \lambda\v{I}$ will be invertible.
#       (A NumPy sketch of this closed form follows this list.)
# - Alternatively, use Gradient Descent:
#     - The update rule for $\v{\beta}_j$ for all $j$ except $j = 0$ becomes:
#       $$\v{\beta}_j \gets \v{\beta}_j -
#       \alpha(\frac{1}{m}\sum_{i=1}^m(h_{\v{\beta}}(\v{x}^{(i)}) - \v{y}^{(i)}) \times \v{x}_j^{(i)} +
#       \frac{\lambda}{m}\v{\beta}_j)$$
#     - We can re-arrange this to:
#       $$\v{\beta}_j \gets \v{\beta}_j(1 - \alpha\frac{\lambda}{m}) -
#       \alpha\frac{1}{m}\sum_{i=1}^m(h_{\v{\beta}}(\v{x}^{(i)}) - \v{y}^{(i)}) \times \v{x}_j^{(i)}$$
#       which helps to show why this shrinks $\v{\beta}_j$.
#     - In scikit-learn, there is a special class, `Ridge`.
#         - You can set its `solver` parameter to choose different methods, or leave it as the
#           default, `auto`.
#     - Or you can use `SGDRegressor` with `penalty="l2"`.
# - Scaling is usually advised.
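# A NumPy sketch of the closed-form solution above (added for illustration; the synthetic data,
# variable names and value of $\lambda$ are arbitrary assumptions). The top-left entry of the
# penalty matrix is set to zero so that $\v{\beta}_0$ is not penalized, as discussed above.

# +
import numpy as np

def ridge_closed_form(X, y, lam):
    m, n = X.shape
    X1 = np.hstack([np.ones((m, 1)), X])  # prepend a column of ones for the intercept beta_0
    penalty = lam * np.eye(n + 1)
    penalty[0, 0] = 0.0                   # do not penalize the intercept
    # Solve (X^T X + lambda I) beta = X^T y rather than forming the inverse explicitly
    return np.linalg.solve(X1.T @ X1 + penalty, X1.T @ y)

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(50, 3))
y_demo = X_demo @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)
print(ridge_closed_form(X_demo, y_demo, lam=1.0))
# -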

# ## Illustrating the Effects of Lasso and Ridge Regression
#
# - We'll generate a random, non-linear dataset.
# - Then we'll fit an unregularized linear model and two regularized models (Lasso and Ridge).
    # + def make_dataset(m, func, error): X = np.random.random(m) y = func(X, error) return X.reshape(m, 1), y def f(x, error = 1.0): y = 10 - 1 / (x + 0.1) if error > 0: y = np.random.normal(y, error) return y # - X, y = make_dataset(50, f, 1.0) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = np.random) # + # We don't need to scale here because there is only one feature ols = LinearRegression() ols.fit(X_train, y_train) y_predicted_ols = ols.predict(X_test) mse_ols = mean_squared_error(y_predicted_ols, y_test) lasso = Lasso(alpha=1.0) lasso.fit(X_train, y_train) y_predicted_lasso = lasso.predict(X_test) mse_lasso = mean_squared_error(y_predicted_lasso, y_test) ridge = Ridge(alpha=1.0) ridge.fit(X_train, y_train) y_predicted_ridge = ridge.predict(X_test) mse_ridge = mean_squared_error(y_predicted_ridge, y_test) # + # Set up the three subplots fig = plt.figure(figsize=(14, 4.5)) gs = gridspec.GridSpec(1, 3) # Leftmost diagram: OLS ax0 = plt.subplot(gs[0]) plt.title("OLS Linear Regression\nMSE: %.3f\nIntercept: %.3f\nCoefficient: %.3f" % (mse_ols, ols.intercept_, ols.coef_[0])) plt.xlabel("Feature") plt.ylabel("MSE") plt.ylim(-4, 14) ax0.scatter(X_train, y_train, color = "green") ax0.plot(X_test, y_predicted_ols, color = "blue") # Middle diagram: Lasso ax1 = plt.subplot(gs[1]) plt.title("Lasso Regression\nMSE: %.3f\nIntercept: %.3f\nCoefficient: %.3f" % (mse_lasso, lasso.intercept_, lasso.coef_[0])) plt.xlabel("Feature") plt.ylabel("MSE") plt.ylim(-4, 14) ax1.scatter(X_train, y_train, color = "green") ax1.plot(X_test, y_predicted_lasso, color = "blue") # Righmost diagram: Ridge ax2 = plt.subplot(gs[2]) plt.title("Ridge Regression\nMSE: %.3f\nIntercept: %.3f\nCoefficient: %.3f" % (mse_ridge, ridge.intercept_, ridge.coef_[0])) plt.xlabel("Feature") plt.ylabel("MSE") plt.ylim(-4, 14) ax2.scatter(X_train, y_train, color = "green") ax2.plot(X_test, y_predicted_ridge, color = "blue") fig.tight_layout() plt.show() # - #
# - Here is an interactive version so that we can play with the
#   regularization hyperparameter ($\lambda$, but called alpha) to see how it
#   affects the fit.
    # + def plot_model(model, alpha): plt.figure() plt.title("%s with lambda (alpha) = %.1f" % (model, alpha)) plt.xlabel("Feature") plt.ylabel("MSE") plt.ylim(-4, 14) plt.scatter(X_train, y_train, color = "green") if model == "lasso": model = Lasso(alpha) else: model = Ridge(alpha) model.fit(X_train, y_train) y_predicted = model.predict(X_test) plt.plot(X_test, y_predicted, color = "blue") plt.show() interactive_plot = interactive(plot_model, {'manual': True}, scale=True, alpha=(0,3,.1), model=["lasso", "ridge"]) interactive_plot # - #
# - Regularization is a response to overfitting. The problem with the example above is that we
#   are regularizing a model (linear regression) on a dataset that it underfits!
# - To see the value of regularization, let's regularize a model that does overfit.
#   Let's regularize Polynomial Regression with degree 30.
    # + def plot_model(model, alpha): plt.figure() plt.title("%s with lambda (alpha) = %.1f" % (model, alpha)) plt.xlabel("Feature") plt.ylabel("MSE") plt.ylim(-4, 14) plt.scatter(X_train, y_train, color = "green") if model == "lasso": model = Pipeline([("poly", PolynomialFeatures(degree=30, include_bias=False)), ("predictor", Lasso(alpha))]) else: model = Pipeline([("poly", PolynomialFeatures(degree=30, include_bias=False)), ("predictor", Ridge(alpha))]) model.fit(X_train, y_train) y_predicted = model.predict(X_test) test_sorted = sorted(zip(X_test, y_predicted)) plt.plot([x for x, _ in test_sorted], [y_predicted for _, y_predicted in test_sorted], color = "blue") plt.show() interactive_plot = interactive(plot_model, {'manual': True}, scale=True, alpha=(0,3,.1), model=["lasso", "ridge"]) interactive_plot # - #

# ## Concluding Remarks
#
# - For completeness, we mention Elastic Net, which combines Lasso and Ridge regularization, with yet
#   another hyperparameter to control the balance between the two (a short sketch follows this list).
# - Using some regularization is usually better than none, and Ridge is a good default.
# - But we now have an extra hyperparameter whose value we must choose.
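# A one-line sketch of Elastic Net (added for illustration; the hyperparameter values are
# arbitrary assumptions): scikit-learn's `ElasticNet` mixes the two penalties, with `alpha`
# again playing the role of $\lambda$ and `l1_ratio` controlling the balance between the
# $\cal{l}_1$ and $\cal{l}_2$ terms.

# +
from sklearn.linear_model import ElasticNet

enet = ElasticNet(alpha=0.1, l1_ratio=0.5)  # l1_ratio=1.0 behaves like Lasso, l1_ratio=0.0 like Ridge
# -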
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" colab={"base_uri": "https://localhost:8080/", "height": 73} colab_type="code" id="ZupQDr9J-icw" outputId="615c2281-5a37-4c21-f9ba-5a2a0d8ff70e" import pandas as pd import numpy as np import scipy import matplotlib.pyplot as plt import seaborn import cv2 as cv import nibabel as nib import pickle import glob import imgaug as ia import imgaug.augmenters as iaa import tqdm import gc, os import warnings import tensorflow as tf from keras import backend as K from keras import losses, metrics from keras import optimizers from keras import callbacks from keras.models import Model from keras.layers import Input, BatchNormalization, Activation, Dense, Dropout from keras.layers import concatenate, Conv2D, MaxPooling2D, Conv2DTranspose from keras.layers import Multiply, UpSampling2D from sklearn.model_selection import train_test_split from sklearn.cluster import KMeans from skimage import morphology from skimage import measure #import keras_segmentation as ks warnings.filterwarnings('ignore') # %matplotlib inline print("Version: ", tf.version.VERSION) physical_devices = tf.config.list_physical_devices() print(physical_devices) # + [markdown] colab_type="text" id="26-GKjVy-2d6" # ## 1. Loading data # + datadir = '../training-dir/' glob_search = os.path.join(datadir, 'patient*') train_files = sorted(glob.glob(glob_search)) print('num of train patients {}'.format(len(train_files))) # + img_size = 128 cts_all = [] lungs_all = [] infects_all = [] for fnum in tqdm.tqdm(range(len(train_files))) : glob_search = os.path.join(train_files[fnum], '*ctscan*.nii') ct_file = glob.glob(glob_search) cts = nib.load(ct_file[0]) data_cts = cts.get_fdata() glob_search = os.path.join(train_files[fnum], '*lung*.nii') lung_file = glob.glob(glob_search) lung_masks = nib.load(lung_file[0]) data_lungs = lung_masks.get_fdata() glob_search = os.path.join(train_files[fnum], '*infection*.nii') infect_file = glob.glob(glob_search) infect_masks = nib.load(infect_file[0]) data_infects = infect_masks.get_fdata() height, width, slices = data_cts.shape sel_slices = range(round(slices*0.2), round(slices*0.8)) data_cts = np.rot90(np.array(data_cts)) data_lungs = np.rot90(np.array(data_lungs)) data_infects = np.rot90(np.array(data_infects)) data_cts = np.reshape(np.rollaxis(data_cts, 2),(slices,height,width)) data_lungs = np.reshape(np.rollaxis(data_lungs, 2),(slices,height,width)) data_infects = np.reshape(np.rollaxis(data_infects, 2),(slices,height,width)) data_cts = data_cts[sel_slices, :, :] data_lungs = data_lungs[sel_slices, :, :] data_infects = data_infects[sel_slices, :, :] for ii in range(data_cts.shape[0]): img = cv.resize(data_cts[ii], dsize=(img_size, img_size), interpolation=cv.INTER_AREA) img = np.reshape(img, (img_size, img_size, 1)) cts_all.append(img) img = cv.resize(data_lungs[ii], dsize=(img_size, img_size), interpolation=cv.INTER_AREA) img = np.reshape(img, (img_size, img_size, 1)) lungs_all.append(img) img = cv.resize(data_infects[ii], dsize=(img_size, img_size), interpolation=cv.INTER_AREA) img = np.reshape(img, (img_size, img_size, 1)) infects_all.append(img) # - print(len(cts_all)) # + [markdown] colab_type="text" id="BOj54mfnQ9eT" # # 2. 
Data Augmentation # + colab={} colab_type="code" id="rNld7hkEy7qj" ia.seed(1) seq = iaa.Sequential([ iaa.Fliplr(0.5), # horizontal flips iaa.Flipud(0.5), # vertical flips # Apply affine transformations to each image. # Scale/zoom them, translate/move them, rotate them and shear them. iaa.Affine( scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}, # scale images translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)}, rotate=(-15, 15), shear=(-15, 15) ) ], random_order=True) # apply augmenters in random order # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="mmUWCRR1y7t9" outputId="6fa91350-1420-48fa-9f60-339e24def85d" num_augs = round(len(cts_all)) rand_idx = np.random.randint(0, len(cts_all), size=num_augs) sample_cts = [cts_all[ii] for ii in rand_idx] sample_lungs = [lungs_all[ii] for ii in rand_idx] sample_infects = [infects_all[ii] for ii in rand_idx] # - seq_det = seq.to_deterministic() cts_aug = seq_det.augment_images(sample_cts) lungs_aug = seq_det.augment_images(sample_lungs) infects_aug = seq_det.augment_images(sample_infects) with tf.device('/cpu:0') : cts_all = tf.convert_to_tensor(cts_all) cts_aug = tf.convert_to_tensor(np.asarray(cts_aug)) lungs_all = tf.convert_to_tensor(np.asarray(lungs_all)) lungs_aug = tf.convert_to_tensor(np.asarray(lungs_aug)) infects_all = tf.convert_to_tensor(np.asarray(infects_all)) infects_aug = tf.convert_to_tensor(np.asarray(infects_aug)) cts = tf.concat([cts_all, cts_aug], axis=0) lungs = tf.concat([lungs_all, lungs_aug], axis=0) infects = tf.concat([infects_all, infects_aug], axis=0) # + with tf.device('/cpu:0') : indices = tf.range(start=0, limit=cts.shape[0], dtype=tf.int32) shuffled_indices = tf.random.shuffle(indices) cts = tf.gather(cts, shuffled_indices) lungs = tf.gather(lungs, shuffled_indices) infects = tf.gather(infects, shuffled_indices) print(cts.shape, lungs.shape, infects.shape) # - with open('../outputs/augmented_data.cp', 'wb') as myfile : pickle.dump({'cts': cts, 'lungs': lungs, 'infects': infects}, myfile) # # 3. Defining evaluation metrics # # The most commonly used metrics for image segmentation are the IoU and the Dice Coefficient. The Dice Coefficient is 2 times the Area of overlap divided by the total number of pixels in both images (true and predicted). The Dice coefficient is very similar to the IoU, and they both range from 0 to 1, with 1 signifying the greatest similarity between predicted and truth. # # See https://github.com/keras-team/keras/issues/9395 # # + colab={} colab_type="code" id="EAI13B4H9F_X" def dice(y_true, y_pred, smooth=1): intersection = K.sum(y_true * y_pred, axis=[1,2,3]) union = K.sum(y_true, axis=[1,2,3]) + K.sum(y_pred, axis=[1,2,3]) dice = K.mean((2. * intersection + smooth)/(union + smooth), axis=0) return dice def dice_loss(y_true, y_pred): loss = 1 - dice(y_true, y_pred) return loss def bce_dice_loss(y_true, y_pred): #Binary Cross-Entropy loss = 0.5*losses.binary_crossentropy(y_true, y_pred) + 0.5*dice_loss(y_true, y_pred) return loss def tversky_loss(y_true, y_pred): alpha, beta = 0.5, 0.5 ones = K.ones(K.shape(y_true)) p0 = y_pred p1 = ones-y_pred g0 = y_true g1 = ones-y_true num = K.sum(p0*g0, (0,1,2)) den = num + alpha*K.sum(p0*g1,(0,1,2)) + beta*K.sum(p1*g0,(0,1,2)) T = K.sum(num/den) Ncl = K.cast(K.shape(y_true)[-1], 'float32') return Ncl-T def weighted_bce_loss(y_true, y_pred, weight): epsilon = 1e-7 y_pred = K.clip(y_pred, epsilon, 1. - epsilon) logit_y_pred = K.log(y_pred / (1. - y_pred)) loss = weight * (logit_y_pred * (1. - y_true) + K.log(1. 
+ K.exp(-K.abs(logit_y_pred))) + K.maximum(-logit_y_pred, 0.)) return K.sum(loss) / K.sum(weight) def weighted_dice_loss(y_true, y_pred, weight): smooth = 1. w, m1, m2 = weight, y_true, y_pred intersection = (m1*m2) score = (2.*K.sum(w*intersection) + smooth) / (K.sum(w*m1) + K.sum(w*m2) + smooth) loss = 1. - K.sum(score) return loss def weighted_bce_dice_loss(y_true, y_pred): y_true = K.cast(y_true, 'float32') y_pred = K.cast(y_pred, 'float32') averaged_mask = K.pool2d(y_true, pool_size=(50, 50), strides=(1, 1), padding='same', pool_mode='avg') weight = K.ones_like(averaged_mask) w0 = K.sum(weight) weight = 5. * K.exp(-5.*K.abs(averaged_mask - 0.5)) w1 = K.sum(weight) weight *= (w0 / w1) loss = 0.5*weighted_bce_loss(y_true, y_pred, weight) + 0.5*dice_loss(y_true, y_pred) return loss # - # # 4. Cosine Annealing Learning Rate # # An effective snapshot ensemble requires training a neural network with an aggressive learning rate schedule. # # The cosine annealing schedule is an example of an aggressive learning rate schedule where learning rate starts high and is dropped relatively rapidly to a minimum value near zero before being increased again to the maximum. # # We can implement the schedule as described in the 2017 paper “Snapshot Ensembles: Train 1, get M for free.” The equation requires the total training epochs, maximum learning rate, and number of cycles as arguments as well as the current epoch number. The function then returns the learning rate for the given epoch. # # See https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/ # + colab={} colab_type="code" id="H14hooGX-ct_" # define custom learning rate schedule class CosineAnnealingLearningRateSchedule(callbacks.Callback): # constructor def __init__(self, n_epochs, n_cycles, lrate_max, verbose=0): self.epochs = n_epochs self.cycles = n_cycles self.lr_max = lrate_max self.lrates = list() # calculate learning rate for an epoch def cosine_annealing(self, epoch, n_epochs, n_cycles, lrate_max): epochs_per_cycle = np.floor(n_epochs/n_cycles) cos_inner = (np.pi * (epoch % epochs_per_cycle)) / (epochs_per_cycle) return lrate_max/2 * (np.cos(cos_inner) + 1) # calculate and set learning rate at the start of the epoch def on_epoch_begin(self, epoch, logs=None): # calculate learning rate lr = self.cosine_annealing(epoch, self.epochs, self.cycles, self.lr_max) # set learning rate K.set_value(self.model.optimizer.lr, lr) # log value self.lrates.append(lr) # + [markdown] colab_type="text" id="sadUwQdyVXZ4" # * All the hyperparameters are put in place after repeating trial and error for a fixed number of epochs. # - # # 5. Split data into train and validation sets # + train_size = int(0.9*cts.shape[0]) with tf.device('/cpu:0') : X_train, yl_train, yi_train = (cts[:train_size]/255, lungs[:train_size], infects[:train_size]) X_valid, yl_valid, yi_valid = (cts[train_size:]/255, lungs[train_size:], infects[train_size:]) print(X_train.shape, yl_train.shape, yi_train.shape) print(X_valid.shape, yl_valid.shape, yi_valid.shape) # - # # 6. 
Convolutional Neural Network # + def lung_seg(input_shape, num_filters=[16,32,128]) : x_input = Input(input_shape) ### LUNG SEGMENTATION x = Conv2D(num_filters[0], kernel_size=3, activation='relu', padding='same')(x_input) x = MaxPooling2D(pool_size=2, padding='same')(x) x = Conv2D(num_filters[1], kernel_size=3, activation='relu', padding='same')(x) x = MaxPooling2D(pool_size=2, padding='same')(x) x = Conv2D(num_filters[2], kernel_size=3, activation='relu', padding='same')(x) x = MaxPooling2D(pool_size=2, padding='same')(x) x = Dense(num_filters[2], activation='relu')(x) x = UpSampling2D(size=2)(x) x = Conv2D(num_filters[2], kernel_size=3, activation='sigmoid', padding='same')(x) x = UpSampling2D(size=2)(x) x = Conv2D(num_filters[1], kernel_size=3, activation='sigmoid', padding='same')(x) x = UpSampling2D(size=2)(x) lung_seg = Conv2D(1, kernel_size=3, activation='sigmoid', padding='same')(x) # identifying lungs model = Model(inputs=x_input, outputs=lung_seg, name='lung_seg') return model strategy = tf.distribute.MirroredStrategy() print('Number of devices {}'.format(strategy.num_replicas_in_sync)) with strategy.scope() : lung_seg = lung_seg(cts.shape[1:]) lung_seg.summary() # + colab={} colab_type="code" id="R1WJkDS6_WPx" # define learning rate callback epochs = 100 lrmax = 5e-5 n_cycles = epochs / 25 lr_cb = CosineAnnealingLearningRateSchedule(epochs, n_cycles, lrmax) checkpoint_fpath = "../outputs/weights_lungseg.hdf5" cts_checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_fpath, monitor='val_dice', save_best_only=True, mode='max', verbose=1, save_weights_only=True) batch_size = 8 optim = optimizers.Adam(lr=5e-5, beta_1=0.9, beta_2=0.99) with strategy.scope() : lung_seg.compile(optimizer=optim, loss=bce_dice_loss, metrics=[dice]) # - # # 7. Training lung_seg_res = lung_seg.fit(x = X_train, y = yl_train, batch_size = batch_size, epochs = epochs, verbose = 1, validation_data = (X_valid, yl_valid), callbacks = [cts_checkpoint_cb, lr_cb]) # + colab={"base_uri": "https://localhost:8080/", "height": 813, "resources": {"http://localhost:17327/content/unet_covid_fold1.hdf5": {"data": "", "headers": [["content-length", "1461"], ["content-type", "text/html; charset=utf-8"]], "ok": false, "status": 500, "status_text": ""}}} colab_type="code" id="HddKr4NwNq5K" outputId="c6d719ab-98ca-43f1-f406-91c5a1370e56" plt.style.use('ggplot') fig, axes = plt.subplots(1, 2, figsize=(11,5)) axes[0].plot(lung_seg_res.history['dice'], color='b', label='train-lung') axes[0].plot(lung_seg_res.history['val_dice'], color='r', label='valid-lung') axes[0].set_ylabel('Dice coefficient') axes[0].set_xlabel('epoch') axes[0].legend() axes[1].plot(lung_seg_res.history['loss'], color='b', label='train-lung') axes[1].plot(lung_seg_res.history['val_loss'], color='r', label='valid-lung') axes[1].set_ylabel('loss') axes[1].set_xlabel('epoch') axes[1].legend(); # - # # 8. Saving Outputs # + gc.collect() with open('../outputs/lungseg-HistoryDict.cp', 'wb') as myfile: pickle.dump(lung_seg_res.history, myfile) # + lung_seg.load_weights("../outputs/weights_lungseg.hdf5") with tf.device('/cpu:0') : pred_lungs = lung_seg.predict(cts/255, batch_size=512) print(cts.shape, pred_lungs.shape) # - # # 9. 
Visualizing Outputs def plot_lung_seg(ct, lung, pred_lung, axes) : axes[0].imshow(ct[:,:,0], cmap='bone') axes[0].set_title('CT image'); plt.grid(None) axes[0].set_xticks([]); axes[0].set_yticks([]) axes[1].imshow(lung[:,:,0], alpha=0.5, cmap='Reds') axes[1].set_title('Lung mask'); plt.grid(None) axes[1].set_xticks([]); axes[1].set_yticks([]) axes[2].imshow(pred_lung[:,:,0], alpha=0.5, cmap='Reds') axes[2].set_title('pred Lung mask'); plt.grid(None) axes[2].set_xticks([]); axes[2].set_yticks([]) # + import random indices = random.choices(range(len(pred_lungs)), k=5) fig, axes = plt.subplots(3, 5, figsize=(15,9)) for ii, idx in enumerate(indices) : plot_lung_seg(cts[idx], lungs[idx], pred_lungs[idx], list(axes[:,ii])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="MhoQ0WE77laV" # ##### Copyright 2018 The TensorFlow Authors. # + cellView="form" colab_type="code" id="_ckMIh7O7s6D" colab={} #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + cellView="form" colab_type="code" id="vasWnqRgy1H4" colab={} #@title MIT License # # Copyright (c) 2017 # # Permission is hereby granted, free of charge, to any person obtaining a # # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. # + [markdown] colab_type="text" id="jYysdyb-CaWM" # # Treine sua primeira rede neural: classificação básica # + [markdown] colab_type="text" id="S5Uhzt6vVIB2" # # # # # #
    # + [markdown] id="v10Paf9CKcHZ" colab_type="text" # Note: A nossa comunidade TensorFlow traduziu estes documentos. Como as traduções da comunidade são *o melhor esforço*, não há garantias de que sejam uma reflexão exata e atualizada da [documentação oficial em Inglês](https://www.tensorflow.org/?hl=en). Se tem alguma sugestão para melhorar esta tradução, por favor envie um pull request para o repositório do GitHub [tensorflow/docs](https://github.com/tensorflow/docs). Para se voluntariar para escrever ou rever as traduções da comunidade, contacte a [lista ](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs). # + [markdown] colab_type="text" id="FbVhjPpzn6BM" # Este tutorial treina um modelo de rede neural para classificação de imagens de roupas, como tênis e camisetas. Tudo bem se você não entender todos os detalhes; este é um visão geral de um programa do TensorFlow com detalhes explicados enquanto progredimos. # # O guia usa [tf.keras](https://www.tensorflow.org/guide/keras), uma API alto-nível para construir e treinar modelos no TensorFlow. # + colab_type="code" id="jL3OqFKZ9dFg" colab={} # !pip install tensorflow==2.0.0-beta1 # + colab_type="code" id="dzLKpmZICaWN" colab={} from __future__ import absolute_import, division, print_function, unicode_literals # TensorFlow e tf.keras import tensorflow as tf from tensorflow import keras # Librariesauxiliares import numpy as np import matplotlib.pyplot as plt print(tf.__version__) # + [markdown] colab_type="text" id="yR0EdgrLCaWR" # ## Importe a base de dados Fashion MNIST # + [markdown] colab_type="text" id="DLdCchMdCaWQ" # Esse tutorial usa a base de dados [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) que contém 70,000 imagens em tons de cinza em 10 categorias. As imagens mostram artigos individuais de roupas com baixa resolução (28 por 28 pixels), como vemos aqui: # # # # #
    # Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
    # # Fashion MNIST tem como intenção substituir a clássica base de dados [MNIST](http://yann.lecun.com/exdb/mnist/ )— frequentemente usada como "Hello, World" de programas de aprendizado de máquina (*machine learning*) para visão computacional. A base de dados MNIST contém imagens de dígitos escritos à mão (0, 1, 2, etc.) em um formato idêntico ao dos artigos de roupas que usaremos aqui. # # Esse tutorial usa a Fashion MNIST para variar, e porque é um problema um pouco mais desafiador que o regular MNIST. Ambas bases são relativamente pequenas e são usadas para verificar se um algoritmo funciona como esperado. Elas são bons pontos de partida para testar e debugar código. # # Usaremos 60,000 imagens para treinar nossa rede e 10,000 imagens para avaliar quão precisamente nossa rede aprendeu a classificar as imagens. Você pode acessar a Fashion MNIST directly diretamente do TensorFlow. Importe e carregue a base Fashion MNIST diretamente do TensorFlow: # + colab_type="code" id="7MqDQO0KCaWS" colab={} fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # + [markdown] colab_type="text" id="t9FDsUlxCaWW" # Carregando a base de dados que retorna quatro NumPy arrays: # # * Os *arrays* `train_images` e `train_labels` são o *conjunto de treinamento*— os dados do modelo usados para aprender. # * O modelo é testado com o *conjunto de teste*, os *arrays* `test_images` e `test_labels`. # # As imagens são arrays NumPy de 28x28, com os valores des pixels entre 0 to 255. As *labels* (alvo da classificação) são um array de inteiros, no intervalo de 0 a 9. Esse corresponde com a classe de roupa que cada imagem representa: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # | Label | Class |
    # | ----- | ----- |
    # | 0 | T-shirt/top |
    # | 1 | Trouser |
    # | 2 | Pullover |
    # | 3 | Dress |
    # | 4 | Coat |
    # | 5 | Sandal |
    # | 6 | Shirt |
    # | 7 | Sneaker |
    # | 8 | Bag |
    # | 9 | Ankle boot |
    # # Cada imagem é mapeada com um só label. Já que o *nome das classes* não são incluídas na base de dados, armazene os dados aqui para usá-los mais tarde quando plotarmos as imagens: # + colab_type="code" id="IjnLH5S2CaWx" colab={} class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + [markdown] colab_type="text" id="Brm0b_KACaWX" # ## Explore os dados # # Vamos explorar o formato da base de dados antes de treinar o modelo. O próximo comando mostra que existem 60000 imagens no conjunto de treinamento, e cada imagem é representada em 28 x 28 pixels: # + colab_type="code" id="zW5k_xz1CaWX" colab={} train_images.shape # + [markdown] colab_type="text" id="cIAcvQqMCaWf" # Do mesmo modo, existem 60000 labels no conjunto de treinamento: # + colab_type="code" id="TRFYHB2mCaWb" colab={} len(train_labels) # + [markdown] colab_type="text" id="YSlYxFuRCaWk" # Cada label é um inteiro entre 0 e 9: # + colab_type="code" id="XKnCTHz4CaWg" colab={} train_labels # + [markdown] colab_type="text" id="TMPI88iZpO2T" # Existem 10000 imagens no conjnto de teste. Novamente, cada imagem é representada por 28 x 28 pixels: # + colab_type="code" id="2KFnYlcwCaWl" colab={} test_images.shape # + [markdown] colab_type="text" id="rd0A0Iu0CaWq" # E um conjunto de teste contendo 10000 labels das imagens : # + colab_type="code" id="iJmPr5-ACaWn" colab={} len(test_labels) # + [markdown] colab_type="text" id="ES6uQoLKCaWr" # ## Pré-processe os dados # # Os dados precisam ser pré-processados antes de treinar a rede. Se você inspecionar a primeira imagem do conjunto de treinamento, você verá que os valores dos pixels estão entre 0 e 255: # + colab_type="code" id="m4VEw8Ud9Quh" colab={} plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() # + [markdown] colab_type="text" id="Wz7l27Lz9S1P" # Escalaremos esses valores no intervalo de 0 e 1 antes antes de alimentar o modelo da rede neural. Para fazer isso, dividimos os valores por 255. É importante que o *conjunto de treinamento* e o *conjunto de teste* podem ser pré-processados do mesmo modo: # + colab_type="code" id="bW5WzIPlCaWv" colab={} train_images = train_images / 255.0 test_images = test_images / 255.0 # + [markdown] colab_type="text" id="Ee638AlnCaWz" # Para verificar que os dados estão no formato correto e que estamos prontos para construir e treinar a rede, vamos mostrar as primeiras 25 imagens do *conjunto de treinamento* e mostrar o nome das classes de cada imagem abaixo. # + colab_type="code" id="oZTImqg_CaW1" colab={} plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() # + [markdown] colab_type="text" id="59veuiEZCaW4" # ## Construindo o modelo # # Construir a rede neural requer configurar as camadas do modelo, e depois, compilar o modelo. # + [markdown] colab_type="text" id="Gxg1XGm0eOBy" # ### Montar as camadas # # O principal bloco de construção da rede neural é a camada (*layer*). As camadas (*layers*) extraem representações dos dados inseridos na rede. Com sorte, essas representações são significativas para o problema à mão. # # Muito do *deep learning* consiste encadear simples camadas. Muitas camadas, como `tf.keras.layers.Dense`, tem paramêtros que são aprendidos durante o treinamento. 
# + colab_type="code" id="9ODch-OFCaW4" colab={} model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) # + [markdown] colab_type="text" id="gut8A_7rCaW6" # A primeira camada da rede, `tf.keras.layers.Flatten`, transforma o formato da imagem de um array de imagens de duas dimensões (of 28 by 28 pixels) para um array de uma dimensão (de 28 * 28 = 784 pixels). Pense nessa camada como camadas não empilhadas de pixels de uma imagem e os emfilere. Essa camada não tem paramêtros para aprender; ela só reformata os dados. # # Depois dos pixels serem achatados, a rede consite de uma sequência de duas camadas `tf.keras.layers.Dense`. Essa são camadas neurais *densely connected*, ou *fully connected*. A primeira camada `Dense` tem 128 nós (ou neurônios). A segunda (e última) camda é uma *softmax* de 10 nós que retorna um array de 10 probabilidades, cuja soma resulta em 1. Cada nó contem um valor que indica a probabilidade de que aquela imagem pertence a uma das 10 classes. # # ### Compile o modelo # # Antes do modelo estar pronto para o treinamento, é necessário algumas configurações a mais. Essas serão adicionadas no passo de *compilação*: # # * *Função Loss* —Essa mede quão precisa o modelo é durante o treinamento. Queremos minimizar a função para *guiar* o modelo para direção certa. # * *Optimizer* —Isso é como o modelo se atualiza com base no dado que ele vê e sua função *loss*. # * *Métricas* —usadas para monitorar os passos de treinamento e teste. O exemplo abaixo usa a *acurácia*, a fração das imagens que foram classificadas corretamente. # + colab_type="code" id="Lhan11blCaW7" colab={} model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # + [markdown] colab_type="text" id="qKF6uW-BCaW-" # ## Treine o modelo # # Treinar a rede neural requer os seguintes passos: # # 1. Alimente com os dados de treinamento, o modelo. Neste exemplo, os dados de treinamento são os arrays `train_images` e `train_labels`. # 2. O modelo aprende como associar as imagens as *labels*. # 3. Perguntamos ao modelo para fazer previsões sobre o conjunto de teste — nesse exemplo, o array `test_images`. Verificamos se as previsões combinaram com as *labels* do array `test_labels`. # # Para começar a treinar, chame o método `model.fit`— assim chamado, porque ele "encaixa" o modelo no conjunto de treinamento: # + colab_type="code" id="xvwvpA64CaW_" colab={} model.fit(train_images, train_labels, epochs=10) # + [markdown] colab_type="text" id="W3ZVOhugCaXA" # À medida que o modelo treina, as métricas loss e acurácia são mostradas. O modelo atinge uma acurácia de 0.88 (ou 88%) com o conjunto de treinamento. # + [markdown] colab_type="text" id="oEw4bZgGCaXB" # ## Avalie a acurácia # # Depois, compare como o modelo performou com o conjunto de teste: # + colab_type="code" id="VflXLEeECaXC" colab={} test_loss, test_acc = model.evaluate(test_images, test_labels) print('\nTest accuracy:', test_acc) # + [markdown] colab_type="text" id="yWfgsmVXCaXG" # Acabou que o a acurácia com o conjunto de teste é um pouco menor do que a acurácia de treinamento. Essa diferença entre as duas acurácias representa um *overfitting*. Overfitting é modelo de aprendizado de máquina performou de maneira pior em um conjunto de entradas novas, e não usadas anteriormente, que usando o conjunto de treinamento. 
# + [markdown] colab_type="text" id="xsoS7CPDCaXH" # ## Faça predições # # Com o modelo treinado, o usaremos para predições de algumas imagens. # + colab_type="code" id="Gl91RPhdCaXI" colab={} predictions = model.predict(test_images) # + [markdown] colab_type="text" id="x9Kk1voUCaXJ" # Aqui, o modelo previu que a *label* de cada imagem no conjunto de treinamento. Vamos olhar na primeira predição: # + colab_type="code" id="3DmJEUinCaXK" colab={} predictions[0] # + [markdown] colab_type="text" id="-hw1hgeSCaXN" # A predição é um array de 10 números. Eles representam um a *confiança* do modelo que a imagem corresponde a cada um dos diferentes artigos de roupa. Podemos ver cada *label* tem um maior valor de confiança: # + colab_type="code" id="qsqenuPnCaXO" colab={} np.argmax(predictions[0]) # + [markdown] colab_type="text" id="E51yS7iCCaXO" # Então, o modelo é confiante de que esse imagem é uma bota (ankle boot) ou `class_names[9]`. Examinando a label do teste, vemos que essa classificação é correta: # + colab_type="code" id="Sd7Pgsu6CaXP" colab={} test_labels[0] # + [markdown] colab_type="text" id="ygh2yYC972ne" # Podemos mostrar graficamente como se parece em um conjunto total de previsão de 10 classes. # + colab_type="code" id="DvYmmrpIy6Y1" colab={} def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') # + [markdown] colab_type="text" id="d4Ov9OFDMmOD" # Vamos olhar a previsão imagem na posição 0, do array de predição. # + colab_type="code" id="HV5jw-5HwSmO" colab={} i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + colab_type="code" id="Ko-uzOufSCSe" colab={} i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + [markdown] colab_type="text" id="kgdvGD52CaXR" # Vamos plotar algumas da previsão do modelo. Labels preditas corretamente são azuis e as predições erradas são vermelhas. O número dá a porcentagem (de 100) das labels preditas. Note que o modelo pode errar mesmo estão confiante. # + colab_type="code" id="hQlnbqaw2Qu_" colab={} # Plota o primeiro X test images, e as labels preditas, e as labels verdadeiras. # Colore as predições corretas de azul e as incorretas de vermelho. 
num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() # + [markdown] colab_type="text" id="R32zteKHCaXT" # Finamente, use o modelo treinado para fazer a predição de uma única imagem. # + colab_type="code" id="yRJ7JU7JCaXT" colab={} # Grab an image from the test dataset. img = test_images[0] print(img.shape) # + [markdown] colab_type="text" id="vz3bVp21CaXV" # Modelos `tf.keras` são otimizados para fazer predições em um *batch*, ou coleções, de exemplos de uma vez. De acordo, mesmo que usemos uma única imagem, precisamos adicionar em uma lista: # + colab_type="code" id="lDFh5yF_CaXW" colab={} # Adiciona a imagem em um batch que possui um só membro. img = (np.expand_dims(img,0)) print(img.shape) # + [markdown] colab_type="text" id="EQ5wLTkcCaXY" # Agora prediremos a label correta para essa imagem: # + colab_type="code" id="o_rzNSdrCaXY" colab={} predictions_single = model.predict(img) print(predictions_single) # + colab_type="code" id="6Ai-cpLjO-3A" colab={} plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) # + [markdown] colab_type="text" id="cU1Y2OAMCaXb" # `model.predict` retorna a lista de listas — uma lista para cada imagem em um *batch* de dados. Pegue a predição de nossa (única) imagem no *batch*: # + colab_type="code" id="2tRmdq_8CaXb" colab={} np.argmax(predictions_single[0]) # + [markdown] colab_type="text" id="YFc2HbEVCaXd" # E, como antes, o modelo previu a label como 9. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Model import numpy as np import tensorflow as tf import tensorflow.contrib.eager as tfe tfe.enable_eager_execution() def __init__(self, config, batch, word_mat=None, char_mat=None, trainable=True, opt=True): self.config = config self.global_step = tf.get_variable('global_step', shape=[], dtype=tf.int32, initializer=tf.constant_initializer(0), trainable=False) # c: context; q: question; ch: context char; qh: question char; y1: start; y2: end self.c, self.q, self.ch, self.qh, self.y1, self.y2, self.qa_id = batch.get_next() self.is_train = tf.get_variable( "is_train", shape=[], dtype=tf.bool, trainable=False) # word embedding self.word_mat = tf.get_variable("word_mat", initializer=tf.constant( word_mat, dtype=tf.float32), trainable=False) # char embedding self.char_mat = tf.get_variable( "char_mat", initializer=tf.constant(char_mat, dtype=tf.float32)) # mask for padding, used in the pointer network self.c_mask = tf.cast(self.c, tf.bool) self.q_mask = tf.cast(self.q, tf.bool) # actual word length for context in a batch self.c_len = tf.reduce_sum(tf.cast(self.c_mask, tf.int32), axis=1) self.q_len = tf.reduce_sum(tf.cast(self.q_mask, tf.int32), axis=1) if opt: N, CL = config.batch_size, config.char_limit # max word length for context in a batch self.c_maxlen = tf.reduce_max(self.c_len) # max word lenght for question in a batch self.q_maxlen = tf.reduce_max(self.q_len) # truncate at [bs, c_maxlen] self.c = tf.slice(self.c, [0, 0], [N, self.c_maxlen]) # truncate at [bs, q_maxlen] self.q = tf.slice(self.q, [0, 0], [N, self.q_maxlen]) # truncate the mask self.c_mask = 
tf.slice(self.c_mask, [0, 0], [N, self.c_maxlen]) self.q_mask = tf.slice(self.q_mask, [0, 0], [N, self.q_maxlen]) # [bs, c_maxlen, char_limit] self.ch = tf.slice(self.ch, [0, 0, 0], [N, self.c_maxlen, CL]) # [bs, q_maxlen, char_limit] self.qh = tf.slice(self.qh, [0, 0, 0], [N, self.q_maxlen, CL]) # y is one_hot encoded # [batch_size, c_maxlen] self.y1 = tf.slice(self.y1, [0, 0], [N, self.c_maxlen]) self.y2 = tf.slice(self.y2, [0, 0], [N, self.c_maxlen]) else: self.c_maxlen, self.q_maxlen = config.para_limit, config.ques_limit # actual char length for context, reshape to 1D tensor self.ch_len = tf.reshape(tf.reduce_sum( tf.cast(tf.cast(self.ch, tf.bool), tf.int32), axis=2), [-1]) # actual char length for question, reshape to 1D tensor self.qh_len = tf.reshape(tf.reduce_sum( tf.cast(tf.cast(self.qh, tf.bool), tf.int32), axis=2), [-1]) self.ready() if trainable: self.lr = tf.get_variable( "lr", shape=[], dtype=tf.float32, trainable=False) self.opt = tf.train.AdadeltaOptimizer( learning_rate=self.lr, epsilon=1e-6) grads = self.opt.compute_gradients(self.loss) gradients, variables = zip(*grads) capped_grads, _ = tf.clip_by_global_norm( gradients, config.grad_clip) self.train_op = self.opt.apply_gradients( zip(capped_grads, variables), global_step=self.global_step) def ready(self): config = self.config N, PL, QL, CL, d, dc, dg = config.batch_size, self.c_maxlen, self.q_maxlen, config.char_limit, config.hidden, config.char_dim, config.char_hidden gru = cudnn_gru if config.use_cudnn else native_gru with tf.variable_scope("emb"): # char embedding with tf.variable_scope("char"): # char-embedding ch_emb = tf.reshape(tf.nn.embedding_lookup( self.char_mat, self.ch), [N * PL, CL, dc]) # [bs * self.c_maxlen, self.char_limit, self.char_emb_dim] qh_emb = tf.reshape(tf.nn.embedding_lookup( self.char_mat, self.qh), [N * QL, CL, dc]) # [bs * self.q_maxlen, self.char_limit, self.char_emb_dim] # variational dropout # same drouput mask for each timestep ch_emb = dropout( ch_emb, keep_prob=config.keep_prob, is_train=self.is_train) # [bs * self.c_maxlen, self.char_limit, self.char_emb_dim] qh_emb = dropout( qh_emb, keep_prob=config.keep_prob, is_train=self.is_train) # [bs * self.q_maxlen, self.char_limit, self.char_emb_dim] # bi_gru for context cell_fw = tf.contrib.rnn.GRUCell(dg) cell_bw = tf.contrib.rnn.GRUCell(dg) _, (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn( cell_fw, cell_bw, ch_emb, self.ch_len, dtype=tf.float32) ch_emb = tf.concat([state_fw, state_bw], axis=1) # bi_gru for question _, (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn( cell_fw, cell_bw, qh_emb, self.qh_len, dtype=tf.float32) qh_emb = tf.concat([state_fw, state_bw], axis=1) qh_emb = tf.reshape(qh_emb, [N, QL, 2 * dg]) # [bs, q_maxlen, 2*char_hidden_size] ch_emb = tf.reshape(ch_emb, [N, PL, 2 * dg]) # [bs, c_maxlen, 2*char_hidden_size] # word embedding with tf.name_scope("word"): c_emb = tf.nn.embedding_lookup(self.word_mat, self.c) # [bs, c_maxlen, word_emb_dim] q_emb = tf.nn.embedding_lookup(self.word_mat, self.q) # [bs, q_maxlen, word_emb_dim] # concat word and char embedding c_emb = tf.concat([c_emb, ch_emb], axis=2) # [bs, c_maxlen, word_emb_dim + char_emb_dim] q_emb = tf.concat([q_emb, qh_emb], axis=2) # [bs, q_maxlen, word_emb_dim + char_emb_dim] # Q and C encoding with tf.variable_scope("encoding"): rnn = gru(num_layers=3, num_units=d, batch_size=N, input_size=c_emb.get_shape( ).as_list()[-1], keep_prob=config.keep_prob, is_train=self.is_train) c = rnn(c_emb, seq_len=self.c_len) # [bs, c_maxlen, hidden_size] q = 
rnn(q_emb, seq_len=self.q_len) # [bs, q_maxlen, hidden_size] # C Q attention with tf.variable_scope("attention"): qc_att = dot_attention(c, q, mask=self.q_mask, hidden=d, keep_prob=config.keep_prob, is_train=self.is_train) # [bs, c_maxlen, 2 * hidden_size] rnn = gru(num_layers=1, num_units=d, batch_size=N, input_size=qc_att.get_shape( ).as_list()[-1], keep_prob=config.keep_prob, is_train=self.is_train) att = rnn(qc_att, seq_len=self.c_len) # [bs, c_maxlen, hidden_size] # C C self attention with tf.variable_scope("match"): self_att = dot_attention( att, att, mask=self.c_mask, hidden=d, keep_prob=config.keep_prob, is_train=self.is_train) # [bs, c_maxlen, 2 * hidden_size] rnn = gru(num_layers=1, num_units=d, batch_size=N, input_size=self_att.get_shape( ).as_list()[-1], keep_prob=config.keep_prob, is_train=self.is_train) match = rnn(self_att, seq_len=self.c_len) # [bs, c_maxlen, hidden_size] # pointer network # logits 1: start position logits; logits 2: end position logits with tf.variable_scope("pointer"): # self attention init = summ(q[:, :, -2 * d:], d, mask=self.q_mask, keep_prob=config.ptr_keep_prob, is_train=self.is_train) pointer = ptr_net(batch=N, hidden=init.get_shape().as_list( )[-1], keep_prob=config.ptr_keep_prob, is_train=self.is_train) logits1, logits2 = pointer(init, match, d, self.c_mask) # compute loss with tf.variable_scope("predict"): ##### for prediction ##### During prediction, we choose the best span from token i to token i' such that i<=i'<=i+15 and p1*p2 is maximized. # outer product: p1*p2 # tf.expand_dims(tf.nn.softmax(logits1), axis=2): [bs, c_maxlen, 1] # tf.expand_dims(tf.nn.softmax(logits2), axis=1): [bs, 1, c_maxlen] # outer: [bs, c_maxlen, c_maxlen] outer = tf.matmul(tf.expand_dims(tf.nn.softmax(logits1), axis=2), tf.expand_dims(tf.nn.softmax(logits2), axis=1)) # slice [start:start+15] outer = tf.matrix_band_part(outer, 0, 15) # [bs, c_maxlen, 15] # yp1: start prob; yp2: end prob self.yp1 = tf.argmax(tf.reduce_max(outer, axis=2), axis=1) self.yp2 = tf.argmax(tf.reduce_max(outer, axis=1), axis=1) ##### for training: see below image losses = tf.nn.softmax_cross_entropy_with_logits_v2( logits=logits1, labels=tf.stop_gradient(self.y1)) # padding are included in the loss losses2 = tf.nn.softmax_cross_entropy_with_logits_v2( logits=logits2, labels=tf.stop_gradient(self.y2)) # padding are included in the loss self.loss = tf.reduce_mean(losses + losses2) # c_maxlen = 3 a = np.random.randint(0, 10, [3, 1]) b = np.random.randint(0, 10, [1, 3]) a b a * b a @ b c = np.random.randint(0,20,[15,15]) c tf.matrix_band_part(c, 0, 5) # + def get_loss(self): return self.loss def get_global_step(self): return self.global_step # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib as mpl import matplotlib.pyplot as plt # %matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time import tensorflow as tf from tensorflow import keras print(tf.__version__) print(sys.version_info) for module in mpl, np, pd, sklearn, tf, keras: print(module.__name__, module.__version__) # + from sklearn.datasets import fetch_california_housing housing = fetch_california_housing() print(housing.DESCR) print(housing.data.shape) print(housing.target.shape) # + import pprint pprint.pprint(housing.data[0:5]) pprint.pprint(housing.target[0:5]) # + from sklearn.model_selection 
import train_test_split x_train_all, x_test, y_train_all, y_test = train_test_split( housing.data, housing.target, random_state = 7) x_train, x_valid, y_train, y_valid = train_test_split( x_train_all, y_train_all, random_state = 11) print(x_train.shape, y_train.shape) print(x_valid.shape, y_valid.shape) print(x_test.shape, y_test.shape) # + from sklearn.preprocessing import StandardScaler scaler = StandardScaler() x_train_scaled = scaler.fit_transform(x_train) x_valid_scaled = scaler.transform(x_valid) x_test_scaled = scaler.transform(x_test) # - learning_rate = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2] histories = [] for lr in learning_rate: model = keras.models.Sequential([ keras.layers.Dense(30, activation='relu', input_shape=x_train.shape[1:]), keras.layers.Dense(1), ]) # model.summary() # model.compile(loss="mean_squared_error", optimizer="sgd") optimizer = keras.optimizers.SGD(lr) model.compile(loss="mean_squared_error", optimizer=optimizer) callbacks = [keras.callbacks.EarlyStopping( patience=5, min_delta=1e-2)] history = model.fit(x_train_scaled, y_train, validation_data = (x_valid_scaled, y_valid), epochs = 5, callbacks = callbacks) histories.append(history) def plot_learning_curves(history): pd.DataFrame(history.history).plot(figsize=(8, 5)) plt.grid(True) plt.gca().set_ylim(0, 4) plt.show() for lr, history in zip(learning_rate, histories): print('lr:', lr) plot_learning_curves(history) model.evaluate(x_test_scaled, y_test) # + learning_rate = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2] def build_model(hidden_layers=1, layer_size=30, learning_rate=3e-3): model = keras.models.Sequential() model.add(keras.layers.Dense(layer_size, activation='relu', input_shape=x_train.shape[1:])) for _ in range(hidden_layers - 1): model.add(keras.layers.Dense(layer_size, activation='relu')) model.add(keras.layers.Dense(1)) optimizer = keras.optimizers.SGD(learning_rate) model.compile(loss="mse", optimizer=optimizer) return model callbacks = [keras.callbacks.EarlyStopping(patience=5, min_delta=1e-2)] sklearn_model = keras.wrappers.scikit_learn.KerasRegressor(build_model) history = sklearn_model.fit(x_train_scaled, y_train, validation_data = (x_valid_scaled, y_valid), epochs = 5, callbacks = callbacks) # + from scipy.stats import reciprocal param_distribution = { 'hidden_layers': [1, 2, 3], 'layer_size': np.arange(1, 100), 'learning_rate': reciprocal(1e-4, 1e-2) } from sklearn.model_selection import RandomizedSearchCV random_search_cv = RandomizedSearchCV(sklearn_model, param_distribution, n_iter=10, n_jobs=1) random_search_cv.fit(x_train_scaled, y_train, epochs=5, validation_data=(x_valid_scaled, y_valid), callbacks=callbacks) # - print(random_search_cv.best_params_) print(random_search_cv.best_score_) print(random_search_cv.best_estimator_) model = random_search_cv.best_estimator_.model model.evaluate(x_test_scaled, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # This tests algorithms to remove cosmic rays from multiepoch spectra (in particular from SDSS stripe 82 # spectra, which are too many for manual removal) # Created 2021 May 10 by E.S. 
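# + [markdown]
# As a minimal sketch of the idea developed below (the full pipeline is in `plot_result` further down): a cosmic-ray hit appears in one epoch only, so the flux difference between two epochs of the same target shows it as a strong outlier that can be flagged with a simple sigma cut. The arrays `flux_epoch0` and `flux_epoch1` here are hypothetical stand-ins for two aligned single-epoch spectra.
# +
import numpy as np

def flag_cosmic_rays(flux_epoch0, flux_epoch1, n_sigma=5.0):
    """Return a boolean mask that is True where the epoch-to-epoch
    residual deviates by more than n_sigma robust standard deviations."""
    resids = np.asarray(flux_epoch0) - np.asarray(flux_epoch1)
    # robust scatter estimate from the median absolute deviation
    mad = np.median(np.abs(resids - np.median(resids)))
    sigma = 1.4826 * mad
    return np.abs(resids - np.median(resids)) > n_sigma * sigma
# -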
# + import pandas as pd import numpy as np import glob import matplotlib.pyplot as plt from astropy.stats import sigma_clip # %matplotlib inline # - file_list = glob.glob("/Users/bandari/Documents/git.repos/rrlyrae_metallicity/" +\ "notebooks_for_development/data/sdss_stripe_82_data/input/" + "*") # + # find all parent names (i.e., one name for each target, whether or not multiepoch observations were made) parent_list = list(set([i.split("g00")[0] for i in file_list])) # + # initialize list to hold single-epoch spectra names ## TBD # - def plot_result(spec0, spec1): # remove from consideration the regions around the absorption lines, which change with time and can # be misidentified as a cosmic ray hit (a spectrum with an actual hit will have to be discarded manually) spec0_flux_copy = spec0["flux"].to_numpy() spec1_flux_copy = spec1["flux"].to_numpy() half_width = 20 cond_1 = np.logical_and(spec0["wavel"] > 3933.66-half_width, spec0["wavel"] < 3933.66+half_width) cond_2 = np.logical_and(spec0["wavel"] > 3970.075-half_width, spec0["wavel"] < 3970.075+half_width) cond_3 = np.logical_and(spec0["wavel"] > 4101.71-half_width, spec0["wavel"] < 4101.71+half_width) cond_4 = np.logical_and(spec0["wavel"] > 4340.472-half_width, spec0["wavel"] < 4340.472+half_width) cond_5 = np.logical_and(spec0["wavel"] > 4861.29-half_width, spec0["wavel"] < 4861.29+half_width) spec0_flux_copy[cond_1] = np.nan spec0_flux_copy[cond_2] = np.nan spec0_flux_copy[cond_3] = np.nan spec0_flux_copy[cond_4] = np.nan spec0_flux_copy[cond_5] = np.nan spec1_flux_copy[cond_1] = np.nan spec1_flux_copy[cond_2] = np.nan spec1_flux_copy[cond_3] = np.nan spec1_flux_copy[cond_4] = np.nan spec1_flux_copy[cond_5] = np.nan resids = np.subtract(spec0_flux_copy,spec1_flux_copy) # sigma clip # (note sigma lower is a large number, to keep track of which spectrum has the (+) cosmic ray) filtered_data = sigma_clip(resids, sigma_lower=50, sigma_upper=5, iters=1) # also remove points adjacent to those masked, by rolling spectra by two elements in each direction, # subtracting them and finding where difference is nan diff_roll_p1 = np.subtract(filtered_data,np.roll(filtered_data,1)) diff_roll_p2 = np.subtract(filtered_data,np.roll(filtered_data,2)) diff_roll_n1 = np.subtract(filtered_data,np.roll(filtered_data,-1)) diff_roll_n2 = np.subtract(filtered_data,np.roll(filtered_data,-2)) mark_bad_array = np.subtract(np.subtract(diff_roll_p1,diff_roll_p2),np.subtract(diff_roll_n1,diff_roll_n2)) mask_bad_pre_line_restore = np.ma.getmask(mark_bad_array) masked_flux_0 = np.ma.masked_array(spec0["flux"], mask=mask_bad) masked_wavel_0 = np.ma.masked_array(spec0["wavel"], mask=mask_bad) masked_flux_1 = np.ma.masked_array(spec1["flux"], mask=mask_bad) masked_wavel_1 = np.ma.masked_array(spec1["wavel"], mask=mask_bad) num_removed = np.subtract(len(resids), np.isfinite(filtered_data).sum()) plt.clf() fig = plt.figure(figsize=(24,9)) plt.plot(df_single_0["wavel"],resids,color="red") #plt.plot(df_single_0["wavel"],df_single_0["flux"]) plt.plot(df_single_0["wavel"],spec0_flux_copy) #plt.plot(df_single_1["wavel"],df_single_1["flux"]) plt.plot(masked_wavel_0,masked_flux_0,color="k") #plt.title("pts removed: " + str(num_removed)) #plt.show() string_rand = str(np.random.randint(low=0,high=10000)) plt.savefig("junk_"+string_rand+".png") return # + # find the file names of spectra corresponding to each parent; if there is only 1, ignore; # if >= 2, do median comparison to flag it for cosmic rays for t in range(0,60):#len(parent_list)): #print("----------") #print(t) 
matching = list(filter(lambda x: parent_list[t] in x, file_list)) if (len(matching) == 1): continue elif (len(matching) == 2): df_single_0 = pd.read_csv(matching[0], names=["wavel","flux","noise"], delim_whitespace=True) df_single_1 = pd.read_csv(matching[1], names=["wavel","flux","noise"], delim_whitespace=True) plot_result(df_single_0, df_single_1) elif (len(matching) == 3): df_single_0 = pd.read_csv(matching[0], names=["wavel","flux","noise"], delim_whitespace=True) df_single_1 = pd.read_csv(matching[1], names=["wavel","flux","noise"], delim_whitespace=True) df_single_2 = pd.read_csv(matching[2], names=["wavel","flux","noise"], delim_whitespace=True) elif (len(matching) > 3): continue # + # read in spectra for which continuum has been calculated stem_s82_norm = "/Users/bandari/Documents/git.repos/rrlyrae_metallicity/rrlyrae_metallicity/realizations_output/norm/" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from sklearn.datasets import load_boston boston=load_boston() ds=pd.DataFrame(boston.data,columns=boston.feature_names) ds.head() #1-hot encoding of RAD variable; because its categorical variable #representing it as categorical variable ds["RAD"]=ds["RAD"].astype("category") #datatype of the ds ds.dtypes #now using df.get_dummies(); it will drop the original column also #this method will automatically pick the categorical variable and apply 1-hot encoding ds=pd.get_dummies(ds,prefix="RAD") ds.head() #now doing Scaling on AGE,TAX,B or on entire Dataset from sklearn.preprocessing import MinMaxScaler scaler=MinMaxScaler(); scaler=scaler.fit(ds) scaledData=scaler.transform(ds) #now create the scaled dataframe from it dss=pd.DataFrame(scaledData,columns=ds.columns) # + #now perform the clusetring #step 1 cluster configuration to kind the k #step 2 using the value of 'k', generate the cluster #now to know the best value of 'k' # wss/bss vs k #That is when k=2, wss=sum of all point with theri 2 centeroid individually # i.e within clusterdistance ( this is inertia ) # and bwss means distance between centroid c1 and c2 #now when k=3, wss= sum of distance all point of culter and their centroid # the above wss is given by inertia of the cluster configuration ## but for bwss the sum of distance between 3 centroid. 
## c1 to c2, c1 to c3 and c2 to c3 ###when cluster configuration=4 ##the bss= dist(c1,c2)+dist(c1,c3) +dist(c1,c4) + dist(c2,c3) +dist(c2,c4) +dist(c3,c4) #so all possible combination we need to find out for all values of k # + from sklearn.cluster import KMeans from itertools import combinations_with_replacement from itertools import combinations from scipy.spatial import distance print(list(combinations_with_replacement("ABCD", 2))) # + wss=[] bss=[] pairmap={} dis=[] d=0 distanceMap={} for k in range(2,16): #perforiming the cluster configuration clust=KMeans(n_clusters=k,random_state=0).fit(dss) wss.append(clust.inertia_) c=list(combinations(range(0,k), 2)) print("Combinations ----------->",c) print("ClusterCenters Are Below----------->") dataFrameClusterCenter=pd.DataFrame(clust.cluster_centers_) print(pd.DataFrame(clust.cluster_centers_)) print("The above are clusterCenters are for k==",k) pairmap[k]={"pairs":c} for i in c: #converting the tuple() to list using the list() method pair=list(i) print("pair is",pair) #extracting the index from the pair index1=pair[0] index2=pair[1] #print("row 1"); print(dataFrameClusterCenter.iloc[index1,:]) #print("row 2"); print(dataFrameClusterCenter.iloc[index2,:]) d=distance.euclidean(dataFrameClusterCenter.iloc[index1,:], dataFrameClusterCenter.iloc[index2,:]) print("distance",d) #appending the calculated distance between each pair of the cluster centers in a list dis.append(d) distanceMap[k]={"distance":dis} #making the list empty for next k dis=[] print("disstacne map for each k ") print(distanceMap) print("wss for all k ") print(wss) # - #sum the distance of between every cluster #summedDistance storing to bss list bss=[] import math for i in range(2,16): value=distanceMap.get(i) print(value) list=value['distance'] print(math.fsum(list)) summedDistance=math.fsum(list) bss.append(summedDistance) #1. now we have bss for all the k bss #2. 
now we have wss for all the k wss #but wss shal be sqrt(wss[i]) len(wss) len(bss) sqrtwss=[] for i in range(0,len(wss)): sqrt=math.sqrt(wss[i]) print(sqrt) sqrtwss.append(sqrt) #so this sqrtwss shall be used sqrtwss #final ratio =sqrtwss/bss ratio=[] for i in range(0,len(sqrtwss)): #ratio.append(sqrtwss[i]/wss[i]) ratio.append(sqrtwss[i]/bss[i]) #So finally perforimg scatter plot of ratio vs k plot ######################### ratio=(sqrtwss/bss) vs k plot ############################ ratio del list k=range(2,16) k k=list(k) k from matplotlib import pyplot as plt plt.plot(k,ratio) plt.xlabel("No of cluster k") plt.ylabel("Ratio of sqrtwss/bss") plt.show() #plot of sqrtwss vs k plt.plot(k,sqrtwss) plt.xlabel("No of cluster k") plt.ylabel("wss or sqrtwss") plt.show() #plot of bss vs k plt.plot(k,bss) plt.xlabel("No of cluster k") plt.ylabel("bss") plt.show() # + ############# Now as we knoe the optiomal value of k is 4, so ############# So we now perform actual clustering of 506 observations and there scaled ############ scaled and linear independence dataset #our scaled dataset is represented by dss dss.shape # - #to find corelation matrix dss.corr() #now performing the clustering clust=KMeans(n_clusters=4,max_iter=500,random_state=0).fit(dss) #now extract the clusterCenters clusterCenter=clust.cluster_centers_ #convert clusterCenter to dataframe to do the cluster profilin ccd=pd.DataFrame(clusterCenter,columns=dss.columns) #ccd for cluster profilin ccd # + #so profiling details #clusterId 1 is having the highest crime rate # industry are more in clusterId 1 # - #to see the labels i.e clusterId for each observation labels=clust.labels_ #total labes; len(labels) clusterIds=list(labels) #now perform the inverse Scaling originalDataAsNumpy=scaler.inverse_transform(dss) #converting numpy to dataset originalDataset=pd.DataFrame(originalDataAsNumpy,columns=dss.columns) #adding the labelled column to the originalDataset originalDataset["Label"]=labels #saving data on the system as OriginalData.csv originalDataset.to_csv("yoursystem path\\originalData.csv") #to see whether data contains the label or not originalDataset.Label[0] ##### Now plotting the Classfication import pylab as pl len=originalDataset.shape[0] len for i in range(0, len): if originalDataset.Label[i] == 0: c1 = pl.scatter(originalDataset.iloc[i,2],originalDataset.iloc[i,4],c='r', marker='+') elif originalDataset.Label[i] == 1: c2 = pl.scatter(originalDataset.iloc[i,2],originalDataset.iloc[i,4],c='g',marker='o') elif originalDataset.Label[i] == 2: c3 = pl.scatter(originalDataset.iloc[i,2],originalDataset.iloc[i,4],c='b',marker='*') elif originalDataset.Label[i] == 3: c4 = pl.scatter(originalDataset.iloc[i,2],originalDataset.iloc[i,4],c='y',marker='^') pl.legend([c1, c2, c3,c4], ['c1','c2','c3','c4']) pl.title('Boston Data classification') pl.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1.初识darknet # 第一次见到darknet是在学习YOLO之后,想要亲自实现一下[YOLO](https://pjreddie.com/darknet/yolov2/),然后找YOLO的资料,就找到了[darknet](https://pjreddie.com/)的官网,当时心想,好家伙!为了为了一个YOLO专门搞了一个网站来介绍。当时就以为darknet就是一个YOLO模型的C语言实现(相信很多小伙伴跟我一样)。 # # 后来发现不是这么简单的,其实darknet是一个深度学习框架,也就是和我们平常接触的[keras](https://keras.io/),tensorflow是同等作用。darknet和其它的深度学习框架不一样的是,这个深度学习框架是用C语言实现的,并且表现为一个可执行文件,不像其他深度学习框架那样对环境很依赖,这个特点使得它很容易被移植到嵌入式设备中,这也是它独有的一个特点。 # # 2.darknet中使用YOLO # # 
接下来我们就通过跑官网上的YOLO的例子来看看darknet究竟是个什么东西?(以下实验在Ubuntu下进行!!) # 首先我们需要将darknet的源码克隆到本地,以便于之后的“安装”(啊!这么麻烦啊!我记得安装keras或者tensorflow只要一句话pip install就搞定了啊!没有办法,因为darknet是C语言实现的,所以我们需要将代码从git克隆到本地,进行源码级的安装)。 # git clone https://github.com/pjreddie/darknet # 然后进入darknet,执行make(其实就是使用相应的编译工具进行变异,生成可执行文件darknet)。注意这里编译方式有两种,具体采用哪一种取决于你的需求 # 第一种:直接编译,什么东西都不改,这样的话darknet将基于CPU运行,速度就不是很快了(其实是慢很多!) # # # cd darknet # # make # 第二种:修改makefile,使其在GPU(如果你有一张NVIDIA的显卡)下运行,其速度取决于不同的GPU,但是基本都比CPU快几十倍到几百倍左右。具体就是将Makefile中的前几行改为(在darknet目录下, sudo vim Makefile来进行修改): # # GPU=1 # # CUDNN=1 # # OPENCV=1(改这个是为了darknet和opencv一起编译,使得darknet可以输出图片、视频结果。但是注意在编译之前需要先使用sudo apt-get install libopencv-dev来安装opencv,否则会编译出错!) # # 另外需要注意的一点是,既然选择了在GPU下运行,那么必不可少的就是需要安装[CUDA](https://developer.nvidia.com/cuda-downloads),运行一下命令查看本机是否装有CUDA: # # nvidia-smi # # 如果没有任何输出的话,那么就必须要安装CUDA了,安装过程官网很详细,就不重复了,只说一点过来人的忠告:一定要完全按照官网说的来,不然很容易白忙活几个小时! # # 修改完Makefile之后,直接make就好了!(如果你先使用了第一种方法编译,然后又想尝试第二种的话,那么在执行第二步的make操作之前,需要先运行make clean来清除之前编译的结果,总之小心使得万年船嘛) # ## 2.1 运行官网的例子 # # 上面的步骤运行完了之后,会在当前的目录(darknet目录)下生成一个可执行文件"darknet",这就是我们的深度学习框架了!官网介绍了几个例子来利用darknet来搭建预训练的YOLO模型,下面我们就按照那个例子来解释一个darknet的运行机制。 # 如果现在我们想通过darknet来搭建YOLO模型,然后对一张图片进行目标检测,那么我们怎么做呢?如下:(请先看参数解释再运行命令) # ./darknet detect cfg/yolov2.cfg yolov2.weights data/dog.jpg # 下面我来解释一下每个参数的含义 # # 1:----./darknet,这个没什么好说的,就是运行框架 # # 2:----detect,表明我们是想做目标检测,而不是其他任务 # # 3:----cfg/yolov2.cfg,这个是我们要搭建的用于目标检测的模型的配置信息,里面包含了搭建一个模型所需要的所有信息。比如图片尺寸,优化算法以及参数,batch size,最终要的是里面包含了要搭建的模型的各层信息,比如卷积,池化等参数,所以你只要传入了这个文件,darknet就会按照这个配置进行模型的搭建。在cfg目录下还有很多其他模型的配置文件,比如vgg,resnet,alexnet等,感兴趣的可以自行查看 # # 4:----yolov2.weights,这个文件自然就是我们之前搭建的YOLOv2模型的权重了,由于这个文件比较大,所以darknet没有将其加入到工程中,需要自行下载,运行: wget https://pjreddie.com/media/files/yolov2.weights将其下载到当前目录就好了 # # 5:----data/dog.jpg,这个就是我们要检测的图片,作为模型的输入,data目录下还有其他的图片,感兴趣的可以尝试一下。 # 运行完上面的命令之后就可以看到结果了,在终端中会显示模型的预测结果,以及预测时间。如果之前编译darknet的时候你修改了Makefile:OPENCV=1,那么此时就会显示一张带有预测bounding box的图片。不然不会显示,但输出的结果也会存放在当前目录下的predictions.png中 # # 官网上还有关于将摄像头数据或者视频作为YOLO输入的命令,其实都很简单,请自行查看。 # 总结:所以就目前来看,darknet使用的方法很简单,只需要将模型的配置文件,以及模型的权重参数文件传输给darknet就好了。但是从yolov2.cfg文件中,我们可以看到有很多YOLO模型才会有的私有的配置,比如anchors,所以darknet并不是一个通用的深度学习框架,它只能搭建、训练一些他支持的模型(比如YOLO)以及一些通用的简单的模型(比如alexnet),并不能进行类似tensorflow那样的复杂操作。 # 好了,本notebook的目的就是为了让大家对darknet有一个初步的了解,如果你懂了,那我的目的就达到了。这只是我们搭建人脸识别系统的第一步,敬请期待后续更新! 
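# + [markdown]
# The darknet commands described above can also be driven from a notebook. This is a minimal sketch using Python's `subprocess` module, assuming darknet has already been compiled in a local `darknet/` directory and that `yolov2.weights` has been downloaded there; the paths are the ones used in the official example.
# +
import subprocess

result = subprocess.run(
    ["./darknet", "detect", "cfg/yolov2.cfg", "yolov2.weights", "data/dog.jpg"],
    cwd="darknet",          # directory where darknet was compiled
    capture_output=True,    # collect stdout containing the detections
    text=True,
)
print(result.stdout)        # predicted classes, confidences and timing
# -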
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # ch01~03 ## numpy 수학 알고리즘과 배열(행렬) 조작히가 위한 메서드 많이 있다 ## matplotlib 그래프를 그려주는 라이브러리 a=[1,2,3,4,5] print(a) a[1:] a[:-1] # - dictionary={'height':180} dictionary['height'] dictionary['weight']=70 print(dictionary) # !pwd # !python "/Users/jeongjongkun/Untitled Folder/deep-learning-from-scratch/ch01/hungry.py" # + # coding: utf-8 class Man: """샘플 클래스""" def __init__(self, name): ##class 초기화 방법 정의 생성자 클래스의 인스턴스가 만들어질때 한번만 불린다 self.name = name print("Initilized!") def hello(self): print("Hello " + self.name + "!") def goodbye(self): print("Good-bye " + self.name + "!") m = Man("David") m.hello() m.goodbye() # + #numpy n차원 배열 작성 가능 1차원 배열 : 벡터, 2차원 배열 : 행렬, 벡터와 행렬을 일반화한것을 텐서 #tensor 가 신경망을 타고 흐른다 #c/c++ 처리속도가 빠르다 numpy도 주된 처리는 c/c++로 구현 import numpy as np x=np.array([[1,2],[3,4],[5,6]]) print(x.flatten()) x>15 # + # coding: utf-8 import numpy as np import matplotlib.pyplot as plt # 데이터 준비 x = np.arange(0, 6, 0.1) # 0에서 6까지 0.1 간격으로 생성 y = np.sin(x) # 그래프 그리기 plt.plot(x, y) plt.show() # + # coding: utf-8 import numpy as np import matplotlib.pyplot as plt # 데이터 준비 x = np.arange(0, 6, 0.1) # 0에서 6까지 0.1 간격으로 생성 y1 = np.sin(x) y2 = np.cos(x) # 그래프 그리기 plt.plot(x, y1, label="sin") plt.plot(x, y2, linestyle = "--", label="cos") # cos 함수는 점선으로 그리기 plt.xlabel("x") # x축 이름 plt.ylabel("y") # y축 이름 plt.title('sin & cos') plt.legend() plt.show() # + #perceptron 다수의 신호 받아 하나의 신호 출력 #각 뉴런에서 가중치 곱해진 신호가 뉴런 노드에 모인다 임계치 보다 크면 activate # coding: utf-8 import numpy as np def AND(x1, x2): x = np.array([x1, x2]) w = np.array([0.5, 0.5]) b = -0.7 tmp = np.sum(w*x) + b if tmp <= 0: return 0 else: return 1 if __name__ == '__main__': for xs in [(0, 0), (1, 0), (0, 1), (1, 1)]: y = AND(xs[0], xs[1]) print(str(xs) + " -> " + str(y)) # + import numpy as np def AND(x1, x2): x = np.array([x1, x2]) w = np.array([0.5, 0.5]) b = -0.7 tmp = np.sum(w*x) + b if tmp <= 0: return 1 else: return 0 if __name__ == '__main__': for xs in [(0, 0), (1, 0), (0, 1), (1, 1)]: y = AND(xs[0], xs[1]) print(str(xs) + " -> " + str(y)) # + import numpy as np def AND(x1, x2): x = np.array([x1, x2]) w = np.array([0.5, 0.5]) b = -0.4 tmp = np.sum(w*x) + b if tmp <= 0: return 0 else: return 1 if __name__ == '__main__': for xs in [(0, 0), (1, 0), (0, 1), (1, 1)]: y = AND(xs[0], xs[1]) print(str(xs) + " -> " + str(y)) # + import numpy as np def AND(x1, x2): x = np.array([x1, x2]) w = np.array([-1, -1]) tmp = np.array(w*x+0.1) result=(tmp[0]*tmp[1]) if result <= 0: return 1 else: return 0 if __name__ == '__main__': for xs in [(0, 0), (1, 0), (0, 1), (1, 1)]: y = AND(xs[0], xs[1]) print(str(xs) + " -> " + str(y)) # - # + import numpy as np def AND(num1,num2): a=np.array([num1,num2]) b=np.array([0.5,0.5]) c=-0.7 sum=np.sum(a*b)+c if sum>0: return 1 else: return 0 if __name__=='__main__': for i in range(2): for j in range(2): print(i,j," : ",AND(i,j)) for i in range(1): print(i) ## 저수준 소자에서 컴퓨터 만드는데 필요한 부품(모듈)을 단계적으로 만들어가는 쪽 #AND, OR 게이트 그 다음에는 반가산기와 전가산기 그 다음에는 산술 논리연산장치(ALU), 그 다음에는 CPU #퍼셉트론은 가중치와 편향을 매개변수로 설정한다 #단순 퍼셉트론은 직선형 영역만 표현할 수 있고 다층 퍼셉트론은 비선형 영역도 표현할 수 있다 # + #퍼셉트론 복잡한 함수도 표현 가능하지만 가중치를 설정하는 작업은 사람이 수동으로 해야한다 #신경망은 가중치, 매개변수 적절한 값을 데잍로부터 자동으로 학습 #입력 신호의 총합을 출력 신호로 변환하는 함수를 활성화 함수라 한다 activation function #입력 신호의 총합이 활성화를 일으키는지를 정하는 역할 #activation function 중에 퍼셉트론은 step finction 임계값을 경계로 출력이 바뀌는 것 #sigmoid 함수 s자 모양 # coding: 
utf-8 import numpy as np import matplotlib.pylab as plt def sigmoid(x): return 1 / (1 + np.exp(-x)) X = np.arange(-5.0, 5.0, 0.1) Y = sigmoid(X) plt.plot(X, Y) plt.ylim(-0.1, 1.1) plt.show() # - import numpy as np x=np.array([-1,0,-2,1,3]) y=x>0 y=y.astype(np.int) ## numpy 배열의 자료형을 변환 astype()매서드 y x=np.array([-1,0,-2,1,3]) x*3 #선형함수 상수배만큼 변하는 함수를 선형함수 비선형함수는 선형이 아닌함수 직선 1개로 그릴 수 없는 함수 #activation function 비선형함수 사용해야한다 선형함수는 층을 깊게하는 것이 의미가 없다 - 은닉층의미 없는게 된다 #ReLU Rectified 정류된 max(0,x) np.maximum(0,x) import numpy as np a=np.array([[1,2],[3,4],[5,6]]) np.ndim(a) ## np.ndim numpy 형태 배열의 차원 return a.shape #(3,2)는 처음차원에 원소가 3개, 다음 차원에 원소가 2개 #np.dot(a,b)로 행렬 내적 구한다 3*2 차원 2*3 차원 처럼 사이 차원 같아야 내적 가능 #내적 함수 이용해서 신경망 구현 # w 다음층뉴런번호 앞층뉴런번호 X=np.array([1,2]) W=np.array([[1,3,5],[2,4,6]]) Y=np.dot(X,W) #각각의 layer로 향하는 weight들 곱한 결과 Y #회귀문제에서는 항등함수를, 분류에서는 소프트맥스 함수를 이용한다 #분류 - 출력층의 뉴런 수, 분류하려는 클래스 수를 같게 설정한다 #softmax 함수 0~1 사이 값으로 정규화 출력 값들의 합은 1 상수e에 x를 지수 제곱해서 다더한것 분의 상수e에 x지수 제곱 # 1개 값만 True, 다른 값 False로 encoding #softmax 에서 1000넘어가는 값 exp 계산하면 overflow난다 import numpy as np a=np.array([1010,1000,990]) print(np.exp(a)) c=np.max(a) a-c print(np.exp(a-c)) print(np.exp(a-c)/np.sum(np.exp(a-c))) #값들중 최대값 분자 분모 모두에서 빼서 계산한다 print(np.sum(np.exp(a-c)/np.sum(np.exp(a-c)))) #각각의 합 1이어서 확률이 가장 큰 것 계산할 수 있다. #exp 함수가 단조 증가함수이기 떄문에 대소관계 바뀌지 않는다 #추론단계에서 굳이 softmax 안써도 된다 #자연상수 e 는 2.71828182846... 의 값을 가지는 무리수이며, 수학자의 이름을 따서 '오일러의 수(Euler's number)' 또는 '네이피어의 수(Napier's number) #자연의 연속한 성장 growth 100%의 성장률 1회 연속 성장할때 가질수 있는 최대 성장량 #1년에 100% 성장한다고 했을때 이를 쪼개서 무한번 나눠서 각각의 time slice 마다 성장한 부분에 대해서도 성장하는 것 가정 #1을 무한번 제곱하면 e 된다 #50% 성장률 가지고 성장하면 e의 1/2승(성장률*횟수) #성장량*성장률*성장횟수 중 2개를 알면 나머지 1개 알 수 있다 import matplotlib.pyplot as plt import numpy as np x=range(-10,10) y=np.exp(x) plt.plot(x,y) sys.path # + import sys, os #print(sys.path) #sys.path.insert(0,"deep-learning-from-scratch") sys.path.append("deep-learning-from-scratch") print(sys.path) import numpy as np from dataset.mnist import load_mnist from PIL import Image def img_show(img): pil_img = Image.fromarray(np.uint8(img)) pil_img.show() (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) #flatten 1차원 배열로 만들지, normaliza 0~1 값으로 정규화 img = x_train[0] label = t_train[0] print(label) # 5 print(img.shape) # (784,) img = img.reshape(28, 28) # 형상을 원래 이미지의 크기로 변형 print(img.shape) # (28, 28) img_show(img) # + #파이썬 pickle 프로그램 실행 중 특정 객체를 파일로 저장 실행 당시의 객체 즉시 복원 가능 # coding: utf-8 import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 import numpy as np import pickle from dataset.mnist import load_mnist from common.functions import sigmoid, softmax def get_data(): (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False) return x_test, t_test def init_network(): with open("deep-learning-from-scratch/ch03/sample_weight.pkl", 'rb') as f: network = pickle.load(f) return network def predict(network, x): W1, W2, W3 = network['W1'], network['W2'], network['W3'] b1, b2, b3 = network['b1'], network['b2'], network['b3'] a1 = np.dot(x, W1) + b1 z1 = sigmoid(a1) a2 = np.dot(z1, W2) + b2 z2 = sigmoid(a2) a3 = np.dot(z2, W3) + b3 y = softmax(a3) return y x, t = get_data() network = init_network() accuracy_cnt = 0 for i in range(len(x)): y = predict(network, x[i]) p= np.argmax(y) # 확률이 가장 높은 원소의 인덱스를 얻는다. 
if p == t[i]: accuracy_cnt += 1 print("Accuracy:" + str(float(accuracy_cnt) / len(x))) #하나로 묶은 입력 데이터를 배치라고 한다 batch = 일괄적으로 처리되는 집단 #batch - 수치 계산 라이브러리 - 큰 배열을 효율적으로 처리할 수 있도록 고도로 최적화 # 커다란 신경망에서 데이터 전송이 병목으로 작용하는 경우 - 배치 처리로 버스에 주느 부하 줄인다 # 컴퓨터에서는 큰배열 한꺼번에 계산하는 것이 작은 배열 여러번 개산하는 것보다 빠르다 # + # coding: utf-8 import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 import numpy as np import pickle from dataset.mnist import load_mnist from common.functions import sigmoid, softmax def get_data(): (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False) return x_test, t_test def init_network(): with open("deep-learning-from-scratch/ch03/sample_weight.pkl", 'rb') as f: network = pickle.load(f) return network def predict(network, x): w1, w2, w3 = network['W1'], network['W2'], network['W3'] b1, b2, b3 = network['b1'], network['b2'], network['b3'] a1 = np.dot(x, w1) + b1 z1 = sigmoid(a1) a2 = np.dot(z1, w2) + b2 z2 = sigmoid(a2) a3 = np.dot(z2, w3) + b3 y = softmax(a3) return y x, t = get_data() network = init_network() batch_size = 100 # 배치 크기 accuracy_cnt = 0 for i in range(0, len(x), batch_size): x_batch = x[i:i+batch_size] y_batch = predict(network, x_batch) p = np.argmax(y_batch, axis=1) ##argmax 는 최대값의 인덱스를 가져온다 accuracy_cnt += np.sum(p == t[i:i+batch_size]) print("Accuracy:" + str(float(accuracy_cnt) / len(x))) # - import numpy as np x=np.array([[0.1,0.2,1],[4,2,1],[14,5,17]]) y=np.argmax(x,axis=1) y print(x_train.shape) print(t_train.shape) print(x_test.shape) print(t_test.shape) import sys print(sys.path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ### NO OS OLVIDEIS DE PONER EL NÚMERO DE GRUPO Y VUESTROS NOMBRES # - # # Práctica 2 - Sistemas Inteligentes # ## - Facultad de Informática UCM # ## Búsqueda local # La primera parte son ejercicios paso a paso para familiarizarnos con la resolución de problemas sencillos de optimización, como la maximización o minimización de una función, o el problema de la mochila o del viajante, problemas conocidos cuya resolución se ha abordado con técnicas algorítmicas y que vamos a resolver utilizando algoritmos de búsqueda local. # # En la segunda parte de la práctica se pide resolver algun problema de optimización a elegir de la hoja de ejercicios que está colgada en el campus. Los problemas tienen distinto nivel de dificultad por lo que puedes elegir uno o varios. # Es importante identificar problemas de optimización para los que los algoritmos de búsqueda local sean más adecuados que los problemas de búsqueda en espacio de estados vistos en la práctica 1. Algunos ejemplos son: el problema de las bolsas (14), el sudoku (21), el problema de las mesas (23), el problema de adjudicación de proyectos (26), el problema del ganadero (27) y el problema de la asignación de tareas (28). # # # ### Algoritmo de escalada # Hill Climbing es un algoritmo de búsqueda local heurística utilizada para problemas de optimización. # Esta solución puede o no ser el óptimo global. El algoritmo es una variante del algoritmo de generación y prueba. #
# En general, el algoritmo funciona de la siguiente manera:
# - Evaluar el estado inicial.
# - Si es igual al estado del objetivo, terminamos.
# - Encuentra un estado vecino al estado actual.
# - Evaluar este estado. Si está más cerca del estado objetivo que antes, reemplace el estado inicial con este estado y repita estos pasos.
#
    # Usaremos la implementación de AIMA que está en el módulo search.py # # def hill_climbing(problem): # """From the initial node, keep choosing the neighbor with highest value, # stopping when no neighbor is better. [Figure 4.2]""" # current = Node(problem.initial) # while True: # neighbors = current.expand(problem) # if not neighbors: # break # neighbor = argmax_random_tie(neighbors, # key=lambda node: problem.value(node.state)) # if problem.value(neighbor.state) <= problem.value(current.state): # break # current = neighbor # return current.state # # Como ejemplo, vamos a definir un problema sencillo de encontrar el punto más alto en una rejilla. Este problema está definido en el módulo search.py como PeakFindingProblem. Lo reproducimos aquí y creamos una rejilla simple. from search import * initial = (0, 0) grid = [[3, 7, 2, 8], [5, 2, 9, 1], [5, 3, 3, 1]] # + # Pre-defined actions for PeakFindingProblem directions4 = { 'W':(-1, 0), 'N':(0, 1), 'E':(1, 0), 'S':(0, -1) } directions8 = dict(directions4) directions8.update({'NW':(-1, 1), 'NE':(1, 1), 'SE':(1, -1), 'SW':(-1, -1) }) class PeakFindingProblem(Problem): """Problem of finding the highest peak in a limited grid""" def __init__(self, initial, grid, defined_actions=directions4): """The grid is a 2 dimensional array/list whose state is specified by tuple of indices""" Problem.__init__(self, initial) self.grid = grid self.defined_actions = defined_actions self.n = len(grid) assert self.n > 0 self.m = len(grid[0]) assert self.m > 0 def actions(self, state): """Returns the list of actions which are allowed to be taken from the given state""" allowed_actions = [] for action in self.defined_actions: next_state = vector_add(state, self.defined_actions[action]) if next_state[0] >= 0 and next_state[1] >= 0 and next_state[0] <= self.n - 1 and next_state[1] <= self.m - 1: allowed_actions.append(action) return allowed_actions def result(self, state, action): """Moves in the direction specified by action""" return vector_add(state, self.defined_actions[action]) def value(self, state): """Value of a state is the value it is the index to""" x, y = state assert 0 <= x < self.n assert 0 <= y < self.m return self.grid[x][y] # - problem = PeakFindingProblem(initial, grid, directions4) ## Resolvemos el problema con escalada. Comenta la solución encontrada. ¿es óptima? ¿cómo se puede mejorar la resolución del problema con este algoritmo? hill_climbing(problem) problem.value(hill_climbing(problem)) ## copiamos aqui el codigo de search.py def hill_climbing(problem): """From the initial node, keep choosing the neighbor with highest value, stopping when no neighbor is better. [Figure 4.2]""" current = Node(problem.initial) while True: neighbors = current.expand(problem) if not neighbors: break neighbor = argmax_random_tie(neighbors, key=lambda node: problem.value(node.state)) if problem.value(neighbor.state) <= problem.value(current.state): break current = neighbor return current.state solution = problem.value(hill_climbing(problem)) solution # ### Ejercicio 1. Resuelve el problema anterior de encontrar el punto máximo en una rejilla. Comenta y razona los resultados obtenidos en distintas rejillas de mayor tamaño con el algoritmo de escalada por máxima pendiente. # Observar los posibles efectos de usar distinto número de direcciones y de número de ejecuciones para encontrar el mejor resultado posible. # Es muy importante comentar los resultados. 
# Prueba con varias rejillas e indica las propiedades de optimalidad (en que % de las pruebas obtiene la solución óptima), tiempo, completitud. solution = problem.value(hill_climbing(problem)) solution solutions = {problem.value(hill_climbing(problem)) for i in range(100)} max(solutions) # Ejemplo de rejilla para pruebas. # # grid2 = [[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00], # [0.20, 0.20, 0.20, 0.20, 0.00, 0.00, 0.00, 0.00, 0.00], # [0.20, 0.20, 0.20, 0.40, 0.40, 0.00, 0.00, 0.00, 0.00], # [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.70, 1.40], # [2.20, 1.80, 0.70, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00], # [2.20, 1.80, 4.70, 6.50, 4.30, 1.80, 0.70, 0.00, 0.00], # [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 11.2, 0.70, 1.40], # [2.20, 1.80, 0.70, 0.00, 0.00, 9.00, 0.00, 0.00, 0.00], # [2.20, 1.80, 4.70, 6.50, 4.30, 1.80, 0.70, 0.00, 0.00], # [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.70, 1.40], # [2.20, 1.80, 0.70, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00], # [2.20, 1.80, 4.70, 8.50, 4.30, 1.80, 0.70, 0.00, 0.00]] problem.value(hill_climbing(problem)) # ### Algoritmos genéticos # # Se define una clase ProblemaGenetico que incluye los elementos necesarios para la representación de un problema de optimización que se va a resolver con un algoritmo genético. Los elementos son los que hemos visto en clase: # # - genes: lista de genes usados en el genotipo de los estados. # - longitud_individuos: longitud de los cromosomas # - decodifica: función de obtiene el fenotipo a partir del genotipo. # - fitness: función de valoración. # - muta: función de mutación de un cromosoma # - cruza: función de cruce de un par de cromosomas import random class ProblemaGenetico(object): def __init__(self, genes,fun_dec,fun_muta , fun_cruza, fun_fitness,longitud_individuos): self.genes = genes self.fun_dec = fun_dec self.fun_cruza = fun_cruza self.fun_muta = fun_muta self.fun_fitness = fun_fitness self.longitud_individuos = longitud_individuos """Constructor de la clase""" def decodifica(self, genotipo): """Devuelve el fenotipo a partir del genotipo""" fenotipo = self.fun_dec(genotipo) return fenotipo def muta(self, cromosoma,prob): """Devuelve el cromosoma mutado""" mutante = self.fun_muta(cromosoma,prob) return mutante def cruza(self, cromosoma1, cromosoma2): """Devuelve el cruce de un par de cromosomas""" cruce = self.fun_cruza(cromosoma1,cromosoma2) return cruce def fitness(self, cromosoma): """Función de valoración""" valoracion = self.fun_fitness(cromosoma) return valoracion # En primer lugar vamos a definir una instancia de la clase anterior correspondiente al problema de optimizar (maximizar o minimizar) la función cuadrado x^2 en el conjunto de los números naturales menores que 2^{10}. # # Vamos a usar este problema unicamente como ejemplo (ya que sabemos la solución por otros métodos matemáticos) para ver todos los elementos de los algoritmos genéticos y poder observar el comportamiento de distintas configuraciones. # + # Será necesaria la siguiente función que interpreta una lista de 0's y 1's como un número natural: # La siguiente función que interpreta una lista de 0's y 1's como # un número natural: def binario_a_decimal(x): return sum(b*(2**i) for (i,b) in enumerate(x)) # - list(enumerate([1, 0, 0])) # + # En primer luegar usaremos la clase anterior para representar el problema de optimizar (maximizar o minimizar) # la función cuadrado en el conjunto de los números naturales menores que # 2^{10}. # Tenemos que definir funciones de cruce, mutación y fitness para este problema. 
def fun_cruzar(cromosoma1, cromosoma2): """Cruza los cromosomas por la mitad""" l1 = len(cromosoma1) l2 = len(cromosoma2) cruce1 = cromosoma1[0:l1//2]+cromosoma2[l1//2:l2] cruce2 = cromosoma2[0:l2//2]+cromosoma1[l2//2:l1] return [cruce1,cruce2] def fun_mutar(cromosoma,prob): """Elige un elemento al azar del cromosoma y lo modifica con una probabilidad igual a prob""" l = len(cromosoma) p = random.randint(0,l-1) if prob > random.uniform(0,1): cromosoma[p] = (cromosoma[p]+1)%2 return cromosoma def fun_fitness_cuad(cromosoma): """Función de valoración que eleva al cuadrado el número recibido en binario""" n = binario_a_decimal(cromosoma)**2 return n cuadrados = ProblemaGenetico([0,1],binario_a_decimal,fun_mutar, fun_cruzar, fun_fitness_cuad,10) # - # Una vez definida la instancia cuadrados que representa el problema genético, probar alguna de las funciones definidas en la clase anterior, para esta instancia concreta. Por ejemplo: cuadrados.decodifica([1,0,0,0,1,1,0,0,1,0,1]) # Salida esperada: 1329 cuadrados.fitness([1,0,0,0,1,1,0,0,1,0,1]) # Salida esperada: 1766241 cuadrados.muta([1,0,0,0,1,1,0,0,1,0,1],0.1) # Posible salida: [1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1] cuadrados.muta([1,0,0,0,1,1,0,0,1,0,1],0.1) # Posible salida: [0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1] cuadrados.cruza([1,0,0,0,1,1,0,0,1,0,1],[0,1,1,0,1,0,0,1,1,1]) # Posible salida: [[1, 0, 0, 0, 1, 0, 0, 1, 1, 1], [0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]] # ### Ejercicio 2 # # - Definir una función poblacion_inicial(problema_genetico,tamaño), para definir una población inicial de un tamaño dado, para una instancia dada de la clase anterior ProblemaGenetico # # sugerencia: usar random.choice # # - Definir una función de cruce que recibe una instancia de Problema_Genetico y una población de padres (supondremos que hay un número par de padres), obtiene la población resultante de cruzarlos de dos en dos (en el orden en que aparecen) # # cruza_padres(problema_genetico,padres) # # - Definir la función de mutación que recibe una instancia de Problema_Genetico, una población y una probabilidad de mutación, obtiene la población resultante de aplicar operaciones de mutación a cada individuo llamando a la función muta definida para el problema genético. # muta_individuos(problema_genetico, poblacion, prob) # + active="" # def poblacion_inicial(problema_genetico, size): # return .... # # Ejemplo de llamada: # poblacion_inicial(cuadrados,10) # + active="" # def cruza_padres(problema_genetico,padres): # # return .......... 
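# A minimal sketch of the two helpers asked for in Ejercicio 2 (one possible implementation against the `ProblemaGenetico` interface defined above, not the official solution): `poblacion_inicial` draws each gene uniformly with `random.choice`, and `cruza_padres` crosses consecutive pairs of parents with the problem's `cruza` operator.

# +
def poblacion_inicial(problema_genetico, size):
    # size random chromosomes, each gene drawn uniformly from problema_genetico.genes
    return [[random.choice(problema_genetico.genes)
             for _ in range(problema_genetico.longitud_individuos)]
            for _ in range(size)]


def cruza_padres(problema_genetico, padres):
    # cross the parents two by two, in the order they appear (assumes an even number of parents)
    hijos = []
    for i in range(0, len(padres), 2):
        hijos.extend(problema_genetico.cruza(padres[i], padres[i + 1]))
    return hijos
# -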
# # + active="" # p1 = [[1, 1, 0, 1, 0, 1, 0, 0, 0, 1], # [0, 1, 0, 1, 0, 0, 1, 0, 1, 1], # [0, 0, 1, 0, 0, 0, 1, 1, 1, 0], # [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], # [0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]] # # cruza_padres(cuadrados,p1) # # Posible salida # # [[1, 1, 0, 1, 0, 0, 1, 0, 1, 1], # # [0, 1, 0, 1, 0, 1, 0, 0, 0, 1], # # [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], # # [0, 0, 1, 0, 0, 0, 1, 1, 1, 0], # # [0, 1, 1, 1, 1, 0, 1, 1, 0, 1], # # [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]] # + active="" # Este os doy hecho: # # def muta_individuos(problema_genetico, poblacion, prob): # return [problema_genetico.muta(x, prob) for x in poblacion] # - muta_individuos(cuadrados,p1,0.5) # Posible salida: # [[1, 1, 0, 1, 0, 1, 0, 0, 0, 1], # [0, 1, 0, 1, 0, 0, 1, 0, 0, 1], # [0, 0, 1, 0, 0, 0, 1, 0, 1, 0], # [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], # [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], # [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]] p1 = [[1, 1, 0, 1, 0, 1, 0, 0, 0, 1], [0, 1, 0, 1, 0, 0, 1, 0, 1, 1], [0, 0, 1, 0, 0, 0, 1, 1, 1, 0], [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]] muta_individuos(cuadrados,p1,0.5) # ### Ejercicio 3 # # Se pide definir una función de selección mediante torneo de n individuos de una población. # La función recibe como entrada: # - una instancia de la clase ProblemaGenetico # - una población # - el número n de individuos que vamos a seleccionar # - el número k de participantes en el torneo # - un valor opt que puede ser o la función max o la función min (dependiendo de si el problema es de maximización o de minimización, resp.). # # seleccion\_por\_torneo(problema_genetico,poblacion,n,k,opt) # # INDICACIÓN: Usar random.sample para seleccionar k elementos de una secuencia. # Por ejemplo, random.sample(population=[2,5,7,8,9], k=3) devuelve [7,5,8]. def seleccion_por_torneo(problema_genetico, poblacion, n, k, opt): """Selección por torneo de n individuos de una población. Siendo k el nº de participantes y opt la función max o min.""" seleccionados = [] # un individuo puede ganar varias selecciones en el torneo. No eliminar repetidos. return seleccionados #Ejemplo seleccion_por_torneo(cuadrados, poblacion_inicial(cuadrados,8),3,6,max) # Posible salida: [[1, 1, 1, 1, 1, 0, 0, 0, 1, 1], [1, 0, 0, 1, 1, 1, 0, 1, 0, 1], [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]] seleccion_por_torneo(cuadrados, poblacion_inicial(cuadrados,8),3,6,min) # [[0, 0, 1, 1, 0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]] # + # La siguiente función implementa una posibilidad para el algoritmo genético completo: # inicializa t = 0 # Generar y evaluar la Población P(t) # Mientras no hemos llegado al número de generaciones fijado: t < nGen # P1 = Selección por torneo de (1-size)·p individuos de P(t) # P2 = Selección por torneo de (size·p) individuos de P(t) # Aplicar cruce en la población P2 # P4 = Union de P1 y P3 # P(t+1) := Aplicar mutación P4 # Evalua la población P(t+1) # t:= t+1 # Sus argumentos son: # problema_genetico: una instancia de la clase ProblemaGenetico con la representación adecuada del problema de optimización # que se quiere resolver. # k: número de participantes en los torneos de selección. # opt: max ó min, dependiendo si el problema es de maximización o de minimización. # nGen: número de generaciones (que se usa como condición de terminación) # size: número de individuos en cada generación # prop_cruce: proporción del total de la población que serán padres. # prob_mutación: probabilidad de realizar una mutación de un gen. 
def algoritmo_genetico(problema_genetico,k,opt,ngen,size,prop_cruces,prob_mutar): poblacion= poblacion_inicial(problema_genetico,size) n_padres=round(size*prop_cruces) n_padres= int (n_padres if n_padres%2==0 else n_padres-1) n_directos = size-n_padres for _ in range(ngen): poblacion= nueva_generacion(problema_genetico,k,opt,poblacion,n_padres, n_directos,prob_mutar) mejor_cr= opt(poblacion, key=problema_genetico.fitness) mejor=problema_genetico.decodifica(mejor_cr) return (mejor,problema_genetico.fitness(mejor_cr)) # - # La función auxiliar nueva_generacion(problema_genetico,poblacion,n_padres,n_directos,prob_mutar) que dada una población calcula la siguiente generación. #Definir la función nueva_generacion def nueva_generacion(problema_genetico, k,opt, poblacion, n_padres, n_directos, prob_mutar): padres2 = seleccion_por_torneo(problema_genetico, poblacion, n_directos, k,opt) padres1 = seleccion_por_torneo(problema_genetico, poblacion, n_padres , k, opt) cruces = cruza_padres(problema_genetico,padres1) generacion = padres2+cruces resultado_mutaciones = muta_individuos(problema_genetico, generacion, prob_mutar) return resultado_mutaciones # ### Ejercicio 4. Ejecutar el algoritmo genético anterior, para resolver el problema anterior (tanto en minimización como en maximización). # # Hacer una valoración de resultados y comentarios sobre el comportamiento del algoritmmo. # En la resolución del problema hay que tener en cuenta que el algoritmo genético devuelve un par con el mejor fenotipo encontrado y su valoración. # # Observar como afectan los párametros, probabilidades y presión de selección a la calidad de los resultados. # # Opcionalmente se puede modificar la función a optimizar para que no sea tan simple como x^2 # Puedes cambiar la funcion X^2 por otra función cambiando la función de fitness. # # + active="" # Por ejemplo # funcion_objetivo(x, y, z): # f= ((x**3 - y**2) * z) - x**4 # return(f) # - algoritmo_genetico(cuadrados,3,min,20,10,0.7,0.1) # Salida esperada: (0, 0) algoritmo_genetico(cuadrados,3,max,20,10,0.7,0.1) # Salida esperada: (1023, 1046529) # + active="" # Elige la mejor configuración de parámetros y luego puedes realizar varias ejecuciones para obtener el mejor resultado posible. # # import numpy as np # arr = np.array([algoritmo_genetico(cuadrados, 3, min, 20, 10, 0.7, 0.1)[0] for _ in range(1000)]) # m = np.mean(arr) # v = np.std(arr) # print(f"{m} +- {v} expected 0") # # arr = np.array([algoritmo_genetico(cuadrados, 3, max, 20, 10, 0.7, 0.1)[0] for _ in range(1000)]) # m = np.mean(arr) # v = np.std(arr) # print(f"{m} +- {v} expected 1023") # - # ### Pruebas para determinar cuales son los mejores parámetros concretos # #### Presión de selección # #### Proporción de cruce # #### Probabilidad de mutación # ## El problema de la mochila # Se plantea el típico problema de la mochila en el que dados n objetos de pesos conocidos pi y valor vi (i=1,...,n) hay que elegir cuáles se meten en una mochila que soporta un peso P máximo. La selección debe hacerse de forma que se máximice el valor de los objetos introducidos sin superar el peso máximo. # ### Ejercicio 5 # Se pide definir la representación del problema de la mochila usando genes [0,1] y longitud de los individuos n. # # Los valores 1 ó 0 representan, respectivamente, si el objeto se introduce o no en la mochila Tomados de izquerda a derecha, a partir del primero que no cabe, se consideran todos fuera de la mochila,independientemente del gen en su posición. 
De esta manera, todos los individuos representan candidatos válidos. # # El numero de objetos n determina la longitud de los individuos de la población. # En primer lugar es necesario definir una función de decodificación de la mochila que recibe como entrada: # * un cromosoma (en este caso, una lista de 0s y 1s, de longitud igual a n_objetos) # * n: número total de objetos de la mochila # * pesos: una lista con los pesos de los objetos # * capacidad: peso máximo de la mochila. # La función decodifica recibe (cromosoma, n, pesos, capacidad) y devuelve una lista de 0s y 1s que indique qué objetos están en la mochila y cuáles no (el objeto i está en la mochila si y sólo si en la posición i-ésima de la lista hay un 1). Esta lista se obtendrá a partir del cromosoma, pero teniendo en cuenta que a partir del primer objeto que no quepa, éste y los siguientes se consideran fuera de la mochila, independientemente del valor que haya en su correspondiente posición de cromosoma. def decodifica_mochila(cromosoma, n, pesos, capacidad): peso_en_mochila = 0 l = [] for i in range(n): if cromosoma[i] == 1 and peso_en_mochila + pesos[i] <= capacidad: l.append(1) peso_en_mochila += pesos[i] elif cromosoma[i]== 0 or peso_en_mochila + pesos[i] > capacidad: l.append(0) return l decodifica([1,1,1,1,1], 5, [2,3,4,5,1], 5) # Para definir la función de evaluación (fitness) necesitamos calcular el valor total de los objetos que están dentro de la mochila que representa el cromosoma según la codificación utilizada en la función anterior. # # Se pide la función fitness (cromosoma, n_objetos, pesos, capacidad, valores) donde los parámetros son los mismos que en la función anterior, y valores es la lista de los valores de cada objeto # # fitness(cromosoma, n_objetos, pesos, capacidad, valores) # # Ejemplo de uso: # fitness([1,1,1,1], 4, [2,3,4,5], 4, [7,1,4,5]) # 7 def fitness_mochila(cromosoma, n_objetos, pesos, capacidad, valores): ...... return valor fitness_mochila([1,1,1,1], 4, [2,3,4,5], 4, [7,1,4,5]) # Damos tres instancias concretas del problema de la mochila. Damos también sus soluciones optimas, para que se puedan comparar con los resultados obtenidos por el algoritmo genético: # + # Problema de la mochila 1: # 10 objetos, peso máximo 165 pesos1 = [23,31,29,44,53,38,63,85,89,82] valores1 = [92,57,49,68,60,43,67,84,87,72] # Solución óptima= [1,1,1,1,0,1,0,0,0,0], con valor 309 # + # Problema de la mochila 2: # 15 objetos, peso máximo 750 pesos2 = [70,73,77,80,82,87,90,94,98,106,110,113,115,118,120] valores2 = [135,139,149,150,156,163,173,184,192,201,210,214,221,229,240] # Solución óptima= [1,0,1,0,1,0,1,1,1,0,0,0,0,1,1] con valor 1458 # + # Problema de la mochila 3: # 24 objetos, peso máximo 6404180 pesos3 = [382745,799601,909247,729069,467902, 44328, 34610,698150,823460,903959,853665,551830,610856, 670702,488960,951111,323046,446298,931161, 31385,496951,264724,224916,169684] valores3 = [825594,1677009,1676628,1523970, 943972, 97426, 69666,1296457,1679693,1902996, 1844992,1049289,1252836,1319836, 953277,2067538, 675367, 853655,1826027, 65731, 901489, 577243, 466257, 369261] # Solución óptima= [1,1,0,1,1,1,0,0,0,1,1,0,1,0,0,1,0,0,0,0,0,1,1,1] con valoración 13549094 # - # ### Ejercicio 6 # # Definir variables m1g, m2g y m3g, referenciando a instancias de Problema_Genetico que correspondan, respectivamente, a los problemas de la mochila anteriores. Resuelve los problemas y comentar los resultados obtenidos en cuanto a eficiencia y calidad de los resultados obtenidos. 
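# A possible completion of `fitness_mochila` (a sketch only, assuming the `decodifica_mochila` helper defined above; not the official solution): decode the chromosome and add up the values of the objects that end up inside the knapsack.
def fitness_mochila(cromosoma, n_objetos, pesos, capacidad, valores):
    # which objects actually fit, according to the decoding rule above
    dentro = decodifica_mochila(cromosoma, n_objetos, pesos, capacidad)
    # total value of the selected objects
    return sum(v for d, v in zip(dentro, valores) if d == 1)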
# Algunas de las salidas posibles variando los parámetros. # + # >>> algoritmo_genetico_t(m1g,3,max,100,50,0.8,0.05) # ([1, 1, 1, 1, 0, 1, 0, 0, 0, 0], 309) # >>> algoritmo_genetico_t(m2g,3,max,100,50,0.8,0.05) # ([1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0], 1444) # >>> algoritmo_genetico_t(m2g,3,max,200,100,0.8,0.05) # ([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0], 1439) # >>> algoritmo_genetico_t(m2g,3,max,200,100,0.8,0.05) # ([1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1], 1458) # >>> algoritmo_genetico_t(m3g,5,max,400,200,0.75,0.1) # ([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0], 13518963) # >>> algoritmo_genetico_t(m3g,4,max,600,200,0.75,0.1) # ([1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0], 13524340) # >>> algoritmo_genetico_t(m3g,4,max,1000,200,0.75,0.1) # ([1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], 13449995) # >>> algoritmo_genetico_t(m3g,3,max,1000,100,0.75,0.1) # ([1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], 13412953) # >>> algoritmo_genetico_t(m3g,3,max,2000,100,0.75,0.1) # ([0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], 13366296) # >>> algoritmo_genetico_t(m3g,6,max,2000,100,0.75,0.1) # ([1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1], 13549094) # + def fitness_mochila_1(cromosoma): v = fitness_mochila(cromosoma, 10, pesos1, 165, valores1) return v def decodifica_mochila_1(cromosoma): v = decodifica_mochila(cromosoma, 10, pesos1, 165) return v m1g = ProblemaGenetico([0,1], decodifica_mochila_1, fun_mutar, fun_cruzar, fitness_mochila_1,10) def fitness_mochila_2(cromosoma): v = fitness_mochila(cromosoma, 15, pesos2, 750, valores2) return v def decodifica_mochila_2(cromosoma): v = decodifica_mochila(cromosoma, 14, pesos2, 750) return v m2g = ProblemaGenetico([0,1], decodifica_mochila_2, fun_mutar, fun_cruzar, fitness_mochila_2,15) def fitness_mochila_3(cromosoma): v = fitness_mochila(cromosoma, 24, pesos3,6404180 , valores3) return v def decodifica_mochila_3(cromosoma): v = decodifica_mochila(cromosoma, 24, pesos3, 6404180) return v m3g = ProblemaGenetico([0,1], decodifica_mochila_3, fun_mutar, fun_cruzar, fitness_mochila_3,24) # - algoritmo_genetico(m3g,5,max,400,200,0.75,0.1) # ### Ejercicio 7 # Resolver mediante una configuración de un algoritmo genético el problema del cuadrado mágico que consiste en colocar en un cuadrado n × n los números naturales de 1 a n^2, # de tal manera que las filas, las columnas y las diagonales principales sumen los mismo. # Ejemplo: una solucion para n= 3 # # 4 3 8 # 9 5 1 # 2 7 6 # # - Dimensionn del cuadrado: N # - Suma común: SUMA=(N·(N^2 + 1))/2 # # Comenta el resultado y el rendimiento del algoritmo para distintos parámetros. # # # # # ### Enfriamiento simulado ( simulated annealing) # El algoritmo de enfriamiento simulado puede manejar las situaciones de óptimo local o mesetas típicas en algoritmos de escalada. # El enfriamiento simulado es bastante similar a la escalada pero en lugar de elegir el mejor movimiento en cada iteración, elige un movimiento aleatorio. Si este movimiento aleatorio nos acerca al óptimo global, será aceptado, pero si no lo hace, el algoritmo puede aceptar o rechazar el movimiento en función de una probabilidad dictada por la temperatura. Cuando la temperatura es alta, es más probable que el algoritmo acepte un movimiento aleatorio incluso si es malo. A bajas temperaturas, solo se aceptan buenos movimientos, con alguna excepción ocasional. 
Esto permite la exploración del espacio de estado y evita que el algoritmo se atasque en el óptimo local. # # Usaremos la implementación de AIMA del modulo search.py # # def simulated_annealing(problem, schedule=exp_schedule()): # """[Figure 4.5] CAUTION: This differs from the pseudocode as it # returns a state instead of a Node.""" # current = Node(problem.initial) # for t in range(sys.maxsize): # T = schedule(t) # if T == 0: # return current.state # neighbors = current.expand(problem) # if not neighbors: # return current.state # next_choice = random.choice(neighbors) # delta_e = problem.value(next_choice.state) - problem.value(current.state) # if delta_e > 0 or probability(math.exp(delta_e / T)): # current = next_choice # Como hemos visto en clase hay varios métodos de enfriamiento (scheduling routine) # Se puede variar el método de enfriamiento. En la implementación actual estamos usando el método de enfriamiento exponencial (que se pasa como parámetro). # # def exp_schedule(k=20, lam=0.005, limit=100): # """One possible schedule function for simulated annealing""" # return lambda t: (k * math.exp(-lam * t) if t < limit else 0) # Resuelve alguno de los problemas vistos en esta práctica usando el algoritmo de enfriamiento simulado y comenta los resultados obtenidos. Por ejemplo, puedes utilizar el problema del máximo en una rejilla resuelto con Escalada en el ejercicio 1. problem = PeakFindingProblem(initial, grid, directions4) ## Ejecuta varias veces esta celda. ¿Obtienes siempre el mismo resultado? ¿encuentra la solución óptima? simulated_annealing(problem) problem.value (simulated_annealing(problem)) # + # Lo resolvemos con 100 ejecuciones del algoritmo de enfriamiento simulado. ¿encuentra la solución óptima? solutions = {problem.value(simulated_annealing(problem)) for i in range(100)} max(solutions) # - # ### Ejercicio 8. # Resuelve el problema anterior de encontrar el punto máximo en una rejilla. Comenta y razona los resultados con enfriamiento simulado comparandolo con la resolución con escalada del ejercicio 1. # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="YrR2OL0e6QoR" colab_type="text" # Lambda School Data Science # # *Unit 2, Sprint 1, Module 4* # # --- # + [markdown] colab_type="text" id="7IXUfiQ2UKj6" # # Logistic Regression # # # ## Assignment # # - [ ] Watch Aaron's [video #1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video #2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression. # - [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. # - [ ] Do train/validate/test split with the Tanzania Waterpumps data. # - [ ] Begin with baselines for classification. # - [ ] Use scikit-learn for logistic regression. # - [ ] Get your validation accuracy score. # - [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.) # - [ ] Commit your notebook to your fork of the GitHub repo. 
# # --- # # # ## Stretch Goals # # - [ ] Add your own stretch goal(s) ! # - [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) # - [ ] Make exploratory visualizations. # - [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.) # - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). # - [ ] Get and plot your coefficients. # - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). # # --- # # ## Data Dictionary # # ### Features # # Your goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints: # # - `amount_tsh` : Total static head (amount water available to waterpoint) # - `date_recorded` : The date the row was entered # - `funder` : Who funded the well # - `gps_height` : Altitude of the well # - `installer` : Organization that installed the well # - `longitude` : GPS coordinate # - `latitude` : GPS coordinate # - `wpt_name` : Name of the waterpoint if there is one # - `num_private` : # - `basin` : Geographic water basin # - `subvillage` : Geographic location # - `region` : Geographic location # - `region_code` : Geographic location (coded) # - `district_code` : Geographic location (coded) # - `lga` : Geographic location # - `ward` : Geographic location # - `population` : Population around the well # - `public_meeting` : True/False # - `recorded_by` : Group entering this row of data # - `scheme_management` : Who operates the waterpoint # - `scheme_name` : Who operates the waterpoint # - `permit` : If the waterpoint is permitted # - `construction_year` : Year the waterpoint was constructed # - `extraction_type` : The kind of extraction the waterpoint uses # - `extraction_type_group` : The kind of extraction the waterpoint uses # - `extraction_type_class` : The kind of extraction the waterpoint uses # - `management` : How the waterpoint is managed # - `management_group` : How the waterpoint is managed # - `payment` : What the water costs # - `payment_type` : What the water costs # - `water_quality` : The quality of the water # - `quality_group` : The quality of the water # - `quantity` : The quantity of water # - `quantity_group` : The quantity of water # - `source` : The source of the water # - `source_type` : The source of the water # - `source_class` : The source of the water # - `waterpoint_type` : The kind of waterpoint # - `waterpoint_type_group` : The kind of waterpoint # # ### Labels # # There are three possible values: # # - `functional` : the waterpoint is operational and there are no repairs needed # - `functional needs repair` : the waterpoint is operational, but needs repairs # - `non functional` : the waterpoint is not operational # # --- # # ## Generate a submission # # Your code to generate a submission file may look like this: # # ```python # # estimator is your model or pipeline, which you've fit on X_train # # # X_test is your pandas dataframe or numpy array, # # with the same number of rows, in the same order, as test_features.csv, # # and the same number of columns, in the same order, as X_train # # 
y_pred = estimator.predict(X_test) # # # # Makes a dataframe with two columns, id and status_group, # # and writes to a csv file, without the index # # sample_submission = pd.read_csv('sample_submission.csv') # submission = sample_submission.copy() # submission['status_group'] = y_pred # submission.to_csv('your-submission-filename.csv', index=False) # ``` # # If you're working locally, the csv file is saved in the same directory as your notebook. # # If you're using Google Colab, you can use this code to download your submission csv file. # # ```python # from google.colab import files # files.download('your-submission-filename.csv') # ``` # # --- # + id="S0pqQbAz_Dxi" colab_type="code" colab={} import sys import warnings import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.linear_model import LinearRegression from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression # + colab_type="code" id="o9eSnDYhUGD7" colab={} # %%capture # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # + colab_type="code" id="ipBYS77PUwNR" colab={} # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # + colab_type="code" id="QJBD4ruICm1m" colab={} # Read the Tanzania Waterpumps data # train_features.csv : the training set features # train_labels.csv : the training set labels # test_features.csv : the test set features # sample_submission.csv : a sample submission file in the correct format train_features = pd.read_csv(DATA_PATH+'waterpumps/train_features.csv') train_labels = pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv') test_features = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') #sample_submission = pd.read_csv(DATA_PATH+'waterpumps/waterpumps/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) #assert sample_submission.shape == (14358, 2) # + colab_type="code" id="2Amxyx3xphbb" colab={"base_uri": "https://localhost:8080/", "height": 435} outputId="a57715ad-9b7a-494f-a04a-f0d90a617d99" #train_features print(train_features.shape) train_features.head() # + id="6G9mpbAOA1RC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="348a0172-2ea4-472f-9513-8c8a788c6c47" #train_labels print(train_labels.shape) train_labels.head() # + id="qNoRa3zqA8dg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 452} outputId="d7475c33-18fa-42e1-aa26-cf92c723eaeb" #test_features print(test_features.shape) test_features.head() # + id="q42khj8-GAmp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 435} outputId="88634d14-e9f5-4595-ce75-3366a3809811" #Putting the 'status_group' column in the training set train_merge = pd.merge(train_features, train_labels) print(train_merge.shape) train_merge.head() # + id="fdWvHWKAQkxf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 417} outputId="f10f4d47-cf98-4134-a671-2040086139d6" #Converting status_group into int train_merge['status_group'] = 
train_merge['status_group'].replace({'functional': 0, 'functional needs repair': 1, 'non functional': 2}) train_merge.head() # + id="cbUFvZlRHnTB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 767} outputId="c559e0bd-7b03-4c39-d038-ed478133f8e3" #Checking for null values train_merge.isnull().sum() # + id="8qOeNpt9BqQQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="48f7702d-ac33-45e3-e3b7-4172b6c9875e" #Making a train and val set. train, val = train_test_split(train_merge, random_state = 1) train.shape, val.shape # + id="xq7Vg3eyE5lX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="c3070c2d-8fc1-4626-887f-bcd04dfea3e5" #Baseline for classification target = 'status_group' y_train = train[target] y_train.value_counts(normalize = True) # + id="NvGMDPXyL8VV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="6662ccf0-cf24-41e9-fed6-234fba429014" majority = y_train.mode()[0] y_pred = [majority] * len(y_train) y_pred # + id="iJuSCOvpMg-A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d30686d7-8286-4da2-d5e8-0024f36c3531" accuracy_score(y_train, y_pred) # + id="9QJsKEK1MyC9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="428c03f8-68c3-4ba1-97e0-dbf16ff7a22a" y_val = val[target] y_pred = [majority] * len(y_val) accuracy_score(y_val, y_pred) # + id="rdhjNxqNJZFt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="9c64d0a4-f8af-4517-f6e6-459c35a73d65" #Linear Regression train.describe() # + id="T68QoHhBN1b_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="b8ca42db-acc0-486a-b894-f0af3c01ace2" linear_reg = LinearRegression() features = ['amount_tsh', 'population', 'construction_year'] x_train = train[features] x_val = val[features] imputer = SimpleImputer() x_train_imputed = imputer.fit_transform(x_train) x_val_imputed = imputer.transform(x_val) linear_reg.fit(x_train_imputed, y_train) linear_reg.predict(x_val_imputed) # + id="cG03J4iGRqq2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 91} outputId="f84b0b1c-660c-4d25-8fa2-6f88e08be0d0" #Logistic Regression log_reg = LogisticRegression(solver = 'lbfgs') log_reg.fit(x_train_imputed, y_train) log_reg.predict(x_val_imputed) # + id="JshW0Y_PJb82" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="8372e1a6-123b-43e2-867a-0c3363b4f922" #Validation accuracy score print(log_reg.score(x_val_imputed, y_val)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # __Pipeline 3 - Complex Source__ # # So far, we've not paid much attention to the source galaxy's morphology. We've assumed its a single-component exponential profile, which is a fairly crude assumption. A quick look at any image of a real galaxy reveals a wealth of different structures that could be present - bulges, disks, bars, star-forming knots and so on. Furthermore, there could be more than one source-galaxy! # # In this example, we'll explore how far we can get trying to_fit a complex source using a pipeline. Fitting complex source's is an exercise in diminishing returns. Each component we add to our source model brings with it an extra 5-7, parameters. 
If there are 4 components, or multiple galaxies, we're quickly entering the somewhat nasty regime of 30-40+ parameters in our non-linear search. Even with a pipeline, thats a lot of parameters to try and fit! from autofit import conf from autolens.data import ccd from autolens.data.array import mask as msk from autolens.model.profiles import light_profiles as lp from autolens.model.profiles import mass_profiles as mp from autolens.model.galaxy import galaxy as g from autolens.lens import lens_data as ld from autolens.lens import ray_tracing from autolens.lens import lens_fit from autolens.data.plotters import ccd_plotters from autolens.lens.plotters import lens_fit_plotters # Lets setup the path to the workspace, as per usual. # + # If you are using Docker, the paths to the chapter is as follows (e.g. comment out this line)! # path = '/home/user/workspace/' # If you arn't using docker, you need to change the path below to the chapter 3 directory and uncomment it # path = '/path/to/workspace/' conf.instance = conf.Config(config_path=path+'config', output_path=path+'output') # - # This function simulates an image with a complex source. def simulate(): from autolens.data.array import grids from autolens.model.galaxy import galaxy as g from autolens.lens import ray_tracing psf = ccd.PSF.simulate_as_gaussian(shape=(11, 11), sigma=0.05, pixel_scale=0.05) image_plane_grid_stack = grids.GridStack.grid_stack_for_simulation(shape=(180, 180), pixel_scale=0.05, psf_shape=(11, 11)) lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal( centre=(0.0, 0.0), axis_ratio=0.8, phi=135.0, einstein_radius=1.6)) source_galaxy_0 = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=90.0, intensity=0.2, effective_radius=1.0, sersic_index=1.5)) source_galaxy_1 = g.Galaxy(light=lp.EllipticalSersic(centre=(-0.25, 0.25), axis_ratio=0.7, phi=45.0, intensity=0.1, effective_radius=0.2, sersic_index=3.0)) source_galaxy_2 = g.Galaxy(light=lp.EllipticalSersic(centre=(0.45, -0.35), axis_ratio=0.6, phi=90.0, intensity=0.03, effective_radius=0.3, sersic_index=3.5)) source_galaxy_3 = g.Galaxy(light=lp.EllipticalSersic(centre=(-0.05, -0.0), axis_ratio=0.9, phi=140.0, intensity=0.03, effective_radius=0.1, sersic_index=4.0)) tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy_0, source_galaxy_1, source_galaxy_2, source_galaxy_3], image_plane_grid_stack=image_plane_grid_stack) return ccd.CCDData.simulate(array=tracer.image_plane_image_for_simulation, pixel_scale=0.05, exposure_time=300.0, psf=psf, background_sky_level=0.1, add_noise=True) # Lets simulate the image. ccd_data = simulate() ccd_plotters.plot_ccd_subplot(ccd_data=ccd_data) # Yep, that's a pretty complex source. There are clearly more than 4 peaks of light - I wouldn't like to guess whether there is one or two sources (or more). You'll also notice I omitted the lens galaxy's light for this system, this is to keep the number of parameters down and the phases running fast, but we wouldn't get such a luxury in a real galaxy. # Again, before we checkout the pipeline, lets import it, and get it running. # + from workspace.howtolens.chapter_3_pipelines import tutorial_3_pipeline_complex_source pipeline_complex_source = tutorial_3_pipeline_complex_source.make_pipeline( pipeline_path='/howtolens/c3_t3_complex_source/') pipeline_complex_source.run(data=ccd_data) # - # Okay, so with 4 sources, we still couldn't get a good a fit to the source that didn't leave residuals. 
# The thing is, I did in fact simulate the lens with 4 sources. This means that there is a 'perfect fit' somewhere in that parameter space, which we unfortunately missed using the pipeline above.
#
# Let's confirm this by manually fitting the ccd data with the true input model.

# +
lens_data = ld.LensData(ccd_data=ccd_data, mask=msk.Mask.circular(shape=ccd_data.shape, pixel_scale=ccd_data.pixel_scale,
                                                                  radius_arcsec=3.0))

lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.0, 0.0), axis_ratio=0.8, phi=135.0, einstein_radius=1.6))

source_galaxy_0 = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=90.0, intensity=0.2,
                                                     effective_radius=1.0, sersic_index=1.5))

source_galaxy_1 = g.Galaxy(light=lp.EllipticalSersic(centre=(-0.25, 0.25), axis_ratio=0.7, phi=45.0, intensity=0.1,
                                                     effective_radius=0.2, sersic_index=3.0))

source_galaxy_2 = g.Galaxy(light=lp.EllipticalSersic(centre=(0.45, -0.35), axis_ratio=0.6, phi=90.0, intensity=0.03,
                                                     effective_radius=0.3, sersic_index=3.5))

source_galaxy_3 = g.Galaxy(light=lp.EllipticalSersic(centre=(-0.05, -0.0), axis_ratio=0.9, phi=140.0, intensity=0.03,
                                                     effective_radius=0.1, sersic_index=4.0))

tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy],
                                             source_galaxies=[source_galaxy_0, source_galaxy_1,
                                                              source_galaxy_2, source_galaxy_3],
                                             image_plane_grid_stack=lens_data.grid_stack)

true_fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)

lens_fit_plotters.plot_fit_subplot(fit=true_fit)
# -

# And indeed, we see far improved residuals, chi-squareds, etc.
#
# The moral of this story is that, if the source morphology is complex, there is no way we can build a pipeline to fit it. For this tutorial, this was true even though our source model could actually fit the data perfectly. For real lenses, the source will be *even more complex* and there is even less hope of getting a good fit :(
#
# But fear not, PyAutoLens has you covered. In chapter 4, we'll introduce a completely new way to model the source galaxy, which addresses the problem faced here. But before that, in the next tutorial we'll discuss how we actually pass priors in a pipeline.
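# For reference, the 'residuals' and 'chi-squareds' shown in the fit subplots are, conceptually, the per-pixel quantities sketched below. This is a generic illustration only, not the PyAutoLens API; the function names here are made up for the example.

# +
import numpy as np


def residual_map(data, model):
    # per-pixel difference between the observed image and the model image
    return data - model


def chi_squared_map(data, model, noise_map):
    # residuals weighted by the per-pixel noise, then squared
    return (residual_map(data, model) / noise_map) ** 2


# toy usage on a 2x2 "image"
toy_data = np.array([[1.0, 2.0], [3.0, 4.0]])
toy_model = np.array([[1.1, 1.9], [2.5, 4.2]])
toy_noise = np.full_like(toy_data, 0.1)
toy_chi_squared = chi_squared_map(toy_data, toy_model, toy_noise).sum()
# -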
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import cv2 import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pandas as pd from tqdm import tqdm import pickle save_location = './Guatemala 2021 images/sorted_cherries' save_series_filename = 'saved_HSV_series.pkl' fig_filename = 'HSV_distributions.png' cherries = ['red_ripe', 'red_underripe', 'red_overripe', 'yellow_ripe', 'yellow_overripe'] foldernames = {cherries[i]: cherries[i] for i in range(len(cherries))} colors = {cherries[0]:'red',cherries[1]:'green', cherries[2]:'brown', cherries[3]:'yellow', cherries[4]: 'orange'} s_hue = {} s_sat = {} s_val = {} for cherry in cherries: s_hue[cherry] = pd.Series([], dtype='float64') s_sat[cherry] = pd.Series([], dtype='float64') s_val[cherry] = pd.Series([], dtype='float64') s_hue for cherry in cherries: # get all the files file_location = os.path.join(save_location, foldernames[cherry]) files = [file for file in os.listdir(file_location) if file.endswith(".jpg")] print('processing imgs for cherry : {}'.format(cherry)) # accumulate all pixels into a pandas series each for Hue, Saturation, Value for i, curr_file in tqdm(enumerate(files)): img = cv2.imread(os.path.join(file_location,curr_file)) img_HSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) hue = pd.Series(img_HSV[:,:,0].ravel()) sat = pd.Series(img_HSV[:,:,1].ravel()) val = pd.Series(img_HSV[:,:,2].ravel()) s_hue[cherry] = pd.concat([s_hue[cherry], hue], ignore_index=True) s_sat[cherry] = pd.concat([s_sat[cherry], sat], ignore_index=True) s_val[cherry] = pd.concat([s_val[cherry], val], ignore_index=True) with open(os.path.join(save_location,save_series_filename), 'wb') as handle: pickle.dump([s_hue, s_sat, s_val], handle, protocol = pickle.HIGHEST_PROTOCOL) # + fig = plt.figure(figsize =(30, 12)) sns.set_style("white") kwargs = dict(hist_kws={'alpha':.5}, kde_kws={'linewidth':2}) ax0 = fig.add_subplot(1, 3, 1) ax0.set_title('Normalized histogram + KDF (HUE)', fontsize = 34) for cherry in cherries: sns.displot(s_hue[cherry], ax=ax0, color=colors[cherry], label = cherry, **kwargs) ax0.legend(fontsize = 24) ax0.set_xlabel("Hue", fontsize = 24) ax1 = fig.add_subplot(1, 3, 2) ax1.set_title('Normalized histogram + KDF (SATURATION)', fontsize = 34) for cherry in cherries: sns.displot(s_sat[cherry], ax=ax1, color=colors[cherry], label = cherry, **kwargs) ax1.set_xlabel("Saturation", fontsize = 24) ax1.legend(fontsize = 24) ax2 = fig.add_subplot(1, 3, 3) ax2.set_title('Normalized histogram + KDF (VALUE)', fontsize = 34) for cherry in cherries: sns.displot(s_val[cherry], ax=ax2, color=colors[cherry], label = cherry, **kwargs) ax2.set_xlabel("Value", fontsize = 24) ax2.legend(fontsize = 24) plt.savefig(os.path.join(save_location, fig_filename), dpi = 80) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # rho-Simulated SBM, n=150,1500 # + tags=["hide-input"] #hide # %load_ext autoreload # %autoreload 1 # + tags=["hide-input"] from pkg.gmp import quadratic_assignment from pkg.gmp import quadratic_assignment_ot from pkg.plot import set_theme import numpy as np # + tags=["hide-input"] import sys sys.path sys.path.insert(0,'../../graspologic') # - # # Experiment Summary, 
n = 150 # Let $(G_1, G_2) \sim \rho-SBM(\vec{n},B)$. (NB: binary, symmetric, hollow.) # # $K = 3$. # # the marginal SBM is conditional on block sizes $\vec{n}=[n_1,n_2,n_3]$. # # $B = [(.20,.01,.01);(.01,.10,.01);(.01,.01,.20)]$. (NB: rank($B$)=3 with evalues $\approx [0.212,0.190,0.098]$.) # # with $n = 150$ and $\vec{n}=[n_1,n_2,n_3] = [50,50,50]$ # # for each $\rho \in \{0.5,0.6,\ldots,0.9,1.0\}$ generate $r$ replicates $(G_1, G_2)$. # # For all $r$ replicates, run $GM$ and $GM_{LS}$ (where $GM_{LS}$ uses the "Lightspeed sinkhorn distances" for computing the step direction) each $t$ times, with each $t$ corresponding to a different random permutation on $G_2$ . # # Specifically,$G_2' = Q G_2 Q^T,$ where $Q$ is sampled uniformly from the set of $n x n$ permutations matrices. # # For each $t$ permutation, run $GM$ & $GM_{LS}$ from the barycenter ($\gamma = 0$). # # For each $r$, the $t$ permutation with the highest associated objective function value will have it's match ratio recorded # # For any $\rho$ value, have $\delta$ denote the average match ratio over the $r$ realizations # # Plot $x=\rho$ vs $y$= $\delta$ $\pm$ 2s.e. # # Below contains figures for $r=50$, $t=10$ # # # + tags=["hide-input"] #hide import numpy as np import matplotlib.pyplot as plt import random import sys from joblib import Parallel, delayed import seaborn as sns # from graspy.match import GraphMatch as GMP from graspologic.simulations import sbm_corr # from scipy.optimize import linear_sum_assignment from sklearn.metrics import pairwise_distances r = 100 t=10 # gmp = GraphMatch('maximize':True, 'tol':0.000001,'maxiter':30, 'shuffle_input':True) def match_ratio(inds, n): return np.count_nonzero(inds == np.arange(n)) / n n = 150 m = r rhos = 0.1 * np.arange(11)[5:] # rhos = np.arange(8,10.5,0.5) *0.1 n_p = len(rhos) ratios = np.zeros((n_p,m)) scores = np.zeros((n_p,m)) ratios_ss = np.zeros((n_p,m)) scores_ss = np.zeros((n_p,m)) n_per_block = int(n/3) n_blocks = 3 block_members = np.array(n_blocks * [n_per_block]) block_probs = np.array([[0.2, 0.01, 0.01], [0.01, 0.1, 0.01], [0.01, 0.01, 0.2]]) directed = False loops = False for k, rho in enumerate(rhos): np.random.seed(8888) seeds = [np.random.randint(1e8, size=t) for i in range(m)] def run_sim(seed): A1, A2 = sbm_corr( block_members, block_probs, rho, directed=directed, loops=loops ) score = 0 res_opt = None score_ss = 0 res_opt_ss = None for j in range(t): res = quadratic_assignment(A1,A2, options={'maximize':True,'tol':1e-6,'maxiter':50, 'shuffle_input':True}) if res.fun>score: perm = res.col_ind score = res.fun res_ss = quadratic_assignment_ot(A1,A2, options={'maximize':True,'tol':1e-6,'maxiter':50, 'shuffle_input':True, 'reg': 100, 'thr':1e-2}) if res_ss.fun>score_ss: perm_ss = res_ss.col_ind score_ss = res_ss.fun ratio = match_ratio(perm, n) ratio_ss = match_ratio(perm_ss, n) return ratio, score, ratio_ss, score_ss result = Parallel(n_jobs=-1)(delayed(run_sim)(seed) for seed in seeds) ratios[k,:] = [item[0] for item in result] scores[k,:] = [item[1] for item in result] ratios_ss[k,:] = [item[2] for item in result] scores_ss[k,:] = [item[3] for item in result] # - # ## Caption: # Average match ratio $\pm$ 2 s.e. as a function of correlation values $\rho$ in $\rho$-SBM simulations on $n=150, 1500$ nodes. 
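# The summary above permutes the second graph as $G_2' = Q G_2 Q^T$ before matching. The toy cell below is an illustration of that relabelling step only (the experiment code above instead passes the solver's `shuffle_input` option): it applies a random permutation to an adjacency matrix and then undoes it.

# +
rng = np.random.default_rng(0)
n_toy = 10
perm = rng.permutation(n_toy)                        # node relabelling (the permutation Q)
A = rng.integers(0, 2, size=(n_toy, n_toy))
A = np.triu(A, 1)
A = A + A.T                                          # symmetric, hollow toy adjacency matrix
A_perm = A[np.ix_(perm, perm)]                       # same graph with rows/cols relabelled: Q A Q^T
inv = np.argsort(perm)
assert np.array_equal(A_perm[np.ix_(inv, inv)], A)   # undoing the permutation recovers A
# -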
# + tags=["hide-input"] #collapse from scipy.stats import sem cb = ['#377eb8', '#ff7f00', '#4daf4a', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00'] error = [2*sem(ratios[i,:]) for i in range(n_p)] average = [np.mean(ratios[i,:] ) for i in range(n_p)] error_ss = [2*sem(ratios_ss[i,:]) for i in range(n_p)] average_ss = [np.mean(ratios_ss[i,:] ) for i in range(n_p)] sns.set_context('poster') sns.set(rc={'figure.figsize':(15,10)}) sns.set(font_scale = 2) sns.set_style('white') # plt.set_cmap(CB_color_cycle) sns.set_palette(sns.color_palette('colorblind')) plt.errorbar(rhos,average_ss, error_ss,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='GOAT', color=cb[0]) plt.errorbar(rhos,average, error,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='FAQ', color=cb[1]) plt.xlabel(r"$\rho$") plt.ylabel("avergae match ratio") plt.title('n=150') plt.legend() # - # # Experiment Summary, n = 1500 # Let $(G_1, G_2) \sim \rho-SBM(\vec{n},B)$. (NB: binary, symmetric, hollow.) # # $K = 3$. # # the marginal SBM is conditional on block sizes $\vec{n}=[n_1,n_2,n_3]$. # # $B = [(.20,.01,.01);(.01,.10,.01);(.01,.01,.20)]$. (NB: rank($B$)=3 with evalues $\approx [0.212,0.190,0.098]$.) # # with $n = 150$ and $\vec{n}=[n_1,n_2,n_3] = [50,50,50]$ # # for each $\rho \in \{0.8, 0.85, 0.9, 0.95, 1.0\}$ generate $r$ replicates $(G_1, G_2)$. # # For all $r$ replicates, run $GM$ and $GM_{LS}$ (where $GM_{LS}$ uses the "Lightspeed sinkhorn distances" for computing the step direction) each $t$ times, with each $t$ corresponding to a different random permutation on $G_2$ . # # Specifically,$G_2' = Q G_2 Q^T,$ where $Q$ is sampled uniformly from the set of $n x n$ permutations matrices. # # For each $t$ permutation, run $GM$ & $GM_{SS}$ from the barycenter ($\gamma = 0$). # # For each $r$, the $t$ permutation with the highest associated objective function value will have it's match ratio recorded # # For any $\rho$ value, have $\delta$ denote the average match ratio over the $r$ realizations # # Plot $x=\rho$ vs $y$= $\delta$ $\pm$ s.e. 
# # Below contains figures for $r=25$, $t=5$ # + tags=["hide-input"] #hide import numpy as np import matplotlib.pyplot as plt import random import sys from joblib import Parallel, delayed import seaborn as sns # from graspy.match import GraphMatch as GMP from graspologic.simulations import sbm_corr # from scipy.optimize import linear_sum_assignment from sklearn.metrics import pairwise_distances r = 25 t=5 # gmp = GraphMatch('maximize':True, 'tol':0.000001,'maxiter':30, 'shuffle_input':True) def match_ratio(inds, n): return np.count_nonzero(inds == np.arange(n)) / n n = 1500 m = r rhos = 0.1 * np.arange(11)[5:] # rhos = np.arange(8,10.5,0.5) *0.1 n_p = len(rhos) ratios = np.zeros((n_p,m)) scores = np.zeros((n_p,m)) ratios_ss = np.zeros((n_p,m)) scores_ss = np.zeros((n_p,m)) n_per_block = int(n/3) n_blocks = 3 block_members = np.array(n_blocks * [n_per_block]) block_probs = np.array([[0.2, 0.01, 0.01], [0.01, 0.1, 0.01], [0.01, 0.01, 0.2]]) directed = False loops = False for k, rho in enumerate(rhos): np.random.seed(8888) seeds = [np.random.randint(1e8, size=t) for i in range(m)] def run_sim(seed): A1, A2 = sbm_corr( block_members, block_probs, rho, directed=directed, loops=loops ) score = 0 res_opt = None score_ss = 0 res_opt_ss = None for j in range(t): res = quadratic_assignment(A1,A2, options={'maximize':True, 'tol':1e-3,'maxiter':30, 'shuffle_input':True}) if res.fun>score: perm = res.col_ind score = res.fun res_ss = quadratic_assignment_ot(A1,A2, options={'maximize':True, 'tol':1e-3,'maxiter':30, 'shuffle_input':True, 'reg': 100}) if res_ss.fun>score_ss: perm_ss = res_ss.col_ind score_ss = res_ss.fun ratio = match_ratio(perm, n) ratio_ss = match_ratio(perm_ss, n) return ratio, score, ratio_ss, score_ss result = Parallel(n_jobs=-1)(delayed(run_sim)(seed) for seed in seeds) ratios[k,:] = [item[0] for item in result] scores[k,:] = [item[1] for item in result] ratios_ss[k,:] = [item[2] for item in result] scores_ss[k,:] = [item[3] for item in result] # + tags=["hide-input"] from scipy.stats import sem error1500 = [2*sem(ratios[i,:]) for i in range(n_p)] average1500 = [np.mean(ratios[i,:] ) for i in range(n_p)] error_ss1500 = [2*sem(ratios_ss[i,:]) for i in range(n_p)] average_ss1500 = [np.mean(ratios_ss[i,:] ) for i in range(n_p)] sns.set_context('poster') sns.set(rc={'figure.figsize':(15,10)}) sns.set(font_scale = 2) sns.set_style('white') plt.errorbar(rhos,average_ss1500, error_ss1500,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='GOAT') plt.errorbar(rhos,average1500, error1500,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='FAQ') plt.xlabel(r"$\rho$") plt.ylabel("avergae match ratio") plt.title('n=1500') plt.legend() # + tags=["hide-input"] sns.set(font_scale = 1.5) sns.set_style('white') fig, axes = plt.subplots(1, 2, figsize=(20, 7.5)) rhos1 = 0.1 * np.arange(11)[5:] axes[0].errorbar(rhos1,average_ss, error_ss,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='GOAT') axes[0].errorbar(rhos1,average, error,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='FAQ') axes[0].set_xlabel(r"$\rho$") axes[0].set_ylabel("avergae match ratio") axes[0].set_title('n=150') axes[0].legend() axes[1].errorbar(rhos,average_ss1500, error_ss1500,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='GOAT') axes[1].errorbar(rhos,average1500, error1500,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='FAQ') axes[1].set_xlabel(r"$\rho$") # plt.ylabel("avergae match ratio") axes[1].set_title('n=1500') axes[1].legend() plt.savefig('sgm.png') 
# axes[0].set_ylabel("average match ratio") # for i in range(3): # sns.set(font_scale = 2) # sns.set_style('white') # axes[i].errorbar(m,ratios[:, i], 2*error[:, i], fmt='.-',capsize=3, elinewidth=1, markeredgewidth=1, label='GOAT') # axes[i].errorbar(m,ratios2[:, i], 2*error2[:, i],fmt='.-',capsize=3, elinewidth=1, markeredgewidth=1, label='FAQ') # axes[i].legend(prop={'size': 15}) # axes[i].set_title(fr'$\rho$ = {np.around(rhos[i],2)}', fontsize=15) # axes[i].set_xlabel("number of seeds (m)") plt.savefig('rho_sbm.png') # - # # GOAT for different values of reg = [100, 200, 300, 500, 700] # + tags=["hide-input"] #hide import numpy as np import matplotlib.pyplot as plt import random import sys from joblib import Parallel, delayed import seaborn as sns # from graspy.match import GraphMatch as GMP from graspologic.simulations import sbm_corr # from scipy.optimize import linear_sum_assignment from sklearn.metrics import pairwise_distances r = 50 t=10 # gmp = GraphMatch('maximize':True, 'tol':0.000001,'maxiter':30, 'shuffle_input':True) def match_ratio(inds, n): return np.count_nonzero(inds == np.arange(n)) / n n = 150 m = r rhos = 0.1 * np.arange(11)[5:] # rhos = np.arange(8,10.5,0.5) *0.1 n_p = len(rhos) ratios_1 = np.zeros((n_p,m)) ratios_2 = np.zeros((n_p,m)) ratios_3 = np.zeros((n_p,m)) ratios_5 = np.zeros((n_p,m)) ratios_7 = np.zeros((n_p,m)) n_per_block = int(n/3) n_blocks = 3 block_members = np.array(n_blocks * [n_per_block]) block_probs = np.array([[0.2, 0.01, 0.01], [0.01, 0.1, 0.01], [0.01, 0.01, 0.2]]) directed = False loops = False for k, rho in enumerate(rhos): np.random.seed(8888) seeds = [np.random.randint(1e8, size=t) for i in range(m)] def run_sim(seed): A1, A2 = sbm_corr( block_members, block_probs, rho, directed=directed, loops=loops ) score_1 = 0 score_2 = 0 score_3 = 0 score_5 = 0 score_7 = 0 # res_1 = None # res_2 = None # res_3 = None # res_4 = None # res_5 = None def run_qap(reg): score = 0 for j in range(t): res = quadratic_assignment_ot(A1,A2, options={'maximize':True,'tol':1e-6,'maxiter':50, 'shuffle_input':True, 'reg': reg, 'thr':1e-2}) if res.fun>score: perm = res.col_ind score = res.fun return perm ratio_1 = match_ratio(run_qap(100), n) ratio_2 = match_ratio(run_qap(200), n) ratio_3 = match_ratio(run_qap(300), n) ratio_5 = match_ratio(run_qap(500), n) ratio_7 = match_ratio(run_qap(700), n) return ratio_1, ratio_2, ratio_3, ratio_5, ratio_7 result = Parallel(n_jobs=-1)(delayed(run_sim)(seed) for seed in seeds) ratios_1[k,:] = [item[0] for item in result] ratios_2[k,:] = [item[1] for item in result] ratios_3[k,:] = [item[2] for item in result] ratios_5[k,:] = [item[3] for item in result] ratios_7[k,:] = [item[4] for item in result] # + tags=["hide-input"] from scipy.stats import sem error_1 = sem(ratios_1, axis=1) average_1 = np.mean(ratios_1, axis=1) error_2 = sem(ratios_2, axis=1) average_2 = np.mean(ratios_2, axis=1) error_3 = sem(ratios_3, axis=1) average_3 = np.mean(ratios_3, axis=1) error_5 = sem(ratios_5, axis=1) average_5 = np.mean(ratios_5, axis=1) error_7 = sem(ratios_7, axis=1) average_7 = np.mean(ratios_7, axis=1) #sns.set_context('poster') sns.set(rc={'figure.figsize':(15,10)}) sns.set(font_scale = 2) sns.set_style('white') # txt =f'r={r}, t={t}' plt.errorbar(rhos,average_1, error_1,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='100') plt.errorbar(rhos,average_2, error_2,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='200') plt.errorbar(rhos,average_3, error_3,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, 
label='300') plt.errorbar(rhos,average_5, error_5,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='500') plt.errorbar(rhos,average_7, error_7,marker='o',capsize=3, elinewidth=1, markeredgewidth=1, label='700') sns.color_palette('bright') plt.xlabel(r"$\rho$") plt.ylabel("avergae match ratio") # plt.text(0.5,0.5,txt) plt.title('n=150, GOAT for different regularization term') plt.legend() # + tags=["hide-input"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np unemp_county = pd.read_csv("datasets/us_unemployment.csv") unemp_county.head() # + df = pd.read_csv("datasets/minwage.csv") act_min_wage = pd.DataFrame() for name, group in df.groupby("State"): if act_min_wage.empty: act_min_wage = group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name}) else: act_min_wage = act_min_wage.join(group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name})) act_min_wage.head() # - act_min_wage = act_min_wage.replace(0, np.NaN).dropna(axis=1) act_min_wage.head() # + def get_min_wage(year, state): try: return act_min_wage.loc[year][state] except: return np.NaN get_min_wage(2012, "Colorado") # + # %%time unemp_county['min_wage'] = list(map(get_min_wage, unemp_county['Year'], unemp_county["State"])) # - unemp_county.head() unemp_county[["Rate", "min_wage"]].corr() unemp_county[["Rate", "min_wage"]].cov() pres16 = pd.read_csv("datasets/pres16results.csv") pres16.head() county_2015 = unemp_county[(unemp_county['Year'] == 2015) & (unemp_county['Month']=='February')] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ### Creation of development ChEMBL corpus # We create a smaller sampled ChEMBL corpus for the purpose of rapid iteration # on the development on our model # + import pandas as pd from src.drugexr.config.constants import PROC_DATA_PATH # - _SEED = 42 _DEV_CORPUS_SIZE = 1_000 chembl_df = pd.read_table(filepath_or_buffer=PROC_DATA_PATH / "chembl_corpus.txt") dev_chembl_df = chembl_df.sample(n=_DEV_CORPUS_SIZE, random_state=_SEED, ignore_index=True) dev_chembl_df.to_csv(path_or_buf= PROC_DATA_PATH / f"chembl_corpus_DEV_{_DEV_CORPUS_SIZE}.txt", encoding="utf-8", sep="\t" ,index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Text Classification with gensim & word2vec # + import numpy as np import pandas as pd import tensorflow as tf import pickle from sklearn.preprocessing import MultiLabelBinarizer from keras.layers import Input, Dense from keras.models import Model from keras.callbacks import TensorBoard from gensim.models import KeyedVectors from keras.utils import Sequence print(tf.__version__) # - # Download the data from GCS # !wget 'https://storage.googleapis.com/movies_data/movies_metadata.csv' data = pd.read_csv('movies_metadata.csv') data.head() # + # urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/descriptions.p', 'descriptions.p') # urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/genres.p', 'genres.p') descriptions = 
pickle.load(open('pickle/descriptions.p', 'rb')) genres = pickle.load(open('pickle/genres.p', 'rb')) # + train_size = int(len(descriptions) * .8) train_descriptions = descriptions[:train_size].astype('str') train_genres = genres[:train_size] test_descriptions = descriptions[train_size:].astype('str') test_genres = genres[train_size:] # + encoder = MultiLabelBinarizer() encoder.fit_transform(train_genres) train_encoded = encoder.transform(train_genres) test_encoded = encoder.transform(test_genres) num_classes = len(encoder.classes_) print(encoder.classes_) print(train_descriptions.values[0]) print(train_encoded[0]) # + # Initialize word2vec model # %%time word2vec = KeyedVectors.load_word2vec_format("./models/GoogleNews-vectors-negative300.bin", binary=True) # + # preprocess for OOV for i in range(len(train_descriptions)): if train_descriptions.values[i] == ' ': train_descriptions.values[i] = 'nan' for i in range(len(test_descriptions)): if test_descriptions.values[i] == ' ': test_descriptions.values[i] = 'nan' # convert OOV words to 'nan' for i in range(len(train_descriptions)): text = train_descriptions.values[i] text_list = text.split(' ') for j in range(len(text_list)): if text_list[j] not in word2vec.vocab: text_list[j] = 'nan' train_descriptions.values[i] = ' '.join(text_list) # - # Sequence of data class BatchSequence(Sequence): def __init__(self, x_set, y_set, batch_size, shuffle=True): data_size = len(x_set) if shuffle: shuffle_indices = np.random.permutation(np.arange(data_size)) self.x = x_set[shuffle_indices] self.y = y_set[shuffle_indices] else: self.x, self.y = x_set, y_set self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.x) / float(self.batch_size))) def __getitem__(self, idx): start_index = idx * self.batch_size end_index = min((idx + 1) * self.batch_size, len(self.x)) batch_x = self.x[start_index: end_index] batch_y = self.y[start_index: end_index] splited_data = list(map(lambda x: x.split(' '), batch_x)) query = list(map(lambda a: list(filter(lambda b: b in word2vec.vocab, a)), splited_data)) embeddings = np.array(list(map(lambda a: np.mean(list(map(lambda b: word2vec.wv[b], a)), axis=0), query))) X, y = np.array(embeddings), np.array(batch_y) return X, y # + # word2vec embedding dimension wor2vec_dim = 300 # Input Layers word_input = Input(shape=(wor2vec_dim, ), dtype=tf.float32) # (batch_size, sent_length) # Hidden Layers x = Dense(64, activation='relu')(word_input) # Output Layer predict = Dense(units=9, activation='sigmoid')(x) model = Model(inputs=[word_input], outputs=predict) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) model.summary() # + # Create an instance of batch sequence batch_size = 32 train_batchSequence = BatchSequence(train_descriptions.values, train_encoded, batch_size) valid_batchSequence = BatchSequence(test_descriptions.values, test_encoded, batch_size) # + # Train model logfile_path = './log' tb_cb = TensorBoard(log_dir=logfile_path, histogram_freq=0) history = model.fit_generator(train_batchSequence, epochs=5, validation_data=valid_batchSequence, callbacks=[tb_cb]) # + # Evaluate model score = model.evaluate_generator(valid_batchSequence) print('Test score:', score[0]) print('Test accuracy;', score[1]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np from pandas import DataFrame import pandas as pd 
import scipy as sp from sklearn import metrics from sklearn.naive_bayes import GaussianNB from sklearn import linear_model from sklearn.svm import SVC from sklearn.feature_selection import VarianceThreshold from sklearn.pipeline import Pipeline from sklearn.feature_selection import SelectPercentile, f_classif from sklearn import cross_validation np.set_printoptions(precision=5,suppress=True) # + #def read_data(folder, name1, name2, name3): # loc = "%s/%s.txt" %(folder, name1) # train_set = pd.read_csv(loc, sep="\t") # print "Training set has %s rows and %s coumns" %(train_set.shape[0], train_set.shape[1]) # loc = "%s/%s.txt" %(folder, name2) # test_set = pd.read_csv(loc, sep="\t") # print "Testing set has %s rows and %s coumns" %(test_set.shape[0], test_set.shape[1]) # loc = "%s/%s.txt" %(folder, name3) # val_set = pd.read_csv(loc, sep="\t") # print "Validation set has %s rows and %s coumns" %(val_set.shape[0], val_set.shape[1]) # return train_set, test_set, val_set # - dataset = sp.genfromtxt("dataset1/cleaned.txt", delimiter="\t") #np.isnan(dataset) dataset.shape # + #train_set, test_set, val_set = read_data("data3", "train_data", "test_data", "valid_data") # - dataset = np.delete(dataset,[0,26],1) dataset.shape print dataset[3] dataset = dataset[~np.isnan(dataset).any(axis=1)] dataset.shape columns = {"GP":0, "GS":1, "MIN":2, "FGM":3,"FGA":4,"FG%":5,"3PM":6,"3PA":7,"3P%":8,"FTM":9,"FTA":10,"FT%":11,"OFF":12,"DEF":13, "TRB":14,"AST":15,"STL":16,"BLK":17,"PF":18,"TOV":19,"PTS":20,"YR":21,"W":22,"H":23} # # Labels def np_labeliser(data,col): labels = data[:,col] return labels labels = np_labeliser(dataset,22) labels[:10] # # Feature Selection def np_featuriser(dataset, feature_list): features = np.delete(dataset,feature_list,1) #test = np.delete(test,feature_list,1) #val = np.delete(val,feature_list,1) return features feature_list = [22] #np.set_printoptions(precision=4) print len(dataset[0]) #train_features_nb, test_features_nb, val_features_nb = np_featuriser(train_set_nb, test_set_nb, val_set_nb, feature_list) features = np_featuriser(dataset, feature_list) print len(features[0]) def vt_fsel(feature_set): sel = VarianceThreshold(threshold=(.8 * (1 - .8))) sel.fit_transform(feature_set) vt_list = sel.get_support() l_vt = [] j = 0 for i in vt_list: if i == False: l_vt.append(j) print "%s. feature name: %s" %(j, columns.keys()[columns.values().index(j)]) j = j+1 return l_vt list_vt = vt_fsel(features) # + features_vt = np_featuriser(features, list_vt) features_vt.shape # - def sup_features(usp_list,x): remove = [] j = 0 for i in usp_list: if i == False: remove.append(j) if x=="vt": print "%s. feature name: %s" %(j, columns.keys()[columns.values().index(j)]) elif x == "uni": print "%s. 
feature name: %s" %(j, columns.keys()[columns.values().index(j)]) j = j+1 return remove #sup_features(support, "uni") def feature_selection(clf, features, labels, domain): none = features #print none[0] domain = np_featuriser(features, domain) #print domain[0] clf = Pipeline([('feature_selection',SelectPercentile(f_classif, percentile=20)), ('classification', clf)]) clf.fit(features, labels) print "\nUnivariate - valuable features \n" uni = sup_features(clf.named_steps['feature_selection'].get_support(),"uni") uni = np_featuriser(features, uni) #print uni[0] clf = Pipeline([('feature_selection',VarianceThreshold(threshold=(.8 * (1 - .8)))), ('classification', clf)]) clf.fit(features, labels) print "\nVariance Threshold - removed \n" v_th = sup_features(clf.named_steps['feature_selection'].get_support(), "vt") #print v_th[0] v_th = np_featuriser(features, v_th) return none, domain, uni, v_th #clf = GaussianNB() #svm = SVC() #svm.set_params(kernel='linear') #clf = Pipeline([('feature_selection',VarianceThreshold(threshold=(.8 * (1 - .8)))), # ('classification', svm)]) #clf.fit(features, labels) #support = clf.named_steps['feature_selection'].get_support() #print support #p = clf.predict(features) #acc = metrics.accuracy_score(labels,p) #conf = metrics.confusion_matrix(labels, p) #print acc #print conf domain = [columns["GP"],columns["GS"],columns["MIN"],columns["FG%"], columns["3P%"],columns["FT%"],columns["PTS"],columns["YR"],columns['3PM'],columns['FTM'],columns['FGM']] #none, domain, uni, vth = feature_selection(clf, features, labels, domain) #print none.shape,domain.shape,uni.shape,vth.shape def cross_val(clf, f, l, name): print "\nFeature selection: %s" %name scores = cross_validation.cross_val_score(clf, f, l, cv=10) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) def clf_all(clf, features, labels, domain): none, domain, uni, vth = feature_selection(clf, features, labels, domain) cross_val(clf, none, labels, "None") print "Number of features left: %s" %none.shape[1] cross_val(clf, domain, labels, "Domain") print "Number of features left: %s" %domain.shape[1] cross_val(clf, uni, labels, "Univariate") print "Number of features left: %s" %uni.shape[1] cross_val(clf, vth, labels, "Variance Threshold") print "Number of features left: %s" %vth.shape[1] # # ALL Results # + #def print_metrics(name, accuracy, conf_matrix): # print "Feature selection: %s\n" %name # print "Accuracy score: %s\n" %accuracy # print "Confusion matrix:" # print "\n%s" %conf_matrix # print"\n" #def clf(clf, tr, tr_labels, val, val_labels): # clf.fit(tr, tr_labels) # # pred = clf.predict(val) # # acc = metrics.accuracy_score(val_labels,pred) # conf = metrics.confusion_matrix(val_labels, pred) # return acc, conf # + #def clf_all(CLF, l_none, l_domain, l_uni, l_vt, train_all, test_all, val_all): # tr_none, ts_none, val_none = np_featuriser(train_all, test_all, val_all, l_none) # print tr_none.shape # tr_domain, ts_domain, val_domain = np_featuriser(train_all, test_all, val_all, l_domain) # # tr_uni, ts_uni, val_uni = np_featuriser(train_all, test_all, val_all, l_uni) # # tr_vt, ts_vt, val_vt = np_featuriser(train_all, test_all, val_all, l_vt) #clfnb = GaussianNB() #print "Naive Bayes\n" # acc_none, conf_none = clf(CLF, tr_none, train_labels, val_none, test_labels) # print_metrics("None", acc_none, conf_none) # acc_domain, conf_domain = clf(CLF, tr_domain, train_labels, val_domain, val_labels) # print_metrics("Domain knowledge", acc_domain, conf_domain) # acc_uni, conf_uni = clf(CLF, tr_uni, 
train_labels, val_uni, val_labels) # print_metrics("Univariate", acc_uni, conf_uni) # acc_vt, conf_vt = clf(CLF, tr_vt, train_labels, val_vt, val_labels) # print_metrics("Variance Threshold", acc_vt, conf_vt) # - # # Naive Bayes clf_all(GaussianNB(), features, labels, domain) # # SVM svm = SVC() svm = svm.set_params(kernel='linear') clf_all(svm, features, labels, domain) # # Logistic Regression logreg = linear_model.LogisticRegression(C=1e5) clf_all(logreg, features, labels, domain) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Graph Colouring # # ### Definition # # We are given an undirected graph with vertex set $V$ and edge set $E$ and a set of $n$ colours. # # Our aim is to find whether we can colour every node of the graph in one of these $n$ colours such that no edge connects nodes of the same colour. # # ### Applications # Graph Colouring appears in a variety of real life problems like map colouring, scheduling, register allocation, frequency assignment, communication networks and timetables. # # ### Path to solving the problem # Graph Colouring can be formulated as a minimization problem and its cost function can be cast to a QUBO problem with its respective Hamiltonian (see the [Introduction](./introduction_combinatorial_optimization.ipynb) and a [reference](https://arxiv.org/abs/1302.5843)), # $$ \displaystyle \large # H = \textstyle\sum\limits_{v}\displaystyle \left( 1 -\textstyle\sum\limits_{i=1}^{n} x_{v,i} \right) ^2 + \textstyle\sum\limits_{uv \in E} \textstyle\sum\limits_{i=1}^{n} x_{u,i} x_{v,i} # $$ # # where $v, u \in V$ and $x_{v,i}$ is a binary variable, which is $1$ if vertex $v$ has colour $i$ and $0$ otherwise. # # The QLM allows us to encode a problem in this Hamiltonian form by using the `GraphColouring` class for a given graph and a number of colours. We can then create a job from the problem and send it to a heuristic Simulated Quantum Annealer (SQA) wrapped inside a Quantum Processing Unit (QPU) like the rest of the QPUs on the QLM. The SQA will minimize the $H$, hence we find the best solution to our problem. # # For a more detailed explanation and a step-by-step guidance, please follow the sections below. # # ### Quantum resources # To represent the problem as QUBO the QLM would need $nN$ spins, where $N$ is the number of vertices of the graph. The classical complexity of the [best known approximation algorithm](https://www.sciencedirect.com/science/article/abs/pii/0020019093902466?via%3Dihub) for this problem is $O(N(log log N)^2(log N)^3)$ # # Example problem # # Imagine we are given $3$ colours and a graph with $4$ vertices and $5$ edges, as shown below (left). This problem has a simple solution $-$ nodes $0$ and $3$ will share one colour and the other two colours are assigned to nodes $1$ and $2$ (right). # #
# (figure: left, the example graph with 4 vertices and 5 edges; right, a valid 3-colouring in which nodes 0 and 3 share one colour and nodes 1 and 2 take the other two)
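#
# As an added sanity check (a minimal hand-rolled sketch, independent of the QLM classes),
# we can plug the colouring described above into the cost function $H$: colour $0$ for
# nodes $0$ and $3$, colour $1$ for node $1$ and colour $2$ for node $2$. A valid colouring
# should give $H = 0$.

# +
import numpy as np

# Binary variables x[v, i] = 1 iff vertex v has colour i, for the colouring described above
x = np.zeros((4, 3), dtype=int)
for v, colour in enumerate([0, 1, 2, 0]):
    x[v, colour] = 1

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

# First term of H: every vertex must carry exactly one colour
H = np.sum((1 - x.sum(axis=1)) ** 2)
# Second term of H: no edge may join two vertices of the same colour
H += sum(int(np.sum(x[u] * x[v])) for (u, v) in edges)

print("H =", H)  # prints "H = 0" for this valid colouring
# -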
    # # Let us describe how one can reach this answer using tools from the QLM. # However, the approach will be applicable to finding the Graph Colouring of whatever graph we are given ! # # We will use the `networkx` library to specify our graph (or in fact any graph, as the library is quite rich). # + import networkx as nx import numpy as np import matplotlib.pyplot as plt # Specify the graph # First example graph = nx.Graph() graph.add_nodes_from(np.arange(4)) graph.add_edges_from([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]) # # Second example - one can try with 4 or 5 colours # graph = nx.gnm_random_graph(15, 40) # Specify the number of colours number_of_colours = 3 number_of_nodes = len(graph.nodes()) number_of_spins = number_of_colours * number_of_nodes # Draw the graph nodes_positions = nx.spring_layout(graph, iterations=number_of_nodes * 60) plt.figure(figsize=(10, 6)) nx.draw_networkx(graph, pos=nodes_positions, node_color='#4EEA6A', node_size=600, font_size=16) plt.show() # - # To encode the problem in a QUBO form, as described above, we will call the `GraphColouring` class. # + from qat.opt import GraphColouring graph_colouring_problem = GraphColouring(graph, number_of_colours) # - # # Solution # Once an instance of the problem is created, we can proceed to compute the solution of the problem by following the steps: # # 1. Extract the best SQA parameters found for Graph Colouring by calling the method `get_best_parameters()`. # # The number of Monte Carlo updates is the total number of updates performed for each temperature (and gamma) on the spins of the equivalent 2D classical system. These updates are the product of the number of annealing steps $-$ `n_steps`, the number of "Trotter replicas" $-$ `n_trotters`, and the problem size, i.e. the number of qubits needed. Hence, we can use these parameters to get the best inferred value for `n_steps`. In general, the more these steps are, the finer and better the annealing will be. However this will cause the process to take longer to complete. # # Similarly for the `n_trotters` field in `SQAQPU` $-$ the higher it is, the better the final solution could be, but the more time taken by the annealer to reach an answer. # # # 2. Create a temperature and a gamma schedule for the annealing. # # We use the extracted max and min temperatures and gammas to create a (linear) temperature and a (linear) gamma schedule. These schedules evolve in time from higher to lower values since we simulate the reduction of temperatures and magnetic fields. If one wishes to vary them it may help if the min values are close to $0$, as this will cause the Hamiltonian to reach a lower energy state, potentially closer to its ground state (where the solution is encoded). # # It should be noted that non-linear schedules may be investigated too, but for the same number of steps they could lead to a slower annealing. The best min and max values for gamma and the temperature were found for linear schedules. # # # 3. Generate the SQAQPU and create a job for the problem. The job is then sent to the QPU and the annealing is performed. # # # 4. Present the solution spin configuration. # # # 5. Show a dictionary of vertices for each colour. # # # 6. Draw the coloured graph. # # When we look at the final spin configuration, we see spins on each row and column. The rows represent the vertices, i.e. the second row (counting from $0$) is for the second vertex (again counting from $0$). 
The spins of each row are then the binary representation of the colour of that vertex, but with spin values, i.e. $\{1, -1\}$ instead of $\{1, 0\}$. # # So if a spin at position $(2,1)$ is $1$, this means that the second vertex has the first colour (again counting from $0$). # + from qat.core import Variable from qat.sqa import SQAQPU from qat.sqa.sqa_qpu import integer_to_spins # 1. Extract parameters for SQA problem_parameters_dict = graph_colouring_problem.get_best_parameters() n_monte_carlo_updates = problem_parameters_dict["n_monte_carlo_updates"] n_trotters = problem_parameters_dict["n_trotters"] n_steps = int(n_monte_carlo_updates / (n_trotters * number_of_spins)) # the last one is the number of spins, i.e. the problem size temp_max = problem_parameters_dict["temp_max"] temp_min = problem_parameters_dict["temp_min"] gamma_max = problem_parameters_dict["gamma_max"] gamma_min = problem_parameters_dict["gamma_min"] # 2. Create a temperature and a gamma schedule tmax = 1.0 t = Variable("t", float) temp_t = temp_min * (t / tmax) + temp_max * (1 - t / tmax) gamma_t = gamma_min * (t / tmax) + gamma_max * (1 - t / tmax) # 3. Create a job and send it to a QPU problem_job = graph_colouring_problem.to_job(gamma_t=gamma_t, tmax=tmax, nbshots=1) sqa_qpu = SQAQPU(temp_t=temp_t, n_steps=n_steps, n_trotters=n_trotters) problem_result = sqa_qpu.submit(problem_job) # 4. Present best configuration state_int = problem_result.raw_data[0].state.int # raw_data is a list of Samples - one per shot solution_configuration = integer_to_spins(state_int, number_of_spins) solution_configuration_reshaped = solution_configuration.reshape((number_of_nodes, number_of_colours)) print("Solution configuration: \n" + str(solution_configuration_reshaped) + "\n") # 5. Show a list of nodes for each colour from itertools import product vertices_dictionary = {colour:[] for colour in range(number_of_colours)} for row, col in product(range(number_of_nodes), range(number_of_colours)): if solution_configuration[row * number_of_colours + col] == 1: vertices_dictionary[col].append(row) print("Dictionary of vertices for each colour:\n" + str(vertices_dictionary) + "\n") # 6. Draw the coloured graph import random colour_list = ['#'+''.join(random.sample('0123456789ABCDEF', 6)) for i in range(number_of_colours)] plt.figure(figsize=(10, 6)) for node in graph.nodes(): if graph.nodes[node]: del graph.nodes[node]['colour'] for i in range (number_of_colours): colour_i = colour_list[i] nodes_with_colour_i = vertices_dictionary[i] for node in nodes_with_colour_i: graph.nodes[node]['colour'] = i nx.draw_networkx(graph, pos=nodes_positions, nodelist=nodes_with_colour_i, node_color=colour_i, node_size=600, font_size=16) nx.draw_networkx_edges(graph, pos=nodes_positions) plt.show() # - # # Solution analysis # # In order to examine the output colouring, one may choose to visually inspect the graphs. However, this becomes impractical for very big graphs with large dictionaries. We can therefore add some simple checks to assess the solution. # # One such check would be if each vertex has exactly one colour (which may not be the case, for example when the value $1$ appears more than once in a row of the solution spin array). 
# + corrupted_nodes_list = [] corrupted_nodes_colours_dict = {} for node in range(number_of_nodes): node_colourings_row = solution_configuration_reshaped[node][:] colours_array = np.argwhere(node_colourings_row == 1) number_of_colours_of_node = len(colours_array) if number_of_colours_of_node != 1: corrupted_nodes_list.append(node) corrupted_nodes_colours_dict[node] = colours_array.flatten().tolist() if len(corrupted_nodes_list) == 0: print("Each vertex is assigned only one colour !") else: print("There are " + "\033[1m" + str(len(corrupted_nodes_list)) + "\033[0;0m" + " vertices not assigned exactly one colour:\n" + str(corrupted_nodes_list)) print("\nVertices and their colours:") for node, colours in corrupted_nodes_colours_dict.items(): print(node, ':', colours) # - # Yet, it may be that the final colour distribution nevertheless produces a valid colouring. So let us check if each edge connects vertices of different colours. If this is not the case, we can output the corrupted edges and their respective colours. # + corrupted_edges_colours_dict = {} for (node_i, node_j) in graph.edges(): if 'colour' not in graph.nodes[node_i]: if 'colour' not in graph.nodes[node_j]: corrupted_edges_colours_dict[(node_i, node_j)] = ["no colour", "no colour"] else: corrupted_edges_colours_dict[(node_i, node_j)] = ["no colour", graph.nodes[node_j]['colour']] else: node_i_colour = graph.nodes[node_i]['colour'] if 'colour' not in graph.nodes[node_j]: corrupted_edges_colours_dict[(node_i, node_j)] = [node_i_colour, "no colour"] else: node_j_colour = graph.nodes[node_j]['colour'] if node_i_colour == node_j_colour: corrupted_edges_colours_dict[(node_i, node_j)] = [node_i_colour, node_j_colour] if len(corrupted_edges_colours_dict) == 0: print("The graph is perfectly coloured !") else: print("There are " + "\033[1m" + str(len(corrupted_edges_colours_dict)) + "\033[0;0m" + " corrupted edges with colours:") for edge, colours in corrupted_edges_colours_dict.items(): print(edge, ' : ', colours) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="lkf3NxuH1i_Z" colab_type="code" colab={} # !pip install --upgrade tables # !pip install eli5 # !pip install xgboost # + id="GMxslV2b1wFM" colab_type="code" colab={} import pandas as pd import numpy as np from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor import xgboost as xgb from sklearn.metrics import mean_absolute_error as mae from sklearn.model_selection import cross_val_score, KFold import eli5 from eli5.sklearn import PermutationImportance # + id="OpM_atrb1zD6" colab_type="code" colab={} # cd "/content/drive/My Drive/Colab Notebooks/matrix/matrix_two/dw_matrix_car" # + id="NDuQn3GP31jy" colab_type="code" colab={} SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorized_values else: df[feat + SUFFIX_CAT] = factorized_values # + id="fG8boQ9u33ry" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1595613653158, "user_tz": -120, "elapsed": 740, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="7e5032ce-47af-4a42-be2b-593286a0f5f8" cat_feats = [x for x in df.columns if SUFFIX_CAT in x] cat_feats = 
[x for x in cat_feats if 'price' not in x] len(cat_feats) # + id="JmTAQo7-9VrP" colab_type="code" colab={} def run_model(model, feats): X = df[feats].values y = df['price_value'].values scores = cross_val_score(model,X,y, cv=3, scoring='neg_mean_absolute_error') return np.mean(scores), np.std(scores) # + [markdown] id="QwlB3OVsHOyc" colab_type="text" # ## Decision tree # + id="lhVgEc3VGSZR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1595615774582, "user_tz": -120, "elapsed": 4941, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="b23d9b4f-f9bb-425f-cea3-71bb544f9e26" run_model(DecisionTreeRegressor(max_depth=5), cat_feats) # + [markdown] id="KGOTSecEHhYT" colab_type="text" # ## Random forest # # # + id="cRwk9M1HHmW7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1595616086174, "user_tz": -120, "elapsed": 111099, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="a018dc2d-3130-465b-d09e-17ec7513a3e6" model = RandomForestRegressor(max_depth=5, n_estimators=50, random_state=0) run_model(model, cat_feats) # + [markdown] id="7VDTZuQUIXXV" colab_type="text" # ## XGBoost # + id="rcnW-Ec7IsM_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1595625817849, "user_tz": -120, "elapsed": 64979, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="82e1d49c-144d-4796-eb31-97b8cc06e40c" xgb_params = { 'max_depth':5, 'n_estimators':50, 'learning_rate':0.1, 'seed':0 } run_model(xgb.XGBRegressor(**xgb_params), cat_feats) # + id="ax8VVpfRLCjP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 408} executionInfo={"status": "ok", "timestamp": 1595617285328, "user_tz": -120, "elapsed": 356033, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="00e76ff0-8ee7-4b4b-b914-9b16beea5eec" m = xgb.XGBRegressor(max_depth=5, n_estimators=50, learning_rate=0.1, seed=0) m.fit(X,y) imp = PermutationImportance(m, random_state=0).fit(X,y) eli5.show_weights(imp, feature_names=cat_feats) # + id="VMLLJxCUN5Fx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1595625738070, "user_tz": -120, "elapsed": 14035, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="f720d6a1-7349-41a3-9f8a-1e314f19f42e" feats = [ 'param_napęd__cat', 'param_stan__cat', 'param_rok-produkcji__cat', 'param_faktura-vat__cat', 'param_moc__cat', 'param_skrzynia-biegów__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'param_wersja__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'feature_system-start-stop__cat', 'param_kod-silnika__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_światła-led__cat', 'feature_czujniki-parkowania-przednie__cat' ] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="oAk6XIBuQp7q" colab_type="code" colab={} # + id="hhO-GwgQRJ4F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1595625694430, "user_tz": -120, "elapsed": 14131, "user": {"displayName": 
"Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="90c52239-f596-4e48-e822-0910059c3798" df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x)) feats = [ 'param_napęd__cat', 'param_stan__cat', 'param_rok-produkcji', 'param_faktura-vat__cat', 'param_moc__cat', 'param_skrzynia-biegów__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'param_wersja__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'feature_system-start-stop__cat', 'param_kod-silnika__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_światła-led__cat', 'feature_czujniki-parkowania-przednie__cat' ] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="Ael1bjRfYdAB" colab_type="code" colab={} df['param_moc']=df['param_moc'].map(lambda x: -1 if str(x) =='None' else int(x.split(' ')[0])) # + id="UulcDE1OZrc5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} executionInfo={"status": "ok", "timestamp": 1595620656132, "user_tz": -120, "elapsed": 655, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="6445ddc8-db40-4f29-c0df-e8e08b41245e" df['param_moc'] # + id="co2gTrSbZwDR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1595620703735, "user_tz": -120, "elapsed": 13455, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="80321f48-a497-4b60-895f-a36d249fa1c8" feats = [ 'param_napęd__cat', 'param_stan__cat', 'param_rok-produkcji', 'param_faktura-vat__cat', 'param_moc', 'param_skrzynia-biegów__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'param_wersja__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'feature_system-start-stop__cat', 'param_kod-silnika__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_światła-led__cat', 'feature_czujniki-parkowania-przednie__cat' ] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="RsJTlrspavYP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} executionInfo={"status": "ok", "timestamp": 1595620923941, "user_tz": -120, "elapsed": 552, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="501701ba-bc62-480c-dbd6-3461522b9b48" df['param_pojemność-skokowa'].unique() # + id="S3akt2Guaxdi" colab_type="code" colab={} # + id="8Ua8VCMPZLQB" colab_type="code" colab={} df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else x.split('cm')[0].replace(' ','')) # + id="xuLI3Lx9ba7C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1595621142281, "user_tz": -120, "elapsed": 13356, "user": {"displayName": "Pawe\u0142", "photoUrl": "", "userId": "07557086783430784441"}} outputId="409c0b02-a7bd-4e8b-df29-cbe2861041ba" feats = [ 'param_napęd__cat', 'param_stan__cat', 'param_rok-produkcji', 'param_faktura-vat__cat', 'param_moc', 'param_skrzynia-biegów__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'param_wersja__cat', 'feature_wspomaganie-kierownicy__cat', 
'param_model-pojazdu__cat', 'feature_system-start-stop__cat', 'param_kod-silnika__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_światła-led__cat', 'feature_czujniki-parkowania-przednie__cat' ] run_model(xgb.XGBRegressor(**xgb_params), feats) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.5 # language: python # name: python3 # --- # ## FILE-FRIEND-URL-to-COS-Loader #imports.... Run this each time after restarting the Kernel import json from botocore.client import Config import ibm_boto3 import requests # ## Hardwire endpoints # + service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net' auth_endpoint = 'https://iam.bluemix.net/oidc/token' # - # ## Pull Creds from: CONSOLE - Dashboard > COS > Service Credentials. Take ADMIN set. # + cos_credentials = { "apikey": "", "cos_hmac_keys": { "access_key_id": "#######2c4abcaf68187e036c53c5", "secret_access_key": "" }, "endpoints": "https://cos-service.bluemix.net/endpoints", "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/6bd61f31ad3927791bc9b0bebfdcb749:ee080c49-f45a-468d-89da-6a7c060d055b::", "iam_apikey_name": "auto-generated-apikey-######-a92c-4abc-af68-187e036c53c5", "iam_role_crn": "crn:v1:bluemix:public:iam::::role:Administrator", "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/6bd61f31ad3927791bc9b0bebfdcb749::serviceid:ServiceId-3f31e855-2dce-4377-a143-8291529c49fb", "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/6bd61f31ad3927791bc9b0bebfdcb749:ee080c49-f45a-468d-89da-6a7c060d055b::" } # - # # COS and effect cos = ibm_boto3.resource('s3', ibm_api_key_id=cos_credentials['apikey'], ibm_service_instance_id=cos_credentials['resource_instance_id'], ibm_auth_endpoint= auth_endpoint, config=Config(signature_version='oauth'), endpoint_url= service_endpoint ) # # Bucket Time # + from uuid import uuid4 bucket_uid = str(uuid4()) buckets = ['audio-data-' + bucket_uid, 'outputs-' + bucket_uid] for bucket in buckets: if not cos.Bucket(bucket) in cos.buckets.all(): print('Creating bucket “{}“...'.format(bucket)) try: cos.create_bucket(Bucket=bucket) except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e: print('Error: {}.'.format(e.response['Error']['Message'])) data_links = ['http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/sample1-addresschange-positive.ogg', 'http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/sample2-address-negative.ogg', 'http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/sample3-shirt-return-weather-chitchat.ogg', 'http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/sample4-angryblender-sportschitchat-recovery.ogg', 'http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/sample5-calibration-toneandcontext.ogg', 'http://github.com/mamoonraja/call-center-think18/blob/master/resources/audio_samples/jfk_1961_0525_speech_to_put_man_on_moon.ogg'] # - # # Upload the Files # + import urllib from urllib.request import urlopen bucket_obj = cos.Bucket(buckets[0]) for data_link in data_links: filename=data_link.split('/')[-1] print('Uploading data {}...'.format(filename)) with urlopen(data_link) as data: bucket_obj.upload_fileobj(data, filename) 
print('{} is uploaded.'.format(filename)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import pydicom #The .dcm dicon files are read ds1 = pydicom.dcmread("ttfm.dcm") #Parsing the FileDataSet into a String ds1=str(ds1) outf=open("Output1.txt","w") outf.write(ds1) outf.close() #The aforementioned actions are repeated i.e extracting .dcm file ds2 = pydicom.dcmread("bmode.dcm") #Parsing it to String datatype ds2=str(ds2) outf=open("Output2.txt","w") outf.write(ds2) outf.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: vrb-know # language: python # name: vrb-know # --- # + # Data Source # https://www.kaggle.com/lislejoem/us-minimum-wage-by-state-from-1968-to-2017?select=Minimum+Wage+Data.csv # - import pandas as pd # + data_path = 'Minimum Wage Data.csv' min_wage = pd.read_csv(data_path, encoding="ISO-8859-1") print(min_wage.shape) min_wage.head() # - min_wage_2020 = min_wage[min_wage['Year'] == 2020] min_wage_2020.sort_values(by=['State.Minimum.Wage'], ascending=False).head() list(min_wage_2020['State.Minimum.Wage.2020.Dollars']) == list(min_wage_2020['State.Minimum.Wage']) min_wage_2020 = min_wage_2020.drop(['Year', 'State.Minimum.Wage.2020.Dollars', 'Federal.Minimum.Wage.2020.Dollars', 'Effective.Minimum.Wage.2020.Dollars'], axis=1) min_wage_2020 = min_wage_2020[['State', 'State.Minimum.Wage', 'Federal.Minimum.Wage', 'Effective.Minimum.Wage', 'CPI.Average' ]] min_wage_2020.head() min_wage_2020.to_csv('cols_infra_min_wage.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] slideshow={"slide_type": "slide"} # # LLVM Cauldron - Wuthering Bytes 2016-09-08 # # # Generating Python & Ruby bindings from C++ # # ### # ### # # ## https://github.com/ffig/ffig # # [Updated Links and API use on 2018-01-25] # + [markdown] slideshow={"slide_type": "slide"} # Write a C++ class out to a file in the current working directory # + slideshow={"slide_type": "fragment"} outputfile = "Shape.h" # + slideshow={"slide_type": "fragment"} # %%file $outputfile #include #include #ifdef __clang__ #define C_API __attribute__((annotate("GENERATE_C_API"))) #else #define C_API #endif #include struct FFIG_EXPORT Shape { virtual ~Shape() = default; virtual double area() const = 0; virtual double perimeter() const = 0; virtual const char* name() const = 0; } __attribute__((annotate("GENERATE_C_API"))); static const double pi = 4.0; class Circle : public Shape { const double radius_; public: double area() const override { return pi * radius_ * radius_; } double perimeter() const override { return 2 * pi * radius_; } const char* name() const override { return "Circle"; } Circle(double radius) : radius_(radius) { if ( radius < 0 ) { std::string s = "Circle radius \"" + std::to_string(radius_) + "\" must be non-negative."; throw std::runtime_error(s); } } }; # + [markdown] slideshow={"slide_type": "slide"} # Compile our header to check it's valid C++ # + slideshow={"slide_type": "fragment"} language="sh" # clang++-3.8 -x c++ -fsyntax-only -std=c++14 -I../ffig/include Shape.h # + [markdown] slideshow={"slide_type": "slide"} # 
Read the code using libclang # + slideshow={"slide_type": "fragment"} import sys sys.path.insert(0,'..') import ffig.clang.cindex index = ffig.clang.cindex.Index.create() translation_unit = index.parse(outputfile, ['-x', 'c++', '-std=c++14', '-I../ffig/include']) # + slideshow={"slide_type": "fragment"} import asciitree def node_children(node): return (c for c in node.get_children() if c.location.file.name == outputfile) print asciitree.draw_tree(translation_unit.cursor, lambda n: [c for c in node_children(n)], lambda n: "%s (%s)" % (n.spelling or n.displayname, str(n.kind).split(".")[1])) # + [markdown] slideshow={"slide_type": "slide"} # Turn the AST into some easy to manipulate Python classes # + slideshow={"slide_type": "fragment"} from ffig import cppmodel # + slideshow={"slide_type": "fragment"} model = cppmodel.Model(translation_unit) # - model # + slideshow={"slide_type": "fragment"} [f.name for f in model.functions][-5:] # + slideshow={"slide_type": "fragment"} [c.name for c in model.classes][-5:] # + slideshow={"slide_type": "fragment"} shape_class = [c for c in model.classes if c.name=='Shape'][0] # + slideshow={"slide_type": "fragment"} ["{}::{}".format(shape_class.name,m.name) for m in shape_class.methods] # + [markdown] slideshow={"slide_type": "slide"} # Look at the templates the generator uses # + slideshow={"slide_type": "fragment"} # %cat ../ffig/templates/json.tmpl # + [markdown] slideshow={"slide_type": "slide"} # Run the code generator # + slideshow={"slide_type": "fragment"} language="sh" # cd .. # python -m ffig -b json.tmpl rb.tmpl python -m Shape -i demos/Shape.h -o demos/ # + [markdown] slideshow={"slide_type": "fragment"} # See what it created # + slideshow={"slide_type": "fragment"} # %ls # + slideshow={"slide_type": "slide"} # %cat Shape.json # + [markdown] slideshow={"slide_type": "slide"} # Build some bindings with the generated code. # + slideshow={"slide_type": "fragment"} # %%file CMakeLists.txt cmake_minimum_required(VERSION 3.0) set(CMAKE_CXX_STANDARD 14) add_library(Shape_c SHARED Shape_c.cpp) target_include_directories(Shape_c PRIVATE ../ffig/include) # + slideshow={"slide_type": "fragment"} language="sh" # cmake . # cmake --build . # + slideshow={"slide_type": "slide"} language="python2" # import shape # c = shape.Circle(8) # # print "A {} with radius {} has area {}".format(c.name(), 8, c.area()) # + slideshow={"slide_type": "slide"} magic_args="pypy" language="script" # import shape # c = shape.Circle(8) # # print "A {} with radius {} has area {}".format(c.name(), 8, c.area()) # + slideshow={"slide_type": "slide"} language="ruby" # load "Shape.rb" # c = Circle.new(8) # # puts("A #{c.name()} with radius #{8} has area #{c.area()}") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import math import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.patches as patches from PIL import Image import numpy as np # %pylab inline # - datapath = '../CS148/RedLights2011_Medium/' file = 'RL-016.jpg' img=mpimg.imread(datapath+file) imgplot = plt.imshow(img) plt.show() # ## Methods for Red Light Detection # 1. Find pixels with highest red light intensity. If they are in contact, one light, otherwise multiple. In 8 cardinal directions, expand outward. If ratio of red is decreases to black, then red light. # 2. Hough transform to get edges. 
Find rectangles with mostly black insides, with a bit of red, not other significant colors. # 3. Circle hough transform/edge detector to find circles. If black box surrounding, then red light. Otherwise, use a metric of confidence. # 4. If red circle detected, then check two below + half of circle range for corners. If one or more corners found then yes. # 5. Weighted ensemble trained on subset of data. # + img = np.array(img) img = np.array(Image.open(datapath+file), dtype=np.uint8) print(img.shape) # - redchannel = img[:,:,0] greenchannel = img[:,:,1] bluechannel = img[:,:,2] redchannel.shape # find pixels where r > 200, g < 200, b < 200 im_dim0 = img.shape[0] im_dim1 = img.shape[1] print("im_dim0 cols", im_dim0) print("im_dim1 rows", im_dim1) new_img = img count = 0 red_pixel_locations = [] for r in range(im_dim0): for c in range(im_dim1): if redchannel[r, c] > 240: if greenchannel[r, c] > 200 or bluechannel[r, c] > 200: new_img[r,c,0] = 255 new_img[r,c,1] = 255 new_img[r,c,2] = 255 else: count += 1 red_pixel_locations.append((r,c)) else: new_img[r,c,0] = 255 new_img[r,c,1] = 255 new_img[r,c,2] = 255 print("count of red pixels", count) print('red_pixel_locations', red_pixel_locations) imgplot = plt.imshow(img) plt.show() # + # check that area around traffic light is black new_red_pixel_locations = [] for entry in red_pixel_locations: (row, col) = entry # check 5 pixels up directional_sum = 0 r=np.mean(img[row:min(row+5, im_dim0-1), col, 0]) g=np.mean(img[row:min(row+5, im_dim0-1), col, 1]) b=np.mean(img[row:min(row+5, im_dim0-1), col, 2]) if r+g+b>650: directional_sum+=1 # check 5 pixels down r=np.mean(img[max(0, row-15):row, col, 0]) g=np.mean(img[max(0, row-15):row, col, 1]) b=np.mean(img[max(0, row-15):row, col, 2]) if r+g+b>650: directional_sum+=1 # check 5 pixels left r=np.mean(img[row, max(0, col-5):col, 0]) g=np.mean(img[row, max(0, col-5):col, 1]) b=np.mean(img[row, max(0, col-5):col, 2]) if r+g+b>650: directional_sum+=1 # check 5 pixels right r=np.mean(img[row, col:min(im_dim1-1, col-5), 0]) g=np.mean(img[row, col:min(im_dim1-1, col-5), 1]) b=np.mean(img[row, col:min(im_dim1-1, col-5), 2]) if r+g+b>650: directional_sum+=1 if directional_sum >= 3: new_red_pixel_locations.append(entry) # - def compute_euclidean(x1, y1, x2, y2): return np.sqrt((x2-x1)**2 + (y2-y1)**2) # + # filters new_red_pixel_locations based on overlap # 30x30 boxes overlap_red_pixels = [] skip = [] # new_red_pixel_locations = [] for entry_i in range(len(new_red_pixel_locations)): if entry_i in skip: overlap_red_pixels.append(100) continue (curr_row, curr_col) = new_red_pixel_locations[entry_i] overlap_count = 0 for entry_j in range(len(new_red_pixel_locations)): (neighbor_row, neighbor_col) = new_red_pixel_locations[entry_j] if entry_j != entry_i: distance = compute_euclidean(curr_row, curr_col, neighbor_row, neighbor_col) if distance < 5: skip.append(entry_j) if distance < 30: # skip.append(new_red_pixel_locations[entry_j]) overlap_count += 1 overlap_red_pixels.append(overlap_count) # filter once final_red_pixel_locations = [] for i in range(len(overlap_red_pixels)): if overlap_red_pixels[i] < 10: final_red_pixel_locations.append(new_red_pixel_locations[i]) # FILTER TWICE new_red_pixel_locations = final_red_pixel_locations overlap_red_pixels = [] skip = [] # new_red_pixel_locations = [] for entry_i in range(len(new_red_pixel_locations)): if entry_i in skip: overlap_red_pixels.append(100) continue (curr_row, curr_col) = new_red_pixel_locations[entry_i] overlap_count = 0 for entry_j in 
range(len(new_red_pixel_locations)): (neighbor_row, neighbor_col) = new_red_pixel_locations[entry_j] if entry_j != entry_i: distance = compute_euclidean(curr_row, curr_col, neighbor_row, neighbor_col) if distance < 5: skip.append(entry_j) if distance < 30: # skip.append(new_red_pixel_locations[entry_j]) overlap_count += 1 overlap_red_pixels.append(overlap_count) # second filter final_red_pixel_locations = [] for i in range(len(overlap_red_pixels)): if overlap_red_pixels[i] <= 1: final_red_pixel_locations.append(new_red_pixel_locations[i]) # - final_red_pixel_locations # + im = np.array(Image.open(datapath+file), dtype=np.uint8) # Create figure and axes fig,ax = plt.subplots(1) # Display the image ax.imshow(im) # Create a Rectangle patch for entry in final_red_pixel_locations: (row, col) = entry rect = patches.Rectangle((col-15, row-15),30,30,linewidth=1,edgecolor='r',facecolor='none') # Add the patch to the Axes ax.add_patch(rect) plt.show() # + # red rgb(255,0,0) # + # import numpy as np # import cv2 # image = cv2.imread(datapath+file) # result = image.copy() # image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) # lower = np.array([155,25,0]) # upper = np.array([179,255,255]) # mask = cv2.inRange(image, lower, upper) # result = cv2.bitwise_and(result, result, mask=mask) # cv2.imshow('mask', mask) # cv2.imshow('result', result) # cv2.waitKey() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from scapy.all import * from pprint import pprint import sys import numpy as np import os import dpkt import socket from collections import defaultdict def read_pcap(out_dir, dst_mac_is_ts = True, try_compare_counters = True): if try_compare_counters: count_file = os.path.join(out_dir, 'counters_0.out') for line in open(count_file): if line.startswith('5/0'): n_pkts_str = line.split()[-1] n_start_pkts = int(n_pkts_str[:-1]) count_file = os.path.join(out_dir, 'counters_1.out') for line in open(count_file): if line.startswith('5/0'): n_pkts_str = line.split()[-1] n_end_pkts = int(n_pkts_str[:-1]) n_expected_pkts = (n_end_pkts - n_start_pkts) print("Expecting {} packets".format(n_expected_pkts)) pcf = os.path.join(out_dir, 'moongen', 'moongen.pcap') f = open(pcf, 'rb') print("Reading {}".format(pcf)) pcap=dpkt.pcap.Reader(f) timestamps = defaultdict(lambda : defaultdict(list)) seqs = defaultdict(lambda : defaultdict(set)) dups = 0 ports = set() for i, (ts, buf) in enumerate(pcap): eth = dpkt.ethernet.Ethernet(buf) try: ip = dpkt.ip.IP(eth.data) # print(ip.len) src = socket.inet_ntoa(ip.src) dst = socket.inet_ntoa(ip.dst) except dpkt.UnpackError: continue tcp = ip.data if not isinstance(tcp, dpkt.tcp.TCP): continue if (tcp.flags & dpkt.tcp.TH_PUSH) == 0: continue if src == '10.0.0.101': ports.add(tcp.sport) x3, x2, x1 = struct.unpack("!HHH", eth.dst) ts = float((x3 << 32) | (x2 << 16) | x1) * 1e-9 seq = tcp.seq if seq not in seqs[src][dst]: seqs[src][dst].add(seq) timestamps[src][dst].append((ts, tcp.data)) else: dups += 1 output = dict(timestamps) for k in list(output.keys()): output[k] = dict(timestamps[k]) print("Discarded %d dups" % dups) print(ports) return output # + def to_key(buf): return ':'.join(buf.decode('utf-8').split()[:2]).split('\\')[0] def to_dict(ts): d = defaultdict(list) for t, buf in ts: key = ':'.join(buf.decode('utf-8').split()[:2]).split('\\')[0] d[key].append(t) for k in d.keys(): d[k] = sorted(d[k]) d = dict(d) 
return d def to_df(to_cache, from_cache): from_d = to_dict(from_cache) bad_keys = set() rows = [] for to_ts, buf in to_cache: k = to_key(buf) if k in bad_keys: continue if k.startswith('set'): from_ts = from_d['STORED'].pop(0) # print(len(from_d['STORED'])) rows.append([to_ts, from_ts, k, 'set']) elif k.startswith('get'): from_k = k.replace('get', 'VALUE') if from_k not in from_d or len(from_d[from_k]) == 0: print(from_k) print('shit') continue from_ts = from_d[from_k].pop(0) if (from_ts < to_ts): bad_keys.add(k) print(k) print('poop') continue rows.append([to_ts, from_ts, k, 'get']) df = pd.DataFrame(rows, columns=('to', 'from', 'key', 'action')) for k in bad_keys: df = df[df.key != k] return df # + def combine_dfs(cdf, sdf): totdf = cdf.copy() totdf['s_to'] = -1 totdf['s_from'] = -1 for _, (t, f, k, a) in sdf.iterrows(): totk = totdf[totdf['key'] == k] totk = totk[(totk['to'] < t) & (totk['from'] > f) ] totk = totk[(totk['s_to'] == -1)] if len(totk) > 1: totk = totk[totk['to'] == totk['to'].min()] totdf.loc[totdf.index == totk.index[0],'s_to'] = t totdf.loc[totdf.index == totk.index[0],'s_from'] = f return totdf def reduce_df(tot_df): indf = tot_df[['to', 'from', 'action']] indf = indf.rename(columns={'to': 'in', 'from': 'out'}) indf.loc[tot_df['s_to'] != -1,'out'] = tot_df[tot_df['s_to'] != -1]['s_to'] outdf = tot_df[['s_from', 'from', 'action']] outdf = outdf.rename(columns={'s_from': 'in', 'from': 'out'}) outdf = outdf[outdf['in'] > 0] return indf, outdf # - import pickle timestamps = read_pcap('../execution/test/mcrouter_test') c_to_mcr = timestamps['10.0.0.7']['10.0.0.101'] mcr_to_c = timestamps['10.0.0.101']['10.0.0.7'] mcd_to_mcr = timestamps['10.0.0.4']['10.0.0.101'] mcr_to_mcd = timestamps['10.0.0.101']['10.0.0.4'] f_dict = to_dict(mcr_to_mcd) r_dict = to_dict(mcd_to_mcr) c_df = to_df(c_to_mcr, mcr_to_c) # print("SERVER") s_df = to_df(mcr_to_mcd, mcd_to_mcr) tot_df = combine_dfs(c_df, s_df) indf, outdf = reduce_df(tot_df) (indf['out'] - indf['in']).median(), (outdf['out'] - outdf['in']).median() # + def memtier_dat(folder): memtier_out = os.path.join(folder, 'memtier', 'test.out') sep = '--------' idx = -1 vals = defaultdict(lambda: np.array([[0,0]])) for line in open(memtier_out): try: line = line.split() if len(line) != 3: continue val = float(line[1]) pct = float(line[2]) vals[line[0]] = np.append(vals[line[0]],np.array([[val, pct]]), axis=0) except Exception as e: continue return dict(vals) d = memtier_dat('../execution/test/mcrouter_test') # - np.diff(d['SET'][:,1]) # + # %matplotlib notebook plt.figure(figsize=(10,4)) def plot_action(df, action, label): plt.plot((df[df.action == action]['out'] - df[df.action == action]['in'])*1e6, '.', markersize=4, label = '%s:%s' % (label, action)) def plot_hist(df, action, label): if do_log: bins = np.logspace(1, np.log(250), 250) else: bins = np.arange(250) plt.hist((df[df.action == action]['out'] - df[df.action == action]['in'])*1e6, bins=bins, alpha=.75, label = "%s:%s" % (label, action), density=True) def plot_memtier(d): diffs = np.diff(d['SET'][:,1]) diffs = np.append([0], diffs) x = d['SET'][1:,0] x = np.append([x[0] - (x[1] - x[0])], x) plt.fill(x * 1e3, diffs / 100., color='purple', label='Total: set', alpha=.5) diffs = np.diff(d['GET'][:,1]) diffs = np.append([0], diffs) x = d['GET'][1:,0] x = np.append([x[0] - (x[1] - x[0])], x) plt.fill(x * 1e3, diffs / 100., color='cyan', label='Total: get', alpha=.5) plt.subplot(211) plot_action(indf, 'get', 'client') plot_action(indf, 'set', 'client') plot_action(outdf, 'get', 'server') 
plot_action(outdf, 'set', 'server') plt.yscale('log') plt.ylim([10, None]) plt.legend() plt.ylabel("Latency (µs)") plt.title("McRouter Local Latency - No Concurrency") plt.subplot(212) do_log = False plot_hist(indf, 'get', 'client') plot_hist(indf, 'set', 'client') plot_hist(outdf, 'get', 'server') plot_hist(outdf, 'set', 'server') # plt.yscale('log') # plt.xscale('log') plt.ylim([.005, None]) plot_memtier(d) plt.xlim([0, 225]) plt.legend() plt.xlabel("Latency (µs)") # plt.subplot(313) # do_log=True # plot_hist(indf, 'get', 'client') # plot_hist(indf, 'set', 'client') # plot_hist(outdf, 'get', 'server') # plot_hist(outdf, 'set', 'server') # # plt.yscale('log') # plt.xscale('log') # plt.ylim([.005, None]) # plt.legend() # plot_memtier(d) # plt.xlim([0, 300]) # plt.xlabel("Latency (µs)") # # plt.plot(less_df['out'] - less_df['in'], '.') # - (less_df['out'] - less_df['in']).median() import matplotlib.pyplot as plt plt.figure() plt.plot(s_df['from'] - s_df.to, '.') to_dict(c_to_mcr) sents = sorted(timestamps['10.0.0.7']['10.0.0.101']) rcvs = sorted(timestamps['10.0.0.101']['10.0.0.7']) outs = sorted(timestamps['10.0.0.101']['10.0.0.4']) len(sents), len(rcvs), len(outs) np.array(rcvs[:700]) - np.array(sents[:700]) len(outs) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- hello from train import NoamOpt from matplotlib import pyplot as plt import numpy as np opts = [NoamOpt(768, 1, 4000, None), NoamOpt(768, 2, 4000, None)] upto=26000 plt.plot(np.arange(1, upto), [[opt.rate(i) for opt in opts] for i in range(1, upto)]) plt.legend(["1", "2"]) opts = [NoamOpt(512, 1, 4000, None), NoamOpt(512, 1, 8000, None), NoamOpt(256, 1, 4000, None)] plt.plot(np.arange(1, 20000), [[opt.rate(i) for opt in opts] for i in range(1, 20000)]) plt.legend(["512:4000", "512:8000", "256:4000"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pyspark.sql import SparkSession #New API spark_session = SparkSession\ .builder\ .master("spark://192.168.2.113:7077") \ .appName("marcelloVendruscolo_Assignment3_pA")\ .config("spark.dynamicAllocation.enabled", True)\ .config("spark.shuffle.service.enabled", True)\ .config("spark.dynamicAllocation.executorIdleTimeout","30s")\ .config("spark.executor.cores",2)\ .config("spark.driver.port",9998)\ .config("spark.blockManager.port",10005)\ .getOrCreate() #Old API (RDD) spark_context = spark_session.sparkContext spark_context.setLogLevel("INFO") # - #Helper function to (i) lowercase and (ii) tokenize (split on space) text. def func_lowercase_split(rdd): return rdd.lower().split(' ') #A.1.1 and A.1.4 - Read the English transcripts with Spark, and count the number of lines and partitions. print("File: europarl-v7.sv-en.en\n") en_1 = spark_context.textFile("hdfs://192.168.2.113:9000/europarl/europarl-v7.sv-en.en") lineCount_en_1 = en_1.count() print("Line countig: " + str(lineCount_en_1)) print("Partition counting: " + str(en_1.getNumPartitions())) #A.1.2 and A.1.4 - Read the Swedish transcripts with Spark, and count the number of lines and partitions. 
print("File: europarl-v7.sv-en.sv\n") sv_1 = spark_context.textFile("hdfs://192.168.2.113:9000/europarl/europarl-v7.sv-en.sv") lineCount_sv_1 = sv_1.count() print("Line counting: " + str(lineCount_sv_1)) print("Partition counting: " + str(sv_1.getNumPartitions())) #A.1.3 - Verify that the line counts are the same for the two languages. print("The line counts are the same for both europarl-v7.sv-en.en and europarl-v7.sv-en.sv: " + str(lineCount_en_1 == lineCount_sv_1)) #A.2.1 - Preprocess the text from both RDDs by lowercase-ing and tokenize-ing (split on space) the text: en_2 = en_1.map(func_lowercase_split) sv_2 = sv_1.map(func_lowercase_split) #A.2.2 - Inspect 10 entries from each of your RDDs to verify your pre-processing. print("10 entries from the English corpus after pre-processing:\n") print(en_2.take(10)) print("\n10 entries from the Swedish corpus after pre-processing:\n") print(sv_2.take(10)) #A.2.3 Verify that the line counts still match after the pre-processing. lineCount_en_2 = en_2.count() lineCount_sv_2 = sv_2.count() print("The line counts are the same for europarl-v7.sv-en.en before and after processing: " + str(lineCount_en_1 == lineCount_en_2)) print("The line counts are the same for europarl-v7.sv-en.sv before and after processing: " + str(lineCount_sv_1 == lineCount_sv_2)) print("The line counts are the same for both europarl-v7.sv-en.en and europarl-v7.sv-en.sv after processing: " + str(lineCount_en_2 == lineCount_sv_2)) #A.3.1 - Use Spark to compute the 10 most frequently according words in the English and Swedish language corpus. print("The 10 most frequent words in the English corpus:\n") print(en_2.flatMap(lambda x: x).map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y).takeOrdered(10, key = lambda x: -x[1])) print("\nThe 10 most frequent words in the Swedish corpus:\n") print(sv_2.flatMap(lambda x: x).map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y).takeOrdered(10, key = lambda x: -x[1])) #A.4.1 - Use this parallel corpus to mine some translations in the form of word pairs, for the two languages. en_3 = en_2.zipWithIndex().map(lambda x: (x[1],x[0])) sv_3 = sv_2.zipWithIndex().map(lambda x: (x[1],x[0])) en_sv_0 = en_3.join(sv_3) en_sv_1 = en_sv_0.filter(lambda x: (not x[1][0] is None) and (not x[1][1] is None)) #line pairs that have an empty/missing “corresponding” sentence. en_sv_2 = en_sv_1.filter(lambda x: ((len(x[1][0]) <= 15) and (len(x[1][1]) <= 15))) en_sv_3 = en_sv_2.filter(lambda x: ((len(x[1][0]) >= 2) and (len(x[1][1]) >= 2))) #filter out sentences that are too short. en_sv_4 = en_sv_3.filter(lambda x: (len(x[1][0]) == len(x[1][1]))) en_sv = en_sv_4.map(lambda x: list(zip(x[1][0],x[1][1]))).flatMap(lambda x: x) print("Some of the most frequently occurring pairs of words:\n") print(en_sv.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y).takeOrdered(20, key = lambda x: -x[1])) #Release the cores for another application! 
spark_context.stop() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('base') # language: python # name: python3 # --- # + import xpress as xp xp.controls.outputlog = 0 # turn off output log import numpy as np import pylab as pl # - assignment = xp.problem() n_machines = 4 n_tasks = 4 costs = np.array([[13, 4, 7, 6], [1, 11, 5, 4], [6, 7, 2, 8], [1, 3, 5, 9]]) # Cost of assigning machine i to task j X = np.array([[xp.var(vartype=xp.binary) for i in range(n_machines)] for j in range(n_tasks)]) assignment.addVariable(X) # Every task must be performed [assignment.addConstraint(xp.Sum(X[:, j]) == 1) for j in range(n_tasks)] # A machine can only be used for one task [assignment.addConstraint(xp.Sum(X[i]) <= 1) for i in range(n_machines)] assignment.setObjective(xp.Sum(X*costs)) assignment.solve() assignment.getSolution(X) # + set_covering = xp.problem() n_sets = 7 costs = np.array([3,5,1,9,2,4,1]) A = np.array([[1, 0, 1, 0, 0, 0, 1], [0, 1, 0, 0, 1, 0, 0], [1, 1, 0, 1, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0], [1, 0, 0, 1, 0, 1, 1]]) b = np.ones(5) # - x = np.array([xp.var(vartype=xp.binary) for _ in range(n_sets)]) set_covering.addVariable(x) set_covering.addConstraint(xp.Dot(A, x) >= b) set_covering.setObjective(xp.Dot(costs, x), sense=xp.minimize) set_covering.solve() set_covering.getProbStatusString() set_covering.getSolution(x), set_covering.getObjVal() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Autoencoder # This is the main Autoencoder notebook. In this notebook you will find all the code we use to investigate AE model. In particular, the section [Preprocessing](#preprocessing) contains all the code used for the preprocessing pipeline, including data strucure managements, transformations of data, creations of `X` and `y` vectors and so on. Section [Model (autoencoder)](#model-autoencoder) defines the final autoencoder model and trains it. Sections [KNN classifier](#knn-classifier), [KMeans classifier](#kmeans-classifier) and [NN classifier](#nn-classifier) instead define some classifiers which directly use AE features and try to classify data. The [Grid Search](#grid-search) section is responisble for the grid-search on hyperparameters in order to get the best autoencoder model. # # Table of contents # # * [Load Dataset](#load-dataset) # * [Preprocessing](#preprocessing) # * [Data Exploration](#data-exploration) # * [Model (Autoencoder)](#model-autoencoder) # * [KNN classifier](#knn-classifier) # * [KMeans classifier](#kmeans-classifier) # * [NN classifier](#nn-classifier) # * [Grid Search](#grid-search) # Set some global parameters. We use this parameters to change the notebook behaviour. # + # Whether to rotate data in an orientation indipendent manner. Influences the preproecssng stage. # # Default: True USE_ORIENTATION_INDIPENDENT_TRANSFORMATION = True # Whether to normalize data. Influences the preproecssng stage. # Note: we usually do not normalize beacause the distance between points # and reconstructed points gets reduced but the signal is not well represented # # Default: False USE_NORMALIZATION = False # Whether to center accelerometer data. Influences the preproecssng stage. 
# Note: we tried centering accelerometer data to check autoencoder behaviour with CNN center data # but this is not useful at all for autoencoder. # # Default: False USE_CENTERING_ACC = False # Whether to apply low-pass filter on data. Influences the preproecssng stage. # # Default: False USE_LOW_PASS_FILTER = False # Whether to use Bryan dataset as validation dataset. Influences the validation stage. # # Default: True USE_BRYAN_VALIDATION_DATASET = True # Whether to use all positions (right, left, back, top, hand, pocket) from Bryan dataset. # Influences the validation stage. # # Default: Flase USE_ALL_POSITION_BRYAN_DATASET = False # Whether to use only positions hand and pocket from Bryan dataset. # Influences the validation stage. # # Default: Flase USE_ONLY_HAND_POCKET_POSITION_BRYAN_DATASET = False # Whether to use grid search for KNN hyperparameters. Influences the grid-search stage. # # Default: False USE_KNN_GRID_SEARCH = False # Whether to use grid search for autoencoder hyperparameters. Influences the grid-search stage. # Note: this task is resource intensive and could take a while. Keep it disabled for prototyping. # # Default: True USE_GRID_SEARCH = False # Whether to test some of the best models obtained by grid search. Influences the grid-search stage. # This should be set to True if you have some models to test or when `USE_GRID_SEARCH` is True. # # Default: True USE_GRID_SEARCH_VALIDATION = False # Whether to merge sit and stand classes. Influences the preprocessing stage. # # Default: True USE_MERGE_SIT_STAND = False # Whether to merge stairs up/down classes. Influences the preprocessing stage. # # Default: False USE_MERGE_STAIRS_UP_DOWN = False # - def get_settings(): """ Return a string which encodes current global settings. Can be used as name of files. Note: only "relevant" configs are included in the output. """ oit = "_oit" if USE_ORIENTATION_INDIPENDENT_TRANSFORMATION else "" merge = "_not-merge" if not USE_MERGE_SIT_STAND else "" norm = "_norm" if USE_NORMALIZATION else "" bryan = "_bryan" if USE_BRYAN_VALIDATION_DATASET else "" allpos = "_allpos" if USE_BRYAN_VALIDATION_DATASET and USE_ALL_POSITION_BRYAN_DATASET else "" return oit + norm + bryan + allpos # + import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import random import tensorflow as tf from tensorflow.keras import layers from autoencoder_utils import show_samples, show_loss, show_mse, show_reconstructed_signals, show_reconstruction_errors from keras_utils import ModelSaveCallback from orientation_indipendend_transformation import orientation_independent_transformation # - random.seed(42) np.random.seed(42) # ## Load dataset # + def load_dataset(data_filename="dataset.csv", label_filename="labels.csv", dataset_dir="./datasets"): """ Load a dataset from data and label filenames. """ data = pd.read_csv(os.path.join(dataset_dir, data_filename), header=None) labels = pd.read_csv(os.path.join(dataset_dir, label_filename), header=None, names=["user", "model", "label"]) return data, labels def load_dataset_position(data_filename="dataset.csv", label_filename="labels.csv", dataset_dir="./datasets"): """ Load a dataset from data and label filenames. Label file contains user, (smartphone) model, label, and (sensor) position. 
""" data = pd.read_csv(os.path.join(dataset_dir, data_filename), header=None) labels = pd.read_csv(os.path.join(dataset_dir, label_filename), header=None, names=["user", "model", "label", "position"]) return data, labels # - def print_stats(ds: pd.DataFrame): print("Shape", ds.shape) print("Columns", ds.columns) X_df_reference, y_df_reference = load_dataset(dataset_dir="./datasets/heterogeneity_f50_w2.5_o0.5") X_df_reference_validation, y_df_reference_validation = load_dataset_position(dataset_dir="./datasets/bryan_f50_w2.5_o0.5") print_stats(X_df_reference) print_stats(y_df_reference) print_stats(X_df_reference_validation) print_stats(y_df_reference_validation) # ## Preprocessing def to_numpy(df): return df.loc[:].to_numpy() def get_label(x): return x[..., 2] def restructure(x): return x.reshape(-1, 6, 125) def normalize(x): min_val = np.max(x) max_val = np.min(x) x = (x - min_val) / (max_val - min_val) return x def center(x): means = np.mean(x, axis=2) # get mean for each dimension for each window means[:, 3:] = 0 # do not center gyroscope x = (x - means[:,:,np.newaxis]) # + from scipy.signal import butter, filtfilt, freqz import matplotlib.pyplot as plt def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff / nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = filtfilt(b, a, data) return y # - def plot_butter_lowpass_filter(cutoff, fs, order): # Get the filter coefficients so we can check its frequency response. b, a = butter_lowpass(cutoff, fs, order) # Plot the frequency response. w, h = freqz(b, a, worN=8000) plt.subplot(2, 1, 1) plt.plot(0.5*fs*w/np.pi, np.abs(h), 'b') plt.plot(cutoff, 0.5*np.sqrt(2), 'ko') plt.axvline(cutoff, color='k') plt.xlim(0, 0.5*fs) plt.title("Lowpass Filter Frequency Response") plt.xlabel('Frequency [Hz]') plt.grid() # + def get_X(df): """ Return a numpy array with preprocessed X data """ # 1. Back to numpy a = to_numpy(df) # 2. Restructure the array a = restructure(a) # Low-pass filter if USE_LOW_PASS_FILTER: for i in range(len(a)): window = a[i] lowpassed = [butter_lowpass_filter(window[j], cutoff=5, fs=50) for j in range(6)] a[i] = lowpassed # 3. Normalize if USE_NORMALIZATION: a = normalize(a) # 4. Orientation indipendent transformation if USE_ORIENTATION_INDIPENDENT_TRANSFORMATION: a = orientation_independent_transformation(a) # 4. 
Center accelerometer data if USE_CENTERING_ACC: a = center(a) return a def get_y(df): """ Return a numpy array with labels """ y = to_numpy(df) y = get_label(y) return y def get_y_hot(df): """ Return a numpy array with labels in one-hot encoding """ return pd.get_dummies(df['label']).to_numpy() # - # ### Training Set X_df, y_df = X_df_reference.copy(), y_df_reference.copy() # Set up the low-pass filter # + cutoff = 1 # desired cutoff frequency of the filter, Hz fs = 25 # sample rate, Hz order = 3 plot_butter_lowpass_filter(cutoff, fs, order) plt.show() # - # Prepare the dataset with pandas # * split training and test set # * shuffle the dataset # + if USE_MERGE_SIT_STAND: # Merge sit and stand labels sit_or_stand_filter = (y_df["label"] == "sit") | (y_df["label"] == "stand") y_df["label"].loc[sit_or_stand_filter] = "no_activity" if USE_MERGE_STAIRS_UP_DOWN: # Merge stairs activity stairsdown_or_stairsup_filter = (y_df["label"] == "stairsdown") | (y_df["label"] == "stairsup") y_df["label"].loc[stairsdown_or_stairsup_filter] = "stairs" # *** SHUFFLE X_shuffled_df = X_df.sample(frac=1) y_shuffled_df = y_df.reindex(X_shuffled_df.index) # *** TRAIN AND TEST if USE_BRYAN_VALIDATION_DATASET: but_last_user_indicies = ~(y_df['user'] == "z") else: but_last_user_indicies = ~((y_df['user'] == "a") | (y_df['user'] == "b")) X_train_df = X_shuffled_df.loc[but_last_user_indicies] X_test_df = X_shuffled_df.loc[~but_last_user_indicies] y_train_df = y_shuffled_df.loc[but_last_user_indicies] y_test_df = y_shuffled_df.loc[~but_last_user_indicies] print("X_train_df =", len(X_train_df)) print("X_test_df =", len(X_test_df)) print("y_train_df =", len(y_train_df)) print("y_test_df =", len(y_test_df)) assert len(X_train_df) == len(y_train_df), "X train and y train do not contain same number of samples" assert len(X_test_df) == len(y_test_df), "X test and y test do not contain same number of samples" # - # Preprocess dataset # + tags=[] # Preprocess and prepare training and test set X_train, y_train = get_X(X_train_df), get_y(y_train_df) X_test, y_test = get_X(X_test_df), get_y(y_test_df) # Retrieve al y_train_hot = get_y_hot(y_train_df) y_test_hot = get_y_hot(y_test_df) # - print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) # Show labels # + classes = np.unique(y_train) num_classes = len(np.unique(y_train)) print(f"Classes = {classes}") print(f"Num classes = {num_classes}") # - # Plot some samples show_samples(X_train, y_train, n=1, is_random=False) if (len(X_test) > 0): show_samples(X_test, y_test, n=1, is_random=False) else: print("WARNING: there is no data in X_test. 
Are you using Bryan dataset as validation set?") # ### Validation Set # Prepare the dataset # + tags=[] X_df, y_df = X_df_reference_validation.copy(), y_df_reference_validation.copy() # Keep valid positions only if USE_ALL_POSITION_BRYAN_DATASET: valid_positions_indicies = ~(y_df["position"] == "none") elif USE_ONLY_HAND_POCKET_POSITION_BRYAN_DATASET: valid_positions_indicies = (y_df["position"] == "hand") | (y_df["position"] == "pocket-up") | (y_df["position"] == "pocket-down") else: valid_positions_indicies = (y_df["position"] == "right") | (y_df["position"] == "left") | (y_df["position"] == "top") | (y_df["position"] == "back") | (y_df["position"] == "bottom") # Filter out other data X_df = X_df.loc[valid_positions_indicies] y_df = y_df.loc[valid_positions_indicies] # Shuffle data X_df = X_df.sample(frac=1) y_df = y_df.reindex(X_df.index) # - # Preprocess dataset # + tags=[] X_validation = get_X(X_df) y_validation = get_y(y_df) y_validation_hot = get_y_hot(y_df) # + X_validation_df = X_df.copy() y_validation_df = y_df.copy() del X_df, y_df # - print(X_validation.shape) print(y_validation.shape) print(y_validation_hot.shape) show_samples(X_validation, y_validation, n=1, is_random=False) if USE_BRYAN_VALIDATION_DATASET : X_test = X_validation y_test = y_validation y_test_hot = y_validation_hot show_samples(X_test, y_test, n=1, is_random=False) # Check size assert X_train.shape[0] == y_train.shape[0], f"Invalid shape for X_train and y_train: {X_train.shape} != {y_train.shape}" assert X_test.shape[0] == y_test.shape[0], f"Invalid shape for X_test and y_test: {X_test.shape} != {y_test.shape}" assert X_test.shape[0] == y_test.shape[0], f"Invalid shape for X_test and y_test: {X_test.shape} != {y_test.shape}" assert y_train_hot.shape == (y_train.shape[0],num_classes), f"Invalid shape of y_train_hot: {y_train_hot.shape}" assert y_test_hot.shape == (y_test.shape[0],num_classes), f"Invalid shape of y_test_hot: {y_test_hot.shape}" # ## Data Exploration # ### Training Set print("Users", y_train_df["user"].unique()) print("Models", y_train_df["model"].unique()) print("Classes", y_train_df["label"].unique()) # Fraction of samples per label print(y_train_df.groupby(["label"])["label"].count() / y_train_df["label"].count()) # Fraction of samples per user print(y_train_df.groupby(["user"])["user"].count() / y_train_df["user"].count()) # Fraction of samples per model print(y_train_df.groupby(["model"])["model"].count() / y_train_df["model"].count()) # Number of samples per user i and fraction of samples per class for user i # + y_df_i = y_train_df.loc[y_train_df["user"] == "i"] num_samples_i = y_df_i["label"].count() fraction_of_samples_per_class_i = y_df_i.groupby(["label"])["label"].count() / y_df_i["label"].count() print(num_samples_i) print(fraction_of_samples_per_class_i) # - # ### Validation Set print("Classes (validation)", y_validation_df["label"].unique()) print(y_validation_df.groupby(["label"])["label"].count() / y_validation_df["label"].count()) print(y_validation_df.groupby(["user"])["user"].count() / y_validation_df["user"].count()) print(y_validation_df.groupby(["model"])["model"].count() / y_validation_df["model"].count()) # ## Model (autoencoder) DATA_SHAPE = X_train.shape[1:] CODE_SIZE=36 def build_encoder(data_shape, code_size): inputs = tf.keras.Input(data_shape) X = inputs X = layers.Flatten()(X) X = layers.Dense(150, activation="relu")(X) X = layers.Dense(code_size, activation="sigmoid")(X) outputs = X return tf.keras.Model(inputs=inputs, outputs=outputs) # + def 
build_decoder(data_shape, code_size): inputs = tf.keras.Input((code_size,)) X = inputs X = layers.Dense(150, activation="relu")(X) X = layers.Dense(np.prod(data_shape), activation=None)(X) outputs = layers.Reshape(data_shape)(X) return tf.keras.Model(inputs=inputs, outputs=outputs) # - def build_autoencoder(encoder, decoder): inputs = tf.keras.Input(DATA_SHAPE) # input codes = encoder(inputs) # build the code with the encoder outputs = decoder(codes) # reconstruction the signal with the decoder return tf.keras.Model(inputs=inputs, outputs=outputs) encoder = build_encoder(DATA_SHAPE, CODE_SIZE) decoder = build_decoder(DATA_SHAPE, CODE_SIZE) # + tags=[] autoencoder = build_autoencoder(encoder, decoder) optimizer = "adam" loss = "mse" model_filename = 'autoencoder_network.hdf5' last_finished_epoch = None epochs=100 batch_size=128 early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5) save_model_checkpoint_callback = ModelSaveCallback(model_filename) callbacks = [save_model_checkpoint_callback, early_stopping_callback] autoencoder.compile(optimizer=optimizer, loss=loss) history = autoencoder.fit( x=X_train, y=X_train, epochs=epochs, validation_data=(X_test, X_test), batch_size=batch_size, callbacks=callbacks, verbose=1) # - encoder.save("encoder.h5") show_loss(history) show_mse(autoencoder, X_test) show_reconstructed_signals(X_test, encoder, decoder, n=1) show_reconstruction_errors(X_test, encoder, decoder, n=1) # ## KNN classifier # + from sklearn.neighbors import KNeighborsClassifier # prepare the codes codes = encoder.predict(X_train) assert codes.shape[1:] == (CODE_SIZE,), f"Predicted codes shape must be equal to code size, but {codes.shape[1:]} != {(CODE_SIZE,)}" # create the k-neighbors calssifier n_neighbors = 5 metric = "euclidean" nbrs = KNeighborsClassifier(n_neighbors=n_neighbors, metric=metric) # fit the model using the codes nbrs.fit(codes, y_train) # - # Clustering classes print("Classes =", nbrs.classes_) # Show some predictions # + print("X_test[i] = y_true \t y_pred \t with probs [...]") print() for i in range(20): x = X_test[i] y = y_test[i] c = encoder.predict(x[np.newaxis, :])[0] [lab] = nbrs.predict(c[np.newaxis, :]) [probs] = nbrs.predict_proba(c[np.newaxis, :]) print(f"X_test[{i}] = {y}\t {lab} \t with probs {probs}") # - # Show performance metrics # + from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score codes = encoder.predict(X_test) y_true = y_test y_pred = nbrs.predict(codes) a = accuracy_score(y_true=y_true, y_pred=y_pred) f1 = f1_score(y_true=y_true, y_pred=y_pred, average="weighted") p = precision_score(y_true=y_true, y_pred=y_pred, average="weighted") r = recall_score(y_true=y_true, y_pred=y_pred, average="weighted") print(classification_report(y_true=y_true, y_pred=y_pred)) print(f"accuracy = {a}") print(f"precision = {p}") print(f"recall = {r}") print(f"f1_score = {f1}") # - # Grid search for KNN if USE_KNN_GRID_SEARCH: from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score results = pd.DataFrame() codes_train = encoder.predict(X_train) codes_test = encoder.predict(X_test) y_true = y_test for metric in ["euclidean", "manhattan", "chebyshev", "minkowski", "seuclidean", "mahalanobis"]: for n_neighbors in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 25, 30, 40, 100]: print(f"metric={metric}, n_neighbors={n_neighbors}") nbrs = KNeighborsClassifier(n_neighbors=n_neighbors, metric=metric) # fit the model using the 
codes nbrs.fit(codes_train, y_train) # predict y_pred = nbrs.predict(codes_test) a = accuracy_score(y_true=y_true, y_pred=y_pred) f1 = f1_score(y_true=y_true, y_pred=y_pred, average="weighted") p = precision_score(y_true=y_true, y_pred=y_pred, average="weighted") r = recall_score(y_true=y_true, y_pred=y_pred, average="weighted") data = [{"metric": metric, "n_neighbors": n_neighbors, "accuracy": a, "f1": f1, "precision": p, "recall": r }] results = results.append(pd.DataFrame(data), ignore_index=True) if USE_KNN_GRID_SEARCH: rr = results.loc[(results["n_neighbors"] >= 3) & (results["n_neighbors"] <= 8)] rr = rr.groupby("metric")["accuracy"].max() print(rr) print(rr.max()) results.to_csv("./models/knn_grid_search.csv") # ## KMeans classifier # + tags=[] from sklearn.cluster import KMeans from sklearn.preprocessing import LabelEncoder import numpy as np le = LabelEncoder() le.fit(y_test) # train codes = encoder.predict(X_train) kmeans = KMeans(n_clusters=num_classes, random_state=0) kmeans.fit(codes) # evaluate codes = encoder.predict(X_test) y_true = y_test y_pred = le.inverse_transform(kmeans.predict(codes)) print(classification_report(y_true=y_true, y_pred=y_pred)) # - # ## NN classifier # + def build_nn(code_size): inputs = tf.keras.Input((code_size,)) X = inputs X = layers.Dense(100, activation="relu")(X) X = layers.Dropout(0.1)(X) X = layers.Dense(100, activation="relu")(X) X = layers.Dropout(0.1)(X) X = layers.Dense(num_classes, activation="softmax")(X) outputs = X return tf.keras.Model(inputs=inputs, outputs=outputs) codes_train = encoder.predict(X_train) codes_test = encoder.predict(X_test) nn_model = build_nn(CODE_SIZE) adam_optimizer = tf.keras.optimizers.Adam() loss_funct = tf.keras.losses.CategoricalCrossentropy() early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5) callbacks = [early_stopping_callback] nn_model.compile(optimizer=adam_optimizer, loss=loss_funct, metrics=["accuracy"]) nn_model.summary() history = nn_model.fit(x=codes_train, y=y_train_hot, epochs=50, validation_data=(codes_test, y_test_hot), batch_size=128, callbacks=callbacks, verbose=1) # + from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score from sklearn.preprocessing import LabelEncoder codes = encoder.predict(X_test) y_true = LabelEncoder().fit_transform(y_test) y_pred = np.argmax(nn_model.predict(codes), axis=1) print(y_pred) a = accuracy_score(y_true=y_true, y_pred=y_pred) f1 = f1_score(y_true=y_true, y_pred=y_pred, average="weighted") p = precision_score(y_true=y_true, y_pred=y_pred, average="weighted") r = recall_score(y_true=y_true, y_pred=y_pred, average="weighted") loss, accuracy = nn_model.evaluate(codes, y_test_hot) print("LOSS =", loss) print(classification_report(y_true=y_true, y_pred=y_pred)) print(f"accuracy = {accuracy} = {a}") print(f"precision = {p}") print(f"recall = {r}") print(f"f1_score = {f1}") show_loss(history) # - # # Grid Search # # We are searching for best hyperparams. More precisely we want to find the best autoencoder model given # # * code size # * optimizer # * loss func # * epochs # * batch size # + tags=[] def get_filename(hparams): return f'models/encoder{get_settings()}_cs-{hparams["code_size"]}_loss-{hparams["loss_func"]}_bs-{hparams["batch_size"]}.h5' def compile_autoencoder(autoencoder, hparams): """ Compile autoencoder with given hyperparams. 
""" optimizer = hparams["optimizer"] loss = hparams["loss_func"] autoencoder.compile(optimizer=optimizer, loss=loss) return autoencoder def fit_autoencoder(autoencoder, hparams): """ Fit the autoencoder with given hyperparams on `X_train` with validation `X_test`. Please note that the best and last models are saved in `models` folder. """ early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3) callbacks = [early_stopping_callback] history = autoencoder.fit( x=X_train, y=X_train, epochs=hparams["epochs"], validation_data=(X_test, X_test), batch_size=hparams["batch_size"], callbacks=callbacks, verbose=0) return history def evaluate_autoencoder(autoencoder): """ Evaluate the autoencoder with `X_test`. """ return autoencoder.evaluate(X_test, X_test, verbose=0) def run_model(hparams): """ Setup, train and evaluate the autoencoder model with given hyperparams. :return: A tuple `(loss, hystory)` """ encoder = build_encoder(DATA_SHAPE, hparams["code_size"]) decoder = build_decoder(DATA_SHAPE, hparams["code_size"]) autoencoder = build_autoencoder(encoder, decoder) autoencoder = compile_autoencoder(autoencoder, hparams) history = fit_autoencoder(autoencoder, hparams) loss = evaluate_autoencoder(autoencoder) # Note: we may also add the classifier training and evaluation # to select the best model filepath = get_filename(hparams) encoder.save(filepath) return loss, history def select_model(): """ Run the model with hyperparams and save results. :return: An array of results with mse, hparams and training history. """ results = [] hparams_code_sizes = [2, 3, 4, 5, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 72] hparams_opts = ["adam"] hparams_losses = ["mse"] hparams_epochs = [150] hparams_batch_sizes = [32, 128] n_iterations = np.prod([len(hparams_code_sizes), len(hparams_opts), len(hparams_losses), len(hparams_epochs), len(hparams_batch_sizes)]) print(f"Starting model selection... {n_iterations} iterations needed.") for code_size in hparams_code_sizes: for opt in hparams_opts: for loss in hparams_losses: for epochs in hparams_epochs: for batch_size in hparams_batch_sizes: hparams = { "code_size": code_size, "optimizer": opt, "loss_func": loss, "epochs": epochs, "batch_size": batch_size } print(f"Starting run {len(results)}") loss_val, history = run_model(hparams) print(f"hparams = {hparams}") print(f"{loss} = {loss_val}") print() results += [{"hparams": hparams, "loss_val": loss_val, "history": history}] return results # - # Select the best model based on lowest loss. 
# + import time results = [] if USE_GRID_SEARCH: start = time.time() results = select_model() end = time.time() print(f"Done in {end - start} s") # - # Print a CSV table for models and save them to file # + def get_csv(results): if (): return "" s = ",".join(["i", "type", "loss_val"] + [k for k,_ in results[0]["hparams"].items()] + ["filepath"]) + "\n" for i, result in zip(range(len(results)), results): hparams = result["hparams"] code_size = hparams["code_size"] batch_size = hparams["batch_size"] loss_val = result["loss_val"] filepath = get_filename(hparams) type = get_settings().lstrip("_") if get_settings() != "" else "default" s += ",".join([str(i), type, str(loss_val)] + [str(v) for _,v in hparams.items()] + [filepath]) + "\n" return s def save_results(results, filename=f'models/res{get_settings()}.csv'): with open(filename, mode="w+") as f: content = get_csv(results) f.write(content) if len(results) > 0: print(get_csv(results)) if USE_GRID_SEARCH: save_results(results) else: print("WARNING: Results were not saved to file.") # - # Check best model # + tags=[] def test_best_model(best_filename, code_size): def run_nn(codes_train, codes_test): nn_model = build_nn(code_size) adam_optimizer = tf.keras.optimizers.Adam() loss_funct = tf.keras.losses.CategoricalCrossentropy() early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3) callbacks = [early_stopping_callback] nn_model.compile(optimizer=adam_optimizer, loss=loss_funct, metrics=["accuracy"]) #nn_model.summary() history = nn_model.fit(x=codes_train, y=y_train_hot, epochs=100, validation_data=(codes_test, y_test_hot), batch_size=128, callbacks=callbacks, verbose=0) loss, accuracy = nn_model.evaluate(codes_test, y_test_hot) return accuracy, loss #show_loss(history) def run_knn(codes_train, codes_test): from sklearn.neighbors import KNeighborsClassifier # create the k-neighbors calssifier n_neighbors = num_classes metric = "euclidean" nbrs = KNeighborsClassifier(n_neighbors=n_neighbors, metric=metric) # fit the model using the codes nbrs.fit(codes_train, y_train) from sklearn.metrics import mean_squared_error from sklearn.metrics import accuracy_score y_true = y_test y_pred = nbrs.predict(codes_test) #loss = mean_squared_error(y_true=y_true, y_pred=y_pred) accuracy = accuracy_score(y_true=y_true, y_pred=y_pred) return accuracy encoder = tf.keras.models.load_model(best_filename, compile=False) encoder.compile(optimizer="adam", loss="mse") codes_train = encoder.predict(X_train, verbose=0) codes_test = encoder.predict(X_test, verbose=0) accuracy_knn = run_knn(codes_train, codes_test) accuracy_nn, loss_nn = run_nn(codes_train, codes_test) return {"accuracy_knn": accuracy_knn, "accuracy_nn": accuracy_nn, "loss_nn": loss_nn } if USE_GRID_SEARCH_VALIDATION: validations_df = pd.DataFrame(columns=["accuracy_knn", "accuracy_nn", "loss_nn"]) res_df = pd.read_csv(f"models/res{get_settings()}.csv") #res_oit_df = pd.read_csv("./models/res_oit.csv.txt") #data_df = res_df.append(res_oit_df, ignore_index=True) data_df = res_df for index, row in data_df.iterrows(): print(f"Validating model {index} ({row['filepath']})") test_res = test_best_model(row["filepath"], row["code_size"]) validations_df = validations_df.append(pd.DataFrame([{ **{"filepath": row['filepath']}, **test_res }]), ignore_index=True) print(validations_df) validations_df.to_csv(f"./models/validation{get_settings()}.csv") print("Done") # - fn = "models/manual/encoder_oit_bryan_allpos_cs-30_bs-128.h5" print(test_best_model(fn, 30)) # --- # jupyter: # 
jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to PyMC # ______ # # PyMC is a python module that implements Bayesian statistical models and # fitting algorithms, including Markov chain Monte Carlo. Its flexibility # and extensibility make it applicable to a large suite of problems. Along # with core sampling functionality, PyMC includes methods for summarizing # output, plotting, goodness-of-fit and convergence diagnostics. # # PyMC provides functionalities to make Bayesian analysis as painless as # possible. Here is a short list of some of its features: # # - Fits Bayesian statistical models with Markov chain Monte Carlo and # other algorithms. # - Includes a large suite of well-documented statistical distributions. # - Uses NumPy for numerics wherever possible. # - Includes a module for modeling Gaussian processes. # - Sampling loops can be paused and tuned manually, or saved and # restarted later. # - Creates summaries including tables and plots. # - Traces can be saved to the disk as plain text, Python pickles, # SQLite or MySQL database, or hdf5 archives. # - Several convergence diagnostics are available. # - Extensible: easily incorporates custom step methods and unusual # probability distributions. # - MCMC loops can be embedded in larger programs, and results can be # analyzed with the full power of Python. # + # %matplotlib inline import seaborn as sns; sns.set_context('notebook') from pymc import Normal, Lambda, observed, MCMC, Matplot, Uniform from pymc.examples import melanoma_data as data import numpy as np # Convert censoring indicators to indicators for failure event failure = (data.censored==0).astype(int) # Intercept for survival rate beta0 = Normal('beta0', mu=0.0, tau=0.0001, value=0.0) # Treatment effect beta1 = Normal('beta1', mu=0.0, tau=0.0001, value=0.0) # Survival rates lam = Lambda('lam', lambda b0=beta0, b1=beta1, tr=data.treat: np.exp(b0 + b1*tr)) @observed def survival(value=data.t, lam=lam, f=failure): """Exponential survival likelihood, accounting for censoring""" return sum(f*np.log(lam) - lam*value) # - # This example will generate 10000 posterior samples, thinned by a factor # of 2, with the first half discarded as burn-in. The sample is stored in # a Python serialization (pickle) database. M = MCMC([beta0, beta1, lam, survival], db='pickle') M.sample(iter=10000, burn=5000) Matplot.plot(beta1) # ## Example: Coal mining disasters # # Recall the earlier example of estimating a changepoint in the time series of UK coal mining disasters. 
# + from pymc.examples.disaster_model import disasters_array import matplotlib.pyplot as plt n_count_data = len(disasters_array) plt.figure(figsize=(12.5, 3.5)) plt.bar(np.arange(1851, 1962), disasters_array, color="#348ABD") plt.xlabel("Year") plt.ylabel("Disasters") plt.title("UK coal mining disasters, 1851-1962") plt.xlim(1851, 1962); # - # We represent our conceptual model formally as a statistical model: # # $$\begin{array}{ccc} # (y_t | \tau, \lambda_1, \lambda_2) \sim\text{Poisson}\left(r_t\right), & r_t=\left\{ # \begin{array}{lll} # \lambda_1 &\text{if}& t< \tau\\ # \lambda_2 &\text{if}& t\ge \tau # \end{array}\right.,&t\in[t_l,t_h]\\ # \tau \sim \text{DiscreteUniform}(t_l, t_h)\\ # \lambda_1\sim \text{Exponential}(a)\\ # \lambda_2\sim \text{Exponential}(b) # \end{array}$$ # # Because we have defined $y$ by its dependence on $\tau$, $\lambda_1$ and $\lambda_2$, the # latter three are known as the *parents* of $y$ and $D$ is called their # *child*. Similarly, the parents of $\tau$ are $t_l$ and $t_h$, and $\tau$ is # the child of $t_l$ and $t_h$. # # ## PyMC Variables # # At the model-specification stage (before the data are observed), $y$, # $\tau$, $\lambda_1$, and $\lambda_2$ are all random variables. Bayesian "random" # variables have not necessarily arisen from a physical random process. # The Bayesian interpretation of probability is *epistemic*, meaning # random variable $x$'s probability distribution $p(x)$ represents our # knowledge and uncertainty about $x$'s value. Candidate # values of $x$ for which $p(x)$ is high are relatively more probable, # given what we know. Random variables are represented in PyMC by the # classes `Stochastic` and `Deterministic`. # # The only `Deterministic` in the model is $r$. If we knew the values of # $r$'s parents, we could compute the value of $r$ # exactly. A `Deterministic` like $r$ is defined by a mathematical # function that returns its value given values for its parents. # `Deterministic` variables are sometimes called the *systemic* part of # the model. The nomenclature is a bit confusing, because these objects # usually represent random variables; since the parents of $r$ are random, # $r$ is random also. # # On the other hand, even if the values of the parents of variables # `switchpoint`, `disasters` (before observing the data), `early_mean` # or `late_mean` were known, we would still be uncertain of their values. # These variables are characterized by probability distributions that # express how plausible their candidate values are, given values for their # parents. The `Stochastic` class represents these variables. # # First, we represent the unknown switchpoint as a discrete uniform random variable: # + from pymc import DiscreteUniform, Exponential, Poisson, deterministic switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, value=50) # - # `DiscreteUniform` is a subclass of `Stochastic` that represents # uniformly-distributed discrete variables. Use of this distribution # suggests that we have no preference *a priori* regarding the location of # the switchpoint; all values are equally likely. 
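# As a quick sanity check of that statement (a minimal sketch, not part of the original notebook, using the `switchpoint` variable just defined): a discrete uniform prior over the 111 candidate values 0..110 gives each of them probability 1/111, so the prior log-probability at any admissible value should simply be -log(111).

# +
import numpy as np

print(switchpoint.logp)   # log-probability of the current value (50) under its prior
print(-np.log(111))       # -log(111) ≈ -4.71; all 111 values are equally likely
# -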
# # Now we create the # exponentially-distributed variables `early_mean` and `late_mean` for the # early and late Poisson rates, respectively: early_mean = Exponential('early_mean', beta=1., value=0.1) late_mean = Exponential('late_mean', beta=1., value=1) # Next, we define the variable `rate`, which selects the early rate # `early_mean` for times before `switchpoint` and the late rate # `late_mean` for times after `switchpoint`. We create `rate` using the # `deterministic` decorator, which converts the ordinary Python function # `rate` into a `Deterministic` object. @deterministic(plot=False) def rate(s=switchpoint, e=early_mean, l=late_mean): ''' Concatenate Poisson means ''' out = np.empty(len(disasters_array)) out[:s] = e out[s:] = l return out # The last step is to define the number of disasters `disasters`. This is # a stochastic variable but unlike `switchpoint`, `early_mean` and # `late_mean` we have observed its value. To express this, we set the # argument `observed` to `True` (it is set to `False` by default). This # tells PyMC that this object's value should not be changed: disasters = Poisson('disasters', mu=rate, value=disasters_array, observed=True) disasters.logp # ### Why are data and unknown variables represented by the same object? # # Since its represented by a `Stochastic` object, `disasters` is defined # by its dependence on its parent `rate` even though its value is fixed. # This isn't just a quirk of PyMC's syntax; Bayesian hierarchical notation # itself makes no distinction between random variables and data. The # reason is simple: to use Bayes' theorem to compute the posterior, we require the # likelihood. Even though `disasters`'s value is known # and fixed, we need to formally assign it a probability distribution as # if it were a random variable. Remember, the likelihood and the # probability function are essentially the same, except that the former is # regarded as a function of the parameters and the latter as a function of # the data. # # This point can be counterintuitive at first, as many peoples' instinct # is to regard data as fixed a priori and unknown variables as dependent # on the data. One way to understand this is to think of statistical # modelsas predictive models for data, or as # models of the processes that gave rise to data. Before observing the # value of `disasters`, we could have sampled from its prior predictive # distribution $p(y)$ (*i.e.* the marginal distribution of the data) as # follows: # # > - Sample `early_mean`, `switchpoint` and `late_mean` from their # > priors. # > - Sample `disasters` conditional on these values. # # Even after we observe the value of `disasters`, we need to use this # process model to make inferences about `early_mean` , `switchpoint` and # `late_mean` because its the only information we have about how the # variables are related. # ## Parents and children # # We have above created a PyMC probability model, which is simply a linked # collection of variables. To see the nature of the links, iexamine `switchpoint`'s `parents` attribute: switchpoint.parents # The `parents` dictionary shows us the distributional parameters of # `switchpoint`, which are constants. Now let's examine \`disasters\`'s # parents: disasters.parents rate.parents # We are using `rate` as a distributional parameter of `disasters` # (*i.e.* `rate` is `disasters`'s parent). `disasters` internally # labels `rate` as `mu`, meaning `rate` plays the role of the rate # parameter in `disasters`'s Poisson distribution. 
Now examine `rate`'s # `children` attribute: rate.children # Because `disasters` considers `rate` its parent, `rate` considers # `disasters` its child. Unlike `parents`, `children` is a set (an # unordered collection of objects); variables do not associate their # children with any particular distributional role. Try examining the # `parents` and `children` attributes of the other parameters in the # model. # # The following **directed acyclic graph** is a visualization of the # parent-child relationships in the model. Unobserved stochastic variables # `switchpoint`, `early_mean` and `late_mean` are open ellipses, observed # stochastic variable `disasters` is a filled ellipse and deterministic # variable `rate` is a triangle. Arrows point from parent to child and # display the label that the child assigns to the parent. # # ![Directed acyclic graph of disaster model](images/dag.png) # As the examples above have shown, PyMC objects need to have a name # assigned, such as `switchpoint`, `early_mean` or `late_mean`. These # names are used for storage and post-processing: # # - as keys in on-disk databases, # - as node labels in model graphs, # - as axis labels in plots of traces, # - as table labels in summary statistics. # # A model instantiated with variables having identical names raises an # error to avoid name conflicts in the database storing the traces. In # general however, PyMC uses references to the objects themselves, not # their names, to identify variables. # ## Variables' values and log-probabilities # # All PyMC variables have an attribute called `value` that stores the # current value of that variable. Try examining `disasters`'s value, and # you'll see the initial value we provided for it: disasters.value # If you check the values of `early_mean`, `switchpoint` and `late_mean`, # you'll see random initial values generated by PyMC: switchpoint.value early_mean.value late_mean.value # Of course, since these are `Stochastic` elements, your values will be # different than these. If you check `rate`'s value, you'll see an array # whose first `switchpoint` elements are `early_mean`, # and whose remaining elements are `late_mean`: rate.value # To compute its value, `rate` calls the function we used to create it, # passing in the values of its parents. # # `Stochastic` objects can evaluate their probability mass or density # functions at their current values given the values of their parents. The # logarithm of a stochastic object's probability mass or density can be # accessed via the `logp` attribute. For vector-valued variables like # `disasters`, the `logp` attribute returns the sum of the logarithms of # the joint probability or density of all elements of the value. Try # examining `switchpoint`'s and `disasters`'s log-probabilities and # `early_mean` 's and `late_mean`'s log-densities: switchpoint.logp disasters.logp early_mean.logp late_mean.logp # `Stochastic` objects need to call an internal function to compute their # `logp` attributes, as `rate` needed to call an internal function to # compute its value. Just as we created `rate` by decorating a function # that computes its value, it's possible to create custom `Stochastic` # objects by decorating functions that compute their log-probabilities or # densities. Users are thus not # limited to the set of of statistical distributions provided by PyMC. 
# ### Using Variables as parents of other Variables # # Let's take a closer look at our definition of `rate`: # # ```python # @deterministic(plot=False) # def rate(s=switchpoint, e=early_mean, l=late_mean): # ''' Concatenate Poisson means ''' # out = empty(len(disasters_array)) # out[:s] = e # out[s:] = l # return out # ``` # # The arguments `switchpoint`, `early_mean` and `late_mean` are # `Stochastic` objects, not numbers. If that is so, why aren't errors # raised when we attempt to slice array `out` up to a `Stochastic` object? # # Whenever a variable is used as a parent for a child variable, PyMC # replaces it with its `value` attribute when the child's value or # log-probability is computed. When `rate`'s value is recomputed, # `s.value` is passed to the function as argument `switchpoint`. To see # the values of the parents of `rate` all together, look at # `rate.parents.value`. # ## Fitting the model with MCMC # # PyMC provides several objects that fit probability models (linked collections of variables) like ours. The primary such object, `MCMC`, fits models with a Markov chain Monte Carlo algorithm: from pymc.examples import disaster_model M = MCMC(disaster_model) M M.late_mean # In this case `M` will expose variables `switchpoint`, `early_mean`, `late_mean` and `disasters` as attributes; that is, `M.switchpoint` will be the same object as `disaster_model.switchpoint`. # # To run the sampler, call the MCMC object's `sample()` method with arguments for the number of iterations, burn-in length, and thinning interval (if desired): M.sample(iter=10000, burn=1000) # ### Accessing the samples # # The output of the MCMC algorithm is a `trace`, the sequence of # retained samples for each variable in the model. These traces can be # accessed using the `trace(name, chain=-1)` method. For example: # M.trace('early_mean')[1000:] # The trace slice `[start:stop:step]` works just like the NumPy array # slice. By default, the returned trace array contains the samples from # the last call to `sample`, that is, `chain=-1`, but the trace from # previous sampling runs can be retrieved by specifying the correspondent # chain index. To return the trace from all chains, simply use # `chain=None`. # # ### Sampling output # # You can examine the marginal posterior of any variable by plotting a # histogram of its trace: plt.hist(M.trace('late_mean')[:]) # # PyMC has its own plotting functionality, via the optional `matplotlib` # module as noted in the installation notes. The `Matplot` module includes # a `plot` function that takes the model (or a single parameter) as an # argument: Matplot.plot(M) # The upper left-hand pane of each figure shows the temporal series of the # samples from each parameter, while below is an autocorrelation plot of # the samples. The right-hand pane shows a histogram of the trace. The # trace is useful for evaluating and diagnosing the algorithm's # performance, while the histogram is useful for # visualizing the posterior. # # For a non-graphical summary of the posterior, simply call the `stats` method. M.early_mean.summary() # ### Imputation of Missing Data # # As with most textbook examples, the models we have examined so far # assume that the associated data are complete. That is, there are no # missing values corresponding to any observations in the dataset. # However, many real-world datasets have missing observations, usually due # to some logistical problem during the data collection process. 
The # easiest way of dealing with observations that contain missing values is # simply to exclude them from the analysis. However, this results in loss # of information if an excluded observation contains valid values for # other quantities, and can bias results. An alternative is to impute the # missing values, based on information in the rest of the model. # # For example, consider a survey dataset for some wildlife species: # # Count Site Observer Temperature # ------- ------ ---------- ------------- # 15 1 1 15 # 10 1 2 NA # 6 1 1 11 # # Each row contains the number of individuals seen during the survey, # along with three covariates: the site on which the survey was conducted, # the observer that collected the data, and the temperature during the # survey. If we are interested in modelling, say, population size as a # function of the count and the associated covariates, it is difficult to # accommodate the second observation because the temperature is missing # (perhaps the thermometer was broken that day). Ignoring this observation # will allow us to fit the model, but it wastes information that is # contained in the other covariates. # # In a Bayesian modelling framework, missing data are accommodated simply # by treating them as unknown model parameters. Values for the missing # data $\tilde{y}$ are estimated naturally, using the posterior predictive # distribution: # # $$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$ # # This describes additional data $\tilde{y}$, which may either be # considered unobserved data or potential future observations. We can use # the posterior predictive distribution to model the likely values of # missing data. # # Consider the coal mining disasters data introduced previously. Assume # that two years of data are missing from the time series; we indicate # this in the data array by the use of an arbitrary placeholder value, # `None`: x = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, None, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, None, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) # To estimate these values in PyMC, we generate a *masked array*. These are specialised NumPy arrays that contain a matching True or False value for each element to indicate if that value should be excluded from any computation. Masked arrays can be generated using NumPy's `ma.masked_equal` function: masked_values = np.ma.masked_values(x, value=None) masked_values # This masked array, in turn, can then be passed to one of PyMC's data # stochastic variables, which recognizes the masked array and replaces the # missing values with Stochastic variables of the desired type. For the # coal mining disasters problem, recall that disaster events were modeled # as Poisson variates: disasters = Poisson('disasters', mu=rate, value=masked_values, observed=True) # Here `rate` is an array of means for each year of data, allocated # according to the location of the switchpoint. Each element in # `disasters` is a Poisson Stochastic, irrespective of whether the # observation was missing or not. The difference is that actual # observations are data Stochastics (`observed=True`), while the missing # values are non-data Stochastics. The latter are considered unknown, # rather than fixed, and therefore estimated by the MCMC algorithm, just # as unknown model parameters. 
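# As a small standalone illustration of the masking mechanism (plain NumPy, independent of the model objects above), `ma.masked_equal` hides every element equal to a chosen placeholder such as -999, and reductions then skip the masked entries:

# +
import numpy as np

demo = np.ma.masked_equal(np.array([4, 5, -999, 1, -999, 3]), -999)
print(demo)          # [4 5 -- 1 -- 3]: masked entries display as --
print(demo.mask)     # [False False  True False  True False]
print(demo.mean())   # 3.25, the mean over the four unmasked values only
# -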
# # The entire model looks very similar to the original model: def missing_data_model(): # Switchpoint switch = DiscreteUniform('switch', lower=0, upper=110) # Early mean early_mean = Exponential('early_mean', beta=1) # Late mean late_mean = Exponential('late_mean', beta=1) @deterministic(plot=False) def rate(s=switch, e=early_mean, l=late_mean): """Allocate appropriate mean to time series""" out = np.empty(len(disasters_array)) # Early mean prior to switchpoint out[:s] = e # Late mean following switchpoint out[s:] = l return out masked_values = np.ma.masked_values(x, value=None) # Pass masked array to data stochastic, and it does the right thing disasters = Poisson('disasters', mu=rate, value=masked_values, observed=True) return locals() # Here, we have used the `masked_array` function, rather than # `masked_equal`, and the value -999 as a placeholder for missing data. # The result is the same. M_missing = MCMC(missing_data_model()) M_missing.sample(5000) M_missing.stochastics Matplot.summary_plot(M_missing.disasters) # ## Fine-tuning the MCMC algorithm # # MCMC objects handle individual variables via *step methods*, which # determine how parameters are updated at each step of the MCMC algorithm. # By default, step methods are automatically assigned to variables by # PyMC. To see which step methods $M$ is using, look at its # `step_method_dict` attribute with respect to each parameter: M.step_method_dict # The value of `step_method_dict` corresponding to a particular variable # is a list of the step methods $M$ is using to handle that variable. # # You can force $M$ to use a particular step method by calling # `M.use_step_method` before telling it to sample. The following call will # cause $M$ to handle `late_mean` with a standard `Metropolis` step # method, but with proposal standard deviation equal to $2$: from pymc import Metropolis M.use_step_method(Metropolis, disaster_model.late_mean, proposal_sd=2.) # Another step method class, `AdaptiveMetropolis`, is better at handling # highly-correlated variables. If your model mixes poorly, using # `AdaptiveMetropolis` is a sensible first thing to try. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os,sys import matplotlib.pyplot as plt sys.path.append( os.environ['OACIS_ROOT'] ) import oacis sim = oacis.Simulator.find_by_name("NS_model") host = oacis.Host.find_by_name("localhost") # + import numpy as np rho_values = np.linspace( 0.05, 0.5, 10) base_param = { "l": 200, "v": 5, "p": 0.1, "t_init": 100, "t_measure": 1000 } created_ps_list = [] for rho in rho_values: param = base_param.copy() param['rho'] = rho ps = sim.find_or_create_parameter_set( param ) runs = ps.find_or_create_runs_upto(1, submitted_to=host) created_ps_list.append(ps) len(created_ps_list) # - # an idiom to wait until all the parameter_sets finish w = oacis.OacisWatcher() w.watch_all_ps( created_ps_list, lambda pss: None) w.loop() # + parameter_sets = sorted( created_ps_list, key= lambda ps: ps.v()['rho'] ) x = [] y = [] for ps in parameter_sets: x.append( ps.v()["rho"] ) y.append( ps.average_result("flow")[0] ) plt.xlabel("rho") plt.ylabel("flow") plt.plot(x,y, 'ro') # - # ## Searching an optimum $\rho$ # # It seems that there is an optimal $\rho$ to maximize the traffic flow. Let us search the optimum value iteratively. # Algorithm for searching: # # 1. take the PS having the largest "flow". # 1. 
create two ParameterSets at the centers between its neighboring PSs. # 1. go back to 1 until we have enough resolution. # + def find_nearest_neighbors( ps, input_key ): ps_list = list( ps.parameter_sets_with_different( input_key ) ) idx = ps_list.index( ps ) left_ps = ps_list[idx-1] if idx > 0 else None right_ps = ps_list[idx+1] if idx+1 < len(ps_list) else None return (left_ps, right_ps) def find_optimum_ps( sim, input_key, output_key, base_param): query = { "v.%s"%k:v for k,v in base_param.items() if k != input_key } parameter_sets = sim.parameter_sets().where( query ).to_a() oacis.OacisWatcher.await_all_ps( parameter_sets ) sorted_by_output = sorted( parameter_sets, key=lambda ps: ps.average_result(output_key)[0] ) best_ps = sorted_by_output[-1] return best_ps def create_a_new_ps_in_between( ps1, ps2, input_key ): new_param = ps1.v() new_param[input_key] = (ps1.v()[input_key] + ps2.v()[input_key]) / 2.0 new_ps = sim.find_or_create_parameter_set( new_param ) new_runs = new_ps.find_or_create_runs_upto(1, submitted_to=oacis.Host.find_by_name("localhost") ) return new_ps def create_initial_pss(sim, input_key, base_param, domain=(0.05,0.95)): param1 = base_param.copy() param1[input_key] = domain[0] ps1 = sim.find_or_create_parameter_set( param1 ) ps1.find_or_create_runs_upto(1, submitted_to=oacis.Host.find_by_name("localhost") ) param2 = base_param.copy() param2[input_key] = domain[1] ps2 = sim.find_or_create_parameter_set( param2 ) ps2.find_or_create_runs_upto(1, submitted_to=oacis.Host.find_by_name("localhost") ) return (ps1,ps2) def search_for_optimum( sim, input_key, output_key, base_param, resolution): create_initial_pss(sim, input_key, base_param) best_ps = find_optimum_ps( sim, input_key, output_key, base_param ) left_ps, right_ps = find_nearest_neighbors( best_ps, input_key ) new_ps_list = [] if (left_ps is not None) and abs(left_ps.v()[input_key]-best_ps.v()[input_key]) > resolution: new_ps1 = create_a_new_ps_in_between( left_ps, best_ps, input_key ) new_ps_list.append(new_ps1) if (right_ps is not None) and abs(right_ps.v()[input_key]-best_ps.v()[input_key]) > resolution: new_ps2 = create_a_new_ps_in_between( right_ps, best_ps, input_key ) new_ps_list.append(new_ps2) if len(new_ps_list) > 0: return search_for_optimum( sim, input_key, output_key, base_param, resolution) else: return best_ps sim = oacis.Simulator.find_by_name("NS_model") base_param = { "l": 200, "v": 5, "p": 0.1, "t_init": 100, "t_measure": 1000 } # best_ps = find_optimum_ps( sim, 'rho', 'flow', base_param) # print( best_ps.v() ) # left, right = find_nearest_neighbors( best_ps, 'rho' ) # print( left.v(), right.v() ) # create_a_new_ps_in_between( left, best_ps, 'rho' ) w = oacis.OacisWatcher() best_ps = None def f(): global best_ps best_ps = search_for_optimum( sim, 'rho', 'flow', base_param, 0.005) w.do_async(f) w.loop() best_ps.v() # + def plot_rho_flow_diagram( best_ps ): import matplotlib.pyplot as plt parameter_sets = best_ps.parameter_sets_with_different( 'rho' ) x = [] y = [] for ps in parameter_sets: x.append( ps.v()["rho"] ) y.append( ps.average_result("flow")[0] ) plt.xlabel("rho") plt.ylabel("flow") plt.plot(x,y, 'ro') plot_rho_flow_diagram(best_ps) # + from functools import partial sim = oacis.Simulator.find_by_name("NS_model") p_list = [0.1, 0.3, 0.5] v_list = [3,4,5,6,7] w = oacis.OacisWatcher() best_ps_dict = {} def best_rho(v, p): base_param = { "l": 200, "v": v, "p": p, "t_init": 100, "t_measure": 1000 } best_ps = search_for_optimum( sim, 'rho', 'flow', base_param, 0.005) best_ps_dict[(p,v)] = 
best_ps.v()['rho'] for p in p_list: for v in v_list: f = partial( best_rho, v, p ) w.do_async(f) w.loop() best_ps_dict # + # After searching we plot the optimal rho for each v and p. plt.figure() plt.xlabel('v') plt.ylabel('Optimum rho') for p in p_list: opt_rho_list = [] for v in v_list: opt_rho = best_ps_dict[ (p,v) ] opt_rho_list.append( opt_rho ) plt.plot( v_list, opt_rho_list, 'o-', label="p=%f"%p ) plt.legend(loc='best') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Smith-Wilson Model Overview # # This Jupyter notebook shows you how to load the **smithwilson** model included in the **smithwilson** project. It also walks you through steps to create the same model from scratch. # # The Smith-Wilson model calculates extraporated interest rates using the Smith-Wilson method. # # The Smith-Wilson method is used for extraporating risk-free interest rates under the Solvency II framework. The method is described in details in *QIS 5 Risk-free interest rates – Extrapolation method*, [a technical paper](https://eiopa.europa.eu/Publications/QIS/ceiops-paper-extrapolation-risk-free-rates_en-20100802.pdf) issued by CEIOPS(the predecessor of EIOPA). The technical paper is available on [EIOPA's web site](https://eiopa.europa.eu/publications/qis/insurance/insurance-quantitative-impact-study-5/background-documents). # Formulas and variables in this notebook are named consistently with the mathmatical symbols in the technical paper. # # This project is inspired by a pure Python implementation of Smith-Wilson # yield curve fitting algorithm created by . # His original work can be found [on his github page](https://github.com/simicd/smith-wilson-py). # # [the technical paper]: https://eiopa.europa.eu/Publications/QIS/ceiops-paper-extrapolation-risk-free-rates_en-20100802.pdf # ## About this notebook # # This notebook is included in **lifelib** package as part of the **smithwilson** project. # # If you're viewing this page as a static HTML page on https://lifelib.io, the same contents are also available [here on binder] as Jupyter notebook executable online (it may take a while to load). To run this notebook and get all the outputs below, Go to the **Cell** menu above, and then click **Run All**. # # [here on binder]: https://mybinder.org/v2/gh/fumitoh/lifelib/binder?filepath=lifelib%2Fprojects%2Fsmithwilson%2Fsmithwilson.ipynb # ## Reading in the complete model # # The complete model is included under the **smithwilson** project in lifelib package. Load a model from `model` folder in your # project folder by `modelx.read_model` function. # The example blow shows how to do it. # Note that you need two backslashes to separate folders: import modelx as mx m = mx.read_model("model") # Need 2 backslashes as a separator on Windows e.g. "C:\\Users\\fumito\\model" # The model has only one space, named `SmithWilson`. # The space contains a few cells: s = m.SmithWilson s.cells # It also contains references (refs), # such as `spot_rates`, `N`, `UFR` and `alpha`. # By default, these values are set equal to the values used in Dejan's # reference model. # The original source of the input data is Switzerland EIOPA spot rates with LLP 25 years available from the following URL. 
# # Source: https://eiopa.europa.eu/Publications/Standards/EIOPA_RFR_20190531.zip; EIOPA_RFR_20190531_Term_Structures.xlsx; Tab: RFR_spot_no_VA # # s.N s.UFR s.alpha s.spot_rates # `R` calculates the extrapoted spot rate for a give time index $i$. [s.R[i] for i in range(10, 151, 10)] # For $i = 1,\dots,N$, `R[i]` is the same as `spot_rates[i-1]`. [s.spot_rates[i-1] for i in range(10, 26, 5)] # ## Building the Smith-Wilson model from scratch # # We now try to create the **smithwilson** model from scratch. # The model we create is essentially the same as the model included in the **smithwilson** project, excpt for docstrings. # # Below are the steps to create the model. # 1. Create a model and space. # 2. Input values to as *references*. # 3. Define cells. # 4. Get the results. # 5. Save the model. # ### 1. Create a model and space # # First, we create an empty model named `smithwilson2`, and also an empty space named `SmithWilson` in the model. # The following statement creates the model and space, and assign the space to a name `s2`. s2 = mx.new_model(name="smithwilson2").new_space(name="SmithWilson") # ### 2. Input values to as *references* # # In this step, we create *references* in the *SmithWilson* space, and assign input values to the *references*. # We will create cells and define their formulas in the sapce in the next step, and those *references* are referred by the formulas of the cells. # # The values are taken from https://github.com/simicd/smith-wilson-py/blob/master/main.py # + # Annual compound spot rates for time to maturities from 1 to 25 years s2.spot_rates = [ -0.00803, -0.00814, -0.00778, -0.00725, -0.00652, -0.00565, -0.0048, -0.00391, -0.00313, -0.00214, -0.0014, -0.00067, -0.00008, 0.00051, 0.00108, 0.00157, 0.00197, 0.00228, 0.0025, 0.00264, 0.00271, 0.00274, 0.0028, 0.00291, 0.00309] s2.N = 25 # Number of time to maturities. s2.alpha = 0.128562 # Alpha parameter in the Smith-Wilson functions ufr = 0.029 # Annual compound from math import log s2.UFR = log(1 + ufr) # Continuous compound UFR, 0.028587456851912472 # - # You also nee to import `log` and `exp` from `math` module for later use. # We also use numpy later, so import `numpy` as `np`. # These functions and module need to be accessible from cells in `SmithWilson` space, # so assign them to refs. # + from math import log, exp import numpy as np s2.log = log s2.exp = exp s2.np = np # - # ### 3. Define cells # # In the previous step, we have assigned all the necessary inputs in the *SmithWilson* space. In this step we move on to defining cells. # # We use `defcells` decorator to define cells from Python functions. `defcells` decorator creates cells in the *current* space, so confirm the *SmithWilson* space we just created is set to the *current* space by the following code. mx.cur_space() # The names of the cells below are set consistent with the mathmatical symbols in the technical paper. # # * `u(i)`: Time at each `i` in years. Time steps can be uneven. 
For the maturities of the zero coupon bonds with known prices $u_i$ # * `m(i)`: The market prices of the zero coupon bonds, $m_i$ # * `mu(i)`: Ultimate Forward Rate (UFR) discount factors, $\mu_i$ # * `W(i, j)`: The Wilson functions, $W(t_i, u_j)$ @mx.defcells def u(i): """Time to maturities""" return i # + @mx.defcells def m(i): """Observed zero-coupon bond prices""" return (1 + spot_rates[i-1]) ** (-u[i]) @mx.defcells def mu(i): """Ultimate Forward Rate (UFR) discount factors""" return exp(-UFR * u[i]) # - @mx.defcells def W(i, j): """The Wilson functions""" t = u[i] uj = u[j] return exp(-UFR * (t+uj)) * ( alpha * min(t, uj) - 0.5 * exp(-alpha * max(t, uj)) * ( exp(alpha*min(t, uj)) - exp(-alpha*min(t, uj)))) # We want to use Numpy's vector and matrix operations to solve for $\zeta$, # so we create a vector or matrix version of cells for each of `m`, `mu`, `W`. # These cells have no parameter and return numpy arrays. # + @mx.defcells def m_vector(): return np.array([m(i) for i in range(1, N+1)]) @mx.defcells def mu_vector(): return np.array([mu(i) for i in range(1, N+1)]) @mx.defcells def W_matrix(): return np.array( [[W(i, j) for j in range(1, N+1)] for i in range(1, N+1)] ) # - # `zeta_vector` cells carries out the matrix-vector calcuculation: $\zeta = \bf W^{-1}(\bf m - {\mu})$. # # `zeta` extracts from an element from `zeta_vector` for each `i` # + @mx.defcells def zeta_vector(): return np.linalg.inv(W_matrix()) @ (m_vector() - mu_vector()) @mx.defcells def zeta(i): return zeta_vector()[i-1] # - # `P(i)` cells calculates bond prices from `mu`, `zeta` and `W`. The values of `P(i)` should be the same as those of `m(i)` for `i=1,...,N` . # # `R(i)` are the extaporated annual compound rates. The values of `R(i)` should be the same as those of `spot_rates[i-1]` for `i=1,...,N`. # + @mx.defcells def P(i): """Zero-coupon bond prices calculated by Smith-Wilson method.""" return mu(i) + sum(zeta(j) * W(i, j) for j in range(1, N+1)) @mx.defcells def R(i): """Extrapolated rates""" return (1 / P(i)) ** (1 / u(i)) - 1 # - # ### 4. Get the results # You can check that the cells you define above exists in the `SmithWilson` space by getting the space's `cells` attribute. s2.cells # `R` cells calculates or holds the extraporated spot rates. You can see that for `i=1,...,25`, the values are the same ase the `sport_rates`. # # The code below outputs `R(i)` for `i=10, 15, 20, ..., 100` [s2.R[i] for i in range(10, 101, 5)] # ### 5. Save the model # You can write the model by `write_model`. The model is written to files under the folder you specify as the second paramter. Later you can read the model by `read_model`. # # ```python # mx.write_model(mx.cur_model(), "your_folder") # # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Py3 Jhub # language: python # name: py3-jhub # --- # # diag_btrop_fluxeb # # Diagnose barotropic fluxes and impact of tidal potential based on TPXO8: # * compute fluxes (to be compared with simulations) # * compute fluxes across boundaries (is it positiv in TPXO ?) # * compute tidal potential work $$ (cf. 
croco) # # *(NJAL June 25th 2018)* # + # %matplotlib inline from matplotlib import pyplot as plt import numpy as np from netCDF4 import Dataset import scipy.interpolate as itp krypton = "/data0/project/vortex/lahaye/" alpha = "/net/alpha/exports/sciences/data/REFERENCE_DATA/TIDES/" ruchba = krypton + "local_ruchba/" # + rho0, grav = 1.025, 9.81 Erad = 6371e3 # convert complex to amplitude+phase (radians) def cmp2ap(re,im): return np.abs(re+1j*im),np.arctan2(-im,re) # interpolate from u-, v- points to z-points def u2rho(lon_u, data, lon_r): return itp.pchip_interpolate(lon_u, data, lon_r, axis=-1) def v2rho(lat_v, data, lat_r): return itp.pchip_interpolate(lat_v, data, lat_r, axis=-2) def rotuv(uu, vv, ang): return uu*np.cos(ang) + vv*np.sin(ang), -uu*np.sin(ang) + vv*np.cos(ang) def div_on_grid(uu, vv, pm, pn): dudx = (uu[:,1:] - uu[:,:-1])*0.5*(pm[:,1:] + pm[:,:-1]) dudx = 0.5 * np.pad( dudx[:,:-1]+dudx[:,1:], ((0,0),(1,1)), mode="edge" ) dvdy = (vv[1:,:] - vv[:-1,:])*0.5*(pn[1:,:] + pn[:-1,:]) dvdy = 0.5 * np.pad( dvdy[1:,:]+dvdy[:-1,:], ((1,1),(0,0)), mode="edge" ) return dudx + dvdy def grad_on_grid(uu, pm, pn): dudx = (uu[:,1:] - uu[:,:-1])*0.5*(pm[:,1:] + pm[:,:-1]) dudx = 0.5 * np.pad( dudx[:,:-1]+dudx[:,1:], ((0,0),(1,1)), mode="edge" ) dvdy = (uu[1:,:] - uu[:-1,:])*0.5*(pn[1:,:] + pn[:-1,:]) dvdy = 0.5 * np.pad( dvdy[1:,:]+dvdy[:-1,:], ((1,1),(0,0)), mode="edge" ) return dudx, dvdy # + # load LUCKY's grid ncgrd = Dataset(krypton+'lucky_corgrd.nc','r') h = ncgrd.variables['h'][:] lon_h = ncgrd.variables['lon_rho'][:] lat_h = ncgrd.variables['lat_rho'][:] mask = ncgrd.variables['mask_rho'][:] grang = ncgrd.variables['angle'][:] pm = ncgrd.variables['pm'][:] pn = ncgrd.variables['pn'][:] ncgrd.close() # load tidal potential from LUCKY's forcing file ncfrc = Dataset(ruchba+'prep_LUCKYTO/luckym2_frc.nc') potamp = ncfrc.variables['tide_Pamp'][0,:,:] potpha = np.deg2rad(ncfrc.variables['tide_Pphase'][0,:,:]) ncfrc.close() # + # load tpxo8 data : dealing with transport ! 
(warning) nc = Dataset(alpha+"TPXO8/uv.m2_tpxo8_atlas_30c_v1.nc",'r') ncvar = nc.variables lonu = ncvar['lon_u'][:] indxu, = np.where( (lonu>=(lon_h%360).min()) & (lonu<=(lon_h%360).max()) ) lonu = lonu[indxu] latu = ncvar['lat_u'][:] indyu, = np.where( (latu>=lat_h.min()) & (latu<=lat_h.max()) ) latu = latu[indyu] lonv = ncvar['lon_v'][:] indxv, = np.where( (lonv>=(lon_h%360).min()) & (lonv<=(lon_h%360).max()) ) lonv = lonv[indxv] latv = ncvar['lat_v'][:] indyv, = np.where( (latv>=lat_h.min()) & (latv<=lat_h.max()) ) latv = latv[indyv] ure = ncvar['uRe'][indxu,indyu].T*1e-4 # cm²/s to m²/s vre = ncvar['vRe'][indxv,indyv].T*1e-4 uim = ncvar['uIm'][indxu,indyu].T*1e-4 vim = ncvar['vIm'][indxv,indyv].T*1e-4 nc.close() # now load SSH nc = Dataset(alpha+"TPXO8/hf.m2_tpxo8_atlas_30c_v1.nc",'r') ncvar = nc.variables lonr = ncvar['lon_z'][:] indx, = np.where( (lonr>=(lon_h%360).min()) & (lonr<=(lon_h%360).max()) ) lonr = lonr[indx] latr = ncvar['lat_z'][:] indy, = np.where( (latr>=lat_h.min()) & (latr<=lat_h.max()) ) latr = latr[indy] zre = ncvar['hRe'][indx,indy].T*1e-3 # mm to m zim = ncvar['hIm'][indx,indy].T*1e-3 nc.close() nc = Dataset(alpha+"TPXO8/grid_tpxo8_atlas_30c_v1.nc",'r') topox = nc.variables['hz'][indx,indy].T nc.close() plt.pcolormesh(lonr,latr, topox, cmap="gist_earth_r") plt.colorbar() # + # convert Re, Im to phase, amplitude, interpolate u, v and z-grid and plot fluxes cmap = None pumin = -50 pumax = 200 uamp, upha = cmp2ap(ure, uim) vamp, vpha = cmp2ap(vre, vim) zamp, zpha = cmp2ap(zre, zim) puamp = rho0*grav*u2rho(lonu,uamp,lonr)*zamp*np.cos(u2rho(lonu,upha,lonr)-zpha)/2. pvamp = rho0*grav*v2rho(latv,vamp,latr)*zamp*np.cos(v2rho(latv,vpha,latr)-zpha)/2. fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(9,4)) hpc = axs[0].pcolormesh(lonr-360, latr, puamp, cmap=cmap, vmin=pumin, vmax=pumax) axs[1].pcolormesh(lonr-360, latr, pvamp, cmap=cmap, vmin=pumin, vmax=pumax) axs[0].plot(np.r_[lon_h[0,:],lon_h[:,-1],lon_h[-1,::-1],lon_h[::-1,0]] \ , np.r_[lat_h[0,:],lat_h[:,-1],lat_h[-1,::-1],lat_h[::-1,0]], color="brown") axs[1].plot(np.r_[lon_h[0,:],lon_h[:,-1],lon_h[-1,::-1],lon_h[::-1,0]] \ , np.r_[lat_h[0,:],lat_h[:,-1],lat_h[-1,::-1],lat_h[::-1,0]], color="brown") fig.subplots_adjust(right=0.87) cbar_ax = fig.add_axes([0.9, 0.2, 0.015, 0.6]) fig.colorbar(hpc, cax=cbar_ax, extend="both") # What is going out ? dy = np.deg2rad(np.diff(latr).mean())*Erad dxb = np.deg2rad(np.diff(lonr).mean())*Erad*np.cos(np.deg2rad(latr[0])) dxt = np.deg2rad(np.diff(lonr).mean())*Erad*np.cos(np.deg2rad(latr[-1])) Surf = dy*len(latr)*(dxb+dxt)/2.*len(lonr) # this seems wrong pvout = ( (puamp[:,-1]-puamp[:,0]).sum()*dy + (pvamp[-1,:]*dxt-pvamp[0,:]*dxb).sum() ) / Surf print(pvout*1e6) fluxamp_lonlat = np.sqrt(puamp**2 + pvamp**2) # + ### Interpolate fields on Lucky's grid # load tpxo8 data : dealing with transport ! 
(warning) nc = Dataset(alpha+"TPXO8/uv.m2_tpxo8_atlas_30c_v1.nc",'r') ncvar = nc.variables lonu = ncvar['lon_u'][:] indxu, = np.where( (lonu>=(lon_h%360).min()) & (lonu<=(lon_h%360).max()) ) indxu = slice(indxu[0]-1, indxu[-1]+1) lonu = lonu[indxu] latu = ncvar['lat_u'][:] indyu, = np.where( (latu>=lat_h.min()) & (latu<=lat_h.max()) ) indyu = slice(indyu[0]-1, indyu[-1]+1) latu = latu[indyu] lonv = ncvar['lon_v'][:] indxv, = np.where( (lonv>=(lon_h%360).min()) & (lonv<=(lon_h%360).max()) ) indxv = slice(indxv[0]-1, indxv[-1]+1) lonv = lonv[indxv] latv = ncvar['lat_v'][:] indyv, = np.where( (latv>=lat_h.min()) & (latv<=lat_h.max()) ) indyv = slice(indyv[0]-1, indyv[-1]+1) latv = latv[indyv] ure = ncvar['uRe'][indxu,indyu]*1e-4 # cm²/s to m²/s ure = itp.RectBivariateSpline(latu, lonu-360, ure.T).ev(lat_h,lon_h) uim = ncvar['uIm'][indxu,indyu]*1e-4 uim = itp.RectBivariateSpline(latu, lonu-360, uim.T).ev(lat_h,lon_h) vre = ncvar['vRe'][indxv,indyv]*1e-4 vre = itp.RectBivariateSpline(latv, lonv-360, vre.T).ev(lat_h,lon_h) vim = ncvar['vIm'][indxv,indyv]*1e-4 vim = itp.RectBivariateSpline(latv, lonv-360, vim.T).ev(lat_h,lon_h) nc.close() # SSH nc = Dataset(alpha+"TPXO8/hf.m2_tpxo8_atlas_30c_v1.nc",'r') ncvar = nc.variables lonr = ncvar['lon_z'][:] indx, = np.where( (lonr>=(lon_h%360).min()) & (lonr<=(lon_h%360).max()) ) indx = slice(indx[0]-1, indx[-1]+1) lonr = lonr[indx] latr = ncvar['lat_z'][:] indy, = np.where( (latr>=lat_h.min()) & (latr<=lat_h.max()) ) indy = slice(indy[0]-1, indy[-1]+1) latr = latr[indy] zre = ncvar['hRe'][indx,indy]*1e-3 # mm to m zre = itp.RectBivariateSpline(latr, lonr-360, zre.T).ev(lat_h,lon_h) zim = ncvar['hIm'][indx,indy]*1e-3 zim = itp.RectBivariateSpline(latr, lonr-360, zim.T).ev(lat_h,lon_h) nc.close() # bathymetry nc = Dataset(alpha+"TPXO8/grid_tpxo8_atlas_30c_v1.nc",'r') lonr = nc.variables['lon_z'][:] indx, = np.where( (lonr>=(lon_h%360).min()) & (lonr<=(lon_h%360).max()) ) indx = slice(indx[0]-1, indx[-1]+1) lonr = lonr[indx] latr = nc.variables['lat_z'][:] indy, = np.where( (latr>=lat_h.min()) & (latr<=lat_h.max()) ) indy = slice(indy[0]-1, indy[-1]+1) latr = latr[indy] topox = nc.variables['hz'][indx,indy] topox = itp.RectBivariateSpline(latr, lonr-360, topox.T).ev(lat_h,lon_h) nc.close() # apply rotation to transport field ure, vre = rotuv(ure, vre, grang) uim, vim = rotuv(uim, vim, grang) plt.figure() plt.pcolormesh(lon_h, lat_h, topox, cmap="gist_earth_r") plt.colorbar() # + # compute modal flux and plot it, compute what is going out # convert Re, Im to phase, amplitude, interpolate u, v and z-grid and plot fluxes cmap = None pumin = -50 pumax = 200 uamp, upha = cmp2ap(ure, uim) vamp, vpha = cmp2ap(vre, vim) zamp, zpha = cmp2ap(zre, zim) puamp = rho0*grav*uamp*zamp*np.cos(upha-zpha)/2. pvamp = rho0*grav*vamp*zamp*np.cos(vpha-zpha)/2. fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(9,4)) hpc = axs[0].pcolormesh(lon_h, lat_h, puamp, cmap=cmap, vmin=pumin, vmax=pumax) axs[1].pcolormesh(lon_h, lat_h, pvamp, cmap=cmap, vmin=pumin, vmax=pumax) fig.subplots_adjust(right=0.87) cbar_ax = fig.add_axes([0.9, 0.2, 0.015, 0.6]) fig.colorbar(hpc, cax=cbar_ax, extend="both") # What is going out ? 
this looks like a better estimate Surf = (1./pm/pn).sum() pvout = ( (puamp[:,-1]/pn[:,-1]-puamp[:,0]/pn[:,0]).sum() \ + (pvamp[-1,:]/pm[-1,:]-pvamp[0,:]/pm[0,:]).sum() ) / Surf print(pvout*1e6, pvout*Surf/1e6) fluxamp_curv = np.sqrt(puamp**2 + pvamp**2) # - fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(9,4)) axs[0].pcolormesh(lonr-360, latr, fluxamp_lonlat, vmin=-50, vmax=200) axs[1].pcolormesh(lon_h, lat_h, fluxamp_curv, vmin=-50, vmax=200) # + # compute and plot divergence of barotropic flux divf = div_on_grid(puamp, pvamp, pm, pn)*1e3 # W/m² dx, dy = (1./pm).mean(), (1./pn).mean() dibf = ( np.gradient(puamp, dx, axis=-1) + np.gradient(pvamp, dy, axis=-2) ) * 1e3 fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(9,4)) hpc = axs[0].pcolormesh(lon_h, lat_h, divf, cmap='seismic', vmin=-0.5, vmax=0.5) axs[1].pcolormesh(lon_h, lat_h, dibf, cmap='seismic', vmin=-0.5, vmax=0.5) fig.subplots_adjust(right=0.87) cbar_ax = fig.add_axes([0.9, 0.2, 0.015, 0.6]) fig.colorbar(hpc, cax=cbar_ax, extend="both") print(np.nanmean(divf)*1e3, np.nanmean(dibf)*1e3) # + # compute and plot tidal potential work term dpamp_dx, dpamp_dy = grad_on_grid(potamp, pm, pn) dphat_dx, dphat_dy = grad_on_grid(potpha, pm, pn) fpotx = uamp*( dpamp_dx*np.cos(upha-potpha) - potamp*dphat_dx*np.sin(upha-potpha) ) fpoty = vamp*( dpamp_dy*np.cos(vpha-potpha) - potamp*dphat_dy*np.sin(vpha-potpha) ) fpot = rho0*grav*(fpotx + fpoty)*1e6 # mW/m² plt.pcolormesh(fpot, vmin=-20, vmax=20, cmap='seismic') plt.colorbar() print(np.nanmean(fpot), np.nanmean(fpot)*Surf/1e12) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/env python3 # Python Stream Deck Library # Released under the MIT license # # dean [at] fourwalledcubicle [dot] com # www.fourwalledcubicle.com # # Example script that prints out information about any discovered StreamDeck # devices to the console. 
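# Before the device-information example itself, a minimal hedged sketch of a
# key-event callback, added to make the `set_key_callback()` experiments in
# the cells further down easier to follow. In the python-elgato-streamdeck
# library used here, the registered callback receives `(deck, key, state)`,
# where `state` is True on key press and False on release (treat the exact
# signature as an assumption to verify against the library's documentation).
def example_key_callback(deck, key, state):
    # Report which key changed and whether it was pressed or released.
    action = "pressed" if state else "released"
    print("Deck {} key {} {}".format(deck.id(), key, action))

# Typical (hypothetical) usage with a deck obtained from
# DeviceManager().enumerate(), as in the example below:
#   deck.open()
#   deck.set_key_callback(example_key_callback)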
from StreamDeck.DeviceManager import DeviceManager # Prints diagnostic information about a given StreamDeck def print_deck_info(index, deck): image_format = deck.key_image_format() flip_description = { (False, False): "not mirrored", (True, False): "mirrored horizontally", (False, True): "mirrored vertically", (True, True): "mirrored horizontally/vertically", } print("Deck {} - {}.".format(index, deck.deck_type()), flush=True) print("\t - ID: {}".format(deck.id()), flush=True) print("\t - Serial: '{}'".format(deck.get_serial_number()), flush=True) print("\t - Firmware Version: '{}'".format(deck.get_firmware_version()), flush=True) print("\t - Key Count: {} ({}x{} grid)".format( deck.key_count(), deck.key_layout()[0], deck.key_layout()[1]), flush=True) print("\t - Key Images: {}x{} pixels, {} format, rotated {} degrees, {}".format( image_format['size'][0], image_format['size'][1], image_format['format'], image_format['rotation'], flip_description[image_format['flip']]), flush=True) if __name__ == "__main__": streamdecks = DeviceManager().enumerate() print("Found {} Stream Deck(s).\n".format(len(streamdecks))) for index, deck in enumerate(streamdecks): deck.open() deck.reset() print_deck_info(index, deck) deck.close() # + deck = streamdecks[0] deck.close() deck.open() # + dir(deck) def my_key_callback(*args, **kwargs): print(args) print(kwargs) deck.set_key_callback(my_key_callback) # + from subprocess import run def my_callback(*args, **kwargs): print("hi") from StreamDeck.DeviceManager import DeviceManager streamdecks = DeviceManager().enumerate() deck = streamdecks[0] if not deck.connected(): deck.connect() deck.set_key_callback(my_callback) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + slideshow={"slide_type": "skip"} # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from numpy import cos, exp, pi, sin, sqrt # + [markdown] slideshow={"slide_type": "slide"} # # Constant Acceleration # + [markdown] slideshow={"slide_type": "slide"} # ## Define the Dynamical System # - m = 1.00 k = 4*pi*pi wn = 2*pi T = 1.0 z = 0.02 wd = wn*sqrt(1-z*z) c = 2*z*wn*m # + [markdown] slideshow={"slide_type": "slide"} # ### Define the Loading # # Our load is # # $$p(t) = 1 \text{N} \times \begin{cases}\sin^2\frac{\omega_n}2t&0 \le t \le 5\\0&\text{otherwise}\end{cases}$$ # + NSTEPS = 200 # steps per second h = 1.0 / NSTEPS def load(t): return np.where(t<0, 0, np.where(t<5, sin(0.5*wn*t)**2, 0)) t = np.linspace(-1, 6, 7*NSTEPS+1) plt.plot(t, load(t)) plt.ylim((-0.05, 1.05)); # + [markdown] slideshow={"slide_type": "slide"} # ### Numerical Constants # # We want to compute, for each step # # $$\Delta x = \frac{\Delta p + a^\star a_0 + v^\star v_0}{k^\star}$$ # # where we see the constants # # $$a^\star = 2m, \quad v^\star = 2c + \frac{4m}{h},$$ # $$ k^\star = k + \frac{2c}{h} + \frac{4m}{h^2}.$$ # - kstar = k + 2*c/h + 4*m/h/h astar = 2*m vstar = 2*c + 4*m/h # + [markdown] slideshow={"slide_type": "slide"} # ### Vectorize the time and the load # # We want to compute the response up to 8 seconds # - t = np.linspace(0, 8+h, NSTEPS*8+1) P = load(t) # + [markdown] slideshow={"slide_type": "slide"} # ## Integration # 1. Prepare containers # 2. write initial conditions # 3. 
loop on load and load increments # - save initial conditions (`x0`, `v0`, `a0`), # - compute the _corrected_load increment, # - compute `dx` and `dy`, # - compute the vaues of `x0` and `y0` for the beginning of the next step # - compute `a0` at the beginning of the next step. # 4. save the results for the last step (they are usually saved at the beginning of the loop, but here we need to special case, why?) and vectorize the whole of the results # + x, v, a = [], [], [] x0, v0 = 0.0, 0.0 a0 =(P[0]-k*x0-c*v0)/m for p0, p1 in zip(P[:-1], P[+1:]): x.append(x0), v.append(v0), a.append(a0) dx = ((p1-p0) + astar*a0 + vstar*v0)/kstar dv = 2*(dx/h-v0) x0, v0 = x0+dx, v0+dv a0 = (p1 - k*x0 - c*v0)/m x.append(x0), v.append(v0), a.append(a0) x, v = np.array(x), np.array(v) # + [markdown] slideshow={"slide_type": "slide"} # ## Results # + [markdown] slideshow={"slide_type": "slide"} # ### The response # - plt.plot(t, x); # + [markdown] slideshow={"slide_type": "slide"} # ### Comparison # # Using black magic it is possible to divine the analytical expression of the response during the forced phase, # # $$x(t) = \frac1{2k}\left(\left(\frac{1-2\zeta^2}{2\zeta\sqrt{1-\zeta^2}}\sin\omega_Dt-\cos\omega_Dt\right)\exp(-\zeta\omega_nt)+1-\frac{1}{2\zeta}\sin\omega_nt\right), \qquad 0\le t \le 5.$$ # + [markdown] slideshow={"slide_type": "fragment"} # and hence plot a comparison within the exact response and the (downsampled) numerical approximation, in the range of validity of the exact response. # + slideshow={"slide_type": "subslide"} xe = (((1 - 2*z**2)*sin(wd*t) / (2*z*sqrt(1-z*z)) - cos(wd*t)) *exp(-z*wn*t) + 1. - sin(wn*t)/(2*z))/2/k plt.plot(t[:1001], xe[:1001], lw=2) plt.plot(t[:1001:10], x[:1001:10], 'ko'); # + [markdown] slideshow={"slide_type": "slide"} # Eventually we plot the difference between exact and approximate response, # - plt.plot(t[:1001], x[:1001]-xe[:1001]); / --- / jupyter: / jupytext: / text_representation: / extension: .q / format_name: light / format_version: '1.5' / jupytext_version: 1.14.4 / kernelspec: / display_name: xtidb / language: mysql / name: xtidb / --- / # XEUS-TiDB / xeus-tidb is a simple yet powerful Jupyter kernel for TiDB. / / It allows the user to use the complete TiDB syntax as well as some extra operations such as opening or closing a database connection, or visualizing the data in different ways using Jupyter magics. / ## Why we need it? 
/ * TiDB 4.0: An Elastic, Real-Time HTAP Database / * Jupyter is the de facto IDE for scientific computing community / * SQL is the common language for all developers / ## Use cases / * Interactive documentation / * Development / Operational tools / * Data analysis / ## Demo / / Use Chinook dataset as an example, it's data model as follows: (credits [rpubs](https://rpubs.com/enext777/636199)) / ![chinook-models](chinook-models.jpg) / ### SQL Clients / #### Connect to TiDB / / Using `%LOAD` to connect to the instance %LOAD tidb user=root host=127.0.0.1 port=4000 db=Chinook / #### SQL commands / / Then we can use any SQL commands supportted by TiDB show databases; select * from mysql.tidb order by VARIABLE_NAME; / A more complex SQL to get the top 10 sales artists select at.Name as name, sum(il.quantity) as total_quantity, sum(il.quantity*il.unitPrice) as total_sales from track t inner join invoiceLine il on il.TrackId=t.TrackId inner join album a on a.AlbumId=t.AlbumId inner join artist at on at.ArtistId=a.ArtistId group by 1 order by total_sales desc limit 10; / As we can see, it's a fully functional SQL clients, and we can go beyond that! / ### Data Visualization / / Most data scientists use Jupyter and Python kernel to do data analyzing and visualizing. The main logic is to query data from databases and using matplotlib to do visualization, which full of glue code. / / With xeus-tidb, we can easily achive this goal, and the code is more compact and maintainable. We use [vega-lite](https://vega.github.io/vega-lite/examples/) as our graphing tools. According to it's official website: / > Vega-Lite is a high-level grammar of interactive graphics. It provides a concise, declarative JSON syntax to create an expressive range of visualizations for data analysis and presentation. / / And it had been bundled with JupyterLab by default. In xeus-tidb, we use the `%VEGA-LITE` magic code followed by the spec file which describe the graphing properties, and then the SQL command which get the data we want to plot. Quite simple, right? Let's go through some examples. / #### line chart / / A line chart gives us a clear visualization of where the price of a product has traveled over a given time period. In this example we can see the sales history by countries. %VEGA-LITE specs/line_color.vl.json SELECT EXTRACT(YEAR FROM invoiceDate) as date, BillingCountry as symbol, sum(total) as price From Invoice where BillingCountry in ("USA", "Canada", "Germany", "Brazil", "France") group by date, symbol / #### bar chart / / We use bar chart to visualizing a discrete, categorical data attribute. Let's see which genre of music is mostly sold in America. %VEGA-LITE specs/layer_bar_labels.vl.json SELECT g.Name as a, SUM(il.Quantity) as b, ROUND((CAST(SUM(il.Quantity) as float) / (SELECT SUM(Quantity) total FROM Chinook.InvoiceLine il INNER JOIN Chinook.Invoice i ON i.InvoiceId = il.InvoiceId WHERE i.BillingCountry = 'USA')) * 100, 2) percent_sold FROM Chinook.InvoiceLine il INNER JOIN Chinook.Invoice i ON i.InvoiceId = il.InvoiceId INNER JOIN Chinook.Track t ON il.TrackId = t.TrackId INNER JOIN Chinook.Genre g ON t.GenreId = g.GenreId WHERE i.BillingCountry = 'USA' GROUP BY 1 ORDER BY 2 DESC LIMIT 10 / #### pie chart / We can also use pie chart for this. 
/ + %VEGA-LITE specs/arc_pie.vl.json SELECT g.Name as category, SUM(il.Quantity) as b, ROUND((CAST(SUM(il.Quantity) as float) / (SELECT SUM(Quantity) total FROM Chinook.InvoiceLine il INNER JOIN Chinook.Invoice i ON i.InvoiceId = il.InvoiceId WHERE i.BillingCountry = 'USA')) * 100, 2) value FROM Chinook.InvoiceLine il INNER JOIN Chinook.Invoice i ON i.InvoiceId = il.InvoiceId INNER JOIN Chinook.Track t ON il.TrackId = t.TrackId INNER JOIN Chinook.Genre g ON t.GenreId = g.GenreId WHERE i.BillingCountry = 'USA' GROUP BY 1 ORDER BY 2 DESC LIMIT 10 / - / #### scatterplot / / Please notice that above examples directly use the specs from vega-lite example gallery without any modification, you can just find the spec you want and use it in your code. / / And as the specs are just plain json files, it's easy to adapt to our needs. Let's first use scatterplot as an example to show the relation between track numbers and sales %VEGA-LITE colored_scatterplot.vl.json select g.name name, sum(i.total) sales, count(distinct t.trackId) trackNo from track t inner join genre g on g.genreId=t.genreId inner join invoiceLine il on il.trackId=t.trackId inner join invoice i on i.invoiceId=il.invoiceLineId group by g.name order by sales desc limit 10; / #### heatmap / Another example using heatmap to show which genre is most popular in each country. %VEGA-LITE rect_category_heatmap.vl.json select i.BillingCountry as Country, g.name as Genre, count(1) as c from invoice i inner join InvoiceLine il on i.InvoiceId=il.InvoiceId inner join track t on t.TrackId=il.TrackId inner join genre g on g.GenreId=t.GenreId group by 1,2 / With all of this, even a PM without any programming backgrounds can do data visualization by themselves, as long as they know SQL commands. / ## The Implementation / https://github.com/wangfenjin/xeus-tidb / * Jupyter community (you can also use xeus-tidb in vscode) / * jupyter-xeus: a framework for developing Jupyter kernel / * soci-mysql: mysql client library / * vega / vega-lite: a high-level grammar of interactive graphics / ## The Future / * Integrate into TiUP? So after `tiup playground`, we can setup a jupyter server and some default notebooks to let users learn and play with the data. / * Build a TiDB cluster operational runbook? So we can help DBA's better manage the TiDB cluster, and directly do it in Jupyter. / * Implement more magic commands such as integrate with machine learning algorithms? So we can further lower the bar of being a data scientist. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import nltk from pandarallel import pandarallel import csv import time from IPython.display import clear_output import logging import boto3 from botocore.exceptions import ClientError from tqdm.notebook import tqdm pandarallel.initialize(progress_bar=True) nltk.download('punkt') # + def upload_file(file_name, bucket, object_name=None): """Upload a file to an S3 bucket :param file_name: File to upload :param bucket: Bucket to upload to :param object_name: S3 object name. 
If not specified then file_name is used :return: True if file was uploaded, else False """ # If S3 object_name was not specified, use file_name if object_name is None: object_name = file_name # Upload the file s3_client = boto3.client('s3') try: response = s3_client.upload_file(file_name, bucket, object_name) except ClientError as e: logging.error(e) return False return True def tokenize(row): text = row['text'] tokens = nltk.word_tokenize(text.lower()) return tokens def save_csv(output_file, tokens): with open(output_file, 'w') as csvoutfile: csv_writer = csv.writer(csvoutfile, delimiter=' ', lineterminator='\n') csv_writer.writerows(tokens) bucket = "yelp-dataset-pt-9" upload_file(output_file, bucket, f'spencer/data/sentiment/en/fasttext/{output_file}') def preproc(filename): start = time.time() path = f"s3://yelp-dataset-pt-9/spencer/data/sentiment/en/{filename}.csv" df = pd.read_csv(path) print(f'{filename} has {len(df):,} rows') labels = ['stars', 'pos_neg_neu', 'pos_neg_3_is_pos', 'pos_neg_3_is_neg'] tokens = df.parallel_apply(tokenize, axis=1) print("Done tokenizing, time to apply to each label.") for label in tqdm(labels): if label == 'stars': labels_df = ('__label__' + df['stars'].astype(int).astype(str)).str.split(" ") else: labels_df = ('__label__' + df[label].astype(str)).str.split(" ") tokens_and_labels = labels_df + tokens print(tokens_and_labels.tail()) print("Saving to csv.") save_csv(f"{label}_tokens_{filename}.csv", tokens_and_labels.to_list()) print("CSV saved.") print(f'Took {time.time() - start:.2f} seconds') # - files = ['train_bal', 'train_same_size_as_bal', 'test_small'] for filename in files: preproc(filename) clear_output(wait=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Homewrok Session #X # #### *Personal information* # **First name**: place here your name # **Last Name**: place here your name # ## Question 1 # Enunciate here the question # Put Here Your anwswer # + # Or code it here :) # - # ## Question 2 # Enunciate here the question # Put Here Your anwswer # + # Or code it here :) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tensorflowgpu # language: python # name: tensorflowgpu # --- # + import pickle with open('dict_mid_mname.bin', 'rb') as f: ID_dict = pickle.load(f) with open('persona.txt', 'r', encoding='utf-8') as f: entire_movies = [] while f.readable(): read = f.readline() if read in ID_dict.values(): print(read) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import errno import pandas as pd import glob import csv import numpy as np import os.path from pathlib import Path import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier print("getting ready to work with the dataset") # get names of all csv files in this folder csvFile = pd.read_csv("./petrophysics.csv") # train = data[row] # test = data[-r?ow] # + csvFile.drop(["latitude", "longitude", "UWI"], axis = 1, inplace = True) # - member = csvFile.iloc[:,-2] csvFile.drop("member", axis = 1, inplace = True) # + # print(member) dict_map = {} count = 1 for 
x in member: if x not in dict_map: dict_map[x] = count count += 1 # print(dict_map) i = 0 temp = [] for x in member: temp.append(dict_map[x]) # i += 1 # print(temp) csvFile["member"] = temp pd.set_option('display.max_rows', None) csvFile.head() # - csvFile.drop("formation", axis = 1, inplace = True) csvFile.head() sns.pairplot(data=csvFile, hue='member') X_train, X_test, y_train, y_test = train_test_split(csvFile.iloc[:, 0:-1], csvFile.iloc[:,-1], test_size=0.25, random_state=42) knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean') knn.fit(X_train, y_train) y_pred = knn.predict(X_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ztf_paper_env # language: python # name: ztf_paper_env # --- # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import pandas as pd import os import numpy as np from astropy.time import Time from astropy.table import Table from astropy import units as u from astropy import constants as const from astropy.coordinates.name_resolve import NameResolveError from ztfquery.lightcurve import LCQuery from astropy.coordinates import SkyCoord from astropy.cosmology import WMAP9 as cosmo from astropy.io import fits from astropy.table import Table import logging import sys from nuztf.ampel_api import ampel_api_name from nuztf.irsa import plot_irsa_lightcurve from nuztfpaper.style import plot_dir, output_folder from nuztfpaper.candidates import candidates # Set up logging level for logname in ["nuztf", "nuztfpaper"]: logger = logging.getLogger(logname) # handler = logging.StreamHandler(sys.stdout) # formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s') # handler.setFormatter(formatter) # logger.addHandler(handler) logger.setLevel(logging.INFO) sources = candidates[candidates["sub_class"] == "AGN Flare"] names = [ ("MCG +00-02-020", "IC190922B"), ("WISEA J205314.58+125218.9", "IC191001A"), ("ZTF19abexshr", "IC191001A"), ("CGCG 67-27", "IC200109A"), ("SDSS J105752.69+105037.9", "IC200109A"), ("NPM 1G+12.0265", "IC200109A"), ("SDSS J104431.98+110105.6", "IC200109A"), ("ZTF18aamjqes", "IC200530A"), ("WISEA J170539.32+273641.2", "IC200530A"), ("WISEA J165707.06+234643.8", "IC200530A"), ("ZTF18adbbnry", "IC200916A"), ("ZTF20aamoxyt", "IC200929A"), ("ZTF18abxrpgu", "IC201130A"), ("SDSS J002553.11-091252.1", "IC201209A"), ("4C 05.57", "IC210210A"), ("SDSS J134034.75+045241.3", "IC210210A"), ("WISE J224645.73+122935.7", "IC210629A"), ] # + # text = "\\begin{figure*} \n" # for i, (_, row) in enumerate(sources.iterrows()): # print(i, row["Name"], row["Catalogue Name"],row["neutrino"], names[i]) # # try: # # plot_irsa_lightcurve( # # str(row["Catalogue Name"]), # # nu_name=row["neutrino"], # # plot_mag=False, # # plot_folder=output_folder # # ) # # # except (TypeError, NameResolveError) as e: # # except (NameResolveError, RemoteServiceError) as e: # res = ampel_api_name(row["Name"], with_history=False)[0] # plot_irsa_lightcurve( # row["Name"], # source_coords=[res["candidate"]["ra"], res["candidate"]["dec"]], # nu_name=row["neutrino"], # plot_mag=False, # plot_folder=plot_dir, # extra_folder=output_folder # ) # text += f" \centering \includegraphics[width=0.45\\textwidth]{{figures/{row['Name']}_lightcurve_flux}}\n" # text += f""" # \caption{{ZTF lightcurves of {len(sources)} AGN flares coincident with high-energy neutrinos.}} # \label{{fig:flares}} # \end{{figure*}} # """ # + # print(text) # - literature_candidates = [ 
("PKS 1502+106", "IC190730A"), # ("BZB J0955+3551", "IC200107A"), # ("PKS 0735+178", "IC211208A") ] for name, nu in literature_candidates: plot_irsa_lightcurve( name, nu_name=nu, plot_mag=False, plot_folder=plot_dir, extra_folder=output_folder ) from astropy.coordinates import SkyCoord c = SkyCoord(115.0271293, 29.2021507, unit=u.deg, frame='icrs') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # Copyright (C) 2021 by <> # # This code is licensed under a Creative Commons Attribution 4.0 International License. (see LICENSE.txt for details) # # General Description - this notebook is used to calculate the e2e delay by combining the delay from OMNET with the one from CloudSim # It creates two types of output files: # - complete information about all e2e delay events that can be found when merging OMNET and CloudSim data # - coarse grain e2e delay information based on variable sized bins of the complete e2e delay information # # + def calculateE2Edelay(dataO, cloudsim, output): #read cloudsim delay file # format: time, VM, delay, description print(cloudsim) dataCS = pd.read_csv(cloudsim + ".csv", sep=' ', header=0,) # rename Time to time, and VM to car, to align with omnet dataframe # rename delay to csd to distinguish from omnet dataCS=dataCS.rename(columns={"Time": "time", "VM": "car", "delay":"csd"}) # concatenate cloudsim dataframe with omnet dataframe dataCars = pd.concat([dataO, dataCS], join = 'outer', ignore_index = True) # sort by time dataCars = dataCars.sort_values(by=['time']) dataCars = dataCars.reset_index(drop=True) dataCars = dataCars.set_index('car', append=True) # create subsets by car / VM dataCars["csd"] = dataCars.groupby('car')['csd'].transform(lambda x: x.ffill()) # print(dataCars) # dataCars = dataCars.fillna(999) # print NaN info print(dataCars.isna().sum()/len(dataCars)) # drop NAN values dataCars.dropna() # print(dataCars) # all delay information should be in ms dataCars['e2e'] = dataCars['delay']*1000 + dataCars['csd'] dataCars['delay'] = dataCars['delay']*1000 # only keep rows with omnet description filtered = dataCars[dataCars['description'] == 'omnet'] filtered = filtered.reset_index() filtered = filtered.drop(columns=['description','level_0']) filtered = filtered.round(2) filtered.to_csv(output + '.csv', index=False) # print(filtered) # create bins with size of 10 seconds and calculate average e2e delay per car per bin maxTime = filtered['time'].max() # bin size = 10 seconds cuts = pd.cut(filtered['time'], np.arange(1,maxTime+1,10)) binned = filtered.groupby(['car', cuts]).e2e.mean().reset_index(name='avg_e2e') # drop the bins with no information binned = binned.dropna() binned = binned.round(2) binned.to_csv(output + '_bin.csv', index=False) #print(binned) # + # measuring execution time # %load_ext autotime # cycle through all simulations and call function to calculate e2e delay import vaex as vx import numpy as np import pandas as pd text = "./VMs_network_delay/VMs_network_delay_" # change location of files accordingly outFile = "e2eDelay_" #cars = np.array([4951, 4955, 4956, 5734, 5740, 5749, 6910, 6915, 6923, 8600, 8620, 8635]) #hosts = np.array([45, 63, 81, 99, 117]) #hostType = np.array([0, 1]) midtext = "_sc_sc-mad_" #services = np.array([5074, 5013, 4256, 5808, 4966, 5858, 6042, 6187, 7004, 7455, 7463, 7486]) ind = 0 cars = np.array([4928]) hosts = np.array([45]) hostType = 
np.array([0]) services = np.array([1800]) for i in cars: # read omnet delay file # format: car, time, delay dataO = pd.read_csv(str(i) + '.csv', sep=',', header=0) # add description column to omnet dataframe dataO['description'] = 'omnet' for h in hosts: for t in hostType: cloudsim = text + str(h) + '_' + str(t) + midtext + str(services[ind]) + '_' + str(i) output = outFile + str(h) + '_' + str(t) + midtext + str(services[ind]) + '_' + str(i) calculateE2Edelay(dataO, cloudsim, output) ind = ind + 1 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Best Practices for Coding in Python # # There are certain guides to coding in Python that are expected. These can be found in more detail in [the PEP 8](https://www.python.org/dev/peps/pep-0008/) guidelines. But for the basics, read on! # # These coding guidelines will be enforced with a CI tool, so make sure to follow these rules! # Python doesn't care about whether you use Tabs or Spaces, and so in general the rule is to be consistent. Since this is hard to enforce in large projects, the rule as determined is to use **spaces** over **tabs**. Use **4 spaces** every time that you are indenting. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Read file import itertools import re is_num = re.compile(r"^\d+$") def read_file(filename): with open(filename) as infile: lines = [line.strip().split() for line in infile.readlines()] bags = {} for line in lines: bag = ' '.join(line[:2]) i = 4 contents = {} while i < len(line) and is_num.match(line[i]): contents[' '.join(line[i+1:i+3])] = int(line[i]) i += 4 bags[bag] = contents return bags # - # Part 1 def contains_shiny_gold(bags, bag, memo=None): if memo == None: memo = {} if bag in memo: return memo[bag] if 'shiny gold' in bags[bag]: memo[bag] = True return True for inner in bags[bag]: if contains_shiny_gold(bags, inner, memo): memo[bag] = True return True memo[bag] = False return False # Test part 1 bags = read_file("test01.txt") sum([contains_shiny_gold(bags, bag) for bag in bags]) == 4 # Solve part 1 bags = read_file("input.txt") sum([contains_shiny_gold(bags, bag) for bag in bags]) # Part 2 def count_inner_bags(bags, bag, memo=None): if memo == None: memo = {} if bag in memo: return memo[bag] count = 0 for inner in bags[bag]: count += count_inner_bags(bags, inner, memo) * bags[bag][inner] + bags[bag][inner] memo[bag] = count return count # Test part 2 count_inner_bags(read_file("test01.txt"), 'shiny gold') == 32 and \ count_inner_bags(read_file("test02.txt"), 'shiny gold') == 126 # Solve part 2 count_inner_bags(read_file("input.txt"), 'shiny gold') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import ast import copy # ### Load CSV data df = pd.read_csv('via_export_csv.csv',header=0) # ### Check for labels other than specified ones for i in range(len(df)): if ast.literal_eval(df.iloc[i, -1])['name'] != "class1": print(df.iloc[i,0],i) len(df) cols = ['filename', 'file_size', 'file_attributes', 'region_count', 'region_id', 'region_shape_attributes', 
'region_attributes'] filenames = df['filename'].unique() len(filenames) # ## To convert # ### *img_name, x, y, width, height, class* # ### type CSV to df (OPTIONAL, use only if you have this kind of data) # + # Convert to DataFrame # final_data = [] # for f in filenames: # same_files_len = len(df[df['file_name']==f]) # file_name = [f for _ in range(same_files_len)] # file_size = [0 for _ in range(same_files_len)] # region_count = [same_files_len for _ in range(same_files_len)] # region_id = [i for i in range(same_files_len)] # file_attributes = ['"{}"' for _ in range(same_files_len)] # region_attributes = [] # clses = df[df['file_name']==f]['class_name'] # for x in range(len(clses)): # t = '{"name":"'+clses.iloc[x]+'"}' # region_attributes.append(t) # region_shape_attributes = [] # attrs = df[df['file_name']==f][['x_max','x_min','y_max','y_min']] # for x in range(len(attrs)): # t = '{"name":"rect","x":'+str(attrs.iloc[x,1])+',"y":'+str(attrs.iloc[x,3])+',"width":'+str(int(attrs.iloc[x,0])-int(attrs.iloc[x,1]))+',"height":'+str(int(attrs.iloc[x,2])-int(attrs.iloc[x,3]))+'}' # region_shape_attributes.append(t) # for a,b,c,d,e,f,g in zip(file_name,file_size,file_attributes,region_count,region_id,region_shape_attributes,region_attributes): # final_data.append([a,b,c,d,e,f,g]) # cols = ['filename', # 'file_size', # 'file_attributes', # 'region_count', # 'region_id', # 'region_shape_attributes', # 'region_attributes'] # new_df = pd.DataFrame(final_data) # new_df.columns = cols # new_df.to_csv("final_data_via.csv",index=False) # - # ## Separate train val and test # + # Get train val test files # Split by index train_filenames = filenames[:418] val_filenames = filenames[418:434] test_filenames = filenames[434:] train_files = pd.DataFrame(columns=cols) for r in train_filenames: train_files = train_files.append(new_df[new_df['filename']==r]) val_files = pd.DataFrame(columns=cols) for r in val_filenames: val_files = val_files.append(new_df[new_df['filename']==r]) val_files = pd.DataFrame(val_files, columns=cols) # - # ## Create JSON for training set train_json = {} i=0 for f in train_filenames: same_files = train_files[train_files['filename']==f] t = {} t['filename'] = same_files.iloc[0,0] t['size'] = same_files.iloc[0,1] all_regions = [] for x in range(len(same_files)): t1 = {} t1['shape_attributes'] = eval(same_files.iloc[x,5]) t1['region_attributes'] = eval(same_files.iloc[x,-1]) all_regions.append(t1) t['regions'] = all_regions train_json['x'+str(i)] = t i+=1 # ## Create JSON for validation set val_json = {} i=0 for f in val_filenames: same_files = val_files[val_files['filename']==f] t = {} t['filename'] = same_files.iloc[0,0] t['size'] = same_files.iloc[0,1] all_regions = [] for x in range(len(same_files)): t1 = {} t1['shape_attributes'] = eval(same_files.iloc[x,5]) t1['region_attributes'] = eval(same_files.iloc[x,-1]) all_regions.append(t1) t['regions'] = all_regions val_json['x'+str(i)] = t i+=1 # ## Dump train val JSONs # + train_json = str(train_json).replace("'",'"') val_json = str(val_json).replace("'",'"') with open('train_json.json','w') as f: f.write(train_json) with open('val_json.json','w') as f: f.write(val_json) # - # ## Dump test filenames test_filenames # Dump validation filenames with open('validation.txt','w') as f: for i in val_filenames: f.write(i+'\n') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''otsu'': conda)' # name: python3 
# --- # + import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import math # 初期設定 N = 100 # 個体数 d = 2 # 次元 TC = np.zeros(N) #更新カウント lim = 30 xmax = 5 xmin = -5 G = 100 # 繰り返す周期 x = np.zeros((N,d)) for i in range(N): x[i] = (xmax-xmin)*np.random.rand(d) + xmin x_best = x[0] #x_bestの初期化 best = 100 # ルーレット選択用関数 def roulette_choice(w): tot = [] acc = 0 for e in w: acc += e tot.append(acc) r = np.random.random() * acc for i, e in enumerate(tot): if r <= e: return i #目的関数(例としてsphere関数) def func(x): return np.sum(x**2) # アルゴリズム best_value = [] for g in range(G): best_value.append(func(x_best)) # employee bee step for i in range(N): v = x.copy() k = i while k == i: k = np.random.randint(N) for j in range(d): r = np.random.rand()*2-1 #-1から1までの一様乱数 v[i,j] = x[i,j] + r * (x[i,j] - x[k,j]) if func(x[i]) > func(v[i]): x[i] = v[i] TC[i] = 0 else: TC[i] += 1 # onlooker bee step for i in range(N): w = [] for j in range(N): w.append(np.exp(-func(x[j]))) i = roulette_choice(w) for j in range(d): r = np.random.rand()*2-1 #-1から1までの一様乱数 v[i,j] = x[i,j] + r * (x[i,j] - x[k,j]) if func(x[i]) > func(v[i]): x[i] = v[i] TC[i] = 0 else: TC[i] += 1 # scout bee step for i in range(N): if TC[i] > lim: for j in range(d): x[i,j] = np.random.rand()*(xmax-xmin) + xmin TC[i] = 0 # 最良個体の発見 for i in range(N): if best > func(x[i]): x_best = x[i] best = func(x_best) print(x_best,func(x_best)) plt.plot(range(G),best_value) plt.yscale('log') plt.title("ABC") plt.show() # + def opt_mpc_with_state_constr(A, B, N, Q, R, P, x0, xmin=None, xmax=None, umax=None, umin=None): """ optimize MPC problem with state and (or) input constraints return x: state u: input """ (nx, nu) = B.shape H = scipy.linalg.block_diag(np.kron(np.eye(N), R), np.kron(np.eye(N - 1), Q), np.eye(P.shape[0])) # calc Ae Aeu = np.kron(np.eye(N), -B) Aex = scipy.linalg.block_diag(np.eye((N - 1) * nx), P) Aex -= np.kron(np.diag([1.0] * (N - 1), k=-1), A) Ae = np.hstack((Aeu, Aex)) # calc be be = np.vstack((A, np.zeros(((N - 1) * nx, nx)))) * x0 # print(be) # === optimization === P = matrix(H) q = matrix(np.zeros((N * nx + N * nu, 1))) A = matrix(Ae) b = matrix(be) if umax is None and umin is None: sol = cvxopt.solvers.qp(P, q, A=A, b=b) else: G, h = generate_inequalities_constraints_mat(N, nx, nu, xmin, xmax, umin, umax) G = matrix(G) h = matrix(h) sol = cvxopt.solvers.qp(P, q, G, h, A=A, b=b) fx = np.matrix(sol["x"]) u = fx[0:N * nu].reshape(N, nu).T x = fx[-N * nx:].reshape(N, nx).T x = np.hstack((x0, x)) return x, u def generate_inequalities_constraints_mat(N, nx, nu, xmin, xmax, umin, umax): """ generate matrices of inequalities constrints return G, h """ G = np.zeros((0, (nx + nu) * N)) h = np.zeros((0, 1)) if umax is not None: tG = np.hstack([np.eye(N * nu), np.zeros((N * nu, nx * N))]) th = np.kron(np.ones((N * nu, 1)), umax) G = np.vstack([G, tG]) h = np.vstack([h, th]) if umin is not None: tG = np.hstack([np.eye(N * nu) * -1.0, np.zeros((N * nu, nx * N))]) th = np.kron(np.ones((N, 1)), umin * -1.0) G = np.vstack([G, tG]) h = np.vstack([h, th]) if xmax is not None: tG = np.hstack([np.zeros((N * nx, nu * N)), np.eye(N * nx)]) th = np.kron(np.ones((N, 1)), xmax) G = np.vstack([G, tG]) h = np.vstack([h, th]) if xmin is not None: tG = np.hstack([np.zeros((N * nx, nu * N)), np.eye(N * nx) * -1.0]) th = np.kron(np.ones((N, 1)), xmin * -1.0) G = np.vstack([G, tG]) h = np.vstack([h, th]) return G, h # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python (tybalt) # language: python # name: vae_pancancer # --- # # Training Tybalt models with two hidden layers # # ** 2017** # # This script is an extension of [tybalt_vae.ipynb](tybalt_vae.ipynb). See that script for more details about the base model. Here, I train two alternative Tybalt models with different architectures. Both architectures have two hidden layers: # # 1. **Model A**: 5000 input -> 100 hidden -> 100 latent -> 100 hidden -> 5000 input # 2. **Model B**: 5000 input -> 300 hidden -> 100 latent -> 300 hidden -> 5000 input # # This notebook trains _both_ models. The optimal hyperparameters were selected through a grid search for each model independently. # # The original tybalt model compressed 5000 input genes into 100 latent features in a single layer. # # Much of this script is inspired by the [keras variational_autoencoder.py example](https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py) # # ## Output # # For both models, the script will output: # # 1. The learned latent feature matrix # 2. Encoder and Decoder keras models with pretrained weights # 3. An abstracted weight matrix # + deletable=true editable=true import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from keras.layers import Input, Dense, Lambda, Layer, Activation from keras.layers.normalization import BatchNormalization from keras.models import Model, Sequential from keras import backend as K from keras import metrics, optimizers from keras.callbacks import Callback import keras import pydot import graphviz from keras.utils import plot_model from IPython.display import SVG from keras.utils.vis_utils import model_to_dot # + deletable=true editable=true print(keras.__version__) tf.__version__ # - # %matplotlib inline plt.style.use('seaborn-notebook') np.random.seed(123) # ## Load Functions and Classes # # This will facilitate connections between layers and also custom hyperparameters # + # Function for reparameterization trick to make model differentiable def sampling(args): import tensorflow as tf # Function with args required for Keras Lambda function z_mean, z_log_var = args # Draw epsilon of the same shape from a standard normal distribution epsilon = K.random_normal(shape=tf.shape(z_mean), mean=0., stddev=epsilon_std) # The latent vector is non-deterministic and differentiable # in respect to z_mean and z_log_var z = z_mean + K.exp(z_log_var / 2) * epsilon return z class CustomVariationalLayer(Layer): """ Define a custom layer that learns and performs the training """ def __init__(self, var_layer, mean_layer, **kwargs): # https://keras.io/layers/writing-your-own-keras-layers/ self.is_placeholder = True self.var_layer = var_layer self.mean_layer = mean_layer super(CustomVariationalLayer, self).__init__(**kwargs) def vae_loss(self, x_input, x_decoded): reconstruction_loss = original_dim * metrics.binary_crossentropy(x_input, x_decoded) kl_loss = - 0.5 * K.sum(1 + self.var_layer - K.square(self.mean_layer) - K.exp(self.var_layer), axis=-1) return K.mean(reconstruction_loss + (K.get_value(beta) * kl_loss)) def call(self, inputs): x = inputs[0] x_decoded = inputs[1] loss = self.vae_loss(x, x_decoded) self.add_loss(loss, inputs=inputs) # We won't actually use the output. return x # - # ### Implementing Warm-up as described in Sonderby et al. 
LVAE # # This is modified code from https://github.com/fchollet/keras/issues/2595 class WarmUpCallback(Callback): def __init__(self, beta, kappa): self.beta = beta self.kappa = kappa # Behavior on each epoch def on_epoch_end(self, epoch, logs={}): if K.get_value(self.beta) <= 1: K.set_value(self.beta, K.get_value(self.beta) + self.kappa) # ### Tybalt Model # # The following class implements a Tybalt model with given input hyperparameters. Currently, only a two hidden layer model is supported. class Tybalt(): """ Facilitates the training and output of tybalt model trained on TCGA RNAseq gene expression data """ def __init__(self, original_dim, hidden_dim, latent_dim, batch_size, epochs, learning_rate, kappa, beta): self.original_dim = original_dim self.hidden_dim = hidden_dim self.latent_dim = latent_dim self.batch_size = batch_size self.epochs = epochs self.learning_rate = learning_rate self.kappa = kappa self.beta = beta def build_encoder_layer(self): # Input place holder for RNAseq data with specific input size self.rnaseq_input = Input(shape=(self.original_dim, )) # Input layer is compressed into a mean and log variance vector of size `latent_dim` # Each layer is initialized with glorot uniform weights and each step (dense connections, batch norm, # and relu activation) are funneled separately # Each vector of length `latent_dim` are connected to the rnaseq input tensor hidden_dense_linear = Dense(self.hidden_dim, kernel_initializer='glorot_uniform')(self.rnaseq_input) hidden_dense_batchnorm = BatchNormalization()(hidden_dense_linear) hidden_encoded = Activation('relu')(hidden_dense_batchnorm) z_mean_dense_linear = Dense(self.latent_dim, kernel_initializer='glorot_uniform')(hidden_encoded) z_mean_dense_batchnorm = BatchNormalization()(z_mean_dense_linear) self.z_mean_encoded = Activation('relu')(z_mean_dense_batchnorm) z_log_var_dense_linear = Dense(self.latent_dim, kernel_initializer='glorot_uniform')(hidden_encoded) z_log_var_dense_batchnorm = BatchNormalization()(z_log_var_dense_linear) self.z_log_var_encoded = Activation('relu')(z_log_var_dense_batchnorm) # return the encoded and randomly sampled z vector # Takes two keras layers as input to the custom sampling function layer with a `latent_dim` output self.z = Lambda(sampling, output_shape=(self.latent_dim, ))([self.z_mean_encoded, self.z_log_var_encoded]) def build_decoder_layer(self): # The decoding layer is much simpler with a single layer glorot uniform initialized and sigmoid activation self.decoder_model = Sequential() self.decoder_model.add(Dense(self.hidden_dim, activation='relu', input_dim=self.latent_dim)) self.decoder_model.add(Dense(self.original_dim, activation='sigmoid')) self.rnaseq_reconstruct = self.decoder_model(self.z) def compile_vae(self): adam = optimizers.Adam(lr=self.learning_rate) vae_layer = CustomVariationalLayer(self.z_log_var_encoded, self.z_mean_encoded)([self.rnaseq_input, self.rnaseq_reconstruct]) self.vae = Model(self.rnaseq_input, vae_layer) self.vae.compile(optimizer=adam, loss=None, loss_weights=[self.beta]) def get_summary(self): self.vae.summary() def visualize_architecture(self, output_file): # Visualize the connections of the custom VAE model plot_model(self.vae, to_file=output_file) SVG(model_to_dot(self.vae).create(prog='dot', format='svg')) def train_vae(self): self.hist = self.vae.fit(np.array(rnaseq_train_df), shuffle=True, epochs=self.epochs, batch_size=self.batch_size, validation_data=(np.array(rnaseq_test_df), np.array(rnaseq_test_df)), callbacks=[WarmUpCallback(self.beta, 
self.kappa)]) def visualize_training(self, output_file): # Visualize training performance history_df = pd.DataFrame(self.hist.history) ax = history_df.plot() ax.set_xlabel('Epochs') ax.set_ylabel('VAE Loss') fig = ax.get_figure() fig.savefig(output_file) def compress(self, df): # Model to compress input self.encoder = Model(self.rnaseq_input, self.z_mean_encoded) # Encode rnaseq into the hidden/latent representation - and save output encoded_df = self.encoder.predict_on_batch(df) encoded_df = pd.DataFrame(encoded_df, columns=range(1, self.latent_dim + 1), index=rnaseq_df.index) return encoded_df def get_decoder_weights(self): # build a generator that can sample from the learned distribution decoder_input = Input(shape=(self.latent_dim, )) # can generate from any sampled z vector _x_decoded_mean = self.decoder_model(decoder_input) self.decoder = Model(decoder_input, _x_decoded_mean) weights = [] for layer in self.decoder.layers: weights.append(layer.get_weights()) return(weights) def predict(self, df): return self.decoder.predict(np.array(df)) def save_models(self, encoder_file, decoder_file): self.encoder.save(encoder_file) self.decoder.save(decoder_file) # ## Load Gene Expression Data # + deletable=true editable=true rnaseq_file = os.path.join('data', 'pancan_scaled_zeroone_rnaseq.tsv.gz') rnaseq_df = pd.read_table(rnaseq_file, index_col=0) print(rnaseq_df.shape) rnaseq_df.head(2) # + deletable=true editable=true # Split 10% test set randomly test_set_percent = 0.1 rnaseq_test_df = rnaseq_df.sample(frac=test_set_percent) rnaseq_train_df = rnaseq_df.drop(rnaseq_test_df.index) # - # ## Initialize variables and hyperparameters for each model # # The hyperparameters provided below were determined through previous independent parameter searches # + deletable=true editable=true # Set common hyper parameters original_dim = rnaseq_df.shape[1] latent_dim = 100 beta = K.variable(0) epsilon_std = 1.0 # - # Model A (100 hidden layer size) model_a_latent_dim = 100 model_a_batch_size = 100 model_a_epochs = 100 model_a_learning_rate = 0.001 model_a_kappa = 1.0 # Model B (300 hidden layer size) model_b_latent_dim = 300 model_b_batch_size = 50 model_b_epochs = 100 model_b_learning_rate = 0.0005 model_b_kappa = 0.01 # ## Create and process Tybalt Models # # ### Model A Training and Output model_a = Tybalt(original_dim=original_dim, hidden_dim=model_a_latent_dim, latent_dim=latent_dim, batch_size=model_a_batch_size, epochs=model_a_epochs, learning_rate=model_a_learning_rate, kappa=model_a_kappa, beta=beta) # Compile Model A model_a.build_encoder_layer() model_a.build_decoder_layer() model_a.compile_vae() model_a.get_summary() model_architecture_file = os.path.join('figures', 'twohidden_vae_architecture.png') model_a.visualize_architecture(model_architecture_file) # %%time model_a.train_vae() model_a_training_file = os.path.join('figures', 'twohidden_100hidden_training.pdf') model_a.visualize_training(model_a_training_file) model_a_compression = model_a.compress(rnaseq_df) model_a_file = os.path.join('data', 'encoded_rnaseq_twohidden_100model.tsv.gz') model_a_compression.to_csv(model_a_file, sep='\t', compression='gzip') model_a_compression.head(2) model_a_weights = model_a.get_decoder_weights() encoder_model_a_file = os.path.join('models', 'encoder_twohidden100_vae.hdf5') decoder_model_a_file = os.path.join('models', 'decoder_twohidden100_vae.hdf5') model_a.save_models(encoder_model_a_file, decoder_model_a_file) # ### Model B Training and Output model_b = Tybalt(original_dim=original_dim, 
hidden_dim=model_b_latent_dim, latent_dim=latent_dim, batch_size=model_b_batch_size, epochs=model_b_epochs, learning_rate=model_b_learning_rate, kappa=model_b_kappa, beta=beta) # Compile Model B model_b.build_encoder_layer() model_b.build_decoder_layer() model_b.compile_vae() model_b.get_summary() # %%time model_b.train_vae() model_b_training_file = os.path.join('figures', 'twohidden_300hidden_training.pdf') model_b.visualize_training(model_b_training_file) model_b_compression = model_b.compress(rnaseq_df) model_b_file = os.path.join('data', 'encoded_rnaseq_twohidden_300model.tsv.gz') model_b_compression.to_csv(model_b_file, sep='\t', compression='gzip') model_b_compression.head(2) model_b_weights = model_b.get_decoder_weights() encoder_model_b_file = os.path.join('models', 'encoder_twohidden300_vae.hdf5') decoder_model_b_file = os.path.join('models', 'decoder_twohidden300_vae.hdf5') model_b.save_models(encoder_model_b_file, decoder_model_b_file) # ## Extract the abstract connections between samples and genes # # In a two layer model, there are two sets of learned weights between samples and latent features and from latent features to reconstructed samples. The first set connects the genes to the hidden layer and the second set connects the hidden layer to the latent feature activation. The two layers can be connected by matrix multiplication, which provides a direct connection from gene to latent feature. It is likely that the two layers learn different biological features, but the immediate connection is easiest to currently analyze and intuit. def extract_weights(weights, weight_file): # Multiply hidden layers together to obtain a single representation of gene weights intermediate_weight_df = pd.DataFrame(weights[1][0]) hidden_weight_df = pd.DataFrame(weights[1][2]) abstracted_weight_df = intermediate_weight_df.dot(hidden_weight_df) abstracted_weight_df.index = range(1, 101) abstracted_weight_df.columns = rnaseq_df.columns abstracted_weight_df.to_csv(weight_file, sep='\t') # Model A model_a_weight_file = os.path.join('data', 'tybalt_gene_weights_twohidden100.tsv') extract_weights(model_a_weights, model_a_weight_file) # Model B model_b_weight_file = os.path.join('data', 'tybalt_gene_weights_twohidden300.tsv') extract_weights(model_b_weights, model_b_weight_file) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="37puETfgRzzg" # # Data Preprocessing Tools # + [markdown] id="EoRP98MpR-qj" # ## Importing the libraries # + id="nhLNqFwu8gGa" import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] id="RopL7tUZSQkT" # ## Importing the dataset # + id="iINyGQMA8rs2" dataset = pd.read_csv("Data.csv") x = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values # + colab={"base_uri": "https://localhost:8080/"} id="8wk1Cxkx_iQ5" outputId="37fac68b-412a-41fd-e08f-f9546f5eef2f" print(x) # + colab={"base_uri": "https://localhost:8080/"} id="GjTkcdA9_ycR" outputId="7cb1820b-0abb-4f64-f864-2b9ed0f18edf" print(y) # + [markdown] id="nhfKXNxlSabC" # ## Taking care of missing data # + id="YaYfCV98Dqx5" from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values=np.nan, strategy='mean') imputer.fit(x[:, 1:3]) x[:, 1:3]=imputer.transform(x[:, 1:3]) # + colab={"base_uri": "https://localhost:8080/"} id="TuJvhWr-Eg19" 
outputId="595454b2-f88c-4d8d-9785-604ecae8650f" print(x) # + [markdown] id="CriG6VzVSjcK" # ## Encoding categorical data # + [markdown] id="AhSpdQWeSsFh" # ### Encoding the Independent Variable # + id="GwnincOVUvIn" from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers =[('encoders', OneHotEncoder(),[0])] , remainder='passthrough') x = np.array(ct.fit_transform(x)) # + colab={"base_uri": "https://localhost:8080/"} id="9AtCHQcmB-bp" outputId="bf4c41e5-d5ed-4904-ae38-c0bd412118d3" print(x) # + [markdown] id="DXh8oVSITIc6" # ### Encoding the Dependent Variable # + id="843yhw27DuWX" from sklearn.preprocessing import LabelEncoder le = LabelEncoder() y = le.fit_transform(y) # + id="DW7dbY_LECYT" outputId="1d860f38-8b7e-42ff-dc45-36dc288bc52d" colab={"base_uri": "https://localhost:8080/"} print(y) # + [markdown] id="qb_vcgm3qZKW" # ## Splitting the dataset into the Training set and Test set # + id="L_ek1vBvTfc7" from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 1) # + colab={"base_uri": "https://localhost:8080/"} id="gXHoa4LDUCCX" outputId="ce57e6f2-54a4-4f98-a0f5-87b3bf5621a9" print(x_train) # + colab={"base_uri": "https://localhost:8080/"} id="MNSPrZKTUHUG" outputId="9348ecfc-2ba6-4a8a-bf77-2b3a49368133" print(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="gFNGxz8WUJ5y" outputId="4a088b26-2d93-4160-a3c9-b7e9348896a8" print(y_train) # + colab={"base_uri": "https://localhost:8080/"} id="VYurkWsUUMlc" outputId="acdffaba-530f-4a0a-8e66-751d6c2be8e9" print(y_test) # + [markdown] id="TpGqbS4TqkIR" # ## Feature Scaling # + id="RJCfj_XmaJOm" from sklearn.preprocessing import StandardScaler sc = StandardScaler() x_train[:, 3:] = sc.fit_transform(x_train[:, 3:] ) x_test[:, 3:] = sc.transform(x_test[:, 3:]) # + colab={"base_uri": "https://localhost:8080/"} id="RW6ffqe_a4Da" outputId="8a1764d5-1a7f-41bc-a212-5da39380d12b" print(x_train) # + colab={"base_uri": "https://localhost:8080/"} id="Aq-SyxMba6mu" outputId="ee342c9b-a5a0-4be6-bf25-f2ecbd9c0829" print(x_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from redminelib import Redmine redmine = Redmine('http://br-qa.hsa.ca', key='', requests={'verify': True}) redmine.project.get(1) project = redmine.project.get('adopt-scrum') project = redmine.project.get('aaa_ancillary', include='issue_categories') issue = redmine.issue.create(project_id='aaa_ancillary', subject='subject1' ) # + issue = redmine.issue.create( project_id='aaa_ancillary', subject='subject1', tracker_id=7, description='descr1', assigned_to_id=5, estimated_hours=0 ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 🚜Predicting the Sale Price of Bulldozers using Machine Learning # # in this notebook, we're going to go through an example machine learning project with the goal of predicting the sales price of bulldozer. # # ## 1. Problem definition # # > How well can we predict the future sale price of a bulldozer, given its characteristics and previous examples of how much similar bulldozer have been sold for? # # ## 2. 
Data # # The data is downloaded from the Kaggle Bluebook for Bulldozers competition: https://www.kaggle.com/c/bluebook-for-bulldozers/data # # There are three main datasets: # # * Train.csv is the training set, which contains data through the end of 2011. # * Valid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012. You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public leaderboard. # * Test.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition. # # ## 3. Evaluation # # The evaluation metric for this competition is the RMSLE (root mean squared log error) between the actual and predicted auction prices. # # For more on the evaluation of this project check: https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation # # **Note:** The goal for most regression evaluation metrics is to minimize the error. For example, our goal for this project will be to build a machine learning model which minimizes RMSLE. # # ## 4. Features # # Kaggle provides a data dictionary detailing all of the features of the dataset. You can view this data dictionary on Google Sheets: https://docs.google.com/spreadsheets/d/1jUumFSKpVst0k_peYIUIwIMVwLfJ_8MbGW-ATQvyBQI/edit?usp=sharing import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn # Import training and validation sets df = pd.read_csv("data/bluebook-for-bulldozers/TrainAndValid.csv", low_memory=False) df.info() df.isna().sum() df.columns fig, ax = plt.subplots() ax.scatter(df['saledate'][:1000], df["SalePrice"][:1000]) df.SalePrice.plot.hist() # ### Parsing dates # # When we work with time series data, we want to enrich the time & date component as much as possible. # # We can do that by telling pandas which of our columns have dates in them using the `parse_dates` parameter. # Import data again, but this time parse dates df = pd.read_csv("data/bluebook-for-bulldozers/TrainAndValid.csv", low_memory=False, parse_dates=["saledate"]) df.saledate.dtype fig, ax = plt.subplots() ax.scatter(df['saledate'][:1000], df["SalePrice"][:1000]) df.head() df.head().T # ### Sort DataFrame by saledate # # When working with time series data, it's a good idea to sort it by date. # Sort DataFrame in date order df.sort_values(by=["saledate"], inplace=True, ascending=True) df.saledate.head(20) # ### Make a copy of the original DataFrame # # We make a copy of the original DataFrame so when we manipulate the copy, we've still got our original data. # Make a copy df_tmp = df.copy() df_tmp.saledate.head(20) # ### Add datetime parameters for `saledate` column df_tmp[:1].saledate.dt.year df_tmp[:1].saledate.dt.day df_tmp[:1].saledate df_tmp["saleYear"] = df_tmp.saledate.dt.year df_tmp["saleMonth"] = df_tmp.saledate.dt.month df_tmp["saleDay"] = df_tmp.saledate.dt.day df_tmp["saleDayOfWeek"] = df_tmp.saledate.dt.dayofweek df_tmp["saleDayOfYear"] = df_tmp.saledate.dt.dayofyear df_tmp.head() # Now we've enriched our DataFrame with datetime features, we can remove `saledate` df_tmp.drop("saledate", axis=1, inplace=True) # Check the values of different columns df_tmp.state.value_counts() len(df_tmp) # ## 5. Modelling # # We've done enough EDA (we could always do more), but let's start to do some model-driven EDA.
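#
# As written, the fit in the next cell fails on this data: scikit-learn's random forest needs
# purely numeric, non-missing input, and `df_tmp` still contains string (object) columns and
# NaNs at this point. A minimal sketch of how to see that ahead of time (only pandas is assumed):

# +
non_numeric = df_tmp.dtypes[df_tmp.dtypes == "object"]
print(len(non_numeric), "non-numeric columns, e.g.:", list(non_numeric.index[:5]))
print("missing values in total:", int(df_tmp.isna().sum().sum()))
# -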
# + # Let's build a machine learning model from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor(n_jobs=-1, random_state=42) model.fit(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"]) # - # ### Convert strings to categories # # One way we can turn all our data into numbers is by converting them into pandas categories. # # We can check the different datatypes compatible with pandas here: https://pandas.pydata.org/pandas-docs/stable/reference/general_utility_functions.html pd.api.types.is_string_dtype(df_tmp["UsageBand"]) # Find the columns which contain strings for label, content in df_tmp.items(): if pd.api.types.is_string_dtype(content): print(label) # If you're wondering what df.items() does, here's an example rand_dict = {"k1": "Hell0", "k2": "Aysha!"} for key, value in rand_dict.items(): print(f"key: {key}, value: {value}") # This will turn all of the string columns into category values for label, content in df_tmp.items(): if pd.api.types.is_string_dtype(content): df_tmp[label] = content.astype("category").cat.as_ordered() df_tmp.info() df_tmp.state.cat.categories df_tmp.state.cat.codes # Thanks to pandas Categories we now have a way to access all of our data in the form of numbers. # # But we still have a bunch of missing data... # Check missing data df_tmp.isnull().sum()/len(df_tmp) # ### Save preprocessed data # Export current tmp dataframe df_tmp.to_csv("data/bluebook-for-bulldozers/train_tmp.csv", index=False) # Import preprocessed data df_tmp = pd.read_csv("data/bluebook-for-bulldozers/train_tmp.csv", low_memory=False) df_tmp.head().T df_tmp.isna().sum() # ### Fill missing values # #### Fill numerical missing values first for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): print(label) df_tmp.ModelID # Check for which numeric columns have null values for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): print(label) # Fill the numeric rows with the median for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): # Add a binary column which tells us if the data was missing or not df_tmp[label+"_is_missing"] = pd.isnull(content) # Fill missing numeric values with median df_tmp[label] = content.fillna(content.median()) # Check for which numeric columns have null values for label, content in df_tmp.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): print(label) # Check to see how many examples were missing df_tmp.auctioneerID_is_missing.value_counts() df_tmp.isna().sum() # ### Filling and turning categorical variables into numbers # Check for columns which aren't numeric for label, content in df_tmp.items(): if not pd.api.types.is_numeric_dtype(content): print(label) # Turn categorical variables into numbers and fill missing for label, content in df_tmp.items(): if not pd.api.types.is_numeric_dtype(content): # Add a binary column to indicate whether the sample had missing values df_tmp[label+"_is_missing"] = pd.isnull(content) # Turn categories into numbers and add + 1 df_tmp[label] = pd.Categorical(content).codes+1 df_tmp.info() df_tmp.head().T df_tmp.isna().sum() # Now that all of our data is numeric and our dataframe has no missing values, we should be able to build a machine learning model.
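#
# The `+ 1` shift above matters because pandas encodes missing categories as `-1`. A tiny
# standalone illustration (toy data, not the bulldozer columns):

# +
import pandas as pd

toy = pd.Series(["low", None, "high"])
print(pd.Categorical(toy).codes)      # [ 1 -1  0]  -> missing is encoded as -1
print(pd.Categorical(toy).codes + 1)  # [ 2  0  1]  -> after the shift, missing becomes 0
# -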
from sklearn.ensemble import RandomForestRegressor # + # %%time # Instantiate model model = RandomForestRegressor(n_jobs=-1, random_state=42) # Fit the model model.fit(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"]) # - model.score(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"]) # **Question:** Why doesn't the above metric hold water? (Why isn't the metric reliable?) # ### Splitting data into train/validation sets df_tmp.saleYear.value_counts() # + # Split data into training and validation set df_val = df_tmp[df_tmp.saleYear == 2012] df_train = df_tmp[df_tmp.saleYear != 2012] len(df_train), len(df_val) # + # Split data into x and y x_train, y_train = df_train.drop("SalePrice", axis=1), df_train.SalePrice x_valid, y_valid = df_val.drop("SalePrice", axis=1), df_val.SalePrice x_train.shape, y_train.shape, x_valid.shape, y_valid.shape # - y_train # ### Building an evaluation function # + # Create evaluation function (the competition uses RMSLE) from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score def rmsle(y_test, y_preds): ''' Calculate root mean squared log error between predictions and true labels. ''' return np.sqrt(mean_squared_log_error(y_test, y_preds)) # Create a function to evaluate the model on a few different levels def show_scores(model): train_preds = model.predict(x_train) val_preds = model.predict(x_valid) scores = {"Training MAE": mean_absolute_error(y_train, train_preds), "Valid MAE": mean_absolute_error(y_valid, val_preds), "Training RMSLE": rmsle(y_train, train_preds), "Valid RMSLE": rmsle(y_valid, val_preds), "Training R^2": r2_score(y_train, train_preds), "Valid R^2": r2_score(y_valid, val_preds)} return scores # - # ## Testing our model on a subset (to tune the hyperparameters) # + active="" # # This takes far too long... for experimenting # # %%time # model = RandomForestRegressor(n_jobs=-1, # random_state=42) # model.fit(x_train, y_train) # - len(x_train) # Change max_samples value model = RandomForestRegressor(n_jobs=-1, random_state=42, max_samples=10000) # %%time # Cutting down on the max number of samples each estimator can see improves training time model.fit(x_train, y_train) show_scores(model) # ### Hyperparameter tuning with RandomizedSearchCV # + # %%time from sklearn.model_selection import RandomizedSearchCV # Different RandomForestRegressor hyperparameters rf_grid = {"n_estimators": np.arange(10, 100, 10), "max_depth": [None, 3, 5, 10], "min_samples_split": np.arange(2, 20, 2), "min_samples_leaf": np.arange(1, 20, 2), "max_features": [0.5, 1, "sqrt", "auto"], "max_samples": [10000]} # Instantiate RandomizedSearchCV model rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1, random_state=42), param_distributions=rf_grid, n_iter=2, cv=5, verbose=True) # Fit the RandomizedSearchCV model rs_model.fit(x_train, y_train) # + active="" # # Find the best model hyperparameters # rs_model.best_params_ # # O/P # # {'n_estimators': 50, # # 'min_samples_split': 10, # # 'min_samples_leaf': 9, # # 'max_samples': 10000, # # 'max_features': 'auto', # # 'max_depth': 3} # + active="" # # Evaluate the RandomizedSearch model # show_scores(rs_model) # # O/P # # {'Training MAE': 11600.450217388972, # # 'Valid MAE': 13169.983584964879, # # 'Training RMSLE': 0.49468301761355865, # # 'Valid RMSLE': 0.49954812812940574, # # 'Training R^2': 0.49581349333422875, # # 'Valid R^2': 0.48895962228392753} # - # ### Train a model with the best hyperparameters # **Note:** These were found after 100 iterations of `RandomizedSearchCV`.
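#
# The note above refers to a longer search than the `n_iter=2` run shown in this notebook. A
# sketch of what that longer run might look like (same grid and data, just more sampled
# candidates, so expect it to take roughly 50x longer):

# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rs_model_100 = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1, random_state=42),
                                  param_distributions=rf_grid,  # defined above
                                  n_iter=100,
                                  cv=5,
                                  verbose=True)
# rs_model_100.fit(x_train, y_train)   # uncomment to run the full search
# rs_model_100.best_params_
# -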
# # + # %%time # Most ideal model ideal_model = RandomForestRegressor(n_estimators=50, min_samples_leaf=9, min_samples_split=10, max_features='auto', n_jobs=-1, max_samples=None, random_state=42)  # random state so our results are reproducible # Fit the ideal model ideal_model.fit(x_train, y_train) # + active="" # # Scores for ideal model (trained on all the data) # show_scores(ideal_model) # # O/P # # {'Training MAE': 3620.6459259911267, # # 'Valid MAE': 6120.019427314366, # # 'Training RMSLE': 0.17576470168353692, # # 'Valid RMSLE': 0.251602610379688, # # 'Training R^2': 0.9352797759068263, # # 'Valid R^2': 0.8698400358096663} # + active="" # # Scores on rs_model (only trained on 10000 examples) # show_scores(rs_model) # # O/P # # {'Training MAE': 11600.450217388974, # # 'Valid MAE': 13169.983584964879, # # 'Training RMSLE': 0.49468301761355865, # # 'Valid RMSLE': 0.49954812812940574, # # 'Training R^2': 0.49581349333422886, # # 'Valid R^2': 0.48895962228392753} # - # ### Make predictions on our test data # Import the test data df_test = pd.read_csv("Data/bluebook-for-bulldozers/Test.csv", low_memory=False, parse_dates=["saledate"]) df_test.head().T # Make predictions on the test dataset test_preds = ideal_model.predict(df_test) # ### Preprocessing the data (getting the test dataset in the same format as our training dataset) # def preprocess_data(df): """ Performs transformations on df and returns transformed df. """ df["saleYear"] = df.saledate.dt.year df["saleMonth"] = df.saledate.dt.month df["saleDay"] = df.saledate.dt.day df["saleDayOfWeek"] = df.saledate.dt.dayofweek df["saleDayOfYear"] = df.saledate.dt.dayofyear df.drop("saledate", axis=1, inplace=True) # Fill the numeric rows with median for label, content in df.items(): if pd.api.types.is_numeric_dtype(content): if pd.isnull(content).sum(): # Add a binary column which tells us if the data was missing or not df[label+"_is_missing"] = pd.isnull(content) # Fill missing numeric values with median df[label] = content.fillna(content.median()) # Fill categorical missing data and turn the categories into numbers if not pd.api.types.is_numeric_dtype(content): df[label+"_is_missing"] = pd.isnull(content) # We add +1 to the category code because pandas encodes missing categories as -1 df[label] = pd.Categorical(content).codes + 1 return df # Process the test data df_test = preprocess_data(df_test) df_test.head() x_train.head() # Make predictions on updated test data test_preds = ideal_model.predict(df_test) # We can find how the columns differ using sets set(x_train.columns) - set(df_test.columns) # Manually adjust df_test to have 'auctioneerID_is_missing' column df_test['auctioneerID_is_missing'] = False df_test.head() # Now that our test dataframe has the same features as our training dataframe, we can make predictions!
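#
# The manual column fix above works; as an alternative, you could align the test frame to the
# training columns in a single step. This is only a sketch mirroring what the notebook just
# did: `reindex` adds any training-only columns, fills them with `False` (the natural value
# for the `*_is_missing` indicators), and matches the column order to `x_train`.

# +
df_test_aligned = df_test.reindex(columns=x_train.columns, fill_value=False)
assert list(df_test_aligned.columns) == list(x_train.columns)
# test_preds = ideal_model.predict(df_test_aligned)
# -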
# Make predictions on the test data test_preds = ideal_model.predict(df_test) test_preds # We've made some predictions, but they're not in the same format Kaggle is asking for: # https://www.kaggle.com/c/bluebook-for-bulldozers/overview/evaluation # Format predictions into the same format Kaggle is after df_preds = pd.DataFrame() df_preds["SalesID"] = df_test["SalesID"] df_preds["SalesPrice"] = test_preds df_preds # Export predictions data df_preds.to_csv("Data/bluebook-for-bulldozers/test_predictions.csv", index=False) # ### Feature Importance # # Feature importance seeks to figure out which different attributes of the data were most important when it comes to predicting the **target variable** (SalePrice) # Find feature importance of our best model ideal_model.feature_importances_ # Helper function for plotting feature importance def plot_features(columns, importances, n=20): df = (pd.DataFrame({"features": columns, "feature_importances": importances}) .sort_values("feature_importances", ascending=False) .reset_index(drop=True)) # Plot the dataframe fig, ax = plt.subplots() ax.barh(df["features"][:n], df["feature_importances"][:n]) ax.set_ylabel("features") ax.set_xlabel("feature_importance") ax.invert_yaxis() plot_features(x_train.columns, ideal_model.feature_importances_) df["Enclosure"].value_counts() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Usage Example: Pydantic # # This notebook shows an example of using erdantic with [Pydantic](https://pydantic-docs.helpmanual.io/) models. # # Let's take a look at the models from the `erdantic.examples.pydantic` module. Here's their source code for clarity. import erdantic.examples.pydantic ??erdantic.examples.pydantic # ## Using the CLI # # The fastest way to render a diagram is to use the command-line interface. Below we use IPython's `!` to run a command in the system shell. We pass the full dotted path to the root class of our composition hierarchy, along with an output file path. erdantic will walk the composition graph to find all child classes. # !erdantic erdantic.examples.pydantic.Party -o diagram.png # The format rendered is inferred from the file extension. # ## Using the Python library # # You can also use the erdantic Python library, which lets you inspect the diagram object. The diagram object even automatically renders in Jupyter notebooks as demonstrated below. # + import erdantic as erd from erdantic.examples.pydantic import Party diagram = erd.create(Party) diagram # - diagram.models diagram.edges # You can use the `draw` method to render the diagram to disk. # + diagram.draw("pydantic.svg") # Equivalently, use erd.draw directly from Party # erd.draw(Party, out="pydantic.svg") # - # erdantic uses [Graphviz](https://graphviz.org/), a venerable open-source C library, to create the diagram. Graphviz uses the [DOT language](https://graphviz.org/doc/info/lang.html) for describing graphs. You can use the `to_dot` method to get the DOT representation as a string. # + print(diagram.to_dot()) # Equivalently, use erd.to_dot directly from Party assert diagram.to_dot() == erd.to_dot(Party) # - # ## Terminal Models # # If you have an enormous composition graph and want to chop it up, you can make that work by specifying models to be terminal nodes. # # For the CLI, use the `-t` option to specify a model to be a terminus.
To specify more than one, used repeated `-t` options. So, for example, if you want one diagram rooted by `Party` that terminates at `Quest`, and another diagram that is rooted by `Quest`, you can use the following two shell commands. # # ```bash # erdantic erdantic.examples.pydantic.Party \ # -t erdantic erdantic.examples.pydantic.Quest \ # -o party.png # erdantic erdantic.examples.pydantic.Quest -o quest.png # ``` # When using the Python library, pass your terminal node in a list to the `termini` keyword argument. Below is the Python code for creating diagrams equivalent to the above shell commands. # + from erdantic.examples.pydantic import Quest erd.create(Party, termini=[Quest]) # - erd.create(Quest) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import torch import math import matplotlib import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import GPyOpt import GPy from turbo import TurboM from turbo import Turbo1 import os import matplotlib as mpl import matplotlib.tri as tri import ternary import pickle import datetime from collections import Counter import matplotlib.ticker as ticker from sklearn import preprocessing import pyDOE import random from scipy.stats import norm from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import load_boston from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split import copy import pickle import random import imageio from scipy.spatial import Delaunay import time import tqdm import gpytorch from torch.distributions import Normal from itertools import product import matplotlib.font_manager as font_manager from smt.sampling_methods import LHS, Random # + # X: biomass concentration # g/L # X_0 = 0.1 # S:substrate concentration # g/L # S_0 = 0.1 # CL:dissolved oxygen concentration # P: penicillin concentration # g/L # P_0 = 0 # CO2:carbon dioxide concentration; # H:hydrogen ion concentration for pH # T: temperature. 
C_L_star = 8.26 Y_xs = 0.45 Y_xo = 0.04 Y_ps = 0.90 Y_po = 0.20 K_1 = 10**(-10) K_2 = 7 * 10**(-5) m_X = 0.014 m_o = 0.467 alpha_1 = 0.143 alpha_2 = 4*10**(-7) alpha_3 = 10**(-4) mu_X = 0.092 K_X = 0.15 # K_ox = 2*10**(-2) # K_op = 5*10**(-4) mu_p = 0.005 K_p = 0.0002 K_I = 0.10 p = 3 K = 0.04 k_g = 7 * 10**(3) E_g = 5100 k_d = 10**(33) E_d = 50000 # rou_dot_C_p = 1/1500 # rou_c_dot_C_pc = 1/2000 rou_dot_C_p = 1000 rou_c_dot_C_pc = 1000 r_q1 = 60 r_q2 = 1.6783 * 10**(-4) a = 1000 b = 0.60 alpha = 70 beta = 0.4 lambd = 2.5 * 10**(-4) gamma = 10**(-5) # kelvin T_v = 273 T_o = 373 # CAL/(MOL K) R = 1.9872 # - # # parameters # # # P = 0 initial # # V_max = 180 # V = [60, 120] initial # # X = [0.01, 12] initial # # Q_rxn = 0 initial # # system T = 293 - 303 initial range # F_c range [0, 5] # # S = [0.1, 18] initial # Feed flow rate F [0.01, 0.50] # Feed substrate concentration s_f range [400, 600] # Feed substrate temperature T_f = 293 - 303 # # sufficient oxygen # # H: pH 5 - 7.5 kept constant # # CO2 = 0 initial # # t = 0 initial # # # # # # + # total unit time: hrs t = 2500 V_max = 180 V_limits = [60, 120] X_limits = [0.05, 18] CO2 = 0 T_limits = [293, 303] S_limits = [0.05, 18] F_limits = [0.01, 0.50] s_f_limits = [500, 700] H_limits = [5, 6.5] limits = [V_limits, X_limits, T_limits, S_limits, F_limits, s_f_limits, H_limits] # + def penicilin_exp_BO(X_input): print(X_input) V, X, T, S, F, s_f, H_ = X_input[0],X_input[1],X_input[2],X_input[3], X_input[4], X_input[5], X_input[6] P = 0 CO2 = 0 t = 2500 l_P = [] l_V = [] l_X = [] l_T = [] l_S = [] l_F = [] l_s_f = [] l_H_ = [] l_CO2 = [] l_t = [] l_P.append(P) l_V.append(V) l_X.append(X) l_T.append(T) l_S.append(S) l_F.append(F) l_s_f.append(s_f) l_H_.append(H_) l_CO2.append(CO2) l_t.append(0) H = 10**(-H_) for i in np.arange(t) + 1: F_loss = V * lambd*(np.exp(5*((T - T_o)/(T_v - T_o))) - 1) dV_dt = F - F_loss mu = (mu_X / (1 + K_1/H + H/K_2)) * (S / (K_X * X + S)) * ((k_g * np.exp(-E_g/(R*T))) - (k_d * np.exp(-E_d/(R*T)))) dX_dt = mu * X - (X / V) * dV_dt mu_pp = mu_p * (S / (K_p + S + S**2 / K_I)) dS_dt = - (mu / Y_xs) * X - (mu_pp/ Y_ps) * X - m_X * X + F * s_f / V - (S / V) * dV_dt dP_dt = (mu_pp * X) - K * P - (P / V) * dV_dt dCO2_dt = alpha_1 *dX_dt + alpha_2 * X + alpha_3 # UPDATE P = P + dP_dt V = V + dV_dt X = X + dX_dt S = S + dS_dt CO2 = CO2 + dCO2_dt l_P.append(P) l_V.append(V) l_X.append(X) l_T.append(T) l_S.append(S) l_F.append(F) l_s_f.append(s_f) l_H_.append(H_) l_CO2.append(CO2) l_t.append(i) if V > V_max: # print('Too large V') break if S < 0: # print('Too small S') break if dP_dt < 10e-12: # print('Converged P') break # print('final results: ' + 'P = '+str(np.round(P, 2)) +', S = '+str(np.round(S, 2)) + ', X = ' + str(np.round(X, 2)) + ', V = ' + str(np.round(V, 2)) + ', t = ' + str(i)) # GpyOpt does minimization only print(P) return -P # - lowerb = np.array(limits)[:,0] upperb = np.array(limits)[:,1] dim = len(lowerb) print(dim) assert len(lowerb) == len(upperb) lowerb upperb class penicillin: def __init__(self, dim=7): self.dim = dim self.lb = lowerb self.ub = upperb def __call__(self, x): # assert len(x) == self.dim # assert x.ndim == 1 # assert np.all(x <= self.ub) and np.all(x >= self.lb) pred_val = penicilin_exp_BO(x) return pred_val seed_list = [15, 361, 5366, 9485, 8754, 1268, 9914, 8450, 9498, 5181, 1850, 4561, 3579, 9359, 3958, 2005, 6917, 2630, 7210, 1565, 8258, 5267, 7658, 1256, 7511, 4783, 9130, 1870, 962, 3558, 3273, 9770, 1872, 2649, 3725, 6433, 1248, 4856, 9881, 8410, 7137, 8056, 8478, 404, 8299, 1748, 9133, 
4210, 5993, 1084, 1047, 5673, 7261, 8370, 3850, 7228, 2356, 5004, 6573, 6919, 5437, 3879, 8421, 2817, 8141, 7277, 8473, 9281, 8217, 8537, 8110, 3530, 2528, 4249, 6473, 5902, 1733, 3562, 6232, 9180, 9581, 4863, 6455, 6267, 397, 6571, 7682, 3655, 7695, 2154, 6157, 6971, 2173, 9005, 2441, 6703, 1639, 8149, 3067, 2846, 2169, 1028, 4480, 8621, 5321, 8092, 2448, 3002, 3640, 252, 7340, 3230, 5219, 5445, 520, 8960, 8561, 1950, 7742, 5925, 7894, 6451, 8327, 6679, 1567, 9964, 221, 7288, 6503, 6733, 3473, 5392, 5780, 7941, 3186, 2358, 8525, 7198, 2108, 8808, 4679, 1798, 3816, 7119, 3341, 452, 7081, 4490, 8964, 5409, 3689, 3374, 8459, 1725, 5356, 941, 1745, 138, 7065, 1700, 6068, 2405, 7856, 8817, 6921, 9942, 7909, 8551, 7599, 1782, 3102, 7489, 9145, 5203, 9243, 5665, 7459, 8400, 4465, 8331, 1222, 2967, 6165, 2063, 4569, 2748, 3068, 8000, 3487, 6941, 3248, 7922, 1255, 2618, 6029, 620, 1367, 3882, 1943, 5862, 3734, 174, 1642, 2565, 6276, 6942, 6643, 6883, 5610, 6575, 3474, 517, 1763, 5033, 5480, 5088, 1766, 804, 9661, 2237, 6250, 7481, 4157, 366, 5455, 7936, 1637, 7163, 6330, 6269, 1291, 7439, 9787, 3993, 4212, 6818, 9652, 3533, 928, 719, 8958, 2662, 4037, 2628, 5675, 6191, 1610, 7399, 3789, 48, 1968, 8677, 9064, 5403, 4695, 7317, 9382, 4873, 9049, 2668, 3669, 899, 1329, 7221, 9343, 3234, 4563, 5809, 4553, 2623, 6106, 4001, 942, 5719, 4114, 9164, 3163, 9751, 8709, 6675, 1767, 9868, 7828, 3809, 8566, 1660, 2135, 5726, 3829, 7493, 3749, 77, 6787, 5530, 3134, 9977, 6182, 8207, 850, 826, 3396, 9934, 1265, 843, 7561, 1720, 5175, 5553, 3554, 5836, 7350, 542, 1328, 601, 2480, 2459, 8026, 6563, 5129, 5901, 9656, 761, 1077, 6627, 5774, 3279, 6653, 5451, 1158, 5450, 4130, 6759, 6246, 1718, 9081, 8953, 743, 4201, 718, 1365, 5838, 1259, 5009, 720, 4619, 6803, 8124, 4072, 8249, 2631, 4147, 9225, 203, 5261, 5128, 6889, 9664, 7354, 9603, 8156, 6055, 4038, 3824, 6272, 1011, 1189, 2289, 5664, 5616, 3793, 5749, 9591, 968, 2530, 9194, 6906, 6721, 4420, 5634, 4819, 1092, 324, 6, 5882, 8999, 5585, 4094, 8368, 8620, 4631, 4310, 2464, 4513, 125, 2622, 9695, 544, 7239, 31, 3275, 7290, 2041, 7820, 4366, 5184, 33, 912, 7470, 1442, 6793, 9648, 2151, 1368, 432, 2865, 499, 3988, 5994, 5257, 5976, 5949, 5680, 4252, 838, 7574, 8845, 22, 3254, 4979, 4428, 5690, 6822, 2840, 6018, 2904, 4160, 5176, 8253, 3617, 7735, 910, 8454, 1103, 2923, 7863, 3389, 3895, 8644, 316, 2760, 4347, 9888, 1179, 3062, 8094, 7526, 3568, 8034, 482, 1892, 5677, 9822, 8989, 3141, 9000, 5354, 5177, 8086, 8652, 3190, 9626, 5542, 6047, 3367, 7309, 8704, 4628, 8375, 5778, 6476, 3961, 2933, 6760, 5095, 2638, 7906, 8131, 5811, 6870, 4260, 1452, 1132, 7245, 3353, 6566, 5757, 1562, 5834, 6661, 960, 4101, 391, 8147, 4445, 8009, 1109, 2258, 2737, 91, 2555, 9340, 4134, 9109, 6274, 8689, 8562, 3800, 2161, 7634, 5159, 668, 2456, 5540, 4965, 6882, 388, 8736, 523, 236, 1201, 5635, 4721, 9932, 5613, 1061, 5348, 6720, 7998, 6195, 5367, 3925, 9482, 5596, 5524, 7844, 251, 1231, 7908, 5889, 8865, 4229, 2600, 6134, 9704, 8743, 9647, 6048, 4775, 3147, 5477, 6040, 8260, 3286, 3704, 6095, 3516, 5499, 5916, 5715, 4462, 9056, 7426, 8968, 4689, 936, 3414, 6673, 5752, 5371, 9925, 4654, 7626, 3492, 2012, 3481, 3123, 3404, 1065, 1514, 9027, 16, 6648, 3909, 6062, 3670, 5697, 6150, 1236, 4568, 1258, 5420, 323, 1022, 7303, 2314, 421, 1899, 1342, 6558, 8666, 1708, 9275, 7321, 1846, 4250, 4915, 6877, 325, 2250, 7443, 479, 3682, 2957, 1491, 1472, 271, 3398, 190, 3364, 5654, 4341, 3591, 8104, 9057, 1829, 1049, 7566, 171, 5870, 7688, 8812, 965, 6738, 6524, 9699, 5178, 6110, 731, 364, 6354, 2829, 
6520, 4545, 1085, 2710, 5966, 2481, 3089, 3973, 6647, 782, 6546, 4791, 9685, 6861, 5840, 6613, 2761, 1186, 8388, 7542, 8461, 3981, 3556, 5150, 5347, 3622, 475, 6765, 9549, 4625, 9562, 5815, 139, 9255, 1814, 2615, 2392, 8246, 4643, 5148, 2604, 3094, 505, 7161, 3018, 6534, 6947, 9627, 1089, 1498, 9946, 5158, 5854, 3542, 9502, 2897, 3724, 9553, 8988, 1156, 8906, 3718, 2402, 1198, 1362, 9364, 4204, 3549, 6347, 7122, 5865, 8250, 3876, 7946, 4736, 3320, 328, 6051, 698, 5974, 245, 5277, 145, 5691, 4627, 7661, 9272, 3612, 6026, 3007, 2171, 6600, 1622, 5602, 8481, 3452, 6094, 2939, 302, 1636, 9413, 7421, 5174, 7933, 8268, 2582, 4662, 5456, 561, 8318, 9777, 4159, 5495, 8881, 228, 1706, 7590, 7815, 7393, 8320, 7513, 9330, 7418, 6959, 9908, 129, 2787, 1625, 6151, 5212, 6739, 3409, 4216, 8035, 9945, 8225, 6978, 2694, 6385, 6124, 9991, 6828, 6186, 4174, 2672, 1352, 1412, 9318, 94, 5612, 6694, 3472, 1554, 4802, 6021, 4685, 6020, 8443, 6608, 6271, 2370, 3075, 5791, 1118, 7983, 9706, 4787, 8014, 5001, 6649, 3855, 4325, 8728, 1273, 2518, 8543, 7529, 7078, 1605, 5819, 8381, 9312, 3130, 8942, 4866, 2624, 1626, 6309, 7202, 5016, 5048, 3052, 2468, 4266, 5924, 4876, 7017, 4995, 6767, 8326, 1680, 2077, 3456, 2254, 3844, 8820, 3719, 5793, 6676, 556, 1433, 4706, 5794, 3574, 5239, 6641, 8300, 6430, 1088, 624, 8615, 9451, 7370, 2659, 7297, 2182, 1866, 2927, 6659, 4717, 6023, 7777, 6550, 8161, 3274, 4379, 6581, 6652, 5326, 8893, 898, 4497, 1272, 7992, 3001, 5247, 1449, 5896, 6260, 6597, 9832, 8152, 7640, 49, 6275, 539, 173, 8212, 6698, 7639, 9642, 4834, 1311, 7586, 6511, 7969, 904, 3730, 7892, 7987, 9512, 5084, 4415, 5556, 3757, 8572, 481, 1752, 8619, 9265, 374, 2411, 7572, 4494, 9193, 5241, 5325, 3104, 6289, 3510, 6960, 4708, 9474, 8669, 3253, 6234, 1034, 3076, 1845, 8025, 9719, 1746, 5238, 9808, 8750, 2909, 5683, 3603, 9733, 7903, 6262, 8723, 8698, 6450, 4584, 9515, 7556, 6282, 808, 7094, 7250, 4231, 5713, 6364, 4610, 8323, 3462, 37, 2574, 488, 6169, 2902, 9778, 1542, 222, 9517, 3727, 1000, 2644, 1863, 4681, 809, 1067, 2807, 9743, 5426, 7320, 9726, 9667, 1553, 5578, 3148, 6775, 747, 9279, 9623, 7660, 4739, 4715, 2060, 8489, 8243, 771, 5988, 7606, 1042, 7060, 604, 2347, 7707, 1539, 7260, 3587, 5679, 1376, 5269, 4993, 6323, 67, 9558, 2729, 5973, 2619, 64, 8613, 4592, 4337, 5817, 9379, 1030, 977, 2718, 3288, 5990, 8784, 666, 2294, 8057, 5316, 2106, 254, 4447, 6535, 7282, 5592, 2700, 2452, 3350, 3132, 1336, 3894, 8350, 6646, 26, 4650, 3377, 508, 6133, 8495, 8521, 8186, 675, 2260, 2142, 6409, 2834, 5595, 1716, 7088, 3181, 5074, 1161, 2792, 8764, 5550, 5501, 617, 5643, 1159, 2848, 3904, 7817, 1884, 8405, 5114, 865, 5271, 8004, 887, 710, 8789, 6115, 6461, 9367, 1536, 796, 5483, 3289, 1537, 1781, 8535, 915, 5709, 4455, 9616, 7343, 6122, 5655, 5329, 9101, 8039, 2764, 1279, 6768, 6967, 547, 9931, 1604, 4059, 9715, 860, 6610, 418, 5132, 2343, 3304, 3250, 4431, 4457, 6804, 2393, 4369, 4020, 4302, 7331, 1277, 9830, 950, 6417, 1358, 7208, 1040, 522, 3149, 2526, 5773, 439, 9550, 1638, 7405, 7788, 8846, 2134, 6614, 4360, 2228, 7950, 817, 2914, 717, 1657, 4520, 5850, 2253, 4905, 8420, 7723, 783, 6201, 806, 844, 5080, 6266, 3814, 511, 493, 8646, 4304, 1658, 6771, 9563, 2585, 4193, 6400, 8775, 5204, 2560, 6790, 9499, 3820, 837, 7630, 9202, 7898, 9154, 3482, 1721, 124, 5193, 4016, 2319, 632, 4008, 1251, 9493, 3902, 9788, 6066, 978, 95, 2634, 7982, 7505, 6121, 979, 8417, 3675, 9844, 8515, 5517, 9545, 2094, 8399, 2440, 540, 1552, 1635, 7084, 1958, 3741, 6376, 6166, 1343, 1998, 6179, 1505, 4198, 9441, 4485, 9939, 2588, 2328, 577, 2287, 
9860, 5688, 274, 6632, 4984, 5971, 9198, 5291, 3415, 6456, 2028, 3766, 386, 6225, 856, 1719, 3490, 6303, 1214, 78, 2011, 8366, 9043, 7895, 4934, 8223, 4918, 5303, 7234, 581, 8226, 7233, 9252, 1898, 7995, 8317, 6938, 7814, 1760, 1874, 2378, 8448, 2870, 6922, 4452, 5788, 303, 3196, 1920, 4580, 1683, 1351, 5146, 8755, 6879, 8228, 5954, 143, 3537, 9785, 7323, 8516, 363, 9271, 9435, 1791, 3974, 6280, 384, 6297, 8720, 4748, 7240, 8810, 2984, 4738, 6190, 9149, 9128, 8757, 1686, 1386, 7772, 1025, 1732, 1556, 4890, 5465, 4012, 8596, 2557, 5470, 7724, 2310, 1597, 3985, 7498, 494, 1405, 1596, 4120, 3607, 6291, 6681, 8534, 8364, 9773, 8241, 3756, 7816, 663, 863, 9892, 1210, 7295, 8994, 6697, 9159, 8480, 8519, 5229, 2968, 6078, 7510, 2971, 2838, 156, 2670, 8859, 1710, 2017, 755, 6508, 6468, 162, 2880, 7872, 1468, 6293, 4334, 1420, 5144, 6954, 8112, 4826, 5706, 5813, 6878, 6462, 9624, 8465, 9227, 9096, 4460, 9713, 8836, 7728, 5345, 4816, 2290, 4235, 5504, 829, 9232, 8874, 2883, 4702, 2272, 8089, 4548, 6454, 8908, 5864, 9671, 9276, 5956, 9307, 9460, 5027, 7030, 5781, 1467, 3118, 1074, 7131, 735, 99, 5300, 8705, 8530, 7592, 1648, 692, 5000, 6170, 2714, 672, 6951, 3109, 9537, 9740, 2406, 42, 4053, 2988, 8254, 973, 6136, 3830, 9705, 2089, 3025, 8334, 2901, 8936, 6103, 9010, 6657, 6463, 6667, 425, 6352, 2024, 9282, 3090, 5952, 8358, 7827, 9463, 526, 4588, 5365, 5472, 2569, 9285, 9174, 5002, 5753, 3890, 5566, 3198, 2243, 6039, 468, 8037, 8622, 5330, 6069, 4345, 3956, 6259, 3057, 7314, 7332, 2673, 1815, 592, 4434, 218, 4699, 4376, 7700, 3297, 2265, 764, 769, 8579, 3737, 4663, 3246, 7554, 7989, 1245, 5256, 6478, 5771, 5101, 9009, 1066, 2701, 7719, 2422, 3200, 7324, 6123, 9968, 6131, 3366, 4872, 2050, 8129, 5734, 6809, 8055, 4653, 5694, 3726, 1670, 8079, 2274, 3802, 9141, 7763, 4519, 4502, 5764, 7024, 8424, 1785, 7383, 8732, 9783, 6578, 9567, 4358, 497, 5038, 1435, 2871, 1397, 9296, 5984, 8138, 7173, 8166, 8232, 9110, 6413, 7970, 6298, 9586, 5565, 7703, 9870, 5630, 4118, 9221, 5007, 2170, 9693, 3582, 3646, 277, 2692, 4290, 2083, 208, 2248, 9417, 3411, 9104, 4188, 6497, 6750, 3027, 5466, 2225, 9163, 7442, 54, 9031, 4815, 3913, 7588, 5947, 2774, 3778, 6204, 9767, 3195, 7533, 4790, 5375, 3352, 872, 6135, 6574, 7205, 6810, 191, 1419, 3408, 2109, 6708, 5551, 2491, 7018, 1946, 8059, 463, 8284, 1656, 2080, 9938, 8163, 5441, 1266, 9117, 9896, 2001, 5299, 403, 7598, 7669, 7427, 1735, 828, 8794, 3298, 6098, 27, 1951, 5315, 7326, 5005, 815, 1480, 6553, 8536, 7424, 498, 8654, 7960, 7031, 9734, 6242, 1825, 3416, 4869, 122, 6713, 5068, 5036, 969, 8725, 6746, 3317, 4843, 9518, 9319, 9501, 5355, 8756, 8027, 7318, 3345, 2821, 1406, 8132, 9470, 3028, 7665, 5579, 3633, 6812, 2928, 1188, 6705, 243, 6905, 9222, 8307, 9529, 2598, 5687, 4086, 3357, 9758, 8277, 4175, 4875, 9620, 4483, 7096, 781, 5989, 2070, 5397, 9231, 9015, 5137, 5717, 3735, 3605, 7614, 2082, 2515, 4132, 2793, 6791, 9003, 7145, 3686, 1164, 6853, 342, 70, 6370, 102, 6482, 7086, 9520, 4395, 5914, 8744, 4473, 5072, 7548, 5278, 346, 6829, 1509, 6107, 4044, 1538, 5637, 2067, 5516, 4538, 9115, 7633, 5963, 5681, 4928, 4052, 6472, 1409, 8638, 1244, 5432, 4904, 7879, 9401, 5478, 7994, 5260, 8897, 1310, 7782, 97, 825, 7603, 9293, 4639, 8878, 8462, 491, 2469, 2576, 6022, 1859, 8930, 7448, 304, 1598, 2002, 121, 780, 3796, 7014, 9001, 6580, 2818, 1571, 9412, 9728, 8016, 1921, 3174, 6012, 5089, 5698, 3657, 9893, 7573, 1667, 4672, 8330, 2300, 2708, 8740, 9203, 8598, 9086, 4476, 4078, 2078, 9486, 6116, 4583, 5094, 57, 2732, 8125, 3381, 6268, 6527, 2059, 5946, 6100, 658, 7779, 8476, 
6709, 9880, 4018, 7116, 7445, 1149, 5098, 1983, 4506, 2915, 1411, 2359, 5173, 5746, 5032, 655, 5953, 1878, 1488, 2053, 9024, 350, 3835, 2685, 2160, 8351, 6692, 7377, 7299, 5386, 6332, 9800, 9798, 582, 9630, 1885, 9152, 9347, 2755, 2177, 1516, 1218, 1851, 3746, 6217, 1341, 5856, 3979, 8422, 7891, 2687, 3182, 6740, 945, 8464, 8383, 2983, 2214, 6794, 8961, 8803, 5755, 9557, 4029, 5604, 9570, 4755, 3782, 6036, 6891, 7907, 1246, 9083, 9287, 3987, 3402, 7019, 8633, 9590, 6748, 5416, 9827, 9653, 2639, 8509, 4825, 7047, 5245, 1886, 3040, 4973, 696, 2985, 9195, 5085, 2907, 6200, 5564, 1532, 1853, 4177, 7176, 5039, 1392, 3302, 3928, 9105, 6350, 703, 4251, 8655, 8354, 8745, 9289, 1566, 9069, 8032, 1162, 7972, 1582, 3999, 2860, 1069, 3019, 7962, 3886, 9478, 1490, 2016, 3593, 7832, 9422, 2698, 2740, 2187, 1004, 9430, 484, 6064, 7316, 161, 2782, 9963, 9226, 3112, 1561, 6875, 9815, 9750, 5453, 8998, 5763, 8504, 6102, 1346, 63, 3203, 8871, 9979, 5514, 6338, 5307, 9948, 9366, 670, 4372, 1786, 7624, 7313, 9674, 867, 69, 6254, 7382, 9527, 3866, 6944, 6168, 3772, 8264, 5747, 2249, 73, 8101, 1494, 6855, 4060, 8096, 354, 5800, 7389, 9905, 8851, 8045, 6372, 6556, 6770, 5738, 4092, 1726, 177, 4165, 2771, 9230, 7289, 7440, 3438, 4762, 2285, 6061, 7388, 7099, 258, 2323, 3769, 9065, 1621, 8369, 5849, 9585, 4412, 9561, 2858, 9535, 1111, 3326, 3188, 1083, 3161, 1171, 2805, 3362, 5243, 4646, 2966, 8980, 9666, 276, 7975, 1665, 7391, 6888, 9048, 6279, 9368, 9807, 1023, 2609, 2299, 6988, 8272, 8618, 292, 4333, 247, 4148, 349, 9473, 1145, 4367, 9730, 4213, 4004, 9971, 6622, 6004, 47, 2970, 9954, 5890, 180, 724, 7910, 6405, 4287, 810, 2316, 1191, 6030, 8697, 1776, 6894, 4966, 3392, 4867, 2996, 3968, 7428, 2875, 5481, 201, 7649, 9016, 4238, 6684, 2876, 2994, 2511, 1372, 3805, 6448, 5975, 1297, 6015, 5922, 1027, 3016, 7089, 5847, 8798, 799, 2252, 5502, 309, 9240, 773, 982, 5423, 2906, 4709, 6903, 5858, 8816, 4400, 8587, 4718, 5087, 4621, 4069, 5964, 8386, 8563, 3467, 5380, 7683, 1690, 1399, 5427, 6744, 628, 7121, 6163, 6167, 2788, 1313, 9241, 503, 2605, 7291, 6229, 8195, 148, 1337, 7160, 9615, 3256, 9178, 25, 294, 3532, 7926, 8776, 6206, 9264, 3327, 8608, 583, 9374, 2371, 4320, 1640, 412, 7847, 9540, 3767, 2163, 3111, 8921, 9445, 7076, 6560, 9192, 7232, 1315, 5640, 964, 6561, 1826, 5327, 9794, 8835, 1146, 6161, 2648, 1713, 7570, 5040, 8052, 3893, 9320, 8298, 6691, 41, 7953, 5003, 6745, 5561, 6408, 9902, 4792, 7635, 8203, 204, 674, 6253, 8902, 9327, 3691, 3243, 6587, 5293, 319, 3500, 3219, 6986, 5438, 8338, 5394, 3343, 6852, 1270, 9384, 7597, 4126, 196, 6322, 3336, 5605, 9346, 6729, 9242, 3950, 7840, 8192, 332, 3629, 1865, 9802, 9160, 5372, 2603, 5724, 7878, 9560, 5140, 2341, 3531, 6842, 4954, 4006, 1306, 29, 9124, 8017, 4605, 9455, 1912, 7263, 2974, 606, 821, 8084, 5727, 1010, 6530, 8416, 5802, 7540, 5454, 4827, 976, 4486, 3128, 9261, 8455, 9817, 5280, 516, 4080, 6965, 4276, 5263, 4464, 3330, 2149, 5827, 4542, 576, 7862, 1122, 8487, 5190, 905, 2960, 605, 7701, 645, 6929, 6626, 4321, 7395, 5462, 9672, 5111, 610, 8291, 6630, 9571, 3742, 1283, 819, 5663, 7378, 1669, 5897, 1216, 50, 8559, 8210, 682, 7111, 5880, 7296, 1900, 6562, 2414, 3347, 7744, 9940, 189, 2455, 7507, 3283, 1271, 5740, 3026, 9426, 4254, 2835, 45, 5237, 8447, 2808, 7438, 3659, 7138, 5281, 7747, 3484, 9108, 2806, 6860, 6214, 7978, 4191, 518, 5861, 2246, 2587, 4391, 5071, 4674, 2470, 568, 5543, 9472, 5986, 1910, 3012, 622, 9936, 2741, 8604, 9121, 5006, 1611, 760, 235, 5893, 2112, 2035, 9239, 9363, 5413, 2148, 3215, 6177, 1237, 3905, 6998, 3951, 353, 1867, 8648, 
335, 9126, 7500, 5395, 4406, 7579, 5686, 8685, 7384, 7380, 5043, 7360, 2218, 1868, 407, 293, 7126, 6399, 8872, 5907, 4182, 4067, 5609, 7676, 7754, 2410, 5458, 8909, 7375, 2849, 5583, 2946, 4197, 7114, 8453, 9050, 6784, 1915, 301, 9218, 8532, 4449, 9073, 3923, 6522, 2501, 4075, 2682, 4033, 3393, 6779, 4602, 492, 6037, 84, 2204, 209, 8657, 5023, 459, 9966, 193, 7335, 8863, 4354, 2754, 4572, 1575, 3555, 4657, 9403, 1407, 4207, 3803, 6414, 4179, 9789, 1778, 159, 5737, 405, 4880, 9355, 6925, 6557, 3454, 5735, 2554, 7120, 1064, 8738, 4848, 330, 9602, 4609, 1966, 9118, 8843, 2230, 1655, 5536, 504, 9324, 5207, 2599, 9984, 119, 4651, 2201, 9955, 7337, 2329, 5012, 1674, 9338, 9394, 8099, 8491, 7562, 2007, 5292, 4299, 5490, 1046, 6813, 4936, 9041, 4803, 7322, 5210, 5108, 6197, 101, 9703, 7708, 256, 1363, 637, 7532, 9645, 3784, 4874, 7637, 4829, 9436, 2079, 8976, 6407, 4937, 441, 9779, 8882, 2727, 9208, 9393, 4932, 6747, 2263, 7098, 5885, 5875, 2238, 5279, 5510, 7870, 286, 5197, 8345, 1024, 5611, 4122, 8898, 6339, 3201, 1192, 5684, 8019, 9631, 8229, 5759, 3329, 2283, 4648, 2137, 4371, 263, 951, 8178, 2055, 4976, 3856, 6079, 8889, 2202, 4383, 4155, 5063, 299, 5552, 6516, 3139, 9017, 4205, 2333, 1993, 7028, 9530, 7804, 8593, 6808, 4747, 8786, 1048, 8426, 1559, 4961, 2887, 1821, 4892, 2397, 2045, 1204, 5943, 5628, 4817, 9901, 6943, 9066, 3110, 7956, 7154, 6827, 6442, 7675, 8356, 1138, 2420, 175, 3499, 71, 1239, 6375, 1830, 5992, 6886, 1114, 3535, 6953, 7143, 1475, 3638, 3649, 2802, 1909, 6496, 2568, 9503, 4794, 2690, 8075, 1736, 2678, 8699, 1312, 7284, 5707, 1193, 1354, 5632, 4711, 2671, 9996, 2496, 2353, 1893, 1768, 7130, 1163, 6363, 3602, 3864, 1432, 451, 5201, 8815, 8676, 9491, 1203, 1628, 590, 3529, 4444, 8706, 6471, 1526, 6846, 9856, 7276, 1423, 7244, 5231, 9718, 3807, 694, 1112, 1789, 5769, 2463, 8008, 3406, 4879, 9909, 3652, 2273, 7069, 1714, 5720, 3153, 8029, 5703, 2877, 9294, 6212, 1211, 3552, 9568, 2888, 1989, 2935, 6381, 223, 8297, 7796, 8700, 1603, 6000, 4206, 1426, 3936, 1293, 2736, 96, 224, 2540, 3826, 4181, 4949, 7937, 5298, 9921, 1087, 9673, 348, 1580, 6857, 2657, 4230, 2317, 5425, 3150, 5951, 8199, 1079, 2510, 5378, 9175, 4138, 8973, 1119, 4660, 410, 1908, 6844, 3739, 5572, 5754, 3143, 775, 6427, 4410, 8185, 4095, 6470, 1854, 5768, 6762, 5525, 6598, 3443, 8319, 7487, 8602, 4156, 4189, 7979, 8122, 392, 7229, 3493, 7264, 8165, 3495, 8082, 3311, 2268, 631, 6196, 7062, 8631, 2697, 2564, 7281, 1005, 2796, 5013, 3365, 2288, 7201, 7236, 5534, 9072, 2375, 4089, 2941, 1137, 2211, 2918, 7601, 4083, 5692, 1869, 4446, 643, 4894, 740, 3989, 3342, 3828, 2572, 5920, 2227, 419, 3483, 9935, 5384, 5218, 4737, 5937, 533, 103, 8866, 8412, 6665, 4889, 623, 7984, 748, 4697, 4336, 7465, 5476, 6577, 3328, 2658, 6621, 1824, 4145, 8840, 1589, 793, 4390, 7341, 660, 9004, 9089, 1957, 3590, 5932, 8494, 8046, 4823, 398, 8837, 1247, 6798, 2093, 5702, 5070, 5373, 8365, 3162, 2251, 2477, 4050, 6690, 1113, 4124, 7092, 1704, 1806, 7267, 9184, 6139, 2164, 6104, 8658, 8038, 3994, 7966, 3679, 6991, 9594, 6923, 4319, 9608, 4554, 8242, 6281, 4288, 3138, 2332, 3632, 4127, 5728, 4119, 3506, 5026, 6211, 6499, 9999, 5587, 4942, 6159, 393, 722, 6974, 729, 5786, 1936] n_ensemble = 30 # + y_collection = [] for s in seed_list: if len(y_collection) == n_ensemble: break print('initializing seed = ' +str(seed_list.index(s))) random.seed(s) np.random.seed(s) torch.manual_seed(s) turbo_m = TurboM(f=f, # Handle to objective function lb=f.lb, # Numpy array specifying lower bounds ub=f.ub, # Numpy array specifying upper bounds n_init=15, # 
Number of initial points from a symmetric Latin hypercube design max_evals=1050, # Maximum number of evaluations n_trust_regions=2, # Number of trust regions batch_size=30, # Batch size used by TuRBO verbose=True, # Print information from each batch use_ard=True, # Set to true if you want to use ARD for the GP kernel max_cholesky_size=2000, # When we switch from Cholesky to Lanczos n_training_steps=30, # Number of steps of ADAM to learn the hyperparameters min_cuda=1024, # Run on the CPU for small datasets device="cpu", # "cpu" or "cuda" dtype="float64", # float64 or float32 ) turbo_m.optimize() y_collection.append(turbo_m.fX) print('Finished seed') np.save('Penicilin_TurBO2.npy', y_collection) # -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Copyright 2020 Verily Life Sciences LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# # Trial Specification Demo
#
# The first step to use the Baseline Site Selection Tool is to specify your trial.
#
# All data in the Baseline Site Selection Tool is stored in [xarray.DataArray](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) datasets. This is a [convenient datastructure](http://xarray.pydata.org/en/stable/why-xarray.html) for storing multidimensional arrays with different labels, coordinates, or attributes. You don't need to have any expertise with xr.Datasets to use the Baseline Site Selection Tool. The goal of this notebook is to walk you through the construction of the dataset that contains the specification of your trial.
#
# This notebook has several sections:
# 1. **Define the Trial**. In this section you will load all aspects of your trial, including the trial sites, the expected recruitment demographics for each trial site (e.g. from a census), as well as the rules for how the trial will be carried out.
# 2. **Load Incidence Forecasts**. In this section you will load forecasts for covid incidence at the locations of your trial. We highly recommend using forecasts that are as local as possible for the sites of the trial. There is significant variation in covid incidence among counties in the same state, and taking the state (province) average can be highly misleading. Here we include code to preload county-level forecasts from the US Centers for Disease Control. The trial planner should include whatever forecasts they find most compelling.
# 3. **Simulate the Trial**. Given the incidence forecasts and the trial rules, the third section will simulate the trial.
# 4. **Optimize the Trial**. Given the parameters of the trial within our control, the next section asks whether we can set those parameters to make the trial meet our objective criteria, for example most likely to succeed or to succeed as quickly as possible. We have written a set of optimization routines for optimizing different types of trials.
#
# We write out different trial plans, which you can then examine interactively in the second notebook in the Baseline Site Selection Tool. That notebook lets you visualize how the trial is proceeding at a per-site level and experiment with what will happen when you turn up or down different sites.
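#
# (Illustrative aside, not part of the original notebook.) If xarray is new to you, the short sketch below shows the kind of labeled, multidimensional array the tool is built around: values are indexed by named dimensions and coordinate labels ("location", "time") rather than by bare integer positions. The site names and numbers here are made up purely for illustration.

# +
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical data: weekly recruitment capacity for two sites over three weeks.
example_capacity = xr.DataArray(
    np.array([[10., 10., 12.], [5., 6., 6.]]),
    dims=('location', 'time'),
    coords={'location': ['site_a', 'site_b'],
            'time': pd.date_range('2021-05-15', periods=3, freq='W')},
    name='capacity')

# Label-based selection and reductions, with no manual index bookkeeping.
print(example_capacity.sel(location='site_a'))
print(example_capacity.sum('time'))
# -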
# # If you have questions about how to implement these steps for your clinical trial, or there are variations in the trial specification that are not captured with this framework, please contact for additional help. # ## Imports # + import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns sns.set_style('ticks') import functools import importlib.resources import numpy as np import os import pandas as pd pd.plotting.register_matplotlib_converters() import xarray as xr from IPython.display import display # bsst imports from bsst import demo_data from bsst import io as bsst_io from bsst import util from bsst import optimization from bsst import sim from bsst import sim_scenarios from bsst import public_data # - # ## Helper methods for visualization # + def plot_participants(participants): time = participants.time.values util.sum_all_but_dims(['time'], participants).cumsum('time').plot() plt.title('Participants recruited (both control and treatment arm)') plt.xlim(time[0], time[-1]) plt.ylim(bottom=0) plt.show() def plot_events(events): time = events.time.values events.cumsum('time').plot.line(x='time', color='k', alpha=.02, add_legend=False) for analysis, num_events in c.needed_control_arm_events.to_series().items(): plt.axhline(num_events, linestyle='--') plt.text(time[0], num_events, analysis, ha='left', va='bottom') plt.ylim(0, 120) plt.xlim(time[0], time[-1]) plt.title(f'Control arm events\n{events.scenario.size} simulated scenarios') plt.show() def plot_success(c, events): time = c.time.values success_day = xr.DataArray(util.success_day(c.needed_control_arm_events, events), coords=(events.scenario, c.analysis)) fig, axes = plt.subplots(c.analysis.size, 1, sharex=True) step = max(1, int(np.timedelta64(3, 'D') / (time[1] - time[0]))) bins = mpl.units.registry[np.datetime64].convert(time[::step], None, None) for analysis, ax in zip(c.analysis.values, axes): success_days = success_day.sel(analysis=analysis).values np.where(np.isnat(success_days), np.datetime64('2050-06-01'), success_days) ax.hist(success_days, bins=bins, density=True) ax.yaxis.set_visible(False) # subtract time[0] to make into timedelta64s so that we can take a mean/median median = np.median(success_days - time[0]) + time[0] median = pd.to_datetime(median).date() ax.axvline(median, color='r') ax.text(time[0], 0, f'{analysis}\n{median} median', ha='left', va='bottom') plt.xlabel('Date when sufficient statistical power is achieved') plt.xlim(time[0], time[-1]) plt.xticks(rotation=35) plt.show() # - # # 1. Define the trial # # ## Choose the sites # A trial specification consists a list of sites, together with various properties of the sites. # # For this demo, we read demonstration data embedded in the Baseline Site Selection Tool Python package. Specifically, this information is loaded from the file `demo_data/site_list1.csv`. Each row of this file contains the name of a site, as well as the detailed information about the trial. In this illustrative example, we pick sites in real US counties. Each column contains the following information: # # * `opencovid_key` . This is a key that specifies location within [COVID-19 Open Data](https://github.com/GoogleCloudPlatform/covid-19-open-data). It is required by this schema because it is the way we join the incidence forecasts to the site locations. # * `capacity`, the number of participants the site can recruit each week, including both control arm and treatment arms. 
For simplicity, we assume this is constant over time, but variable recruitment rates are also supported. (See the construction of the `site_capacity` array below). # * `start_date`. This is the first date on which the site can recruit participants. # * The proportion of the population in various demographic categories. For this example, we consider categories for age (`over_60`), ethnicity (`black`, `hisp_lat`), and comorbidities (`smokers`, `diabetes`, `obese`). **Here we just fill in demographic information with random numbers.** We assume different categories are independent, but the data structure supports complex beliefs about how different categories intersect, how much each site can enrich for different categories, and different infection risks for different categories. These are represented in the factors `population_fraction`, `participant_fraction`, `incidence_scaler`, and `incidence_to_event_factor` below. In a practical situation, we recommend that the trial planner uses accurate estimates of the populations for the different sites they are drawing from. # + with importlib.resources.path(demo_data, 'site_list1.csv') as p: demo_data_file_path = os.fspath(p) site_df = pd.read_csv(demo_data_file_path, index_col=0) site_df.index.name = 'location' site_df['start_date'] = pd.to_datetime(site_df['start_date']) display(site_df) # Add in information we have about each county. site_df = pd.concat([site_df, public_data.us_county_data().loc[site_df.opencovid_key].set_index(site_df.index)], axis=1) # - # ## Choose trial parameters # The trial requires a number of parameters that have to be specified to be able to simulate what will happen in the trial: These include: # # # * `trial_size_cap`: the maximum number of participants in the trial (includes both control and treatment arms) # * `start_day` and `end_day`: the boundaries of the time period we will simulate. # * `proportion_control_arm`: what proportion of participants are in the control arm. It's assumed that the control arm is as uniformly distributed across locations and time (e.g. at each location on each day, half of the recruited participants are assigned to the control arm). # * `needed_control_arm_events`: the number of events required in the *control* arm of the trial at various intermediate analysis points. For this example we assume intermediate analyses which would demonstrate a vaccine efficacy of about 55%, 65%, 75%, 85%, or 95%. # * `observation_delay`: how long after a participant is recruited before they contribute an event. This is measured in the same time units as your incidence forecasts. Here we assume 28 days. # * `site_capacity` and `site_activation`: the number of participants each site could recruit *if* it were activated, and whether each site is activated at any given time. Here we assume each site as a constant weekly capacity, but time dependence can be included (e.g. to model ramp up of recruitment). # * `population_fraction`, `participant_fraction`, and `incidence_scaler`: the proportion of the general population and the proportion of participants who fall into different demographic categories at each location, and the infection risk factor for each category. These three are required to translate an overall incidence forecast for the population into the incidence forecast for your control arm. # * `incidence_to_event_factor`: what proportion of infections lead to a clinical event. We assume a constant 0.6, but you can specify different values for different demographic categories. 
# # These factors are specified in the datastructure below. # # + start_day = np.datetime64('2021-05-15') end_day = np.datetime64('2021-10-01') time_resolution = np.timedelta64(1, 'D') time = np.arange(start_day, end_day + time_resolution, time_resolution) c = xr.Dataset(coords=dict(time=time)) c['proportion_control_arm'] = 0.5 # Assume some intermediate analyses. frac_control = float(c.proportion_control_arm) efficacy = np.array([.55, .65, .75, .85, .95]) ctrl_events = util.needed_control_arm_events(efficacy, frac_control) vaccine_events = (1 - efficacy) * ctrl_events * (1 - frac_control) / frac_control ctrl_events, vaccine_events = np.round(ctrl_events), np.round(vaccine_events) efficacy = 1 - (vaccine_events / ctrl_events) total_events = ctrl_events + vaccine_events analysis_names = [ f'{int(t)} total events @{int(100 * e)}% VE' for t, e in zip(total_events, efficacy) ] c['needed_control_arm_events'] = xr.DataArray( ctrl_events, dims=('analysis',)).assign_coords(analysis=analysis_names) c['recruitment_type'] = 'default' c['observation_delay'] = int(np.timedelta64(28, 'D') / time_resolution) # 28 days c['trial_size_cap'] = 30000 # convert weekly capacity to capacity per time step site_capacity = site_df.capacity.to_xarray() * time_resolution / np.timedelta64(7, 'D') site_capacity = site_capacity.broadcast_like(c.time).astype('float') # Can't recruit before the activation date activation_date = site_df.start_date.to_xarray() for l in activation_date.location.values: date = activation_date.loc[l] site_capacity.loc[site_capacity.time < date, l] = 0.0 c['site_capacity'] = site_capacity.transpose('location', 'time') c['site_activation'] = xr.ones_like(c.site_capacity) # For the sake of simplicity, this code assumes black and hisp_lat are # non-overlapping, and that obese/smokers/diabetes are non-overlapping. frac_and_scalar = util.fraction_and_incidence_scaler fraction_scalers = [ frac_and_scalar(site_df, 'age', ['over_60'], [1], 'under_60'), frac_and_scalar(site_df, 'ethnicity', ['black', 'hisp_lat'], [1, 1], 'other'), frac_and_scalar(site_df, 'comorbidity', ['smokers', 'diabetes', 'obese'], [1, 1, 1], 'none') ] fractions, incidence_scalers = zip(*fraction_scalers) # We assume that different categories are independent (e.g. the proportion of # smokers over 60 is the same as the proportion of smokers under 60) c['population_fraction'] = functools.reduce(lambda x, y: x * y, fractions) # We assume the participants are drawn uniformly from the population. c['participant_fraction'] = c['population_fraction'] # Assume some boosted incidence risk for subpopulations. We pick random numbers # here, but in actual use you'd put your best estimate for the incidence risk # of each demographic category. # Since we assume participants are uniformly drawn from the county population, # this actually doesn't end up affecting the estimated number of clinical events. c['incidence_scaler'] = functools.reduce(lambda x, y: x * y, incidence_scalers) c.incidence_scaler.loc[dict(age='over_60')] = 1 + 2 * np.random.random() c.incidence_scaler.loc[dict(comorbidity=['smokers', 'diabetes', 'obese'])] = 1 + 2 * np.random.random() c.incidence_scaler.loc[dict(ethnicity=['black', 'hisp_lat'])] = 1 + 2 * np.random.random() # We assume a constant incidence_to_event_factor. c['incidence_to_event_factor'] = 0.6 * xr.ones_like(c.incidence_scaler) util.add_empty_history(c) # - # # 2. 
Load incidence forecasts
#
# We load historical incidence data from [COVID-19 Open Data](https://github.com/GoogleCloudPlatform/covid-19-open-data) and forecasts from [COVID-19 Forecast Hub](https://github.com/reichlab/covid19-forecast-hub).
#
# We note a set of caveats that should be considered when using the CDC models for trial planning:
# * Forecasts are only available for US counties. Hence, these forecasts will only work for US-only trials. Trials with sites outside the US will need to supplement these forecasts.
# * Forecasts only go out for four weeks. Trials take much longer than four weeks to complete, when measured from site selection to logging the required number of cases in the control arm. For simplicity, here we extrapolate incidence as *constant* after the last point of the forecast. Here we extrapolate out to October 1, 2021.
# * The forecasts from the CDC are provided as quantile estimates. Our method depends on getting *representative forecasts* from the model: we need a set of sample forecasts for each site which represent the set of scenarios that can occur. Ideally these scenarios will be equally probable so that we can compute probabilities by averaging over samples. To get samples from quantiles, we interpolate/extrapolate to get 100 evenly spaced quantile estimates, which we treat as representative samples.
#
# You can of course replace these forecasts with whatever represents your beliefs and uncertainty about what will happen.

# +
# Extrapolate out a bit extra to ensure we're within bounds when we interpolate later.
full_pred = public_data.fetch_cdc_forecasts([('COVIDhub-ensemble', '2021-05-10'),
                                             ('COVIDhub-baseline', '2021-05-10')],
                                            end_date=c.time.values[-1] + np.timedelta64(15, 'D'),
                                            num_samples=50)
full_gt = public_data.fetch_opencovid_incidence()
# Suppose we only have ground truth through 2021-05-09.
full_gt = full_gt.sel(time=slice(None, np.datetime64('2021-05-09')))

# Include more historical incidence here for context. It will be trimmed off when
# we construct scenarios to simulate. The funny backwards range is to ensure that if
# we use weekly instead of daily resolution, we use the same day of the week as c.
time = np.arange(c.time.values[-1], np.datetime64('2021-04-01'), -time_resolution)[::-1]
incidence_model = public_data.assemble_forecast(full_gt, full_pred, site_df, time)
# -

locs = np.random.choice(c.location.values, size=5, replace=False)
incidence_model.sel(location=locs).plot.line(x='time', color='k', alpha=.1, add_legend=False, col='location', row='model')
plt.ylim(0.0, 1e-3)
plt.suptitle('Forecast incidence at a sampling of sites', y=1.0)
pass

# # 3. Simulate the trial
#
# Now that we've specified how the trial works, we can compute how the trial will turn out given the incidence forecasts you've specified. We do this by first sampling scenarios of what incidence will be at all locations simultaneously. For any given fully-specified scenario, we compute how many participants will be under observation at any given time in any given location (in any given combination of demographic buckets); then, based on the specified local incidence, we compute how many will become infected and how many will produce clinical events.
#
# Here we assume that the incidence trajectories of different locations are drawn at random from the available forecasts. Other scenario-generation methods in `sim_scenarios` support more complex approaches.
For example, we may be highly uncertain about the incidence at each site, but believe that if incidence is high at a site, then it will also be high at geographically nearby sites. If this is the case then the simulation should not choose forecasts independently at each site but instead should take these correlations into account. The code scenario-generating methods in `sim_scenarios` allows us to do that. # + # incidence_flattened: rolls together all the models you've included in your ensemble, treating them as independent samples. incidence_flattened = sim_scenarios.get_incidence_flattened(incidence_model, c) # incidence_scenarios: chooses scenarios given the incidence curves and your chosen method of scenario-generation. incidence_scenarios = sim_scenarios.generate_scenarios_independently(incidence_flattened, num_scenarios=100) # + # compute the number of participants recruited under your trial rule participants = sim.recruitment(c) # compute the number of control arm events under your trial rules and incidence_scenarios. events = sim.control_arm_events(c, participants, incidence_scenarios) plot_participants(participants) # plot events and label different vaccine efficacies plot_events(events) # plot histograms of time to success plot_success(c, events) # - sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100) # !mkdir -p demo_data bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_all_site_on.nc') # # 4. Optimize the trial # # The simulations above supposed that all sites are activated as soon as possible (i.e. `site_activation` is identically 1). Now that we have shown the ability to simulate the outcome of the trial, we can turn it into a mathematical optimization problem. # # **Given the parameters of the trial within our control, how can we set those parameters to make the trial most likely to succeed or to succeed as quickly as possible?** # # We imagine the main levers of control are which sites to activate or which sites to prioritize activating, and this is what is implemented here. # # However, the framework we have developed is very general and could be extended to just about anything you control which you can predict the impact of. For example, # * If you can estimate the impact of money spent boosting recruitment of high-risk participants, we could use those estimates to help figure out how to best allocate a fixed budget. # * If you had requirements for the number of people infected in different demographic groups, we could use those to help figure out how to best allocate doses between sites with different population characteristics. # # The optimization algorithms are implemented in [JAX](https://github.com/google/jax), a python library that makes it possible to differentiate through native python and numpy functions. The flexibility of the language makes it possible to compose a variety of trial optimization scenarios and then to write algorithms that find optima. There are a number of technical details in how the optimization algorithms are written that will be discussed elsewhere. # # ### Example: Optimizing Static site activations # # Suppose that the only variable we can control is which sites should be activated, and we have to make this decision at the beginning of the trial. This decision is then set in stone for the duration of the trial. 
To calculate this we proceed as follows:
#
# The optimizer takes in the trial plan, encoded in the xarray Dataset `c`, as well as the `incidence_scenarios`, and searches for the sites that should be activated so as to minimize the time to success of the trial. The algorithm modifies `c` *in place*, so that after it runs, it returns the trial plan `c` but with the site activations chosen to be on or off in accordance with the optimization.

# %time optimization.optimize_static_activation(c, incidence_scenarios)

# #### Plot the resulting sites
#
# Now we can plot the activations for the resulting sites. Only a subset of the original sites are activated in the optimized plan. Comparing the distributions of the time to success for the optimized plan to those of the original trial plan (all sites activated), the optimized plan will save a bit of time if the vaccine efficacy is low. If the vaccine efficacy is high, then just getting as many participants as possible as quickly as possible is optimal.

# +
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]

# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated')
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas()
display(df.style.set_caption('Proportion of participants by age and ethnicity'))
# -

sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)

# !mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_optimized_static.nc')

# ### Example: Custom loss penalizing site activation and promoting diverse participants
#
# Suppose we want to factor in considerations aside from how quickly the trial succeeds. In this example, we assume that activating sites is expensive, so we'd like to activate as few of them as possible, so long as it doesn't delay the success of the trial too much. Similarly, we assume that it's valuable to have a larger proportion of elderly, black, or hispanic participants, and we're willing to activate sites which can recruit from these demographic groups, even if doing so delays success a bit.

# +
def loss_fn(c):
    # sum over location, time, comorbidity
    # remaining dimensions are [age, ethnicity]
    participants = c.participants.sum(axis=0).sum(axis=0).sum(axis=-1)
    total_participants = participants.sum()
    return (
        optimization.negative_mean_successiness(c)  # demonstrate efficacy fast
        + 0.2 * c.site_activation.mean()  # turning on sites is costly
        - 0.5 * participants[1:, :].sum() / total_participants  # we want people over 60
        - 0.5 * participants[:, 1:].sum() / total_participants  # we want blacks and hispanics
    )

# %time optimization.optimize_static_activation(c, incidence_scenarios, loss_fn)
# -

# #### Plot the resulting sites
#
# This time only 53 of 146 sites are activated. The slower recruitment costs us 1-2 weeks until the trial succeeds (depending on vaccine efficacy). In exchange, we don't need to activate as many sites, and we end up with a greater proportion of participants who are elderly, black, or hispanic (dropping from 55.7% to 45.6% young white).

# +
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]

# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated') participants = sim.recruitment(c) events = sim.control_arm_events(c, participants, incidence_scenarios) plot_participants(participants) plot_events(events) plot_success(c, events) df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas() display(df.style.set_caption('Proportion of participants by age and ethnicity')) # - # ### Example: prioritizing sites # Suppose we can activate up to 20 sites each week for 10 weeks. How do we prioritize them? # We put all sites in on group. We also support prioritizing sites within groupings. # For example, if you can activate 2 sites per state per week, sites would be grouped # according to the state they're in. site_to_group = pd.Series(['all_sites'] * len(site_df), index=site_df.index) decision_dates = c.time.values[:70:7] allowed_activations = pd.DataFrame([[20] * len(decision_dates)], index=['all_sites'], columns=decision_dates) parameterizer = optimization.PivotTableActivation(c, site_to_group, allowed_activations, can_deactivate=False) optimization.optimize_params(c, incidence_scenarios, parameterizer) c['site_activation'] = c.site_activation.round() # each site has to be on or off at each time df = c.site_activation.to_pandas() df.columns = [pd.to_datetime(x).date() for x in df.columns] sns.heatmap(df, cbar=False) plt.title('Which sites are activated when') plt.show() participants = sim.recruitment(c) events = sim.control_arm_events(c, participants, incidence_scenarios) plot_participants(participants) plot_events(events) plot_success(c, events) sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100) # !mkdir -p demo_data bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_prioritized.nc') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from lectures_to_viewers import getViewersToLectures from video_annotations import ml_004 as vid from get_seek_chains import getSeekChainsFast # + #import cPickle as pickle #video_to_seek_chains = pickle.load(open('/lfs/local/0/geza/new_seek_chains/video_to_seek_chains.pickle')) # + import csv import ujson as json from memoized import memoized @memoized def loadjson(table_name): return json.load(open('/lfs/local/0/geza/ml-004_data_export/' + table_name + '.json')) def get_table_path(table_name): csv_path = '/lfs/local/0/geza/ml-004_data_export/' return csv_path + table_name + '.csv' def get_table_data(table_name): return csv.DictReader(open(get_table_path(table_name)), doublequote=False, delimiter=',', escapechar='\\') def print_first_lines(table_name, n=10): for x in get_table_data(table_name): if n <= 0: break n -= 1 print x # - import numpy lecture_to_user_to_seek_event_timestamps = json.load(open('/lfs/local/0/geza/lecture_to_user_to_seek_event_timestamps.json')) lecture_to_user_to_simple_seek_chains = json.load(open('/lfs/local/0/geza/new_seek_chains/lecture_to_user_to_simple_seek_chains.json')) lecture_to_user_to_simple_seek_events = json.load(open('/lfs/local/0/geza/new_seek_chains/lecture_to_user_to_simple_seek_events.json')) # + #quiz_to_correct_answer = json.load(open('/lfs/local/0/geza/quiz_to_correct_answer.json')) # + #submission_id_to_answer = json.load(open('/lfs/local/0/geza/submission_id_to_answer.json')) # - submission_id_to_correct = json.load(open('/lfs/local/0/geza/submission_id_to_correct.json')) # + 
invideo_quiz_itemids = set() for line in get_table_data('quiz_metadata'): if line['deleted'] != '0': continue if line['duration'] != '0': continue quiz_id = int(line['id']) if quiz_id > 144: continue if line['quiz_type'] != 'video': continue invideo_quiz_itemids.add(quiz_id) # + # exclude the quizzes with more than 1 question invideo_quiz_itemids = set([2, 3, 4, 5, 6, 7, 9, 10, 11, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) # - print invideo_quiz_itemids ''' user_to_quiz_to_attempts = {} for line in loadjson('kvs_course.quiz.saved'): key,value = line parts = key.split('.') quiz_id = int(parts[1].split(':')[1]) user_id = parts[2].split(':')[1] timestamp = int(value['saved_time']) if user_id not in user_to_quiz_to_attempts: user_to_quiz_to_attempts[user_id] = {} if quiz_id not in user_to_quiz_to_attempts[user_id]: user_to_quiz_to_attempts[user_id][quiz_id] = [] else: print 'duplicates found' print line print user_to_quiz_to_attempts[user_id][quiz_id] break user_to_quiz_to_attempts[user_id][quiz_id].append(timestamp) ''' # + submission_id_to_start_time = {} submission_id_to_saved_time = {} for line in loadjson('kvs_course.quiz.submission'): key,value = line parts = key.split('.') submission_id = int(parts[1].split(':')[1]) #timestamp = int(value['saved_time']) saved_time = int(value['saved_time']) start_time = int(value['start_time']) #submission_id_to_timestamp[submission_id] = timestamp submission_id_to_start_time[submission_id] = start_time submission_id_to_saved_time[submission_id] = saved_time # + #print user_to_quiz_to_attempts # + #print submission_id_to_correct.keys()[0:2] # + user_to_quiz_to_attempts = {} for line in get_table_data('quiz_submission_metadata'): item_id = int(line['item_id']) submission_id = int(line['id']) #print line['submission_time'] #if int(line['submission_time']) > 1388538061: # newer than jan 1, 2014 submission_time = int(line['submission_time']) #if submission_id not in submission_id_to_saved_time: # continue #submission_time = submission_id_to_saved_time[submission_id] if submission_time > 1391185339: # newer than jan 31, 2014 continue if str(submission_id) not in submission_id_to_correct: continue is_correct = submission_id_to_correct[str(submission_id)] if item_id not in invideo_quiz_itemids: continue user_id = line['session_user_id'] if user_id not in user_to_quiz_to_attempts: user_to_quiz_to_attempts[user_id] = {} if item_id not in user_to_quiz_to_attempts[user_id]: user_to_quiz_to_attempts[user_id][item_id] = [] user_to_quiz_to_attempts[user_id][item_id].append({'timestamp': submission_time, 'correct': is_correct}) # + lecture_id_to_title = {} lecture_title_to_id = {} for line in get_table_data('lecture_metadata'): if line['deleted'] != '0': continue if line['final'] != '1': continue lecture_id = int(line['id']) if lecture_id > 114: continue title = line['title'] lecture_id_to_title[lecture_id] = title if title in lecture_title_to_id: print 'duplicate:' print title print lecture_id print lecture_title_to_id[title] print line lecture_title_to_id[title] = lecture_id # + quiz_id_to_lecture_title = {} for line in get_table_data('quiz_metadata'): if line['deleted'] != '0': continue if line['duration'] != '0': continue quiz_id = int(line['id']) if quiz_id 
> 144: continue if line['quiz_type'] != 'video': continue title = line['title'] quiz_id_to_lecture_title[quiz_id] = title # + lecture_id_to_quizzes = {} quiz_to_lecture_id = {} for quiz_id,lecture_title in quiz_id_to_lecture_title.items(): if lecture_title not in lecture_title_to_id: print quiz_id print quiz_id_to_xml[quiz_id] continue lecture_id = lecture_title_to_id[lecture_title] if lecture_id not in lecture_id_to_quizzes: lecture_id_to_quizzes[lecture_id] = [] lecture_id_to_quizzes[lecture_id].append(quiz_id) quiz_to_lecture_id[quiz_id] = lecture_id # - import xml.etree.ElementTree as et # + quiz_id_to_video_time = {} for key,value in loadjson('kvs_course.quiz.xml'): try: tree = et.fromstring(value.encode('utf-8')) time = list(tree.iter('video'))[0].get('time') #.text time = float(time) parts = key.split('.') quiz_id = parts[1].split(':')[1] quiz_id = int(quiz_id) quiz_id_to_video_time[quiz_id] = time except: continue # + lecture_id_to_quiz_times = {} for lecture_id,quizzes in lecture_id_to_quizzes.items(): lecture_id_to_quiz_times[lecture_id] = [] for quiz_id in quizzes: quiz_time = quiz_id_to_video_time[quiz_id] lecture_id_to_quiz_times[lecture_id].append(quiz_time) # - print lecture_id_to_quiz_times print quiz_id_to_video_time # + #quiz_2_answer_distributions = {} #for key,value in loadjson('kvs_course.quiz.saved'): # print key # print value # break # - # # what percent of quiz attempts are correct? # + #print len(submission_id_to_correct) # + total_correct_attempts = 0 total_incorrect_attempts = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] total_correct_attempts += len(correct_attempts) total_incorrect_attempts += len(incorrect_attempts) # - print total_correct_attempts / float(total_correct_attempts + total_incorrect_attempts) print total_incorrect_attempts / float(total_correct_attempts + total_incorrect_attempts) # + correct_on_first_try = 0 incorrect_on_first_try = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): first_attempt = attempts[0] if first_attempt['correct']: correct_on_first_try += 1 else: incorrect_on_first_try += 1 # - print correct_on_first_try / float(correct_on_first_try + incorrect_on_first_try) def num_incorrect_attempts_before_correct(attempts): idx = 0 for x in attempts: if x['correct']: return idx idx += 1 return None def get_incorrect_attempts(attempts): incorrect_attempts_before_correct = [] if len(attempts) == 0: return [] if attempts[0]['correct']: return [] first_attempt = attempts[0]['timestamp'] for x in attempts: timestamp = x['timestamp'] time_finding_answer = timestamp - first_attempt if time_finding_answer > 1800: break if time_finding_answer < 0: break if x['correct']: break incorrect_attempts_before_correct.append(x) return incorrect_attempts_before_correct # + num_incorrect_attempts = [] for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if attempts[0]['correct']: continue first_attempt = attempts[0]['timestamp'] incorrect_attempts_before_correct = get_incorrect_attempts(attempts) num_incorrect_attempts.append(len(incorrect_attempts_before_correct)) # - print numpy.mean(num_incorrect_attempts) # + num_incorrect_attempts_before_correct_log = [] num_incorrect_v2 = [] for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for 
quiz_id,attempts in quiz_to_attempts.items(): if attempts[0]['correct']: continue correct_attempts = [x for x in attempts if x['correct']] incorrect_attempts = [x for x in attempts if not x['correct']] if len(correct_attempts) == 0: # never got it right continue first_incorrect = incorrect_attempts[0] first_correct = correct_attempts[0] time_finding_answer = first_correct['timestamp'] - first_incorrect['timestamp'] if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue incorrect_before_correct = num_incorrect_attempts_before_correct(attempts) num_incorrect_attempts_before_correct_log.append(incorrect_before_correct) num_incorrect_v2.append(len(get_incorrect_attempts(attempts))) # - print numpy.mean(num_incorrect_v2) print numpy.mean(num_incorrect_attempts_before_correct_log) print len([x for x in num_incorrect_attempts_before_correct_log if x == 1]) / float(len(num_incorrect_attempts_before_correct_log)) print len([x for x in num_incorrect_attempts_before_correct_log if x >= 2]) / float(len(num_incorrect_attempts_before_correct_log)) # + incorrect_answer_will_eventually_get_correct = 0 incorrect_answer_will_eventually_get_correct_within_30 = 0 incorrect_answer_total = 0 num_attempts_by_users_who_never_got_correct = [] time_before_final_attempt = [] for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue incorrect_answer_total += 1 if len(correct_attempts) == 0: first_incorrect_attempt_time = incorrect_attempts[0]['timestamp'] incorrect_attempts_within_30min = [x for x in incorrect_attempts if abs(x['timestamp'] - first_incorrect_attempt_time) < 1800] num_attempts_by_users_who_never_got_correct.append(len(incorrect_attempts_within_30min)) final_incorrect_attempt_time = incorrect_attempts_within_30min[len(incorrect_attempts_within_30min) - 1]['timestamp'] time_before_final_attempt.append(final_incorrect_attempt_time - first_incorrect_attempt_time) continue first_incorrect = incorrect_attempts[0] first_correct = correct_attempts[0] time_finding_answer = first_correct['timestamp'] - first_incorrect['timestamp'] if time_finding_answer < 0: continue incorrect_answer_will_eventually_get_correct += 1 if time_finding_answer > 1800: # more than 30 minutes continue incorrect_answer_will_eventually_get_correct_within_30 += 1 # - print numpy.mean([min(x, 3) for x in num_attempts_by_users_who_never_got_correct]) print numpy.median(time_before_final_attempt), numpy.mean(time_before_final_attempt), numpy.std(time_before_final_attempt) print len([x for x in num_attempts_by_users_who_never_got_correct if x == 1]) / float(len(num_attempts_by_users_who_never_got_correct)) print len([x for x in num_attempts_by_users_who_never_got_correct if x == 2]) / float(len(num_attempts_by_users_who_never_got_correct)) print len([x for x in num_attempts_by_users_who_never_got_correct if x >= 3]) / float(len(num_attempts_by_users_who_never_got_correct)) print incorrect_answer_will_eventually_get_correct_within_30 / float(incorrect_answer_total) print incorrect_answer_will_eventually_get_correct / float(incorrect_answer_total) # + time_until_first_correct_answer = [] incorrect_answer_will_eventually_get_correct = 0 incorrect_answer_total = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): 
incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue if len(correct_attempts) == 0: continue first_incorrect = incorrect_attempts[0] first_correct = correct_attempts[0] time_finding_answer = first_correct['timestamp'] - first_incorrect['timestamp'] if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue time_until_first_correct_answer.append(time_finding_answer) # - print numpy.median(time_until_first_correct_answer) print numpy.mean(time_until_first_correct_answer) print numpy.std(time_until_first_correct_answer) combined_distribution = time_before_final_attempt + time_until_first_correct_answer print numpy.median(combined_distribution) print numpy.mean(combined_distribution) print numpy.std(combined_distribution) # + #print correct_on_first_try / float(correct_on_first_try + ) # - print incorrect_answer_will_eventually_get_correct / float(incorrect_answer_total) # # what % of the time do users seek before they try re-answering the question? # + time_until_first_correct_answer = [] incorrect_answer_will_eventually_get_correct = 0 incorrect_answer_total = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue if len(correct_attempts) == 0: continue first_incorrect = incorrect_attempts[0]['timestamp'] first_correct = correct_attempts[0]['timestamp'] time_finding_answer = first_correct - first_incorrect if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue #time_until_first_correct_answer.append(time_finding_answer) # + num_quiz_attempts_total = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 time_spent_withseek = [] time_spent_noseek = [] exit_loop = False for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if exit_loop: break incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue if len(correct_attempts) == 0: continue first_incorrect_attempt = incorrect_attempts[0]['timestamp'] first_correct_attempt = correct_attempts[0]['timestamp'] time_finding_answer = first_correct_attempt - first_incorrect_attempt if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue lecture_id = quiz_to_lecture_id[int(quiz_id)] if int(lecture_id) not in lectures_with_single_invideo_quiz: continue video_time = quiz_id_to_video_time[int(quiz_id)] is_seeker = False if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: num_quiz_attempts_total += 1 seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: if exit_loop: break timestamp = x['timestamp'] if int(first_incorrect_attempt)*1000 <= int(timestamp) <= int(first_correct_attempt)*1000: num_quiz_attempts_withseek += 1 direction = 'forward' if float(x['end']) < video_time: direction = 'backward' if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 time_spent_withseek.append(time_finding_answer) is_seeker = True break if not is_seeker: 
time_spent_noseek.append(time_finding_answer) # + num_quiz_attempts_total = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 time_spent_withseek_forward = [] time_spent_withseek_backward = [] time_spent_withseek = [] time_spent_noseek = [] incorrect_attempts_noseek = [] incorrect_attempts_withseek = [] incorrect_attempts_withseek_forward = [] incorrect_attempts_withseek_backward = [] exit_loop = False for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if exit_loop: break incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue if len(correct_attempts) == 0: continue first_incorrect_attempt = incorrect_attempts[0]['timestamp'] first_correct_attempt = correct_attempts[0]['timestamp'] time_finding_answer = first_correct_attempt - first_incorrect_attempt if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue lecture_id = quiz_to_lecture_id[int(quiz_id)] if int(lecture_id) not in lectures_with_single_invideo_quiz: continue video_time = quiz_id_to_video_time[int(quiz_id)] is_seeker = False if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: num_quiz_attempts_total += 1 seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: if exit_loop: break timestamp = x['timestamp'] if int(first_incorrect_attempt)*1000 <= int(timestamp) <= int(first_correct_attempt)*1000: num_quiz_attempts_withseek += 1 direction = 'forward' if float(x['end']) < video_time: direction = 'backward' if direction == 'forward': time_spent_withseek_forward.append(time_finding_answer) incorrect_attempts_withseek_forward.append(len(get_incorrect_attempts(attempts))) num_quiz_attempts_withseek_forward += 1 else: time_spent_withseek_backward.append(time_finding_answer) incorrect_attempts_withseek_backward.append(len(get_incorrect_attempts(attempts))) num_quiz_attempts_withseek_backward += 1 time_spent_withseek.append(time_finding_answer) incorrect_attempts_withseek.append(len(get_incorrect_attempts(attempts))) is_seeker = True break if not is_seeker: incorrect_attempts_noseek.append(len(get_incorrect_attempts(attempts))) time_spent_noseek.append(time_finding_answer) # - print numpy.mean(incorrect_attempts_noseek) print numpy.mean(incorrect_attempts_withseek) print numpy.mean(incorrect_attempts_withseek_backward) print numpy.mean(incorrect_attempts_withseek_forward) # + print numpy.median(time_spent_withseek_backward), numpy.mean(time_spent_withseek_backward), numpy.std(time_spent_withseek_backward) print numpy.median(time_spent_withseek_forward), numpy.mean(time_spent_withseek_forward), numpy.std(time_spent_withseek_forward) # - print numpy.median(time_spent_withseek), numpy.mean(time_spent_withseek), numpy.std(time_spent_withseek) print numpy.median(time_spent_noseek), numpy.mean(time_spent_noseek), numpy.std(time_spent_noseek) print num_quiz_attempts_total print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / num_quiz_attempts_total print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward 
= 0 #lectures_with_single_invideo_quiz = set([2, 3, 4, 5, 6, 7, 9, 10, 11, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) lectures_with_single_invideo_quiz = set([4, 5, 6, 7, 9, 10, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) exit_loop = False for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): if exit_loop: break for quiz_id,attempts in quiz_to_attempts.items(): if exit_loop: break if len(attempts) <= 1: continue first_attempt = attempts[0] last_attempt = attempts[len(attempts) - 1] if last_attempt - first_attempt > 1800: continue lecture_id = quiz_to_lecture_id[int(quiz_id)] if int(lecture_id) not in lectures_with_single_invideo_quiz: continue video_time = quiz_id_to_video_time[int(quiz_id)] num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: if exit_loop: break timestamp = x['timestamp'] if int(first_attempt)*1000 <= int(timestamp) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 #print x #direction = x['direction'] direction = 'forward' if float(x['end']) < video_time: direction = 'backward' #if x['direction'] == 'forward': if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 break #break #break #break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # # what percentage of the back-seeks (where src=in-video quiz and dest=before in-video quiz) does this account for? from datetime import datetime # + #print user_to_quiz_to_attempts # - ''' back_seeks_no_attempts = 0 back_seeks_before_attempts = 0 back_seeks_between_attempts = 0 back_seeks_after_attempts = 0 back_seeks_from_quiz_total = 0 multiple_quiz_attempts = 0 lectures_with_single_invideo_quiz = set([2, 3, 4, 5, 6, 7, 9, 10, 11, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) exit_loop = False num_printed = 0 for lecture_id,user_to_seek_timestamps in lecture_to_user_to_seek_event_timestamps: # does this lecture have exactly one in-video quiz? 
if int(lecture_id) not in lectures_with_single_invideo_quiz: continue quiz_time = float(lecture_id_to_quiz_times[int(lecture_id)][0]) quiz_id = int(lecture_id_to_quizzes[int(lecture_id)][0]) for user,seek_event_timestamps in user_to_seek_timestamps.iteritems(): quiz_attempt_timestamps = [] if user in user_to_quiz_to_attempts: if quiz_id in user_to_quiz_to_attempts[user]: quiz_attempt_timestamps = user_to_quiz_to_attempts[user][quiz_id] quiz_attempt_timestamps = [x*1000 for x in quiz_attempt_timestamps] # convert from seconds to milliseconds for timestamp in seek_event_timestamps: for lecture_id,user_to_simple_seek_chains in lecture_to_user_to_simple_seek_chains.iteritems(): if exit_loop: break # does this lecture have exactly one in-video quiz? if int(lecture_id) not in lectures_with_single_invideo_quiz: continue quiz_time = float(lecture_id_to_quiz_times[int(lecture_id)][0]) quiz_id = int(lecture_id_to_quizzes[int(lecture_id)][0]) for user_id,simple_seek_chains in user_to_simple_seek_chains.iteritems(): if exit_loop: break quiz_attempt_timestamps = [] if user in user_to_quiz_to_attempts: if quiz_id in user_to_quiz_to_attempts[user]: quiz_attempt_timestamps = user_to_quiz_to_attempts[user][quiz_id] first_quiz_attempt = quiz_attempt_timestamps[0] #quiz_attempt_timestamps = [x for x in quiz_attempt_timestamps if first_quiz_attempt <= x <= first_quiz_attempt + 1800] quiz_attempt_timestamps = [x*1000 for x in quiz_attempt_timestamps] # convert from seconds to milliseconds for seek_chain in simple_seek_chains: if exit_loop: break # does it start from the in-video quiz? start = seek_chain['start'] end = seek_chain['end'] direction = seek_chain['direction'] # forward or back timestamp = seek_chain['timestamp'] #if len(simple_seek_chains) < 10: # continue #print datetime.fromtimestamp(timestamp/1000) #num_printed += 1 #if num_printed >= 1000: # exit_loop = True # break #if (quiz_time - 10 <= start <= quiz_time + 10) and direction == 'back' and end < quiz_time: if (quiz_time - 1 <= start <= quiz_time + 1) and direction == 'back' and end < quiz_time: back_seeks_from_quiz_total += 1 # back seek from quiz. 
does it occur between or after if len(quiz_attempt_timestamps) == 0: back_seeks_no_attempts += 1 if len(quiz_attempt_timestamps) > 0: first_quiz_attempt = quiz_attempt_timestamps[0] last_quiz_attempt = quiz_attempt_timestamps[len(quiz_attempt_timestamps) - 1] #if True: #if len(quiz_attempt_timestamps) > 1: # print timestamp # this is in nov 7th (much later) # print quiz_attempt_timestamps # this is in october 29th (much earlier) # exit_loop = True # break if timestamp >= first_quiz_attempt and abs(first_quiz_attempt - timestamp) < 1800*1000: # within 30 minutes and after the first quiz attempt if len(quiz_attempt_timestamps) > 1: multiple_quiz_attempts += 1 #if int(first_quiz_attempt) <= int(timestamp) <= int(last_quiz_attempt): if True: # back seek occurred between attempts back_seeks_between_attempts += 1 if last_quiz_attempt < timestamp: back_seeks_after_attempts += 1 if first_quiz_attempt > timestamp: back_seeks_before_attempts += 1 ''' print back_seeks_no_attempts print back_seeks_before_attempts print back_seeks_between_attempts print back_seeks_after_attempts print back_seeks_from_quiz_total print multiple_quiz_attempts # + #print user_to_quiz_to_attempts['abcea9860ecfb2fef5140bc9e504f529ed33eb39'] # + num_quiz_attempts_total = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 time_spent_withseek = [] time_spent_noseek = [] exit_loop = False for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if exit_loop: break incorrect_attempts = [x for x in attempts if not x['correct']] correct_attempts = [x for x in attempts if x['correct']] if len(incorrect_attempts) == 0: continue if len(correct_attempts) == 0: continue first_incorrect_attempt = incorrect_attempts[0]['timestamp'] first_correct_attempt = correct_attempts[0]['timestamp'] time_finding_answer = first_correct_attempt - first_incorrect_attempt if time_finding_answer < 0: continue if time_finding_answer > 1800: # more than 30 minutes continue lecture_id = quiz_to_lecture_id[int(quiz_id)] if int(lecture_id) not in lectures_with_single_invideo_quiz: continue video_time = quiz_id_to_video_time[int(quiz_id)] is_seeker = False if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: num_quiz_attempts_total += 1 seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: if exit_loop: break timestamp = x['timestamp'] if int(first_incorrect_attempt)*1000 <= int(timestamp) <= int(first_correct_attempt)*1000: num_quiz_attempts_withseek += 1 direction = 'forward' if float(x['end']) < video_time: direction = 'backward' if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 time_spent_withseek.append(time_finding_answer) is_seeker = True break if not is_seeker: time_spent_noseek.append(time_finding_answer) # + back_seeks_no_attempts = 0 back_seeks_before_attempts = 0 back_seeks_between_attempts = 0 back_seeks_after_attempts = 0 back_seeks_from_quiz_total = 0 back_seeks_before_first_correct = 0 back_seeks_after_first_correct = 0 back_seeks_after_first_correct_diff_session = 0 back_seeks_before_first_incorrect = 0 back_seeks_between_first_incorrect_and_first_correct = 0 back_seeks_between_first_incorrect_and_last_incorrect = 0 back_seeks_after_last_incorrect = 0 back_seeks_after_last_incorrect_diff_session = 0 back_seeks_after_already_correct = 0 back_seeks_after_already_correct_diff_session = 0 multiple_quiz_attempts = 0 
lectures_with_single_invideo_quiz = set([2, 3, 4, 5, 6, 7, 9, 10, 11, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) exit_loop = False num_printed = 0 for lecture_id,user_to_simple_seek_chains in lecture_to_user_to_simple_seek_chains.iteritems(): if exit_loop: break # does this lecture have exactly one in-video quiz? if int(lecture_id) not in lectures_with_single_invideo_quiz: continue quiz_time = float(lecture_id_to_quiz_times[int(lecture_id)][0]) quiz_id = int(lecture_id_to_quizzes[int(lecture_id)][0]) for user_id,simple_seek_chains in user_to_simple_seek_chains.iteritems(): if exit_loop: break quiz_attempts = [] correct_quiz_attempts = [] incorrect_quiz_attempts = [] if user_id in user_to_quiz_to_attempts: if quiz_id in user_to_quiz_to_attempts[user_id]: quiz_attempts = user_to_quiz_to_attempts[user_id][quiz_id] correct_quiz_attempts = [x['timestamp']*1000 for x in quiz_attempts if x['correct']] incorrect_quiz_attempts = [x['timestamp']*1000 for x in quiz_attempts if not x['correct']] for seek_chain in simple_seek_chains: if exit_loop: break # does it start from the in-video quiz? start = seek_chain['start'] end = seek_chain['end'] direction = seek_chain['direction'] # forward or back timestamp = seek_chain['timestamp'] if (quiz_time - 1 <= start <= quiz_time + 1) and direction == 'back' and end < quiz_time: back_seeks_from_quiz_total += 1 # back seek from quiz. does it occur between or after if len(quiz_attempts) == 0: back_seeks_no_attempts += 1 if len(quiz_attempts) > 0: # got correct first time? is_correct_first_time = quiz_attempts[0]['correct'] # note that we're assuming we have the right seek chain here. ie not taking into account duplicate sessions # probably want to check that it's within 30 minutes of first answer time if is_correct_first_time: first_answer_time = correct_quiz_attempts[0] if first_answer_time > timestamp: # back seek before first attempt (which was correct) back_seeks_before_first_correct += 1 break else: if abs(timestamp - first_answer_time) > 60*1000: back_seeks_after_first_correct_diff_session += 1 else: back_seeks_after_first_correct += 1 break else: # incorrect first time first_answer_time = incorrect_quiz_attempts[0] if first_answer_time > timestamp: # back seek before first attempt (which was incorrect) back_seeks_before_first_incorrect += 1 continue else: if len(correct_quiz_attempts) > 0: # got it wrong first time but eventually got it right! 
first_correct_answer_time = correct_quiz_attempts[0] if first_correct_answer_time > timestamp: back_seeks_between_first_incorrect_and_first_correct += 1 continue else: if abs(timestamp - first_answer_time) > 60*1000: back_seeks_after_already_correct_diff_session += 1 else: back_seeks_after_already_correct += 1 continue # this seek occurred after we already got the answer correct else: # got it wrong and never got it right last_incorrect_answer_time = incorrect_quiz_attempts[len(incorrect_quiz_attempts) - 1] if timestamp < last_incorrect_answer_time: back_seeks_between_first_incorrect_and_last_incorrect += 1 break else: if abs(timestamp - first_answer_time) > 60*1000: back_seeks_after_last_incorrect_diff_session += 1 else: back_seeks_after_last_incorrect += 1 break # + total_back_seeks_in_session = (back_seeks_no_attempts + back_seeks_before_first_correct + back_seeks_before_first_incorrect + back_seeks_after_first_correct + back_seeks_between_first_incorrect_and_first_correct + back_seeks_after_already_correct + back_seeks_after_last_incorrect + back_seeks_between_first_incorrect_and_last_incorrect) print 'back_seeks_no_attempts', back_seeks_no_attempts, back_seeks_no_attempts / float(total_back_seeks_in_session) print 'back_seeks_before_first_correct', back_seeks_before_first_correct, back_seeks_before_first_correct / float(total_back_seeks_in_session) print 'back_seeks_before_first_incorrect', back_seeks_before_first_incorrect, back_seeks_before_first_incorrect / float(total_back_seeks_in_session) print 'back_seeks_after_first_correct', back_seeks_after_first_correct, back_seeks_after_first_correct / float(total_back_seeks_in_session) print 'back_seeks_between_first_incorrect_and_first_correct', back_seeks_between_first_incorrect_and_first_correct, back_seeks_between_first_incorrect_and_first_correct / float(total_back_seeks_in_session) print 'back_seeks_after_already_correct', back_seeks_after_already_correct, back_seeks_after_already_correct / float(total_back_seeks_in_session) print 'back_seeks_after_last_incorrect', back_seeks_after_last_incorrect, back_seeks_after_last_incorrect / float(total_back_seeks_in_session) print 'back_seeks_between_first_incorrect_and_last_incorrect', back_seeks_between_first_incorrect_and_last_incorrect, back_seeks_between_first_incorrect_and_last_incorrect / float(total_back_seeks_in_session) print 'back_seeks_after_first_correct_diff_session', back_seeks_after_first_correct_diff_session, back_seeks_after_first_correct_diff_session / float(total_back_seeks_in_session) print 'back_seeks_after_already_correct_diff_session', back_seeks_after_already_correct_diff_session, back_seeks_after_already_correct_diff_session / float(total_back_seeks_in_session) print 'back_seeks_after_last_incorrect_diff_session', back_seeks_after_last_incorrect_diff_session, back_seeks_after_last_incorrect_diff_session / float(total_back_seeks_in_session) # + back_seeks_no_attempts = 0 back_seeks_before_attempts = 0 back_seeks_between_attempts = 0 back_seeks_after_attempts = 0 back_seeks_from_quiz_total = 0 multiple_quiz_attempts = 0 lectures_with_single_invideo_quiz = set([2, 3, 4, 5, 6, 7, 9, 10, 11, 25, 26, 27, 28, 29, 30, 31, 32, 38, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 64, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129]) exit_loop = False num_printed = 0 
for lecture_id,user_to_simple_seek_chains in lecture_to_user_to_simple_seek_chains.iteritems(): if exit_loop: break # does this lecture have exactly one in-video quiz? if int(lecture_id) not in lectures_with_single_invideo_quiz: continue quiz_time = float(lecture_id_to_quiz_times[int(lecture_id)][0]) quiz_id = int(lecture_id_to_quizzes[int(lecture_id)][0]) for user_id,simple_seek_chains in user_to_simple_seek_chains.iteritems(): if exit_loop: break quiz_attempt_timestamps = [] if user_id in user_to_quiz_to_attempts: if quiz_id in user_to_quiz_to_attempts[user_id]: quiz_attempt_timestamps = user_to_quiz_to_attempts[user_id][quiz_id] first_quiz_attempt = quiz_attempt_timestamps[0] #quiz_attempt_timestamps = [x for x in quiz_attempt_timestamps if first_quiz_attempt <= x <= first_quiz_attempt + 1800] quiz_attempt_timestamps = [x*1000 for x in quiz_attempt_timestamps] # convert from seconds to milliseconds #print quiz_attempt_timestamps #num_printed += 1 #if num_printed >= 1000: # exit_loop = True # break for seek_chain in simple_seek_chains: if exit_loop: break # does it start from the in-video quiz? start = seek_chain['start'] end = seek_chain['end'] direction = seek_chain['direction'] # forward or back timestamp = seek_chain['timestamp'] #timestamp = timestamp + 1000 * 7 * 3600 #if len(simple_seek_chains) != 1: # continue #if len(quiz_attempt_timestamps) != 1: # continue #print datetime.fromtimestamp(timestamp/1000), datetime.fromtimestamp(quiz_attempt_timestamps[0]/1000) #num_printed += 1 #if num_printed >= 1000: # exit_loop = True # break #if (quiz_time - 10 <= start <= quiz_time + 10) and direction == 'back' and end < quiz_time: if (quiz_time - 1 <= start <= quiz_time + 1) and direction == 'back' and end < quiz_time: back_seeks_from_quiz_total += 1 # back seek from quiz. 
does it occur between or after if len(quiz_attempt_timestamps) == 0: back_seeks_no_attempts += 1 if len(quiz_attempt_timestamps) > 0: first_quiz_attempt = quiz_attempt_timestamps[0] last_quiz_attempt = quiz_attempt_timestamps[len(quiz_attempt_timestamps) - 1] #if True: #if len(quiz_attempt_timestamps) > 1: # print timestamp # this is in nov 7th (much later) # print quiz_attempt_timestamps # this is in october 29th (much earlier) # exit_loop = True # break if timestamp >= first_quiz_attempt and abs(first_quiz_attempt - timestamp) < 1800*1000: # within 30 minutes and after the first quiz attempt if len(quiz_attempt_timestamps) > 1: multiple_quiz_attempts += 1 if int(first_quiz_attempt) <= int(timestamp) <= int(last_quiz_attempt): # back seek occurred between attempts back_seeks_between_attempts += 1 if last_quiz_attempt < timestamp and abs(first_quiz_attempt - timestamp) < 1800*1000: back_seeks_after_attempts += 1 if first_quiz_attempt > timestamp and abs(first_quiz_attempt - timestamp) < 1800*1000: back_seeks_before_attempts += 1 # - print back_seeks_no_attempts print back_seeks_before_attempts print back_seeks_between_attempts print back_seeks_after_attempts print back_seeks_from_quiz_total print multiple_quiz_attempts print back_seeks_from_quiz_total # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue first_attempt = attempts[0] last_attempt = attempts[len(attempts) - 1] if last_attempt - first_attempt > 1800: continue lecture_id = quiz_to_lecture_id[quiz_id] video_time = quiz_id_to_video_time[int(quiz_id)] num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: timestamp = x['timestamp'] if int(first_attempt)*1000 <= int(timestamp) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 #print x direction = 'forward' if float(x['end']) < video_time: direction = 'backward' #if x['direction'] == 'forward': if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 break #break #break #break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue first_attempt = attempts[0] last_attempt = attempts[len(attempts) - 1] if last_attempt - first_attempt > 1800: continue lecture_id = quiz_to_lecture_id[quiz_id] video_time = quiz_id_to_video_time[int(quiz_id)] num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_simple_seek_events[str(lecture_id)]: seek_events = lecture_to_user_to_simple_seek_events[str(lecture_id)][user] for x in seek_events: timestamp = x['timestamp'] if int(first_attempt)*1000 <= int(timestamp) <= int(last_attempt)*1000: 
num_quiz_attempts_withseek += 1 #if x['direction'] == 'forward': direction = 'forward' if float(x['end']) < video_time: direction = 'backward' if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue first_attempt = attempts[0] last_attempt = attempts[len(attempts) - 1] if last_attempt - first_attempt > 1800: continue lecture_id = quiz_to_lecture_id[quiz_id] num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_seek_event_timestamps[str(lecture_id)]: seek_events = lecture_to_user_to_seek_event_timestamps[str(lecture_id)][user] for x in seek_events: if int(first_attempt)*1000 <= int(x) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) def current_and_prev_item(l): prev_item = None is_first = True for x in l: if is_first: is_first = False else: yield (prev_item, x) prev_item = x # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue lecture_id = quiz_to_lecture_id[quiz_id] for prev_attempt,next_attempt in current_and_prev_item(attempts): num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_seek_event_timestamps[str(lecture_id)]: seek_events = lecture_to_user_to_seek_event_timestamps[str(lecture_id)][user] for x in seek_events: if int(prev_attempt)*1000 <= int(x) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue lecture_id = quiz_to_lecture_id[quiz_id] video_time = quiz_id_to_video_time[int(quiz_id)] for prev_attempt,next_attempt in current_and_prev_item(attempts): if next_attempt - prev_attempt > 1800: continue num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_simple_seek_events[str(lecture_id)]: seek_events = lecture_to_user_to_simple_seek_events[str(lecture_id)][user] for x in seek_events: timestamp = x['timestamp'] if int(prev_attempt)*1000 <= int(timestamp) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 direction = 'forward' if float(x['end']) < video_time: direction = 'backward' #if x['direction'] == 'forward': if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, 
float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # + num_quiz_attempts_noseek = 0 num_quiz_attempts_withseek = 0 num_quiz_attempts_withseek_forward = 0 num_quiz_attempts_withseek_backward = 0 for user,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): if len(attempts) <= 1: continue lecture_id = quiz_to_lecture_id[quiz_id] video_time = quiz_id_to_video_time[int(quiz_id)] for prev_attempt,next_attempt in current_and_prev_item(attempts): if next_attempt - prev_attempt > 1800: continue num_quiz_attempts_noseek += 1 if user in lecture_to_user_to_simple_seek_chains[str(lecture_id)]: seek_events = lecture_to_user_to_simple_seek_chains[str(lecture_id)][user] for x in seek_events: timestamp = x['timestamp'] if int(prev_attempt)*1000 <= int(timestamp) <= int(last_attempt)*1000: num_quiz_attempts_withseek += 1 direction = 'forward' if float(x['end']) < video_time: direction = 'backward' #if x['direction'] == 'forward': if direction == 'forward': num_quiz_attempts_withseek_forward += 1 else: num_quiz_attempts_withseek_backward += 1 break # - print num_quiz_attempts_noseek print num_quiz_attempts_withseek, float(num_quiz_attempts_withseek) / (num_quiz_attempts_noseek + num_quiz_attempts_withseek) print num_quiz_attempts_withseek_forward, float(num_quiz_attempts_withseek_forward) / num_quiz_attempts_withseek print num_quiz_attempts_withseek_backward, float(num_quiz_attempts_withseek_backward) / num_quiz_attempts_withseek # + #print video_to_seek_chains.keys() # + delay_between_attempts = [] for user_id,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): first_submission = attempts[0] current_session_attempts = [x for x in attempts if abs(x - first_submission) < 1800] for x in current_session_attempts[1:]: delay_between_attempts.append(x - first_submission) # + #print user_to_quiz_to_attempts # + attempts_per_user = [] for user_id,quiz_to_attempts in user_to_quiz_to_attempts.items(): for quiz_id,attempts in quiz_to_attempts.items(): first_submission = attempts[0] current_session_attempts = [x for x in attempts if abs(x - first_submission) < 1800] attempts_per_user.append(len(current_session_attempts)) attempts_per_user = [min(x, 3) for x in attempts_per_user] print numpy.mean(attempts_per_user) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os from glob import glob import cv2 import numpy as np from tqdm import tqdm from xml.etree import ElementTree as ET from bidi.algorithm import get_display import arabic_reshaper from wordcloud import WordCloud _ns = {'p': 'http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15'} dataset_folder='pinkas_images/' xml_folder = 'pinkas_xml/' output_folder = 'pinkas_word_images/' # + def get_page_filename(image_filename: str) -> str: return os.path.join(os.path.dirname(image_filename), '{}.xml'.format(os.path.basename(image_filename)[:-4])) def get_basename(image_filename: str) -> str: directory, basename = os.path.split(image_filename) return '{}'.format( basename.split('.')[0]) def save_and_resize(img: np.array, 
                    filename: str, size=None) -> None:
    if size is not None:
        h, w = img.shape[:2]
        resized = cv2.resize(img, (int(w*size), int(h*size)), interpolation=cv2.INTER_LINEAR)
        cv2.imwrite(filename, resized)
    else:
        cv2.imwrite(filename, img)


def xml_to_coordinates(t):
    result = []
    for p in t.split(' '):
        values = p.split(',')
        assert len(values) == 2
        x, y = int(float(values[0])), int(float(values[1]))
        result.append((x, y))
    result = np.array(result)
    return result
# -

image_filenames_list = glob('{}*.jpg'.format(dataset_folder))

word_labels = []
for image_filename in image_filenames_list:
    img = cv2.imread(image_filename)
    page_filename = get_page_filename(image_filename)
    tree = ET.parse(xml_folder + os.path.basename(image_filename)[:-4] + '.xml')
    root = tree.getroot()
    # walk down the nested PAGE-XML structure to the word-level text elements
    for i in root:
        for j in i:
            for k in j:
                for l in k:
                    for m in l:
                        for n in m:
                            word_label = n.text
                            word_labels.append(word_label)


def save_wordcloud(word_labels):
    reshaped_text = arabic_reshaper.reshape(' '.join(filter(None, word_labels)))
    bidi_text = get_display(reshaped_text)
    wordcloud = WordCloud(font_path='arial.ttf', background_color='white', mode='RGB',
                          width=2000, height=1000, collocations=False).generate(bidi_text)
    wordcloud.to_file("pinkas_wordcloud.png")
    return


save_wordcloud(word_labels)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Exploratory data analysis and working with texts
#
# In this notebook, we learn about:
# 1. descriptive statistics to explore data;
# 2. working with texts

# # Part 1: descriptive statistics
#
# *"The goal of exploratory data analysis is to develop an understanding of your data. EDA is fundamentally a creative process. And like most creative processes, the key to asking quality questions is to generate a large quantity of questions."*
#
# Key questions:
# * Which kind of variation occurs within variables?
# * Which kind of co-variation occurs between variables?
#
# https://r4ds.had.co.nz/exploratory-data-analysis.html

# +
# imports
import os, codecs
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# -

# ## Import the dataset
# Let us import the Venetian apprenticeship contracts dataset in memory.

root_folder = "../data/apprenticeship_venice/"
df_contracts = pd.read_csv(codecs.open(os.path.join(root_folder, "professions_data.csv"), encoding="utf8"), sep=";")
df_professions = pd.read_csv(codecs.open(os.path.join(root_folder, "professions_classification.csv"), encoding="utf8"), sep=",")

# Let's take another look at the dataset.

df_contracts.info()

df_contracts.head(5)

df_contracts.columns

# Every row represents an apprenticeship contract. Contracts were registered both at the guild's and at a public office. This is a sample of contracts from a much larger set of records.
#
# Some of the variables we will work with are:
# * `annual_salary`: the annual salary paid to the apprentice, if any (in Venetian ducats).
# * `a_profession` to `corporation`: increasingly generic classifications for the apprentice's stated profession.
# * `startY` and `enrolmentY`: contract start and registration year respectively.
# * `length`: of the contract, in years.
# * `m_gender` and `a_gender`: of master and apprentice respectively.
# * `a_age`: age of the apprentice at entry, in years.
# * `female_guarantor`: if at least one of the contract's guarantors was female, boolean.
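# As a quick first look at co-variation between the gender variables listed above, one could cross-tabulate apprentice gender against master gender (a sketch, not part of the original notebook; it assumes `m_gender` and `a_gender` are 0/1 coded, as the proportion computations further below suggest).

pd.crosstab(df_contracts.a_gender, df_contracts.m_gender, normalize='index')  # row-normalised shares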
df_professions.head(3)

# The professions data frame contains a classification system for each profession as found in the records (transcription, first column). The last column is the guild (or corporation) which governed the given profession. This work was performed manually by historians. We don't use it here, as the classifications we need are already part of the main dataframe.

# ### Questions
#
# * Plot the distribution (histogram) of the apprentices' age, contract length, annual salary and start year.
# * Calculate the proportion of female apprentices and masters, and of contracts with a female guarantor.
# * How likely is it for a female apprentice to have a female master? And for a male apprentice?

df_contracts.annual_salary.hist(bins=100)

df_contracts[df_contracts.annual_salary < 20].annual_salary.hist(bins=25)

df_contracts.a_gender.sum() / df_contracts.shape[0]

df_contracts.m_gender.sum() / df_contracts.shape[0]

df_contracts[(df_contracts.a_gender == 0) & (df_contracts.startY < 1600)].m_gender.sum() / df_contracts[(df_contracts.a_gender == 0) & (df_contracts.startY < 1600)].shape[0]

df_contracts.startY.hist(bins=10)

# ## Looking at empirical distributions

df_contracts[df_contracts.annual_salary < 50].annual_salary.hist(bins=40)

df_contracts[df_contracts.a_age < 30].a_age.hist(bins=25)

# ### Two very important distributions

# #### Normal
#
# Also known as Gaussian, this is a bell-shaped distribution with mass around the mean and exponentially decaying tails. It is fully characterized by the mean (center of mass) and standard deviation (spread).
#
# https://en.wikipedia.org/wiki/Normal_distribution

s1 = np.random.normal(5, 1, 10000)

sns.distplot(s1)

# for boxplots see https://en.wikipedia.org/wiki/Interquartile_range (or ask!)
sns.boxplot(s1)

# #### Heavy-tailed
# Distributions with a small but non-negligible number of observations with high values. Several probability distributions follow this pattern: https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Common_heavy-tailed_distributions.
#
# We pick the lognormal here: https://en.wikipedia.org/wiki/Log-normal_distribution

s2 = np.random.lognormal(5, 1, 10000)

sns.distplot(s2)

sns.boxplot(s2)

# +
# Why "lognormal"?
sns.distplot(np.log(s2))
# -

# #### Box plots

# ### Outliers, missing values
#
# An *outlier* is an observation far from the center of mass of the distribution. It might be an error or a genuine observation: this distinction requires domain knowledge. Outliers influence the outcomes of several statistics and machine learning methods: it is important to decide how to deal with them.
#
# A *missing value* is an observation without a value. There can be many reasons for a missing value: the value might not exist (hence its absence is informative and it should be left empty) or might not be known (hence the value exists but is missing in the dataset and should be marked as NA).
#
# *One way to think about the difference is with this Zen-like koan: An explicit missing value is the presence of an absence; an implicit missing value is the absence of a presence.*

# ## Summary statistics
# A statistic is a measure over a distribution, and it is said to be *robust* if it is not sensitive to outliers.
#
# * Not robust: min, max, mean, standard deviation.
# * Robust: mode, median, other quartiles.
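# To make the robustness claim above concrete, here is a minimal sketch with made-up numbers (not course data): a single extreme value shifts the mean substantially while leaving the median almost unchanged.

# +
small_sample = np.array([4.8, 4.9, 5.0, 5.1, 5.2])
with_outlier = np.append(small_sample, 100.0)
print(np.mean(small_sample), np.median(small_sample))   # both close to 5
print(np.mean(with_outlier), np.median(with_outlier))   # mean jumps to ~20.8, median stays at 5.05
# -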
# # A closer look at the mean: # # $\bar{x} = \frac{1}{n} \sum_{i}x_i$ # # And variance (the standard deviation is the square root of the variance): # # $Var(x) = \frac{1}{n} \sum_{i}(x_i - \bar{x})^2$ # # + # Not robust: min, max, mean, mode, standard deviation print(np.mean(s1)) # should be 5 print(np.mean(s2)) # + # Robust: median, other quartiles print(np.quantile(s1, 0.5)) # should coincide with mean and mode print(np.quantile(s2, 0.5)) # - # #### Questions # # * Calculate the min, max, mode and sd. *hint: explore the numpy documentation!* # * Calculate the 90% quantile values. # * Consider our normally distributed data in s1. Add an outlier (e.g., value 100). What happens to the mean and mode? Write down your answer and then check. # Let's explore our dataset df_contracts[["annual_salary","a_age","length"]].describe() # ## Relating two variables # # #### Covariance # # Measure of joint linear variability of two variables: # # # # Its normalized version is called the (Pearson's) correlation coefficient: # # # # Correlation is helpful to spot possible relations, but is of tricky interpretation and is not exhaustive: # # # # See: https://en.wikipedia.org/wiki/Covariance and https://en.wikipedia.org/wiki/Pearson_correlation_coefficient. # # *Note: correlation is not causation!* df_contracts[["annual_salary","a_age","length"]].corr() sns.scatterplot(df_contracts.length,df_contracts.annual_salary) # #### Questions # # * Try to explore the correlation of other variables in the dataset. # * Can you think of a possible motivation for the trend we see: older apprentices with a shorter contract getting on average a higher annual salary? # ## Sampling and uncertainty (mention) # # Often, we work with samples and we want the sample to be representative of the population it is taken from, in order to draw conclusions that generalise from the sample to the full population. # # Sampling is *tricky*. Samples have *variance* (variation between samples from the same population) and *bias* (systematic variation from the population). # # Part 2: working with texts # # Let's get some basics (or a refresher) of working with texts in Python. Texts are sequences of discrete symbols (words or, more generically, tokens). # # Key challenge: representing text for further processing. Two mainstream approaches: # * *Bag of words*: a text is a collection of tokens occurring with a certain frequence and assumed independently from each other within the text. The mapping from texts to features is determinsitic and straighforward, each text is represented as a vector of the size of the vocabulary. # * *Embeddings*: a method is used (typically, neural networks), to learn a mapping from each token to a (usually small) vector representing it. A text can be represented in turn as an aggregation of these embeddings. # ## Import the dataset # Let us import the Elon Musk's tweets dataset in memory. # # root_folder = "../data/musk_tweets" df_elon = pd.read_csv(codecs.open(os.path.join(root_folder,"elonmusk_tweets.csv"), encoding="utf8"), sep=",") df_elon['text'] = df_elon['text'].str[1:] df_elon.head(5) df_elon.shape # ## Natural Language Processing in Python # import some of the most popular libraries for NLP in Python import spacy import nltk import string import sklearn # + # nltk.download('punkt') # - # A typical NLP pipeline might look like the following: # # # # ### Tokenization: splitting a text into constituent tokens. 
from nltk.tokenize import TweetTokenizer, word_tokenize tknzr = TweetTokenizer(preserve_case=True, reduce_len=False, strip_handles=False) example_tweet = df_elon.text[1] print(example_tweet) tkz1 = tknzr.tokenize(example_tweet) print(tkz1) tkz2 = word_tokenize(example_tweet) print(tkz2) # Question: can you spot what the Twitter tokenizer is doing instead of a standard one? string.punctuation # + # some more pre-processing def filter(tweet): # remove punctuation and short words and urls tweet = [t for t in tweet if t not in string.punctuation and len(t) > 3 and not t.startswith("http")] return tweet def tokenize_and_string(tweet): tkz = tknzr.tokenize(tweet) tkz = filter(tkz) return " ".join(tkz) # - print(tkz1) print(filter(tkz1)) df_elon["clean_text"] = df_elon["text"].apply(tokenize_and_string) df_elon.head(5) # + # save cleaned up version df_elon.to_csv(os.path.join(root_folder,"df_elon.csv"), index=False) # - # ### Building a dictionary from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer(lowercase=False, tokenizer=tknzr.tokenize) X_count = count_vect.fit_transform(df_elon.clean_text) X_count.shape word_list = count_vect.get_feature_names() count_list = X_count.toarray().sum(axis=0) dictionary = dict(zip(word_list,count_list)) count_vect.vocabulary_.get("robots") X_count[:,count_vect.vocabulary_.get("robots")].toarray().sum() dictionary["robots"] # #### Questions # # * Find the tokens most used by Elon. # * Find the twitter users most referred to by Elon (hint: use the @ handler to spot them). dictionary_list = sorted(dictionary.items(), key=lambda x:x[1], reverse=True) [d for d in dictionary_list][:10] dictionary_list_users = sorted(dictionary.items(), key=lambda x:x[1], reverse=True) [d for d in dictionary_list if d[0].startswith('@')][:10] # ### Representing tweets as vectors # # Texts are of variable length and need to be represented numerically in some way. Most typically, we represent them as *equally-sized vectors*. # # Actually, this is what we have already done! Let's take a closer look at `X_count` above.. # + # This is the first Tweet of the data frame df_elon.loc[0] # + # let's get the vector representation for this Tweet vector_representation = X_count[0,:] # + # there are 3 positions not to zero, as we would expect: the vector contains 1 in the columns related to the 3 words that make up the Tweet. # It would contain a number higher than 1 if a given word were occurring multiple times. np.sum(vector_representation) # + # Let's check that indeed the vector contains 1s for the right words # Remember, the vector has shape (1 x size of the vocabulary) print(vector_representation[0,count_vect.vocabulary_.get("robots")]) print(vector_representation[0,count_vect.vocabulary_.get("spared")]) print(vector_representation[0,count_vect.vocabulary_.get("humanity")]) # - # ### Term Frequency - Inverse Document Frequency # We can use boolean counts (1/0) and raw counts (as we did before) to represent a Tweet over the space of the vocabulary, but there exist improvements on this basic idea. 
For example, the TF-IDF weighting scheme: # # $tfidf(t, d, D) = tf(t, d) \cdot idf(t, D)$ # # $tf(t, d) = f_{t,d}$ # # $idf(t, D) = log \Big( \frac{|D|}{|{d \in D: t \in d}|} \Big)$ from sklearn.feature_extraction.text import TfidfVectorizer count_vect = TfidfVectorizer(lowercase=False, tokenizer=tknzr.tokenize) X_count_tfidf = count_vect.fit_transform(df_elon.clean_text) X_count_tfidf.shape X_count_tfidf[0,:].sum() X_count[0,:].sum() # #### Sparse vectors (mention) # How is Python representing these vectors in memory? Most of their cells are set to zero. # # We call any vector or matrix whose cells are mostly to zero *sparse*. # There are efficient ways to store them in memory. X_count_tfidf[0,:] # ### Spacy pipelines # # Useful to construct sequences of pre-processing steps: https://spacy.io/usage/processing-pipelines. # + # Install the required model # #!python -m spacy download en_core_web_sm # + # without this line spacy is not able to find the downloaded model # #!python -m spacy link --force en_core_web_sm en_core_web_sm # + # Load a pre-trained pipeline (Web Small): https://spacy.io/usage/models # #!python -m spacy download en_core_web_sm nlp = spacy.load('en_core_web_sm') # - # *.. the model’s meta.json tells spaCy to use the language "en" and the pipeline ["tagger", "parser", "ner"]. spaCy will then initialize spacy.lang.en.English, and create each pipeline component and add it to the processing pipeline. It’ll then load in the model’s data from its data directory and return the modified Language class for you to use as the nlp object.* # # Let's create a simple pipeline that does **lemmatization**, **part of speech tagging** and **named entity recognition** using spaCy models. # # *If you don't know what these NLP tasks are, please ask!* # + tweet_pos = list() tweet_ner = list() tweet_lemmas = list() for tweet in df_elon.text.values: spacy_tweet = nlp(tweet) local_tweet_pos = list() local_tweet_ner = list() local_tweet_lemmas = list() for sentence in list(spacy_tweet.sents): # --- lemmatization, remove punctuation and stop wors local_tweet_lemmas.extend([token.lemma_ for token in sentence if not token.is_punct | token.is_stop]) local_tweet_pos.extend([token.pos_ for token in sentence if not token.is_punct | token.is_stop]) for ent in spacy_tweet.ents: local_tweet_ner.append(ent) tweet_pos.append(local_tweet_pos) tweet_ner.append(local_tweet_ner) tweet_lemmas.append(local_tweet_lemmas) # - tweet_lemmas[0] tweet_pos[0] tweet_ner[0] # + # but it actually works! tweet_ner[3] # - # *Note: we are really just scratching the surface of spaCy, but it is worth knowing it's there.* # ### Searching tweets # # Once we have represented Tweets as vectors, we can easily find similar ones using basic operations such as filtering. target = 0 print(df_elon.clean_text[target]) condition = X_count_tfidf[target,:] > 0 print(condition) X_filtered = X_count_tfidf[:,np.ravel(condition.toarray())] X_filtered print(X_filtered) # + from scipy import sparse sparse.find(X_filtered) # - tweet_indices = list(sparse.find(X_filtered)[0]) # + print("TARGET: " + df_elon.clean_text[target]) for n, tweet_index in enumerate(list(set(tweet_indices))): if tweet_index != target: print(str(n) +")"+ df_elon.clean_text[tweet_index]) # - # #### Questions # # * Can you rank the matched tweets using their tf-idf weights, so to put higher weighted tweets first? # * Which limitations do you think a bag of words representation has? 
# * Can you spot any limitations of this approach based on similarity measures over bag of words representations? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ESG-score using a MiniMax Model # ## Problem setting: # As a Business-idea firm xy plans to launch a new app, in which clients are able to invest in a socially responsible manner. The app has to be able to compute for every portfolio an Environmental, Social and Governance (ESG) score between 0 and 100, where a higher score indicates a more responsible investment. Using an optimization process for each client a customized portfolio that takes into account the provided target values should be derived. Since firm xy is based in Switzerland, the initial investments universe comprises the stocks listed on SMI. The files prices1617.csv, prices18.csv and scores.csv contain the weekly closing prices of the SMI and its constituents for the years 2016-2017 and 2018 as well as the latest ESG scores for all SMI constituents, respectively. # Import packages import pandas as pd import numpy as np import gurobipy as gb import matplotlib.pyplot as plt # Define precision of number display pd.set_option('precision', 5) # Import prices weekly_prices = pd.read_csv('prices1617.csv', index_col='date') # Look at first five rows weekly_prices.head() # Compute weekly returns weekly_returns = weekly_prices / weekly_prices.shift(1) - 1 # Drop NA-return (first period) weekly_returns = weekly_returns.dropna() # Look at first five rows weekly_returns.head() # Compute expected returns all_expected_returns = weekly_returns.mean() all_expected_returns # Get array of stocks (without index SMI) stocks = weekly_returns.columns.drop('SMI') # ## Portfolio 1 (P1): # As a first idea for a portfolio selection an equally weighted portfolio approach is chosen. This portfolio is referred by P1. # In a first step the variance, mean-absolute deviation, Beta (relative to SMI), minimum return and the expected downside $(r_t =1\%)$ for the portfolio P1 as well as for the SMI are computed. # Create equally weighted portfolio weights_p1 = pd.Series(1 / len(stocks), index=stocks) weekly_returns['P1'] = weekly_returns[stocks].dot(weights_p1) # Calculate variance weekly_returns[['P1', 'SMI']].var() # Calculate mean-absolute deviation weekly_returns[['P1', 'SMI']].mad() # Compute the beta of P1 weekly_returns['P1'].cov(weekly_returns['SMI']) / weekly_returns['SMI'].var() # Compute the beta of SMI weekly_returns['SMI'].cov(weekly_returns['SMI']) / weekly_returns['SMI'].var() # Compute the minimum return weekly_returns[['P1', 'SMI']].min() # + # Define target return target_return = 0.01 # Compute expected downside with respect to given target return np.maximum(0, -(weekly_returns[['P1', 'SMI']] - target_return) ).sum() / weekly_returns['P1'].size # - # ### Note: # In a next step the ESG Environmental, ESG Governance and ESG Social scores of P1 are computed. # Import ratings scores = pd.read_csv('scores.csv', index_col='scores') # Look at first five rows scores.head() # Get ESG scores for P1 scores.dot(weights_p1) # ### Question: # Does a distinct relationship between the expected returns and the average ESG scores exist? Do more sustainable stocks tend to have greatly inferior performance than less sustainable stocks? 
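# Before plotting, a quick numeric check of the question above (a sketch, not part of the original analysis): the correlation between each stock's average ESG score and its expected return.

scores[stocks].mean().corr(all_expected_returns[stocks])  # a value close to zero would suggest no clear relationship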
# Get average ESG scores average_esg_score = scores.mean() # Visualize average ESG scores and expected returns in scatterplot plt.figure(figsize=(9, 3), dpi=120) plt.scatter(average_esg_score, all_expected_returns[stocks], alpha=0.6) plt.xlabel('ESG average score') plt.ylabel('Expected return'); # The data does not support the claim. It appears that no distinct relationship between the expected returns and the average ESG scores exist. # ## Portfolio 2 (P2): # As an alternative the firm xy has the idea to implement a MiniMax model as the underlying portfolio selection model (P2). Further a blacklist is introduced that consist of two pharmaceutical companies: Novartis and Roche. The weights for stocks on the blacklist must be zero in all subsequent portfolios. # # ### Note: # A MiniMax(MM) model trades off the minimum portfolio rerun against the expected portfolio return. Therefore the minimum portfolio return is a downside risk measure. It is possible to formulate a MM model as a linear optimization problem. # # Some technical characteristics of the MM model: # * Minimum return is homogeneous # * Minimum return is monotone # * Minimum return is subadditive # * Minimum return is translational invariant # # => Therefore the Minimum return is coherent # # # # Notation: # * $\alpha$ Minimum portfolio return over all scenarios # # \begin{align} # Max. \alpha\\ # \sum_{i=n}^n r_{il}x_i \geq \alpha \quad (l \in \Omega)\\ # \sum_{i=n}^n \bar r_ix_i = \mu \\ # \sum_{i=n}^n x_i =1 \\ # x_i \geq 0 \quad (i =1,...,n) \\ # \alpha \in \Re # \end{align} # Get array of scenarios scenarios = weekly_returns.index # Compute expected returns expected_returns = all_expected_returns[stocks] # Define target portfolio return target_return = all_expected_returns['SMI'] # + # Create model m = gb.Model('MM') # Add a variable for each stock x = pd.Series(m.addVars(stocks), index=stocks) # Add a variable which represents the minimum portfolio return minimum_return = m.addVar(lb=weekly_returns[stocks].min().min()) # Define portfolio return portfolio_return = expected_returns.dot(x) # Add objective function to model m.setObjective(minimum_return, gb.GRB.MAXIMIZE) # Add minimum return constraints for l in scenarios: m.addConstr(weekly_returns.loc[l, stocks].dot(x) >= minimum_return) # Add budget constraint m.addConstr(x.sum() == 1); # Add target return constraint target_return_constr = m.addConstr(portfolio_return == target_return) # Add blacklist constraints m.addConstr(x.loc['ROCHE'] == 0) m.addConstr(x.loc['NOVARTIS'] == 0); # Specify Gurobi options m.setParam('OutputFlag', 0) # Run optimization m.optimize() # Get weights of stocks in optimal solution weights_p2 = pd.Series([var.X for var in x], index=stocks) # Get ESG scores for P2 scores[stocks].dot(weights_p2) # - # ### Question: # What is the minimum return of P2 and how does it stand up compared to the SMI? # Get minimum return of P2 minimum_return_p2 = minimum_return.X minimum_return_p2 # Is minimum return of P2 bigger than minimum return of SMI? minimum_return.X > weekly_returns['SMI'].min() # The minimum return of P2 (-0.03953) is larger (less negative) than the minimum return of the SMI (-0.06360) and thus, the SMI is not efficient # Get weights of stocks in optimal solution weights_p2 # ## Note: # Adjust the target return constraint of the model such that the optimal portfolio has an expected return equal to or greater than the target return (P3). # # \begin{align} # Max. 
\alpha\\ # \sum_{i=n}^n r_{il}x_i \geq \alpha \quad (l \in \Omega)\\ # \sum_{i=n}^n \bar r_ix_i \geq \mu \\ # \sum_{i=n}^n x_i =1 \\ # x_i \geq 0 \quad (i =1,...,n) \\ # \alpha \in \Re # \end{align} # Remove old target return constraint m.remove(target_return_constr) # Add new target return constraint target_return_constr = m.addConstr(portfolio_return >= target_return) # Run optimization m.optimize() # Get minimum return of P3 minimum_return.X # The minimum return impoves from -3.95% in P2 to -3.2% in P3. # Get weights of stocks in optimal solution weights_p3 = pd.Series([var.X for var in x], index=stocks) weights_p3 # ## Portfolio 4 (P4) # In order to develop a variant of the minimax model that additionally takes into account the investors' target values for the three ESG score a separate penalty for each of the three ESG scores is implemented. If the portfolio score is equal to or greater than the target value, then the penalty is zero. If the portfolio score is smaller than the target value, then the penalty corresponds to the difference between the target value and the portfolio score. # $penalty_{ESG} = max(taregt ESG score - portfolio ESG score,0)$ # # However, firm xy decides that a linear increasing penalty is not appropriate. It should be ensured that large shortfalls are penalized heavily. The structure of the three penalty functions can be seen in the following graphs: # # ![title](picture.PNG) # # # Additionally the already introduced constraints have to hold: # * The portfolio return does not fall below a prespecified minimum portfolio return $\alpha$ in any scenario. # * The expected portfolio return is equal to or greater than a prespecified target return µ. # * The sum of the weights is equal to 1. # * All portfolio weights are non-negative. # * The weights for stocks on the blacklist must be zero. # # Formulate a linear optimization problem in which the sum of the three penalties (given the structures in the graphs) is minimized subject to all constraints. # ### Note: # The resulting optimization problem: # # Notation: # * $S_i^E$: ESG Envionmental score of stock i # * $S_i^S$: ESG Social score of stock i # * $S_i^G$: ESG Governance score of stock i # * $S_t^E$: Target ESG Environmental value # * $S_t^S$: Target ESG Socail value # * $S_t^G$: Target ESG Governance value # * $x_i$: weight of asset i # * $\delta_{Ej}$: Shortfall of portfolio ESG Environmental score with respect to target ESG Environmental value in jth segment of penalty function (j = 1; 2) # * $\delta_{Sj}$: Shortfall of portfolio ESG Social score with respect to target ESG Social value in jth segment of penalty function (j = 1; 2) # * $\delta_{Gj}$: Shortfall of portfolio ESG Governance score with respect to target ESG Governance value in jth segment of penalty function (j = 1; 2) # # # \begin{align} # Min. 
(8/10)\delta_{E1} + (12/10)\delta_{E2} + (10/10)\delta_{S1} + (20/10)\delta_{S2} + (4/10) \delta_{G1} + (8/10)\delta_{G2}\\ # \sum_{i=n}^n r_{il}x_i \geq \alpha \quad (l \in \Omega)\\ # \sum_{i=n}^n \bar r_ix_i \geq \mu \\ # \sum_{i=n}^n x_i =1 \\ # \sum_{i=n}^n S_i^E x_i \geq S_t^E - \delta_{E1} - \delta_{E2}\\ # \sum_{i=n}^n S_i^S x_i \geq S_t^S - \delta_{S1} - \delta_{S2}\\ # \sum_{i=n}^n S_i^G x_i \geq S_t^G - \delta_{G1} - \delta_{G2}\\ # \delta_{E1} \leq 10 \\ # \delta_{S1} \leq 10 \\ # \delta_{G1} \leq 10 \\ # x_i \geq 0 \quad (i =1,...,n) \\ # x_i = 0 \quad (i = Novartis, Roche)\\ # \delta_{E1},\delta_{E2},\delta_{S1},\delta_{S2},\delta_{G1},\delta_{G2} \geq 0\\ # \end{align} # # Define target ESG values S_t_E = 100 S_t_S = 100 S_t_G = 100 # + # Create model m = gb.Model('MM2') # Add a variable for each stock x = pd.Series(m.addVars(stocks), index=stocks) # Add variables for ESG environmental penalties delta_e1 = m.addVar() delta_e2 = m.addVar() # Add variables for ESG social penalties delta_s1 = m.addVar() delta_s2 = m.addVar() # Add variables for ESG governance penalties delta_g1 = m.addVar() delta_g2 = m.addVar() # Define minimum return minimum_return = minimum_return_p2 # Define portfolio return portfolio_return = expected_returns.dot(x) # Define portfolio environmental score portfolio_environmental_score = scores.loc['ESG Environmental', stocks].dot(x) # Define portfolio social score portfolio_social_score = scores.loc['ESG Social', stocks].dot(x) # Define portfolio governance score portfolio_governance_score = scores.loc['ESG Governance', stocks].dot(x) # Add objective function to model m.setObjective((8 / 10) * delta_e1 + (12 / 10) * delta_e2 + (10 / 10) * delta_s1 + (20 / 10) * delta_s2 + (4 / 10) * delta_g1 + (8 / 10) * delta_g2, gb.GRB.MINIMIZE) # Add constraints for ESG penalties to model ESG_environmental_penalty_constraint = m.addConstr( portfolio_environmental_score >= S_t_E - delta_e1 - delta_e2) ESG_social_penalty_constraint = m.addConstr( portfolio_social_score >= S_t_S - delta_s1 - delta_s2) ESG_governance_penalty_constraint = m.addConstr( portfolio_governance_score >= S_t_G - delta_g1 - delta_g2) m.addConstr(delta_e1 <= 10) m.addConstr(delta_s1 <= 10) m.addConstr(delta_g1 <= 10); # Add minimum return constraints for l in scenarios: m.addConstr(weekly_returns.loc[l, stocks].dot(x) >= minimum_return) # Add budget constraint m.addConstr(x.sum() == 1); # Add blacklist constraints m.addConstr(x.loc['ROCHE'] == 0) m.addConstr(x.loc['NOVARTIS'] == 0); # Add target return constraint m.addConstr(portfolio_return >= target_return); # Specify Gurobi options m.setParam('OutputFlag', 0) # Run optimization m.optimize() # Get portfolio return portfolio_return.getValue() # - # Get weights of stocks in optimal solution weights_p4 = pd.Series([var.X for var in x], index=stocks) weights_p4 # Get ESG ratings P3 scores[stocks].dot(weights_p3) # Get ESG Environmental score for P4 portfolio_environmental_score.getValue() # Get ESG Social score for P4 portfolio_social_score.getValue() # Get ESG Governance score for P4 portfolio_governance_score.getValue() # ### Question: # Assume that the model and the parameters $\alpha$ and $\mu$ are given from above. Is it possible to consturct a portfolio with its Envionmental score beeing at least 70 and 65 for the Social and Governance scores? If yes, what is the expected portfolio return? 
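# Before adjusting the target values below, it may help to see how the two delta variables per score reproduce the piecewise-linear penalty described above. This is a small numeric sketch, not part of the optimization model; the breakpoint of 10 and the slopes are taken from the delta constraints and the objective function of P4.

# +
def piecewise_penalty(shortfall, slope_1, slope_2, breakpoint=10):
    # the cheaper first segment is filled up first (capped at the breakpoint),
    # the more expensive second segment absorbs the rest of the shortfall
    delta_1 = min(max(shortfall, 0), breakpoint)
    delta_2 = max(shortfall - breakpoint, 0)
    return slope_1 * delta_1 + slope_2 * delta_2

# Environmental penalty: slope 8/10 up to a shortfall of 10, slope 12/10 beyond it
for shortfall in [0, 5, 10, 15]:
    print(shortfall, piecewise_penalty(shortfall, 8 / 10, 12 / 10))
# -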
# Adjust right-hand side of ESG penalty constraints
ESG_environmental_penalty_constraint.rhs = 70
ESG_social_penalty_constraint.rhs = 65
ESG_governance_penalty_constraint.rhs = 65

# Specify Gurobi options
m.setParam('OutputFlag', 0)

# Run optimization
m.optimize()

# Get portfolio return
portfolio_return.getValue()

# Get objective function value
m.ObjVal

# Get ESG Environmental score
portfolio_environmental_score.getValue()

# Get ESG Social score
portfolio_social_score.getValue()

# Get ESG Governance score
portfolio_governance_score.getValue()

# ### Note:
# Yes, such a portfolio can be constructed without incurring any penalty, which is indicated by the objective value being equal to zero. Such a portfolio would have an expected return of 0.18%.

# ### Question:
# Consider an investor whose main concern is achieving the highest possible Governance score. What would such a portfolio look like, and what are the associated ESG scores?

# Adjust right-hand side of ESG penalty constraints
ESG_environmental_penalty_constraint.rhs = 0
ESG_social_penalty_constraint.rhs = 0
ESG_governance_penalty_constraint.rhs = 100

# Specify Gurobi options
m.setParam('OutputFlag', 0)

# Run optimization
m.optimize()

# Get portfolio return
portfolio_return.getValue()

# Get objective function value
m.ObjVal

# Get ESG Environmental score
portfolio_environmental_score.getValue()

# Get ESG Social score
portfolio_social_score.getValue()

# Get ESG Governance score
portfolio_governance_score.getValue()

# ### Note:
# A portfolio ESG Governance score of 73.88 can be achieved. The portfolio ESG Social score takes a value of 66.65 and the portfolio ESG Environmental score a value of 90.11.

# ## Performance evaluation
# In order to evaluate our portfolios (P1-P4) and the SMI, we use a second data sample consisting of the weekly closing prices from the year 2018. Imagine that 1'000'000 CHF are invested in each portfolio and that it is possible to buy a fractional number of units.
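# The valuation logic used in the cells below can be summarised in one small helper (a sketch under the same assumptions: fractional units, bought at the first weekly closing price of the evaluation period and held unchanged).

def portfolio_value(weights, prices, start_date, budget=1000000):
    # units bought per stock at the start date, then marked to market at every weekly closing price
    units = budget * weights / prices.loc[start_date, weights.index]
    return prices[weights.index].dot(units)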
# Import prices weekly_prices_evaluation = pd.read_csv('prices18.csv', index_col='date') weekly_prices_evaluation.head() # get budget budget=1000000 # get budget per stock budget_per_stock_p1 = budget * weights_p1 budget_per_stock_p2 = budget * weights_p2 budget_per_stock_p3 = budget * weights_p3 budget_per_stock_p4 = budget * weights_p4 unit_per_stock_p1 = (budget_per_stock_p1 / weekly_prices_evaluation.loc['2018-01-05', stocks]) unit_per_stock_p2 = (budget_per_stock_p2 / weekly_prices_evaluation.loc['2018-01-05', stocks]) unit_per_stock_p3 = (budget_per_stock_p3 / weekly_prices_evaluation.loc['2018-01-05', stocks]) unit_per_stock_p4 = (budget_per_stock_p4 / weekly_prices_evaluation.loc['2018-01-05', stocks]) unit_smi = (budget / weekly_prices_evaluation.loc['2018-01-05', 'SMI']) # Convert index from string to date weekly_prices_evaluation.index = pd.to_datetime(weekly_prices_evaluation.index) # Compute value development of portfolios value_p1_evaluation = weekly_prices_evaluation[stocks].dot(unit_per_stock_p1) value_p2_evaluation = weekly_prices_evaluation[stocks].dot(unit_per_stock_p2) value_p3_evaluation = weekly_prices_evaluation[stocks].dot(unit_per_stock_p3) value_p4_evaluation = weekly_prices_evaluation[stocks].dot(unit_per_stock_p4) value_smi_evaluation = weekly_prices_evaluation['SMI'] * unit_smi # %matplotlib inline # Line plot of value development of portfolios plt.figure(figsize=(17, 5), dpi=120) plt.plot(value_p1_evaluation, label='P1') plt.plot(value_p2_evaluation, label='P2') plt.plot(value_p3_evaluation, label='P3') plt.plot(value_p4_evaluation, label='P4') plt.plot(value_smi_evaluation, label='SMI') plt.axhline(1000000, color='b', linestyle='--') plt.xlabel('Date') plt.ylabel('Value CHF') plt.legend(); # Get values on 28th December, 2018 print('Value P1:', value_p1_evaluation.loc['2018-12-28'].round(2)) print('Value P2:', value_p2_evaluation.loc['2018-12-28'].round(2)) print('Value P3:', value_p3_evaluation.loc['2018-12-28'].round(2)) print('Value P4:', value_p4_evaluation.loc['2018-12-28'].round(2)) print('Value SMI:', value_smi_evaluation.loc['2018-12-28'].round(2)) # ## Question: # Compute portfolio return and the minimum portfolio return of P1-P4 and the SMI for the years 2016-2017 as well as for the year 2018 and compare them to one another? 
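# A compact way to tabulate the comparison asked for above (a sketch; the cells below compute the same quantities portfolio by portfolio): turn each value series into weekly returns and aggregate mean and minimum in one DataFrame.

def return_summary(value_series_by_name):
    # value_series_by_name: dict mapping a label (e.g. 'P1') to a weekly portfolio value series
    returns = pd.DataFrame({name: v / v.shift(1) - 1 for name, v in value_series_by_name.items()}).dropna()
    return returns.agg(['mean', 'min'])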
# ### 2016-2017 # Get unit per stock (assumption: investment of 1,000,000 CHF at beginning of year 2016) unit_per_stock_p1 = (budget_per_stock_p1 / weekly_prices.loc['2016-01-01', stocks]) unit_per_stock_p2 = (budget_per_stock_p2 / weekly_prices.loc['2016-01-01', stocks]) unit_per_stock_p3 = (budget_per_stock_p3 / weekly_prices.loc['2016-01-01', stocks]) unit_per_stock_p4 = (budget_per_stock_p4 / weekly_prices.loc['2016-01-01', stocks]) unit_smi = (budget / weekly_prices.loc['2016-01-01', 'SMI']) # Compute value development of portfolios value_p1 = weekly_prices[stocks].dot(unit_per_stock_p1) value_p2 = weekly_prices[stocks].dot(unit_per_stock_p2) value_p3 = weekly_prices[stocks].dot(unit_per_stock_p3) value_p4 = weekly_prices[stocks].dot(unit_per_stock_p4) value_smi = weekly_prices['SMI'] * unit_smi # Compute weekly returns for P1-P4 return_p1 = value_p1 / value_p1.shift(1) - 1 return_p2 = value_p2 / value_p2.shift(1) - 1 return_p3 = value_p3 / value_p3.shift(1) - 1 return_p4 = value_p4 / value_p4.shift(1) - 1 return_smi = value_smi / value_smi.shift(1) - 1 # Drop NA-return in first period return_p1 = return_p1.dropna() return_p2 = return_p2.dropna() return_p3 = return_p3.dropna() return_p4 = return_p4.dropna() return_smi = return_smi.dropna() # Get average portfolio return (in-sample) print('In-sample') print('Average return P1:', return_p1.mean()) print('Average return P2:', return_p2.mean()) print('Average return P3:', return_p3.mean()) print('Average return P4:', return_p4.mean()) print('Average return SMI:', return_smi.mean()) # Get minimum portfolio return (in-sample) print('In-sample') print('Minimum return P1:', return_p1.min()) print('Minimum return P2:', return_p2.min()) print('Minimum return P3:', return_p3.min()) print('Minimum return P4:', return_p4.min()) print('Minimum return SMI:', return_smi.min()) # ### 2018 # Compute weekly returns for P1-P4 return_p1_evaluation = value_p1_evaluation / value_p1_evaluation.shift(1) - 1 return_p2_evaluation = value_p2_evaluation / value_p2_evaluation.shift(1) - 1 return_p3_evaluation = value_p3_evaluation / value_p3_evaluation.shift(1) - 1 return_p4_evaluation = value_p4_evaluation / value_p4_evaluation.shift(1) - 1 return_smi_evaluation = value_smi_evaluation / value_smi_evaluation.shift(1) - 1 # Drop NA-return in first period return_p1_evaluation = return_p1_evaluation.dropna() return_p2_evaluation = return_p2_evaluation.dropna() return_p3_evaluation = return_p3_evaluation.dropna() return_p4_evaluation = return_p4_evaluation.dropna() return_smi_evaluation = return_smi_evaluation.dropna() # Get average portfolio return (out-of-sample) print('Out-of-sample') print('Average return P1:', return_p1_evaluation.mean()) print('Average return P2:', return_p2_evaluation.mean()) print('Average return P3:', return_p3_evaluation.mean()) print('Average return P4:', return_p4_evaluation.mean()) print('Average return SMI:', return_smi_evaluation.mean()) # Get minimum portfolio return (out-of-sample) print('Out-of-sample') print('Minimum return P1:', return_p1_evaluation.min()) print('Minimum return P2:', return_p2_evaluation.min()) print('Minimum return P3:', return_p3_evaluation.min()) print('Minimum return P4:', return_p4_evaluation.min()) print('Minimum return SMI:', return_smi_evaluation.min()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import 
seaborn as sns import matplotlib.pyplot as plt import pickle # %matplotlib inline # - # Hides the pink warnings import warnings warnings.filterwarnings('ignore') def data_formatter(path_pkl, column, aug_type): #with open(path, 'rb') as f: # df = pd.read_pickle(f) pickle_file = open(path_pkl, "rb") df = pd.DataFrame(pickle.load(pickle_file)) pickle_file.close() if column == 'shifts': df['shifts'] = list(range(1,23)) elif column == 'pixels_lost': if aug_type == 'Linear': df['pixels_lost'] = pd.Series(list(range(1,23))) * 28 elif aug_type == 'Diagonal' or aug_type == 'Combined': df['pixels_lost'] = pd.Series(list(range(1, 23))) * 28 * 2 elif column == 'percentage_lost': if aug_type == 'Linear': pixels_lost = pd.Series(list(range(1,23))) * 28 elif aug_type == 'Diagonal' or aug_type == 'Combined': pixels_lost = pd.Series(list(range(1,23))) * 28 * 2 df['percentage_lost'] = round((pixels_lost/784), 3) df = pd.melt(df, id_vars = [column], value_vars = list('12345'), value_name = 'accuracy') df.drop('variable', axis = 1, inplace = True) df['aug_type'] = [aug_type] * len(df) return df def shifts_visualizer_function(linear_file, diagonal_file, combined_file, column): linear_data = data_formatter(linear_file, column, 'Linear') linear_data_mean = linear_data.groupby(['shifts']).mean().reset_index() linear_data_var = linear_data.groupby(['shifts']).var().reset_index() diagonal_data = data_formatter(diagonal_file, column, 'Diagonal') diagonal_data_mean = diagonal_data.groupby(['shifts']).mean().reset_index() diagonal_data_var = diagonal_data.groupby(['shifts']).var().reset_index() combined_data = data_formatter(combined_file, column, 'Combined') combined_data_mean = combined_data.groupby(['shifts']).mean().reset_index() combined_data_var = combined_data.groupby(['shifts']).var().reset_index() fig = plt.figure(figsize=(20,10)) fig_1 = fig.add_subplot(211) fig_1.plot(linear_data_mean['shifts'], linear_data_mean['accuracy'], label = 'Linear Augmentation') fig_1.plot(diagonal_data_mean['shifts'], diagonal_data_mean['accuracy'], label = 'Diagonal Augmentation') fig_1.plot(combined_data_mean['shifts'], combined_data_mean['accuracy'], label = 'Combined Augmentation') plt.title("Mean Accuracy per Shift") plt.legend() fig_2 = fig.add_subplot(212) fig_2.plot(linear_data_var['shifts'], linear_data_var['accuracy'], label = 'Linear Augmentation') fig_2.plot(diagonal_data_var['shifts'], diagonal_data_var['accuracy'], label = 'Diagonal Augmentation') fig_2.plot(combined_data_var['shifts'], combined_data_var['accuracy'], label = 'Combined Augmentation') plt.title("Mean Variance Accuracy per Shift") plt.legend() plt.savefig('results_single/single_visualization.png') plt.show() # ## Le-Net 5 Visualizations linear_pkl = 'results_single/linear_non_augmented_test/performance_linear_non_augmented_test.pkl' diagonal_pkl = 'results_single/diagonal_non_augmented_test/performance_diagonal_non_augmented_test.pkl' combined_pkl = 'results_single/combined_non_augmented_test/performance_combined_non_augmented_test.pkl' # ### Shifts shifts_visualizer_function(linear_pkl, diagonal_pkl, combined_pkl, 'shifts') # ### Maximum image pixel loss # ### Maximum % of pixels losed # ## Support Vector Machine svm_linear_pkl = 'results/svm_linear_non_augmented_test/svm_performance_linear_non_augmented_test.pkl' svm_diagonal_pkl = 'results/svm_diagonal_non_augmented_test/svm_performance_diagonal_non_augmented_test.pkl' svm_combined_pkl = 'results/svm_combined_non_augmented_test/svm_performance_combined_non_augmented_test.pkl' # ### Shifts 
shifts_visualizer_function(svm_linear_pkl, svm_diagonal_pkl, svm_combined_pkl, 'shifts') # ### Maximum image pixel loss # ### Maximum % of pixels losed # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import numpy as np import MyML.EAC.full as myEacFull import MyML.EAC.sparse as myEacSp import MyML.EAC.eac as myEac import MyML.EAC.eac_new as myNewEac import MyML.cluster.K_Means3 as myKM import MyML.helper.partition as myEacPart import MyML.metrics.accuracy as myAcc import matplotlib.pyplot as plt # %matplotlib inline # - reload(myEacFull) reload(myEacPart) reload(myEacSp) reload(myEac) reload(myNewEac) # + # rules for picking kmin kmax def rule1(n): """sqrt""" k = [np.sqrt(n)/2, np.sqrt(n)] k = map(np.ceil,k) k = map(int, k) return k def rule2(n): """2sqrt""" k = map(lambda x:x*2,rule1(n)) return k def rule3(n, sk, th): """fixed s/k""" k = [n * 1.0 / sk, th * n * 1.0 / sk] k = map(np.ceil,k) k = map(int, k) return k def rule4(n): """sk=sqrt/2,th=30%""" return rule3(n, sk1(n), 1.3) def rule5(n): """sk=300,th=30%""" return rule3(n,300, 1.3) # rules for picking number of samples per cluster def sk1(n): """sqrt/2""" return int(np.sqrt(n) / 2) # + from sklearn.datasets import make_blobs n_samples = 100000 data,gt = make_blobs(n_samples, centers=4) # %time data = data.astype(np.float32) n_clusts = rule4(n_samples) plt.plot(data[:,0], data[:,1], '.') generator = myKM.K_Means() generator._MAX_THREADS_BLOCK = 256 # %time ensemble = myEacPart.generateEnsemble(data, generator, n_clusts,npartitions=100) # - myEacSp._compute_max_assocs_from_ensemble(ensemble)*3 # + # %time fullEAC = myNewEac.EAC(n_samples, sparse=True, condensed=False, sparse_keep_degree=True) fullEAC.sp_max_assocs_mode = "constant" # %time fullEAC.buildMatrix(ensemble) # %time spEAC = myNewEac.EAC(n_samples, sparse=True, condensed=True, sparse_keep_degree=True) spEAC.sp_max_assocs_mode = "constant" # %time spEAC.buildMatrix(ensemble) # - import seaborn as sns sns.set_style("darkgrid") sns.set_palette(sns.color_palette("deep", 6)) # + fig1 = plt.figure(figsize=(16,6)) ax = fig1.add_subplot(1,2,1) x = np.arange(n_samples) y = spEAC.coassoc.degree[:-1] ax.plot(y,x, 'k', alpha=1) ax.fill_between(y,0,x,color='k', alpha=1) ax.plot([spEAC.sp_max_assocs, spEAC.sp_max_assocs], [0, n_samples], 'r', label="max. number of assocs.") ax.set_title("Number of associations per sample on a condensed matrix") ax.set_ylabel("Sample no.") ax.set_xlabel("Number of associations") plt.gca().invert_yaxis() ax.legend(loc="best") ax = fig1.add_subplot(1,2,2) x = np.arange(n_samples) y = fullEAC.coassoc.degree[:-1] ax.plot(y,x, 'k', alpha=1) ax.fill_between(y,0,x,color='k', alpha=1) ax.plot([spEAC.sp_max_assocs, spEAC.sp_max_assocs], [0, n_samples], 'r', label="max. number of assocs.") ax.set_title("Number of associations per sample on a condensed matrix") ax.set_ylabel("Sample no.") ax.set_xlabel("Number of associations") plt.gca().invert_yaxis() ax.legend(loc="best") # + fig_cond = plt.figure() x = np.arange(n_samples) y = spEAC.coassoc.degree[:-1] plt.barh(x, y) plt.plot([spEAC.sp_max_assocs, spEAC.sp_max_assocs], [0, n_samples], 'r', label="max. 
number of assocs.") plt.title("Number of associations per sample on a condensed matrix") plt.ylabel("Sample no.") plt.xlabel("Number of associations") plt.gca().invert_yaxis() plt.legend(loc="best") # - from MyML.helper.plotting import save_fig save_fig(fig_cond, "/home/chiroptera/eac_csr_cond_degree") save_fig(fig_full, "/home/chiroptera/eac_csr_full_degree") # + fig_full = plt.figure() x = np.arange(n_samples) y = fullEAC.coassoc.degree[:-1] plt.barh(x, y) plt.plot([spEAC.sp_max_assocs, spEAC.sp_max_assocs], [0, n_samples], 'r', label="max. number of assocs.") plt.title("Number of associations per sample on a complete matrix") plt.ylabel("Sample no.") plt.xlabel("Number of associations") plt.gca().invert_yaxis() plt.legend(loc="best") # - # %time fullEAC.finalClustering(n_clusters=0) # %time spEAC.finalClustering(n_clusters=0) # b MyML/EAC/eac_new.py:606 # b MyML/EAC/eac_new.py:556 # + histFig = plt.figure(figsize=(16,6)) ax = histFig.add_subplot(1,2,1) ax.hist(fullEAC.labels,bins=fullEAC.n_fclusts) ax.set_title("Full coassoc") print "full bincount", np.bincount(fullEAC.labels) ax = histFig.add_subplot(1,2,2) ax.hist(spEAC.labels,bins=spEAC.n_fclusts) ax.set_title("Sparse coassoc") print "full bincount", np.bincount(spEAC.labels) # - acc = myAcc.HungarianIndex(n_samples) acc.score(gt, fullEAC.labels) acc.accuracy acc = myAcc.HungarianIndex(n_samples) acc.score(gt, spEAC.labels) acc.accuracy acc2 = myAcc.ConsistencyIndex(N=n_samples) acc2.score(gt, fullEAC.labels) acc2 = myAcc.ConsistencyIndex(N=n_samples) acc2.score(gt, spEAC.labels) for c in xrange(fullEAC.n_fclusts): c_idx = fullEAC.labels == c plt.plot(data[c_idx,0], data[c_idx,1], '.') for c in xrange(spEAC.n_fclusts): c_idx = spEAC.labels == c plt.plot(data[c_idx,0], data[c_idx,1], '.') (fullEAC.coassoc.todense() == (spEAC.coassoc.todense() + spEAC.coassoc.todense().T)).all() # %time full = myEacFull.EAC_FULL(n_samples, condensed=False) # %time full.update_ensemble(ensemble) # %time cfull = myEacFull.EAC_FULL(n_samples, condensed=True) # %time cfull.update_ensemble(ensemble) (full.coassoc == cfull.todense() + ).all() max_assocs = myEacSp._compute_max_assocs_from_ensemble(ensemble) * 3 # %time sp = myEacSp.EAC_CSR(n_samples, max_assocs, condensed=False, sort_mode="numpy") # %time sp.update_ensemble(ensemble) max_assocs = myEacSp._compute_max_assocs_from_ensemble(ensemble) * 3 # %time sp2 = myEacSp.EAC_CSR(n_samples, max_assocs, condensed=False, sort_mode="surgical") # %time sp2.update_ensemble(ensemble) sp._condense(keep_degree=True) sp2._condense(keep_degree=True) print (sp.todense() == sp2.todense()).all() print (sp.todense() == full.coassoc).all() max_assocs = myEacSp._compute_max_assocs_from_ensemble(ensemble) * 3 # %time csp = myEacSp.EAC_CSR(n_samples, max_assocs, condensed=True, sort_mode="numpy") # %time csp.update_ensemble(ensemble) max_assocs = myEacSp._compute_max_assocs_from_ensemble(ensemble) * 3 # %time csp2 = myEacSp.EAC_CSR(n_samples, max_assocs, condensed=True, sort_mode="surgical") # %time csp2.update_ensemble(ensemble) # + csp._condense(keep_degree=True) csp2._condense(keep_degree=True) print "condensed numpy", ((csp.todense() + csp.todense().T) == full.coassoc).all() print "condensed surgical", ((csp2.todense() + csp2.todense().T) == full.coassoc).all() # - max_assocs = myEacSp._compute_max_assocs_from_ensemble(ensemble) * 3 # %time csp2 = myEacSp.EAC_CSR(n_samples, max_assocs, max_assocs_type='linear', condensed=True, sort_mode="surgical") # %time csp2.update_ensemble(ensemble) csp2._condense(keep_degree=True) 
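# The checks below verify that the condensed (upper-triangular) CSR co-association
# matrix, once added to its own transpose, reproduces the full co-association
# matrix built by EAC_FULL.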
print "condensed surgical", ((csp2.todense() + csp2.todense().T) == full.coassoc).all() # # Linkage sparse import scipy.sparse.csgraph as csgraph reload(csgraph) mst.indptr from scipy.sparse.csgraph._validation import validate_graph c2 = validate_graph(coassoc, True, dtype=coassoc.dtype, dense_output=False) c2 import scipy_numba.sparse.csgraph._min_spanning_tree as myMST reload(myMST) # + coassoc2 = spEAC.coassoc.csr.copy() coassoc2.data = coassoc2.data.max() + 1 - coassoc2.data # %time mst2 = myMST.minimum_spanning_tree(coassoc2) # - coassoc = spEAC.coassoc.csr.copy() coassoc.data = coassoc.data.max() + 1 - coassoc.data # + coassoc = spEAC.coassoc.csr.copy() coassoc.data = coassoc.data.max() + 1 - coassoc.data # %time mst = csgraph._min_spanning_tree.minimum_spanning_tree(coassoc) # - mst = mst.astype(mst2.dtype) coassoc_val = csgraph._validation.validate_graph(coassoc, True, dense_output=False) coassoc_val.data coassoc.data[0]=95 coassoc_val.indices a = csgraph.connected_components(mst) fig = plt.figure(figsize=(16,6)) ax = fig.add_subplot(1,2,1) plt.hist(mst.data, bins=mst.max()) ax = fig.add_subplot(1,2,2) plt.hist(mst2.data, bins=mst2.max()) # inputs n_clusters = 4 n_partitions = 100 # + # converting to diassociations coassoc.data = coassoc.data.max() + 1 - coassoc.data # get minimum spanning tree mst = csgraph.minimum_spanning_tree(coassoc).astype(np.uint8) # compute number of disconnected components n_disconnect_clusters = mst.shape[0] - mst.nnz # sort associations by weights asort = mst.data.argsort() sorted_weights = mst.data[asort] if n_clusters == 0: # compute lifetimes lifetimes = sorted_weights[1:] - sorted_weights[:-1] # add 1 to n_partitions as the maximum weight because I also added # 1 when converting to diassoc to avoid having zero weights disconnect_lifetime = n_partitions + 1 - sorted_weights[-1] # maximum lifetime m_index = np.argmax(lifetimes) th = lifetimes[m_index] # get number of clusters from lifetimes indices = np.where(mst.data > mst.data[m_index])[0] if indices.size == 0: cont = 1 else: cont = indices.size + 1 #testing the situation when only 1 cluster is present # if maximum lifetime is smaller than 2 times the minimum # don't make any cuts (= 1 cluster) # max>2*min_interval -> nc=1 close_to_zero_indices = np.where(np.isclose(lifetimes, 0)) minimum = np.min(lifetimes[close_to_zero_indices]) if th < 2 * minimum: cont = 1 # add disconnected clusters to number of clusters if disconnected # lifetime is smaller if th > disconnect_lifetime: cont += n_disconnect_clusters else: cont = n_disconnect_clusters nc_stable = cont else: nc_stable = n_clusters n_comps, labels = csgraph.connected_components(mst) # - print n_comps, labels plt.hist(labels, bins=n_comps, log=True) for c in xrange(n_comps): c_idx = labels == c plt.plot(data[c_idx,0],data[c_idx,1],'.') nc_stable nc_stable if n_clusters == 0: # lifetime is here computed as the distance difference between # any two consecutive nodes, i.e. 
the distance between passing # from n to n-1 clusters lifetimes = Z[1:,2] - Z[:-1,2] m_index = np.argmax(lifetimes) # Z is ordered in increasing order by weight of connection # the connection whose weight is higher than the one specified # by m_index MUST be the one from which the jump originated the # maximum lifetime; all connections above that (and including) # will be removed for the final clustering indices = np.where(Z[:,2] > Z[m_index, 2])[0] #indices = np.arange(m_index+1, Z.shape[0]) if indices.size == 0: cont = 1 else: cont = indices.size + 1 # store maximum lifetime th = lifetimes[m_index] #testing the situation when only 1 cluster is present # if maximum lifetime is smaller than 2 times the minimum # don't make any cuts (= 1 cluster) #max>2*min_interval -> nc=1 close_to_zero_indices = np.where(np.isclose(lifetimes, 0)) minimum = np.min(lifetimes[close_to_zero_indices]) if th < 2 * minimum: cont = 1 nc_stable = cont else: nc_stable = n_clusters if nc_stable > 1: # only the labels are of interest labels = labels_from_Z(Z, n_clusters=nc_stable) # rename labels i=0 for l in np.unique(labels): labels[labels == l] = i i += 1 else: labels = np.zeros(self.n_samples, dtype = np.int32) self.labels_ = labels return labels # # linkage full from scipy.cluster.hierarchy import linkage coassoc = cfull.coassoc.copy() # + myEac.make_diassoc_1d(coassoc, n_partitions) # apply linkage # %time Z = myEac.linkage(coassoc, method="average") # - Z labels = myEac.labels_from_Z(Z, n_clusters=100) # rename labels i=0 for l in np.unique(labels): labels[labels == l] = i i += 1 plt.hist(Z[:,2], bins=Z.shape[0]) print labels plt.hist(labels, bins=100, log=True) # # plot stuff # + fig1 = plt.figure(figsize=(16,6)) ax = fig1.add_subplot(1,2,1) x = np.arange(n_samples) y = fullEAC.coassoc.degree[:-1] x_max = [0, n_samples] y_max = [fullEAC.sp_max_assocs, fullEAC.sp_max_assocs] ax.plot(x, y, 'k', alpha=1) ax.fill_between(x, 0, y, color='k', alpha=1) ax.plot(x_max, y_max, 'r', label="max. number of assocs.") ax.set_title("Number of associations per sample on a condensed matrix") ax.set_xlabel("Sample no.") ax.set_ylabel("Number of associations") #ax.legend(loc=4) ################################################ ax = fig1.add_subplot(1,2,2) x = np.arange(n_samples) y = spEAC.coassoc.degree[:-1] lin_indptr = myEacSp.indptr_linear(n_samples, fullEAC.sp_max_assocs, spEAC.coassoc.n_s, spEAC.coassoc.n_e, spEAC.coassoc.val_s, spEAC.coassoc.val_e) y_lin = lin_indptr[1:] - lin_indptr[:-1] fit = np.polyfit(x,y,1) # linear regression of degree fit_fn = np.poly1d(fit) y_reg = fit_fn(x) y_reg = np.where(y_reg<0,0,y_reg) ax.plot(x, y, color='k') ax.fill_between(x, 0, y, color='k', alpha=1) ax.plot(x, y_reg, label="linear regression of no. of assocs.") ax.plot(x, y_lin, label="pre-allocated no. assocs.") ax.plot(x_max, y_max, label="max. 
number of assocs.") ax.set_title("Number of associations per sample on a condensed matrix") ax.set_xlabel("Sample no.") ax.set_ylabel("Number of associations") ax.legend(loc=[0.5,0.7]) # - save_fig(fig1, "/home/chiroptera/eac_csr_cond_full_degree_100k") x = np.arange(1,10,0.05) y = 1 / np.sqrt(x) y = y.max() -y y = y / y.max() plt.plot(x,y) plt.plot(x,y[::-1]) #plt.plot(x,np.sqrt(x)[::-1]/np.sqrt(x).max()) y.sum() x = np.arange(0,2,0.05) y = np.exp(x) y /= y.max() y = 1-y plt.plot(x,y) reload(myEacSp) y, y_sum = myEacSp.linear(n_samples, max_assocs, 0.1, 1, 1, 0.02) total_area = n_samples * max_assocs y_sum * 1.0 / total_area # + fig = plt.figure(figsize=(8,6)) x = np.arange(n_samples) ax = fig.add_subplot(1,1,1) ax.bar(x, csp.degree[:-1]) ax.plot([0,n_samples],[max_assocs, max_assocs]) #ax.plot(np.arange(n_samples), max_assoc_scheme) ax.plot(x,y) ax.set_title("full sparse matrix") ax.set_xlabel("sample id") # - import MyML.helper.scan as myScan reload(myScan) y_c = y.copy() y_sum y_c myScan.exprefixsumNumbaSingle(y) class testme: def __init__(self,**kwargs): self.val2=kwargs.get("val2", 50) a=testme(val2=100) a.val2 y ((y[1:] - y[:-1]) == y_c[:-1]).all() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # DataScience Course $\circ$ Python Programming $\circ$ Numpy Arrays # By Dr # #
    # ### Topics # - Using numpy # - package installation # - numpy import # - numpy arrays # - linspace # - eye # - random # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline rs = np.random.RandomState(0) M = rs.randint(1, 100, 20) M_cumsum = np.ones(20)*M.mean() def plot_M(): plt.figure(figsize=(10,5)) x = list(range(1, len(M)+1)) sns.barplot(x, M) sns.pointplot(x, M_cumsum, markers='+') ; # - # ## Exercises # - Run provided codes prior to the exercise question # - Add a **new cell** to try your code # - Type in Python code in the box to get the results as displayed right below # # __Warning__: results will be replaced, so make sure you add a new cell to try your code # **$\Rightarrow$ # Import the package `numpy` as `np` # ** # **$\Rightarrow$ # Use `arange` to create a numpy array of 100 items, assign it to `x` # ** # **$\Rightarrow$ # Use `linspace` to create a numpy array of 100 items, assign it to `y` # ** # **$\Rightarrow$ # Generate a [10, 5] matrix (2d array) of normally distributed numbers, assign it to `M1` # ** # **$\Rightarrow$ # Compute the mean along all the 10 rows of `M` # ** # **$\Rightarrow$ # Compute the max along all the 5 columns of `M` # ** # **$\Rightarrow$ # Create the following matrix `M2` with the help of `eye`, `ones`, and `arange` functions # ** # ``` # array([[ 2., 2., 3., 4., 5.], # [ 6., 8., 8., 9., 10.], # [ 11., 12., 14., 14., 15.], # [ 16., 17., 18., 20., 20.], # [ 21., 22., 23., 24., 26.]]) # ``` # **$\Rightarrow$ # Find the index of the maximum value in `M` # ** # ** OPTIONAL $\Rightarrow$ # Compute the cumulative sum of `M` using the method `cumsum` and assign it to `M_cumsum` # Then rerun the function `plot_M()` # ** #
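# A possible construction for the `M2` exercise above (a sketch only, one of several
# valid answers; it uses `arange` and `eye`, with `ones` not strictly required):

# +
M2 = np.arange(1, 26, dtype=float).reshape(5, 5) + np.eye(5)
M2
# -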
    # ## Finished ! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: cosmopipe-dev # language: python # name: cosmopipe-dev # --- # # Jackknife examples # # In this notebook we will show how to estimate the covariance of the correlation function based on jackknife estimates, following https://arxiv.org/pdf/2109.07071.pdf. # Look first at notebook basic_examples.ipynb to understand the *pycorr* API. # + import os import tempfile import numpy as np from matplotlib import pyplot as plt from pycorr import TwoPointCorrelationFunction, TwoPointEstimator, project_to_multipoles, project_to_wp, utils, setup_logging # To activate logging setup_logging() # - def generate_catalogs(size=10000, boxsize=(1000,)*3, offset=(1000., 0., 0.), n_individual_weights=1, seed=42): rng = np.random.RandomState(seed=seed) positions = [o + rng.uniform(0., 1., size)*b for o, b in zip(offset, boxsize)] weights = [rng.uniform(0.5, 1., size) for i in range(n_individual_weights)] return positions, weights # First, generate fake data with cartesian positions and weights boxsize = np.full(3, 1000.) boxcenter = np.full(3, 1000.) offset = boxcenter - boxsize/2. positions, weights = generate_catalogs(size=10000, boxsize=boxsize, offset=offset, seed=42) # ## Split in subsamples # Let us divide the box into subregions. # + from pycorr import BoxSubsampler subsampler = BoxSubsampler(boxsize=boxsize, boxcenter=boxcenter, nsamples=6**3) labels = subsampler.label(positions) subsampler.log_info('Labels from {:d} to {:d}.'.format(labels.min(), labels.max())) fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.scatter(*positions, marker='.', c=labels) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') plt.show() # - # ## Check jackknife estimates # Let us generate many simulations to test jackknife estimates. 
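# As background, the textbook delete-one jackknife covariance estimate is
# $C_{\mathrm{jk}} = \frac{N-1}{N} \sum_{i=1}^{N} (\xi_{(i)} - \bar{\xi})(\xi_{(i)} - \bar{\xi})^{T}$,
# where $\xi_{(i)}$ is the correlation function measured with subsample $i$ removed and
# $N$ is the number of subsamples; the "Mohammad+21 correction" compared below rescales
# this type of estimate.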
edges = np.linspace(0., 90, 46) setup_logging('warning') # reduce logging num = 20 results = [] for ii in range(num): if (ii * 10) % num == 0: print('Simulation {:d}/{:d}.'.format(ii, num)) data_positions, data_weights = generate_catalogs(size=10000, boxsize=boxsize, offset=offset, n_individual_weights=1, seed=ii+42) randoms_positions, randoms_weights = generate_catalogs(size=50000, boxsize=boxsize, offset=offset, n_individual_weights=1, seed=ii+84) data_samples = subsampler.label(data_positions) randoms_samples = subsampler.label(randoms_positions) result = TwoPointCorrelationFunction('s', edges, data_positions1=data_positions, data_weights1=data_weights, randoms_positions1=randoms_positions, randoms_weights1=randoms_weights, data_samples1=data_samples, randoms_samples1=randoms_samples, engine='corrfunc', compute_sepsavg=False, nthreads=4) results.append(result) # + result = results[0] sep = result.sep cov = result.cov() corrcoef = utils.cov_to_corrcoef(result.cov()) fig = plt.figure(figsize=(3, 3)) ax = plt.gca() ax.pcolor(sep, sep, corrcoef.T, cmap=plt.get_cmap('jet_r')) ax.set_title('Correlation matrix') ax.set_xlabel(r'$s$ [Mpc/h]') ax.set_ylabel(r'$s$ [Mpc/h]') plt.show() # + std = np.std([result.corr for result in results], axis=0, ddof=1) jackknife_std_nocorr = np.mean([np.diag(result.cov(correction=None))**0.5 for result in results], axis=0) jackknife_std_corr = np.mean([np.diag(result.cov())**0.5 for result in results], axis=0) ax = plt.gca() ax.plot(sep, jackknife_std_nocorr/std, label='jackknife no correction') ax.plot(sep, jackknife_std_corr/std, label='jackknife Mohammad+21 correction') ax.axhline(1., xmin=0., xmax=1., color='k', linestyle='--') ax.set_xlabel(r'$s$ [Mpc/h]') ax.set_ylabel(r'$\sigma_{\mathrm{jk}}/\sigma$') ax.legend() plt.show() # - # ## Cutsky geometry # Let us try with more realistic cutsky catalogs. def generate_cutsky_catalogs(size=10000, rarange=(0., 30.), decrange=(0., 30.), drange=(1000., 2000.), n_individual_weights=1, seed=42): rng = np.random.RandomState(seed=seed) ra = rng.uniform(low=rarange[0], high=rarange[1], size=size) % 360. urange = np.sin(np.asarray(decrange) * np.pi / 180.) dec = np.arcsin(rng.uniform(low=urange[0], high=urange[1], size=size)) / (np.pi / 180.) d = rng.uniform(low=drange[0], high=drange[1], size=size) positions = [ra, dec, d] weights = [rng.uniform(0.5, 1., size) for i in range(n_individual_weights)] return positions, weights # + positions, weights = generate_cutsky_catalogs(size=100000, seed=42) # Catalogs do not have a box geometry, so we will try to subsample using k-means algorithms in RA/Dec (pixelated) space # Requires scikit-learn and healpy: pip install scikit-learn healpy from pycorr import KMeansSubsampler subsampler = KMeansSubsampler(mode='angular', positions=positions, nsamples=128, nside=512, random_state=42, position_type='rdd') labels = subsampler.label(positions) subsampler.log_info('Labels from {:d} to {:d}.'.format(labels.min(), labels.max())) # - fig = plt.figure(figsize=(4, 4)) ax = plt.gca() ax.scatter(*positions[:2], marker='.', c=labels) ax.set_xlabel(r'R.A. [$\mathrm{deg}$]') ax.set_ylabel(r'Dec. 
[$\mathrm{deg}$]') plt.show() data_positions, data_weights = generate_cutsky_catalogs(size=100000, seed=42) randoms_positions, randoms_weights = generate_cutsky_catalogs(size=200000, seed=84) data_samples = subsampler.label(data_positions) randoms_samples = subsampler.label(randoms_positions) edges = (np.linspace(0, 40, 41), np.linspace(-1., 1., 201)) result = TwoPointCorrelationFunction('smu', edges, data_positions1=data_positions, data_weights1=data_weights, randoms_positions1=randoms_positions, randoms_weights1=randoms_weights, data_samples1=data_samples, randoms_samples1=randoms_samples, position_type='rdd', engine='corrfunc', compute_sepsavg=False, nthreads=4) # You can save the result with tempfile.TemporaryDirectory() as tmp_dir: fn = os.path.join(tmp_dir, 'tmp.npy') result.save(fn) result = TwoPointEstimator.load(fn) ells = (0, 2, 4) nells = len(ells) # Let us rebin by 2 along first dimension (s) result = result[::2] s, xiell, cov = result.get_corr(ells=ells, return_sep=True, return_cov=True) std = np.array_split(np.diag(cov)**0.5, nells) ax = plt.gca() for ill,ell in enumerate(ells): ax.errorbar(s + ill * 0.1, xiell[ill], std[ill], fmt='-', label=r'$\ell = {:d}$'.format(ell)) ax.legend() ax.grid(True) ax.set_xlabel(r'$s$ [Mpc/h]') ax.set_ylabel(r'$\xi_{\ell}(s)$') plt.show() ns = len(s) corrcoef = utils.cov_to_corrcoef(cov) fig, lax = plt.subplots(nrows=nells, ncols=nells, sharex=False, sharey=False, figsize=(10,10), squeeze=False) fig.subplots_adjust(wspace=0.1, hspace=0.1) from matplotlib.colors import Normalize norm = Normalize(vmin=corrcoef.min(), vmax=corrcoef.max()) for i in range(nells): for j in range(nells): ax = lax[nells-1-i][j] mesh = ax.pcolor(s, s, corrcoef[i*ns:(i+1)*ns,j*ns:(j+1)*ns].T, norm=norm, cmap=plt.get_cmap('jet_r')) if i>0: ax.xaxis.set_visible(False) else: ax.set_xlabel(r'$s$ [Mpc/h]') if j>0: ax.yaxis.set_visible(False) else: ax.set_ylabel(r'$s$ [Mpc/h]') plt.show() # ## MPI # # All computations above are MPI'ed (if available). Below is an example script to run correlation function (jackknife) estimation with MPI, with 2 MPI processes per "jackknife realization". # + # %%file '_tests/script.py' import os import numpy as np from pycorr import KMeansSubsampler, TwoPointCorrelationFunction, mpi, setup_logging # Set up logging setup_logging() def generate_cutsky_catalogs(size=10000, rarange=(0.,30), decrange=(0., 30.), drange=(1000., 2000.), n_individual_weights=1, seed=42): rng = np.random.RandomState(seed=seed) ra = rng.uniform(low=rarange[0], high=rarange[1], size=size) % 360. urange = np.sin(np.asarray(decrange) * np.pi / 180.) dec = np.arcsin(rng.uniform(low=urange[0], high=urange[1], size=size)) / (np.pi / 180.) 
d = rng.uniform(low=drange[0], high=drange[1], size=size) positions = [ra, dec, d] weights = [rng.uniform(0.5, 1., size) for i in range(n_individual_weights)] return positions, weights mpicomm = mpi.COMM_WORLD mpiroot = 0 # We generate catalog on a single process if mpicomm.rank == mpiroot: data_positions, data_weights = generate_cutsky_catalogs(size=100000, seed=42) randoms_positions, randoms_weights = generate_cutsky_catalogs(size=200000, seed=84) else: data_positions, data_weights = None, None randoms_positions, randoms_weights = None, None subsampler = KMeansSubsampler(mode='angular', positions=randoms_positions, nsamples=128, nside=512, position_type='rdd', random_state=42, mpicomm=mpicomm, mpiroot=mpiroot) # mpiroot tells KMeansSubsampler to get positions on process mpiroot only; # If input positions and weights arrays are scattered, pass mpiroot=None (default) if mpicomm.rank == mpiroot: data_samples = subsampler.label(data_positions) randoms_samples = subsampler.label(randoms_positions) else: data_samples, randoms_samples = None, None edges = (np.linspace(0., 40, 21), np.linspace(-1., 1., 101)) result = TwoPointCorrelationFunction('smu', edges, data_positions1=data_positions, data_weights1=data_weights, randoms_positions1=randoms_positions, randoms_weights1=randoms_weights, data_samples1=data_samples, randoms_samples1=randoms_samples, position_type='rdd', engine='corrfunc', compute_sepsavg=False, nthreads=1, mpicomm=mpicomm, mpiroot=mpiroot, nprocs_per_real=2) # Notice mpiroot again, and nprocs_per_real=2 => 2 MPI processes per "jackknife realization" base_dir = '_tests' fn = os.path.join(base_dir, 'correlation.npy') result.save(fn) # + language="bash" # mpiexec -n 3 python _tests/script.py # - # !ls -l _tests # ## Split calculation # One can split the full jackknife calculation into several parts (e.g. to distribute on a cluster without using MPI), by specifying the subsamples to consider for the current calculation. 
list_samples = np.array_split(np.unique(np.concatenate([data_samples, randoms_samples])), 3) # split calculation in 3 parts results = [] for samples in list_samples: results.append(TwoPointCorrelationFunction('smu', result.edges, data_positions1=data_positions, data_weights1=data_weights, randoms_positions1=randoms_positions, randoms_weights1=randoms_weights, data_samples1=data_samples, randoms_samples1=randoms_samples, position_type='rdd', engine='corrfunc', compute_sepsavg=False, nthreads=4, samples=samples)) # Notice the keyword "samples" that list the samples to consider for each calculation result_split = results[0].concatenate(*results) s_split, xiell_split, cov_split = result_split.get_corr(ells=ells, return_sep=True, return_cov=True) assert np.allclose(s_split, s) assert np.allclose(xiell_split, xiell) assert np.allclose(cov_split, cov) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + % pylab inline from numpy import linalg as LA from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets import glob from tqdm import tqdm_notebook import os import sklearn.preprocessing as prep import pickle import joblib import tensorflow as tf def min_max_scale(X): preprocessor = prep.MinMaxScaler().fit(X) X_scaled = preprocessor.transform(X) return X_scaled # - master_matrix = joblib.load('../scripts/tumor_and_normal_200000_standardized_X.joblib.pickle') y = joblib.load('../scripts/tumor_and_normal_200000_standardized_y.joblib.pickle') master_matrix.shape y.shape # + WIDTH = 256 HEIGHT = 256 DEPTH = 3 def standardize_image(f): standardized = (imread(f) / 255.0).reshape(-1, 256 * 256 * 3) return standardized # + config = tf.ConfigProto( device_count = {'GPU': 0} ) config.gpu_options.allocator_type = 'BFC' #config IMAGE_WIDTH = 256 IMAGE_HEIGHT = 256 IMAGE_CHANNELS = 3 class VAE(object): def __init__(self, input_dim, learning_rate=0.001, n_latent=100, batch_size=50): self.learning_rate = learning_rate self.n_latent = n_latent self.batch_size = batch_size self.input_dim = input_dim self._build_network() self._create_loss_optimizer() init = tf.global_variables_initializer() #init = tf.initialize_all_variables() # Launch the session self.session = tf.InteractiveSession(config=config) self.session.run(init) self.saver = tf.train.Saver(tf.all_variables()) def _build_network(self): self.x = tf.placeholder(tf.float32, [None, self.input_dim]) dense1 = tf.layers.dense( activation=tf.nn.elu, inputs=self.x, units=512) dense2 = tf.layers.dense( activation=tf.nn.elu, inputs=dense1, units=512) dense3 = tf.layers.dense( activation=tf.nn.elu, inputs=dense2, units=512) dense4 = tf.layers.dense( activation=None, inputs=dense3, units=self.n_latent * 2) self.mu = dense4[:, :self.n_latent] self.sigma = tf.nn.softplus(dense4[:, self.n_latent:]) eps = tf.random_normal( shape=tf.shape(self.sigma), mean=0, stddev=1, dtype=tf.float32) self.z = self.mu + self.sigma * eps ddense1 = tf.layers.dense( activation=tf.nn.elu, inputs=self.z, units=512) ddense2 = tf.layers.dense( activation=tf.nn.elu, inputs=ddense1, units=512) ddense3 = tf.layers.dense( activation=tf.nn.elu, inputs=ddense2, units=512) self.reconstructed = tf.layers.dense( activation=tf.nn.sigmoid, inputs=ddense3, units=self.input_dim) def _create_loss_optimizer(self): epsilon = 1e-10 reconstruction_loss = -tf.reduce_sum( self.x * 
tf.log(epsilon + self.reconstructed) + (1 - self.x) * tf.log(epsilon + 1 - self.reconstructed), axis=1) self.reconstruction_loss = tf.reduce_mean( reconstruction_loss) / self.batch_size latent_loss = -0.5 * tf.reduce_sum( 1 + tf.log(epsilon + self.sigma) - tf.square(self.mu) - tf.square( self.sigma), axis=1) latent_loss = tf.reduce_mean(latent_loss) / self.batch_size self.latent_loss = latent_loss self.cost = tf.reduce_mean(self.reconstruction_loss + self.latent_loss) # ADAM optimizer self.optimizer = tf.train.AdamOptimizer( learning_rate=self.learning_rate).minimize(self.cost) def fit_minibatch(self, batch): _, cost, reconstruction_loss, latent_loss = self.session.run( [ self.optimizer, self.cost, self.reconstruction_loss, self.latent_loss ], feed_dict={self.x: batch}) return cost, reconstruction_loss, latent_loss def reconstruct(self, x): return self.session.run([self.reconstructed], feed_dict={self.x: x}) def decoder(self, z): return self.session.run([self.reconstructed], feed_dict={self.z: z}) def encoder(self, x): return self.session.run([self.z], feed_dict={self.x: x}) def save_model(self, checkpoint_path, epoch): self.saver.save(self.session, checkpoint_path, global_step=epoch) def load_model(self, checkpoint_dir): #new_saver = tf.train.import_meta_graph(checkpoint_path) #new_saver.restore(sess, tf.train.latest_checkpoint('./')) ckpt = tf.train.get_checkpoint_state(checkpoint_dir=checkpoint_dir, latest_filename='checkpoint') print('loading model: {}'.format(ckpt.model_checkpoint_path)) self.saver.restore(self.session, ckpt.model_checkpoint_path) # + learning_rate=1e-4 batch_size=32 num_epoch=1000 n_latent=100 checkpoint_dir = '/Z/personal-folders/interns/saket/vae_checkpoint_histoapath_2000_nlatent100' os.makedirs(checkpoint_dir, exist_ok=True) input_dim = IMAGE_CHANNELS*IMAGE_WIDTH*IMAGE_HEIGHT tf.reset_default_graph() #input_dims = input_dim[1] model = VAE(input_dim=input_dim, learning_rate=learning_rate, n_latent=n_latent, batch_size=batch_size) model.load_model(checkpoint_dir) # + # Test the trained model: generation # %pylab inline # Sample noise vectors from N(0, 1) z = np.random.normal(size=[model.batch_size, model.n_latent]) x_generated = model.decoder(z)[0] w = h = 256 n = np.sqrt(model.batch_size).astype(np.int32) I_generated = np.empty((h*n, w*n, 3)) for i in range(n): for j in range(n): I_generated[i*h:(i+1)*h, j*w:(j+1)*w, :] = x_generated[i*n+j, :].reshape(w, h, 3) plt.figure(figsize=(8, 8)) plt.imshow(I_generated)# cmap='gray') # - master_matrix[0].reshape() # + x_sample = np.reshape(master_matrix, (-1, 256*256*3)) x_encoded = model.encoder(x_sample) x_reconstruct = model.reconstruct(x_sample) plt.figure(figsize=(8, 12)) for i in range(7): plt.subplot(7, 2, 2*i + 1) plt.imshow(x_sample[i].reshape(256, 256, 3)) plt.title("Test input") #plt.colorbar() plt.subplot(7, 2, 2*i + 2) plt.imshow(x_reconstruct[0][i].reshape(256, 256, 3)) plt.title("Reconstruction") #plt.colorbar() plt.tight_layout() # - x_reconstruct[0].shape x_encoded[0].shape # # Train a TPOT on these reduced dimension! 
# + test_tumor_patches_dir = '/Z/personal-folders/interns/saket/histopath_data/baidu_images/test_tumor_level0/level_0/' list_of_tumor_files = list(glob.glob('{}*.png'.format(test_tumor_patches_dir))) test_normal_patches_dir = '/Z/personal-folders/interns/saket/histopath_data/baidu_images/test_normal_level0/level_0/' list_of_normal_files = list(glob.glob('{}*.png'.format(test_normal_patches_dir))) # + from tpot import TPOTClassifier from sklearn.model_selection import train_test_split X_train, X_valid, y_train, y_valid = train_test_split(x_encoded[0], y, train_size=0.75, test_size=0.25) pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5, random_state=42, verbosity=2) pipeline_optimizer.fit(X_train, y_train) print(pipeline_optimizer.score(X_valid, y_valid)) pipeline_optimizer.export('tpot_exported_pipeline_autoencoder_nlatent100.py') # + y_test = [] X_test_matrix = [] for f in tqdm_notebook(list_of_tumor_files): standardized = (imread(f)).reshape(-1, 256 * 256 * 3) X_test_matrix.append(standardized) y_test.append(1) for f in tqdm_notebook(list_of_normal_files): standardized = (imread(f)).reshape(-1, 256 * 256 * 3) X_test_matrix.append(standardized) y_test.append(0) # - plt.imshow(X_test_matrix[0].reshape( 256 , 256 , 3)) plt.imshow(x_sample[i].reshape(256, 256, 3)) x_sample[i] X_test_matrix[0] X_test_matrix = np.array(X_test_matrix) y_test = np.array(y_test) x_test_input = np.reshape(X_test_matrix, (-1, 256*256*3)) x_test_encoded = model.encoder(x_test_input)[0] print(pipeline_optimizer.score(x_test_encoded, y_test)) x_test_reconstructed = model.reconstruct(x_test_input)[0] plt.figure(figsize=(10, 12)) for i in range(10): plt.subplot(10, 2, 2*i + 1) plt.imshow(x_test_input[i].reshape(256, 256, 3)) plt.title("Test input") #plt.colorbar() plt.subplot(10, 2, 2*i + 2) plt.imshow(x_test_reconstructed[i].reshape(256, 256, 3)) plt.title("Reconstruction") #plt.colorbar() plt.tight_layout() # # lightgbm import json import lightgbm as lgb from sklearn.metrics import mean_squared_error X_train.shape # + lgb_train = lgb.Dataset(X_train, y_train) lgb_eval = lgb.Dataset(X_valid, y_valid, reference=lgb_train) # specify your configurations as a dict params = { 'task': 'train', 'boosting_type': 'gbdt', 'objective': 'regression', 'metric': {'l2', 'auc'}, 'num_leaves': 31, 'learning_rate': 0.05, 'feature_fraction': 0.9, 'bagging_fraction': 0.8, 'bagging_freq': 5, 'verbose': 0 } print('Start training...') # train gbm = lgb.train(params, lgb_train, num_boost_round=20, valid_sets=lgb_eval, early_stopping_rounds=5) # - x_test_encoded.shape x_test_input.shape y_pred_test_gbm = gbm.predict(x_test_input, num_iteration=gbm.best_iteration) print('The rmse of prediction is:', mean_squared_error(y_test, y_pred_test_gbm) ** 0.5) y_pred_test_gbm_bin = [1 if x>0.5 else 0 for x in y_pred_test_gbm] from sklearn.metrics import accuracy_score accuracy_score(y_pred_test_gbm_bin, y_test ) gbm.feature_importance() features = pd. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.5 # language: python # name: python3 # --- # # # @hidden_cell # # The following code contains the credentials for a file in your IBM Cloud Object Storage. # # You might want to remove those credentials before you share your notebook. 
# credentials_2 = { # 'IBM_API_KEY_ID': '', # 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', # 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', # 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', # 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', # 'FILE': 'Mission_Impossible_6_Review.txt' # } # # Text Summarization & Visualization # ## Setup # To prepare your environment, you need to install some packages. # # ### Install the necessary packages # # You need the latest versions of these packages:
    # !pip install gensim # !pip install watson-developer-cloud==1.5 # !pip install pyldavis # !pip install wordcloud from gensim.summarization.summarizer import summarize from gensim.summarization import keywords import watson_developer_cloud import ibm_boto3 from botocore.client import Config from nltk.tokenize import word_tokenize from nltk.corpus import stopwords from nltk.stem.wordnet import WordNetLemmatizer import string import gensim import gensim.corpora as corpora from gensim.utils import simple_preprocess from gensim.models import CoherenceModel import pyLDAvis import pyLDAvis.gensim import matplotlib.pyplot as plt # %matplotlib inline import warnings warnings.filterwarnings("ignore") import urllib from bs4 import BeautifulSoup import requests import nltk nltk.download('all') # ## 1. Summarization & keywords extraction # ### 1a. Read the Data # Click on Insert to code and then select Insert Credentials as credentials_1 # + # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_1 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Mission_Impossible_6_Review.txt' } credentials_2 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Richwine.txt' } credentials_3_2 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Chavez.txt' } credentials_3 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Chavez.txt'} # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_4 = { 'IBM_API_KEY_ID': 'qMQ0q2CXDiOWd0LXP8IwNUwrLUfX0hjaOpOK6r7ioGn8', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'RichCha.txt' } # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. 
credentials_5 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Frank.txt' } # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_6 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Lodge.txt' } # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_7 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'Glazer-m.txt' } # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_9 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'all.txt' } # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_10 = { 'IBM_API_KEY_ID': '', 'IAM_SERVICE_ID': 'iam-ServiceId-69636595-c7b4-451a-b26e-b9a67a5b1181', 'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com', 'IBM_AUTH_ENDPOINT': 'https://iam.bluemix.net/oidc/token', 'BUCKET': 'textsummarizer-donotdelete-pr-jw3aso7wwj713f', 'FILE': 'all.txt' } # - # ### 1b. Functions to extract files from Cloud Object Storage # + cos = ibm_boto3.client('s3', ibm_api_key_id=credentials_10['IBM_API_KEY_ID'], ibm_service_instance_id=credentials_10['IAM_SERVICE_ID'], ibm_auth_endpoint=credentials_10['IBM_AUTH_ENDPOINT'], config=Config(signature_version='oauth'), endpoint_url=credentials_10['ENDPOINT']) def get_file(filename): '''Retrieve file from Cloud Object Storage''' fileobject = cos.get_object(Bucket=credentials_10['BUCKET'], Key=filename)['Body'] return fileobject def load_string(fileobject): '''Load the file contents into a Python string''' text = fileobject.read() return text # - # ### 1c. 
Get File Contents text=str(load_string(get_file("Chavez.txt"))) text_2 = str(load_string(get_file("Richwine.txt"))) text3 = str(load_string(get_file("RichCha.txt"))) text4 = str(load_string(get_file("Frank.txt"))) text5 = str(load_string(get_file("Lodge.txt"))) text6 = str(load_string(get_file("Glazer-m.txt"))) text7 = str(load_string(get_file("all.txt"))) # ### 1d. Helper functions to extract summary and keywords # + '''Get the summary of the text''' def get_summary(text, pct): summary = summarize(text,ratio=pct,split=True) return summary '''Get the keywords of the text''' def get_keywords(text): res = keywords(text, ratio=0.1, words=None, split=False, scores=False, pos_filter=('NN', 'JJ'), lemmatize=False, deacc=False) res = res.split('\n') return res '''Tokenize the sentence into words & remove punctuation''' def sent_to_words(sentences): for sentence in sentences: yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) def split_sentences(text): """ Split text into sentences. """ sentence_delimiters = re.compile(u'[\\[\\]\n.!?]') sentences = sentence_delimiters.split(text) return sentences def split_into_tokens(text): """ Split text into tokens. """ tokens = nltk.word_tokenize(text) return tokens def POS_tagging(text): """ Generate Part of speech tagging of the text. """ POSofText = nltk.tag.pos_tag(text) return POSofText def extract_title_text(url): page = urllib.request.urlopen(url).read().decode('utf8') soup = BeautifulSoup(page,'lxml') text = ' '.join(map(lambda p: p.text, soup.find_all('p'))) return soup.title.text, text # - print('Printing Summary') print('--------------------------') print(get_summary(text7, 0.2)) print ('-------------------------') print('Printing Keywords') print('--------------------------') print(get_keywords(text7)) # ### 2a. Remove punctuation & special characters import re my_new_text = re.sub('[^ a-zA-Z0-9]', '', text) # ### 2b. Preprocess the text for next steps stop_words = set(stopwords.words('english')) lemma = WordNetLemmatizer() word_tokens = word_tokenize(str(my_new_text)) filtered_sentence = [w for w in word_tokens if not w in stop_words] normalized = " ".join(lemma.lemmatize(word) for word in filtered_sentence) # ### 2c. Create n grams where n is the number of words from nltk import ngrams n = 7 total_grams = [] number_of_grams = ngrams(normalized.split(), n) for grams in number_of_grams: total_grams.append(grams) print(total_grams[:10]) # ### 2d. Create the wordcloud visualization on the processed data # To highlight important textual data points & convey crucial information. The more a specific word appears in a source of textual data, the bigger and bolder it appears in the word cloud. # + from wordcloud import WordCloud wordcloud = WordCloud(max_font_size=60).generate(normalized) plt.figure(figsize=(16,12)) '''plot wordcloud in matplotlib''' plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() # - # ### 2e. Analyze the frequency of words in the text. count = {} for w in normalized.split(): if w in count: count[w] += 1 else: count[w] = 1 for word, times in count.items(): if times > 3: print("%s was found %d times" % (word, times)) # ### 2f. Create a Dispersion plot # The motivation behind using the Lexical Dispersion Plots was to give us an alternative means of visualising how prevalent these words are in the text corpus, whether or not there was a clustering pattern that is whether or not a word featured heavily at certain point of the text corpus. 
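# The next cell draws the plot from the `nltk.book` sample corpus (`text4`). As a
# sketch, the same view can also be produced for the corpus loaded above instead
# (assuming the `normalized` string built in step 2b):

# +
from nltk.text import Text
own_content = Text(word_tokenize(normalized))
own_content.dispersion_plot(['assimilation', 'Hispanics', 'ethnic', 'generation', 'economic', 'immigrant'])
# -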
from nltk.book import text4 as content plt.figure(figsize=(16,5)) topics = ['assimilation', 'Hispanics','ethnic', 'generation', 'economic', 'immigrant'] content.dispersion_plot(topics) # ## 3. Topic Modelling # ### 3a. Start the preprocessing for Topic Modelling # # Topic Modelling is an approach for finding topics in large amounts of text. Topic modeling is great for document clustering, information retrieval from unstructured text, and feature selection. # # Topic Modeling with Latent Dirichlet Allocation technique. # # Why Latent Dirichlet Allocation? This technique can create model which can be generalized easily on any new text corpus and help us in identifying the important topics from the corpus. # # Some of the advantages are : # # Training documents may come in sequentially, no random access required. # # Runs in constant memory w.r.t. the number of documents: size of the training corpus does not affect memory footprint, can process corpora larger than RAM. # # Is distributed & makes use of a cluster of machines, if available, to speed up model estimation. # + import gensim from gensim import corpora tokenized_sents = list(sent_to_words(filtered_sentence)) # Creating the term dictionary of our corpus, where every unique term is assigned an index. dictionary = corpora.Dictionary(tokenized_sents) # Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above. doc_term_matrix = [dictionary.doc2bow(doc) for doc in tokenized_sents] # - # ### 3b. Creating the object for LDA model & train the model # + Lda = gensim.models.ldamodel.LdaModel # Running and Training LDA model on the document term matrix by selecting minimum parameters required. ldamodel = Lda(doc_term_matrix, num_topics=2, id2word = dictionary, passes=100) # - # ### 3c. Extract two topics with twenty words in each topic print(ldamodel.print_topics(num_topics=6, num_words=300)) # ### 3d. Model perplexity and topic coherence provide a convenient measure to judge how good a given topic model is. # + '''Compute Perplexity''' # a measure of how good the model is. lower the better. print('\nPerplexity: ', ldamodel.log_perplexity(doc_term_matrix)) '''Compute Coherence Score''' coherence_model_lda = CoherenceModel(model=ldamodel, texts=tokenized_sents, dictionary=dictionary, coherence='c_v') coherence_lda = coherence_model_lda.get_coherence() print('\nCoherence Score: ', coherence_lda) # - # #### Coherence score is 'higher the better' metric and given the score of 0.86 we can be assured that we have selected the right number of topics for this corpus. '''Visualize the topics''' # pyLDAvis tool to visualize the fit of our LDA model across topics and their top words. pyLDAvis.enable_notebook() vis = pyLDAvis.gensim.prepare(ldamodel, doc_term_matrix, dictionary) vis # #### We can observe that our LDA model has captured the prominent keywords under two topics in the text corpus which will give us a good understanding of what the text corpus is about. We can do further analysis by using this information to generate recommendations & classify the text for user profiling or push notifications. # ### We have seen how to summarize & visualize a document as well as a news article to get quick information about the data. This methodology can be applied to lot of usecases to extract insights from unstructured data. 
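# As a follow-up to the coherence score discussed above, here is a small sketch (not
# part of the original notebook) of how the number of topics could be chosen by
# scanning a few candidate values and keeping the count with the highest coherence;
# it reuses the `Lda`, `doc_term_matrix`, `dictionary` and `tokenized_sents` objects
# defined earlier.

# +
for k in [2, 3, 4, 5, 6]:
    lda_k = Lda(doc_term_matrix, num_topics=k, id2word=dictionary, passes=20)
    cm = CoherenceModel(model=lda_k, texts=tokenized_sents, dictionary=dictionary, coherence='c_v')
    print(k, 'topics -> coherence:', round(cm.get_coherence(), 3))
# -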
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Application of ICP to a generated sample dataset

# Load required packages
import numpy as np
import matplotlib.pyplot as plt
from icp import icp

# +
# Generate data
n_dst = 50
x_dst, y_dst = np.meshgrid(np.linspace(-1, 1, n_dst), np.linspace(-1, 1, n_dst), indexing='ij')
z_dst = x_dst ** 2 + y_dst ** 2
plt.figure(figsize=(12, 9))
plt.pcolormesh(x_dst, y_dst, z_dst)

offset = 0.2
scale = 0.9
n_src = 50
x_src, y_src = np.meshgrid(np.linspace(-1, 1, n_src), np.linspace(-1, 1, n_src), indexing='ij')
z_src = x_src ** 2 + y_src ** 2 - offset
x_src *= 1 / scale
y_src *= 1 / scale
plt.figure(figsize=(12, 9))
plt.pcolormesh(x_src, y_src, z_src)

src = np.column_stack((np.reshape(x_src, (n_src ** 2,)), np.reshape(y_src, (n_src ** 2,)), np.reshape(z_src, (n_src ** 2,))))
src += np.random.normal(loc=0, scale=0.1, size=(n_src * n_src, 3))
dst = np.column_stack((np.reshape(x_dst, (n_dst ** 2,)), np.reshape(y_dst, (n_dst ** 2,)), np.reshape(z_dst, (n_dst ** 2,))))
# -

# Apply ICP
scale, offset = icp(src, dst, dim_offset=[2], dim_scale=[0, 1], num_iter=50)

# Print results
print('Scale:', scale)
print('Offset:', offset)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Solved Programming Exercises

# ## Exercise 1
# Write a function that determines whether a year is a leap year or not.
# Remember that a year is a leap year if it is a multiple of 4, except century years,
# which must also be a multiple of 400.
# That is, the years 1700, 1800 and 1900 are divisible by 4 but not divisible by 400.
# In contrast, the years 1600 and 2000 are divisible by 4 and also divisible by 400.
# A short solution sketch for Exercises 1 and 2 is given after the exercise statements below.

# ## Exercise 2
# Write a function that, given an amount of money, computes its breakdown into bills and coins,
# going from the largest bill to the smallest and considering only the 2€ and 1€ coins.
# For example, for the amount 1378€:
# * 2 bills of 500€
# * 1 bill of 200€
# * 1 bill of 100€
# * 1 bill of 50€
# * 1 bill of 20€
# * 1 bill of 5€
# * 1 coin of 2€
# * 1 coin of 1€

# ## Exercise 3
# A bank wants to keep a record of the customers who request a cash withdrawal from an ATM.
# The customer enters the amount of money to withdraw, and the ATM returns the breakdown of
# that amount in bills from 200 down to 10 euros.
#
# When the operation finishes, a log file must record the bills dispensed to the customer,
# together with their first name, last name and the *local time* at which the operation
# took place. Save the file in whatever format you find most convenient.
#
# For example, for the amount 370€:
# * 1 bill of 200€
# * 1 bill of 100€
# * 1 bill of 50€
# * 1 bill of 20€
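# ### Solution sketch
# A minimal sketch for Exercises 1 and 2 (one possible approach, not the only one);
# the function names used here are illustrative.

# +
def is_leap_year(year):
    """Leap year: divisible by 4, and century years only if divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def breakdown(amount):
    """Break an amount of euros into bills plus the 2€ and 1€ coins, largest first."""
    denominations = [500, 200, 100, 50, 20, 10, 5, 2, 1]
    result = {}
    for d in denominations:
        count, amount = divmod(amount, d)
        if count:
            result[d] = count
    return result


print(is_leap_year(1900), is_leap_year(2000))  # False True
print(breakdown(1378))  # {500: 2, 200: 1, 100: 1, 50: 1, 20: 1, 5: 1, 2: 1, 1: 1}
# -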
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Multiclass classification with a DNN

# ## Import modules
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.callbacks import EarlyStopping

# ## Load dataset
df = pd.read_csv('../data/fetal_health.csv')
df.head()

df.info()

df.head()

X = df.drop('fetal_health', axis=1)
X.head()

# ## Re-encode y
target_names = df['fetal_health'].unique()
target_names

target_dict = {n: i for i, n in enumerate(target_names)}
target_dict

y = df['fetal_health'].map(target_dict)
y.head(10)

from tensorflow.keras.utils import to_categorical
y_cat = to_categorical(y, dtype=int)
y_cat[:10]

# ### Train test split
X_train, X_test, y_train, y_test = train_test_split(X.values, y_cat, test_size=0.2, random_state=10)

# ## Create the model
model = Sequential()
model.add(Dense(64, input_shape=(21,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
# model.add(Dense(100, activation='relu'))
# model.add(Dense(12, activation='relu'))
# model.add(Dense(10, activation='relu'))
# model.add(Dense(4, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

model.fit(X_train, y_train, epochs=150, validation_split=0.1)

y_pred = model.predict(X_test)
y_pred[:5]

y_test_class = np.argmax(y_test, axis=1)
y_pred_class = np.argmax(y_pred, axis=1)

from sklearn.metrics import classification_report
print(classification_report(y_test_class, y_pred_class))

from sklearn.metrics import accuracy_score, confusion_matrix
confusion_matrix(y_test_class, y_pred_class)

from sklearn.metrics import log_loss
log_loss(y_test_class, y_pred)

loss, accuracy = model.evaluate(X_test, y_test)
print("Loss: " + str(loss))
print("Accuracy: " + str(accuracy))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

import pandas as pd
df = pd.read_csv("C:\\Users\\HP\\Desktop\\LGMVIP Task2\\data.csv")
df.head()

df.tail()

df1 = df.reset_index()["Close"]
df1

import matplotlib.pyplot as plt
plt.plot(df1)

import numpy as np
df1

from sklearn.preprocessing import MinMaxScaler
Scaler = MinMaxScaler(feature_range=(0, 1))
df1 = Scaler.fit_transform(np.array(df1).reshape(-1, 1))
print(df1)

training_size = int(len(df1) * 0.65)
test_size = len(df1) - training_size
train_data, test_data = df1[0:training_size, :], df1[training_size:len(df1), :1]
training_size, test_size

train_data

import numpy
def create_dataset(dataset, time_step=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step - 1):
        a = dataset[i:(i + time_step), 0]
        dataX.append(a)
        dataY.append(dataset[i + time_step, 0])
    return numpy.array(dataX), numpy.array(dataY)

time_step = 100
X_train, y_train = create_dataset(train_data, time_step)
X_test, ytest = create_dataset(test_data, time_step)
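# `create_dataset` builds a sliding window over the scaled closing prices: each sample
# holds `time_step` consecutive values and the label is the value that follows. A quick
# illustrative check on a toy series (not part of the original notebook):

# +
toy = numpy.arange(6, dtype=float).reshape(-1, 1)
toy_X, toy_y = create_dataset(toy, time_step=2)
print(toy_X)  # [[0. 1.] [1. 2.] [2. 3.]]
print(toy_y)  # [2. 3. 4.]
# Note: the loop runs to len(dataset) - time_step - 1, so the last possible
# window ([3. 4.] -> 5.) is not generated.
# -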
print(X_train.shape), print(y_train.shape)

X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(100, 1)))
model.add(LSTM(50, return_sequences=True))
model.add(LSTM(50))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()

model.fit(X_train, y_train, validation_data=(X_test, ytest), epochs=100, batch_size=64, verbose=1)

# +
import tensorflow as tf
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)
# Undo the MinMax scaling with the scaler fitted above
train_predict = Scaler.inverse_transform(train_predict)
test_predict = Scaler.inverse_transform(test_predict)
# -

import tensorflow as tf
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

import math
from sklearn.metrics import mean_squared_error
math.sqrt(mean_squared_error(y_train, train_predict))

### Test Data RMSE
math.sqrt(mean_squared_error(ytest, test_predict))

### Plotting
# shift train predictions for plotting
look_back = 100
trainPredictPlot = numpy.empty_like(df1)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(train_predict) + look_back, :] = train_predict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(df1)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(train_predict) + (look_back * 2) + 1:len(df1) - 1, :] = test_predict
# plot baseline and predictions
plt.plot(Scaler.inverse_transform(df1))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

len(test_data)

x_input = test_data[613:].reshape(1, -1)
x_input.shape

temp_input = list(x_input)
temp_input = temp_input[0].tolist()

# +
# demonstrate prediction for the next 30 days
from numpy import array
lst_output = []
n_steps = 100
i = 0
while (i < 30):
    if (len(temp_input) > 100):
        # print(temp_input)
        x_input = np.array(temp_input[1:])
        print("{} day input {}".format(i, x_input))
        x_input = x_input.reshape(1, -1)
        x_input = x_input.reshape((1, n_steps, 1))
        # print(x_input)
        yhat = model.predict(x_input, verbose=0)
        print("{} day output {}".format(i, yhat))
        temp_input.extend(yhat[0].tolist())
        temp_input = temp_input[1:]
        # print(temp_input)
        lst_output.extend(yhat.tolist())
        i = i + 1
    else:
        x_input = x_input.reshape((1, n_steps, 1))
        yhat = model.predict(x_input, verbose=0)
        print(yhat[0])
        temp_input.extend(yhat[0].tolist())
        print(len(temp_input))
        lst_output.extend(yhat.tolist())
        i = i + 1

print(lst_output)
# -

day_new = np.arange(1, 101)
day_pred = np.arange(101, 131)

len(df1)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Web Scraping Corona Virus data into Ms Excel

# ## Requests

# Import library
import requests

# make request from webpage
url = 'https://www.worldometers.info/coronavirus/country/india/'
result = requests.get(url)

# +
# result.text
# -

# ## Beautiful Soup

# Import library
import bs4

# create soup object
soup = bs4.BeautifulSoup(result.text, 'lxml')

# +
# soup
# -

# ## Extracting the data

# ### Find the div

# find-all method
cases = soup.find_all('div', class_='maincounter-number')
cases

# ## Storing the data

# python list
data = []

# find the span and get data from it
for i in cases:
    span = i.find('span')
    data.append(span.string)
data

# ##
Exporting the data #Import library import pandas as pd #create a dataframe df = pd.DataFrame({'CoronaData': data}) #naming the columns df.index = ['TotalCases', 'Deaths', 'Recovered'] df # + #naming the file # - df.to_csv('Corona_Data.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pickle import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import gym # %matplotlib inline # - task = 'Ant-v1' hidden_layer_num = 2 hidden_layer_size = 30 # + # TODO: this should be refactored and DRY from tuning-hyperparameters-and-visualization/train_humanoid.py def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) with open('../section-3-behavioral-cloning/train_test_data/ant_train_test.pkl', 'rb') as inf: X_tv, y_tv, X_test, y_test = pickle.load(inf) tf.logging.info('{0}, {1}, {2}, {3}'.format( X_tv.shape, X_test.shape, y_tv.shape, y_test.shape )) x_plh = tf.placeholder(tf.float32, shape=[None, X_tv.shape[1]]) y_plh = tf.placeholder(tf.float32, shape=[None, y_tv.shape[1]]) with tf.name_scope('fc1'): Wh_var = weight_variable([x_plh.shape.dims[1].value, hidden_layer_size]) bh_var = bias_variable([hidden_layer_size]) hh = tf.nn.sigmoid(tf.matmul(x_plh, Wh_var) + bh_var) for i in range(hidden_layer_num - 1): with tf.name_scope('fc{0}'.format(i + 2)): Wh_var = weight_variable([hidden_layer_size, hidden_layer_size]) bh_var = bias_variable([hidden_layer_size]) hh = tf.nn.sigmoid(tf.matmul(hh, Wh_var) + bh_var) with tf.name_scope('out'): W_var = weight_variable([hidden_layer_size, y_plh.shape.dims[1].value]) b_var = bias_variable([y_plh.shape.dims[1].value]) y_pred = tf.matmul(hh, W_var) + b_var # + with tf.name_scope('mse'): mse = tf.losses.mean_squared_error(labels=y_plh, predictions=y_pred) mse = tf.cast(mse, tf.float32) with tf.name_scope('adam_optimizer'): train_op = tf.train.AdamOptimizer(1e-4).minimize(mse) # + saver = tf.train.Saver() sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) # + mse_tv, mse_test = [], [] bs = 128 # batch size for k in range(10): # num. epochs print(k, end=',') for i in range(X_tv.shape[0] // bs): _x = X_tv[i * bs : (i+1) * bs, :] _y = y_tv[i * bs : (i+1) * bs, :] train_op.run(feed_dict={x_plh: _x, y_plh: _y}) mse_tv.append(mse.eval(feed_dict={x_plh: X_tv, y_plh: y_tv})) mse_test.append(mse.eval(feed_dict={x_plh: X_test, y_plh: y_test})) # - plt.plot(mse_tv, label='tv') plt.plot(mse_test, label='test') plt.legend() plt.xlabel('# iterations') plt.ylabel('MSE') plt.grid() print(mse_tv[-1], mse_test[-1]) # # Visualize performance # + def pred_action(obs): return y_pred.eval(feed_dict={x_plh: obs.reshape(1, -1)}) env = gym.make('Ant-v1') obs = env.reset() totalr = 0 done = False max_timesteps = 600 for k in range(max_timesteps): if (k + 1) % 20 == 0: print(k + 1, end=',') action = pred_action(obs[None,:]) obs, r, done, _ = env.step(action) totalr += r env.render() env.render(close=True) print() print(totalr) print(np.mean(totalr)) # - # #### Expert performance # Expert's return at 600 timestep is around 2878.66413703. 
# # Collected by running `python run_expert.py experts/Ant-v1.pkl Ant-v1 --max_timesteps 600 --num_rollouts 3 --render` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import _pickle as pickle from base import * from distance import * from greedy import * from base import * import datetime import pandas as pd def readTrajsFromTxtFile(fName): df = pd.read_csv(fName, sep=',', header=None, names=['trajID', 't', 'lat', 'lon'], comment='#') df['t'] = pd.to_datetime(df.t) df = df.sort_values(by='t') df = df.groupby(by='trajID') return df if __name__ == "__main__": print("Loading trajectories ...") trajs = readTrajsFromTxtFile("data/n.txt") # print(("trajectories are",trajs)) rmin, rmax = 0.5, 2 print("Computing Frechet distances ...") distPairs1 = process(trajs, rmin, rmax) # print(("Output of Frechet distance", distPairs1)) distPairs2 = {} for k, v in distPairs1.items(): pth, trID, dist, straj = k[0], k[1].trajID, v, k[1] if (pth, trID) in distPairs2: distPairs2[(pth, trID)].append((straj, dist)) else: distPairs2[(pth, trID)] = [(straj, dist)] print("Computing prerequisite data structures for greedy algorithm ...") (strajCov, ptStraj, strajPth, trajCov) = preprocess(trajs, distPairs2) #print("Pre-requiste data structure are:") # print(("strajCov",strajCov)) # print(("ptStraj",ptStraj)) # print(("strajPth",strajPth)) # print(("trajCov",trajCov)) c1, c2, c3 = 1, 1, 1 print("Running greedy algorithm ...") retVal = Greedy(trajs, distPairs2, strajCov, ptStraj, strajPth, trajCov, c1, c2, c3) #print(("Bundle assigned and unassigned points are",retVal)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # From the previous exploration of the GTFS data, here's where we arrived. # # The useful files: # * calendar_dates.txt # * shapes.txt # * stop_times.txt # * stops.txt # * trips.txt # # The Plan: # 1. Pick a date of service from calendar_dates.txt # 2. For each of the service codes (only S, U, W) on that date, collect the trip ids used (across all routes). # 3. For each trip id on each date, for each stop within each trip id, collected the scheduled stop info. # 4. Export the data: # * How? Ridership is organized by date, route, service key, stop. We need something that plays well with that. # Here we go! # + from pathlib import Path import pandas as pd import json # Point to where the GTFS archive data is stored. GTFS_ARCHIVE_PARENT_DIR = Path().home() / "Documents" / "Hack Oregon" / "GTFS_archive_data" # - # Loop over each archived dir. This takes a while. print("****** BEGIN ******") for ARCHIVE_DIR in GTFS_ARCHIVE_PARENT_DIR.iterdir(): # Ignore any hidden dirs. if ARCHIVE_DIR.name.startswith('.'): continue else: print(f"Folder: {ARCHIVE_DIR.name}") # Load in the files that we want. 
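# Each of the three GTFS files below is loaded independently; a missing file is reported and the loop simply continues without it.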
# calendar_dates.txt try: file = 'calendar_dates.txt' dates_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # stop_times.txt try: file = 'stop_times.txt' times_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # trips.txt try: file = 'trips.txt' trips_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # Init the dict to store all the stop info. stops_by_time = {} count = 0 # Look at each date - service_id combo. for name, group in dates_df.groupby(['date', 'service_id']): # Skip non S, U, W service ids. if name[1] not in ['S', 'U', 'W']: continue else: print(f"\tDate: {name[0]}\t Service ID: {name[1]}") # Find the trips and routes associated with that service on that date. trips = trips_df['trip_id'][trips_df['service_id'] == name[1]] # Look at each trip (i = index, r = row in the trips for this service id) for i, r in trips_df[['route_id', 'trip_id']][trips_df['service_id'] == name[1]].iterrows(): # df of the stops associated with this trip stops = times_df[times_df['trip_id'] == r['trip_id']] # Look at each stop in the trip to assemble a dict of the stop times (as strings). for ind, row in stops.iterrows(): # If that stop_id exists as a key in the dict. if stops_by_time.get(str(row['stop_id']), False): # If that route exists as a key for the stop. if stops_by_time[str(row['stop_id'])].get(str(r['route_id']), False): # If that date exists as a key for the stop. if stops_by_time[str(row['stop_id'])][str(r['route_id'])].get(str(name[0]), False): # Add the stop time. stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])].append(row['arrival_time']) else: # Init the date as a list and add the stop time. stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])] = [] stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])].append(row['arrival_time']) else: # Init that route as a dict, init the date as a list, and add the stop time. stops_by_time[str(row['stop_id'])][str(r['route_id'])] = {} stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])] = [] stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])].append(row['arrival_time']) # Else init that stop as a dict, init the route as a dict, init the date as a list, and add the stop time. else: stops_by_time[str(row['stop_id'])] = {} stops_by_time[str(row['stop_id'])][str(r['route_id'])] = {} stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])] = [] stops_by_time[str(row['stop_id'])][str(r['route_id'])][str(name[0])].append(row['arrival_time']) count +=1 if count >= 1: break # Write to a json for further analysis. EXPORT_PATH = ARCHIVE_DIR / f'{ARCHIVE_DIR.name}.json' print(f'\t\tEXPORT: {EXPORT_PATH.name}') with open(EXPORT_PATH, 'w') as fobj: json.dump(stops_by_time, fobj, indent=4) break print("****** COMPLETE ******") # Loop over each archived dir. This takes a while. print("****** BEGIN ******") for ARCHIVE_DIR in GTFS_ARCHIVE_PARENT_DIR.iterdir(): # Ignore any hidden dirs. if ARCHIVE_DIR.name.startswith('.'): continue else: print(f"Folder: {ARCHIVE_DIR.name}") # Load in the files that we want. 
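# Second pass over the archives: the extraction below repeats the cell above, but keys the output dict as route_id -> stop_id -> "date-service_id" (instead of stop_id -> route_id -> date), presumably to line up better with the ridership organization described in the plan above.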
# calendar_dates.txt try: file = 'calendar_dates.txt' dates_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # stop_times.txt try: file = 'stop_times.txt' times_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # trips.txt try: file = 'trips.txt' trips_df = pd.read_csv(ARCHIVE_DIR / file) except FileNotFoundError: print(f"\tUnable to locate '{file}' in {ARCHIVE_DIR}.") # Init the dict to store all the stop info. stops_by_time = {} count = 0 # Look at each date - service_id combo. for name, group in dates_df.groupby(['date', 'service_id']): # Skip non S, U, W service ids. if name[1] not in ['S', 'U', 'W']: continue else: print(f"\tDate: {name[0]}\t Service ID: {name[1]}") date_serv_id = '-'.join([str(name[0]), str(name[1])]) # Find the trips and routes associated with that service on that date. trips = trips_df['trip_id'][trips_df['service_id'] == name[1]] # Look at each trip (i = index, r = row in the trips for this service id) for i, r in trips_df[['route_id', 'trip_id']][trips_df['service_id'] == name[1]].iterrows(): # df of the stops associated with this trip stops = times_df[times_df['trip_id'] == r['trip_id']] # Look at each stop in the trip to assemble a dict of the stop times (as strings). for ind, row in stops.iterrows(): # If that route exists as a key in the dict. if stops_by_time.get(str(r['route_id']), False): # If that stop exists as a key for the route. if stops_by_time[str(r['route_id'])].get(str(row['stop_id']), False): # If that date exists as a key for the stop. if stops_by_time[str(r['route_id'])][str(row['stop_id'])].get(date_serv_id, False): # Add the stop time. stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id].append(row['arrival_time']) else: # Init the date as a list and add the stop time. stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id] = [] stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id].append(row['arrival_time']) else: # Init that route as a dict, init the date as a list, and add the stop time. stops_by_time[str(r['route_id'])][str(row['stop_id'])] = {} stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id] = [] stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id].append(row['arrival_time']) # Else init that stop as a dict, init the route as a dict, init the date as a list, and add the stop time. else: stops_by_time[str(r['route_id'])] = {} stops_by_time[str(r['route_id'])][str(row['stop_id'])] = {} stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id] = [] stops_by_time[str(r['route_id'])][str(row['stop_id'])][date_serv_id].append(row['arrival_time']) count +=1 if count >= 1: break # Write to a json for further analysis. 
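# One JSON file is written per archive folder, named after the folder. Note that the `count >= 1: break` above and the `break` after the dump stop the run after a single date/service_id group and a single archive folder, presumably to keep test runs short.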
EXPORT_PATH = ARCHIVE_DIR / f'{ARCHIVE_DIR.name}.json' print(f'\t\tEXPORT: {EXPORT_PATH.name}') with open(EXPORT_PATH, 'w') as fobj: json.dump(stops_by_time, fobj, indent=4) break print("****** COMPLETE ******") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Imports # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # + import torch import torch.nn as nn import torch.nn.functional as F import pytorch_lightning as pl # - from nam.data import * from nam.config import defaults from nam.models import NAM, DNN, get_num_units from nam.engine import Engine # ## Configuration config = defaults() print(config) pl.seed_everything(config.seed) # ## Dataset & Dataloaders (California Housing Dataset) # dataset = load_sklearn_housing_data(config=config) dataset = load_breast_data(config=config) # dataset = load_gallup_data( # config=config, # features_columns= ["income_2", "WP1219", "WP1220", "weo_gdpc_con_ppp"] # ) len(dataset.data) dataset.features.shape dataset.targets.max().item() dataset.targets.shape train_dls = dataset.train_dataloaders() test_dl = dataset.test_dataloaders() # ## NAM Model model = NAM( config=config, name="NAMModel_Housing", num_inputs=len(dataset[0][0]), num_units=get_num_units(config, dataset.features), num_outputs=int(dataset.targets.max().item() + 1), ) model train_dl, val_dl = next(train_dls) x, y = next(iter(train_dl)) logits, fnn_out = model(x) logits pl.metrics.functional.accuracy(logits, y) # ## Training engine = Engine(config, model) # engine checkpoint_callback = pl.callbacks.model_checkpoint.ModelCheckpoint( dirpath=f"{config.output_dir}/{model.name}/checkpoints", filename=model.name + "-{epoch:02d}-{val_loss:.4f}", monitor='val_loss', save_top_k=3, mode='min' ) while True: try: train_dl, val_dl = next(train_dls) trainer = pl.Trainer( default_root_dir=f"{config.output_dir}/{model.name}", max_epochs=config.training_epochs, callbacks=[checkpoint_callback], ) trainer.fit(model=engine, train_dataloader=train_dl, val_dataloaders=val_dl) except StopIteration: break engine.metrics.compute() # ## Testing trainer.test(model=engine, test_dataloaders=test_dl) # --- # --- # --- # ## Dataset & Dataloaders (Breast Cancer Dataset) dataset = load_sklearn_housing_data(config=config) # dataset = load_breast_data(config=config) # dataset = load_gallup_data( # config=config, # features_columns= ["income_2", "WP1219", "WP1220", "weo_gdpc_con_ppp"] # ) len(dataset.data) dataset.features.shape dataset.targets.max().item() dataset.targets.shape train_dls = dataset.train_dataloaders() test_dl = dataset.test_dataloaders() # ## NAM Model model = NAM( config=config, name="NAMModel_Breast", num_inputs=len(dataset[0][0]), num_units=get_num_units(config, dataset.features), # num_outputs=int(dataset.targets.max().item() + 1), ) model # ## Training engine = Engine(config, model) # engine checkpoint_callback = pl.callbacks.model_checkpoint.ModelCheckpoint( dirpath=f"{config.output_dir}/{model.name}/checkpoints", filename=model.name + "-{epoch:02d}-{val_loss:.4f}", monitor='val_loss', save_top_k=3, mode='min' ) while True: try: train_dl, val_dl = next(train_dls) trainer = pl.Trainer( default_root_dir=f"{config.output_dir}/{model.name}", max_epochs=config.training_epochs, callbacks=[checkpoint_callback], ) trainer.fit(model=engine, train_dataloader=train_dl, val_dataloaders=val_dl) except StopIteration: break # ## 
Testing trainer.test(model=engine, test_dataloaders=test_dl) # --- # --- # --- # ## Dataset & Dataloaders (GALLUP Dataset) # dataset = load_sklearn_housing_data(config=config) # dataset = load_breast_data(config=config) dataset = load_gallup_data( config=config, features_columns= ["income_2", "WP1219", "WP1220", "weo_gdpc_con_ppp"] ) len(dataset.data) dataset.features.shape dataset.targets.max().item() dataset.targets.shape config.batch_size = 2048 train_dls = dataset.train_dataloaders() test_dl = dataset.test_dataloaders() # ## NAM Model model = NAM( config=config, name="NAMModel_GALLUP", num_inputs=len(dataset[0][0]), num_units=get_num_units(config, dataset.features), num_outputs=int(dataset.targets.max().item() + 1), ) model # ## Training config.training_epochs = 1 engine = Engine(config, model) # engine checkpoint_callback = pl.callbacks.model_checkpoint.ModelCheckpoint( dirpath=f"{config.output_dir}/{model.name}/checkpoints", filename=model.name + "-{epoch:02d}-{val_loss:.4f}", monitor='val_loss', save_top_k=3, mode='min' ) while True: try: train_dl, val_dl = next(train_dls) trainer = pl.Trainer( default_root_dir=f"{config.output_dir}/{model.name}", max_epochs=config.training_epochs, callbacks=[checkpoint_callback], ) trainer.fit(model=engine, train_dataloader=train_dl, val_dataloaders=val_dl) except StopIteration: break # ## Testing trainer.test(model=engine, test_dataloaders=test_dl) # --- # --- # --- from nam.data.base import NAMDataset config.regression = False csv_file = 'data/GALLUP.csv' data = pd.read_csv(csv_file) data["WP16"] = np.where(data["WP16"]<6, 0, 1) data.head(3) data = data.sample(frac=0.1) data = df data.head() len(data) features_columns = list(df.columns)[2:]# ["income_2", "WP1219", "WP1220", "weo_gdpc_con_ppp", "year"] targets_column = "WP16" weights_column = "wgt" dataset = NAMDataset( config=config, file_path=data, features_columns=features_columns, targets_column=targets_column, weights_column=weights_column, ) dataset len(dataset) config.hidden_sizes = [16] model = NAM( config=config, name="NAMModel_GALLUP", num_inputs=len(dataset[0][0]), num_units=get_num_units(config, dataset.features), num_outputs=int(dataset.targets.max().item() + 1), ) model # + total_count = len(dataset) train_count = int(0.7 * total_count) valid_count = int(0.2 * total_count) test_count = total_count - train_count - valid_count train_dataset, valid_dataset, test_dataset = torch.utils.data.random_split( dataset, (train_count, valid_count, test_count) ) train_dataset_loader = torch.utils.data.DataLoader( train_dataset, batch_size=config.batch_size, shuffle=True, num_workers=config.num_workers ) valid_dataset_loader = torch.utils.data.DataLoader( valid_dataset, batch_size=config.batch_size, shuffle=True, num_workers=config.num_workers ) test_dataset_loader = torch.utils.data.DataLoader( test_dataset, batch_size=config.batch_size, shuffle=False, num_workers=config.num_workers ) dataloaders = { "train": train_dataset_loader, "val": valid_dataset_loader, "test": test_dataset_loader, } # - dataloaders engine = Engine(config, model) config.training_epochs = 3 trainer = pl.Trainer( default_root_dir=f"{config.output_dir}/{model.name}", max_epochs=config.training_epochs, ) trainer.fit(model=engine, train_dataloader=dataloaders['train'], val_dataloaders=dataloaders['val']) trainer.test(model=engine, test_dataloaders=dataloaders['test']) # --- # --- # --- # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = pd.read_csv('data/GALLUP.csv') data.head(3) # - 
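# Before fitting the baseline classifiers below, it is worth checking how balanced the binarized WP16 target is, since plain accuracy is easy to over-read on a skewed class distribution. A minimal sketch (illustrative only; it re-reads the same CSV and re-applies the WP16 < 6 cut used earlier in this notebook):
import numpy as np
import pandas as pd

gallup = pd.read_csv('data/GALLUP.csv')
wp16_binary = np.where(gallup['WP16'] < 6, 0, 1)
print(pd.Series(wp16_binary).value_counts(normalize=True))  # share of each class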
y=data.iloc[:,0].values data = data.drop('country', axis=1) data =(data-data.mean())/data.std() data = data.interpolate(method='linear', axis=0) data.head(3) x= data.iloc[:,1:].values # y=data.iloc[:,0].values x, y from sklearn.preprocessing import LabelEncoder ly = LabelEncoder() y = ly.fit_transform(y) y data.columns sns.set() sns.pairplot(data[:1000], hue="WP16", diag_kind="kde") from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2) # + from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() gnb.fit(x_train,y_train) y_pred_test = gnb.predict(x_test) from sklearn.metrics import accuracy_score acc = accuracy_score(y_test,y_pred_test) # - acc # + from sklearn.tree import DecisionTreeClassifier dt = DecisionTreeClassifier() dt.fit(x_train,y_train) y_pred2 = dt.predict(x_test) acc2 = accuracy_score(y_test,y_pred2) acc2 # + from sklearn.neighbors import KNeighborsClassifier clf = KNeighborsClassifier(n_neighbors=3,algorithm='ball_tree') clf.fit(x_train,y_train) y_pred3 = clf.predict(x_test) acc3 = accuracy_score(y_test,y_pred3) acc3 # + from sklearn.svm import SVC svc1 = SVC(C=50,kernel='rbf',gamma=1) svc1.fit(x_train,y_train) y_pred4 = svc1.predict(x_test) from sklearn.metrics import accuracy_score acc4= accuracy_score(y_test,y_pred4) acc4 # + import numpy as np from deeptables.models import deeptable, deepnets from deeptables.datasets import dsutils from sklearn.model_selection import train_test_split # #loading data # df = dsutils.load_bank() # df_train, df_test = train_test_split(df, test_size=0.2, random_state=42) # y = df_train.pop('y') # y_test = df_test.pop('y') # #training # config = deeptable.ModelConfig(nets=deepnets.DeepFM) # dt = deeptable.DeepTable(config=config) # model, history = dt.fit(df_train, y, epochs=10) # #evaluation # result = dt.evaluate(df_test,y_test, batch_size=512, verbose=0) # print(result) # #scoring # preds = dt.predict(df_test) # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = pd.read_csv('data/GALLUP.csv') data.head(3) # - y=data.iloc[:,0].values # data = data.drop('country', axis=1) # data =(data-data.mean())/data.std() data = data.interpolate(method='linear', axis=0) data.head(3) data = df x= data.iloc[:,1:].values y=data.iloc[:,0].values x, y from sklearn.preprocessing import LabelEncoder ly = LabelEncoder() y = ly.fit_transform(y) y from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2) x_train.shape, y_train.shape # + #training config = deeptable.ModelConfig(nets=deepnets.DeepFM, categorical_columns='auto', # or categorical_columns=['x1', 'x2', 'x3', ...] 
metrics=['AUC', 'accuracy'], # can be `metrics=['RootMeanSquaredError']` for regression task auto_categorize=True, auto_discrete=False, embeddings_output_dim=20, embedding_dropout=0.3,) dt = deeptable.DeepTable(config=config) model, history = dt.fit(x_train, y_train, batch_size=2048, epochs=3) #evaluation result = dt.evaluate(x_test,y_test, batch_size=1024, verbose=0) print(result) #scoring preds = dt.predict(x_test) # - # # DATA PROCESSING import pandas as pd df = pd.read_csv('data/GALLUP.csv') df.head(10) df.isna().sum() df = df.interpolate(method ='linear', limit_direction ='forward') # + # df = df.drop(columns=['year', 'country']) # - df.head() df = df.drop_duplicates(keep=False) def encode_and_bind(original_dataframe, feature_to_encode): dummies = pd.get_dummies(original_dataframe[[feature_to_encode]]) res = pd.concat([original_dataframe, dummies], axis=1) res = res.drop([feature_to_encode], axis=1) return(res) df2 = encode_and_bind(df, 'year') df2 = encode_and_bind(df2, 'country') df2.head() df = df2 columns = ['wgt', 'income_2', 'WP1219', 'WP1220', 'weo_gdpc_con_ppp'] df[columns] = df[columns].apply(lambda x: (x - x.min()) / (x.max() - x.min())) df.head() df["WP16"] = np.where(df["WP16"]<6, 0, 1) df.head() list(df.columns)[2:] from sklearn.preprocessing import LabelEncoder ly = LabelEncoder() # y = ly.fit_transform(y) columns = ['year', 'country'] df[columns] = df[columns].apply(lambda x: ly.fit_transform(x)) df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:qcodes] # language: python # name: conda-env-qcodes-py # --- # # Introduction # # In this notebook, we will explain how the hardsweep functionality works in pysweep. We will make a dummpy instrument called "Gaussian2D" to which we can send set points as arrays. The detector will subsequently send back an array representing measured values: `f(x, y) = N(x-locx, y-locy)`. Here `N(x, y) = ampl * exp(-(sx*x**2+sy*y**2+sxy*x*y))` is a 2D Gaussian. # # We will show how to perform a hardware sweep and how we can nest a hardware sweep in a software controlled sweep. To demonstrate the latter, we will sweep the value of `locx` in software. For each set point of `locx` we will get a measurement represening an 2D image. 
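# To make the detector model concrete before the instrument code: the blob described above, f(x, y) = N(x - locx, y - locy) with N(x, y) = ampl * exp(-(sx*x**2 + sy*y**2 + sxy*x*y)), can be evaluated directly with NumPy. A minimal standalone sketch (the coefficient values are illustrative defaults, not the covariance used by the Gaussian2D instrument below):
import numpy as np

def gaussian_blob(x, y, locx=0.5, locy=-0.3, sx=1.0, sy=3.0, sxy=0.9, ampl=1.0):
    # Shift to the blob centre, then apply the quadratic form in the exponent.
    dx, dy = x - locx, y - locy
    return ampl * np.exp(-(sx * dx**2 + sy * dy**2 + sxy * dx * dy))

xx, yy = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
print(gaussian_blob(xx, yy).shape)  # (100, 100) image of the blob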
# ## Imports # + import numpy as np import itertools import matplotlib.pyplot as plt from functools import partial from qcodes import Instrument from qcodes.dataset.data_export import get_data_by_id from pytopo.sweep import sweep, do_experiment, hardsweep # - # ## Setting up a dummy instrument class Gaussian2D(Instrument): def __init__(self, name, loc, sig, ampl): super().__init__(name) self._loc = np.array(loc) self._ampl = ampl if np.atleast_2d(sig).shape == (1, 1): sig = sig * np.identity(2) self._sig = sig self.add_parameter( "locx", set_cmd=lambda value: partial(self.set_loc, locx=value)() ) self.add_parameter( "locy", set_cmd=lambda value: partial(self.set_loc, locy=value)() ) def send(self, setpoints): self._setpoints = setpoints def set_loc(self, locx=None, locy=None): locx = locx or self._loc[0] locy = locy or self._loc[1] self._loc = np.array([locx, locy]) def __call__(self): r = self._setpoints - self._loc[:, None] sig_inv = np.linalg.inv(self._sig) rscaled = np.matmul(sig_inv, r) dist = np.einsum("ij,ij->j", r, rscaled) return self._ampl * np.exp(-dist) # ## Make a detector # + loc = [0.5, -0.3] # The location of the Gaussian blob sig = np.array([[1.0, 0.9], [0.9, 3.0]]) ampl = 1 gaussian2d = Gaussian2D("gaussian", loc, sig, ampl) # - # ## Test if the detector works # + def sample_points(): x = np.linspace(-1, 1, 100) y = np.linspace(-1, 1, 100) xx, yy = zip(*itertools.product(x, y)) xy = np.vstack([xx, yy]) return xy xy = sample_points() gaussian2d.send(xy) # Send the set points. In a real hardware sweep, this can be a different instrument meas = gaussian2d() # Make the detector measure at the setpoints # Construct an image to see if we were succesfull. We should see an rotated and elongated gaussian blurr. img = meas.reshape((100, 100)) plt.imshow(img, extent=[-1, 1, -1, 1], interpolation="nearest", origin="lower") plt.show() # - # ## setting up the hardsweep # # We make a hardsweep by using the `hardsweep` decorator in the `pytopo.sweep` module. *In this example*, the number of independent parameters `ind` is two (`x` and `y`) and the numbers of measurement parameters is one (`i`). The decorator parameters `ind` and `dep` are arrays of tuples `(name, unit)`. # # The decorator does not place any restrictions on the number and type of arguments of the decorated function; this can be anything the user desires. However, the decorated function *must* return two parameters; the set points and measurements, both of which are ndarrays. The first dimension of the set points array contains the number of independent parameters. This must match the number of `ind` parameters given in the decorator, else an exception is raised. In the example, `setpoints` is an 2-by-N array since we have two independent parameters `x` and `y`. `N` is the number of set points. # # Likewise the first dimension of the measurements array contains the number of measurements *per set point*. In this example, we have one measurement `f(x, y)` per set point, where `f` is the previously mentioned 2D Gaussian. This is reflected by the fact that the number of dependent parameter registered in the decorator is one: `i`. The number of measurements `N` must match the number of set points. @hardsweep(ind=[("x", "V"), ("y", "V")], dep=[("i", "A")]) def hardware_measurement(setpoints, detector): """ Use the detector to measure at given set points. Notice that we do not literally iterate over the set points, rather, we let the hardware take care of this. 
Returns: spoint_values (ndarray): 2-by-n array of setpoints measurements (ndarray): 1-by-n array of measurements """ spoint_values = setpoints() detector.send(spoint_values) measurements = detector() return spoint_values, measurements # ## Perform the hardware sweep data = do_experiment( "hardweep/1", hardware_measurement(sample_points, gaussian2d) ) data.plot() # ## Combining a hardware and software sweep # We will now sweep the location of the Gaussin blob along the x-axis and for each set point, we will perform a hardware measurement sweep_object = sweep(gaussian2d.locx, np.linspace(-0.8, 0.8, 9))( hardware_measurement(sample_points, gaussian2d) ) data = do_experiment( "hardweep/2", sweep_object ) # ## Plot all the hardware measurements data_dict = get_data_by_id(data.run_id) print("parameter name (units) - shape of array") print("---------------------------------------") for axis_data in data_dict[0]: print(f"{axis_data['name']} ({axis_data['unit']}) - {axis_data['data'].shape} array") unique_locx = np.unique(data_dict[0][0]['data']) # + f, axes = plt.subplots(3, 3, figsize=(7, 7)) for locx, ax in zip(unique_locx, axes.ravel()): imgdata1 = np.array(data_dict[0][3]['data'][data_dict[0][0]['data'] == locx]).reshape(100, 100) ax.imshow(imgdata1, extent=[-1, 1, -1, 1], interpolation="nearest", origin="lower") ax.set_title(f"locx=={locx}") ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) f.tight_layout() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import print_function, division from collections import deque import math import sys import time BOARD_SIZE = 16 INPUT_FILE = './input.txt' INFINITY = sys.maxsize NEGATIVE_INFINITY = -1*sys.maxsize DIRECTIONS = [(1, 1), (1, -1), (1, 0), (0, 1), (0, -1), (-1, 0), (-1, 1), (-1, -1)] CAMP = {} CAMP['W'] = [(11, 14), (11, 15), (12, 13), (12, 14), (12, 15), (13, 12), (13, 13), (13, 14), (13, 15), (14, 11), (14, 12), (14, 13), (14, 14), (14, 15), (15, 11), (15, 12), (15, 13), (15, 14), (15, 15)] CAMP['B'] = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (2, 3), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1)] CAMP_CORNER = {} CAMP_CORNER['W'] = (15, 15) CAMP_CORNER['B'] = (0, 0) MY_COLOR = None def read_input(): # reading input.txt with open(INPUT_FILE, 'r') as file: line = file.readline() mode = line.strip() line = file.readline() player_color = line.strip() line = file.readline() total_play_time = float(line.strip()) board = [] for i in range(BOARD_SIZE): line = file.readline() board.append(line.strip()) return mode, player_color, total_play_time, board def save_output(x): path = x.get_parent_move()+'\n' while x.parent != None: x = x.parent path = x.get_parent_move() + '\n' + path with open('output.txt', 'w') as file: file.write(path) return path class Node: def __init__(self, pawn_positions, current_pawn, parent=None): self.state = pawn_positions self.current_pawn = current_pawn self.parent = parent def get_single_jump_options(self): options = [] for direction in DIRECTIONS: neighbor = (self.current_pawn[0] + direction[0], self.current_pawn[1] + direction[1]) neighbors_neighbor = (neighbor[0] + direction[0], neighbor[1] + direction[1]) if(neighbors_neighbor[0] < 0 or neighbors_neighbor[0] >= BOARD_SIZE or neighbors_neighbor[1] >= BOARD_SIZE or neighbors_neighbor[1] < 0): continue 
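# A single jump is legal only when the adjacent square is occupied by any pawn and the square directly beyond it (checked below) is empty.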
if(neighbor in self.state and not neighbors_neighbor in self.state):# neighbors_neighbor != self.parent.pawn_position pawn_color = self.state[self.current_pawn] temp = self.state.copy() del temp[self.current_pawn] temp[neighbors_neighbor] = pawn_color options.append((temp, neighbors_neighbor)) return options def get_parent_move(self): pass def print_path(self): x = self path = "" while x != None: path = "(" + str(x.current_pawn[0]) + "," + str(x.current_pawn[1]) + ")" + path x = x.parent return path def __repr__(self): board = ['................']*16 for position, color in self.state.items(): temp = list(board[position[0]]) temp[position[1]] = color board[position[0]] = "".join(temp) return "*0123456789012345\n"+"\n".join([str(i%10)+e for i, e in enumerate(board)]) + "\n" + self.print_path() def utility(state, max_player): distances = [] for position, color in state.items(): if(color == MY_COLOR): opponent_color = 'W' if MY_COLOR == 'B' else 'B' opponent_corner = CAMP_CORNER[opponent_color] distances.append(SLD(position, opponent_corner)) if(max_player): return 1/sum(distances)*len(distances)*1000 else: return -1/sum(distances)*len(distances)*1000 def DFS(start_state, current_pawn): q = deque() start_state = Node(start_state, current_pawn, None) q.append(start_state) visited = [] possible_moves = [] while len(q) != 0: currnode = q.pop() options = currnode.get_single_jump_options() visited.append(currnode.state) if(currnode.parent != None): possible_moves.append(currnode) for option in options: if(option[0] in visited): continue n = Node(option[0], option[1], currnode) q.append(n) return possible_moves def get_action_for_single_pawn(state, position): # adding moves node = Node(state, position, parent=None) moves = [] for direction in DIRECTIONS: neighbor = (position[0] + direction[0], position[1] + direction[1]) if(neighbor[0] < 0 or neighbor[0] >= BOARD_SIZE or neighbor[1] >= BOARD_SIZE or neighbor[1] < 0): continue if(not neighbor in state): pawn_color = state[position] temp = state.copy() del temp[position] temp[neighbor] = pawn_color moves.append(Node(temp, neighbor, parent=node)) jump_moves = DFS(state, position) # print("jump moves", jump_moves) moves = moves+jump_moves return moves def SLD(curr, goal): # straight line distance x = math.pow(goal[0] - curr[0], 2) y = math.pow(goal[1] - curr[1], 2) return math.sqrt(x + y) def actions(state, player_color): moves = [] # if something in camp move it out first for position in CAMP[player_color]: if(position in state and state[position] == player_color): proposed_moves = get_action_for_single_pawn(state, position) moves_out_of_camp = [] moves_away_from_corner = [] for move in proposed_moves: if(not move.current_pawn in CAMP[player_color]): #moves out of camp moves_out_of_camp.append(move) elif(SLD(position, CAMP_CORNER[player_color]) < SLD(move.current_pawn, CAMP_CORNER[player_color])): #moves away from the camp corner moves_away_from_corner.append(move) if(len(moves_out_of_camp) == 0 and len(moves) == 0): moves = moves+moves_away_from_corner else: moves = moves+moves_out_of_camp if(len(moves) != 0): return moves for position, color in state.items(): if(color == player_color): proposed_moves = get_action_for_single_pawn(state, position) for move in proposed_moves: if(not move.current_pawn in CAMP[player_color]):#does not return to camp moves.append(move) return moves def terminate(depth): if(depth >= 4): return True else: return False def alpha_beta_search(state, player_color): v, move = max_value(state, NEGATIVE_INFINITY, INFINITY, 
player_color, depth=0) return v, move def max_value(state, alpha, beta, player_color, depth=0): if(terminate(depth)): return utility(state, max_player=True), None v = NEGATIVE_INFINITY v_move = None for move in actions(state, player_color): next_player = 'W' if player_color == 'B' else 'B' min_v, _ = min_value(move.state, alpha, beta, next_player, depth=depth+1) if(min_v > v): v = min_v v_move = move if(v >= beta): return v, v_move alpha = max(alpha, v) return v, v_move def min_value(state, alpha, beta, player_color, depth=0): if(terminate(depth)): return utility(state, max_player=False), None v = INFINITY v_move = None for move in actions(state, player_color): next_player = 'W' if player_color == 'B' else 'B' max_v, max_move = max_value(move.state, alpha, beta, next_player, depth=depth+1) if(max_v < v): v = max_v v_move = move if(v <= beta): return v, v_move beta = min(beta, v) return v, v_move # - SLD((0, 0), (1,1)) SLD((0,0), (2,2)) # + mode, player_color, total_play_time, board = read_input() positions = {} for i in range(0, len(board)): for j in range(0, len(board[0])): if(board[i][j] != '.'): positions[(i, j)] = board[i][j] print(positions) if(player_color == 'WHITE'): player_color = 'W' else: player_color = 'B' MY_COLOR = player_color t = time.time() # %time action = alpha_beta_search(positions, player_color) print("CPU time", time.time()-t) print(action[0]) print(action[1]) # - m = actions({(6, 6): 'W', (8, 8): 'W', (9, 9): 'W'}, 'W') print(len(m)) for a in m: print(a) print() # + import numpy as np x = np.zeros((16, 16)) # + import math def SLD(curr, goal): # straight line distance x = math.pow(goal[0] - curr[0], 2) y = math.pow(goal[1] - curr[1], 2) return math.sqrt(x + y) precomputed_distances = {} def get_distances(position): distances = {} for i in range(len(x)): for j in range(len(x[0])): distances[(i, j)] = SLD(position, (i, j)) return distances for i in range(len(x)): for j in range(len(x[0])): precomputed_distances[(i, j)] = get_distances((i, j)) # - import pickle with open("distances.pkl", "wb") as f: pickle.dump(precomputed_distances, f) with open("distances.pkl", "rb") as f: PRECOMPUTED_DISTANCES = pickle.load(f) # %time SLD((0,0), (15, 15)) # %time PRECOMPUTED_DISTANCES[(1,5)][(10,10)] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 1. Identify all SMILES strings that specify stereochemistry # 2. Find a way to encode whether the specified stereochemistry is being respected # 3. 
Screen all of the trajectories from bayes_implicit_solvent.solvation_free_energy import db, smiles_list, mol_top_sys_pos_list smiles_list[0] stereo_smiles = [(i, s) for (i, s) in enumerate(smiles_list) if ('/' in s) or ('\\' in s)] len(stereo_smiles) stereo_smiles = [(i, s) for (i, s) in enumerate(smiles_list) if ('@' in s) or ('/' in s) or ('\\' in s)] len(stereo_smiles) for (i,s) in stereo_smiles[::-1]: print(s) stereo_smiles # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import glob import pickle import probanno import cobra universal = cobra.io.load_json_model("../Data/GramPosUni.json") rxn_ids = [x.id for x in universal.reactions] # Get IDs for original Prob files original_files = glob.glob('../likelihoods/*.probs') genome_ids = [x.replace("../likelihoods/","").replace(".probs","") for x in original_files] # Get IDs for existing Prob files existing_files = glob.glob('../likelihoods_py3/*.probs') existing_prob_py3s = [x.replace("../likelihoods_py3/","").replace(".probs","") for x in existing_files] likelihoods = pickle.load(open('../likelihoods/'+ genome_ids[0] +'.probs')) rxn_ids_w_probs = [] for rxn_id in rxn_ids: try: likelihoods[rxn_id] rxn_ids_w_probs.append(rxn_id) except: pass len(rxn_ids_w_probs) for genome_id in genome_ids: if not genome_id in existing_prob_py3s: try: likelihoods = pickle.load(open('../likelihoods/'+ genome_id +'.probs')) likelihoods_py3 = {} for rxn_id in rxn_ids_w_probs: try: likelihoods_py3[rxn_id] = likelihoods[rxn_id] except: pass pickle.dump(likelihoods_py3, open('../likelihoods_py3/'+ genome_id +'.probs', "wb")) except: pass likelihoods_py3[rxn_id] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="PLDyUYxdXVIO" colab_type="text" # # Understanding how using the right data structure helps in Code Optimization # # # + [markdown] id="f68faBD_Xg0p" colab_type="text" # The task we are about to perform is to find the common elements between two datasets. # > Datasets are: # >> 1. all_coding_books.txt: contains the id of all the books about coding written in the last two years # >> 2. 
books_published_last_two_years.txt: contains the id of all books written in the last two years # # + [markdown] id="6LKIPArMYTKw" colab_type="text" # > The first easily understood method that comes to our mind will be to store the data in two lists, compare the lists using for loop # + id="JSncdsIYYvyx" colab_type="code" colab={} #Import libraries import time import pandas as pd import numpy as np # + id="jivZjLmtU8DO" colab_type="code" colab={} #Store both the file data to a csv as a list with open('books_published_last_two_years.txt') as f: recent_books = f.read().split('\n') with open('all_coding_books.txt') as f: coding_books = f.read().split('\n') # + id="9rJglCbrY-mx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="b79dfeb4-7f02-4581-f789-6f7a2b19c18b" start = time.time() recent_coding_books = [] for book in recent_books: if book in coding_books: recent_coding_books.append(book) print("There are " + str(len(recent_coding_books)) + " coding books released recently") print('Duration: {} seconds'.format(time.time() - start)) # + [markdown] id="PFWAVezKec6X" colab_type="text" # Let's try using the data structure of sets instead of lists to improve the speed of search # > Note: While using sets we can also use the vector function intersection to check for common elements. # + id="KKj0hS4qeMm8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="a34f7121-c98f-4901-ead4-f5833a7483ce" start = time.time() recent_coding_books = set(recent_books).intersection(coding_books) print("There are " + str(len(recent_coding_books)) + " coding books released recently") print('Duration: {} seconds'.format(time.time() - start)) # + [markdown] id="j6WMDI_yfQvH" colab_type="text" # **Observation:** Using of sets is more efficient in terms of time in this case. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.018849, "end_time": "2020-11-25T17:42:25.365369", "exception": false, "start_time": "2020-11-25T17:42:25.346520", "status": "completed"} tags=[] # **This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/inconsistent-data-entry).** # # --- # # + [markdown] papermill={"duration": 0.015082, "end_time": "2020-11-25T17:42:25.396013", "exception": false, "start_time": "2020-11-25T17:42:25.380931", "status": "completed"} tags=[] # In this exercise, you'll apply what you learned in the **Inconsistent data entry** tutorial. # # # Setup # # The questions below will give you feedback on your work. Run the following cell to set up the feedback system. # + papermill={"duration": 0.130998, "end_time": "2020-11-25T17:42:25.542254", "exception": false, "start_time": "2020-11-25T17:42:25.411256", "status": "completed"} tags=[] from learntools.core import binder binder.bind(globals()) from learntools.data_cleaning.ex5 import * print("Setup Complete") # + [markdown] papermill={"duration": 0.01555, "end_time": "2020-11-25T17:42:25.575006", "exception": false, "start_time": "2020-11-25T17:42:25.559456", "status": "completed"} tags=[] # # Get our environment set up # # The first thing we'll need to do is load in the libraries and dataset we'll be using. We use the same dataset from the tutorial. 
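# Among the helper modules imported in the next cell is `chardet`; it is not used in the cells shown here, but a typical use in this kind of cleaning work is sniffing a CSV's encoding before `pd.read_csv`. A minimal sketch (the sample size is arbitrary and the path simply reuses the dataset path from the cell below):
import chardet

with open("../input/pakistan-intellectual-capital/pakistan_intellectual_capital.csv", "rb") as raw_file:
    detected = chardet.detect(raw_file.read(100000))  # inspect roughly the first 100 kB
print(detected)  # e.g. {'encoding': ..., 'confidence': ..., 'language': ...}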
# + papermill={"duration": 0.036583, "end_time": "2020-11-25T17:42:25.627455", "exception": false, "start_time": "2020-11-25T17:42:25.590872", "status": "completed"} tags=[] # modules we'll use import pandas as pd import numpy as np # helpful modules import fuzzywuzzy from fuzzywuzzy import process import chardet # read in all our data professors = pd.read_csv("../input/pakistan-intellectual-capital/pakistan_intellectual_capital.csv") # set seed for reproducibility np.random.seed(0) # + [markdown] papermill={"duration": 0.015512, "end_time": "2020-11-25T17:42:25.658975", "exception": false, "start_time": "2020-11-25T17:42:25.643463", "status": "completed"} tags=[] # Next, we'll redo all of the work that we did in the tutorial. # + papermill={"duration": 0.039768, "end_time": "2020-11-25T17:42:25.714598", "exception": false, "start_time": "2020-11-25T17:42:25.674830", "status": "completed"} tags=[] # convert to lower case professors['Country'] = professors['Country'].str.lower() # remove trailing white spaces professors['Country'] = professors['Country'].str.strip() # get the top 10 closest matches to "south korea" countries = professors['Country'].unique() matches = fuzzywuzzy.process.extract("south korea", countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio) def replace_matches_in_column(df, column, string_to_match, min_ratio = 47): # get a list of unique strings strings = df[column].unique() # get the top 10 closest matches to our input string matches = fuzzywuzzy.process.extract(string_to_match, strings, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio) # only get matches with a ratio > 90 close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio] # get the rows of all the close matches in our dataframe rows_with_matches = df[column].isin(close_matches) # replace all rows with close matches with the input matches df.loc[rows_with_matches, column] = string_to_match # let us know the function's done print("All done!") replace_matches_in_column(df=professors, column='Country', string_to_match="south korea") countries = professors['Country'].unique() # + [markdown] papermill={"duration": 0.016291, "end_time": "2020-11-25T17:42:25.747494", "exception": false, "start_time": "2020-11-25T17:42:25.731203", "status": "completed"} tags=[] # # 1) Examine another column # # Write code below to take a look at all the unique values in the "Graduated from" column. # + papermill={"duration": 0.024416, "end_time": "2020-11-25T17:42:25.788479", "exception": false, "start_time": "2020-11-25T17:42:25.764063", "status": "completed"} tags=[] # TODO: Your code here # + [markdown] papermill={"duration": 0.016296, "end_time": "2020-11-25T17:42:25.821637", "exception": false, "start_time": "2020-11-25T17:42:25.805341", "status": "completed"} tags=[] # Do you notice any inconsistencies in the data? Can any of the inconsistencies in the data be fixed by removing white spaces at the beginning and end of cells? # # Once you have answered these questions, run the code cell below to get credit for your work. # + papermill={"duration": 0.029941, "end_time": "2020-11-25T17:42:25.868249", "exception": false, "start_time": "2020-11-25T17:42:25.838308", "status": "completed"} tags=[] # Check your answer (Run this code cell to receive credit!) 
q1.check() # + papermill={"duration": 0.025266, "end_time": "2020-11-25T17:42:25.911672", "exception": false, "start_time": "2020-11-25T17:42:25.886406", "status": "completed"} tags=[] # Line below will give you a hint #q1.hint() # + [markdown] papermill={"duration": 0.018189, "end_time": "2020-11-25T17:42:25.948671", "exception": false, "start_time": "2020-11-25T17:42:25.930482", "status": "completed"} tags=[] # # 2) Do some text pre-processing # # Convert every entry in the "Graduated from" column in the `professors` DataFrame to remove white spaces at the beginning and end of cells. # + papermill={"duration": 0.031389, "end_time": "2020-11-25T17:42:25.998670", "exception": false, "start_time": "2020-11-25T17:42:25.967281", "status": "completed"} tags=[] # TODO: Your code here ____ # Check your answer q2.check() # + papermill={"duration": 0.030521, "end_time": "2020-11-25T17:42:26.049652", "exception": false, "start_time": "2020-11-25T17:42:26.019131", "status": "completed"} tags=[] # Lines below will give you a hint or solution code #q2.hint() #q2.solution() # + [markdown] papermill={"duration": 0.019878, "end_time": "2020-11-25T17:42:26.090012", "exception": false, "start_time": "2020-11-25T17:42:26.070134", "status": "completed"} tags=[] # # 3) Continue working with countries # # In the tutorial, we focused on cleaning up inconsistencies in the "Country" column. Run the code cell below to view the list of unique values that we ended with. # + papermill={"duration": 0.031323, "end_time": "2020-11-25T17:42:26.141457", "exception": false, "start_time": "2020-11-25T17:42:26.110134", "status": "completed"} tags=[] # get all the unique values in the 'City' column countries = professors['Country'].unique() # sort them alphabetically and then take a closer look countries.sort() countries # + [markdown] papermill={"duration": 0.020166, "end_time": "2020-11-25T17:42:26.182465", "exception": false, "start_time": "2020-11-25T17:42:26.162299", "status": "completed"} tags=[] # Take another look at the "Country" column and see if there's any more data cleaning we need to do. # # It looks like 'usa' and 'usofa' should be the same country. Correct the "Country" column in the dataframe so that 'usofa' appears instead as 'usa'. # # **Use the most recent version of the DataFrame (with the whitespaces at the beginning and end of cells removed) from question 2.** # + papermill={"duration": 0.033996, "end_time": "2020-11-25T17:42:26.236985", "exception": false, "start_time": "2020-11-25T17:42:26.202989", "status": "completed"} tags=[] # TODO: Your code here! ____ # Check your answer q3.check() # + papermill={"duration": 0.029375, "end_time": "2020-11-25T17:42:26.288656", "exception": false, "start_time": "2020-11-25T17:42:26.259281", "status": "completed"} tags=[] # Lines below will give you a hint or solution code #q3.hint() #q3.solution() # + [markdown] papermill={"duration": 0.021979, "end_time": "2020-11-25T17:42:26.333104", "exception": false, "start_time": "2020-11-25T17:42:26.311125", "status": "completed"} tags=[] # # Congratulations! # # Congratulations for completing the **Data Cleaning** course on Kaggle Learn! # # To practice your new skills, you're encouraged to download and investigate some of [Kaggle's Datasets](https://www.kaggle.com/datasets). # + [markdown] papermill={"duration": 0.022617, "end_time": "2020-11-25T17:42:26.378177", "exception": false, "start_time": "2020-11-25T17:42:26.355560", "status": "completed"} tags=[] # --- # # # # # *Have questions or comments? 
Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/172650) to chat with other Learners.* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="SxjD9fZHC02P" colab_type="text" # # Oxford Pets # + [markdown] id="c7B0PG8WC02R" colab_type="text" # ## Setup # + id="CadPzcshC02S" colab_type="code" colab={} SETUP = True # + id="lpgOZwfaGXBm" colab_type="code" colab={} # a = [] # while(1): # a.append('1') # + id="E4zT3_bjC02W" colab_type="code" outputId="d438d432-8ac5-41ca-f379-125f6efd0a00" colab={"base_uri": "https://localhost:8080/", "height": 34} import os try: # %tensorflow_version 2.x from google.colab import drive drive.mount('/content/drive', force_remount=True) # %matplotlib inline except: # %load_ext lab_black os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "0" # + id="h9LAWcbmC02a" colab_type="code" colab={} # if SETUP: # # !bash ./setup.sh # + id="XZBxWqUAZium" colab_type="code" outputId="cba38561-e642-4836-8bb9-4c061a177447" colab={"base_uri": "https://localhost:8080/", "height": 247} # !pip install -q -U toai # + id="2R1UMsioC02i" colab_type="code" outputId="750681e7-8a5e-4e54-e80f-94689cc1e6fb" colab={"base_uri": "https://localhost:8080/", "height": 52} print(__import__("toai").__version__) print(__import__("tensorflow").__version__) # + id="xFAJCHMqC02m" colab_type="code" outputId="c8df37a0-7d48-4db8-de82-5a218a4e1288" colab={"base_uri": "https://localhost:8080/", "height": 72} from toai.imports import * from toai.image import ImageAugmentor, ImageParser, ImageResizer, ImageDataContainer from toai.data.utils import split_df from toai.encode import CategoricalEncoder from typing import * import tensorflow as tf from tensorflow import keras import tensorflow_datasets as tfds import tensorflow_hub as hub from glob import glob from matplotlib.patches import Rectangle # + [markdown] id="HlSPx5C4C02r" colab_type="text" # ## Loading the data # + id="Pxc2VWsQC02s" colab_type="code" colab={} DATA_DIR = Path("data/pets") DATA_DIR.mkdir(parents=True, exist_ok=True) # + id="Z-Eo-D62EIvq" colab_type="code" colab={} TEMP_DIR = Path('./drive/My Drive/Projects/16_Advanced_CV/temp/pets') # + id="BbATjroeC02v" colab_type="code" colab={} # if SETUP: # shutil.rmtree(str(DATA_DIR)) # DATA_DIR.mkdir(parents=True, exist_ok=True) # TEMP_DIR.mkdir(parents=True, exist_ok=True) # kaggle.api.authenticate() # kaggle.api.dataset_download_files( # dataset="jutrera/stanford-car-dataset-by-classes-folder", # path=DATA_DIR, # unzip=True, # ) # + id="VYREx9dzaC_j" colab_type="code" colab={} def setup_kaggle(): # !rm -rf /root/.kaggle # x = !ls kaggle.json assert x == ['kaggle.json'], 'Upload kaggle.json' # !mkdir /root/.kaggle # !mv kaggle.json /root/.kaggle # !chmod 600 /root/.kaggle/kaggle.json setup_kaggle() # + id="veTITsyYaISD" colab_type="code" colab={} DOWNLOAD_DATA = True if DOWNLOAD_DATA: # !kaggle datasets download -q -d devdgohil/the-oxfordiiit-pet-dataset --unzip -p {str(DATA_DIR)} # + id="lMpAiijqKvsC" colab_type="code" colab={} import xml.etree.ElementTree as ET # + id="-98gMK0PK1_l" colab_type="code" colab={} tree = ET.parse(DATA_DIR/'annotations/annotations/xmls/Abyssinian_1.xml') root = tree.getroot() # + id="nCQYPAujLf82" colab_type="code" outputId="1500ae6c-c1cb-4441-ee2a-4e1cd9f56994" colab={"base_uri": 
"https://localhost:8080/", "height": 34} root.tag # + id="2pwfip5TLjLG" colab_type="code" outputId="f9f15e67-7be0-4ef1-b4d5-c39a154d2dfb" colab={"base_uri": "https://localhost:8080/", "height": 34} root.attrib # + id="GRLSe5C_LpEx" colab_type="code" outputId="f29ee90c-4513-4bca-ec2f-76b262a2c0c8" colab={"base_uri": "https://localhost:8080/", "height": 123} for child in root: print(child.tag, child.attrib) # + id="hubSDdzELtvD" colab_type="code" outputId="bca193a7-879b-439b-d834-06afd23067c0" colab={"base_uri": "https://localhost:8080/", "height": 425} [elem.tag for elem in root.iter()] # + id="w1wGI6PIOVZM" colab_type="code" outputId="9bcae2a3-3b64-48b6-f872-6e5c35a91da2" colab={"base_uri": "https://localhost:8080/", "height": 54} ET.tostring(root, encoding='utf8').decode('utf8') # + id="0U9b6TJPK2CB" colab_type="code" colab={} origin_img1 = Image.open(str(DATA_DIR/'images/images/Abyssinian_1.jpg')) # + id="SX-ICy5uRC9B" colab_type="code" colab={} from PIL import Image, ImageDraw # + id="C-fmWBCIUlWK" colab_type="code" colab={} # %matplotlib inline # + id="Bs9rfZsuUroq" colab_type="code" colab={} origin_img # + id="MmoSI8LRUbRo" colab_type="code" outputId="b40fd697-9745-489a-8f31-7c596d62ff82" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(plt.imread(str(DATA_DIR/'images/images/Abyssinian_1.jpg'))) plt.plot([xmin, xmax, xmax, xmin, xmin], [ymin, ymin, ymax, ymax, ymin]) # + id="amtE1k_vRqoQ" colab_type="code" colab={} def save_cropped_img(path, annotation): tree = ET.parse(annotation) xmin = int(tree.getroot().findall('.//xmin')[0].text) xmax = int(tree.getroot().findall('.//xmax')[0].text) ymin = int(tree.getroot().findall('.//ymin')[0].text) ymax = int(tree.getroot().findall('.//ymax')[0].text) # + id="hucaBhxESI0X" colab_type="code" colab={} xmin = int(tree.getroot().findall('.//xmin')[0].text) xmax = int(tree.getroot().findall('.//xmax')[0].text) ymin = int(tree.getroot().findall('.//ymin')[0].text) ymax = int(tree.getroot().findall('.//ymax')[0].text) filename = tree.getroot().findall('.//filename')[0].text width = tree.getroot().findall('.//width')[0].text height= tree.getroot().findall('.//height')[0].text depth= tree.getroot().findall('.//depth')[0].text name= tree.getroot().findall('.//name')[0].text # + id="ha_PWzgjMgY4" colab_type="code" outputId="20e6d3a0-2caa-456e-a689-36c3a5cc98b8" colab={"base_uri": "https://localhost:8080/", "height": 34} filename, xmin, xmax,ymin, ymax, width, height, depth, name # + id="6S8j6TUCSSpv" colab_type="code" outputId="640a5e61-6992-47d6-e588-8a972fd7a60e" colab={"base_uri": "https://localhost:8080/", "height": 34} # df['V'] = df['V'].str.split('-').str[0] # + id="dc3gbLs6UB9K" colab_type="code" colab={} df_cols = ['filename', 'xmin', 'xmax','ymin', 'ymax', 'width', 'height', 'depth', 'name'] # + id="qtnUgmzyVkAp" colab_type="code" colab={} annotations = sorted(os.listdir(DATA_DIR/'annotations/annotations/xmls')) # + id="Cp_3pytJVsUN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bff9ec46-638d-4314-bc40-d583344c1ff0" len(annotations) # + id="UYc7mJ5PYJTx" colab_type="code" colab={} df = pd.DataFrame(columns = df_cols) # + id="RHGwmssQYYBj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="fcb3fbc8-3644-4509-d56b-445b5ad27b0c" df # + id="gAz2E-HWWW_8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="6319fd40-8af6-4601-80dc-f20440346485" for xml_file in progress_bar(annotations): tree = 
ET.parse(str(DATA_DIR/'annotations/annotations/xmls/') + '/' + xml_file) for node in xroot: filename = tree.getroot().findall('.//filename')[0].text xmin = int(tree.getroot().findall('.//xmin')[0].text) xmax = int(tree.getroot().findall('.//xmax')[0].text) ymin = int(tree.getroot().findall('.//ymin')[0].text) ymax = int(tree.getroot().findall('.//ymax')[0].text) width = int(tree.getroot().findall('.//width')[0].text) height= int(tree.getroot().findall('.//height')[0].text) depth= int(tree.getroot().findall('.//depth')[0].text) name= tree.getroot().findall('.//name')[0].text df = df.append(pd.Series([filename, xmin, xmax,ymin, ymax, width, height, depth, name],index = df_cols), ignore_index = True) # + id="Txrc_dCFht2z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 137} outputId="4526e209-7268-4395-84b4-30f6f6015dcf" df.head(3) # + id="tZvVrgrwkXGp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 137} outputId="b7e195a3-5630-469e-a055-62f6fab845c2" df.tail(3) # + id="9RwvMej-kf0U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 137} outputId="15b720d5-ec15-4f94-fda1-ab34e2a466ff" df.sample(3) # + id="ph_ua5GDiwIU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="8043ae22-1f24-49e8-f715-98d0c9945eb8" df.describe() # + id="MZz6goOJik1R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="2b8104ea-ea54-4424-93b3-cdba50ac62f8" df.info() # + id="TtcCuuVpnBWM" colab_type="code" colab={} df["x0"] = df["xmin"] / df["width"] df["x1"] = df["xmax"] / df["width"] df["y0"] = df["ymin"] / df["height"] df["y1"] = df["ymax"] / df["height"] # + id="UIDBtYO9nsej" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 137} outputId="d7d4062c-3f81-42bc-b1b4-201d1c0a79e7" df.head(3) # + id="8s3aqwC6oTtM" colab_type="code" colab={} df["object_width"] = df["x1"] - df["x0"] df["object_height"] = df["y1"] - df["y0"] # + id="qAFW0JoToYXm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 157} outputId="99aeb211-7771-46b6-bbaf-e92ff087fa82" df.head(3) # + id="lAF-vENNpRdv" colab_type="code" colab={} if SETUP: for filename in glob(str(DATA_DIR / "**/*.jpg"), recursive=True): try: shutil.move(filename, DATA_DIR) except: pass shutil.rmtree(DATA_DIR / "images") # + id="vdl1hPvlo3cT" colab_type="code" colab={} df["file_path"] = f"{DATA_DIR}/" + df["filename"] # + id="Cmn0PWLwp81L" colab_type="code" colab={} df = df[df["file_path"].apply(lambda x: Path(x).is_file())] # + id="8htlksfwnt1J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 157} outputId="f4e4fff1-966a-4694-e7e6-25ec34351b0f" df.head(3) # + id="YyEhAw_osQWd" colab_type="code" colab={} df['breed'] = df["filename"].apply(lambda x: x.rsplit('_', 1)[0]) # + id="3bSQngmsud5A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="4e4ed65e-32f6-43ff-b227-78717195d7f0" df.breed.unique() # + id="F4J5s1Fgupk-" colab_type="code" colab={} from sklearn.preprocessing import LabelEncoder # + id="w1kfrtAAwFn3" colab_type="code" colab={} le = LabelEncoder() # + id="alzPJS5gxXsf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a9fd9515-6986-4c30-bcd3-80ce039c9ec5" le.fit(df['breed']) # + id="uZ4MZq7VxfmO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2903db6c-8b5f-4460-c0df-712e6b613ed4" joblib.dump(le, TEMP_DIR / "le.pickle") # + 
id="tdCPOedRxr6r" colab_type="code" colab={} le = joblib.load( TEMP_DIR / "le.pickle") # + id="hY0xe674xfp2" colab_type="code" colab={} df['breed_encoded'] = le.transform(df['breed']) # + id="s6oJQaLWx9YU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 157} outputId="a16e6351-a3ac-4181-e7ad-9bb3c7609e79" df.sample(3) # + id="koQufY1ax9bZ" colab_type="code" colab={} le_name = LabelEncoder() # + colab_type="code" outputId="2c7247ad-4068-4b95-80fa-335b870b63cf" id="AlAo8gQuyROX" colab={"base_uri": "https://localhost:8080/", "height": 34} le_name.fit(df['name']) # + colab_type="code" outputId="bdab4f46-b9e7-476d-f36e-3018453dc1b3" id="Vfxv5stQyROj" colab={"base_uri": "https://localhost:8080/", "height": 34} joblib.dump(le_name, TEMP_DIR / "le_name.pickle") # + colab_type="code" id="67UT5mepyROo" colab={} le_name = joblib.load( TEMP_DIR / "le_name.pickle") # + colab_type="code" id="T5xYTSuqyROr" colab={} df['name_encoded'] = le_name.transform(df['name']) # + colab_type="code" outputId="afe9a607-b43d-451e-f636-91ce107b62d7" id="8Z5wwexMyROu" colab={"base_uri": "https://localhost:8080/", "height": 157} df.sample(3) # + id="oRRY4x4jkmo2" colab_type="code" colab={} df.to_csv(TEMP_DIR/'pets_df.csv', index=False) # + id="I6nSSjXdkxNx" colab_type="code" colab={} df = pd.read_csv(TEMP_DIR/'pets_df.csv') # + [markdown] id="f9nBGsFd1TSO" colab_type="text" # ## Preparing the dataset # # + id="YF9oPP4U1Uld" colab_type="code" colab={} BATCH_SIZE = 32 SHUFFLE_SIZE = 1024 IMG_DIMS = (299, 299, 3) N_IMAGES = df.shape[0] # + id="ZwBaWv-v1X1d" colab_type="code" colab={} train_data, validation_data, test_data = split_df(df, 0.2, "breed_encoded") # + id="a6Eh7fay1o1q" colab_type="code" colab={} train_set = ( tf.data.Dataset.from_tensor_slices( ( train_data["file_path"], tuple( [ train_data["breed_encoded"].values, train_data[["x0", "y0", "object_width", "object_height"]].values, ] ), ) ) .map(ImageParser()) .map(ImageResizer(IMG_DIMS, "stretch")) .map(ImageAugmentor(level=3, rotate=False)) .repeat() .shuffle(SHUFFLE_SIZE) .batch(BATCH_SIZE) .prefetch(tf.data.experimental.AUTOTUNE) ) # + id="FVlR0yIR1o5Q" colab_type="code" colab={} validation_set = ( tf.data.Dataset.from_tensor_slices( ( validation_data["file_path"], tuple( [ validation_data["breed_encoded"].values, validation_data[ ["x0", "y0", "object_width", "object_height"] ].values, ] ), ) ) .map(ImageParser()) .map(ImageResizer(IMG_DIMS, "stretch")) .batch(BATCH_SIZE) .prefetch(tf.data.experimental.AUTOTUNE) ) # + id="MstUqfJZ1o8J" colab_type="code" colab={} test_set = ( tf.data.Dataset.from_tensor_slices( ( test_data["file_path"], tuple( [ test_data["breed_encoded"].values, test_data[["x0", "y0", "object_width", "object_height"]].values, ] ), ) ) .map(ImageParser()) .map(ImageResizer(IMG_DIMS, "stretch")) .batch(BATCH_SIZE) .prefetch(tf.data.experimental.AUTOTUNE) ) # + id="KLe0ML932AXc" colab_type="code" colab={} def make_rectangle(x0, y0, width, height, img_dims, color, linewidth=2): return Rectangle( (x0 * img_dims[1], y0 * img_dims[0]), width * img_dims[1], height * img_dims[0], linewidth=linewidth, edgecolor=color, facecolor="none", ) # + id="Lq2sjmRs2Ag1" colab_type="code" colab={} def get_category_name(transformer, category) -> str: category_title = transformer.inverse_transform([category]) category_title = ' '.join([str(elem) for elem in category_title]) return category_title # + id="AMIIZk2L2Fgb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="84826b68-14a3-42d9-8461-666b6d77d1f7" 
for x, y in train_set.take(1): n_image = 0 categories, bounding_boxes = y category = categories[n_image].numpy() bounding_box = bounding_boxes[n_image].numpy() fig, ax = plt.subplots(1) ax.imshow(x[n_image].numpy()) plt.text( bounding_box[0] * IMG_DIMS[0], bounding_box[1] * IMG_DIMS[1], get_category_name(le, category), backgroundcolor="yellow", fontsize=12, ) y_rect = make_rectangle(*bounding_box, IMG_DIMS, "y") ax.add_patch(y_rect) plt.show() # + id="CV6QHeo05x4y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e89b5486-7d1e-4e3d-8fe4-b314fbdfbc67" n_categories = train_data["breed_encoded"].nunique() n_categories # + [markdown] id="sxdyax9J5-id" colab_type="text" # ## Creating the model # + id="DnexhzHr5_uo" colab_type="code" colab={} class GlobalConcatPooling2D(keras.layers.Layer): def __call__(self, layer: keras.layers.Layer) -> keras.layers.Layer: return keras.layers.concatenate( [ keras.layers.GlobalAvgPool2D()(layer), keras.layers.GlobalMaxPool2D()(layer), ] ) # + id="1RocpgBQ6EYH" colab_type="code" colab={} def make_model(): base = tf.keras.applications.Xception( input_shape=IMG_DIMS, include_top=False, weights="imagenet" ) x = GlobalConcatPooling2D()(base.output) x = keras.layers.Dropout(0.4)(x) category = keras.layers.Dense(n_categories, activation=keras.activations.softmax)(x) bounding_box = keras.layers.Dense(4, activation=keras.activations.sigmoid)(x) model = keras.Model(inputs=base.input, outputs=[category, bounding_box]) model.compile( loss=[ keras.losses.SparseCategoricalCrossentropy(), keras.losses.BinaryCrossentropy(), ], loss_weights=[0.5, 2.0], optimizer=keras.optimizers.Adam(3e-4), ) return model # + id="giVCHGW46Ebs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="7abf47c3-ec09-4408-d1d1-d8053ad3056e" model = make_model() # + id="uQ3-A-s86JRG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="cbd09c41-c218-401a-c3ac-275d587cdc6d" model.summary() # + id="JwdIPB4d6NVt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 250} outputId="be1a9b14-58df-4af0-9bf6-56c772e08577" model.fit( x=train_set, validation_data=validation_set, steps_per_epoch=math.ceil(train_data.shape[0] / BATCH_SIZE), epochs=5, callbacks=[ keras.callbacks.ModelCheckpoint( str(TEMP_DIR / "model1.h5"), save_best_only=True, save_weights_only=True ), keras.callbacks.ReduceLROnPlateau(factor=0.3, patience=1), keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True), ], ) # + id="P08AZWZ46S1w" colab_type="code" colab={} def predict(model, dataset, n_image=0): for x, y in dataset.take(1): fig, ax = plt.subplots(1) ax.imshow(x[n_image].numpy()) p_categories, p_bounding_boxes = model.predict(x) p_category = p_categories[n_image].argmax() p_bounding_box = p_bounding_boxes[n_image] plt.text( (p_bounding_box[0] + p_bounding_box[2]) * IMG_DIMS[0] - 25, p_bounding_box[1] * IMG_DIMS[1], get_category_name(le, p_category), color="white", backgroundcolor="blue", fontsize=16, ) p_rect = make_rectangle(*p_bounding_box, IMG_DIMS, "b", 3) ax.add_patch(p_rect) y_categories, y_bounding_boxes = y y_category = y_categories[n_image].numpy() y_bounding_box = y_bounding_boxes[n_image].numpy() plt.text( y_bounding_box[0] * IMG_DIMS[0] + 5, y_bounding_box[1] * IMG_DIMS[1], f"true:{get_category_name(le, y_category)}", backgroundcolor="red", fontsize=16, ) y_rect = make_rectangle(*y_bounding_box, IMG_DIMS, "r") ax.add_patch(y_rect) plt.show() # + id="jRVWkf0p7rnu" colab_type="code" 
colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="f1729cf1-91af-42f3-bc66-6247253fd4e4" predict(model, validation_set, 1) # + id="dUQUCfyl8Ijj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="098bcb60-e729-4a04-a0b0-6943b10ed903" predict(model, validation_set, 10) # + id="39RGUDQ99zlN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 275} outputId="0cc88e70-0b50-49f0-e53d-0ae6b8be20e8" predict(model, test_set, 10) # + id="DzJWIRLy85DN" colab_type="code" colab={} def make_report(model, dataset): print( classification_report( [category.numpy() for _, (category, _) in dataset.unbatch()], model.predict(dataset)[0].argmax(axis=1), target_names=list(category_map.values()), ) ) # + id="87il2cHe9g5c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292} outputId="c9aa1678-cd6e-4f70-c07d-cd815452c0e0" make_report(model, validation_set) # + id="jtg6uZtj9g9c" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ## Example of multiple images reconstruction # - # + # Third party import import pysap from pysap.data import get_sample_data from modopt.math.metrics import ssim from modopt.opt.proximity import GroupLASSO import numpy as np import matplotlib.pyplot as plt from etomo.operators import Radon2D, WaveletPywt, HOTV from etomo.reconstructors.forwardradon import RadonReconstructor # - # Loading input data image = get_sample_data('2d-pmri') n_channels, img_size, _ = image.shape # Create radon operator and simulate data theta = np.arange(0., 180., 3.) radon_op = Radon2D(angles=theta, img_size=img_size, gpu=True, n_channels=n_channels) data = radon_op.op(image) # Create operators TV = HOTV(img_shape=image[0].shape, order=1, n_channels=n_channels) wavelet = WaveletPywt(wavelet_name='sym8', nb_scale=3, n_channels=n_channels) linear_op = wavelet regularizer_op = GroupLASSO(weights=1e-7) reconstructor = RadonReconstructor( data_op=radon_op, linear_op=linear_op, regularizer_op=regularizer_op, gradient_formulation='synthesis', ) # Run reconstruction x_final, cost, *_ = reconstructor.reconstruct( data=data, optimization_alg='pogm', num_iterations=200, cost_op_kwargs={'cost_interval': 5} ) # + # Results plt.plot(cost) plt.yscale('log') plt.show() image_ref = pysap.Image(data=np.sqrt(np.sum(image.data**2, axis=0))) image_rec = pysap.Image(data=np.sqrt(np.sum(np.abs(x_final)**2, axis=0))) recon_ssim = ssim(image_rec, image_ref) print(f'The Reconstruction SSIM is: {recon_ssim: 2f}') #print('The Reconstruction SSIM is: {}'.format(recon_ssim)) image_rec.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="ksuAn-q-hBJA" # # Cataract detection # + [markdown] id="rH_FRQY7hBJJ" # In this notebook I have attempted to detect cataracts in an human eye. 
# + [markdown] id="0i2Zt9GThBJK" # ## Necessary libraries # + id="pxtqIhyahBJL" import numpy as np import cv2 import os import pandas as pd from random import sample import seaborn as sns import matplotlib.pyplot as plt from scikitplot.metrics import plot_confusion_matrix as plt_con_mat from keras.utils.np_utils import to_categorical from sklearn.model_selection import train_test_split from tensorflow.keras.applications.vgg19 import VGG19 from tensorflow.keras.models import Model, Sequential from tensorflow.keras.layers import Conv2D, Dense, Dropout, MaxPooling2D, Flatten from keras.utils import plot_model # + [markdown] id="ODhAuzZahBJL" # ## Loading the data # + papermill={"duration": 0.211849, "end_time": "2020-11-27T09:59:18.425572", "exception": false, "start_time": "2020-11-27T09:59:18.213723", "status": "completed"} tags=[] id="oWu5wspwhBJM" outputId="7049824f-00c7-47ca-9ef3-747366d86003" path = "../input/ocular-disease-recognition-odir5k" df = pd.read_csv(os.path.join(path, "full_df.csv")) df.head() # + papermill={"duration": 0.136531, "end_time": "2020-11-27T09:59:18.905712", "exception": false, "start_time": "2020-11-27T09:59:18.769181", "status": "completed"} tags=[] id="3MgJxh5jhBJN" outputId="18a6c699-377f-40d1-88d2-3aab7ced1fdc" file_names = [] labels = [] for text, label, file_name in zip(df["Left-Diagnostic Keywords"], df["C"], df["Left-Fundus"]): if(("cataract" in text) and (label == 1)): file_names.append(file_name) labels.append(1) elif(("normal fundus" in text) and (label == 0)): file_names.append(file_name) labels.append(0) for text, label, file_name in zip(df["Right-Diagnostic Keywords"], df["C"], df["Right-Fundus"]): if(("cataract" in text) and (label == 1)): file_names.append(file_name) labels.append(1) elif(("normal fundus" in text) and (label == 0)): file_names.append(file_name) labels.append(0) print(len(file_names), len(labels)) # + id="p1ATmy4QhBJO" outputId="e25ccc61-04f9-488e-c1d0-2fc93318713c" plt.bar([0,1], [len([i for i in labels if i == 1]), len([i for i in labels if i == 0])], color = ['b', 'g']) plt.xticks([0, 1], ['Cataract', 'Normal']) plt.show() # + [markdown] id="lkoz_tQohBJO" # ## Extracting the data into train and test sets. 
# + id="zmUeBnB6hBJP" ROW = 224 COL = 224 # + id="OPzu3OUghBJP" outputId="d439eb94-c100-4382-db5d-c43b05270191" image_data = [] for idx, image_name in enumerate(file_names): img = cv2.imread(os.path.join(path,"preprocessed_images",image_name)) try: img = cv2.resize(img, (ROW, COL)) image_data.append(img) except: del labels[idx] image_data = np.array(image_data) print(image_data.shape) # + papermill={"duration": 6.18404, "end_time": "2020-11-27T09:59:26.792667", "exception": false, "start_time": "2020-11-27T09:59:20.608627", "status": "completed"} tags=[] id="gbKC5PR4hBJP" outputId="2ea22e64-f433-4bb0-d184-33dc19cd4010" temp = [] for idx, label in enumerate(labels): if label == 0: temp.append(idx) temp = sample(temp, len([label for label in labels if label == 1])) X_data = [] y_data = [] for idx in temp: X_data.append(image_data[idx]) y_data.append(labels[idx]) temp = [] for idx, label in enumerate(labels): if label == 1: temp.append(idx) for idx in temp: X_data.append(image_data[idx]) y_data.append(labels[idx]) X_data = np.array(X_data) y_data = np.array(y_data) y_data = np.expand_dims(y_data, axis = -1) y_data = to_categorical(y_data) print(X_data.shape, y_data.shape) # + id="-1d86tTuhBJQ" outputId="76736a8c-a8e0-4817-9f2d-768abba07e56" X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.2, shuffle = True, random_state = 1) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) # + [markdown] id="S0h_wV6chBJR" # ## CNN model using VGG19 # + [markdown] id="Wlrp1UUbhBJS" # ### Transfer learning # + papermill={"duration": 5.914392, "end_time": "2020-11-27T09:59:44.973888", "exception": false, "start_time": "2020-11-27T09:59:39.059496", "status": "completed"} tags=[] id="CgMSmiWIhBJS" outputId="f1c2c3e9-9b13-4412-ee10-dbf6425abe27" vgg = VGG19(weights = "imagenet", include_top = False, input_shape=(ROW, COL, 3)) for layer in vgg.layers: layer.trainable = False # + papermill={"duration": 0.318562, "end_time": "2020-11-27T09:59:46.085091", "exception": false, "start_time": "2020-11-27T09:59:45.766529", "status": "completed"} tags=[] id="Alt_fBMJhBJT" outputId="c318da4b-b232-4eff-ee59-c355792ce610" model = Sequential() model.add(vgg) model.add(Flatten()) model.add(Dense(64, activation = 'relu')) model.add(Dense(2,activation = "softmax")) model.summary() # + papermill={"duration": 0.247418, "end_time": "2020-11-27T09:59:46.547409", "exception": false, "start_time": "2020-11-27T09:59:46.299991", "status": "completed"} tags=[] id="NlsEFCSWhBJT" outputId="e3c63d9e-279f-402d-b2c7-d88c13be5bbe" plot_model(model, show_shapes=True, show_layer_names=True) # + [markdown] id="GipcxmjAhBJT" # ## Training the CNN model # + papermill={"duration": 0.236221, "end_time": "2020-11-27T09:59:46.982157", "exception": false, "start_time": "2020-11-27T09:59:46.745936", "status": "completed"} tags=[] id="oCJFEQsQhBJU" outputId="10c494b8-4728-4319-cf4e-1f258a93e708" model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics=['accuracy', 'Precision', 'Recall']) history = model.fit(X_train, y_train, validation_data = (X_test, y_test), epochs = 15, batch_size = 64) # + [markdown] id="b9XiSW97hBJU" # ## Model training performance # + papermill={"duration": 0.316185, "end_time": "2020-11-27T10:00:43.086059", "exception": false, "start_time": "2020-11-27T10:00:42.769874", "status": "completed"} tags=[] id="8SwRKlAthBJU" outputId="974735b6-d26c-4101-e27d-5e3bb32769b2" sns.set() fig = plt.figure(0, (12, 4)) ax = plt.subplot(1, 2, 1) sns.lineplot(history.epoch, 
history.history['accuracy'], label = 'train') sns.lineplot(history.epoch, history.history['val_accuracy'], label = 'validation') plt.title('Accuracy') plt.tight_layout() ax = plt.subplot(1, 2, 2) sns.lineplot(history.epoch, history.history['loss'], label = 'train') sns.lineplot(history.epoch, history.history['val_loss'], label = 'validation') plt.title('Loss') plt.tight_layout() #plt.savefig('epoch_history.png') plt.show() # + id="NTPHfa2OhBJV" outputId="cbb4da9e-cf4a-47e9-fba5-3031fa426940" preds = model.predict_classes(X_test) y_true = np.argmax(y_test, axis=1) plt_con_mat(y_true, preds, figsize=(14,14)) plt.show() # + id="CAonT78NhBJV" import tensorflow as tf tf.keras.models.save_model(model,'model.h5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # PyConversations: A Reddit-based Example # # The following is a tutorial notebook that demonstrates how to use `pyconversations` with Reddit data. # # The first step will be to obtain some data. In order to do so, you will need to configure a personal application via your Reddit account's [App Preferences](https://www.reddit.com/prefs/apps). You'll want to set up a personal usage script. See the _Getting Access_ portion of [this blog](https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c) for additional instructions/visualization. # + from pprint import pprint from pyconversations.convo import Conversation from pyconversations.message import RedditPost # - # ## Data Sample # # Before demonstating how we can use `pyconversations` for pre-processing and analysis, we need to obtain a data sample. # To do so, we'll be using a package called [praw](https://github.com/praw-dev/praw). import praw # Private information CLIENT_ID = '' # this should be the 'personal use script' on your App Preferences SECRET_TOKEN = '' # this should be the 'secret' on your App Preferences USER_AGENT = '' # a custom name for your application for the User-Agent parameter in the request headers; gives a brief app description to Reddit # + # configure a read-only praw.Reddit instance reddit = praw.Reddit( client_id=CLIENT_ID, client_secret=SECRET_TOKEN, user_agent=USER_AGENT, ) reddit, reddit.read_only # + # obtain a sub-reddit of interest sub_name = 'Drexel' subreddit = reddit.subreddit(sub_name) subreddit.title # PRAW is lazy so won't request till we ask for an attribute pprint(vars(subreddit)) # + # get the top submission via 'hot' top_submission = list(subreddit.hot(limit=1))[0] top_submission.title pprint(vars(top_submission)) # + # get all the comments on this submission all_comments = top_submission.comments.list() print(len(all_comments)) all_comments[0].score pprint(vars(all_comments[0])) # - # ## Integration with `pyconversations` # # All that's left to do is plug our data directly into `pyconversations`! 
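# A quick aside on the ID convention used when wiring `reply_to` below (the value
# here is illustrative, not taken from the data): PRAW exposes parents as
# "fullnames" carrying a type prefix -- "t3_" for submissions, "t1_" for comments --
# while the `uid`s given to RedditPost are bare ids, so the prefix has to be
# stripped for the reply links to resolve inside the Conversation.
example_parent_id = 't3_abc123'  # hypothetical fullname of a parent submission
example_parent_id.replace('t1_', '').replace('t3_', '')  # -> 'abc123'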
# construct a conversation container conv = Conversation() # parse our root submission cons_params = { 'lang_detect': True, # whether to enable the language detection module on the text 'uid': top_submission.id, # the unique identifier of the post 'author': top_submission.author.name if top_submission.author is not None else None, # name of user 'created_at': RedditPost.parse_datestr(top_submission.created), # creation timestamp 'text': (top_submission.title + '\n\n' + top_submission.selftext).strip() # text of post } top_post = RedditPost(**cons_params) top_post # add our post to the conversation conv.add_post(top_post) # which we can easily access via the .posts property, to verify inclusion conv.posts # + # iterate through comments and add them to the conversation for com in all_comments: conv.add_post(RedditPost(**{ 'lang_detect': True, # whethher to enable the language detection module on the txt 'uid': com.id, # the unique identifier of the post 'author': com.author.name if com.author is not None else None, # name of user 'created_at': RedditPost.parse_datestr(com.created), # creation timestamp 'text': com.body.strip(), # text of post 'reply_to': {com.parent_id.replace('t1_', '').replace('t3_', '')} # set of IDs replied to })) len(conv.posts) # - # ### Sub-Conversation Segmentation # + # seperate disjoint conversations (there is likely just the one with a full query from the site...) segs = conv.segment() len(segs) # - # ### (Detected) Language Distribution # + from collections import Counter lang_dist = Counter([post.lang for post in conv.posts.values()]) lang_dist # - # ### Conversation-Level Redaction # # Using `Conversation.redact()` produces a thread that is cleaned of user-specific information. # This is conversationally-scoped, so all usernames are first enumerated (either from author names or from in-text reference for Reddit and Twitter) and then user mentions (and author names) are replaced by `USER{\d}` where `{\d}` is the integer assigned to that username during the enumeration stage. # # Here's a demonstration: # + # pre-redaction names = {post.author for post in conv.posts.values()} len(names), names # - # redaction step conv.redact() # post-redaction names = {post.author for post in conv.posts.values()} len(names), names # ### Saving and Loading from the universal format import json # saving a conversation to disk # alternatively: save as a JSONLine file, where each line is a conversation! json.dump(conv.to_json(), open('reddit_conv.json', 'w+')) # reloading directly from the JSON conv_reloaded = Conversation.from_json(json.load(open('reddit_conv.json'))) len(conv_reloaded.posts) # ### Feature Extraction # # The remainder of this notebook exhibits some basic vectorization of features from conversations, posts, and users within this conversation. # For more information, see the documentation for PyConversations. 
from pyconversations.feature_extraction import ConversationVectorizer from pyconversations.feature_extraction import PostVectorizer from pyconversations.feature_extraction import UserVectorizer # + convs = True # convs = False users = True # users = False posts = True # posts = False # normalization = None # normalization = 'minmax' # normalization = 'mean' normalization = 'standard' # cv = ConversationVectorizer(normalization=normalization, agg_user_fts=users, agg_post_fts=posts, include_source_user=True) pv = PostVectorizer(normalization=normalization, include_conversation=convs, include_user=users) # uv = UserVectorizer(normalization=normalization, agg_post_fts=posts) # - # cv.fit(conv=conv_reloaded) pv.fit(conv=conv_reloaded) # uv.fit(conv=conv_reloaded) # + # cxs = cv.transform(conv=conv_reloaded) pxs = pv.transform(conv=conv_reloaded) # uxs = uv.transform(conv=conv_reloaded) # pprint(cxs.shape) pprint(pxs.shape) # pprint(uxs.shape) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #Import MNIST dataset from Sklearn from sklearn.datasets import load_digits from sklearn.linear_model import SGDClassifier import matplotlib.pyplot as plt # %matplotlib inline #Load digits and check train (data) and test (target) data available mnist_datasets = load_digits() mnist_datasets.keys() #Copy data to training and target to test variable X_data, y_data = mnist_datasets['data'], mnist_datasets['target'] #Check X training data shape X_data.shape #Check y test data shape y_data.shape #Let's print first digit (if you want to print other digits, just array number) for our prediction first_digit = X_data[1] first_digit_image = first_digit.reshape(8, 8) plt.imshow(first_digit_image) plt.axis("off") plt.show() #Split training and test data X_train_data, X_test_data, y_train_data, y_test_data = X_data[:1600], X_data[1600:], y_data[:1600], y_data[1600:] #Check X training data shape X_train_data.shape #Check X test data shape X_test_data.shape #Check y training data shape y_train_data.shape #Check y test data shape y_test_data.shape #The number 1 is a binay cllassifier,so let's change true for all 1 and false for all not 1 y_predict_digit = (y_train_data == 1) # Use Stochastic Gradient Descent(SGD) classifier for fit and prediction sgd_classifier = SGDClassifier(random_state=42) sgd_classifier.fit(X_train_data, y_predict_digit) #Predict using Stochastic Gradient Descent(SGD) classifier sgd_classifier.predict([first_digit]) #Let's print print and again to see whether it shows 1 plt.imshow(first_digit_image) plt.axis("off") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Create a choropleth map with Folium # 1. Import libraries # 2. Wrangle data # 3. Clean data # 4. Create choropleth map # # 1. Import libraries # Import libraries import pandas as pd import numpy as np import seaborn as sns import matplotlib import os import folium import json # + # This command propts matplotlib visuals to appear in the notebook # %matplotlib inline # - # Import Berlin .json file country_geo = r'/Users/OldBobJulia/Desktop/CF/Course/6. 
Advanced Analytics and Dashboard Design/Berlin Airbnb Analysis/02 Data/Original data/berlin_bezirke.json' # Look at it f = open(r'/Users/OldBobJulia/Desktop/CF/Course/6. Advanced Analytics and Dashboard Design/Berlin Airbnb Analysis/02 Data/Original data/berlin_bezirke.json',) data = json.load(f) for i in data['features']: print(i) # File has a geometry column AND a LABEL. # Import Airbnb data df = pd.read_csv(r'/Users/OldBobJulia/Desktop/CF/Course/6. Advanced Analytics and Dashboard Design/Berlin Airbnb Analysis/02 Data/Prepared data/listing_derivedcolumns.csv') df.head() # Drop Unnamed: 0 df = df.drop(columns = ['Unnamed: 0']) df.head() df.shape df['neighbourhood_group'].value_counts(dropna=False) # # 2. Wrangle data # Select only necessary columns for choropleth map columns = ["neighbourhood_group", "latitude", "longitude", "room_type", "price"] # Create subset with only these columns df_2 = df[columns] df_2.head() # Make histogram of price to see what price categories could work. # Make subset of price excluding extreme prices to that end price_cat_check = df_2[df_2['price'] < 4000] price_cat_check.head() price_cat_check.shape sns.distplot(price_cat_check['price'], bins = 5) # Wrangle neighbourhood_group names to fit json data df_2['neighbourhood_group'].value_counts(dropna=False) df_2['neighbourhood_group'] = df_2['neighbourhood_group'].replace({'Charlottenburg-Wilm.': 'Charlottenburg-Wilmersdorf', 'Tempelhof - Schöneberg': 'Tempelhof-Schöneberg', 'Treptow - Köpenick': 'Treptow-Köpenick', 'Steglitz - Zehlendorf': 'Steglitz-Zehlendorf', 'Marzahn - Hellersdorf': 'Marzahn-Hellersdorf'}) df_2.head() df_2['neighbourhood_group'].value_counts(dropna=False) # # 3. Check consistency # Check for missings df_2.isnull().sum() # There are no missing values # Check for duplicates dups = df_2.duplicated() dups.shape # There are no dups # Check outliers df_2.describe() # 3 observations with prices over 4000 were imputed with the mean previously # Check how many rows with price under 15 super_low_price = df_2[df_2['price'] < 15] super_low_price.shape super_low_price.head() # I find it surprisingly many, still these rows are less than 10% of the total data set AND they remain within the reasonable price, so I leave them. # # 4. Plot choropleth map data_to_plot = df_2[['neighbourhood_group','price']] data_to_plot.head() # + # Setup a folium map at a high-level zoom map = folium.Map(location=[52.520, 13.404], width=750, height=500) folium.Choropleth( geo_data = country_geo, data = data_to_plot, columns = ['neighbourhood_group', 'price'], key_on = 'feature.properties.name', fill_color = 'PuBuGn', fill_opacity=0.6, line_opacity=0.1, legend_name = "price").add_to(map) folium.LayerControl().add_to(map) map # - # Save it map.save("index.html") # Export as png just in case map.save("map2.png") # The map answers the question which neighbourhoods are the most popular ones, or where most money can be made through Airbnb rentals. The answer is Friedrichshain-Kreuzberg, followed by Tempelhof-Schöneberg and Charlottenburg-Wilmersdorf. # # New research question: # - Does the number of commercial hosts influence the popularity of a neighbourhood? 
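# A possible refinement (a sketch, assuming `df_2` and `country_geo` from above):
# `data_to_plot` holds one row per listing, so many rows share the same
# neighbourhood key. Aggregating to a single value per district first -- here the
# median price, which is less sensitive to luxury outliers than the mean -- makes
# it explicit what the choropleth shading represents.
# +
agg_to_plot = df_2.groupby('neighbourhood_group', as_index=False)['price'].median()

map_agg = folium.Map(location=[52.520, 13.404], width=750, height=500)
folium.Choropleth(
    geo_data=country_geo,
    data=agg_to_plot,
    columns=['neighbourhood_group', 'price'],
    key_on='feature.properties.name',
    fill_color='PuBuGn',
    fill_opacity=0.6,
    line_opacity=0.1,
    legend_name='median price').add_to(map_agg)
map_agg
# -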
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # ## Latex # $ \sum $ e1=-(14/30*np.log2(14/30)+16/30*np.log2(16/30)) e1 e2=-(15/30*np.log2(15/30)+15/30*np.log2(15/30)) e2 e3=-(30/30*np.log2(30/30)) e3 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="4V30dDc6wn07" outputId="1535b52b-23f8-4392-b28b-31991acb6025" # !nvidia-smi # + colab={"base_uri": "https://localhost:8080/"} id="il-IQ1A4UJgP" outputId="4272ac1c-f7ec-4b98-eeea-264522bcc9d7" # !pip install transformers'==4.2.2' # + id="_fFHWKDEoIuk" import torch from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, TextDataset, LineByLineTextDataset, DataCollatorForLanguageModeling, TrainingArguments, Trainer, get_cosine_schedule_with_warmup ) import matplotlib.pyplot as plt # %matplotlib inline import json import numpy as np import seaborn as sns import json import nltk # + colab={"base_uri": "https://localhost:8080/"} id="hh6fjAqjpkrq" outputId="8728dbd0-8248-48c0-f261-7a6108bb2cbb" # !wget https://s3.amazonaws.com/datasets.huggingface.co/personachat/personachat_self_original.json # + id="xg0HHhFFRX6k" bos_token = '' eos_token = '' # + id="YD4qBeqOp51m" with open('/content/personachat_self_original.json') as json_file: data = json.load(json_file) # + id="GJQpDSUejPMh" chats = [] for i in range(len(data['train'])): personality = " ".join(data['train'][i]['personality']) chat = data['train'][i]['utterances'][-1]['history'] final_text = bos_token + personality + '' for j in range(len(chat)): if j % 2 == 0: final_text += '' else: final_text += '' final_text += chat[j] chats.append(final_text + eos_token) # + id="jpccGWtrlglm" chats = " ".join(chats) # + id="MMr2ug4ylSa3" train_data_path = 'chats.txt' with open(train_data_path, 'w') as t: t.write(chats) # + [markdown] id="Y1S4cjsQnLxG" # # Tokenezation # # + colab={"base_uri": "https://localhost:8080/", "height": 98, "referenced_widgets": ["1e4769c47a174dbb80aa31b73d60d6df", "aee2f1e955284382a340e853f05ee468", "f86c9687a0554592996b7c44f77cd447", "", "796acd6beaa1474eab01d5e4850802b8", "", "86f50e6cdaff4eb885681ba1fa22b888", "e4379d416f1c4a6dab6edf74b02cb1c4", "aab1c2886e2f47c6ad3f1fb51487dc92", "437238a1e32340b5839efbe94a33b5e9", "7bd08e7111ed49488564ee8a8d160520", "7afee4e6d42f41238916e61f35321e1e", "a06b8e6dd8b14d87b1912165339f90db", "cebe23073434472f8b5b1db624b3305b", "", "", "691e1051e3ee4c1fadf9f7ae20c89f98", "", "", "1c9ea78649d54f19b167d494c8e4c8f2", "", "f69b4b074f254e88aa46de31ce174278"]} id="mkJPniCBUpqQ" outputId="049a8a17-88b3-4895-aa07-1621f6648602" tokenizer = GPT2Tokenizer.from_pretrained("gpt2", bos_token=bos_token, eos_token=eos_token, pad_token='<|pad|>') # + id="j8FItmRBWSxS" block_size = 1024 # + colab={"base_uri": "https://localhost:8080/"} id="jb4j8jmzvOc1" outputId="6cb86235-afc8-42f2-a60f-f37e5280b165" train_dataset = TextDataset(tokenizer=tokenizer, file_path='/content/chats.txt', block_size=block_size, overwrite_cache=True) # + colab={"base_uri": "https://localhost:8080/"} id="rjIvTs1DQwf8" outputId="fc48a97a-2da0-483d-d53e-1c7e060d7565" len(train_dataset) # + [markdown] id="_O_n3PzynV_Q" # # Training # + id="bf3jNlt6r-WK" colab={"base_uri": 
"https://localhost:8080/", "height": 81, "referenced_widgets": ["8121f206a4c44c3aa1459fd19e7dd3db", "a05b72b4e53843d1916092f9e709db24", "56ee63b4f30445d0a6a3543073232bf8", "ee199dfabc474c829d637be2677e7bef", "3eecd0020ba644dd9932826678f0f340", "fe59676a0ddf448f8caac7ef2137b790", "d9c1a76278834c3ba5eda31245f5013a", "18b9b5a8a00f4f2baca2a7c99d0bac5c", "506d35d3e50c40be831fb61a320048b7", "66d6a8a4d46a43169a663b925de097d6", "", "", "", "", "", "", "77e9ab36dbe9441ebb3e20cee0d10bf7", "", "", "", "", ""]} outputId="31ff7433-757e-4392-baf4-aa35dc3d856f" model = GPT2LMHeadModel.from_pretrained("gpt2") # + id="uHOrlUse4wm9" model.cuda() None # + colab={"base_uri": "https://localhost:8080/"} id="BZ84KrfeVTLJ" outputId="a3f0d307-6a81-4c92-a566-834874319427" model.resize_token_embeddings(len(tokenizer)) # + id="9Nou2PgYEKJr" learning_rate = 2e-5 batch_size = 1 num_epochs = 4 # + id="-zyz1nIHbsrd" data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) seed = 567 training_args = TrainingArguments( output_dir='chats_output', overwrite_output_dir=True, do_train=True, num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, prediction_loss_only=True, logging_steps=100, load_best_model_at_end=True, #save_steps=300, seed=seed, learning_rate=learning_rate ) # + id="76wLGGclb1cQ" trainer = Trainer( model=model, tokenizer=tokenizer, args=training_args, data_collator=data_collator, train_dataset=train_dataset ) train_dataloader = trainer.get_train_dataloader() num_train_steps = len(train_dataloader) * num_epochs trainer.create_optimizer_and_scheduler(num_train_steps) trainer.lr_scheduler = get_cosine_schedule_with_warmup( trainer.optimizer, num_warmup_steps=num_train_steps//10, num_training_steps=num_train_steps ) # + [markdown] id="bSdoqrdNDune" # Заводим! # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Pq5ZPdE1b8c9" outputId="0a1b003f-edda-45af-d9d8-a525b7685db1" trainer.train() # + [markdown] id="kgT-Z-wGFj8O" # # Save model # + colab={"base_uri": "https://localhost:8080/"} id="_gFW1bS5Fr3U" outputId="d1a658fc-3ca9-4828-d5ec-dec80c8ca5d0" output_dir = '/content/chats_output' model.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) # + [markdown] id="tYcS6UTnE6Gu" # # Text generation # # + id="s8_y37NeJc3u" def generate(prompt): model.eval() input_ids = tokenizer.encode(prompt, return_tensors='pt') input_ids = input_ids.to('cuda') eos_id = tokenizer.encode(eos_token)[0] sample_outputs = model.generate( input_ids, max_length= len(input_ids[0]) + 50, min_length=len(input_ids[0]) + 30, do_sample=True, top_k=50, top_p=0.92, temperature=0.95, num_beams=1, repition_penalty=3, eos_token_id=eos_id, num_return_sequences=1 ) for i, sample_output in enumerate(sample_outputs): sample_output = sample_output[len(input_ids[0]):] return tokenizer.decode(sample_output, skip_special_tokens=False) # + id="JZKW0SiNueZ5" # + id="yPuu-693vFmt" prompt = 'i am a musician and hope to make it big some day . i play the piano and guitar and sing . i also work as a custodian to help pay the bills . my favorite type of music to sing is folk music .' 
while True: prompt += '' prompt += input('Input: ') + '' generated = generate(prompt) if '' in generated: generated = generated[: generated.find('')] print(f'AI: {generated}') prompt += generated # + colab={"base_uri": "https://localhost:8080/"} id="DwF_pKBm88b0" outputId="8d11aa2d-3f1e-4afb-ad8d-cc5c50528739" from google.colab import drive drive.mount('/content/drive') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit ('newEnv') # metadata: # interpreter: # hash: a362b6a772d1e178ff192f32de2c1dd9f76d1954d2f029c57c350450b74e192a # name: python3 # --- import ipywidgets as widgets widgets.IntSlider() import ipywidgets as widgets widgets.Text(value='Hello World!', disabled=True) import ipywidgets as widgets widgets.Checkbox( value=False, description='< text>', disabled=False, indent=False ) # + from IPython.display import display import ipywidgets as widgets button = widgets.Button(description="Click Me!") caption = widgets.Label(value='Not Clicked') display(button, caption) def on_button_clicked(b): caption.value = 'Button Clicked -->' button.on_click(on_button_clicked) # - caption # + import threading from IPython.display import display import ipywidgets as widgets import time progress = widgets.FloatProgress(value=0.0, min=0.0, max=1.0) def work(progress): total = 100 for i in range(total): time.sleep(0.2) progress.value = float(i+1)/total thread = threading.Thread(target=work, args=(progress,)) display(progress) thread.start() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="qbmb46sKABc5" # !pip3 install pixellib --upgrade # + id="JFT-cXplE3fF" # !wget https://github.com/ayoolaolafenwa/PixelLib/releases/download/1.1/deeplabv3_xception_tf_dim_ordering_tf_kernels.h5 # + id="DqKntJOw2hsG" import pixellib from pixellib.tune_bg import alter_bg from matplotlib import pyplot as plt import numpy as np from PIL import Image from IPython.display import Image as img from pylab import rcParams # + id="RkX6xITp4xav" rcParams['figure.figsize'] = 10, 10 # + id="pLP_0UaHBEFI" change_bg = alter_bg() # + id="SA8lLOdNE5xd" dir(change_bg) # + id="uXcS5T3BBEIF" change_bg.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5") # + id="ZsdNdqPcFcxe" file_name = "image.jpg" bg_file_name = "background_img.jpg" # + [markdown] id="Ga5WlY2U6ygk" # # Original Images # + id="MZlrNJduF-Aq" plt.imshow(Image.open(file_name)) # + id="8KmogvOz6v8T" plt.imshow(Image.open(bg_file_name)) # + [markdown] id="Y8nGrcao63Yd" # # Blur Background # + id="fh9AojUeBL8s" change_bg.blur_bg(file_name, moderate = True , output_image_name="blur_img.jpg") # + id="AFNKj1GR2RVU" plt.imshow(Image.open("blur_img.jpg")) # + [markdown] id="5Eldf6vK67w7" # # Gray Background # + id="FyPNQ-zWBL_q" change_bg.gray_bg(file_name, output_image_name="gray_img.jpg") # + id="KnNe7lj35O8W" plt.imshow(Image.open("gray_img.jpg")) # + [markdown] id="PKye4pnw6_tC" # # Background Colour Change # + id="oYFdcRUfCor2" change_bg.color_bg(file_name, colors = (123, 123, 123), output_image_name="colored_bg.jpg") # + id="tjHCM4cFFFxc" plt.imshow(Image.open("colored_bg.jpg")) # + [markdown] id="D8PhGfde7Crd" # # Background Change # + id="amC623Ud5BS0" change_bg.change_bg_img(f_image_path = file_name, b_image_path = bg_file_name, 
output_image_name="new_img.jpg") # + id="OHvoo6U2523c" plt.imshow(Image.open("new_img.jpg")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # New Questions # # #### 1. What is the average purchase price of properties by “Level”? # • Level 0 -> Price between U$ 0 and U$ 321,950 # • Level 1 -> Price between U$ 321,950 and U$ 450,000 # • Level 2 -> Price between U$ 450,000 and U$ 645,000 # • Level 3 -> Above U$ 645,000 # #### 2. What is the average size of the living room of the properties by “Size”? # • Size 0 -> Size between 0 and 1427 sqft # • Size 1 -> Size between 1427 and 1910 sqft # • Size 2 -> Size between 1910 and 2550 sqft # • Size 3 -> Size over 2550 sqft # #### 3. Add the following information to the original dataset: # • Place ID: location # • OSM Type: Open Street Map type # • Country: Country Name # • Country Code: Country code # #### 4. Add the following filters to the Map: # • Minimum living room area size. # • Number minimum of bathrooms. # • Maximum Price Value. # • Maximum size of the basement area. # • Property Conditions Filter. # • Filter by Year of Construction. # #### 5. Add the following filters to the Dashboard: # • Filter by date available for purchase. # • Filter per year of renewal. # • Filter whether it has a water view or not. # + [markdown] heading_collapsed=true # ## Import LIbraries # + hidden=true import defs import time import numpy as np import pandas as pd import seaborn as sns import ipywidgets as widgets import plotly.express as px from ipywidgets import interact, interactive, fixed, interact_manual from matplotlib import gridspec from matplotlib import pyplot as plt from multiprocessing import Pool from geopy.geocoders import Nominatim from IPython.core.display import HTML # + hidden=true # Function for format all over the graphics def jupyter_settings(): # %matplotlib inline # %pylab inline plt.style.use( 'bmh' ) plt.rcParams['figure.figsize'] = [14, 7] plt.rcParams['font.size'] = 24 display( HTML( '') ) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) sns.set() jupyter_settings() # + [markdown] heading_collapsed=true # ## Loading data # + hidden=true data = pd.read_csv('datasets/kc_house_data.csv') data.head() # + [markdown] heading_collapsed=true # ## 1. What is the average purchase price of properties by “Level”? # • Level 0 -> Price between U$ 0 and U$ 321,950 # • Level 1 -> Price between U$ 321,950 and U$ 450,000 # • Level 2 -> Price between U$ 450,000 and U$ 645,000 # • Level 3 -> Above U$ 645,000 # # + hidden=true # Creating new column "level" and attribuiting values to new column whit .apply comand. data['level'] = data['price'].apply(lambda x: 'level_0' if (x >= 0) & (x <= 321950) else 'level-1' if (x > 321950) & (x <= 450000) else 'level_2' if (x > 450000) & (x <= 645000) else 'level_3') data[['price', 'level']].head() # + [markdown] heading_collapsed=true # ## 2. What is the average size of the living room of the properties by “Size”? # • Size 0 -> Size between 0 and 1427 sqft # • Size 1 -> Size between 1427 and 1910 sqft # • Size 2 -> Size between 1910 and 2550 sqft # • Size 3 -> Size over 2550 sqft # # + hidden=true # Creating new column "level" and attribuiting values to new column whit .apply comand. 
data['size'] = data['sqft_living'].apply(lambda x: 'size_0' if (x >= 0) & (x <= 1427) else 'size_1' if (x > 1427) & (x <= 1910) else 'size_2' if (x > 1910) & (x <= 2550) else 'size_3') data[['sqft_living', 'size']].head() # + [markdown] heading_collapsed=true # ## 3. Add the following information to the original dataset: # • Place ID: location # • OSM Type: Open Street Map type # • Country: Country Name # • Country Code: Country code # # + hidden=true #from geopy.geocoders import Nominatim geolocator = Nominatim( user_agent='geopyExercises') query = '47.5112, -122.257' response = geolocator.reverse(query) # + [markdown] hidden=true # ##### data[['lat', 'long']].head() # Len information about "lat" and long # ##### response - show us information about latitud and longitud attibuited at query # ##### response.raw - plot json information # ##### response.raw[ ] # ##### response.raw['place_id'] - plot key the json file # ##### response.raw['address']['house_number'] #plot number of house # + hidden=true # Handling .Json file whit Multi_thread process #import time #from multiprocessing import Pool # + hidden=true data['query'] = data[['lat', 'long']].apply(lambda x: str(x['lat']) + ',' + str(x['long']), axis=1) # + hidden=true # (Api request) geolocator = Nominatim( user_agent='geopyExercises') #def get_data( x ): # index, row = x # time.sleep( 1 ) # call the api # response = geolocator.reverse(row['query']) # try: # place_id = response.raw['place_id'] # osm_type = response.raw['osm_type'] # country = response.raw['address']['country'] # this informar are in other subpack # country_code = response.raw['address']['country_code'] # necessary declare the subfolder # # return place_id, osm_type, country, country_code # + hidden=true #import defs # Creating dataframe only for multi_thread process df1 = data[['id', 'query']].head() p = Pool(3) # Select only 3 cors at the machine start = time.process_time() df1[['place_id','osm_type', 'country', 'country_code']] = p.map( defs.get_data, df1.iterrows() ) end = time.process_time() print('Time Elapsed: {}', end - start) # + hidden=true df1.head() # + [markdown] heading_collapsed=true # ## 4. Add the following filters to the Map: # • Minimum living room area size. # • Number minimum of bathrooms. # • Maximum Price Value. # • Maximum size of the basement area. # • Property Conditions Filter. # • Filter by Year of Construction. 
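# Before the full set of widgets is defined below, here is a minimal, hypothetical
# two-filter sketch of the wiring this section uses: one IntSlider per filter,
# passed into `widgets.interactive` together with a callback that declares every
# filter it receives as a parameter. Note that in the full version below,
# `condition_limit` and `yr_limit` are handed to `widgets.interactive` without
# being added to `update_map`'s signature, so the interaction will fail with a
# TypeError once the widgets try to call it.
# +
price_demo = widgets.IntSlider(value=int(data['price'].mean()),
                               min=int(data['price'].min()),
                               max=int(data['price'].max()),
                               description='Max price',
                               style={'description_width': 'initial'})

bath_demo = widgets.IntSlider(value=int(data['bathrooms'].mean()),
                              min=int(data['bathrooms'].min()),
                              max=int(data['bathrooms'].max()),
                              description='Min bathrooms',
                              style={'description_width': 'initial'})

def show_filtered(df, price_limit, bath_limit):
    subset = df[(df['price'] <= price_limit) & (df['bathrooms'] >= bath_limit)]
    print(f'{subset.shape[0]} properties match the current filters')

widgets.interactive(show_filtered, df=fixed(data),
                    price_limit=price_demo, bath_limit=bath_demo)
# -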
# # + hidden=true #import ipywidgets as widgets #from ipywidgets import interact, interactive, fixed, interact_manual #from plotly import express as px # + hidden=true # First, created new column #data['is_waterfront'] = data['waterfront'].apply(lambda x: 'yes' if x == 1 else 'no') # Define interactive buttons price_limit = widgets.IntSlider( value = int( data['price'].mean()), min = int(data['price'].min()) , max = int(data['price'].max()) , step = 1, description = 'Maximunm Price', disable = False, style = {'description_width': 'initial'}) # Define interactive buttons living_limit = widgets.IntSlider( value = int( data['sqft_living'].mean() ), min = data['sqft_living'].min() , max = data['sqft_living'].max() , step = 1, description = 'Minimum Living Room Size', disable = False, style = {'description_width': 'initial'}) # Define interactive buttons bath_limit = widgets.IntSlider( value = data['bathrooms'].mean() , min = data['bathrooms'].min() , max = data['bathrooms'].max() , step = 1, description = 'Minimum Bathrooms Values', disable = False, style = {'description_width': 'initial'}) # Define interactive buttons basement_limit = widgets.IntSlider( value = int(data['sqft_basement'].mean()) , min = data['sqft_basement'].min() , max = data['sqft_basement'].max() , step = 1, description = 'Maximum Basement', disable = False, style = {'description_width': 'initial'}) # Define interactive buttons condition_limit = widgets.IntSlider( value = data['condition'].mean() , min = data['condition'].min() , max = data['condition'].max() , step = 1, description = 'House Condition', disable = False, style = {'description_width': 'initial'}) # Define interactive buttons yr_limit = widgets.IntSlider( value = data['yr_built'].mean() , min = data['yr_built'].min() , max = data['yr_built'].max() , step = 1, description = 'Year Build Limit', disable = False, style = {'description_width': 'initial'}) # + hidden=true def update_map(df, price_limit, living_limit, bath_limit, basement_limit): # Filter the data houses = df[(df['price'] < price_limit) & (df['sqft_living'] < living_limit) & (df['bathrooms'] < bath_limit) & (df['sqft_basement'] < basement_limit) & (df['condition'] == condition_limit ) & (df['yr_built'] == yr_limit)][['id', 'lat', 'long', 'price', 'sqft_living' ]].copy() # Plot Map fig = px.scatter_mapbox(houses, lat= 'lat', lon= 'long', size= 'price', color_continuous_scale=px.colors.cyclical.IceFire, size_max=15, zoom=10 ) fig.update_layout( mapbox_style='open-street-map' ) fig.update_layout( height=600, margin={'r':0, 'l':0, 't':0, 'b':0}) fig.show() # + hidden=true widgets.interactive( update_map, df=fixed( data ), price_limit=price_limit, living_limit=living_limit, bath_limit=bath_limit, basement_limit=basement_limit, condition_limit=condition_limit, yr_limit=yr_limit) # + [markdown] heading_collapsed=true # ## 5. Add the following filters to the Dashboard: # • Filter by date available for purchase. # • Filter per year of renewal. # • Filter whether it has a water view or not. # + hidden=true #from matplotlib import pyplot as plt #from matplotlib import gridspec #import seaborn as sns # + hidden=true # First format the column data['year'] = pd.to_datetime( data['date'] ).dt.strftime( '%Y' ) data['date'] = pd.to_datetime( data['date'] ).dt.strftime( '%Y-%m-%d' ) data['year_week'] = pd.to_datetime( data['date'] ).dt.strftime( '%Y-%u') # First filter: by date available for purchase. 
date_limit = widgets.SelectionSlider( options = data['date'].sort_values().unique().tolist(), value='2014-12-01', description='Max available date', disable=False, continous_update=False, style={'description_width': 'initial'}, redout=True) # Second filter: per year of renewal. year_limit = widgets.SelectionSlider( options = data['yr_renovated'].sort_values().unique().tolist(), value=2000, description='Max Year', disable=False, continous_update=False, style={'description_width': 'initial'}, redout=True) # Third filter: whether it has a water view or not. water_limit = widgets.Checkbox( value=False, description='Is waterfront?', disable=False, indent=False) # + hidden=true def update_map( data, date_limit, year_limit, water_limit): #FIltering data df = data[(data['date'] <= date_limit) & (data['yr_renovated'] >= year_limit) & (data['waterfront'] == water_limit)] fig = plt.figure( figsize=(24,12)) specs = gridspec.GridSpec( ncols=2, nrows=2, figure=fig) ax1 = fig.add_subplot( specs[0, :] ) ax2 = fig.add_subplot( specs[1, 0] ) ax3 = fig.add_subplot( specs[1, 1] ) # First graphic by_year = df[['price', 'year']].groupby('year').sum().reset_index() sns.barplot(x='year', y='price', data=by_year, ax=ax1); # Second graphic by_day = df[['price', 'date']].groupby('date').mean().reset_index() sns.lineplot(x='date', y='price', data=by_day, ax=ax2) plt.xticks( rotation=90 ); # Third graphic by_week_of_year = df[['price', 'year_week']].groupby('year_week').mean().reset_index() sns.barplot(x='year_week', y='price', data=by_week_of_year, ax=ax3) plt.xticks( rotation=90 ); # + hidden=true widgets.interactive( update_map, data = fixed( data ), date_limit = date_limit, year_limit = year_limit, water_limit = water_limit) # + hidden=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Input from keras.layers import Conv2D, MaxPooling2D from keras import backend as K from keras.callbacks import ModelCheckpoint import numpy as np # + batch_size = 128 n_classes = 10 n_epochs = 15 im_row, im_col = 28, 28 # - (x_train, y_train), (x_test, y_test) = mnist.load_data() # + import matplotlib.pyplot as plt # %matplotlib inline plt.imshow(x_train[1], cmap='gray') plt.show() # + fig = plt.figure(figsize=(15, 10)) i = 0 for f in range(0, y_train.shape[0]): if(y_train[f] == 1 and i < 10): plt.subplot(2, 5, i+1) plt.imshow(x_train[f], cmap='gray') plt.xticks([]) plt.yticks([]) i = i + 1 plt.show() # - print("x_train: {}\nx_test: {}\n".format( x_train.shape, x_test.shape, )) if K.image_data_format() == 'channels_first': x_train = x_train.reshape(x_train.shape[0], 1, im_row, im_col) x_test = x_test.reshape(x_test.shape[0], 1, im_row, im_col) input_shape = (1, im_row, im_col) else: x_train = x_train.reshape(x_train.shape[0], im_row, im_col, 1) x_test = x_test.reshape(x_test.shape[0], im_row, im_col, 1) input_shape = (im_row, im_col, 1) print(y_test[0]) # + from keras.utils import np_utils x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 y_train = np_utils.to_categorical(y_train, n_classes) y_test = np_utils.to_categorical(y_test, n_classes) # - print("x_train: {}\nx_test: {}\ninput_shape: {}\n\ # of training samples: {}\n# of testing samples: {}".format( 
x_train.shape, x_test.shape, input_shape, x_train.shape[0], x_test.shape[0])) # + from tensorflow.keras.optimizers import Adam model = Sequential() model.add( Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(n_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer="adam", metrics=['accuracy']) model.summary() # + model.fit(x_train, y_train, batch_size=batch_size, epochs=n_epochs, verbose=1, validation_data=(x_test, y_test)) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # + from sklearn.metrics import roc_auc_score preds = model.predict(x_test) auc = roc_auc_score(np.round(preds), y_test) print("AUC: {:.2%}".format(auc)) # - np.round(preds) preds = model.predict(x_test) print("Predictions for x_test[0]: {}\n\nActual label for x_test[0]: {}\n".format( preds[0], y_test[0])) print("Predictions for x_test[0] after rounding: {}\n".format( np.round(preds)[0])) # + from keras import models layers = [layer.output for layer in model.layers[:4]] model_layers = models.Model(inputs=model.input, outputs=layers) activations = model_layers.predict(x_train) fig = plt.figure(figsize=(15, 10)) plt.subplot(1, 3, 1) plt.title("Original") plt.imshow(x_train[7].reshape(28, 28), cmap='gray') plt.xticks([]) plt.yticks([]) for f in range(1, 3): plt.subplot(1, 3, f + 1) plt.title("Convolutional layer %d" % f) layer_activation = activations[f] plt.imshow(layer_activation[7, :, :, 0], cmap='gray') plt.xticks([]) plt.yticks([]) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Okfr_uhwhS1X" colab_type="text" # # Lambda School Data Science - Making Data-backed Assertions # # This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. # + [markdown] id="9dtJETFRhnOG" colab_type="text" # ## Lecture - generating a confounding variable # # The prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome. # # Let's use Python to generate data that actually behaves in this fashion! # + id="WiBkgmPJhmhE" colab_type="code" outputId="955ced12-8c35-4315-e396-6d9881d07957" colab={"base_uri": "https://localhost:8080/", "height": 1008} import random dir(random) # Reminding ourselves what we can do here # + id="Ks5qFtpnq-q5" colab_type="code" outputId="5c364669-a52b-475a-8012-fc6e69ca352b" colab={"base_uri": "https://localhost:8080/", "height": 33} # Let's think of another scenario: # We work for a company that sells accessories for mobile phones. # They have an ecommerce site, and we are supposed to analyze logs # to determine what sort of usage is related to purchases, and thus guide # website development to encourage higher conversion. 
# The hypothesis - users who spend longer on the site tend # to spend more. Seems reasonable, no? # But there's a confounding variable! If they're on a phone, they: # a) Spend less time on the site, but # b) Are more likely to be interested in the actual products! # Let's use namedtuple to represent our data from collections import namedtuple # purchased and mobile are bools, time_on_site in seconds User = namedtuple('User', ['purchased','time_on_site', 'mobile']) example_user = User(False, 12, False) print(example_user) # + id="lfPiHNG_sefL" colab_type="code" outputId="20621927-65d3-4add-b1e3-82401e7d8df3" colab={"base_uri": "https://localhost:8080/", "height": 53} # And now let's generate 1000 example users # 750 mobile, 250 not (i.e. desktop) # A desktop user has a base conversion likelihood of 10% # And it goes up by 1% for each 15 seconds they spend on the site # And they spend anywhere from 10 seconds to 10 minutes on the site (uniform) # Mobile users spend on average half as much time on the site as desktop # But have twice as much base likelihood of buying something users = [] for _ in range(250): # Desktop users time_on_site = random.uniform(10, 600) purchased = random.random() < 0.1 + (time_on_site // 1500) users.append(User(purchased, time_on_site, False)) for _ in range(750): # Mobile users time_on_site = random.uniform(5, 300) purchased = random.random() < 0.2 + (time_on_site // 1500) users.append(User(purchased, time_on_site, True)) random.shuffle(users) print(users[:10]) # + id="9gDYb5qGuRzy" colab_type="code" outputId="a37526ba-7e4e-4323-b852-65166e2bd085" colab={"base_uri": "https://localhost:8080/", "height": 191} # Let's put this in a dataframe so we can look at it more easily import pandas as pd user_data = pd.DataFrame(users) user_data.head() # + id="sr6IJv77ulVl" colab_type="code" outputId="9cd4e153-159b-4931-f4f3-907ecd3c592d" colab={"base_uri": "https://localhost:8080/", "height": 181} # Let's use crosstabulation to try to see what's going on pd.crosstab(user_data['purchased'], user_data['time_on_site']) # + id="hvAv6J3EwA9s" colab_type="code" outputId="04b17307-0098-457d-84e0-b71e99c69ff0" colab={"base_uri": "https://localhost:8080/", "height": 133} # OK, that's not quite what we want # Time is continuous! We need to put it in discrete buckets # Pandas calls these bins, and pandas.cut helps make them time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins pd.crosstab(user_data['purchased'], time_bins) # + id="pjcXnJw0wfaj" colab_type="code" outputId="ad0fb6a4-9b42-43a7-f615-5752d1bfb3e4" colab={"base_uri": "https://localhost:8080/", "height": 133} # We can make this a bit clearer by normalizing (getting %) pd.crosstab(user_data['purchased'], time_bins, normalize='all') # + id="C3GzvDxlvZMa" colab_type="code" outputId="40853a66-c2f3-44f1-f303-7592a9f1938c" colab={"base_uri": "https://localhost:8080/", "height": 133} # That seems counter to our hypothesis # More time on the site seems to have fewer purchases # But we know why, since we generated the data! # Let's look at mobile and purchased pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='all') # + id="KQb-wU60xCum" colab_type="code" colab={} # Yep, mobile users are more likely to buy things # But we're still not seeing the *whole* story until we look at all 3 at once # Live/stretch goal - how can we do that? 
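# One quick way to see all three at once (a sketch, assuming the `user_data` and `time_bins` objects defined above): compare purchase rates per device type and time bucket directly. The next cell shows the crosstab-and-plot version.
# +
# Purchase rate for each (mobile, time-on-site bucket) combination,
# using the user_data DataFrame and time_bins from the cells above.
purchase_rates = (
    user_data
    .assign(time_bin=time_bins)
    .groupby(['mobile', 'time_bin'])['purchased']
    .mean()
)
print(purchase_rates)
# -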
# + id="RPiiRU9Zbdts" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 452} outputId="04559abd-c4ba-42c2-ddf0-7b55be0a2101" my_colors = ['k','gray','silver','w'] pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='index').plot(kind = 'bar', stacked = True, color = my_colors) # Normalizing by index makes interpretation of the y axis make sense. # + id="4SY9OOolU-Jx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="26f39a02-41fd-4f2c-e9a7-ce37f43b6cde" pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='all').plot(kind = 'barh', stacked = False, color = my_colors) # + [markdown] id="lOqaPds9huME" colab_type="text" # ## Assignment - what's going on here? # # Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people. # # Try to figure out which variables are possibly related to each other, and which may be confounding relationships. # + id="TGUS79cOhPWj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="cadbbe13-c4d4-4a8a-df61-560b6dda8cdc" # TODO - your code here # Use what we did live in lecture as an example # HINT - you can find the raw URL on GitHub and potentially use that # to load the data with read_csv, or you can upload it yourself # path = 'DS-Sprint-01-Dealing-With-Data/module4-databackedassertions/persons.csv' XXX # path_alt = 'https://github.com/aapte11/DS-Sprint-01-Dealing-With-Data/blob/master/module4-databackedassertions/persons.csv' XXX path_alt2 = 'https://raw.githubusercontent.com/aapte11/DS-Sprint-01-Dealing-With-Data/master/module4-databackedassertions/persons.csv' # Always upload from raw file df_persons = pd.read_csv(path_alt2, index_col=0) df_persons.head() # + id="HwCt_DSgil9i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1065} outputId="8b2255bb-4b4d-4d76-83dc-fc34be489971" df_persons.plot.scatter('age', 'weight') df_persons.plot.scatter('age', 'exercise_time', color = 'r') df_persons.plot.scatter('weight','exercise_time', color = 'g') # + id="yKs9F0yKjI0T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 133} outputId="34a9b51c-d5cb-4b62-c4f8-251d4cdd2d22" df_persons.corr() # + id="PvEMq2ohD9Ko" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 472} outputId="e02381d6-c79f-43c0-c4a0-5d32bfb88635" print(df_persons.age.unique()) print (df_persons.weight.unique()) print (df_persons.exercise_time.unique()) # + id="F4PqSR6DGvt5" colab_type="code" colab={} age_bins = pd.cut(df_persons.age, 5) weight_bins = pd.cut(df_persons.weight, 5) exer_bins = pd.cut(df_persons.exercise_time, 5) import seaborn as sns df1 = pd.crosstab(age_bins, weight_bins, normalize = 'all') df2 = pd.crosstab(age_bins, exer_bins, normalize='all') df3 = pd.crosstab(exer_bins, weight_bins, normalize='all') # + id="5nIqT76bQ3an" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 376} outputId="16a57123-aaa8-429f-aa73-fd9f6e41307b" sns.heatmap(df1, annot=True).set_title('Age Bins vs Weight Bins'); # + id="J54_0FM4PyAi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 376} outputId="37a94213-75a9-4021-b827-dc04c6673649" sns.heatmap(df2, annot=True).set_title('Age Bins vs Exercise Time Bins'); # + id="a0USpN1VPyQ3" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 376} outputId="9864d0f5-5c92-404e-e143-009ae284b774" sns.heatmap(df3, annot=True).set_title('Exercise Time Bins vs Weight Bins'); # + [markdown] id="BT9gdS7viJZa" colab_type="text" # ### Assignment questions # # After you've worked on some code, answer the following questions in this text block: # # 1. What are the variable types in the data? # # **Age, weight and exercise time are all continuous variables even though in this dataset they seem to be put into bins.** # # # 2. What are the relationships between the variables? # # **Age and weight have aa weak positive relationship. Exercise time and age have a negative relationship and exercise time and weight have a strongly negative relationship. ** # # 3. Which relationships are "real", and which spurious? # # **From a conceptual standpoint, it seems as if weight and/or exercise time is a function of age as age is a given variable (can't change age) while the other two are variable. However when looking at the correlation matrix, one sees that the strongest relationship is between exercise time and weight. However, one feels it is necessary to bin the data and to put it into heatmaps to see where the relaationships are strongest as the original scatterplots suggest varying degrees of strength throughout the plot. ** # # + [markdown] id="_XXg2crAipwP" colab_type="text" # ## Stretch goals and resources # # Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub. # # - [Spurious Correlations](http://tylervigen.com/spurious-correlations) # - [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/) # # Stretch goals: # # - Produce your own plot inspierd by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it) # - Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn) # + [markdown] id="QitRXbfMhDPR" colab_type="text" # **Monthly Milk Production vs S&P 500 1962-1975** # + id="bFOlK3Yk5lb_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 326} outputId="3ac793b4-fcf6-46bf-e152-0a2bd633e7be" urlsnp = 'https://raw.githubusercontent.com/aapte11/DS-Sprint-01-Dealing-With-Data/master/S%26P500MonthlyReturns.csv' urlmilk = 'https://raw.githubusercontent.com/aapte11/DS-Sprint-01-Dealing-With-Data/master/monthly-milk-production-pounds-p.csv' df_snp = pd.read_csv(urlsnp) df_milk = pd.read_csv(urlmilk) print(df_snp.head()) print (df_milk.head()) # + id="6UJF3nAGbjLH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="02830cac-cea9-4aeb-d637-2448f91f85f6" df_snp_new = df_snp[['Date', 'Close']] df_milk_new = df_milk[:-1] print(len(df_snp_new)) print(len(df_milk_new)) # + id="At-R7DT8dxXs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 98} outputId="cac16272-60c9-4dfd-9cc1-06dda9670816" df_final = df_snp_new.join(df_milk_new) df_final.count() # + id="3L_rKzkFe_x6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="938cd95a-6722-49ea-e602-afeeb381fa45" del df_final['Date'] df_final['SNPClose'] = df_final['Close'] df_final['MMP'] = df_final['Monthly milk production: pounds per cow. Jan 62 ? 
Dec 75'] del df_final['Close'] del df_final['Monthly milk production: pounds per cow. Jan 62 ? Dec 75'] df_final.head() # + id="JDKd-6imf-X_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="9aab2da4-21f0-45da-91eb-a67a0cd61ec2" df_final.corr() # + id="5kZa9OkFfS8e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 393} outputId="0f84b361-ed7f-412e-9b12-3c55469af748" import matplotlib.pyplot as plt ax = df_final.plot(x="Month", y="SNPClose", legend=False, color='blue') ax2 = ax.twinx() df_final.plot(x="Month", y="MMP", ax=ax2, legend=False, color="lime") ax.set_ylabel('S&P 500 Closing Price') ax2.set_ylabel('Monthly Milk Production') ax.figure.legend() plt.title("Spurious Correlation - S&P500 vs Milk Production") plt.show() # + id="c5-gyR8IkDgG" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (pytorch) # language: python # name: pytorch # --- # # Federated learning: using a TensorFlow model # # This notebook is a copy of the notebook [Federated learning basic concepts](./federated_learning_basic_concepts.ipynb). The difference is that, here, the model is built by defining a custom layer. However, apart from that, the structure is identical so the text has been removed for clearness. Please refer to the original notebook for the detailed description of the experiment. # # ## The data # + import shfl database = shfl.data_base.Emnist() train_data, train_labels, test_data, test_labels = database.load_data() print(len(train_data)) print(len(test_data)) print(type(train_data[0])) train_data[0].shape import matplotlib.pyplot as plt plt.imshow(train_data[0]) iid_distribution = shfl.data_distribution.IidDataDistribution(database) federated_data, test_data, test_label = iid_distribution.get_federated_data(num_nodes=20, percent=10) print(type(federated_data)) print(federated_data.num_nodes()) federated_data[0].private_data # - # ## The model # + import tensorflow as tf #If you want execute in GPU, you must uncomment this two lines. 
# physical_devices = tf.config.experimental.list_physical_devices('GPU') # tf.config.experimental.set_memory_growth(physical_devices[0], True) class CustomDense(tf.keras.layers.Layer): """ Implementation of Linear layer Attributes ---------- units : int number of units for the output w : matrix Weights from the layer b : array Bias from the layer """ def __init__(self, units=32, **kwargs): super(CustomDense, self).__init__(**kwargs) self._units = units def get_config(self): config = {'units': self._units} base_config = super(CustomDense, self).get_config() return dict(list(base_config.items()) + list(config.items())) def build(self, input_shape): """ Method for build the params Parameters ---------- input_shape: list size of inputs """ self._w = self.add_weight(shape=(input_shape[-1], self._units), initializer='random_normal', trainable=True) self._b = self.add_weight(shape=(self._units,), initializer='random_normal', trainable=True) def call(self, inputs): """ Apply linear layer Parameters ---------- inputs: matrix Input data Return ------ result : matrix the result of linear transformation of the data """ return tf.nn.bias_add(tf.matmul(inputs, self._w), self._b) def model_builder(): inputs = tf.keras.Input(shape=(28, 28, 1)) x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1)(inputs) x = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid')(x) x = tf.keras.layers.Dropout(0.4)(x) x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1)(x) x = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid')(x) x = tf.keras.layers.Flatten()(x) x = CustomDense(128)(x) x = tf.nn.relu(x) x = tf.keras.layers.Dropout(0.1)(x) x = CustomDense(64)(x) x = tf.nn.relu(x) x = CustomDense(10)(x) outputs = tf.nn.softmax(x) model = tf.keras.Model(inputs=inputs, outputs=outputs) criterion = tf.keras.losses.CategoricalCrossentropy() optimizer = tf.keras.optimizers.RMSprop() metrics = [tf.keras.metrics.categorical_accuracy] return shfl.model.DeepLearningModel(model=model, criterion=criterion, optimizer=optimizer, metrics=metrics) # - aggregator = shfl.federated_aggregator.FedAvgAggregator() federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator) # + import numpy as np class Reshape(shfl.private.FederatedTransformation): def apply(self, labeled_data): labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2],1)) class CastFloat(shfl.private.FederatedTransformation): def apply(self, labeled_data): labeled_data.data = labeled_data.data.astype(np.float32) shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape()) shfl.private.federated_operation.apply_federated_transformation(federated_data, CastFloat()) # - # ## Run the federated learning experiment test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1)) test_data = test_data.astype(np.float32) federated_government.run_rounds(3, test_data, test_label) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="xQRH5MK47h6f" # # Teaching notebook, January 21, 2021 # + [markdown] id="1MjvoMX_-ASg" # Recall the formula for $e^x$: # 
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots.$$ # + [markdown] id="S05vUI7-8ZGp" # Approximating e. Solution 1 # + id="eioWYl_n7hKU" e = 1 x = 1 for n in range(1,21): x = x*n e = e + (1/x) print(e) # + [markdown] id="T21qI-_A8bsj" # Approximating e. Solution 2. # + id="0vlKjmls8pKw" 'Approximating e' coff=1 x_pow=1 Sum=1 for m in range(1,30): coff=((1/m)*coff) term=coff*(x_pow**m) Sum=(Sum+term) print((Sum)) # + [markdown] id="ut6OFyPv-Wp8" # Approximating e. Solution 3. # + id="YyWbOfsu-VlZ" s=1 f=1 y=0 for x in list(range(1,18)): f=f*(y+1) y=y+1 s=s+1/f print(s) # + id="VeWU3LS1DaV6" from mpmath import * print(mp) # + id="DIMpwQRNDaLX" mp.dps = 100 # Let's try 100 decimal digits precision. print(mp) # + id="q3DZR9X_Dcac" sqrt(2) # mpf(...) stands for an mp-float. # + id="Pi8ZjaTxDd9x" mpf(1.0) # + [markdown] id="7NmHwFjH9fi2" # Approximating $\pi$. # + id="vR2YaQcP9waz" type(sqrt(2)) # mpmath stores numbers in its own types! # + id="7PwLqLeS9xMB" pi**2 # + id="5IyuooSM9h-C" p=4*sqrt(2) P=8 for n in range(1,166): P = (2*p*P)/(p + P) p = sqrt(p*P) print(p/2) print(P/2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Pick a website and describe your objective # ### Project Outline # # - We're going to scrape https://www.gutenberg.org/ # - We'll get a list of famous literary books. import urllib.request, urllib.parse, urllib.error import requests from bs4 import BeautifulSoup import ssl import re # Use the requests library to download web pages # - Inspect the website's HTML source and identify the right URLs to download # - Download and save web pages locally using the requests library # - Create a function to automate downloading for different topics/search queries top100url = 'https://www.gutenberg.org/browse/scores/top' response = requests.get(top100url) # Use Beautiful Soup to parse and extract information # - Parse and explore the structure of downloaded web pages using Beautiful Soup # - Use the right properties and methods to extract the required information # - Create functions to extract from the page into lists and dictionaries # - (Optional) Use a REST API to acquire additional information contents = response.content.decode(response.encoding) soup = BeautifulSoup(contents, 'html.parser') def status_check(r): if r.status_code==200: print('Success!') return 1 else: print('Failed!') return -1 status_check(response) lst_links = [] for link in soup.find_all('a'): lst_links.append(link.get('href')) lst_links[:30] lst_titles_temp=[] start_idx = soup.text.splitlines().index('Top 100 EBooks yesterday') for i in range(100): lst_titles_temp.append(soup.text.splitlines()[start_idx+2+i]) lst_titles = [] for i in range(100): id1,id2=re.match('^[a-zA-Z ]*',lst_titles_temp[i]).span() lst_titles.append(lst_titles_temp[i][id1:id2]) for l in lst_titles: print(l) # Create CSV file(s) with the extracted information # - Create functions for the end-to-end process of downloading, parsing and saving CSVs # - Execute the function with different inputs to create a dataset of CSV files # - Verify the information in the CSV files by reading them back using Pandas # Document and share your work # - Add proper headings and documentation in your Jupyter notebook # - Publish your Jupyter notebook to your Github portfolio # - (Optional) Write a blog post about your project and share it online # --- # jupyter: # 
jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bs4 import BeautifulSoup from selenium import webdriver import time import pandas as pd import regex driver = webdriver.Chrome(r"C:\Users\mkart\OneDrive\Documents\Python\Mysore_Real_Estate_Price\chromedriver_win32\chromedriver.exe") driver.get('https://www.99acres.com/search/property/buy/residential-all/mysore?city=126&property_type=1%2C4%2C2%2C5&preference=S&area_unit=1&budget_min=0&res_com=R') soup = BeautifulSoup(driver.page_source, 'html.parser') #listings = soup.find_all("a", class_="srpTuple__tupleDetails ") print(soup) pattern = regex.compile(r'\{(?:[^{}]|(?R))*\}') pattern.findall(soup) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp meta.base # - #hide # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # # Base Metalearner # # > Metalearner Base #hide from nbdev.showdoc import * # + #export # REFERENCE: https://github.com/uber/causalml # Copyright 2019 Uber Technology, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from abc import ABCMeta, abstractclassmethod import logging import numpy as np import pandas as pd from causalnlp.meta.explainer import Explainer from causalnlp.meta.utils import check_p_conditions, convert_pd_to_np from causalnlp.meta.propensity import compute_propensity_score logger = logging.getLogger('causalnlp') class BaseLearner(metaclass=ABCMeta): @abstractclassmethod def fit(self, X, treatment, y, p=None): pass @abstractclassmethod def predict(self, X, treatment=None, y=None, p=None, return_components=False, verbose=True): pass def fit_predict(self, X, treatment, y, p=None, return_ci=False, n_bootstraps=1000, bootstrap_size=10000, return_components=False, verbose=True): self.fit(X, treatment, y, p) return self.predict(X, treatment, y, p, return_components, verbose) @abstractclassmethod def estimate_ate(self, X, treatment, y, p=None, bootstrap_ci=False, n_bootstraps=1000, bootstrap_size=10000): pass def bootstrap(self, X, treatment, y, p=None, size=10000): """Runs a single bootstrap. Fits on bootstrapped sample, then predicts on whole population.""" idxs = np.random.choice(np.arange(0, X.shape[0]), size=size) X_b = X[idxs] if p is not None: p_b = {group: _p[idxs] for group, _p in p.items()} else: p_b = None treatment_b = treatment[idxs] y_b = y[idxs] self.fit(X=X_b, treatment=treatment_b, y=y_b, p=p_b) return self.predict(X=X, p=p) @staticmethod def _format_p(p, t_groups): """Format propensity scores into a dictionary of {treatment group: propensity scores}. Args: p (np.ndarray, pd.Series, or dict): propensity scores t_groups (list): treatment group names. 
Returns: dict of {treatment group: propensity scores} """ check_p_conditions(p, t_groups) if isinstance(p, (np.ndarray, pd.Series)): treatment_name = t_groups[0] p = {treatment_name: convert_pd_to_np(p)} elif isinstance(p, dict): p = {treatment_name: convert_pd_to_np(_p) for treatment_name, _p in p.items()} return p def _set_propensity_models(self, X, treatment, y): """Set self.propensity and self.propensity_models. It trains propensity models for all treatment groups, save them in self.propensity_models, and save propensity scores in self.propensity in dictionaries with treatment groups as keys. It will use self.model_p if available to train propensity models. Otherwise, it will use a default PropensityModel (i.e. ElasticNetPropensityModel). Args: X (np.matrix or np.array or pd.Dataframe): a feature matrix treatment (np.array or pd.Series): a treatment vector y (np.array or pd.Series): an outcome vector """ logger.info('Generating propensity score') p = dict() p_model = dict() for group in self.t_groups: mask = (treatment == group) | (treatment == self.control_name) treatment_filt = treatment[mask] X_filt = X[mask] w_filt = (treatment_filt == group).astype(int) w = (treatment == group).astype(int) propensity_model = self.model_p if hasattr(self, 'model_p') else None p[group], p_model[group] = compute_propensity_score(X=X_filt, treatment=w_filt, p_model=propensity_model, X_pred=X, treatment_pred=w) self.propensity_model = p_model self.propensity = p def get_importance(self, X=None, tau=None, model_tau_feature=None, features=None, method='auto', normalize=True, test_size=0.3, random_state=None): """ Builds a model (using X to predict estimated/actual tau), and then calculates feature importances based on a specified method. Currently supported methods are: - auto (calculates importance based on estimator's default implementation of feature importance; estimator must be tree-based) Note: if none provided, it uses lightgbm's LGBMRegressor as estimator, and "gain" as importance type - permutation (calculates importance based on mean decrease in accuracy when a feature column is permuted; estimator can be any form) Hint: for permutation, downsample data for better performance especially if X.shape[1] is large Args: X (np.matrix or np.array or pd.Dataframe): a feature matrix tau (np.array): a treatment effect vector (estimated/actual) model_tau_feature (sklearn/lightgbm/xgboost model object): an unfitted model object features (np.array): list/array of feature names. If None, an enumerated list will be used method (str): auto, permutation normalize (bool): normalize by sum of importances if method=auto (defaults to True) test_size (float/int): if float, represents the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples (used for estimating permutation importance) random_state (int/RandomState instance/None): random state used in permutation importance estimation """ explainer = Explainer(method=method, control_name=self.control_name, X=X, tau=tau, model_tau=model_tau_feature, features=features, classes=self._classes, normalize=normalize, test_size=test_size, random_state=random_state) return explainer.get_importance() def get_shap_values(self, X=None, model_tau_feature=None, tau=None, features=None): """ Builds a model (using X to predict estimated/actual tau), and then calculates shapley values. 
Args: X (np.matrix or np.array or pd.Dataframe): a feature matrix tau (np.array): a treatment effect vector (estimated/actual) model_tau_feature (sklearn/lightgbm/xgboost model object): an unfitted model object features (optional, np.array): list/array of feature names. If None, an enumerated list will be used. """ explainer = Explainer(method='shapley', control_name=self.control_name, X=X, tau=tau, model_tau=model_tau_feature, features=features, classes=self._classes) return explainer.get_shap_values() def plot_importance(self, X=None, tau=None, model_tau_feature=None, features=None, method='auto', normalize=True, test_size=0.3, random_state=None): """ Builds a model (using X to predict estimated/actual tau), and then plots feature importances based on a specified method. Currently supported methods are: - auto (calculates importance based on estimator's default implementation of feature importance; estimator must be tree-based) Note: if none provided, it uses lightgbm's LGBMRegressor as estimator, and "gain" as importance type - permutation (calculates importance based on mean decrease in accuracy when a feature column is permuted; estimator can be any form) Hint: for permutation, downsample data for better performance especially if X.shape[1] is large Args: X (np.matrix or np.array or pd.Dataframe): a feature matrix tau (np.array): a treatment effect vector (estimated/actual) model_tau_feature (sklearn/lightgbm/xgboost model object): an unfitted model object features (optional, np.array): list/array of feature names. If None, an enumerated list will be used method (str): auto, permutation normalize (bool): normalize by sum of importances if method=auto (defaults to True) test_size (float/int): if float, represents the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples (used for estimating permutation importance) random_state (int/RandomState instance/None): random state used in permutation importance estimation """ explainer = Explainer(method=method, control_name=self.control_name, X=X, tau=tau, model_tau=model_tau_feature, features=features, classes=self._classes, normalize=normalize, test_size=test_size, random_state=random_state) explainer.plot_importance() def plot_shap_values(self, X=None, tau=None, model_tau_feature=None, features=None, shap_dict=None, **kwargs): """ Plots distribution of shapley values. If shapley values have been pre-computed, pass it through the shap_dict parameter. If shap_dict is not provided, this builds a new model (using X to predict estimated/actual tau), and then calculates shapley values. Args: X (np.matrix or np.array or pd.Dataframe): a feature matrix. Required if shap_dict is None. tau (np.array): a treatment effect vector (estimated/actual) model_tau_feature (sklearn/lightgbm/xgboost model object): an unfitted model object features (optional, np.array): list/array of feature names. If None, an enumerated list will be used. shap_dict (optional, dict): a dict of shapley value matrices. If None, shap_dict will be computed. 
""" override_checks = False if shap_dict is None else True explainer = Explainer(method='shapley', control_name=self.control_name, X=X, tau=tau, model_tau=model_tau_feature, features=features, override_checks=override_checks, classes=self._classes) explainer.plot_shap_values(shap_dict=shap_dict) def plot_shap_dependence(self, treatment_group, feature_idx, X, tau, model_tau_feature=None, features=None, shap_dict=None, interaction_idx='auto', **kwargs): """ Plots dependency of shapley values for a specified feature, colored by an interaction feature. If shapley values have been pre-computed, pass it through the shap_dict parameter. If shap_dict is not provided, this builds a new model (using X to predict estimated/actual tau), and then calculates shapley values. This plots the value of the feature on the x-axis and the SHAP value of the same feature on the y-axis. This shows how the model depends on the given feature, and is like a richer extension of the classical partial dependence plots. Vertical dispersion of the data points represents interaction effects. Args: treatment_group (str or int): name of treatment group to create dependency plot on feature_idx (str or int): feature index / name to create dependency plot on X (np.matrix or np.array or pd.Dataframe): a feature matrix tau (np.array): a treatment effect vector (estimated/actual) model_tau_feature (sklearn/lightgbm/xgboost model object): an unfitted model object features (optional, np.array): list/array of feature names. If None, an enumerated list will be used. shap_dict (optional, dict): a dict of shapley value matrices. If None, shap_dict will be computed. interaction_idx (optional, str or int): feature index / name used in coloring scheme as interaction feature. If "auto" then shap.common.approximate_interactions is used to pick what seems to be the strongest interaction (note that to find to true strongest interaction you need to compute the SHAP interaction values). """ override_checks = False if shap_dict is None else True explainer = Explainer(method='shapley', control_name=self.control_name, X=X, tau=tau, model_tau=model_tau_feature, features=features, override_checks=override_checks, classes=self._classes) explainer.plot_shap_dependence(treatment_group=treatment_group, feature_idx=feature_idx, shap_dict=shap_dict, interaction_idx=interaction_idx, **kwargs) # - #hide from nbdev.export import notebook2script; notebook2script() # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.1 # language: julia # name: julia-1.4 # --- # # Rutile TiO$_2$ polaron hopping # ## Overview # In this notebook, we will: # - Sample the potential energy surface of the polaron hop of TiO$_2$ # - Fit it to harmonic wells with both a hard cut off to exclude anharmonic points, or Boltzmann weighting of the fit # - Calculate and compare the polaron hopping rate from Marcus theory using CarrierCapture using Plots, LaTeXStrings using DataFrames @show VERSION # ## Adiabatic PES # Here, we read the energies of linearly interpolation geometries from a set of DFT calculations. We then normalise it, such that its minimum sits at 0. # + pot = pot_from_file("data/tio2_interpolation.dat") pot.QE_data.E = pot.QE_data.E .- minimum(pot.QE_data.E) pot.Q0 = 0 pot.E0 = 0 # - # Now, we fit the surface to a spline, by default of order 2. 
pot.func_type = "spline" fit_pot!(pot) # ## Diabatic PESs # We now split the potential in half. In this case, the saddle point is included, therefore we discard the maximum, which does not belong to the initial or final sets of sample points. # + pot_i, pot_f = cleave_pot(pot,discard_max=true) # option to filter out points above kT #filter_sample_points!(pot_i) #filter_sample_points!(pot_f) pot_i.T_weight = true pot_f.T_weight = true fit_pot!(pot_i) fit_pot!(pot_f) # - # ## Plot # Let's put it all together in one plot plt = plot(pot.Q,pot.E,lab = "Adiabatic PES",thickness_scaling = 1.5) plt = plot!(pot_i.Q,pot_i.E,lab = "Diabatic PES i") plt = plot!(pot_f.Q,pot_f.E,lab = "Diabatic PES f") scatter!(pot.QE_data.Q,pot.QE_data.E, lab="DFT Data", xlabel = "Fractional reaction coordinate", ylabel = "Relative energy (eV)", legend = true, title = "TiO2 ") # ## Get transport properties # + tc = TransferCoord(pot_i, pot_f, pot) coupling = get_coupling(tc) reorg_ener = get_reorg_energy(tc) activation_ener = get_activation_energy(tc) transfer_rate = get_transfer_rate(tc) dist = 2.904e-8 temp = 293.15 mobility = einstein_mobility(transfer_rate, 1, dist, temp) println("Coupling: ", coupling, " eV") println("Reorganisation enrgy: ", reorg_ener, " eV") println("Activation energy: ", activation_ener, " eV") println("Transfer rate: ", transfer_rate, " 1/s") println("Einstein mobility: ", mobility, " cm^2V") # - # ## Comparison with NEB # We repeat the whole procedure in one go for NEB results, and compare. # + # read in data pot_neb = pot_from_file("data/tio2_neb.dat") # normalise pot_neb.QE_data.E = pot_neb.QE_data.E .- minimum(pot_neb.QE_data.E) pot_neb.Q0 = 0 pot_neb.E0 = 0 # fit adiabatic surface pot_neb.func_type = "spline" fit_pot!(pot_neb) # get diabatic surfaces pot_i_neb, pot_f_neb = cleave_pot(pot_neb,discard_max=true) pot_i_neb.T_weight = true pot_f_neb.T_weight = true fit_pot!(pot_i_neb) fit_pot!(pot_f_neb) # extract properties tc = TransferCoord(pot_i_neb, pot_f_neb, pot_neb) coupling = get_coupling(tc) reorg_ener = get_reorg_energy(tc) activation_ener = get_activation_energy(tc) transfer_rate = get_transfer_rate(tc) dist = 2.904e-8 temp = 293.15 mobility = einstein_mobility(transfer_rate, 1, dist, temp) println("Coupling: ", coupling, " eV") println("Reorganisation enrgy: ", reorg_ener, " eV") println("Activation energy: ", activation_ener, " eV") println("Transfer rate: ", transfer_rate, " 1/s") println("Einstein mobility: ", mobility, " cm^2V") # - plt = plot(pot.Q,pot.E,lab = "Adiabatic PES interpolation",linewidth = 3,linecolor=1) plt = plot!(pot_i.Q,pot_i.E,lab = "Diabatic PES interpolation",linewidth = 2,linecolor=1) plt = plot!(pot_f.Q,pot_f.E,lab="",linewidth = 2,linecolor=1) scatter!(pot.QE_data.Q,pot.QE_data.E, lab="DFT Data interpolation", markercolor=1, xlabel = "Fractional reaction coordinate", ylabel = "Relative energy (eV)", legend = true, title = "") plt = plot!(pot_neb.Q,pot_neb.E,lab = "Adiabatic PES NEB",linewidth = 3,linecolor=2,thickness_scaling = 1) plt = plot!(pot_i_neb.Q,pot_i_neb.E,lab = "Diabatic PES NEB",linewidth = 2,linecolor = 2) plt = plot!(pot_f_neb.Q,pot_f_neb.E,lab="",linewidth = 2,linecolor = 2) scatter!(pot_neb.QE_data.Q,pot_neb.QE_data.E, lab="DFT Data NEB", markercolor=2, xlabel = "Fractional reaction coordinate", ylabel = "Relative energy (eV)", legend = true, title = "") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 
(ipykernel) # language: python # name: python3 # --- # ## N1QL # N1QL (pronounced “nickel”) is Couchbase’s next-generation query language. N1QL aims to meet the query needs of distributed document-oriented databases. # # The N1QL data model derives its name from the non-first normal form, which is a superset and generalization of the relational first normal form (1NF). # # N1QL is a JSON query language for executing industry-standard ANSI joins and querying, transforming, and manipulating JSON data – just like SQL. With native support for N1QL, Couchbase allows you to visualize and optimize complex query plans for large datasets, deliver the best performance at any scale, and meet the demands of millions of users. # ## Running Queries # N1QL queries can be executed in the following ways: # - The Couchbase Query Workbench (in the Web Console) # - The Command-Line based Query Shell (cbq) # - Our REST API # - Any of our Language SDKs, including Python (which we’ll focus on today). # ## Notes # - In Couchbase, you need indexes to run N1QL queries. In the case of `travel-sample` data, the indexes are created for you when you import the sample bucket. # - The queries are executed lazily. Unless the Query Results are processed, the query might not have been executed. You can do that by iterating over the results. # - Another option to execute the queries is to use the `execute()` method on Query object. # needed to support SQL++ (N1QL) query from couchbase.cluster import ( Cluster, ClusterOptions, QueryOptions, QueryScanConsistency, ) from couchbase_core.cluster import PasswordAuthenticator # get a reference to our cluster cluster = Cluster( "couchbase://localhost", ClusterOptions(PasswordAuthenticator("", "Password")), ) # get a reference to our bucket bucket = cluster.bucket("travel-sample") # + import pprint pp = pprint.PrettyPrinter(indent=4, depth=6) # - # ## cluster.query() Method # The `query()` method in the Couchbase Cluster object can be used to run N1QL queries using the Python SDK. # It returns the data returned by the query or an error if the query is not successful. # # [`QueryOptions()`](https://docs.couchbase.com/python-sdk/current/howtos/n1ql-queries-with-sdk.html#query-options) can be used to specify options for the query like metrics, timeout, scan consistency, query parameters, query context, etc. # # The results are returned as an iterator and can be processed as needed. # + # Select Statement try: result = cluster.query( "SELECT * FROM `travel-sample`.inventory.airport LIMIT 10", QueryOptions(metrics=True), ) for row in result.rows(): pp.pprint(row) print(f"Query Execution time: {result.metadata().metrics().execution_time()}") except CouchbaseException as ex: print(ex) # - # ## Queries and Placeholders # Placeholders allow you to specify variable constraints for an otherwise constant query. There are two variants of placeholders: postional and named parameters. Positional parameters use an ordinal placeholder for substitution and named parameters use variables. A named or positional parameter is a placeholder for a value in the WHERE, LIMIT or OFFSET clause of a query. # # You can specify the query parameters either as a part of the query or as a `QueryOptions` object. # # The main difference between the positional parameters and the named parameters are in the way the parameters are mentioned in the query. Named parameters refer to the variables specified in the query while the positional parameters are always referred to by the order in which they are specified. 
The results do not change as the query is the same. # ## Positional Parameters # + # The positional parameters are replaced in the order in which they are specified. query = "SELECT a.airportname, a.city FROM `travel-sample`.inventory.airport a where country=$1 AND city=$2" try: result = cluster.query(query, "United Kingdom", "London") # Each row is one document for row in result: print(f"Airport: {row['airportname']}, City: {row['city']}") except Exception as e: print(e) # + query = "SELECT a.airportname, a.city FROM `travel-sample`.inventory.airport a where country=$1 AND city=$2" try: result = cluster.query( query, QueryOptions(positional_parameters=["United Kingdom", "London"]) ) for row in result: print(f"Airport: {row['airportname']}, City: {row['city']}") except Exception as e: print(e) # - # ## Named Parameters # + # The named parameters are replaced by the name specified in the query query = "SELECT a.airportname, a.city FROM `travel-sample`.inventory.airport a where country=$country AND city=$city" try: result = cluster.query(query, country="United Kingdom", city="London") for row in result: print(f"Airport: {row['airportname']}, City: {row['city']}") except Exception as e: print(e) # + query = "SELECT a.airportname, a.city FROM `travel-sample`.inventory.airport a where country=$country AND city=$city" try: result = cluster.query( query, QueryOptions(named_parameters={"country": "United Kingdom", "city": "London"}), ) for row in result: print(f"Airport: {row['airportname']}, City: {row['city']}") except Exception as e: print(e) # - # ## Query Metrics # The performance & metadata about a query can be measured using the optional `metrics` parameter in the QueryOptions query = "SELECT a.airportname, a.city FROM `travel-sample`.inventory.airport a where country=$country AND city=$city LIMIT 4" try: result = cluster.query( query, QueryOptions(named_parameters={"country": "United Kingdom", "city": "London"}), metrics=True, ) print("Results") print("------") for row in result: print(f"Airport: {row['airportname']}, City: {row['city']}") print("------") print(f"Query Metrics: {result.metrics}") print("------") except Exception as e: print(e) # ## Scan Consistency # By default, the query engine will return whatever is currently in the index at the time of query (this mode is also called `QueryScanConsistency.NOT_BOUNDED`). If you need to include everything that has just been written, a different scan consistency must be chosen. If `QueryScanConsistency.REQUEST_PLUS` is chosen, it will likely take a bit longer to return the results but the query engine will make sure that it is as up-to-date as possible. result = cluster.query( "SELECT ts.* FROM `travel-sample`.inventory.airline ts LIMIT 10", QueryOptions(scan_consistency=QueryScanConsistency.REQUEST_PLUS, metrics=True), ) for row in result: pp.pprint(row) print("Query Metrics:", result.metrics) # ## Create, Read, Update, Delete (CRUD) Operations # The most common operations in applications using Database systems are the CRUD operations. Most of the web applications are composed of these fundamental CRUD operations. # # These statements are similar to SQL. # ## Insert # Use the INSERT statement to insert one or more new documents into an existing keyspace. Each INSERT statement requires a unique document key and well-formed JSON as values. In Couchbase, documents in a single keyspace must have a unique key. # # The key represents the ID of the document to be inserted. 
It cannot be Missing or Null & must be Unique across all the documents in the collection. insert_statement = 'INSERT INTO `travel-sample`.inventory.hotel (KEY, VALUE) VALUES ("key1", { "type" : "hotel", "name" : "new hotel" })' try: result = cluster.query(insert_statement).execute() except Exception as e: print(e) # Fetch the inserted document result = cluster.query( "SELECT * from `travel-sample`.inventory.hotel where name='new hotel'" ) try: for row in result: print(row) except Exception as e: print(e) # ## Upsert # The UPSERT statement is used if you want to overwrite a document with the same key, in case it already exists. In case the document does not exist, a new document is created with the specified key. upsert_statement = 'UPSERT INTO `travel-sample`.inventory.hotel (KEY, VALUE) VALUES ("key1", { "type" : "hotel", "name" : "new hotel", "city":"Manchester"})' try: result = cluster.query(upsert_statement).execute() for row in result: print(row) except Exception as e: print(e) # ## Exercise 4.1 # - Fetch the upserted document # + # Solution # - # ## Update # UPDATE replaces a document that already exists with updated values. # # You can use the RETURNING clause to return specific information as part of the query. update_statement = "UPDATE `travel-sample`.inventory.hotel USE KEYS 'key1' UNSET city RETURNING hotel.name" try: result = cluster.query(update_statement) for row in result: print(row) except Exception as e: print(e) # ## Exercise 4.2 # 1. Fetch the updated document # 2. Update the hotel name to "New Hotel International" # + # Fetch the updated document # - # Update the hotel name to "New Hotel International" update_statement = "UPDATE `travel-sample`.inventory.hotel USE KEYS 'key1' SET name = 'New Hotel International' RETURNING hotel.name" try: result = cluster.query(update_statement) for row in result: print(row) except Exception as e: print(e) # ## Delete # DELETE immediately removes the specified document from your keyspace. delete_statement = ( "DELETE FROM `travel-sample`.inventory.hotel h USE KEYS 'key1' RETURNING h" ) try: result = cluster.query(delete_statement) for row in result: print(f"Deleted Row: {row}") except Exception as e: print(e) # ## Exercise 4.3 # - Check if the deleted record exists # + # Solution # - # ## Select # The SELECT statement takes a set of JSON documents from keyspaces as its input, manipulates it and returns a set of JSON documents in the result array. Since the schema for JSON documents is flexible, JSON documents in the result set have flexible schema as well. # # A simple query in N1QL consists of three parts: # # - SELECT: specifies the projection, which is the part of the document that is to be returned. # # - FROM: specifies the keyspaces(bucket, scope, collection) to work with. # # - WHERE: specifies the query criteria (filters or predicates) that the results must satisfy. # # To query on a keyspace, you must either specify the document keys or use an index on the keyspace. 
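# As a quick illustration of the document-key path (a sketch; it assumes a key such as `airline_10` exists in the sample bucket), `USE KEYS` lets you select by key without relying on a secondary index:
# +
# Fetch a document directly by its key with USE KEYS.
use_keys_query = "SELECT a.* FROM `travel-sample`.inventory.airline a USE KEYS 'airline_10'"
try:
    result = cluster.query(use_keys_query)
    for row in result:
        pp.pprint(row)
except Exception as e:
    print(e)
# -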
# # + # Select All Airlines in the Database with Country "United Kingdom" uk_airlines = ( "SELECT * from `travel-sample`.inventory.airline where country='United Kingdom'" ) try: result = cluster.query(uk_airlines) for row in result: pp.pprint(row) except Exception as e: print(e) # - # Select Just Airline Name & ICAO Codes for Airlines uk_airlines = "SELECT a.name, a.icao from `travel-sample`.inventory.airline a where country='United Kingdom'" try: result = cluster.query(uk_airlines) for row in result: pp.pprint(row) except Exception as e: print(e) # ## Limit Results using OFFSET & LIMIT # The LIMIT clause specifies the maximum number of documents to be returned in a resultset by a SELECT statement. # # When you don’t need the entire resultset, use the LIMIT clause to specify the maximum number of documents to be returned in a resultset by a SELECT query. # # The OFFSET clause specifies the number of resultset objects to skip in a SELECT query. # # When you want the resultset to skip over the first few resulting objects, use the OFFSET clause to specify that number of objects to ignore. # # The LIMIT and OFFSET clauses are evaluated after the ORDER BY clause. # # If a LIMIT clause is also present, the OFFSET is applied prior to the LIMIT; that is, the specified number of objects is omitted from the result set before enforcing a specified LIMIT. # + # Select Airline Name & ICAO Codes for 10 Airlines by ICAO Code uk_airlines = "SELECT a.name, a.icao from `travel-sample`.inventory.airline a where country='United Kingdom' ORDER BY icao LIMIT 5" try: result = cluster.query(uk_airlines) print("Initial 5 Records") for row in result: pp.pprint(row) except Exception as e: print(e) uk_airlines = "SELECT a.name, a.icao from `travel-sample`.inventory.airline a where country='United Kingdom' ORDER BY icao LIMIT 5 OFFSET 5" try: result = cluster.query(uk_airlines) print("Next 5 Records") for row in result: pp.pprint(row) except Exception as e: print(e) # - # ## Aggregate Functions # Aggregate functions take multiple values from documents, perform calculations, and return a single value as the result. The function names are case insensitive. # # You can only use aggregate functions in SELECT, LETTING, HAVING, and ORDER BY clauses. When using an aggregate function in a query, the query operates as an aggregate query. # # + # Get the Count of Airlines per Country airline_counts = "SELECT COUNT(DISTINCT a.icao) AS airline_count, a.country \ FROM `travel-sample`.inventory.airline a \ GROUP BY a.country" try: result = cluster.query(airline_counts) for row in result: print(row) except Exception as e: print(e) # + # Get the Cities with more than 150 Landmarks city_landmarks = "SELECT city City, COUNT(DISTINCT name) LandmarkCount \ FROM `travel-sample`.inventory.landmark \ GROUP BY city \ HAVING COUNT(DISTINCT name) > 150" try: result = cluster.query(city_landmarks) for row in result: print(row) except Exception as e: print(e) # - # ## Exercise 4.4 # - Get the Count of Airports per Country sorted in descending order of Airports # + # Solution # - # ## Joins # N1QL provides joins, which allow you to assemble new objects by combining two or more source objects. 
# # + # Join the Airport Object with Destination Airport in Routes from SFO join_example = "SELECT * \ FROM `travel-sample`.inventory.route AS rte \ JOIN `travel-sample`.inventory.airport AS apt ON rte.destinationairport = apt.faa \ WHERE rte.sourceairport='SFO' \ LIMIT 5" try: result = cluster.query(join_example) for row in result: pp.pprint(row) except Exception as e: print(e) # + # Join Airlines with the Routes using the Airline ID join_example2 = 'SELECT * \ FROM `travel-sample`.inventory.route \ JOIN `travel-sample`.inventory.airline \ ON route.airlineid = META(airline).id \ WHERE airline.country = "France" \ LIMIT 3' try: result = cluster.query(join_example2) for row in result: pp.pprint(row) except Exception as e: print(e) # + # Find the destination airport of all routes whose source airport is in San Francisco # Join using sub query join_example3 = 'SELECT DISTINCT subquery.destinationairport \ FROM `travel-sample`.inventory.airport \ JOIN ( \ SELECT destinationairport, sourceairport \ FROM `travel-sample`.inventory.route \ ) AS subquery \ ON airport.faa = subquery.sourceairport \ WHERE airport.city = "San Francisco"\ LIMIT 10' try: result = cluster.query(join_example3) for row in result: print(row) except Exception as e: print(e) # - # ## Exercise 4.5 # 1. Select the Airline Names & ICAO Codes for Airlines operating from France # 2. Get the Count of Landmarks By Country # 3. Find the source airport of all routes whose destination airport is in San Francisco # # + # Solution 1 # + # Solution 2 # + # Solution 3 # - # ## Array Operations # Couchbase supports arrays as part of the documents and also provides a rich set of operations to work with arrays. # ## NEST # NEST performs a join across two buckets. But instead of producing an object for each combination of left and right hand inputs, NEST produces a single object for each left hand input, while the corresponding right hand inputs are collected into an array and nested as a single array-valued field in the result object. # # + # Nesting landmarks with the airport & routes nest_query = "SELECT * \ FROM `travel-sample`.inventory.route AS rte \ JOIN `travel-sample`.inventory.airport AS apt \ ON rte.destinationairport = apt.faa \ NEST `travel-sample`.inventory.landmark AS lmk \ ON apt.city = lmk.city \ LIMIT 2" try: result = cluster.query(nest_query) for row in result: pp.pprint(row) except Exception as e: print(e) # - # ## UNNEST # UNNEST allow you to take the contents of nested arrays and join them with their parent object. # + # Iterate over the reviews array and collects the author names of the reviewers who rated the rooms less than a 2 unnest_example = "SELECT RAW r.author \ FROM `travel-sample`.inventory.hotel \ UNNEST reviews AS r \ WHERE r.ratings.Rooms < 2 \ LIMIT 4" try: result = cluster.query(unnest_example) for row in result: print(row) except Exception as e: print(e) # - # ## Transactions # A transaction is an atomic unit of work that contains one or more operations. It is a group of operations that are either committed to the database together or they are all undone from the database. # # Couchbase Supports Distributed ACID Transactions using N1QL. It is currently available for use with the Java, .NET & C++ SDK. It will be added to the Python SDK in the future. # # More details about Transactions in Couchbase including samples in Java, you can refer to the [documentation](https://docs.couchbase.com/server/current/learn/data/transactions.html). 
# ## References # - [N1QL Reference](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html) # - [N1QL Tutorial](https://query-tutorial.couchbase.com/tutorial/#1) # - [N1QL Queries from Python SDK](https://docs.couchbase.com/python-sdk/current/howtos/n1ql-queries-with-sdk.html) # - [JOINs in N1QL](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/join.html) # - [N1QL Cheatsheet](https://docs.couchbase.com/files/Couchbase-N1QL-CheatSheet.pdf) # - [Transactions in Java](https://docs.couchbase.com/java-sdk/current/howtos/distributed-acid-transactions-from-the-sdk.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def sum(a,b): return (a+b) print(sum(50,20)) a=int(input("Enter your marks out of 500 ")) per=a/500*100 if per>=80: grade="O" elif per>=65: grade="A" elif per>=50: grade="B" elif per >=40: grade="C" else: grade="Fail" print(f'Grade is {grade}') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + language="javascript" # IPython.OutputArea.prototype._should_scroll = function(lines) { # return false; # } # + import matplotlib.pyplot as plt import numpy as np dt = 0.1 def draw_plot(measurements, mlabel=None, estimates=None, estlabel=None, title=None, xlabel=None, ylabel=None): xvals = np.linspace(0, dt * len(measurements), len(measurements)) plt.title(title, fontsize=12) xlabel and plt.xlabel("Time in seconds") ylabel and plt.ylabel("Distance to Wall in cm") ax = plt.subplot(111) ax.plot(measurements, label=mlabel) np.any(estimates) and estlabel and ax.plot(estimates, label=estlabel) box = ax.get_position() ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 1.1]) ax.legend(loc='upper center', bbox_to_anchor=(.5, -0.05), ncol=3) plt.show() def add_noise(data, size=20): noise = np.random.uniform(-1, 1, len(data)) * size return data + noise # - # ### Part A # + from kalman import predict, update, dot3 plt.rcParams["figure.figsize"] = [12, 9] wall_file = "RBE500-F17-100ms-Constant-Vel.csv" data = add_noise(np.loadtxt(wall_file, delimiter=","), 0) # Setup initial variables and matrices initial_pos = 2530 velocity = -10.0 variance = 10.0 dt = 0.1 F = np.array([[1., dt], [0., 1.]]) P = np.array([[100., 0], [0, 100.0]]) H = np.array([[1., 0.]]) R = np.array([[variance]]) Q = np.array([[0.1, 1], [1, 10.]]) x = np.array([initial_pos, velocity]).T predicted_xs = [] def run(x, P, R, Q, dt, zs): # run the kalman filter and store the results xs, cov = [], [] for z in zs: x, P = predict(x, P, F, Q) x, P = update(x, P, z, R, H) xs.append(x) cov.append(P) xs, cov = np.array(xs), np.array(cov) return xs, cov time_values = np.linspace(0, dt * len(data), len(data)) est_x, est_P = run(x, P, R, Q, dt, data) est_pos = [v[0] for v in est_x] est_vel = [v[1] for v in est_x] draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation", mlabel="Measurements", estlabel="Kalman Estimates", xlabel="Time in Seconds", ylabel="Distance to Wall in CM") plt.plot(time_values, est_vel) plt.xlabel("Time in seconds") plt.ylabel("Velocity in cm/sec") plt.title("Velocity Over Time") plt.show() pos_std = [p[0][0]**0.5 for p in est_P] vel_var = [p[1][1]**0.5 for p in est_P] pos_vel_corr = [p[0][1] for p 
in est_P] plt.plot(time_values, pos_std) plt.ylabel("Position StdDev") plt.xlabel("Time in Seconds") plt.title("Position StdDev Over Time") plt.show() plt.plot(time_values, vel_var) plt.ylabel("Velocity StdDev") plt.xlabel("Time in Seconds") plt.title("Velocity StdDev Over Time") plt.show() plt.plot(time_values, pos_vel_corr) plt.ylabel("Velocity-Position Correlation") plt.xlabel("Time in Seconds") plt.title("Velocity-Position Correlation Over Time") plt.show() # - # ### Part B # + data = add_noise(np.loadtxt(wall_file, delimiter=","), 0) # Setup initial variables and matrices initial_pos = 2530 velocity = -10.0 variance = 10.0 dt = 0.1 F = np.array([[1., dt], [0., 1.]]) P = np.array([[100., 0], [0, 100.0]]) H = np.array([[1., 0.]]) R = np.array([[variance]]) Q = np.array([[0.1, 1], [1, 10.]]) x = np.array([initial_pos, velocity]).T predicted_xs = [] def run(x, P, R=0, Q=0, dt=0.1, zs=None): # run the kalman filter and store the results xs, cov = [], [] for z in zs: x, P = predict(x, P, F, Q) S = dot3(H, P, H.T) + R n = z - np.dot(H, x) d = n*n / S if d < 9.0: x, P = update(x, P, z, R, H) xs.append(x) cov.append(P) xs, cov = np.array(xs), np.array(cov) return xs, cov time_values = np.linspace(0, dt * len(data), len(data)) est_x, est_P = run(x, P, R, Q, dt, data) est_pos = [v[0] for v in est_x] draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation", mlabel="Measurements", estlabel="Kalman Estimates", xlabel="Time in Seconds", ylabel="Distance to Wall in CM") est_vel = [v[1] for v in est_x] plt.plot(time_values, est_vel) plt.xlabel("Time in seconds") plt.ylabel("Speed in cm/sec") plt.title("Velocity Over Time") plt.show() pos_std = [p[0][0]**0.5 for p in est_P] vel_var = [p[1][1]**0.5 for p in est_P] pos_vel_corr = [p[0][1] for p in est_P] plt.plot(time_values, pos_std) plt.ylabel("Position StdDev") plt.xlabel("Time in Seconds") plt.title("Position StdDev Over Time") plt.show() plt.plot(time_values, vel_var) plt.ylabel("Velocity StdDev") plt.xlabel("Time in Seconds") plt.title("Velocity StdDev Over Time") plt.show() plt.plot(time_values, pos_vel_corr) plt.ylabel("Velocity-Position Correlation") plt.xlabel("Time in Seconds") plt.title("Velocity-Position Correlation Over Time") plt.show() # - # ### Part C # + data = add_noise(np.loadtxt(wall_file, delimiter=","), 0) def gaussian(x, mu, sig): return (1/(np.sqrt(2 * np.pi * np.power(sig, 2.)))) * np.exp(-np.power(x - mu, 2.) 
/ (2 * np.power(sig, 2.))) # Setup initial variables and matrices initial_pos = 2530 velocity = -10.0 variance = 10.0 dt = 0.1 F = np.array([[1., dt], [0., 1.]]) P = np.array([[100., 0], [0, 100.0]]) H = np.array([[1., 0.]]) R = np.array([[variance]]) Q = np.array([[0.1, 1], [1, 10.]]) x = np.array([initial_pos, velocity]).T xs = [] object_c = 0.2 wall_c = 0.8 lamb = 0.0005 def object_pdf(z): return object_c * lamb * np.exp(-lamb * z) def wall_pdf(z, wall_mean): return wall_c * gaussian(z, wall_mean, variance) def run(x, P, R, Q, dt, zs): # run the kalman filter and store the results xs, cov = [], [] for z in zs: x, P = predict(x, P, F, Q) prob_wall = wall_pdf(x[0], z) prob_obj = object_pdf(z) if prob_obj < prob_wall: x, P = update(x, P, z, R, H) xs.append(x) cov.append(P) xs, cov = np.array(xs), np.array(cov) return xs, cov old_est_x = est_x est_x, est_P = run(x, P, R, Q, dt, data) est_pos = [v[0] for v in est_x] draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation", mlabel="Measurements", estlabel="Kalman Estimates", xlabel="Time in Seconds", ylabel="Distance to Wall in CM") est_vel = [v[1] for v in est_x] plt.plot(time_values, est_vel) plt.xlabel("Time in seconds") plt.ylabel("Speed in cm/sec") plt.title("Velocity Over Time") plt.show() pos_std = [p[0][0]**0.5 for p in est_P] vel_var = [p[1][1]**0.5 for p in est_P] pos_vel_corr = [p[0][1] for p in est_P] plt.plot(time_values, pos_std) plt.ylabel("Position StdDev") plt.xlabel("Time in Seconds") plt.title("Position StdDev Over Time") plt.show() plt.plot(time_values, vel_var) plt.ylabel("Velocity StdDev") plt.xlabel("Time in Seconds") plt.title("Velocity StdDev Over Time") plt.show() plt.plot(time_values, pos_vel_corr) plt.ylabel("Velocity-Position Correlation") plt.xlabel("Time in Seconds") plt.title("Velocity-Position Correlation Over Time") plt.show() # - # ### Part D # # 1. Were any anomolous points processed by mistake? # - Kalman filter without outlier detection processed object readings. Both methods of outlier detection succesfully removed object readings. # 2. Were any valid points rejected? # - Neither outlier detection method rejected valid points. # 3. How can each of these methods fail? # - As long as the data supplied is linear and linearly separable these methods should work robustly. If any of these constraints are broken all 3 methods would likely fail. 
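# The outlier test in Part B gates each measurement on its squared innovation normalized by the innovation covariance, using a threshold of 9.0 (a 3-sigma gate). The cell below is a self-contained sketch of that check in plain NumPy, so it can be run without the local `kalman` module; the function name and the toy numbers are illustrative only.

# +
import numpy as np

def gate_measurement(z, x, P, H, R, gate=9.0):
    """Return True if measurement z passes the chi-square validation gate."""
    H = np.atleast_2d(H)
    R = np.atleast_2d(R)
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))     # normalized squared innovation
    return d2 < gate

# toy usage with the same state layout as above: [position, velocity]
x = np.array([2530.0, -10.0])
P = np.diag([100.0, 100.0])
H = np.array([[1.0, 0.0]])
R = np.array([[10.0]])
print(gate_measurement(np.array([2525.0]), x, P, H, R))   # near the prediction -> True
print(gate_measurement(np.array([1000.0]), x, P, H, R))   # far outlier -> False
# -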
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys # append to path the folder that contains the analytic scanner sys.path.append('../../GaiaLab/scan/analytic_scanner') import frame_transformations as ft from scanner import Scanner from satellite import Satellite from source import Source import constants as const import quaternion from agis import Agis from agis import Calc_source import agis_functions as af import helpers as helpers import analytic_plots as aplots import numpy as np import astropy.units as units import matplotlib.pyplot as plt import astropy.units as units # - t_init = 0 t_end = 365*5 my_dt = 1/24 # [days] gaia = Satellite(ti=t_init, tf=t_end, dt= my_dt) scanner = Scanner() class calc_src : """the data structure that is used for a calculated source""" def __init__(self,alpha,delta,varpi,muAlphaStar,muDelta): self.s_params = [alpha,delta,varpi,muAlphaStar,muDelta] self.mu_radial = 0.0 # the agis.py code is not that simple, basic functionalities such as source solver are not obvious, below is a proposal for changes. # + def compute_du_ds(p,q,r,q_l,t_l): """ params p,q,r : the vectors defining the frame associated to a source position at reference epoch params q_l,t_l : the attitude at time t_l returns : du_ds_SRS """ # Equation 73 r.shape = (3, 1) # reshapes r b_G = gaia.ephemeris_bcrs(t_l) tau = t_l - const.t_ep # + np.dot(r, b_G) / const.c # Compute derivatives du_ds_CoMRS = [p, q, af.compute_du_dparallax(r, b_G), p*tau, q*tau] # Equation 72 # should be changed to a pythonic map du_ds_SRS = [] for derivative in du_ds_CoMRS: du_ds_SRS.append(ft.lmn_to_xyz(q_l, derivative)) return np.array(du_ds_SRS) def computeScanAngle(p0,q0,z): """ Compute the scan direction theta = atan2(q0'z, -p0'z) param p0 : local East (increasing alpha) param q0 : local North (increasing delta if |delta_0|<90) param z : unit vector z obtained from the attitude quaternion at the time of transit See equation (1) in LL-061 and equation (13) """ return np.atan2(q0@z,-p0@z) def compute_design_equation(true_source,calc_source,observation_times): """ param true_source : the parameters of the true source param calc_source : the parameters of the estimated source param observation_times : scanner observation times returns : dR_ds_AL, dR_ds_AC, R_AL, R_AC, FA(phi_obs, zeta_obs,phi_calc, zeta_calc) """ alpha0 = calc_source.s_params[0] delta0 = calc_source.s_params[1] p0, q0, r0 = ft.compute_pqr(alpha0, delta0) n_obs = len(observation_times) R_AL = np.zeros(n_obs) R_AC = np.zeros(n_obs) dR_ds_AL = np.zeros((n_obs, 5)) dR_ds_AC = np.zeros((n_obs, 5)) FA = [] for j, t_l in enumerate(observation_times): # one should use the 2 telescopes option for the residuals q_l = gaia.func_attitude(t_l) phi_obs, zeta_obs = af.observed_field_angles(true_source, q_l, gaia, t_l, True) phi_calc, zeta_calc = af.calculated_field_angles(calc_source, q_l, gaia, t_l, True) FA.append([phi_obs, zeta_obs,phi_calc, zeta_calc]) R_AL[j] = (phi_obs-phi_calc) R_AC[j] = (zeta_obs-zeta_calc) # but not for the derivatives... 
phi_c, zeta_c = af.calculated_field_angles(calc_source, q_l, gaia, t_l, False) m, n, u = af.compute_mnu(phi_c, zeta_c) du_ds = compute_du_ds(p0,q0,r0,q_l,t_l) dR_ds_AL[j, :] = m @ du_ds.transpose() * helpers.sec(zeta_calc) dR_ds_AC[j, :] = n @ du_ds.transpose() return dR_ds_AL, dR_ds_AC, R_AL, R_AC, np.array(FA) def solve_AL(true_source,calc_source,observation_times): """ perform one step of the source solver using only along scan observations """ # get the design equation dR_ds_AL, dR_ds_AC, R_AL, R_AC, FA = compute_design_equation(true_source,calc_source,observation_times) # build the normal equation N = dR_ds_AL.transpose() @ dR_ds_AL rhs = dR_ds_AL.transpose() @ R_AL # solve the normal equation updates = np.linalg.solve(N,rhs) # update the calculated source parameters # take care of alpha calc_source.s_params[0] = calc_source.s_params[0] + updates[0] * np.cos(calc_source.s_params[1]) calc_source.s_params[1:] = calc_source.s_params[1:] + updates[1:] # - # the source model might need some clarification zero_color = lambda t: 0 sirio = Source("sirio", 101.28, -16.7161, 379.21, -546.05, -1223.14, 0, func_color=zero_color, mean_color=0 ) scanner.scan(gaia, sirio, t_init, t_end) scanner.compute_angles_eta_zeta(gaia, sirio) scanner_observation_times = scanner.obs_times aplots.plot_field_angles(source=sirio, sat=gaia, obs_times=scanner.obs_times, ti=t_init, tf=t_end, n=10000, limit=True, double_telescope=True); aplots.plot_star(source=sirio, satellite=gaia, obs_times=scanner.obs_times); aplots.plot_star_trajectory_with_scans(sat=gaia, source=sirio, obs_times=scanner.obs_times, num_ms_for_snapshot=2); def noise_calc_sources(s,noise = 1e-5): """ add noise to source parameters """ s.s_params[0] += noise s.s_params[1] += noise s.s_params[2] += -s.s_params[2]/10 s.s_params[3] += s.s_params[3]*0.1 s.s_params[4] += s.s_params[4]*0.1 # The mode updating set to 'source' means that one used a non realistic attitude for each observation time based on the source position. The documentation should be updated to be clearer. calc_s = Calc_source(obs_times=scanner_observation_times, source=sirio, mean_color=sirio.mean_color) noise_calc_sources(calc_s) calc_s.s_params-sirio.get_parameters()[:5] # The field angles values computed in *compute_design_equation* can be used to visualised the source position in the sky as seen in the SRS reference frame associated to the satellite. dR_ds_AL, dR_ds_AC, R_AL, R_AC, FA = compute_design_equation(sirio,calc_s,scanner_observation_times) # for the true source np.sqrt(R_AL@R_AL) # One can check that the solver is converging after 10 iterations of the source update. for i in range(0,10): solve_AL(sirio,calc_s,scanner_observation_times) calc_s.s_params-sirio.get_parameters()[:5] dR_ds_AL, dR_ds_AC, R_AL, R_AC, FA = compute_design_equation(sirio,calc_s,scanner_observation_times) np.sqrt(R_AL@R_AL) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Laboratory Exercise. Estimation. # + [markdown] slideshow={"slide_type": "notes"} # , # # Version 1.0 (Nov. 2018) # - import numpy as np import matplotlib.pyplot as plt from IPython.display import display, Math # Company *Like2Call* offers hosting services for call centers. In order to better dimension the staff of operator, the company has collected a lot of data about the activity of the service during 5 consecutive labour days. 
# # In particular, they have stored an array `t` containing the ordered timestamps of all calls received during this period, in hours. This variable can be found in file `dataset.npy` # + # # t = np.load('dataset.npy') # - # ### 1. Time between calls # # * [1.1] Plot the histogram of the timestamps # Histograms of call times # # # * [1.2] Generate an array `x_all` containing the succesive time beween calls, and plot the corresponding histogram. # + # # # - # ### 2. Parameter estimation. # # The company has decided to build a statistical model to characterize the activity in the hosted call centers. By looking at the histogram, it seems that the time between incoming calls may follow an exponential distribution # # $$ # p_{X|S}(x|s) = s \exp(−s x), \qquad x > 0 # $$ # # where random variable $X$ represents the time before a new call arrives, and $S$ is the parameter of such distribution. Thus, we will use the dataset to estimate parameter $s$. # # #### 2.1. Maximum likelihood # * [2.1]. Obtain the maximum likelihood estimator or $S$ based on the observations in `x_all`, and save it in variable `sML`. You will need to compute two variables that will be used several times along this section: # - $K$: The number of observations in `x_all` # - $z = \sum_{k=0}^{K-1} x^{(k)}$, where $x^{(k)}$ are the components of `x_all`. # + # z = # K = # sML = display(Math(r'\hat{s}_\text{ML} = ' + str(sML))) # - # * [2.2]. Plot the log of the likelihood as a function of $s$ (in the appropriate range of $s$) and verify that the ML estimate reaches the maximum. # + # # # - # #### 2.2. Bayesian estimation # # In order to apply Bayesian estimation methods, parameter $S$ is taken as a random variable with the following a priori model: # # $$ # p_S(s) = \exp(−s), \qquad s > 0. # $$ # # * [2.3.] Obtain the maximum a posteriori estimator of $S$ given $X$, and save it in variable `sMAP`. # + # sMAP = display(Math(r'\hat{s}_\text{MAP} = ' + str(sMAP))) # - # * [2.4]. Show, in the same plot, the prior and the posterior probability density functions of parameter $S$, as a function of $s$ (in the appropriate range of $s$) and verify that the MAP estimate reaches the maximum of the posterior. # + # # # - # The prior distribution describes the initial belief about $S$. The figure should show that, for the given prior, the true value of $S$ can be expected to be between 0 and 5. However, the data modifies our knowledge about $S$. After observing the data, we can expect that the true value of $S$ will be somewhere between 15 and 18. # * [2.5.] Obtain the minimum mean square error estimator of $S$ given $\mathbf{X}$ (i.e. given the data in `x_all`) and save it in variable `sMSE`. # + # sMSE = display(Math(r'\hat{s}_\text{MSE} = ' + str(sMSE))) # - # * [2.6.] Note the MAP and the MSE estimates are very similar because the posterior distribution is approximately (although not exactly) symmetric. Also, the MSE estimate is only slightly different from the ML estimate, because we have a large dataset and the influence of the prior distribution decreases when we have much empirical evidence. # # However, the Bayesian approach provides not only an estimate but a posterior distribution, that describes how much we know about the true value of parameter $S$ after the data observation. The variance of this distribution describes how far the true value of $S$ could be from the posterior mean. 
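# For reference, the closed forms implied by the model above can be written out directly (this is only a sketch, not the official solution of the exercises). With $K$ observations and $z = \sum_k x^{(k)}$, the log-likelihood is $K \log s - sz$ and the posterior combines it with the prior $\exp(-s)$:
#
# $$
# \hat{s}_\text{ML} = \frac{K}{z}, \qquad
# p_{S|{\bf X}}(s \mid {\bf x}) \propto s^{K} \exp\big(-s(z+1)\big), \qquad
# \hat{s}_\text{MAP} = \frac{K}{z+1}, \qquad
# \hat{s}_\text{MSE} = \frac{K+1}{z+1},
# $$
#
# since the posterior is a Gamma density with shape $K+1$ and rate $z+1$; its variance, $(K+1)/(z+1)^2$, is exactly the quantity requested as `mMSE` below.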
# # (Incidentally, note that, since $\hat{s}_\text{MSE}$ is the posterior mean, the conditional MSE, which is given by, # # $$ # \mathbb{E}\left\{(S-\hat{s}_\text{MSE})^2| {\bf z}\right\} # $$ # # is equal to the variance of the posterior distribution). # # Compute the Minimum MSE for the given data, and save it in variable `mMSE` # + print("The minimum MSE is given by ") # mMSE = display(Math(r'\text{MSE} = \frac{K+1}{(z +1)^2} = ' + str(mMSE) )) # - # * [2.7.] [**OPTIONAL**] Compute the probability that the true parameter was not further than two standard deviations from the posterior mean, that is # # $$ # P\left\{\hat{s}_\text{MSE} - 2 \sqrt{v_\text{MSE}} \le S \le # \hat{s}_\text{MSE} + 2 \sqrt{v_\text{MSE}}\right\} # $$ # Save it in variable `p` # + from scipy.stats import gamma # # display(Math(r'P\left\{\hat{s}_\text{MSE} - 2 \sqrt{v_\text{MSE}} \le S \le ' + r'\hat{s}_\text{MSE} + 2 \sqrt{v_\text{MSE}}\right\} = ' + str(p) )) # - # ### 3. An improved data model. # # #### 3.1. Temporal dynamics # # The analysis in Section 2 is grounded on the assumption that the time between incoming calls follows an exponential distribution. The histogram obtained in exercise [1.2] provides some experimental evidence in support of this assumption. # # However, the histogram computed in exercise [1.1.] also shows that the activity of the call center varies with the time of the day. Therefore, we can expect that the time between calls also depends on the time of the day. # * [3.1] Plot the time between calls, as a function of time. # # # #### 3.1. Hour-dependent model # # According to this, we can make a different model for each time of the day. To do so, we will keep the asumption that the time between incoming calls follows an exponential distribution # # $$ # p_{X|S,}(x \mid s) = s \exp(−s x), \qquad x > 0 # $$ # # but, now, we will assume that parameter $s$ can take a different value from hour to hour. Therefore, we must estimate a different parameter $S$ for every hour. # # To do so, we will need to split the data in 24 groups, one per hour, and compute specific variables $z$ and $K$ for each group. # # * [3.2] Split the dataset in 24 groups, assigning the data to each group depending on the hour of the starting time. Then, compute parameters $z$ and $K$ for each group, storing them in numpy arrays `z24` and `K24`. # + # # # Check if your variables are ok. # (Note that his is not a full test. Passing it does not # guarantee that variables have been correctly computed) if np.sum(K24) == len(x_all): print("Test for variable K passed.") else: print("Error in variable K.") if np.sum(z24) == np.sum(x_all): print("Test for variable z passed.") else: print("Error in variable z.") # - # * [3.3] Compute the ML and the MSE estimates for each hour. Store them in vectors `sML24` and `sMSE24` and plot them as a function of the hour in the day. # + # # # - # * [3.4] One may wonder if spliting the data in segments provides a better model for the time between calls. The joint data likelihood is a useful way to get a first quantitative evaluation of the new model. # # Compute the maximum log-likelihood for the joint model, and save it in variable `L24max`. Compare the result with the value of `Lmax` computed in [2.2]. # # To compute the maximum log-likelihood of the joint model, take into account that the observations are independent, so that `L24max` is the sum of the values of the maximum log-likelihood computed for every hour. 
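# The cell below is a sketch, not the official solution: assuming `z24` and `K24` were filled in as numpy arrays in [3.2], the per-hour maximum of $K \log s - sz$ is reached at $s = K/z$ and equals $K \log(K/z) - K$, so the joint maximum is the sum of the per-hour maxima.

# +
import numpy as np

valid = K24 > 0                                   # skip empty hour slots, if any
L24max = np.sum(K24[valid] * np.log(K24[valid] / z24[valid]) - K24[valid])
print('Maximum log-likelihood of the hour-dependent model: {}'.format(L24max))
# -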
# # + # # print('Maximum log-likelihood of the simple model: {}'.format(Lmax)) print('Maximum log-likelihood of the hour-dependent model: {}'.format(Lmax24)) # - # #### 3.3. Posterior distributions # # * [3.5] Plot the posterior probabilities for each hour slot # + # # # - # You should observe that, as expected, each posterior distribution is centered around its respective estimate $\hat{s}_\text{MSE}$, (i.e. around `sMSE24[0]`, `sMSE24[8]` and `sMSE24[16]`. print('sMSE24[0] = {}'.format(sMSE24[0])) print('sMSE24[8] = {}'.format(sMSE24[8])) print('sMSE24[16] = {}'.format(sMSE24[16])) # However, you can visually verify that the posterior distributions for $h=0$ and $h=8$ have less variance than that for $h=16$, why? This is because the prior distribution is in agreement with the data for $h=0$ and $h=8$ (in borth cases, $\hat{s}_\text{MSE}$ is smaller thatn 5). The larger variance for $h=16$ is a consequence of the higher uncertainty about $s$ created by the discrepancy between the prior and the observations. # # In any case, for any value of $h$, more data is always better than less data. We can observe that, for any value of $h$, the variance of the posterior distribution tends to decrease when more data are used. # * [3.6] Take $h$=16. For $d=1,...,5$, compute the variance of the posterior distribution of $s$ given only the data at hour $h$ up to day $d$. Plot the minimum MSE as a function of $d$ # # # * [3.7] [**OPTIONAL**] Show, in the same plot, the posterior distribution of parameter $s$ given only the data at hour $h$ and for all days up to $d$, for $d=1,\ldots, 5$ # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RBF-Network interpolation examples # + from scipy import * from scipy.linalg import norm, pinv from matplotlib import pyplot as plt import numpy as np class RBFN: def __init__(self, indim, numCenters, outdim, time): self.indim = indim self.outdim = outdim self.numCenters = numCenters #not sure if it is right self.centers = np.linspace(0, time, numCenters) # self.centers = [random.uniform(-1, 1, indim) for i in range(numCenters)] self.beta = 8 self.W = random.random((self.numCenters, self.outdim)) def _basisfunc(self, c, d): return exp(-self.beta * norm(c-d)**2) def _calcAct(self, X): # calculate activations of RBFs G = zeros((X.shape[0], self.numCenters), float) for ci, c in enumerate(self.centers): for xi, x in enumerate(X): G[xi,ci] = self._basisfunc(c, x) return G """def train(self, X, Y): # choose random center vectors from training set rnd_idx = random.permutation(X.shape[0])[:self.numCenters] self.centers = [X[i,:] for i in rnd_idx] # print ("center", self.centers) # calculate activations of RBFs G = self._calcAct(X) # print (G) # calculate output weights (pseudoinverse) self.W = dot(pinv(G), Y)""" def test(self, X): """ X: matrix of dimensions n x indim """ G = self._calcAct(X) Y = dot(G, self.W) return Y def setHyperParams(self, W): assert W.shape[0] == self.numCenters assert W.shape[1] == self.outdim self.W = W def _sigmoid(self, x): return 1 / (1 + np.exp(-x)) def calOutput(self, X): """ X: 1 x indim """ #calculate activation value G = self._calcAct(X) #sum up and normalize Y = dot(G, self.W)/sum(G) Y = self._sigmoid(Y) return Y if __name__ == '__main__': time = 10 rbfn = RBFN(1, 10, 3, time) W = random.random((rbfn.numCenters, rbfn.outdim)) rbfn.setHyperParams(W) point = np.array([5]) 
priorities = rbfn.calOutput(point) a = ones((1,6)) a = np.concatenate((a, ones((1,6)))) a = np.concatenate((a, ones((1,6)))) print(priorities) print(a) print(dot(a*priorities)) # print(W) '''x = np.arange(0, time, 0.005) y = zeros((0,3)) for point in x: point = np.array([point]) y = np.concatenate((y, rbfn.calOutput(point))) plt.figure(figsize=(12, 8)) # print(y.shape) y = np.transpose([y]) # print(y.shape) plt.plot(x, y[1], 'k-')''' # - # ## 1D interpolation example # + from scipy import * from scipy.linalg import norm, pinv a=ones(3) print(a) print(sum(a)) print(a/sum(a)) a = random.random((4,3)) a len(a) a.shape[0] a.shape[1] y = ones(0) y = np.append(arr=y, values=10) y = np.append(arr=y, values=1) len(y) y # - # ## 2D interpolation example # + import numpy as np from matplotlib import pyplot as plt def sigmoid(x): return 1 / (1 + np.exp(-x)) x = np.linspace(-10, 10, 10) print(x) Y = sigmoid(x) print(Y) plt.plot(x, Y, 'g-') a # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pyRHESSys # language: python # name: pyrhessys # --- import json import pkg_resources PARAMETER_PATH = pkg_resources.resource_filename( __name__, 'parameter_meta.json') with open(PARAMETER_PATH, 'r') as f: PARAMETER_META = json.load(f) FILE_PATH = pkg_resources.resource_filename( __name__, 'file_name.json') with open(FILE_PATH, 'r') as f: FILE_NAME = json.load(f) P = PARAMETER_META F = FILE_NAME P rhessys_run_cmd = ''.join(['{} -st {} -ed {}'.format(fman_dir, fman_dir), ' -v {}:{}'.format(settings_path, settings_path), ' -v {}:{}'.format(input_path, input_path), ' -v {}:{} '.format(output_path, output_path), " --entrypoint '/bin/bash' ", self.executable, ' -c "', run_cmd, '"']) rhessys_run_cmd = ''.join(['{} -st {} -ed {}'.format(P['version'], P['start_date'], P['end_date']), ' -b -newcaprise -capr {} -gwtoriparian - capMax {}'.format(P['capr'], P['capMax']), ' -slowDrain -leafDarkRespScalar {}'.format(P['leafDarkRespScalar']), ' -frootRespScalar {} -leafDarkRespScalar {}'.format(P['frootRespScalar'],P['leafDarkRespScalar']), ' -t tecfiles{} -w worldfiles{} -whdr worldfiles{}'.format(F['tecfiles'],F['worldfiles1'],F['worldfiles2']), ' -r flows{} -rtx {}'.format(F['flows'], P['rtz']), ' -pre output{} -s {} {} {} -sv {} {} -gw {} {}'.format(F['prefix'], P['rtz'], P['s1'], P['s2'], P['s3'], P['sv1'], P['sv2'], P['gw1'], P['gw2']) ]) rhessys_run_cmd # %pylab inline import pyrhessys import pyrhessys as rs r = rs.Simulation("aa", "bb") r.clim rs.Simulation. 
r.run("local") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import unittest import import_ipynb import pandas as pd import pandas.testing as pd_testing import tensorflow as tf class TestExercise04_01(unittest.TestCase): def setUp(self): import Exercise04_01 self.exercises = Exercise04_01 def test_reward(self): self.assertEqual(self.exercises.episode_rew > 150, True) # - suite = unittest.TestLoader().loadTestsFromTestCase(TestExercise04_01) unittest.TextTestRunner(verbosity=2).run(suite) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:geoproject] # language: python # name: conda-env-geoproject-py # --- # + import geopandas as gpd import pandas_bokeh import re import pandas as pd pandas_bokeh.output_notebook() from script.function3dto2d import remove_third_dimension capacity_list = ['conservative estimate Mt','neutral estimate Mt','optimistic estimate Mt'] move = ['COUNTRY','COUNTRYCOD','ID','geometry'] #load data storage_unit = gpd.read_file('data/storage_unit_map_lite.geojson') storage_unit.geometry = storage_unit.geometry.apply(remove_third_dimension) storage_unit.to_file('geodata verfication/storage_unit.geojson', driver = 'GeoJSON') trap_unit = gpd.read_file('data/trap_map_lite.geojson') trap_unit.geometry = trap_unit.geometry.apply(remove_third_dimension) complete_map = gpd.read_file('data/complete_map_37.geojson') complete_map.geometry = complete_map.geometry.apply(remove_third_dimension) complete_map.geometry = complete_map.geometry.buffer(0) # - # # Prepare Data for Visualization # + # prepare data of capacity detail storage_unit_summary = storage_unit.groupby('COUNTRYCOD').sum()[[i for i in storage_unit.columns if i not in move]] storage_unit_summary.columns = [x+'storage_unit' for x in storage_unit_summary.columns] trap_unit_summary = trap_unit.groupby('COUNTRYCOD').sum()[[i for i in trap_unit.columns if i not in move]] trap_unit_summary.drop(capacity_list,axis=1,inplace=True) storage_type_detail = storage_unit_summary.merge(trap_unit_summary, left_index= True, right_index= True, how= 'outer') def view_storage_type_detail(estimation_type = 'conservative'): # columns selection tmp_df = storage_type_detail[[x for x in storage_type_detail.columns if re.search(estimation_type,x)]].copy() tmp_df.columns = ['Storage_unit (normal density, geological formation to store CO2)', 'Aquifer (high density storage unit)', 'Oil (high density storage unit)', 'Gas (high density storage unit)'] for i in tmp_df.columns: tmp_df[i] = tmp_df[i]/1e3 tmp_df.plot_bokeh.bar(stacked = True,figsize=(900, 500), xlabel = 'country', ylabel = 'capacity Gt',title = estimation_type + ' per country (unit: (Gt) gigaton)') # + # total capacity summary = complete_map.groupby('COUNTRYCOD').sum()[capacity_list] summary.columns = [x.replace('Mt','') for x in summary.columns] capacity_list_ = list(summary.columns) summary = summary.reset_index() summary = pd.melt(summary, id_vars= ['COUNTRYCOD'],value_vars = capacity_list_) summary.value = summary.value/1e3 #Mt to Gt # print print('+'*20) print('total capacity of whole Europe unit: Gigaton') print('+'*20) print(summary.groupby('variable').sum()['value']) #--------------------------------------------------- # offshore capacity offshore = 
gpd.read_file('data/offshore_shapes.geojson') only_offshore = gpd.clip(complete_map, offshore) summary_position_holder = summary.copy() summary_position_holder.value = 0 summary_2 = only_offshore.groupby('COUNTRYCOD').sum()[capacity_list] summary_2.columns = [x.replace('Mt','') for x in summary_2.columns] capacity_list_ = list(summary_2.columns) summary_2 = summary_2.reset_index() summary_2 = pd.melt(summary_2, id_vars= ['COUNTRYCOD'],value_vars = capacity_list_) summary_2.value = summary_2.value/1e3 #Mt to Gt summary_2 = pd.concat([summary_2, summary_position_holder]).groupby(['COUNTRYCOD','variable']).sum().reset_index() print('\n\n') print('+'*20) print('total offshore capacity of whole Europe unit: Gigaton') print('+'*20) print(summary_2.groupby('variable').sum()['value']) # - # # Capacity of all European countries split by storage type view_storage_type_detail('neutral') # **external GB dataset's capacity estimation (only for United Kingdom):** # - total 61.768 Gt | storage_unit + aquifer: 52.749 Gt | oil: 2.678 Gt | gas: 5.997 Gt # # Total offshore capacity of each country under different estimations # ## vs # # Total capacity of each country under different estimations # + import seaborn as sns from matplotlib import pyplot as plt f,axes = plt.subplots(2,1,figsize = (20,12)) sub1 = sns.barplot(x = 'COUNTRYCOD', y='value', hue = 'variable',data= summary_2, ax = axes[0] ) sub1.set_yscale("log") sub1.title.set_text('Offshore Capacity') sub1.set_ylabel('Capacity (Gt) log scale',fontsize = 20) #sub1.set_xlabel('Country',fontsize = 20) sub2 = sns.barplot(x = 'COUNTRYCOD', y='value', hue = 'variable',data= summary, ax = axes[1] ) sub2.set_yscale("log") sub2.title.set_text('Total Capacity') sub2.set_ylabel('Capacity (Gt) log scale',fontsize = 20) #sub2.set_xlabel('Country',fontsize = 20) plt.show() # y in log scale change the name of y axis # - # # Storage Map pandas_bokeh.output_notebook() #pandas_bokeh.output_file("Interactive storage unit.html") complete_map = complete_map.sort_values('conservative estimate Mt',ascending= True) complete_map.plot_bokeh( figsize=(900, 600), simplify_shapes=5000, dropdown=capacity_list, colormap="Viridis", hovertool_columns=capacity_list+['ID'], colormap_range = (0,100) ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- """ This is a python notebook to create a stock screener. The finished product will accept a set of parameters and output the stocks that meet those requirements. Currently this scanner follows the "Swing Traders Checklist" @ https://www.swing-trade-stocks.com/swing-traders-checklist.html. The goal is to deploy this with a suite of Azure Functions and use it in a suite of other related financial-python projects, such as twitter sentiment and FinViz sentiment analysis, using ML-clustering to define levels of support and resistance, and other ideas I might think of at 2am on a Sunday. 
""" import yahoo_fin.stock_info as si import pandas as pd from datetime import datetime, timedelta from dateutil.relativedelta import relativedelta import talib import requests import json import openpyxl import psycopg2 as pg import StockSentiment_FinViz.ipynb # + #Get connected conn = pg.connect("dbname=StonksGoUp user=postgres host=localhost password=") cur = conn.cursor() #read in the finnhub.io token # This will be used until the ML aspect is tested and complete with open('local_settings.txt') as f: json_local = json.load(f) finn_token = json_local["finn_token"] #define scanner parameters: low = float(2.5) high = float(25.0) to = int(datetime.strptime(datetime.today().strftime("%d/%m/%Y") + " +0000", "%d/%m/%Y %z").timestamp()) fro = int((datetime.strptime(datetime.today().strftime("%d/%m/%Y") + " +0000", "%d/%m/%Y %z")-relativedelta(days=300)).timestamp()) earnings_period = int((datetime.strptime(datetime.today().strftime("%d/%m/%Y") + " +0000", "%d/%m/%Y %z")+relativedelta(days=5)).timestamp()) capital = 100000 risk = 0.05 get_tickers = """ SELECT ticker from stockdata WHERE ticker IN (SELECT ticker from tickers) GROUP BY ticker """ cur.execute(get_tickers, conn) scanner_list = list([i[0] for i in cur.fetchall()]) print(scanner_list[:20]) # get sentiment table get_sentiment = """ SELECT ticker, round(avg(score), 2) as avg_sentiment FROM sentiment WHERE timestamp > current_date - interval '1 week' GROUP BY ticker """ df_sentiment = pd.read_sql(get_sentiment, conn) print(df_sentiment) # + # Functions for Technical Analysis def get_hist(ticker, conn): # Get data from web-scraping # req = f"https://query1.finance.yahoo.com/v7/finance/download/{ticker}?period1={fro}&period2={to}&interval=1d&events=history" # data = pd.read_csv(req) # data.index = data["Date"].apply(lambda x: pd.Timestamp(x)) # data.drop("Date", axis=1, inplace=True) # Get data from database get_data = f"""SELECT ticker ,quotedate as "Date" ,open as "Open" ,high as "High" ,low as "Low" ,close as "Close" ,adjclose as "Adj Close" ,volume as "Volume" FROM stockdata WHERE ticker = '{ticker}' ORDER BY quotedate ASC""" data = pd.read_sql(get_data, conn) return data def get_indicators(data): # Get MACD data["macd"], data["macd_signal"], data["macd_hist"] = talib.MACD(data['Close']) # Get SMA10 and SMA30 data["sma10"] = talib.SMA(data["Close"], timeperiod=10) data["sma30"] = talib.SMA(data["Close"], timeperiod=30) # Get MA200 data["sma200"] = talib.SMA(data["Close"], timeperiod=200) # Get RSI data["rsi"] = talib.RSI(data["Close"]) return data def analyze_chart(indicated_data, df_analyzed): #quote_price = indicated_data.loc[:,'Adj Close'].iloc[-1] # Check RSI if indicated_data.loc[:,'rsi'].iloc[-1] < 35: rsi = "Oversold" elif indicated_data.loc[:,'rsi'].iloc[-1] > 65: rsi = "Overbought" else: rsi = None # Check SMA Trend if indicated_data.loc[:,'sma30'].iloc[-1]indicated_data.loc[:,'sma10'].iloc[-1]: trend = "Downtrend" else: trend = None # Check 200SMA if indicated_data.loc[:,'Open'].iloc[-1]>indicated_data.loc[:,'sma200'].iloc[-1]: above200 = True else: above200 = None # Check for Earnings try: if pd.isnull(si.get_quote_table(ticker)['Earnings Date']): earnings_date = None elif datetime.strptime(si.get_quote_table(ticker)['Earnings Date'].split(' - ')[0], '%b %d, %Y'): earnings_date = datetime.strptime(si.get_quote_table(ticker)['Earnings Date'].split(' - ')[0], '%b %d, %Y') else: earnings_date = datetime.strptime(si.get_quote_table(ticker)['Earnings Date'], '%b %d, %Y') except: earnings_date = None # Check for support or 
resistance req = requests.get(f'https://finnhub.io/api/v1/scan/support-resistance?symbol={ticker}&resolution=D&token={finn_token}') supp_res = None supp_res_price = float() for level in req.json()['levels']: if float(level)*0.90 < indicated_data.loc[:,'Open'].iloc[-1] < float(level)*1.10: if indicated_data.loc[:,'Open'].iloc[-1] >= float(level): supp_res = "support" supp_res_price = round(level, 2) elif indicated_data.loc[:,'Open'].iloc[-1] <= float(level): supp_res = "resistance" supp_res_price = round(level, 2) else: supp_res = "Indeterminant" supp_res_price = None else: pass # Check TAZ # Check for Pullback if indicated_data.loc[:,'Adj Close'].iloc[-1]<= indicated_data.loc[:,'Adj Close'].iloc[-2]<= indicated_data.loc[:,'Adj Close'].iloc[-3]: pullback = True else: pullback = None # Add latest sentiment df_analyzed = df_analyzed.append({'Ticker' : ticker, 'Open' : round(indicated_data.loc[:,'Open'].iloc[-1]), 'Quote' : round(indicated_data.loc[:,'Adj Close'].iloc[-1]), 'RSI' : rsi, 'Trend' : trend, 'Above200' : above200, 'Earnings' : earnings_date, 'Supp/Res' : supp_res, 'S/R Price' : supp_res_price, 'Pullback' : pullback, 'Sentiment' : sentiment }, ignore_index=True) return df_analyzed def analyze_position(df_analyzed, capital, risk): position_risk = capital*risk df_analyzed['Entry'] = df_analyzed['S/R Price'] df_analyzed['Stoploss'] = df_analyzed['S/R Price'].astype(float).apply(lambda x: x * float(0.95)) df_analyzed['risk_per_share'] = df_analyzed['Entry'] - df_analyzed['Stoploss'] df_analyzed['position_size'] = round(position_risk/df_analyzed['risk_per_share']) return df_analyzed # + scanner_list = ['AAL', 'AES', 'AMCR', 'APA', 'BAC'] df_analyzed = pd.DataFrame(columns=['Ticker', 'Open', 'Quote', 'RSI', 'Trend', 'Above200', 'Earnings', 'Supp/Res', 'S/R Price', 'Pullback']) for ticker in scanner_list: print(ticker) # Get historical data data = get_hist(ticker, conn) # Add indicator data indicated_data = get_indicators(data) # Analyze stonks: df_analyzed = analyze_chart(indicated_data, df_analyzed) df_analyzed = analyze_position(df_analyzed, capital, risk) df_analyzed = df_analyzed[df_analyzed['Above200'] == True] df_analyzed = df_analyzed[df_analyzed['RSI'] != None] df_analyzed = df_analyzed[df_analyzed['Trend'] != None] df_analyzed = df_analyzed[df_analyzed['Earnings'] != None] df_analyzed = df_analyzed[df_analyzed['Supp/Res'] != None] df_analyzed = df_analyzed[df_analyzed['Pullback'] != None] print(df_analyzed) #df_analyzed.to_excel('Output.xlsx', ignore_index=True) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Quero imprimir todos os valores da Sequência de Fibonacci até a posição que eu vou especificar na Sequência de Fibonacci (SF). # # Os números de Fibonacci compõem a seguinte sequência: # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, ... # # Em termos matemáticos: # Fn = Fn-1 + Fn-2 # ## Pseudocódigo: # # 1. n = a posição desejada na SF. # 2. n1 = valor da primeira posição na SF. # 3. n2 = valor da segunda posição na SF. # 4. count = contador. Serve para saber quando chegamos ao fim da SF até o valor de n. # 5. Se n < 0, imprima mensagem na tela indicando que n não pode ser negativo, pois não temos posição negativa na SF. # 6. Se n == 1, imprime n1. # 7. Caso as condições 5 e 6 não forem satisfeitas, repetimos os passos abaixo enquanto count < n: # 7.1. 
Imprime n1 na tela. # 7.2. Próximo valor = n1 + n2. # 7.3. n1 = n2. # 7.4. n2 = valor calculado em 7.2. # 7.5. Incrementa o contador. # # ## Iniciando a programação # + # 1. n = a posição desejada na SF. (Devo informar até qual posição desejo gerar a SF) n = input("Informe até qual posição você deseja obter a Sequência Fibonacci:") n = int(n) # 2. n1 = valor da primeira posição na SF. (O valor de n1 = 0, porque o primeiro número da Sequência de Fibonacci é 0). n1 = 0 # 3. n2 = valor da segunda posição na SF. (O valor de n2 = 1, porque o segundo número da Sequência de Fibonacci é 1) n2 = 1 # 4. count = contador. Serve para saber quando chegamos ao fim da SF até o valor de n. count = 0 # 5. Se n < 0, imprima mensagem na tela indicando que n não pode ser negativo, pois não temos posição negativa na SF. if n < 0: print("A posição não pode ser negativa.") # 6. Se n == 1, imprime n1. elif n == 1: print("A Sequência de Fibonacci até ", n, " :") print(n1) # 7. Caso as condições 5 e 6 não forem satisfeitas, repetimos os passos abaixo enquanto count < n: else: print("A Sequência de Fibonacci até ", n, " :") while count < n: # 7.1. Imprime n1 na tela. print(n1, end = ',') # 7.2. Próximo valor = n1 + n2. nth = n1 + n2 # 7.3. n1 = n2. n1 = n2 # 7.4. n2 = valor calculado em 7.2. n2 = nth # 7.5. Incrementa o contador. count += 1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] toc="true" # # Table of Contents #
    # + import epistasis as epi import pandas as pd import numpy as np import scipy as scipy import sklearn.decomposition import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns from matplotlib import rc import os rc('text', usetex=True) rc('text.latex', preamble=r'\usepackage{cmbright}') rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']}) # %matplotlib inline # This enables SVG graphics inline. # %config InlineBackend.figure_formats = {'png', 'retina'} # JB's favorite Seaborn settings for notebooks rc = {'lines.linewidth': 2, 'axes.labelsize': 18, 'axes.titlesize': 18, 'axes.facecolor': 'DFDFE5'} sns.set_context('notebook', rc=rc) sns.set_style("dark") mpl.rcParams['xtick.labelsize'] = 16 mpl.rcParams['ytick.labelsize'] = 16 mpl.rcParams['legend.fontsize'] = 14 # + genmap = pd.read_csv('../sleuth/rna_seq_info.txt', sep='\t', comment='#') frames = [] for root, dirs, files in os.walk("../sleuth/sleuth_strains"): for file in files: if file == 'lrt.csv': continue strain = file[:-4].replace('_', '-') df = pd.read_csv(root + '/' + file, sep=',') df.sort_values('target_id', inplace=True) df['strain'] = strain.replace('b-', '') df['genotype'] = genmap[genmap.strain == file[:-4]].genotype.unique()[0].replace('b_', '').replace('_', '-') frames += [df] tidy = pd.concat(frames) tidy.dropna(subset=['ens_gene', 'b', 'qval'], inplace=True) tidy['absb'] = tidy.b.abs() tidy.sort_values(['target_id'], ascending=True, inplace=True) # - q=0.1 # # Defining the hypoxia response. See our paper # + hyp_response_pos = epi.find_overlap(['vhl1', 'egl9', 'rhy1', 'egl9-vhl1'], tidy[tidy.b > 0], col='genotype') hyp_response_neg = epi.find_overlap(['vhl1', 'egl9', 'rhy1', 'egl9-vhl1'], tidy[tidy.b < 0], col='genotype') either_or = (((tidy.b < 0) & (tidy.qval < q)) | (tidy.qval > q)) hyp_response_pos = tidy[(tidy.target_id.isin(hyp_response_pos)) & ((tidy.genotype == 'egl9-hif1') & either_or)].target_id.values.tolist() # do the same for the negative set either_or = (((tidy.b > 0) & (tidy.qval < q)) | (tidy.qval > q)) hyp_response_neg = tidy[(tidy.target_id.isin(hyp_response_neg)) & (tidy.genotype == 'egl9-hif1') & either_or].target_id.values.tolist() # get the list hyp_response = list(set(hyp_response_neg + hyp_response_pos)) hyp = tidy[(tidy.target_id.isin(hyp_response)) & (tidy.genotype == 'egl9') ].copy().sort_values('qval') # annotate whether they are candidates for direct or # indirect regulation. def annotate(x): if x > 0: return 'candidate for direct regulation' else: return 'candidate for indirect regulation' # annotate hyp['regulation'] = hyp.b.apply(annotate) cols = ['target_id', 'ens_gene', 'ext_gene', 'b', 'qval', 'regulation'] hyp[cols].to_csv('../input/hypoxia_response.csv', index=False) # get the list of gene IDs as a numpy array. 
hyp_response = tidy[tidy.target_id.isin(hyp_response)].ens_gene.unique() print('There are {0} genes in the predicted hypoxia response'.format(len(hyp_response))) # - # # Defining the Dpy phenotype response embryonic = epi.find_overlap(['dpy7', 'dpy10', 'unc54', 'clk-1'], tidy, col='genotype') dpy = epi.find_overlap(['dpy7', 'dpy10'], tidy, col='genotype', q=0.01) dpy = tidy[tidy.target_id.isin(dpy) & (~tidy.target_id.isin(embryonic))].copy() print(len(dpy.ens_gene.unique()), 'genes found Dpy') dpy[dpy.genotype == 'dpy7'][['b', 'qval', 'target_id', 'ens_gene', 'ext_gene']].to_csv('../input/dpy_geneset.csv', index=False) # # Defining the Ras response # + # ras_common = epi.find_overlap(['let60', 'let60.gf'], tidy, col='genotype') # & (~tidy.target_id.isin(ras_common) let60 = tidy[(tidy.genotype == 'let60') & (tidy.qval < 10**-2)] let60[['b', 'qval', 'target_id', 'ens_gene', 'ext_gene']].to_csv('../input/ras_geneset.csv', index=False) let60gf = tidy[(tidy.genotype == 'let60.gf') & (tidy.qval < 10**-2)] let60gf[['b', 'qval', 'target_id', 'ens_gene', 'ext_gene']].to_csv('../input/rasgf_geneset.csv', index=False) print('{0} genes found in let-60(lf)'.format(len(let60))) print('{0} genes found in let-60(gf)'.format(len(let60gf))) # - # # Defining the Wnt response wnt = tidy[(tidy.genotype == 'bar1') & (tidy.qval < 10**-2)] wnt[['b', 'qval', 'target_id', 'ens_gene', 'ext_gene']].to_csv('../input/wnt_geneset.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Index # - File I/O # - OS Module # # File manipulation # make a file my_file = open('paragraph.txt', 'w') my_file.write('This is python file\n') my_file.write('This is python file') my_file.close() # open a file my_file = open('paragraph.txt', 'r') lines = my_file.readlines() print('type:', type(lines)) print(lines) my_file.close() # + # best practics for file IO subjects = ['CSE', 'ETE', 'EEE', 'ME', 'CE'] with open('subject.txt', 'w') as f: for subject in subjects: f.write(subject + '\n') print('text from file:') with open('subject.txt', 'r') as f: for line in f: print(line) # exclude new line print('exlude new line:') with open('subject.txt', 'r') as f: for line in f: print(line.strip('\n')) # - with open('subject.txt', 'r') as f: print('read 1\'st time') for line in f: print(line.strip('\n')) print('read 2\'st time') for line in f: print(line.strip('\n')) with open('subject.txt', 'r') as f: print(type(f)) # # Python Standered Modules # ## os # + import os current_working_dir = os.getcwd() print(current_working_dir) # - os.listdir(current_working_dir) # make some folder using os folders = ['train', 'test'] for folder in folders: os.makedirs(folder) print(folder, ' -created.') # make some folder using os # if file already exists then what happend ? 
folders = ['train', 'test'] for folder in folders: os.makedirs(folder) print(folder, ' -created.') # solution folders = ['train', 'test'] for folder in folders: if not os.path.exists(folder): os.makedirs(folder) print(folder, ' -created.') else: print(folder, ' -already exists.') # + class_names = ['cat', 'dog'] for _class in class_names: train_path = os.path.join('train', _class) print(_class) print(' ', train_path) if not os.path.exists(train_path): os.makedirs(train_path) print(' creation [ Done ]') # - train_cat_absulote_path = os.path.join(os.getcwd(), 'train/cat') print(train_cat_absulote_path) # get size of the folder os.path.getsize(train_cat_absulote_path) # using shell command # !du -sh /home/menon/development/path-to-sl/python3_intro/train/cat import os # !pwd # run system command using os sys_command = 'pwd' print(os.system("ls")) # !ls -ltr root = 'test_os_walk' i = 0 for r, d, f in os.walk(root): spacing = len(r.split('/')) - 1 print('\t' * spacing, 'root :', r) print('\t' * spacing, 'folder:', d) print('\t' * spacing, 'file :', f) i += 1 # + # os.remove(path_to_remove) # os.rename(old_name, new_name) # - # ## glob # - Todo # ## shutil # + import shutil # shutil.rmtree(path/to/dir) # shutil.copy(src, des) # shutil.move(src, des) # - # # Try Catch Block def calculate(): data = [1, 4, 5, 0, 10, 500] n = len(data) for i in range(n): print('{} / {} = {}'.format(i, data[i], i / data[i])) calculate() def calculate(): data = [1, 4, 5, 0, 10, 500] n = len(data) for i in range(n): try: print('{} / {} = {}'.format(i, data[i], i / data[i])) except Exception as e: print('Exception is:', e) calculate() # ### Exceptions in Python # -https://docs.python.org/3/library/exceptions.html#concrete-exceptions # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="RChBp_OArpe1" # https://colab.research.google.com/github/tensorpig/learning_tensorflow/blob/master/Lists%2C_Arrays%2C_Tensors%2C_Dataframes%2C_and_Datasets.ipynb#scrollTo=0-i0PylHrjWs # + id="1E6XpeNUs5wU" import pandas as pd # + id="G7-fmfvOsNpF" outputId="7a2117f1-f923-4e1d-f40a-71205579f64d" colab={"base_uri": "https://localhost:8080/"} data = [[1,2,3],[4.0,5.0,6.0],['100','101','102']] data # + id="I6kCAF6JuN__" outputId="8ea51898-8fd1-4642-a85f-d6e3eddffc91" colab={"base_uri": "https://localhost:8080/", "height": 143} data_df_raw = pd.DataFrame(data=data) data_df = data_df_raw.T data_df.columns=['legs','weight','version'] data_df # + [markdown] id="vq9xCbCKU74y" # Let's pretend we have a simple regression like problem. We start out with 3 features describing a robotic spider we're building. For example: number of legs (feature 1), weight (feature 2), and version number (feature 3). Say that we so far built three prototype robots, so have 3 values for each feature. 
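# Quick check (illustrative, not in the original notebook): because the raw frame mixes an int, a float and a string down each column, every column ends up with dtype `object`, and the transpose keeps it that way; the dict-based construction in the next cell preserves per-column dtypes (int64 / float64 / object) instead.

# +
print(data_df.dtypes)
# -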
# + id="Sl8TiMIFyldH" outputId="e452c683-781b-40f6-a360-88b002be2360" colab={"base_uri": "https://localhost:8080/", "height": 143} data_dict = {'legs':[1,2,3], 'weight':[4.0,5.0,6.0], 'version':['100','101','102']} data_df_dict = pd.DataFrame(data=data_dict) data_df_dict # + id="RTsI4YLuT9rQ" colab={"base_uri": "https://localhost:8080/"} outputId="8d58ddf1-0ded-46ab-a3c0-d048fecfa624" feature1 = [1,2,3] feature2 = [4.0,5.0,6.0] feature3 = ['100','101','102'] print(type(feature1)) # + id="VDzx6ADRutvL" outputId="e6c6d593-f532-400d-a847-985a4e179983" colab={"base_uri": "https://localhost:8080/"} data_df['legs'].tolist() # + id="hiJJnJaVxo8H" outputId="9c940d0d-41e3-46d4-9a26-377ddc04efb1" colab={"base_uri": "https://localhost:8080/"} data_df.iloc[0] # + [markdown] id="RY27QLAxV6HK" # We'll look at the various different data structures you will probably run into when doing ML/AI in pyhton and tensorflow. Combining the features into matrices etc. Starting from basic python lists and progressing up to keras Datasets which you will typically feed into your neural network. # # First up: the basic python LIST # + id="A9YCFqIcURVi" list2d = [feature1, feature2, feature3] print(type(list2d)) print(list2d) print('({},{})'.format(len(list2d),len(list2d[0]))) #nr of rows and cols print(list2d[0]) #first row print([row[0] for row in list2d]) #first col print(list2d[0][0]) # value at 0,0 print([[row[i] for row in list2d] for i in range(len(list2d[0]))]) # transpose to make more like excel sheet # + [markdown] id="Mm_xYpC-UWJ7" # A python list is a collection of any data types. The items in a list can be lists again, and there are no requirements for the items in a list to be of the same type, or of the same length. # # There is also the Tuple, which has () around the feautes instead of []. A Tuple works hte same, but once creatd, cannot be changed. # # Next up the Numpy ARRAY # + id="pAcQ8owJUaiG" import numpy as np array2d = np.array([feature1, feature2, feature3], dtype=object) print(type(array2d)) print(array2d) print(array2d.shape) #nr of rows and cols print(array2d[0,:]) #first element/row = array, could also be just array2d[0] print(array2d[:,0]) #first column, or actually first element from each 1d array in the 2d array print(array2d[0,0]) # value at 0,0 print(array2d.transpose()) #more like excel sheet # + [markdown] id="lU79BtIsUcQz" # A numpy array expects all items to be of the same type. If the dtype=object is not used above, all of the values will be converted to strings as this is the minimum type that can hold all values. A numpy array can handle features of different length, but then each element in the array will be of type 'list', so no direct indexing like you would expect from a matrix. # # Next up the Pandas DATAFRAME # + id="c_GTwMtGUfdK" import pandas as pd dataframe = pd.DataFrame() dataframe['feature1'] = feature1 dataframe['feature2'] = feature2 dataframe['feature3'] = feature3 print(type(dataframe)) print(dataframe) print(dataframe.shape) print(dataframe.iloc[0].tolist()) # first row, without .tolist() it also shows the column headers as row headers. You can also use loc[0], where 0 is now value in the index column (same as row number here) print(dataframe['feature1'].tolist()) #first column, without .tolist() it also shows the index. You can also use .iloc[:,0] print(dataframe.iloc[0,0]) #value at 0,0 # + [markdown] id="jFN7pPxmUhZh" # A Pandas dataframe is basically an excel sheet. It can handle features with different datatypes, but not different lengths of feature arrays. 
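# A side demonstration (not in the original notebook) of the "same length" requirement: plain lists of different lengths are rejected, while pandas Series are aligned on their index and the shorter one is padded with NaN.

# +
import pandas as pd

try:
    pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0]})
except ValueError as err:
    print('ValueError:', err)

padded = pd.DataFrame({'a': pd.Series([1, 2, 3]), 'b': pd.Series([4.0, 5.0])})
print(padded)
# -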
# # Next up TENSORs # + id="MF5N8S3QUiPZ" import tensorflow as tf feature3int = [int(x) for x in feature3 ] # map string values to numerical representation (in this case the string is a number so easy) tensorRank2 = tf.constant([feature1, feature2, feature3int], dtype=float) print(type(tensorRank2)) print(tensorRank2) print(tensorRank2.shape) print(tensorRank2[0,:].numpy()) #first row, without .numpy() a tensor object is returned. Could also use just [0] print(tensorRank2[:,0].numpy()) #first col print(tensorRank2[0,0].numpy()) # value at 0,0 print(tf.transpose(tensorRank2)) # more like excel sheet # + [markdown] id="BGx2Sl-wUmWC" # Tensors are n-dimensional generalizations of matrices. Vectors are tensors, and can be seen as 1-dimensional matrices. All are represented using n-dimensional arrays with a uniform type, and features with uniform length. I had to convert feature3 list to int, although I could also have converted feature1 and fature2 lists to strings. # # Next up DATASETs # + id="pio3BvloCGSx" feature1f = [float(x) for x in feature1 ] # map string values to numerical representation feature3f = [float(x) for x in feature3 ] # map string values to numerical representation dataset = tf.data.Dataset.from_tensor_slices([feature1f, feature2, feature3f]) print(type(dataset)) print(dataset.element_spec) print(dataset) print(list(dataset.as_numpy_iterator())) print(list(dataset.take(1).as_numpy_iterator())[0]) #first "row" print(list(dataset.take(1).as_numpy_iterator())[0][0]) # value at 0,0 # + [markdown] id="9znu97SGC_B_" # A Dataset is a sequence of elements, each element consisting of one or more components. In this case, each element of the Dataset is a TensorSliceDataset of shape (3,) which, when converted to a list, is shown to wrap around an array of 3 floats as expected. # # A Dataset is aimed at creating data pipelines, which get data from somewhere, process and transform it (typically in smaller batches), and then output it to a neural network (or somewhere else). A main goal of such a piepline is to avoid getting (all) the data in memory and enable large data sets to be handled in smaller peices. As such, getting values for specific elements in the dataset is not what Dataset are built for (and it shows). # + id="wmz8wFc4KpfO" datasett = tf.data.Dataset.from_tensor_slices((feature1, feature2, feature3)) print(type(datasett)) print(datasett.element_spec) print(datasett) print(list(datasett.as_numpy_iterator())) # + [markdown] id="5I3rF17HOue2" # If you create a Dataset from a tuple of arrays, instead of an array of arrays, you can see each element is now a tuple of 3 TensorSpec of different type and shape () which can be seen wrap around a tuple for transposed feature values. # # This shows that from_tensor_slices() "slices" the tensors along the first dimension # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Configuration # _Initial steps to get the notebook ready to play nice with our repository. Do not delete this section._ # Code formatting with [black](https://pypi.org/project/nb-black/). 
# %load_ext lab_black # + import os import pytz import glob import pathlib this_dir = pathlib.Path(os.path.abspath("")) data_dir = this_dir / "data" # - import requests import pandas as pd import json from datetime import datetime from slugify import slugify # ## Download # Retrieve the page url = "https://services7.arcgis.com/zaLZMEOGUnUT78nG/ArcGIS/rest/services/COVID-19%20Public%20Dashboard%20Data%20V3/FeatureServer/11/query?where=1%3D1&objectIds=&time=&resultType=none&outFields=*&returnIdsOnly=false&returnUniqueIdsOnly=false&returnCountOnly=false&returnDistinctValues=false&cacheHint=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&having=&resultOffset=&resultRecordCount=&sqlFormat=none&f=pjson&token=" r = requests.get(url) data = r.json() # ## Parse # Get latest from timeseries latest = data["features"][-1] df = ( pd.DataFrame.from_dict( latest["attributes"], orient="index", columns=["confirmed_cases"] ) .reset_index() .rename(columns={"index": "area"}) ) # Create list of uncleaned city names and remove unneeded ones cities_list = list(df[df["area"].str.contains("P_")]["area"]) cities_list.remove("P_Total") cities_list.remove("P_Daily_Cases") cities_list.remove("P_Unk") # Trim down the dataframe based on the list def prep_df(df, list): df = df[df["area"].isin(list)] df["area"] = df["area"].astype(str) df["area"] = df["area"].str[2:] return df trim_df = prep_df(df, cities_list) # Convert camel case to regular def change_case(str): res = [str[0]] for c in str[1:]: if c in ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"): res.append(" ") res.append(c) else: res.append(c) return "".join(res) trim_df["area"] = trim_df["area"].apply(change_case) # Fix truncated names def clean_city_names(s): if s in df: return s.strip() else: s = s.replace("Green V Lake", "Green Valley Lake") s = s.replace("Lucerne V", "Lucerne Valley") s = s.replace("Newberry S", "Newberry Springs") s = s.replace("Pinon Hills", "Piñon Hills") s = s.replace("R C", "Rancho Cucamonga") s = s.replace("R Springs", "Running Springs") s = s.replace("San B", "San Bernardino") s = s.replace("Twentynine P", "Twentynine Palms") return s.strip() trim_df["area"] = trim_df["area"].apply(clean_city_names) # Get timestamp timestamp = latest["attributes"]["Date"] timestamp = datetime.fromtimestamp((timestamp / 1000)) latest_date = pd.to_datetime(timestamp).date() trim_df["county_date"] = latest_date trim_df.insert(0, "county", "San Bernardino") # ## Vet try: assert not len(trim_df) < 64 except AssertionError: raise AssertionError("San Bernardino County's scraper is missing rows") try: assert not len(trim_df) > 64 except AssertionError: raise AssertionError("San Bernardino County's scraper has more rows than before") # ## Export # Set date tz = pytz.timezone("America/Los_Angeles") today = datetime.now(tz).date() slug = "san-bernardino" trim_df.to_csv(data_dir / slug / f"{today}.csv", index=False) # ## Combine csv_list = [ i for i in glob.glob(str(data_dir / slug / "*.csv")) if not str(i).endswith("timeseries.csv") ] df_list = [] for csv in csv_list: if "manual" in csv: df = pd.read_csv(csv, parse_dates=["date"]) else: file_date = csv.split("/")[-1].replace(".csv", "") df = pd.read_csv(csv, parse_dates=["county_date"]) df["date"] = file_date df_list.append(df) df = pd.concat(df_list).sort_values(["date", "area"]) df.to_csv(data_dir / slug / "timeseries.csv", index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 
3 # language: python # name: python3 # --- # %matplotlib inline # # ******************************************************************************** # 2D Robot Localization - Benchmark # ******************************************************************************** # Goals of this script: # # - implement different UKFs on the 2D robot localization example. # # - design the Extended Kalman Filter (EKF) and the Invariant Extended Kalman # Filter (IEKF) :cite:`barrauInvariant2017`. # # - compare the different algorithms with Monte-Carlo simulations. # # *We assume the reader is already familiar with the considered problem described # in the tutorial.* # # We previously designed an UKF with a standard uncertainty representation. An # advantage of the versatility of the UKF is to speed up implementation, tests, # and comparision of algorithms with different uncertainty representations. # Indeed, for the given problem, three different UKFs emerge, defined respectively # as: # # 1) The state is embedded in $SO(2) \times \mathbb{R}^2$, where: # # * the retraction $\varphi(.,.)$ is the $SO(2)$ exponential # for orientation and the vector addition for position. # # * the inverse retraction $\varphi^{-1}(.,.)$ is the $SO(2)$ # logarithm for orientation and the vector subtraction for position. # # 2) The state is embedded in $SE(2)$ with left multiplication, i.e. # # - the retraction $\varphi(.,.)$ is the $SE(2)$ exponential, # where the state multiplies on the left the uncertainty # $\boldsymbol{\xi}$. # # - the inverse retraction $\varphi^{-1}(.,.)$ is the $SE(2)$ # logarithm. # # - this left UKF on $SE(2)$ corresponds to the Invariant Extended Kalman # Filter (IEKF) recommended in :cite:`barrauInvariant2017`. # # 3) The state is embedded in $SE(2)$ with right multiplication, i.e. # # - the retraction $\varphi(.,.)$ is the $SE(2)$ exponential, # where the state multiplies on the right the uncertainty # $\boldsymbol{\xi}$. # # - the inverse retraction $\varphi^{-1}(.,.)$ is the $SE(2)$ # logarithm. # # We tests the filters on simulation with strong initial heading error. # # # Import # ============================================================================== # # from ukfm import SO2, UKF, EKF from ukfm import LOCALIZATION as MODEL import ukfm import numpy as np import matplotlib ukfm.utils.set_matplotlib_config() # We compare the filters on a large number of Monte-Carlo runs. # # # Monte-Carlo runs N_mc = 100 # Simulation Setting # ============================================================================== # We set the simulation as in :cite:`barrauInvariant2017`, section IV. The robot # drives along a 10 m diameter circle for 40 seconds with high rate odometer # measurements (100 Hz) and low rate GPS measurements (1 Hz). The vehicle gets # moderate angular velocity uncertainty and highly precise linear velocity. The # initial values of the heading error is very strong, **45° standard # deviation**, while the initial position is known. 
# # # sequence time (s) T = 40 # odometry frequency (Hz) odo_freq = 100 # create the model model = MODEL(T, odo_freq) # odometry noise standard deviation odo_std = np.array([0.01, # speed (v/m) 0.01, # speed (v/m) 1 / 180 * np.pi]) # angular speed (rad/s) # GPS frequency (Hz) gps_freq = 1 # GPS noise standard deviation (m) gps_std = 1 # radius of the circle trajectory (m) radius = 5 # initial heading error standard deviation theta0_std = 45/180*np.pi # Filter Design # ============================================================================== # The UKFs are compared to an Extended Kalman FIlter (EKF) and an Invariant EKF # (IEKF). The EKF has the same uncertainty representation as the UKF with the # retraction on $SO(2) \times \mathbb{R}^2$, whereas the IEKF has the same # uncertainty representation as the UKF with the left retraction on # $SE(2)$. # # # propagation noise covariance matrix Q = np.diag(odo_std**2) # measurement noise covariance matrix R = gps_std**2*np.eye(2) # initial covariance matrix P0 = np.zeros((3, 3)) # we take into account initial heading error P0[0, 0] = theta0_std ** 2 # sigma point parameter alpha = np.array([1e-3, 1e-3, 1e-3]) # We set error variables before launching Monte-Carlo simulations. As we have # five similar methods, the code is redundant. # # ukf_err = np.zeros((N_mc, model.N, 3)) left_ukf_err = np.zeros_like(ukf_err) right_ukf_err = np.zeros_like(ukf_err) iekf_err = np.zeros_like(ukf_err) ekf_err = np.zeros_like(ukf_err) # We record Normalized Estimation Error Squared (NEES) for consistency # evaluation (see Results). # # ukf_nees = np.zeros((N_mc, model.N, 2)) left_ukf_nees = np.zeros_like(ukf_nees) right_ukf_nees = np.zeros_like(ukf_nees) iekf_nees = np.zeros_like(ukf_nees) ekf_nees = np.zeros_like(ukf_nees) # Monte-Carlo Runs # ============================================================================== # We run the Monte-Carlo through a for loop. # #

    Note

    We sample for each Monte-Carlo run an initial heading error from the true distribution ($\mathbf{P}_0$). This requires many Monte-Carlo samples.
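# As a concrete illustration of the note above, here is a minimal NumPy sketch of
# what sampling the initial heading error amounts to (plain NumPy rather than the
# ukfm API; `Rot0_true` below is a placeholder for the true initial orientation):

import numpy as np

def so2_exp(theta):
    # SO(2) exponential: the 2x2 rotation matrix of angle theta
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta0_std = 45 / 180 * np.pi               # 45° standard deviation, as set above
theta_err = theta0_std * np.random.randn()  # one heading error sample per run
Rot0_true = np.eye(2)                       # placeholder true initial orientation
Rot0_perturbed = Rot0_true @ so2_exp(theta_err)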

    # # for n_mc in range(N_mc): print("Monte-Carlo iteration(s): " + str(n_mc + 1) + "/" + str(N_mc)) # simulation true trajectory states, omegas = model.simu_f(odo_std, radius) # simulate measurement ys, one_hot_ys = model.simu_h(states, gps_freq, gps_std) # initialize filter with inaccurate state state0 = model.STATE( Rot=states[0].Rot.dot(SO2.exp(theta0_std * np.random.randn(1))), p=states[0].p) # define the filters ukf = UKF(state0=state0, P0=P0, f=model.f, h=model.h, Q=Q, R=R, phi=model.phi, phi_inv=model.phi_inv, alpha=alpha) left_ukf = UKF(state0=state0, P0=P0, f=model.f, h=model.h, Q=Q, R=R, phi=model.left_phi, phi_inv=model.left_phi_inv, alpha=alpha) right_ukf = UKF(state0=state0, P0=P0, f=model.f, h=model.h, Q=Q, R=R, phi=model.right_phi, phi_inv=model.right_phi_inv, alpha=alpha) iekf = EKF(model=model, state0=state0, P0=P0, Q=Q, R=R, FG_ana=model.iekf_FG_ana, H_ana=model.iekf_H_ana, phi=model.left_phi) ekf = EKF(model=model, state0=state0, P0=P0, Q=Q, R=R, FG_ana=model.ekf_FG_ana, H_ana=model.ekf_H_ana, phi=model.phi) # variables for recording estimates of the Monte-Carlo run ukf_states = [state0] left_states = [state0] right_states = [state0] iekf_states = [state0] ekf_states = [state0] ukf_Ps = np.zeros((model.N, 3, 3)) left_ukf_Ps = np.zeros_like(ukf_Ps) right_ukf_Ps = np.zeros_like(ukf_Ps) ekf_Ps = np.zeros_like(ukf_Ps) iekf_Ps = np.zeros_like(ukf_Ps) ukf_Ps[0] = P0 left_ukf_Ps[0] = P0 right_ukf_Ps[0] = P0 ekf_Ps[0] = P0 iekf_Ps[0] = P0 # measurement iteration number k = 1 # filtering loop for n in range(1, model.N): ukf.propagation(omegas[n-1], model.dt) left_ukf.propagation(omegas[n-1], model.dt) right_ukf.propagation(omegas[n-1], model.dt) iekf.propagation(omegas[n-1], model.dt) ekf.propagation(omegas[n-1], model.dt) # update only if a measurement is received if one_hot_ys[n] == 1: ukf.update(ys[k]) left_ukf.update(ys[k]) right_ukf.update(ys[k]) iekf.update(ys[k]) ekf.update(ys[k]) k = k + 1 ukf_states.append(ukf.state) left_states.append(left_ukf.state) right_states.append(right_ukf.state) iekf_states.append(iekf.state) ekf_states.append(ekf.state) ukf_Ps[n] = ukf.P left_ukf_Ps[n] = left_ukf.P right_ukf_Ps[n] = right_ukf.P iekf_Ps[n] = iekf.P ekf_Ps[n] = ekf.P # get state trajectory Rots, ps = model.get_states(states, model.N) ukf_Rots, ukf_ps = model.get_states(ukf_states, model.N) left_ukf_Rots, left_ukf_ps = model.get_states(left_states, model.N) right_ukf_Rots, right_ukf_ps = model.get_states(right_states, model.N) iekf_Rots, iekf_ps = model.get_states(iekf_states, model.N) ekf_Rots, ekf_ps = model.get_states(ekf_states, model.N) # record errors ukf_err[n_mc] = model.errors(Rots, ukf_Rots, ps, ukf_ps) left_ukf_err[n_mc] = model.errors(Rots, left_ukf_Rots, ps, left_ukf_ps) right_ukf_err[n_mc] = model.errors(Rots, right_ukf_Rots, ps, right_ukf_ps) iekf_err[n_mc] = model.errors(Rots, iekf_Rots, ps, iekf_ps) ekf_err[n_mc] = model.errors(Rots, ekf_Rots, ps, ekf_ps) # record NEES ukf_nees[n_mc] = model.nees(ukf_err[n_mc], ukf_Ps, ukf_Rots, ukf_ps, 'STD') left_ukf_nees[n_mc] = model.nees(left_ukf_err[n_mc], left_ukf_Ps, left_ukf_Rots, left_ukf_ps, 'LEFT') right_ukf_nees[n_mc] = model.nees(right_ukf_err[n_mc], right_ukf_Ps, right_ukf_Rots, right_ukf_ps, 'RIGHT') iekf_nees[n_mc] = model.nees(iekf_err[n_mc], iekf_Ps, iekf_Rots, iekf_ps, 'LEFT') ekf_nees[n_mc] = model.nees(ekf_err[n_mc], ekf_Ps, ekf_Rots, ekf_ps, 'STD') # Results # ============================================================================== # We first visualize the robot trajectory (for the last run) and the 
# errors w.r.t. orientation and position (averaged over the Monte-Carlo runs).
# As the simulations involve randomness, the trajectory plot gives an indication
# of performance rather than a proof.

ukf_e, left_ukf_e, right_ukf_e, iekf_e, ekf_e = model.benchmark_plot(
    ukf_err, left_ukf_err, right_ukf_err, iekf_err, ekf_err, ps, ukf_ps,
    left_ukf_ps, right_ukf_ps, ekf_ps, iekf_ps)

# Two groups of filters emerge: group 1) consists of the EKF and the
# $SO(2) \times \mathbb{R}^2$ UKF; group 2) contains the IEKF, the left $SE(2)$
# UKF and the right $SE(2)$ UKF (the curves of these filters are superposed).
# The second group is clearly better at position estimation.
#
# A more statistical view is obtained by averaging the results over all
# Monte-Carlo runs. Let us compute the Root Mean Squared Error (RMSE) of each
# method for both the orientation and the position.

model.benchmark_print(ukf_e, left_ukf_e, right_ukf_e, iekf_e, ekf_e)

# The numbers confirm what the plots show.
#
# A consistency metric is the Normalized Estimation Error Squared (NEES).
# Classical criteria used to evaluate the performance of an estimation method,
# such as the RMSE, do not inform about consistency because they do not take
# into account the uncertainty returned by the filter. The NEES addresses this
# point: it computes the average squared error, normalized by the covariance
# matrix of the filter. A NEES greater than 1 reveals an inconsistency issue:
# the actual uncertainty is higher than the uncertainty computed by the filter.

model.nees_print(ukf_nees, left_ukf_nees, right_ukf_nees, iekf_nees, ekf_nees)

# As the filters are initialized with a perfect position and zero position
# covariance, we compute the NEES only after 20 s to avoid numerical issues
# (during the first seconds of the trajectory the covariance matrix
# $\mathbf{P}_n$ is very small, so inverting it yields meaninglessly large
# numbers). The results are clear: the $SE(2)$ UKFs are the most consistent.
#
# **Which filter is the best?** In this setting, the **left UKF**, the
# **right UKF** and the IEKF obtain similarly accurate results that clearly
# outperform the $SO(2) \times \mathbb{R}^2$ UKF and the EKF, while the two
# $SE(2)$ UKFs are the most consistent.
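# As a rough illustration of the NEES described above, here is a minimal NumPy
# sketch (a simplification, not the `model.nees` implementation): the NEES at
# time n is the squared error weighted by the inverse filter covariance,
# normalized by the state dimension, and then averaged over time.

import numpy as np

def average_nees(err, Ps):
    # err: errors of shape (N, d); Ps: filter covariances of shape (N, d, d)
    d = err.shape[1]
    vals = [e @ np.linalg.solve(P, e) / d for e, P in zip(err, Ps)]
    return np.mean(vals)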

    Note

    We have set all the filters with the same "true" noise covariance parameters. However, both EKF- and UKF-based algorithms may deal better with non-linearity by, e.g., inflating the propagation noise covariance.
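# For instance, a simple (hypothetical) way to inflate the propagation noise
# covariance before building the filters would be:

inflation = 1.5                # hypothetical inflation factor, to be tuned
Q_inflated = inflation**2 * Q  # pass Q_inflated instead of Q to the filters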

    # # # # Conclusion # ============================================================================== # This script compares different algorithms for 2D robot localization. Two # groups of filters emerge: the $SO(2) \times \mathbb{R}^2$ UKF and the # EKF represent the first group; and the left $SE(2)$ UKF, the right # $SE(2)$ UKF and the IEKF constitute the second group. For the considered # set of parameters, it is evident that embedded the state in $SE(2)$ is # advantageous for state estimation. # # You can now: # # * compare the filters in different scenarios. Indeed, UKF and their (I)EKF # counterparts may obtain different results when noise is e.g. inflated or # with different initial conditions or different trajectory. # # * test the filters in a slightly different model (e.g. with orientation # measurement), which is straightforward for the UKFs. # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="9gXfuwwD2dhE" # MTN Cote d'Ivoire would like to upgrade its technology infrastructure for its mobile users in Ivory Coast. Studying the given dataset, how does MTN Cote d'Ivoire go about the upgrade of its infrastructure strategy within the given cities? # # # # # + [markdown] id="qbM-FwmA-16g" # # IMPORTING THE DATASETS INTO THE ENVIRONMENT # + colab={"base_uri": "https://localhost:8080/", "height": 532} id="P5zPZoRQ2jMz" outputId="882c7eb8-79ce-49e8-c5d3-7fc7d811441e" import pandas as pd import numpy as np #Importing my cells_geo dataset, seprating by (;) with index_col set to zero in-order to pick the column names as they #are in the original dataset geo= pd.read_csv("cells_geo.csv", delimiter=";", index_col=0) geo #choosing 10 values geo.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 719} id="XVdlcU8yBcWT" outputId="c0a45cec-2eff-4da2-93ae-bd9d7efe4dc8" #Importing my Telcom 1 Dataset Tel1= pd.read_csv("Telcom_dataset.csv") Tel1 #choosing the first 10 Tel1.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 719} id="weZ6GnNpAjYT" outputId="1aacbd25-23fc-4b09-d31d-e7729bb05708" #Importing my Telcom 2 Dataset Tel2= pd.read_csv("Telcom_dataset2.csv") Tel2 #choosing the first 10 Tel1.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 719} id="OPW5bzTFAs_S" outputId="10073270-8af4-4823-a7b9-69eadcfc50aa" #Importing my Telcom 3 Dataset Tel3= pd.read_csv("Telcom_dataset3.csv") Tel3 #choosing the first 10 Tel1.head(10) # + [markdown] id="MwEjpRFt-cqd" # # **INFORMATION SECTION ON THE DATASETS** # + colab={"base_uri": "https://localhost:8080/"} id="kHLvwP4YGjxR" outputId="a30ad711-b3db-48dc-ec27-bb1131f765f7" #Cheking the information on the datasest # Getting to understand the Data types and the entries of the datasets geo.info() print("***************************************************************") Tel1.info() print("***************************************************************") Tel2.info() print("***************************************************************") Tel3.info() # + [markdown] id="stP0WEgB-tbm" # # **CHECKING THE NULL VALUES IN THE DATASETS** # + colab={"base_uri": "https://localhost:8080/"} id="lV74RnriHUnJ" outputId="2ebff3ee-b3e3-4a01-fa22-666b8c1eb2bb" #checking if my cells geo dataset has any null values geo.isnull().values.any() # + colab={"base_uri": 
"https://localhost:8080/"} id="UCJ-iCMABhE-" outputId="97449ced-25f4-4bd2-b8cb-b7bb52030691" #checking how many missing values are in my cell_geo dataset in the columns geo.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="xaarlmUh_jF2" outputId="d8eb7fee-2714-4ae6-e6f1-6266537542a6" #cheking null values in my telcom 1 dataset Tel1.isnull().values.any() # + colab={"base_uri": "https://localhost:8080/"} id="5FGOp8iHB3-S" outputId="bcc911d5-219e-4fb4-92d2-84dc391f9ecc" #checking how many missing values are in my telcome1 dataset Tel1.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="vGKMNhD3CEMV" outputId="732195ed-4e5f-4593-dae4-fc5e042dcb28" #cheking null values in my telcom21 dataset Tel2.isnull().values.any() # + colab={"base_uri": "https://localhost:8080/"} id="Pe6QgHIPCEmL" outputId="dd9b2b71-b802-40cf-9f84-d3b0a63520b9" #checking how many missing values are in my telcome 2 dataset Tel2.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="lqUOFYdKCE37" outputId="14deb8fc-c13f-459c-c0c1-33dde3978e7c" #checking how many missing values are in my telcome 3 dataset Tel3.isnull().values.any() # + colab={"base_uri": "https://localhost:8080/"} id="9MLKgr4KCFFK" outputId="9fe9bb69-8d75-4472-f77e-c21c03f2eddf" #checking how many missing values are in my telcome 3 dataset Tel3.isnull().sum() # + [markdown] id="4ovHIf8bNY2y" # # **CLEANING MY DATA** # + [markdown] id="-pBINrnF-CLn" # 1 .**Taking care of missing data** # + colab={"base_uri": "https://localhost:8080/", "height": 694} id="dds2rfPjWb3X" outputId="b97b0992-15a7-4b0d-bfe9-20f0d89d3870" # taking care of missing data in my cells_geo dataset by replacing the nan with . geo2 = geo.replace(np.nan, ".", regex = True) geo2 # + colab={"base_uri": "https://localhost:8080/"} id="mGf88KEF3FGf" outputId="78ad6276-4397-4333-a80d-2241e6e966ba" #checking if their any missing value in my geo2 dataset geo2.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 779} id="kzDsEjnKYxaH" outputId="3fb52040-e295-4d74-e07a-d6578deabb6c" #Taking care of missing data in my Telcom 1 Dataset Tel1up = Tel1.replace(np.nan, ".", regex=True) Tel1up # + colab={"base_uri": "https://localhost:8080/"} id="nDu9TDS83wTD" outputId="372af9b6-ddf0-4d5b-9ade-29b09293e087" #checking if their any missing value in my Tel1up dataset Tel1up.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 394} id="8amsM9-5YxpG" outputId="875ffe97-22e8-4187-a9ff-8928def8307c" #Taking care of missing data in my Telcom 2 Dataset Tel2up = Tel2.replace(np.nan, ".", regex=True) Tel2up.head() # + colab={"base_uri": "https://localhost:8080/"} id="GyrYZwlu32-Y" outputId="5108f2ec-2736-4ee6-f2b3-51172ac8711d" #checking if their any missing value in my Tel2up dataset Tel2up.isnull().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 779} id="7mX8gGkXYx6G" outputId="6ff24243-3694-4e52-89a8-26a8a3897d4f" #Taking care of missing data in my Telcom 3 Dataset Tel3up = Tel3.replace(np.nan, ".", regex=True) Tel3up # + colab={"base_uri": "https://localhost:8080/"} id="mUvhW2nh36fe" outputId="3f7e89c7-7e68-45b9-e7a6-1080fdab1603" #checking if their any missing value in my Tel3up dataset Tel3up.isnull().sum() # + [markdown] id="F7v8edL7-STz" # 2. 
**Taking care of duplicate values** # + colab={"base_uri": "https://localhost:8080/"} id="87znnG0kZ8s8" outputId="09fc4671-c59f-42bd-b86e-0b04125fdfba" #cheking for duplicate values in my cell_geo dataset geo2.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="LRxk5M8l4Y4i" outputId="2d65297e-650b-4250-d09a-1d27c31d6c35" #dropping the duplicates in my geo2 dataset and cheking if there any duplicates left geo3 = geo2.drop_duplicates() geo3.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="zPag2cJIxWXI" outputId="e064438d-5b03-42a2-b988-ec632c78167a" #cheking for duplicate values in my Tel1 dataset #dropping the duplicates Tel1up.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="GNdGnqtR8gI2" outputId="2602dac7-7c69-418a-e80c-cee678f256d5" #dropping the duplicates in my tel1up dataset and cheking if there any duplicates left Tel1upd = Tel1up.drop_duplicates() Tel1upd.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="vFFB_bDyxWfY" outputId="75ceb966-62e8-4f78-fecd-427a2cf25175" #cheking for duplicate values in my Tel2 dataset Tel2up.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="tJRk3fLL8xhX" outputId="eb35e31a-2cc9-4993-b7db-713e897d80b9" #dropping the duplicates in my Tel2up dataset and cheking if there any duplicates left Tel2upd = Tel2up.drop_duplicates() Tel2upd.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="3Hrz3BTrxWrD" outputId="94fcef1e-bf45-4a21-f2b3-311f1ef63f9f" #cheking for duplicate values in my Tel3 dataset #dropping the duplicates Tel3up.duplicated().sum() # + colab={"base_uri": "https://localhost:8080/"} id="QZ2Y8NMq893t" outputId="c1770108-1e00-48dd-94a3-bf75dceee3be" #dropping the duplicates in my Tel3up dataset and cheking if there any duplicates left Tel3upd = Tel3up.drop_duplicates() Tel3upd.duplicated().sum() # + [markdown] id="6DYvQrLI_Fzs" # 3. **Dropping unnecessary column in the dataset** # # + colab={"base_uri": "https://localhost:8080/"} id="nhsFpvle_QJl" outputId="176b51f8-c119-489e-fd01-382eb4be3a03" #cheking the columns in my geo3 dataset geo3.head() # Columns such as Site_code, latitude, longitude will not help in seeing where the cells were most used in the cities we can go ahead #and drop these column geo3.drop(columns=["LONGITUDE", "LATITUDE", "CELL_ID"], axis=1, inplace=True) # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="EljRFflQAX9H" outputId="4f736b68-3e82-4c0f-a86a-d337ac4cdd1a" #Cheking to see whether the columns have been deleted geo3 #Renaming my site_code column to match all the other datasets geo4 = geo3.rename(columns={'SITE_CODE': 'SITE_ID'}) geo4 # + [markdown] id="1POcgkJDL7CU" # b) **. 
Dropping in Telcome datasets** # + colab={"base_uri": "https://localhost:8080/"} id="jrlAnCw6Hv9G" outputId="703253d0-c71f-4b95-d9eb-d1f034f94bf6" #cheking the columns in my tel1 dataset Tel1upd.head() #Columns such as DW_A_NUMBER_INT, DW_B_NUMBER_INT, COUNTRY_A, COUNTRY_B, CELL_ID, SITE_ID are not needed to find out which cells were most used in the cities Tel1upd.drop(columns= ["DW_A_NUMBER_INT", "DW_B_NUMBER_INT", "COUNTRY_A", "COUNTRY_B", "CELL_ID"], axis= 1, inplace= True) # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="ACJFbV8wJoew" outputId="8196f18e-5863-4653-94e6-8571c5ca7448" #Cheking to see whether the columns have been deleted Tel1upd #renaming the cell id column to match the other telecome datasets Tel1upda = Tel1upd.rename(columns={'PRODUTC': 'PRODUCT'}) Tel1upda # + colab={"base_uri": "https://localhost:8080/"} id="Sh_NxoxGKHwr" outputId="58ad2b9f-d86c-4ffd-e0db-2c774f58b50e" #cheking the columns in my tel1 dataset Tel2upd.head() #Columns such as DW_A_NUMBER, DW_B_NUMBER, COUNTRY_A, COUNTRY_B, CELL_ID, SITE_ID are not needed to find out which cells were most used in the cities Tel2upd.drop(columns= ["DW_A_NUMBER", "DW_B_NUMBER", "COUNTRY_A", "COUNTRY_B", "CELL_ID"], axis= 1, inplace= True) # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="2wS8Npf0KeqT" outputId="bfbfa8f6-fcdf-49d6-e330-2c5483fe06da" #Cheking to see whether the columns have been deleted Tel2upd # + colab={"base_uri": "https://localhost:8080/"} id="MBNPRfz_Kx9C" outputId="222e3bb3-90d0-452f-9400-0f16872e884a" #cheking the columns in my tel1 dataset Tel3upd.head() #Columns such as DW_A_NUMBER, DW_B_NUMBER, COUNTRY_A, COUNTRY_B, CELL_ID, SITE_ID are not needed to find out which cells were most used in the cities Tel3upd.drop(columns= ["DW_A_NUMBER_INT", "DW_B_NUMBER_INT", "COUNTRY_A", "COUNTRY_B", "CELLID"], axis= 1, inplace= True) # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="WeuPDpfgLAk0" outputId="622252e1-92d0-4619-cf6e-b6a88f3dc527" #Cheking to see whether the columns have been deleted Tel3upd #renaming the cell id column to match the other telecome datasets Tel3upda = Tel3upd.rename(columns={'SIET_ID': 'SITE_ID'}) Tel3upda # + [markdown] id="0E7nkkDyQr-0" # # **MERGING DATASETS** # + colab={"base_uri": "https://localhost:8080/", "height": 745} id="Zq5rdOcRQ0Ax" outputId="c42d51a8-2a62-4fd0-d0b0-00e7bec7d46a" #Merging Datasests Together # Merging my geo_cells dataset to my tel1 dataset with the column SITE_ID as the common column merge1 = geo4.merge(Tel1upda, how="left", on = 'SITE_ID') #Merging my merge to the Telcome 2 dataset with the same common column merge2 = merge1.merge(Tel2upd, how="left", on = 'SITE_ID') #Merging my merge2 to the Telcome 3 dataset with the same common column MTN = merge2.merge(Tel3upda, how="left", on="SITE_ID") MTN # + [markdown] id="WQMYEVHzvM6u" # a) **Taking care of unnecessary columns, missing data and duplicate values** # + colab={"base_uri": "https://localhost:8080/", "height": 745} id="4EwyBUNgh3BI" outputId="852b3d33-54a6-4fe6-821a-389d1c7a368d" #Dropping columns that are not needed MTN.drop(columns=["SITE_ID", "CELL_ON_SITE_y", "PRODUCT_x", "VALUE_x", 'CELL_ON_SITE_x', 'PRODUCT_y', 'VALUE_y', 'DATE_TIME_x', 'DATE_TIME_y'], axis=0, inplace=True) MTN # + id="BZoOaJlzrQp3" #Cheking the total of rows with missing data MTN.isnull().sum() #dropping rows that contain missing data mtnf= MTN.dropna() # + colab={"base_uri": "https://localhost:8080/", "height": 289} id="H2XzBheWuVDK" 
outputId="ca6b43d5-40ad-4215-cfdc-038d6cf49eb6" # Taking care of duplicate values in the dataset mtnf.duplicated().sum() MTNfinal = mtnf.drop_duplicates() MTNfinal.head() # + colab={"base_uri": "https://localhost:8080/"} id="tn9uxxwwvtBl" outputId="ee6f9540-24be-4f5a-dab7-410842e2bc1b" #Looking at the information of my final dataset MTNfinal.info() # + [markdown] id="VnfFqY3HoB8N" # # EXPORTING TO CSV # ### Exporting to csv to do the visualisation on tableau # # + id="Xn57ndvboBBh" MTNfinal.to_csv("MTNfinal.csv") # + [markdown] id="xikT4faUzSKR" # # **DERIVING NEW ATTRIBUTES/DATA ANALYSIS** # # # # 1. Which Cities had the highest billing? # 2. The city with the most cells? # 2. Which cities were the most used during business and home hours? # 3. Most used city for the three days? # 4. Which product is the most used in the cities # 5. Which city had the most value in product # 6. Which city had the most cells on site? # # # # # # + colab={"base_uri": "https://localhost:8080/", "height": 450} id="YDOlo29Y8rE-" outputId="4d0a5782-19ff-4616-bdad-bdd9e2b9bcdc" #Descriptive statistics between our city and Values column MTNfinal['VALUE'].groupby(MTNfinal['VILLES']).describe() # + colab={"base_uri": "https://localhost:8080/", "height": 450} id="PEw7lqYwsJhN" outputId="932e6613-00a5-4880-cef5-1c9772563fa4" #Descriptive statistics between our city and product column MTNfinal['PRODUCT'].groupby(MTNfinal['VILLES']).describe() # + colab={"base_uri": "https://localhost:8080/"} id="uX_TeioHzYEi" outputId="e5f49dca-15e5-437f-e110-cdaf5765bdb3" #Cheking which cities was used the most using the Value column #in description the value column is the billing price so the city with the highest billing price probably had the most users Cityv = MTNfinal['VILLES'].groupby(MTNfinal['VALUE']) MTNfinal # Displaying the max value of the each city Cityv.max().tail() # + colab={"base_uri": "https://localhost:8080/"} id="7MtN0AVG5tax" outputId="2c60437d-82bf-411a-f58e-33c9b5730d5e" # Finding which cities were the most used during business and home hours cityh = MTNfinal["VILLES"].groupby(MTNfinal["DATETIME"]) MTNfinal #Displaying the most used city during the different hours cityh.max().tail() # + colab={"base_uri": "https://localhost:8080/"} id="jK6AT0xl9pwp" outputId="66bdac64-2652-4a08-a0a5-1cece504baf8" #Finding the city with the least cells cityp = MTNfinal["VILLES"].groupby(MTNfinal["CELL_ON_SITE"]) MTNfinal #Displaying the city with least cells cityp.min().head() # + colab={"base_uri": "https://localhost:8080/"} id="9dSUqgi-A3d3" outputId="675b4df4-c8cd-4f23-b880-ab16b57fbe4a" #Stats to find the most used city in the three days freq = MTNfinal["VILLES"].value_counts() print(freq) # + colab={"base_uri": "https://localhost:8080/", "height": 142} id="iOsKlvquEpe6" outputId="c3e939aa-f64f-4ba6-eca4-479a033712e2" #Which product is the most used in the cities MTNfinal['VILLES'].groupby(MTNfinal['PRODUCT']).describe() # + colab={"base_uri": "https://localhost:8080/"} id="wtwVSaQHY6sa" outputId="27ea99fe-51e1-47cb-c9ca-f0d649b9e9af" freq = MTNfinal["PRODUCT"].value_counts() print(freq) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # K-Nearest Neighbors Classification # # The dataset was obtained from https://archive.ics.uci.edu/ml/datasets/Iris. 
import numpy as np import pandas as pd import seaborn as sns from heapq import heappush, nsmallest from collections import Counter # read data and parse it data = pd.read_csv('../datasets/iris.csv', names=["sepal_length","sepal_width","petal_length","petal_width","species"]) data = data.sample(len(data)) # Pair plots between features of different species sns.pairplot(data, hue="species", size=3, diag_kind='kde') label_id = {'Iris-setosa': 1, 'Iris-versicolor': 2, 'Iris-virginica': 3} data['species'] = [label_id[x] for x in data['species']] # + y = np.array(data['species']) x = np.array(data.loc[:, data.columns != 'species']) def split_data(x, y, ratio, fold=0): # Define the boundaries of the test data test_start = int(len(x) * ratio * fold) test_stop = int(len(x) * ratio * (fold+1)) # Split the data train_x = np.concatenate((x[:test_start], x[test_stop:])) train_y = np.concatenate((y[:test_start], y[test_stop:])) test_x = x[test_start:test_stop] test_y = y[test_start:test_stop] return train_x, train_y, test_x, test_y # + def cross_validate(x, y, folds, k): accuracies = [] for fold in range(folds): train_x, train_y, test_x, test_y = split_data(x, y, 1./folds, fold) knn = KNN(train_x, train_y, k) accuracies.append(calculate_accuracy(knn.predict_batch(test_x), test_y)) return np.mean(accuracies) def calculate_accuracy(y_hat, y): return np.sum(np.equal(y, y_hat), dtype=float) / len(y) # - class KNN: """ Implementation of the kNN classification algorithm """ def __init__(self, X, Y, k): self.X = X self.Y = Y self.k = k def predict(self, input_x): """ Predict labels using k-nearest neigbors for a given point. """ neighbors = [] for i in range(len(self.X)): heappush(neighbors, (self.distance(self.X[i], input_x), self.Y[i])) k_nearest = [x[1] for x in nsmallest(k, neighbors)] return Counter(k_nearest).most_common(1)[0][0] def predict_batch(self, X_test): """ Predict labels using k-nearest neigbors for a batch. 
X_test : batch of inputs to predict """ return [self.predict(point) for point in X_test] def distance(self,x1,x2): """ Calculate the distance between two points x1 and x2 """ # L2 distance return sum((x1 - x2) ** 2) # Manhattan distance # return sum(abs(x1 - x2)) # negative Cosine similarity (cosine similarity is inversely proportional to distance) # return 1 - x1.dot(x2)/(np.linalg.norm(x1) * np.linalg.norm(x2)) # + train_x, train_y, test_x, test_y = split_data(x, y, ratio=0.25) k_values = range(2, 80) best_k = 1 best_acc = 0 for k in k_values: accuracy = cross_validate(train_x, train_y, folds=4, k=k) if k % 10 == 0: print("validation accuracy for k = {} is {:1.3f}".format(k, accuracy)) if accuracy > best_acc: best_acc = accuracy best_k = k # Use the best parameter print("Checking accuracy on the test set using k = {}".format(best_k)) knn = KNN(train_x, train_y, best_k) y_hat = knn.predict_batch(test_x) test_acc = calculate_accuracy(test_y, y_hat) print("Test set accuracy: {:1.3f}%".format(test_acc * 100)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import tensorflow as tf from tensorflow import keras img_ht = 180 img_wt = 180 import cv2 cap = cv2.VideoCapture(0) face_cascade = cv2.CascadeClassifier("/home/hp77/miniconda3/envs/tunex/share/opencv4/haarcascades/haarcascade_frontalface_default.xml") import numpy as np model = tf.keras.models.load_model('EmoDet1/') class_names = ['0', '1', '2', '3', '4', '5', '6'] Emotion = { '0' : 'Afraid', '1' : 'Angry', '2' : 'Disgust', '3' : 'Happy', '4' : 'Nervous', '5' : 'Sad', '6' : 'Surprise' } import PIL import PIL.Image writer = None (W, H) = (None, None) x, y = (0, 0) crop_img=None # + while True: ret, frame = cap.read() print("It came here 1st") if not ret: print("It came here 2nd") break if W is None or H is None: (H, W) = frame.shape[:2] output = frame.copy() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray, 1.1, 4) for (x, y, w, h) in faces: cv2.rectangle(frame, (x,y), (x+w, y+h), (255,0,0), 2) crop_img = frame[x:x+w, y:y+h] cv2.imshow("cropped", frame) #frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) crop_img = cv2.cvtColor(crop_img, cv2.COLOR_BGR2RGB) #frame = cv2.resize(frame, (img_ht, img_wt)).astype("float32") crop_img = cv2.resize(crop_img, (img_ht, img_wt)).astype("float32") #img_array = keras.preprocessing.image.img_to_array(img) commented because I am doing preprocessing with cv2 #img_array = tf.expand_dims(frame, 0) # Create a batch crop_img_array = tf.expand_dims(crop_img, 0) #predictions = model.predict(img_array) predicitons_crop = model.predict(crop_img_array) #score = tf.nn.softmax(predictions[0]) score_crop = tf.nn.softmax(predictions_crop[0]) #text = "E: {}, C: {}".format(Emotion[class_names[np.argmax(score)]], 100*np.max(score)) text_crop = "E: {}, C: {}".format(Emotion[class_names[np.argmax(score_crop)]], 100*np.max(score)) #cv2.putText(output, text, (35, 50), cv2.FONT_HERSHEY_SIMPLEX,1.25, (0, 255, 0), 5) cv2.putText(output, text_crop, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0,255,0), 5) cv2.imshow("Output", output) key = cv2.waitKey(1) & 0xFF if key == ord("q"): break print("[INFO] cleaning up...") #writer.release() cap.release() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- import requests import re import pandas as pd from bs4 import BeautifulSoup page = requests.get('https://en.wikipedia.org/wiki/Fantastic_Beasts_and_Where_to_Find_Them_(film)') soup = BeautifulSoup(page.content, 'html.parser') li = soup.find('li', class_='interlanguage-link interwiki-zh') lin = li.find('a') link = lin['href'] df = pd.read_excel('titles2018.xlsx') df.head() titles = df['Titles'] type(titles) len(titles) links = [] for idx, title in enumerate(titles): title = title.replace(' ','_') print('Now processing title number' + str(idx)) try: try: tmp = [] path = 'https://en.wikipedia.org/wiki/' + title + '_(film)' page = requests.get(path) soup = BeautifulSoup(page.content, 'html.parser') li = soup.find('li', class_='interlanguage-link interwiki-zh') lin = li.find('a') link = lin['href'] tmp.append(link) print(len(tmp)) links.append(tmp) except: tmp = [] path = 'https://en.wikipedia.org/wiki/' + title page = requests.get(path) soup = BeautifulSoup(page.content, 'html.parser') li = soup.find('li', class_='interlanguage-link interwiki-zh') lin = li.find('a') link = lin['href'] tmp.append(link) print(len(tmp)) links.append(tmp) except: links.append('404') print('Not Found.') len(links) # Save links to csv links = pd.DataFrame(data={"links": links}) links.to_csv("links.csv", sep=',') # + # Create a Pandas Excel writer using XlsxWriter as the engine. writer = pd.ExcelWriter('links.xlsx', engine='xlsxwriter') # Convert the dataframe to an XlsxWriter Excel object. links.to_excel(writer, sheet_name='links') # Close the Pandas Excel writer and output the Excel file. writer.save() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="_wxmLPc2YysH" colab_type="text" # # An RNN for short-term predictions # This model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data. # # This is the solution notebook. 
The corresponding work notebook is here: [00_Keras_RNN_predictions_playground.ipynb](https://colab.research.google.com/github/GoogleCloudPlatform/tensorflow-without-a-phd/blob/master/tensorflow-rnn-tutorial/00_Keras_RNN_predictions_playground.ipynb) # + id="3uk3AMTcYysJ" colab_type="code" colab={} import numpy as np from matplotlib import pyplot as plt import tensorflow as tf tf.enable_eager_execution() print("Tensorflow version: " + tf.__version__) # + id="Sk3oZyKsZLRf" colab_type="code" cellView="form" colab={} #@title Display utilities [RUN ME] from enum import IntEnum import numpy as np class Waveforms(IntEnum): SINE1 = 0 SINE2 = 1 SINE3 = 2 SINE4 = 3 def create_time_series(waveform, datalen): # Generates a sequence of length datalen # There are three available waveforms in the Waveforms enum # good waveforms frequencies = [(0.2, 0.15), (0.35, 0.3), (0.6, 0.55), (0.4, 0.25)] freq1, freq2 = frequencies[waveform] noise = [np.random.random()*0.2 for i in range(datalen)] x1 = np.sin(np.arange(0,datalen) * freq1) + noise x2 = np.sin(np.arange(0,datalen) * freq2) + noise x = x1 + x2 return x.astype(np.float32) from matplotlib import transforms as plttrans plt.rcParams['figure.figsize']=(16.8,6.0) plt.rcParams['axes.grid']=True plt.rcParams['axes.linewidth']=0 plt.rcParams['grid.color']='#DDDDDD' plt.rcParams['axes.facecolor']='white' plt.rcParams['xtick.major.size']=0 plt.rcParams['ytick.major.size']=0 def picture_this_1(data, datalen): plt.subplot(211) plt.plot(data[datalen-512:datalen+512]) plt.axvspan(0, 512, color='black', alpha=0.06) plt.axvspan(512, 1024, color='grey', alpha=0.04) plt.subplot(212) plt.plot(data[3*datalen-512:3*datalen+512]) plt.axvspan(0, 512, color='grey', alpha=0.04) plt.axvspan(512, 1024, color='black', alpha=0.06) plt.show() def picture_this_2(data, batchsize, seqlen): samples = np.reshape(data, [-1, batchsize, seqlen]) rndsample = samples[np.random.choice(samples.shape[0], 8, replace=False)] print("Tensor shape of a batch of training sequences: " + str(rndsample[0].shape)) print("Random excerpt:") subplot = 241 for i in range(8): plt.subplot(subplot) plt.plot(rndsample[i, 0]) # first sequence in random batch subplot += 1 plt.show() def picture_this_3(predictions, evaldata, evallabels, seqlen): subplot = 241 colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] for i in range(8): plt.subplot(subplot) #k = int(np.random.rand() * evaldata.shape[0]) l0, = plt.plot(evaldata[i, 1:], label="data") plt.plot([seqlen-2, seqlen-1], evallabels[i, -2:]) l1, = plt.plot([seqlen-1], [predictions[i]], "o", color="red", label='Predicted') l2, = plt.plot([seqlen-1], [evallabels[i][-1]], "o", color=colors[1], label='Ground Truth') if i==0: plt.legend(handles=[l0, l1, l2]) subplot += 1 plt.show() def picture_this_hist(rmse1, rmse2, rmse3, rmse): colors = ['#4285f4', '#34a853', '#fbbc05', '#ea4334'] plt.figure(figsize=(5,4)) plt.xticks(rotation='40') plt.title('RMSE: your model vs. 
simplistic approaches') plt.bar(['RND', 'LAST', 'LAST2', 'Yours'], [rmse1, rmse2, rmse3, rmse], color=colors) plt.show() def picture_this_hist_all(rmse1, rmse2, rmse3, rmse4, rmse5, rmse6, rmse7, rmse8): colors = ['#4285f4', '#34a853', '#fbbc05', '#ea4334', '#4285f4', '#34a853', '#fbbc05', '#ea4334'] plt.figure(figsize=(7,4)) plt.xticks(rotation='40') plt.ylim(0, 0.35) plt.title('RMSE: all models') plt.bar(['RND', 'LAST', 'LAST2', 'LINEAR', 'DNN', 'CNN', 'RNN', 'RNN_N'], [rmse1, rmse2, rmse3, rmse4, rmse5, rmse6, rmse7, rmse8], color=colors) plt.show() # + [markdown] id="RJ-1YaKQYysO" colab_type="text" # ## Generate fake dataset # + id="pAhYNwMUYysP" colab_type="code" outputId="6338b0e2-1d3a-48f0-b655-ad83aa34af9c" colab={"base_uri": "https://localhost:8080/", "height": 375} DATA_SEQ_LEN = 1024*128 data = np.concatenate([create_time_series(waveform, DATA_SEQ_LEN) for waveform in Waveforms]) # 4 different wave forms picture_this_1(data, DATA_SEQ_LEN) DATA_LEN = DATA_SEQ_LEN * 4 # since we concatenated 4 sequences # + [markdown] id="Yt3TIQ6EYysS" colab_type="text" # ## Hyperparameters # + id="FNw-tcLPYysS" colab_type="code" colab={} RNN_CELLSIZE = 32 # size of the RNN cells SEQLEN = 16 # unrolled sequence length BATCHSIZE = 32 # mini-batch size LAST_N = SEQLEN//2 # loss computed on last N element of sequence in advanced RNN model # + [markdown] id="bzLvRzgjYysU" colab_type="text" # ## Visualize training sequences # This is what the neural network will see during training. # + id="IxTQ2-BRYysV" colab_type="code" outputId="4a5fdb1c-6898-4e46-c8dd-52ee5772c440" colab={"base_uri": "https://localhost:8080/", "height": 409} picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences # + [markdown] id="SGh2-0E6YysZ" colab_type="text" # ## Benchmark model definitions # We will compare the RNNs against these models. For the time being you can regard them as black boxes. # + id="vVNz_D8bd8lA" colab_type="code" colab={} # this is how to create a Keras model from neural network layers def compile_keras_sequential_model(list_of_layers, msg): # a tf.keras.Sequential model is a sequence of layers model = tf.keras.Sequential(list_of_layers) # keras does not have a pre-defined metric for Root Mean Square Error. Let's define one. def rmse(y_true, y_pred): # Root Mean Squared Error return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) print('\nModel ', msg) # to finalize the model, specify the loss, the optimizer and metrics model.compile( loss = 'mean_squared_error', optimizer = 'rmsprop', metrics = [rmse]) # this prints a description of the model model.summary() return model # # three very simplistic "models" that require no training. Can you beat them ? 
# # SIMPLISTIC BENCHMARK MODEL 1 predict_same_as_last_value = lambda x: x[:,-1] # shape of x is [BATCHSIZE,SEQLEN] # SIMPLISTIC BENCHMARK MODEL 2 predict_trend_from_last_two_values = lambda x: x[:,-1] + (x[:,-1] - x[:,-2]) # SIMPLISTIC BENCHMARK MODEL 3 predict_random_value = lambda x: tf.random.uniform(tf.shape(x)[0:1], -2.0, 2.0) def model_layers_from_lambda(lambda_fn, input_shape, output_shape): return [tf.keras.layers.Lambda(lambda_fn, input_shape=input_shape), tf.keras.layers.Reshape(output_shape)] model_layers_RAND = model_layers_from_lambda(predict_random_value, input_shape=[SEQLEN,], output_shape=[1,]) model_layers_LAST = model_layers_from_lambda(predict_same_as_last_value, input_shape=[SEQLEN,], output_shape=[1,]) model_layers_LAST2 = model_layers_from_lambda(predict_trend_from_last_two_values, input_shape=[SEQLEN,], output_shape=[1,]) # # three neural network models for comparison, in increasing order of complexity # l = tf.keras.layers # syntax shortcut # BENCHMARK MODEL 4: linear model (RMSE: 0.215 after 10 epochs) model_layers_LINEAR = [l.Dense(1, input_shape=[SEQLEN,])] # output shape [BATCHSIZE, 1] # BENCHMARK MODEL 5: 2-layer dense model (RMSE: 0.197 after 10 epochs) model_layers_DNN = [l.Dense(SEQLEN//2, activation='relu', input_shape=[SEQLEN,]), # input shape [BATCHSIZE, SEQLEN] l.Dense(1)] # output shape [BATCHSIZE, 1] # BENCHMARK MODEL 6: convolutional (RMSE: 0.186 after 10 epochs) model_layers_CNN = [ l.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for conv model l.Conv1D(filters=8, kernel_size=4, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] l.Conv1D(filters=16, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] l.Conv1D(filters=8, kernel_size=1, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] l.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//2, 8] l.Conv1D(filters=8, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN//2, 8] l.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//4, 8] # mis-using a conv layer as linear regression :-) l.Conv1D(filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid"), # output shape [BATCHSIZE, 1, 1] l.Reshape([1,]) ] # output shape [BATCHSIZE, 1] # + id="04lT11FL7FBL" colab_type="code" outputId="d53a620e-7aa3-4b00-f42b-48b73c7acefb" colab={"base_uri": "https://localhost:8080/", "height": 1695} # instantiate the benchmark models model_RAND = compile_keras_sequential_model(model_layers_RAND, "RAND") model_LAST = compile_keras_sequential_model(model_layers_LAST, "LAST") model_LAST2 = compile_keras_sequential_model(model_layers_LAST2, "LAST2") model_LINEAR = compile_keras_sequential_model(model_layers_LINEAR, "LINEAR") model_DNN = compile_keras_sequential_model(model_layers_DNN, "DNN") model_CNN = compile_keras_sequential_model(model_layers_CNN, "CNN") # + [markdown] id="idWk7K3FYysn" colab_type="text" # ## Prepare datasets # + id="ydOmIaCtYyso" colab_type="code" colab={} # training to predict the same sequence shifted by one (next value) labeldata = np.roll(data, -1) # cut data into sequences traindata = np.reshape(data, [-1, SEQLEN]) labeldata = np.reshape(labeldata, [-1, SEQLEN]) # make an evaluation dataset by cutting the sequences differently evaldata = np.roll(data, -SEQLEN//2) evallabels = np.roll(evaldata, -1) evaldata = np.reshape(evaldata, [-1, SEQLEN]) evallabels = np.reshape(evallabels, [-1, SEQLEN]) def get_training_dataset(last_n=1): dataset = tf.data.Dataset.from_tensor_slices( ( 
traindata, # features labeldata[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence ) ) # Dataset API used here to put the dataset into shape dataset = dataset.repeat() dataset = dataset.shuffle(DATA_LEN//SEQLEN) # shuffling is important ! (Number of sequences in shuffle buffer: all of them) dataset = dataset.batch(BATCHSIZE, drop_remainder = True) return dataset def get_evaluation_dataset(last_n=1): dataset = tf.data.Dataset.from_tensor_slices( ( evaldata, # features evallabels[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence ) ) # Dataset API used here to put the dataset into shape dataset = dataset.batch(evaldata.shape[0], drop_remainder = True) # just one batch with everything return dataset # + [markdown] id="Q443hxmJ5LNe" colab_type="text" # ### Peek at the data # + id="FN027ePq5Kxy" colab_type="code" outputId="014732eb-6292-4382-c0c5-43b282db16f5" colab={"base_uri": "https://localhost:8080/", "height": 190} train_ds = get_training_dataset() for features, labels in train_ds.take(10): print("input_shape:", features.numpy().shape, ", shape of labels:", labels.numpy().shape) # + [markdown] id="CuNpxRbP_QYQ" colab_type="text" # ## RNN models # + [markdown] id="XEyJklu-Yysi" colab_type="text" # ![deep RNN schematic](https://googlecloudplatform.github.io/tensorflow-without-a-phd/images/RNN1.svg) #
    # X shape [BATCHSIZE, SEQLEN, 1]
    # Y shape [BATCHSIZE, SEQLEN, 1]
    # H shape [BATCHSIZE, RNN_CELLSIZE*N_LAYERS]
    # + id="zPsQiS8pYysi" colab_type="code" outputId="e187785a-e403-443e-eb89-4385bda7d0d2" colab={"base_uri": "https://localhost:8080/", "height": 311} # RNN model (RMSE: 0.164 after 10 epochs) model_layers_RNN = [ l.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model l.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE] l.GRU(RNN_CELLSIZE), # keep only last output in sequence: output shape [BATCHSIZE, RNN_CELLSIZE] l.Dense(1) # output shape [BATCHSIZE, 1] ] model_RNN = compile_keras_sequential_model(model_layers_RNN, "RNN") # + id="D5Kp1zjx4dY1" colab_type="code" outputId="32c3dea7-1dce-4b76-b259-5abcdef1b1a8" colab={"base_uri": "https://localhost:8080/", "height": 345} # RNN model with loss computed on last N elements (RMSE: 0.163 after 10 epochs) model_layers_RNN_N = [ l.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model l.GRU(RNN_CELLSIZE, return_sequences=True), l.GRU(RNN_CELLSIZE, return_sequences=True), # output shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE] l.TimeDistributed(l.Dense(1)), # output shape [BATCHSIZE, SEQLEN, 1] l.Lambda(lambda x: x[:,-LAST_N:SEQLEN,0]) # last N item(s) in sequence: output shape [BATCHSIZE, LAST_N] ] model_RNN_N = compile_keras_sequential_model(model_layers_RNN_N, 'RNN_N') # + [markdown] id="xKoWDXEC_InP" colab_type="text" # ## Training loop # + id="KtqXxnUHhTHi" colab_type="code" outputId="76a53d4e-47e2-4251-bead-dafdd3a9555a" colab={"base_uri": "https://localhost:8080/", "height": 193} # You can re-execute this cell to continue training NB_EPOCHS = 3 # number of times the data is repeated during training steps_per_epoch = DATA_LEN // SEQLEN // BATCHSIZE model = model_RNN_N # model to train: model_LINEAR, model_DNN, model_CNN, model_RNN, model_RNN_N train_ds = get_training_dataset(last_n=LAST_N) # use last_n=LAST_N for model_RNN_N history = model.fit(train_ds, steps_per_epoch=steps_per_epoch, epochs=NB_EPOCHS) # + id="br7nVK9ijnCU" colab_type="code" outputId="64b08b73-ea03-4d44-c4ad-b92ace44c178" colab={"base_uri": "https://localhost:8080/", "height": 375} plt.plot(history.history['loss']) plt.show() # + [markdown] id="N9RkMeEXW_qQ" colab_type="text" # ## Evaluation # + id="GyOzPv8Znu61" colab_type="code" outputId="e53330a2-ec94-46b2-da0b-28c4b4770191" colab={"base_uri": "https://localhost:8080/", "height": 315} # Here "evaluating" using the training dataset eval_ds = get_evaluation_dataset(last_n=LAST_N) # use last_n=LAST_N for model_RNN_N loss, rmse = model.evaluate(eval_ds, steps=1) # compare agains simplistic benchmark models that require no training ds = get_evaluation_dataset() _, rmse1 = model_RAND.evaluate(get_evaluation_dataset(), steps=1, verbose=0) _, rmse2 = model_LAST.evaluate(get_evaluation_dataset(), steps=1, verbose=0) _, rmse3 = model_LAST2.evaluate(get_evaluation_dataset(), steps=1, verbose=0) picture_this_hist(rmse1, rmse2, rmse3, rmse) # + [markdown] id="qJeFuhNRXCmq" colab_type="text" # ## Predictions # + id="YpewCYd2Yys3" colab_type="code" outputId="7f1a4bc1-d959-414b-eb7b-f075e6440f09" colab={"base_uri": "https://localhost:8080/", "height": 377} # execute multiple times to see different sample sequences subset = np.random.choice(DATA_LEN//SEQLEN, 8) # pick 8 eval sequences at random predictions = model.predict(evaldata[subset], steps=1) # prediction directly from numpy array picture_this_3(predictions[:,-1], evaldata[subset], evallabels[subset], SEQLEN) # + [markdown] id="0ScB9vM_Yys6" colab_type="text" # # 
## Benchmark # Benchmark all the algorithms. This takes a while (approx. 10 min). # + id="_EUH_ZsZKh1D" colab_type="code" colab={} # instantiate all models models again model_RAND = compile_keras_sequential_model(model_layers_RAND, "RAND") model_LAST = compile_keras_sequential_model(model_layers_LAST, "LAST") model_LAST2 = compile_keras_sequential_model(model_layers_LAST2, "LAST2") model_LINEAR = compile_keras_sequential_model(model_layers_LINEAR, "LINEAR") model_DNN = compile_keras_sequential_model(model_layers_DNN, "DNN") model_CNN = compile_keras_sequential_model(model_layers_CNN, "CNN") model_RNN = compile_keras_sequential_model(model_layers_RNN, "RNN") model_RNN_N = compile_keras_sequential_model(model_layers_RNN_N, 'RNN_N') # + id="pzUW6uMtYys6" colab_type="code" colab={} NB_EPOCHS = 10 # train all models for i, model in enumerate([model_LINEAR, model_DNN, model_CNN, model_RNN]): print("Trainig ", ["model_LINEAR", "model_DNN", "model_CNN", "model_RNN"][i]) model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_EPOCHS) print("Training ", "model_RNN_N") model_RNN_N.fit(get_training_dataset(last_n=LAST_N), steps_per_epoch=steps_per_epoch, epochs=NB_EPOCHS) # evaluate all models rmses = [] for model in [model_RAND, model_LAST, model_LAST2, model_LINEAR, model_DNN, model_CNN, model_RNN]: _, rmse = model.evaluate(get_evaluation_dataset(), steps=1) rmses.append(rmse) _, rmse = model_RNN_N.evaluate(get_evaluation_dataset(last_n=LAST_N), steps=1) rmses.append(rmse) # + id="gZt1F7yjgdAQ" colab_type="code" outputId="9b83eb92-6cad-470b-d0c5-1ff8d7c33d8c" colab={"base_uri": "https://localhost:8080/", "height": 318} print(rmses) picture_this_hist_all(*rmses) # + [markdown] id="2VgeUgL-Yys8" colab_type="text" # Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0) # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Basic Circuits # ## Create a Circuit # To create a Circuit we first need to define how many **Qubits** it has and how many **output bits**. # + # First we import the QuantumCircuit Library from qiskit import QuantumCircuit, execute, Aer # Create a Quantum Circuit with 8 qubits and 8 output bits num_qubits = 8 num_output_bits = 8 quantum_circuit = QuantumCircuit(num_qubits, num_output_bits) # - # ### Measure the output # To measure the output we define what qubit to read and in what bit to set the output. # # + # quantum_circuit.measure(0,0) # quantum_circuit.measure(1,1) # etc # We comment it so we don't affect the next example (the alternative). # - # Alternative for j in range(8): quantum_circuit.measure(j,j) # ### Draw the circuit # To draw the circuit we have a very simple method: quantum_circuit.draw() # ### Plot the results of measurement # If we want to plot on an histogram what we have measured we can do so with plot_histogram(\).
    # As we are using a Quantum Simulator, it will execute a number of times (1024 times) to get a result. # + from qiskit.visualization import plot_histogram counts = execute(quantum_circuit, Aer.get_backend('qasm_simulator')).result().get_counts() print(counts) # plot_histogram(data, figsize=(7, 5), color=None, number_to_keep=None, # sort='asc', target_string=None, legend=None, bar_labels=True, title=None, ax=None) plot_histogram(counts, title = "Measurement results") # - # ## Encoding an input # ### Not-Gate # To perform a Not-Gate on our circuit, we use the method QuantumCircuit.x() # + quantum_circuit_2 = QuantumCircuit(num_qubits) quantum_circuit_2.x(7) quantum_circuit_2.draw() # - # ### Combine defined Circuits # We have before created quantum_circuit, which measures all 8 Qubits and sets the output.
    # We now have created a second circuit, which performs a Not-Gate on the last Qubit. # # Both circuits can be combined to create a single one with the steps from one and the other. quantum_circuit_combined = quantum_circuit_2 + quantum_circuit quantum_circuit_combined.draw() # + # And we can check the results as before: counts = execute(quantum_circuit_combined,Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts) # - # ### CNOT-Gate # Like a classical XOR Gate. In Qiskit its name is cx qc_cnot = QuantumCircuit(2) qc_cnot.cx(0,1) qc_cnot.draw() # ### CNOT Table # We are going to try all four possible situations and check the output # ![CNOT-Table](img/img001.png) # #### Case 00 -> 00 # + qc00 = QuantumCircuit(2,2) qc00.cx(0,1) qc00.measure(0,0) qc00.measure(1,1) qc00.draw() # - counts = execute(qc00, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") # #### Case 01 -> 11 # + qc01 = QuantumCircuit(2,2) qc01.x(0) # Remember it goes from right to left, so 01 means 0 is qubit one and 1 is qubit zero. qc01.cx(0,1) qc01.measure(0,0) qc01.measure(1,1) qc01.draw() # - counts = execute(qc01, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") # #### Case 10 -> 10 # + qc02 = QuantumCircuit(2,2) qc02.x(1) qc02.cx(0,1) # Remember it goes from right to left, so 01 means 0 is qubit one and 1 is qubit zero. qc02.measure(0,0) qc02.measure(1,1) qc02.draw() # - counts = execute(qc02, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") # #### Case 11 -> 01 # + qc03 = QuantumCircuit(2,2) qc03.x(0) # Remember it goes from right to left, so 01 means 0 is qubit one and 1 is qubit zero. qc03.x(1) qc03.cx(0,1) qc03.measure(0,0) qc03.measure(1,1) qc03.draw() # - counts = execute(qc03, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") # ---- # ### **Important Measurement Note**: # Bits (Output) are **numbered** from right to left: bx...b4,b3,b2,b1,b0 # # # So input 10011 => Q(0)=1, Q(1)=0, Q(2)=0, Q(3)=1, Q(4)=1 # Will be output => b(4)=1, b(3)=0, b(2)=0, b(1)=1, b(0)=1 # # **It doesn't matter how you "number" the input Qubits, but how you save them as outputs** # Check example below. # + # Measurement 1 # We have 1 Qubit and 7 Output bits qc_measure01 = QuantumCircuit(1,7) # Qubit is set to 1 qc_measure01.x(0) # We save output in bit number 6 qc_measure01.measure(0,6) # The ouput is the same counts = execute(qc_measure01, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") #Result: 1000000 # + # Measurement 2 # We have 1 Qubit and 7 Output bits qc_measure02 = QuantumCircuit(1,7) # Qubit is set to 1 qc_measure02.x(0) # This time, we save output in bit number qc_measure02.measure(0,0) # The ouput is the same counts = execute(qc_measure02, Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts, title = "Measurement results") #Result: 0000001 # - # So **careful** how we save the output. 
# Also, remember we can *measure* in the other order (measure one qubit before the other or vice versa) and the output is the same:

# +
# Example
qc03_ex = QuantumCircuit(2,2)
qc03_ex.x(0)
qc03_ex.x(1)
qc03_ex.cx(0,1)

# We do the measurement the other way round
qc03_ex.measure(1,1)
qc03_ex.measure(0,0)
qc03_ex.draw()
# -

# The output is the same
counts = execute(qc03_ex, Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts, title="Measurement results")

# ### Histogram
# It is also important to note that the numbers shown in the histogram represent the "bit" where the information of that qubit is stored.
#
# So for example, we can start a quantum circuit with 2 qubits and 5 output bits.
# We will save the output of Qubit 0 in bit 0 and the output of Qubit 1 in bit 4 (the fifth bit).
# Check the histogram image:

# +
# 2 Qubits and 5 output bits
qc_hist = QuantumCircuit(2,5)

# To have different values we perform an x on both
qc_hist.x(0)
qc_hist.x(1)

# Save output of Qubit 0 in bit number 0
qc_hist.measure(0,0)
# Save output of Qubit 1 in bit number 4 (the fifth one)
qc_hist.measure(1,4)

qc_hist.draw()
# -

# ## Half-Adder
# We are developing a half-adder.
# The output should be:
#
# - 00 -> 0000
# - 01 -> 0110
# - 10 -> 0101
# - 11 -> 0011

# +
qc_ha = QuantumCircuit(4,4)

# Uncomment one of the four input options to test the output.
# Input 1: 00
# ---
# Input 2: 01
# qc_ha.x(1)
# Input 3: 10
# qc_ha.x(0)
# Input 4: 11
# qc_ha.x(0)
# qc_ha.x(1)

qc_ha.barrier()

# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
qc_ha.barrier()

# extract outputs
qc_ha.measure(0,0)
qc_ha.measure(1,1)
qc_ha.measure(2,2)  # extract the XOR value
qc_ha.measure(3,3)

qc_ha.draw()
# -

counts = execute(qc_ha, Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts, title="Measurement results")

# ## Toffoli Gate
# The Toffoli (ccx) gate acts like an AND gate in classical computing.
# The output of this circuit should be:
# - 00 -> 00
# - 01 -> 01
# - 10 -> 01
# - 11 -> 10
#
# Together with the CNOTs, this circuit implements a one-bit sum with carry (a half-adder)!

# +
qc_ha = QuantumCircuit(4,2)

# encode inputs in qubits 0 and 1
# Input 1: 00
# ---
# Input 2: 01
# qc_ha.x(1)
# Input 3: 10
# qc_ha.x(0)
# Input 4: 11
# qc_ha.x(0)
# qc_ha.x(1)

qc_ha.barrier()

# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
# use ccx to write the AND of the inputs on qubit 3
qc_ha.ccx(0,1,3)
qc_ha.barrier()

# extract outputs
qc_ha.measure(2,0)
qc_ha.measure(3,1)

qc_ha.draw()
# -

counts = execute(qc_ha, Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts, title="Measurement results")

# ----

# -*- coding: utf-8 -*-
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .jl
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Julia 1.7.1
#     language: julia
#     name: julia-1.7
# ---

# # MATH50003 Numerical Analysis: Problem Sheet 6
#
# This problem sheet explores condition numbers, indefinite integration,
# and Euler's method.
#
# Questions marked with a ⋆ are meant to be completed without using a computer.

using LinearAlgebra, Plots, QuadGK

# ## 1. Condition numbers
#
#
# **Problem 1.1⋆** Prove that, if $|ϵ_k| ≤ ϵ$ and $n ϵ < 1$, then
# $$
# \prod_{k=1}^n (1+ϵ_k) = 1+θ_n
# $$
# for some constant $θ_n$ satisfying $|θ_n| ≤ {n ϵ \over 1-nϵ}$.
#
# **Problem 1.2⋆** Let $A,B ∈ ℝ^{m × n}$. Prove that if the columns satisfy $\|𝐚_j\|_2 ≤ \| 𝐛_j\|_2$ then
# $\|A\|_F ≤ \|B\|_F$ and $\|A \|_2 ≤ \sqrt{\hbox{rank}(B)} \|B\|_2$.
# # **Problem 1.3⋆** Compute the 1-norm, 2-norm, and ∞-norm condition numbers for the following matrices: # $$ # \begin{bmatrix} # 1 & 2 \\ 3 & 4 # \end{bmatrix}, \begin{bmatrix} # 1/3 & 1/5 \\ 0 & 1/7 # \end{bmatrix}, \begin{bmatrix} 1 \\ & 1/2 \\ && ⋯ \\ &&& 1/2^n \end{bmatrix} # $$ # (Hint: recall that the singular values of a matrix $A$ are the square roots of the eigenvalues of the Gram matrix # $A^⊤A$.) # # # **Problem 1.4** # State a bound on the relative error on $A 𝐯$ for $\|𝐯\|_2 = 1$ for the following matrices: # $$ # \begin{bmatrix} # 1/3 & 1/5 \\ 0 & 1/10^3 # \end{bmatrix}, # \begin{bmatrix} 1 \\ & 1/2 \\ && ⋯ \\ &&& 1/2^{10} \end{bmatrix} # $$ # Compute the relative error in computing $A 𝐯$ (using `big` for a high-precision version to compare against) # where $𝐯$ is the last column of $V$ in the SVD $A = U Σ V^⊤$, computed using the `svd` command with # `Float64` inputs. How does the error compare to the predicted error bound? A = [1/3 1/5 0 1/10^3] bigA = [big(1/3) big(1/5) big(0) big(1/10^3)] sing = svd(A) v = sing.S Av = A * v print(bigA*v-Av) # ## 2. Indefinite integration # # **Problem 2.1** Implement backward differences to approximate # indefinite-integration. How does the error compare to forward # and mid-point versions for $f(x) = \cos x$ on the interval $[0,1]$? # Use the method to approximate the integrals of # $$ # \exp(\exp x \cos x + \sin x), \prod_{k=1}^{1000} \left({x \over k}-1\right), \hbox{ and } f^{\rm s}_{1000}(x) # $$ # to 3 digits, where $f^{\rm s}_{1000}(x)$ was defined in PS2. # + function indefint(x) h = step(x) # x[k+1]-x[k] n = length(x) L = Bidiagonal([1; fill(1/h, n-1)], fill(-1/h, n-1), :L) end n = 1000 x = range(0, 1; length=n) L = indefint(x) m = (x[1:end-1] + x[2:end])/2 # midpoints c = 0 # u(0) = 0 f = x -> exp(exp(x) * cos(x) + sin(x)) 𝐟ᶠ = f.(x[2:end]) 𝐮ᶠ = L \ [c; 𝐟ᶠ] integral = zeros(n) for (index, value) in enumerate(x) if value != 0 integral[index] = quadgk(f, Float64(0), Float64(value))[1] end end plot(x, integral; label="integral", legend=:bottomright) scatter!(x, 𝐮ᶠ; label="forward") # - # **Problem 2.2** Implement indefinite-integration # where we take the average of the two grid points: # $$ # {u'(x_{k+1}) + u'(x_k) \over 2} ≈ {u_{k+1} - u_k \over h} # $$ # What is the observed rate-of-convergence using the ∞-norm for $f(x) = \cos x$ # on the interval $[0,1]$? # Does the method converge if the error is measured in the $1$-norm? # + function mid_err1(u, c, f, n) x = range(0, 1; length = n) m = (x[1:end-1] + x[2:end]) / 2 # midpoints uᵐ = indefint(x) \ [c; f.(m)] norm(uᵐ - u.(x), Inf) end function mid_err2(u, c, f, n) x = range(0, 1; length = n) m = (x[1:end-1] + x[2:end]) / 2 # midpoints uᵐ = indefint(x) \ [c; f.(m)] norm(uᵐ - u.(x), 1) end f = x -> cos(x) ns = 10 .^ (1:8) # solve up to n = 10 million scatter(ns, mid_err1.(sin, 0, f, ns); xscale=:log10, yscale=:log10, label="inf") scatter!(ns, mid_err2.(sin, 0, f, ns); label="1") # - # ## 3. Euler methods # # **Problem 3.1** Solve the following ODEs # using forward and/or backward Euler and increasing $n$, the number of time-steps, # until $u(1)$ is determined to 3 digits: # $$ # \begin{align*} # u(0) &= 1, u'(t) = \cos(t) u(t) + t \\ # v(0) &= 1, v'(0) = 0, v''(t) = \cos(t) v(t) + t \\ # w(0) &= 1, w'(0) = 0, w''(t) = t w(t) + 2 w(t)^2 # \end{align*} # $$ # If we increase the initial condition $w(0) = c > 1$, $w'(0)$ # the solution may blow up in finite time. Find the smallest positive integer $c$ # such that the numerical approximation suggests the equation # does not blow up. 
c = 1 a = t -> -cos(t) diff = 1 n = 1 while diff > 1e-3 n += 1 t = range(0, 1; length=n) h = step(t) L = Bidiagonal([1; fill(1/h, n-1)], a.(t[1:end-1]) .- 1/h, :L) sol = L \ [c; t[1:end-1]] diff = abs(sol[n]-2.96717791482681526423546014172681123622410405663335015) end print(n, " ", diff, " ", sol[n]) # **Problem 3.2⋆** For an evenly spaced grid $t_1, …, t_n$, use the approximation # $$ # {u'(t_{k+1}) + u'(t_k) \over 2} ≈ {u_{k+1} - u_k \over h} # $$ # to recast # $$ # \begin{align*} # u(0) &= c \\ # u'(t) &= a(t) u(t) + f(t) # \end{align*} # $$ # as a lower bidiagonal linear system. Use forward-substitution to extend this to vector linear problems: # $$ # \begin{align*} # 𝐮(0) &= 𝐜 \\ # 𝐮'(t) &= A(t) 𝐮(t) + 𝐟(t) # \end{align*} # $$ # # # **Problem 3.3** Implement the method designed in Problem 3.1 for the negative time Airy equation # $$ # u(0) = 1, u'(0) = 0, u''(t) = -t u(t) # $$ # on $[0,50]$. # How many time-steps are needed to get convergence to 1% accuracy (the "eyeball norm")? # **Problem 3.4** Implement Heat on a graph with $m = 50$ nodes with no forcing # and initial condition $u_{m/2}(0) = 1$ and $u_k(0) = 0$, but where the first and last node are fixed # to zero, that is replace the first and last rows of the differential equation with # the conditions: # $$ # u_1(t) = u_m(t) = 0. # $$ # Find the time $t$ such that $\|𝐮(t)\|_∞ <10^{-3}$ to 2 digits. # Hint: Differentiate to recast the conditions as a differential equation. # Vary $n$, the number of time-steps used in Forward Euler, and increase $T$ in the interval $[0,T]$ # until the answer doesn't change. # Do a for loop over the time-slices to find the first that satisfies the condition. # (You may wish to call `println` to print the answer and `break` to exit the for loop). # # #### **Problem 3.5** Consider the equation # $$ # u(1) = 1, u'(t) = -10u(t) # $$ # What behaviour do you observe on $[0,10]$ of forward, backward, and that of Problem 3.1 # with a step-size of 0.5? What happens if you decrease the step-size to $0.01$? (Hint: you may wish to do a plot and scale the $y$-axis logarithmically,) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Low-Rank Autoregressive Tensor Completion Imputer (LATC-imputer) # # This notebook shows how to implement a LATC (with truncated nuclear norm) imputer on three real-world data sets (i.e., PeMS traffic speed data, Guangzhou traffic speed data, Electricity data). To overcome the problem of missing values within multivariate time series data, this method takes into account both low-rank structure and time series regression. For an in-depth discussion of LATC-imputer, please see [1]. # #
#
# [1] , , (2020). Low-Rank Autoregressive Tensor Completion for Multivariate Time Series Forecasting. arXiv:2006.10436.
#

import numpy as np
from numpy.linalg import inv

# ### Define LATC-imputer kernel
#
# We start by introducing some necessary functions that rely on `NumPy`:
#
# - `ten2mat`: Unfold a tensor into a matrix by specifying the mode.
# - `mat2ten`: Fold a matrix back into a tensor by specifying the dimensions (i.e., tensor size) and the mode.
# - `svt_tnn`: Implement the Singular Value Thresholding (SVT) step for the truncated nuclear norm.
    # + def ten2mat(tensor, mode): return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F') def mat2ten(mat, dim, mode): index = list() index.append(mode) for i in range(dim.shape[0]): if i != mode: index.append(i) return np.moveaxis(np.reshape(mat, list(dim[index]), order = 'F'), 0, mode) # - def svt_tnn(mat, tau, theta): [m, n] = mat.shape if 2 * m < n: u, s, v = np.linalg.svd(mat @ mat.T, full_matrices = 0) s = np.sqrt(s) idx = np.sum(s > tau) mid = np.zeros(idx) mid[: theta] = 1 mid[theta : idx] = (s[theta : idx] - tau) / s[theta : idx] return (u[:, : idx] @ np.diag(mid)) @ (u[:, : idx].T @ mat) elif m > 2 * n: return svt_tnn(mat.T, tau, theta).T u, s, v = np.linalg.svd(mat, full_matrices = 0) idx = np.sum(s > tau) vec = s[: idx].copy() vec[theta : idx] = s[theta : idx] - tau return u[:, : idx] @ np.diag(vec) @ v[: idx, :] #
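# To make the mode-k unfolding concrete, here is a small usage example (not from the original notebook) that
# round-trips a random third-order tensor through `ten2mat` and `mat2ten` and checks shapes and values.

# +
small_tensor = np.random.rand(4, 5, 6)
dim = np.array(small_tensor.shape)
for mode in range(3):
    unfolded = ten2mat(small_tensor, mode)   # shape (dim[mode], product of the remaining dimensions)
    refolded = mat2ten(unfolded, dim, mode)  # folds back to shape (4, 5, 6)
    print(mode, unfolded.shape, np.allclose(refolded, small_tensor))
# -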
#
# - `compute_mape`: Compute the Mean Absolute Percentage Error (MAPE).
# - `compute_rmse`: Compute the Root Mean Square Error (RMSE).
#
    # # > Note that $$\mathrm{MAPE}=\frac{1}{n} \sum_{i=1}^{n} \frac{\left|y_{i}-\hat{y}_{i}\right|}{y_{i}} \times 100, \quad\mathrm{RMSE}=\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}},$$ where $n$ is the total number of estimated values, and $y_i$ and $\hat{y}_i$ are the actual value and its estimation, respectively. # + def compute_mape(var, var_hat): return np.sum(np.abs(var - var_hat) / var) / var.shape[0] def compute_rmse(var, var_hat): return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0]) # - # The main idea behind LATC-imputer is to approximate partially observed data with both low-rank structure and time series dynamics. The following `imputer` kernel includes some necessary inputs: # #
#
# - `dense_tensor`: The ground-truth tensor used for validation. If it is not available, you can use `dense_tensor = sparse_tensor.copy()` instead.
# - `sparse_tensor`: The partially observed tensor, which has many missing entries.
# - `time_lags`: Time lags, e.g., `time_lags = np.array([1, 2, 3])`.
# - `alpha`: Weights for the tensor nuclear norms, e.g., `alpha = np.ones(3) / 3`.
# - `rho`: Learning rate for ADMM, e.g., `rho = 0.0005`.
# - `lambda0`: Weight for the time series regressor, e.g., `lambda0 = 5 * rho`. If `lambda0 = 0`, this imputer reduces to standard low-rank tensor completion (i.e., High-accuracy Low-Rank Tensor Completion, or HaLRTC).
# - `epsilon`: Stopping criterion, e.g., `epsilon = 0.001`.
# - `maxiter`: Maximum number of iterations, e.g., `maxiter = 50`.
#
# (An example configuration is sketched right after this list.)
#
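# As a concrete illustration, a typical configuration (the values are simply the ones used for the Guangzhou
# random-missing experiment below, not general recommendations) looks like this:

# +
example_time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147])  # short lags plus lags around the daily period
example_alpha = np.ones(3) / 3        # equal weights on the three unfoldings
example_rho = 1e-4                    # ADMM learning rate
example_lambda0 = 5 * example_rho     # weight of the autoregressive regularizer
example_theta = 30                    # truncation parameter of the truncated nuclear norm
example_epsilon = 1e-4                # convergence tolerance
example_maxiter = 100                 # iteration cap

# and the call itself has the form (the tensors are not loaded yet at this point):
# tensor_hat = imputer(dense_tensor, sparse_tensor, example_time_lags, example_alpha,
#                      example_rho, example_lambda0, example_theta, example_epsilon, example_maxiter)
# -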
    # def imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho0, lambda0, theta, epsilon, maxiter): """Low-Rank Autoregressive Tensor Completion, LATC-imputer.""" dim = np.array(sparse_tensor.shape) dim_time = np.int(np.prod(dim) / dim[0]) d = len(time_lags) max_lag = np.max(time_lags) sparse_mat = ten2mat(sparse_tensor, 0) pos_missing = np.where(sparse_mat == 0) pos_test = np.where((dense_tensor != 0) & (sparse_tensor == 0)) X = np.zeros(np.insert(dim, 0, len(dim))) # \boldsymbol{\mathcal{X}} T = np.zeros(np.insert(dim, 0, len(dim))) # \boldsymbol{\mathcal{T}} Z = sparse_mat.copy() # \boldsymbol{Z} Z[pos_missing] = np.mean(sparse_mat[sparse_mat != 0]) A = 0.001 * np.random.rand(dim[0], d) # \boldsymbol{A} it = 0 ind = np.zeros((d, dim_time - max_lag), dtype = np.int_) for i in range(d): ind[i, :] = np.arange(max_lag - time_lags[i], dim_time - time_lags[i]) last_mat = sparse_mat.copy() snorm = np.linalg.norm(sparse_mat, 'fro') rho = rho0 while True: rho = min(rho*1.05, 1e5) for k in range(len(dim)): X[k] = mat2ten(svt_tnn(ten2mat(mat2ten(Z, dim, 0) - T[k] / rho, k), alpha[k] / rho, theta), dim, k) tensor_hat = np.einsum('k, kmnt -> mnt', alpha, X) mat_hat = ten2mat(tensor_hat, 0) mat0 = np.zeros((dim[0], dim_time - max_lag)) if lambda0 > 0: for m in range(dim[0]): Qm = mat_hat[m, ind].T A[m, :] = np.linalg.pinv(Qm) @ Z[m, max_lag :] mat0[m, :] = Qm @ A[m, :] mat1 = ten2mat(np.mean(rho * X + T, axis = 0), 0) Z[pos_missing] = np.append((mat1[:, : max_lag] / rho), (mat1[:, max_lag :] + lambda0 * mat0) / (rho + lambda0), axis = 1)[pos_missing] else: Z[pos_missing] = (ten2mat(np.mean(X + T / rho, axis = 0), 0))[pos_missing] T = T + rho * (X - np.broadcast_to(mat2ten(Z, dim, 0), np.insert(dim, 0, len(dim)))) tol = np.linalg.norm((mat_hat - last_mat), 'fro') / snorm last_mat = mat_hat.copy() it += 1 if it % 100 == 0: print('Iter: {}'.format(it)) print('Tolerance: {:.6}'.format(tol)) print('MAPE: {:.6}'.format(compute_mape(dense_tensor[pos_test], tensor_hat[pos_test]))) print('RMSE: {:.6}'.format(compute_rmse(dense_tensor[pos_test], tensor_hat[pos_test]))) print() if (tol < epsilon) or (it >= maxiter): break print('Total iteration: {}'.format(it)) print('Tolerance: {:.6}'.format(tol)) print('Imputation MAPE: {:.6}'.format(compute_mape(dense_tensor[pos_test], tensor_hat[pos_test]))) print('Imputation RMSE: {:.6}'.format(compute_rmse(dense_tensor[pos_test], tensor_hat[pos_test]))) print() return tensor_hat # ### Guangzhou data # # We generate **random missing (RM)** values on Guangzhou traffic speed data set. # + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.2 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_tensor,binary_tensor # - # We use `imputer` to fill in the missing entries and measure performance metrics on the ground truth. 
import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 30 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.4 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_tensor,binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 30 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.6 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_tensor,binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 30 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # We generate **non-random missing (NM)** values on Guangzhou traffic speed data set. Then, we conduct the imputation experiment. 
# + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] missing_rate = 0.2 ### Non-random missing (NM) scenario: binary_tensor = np.zeros(dense_tensor.shape) for i1 in range(dense_tensor.shape[0]): for i2 in range(dense_tensor.shape[1]): binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] missing_rate = 0.4 ### Non-random missing (NM) scenario: binary_tensor = np.zeros(dense_tensor.shape) for i1 in range(dense_tensor.shape[0]): for i2 in range(dense_tensor.shape[1]): binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] missing_rate = 0.6 ### Non-random missing (NM) scenario: binary_tensor = np.zeros(dense_tensor.shape) for i1 in range(dense_tensor.shape[0]): for i2 in range(dense_tensor.shape[1]): binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate) sparse_tensor = np.multiply(dense_tensor, binary_tensor) dense_tensor = np.transpose(dense_tensor, [0, 2, 1]) sparse_tensor = np.transpose(sparse_tensor, [0, 2, 1]) del tensor, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 142, 143, 144, 145, 146, 147]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # ### PeMS data # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_tensor = np.load('../datasets/PeMS-data-set/random_tensor.npy') missing_rate = 0.2 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) 
sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 15 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_tensor = np.load('../datasets/PeMS-data-set/random_tensor.npy') missing_rate = 0.4 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 15 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_tensor = np.load('../datasets/PeMS-data-set/random_tensor.npy') missing_rate = 0.6 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 5 * rho theta = 15 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_matrix = np.load('../datasets/PeMS-data-set/random_matrix.npy') missing_rate = 0.2 ### Nonrandom missing (NM) scenario: binary_tensor = np.zeros((dense_mat.shape[0], 288, 44)) for i1 in range(dense_mat.shape[0]): for i2 in range(44): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_matrix = np.load('../datasets/PeMS-data-set/random_matrix.npy') missing_rate = 0.4 ### Nonrandom missing (NM) scenario: 
binary_tensor = np.zeros((dense_mat.shape[0], 288, 44)) for i1 in range(dense_mat.shape[0]): for i2 in range(44): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/PeMS-data-set/pems.npy') random_matrix = np.load('../datasets/PeMS-data-set/random_matrix.npy') missing_rate = 0.6 ### Nonrandom missing (NM) scenario: binary_tensor = np.zeros((dense_mat.shape[0], 288, 44)) for i1 in range(dense_mat.shape[0]): for i2 in range(44): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 286, 287, 288, 289, 290, 291]) alpha = np.ones(3) / 3 rho = 1e-4 lambda0 = 1 * rho theta = 10 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # ### Electricity data # - **Random Missing (RM)**: # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_tensor = np.load('../datasets/Electricity-data-set/random_tensor.npy') missing_rate = 0.2 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_tensor = np.load('../datasets/Electricity-data-set/random_tensor.npy') missing_rate = 0.4 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, 
alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_tensor = np.load('../datasets/Electricity-data-set/random_tensor.npy') missing_rate = 0.6 ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) sparse_mat = np.multiply(dense_mat, ten2mat(binary_tensor, 0)) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_tensor, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # - **Nonrandom Missing (NM)**: # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_matrix = np.load('../datasets/Electricity-data-set/random_matrix.npy') missing_rate = 0.2 ### Nonrandom missing (NM) scenario: binary_tensor = np.zeros((dense_mat.shape[0], 24, 35)) for i1 in range(dense_mat.shape[0]): for i2 in range(35): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_matrix = np.load('../datasets/Electricity-data-set/random_matrix.npy') missing_rate = 0.4 ### Nonrandom missing (NM) scenario: binary_tensor = np.zeros((dense_mat.shape[0], 24, 35)) for i1 in range(dense_mat.shape[0]): for i2 in range(35): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # + dense_mat = np.load('../datasets/Electricity-data-set/electricity35.npy') random_matrix = np.load('../datasets/Electricity-data-set/random_matrix.npy') missing_rate = 0.6 ### Nonrandom missing (NM) scenario: binary_tensor = np.zeros((dense_mat.shape[0], 24, 35)) for i1 in range(dense_mat.shape[0]): for i2 in range(35): binary_tensor[i1,:,i2] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) binary_mat = ten2mat(binary_tensor, 0) sparse_mat = 
np.multiply(dense_mat, binary_mat) sparse_tensor = mat2ten(sparse_mat, np.array(binary_tensor.shape), 0) dense_tensor = mat2ten(dense_mat, np.array(binary_tensor.shape), 0) del dense_mat, random_matrix, binary_tensor # - import time start = time.time() time_lags = np.array([1, 2, 3, 4, 5, 6, 22, 23, 24, 25, 26, 27]) alpha = np.ones(3) / 3 rho = 1e-6 lambda0 = 5 * rho theta = 1 epsilon = 1e-4 maxiter = 100 tensor_hat = imputer(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter) end = time.time() print('Running time: %d seconds'%(end - start)) # ### License # #
    # This work is released under the MIT license. #
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Classification metrics

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv('pokemon.csv')
print(df.shape)
df.head()

predictors = ['attack', 'defense', 'speed']
X = df[predictors].values
y = df['is_legendary'].values

X

y[:10]

y.mean()

# +
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2)
# -

y_train.mean(), y_test.mean()

# +
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(C=10**6)
model.fit(X_train, y_train)
model.coef_
# -

p_hat = model.predict_proba(X_test)[:,-1]
p_hat[:20]  # P(y = 1)

y_hat = model.predict(X_test)
y_hat[:20]

# +
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=10000)
rf.fit(X_train, y_train)

q_hat = rf.predict_proba(X_test)[:,-1]
y_rf = rf.predict(X_test)
# -

# ## 1. Fraction of correct answers: accuracy

np.mean(y_hat == y_test)

from sklearn.metrics import accuracy_score

accuracy_score(y_test, y_hat)

y_zero_hat = np.zeros_like(y_test)
y_zero_hat

accuracy_score(y_test, y_zero_hat)

# ## 2. Confusion matrix, recall and precision

tr = 0.1
p_hat = model.predict_proba(X_test)[:,-1]
y_hat = 1*(p_hat > tr)
y_hat

from sklearn.metrics import confusion_matrix

confusion_matrix(y_hat, y_test)

confusion_matrix(y_rf, y_test)

# + active=""
#              y = 0   y = 1
# y_hat = 0     TN      FN
# y_hat = 1     FP      TP
#
# How many of the truly existing positives did I find?
# Recall = TP/(TP + FN)
#
# How precise am I when I predict a positive?
# Precision = TP/(TP + FP)
#
# -

from sklearn.metrics import recall_score, precision_score

print( recall_score(y_test, y_hat) )
print( precision_score(y_test, y_hat) )

print( recall_score(y_test, y_rf) )
print( precision_score(y_test, y_rf) )

# ## 3. Precision-Recall curve
#
# > LOL! What if we sweep the threshold, look at the metrics we get for each value, and plot them?

# +
from sklearn.metrics import precision_recall_curve

plt.figure(figsize=(6,5))

pr, rc, tres = precision_recall_curve(y_test, p_hat)
plt.plot(rc, pr, lw=2, label='logreg')

pr, rc, tres = precision_recall_curve(y_test, q_hat)
plt.plot(rc, pr, lw=2, label='rf')

plt.xlabel('Recall', fontsize=16)
plt.ylabel('Precision', fontsize=16)
plt.legend(fontsize=16);
# -

# How do we make a decision?
#
# - We can say that we want the maximum precision subject to a constraint on recall
#
# $$
# \max_{tr} \ (\text{precision} \mid \text{recall} \ge 0.4)
# $$

ind = np.where(pr == pr[rc >= 0.4].max())
pr[ind][0], rc[ind][0], tres[ind][0]

# > LMAO! This strategy can be extended: we can instead put constraints on precision, or on other business quantities.

# - Or we can encode our preferences over the set of metrics => the F-measure, if the preferences are convex

from sklearn.metrics import f1_score

f1_score(y_test, y_hat)

# > OMG, every metric we have looked at so far depends on the threshold, but we would like metrics that do not,
# > so that we are not stuck on choosing a threshold. That is, we first want to train the models and understand
# > which one is best, and only as a second step pick the threshold for making decisions.
#
# => PR-AUC (the area under the PR curve)

from sklearn.metrics import average_precision_score

average_precision_score(y_test, p_hat)  # use the predicted probabilities, not the hard labels

average_precision_score(y_test, q_hat)
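# Before moving on to ROC curves, a short aside (not from the original notebook): if precision and recall are
# not equally important to us, scikit-learn's `fbeta_score` generalizes the F-measure; `beta > 1` weights
# recall more heavily, `beta < 1` weights precision more heavily.

# +
from sklearn.metrics import fbeta_score

print( fbeta_score(y_test, y_hat, beta=2) )    # recall-oriented
print( fbeta_score(y_test, y_hat, beta=0.5) )  # precision-oriented
# -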
# ## 4. ROC curve and ROC-AUC

# + active=""
# The model ranked the observations correctly:
#   y = 1   P(y = 1) = 0.8
#   y = 0   P(y = 1) = 0.2
#
# The model ranked the observations incorrectly:
#   y = 1   P(y = 1) = 0.3
#   y = 0   P(y = 1) = 0.7
#
# I can take every pair made of one zero and one one, check how well the pair is ordered,
# and then compute the share of correctly ordered pairs => ROC-AUC
# -

from sklearn.metrics import roc_auc_score

roc_auc_score(y_test, p_hat)

# +
from sklearn.metrics import roc_curve

plt.figure(figsize=(6,5))

fpr, tpr, tres = roc_curve(y_test, p_hat)
plt.plot(fpr, tpr, lw=2, label='logreg')

fpr, tpr, tres = roc_curve(y_test, q_hat)
plt.plot(fpr, tpr, lw=2, label='rf')

plt.plot([0, 1], [0, 1], linestyle='dashed', color='grey')

plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.legend(fontsize=16);
# -
#

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import os
import csv

csv_path = os.path.join(".", "Resources", "Election_data.csv")
file_to_output = os.path.join("Output", "Election_result.txt")

total = 0
list_1 = []
votes = {}
winning_candidate = ""
winning_count = 0

with open(csv_path, newline="") as csvfile:
    csvreader = csv.reader(csvfile, delimiter=",")
    csv_header = next(csvreader)  # skip the header row
    for row in csvreader:
        total += 1
        candidate_name = row[2]
        if candidate_name not in list_1:
            list_1.append(candidate_name)
            votes[candidate_name] = 0
        votes[candidate_name] = votes[candidate_name] + 1

with open(file_to_output, "w") as txt_file:
    election_results = (
        f"\n\nElection Results\n"
        f"-------------------------\n"
        f"Total Votes: {total}\n"
        f"-------------------------\n")
    print(election_results, end="")
    txt_file.write(election_results)

    for candidate in votes:
        total_votes = votes.get(candidate)
        votes_percent = float(total_votes) / float(total) * 100
        if total_votes > winning_count:
            winning_count = total_votes
            winning_candidate = candidate
        voter_output = f"{candidate}: {votes_percent:.3f}% ({total_votes})\n"
        print(voter_output)
        txt_file.write(voter_output)

    print(f"-------------------------\n")
    winning_candidate_summary = f"Winner: {winning_candidate}"
    print(winning_candidate_summary)
    print(f"-------------------------\n")
    txt_file.write(winning_candidate_summary)
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Assignment 1: Building a Better Contact Sheet
# In the lectures for this week you were shown how to make a contact sheet for digital photographers, and how you can take one image and create nine different variants based on the brightness of that image. In this assignment you are going to change the colors of the image, creating variations based on a single photo. There are many complex ways to change a photograph using variations, such as changing a black and white image to either "cool" variants, which have light purple and blues in them, or "warm" variants, which have touches of yellow and may look sepia toned.
In this assignment, you'll be just changing the image one color channel at a time # # Your assignment is to learn how to take the stub code provided in the lecture (cleaned up below), and generate the following output image: # # ![](./readonly/assignment1.png "") # # From the image you can see there are two parameters which are being varied for each sub-image. First, the rows are changed by color channel, where the top is the red channel, the middle is the green channel, and the bottom is the blue channel. Wait, why don't the colors look more red, green, and blue, in that order? Because the change you to be making is the ratio, or intensity, or that channel, in relationship to the other channels. We're going to use three different intensities, 0.1 (reduce the channel a lot), 0.5 (reduce the channel in half), and 0.9 (reduce the channel only a little bit). # # For instance, a pixel represented as (200, 100, 50) is a sort of burnt orange color. So the top row of changes would create three alternative pixels, varying the first channel (red). one at (20, 100, 50), one at (100, 100, 50), and one at (180, 100, 50). The next row would vary the second channel (blue), and would create pixels of color values (200, 10, 50), (200, 50, 50) and (200, 90, 50). # # Note: A font is included for your usage if you would like! It's located in the file `readonly/fanwood-webfont.ttf` # # Need some hints? Use them sparingly, see how much you can get done on your own first! The sample code given in the class has been cleaned up below, you might want to start from that. # ## HINT 1 # # Check out the `PIL.ImageDraw module` for helpful functions help(PIL.ImageDraw.Draw.text) # + import PIL from PIL import Image from PIL import ImageEnhance from PIL import ImageDraw from PIL import ImageFont # read image and convert to RGB image=Image.open("readonly/msi_recruitment.gif") image=image.convert('RGB') fnt = ImageFont.truetype('readonly/fanwood-webfont.ttf', 75) enhancer=ImageEnhance.Brightness(image) images=[] x=1 # this list, pixelData contains 3 tuples. First tuple is for R pixel at intensity int_lst. #second tuple is for G pixel at intensity in int_lst. #third tuple is for B pixel at intensity in int_lst. pixelData = [(20,100,180),(10,50,90),(10,30,50)] #these two lists for intensity and channel text draw below the images int_lst=[0.3,0.6,0.95,0.3,0.6,0.95,0.3,0.6,0.95] channel=[0,1,2] # this for loop changes images intensities int_lst respectively, #in group of 3 images for i in range(9): images.append(enhancer.enhance(int_lst[i])) # this function changes the images pixels as mentioned in pixelData list. This function takes image as input #and new pixel value from RGB pixels def newPix(image, pixData, pixel): for y in range(image.size[1]): for x in range(image.size[0]): if pixel=="R": pix= tuple(image.getpixel((x,y))) image.putpixel((x,y), (pixData, pix[1],pix[2])) if pixel=="G": pix= tuple(image.getpixel((x,y))) image.putpixel((x,y), (pix[0], pixData,pix[2])) if pixel=="B": pix= tuple(image.getpixel((x,y))) image.putpixel((x,y), (pix[0], pix[1],pixData)) # create a contact sheet from different brightnesses first_image=images[0] contact_sheet=PIL.Image.new(first_image.mode, (first_image.width*3,first_image.height*3+210)) x=0 y=0 n=0 x_1=0 y_1=0 d=ImageDraw.Draw(contact_sheet) for n in range(9): # Lets paste the current image and drawing into the contact sheet y1=y+450 #This is for R and for first 3 images with intensity 0.1, 0.5, 0.9 with R values 20 (image 1), #100 (image 2), 180 (image(3). 
if n<3: newPix(images[n], pixelData[0][n], "R") contact_sheet.paste(images[n], (x, y)) d.text(xy=(x,y1), text="channel " +str(channel[0])+" intensity "+ str(int_lst[n]), fill=None, font=fnt, anchor=None, spacing=0, align="left", direction=None, features=None, language=None, stroke_width=0, stroke_fill=None) #This is for G and for first 3 images with intensity 0.1, 0.5, 0.9 with G values 30 (image 4), #70 (image 5), 110 (image(6). if n>2 and n<6: newPix(images[n], pixelData[1][x_1], "G") contact_sheet.paste(images[n], (x, y) ) d.text(xy=(x,y1), text="channel " +str(channel[1])+" intensity "+ str(int_lst[x_1]), fill=None, font=fnt, anchor=None, spacing=0, align="left", direction=None, features=None, language=None, stroke_width=0, stroke_fill=None) x_1=x_1+1 #This is for B and for first 3 images with intensity 0.1, 0.5, 0.9 with G values 30 (image 7), #50 (image 8), 70 (image(9). if n>5: newPix(images[n], pixelData[2][y_1], "B") contact_sheet.paste(images[n], (x, y) ) d.text(xy=(x,y1), text="channel " +str(channel[2])+" intensity "+ str(int_lst[y_1]), fill=None, font=fnt, anchor=None, spacing=0, align="left", direction=None, features=None, language=None, stroke_width=0, stroke_fill=None) y_1=y_1+1 n=n+1 # Now we update our X position. If it is going to be the width of the image, then we set it to 0 # and update Y as well to point to the next "line" of the contact sheet. if x+first_image.width == contact_sheet.width: x=0 y=y+first_image.height+70 else: x=x+first_image.width # resize and display the contact sheet contact_sheet = contact_sheet.resize((int(contact_sheet.width/2),int(contact_sheet.height/2) )) display(contact_sheet) # - # ## HINT 2 # # Did you find the `text()` function of `PIL.ImageDraw`? # ## HINT 3 # # Have you seen the `PIL.ImageFont` module? Try loading the font with a size of 75 or so. # ## HINT 4 # These hints aren't really enough, we should probably generate some more. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Data Driven CNN StarNet # # This notebook builds a supervized learning model *StarNet* to predict stellar parameters from spectra, assuming we have access to a set of stellar parameters previously estimated. # # **Summary of the current implementation** # - Inputs: APOGEE DR14 spectra # - Labels: 3 stellar parameters resulting from the APOGEE pipeline # - Model: See the build_model routine below # # **TODO** # - could we add noise for spectra inputs also during training? # - add stellar abundances as parameters # - follow the recipe for errors on parameters (better than dropout?): https://tech.instacart.com/3-nips-papers-we-loved-befb39a75ec2 # - compare with the Bayesian NN from Henry: http://astronn.readthedocs.io/en/latest/neuralnets/apogee_bcnn.html # - compare normalization procedures: - ASPCAP, The Cannon, simplescaler and including in the CNN. 
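# As one more (hedged) hint, here is a minimal sketch of the two calls HINT 2 and HINT 3 point at, drawing a
# label onto a blank strip; it assumes the bundled font path from the assignment and is not the solution itself.

# +
from PIL import Image, ImageDraw, ImageFont

strip = Image.new('RGB', (800, 100), color=(0, 0, 0))          # a blank black strip to draw on
font = ImageFont.truetype('readonly/fanwood-webfont.ttf', 75)  # font path taken from the assignment text
drawer = ImageDraw.Draw(strip)
drawer.text((10, 10), 'channel 0 intensity 0.1', font=font, fill=(255, 255, 255))
display(strip)
# -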
# # + import numpy as np import random import h5py import time from keras.models import Sequential from keras.layers import Dense, Flatten, BatchNormalization, Dropout, Input from keras.layers.convolutional import Conv1D, MaxPooling1D, AveragePooling1D from keras.optimizers import Adam from keras.callbacks import EarlyStopping, ReduceLROnPlateau from keras_contrib.layers import InstanceNormalization from keras.layers import RepeatVector,Add from keras.layers import UpSampling2D, Reshape, Activation from keras.models import Model import keras.initializers # - # ## Hyper parameters for the model # # + # activation function used following every layer except for the output layers activation = 'relu' # model weight initializer initializer = 'he_normal' num_fluxes = 7514 num_labels = 3 # shape of input spectra that is fed into the input layer input_shape = (None,num_fluxes,1) # number of filters used in the convolutional layers num_filters = 8 # length of the filters in the convolutional layers filter_length = 3 # length of the maxpooling window pool_length = 4 # number of nodes in each of the hidden fully connected layers num_hidden = [256,128] # number of spectra fed into model at once during training batch_size = 64 # maximum number of interations for model training max_epochs = 30 # initial learning rate for optimization algorithm lr = 0.0001 # exponential decay rate for the 1st moment estimates for optimization algorithm beta_1 = 0.9 # exponential decay rate for the 2nd moment estimates for optimization algorithm beta_2 = 0.999 # a small constant for numerical stability for optimization algorithm optimizer_epsilon = 1e-08 early_stopping_min_delta = 0.0001 early_stopping_patience = 4 reduce_lr_factor = 0.5 reuce_lr_epsilon = 0.0009 reduce_lr_patience = 2 reduce_lr_min = 0.00008 loss_function = 'mean_squared_error' # - def build_model(input_spec): # input conv layer with filter length 1, no bias value x = Conv1D(kernel_initializer=keras.initializers.Constant(0.5), activation='linear', padding="same", filters=1, kernel_size=1,use_bias=False)(input_spec) # instance normalize to bring each spectrum to zero-mean and unit variance normed_spec = InstanceNormalization()(x) # upsample the spectra so that they can be easily added to the output of the conv blocks # this method just repeats the spectra n=num_filters times normed_spec = Reshape((num_fluxes,1,1))(normed_spec) repeated_spec = UpSampling2D(size=(1, num_filters))(normed_spec) # reshape spectra and repeated spectra to proper shape for 1D Conv layers repeated_spec = Reshape((num_fluxes,num_filters))(repeated_spec) x = Reshape((num_fluxes,1))(normed_spec) # Conv block w/ InstanceNorm w/ dropout x = Conv1D(kernel_initializer=initializer, padding="same", filters=num_filters, kernel_size=filter_length)(x) x = Activation('relu')(x) x = InstanceNormalization()(x) x = Conv1D(kernel_initializer=initializer, padding="same", filters=num_filters, kernel_size=filter_length)(x) x = Activation('relu')(x) x = InstanceNormalization()(x) x = Add()([x, repeated_spec]) x = Dropout(0.2)(x) # Conv block w/ InstanceNorm w/o dropout x = Conv1D(kernel_initializer=initializer, padding="same", filters=num_filters, kernel_size=filter_length)(x) x = Activation('relu')(x) x = InstanceNormalization()(x) x = Conv1D(kernel_initializer=initializer, padding="same", filters=num_filters, kernel_size=filter_length)(x) x = Activation('relu')(x) x = InstanceNormalization()(x) x = Add()([x, repeated_spec]) # Avg pooling w/ dropout (DO NOT APPLY DROPOUT BEFORE POOLING) x = 
AveragePooling1D(pool_size=pool_length)(x) x = Dropout(0.2)(x) x = Flatten()(x) # Fully connected blocks w/ BatchNorm x = Dense(num_hidden[0], kernel_initializer=initializer)(x) x = Activation('relu')(x) x = BatchNormalization()(x) x = Dropout(0.3)(x) x = Dense(num_hidden[1], kernel_initializer=initializer)(x) x = Activation('relu')(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # output nodes output_pred = Dense(units=num_labels, activation="linear")(x) return Model(input_spec,output_pred) # ## Build and compile model # + input_spec = Input(shape=(num_fluxes,1,)) model = build_model(input_spec) optimizer = Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=optimizer_epsilon, decay=0.0) early_stopping = EarlyStopping(monitor='val_loss', min_delta=early_stopping_min_delta, patience=early_stopping_patience, verbose=2, mode='min') reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.5, epsilon=reuce_lr_epsilon, patience=reduce_lr_patience, min_lr=reduce_lr_min, mode='min', verbose=2) model.compile(optimizer=optimizer, loss=loss_function) model.summary() # - # ## Load non-normalized spectra # + # hack to load pre-computed mean and std-dev for faster normalization mean_and_std = np.load('/data/stars/apogee/dr14/aspcap_labels_mean_and_std.npy') mean_labels = mean_and_std[0] std_labels = mean_and_std[1] num_labels = mean_and_std.shape[1] def normalize(lb): return (lb-mean_labels)/std_labels data_file = '/data/stars/apogee/dr14/starnet_training_data.h5' with h5py.File(data_file,"r") as F: spectra = F["spectrum"][:] labels = np.column_stack((F["TEFF"][:],F["LOGG"][:],F["FE_H"][:])) # Normalize labels labels = normalize(labels) print('Reference set includes '+str(len(spectra))+' individual visit spectra.') # define the number of wavelength bins (typically 7214) num_fluxes = spectra.shape[1] print('Each spectrum contains '+str(num_fluxes)+' wavelength bins') num_train=int(0.9*len(labels)) # set NaN values to zero indices_nan = np.where(np.isnan(spectra)) spectra[indices_nan]=0. # some visit spectra are just zero-vectors... remove these. spec_std = np.std(spectra,axis=1) spec_std = spec_std.reshape(spec_std.shape[0],1) indices = np.where(spec_std!=0.)[0] spectra = spectra[indices] labels = labels[indices] reference_data = np.column_stack((spectra,labels)) np.random.shuffle(reference_data) train_spectra = reference_data[0:num_train,0:num_fluxes] # Reshape spectra for convolutional layers train_spectra = train_spectra.reshape(train_spectra.shape[0], train_spectra.shape[1], 1) train_labels = reference_data[0:num_train,num_fluxes:] cv_spectra = reference_data[num_train:,0:num_fluxes] cv_spectra = cv_spectra.reshape(cv_spectra.shape[0], cv_spectra.shape[1], 1) cv_labels = reference_data[num_train:,num_fluxes:] reference_data=[] spectra=[] labels=[] print('Training set includes '+str(len(train_spectra))+' spectra and the cross-validation set includes '+str(len(cv_spectra))+' spectra') # + time1 = time.time() # Train model model.fit(train_spectra, train_labels, validation_data=(cv_spectra, cv_labels), epochs=max_epochs, batch_size=batch_size, verbose=2, callbacks=[reduce_lr,early_stopping]) time2 = time.time() print("\n" + str(time2-time1) + " seconds for training\n") # Save model in current directory model.save('StarNet_DR14.h5') # - # # Spectra Normalization # # Tentative to replace what stellar spectroscopist call normalization (supression of a global continuum over the whole spectrum) with an input normalization. 
#
# This test is simply a convolutional layer with one filter of length 1, followed by an InstanceNormalization layer.
#
# First build a model that only includes our input convolutional and instance normalization layers.
#
# **Note:** I use a constant initialization of 0.5 because if the kernel is < 0 then the normalized spectra are inverted. This probably doesn't matter for the NN, but it makes it a lot nicer to plot.
#

# +
def build_normalizer_model(input_spec):

    # input conv layer with filter length 1 to flatten the shape
    x = Conv1D(kernel_initializer=keras.initializers.Constant(0.5), activation='linear',
               padding="same", filters=1, kernel_size=1, use_bias=False)(input_spec)

    # instance normalize to bring each spectrum to zero-mean and unit variance
    normed_spec = InstanceNormalization()(x)

    return Model(input_spec, normed_spec)

input_spec = Input(shape=(num_fluxes,1,))
model = build_normalizer_model(input_spec)
model.summary()

normalized_cv = model.predict(cv_spectra)
# -

# Plot the input spectra, then the normalized spectra. I will force the second of the two plots to have the same y-axis range to ensure that the range for our normalized spectra is similar from one spectrum to another.

# +
import matplotlib.pyplot as plt
# %matplotlib inline

for i in range(10):
    fig, axes = plt.subplots(2,1,figsize=(70, 10))
    axes[0].plot(cv_spectra[i,:,0],c='b')
    axes[1].plot(normalized_cv[i,:,0],c='r')
    axes[1].set_ylim((-4,4))
    plt.show()
# -

# We may want to do some pre-processing clipping of the spectra to eliminate the outliers.

# # Stacking
#
# Is the stacking method used on spectra to add them to the output from conv blocks correct?
#
# First extend the previous model to include the upsample layer.

# +
def build_upsample_model(input_spec):

    # input conv layer with filter length 1, no bias value
    x = Conv1D(kernel_initializer=keras.initializers.Constant(0.5), activation='linear',
               padding="same", filters=1, kernel_size=1, use_bias=False)(input_spec)

    # instance normalize to bring each spectrum to zero-mean and unit variance
    normed_spec = InstanceNormalization()(x)

    # upsample the spectra so that they can be easily added to the output of the conv layers
    # this method just repeats the spectra n=num_filters times
    normed_spec = Reshape((num_fluxes,1,1))(normed_spec)
    repeated_spec = UpSampling2D(size=(1, num_filters))(normed_spec)
    repeated_spec = Reshape((num_fluxes,num_filters))(repeated_spec)

    return Model(input_spec, repeated_spec)

input_spec = Input(shape=(num_fluxes,1,))
model = build_upsample_model(input_spec)
model.summary()

upsampled_cv = model.predict(cv_spectra[0:100])
# -

# Plot the input spectra, then the normalized upsampled spectra

for i in range(5):
    fig, axes = plt.subplots(9,1,figsize=(70, 10))
    axes[0].plot(cv_spectra[i,:,0],c='b')
    for ii in range(8):
        axes[ii+1].plot(upsampled_cv[i,:,ii],c='r')
        axes[ii+1].set_ylim((-4,4))
    plt.show()
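# Following up on the clipping note above: a minimal, hedged sketch of clipping extreme flux values before the
# normalization layer (the percentile bounds here are arbitrary placeholders, not tuned values).

# +
lower, upper = np.percentile(cv_spectra, [0.1, 99.9])  # assumed clip bounds
clipped_cv = np.clip(cv_spectra, lower, upper)
print(cv_spectra.min(), cv_spectra.max(), '->', clipped_cv.min(), clipped_cv.max())
# -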
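# The pre-processing clipping mentioned under "Spectra Normalization" could look like the hypothetical sketch below, reusing the [-4, 4] range already used as the plot limits; the exact range would need to be tuned.

# +
# hypothetical outlier clipping of the instance-normalized spectra
clipped_cv = np.clip(normalized_cv, -4., 4.)
print('Fraction of flux values affected by clipping: '
      + str(np.mean((normalized_cv < -4.) | (normalized_cv > 4.))))
# -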